A clever hack that sometimes saves lots of time: when a computation has to be updated, hang on to certain internal parameters that go into calculating the answer — and then, rather than recomputing the whole shebang, as new information comes in you can simply revise those parameters and quickly get the new result. This is the key concept behind Kalman Filtering, the famous trick that solves many real-time aerospace control problems (and loads of other signal processing challenges).
A couple of examples may shed light. For starters, suppose you need to figure out the average over the past 365 days of something like a temperature or a stock's price. You add up 365 numbers and divide by 365; boring but feasible. To find the answer tomorrow, however, instead of adding up the latest 365 values you can save work by taking the previous total, subtracting out the oldest value (the one that just fell out of the 365-day window), and adding in the newest one before doing the division. (Ignore leap-year issues!)
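
A minimal sketch of that running-window trick in Python (the names here, like updated_average and WINDOW, are just illustrative placeholders, not anything from the original note):

    WINDOW = 365  # number of days in the moving average

    def updated_average(total, values, today):
        """Slide the window forward one day: drop the oldest value, add the newest.

        total  : sum of values[today - WINDOW + 1 .. today]
        values : the full series, with tomorrow's value already at values[today + 1]
        Returns the new window total and the new average.
        """
        new_total = total - values[today - WINDOW + 1] + values[today + 1]
        return new_total, new_total / WINDOW

One subtraction and one addition per day, instead of 365 additions, and the answer is the same.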
Another, more subtle demonstration of the same kludge: "continued fractions" are infinite quotients of the form 1+1/(2+1/(2+1/(2+ ...))) — fractions that crawl down forever. (They look prettier when written out as a big fraction with ever-shrinking horizontal bars, but you get the picture.) Lots of important numbers have brilliantly simple continued fraction representations. The one above, for instance, is just the square root of two ( = 1.41421356...). The base of the natural logarithms, e ( = 2.718281828...), has the continued fraction series 2+1/(1+1/(2+1/(1+1/(1+1/(4+1/(1+1/(1+1/(6+ ...)))))))). Continued fractions are superb for finding the best ratios to represent complicated numbers, like 22/7 for pi (or better yet, 355/113). They've got lots of other important uses too. (Enough motivation — cut to the chase!)
But actually evaluating a continued fraction looks ugly. It seems to require picking some point in the nested parenthetical hierarchy to cut off the series, and then crawling up from there, converting denominators, taking reciprocals, adding terms, etc. Plentch! Doing it bottom-up is not only slow; it also means that you have to recompute the whole mess just to add a single new term. But there's an alternative: the subbookkeeping trick works magic to let you evaluate top-down, as far as you like. You just keep track of a couple of subtotals (the numerators and denominators of the most recent approximating fractions) that combine to make the fraction at each stage.
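
Here's a rough Python sketch of that top-down bookkeeping, assuming the standard convergent recurrence (new numerator = term * previous numerator + the one before, likewise for denominators); the function name and the sample terms are mine, not from the note:

    def convergents(terms):
        """Evaluate a continued fraction [a0; a1, a2, ...] top-down.

        Carries a couple of subtotals: the last two convergent numerators (h)
        and denominators (k). Each new term updates them in place, with no
        need to rework anything that came before.
        """
        h_prev, h = 1, terms[0]   # numerators of the last two convergents
        k_prev, k = 0, 1          # denominators of the last two convergents
        yield h, k
        for a in terms[1:]:
            h, h_prev = a * h + h_prev, h
            k, k_prev = a * k + k_prev, k
            yield h, k

    # sqrt(2) = [1; 2, 2, 2, ...]     -->  1/1, 3/2, 7/5, 17/12, 41/29, ...
    # pi starts [3; 7, 15, 1, 292, ...] -->  3/1, 22/7, 333/106, 355/113, ...
    for num, den in convergents([3, 7, 15, 1, 292]):
        print(num, "/", den, "=", num / den)

Feed it pi's first few continued fraction terms and the best rational approximations mentioned above, 22/7 and 355/113, drop out one per step, each obtained from the previous subtotals rather than by starting over from the bottom.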
There's a generic moral to this story (and it's not just the fact that "subbookkeeping" has more double letters in a row than any other common English word!). When a job gets unæsthetic and repetitive, look for a way to factor it into sub-jobs. Then seek covert components — hidden variables — that can carry along the information you need to axe the redundant effort. Sometimes, given luck and cleverness, there's a pleasant surprise.
Wednesday, June 21, 2000 at 16:34:51 (EDT) = 2000-06-21
TopicScience - TopicProgramming
Question: how does this SubBookKeeping work? For example, how do you compute the rational approximations for pi from this?
(correlates: MagneticStuds, CastingShadows, OnIrreducibility, ...)