Some might say that I'm full of heuristics — rules of thumb — especially for distance running, since it's a subject that has so many endearingly quantitative parameters to play with. Recently, while sitting at the back of a large room during a long, boring meeting, I filled a sheet of paper with calculations and came up with a new formula. It's a corollary to the old rule:
|Every minute too fast that you go during the first half of a race costs you two minutes in the second half.|
The new guideline is more general:
|Your optimal pace is one-third of a standard deviation less than your actual pace.|
These two formulæ are mathematically equivalent in some simple cases. Both say that it's best to go at a steady speed. (The "standard deviation" is a measure of the plus-or-minus fluctuation in one's pace; it is zero for an absolutely constant velocity.)
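The equivalence is easy to check numerically in the simplest case: a race of two halves run at two different paces. The numbers below are my own made-up example (a hypothetical 9:00/mile runner who "banks" two minutes in the first half), not data from any real race:

```python
# Check that the old rule and the new rule agree for a two-half race.
# Assumed scenario: optimal even pace of 9:00/mile over 26.2 miles,
# with the first half run 2 minutes too fast.

distance = 26.2           # miles
even_pace = 9 * 60        # optimal steady pace, seconds/mile
optimal_time = distance * even_pace

too_fast = 120            # seconds "banked" in the first half

# Old rule: every second too fast in half 1 costs two seconds in half 2.
first_half = optimal_time / 2 - too_fast
second_half = optimal_time / 2 + 2 * too_fast
actual_time = first_half + second_half    # = optimal_time + too_fast

# New rule: optimal pace = actual pace - sigma/3, where sigma is the
# standard deviation of the two half-paces (seconds/mile).
half_dist = distance / 2
pace1 = first_half / half_dist
pace2 = second_half / half_dist
actual_pace = actual_time / distance
sigma = abs(pace2 - pace1) / 2            # population std dev of two values
predicted_best = (actual_pace - sigma / 3) * distance

print(predicted_best)     # matches optimal_time exactly in this case
```

In this two-half case the agreement is exact, not approximate: going x seconds too fast makes the actual time x seconds worse (old rule), and it also makes the pace spread between halves exactly 3x/d seconds/mile, so subtracting σ/3 from the average pace recovers the same optimum (new rule).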
How do these rules compare when applied to actual race data? Take the latest Marine Corps Marathon (29 Oct 2006). I didn't participate in the race, but I went downtown to photograph friends there. Here's a tabulation based on their outcomes. The two "half" columns are the first and second halves of the marathon, "sigma" (σ) is a rough estimate of the standard deviation of the pace (in seconds/mile, based on crude split data from miles 5, 10, 15, and 20), and the "actual" finishing time is compared with the theoretical "best" possible results using the old rule and the new one above.
|runner||first half||second half||σ||actual||best (old)||best (new)|
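For the record, here's a sketch of how I estimate σ and the "new" best time from crude 5-mile splits. The cumulative split times and the finishing time below are invented for illustration, not any real runner's numbers from the table:

```python
# Rough sigma estimate from cumulative splits at miles 5, 10, 15, 20,
# then the "new rule" prediction: best = (actual pace - sigma/3) * distance.
# All numbers here are hypothetical.
from statistics import pstdev

miles = [5, 10, 15, 20]
splits = [45 * 60, 91 * 60, 139 * 60, 189 * 60]   # cumulative seconds

# Convert cumulative splits into pace (seconds/mile) per 5-mile segment.
paces = []
prev_mile, prev_time = 0, 0
for mile, t in zip(miles, splits):
    paces.append((t - prev_time) / (mile - prev_mile))
    prev_mile, prev_time = mile, t

sigma = pstdev(paces)        # crude std deviation of pace, seconds/mile

finish = 4 * 3600            # hypothetical actual finishing time, seconds
distance = 26.2
actual_pace = finish / distance
best_new = (actual_pace - sigma / 3) * distance   # "new rule" prediction

print(round(sigma, 1), round(finish - best_new))  # sigma and seconds saved
```

A runner slowing steadily through those splits (9:00, 9:12, 9:36, 10:00 per mile) comes out a few minutes faster under an even-pacing strategy, which is the flavor of result in the table above.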
The agreement between the "old" and "new" rules is amazingly close. I'll run some more detailed tests using my mile-by-mile splits later to see if the correlation persists, or if it's coincidental.
Of course, the fundamental question remains: on a given day, could a person really have finished several minutes faster by using an even pacing strategy?
(cf. TwoGreatSecrets (9 Nov 2001), NeedForSpeed (10 Aug 2002), LogbookTyrannicide (17 Oct 2002), HandicapJogging (8 Oct 2003), DecelerationParameter (28 Dec 2003), RootMeanSquareDance (24 Apr 2004), ... )