Based on a true story: the dashboard clock on your car shows only hours and minutes (not seconds). You know the clock is accurate only to within plus or minus a few minutes; likewise, your digital watch is set to within a few minutes of true time. Assume the errors of the car's clock and your watch are independent and uniformly distributed within that range, and that both run at the same rate, so the difference between them is a constant.

At a random time you compare the car clock with your watch, and at that moment you see that they display the same time in hours and minutes (the HH:MM displays are equal; seconds are not shown).

**Question: after that single observation, what is the likely difference in seconds between the clock and the watch?**

This actually happened to Robin Z and me a few months ago, and it provoked a fascinating debate between us (and within myself). Clearly the time difference in seconds — call it Δ ("delta") — could be anything from a hair above -60 seconds to a hair below +60 seconds for two clocks that show the same HH:MM values at a given instant. How likely are the clocks to differ by less than a second? By 29-30 seconds? By 59-60 seconds? By an arbitrary number of seconds? Remember, you can't see the seconds on the car clock display, only the hours and minutes, and before making the comparison you only knew that both timepieces were in error by independent amounts within a few minutes of the true time.

One of us argued that, after seeing matching HH:MM displays, the clock and the watch are equally likely to have any offset within the allowed range: -60 < Δ < 60 seconds. That's all that a single observation can tell you, logically. The other of us felt strongly that the odds favor a smaller Δ, since, for example, if the difference between the unseen HH:MM:SS displays is 59 seconds then only one observation in 60 will show matching HH:MM values, and 59/60ths of the time the clock and the watch won't agree. Contrariwise, if the unseen difference between clock and watch is only 1 second, then 59 times out of 60 they will match. This position suggests that the probability P of any magnitude |Δ| between 0 and 60 should be peaked at Δ = 0 and decrease linearly to vanish (P = 0) when |Δ| = 60 seconds.
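The "peaked" argument lends itself to a quick Monte Carlo check. Here is a minimal sketch, assuming each timepiece's error is uniform within ±3 minutes of true time (the `ERR` constant, trial count, and function name are illustrative choices, not part of the original story): draw the two errors, pick a random observation instant, keep only the trials where the two HH:MM displays happen to agree, and look at how |Δ| is distributed among those.

```python
import random

ERR = 180.0  # assumed maximum error of each timepiece, in seconds

def matching_offsets(trials=200_000):
    """Simulate random comparisons; return |delta| in seconds for every
    trial in which the two HH:MM displays happened to agree."""
    deltas = []
    for _ in range(trials):
        e1 = random.uniform(-ERR, ERR)  # car-clock error
        e2 = random.uniform(-ERR, ERR)  # watch error
        t = random.uniform(0, 3600)     # random observation instant
        # the displays match iff both displayed times fall in the same minute
        if (t + e1) // 60 == (t + e2) // 60:
            deltas.append(abs(e1 - e2))
    return deltas

deltas = matching_offsets()
near = sum(d < 10 for d in deltas) / len(deltas)  # fraction with |delta| < 10 s
far = sum(d > 50 for d in deltas) / len(deltas)   # fraction with |delta| > 50 s
print(f"P(|delta| < 10 | match) ~ {near:.2f}")
print(f"P(|delta| > 50 | match) ~ {far:.2f}")
```

In this simulation small offsets turn out to be far more common among matching observations than large ones, roughly in the linearly decreasing pattern the second argument predicts — though whether that simulation settles the *inferential* question is exactly what the debate is about.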

And even more amusingly: every twelve hours over the following two days my belief flipped back and forth between the two positions! The arguments on each side seem plausible.

So what do you think? *To be continued ...*

*^z* - 2013-12-04