The speed of light in a vacuum is one of the fundamental constants of the universe. It is the c in Einstein’s famous equation E = mc². A lot of physics hinges on the notion that c does not vary.
But as recently as the 1930s and 40s, some physicists thought that the speed of light might be slowing down at the rate of 4 kilometers per second per year, or even more bizarrely, that c might be oscillating up and down in a cycle with a period of 40 years.
Overconfidence in the accuracy of measurements proved to be the real problem. The psychological bias is explored in fascinating detail in “Assessing uncertainty in physical constants,” a classic paper by Max Henrion and Baruch Fischhoff published in the American Journal of Physics in 1986.
From 1876 to 1902, measurements overestimated c by about 70 kilometers per second. From 1905 to 1950, they underestimated it by about 15 kilometers per second. In 1941, a physicist named Raymond Birge tried to make sense of the mess by adjusting for systematic errors in measurement experiments and concluded that “after a long and, at times hectic history, the value for c has at last settled down into fairly satisfactory ‘steady state.’” But, as Henrion and Fischhoff point out, Birge’s confidence proved premature:
Just nine years later, the recommended value had shifted by 2.4 of his 1941 standard deviations. This 1950 value, too, was soon supplanted, by a value different by over 2 of its standard deviations. Once again, shifting estimates prompted the suggestion that c might be changing, this time increasing.
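For concreteness, the arithmetic behind a phrase like “shifted by 2.4 of his standard deviations” is just the gap between the old and new recommended values, expressed in units of the old estimate’s quoted standard deviation. Here is a minimal Python sketch; the numbers are placeholders for illustration, not the historical figures.

```python
# A minimal sketch (not the authors' calculation) of expressing a shift in
# "standard deviation" units: the gap between an old recommended value and a
# later one, divided by the old estimate's quoted standard deviation.

def shift_in_sigmas(old_value, old_sigma, new_value):
    """How many of the old estimate's standard deviations separate it from the new value."""
    return abs(new_value - old_value) / old_sigma

# Placeholder figures for illustration only (km/s), not the historical values.
old_c, old_sigma = 299_776.0, 4.0
new_c = 299_785.6

print(f"Shift: {shift_in_sigmas(old_c, old_sigma, new_c):.1f} standard deviations")
# -> Shift: 2.4 standard deviations
```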
Henrion and Fischhoff diagnosed several ways that decisions by scientists could magnify their overconfidence bias. The one that really stands out for me is the human tendency to embrace confirming evidence while brushing aside disconfirming evidence.
“Unfortunately, people have a considerable ability to ‘explain away’ events that are inconsistent with their prior beliefs,” they note. Mendel did it with his pea breeding experiments that began to unlock the rules of heredity. Millikan did it with his oil-drop experiments that measured the charge of the electron.
The figure above (redrawn by Morgan) shows measurements of the speed of light going back to 1860. For more than 20 years, physicists were so far off the mark that their confidence intervals, or estimates of measurement error, did not even include the eventually accepted value (indicated by a horizontal line in the figure).
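That coverage check is easy to make concrete. The sketch below uses made-up illustrative entries, not the actual series from the figure, and simply asks whether each reported error bar reaches the modern value of c (299,792.458 km/s, exact by definition since 1983).

```python
# A rough sketch of the coverage check described above: for each reported value
# and quoted uncertainty, does the error bar even include the modern value of c?
# The entries below are made up for illustration, not the data from the figure.

C_ACCEPTED = 299_792.458  # km/s; exact by definition since 1983

measurements = [
    # (year, reported c in km/s, quoted uncertainty in km/s) -- illustrative only
    (1880, 299_860.0, 30.0),
    (1906, 299_781.0, 15.0),
    (1935, 299_774.0, 2.0),
]

for year, value, sigma in measurements:
    low, high = value - sigma, value + sigma
    covers = low <= C_ACCEPTED <= high
    print(f"{year}: {value} ± {sigma} km/s -> includes accepted value: {covers}")
```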
You can also see how the light speed measurements show a bandwagon effect. They cluster around particular values at different periods in time, suggesting that scientists massaged their data so as not to wander too far from the results reported by their peers.
Sources:
Assessing uncertainty in physical constants, by Max Henrion and Baruch Fischhoff, Am J Phys (1986)
Use (and abuse) of expert elicitation in support of decision making for public policy, by M. Granger Morgan, PNAS (2014)