In climate forecasting, the situation is more equivocal: the theory about the greenhouse effect is strong, which supports more complicated models. However, temperature data is very noisy, which argues against them. Which consideration wins out? We can address this question empirically, by evaluating the success and failure of different predictive approaches in climate science. What matters most, as always, is how well the predictions do in the real world.
I would urge caution against reducing the forecasting process to a series of bumper-sticker slogans. Heuristics like Occam’s razor (“other things being equal, a simpler explanation is better than a more complex one”50) sound sexy, but they are hard to apply. We have seen cases, as in the SIR models used to forecast disease outbreaks, where the assumptions of a model are simple and elegant—but much too naïve to produce very skillful forecasts. We have also seen cases, as in earthquake prediction, where unbelievably convoluted forecasting schemes that look great in the software package fail miserably in practice.
An admonition like “The more complex you make the model the worse the forecast gets” is equivalent to saying “Never add too much salt to the recipe.” How much complexity—how much salt—did you begin with? If you want to get good at forecasting, you’ll need to immerse yourself in the craft and trust your own taste buds.
Knowing the limitations of forecasting is half the battle, and on that score the climate forecasters do reasonably well. Climate scientists are keenly aware of uncertainty: variations on the term uncertain or uncertainty were used 159 times in just one of the three IPCC 1990 reports.51 And there is a whole nomenclature that the IPCC authors have developed to convey how much agreement or certainty there is about a finding. For instance, the phrase “likely” taken alone is meant to imply at least a 66 percent chance of a prediction occurring when it appears in an IPCC report, while the phrase “virtually certain” implies 99 percent confidence or more.
Once the matter created by the Big Bang cooled sufficiently, the universe consisted of a vast cloud of hydrogen atoms, each consisting of a single proton surrounded by a single electron—along with a smattering of slightly heavier elements, including helium (with two protons) and lithium (with three). The universe at that time was about the most boring place imaginable. It consisted of nothing but disembodied atoms drifting through space and the radiation left over from the Big Bang. Yet the potential for infinite creativity was already present in the most unassuming of places: the asymmetries of the hydrogen atom. A hydrogen atom is not a featureless round ball that just bounces off other hydrogen atoms. It has an inherent duality, a positive proton and a negative electron, two polar opposites. Both the proton and the proton-electron system can acquire energy, which causes them to assume different configurations. All of the creativity we observe in the world arises ultimately from the potential shapes inherent in hydrogen atoms.
For if as scientists we seek simplicity, then obviously we try the simplest surviving theory first, and retreat from it only when it proves false. Not this course, but any other, requires explanation. If you want to go somewhere quickly, and several alternate routes are equally likely to be open, no one asks why you take the shortest. The simplest theory is to be chosen not because it is the most likely to be true but because it is scientifically the most rewarding among equally likely alternatives. We aim at simplicity and hope for truth.
I think that considerable progress can be made in the analysis of the operations of nature by the scholar who reduces rather complicated phenomena to their proximate causes and primitive forces, even though the causes of those causes have not yet been detected.