“… climate scientists cannot conduct controlled experiments on the Earth…. Instead they use … Global Climate Models, or GCMs–mathematical representations of the Earth that run on computers.”
“Processes operating at smaller scales [than 100 km], such as clouds, cannot be represented explicitly in the models but must instead be parameterized.”
“Parameterizations … [are] ad hoc constructions that are tuned so the model produces a realistic present-day climate. Consequently, parameterizations are one of the largest sources of uncertainty in GCMs.”
– Andrew Dessler and Edward Parson, The Science and Politics of Global Climate Change: A Guide to the Debate (Cambridge University Press, 2006), pp. 19–20.
The above explanation by climate scientist Andrew Dessler (co-author Parson is a lawyer and public-policy specialist) invites the question: are climate models ready for prime time?
Dessler goes on to say that “models can be tested by examining how well they reproduce the Earth’s actual climate.” And: “Considered in total, current models do a remarkable job of reproducing observations, lending confidence to their prediction” (p. 20).
Really?
Necessarily incomplete (“parameterized”) models with uncertain physical equations can be “right” for the wrong reasons, not only wrong for the right ones.
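To make “parameterized” concrete, here is a minimal sketch (a toy illustration only, not any actual GCM scheme): a sub-grid cloud fraction, which a ~100 km grid cannot resolve, is diagnosed from the resolved grid-mean relative humidity through an ad hoc, tunable threshold. The function, the threshold value, and the linear ramp are all assumptions for illustration.

```python
# Toy illustration (not any GCM's actual scheme): diagnose sub-grid cloud
# fraction, which the ~100 km grid cannot resolve, from the resolved
# grid-mean relative humidity using an ad hoc tunable threshold.

def cloud_fraction(rel_humidity: float, rh_crit: float = 0.8) -> float:
    """Return an assumed sub-grid cloud cover for one grid cell.

    rh_crit is the tunable ("ad hoc") parameter: cells wetter than this
    threshold are treated as partly cloudy even though no individual
    cloud is ever represented.
    """
    if rel_humidity <= rh_crit:
        return 0.0
    # Cloud cover ramps linearly from 0 at rh_crit to 1 at saturation.
    return min(1.0, (rel_humidity - rh_crit) / (1.0 - rh_crit))

if __name__ == "__main__":
    for rh in (0.5, 0.8, 0.9, 1.0):
        print(f"grid-mean RH = {rh:.2f} -> assumed cloud fraction = {cloud_fraction(rh):.2f}")
```

The cloud itself never appears in such a scheme; only a tunable stand-in for its aggregate effect does, which is exactly why the threshold (and parameters like it) must be tuned against the observed climate.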
There is a burden of history behind alarmist models. The curse of Malthusianism, beginning with (at least) the 1972 Club of Rome/MIT “Limits to Growth” computer model, is well documented. And today, the debate over the utility of climate models (“better than nothing” may not be better if the models convey false knowledge) rages in the popular press and among wonks.
The Economist
“The Economist, which usually just parrots the party line, includes a pretty good article explaining the basics of computer climate modeling, and especially their large limitations and defects,” noted Steven Hayward at Power Line (September 23). “Although the magazine tries hard not to sound openly skeptical, it is hard for any unbiased reader to finish this piece and think ‘the science is settled’.”
The article, “Predicting the Climate Future is Riddled with Uncertainty” (September 2019), includes these statements (reproduced by Hayward):
[Climate modeling] is a complicated process. A model’s code has to represent everything from the laws of thermodynamics to the intricacies of how air molecules interact with one another. Running it means performing quadrillions of mathematical operations a second—hence the need for supercomputers.
And using it to make predictions means doing this thousands of times, with slightly different inputs on each run, to get a sense of which outcomes are likely, which unlikely but possible, and which implausible in the extreme.
Even so, such models are crude. Millions of grid cells might sound a lot, but it means that an individual cell’s area, seen from above, is about 10,000 square kilometres, while an air or ocean cell may have a volume of as much as 100,000 km³. Treating these enormous areas and volumes as points misses much detail.
Clouds, for instance, present a particular challenge to modellers. Depending on how they form and where, they can either warm or cool the climate. But a cloud is far smaller than even the smallest grid-cells, so its individual effect cannot be captured. The same is true of regional effects caused by things like topographic features or islands.
Building models is also made hard by lack of knowledge about the ways that carbon—the central atom in molecules of carbon dioxide and methane, the main heat-capturing greenhouse gases other than water vapour—moves through the environment.
Understanding Earth’s carbon cycles is crucial to understanding climate change. But much of that element’s movement is facilitated by living organisms, and these are even more difficult to understand than physical processes.
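As a rough consistency check on the grid figures The Economist quotes (the surface area is a standard figure; the vertical-level count is an assumed round number, not taken from any particular model):

```python
# Back-of-the-envelope check of the grid figures quoted above.
EARTH_SURFACE_KM2 = 5.1e8   # Earth's surface area, ~510 million km^2
CELL_AREA_KM2 = 1.0e4       # ~100 km x 100 km per cell, as in the article
VERTICAL_LEVELS = 60        # assumed; typical models use a few dozen levels

columns = EARTH_SURFACE_KM2 / CELL_AREA_KM2
cells = columns * VERTICAL_LEVELS
print(f"surface columns: {columns:,.0f}")   # ~51,000
print(f"3-D grid cells:  {cells:,.0f}")     # a few million, as the article says

# A cell covering 10,000 km^2 that spans a ~10 km thick layer holds
# 10,000 * 10 = 100,000 km^3, matching the largest cell volume cited.
```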
“True knowledge results in effective action” is one of my favorite quotations. The better-than-nothing view of models misses this point.
Mototaka Nakamura (“Confessions of a Climate Scientist”)
The modeling quandary was also explained by Mototaka Nakamura in a Japanese-language booklet on “the sorry state of climate science.” An expert on climate modeling and the inputs driving the outputs, he is someone to listen to.
“These models completely lack some critically important climate processes and feedbacks,” he states, “and represent some other critically important climate processes and feedbacks in grossly distorted manners to the extent that makes these models totally useless for any meaningful climate prediction.”
Specific problems include unknowns regarding large- and small-scale ocean dynamics; aerosol-generating clouds; ice-albedo (reflectivity) feedbacks; and water vapor causality.
As a result, model parameters are “tuned” (fudged) to align with what is believed to be causal reality. “The models are ‘tuned’ by tinkering around with values of various parameters until the best compromise is obtained,” Nakamura admits.
I used to do it myself. It is a necessary and unavoidable procedure and not a problem so long as the user is aware of its ramifications and is honest about it. But it is a serious and fatal flaw if it is used for climate forecasting/prediction purposes.
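A minimal sketch of what that tuning looks like in practice, using a toy zero-dimensional energy-balance relation rather than anything from Nakamura’s work or an actual GCM; the forcing, the “observed” warming target, and the parameter range are assumptions for illustration:

```python
# Illustrative only: sweep an uncertain feedback parameter in a toy
# energy-balance relation (dT = F / lambda) until the simulated warming
# best matches an "observed" target -- i.e., "tinkering around with values
# of various parameters until the best compromise is obtained."
import numpy as np

OBSERVED_WARMING_K = 1.0   # assumed target warming over some period
FORCING_WM2 = 2.0          # assumed radiative forcing over the same period

def simulated_warming(feedback_wm2_per_k: float) -> float:
    """Equilibrium warming of the toy model for a given feedback parameter."""
    return FORCING_WM2 / feedback_wm2_per_k

candidates = np.linspace(0.5, 4.0, 351)   # feedback values to try
errors = [abs(simulated_warming(lam) - OBSERVED_WARMING_K) for lam in candidates]
best = candidates[int(np.argmin(errors))]

print(f"tuned feedback parameter: {best:.2f} W/m^2/K")
print(f"warming simulated with it: {simulated_warming(best):.2f} K")
```

The catch Nakamura points to is visible even here: a different assumed forcing (or ocean heat uptake, as North notes below) would yield a different “best” feedback value while matching the same record, so a good fit after tuning says little about whether the underlying physics is right.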
Gerald North
The above analyses are in keeping with the views (really warnings) about climate modeling made two decades ago by Gerald North, certainly a distinguished climate scientist, during his consulting era with Enron Corp. Some of his quotations follow:
“We do not know much about modeling climate. It is as though we are modeling a human being. Models are in position at last to tell us the creature has two arms and two legs, but we are being asked to cure cancer.” [Gerald North (Texas A&M) to Rob Bradley (Enron), November 12, 1999]
“[Model results] could also be sociological: getting the socially acceptable answer.” [Gerald North (Texas A&M) to Rob Bradley (Enron), June 20, 1998]
“There is a good reason for a lack of consensus on the science. It is simply too early. The problem is difficult, and there are pitifully few ways to test climate models.” [Gerald North (Texas A&M) to Rob Bradley (Enron), July 13, 1998]
“One has to fill in what goes on between 5 km and the surface. The standard way is through atmospheric models. I cannot make a better excuse.” [Gerald North (Texas A&M) to Rob Bradley (Enron), October 2, 1998]
“The ocean lag effect can always be used to explain the ‘underwarming’….
The different models couple to the oceans differently. There is quite a bit of slack here (undetermined fudge factors). If a model is too sensitive, one can just couple in a little more ocean to make it agree with the record. This is why models with different sensitivities all seem to mock the record about equally well. (Modelers would be insulted by my explanation, but I think it is correct.)” [Gerald North (Texas A&M) to Rob Bradley (Enron), August 17, 1998]
Conclusion
Climate science is not settled as long as the physical processes behind climate change, not to mention climate models themselves, are in open debate. Models are no better than their assumptions. And climate models cannot be meaningfully tested; as North noted, there are pitifully few ways to do so. Complexity cannot be modeled away when the results cannot be known to be right or wrong.
Assuming that models could one day reach a state of precision (a big assumption) does not rescue current modeling efforts. The burden of proof is on the alarmist models, not on the actual climate.
Thanks for updating and making the historical connections on the climate modeling issue. Another interesting discussion of how the models are limited was provided in 2015 by Dr. R.G. Brown of Duke University in an extended comment comparing weather and climate models. A lot of detailed information was included, leading to some conclusions:
“Even with all of the care I describe above and then some, weather models computed at close to the limits of our ability to compute (and get a decent answer faster than nature “computes” it by making it actually happen) track the weather accurately for a comparatively short time — days — before small variations between the heavily modeled, heavily under-sampled model initial conditions and the actual initial state of the weather plus errors in the computation due to many things — discrete arithmetic, the finite grid size, errors in the implementation of the climate dynamics at the grid resolution used (which have to be approximated in various ways to “mimic” the neglected internal smaller scaled dynamics that they cannot afford to compute) cause the models to systematically diverge from the actual weather.”
“Then here is the interesting point. Climate models are just weather models run in exactly this way, with one exception. Since they know that the model will produce results indistinguishable from ordinary static statistics two weeks in, they don’t bother initializing them all that carefully. The idea is that no matter how they initialize them, after running them out to weeks or months the bundle of trajectories they produce from small perturbations will statistically “converge” at any given time to what is supposed to be the long time statistical average, which is what they are trying to predict.”
“This assumption is itself dubious, as neither the weather nor the climate is stationary and it is most definitely non-Markovian so that the neglected details in the initial state do matter in the evolution of both, and there is also no theorem of which I am aware that states that the average or statistical distribution of a bundle of trajectories generated from a nonlinear chaotic model of this sort will in even the medium run be an accurate representation of the nonstationary statistical distribution of possible future climates. But it’s the only game in town, so they give it a try.”
“So far, it looks like (not unlike the circumstance with weather) climate models can sometimes track the climate for a decade or so before they diverge from it.”
“The IPCC then takes the results of many GCMs and compounds all errors by super-averaging their results (which has the effect of hiding the fluctuation problem from inquiring eyes), ignoring the fact that some models in particular truly suck in all respects at predicting the climate and that others do much better, because the ones that do better predict less long run warming and that isn’t the message they want to convey to policy makers, and transform its envelope into a completely unjustifiable assertion of “statistical confidence”.”
“By this standard, “the set of models in CMIP5″ has long since failed. There isn’t the slightest doubt that their collective prediction is statistical nonsense. It remains to be seen if individual models in the collection deserve to be kept in the running as not failed yet, because even applying the Bonferroni correction to the “ensemble” of CMIP5 is not good statistical practice. Each model should really be evaluated on its own merits as one doesn’t expect the “mean” or “distribution” of individual model results to have any meaning in statistics (note that this is NOT like perturbing the initial conditions of ONE model, which is a form of Monte Carlo statistical sampling and is something that has some actual meaning).”
Full text is in my post https://rclutz.wordpress.com/2015/06/11/climate-models-explained/
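To illustrate the divergence Dr. Brown describes, here is a minimal sketch using the classic Lorenz-63 system, a standard stand-in for chaotic dynamics rather than an actual weather or climate model; the step size, run length, perturbation size, and ensemble size are arbitrary choices for illustration:

```python
# Two nearly identical initial states of a chaotic system diverge after a
# finite time, and averaging a bundle of perturbed runs smooths away the
# fluctuations -- the two behaviors discussed in the comment above.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])

def run(initial, steps=4000):
    """Integrate a trajectory and return it as a (steps, 3) array."""
    traj = np.empty((steps, 3))
    state = np.array(initial, dtype=float)
    for i in range(steps):
        state = lorenz_step(state)
        traj[i] = state
    return traj

reference = run([1.0, 1.0, 1.0])
perturbed = run([1.0 + 1e-6, 1.0, 1.0])   # initial condition off by one part in a million
separation = np.linalg.norm(reference - perturbed, axis=1)
first_large = int(np.argmax(separation > 1.0))
print(f"trajectories differ by more than 1 unit after step {first_large} of 4000")

# A bundle of perturbed runs: individual members spread widely, but the
# ensemble mean is smooth -- averaging hides the fluctuations.
ensemble = np.stack([run([1.0 + 1e-6 * k, 1.0, 1.0]) for k in range(10)])
final_x = ensemble[:, -1, 0]
print(f"spread of x across members at the final step: {final_x.std():.2f}")
print(f"ensemble-mean x at the final step:            {final_x.mean():.2f}")
```

Note that this perturbs the initial conditions of a single model, which, as the comment says, is a legitimate form of Monte Carlo sampling; averaging the outputs of structurally different models, as in the CMIP5 ensemble, is a different operation.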
Recently, on June 28, 2019, a scholarly journal maintained by the top-ranked journal Nature published a research paper titled “Intensified East Asian Winter Monsoon During the Last Geomagnetic Reversal Transition” by a group of Japanese scientists, which found, according to its lead investigator, that “The umbrella effect caused by galactic cosmic rays is important when thinking about current global warming, as well as, the warm period of the medieval era.” When the journal Nature is willing to print such a contradictory piece of research, it is clear that the science is in a state of flux.

This remarkable finding confirmed the result found by Profs. Kauppinen & Malmi, both from Finland, in a paper titled “No Experimental Evidence For Significant Anthropogenic Climate Change” (June 29, 2019): “… the (IPCC) models fail to derive the influence of low cloud cover fraction on global temperature. A too-small natural component results in a too-large portion for the contribution of greenhouse gases like CO2. The IPCC represents the climate sensitivity more than one order of magnitude larger than our sensitivity 0.24 degrees C. Because the anthropogenic portion in the increased CO2 is less than 10%, we have practically no anthropogenic climate change. The low clouds control mainly the global temperature.”

The South China Morning Post next reported, on Aug. 11, 2019, that “A new study has found winters in Northern China have been warming since 4000 BC — regardless of human activity —”. This research was published in the Journal of Geophysical Research: Atmospheres and concluded that human activity “… appears to have little to do with increased greenhouse gases.” The “driving forces include the Sun, the atmosphere, and its interaction with the ocean,” but “We have detected no evidence of human influence.” This study’s findings confirm an earlier study published in Scientific Reports in 2014. Most importantly, the lead investigator for the Kobe University research paper insisted that “… she is now more worried about cooling than warming.”

Compellingly, on Sept. 23, 2019, over 500 climate experts delivered a letter to UN Secretary-General Antonio Guterres which states (in bold) that “THERE IS NO CLIMATE EMERGENCY” and that “The general-circulation models of climate on which international policy is at present founded are unfit for their purpose.” Amazingly, no mainstream news organization has reported any of these facts.
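As a quick arithmetic check on the “order of magnitude” comparison in the quoted Kauppinen & Malmi paper (the comparison range used below is the IPCC AR5 likely range of 1.5–4.5 °C per CO2 doubling; the code is only a simple ratio, not a reproduction of either analysis):

```python
# Compare the quoted paper's sensitivity (0.24 C) with the IPCC AR5
# likely range (1.5-4.5 C per CO2 doubling) as simple ratios.
PAPER_SENSITIVITY_C = 0.24
IPCC_RANGE_C = (1.5, 4.5)

for ipcc in IPCC_RANGE_C:
    print(f"IPCC {ipcc} C is {ipcc / PAPER_SENSITIVITY_C:.0f}x the paper's value")
# Prints ~6x and ~19x across the AR5 range.
```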
+1,000,000
The GCMs ain’t worth a “bucket of warm spit.”
Robert Brown of Duke also wrote a piece in WUWT in 2014, preceding the piece Mr. Clutz cites, regarding science debates (what they are and are not), peer review (what it is supposed to be), and climate models and why averaging them is not proper. It is a gem. See it at https://wattsupwiththat.com/2014/10/06/real-science-debates-are-not-rare/
And today, the debate over the utility of climate models (“better than nothing” might not be if it conveys false knowledge) rages in the popular press and among wonks.
Climate Models are clearly much, much, much, worse than nothing.
They have promoted damage to our economy, our energy production, and plain common sense; climate models are very evil.