Why is Climate Modeling Stuck?

Why is climate modeling stuck?

There is no obvious reason why we can’t do better, and yet the promised progress in this direction isn’t appearing anywhere near as fast as the policy sector needs it.

Specifically, is it possible to provide useful regional prognostics of the consequences of global change? That’s what most people want us to do and what most people think we are up to. Whether that is what we want to do or not, I think some of us should rise to the occasion.

I would like to make the case that I can help do something about this. It’s a personal ambition to actually influence the architecture of a significant climate modeling effort before I retire. I think my skill set is unusual and strong in this regard, but unfortunately my track record is less so. It’s not that I haven’t accomplished anything as a coder, a software architect, an EE, or a manager. It’s just that academia treats my time in industry as tantamount to “unemployment” and vice versa.

At the least, though, I can try to start a conversation. Are we stuck? If so, why? What could be done with a radical rethinking of the approach?

I have an essay on one view of the problem on another blog of mine. … I think both climate dynamics and software engineering issues are germane, and I’d welcome any informed discussion on it. There seem to be enough people from each camp lending me an ear occasionally that we might be able to make some progress.

Update: Well, since I’ve failed to move the discussion over there, I’ll move the article here. Thanks for any feedback.

I believe that progress in climate modeling has been relatively limited since the first successes in linking atmosphere and ocean models without flux corrections. (That’s about a decade now, long enough to start being cause for concern.) One idea is that tighter codesign of components such as atmosphere and ocean models in the first place would help, and there’s something to be said for that, but I don’t think that’s the core issue.

I suggest that there is a deeper issue based on workflow presumptions. The relationship between the computer science community and the application science community is key. I suggest that the relationship is ill-understood and consequently the field is underproductive.

The relationship between software development practitioners and domain scientists is misconstrued by both sides, and both are limited by past experience. Success in such fields as weather modeling and tide prediction provides a context that inappropriately dominates thinking, planning and execution.

Operational codes are the wrong model because scientists do not modify operational codes. Commercial codes are also the wrong model because bankers, CFOs and COOs do not modify commercial codes. The primary purpose of scientific codes, as opposed to operational codes, is to enable science, that is, free experimentation and testing of hypotheses.

Modifiability by non-expert programmers should be, but sadly is not, treated as a crucial design constraint. The application scientist is an expert on physics, perhaps on certain branches of mathematics such as statistics and dynamics, but is typically a journeyman programmer. In general the scientist does not find the abstractions of computer science intrinsically interesting and considers the program to be an expensive and balky laboratory tool.

Being presented with codes that are not designed for modification greatly limits scientific productivity. Some scientists have enormous energy for the task (or the assistance of relatively unambitious and unmarketable Fortran-ready assistants) and take it on with panache, but the sad fact is that they have little idea of what to do or how to do it. This is hardly their fault; they are modifying opaque and unwelcoming bodies of code. Under the daunting circumstances these modifications have the flavor of “one-offs”, scripts intended to perform a single calculation and treated as done more or less when the result “looks reasonable”. The key abstractions of computer science, and even its key goals, are ignored, just as if you were writing a five-liner to, say, flatten a directory tree with some systematic renaming. “Hmm, looks right. OK, next issue.”

Thus, while scientific coding has much to learn from the commercial sector, the key use case is rather atypical. The key is in providing an abstraction layer useful to the journeyman programmer, while providing all the verification, validation, replicability, version control and process management the user needs, whether the user knows it or not. As these services are discovered and understood, the value of these abstractions will be revealed, and the power of the entire enterprise will resume its forward progress.
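For concreteness, here is a minimal sketch of what such a layer might look like; the `tracked` decorator and `toy_model` below are hypothetical illustrations of mine, not any existing framework. The idea is that the scientist writes only the physics, while the layer quietly records a replicable log of every run:

```python
import functools
import hashlib
import json
import time

def tracked(fn):
    """Hypothetical sketch: wrap a model run so each call is recorded with
    a hash of its parameters, giving the scientist replicability for free."""
    log = []

    @functools.wraps(fn)
    def wrapper(**params):
        record = {
            "function": fn.__name__,
            "params": params,
            # Deterministic short hash of the parameter set, so identical
            # experiments can be recognized across sessions.
            "params_hash": hashlib.sha256(
                json.dumps(params, sort_keys=True).encode()).hexdigest()[:12],
            "time": time.time(),
        }
        record["result"] = fn(**params)
        log.append(record)
        return record["result"]

    wrapper.log = log  # provenance available, but never in the way
    return wrapper

@tracked
def toy_model(forcing=1.0, sensitivity=0.8):
    # Stand-in for a real model component; the scientist writes only this.
    return forcing * sensitivity

toy_model(forcing=2.0, sensitivity=0.5)
print(toy_model.log[0]["function"])  # the run was logged without being asked for
```

The point of the sketch is the division of labor: the scientist never sees the bookkeeping, but the verification and replicability machinery is there whether or not the scientist thinks to ask for it.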

It’s my opinion that Python provides not only a platform for this strategy but also an example of it. When a novice Python programmer invokes “a = b + c”, a surprisingly large number of things potentially happen. An arithmetic addition is commonly but not inevitably among the consequences and the intentions. The additional machinery is not in the way of the young novice counting apples but is available to the programmer extending the addition operator to support user defined classes.
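A toy illustration of that machinery (the `Temperature` class is my own example, not from any climate code): the same `+` that adds apples dispatches, via Python’s `__add__` protocol, to whatever a user-defined class provides.

```python
class Temperature:
    """Toy class showing how Python lets "a + b" dispatch
    to user-defined machinery via the __add__ protocol."""

    def __init__(self, kelvin):
        self.kelvin = kelvin

    def __add__(self, other):
        # "a + b" lands here whenever "a" is a Temperature.
        if isinstance(other, Temperature):
            return Temperature(self.kelvin + other.kelvin)
        return Temperature(self.kelvin + other)  # Temperature + plain number

# The novice counting apples sees nothing unusual...
a = 2 + 3           # plain arithmetic addition, as expected

# ...while the identical syntax drives the extended operator:
t = Temperature(273) + Temperature(10)
print(a, t.kelvin)  # 5 283
```

The novice pays nothing for the machinery until the day it is needed; that is the property I am arguing scientific codes should have.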

Consider why Matlab is so widely preferred over the much more elegant and powerful Mathematica platform by application scientists. This is because the scientists are not interested in abstractions in their own right; they are interested in the systems they study. Software is seen as a tool to investigate the systems and not as a topic of intrinsic interest. Matlab is (arguably incorrectly) perceived as better than Mathematica because it exposes only abstractions that map naturally onto the application scientist’s worldview.

Alas, the application scientist’s worldview is largely (see Marshall McLuhan) formed by the tools with which the scientist is most familiar. The key to progress is the Pythonic way, which is to provide great abstraction power without having it get in the way. Scientists learn mathematical abstractions as necessary to understand the phenomena of interest. Computer science properly construed is a branch of mathematics (and not a branch of trade-school mechanics thankyouverymuch) and scientists will take to the more relevant of its abstractions as they become available and their utility becomes clear.

Maybe starting from a blank slate we can get moving again toward a system that can actually make useful regional climate prognoses. It is time we took on the serious task of determining the extent to which such a thing is possible. I also think the strategies I have hinted at here have broad applicability in other sciences.

I am trying to work through enough details of how to extend this Python mojo to scientific programming to make a credible proposal. I think I have enough to work with, but I’ll have to treat the details as a trade secret for now. Meanwhile I would welcome comments.

Bipartisan Congressional Participation in Science Debate!

Chris and Sheril continue to do great work in reminding us how democracy is supposed to work.

Congressman Vern Ehlers, R-MI, and congressman Rush Holt, D-NJ, have agreed to co-chair the non-partisan initiative, called ScienceDebate2008.com, whose signers also include fourteen Nobel laureates, several university presidents, other congresspersons of both parties, the president of the Academy of Evangelical Scientists and Ethicists, and the heads of several of America’s major science organizations, including the American Association for the Advancement of Science.

These congressmen are apparently “also scientists”, whatever that might mean. Anyway, it’s great news!

More at Intersection and at the ScienceDebate site.

Orangutan’s Last Stand

From the UNEP:

The management and enforcement of the protection regime in Indonesia is insufficient, and illegal activities – such as logging, hunting and mining, is rampant. The RAPPAM methodology, developed by WWF, has been used to assess the relative pressures and threats using questionnaires and workshops. Borneo and Sumatra are home to the Orangutan, and the protected areas represent vital habitat for the survival of the species.

Does Indonesia’s national sovereignty trump everything else? Does Indonesia have a right to sell off its national parks, much as Texas claims to have? Do orangutans have rights? Do I have a right to live in a world where orangutans are not extinct?

I don’t know, but I think that small worlds are different from big ones. Eventually obligations trump rights; the smaller the world the more so.

Excised Paragraphs

Spending XMas day trying yet again to convince NSF to let me, not so much rewrite the climate models, as redesign the architecture of the models to match the workflow.

Given the nature of the call, the following is probably not going to strengthen my argument, but I think it’s interesting and I welcome your input. (I’d especially welcome commentary from JM and JM).


The difficulties in constructing working high-performance codes color the scientific process and other decision support networks dependent on it.

To some extent the problem in climate modeling is based on the origins of the component models in operational prediction communities (such as weather prediction), wherein the goal of software design is the cost-efficient optimal projection of the state of the atmosphere into the near future. It’s often noted that this is an initial value problem while running similar codes in climate mode is a boundary condition problem; the objectives are substantially different. Nevertheless, a weather code has a climate and a climate code has weather; these are structurally similar. Accordingly, the methods of the weather modeling community are injected into climate methodologies.

The problem is not in the different mathematical structure of the purposes at hand. It is in the different social structures. A weather model is write-once, run many times. Its purpose is efficiency and correctness. A climate model is an experimental platform. While it is efficiency-constrained, flexibility and transparency are key to its utility, qualities of trivial importance in operational settings.

I believe that despite the very slow progress of the past decade or so, climate modeling has the potential to be vastly more skillful. It seems at least that this should be put to the test. Flexibility, transparency, interoperability, testability and accessibility to automated reasoning are needed. Climate modeling needs to partake of modern agile development methodologies such as those at Google and similar very high productivity companies. Certainly the potential value add is there. It’s time that some institutional structure existed to support this.

What is Education For Anyway?

This, found at Dot Earth, strikes me as very disturbing. I can only conclude that what passes for education in America isn’t really education in any form needed to support the sort of Jeffersonian democracy that Americans aspire to.

A lot of us live in intellectual silos, it seems. A sobering survey of more than 1,700 voters, published by the Pew Research Center for the Public and the Press in January, found that more education, for example, does not shift attitudes, and instead actually hardens them.

In the survey, Republicans with a college degree were substantially more skeptical about global warming than Republicans without one. Democrats with a college degree were significantly more convinced global warming was a problem than were Democrats who didn’t go to college.

This is bad news for anyone commenting on Dot Earth who plans to try to win over readers with starkly different attitudes. My hope is that the interactions here will be a little bit like the scientific process, whittling away at unsupported arguments, building on areas of agreement and creating a trajectory toward understanding and meaningful action.

The linked survey also has a number of other daunting statistics, but I think the one that Andrew Revkin focuses on is particularly surprising and unfortunate.

Does anyone know of comparable data in other countries?

Update: I found comment #10 on the referenced Dot Earth article especially interesting among many interesting responses. Consider this advice:

So if you’re interested in bringing doubters/skeptics over to an understanding of the theory, be a little bit humble, be as familiar with the limits of the theory as you are with the strengths, and try to resist making calls to ban SUVs, restrict reproductive rights, constrict the economy and other nutty ideas.

So what am I to do? Of course the science stands by itself, and I am glad that occasionally someone can be won over by reason.

On the other hand, I think the taboo against considering the nature of the growth imperative is very much a core issue in coming up with a sensible solution to our problems. Even if a growth-imperative-friendly greenhouse gas strategy is meaningful, something else will break soon enough. I don’t understand why this particular belief, that it is “nutty” to “constrict” the “economy”, is immune from skeptical inquiry.

Update: My response to comment #10 is visible as comment #131

Update: Some excellent postings by Edwin Hall on the Dot Earth article. Please take special note of this one.

There’s No Ultimate Tipping Point

Ray, in a response on RealClimate, says nicely something I have been trying to say to the doomsayers:

Edward Greisch: See the chart on page 274 of “Six Degrees” by Mark Lynas. Mark Lynas says we have until 2015 to BEGIN REDUCING our total CO2 output and we have until 2050 to actually reduce our CO2 output by 90%. Mark Lynas says if we don’t follow the schedule in Six Degrees, we will encounter positive feedbacks which will take the control of the climate out of our hands. Civilization may fall anyway well before 2050, but we can avoid going extinct by 2100. Mark Lynas says we have to hold the CO2 level to 400 parts per million to have a 75% chance of avoiding the positive feedbacks. Is Mark Lynas correct? 8 years is a very tough timetable to stop the building of coal fired power plants and replace some coal fired power plants with nuclear. I doubt that anything else other than a plague that kills a few billion people could make a dent within 8 years.

[Response: From other estimates I’ve seen, Lynas’ timetable seems about right if the goal is to avoid 450 ppm. To avoid 400ppm, even his timetable is a bit of a stretch. However, with regard to the impacts of exceeding 400ppm (or even 450ppm), if you are quoting Lynas correctly I would differ with his assessment. There is no magic threshold crossed at 450 that commits us suddenly to the kind of catastrophic changes you mention, and certainly not to extinction of humanity by 2100. If we can’t hold the line at 450, there are still harms to be avoided by stopping short of doubling. If we can’t stop doubling, there are still harms to be avoided by preventing tripling, and so forth. But his general sentiment that we can’t drag our feet on this is correct. –raypierre]

No matter what the circumstance, unless we are finally extinct, there is always a best we can do. We should strive in our imperfect way to stay close to that.

Capitalism Being Creative

My objection to economics should not be taken as an objection to capitalism. I think capitalism is necessary, if not entirely sufficient. We just have to build in a few gentle incentives and stand back.

Here’s a nice example of creative capitalism promoting efficiency: auto insurance based on miles travelled.

This is a very helpful idea for those of us who have to drive sometimes but would rather not.