Well, I went to the Austin Forum talk as promised, “a presentation by J. Tinsley Oden on predictive computational science. In this talk, Oden traces the development of scientific thinking and philosophy that underlie predictivity.” About a third of it was tantalizingly close to being interesting: it really looked like it was leading up to some substantial insights, but then it sort of fizzled into a tired supercomputing pep talk.
First, my favorite slide:
Yes. Although Oden allowed that this might have been slightly tongue-in-cheek on Eddington’s part, it is really important to emphasize that purely empirical science is a bore; that the ability to collect numbers and plot them on graphs is a very long way from the pinnacle of science.
This was in the context of the computation enthusiast’s claim that computation is a third pillar of science, joining observation and theory as a coequal. I think this is possible but a long way off. Oden did talk frankly about some spectacular failures of modeling, though notably, if only implicitly, none of those were at the University of Texas.
Another refreshing point was his antidote to the grade-school model of the scientific method, and to the Popperian model, for which he had equally little use. He offered this view of What Science is Like. I think it has a lot more merit than the gross oversimplifications that most nonscientists are taught in grade school.
Sorry, it’s a bit blurry but I think you get the idea. (You can click on the images for a clearer view.)
Before indulging in twenty minutes of tedious cheerleading for TACC and ACES (our local supercomputing institutions), he offered the following four guideposts:
V &amp; V = Verification and Validation = (respectively) a) does the software correctly solve the equations of the model, and b) is the model actually the right one for the real system and purpose?
QoI = Quantity of Interest = the “answer”: the quantity or quantities you want to predict
UQ = Uncertainty Quantification = estimated error statistics of the predicted quantity
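To make these guideposts concrete, here is a minimal sketch of my own (nothing like this appeared in the talk): a hypothetical one-parameter decay model, where the QoI is the model output at a fixed time and UQ is done by naive Monte Carlo propagation of an assumed Gaussian uncertainty in the parameter.

```python
# Toy illustration of the guideposts: a model, a quantity of
# interest (QoI), and Monte Carlo uncertainty quantification (UQ).
# The model and all numbers are hypothetical, purely illustrative.
import math
import random

def model(decay_rate, t=1.0, y0=1.0):
    """Exponential decay y(t) = y0 * exp(-k t); the QoI is y(1)."""
    return y0 * math.exp(-decay_rate * t)

def monte_carlo_uq(n_samples=100_000, k_mean=1.0, k_sd=0.1, seed=0):
    """Propagate a Gaussian uncertainty in k through the model,
    returning the sample mean and standard deviation of the QoI."""
    rng = random.Random(seed)
    qoi = [model(rng.gauss(k_mean, k_sd)) for _ in range(n_samples)]
    mean = sum(qoi) / len(qoi)
    var = sum((q - mean) ** 2 for q in qoi) / (len(qoi) - 1)
    return mean, math.sqrt(var)

# Verification, in miniature: with no input uncertainty the code
# must reproduce the exact solution exp(-1) -- "solving the
# equations right".  Validation would require comparing the model
# against measurements of a real system, which a toy can't do.
assert abs(model(1.0) - math.exp(-1.0)) < 1e-12

mean, sd = monte_carlo_uq()
print(f"QoI estimate: {mean:.4f} +/- {sd:.4f}")
```

The point of the sketch is only that the four guideposts are distinct questions: the assert is verification, the comparison-to-reality you can’t do here is validation, and the mean/spread pair is the UQ on the QoI.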
This seemed like a good place to launch into some substance, but that’s as far as he got.
Note this talk was not about climate models but about predictive models in general. He certainly didn’t get into the differences between prediction types, where climate models present some unusual philosophical problems.
I left feeling that I had seen a very fine meal but had not actually tasted it.
To be fair, he did actually show Bayes’ theorem and talked about it for a couple of minutes.
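For anyone who wants the refresher, the standard statement (not necessarily what was on Oden’s slide) is:

```latex
% Bayes' theorem: the posterior probability of a hypothesis H
% (e.g. a set of model parameters) given observed data D.
P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)}
```

In the calibration setting, H would be the model parameters and D the measurements; the posterior is what feeds the uncertainty estimates on the predicted quantity.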
I really think there is some alternate universe where quantitative public talks are given to nonexperts who nevertheless can be expected to know fundamentals such as Bayes’ theorem like their own phone number. Somewhere over the rainbow I suppose.