The next book on my reading list is Useless Arithmetic: Why Environmental Scientists Can’t Predict the Future, by Orrin H. Pilkey and Linda Pilkey-Jarvis. xvi + 230 pp. Columbia University Press, 2006. $29.50. In a book review at this link, Carl Wunsch, the Carl and Ida Green Professor of Physical Oceanography in the Department of Earth, Atmospheric and Planetary Sciences at the Massachusetts Institute of Technology, asks: “What happens when an immature and incomplete science meets a societal demand for information and direction? The spectacle is not pretty, as we learn from Useless Arithmetic, a new book that describes a long list of incompetent and sometimes mindless uses of fragmentary scientific ideas in the realm of public policy.”

Wunsch continues his review with this insight: “With modern computers, it is now possible for a graduate student or a practicing engineer to acquire a very complex computer code, hundreds of thousands of lines long, worked over by several preceding generations of scientists, with a complexity so great that no single individual actually understands either the underlying physical principles or the behavior of the computer code—or the degree to which it actually represents the phenomenon of interest. These codes are accompanied by manuals explaining how to set them up and how to run them, often with a very long list of ‘default’ parameters. Sometimes they represent the coupling of two or more submodels, each of which appears well understood, but whose interaction can lead to completely unexpected behavior (as when a simple pendulum is hung on the end of another simple pendulum). One hundred years in the future, who will be able to reconstruct the assumptions and details of these calculations?”
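Wunsch’s pendulum-on-a-pendulum aside is about sensitive dependence on initial conditions. A minimal sketch of the same phenomenon, using the logistic map rather than a pendulum (it needs no differential-equation solver), shows how two nearly identical inputs diverge into unrelated trajectories:

```python
def trajectory(x0, r=3.9, steps=50):
    """Iterate the logistic map x -> r*x*(1 - x) from x0.

    At r = 3.9 the map is chaotic: tiny input differences grow
    exponentially until the two runs bear no relation to each other.
    """
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.200000000)
b = trajectory(0.200000001)  # perturbed in the ninth decimal place
print(max(abs(x - y) for x, y in zip(a, b)))  # the runs end up uncorrelated
```

The same divergence afflicts coupled submodels: each piece may be simple and well understood, yet the combination defeats prediction.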

I am seeking examples of this problem from mining. I suspect there must be many. Go to any trade show and the booths are alive with brochures and fresh-faced IT folk eager to sell you new software that solves every last problem at your mine. Even my friends sell me on the beauty of computer models.

My own experiences of this phenomenon include more horror stories of groundwater pollution migration modeling than I care to impose on you. The worst involved a week of fighting large egos to persuade them that the water table was not above the ground surface at the site, and never would be, in spite of what their computer models predicted. Then there was the mine in Utah where every consultant in the directory had written a report trying to predict where the phreatic surface in the tailings impoundment would be 100 years in the future. I got into trouble by showing that it could be anywhere, depending on where the pond was, how the layers of tailings formed, and the rate at which the height of the impoundment increased. They fired us, as they had fired so many before: they sought a deterministic answer, not a probability function. And the concept of the Observational Method horrified them.
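The point about a probability function rather than a single number can be sketched in a few lines. This is a toy Monte Carlo with invented numbers and an invented response function, not the Utah analysis: sample the uncertain inputs (pond position, raise rate, how the layers formed) and look at the spread of outcomes instead of quoting one elevation.

```python
import random

def phreatic_elevation(pond_offset_m, rate_m_per_yr, anisotropy):
    """Hypothetical linear response: a faster raise rate and a pond closer
    to the embankment push the phreatic surface up; so does layering
    (anisotropy). The coefficients here are illustrative only."""
    return 10.0 + 0.5 * rate_m_per_yr * 100 - 0.02 * pond_offset_m + 2.0 * anisotropy

random.seed(1)
samples = sorted(
    phreatic_elevation(
        pond_offset_m=random.uniform(50, 500),   # where the pond sits
        rate_m_per_yr=random.uniform(0.5, 3.0),  # rate of raising the dam
        anisotropy=random.uniform(1, 10),        # how the tailings layered
    )
    for _ in range(10_000)
)
low, high = samples[len(samples) // 20], samples[-len(samples) // 20]
print(f"90% of outcomes fall between {low:.0f} m and {high:.0f} m")
```

The honest answer is the whole band, which is exactly what a client wanting one deterministic number does not want to hear.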

I have just completed a report, not yet public, in which we compare alternative site cleanup strategies. We used one of those decision-making software packages. It is so powerful that we got carried away. You construct a decision tree with as many branches, nodes, score boxes, and probability functions as your mind and time allow. Turn it on and it churns out a ranking score for each alternative. The process is amazingly impressive. The first computer-derived ranking was counter-intuitive. By tweaking the scores and the functions, we achieved correspondence between intuition, logic, and the computer rankings. I defy anybody to find out how we did this. Why worry? The sheer imposing magnificence of the computer numbers, and the intricate and beautiful complexity of the figures it prints out, ensure that the regulators and the public will accept our conclusions. They never would if we based these conclusions on intuition and logic alone. That is sad, but true.
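How easy the tweaking is can be shown with a bare-bones weighted-sum ranking, the simplest scheme such packages offer (this is not the package we used, and the alternatives and scores are made up). Nudge the weights and the ranking flips, with nothing in the output to betray the nudge:

```python
def rank(alternatives, weights):
    """Score each alternative as a weighted sum of its criterion scores,
    then return the names ordered best-first."""
    scored = {
        name: sum(w * s for w, s in zip(weights, scores))
        for name, scores in alternatives.items()
    }
    return sorted(scored, key=scored.get, reverse=True)

# Hypothetical cleanup alternatives scored (0-1) on cost, risk reduction,
# and schedule.
alts = {
    "cap-in-place": [0.9, 0.5, 0.8],
    "excavate":     [0.3, 0.9, 0.4],
}

print(rank(alts, [0.5, 0.3, 0.2]))  # cost-weighted -> cap-in-place wins
print(rank(alts, [0.2, 0.6, 0.2]))  # risk-weighted -> excavate wins
```

Bury a few dozen such weights inside a tree of branches and probability functions and nobody reviewing the printout can reconstruct which knob produced which ranking.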

Another computer model story: we had three years of data on the moisture content of a waste disposal facility cover. UNSAT-H was chosen to see if we could establish definitive soil parameters and hence predict cover performance in extreme climatic conditions. It seemed a simple idea: just run the code with different parameters until you get correspondence with the field measurements. In theory, there are infinitely many sets of parameters that will yield correspondence. We found out the truth of this theory fast enough. Statistical best fit is at best smoke and mirrors, although adroit wording in the report placated the regulators.
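The non-uniqueness has a simple root: when the data constrain only a combination of parameters, every parameter set with the right combination fits equally well. A deliberately trivial Darcy-style stand-in (invented numbers, nothing to do with UNSAT-H internals) makes the point, because the observed flux fixes only the product of conductivity and gradient:

```python
def flux(k_sat, gradient):
    """Darcy-style flux q = k_sat * gradient: the measurement constrains
    only the product, not either factor on its own."""
    return k_sat * gradient

observed = 0.006  # hypothetical measured flux

# Three very different (k_sat, gradient) pairs, identical fit quality.
fits = [(0.01, 0.6), (0.02, 0.3), (0.06, 0.1)]
for k, g in fits:
    residual = abs(flux(k, g) - observed)
    print(f"k_sat={k}, gradient={g}: residual={residual:.1e}")
```

A calibration routine will happily report whichever pair it stumbles on first as the “best fit,” and extrapolating that pair to extreme conditions is where the smoke and mirrors begin.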

These stories highlight the fact that models are not there to replicate reality. They are merely tools to assist your understanding.

I think it was Pasteur who said that chance favors the prepared mind. Edison said genius is ninety-nine percent perspiration and one percent inspiration. Using the computer to model the real-life situation is part of preparing the mind, part of the perspiration. As an engineer, you use intuition, judgment, and perspective to engineer. The computer models are a valuable aid, but they are not the engineer. There is no substitute for human intelligence. Ultimately you have to put your cards on the table and specify a slope angle, a pillar size and spacing, a dewatering well location. In so doing you rely on all your faculties, only one of which is the computer printout.

Hence if you have a story or opinion about this form of computer code use or abuse, let me know.

P.S. The preface is available at this link.