By Dan Jessie, IIASA Research Scholar, Advanced Systems Analysis Program

As policymakers turn to the scientific community to inform their decisions on topics such as climate change, public health, and energy policy, scientists and mathematicians face the challenge of providing reliable information regarding trade-offs and outcomes for various courses of action. To generate this information, scientists use a variety of complex models and methods. However, how can we know whether the output of these models is valid?

This question was the focus of a recent conference I attended, arranged by IIASA Council Chair Donald G. Saari and the Institute for Mathematical Behavioral Sciences at the University of California, Irvine. The conference featured a number of talks by leading mathematicians and scientists who research complex systems, including Carl Simon, the founding Director of the University of Michigan’s Center for the Study of Complex Systems, and Simon Levin, Director of the Center for BioComplexity at Princeton University. All talks focused on answering the question, “Validation. What is it?”

To get a feel for how difficult this topic is, consider that during the lunch discussions, each speaker professed to know less than everybody else! In spite of this professed ignorance, each talk presented challenging new ideas, both on the specifics of how validation can be carried out for a given model and on general guidelines for what validation requires.

How closely does a model need to mirror reality? © Mopic | Dreamstime.com – Binary Background Photo

For example, one talk discussed the necessity of understanding the connecting information between the pieces of a system. While it may seem obvious that, to understand a system built from many different components, one needs to understand both the pieces and how the pieces fit together, this talk contained a surprising twist: oftentimes, the methodology we use to model a problem unknowingly ignores this connecting information. By using examples from a variety of fields, such as social choice, nanotechnology, and astrophysics, the speaker showed how many current research problems can be understood in this light. This talk presented a big challenge to the research community to develop the appropriate tools for building valid models of complex systems.
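To make the idea concrete, here is a minimal sketch of how this plays out in social choice, one of the fields the speaker mentioned. It is my own illustration rather than code from the talk, and the names (`ballots`, `pairwise_winner`) are hypothetical. It reproduces the classic Condorcet cycle: each pairwise majority vote is computed correctly, yet splitting full rankings into pairwise pieces discards the transitivity that connects them.

```python
from itertools import combinations

# Three voters, each with a full (transitive) ranking of candidates A, B, C.
ballots = [
    ("A", "B", "C"),  # voter 1: A > B > C
    ("B", "C", "A"),  # voter 2: B > C > A
    ("C", "A", "B"),  # voter 3: C > A > B
]

def pairwise_winner(x, y, ballots):
    """Majority winner of the head-to-head contest between x and y."""
    x_wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return x if x_wins > len(ballots) / 2 else y

for x, y in combinations("ABC", 2):
    print(f"{x} vs {y}: majority prefers {pairwise_winner(x, y, ballots)}")

# Output: A beats B, B beats C, yet C beats A. Every individual ballot is
# transitive, but analyzing the system pair by pair throws away that
# connecting information, so the aggregate outcome is a cycle.
```

The point of the sketch is that nothing is wrong with any single pairwise tally; the irrationality emerges only because the decomposition into pieces silently drops the information linking them, which is exactly the modeling trap the talk warned about.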

Overall, the atmosphere of the conference was one of debate, and it seemed that no two speakers agreed completely on what validation required, or even what it meant. Recurring questions included: How closely does a model need to mirror reality? How do we assess a model's predictions, given that every model fails in some of them? What role do funding agencies and peer review play in validation? The arguments generated by the talks weren't limited to the conference schedule, either, and carried into the dinners and beyond.

I left the conference excited by the many new ideas challenging current methods and models. This is still a new and growing topic, but one where advances will have wide-ranging impacts on how we approach and answer scientific questions.

IIASA Council Chair Don Saari: "Validation: What is it?"

Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.