What did we learn from COVID-19 models?

By Sibel Eker, researcher in the IIASA Energy Program

IIASA researcher Sibel Eker explores the usefulness and reliability of COVID-19 models for informing decision making about the extent of the epidemic and the healthcare problem.

© zack Ng 99 | Dreamstime.com

In the early days of the COVID-19 pandemic, when facts were uncertain, decisions were urgent, and stakes were very high, both the public and policymakers turned not to oracles, but to mathematical modelers to ask how many people could be infected and how the pandemic would evolve. The response was a plethora of hypothetical models shared on online platforms and numerous better-calibrated scientific models published in online repositories. A few such models were announced to support governments’ decision-making processes in countries like Austria, the UK, and the US.

With this announcement, a heated debate began about the accuracy and reliability of model projections. In the UK, for instance, the model developed by the MRC Centre for Global Infectious Disease Analysis at Imperial College London projected around 500,000 and 20,000 deaths without and with strict measures, respectively. These different policy scenarios were misinterpreted by the media as a drastic variation in the model assumptions, and hence a lack of reliability. In the US, projections of the model developed by the University of Washington’s Institute for Health Metrics and Evaluation (IHME) changed as new data were fed into the model, sparking further debate about its accuracy.

This discussion about the accuracy and reliability of COVID-19 models led me to rethink model validity and validation. In a previous study, my colleagues and I showed, based on a vast scientific literature on model validation and practitioners’ views, that validity is often equated with how well a model represents reality, which is in turn often measured by how accurately the model replicates observed data. However, representativeness does not always imply usefulness. A commentary following that study emphasized the tradeoff between representativeness and the propagation error it causes, cautioning against an exaggerated focus on extending model boundaries and against modeling hubris.

Following these previous studies, in my latest commentary in Humanities and Social Sciences Communications, I briefly reviewed the COVID-19 models used in public policymaking in Austria, the UK, and the US in terms of how they capture the complexity of reality, how they report their validation, and how they communicate their assumptions and uncertainties. I concluded that the three models are undeniably useful for informing the public and policy debate about the extent of the epidemic and the healthcare problem. They serve the purpose of synthesizing the best available knowledge and data, and they provide a testbed for altering our assumptions and creating a variety of “what-if” scenarios. However, they cannot be seen as accurate prediction tools, not only because no model can do this, but also because these models lacked thorough formal validation according to their reports in late March. While it may be true that media misinterpretation triggered the debate about accuracy, the reporting of these models contains expressions of overconfidence, and the communication of their uncertainties and assumptions is not fully clear.
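To make the “what-if” role concrete, the toy sketch below runs a textbook SIR model under two contact-rate assumptions. It is purely illustrative: it is not any of the Austrian, UK, or US models discussed here, and every parameter value is invented.

```python
# Minimal SIR "what-if" sketch (illustrative only; all parameters are invented,
# and this is not the Imperial College, IHME, or Austrian model discussed above).
import numpy as np

def run_sir(beta, gamma=0.1, population=1_000_000, initial_infected=100, days=180):
    """Simulate a discrete-time SIR epidemic and return the daily number infected."""
    s, i, r = population - initial_infected, initial_infected, 0
    infected = []
    for _ in range(days):
        new_infections = beta * s * i / population   # new cases this day
        new_recoveries = gamma * i                   # recoveries this day
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        infected.append(i)
    return np.array(infected)

# Two hypothetical scenarios: unmitigated spread vs. strict contact reduction.
baseline = run_sir(beta=0.30)   # assumed transmission rate without measures
mitigated = run_sir(beta=0.12)  # assumed transmission rate under strict measures
print(f"Peak infected, baseline:  {baseline.max():,.0f}")
print(f"Peak infected, mitigated: {mitigated.max():,.0f}")
```

Changing a single assumption shifts the projected peak dramatically, which is exactly why such runs are informative as scenarios but misleading if read as predictions.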

© Jaka Vukotič | Dreamstime.com


The uncertainty and urgency associated with pandemic decision-making are familiar from many other policymaking situations, from climate change mitigation to sustainable resource management. Therefore, the lessons learned from the use of COVID-19 models can resonate in other disciplines. Post-crisis research can analyze the usefulness of these models in the discourse and decision making, so that we can better prepare for the next outbreak and better utilize policy models in any situation. Until then, we should treat the prediction claims of any model with caution, focus on the scenario analysis capability of models, and remind ourselves once more that a model is a representation of reality, not the reality itself, just as René Magritte noted that his perfectly curved and brightly polished pipe is not a pipe.

References

Eker S (2020). Validity and usefulness of COVID-19 models. Humanities and Social Sciences Communications 7 (1) [pure.iiasa.ac.at/16614]

Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.

Running global models in a castle in Europe

By Matt Cooper, PhD student at the Department of Geographical Sciences, University of Maryland, and 2018 winner of the IIASA Peccei Award

I never pictured myself working in Europe.  I have always been an eager traveler, and I spent many years living, working and doing fieldwork in Africa and Asia before starting my PhD.  I was interested in topics like international development, environmental conservation, public health, and smallholder agriculture. These interests led me to my MA research in Mali, working for an NGO in Nairobi, and to helping found a National Park in the Philippines.  But Europe seemed like a remote possibility.  That was at least until fall 2017, when I was looking for opportunities to get abroad and gain some research experience for the following summer.  I was worried that I wouldn’t find many opportunities, because my PhD research was different from what I had previously done.  Rather than interviewing farmers or measuring trees in the field myself, I was running global models using data from satellites and other projects.  Since most funding for PhD students is for fieldwork, I wasn’t sure what kind of opportunities I would find.  However, luckily, I heard about an interesting opportunity called the Young Scientists Summer Program (YSSP) at IIASA, and I decided to apply.

Participating in the YSSP turned out to be a great experience, both personally and professionally.  Vienna is a wonderful city to live in, and I quickly made friends with my fellow YSSPers.  Every weekend was filled with trips to the Alps or to nearby countries, and IIASA offers all sorts of activities during the week, from cultural festivals to triathlons.  I also received very helpful advice and research instruction from my supervisors at IIASA, who brought a wealth of experience to my research topic.  It felt very much as if I had found my kind of people among the international PhD students and academics at IIASA.  Freed from the distractions of teaching, I was also able to focus 100% on my research and I conducted the largest-ever analysis of drought and child malnutrition.

© Matt Cooper

Now, I am very grateful to have another summer at IIASA coming up, thanks to the Peccei Award. I will again focus on the impact climate shocks like drought have on child health. However, I will build on last year’s research by looking at future scenarios of climate change and economic development. Will greater prosperity offset the impacts of severe droughts and flooding on children in developing countries? Or does climate change pose a hazard that will offset the global health gains of the past few decades? These are the questions that I hope to answer during the coming summer, when my research will benefit from many of the future scenarios already developed at IIASA.

I can’t think of a better research institute to conduct this kind of systemic, global research than IIASA, and I can’t picture a more enjoyable place to live for a summer than Vienna.

Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.

This is not reality

By Sibel Eker, IIASA postdoctoral research scholar


Ceci n’est pas une pipe – This is not a pipe © Jaka Vukotič | Dreamstime.com

Quantitative models are an important part of environmental and economic research and policymaking. For instance, IIASA models such as GLOBIOM and GAINS have long assisted the European Commission in impact assessment and policy analysis [2], and energy policies in the US have long been guided by a national energy systems model (NEMS) [3].

Despite such successful modelling applications, model criticisms often make the headlines. Whether in the scientific literature or in the popular media, some critiques highlight that models are used as if they were precise predictors and that they do not deal with uncertainties adequately [4,5,6], whereas others accuse models of not accurately replicating reality [7]. Still others criticize models for extrapolating historical data as if it were a good estimate of the future [8], and for their limited scopes that omit relevant and important processes [9,10].

Validation is the modeling step employed to deal with such criticism and to ensure that a model is credible. However, validation means different things in different modelling fields, to different practitioners and to different decision makers. Some consider validity as an accurate representation of reality, based either on the processes included in the model scope or on the match between the model output and empirical data. According to others, an accurate representation is impossible; therefore, a model’s validity depends on how useful it is to understand the complexity and to test different assumptions.
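A minimal sketch of the data-oriented view of validity is a simple comparison of model output with observed data. The series and error measures below are invented for illustration and are not taken from any of the models or studies discussed here.

```python
# A minimal sketch of data-oriented validation: compare simulated output with
# observations using simple error measures (both series are invented).
import numpy as np

observed  = np.array([102.0, 110.0, 121.0, 135.0, 150.0])  # e.g., reported annual values
simulated = np.array([100.0, 112.0, 118.0, 137.0, 155.0])  # model output for the same years

rmse = np.sqrt(np.mean((simulated - observed) ** 2))        # root mean squared error
mape = np.mean(np.abs((simulated - observed) / observed))   # mean absolute percentage error
print(f"RMSE: {rmse:.2f}   MAPE: {mape:.1%}")
```

Under the second, usefulness-oriented view, a low error on such a fit is neither necessary nor sufficient for a model to be valid.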

Given this variety of views, we conducted a text-mining analysis of a large body of academic literature to understand the prevalent views and approaches in model validation practice. We then complemented this analysis with an online survey among modeling practitioners. The purpose of the survey was to investigate practitioners’ perspectives and how they depend on background factors.
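The text-mining step can be illustrated with a small sketch that counts how often theme-related keywords appear across abstracts. Both the abstracts and the keyword lists below are invented; the published analysis uses a far more elaborate text-mining workflow.

```python
# A toy sketch of theme counting over a corpus of abstracts (all data invented;
# the actual study applies a more sophisticated text-mining pipeline).
from collections import Counter
import re

abstracts = [
    "The model is validated against historical data and used for prediction.",
    "We test structural assumptions and explore uncertainty through scenarios.",
    "Calibration to observed data improves the predictive accuracy of the model.",
]

# Hypothetical keyword sets representing two validation perspectives
themes = {
    "data_prediction": {"data", "prediction", "predictive", "calibration", "accuracy"},
    "usefulness":      {"assumptions", "uncertainty", "scenarios", "insight"},
}

counts = Counter()
for text in abstracts:
    words = set(re.findall(r"[a-z]+", text.lower()))
    for theme, keywords in themes.items():
        counts[theme] += len(words & keywords)

print(counts)  # Counter({'data_prediction': 6, 'usefulness': 3})
```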

According to our results, published recently in Eker et al. (2018) [1], data and prediction are the most prevalent themes in the model validation literature in all main areas of sustainability science, such as energy, hydrology, and ecosystems. As Figure 1 below shows, the largest fraction of practitioners (41%) think that a match between past data and model output is a strong indicator of a model’s predictive power (Question 3). Around one third of the respondents disagree that a model is valid if it replicates the past, since multiple models can achieve this, while another one third agree (Question 4). A large majority (69%) disagrees with Question 5, that models cannot provide accurate projections, implying that they support using models for prediction purposes. Overall, there is no strong consensus among practitioners about the role of historical data in model validation. Still, objections to relying on data-oriented validation have not been widely reflected in practice.


Figure 1: Survey responses to the key issues in model validation. Source: Eker et al. (2018)

According to most practitioners who participated in the survey, decision makers find a model credible if it replicates historical data (Question 6), and if the assumptions and uncertainties are communicated clearly (Question 8). Therefore, practitioners think that decision makers demand that models match historical data. They also acknowledge the calls for clear communication of uncertainties and assumptions, which is increasingly considered best practice in modeling.

One intriguing finding is that the acknowledgement of uncertainties and assumptions depends on experience level. The practitioners with a very low experience level (0-2 years) or with very long experience (more than 10 years) tend to agree more with the importance of clarifying uncertainties and assumptions. Could it be because a longer engagement in modeling and a longer interaction with decision makers help to acknowledge the necessity of communicating uncertainties and assumptions? Would inexperienced modelers favor uncertainty communication due to their fresh training on the best-practice and their understanding of the methods to deal with uncertainty? Would the employment conditions of modelers play a role in this finding?

As a modeler myself, I am surprised by the variety of views on validation and how they differ from my own prior view. With such findings and questions raised, I think this paper can provide model developers and users with reflections on and insights into their practice. It can also facilitate communication at the interface between modeling and decision-making, so that the two parties can elaborate on what makes their models valid and how they can contribute to decision-making.

Model validation is a heated topic that will inevitably remain contested. Still, one consensus we can reach is that a model is a representation of reality, not the reality itself, just like René Magritte’s disclaimer that his perfectly curved and brightly polished pipe is not a pipe.

References

  1. Eker S, Rovenskaya E, Obersteiner M, Langan S (2018). Practice and perspectives in the validation of resource management models. Nature Communications 9(1): 5359. DOI: 10.1038/s41467-018-07811-9 [pure.iiasa.ac.at/id/eprint/15646/]
  2. EC (2019). Modelling tools for EU analysis. [cited 16-01-2019] Available from: https://ec.europa.eu/clima/policies/strategies/analysis/models_en
  3. EIA (2018). Annual Energy Outlook 2018. US Energy Information Administration. https://www.eia.gov/outlooks/aeo/info_nems_archive.php
  4. The Economist (2009). In Plato’s cave. The Economist. http://www.economist.com/node/12957753#print
  5. The Economist (2010). Number-crunchers crunched: The uses and abuses of mathematical models. The Economist. http://www.economist.com/node/15474075
  6. Stirling A (2010). Keep it complex. Nature 468(7327): 1029-1031. https://doi.org/10.1038/4681029a
  7. Nuccitelli D (2017). Climate scientists just debunked deniers’ favorite argument. The Guardian. https://www.theguardian.com/environment/climate-consensus-97-per-cent/2017/jun/28/climate-scientists-just-debunked-deniers-favorite-argument
  8. Anscombe N (2011). Models guiding climate policy are ‘dangerously optimistic’. The Guardian. https://www.theguardian.com/environment/2011/feb/24/models-climate-policy-optimistic
  9. Jogalekar A (2013). Climate change models fail to accurately simulate droughts. Scientific American. https://blogs.scientificamerican.com/the-curious-wavefunction/climate-change-models-fail-to-accurately-simulate-droughts/
  10. Kruger T, Geden O, Rayner S (2016). Abandon hype in climate models. The Guardian. https://www.theguardian.com/science/political-science/2016/apr/26/abandon-hype-in-climate-models

Using Twitter data for demographic research

By Dilek Yildiz, Wittgenstein Center for Demography and Global Human Capital (IIASA, VID/ÖAW and WU), Vienna Institute of Demography, Austrian Academy of Sciences, International Institute for Applied Systems Analysis

Social media offers a promising source of data for social science research that could provide insights into attitudes, behavior, social linkages, and interactions between individuals. As of the third quarter of 2017, Twitter alone had on average 330 million active users per month. The magnitude and richness of this data attract social scientists working in many different fields, with topics ranging from extracting quantitative measures such as migration and unemployment, to more qualitative work such as tracing the footprint of the second demographic transition (i.e., the shift from high to low fertility) and the gender revolution. Although the use of social media data for scientific research has increased rapidly in recent years, several questions remain unanswered. In a recent publication with Jo Munson, Agnese Vitali, and Ramine Tinati from the University of Southampton, and Jennifer Holland from Erasmus University Rotterdam, we investigated to what extent findings obtained with social media data are generalizable to broader populations, and what constitutes best practice for estimating demographic information from Twitter data.

A key issue when using this data source is that a sample selected from a social media platform differs from a sample used in standard statistical analysis. Usually, a sample is randomly selected according to a survey design so that information gathered from it can be used to make inferences about a general population (e.g., people living in Austria). However, despite the huge number of users, the information gathered from Twitter and the estimates produced from it are subject to bias due to the platform’s non-random, non-representative nature. Consistent with previous research conducted in the United States, we found that Twitter users are more likely than the general population to be young and male, and that Twitter penetration is highest in urban areas. In addition, the demographic characteristics of users, such as age and gender, are not always readily available. Consequently, despite its potential, deriving the demographic characteristics of social media users and dealing with the non-random, non-representative populations from which they are drawn represent challenges for social scientists.

Although previous research has explored methods for conducting demographic research using non-representative internet data, few studies mention or account for the bias and measurement error inherent in social media data. To fill this gap, we investigated best practice for estimating demographic information from Twitter users, and then attempted to reduce selection bias by calibrating the non-representative sample of Twitter users with a more reliable source.

Exemplar of CrowdFlower task © Jo Munson.

We gathered information from 979,992 geo-located Tweets sent by 22,356 unique users in South-East England and estimated their demographic characteristics using the crowd-sourcing platform CrowdFlower and the image-recognition software Face++. Our results show that CrowdFlower estimates age more accurately than Face++, while both tools are highly reliable for estimating the sex of Twitter users.

To evaluate and reduce the selection bias, we ran a series of models and calibrated the non-representative sample of Twitter users with mid-year population estimates for South-East England from the UK Office for National Statistics. We then corrected the bias in age-, sex-, and location-specific population counts. This bias correction exercise shows promise for unbiased inference when using social media data, and it can be extended to further reduce selection bias by including other sociodemographic variables of social media users, such as ethnicity. By extending the modeling framework slightly to include an additional variable that is only available through social media data, it is also possible to make unbiased inferences for broader populations by, for example, extracting the variable of interest from Tweets via text mining. Lastly, our methodology lends itself to the calculation of sample weights for Twitter users or Tweets. This means that a Twitter sample can be treated as an individual-level dataset for micro-level analysis (e.g., for measuring associations between variables obtained from Twitter data).
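The calibration idea can be illustrated with a post-stratification sketch: each Twitter user is weighted by how many people in the official population statistics they represent within their age-sex group. All counts below are invented, and the published study uses a fuller statistical modeling framework rather than these raw ratios.

```python
# A toy post-stratification sketch: weight Twitter users by official population
# counts within age-sex groups (all numbers invented; the study itself uses a
# more complete modeling framework).
import pandas as pd

# Hypothetical counts of geo-located Twitter users by age group and sex
twitter = pd.DataFrame({
    "age_group": ["15-29", "15-29", "30-49", "30-49"],
    "sex":       ["male", "female", "male", "female"],
    "users":     [6000, 3500, 2500, 1800],
})

# Hypothetical mid-year population estimates for the same groups (e.g., from the ONS)
population = pd.DataFrame({
    "age_group": ["15-29", "15-29", "30-49", "30-49"],
    "sex":       ["male", "female", "male", "female"],
    "count":     [800_000, 780_000, 1_100_000, 1_120_000],
})

merged = twitter.merge(population, on=["age_group", "sex"])
merged["weight"] = merged["count"] / merged["users"]  # persons represented per sampled user
print(merged[["age_group", "sex", "weight"]])
```

Weights computed this way can then be attached to individual users or Tweets for micro-level analysis, as described above.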

Reference:

Yildiz, D., Munson, J., Vitali, A., Tinati, R. and Holland, J.A. (2017). Using Twitter data for demographic research, Demographic Research, 37 (46): 1477-1514. doi: 10.4054/DemRes.2017.37.46

Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.

Is open science the way to go?

By Luke Kirwan, IIASA open access manager

At this year’s European Geosciences Union General Assembly, a panel of experts convened to debate the benefits of open science. Open science means making as much of the scientific output and process as possible publicly visible and accessible, including publications, models, and data sets.

Open science includes not just open access to research findings, but the idea of sharing data, methods, and processes. ©PongMoji | Shutterstock

In terms of the benefits of open science, the panelists, who included representatives from academia, government, and academic publishing, generally agreed that openness favors increased collaboration and the development of large networks, especially for geoscience data, which improves precision in the interpretation of results. There is evidence that sharing data and linking it to publications increases both readership and citations. A growing number of funding bodies and journals are also requiring researchers to make the data underlying a publication as publicly available as possible. In the context of Horizon 2020, researchers are instructed to make their data ‘as open as possible, as closed as necessary.’

This statement was intentionally left vague, because the European Research Council (ERC) realized that a one-size-fits-all approach would not be able to cover the entirety of research practices across the scientific community, said Jean-Paul Bourguignon, president of the ERC.

Barbara Romanowicz from the Collège de France and the Institut de Physique du Globe de Paris also pointed to the need for disciplines to develop standardized metadata and a community ethic to facilitate interoperability. She also pointed out that the requirements for making raw data openly accessible are quite different from those for making models accessible. These problems require increased resources to be adequately addressed.


Playing devil’s advocate, Helen Glaves from the British Geological Survey pointed to several areas of potential concern. She questioned whether the costs involved in providing long-term preservation of and access to data are the most efficient use of taxpayers’ money. She also suggested that charging for access could be used to generate revenues to fund future research. However, a possibly more salient concern for researchers that she raised was the fear among scientists that making their data and research available in good faith could allow their hard work to be passed off by another researcher as their own.

Many of these issues were raised by audience members during the question and answer session. Scientists pointed out that research data involve a lot of hard work to collate, that they had concerns about inappropriate secondary reuse, and that jobs and research grants are highly competitive. However, the view was also expressed that paying for access to research fundamentally amounts to ‘double taxation’ if the research has been funded by public money, and that even restrictive sharing is better than not sharing at all. It was also argued that incentivising sharing through increased citations and visibility would both encourage researchers to make their research more open and aid them in the pursuit of grants or research positions. Bringing about these changes in research practice will involve investing in training the next generation of scientists in these new processes.

Here at IIASA, we are fully committed to open access, and in the library we assist our researchers with any queries or issues they may have about sharing their research widely. As well as improving the visibility of research publications through Pure, our institutional repository, we can also assist with making research data discoverable and citable.

A video of the discussion is available on YouTube.

Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.