By Shorouk Elkobros, IIASA Science Communication Fellow
Assessing energy-related choices and the behaviors of households can help us transition to a low-carbon economy. How can research provide more effective decision-making tools to policymakers for better climate change mitigation policies?
We live at a defining moment for climate change, where today’s actions affect tomorrow’s reality. Every little climate-friendly decision counts, whether we choose to insulate our houses, put solar panels on our rooftops, or invest in energy-efficient appliances. However, our personal energy-related decisions vary with our awareness, age, education, income, energy provider services, social norms, culture, and many other factors. Researchers are starting to pay attention to how poorly this diversity is represented in the economic models that politicians use to plan climate change policies.
Designing policies inspired by people
Households contribute around 70% of global greenhouse gas emissions. Limiting global emissions requires holistic policy approaches that take households’ behaviors and lifestyle decisions into account. Adding this dimension can potentially upscale low-carbon behavioral and social changes to national and global levels, which is fundamental to tackling climate change.
Worried about the future of the planet and motivated to support policymakers in designing better climate change mitigation policies, the authors of a recent study published in the journal Environmental Modelling & Software aspired to build bridges through interdisciplinary research. The study presented a novel interdisciplinary method that aims to integrate households’ energy behavior and social dynamics in climate-energy-economy models and thus help politicians design policies inspired by people.
“I have always been interested in the science-policy-society aspect of mitigating climate change. Climate change is a collective challenge that we need to address together to come up with better solutions for future generations,” notes study lead author Leila Niamir, a researcher jointly associated with the Mercator Research Institute on Global Commons and Climate Change, Berlin and the IIASA Transitions to New Technologies Program.
Better models for a better future
Climate change mitigation policies play a pivotal role in achieving ambitious environmental targets like the Paris Agreement or the Sustainable Development Goals (SDGs). To be able to formulate appropriate mitigation policies, decision makers need assessment tools to measure complex systems quantitatively. In the past decade, a variety of assessment tools have emerged, which have since been predominantly used to support climate change policy debates. In the study, Niamir argues that current assessment models miss bottom-up and grassroots dynamics and cannot realistically project how households’ lifestyles and social movements evolve; they may therefore not provide reliable information for policymakers.
There is a gap between what policymakers’ current assessment tools can offer and what social scientists and behavioral economists highlight as pro-environmental behavior and climate change mitigation movements. By adding this complex behavior and social perspective to the models, the researchers make it easier for policymakers to design future policies to accommodate different societal behaviors and lifestyles.
Niamir and her team presented a novel method for systematically upscaling grassroots dynamics by linking the best of both “top-down” macroeconomic computable general equilibrium (CGE) models and “bottom-up” empirical agent-based models (ABM). Their approach demonstrates that with computational ABM directly linked to survey data and macroeconomic CGE models, individual behavioral diversity and social influences can be considered when designing implementable and politically feasible policy options.
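To make the idea of such a soft link concrete, the toy sketch below is a purely illustrative reading of the approach, not the authors’ model: hypothetical agents adopt energy-efficiency measures under personal awareness, social influence, and a price signal, and the resulting adoption share is handed to a macro model as a shift in a household energy-demand parameter. All weights, thresholds, and names are invented for the example.

```python
import random

random.seed(42)

class Household:
    """A stylized agent that invests in energy efficiency based on its own
    awareness, the share of peers who already adopted, and a price signal."""
    def __init__(self):
        self.awareness = random.random()  # heterogeneous attitude in [0, 1)
        self.adopted = False

    def decide(self, adoption_share, price_signal):
        # Illustrative threshold rule: personal norm + social norm + policy signal
        utility = 0.5 * self.awareness + 0.3 * adoption_share + 0.2 * price_signal
        if utility > 0.5:
            self.adopted = True

def run_abm(n_households=1000, steps=10, price_signal=0.5):
    """Run the agent-based model and return the final adoption share."""
    agents = [Household() for _ in range(n_households)]
    for _ in range(steps):
        share = sum(a.adopted for a in agents) / n_households
        for a in agents:
            if not a.adopted:
                a.decide(share, price_signal)
    return sum(a.adopted for a in agents) / n_households

# "Soft link": the ABM's aggregate adoption share shifts the household
# energy-demand parameter that a CGE model would otherwise treat as exogenous.
adoption = run_abm()
baseline_demand = 100.0   # demand index, purely illustrative
efficiency_gain = 0.15    # assumed saving per adopting household
cge_demand_input = baseline_demand * (1 - efficiency_gain * adoption)
print(f"adoption share: {adoption:.2f}, demand index passed to CGE: {cge_demand_input:.1f}")
```

The point of the sketch is the amplification loop: as the adoption share grows, the social term pushes further households over the decision threshold, which is exactly the kind of bottom-up dynamic a stand-alone macroeconomic model cannot represent.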
“We need better assessment tools to quantitatively explore the complex climate-energy-economy system, and reveal the potential of demand-side mitigation strategies. To see substantial changes, we need a mix of external interventions, from soft information policies aimed at raising awareness bottom-up, to financial incentives altering the macro landscape of energy markets and technological transitions. Only modular and integrated models can help policymakers quantitatively explore this complex system and plan for changes in the coming decades,” says Niamir.
Towards a low-carbon economy
We cannot tackle what we do not know. Pathways to a low-carbon economy entail diminishing the growing discrepancy between mitigation policies and individual and collective behaviors. When redesigning our socio-environmental systems to mitigate climate change, we need to start looking at people as case studies rather than numbers. To transition to a low-carbon economy and accelerate decarbonization, policymakers must adopt novel models that integrate energy consumption, individual behavior, heterogeneity, and social influence into current assessment tools.
“Mitigating climate change indeed requires a massive effort from individual and social movements to advance national and international collaboration. Each individual small step towards shrinking our carbon footprint creates cascading changes in social behavior and consequently mitigates climate change,” Niamir concludes.
Niamir L, Ivanova O, & Filatova T (2020). Economy-wide impacts of behavioral climate change mitigation: linking agent-based and computable general equilibrium models. Environmental Modelling & Software 134: e104839. [pure.iiasa.ac.at/16671]
Note: This article gives the views of the authors, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.
Over the past decade, the open-source movement (e.g., the Free Software Foundation (FSF) and the Open Source Initiative (OSI)) has had a tremendous impact on the modeling of energy systems and climate change mitigation policies. It is now widely expected – in particular by and of early-career researchers – that data, software code, and tools supporting scientific analysis are published for transparency and reproducibility. Many journals actually require that authors make the underlying data available in line with the FAIR principles – this acronym stands for findable, accessible, interoperable, and reusable. The principles postulate best-practice guidance for scientific data stewardship. Initiatives such as Plan S, requiring all manuscripts from projects funded by the signatories to be released as open-access publications, lend further support to the push for open science.
Alas, the energy and climate modeling community has so far failed to realize and implement the full potential of the broader movement towards collaborative work and best practice of scientific software development. To live up to the expectation of truly open science, the research community needs to move beyond “only” open-source.
Until now, the main focus of the call for open and transparent research has been on releasing the final status of scientific work under an open-source license – giving others the right to inspect, reuse, modify, and share the original work. In practice, this often means simply uploading the data and source code for generating results or analysis to a service like Zenodo. This is obviously an improvement compared to the previously common “available upon reasonable request” approach. Unfortunately, the data and source code are still all too often poorly documented and do not follow best practice of scientific software development or data curation. While the research is therefore formally “open”, it is often not easily intelligible or reusable with reasonable effort by other researchers.
What do I mean by “best practice”? Imagine I implement a particular feature in a model or write a script to answer a specific research question. I then add a second feature – which inadvertently changes the behavior of the first feature. You might think that this could be easily identified and corrected. Unfortunately, given the complexity and size to which scientific software projects tend to quickly evolve, one often fails to spot the altered behavior immediately.
One solution to this risk is “continuous integration” and automated testing. This is a practice common in software development: for each new feature, we write specific tests in an as-simple-as-possible example at the same time as implementing the function or feature itself. These tests are then executed every time that a new feature is added to the model, toolbox, or software package, ensuring that existing features continue to work as expected when adding a new functionality.
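As a minimal, hypothetical illustration (the function and its numbers are invented, not taken from any real model), such a paired feature-and-test might look like this in Python; in a continuous-integration setup the test functions would be collected and run by a tool like pytest on every commit:

```python
# energy_stats.py -- a toy analysis function, written together with tests
# that guard against regressions when later features are added.

def annual_demand(monthly_kwh, efficiency_factor=1.0):
    """Aggregate monthly household electricity use (kWh) to an annual total,
    optionally scaled by an efficiency-retrofit factor."""
    if len(monthly_kwh) != 12:
        raise ValueError("expected 12 monthly values")
    return sum(monthly_kwh) * efficiency_factor

# test_energy_stats.py -- executed automatically on every commit by a CI
# service, so a change elsewhere that alters this behavior fails the build
# immediately instead of surfacing months later.
def test_annual_demand_baseline():
    assert annual_demand([100] * 12) == 1200

def test_annual_demand_with_retrofit():
    assert annual_demand([100] * 12, efficiency_factor=0.85) == 1020
```

If a “second feature” inadvertently changed how `annual_demand` aggregates its input, one of these small tests would break on the next run, pinpointing the regression while the offending change is still fresh.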
Other practices that modelers and all researchers using numerical methods should follow include using version control and writing documentation throughout the development of scientific software rather than leaving this until the end. Moreover, not just the manuscript and results of scientific work should be scrutinized (aka “peer review”); such appraisal should also apply to the scientific software code written to process data and analyze model results. And like the mentoring of early-career researchers, such a review should not just come at the end of a project but should be a continuous process throughout the development of the manuscript and the related analysis scripts.
In the course that I teach at TU Wien, as well as in my work on the MESSAGEix model, the Intergovernmental Panel on Climate Change Special Report on Global Warming of 1.5°C scenario ensemble, and other projects at the IIASA Energy Program, I try to explain to students and junior researchers that following such best-practice steps is in their own best interest. This is true even when it is just a master’s thesis or some coursework assignment. However, I always struggle to find the best way to convince them that following best practice is not just a noble ideal in itself, but actually helps in doing research more effectively. Only when one has experienced the panic and stress caused by a model not solving or a script not running shortly before a submission deadline can a researcher fully appreciate the benefits of well-structured code, explicit dependencies, continuous integration, tests, and good documentation.
A common trope says that your worst collaborator is yourself from six months ago, because you didn’t write enough explanatory comments in your code and you don’t respond to emails. So even though it sounds paradoxical at first, spending a bit more time following best practice of scientific software development can actually give you more time for interesting research. Moreover, when you then release your code and data under an open-source license, it is more likely that other researchers can efficiently build on your work – bringing us one step closer to a community of open science!
Note: This article gives the views of the authors, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.
In the early days of the COVID-19 pandemic, when facts were uncertain, decisions were urgent, and stakes were very high, both the public and policymakers turned not to oracles, but to mathematical modelers to ask how many people could be infected and how the pandemic would evolve. The response was a plethora of hypothetical models shared on online platforms and numerous better calibrated scientific models published in online repositories. A few of these models were adopted to support governments’ decision-making processes in countries like Austria, the UK, and the US.
With this announcement, a heated debate began about the accuracy of model projections and their reliability. In the UK, for instance, the model developed by the MRC Centre for Global Infectious Disease Analysis at Imperial College London projected around 500,000 and 20,000 deaths without and with strict measures, respectively. These different policy scenarios were misinterpreted by the media as a drastic variation in the model assumptions, and hence a lack of reliability. In the US, projections of the model developed by the University of Washington’s Institute for Health Metrics and Evaluation (IHME) changed as new data were fed into the model, sparking further debate about its accuracy.
This discussion about the accuracy and reliability of COVID-19 models led me to rethink model validity and validation. In a previous study, my colleagues and I showed that, based on a vast scientific literature on model validation and practitioners’ views, validity is often equated with how well a model represents reality, which is typically measured by how accurately the model replicates observed data. However, representativeness does not always imply the usefulness of a model. A commentary following that study emphasized the tradeoff between representativeness and the propagation error it causes, cautioning against an exaggerated focus on extending model boundaries and creating modeling hubris.
Following these previous studies, in my latest commentary in Humanities and Social Sciences Communications, I briefly reviewed the COVID-19 models used in public policymaking in Austria, the UK, and the US in terms of how they capture the complexity of reality, how they report their validation, and how they communicate their assumptions and uncertainties. I concluded that the three models are undeniably useful for informing the public and policy debate about the extent of the epidemic and the healthcare problem. They serve the purpose of synthesizing the best available knowledge and data, and they provide a testbed for altering our assumptions and creating a variety of “what-if” scenarios. However, they cannot be seen as accurate prediction tools, not only because no model can do this, but also because, according to their reports in late March, these models lacked thorough formal validation. While it may be true that media misinterpretation triggered the debate about accuracy, there are expressions of overconfidence in the reporting of these models, even though the communication of their uncertainties and assumptions is not fully clear.
The uncertainty and urgency associated with pandemic decision-making is familiar from many policymaking situations, from climate change mitigation to sustainable resource management. The lessons learned from the use of COVID-19 models can therefore resonate in other disciplines. Post-crisis research can analyze the usefulness of these models in the discourse and decision-making so that we can better prepare for the next outbreak and better utilize policy models in any situation. Until then, we should treat the prediction claims of any model with caution, focus on the scenario analysis capability of models, and remind ourselves once more that a model is a representation of reality, not reality itself, just as René Magritte noted that his perfectly curved and brightly polished pipe is not a pipe.
Eker S (2020). Validity and usefulness of COVID-19 models. Humanities and Social Sciences Communications 7 (1) [pure.iiasa.ac.at/16614]
Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.
By Nicole Arbour, external relations manager in the IIASA Communications and External Relations department
As Canadian expats in Austria, one of the things that has particularly struck my family and me is the orderliness with which the country is dealing with the pandemic. As quarantine policies were put into place, we saw panic toilet paper hoarding in other countries, but here in Austria people were (amazingly) compliant and seemed to obey instructions and timelines provided by the authorities. We never worried about our basic needs. Grocery stores were always well stocked, public transit was always there and on time – and masks were readily available when required as a physical barrier to protect others.
Experts, governments, and publics are making it clear that there is no one-size-fits-all solution to this pandemic. What works in Austria might not be what worked for South Korea, and is likely not the same as what works in other parts of Europe. Consider the Canadian landscape. There is huge variation in sociopolitical and cultural dynamics between and within provinces and territories. What works for some parts of Canada (virtual home schooling, grocery shopping) is impossible for others (Canada’s North). Cultural norms (multigenerational living, child/elder care) vary across the vast landscape. The “At Home on the Land” initiative – aimed at the particular needs of Indigenous communities – is an example of a culturally-grounded way to address the pandemic. Finding solutions isn’t always as intuitive as we might like.
Humans tend to look for the easiest way out – we want simple solutions to complex problems. We don’t seem to want to think about the problems; we want them to magically disappear. And thinking “outside of the box” isn’t always appreciated. Hand washing, clean water, and the advent of antibiotics have produced enormous leaps in our ability to tackle public health outbreaks. Where the bubonic plague is estimated to have killed 30-60% of Europe’s population in the Middle Ages, modern outbreaks are now quickly identified and contained (were you even aware of the 2017 outbreak in Madagascar?). Understanding transmission routes has significantly improved public health outcomes: John Snow’s identification of tainted water as a vector for cholera transmission led to the advent of modern epidemiology. But as we find solutions to larger challenges, those that remain are more complex, with increasing numbers of variables making solutions harder to come by.
There is some global agreement: lots of testing, quick results/containment, use of masks/physical barriers for community protection, social distancing, data collection. However, certain measures work better in some jurisdictions than others. What policies and practices are working and why are they working in these contexts? What is applicable in different contexts?
Our current global situation has reminded me of a presentation I saw on the 2014 Ebola outbreak (Professor Melissa Leach, IDS), and how important it is to remember the human factor in crises. She discussed how the key element that made the Ebola epidemic so persistent – despite the best efforts of global public health engagement – was a failure to understand how historic context, trust, and cultural dynamics played into the spread of the virus. Those providing interventions did not appreciate how historic context (i.e., post-colonialism, slavery, medical testing scandals) and mistrust in the intentions of Western interventions factored into the willingness of the local population to accept the solutions provided. Awareness of social structures, influencers and leaders, and co-creation were also important to developing solutions that would be adopted by affected communities.
Evidence is more than the numbers of tests, infections, and deaths. It is understanding the social context of communities and of society writ large, and how people interact within and between them. It’s about understanding historical context and how it feeds into local culture, social interactions, and trust relationships. It’s about community dynamics, power struggles, and the struggle for some to meet basic survival needs. It’s about the timing of decision-making, political landscapes, and different ways of leading. As with many of our global challenges, it’s a complex and multifaceted systems problem – in which the human factor is a huge driver.
As we strive for solutions to this global crisis – bring on innovation, research, and science funding. We will need these – but please, also bring along those who study the complexity that is humanity: epidemiologists, anthropologists, economists, ethicists, political scientists, sociologists, futurists, and more. In an era where evidence is being questioned, fake news is rampant, and anti-science sentiments are strong, it is crucial to remember that one key to engaging with this and the world’s other wicked problems is our relationship with our communities – the ones we are trying to protect. Public trust, built on an understanding of the importance of human dynamics, is key to broad acceptance and uptake. Solutions need to be palatable to society, or they won’t be adopted.
As we focus on the virus, let’s not forget the humans.
Weather patterns and events are changing and becoming more extreme, sea levels are rising, and greenhouse gas emissions are now at their highest levels in history. Climate change is affecting every individual in every city on every continent. It imposes adverse impact on people, communities, and countries, disrupting regional and national economies.
Climate change mitigation refers to efforts to reduce or prevent emissions of greenhouse gases to limit the magnitude of long-term climate change. Human consumption, in combination with a growing population, contributes to climate change by increasing the rate of greenhouse gas emissions. Over the last decade, instigated by the Paris Agreement, efforts to limit global warming have been expanding. Significant attention is being devoted to new energy technologies on both the production and consumption sides; however, changes in individual behavior and management practices as part of the mitigation strategy are often neglected. This might derive from the complex nature of human behavior, which makes explaining and influencing it a difficult task. As a result, quantitative tools to assess household emissions that consider the diversity of behaviors and the variety of psychological and social factors influencing them, beyond purely economic considerations, are scarce. Policymakers would benefit from reliable decision-support tools that explore the interaction of economic decision-making and behavioral heterogeneity in households’ behavioral and lifestyle changes when testing climate mitigation policies (e.g., carbon pricing or subsidies).
To address this issue, during my PhD research I studied the potential of behavioral changes among heterogeneous households regarding energy use and their role in mitigating climate change. By designing and conducting comprehensive household surveys, I explored how individuals choose to change their energy behavior and what factors trigger or inhibit these choices. Decision support tools were designed to study large-scale regional effects of individual actions and to explore how they may change over time and space. The model explicitly treats behavioral triggers and barriers at the individual level, assuming that energy use decision making is a multi-stage process. This theoretically and empirically grounded simulation model offers policymakers ways to explore various policy portfolios by running diverse micro and macro scenarios.
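As a rough illustration of what a multi-stage decision rule can look like, the toy function below is a hypothetical simplification, not the thesis model: the stage names, thresholds, and variables are invented, and each stage acts as a filter that the household must clear before the next stage is even considered.

```python
def multi_stage_decision(awareness, personal_norm, cost, income):
    """Toy three-stage filter: the investment happens only if the household
    clears each (hypothetical) stage in turn."""
    # Stage 1: knowledge - is the household aware of its energy use?
    if awareness < 0.5:
        return False
    # Stage 2: motivation - does it feel a personal responsibility to act?
    if personal_norm < 0.4:
        return False
    # Stage 3: consideration - is the measure affordable (here: at most 10% of income)?
    return cost <= 0.1 * income

# An aware, motivated household with sufficient income invests; failing any
# single stage blocks the decision regardless of the other factors.
print(multi_stage_decision(0.8, 0.6, cost=2000, income=30000))  # True
print(multi_stage_decision(0.3, 0.9, cost=2000, income=30000))  # False (no awareness)
```

The sequential structure is the point: a financial incentive alone cannot trigger investment by a household stuck at the knowledge or motivation stage, which is why information campaigns and price instruments play complementary roles.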
This model was further developed during my participation in the IIASA Young Scientists Summer Program (YSSP) to estimate the macro-level impacts of individuals’ energy behavioral changes on carbon emissions. In this research, we illustrate that individual energy behavior, especially when amplified through social context, shapes energy demand and, consequently, carbon emissions. Our results show that residential energy demand is strongly linked to personal and social norms. When assessing the cumulative impacts of these behavioral processes, we quantify the individual and combined effects of social dynamics and of carbon pricing on individual energy efficiency and on aggregated regional energy demand and emissions.
In summary, mitigating climate change requires massive worldwide efforts and strong involvement of regions, cities, businesses, and individuals, in addition to commitments at the national level. We should always keep in mind that every single behavior matters. In the transition to a sustainable and resilient society, we – as individuals – are more than just consumers.
Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.
References:
Climate Action – United Nations Sustainable Development Goals. https://www.un.org/sustainabledevelopment/climate-change/
Creutzig, F., et al. (2018). Towards demand-side solutions for mitigating climate change. Nature Climate Change 8, 268-271.
Grubler, A., et al. (2018). A low energy demand scenario for meeting the 1.5°C target and sustainable development goals without negative emission technologies. Nature Energy 3, 515-527.
Creutzig, F., et al. (2016). Beyond technology: Demand-side solutions for climate change mitigation. Annual Review of Environment and Resources 41, 173-198.
Niamir, L. (2019). Behavioural Climate Change Mitigation: From individual energy choices to demand-side potential. University of Twente.
Niamir, L., et al. (2018). Transition to low-carbon economy: Assessing cumulative impacts of individual behavioural changes. Energy Policy 118.
Stern, N. (2016). Economics: Current climate models are grossly misleading. Nature 530(7591), 407-409.
Niamir, L., et al. (2020). Demand-side solutions for climate mitigation: Bottom-up drivers of household energy behaviour change in the Netherlands and Spain. Energy Research & Social Science 62, 101356.
The results of the YSSP collaboration were presented at Impacts World 2017, where they won the best prize, and were also published in the journal Climatic Change.
By Jan Marco Müller, IIASA Acting Chief Operations Officer
Jan Marco Müller shares his insights into the recent high-level forum in Vienna that brought together science advisors to ministers of foreign affairs from across the world and other experts in the practice, theory, and discussion of science diplomacy.
Established following an initiative by the United States and the Soviet Union during the Cold War, IIASA can be considered a child of diplomacy for science. At the same time, the institute has always been one of the world’s premier vehicles of science for diplomacy, using science to build bridges between nations, including those with strained relations. However, there is another dimension of science diplomacy that has gained traction in recent years: the support scientists can provide to diplomats and policymakers in the foreign policy domain – known as science in diplomacy.
As global challenges become more complex and interdependent and technological progress advances at an ever-increasing speed, the scientific-technical dimension of foreign policies has gained increasing attention. This is illustrated by four examples:
Climate change impacts everybody on the planet, regardless of national borders.
Many digital technologies escape national jurisdictions and so create tensions between nations: e.g. cryptocurrencies, deep fakes, and internet trolls.
Trade agreements are often hampered by disagreements on technical standards, which themselves are influenced by societal values: people may still remember the discussions around chlorinated chicken in the US-EU trade negotiations a few years ago.
National interests are increasingly entering international spaces, which in the past have been governed by science, such as the Arctic/Antarctic, the deep sea and outer space.
Ministries of foreign affairs and diplomatic services around the world are all confronted with similar issues and critically depend on advice provided by scientists.
With this in mind, on 25-26 November 2019 IIASA, together with the International Network for Government Science Advice (INGSA), the Austrian Federal Ministry of Europe, Integration and Foreign Affairs, the Diplomatische Akademie Wien (Vienna School of International Studies), and the Natural History Museum Vienna, held the global meeting of the Foreign Ministries Science & Technology Advice Network (FMSTAN).
FMSTAN gathers science advisors to ministers of foreign affairs from around the planet, providing a platform for the exchange of information and best practices. IIASA hosted the first meeting of this network in October 2016, which has since grown significantly, with some 50 countries now participating in its biannual meetings.
The global meeting in November was organized back-to-back with the meetings of two other important networks in the science diplomacy arena: the Science Policy in Diplomacy and External Relations Network (SPIDER) – the science diplomacy branch of INGSA – and the Big Research Infrastructures for Diplomacy and Global Engagement through Science (BRIDGES) Network. BRIDGES was established a year ago following an initiative by my colleague Maurizio Bona at CERN and myself, with the aim of uniting the science diplomacy officers of all major international research infrastructures. In addition, a 3-day training course organized by the EU-funded project Using science for/in diplomacy for addressing global challenges (S4D4C) was arranged in parallel to achieve maximum synergies.
The meetings were attended by around 100 science diplomats including the President-elect of the International Science Council Sir Peter Gluckman, the UN Advisor on the Sustainable Development Goals Jeffrey Sachs, the former Rector of the University for Peace of the UN Martin Lees, the S&T Advisor to the US Secretary of State Matt Chessen, the S&T Advisor to the Japanese Foreign Minister Teruo Kishi, the Science Diplomacy Advisor to the Mexican Foreign Minister José Ramón López Portillo, and the Chief Science Advisor in the Dutch Ministry of Foreign Affairs Dirk-Jan Koch, to name just a few.
Six major topics were discussed:
The role of science diplomacy in achieving the Sustainable Development Goals
The importance of science in international security policies
The challenges for science diplomacy in the current geopolitical environment
The role of science in diplomatic curricula (and vice versa)
Future challenges for science diplomacy
The role of systemic thinking in policymaking
The Vienna meeting offered a unique platform for all those who “speak science” in the diplomatic arena to exchange ideas and experiences, while fostering a common global agenda. For additional insights I recommend reading the piece “Science Diplomacy: A Pragmatic Perspective from the Inside”, which aims to make the term science diplomacy more operational – all four authors participated in the Vienna meeting.
Overall the event demonstrated once again the convening power of IIASA and the leadership of the institute in confirming Vienna as one of the global hubs for science diplomacy.