Black swan sandwich: From one risk to layered risks

By Leena Ilmola-Sheppard, IIASA Advanced Systems Analysis (ASA) Program

Crisis management problems are getting more complex and complicated, but at the same time, governments have fewer and fewer resources to manage them. How can research help decision makers plan for the unplannable?

Last week in Geneva, I took part in a crisis management workshop for national decision makers organized by the OECD High Level Risk Forum and the Swiss Federal Chancellery. While the meeting was very specific to national security and crisis management, I found some takeaway messages that are relevant to us researchers as well, especially for those of us who hope to help decision makers make better decisions through modeling.

[Figure: Mads Ecklon's four-panel "comic strip" framework for crisis management]

Mads Ecklon, Head of the Centre for Preparedness Planning and Crisis Management of the Danish Emergency Management Agency, used the figure above as a framework to explain crisis management. His message can also be applied to the development of any social system. Picture 1 describes the standard starting point of a modeling exercise: we model one behavior and then analyze how the system's performance develops in a controlled situation. Ecklon explained that potential futures are not so predictable: the crisis at hand can be solved, solved only partially, not solved at all, or, in the worst case, the problem may escalate (you never know how a social system will react in a crisis situation; a small incident can turn into a massive riot). The challenge for national-level crisis managers and modelers is the same: you have to take all of these potential developments into consideration.

But what happens if a new, unexpected crisis pops up while all attention is focused on the initial problem? Such hard-to-predict events are often referred to as “black swan events.” Ecklon said that their team has more frequently been seeing situations where, while attention is focused on the current crisis, a new crisis, different or related, develops and no one notices it. For example, in the UK in 2007, just when all the crisis management resources were invested in the flooding crisis, foot and mouth disease broke out among cattle. The new phenomenon, Ecklon claimed, is that these crises are piling up, and even if they are independent of each other, the joint impact can be disastrous.


Modeling black swan events
I think this message is important for modelers as well. We may be very happy to model all four windows of our comic strip. But how can we include new surprises and crises in an ongoing model? We should develop models that include different development trajectories triggered by a change in one of our variables, but at the same time we should be able to account for several overlapping surprises.

In the meeting, national risk managers spoke about “unknown unknowns,” low-probability, high-impact risks: strange unforeseen animals, like a black swan, that jump onto the plate just when we think the situation is under some kind of control.

This kind of modeling challenge is fascinating from an academic perspective, but researchers’ intellectual hunger should not be the only reason to develop methods for these kinds of situations. From decision makers’ perspective, this is exactly where useful models are needed. The multiple simultaneous developments of complex systems are difficult to capture even for the brightest crisis teams, but a model could manage the job very well.

Most of the IIASA models are large, integrated models that cover global systems. These models are not designed for digesting black swan sandwiches. The Danish crisis management team has a solution worth benchmarking for this problem as well. They have a specific small team called the Pandora’s Cell, dedicated to anticipating, imagining, and scanning for potential not-so-obvious developments that should be taken into consideration in decision making. This dedicated team is needed because all the other available resources are focused on the obvious events, as described in square one of our comic strip.


Black swan events refer to those that are unpredictable and difficult to plan for. © Wrangel | Dreamstime.com – Black Swan Photo

What do our models really represent?

By Dan Jessie, IIASA Research Scholar, Advanced Systems Analysis Program

As policymakers turn to the scientific community to inform their decisions on topics such as climate change, public health, and energy policy, scientists and mathematicians face the challenge of providing reliable information regarding trade-offs and outcomes for various courses of action. To generate this information, scientists use a variety of complex models and methods. However, how can we know whether the output of these models is valid?

This question was the focus of a recent conference I attended, arranged by IIASA Council Chair Donald G. Saari and the Institute for Mathematical Behavioral Sciences at the University of California, Irvine. The conference featured a number of talks by leading mathematicians and scientists who research complex systems, including Carl Simon, the founding Director of the University of Michigan’s Center for the Study of Complex Systems, and Simon Levin, Director of the Center for BioComplexity at Princeton University. All talks focused on answering the question, “Validation. What is it?”

To get a feel for how difficult this topic is, consider that during the lunch discussions, each speaker professed to know less than everybody else! In spite of this professed ignorance, each talk presented challenging new ideas, both on specifics of how validation can be carried out for a given model and on general guidelines for what validation requires.


How closely does a model need to mirror reality? © Mopic | Dreamstime.com – Binary Background Photo

For example, one talk discussed the necessity of understanding the connecting information between the pieces of a system. While it may seem obvious that, to understand a system built from many different components, one needs to understand both the pieces and how the pieces fit together, this talk contained a surprising twist: oftentimes, the methodology we use to model a problem unknowingly ignores this connecting information. By using examples from a variety of fields, such as social choice, nanotechnology, and astrophysics, the speaker showed how many current research problems can be understood in this light. This talk presented a big challenge to the research community to develop the appropriate tools for building valid models of complex systems.

Overall, the atmosphere of the conference was one of debate, and it seemed that no two speakers agreed completely on what validation required, or even meant. Recurring questions included: How closely does a model need to mirror reality? How do we assess predictions, given that every model fails in some of them? What role do funding agencies and peer review play in validation? The arguments generated by the talks weren’t limited to the conference schedule, either, and carried over into the dinners and beyond.

I left the conference with a sense of excitement at seeing so many new ideas that challenge the current methods and models. This is still a new and growing topic, but one where advances will have wide-ranging impacts in terms of how we approach and answer scientific questions.

IIASA Council Chair Don Saari: Validation: What is it?

Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.

How can research help achieve resilience?

By Elisabeth Suwandschieff, Research Scholar, IIASA Ecosystems Services and Management Program


Vienna, Austria

We live in a world that is fluid and diverse. Yet policymakers have to find solutions that are definitive and effective, but also able to adapt to uncertain, changing, and challenging environments. How can research help policymakers achieve such resilience?

At last week’s 4th Viennese Talks on Resilience and Networks, I listened to a number of talks on this topic from prominent figures in politics, the military, research, and the private sector, who came together to discuss potential future pathways for Austria. Speakers from politics emphasized the importance of social solutions such as greater investment in education. Meanwhile, researchers from IIASA and other institutions brought the perspective of systems analysis and explained how research on dynamic systems can inform policymaking.

System dynamics view
From the research perspective, IIASA’s Brian Fath and others brought a systems analytical view of complex systems and their dynamics. They explained that complex systems such as organizations, businesses, and cities go through different stages in their “ecocycle.” Understanding this cycle and process is key to influencing a system’s development.

FAS.research Director Harald Katzmair argued that life, as a complex system, can be seen as a process of growth, stagnation, destructurization, and reorganization. In a recent research project, Katzmair found that the main factor in achieving resilience was the ability of a system to remain flexible through improvisation, collaboration, behavioral change, and openness. If we apply this to our understanding of the world, it becomes necessary to rethink our approach to leadership in every respect.

“Our world is not a closed system; it does not consist of one choice, one idea, one currency,” said Katzmair.

Fath said that resilience is achieved by successfully managing each stage of the life cycle, explaining that even collapse can be seen as a key feature of system dynamics, because it results in developmental opportunities. Through disturbance and adaptive change in the landscape, new landscapes can be shaped.

Applying research to resilience
Many of the research talks were mathematical and complex. How can such research help achieve resilience on a practical level? The issue for policymakers is that they have to provide definitive solutions when we actually live in a world that is fluid and diverse; we therefore need a diversified portfolio of problem-solving approaches. That is, solutions must be broad without losing focus. They must be effective, but remain flexible and open.

Research can bring different experiences together and provide a platform and a common language that can be shared. Systems thinking is a powerful way to condense the different ways of thinking and produce a portfolio of options rather than rigid solutions.


The adaptive cycle (Burkhard et al. 2011)

Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.

Interview: Aquatic invaders and ecological networks

Danielle Haak, who recently completed her PhD at the Nebraska Cooperative Fish and Wildlife Research Unit and the School of Natural Resources at the University of Nebraska-Lincoln, has won the annual Peccei Award for her outstanding research as part of the 2014 Young Scientists Summer Program (YSSP) in IIASA’s Advanced Systems Analysis research program.

Could you tell me a bit about yourself? Where are you from and what do you study?
I grew up in Milwaukee, Wisconsin (USA), and it was there I fell in love with the natural world. As a kid, my family and I spent weekends boating on Lake Michigan, and I’ve always been fascinated by lakes and the hidden world beneath the water’s surface. As an undergraduate, I spent a few summers in northern Wisconsin at a limnology research station, and this is where I realized I could actually make a career out of this fascination! I went on to get a BSc in Wildlife Ecology and an MSc in Biological Sciences, and I recently defended my PhD dissertation, which focused on the energetics and habitat requirements of the invasive freshwater Chinese mystery snail. In general, I’m interested in aquatic invasive species and how their introduction affects ecosystem structure, functioning, and resilience.

How did you get interested in this subject?
I was drawn to aquatic invasive species during my undergraduate research. My first independent research project was on invasive crayfish in a northern Wisconsin lake; in addition to out-competing the native crayfish population, the invasive species suffered from a fungal disease outbreak, and we wanted to understand its prevalence throughout the lake. I also worked as a technician on a whole-lake study researching the efficacy of manual removal of an invasive crayfish species from another lake. It was a long-term project that successfully reduced the invasive rusty crayfish population enough that the native crayfish population was able to recover, and the entire lake underwent a drastic physical change as a result. These large-scale dynamics have always been appealing to me, and I knew it was something I wanted to pursue in my career. When I started my PhD at the University of Nebraska-Lincoln, our research group had just started a number of side projects on the Chinese mystery snail, and there was an obvious gap in our scientific understanding of the species; thus, it made sense to take advantage of this opportunity!

What was the question you were trying to answer in your YSSP research project?
My YSSP project built upon my dissertation topic but went in a slightly different direction. My YSSP supervisor, Dr. Brian Fath, and I wanted to utilize the already-established methods of social and ecological network analyses, but in a way that hadn’t been done before. Ultimately, we had two main questions. First, we wanted to investigate how the social dynamics of ecosystems can be integrated into ecological network analysis. And second, we wanted to use network analysis to analyze the ecological effects and movement of the Chinese mystery snail in the southeast region of Nebraska.

What did you find?
Because there were a few parts to this research, we had a number of different results. First, we were able to create directed networks of how anglers and boaters moved among a network of flood-control reservoirs. We also developed ecological networks specific to each of the 19 reservoirs included in our study. Both of these findings were relevant by themselves, but the cool part was how we combined them. We adapted the framework of infectious disease network modeling to simulate what would happen within the first 25 years after a hypothetical introduction. The human movements connecting reservoirs were equivalent to a disease’s transmission rate, and the individual population growth of the snail within each reservoir after an introduction was like a disease’s incubation time, leading up to a threshold where that reservoir then became contagious. We started with 5 infected and contagious reservoirs, and after 25 years only 5 of the 19 reservoirs did not have the Chinese mystery snail in them. Finally, we identified three of the already-infected reservoirs where preventing snails from being transported out would be most critical, as well as two susceptible reservoirs where preventing introduction of the snails would be most beneficial.
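For readers curious about the mechanics, here is a minimal sketch of the kind of coupled simulation described above, written from the description rather than from the study's actual model or data: boater and angler movements play the role of a disease's transmission network, and local snail population growth plays the role of an incubation period before a reservoir becomes "contagious." All rates, thresholds, and the randomly generated movement network below are hypothetical placeholders.

```python
# Minimal sketch (not the study's actual model) of an invasion spreading over a
# directed movement network, with within-reservoir growth acting as "incubation."
import random

random.seed(1)

N_RESERVOIRS = 19
YEARS = 25
GROWTH_RATE = 0.8            # hypothetical intrinsic growth of an introduced population
CONTAGIOUS_THRESHOLD = 100   # population above which a reservoir can export snails
CARRYING_CAPACITY = 1000     # hypothetical cap on local population size

# Hypothetical directed "transmission" network: move_prob[i][j] is the yearly
# probability that boater/angler traffic carries snails from reservoir i to j.
move_prob = {i: {j: random.uniform(0.0, 0.05) for j in range(N_RESERVOIRS) if j != i}
             for i in range(N_RESERVOIRS)}

population = [0.0] * N_RESERVOIRS
for seed in random.sample(range(N_RESERVOIRS), 5):   # 5 initially infected reservoirs
    population[seed] = CONTAGIOUS_THRESHOLD          # seeded at the contagious threshold

for year in range(YEARS):
    contagious = [i for i, p in enumerate(population) if p >= CONTAGIOUS_THRESHOLD]
    # Transmission step: contagious reservoirs may seed uninvaded ones.
    for i in contagious:
        for j, prob in move_prob[i].items():
            if population[j] == 0 and random.random() < prob:
                population[j] = 1.0                   # a new introduction ("infection")
    # Incubation step: logistic-style growth of each established local population.
    for i, p in enumerate(population):
        if p > 0:
            population[i] = p * (1 + GROWTH_RATE * (1 - p / CARRYING_CAPACITY))

invaded = sum(1 for p in population if p > 0)
print(f"After {YEARS} years, {invaded} of {N_RESERVOIRS} reservoirs are invaded.")
```

In the actual study, the movement network came from angler and boater data and the within-reservoir dynamics from ecological network analysis, but the overall structure, transmission between nodes plus incubation within nodes, is the same.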


Chinese Mystery Snail. Photo: Wisconsin Department of Natural Resources, Doug Jensen

Why is this research important for policy or society?
Our preliminary results demonstrated that social and ecological network models can be used in tandem, which has the potential to address a number of complex policy and management issues. Additionally, being able to prioritize reservoirs based on how effective prevention efforts would be allows managers to focus their limited resources where they will get the best return on their investment. I believe there is also a great deal of potential in using this combined modeling approach to assess the spread of other aquatic invasive species of concern, as well as other types of disturbances.

How are you planning to continue this research when you return to IIASA?
I would like to work with Dr. Fath on refining some of my individual ecological network models, and possibly incorporating some of the additional social data that’s available to us. We also discussed possibly using the approach to look at other questions related to aquatic invasive species, but in different geographical regions and possibly with different software. One of the best parts of this project was coming up with so many questions on where we could go next, and I really enjoyed working with Dr. Fath and gaining a new perspective on the questions that interest me.

How did your time at IIASA affect your PhD research?
My time at IIASA refreshed my love of the scientific process, and I loved the flexibility in adjusting my project as I learned more and developed new questions. Ultimately, I ended up with an additional chapter for my dissertation and came home with a mostly-completed draft.

What was your favorite aspect of the YSSP and IIASA?
I loved so much about YSSP and working at IIASA, but the best part was probably the ability to meet other brilliant scientists and students from around the world. In addition to thought-provoking discussions on science and research, we also had some incredible discussions on life in other countries with drastically different cultures. The other students made the entire summer even better, and I’m so happy I was able to participate in such an incredible experience. IIASA has a truly unique work environment, and everyone made us feel right at home. It really was a dream come true, and I’m so excited about the opportunity to return and pick up where I left off. The only thing missing will be my fellow YSSPers! I wish we could all come back every summer!

What was your favorite moment of the summer?
I think my favorite experience was the end-of-summer workshop and the dinner and dance that followed. I was so impressed during the initial presentations, and it was great to hear about all the progress that was made in just three short months. Celebrating this progress with a night of dancing and dining was the perfect ending to a great summer. It was a bittersweet farewell, but I think it cemented our friendships and was a great capstone to an already dreamlike experience!

Photo credit: Danielle Haak

Danielle Haak (right) and fellow YSSPer Adriana Reyes, at the end-of-summer awards ceremony.

Note: This article gives the views of the interviewee, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.

Global carbon taxation: a back of the envelope calculation

By Armon Rezai, Vienna University of Economics and Business Administration and IIASA,
and Rick van der Ploeg, University of Oxford, UK, University of Amsterdam, and CEPR

The biggest externality on the planet is the failure of markets to price carbon emissions appropriately (Stern, 2007). This leads to excessive fossil fuel use, which induces global warming and all the economic costs that go with it. Governments should seize the moment of plummeting oil prices and set a price of carbon equal to the optimal social cost of carbon (SCC), where the SCC is the present discounted value of all future production losses from the global warming induced by emitting one extra ton of carbon (e.g., Foley et al., 2013; Nordhaus, 2014). Our calculations suggest a price of $15 per ton of emitted CO2, or 13 cents per gallon of gasoline. This price can be implemented either with a global tax on carbon emissions or with competitive markets for tradable emission rights and, in the absence of second-best issues, must be the same throughout the globe.

The most prominent integrated assessment model of climate and the economy is DICE (Nordhaus, 2008; 2014). Such models can be used to calculate the optimal level and time path for the price of carbon. Alas, most people, including policymakers and economists, view these integrated assessment models as a “black box,” and consequently the resulting prescriptions for the carbon price are hard to understand and to communicate to policymakers.


© Cta88 | Dreamstime.com 

New rule for the global carbon price
This is why we propose a simple rule for the global carbon price, which can be calculated on the back of the envelope and approximates the correct optimal carbon price very accurately. Furthermore, this rule is robust, transparent, and easy to understand and implement. The rule depends on geophysical factors, such as dissipation rates of atmospheric carbon into oceanic sinks, and economic parameters, such as the long-run growth rate of productivity and the societal rates of time impatience and intergenerational inequality aversion. Our rule is based on the following premises.

  • First, the carbon cycle dynamics are much more sluggish than the process of growth convergence. This allows us to base our calculations on trend growth rates.
  • Second, a fifth of carbon emissions stays permanently in the atmosphere; of the remainder, 60 percent is absorbed by the oceans and the earth’s surface within a year, and the rest decays with a half-life of three hundred years. After three decades, half of the carbon has left the atmosphere. Emitting one ton of carbon thus implies that roughly 0.2 + 0.32 × e^(−0.0023t) tons are left in the atmosphere after t years (see the sketch after this list).
  • Third, marginal climate damages are roughly 2.38 percent of world GDP per trillion tons of extra carbon in the atmosphere. These figures come from Golosov et al. (2014) and are based on DICE; the calibration assumes that doubling the stock of atmospheric carbon yields a rise in global mean temperature of 3 degrees Celsius. Hence, the within-period damage of one ton of carbon after t years is roughly 0.0238 × 10^(−12) × (0.2 + 0.32 × e^(−0.0023t)) × Y(t), where Y(t) is world GDP in year t.
  • Fourth, the SCC is the discounted sum of all future within-period damages. The interest rate used to discount these damages follows from the Keynes-Ramsey rule as the rate of time impatience ρ plus the coefficient of relative intergenerational inequality aversion (IIA) times the per-capita growth rate in living standards g. Growth in living standards thus leads to wealthier future generations that require a higher interest rate, especially if IIA is large, because current generations are then less prepared to sacrifice current consumption.
  • Fifth, it takes a long time to warm up the earth. We suppose that the average lag between global mean temperature and the stock of atmospheric carbon is 40 years.
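To make premises two and three concrete, here is a small sketch that evaluates the airborne fraction and the within-period damage written out above. The retention formula is simply the one reconstructed from the stated fractions and half-life, and the damage coefficient is the 2.38-percent figure quoted in the third premise; both are taken at face value rather than from the underlying paper.

```python
# Premises two and three in code: how much of one emitted ton of carbon is still
# airborne after t years, and the damage it causes in that year. The retention
# formula follows the stated fractions (1/5 permanent, 0.8 * 0.4 = 0.32 slow-decaying)
# and the 300-year half-life; the damage share is the quoted 2.38% of world GDP
# per trillion tons of atmospheric carbon.
import math

DECAY = math.log(2) / 300          # slow-component decay rate (300-year half-life)
DAMAGE_PER_TON = 0.0238 / 1e12     # share of world GDP lost per ton of airborne carbon
WORLD_GDP = 76e12                  # world GDP in USD (2014)

def airborne_fraction(t_years: float) -> float:
    """Tons left in the atmosphere, t years after emitting one ton."""
    return 0.2 + 0.32 * math.exp(-DECAY * t_years)

def within_period_damage(t_years: float, gdp: float = WORLD_GDP) -> float:
    """USD damage in year t from one ton emitted in year 0 (ignoring the warming lag)."""
    return DAMAGE_PER_TON * airborne_fraction(t_years) * gdp

print(airborne_fraction(30))       # ~0.50: half the carbon is gone after three decades
print(within_period_damage(30))    # ~0.9 USD of damage per ton in that year
```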

We thus get the following back-of-the-envelope rule for the optimal SCC and price of carbon:

SCC = d × Y × ( 0.2/r + 0.32/(r + 0.0023) ) × ( 1/(1 + 40r) )

where d is the marginal climate damage per ton of atmospheric carbon expressed as a share of world GDP (from the third premise), Y is current world GDP, and r = ρ + (IIA − 1) × g is the Keynes-Ramsey interest rate net of the growth rate of damages (damages rise in proportion to world GDP, which grows at rate g). Here the term in the first set of round brackets is the present discounted value of all future within-period damages resulting from emitting one ton of carbon, and the term in the second set of round brackets is the attenuation of the SCC due to the lag between the change in temperature and the change in the stock of atmospheric carbon.
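As a rough illustration, the sketch below evaluates the rule as written above for the parameter combinations discussed further below. It uses the damage coefficient and world GDP quoted in the text, so the dollar figures it prints are indicative only and need not match the paper's own calibration.

```python
# Back-of-the-envelope SCC rule as reconstructed above. The damage coefficient and
# world GDP are the values quoted in the text; the resulting dollar figures are
# therefore illustrative rather than a reproduction of the authors' numbers.
DECAY = 0.0023                   # decay rate of the slow atmospheric carbon component
TEMP_LAG = 40                    # mean lag (years) between atmospheric carbon and warming
DAMAGE_PER_TON = 0.0238 / 1e12   # share of world GDP lost per ton of airborne carbon
WORLD_GDP = 76e12                # USD, 2014

def scc(rho: float, iia: float, g: float, gdp: float = WORLD_GDP) -> float:
    """Optimal social cost of carbon in USD per ton of carbon under the simple rule."""
    r = rho + (iia - 1) * g                        # growth-corrected discount rate
    pdv_damages = 0.2 / r + 0.32 / (r + DECAY)     # first bracket: discounted damages
    attenuation = 1 / (1 + TEMP_LAG * r)           # second bracket: warming lag
    return DAMAGE_PER_TON * gdp * pdv_damages * attenuation

# Parameter combinations discussed below: (time impatience rho, IIA, growth g).
for rho, iia, g in [(0.00, 2, 0.02), (0.02, 2, 0.02), (0.00, 4, 0.02), (0.00, 2, 0.01)]:
    print(f"rho={rho:.1%}, IIA={iia}, g={g:.0%}: SCC = {scc(rho, iia, g):.0f} USD/tC")
```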

Policy insights from the new rule
This rule gives the following policy insights:

  • The global price of carbon is high if welfare of future generations is not discounted much.
  • Higher growth in living standards g boosts the interest rate and thus depresses the optimal global carbon price if IIA > 1. As future generations are better off, current generations are less prepared to make sacrifices to combat global warming. However, with IIA < 1, growth in living standards boosts the price of carbon.
  • Higher IIA implies that current generations are less prepared to temper future climate damages if there is growth in living standards and thus the optimal global price of carbon is lower.
  • The lag between temperature and atmospheric carbon (the term in the second pair of brackets) and the decay of atmospheric carbon both depress the price of carbon.
  • The optimal price of carbon rises in proportion with world GDP which in 2014 totalled 76 trillion USD.

The rule is easy to extend to allow for marginal damages that react less than proportionally to world GDP (Rezai and van der Ploeg, 2014). For example, additive instead of multiplicative damages from global warming give a lower initial price of carbon, especially if economic growth is high, and a completely flat time path for the price of carbon. In general, the lower the elasticity of climate damages with respect to GDP, the flatter the time path of the carbon price.

Calculating the optimal price of carbon following the new rule
Our benchmark set of parameters for the rule is to suppose trend growth in living standards of 2 percent per annum and a degree of intergenerational inequality aversion of 2, and not to discount the welfare of future generations at all (g = 2%, IIA = 2, ρ = 0). This gives an optimal price of carbon of $55 per ton of emitted carbon, $15 per ton of emitted CO2, or 13 cents per gallon of gasoline, which subsequently rises in line with world GDP at a rate of 2 percent per annum.

Leaving ethical issues aside, our rule shows that discounting the welfare of future generations at 2 percent per annum (keeping g = 2% and IIA = 2) implies that the optimal global carbon price falls to $20 per ton of emitted carbon, $5.5 per ton of emitted CO2, or 5 cents per gallon gasoline.

If society were more concerned with intergenerational inequality and used a higher IIA of 4 (keeping g = 2%, ρ = 0), current generations should sacrifice less current consumption to improve the climate decades and centuries ahead. This is why our rule then indicates that the initial optimal carbon price falls to $10 per ton of carbon. Taking a lower IIA of one and a rate of time impatience of 1.5% per annum, as in Golosov et al. (2014), pushes up the initial price of carbon to $81 per ton of emitted carbon.

A more pessimistic forecast of growth in living standards of 1 instead of 2 percent per annum (keeping IIA = 2, ρ = 0) boosts the initial price of carbon to $132 per ton of carbon, which subsequently grows at a rate of 1 percent per annum. To illustrate how accurate our back-of-the-envelope rule is, we road-test it in a sophisticated integrated assessment model of growth, savings, investment, and climate change with endogenous transitions between fossil fuel and renewable energy and forward-looking dynamics associated with scarce fossil fuel (for details see Rezai and van der Ploeg, 2014). The figure below shows that our rule approximates optimal policy very well.

[Figure: the carbon price under the simple rule compared with the optimal (first-best) policy]

The table below confirms that our rule also predicts the optimal timing of energy transitions and the optimal amount of fossil fuel to be left unexploited in the earth very accurately. Business as usual leads to unacceptable degrees of global warming (4 degrees Celsius), since much more carbon is burnt (1640 gigatons of carbon) than in the first best (955 GtC) or under our simple rule (960 GtC). Our rule also accurately predicts by how much the transition to the carbon-free era is brought forward (by about 18 years). No wonder our rule yields almost the same welfare gain as the first best, while business as usual leads to significant welfare losses (3% of world GDP).

Transition times and carbon budget (IIA = 2)

Scenario             Fossil fuel only   Renewable only   Carbon used   Maximum temperature   Welfare loss
First best           2010–2060          from 2061        955 GtC       3.1 °C                0%
Business as usual    2010–2078          from 2079        1640 GtC      4.0 °C                −3%
Simple rule          2010–2061          from 2062        960 GtC       3.1 °C                −0.001%

Recent findings in the IPCC’s fifth assessment report support our results. While it is not possible to translate their estimates of the social cost of carbon into our model in a straightforward manner, scenarios with similar levels of global warming yield similar time profiles for the price of carbon.

Our rule for the global price of carbon is easy to extend to allow for growth damages of global warming (Dell et al., 2012). This pushes up the carbon tax, brings forward the carbon-free era to 2044, and curbs the total carbon budget (to 452 GtC) and the maximum temperature (to 2.3 degrees Celsius). Allowing for prudence in the face of growth uncertainty also induces a marginally more ambitious climate policy, but rather less so. On the other hand, additive damages lead to a laxer climate policy, with a much bigger carbon budget (1600 GtC) and fossil fuel abandoned much later (2077).

In sum, our back-of-the-envelope rule for the optimal global price of carbon gives an accurate prediction of the optimal carbon tax. It highlights the importance of economic primitives, such as the trend growth rate of GDP, for climate policy. We hope that because the rule is easy to understand and communicate, it might also be easier to implement.

References
Dell, Melissa, Jones, B. and B. Olken (2012). Temperature shocks and economic growth: Evidence from the last half century, American Economic Journal: Macroeconomics 4, 66-95.
Foley, Duncan, Rezai, A. and L. Taylor (2013). The social cost of carbon emissions. Economics Letters 121, 90-97.
Golosov, M., J. Hassler, P. Krusell and A. Tsyvinski (2014). Optimal taxes on fossil fuel in general equilibrium, Econometrica, 82, 1, 41-88.
Nordhaus, William (2008). A Question of Balance: Economic Models of Climate Change, Yale University Press, New Haven, Connecticut.
Nordhaus, William (2014). Estimates of the social cost of carbon: concepts and results from the DICE-2013R model and alternative approaches, Journal of the Association of Environmental and Resource Economists, 1, 273-312.
Rezai, Armon and Frederick van der Ploeg (2014). Intergenerational Inequality Aversion, Growth and the Role of Damages: Occam’s Rule for the Global Carbon Tax, Discussion Paper 10292, CEPR, London.
Stern, Nicholas (2007). The Economics of Climate Change: The Stern Review, Cambridge University Press, Cambridge.

Note: This article gives the views of the authors, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.