By 2090, the area burned by forest fires in the European Union could increase by 200% because of climate change. However, preventive fires could keep that increase to below 50%. Improved firefighting response could provide additional protection against forest fires. These findings were the result of modeling work we did for the EU Mediation project on projecting future burned areas and adaptation options in Europe. When we talk about these results, people often want to know more about how our model works, what assumptions it makes, and how reliable it is.
Figure 1. The WildFire cLimate impacts and Adaptation Model (FLAM) schematic – estimation of expected burned area.
The model is complex: every link in the schematic shown above represents a specific mathematical formula. These formulas have been developed by many researchers who studied how wildfire occurrence relates to climate, population, and the biomass available for burning. Their results have been aggregated into mathematical relations and functions that attempt to replicate real processes. The model code runs through the scheme with daily weather inputs to calculate the potential for fire ignition, spread, and burned areas. It transforms spatial and intertemporal inputs into expected burned areas for 25 km grid cells across Europe. These cells can be summed into geographic regions, e.g. countries, and burned areas can likewise be aggregated over a given time period, e.g. 10 years.
It took days for our colleague Mirco Migliavacca to run the model during his work at the Joint Research Centre of the European Commission. In fact, the scheme depicted in Figure 1 shows only a small piece of a larger picture: the Community Land Model with an integrated fire module (CLM-AB), which he used. CLM-AB calculates all inputs to the fire module by modeling processes in the global vegetation system. To speed up running times for the case study on wildfires in Europe, my colleague Nikolay Khabarov developed a standalone version of the fire model by decoupling the fire module from CLM-AB. By the time I joined the study, we had also found alternative sources of input data, e.g. IIASA’s Global Forest Database, and implemented additional procedures to create our wildfire climate impacts and adaptation model (FLAM).
We used historical data from satellite observations to validate the modeling results. At the beginning, many numerical experiments in CLM and FLAM did not give satisfactory results: modeled burned areas were either over- or underestimated compared to those reported in available datasets. One day we had a purely mathematical insight. We realized that in the fire algorithm implemented in FLAM, there is a parameter that can be factored out, mathematically speaking. This parameter, the probability of extinguishing a fire in a pixel in one day, was constant for Europe and set to 0.5. It became obvious that this parameter should vary by region. Factoring it out made it possible to avoid routine calculations and to use the parameter for calibrating the model over a historical period. This can be done analytically by solving a corresponding polynomial equation. These analytical findings allowed us to introduce an effective calibration procedure and, at the same time, to estimate firefighting efficiency at the country level. Further, on the advice of our colleagues Anatoly Shvidenko and Dmitry Schepaschenko, we introduced adaptation options into the model, for example prescribed burning, which firefighters use to reduce fuel availability and, consequently, the potential for a major fire.
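The calibration idea can be illustrated with a toy sketch. The actual FLAM formulas are more involved; the function names and the simplified geometric burn-duration model below are assumptions made purely for illustration. If a fire ignited in a pixel survives each day with probability 1 − q, where q is the daily extinguishing probability, then its expected duration (truncated at T days) is (1 − (1 − q)^T)/q, and matching modeled to observed burned area reduces to solving a polynomial equation in q, which can be done numerically:

```python
def expected_burned_area(q, n_ignitions, area_per_day, max_days):
    """Toy model: expected burned area when each active fire survives a day
    with probability (1 - q) and burns area_per_day per day of activity."""
    # Expected burn duration, truncated at max_days (partial geometric series).
    expected_duration = (1.0 - (1.0 - q) ** max_days) / q
    return n_ignitions * area_per_day * expected_duration

def calibrate_extinguishing_prob(observed_area, n_ignitions, area_per_day,
                                 max_days, tol=1e-10):
    """Solve expected_burned_area(q) = observed_area for q by bisection.
    Expected area decreases monotonically in q on (0, 1]."""
    lo, hi = 1e-9, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if expected_burned_area(mid, n_ignitions, area_per_day, max_days) > observed_area:
            lo = mid  # model burns too much: extinguish more aggressively
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: recover a region-specific q from "observed" data generated with q = 0.3
observed = expected_burned_area(0.3, n_ignitions=100, area_per_day=2.0, max_days=30)
q_hat = calibrate_extinguishing_prob(observed, 100, 2.0, 30)
```

Because the expected area is monotone in q, the calibrated value is unique for each region, which is what makes the parameter usable as a country-level proxy for firefighting efficiency.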
Prescribed burnings are one tool that can help prevent major wildfires. (cc) US Bureau of Land Management via Flickr
Once we had calibrated the model so that it performed adequately over the historical period (using historical climate data), we used climate scenarios to produce future projections. Currently, we are working to further improve the accuracy of modeled annual burned areas by introducing additional regionally specific factors into the model. In a recent study published in the International Journal of Wildland Fire, we suggested improving the original model by modifying the fire probability function reflecting fuel moisture. This modification dramatically improves the accuracy of modeled burned areas for a range of European countries.
Despite some success in modeling annual burned areas in Europe, we still have difficulties predicting extreme fires, particularly in more arid and hence vulnerable regions such as Spain. However, we accept the challenge, because credible estimates of burned areas provide important information for assessing economic damages and CO2 emissions due to climate and human activities. Our research has the potential to help society recognize these risks and undertake preventive measures. It also delivers additional scientific value, since fire risks must be included in forest management models.
I would like to thank all the study co-authors for their valuable contributions and efficient collaboration.
Reference: Krasovskii, A., Khabarov, N., Migliavacca, M., Kraxner, F., & Obersteiner, M. (2016). Regional aspects of modelling burned areas in Europe. International Journal of Wildland Fire. http://dx.doi.org/10.1071/WF15012
Note: This article gives the views of the interviewee, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.
By Sergio Rinaldi, IIASA Evolution and Ecology Program and Politecnico di Milano, Italy
Is it possible to predict how love stories develop, progress, and end using mathematical models? I have studied this question over the past 20 years with a group of researchers at IIASA and at the Politecnico di Milano, and as we show in our new book Modeling Love Dynamics (World Scientific, 2016), the answer is yes. The emerging message is that prediction is possible if we can describe in formulas the way each individual reacts to the love and the appeal of the partner.
Consider a standard love story that develops like those described in a classical Hollywood movie such as Titanic. Such a story can be easily modeled if one considers reasonably appealing individuals whose reaction increases with the partner’s love – so-called secure individuals. Starting from the state of indifference at their first encounter, their feelings continuously grow and tend toward a positive plateau.
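A minimal version of such a model can be written as two coupled differential equations, one per partner: each feeling decays on its own (forgetting), grows with the partner’s love (the secure reaction), and is boosted by the partner’s appeal. The parameter values below are illustrative assumptions, not the ones used in the book:

```python
def simulate_secure_couple(steps=20000, dt=0.001):
    """Euler integration of a linear two-equation love model:
        dx1/dt = -a1*x1 + b1*x2 + c1*A2
        dx2/dt = -a2*x2 + b2*x1 + c2*A1
    (forgetting + reaction to the partner's love + reaction to appeal).
    With a1*a2 > b1*b2 the feelings grow from indifference to a plateau."""
    a1 = a2 = 1.0       # forgetting rates
    b1 = b2 = 0.5       # secure reaction: increasing in the partner's love
    c1A2 = c2A1 = 1.0   # reaction to the partner's appeal
    x1 = x2 = 0.0       # indifference at the first encounter
    for _ in range(steps):
        dx1 = -a1 * x1 + b1 * x2 + c1A2
        dx2 = -a2 * x2 + b2 * x1 + c2A1
        x1 += dt * dx1
        x2 += dt * dx2
    return x1, x2

x1, x2 = simulate_secure_couple()
# for these parameters the trajectory settles near the equilibrium x1 = x2 = 2
```

The plateau is the stable fixed point of the linear system; as long as mutual reinforcement (b1·b2) is weaker than forgetting (a1·a2), the feelings converge there from any nearby starting state.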
Mala Powers and José Ferrer in Cyrano de Bergerac, 1950. – Public Domain
Love stories become more intriguing when one individual is not particularly appealing, or even repellent, as in the fairy tale “Beauty and the Beast.” Indeed, in these cases there also exists a second romantic regime, which is negative and can therefore lead, in the long run, to marital dissolution. To avoid that trap, people who are not very charming, or believe themselves not to be, do all they can to look more attractive to their partner. On the first date, she wears her nicest dress and he shows up in his best-fitting T-shirt. After a while, however, the bluffing can stop, because the couple has entered the safe basin of attraction of the positive regime. Needless to say, the model also supports much more sophisticated behavioral strategies, like the one described by Edmond Rostand in “Cyrano de Bergerac,” a masterpiece of French romantic literature.
Not all individuals are secure. Indeed, some people react less and less strongly once the love of the partner exceeds a certain threshold. These individuals, often very keen on flirtation, are incapable of becoming one with their partner. The model shows that couples composed of insecure individuals tend, with almost no exception, toward an unbalanced romantic regime in which the more insecure partner is only marginally involved and is therefore prone to break off the relationship at the first opportunity. This is why, after just 20 minutes of the very long “Gone with the Wind,” when one realizes that Scarlett and Rhett are both insecure, the model can already predict the end of the film, where he leaves her with the lapidary “Frankly, my dear, I don’t give a damn.” The same conclusion is expected if only one of the two individuals is insecure. This explains the numerous failures in the romantic lives of some individuals, like the beautiful star Liz Taylor, who is described as very insecure in all her biographies and indeed went through eight marriages.
Clark Gable and Vivien Leigh in Gone with the Wind, 1939 – MGM Pictures | Public Domain
Mathematical models can also be used to interpret more complex romantic behaviors. Particularly important is the case of individuals who overestimate the appeal of their partners when they are more in love with them (like parents who have a biased view of the beauty of their own kids). Interestingly, if insecurity is also present, biased couples can have romantic regimes characterized by recurrent ups and downs. In other words, the theory says that bias and insecurity are an explosive mix that triggers turbulence in the life of a couple.
In the second part of the book we focus on the effects of the social environment and on the consequences of extra-emotional compartments. In this context, our analysis of the 20-year-long relationship between Laura and the famous Italian poet Francis Petrarch shows that poetic inspiration is an important destabilizing factor, responsible for transforming a quiet relationship into a turbulent one.
Finally, we studied triangular relationships, with emphasis on the effects of conflict and jealousy. In all these cases the dynamics of the feelings can be very wild, to the point of being chaotic and, hence, unpredictable. When this occurs, the life of the couple becomes unsustainable, because painful periods of crisis can start at virtually any moment: a heavy, permanent stress. The model can thus explain why such relationships are often interrupted, sometimes even tragically, as in the famous film by François Truffaut, “Jules et Jim,” where Kathe’s suicide is perceived as a real relief.
As an environmental economist working on the economic valuation and optimisation of water use, I found the academy very interesting. Water management is a dynamic process and requires bringing together perspectives and expertise from different disciplines. Applying systems analysis enables us to combine aspects from various domains and come up with models that identify nonlinearities and project regime shifts and tipping points in the management of water and other natural resources. Such projects require interdisciplinary collaboration and communicable results to inform policy. Scientists need to translate their results into a language accessible to policymakers so that society can pick up on and capitalize on their research efforts. The MSA 2015 provided me with the training needed to go deeper into different modelling methodologies and to learn the concepts and principles of science for policy first-hand from IIASA scientists.
The reading list sent before the course gave me the impression that I would probably be the only environmental economist amongst a crowd of mathematical modellers. However, on arriving in Moscow, I found that the MSA 2015 participants came from a broad range of backgrounds and countries, and were at different stages of their careers in academia or policy. We all came to learn about and discuss the natural resource constraints to infinite economic growth on a finite planet.
During lectures, the theoretical foundations of different mathematical approaches such as dynamical systems theory, optimal control theory and game theory were presented by leading scientists, such as Michael Ghil. Fundamentals of addressing challenges of natural resource management and comparing contemporary models of economic growth were also covered as central themes.
The course addressed issues related to ecosystem services, public goods, inter-generational and international fairness, and public and common-pool resource dynamics in the face of economic growth and resource constraints. The training underlined the feedbacks between institutional dynamics and resource dynamics in complex social-ecological systems, and the need for interdisciplinary and policy-relevant research – an important take-home message for next-generation scientists.
Photos by M. Nazli Koseoglu
What makes the MSA so special? Apart from lectures, we had tutorials, a group project, poster and project presentation sessions, as well as interesting talks on IIASA activities by Margaret Goud-Collins and Elena Rovenskaya, and an inspiring session on the importance of finding the right mentor for a successful career by Prof Nøstbakken. The MSA 2015 program had a good balance of theory and practice, which encouraged participants to be proactive and engaged.
I particularly liked the poster session. We presented our ongoing projects and received feedback from the lecturers and other participants. It was great to get comments and perspectives I had never thought of, and tips from senior researchers. In the final days of the academy we were assigned a group project on Arctic systems, which allowed us to put what we had learned in the lectures into practice and apply important topics outside our exact fields of study; in my case, these topics were petroleum economics and Arctic futures. I found the multidisciplinary group work to be a great exercise for the development of my current study.
Attending the MSA 2015 provided useful training, both theoretical and practical, for better understanding systems analysis approaches. The host institution and organizing committee at Lomonosov Moscow State University provided impeccable hospitality, and the setting, in a landmark building in a landmark city, was a great perk. I received very constructive feedback and made good connections around the world. I would recommend that all early-career researchers in relevant fields take this great opportunity next summer!
By Matthias Wildemeersch, IIASA Advanced Systems Analysis and Ecosystems Services and Management Programs
FotoQuest Austria is a citizen science campaign initiated by the IIASA Ecosystems Services & Management Program that aims to involve the general public in mapping land use in Austria. Understanding the evolution of urban sprawl is important for estimating the risk of flooding, while the preservation of wetlands has important implications for climate change.
But how can we engage people in environmental monitoring, in particular when they are growing increasingly resistant to traditional forms of advertising? Viral marketing makes use of social networks to spread messages, and takes advantage of the trust that we have in the recommendation coming from a friend rather than from a stranger or a company.
Network science and the formal description of spreading phenomena can shed light on the propagation of messages through communities and can be applied to inform and design viral marketing campaigns.
Network science is a multi-disciplinary field of research that draws on graph theory, statistical mechanics, inference, and other theories to study the behavior of agents in various networks. The spreading phenomena in viral marketing show similarities with well-studied spreading processes over biological, social, physical, and financial networks. For instance, we can think of epidemics, which are well understood and allow for the design of optimal strategies to contain viruses. Another example is opinion dynamics, which has received renewed research attention in recent years in the context of social media. In contrast to diseases or computer viruses, which we aim to contain and stop, the goal of viral marketing is to spread widely, reaching the largest possible fraction of a community.
What makes viral marketing unique? Some aspects of viral marketing are very different from what we see in other spreading phenomena. First of all, many platforms can be used to spread information at the same time, and the interaction between these platforms is not always transparent. Human psychology is a crucial factor in social networks, as repeated interaction and saturation can decrease the willingness to further spread viral content. Marketing campaigns have a limited budget, so it is important to understand how incentives can be used and how efficient they are. This also means it is essential to find the group of most influential people who can serve as seeds for the viral campaign.
Network science has addressed all of these questions to a great extent, mostly under the assumption of full knowledge of the connections between the agents and their influence. Currently, so-called multiplexes are an active research field that studies the behavior of multi-layer networks. This research unveils the relationships between the dynamics of viral marketing, the connection pattern, and the strength of coupling between the network layers. Although viral spreading may be unachievable in a single layer, for example a social network like Facebook, the critical threshold may be exceeded by joining different platforms. Within a given platform, similar people can be clustered using community detection algorithms. Once the communities are identified, influence maximization algorithms can select the individuals that maximize the spread of viral content. Although this discrete optimization problem is computationally difficult—or NP-hard—mathematicians have proposed algorithms that can efficiently predict whom to target to give a campaign the best chance of going viral. On top of that, optimal pricing strategies have been developed to reward recommenders.
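As a rough illustration of that last point, here is a toy sketch of the classic greedy heuristic for influence maximization under an independent cascade model (this is a generic textbook-style approach, not the algorithm of any particular study; the graph, probabilities, and function names are assumptions). The heuristic repeatedly adds the candidate seed whose inclusion yields the largest Monte-Carlo-estimated spread:

```python
import random

def simulate_cascade(graph, seeds, p, rng):
    """Independent cascade: each newly activated node gets one chance
    to activate each neighbor with probability p. Returns spread size."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        newly_active = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    newly_active.append(v)
        frontier = newly_active
    return len(active)

def greedy_seeds(graph, k, p=0.2, trials=200, seed=0):
    """Greedily pick k seed nodes, each time adding the node whose
    inclusion maximizes the Monte-Carlo estimate of expected spread."""
    rng = random.Random(seed)
    chosen = []
    for _ in range(k):
        best, best_spread = None, -1.0
        for v in graph:
            if v in chosen:
                continue
            spread = sum(simulate_cascade(graph, chosen + [v], p, rng)
                         for _ in range(trials)) / trials
            if spread > best_spread:
                best, best_spread = v, spread
        chosen.append(best)
    return chosen

# Toy network: a hub (node 0) plus a disconnected pair (5, 6).
graph = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0], 5: [6], 6: [5]}
seeds = greedy_seeds(graph, k=2)  # the hub is picked first
```

The greedy approach works well here because expected spread is a submodular set function, which is what guarantees the (1 − 1/e) approximation bound associated with this class of algorithms.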
Although the literature is extensive, the results are often theoretical and involve mathematically complex models and algorithms. Considering that usually only partial information on the network is available, it is not straightforward to bring this knowledge back to a practical marketing campaign. Researchers in this field are therefore trying to bridge the gap between theoretical results and practical problems. The generic, powerful methods of network science are sufficiently versatile to capture the specifics of real-world applications, and can provide guidelines of great value for the design of heuristic methods in marketing strategies.
Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.
By Dan Jessie, IIASA Research Scholar, Advanced Systems Analysis Program
As policymakers turn to the scientific community to inform their decisions on topics such as climate change, public health, and energy policy, scientists and mathematicians face the challenge of providing reliable information regarding trade-offs and outcomes for various courses of action. To generate this information, scientists use a variety of complex models and methods. However, how can we know whether the output of these models is valid?
This question was the focus of a recent conference I attended, arranged by IIASA Council Chair Donald G. Saari and the Institute for Mathematical Behavioral Sciences at the University of California, Irvine. The conference featured a number of talks by leading mathematicians and scientists who research complex systems, including Carl Simon, the founding Director of the University of Michigan’s Center for the Study of Complex Systems, and Simon Levin, Director of the Center for BioComplexity at Princeton University. All talks focused on answering the question, “Validation. What is it?”
To get a feel for how difficult this topic is, consider that during the lunch discussions, each speaker professed to know less than everybody else! In spite of this self-claimed ignorance, each talk presented challenging new ideas regarding both specifics of how validation can be carried out for a given model, as well as formulations of general guidelines for what is necessary for validation.
For example, one talk discussed the necessity of understanding the connecting information between the pieces of a system. While it may seem obvious that, to understand a system built from many different components, one needs to understand both the pieces and how the pieces fit together, this talk contained a surprising twist: oftentimes, the methodology we use to model a problem unknowingly ignores this connecting information. By using examples from a variety of fields, such as social choice, nanotechnology, and astrophysics, the speaker showed how many current research problems can be understood in this light. This talk presented a big challenge to the research community to develop the appropriate tools for building valid models of complex systems.
Overall, the atmosphere of the conference was one of debate, and it seemed that no two speakers agreed completely on what validation required, or even meant. Recurring questions included: How closely does a model need to mirror reality? How do we assess predictions, given that every model fails in some of them? What role do funding agencies and peer review play in validation? The arguments generated by the talks weren’t limited to the conference schedule, either, and carried into the dinners and beyond.
I left the conference with a sense of excitement at seeing so many new ideas that challenge the current methods and models. This is still a new and growing topic, but one where advances will have wide-ranging impacts in terms of how we approach and answer scientific questions.
IIASA Council Chair Don Saari: Validation: What is it?
Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.