The hidden impacts of species extinction

by Melina Filzinger, IIASA Science Communication Fellow

Ecosystems worldwide are being altered by human influence, often leading to the extinction of species, for example through climate change or the loss of natural habitat. But it doesn’t stop there: because the different species in an ecosystem feed on each other and are thereby interconnected, the loss of one species can lead to the extinction of others and may even destabilize the whole system. “In nature, everything is connected in a complex way, so at first glance you cannot be sure what will happen if one species disappears from an ecosystem,” says IIASA postdoc Mateusz Iskrzyński.

This is why the IIASA Evolution and Ecology (EEP) and Advanced Systems Analysis (ASA) programs are employing food-web modeling to find out which properties make ecosystems particularly vulnerable to species extinction. Food webs are stylized networks that represent the feeding relationships in an ecosystem. Their nodes represent species or groups of species, and their links indicate how biomass cycles through the system by means of eating and being eaten. “This type of network analysis has a surprising power to uncover general patterns in complex relationships,” explains Iskrzyński.

Every one of these food webs is the result of years of intense research that involves both data collection to assess the abundance of species in an area, and reconstructing the links of the network from existing knowledge about the diets of different species. The largest of the currently available webs contain about 100 nodes and 1,000 weighted links. Here, “weighted” means that each link is characterized by the biomass flow between the nodes it connects.

Usually, food webs are published and considered individually, but recently efforts have been stepped up to collect them and analyze them together. Now, the ASA and EEP programs have collected 220 food webs from all over the world in the largest database assembled so far. This involved unifying the parametrization of the data and reconstructing missing links.

The researchers use this database to find out how different ecosystems react to the ongoing human-made species loss, and which ones are most at risk. This is done by removing a single node from a food web, which corresponds to the extinction of one group of species, and modeling how the populations of the remaining species change as a result. The main question is how these changes in the food web depend on its structural properties, like its size and the degree of connectedness between the nodes.
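The basic idea can be pictured with a small, made-up example: store the food web as a weighted graph, delete one node, and see which of the remaining groups lose part of their food supply. The Python sketch below is purely illustrative; the species and flow values are invented, and the actual IIASA analysis models population dynamics on much larger webs rather than the simple diet-loss measure used here.

```python
# Toy food web: each link (prey -> predator) carries a biomass flow.
# Species and values are invented for illustration only.
food_web = {
    ("phytoplankton", "zooplankton"): 120.0,
    ("zooplankton", "small_fish"): 45.0,
    ("small_fish", "large_fish"): 12.0,
    ("small_fish", "seabirds"): 3.0,
    ("large_fish", "seabirds"): 1.5,
}

def diet_loss_after_extinction(web, extinct_group):
    """Fraction of incoming biomass flow each consumer loses if one group disappears."""
    total_in, lost_in = {}, {}
    for (prey, predator), flow in web.items():
        total_in[predator] = total_in.get(predator, 0.0) + flow
        if prey == extinct_group:
            lost_in[predator] = lost_in.get(predator, 0.0) + flow
    return {p: lost_in[p] / total_in[p] for p in lost_in}

print(diet_loss_after_extinction(food_web, "small_fish"))
# -> large_fish lose 100% of their food supply, seabirds about 67%
```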

From the preliminary results obtained so far, it seems that small and highly connected food webs are particularly vulnerable to the indirect effects of species extinction. This means that in these webs the extinction of one species is especially likely to lead to large disruptive change affecting many other organisms. “Understanding the factors that cause such high vulnerability is crucial for the sustainable management and conservation of ecosystems,” says Iskrzyński. He hopes that this research will encourage more, and more precise, empirical ecosystem studies, as reliable data is still missing from many places in the world.

As a next step, the scientists in the two programs plan to investigate which factors determine the impact of the disappearance of a particular group of organisms. They are also going to make the software they use for their simulations publicly available, together with the database they developed.

Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.

Estimating risk across Africa

by Melina Filzinger, IIASA Science Communication Fellow

Having just finished tenth grade, Lillian Petersen from New Mexico, USA is currently spending the summer at IIASA, working with researchers from both the Ecosystems Services and Management (ESM), and Risk and Resilience (RISK) programs on developing risk models for all African countries.

At a talk Petersen gave at the Los Alamos Nature Center/Pajarito Environmental Education Center, her method for predicting food shortages in Africa from satellite images caught the attention of Molly Jahn from the University of Wisconsin-Madison. Jahn, who is collaborating with the ESM and RISK programs at IIASA, was so impressed with Petersen’s work that she added her to her research group and connected her to IIASA researchers for a joint project.

One of the indicators used to estimate poverty in Nigeria. © Lillian Petersen | IIASA

Knowing which areas are at risk for disasters like conflict, disease outbreak, or famine is often an important first step for preventing their occurrence. In developed countries, there is already a lot of work being done to estimate these risks. In developing countries, however, a lack of data often hinders risk modeling, even though these countries are often most at risk for disasters.

Many humanitarian crises, like famine, are closely connected to poverty. However, high-resolution poverty estimates are only available for a few African countries. This is why Petersen and her colleagues are developing methods to obtain those poverty estimates for all of Africa using freely available data, like maps showing major roads and cities, as well as high-resolution satellite images. Information about poverty in a certain region can be extracted from this data by considering several indicators. For example, areas that are close to major roads or cities, or those that have a large amount of lighting at night, meaning that electricity is available, are usually less poor than those without these features. The researchers are also analyzing the trading potential with neighboring countries, the land cover type, and the distance to major shipping routes, such as waterways.

As no single one of these indicators can perfectly predict poverty, the scientists combine them. They “train” their model using the countries for which poverty data exists: a comparison of the model’s output with the real data reveals which combination of indicators gives a reliable estimate of poverty. Following this, they plan to apply that knowledge to predict poverty accurately and with high spatial resolution over the entire African continent.
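In spirit, this training step resembles fitting a regression on the regions where poverty data already exists and then applying the fitted weights everywhere else. The sketch below is a simplification with invented indicator values, not the team’s actual model.

```python
import numpy as np

# Hypothetical training data for regions with known poverty rates.
# Indicator columns: distance to major road (km), night-light intensity, distance to city (km).
X_train = np.array([
    [ 2.0, 55.0,  10.0],
    [40.0,  5.0, 120.0],
    [15.0, 20.0,  60.0],
    [60.0,  2.0, 200.0],
])
y_train = np.array([0.12, 0.58, 0.33, 0.71])  # observed poverty rates

# Fit a simple linear combination of the indicators (ordinary least squares).
A = np.column_stack([X_train, np.ones(len(X_train))])   # add an intercept column
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

# Apply the fitted weights to a region without survey data.
x_new = np.array([25.0, 10.0, 90.0, 1.0])
print(float(x_new @ coef))   # estimated poverty rate for the new region
```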

Poverty data for Nigeria in 2010 (left) and poverty estimates based on five different indicators (right). © Lillian Petersen | IIASA

Once these estimates exist, Petersen and her colleagues will apply risk models to find out which areas are particularly vulnerable to disease outbreaks, famine, and conflicts. “I hope that this research will inform policymakers about which populations are most at risk for humanitarian crises, so that they can target these populations systematically in aid programs,” says Petersen, adding that preventing a disaster is generally cheaper than dealing with its aftermath.

The skills Petersen is using for her research are largely self-taught. After learning computer programming with the help of a book when she was in fifth grade, Petersen conducted her first research project, on the effect of El Niño on winter weather in the US, when she was in seventh grade. “It was a small project, but I was pretty excited to obtain scientific results from raw data,” she says. Since this first success, she has been building up her skills every year by competing at science fairs across the US with her research projects.

Her internship at IIASA gives Petersen access to the resources she needs to take her research to the next level. “Getting feedback from some of the top scientists in the field here at IIASA is definitely improving my work,” she says. Petersen is hoping to publish a paper about her project next year, and wants to major in applied mathematics after she finishes high school.

Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.

Raising the game: A new approach to understanding decision making

by Melina Filzinger, IIASA Science Communication Fellow

Strategic board games are staple entertainment for families all over the world, but what many do not know is that games can also be a valuable research tool. As her project for the Young Scientists Summer Program (YSSP), Sara Turner is piloting an experiment that uses a game called the Forest Game, developed by IIASA and the Centre for Systems Solutions, to find out how policy decisions are made and how they change over time. “Games let you abstract from the specifics of a real-world case, but are more human-centric than, for example, computer simulations,” says Turner.

Interface of the Forest Game, © IIASA

In the Forest Game, a group of five to ten players is asked to make decisions about the management of a forest together. Harvesting trees yields returns for the players, while harvesting too many of them might destroy the forest or increase the risk of flooding. There are some uncertainties in the game – for example, the players do not know exactly how resilient the forest is. The goal of the research project is to run multiple iterations of the game with different players and starting conditions, and trace how group discussions and the resulting decisions change over time. This helps to generate hypotheses about the ways in which individuals interact to generate policy outcomes. Each game takes about an hour to play.

Even though the Forest Game deals with forest management, this is only one example of a broader class of decision-making dilemmas: when a resource is limited, and it is costly to prevent access to it, people will tend to over-exploit the resource. This in turn leads to a wide range of problems, from over-fishing to air pollution. Although games cannot capture the complexity of real situations, they can still help us understand the core dynamics of the problem and develop ideas and strategies that are relevant to solving it. “The game is not designed to be directly applicable to real life, but it helps to come up with hypotheses that you can then compare to real-life cases,” explains Turner.
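To see the dilemma in numbers, here is a deliberately simple simulation with invented parameters (not the Forest Game’s actual rules): a forest regrows logistically, a group that harvests modestly keeps it alive, and a greedy group collapses it within a few rounds.

```python
# Toy common-pool resource: a forest that regrows while a group harvests it each round.
# All parameters are invented; the real game adds uncertainty, flood risk, and group discussion.
def simulate(harvest_per_player, players=5, stock=100.0, capacity=100.0, growth=0.15, rounds=20):
    total_return = 0.0
    for _ in range(rounds):
        harvest = min(stock, harvest_per_player * players)
        total_return += harvest
        stock -= harvest
        stock += growth * stock * (1 - stock / capacity)   # logistic regrowth
    return total_return, stock

print(simulate(harvest_per_player=0.5))  # modest harvest: forest stays near its capacity
print(simulate(harvest_per_player=5.0))  # greedy harvest: forest collapses after a few rounds
```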

Questions about the sustainable management of resources have been studied for decades, but not a lot is known about the role values play in shaping group decision making and the stability of the implemented policies. To investigate this, each participant is asked to fill out a short ten-minute survey assessing their core values and beliefs, after which they are put into a group with people who either have a very similar or very different worldview from them. “It is really interesting to put a person in a decision-making context with other people and get some insight into how they work through that problem,” says Turner.

© Sara Turner

For example, if you are a person who strongly values equality, in the game you might be likely to argue for a policy in which all participants receive the same returns, regardless of how many trees each player chooses to harvest. If many players in the group share your belief, that policy might be more likely to be implemented than in a very diverse group.

Another interesting question whenever you run a game for research purposes is, “Who are the right players?” Some games are targeted at real-world policymakers, but often games can also be educational for the broader public. “People learn a lot during games, because of the way that information is processed and experienced,” says Turner. That is why many participants, although they might not see a connection between the game and their life at first, find themselves relying on the insights they gained while playing when faced with similar situations in the future.

In this case, the goal is to study group decision-making processes in general, so the details of who is playing are not particularly important. However, to obtain groups of players with heterogeneous worldviews, a high degree of diversity is preferable.

While the game has previously mainly been played by YSSP participants and students of the University of Vienna, Turner is currently trying to recruit a more diverse set of players from both within and outside of IIASA. “It would be ideal to have a pool of participants who come from a wide variety of educational and cultural backgrounds,” she says.

If you are interested in participating in the Forest Game, you can email Sara Turner at turner@iiasa.ac.at.

Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.

Using Twitter data for demographic research

By Dilek Yildiz, Wittgenstein Center for Demography and Global Human Capital (IIASA, VID/ÖAW and WU), Vienna Institute of Demography, Austrian Academy of Sciences, International Institute for Applied Systems Analysis

Social media offers a promising source of data for social science research that could provide insights into attitudes, behavior, social linkages, and interactions between individuals. As of the third quarter of 2017, Twitter alone had on average 330 million active users per month. The magnitude and richness of this data attract social scientists working in many different fields, with topics ranging from extracting quantitative measures such as migration and unemployment, to more qualitative work such as looking at the footprint of the second demographic transition (i.e., the shift from high to low fertility) and the gender revolution. Although the use of social media data for scientific research has increased rapidly in recent years, several questions remain unanswered. In a recent publication with Jo Munson, Agnese Vitali, and Ramine Tinati from the University of Southampton, and Jennifer Holland from Erasmus University Rotterdam, we investigated to what extent findings obtained with social media data are generalizable to broader populations, and what constitutes best practice for estimating demographic information from Twitter data.

A key issue when using this data source is that a sample selected from a social media platform differs from a sample used in standard statistical analysis. Usually, a sample is randomly selected according to a survey design so that information gathered from this sample can be used to make inferences about a general population (e.g., people living in Austria). However, despite the huge number of users, the information gathered from Twitter and the estimates produced are subject to bias due to its non-random, non-representative nature. Consistent with previous research conducted in the United States, we found that Twitter users are more likely than the general population to be young and male, and that Twitter penetration is highest in urban areas. In addition, the demographic characteristics of users, such as age and gender, are not always readily available. Consequently, despite its potential, deriving the demographic characteristics of social media users and dealing with the non-random, non-representative populations from which they are drawn represent challenges for social scientists.

Although previous research has explored methods for conducting demographic research using non-representative internet data, few studies mention or account for the bias and measurement error inherent in social media data. To fill this gap, we investigated best practice for estimating demographic information from Twitter users, and then attempted to reduce selection bias by calibrating the non-representative sample of Twitter users with a more reliable source.

Exemplar of CrowdFlower task © Jo Munson.

We gathered information from 979,992 geo-located Tweets sent by 22,356 unique users in South-East England and estimated their demographic characteristics using the crowd-sourcing platform CrowdFlower and the image-recognition software Face++. Our results show that CrowdFlower estimates age more accurately than Face++, while both tools are highly reliable for estimating the sex of Twitter users.

To evaluate and reduce the selection bias, we ran a series of models and calibrated the non-representative sample of Twitter users with mid-year population estimates for South-East England from the UK Office for National Statistics. We then corrected the bias in age-, sex-, and location-specific population counts. This bias correction exercise shows promise for unbiased inference when using social media data and can be extended to further reduce selection bias by including other sociodemographic variables of social media users, such as ethnicity. By extending the modeling framework slightly to include an additional variable that is only available through social media data, for example a variable of interest extracted from Tweets via text mining, it is also possible to make unbiased inferences for broader populations. Lastly, our methodology lends itself to the calculation of sample weights for Twitter users or Tweets. This means that a Twitter sample can be treated as an individual-level dataset for micro-level analysis (e.g., for measuring associations between variables obtained from Twitter data).
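The models in the paper are more sophisticated, but the core logic of calibrating a non-representative sample can be sketched as simple post-stratification: compare the sample’s age-sex composition with official population counts and weight each group by the ratio of the two shares. All counts below are invented for illustration.

```python
# Hypothetical Twitter sample counts and official population counts by age group and sex.
twitter_sample = {
    ("15-24", "male"): 4200, ("15-24", "female"): 3100,
    ("25-34", "male"): 5200, ("25-34", "female"): 3900,
    ("35-54", "male"): 3600, ("35-54", "female"): 2350,
}
population = {
    ("15-24", "male"): 530_000, ("15-24", "female"): 510_000,
    ("25-34", "male"): 560_000, ("25-34", "female"): 555_000,
    ("35-54", "male"): 1_180_000, ("35-54", "female"): 1_200_000,
}

n_sample = sum(twitter_sample.values())
n_pop = sum(population.values())

# Weight for each group = (population share) / (sample share);
# over-represented groups (e.g., young men) receive weights below 1.
weights = {
    group: (population[group] / n_pop) / (twitter_sample[group] / n_sample)
    for group in twitter_sample
}
for group, w in sorted(weights.items()):
    print(group, round(w, 2))
```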

Reference:

Yildiz, D., Munson, J., Vitali, A., Tinati, R. and Holland, J.A. (2017). Using Twitter data for demographic research, Demographic Research, 37 (46): 1477-1514. doi: 10.4054/DemRes.2017.37.46

Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.

Intelligent cooperation

By Valeria Javalera Rincón, IIASA CONACYT Postdoctoral Fellow in the Ecosystems Services and Management and Advanced Systems Analysis programs.

What is more important: water, energy, or food?

If you work in the water, energy, or agriculture sector, we can guess what your answer might be! But if you are a policy or decision maker trying to balance all three, then you know that it is getting more and more difficult to meet the growing demand for water, energy, and food with the natural resources available. The need for this balance was confirmed by the 17 Sustainable Development Goals, agreed by 193 countries, and by the Paris climate agreement. But how to achieve it? Intelligent cooperation is the key.

The thing is that water, energy, and food are all related in such a way that they are reliant on each other for production or distribution. This is the so-called Water-Energy-Food nexus. In many cases, you need water to produce energy, you need energy to pump water, and you need water and energy to produce, distribute, and conserve food.

Many scientists have tried to relate or link models for water, agriculture, land, and energy to study these synergistic relationships. So far, this has generally been done in two ways. One is to integrate models with “hard linkages,” like this:

© Daniel Javalera

In the picture there are six models (say, water, land use, hydro energy, gas, coal, and food production models) that are integrated into just one. The resulting integrated model preserves the relationships but is complex, and in order to make it work with our current computing power you often have to sacrifice detail.

Another way is to link them using so-called “soft linkages,” where the output of one model is the input of the next one, like this:

© Daniel Javalera

In the picture, each person is a model and the input is the amount of water left. These models all refer to a common resource (the water) and are connected using “soft linkages.” These linkages are based on sequential interaction, so there is no feedback, and no real synergy.

The intelligent linker agent

But what if we could keep the relations and synergies between the models? It would mean much more accurate findings and more helpful policy advice. Well, now we can. The secret is to link the models through an intelligent linker agent.

I developed a methodology in which an intelligent linker agent is used as a “negotiator” between models that can communicate with each other. This negotiator applies a machine-learning algorithm that gives it the capability to learn from the interactions with the models. Through these interactions, the intelligent linker can advise on globally optimal actions.

The knowledge of the intelligent linker is based on past experience and also on hypothetical future actions that are evaluated in a training process. This methodology has been used to link drinking water networks, such as Barcelona’s drinking water network.
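The published architecture combines model predictive control with agent-based reinforcement learning (see the references below). As a rough, purely hypothetical illustration of the learning idea only, a negotiator can keep a table of values for a handful of possible resource splits and update those values from the rewards the linked models report back; a bandit-style sketch is shown here, with all actions and rewards invented for the example.

```python
import random

# Hypothetical toy: a linker agent splits a shared water volume between two
# sector models and learns from the reward they report back. This is a
# simplified value-learning sketch, not the RL-MPC architecture itself.
actions = [0.2, 0.4, 0.5, 0.6, 0.8]      # share of water allocated to sector A
values = {a: 0.0 for a in actions}       # learned value estimate per action
alpha, epsilon = 0.1, 0.2                # learning rate, exploration rate

def reward(share_a):
    # Stand-in for querying both sector models about how well this split worked.
    return -(share_a - 0.6) ** 2         # pretend a 60/40 split is the best compromise

for step in range(5000):
    if random.random() < epsilon:
        a = random.choice(actions)       # explore a new split
    else:
        a = max(values, key=values.get)  # exploit the best split found so far
    values[a] += alpha * (reward(a) - values[a])   # incremental value update

print(max(values, key=values.get))       # converges to 0.6 in this toy setting
```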

When I came to IIASA, I was asked to apply this approach to optimize trading between cities in the Shanxi region of China. I used a set of previously developed models, which aimed to distribute the water and land available to each city in order to produce food (eight types of crops) and coal for energy. The intelligent linker agent optimizes trading between cities in order to satisfy demand at the lowest cost for each city.

The purpose of this exercise was to compare the solutions with those from “hard linkages” – like those in the first picture. We found that the intelligent linker is flexible enough to find the optimal solution to questions such as: How much of each of these products should each city export or import to satisfy global demand at the lowest overall economic and ecological cost? What actions are optimal when total production is insufficient to meet total demand? Under what conditions is it preferable to stop imports and exports when production is insufficient to supply the demand of each city?

The answers to these questions can be calculated through the interaction of each city’s model with the intelligent linker agent, which means that no major changes to the models of each city were needed. We also found that, under the same conditions, the solutions obtained using the intelligent linker agent were in agreement with those found when hard linking was used.

My next challenge is to build a prototype of a “distributed computer platform,” which will allow us to link models on different computers in different parts of the world—so that we in Austria could link to a model built by colleagues in Brazil, for example.  I also want to link models of different sectors and regions of the globe, in order to prove that intelligent cooperation is the key to improving global welfare.

References

Xu X, Gao J, Cao G-Y, Ermoliev Y, Ermolieva T, Kryazhimskiy AV, & Rovenskaya E (2015). Modeling water-energy-food nexus for planning energy and agriculture developments: case study of coal mining industry in Shanxi province, China. IIASA Interim Report IR-15-020. IIASA, Laxenburg, Austria.

Javalera V, Morcego B, & Puig V (2010). Negotiation and Learning in Distributed MPC of Large Scale Systems. Proceedings of the 2010 American Control Conference, Baltimore, MD, pp. 3168-3173. doi: 10.1109/ACC.2010.5530986

Javalera V, Morcego B, & Puig V (2010). Distributed MPC for Large Scale Systems using Agent-based Reinforcement Learning. IFAC Proceedings Volumes, 43 (8): 597-602. ISSN 1474-6670. doi: 10.3182/20100712-3-FR-2020.00097

Morcego B, Javalera V, Puig V, & Vito R (2014). Distributed MPC Using Reinforcement Learning Based Negotiation: Application to Large Scale Systems. In: Maestre J., Negenborn R. (eds) Distributed Model Predictive Control Made Easy. Intelligent Systems, Control and automation: Science and Engineering, vol 69. Springer, Dordrecht

Javalera Rincón V (2016). Distributed large scale systems: a multi-agent RL-MPC architecture. Doctoral thesis, Universitat Politècnica de Catalunya, Institut d’Organització i Control de Sistemes Industrials. http://upcommons.upc.edu/handle/2117/96332

Note: This article gives the views of the author and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.

New open-source software supports land-cover monitoring

By Victor Maus, IIASA Ecosystems Services and Management Program

Nowadays, satellite images are an abundant source of data that we can use to get information about our planet and its changes. Satellite images can, for example, help us detect an approaching storm, measure the expansion of a city, identify deforested areas, or estimate how crop areas change over time. Usually, we are interested in extracting information from large areas, for example, deforestation in the Amazon Rainforest (5.5 million km², around 15 times the area of Germany). It would be challenging for us to monitor and map such vast areas without combining satellite images with automated and semi-automated computer programs.

Aerial view of the Amazon Rainforest, near Manaus, Brazil. Monitoring deforestation in the Amazon is difficult because the area is massive and remote. ©Neil Palmer | CIAT

To address this problem, I developed — along with my colleagues Gilberto Camara from the Brazilian National Institute for Space Research, and Marius Appel and Edzer Pebesma from the University of Münster, Germany — a new open-source software package to extract information about land-cover changes from satellite images. The tool maps different crop types (e.g., soybean, maize, and wheat), forests, and grassland, and can be used to support land-use monitoring and planning.

Our software, called dtwSat, is open source and can be freely installed and used for academic and commercial purposes. It builds on other graphical and statistical open-source extensions of the statistical software R. In addition, our article, in press in the Journal of Statistical Software, is completely reproducible and provides a step-by-step example of how to use the tool to produce land-cover maps. Given that we have public access to an extensive amount of satellite imagery, we also benefit greatly from tools that are openly available, reproducible, and comparable; these, in particular, can contribute to rapid scientific development.

The software dtwSat is based on a method widely used for speech recognition called Dynamic Time Warping (DTW). Instead of spoken words, we adapted DTW to identify ‘phenological cycles’ of the vegetation. These encompass the plants’ life cycle events, such as how deciduous trees lose their leaves in the fall. The software compares a set of phenological cycles of the vegetation measured from satellite images (just like a dictionary of spoken words) with all pixels in successive satellite images, taken at different times. After comparing the satellite time series with all phenological cycles in the dictionary, dtwSat builds a sequence of land-cover maps according to similarity to the phenological cycles.
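For readers curious about the core algorithm: classical DTW aligns two time series by allowing local stretching in time and accumulating the cheapest matching costs with dynamic programming. The Python sketch below shows this textbook version only; dtwSat itself is an R package that uses a time-weighted variant, and the toy vegetation-index profiles are invented.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D time series."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],       # stretch series a
                                 cost[i, j - 1],       # stretch series b
                                 cost[i - 1, j - 1])   # match the two points
    return cost[n, m]

# Toy vegetation-index profiles: a reference soybean cycle vs. an observed pixel.
soybean_pattern = np.array([0.2, 0.4, 0.7, 0.8, 0.6, 0.3])
pixel_series    = np.array([0.2, 0.3, 0.5, 0.7, 0.8, 0.5, 0.3])
print(dtw_distance(soybean_pattern, pixel_series))
# The reference pattern with the smallest distance to a pixel's time series gives its label.
```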

The series of maps produced by dtwSat allows for land-cover change monitoring and can help answer questions such as how much of the Amazon rainforest has been replaced with soy or grass for cattle grazing during the last decade. It could also help study the effects of policies and international agreements, such as Brazil’s Soy Moratorium, under which soybean traders agreed not to buy soy from areas of the Brazilian Amazon deforested after 2006. If soy farming cannot expand over areas deforested after 2006, it might expand to areas formerly used for cattle grazing that were deforested before 2006, forcing cattle farmers to open up new, more recently cleared areas. Therefore, besides monitoring changes, the land-cover information can help us better understand the direct and indirect drivers of deforestation and support new land-use policy.


Further info: dtwSat is distributed under the GPL (≥2) license. The software is available from the IIASA repository PURE pure.iiasa.ac.at/14514/. Precompiled binary available from CRAN at cran.r-project.org/web/packages/dtwSat/index.html

dtwSat development version available from GitHub at github.com/vwmaus/dtwSat

Reference:

Maus V, Camara G, Appel M, & Pebesma E (2017). dtwSat: Time-Weighted Dynamic Time Warping for Satellite Image Time Series Analysis in R. Journal of Statistical Software (In Press).

Maus, V, Camara, G, Cartaxo, R, Sanchez, A, Ramos, FM, & de Queiroz, GR (2016). A Time-Weighted Dynamic Time Warping Method for Land-Use and Land-Cover Mapping. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 9 (8): 3729–39.

Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.