The IIASA COVID-19 dashboard

By Tadeusz Bara-Slupski, Artificial Intelligence for Good initiative leader, Appsilon Data Science

Tadeusz Bara-Slupski discusses the Artificial Intelligence for Good initiative’s recent collaboration with IIASA to develop an interactive COVID-19 data visualization tool.

Number of hospital beds per 1000 population © IIASA

Public institutions rely on external data sources and analysis to guide policymaking and intervention. Through our AI for Good initiative, we support organizations that provide such inputs with our technical expertise. We were recently approached by IIASA to create a dashboard to visualize COVID-19 data. This builds on our previous collaboration, in which we delivered a decision-making tool for natural disaster risk planning in Madagascar. In this article, we provide an example of how to help policymakers navigate the ocean of available data with dashboards that turn these data into actionable information.

Data is useful information when it creates value…or saves lives

The current pandemic emergency has put an unprecedented strain on both public health services and policymaking bodies around the world. Government action has been constrained in many cases by limited access to equipment and personnel. Adequate policymaking can help to coordinate the emergency relief effort effectively, make better use of scarce resources, and prevent such shortages in the future. This, however, requires access to secure, timely, and accurate information.

Governments commission various public bodies and research institutes to provide such data both for planning and coordinating the response. For instance, in the UK, the government commissioned the National Health Service (NHS) to build a data platform to consolidate a number of data providers into one single source. However, for the data to be useful it must be presented in a way that is consistent with the demands of an emergency situation. Therefore, the NHS partnered with a number of tech companies to visualize the data in dashboards and to provide deeper insights. Raw data, regardless of its quality, is not useful information until it is understood in a way that creates value – or in this case informs action that could save lives.

IIASA approached us to support them in making their COVID-19 data and indicators more useful to policymakers. The institute’s research is used by policymakers around the world to make critical decisions. We appreciated the opportunity to use our skills to support their efforts by creating an interactive data visualization tool.

IIASA COVID-19 report and mapbook

Research indicates that while all segments of the population are vulnerable to the virus, not all countries are equally vulnerable at the same time. Therefore, there is a need for accurate socioeconomic and demographic data to inform the allocation of scarce resources between countries and even within countries.

IIASA responded to this need with a regularly updated website and data report: “COVID-19: Visualizing regional socioeconomic indicators for Europe”. The reader is introduced to a range of demographic, socioeconomic, and health-related indicators for European Union member countries and sub-regions in five categories:

  • Current COVID-19 trends – information about the number of cases and effectiveness of policy response measures
  • Demographic indicators – age, population density, migration
  • Economic indicators – GDP, income, share of workers who work from home
  • Health-related indicators – information about healthcare system capacity
  • Tourism – number of visitors, including foreign

The indicators and data were chosen for their value in assisting epidemiological analysis and balanced policy formulation. Policymakers often face the challenge of prioritizing pandemic mitigation efforts over long-term impacts like unemployment, production losses, and supply-chain disruptions. IIASA’s series of maps and graphs facilitates understanding of these impacts while maintaining the focus on containing the spread of the virus.

Our collaboration – a dashboard for policymakers

Having taken the first step to disseminate the data as information in the form of a mapbook, Asjad Naqvi decided to make these data even more accessible by turning the maps into an interactive and visually appealing tool.

IIASA had previously approached Appsilon Data Science with a data visualization project, in which we improved the features and design of Visualize, a decision support tool for policymakers in natural disaster risk management. Building on this experience, we set out to assist Naqvi with creating a dashboard to deliver the data to end users even faster.

The application allows users to browse a list of 32 indicators and visualize them on an interactive map. The list is not final: indicators are reviewed, added, and retired on a weekly basis.

White circles indicate the number of cases per 1 million citizens.

The application will continue to provide the latest and most relevant information for tracking regional performance in Europe, including in the post-pandemic phase:

The pandemic has had a disproportionate impact on women’s employment and has revealed some systemic inequalities.

Social distancing measures, for instance, have a large impact on sectors with high female employment rates. The closure of schools and daycare facilities particularly affects working mothers. Indicators such as female unemployment rate can inform appropriate remedial action in the post-COVID world and highlight regions of special concern like Castilla-La-Mancha in Spain.

Given the urgency of the pandemic emergency, we managed to develop and deploy this application within five days. We believe such partnerships between data science consultancies and research institutes can transform the way policymakers utilize data. We are looking forward to future collaborations with IIASA and other partners to help transform data into accessible and useful information.

This project was conducted as part of our Artificial Intelligence for Good initiative. The application is available to explore here.

Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.

Lifting COVID-19 restrictions: Visualizing real-time Twitter sentiments

By Santosh Karanam, .NET Full Stack Developer in the IIASA Ecosystems Services and Management Program

Santosh Karanam describes his efforts to visualize people’s reactions to the easing of COVID-19 restrictions in real time as they are expressed on Twitter.

© Ezthaiphoto

Who would have imagined at the beginning of 2020, when the United Nations Department of Economic and Social Affairs was still projecting global economic growth at 2.5%, that within a few months the same department would have to release a new briefing stating that the global economy is now projected to shrink by 0.9% in 2020 due to a pandemic? This is mainly due to sudden restrictions and disruptions in global supply chains and international trade. COVID-19 is already having a lasting impact on the global economy; nearly 100 countries have closed their national borders during the past month, and the movement of people and tourism flows have come to a screeching halt.

In some countries, the COVID-19 pandemic has peaked in terms of the number of new infections, however, many countries are yet to reach the peak. Countries that seem to have crossed the peak are looking for ways to lift restrictions gradually, while keeping an eye on infection rates to avoid a second wave of infections. These actions by governments are being watched closely by people around the globe and trigger various kinds of emotional reactions.

Visualizing Twitter reactions in real time

I was curious about the possibility of visualizing these reactions, or sentiments, on a real-time basis as we crawl through these unprecedented times of the COVID-19 pandemic. It led me to create a real-time dashboard to visualize sentiments about the lifting of pandemic restrictions expressed or evident on the social media platform Twitter.

Twitter has application programming interfaces (APIs) that enable developers to pull data from Twitter in a machine-readable format. This data is the same as the data shown when you open your Twitter account in either a browser or a mobile application and search for specific words. I decided to utilize this data using search keywords like “lifting lockdown” and “lifting restriction”, and to assign sentiment scores to tweets relating to these keywords using sentiment140.

Sentiment140 is a program created by computer science graduates from Stanford University that allows you to discover the sentiment of a brand, product, or topic on Twitter. It automatically classifies the sentiment of Twitter messages using classifiers built with machine learning algorithms, and provides transparency for the classification results of individual tweets. Twitter uses complex algorithms to return the results for keywords. These tweets are pulled continuously in real time and sent to the sentiment140 APIs, where they are assigned sentiment scores: 0 for negative, 2 for neutral, and 4 for positive.

Below is an example of this scoring:


  • Score 0 (Negative): “Why are people so eager to end lockdown and lift restrictions… for a second wave and then moan again… the mind boggles!!”
  • Score 2 (Neutral): “Iran begins lifting restrictions after brief coronavirus lockdown”
  • Score 4 (Positive): “Germany has now begun to lift restrictions to visit one another and open businesses soon because we actually listened and stayed at home. Germany has now been marked the 2nd Safest country during the pandemic”


From April 12th 2020 to April 21st 2020, a total of 208,220 tweets were scored and analyzed; this number grows daily as new tweets come in. The tweets are analyzed (sentiment scored) in real time and aggregated hourly. The above examples are taken from the analyzed tweets. For simplicity, and to have a holistic view of all relevant tweets, replies to tweets and retweets are also scored, as people may react days after the initial tweet. For this experiment, only English language tweets are considered.

The scores assigned are aggregated every hour, stored in cloud storage, and shown in the website dashboard. The dashboard shows the status of the current day’s scores and is updated every hour; it also shows the previous four days’ sentiment score results.
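The hourly aggregation step is simple to sketch. Below is a minimal, self-contained Python illustration of the idea; the real pipeline runs in Azure Stream Analytics, and the function and variable names here are hypothetical:

```python
from collections import Counter
from datetime import datetime

# Map sentiment140 polarity codes to labels (0/2/4, as described above).
LABELS = {0: "negative", 2: "neutral", 4: "positive"}

def aggregate_hourly(scored_tweets):
    """Bucket (timestamp, polarity) pairs into hourly sentiment counts.

    `scored_tweets` is an iterable of (ISO-8601 timestamp string,
    polarity code). Returns {hour_key: Counter({label: count})}.
    """
    buckets = {}
    for ts, polarity in scored_tweets:
        hour = datetime.fromisoformat(ts).strftime("%Y-%m-%d %H:00")
        buckets.setdefault(hour, Counter())[LABELS[polarity]] += 1
    return buckets

tweets = [
    ("2020-04-20T09:15:00", 0),
    ("2020-04-20T09:40:00", 2),
    ("2020-04-20T10:05:00", 4),
]
hourly = aggregate_hourly(tweets)
# hourly["2020-04-20 09:00"] == Counter({"negative": 1, "neutral": 1})
```

Each hourly bucket is what gets written to cloud storage and rendered by the dashboard.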


Visit the website and see the dashboard.

Trends so far:

I can see a trend where most of the tweets fall under neutral scores as we are in the early days of restrictions being lifted. Many people are concerned about whether the measures will work. As the days progress I expect the neutral scores to reduce and convert into either positive or negative scores. This all depends on how infection rates either rise or fall in the days to come. Ideally, if everything turns out as planned, the positive sentiments will grow, and negative and neutral sentiments will shrink.

The scored tweets are not country specific but are captured globally: less than 1-2% of tweets are geo-tagged, which would have left too little data per hour for a real-time experiment. Since very few countries have crossed the peak of the curve, the current results show that neutral and negative scores form the major share. As we progress, and if infection rates do not increase drastically as lockdown restrictions are eased, we might see positive sentiment scores taking the major share.


Additional info:

This is a sample experiment that I am running in the Microsoft Azure cloud using Azure Event Hubs and Azure Stream Analytics for real-time processing of Twitter data. I am storing the aggregated score results in Azure Blob Storage – you can read more about the setup here. The aggregated results are shown using a simple React JavaScript application, which is also hosted in the Microsoft Azure cloud. Do contact me for further details.



Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.


By Leena Ilmola-Sheppard, senior researcher in the IIASA Advanced Systems Analysis Program.

Leena Ilmola-Sheppard discusses the value of employing novel research methods aimed at producing fast results to inform policies that address immediate problems like the current COVID-19 pandemic.

© Alberto Mihai

As researchers, the majority of our work – even if it is applied research – requires deep insight and plenty of reading and writing, which sometimes takes years. When we initiate a new method development project, for example, we never know whether it will eventually prove useful in real life, except on the very rare occasions when we are willing to step out of our academic comfort zones and explore whether we are able to address the challenges that decision makers are faced with right now.

I would like to encourage my colleagues and our network to try to answer the call when decision makers ask for our help. It does, however, require courage to produce fast results with no time for peer review, to explore the limits of our knowledge and the capabilities of our tools, and to run the risk of failure.

I share two examples with you in this blog. The first one describes a situation that played out years ago, while the second one is happening today.

When the first signs of a potential refugee crisis became visible late in 2014, the Finnish Prime Minister’s Office contacted the IIASA Advanced Systems Analysis Program (ASA) and asked whether we could produce an analysis for them. The ASA team had an idea to develop a new method for qualitative systems analysis based on an application of causal loop diagrams, and we decided to test the approach with an expert team of 14 people from different Finnish ministries. I have to admit that the process was not exactly the best example of rigorous science, but it was able to produce results in only eight weeks.

“Experts that participated in the process from the government side accepted that the process was a pilot and exploratory in nature. In the end, the group was however able to develop a shared language for the different aspects of the refugee situation in Finland. The method produced a shared understanding of the events and their interdependencies, and we were able to assess the systemic impact of different policies, including unintended consequences. That was a lot in that situation,” said Sari Löytökorpi, Secretary General and Chief Specialist of the Finnish Prime Minister’s Office, when reflecting on that experience recently.

The second case I want to describe here is the current coronavirus pandemic. The COVID-19 virus reached Finland at the end of January when a Chinese tourist was diagnosed. The first fatality in Finland was recorded on 20 March. This time, the challenge we are presented with is to look beyond the pandemic. The two research questions presented to us by the Prime Minister’s Office and the Ministry of Economic Affairs are: ‘How can the resilience of the national economy be enhanced in this situation?’ and secondly ‘What will the world look like after the pandemic?’

Pekka Lindroos, Director of Foresight and Policy Planning in the Finnish Ministry of Economic Affairs is confident, “We know that the pandemic will have a huge impact on the economy. The global outcome of current national policy measures is a major unknown and traditional economic analysis is not able to cover the dynamics of the numerous dimensions of the rupture. That is why we are exploring a combination of novel qualitative analysis and foresight methods with researchers in the IIASA ASA Program.”

I have been working on the implementation of the systems perspective to the coronavirus situation with a few close colleagues around the world who are experts in resilience and risk. We were able to deliver the first report on Friday, 27 March. Among other things, it emphasized the role of social capital and society’s resilience. A more detailed report is currently in production.

A simple systems map (causal loop diagram) representing a preliminary understanding of the world after COVID-19 from a one country perspective.

Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.

Rethinking optimal control theory in resource economics

By Serguei Kaniovski, Economist with the Austrian Institute of Economic Research (WIFO)

Serguei Kaniovski and colleagues from IIASA and the Steklov Mathematical Institute of the Russian Academy of Sciences revisited a classic growth model in resource economics using recent advances in optimal control theory.

The late 1960s and early 1970s gave rise to Doomsday Models that predicted a collapse of Western Civilization under the pressure of over-population and environmental pollution. The very influential 1972 Club of Rome’s report on the “Limits to Growth” painted a gloomy picture, sparking an ongoing debate. One question was whether the scarcity of natural resources like fossil fuels would limit growth and cause a substantial decline in people’s standard of living.

The Doomsday reasoning was met with doubt by the economists of that time, leading the future Nobel Prize laureate and growth theorist, Robert Solow, to state that “the various Doomsday Models are worthless as science and as guides to public policy”. In a combined effort, economists developed a class of growth models with resource constraints. The conclusions they reached using the Dasgupta-Heal-Solow-Stiglitz (DHSS) modeling framework offered a more optimistic outlook.

© Kantver

Economic applications have been well ahead of the mathematical theory used for identifying optimal economic policies, leaving some model solutions unexposed and some technical issues unsettled. The theory that allows us to identify optimal policies and describe the model dynamics was originally developed in the 1950s for engineering applications but has since become the main tool for analyzing economic growth models. These models however contain many features that are not standard to optimal control theory – a subfield of mathematics that deals with the control of continuously operating dynamic systems – which makes a fully rigorous analysis difficult. The key theoretical challenges are infinite planning horizons and nonstandard control constraints.

In our latest paper we offer a complete and rigorous analysis of the welfare-maximizing investment and depletion policies in the DHSS model with capital depreciation and arbitrary (decreasing, constant, and increasing) returns to scale. The investment policy specifies the portion of the final output to be invested in capital. A depletion policy says how fast a finite stock of exhaustible resources should be used. We prove the existence of a solution and characterize the behavior of solutions for all combinations of the model parameters using necessary rather than sufficient (Arrow’s theorem) optimality conditions.
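For orientation, a standard textbook statement of the DHSS planning problem with capital depreciation reads roughly as follows; the notation is illustrative, and the paper's exact specification, including how nonconstant returns to scale enter, differs in detail:

```latex
\max_{s(\cdot),\,R(\cdot)} \int_0^{\infty} e^{-\rho t}\,
    u\!\big((1-s(t))\,Y(t)\big)\,\mathrm{d}t
\quad \text{subject to} \quad
\begin{cases}
Y = K^{\alpha} R^{\beta}, & \text{(production)}\\
\dot K = sY - \delta K, \quad K(0)=K_0, & \text{(capital with depreciation)}\\
\dot S = -R, \quad S(0)=S_0, \quad S \ge 0, & \text{(finite resource stock)}\\
0 \le s \le 1, \quad R \ge 0. & \text{(control constraints)}
\end{cases}
```

Here s(t) is the investment ratio (the share of output invested in capital), R(t) the depletion rate, δ the depreciation rate, and ρ the discount rate; the returns to scale are governed by the sum of the output elasticities α + β. The infinite horizon and the state-dependent constraint S ≥ 0 are exactly the nonstandard features mentioned above.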

In the main case of decreasing, constant, or weakly increasing returns to scale, the optimal investment and depletion policies converge to a constant share of output invested in capital and a constant rate of depletion of the natural resource. The optimal investment ratio decreases with the longevity of capital and with impatience. The relationship between the optimal investment ratio and the output elasticity of produced capital is ambiguous. The analysis identifies those relationships among model parameters that are critical to the optimal dynamics; in this, it differs from more conventional scenario-based approaches. From a practical point of view, applying the model to real data could help evaluate actual depletion and investment policies.

Strongly increasing returns to scale make it optimal to deplete the resource without investing in produced capital. Whether a zero-investment strategy is followed from the outset, from an instant of time, or asymptotically will depend on the sizes of the capital and resource stocks. In some special cases of increasing returns, welfare-maximizing investment and extraction policies may not exist under strong scale effects in resource use. This occurs when an initial stock of capital is small relative to the initial resource stock. It implies that it would have been impossible to formulate a welfare-maximizing policy in the early history of humanity, when produced capital was scarce and resources were abundant.


Aseev S, Besov K, & Kaniovski S (2019). Optimal Policies in the Dasgupta—Heal—Solow—Stiglitz Model under Nonconstant Returns to Scale. Proceedings of the Steklov Institute of Mathematics 304 (1): 74-109.

Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.

Exploring risk in development indicators

By Junko Mochizuki, researcher with the IIASA Risk and Resilience Program

IIASA researcher Junko Mochizuki writes about her recent research in which she and other IIASA colleagues developed an indicator to help identify vulnerable countries that should be prioritized for human development and disaster risk reduction interventions.

© Yong Hian Lim

Working as part of an interdisciplinary team at IIASA, it is not uncommon for researchers to uncover disciplinary blind spots that would otherwise have gone unnoticed. This usually leads to a conversation that goes something like: “If only we could learn from other disciplines more often, we could create more effective theories, methods, and approaches.”

My recently published paper with Asjad Naqvi from the IIASA Advanced Systems Analysis Program, titled Reflecting Disaster Risk in Development Indicators, was the fruit of such an exchange. Over coffee one afternoon, we hypothesized various reasons why the disaster risk discipline continued to create one risk indicator after another while the development community remained silent on this advancement and did not seem to be incorporating these indicators into ongoing research in their own field.

Global ambitions such as the Sustainable Development Goals (SDGs) and Sendai Framework for Disaster Risk Reduction call for disaster mainstreaming, in other words, that disaster risk be assessed and managed in combination with any development planning efforts. For various reasons, we however continue to measure development and disasters separately. We know that globally the poor are more exposed to risk and that disasters hurt development, but there was not a single effective measure that captured this interlinkage in an easy-to-grasp manner. Our aim was therefore to demonstrate how this could be done using the information on disasters and development that we already have at our disposal.

The Human Development Index (HDI) is a summary measure of average attainment in key dimensions of human development – education, life expectancy, and per capita income – that is used to rank countries into four tiers of human development. Using the HDI as an example, Asjad and I compiled global datasets on human development, disaster risk, and public expenditure, and developed a method to discount the HDI for 131 countries globally – just as others have done to adjust for income and gender inequality. Discounting the education dimension of the HDI, for instance, involves multiplying it by the annual economic value of the average loss in terms of education facilities, divided by the annual public expenditure on education. We did this for each dimension of the HDI.
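As a concrete, heavily simplified sketch, the per-dimension adjustment can be read as a multiplicative penalty of the form index × (1 − loss/expenditure), with the discounted dimensions then recombined the way the HDI aggregates its components. The functional form and the numbers below are illustrative assumptions, not the paper's exact penalty function:

```python
import math

def discount_dimension(index_value, annual_avg_loss, annual_public_expenditure):
    """Discount one HDI dimension index by the disaster-loss burden.

    Assumed penalty form: scale the dimension by (1 - loss / expenditure),
    capping the burden at 1 so the factor stays in [0, 1]. The paper's
    exact penalty function may differ in detail.
    """
    burden = min(annual_avg_loss / annual_public_expenditure, 1.0)
    return index_value * (1.0 - burden)

def risk_adjusted_hdi(dim_indices, losses, expenditures):
    """Recombine the discounted dimensions as a geometric mean, mirroring
    how the HDI aggregates its education, health, and income indices."""
    adjusted = [discount_dimension(i, l, e)
                for i, l, e in zip(dim_indices, losses, expenditures)]
    return math.prod(adjusted) ** (1.0 / len(adjusted))

# Hypothetical country: dimension indices 0.80 / 0.85 / 0.70, with average
# annual disaster losses equal to 5% / 2% / 10% of the matching public budgets.
value = risk_adjusted_hdi([0.80, 0.85, 0.70],
                          [50.0, 20.0, 100.0],
                          [1000.0, 1000.0, 1000.0])
```

A country whose expected disaster losses are large relative to its sectoral budgets thus sees its index pulled down the most, which is the intuition behind the hotspots discussed below.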

Conceptually, the indicator development was an intriguing exercise as we and our reviewers asked interesting questions. These included questions about the non-linearity of disaster impact, especially in the health sector, such as how multiple critical lifeline failures may lead to high death tolls in the days, weeks, and even months following an initial disaster event. Other issues we examined were around possibilities for the so-called build-back-better approach, which offers an opportunity to create better societal outcomes following a disaster.

Our formulation of the proposed penalty function hardly captures these complexities, but it nevertheless provides a starting point to debate these possibilities, not just among disaster researchers, but also among others working in the development field.

For those familiar with the global analysis of disaster risk, the results of our analysis may not be surprising: unlike other development issues (such as income and gender inequality, for which the HDI has been reformulated), disasters produce a small group of countries that stand out in terms of their relative burdens. These are small island states such as Belize, Fiji, and Vanuatu, as well as highly exposed low and lower-middle income countries like Honduras, Madagascar, and the Philippines, which were identified as hotspots in terms of risk-adjustments to the HDI. Simply put, this means that these countries will have to divert public and private funds to pay for response and recovery efforts in the event of disasters, and that these expenses are sizeable relative to the resources they have for advancing the three dimensions of the HDI. Despite their high relative risk, the latter countries also receive less external support, measured in terms of per capita aid flow.

Our study shows that global efforts to promote disaster risk reduction, like the Sendai Framework, should be aware of this heterogeneity, and that more attention in the form of policy support and resource allocation may be needed to support groups of outliers. Finally, although the costs of most disasters that occur globally are small relative to the size of most countries’ national economies, further sub-national analysis will help identify highly vulnerable areas within countries that should be prioritized for development and disaster risk reduction interventions.


Mochizuki J & Naqvi A (2019). Reflecting Disaster Risk in Development Indicators. Sustainability 11 (4): e996

Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.

Shaping my scientific career

By Davit Stepanyan, PhD candidate and research associate at Humboldt University of Berlin, International Agricultural Trade and Development Group and 2019 IIASA Young Scientists Summer Program (YSSP) Award Finalist.

Participating in the YSSP at IIASA was the biggest boost to my scientific career and has shifted my research to a whole new level. IIASA provides a perfect research environment, especially for young researchers who are at the beginning of their career paths and helps to shape and integrate their scientific ideas and discoveries into the global research community. Being surrounded by leading scientists in the field of systems analysis who were open to discuss my ideas and who encouraged me to look at my own research from different angles was the most important push during my PhD studies. Having the work I did at IIASA recognized with an Honorable Mention in the 2019 YSSP Awards has motivated me to continue digging deeper into the world of systems analysis and to pursue new challenges.

© Davit Stepanyan

Although my background is in economics, mathematics has always been my passion. When I started my PhD studies, I decided to combine these two disciplines by taking on the challenge of developing an efficient method of quantifying uncertainties in large-scale economic simulation models, and so drastically reduce the need and cost of big data computers and data management.

The discourse on uncertainty has always been central to many fields of science, from cosmology to economics. In our daily lives, we also consider uncertainty when making decisions, even if subconsciously: we often ask ourselves questions like “What if…?” or “What is the chance of…?”. These questions and their answers are also crucial to systems analysis, since the final goal is to represent the systems we model as close to reality as possible.

I applied for the YSSP during my third year of PhD research. I had reached the stage where I had developed the theoretical framework for my method, and it was time to test it on well-established large-scale simulation models. The IIASA Global Biosphere Management Model (GLOBIOM) is a simulation model with global coverage: it is the perfect example of a large-scale simulation model that has faced difficulties applying burdensome uncertainty quantification techniques (e.g., Monte Carlo or quasi-Monte Carlo).
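To see why plain Monte Carlo is burdensome for a model like GLOBIOM, consider a toy sketch: every draw of the uncertain inputs requires one full model run, and the estimate converges only at the slow O(1/√n) rate. The model, parameters, and distributions below are invented purely for illustration:

```python
import random

def toy_model(price, crop_yield):
    # Stand-in for one expensive simulation run (GLOBIOM solves a full
    # optimization problem per parameter draw; this toy is instantaneous).
    return price * crop_yield

def monte_carlo(n_draws, seed=42):
    """Plain Monte Carlo propagation of input uncertainty through the model."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_draws):
        price = rng.gauss(100.0, 10.0)      # uncertain input 1 (assumed)
        crop_yield = rng.gauss(5.0, 0.5)    # uncertain input 2 (assumed)
        outputs.append(toy_model(price, crop_yield))
    mean = sum(outputs) / n_draws
    var = sum((y - mean) ** 2 for y in outputs) / (n_draws - 1)
    return mean, var

mean, var = monte_carlo(10_000)
# With independent inputs, the expected output is 100 * 5 = 500; halving
# the estimation error requires four times as many model runs, which is
# what makes brute-force sampling so costly for large models.
```

A method that reaches comparable accuracy with a small fraction of these runs, as described below for GLOBIOM, removes exactly this bottleneck.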

The results from GLOBIOM have been very encouraging: my proposed method was able to produce high-quality results using only about 4% of the computing and data storage capacity required by the above-mentioned existing methods. Since my stay at IIASA, I have successfully applied my proposed method to two other large-scale simulation models. These results are being prepared for scientific publication and will hopefully benefit many other users of large-scale simulation models.

Looking ahead, even though computing capacity is developing at high speed, in a time of ‘big data’ we can anticipate that simulation models will grow in size and scope to such an extent that more efficient methods will be required.

Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.