Network science and marketing: A virus’ tale

By Matthias Wildemeersch, IIASA Advanced Systems Analysis and Ecosystems Services and Management Programs

FotoQuest Austria is a citizen science campaign initiated by the IIASA Ecosystems Services & Management Program that aims to involve the general public in mapping land use in Austria. Understanding the evolution of urban sprawl, for example, is important for estimating the risk of flooding, while the preservation of wetlands has important implications for climate change.

But how can we engage people in environmental monitoring, particularly when they are growing increasingly resistant to traditional forms of advertising? Viral marketing makes use of social networks to spread messages, taking advantage of the trust we place in a recommendation from a friend rather than from a stranger or a company.

Network science and the formal description of spreading phenomena can shed light on how messages propagate through communities, and can be applied to inform the design of viral marketing campaigns.

Viral spreading © kittitee550 | Dollar Photo Club


Network science is a multi-disciplinary field of research that draws on graph theory, statistical mechanics, inference, and other theories to study the behavior of agents in various networks. The spreading phenomena in viral marketing show similarities with well-studied spreading processes over biological, social, physical, and financial networks. For instance, we can think of epidemics, which are well understood and allow for the design of optimal strategies to contain viruses. Another example is opinion dynamics, which has received renewed research attention in recent years in the context of social media. In contrast to diseases or computer viruses, which we aim to contain and stop, the goal of viral marketing is to spread widely, reaching the largest possible fraction of a community.

What makes viral marketing unique?
But some aspects of viral marketing are very different from other spreading phenomena. First, many platforms can be used to spread information at the same time, and the interaction between these platforms is not always transparent. Human psychology is a crucial factor in social networks, as repeated interaction and saturation can decrease the willingness to pass viral content on. Marketing campaigns have a limited budget, so it is important to understand how incentives can be used and how effective they are. This also makes it essential to identify the most influential people, who can serve as seeds for the viral campaign.

Network science has addressed all of these questions to a great extent, mostly under the assumption of full knowledge of the connections between agents and their influence. So-called multiplexes, or multi-layer networks, are currently an active research field. This research unveils the relationships between the dynamics of viral marketing, the connection patterns, and the strength of coupling between the network layers. Although viral spreading may be unachievable in a single layer, for example a social network like Facebook, the critical threshold may be exceeded by joining different platforms. Within a given platform, like-minded people can be grouped using community detection algorithms. Once the communities are identified, influence maximization algorithms can select the individuals who maximize the spread of viral content. Although this discrete optimization problem is computationally difficult (NP-hard), mathematicians have proposed algorithms that can efficiently predict who to target to give a campaign the best chance of going viral. On top of that, optimal pricing strategies have been developed to reward recommenders.
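As an illustration of the kind of algorithm this literature proposes, the sketch below implements a standard greedy seeding heuristic under an independent cascade model, in which each newly activated person convinces each friend independently with some fixed probability. The toy network, the activation probability, and the Monte Carlo settings are purely hypothetical assumptions for the example; this is not the machinery behind any particular campaign such as FotoQuest Austria.

```python
import random

def simulate_cascade(graph, seeds, p, rng=random):
    """Run one independent-cascade simulation and return the spread size.

    graph: dict mapping each node to a list of its neighbours
    seeds: initially activated nodes
    p:     probability that an active node activates each inactive neighbour
    """
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        node = frontier.pop()
        for neighbour in graph.get(node, []):
            if neighbour not in active and rng.random() < p:
                active.add(neighbour)
                frontier.append(neighbour)
    return len(active)

def greedy_seeds(graph, budget, p=0.1, trials=200):
    """Greedily choose `budget` seed nodes that maximise the estimated spread.

    The exact problem is NP-hard, so each candidate's marginal contribution
    is estimated by averaging Monte Carlo cascade simulations.
    """
    chosen = []
    for _ in range(budget):
        best_node, best_spread = None, -1.0
        for node in graph:
            if node in chosen:
                continue
            spread = sum(simulate_cascade(graph, chosen + [node], p)
                         for _ in range(trials)) / trials
            if spread > best_spread:
                best_node, best_spread = node, spread
        chosen.append(best_node)
    return chosen

if __name__ == "__main__":
    # Hypothetical friendship network: two loosely connected communities.
    network = {
        "anna":  ["ben", "carla", "dora"],
        "ben":   ["anna", "carla"],
        "carla": ["anna", "ben", "dora"],
        "dora":  ["anna", "carla", "emil"],
        "emil":  ["dora", "fritz", "greta"],
        "fritz": ["emil", "greta"],
        "greta": ["emil", "fritz"],
    }
    print(greedy_seeds(network, budget=2, p=0.3, trials=500))
```

In this sketch the greedy loop tends to pick one seed in each community, which is exactly the intuition behind combining community detection with influence maximization.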

The FotoQuest Austria app aims to engage citizen scientists in their campaign - network theory may help them go "viral." © IIASA


Although the literature is extensive, the results are often theoretical and involve mathematically complex models and algorithms. Considering that usually only partial information about the network is available, it is not straightforward to bring this knowledge to bear on a practical marketing campaign. Researchers in this field are therefore trying to bridge the gap between theoretical results and practical problems. The generic, powerful methods of network science are sufficiently versatile to capture the specifics of real-world applications, and they can provide guidelines of great value for the design of heuristic methods in marketing strategies.

Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.

What do our models really represent?

By Dan Jessie, IIASA Research Scholar, Advanced Systems Analysis Program

As policymakers turn to the scientific community to inform their decisions on topics such as climate change, public health, and energy policy, scientists and mathematicians face the challenge of providing reliable information regarding trade-offs and outcomes for various courses of action. To generate this information, scientists use a variety of complex models and methods. However, how can we know whether the output of these models is valid?

This question was the focus of a recent conference I attended, arranged by IIASA Council Chair Donald G. Saari and the Institute for Mathematical Behavioral Sciences at the University of California, Irvine. The conference featured a number of talks by leading mathematicians and scientists who research complex systems, including Carl Simon, the founding Director of the University of Michigan’s Center for the Study of Complex Systems, and Simon Levin, Director of the Center for BioComplexity at Princeton University. All talks focused on answering the question, “Validation. What is it?”

To get a feel for how difficult this topic is, consider that during the lunch discussions, each speaker professed to know less than everybody else! In spite of this professed ignorance, each talk presented challenging new ideas, both about how validation can be carried out for a given model and about general guidelines for what validation requires.

How closely does a model need to mirror reality? © Mopic | Dreamstime.com - Binary Background Photo


For example, one talk discussed the necessity of understanding the connecting information between the pieces of a system. While it may seem obvious that, to understand a system built from many different components, one needs to understand both the pieces and how the pieces fit together, this talk contained a surprising twist: oftentimes, the methodology we use to model a problem unknowingly ignores this connecting information. By using examples from a variety of fields, such as social choice, nanotechnology, and astrophysics, the speaker showed how many current research problems can be understood in this light. This talk presented a big challenge to the research community to develop the appropriate tools for building valid models of complex systems.

Overall, the atmosphere of the conference was one of debate, and it seemed that no two speakers agreed completely on what validation required, or even meant. Recurring questions included: How closely does a model need to mirror reality? How do we assess predictions, given that every model fails in some of them? What role do funding agencies and peer review play in validation? The arguments generated by the talks weren’t limited to the conference schedule, either, and carried on into the dinners and beyond.

I left the conference with a sense of excitement at seeing so many new ideas that challenge the current methods and models. This is still a new and growing topic, but one where advances will have wide-ranging impacts in terms of how we approach and answer scientific questions.

IIASA Council Chair Don Saari: Validation: What is it?

Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.

Global carbon taxation: a back of the envelope calculation

By Armon Rezai, Vienna University of Economics and Business Administration and IIASA,
and Rick van der Ploeg, University of Oxford, UK, University of Amsterdam, and CEPR

The biggest externality on the planet is the failure of markets to price carbon emissions appropriately (Stern, 2007). This leads to excessive fossil fuel use, which induces global warming and all the economic costs that go with it. Governments should seize the moment of plummeting oil prices and set a price of carbon equal to the optimal social cost of carbon (SCC), where the SCC is the present discounted value of all future production losses from the global warming induced by emitting one extra ton of carbon (e.g., Foley et al., 2013; Nordhaus, 2014). Our calculations suggest a price of $15 per ton of emitted CO2, or 13 cents per gallon of gasoline. This price can be implemented either with a global tax on carbon emissions or with competitive markets for tradable emission rights and, in the absence of second-best issues, must be the same throughout the globe.

The most prominent integrated assessment model of climate and the economy is DICE (Nordhaus, 2008; 2014). Such models can be used to calculate the optimal level and time path for the price of carbon. Alas, most people, including policymakers and economists, view these integrated assessment models as a “black box,” and consequently the resulting prescriptions for the carbon price are hard to understand and communicate.

© Cta88 | Dreamstime.com - Operating Oil And Gas Well Contour, Outlined On Sunset Photo


New rule for the global carbon price
This is why we propose a simple rule for the global carbon price, which can be calculated on the back of the envelope and approximates the correct optimal carbon price very accurately. Furthermore, this rule is robust, transparent, and easy to understand and implement. The rule depends on geophysical factors, such as dissipation rates of atmospheric carbon into oceanic sinks, and economic parameters, such as the long-run growth rate of productivity and the societal rates of time impatience and intergenerational inequality aversion. Our rule is based on the following premises.

  • First, the carbon cycle dynamics are much more sluggish than the process of growth convergence. This allows us to base our calculations on trend growth rates.
  • Second, a fifth of any carbon emission stays in the atmosphere permanently; of the remainder, 60 percent is absorbed by the oceans and the earth’s surface within a year, and the rest decays with a half-life of three hundred years. After about three decades, half of the emitted carbon has thus left the atmosphere. Emitting one ton of carbon therefore implies that $0.2 + 0.32\,e^{-0.0023t}$ tons are left in the atmosphere after $t$ years.
  • Third, marginal climate damages are roughly 2.38 percent of world GDP per trillion tons of extra carbon in the atmosphere. These figures come from Golosov et al. (2014) and are based on DICE; the calibration assumes that doubling the stock of atmospheric carbon yields a rise in global mean temperature of 3 degrees Celsius. Hence, the within-period damage of one ton of carbon emitted today is, $t$ years later, $\frac{0.0238}{10^{12}}\,(0.2 + 0.32\,e^{-0.0023t})\times Y_t$, where $Y_t$ denotes world GDP at that time.
  • Fourth, the SCC is the discounted sum of all future within-period damages. The interest rate used to discount these damages follows from the Keynes-Ramsey rule as the rate of time impatience ρ plus the coefficient of relative intergenerational inequality aversion (IIA) times the per-capita growth rate in living standards g; since damages grow in line with GDP, the relevant growth-corrected discount rate is r = ρ + (IIA − 1)g. Growth in living standards leads to wealthier future generations that require a higher interest rate, especially if IIA is large, because current generations are then less prepared to sacrifice current consumption.
  • Fifth, it takes a long time to warm up the earth. We suppose that the average lag between global mean temperature and the stock of atmospheric carbon is 40 years.

We thus get the following back-of-the-envelope rule for the optimal SCC and price of carbon:

\[
\mathrm{SCC}_0 \;=\; \left(\frac{0.0238\,Y_0}{10^{12}}\left[\frac{0.2}{r} + \frac{0.32}{r + 0.0023}\right]\right) \times \left(\frac{1}{1 + 40\,r}\right)
\]

where r = ρ + (IIA − 1) × g and Y_0 denotes current world GDP in US dollars. Here the term in the first set of round brackets is the present discounted value of all future within-period damages resulting from emitting one ton of carbon, and the term in the second set of round brackets is the attenuation in the SCC due to the lag between the change in temperature and the change in the stock of atmospheric carbon.
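To see where these two terms come from, one can integrate the discounted damage flow implied by the premises above. The following is a sketch under those premises (with $Y_t = Y_0 e^{gt}$ denoting world GDP); the authors’ own derivation may differ in detail:

\[
\mathrm{SCC}_0
  = \underbrace{\int_0^{\infty} e^{-(\rho + \mathrm{IIA}\,g)\,t}\,
      \frac{0.0238}{10^{12}}\bigl(0.2 + 0.32\,e^{-0.0023\,t}\bigr)\,Y_0\,e^{g t}\,\mathrm{d}t}_{\text{present value of within-period damages per ton}}
    \;\times\; \underbrace{\frac{1}{1 + 40\,r}}_{\text{temperature lag}}
  = \frac{0.0238\,Y_0}{10^{12}}
    \left[\frac{0.2}{r} + \frac{0.32}{r + 0.0023}\right]
    \frac{1}{1 + 40\,r},
\]

Discounting at the Keynes-Ramsey rate ρ + IIA·g while damages grow in line with GDP at rate g is what produces the growth-corrected rate r = ρ + (IIA − 1)g in the denominators; the final factor damps the damages for the 40-year mean lag between atmospheric carbon and warming.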

Policy insights from the new rule
This rule gives the following policy insights:

  • The global price of carbon is high if the welfare of future generations is not discounted much.
  • Higher growth in living standards g boosts the interest rate and thus depresses the optimal global carbon price if IIA > 1. As future generations are better off, current generations are less prepared to make sacrifices to combat global warming. However, with IIA < 1, growth in living standards boosts the price of carbon.
  • Higher IIA implies that current generations are less prepared to temper future climate damages if there is growth in living standards and thus the optimal global price of carbon is lower.
  • Both the decay of atmospheric carbon and the lag between temperature and atmospheric carbon (the term in the second pair of brackets) depress the price of carbon.
  • The optimal price of carbon rises in proportion to world GDP, which totalled 76 trillion USD in 2014.

The rule is easy to extend to allow for marginal damages that react less than proportionally to world GDP (Rezai and van der Ploeg, 2014). For example, additive instead of multiplicative damages from global warming give a lower initial price of carbon, especially if economic growth is high, and a completely flat time path for the price of carbon. In general, the lower the elasticity of climate damages with respect to GDP, the flatter the time path of the carbon price.

Calculating the optimal price of carbon following the new rule
Our benchmark set of parameters for our rule is to suppose trend growth in living standards of 2 percent per annum and a degree of intergenerational inequality aversion of 2, and not to discount the welfare of future generations at all (g = 2%, IIA = 2, ρ = 0). This gives an optimal price of carbon of $55 per ton of emitted carbon, $15 per ton of emitted CO2, or 13 cents per gallon of gasoline, which subsequently rises in line with world GDP at a rate of 2 percent per annum.
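For readers who want to experiment with these parameter choices, here is a minimal sketch that evaluates the rule as reconstructed above, with the damage coefficient, carbon-cycle constants, and 40-year lag taken from the premises listed earlier. Because we cannot recover every detail of the authors’ calibration, the absolute dollar figures it returns are only indicative; what it does reproduce is the comparative statics discussed in this section (less discounting, lower inequality aversion, or lower growth all raise the carbon price).

```python
def carbon_price_per_ton(world_gdp_usd, rho, iia, growth,
                         damage_coeff=0.0238,  # share of GDP lost per trillion tons of carbon
                         permanent=0.2, slow=0.32, decay=0.0023, lag_years=40):
    """Back-of-the-envelope social cost of carbon, in USD per ton of carbon.

    Implements the rule as reconstructed above: the present value of
    within-period damages from one extra ton of carbon, discounted at the
    growth-corrected rate r = rho + (IIA - 1) * g and damped for the lag
    between atmospheric carbon and temperature.
    """
    r = rho + (iia - 1.0) * growth                      # growth-corrected discount rate
    damage_flow = damage_coeff * world_gdp_usd / 1e12   # USD per ton per year in the atmosphere
    discounted_exposure = permanent / r + slow / (r + decay)
    lag_damping = 1.0 / (1.0 + lag_years * r)
    return damage_flow * discounted_exposure * lag_damping

if __name__ == "__main__":
    gdp_2014 = 76e12  # world GDP in USD
    scenarios = [("benchmark (rho=0, IIA=2, g=2%)",        0.00, 2, 0.02),
                 ("2% impatience (rho=2%)",                0.02, 2, 0.02),
                 ("higher inequality aversion (IIA=4)",    0.00, 4, 0.02),
                 ("slower growth (g=1%)",                  0.00, 2, 0.01)]
    for label, rho, iia, g in scenarios:
        print(f"{label:40s} {carbon_price_per_ton(gdp_2014, rho, iia, g):7.1f} USD/tC")
```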

Leaving ethical issues aside, our rule shows that discounting the welfare of future generations at 2 percent per annum (ρ = 2%, keeping g = 2% and IIA = 2) implies that the optimal global carbon price falls to $20 per ton of emitted carbon, $5.50 per ton of emitted CO2, or 5 cents per gallon of gasoline.

If society were more concerned with intergenerational inequality and used a higher IIA of 4 (keeping g = 2%, ρ = 0), current generations should sacrifice less current consumption to improve the climate decades and centuries ahead. This is why our rule then indicates that the initial optimal carbon price falls to $10 per ton of carbon. Taking a lower IIA of one and a discount rate of 1.5% per annum, as in Golosov et al. (2014), pushes up the initial price of carbon to $81 per ton of emitted carbon.

A more pessimistic forecast of growth in living standards of 1 instead of 2 percent per annum (keeping IIA = 2, ρ = 0) boosts the initial price of carbon to $132 per ton of carbon, which subsequently grows at a rate of 1 percent per annum. To illustrate how accurate our back-of-the-envelope rule is, we road-test it in a sophisticated integrated assessment model of growth, savings, investment, and climate change with endogenous transitions between fossil fuel and renewable energy and forward-looking dynamics associated with scarce fossil fuel (for details see Rezai and van der Ploeg, 2014). The figure below shows that our rule approximates optimal policy very well.

[Figure: the optimal carbon price over time in the full integrated assessment model compared with the simple rule]

The table below confirms that our rule also predicts very accurately the optimal timing of energy transitions and the optimal amount of fossil fuel to be left unexploited in the earth. Business as usual leads to unacceptable degrees of global warming (4 degrees Celsius), since much more carbon is burnt (1640 gigatons of carbon) than in the first best (955 GtC) or under our simple rule (960 GtC). Our rule also accurately predicts by how much the transition to the carbon-free era is brought forward (by about 18 years). No wonder our rule yields almost the same welfare gain as the first best, while business as usual leads to significant welfare losses (3% of world GDP).

Transition times and carbon budget (IIA = 2)

| Scenario | Fossil fuel only | Renewable only | Carbon used | Maximum temperature | Welfare loss |
|---|---|---|---|---|---|
| First best | 2010–2060 | from 2061 | 955 GtC | 3.1 °C | 0% |
| Business as usual | 2010–2078 | from 2079 | 1640 GtC | 4.0 °C | −3% |
| Simple rule | 2010–2061 | from 2062 | 960 GtC | 3.1 °C | −0.001% |

Recent findings in the IPCC’s Fifth Assessment Report support ours. While it is not possible to translate their estimates of the social cost of carbon into our model in a straightforward manner, scenarios with similar levels of global warming yield similar time profiles for the price of carbon.

Our rule for the global price of carbon is easy to extend to allow for growth damages from global warming (Dell et al., 2012). This pushes up the carbon tax, brings forward the carbon-free era to 2044, curbs the total carbon budget (to 452 GtC), and lowers the maximum temperature (to 2.3 degrees Celsius). Allowing for prudence in the face of growth uncertainty also induces a somewhat more ambitious climate policy, but to a much smaller degree. On the other hand, additive damages lead to a laxer climate policy, with a much bigger carbon budget (1600 GtC) and fossil fuel abandoned much later (2077).

In sum, our back-of-the-envelope rule for the optimal global price of carbon gives an accurate prediction of the optimal carbon tax. It highlights the importance of economic primitives, such as the trend growth rate of GDP, for climate policy. We hope that because the rule is easy to understand and communicate, it will also be easier to implement.

References
Dell, M., Jones, B., and Olken, B. (2012). Temperature shocks and economic growth: Evidence from the last half century. American Economic Journal: Macroeconomics 4, 66–95.
Foley, D., Rezai, A., and Taylor, L. (2013). The social cost of carbon emissions. Economics Letters 121, 90–97.
Golosov, M., Hassler, J., Krusell, P., and Tsyvinski, A. (2014). Optimal taxes on fossil fuel in general equilibrium. Econometrica 82(1), 41–88.
Nordhaus, W. (2008). A Question of Balance: Economic Models of Climate Change. Yale University Press, New Haven, Connecticut.
Nordhaus, W. (2014). Estimates of the social cost of carbon: Concepts and results from the DICE-2013R model and alternative approaches. Journal of the Association of Environmental and Resource Economists 1, 273–312.
Rezai, A. and van der Ploeg, F. (2014). Intergenerational Inequality Aversion, Growth and the Role of Damages: Occam’s Rule for the Global Carbon Tax. Discussion Paper 10292, CEPR, London.
Stern, N. (2007). The Economics of Climate Change: The Stern Review. Cambridge University Press, Cambridge.

Note: This article gives the views of the authors, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.

Charting connections: the next challenge for systems analysis

At IIASA in Laxenburg this week, renowned mathematician Don Saari laid out a challenge for the Institute’s scientists: to better understand complex systems, he said, researchers must find better ways to model the interactions between different factors.

“In a large number of models, we use climate change or other factors as a variable. What we’re doing is throwing in these variables, rather than representing interactions—like how does energy affect population?” said Saari, a longtime IIASA collaborator and council member, and newly elected IIASA Council Chair, a position he will take up in November. “The great challenge of systems analysis is figuring out how to connect all the parts.”

“Whenever you take any type of system and look at parts and how you combine parts, you’re looking at a reductionist philosophy. We all do that in this room,” said Saari. “It is the obvious way to address a complex problem: to break it down into solvable parts.”

The danger of reductionism, Saari said, is that it can produce completely incorrect solutions, without any indication that they are incorrect. He said, “The whole may be completely different from the sum of its parts.”

Take a Rubik’s cube as an example. Saari said, “If you try to solve it by first doing the red side, then the green, then the blue, you will end up with a mess. What happens on one side is influenced by what’s happening on all the other sides.”

In the same way, the world’s great systems of energy, water, and climate all influence each other. During the discussions, IIASA Deputy Director Nebojsa Nakicenovic noted that current work to extend the findings of the Global Energy Assessment to include water resources could narrow the number of potential sustainable scenarios identified for energy futures by more than half.

Saari pointed out that many of the world’s great scientists—including Nobel Prize winner Tom Schelling and Kyoto Prize winner Simon Levin, both IIASA alumni—reached their groundbreaking ideas by elucidating the connections between two different fields.

It may sound like a simple solution to a methodological challenge. However, understanding the connections and influences between complex systems is far from simple. As researcher Tatiana Ermolieva pointed out in the discussion, “In physical systems you can hope to observe and discover the linkages.” But between human, economic, and global environmental systems those linkages are elusive and fraught with uncertainty.

At the end of the lecture, IIASA Director & CEO Prof. Dr. Pavel Kabat turned the challenge towards IIASA scientists, and we now extend it also to our readers: How can scientists better model the connections between systems, and what needs to change in our thinking in order to do so?