Why Germany and not Japan is the leader in renewable energy

By Jessica Jewell, IIASA Energy Program

Why have Germany and Japan, two large and in many respects similar developed democracies, pursued such different energy options? A recently published study examines why Germany has become the world’s leader in renewable energy while phasing out its nuclear power, whereas Japan has deployed only a trivial amount of renewables while constructing a record number of nuclear reactors.

The widespread story is that Germany rejected nuclear power in a politically bold move after Fukushima and instead pursued its ‘Energiewende’, prioritizing wind and solar energy to combat climate change. Leading scholars such as Amory Lovins described Japanese policymakers as manipulated by the nuclear lobby, clinging to their old ways, and unwilling to properly support renewable energy. The lesson to other countries is that public anti-nuclear sentiment and a capable democratic government are what it takes to turn to decentralized renewable energy.

This research shows that these stories are myths. As my coauthor and I wrote in a letter to the editor in Nature last year, Japan already had ambitious renewable energy targets before Fukushima, and there is no evidence that these were affected by its nuclear plans. The same holds for Germany: its targets for renewable energy were not affected by the change in its nuclear strategy following the Fukushima disaster in 2011.

© nixki | Shutterstock

In fact, the differences between Germany and Japan started not in 2011 after Fukushima, but some 20 years earlier, in the early 1990s, when Japan’s electricity consumption was rapidly growing and it desperately needed to expand electricity generation to feed demand that could not be met with its very scarce domestic fossil fuels. Furthermore, Japan was developing ‘energy angst’ related not only to its high dependence on Middle Eastern oil and gas but also to potential competition with China, with its rising appetite for energy. At the same time, Germany’s electricity consumption stagnated in the 1990s and its energy security improved following the end of the Cold War. Germany was also one of the world’s largest coal producers and could in principle supply all its domestic electricity from coal. As a result, in the 1990s, Japan was forced to build nuclear power plants, but Germany could easily do without them.

There was another important development in the early 1990s: wind power technology diffused to Germany from neighboring Denmark. This was triggered by the electricity feed-in law of 1990, which obliged German electric utilities to buy electricity from small producers at close-to-retail prices. The law, which aimed to benefit a small number of micro-hydro plant owners, unexpectedly led to an almost 100-fold rise in wind installations in Germany. Although still insignificant in terms of electricity generated, this development created a large and vocal lobby of owners and manufacturers of wind turbines. In the early 2000s, the wind sector provided less than one-tenth of nuclear electricity but employed more people than the nuclear sector. In contrast, Japan’s similar policies of buying wind energy from decentralized producers did not result in any considerable growth of wind power, because the Danish technologies prevalent in the early 1990s could not be as easily diffused to Japan.

By the turn of the century, the electricity sectors in Germany and Japan still looked largely similar, but the political dynamics could not have been more different. In Germany, a huge, politically powerful coal sector was represented by the Social Democratic Party, and the so-called ‘red-green’ coalition was formed with the Green Party, which represented the rapidly growing wind power sector. The stagnating nuclear industry, however, had not seen new domestic orders or construction for 15 years, and large industrial players like Siemens had begun to diversify away from it. All this was in the context of a positive energy security outlook and declining electricity prices. In contrast, in Japan, the nuclear sector had vigorously grown over the previous decade and was becoming globally dominant by acquiring significant manufacturing capacities. Nuclear power was the only plausible response to the energy angst, and it lacked any credible political opponents: Japan’s domestic coal sector was virtually nonexistent (Germany had around 70,000 coal mining jobs; Japan had about 1,000) and wind had never taken off.

© Pla2na | Shutterstock

The results of these very different political dynamics were predictably different: the red-green coalition in Germany legislated a nuclear phase-out in 2002 and unprecedented financial support for renewables in 2000, while retaining coal subsidies and triggering construction of new coal power plants. Japan continued to support solar energy, in which it had been the global leader since the 1970s, but it also adopted a plan for constructing many more nuclear reactors designed to substitute for imported fuels. Fukushima, rather than highlighting differences, actually made the energy trajectories of the two countries more similar, as both began to struggle to replace their aging nuclear capacities with new renewables.

How does this story relate to wider questions, such as why some countries are more successful in deploying renewables than others? The answer lies not in ‘stronger political will’ or the strength of climate change concerns, but in economics, geography, and the structure of energy systems. Political wins for renewables and the climate can also be the result of dubious political compromises, such as the alliance with the coal lobby in Germany, which led to the rapid growth of renewables and the demise of nuclear power. It may be particularly difficult for countries with fossil fuel resources to implement renewable energy policies if these lead to the contraction of domestic coal, gas, or oil industries.

Reference: Cherp A, Vinichenko V, Jewell J, Suzuki M, & Antal M (2016). Comparing electricity transitions: A historical analysis of nuclear, wind and solar power in Germany and Japan. Energy Policy 101: 612-628.

Acknowledgements

The study was supported by the CD-LINKS project and the Central European University’s Intellectual Themes Initiative.

This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.


Natural regeneration for sustainable development

By Alvaro Silva Iribarrem, researcher in the IIASA Ecosystems Services and Management Program.   

Restoration of degraded ecosystems is an exciting and relatively new way of looking into the conservation of natural capital—the world’s natural assets, including soil, air, water, and all living things. For one, the success of restoration is more readily verifiable than, for example, avoided degradation. Further, it increases the landscape’s resilience: natural areas can be placed around agricultural crops, increasing their yields by providing habitat for pollinators and therefore increasing pollination, protecting them from natural disasters, and improving the provision of important ecosystem services for human wellbeing.

These services include removing CO2 from the atmosphere (which contributes directly to climate change mitigation); ensuring that more sediment is filtered from the rivers (which reduces the risk of landslides and floods); and providing habitat for a large diversity of species. For scientists, it feels like being at the head of the counter-offensive: it is us, humans, finally doing something not only to slow our seemingly unstoppable degradation of the environment, but to actively start pushing it back.

Restoring an ecosystem to its original state can be an expensive endeavor, but tropical rainforests are very resilient. For example, even after centuries of extensive use of the Brazilian Atlantic Forest, which has been reduced to a tenth of its original size, in many places it would still grow back to much of its original state in a matter of decades, if allowed to do so. For such ecosystems, natural regeneration represents an extraordinary opportunity to enable restoration at scales that would otherwise be cost-prohibitive.

In places like the Paraitinga watershed, in the countryside of São Paulo state, most of the original forest has long been cleared and replaced, predominantly, by small dairy farms. After over a century of careless land use, large areas of the converted landscape have degraded to the point where yields are so low that farms are barely viable. The lack of forest cover has led to frequent floods. The worst of the recent ones, in 2010, destroyed most of the historical city of São Luiz do Paraitinga, home to 11,000 inhabitants.

The 2010 flood in Paraitinga. © Luciano Dinamarco

The aftermath of the 2010 flood in Paraitinga. © Luciano Dinamarco

In a couple of recent publications, we made a comprehensive effort to include the natural regeneration of that watershed’s native forest as part of a bigger plan for more sustainable development of the region, one that would increase its resilience to this kind of disaster.

Starting from a landscape approach, we looked at the potential for grass growth in the region, and concluded that it was possible to accommodate all foreseeable future demands for cattle production and still make space for the restoration of a large area in the watershed. Sustainable intensification of current pasture is key to avoid the economic losses that could otherwise follow the land shortage caused by such a large-scale restoration. It would also help to gain the farmers’ acceptance. By producing more in a smaller area, they could let go of the degraded areas they currently use, allowing the native forest fragments nearby to spread.

In our regeneration scenario, we assume that around 24,000 hectares of pastureland that is presently abandoned in the watershed will be allowed to undergo natural regeneration over the next 20 years. This naturally occurring forest regrowth would sequester 6.2 million tons of CO2 from the atmosphere. Additionally, it would reduce the sediment load into rivers by 570,000 tons annually, bringing water purification costs in the area down by 0.37 dollars per year per hectare restored. Finally, we showed that restoration of even this relatively small area would be enough to significantly increase habitat availability for all species, particularly those that travel between forest fragments.
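The headline figures in this scenario can be broken down into per-hectare rates with simple arithmetic. The sketch below uses only the numbers stated above; the derived rates are rough averages for illustration, not additional results from the study:

```python
# Back-of-the-envelope breakdown of the regeneration scenario figures.
# All inputs are the numbers stated in the text; nothing else is assumed.
area_ha = 24_000              # hectares of abandoned pasture regenerating
years = 20                    # regeneration horizon
co2_total_t = 6.2e6           # total CO2 sequestered over the period, tons
sediment_t_per_yr = 570_000   # annual reduction in river sediment load, tons

co2_per_ha = co2_total_t / area_ha                 # ~258 t CO2 per hectare
co2_per_ha_yr = co2_per_ha / years                 # ~13 t CO2 per hectare per year
sediment_per_ha_yr = sediment_t_per_yr / area_ha   # ~24 t per hectare per year

print(f"CO2: {co2_per_ha:.0f} t/ha total, {co2_per_ha_yr:.1f} t/ha/yr; "
      f"sediment: {sediment_per_ha_yr:.1f} t/ha/yr")
```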

To understand the difficulties farmers face in improving productivity, we conducted interviews and focus groups with them. We found that the tendency to keep to their old, low-producing, land-extensive ways is less related to resistance to change and more to a lack of technical knowledge and of the means to make the upfront investments needed to switch to a more productive system. Credit for investment is available and cheap in the country, but only a small number of farmers in the region risk taking it. Technical assistance is key to tapping into these resources and enabling the necessary improvement of the watershed’s production. The conditions for unlocking large-scale forest regrowth, not only in the Paraitinga watershed but in many similar landscapes in the country, are in place—they need only to be implemented properly.

Strassburg BBN, Barros FSM, Crouzeilles R, Silva Iribarrem A, dos Santos JS, Silva D, Sansevero JBB, Alves-Pinto HN, Feltran-Barbieri R, & Latawiec AE (2016). The role of natural regeneration to ecosystem services provision and habitat availability: a case study in the Brazilian Atlantic Forest. Biotropica.

Alves-Pinto HN, Latawiec AE, Strassburg BBN, Barros FSM, Sansevero JBB, Iribarrem A, Crouzeilles R, Lemgruber LC, Rangel M, & Silva ACP (2016). Reconciling rural development and ecological restoration: Strategies and policy recommendations for the Brazilian Atlantic Forest. Land Use Policy.

This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.

Interview: Defining the future(s)

Samir KC is a researcher in the IIASA World Population Program. He worked on the population projections that form the “human core” of the Shared Socioeconomic Pathways (SSPs), a set of scenarios designed for climate change research, but increasingly being applied more broadly to research in sustainability and environmental change.

What are the SSPs?
The Shared Socioeconomic Pathways are about the future: how it could look under different sets of conditions. When we want to talk about the future, or we need to think about the future, we always have to do some kind of projection. Whatever the topic is, even in our personal lives, we can use scenarios to map out how things might develop, creating different pathways, which can allow us to better understand how our choices could affect those pathways.

Socioeconomic refers to the major demographic, social, and economic factors that can affect future changes on our planet. But within this broad umbrella, there are multiple disciplines that work on their own topics and have their own methods and data. If they want to work together, they have to match with each other, so that the output of one group can be the input of another. That’s why the word shared is there.

The SSPs were developed for the Intergovernmental Panel on Climate Change (IPCC). Why were they needed?
For one thing, we just needed to update the data from the earlier generation of emissions scenarios, and define new scenarios. But secondly, the focus changed a bit between the IPCC’s last report and the most recent one, released in 2014. In the new scenarios, the focus is more on challenges to adaptation and mitigation of climate change. These dimensions are harder to incorporate because they depend on a lot of socioeconomic factors.

Researchers use scenarios to map out a range of possible future developments in the socioeconomic factors that influence climate change. © Salvatore Vastano via Flickr

You worked specifically on the population projections for the SSPs, which were published in 2014. How did this process work?
The first thing that we did was to define narratives for each of the SSPs, essentially a story about how the world would look in the future. This first part is very important. These narratives were based on current scientific knowledge of how the variables are related and interact.

Then, for each of the pathways, we had to start defining variables like population, urbanization, technological change, and the economy. Since population is one of the first variables you need in order to calculate other socioeconomic variables, it was the first thing we looked at when turning the narratives into quantitative projections. Population is needed as a multiplier to calculate future demand: for example, how much energy will be required in the future, how much water, and many other things. At the same time, when there are adverse effects of climate change, the population determines how many people are impacted, as well as who and where. For example, researchers studying air pollution need population data to see how air pollution will affect people. So population is an important variable.
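The “population as a multiplier” idea can be sketched in a few lines. Everything here is hypothetical: the group names, population sizes, and per-capita energy intensities are invented for illustration and are not SSP values:

```python
# Illustrative sketch: population acting as a multiplier for future demand.
# All numbers are invented for the example; they are not from the SSPs.
population = {"urban": 6.0e6, "rural": 4.0e6}      # people, by group
mwh_per_capita = {"urban": 2.5, "rural": 1.2}      # energy use per person per year

# Total demand is the sum over groups of population x per-capita intensity.
# Treating the population as heterogeneous matters: the same total of
# 10 million people with a different urban/rural split gives a different demand.
demand_mwh = sum(population[g] * mwh_per_capita[g] for g in population)
print(f"Projected demand: {demand_mwh / 1e6:.1f} TWh per year")
```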

It was an iterative process—there were lots of calls, sometimes involving 10 or 15 people from many different fields. Whenever we had something to share or something to decide, it was done in this big group. It was a lot of talking and listening to others. That was very educational for me, because I learned a lot about how people are using population data. It was a very good dialogue—people had sometimes very simple questions, but sometimes very interesting questions about population, fertility, mortality, and those kinds of things.

How did your population projections differ from previous demographic data used for climate research?
In most climate research until recently, population was used as a total number. Populations were assumed to be homogeneous—everybody the same, with the average representing everyone. We argued that that is not the case: you need to consider population heterogeneity, not only age and sex but also education levels. There is a growing body of research showing that these details make a difference.

Not everybody is using it yet, but, for example, people working on GDP have used it, and hopefully more and more will use these factors in the future. We have shown in the past that knowing the education level of the population can help us make better projections. Having a more educated population has effects on many other socioeconomic measures. For example, more educated societies have higher levels of productivity. Education level has also been used to calculate the speed of technological change: in societies with highly educated people, technological change comes faster. And these factors are key to understanding humanity’s vulnerability to climate change, our ability to adapt, and our chances of solving the problem.

Schoolchildren in Indonesia: Population variables like education have big impacts on greenhouse gas emissions and vulnerability to climate change. © Asian Development Bank

A lot of your work focuses on what might happen in the future. How do you explain to people the difference between scenarios or projections and predictions?
When we make projections about the future, we don’t use the word “prediction.” The chances that such a projection will be wrong are 100%. We can never say exactly what will happen in the future.

It’s important to understand how the narratives were defined, how we defined the scenarios. We cannot guarantee the future or the results, but we can guarantee the quality of what can be done, of what we can say now, today, about the future. And then there is the question of uncertainty: we have said something about the future, but we haven’t reported any kind of uncertainty there, other than the range of scenarios. This is a big area for future work. It’s difficult to do, and it would be difficult to interpret, but it’s important to consider.

More information

References
KC S, Lutz W (2014). The human core of the shared socioeconomic pathways: Population scenarios by age, sex and level of education for all countries to 2100. Global Environmental Change. http://pure.iiasa.ac.at/10759/

Riahi K, van Vuuren DP, Kriegler E, et al. (2016). The Shared Socioeconomic Pathways and their energy, land use, and greenhouse gas emissions implications: An overview. Global Environmental Change. http://pure.iiasa.ac.at/13280/

This article gives the views of the interviewee, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.

Are the Paris climate commitments fair?

Shinichiro Fujimori, guest researcher in the IIASA Energy Program, discusses the implications of a recent paper with IIASA Science writer and Editor Daisy Brickhill.

The climate mitigation costs of the Paris Agreement are fairly distributed between countries, but they are not fair for future generations, a new IIASA study has found. This suggests that the relative differences between countries’ climate commitments can be kept the same, but to ensure equity for our descendants they must all be raised.

The Paris Agreement allows each country to set its own climate commitments (known as the Intended Nationally Determined Contributions, or INDCs), and while this autonomy encourages more states to enter into the agreement, it may result in some countries freeloading by not making their fair share of cuts. There is also a trade-off between the mitigation investments we make now, and how much we leave for our descendants to deal with. The study by Fujimori and colleagues examines the issue of equity from different angles.

How did you measure the equity of the climate commitments?

We designed four scenarios: there was the baseline, which has no climate policy, and therefore no emission constraints at all. Then there was a scenario with a carbon price that is the same all over the world, set high enough to put us on course to meet targets to keep warming well below 2°C by 2100. The third scenario allowed different countries to have different carbon prices, meaning that they followed their current INDCs until 2030, but at that point a global carbon price was again put in place to ensure that we reach 2°C targets by the end of the century. Finally, we created a scenario where all emissions reduction targets were 20% higher than the INDCs until 2030. Again, after that a global carbon price was set. For all of the scenarios we also varied what is known in economics as the “discount rate.”

What is a discount rate?

People tend to devalue the future. So, for example, given the choice of €100 now or €150 in five years, many people would choose the €100 now. This is known as a time preference. You can add to this an “inequality aversion”: the amount by which a wealthy person is willing to reduce their consumption in order to increase the amount a poor person can consume. Together they make up the discount rate.

We used different values of the discount rate to see what might happen if people cared a lot about future generations or poorer countries, or if they did not.
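One common way to combine the two components is the Ramsey formula, r = ρ + η·g, where ρ is the pure time preference, η the inequality aversion, and g the growth rate of per-capita consumption. The sketch below shows how the resulting rate devalues a future cost; the parameter values are illustrative, not those used in the study:

```python
# Illustrative sketch of discounting with a Ramsey-style discount rate
# r = rho + eta * g (pure time preference + inequality aversion x growth).
# Parameter values are made up for illustration, not taken from the study.
def present_value(future_cost, years, rho=0.01, eta=1.5, g=0.02):
    r = rho + eta * g                     # the discount rate
    return future_cost / (1 + r) ** years

# The same 100-euro mitigation cost borne 50 years from now looks very
# different depending on how strongly we prefer the present:
caring = present_value(100, 50, rho=0.001)   # low time preference
selfish = present_value(100, 50, rho=0.030)  # strong preference for the present
print(f"{caring:.1f} vs {selfish:.1f}")
```

With a low time preference the future cost still weighs heavily today; with a high one it almost vanishes, which is why the choice of discount rate drives the intergenerational results.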

And, are the INDCs fair?

We found that delaying emissions reductions will push the costs onto future generations. In all our scenarios, regardless of the discount rate, there was inequality between the generations. The best scenario for equity between current and future generations was the second scenario with high, globally uniform carbon taxes that start immediately.

© Robwilson39 | Dreamstime

The inequity between generations was not unexpected, but what was surprising was that under the Paris Agreement the equity between countries was good. The third scenario, which followed the INDCs until 2030, had much better equity between the regions until the global carbon price began in 2030. This is because low-income countries tended to set lower carbon prices, and more developed countries had higher carbon prices.

That means that the last scenario is the ideal one. We can keep the relative differences between the INDCs the same but raise them all so that we can meet the targets. That would give us both interregional and intergenerational equity.

What about the costs of the impacts of climate change? The Paris Agreement mentioned the need for a mechanism to support the victims of climate-related loss and damage. Might that not create a completely different picture of equity?

That is not something we covered in the study, but it is very important. We need many more studies in that area. We need flood teams, agricultural teams, and others, all collaborating across disciplines. Very much how IIASA works, in fact. Fortunately, the model we constructed for this study can incorporate all of these aspects, as they become available, and translate them into a comprehensive economic assessment.

Liu JY, Fujimori S, & Masui T (2016). Temporal and spatial distribution of global mitigation cost: INDCs and equity. Environmental Research Letters 11(11): 114004.

This article gives the views of the interviewee, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.

Should food security be a priority for the EU?

By David Leclère, IIASA Ecosystems Services and Management Program

August was the warmest August ever recorded globally, as every single month since October 2015 has been a record for that month. It will not take long for these records to become the norm, and this will tremendously challenge food provision for everyone on the planet. Each additional degree Celsius in global mean temperature will reduce wheat yields by about 5%. While we struggle to take action to limit global warming by the end of the century to 2°C above preindustrial levels, business-as-usual scenarios come closer to +5°C.
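The quoted rule of thumb is easy to apply. The sketch below treats the ~5% loss per degree as a simple linear relationship, purely for illustration; real crop responses are nonlinear and region-specific:

```python
# Illustrative only: applying the ~5% wheat-yield loss per additional
# degree Celsius, quoted above, as a simple linear rule.
def wheat_yield_change(delta_t_celsius, loss_per_degree=0.05):
    """Fractional change in wheat yield for a given warming (linear rule)."""
    return -loss_per_degree * delta_t_celsius

# The well-below-2-degree target vs. a business-as-usual ~5-degree world:
for dt in (2.0, 5.0):
    print(f"+{dt:.0f} C: wheat yield change of about {wheat_yield_change(dt):+.0%}")
```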

However, we lack good, actionable knowledge about this perfect storm in the making. Despite the heat, world wheat production should hit a new record high in 2016, but EU production is expected to be 10% lower than last year. In France, the drop should be around 25-30%, and one has to go back to 1983 to find yields equally low. Explanations now indeed point to weather as a large contributor, but the underlying mechanisms were poorly anticipated by forecasts and are poorly addressed in climate change impacts research.

©Paul Townsend via Flickr

Second, many blind spots remain. For example, livestock accounts for a tremendous share of the carbon footprint of agriculture, but it also has high nutritional and cultural value. Yet livestock was not mentioned even once in the summary for policymakers of the last IPCC report dedicated to impacts and adaptation. Heat stress reduces animal production and increases greenhouse gas emissions per unit of product. In addition, a lower share of animal products in our diets could dramatically reduce pollution and food insecurity. However, we do not understand consumers’ preferences in that respect well, or how they can be translated into actionable policies.

How can we generate adequate knowledge in time while climate is changing? To be able to forecast yields and prevent dramatic price swings like the 2008 food crisis? To avoid bad surprises due to large missing knowledge, like the livestock question?

In short: it will take far more research to answer these questions—and that means a major increase in funding.

I recently presented two studies by our team at a scientific conference in Germany, organized by a European network of agricultural research scientists (MACSUR). One was a literature review on how to estimate the consequences of heat stress on livestock at a global scale. The other presented scenarios on future food security in Europe, generated in a way that delivers useful knowledge for stakeholders. The MACSUR network was funded as a knowledge hub to foster interactions between research institutes of European countries. In many countries, the funding covered travel and workshops, not new research. Of course, nowadays researchers have to compete for funding to do actual research.

So let’s play the game. The MACSUR network is now aiming at a ‘Future and Emerging Technologies Flagship’, the biggest type of EU funding: 1 billion Euros over 10 years for hundreds of researchers. Recent examples include the Human Brain Project, the Graphene Flagship, and the Quantum Technology Flagship. We are trying to get one on modeling food security under climate change.

© Sacha Drouart

Such a project could transform our ability to deal with climate change, a major societal challenge Europe is confronted with (one of the two requirements for FET Flagship funding). The other requirement gave us a hard time at first sight: generating technological innovation, growth, and jobs in Europe. But one just needs the right lens. First, agriculture already sustains about 44 million jobs in the EU, and this will increase if we are serious about reducing the carbon content of our economy. Second, data now flows at an unprecedented speed (big data). Think about the amount of data acquired with Pokémon Go, and imagine harnessing such a concept for science through crowdsourcing and citizen science. With such data, agricultural forecasts would perform much better. Similarly, light drones and connected devices will likely open a new era for farm management. Third, we need models that translate big data into knowledge, and not only for the agricultural sector. Models can also be powerful tools to confront views and could trigger large-scale social innovation.

To get this funding, we need support from a lot of people. The Graphene Flagship claimed support from more than 3,500 actors, from citizens to industrial players across Europe. We have until the end of November to reach at least 3,500 votes. If you think the EU should give food security under climate change the same importance as improving the understanding of the human brain or developing quantum computers, we need you. This will simply never happen without you! Please help us out with two simple actions:

  • Go to the proposal, and vote for or comment on it (see the instructions; please highlight the potential for concrete innovations)!
  • Spread the word – share this post with your friends, your family, and your colleagues!

Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.

Arctic in the spotlight

By Anni Reissell, IIASA Arctic Futures Initiative

It is that time of the year again: in late summer and early fall the media covers the Arctic sea ice extent. Whether it is another record-breaking low, like 2005, 2007, or 2012, or in second place, like this year (see for example the New York Times and the Guardian), the news is not good.

The minimum Arctic sea ice extent this year tied for second-lowest. Credit: National Snow and Ice Data Center

And again, we hear much speculation about when we will start to experience an ice-free Arctic Ocean during summertime. Will it be 2030? 2050?

Are we stuck just keeping track and recording, observing how fast or slow the change is from one year to the next? Or is something different this year?

I believe that yes, there is a bit of a difference – and a bit more hope. We are now in the post-Paris climate agreement (COP21) and UN Sustainable Development Goals (SDGs) world.

Today, 48% of 196 nations have formally bound their governments to the Paris Agreement, and it is anticipated that by the end of the year the required 55 nations, responsible for 55% of global emissions, will have formally committed to it. That is when the agreement takes legal force, although implementation is another issue and a new story.

I attend scientific meetings, as well as meetings gathering science, policy, and business stakeholders. All too often at those meetings, the participants state that we must do this and we must do that, but they are not prepared to offer concrete help or concrete suggestions. They do not commit themselves to anything beyond stating the need, or supervising the statement of needs, happily leaving the planning of implementation and the search for resources to some unnamed others.

The Arctic today is in the spotlight not just in the sense that the world’s attention is briefly focused there: it is melting fast under the effect of a variety of physical forces that concentrate warming in the Arctic region. What could we do to help cool the Arctic more quickly?

Melting sea ice in the Arctic, during a 2011 research cruise. Credit: NASA Goddard Space Flight Center

Reducing greenhouse gas emissions through agreements and voluntary implementation by nations, ramping up the use of renewable energy sources, developing new technology, and then waiting for greenhouse gases to decrease in the atmosphere–this will all take a long time. And it will be much longer before we experience the impacts of the emissions reductions. But in parallel to these slow but indispensable developments, there are faster ways of helping the Arctic in particular. And as a co-benefit, we can clean the air and improve our health, helping the rest of the world as well.

About 25% of the current warming of the Arctic is attributed to black carbon, that is, soot coming from incomplete combustion of fossil fuels.

The main culprit for man-made black carbon in the Arctic surface atmosphere is gas flaring, the wasteful burning of gas by the oil and gas industry. Gas flaring has been found to contribute 42% of the annual mean black carbon surface concentrations in the Arctic, hence dominating black carbon emissions north of 66°N.

A large part of the warming experienced in the Arctic is due to black carbon emissions from the eight Arctic nations and the region north of approximately 40°N, including the European Union, Russia, Ukraine, China, Canada, and part of the USA.

The USA and Canada have agreed to end routine gas flaring by 2030. My hope is that the IIASA Arctic Futures Initiative could bring together science, policy, and business stakeholders from the Arctic nations, along with other concerned parties and countries not yet involved in the discussions, to tackle this problem.

Reference
Stohl A, Aamaas B, Amann M, et al. (2015). Evaluating the climate and air quality impacts of short-lived pollutants. Atmospheric Chemistry and Physics 15: 10529-10566. doi:10.5194/acp-15-10529-2015.

Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.