Sunday, April 2, 2017

Traditional Views, Revisionist Views, and Counter-revisionist Views on the Industrial Revolution

Following up on my post on our paper about the Industrial Revolution, I thought some more context would be useful. The traditional view of the Industrial Revolution was that the availability in Britain of coal, iron ore, and, earlier, water power was a crucial factor that led to the Industrial Revolution occurring in Britain and not elsewhere. Of course, these resources weren't sufficient - industrialization didn't happen in China - and so institutions also seemed to be important. But in recent years economists have increasingly emphasized the role of institutions and downplayed the role of resources. This is what I call the revisionist view. Tony Wrigley and Robert Allen are key exponents of a counter-revisionist view, reemphasizing the role of resources without ignoring the importance of institutions. Our paper is a mathematical and quantitative exploration of the counter-revisionist view.

Economists and historians are divided on the importance of coal in fueling the increase in the rate of economic growth in the Industrial Revolution. Many researchers (e.g. Wilkinson, 1973; Wrigley, 1988, 2010; Pomeranz, 2000; Krausmann et al., 2008; Allen, 2009, 2012; Barbier, 2011; Gutberlet, 2012; Kander et al., 2013; Fernihough and O’Rourke, 2014; Gars and Olovsson, 2015) argue that innovations in the use, and growth in the quantity consumed, of coal played a crucial role in driving the Industrial Revolution. By contrast, some economic historians (e.g. Clark and Jacks, 2007; Kunnas and Myllyntaus, 2009) and economists (e.g. Madsen et al., 2010) either argue that it was not necessary to expand the use of modern energy carriers such as coal, or do not give coal a central role (e.g. Clark, 2014).

Wrigley (1988, 2010) stresses that the shift from an economy that relied on land resources to one based on fossil fuels is the essence of the Industrial Revolution and could explain the differential development of the Dutch and British economies. Both countries had the institutions necessary for an industrial revolution to occur, but capital accumulation in the Netherlands faced a renewable energy resource constraint, while in Britain domestic coal mines, in combination with steam engines - used at first to pump water out of the mines and later for many other purposes - provided a way out of the constraint. Early in the Industrial Revolution, coal had to be transported using traditional energy carriers, for instance by horse carriage, and this was very costly; but the adoption of coal-using steam engines for transport reduced the costs of trade, and the Industrial Revolution spread to other regions and countries.

Pomeranz (2000) makes a similar argument, but addresses the large historical divergence in economic growth rates between England and the Western World on the one hand and China and the rest of Asia on the other. He suggests that shallow coal mines close to urban centers, together with the exploitation of land resources overseas, were very important in the rise of England. "Ghost land", used for the production of cotton for the British textile industry, provided England with natural resources and eased the constraint of the fixed supply of land. In this way, England could break the constraints of the organic economy (based on land production) and enter into modern economic growth.

Allen (2009) places energy innovation center-stage in his explanation of why the Industrial Revolution occurred in Britain. Like Wrigley and Pomeranz, he compares Britain to other advanced European economies of the time (the Netherlands and Belgium) and to the advanced economy of the East: China. England stands out in two ways: coal was relatively cheap there, and labor costs were higher than elsewhere. Therefore, it was profitable to substitute coal-fuelled machines for labor in Britain, even when these machines were inefficient and consumed large amounts of coal; in no other place on Earth did this make sense. Many technological innovations were required to use coal effectively in new applications, ranging from domestic heating and cooking to iron smelting. These induced innovations sparked the Industrial Revolution. Continued innovation that improved energy efficiency, together with reductions in the cost of transporting coal, eventually made coal-using technologies profitable in other countries too.

By contrast, Clark and Jacks (2007) argue that an industrial revolution could still have happened in a coal-less Britain with only "modest costs to the productivity growth of the economy" (68), because the value of coal was only a modest share of British GDP, and because Britain's energy supply could have been greatly expanded, albeit at about twice the cost of coal, by importing wood from the Baltic. Madsen et al. (2010) find that, controlling for a number of innovation-related variables, changes in coal production did not have a significant effect on labor productivity growth in Britain between 1700 and 1915. But as innovation was required to expand the use of coal, this result could make sense even if the expansion of coal was essential for growth to proceed. Neither Clark and Jacks (2007) nor Madsen et al. (2010) allow for the dynamic effects of resource scarcity on the rate of innovation. Tepper and Borowiecki (2015) also find a relatively small direct role for coal but concede that "coal contributed to structural change in the British economy" (231), which they find was the most important factor in raising the rate of economic growth. On the other hand, Fernihough and O’Rourke (2014) and Gutberlet (2012) use geographical analysis to show the importance of access to local coal in driving industrialization and urban population growth, though Kelly et al. (2015) provide contradictory evidence on this point. Finally, Kander and Stern (2014) econometrically estimate a model of the transition from biomass energy (mainly wood) to fossil fuel (mainly coal) in Sweden, showing the importance of this transition for economic growth there.

Our new paper shows that the switch to coal in response to resource scarcity is a plausible explanation of how an increase in the rate of economic growth and a dramatic restructuring of the economy could be triggered in a country with a suitable environment for innovation and capital accumulation. We argue that, in the absence of resource scarcity, this shift might not have happened, or might have been much delayed.

References

Allen, Robert C. 2009. The British Industrial Revolution in Global Perspective. Cambridge: Cambridge University Press.

Allen, Robert C. 2012. "The Shift to Coal and Implications for the Next Energy Transition." Energy Policy 50: 17-23.

Barbier, Edward B. 2011. Scarcity and Frontiers: How Economies Have Developed Through Natural Resource Exploitation. Cambridge and New York: Cambridge University Press.

Clark, Gregory. 2014. “The Industrial Revolution.” In Handbook of Economic Growth, Vol 2A, edited by Philippe Aghion and Steven Durlauf, 217-62. Amsterdam: North Holland.

Clark, Gregory, and David Jacks. 2007. “Coal and the Industrial Revolution 1700-1869.” European Review of Economic History 11: 39–72.

Fernihough, Alan, and Kevin Hjortshøj O’Rourke. 2014. “Coal and the European Industrial Revolution.” NBER Working Paper 19802.

Kander, Astrid, Paolo Malanima, and Paul Warde. 2013. Power to the People: Energy in Europe over the Last Five Centuries. Princeton, NJ: Princeton University Press.

Kander, Astrid, and David I. Stern. 2014. “Economic Growth and the Transition from Traditional to Modern Energy in Sweden.” Energy Economics 46: 56-65.

Kelly, Morgan, Joel Mokyr, and Cormac Ó Gráda. 2015. "Roots of the Industrial Revolution." UCD Centre for Economic Research Working Paper WP2015/24.

Krausmann, Fridolin, Heinz Schandl, and Rolf Peter Sieferle. 2008. “Socio-Ecological Regime Transitions in Austria and the United Kingdom.” Ecological Economics 65: 187-201.

Madsen, Jakob B., James B. Ang, and Rajabrata Banerjee. 2010. “Four Centuries of British Economic Growth: the Roles of Technology and Population.” Journal of Economic Growth 15(4): 263-90.

O’Rourke, Kevin Hjortshøj, Ahmed S. Rahman and Alan M. Taylor. 2013. “Luddites, the Industrial Revolution, and the Demographic Transition.” Journal of Economic Growth 18: 373-409.

Pomeranz, Kenneth L. 2000. The Great Divergence: China, Europe, and the Making of the Modern World Economy. Princeton, NJ: Princeton University Press.

Tepper, Alexander, and Karol J. Borowiecki. 2015. “Accounting for Breakout in Britain: The Industrial Revolution through a Malthusian Lens.” Journal of Macroeconomics 44: 219-33.

Wilkinson, Richard G. 1973. Poverty and Progress: An Ecological Model of Economic Development. London: Methuen.

Wrigley, E. Anthony. 1988. Continuity, Chance, and Change: The Character of the Industrial Revolution in England. Cambridge: Cambridge University Press.

Wrigley, E. Anthony. 2010. Energy and the English Industrial Revolution. Cambridge: Cambridge University Press.

Wednesday, March 29, 2017

From Wood to Coal: Directed Technical Change and the British Industrial Revolution

We have finally posted our long-promised paper on the Industrial Revolution as a CAMA Working Paper. This is the final paper from our ARC-funded DP12 project: "Energy Transitions: Past, Present and Future". The paper is coauthored with Jack Pezzey and Yingying Lu. We wrote our ARC proposal in 2011, but we "only" started work on the current model in late 2014 after I read Acemoglu's paper "Directed Technical Change" in detail on a flight back to Australia and figured out how to apply it to our case. We have presented the paper many times in seminars and conferences, though I will be presenting it again at the University of Sydney on April 6th.

The paper develops a directed technical change model of economic growth with two sectors, each using a specific type of energy as well as machines and labor. The Malthus sector uses wood, which is only available in a fixed quantity per year, and the Solow sector uses coal, which is available at a fixed price. These assumptions are supported by the data. We don't think it is necessary to model coal as an explicitly non-renewable resource: as shallow deposits were worked out, technological change, including the development of the steam engine, allowed the exploitation of deeper deposits at more or less constant cost.

The names of the sectors come from the paper by Hansen and Prescott (2002): Malthus to Solow.  That paper assumes that technological change is exogenous and happens at a faster fixed rate in the Solow sector (which only uses labor and capital) than in the Malthus sector (which also uses a fixed quantity of land). The Solow sector is initially backward but because technical change is more rapid in that sector and it is not held back by fixed land, eventually it comes to dominate the economy in an industrial revolution.
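The Hansen and Prescott mechanism is easy to see in a toy simulation. The sketch below is my own illustration of their logic, not their calibrated model; the growth rates, land share, and initial productivities are invented purely for illustration:

```python
# Stylized Hansen-Prescott mechanism: exogenous technical change is
# faster in the Solow sector, which uses no fixed land, so its output
# share eventually rises - an "industrial revolution".
# All parameter values here are illustrative, not calibrated.

def solow_share(periods=500, g_malthus=0.001, g_solow=0.01,
                land_share=0.3):
    """Return the Solow sector's share of total output in each period.

    Labor is fixed at 1 and split equally between the sectors for
    simplicity; land (fixed at 1) enters only the Malthus technology.
    """
    a_m, a_s = 1.0, 0.1   # the Solow sector starts out backward
    land, labor = 1.0, 0.5
    shares = []
    for _ in range(periods):
        y_m = a_m * labor ** (1 - land_share) * land ** land_share
        y_s = a_s * labor  # constant returns in labor alone
        shares.append(y_s / (y_m + y_s))
        a_m *= 1 + g_malthus  # slow progress with fixed land
        a_s *= 1 + g_solow    # faster progress without it
    return shares

shares = solow_share()
```

Despite starting with a tenth of the Malthus sector's productivity, the Solow sector's output share rises monotonically and eventually dominates, because its faster productivity growth is never held back by the fixed factor.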

Our paper updates this model for the 21st Century. In our model, technological change is endogenous, as is the speed with which it happens in each sector - the direction of technical change. We don't assume, a priori, that it is easier to find new ideas in the coal-using sector. In fact, we don't assume any differences between the sectors apart from the supply conditions of the two energy sources, which we explicitly model.

In most cases, an industrial revolution eventually happens. The most interesting case arises when the elasticity of substitution between the outputs of the Malthus and Solow sectors is sufficiently high - greater than 2.9, based on our best guesses of the model parameters for Britain. Then, if wood is relatively abundant, it is possible for an economy to remain trapped forever in what we call Malthusian Sluggishness, where growth is very low.* Population growth can push an economy out of this zone by raising the price of wood relative to coal, sending the economy on a path to an industrial revolution.
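The role of the substitution elasticity is easiest to see in the standard directed-technical-change setup the model builds on. The CES aggregator below is the generic Acemoglu-style form, written here as a sketch rather than copied from our paper:

```latex
% Final output combines the Malthus (wood-using) and Solow (coal-using)
% sector outputs with elasticity of substitution \varepsilon:
Y = \left[ Y_M^{\frac{\varepsilon - 1}{\varepsilon}}
         + Y_S^{\frac{\varepsilon - 1}{\varepsilon}} \right]^{\frac{\varepsilon}{\varepsilon - 1}}
```

When the elasticity is above one, the two outputs are gross substitutes, so the economy can keep growing while specializing in just one sector; at high enough elasticities this is what makes permanent specialization in the Malthus sector, and hence the trap, possible.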

These two phase diagrams show the two alternative paths an economy can take in the absence of population growth, depending on its initial endowment of knowledge and resources:

N is the ratio of knowledge in the Malthus sector (actually, varieties of machines) to knowledge in the Solow sector, y is the ratio of the outputs of the two sectors, and e is the ratio of the price of wood to the price of coal. In the first diagram, we see that an economy on an industrial revolution path at first has rising wood prices relative to coal; initially, technical change is also more rapid in the Malthus sector than in the Solow sector, so N rises too. In the long run both these trends reverse: under Modern Economic Growth, technical change is more rapid in the Solow sector and the relative price of wood falls. At the same time, we see in the second diagram that the output of the Solow sector eventually grows more rapidly than that of the Malthus sector, so that y falls. The rate of economic growth also accelerates.

But an economy that starts out with a low relative wood price, e, or little knowledge in the Solow sector relative to the Malthus sector (a high N), can remain trapped with rising wood prices AND increasing specialization in the Malthus sector - rising y and N. Though there is coal lying underground, it is never exploited, even though switching to coal would unleash more rapid economic growth in the long run. The myopic, but realistic, focus on near-term profits from innovation discourages the required innovation in the Solow sector.

The core of the paper is a set of formal propositions laying out the logic of these findings but we also carry out simulations of the model calibrated to the British case over the period 1560-1900. Counterfactual simulations with more abundant wood, more expensive coal, more substitutability, less initial knowledge about using coal, or less population growth all delay the coming of the Industrial Revolution.

* We assume either that population is constant or treat its growth as exogenous.

Tuesday, March 28, 2017

Cohort Size and Cohort Age at Top US Economics Departments

I'm working on a new bibliometrics paper with Richard Tol. We are using Glenn Ellison's data set on economists at the top 50 U.S. economics departments as a testbed for our ideas. I had to compute the size of each year cohort for one of our calculations, and thought this graph of the number of economists at the 50 departments in each "academic age" year was interesting:


There isn't as sharp a post-tenure drop-off in numbers as you might expect, given the supposedly strict tenure hurdle these departments impose. But, as we can see, the cohorts increase in size up to year 5, which might be explained by post-docs and other temporary appointments, or by people moving up the rankings after a few years at a lower-ranked department. As a result, the tenure-or-out year would also be spread over a few years. On the other hand, as the data were collected in 2011, the Great Recession might also explain the lower numbers for the first few years.

A post-retirement drop-off only really seems to occur after 39 years. The oldest person in the study by academic age was Arnold Harberger.

Thursday, March 23, 2017

Two New Working Papers

We have just posted two new working papers: Technology Choices in the U.S. Electricity Industry before and after Market Restructuring and An Analysis of the Costs of Energy Saving and CO2 Mitigation in Rural Households in China.

The first paper, coauthored with Zsuzsanna Csereklyei, is the first to emerge from our ARC funded DP16 project.  Our goal was to look at the factors associated with the adoption of more or less energy efficient electricity generating technologies using a detailed US dataset. For example, combined cycle gas turbines are more energy efficient than regular gas turbines and supercritical coal boilers are more efficient than subcritical. Things are complicated by the different roles that these technologies play in the electricity system. Because regular gas turbines are less energy efficient but have lower capital costs they are mainly used to provide peaking power, while combined cycle turbines contribute more to baseload. So comparing combined cycle gas to subcritical coal makes more sense as a test of how various factors affect the choice of energy efficiency than comparing the two types of gas turbine technologies.

Additionally, some US regions underwent electricity market reform, in which either just wholesale or both wholesale and retail markets were liberalized, while other regions have retained integrated regulated utilities, which are typically guaranteed a rate of return on capital. Unless regulators press utilities to adopt energy-efficient technologies, there is much less incentive to do so under rate-of-return regulation than under wholesale markets.


The graph shows that, following widespread market reform at the end of the 20th Century, there was a big boom in investment in the two main natural gas technologies. More recently, renewables have played an increasing role, and there was a revival of investment in coal up to 2012. These trends are also partly driven by the lagged (because investment takes time) effects of fuel prices:


We find that electricity market deregulation resulted in significant immediate investment in various natural gas technologies and a reduction in coal investments. However, deregulation had a less negative impact on high-efficiency coal technologies. In states that adopted wholesale electricity markets, high natural gas prices resulted in more investment in coal and renewable technologies.

There is also evidence that market liberalization encouraged investment in more efficient technologies. High-efficiency coal technologies were less negatively affected by market liberalization than less efficient coal technologies, and liberalization also resulted in increased investment in high-efficiency combined cycle gas. In summary, the effect of liberalization is most negative for the least efficient coal technology and most positive for the most efficient natural gas technology.

The second paper is based on a survey of households in rural China and assesses the potential for energy conservation and carbon emissions mitigation when energy saving technologies are not fully implemented. In reality, appliances do not always survive for their designed lifetime and households often continue to use other older technologies alongside the new ones. The effect is to raise the cost of reducing energy use and emissions by a given amount. The paper computes marginal abatement cost curves under full and partial implementation of the new technologies.
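Mechanically, a marginal abatement cost curve is constructed by sorting measures by cost per tonne abated and cumulating their abatement; partial implementation scales down the abatement actually delivered while the cost is still incurred, shifting the curve up. A minimal sketch, with invented options and numbers rather than anything from the survey:

```python
# Build a marginal abatement cost (MAC) curve: sort technology options
# by cost per tonne of CO2 avoided, then cumulate their abatement.
# All options and figures here are hypothetical illustrations.

def mac_curve(options, implementation=1.0):
    """Return (cumulative abatement, cost per tonne realized) points.

    `implementation` scales realized abatement: partial implementation
    (< 1) delivers less abatement for the same cost, so the cost per
    tonne actually achieved rises.
    """
    points, cumulative = [], 0.0
    for name, cost, abatement in sorted(options, key=lambda o: o[1] / o[2]):
        realized = abatement * implementation
        cumulative += realized
        points.append((cumulative, cost / realized))
    return points

options = [
    ("LED lighting", 50.0, 2.0),      # (name, cost, tonnes CO2 avoided)
    ("wall insulation", 400.0, 5.0),
    ("efficient stove", 120.0, 3.0),
]
full = mac_curve(options, implementation=1.0)
partial = mac_curve(options, implementation=0.6)
# Partial implementation shifts every point up (higher cost per tonne)
# and left (less total abatement).
```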


The graph shows the marginal abatement cost curve for rural households in Hebei Province, scaled up from the survey and our analysis. Full-Scenario is the curve with full implementation of new technologies and OII-Scenario is with actual partial implementation. This analysis does not take into account any potential rebound effect of energy efficiency improvements.

The first author, Weishi Zhang, is a PhD student at the Chinese University of Hong Kong. She contacted me last year about possibly visiting ANU, and I supported her application for a scholarship to fund the visit (which unfortunately she didn't get), because I thought her research was some of the more interesting research on Chinese energy use and pollution that I had seen. I helped write the paper (and responses to referees in our revise and resubmit).

Monday, March 13, 2017

March Update

Just realized that we are already in the third month of the year and I haven't posted anything here yet! Things have been very busy with both work and family, so there hasn't been time to put out the blogposts only indirectly related to my research that I used to do - instead I'll usually tweet something on those topics - and research-wise things have either been at the relatively early research stages or the final publication stages. But there will soon be some new working papers going up and some blogposts here discussing them!

On the research front, in January we were mainly focused on putting the final touches on our climate change paper in time for the deadline for the special issue of the Journal of Econometrics. My coauthors want to wait for some feedback before posting a working paper on that. Then in February my collaborators Stephan Bruns and Alessio Moneta visited Canberra to work with me on modeling the economy-wide rebound effect as part of our ARC DP16 project. I spent the first half of the month working hard on the topic to prepare for their visit. We made good progress but it will be at least a few months till we have a paper on the topic ready. So far, it seems robust that the rebound effect is big. Then since they left, I've been catching up.

Recently, Paul Burke said: "You've already got three papers accepted this year - are you going to keep that pace up? ;)" He'd been keeping better count than me! Our original paper on the growth rates approach to modeling emissions and economic growth was accepted at Environment and Development Economics. Two related papers were also accepted - at Journal of Bioeconomics and Climatic Change. I also have three revise and resubmits to be working on... though one of those came in 2016... I'll put out one or two of those as working papers when we resubmit them.

Thursday, December 29, 2016

Ranking Economics Institutions Applying a Frontier Approach to RePEc data

Back in 2010, I posted that the RePEc ranking of economics institutions needed to be adjusted for size. Better-quality institutions do tend to be bigger, but because RePEc just sums up publications, citations, etc. rather than averaging them, larger institutions also get a higher RePEc ranking even if they aren't actually better quality. In that post, I suggested using a frontier approach. The idea is that the average faculty member at Harvard is perhaps similar to one at Chicago (I haven't checked this), but because Harvard is bigger it is better. So, looking only at the average scores of faculty members might also produce a misleading ranking.

A reader sent me an e-mail query about an updated version of this and I thought that was a good idea for a new post:


The chart shows the RePEc rank for 190 top-level institutions (I deleted NBER) against their number of registered people on RePEc. I drew a concave frontier by hand. How have things changed since 2010? The main change is the appearance of Stanford on the frontier. Also, the Federal Reserve is now listed as one institution, so the Minnesota Fed has dropped off the frontier. Dartmouth is now slightly behind the frontier and Tel Aviv looks like it has also lost a little ground. Otherwise, not much has changed.
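For readers who would rather compute such a frontier than draw it by hand, the upper concave envelope of the (size, score) scatter can be found with a standard upper-convex-hull pass. The institutions and numbers below are made up for illustration:

```python
# Sketch of the frontier approach to size-adjusted ranking: institutions
# on the upper concave envelope of (size, total score) points are the
# best for their size. Data below are invented for illustration.

def upper_frontier(points):
    """Upper convex hull of (size, score) points, left to right."""
    hull = []
    for p in sorted(points):
        # Pop the last hull point while it lies on or below the chord
        # from the point before it to p (i.e. it is dominated).
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) >= (p[0] - x1) * (y2 - y1):
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

institutions = {
    "A": (10, 50), "B": (20, 120), "C": (30, 130),
    "D": (40, 200), "E": (25, 60),
}
frontier = upper_frontier(institutions.values())
on_frontier = [n for n, p in institutions.items() if p in frontier]
# C and E score less than the envelope through A, B, and D predicts
# for their size, so they lie behind the frontier.
```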

Monday, December 26, 2016

Annual Review 2016

I've been doing these annual reviews since 2011. They're mainly an exercise for me to see what I accomplished and what I didn't in the previous year. The big change this year, mentioned at the end of last year's review, is that we had a baby in February. I ended up taking six weeks leave around the birth. Since then, I've been trying to adjust my work-life balance :) I'm trying to get more efficient at doing things, dropping things that aren't really necessary, and scheduling work time more. None of this is easy, at least for me. It's mainly anything that isn't work, baby, or housework that gets squeezed out. I'm still director of the International and Development Economics program at Crawford, and will be for at least the next six months, after which I hope to pass the role on to someone new, though they haven't been identified as yet. During my time as director, we've made less progress on various initiatives than I would have liked, due to internal ANU politics.

The highlights of the year were being elected a fellow of the Academy of the Social Sciences in Australia and our consortium being awarded a five-year contract by the UK DFID to research energy for economic growth in Sub-Saharan Africa and South Asia. I attended the annual ASSA symposium and other events in November, where new fellows are welcomed. On the DFID project, we are looking in particular at how electrification can best enhance development. Also in November, I attended the "Research and Matchmaking Conference" in Washington DC, where we presented the results of our first year of research and interacted with policymakers from developing countries and others. In the first year, the main activity has been writing 18 state-of-knowledge papers. I have been writing a paper with Stephan Bruns and Paul Burke on the macroeconomic evidence for the effects of electrification on development.


Work got started on our ARC DP16 project. Zsuzsanna Csereklyei joined us at ANU as a research fellow working on the project. She is focusing on the technology diffusion theme. 

I published a record number of journal articles - in total, eight! Somehow a lot of things just happened to get published this year. It's easiest just to list them with links to the blogposts that discuss them:

Ma C. and D. I. Stern (2016) Long-run estimates of interfuel and interfactor elasticities, Resource and Energy Economics 46, 114-130. Working Paper Version | Blogpost

Bruns S. B. and D. I. Stern (2016) Research assessment using early citation information, Scientometrics 108, 917-935. Working Paper Version | Blogpost

Stern D. I. and D. Zha (2016) Economic growth and particulate pollution concentrations in China, Environmental Economics and Policy Studies 18, 327-338. Working Paper Version | Blogpost | Erratum

Lu Y. and D. I. Stern (2016) Substitutability and the cost of climate mitigation policy, Environmental and Resource Economics 64, 81-107. Working Paper Version | Blogpost

Sanchez L. F. and D. I. Stern (2016) Drivers of industrial and non-industrial greenhouse gas emissions, Ecological Economics 124, 17-24. Working Paper Version | Blogpost 1 | Blogpost 2

Costanza R., R. B. Howarth, I. Kubiszewski, S. Liu, C. Ma, G. Plumecocq, and D. I. Stern (2016) Influential publications in ecological economics revisited, Ecological Economics. Working Paper Version | Blogpost

Csereklyei Z., M. d. M. Rubio Varas, and D. I. Stern (2016) Energy and economic growth: The stylized facts, Energy Journal 37(2), 223-255. Working Paper Version | Blogpost

Halkos G. E., D. I. Stern, and N. G. Tzeremes (2016) Population, economic growth and regional environmental inefficiency: Evidence from U.S. states, Journal of Cleaner Production 112(5), 4288-4295. Blogpost

I also updated my article on economic growth and energy in the Elsevier Online Reference Materials. Citations shot past 11,000 on Google Scholar (h-index: 42) and will total more than 12,000 when all citations for this year are eventually collected by Google.

I have two papers currently under review (also two book chapters, see below). First, there is a survey paper on the environmental Kuznets curve, which I have now resubmitted to a special issue of the Journal of Bioeconomics that emerged from the workshop at Griffith University I attended last year. So, this should be published soon. Then there is our original paper on the growth rates approach to modeling the emissions-income relationship. I have resubmitted our paper on global particulate concentrations. We have a revise and resubmit for the paper on meta-Granger causality testing.

Some other projects are nearing completion. One is a new climate econometrics paper; Stephan Bruns presented our preliminary results at the Climate Econometrics Conference in Aarhus in October, and I posted some excerpts from our literature review on this blog. We are also still wrapping up work on our paper on the British Industrial Revolution. Last year, I forecast we would soon have a working paper out on it; I'll have to make that forecast again! We also want to turn our state-of-knowledge paper for the EEG project into a publication. Of course, there is a lot more work at much earlier stages. For example, this week I've been working on a paper with Akshay Shanker on explaining why energy intensity has declined over time in countries such as the US. It's not as obvious as you might think! We've been working on this on and off for a couple of years, but it now looks much more likely that we will really complete the paper. I'm also going to see if I can complete a draft of a paper following up on this blogpost in the next day or so. And, of course, there are the DP16 projects on energy efficiency, and some long-term projects that I really want to return to and finish, but other things keep getting in the way.

My first PhD student here at Crawford, Alrick Campbell, submitted his PhD thesis in early December. It consists of four papers on energy issues in small island developing states (SIDS). The first of these looks at the effect of oil price shocks on economic growth in SIDS using a global vector autoregression model. He finds that oil price shocks have only small negative effects on most oil importing SIDS and positive effects, as expected, on oil exporting countries such as Bahrain or Trinidad and Tobago. These results are interesting as many of the former economies are fairly dependent on imported oil and would be expected to be susceptible to oil price shocks. The remaining papers estimate elasticities of demand for electricity for various sectors in Jamaica, look at the choice between revenue and price caps for the regulation of electric utilities, and benchmark the efficiency of SIDS electric utilities using a data envelopment analysis. My other student (I'm also on a couple of other PhD panels), Panittra Ninpanit, presented her thesis proposal seminar.


Because of the baby, I didn't travel as much this year as I have in previous years. I gave online keynote presentations at conferences in Paris and at Sussex University on energy and growth.  In September and October I visited Deakin U., Curtin U., UWA, and Swinburne U. to give seminars. Then in late October and early November I visited the US for a week to attend the EEG conference in Washington DC, mentioned above.

I only taught one course this year - Energy Economics. I got a reduction in teaching as compensation for being program director instead of receiving extra pay. As a result, I didn't teach in the first semester, which was when the baby arrived.

The total number of blogposts this year was slightly lower than last year, averaging three per month. As my Twitter followers increase in number - now over 500 - I find that readership of my blog is becoming very spiky, with hundreds of readers visiting after I make a post and tweet it, and then falling back to a low background level of 20-30 visits per day. The most popular post this year was Corrections to the Global Temperature Record, with about 650 reads.

Looking forward to 2017, it is easy to predict a few things that will happen that are already organized:

1. Alessio Moneta and Stephan Bruns will visit Canberra in late February/early March to work on the rebound effect component of the ARC DP16 project.
2. I will visit Brisbane for the AARES annual conference and Singapore for the IAEE international conference. I just submitted an abstract for the latter, but it's pretty likely I'll go, especially as there are now direct flights from Canberra to Singapore.
3. I will be the convener for Masters Research Essay in the first semester and again teach Energy Economics in the second semester.
4. I will publish two book chapters on the environmental Kuznets curve in the following collections: Oxford Research Encyclopedia of Environmental Economics and The Companion to Environmental Studies (Routledge).


In the realm of the less predictable, for the first time in five years I actually applied for a job, and had a Skype interview for it a couple of weeks ago. I wasn't really looking for a job but saw an attractive advertisement that a former Crawford PhD student sent me. No idea if anything more will come of that...

Sunday, November 20, 2016

World Energy Outlook 2016 and the Rebound Effect

I've been asked to make some brief comments on the 2016 World Energy Outlook, just published by the IEA, at the ANU Energy Change Institute's 2016 Energy Update. It's a huge report, but I'll focus on the global projections for energy use and GHG emissions. I think that the IEA are still over-optimistic about the potential for energy intensity improvements and underestimate the future contribution of non-fossil energy. Under the "Current Policies" scenario they expect fossil fuels to provide 79% of total energy in 2040 vs. 81% today. The current rapid growth of renewables under current policies makes me skeptical about that. The projected decline in world energy intensity is also more rapid than in recent decades.

Three main scenarios used throughout the report are summarized in the following Figure:


The "New Policies Scenario" includes policies from NDCs where a policy to implement the pledge appears to actually exist. The "450 Scenario" is where policies that would actually limit warming to 2 degrees C are implemented. Clearly, decarbonization is minimal under the current policies scenario and not that great under the new policies scenario. But the improvement in energy intensity is very large under all scenarios and does the vast majority of the work in reducing CO2 emissions. How plausible is this huge reduction in energy intensity? Here, I plot the historical global trend in energy intensity and the growth rates projected under the current and new policies scenarios:

The current policies scenario projects a faster rate of reduction in energy intensity than the 1990-2015 mean. This is possible - the rate of change might accelerate - but I am skeptical. Just looking at the data, in the last few business cycles energy intensity rose, or fell only slowly, after recessions compared with the later parts of booms, so we seem likely to go through more cycles like these. Another issue is that the Chinese economy might have grown more slowly in the last couple of years than the government admits. This would have exaggerated the global decline in energy intensity, though probably not by a lot. But the main reason for my skepticism is that energy efficiency improvements do not translate one-for-one into reductions in energy intensity. The rebound effect, which we are researching in our ARC DP16 grant, means that improvements in energy efficiency lead to increases in the use of "energy services" - heating, lighting, transport, etc. - so that energy use does not fall by as much as it would if all of the efficiency improvement flowed through to energy consumption. At the micro-economic level, this is simply because these energy services become cheaper as a result of the efficiency improvement. At the macro level, things are more complicated. I suspect that the IEA's model, which is driven by exogenous assumptions about things like the rate of economic growth, underestimates the economy-wide rebound effect.
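The basic rebound arithmetic can be sketched in a few lines. This is a minimal illustration with made-up numbers, not the model used in our ARC project:

```python
# A minimal sketch of the rebound arithmetic with made-up numbers
# (this is an illustration, not the model used in our ARC project).
def energy_after_improvement(energy, efficiency_gain, rebound):
    """Energy use after an efficiency gain, given a rebound fraction.

    efficiency_gain: proportional saving per unit of energy service,
                     holding demand for services fixed (0.10 = 10%)
    rebound: fraction of the potential saving taken back as extra
             demand for energy services (0 = none, 1 = full rebound)
    """
    potential_saving = energy * efficiency_gain
    realised_saving = potential_saving * (1.0 - rebound)
    return energy - realised_saving

baseline = 100.0  # arbitrary units
# A 10% efficiency gain cuts energy use by 10% only with zero rebound;
# with a 50% rebound, energy use falls by just 5%.
no_rebound = energy_after_improvement(baseline, 0.10, 0.0)    # 90.0
half_rebound = energy_after_improvement(baseline, 0.10, 0.5)  # 95.0
```

With full (100%) rebound, an efficiency improvement leaves energy use unchanged, which is why efficiency gains need not translate into equal reductions in energy intensity.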

Wednesday, October 26, 2016

The Ocean in Climate Econometrics

Third excerpt (previous excerpts):


Most studies of global climate change using econometric methods have ignored the role of the ocean. Though these studies sometimes produce plausible estimates of the climate sensitivity, they universally produce implausible estimates of the rate of adjustment of surface temperature to long-run equilibrium. For example, Kaufmann and Stern (2002) find that the rate of adjustment of temperature to changes in radiative forcing is around 50% per annum even though they estimate an average global climate sensitivity of 2.03K. Similarly, Kaufmann et al. (2006) estimate a climate sensitivity of 1.8K, while the adjustment coefficient implies that more than 50% of the disequilibrium between forcing and temperature is eliminated each year. Furthermore, the autoregressive coefficient in the carbon dioxide equation of 0.832 implies an unreasonably high rate of removal of CO2 from the atmosphere. The methane rate of removal is also very high.

Simple AR(1) models of this type, in I(1) variables, assume that temperature adjusts exponentially towards the long-run equilibrium. The estimate of the adjustment rate tends to go towards that of the fastest-adjusting process in the system if, as is the case here, that process is the most obvious in the data. Schlesinger et al. (no date) illustrate these points with a very simple first order autoregressive model of global temperature and radiative forcing. They show that such a model approximates a model with a simple mixed-layer ocean, and its parameter estimates can be used to infer the depth of such an ocean. The models that they estimate have inferred ocean depths of 38.7-185.7 meters. Clearly, an improved time series model needs to simulate a deeper ocean component.
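To make the partial-adjustment logic concrete, here is a toy simulation; all numbers are illustrative, not estimates from any of the papers discussed:

```python
# Toy partial-adjustment (AR(1)) temperature model with made-up numbers:
# T_t = T_{t-1} + lam * (T*_t - T_{t-1}), where T* is the equilibrium
# temperature implied by the forcing and the climate sensitivity.

def adjust(lam, t_star, years, t0=0.0):
    """Temperature path adjusting toward a fixed equilibrium t_star (K)."""
    path, t = [], t0
    for _ in range(years):
        t = t + lam * (t_star - t)  # remove fraction lam of the gap each year
        path.append(t)
    return path

# With lam = 0.5 (half the disequilibrium removed per year), about 97%
# of the gap to equilibrium closes within 5 years; far too fast for a
# system coupled to a deep ocean, which is the implausibility noted above.
path = adjust(lam=0.5, t_star=2.0, years=5)
```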
Stern (2006) used a state space model inspired by multicointegration. The estimated climate sensitivity for the preferred model is 4.4K, which is much higher than previous time series estimates, and temperature responds much more slowly to increased forcing. However, this model only used data on the top 300m of the ocean, and the estimated increase in heat content in the pre-observational period seems too large.

Pretis (2015) estimates an I(1) VAR for surface temperature and the heat content of the top 700m of the ocean for observed data for 1955-2011. The climate sensitivity is 1.67K for the preferred model, but 2.16K for a model that excludes the level of volcanic forcing from the radiative forcing aggregate, so that it enters only in first differences. With two cointegrating vectors it is not possible to “read off” the rate of adjustment of surface temperature to increased forcing, and Pretis does not simulate impulse or transient response functions.

References

Kaufmann, R. K., Kauppi, H., Stock, J. H., 2006. Emissions, concentrations, and temperature: a time series analysis. Climatic Change 77(3-4), 249-278.

Kaufmann, R. K., Stern, D. I., 2002. Cointegration analysis of hemispheric temperature relations. Journal of Geophysical Research 107(D2), Article No. 4012.

Pretis, F., 2015. Econometric models of climate systems: The equivalence of two-component energy balance models and cointegrated VARs. Oxford Department of Economics Discussion Paper 750.

Schlesinger, M. E., Andronova, N. G., Kolstad, C. D., Kelly, D. L., no date. On the use of autoregression models to estimate climate sensitivity. mimeo, Climate Research Group, Department of Atmospheric Sciences, University of Illinois at Urbana-Champaign, IL.

Stern, D. I., 2006. An atmosphere-ocean multicointegration model of global climate change. Computational Statistics and Data Analysis 51(2), 1330-1346.  

Monday, October 24, 2016

Recent Estimates of the Climate Sensitivity

Another excerpt from our literature review:

Estimates of the climate sensitivity have been the focus of ongoing debate, with widely differing estimates (Armour, 2016) and notable differences between observation-based and model-based estimates. The consensus in the IPCC 5th Assessment Report (Bindoff et al., 2013) is that the equilibrium climate sensitivity (ECS) falls in the range of 1.5-4.5 K with more than 66% probability. The transient climate response (TCR) falls in the range 1-2.5 K with more than 66% probability. Armour (2016) notes that the range of ECS supported by recent observations is 1-4 K with a best estimate of around 2 K, and the TCR is estimated at 0.9-2.0 K. This suggests that climate models are too sensitive.


Richardson et al. (2016) note that sea surface temperature measurements measure water rather than air temperature, which has warmed faster. Additionally, the most poorly measured regions on Earth, such as the Arctic, have also warmed the most. Richardson et al. (2016) process the CMIP5 model output in the same way as the HADCRUT4 temperature series is constructed – using seawater temperatures and under-sampling some regions. They infer an observation-based best estimate for TCR of 1.66 K, with a 5–95% range of 1.0–3.3 K, consistent with the climate models considered in the IPCC 5th Assessment Report.

Marvel et al. (2016) argue that the efficacy of other forcings is typically less than that of greenhouse gases, so that total radiative forcing is less than standard calculations estimate. When single-forcing experiment results are used to estimate these efficacies, and TCR and ECS are estimated from observed twentieth-century warming, both estimates are revised upward: to 1.7 K for TCR and to 2.6-3.0 K for ECS, depending on the feedbacks included. Armour (2016) highlights the joint (multiplicative) importance of the Richardson et al. (2016) and Marvel et al. (2016) studies, which together should raise observation-based ECS estimates by 60%, reconciling the discrepancy between observation-based and model-based estimates.
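Armour's point is simple multiplicative arithmetic. The two individual factors below are illustrative values chosen to combine to the roughly 60% increase he cites, not numbers taken from either underlying paper:

```python
# Rough arithmetic behind Armour's (2016) reconciliation; the two
# factors are illustrative, chosen to combine to the ~60% increase
# he cites, not values taken from either underlying paper.
best_obs_ecs = 2.0       # K, observation-based best estimate (Armour, 2016)
blending_factor = 1.25   # Richardson et al.: blended/masked observations run cool
efficacy_factor = 1.28   # Marvel et al.: lower efficacy of non-GHG forcings
combined = blending_factor * efficacy_factor   # about 1.6, i.e. +60%
adjusted_ecs = best_obs_ecs * combined         # about 3.2 K, within the model range
```

The key feature is that the corrections multiply rather than add, which is why two moderate adjustments together move the observation-based estimate well inside the model-based range.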

References

Armour, K. C., 2016. Projection and prediction: Climate sensitivity on the rise. Nature Climate Change 6, 896–897.

Bindoff, N. L., Stott, P. A., K. AchutaRao, M., Allen, M. R., Gillett, N., Gutzler, D., Hansingo, K., Hegerl, G., Hu, Y., Jain, S., Mokhov, I. I., Overland, J., Perlwitz, J., Sebbari, R., Zhang, X., 2013: Detection and attribution of climate change: from global to regional. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.

Marvel, K., Schmidt, G. A., Miller, R. L., Nazarenko, L., 2016. Implications for climate sensitivity from the response to individual forcings. Nature Climate Change 6(4), 386-389.

Richardson, M., Cowtan, K., Hawkins, E., Stolpe, M. B., 2016. Reconciled climate response estimates from climate models and the energy budget of Earth, Nature Climate Change 6, 931-935.


Saturday, October 22, 2016

The Role of the Oceans in Global Climate Change

From the draft of the literature review of a paper I am writing with Zsuzsanna Csereklyei and Stephan Bruns. Stephan will be presenting some preliminary results next week at the conference on climate econometrics in Aarhus:

Record high global temperatures occurred in 2015 and are expected in 2016. Nevertheless, the period between 1998 and 2014, when surface temperatures increased much more slowly than in the previous quarter century, has been the subject of intense scrutiny. As the search for the missing pieces of the puzzle began, a number of potential culprits surfaced.


Among the suggested candidates were an increase in anthropogenic sulfur emissions (Kaufmann et al., 2011), declining solar irradiance (Tollefson, 2014; Trenberth, 2015; Kaufmann et al., 2011), and an increase in volcanic aerosols (Andersson et al., 2015) over the examined period, which also coincided with a negative phase of the Pacific Decadal Oscillation (PDO). Similarly, Fyfe et al. (2016) mention anthropogenic sulfate aerosols as contributing factors to the earlier hiatus period from the 1950s to the 1970s. Smith et al. (2016) recently suggested that anthropogenic aerosol emissions might be a driver of the negative PDO. This is, however, in contrast with the findings of Kosaka and Xie (2013), who attribute the hiatus, with high probability, to internal variability rather than forcing.

Karl et al. (2015) argued that the apparent hiatus was due to mis-measurement of surface temperature data. They correct the temperature data for several biases, finding the resulting warming trends between 1950-1999 and 2000-2014 to be “virtually indistinguishable”. However, their approach was critiqued by, among others, Fyfe et al. (2016), who argue that the starting and ending dates of the observation period matter significantly, as the 1950-1970 period also included a big hiatus.

The majority of recent studies agree, however, that exchange of heat between the atmosphere and the oceans is a key player in explaining the surface warming slowdown. Nonetheless, the mechanisms by which oceans absorb and then release heat were not well understood until recently, when this process was found to be closely linked to the decadal oscillations of the oceans. Decadal ocean variability, in particular the Pacific Decadal Oscillation (PDO), but also the variability of the Atlantic and Indian Oceans, seems to play a key part in explaining atmosphere-ocean interactions (Kosaka and Xie, 2013; Meehl et al., 2011). According to Meehl et al. (2011), hiatuses might be relatively common climate occurrences, in which enhanced heat uptake by the ocean is linked to La Nina-like conditions. By contrast, the positive phase of the PDO favors El Nino conditions and injects heat into the atmosphere (Tollefson, 2014). Stronger trade winds during La Nina episodes drive warm surface water westwards across the Pacific and then down into the lower layers of the ocean, while cold water simultaneously upwells in the eastern Pacific (Trenberth and Fasullo, 2012). It is possible that extreme La Nina events, such as that in 1998, may tip the ocean into a cool phase of the PDO.

While the heat uptake and content of the world ocean is a key factor in the Earth’s energy balance, observations of ocean heat content are sparse. Currently, systematic annual observations for the upper 700m only reach back to 1955, and for the upper 2000 meters only to 2005. Pentadal ocean heat estimates for the upper 2000 meters (Levitus et al., 2012) are available since the mid-1950s. Due to the lack of systematic observations, the pentadal estimates (Levitus et al., 2000) used composites of all available historical temperature observations for the respective 5-year periods. Therefore, the farther we go back in time, the larger the uncertainty surrounding ocean heat uptake and the larger potential biases might be.

Estimates for 1955-2010 (Levitus et al., 2012) show a rate of heat uptake of 0.39 Wm-2 for the upper 2000 meters of the world ocean, but the uptake has varied over time. Half of the heat accumulated since 1865 accumulated after 1997 (Gleckler et al., 2016). Balmaseda et al. (2013) estimate that heat uptake in the 2000s was 0.84 Wm-2 for the entire ocean, with 0.21 Wm-2 of that being stored below 700m, but in the 1990s uptake was negative (-0.18 Wm-2), though other sources find a lower but positive rate of uptake in that period. The vast majority of warming is concentrated in the top 2000m of the ocean (Purkey and Johnson, 2010). Johnson et al. (2016) estimate net ocean heat uptake in the top 1800m of the ocean of 0.71 Wm-2 from 2005 to 2015, and 0.07 Wm-2 below 1800m. However, during the recent hiatus period, the upper layers of the ocean did not show enough warming to account for the imbalance in the energy system (Balmaseda et al., 2013). This “missing energy” was actually stored in the deep oceans (Trenberth and Fasullo, 2012). Estimates of deep ocean heat fluxes are very limited. Kouketsu et al. (2011) calculate world ocean temperature changes for the 1990s and 2000s for waters below 3000m, estimating heat content changes there to be around 0.05 Wm-2. Purkey and Johnson (2010) estimate the heat uptake below 4000m to be 0.027 Wm-2.

References
Andersson, S. M., Martinsson, B. G., Vernier, J. P., Friberg, J., Brenninkmeijer, C. A. M., Hermann, M., van Velthoven, P. F. J., Zahn, A., 2015. Significant radiative impact of volcanic aerosol in the lowermost stratosphere. Nature Communications 6, 7692.

Balmaseda, M. A., Trenberth, K. E., E. Källén, E., 2013. Distinctive climate signals in reanalysis of global ocean heat content. Geophysical Research Letters 40, 1754–1759.

Fyfe J. C., Meehl, G. A., England, M. H., Mann, M. E., Santer, B. D., Flato, G. M., Hawkins, E., Gillett, N. P., Xie, S. P., Kosaka, Y., Swart, N. C., 2016. Making sense of the early-2000s warming slowdown. Nature Climate Change 6, 224-228.

Gleckler, P. J., Durack, P. J., Stouffer, R. J., Johnson, G. C., Forest, C. E., 2016. Industrial-era global ocean heat uptake doubles in recent decades. Nature Climate Change 6, 394-398.

Johnson, G. C., Lyman, J. M., Loeb, N. G., 2016. Improving estimates of Earth's energy imbalance. Nature Climate Change 6(7), 639-640.

Karl, T. R., Arguez, A., Huang, B., Lawrimore, J. H., McMahon, J. R., Menne, M. J., Peterson, T. C., Vose, R. S., Zhang, H.-M., 2015. Possible artifacts of data biases in the recent global surface warming hiatus. Science 348 (6242), 1469-1472.

Kaufmann, R. K., Kauppi, H., Mann, M. L., Stock, J. H., 2011. Reconciling anthropogenic climate change with observed temperature 1998–2008. Proceedings of the National Academy of Sciences 108(29), 11790-11793.

Kosaka, Y., Xie, S.-P., 2013. Recent global-warming hiatus tied to equatorial Pacific surface cooling. Nature 501, 403–408.

Kouketsu, S., et al., 2011. Deep ocean heat content changes estimated from observation and reanalysis product and their influence on sea level change. Journal of Geophysical Research 116, C03012.

Levitus, S., Antonov, J. L., Boyer, T. P., Stephens, C., 2000. Warming of the world ocean. Science 287, 2225-2229.

Levitus, S., Antonov, J. I., Boyer, T. P., Baranova, O. K., Garcia, H. E., Locarnini, R. A., Mishonov, A. V., Reagan, J.R., Seidov, D., Yarosh, E.S., Zweng, M.M., 2012. World ocean heat content and thermosteric sea level change (0–2000 m), 1955–2010. Geophysical Research Letters 39, L10603.

Meehl, G. A., Arblaster, J. M., Fasullo, J. T., Hu, A., Trenberth, K. E., 2011. Model-based evidence of deep-ocean heat uptake during surface-temperature hiatus periods. Nature Climate Change 1, 360–364.

Purkey, S. G., Johnson, G. J., 2010. Warming of global abyssal and deep southern ocean waters between the 1990s and 2000s: contributions to global heat and sea level rise budgets. Journal of Climate 23, 6336-6351.

Smith, D. M., et al., 2016. Role of volcanic and anthropogenic aerosols in the recent global surface warming slowdown. Nature Climate Change 6, 936-940.

Tollefson, J., 2014. Climate change: The case of the missing heat. Nature 505, 276-278.

Trenberth, K. E., 2015. Has there been a hiatus? Science 349, 691-692.

Trenberth, K. E., Fasullo, J. T., 2012. Tracking Earth’s energy: from El Nino to global warming. Surveys in Geophysics, 33(3-4), 413–426.

Sunday, September 11, 2016

Progress on New Climate Modeling Paper

As I mentioned a couple of posts ago, we are working on a new climate modeling paper. We just started estimating models. This graph shows predicted ocean heat content in units of 10^22 Joules (blue) and a 5-year moving average of observed heat content in the top 2000m of the ocean (which is only available from 1959*):


We only used data on global temperature and radiative forcing and the most basic estimator possible to produce this prediction. It's in the right ballpark in terms of the increase in heat content and even some of the wiggles match up (the levels are "arbitrary"). Diagnostic statistics look fairly good too. I think we can only improve on this prediction using more sophisticated estimators. Watch this space :)

* NOAA assign the middle year of each 5 year window as the date of the data. We assign the last year of each 5 year window instead.
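In pandas terms, the footnote is just a choice of how a rolling mean is dated. A sketch with toy numbers (pandas is assumed here for illustration; it is not necessarily what we used):

```python
# Sketch of the date-labelling choice in the footnote, with toy numbers.
import pandas as pd

heat = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0],
                 index=range(1955, 1962))  # toy annual heat content series

# rolling(5).mean() labels each window by its LAST year by default,
# matching our convention; NOAA's centred labelling corresponds to
# center=True, which dates each window by its middle year.
end_labelled = heat.rolling(5).mean()
centred = heat.rolling(5, center=True).mean()
```

The same 1955-1959 average appears at 1959 in the first series and at 1957 in the second; the two conventions simply shift the dates by two years.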

Friday, September 2, 2016

The Electricity "Cost Share"

This graph shows the value of electricity divided by GDP for 130 countries in 2013 plotted against GDP per capita. I used the 2015 price of electricity reported by the World Bank Doing Business report, IEA data on electricity use in 2013 and GDP data from the Penn World Table. Cost share is in inverted commas because GDP isn't gross output and electricity is used for consumption as well as production. The fitted curve is a quadratic.

There is a general trend to lower cost shares at higher income levels. But the electricity cost share is very low in some poor countries like Ethiopia simply because they don't use much electricity. It is also low in many oil-producing countries, such as Kuwait, that subsidize electricity. In Kuwait a kWh costs 0.7 U.S. cents. By contrast, in Jamaica a kWh costs 41.6 U.S. cents. The highest cost share is in Macedonia.
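The computation behind the figure is just price times quantity over GDP. A minimal sketch with illustrative country numbers (the figures for the two stylised countries are made up, not the actual dataset):

```python
# How the electricity "cost share" is computed (a sketch; the country
# numbers below are illustrative, not the actual dataset).
def electricity_cost_share(price_usd_per_kwh, electricity_kwh, gdp_usd):
    """Value of electricity consumed as a fraction of GDP."""
    return price_usd_per_kwh * electricity_kwh / gdp_usd

# Two stylised countries: cheap subsidised power vs expensive power.
kuwait_like = electricity_cost_share(0.007, 50e9, 150e9)   # ~0.2% of GDP
jamaica_like = electricity_cost_share(0.416, 3e9, 14e9)    # ~9% of GDP
```

Even with far less electricity use, the high-price country ends up with a much larger cost share, which is the pattern driving the scatter in the figure.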

I think we should expect total energy cost shares to decline more strongly with income than the electricity cost share does, because poor countries use relatively more of other energy sources, while rich countries use relatively little energy apart from electricity. This matches the longitudinal data we have from Sweden and Britain.

I put this data together for our Energy for Economic Growth project.

Saturday, August 27, 2016

Corrections to the Global Temperature Record and the Early Onset of Industrial Age Warming

A lot of fuss is often made about adjustments to estimates of global temperature. But here is the key figure from the paper by Karl et al. last year:

The changes to the temperature trend in the period of the "hiatus" are really small and hard to see in the context of the century scale temperature trend (Panel A). Most of the effect of corrections is in the 19th Century (Panel B). But even there, all the corrections make little difference to the overall trend.

The next graph shows 3 different estimates of global temperature since 1955:


The HADCRUT data has been criticized for not covering warming in the Arctic very well; the GISS series shows more warming largely because of its better Arctic coverage. But really the two series are not that different in the overall signal they provide. The Berkeley Earth series is somewhere in between. Berkeley Earth was funded by Koch and others to investigate whether the official agencies had distorted the record through their use of corrections. The result turned out to be almost the same as NASA's (GISS). So, there definitely doesn't seem to be any conspiracy to distort the data.

Graham Lloyd also mentions the paper published by Nerilie Abram et al. in Nature this week, that argues that "industrial era warming" began earlier than previously thought. Here is a key figure from their paper:

The brown graph is the reconstructed land surface temperature anomaly and the blue graph the sea surface anomaly. Their argument is based on the current warming trend starting in the first half of the 19th Century. But I don't see anything in the paper that actually associates this with anthropogenic forcing. So I tend to somewhat agree with the quote from Michael Mann "the Abram team misinterprets the cooling of the early 1800s from two giant volcanic eruptions as a cooler baseline instead of something unusual. That makes it look like human-forced warming started earlier than it did instead of climate naturally recovering from volcanoes putting cooling particles in the air". The paper compares the onset of warming in simulations to the onset of warming in the data and there is almost no correlation between the model results and the data (Panel A):


Yes, greenhouse gases were increasing already (CO2 chart is shown in the bottom right hand corner of the Figure) but it's likely that they only contributed a small part to the warming in that period and much of it is a bounceback from the volcanic eruptions, which had suppressed temperature.

The idea that people have been affecting the climate for a long time was first introduced by Ruddiman in a 2003 paper. I think that Ruddiman was likely right about this. My guess is that what we are seeing in the early 19th Century is mostly still the Ruddiman effect of increased human population, land-clearing, farming etc. Industrial CO2 emissions were very low: 54 million tonnes of carbon a year in 1850.

I'm working on a new climate change paper with Zsuzsanna Csereklyei and Stephan Bruns for the conference on climate econometrics in Aarhus at the end of October. We have got all the data together and we've reviewed the literature and so now comes the modeling phase. Watch this space :)

Wednesday, August 10, 2016

Missing Coefficient in Environmental Economics and Policy Studies Paper

I don't like looking at my published papers because I hate finding mistakes. Today I saw that there is a missing coefficient in Table 2 of my recent paper with Donglan Zha "Economic growth and particulate pollution concentrations in China". In the column for Equation (2) for PM 2.5 the coefficient for the interaction between growth and the level of GDP per capita is missing. The table should look like this:


I checked my correspondence with the journal production team. They made lots of mistakes in rendering the tables and I went through more than one round of trying to get them to fix them. But the version I eventually OK-ed had this missing coefficient. At least the working paper version has the correct table.

Monday, July 25, 2016

Data and Code for Our 1997 Paper in Nature

I got a request for the data in our 1997 paper in Nature on climate change. I didn't think I'd be able to send the actual data we used, as I used to follow the practice of continually updating the datasets I most used rather than keeping an archival copy of the data actually used in a paper. But I found a version from February 1997, which was the month we submitted the final version of the paper. I got the RATS code to read the file and with a few tweaks it was producing the results that are in the paper. These are the results for observational data in the paper, not those using data from the Hadley climate model. I have now put the files up on my website. In the process I found this website - zamzar.com - that can convert .wks to .xls files. Apparently, recent versions of Excel can't read the .wks Lotus 1-2-3 files that were a standard format 20 or more years ago. For those that don't know, Lotus 1-2-3 was the most popular spreadsheet program before Microsoft introduced Excel. I used it in the late 80s and early 90s when I was in grad school.

The EKC in a Nutshell

Introduction
The environmental Kuznets curve (EKC) is a hypothesized relationship between various indicators of environmental degradation and countries’ gross domestic product (GDP) per capita. In the early stages of economic growth environmental impacts and pollution increase, but beyond some level of GDP per capita (which will vary for different environmental impacts) economic growth leads to environmental improvement. This implies that environmental impacts or emissions per capita are an inverted U-shaped function of GDP per capita, whose parameters can be statistically estimated. Figure 1 shows a very early example of an EKC. A vast number of studies have estimated such curves for a wide variety of environmental impacts ranging from threatened species to nitrogen fertilizers, though atmospheric pollutants such as sulfur dioxide and carbon dioxide have been most commonly investigated. The name Kuznets refers to the similar relationship between income inequality and economic development proposed by Nobel Laureate Simon Kuznets and known as the Kuznets curve.
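In practice the inverted U is usually estimated as a quadratic in log income, with the turning point at exp(-b1/(2*b2)). A minimal sketch on simulated data (all numbers made up; the true peak is placed at log income = 9):

```python
# A minimal sketch of how an EKC turning point is estimated; the data
# are simulated, with a true peak at log income = 9 (~ exp(9) = 8,100).
import numpy as np

rng = np.random.default_rng(0)
log_y = rng.uniform(6, 11, 500)               # log GDP per capita
log_e = (-20 + 3.6 * log_y - 0.2 * log_y**2
         + rng.normal(0, 0.3, 500))           # log emissions per capita

b2, b1, b0 = np.polyfit(log_y, log_e, 2)      # fit the quadratic
turning_point_income = np.exp(-b1 / (2 * b2)) # income where emissions peak
```

A statistically significant negative b2, with a turning point inside the observed income range, is the standard evidence offered for an EKC; much of the subsequent debate concerns how fragile such estimates are.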


The EKC has been the dominant approach among economists to modeling ambient pollution concentrations and aggregate emissions since Grossman and Krueger (1991) introduced it in an analysis of the potential environmental effects of the North American Free Trade Agreement. The EKC also featured prominently in the 1992 World Development Report published by the World Bank and has since become very popular in policy and academic circles and is even found in introductory economics textbooks.

Critique
Despite this, the EKC was criticized almost from the start on empirical and policy grounds, and debate continues. It is undoubtedly true that some dimensions of environmental quality have improved in developed countries as they have become richer. City air and rivers in these countries have become cleaner since the mid-20th Century and in some countries forests have expanded. Emissions of some pollutants such as sulfur dioxide have clearly declined in most developed countries in recent decades. But there is less evidence that other pollutants such as carbon dioxide ultimately decline as a result of economic growth. There is also evidence that emerging countries take action to reduce severe pollution. For example, Japan cut sulfur dioxide emissions in the early 1970s following a rapid increase in pollution when its income was still below that of the developed countries and China has also acted to reduce sulfur emissions in recent years.

As further studies were conducted and better data accumulated, many of the econometric studies that supported the EKC were found to be statistically fragile. Figure 2 presents much higher quality data with a much more comprehensive coverage of countries than that used in Figure 1. In both 1971 and 2005 sulfur emissions tended to be higher in richer countries and the curve seems to have shifted down and to the right. A cluster of mostly European countries had succeeded in sharply cutting emissions by 2005 but other wealthy countries reduced their emissions by much less.


Initially, many understood the EKC to imply that environmental problems might be due to a lack of sufficient economic development rather than the reverse, as was conventionally thought, and some argued that the best way for developing countries to improve their environment was to get rich. This alarmed others, as while this might address some issues like deforestation or local air pollution, it would likely exacerbate other environmental problems such as climate change.

Explanations
The existence of an EKC can be explained either in terms of deep determinants such as technology and preferences or in terms of scale, composition, and technique effects, also known as “proximate factors”. Scale refers to the effect of an increase in the size of the economy, holding the other effects constant, and would be expected to increase environmental impacts. The composition and technique effects must outweigh this scale effect for pollution to fall in a growing economy. The composition effect refers to the economy’s mix of different industries and products, which differ in pollution intensities. Finally the technique effect refers to the remaining change in pollution intensity. This will include contributions from changes in the input mix – e.g. substituting natural gas for coal; changes in productivity that result in less use, everything else constant, of polluting inputs per unit of output; and pollution control technologies that result in less pollutant being emitted per unit of input.

Over the course of economic development the mix of energy sources and economic outputs tends to evolve in predictable ways. Economies start out mostly agricultural, and the share of industry in economic activity first rises and then falls as the share of agriculture declines and the share of services increases. We might expect the impacts associated with agriculture, such as deforestation, to decline, and naively expect that the impacts associated with industry, such as pollution, would first rise and then fall. However, the absolute size of industry rarely does decline, and it is improvement in productivity in industry, a shift to cleaner energy sources, such as natural gas and hydro-electricity, and pollution control that eventually reduce some industrial emissions.

Static theoretical economic models of deep determinants, which do not also try to model the economic growth process, can be summarized in terms of two parameters: the elasticity of substitution between dirty and clean inputs (or between pollution control and pollution), which summarizes how difficult it is to cut pollution; and the elasticity of marginal utility, which summarizes how hard it is to increase consumer well-being with more consumption. It is usually assumed that these consumer preferences are translated into policy action. Pollution is then more likely to increase as the economy expands the harder it is to substitute other inputs for polluting ones and the easier it is to increase consumer well-being with more consumption. If these parameters are constant, then pollution either rises or falls with economic growth; only if they change over time will pollution first rise and then fall. The various theoretical models can be classified as ones where the EKC is driven by changes in the elasticity of substitution as the economy grows, or ones where it is primarily driven by changes in the elasticity of marginal utility.

Dynamic models, which model the economic growth process alongside changes in pollution, are harder to classify. The best known is the Green Solow Model developed by Brock and Taylor (2010), which explains changes in pollution as the result of the competing effects of economic growth and a constant rate of improvement in pollution control. Fast-growing middle-income countries, such as China, then have rising pollution, while slower-growing developed economies have falling pollution. An alternative model developed by Ordás Criado et al. (2011) also suggests that pollution rises faster in faster-growing economies, but adds convergence, so that countries with higher levels of pollution tend to reduce pollution faster than countries with low levels of pollution.
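The Green Solow intuition is easy to see in a minimal sketch (not Brock and Taylor's full model): emissions grow at roughly the output growth rate minus a constant rate of improvement in pollution control, with all growth rates here chosen for illustration only.

```python
# Minimal sketch of the Green Solow intuition: emissions grow at the
# output growth rate net of a constant abatement improvement rate.

def emissions_path(e0, output_growth, abatement_rate, years):
    path = [e0]
    for _ in range(years):
        path.append(path[-1] * (1 + output_growth - abatement_rate))
    return path

fast = emissions_path(100.0, 0.08, 0.02, 30)  # fast-growing middle-income economy
slow = emissions_path(100.0, 0.01, 0.02, 30)  # slow-growing developed economy

print(fast[-1] > fast[0])  # emissions rise when growth outpaces abatement
print(slow[-1] < slow[0])  # emissions fall when abatement outpaces growth
```

The same constant rate of technical improvement thus produces rising pollution in one economy and falling pollution in the other, purely because their growth rates differ.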

Recent Empirical Research and Conclusion 
Recent empirical research builds on these dynamic models, painting a subtler picture than early EKC studies did. We can distinguish between the effect of economic growth on the environment and the effect of the level of GDP per capita, irrespective of whether an economy is growing, on reducing environmental impacts. Economic growth usually increases environmental impacts, but the size of this effect varies across impacts and often declines as countries get richer. However, richer countries tend to make more rapid progress in reducing environmental impacts. Finally, there is often convergence among countries, so that countries with relatively high levels of impacts reduce them faster or increase them more slowly. These combined effects explain more of the variation in pollution emissions or concentrations than either the classic EKC model or models that assume only convergence or only growth effects matter. Therefore, while being rich means a country might do more to clean up its environment, getting rich is likely to be environmentally damaging, and the simplistic policy prescriptions that some early proponents of the EKC put forward should be disregarded.

References
Brock, W. A. and Taylor, M. S. (2010). The green Solow model. Journal of Economic Growth 15, 127–153.

Grossman, G. M. and Krueger, A. B. (1991). Environmental impacts of a North American Free Trade Agreement. NBER Working Papers 3914.

Ordás Criado, C., Valente, S., and Stengos, T. (2011). Growth and pollution convergence: Theory and evidence. Journal of Environmental Economics and Management 62, 199-214.

Panayotou, T. (1993). Empirical tests and policy analysis of environmental degradation at different stages of economic development. Working Paper, Technology and Employment Programme, International Labour Office, Geneva, WP238.

Smith, S. J., van Ardenne, J., Klimont, Z., Andres, R. J., Volke, A., and Delgado Arias S. (2011). Anthropogenic sulfur dioxide emissions: 1850-2005. Atmospheric Chemistry and Physics 11, 1101-1116.

Stern, D. I. (2015). The environmental Kuznets curve after 25 years. CCEP Working Papers 1514.

Stern, D. I., Common, M. S., and Barbier, E. B. (1996). Economic growth and environmental degradation: the environmental Kuznets curve and sustainable development. World Development 24, 1151–1160.

Thursday, July 21, 2016

Dynamics of the Environmental Kuznets Curve

Just finished writing a survey of the environmental Kuznets curve (EKC) for the Oxford Research Encyclopedia of Environmental Economics. Though I updated all sections, of course, there is quite a bit of overlap with my previous reviews. But the review of the empirical evidence is mostly new, surveying the literature and presenting original graphs in the spirit of IPCC reports :) I came up with this new graph of the EKC for sulfur emissions:


The graph plots the growth rate from 1971 to 2005 of per capita sulfur emissions in the sample used in the Anjum et al. (2014) paper against GDP per capita in 1971. There is a correlation of -0.32 between the growth rates and initial log GDP per capita. This shows that emissions did tend to decline or grow more slowly in richer countries, but the relationship is very weak - only 10% of the variation in growth rates is explained by initial GDP per capita. Emissions grew in many wealthier countries and fell in many poorer ones, though GDP per capita also fell in a few of the poorest of those. So, this does not provide strong support for the EKC being the best or only explanation of either the distribution of emissions across countries or the evolution of emissions within countries over time. On the other hand, we shouldn't be restricted to a single explanation of the data, and the EKC can be treated as one possible explanation, as in Anjum et al. (2014). In that paper, we find that when we consider other explanations such as convergence, the EKC effect is statistically significant but the turning point is out of sample - growth has less effect on emissions in richer countries but it still has a positive effect.
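As a sketch of how the statistic behind the graph is computed (using made-up data, not the Anjum et al. sample):

```python
import math
import random
import statistics

random.seed(0)

# Hypothetical data standing in for the real sample: initial log GDP per
# capita and average emissions growth rates for 50 "countries", with a
# weak negative relationship buried in noise.
n = 50
log_gdp0 = [random.uniform(6, 10) for _ in range(n)]
growth = [-0.01 * g + random.gauss(0, 0.02) for g in log_gdp0]

def corr(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x)
                           * sum((b - my) ** 2 for b in y))

r = corr(log_gdp0, growth)
print(round(r, 2), round(r ** 2, 2))  # r and the share of variance explained
```

With the actual correlation of -0.32, the squared value is about 0.10, which is where the 10% figure comes from.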

The graph below compares the growth rates of sulfur emissions with the initial level of emissions intensity. The negative correlation is much stronger here: -0.67 for the log of emissions intensity. This relationship is one of the key motivations for pursuing a convergence approach to modelling emissions. Note that the tight cluster of mostly European countries that cut emissions the most appears to have had both high income and high emissions intensity at the beginning of the period.


Tuesday, July 12, 2016

Legitimate Uses for Impact Factors

I wrote a long comment on this blogpost by Ludo Waltman but it got eaten by their system, so I'm rewriting it in a more expanded form as a blogpost of my own. Waltman argues, I think, that for those who reject the use of journal impact factors to evaluate individual papers, such as Lariviere et al., there should then be no legitimate uses for impact factors at all. I don't think this is true.

The impact factor was first used by Eugene Garfield to decide which additional journals to add to the Science Citation Index he created. Similarly, librarians can use impact factors to decide which journals to subscribe to or unsubscribe from, and publishers and editors can use such metrics to track the impact of their journals. These are all sensible uses of the impact factor that I think no-one would disagree with. Of course, we can argue about whether the mean number of citations that articles receive in a journal is the best metric, and I think that standard errors - as I suggested in my Journal of Economic Literature article - or the complete distribution, as suggested by Lariviere et al., should be provided alongside them.
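To see why the mean alone can mislead, here is a toy calculation with made-up citation counts for one journal's articles over the impact-factor window:

```python
import statistics

# Hypothetical citation counts for ten articles in one journal
# (made-up numbers; note the single highly cited outlier).
citations = [0, 1, 1, 2, 3, 3, 5, 8, 13, 40]

impact_factor = statistics.mean(citations)  # the usual mean-based metric
std_error = statistics.stdev(citations) / len(citations) ** 0.5

print(round(impact_factor, 2), round(std_error, 2))
```

One heavily cited article drags the mean well above what a typical article receives, and the large standard error flags exactly that - which is the case for reporting standard errors or the full distribution alongside the impact factor.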

I actually think that impact factors or similar metrics are useful for assessing very recently published articles, as I show in my PLoS One paper, before they manage to accrue many citations. Also, impact factors seem to be a proxy for journal acceptance rates or selectivity, which we only have limited data on. But even ruling these out as legitimate uses wouldn't mean rejecting the use of such metrics entirely.

I disagree with the comment by David Colquhoun that no working scientists look at journal impact factors when assessing individual papers or scientists. Maybe this is the case in his corner of the research universe but it definitely is not the case in my corner. Most economists pay much, much more attention to where a paper was published than how many citations it has received. And researchers in the other fields I interact with also pay a lot of attention to journal reputations, though they usually also pay more attention to citations as well. Of course, I think that economists should pay much more attention to citations too.


Wednesday, June 15, 2016

p-Curve: Replicable vs. Non-Replicable Findings

Recently, Stephan Bruns published a paper with John Ioannidis in PLoS ONE critiquing the p-curve. I've blogged about the p-curve previously. Their argument is that the p-curve cannot distinguish "true effects" from "null effects" in the presence of omitted variables bias. Simonsohn et al., the originators of the p-curve, have responded on their blog, which I have added to the blogroll here. They say that, of course, the p-curve cannot distinguish between causal effects and other effects, but it can distinguish between "false positives", which are non-replicable effects, and "replicable effects", which include both "confounded effects" (correlation but not causation) and "causal effects". Bruns and Ioannidis have responded to this comment too.

In my previous blogpost on the p-curve, I showed that the Granger causality tests we meta-analysed in our Energy Journal paper in 2014 form a right-skewed p-curve. This would mean that there was a "true effect" according to the p-curve methodology. However, our meta-regression analysis where we regressed the test statistics on the square root of degrees of freedom in the underlying regressions showed no "genuine effect". Now I understand what is going on. The large number of highly significant results in the Granger causality meta-dataset is generated by "overfitting bias". This result is "replicable". If we fit VAR models to more such short time series we will again get large numbers of significant results. However, regression analysis shows that this result is bogus as the p-values are not negatively correlated with degrees of freedom. Therefore, the power trace meta-regression is a superior method to the p-curve. In addition, we can modify this regression model to account for omitted variables bias by adding dummy variables and interaction terms (as we do in our paper). This can help to identify a causal effect. Of course, if no researchers actually estimate the true causal model then this method too cannot identify the causal effect. But there are always limits to our ability to be sure of causality. Meta-regression can help rule out some cases of confounded effects.
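To make the power-trace idea concrete, here is a stylized simulation (not our actual Energy Journal specification): under a genuine effect, t-statistics grow with the square root of degrees of freedom; under the null they do not, however many of them are nominally significant.

```python
import math
import random

random.seed(1)

# Stylized power-trace simulation: t ~ effect * sqrt(df) + noise.
def simulate_t(df, effect):
    return effect * math.sqrt(df) + random.gauss(0, 1)

dfs = [random.randint(20, 200) for _ in range(200)]
t_true = [simulate_t(df, 0.3) for df in dfs]  # hypothetical genuine effect
t_null = [simulate_t(df, 0.0) for df in dfs]  # no effect

def slope(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

roots = [math.sqrt(df) for df in dfs]
slope_true = slope(roots, t_true)  # recovers something near the effect size
slope_null = slope(roots, t_null)  # near zero
print(round(slope_true, 2), round(slope_null, 2))
```

A positive slope on the square root of degrees of freedom is the signature of a genuine effect; the p-curve has no analogue of this check, which is why overfitting bias can fool it.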

So, to sum up, there are the following dichotomies:
  • Replicable vs. non-replicable - can use p-curve.
  • True or genuine effect (a correlation in the data-generating process) vs. false positive - meta-regression model is more likely to give correct inference.*
  • Causal vs. confounded effect - extended meta-regression model can rule out some confounded effects.
The bottom line is that you should use meta-regression analysis rather than the p-curve.
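For comparison, the basic p-curve logic can be sketched in a few lines (a simplified illustration, not Simonsohn et al.'s implementation): among significant results, a replicable effect yields a right-skewed distribution of p-values, while a set of false positives yields a flat one.

```python
import math
import random

random.seed(2)

def p_value(z):
    # two-sided p-value for a standard normal test statistic
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

z_true = [random.gauss(2.5, 1) for _ in range(20000)]  # hypothetical true effect
z_null = [random.gauss(0.0, 1) for _ in range(20000)]  # null: false positives only

def share_small(zs):
    # among significant p-values, the share below .025; > 0.5 means right skew
    sig = [p for p in (p_value(z) for z in zs) if p < 0.05]
    return sum(p < 0.025 for p in sig) / len(sig)

print(share_small(z_true))  # well above 0.5: right-skewed p-curve
print(share_small(z_null))  # close to 0.5: flat p-curve
```

This is all the p-curve tests, which is why it classifies the overfitting-driven Granger causality results as a "true effect" while the meta-regression does not.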

* In the case of unit root spurious regressions mentioned in Bruns and Ioannidis' response, things are a bit complicated. In the case of a bivariate spurious regression, where there is a drift in the same direction in both variables, it is likely that Stanley's FAT-PET and similar methods will show that there is a true effect. Even though there is no relationship at all between the two variables, the nature of the data-generating process for each means that they will be correlated. Where there is no drift, or the direction of drift varies randomly, there should be equal numbers of positive and negative t-statistics in the underlying studies and no relationship between the value of the t-statistic and degrees of freedom, though there is a relationship between the absolute value of the t-statistic and degrees of freedom. Here meta-regression does better than the p-curve. I'm not sure whether the meta-regression model in our Energy Journal paper might be fooled by Granger causality tests in levels of unrelated unit root variables. These would likely be spuriously significant, but the significance might not rise strongly with sample size.
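The bivariate case is easy to demonstrate by simulation (a stylized sketch with made-up parameters, not the data-generating processes in any particular study): two independent random walks that both drift upward end up strongly correlated in levels.

```python
import random

random.seed(3)

# Two *independent* random walks with drift in the same direction -
# the classic spurious regression setup.
def random_walk(drift, n):
    x, path = 0.0, []
    for _ in range(n):
        x += drift + random.gauss(0, 1)
        path.append(x)
    return path

a = random_walk(0.5, 500)
b = random_walk(0.5, 500)

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((p - mx) * (q - my) for p, q in zip(x, y))
    vx = sum((p - mx) ** 2 for p in x)
    vy = sum((q - my) ** 2 for q in y)
    return cov / (vx * vy) ** 0.5

print(round(corr(a, b), 2))  # strongly positive despite no causal link
```

Regressions on many such pairs would produce significant t-statistics that grow with sample size, so a FAT-PET style meta-regression would report a "true effect" - which is exactly the complication described above.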