Weston Lecture: Optimization and Climate Policy Transcription
Dr. Tom Rutherford, Professor of Agricultural and Applied Economics
Introduction: Welcome to all of you who are normally at the roundtable and all of you who aren’t necessarily. I’ll just say that the roundtable is also a for-credit class, so please don’t forget to sign in before you leave. About the roundtables in general: they are sponsored by a donation from Roy Weston, who is an alumnus of the university. His intention is to support community, research and teaching in environmental sustainability, science, technology and policy. So that’s the theme of the roundtables, and it meshes well with the theme of the forum today. I’ll introduce our guest for this special Weston lecture. It is Professor Tom Rutherford. He earned a PhD from Stanford University in 1987 under Alan Manne. He has subsequently been a faculty member at the University of Western Ontario, the University of Colorado and the Swiss Federal Institute of Technology in Zurich, before coming to UW in 2012, where he is a professor of Agricultural and Applied Economics, as well as part of the optimization group at WID. Professor Rutherford’s research focuses on the formulation, solution and application of numerical models in environmental economics, international trade and economic growth. His interests are in global warming, the economic consequences of multi-regional trade agreements, and the economic effects of trade reform in small open economies. His research also involves methodological contributions related to the application of complementarity models in economics. Without further ado, I’ll introduce Tom Rutherford.
*Applause*
Professor Rutherford: Thank you. This is my first time speaking at the Weston roundtables here, so in preparing for this I had to think about who my target audience was going to be. I knew this was a class for credit, so I decided to prepare this talk for a sophomore-level engineer who wants to learn something about climate. So that’s what I had in my mind. I thought it’d be kind of interesting because I’m relatively new here at UW. I’m in Michael Ferris’s optimization group here at WID, and I thought it might be interesting to talk about optimization here as well. So my objective is to give students a chance to see that computer-based methods and modeling methods can be useful in understanding environmental change; that is my general objective here. I also want to talk about some of the new research ideas that relate to extensions of existing frameworks. The basic plan is to provide a survey: after a brief introduction I’ll give you a bit of a feeling for the difference between ex-ante and ex-post assessment, and how we use analytic methods and what they are doing. I’ll do that through an example from the Golden Gate Bridge’s 50th anniversary. Then I’ll go on a little bit about local warming, using optimization to identify climate change here and now. We’ll look at the climate record from Truax Field here in Madison to see what the rate of change is, using optimization methods. Then we’ll talk about integrated assessment. I’m not going to go through that at great length (I have lots of slides I’m not actually going to go through), but I’ll talk a little bit about it. At the end I’m going to talk a little bit about discounting and the climate change debate, and about the work I am doing with Michael Ferris on new methods for thinking about how to do assessments of the costs and benefits of climate change in a way that accommodates social discounting at rates that are lower than what’s observed in the market.
Okay, so just the basic science. Again, a lot of you here probably know more about this stuff than I do; I know that was the case when I was teaching a freshman-level class on climate economics this last term, and I was really impressed with the students and how much they knew about these issues. So we know that the greenhouse effect is real; it’s something we can understand from back-of-the-envelope calculations, it’s been borne out in experiments and observations, and there’s lots of evidence. We know this is caused by carbon dioxide. Carbon dioxide has lots of sources, so we have to worry about energy generation, transportation, deforestation, methane (you know, cows are responsible to some extent). We then have to think about how to deal with this. There is a variety of instruments; the typical paradigm that I think about is mitigation, adaptation or geoengineering. Those are the typical approaches we have to think about. There was a time when mentioning adaptation or geoengineering was verboten and it was only mitigation. Now we seem to be talking about the full range of options. Despite our improved science, there still remain lots of uncertainties about climate and the range of possible outcomes.
So the greenhouse effect again. I’m going to run through these slides, because I’m sure these are things you’ve seen before. The greenhouse effect has to do with carbon dioxide molecules absorbing and re-radiating thermal radiation in the infrared spectrum, so you end up trapping more heat in the atmosphere over time. As I said, it’s been understood for a long time; it goes back to work in 1827 and the 1890s, so this has a long history of study in the physical sciences, with observations going back a large number of years. Demonstrated here you see the overall upward trend of carbon dioxide in the atmosphere. That is the one thing about climate change: we can measure the amount of carbon there is in the atmosphere very precisely, and we can observe this as something going on. And it goes on without historical precedent. That is the general problem. I mean, we could quibble over the cost of things and when things happen, but what’s unassailable is the fact that carbon is going up to a level that is unprecedented over the last thousands of years.
So, jumping forward: optimization, on the other hand. Climate change I expect most of the audience to know something about; optimization is something you may not be as familiar with. It is the general idea of trying to improve on things. Basically, given a set of constraints, how can we achieve a better outcome with the instruments we have available? We can do this formally: we can use math and systems of equations and talk about maximizing. F in this case would be a formal objective function that depends on a set of state variables, say x, and a set of instruments, t, and you try to maximize some outcome. So this is the formalization of the process. I am going to try to give you four examples of how this would work in the climate setting. But before that, just to fix ideas, let’s go back.
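In symbols, the setup being described is roughly the following (the constraint function g here is generic notation added for illustration, not something taken from the slides):

\[
\max_{t} \; F(x, t) \qquad \text{subject to} \qquad g(x, t) \le 0,
\]

where x collects the state variables and t the instruments we can control.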
I was an engineer originally as an undergrad, and when I went to grad school I didn’t intend to learn economics; it kind of just happened, because I was just curious about things. I had never taken an economics class as an undergrad. One of the examples in my undergraduate training that caught my eye was an example from David Luenberger, and it was very tangible because it was right before the 50th anniversary of the Golden Gate Bridge. For the first time in the history of the bridge they were going to open it to pedestrian traffic, which posed a real challenge to engineers, because people can pack themselves together at about 450 kilograms per square meter, whereas vehicles, trucks and so forth come to only about 150. So there’s this big difference. The big debate at the time was: would the bridge be able to handle it at full load? I was particularly intrigued by this because I had been in the Peace Corps and worked on suspension bridges, so I knew something about bridges and found this an interesting challenge. And David Luenberger brought up the idea that there were a few different ways to go at this.
So the reduced-form approach is to instrument the bridge as you see it: you keep a record of how many trucks or cars are on the bridge, you measure how much deformation you have, and you use that to extrapolate and see what would happen. The other way is a structural approach, where you develop a model based on physics principles for the structural elements of the bridge and use it to evaluate what would happen in that setting.
So if you look at this standard picture, you can take the blue line as being the reduced-form model, measuring the maximum strain at a point in the bridge, and the gray line might be considered the threshold you don’t want to go beyond. Along the horizontal axis we are increasing the load. The point is that the red dots represent the observations that you have. You can fit a reduced-form model to those observations, but it may not be very good at extending outside of the observations you have. If you want to go well beyond what you have observed, you need a structural model, because you really can’t rely on the in-sample observations to give a good perspective. I like this example because the same logic applies to climate.
We’ve observed things carefully, so we can go back and see what has happened. But we’re challenged to think ahead in a framework that goes beyond what we can simply interpolate. That’s one of the challenges here, and that’s the reason some people say that, as a consequence, we can’t do any climate modeling and should instead rely on religion to tell us what to do. The reality is that we have to rely on models, but we have to recognize that we cannot just project forward from what we have observed. So if you look at this: in the context of the Golden Gate Bridge they have pictures from 1987, and you can see the difference in loading. And it did survive; it didn’t break down or fall. But if we think about this method of just projecting, something else is needed for long-term projection. I just offer this as an interesting sideline: evidently there are different ways of doing things, and the projection approach is the sort-of reduced-form method.
M. King Hubbert was a geoscientist who worked for one of the oil companies, and in the 1950s he made predictions of when U.S. oil production would peak; he basically used a very simple logistic curve to describe what would happen. This is actually a curve from his original paper. The thing that’s kind of crazy is how precise this turned out to be in terms of predicting when the U.S. would peak. Well, there have been changes of course, technological changes have occurred, but the overall prediction wasn’t that far off. So this is an example of the reduced-form, curve-fitting method. The question is, can we rely on this? The interesting thing is that the whole exercise was actually very precise for U.S. oil production, and there is an earlier, similar case. Another name from the energy production game is William Stanley Jevons, the author of the Jevons paradox. The question he asked was when the UK would run out of coal. Coal extraction is quite remarkable: they had to recognize that it was getting low, and sure enough British coal production also follows this logistic structure quite nicely. I’m just bringing this up as an example. You can do the same for anthracite extraction in Pennsylvania, and it is something that is entertaining to look at.
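As a concrete sketch of this reduced-form, curve-fitting idea, here is a minimal example of fitting a logistic curve to a cumulative-production series and reading off the implied peak year; the numbers are made up for illustration, and only the general technique follows the talk.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical cumulative-production series (illustrative numbers, not real data).
years = np.arange(1900, 1975, 5, dtype=float)          # 1900, 1905, ..., 1970
cumulative = np.array([0.5, 1.2, 2.4, 4.5, 8.0, 13.5, 21.0, 30.5,
                       41.0, 51.0, 59.5, 66.0, 70.5, 73.5, 75.5])

def logistic(t, Q, k, t0):
    """Cumulative logistic: Q is ultimate recovery, k the growth rate, t0 the midpoint."""
    return Q / (1.0 + np.exp(-k * (t - t0)))

(Q, k, t0), _ = curve_fit(logistic, years, cumulative, p0=[100.0, 0.1, 1950.0])

# Annual production is the derivative of the cumulative curve; it peaks at t0.
print(f"implied ultimate recovery ~ {Q:.1f}, implied peak year ~ {t0:.0f}")
```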
I find the papers that look at this historically interesting; the main one is by David Rutledge from Caltech. He has his projections, and they suggest that coal will run out before the climate becomes a real problem, which is an interesting sort of alternative take. I’m just pointing this out as an intriguing type of modeling exercise. I’m not saying I’ve assessed it in great detail, but I do find it kind of interesting. But let’s look at it in a different way: we can see that in that model there is no optimization. All we’re doing is fitting a trend. There is no role for prices or technology; it’s simply saying we can match what we observe.
Consider another approach, by Robert Vanderbei, an optimization professor at Princeton, who wrote a nice paper on local warming. It looks at data available since the 1950s from NOAA, from a number of stations all around the world. You can get this data, but when you look at it, finding evidence of climate change requires you to think critically about it, because there is so much noise. Year to year, the variability in the temperatures we observe is remarkable.
So if you look at the daily average temperatures at McGuire Air Force Base in New Jersey over 55 years, boy, the data is all over the place. How do you make sense of this? If you take, for example, differences one year apart, you get a mean trend of around 2.9 degrees Fahrenheit per century, but the standard deviation is over 7.0 degrees Fahrenheit per century. So you’re not really getting a good handle on it that way. This is where optimization turns out to be helpful, because you can set up a parametric model where you recognize that the seasonal cycle is sinusoidal with a period of 365.25 days (you know something about the solar cycle), and you can add to that simple little model a linear trend, so one parameter tells us how much warming per day we have going on as a consequence of climate change. Then we minimize the epsilons, the deviations between the model and the daily observations, to get our best fit. You can either minimize the least absolute deviations, which gives a robust estimate, or use least squares, and they give you basically the same answers. And that gives you a number that tells you what the rate of change is. What I find remarkable is that the numbers that come out of this, from all these locations, are very stable, and they agree with the physical models that are based on the greenhouse effect.
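Here is a rough sketch of that kind of fit in code, using ordinary least squares (the talk notes that least absolute deviations gives essentially the same answer); the synthetic data below simply stands in for a real daily temperature record such as the NOAA station data being described.

```python
import numpy as np

# Stand-in for ~55 years of daily average temperatures (deg F); a real analysis
# would use a station record from NOAA instead of this synthetic series.
rng = np.random.default_rng(0)
days = np.arange(55 * 365, dtype=float)
true_trend = 3.0 / (100 * 365.25)                   # 3 deg F per century, expressed per day
temps = (53.0 + true_trend * days
         + 20.0 * np.cos(2 * np.pi * days / 365.25 - 2.9)
         + rng.normal(0.0, 8.0, days.size))         # large day-to-day noise

# Model: T(d) = a + b*d + c1*cos(2*pi*d/365.25) + c2*sin(2*pi*d/365.25)
X = np.column_stack([np.ones_like(days),
                     days,
                     np.cos(2 * np.pi * days / 365.25),
                     np.sin(2 * np.pi * days / 365.25)])
coef, *_ = np.linalg.lstsq(X, temps, rcond=None)

# The coefficient on 'days' is warming per day; convert to degrees per century.
print(f"estimated trend: {coef[1] * 365.25 * 100:.2f} deg F per century")
```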
The models give evidence that at the local level we have quite a bit of stability. This is looking at four years here at Truax Field in Madison, and we see the linear trend against the actual estimation. There is a lot of information here, but the big thing is that there is strong evidence of climate change. This is the way the model looks; you can write it down and then solve it, and you get 3.2 degrees per century, and the corresponding New Jersey estimate is about 3.6, so it is very close. So this says we can use optimization in a way that makes sense of local observations, and it concurs with what comes out of the climate models.
So now I’m going to go forward and talk about integrated assessment. The idea here is to give people a sense of what integrated assessment models are, what issues they are intended to address, and how they can help us make better climate policy. That’s the basic set of questions I would like to think about here. And I’m going to go back: Alan Manne was my advisor, and part of my work involved the economics of a model called ETA-MACRO, a predecessor of MERGE, which I worked on a long time ago. I worked on and wrote several papers using this model. It goes back to a book from 1992 called Buying Greenhouse Insurance, which focused on uncertainty and the role of mitigation as a kind of insurance policy. Now the insurance industry is talking about it; at the time these were academics thinking about the abatement measures it would take to insure against future damages. The general structure is known as an intertemporal equilibrium model; I’ll talk a little bit about what that means. The basic framework of these integrated assessment models is that they are typically regional, so we can look at tradeoffs between what happens in different countries under different policies. You can look at collaboration between countries to understand what the gains from trade and coordination of policies are. This is what is known as a top-down model, which means we start from a macro framework that describes economic growth. Against that framework we talk about energy use, and what MERGE in particular focuses on is energy-producing technologies, electric and non-electric. It can evaluate a variety of climate mitigation scenarios and captures, in some sense, the economy-wide scope of carbon policy.
18:02
Typically in this setting you can use these types of models in two ways. One is cost-benefit analysis, which says: given what we conjecture the damages due to climate change to be, what is the optimal level of abatement we should undertake today, weighing the costs here and now against the long-term benefits? Or we can use the model for cost-effectiveness analysis: if we take an upper bound of two or three degrees centigrade as given, what is the most cost-effective way to get to our target? These are equilibrium models that are typically solved in an optimization framework, and they typically incorporate multiple independent agents, with regions mostly represented as individual actors. The key thing that makes an integrated assessment model different from a conventional economic growth model is this: it has the elements of an economic growth model, in the sense that you have capital accumulation, investment and intertemporal decisions about how much to invest and how much to consume, but in addition it has an energy sector, with electric and non-electric energy, and furthermore its emissions are translated through a simple little climate model to understand what the effects are in terms of market and non-market damages that affect welfare.
So the fact that you have the emissions and capture the climate means you are able to trace through what happens in this system, and it provides you a framework for assessing, in a simple way, what these policies look like. The main strength of MERGE relative to other integrated assessment models is its ability to represent technology, because it can be used to price out, under different policy proposals, how much money a particular technology on the shelf helps us save. Now, some models look at the conjectured trade-off between funding for R&D and the technologies generated; that is typically not how MERGE is used. In MERGE you’d say: if we have this technology available, when will it become cost-effective to use? At what magnitude will it come in, and how valuable is it to the overall process? Of course the value of the technology depends on the policy constraint. That, I would submit, is an important insight about climate change: fundamentally this is a public good, or a public bad, and provision has to be done through government policy. One of the challenges is to incentivize the innovation that produces new technologies when the market doesn’t do that on its own.
You know, the incentive is given by government intervention in the market, and that’s one of the things we discuss in my climate economics class, where we begin by reading about climate change as a super wicked problem. This is one of the things that makes it a super wicked problem: it is a problem that plays out over time. The virtues of integrated assessment models: they provide an explicit representation of the problem, they incorporate behavioral responses, and they give a framework for comparing alternative approaches and trading off efficiency and equity. There is this logical appeal, generally consistent with the idea that if the price goes up people will buy less of it, but by how much? These are things you need to quantify. They incorporate both environmental and technology constraints. And they can address risk and uncertainty.
I am not going to address that specifically, but one of the interesting directions is accounting for potentially catastrophic risk and the process of learning over time. There are also a bunch of drawbacks. There is often misunderstanding and misrepresentation of the model as a forecasting device; basically, it is important to understand what models can and cannot do for you. And having a model doesn’t mean we reliably understand how economic growth changes with intervention. I’d say that is the biggest challenge for these models.
If China doesn’t get in the game, you are not going to solve the climate problem. But we don’t have a model that tells us how much it will cost China to get involved, because economic growth in China has been remarkable, and they’re concerned about interfering with this process of industrial transformation. That’s a fundamental shortcoming of these models: we don’t know how to do that very well. If we abandon selfish optimizing agents and go to a model structure that may be more appealing, that’s also tricky. The important thing to know is that having a model that points toward a policy should be regarded as a necessary but not a sufficient condition for the validity of a given argument. In other words, if you come to me and say the carbon tax should be 150 dollars immediately and should rise to 200 dollars, I feel it ought to be possible to say: show me how you reached that conclusion. That’s not to say that having a model means the conclusion is right, but the model provides a starting point for understanding which assumptions give rise to which types of arguments. So that basic trade-off is important. The stylized model provides a framework for second-order agreement: we may argue about what the right assumptions are, but we can agree that this provides a framework for thinking through the consequences.
I think the role of models in focusing policy discussion is important.
24:04
The problem with these models is that they’re perceived as being hopelessly complicated, and a lot of times there’s a tendency to dismiss or accept things without really addressing them. There’s a lot of microeconomics involved. The problem is that most conventional economics curricula tend to focus on econometrics and game theory, and price theory and this market-interaction stuff tend to be seen as a little old-fashioned. Generally, coming up with a paradigm that is both clean and able to address these issues is a challenge. From the student’s standpoint, this is Hell’s library, and the bookshelf is full of story-problem books. So it is all about how to solve story problems: how to take the problems you have, make sense of what they say, and quantify that.
Over the last couple of days we have had two visitors from the Bloomberg-funded project Risky Business. They are trying to look at the costs and benefits of climate intervention, and they are trying to build models to assess this. Here are the guys doing this analysis: they have master’s degrees in environmental science and environmental policy, but they are both computer science guys. These are guys who are interested in doing stuff and can work on the machine, no problem. They show me stuff all the time; they are very, very good.
So my basic point is that analytic methods are not a panacea that provides you with the solution to everything, but they give you a seat at the table. You have to have a framework to argue that something makes sense, so having a model is a good idea. In defense of neoclassical models: they are versatile, you can calibrate or estimate them, and, consistent with Occam’s razor, you want a model that is as parsimonious as possible, because basically this is about formulating models that hold up out of sample.
This goes back to the Golden Gate Bridge example. The problem is that there is a lot of controversy; climate change brings out a lot of controversy. One of the more entertaining controversies in the last year has been this paper by Pindyck. The title, if you can’t see it, is "Climate Change Policy: What Do the Models Tell Us?" And the first sentence has no verb. The first sentence is: "Very little." Right? So there is this interesting debate now about the role of these little models. Can they tell us anything? Do they have too much influence? I’d say the motivation for this paper is mostly that the Obama administration wants to bring in climate policy without having to go through Congress, so they estimated the social cost of carbon, and this has been a big debate: how much should we charge when we are evaluating environmental policy? How much should we bill ourselves for emitting carbon? That price is currently being based on little models like this, and there are a lot of people who think this does not make that much sense. So this is a legitimate point.
At the same time, if you are going to look at this paper, which is a fun paper to read, you might also want to look at Nordhaus’s most recent book. Nordhaus has been at this game for a long time. He’s really slick and knows his stuff; in terms of the climate he really knows what he’s talking about. His most recent book is quite nice and quite accessible at a graduate level; it’s quite good. Finally, if you don’t want to read the book, you can read Krugman’s review of it, which is also very good. So there is this overall debate on climate change; I’m not trying to settle things where there’s still lots of dust in the air, but you should know about it.
So let me give you an example of what a model might look like and how it can be applied. This is a paper we wrote four or five years ago looking at what’s going on in China and what we expect: how much longer is this going to go on, with Chinese emissions going up? It seems, and Greg Nemet knows more about this than I do, that local air pollution will probably stop their emissions growth, is my guess. But at the time we wrote this, there was a general tendency to underestimate the rate of change in China. It seemed as though every year there would be another surprise. My God, in 2006 the new generating capacity the Chinese installed was one quarter of that of the US. In one year they installed that much. It was completely breathtaking how fast. And the carbon shadow of the generating units installed after 2000, the cumulative carbon emissions from those units, is going to be more than the emissions from the entire earlier industrial period. So it’s really breathtaking. And when we looked at this question, we wanted to see why China is growing so fast. So there’s this Kaya identity, which is simply population multiplied by per capita income, times the energy intensity of GDP, times the carbon intensity of energy, which equals emissions. So it gives you a way to decompose things. As you can see, in China emissions are the black line, and a lot of this is being driven by per capita income. People get richer, they want to have a cell phone; people get richer, they want to have a car, right? So this is one of the things that happened. It’s not driven that much by population. Carbon intensity of energy has been relatively flat, but energy intensity of GDP has been going down, actually quite perceptibly; it was essentially without precedent. The efficiency improvements were remarkable.
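As a small illustration of how the Kaya decomposition works, here is a toy calculation; the numbers are placeholders, not figures from the talk or the paper.

```python
# Kaya identity: emissions = population x (GDP / capita) x (energy / GDP) x (CO2 / energy)
# Placeholder values, roughly order-of-magnitude only.
population = 1.3e9          # people
gdp_per_capita = 3.0e3      # dollars per person per year
energy_intensity = 30.0     # megajoules per dollar of GDP
carbon_intensity = 75.0e-6  # tonnes CO2 per megajoule

emissions = population * gdp_per_capita * energy_intensity * carbon_intensity
print(f"emissions ~ {emissions / 1e9:.1f} Gt CO2 per year")

# Growth rates multiply: if per-capita income grows 8%/yr while energy intensity
# of GDP falls 3%/yr and the other two factors are flat, emissions still rise ~5%/yr.
growth = 1.00 * 1.08 * 0.97 * 1.00 - 1.0
print(f"implied emissions growth ~ {growth:.1%} per year")
```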
The main problem was that China’s emissions were growing faster than emissions anywhere else in the world; they became the producer of all of these energy-intensive products. The blue area represents all the electric and non-electric energy; the blue and the striped blue, as you can see, is the dominant effect. We’re in this world where we are trying to look forward and see what is going to happen. One way to look at this is to ask, historically, what has happened in comparable countries that managed to turn the corner on economic growth and enter a period of sustained per capita income growth. For example, China in 2006 is comparable to Malaysia in 1979, Korea in 1977, Taiwan in 1973, and Germany in 1959. If you line them up, you can assess what the growth process looks like.
31:05
And here we have the two white lines that represent upper and lower bounds, while our black line represents our central case, and the dark area between represents the range of emissions intensities from those other countries. So you can see that the sustained growth we’ve observed from 1990 to 2010 is completely plausible to keep going for a long time. If China follows the growth experience of these middle-income transition economies over the next 20 years, we will continue to see emissions go up dramatically. The gray area represents the growth experiences of the comparable Asian economies, and these lines represent the prediction from our model. And you can see that, relative to what happened historically, there’s reason to believe this will continue to grow for a long time.
Well, China has shown a remarkable interest in renewable energy, and they’ve definitely recognized that this is a problem. But the main message from this sort of framework is that if China grows in a way comparable to Taiwan and Korea and Japan, climate will be a problem as far as we can see. What are some other insights? Integrated assessment models have been used in a number of different settings; the most common and most visible is probably the Energy Modeling Forum at Stanford, which does model comparisons. Can we assess the likelihood of hitting a two-degree target? Typically the models tell us that reaching two degrees is not likely to happen unless we have something called biofuels with carbon capture and storage, that is, carbon capture and storage hooked up to a biomass unit.
Then there is the question of discounting. The descriptive approach tries to connect the model to the interest rate we observe in the market, while the prescriptive approach says: no, I really care about what happens to future generations, I want to know what good policies look like, and that involves very low discounting of the future. This comes back to this guy Frank Ramsey. In terms of entertaining people from the last century, he’s really close to the top of my list; he was at Cambridge. Ramsey died at age 26, and he basically wrote three papers in economics and then a bunch of stuff in philosophy that I can’t make sense of. But his model of investment trade-offs is the main model that is used. And Alan’s analysis says: well, I have this Ramsey model, and I set it up to match market interest rates; it has a discount rate of three percent. If I take that model and just change the discount rate, then what happens is that investment jumps, because the three percent discount rate I put in the model is what made it match what we observe in the market, and if I suddenly change it, the model no longer matches. The point is that what we observe in the market represents the interaction of people who want to save money and people who want to borrow money; there’s a basic interaction involved that determines what the interest rate is. That’s the standard story.
There are lots of macroeconomic stories about why interest rates are so low right now: there’s a bunch of us who are getting older and are worried about our pensions, so everybody is looking for a place to put their money for retirement. There’s more money chasing fewer investments, so interest rates are lower. That’s the basic Ramsey-model logic. The problem is that if you take the Ramsey model that describes interest rates as observed and you jam in a much lower discount rate, you have a big disconnect. So climate policy analysts who counsel a low rate of time preference fundamentally face this challenge: if you lower the discount rate toward zero, it is not going to be consistent with market interest rates. In the Ramsey optimal growth framework, changes in time preference produce implausible changes in aggregate investment.
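A standard, textbook way to write the link being described between time preference and the market interest rate is the Ramsey rule (this equation is added here for reference, not taken from the slides):

\[
r \;=\; \rho \;+\; \eta\, g,
\]

where r is the consumption interest rate, \(\rho\) is the pure rate of time preference, \(\eta\) is the elasticity of marginal utility, and g is per-capita consumption growth. Lowering \(\rho\) while leaving the rest of the model unchanged moves r away from what is observed in capital markets, which is exactly the disconnect at issue.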
38:50
So it’s fundamental. Alan was fond of the expression, "the model doesn’t pass the laugh test." The laugh test is this: if you look at the results and start laughing because they don’t make any sense, then the model failed. In integrated assessment, passing the laugh test means you run the model and you should be able to observe things that are consistent: what is GDP growth? What’s energy use? How do things evolve along the baseline? Passing the laugh test along the baseline is important. Then along comes a paper last year by Goulder and Williams, and they said, well, maybe Alan had a point 25 years ago. Maybe we should think about separating prescriptive and descriptive discounting, so we have one discount rate for social welfare, call it r_SW, and a finance-equivalent discount rate, r_F. The agents in the model discount at a rate consistent with their impatience, and that is what clears the capital markets. But when we evaluate the model’s outcome, we want to be able to apply our own discount rate, which may be lower. Until now there has been a problem, because integrated assessment modelers, every time they want to do this, take these Ramsey models and simply lower the discount rate.
On one hand you then get much higher rates of intervention in the near term; carbon tax rates are much higher. Basically good things happen, but the model loses its plausibility, because it no longer matches the macroeconomic facts. So this was a paper that didn’t propose a method for how to do this, but it laid out a pretty nice idea, I thought. And that’s where Michael Ferris’s approach comes in. Michael Ferris is a computer science professor here at WID, and he has a nice framework for thinking about bi-level programming. Think about maximizing our social objective subject to the fact that we’re only going to control t; we are not going to control x. X will be controlled by the market underneath. That is a little bit abstract; sorry, I am talking to my colleagues. But basically, in Michael’s approach we don’t have to do very much. You can set up a bi-level program that maximizes social welfare, controls only the rate of emissions abatement, and lets the finance agents control everything else. And voila, we have a model where investment does not jump.
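Schematically, the bi-level structure being described can be sketched as follows; the notation r_SW and r_F follows the Goulder and Williams labels above, while the particular functional form is a stylization, not the actual MERGE or Ferris formulation:

\[
\max_{t}\;\sum_{\tau} \frac{U(c_\tau) - D_\tau(t)}{(1 + r_{SW})^{\tau}}
\qquad \text{subject to} \qquad
(c, x) \in \arg\max_{c,\,x}\;\sum_{\tau} \frac{U(c_\tau)}{(1 + r_{F})^{\tau}} \;\;\text{given } t,
\]

where the upper level chooses only the abatement instrument t and discounts damages D at the low social rate r_SW, while the lower level, representing the market, chooses consumption c and investment x at the market-consistent rate r_F.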
So the blue line, and the red line that you can barely see underneath it, is the original three percent case. If we solve the bi-level program with a very low discount rate on future climate damages, we end up with exactly the same interest rate. Whereas if we look at investment and consumption, we can see that if we simply drop the discount rate down to one percent in the ordinary Ramsey model, we fail the laugh test. If instead we work in the bi-level framework, we end up with basically the same consumption pattern as we do with the three percent rate, but the mitigation rate we observe is significantly higher.
42:03
So I think this is quite a promising approach, and it shows that being aware of modern computing techniques is quite important. I think it has the potential to contribute to the debate over what is appropriate for the social cost of carbon and how much we should charge for it.
So my claim is that this provides a very useful method for assessing optimal policies with very low discount rates. And in principle, as analysts, we can discuss which social discount rates make sense separately from what makes sense in terms of the market interest rate. The user of the model should be able to say what they think about the trade-offs between different generations. What’s fundamentally in play here in terms of climate is how much we want to mitigate here and now against the uncertain returns in the future. I know I didn’t talk about the uncertainty, but the main objective here is to have better control and to provide a framework for second-order agreement, and I think this is quite promising.
I guess the message to students is this: when I was an undergraduate, or a master’s student, one of my favorite books, which led me to Alan Manne, was Soft Energy Paths by Amory Lovins, back around the time of Earth Day. It was the same kind of idea, that with technological change we can do things smarter. It’s great to be aware of what the issues are, but it’s also good to learn some tools, and computer science, and optimization in particular, does not provide a magic bullet solving all problems, but it provides you with a framework for understanding what matters, and you can draw your own conclusions from it. And it is very fun as well. So that’s the conclusion.
*Applause*
Question: Professor Rutherford, everybody. And while you’re collecting your thoughts for a second, I have a question. I only had a couple of engineering classes in my grad career, but they had a lot to do with cost-benefit analysis and discount rates, and I’ve always thought it was fascinating. Does this extremely low discount rate that applies to climate change optimization put climate change considerations in a certain class of things that have extremely low discount rates? Could that have any resonance for people? Climate change investment and policy, will it be like x or y or other things we care about?
Professor Rutherford: We think of market interest rates as reflecting, sort of, decision-making in the here and now. There’s no place you can buy an annuity over a 100-year horizon. So whenever you work on these environmental problems that have very long horizons, the idea that the market rate of interest should govern your decisions, there’s no economic reason why that should be the case. I’d say, again, since moving back to Madison I’ve been struck by the similarities between climate change and the eutrophication of lakes, because Lake Mendota has gotten a lot worse than it was when I lived here in the early 70s. One of the things I’ve learned since moving back is that this is a problem with many similarities to climate change, because it has a very long horizon. You could cut off all of the pollutants going into the lakes and cut off all of the fertilizer, and it might take 100 years, they say, to clear the water. So we have these decisions and choices that are long term, and I don’t think there is an ex-ante reason to choose any particular interest rate. I’m not saying there is a knock-down argument that one is better than the other, but personally I think there is an argument that we can at least ask that question.
Question: I work for a solar energy installer, and one of the things is that our inverters have a 10-year warranty, and you can extend it out to 20 years, but we normally don’t recommend that people do, because we figure that in 10 years new inverters will be a lot better and a lot cheaper than what they would pre-pay for now. So what I want to know is: how do these types of models take into account these technological advances and investment timing? The forward path of technological innovation, you would think, keeps getting much better for carbon-abating technology.
Professor Rutherford: So one of the things we introduced in MERGE in 1999 was vintage technology. Basically the model recognizes what technology is available and when, and it makes a decision; it may postpone investment, anticipating that cheaper things will be on the horizon. The problem in your case is that the future is not known with certainty; how to cope with uncertainty and how to capture it is always a challenge. But in this setting, if you have vintage technology, then the cost coefficients and the technology characteristics are based on the year in which the investment was made.
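As a toy sketch of what vintage technology means computationally: cost and emission coefficients are indexed by the year a unit was installed, and each vintage carries those coefficients through its lifetime. The names and numbers below are purely illustrative, not MERGE’s actual data.

```python
# Illustrative vintage bookkeeping: each installation keeps the cost and emission
# coefficients of its installation year for its whole lifetime.
vintage_cost = {2000: 1.00, 2010: 0.80, 2020: 0.55}      # $/W of capacity, made-up learning
vintage_emis = {2000: 0.9, 2010: 0.7, 2020: 0.4}         # tCO2/MWh, made up

installed = [(2000, 50.0), (2010, 80.0), (2020, 120.0)]  # (vintage year, GW installed)

total_cost = sum(gw * 1e9 * vintage_cost[yr] for yr, gw in installed)
avg_emis = (sum(gw * vintage_emis[yr] for yr, gw in installed)
            / sum(gw for _, gw in installed))

print(f"capital cost of the fleet: ${total_cost / 1e9:.0f} billion")
print(f"capacity-weighted emission rate: {avg_emis:.2f} tCO2/MWh")
```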
47:49
Question: So you talk about these low interest rates ex-ante. What does that translate to in a policy sense? Do you kind of have to create the appearance that these interest rates and discount rates are operative, to help people make decisions about investments?
Professor Rutherford: The key point is that we are not able to control the decisions of the financial agents. All we have control over is the carbon tax rate, or how many emissions we permit. So you are in this bi-level framework. If we could control investment directly and we had a much lower discount rate, we would invest much more. But if we only have access to the carbon tax, what it means is that you can raise the carbon tax and justify it on the basis of a given social objective. It also means that you can’t dismiss model results that are based on a low discount rate as implausible, because the model I presented there with the low discount rate passes the laugh test: the macroeconomics matches up with what we see in practice. From an optimization standpoint the model is justified, and from a cost-benefit standpoint intervention in carbon markets can be justified on the basis of concern for future generations.
49:34
Question: I appreciate the two-level discount rate. My concern is that it has a theoretical backing, but it might be an example of Texas sharpshooting, where you have the result you are looking for and draw the target around it. So I guess it passes the laugh test theoretically, but what is the empirical evidence that supports the two-level discount rate?
Professor Rutherford: Well, the empirical question is whether you can convince someone that it is a reasonable approach. I’d say my reaction would be: rather than reading Frank Ramsey’s 1928 paper on optimal saving, they should read his 1927 paper on optimal taxation, because this really follows the classic approach to how we design taxes. Tax design is about designing an instrument while recognizing how individuals respond to changes in that instrument. In optimal taxation you want to tax things that have a low overall elasticity, and you also want to be careful not to tax factors of production; that’s the argument for low taxes on capital: you don’t want to interfere with economic growth. You want to take the same logic that is used in public finance for taxes and apply it to integrated assessment. So whether or not we can sell this to legitimize the analysis of low-carbon proposals, I don’t know. We’re writing the paper now, so we’ll see how it goes. There are quite a few people who are frustrated with the current dialogue about integrated assessment models. The thing is, this doesn’t solve every problem. Pindyck’s other argument is that most of our willingness to invest in climate protection is driven not by the most likely damages but by the potential for high-consequence events: a runaway shutdown of the thermohaline circulation, problems with methane outgassing. These runaway events are what drive things, and our approach does not address all of Pindyck’s concerns.
52:00
Question: Regarding the gentleman a few questions back, about whether we might have better technologies in the future that would make saving the money now for a future investment make more sense: my understanding is that keeping carbon in the ground is a lot more cost-effective, by any standard as currently understood, than the cost that would be incurred in trying to pull it back out of the atmosphere. When the IPCC did their most recent report on climate change, some scientists asked, wait, where is the focus on methane releases in the Arctic? This is a huge area and a huge unknown. We see it bubbling up. Has it been bubbling up forever, or is this new, and is this going to be a positive feedback loop? The IPCC said they didn’t include it because we don’t know what those numbers are. And the scientists are saying, surely those numbers must be large, and the reply was, yes, but because we don’t know what they are, we didn’t include them in our model. So a lot of this is a bit out of my league, but my understanding is that if you frame the problem as: we may be very near a tipping point and we don’t know it, or we’ve got that hundred-year return on investment that you talked about and we don’t know that either, then it is hard for any model to predict very effectively what the outcome will be, because we don’t really understand the playing field.
Professor Rutherford: There are lots of models, and the main problem is that there is lots of uncertainty. Does that mean we shouldn’t use models at all? My view is this: we hear about all these tipping-point events. A student of mine, Charlie Chang, and I are working with folks at EPRI and Klaus Keller from Penn State on exactly these tipping-point events; we are asking what the estimates for some of these tipping points are. The problem is, if you watch Chasing Ice, which is really a great movie about the disappearance of glaciers in Greenland and around the world, the scientists truly don’t know. Stuff happens and they aren’t sure how fast it happens or why. So there’s a lot of uncertainty still. My thinking is that it is better to have a model there and make explicit assumptions about what matters in the here and now. Ultimately with these models we ask where we go over the 150 years of the model horizon and what the model tells us the right action is today, given the range of uncertainties. That’s the best we can do.
55:00
Question: How much can the agents in your models being non-rational throw a spanner in the works of the outcome?
Professor Rutherford: There is a lot of enthusiasm for agent-based models, where we have rules of thumb for characterizing people’s behavior, and these models have lots of applications where they are very effectively applied. I myself was trained in an era where the assumption of optimization was such an appealing thing, because it keeps things relatively simple: if you control the environment, it is relatively straightforward. When you have many agents that can do all kinds of crazy stuff, the question is how many rules you need and what happens. But my own training is what it is. I’ve looked specifically at price signals in electricity and how agent-based models reflect them. They are quite intriguing, but I claim ignorance about effective ways to use these tools.
Conclusion: Thanks, everybody, for your great questions, and thanks to Professor Rutherford for a very interesting talk.