Econ 522 – Lecture 26 (Dec 13 2007)

Today’s material is not on the final.
Pretty much everything we’ve done this semester has assumed that people are perfectly
rational, and respond to incentives according to what they correctly perceive to be their
own best-interest.
- Property and nuisance law: people can bargain with each other, so entitlements end up with the owners who value them most
- Contract law: parties can negotiate efficient contracts, and courts can enforce them correctly
- Tort law: people react rationally to incentives, and courts can assign liability and damages correctly
- Criminal law: even criminals react rationally to incentives, committing crimes when the benefit outweighs the expected cost
These are strong assumptions. They are useful assumptions – they gave us a lot of
predictions about how laws would affect behavior, and therefore what laws would lead to
efficiency. But the question remains whether they’re valid assumptions.
In the last decade or two, there’s been huge growth in the field of behavioral economics.
Behavioral economics studies how people’s actual behavior differs from the predictions
of the standard model. We mentioned a couple of examples over the course of the semester:
for example, we mentioned that people don’t react to probabilistic risks the way
expected-utility theory would suggest.
Behavioral economics started out as a fairly ad hoc discipline: someone would pick a
prediction of the standard model – for instance, maximizing expected utility under
uncertainty, discounting future payoffs at a consistent per-period rate, or
maximizing only one’s own payoff in a multi-player setting. Then they would run
experiments – have a bunch of undergraduates play games in a lab – or look for instances
in the real world where the prediction was violated.
Over time, behavioral economics has generated some fairly robust conclusions about
systematic ways in which people’s behavior differs from the standard model of perfect
rationality.
What’s important is that the way people’s behavior deviates from the standard predictions
is not random. If it were, we could explain it simply as random error – people aren’t
infinitely wise, so they sometimes make mistakes in calculating the right
behavior, and those mistakes can go in any direction. Instead, we find that people’s
behavior shows consistent biases – that is, in many situations, the deviations from
perfect rationality all seem to go in the same direction.
At its best, behavioral economics also holds itself to a sort of a “higher standard” than
traditional economics. Traditional economics makes assumptions (basically, rationality
and optimization), derives predictions, and then asks whether the predictions seem to be
right, but doesn’t spend that much time questioning the assumptions themselves.
Behavioral economics tries to justify the assumptions as well.
The paper on the syllabus by Jolls, Sunstein, and Thaler, “A Behavioral Approach to Law
and Economics,” discusses some of these biases observed by behavioral economists; and
proposes how these more complicated (and therefore more accurate) views of human
behavior could be incorporated into law and economics. How people actually behave,
and how this differs from the standard model, has implications for every use of law and
economics:

- The positive part
o “Positive” here means “predictive” – making predictions about how
people will respond to particular laws
o The positive approach also allows us to predict (or explain) the laws that
do exist – as outcomes of some process (either the common law
“evolving” toward efficiency, as we’ve discussed in class; or as the
outcome of a legislative process)
o (Positive statements are things like, “an increase in expected punishment
will lead to a decrease in crime”)

- The prescriptive part
o Once we know how people react to a given law, we can make
prescriptions about how the law should be designed to achieve particular
goals
o (Prescriptive statements are things like, “to achieve efficiency, the law
should specify injunctive relief when transaction costs are low, and
damages when transaction costs are high”)
o If people behave differently than the standard model predicts, then the law
should be designed to take this into account

- The normative part
o The normative question is, what should the goal of the legal system be?
o Throughout this class, we’ve mostly assumed that the goal of the law is
economic efficiency – we gave a number of arguments to defend this
o This gets much trickier when a behavioral approach is used
o One of the observations of behavioral economics is that people’s
preferences are not as well-defined and stable as the standard model
assumes
o But this makes even measuring efficiency hard, since we don’t know what
preferences to use
o (An example: one of the findings of behavioral economics is that people
value things more once they have them. So if I gave one of you a
chocolate bar, you might get all excited about it, and be more hurt by
losing it than if you hadn’t had it to begin with. Suppose I give one of you
a chocolate bar, and offer you an opportunity to sell it to someone else.
Good chance you wouldn’t. Even if I offered to subsidize the purchase –
I’d throw in 50 cents on top of what they pay you – you might not. So
we’d conclude you value the chocolate bar more than they do.
o But if we’d started out giving the chocolate bar to them, maybe they
wouldn’t have wanted to sell it to you either.
o But this muddles the question of who values it more: if I give it to you,
you value it more than him; if I give it to him, he values it more than you.
But now we have no way to gauge which allocation is efficient!)
So that’s the goal of behavioral law and economics – to give a more accurate model of
how people actually behave, and use that model to reconsider the positive, prescriptive,
and normative conclusions of law and economics.
The Jolls, Sunstein and Thaler paper concedes that so far, the results are fairly sparse; the
paper reads more like a proposal for future research than a bunch of conclusions. Still,
some of the initial results – basically, taking behavioral biases documented elsewhere and
considering their implications for law and economics – are quite interesting.
Behavioral biases – the ways people’s actual behavior deviates from the standard model of
perfect self-interested rationality – tend to be broken up into three categories:
- Bounded rationality
o People aren’t perfect – we have limited computational abilities, have
flawed memory, imperfect powers of perception
o This leads us to make “mistakes”; it also leads us to use simple “rules of
thumb”, rather than detailed analysis, in many situations
- Bounded willpower
o Even when we know what’s “right”, we don’t always do it – we eat too
much, don’t go to the gym, have trouble quitting smoking
o This means that commitment devices – finding a way to “give up” options
– can have value, which doesn’t make sense in the standard model. We’ve
all seen people turn down leftover cake – “if I have it at home, I’ll eat it,
and I don’t want to eat it.”
o (This is why savings plans that “force” people to save, or gym
memberships that reward you for going to the gym, can have value)
- Bounded self-interest
o People aren’t completely selfish – we all do nice things for other people.
But even in anonymous situations with strangers, people tend to care about
others’ outcomes as well as their own – we’ll see examples.
On to some examples.
We begin with an experiment done at Cornell. The experimenters took 44 students in an
advanced undergrad Law and Econ class and gave half of them tokens. Each person
(those who got tokens and those who didn’t) was also given a personal value – an amount
of money they could exchange a token for at the end of class, if they had one. Then
people were given an opportunity to trade.
The market for tokens worked just like the standard model would predict: people with
higher token values bought them from people with lower token values.
But that was with tokens, which had an artificial value that everyone knew objectively.
So they reran the experiment. This time, half the class was chosen at random and given
Cornell coffee mugs. Then students were allowed to trade.
If, like in the standard model, each person knew exactly what a mug was worth to them,
we’d predict about half the mugs would trade hands. Since the people who got them
were chosen at random, about half the mugs should have gone to people who valued them
above the median valuation, and half to people who valued them below that; that latter
half should all have been sold, to the people with high valuations who didn’t get mugs.
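That 50% prediction can be checked with a quick Monte Carlo sketch. The class size and mug count are from the experiment; the independent uniform valuations (and the assumption of no endowment effect) are hypothetical:

```python
import random

def expected_trade_fraction(n_students=44, n_mugs=22, trials=2000):
    """Under the standard model, mugs end up with the n_mugs highest-value
    students, so a mug trades iff its initial holder is not in that group."""
    total = 0.0
    for _ in range(trials):
        values = [random.random() for _ in range(n_students)]
        holders = random.sample(range(n_students), n_mugs)
        top = set(sorted(range(n_students), key=lambda i: values[i])[-n_mugs:])
        total += sum(1 for h in holders if h not in top) / n_mugs
    return total / trials

print(round(expected_trade_fraction(), 2))  # ≈ 0.5 – half the mugs should change hands
```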
Instead, only 15% of the mugs traded hands. And on average, people who got mugs
asked more than twice as much money for them as the people who didn’t get them were
willing to pay. The effect didn’t go away when the experiment was repeated.
The conclusion was that having something makes you value it more – this is referred to
as an endowment effect. (In this case, having a mug made you value having it more
highly.)
So what? Well, the big so what is that this seems to contradict Coase. Coase predicted
that without transaction costs, the initial allocation should not affect the final allocation –
whoever starts out with an object (or an entitlement), it will naturally flow to whoever
values it the most. But endowment effects mean that the initial allocation does matter in
predicting the final allocation. And if preferences really change depending on
whether you got the object, it becomes very unclear how to even define efficiency!
Recall what we said about injunctive relief in nuisance cases. We argued that when
transaction costs are small, injunctions would work well, since they clarify the two sides’
threat points so they can bargain to an efficient outcome. Endowment effects challenge
this result – they say that whoever is allocated the right initially, comes to value it more,
and therefore may not be willing to give it away, regardless of who efficiency would have
favored ex ante.
The existence of this bias is fairly robust. One of the chapters in Sunstein’s book,
“Behavioral Law and Economics,” documents twelve different studies where people’s
willingness to pay for something they didn’t have was compared to their willingness to
accept an offer for something they did have. In every case, the payment required to give up
something they had was greater – typically three times greater or more – than their
willingness to pay for the same thing.
This also has implications for damages. If you asked someone ahead of time how much
money they would accept to lose an arm, the number would be huge. If someone had already
lost an arm, and you asked them how much money it would take to make them overall as
well-off as before, the number would be smaller.
(This is also partly due to the fact that people adapt to new circumstances better than they
anticipate. That is, if someone loses their arm, they find ways of dealing with it which
make it less bad than they would have guessed ahead of time. Again, though, this calls
into question which measure should be used in assessing efficiency. Suppose someone
with two arms thinks losing one would be a catastrophe, on the order of a $10,000,000
loss. Someone who lost an arm realizes that life’s still not that bad, and that the damage
done was, say, $500,000. Should a construction firm have to take precautions that cost
$3,000,000 to prevent each lost arm?)
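The arithmetic behind that question is just a cost-benefit comparison, and which welfare measure we plug in flips the answer. A minimal sketch using the (hypothetical) numbers from the example:

```python
def precaution_is_efficient(cost, harm_avoided):
    """A precaution is efficient iff it costs less than the harm it prevents."""
    return cost < harm_avoided

cost = 3_000_000  # cost of precautions per lost arm prevented

print(precaution_is_efficient(cost, 10_000_000))  # ex-ante valuation: True, take precautions
print(precaution_is_efficient(cost, 500_000))     # ex-post valuation: False, don't
```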
Another bias Jolls/Sunstein/Thaler discuss is hindsight bias. Once something happens,
people have trouble assessing what its likelihood was before the fact. Specifically, they
overestimate what the ex-ante probability was, knowing that the thing did in fact happen.
(Ask a Packers fan what they thought the odds were in August that the Packers would be
11-2 right now. Once something happens, we can always find ways to rationalize it –
“they’ve got Favre, maybe some of the kids will step up, they’ll win some close games,
it’s not impossible”. I couldn’t find Vegas lines…)
Why does this matter? Determining negligence usually requires figuring out what the
probability was that something would happen, after it happens. A storage company
decides the risk of a fire at its warehouse is 1 in 1000, and so it doesn’t install a $10,000
sprinkler system to protect $1,000,000 in stored goods. Now a fire occurs, and the jury
has to sort out whether the company was negligent. Knowing the fire occurred, they
might decide the probability of a fire was 1 in 50, and find the company liable.
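The warehouse example maps directly onto the Hand formula for negligence (negligent when the burden of precaution B is less than the probability P times the loss L). A sketch of how hindsight bias flips the finding:

```python
def negligent(burden, prob, loss):
    """Hand formula: negligent iff B < P * L."""
    return burden < prob * loss

B, L = 10_000, 1_000_000  # sprinkler cost; value of stored goods

# Ex ante: expected harm is $1,000 < $10,000 burden, so not negligent.
print(negligent(B, 1/1000, L))  # False
# Jury's hindsight estimate: expected harm is $20,000 > $10,000, so negligent.
print(negligent(B, 1/50, L))    # True
```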
(The same thing happens in lawsuits against publicly-owned companies who failed to
disclose a particular risk to investors. Was the risk material, so the company was
fraudulent in hiding it? Or was it an extremely small risk that just happened to occur, so
the company did its job and got unlucky?)
The effect of hindsight bias should be clear: juries will find negligence more often than
they would if they could perfectly assess ex-ante probabilities after the fact. The
proposals Jolls/Sunstein/Thaler give for dealing with hindsight bias, though, have
problems themselves.
(One thing they suggest is, in some cases, keeping the jury in the dark about what
happened. Obviously, since they were asked to serve on a jury, the jury knows
something bad happened. However, in some cases, either action or lack of action would
entail risk: treating a patient with a risky drug might cause them to die, but not giving
them the drug might also cause them to die. They suggest the jury could be given the
facts available at the time, without being told what choice was made, and asked to decide
if either action would have constituted negligence. Still, this won’t always work, since in
many cases the jury will be able to infer what happened from the fact that there’s a trial at
all; and in order to make this work, the jury would have to not read newspapers or know
anything about the trial, and not even know which lawyers represented the plaintiff and
which ones represented the defendant!
The other suggestion they make is to raise the standard of proof for finding negligence –
from “preponderance of the evidence,” interpreted as 51% certainty, to, say, the “clear
and convincing evidence” standard, generally interpreted as 60-70% certainty. But this
assumes that hindsight bias is of a certain magnitude, not just that it exists; and that the
“preponderance of the evidence” standard would be efficient if there were no hindsight
bias.)
Another bias they consider is what they call “self-serving bias”. This can be thought of
as relative optimism that exists even when both sides have the same information.
In another experiment they cite, students – undergrads and law students – were randomly
assigned to the roles of plaintiff and defendant, knowing they would be asked to negotiate
a settlement. They were all given the same facts – based on an actual case in Texas.
Prior to negotiations, they were each asked to write down a guess as to the damages the
judge actually awarded, as well as what they felt was a “fair” settlement – these answers
would not be used in any way during the negotiations.
Although they were chosen randomly, the students chosen to represent the plaintiffs
guessed $14,500 higher than those representing the defendants as to the judge’s actual
award, and answered $17,700 higher when asked for a “fair” settlement.
(They give another example where the presidents of teachers unions and the presidents of
school boards were asked what other cities were “comparable” to their own, since
comparables were often brought up during salary negotiations. Not surprisingly, the
union presidents listed cities with higher average salaries than those listed by school
board presidents.)
What does self-serving bias suggest? That pre-trial settlements may not happen as often
as the standard model would predict, and that sharing information won’t solve the
problem. That is, even if both sides have access to all the same information, they may
still be relatively optimistic about their chances at trial, and therefore unable to reach a
settlement. (It also has implications for wage negotiations and strikes.)
Another example of self-serving bias is the old cliché that 80% of people think they’re
above-average drivers. The authors mention that this sort of bias can be used to design
public campaigns that are more effective. In promoting safe driving, for instance, campaigns
moved from “drive carefully or you’ll cause an accident” to “drive carefully, there are bad
drivers out there you have to avoid!”
There’s another bias, similar to hindsight bias, in how people perceive the probabilities of
events. People tend to overestimate the probability of a certain type of accident
happening in the future if they’ve recently observed a similar accident.
Jolls/Sunstein/Thaler refer to this as availability – a memory of a recent accident is
available in your mind, and colors your perception. Adding to this is salience – basically,
how vivid the memory is.
So if you recently passed a car accident while driving, you tend to overestimate the
likelihood of car accidents. If you just saw a news item about lead in toys, or asbestos in
ceilings, you overestimate that risk.
Jolls/Sunstein/Thaler use this to explain environmental and safety regulations covering
whatever that year’s “hot topic” is, without regard for thoughtful cost-benefit analysis.
(Recall that we saw the “cost per life saved” of safety regulations varies from $200,000
to over a hundred million or even billions of dollars.)
The problem of perception is made worse, they point out, by the fact that some people
(politicians, or regulators, or concerned citizens who are worried about a problem) may
deliberately try to keep the accident available, in order to gain from it. They use the term
“availability entrepreneurs” for people who try to whip everyone into a panic about a
particular risk, presumably for private gain. (Think of most politicians in this country
after 9/11. They use Superfund as an example – the EPA program for dealing with abandoned
toxic waste dumps, passed after the Love Canal scare despite the actual risk being very
small. A recent example was available, and very salient, so there was no opposition.)
Another example of bounded rationality – one which helps explain bounded willpower – is
how people discount the future. People’s choices (both in the “real world” and in
experiments) do reflect discounting of future events. But the way they discount is
different from the standard theory. The dropoff between “now” and “later” is much more severe
than the dropoff between “some future time” and “some later future time”. That is, the
difference in value between something happening now versus a year from now is much
greater than the difference between something happening in five years versus ten years.
One implication is something we’ve already seen: that the last few years of a prison
sentence offer much less deterrence than the first year. Studies with criminals found that
a five-year prison term was viewed as only being about twice as severe as a one-year
term – the first year mattered far more, since it starts now.
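This pattern is often modeled with quasi-hyperbolic (“beta-delta”) discounting, where only future periods get an extra discount beta. A sketch with illustrative parameters (not estimates from any study), showing how a five-year term can feel only about twice as bad as a one-year term:

```python
def quasi_hyperbolic_severity(years, beta=0.32, delta=0.9):
    """Perceived disutility of a prison term: the first year counts fully;
    year t > 0 is discounted by beta * delta**t (present bias)."""
    return 1 + beta * sum(delta**t for t in range(1, years))

def exponential_severity(years, delta=0.9):
    """Standard discounting: every year t is discounted by delta**t."""
    return sum(delta**t for t in range(years))

print(round(quasi_hyperbolic_severity(5), 2))  # ≈ 1.99 – only about twice a 1-year term
print(round(exponential_severity(5), 2))       # ≈ 4.1 – standard model predicts far more
```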
The final bias they talk about is bounded self-interest. A standard example of this is the
ultimatum game. Player 1 is given the opportunity to propose a way to divide up $10
between himself and player 2. Then player 2 says yes or no. If he says yes, they each get
their share; if he says no, they both get nothing.
If player 2 is fully rational and self-interested, he should say yes to any positive share of
the money – even a penny is better than nothing. But experiments find that people reject
small offers – offers of less than a third of the total are often rejected. This, and other
evidence, brings
Jolls/Sunstein/Thaler to the following conclusions, which are shocking to economists and
completely obvious to everyone else:
- people are willing to sacrifice their own material well-being to help those who are being kind
- people are willing to sacrifice their own material well-being to punish those who are being unkind
(The same observations occur in experimental Prisoner’s Dilemma-type games, and in
lots of other settings.)
One interpretation is that people care not only about their own outcome, but also about
whether it’s fair. However, “fairness” is not defined objectively – that is, people don’t
reject every offer below 50%.
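One way to formalize “caring about fairness” is an inequity-aversion utility in the spirit of Fehr and Schmidt’s model (the paper itself doesn’t specify one; the parameters below are illustrative). The responder’s utility is his payoff minus a penalty for unequal splits, and he rejects when accepting is worse than the zero payoff from rejecting:

```python
def responder_utility(own, other, alpha=1.0, beta=0.25):
    """Inequity-averse utility: own payoff, penalized by alpha for being
    behind the other player and by beta for being ahead."""
    return own - alpha * max(other - own, 0) - beta * max(own - other, 0)

def accepts(offer, pie=10, alpha=1.0):
    """Accept iff the utility of accepting beats the 0 from rejecting."""
    return responder_utility(offer, pie - offer, alpha=alpha) >= 0

print(accepts(5))  # True – an even split is always accepted
print(accepts(1))  # False – with alpha = 1, offers below a third of the pie are rejected
```

With alpha = 1 the rejection threshold works out to pie/3, matching the experimental finding that offers below about a third of the total are often rejected.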
They say that this preference for fairness may help explain lots of rules we see
empirically:
- rules against scalping tickets in many states
- rules against predatory pricing during emergencies
- rules against usury (unreasonably high interest rates)
Standard economics would suggest these rules are inefficient – any voluntary transaction
should be Pareto-improving, and these rules prohibit such transactions. However, in each
of these cases, prices appear to be “unfair” relative to an available market benchmark.
We discussed earlier that the law may evolve toward greater efficiency. Another
common model (one that we did not consider) is that the law will evolve to favor certain
wealthy or politically-connected individuals, since they can influence the political
process and impose their own interests on the system. Jolls/Sunstein/Thaler suggest a
third possibility: the law may evolve toward agreeing with peoples’ notion of “fairness”
(either because legislators themselves share this preference, or because they think it will
help them win re-election.)
(This also explains why ticket prices are kept at a level low enough for there to be excess
demand – one New York theater owner explained, “Even though we could sell tickets at
$100, we’d be cutting our own throats because it would be a P.R. disaster for Broadway.”
Similarly, it explains why stores don’t raise prices on popular toys that are likely to sell
out during the Christmas rush.)
(Other examples: Amazon’s failed experiment with personalized pricing; Coca-Cola’s failed
weather-sensitive vending machines.)
Most of the interesting work in the paper is on the positive questions – how to
describe people’s behavior more accurately. The paper is weaker on the prescriptive part –
how to design the law to deal with it – presumably because when behavior differs across
situations, it’s hard to come up with general rules that always work.
One thing they point out is that how people respond to information depends very much
on how it’s presented. They give an example: university staff choosing whether to invest
their retirement money in a safe fund (bonds) or a risky one (stocks). Those who were
shown a distribution of one-year returns of the stock fund focused on the volatility, and
put most of their money in bonds. Others were shown a simulated distribution of thirty-year
returns, based on the same data; they focused more on the compounding effect of
time, and put most of their money in stocks.
The authors’ takeaway seems to be that when the government is putting out information –
either to help people make informed choices, or to encourage or discourage particular
behavior – it should think about how the “framing” matters. Particularly in the second
case – trying to push people to behave in a certain way – they can control the effect of the
message by manipulating how it’s presented.
(We saw earlier the example of safe-driving ads focusing on other drivers being bad, not
you. They give other examples – particularly graphic warnings about cigarette dangers,
phrasing things as losses instead of gains (“if you fail to do a breast self-examination, you
will have a decreased chance of finding a tumor at an early stage”), etc.)
They do a little bit on criminal law, but nothing we haven’t already seen.
They wrap up with an argument for “anti-antipaternalism”. Antipaternalism is the notion
that the government shouldn’t tell people what to do, since people know what makes them
better off and can do it on their own. The authors stop short of actually being pro-paternalism,
for several reasons – among them, any behavioral bias that leads individuals to make
mistakes, might also lead government bureaucrats to make the same mistakes when
telling people what to do – but at least argue that we shouldn’t reflexively reject
paternalism, as it may have a role in some instances.
(If you’re interested, the article is on the syllabus, and cites tons of papers addressing
particular biases in particular instances. The book by Sunstein, “Behavioral Law and
Economics,” is a collection of some of these articles; I have it, if anyone wants a look.)
One other thing I thought I’d mention today. There was a great six-piece article on Slate
about two months ago, titled “American Lawbreaking.” The gist of it was this.
Obviously, some people get away with crimes because it’s too expensive to catch
everyone. But some people get away with certain crimes because even though the law is
on the books, everyone recognizes it’s not a great law, and it’s better to just let it go.
The article is by Tim Wu. He starts with a story of New York prosecutors sitting
around the office, picking a celebrity – say, Mother Teresa – and trying to come up with
a crime they could have charged her with.
Here’s a link to the article: http://www.slate.com/id/2175730/entry/2175733/
To quote Wu:
Tolerated lawbreaking is almost always a response to a political failure – the
inability of our political institutions to adapt to social change or reach a rational
compromise that reflects the interests of the nation and all concerned parties.
That’s why the American statutes are full of laws that no one wants to see fully
enforced – or even enforced at all.
The rest of the article details examples.
The first doesn’t exactly fit his premise, but it’s interesting anyway. His claim: “over the
last two decades, the pharmaceutical industry has developed a full set of substitutes for
just about every illegal narcotic we have.” That is, rather than trying to legalize street
drugs – which some people argue for, but which isn’t politically very popular – our
society has developed drugs like Ritalin, Vicodin, OxyContin, and clonazepam, which may
serve a “legitimate” medical purpose in some instances, but also mimic the highs of
cocaine and other street drugs.
The second example is pornography. Apparently, there’s porn online. And pretty much
all of it is illegal.
Federal law prohibits using a “computer service” to transport over state lines “any
obscene, lewd, lascivious, or filthy book, pamphlet, picture, motion-picture film, paper,
letter, writing, print, or other matter of indecent character.”
He discusses some American history on prosecution of pornography, bringing us to the
last decade or so, in which prosecutors have stopped prosecuting pornography, juries
refuse to convict people for it, the laws are still on the books, and nobody really cares.
What’s happened instead is analogous to “zoning” – rather than prohibiting the behavior,
it’s regulated. Not literally regulated; but it’s prosecuted when it crosses certain lines,
and ignored otherwise. Recall the Super Bowl wardrobe malfunction – just because
everyone accepts that pornography is pseudo-legal, doesn’t mean they don’t freak out
when it happens on prime-time network TV. Prosecutors still chase down child
pornographers and a few other extreme cases that cross certain lines.
(In 2005, new Attorney General Alberto Gonzales tried to pressure local prosecutors to
crack down on pornography. Basically, nothing happened – a few more cases involving
“extreme” content, but no prosecutions of mainstream pornography at all. Quoting a
Miami attorney: “compared to terrorism, public corruption, and narcotics, [pornography]
is no worse than dropping gum on the sidewalk.”)
What’s interesting is not that modern society has basically legalized pornography, but
that it’s happened not through legislation or the court system, but through a general
consensus – among prosecutors, the FCC, the FBI, and local police – to do nothing about
it.
(The ironic part of this, of course, is that since it’s still illegal, it’s not regulated at all.)
(Also, I believe that at least up till a few years ago, oral sex was still illegal in about half
of U.S. states. Again, not much enforcement.)
Wu also discusses copyright law and illegal immigration, but the other one I found
interesting was how the Amish and Mormons basically became exempt from most laws.
The Amish refuse to pay Social Security taxes, and do not accept Social Security
benefits; they will not educate their children beyond eighth grade. To some degree,
polygamy is still practiced among some Mormons.
The article gives some history of occasional prosecutions, and backlash, and how we
seem to have reached a sort of truce: the Amish (and some Mormons) keep to themselves
and keep quiet about what they’re doing, and the rest of society pretty much lets them be,
not worrying about the fact that they’re breaking certain laws. (When a Mormon
fundamentalist went on Sally Jessy Raphael to defend his polygamous lifestyle, he was
tried and convicted. When it’s done quietly, in scattered communities outside of big
cities, polygamy apparently still goes on, and is tolerated. Again, practices that are
officially illegal are, as a practical matter, zoned.)
Anyway, the article’s a fun read – you can find it on Slate.com by searching “American
Lawbreaking”.
That’s it for the class. Lots of office hours before the exam. Good luck on finals!