Econ 522 – Lecture 3 (Sept 11 2007)

Today: day 2 of 2 of review: efficiency, game theory, choice under uncertainty (if time)
Recall our setup from the end of last Thursday – consumers, who maximize utility given
their budget constraint, and producers, who maximize profits given prices and their
available technology.
Efficiency
 A Pareto-improvement is any change in allocations (either through additional
production, or through trade, or through a combination) that makes someone in
the economy strictly better off without hurting anyone
 That is, it leaves everyone in the economy at least as well off, and at least one
person in the economy strictly better off
 An allocation is Pareto-efficient if there are no available Pareto-improvements,
that is, if with that as your starting point, there’s no way to make someone better
off without hurting someone else
 Note that Pareto-efficiency has nothing to do with fairness/equity – one person
having everything in the economy is usually Pareto-efficient
 First Welfare Theorem: Under this general setup, equilibrium is always Pareto-efficient.
 But remember that our setup made a lot of implicit assumptions: everyone (firms
and consumers) is a price-taker, nobody has any private information, and people’s
utility depends only on what they themselves consume (there are no externalities)
 So there are several potential problems that can disrupt Pareto-efficiency:
o Market power – if firms have market power, that is, they have an impact
over prices – this generally leads to inefficiency. Recall that monopolists
have an incentive to set their quantity too low, avoiding mutually
beneficial trades and therefore leading to Deadweight Loss
o Private information – in some settings, private information can lead to
market breakdown and inefficient outcomes. For example, adverse
selection (in insurance or used cars).
o Externalities – if what I consume has an impact on your utility, I don’t
take this into account when I make my choices, and so the market outcome
may be inefficient. We’ll see an example of this next lecture.
 But under our (strong) assumptions, general equilibrium usually exists, and is
Pareto-efficient

 In complicated economies, Pareto-improvements are hard to come by
o Looking for Pareto-improvements implicitly gives every member of
society a veto over any change
o There might be policies that are good for most, but a little bit bad for a few
– it would be nice to have a condition that recognizes these
 There is a weaker condition – a potential Pareto-improvement (also referred to as
a change that is Kaldor-Hicks efficient)
 This is a change that could be turned into a Pareto-improvement with the right
combination of transfers
 So for example…
 Suppose I have an apple, which I value at $1, and you value at $3
 If I give you the apple for $2, this is a Pareto-improvement – we’re both better off
 (If I give you the apple for $1, this is still a Pareto-improvement – you’re strictly
better off, and I’m equally well off)
 Now suppose that I turn around to write on the blackboard and you grab the apple
while my back is turned. This is not a Pareto-improvement, since I’m now worse
off than when I had the apple; but it is a Kaldor-Hicks improvement, since with
the right transfer, it would become a Pareto-improvement.
 In settings where we’re more concerned with the overall level of resources rather
than the distribution of resources across individuals, Kaldor-Hicks efficiency is a
reasonable measure of what we’re after
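The apple example can be checked in a few lines. This is a small illustrative sketch (the dictionary of payoff changes is just a bookkeeping device, not anything from the lecture): a change is a Kaldor-Hicks improvement if the winners gain more than the losers lose, so a suitable transfer could turn it into a Pareto-improvement.

```python
# Kaldor-Hicks vs. Pareto in the apple example: I value the apple at $1,
# you value it at $3.
my_value, your_value = 1, 3

# You grab the apple with no payment: I lose $1 of value, you gain $3.
changes = {"me": -my_value, "you": +your_value}
kaldor_hicks = sum(changes.values()) > 0          # total value rises
pareto = all(v >= 0 for v in changes.values())    # but I am strictly worse off
print(kaldor_hicks, pareto)  # True False

# A $2 transfer from you to me converts it into a Pareto-improvement.
with_transfer = {"me": -my_value + 2, "you": +your_value - 2}
print(all(v >= 0 for v in with_transfer.values()))  # True
```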
Game Theory
Static Games, or Simultaneous-Move Games
A static game is completely described by three things:
 Who the PLAYERS are
 What ACTIONS are available to each player
 What PAYOFFS each player will get, given his own action and the actions of the
other players
In two-player games where each player chooses between a finite number of alternatives,
these can easily be presented via a payoff matrix
We’ll use a classic example, the Prisoner’s Dilemma:
                                    Player 1’s Action:
                                    Shut up          Rat on his friend
Player 2’s    Shut up               -1, -1           0, -10
Action:       Rat on his friend     -10, 0           -5, -5

(Payoffs are listed as Player 1’s, Player 2’s.)
When one player has a single move that is best regardless of what his opponent does, that
move is called a DOMINANT STRATEGY. In this case, considering only his own payoffs, player 1 is
better off ratting on his friend, regardless of whether his friend keeps quiet (0 > -1) or rats
(-5 > -10). Similarly, player 2 is better off ratting, regardless of what player 1 does. So
game theory predicts that both players rat, for payoffs of (-5, -5), even though if they both
kept quiet instead, they’d get (-1, -1).
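The dominance argument above can be verified mechanically. A minimal sketch (the payoff array is just the Prisoner’s Dilemma numbers from the matrix above):

```python
# Payoffs to player 1: rows = player 1's action (0 = shut up, 1 = rat),
# columns = player 2's action (0 = shut up, 1 = rat).
u1 = [[-1, -10],
      [0, -5]]

# "Rat" is dominant for player 1 if it beats "Shut up" against BOTH of
# player 2's actions: 0 > -1 and -5 > -10.
rat_is_dominant = all(u1[1][j] > u1[0][j] for j in range(2))
print(rat_is_dominant)  # True
```

By the symmetry of the game, the same check works for player 2.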
In many games, players won’t have a single move that’s always best; their best move will
depend on what the other player (or players) is doing. In those cases, the way we solve a
game is to look for Nash Equilibria.
Nash Equilibrium is a plan (an action) for each player such that if everyone else sticks to
the plan, I should stick to the plan as well. That is, a Nash Equilibrium is a set of
strategies, one for each player, such that if I believe everyone else is going to play their
equilibrium strategy, I can’t do any better by playing a different strategy.
In the Prisoner’s Dilemma, both players ratting turns out to be the only equilibrium. The
easiest way to find equilibria is to circle each player’s best-responses to his opponent’s
potential moves:
                                    Player 2’s Action:
                                    Shut up          Rat on his friend
Player 1’s    Shut up               -1, -1           -10, 0
Action:       Rat on his friend     0, -10           -5, -5
Any square that has both payoffs circled is a Nash Equilibrium.
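The best-response circling procedure translates directly into code. A sketch, assuming payoffs are stored as (player 1, player 2) pairs with player 1 choosing the row and player 2 the column:

```python
def nash_equilibria(payoffs):
    """Return all pure-strategy Nash equilibria as (row, column) pairs.

    payoffs[i][j] = (u1, u2) when player 1 plays row i and player 2 plays column j.
    """
    n_rows, n_cols = len(payoffs), len(payoffs[0])
    equilibria = []
    for i in range(n_rows):
        for j in range(n_cols):
            u1, u2 = payoffs[i][j]
            # "Circle" player 1's payoff: no other row does better against column j
            best_for_1 = all(payoffs[k][j][0] <= u1 for k in range(n_rows))
            # "Circle" player 2's payoff: no other column does better against row i
            best_for_2 = all(payoffs[i][k][1] <= u2 for k in range(n_cols))
            if best_for_1 and best_for_2:   # both payoffs circled
                equilibria.append((i, j))
    return equilibria

# Prisoner's Dilemma: action 0 = shut up, action 1 = rat
pd = [[(-1, -1), (-10, 0)],
      [(0, -10), (-5, -5)]]
print(nash_equilibria(pd))  # [(1, 1)] -- both players rat
```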
Some games will have multiple equilibria. For example, the Battle of the Sexes:
                                    Player 2’s Action:
                                    Baseball Game    Opera
Player 1’s    Baseball Game         6, 3             0, 0
Action:       Opera                 0, 0             3, 6
Some games will have multiple equilibria where one seems obviously better than the
other:
                                    Player 2’s Action:
                                    Left             Right
Player 1’s    Up                    50, 50           0, 0
Action:       Down                  0, 0             1, 1
In this case, (Up, Left) Pareto-dominates (Down, Right) – both players are better off. So
if the players could communicate, you’d expect them to play the “better” equilibrium.
But (Down, Right) is still an equilibrium – if 1 expects 2 to play Right, he should play
Down; and if 2 expects 1 to play Down, he should play Right.
Some games – especially games where the players’ interests are opposite to each other –
do not have any equilibria where each player plays a single strategy. Instead, equilibria
are found by assuming that each player flips a coin at the last minute, and does what the
coin says; his opponent knows what probability he puts on playing each of his actions,
but not which one he will actually use. An example of this is if we played
Rock/Paper/Scissors, with the loser paying the winner a dollar:
                         Player 2:
                         Rock        Paper       Scissors
Player 1:   Rock         0, 0        -1, 1       1, -1
            Paper        1, -1       0, 0        -1, 1
            Scissors     -1, 1       1, -1       0, 0
It turns out, the only equilibrium in this game is for each of us to play each strategy with
equal probabilities. This is called a MIXED-STRATEGY EQUILIBRIUM.
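The defining property of the mixed-strategy equilibrium can be checked directly: if player 2 mixes equally over all three actions, player 1 gets the same expected payoff from every action, so he has no profitable deviation. A sketch using exact fractions:

```python
from fractions import Fraction

# Payoffs to player 1 in Rock/Paper/Scissors (zero-sum, so player 2 gets the negative);
# rows = player 1's action, columns = player 2's action, order (Rock, Paper, Scissors)
rps = [[0, -1, 1],
       [1, 0, -1],
       [-1, 1, 0]]

mix = [Fraction(1, 3)] * 3   # player 2 plays each action with probability 1/3

# Expected payoff to player 1 of each pure action against the uniform mix
expected = [sum(p * u for p, u in zip(mix, row)) for row in rps]
print(expected)  # [0, 0, 0] -- player 1 is indifferent, so any mix is a best response
```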
DYNAMIC GAMES
Another type of game is a dynamic game, that is, a game where one player gets to see
what the other player did before he moves. These games are usually easiest to represent
using a game tree.
Suppose there is one firm already in a particular market, and another firm is considering
entering. If it does enter, then the incumbent firm can either play nice (accommodate),
ensuring that both firms end up making money; or it can start a price war (fight), ensuring
that both firms end up losing money. We can represent the game this way, with payoffs
listed as (Firm 1’s, Firm 2’s):

Firm 1 ---- Don’t Enter ----> (0, 30)
  |
  | Enter
  v
Firm 2 ---- Accommodate ----> (10, 10)
  |
  | Fight
  v
(-10, -10)
A strategy in this type of game is a plan of what to do at each juncture; so even if firm 1
doesn’t enter, we need to know what firm 2 planned to do if firm 1 had entered. We can
represent this game with a payoff matrix:
                                    Firm 1’s Action:
                                    Enter            Don’t Enter
Firm 2’s      Accommodate           10, 10           0, 30
Action:       Fight                 -10, -10         0, 30
There are two (pure-strategy) Nash equilibria: (Enter, Accommodate), and (Don’t Enter,
Fight).
However, (Don’t Enter, Fight) doesn’t make that much sense. Firm 1 has no incentive to
enter if he believes firm 2 really would fight; but would it really make sense for firm 2 to
fight if it got to that point? Fighting is not a “credible threat,” since once firm 1 had
entered, firm 2 would prefer to accommodate.
We use a stronger type of equilibrium in this type of game: Subgame Perfect
Equilibrium. Basically, this is when players don’t just play best-responses in the game as
a whole, but also at every branch of the game tree.
We find subgame perfect equilibria by backward induction: solve for optimal play at the
final decision nodes first, then work backward up the tree.
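Backward induction on the entry game can be sketched as a short recursion. This is a minimal illustration, not anything from the lecture itself; the tree structure and payoffs (Firm 1’s, Firm 2’s) match the game above, and the code returns only the equilibrium path, not firm 2’s full contingent strategy:

```python
def backward_induct(node):
    """A node is either a payoff tuple, or (mover, {action: subtree})."""
    if not isinstance(node[1], dict):
        return node, []                # terminal node: just the payoffs
    mover, branches = node
    best = None
    for action, subtree in branches.items():
        payoffs, path = backward_induct(subtree)
        # The moving player picks the branch with the highest payoff for himself
        if best is None or payoffs[mover] > best[0][mover]:
            best = (payoffs, [action] + path)
    return best

entry_game = (0, {                     # Firm 1 (the potential entrant) moves first
    "Don't Enter": (0, 30),
    "Enter": (1, {                     # then Firm 2 (the incumbent) responds
        "Accommodate": (10, 10),
        "Fight": (-10, -10),
    }),
})

payoffs, path = backward_induct(entry_game)
print(path, payoffs)  # ['Enter', 'Accommodate'] (10, 10)
```

Firm 2 accommodates at its node since 10 > -10, so firm 1, anticipating this, enters; the non-credible (Don’t Enter, Fight) equilibrium never survives this procedure.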
A key assumption here is rationality – the players know each other’s payoff functions,
everyone is rational, everyone knows everyone is rational, and so on. If you’re up against
someone who’s crazy, who knows what will happen – it might turn out better for them!
Literature on repeated games with reputation, where your actions in the early stages may
partly be to try to convince opponents that you’re not a rational player, and therefore that
they shouldn’t try to take advantage of you in later stages.
CHOICE UNDER UNCERTAINTY
Generally, when we model uncertainty and risk, we assume that there is just a single
good in the economy – we might be looking at investing in a risky stock or buying
insurance, but we assume the agent is simply trying to maximize the expected utility he
gets from money. In order to do this, we assume the agent has some utility function for
money, and that what he’s trying to do is to maximize the expected value of his utility,
knowing the probabilities of the different outcomes.
So for example, suppose there is a risky stock that will either triple in value, with
probability 2/5, or lose all its value, with probability 3/5. So if I invest $100 in this stock,
I’ll either get $300 back, or nothing. How do I know whether to do this, and how much
to invest?
Well, suppose I have $1000 to start with, and u is my utility function for money. And
suppose I invest x. My expected utility will be
(2/5) u(1000 – x + 3x) + (3/5) u(1000 – x) = (2/5) u(1000 + 2x) + (3/5) u(1000 – x)
Once again, we can think of “consumption in the world where the stock tripled” and
“consumption if the world where the stock became worthless” as two different goods, and
2/5 u(one) + 3/5 u(other) as our two-good utility function. Or we can just solve this as a
one-variable maximization problem. Suppose that u(w) = log w. Then we would take the
derivative with respect to x and find the first-order condition
(2/5) ∙ 2/(1000 + 2x) – (3/5) ∙ 1/(1000 – x) = 0
4(1000 – x) = 3(1000 + 2x)
4000 – 4x = 3000 + 6x
1000 = 10x, so x = 100
So given this utility function and this particular stock, my optimal position is to invest
$100, but keep the rest of my fortune safe.
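The first-order condition above can be confirmed numerically. A quick sketch that searches over whole-dollar investments (keeping x below $1000 so both outcomes leave positive wealth for the log):

```python
from math import log

def expected_utility(x, wealth=1000):
    """Expected log utility of investing x: the stock triples (prob 2/5) or
    becomes worthless (prob 3/5)."""
    return (2/5) * log(wealth + 2*x) + (3/5) * log(wealth - x)

best = max(range(0, 1000), key=expected_utility)
print(best)  # 100 -- matches the answer from the first-order condition
```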
The reason I chose log w as my utility function here: in order to make choice under
uncertainty interesting, we typically assume people are RISK-AVERSE. This means that
faced with an asset (a stock, or a lottery ticket, or whatever) that will have uncertain value
but whose EXPECTED VALUE is $50, we would prefer to just have $50 for sure.
This turns out to be the same as assuming that the function u is concave. Let’s draw it.
Let’s take our stock from before, which is worth $300 with probability 2/5 and $0 with
probability 3/5.
In the case we already solved, the expected value of the stock was 2/5 * 300 + 3/5 * 0 =
600/5 = $120. (This is what it’s worth on average.) Since it costs $100, if we weren’t
risk-averse – if all we cared about was expected value – we’d buy as much of the stock as
we could afford. However, since we were risk-averse in this example, we limited our
purchase to $100.
(Risk-neutral consumers have utility for money which is a straight line, and just
maximize their average outcome. Risk-loving consumers like to gamble, they have
convex utility for money.)
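Risk aversion with a concave u can also be seen in one computation: a sure $50 beats a 50/50 gamble over $0 and $100, even though both have expected value $50. A small sketch (using log(1 + w) rather than log w only so that utility is defined at w = 0, which is an assumption for this illustration):

```python
from math import log

def u(w):
    return log(1 + w)   # concave utility; shifted so u(0) is defined

sure = u(50)                          # utility of $50 for certain
gamble = 0.5 * u(0) + 0.5 * u(100)    # expected utility of the 50/50 gamble
print(sure > gamble)  # True -- the risk-averse agent prefers the sure thing
```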
One more example. Suppose I’m already in a risky situation, and am thinking about
buying insurance. I own a house worth $200,000, and there’s a 1-in-100 chance that it
will burn down. Suppose that each dollar of coverage costs me $p in premiums. Let’s
assume I started out with $100,000 plus the house. If I buy x in coverage, my expected
utility is
99/100 u(300,000 – px) + 1/100 u(100,000 – px + x)
Once again, let’s suppose u(w) = log w, and take the derivative:
-.99p/(300,000-px) + .01 (1-p)/(100,000 + (1-p)x) = 0
(1-p) (300,000 – px) = 99 p (100,000 + (1-p)x)
(1-p)(300,000) – 99 p (100,000) = 100 p (1-p) x
First off, suppose the price of insurance is fair, that is, the insurance company doesn’t
make a profit, he just breaks even on average. So p = .01. Then
.99(200,000) = 100 ∙ .01 ∙ .99 ∙ x = .99x, or x = 200,000, so I buy insurance for the full value
of my house. This is a more general result – if the price of insurance is “actuarially fair” –
breakeven in expected value – and I’m risk-averse, I’ll FULLY INSURE – that is, I’ll
buy insurance up to the point where my payoff is the same whether or not my house
burns down!
Now suppose p = 0.02, so the insurance company is charging me twice as much as it
expects to pay out on average. In this case,
.98 (300,000) – 99 * .02 * 100,000 = 100 * .02 * .98 * x
And x = $48,980. So even though insurance is a bad deal from an expected-value point
of view, if I’m risk-averse, I may still buy some of it; but this time, I don’t fully insure.
(Of course, if insurance is a bad enough deal, I won’t choose to buy any of it. In this
case, if p > .029, I won’t buy any of it.)
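The three cases can be checked from the closed form derived above, (1 – p)(300,000) – 99p(100,000) = 100p(1 – p)x. A sketch (the function name is ours; truncating negative coverage at zero is the natural restriction, since you can’t buy negative insurance):

```python
def optimal_coverage(p):
    """Coverage x solving (1-p)*300000 - 99*p*100000 = 100*p*(1-p)*x, floored at 0."""
    x = ((1 - p) * 300_000 - 99 * p * 100_000) / (100 * p * (1 - p))
    return max(x, 0.0)

print(round(optimal_coverage(0.01)))  # 200000 -- full insurance at the fair price
print(round(optimal_coverage(0.02)))  # 48980  -- partial insurance when overpriced
print(optimal_coverage(0.03))         # 0.0    -- no insurance once p is too high
```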