
Econ 604 Advanced Microeconomics
Davis
Spring 2006
Reading: Chapter 2 (pp. 26 to 56) for today; Chapter 3 (pp. 66-86) for next time
Problems: 2.3, 2.5, 2.6 and 2.9
Lecture #2.
REVIEW
I. Introduction.
A. The logical status of theoretical models. Models are necessarily
oversimplifications. We usually evaluate models indirectly, that is in terms of predictive
performance. Thus empirically testable implications are an essential ingredient of useful
models.
B. General Features of Economic Models:
1. The Ceteris Paribus Assumption: We “abstract” away from reality by
holding lots of things constant. This assumption becomes controversial when
statistical rather than laboratory methods must be used to evaluate models.
2. Optimization Assumptions: Optimization is useful because it allows for
solutions, and thus testable implications.
3. Positive/Normative Distinction. Most of what we do will involve
positive analysis (in as much as we can be “positive”)
C. Some Historical Perspective: The Development of the Economic Theory
of Value. The notion of value as being determined by price is a consequence of
marginalist cost/value theory, something essential to modern economic logic.
D. Basic Models and Analytical Contexts.
1. Supply-Demand Equilibrium. We set up a very simple linear demand
and supply model. Then we
a. Solved for an equilibrium.
b. Induced comparative statics effects (and solved).
2. General Equilibrium Analysis and the Production Possibilities Frontier.
We used the production possibilities frontier to illustrate general social
choices, including Economic Efficiency, and Increasing Marginal
Opportunity cost.
II. Chapter 2. The Mathematics of Optimization. A chapter that reviews some of the
primary mathematical tools that we will use in this course.
A. Maximization of a Function of One Variable.
1. The logic of a derivative
2. Some rules for derivatives
3. Necessary and sufficient conditions for a maximum. Recall, our rule is
that optimization requires that the first derivative equal zero (the first order condition),
and that the second derivative be negative (for a maximum) or positive (for a minimum).
Example:
f(x) = 10x – x²
f’(x) = 10 – 2x
f’’(x) = -2
Here x = 5 maximizes this function, because x = 5 is a flat place, and you are climbing more
slowly.
Example:
f(x) = 26 + x² – 10x
f’(x) = 2x – 10
f’’(x) = 2
Here x = 5 minimizes this function, because x = 5 is a flat place, and you are climbing more
quickly. (That is, descending more slowly.)
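For those who wish to check these two examples by machine, here is a minimal sketch in Python (it assumes the SymPy symbolic algebra library; any computer algebra system would do):

import sympy as sp

x = sp.symbols('x')

for f in (10*x - x**2, 26 + x**2 - 10*x):
    first = sp.diff(f, x)       # first derivative
    second = sp.diff(f, x, 2)   # second derivative
    crit = sp.solve(first, x)   # critical point: where the first derivative equals zero
    print(crit, second)         # x = 5 in both cases; f'' = -2 (maximum) and f'' = 2 (minimum)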
PREVIEW
B. Functions of Several Variables
1. Partial Derivatives
a. Intuition
b. Calculating Partial Derivatives
2. Maximizing Functions of Several Variables.
a. Total differentiation
b. Second order Conditions
c. Implicit Functions
d. The Envelope Theorem
C. Constrained Maximization
1. Lagrangian Multiplier Method.
2. Interpretation of the Lagrangian Multiplier
3. Duality
4. Envelope Theorem in Constrained Maximization Problems.
D. Final Comments
LECTURE____________________________________________
B. Functions of Several Variables. In most problems of economic interest agents
optimize by choosing among several independent variables. Firms, for example, create
output by using a mix of possible inputs. Households, similarly, optimize utility by
selecting among a combination of possible products. Let’s review how to optimize in
such contexts.
1. Partial Derivatives
a. Intuition
Generically, an agent faces the problem
y = f(x1, x2, ..., xn)
Conceptually, the agent is looking at an n-dimensional mound. One way to optimize
would be for the agent to parachute to different points on the mound, and then
evaluate whether he/she is at a flat place in all directions. One could evaluate this by
stepping in one direction at a time. Analytically, we could take a partial
derivative in a particular dimension by finding the slope in that dimension,
treating values for all other dimensions as constants.
Consider the partial derivative of y = f(x1, ..., xn) with respect to x1. We
denote this as
∂y/∂x1 = ∂f/∂x1 = fx1 = f1
b. Calculating Partial Derivatives. As a practical matter, partial derivatives
are calculated in much the same way as derivatives of a single variable, with the
exception that we treat as constants those independent variables that are not
changing.
Example: Consider the function y = f(x1, x2) = ax1² + bx1x2 + cx2²
Then f1 = 2ax1 + bx2
and f2 = bx1 + 2cx2.
We could solve for the optimum by simultaneously solving these first order conditions.
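As a quick symbolic check of this example (a sketch assuming SymPy; the coefficients a, b, c are left symbolic):

import sympy as sp

x1, x2, a, b, c = sp.symbols('x1 x2 a b c')
y = a*x1**2 + b*x1*x2 + c*x2**2

f1 = sp.diff(y, x1)                   # partial with respect to x1: 2*a*x1 + b*x2
f2 = sp.diff(y, x2)                   # partial with respect to x2: b*x1 + 2*c*x2
crit = sp.solve([f1, f2], [x1, x2])   # solve the two first order conditions simultaneously
print(f1, f2, crit)                   # for generic a, b, c the critical point is x1 = 0, x2 = 0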
i. Second Order Partial Derivatives. The interpretation of
second order partials parallels the single variable case. The only
difference is that one may also calculate cross partial effects.
Generally, we write
∂(∂f/∂xi)/∂xj = fij
If i=j, then the second order partial derivative parallels the second
order condition in the single variable case. Using the above
example,
f11 = 2a,
f22= 2c
However, we can also calculate two additional conditions
f12 = b,
f21= b
These “cross-partial” derivatives assess the change in one direction
as you move in another. In terms of climbing a mountain, if your
primary variables were north/south and east/west, the cross partial
derivative would tell you how much a slight movement west would
affect your progress up the hill from the north. These cross partial
effects are important for finding a maximum, a topic to which
we’ll return at the end of this chapter.
ii. Young’s Theorem. One observation from the
above example that does generalize is that fij = fji. Intuitively, on a
mountain, the gain in altitude from going northwest a certain
distance is independent of the route taken (north then west or the
reverse). This result is known as Young’s Theorem.
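Young’s Theorem can also be confirmed symbolically on the example above (a sketch assuming SymPy):

import sympy as sp

x1, x2, a, b, c = sp.symbols('x1 x2 a b c')
y = a*x1**2 + b*x1*x2 + c*x2**2

f12 = sp.diff(y, x1, x2)   # differentiate in x1 first, then x2
f21 = sp.diff(y, x2, x1)   # differentiate in x2 first, then x1
print(f12, f21)            # both equal b, so the order of differentiation does not matter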
2. Maximizing Functions of Several Variables. Using the partial
derivative, we can now talk about the optimization of functions with
several variables. We do this by elaborating on some notation from the
single variable case. Given the function y = f(x), we could optimize by
considering the consequences of a small change in x on y. That is
dy = f’(x)dx
The necessary condition for a maximum is that f’(x) =0. That is, y doesn’t
change at all in response to a change in x. This is called a point-slope
formula. We can develop the same formula for multiple variables
dy = (∂f/∂x1)dx1 = f1dx1
Intuitively, if you’re climbing a mountain (altitude, y) and traveling north
(x1), then this partial derivative reflects the change in altitude (dy) from
traveling dx1 units north.
i. The Total Differential. Now consider the effects on y of varying
all independent variables by a small amount, e.g.,
dy = f1dx1+ f2dx2+… + fndxn
This expression is called the total differential of f
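To make the total differential concrete, the following sketch (assuming SymPy; the particular function and step sizes are only illustrative choices) compares the approximation f1dx1 + f2dx2 with the actual change in f after a small step:

import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = x1**2 * x2 + 3*x2                 # an arbitrary illustrative function
point = {x1: 2, x2: 1}
dx1, dx2 = 0.01, -0.02                # small changes in each variable

# Total differential: dy = f1*dx1 + f2*dx2, evaluated at the point
dy_total = (sp.diff(f, x1)*dx1 + sp.diff(f, x2)*dx2).subs(point)

# Actual change in f from taking the small step
dy_actual = f.subs({x1: 2 + dx1, x2: 1 + dx2}) - f.subs(point)

print(float(dy_total), float(dy_actual))   # approximately -0.100 and -0.1007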
ii. First-Order Condition. The only way to be at a maximum or
minimum is if dy = 0 for any combination of small changes. Alternatively
stated, the total differential must equal zero. Since the dxi's may be any
small nonzero changes, this condition holds only if
f1 = f2 =… = fn =0.
Points where this condition holds are called critical points.
iii. Second-Order Condition. As in the single
variable case, a critical point may be either a maximum or a
minimum. Determining which requires satisfaction of a second order
condition.
The second order condition is somewhat more complicated than in the
single variable case. To see this, consider yet again the problem of
climbing a mountain by moving north (x1) and west (x2) from a starting
point to the south and east of the mountain. To be at an optimum, it must
be the case that you are at a critical point (a “flat place,” where f1 = 0 and f2 = 0).
Further, it must be the case that you had been climbing as you moved in the
north direction (f11 < 0) and in the west direction (f22 < 0). However,
these are only necessary conditions. The sufficient condition for a
maximum attends further to “cross effects.” It might be possible, for
example, that the reduction in your rate of ascent from climbing north and
west is more than outweighed by the effects of a southeast ridge that
causes the “mountaintop,” viewed from the south or from the east, to be a
saddlepoint. For the “flat place” to be a maximum, it must be the case that
these “cross effects” don’t dominate own effects. Formally, the condition
is
f11f22 - f12² > 0.
Curiously, the same second order condition implies a minimum as well as
a maximum (intuitively the own effects must dominate the cross effects).
(The difference is that f11 and f22 are both positive rather than negative)
Analogous (but more complicated) conditions apply to the n variable case.
(These conditions are derived from the determinant of an n×n
matrix of second order partials.)
Example: Consider the function y = f(x1, x2) = -(x1 - 3)² - (x2 - 4)² + 25. For
later purposes, it will be constructive to view this as a problem of taking
medicine. The 25 reflects a general level of health, while x1 and x2 are
medicines that you may take to relieve certain conditions. (Note: looking at
this function, since x1 and x2 both drag down the total value of the function,
it will obviously be maximized when the effects of these first two terms
cancel out, or when x1 = 3 and x2 = 4.) Now let’s solve this using calculus.
The first order (necessary) conditions are
f1 = -2x1 +6 =0
and
f2 = -2x2 + 8 =0.
Thus, at an optimum
x1 =3
and x2 = 4
Sufficient second order conditions are
f11 = -2
f22 = -2
f12 = 0
Here there are no interaction effects, so
f11f22 - f12² = (-2)(-2) - 0² = 4 > 0
and the critical point is a maximum.
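The same example can be verified by machine, including the second order check (a sketch assuming SymPy):

import sympy as sp

x1, x2 = sp.symbols('x1 x2')
y = -(x1 - 3)**2 - (x2 - 4)**2 + 25

f1, f2 = sp.diff(y, x1), sp.diff(y, x2)
crit = sp.solve([f1, f2], [x1, x2])            # critical point: {x1: 3, x2: 4}

f11 = sp.diff(y, x1, 2)
f22 = sp.diff(y, x2, 2)
f12 = sp.diff(y, x1, x2)
# Second order check for a maximum: f11 < 0 and f11*f22 - f12**2 > 0
print(crit, f11, f11*f22 - f12**2, y.subs(crit))   # maximum value 25 at (3, 4)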
a. Implicit Functions. Recall in our last lecture, we observed that
the normal expression for demand was Q = f(P, K) where K equals a
number of other variables that are held constant. However, for the
purposes of graphing, we needed to solve for indirect demand, that is, an
expression P = g (Q, K). Situations like these arise frequently in
economics. In such instances, it is often useful to place all the variables of
an equation on the left side of an equation that equals zero. Then,
provided that appropriate underlying conditions hold, we can develop
relationships between the variables of interest by total differentiation.
Example. Consider the function
y = 3x + 5.
In implicit function form, we write
y - 3x – 5 = 0
To find the effects of a change in x on y, totally differentiate this implicit
function
dy - 3dx = 0
or
dy/dx = 3.
Similarly,
dx/dy = 1/3
Now, in this case, it is so easy to solve explicitly for x and y that direct
expressions are most easily used. However, in more complicated
instances solutions are not so easy. Notice the function f(x, y, m) = 0 (with
m a constant) is totally differentiated as
fxdx + fydy = 0
Thus,
dy/dx = -fx/fy
This is a very useful result: in any implicit function (satisfying the conditions
of the implicit function theorem), the effect of a change in variable x on
another variable y equals the negative of the ratio of the partial derivatives.
Example: consider the production possibility frontier
2x² + y² = 225.
In implicit function form,
2x² + y² - 225 = 0.
Totally differentiating,
4xdx + 2ydy = 0.
Thus,
dy/dx = -fx/fy = -2x/y
A final observation: When can one use the implicit function theorem? It is
necessary that there is a unique (one to one and onto) relationship
between the variables. We won’t develop appropriate
conditions here, but it turns out that the same second order conditions
necessary for a critical value to be an optimum suffice.
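The production possibility frontier example above is easy to replicate symbolically (a sketch assuming SymPy; the slope is computed exactly as in the text, as -fx/fy):

import sympy as sp

x, y = sp.symbols('x y')
F = 2*x**2 + y**2 - 225       # implicit form of the frontier: F(x, y) = 0

fx = sp.diff(F, x)            # 4*x
fy = sp.diff(F, y)            # 2*y
dydx = -fx/fy                 # implicit function theorem: dy/dx = -fx/fy
print(sp.simplify(dydx))      # -2*x/y, the slope of the frontier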
b. The Envelope Theorem. Now we consider a primary application
of the implicit function theorem. Consider an implicit function f(x, y, a) =
0, where x and y are variables, and a is a parameter. We can find optimal
values for x and y in terms of a. But what happens if the parameter
changes? The envelope theorem provides a straightforward way to answer
this question.
Application: Suppose you are a manager of a competitive firm, and you
wish to know minimum per unit costs given a cost condition TC = q² + k
where q denotes output, and k denotes fixed costs. How do minimum unit
costs y change with changes in fixed cost conditions?
Development. Write the above as a per unit cost function
y = f(q, k) = (q² + k)/q = q + k/q
A standard, but rather lengthy, way to solve this problem involves finding
the optimal q expressed in terms of k. Then express the objective in terms
of k.
To find the optimal value of q take a derivative and solve
f’(q) = 1 - k/q² = 0
Thus, the optimal value of q, which we denote as q*, is q* = √k. Now, to
consider the effects of changes in fixed costs on minimum per unit cost,
insert q* into f(q, k):
y = √k + k/√k = 2√k
Now, taking the derivative of y with respect to the parameter k,
dy/dk = 1/√k
Thus, for example, if k = 4, dy/dk = ½. Notice also that q* = 2.
The envelope shortcut. The envelope theorem states that for small
changes in k, dy*/dk can be computed by holding q constant at its optimal
value q*, and simply calculating ∂y/∂k evaluated at q = q*(k).
Application
y = f(q, k) = (q² + k)/q
∂y/∂k = 1/q
From the FONC, we know that at an optimum, q* = √k.
Thus, ∂y/∂k{q = q*(k)} = 1/√k,
as before.
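Both routes to this answer, the lengthy substitution and the envelope shortcut, can be checked symbolically (a sketch assuming SymPy):

import sympy as sp

q, k = sp.symbols('q k', positive=True)
y = (q**2 + k)/q                              # per unit cost

# Long route: solve the FONC for q*, substitute back, then differentiate in k
q_star = sp.solve(sp.diff(y, q), q)[0]        # q* = sqrt(k)
y_star = sp.simplify(y.subs(q, q_star))       # 2*sqrt(k)
long_route = sp.diff(y_star, k)

# Envelope shortcut: differentiate y in k holding q fixed, then evaluate at q*
short_route = sp.diff(y, k).subs(q, q_star)

print(long_route, sp.simplify(short_route))   # both equal 1/sqrt(k)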
The many variable case. The envelope theorem is particularly useful for
problems involving many variables. Consider the general function
y = f(x1, . . . , xn, a)
To find an optimal value, you would take a series of n first order
conditions, f1, … fn and solve for optimal values x1*, …., xn*. These
values, however, are sensitive to (and, if the conditions of the implicit
function theorem hold, functions of) a. Thus, we can write x1*(a), ….,
xn*(a). Substituting these optimal values back into the objective yields an
expression for the optimal value in terms of a,
y* = f(x1*(a), . . . , xn*(a), a)
Totally differentiating this expression with respect to a
dy*/da = f1 dx1*/da + . . . + fn dxn*/da + fa
But, because the first order conditions for all the independent variables
have been satisfied, f1= …= fn = 0. Thus,
dy*/da = fa{x1*, … , xn*}
Example: Consider again the problem y = f(x1, x2) = -(x1 - 3)² - (x2 - 4)² +
25. This might be interpreted as a health status problem, where x1 and x2
were quantities of medication, and 25 is a general health level. Taking
first order conditions, and optimizing, we found that
x1* = 3 and x2* = 4;
y* = 25.
Suppose now we change 25 to an arbitrary parameter a:
y = f(x1, x2, a) = -(x1 - 3)² - (x2 - 4)² + a.
In this case the optimal values x1* and x2* do not depend on a, thus
dy*/da = 1.
Intuitively, a change in general health level improves health status at a 1
for 1 rate.
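A short symbolic check of this envelope result (a sketch assuming SymPy):

import sympy as sp

x1, x2, a = sp.symbols('x1 x2 a')
y = -(x1 - 3)**2 - (x2 - 4)**2 + a

# The optimal choices do not depend on the parameter a
crit = sp.solve([sp.diff(y, x1), sp.diff(y, x2)], [x1, x2])   # {x1: 3, x2: 4}
y_star = y.subs(crit)                                         # maximized value: a

# Envelope theorem: dy*/da equals the partial of y in a, evaluated at the optimum
print(sp.diff(y_star, a), sp.diff(y, a).subs(crit))           # both equal 1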
C. Constrained Maximization. To this point, we have maximized
objective functions in the absence of any restrictions on admissible values for
independent variables. However, in many instances of economic relevance,
restrictions are important. Consumers, for example, maximize utility subject to a
budget constraint, while sellers maximize profits subject to a cost constraint. In
this section we review the most standard method for optimizing in light of such
constraints.
1. Lagrangian Multiplier Method. In general form, we optimize a function f(x1,
…, xn) in light of a series of constraints about those independent variables, g(x1, …,
xn) = 0. These constraints can take on a variety of forms, such as a budget
constraint (e.g., I - x1 - x2 = 0) or another economic relationship between the
independent variables (e.g., 100 - x1 - x2² = 0). Notice that if function f has n
independent variables, first order conditions will generate a system of n equations
that will allow a unique solution. One way to include the constraint would be to
impose the constraint on the first order conditions. A second approach, one that
also generates additional useful information, is to include the constraint as an
n+1st equation with a new variable λ. Formally, we write
L = f(x1, …, xn) + λg(x1, …, xn)
The FONC (first order necessary conditions) become
∂L/∂x1 = f1 + λg1 = 0
∂L/∂x2 = f2 + λg2 = 0
.
.
.
∂L/∂λ = g(x1, …, xn) = 0
This is a system of n+1 equations in n+1 unknowns. The solution to this system
will differ from the unconstrained (n variable) case, unless the constraint is not
binding, in which case λ = 0.
2. Interpretation of λ. Notice that each of the first order conditions in the above
system may be solved for λ. That is,
f1/(-g1) = f2/(-g2) = … = fn/(-gn) = λ
Notice that these ratios are all equal. Further, the numerator is a marginal
“benefit” of increasing xi, while the denominator is a marginal “cost” associated with
displacing other variables. Thus these equalities imply that at a maximum, the marginal
effect of relaxing the constraint is the same in each dimension, and further that for each i,
λ = marginal benefit of xi / marginal cost of xi
Now at an optimum the marginal benefit of an increment to each xi is identical.
On the other hand, at an optimum, the only way to increase “costs” is by relaxing the
constraint. Thus, λ reflects the marginal benefit of relaxing the constraint. For example,
for a consumer maximizing utility f(x1, … ,xn) subject to a budget constraint g(x1, … ,xn),
λ reflects the marginal utility of income (e.g., the incremental utility associated with a
one unit increase in income). Similarly, for a firm maximizing output subject to an input
expenditure constraint, λ reflects the marginal productivity of increasing input
expenditures.
3. Duality. As the above examples make clear, a tight relationship exists between
an objective function and its constraint. Rather than maximizing utility subject to a
budget constraint, for example, a consumer might minimize expenses, subject to a given
level of utility. Again, rather than maximizing output subject to a resource, or input
expenditure constraint, a firm might minimize costs subject to a given output level. The
capacity to recast constrained maximization problems as constrained minimization problems is
termed duality theory. In many instances it will be convenient to look at the dual for a
problem, rather than the primal problem.
Example: Consider again our health maximization problem.
y = f(x1, x2) = -(x1 - 3)² - (x2 - 4)² + 25.
But this time suppose that x1 and x2 are medicines, and that an individual can only
safely consume one dose a day. That is, x1 + x2 = 1, or
1 - x1 - x2 = 0
To solve this constrained problem, write the Lagrangian expression
L = f(x1, x2) + λg(x1, x2)
  = -(x1 - 3)² - (x2 - 4)² + 25 + λ(1 - x1 - x2)
Taking FONC
∂L/∂x1 = -2x1 + 6 - λ = 0
∂L/∂x2 = -2x2 + 8 - λ = 0
∂L/∂λ = 1 - x1 - x2 = 0
From the first two conditions,
-2x1 + 6 = λ = -2x2 + 8
Thus, x1 = x2 – 1
Inserting this result into the constraint yields
1- (x2 – 1) - x2 =0
or x2 = 1, so x1 = 0.
This result implies that if you can only take one pill per day, take the pill for x2, since
it contributes more to health improvement than the pill for condition x1. Finally, insert values for x1
and x2 into either of the FONC, and we get λ = 6. That is, the marginal effect of relaxing
the constraint (say, to take 2 pills per day) would be 6 increments in health status.
Comparing values of the constrained and unconstrained objectives makes this clear.
Absent the constraint, f(x1, x2) = 25. With the constraint, -(0 - 3)² - (1 - 4)² + 25 = -18 + 25
= 7.
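This constrained problem is easy to replicate symbolically (a sketch assuming SymPy; lam stands in for the multiplier λ, since lambda is a reserved word in Python):

import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam')
f = -(x1 - 3)**2 - (x2 - 4)**2 + 25
g = 1 - x1 - x2                        # the one-dose-per-day constraint
L = f + lam*g                          # Lagrangian expression

foc = [sp.diff(L, v) for v in (x1, x2, lam)]
sol = sp.solve(foc, [x1, x2, lam], dict=True)[0]
print(sol)                             # {x1: 0, x2: 1, lam: 6}
print(f.subs(sol))                     # constrained health status: 7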
Exercise: Consider what would happen to health status if you could take 2 pills
per day.
4. Envelope Theorem in Constrained Maximization Problems. The
Envelope Theorem discussed above for unconstrained problems also
applies to the case of constrained problems. Specifically, if
y = f(x1, …, xn, a), and if this equation is optimized subject to the
constraint g(x1, …, xn, a) = 0, we may evaluate the effect of a change in a as
follows. First write,
L = f(x1, …, xn, a) + λg(x1, …, xn, a)
Take first order conditions for the Lagrangian expression and solve for
optimal values x1*, …, xn*. Then it can be shown that
dy*/da = ∂L/∂a evaluated at (x1*, . . . , xn*, a)
Thus, the Lagrangian expression plays the same role in applying the
envelope theorem to the constrained problem as does the objective
function itself in unconstrained problems.
Example: Suppose a farmer with a fence of length P wished to enclose the
largest possible rectangular area. What shape should the farmer choose?
Let x = length and y = width. The total amount of fence is P = 2x+2y.
Thus
L = xy + λ(P - 2x - 2y)
Optimizing,
∂L/∂x = y - 2λ = 0
∂L/∂y = x - 2λ = 0
∂L/∂λ = P - 2x - 2y = 0
Solving, y = x = P/4. But notice also that λ = P/8. Thus, each extra yard
of fence allows the farmer to increase the fenced-in area by P/8
(with 100 yards of fence, one extra yard will allow 100/8 = 12.5 more square yards to be
enclosed).
To see the duality relationship, consider the minimum amount of fence, P,
necessary to enclose an area A.
LD = 2x + 2y + λD(A - xy)
Solve as before, and
y = x = √A and λD = 2/√A
where λD is the extra fence needed to enclose one additional unit of
area.
Finally, to understand the application of the envelope theorem to a
constrained maximization problem, go back to the primal problem
L = xy + λ(P - 2x - 2y)
where A = xy is the objective.
From the FONC, y = x = P/4. Inserting, area A = P²/16. Thus
dA/dP = P/8.
Using the envelope theorem, we could instead differentiate the Lagrangian w.r.t.
P to yield λ. Given optimal choices of x and y, λ = P/8. In this way,
dA/dP = ∂L/∂P
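The fence example, including the multiplier and the envelope check, can be worked through symbolically (a sketch assuming SymPy; lam again stands in for λ):

import sympy as sp

x, y, lam, P = sp.symbols('x y lam P', positive=True)

# Primal problem: maximize the area xy subject to the fence constraint P - 2x - 2y = 0
L = x*y + lam*(P - 2*x - 2*y)
sol = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)[0]
print(sol)                                  # x = y = P/4, lam = P/8

# Envelope check: the maximized area is A* = P**2/16, so dA*/dP = P/8 = lam
A_star = (x*y).subs(sol)
print(sp.diff(A_star, P))                   # P/8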
5. Second Order Conditions with Constrained Optimization. Recall that
we developed second order conditions in the single variable case, as well as in the multivariable case without constraints. In these cases, evaluating the second order conditions
was equivalent to establishing a concavity condition. In closing this section, we consider
a special case of second order conditions with a constraint. Consider the objective
y = f(x1, x2)
optimized subject to a linear constraint
g(x1, x2) = c - b1x1 - b2x2 = 0
Write the Lagrangian expression
L = f(x1, x2) + λ(c - b1x1 - b2x2)
Via partial differentiation, recover
f1 - λb1 = 0
f2 - λb2 = 0
c - b1x1 - b2x2 = 0
Now, in general we can solve this system for x1, x2 and λ. Absent constraints, we could
use the concavity condition to ensure that the critical point that defines this solution is
indeed a maximum. That is, we would take the second order differential and see if it was
less than zero:
d²y = f11dx1² + f22dx2² + 2f12dx1dx2 < 0
(Note: the determinant of the matrix of second partials, [f11 f12; f21 f22], gives the standard
concavity condition f11f22 - f12² > 0.) However, the constraint prevents us from using all
values of x1 and x2.
To incorporate the effect of the constraint, totally differentiate it:
-b1dx1 - b2dx2 = 0
or
dx2/dx1 = -b1/b2
Now, recall from the first order conditions that
f1/f2 = b1/b2
Thus,
dx2 = (-f1/f2)dx1
Insert this in the differential expression for the concavity condition:
d²y = f11dx1² + f22dx2² + 2f12dx1dx2
    = f11dx1² + f22(f1/f2)²dx1² - 2f12(f1/f2)dx1²
    = (f11f2² - 2f12f1f2 + f22f1²)(dx1²/f2²)
Thus d²y < 0 as long as the term in parentheses is negative, that is, as long as
f11f2² - 2f12f1f2 + f22f1² < 0
This is an important condition. It characterizes a set of functions termed quasi-concave.
Specifically, these are conditions for functions where all points take on values
greater than a linear combination of the ordinates making up the points. (Such as
production or utility maximization problems, subject to linear budget constraints.)
We will use this quasi-concavity condition later. Intuitively, it indicates a minimum
condition for an interior solution to a linearly constrained optimization problem. In
contrast to the simple “mound” condition necessary in the case of unconstrained
optimization, a sufficient condition for an optimum given a constraint is the weaker
condition that the constraint be more convex than the objective function. As we will see
below, this condition will be equivalent to asserting that any linear combination of two
points on a curve be in the interior of the function.
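As an illustration, the quasi-concavity condition derived above can be checked for a Cobb-Douglas style function, a standard example of a quasi-concave objective (a sketch assuming SymPy; the particular exponents are only an illustrative choice):

import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)
f = sp.sqrt(x1*x2)              # an illustrative Cobb-Douglas form, f = (x1*x2)**(1/2)

f1, f2 = sp.diff(f, x1), sp.diff(f, x2)
f11, f22, f12 = sp.diff(f, x1, 2), sp.diff(f, x2, 2), sp.diff(f, x1, x2)

# Quasi-concavity condition from the text: f11*f2**2 - 2*f12*f1*f2 + f22*f1**2 < 0
expr = sp.simplify(f11*f2**2 - 2*f12*f1*f2 + f22*f1**2)
print(expr)                     # equals -1/(4*sqrt(x1*x2)), which is negative for all x1, x2 > 0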
D. Final comments: Maximization without calculus. Of course, not all problems are
amenable to calculus. Use of differential calculus requires continuity and concavity
conditions that are often not satisfied (for example, when the decision-maker cannot see
the underlying objective function). Techniques such as nonlinear programming
exist for such problems. However, we largely confine our attention in this
course to problems that satisfy conditions for using calculus. Two factors motivate this
decision. The first is simplicity. Techniques that don’t presume continuity and concavity
are more complex. The second is parallelism. The economically interesting parts of
solutions obtained via nonlinear techniques parallel the solutions developed here via the
calculus.