Math Review for Intermediate Microeconomics
Xiaokuai Shao
International Business School, Beijing Foreign Studies University
September 6, 2021
1 Calculus and Optimization
1.1 Single-Variable Optimization
Figure 1: The maximum and minimum of a function
Proposition 1 (first-order (necessary) condition). If the function f(x) is differentiable and x∗ is
an interior optimum, then f′(x∗) = 0.
Proposition 2 (second-order (sufficient) condition). Let f ′ (x∗ ) = 0.
(i) If f ′′ (x∗ ) < 0, then x∗ is a local maximum (e.g., xmax in figure 1).
(ii) If f ′′ (x∗ ) > 0, then x∗ is a local minimum (e.g., xmin in figure 1).
Example 1. Consider a quadratic function f(x) = ax² + bx + c. If a > 0 (resp., a < 0), the
minimum (resp., maximum) of f(x) is reached at x∗ = −b/(2a).
If we use calculus to find the maximum/minimum:
(1) FONC: f′(x∗) = 2ax∗ + b = 0 ⇒ x∗ = −b/(2a).
(2) SOSC: f′′(x) = 2a, and
(i) f′′(x) < 0 ⇔ a < 0, so x∗ = −b/(2a) is a maximum;
(ii) f′′(x) > 0 ⇔ a > 0, so x∗ = −b/(2a) is a minimum.
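As a quick numerical check of this FONC/SOSC recipe, here is a minimal sketch (not part of the original notes; the coefficients a = 1, b = −4, c = 3 are an arbitrary illustration) that lets sympy do the differentiation:

```python
import sympy as sp

x = sp.symbols('x')
a, b, c = 1, -4, 3                       # arbitrary illustrative coefficients (a > 0)
f = a*x**2 + b*x + c

x_star = sp.solve(sp.diff(f, x), x)[0]   # FONC: 2ax + b = 0  =>  x* = -b/(2a) = 2
f_second = sp.diff(f, x, 2)              # SOSC: f''(x) = 2a = 2

print(x_star)                                      # 2
print("minimum" if f_second > 0 else "maximum")    # a > 0  =>  minimum
```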
Example 2. Consider f (x) = sin x, x ∈ [0, 2π].
(1) FONC: f′(x∗) = cos x∗ = 0 ⇒ x∗1 = π/2 and x∗2 = 3π/2.
(2) SOSC: f′′(x) = − sin x, and
(i) f′′(π/2) = − sin(π/2) = −1 < 0, so x∗1 = π/2 is a maximum;
(ii) f′′(3π/2) = − sin(3π/2) = 1 > 0, so x∗2 = 3π/2 is a minimum.
1.2 Multi-Variable Optimization
Consider a function z = f(x1, x2) with two variables. Let f1′ = ∂f/∂x1, f2′ = ∂f/∂x2,
f11′′ = ∂²f/∂x1², f22′′ = ∂²f/∂x2², and f12′′ = f21′′ = ∂²f/(∂x1∂x2).
Maximum If x∗1 and x∗2 maximize f (x1 , x2 ):
• FONC: f1′ (x∗1 , x∗2 ) = 0 and f2′ (x∗1 , x∗2 ) = 0. The two unknowns (x∗1 , x∗2 ) are solved from the
above two equations.
• SOSC: There are three second-order derivatives: f11′′, f22′′ and f12′′. Define the Hessian matrix
  H = [ f11′′  f12′′ ]
      [ f21′′  f22′′ ]
If x∗1 and x∗2 maximize f(x1, x2), then the Hessian matrix H is negative (semi-)definite, i.e., the
determinants of the leading principal minors alternate in sign, starting with “−”:
  det(H1) = |f11′′| < 0,
  det(H2) = det(H) = f11′′ f22′′ − (f12′′)² > 0.
Minimum If x∗1 and x∗2 minimize f (x1 , x2 ):
• FONC: f1′ (x∗1 , x∗2 ) = 0 and f2′ (x∗1 , x∗2 ) = 0. The two unknowns (x∗1 , x∗2 ) are solved from the
above two equations.
• SOSC: If x∗1 and x∗2 minimize f(x1, x2), the Hessian matrix
  H = [ f11′′  f12′′ ]
      [ f21′′  f22′′ ]
is positive (semi-)definite, i.e., the determinants of all leading principal minors are positive:
  det(H1) = |f11′′| > 0,
  det(H2) = det(H) = f11′′ f22′′ − (f12′′)² > 0.
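Before turning to three variables, here is a minimal sketch of the two-variable test (my own illustrative function, not from the notes): sympy solves the FONC, and we then inspect the leading principal minors of the Hessian.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = -x1**2 - x2**2 + x1*x2 + x1          # illustrative (strictly concave) function

# FONC: f1' = 0 and f2' = 0
critical = sp.solve([sp.diff(f, x1), sp.diff(f, x2)], (x1, x2))   # {x1: 2/3, x2: 1/3}

# SOSC: leading principal minors of the Hessian
H = sp.hessian(f, (x1, x2))
det_H1 = H[0, 0]        # f11'' = -2  < 0
det_H2 = H.det()        # f11''*f22'' - (f12'')^2 = 3  > 0

print(critical, det_H1, det_H2)   # signs alternate starting with "-"  =>  maximum
```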
Similarly, for functions with three variables: z = f (x1 , x2 , x3 ):
• FONC: f1′ = 0, f2′ = 0 and f3′ = 0. The three unknowns (x∗1 , x∗2 , x∗3 ) are solved from the three
equations.
• SOSC: The Hessian matrix is
  H = [ f11′′  f12′′  f13′′ ]
      [ f21′′  f22′′  f23′′ ]
      [ f31′′  f32′′  f33′′ ]
(1) If (x∗1, x∗2, x∗3) maximize f(·), then the matrix H is negative (semi-)definite, i.e., the
determinants of the leading principal minors alternate in sign, starting with “−”:
  det(H1) = |f11′′| < 0,
  det(H2) = f11′′ f22′′ − (f12′′)² > 0,
  det(H3) = det(H) < 0.
(2) If (x∗1, x∗2, x∗3) minimize f(·), then the matrix H is positive (semi-)definite, i.e.,
  det(H1) > 0, det(H2) > 0, det(H3) = det(H) > 0.
Alternatively, you can compute the eigenvalues to check whether a particular matrix is positive
or negative definite. However, it is difficult to solve for the eigenvalues of a matrix whose order is
greater than 2 (because it requires solving a high-degree polynomial).
Therefore, you should review how to compute the determinant of a matrix by cofactor expansion. For
a 3 × 3 matrix, for example, let Mij be the minor of the entry aij, i.e., the determinant of the
matrix obtained by deleting the i’th row and the j’th column. Let Cij be the cofactor of the entry
aij, i.e., Cij = (−1)^(i+j) Mij. Then the determinant of the matrix can be expressed by cofactor
expansion along the i’th row (or the j’th column, your choice).
Example 3. Compute the determinant of
  H = [ f11′′  f12′′  f13′′ ]
      [ f21′′  f22′′  f23′′ ]
      [ f31′′  f32′′  f33′′ ]
Let’s expand along the first column:
  det(H) = f11′′ (−1)^(1+1) M11 + f21′′ (−1)^(2+1) M21 + f31′′ (−1)^(3+1) M31,
where the minors are
  M11 = | f22′′  f23′′ |      M21 = | f12′′  f13′′ |      M31 = | f12′′  f13′′ |
        | f32′′  f33′′ |            | f32′′  f33′′ |            | f22′′  f23′′ |
and each product (−1)^(i+1) Mi1 is the cofactor Ci1 of the entry fi1′′.
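To see the cofactor expansion in code, here is a small sketch (the matrix entries are my own arbitrary numbers, and numpy is assumed to be available): it expands along the first column and compares the result with numpy’s built-in determinant.

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])            # arbitrary 3 x 3 matrix for illustration

def minor(M, i, j):
    """Determinant of M after deleting row i and column j (the minor M_ij)."""
    return np.linalg.det(np.delete(np.delete(M, i, axis=0), j, axis=1))

# Expansion along the first column; (-1)**i is the sign (-1)^(i+1) written with
# 0-based row indices (column index j = 0 corresponds to the 1st column).
det_by_cofactors = sum(A[i, 0] * (-1)**i * minor(A, i, 0) for i in range(3))

print(det_by_cofactors)        # ≈ 18.0
print(np.linalg.det(A))        # ≈ 18.0 (agrees)
```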
2 Solve a Linear System
Assume that the three unknowns (x1, x2, x3) are determined by the following linear system:
  a11 x1 + a12 x2 + a13 x3 = b1
  a21 x1 + a22 x2 + a23 x3 = b2
  a31 x1 + a32 x2 + a33 x3 = b3
Sometimes you are interested in the solution for x1 only. You can use Cramer’s rule:
(1) First, write the linear system in matrix form:
  [ a11  a12  a13 ] [ x1 ]   [ b1 ]
  [ a21  a22  a23 ] [ x2 ] = [ b2 ]
  [ a31  a32  a33 ] [ x3 ]   [ b3 ]
where the 3 × 3 matrix is the coefficient matrix A, the column vector of unknowns is x, and the
right-hand side is b, so that the system reads Ax = b.
(2) Second, compute the determinant of the coefficient matrix A (e.g., using cofactor expansion).
(3) Then, the solutions are
  x1 = (1/det(A)) × | b1  a12  a13 |
                    | b2  a22  a23 |
                    | b3  a32  a33 |      (replace the 1st column of A with b),
  x2 = (1/det(A)) × | a11  b1  a13 |
                    | a21  b2  a23 |
                    | a31  b3  a33 |      (replace the 2nd column of A with b),
  x3 = (1/det(A)) × | a11  a12  b1 |
                    | a21  a22  b2 |
                    | a31  a32  b3 |      (replace the 3rd column of A with b).
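A minimal sketch of Cramer’s rule in code (the coefficients and right-hand side are arbitrary illustrative numbers, and numpy is assumed to be available): each x_j is the determinant of A with its j-th column replaced by b, divided by det(A).

```python
import numpy as np

A = np.array([[2., 1., 1.],
              [1., 3., 2.],
              [1., 0., 0.]])      # illustrative coefficient matrix (det(A) != 0)
b = np.array([4., 5., 6.])        # illustrative right-hand side

det_A = np.linalg.det(A)

x = np.empty(3)
for j in range(3):
    A_j = A.copy()
    A_j[:, j] = b                 # replace the j-th column of A with b
    x[j] = np.linalg.det(A_j) / det_A

print(x)                          # Cramer's rule solution
print(np.linalg.solve(A, b))      # should agree with the direct solver
```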
3 Probability and Random Variables
Discrete random variables  When throwing a fair die, each of the six values 1 to 6 has probability
1/6 (see figure 2), i.e.,
  Pr(X = 1) = Pr(X = 2) = · · · = Pr(X = 6) = 1/6.
Figure 2: Probability mass function of a discrete random variable
The cumulative distribution function (CDF) for throwing the fair die:
  Pr(X ≤ 1) = 1/6,  Pr(X ≤ 2) = Pr(X = 1) + Pr(X = 2) = 2/6,  · · · ,  Pr(X ≤ 6) = 1.
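As a minimal sketch of the die example in code (my own illustration, not from the notes), the CDF is just the running sum of the pmf:

```python
# Fair six-sided die: Pr(X = k) = 1/6 for k = 1, ..., 6
pmf = {k: 1/6 for k in range(1, 7)}

# CDF as a running sum: Pr(X <= k) = Pr(X = 1) + ... + Pr(X = k)
cdf, running = {}, 0.0
for k in range(1, 7):
    running += pmf[k]
    cdf[k] = running

print(cdf[2])   # 2/6 ≈ 0.333
print(cdf[6])   # 1.0
```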
Continuous random variables  The random variable X is distributed within [x̲, x̄], and its
probability density function (pdf) is f(x).
• The sum of all probabilities must be equal to 1, i.e., ∫_x̲^x̄ f(x) dx = 1.
• The probability of being between a and b is
  Pr(a ≤ X ≤ b) = ∫_a^b f(x) dx.
• The probability of being lower than a particular number t is
  Pr(X ≤ t) = ∫_x̲^t f(x) dx,
which is defined as the cumulative distribution function (CDF): F(t) = Pr(X ≤ t) = ∫_x̲^t f(x) dx.
• F ′ (t) = f (t).
Example 4 (Uniform distribution). The random variable X is distributed uniformly (evenly) within
x ∈ [0, 4]. The pdf is
  f(x) = 1/(4 − 0) = 1/4.
The probability of being between 1 and 3 is
  Pr(1 ≤ X ≤ 3) = ∫_1^3 (1/4) dx = (3 − 1)/4 = 1/2.
The CDF is
  F(t) = Pr(X ≤ t) = ∫_0^t (1/4) dx = t/4,  or F(x) = x/4.
Confirm that F′(x) = 1/4 = f(x). See figure 3.
Figure 3: Uniform distribution
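The uniform example is easy to verify numerically; the sketch below (my own, assuming scipy is installed) reproduces Pr(1 ≤ X ≤ 3) = 1/2 and F(t) = t/4 with scipy.stats.uniform, which parameterizes Uniform[loc, loc + scale].

```python
from scipy.stats import uniform

X = uniform(loc=0, scale=4)       # X ~ Uniform[0, 4]

print(X.pdf(2))                   # 0.25 = 1/4 (the flat density on [0, 4])
print(X.cdf(3) - X.cdf(1))        # Pr(1 <= X <= 3) = 3/4 - 1/4 = 0.5
print(X.cdf(2))                   # F(2) = 2/4 = 0.5
```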
Example 5 (Normal distribution). The random variable X follows a normal distribution with mean
µ and standard deviation σ, i.e., X ∼ N(µ, σ²). The density function is
  f(x) = (1/(√(2π) σ)) e^(−(x−µ)²/(2σ²)).
Let µ = 10 and σ = 5; figure 4 plots the pdf f(x) = (1/(√(2π) · 5)) e^(−(x−10)²/(2·5²)).
Figure 4: N(10, 5²)
A frequently used technique for the normal distribution is the “change of variables”: let
t = (x − µ)/σ, and hence dt = (1/σ) dx. Therefore,
  1 = ∫_−∞^+∞ (1/(√(2π) σ)) e^(−(x−µ)²/(2σ²)) dx = ∫_−∞^+∞ (1/√(2π)) e^(−t²/2) dt
    = ∫_−∞^+∞ (1/(√(2π) · 1)) e^(−(x−0)²/(2·1²)) dx,
where N(0, 1) is the standard normal.
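This standardization is how normal probabilities are looked up in a standard normal table; here is a minimal sketch (assuming scipy is available) using the µ = 10, σ = 5 example from figure 4:

```python
from scipy.stats import norm

mu, sigma = 10, 5

# Pr(X <= 12) computed directly from N(10, 5^2) ...
p_direct = norm(loc=mu, scale=sigma).cdf(12)

# ... equals Pr(T <= t) under N(0, 1) with t = (x - mu)/sigma (change of variables)
t = (12 - mu) / sigma
p_standard = norm(loc=0, scale=1).cdf(t)

print(p_direct, p_standard)       # both ≈ 0.6554
```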
Quiz: How to calculate ∫_0^+∞ e^(−x²/2) dx?
Let I = ∫_0^+∞ e^(−x²/2) dx. Notice that
  ∫_0^+∞ ∫_0^+∞ e^(−(x²+y²)/2) dx dy = ∫_0^+∞ e^(−y²/2) ( ∫_0^+∞ e^(−x²/2) dx ) dy
                                            (the inner integral is a number independent of y)
    = ( ∫_0^+∞ e^(−x²/2) dx ) · ( ∫_0^+∞ e^(−x²/2) dx ) = I².
                                            (put the number in front; change of variables y = x)
Therefore we calculate I² first, using polar coordinates: the integration region is 0 ≤ x = r cos θ <
+∞ and 0 ≤ y = r sin θ < +∞; hence 0 ≤ θ ≤ π/2 and 0 ≤ r < +∞.
  I² = ∫_0^+∞ ∫_0^+∞ e^(−(x²+y²)/2) dx dy = ∫_0^(π/2) ∫_0^+∞ e^(−(r²cos²θ + r²sin²θ)/2) r dr dθ
     = ∫_0^(π/2) ∫_0^+∞ e^(−r²/2) r dr dθ = ∫_0^(π/2) ∫_0^−∞ (−e^u) du dθ      (u = −r²/2 ⇒ du = −r dr)
     = ∫_0^(π/2) (−e^(−∞) + e^0) dθ = π/2.
Therefore, I = √(π/2).
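As a quick numerical sanity check of the quiz result (a sketch assuming numpy and scipy are available): integrate e^(−x²/2) over [0, +∞) and compare with √(π/2).

```python
import numpy as np
from scipy.integrate import quad

# I = integral of e^{-x^2/2} from 0 to +infinity, computed numerically
I, abs_err = quad(lambda x: np.exp(-x**2 / 2), 0, np.inf)

print(I)                      # ≈ 1.2533
print(np.sqrt(np.pi / 2))     # ≈ 1.2533, the analytical value derived above
```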