Some things to remember for the final
1. Mean and Variance
• E[aX + bY] = aE[X] + bE[Y] (Note: this works even if X and Y are dependent.)
• If X and Y are independent (more generally, uncorrelated), then Var(X + Y) = Var(X) + Var(Y).
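Both bullets can be verified exactly for small discrete pmfs; here is a minimal sketch (the coin/die pmfs and the constants a, b are illustrative choices, not from the notes):

```python
import itertools

# Two independent discrete random variables, given as {value: probability} pmfs.
pX = {0: 0.5, 1: 0.5}          # a fair coin
pY = {1: 1/3, 2: 1/3, 3: 1/3}  # a fair 3-sided die

def mean(p):
    return sum(v * q for v, q in p.items())

def var(p):
    m = mean(p)
    return sum((v - m) ** 2 * q for v, q in p.items())

# Under independence the joint pmf factors: p(x, y) = pX(x) * pY(y).
a, b = 2.0, -3.0
E_aXbY = sum((a * x + b * y) * px * py
             for (x, px), (y, py) in itertools.product(pX.items(), pY.items()))
assert abs(E_aXbY - (a * mean(pX) + b * mean(pY))) < 1e-12  # linearity of E

# Var(X + Y) = Var(X) + Var(Y) for independent X, Y: build the pmf of X + Y.
pSum = {}
for (x, px), (y, py) in itertools.product(pX.items(), pY.items()):
    pSum[x + y] = pSum.get(x + y, 0.0) + px * py
assert abs(var(pSum) - (var(pX) + var(pY))) < 1e-12
```

Note that linearity of expectation needs no independence at all, while the variance identity relies on the joint pmf factoring as above.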
2. Central Limit Theorem (CLT) and Law of large numbers (LLN)
Let X1, X2, . . . be independent and identically distributed.
• The law of large numbers says the sample mean Sn := (1/n) Σ_{i=1}^n Xi converges to the mean E[Xi] as n → ∞.
• Formally, the CLT says that the probability distribution of √n (Sn − E[Xi]) converges to the distribution of a normal random variable as n → ∞.
• Practically speaking, the CLT says that if n is large then the sum Tn := Σ_{i=1}^n Xi is approximately distributed as N(E[Tn], Var(Tn)). To approximate large sums (or averages such as the sample mean Sn), all you have to do is find the mean and variance of the large sum and plug those two parameters into the normal distribution.
• If X ∼ N(µ, σ²) then P(X < t) = P((X − µ)/σ < (t − µ)/σ) = Φ((t − µ)/σ), since (X − µ)/σ ∼ N(0, 1).
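The practical CLT recipe and the standardization identity can be combined in a short sketch (the Uniform(0, 1) summands, n = 50, and threshold t = 27 are assumed toy choices, not from the notes):

```python
import math
import random

random.seed(0)

def Phi(z):
    # Standard normal CDF via the error function: Phi(z) = (1 + erf(z/sqrt(2))) / 2.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Toy setup: Tn = X1 + ... + Xn with Xi ~ Uniform(0, 1) i.i.d.
n = 50
mu, var = 0.5, 1.0 / 12.0   # mean and variance of one Uniform(0, 1)
t = 27.0

# CLT approximation: Tn ≈ N(n*mu, n*var), so standardizing gives
# P(Tn <= t) ≈ Phi((t - n*mu) / sqrt(n*var)).
approx = Phi((t - n * mu) / math.sqrt(n * var))

# Monte Carlo estimate of the same probability, for comparison.
trials = 100_000
hits = sum(sum(random.random() for _ in range(n)) <= t for _ in range(trials))
mc = hits / trials

print(approx, mc)  # the two numbers should be close
```

The only work on the CLT side is computing the mean n·µ and variance n·σ² of the sum, exactly as the recipe above says.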
3. Moment generating functions
• MX(t) := E[e^(tX)]
• If X and Y are independent, the moment generating function of X + Y is just the product of the individual moment generating functions (Warning: This is NOT true if X and Y are dependent.):
MX+Y(t) := E[e^(t(X+Y))] = E[e^(tX)] × E[e^(tY)] = MX(t)MY(t)
• MaX(t) = E[e^(taX)] = MX(at) (Warning: M2X(t) ≠ MX(t)², since even though 2X = X + X, X is not independent of itself!)
• E[X^n] = (d^n/dt^n) MX(t), evaluated at t = 0
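A minimal numerical check of the moment formula, assuming a Bernoulli(p) toy example (not from the notes), using finite differences in place of symbolic derivatives:

```python
import math

# MGF of a Bernoulli(p) variable: M(t) = E[e^(tX)] = (1 - p) + p*e^t,
# and E[X^n] is the n-th derivative of M at t = 0.
p = 0.3
M = lambda t: (1.0 - p) + p * math.exp(t)

# Central finite differences approximate the first two derivatives at t = 0.
h = 1e-4
d1 = (M(h) - M(-h)) / (2 * h)                # ≈ M'(0)  = E[X]   = p
d2 = (M(h) - 2 * M(0.0) + M(-h)) / h ** 2    # ≈ M''(0) = E[X^2] = p (since X^2 = X)

print(d1, d2)  # both ≈ 0.3
```

For a Bernoulli variable X² = X, so both the first and second moments equal p, which is what the two difference quotients recover.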
4. Joint PMFs, PDFs, and CDFs.
• If X and Y are independent, then F(X,Y)(s, t) = FX(s) × FY(t), and f(X,Y)(s, t) = fX(s) × fY(t) (in the continuous case) and p(X,Y)(s, t) = pX(s) × pY(t) (in the discrete case). Warning: This is NOT true if X and Y are dependent.
• If Z = g(X), where g : R → R is some function, then FZ(t) = P(Z ≤ t) = P(g(X) ≤ t). To get the pdf fZ(t), you can differentiate the CDF.
• If you know the joint pdf f(X,Y ) (x, y), you can find the probability P(g(X,Y)<t), where g : R2 → R
is some function, by performing a double integral
P(g(X, Y) < t) = ∬_R f(X,Y)(x, y) dy dx
where R = {(x, y) ∈ R2 : g(x, y) < t} is the region on the xy-plane where g(x, y) < t. You can
find the bounds of this double integral by drawing a picture of R, and finding the range of x as a
function of t, then the range of y as a function of both x and t.
5. Sums of random variables
• If X and Y are independent, then the pdf fX+Y(t) of X + Y is given by the convolution of the pdfs of X and Y. Warning: This is NOT true if X and Y are dependent.
• In general, whether or not X and Y are independent, we can always find the pdf of X + Y by
performing a double integral, and then differentiating:
FX+Y(t) = P(X + Y ≤ t) = ∬_{(x,y) : x+y ≤ t} f(X,Y)(x, y) dx dy
fX+Y(t) = (d/dt) FX+Y(t)
• The sum of two independent normals is also a normal. The sum of two independent Poisson random variables is also a Poisson random variable.
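The Poisson closure fact can be checked term by term by convolving the two pmfs (the rates 1.5 and 2.5 are illustrative choices):

```python
import math

def pois(k, lam):
    # Poisson pmf: P(X = k) = e^(-lam) * lam^k / k!
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam1, lam2 = 1.5, 2.5

def conv(k):
    # Discrete convolution: P(X + Y = k) = sum_j P(X = j) * P(Y = k - j).
    return sum(pois(j, lam1) * pois(k - j, lam2) for j in range(k + 1))

# The convolution matches the Poisson(lam1 + lam2) pmf term by term.
for k in range(10):
    assert abs(conv(k) - pois(k, lam1 + lam2)) < 1e-12
print("Poisson(1.5) + Poisson(2.5) has the Poisson(4.0) pmf")
```

This is the discrete analogue of the continuous convolution formula for pdfs in the first bullet.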
6. Markov chains
• Let X1, X2, . . . be a Markov chain. The Markov property says that P(Xt = xt | Xt−1 = xt−1, . . . , X1 = x1) = P(Xt = xt | Xt−1 = xt−1), i.e., the current state Xt depends only on the previous state Xt−1, as long as we know the previous state (Note: If we don’t know Xt−1, information about states before the previous state is still useful).
• π(t) := (P(State 1 at time t), P(State 2 at time t), . . . P(State n at time t)), where n is the number of possible states the Markov chain can take.
• π(t) = π(t − 1)P, where P is the probability transition matrix. (This is just the law of total probability, written down using matrices.) The entries pi,j of P are
pi,j := P(State j at time t | State i at time t−1) (Note: usually, we assume that the Markov chain is "time-invariant", i.e., we usually assume P does not change with t).
• π(t) = π(0)P^t (assuming you don’t have extra information about the distribution π(s) at some intermediate time s, where 0 < s < t. If you are given π(s), then by the Markov property, π(t) = π(s)P^(t−s).)
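The update π(t) = π(t − 1)P can be sketched for a toy two-state chain (the transition matrix and starting distribution below are assumptions for illustration):

```python
# A two-state transition matrix: each row sums to 1.
P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = [1.0, 0.0]   # start in state 1 with probability 1

def step(pi, P):
    # Law of total probability in matrix form: pi_new[j] = sum_i pi[i] * P[i][j].
    return [sum(pi[i] * P[i][j] for i in range(len(pi))) for j in range(len(P[0]))]

# Applying the update t times computes pi(t) = pi(0) P^t.
for t in range(1, 51):
    pi = step(pi, P)

print(pi)  # converges toward the stationary distribution of this chain
```

For this particular P, solving πP = π gives the stationary distribution (5/6, 1/6), which the iteration approaches.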
7. If X and Y are independent, and W := max{X, Y}, then W has CDF FW(t) = FX(t) × FY(t).
(Warning: This is not true for the pdf: fW(t) ≠ fX(t) × fY(t). To get the pdf you must differentiate the CDF using the product rule.)
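A quick sanity check of the max-CDF formula, assuming independent Uniform(0, 1) variables (an illustrative choice): then FW(t) = t² and, by the product rule, fW(t) = 2t.

```python
import random

random.seed(2)

# For independent X, Y ~ Uniform(0, 1) and W = max(X, Y):
# F_W(t) = F_X(t) * F_Y(t) = t^2, and differentiating gives f_W(t) = 2t.
t = 0.7
samples = 100_000
empirical = sum(max(random.random(), random.random()) <= t
                for _ in range(samples)) / samples

print(empirical)  # should be close to t**2 = 0.49
```

Note the empirical CDF tracks t², not t × t-as-pdf-product: the warning above is exactly that the pdfs do not multiply.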
Good luck on the final!