Random Variables

It is easier to work with numbers than with elementary outcomes. From now on, we're going to use real-valued functions called random variables (r.v.'s) instead of working with Ω and ω directly. Random variables allow us to summarize elementary outcomes and events with numbers.

Definition: A random variable (r.v.) is a function X : Ω → R.

Example: Throw a dart at a board with 8 grey squares and 1 red square three times. The tosses are iid with P(red) = 1/9 on any toss. Count the number of times you hit the red square. (A simulation sketch of this example appears at the end of this section.)

• Let X = # of reds.
• X is a random variable.
  – X = x if there are x reds and 3 − x greys.
  – The set of possible values for X is {0, 1, 2, 3}.
• What is the probability that you hit the red square exactly twice?
  – The event “exactly 2 reds” is formally written as {ω : X(ω) = 2}.
  – Simpler notation for the event “exactly 2 reds” is X = 2.

  P(2 reds) = P(X = 2) = P(RRG or RGR or GRR)
            = P(RRG) + P(RGR) + P(GRR)
            = (1/9)(1/9)(8/9) + (1/9)(8/9)(1/9) + (8/9)(1/9)(1/9)
            = 24/729 ≈ 0.033

• Standard Notation:
  – Capital letters for r.v.'s – i.e., X, Y, Z – and lowercase letters for the values of the r.v.'s.
  – To simplify, write X = x for {ω : X(ω) = x}.

Example (Practice with notation): Send 8 bits of information through a communication channel. Each bit is received correctly with probability p and incorrectly with probability q. The bits are independent. We are interested in the number of bits that are received incorrectly.

  – Sending the 8 bits is a sequence of iid Bernoulli trials.
  – Let X = # of bits received incorrectly.
  – What are the possible values for X? {0, 1, 2, 3, 4, 5, 6, 7, 8}
  – Write the following events and expressions for their probabilities using the r.v. X.
    (a.) No wrong bits: X = 0, P(X = 0)
    (b.) At least one wrong bit: X > 0, P(X > 0), or equivalently X ≥ 1, P(X ≥ 1)
    (c.) Exactly 2 wrong bits: X = 2, P(X = 2)
    (d.) Between 2 and 7 wrong bits (inclusive): 2 ≤ X ≤ 7, P(2 ≤ X ≤ 7)

Definition: The image of a r.v. is the range of the r.v.:

  Im(X) = R(X) = X(Ω) = {x : x = X(ω) for some ω ∈ Ω}.

The image of a r.v. X is sometimes called the sample space for X.

Definition: A r.v. X is discrete if Im(X) is finite or countably infinite.

Examples: Consider the following experiments. What is the image? Is the r.v. discrete?
  (a.) X = total # of dots showing on two rolls of a six-sided die.
    ∗ Im(X) = {2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}
    ∗ X is discrete.
  (b.) Y = # of heads in n = 1000 tosses of a fair coin.
    ∗ Im(Y) = {0, 1, . . . , 1000}
    ∗ Y is discrete.
  (c.) W = time until a part on a machine fails.
    ∗ Im(W) = (0, ∞)
    ∗ W is not discrete.
  (d.) Z = trial on which the first head occurs.
    ∗ Im(Z) = {1, 2, 3, . . .} = {positive integers}
    ∗ Z is discrete.
  (e.) I = 1 if the amplifier fails during the warranty period and 0 otherwise.
    ∗ Im(I) = {0, 1}
    ∗ I is discrete.
    ∗ I is a Bernoulli random variable.

Random Variables and Bernoulli Trials

Definition: A Bernoulli random variable is a r.v. with two possible outcomes. If X is a Bernoulli r.v., then Im(X) = {0, 1}, and

  P(X = 1) = p,  P(X = 0) = q,  p + q = 1.

Random Variables and Independence

Definition: Two discrete r.v.'s X and Y are independent if

  P(X = x, Y = y) = P(X = x) P(Y = y)

for all x ∈ Im(X) and y ∈ Im(Y). In general, two r.v.'s X and Y are independent if

  P(X ∈ A, Y ∈ B) = P(X ∈ A) P(Y ∈ B)

for any two sets A ⊂ Im(X) and B ⊂ Im(Y). Note that P(X = x, Y = y) means P(X = x and Y = y), and P(X ∈ A, Y ∈ B) means P(X ∈ A and Y ∈ B).

Result: Two r.v.'s X and Y are independent if P(X ∈ A | Y ∈ B) = P(X ∈ A) for all A ⊂ Im(X) and B ⊂ Im(Y) with P(Y ∈ B) > 0.
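To make the product-rule definition concrete, here is a minimal sketch (Python, not part of the original notes; the names omega, prob, X, and Y are illustrative) that enumerates the 36 equally likely outcomes of two rolls of a fair die, lets X be the first roll and Y the second, and verifies P(X = x, Y = y) = P(X = x) P(Y = y) exactly for every pair of values.

    from fractions import Fraction
    from itertools import product

    # Sample space: two rolls of a fair six-sided die; all 36 outcomes equally likely.
    omega = list(product(range(1, 7), repeat=2))
    p_elem = Fraction(1, len(omega))          # probability of each elementary outcome

    def X(w):                                 # X(omega) = first roll
        return w[0]

    def Y(w):                                 # Y(omega) = second roll
        return w[1]

    def prob(event):
        # P(event) = sum of elementary probabilities over outcomes in the event
        return sum(p_elem for w in omega if event(w))

    # Check the defining product rule for every pair of values in Im(X) x Im(Y).
    for x in range(1, 7):
        for y in range(1, 7):
            joint = prob(lambda w: X(w) == x and Y(w) == y)
            marginal_product = prob(lambda w: X(w) == x) * prob(lambda w: Y(w) == y)
            assert joint == marginal_product  # both equal 1/36

    print("Product rule holds for every (x, y): X and Y are independent.")

Because the check uses exact fractions, the equality is exact rather than approximate.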
Intuitively, X and Y are independent if knowledge about Y tells us nothing about what values are more or less likely for X.
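Returning to the dart-board example from the start of the section, a short Monte Carlo sketch (Python; the names count_reds and n_reps and the number of repetitions are illustrative choices, not part of the notes) can be used to check the hand calculation P(X = 2) = 24/729 ≈ 0.033.

    import random

    def count_reds(p_red=1/9, tosses=3):
        # One repetition of the experiment: X = number of red hits in three iid tosses.
        return sum(random.random() < p_red for _ in range(tosses))

    n_reps = 1_000_000
    hits = sum(count_reds() == 2 for _ in range(n_reps))

    exact = 3 * (1/9) ** 2 * (8/9)            # = 24/729, about 0.0329
    print(f"simulated P(X = 2) ≈ {hits / n_reps:.4f}")
    print(f"exact     P(X = 2) = {exact:.4f}")

With a million repetitions, the simulated relative frequency typically agrees with the exact value to about three decimal places.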