Measure of Location
▶ Notation: We use n to denote the sample size, i.e. the number of observations in a single sample.
e.g. if the sample of students’ heights is {180cm, 175cm, 191cm, 184cm, 178cm, 188cm}, then n = 6.
Furthermore, we use x_1, x_2, ..., x_n to denote the sample data.
e.g. in the above example, x_1 = 180, x_2 = 175, x_3 = 191, x_4 = 184, x_5 = 178, x_6 = 188.
Measure of Location
▶ Sample Mean: The sample mean x̄ of observations x_1, x_2, ..., x_n is defined as
\[ \bar{x} = \frac{x_1 + x_2 + \cdots + x_n}{n} = \frac{\sum_{i=1}^{n} x_i}{n} \]
Remark:
1. For simplicity, we can informally write x̄ = (Σ x_i)/n, where the summation is over all sample observations.
2. When reporting x̄, we use decimal accuracy of one digit more than the accuracy of the x_i’s.
3. The average of all values in the population is called the population mean and is denoted by the Greek letter µ. In statistics, µ is usually unavailable, and we want to obtain information about the population mean µ from the sample mean x̄.
Measure of Location
Example: In the previous example, the sample is {180, 175, 191, 184, 178, 188} and the sample size is 6; then the sample mean is calculated as
\[ \bar{x} = \frac{180 + 175 + 191 + 184 + 178 + 188}{6} = 182.7 \]
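As a quick cross-check, here is a minimal Python sketch of this computation; the heights are the sample above, and the helper name sample_mean is my own.

```python
def sample_mean(data):
    """Sample mean: x_bar = (x_1 + ... + x_n) / n."""
    return sum(data) / len(data)

heights = [180, 175, 191, 184, 178, 188]
print(round(sample_mean(heights), 1))  # 182.7, one decimal more than the data
```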
Measure of Location
▶ Pros and Cons
Pros: the sample mean tells us the location (center) of the sample.
Cons: the sample mean can be significantly affected by outliers.
Measure of Location
▶ Sample Median: The sample median is obtained by first ordering the n observations from smallest to largest (with any repeated values included, so that every sample observation appears in the ordered list). Then,
\[ \tilde{x} = \begin{cases} \left(\tfrac{n+1}{2}\right)\text{th ordered value}, & \text{if } n \text{ is odd} \\ \text{average of the } \left(\tfrac{n}{2}\right)\text{th and } \left(\tfrac{n}{2}+1\right)\text{th ordered values}, & \text{if } n \text{ is even} \end{cases} \]
Measure of Location
e.g. in the previous example, the sample is x_1 = 180, x_2 = 175, x_3 = 191, x_4 = 184, x_5 = 178, x_6 = 188. Then the ordered observations are x_{1:6} = 175, x_{2:6} = 178, x_{3:6} = 180, x_{4:6} = 184, x_{5:6} = 188, x_{6:6} = 191.
Since the sample size is even, the sample median is the average of x_{3:6} and x_{4:6}, which is 182.
If we have one more observation x_7 = 189, then the ordered observations are x_{1:7} = 175, x_{2:7} = 178, x_{3:7} = 180, x_{4:7} = 184, x_{5:7} = 188, x_{6:7} = 189, x_{7:7} = 191, and the sample median is x_{4:7} = 184, since the sample size is now odd.
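The odd/even rule above can be sketched in Python as follows; the function name sample_median is my own, and Python's statistics.median behaves the same way on these data.

```python
def sample_median(data):
    """Sample median via the odd/even rule applied to the ordered observations."""
    ordered = sorted(data)
    n = len(ordered)
    if n % 2 == 1:                       # odd n: the ((n + 1)/2)th ordered value
        return ordered[(n + 1) // 2 - 1]
    mid = n // 2                         # even n: average of the (n/2)th and (n/2 + 1)th
    return (ordered[mid - 1] + ordered[mid]) / 2

print(sample_median([180, 175, 191, 184, 178, 188]))       # 182.0 (n = 6, even)
print(sample_median([180, 175, 191, 184, 178, 188, 189]))  # 184   (n = 7, odd)
```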
Measure of Location
Remark:
1. Contrary to the sample mean, the sample median is very insensitive to outliers. In fact, the sample median is determined by at most two values in the sample.
2. Just as with the sample mean and the population mean, we can define the population median. However, in general, the sample median DOES NOT equal the population median. In statistics, we use the sample median to make inferences about the population median.
Measure of Location
Other Measures of Location:
▶ Quartiles: a quartile is any of the three values which divide the ordered data set into four equal parts, so that each part represents one quarter (1/4) of the sample.
e.g. If our sample data on the students’ heights is 180, 175, 191, 184, 178, 188, 189, 183, 197, 186, 172, 169, 181, 177, 170, 172, then the ordered data would be
169 170 172 172 | 175 177 178 180 | 181 183 184 186 | 188 189 191 197.
And a summary of this sample data is given by:
  Min.  1st Qu.  Median    Mean  3rd Qu.    Max.
 169.0    173.5   180.5   180.8    187.0   197.0
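A small Python sketch of such a summary, assuming the quartile convention used above (each quartile is the median of the corresponding half of the ordered data; other software, e.g. R's summary(), may interpolate slightly differently):

```python
def five_number_summary(data):
    """Min, lower quartile, median, upper quartile, max, with each quartile
    taken as the median of the corresponding half of the ordered data."""
    x = sorted(data)
    n = len(x)

    def med(v):
        m = len(v)
        return v[m // 2] if m % 2 else (v[m // 2 - 1] + v[m // 2]) / 2

    return x[0], med(x[: n // 2]), med(x), med(x[(n + 1) // 2:]), x[-1]

heights = [180, 175, 191, 184, 178, 188, 189, 183, 197, 186,
           172, 169, 181, 177, 170, 172]
print(five_number_summary(heights))           # (169, 173.5, 180.5, 187.0, 197)
print(round(sum(heights) / len(heights), 1))  # mean: 180.8
```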
Measure of Location
Other Measures of Location:
▶ Percentiles: A percentile is the data value below which a certain percentage of the observations fall.
e.g. the 20th percentile is the value below which 20 percent of the observations may be found. In our previous example, the sample size is 16, and 20% of 16 is 3.2. So the 20th percentile is 171.
▶ Trimmed Mean: a p% trimmed mean is obtained by eliminating the smallest p% and the largest p% of the data values and averaging the remaining values. It is a compromise between the sample mean and the sample median.
Measure of Location
Other Measures of Location:
▶ Trimmed Mean:
e.g. in our previous example, the sample data is 180, 175, 191, 184, 178, 188, 189, 183, 197, 186, 172, 169, 181, 177, 170, 172. If we eliminate the largest and the smallest observation, this corresponds to a 1/16 = 6.25% trimmed mean, and the 6.25% trimmed mean is x̄_tr(6.25%) = 180.4.
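A short Python sketch of the trimmed mean, assuming p% of n is a whole number as in this example (the function name trimmed_mean is my own):

```python
def trimmed_mean(data, p):
    """p% trimmed mean: drop the smallest p% and the largest p% of the ordered
    data, then average what remains (p% of n is assumed to be an integer here)."""
    x = sorted(data)
    k = round(len(x) * p / 100)
    trimmed = x[k:len(x) - k]
    return sum(trimmed) / len(trimmed)

heights = [180, 175, 191, 184, 178, 188, 189, 183, 197, 186,
           172, 169, 181, 177, 170, 172]
print(round(trimmed_mean(heights, 6.25), 1))  # 180.4 (drops 169 and 197)
```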
Measure of Location
▶ Categorical Data: In some cases, we can assign numerical values to categorical data and then calculate the sample mean. In that situation, the sample mean is the sample proportion.
e.g. if we toss a coin 10 times and get the results T, H, T, T, H, T, H, H, H, T, we can assign 0 to T and 1 to H. Then the sample mean is (1 + 1 + 1 + 1 + 1)/10 = 0.5, which is exactly the proportion of heads in the sample data.
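In Python, this coding step and the resulting sample proportion might look like the following sketch:

```python
# Coding the categorical outcomes numerically: T -> 0, H -> 1.
tosses = ["T", "H", "T", "T", "H", "T", "H", "H", "H", "T"]
coded = [1 if t == "H" else 0 for t in tosses]
print(sum(coded) / len(coded))  # 0.5, the sample proportion of heads
```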
Measures of Variability
Sample I:   30, 35, 40, 45, 50, 55, 60, 65, 70
Sample II:  30, 41, 48, 49, 50, 51, 52, 59, 70
Sample III: 41, 45, 48, 49, 50, 51, 52, 55, 59
Measures of Variability
▶ Sample Range: the difference between the largest and the smallest sample values.
e.g. for Sample I: 30, 35, 40, 45, 50, 55, 60, 65, 70, the sample range is 40 (= 70 − 30).
▶ Deviation from the Sample Mean: the difference between an individual sample value and the sample mean.
e.g. for Sample I: 30, 35, 40, 45, 50, 55, 60, 65, 70, the sample mean is 50 and thus the deviations from the sample mean are -20, -15, -10, -5, 0, 5, 10, 15, 20.
Measures of Variability
▶ Sample Variance: roughly, the average of the squared deviations of the individual observations from the sample mean (with divisor n − 1 rather than n).
If our sample size is n and we use x̄ to denote the sample mean, then the sample variance s² is given by:
\[ s^2 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n-1} = \frac{S_{xx}}{n-1} \]
▶ Sample Standard Deviation: the square root of the sample variance,
\[ s = \sqrt{s^2} \]
Measures of Variability
e.g. for Sample I: 30, 35, 40, 45, 50, 55, 60, 65, 70, the mean is 50 and we have
x_i:           30   35   40   45   50   55   60   65   70
x_i − x̄:      -20  -15  -10   -5    0    5   10   15   20
(x_i − x̄)²:   400  225  100   25    0   25  100  225  400
Therefore the sample variance is (400 + 225 + 100 + 25 + 0 + 25 + 100 + 225 + 400)/(9 − 1) = 187.5, and the standard deviation is √187.5 = 13.7.
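These by-hand calculations can be reproduced with a minimal Python sketch (the function names are my own; Python's statistics.variance and statistics.stdev use the same n − 1 divisor):

```python
import math

def sample_variance(data):
    """s^2 = sum of squared deviations from the sample mean, divided by n - 1."""
    n = len(data)
    x_bar = sum(data) / n
    return sum((x - x_bar) ** 2 for x in data) / (n - 1)

def sample_sd(data):
    """s = sqrt(s^2)."""
    return math.sqrt(sample_variance(data))

sample_1 = [30, 35, 40, 45, 50, 55, 60, 65, 70]
print(sample_variance(sample_1))      # 187.5
print(round(sample_sd(sample_1), 1))  # 13.7
```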
Measures of Variability
e.g. for Sample II: 30, 41, 48, 49, 50, 51, 52, 59, 70, the mean is also 50 and we have
x_i:           30   41   48   49   50   51   52   59   70
x_i − x̄:      -20   -9   -2   -1    0    1    2    9   20
(x_i − x̄)²:   400   81    4    1    0    1    4   81  400
Therefore the sample variance is (400 + 81 + 4 + 1 + 0 + 1 + 4 + 81 + 400)/(9 − 1) = 121.5, and the standard deviation is √121.5 = 11.0.
Measures of Variability
e.g. for Sample III: 41, 45, 48, 49, 50, 51, 52, 55, 59, the mean is also 50 and we have
x_i:           41   45   48   49   50   51   52   55   59
x_i − x̄:       -9   -5   -2   -1    0    1    2    5    9
(x_i − x̄)²:    81   25    4    1    0    1    4   25   81
Therefore the sample variance is (81 + 25 + 4 + 1 + 0 + 1 + 4 + 25 + 81)/(9 − 1) = 27.75, and the standard deviation is √27.75 = 5.3.
Measures of Variability
The sample variance for Sample I is 187.5, for Sample II it is 121.5, and for Sample III it is 27.75: the more spread out the sample, the larger the sample variance.
Measures of Variability
Remark: 1. Why use the sum of squares of the deviations? Why not sum the deviations?
Because the sum of the deviations from the sample mean EQUALS 0!
\[ \sum_{i=1}^{n} (x_i - \bar{x}) = \sum_{i=1}^{n} x_i - \sum_{i=1}^{n} \bar{x} = \sum_{i=1}^{n} x_i - n\bar{x} = \sum_{i=1}^{n} x_i - n\left(\frac{1}{n}\sum_{i=1}^{n} x_i\right) = 0 \]
Measures of Variability
Remark:
2. Why do we use divisor n − 1 in the calculation of the sample variance, while we use divisor N in the calculation of the population variance?
The variance is a measure of deviation from the “center”. However, the “centers” for the sample and the population are different, namely the sample mean and the population mean.
If we could use µ instead of x̄ in the definition of s², then we would use s² = Σ(x_i − µ)²/n.
But generally, the population mean is unavailable to us, so our choice is the sample mean. In that case, the observations x_i tend to be closer to their own average x̄ than to the population average µ. To compensate, we use divisor n − 1.
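The following small simulation sketch illustrates this remark; the standard normal population (true variance 1), the sample size n = 5, and the number of replications are my own illustrative choices.

```python
import random

# Squared deviations measured from x_bar are, on average, smaller than squared
# deviations measured from mu; dividing by n - 1 compensates for this.
random.seed(0)
n, reps = 5, 20000
avg_div_n = avg_div_n_minus_1 = 0.0
for _ in range(reps):
    xs = [random.gauss(0, 1) for _ in range(n)]
    x_bar = sum(xs) / n
    ss = sum((x - x_bar) ** 2 for x in xs)
    avg_div_n += ss / n / reps
    avg_div_n_minus_1 += ss / (n - 1) / reps
print(round(avg_div_n, 2))          # around 0.8: dividing by n underestimates 1
print(round(avg_div_n_minus_1, 2))  # around 1.0: dividing by n - 1 is about right
```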
Measures of Variability
Remark:
3. It is customary to refer to s² as being based on n − 1 degrees of freedom (df).
s² is built from the n quantities (x_1 − x̄)², (x_2 − x̄)², ..., (x_n − x̄)². However, the sum of x_1 − x̄, x_2 − x̄, ..., x_n − x̄ is 0, so if we know any n − 1 of the deviations, we know all of them.
e.g. suppose x_1 = 4, x_2 = 7, x_3 = 1, and x_4 = 10. Then the mean is x̄ = 5.5 and x_1 − x̄ = −1.5, x_2 − x̄ = 1.5, and x_3 − x̄ = −4.5. From that, we know directly that x_4 − x̄ = 4.5, since their sum is 0.
Measures of Variability
Some mathematical results for s²:
▶ s² = S_xx / (n − 1), where
\[ S_{xx} = \sum (x_i - \bar{x})^2 = \sum x_i^2 - \frac{\left(\sum x_i\right)^2}{n}; \]
▶ If y_1 = x_1 + c, y_2 = x_2 + c, ..., y_n = x_n + c, then s_y² = s_x²;
▶ If y_1 = c·x_1, y_2 = c·x_2, ..., y_n = c·x_n, then s_y = |c| s_x.
Here s_x² is the sample variance of the x’s, s_y² is the sample variance of the y’s, and c is any nonzero constant.
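These results can be checked numerically with a short Python sketch (the function names and the constant c = 10 are my own choices):

```python
def sxx(data):
    """S_xx via the shortcut formula: sum(x_i^2) - (sum(x_i))^2 / n."""
    n = len(data)
    return sum(x * x for x in data) - sum(data) ** 2 / n

def s2(data):
    """Sample variance s^2 = S_xx / (n - 1)."""
    return sxx(data) / (len(data) - 1)

sample_3 = [41, 45, 48, 49, 50, 51, 52, 55, 59]
c = 10
print(s2(sample_3))                   # 27.75
print(s2([x + c for x in sample_3]))  # shifting every value leaves s^2 unchanged: 27.75
print(s2([c * x for x in sample_3]))  # scaling by c multiplies s^2 by c^2: 2775.0
```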
Measures of Variability
e.g. in the previous example, Sample III is {41, 45, 48, 49, 50, 51, 52, 55, 59}, and we can calculate the sample variance as follows:
x_i:     41    45    48    49    50    51    52    55    59
x_i²:  1681  2025  2304  2401  2500  2601  2704  3025  3481
Σ x_i = 450,  Σ x_i² = 22722
Therefore the sample variance is
\[ s^2 = \left(22722 - \frac{450^2}{9}\right) / (9 - 1) = 27.75 \]
Measures of Variability
Boxplots
e.g. A recent article (“Indoor Radon and Childhood Cancer”) presented the accompanying data on radon concentration (Bq/m³) in two different samples of houses. The first sample consisted of houses in which a child diagnosed with cancer had been residing. Houses in the second sample had no recorded cases of childhood cancer. The data were presented in a stem-and-leaf display.
[Stem-and-leaf displays of the radon concentrations for the two samples, 1. Cancer and 2. No cancer, with stems 0 to 8 (tens digit) and leaves giving the ones digit.]
Measures of Variability
The boxplot for the 1st data set is:
Measures of Variability
The boxplot for the 2nd data set is:
Measures of Variability
We can also make the boxplot for both data sets:
Measures of Variability
Some terminology:
▶ Lower Fourth: the median of the smallest half of the ordered data
▶ Upper Fourth: the median of the largest half of the ordered data
▶ Fourth Spread: the difference between the upper fourth and the lower fourth,
f_s = upper fourth − lower fourth
▶ Outlier: any observation farther than 1.5 f_s from the closest fourth
An outlier is extreme if it is more than 3 f_s from the nearest fourth, and it is mild otherwise (see the sketch below).
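A rough Python sketch of these definitions, assuming the common convention that the middle observation is included in both halves when n is odd (the function names are my own):

```python
def fourths(data):
    """Lower/upper fourth = median of the smallest/largest half of the ordered
    data; the middle observation goes into both halves when n is odd."""
    x = sorted(data)
    n = len(x)

    def med(v):
        m = len(v)
        return v[m // 2] if m % 2 else (v[m // 2 - 1] + v[m // 2]) / 2

    return med(x[: (n + 1) // 2]), med(x[n // 2:])

def classify_outliers(data):
    """Mild outliers lie more than 1.5*f_s (but at most 3*f_s) from the nearest
    fourth; extreme outliers lie more than 3*f_s away."""
    lower, upper = fourths(data)
    fs = upper - lower
    mild, extreme = [], []
    for x in data:
        dist = max(lower - x, x - upper)  # distance beyond the nearest fourth
        if dist > 3 * fs:
            extreme.append(x)
        elif dist > 1.5 * fs:
            mild.append(x)
    return mild, extreme

sample_2 = [30, 41, 48, 49, 50, 51, 52, 59, 70]
print(fourths(sample_2))            # (48, 52), so f_s = 4
print(classify_outliers(sample_2))  # ([41, 59], [30, 70])
```

With this small, tightly clustered middle, even 41 and 59 are flagged as mild outliers, while 30 and 70 are extreme; the classification depends on the fourths convention used.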
Measures of Variability
The boxplot for the 2nd data set is:
Sample Spaces and Events
Basic Concepts in Probability:
▶ Experiment: any action or process whose outcome is subject to uncertainty
e.g. tossing a coin 3 times, testing the pH value of some reagent, counting the number of customers visiting a store in one day, etc.
▶ Sample Space: the set of all possible outcomes of an experiment, usually denoted by S
e.g. for the above 3 examples, the sample spaces are {TTT, TTH, THH, THT, HHH, HHT, HTH, HTT}, [0, 14], and {0, 1, 2, . . . }, respectively.
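For the coin tossing example, the sample space can be enumerated with a short Python sketch:

```python
from itertools import product

# Enumerate the sample space for tossing a coin 3 times.
S = ["".join(outcome) for outcome in product("HT", repeat=3)]
print(S)       # ['HHH', 'HHT', 'HTH', 'HTT', 'THH', 'THT', 'TTH', 'TTT']
print(len(S))  # 8 possible outcomes
```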
Sample Spaces and Events
Basic Concepts in Probability:
▶ Event: any collection (subset) of outcomes contained in the sample space S.
An event is simple if it consists of exactly one outcome and compound if it consists of more than one outcome.
e.g. for the coin tossing example: {all the outcomes such that the first result is a Head}, i.e. {HHT, HTH, HTT, HHH}, is an event, and it is a compound event;
{all the outcomes with 3 consecutive Heads}, i.e. {HHH}, is also an event, and it is a simple event.
Sample Spaces and Events
Examples:
For the pH value testing example:
{pH value is less than 7.0}, i.e. [0, 7.0), is an event, and it is compound;
{pH value is between 2.0 and 3.0}, i.e. [2.0, 3.0], is another event, and it is also compound.
For the customer-counting example:
{the number of customers visiting in one day is less than 100}, i.e. {0, 1, 2, . . . , 99}, is an event, and it is compound;
{the number of customers visiting in one day is more than 200}, i.e. {201, 202, . . . }, is also an event, and it is compound.
Sample Spaces and Events
Basic Set Theory
▶ Complement: the complement of an event A, denoted by A’, is the set of all outcomes in S that are not contained in A.
e.g. for our coin tossing example, if
A = {the first outcome is a Head} = {HHH, HHT, HTH, HTT}, then
A’ = {the first outcome is not a Head, i.e. a Tail} = {TTT, TTH, THT, THH};
for the pH value testing example, if
A = {the pH value of the reagent is below 7.0}, then
A’ = {the pH value of the reagent is 7.0 or above}
Sample Spaces and Events
Basic Set Theory
▶ Union: the union of two events A and B is the event consisting of all outcomes that are either in A or in B or in both events, that is, all outcomes in at least one of the events; it is denoted by A ∪ B.
e.g. for the coin tossing example, if
A = {the first outcome is a Head} = {HHH, HHT, HTH, HTT}, and
B = {the last outcome is a Head} = {HHH, TTH, HTH, THH}, then
A ∪ B = {the first or the last outcome is a Head} = {HHH, HHT, HTH, HTT, TTH, THH}
Sample Spaces and Events
Basic Set Theory
▶ Intersection: the intersection of two events A and B is the event consisting of all outcomes that are in both A and B; it is denoted by A ∩ B.
e.g. for the coin tossing example, if
A = {the first outcome is a Head} = {HHH, HHT, HTH, HTT}, and
B = {the last outcome is a Head} = {HHH, TTH, HTH, THH}, then
A ∩ B = {the first and the last outcomes are Heads} = {HHH, HTH}
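The complement, union, and intersection for the coin tossing example can be verified with a short Python sketch using sets:

```python
# The three-toss sample space and the two events from the coin example, as sets.
S = {"HHH", "HHT", "HTH", "HTT", "THH", "THT", "TTH", "TTT"}
A = {s for s in S if s[0] == "H"}    # first toss is a Head
B = {s for s in S if s[-1] == "H"}   # last toss is a Head

print(sorted(S - A))  # complement A':      ['THH', 'THT', 'TTH', 'TTT']
print(sorted(A | B))  # union A ∪ B:        ['HHH', 'HHT', 'HTH', 'HTT', 'THH', 'TTH']
print(sorted(A & B))  # intersection A ∩ B: ['HHH', 'HTH']
```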