Originals from the University of San Diego, adapted by K. van Deemter
If you want to know something about a population, your results would be most accurate if you could study the entire population.
But it is often not feasible (cost, time) to study the whole population.
We suspect that there is less crime in Aberdeen than the national average
How can we test this?
We do not have the funds to measure the crime rate in every street in Abdn, so we take a random sample of one or more streets.
Sampling in general: study a sample, and try to draw conclusions about the sample space (population) as a whole
The larger the sample, the more accurately it will tend to reflect the properties of the population
In this example: We calculate how much crime, on average, the streets in our sample have experienced and compare it to the national average.
Suppose UK crime is normally distributed, with 4 crimes per street (mean μ) and a known standard deviation σ
Now choose a sample of one Abdn street, which happens to have experienced 2 crimes
Suppose Aberdeen crime levels were the same as the national average: how probable would it be to find 2 or fewer crimes in a given street?
Recall that this can be computed given the mean and standard deviation of a normally distributed population.
If this is highly unlikely then say “it looks as if Abdn has less crime than the national average”
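To make this concrete, here is a minimal Python sketch (an illustration added here, not part of the original slides). The national mean of 4 crimes per street comes from the example above; the standard deviation σ = 1.5 is an assumed value chosen purely for illustration, since the slides only say σ is known.

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """P(X <= x) for a normal distribution with mean mu and standard deviation sigma."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

mu = 4.0      # national mean: 4 crimes per street (from the example)
sigma = 1.5   # ASSUMED standard deviation, for illustration only

p = normal_cdf(2, mu, sigma)
print(f"P(2 crimes or fewer in a street) = {p:.3f}")
# With sigma = 1.5 this gives roughly 0.09, i.e. not wildly unlikely on its own.
```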
National crime may not be normally distributed
The standard deviation on the number of crimes per street may be very high
As a result of this, you may find that 2 or fewer crimes per street may not be so improbable
For these reasons, a more sophisticated approach is called for
The trick is to look at a larger sample and focus on the sample mean
What is the probability of obtaining the sample mean that you did?
Compare your sample to other samples of the same size from the same population.
To make calculations easy, suppose your variable can have values 2,4,6,8 only (e.g. two crimes, four crimes, etc). Consider all possible samples of size two:
{2,4}, {4,2},{2,6}, {6,2}, {4,4},...
Although there are 16 different possible samples, there are not 16 different sample means possible. The ones that are possible have different probabilities.
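As an added illustration (not from the original slides), the following Python sketch enumerates the 16 samples of size two and tallies the probability of each possible sample mean; treating the four values as equally likely is an assumption made purely to keep the arithmetic simple.

```python
from itertools import product
from collections import Counter
from fractions import Fraction

values = [2, 4, 6, 8]                      # the only possible values of the variable
samples = list(product(values, repeat=2))  # all 16 ordered samples of size two

# Tally how often each sample mean occurs among the 16 equally likely samples.
mean_counts = Counter((a + b) / 2 for a, b in samples)

for m in sorted(mean_counts):
    print(f"sample mean {m}: probability {Fraction(mean_counts[m], len(samples))}")
# Only 7 distinct sample means occur (2, 3, 4, 5, 6, 7, 8);
# the middle values are more probable than the extreme ones.
```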
The sampling distribution of the mean:
Has the same mean as the original distribution
Tends to be (almost) normally distributed
Has a smaller standard deviation
The larger the sample size n, the smaller the standard deviation of the sample mean, σ_x̄
There is a formula which says how the new standard deviation depends on the old one (σ) and the sample size n.
In case you’re curious:
σ_x̄ = σ / √n
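A quick simulation sketch, added here for illustration, that checks these properties on the toy population above (values 2, 4, 6, 8, again assumed equally likely): the mean of the simulated sample means stays at the population mean, and their standard deviation comes out close to σ / √n.

```python
import random
from statistics import mean, pstdev

population_values = [2, 4, 6, 8]   # toy population, values assumed equally likely
n = 2                              # sample size
trials = 100_000

random.seed(0)
sample_means = [mean(random.choices(population_values, k=n)) for _ in range(trials)]

pop_mean = mean(population_values)   # 5.0
pop_sd = pstdev(population_values)   # about 2.236
print("population mean:      ", pop_mean)
print("mean of sample means: ", round(mean(sample_means), 3))   # close to 5.0
print("sigma / sqrt(n):      ", round(pop_sd / n ** 0.5, 3))    # about 1.581
print("sd of sample means:   ", round(pstdev(sample_means), 3)) # also about 1.58
```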
This distribution describes the entire spectrum of sample means that could occur just by chance.
In other words, the sampling distribution of the mean allows us to determine whether, among the set of random possibilities, the one observed sample mean can be viewed as a common outcome or a rare outcome .
[Figure: the sampling distribution of the mean, showing the probability of obtaining a particular sample mean; values near the centre are common outcomes, values in the tails are rare outcomes.]
But we were not gambling on the likelihood that this sample mean will occur
E.g., our guess was not: “average crime in Aberdeen is 3 crimes per street”
Our guess was that crime in the average Aberdeen street was below the national average
How would statisticians handle this?
We start with the hypothesis that the crime rate on average in Aberdeen is the same as the national average.
This is called the null hypothesis (H0). This is roughly the opposite of what you try to confirm (which is called the alternative hypothesis, HA, or the research hypothesis): that there is less crime in Aberdeen
To test the null hypothesis, we ask what sample means would occur if many samples of the same size were drawn at random from our population, assuming the null hypothesis is true.
Then we compare our sample mean with the means in this sampling distribution.
Suppose that the relationship between our sample mean and those of the sampling distribution of the mean looks like this…
[Figure: sampling distribution centred on our hypothesized value, with our obtained sample mean falling close to the centre.]
If so, our sample mean is one that could reasonably occur if the null hypothesis is true, and we will retain this hypothesis as one that could be true. (i.e., The crime rate of Aberdeen could be the same as the national average.)
On the other hand, if the relationship between our sample mean and those of the sampling distribution of the mean looks like this…
Our sample mean is so deviant that it would be quite unusual to obtain such a value when our hypothesis is true. In this case, we would reject our hypothesis and conclude that it is more likely that the crime rate of Abdn is not the same as the national average.
The population represented by the sample differs significantly from the comparison population.
Going into this a bit more deeply
(no need to understand this in detail)
But how deviant is deviant enough? In other words, how unlikely does H0 need to be to count as false?
In some areas a probability of 0.05 is generally agreed to be small enough (95% certainty)
In areas where errors are costly (e.g., medicine), it’s often chosen as low as 0.01 (99% certainty)
This is called the decision rule.
We say that the difference between the observed mean m and the hypothesised mean μ is significant if the decision rule decides that m is unlikely to have come about by accident.
0.01, 0.05, etc. are also called levels of significance
We can use the tables to calculate the critical values , which separate the upper 2.5% and lower 2.5% of sample means from the remainder.
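For illustration (an addition, assuming the scipy library is available), these critical values can also be computed from the standard normal distribution rather than read from tables:

```python
from scipy.stats import norm  # assumes scipy is installed

for alpha in (0.05, 0.01):
    # Two-tailed test: put alpha/2 in each tail of the standard normal curve.
    z_crit = norm.ppf(1 - alpha / 2)
    print(f"alpha = {alpha}: critical values are +/- {z_crit:.2f}")
# alpha = 0.05 gives +/- 1.96 (separating the upper and lower 2.5% of sample means);
# alpha = 0.01 gives +/- 2.58.
```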
A psychologist is working with people who have had surgery.
The psychologist thinks that people may recover from the operation more quickly if friends and family are in the room with them after the operation.
It is known that time to recover from this kind of surgery is normally distributed with a mean of 12 days and a standard deviation of 5 days.
The procedure of having friends and family in the room for the period after the surgery is done with 9 randomly selected patients. The patients recover in an average of 8 days.
Using the .01 level of significance, what should the researcher conclude?
For illustration, we show here how this experiment is analysed statistically.
H0 is the null hypothesis
HA is the alternative hypothesis (research hypothesis)
A test statistic says how far from the population mean the sample mean is. An often-used test statistic is Z.
Z involves the sample mean m, the hypothesised mean μ, and the standard deviation on the means
We have seen that the standard deviation on the means is σ_x̄ = σ / √n
• The formula for Z is Z = (m − μ) / (σ / √n)
• Z = the difference between m and μ, compared with the new standard deviation
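The same formula written as a small Python function, included here only as an illustrative sketch:

```python
from math import sqrt

def z_statistic(m, mu, sigma, n):
    """Z = (m - mu) / (sigma / sqrt(n)): how far the sample mean m lies from the
    hypothesised mean mu, measured in standard deviations of the sample mean."""
    return (m - mu) / (sigma / sqrt(n))
```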
State the research hypothesis:
Is it true that patients who have friends and family with them following surgery recover more or less quickly than people who do not?
State the statistical hypothesis:
H0: μ = 12    HA: μ ≠ 12
Set the decision rule (.01 level, two-tailed):
Zcrit = ±2.58
Calculate the test statistic (a code sketch of this test follows the interpretation below):
Z = (8 − 12) / (5 / √9) = −2.40
Decide if results are significant:
Retain H0, since −2.40 > −2.58
Interpret results as relating to the statistical hypothesis:
Patients who have friends and family with them did not recover significantly faster, or slower, than patients who do not have social support.
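Putting the steps together, here is a sketch in Python (an illustration added here, not part of the original analysis) that reproduces the calculation for the surgery example:

```python
from math import sqrt

m, mu, sigma, n = 8, 12, 5, 9     # sample mean, hypothesised mean, st. dev., sample size
z_crit = 2.58                     # two-tailed critical value at the .01 level

z = (m - mu) / (sigma / sqrt(n))  # (8 - 12) / (5 / 3) = -2.40
print(f"Z = {z:.2f}")

if abs(z) > z_crit:
    print("Reject H0: the difference is significant at the .01 level.")
else:
    print("Retain H0: the difference is not significant at the .01 level.")
# Here Z = -2.40 and |Z| < 2.58, so H0 is retained.
```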
Does it follow that “friends and family do not have the predicted effect”?
No! You may have used too few subjects, for example. The facts did point in the right direction (because recovery was 4 days faster, on average), so maybe do a bigger experiment
An experiment can never confirm the null hypothesis, only disconfirm it.
This is essentially what’s been done when you read that
one medicine is more effective than another
one user interface is better liked than another
one computer program runs faster than another, on typical input
In most cases, people are comparing one sample with another (rather than with a completely known population, as in our examples)
Still, the techniques are always similar.
We’ve covered some key concepts only (plus a quick illustration of how these concepts can be used in hypothesis testing)
More from Professor Hunter, who will talk about simulations and random number generators
More in year 2, when you learn about HCI
In the lectures on probability, we wrote
“ P(q) = a ”, where 0 <= a <= 1
Now we move on to Symbolic Logic, where we focus on the cases where a=0 or a=1