John's Lab to teach introductory error analysis

Experimental Uncertainty and Testing Theories
A lab designed to teach the basic concepts to beginning physics students.
-John Welch, Cabrillo College
OBJECTIVES
1. To understand the ideas of experimental uncertainty and confidence intervals.
2. To decide whether experimental data supports or contradicts a theory.
EQUIPMENT
Set of Dice (1 ea of: 4 sided, 6 sided, 12 sided, and 20 sided), stopwatch, ball, 2m stick.
Part 1: Rolling Dice
a. Theory.
Examine your set of four dice. Roll all 4 at the same time and add them to get a total
for the roll. If you did this many times, the totals wouldn’t always be the same, but you
could find an average value for all your rolls. Your job in this part is to work with your
group to come up with a theoretical value for the average of many rolls. Don’t just
guess; try to have some reasoning behind your theory. If the group can’t agree on a
single theory, it’s ok to have two or more competing theories.
b. Experiment.
Now you’re going to do an experiment to see whether it supports or contradicts your theory.
Here’s the procedure:
Roll the set of dice together and record the total in a data table. Do this a total of
5 times. Record each total and calculate the average value.
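If you’re curious what this experiment looks like on a computer, here is a small simulation sketch (not part of the lab procedure; the dice sizes match the equipment list):

```python
import random

def roll_set(rng):
    """Total of one roll of the 4-, 6-, 12-, and 20-sided dice."""
    return sum(rng.randint(1, sides) for sides in (4, 6, 12, 20))

def run_experiment(rng, n_rolls=5):
    """Average total over n_rolls rolls, like one run of the lab procedure."""
    return sum(roll_set(rng) for _ in range(n_rolls)) / n_rolls

rng = random.Random()  # the physical dice are our "random number generator"
print(run_experiment(rng))  # one experimental average; yours will differ
```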
c. Compare theory and experiment.
Was your experimental average equal to your theoretical average value?
What is the difference between the two values?
What is the percent difference between the two values? To find this, divide the
difference by the experimental value and express the answer as a percent.
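As a quick arithmetic check, here is that calculation with made-up numbers (24.0 experimental, 23.0 theoretical; substitute your own):

```python
def percent_difference(experimental, theoretical):
    """Difference divided by the experimental value, expressed as a percent."""
    return abs(experimental - theoretical) / experimental * 100

# hypothetical values for illustration only
print(percent_difference(24.0, 23.0))  # about 4.2%
```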
If your experimental average didn’t match your theoretical average, does that mean
that there was something wrong with your theory? Why or why not?
People sometimes say that their experimental results were off because of something
they call ‘human error’. Do you think ‘human error’ caused your experiment to give
different results than the theory predicted?
When a theory agrees with what you observe in an experiment, we say that it is
consistent with your observation. If it doesn’t agree, the theory is inconsistent with
observation.
d. Uncertainty.
Sometimes an experimental result is “pretty close” to what a theory predicts, and
leaves you unsure about whether or not to reject the theory. The question you might
ask is: “how far off do they have to be to make me confident that the theory is wrong,
or inconsistent with my observation?”
The answer to that question depends on how precise your experiment is. If you repeat
it many times, will you always get the same value, or will there be some fluctuation?
To find out how much fluctuation there is in our dice experiment, work with your group
to repeat the experiment 14 more times. (Remember, each experiment is the average
of 5 rolls.) Record your 14 new average values in a data table, along with the original
one, for a total of 15 averages.
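To preview how much fluctuation to expect, here is a simulation sketch of 15 such experiments (again, not a substitute for rolling the real dice):

```python
import random

def average_of_rolls(rng, n_rolls=5, dice=(4, 6, 12, 20)):
    """One experiment: the average total of n_rolls rolls of the dice set."""
    totals = [sum(rng.randint(1, s) for s in dice) for _ in range(n_rolls)]
    return sum(totals) / n_rolls

rng = random.Random()
averages = [average_of_rolls(rng) for _ in range(15)]
print(sorted(averages))  # the spread of these values is your fluctuation
```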
Based on these results, come up with what you think is a “reasonable spread of
uncertainty” for this experiment. Answer the following questions:
• When we perform this experiment many times we are pretty confident that the results
will come out somewhere between ____ and ____. This interval is called your
confidence range.
• If we did the experiment one more time, our best guess for the result would be _____?
Based on your answers to the two questions above, restate your experimental results in
the following form:
“Our observation is that the average value of 5 rolls of the dice is _________ +/- _________.”
The “+/-” part at the end is called your experimental uncertainty. All experiments
have an uncertainty because they never produce the exact same result every time.
e. Testing your theory.
Now you can use the ideas of experimental uncertainty and confidence range to decide
whether or not your theory should be rejected. You have a range of values that you are
confident about based on your experimental observations. Testing the theory is now
easy: if the theoretical value falls outside your range, you’re confident that something is
wrong with it. We usually say “the theory is inconsistent with the observed results.” If the
theoretical value does fall within your confidence range, then it is consistent with
observation. It survives for now until someone comes along and disproves it later.
Does your theory survive this confidence range test? Does the value you came up
with land inside your experimental confidence range?
f. Uncertainty and multiple trials.
Here we’re going to look at what happens to your uncertainty as you increase the
number of trials in the experiment. Rather than having each group roll the dice more
times, we’ll combine the results from all the groups and see what they look like. Go to
the whiteboard and add your results to the group chart. (Ask your instructor for help if
this isn’t clear.)
Chart notes: this is a typical result. Each lab group used a different color pen. The
numbers in the boxes are how many times that group got that result. We project
gridlines on the whiteboard to make the graph easier to draw.
Group Discussion
Look at the curve and discuss its shape.
Looking at this curve, or histogram, we can answer the following questions:
What is our new confidence interval for the ‘whole class’ data? That is, “most” of our
results fall within the range _________ to __________. How does this compare to
the intervals you got in your groups? (ans: it’s smaller, showing that more trials reduce
your uncertainty and give a more precise result.)
From this graph, what would we say is our “best guess” at the ‘right answer’? How
does that compare with what you got in your groups? (ans: mean is about the same,
showing that multiple trials don’t really change the best guess.)
Based on the chart on the board, and our confidence range, which of these theoretical
values would ‘survive’ or be consistent with the experiment, and which would have to
be rejected?
15
17
19
12
10
23
26
Based on the more precise results we get from using the data from the whole class, fill
in the following statement:
Our theoretical value of _________ for the average total of the 4 dice is
_________________ (consistent or inconsistent) with the class experimental results
of __________ +/- __________.
g. Standard Deviation (can be part of group discussion or not)
When we decided on our confidence range in the previous section, our choice was
somewhat arbitrary: we just eyeballed a range we thought looked good. In real science,
we want a more formal way of choosing a confidence range than just eyeballing. For
example, what if two different people decide on different confidence ranges? Maybe
one excludes the theory and one doesn’t? Who’s right?
To solve that problem, we use a mathematical method of calculating a confidence
range instead of eyeballing it. The mathematical tool we use is called the standard
deviation. Your instructor can show you the formula used to calculate the standard
deviation by hand, or you can have a computer calculate it for you. One way is to use
the stdev() function in Excel.
Calculate the standard deviation of your group’s 15 average values using whatever
method you or your instructor chooses.
The standard deviation is a measure of your uncertainty. You can use the SD to get a
confidence range. One way to do it is to add and subtract one standard deviation from
your best-guess value. Make a confidence range using this method and compare it to
the range you originally eyeballed from your group’s 15 trials.
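If your class uses Python instead of Excel, here is a sketch of the same calculation with the statistics module (the 15 averages shown are made up for illustration; substitute your group’s data):

```python
import statistics

# hypothetical 15 experiment averages; use your group's data instead
averages = [22.4, 23.8, 21.6, 24.2, 23.0, 22.0, 25.4,
            23.6, 22.8, 24.0, 21.2, 23.2, 24.6, 22.6, 23.4]

mean = statistics.mean(averages)
sd = statistics.stdev(averages)  # sample standard deviation, like Excel's stdev()
print(f"result: {mean:.1f} +/- {sd:.1f}")
print(f"confidence range: {mean - sd:.1f} to {mean + sd:.1f}")
```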
Note to instructors: the method above is greatly simplified in order to give beginning
students an introduction to the idea of standard deviation. The statistically correct way
to make a 95% confidence interval is to use two standard deviations of the mean,
which would be 2*SD/sqrt(N). This method can be taught in more advanced classes,
but might be too much information for beginning students. If students are doing 5-10
trials, then one standard deviation comes out about the same size as 2 SD of the
mean, so it makes a decent confidence interval.
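The “about the same size” claim in the note can be checked numerically; the ratio of the two interval half-widths (1 SD versus 2*SD/sqrt(N)) depends only on N:

```python
import math

# Half-width of the simplified interval is 1 SD; the 95% interval of the
# mean has half-width 2*SD/sqrt(N). Their ratio, 2/sqrt(N), is near 1
# for small N, so the shortcut is reasonable for 5-10 trials.
for n in (5, 10, 100):
    print(n, round(2 / math.sqrt(n), 2))
```

At N = 5 the ratio is about 0.89, so the two intervals are comparable; by N = 100 one SD is far wider than the proper interval, which is why the shortcut is only for small trial counts.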
Part 2: Dropping a Ball
a. Theory: work with your group to predict the time t_thy a ball is in the air when dropped
from a 2-meter height. Use kinematic equations or some other method, rather than just
guessing.
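One way to build the prediction is the kinematic relation for free fall from rest, h = (1/2) g t^2. A sketch, assuming g = 9.8 m/s^2 and no air resistance:

```python
import math

def fall_time(height_m, g=9.8):
    """Time to fall height_m from rest: solve h = (1/2) g t^2 for t."""
    return math.sqrt(2 * height_m / g)

print(fall_time(2.0))  # roughly 0.64 s for a 2 m drop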
b. Experiment.
Drop a ball from a 2-meter height and carefully measure the time the ball is in the air.
c. Compare theory with experiment:
Did the values from parts a and b come out exactly the same? What is the difference
between the two? What is the percent difference?
d. Uncertainty.
Find your uncertainty by repeating the experiment multiple times. Do the experiment a
total of 10 times.
Find the mean and the standard deviation of your 10 trials. Use the standard deviation
as a measure of your uncertainty, and report your results below, in this form:
mean value +/- standard deviation (don’t forget units)
Here’s an example:
In our experiment, the time for a ball to fall 2 meters was: 0.7 +/- 0.1 seconds.
In the dice experiment, there were no units, but in this case you measured time, so you
need to include the units in your result.
e. Testing your theory.
Does your theoretical time from part a land inside the confidence range you
determined above?
Is your theory consistent with the experimental evidence?
Does the theory need to be modified or abandoned?