Dealing with uncertainties and errors

You, the experimenter, will measure some physical quantities numerically, using some piece of equipment to do so. Any measurement you make will necessarily differ somewhat from the true value of the physical quantity.
First, we need to define some terms:
 Accuracy: This is the extent to which your measurement is in fact close to the true value. If you
do not a priori know the true value, then it may be difficult to determine to what extent your
measurement is accurate.
 Precision: This is the extent to which you can specify the exactness of a measurement. For
example, to report that the time is about 3 PM is less precise than to say the time is 3:02:45.
Being more precise does not always imply being more accurate.
 Statistical (Random) Uncertainty: This is a key idea. The statistical uncertainty of a measurement is the uncertainty that reflects the fact that, by necessity, you measure a slightly different quantity each time you make a measurement. The tendency for a measured value to "jump around" from measurement to measurement is the statistical error.
 Systematic Uncertainty: This is uncertainty and error in your measurement caused by anything that is not statistical uncertainty. This includes instrumental effects, failing to take things into account (will the change in barometric air pressure affect this measurement?) and gross (stupid) errors.
Any measured quantity or calculated constant requires 3 items in order to specify it completely:
(1) a numerical value
(2) a unit
(3) an indication of the reliability of the ascribed value.
Item (3) means the uncertainty or systematic error.
Note that the two words 'uncertainty' and 'error' are often used interchangeably. Strictly, 'uncertainty' is best used to represent a range which contains the true value, perhaps with a stated statistical confidence, whereas 'error' is best used to cover measurement deficiencies such as systematic differences.
An example of a fully specified statement might be: speed = 13.4 ± 0.2 km s^-1
1. Absolute, Relative and Percentage Uncertainties/Errors
Statement: time for object to fall = 14.32 ± 0.04 s
The 0.04 is referred to as the absolute (time) uncertainty in the measurement. The relative uncertainty is
simply calculated as
0.04 / 14.32 = 0.0028
The percentage uncertainty = relative uncertainty x 100%
= 0.28%
So we can re-express the statement as:
time to fall = 14.32 s ± 0.3%
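For those who like to check such conversions by computer, here is a minimal Python sketch of the same calculation (the variable names are chosen only for illustration):

```python
# Convert an absolute uncertainty into relative and percentage form.
value = 14.32      # measured time to fall, in seconds
abs_unc = 0.04     # absolute uncertainty, in seconds

rel_unc = abs_unc / value     # relative uncertainty (dimensionless)
pct_unc = rel_unc * 100       # percentage uncertainty

print(f"relative uncertainty   = {rel_unc:.4f}")    # ~0.0028
print(f"percentage uncertainty = {pct_unc:.2f}%")   # ~0.28%
```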
1.a. Quoting Uncertainties
(a) The uncertainty must be quoted to the same number of decimal digits as the value. e.g. 14.32 ±
0.04 and not 14.32 ± 0.041
(b) Knowing the uncertainty in a quantity immediately reveals the number of significant digits its value
should contain
e.g. 9.77 ± 0.01 m but not 9.7742 ± 0.01 m
since the uncertainty of ± 0.01m clearly indicates that the third significant figure is uncertain and thus
there is no point in writing down the 4th, 5th, etc.
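If you want to apply rule (b) automatically, the following Python sketch shows one possible way; quote_measurement is a hypothetical helper of my own, and it assumes the uncertainty has already been rounded to one significant figure:

```python
import math

def quote_measurement(value, uncertainty):
    """Round value to the same number of decimal places as the uncertainty."""
    # An uncertainty of 0.01 implies 2 decimal places, 0.2 implies 1, etc.
    decimals = max(0, -int(math.floor(math.log10(abs(uncertainty)))))
    return f"{value:.{decimals}f} ± {uncertainty:.{decimals}f}"

print(quote_measurement(9.7742, 0.01))  # 9.77 ± 0.01
print(quote_measurement(14.32, 0.04))   # 14.32 ± 0.04
```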
2. Types of Uncertainties and Errors
Uncertainties are mainly of two types, systematic and random. There does exist a third type, blunders, but
they must be discarded when correctly recognised.
2.a. Systematic errors
Often these are the most difficult to deal with since you may not even be aware of their existence. They
cause a series of measurements to be always too high or too low rather than randomly scattered about
the true value; e.g. a shrunken ruler will always give length measurements which are too high.
Systematic errors must be carefully watched for and if possible eliminated or turned into random
uncertainties; e.g. in measuring a temperature difference, exchange the two thermometers and take a
second reading thereby eliminating zero errors.
2.b. Random Uncertainties
These may result either from random variations (often referred to as fluctuations) in the measured
quantity itself (e.g. the emission of alpha, beta or gamma rays from a radioactive source) or from
variations in the measuring instruments and/or the experimenter. This simply means that repeated
measurements are unlikely to give precisely the same value each time. Rather a spread of values will be
obtained and it is from this spread or scatter that the uncertainty in the measured quantity is best
determined. You can turn this argument around and state that the best way of accurately determining the
uncertainty in any measured quantity is by repeating the measurement many times.
The next two sections, however, will consider situations in which a detailed treatment of measurement
scatter is not relevant.
2.c. Non-random Measurements
If you make a series of measurements and they all have the same value (see previous section), this
indicates that the instrument used for the measurement was possibly not sufficiently sensitive for the use
intended, and you might then consider whether your choice of instrument is appropriate.
This does not mean, however, that there is no uncertainty in the measurements.
Example:
Using a vernier, the following measurements were made of the diameter at different points along a
brass rod: 1.42 cm, 1.42 cm, 1.42 cm, 1.42 cm, 1.42 cm. Clearly any variations in diameter are
too small for the vernier to detect. This does not mean that there is no uncertainty in the result,
but rather that it is not greater than half the smallest division that the vernier can measure, viz. 1/2
x 0.01 cm = 0.005 cm.
If the diameter had been greater than 1.425 cm the vernier would have read it as 1.43 cm; if it had
been less than 1.415 cm the reading would have been 1.41 cm.
The result of this measurement is, therefore, written as 1.42 ± 0.005 cm
2.d. Limited Sampling
On many occasions you will not be in a position to repeat a measurement many times. For example,
recording the temperature of a cooling body; you measure single temperatures at specified times. Here
you must use your own judgement in determining how accurately the temperatures were measured.
Under ideal conditions the uncertainty is likely to be ± 1/2 of the smallest division of the instrument used.
Note, however, that this is likely to be the minimum uncertainty and that systematic or instrument errors may
be somewhat higher.
It is good practice to examine the manufacturer's specifications for the instruments you use.
Time restrictions will often limit you to only a few, say 3 or 4, repeated measurements.
Example:
Four length measurements of a rod were recorded as: 0.041, 0.044, 0.044 and 0.045 m. There
are two alternative approaches to calculating the uncertainty.
(a) Take the uncertainty as half the difference between the highest and lowest readings
= ± (0.045 - 0.041)/2 = ± 0.002 m.
(b) Calculate the average deviation for the four readings
For the above example this is ± 0.001 m which is half the value calculated in (a). The
disadvantage of this method is that calculating the spread in small samples using average
deviations (or standard deviations) is extremely dubious.
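Both estimates are easy to compute; a short Python sketch for the four readings above (approach (a) is the half-range, approach (b) the average deviation) might look like this:

```python
readings = [0.041, 0.044, 0.044, 0.045]   # rod lengths in metres

# (a) half the difference between the highest and lowest readings
half_range = (max(readings) - min(readings)) / 2        # 0.002 m

# (b) average deviation of the readings from their mean
mean = sum(readings) / len(readings)
avg_dev = sum(abs(r - mean) for r in readings) / len(readings)   # ~0.001 m

print(f"half range        = ±{half_range:.3f} m")
print(f"average deviation = ±{avg_dev:.3f} m")
```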
3. Manipulation of Independent Uncertainties
You will often have to calculate a quantity that depends on a number of measurements you have made,
each having some uncertainty. You need, therefore, to understand how the individual uncertainties
combine. The section below describes a very simple way of doing this, but one which will overestimate the
uncertainties.
3.1 The Less Precise Way
Addition and subtraction
Let L = x + y - z, with dx, dy and dz the absolute uncertainties in x, y and z respectively.
dL = dx + dy + dz
Note that the change in sign of dz is justified by the following argument:
The maximum value of L is given by
(L+dL) = (x+dx) + (y+dy) - (z-dz)
= (x+y-z) + (dx+dy+dz)
and the minimum value of L by
(L-dL) = (x-dx) + (y-dy) - (z+dz)
= (x+y-z) - (dx+dy+dz)
i.e. dL = dx + dy + dz
Absolute uncertainty in a sum or difference = sum of the absolute uncertainties in the individual terms.
Example: Suppose a = (2.1 ± 0.1) m, b = (3.4 ± 0.4) m and c = (0.82 ± 0.01) m
If S = a + b - c, what is dS?
dS = da + db + dc = 0.1 + 0.4 + 0.01 = 0.51 ≈ 0.5
thus S = (4.7 ± 0.5) m
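The same bookkeeping in a short Python sketch (worst-case rule for sums and differences, using the values above):

```python
a, da = 2.1, 0.1      # all values and uncertainties in metres
b, db = 3.4, 0.4
c, dc = 0.82, 0.01

S = a + b - c
dS = da + db + dc     # absolute uncertainties simply add

print(f"S = ({S:.1f} ± {dS:.1f}) m")   # S = (4.7 ± 0.5) m
```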
Products and quotients
Let L = xy/z
log L = log x + log y - log z
dL/L = dx/x + dy/y + dz/z
Relative (or percentage) uncertainty of a product and/or quotient = sum of the relative (or
percentage) uncertainties in the individual terms.
Example:
Suppose a = (2.1 ± 0.1) m, b = (3.4 ± 0.4) m and c = (0.82 ± 0.01) m
If S = a x b/c, what is dS?
dS/S = da/a + db/b + dc/c = 0.1/2.1 + 0.4/3.4 + 0.01/0.82 = 0.18 = 18%
thus S = 8.7 ± 18% or (8.7 ± 1.5) m
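A corresponding sketch for the product/quotient rule, with the same a, b and c:

```python
a, da = 2.1, 0.1
b, db = 3.4, 0.4
c, dc = 0.82, 0.01

S = a * b / c
rel_dS = da/a + db/b + dc/c    # relative uncertainties add
dS = rel_dS * S

print(f"S = {S:.1f} ± {100*rel_dS:.0f}%, i.e. ({S:.1f} ± {dS:.1f}) m")
# S = 8.7 ± 18%, i.e. (8.7 ± 1.5) m
```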
Powers
Let L = x^a y^b
log L = a log x + b log y
dL/L = a dx/x + b dy/y
Relative (or percentage) uncertainty = sum of the relative (or percentage) uncertainties in each
term multiplied by the corresponding power index.
NB If a parameter is raised to a high power then its contribution to the total error is increased by a factor
equal to the value of the power. It is important that the parameter is measured with increased accuracy in
order to keep the total error relatively low.
Example:
Suppose S = x^2 y^3 where x = 4.0 ± 0.2 and y = 2.2 ± 0.3. What is dS?
dS/S = 2(dx/x) + 3(dy/y) = 2(0.2/4.0) + 3(0.3/2.2) = 0.10 + 0.41 = 0.51 ≈ 50%
thus S = 170 ± 50%
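And a sketch of the power rule for this example:

```python
x, dx = 4.0, 0.2
y, dy = 2.2, 0.3

S = x**2 * y**3
rel_dS = 2 * (dx/x) + 3 * (dy/y)   # each relative uncertainty is scaled by its power

print(f"S = {S:.0f} ± {100*rel_dS:.0f}%")   # S = 170 ± 51%, i.e. about 50%
```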
NB The above technique for the manipulation of uncertainties leads to values which are generally too
large.
3.2 The Precise Way
By simply adding the absolute uncertainties it is assumed that the uncertainties dx, dy and dz add in
the worst possible way. If dx, dy and dz are independent uncertainties then this will rarely happen. A more
suitable way of combining the uncertainties is as follows:
(dL)^2 = (dx)^2 + (dy)^2 + (dz)^2
for products and quotients:
(dL/L)^2 = (dx/x)^2 + (dy/y)^2 + (dz/z)^2
and for powers:
(dL/L)^2 = a^2 (dx/x)^2 + b^2 (dy/y)^2
Uncertainties calculated in this way are usually referred to as the most probable uncertainty.
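A short sketch comparing the worst-case sum of Section 3.1 with the quadrature ('most probable') combination, for the earlier S = a + b - c example:

```python
import math

da, db, dc = 0.1, 0.4, 0.01    # absolute uncertainties from the earlier example

worst_case = da + db + dc                         # Section 3.1: 0.51 m
quadrature = math.sqrt(da**2 + db**2 + dc**2)     # Section 3.2: ~0.41 m

print(f"worst case : ±{worst_case:.2f} m")
print(f"quadrature : ±{quadrature:.2f} m")
```

As expected, the quadrature value is noticeably smaller than the worst-case sum.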
4. Error bars on graphs
You must account for the uncertainties in your measured points by representing these uncertainties as
error bars on your graphs. In nearly every experiment, you are varying some quantity, X, and then
measuring the impact on some quantity, Y. Measure the uncertainties in X and Y. Then plot Y vs. X with
error bars on both Y and X to show how uncertain each measurement was. This is what a physicist
means by error bars. A graph without error bars is just plain wrong. Sometimes the size of the bar is very
small - that's ok.
Once you have plotted the points, do a fit to some function (usually a line) that describes the physics you
expect, or just see what line you get. Remember that if you ask for a polynomial, you will get one. The
trendline is not 'it'; it is not the point of the exercise.
Once you have this you need to address the following question: is the best fit a good fit? In other words,
does the model fit the data to within the uncertainties prescribed by the error bars? This is the critical
question for the experimental physicist. Your goal is not to measure a number. Your goal is not
even to measure the "right" number. Really, your goal is to determine if the physical model is
supported by the data.
In my opinion, if you take this notion to heart, you will understand the soul of experimental physics. To do
this you must numerically answer the question: "Does the data fit the model to within the uncertainties on
the measurements?"
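One way of answering it numerically is sketched below using numpy and matplotlib; the data values are made up purely for illustration, and the reduced chi-squared is used as a simple figure of merit (a value of order 1 suggests the straight-line model fits the data to within the error bars):

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative (made-up) data: X is varied by the experimenter, Y is measured.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.3])
xerr = np.full_like(x, 0.1)
yerr = np.array([0.3, 0.3, 0.4, 0.4, 0.5])

# Weighted straight-line fit (np.polyfit expects weights of 1/sigma).
slope, intercept = np.polyfit(x, y, 1, w=1/yerr)
model = slope * x + intercept

# Is the best fit a good fit?  Reduced chi-squared with 2 fitted parameters.
chi2 = np.sum(((y - model) / yerr) ** 2)
chi2_red = chi2 / (len(x) - 2)
print(f"y = {slope:.2f} x + {intercept:.2f}, reduced chi-squared = {chi2_red:.2f}")

# Plot the data with error bars on both axes, together with the fitted line.
plt.errorbar(x, y, xerr=xerr, yerr=yerr, fmt='o', label='data')
plt.plot(x, model, label='linear fit')
plt.xlabel('X')
plt.ylabel('Y')
plt.legend()
plt.show()
```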
Sometimes, you may not know what the data are supposed to do or what kind of line the physics would suggest.
In that case you are looking for a relationship between two quantities, and to work out what that is you
have to find something to plot that gives you a straight line, e.g. y against x^2. If you get a straight line, that
is the holy grail of physics - you can say that y is proportional to x^2 - you have discovered a law that
governs those two quantities. Your next task is to work out WHY it should be x^2 and not x^3 or 1/x.