AS Physics Coursework: Quality of Measurement
Handbook
In this piece of coursework you have to take careful measurements, with particular attention to the
quality of the measurement and any inferences you can make from it.
Types of activity include:
• a careful measurement of a physical quantity;
• a careful quantitative study of the relationship between two or more variables, where there
are some indications from theory of what to expect;
• a careful calibration of a sensor or instrument;
• a careful study of one or more of the properties of a sensor or instrument;
• a comparison of methods of measuring the same thing.
We will provide you with a list of possible activities from which you can choose.
You are assessed on your ability to do the following:
• recognise the qualities and limitations of measuring instruments, particularly resolution, sensitivity,
calibration, response time, stability and zero error;
• identify and estimate the most important source of uncertainty in a measurement and seek ways to
reduce it;
• consider the possibility of systematic errors and seek to estimate and remove them, including
considering calibration;
• make effective plots to display relationships between measured quantities, with appropriate
indication of uncertainty;
• use simple plots of the distribution of measured values to estimate the median (or mean) value and
the spread (which may be estimated from the range of values), and to identify and account for outlying
values.
This guide will take you through each of these ideas, clarifying what is meant and offering suggestions
as to how to proceed:
Resolution, sensitivity, calibration, response time, stability and zero error
These ideas were discussed during the Sensing Team Task that you completed before and during the
Christmas holidays.
Resolution:
The resolution of an instrument is the smallest change of the input that can be detected at the output.
The output of a digital instrument is a numerical display. The resolution is the smallest change of input
the instrument can display. For example, a digital voltmeter giving a three-digit read-out such as 1.35
V has a resolution of 0.01 V, since the smallest change in p.d. it can display is 0.01 V.
For an analogue instrument, the output is the position of a pointer on a scale. Its resolution is the
smallest change in input that can be detected as a movement of the pointer. The resolution of an
analogue instrument can be improved using a magnifying lens to observe movement of the pointer.
Sensitivity:
The sensitivity of a measuring instrument is the change of its output divided by the corresponding
change in input.
A temperature sensor whose output changes by 100 mV for a change of 2 K in its input has a
sensitivity of 50 mV per kelvin.
A very sensitive instrument gives a large change of output for a given change of input.
In a linear instrument, the change of output is directly proportional to the change of the input. Thus a
graph of output against input would be a straight line through the origin. The gradient of the line is
equal to the sensitivity, which is constant. Thus a linear instrument has a sensitivity that is
independent of the input.
If the change of output is not proportional to the change of the input, the graph would be a curve. In
this case, the sensitivity would vary with input.
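If you have recorded a set of calibration readings, a short Python sketch like the one below (all the numbers are invented, purely for illustration) shows one way to estimate the sensitivity of a roughly linear sensor as the gradient of a graph of output against input:

    # Sketch: sensitivity of a roughly linear temperature sensor from calibration data.
    # All readings below are invented for illustration.
    import numpy as np

    temperature_K = np.array([290.0, 295.0, 300.0, 305.0, 310.0])   # input
    output_mV     = np.array([150.0, 400.0, 648.0, 902.0, 1151.0])  # output

    # For a linear instrument the gradient of output against input is the sensitivity.
    gradient, intercept = np.polyfit(temperature_K, output_mV, 1)
    print(f"Sensitivity is about {gradient:.1f} mV per kelvin")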
Calibration:
Calibration is the process of ensuring the relation between the input and the output of an instrument is
accurately known. This is done by measuring known quantities. For example, an electronic top pan
balance is calibrated by using precisely known masses. If the readings differ from what they should be,
then the instrument needs to be recalibrated.
Important terms used in the calibration of an instrument include:
Calibration graph: a graph to show how the reading should change with the quantity to be measured.
Linearity: where the output increases in equal steps when the input increases in equal steps. If the
output is zero when the input is zero, the output is then directly proportional to the input, and its
calibration graph will be a straight line through the origin.
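As an illustration, the short Python sketch below (the masses and readings are invented) plots a calibration graph for a top-pan balance checked against known masses, together with the ideal line the readings should follow:

    # Sketch: a calibration check for a top-pan balance using known masses.
    # The known masses and the balance readings are invented for illustration.
    import matplotlib.pyplot as plt

    known_mass_g   = [10.0, 20.0, 50.0, 100.0, 200.0]
    balance_read_g = [10.1, 20.1, 50.3, 100.5, 201.0]

    plt.plot(known_mass_g, balance_read_g, 'x', label='readings')
    plt.plot(known_mass_g, known_mass_g, '-', label='ideal (reading = known mass)')
    plt.xlabel('Known mass / g')
    plt.ylabel('Balance reading / g')
    plt.legend()
    plt.show()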
Response time:
Response time has to do with how fast a system changes from one state to a different state.
Response time is the time taken by a system to reach a steady new state after a signal initiates the
change.
In an electronic measuring instrument, the response time is the time taken by the instrument to give a
reading after being supplied with a change in input. If the response time is too long, the instrument
would not measure changing inputs reliably. If the response time is too short, the instrument might
respond to unwanted changes in input.
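If you log readings against time during a sudden change of input, you can estimate the response time from the data. The Python sketch below is one possible approach (the times and readings are invented, and the '95% of the way to the new steady value' criterion is just one common choice, not the only one):

    # Sketch: estimating response time from a logged step change (invented values).
    times_s  = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
    readings = [20.0, 32.0, 41.0, 47.0, 51.0, 53.5, 54.5, 54.9, 55.0]

    start, final = readings[0], readings[-1]
    target = start + 0.95 * (final - start)   # 95% of the step completed

    for t, r in zip(times_s, readings):
        if r >= target:
            print(f"Response time is about {t} s (first reading within 95% of the new value)")
            break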
Stability:
If the input of a measuring system does not change, its output should not change either. The stability
of the system is a measure of how well its output stays constant over time when the input is held constant.
Zero error:
When a measuring system has a zero input, the output should also be zero. If it is not, then the
system has a zero error which must be taken into account during calibration.
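A zero error is straightforward to allow for once you have measured it. A minimal Python sketch (with invented values) is:

    # Sketch: allowing for a zero error (all values invented for illustration).
    zero_reading = 0.03                    # what the meter shows when the true input is zero
    raw_readings = [1.27, 0.84, 2.10]      # readings taken during the experiment

    corrected = [r - zero_reading for r in raw_readings]
    print(corrected)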
Your coursework should include a discussion of these factors in relation to your experiment. You
should calculate the resolution and sensitivity, measure the response time and mention any zero errors
that occur, and discuss what these mean for practical applications of your experiment. For instance,
does the response time of a thermistor limit its use in certain situations, or does the sensitivity of an
LDR system make it unsuitable where small changes in light level need to be detected?
Identify and estimate uncertainties, and discuss ways to reduce them
The uncertainty of an experimental result is the range of values within which the true value may
reasonably be believed to lie.
First remove from the data any values reasonably suspected of being in serious error, for example
because of a mistake in recording them, or because of an unusual external influence, such as a
sudden change of supply voltage. Such values should not be included in any later averaging of
results or in attempts to fit a line or curve to relationships between measurements.
Then consider the resolution of the instrument involved – say ruler and stopwatch. The uncertainty of
a single measurement cannot be better than the resolution of the instrument. But it may be worse.
Repeated measurements under supposedly the same conditions may show small and perhaps
random variations.
In this case, first inspect the spread of values obtained, for example using a plot of values along a line
(a dot plot). Look to see if the values appear randomly scattered. A 'safe' but pessimistic estimate of
the variation is just the range of values obtained.
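One quick way to do this 'plot and look' on a computer is sketched below in Python (the readings are invented for illustration):

    # Sketch: a quick 'plot and look' at repeated readings (invented values).
    import matplotlib.pyplot as plt

    readings = [2.31, 2.35, 2.33, 2.40, 2.32, 2.36, 2.34]

    plt.plot(readings, [0] * len(readings), 'x')   # dot plot: all values along one line
    plt.yticks([])
    plt.xlabel('Measured value')
    plt.show()

    # A safe but pessimistic estimate of the variation is the range of the values.
    print("range =", round(max(readings) - min(readings), 2))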
Having decided the uncertainty in each measurement, the most important next step is to identify the
largest source of uncertainty. This will (a) tell you where to invest effort to reduce the uncertainty of the
result and (b) give you a lower limit for the uncertainty of the result.
A simple way to see the effect of the uncertainty in each measured quantity on the final result is to
recalculate the final result with that quantity increased or decreased by its maximum possible variation
about its central value, taking one quantity at a time.
Example: calculating the power dissipated in a resistor using P = I²R.
You measure the resistance of the resistor as 50 ohms, but your multimeter only reads to the nearest
ohm, so the resistance lies within ±0.5 ohm of 50 ohms.
You measure the current as 0.5 A, but the meter has a resolution of 0.1 A, so the current lies within
±0.05 A of 0.5 A.
The central value of the power is 0.5² × 50 = 12.5 W.
Using the maximum and minimum resistance values gives 0.5² × 50.5 = 12.625 W and
0.5² × 49.5 = 12.375 W, i.e. 12.5 ± 0.125 W.
Using the maximum and minimum current values gives 0.55² × 50 = 15.125 W and
0.45² × 50 = 10.125 W, i.e. roughly 12.5 ± 3 W.
In this case, the largest uncertainty in the final answer comes from the current measurement, so this is
the one you would use in your conclusions.
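If you want a computer to do the bookkeeping, a short Python sketch of the same max/min calculation (using the values from the example above) might look like this:

    # Sketch of the max/min method for P = I²R, using the values from the example above.
    I, dI = 0.5, 0.05    # current (A), read to the nearest 0.1 A, so taken as ±0.05 A
    R, dR = 50.0, 0.5    # resistance (ohm), read to the nearest ohm, so taken as ±0.5 ohm

    P = I**2 * R
    print(f"Central value: P = {P} W")

    # Vary one quantity at a time to see which uncertainty dominates the result.
    print(f"Varying R: {I**2 * (R - dR)} W to {I**2 * (R + dR)} W")
    print(f"Varying I: {(I - dI)**2 * R} W to {(I + dI)**2 * R} W")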
An alternative treatment:
Taking the full resolution of each meter as a simple, pessimistic estimate of its uncertainty: an
uncertainty of 1 ohm in a measurement of 50 ohms is a percentage uncertainty of (1/50) × 100 = 2%,
while an uncertainty of 0.1 A in 0.5 A is (0.1/0.5) × 100 = 20%. The current is clearly the larger source
of error, so this is the one you would pursue.
If you have several sources of error, it is best to be on the safe side and increase your final uncertainty
to take account of this, e.g. in the first example you might quote the answer as 12.5 ± 3.5 W.
Ways to reduce errors must also be discussed, such as using more precise instruments or adapting the
technique to reduce random fluctuations.
Identify and seek to reduce systematic errors
A systematic error is any effect that biases a measurement away from the true value.
All measurements are prone to systematic error: any biasing effect in the environment, in the method of
observation or in the instruments used introduces error into an experiment. For example, the measured
length of a pendulum will be in error if slight movement of the support, which effectively lengthens the
string, is not allowed for. A zero error is another type of systematic error.
Consider trying to use a thermistor to measure temperature. While calibrating your equipment you
leave 10 seconds between reading the thermometer and recording the voltage output. In this time the
temperature may have fallen, leading to a systematic error that needs to be taken into account and,
ideally, reduced.
Make effective plots to display relationships, with an indication of uncertainty
If you want to see how two quantities are related, then you can plot a scattergram or a line graph. If
you do this by hand, and the data seem to suggest it, you can then fit a line to the data by eye.
Using computer software, it is easy to find a line of best fit or a regression line, without even having to
plot a scattergram. A spreadsheet for example can calculate the slope and intercept of a regression
line directly from a set of data. But this needs to be done with care.
It is important first to look at the values plotted on a scattergram to see whether it is appropriate to
attempt a straight line fit. For example, the data may show curvature. It may have exceptional values
that can produce misleading fits. Or it may be that the variability in the data is of importance and such
patterns are lost if you move directly from the data to finding a fitted line.
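If you do use software, a short Python sketch such as the one below (with invented data) follows this advice: plot the scattergram and look at it before accepting a fitted straight line:

    # Sketch: always look at the scattergram before trusting a fitted line.
    # x and y are invented data for illustration.
    import numpy as np
    import matplotlib.pyplot as plt

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2])

    plt.scatter(x, y)                       # inspect for curvature and outliers first
    slope, intercept = np.polyfit(x, y, 1)  # then fit a straight line if it looks sensible
    plt.plot(x, slope * x + intercept)
    plt.xlabel('x (independent variable)')
    plt.ylabel('y (dependent variable)')
    plt.show()

    print(f"gradient = {slope:.2f}, intercept = {intercept:.2f}")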
Fitting an appropriate line
The scattergrams shown below illustrate some examples of when a straight line fit is appropriate and
when it is not.
[Scattergram: points lying almost exactly on a straight line]
The points lie very close to a straight line. Here, there is a simple linear relationship between the
variables, and there is little variation about the line due to other factors. Small deviations from the line
may well be due to random variations in measurement.
[Scattergram: points lying almost exactly on a smooth curve]
These variables also seem to be very closely related, but in this case, the points lie on a curved line. It
may be possible to find a simple algebraic expression for this curve. Again, the small variation from
the line may be due to random variations.
[Scattergram: points scattered loosely around a straight line]
These points suggest a linear relationship, though not as strong as before. Instead of lying on a line
they cluster around it, suggesting some relationship, with variation or error due to other variables.
[Scattergram: points scattered loosely around a curve]
Here the points also cluster around a line, but in this case the line is curved. As before this suggests
that there may be a relationship between the variables, but that there is variation due to the effects of
other factors.
[Scattergram: points scattered with no obvious relationship]
The points lie scattered all over the plot, and it does not appear that there is any relationship between
the variables. One possibility is the obvious one – that there does not appear to be a relationship
because, in fact, there is none. Another possibility though is that there is a relationship, but that the
effects of large variations or effects of other variables on the dependent variable (the y-axis values)
are greater than the effect of the independent variable (the x-axis values), which conceals the
relationship.
[Scattergram: points close to a straight line at low values but more scattered at high values]
Here there appears to be a linear relationship, though not a strong one, and a straight line has been
drawn. However, at lower values the points cluster much more closely around the fitted line than at
higher values. As a way of predicting values of x from values of y, it appears that the line might work
quite well at low values, but not at high values. You need to take care in fitting lines to this kind of data.
Think about the pattern you would see in the points if the two values at the top of the graph happened
not to be there. Or the two values towards the bottom right. You might be inclined to draw the line in
quite a different place. So, a few exceptional values, or outliers, can have an unjustifiably large effect.
[Scattergram: points fairly close to a straight line, but crowded together at one end of the plot]
In this case the points lie fairly close to a straight line, but they are more crowded together in the lower
part of the plot than in the upper part. The distribution of the values of each of the variables is not uniform.
Again, care needs to be taken. In extreme cases, the presence of one or two outliers may suggest a
relationship, when without them, the points would appear as a cluster with no clear relationship
between them.
You do not have to plot your graph using a computer. In fact, it is often better to plot a graph
by hand to see if there is a line that can be fitted. Excel can be used to get an equation for a
line of best fit but you should be aware of what type of relationship you are using, whether
linear or exponential etc.
Uncertainties
Having worked out uncertainties in your values you will be able to add error bars to your graph. These
can show errors in the x-axis and/or in the y-axis, so may look like little crosses rather than just lines.
You may wish to use the gradient of the line of best fit to work out relationships or the sensitivity.
Error bars can mean that there are several plausible lines of best fit. Show these on your graph and
use the range of their gradients to give you an uncertainty in the gradient, and hence in the overall
result.
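The Python sketch below (with invented data and, for simplicity, the same uncertainty in every y value) shows one rough way of doing this, using the steepest and shallowest lines that pass through the first and last error bars:

    # Sketch: error bars and an uncertainty in the gradient (invented data).
    import matplotlib.pyplot as plt

    x    = [1.0, 2.0, 3.0, 4.0, 5.0]
    y    = [2.0, 4.1, 5.9, 8.2, 9.9]
    yerr = 0.4                      # same uncertainty in every y value, for simplicity

    plt.errorbar(x, y, yerr=yerr, fmt='x', capsize=3)
    plt.xlabel('x')
    plt.ylabel('y')
    plt.show()

    # One simple, rather crude estimate: steepest and shallowest lines taken
    # through the first and last error bars only.
    steepest   = ((y[-1] + yerr) - (y[0] - yerr)) / (x[-1] - x[0])
    shallowest = ((y[-1] - yerr) - (y[0] + yerr)) / (x[-1] - x[0])
    best = (steepest + shallowest) / 2
    print(f"gradient is about {best:.2f} +/- {(steepest - shallowest) / 2:.2f}")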
Plots to estimate median and spread, and show possible outliers
When you are taking data, or shortly after the experiment, it is often useful to do a simple ‘plot and
look’ to spot clustering or outliers. This applies when you have lots of values for the same piece of
data. You could use it when dealing with the repeat readings in your experiment.
A familiar way of getting an idea of the level of a batch of values is to find the mean – add them all up
and divide by the number of values. Another measure of level is the median. If you put all the values in
order from biggest to smallest, then the median is the one in the middle. So, there are the same
numbers of values above it and below it. (N.B. If there are an odd number of values, there is a single
middle value, but if there are an even number of values, you get the median by taking the average of
the two in the middle.)
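In Python, for example, the statistics module gives both measures directly (the readings below are invented; notice how an outlier drags the mean but hardly moves the median):

    # Sketch: mean and median of a batch of repeat readings (invented values).
    import statistics

    readings = [4.2, 4.5, 4.3, 4.4, 4.6, 4.3, 9.1]   # the 9.1 looks like an outlier

    print("mean   =", round(statistics.mean(readings), 2))
    print("median =", round(statistics.median(readings), 2))
    # The outlier drags the mean upwards but barely moves the median.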
The simplest way to see how much the data vary is to find the maximum value and the minimum value.
However, the difficulty with this is that these values may be exceptional cases. It is even possible that you
might just have a single very large value and a single very small value with all of the other values very
close together in the middle. Maximum and minimum values on their own are not very good indicators
of the spread of the data.
The median is the middle value of a batch of data. A good way to see how much the data vary around
the median is to find the middle of the top half of the data and the middle of the bottom half. These are
called the upper and lower quartiles. Together with the maximum and minimum, this gives you five
values (minimum, lower quartile, median, upper quartile, maximum) which 'cut the data up' into four
quarters.
A box plot is a useful way to represent these five values.
In a normal distribution, the values are spread symmetrically around the middle of the distribution. But
often in a batch of data the values are more spread out towards the upper end or towards the lower
end. Such distributions are called skewed.
If the values are spread out more in the bottom half than in the top half, the distribution is skewed
towards the lower values: it is negatively skewed. A distribution that is skewed towards the higher
values is called positively skewed.
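A minimal Python sketch of the five-number summary and a box plot (with invented readings) is shown below; note that matplotlib's box plot marks possible outliers automatically:

    # Sketch: the five-number summary and a box plot (invented readings).
    import statistics
    import matplotlib.pyplot as plt

    readings = [3.1, 3.4, 3.5, 3.6, 3.6, 3.7, 3.8, 3.9, 4.0, 4.4]

    q1, q2, q3 = statistics.quantiles(readings, n=4)   # lower quartile, median, upper quartile
    print("min =", min(readings), " lower quartile =", q1, " median =", q2,
          " upper quartile =", q3, " max =", max(readings))

    plt.boxplot(readings)   # points far from the box are flagged as possible outliers
    plt.ylabel('Measured value')
    plt.show()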
Are there exceptional values?
Be on the lookout for exceptional values, and always be suspicious of them: they often arise through
errors in copying. When an exceptional value really is correct, think about why it might be exceptional.
Does it really belong in the batch, or is it better treated as a special case? When plotting data, for
example on a box plot, it may be helpful to indicate these exceptional values, or outliers.
And finally…
Don’t lose sight of the aim of your experiment.
Is it to measure a particular quantity (e.g. Young's modulus), to establish a
mathematical relationship between two variables (e.g. the resistance and
temperature of a wire), to calibrate a sensor (e.g. an LDR used to find the distance
to a lamp), to find the properties of a sensor (e.g. the resolution of a thermocouple),
or to compare different ways of measuring the same quantity?