
PART II LABORATORY

14 ERRORS IN EXPERIMENTAL PHYSICS

14.1 INTRODUCTION

The following few pages are a guide to errors. They have been written to show you that errors really are quite simple and that they are of great importance to your report and to your development as an experimental physicist. Topics covered include the sources and types of error, how to calculate and combine errors, and how to display your final result. Errors are often thought of as troublesome, but they really are easy. Please take the time to read through the following guide and master errors once and for all.

When scientists present the results of experimental measurements, they should almost always specify the possible error associated with the quoted results. It is very poor experimental technique to say, "this is my best value, but I have no idea, nor do I care, how accurate it is." In first year laboratory you were introduced to the concept of errors. In second year laboratory we not only expect you to think about and mention the errors: you are required to understand them, estimate their magnitude using some relatively simple formulae, reduce them by developing good experimental technique, and discuss them at length within your report. Failing to consider, estimate, reduce and discuss the error associated with any quantity you obtain is very poor experimental technique. Hence, you should always state your final result plus a confidence limit:

X ± ΔX units

Your final result is your best estimate for the value being investigated, whereas the confidence limits are the upper and lower limits between which you confidently expect the true value to lie. The confidence limit, or 'error', conveys very significant information about the experiment and the result obtained, and its consideration is an indication of sound experimental technique. We shall now consider the different types of errors, how to estimate and combine them, and how to present them appropriately.
14.2 SOURCES OF ERROR IN EXPERIMENTAL WORK

14.2.1 MISTAKES!!

First, let us dispose of human errors that are just silly blunders: misreading a scale, recording a number incorrectly, or making a mistake in a calculation. These are not really errors in the scientific sense at all. The best advice we can give is: don't make them!

14.2.2 RANDOM ERRORS

Random errors become evident when one takes repeated readings of a physical quantity and a small spread of readings is obtained, with readings equally likely to fall on either side of the average. The cause could be fluctuations in experimental conditions, unavoidable instrument variations, or your own unconscious subjective bias in the setting or reading of instruments. Random errors can be investigated by taking repeated measurements.

a) Repeated Readings

If you need to know the value of a quantity as accurately as possible, it is obvious that you need to check the result a few times. Repeating the measurement allows you to calculate the best value for the quantity being investigated, and to investigate the magnitude of the random errors by considering the statistical variation of the data set. The best estimate of the true value is the arithmetic mean of the measured results,

X̄ = (1/N) Σᵢ Xᵢ

An estimate of the uncertainty in the result comes from the standard deviation of the data. This is defined as the square root of the sum of the squared differences between the recorded values and the mean, divided by (N − 1), where N is the number of points in the data set:

σ = √[ Σᵢ (Xᵢ − X̄)² / (N − 1) ]

For errors with a normal distribution we expect 68% of the recorded measurements to lie within one standard deviation of the mean, and so on.

b) Reading Errors

Both the accuracy and precision of any real measuring device are limited. In first year laboratories you learnt that the precision is limited to perhaps a quarter of the smallest scale division on the instrument (e.g. a ruler).
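The repeated-readings statistics above can be sketched in a few lines of Python. This is a minimal illustration; the readings themselves are invented for the example:

```python
import math

def mean_and_stdev(readings):
    """Return the arithmetic mean and the sample standard deviation
    (square root of the sum of squared deviations over N - 1)."""
    n = len(readings)
    mean = sum(readings) / n
    variance = sum((x - mean) ** 2 for x in readings) / (n - 1)
    return mean, math.sqrt(variance)

# Invented repeated length readings in mm (for illustration only)
readings = [12.3, 12.5, 12.4, 12.6, 12.2]
mean, sigma = mean_and_stdev(readings)
# Quote the result as mean +/- sigma, e.g. (12.40 +/- 0.16) mm
```

The same numbers come out of the standard library's `statistics.mean` and `statistics.stdev`; writing it out by hand just makes the (N − 1) divisor explicit.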
The instruments used to measure quantities include rulers with Vernier scales, calipers and digital voltmeters. Each of these improves the precision to which values can be obtained; however, a reading error is still present.

READING ERRORS (Parallax, Rulers, Vernier scales, digital voltmeters)

A quick reminder of a thing called parallax: it is important to align yourself with the instrument appropriately if you wish your results to be accurate.

Figure 14.1 Parallax errors arising from the measurement of the length of an object. The apparent length, r2, in this case is shorter than the real length, r1, which would be the length obtained if the measurement were taken with the ruler in contact with the object.

Figure 14.2 The first scale can be read to the nearest 1 mm, while the second scale can be read to the nearest 0.1 mm.

The accuracy of the result will depend upon the accuracy of the measuring device you use. In Figure 14.2 we consider the length of a block of material. It is quite obvious that the accuracy of our result is greater in the second case. In the Second Year laboratories the accuracy of our result should be down to a quarter of the smallest division. This applies to both rulers and needle meters. If your results depend greatly upon the accuracy of this measurement you will find a Vernier scale or digital meter (both discussed below) more appropriate.

Limit of accuracy = a quarter of the smallest division

THE VERNIER SCALE

A Vernier scale is used whenever one needs to measure a distance or angle to a greater accuracy than is obtainable through direct visual reading of a linear scale. The Vernier uses the linear reading scale of the measuring apparatus (i.e. a ruler) along with another scale which is scaled by a factor of 9:10 compared to the linear scale. Reading a Vernier is easy and is best seen with the example in Figure 14.3.
Figure 14.3 Vernier Scale

In order to read this scale:

First, read the linear part of the measurement from the 'zero' of the Vernier. This is the first approximation to the measurement and yields a result of 3.5 mm.

Next, determine which marking on the Vernier scale most nearly lines up with the markings on the linear scale. Do this by scanning your eyes across the scale and judging whether each Vernier marking is to the left or the right of the nearest linear-scale marking; when you cannot say left or right, you have found the alignment. In this case it is the 2 on the Vernier. This is the next significant figure, so the result is 3.52 mm.

Finally, the smallest division on the Vernier scale is 0.005 mm. This is the uncertainty in the measurement. Thus the final result would be (3.520 ± 0.005) mm.

DIGITAL VOLTMETER

If you are using a digital voltmeter to record a voltage, it might read 1.004 volts, and the limit of the reading is 0.001 volts (this is the smallest difference between readings which the device can resolve). We would then be tempted to claim the result to be 1.004 ± 0.001 volts, which would seem reasonable. HOWEVER, the accuracy of digital meters is less than the number of digits displayed would suggest; typically the accuracy is 1 digit plus some fraction of the reading (typically 0.3%). In this example the error is calculated as follows:

ΔV = (3/1000) × V + limit of reading
   = (3/1000) × 1.004 + 0.001 = 0.004012 ≈ 0.004

So the final result plus confidence limit is (1.004 ± 0.004) volts.

14.2.3 SYSTEMATIC ERRORS

It is fairly easy to assess the size of the random errors that we have been discussing. Averaging a number of readings tends to cancel out the effects of such errors, and the spread of the readings allows us to estimate the size of the random errors involved. Unfortunately there is another type of error, known as systematic error, the effects of which are much more difficult to assess.
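Returning briefly to the digital voltmeter: the accuracy rule above (a fraction of the reading plus one digit) can be sketched in a couple of lines. The 0.3% figure is the typical value quoted above, not a universal specification; check the manual of your own meter:

```python
def voltmeter_uncertainty(reading, limit_of_reading, fraction=0.003):
    """Error of a digital meter: a fraction of the reading (here 0.3%,
    an assumed typical specification) plus one digit of resolution."""
    return fraction * reading + limit_of_reading

dv = voltmeter_uncertainty(1.004, 0.001)
# dv = 0.004012, so the result is quoted as (1.004 +/- 0.004) volts
```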
Systematic errors are errors that systematically shift the measurements in one direction away from the true value. Repeated readings therefore do not show up the presence of systematic errors, and no amount of averaging will reduce their effects. Both systematic and random errors are often present in the same measurement, and the different effects they have are contrasted in Figure 14.4.

Figure 14.4 Schematic diagram showing the different types of errors and how they arise.

There are no rules for eliminating systematic errors, or even detecting them, but the following points may be of some help.

APPARATUS

- If at all feasible, meters should be checked against a standard or, more realistically, against another and hopefully better meter.
- Zero settings should be checked and adjusted if necessary.
- "GIVEN" values should be treated with suspicion and checked if possible; especially resistors, which can vary by 10% or more from their "NOMINAL" value.
- Instructions for the use of the apparatus should be read and followed. Make sure you know how to read the scales on the instruments.
- Identifying features of the apparatus should be recorded.
- If possible, vary the conditions of measurement slightly to avoid systematic errors. Use different rulers (or different sections of a ruler) to measure lengths, or swap meters if possible. This data can be used in a discussion of systematic errors in your report.

OBSERVATIONS

- Measurements should be repeated by different observers to detect experimenter bias.
- Corrections for instrument bias should be made if necessary (e.g. an incorrect zero setting on a meter).
- Any necessary precautions (warm-up times, steps to avoid backlash in screw transports, temperature controls, etc.) should be observed.

ANALYSIS

- The experiment should be carefully analysed for likely sources of error, and steps should be taken to minimize these.
- Theoretical arguments used in the derivation of relations for indirectly measured quantities should be checked for assumptions, and the assumptions checked for validity in the particular experiment.
- Results should be checked for feasibility, and silly results recalculated. If they are still unreasonable then the measurements must be taken again.

14.3 GRAPHS AND ERRORS

14.3.1 ERROR BARS

If estimates of the uncertainties in measured values are available then these should be indicated on the plot in the form of error bars. These are bars passing through the estimated point, running from the lower limit to the upper limit of the range covered by the uncertainties. Thus if (<x>, <y>) is the estimated point and Δx and Δy are the uncertainties, then the error bar in the x direction runs from <x> − Δx to <x> + Δx and the bar in the y direction runs from <y> − Δy to <y> + Δy, as shown in Figure 14.5.

Figure 14.5 Error bars

One may choose error bars with a magnitude of one or two standard deviations; this interpretation should be included in the description of the plot. In many practical cases the error in one of the pair (usually the independent variable, often placed on the x-axis) is much less than that in the other. In these cases the error bar in the small-error direction is often omitted, but if this occurs a note should be made in the report. If more than one data set is plotted on the same graph, a different colored pen or different symbols should be used.

14.3.2 ESTIMATES OF GRADIENTS AND UNCERTAINTIES

In many experiments the result is obtained from the gradient of a straight line. The straight line is usually a line of best fit to a set of experimental points.
However, random errors will most probably cause the plotted points not to lie precisely on any one straight line, although the trend of the data may be clear. There are thus two questions to be answered: What is the best estimate for the gradient of the line? What is the uncertainty in the estimated gradient? There are a number of ways to estimate the gradient of a straight line from a plot of experimental points. We shall consider two such methods.

a) Best Fit by Eye

In many cases it is sufficient to draw the single straight line which seems to lie closest to the largest number of experimental points. If one-standard-deviation error bars have been drawn then such a line should pass through at least 2/3 of these error bars. If two-standard-deviation error bars are used the line should pass through about 90% of the bars. See Figure 14.6. The gradient of the line y = mx + c is then calculated as rise over run:

m = rise/run = (y₂ − y₁)/(x₂ − x₁)

Figure 14.6 Line of best fit by eye.

The uncertainty in the gradient can be calculated using the 'maximum and minimum gradient' approach. Two lines are drawn, the steepest and the shallowest lines that fit the data, where 'fit' means that they pass through 2/3 or 90% of the error bars for one- and two-standard-deviation error bars respectively. The gradient is calculated for each line, and the average of these is used as the estimate of the gradient:

gradient = (steepest gradient + shallowest gradient) / 2

The uncertainty can be taken as

Δgradient = (steepest gradient − shallowest gradient) / 2

b) Least Squares Fit

This technique is one of the most accurate techniques for obtaining the gradient of a line of best fit. SigmaPlot® uses this technique to find the line of best fit in its curve fit algorithm.
This technique was developed by Gauss. It assumes that all of the error is concentrated in the y co-ordinate, and asks: "For a line of the form y = mx + c, what values of m and c yield the smallest value for the sum of the squared errors (the differences between the points the line predicts and the experimental points)?" Minimizing this total squared distance between the fitted line and the experimental points gives the best fit.

14.4 SIGMAPLOT®

SigmaPlot® is a powerful and elegant software program used in the 2nd Year physics laboratories within the School of Physics. SigmaPlot®, as you will have seen in the tutorial, enables you to take a data set and obtain an equation of best fit. The figure shown opposite is obtained in the tutorial. In this experiment particles with a particular energy are binned in a particular ADC channel number, and the role of SigmaPlot® is to find the relationship. Clearly this is roughly linear, so we guess that the equation to be used in the transform is of the form y = mx + c.

Figure 14.7 Line of Best fit obtained using SigmaPlot® in the Tutorial

When this transform is run, the following 'result box' is produced:

            VALUE      STDERR     CV(%)     DEPENDENCIES
  m         2.388e-2   3.411e-4   1.429e0   0.8314527
  c         9.588e+0   3.355e-1   3.500e0   0.8314527

The primary results of interest are the parameters, namely m and c, which tell us the equation of the line. The equation which best fits the experimental data is

E = 0.0239 × ADC + 9.588

Note that m is given a value of 0.02388, with an associated error of 0.0003411. Hence the gradient could be expressed as 0.0239 ± 0.0003. This can then be used to calculate the error of quantities derived from this result. For this, and a discussion of the rounding of errors and presentation of the final result, see the following few pages.
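As an illustration of what a least-squares routine like SigmaPlot's computes, here is a minimal unweighted straight-line fit in plain Python. The data set is invented, and the standard-error formulas are the usual textbook ones for a linear fit (they correspond, for a straight line, to the StdErr column above):

```python
import math

def linear_fit(xs, ys):
    """Unweighted least-squares fit of y = m*x + c.
    Returns (m, c, se_m, se_c): the values and their standard errors."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    m = sxy / sxx
    c = ybar - m * xbar
    # Residual variance with n - 2 degrees of freedom
    s2 = sum((y - (m * x + c)) ** 2 for x, y in zip(xs, ys)) / (n - 2)
    se_m = math.sqrt(s2 / sxx)
    se_c = math.sqrt(s2 * (1.0 / n + xbar ** 2 / sxx))
    return m, c, se_m, se_c

# Invented data roughly following y = 2x + 1
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]
m, c, se_m, se_c = linear_fit(xs, ys)
# Quote the gradient as m +/- se_m and the intercept as c +/- se_c
```

In practice you would let SigmaPlot (or a library routine) do this; the sketch only shows where the Value and StdErr numbers come from.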
The results are displayed in the result box in five columns:

Parameter: The parameter names are shown in the first column.

Value: The calculated parameter values are shown in the second column.

StdErr: The asymptotic standard error of the parameters is displayed in column three. The standard errors and coefficients of variation (see next) can be used as a gauge of the accuracy of the fitted curve.

CV(%): The parameter coefficients of variation, expressed as a percentage, are displayed in column four. This is the normalised version of the standard errors.

Dependencies: The last column shows the parameter dependencies. Parameters with dependencies near 1 are strongly dependent on one another. This may indicate that the equation is too complicated and overparameterized (too many parameters are being used), and a model with fewer parameters may be better.

14.5 THE COMBINATION OF ERRORS

In the majority of experiments there is more than one source of error. For instance, several random and systematic errors may be present in measurements of a single quantity. Furthermore, it is likely that a number of measurements of different quantities (each of which has an error associated with it) will have to be combined to calculate the required result. So it is important to know how errors are combined to get the overall error in an experiment. The basis of this combination is the following formula, which really makes a lot of sense. Suppose that your final result Z is a function of some experimental parameters (x, y, ...). It makes perfect sense that if x and y have small errors in them, then the final result, being a function of those variables, will also have some error in it. In the mid-1600s Sir Isaac Newton developed calculus (and partial derivatives) to consider such variations. The general formula for the error in Z due to experimental uncertainties (Δx, Δy, ...) in the parameters is

(ΔZ)² = (∂Z/∂x)²(Δx)² + (∂Z/∂y)²(Δy)² + ...     Eq. 14.1

The squared terms in this formula arise as a result of statistical theory, the depth of which is not appropriate for this reference; see the references for more detail. Every confidence limit you quote can be calculated using Eq. 14.1. The errors Δx, Δy, ... can be random errors such as the standard deviation from repeated measurements, reading errors from a Vernier scale, tolerance limits from a digital voltmeter, or uncertainties in the gradient of a graph. All these 'errors' combine together to give an error in the final result. The example below illustrates the use of this equation and the calculation of the final error.

a) WORKED EXAMPLE

Consider the use of a diffraction grating to determine the wavelength of an unknown emission line. The grating formula is

mλ = d sin(θ)

Thus

λ = d sin(θ) / m

We can now work out the errors. We wish to know: if there are small errors (standard deviations, reading errors, ...) in the variables m, d and θ, what is the resultant error in λ? We calculate the partial derivatives of λ with respect to d, θ and m,

∂λ/∂d = sin(θ)/m,   ∂λ/∂θ = d cos(θ)/m,   ∂λ/∂m = −d sin(θ)/m²

and then use Eq. 14.1:

(Δλ)² = (sin²θ/m²)(Δd)² + (d² cos²θ/m²)(π/180)²(Δθ)² + (d² sin²θ/m⁴)(Δm)²

The (π/180)² factor converts the angular term into radians; this allows you to enter Δθ in degrees. Alternatively you could omit the (π/180)² term and enter the angle in radians. We are given

d = 1710 ± 2 nm
m = 1
θ = 17.30 ± 0.01 degrees

Note that m is a DISCRETE variable and so has no error associated with it (Δm = 0). When all the terms are evaluated we get

λ = 508.511 ± 0.6595 nm = (508.5 ± 0.7) nm

14.6 SIGNIFICANT FIGURES AND DATA/ERROR PRESENTATION

The correct presentation of the final result plus a confidence limit is 508.5 ± 0.7 nm. Hence we are confident that our result lies between 507.8 nm and 509.2 nm. Please take note of the following:

Results should be rounded up or down depending upon the magnitude of the error. This is quite obvious but difficult to explain; ask your demonstrator if you are unsure.
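The diffraction-grating worked example can be checked numerically. This short sketch in plain Python uses the values quoted above and reproduces the unrounded result λ = 508.511 nm with Δλ = 0.6595 nm, which is then quoted to the appropriate accuracy:

```python
import math

# Given values from the worked example
d, dd = 1710.0, 2.0           # grating spacing and its error (nm)
m = 1                         # diffraction order (discrete: no error)
theta, dtheta = 17.30, 0.01   # angle and its error (degrees)

t = math.radians(theta)
lam = d * math.sin(t) / m     # wavelength: lambda = d sin(theta) / m

# Error propagation via Eq. 14.1; math.radians converts the
# angular error from degrees to radians
dlam = math.sqrt((math.sin(t) / m) ** 2 * dd ** 2
                 + (d * math.cos(t) / m) ** 2 * math.radians(dtheta) ** 2)

print(f"lambda = {lam:.3f} +/- {dlam:.4f} nm")    # unrounded values
print(f"lambda = ({lam:.1f} +/- {dlam:.1f}) nm")  # as it should be quoted
```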
The 'best guess' can only be as accurate as the error. There is no point in quoting 508.5110 ± 0.6: the 0.0110 is clearly not important.

Usually only the first significant figure in the error is of any use. Stating the wavelength as 508.5 ± 0.7 will suffice; it gives the magnitude of the error. Clearly giving details of the error to the nth degree (i.e. 0.6595) is not required.

Displaying the result as 5.085 × 10² ± 6.595 × 10⁻¹ nm indicates poor experimental technique and a lack of appreciation for what the result is actually telling you. This presentation is far from elegant and suggests that you can calculate the error but do not really know what the important points to take from the calculation are. The form (5.085 ± 0.007) × 10² nm would be more appropriate for very large or very small values.

Finally, I shouldn't have to say this, but calculating an error is only half the exercise. Once the error has been calculated you must then discuss not only the magnitude of the error and its significance but also the sources of error and what was done to minimize them. As I said earlier, a best result without an error is next to useless, but an error calculation without an explanation of what it means is pointless.

14.7 GENERAL EXPRESSIONS

Our final result Z is a function of experimental parameters (x, y, ...). The general formula (Eq. 14.1) reduces to a simple form in many common cases. These expressions are very easy to derive, and I would encourage you to derive all of the following so you can appreciate that they are based upon something simple and easy to understand, namely: if the variables x and y vary, then our final result Z will vary too.

GENERAL EXPRESSION
If the result Z depends upon x and y, its error is calculated as follows:
(ΔZ)² = (dZ/dX)²(ΔX)² + (dZ/dY)²(ΔY)² + ...

SUMS AND DIFFERENCES
If Z = x + y + ... or Z = x − y − ..., then
(ΔZ)² = (ΔX)² + (ΔY)² + ...

PRODUCTS AND QUOTIENTS
If Z = xy... or Z = x/y..., then
(ΔZ/Z)² = (ΔX/X)² + (ΔY/Y)² + ...

POWERS
If Z = xᴺ, then
ΔZ/Z = |N| (ΔX/X)

SIMPLE AND TRIGONOMETRIC FUNCTIONS
If Z = f(x), then
ΔZ = |df/dX| ΔX

Note: this last category includes all the trigonometric functions. Substituted angles must be in radians unless you have included the (π/180)² term in the error derivation.

A few of the errors to be calculated in the Second Year laboratories will require a combination of the above formulae. If you wish to use these formulae, then you must go through your specific case step by step. Alternatively you can use Eq. 14.1, which is relatively straightforward and easy to understand.

14.8 CONCLUDING COMMENTS

This concludes our consideration of errors. We have looked at the sources and types of errors, and how to record, reduce and calculate them. As I hope you now appreciate, errors are neither mysterious nor tricky. As I have mentioned before, you must always state your final result plus a confidence limit, with the correct accuracy and significant figures. It is very poor experimental technique to obtain a value without considering a confidence limit. The confidence limit or 'error' conveys very significant information about the experiment and the result obtained. Once the error has been calculated it should be displayed appropriately and finally discussed in depth. All final results must therefore have an estimation and discussion of the errors involved. Throughout this semester your experimental technique will develop greatly as you encounter new concepts and ideas. Our hope is that your reports will be thoroughly sound, demonstrating not only that you have thought about the final result and the physics investigated, but also a detailed consideration of the magnitude of the errors encountered.