Lecture 2
ERRORS OF MEASUREMENTS.
Lecture aim: understand how probability theory can be applied to the evaluation of measurement results.
Measurement errors.
When we conduct measurements, our aims are:
1. to obtain the numerical value of a definite physical parameter;
2. to define the degree of confidence in the measurement result.
It is clear that the result of a measurement is a variate (random variable): if we measure the same quantity several times with a high-accuracy instrument, each time we obtain a slightly different result (reasons: influence of electric and magnetic fields, temperature, humidity, vibrations and so on). Because we deal with variates, we must use probability theory. An error is the deflection of the measurement result from the ideal value of the measured parameter. We distinguish absolute error and relative error. The absolute error Δ is the difference between the result of measurement A and the ideal value of the measured parameter X: Δ = A − X. It is expressed in the same unit as the measured parameter. The relative error δ is the ratio (in per cent) of the absolute error to the ideal value of the measured parameter: δ = Δ/X. It has no unit of its own and is expressed in %.
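A minimal sketch of these two definitions; the function names and the numeric values (an assumed reading of 10.5 against an assumed ideal value of 10.0) are purely illustrative:

```python
def absolute_error(a_measured: float, x_ideal: float) -> float:
    """Absolute error: difference between the measurement result A and the ideal value X."""
    return a_measured - x_ideal

def relative_error(a_measured: float, x_ideal: float) -> float:
    """Relative error: absolute error divided by the ideal value, expressed in per cent."""
    return (a_measured - x_ideal) / x_ideal * 100.0

# Assumed values for illustration
print(absolute_error(10.5, 10.0))   # 0.5, in the same unit as the measured parameter
print(relative_error(10.5, 10.0))   # 5.0, in per cent
```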
Errors of measurement have accidental and systematic components. The component of error which changes accidentally at repeated measurements of the same parameter is called the accidental error. It is caused by factors which act irregularly and with different intensity. The value of the accidental component of error is unpredictable, so it is unavoidable; its influence can be decreased by applying multiple measurements. A systematic error is one which stays the same during multiple measurements or changes according to a definite law. Systematic errors can be classified as:
1. error of method: the component of error caused by the method applied;
2. instrument error: the component of error defined by the error of the instrument;
3. environmental error: the component of error explained by the influence of external conditions on the measurement process;
4. operator's bias: the component of error caused by individual peculiarities of the operator.
An error which is substantially higher than the expected one is called a gross error. Systematic errors may be constant or alternating. A constant error has the same value and sign; an alternating error changes its value in a monotonic or a periodic manner. We very often use the notion of measurement accuracy. It characterizes the quality of a measurement and reflects the closeness of the measurement result to the ideal value of the measured parameter. High measurement accuracy corresponds to low errors of all types.
Evaluation of accidental errors.
Because of the influence of different accidental factors, the result of each measurement Ai differs from the ideal value of the measured parameter X: Ai − X = ΔXi. This difference is called the accidental error of this particular measurement. The ideal value X is unknown, but by conducting many measurements of the parameter X we can establish the following regularities.
1. If we conduct many measurements of X and compute the average value, positive and negative deflections of individual measurement results from the average have approximately the same probability. For this reason, when the systematic error equals 0, there is an equal probability (frequency) that a measurement result deflects from the ideal value downwards or upwards. The average calculated from a set of measurements is the truest value that can be assigned to the measured quantity: when we average a set of measurements, the errors of individual measurements, which have different signs, compensate each other.
2. The probability (frequency) of large deflections from the true value is substantially less than the probability (frequency) of small deflections (a small simulated illustration follows this list).
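A small simulated illustration of these two regularities; this is a sketch only, and the true value x_true = 10.0, the noise level 0.05 and the number of readings are assumed purely for illustration:

```python
import random

random.seed(1)
x_true = 10.0                       # assumed ideal value of the measured parameter
n = 1000                            # assumed number of simulated measurements

# Simulated readings: ideal value plus a symmetric accidental error
readings = [x_true + random.gauss(0.0, 0.05) for _ in range(n)]

a_bar = sum(readings) / n
positive = sum(1 for a in readings if a > x_true)   # deflections upwards
negative = n - positive                             # deflections downwards

print(f"positive deflections: {positive}, negative deflections: {negative}")  # roughly equal counts
print(f"average = {a_bar:.4f}, its deflection from the true value = {abs(a_bar - x_true):.4f}")
```

The counts of positive and negative deflections come out roughly equal, and the deflection of the average is much smaller than the typical deflection (0.05) of a single reading.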
These statistical regularities hold in the case of multiple measurements. After generalizing the measurement results we obtain not the absolutely true result, but the result of maximum probability, which is the average of the set of measurements: Ā = (A1 + A2 + … + An)/n = (∑Ai)/n, where n is the number of measurements. These statistical regularities raise the question of the law according to which the accidental errors are distributed. For electrical measurements, as a rule, we use the Gaussian (normal) law. Analytically it is described as:
p(ΔX) = (1/(σ√(2π))) · e^(−(ΔX)²/(2σ²)),
where p(ΔX) is the probability density of the accidental error ΔX = A − X, and σ (Sx) is the quadratic mean (rms, standard) deviation of the accidental error of measurement. This parameter characterizes the extent of the accidental scatter of single measurement results with respect to the ideal value X. The meaning of the probability density is the ratio of the probability of the accidental value falling into an interval ΔX to the length of that interval, as the length tends to 0.
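A minimal sketch of the two formulas introduced so far, the sample average Ā and the Gaussian density of the accidental error; the function names and the list of readings are illustrative assumptions, not data from the lecture:

```python
import math

def average(readings):
    """Average of a set of measurements: A_bar = (sum of A_i) / n."""
    return sum(readings) / len(readings)

def gaussian_density(dx, sigma):
    """Gaussian probability density of an accidental error dx with rms deviation sigma."""
    return math.exp(-dx ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical repeated readings of the same quantity
readings = [10.02, 9.98, 10.01, 9.99, 10.00]
a_bar = average(readings)
print(a_bar)                        # approximately 10.0
print(gaussian_density(0.0, 0.02))  # density at zero error for an assumed sigma = 0.02
```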
σ = √[∑(Ai − X)² / (n − 1)] = √[∑(ΔXi)² / (n − 1)]
With a probability of 0.997 the accidental error lies within the limits ±3σ. It means that only 3 measurements out of 1000 may give an error higher than 3σ. This statement is called the three-sigma rule. To define the accidental error ΔXi = Ai − X we would need to know the ideal value of the measured parameter, but X is unknown. However, the average Ā is the most accurate estimate of the ideal value, so for the evaluation of the ideal value X we can use the average. In this case the deviations of single measurements must be taken with respect to the average: vi = Ai − Ā. According to the proofs of the theory of errors, the average Ā satisfies the following conditions: 1) the algebraic sum of the accidental deviations of single measurements from the average equals 0 if the number of measurements is high: ∑(Ai − Ā) = ∑vi = 0; 2) the sum of squared deviations from the average, ∑(Ai − Ā)² = ∑vi², has the minimum value.
If instead of the average we take some other value, the sum of squared deviations will be higher than the one for the average. The expressions p(ΔX) = (1/(σ√(2π))) · e^(−(1/2)(ΔX/σ)²) and σ = √[∑(ΔXi)² / (n − 1)] are valid for n → ∞. In practice the number of measurements is finite, so corrections are needed. The role of these corrections becomes less important as the number of measurements increases, because in this case Ā tends to X, and for n → ∞ we have Ā = X. It means that all conclusions about ΔX (accidental errors) under the Gaussian law can be applied to the deviations from the average. As a rule, there are no more than several tens of measurements in practice, so the appearance of an error equal to ±3σ has low probability. Therefore the error of ±3σ is taken as the maximum possible one, and errors which are higher than ±3σ are discarded.
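A minimal sketch of this procedure, with illustrative data and hypothetical helper names: the deviations are taken with respect to the average, σ is estimated with the n − 1 denominator, and any reading whose deviation exceeds 3σ is treated as a gross error and discarded.

```python
import math

def rms_deviation(readings):
    """RMS (standard) deviation about the average: sigma = sqrt(sum(v_i**2) / (n - 1))."""
    n = len(readings)
    a_bar = sum(readings) / n
    return math.sqrt(sum((a - a_bar) ** 2 for a in readings) / (n - 1))

def discard_gross(readings):
    """Keep only readings whose deviation from the average is within +/- 3*sigma."""
    a_bar = sum(readings) / len(readings)
    sigma = rms_deviation(readings)
    return [a for a in readings if abs(a - a_bar) <= 3 * sigma]

# Illustrative set: the last reading is a gross error and falls outside +/- 3*sigma
readings = [10.01, 9.99, 10.02, 9.98, 10.00, 10.01, 9.99, 10.02, 9.98, 10.00, 10.01, 13.00]
print(discard_gross(readings))      # the reading 13.00 is dropped
```

In practice σ would then be recomputed from the remaining readings.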
In practice (if n is low) we need to evaluate the accuracy and reliability of the results obtained for the average and the rms deviation. For this purpose the confidence probability and the confidence interval are used. The confidence probability is the probability that the error lies within definite limits (inside an interval). This interval is called the confidence interval, and the probability which characterizes it is the confidence probability. Using the Gaussian law and the probability integral table, we can define the values of confidence intervals. As the confidence interval increases, the confidence probability increases and tends to its limit 1. You know that for the interval ±3σ, p = 0.9973 (for the confidence interval from −3σ to +3σ the confidence probability is 0.9973). To evaluate the error of a single measurement we use the probable error ρ. It is the value which divides all accidental errors of a set of n measurements into 2 equal parts: in one part there are n/2 accidental errors which are larger than ρ, in the other part there are n/2 accidental errors which are smaller than ρ. It means that the probability that some accidental error lies within the limits −ρ to +ρ must be equal to 0.5. The probable error for the Gaussian law is equal to:
ρ = 2σ/3 = (2/3)·√[∑vi² / (n − 1)].
The above-mentioned expressions for confidence intervals are valid only if the number of measurements n > 20…30. If n < 20, the formulae give too low a value of the confidence interval, which means that the measurement accuracy is overestimated. To make the confidence interval exact, Student's coefficients tn must be used; they are a function of the chosen confidence probability and of the number of measurements n. To define the confidence interval, the rms error must be multiplied by Student's coefficient. The final result must be taken as A = Ā ± tn·σĀ,
where σĀ = σ/√n = √[∑vi² / (n(n − 1))]. Student's coefficients must be taken from a special table.
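A minimal sketch of this final step; here the Student's coefficient is taken from scipy.stats rather than from a printed table, which is an assumption, and tn is computed as the two-sided coefficient for confidence probability p with n − 1 degrees of freedom (published tables differ in convention, so the coefficient used should match the table referenced in the lecture):

```python
import math
from scipy.stats import t   # assumption: SciPy plays the role of the "special table"

def confidence_interval(readings, p=0.95):
    """Return (A_bar, half_width) so that A = A_bar +/- half_width with confidence probability p."""
    n = len(readings)
    a_bar = sum(readings) / n
    sigma = math.sqrt(sum((a - a_bar) ** 2 for a in readings) / (n - 1))   # rms deviation of a single reading
    sigma_a_bar = sigma / math.sqrt(n)                                     # rms deviation of the average
    t_n = t.ppf((1 + p) / 2, n - 1)                                        # two-sided Student's coefficient
    return a_bar, t_n * sigma_a_bar

# Illustrative usage with hypothetical readings
a_bar, half = confidence_interval([10.02, 9.98, 10.01, 9.99, 10.00], p=0.95)
print(f"A = {a_bar:.3f} +/- {half:.3f}")
```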
Glossary
Variate – random variable
Probability density – probability density
Accidental straggling – random scatter
Confidence probability – confidence probability
Theory of errors – theory of errors
Hitting probability – probability of falling (within an interval)
Quadratic mean – root mean square
Self-assessment questions.
1. What types of measurement errors do you know?
2. Prepare the algorithm of accidental errors evaluation.
3. Show the normal distribution curve and demonstrate the behavior of the input data for ±1 standard deviation, ±2 standard deviations and ±3 standard deviations.
4. Explain why probability theory is applicable for the evaluation of measurement results.
Literature
[1]. Ch. 1, pp. 7-16; Ch. 10, pp. 517-545
EXAMPLE
Given: confidence probability p = 0.95; n = 10 measurements of capacitance.

n     Ci (µF)    Ci − Caver (µF)    (Ci − Caver)² (µF²)
1     5.02       −0.01              0.0001
2     5.04        0.01              0.0001
3     5.10        0.07              0.0049
4     5.07        0.04              0.0016
5     4.98       −0.05              0.0025
6     5.02       −0.01              0.0001
7     4.98       −0.05              0.0025
8     5.05        0.02              0.0004
9     5.04        0.01              0.0001
10    5.01       −0.02              0.0004

Ā = Caver = 5.03 µF
∑(Ci − Caver)² = 0.0127 µF²
σ (Sx) = √[∑(Ci − Caver)² / (n − 1)] = √(127·10⁻⁴ / 9) = 0.03756 µF
σ(Caver) = σ/√n = 0.03756/√10 = 0.03756/3.162 = 1.19·10⁻² µF
At p = 0.95, tn = 1.81 (from the table of Student's coefficients)
C = Caver ± tn·σ(Caver) = 5.03 ± 1.81·1.19·10⁻² = 5.03 ± 0.02 µF
Probable error ρ = 2σ/3 = 2·0.03756/3 ≈ 0.025 ≈ 0.03 µF
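A minimal sketch that reproduces this worked example in code; the Student's coefficient tn = 1.81 is taken directly from the value quoted above rather than recomputed:

```python
import math

# Capacitance readings from the example, in microfarads
c = [5.02, 5.04, 5.10, 5.07, 4.98, 5.02, 4.98, 5.05, 5.04, 5.01]
n = len(c)

c_aver = sum(c) / n                                # average, about 5.03 uF
sum_sq = sum((ci - c_aver) ** 2 for ci in c)       # sum of squared deviations, about 0.0127
sigma = math.sqrt(sum_sq / (n - 1))                # rms deviation, about 0.038 uF
sigma_aver = sigma / math.sqrt(n)                  # rms deviation of the average, about 0.012 uF

t_n = 1.81                                         # Student's coefficient quoted in the example for p = 0.95
half_width = t_n * sigma_aver                      # about 0.02 uF
rho = 2 * sigma / 3                                # probable error, about 0.025 uF

print(f"C = {c_aver:.2f} +/- {half_width:.2f} uF")
print(f"probable error rho = {rho:.3f} uF")
```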