STATISTICAL TESTS AND ERROR ANALYSIS
PRECISION AND ACCURACY
PRECISION – Reproducibility of the result
ACCURACY – Nearness to the “true” value
How sure are you that the experimentally
obtained value is close to the “true” value?
How close is it?
Finding errors
Experimental error: there is uncertainty in every experiment (measurement).
SYSTEMATIC / DETERMINATE ERROR
• Reproducible under the same conditions in the same
experiment
• Can be detected and corrected for
• It is always positive or always negative
To detect a systematic error:
• Use Standard Reference Materials
• Run a blank sample
• Use different analytical methods
• Participate in “round robin” experiments
(different labs and people running the same
analysis)
RANDOM / INDETERMINATE ERROR
• Uncontrolled variables in the measurement
• Can be positive or negative
• Cannot be corrected for
• Random errors are independent of each other
Random errors can be reduced by:
• Better experiments (equipment, methodology,
training of analyst)
• Large number of replicate samples
Random errors show Gaussian distribution for a
large number of replicates
Can be described using statistical parameters
For a large number of experimental replicates the
results approach an ideal smooth curve called the
GAUSSIAN or NORMAL DISTRIBUTION CURVE
Characterised by:
• The mean value, x̄ – gives the center of the distribution
• The standard deviation, s – measures the width of the distribution
The mean or average, x
 the sum of the measured values (xi) divided by the
number of measurements (n)
n
_
x
x
i 1
i
n
The standard deviation, s, measures how closely the data are clustered about the mean (i.e. the precision of the data):

s = \sqrt{\frac{\sum_i (x_i - \bar{x})^2}{n - 1}}

NOTE: The quantity "n − 1" = degrees of freedom
Other ways of expressing the precision of the data:
• Variance:  variance = s²
• Relative standard deviation:  RSD = \frac{s}{\bar{x}}
• Percent RSD (coefficient of variation):  \%RSD = \frac{s}{\bar{x}} \times 100
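As a quick illustration, the sketch below (an assumption, not part of the original notes; the replicate values are hypothetical) computes these quantities in Python.

```python
# Sketch with hypothetical replicate values: mean, std dev, variance, RSD, %RSD.
from statistics import mean, stdev

replicates = [10.1, 10.3, 9.9, 10.2, 10.0]   # hypothetical measurements

x_bar = mean(replicates)                     # x_bar = sum(x_i) / n
s = stdev(replicates)                        # uses the n - 1 (degrees of freedom) denominator
variance = s ** 2
rsd = s / x_bar
percent_rsd = 100 * rsd

print(f"mean = {x_bar:.3f}, s = {s:.3f}, variance = {variance:.4f}, %RSD = {percent_rsd:.2f}%")
```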
POPULATION DATA
For an infinite set of data, n → ∞:
x̄ → µ (the population mean)
s → σ (the population standard deviation)
The experiment that produces a smaller standard deviation is more precise.
Remember, greater precision does not
imply greater accuracy.
Experimental results are commonly
expressed in the form:
mean ± standard deviation,  i.e.  x̄ ± s
The Gaussian curve equation:

y = \frac{1}{\sigma\sqrt{2\pi}} \, e^{-(x-\mu)^2 / 2\sigma^2}

The factor \frac{1}{\sigma\sqrt{2\pi}} is the normalisation factor: it guarantees that the area under the curve is unity.
Probability of measuring a value in a certain range = the area below the curve over that range.

The Gaussian curve whose area is unity is called a normal error curve; it has µ = 0 and σ = 1.
The standard deviation measures the width of the
Gaussian curve.
(The larger the value of σ, the broader the curve)
Range     Percentage of measurements
µ ± 1σ    68.3
µ ± 2σ    95.5
µ ± 3σ    99.7
The more times you measure, the more confident you
are that your average value is approaching the “true”
value.
The uncertainty decreases in proportion to 1/√n.
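A short check of the µ ± kσ percentages above, using the error function of the standard normal distribution (a sketch, not from the notes):

```python
# Fraction of a Gaussian population lying within mu ± k*sigma, via the error function.
import math

for k in (1, 2, 3):
    coverage = math.erf(k / math.sqrt(2))    # P(|x - mu| <= k*sigma)
    print(f"mu ± {k} sigma: {100 * coverage:.1f}%")
# prints approximately 68.3%, 95.4%, 99.7% (the table above rounds the second entry to 95.5)
```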
EXAMPLE
Replicate results were obtained for the analysis of
lead in blood. Calculate the mean and the standard
deviation of this set of data.
Replicate    [Pb] / ppb
1            752
2            756
3            752
4            751
5            760

\bar{x} = \frac{\sum_i x_i}{n} = 754

s = \sqrt{\frac{\sum_i (x_i - \bar{x})^2}{n - 1}} = 3.77

NB: DON'T round a std dev. calculation until the very end.

x̄ = 754,  s = 3.77  →  754 ± 4 ppb Pb
Also:

RSD = \frac{s}{\bar{x}} = \frac{3.77}{754} = 0.00500

\%RSD = \frac{s}{\bar{x}} \times 100 = \frac{3.77}{754} \times 100 = 0.500\%

Variance = s^2 = 3.77^2 = 14.2
Lead is readily absorbed through the gastrointestinal tract. In blood,
95% of the lead is in the red blood cells and 5% in the plasma. About
70-90% of the lead assimilated goes into the bones, then liver and
kidneys. Lead readily replaces calcium in bones.
The symptoms of lead poisoning depend upon many factors, including
the magnitude and duration of lead exposure (dose), chemical form
(organic is more toxic than inorganic), the age of the individual
(children and the unborn are more susceptible) and the overall state of
health (Ca, Fe or Zn deficiency enhances the uptake of lead).
European Community Environmental
Quality Directive – 50 µg/L in drinking water
Pb – where from?
• Motor vehicle emissions
• Lead plumbing
• Pewter
• Lead-based paints
• Weathering of Pb minerals
World Health Organisation – recommended
tolerable intake of Pb per day for an adult –
430 µg
Foodstuffs              < 2 mg/kg Pb
Next to highways        20-950 mg/kg Pb
Near battery works      34-600 mg/kg Pb
Metal processing sites  45-2714 mg/kg Pb
CONFIDENCE INTERVALS
The confidence interval is the expression stating that
the true mean, µ, is likely to lie within a certain
distance from the measured mean, x.
– Student’s t test
The confidence interval is given by:

\mu = \bar{x} \pm \frac{t s}{\sqrt{n}}

where t is the value of Student's t taken from the table.
A ‘t’ test is used to compare sets of measurements.
Usually 95% probability is good enough.
Example:
The mercury content of fish samples was determined as follows: 1.80, 1.58, 1.64, 1.49 ppm Hg. Calculate the
50% and 90% confidence intervals for the mercury
content.
x̄ = 1.63 ppm,  s = 0.131 ppm

50% confidence:
t = 0.765 for n − 1 = 3 degrees of freedom

\mu = \bar{x} \pm \frac{t s}{\sqrt{n}} = 1.63 ± 0.05 ppm

There is a 50% chance that the true mean lies between 1.58 and 1.68 ppm.

90% confidence:
t = 2.353 for n − 1 = 3 degrees of freedom

\mu = \bar{x} \pm \frac{t s}{\sqrt{n}} = 1.63 ± 0.15 ppm

There is a 90% chance that the true mean lies between 1.48 and 1.78 ppm.
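The same confidence intervals can be obtained without a printed t-table, for example with scipy (a sketch; scipy is assumed to be available):

```python
# Confidence intervals for the Hg data using Student's t from scipy.
from statistics import mean, stdev
from math import sqrt
from scipy import stats

hg = [1.80, 1.58, 1.64, 1.49]              # ppm Hg
n = len(hg)
x_bar, s = mean(hg), stdev(hg)

for conf in (0.50, 0.90):
    t = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)   # two-sided Student's t
    half_width = t * s / sqrt(n)
    print(f"{conf:.0%} CI: mu = {x_bar:.2f} ± {half_width:.2f} ppm")
# about 1.63 ± 0.05 ppm (50%) and 1.63 ± 0.15 ppm (90%)
```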
Confidence intervals express experimental uncertainty.
APPLYING STUDENT’S T:
1) COMPARISON OF MEANS
Comparison of a measured result with a ‘known’
(standard) value
t_{calc} = \frac{|\text{known value} - \bar{x}|}{s}\sqrt{n}
If t_calc > t_table at the 95% confidence level:
→ the results are considered to be different
→ the difference is significant!
Statistical tests are giving only probabilities.
They do not relieve us of the responsibility of interpreting
our results!
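A sketch of this comparison in Python (hypothetical replicate results and certified value; scipy supplies t_table):

```python
# Comparing a measured mean with a 'known' (standard) value using Student's t.
from statistics import mean, stdev
from math import sqrt
from scipy import stats

results = [10.52, 10.46, 10.50, 10.49]     # hypothetical replicate results
known_value = 10.60                        # hypothetical certified value

n = len(results)
x_bar, s = mean(results), stdev(results)

t_calc = abs(known_value - x_bar) * sqrt(n) / s
t_table = stats.t.ppf(0.975, df=n - 1)     # 95% confidence, two-sided

print(f"t_calc = {t_calc:.2f}, t_table = {t_table:.2f}")
print("significant difference" if t_calc > t_table else "no significant difference")
```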
2) COMPARISON OF REPLICATE MEASUREMENTS
For 2 sets of data with number of measurements n1 , n2
and means x1, x2 :
t_{calc} = \frac{|\bar{x}_1 - \bar{x}_2|}{s_{pooled}} \sqrt{\frac{n_1 n_2}{n_1 + n_2}}

where s_pooled = pooled std dev. from both sets of data:

s_{pooled} = \sqrt{\frac{s_1^2 (n_1 - 1) + s_2^2 (n_2 - 1)}{n_1 + n_2 - 2}}
If t_calc > t_table at the 95% confidence level
→ the difference between the results is significant.
Degrees of freedom = (n1 + n2 – 2)
(One sample, many measurements.)
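A sketch with two hypothetical sets of replicates, following the pooled-standard-deviation formulas above:

```python
# Comparing two means using a pooled standard deviation.
from statistics import mean, stdev
from math import sqrt
from scipy import stats

set1 = [0.1067, 0.1071, 0.1066, 0.1063]    # hypothetical replicates, method 1
set2 = [0.1072, 0.1077, 0.1075, 0.1069]    # hypothetical replicates, method 2

n1, n2 = len(set1), len(set2)
x1, x2 = mean(set1), mean(set2)
s1, s2 = stdev(set1), stdev(set2)

s_pooled = sqrt((s1**2 * (n1 - 1) + s2**2 * (n2 - 1)) / (n1 + n2 - 2))
t_calc = abs(x1 - x2) / s_pooled * sqrt(n1 * n2 / (n1 + n2))
t_table = stats.t.ppf(0.975, df=n1 + n2 - 2)   # 95% confidence

print(f"t_calc = {t_calc:.2f}, t_table = {t_table:.2f}")
```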
3) COMPARISON OF INDIVIDUAL DIFFERENCES
Use two different analytical methods, A and B, to make
single measurements on several different samples.
Perform t test on individual differences between results:
t_{calc} = \frac{\bar{d}}{s_d}\sqrt{n}

where

s_d = \sqrt{\frac{\sum_i (d_i - \bar{d})^2}{n - 1}}

d̄ = the average difference between methods A and B
n = the number of pairs of data

(Many samples, one measurement.)
If t_calc > t_table at the 95% confidence level
→ the difference between the results is significant.
Example:
Are the two methods used comparable?
For the individual differences (d_i) between the paired results:

s_d = \sqrt{\frac{\sum_i (d_i - \bar{d})^2}{n - 1}} = 0.12

t_{calc} = \frac{\bar{d}}{s_d}\sqrt{n} = 1.2
F TEST
COMPARISON OF TWO STANDARD
DEVIATIONS
F_{calc} = \frac{s_1^2}{s_2^2}

If F_calc > F_table at the 95% confidence level:
→ the std dev.'s are considered to be different
→ the difference is significant.
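A sketch of the F test with hypothetical standard deviations; the larger variance is conventionally placed in the numerator, and scipy supplies the critical value:

```python
# F test for comparing two standard deviations.
from scipy import stats

s1, n1 = 0.047, 6                          # hypothetical std dev and replicate count, method 1
s2, n2 = 0.025, 5                          # hypothetical std dev and replicate count, method 2

if s1 >= s2:
    F_calc = s1**2 / s2**2
    F_table = stats.f.ppf(0.95, dfn=n1 - 1, dfd=n2 - 1)
else:
    F_calc = s2**2 / s1**2
    F_table = stats.f.ppf(0.95, dfn=n2 - 1, dfd=n1 - 1)

print(f"F_calc = {F_calc:.2f}, F_table = {F_table:.2f}")
```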
Q TEST FOR BAD DATA
Q_{calc} = \frac{\text{gap}}{\text{range}}
The range is the total spread
of the data.
The gap is the difference
between the “bad” point and
the nearest value.
Example:  12.2   12.4   12.5   12.6   12.9
The gap is the difference between the suspect point (12.9) and its nearest value (12.6); the range is the spread from 12.2 to 12.9.
If Q_calc > Q_table → discard the questionable point.
EXAMPLE:
The following replicate analyses were obtained when
standardising a solution: 0.1067M, 0.1071M, 0.1066M and 0.1050M.
One value appears suspect. Determine if it can be ascribed to
accidental error at the 90% confidence interval.
Arrange in increasing order: 0.1050 M, 0.1066 M, 0.1067 M, 0.1071 M

Q_{calc} = \frac{\text{gap}}{\text{range}} = \frac{0.1066 - 0.1050}{0.1071 - 0.1050} = 0.7619

The tabulated Q value at the 90% confidence level for n = 4 is about 0.76, essentially the same as Q_calc.
BUT these values are very close → rather do another analysis to confirm!
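A sketch reproducing the Q-test arithmetic for these standardisation data (the tabulated Q value is quoted as roughly 0.76 for n = 4 at 90% confidence in common tables):

```python
# Q test for the suspect value in the standardisation data.
data = sorted([0.1067, 0.1071, 0.1066, 0.1050])

gap = data[1] - data[0]                    # suspect point is the lowest value
data_range = data[-1] - data[0]
q_calc = gap / data_range                  # 0.0016 / 0.0021 = 0.762

q_table_90 = 0.76                          # approximate tabulated Q (90%, n = 4)
print(f"Q_calc = {q_calc:.3f} vs Q_table ≈ {q_table_90}  (borderline - repeat the analysis)")
```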
STATISTICS OF SAMPLING
A chemical analysis can only be as meaningful as the
sample!
Sampling – process of collecting a representative
sample for analysis
OVERALL VARIANCE = ANALYTICAL VARIANCE + SAMPLING VARIANCE

s_o^2 = s_a^2 + s_s^2
Where does the sampling variance come from?
Consider a powder mixture containing nA particles of
type A and nB particles type B.
Probability of drawing A:

p = \frac{n_A}{n_A + n_B}

Probability of drawing B:

q = \frac{n_B}{n_A + n_B} = 1 - p

If n particles are randomly drawn, the expected number of A particles will be np, and the standard deviation of many drawings will be:

\sigma_n = \sqrt{npq}
How much of the sample should be analysed?
Std dev.:

\sigma_n = \sqrt{npq}

where p, q are the fractions of each kind of particle present.

Relative std dev.:

R = \frac{\sigma_n}{n} = \frac{\sqrt{npq}}{n} = \sqrt{\frac{pq}{n}}

Relative variance:

R^2 = \left(\frac{\sigma_n}{n}\right)^2 = \frac{pq}{n}

⇒  nR^2 = pq
The mass of sample (m) is proportional to number of
particles (n) drawn, therefore:
K_s = m R^2

where R = RSD as a % and
K_s (sampling constant) = the mass of sample required to reduce the relative sampling standard deviation to 1%.
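A sketch with hypothetical values of p, q, n and m, illustrating R = √(pq/n) and the sampling constant K_s = mR² (with R expressed in %):

```python
# Relative sampling standard deviation and sampling constant (hypothetical inputs).
from math import sqrt

p = 0.04                   # hypothetical fraction of particle type A
q = 1 - p                  # fraction of particle type B
n = 25_000                 # hypothetical number of particles in the sample taken
m = 0.100                  # hypothetical sample mass / g

R = sqrt(p * q / n)                    # relative std dev as a fraction
Ks = m * (100 * R) ** 2                # sampling constant (mass giving a 1% relative std dev)

print(f"R = {R:.4f} ({100 * R:.2f}%), Ks = {Ks:.4f} g")
```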
How many samples/replicates to analyse?
Rearranging Student’s t equation:
Starting from

\mu = \bar{x} \pm \frac{t s_s}{\sqrt{n}}

the sought-for uncertainty is e = \frac{t s_s}{\sqrt{n}}, so the required number of replicate analyses is:

n = \frac{t^2 s_s^2}{e^2}
µ = true population mean
x = measured mean
n = number of samples needed
ss2 = variance of the sampling operation
e = sought-for uncertainty
Since degrees of freedom is not known at this stage,
the value of t for n → ∞ is used to estimate n.
The process is then repeated a few times until a
constant value for n is found.
Example:
In analysing a lot with random sample variation, there is
a sampling deviation of 5%. Assuming negligible error
in the analytical procedure, how many samples must be
analysed to give 90% confidence that the error in the
mean is within 4% of the true value?
n = \frac{t^2 s_s^2}{e^2}

For 90% confidence, start with t = 1.645 (the value for n → ∞) and iterate as described above:

n = 6
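A sketch of the iterative estimate, using scipy for Student's t (values from this example; the starting t of about 1.645 is the n → ∞ value for 90% confidence):

```python
# Iteratively estimating the number of samples needed: n = t^2 * s_s^2 / e^2.
from scipy import stats

s_s, e, conf = 5.0, 4.0, 0.90              # sampling std dev (%), allowed error (%), confidence

t = stats.norm.ppf(1 - (1 - conf) / 2)     # start with t for n -> infinity (about 1.645)
n = 0
for _ in range(20):                        # repeat until n stops changing
    n_new = round(t**2 * s_s**2 / e**2)
    if n_new == n:
        break
    n = n_new
    t = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)

print(f"n = {n}")                          # settles at n = 6
```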
SAMPLE STORAGE
Not only are the sampling and sample preparation important, but the sample storage is also critical.
The composition of the sample may change with time
due to, for example, the following:
• reaction with air
• reaction with light
• absorption of moisture
• interaction with the container
Glass is a notorious ion exchanger which can alter the
concentration of trace ions in solution.
Thus plastic (especially Teflon) containers are
frequently used.
Ensure all containers are clean to prevent
contamination.
EXAMPLE: (for you to do)
Consider a random mixture containing 4.00 g of Na2CO3 (ρ = 2.532 g/ml) and 96.00 g of K2CO3 (ρ = 2.428 g/ml) with an approximately uniform spherical particle radius of 0.075 mm.
How many particles of Na2CO3 are in the mixture? And
K2CO3?
Na2CO3: 4.00 g at 2.532 g/ml
V = m/ρ = 4.00 / 2.532 = 1.58 cm³

K2CO3: 96.00 g at 2.428 g/ml
V = m/ρ = 96.00 / 2.428 = 39.54 cm³

Particles (uniform spheres, r = 0.075 mm = 0.0075 cm):
volume of one particle = (4/3)πr³

n(Na2CO3) = V(Na2CO3) / particle volume = 8.94 × 10⁵ particles
n(K2CO3) = V(K2CO3) / particle volume = 2.24 × 10⁷ particles
EXAMPLE:
Consider a random mixture containing 4.00 g of Na2CO3 (ρ = 2.532 g/ml) and 96.00 g of K2CO3 (ρ = 2.428 g/ml) with an approximately uniform spherical particle radius of 0.075 mm.
What is the expected number of particles in 0.100 g of
the mixture?
0.100 g is 1/1000 of the 100 g mixture, so it is expected to contain
8.94 × 10² particles of Na2CO3 and
2.24 × 10⁴ particles of K2CO3.
EXAMPLE:
Calculate the relative standard deviation in the number
of particles for each type in the 0.100 g sample of the
mixture.
R(Na2CO3) = 0.0328 or 3.28%
R(K2CO3) = 0.00131 or 0.131%
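A sketch reproducing the particle counts and relative standard deviations in these examples:

```python
# Particle statistics for the Na2CO3 / K2CO3 mixture.
from math import pi, sqrt

r = 0.0075                                  # particle radius / cm (0.075 mm)
v_particle = 4 / 3 * pi * r**3              # volume of one spherical particle / cm^3

n_na = (4.00 / 2.532) / v_particle          # about 8.94e5 Na2CO3 particles in 100 g
n_k = (96.00 / 2.428) / v_particle          # about 2.24e7 K2CO3 particles in 100 g

p = n_na / (n_na + n_k)                     # fraction of Na2CO3 particles
q = 1 - p

n_sample = (n_na + n_k) / 1000              # a 0.100 g sample holds 1/1000 of the particles
sigma_n = sqrt(n_sample * p * q)            # absolute std dev in the particle count

R_na = sigma_n / (n_sample * p)             # about 0.0328  (3.28%)
R_k = sigma_n / (n_sample * q)              # about 0.00131 (0.131%)
print(f"R(Na2CO3) = {R_na:.4f}, R(K2CO3) = {R_k:.5f}")
```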