Errors, Random Errors, and Statistical Data in Chemical Analyses

Introduction to Analytical Chemistry
Chapter 3: Errors, Random Errors, and Statistical Data in Chemical Analyses

Errors, Random Errors, and Statistical Data in Chemical Analyses
 Impossible: The analytical results are free of errors or
uncertainties.
 Possible: Minimize these errors and estimate their size
with acceptable accuracy.
 Many statistical calculations are available for judging the quality of experimental measurements.
Errors, Random Errors, and Statistical Data in Chemical Analyses
 Only two will be considered: The confidence interval
and the least-squares method for constructing
calibration curves.
Figure 3-1 Results from six replicate determinations for iron in aqueous samples of a standard
solution containing 20.00 ppm of iron(III).
Errors, Random Errors, and Statistical Data in Chemical Analyses
 Measurements are always accompanied by uncertainty, so the true value always lies somewhere within a range set by that uncertainty.
 The probable magnitude of the error defines the limits within which the true value lies.
 Data of unknown quality are worthless.
 The true value of a measurement is never known
exactly.
Reliability Can Be Assessed in Several Ways
 Standards of known composition can be analyzed and
the results compared with the known composition.
 Calibrating equipment enhances the quality of data.
Reliability Can Be Assessed in Several Ways
 Questions to answer before beginning an analysis:
 What is the maximum error that I can tolerate in the result?
 No one can afford to waste time generating data that are more
reliable than is needed.
3A-1 The Mean and Median
 Mean, arithmetic mean, and average (x̄) are synonyms.

x̄ = (Σxi) / N        (3-1)

where xi represents the individual values of x making up a set of N replicate measurements. The symbol Σxi means to add all the values xi for the replicates.
3A-1 The Mean and Median
 The median is the middle result when replicate data are
arranged in order of size.
 Equal numbers of results are larger and smaller than
the median.
 For an odd number of data points, the median can be
evaluated directly. For an even number, the mean of
the middle pair is used.
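As a quick check of these definitions, here is a minimal Python sketch; the six replicate values are hypothetical stand-ins, not the Figure 3-1 data.

    from statistics import mean, median

    # Hypothetical replicate results (ppm); not the Figure 3-1 values.
    results = [19.5, 19.6, 19.7, 19.9, 20.0, 20.2]

    print("mean   =", round(mean(results), 2))    # arithmetic mean, Equation 3-1
    print("median =", round(median(results), 2))  # even N: average of the middle pair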
3A-1 The Mean and Median
 The mean of two or more measurements is their
average value.
 The median is used advantageously when a set of data
contains an outlier, a result that differs significantly
from others in the set.
Example 3-1
 Calculate the mean and the median for the data shown
in Figure 3-1.
Because the set contains an even number of
measurements, the median is the average of the
central pair:
3A-2 What Is Precision?
 Precision describes the reproducibility of
measurements; the closeness of results to each other.
 Precision is determined by repeating the measurement
on replicate samples.
 Three terms are widely used to describe the precision of a set of replicate data: standard deviation, variance, and coefficient of variation.
3A-2 What Is Precision?
 Precision is a function of the deviation from the mean di, or just the deviation, which is defined as

di = |xi − x̄|        (3-2)

 Precision is the closeness of results to others that have been obtained in exactly the same way.
3A-3 How About Accuracy?
 Accuracy indicates the closeness of the measurement
to its true or accepted value and is expressed by the
error.
Accuracy measures agreement between a result and its true
value.
 Precision describes the agreement among several results that
have been obtained in the same way.
Figure 3-2 Illustration of accuracy and precision using the pattern of darts on a dartboard.
3A-3 How About Accuracy?
 Absolute Error
 The absolute error E in the measurement of a quantity xi is given by the equation

E = xi − xt        (3-3)

where xt is the true, or accepted, value of the quantity.
 Note that we retain the sign in stating the error.
3A-3 How About Accuracy?
 Relative error
 Often, the relative error Er is a more useful quantity than the absolute error. The percent relative error is given by the expression

Er = [(xi − xt) / xt] × 100%        (3-4)
3A-3 How About Accuracy?
 Relative error
 Relative error is also expressed in parts per thousand (ppt), e.g., Er = [(xi − xt) / xt] × 1000 ppt.
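A minimal sketch of Equations 3-3 and 3-4 in Python; the measured and true values are assumed for illustration only.

    # Assumed values for illustration.
    x_i = 19.78   # measured result, ppm Fe
    x_t = 20.00   # true (accepted) value, ppm Fe

    E = x_i - x_t                        # absolute error, Eq. 3-3 (sign retained)
    E_r_pct = (x_i - x_t) / x_t * 100    # percent relative error, Eq. 3-4
    E_r_ppt = (x_i - x_t) / x_t * 1000   # relative error in parts per thousand

    print(f"E = {E:.2f} ppm  Er = {E_r_pct:.1f}%  ({E_r_ppt:.0f} ppt)")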
3A-4 Classifying Experimental Errors
 To determine accuracy, we have to know the true value,
and this value is exactly what we are seeking in the
analysis.
 If we know the answer precisely, do we also know it accurately?
 The danger is illustrated in Figure 3-3, which
summarizes the results for the determination of
nitrogen in two pure compounds: benzyl isothiourea
hydrochloride and nicotinic acid.
Figure 3-3 Absolute error in the micro-Kjeldahl determination of nitrogen. Each dot represents the error associated with a single determination. Each vertical line labeled (x̄i − xt) is the absolute average deviation of the set from the true value. (Data from C. O. Willits and C. L. Ogg, J. Assoc. Anal. Chem., 1949, 32, 561. With permission.)
3A-4 Classifying Experimental Errors
 Figures 3-1 and 3-3 suggest that chemical analyses
are affected by at least two types of errors.
 One type, called random (or indeterminate) error,
causes data to be scattered more or less
symmetrically around a mean value.
 Refer to Figure 3-3: Notice that the scatter in the
data, and thus the random error, for analysts 1 and
3 is significantly less than that for analysts 2 and 4.
3A-4 Classifying Experimental Errors
 A second type of error, called systematic (or
determinate) error, causes the mean of a set of data to
differ from the accepted value.
 The results of analysts 1 and 2 in Figure 3-3 have little
systematic error, but the data of analysts 3 and 4 show
systematic errors of about –0.7 and –1.2% nitrogen.
3A-4 Classifying Experimental Errors
 Random, or indeterminate, errors are errors that affect
the precision of measurement.
 Systematic, or determinate, errors affect the accuracy
of results.
3A-4 Classifying Experimental Errors
 A third type of error is gross error. Gross errors differ from indeterminate and determinate errors: they occur only occasionally, are often large, and may cause a result to be either high or low.
 Gross errors lead to outliers, results that appear to
differ markedly from all other data in a set of replicate
measurements.
3A-4 Classifying Experimental Errors
 Various statistical tests can be done to determine if a
data point is an outlier.
3B Systematic Errors
 Systematic errors have a definite value, an assignable
cause, and are of about the same magnitude for
replicate measurements made in the same way.
 Systematic errors lead to bias in measurement results. Note that bias affects all the data in a set in approximately the same way and that it bears a sign.
3B-1 How Do Systematic Errors Arise?
 Three types of systematic errors:
 (1) Instrument errors are caused by imperfections in
measuring devices and instabilities in their components.
 (2) Method errors arise from non-ideal chemical or physical
behavior of analytical systems.
 (3) Personal errors result from the carelessness, inattention, or
personal limitations of the experimenter.
3B-1 How Do Systematic Errors Arise?
 Instrument Errors
 All measuring devices are potential sources of systematic error; pipets, burets, and volumetric flasks, for example, may deliver volumes slightly different from those indicated by their graduations.
 These differences arise from using glassware at a temperature
that differs significantly from the calibration temperature,
from distortions in container walls due to heating while drying,
from errors in the original calibration, or from contaminants or
scratches on the inner surfaces of the containers.
3B-1 How Do Systematic Errors Arise?
 Instrument errors
 Electronic instruments are subject to instrumental systematic
errors. These uncertainties have many sources. For example,
errors emerge as the voltage of a battery-operated power
supply decreases with use.
 Instrument errors are detectable and correctable.
3B-1 How Do Systematic Errors Arise?
 Method Errors
 The nonideal chemical or physical behavior of the reagents and reactions on which an analysis is based often introduces systematic method errors.
 Such sources of nonideality include the slowness and incompleteness of reactions, the instability of species, the nonspecificity of most reagents, and possible interferences.
3B-1 How Do Systematic Errors Arise?
 Method Errors
 In Figure 3-3, the results by analysts 3 and 4 show a negative
bias that can be traced to the chemical nature of the sample,
nicotinic acid.
 Compounds containing a pyridine ring, such as nicotinic acid, are incompletely decomposed by the sulfuric acid; hence, the negative errors in Figure 3-3 are likely systematic errors from incomplete decomposition of the samples.
3B-1 How Do Systematic Errors Arise?
 Method Errors
 Errors inherent in a method are often difficult to detect and
are thus the most serious of the three types of systematic
error.
3B-1 How Do Systematic Errors Arise?
 Personal Errors
 Many measurements require personal judgments.
 Judgments of this type are often subject to systematic,
unidirectional errors.
 An analyst who is insensitive to color changes tends to use
excess reagent in a volumetric analysis. Physical disabilities are
often sources of personal determinate errors.
 A universal source of personal error is prejudice.
3B-1 How Do Systematic Errors Arise?
 Personal Errors
 Number bias is another source of personal error that varies
considerably from person to person.
 The most common number bias encountered in estimating the
position of a needle on a scale involves a preference for the
digits 0 and 5. Also prevalent is a prejudice favoring small
digits over large and even numbers over odd.
 Color blindness amplifies personal errors in a volumetric
analysis.
3B-2 What Effects Do Systematic Errors Have on Analytical Results?
 Systematic errors may be either constant or
proportional.
 The magnitude of a constant error does not depend on
the size of the quantity measured.
 Proportional errors increase or decrease in proportion
to the size of the sample taken for analysis.
3B-3 Detecting Systematic Instrument and Personal Errors
 Systematic instrument errors are usually corrected by
periodic calibration of equipment. The response of
most instruments changes with time.
 Most personal errors can be minimized by care and
self-discipline.
3B-4 Detecting Systematic Method Errors
 Bias in an analytical method is particularly difficult to detect. One or more of the following steps can be taken to recognize and adjust for a systematic error in an analytical method.
3B-4 Detecting Systematic Method Errors
 Analyzing Standard Samples
 Analyzing standard reference materials (SRMs) is the best way to estimate the bias of an analytical method.
 SRMs contain one or more analytes at well-known or certified concentration levels.
 Standard materials can sometimes be prepared by synthesis.
 Standard reference material can be purchased from a number
of governmental and industrial sources.
3B-4 Detecting Systematic Method Errors
 Analyzing Standard Samples
 The concentration of one or more of the components in these
materials has been determined in one of three ways:
(1) by analysis with a previously validated reference method;
 (2) by analysis by two or more independent, reliable measurement
methods; or
 (3) by analysis by a network of cooperating laboratories that are technically competent and thoroughly knowledgeable about the material being tested.
3B-4 Detecting Systematic Method Errors
Standard reference materials from NIST. (Photo courtesy of the National Institute of Standards and Technology.)
3B-4 Detecting Systematic Method Errors
 Using an Independent Analytical Method
 If standard samples are not available, a second independent
and reliable analytical method can be used in parallel with the
method being evaluated.
 A statistical test must be used to determine whether any
difference is a result of random errors in the two methods or
due to bias in the method under study.
3B-4 Detecting Systematic Method Errors
 Performing Blank Determinations
 Blank determinations are useful for detecting certain types of
constant errors.
 In a blank determination, or blank, all steps of the analysis are
performed in the absence of a sample.
 The results from the blank are then applied as a correction to
the sample measurements.
 Blank determinations reveal such errors and allow the data to be corrected.
3B-4 Detecting Systematic Method Errors
 Varying the Sample Size
 Constant errors can often be detected by varying the sample
size.
3C The Nature of Random Errors
 All measurements contain random errors.
 Random, or indeterminate, errors occur whenever a
measurement is made.
 Caused by many small but uncontrollable variables.
 These errors are cumulative.
3C-1 What Are the Sources of Random Errors?
 Imagine a situation in which just four small random
errors combine to give an overall error.
 We will assume that each error has an equal probability
of occurring and that each can cause the final result to
be high or low by a fixed amount ±U.
 Table 3-1 shows all the possible ways the four errors
can combine to give the indicated deviations from the
mean value.
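The combinations in Table 3-1 can be enumerated directly. The sketch below assumes a unit error magnitude U and counts how often each net deviation occurs when four equal errors of ±U combine; the resulting 1:4:6:4:1 pattern is what Figure 3-4a plots.

    from itertools import product
    from collections import Counter

    U = 1  # assumed error magnitude, for illustration

    # Every way four errors of +U or -U can combine (2**4 = 16 cases).
    totals = Counter(sum(signs) * U for signs in product((+1, -1), repeat=4))

    for deviation in sorted(totals):
        print(f"net deviation {deviation:+d}U: relative frequency {totals[deviation] / 16:.4f}")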
Table 3-1
Figure 3-4 Frequency distribution for measurements containing (a) four random uncertainties, (b) ten random uncertainties, and (c) a very large number of random uncertainties.
Figure 3-4
 For a sufficiently large number of measurements,
we can expect a frequency distribution like that
shown in Figure 3-4a. The ordinate is the relative
frequency of occurrence of the five possible
combinations.
Figure 3-4
 Figure 3-4b shows the theoretical distribution for ten
equal-sized uncertainties.
Figure 3-4
 For a very large number of individual errors, a bell-
shaped curve like that shown in Figure 3-4c results.
Such a plot is called a Gaussian curve or a normal error
curve.
Table 3-2
3C-2 Describing the Distribution of Experimental Data
 This 0.025 mL spread of data, from a low of 9.969 mL to
a high of 9.994 mL, results directly from an
accumulation of all the random uncertainties in the
experiment.
 Rearrange Table 3-2 into frequency distribution groups,
as in Table 3-3.
 Note that 26% of the data reside in the cell containing the mean and median value of 9.982 mL and that more than half the results are within ±0.004 mL of this mean.
3C-2 Describing the Distribution of Experimental Data
 The frequency distribution data in Table 3-3 are plotted
as a bar graph, or histogram (labeled A in Figure 3-5).
 As the number of measurements increases, the
histogram approaches the shape of the continuous
curve shown as plot B in Figure 3-5 (a Gaussian curve,
or normal error curve).
Figure 3-5 A histogram (A) showing distribution of the 50 results in Table 3-3
and a Gaussian curve (B) for data having the same mean and same standard
deviation as the data in the histogram.
Sources of random uncertainties
 Many small and uncontrollable variables affect even
the simple process of calibrating a pipet.
 The cumulative effect of random uncertainties is
responsible for the scatter of data points around the
mean.
 Statistics only reveal information that is already present
in a data set.
 Do not confuse the statistical sample with the
analytical sample.
Sources of random uncertainties
 (1) visual judgments, such as the level of the water with
respect to the marking on the pipet and the mercury
level in the thermometer
 (2) variations in the drainage time and in the angle of
the pipet as it drains
 (3) temperature fluctuations, which affect the volume
of the pipet, the viscosity of the liquid, and the
performance of the balance
 (4) vibrations and drafts that cause small variations in
the balance readings.
3D Treating Random Errors with Statistics
 The random, or indeterminate, errors in the results of
an analysis can be evaluated by the methods of
statistics.
 Ordinarily, statistical analysis of analytical data is based
on the assumption that random errors follow a
Gaussian, or normal, distribution.
3D Treating Random Errors with Statistics
 Sometimes analytical data depart seriously from
Gaussian behavior, but the normal distribution is the
most common.
 We base this discussion entirely on normally
distributed random errors.
3D-1 Samples and Populations
 In statistics, a finite number of experimental
observations is called a sample of data; this is different
from the term used in chemical analysis.
 Statisticians call the theoretical infinite number of data
a population, more specifically a parent population, or a
universe, of data.
3D-1 Samples and Populations
 Statistical laws must be modified substantially when
applied to a small sample because a few data points
may not be representative of the population.
3D-2 Characterizing Gaussian Curves
 Figure 3-6a shows two Gaussian curves in which the
relative frequency y of occurrence of various deviations
from the mean is plotted as a function of the deviation
from the mean.
 The equation for a Gaussian curve has the form

y = e^[−(x − μ)² / (2σ²)] / (σ√(2π))
3D-2 Characterizing Gaussian Curves
 The equation contains just two parameters, the
population mean μ and the population standard
deviation σ.
Figure 3-6 Normal error curves. The standard deviation for curve B is twice that for curve A; that is, σB = 2σA. (a) The abscissa is the deviation from the mean in the units of measurement. (b) The abscissa is the deviation from the mean in units of σ. Thus, the two curves A and B are identical here.
3D-2 Characterizing Gaussian Curves
• The Population Mean μ and the Sample Mean x̄
 Sample mean: x̄ = (Σxi) / N, computed from a finite (small) number N of measurements.
 Population mean: μ = lim (Σxi / N) as N → ∞.
 The difference between x̄ and μ decreases rapidly as N reaches 20 to 30.
3D-2 Characterizing Gaussian Curves
 The Population Standard Deviation (σ)
 σ is a measure of the precision or scatter of a population of data, which is given by the equation

σ = √[ Σ(xi − μ)² / N ]        (3-5)

where N is the number of data points making up the population.
3D-2 Characterizing Gaussian Curves
 The Population Standard Deviation (σ)
 The two curves in Figure 3-6a are for two populations of data
that differ only in their standard deviations.
 The standard deviation for the data set yielding the broader
but lower curve B is twice that for the measurements yielding
curve A.
 The precision of the data leading to curve A is twice as good as
that of the data that are represented by curve B.
3D-2 Characterizing Gaussian Curves
 The Population Standard Deviation (σ)
 Figure 3-6b shows another type of normal error curve in which the abscissa is now a new variable z, which is defined as

z = (x − μ) / σ        (3-6)

z is the deviation of a data point from the mean relative to one standard deviation. That is, when x – μ = σ, z is equal to one; when x – μ = 2σ, z is equal to two.
3D-2 Characterizing Gaussian Curves
 The Population Standard Deviation (σ)
 A plot of relative frequency versus this parameter yields a
single Gaussian curve that describes all populations of data
regardless of standard deviation.
3D-2 Characterizing Gaussian Curves
 The Population Standard Deviation (σ)
 Variance: the square of the standard deviation, σ².
 A normal error curve has several general properties:
 (1) The mean occurs at the central point of maximum frequency,
 (2) there is a symmetrical distribution of positive and negative deviations about the maximum, and
 (3) there is an exponential decrease in frequency as the magnitude of the deviations increases; that is, small random uncertainties are more common than large ones.
3D-2 Characterizing Gaussian Curves
 Areas under a Gaussian Curve
 Regardless of its width, 68.3% of the data making up the
population will lie within the bounds bracketed by ±1σ.
 Approximately 95.4% of all data points are within ±2σ of the
mean and 99.7% within ±3σ.
 These are shown in Figure 3-6.
 Because of such area relationships, the standard deviation of a
population of data is a useful predictive tool.
Feature 3-2
For μ = 0, the areas under the Gaussian curve between x = −σ and +σ, between x = −2σ and +2σ, and between x = −3σ and +3σ correspond to the 68.3%, 95.4%, and 99.7% figures given above.
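These areas can be verified numerically from the normal error curve; a short sketch using the error function (the integral of the Gaussian) reproduces the three percentages.

    import math

    def area_within(k):
        """Fraction of a Gaussian population lying within ±k standard deviations."""
        return math.erf(k / math.sqrt(2))

    for k in (1, 2, 3):
        print(f"area within ±{k}σ: {area_within(k) * 100:.1f}%")   # 68.3, 95.4, 99.7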
3D-3 Finding the Sample Standard Deviation
 Equation 3-5 must be modified for a small sample of data. Thus, the sample standard deviation s is given by the equation

s = √[ Σ(xi − x̄)² / (N − 1) ]        (3-7)

 The quantity N – 1 is called the number of degrees of freedom.
3D-3 Finding the Sample Standard Deviation
 An Alternative Expression for Sample Standard Deviation

s = √{ [Σxi² − (Σxi)²/N] / (N − 1) }        (3-8)
Example 3-3
The following results were obtained in the
replicate determination of the lead content of a
blood sample: 0.752, 0.756, 0.752, 0.751, and
0.760 ppm Pb. Calculate the mean and the
standard deviation of this set of data.
To apply Equation 3-8, we calculate Σxi² and (Σxi)²/N.
Substituting into Equation 3-8 leads to s = 0.0038 ppm Pb; the mean is x̄ = 0.754 ppm Pb.
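The arithmetic of Example 3-3 can be checked with a few lines of Python; both the defining form (Equation 3-7) and the alternative form (Equation 3-8) give the same answer when no intermediate rounding is done.

    import math

    x = [0.752, 0.756, 0.752, 0.751, 0.760]   # ppm Pb, from Example 3-3
    N = len(x)
    x_bar = sum(x) / N

    # Equation 3-7: deviations from the mean
    s_7 = math.sqrt(sum((xi - x_bar) ** 2 for xi in x) / (N - 1))

    # Equation 3-8: sums of squares (round only at the very end)
    s_8 = math.sqrt((sum(xi ** 2 for xi in x) - sum(x) ** 2 / N) / (N - 1))

    print(f"mean = {x_bar:.3f} ppm Pb,  s = {s_7:.4f} (Eq. 3-7) = {s_8:.4f} (Eq. 3-8)")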
3D-3 Finding the Sample Standard Deviation
 Note in Example 3-3 that the difference between Σxi² and (Σxi)²/N is very small. If we had rounded these numbers before subtracting them, a serious error would have appeared in the computed value of s. To avoid this source of error, never round a standard deviation calculation until the very end.
3D-3 Finding the Sample Standard Deviation
 What Is the Standard Error of the Mean?
 If replicate samples, each containing N measurements, are taken randomly from a population of data, the mean of each set will show less and less scatter as N increases.
 The standard deviation of each mean is known as the standard error of the mean and is given the symbol sm.

sm = s / √N        (3-9)
3D-4 The Reliability of s as a Measure of Precision
 The rapid improvement in the reliability of s with
increases in N makes it feasible to obtain a good
approximation of σ when the method of measurement
is not excessively time consuming and when an
adequate supply of sample is available.
Pooling Data to Improve the Reliability of s
 Pooled data from a series of similar samples
accumulated over time provide an estimate of s that is
superior to the value for any individual subset.
 Assume the same sources of random error in all the
measurements.
Pooling Data to Improve the Reliability of s
 To obtain a pooled estimate of the standard deviation,
spooled, deviations from the mean for each subset are
squared; the squares of all subsets are then summed
and divided by an appropriate number of degrees of
freedom.
Feature 3-4 Equation for Calculating Pooled Standard Deviations
 The equation for computing a pooled standard deviation from several sets of data takes the form

spooled = √{ [Σ(xi − x̄1)² + Σ(xj − x̄2)² + Σ(xk − x̄3)² + …] / (N1 + N2 + N3 + … − Nt) }

where N1 is the number of results in set 1, N2 is the number in set 2, and so forth. The term Nt is the number of data sets that are pooled.
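A sketch of this pooling calculation in Python; the three small data sets are hypothetical placeholders, since the Example 3-4 table is not reproduced on these slides.

    import math

    # Hypothetical replicate sets (ppm Hg); not the Example 3-4 data.
    sets = [
        [1.80, 1.58, 1.64],
        [0.96, 0.98, 1.02, 1.10],
        [3.13, 3.35],
    ]

    # Sum of squared deviations from each subset's own mean.
    ss = sum(sum((x - sum(s) / len(s)) ** 2 for x in s) for s in sets)
    dof = sum(len(s) for s in sets) - len(sets)   # one degree of freedom lost per set
    s_pooled = math.sqrt(ss / dof)

    print(f"pooled standard deviation = {s_pooled:.3f} ppm Hg")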
Example 3-4
 The mercury in samples of seven fish taken from
Chesapeake Bay was determined by a method based on
the absorption of radiation by gaseous elemental
mercury.
 Calculate a pooled estimate of the standard deviation
for the method, based on the first three columns of
data:
 The values in the last two columns for specimen 1 were
computed as follows:
 The other data in columns 4 and 5 were obtained
similarly. Then
One degree of freedom is lost for each of the seven
samples.
3D-5 Alternative Terms for Expressing the Precision of Samples of Data
Other than the sample standard deviation, three other terms are often employed in reporting precision.
 1. The variance (s²):

s² = Σ(xi − x̄)² / (N − 1)        (3-10)

 People who do scientific work tend to use standard deviation rather than variance as a measure of precision.
Relative Standard Deviation (RSD) and Coefficient of Variation (CV)
 2. Relative standard deviation:

RSD = s / x̄    (often reported in parts per thousand: (s / x̄) × 1000 ppt)

 3. Coefficient of variation (CV):

CV = (s / x̄) × 100%        (3-11)
3D-5 Alternative Terms for Expressing the Precision of Samples of Data
 Spread or Range (w)
 Another term to describe the precision of a set of replicate
results.
 It is the difference between the largest value in the set and the
smallest.
 Example: The spread of the data in Figure 3-1 is (20.3 – 19.4) =
0.9 ppm Fe.
Example 3-5
 For the set of data in Example 3-3, calculate (a) the
variance, (b) the relative standard deviation in parts per
thousand, (c) the coefficient of variation, and (d) the
spread.
 In Example 3-3, we found x̄ = 0.754 ppm Pb and s = 0.0038 ppm Pb.
 (a) s² = (0.0038)² = 1.4 × 10⁻⁵
 (b) RSD = (0.0038 / 0.754) × 1000 ppt = 5.0 ppt
 (c) CV = (0.0038 / 0.754) × 100% = 0.50%
 (d) w = 0.760 – 0.751 = 0.009 ppm Pb
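The quantities in Example 3-5 follow directly from the Example 3-3 data; a minimal sketch:

    from statistics import mean, stdev

    x = [0.752, 0.756, 0.752, 0.751, 0.760]   # ppm Pb (Example 3-3)
    x_bar, s = mean(x), stdev(x)

    variance = s ** 2                 # Equation 3-10
    rsd_ppt  = s / x_bar * 1000       # relative standard deviation, ppt
    cv_pct   = s / x_bar * 100        # coefficient of variation, %
    spread   = max(x) - min(x)        # range w

    print(f"s^2 = {variance:.1e}  RSD = {rsd_ppt:.1f} ppt  CV = {cv_pct:.2f}%  w = {spread:.3f} ppm Pb")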
3E-1 The Standard Deviation of Sums and Differences
 Consider the summation of three numbers whose absolute standard deviations are ±0.02, ±0.03, and ±0.05.
3E-1 The Standard Deviation of Sums and Differences
 The error in the summation could be
as large as +0.02 + 0.03 + 0.05 = +0.10,
or as small as –0.02 – 0.03 – 0.05 = –0.10,
or any value lying between these two extremes,
or even
–0.02 – 0.03 + 0.05 = 0
or
+0.02 + 0.03 – 0.05 = 0
Variance and Propagation of Errors
 The variance of a sum or difference is equal to the sum of the individual variances, which demonstrates how the errors propagate.
 For the computation y = a + b – c, the variance of y, sy², is given by

sy² = sa² + sb² + sc²

so the standard deviation of the result is sy = √(sa² + sb² + sc²).
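A minimal sketch of this rule; only the three absolute standard deviations (±0.02, ±0.03, ±0.05) come from the summation above, while the values of a, b, and c are assumed for illustration.

    import math

    # Assumed values; the standard deviations are the ones quoted in the text.
    a, s_a = 0.50, 0.02
    b, s_b = 4.10, 0.03
    c, s_c = 1.97, 0.05

    y = a + b - c
    s_y = math.sqrt(s_a**2 + s_b**2 + s_c**2)   # variances add for sums and differences

    print(f"y = {y:.2f} ± {s_y:.2f}")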
Table 3-4
3E-2 The Standard Deviation of Products and Quotients
 As shown in Table 3-4, the relative standard deviation of a product or quotient is determined by the relative standard deviations of the numbers forming the computed result. For y = a·b/c,

sy / y = √[ (sa/a)² + (sb/b)² + (sc/c)² ]
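A companion sketch for a quotient, y = a·b/c; the inputs are assumed for illustration and are chosen so that they reproduce the 0.0104(±0.0003) answer quoted below.

    import math

    # Assumed inputs (value, absolute standard deviation); for illustration only.
    a, s_a = 4.10, 0.02
    b, s_b = 0.0050, 0.0001
    c, s_c = 1.97, 0.04

    y = a * b / c
    rel_s_y = math.sqrt((s_a / a)**2 + (s_b / b)**2 + (s_c / c)**2)  # relative s.d. of y
    s_y = abs(y) * rel_s_y

    print(f"y = {y:.4f} ± {s_y:.4f}")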
 Applying this equation to the numerical example gives the relative standard deviation of the result, and we can write the answer and its uncertainty as 0.0104(±0.0003).
3F Reporting Computed Data
 One of the best ways of indicating reliability is to give a
confidence interval at the 90% or 95% confidence level
as we describe in Section 3G-2.
 Another method is to report the absolute standard
deviation or the coefficient of variation of the data.
 A less satisfactory but more common indicator of the
quality of data is the significant figure convention.
3F-1 The Significant Figure Convention
 A simple way of indicating the probable uncertainty
associated with an experimental measurement is to
round the result so that it contains only significant
figures.
 The significant figures in a number are all the certain
digits plus the first uncertain digit.
3F-1 The Significant Figure Convention
 A zero may or may not be significant depending on its
location in a number.
 A zero that is surrounded by other digits is always
significant (such as in 30.24 mL).
 Zeros that only locate the decimal point for us are not.
3F-2 Significant Figures in Numerical Computations
 Sums and Differences
 For addition and subtraction, the number of significant figures can be found by visual inspection.
 For example, if one of the numbers being added, say 3.4, is uncertain in the first decimal place, the second and third decimal places in the answer cannot be significant.
3F-2 Significant Figures in Numerical Computations
 Products and Quotients
 A rule of thumb sometimes suggested for multiplication and
division is that the answer should be rounded so that it
contains the same number of significant digits as the original
number with the smallest number of significant digits.
3F-2 Significant Figures in Numerical Computations
 The first answer would be rounded to 1.1 and the
second to 0.96. If we assume a unit uncertainty in the
last digit of each number in the first quotient, however,
the relative uncertainties associated with each of these
numbers are
3F-2 Significant Figures in Numerical Computations
 Because the first relative uncertainty is much larger than the other two, the relative uncertainty in the result is also 1/24; the absolute uncertainty is then 1.08 × (1/24) = 0.045.
Therefore, the first result should be rounded to three significant figures, or 1.08, but the second should be rounded to only two, that is, 0.96.
3F-2 Significant Figures in Numerical Computations
Logarithms and Antilogarithms
 1. In a logarithm of a number, keep as many digits to the right of the decimal point as there are significant figures in the original number.
 2. In an antilogarithm of a number, keep as many digits as there are digits to the right of the decimal point in the original number.
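The two rules are easy to apply with a couple of lines of Python; the inputs below anticipate Example 3-7, which follows.

    import math

    # Rule 1: log of 4.000e-5 (4 significant figures -> keep 4 digits after the decimal)
    print(f"log 4.000e-5    = {math.log10(4.000e-5):.4f}")   # -4.3979

    # Rule 2: antilog of 12.5 (1 digit after the decimal -> keep 1 significant figure)
    print(f"antilog of 12.5 = {10 ** 12.5:.0e}")             # 3e+12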
Example 3-7
 Round the following answers so that only significant digits are retained: (a) log 4.000 × 10⁻⁵ = –4.3979400 and (b) antilog 12.5 = 3.162277 × 10¹².
(a) Following rule 1, we retain 4 digits to the right of the decimal point: log 4.000 × 10⁻⁵ = –4.3979.
(b) Following rule 2, we may retain only 1 digit: antilog 12.5 = 3 × 10¹².
3F-3 Rounding Data
 A good guide to follow when rounding a 5 is always to
round to the nearest even number. For example, 0.635
rounds to 0.64 and 0.625 rounds to 0.62.
 We should note that it is seldom justifiable to keep
more than one significant figure in the standard
deviation because the standard deviation contains
error as well.
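The round-to-even guide can be reproduced exactly with Python's decimal module; decimal strings are used because binary floats such as 0.635 are not stored exactly. A minimal sketch:

    from decimal import Decimal, ROUND_HALF_EVEN

    for value in ("0.635", "0.625"):
        rounded = Decimal(value).quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)
        print(f"{value} -> {rounded}")   # 0.64 and 0.62, as in the guide above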
3F-4 Rounding the Results from Chemical Computations
 The uncertainty of the result is estimated using the
techniques presented in Section 3E. Finally, the
result is rounded so that it contains only significant
digits.
 It is especially important to postpone rounding until
the calculation is completed. At least one extra digit
beyond the significant digits should be carried
through all the computations to avoid a rounding
error.
 This extra digit is sometimes called a “guard” digit.
3G Confidence Limits
 1. Defining a numerical interval around the mean of a set of
replicate analytical results within which the population
mean can be expected to lie with a certain probability. This
interval is called the confidence interval.
 2. Determining the number of replicate measurements
required to ensure at a given probability that an
experimental mean falls within a certain confidence interval.
 3. Using the least-squares method for constructing
calibration curves.
3G Confidence Limits
 Confidence limits define a numerical interval around x̄ that contains μ with a certain probability.
 A confidence interval is the numerical magnitude of the
confidence limit.
3G-1 Finding the Confidence Interval When s Is a Good Estimate of σ
 Confidence limits (CL) for μ:

CL for μ = x̄ ± zσ/√N        (3-15)
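A minimal sketch of Equation 3-15 at the 95% confidence level (z = 1.96); the mean, standard deviation, and N are assumed for illustration.

    import math

    # Assumed sample statistics, for illustration.
    x_bar = 0.754    # sample mean, ppm Pb
    sigma = 0.0038   # s treated as a good estimate of the population sigma
    N = 5
    z = 1.96         # z for the 95% confidence level

    half_width = z * sigma / math.sqrt(N)
    print(f"95% CI: {x_bar:.4f} ± {half_width:.4f} ppm Pb")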
3G-2 Finding the Confidence Interval When σ Is Unknown

CL for μ = x̄ ± ts/√N        (3-17)

where t (Student's t) depends on the desired confidence level and on N – 1 degrees of freedom (Table 3-6).
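When σ is unknown, t replaces z and the interval widens; the sketch below reuses the same assumed statistics and takes t = 2.78 for 4 degrees of freedom at the 95% level (a standard t-table value).

    import math

    x_bar, s, N = 0.754, 0.0038, 5   # assumed sample statistics
    t = 2.78                         # Student's t, 95% CL, N - 1 = 4 degrees of freedom

    half_width = t * s / math.sqrt(N)
    print(f"95% CI: {x_bar:.4f} ± {half_width:.4f} ppm Pb")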
Table 3-6
3H Analyzing Two-dimensional Data: The Least-squares Method
 A statistical technique called regression analysis provides the means for objectively obtaining the best straight line through a set of calibration data points and for specifying the uncertainties associated with its subsequent use.
3H-1 Assumptions of the Least-Squares Method
 When the method of least squares is used to generate a calibration curve, two assumptions are required. The first is that there is actually a linear relationship between the measured variable (y) and the analyte concentration (x). The second is that any deviation of the individual points from the straight line arises from error in the measurements (y); the x values are assumed to be essentially error-free.
 The mathematical relationship that describes the first assumption is called the regression model, which may be represented as

y = mx + b

where b is the y intercept (the value of y when x is zero) and m is the slope of the line.
3H-2 Computing the Regression Coefficients and Finding the Least-Squares Line
 The vertical deviation of each point from the straight
line is called a residual.
3H-2 Computing the Regression Coefficients and Finding the Least-Squares Line
 1. The slope of the line m:
 2. The intercept b:
 3. The standard deviation about regression sr :
 4. The standard deviation of the slope sm :
 5. The standard deviation of the intercept sb :
 6. The standard deviation for results obtained from the
calibration curve sc :
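Because the individual equations are not reproduced on these slides, the sketch below implements the standard least-squares expressions for the slope, intercept, standard deviation about regression, and standard deviation of the slope for an assumed calibration data set; treat it as an illustration of the quantities listed above rather than a transcription of the text's numbered equations.

    import math

    # Assumed calibration data: analyte concentration x vs. instrument response y.
    x = [0.0, 2.0, 4.0, 6.0, 8.0]
    y = [0.04, 0.51, 1.02, 1.49, 2.01]

    N = len(x)
    x_bar, y_bar = sum(x) / N, sum(y) / N
    S_xx = sum((xi - x_bar) ** 2 for xi in x)
    S_yy = sum((yi - y_bar) ** 2 for yi in y)
    S_xy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))

    m = S_xy / S_xx                                     # slope
    b = y_bar - m * x_bar                               # intercept
    s_r = math.sqrt((S_yy - m ** 2 * S_xx) / (N - 2))   # standard deviation about regression
    s_m = math.sqrt(s_r ** 2 / S_xx)                    # standard deviation of the slope

    print(f"m = {m:.4f}  b = {b:.4f}  sr = {s_r:.4f}  sm = {s_m:.4f}")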
3H-2 Computing the Regression Coefficients and Finding the Least-Squares Line
 The standard deviation about regression sr (Equation 3-24) is the standard deviation for y when the deviations are measured not from the mean of y (as is usually the case) but from the straight line that results from the least-squares analysis:

sr = √{ Σ[yi − (b + m·xi)]² / (N − 2) }        (3-24)
3H-2 Computing the Regression Coefficients and Finding the Least-Squares Line
 The standard deviation about regression is often called
the standard error of the estimate or the standard error
in y.
THE END