Guidelines for Single Laboratory Validation (SLV) of Chemical Methods for Metals in Food
Introduction
The application of analytical methods within a regulatory analysis or accredited laboratory framework imposes certain requirements on both the analyst and laboratory. Under ISO-17025, accredited laboratories are expected to demonstrate both "fitness for purpose" of the methods for which they are accredited and competency of their assigned analysts in performance of the methods1. The Codex Alimentarius Commission has issued a general guideline for analytical laboratories involved in the import and export testing of foods which contains four principles2:
- Such laboratories should demonstrate internal quality control procedures which meet the requirements of the Harmonised Guidelines for Internal Quality Control in Analytical Chemistry3;
- Such laboratories should be regular participants in appropriate proficiency testing schemes which have been designed and conducted as per the requirements of the International Harmonized Protocol for Proficiency Testing of (Chemical) Analytical Laboratories4;
- Such laboratories should become accredited for tests routinely performed according to ISO/IEC-17025:1999 General requirements for the competence of calibration and testing laboratories (now ISO/IEC-170251); and
- Such laboratories should use methods which have been validated according to the principles laid down by the Codex Alimentarius Commission whenever such methods are available.
General requirements for validation of analytical methods according to principles laid down by the Codex Alimentarius Commission are provided in the Codex Manual of Procedures, including provision for "single laboratory" validation of analytical methods5. However, there remains considerable misunderstanding among analysts as to precisely what is meant and what is required to demonstrate "method validation". Additional guidance for possible future inclusion in the Manual of Procedures is currently under discussion in the Codex Committee on Methods of Analysis and Sampling6. While compliance with Codex Alimentarius Commission standards and guidelines is voluntary for member states, subject to WTO agreements, they do reflect international consensus on the issues discussed. These guidelines can therefore be informative for the development of guidance documents to be used within AOAC International for issues such as single laboratory validation of analytical methods for trace elements.
Validation is defined by ISO as 'Confirmation by examination and provision of objective evidence that the particular requirements for a specified intended use are fulfilled'7. Method validation has been defined as:

"1. The process of establishing the performance characteristics and limitations of a method and the identification of the influences which may change these characteristics and to what extent. Which analytes can it determine in which matrices in the presence of which interferences? Within these conditions what levels of precision and accuracy can be achieved?

2. The process of verifying that a method is fit for purpose, i.e. for use for solving a particular analytical problem."8
In addition, it has been stated in the IUPAC Harmonized Guidelines for Single Laboratory Validation of Methods of Analysis9 that:

"Strictly speaking, validation should refer to an "analytical system" rather than an "analytical method", the analytical system comprising a defined method protocol, a defined concentration range for the analyte, and a specified type of test material."
Method validation can therefore be practically defined as a set of experiments conducted to confirm that an analytical procedure used for a specific test is suitable for its intended purpose on specific instrumentation and within the specific laboratory environment in which the set of experiments has been conducted. A collaborative study is considered to provide a more reliable indicator of method performance when the method is used in other laboratories, because it requires testing of the method in multiple laboratories, by different analysts using different reagents, supplies and equipment and working in different laboratory environments. Validation of a method, even through collaborative study, does not, however, provide a guarantee of method performance in any laboratory performing the method. This is where a second term, verification, is introduced.
Verification is usually defined as a set of experiments conducted by a different analyst or laboratory on a previously validated method to demonstrate that, in their hands, the performance standards established in the original validation are attained. That is, the method meets requirements for attributes such as scope (analytes/matrices), analytical range, freedom from interferences, precision and accuracy that have been identified for suitable application of the method to the intended use.
In contrast, method development is the series of experiments conducted to develop and optimize a specific analytical method for an analyte or group of analytes. This can involve investigations into detection/extraction of the analyte, stability of the analyte, analytical range, selectivity, ruggedness, etc. It is important to note that method validation experiments always take place after method development is complete; in other words, validation studies confirm the method performance parameters that were demonstrated during method development.
Validation should not begin until ruggedness testing has been completed. A ruggedness design should identify steps of the analytical method where small changes are made to determine whether they affect method results. A common approach is to vary seven factors simultaneously and measure the resulting changes to determine how they may affect method performance10. Once method development and ruggedness experiments are complete, the method cannot be changed during the validation process.
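One widely used way to vary seven factors simultaneously is the Youden and Steiner eight-run design. The sketch below is a minimal illustration of that approach, assuming a hypothetical set of eight analytical results; it is an illustration of the general technique, not a procedure prescribed by this guideline.

```python
# Sketch of a Youden/Steiner ruggedness design: 7 factors (A-G), each at a
# nominal (+1) and an altered (-1) level, in 8 runs.
DESIGN = [
    (+1, +1, +1, +1, +1, +1, +1),
    (+1, +1, -1, +1, -1, -1, -1),
    (+1, -1, +1, -1, +1, -1, -1),
    (+1, -1, -1, -1, -1, +1, +1),
    (-1, +1, +1, -1, -1, +1, -1),
    (-1, +1, -1, -1, +1, -1, +1),
    (-1, -1, +1, +1, -1, -1, +1),
    (-1, -1, -1, +1, +1, +1, -1),
]

def factor_effects(results):
    """results: the 8 analytical results, in the same order as DESIGN.
    Returns the estimated effect of each factor (mean result at the nominal
    level minus mean at the altered level); comparatively large effects flag
    method steps that are not rugged."""
    effects = []
    for j in range(7):
        high = [r for r, row in zip(results, DESIGN) if row[j] == +1]
        low = [r for r, row in zip(results, DESIGN) if row[j] == -1]
        effects.append(sum(high) / 4 - sum(low) / 4)
    return effects

# Hypothetical results (e.g. mg/kg) from the eight ruggedness runs
print(factor_effects([0.50, 0.49, 0.51, 0.48, 0.50, 0.52, 0.47, 0.49]))
```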
When validating a method for metals in food products, many factors should be considered during the planning phase of the validation experimental design. For example, is the method to be used in a regulatory environment, and if so, does the analyte of interest have a maximum residue limit (MRL) against which compliance is assessed? Is the intended purpose of the method to achieve the lowest possible detection limit? Is the method to be used for the determination of a single element in a particular matrix, or for multi-element analyses? Can authentic blank matrix be obtained as the test material? Many elements are naturally present in a test matrix, for example arsenic in shellfish tissue. The inability to obtain authentic blank test material can cause many validation problems when assessing matrix effects, limits of detection/quantitation, etc.
Although food testing programs frequently include testing for a range of elements (predominantly metals), there are actually few formally established MRLs or other action limits for these analytes. The Codex Alimentarius Commission has established limits for arsenic (total), cadmium and lead in a variety of foods, total mercury in mineral waters and salt, methylmercury in fish and tin in canned goods, as well as for a number of radionuclides in infant and other foods11. Similarly, the European Union has established regulatory limits for cadmium, lead, mercury and tin in a variety of foods12. Requirements for analytical methods to enforce EU standards for lead, cadmium and mercury in foodstuffs are the subject of another EU regulation13. Canada has established maximum limits for arsenic, lead and tin in various foods14 and for mercury in seafood15.
Table 1: Regulated Toxic Elements of Codex and Various Countries

Organization/Country    Regulated Elements
Codex                   As, Cd, Pb, Hg, MeHg in a variety of foods
EU Countries            Hg, Cd, Pb, Sn in some foods
Canada                  Hg in fish; Cd, Pb, Sn in some foods
USA                     Hg in fish
Japan                   Hg and MeHg in some fish
The aim of this single laboratory validation (SLV) protocol is to provide guidance for the scientist when validating a method for inorganic analytes in food or environmental matrices as "fit-for-purpose" for an element or a group of elements in those products. This document provides definitions of common terminology, procedures to be followed, technical guidelines and recommended approaches, as well as an example of an SLV experimental plan. The protocol addresses any specific requirements that are provided in Codex Alimentarius guidance documents or in regulations or guidelines set by national or regional authorities, so it is intended to be generally applicable for a variety of potential users.
Definitions
It is recommended that definitions included in the Codex Alimentarius Commission Manual of Procedures5 should be used, when available, as these have been adopted after extensive international consultation and are taken from authoritative sources, such as ISO, IUPAC and AOAC International. A revised list of definitions currently under consideration by the Codex Committee on Methods of Analysis and Sampling (CCMAS) for inclusion in the Codex Manual of Procedures has also been used as a source for the most current definitions which have acceptance within the international analytical science community6.
Accuracy: Closeness of agreement between a measured quantity value and a true quantity value of the measurand16. The Codex Manual of Procedures defines accuracy as "the closeness of agreement between a test result and the accepted reference value."5 The definition currently under consideration by CCMAS6 is:

"The closeness of agreement between a test result or measurement result and a reference value.

Notes: The term "accuracy", when applied to a set of test results or measurement results, involves a combination of random components and a common systematic error or bias component. (Footnote: When applied to a test method, the term accuracy refers to a combination of trueness and precision.)

Reference: ISO Standard 3534-2: Vocabulary and Symbols Part 2: Applied Statistics, ISO, Geneva, 2006."
Analytical function: A function which relates the measured value (Ca) to the instrument reading (X) with the value of the interferants (Ci) remaining constant. This function is expressed by the following regression of the calibration results: Ca = f(X)16.
Analytical Range: The range of an analytical procedure is the interval between the upper and lower concentration (amounts) of analyte in the sample (including these concentrations) for which it has been demonstrated that the analytical procedure has a suitable level of precision, accuracy and linearity17.
Applicability6: "The analytes, matrices, and concentrations for which a method of analysis may be used satisfactorily.

Note: In addition to a statement of the range of capability of satisfactory performance for each factor, the statement of applicability (scope) may also include warnings as to known interference by other analytes, or inapplicability to certain matrices and situations.

Reference: Codex Alimentarius Commission, Procedural Manual, 17th edition, 2007."
Bias6: "The difference between the expectation of the test result or measurement result and the true value.

Note: Bias is the total systematic error as contrasted to random error. There may be one or more systematic error components contributing to bias. A larger systematic difference from the accepted reference value is reflected by a larger bias value. The bias of a measuring instrument is normally estimated by averaging the error of indication over the appropriate number of repeated measurements. The error of indication is the "indication of a measuring instrument minus a true value of the corresponding input quantity". In practice the accepted reference value is substituted for the true value. Expectation is the expected value of a random variable, e.g. assigned value or long term average {ISO 5725-1}.

Reference: ISO Standard 3534-2: Vocabulary and Symbols Part 2: Applied Statistics, ISO, Geneva, 2006."
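As a minimal arithmetic illustration of the note above, bias can be estimated by averaging replicate results on a reference material and subtracting the accepted reference value; the values below are hypothetical.

```python
import statistics

# Hypothetical replicate results (mg/kg) on a CRM with an accepted
# reference value of 0.50 mg/kg.
results = [0.47, 0.49, 0.46, 0.48, 0.50, 0.47]
reference_value = 0.50

bias = statistics.mean(results) - reference_value   # systematic difference
print(f"mean = {statistics.mean(results):.3f} mg/kg, bias = {bias:+.3f} mg/kg")
```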
Calibration6: "Operation that, under specified conditions, in a first step, establishes a relation between the values with measurement uncertainties provided by measurement standards and corresponding indications with associated measurement uncertainties and, in a second step, uses this information to establish a relation for obtaining a measurement result from an indication.

Notes: A calibration may be expressed by a statement, calibration function, calibration diagram, calibration curve, or calibration table. In some cases it may consist of an additive or multiplicative correction of the indication with associated measurement uncertainty. Calibration should not be confused with adjustment of a measuring system, often mistakenly called "self calibration", nor with verification of calibration. Often the first step alone in the above definition is perceived as being calibration.

Reference: VIM, International vocabulary for basic and general terms in metrology, 3rd edition, 2007."
Calibration function: The functional (not statistical) relationship for the chemical measurement process, relating the expected value of the observed (gross) signal or response variable to the analyte amount16.
Certified Reference Material (CRM): A reference material whose property values are certified by a technically valid procedure, accompanied by, or traceable to, a certificate or other documentation which is issued by a certifying body17.

From the CCMAS discussion document6:

"Reference material accompanied by documentation issued by an authoritative body and providing one or more specified property values with associated uncertainties and traceabilities, using valid procedures.

Notes: Documentation is given in the form of a "certificate" (see ISO Guide 30:1992). Procedures for the production and certification of certified reference materials are given, e.g., in ISO Guide 34 and ISO Guide 35. In this definition, "uncertainty" covers both measurement uncertainty and uncertainty associated with the value of the nominal property, such as for identity and sequence. "Traceability" covers both metrological traceability of a value and traceability of a nominal property value. Specified values of certified reference materials require metrological traceability with associated measurement uncertainty {Accred. Qual. Assur., 2006}. ISO/REMCO has an analogous definition {Accred. Qual. Assur., 2006} but uses the modifiers metrological and metrologically to refer to both quantity and nominal properties.

References:
VIM, International vocabulary for basic and general terms in metrology, 3rd edition, 2007.
New definitions on reference materials, Accreditation and Quality Assurance, 10:576-578, 2006."
Critical value (LC)6: "The value of the net concentration or amount, the exceeding of which leads, for a given error probability α, to the decision that the concentration or amount of the analyte in the analyzed material is larger than that in the blank material. It is defined as:

Pr(L̂ > LC | L = 0) ≤ α

where L̂ is the estimated value, L is the expectation or true value and LC is the critical value.

Notes: The critical value LC is estimated by

LC = t1-α,ν s0,

where t1-α,ν is Student's t, based on ν degrees of freedom for a one-sided confidence interval of 1-α, and s0 is the sample standard deviation. If L is normally distributed with known variance, i.e. ν = ∞, with the default α of 0.05, LC = 1.645 s0.

A result falling below LC, triggering the decision "not detected", should not be construed as demonstrating analyte absence. Reporting such a result as "zero" or as < LD is not recommended. The estimated value and its uncertainty should always be reported.

References:
ISO Standard 11843: Capability of Detection-1, ISO, Geneva, 1997.
Nomenclature in evaluation of analytical methods, IUPAC, 1995."
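A minimal sketch of the simple case described in the note (normality, known variance, α = 0.05), using hypothetical replicate blank results:

```python
import statistics

# Hypothetical replicate blank results (e.g. µg/kg) for a single analyte.
blanks = [0.12, 0.08, 0.15, 0.10, 0.09, 0.11, 0.13, 0.10, 0.12, 0.14]

s0 = statistics.stdev(blanks)   # sample standard deviation of the blank
LC = 1.645 * s0                 # critical value for alpha = 0.05, assuming
                                # normality and known variance; otherwise
                                # use Student's t(1-alpha, nu) instead of 1.645

# Decision rule: a result above LC is reported as "detected"; a result at or
# below LC triggers "not detected" (the estimated value and its uncertainty
# should still be reported).
print(f"s0 = {s0:.4f}, LC = {LC:.4f}")
```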
Error6: "Measured value minus a reference value.

Note: The concept of measurement 'error' can be used both when there is a single reference value to refer to, which occurs if a calibration is made by means of a measurement standard with a measured value having a negligible measurement uncertainty or if a conventional value is given, in which case the measurement error is known, and when a measurand is supposed to be represented by a unique true value or a set of true values of negligible range, in which case the measurement error is not known.

Reference: VIM, International vocabulary for basic and general terms in metrology, 3rd edition, 2007, ISO, Geneva."
Fitness for purpose6: "Degree to which data produced by a measurement process enables a user to make technically and administratively correct decisions for a stated purpose.

Reference: Eurachem Guide: The fitness for purpose of analytical methods: A laboratory guide to method validation and related topics, 1998."
HorRat6: "The ratio of the reproducibility relative standard deviation to that calculated from the Horwitz equation, the predicted relative standard deviation PRSDR = 2C^(-0.15):

HorRat(R) = RSDR/PRSDR,
HorRat(r) = RSDr/PRSDR,

where C is concentration expressed as a mass fraction (both numerator and denominator expressed in the same units).

Notes: The HorRat is indicative of method performance for a large majority of methods in chemistry. Normal values lie between 0.5 and 2. (To check proper calculation of PRSDR, a C of 10^-6 should give a PRSDR of 16%.) If applied to within-laboratory studies, the normal range of HorRat(r) is 0.3-1.3. For concentrations less than 0.12 mg/kg the predicted relative standard deviation developed by Thompson (The Analyst, 2000) should be used.

References:
A simple method for evaluating data from an inter-laboratory study, J AOAC, 81(6):1257-1265, 1998.
Recent trends in inter-laboratory precision at ppb and sub-ppb concentrations in relation to fitness for purpose criteria in proficiency testing, The Analyst, 125:385-386, 2000."
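A minimal sketch of the HorRat calculation described above; the observed RSDR and the concentration are hypothetical, and C must be entered as a dimensionless mass fraction (1 mg/kg = 10^-6).

```python
def predicted_rsd_r(c_mass_fraction):
    """Horwitz predicted reproducibility RSD in percent: PRSDR = 2 * C**(-0.15).
    C is a dimensionless mass fraction; C = 1e-6 (1 mg/kg) gives about 16%."""
    return 2.0 * c_mass_fraction ** -0.15

def horrat_R(observed_rsd_r_percent, c_mass_fraction):
    """HorRat(R) = observed RSDR / PRSDR; 'normal' values lie between 0.5 and 2."""
    return observed_rsd_r_percent / predicted_rsd_r(c_mass_fraction)

# Hypothetical example: observed RSDR of 12% at 1 mg/kg (C = 1e-6)
print(predicted_rsd_r(1e-6))   # ~16.0 %
print(horrat_R(12.0, 1e-6))    # ~0.75
```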
Intermediate Precision: The precision of an analytical procedure expresses the closeness of agreement between a series of measurements obtained from multiple sampling of the same homogeneous sample under the prescribed conditions. Intermediate precision expresses within-laboratories variations: different days, different analysts, different equipment, etc.17
Limit of Detection (LOD): The lowest concentration of analyte in a sample that can be detected, but not necessarily quantitated, under the stated conditions of the test16.
Limit of Detection6: "The true net concentration or amount of the analyte in the material to be analyzed which will lead, with probability (1-β), to the conclusion that the concentration or amount of the analyte in the analyzed material is larger than that in the blank material. It is defined as:

Pr(L̂ ≤ LC | L = LD) = β

where L̂ is the estimated value, L is the expectation or true value and LC is the critical value.

Notes: The detection limit LD is estimated by

LD ≈ 2t1-α,ν σ0 [where α = β],

where t1-α,ν is Student's t, based on ν degrees of freedom for a one-sided confidence interval of 1-α, and σ0 is the standard deviation of the true value (expectation). LD = 3.29 σ0 when the uncertainty in the mean (expected) value of the blank is negligible, α = β = 0.05 and L is normally distributed with known constant variance. However, LD is not defined simply as a fixed coefficient (e.g. 3, 6, etc.) times the standard deviation of a pure solution background. To do so can be extremely misleading. The correct estimation of LD must take into account degrees of freedom, α and β, and the distribution of L as influenced by factors such as analyte concentration, matrix effects and interference. This definition provides a basis for taking into account exceptions to the simple case described, i.e. involving non-normal distributions and heteroscedasticity (e.g. "counting" (Poisson) processes such as those used for real-time PCR). It is essential to specify the measurement process under consideration, since distributions, σ's and blanks can be dramatically different for different measurement processes. At the detection limit, a positive identification can be achieved with reasonable and/or previously determined confidence in a defined matrix using a specific analytical method.

References:
ISO Standard 11843: Capability of Detection-1, ISO, Geneva, 1997.
Nomenclature in evaluation of analytical methods, IUPAC, 1995.
Guidance document on pesticide residue analytical methods, Organization for Economic Co-operation and Development, 2007."
Limit of Quantification (LOQ): The LOQ is the smallest amount of analyte in a test sample that can be quantitatively determined with suitable precision and accuracy under previously established method conditions17.
Limit of Quantification6: "A method performance characteristic generally expressed in terms of the signal or measurement (true) value that will produce estimates having a specified relative standard deviation (RSD), commonly 10% (or 6%). LQ is estimated by:

LQ = kQ σQ, kQ = 1/RSDQ

where LQ is the limit of quantification, σQ is the standard deviation at that point and kQ is the multiplier whose reciprocal equals the selected RSD. (The approximate RSD of an estimated σ, based on ν degrees of freedom, is 1/√(2ν).)

Notes: If σ is known and constant, then σQ = σ0, since the standard deviation of the estimated quantity is independent of concentration. Substituting an RSD of 10% (kQ = 10) gives:

LQ = 10 σQ = 10 σ0

In this case, the LQ is just 3.04 times the detection limit, given normality and α = β = 0.05. At the LQ, a positive identification can be achieved with reasonable and/or previously determined confidence in a defined matrix using a specific analytical method. This definition provides a basis for taking into account exceptions to the simple case described, i.e. involving non-normal distributions and heteroscedasticity (e.g. "counting" (Poisson) processes such as those used for real-time PCR).

References:
Nomenclature in evaluation of analytical methods, IUPAC, 1995.
Guidance document on pesticide residue analytical methods, Organization for Economic Co-operation and Development, 2007."
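Under the simple conditions noted in the two definitions above (normality, known constant variance, α = β = 0.05 and a 10% RSD at LQ), the limits reduce to fixed multiples of σ0. A minimal sketch, assuming a hypothetical blank standard deviation:

```python
# Simple-case estimates from the definitions above:
#   LC = 1.645 * sigma0   (critical value, alpha = 0.05)
#   LD = 3.29  * sigma0   (detection limit, alpha = beta = 0.05)
#   LQ = 10    * sigma0   (quantification limit for a 10% RSD)
# These multipliers assume normality and known, constant variance; with an
# estimated variance, Student's t and the degrees of freedom must be used.

sigma0 = 0.05   # hypothetical standard deviation of the blank (mg/kg)

LC = 1.645 * sigma0
LD = 3.29 * sigma0
LQ = 10.0 * sigma0

print(f"LC = {LC:.3f}, LD = {LD:.3f}, LQ = {LQ:.3f} mg/kg")
print(f"LQ / LD = {LQ / LD:.2f}")   # ~3.04, as noted in the definition
```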
Concern has been expressed that LOD and LOQ should not always be used as mandatory fixed performance limits for validated methods, due to the inherent variability which may be observed in the determination of these limits by different analysts using different instruments. For example, an expert consultation on the validation of analytical methods noted in its report that "LOD and LOQ are estimates of variable parameters, the values of which depend on various factors, including the conditions of measurement and the experience of the analyst. The use of these estimates in client reports can be misleading. In view of this, it was requested that the FAO/IAEA expert consultation following the Workshop would consider that the lowest calibrated level of the analysis be recommended to be used in client reports as an alternative to the LOD and LOQ."18

The following terms were defined in the consultation report:
Accepted Limit (AL): Concentration value for an analyte corresponding to a regulatory limit or guideline value which forms the purpose for the analysis, e.g. MRL, MPL, trading standard, target concentration limit (dietary exposure assessment), acceptance level (environment), etc. For a substance without an MRL, or for a banned substance, there may be no AL (effectively it may be zero or there may be no limit), or it may be the target concentration above which detected residues should be confirmed (action limit or administrative limit).
Lowest Calibrated Level (LCL): Lowest concentration of analyte detected and measured in calibration of the detection system. It may be expressed as a solution concentration or as a mass ratio in the test sample and must not include the contribution from the blank.
Linearity: The ability of the method to obtain test results proportional to the concentration of analyte16.

Linearity6: "The ability of a method of analysis, within a certain range, to provide an instrumental response or results proportional to the quantity of analyte to be determined in the laboratory sample. This proportionality is expressed by an a priori defined mathematical expression. The linearity limits are the experimental limits of concentrations between which a linear calibration model can be applied with an acceptable uncertainty.

Reference: Codex Alimentarius Commission, Procedural Manual, 17th edition, 2007."
Linear Range: The range of analyte concentrations over which the method provides test results proportional to the concentration of the analyte16.

Matrix: The components of the sample other than the analyte16.

Matrix Effect: The combined effect of all components in the sample other than the analyte on the measurement of the quantity. If a specific component can be identified as causing an effect, then this is referred to as interference16.

Matrix Fortified Calibration Curve: A calibration curve generated by adding known concentrations of the target analyte to a blank matrix at various levels prior to extraction or digestion. This curve is used to determine the effect of the matrix on the response of the analyte.

Matrix Matched: When fortified blank matrix is extracted and carried through the method to generate a calibration curve. This is used to correct for matrix effects. In metals testing, matrix matched refers to matching the diluent concentrations of standards to that of the sample digest. Other elements that are known to be present in the sample digest may be added as well.

Matrix-matched Calibration18: Calibration using standards prepared in an extract of the commodity analysed (or of a representative commodity). The objective is to compensate for the effects of co-extractives on the determination system. Such effects are often unpredictable, but matrix-matching may be unnecessary where co-extractives prove to be of insignificant effect.
Measurand6: "Quantity intended to be measured.

Notes: The specification of a measurand requires knowledge of the kind of quantity and a description of the state of the substance carrying the quantity, including any relevant component and the chemical entities involved. In chemistry, 'analyte' or the name of a substance or compound are terms sometimes used for measurand. This usage is erroneous because these terms do not refer to quantities.

Reference: VIM, International vocabulary for basic and general terms in metrology, 3rd edition, 2007, ISO, Geneva."
Measurement procedure6: "Detailed description of a measurement according to one or more measurement principles and to a given measurement method, based on a measurement model and including any calculation to obtain a result.

Notes: A measurement procedure is usually documented in sufficient detail to enable an operator to perform a measurement. A measurement procedure can include a statement concerning a target measurement uncertainty. A measurement procedure is sometimes called a standard operating procedure (SOP).

Reference: VIM, International vocabulary for basic and general terms in metrology, 3rd edition, 2007, ISO, Geneva."
Measurement Uncertainty: A parameter associated with the result of a measurement that characterises the dispersion of the values that could reasonably be attributed to the measurand8.

Measurement uncertainty6: "Non-negative parameter characterizing the dispersion of the values being attributed to a measurand, based on the information used.

Notes: Measurement uncertainty includes components arising from systematic effects, such as components associated with corrections and the assigned values of measurement standards, as well as the definitional uncertainty. Sometimes estimated systematic effects are not corrected for but, instead, associated measurement uncertainty components are incorporated. The parameter may be, for example, a standard deviation called standard measurement uncertainty (or a given multiple of it), or the half-width of an interval having a stated coverage probability. Measurement uncertainty comprises, in general, many components. Some of these components may be evaluated by Type A evaluation of measurement uncertainty from the statistical distribution of the values from a series of measurements and can be characterized by experimental standard deviations. The other components, which may be evaluated by Type B evaluation of measurement uncertainty, can also be characterized by standard deviations, evaluated from assumed probability distributions based on experience or other information. In general, for a given set of information, it is understood that the measurement uncertainty is associated with a stated quantity value attributed to the measurand. A modification of this value results in a modification of the associated uncertainty.

Reference: VIM, International vocabulary for basic and general terms in metrology, 3rd edition, 2007, ISO, Geneva."
Expanded measurement uncertainty6: "Product of a combined standard measurement uncertainty and a factor larger than the number one.

Notes: The factor depends upon the type of probability distribution of the output quantity in a measurement model and on the selected coverage probability. The term factor in this definition refers to a coverage factor. Expanded measurement uncertainty is also termed expanded uncertainty.

Reference: VIM, International vocabulary for basic and general terms in metrology, 3rd edition, 2007, ISO, Geneva."
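A minimal sketch of the relationship in this definition: the expanded uncertainty is the combined standard uncertainty multiplied by a coverage factor (here k = 2); the uncertainty components are hypothetical.

```python
import math

# Hypothetical standard uncertainty components (all in mg/kg), combined in
# quadrature and multiplied by a coverage factor k = 2 (approximately 95%
# coverage for a normal distribution).
u_components = {"precision": 0.010, "recovery": 0.008, "calibration": 0.005}

u_combined = math.sqrt(sum(u ** 2 for u in u_components.values()))
k = 2
U = k * u_combined   # expanded measurement uncertainty

print(f"u_c = {u_combined:.4f} mg/kg, U (k=2) = {U:.4f} mg/kg")
```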
Precision6: "The closeness of agreement between independent test/measurement results obtained under stipulated conditions.

Notes: Precision depends only on the distribution of random errors and does not relate to the true value or to the specified value. The measure of precision is usually expressed in terms of imprecision and computed as a standard deviation of the test results. Less precision is reflected by a larger standard deviation. Quantitative measures of precision depend critically on the stipulated conditions. Repeatability and reproducibility conditions are particular sets of extreme conditions. Intermediate conditions between these two extreme conditions are also conceivable, when one or more factors within a laboratory (intra-laboratory, e.g. the operator, the equipment used, the calibration of the equipment used, the environment, the batch of reagent and the elapsed time between measurements) are allowed to vary, and are useful in specified circumstances. Precision is normally expressed in terms of standard deviation.

References:
ISO Standard 3534-2: Vocabulary and Symbols Part 2: Applied Statistics, ISO, Geneva, 2006.
ISO Standard 5725-3: Accuracy (trueness and precision) of measurement methods and results, Part 3: Intermediate measures of the precision of a standard measurement method, ISO, Geneva, 1994."
Recovery: IUPAC defines it as a "term used in analytical and preparative chemistry to denote the fraction of the total quantity of a substance recoverable following a chemical procedure"16. It has also been defined in an EU Commission Decision referring to requirements for analytical methods used for the determination of residues of veterinary drugs in foods as the "percentage of the true concentration of a substance recovered during the analytical procedure. It is determined during validation, if no certified reference material is available."19 Recovery has also been defined as the "proportion of the amount of analyte, present in or added to the analytical portion of the test material, which is extracted and presented for measurement."20

Recovery6 / recovery factors: "Proportion of the amount of analyte, present in, added to or present in and added to the analytical portion of the test material, which is extracted and presented for measurement.

Notes: Recovery is assessed by the ratio R = Cobs/Cref of the observed concentration or amount Cobs obtained by the application of an analytical procedure to a material containing analyte at a reference level Cref. Cref will be: (a) a reference material certified value, (b) measured by an alternative definitive method, (c) defined by a spike addition or (d) marginal recovery. Recovery is primarily intended for use in methods that rely on transferring the analyte from a complex matrix into a simpler solution, during which loss of analyte can be anticipated.

References:
Harmonized guidelines for the use of recovery information in analytical measurement, 1998.
Use of the terms "recovery" and "apparent recovery" in analytical procedures, 2002."
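A minimal arithmetic sketch of the ratio R = Cobs/Cref described in the note above, using hypothetical spiked-sample values:

```python
def recovery_percent(c_observed, c_reference):
    """Recovery R = Cobs / Cref, expressed as a percentage.
    Cref may be a CRM certified value or a spike addition level."""
    return 100.0 * c_observed / c_reference

# Hypothetical example: sample spiked at 0.50 mg/kg, 0.46 mg/kg measured
print(f"{recovery_percent(0.46, 0.50):.0f}% recovery")   # 92%
```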
Reference material6: "Material, sufficiently homogeneous and stable with respect to one or more specified properties, which has been established to be fit for its intended use in a measurement process or in examination of nominal properties.

Notes: Examination of a nominal property provides a nominal property value and associated uncertainty. This uncertainty is not a measurement uncertainty. Reference materials with or without assigned values can be used for measurement precision control, whereas only reference materials with assigned values can be used for calibration and measurement trueness control. Some reference materials have assigned values that are metrologically traceable to a measurement unit outside a system of units. In a given measurement, a given reference material can only be used for either calibration or quality assurance. The specification of a reference material should include its material traceability, indicating its origin and processing {Accred. Qual. Assur., 2006}. ISO/REMCO has an analogous definition that uses the term measurement process to mean examination, which covers both measurement of a quantity and examination of a nominal property.

References:
VIM, International vocabulary for basic and general terms in metrology, 3rd edition, 2007, ISO, Geneva.
New definitions on reference materials, Accred. Qual. Assur., 10:576-578, 2006."
Reference value6: "Quantity value used as a basis for comparison with values of quantities of the same kind.

Notes: A reference quantity value can be a true quantity value of a measurand, in which case it is unknown, or a conventional quantity value, in which case it is known. A reference quantity value with an associated measurement uncertainty is usually provided with reference to (a) a material, e.g. a certified reference material, (b) a reference measurement procedure, (c) a comparison of measurement standards.

Reference: VIM, International vocabulary for basic and general terms in metrology, 3rd edition, 2007, ISO, Geneva."
Repeatability: This term is defined by the Codex Alimentarius Commission as "Conditions where independent test results are obtained with the same method on identical test items in the same laboratory by the same operator using the same equipment within short intervals of time."5
Reproducibility: The Codex Alimentarius Commission defines this as "Conditions where independent test results are obtained with the same method on identical test items in different laboratories with different operators using different equipment."5 It is also defined in an EU Commission Decision as "The precision under conditions where test results are obtained with the same method on identical test items in different laboratories with different operators using different equipment. For Single Lab Validation intermediate precision is determined with different operators on different equipment."19

From the CCMAS discussion paper6:

Repeatability (Reproducibility)6: "Precision under repeatability (reproducibility) conditions.

References:
ISO 3534-1: Statistics, vocabulary and symbols, Part 1: Probability and general statistical terms, ISO, 1993.
ISO Standard 78-2: Chemistry – Layouts for Standards – Part 2: Methods of Chemical Analysis, 1999.
Codex Alimentarius Commission, Procedural Manual, 17th edition, 2007.
AOAC International methods committee guidelines for validation of qualitative and quantitative food microbiological official methods of analysis, 2002."
Repeatability conditions6: "Observation conditions where independent test/measurement results are obtained with the same method on identical test/measurement items in the same test or measuring facility by the same operator using the same equipment within short intervals of time.

Note: Repeatability conditions include: the same measurement procedure or test procedure; the same operator; the same measuring or test equipment used under the same conditions; the same location and repetition over a short period of time.

Reference: ISO Standard 3534-2: Vocabulary and Symbols Part 2: Applied Statistics, ISO, Geneva, 2006."
Repeatability (Reproducibility) limit6: "The value less than or equal to which the absolute difference between final values, each of them representing a series of test results or measurement results obtained under repeatability (reproducibility) conditions, may be expected to be with a probability of 95%.

Notes: The symbol used is r [R] {ISO 3534-2}. When examining two single test results obtained under repeatability (reproducibility) conditions, the comparison should be made with the repeatability (reproducibility) limit, r [R] = 2.8 σr [σR] {ISO 5725-6, 4.1.4}. When groups of measurements are used as the basis for the calculation of the repeatability (reproducibility) limits (now called the critical difference), more complicated formulae are required; these are given in ISO 5725-6: 1994, 4.2.1 and 4.2.2.

References:
ISO Standard 3534-2: Vocabulary and Symbols Part 2: Applied Statistics, ISO, Geneva, 2006.
ISO 5725-6: Accuracy (trueness and precision) of measurement methods and results — Part 6: Use in practice of accuracy values, ISO, 1994.
Codex Alimentarius Commission, Procedural Manual, 17th edition, 2007."
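A minimal sketch of the comparison described in the note above for two single results obtained under repeatability conditions; the duplicate results and the repeatability standard deviation are hypothetical.

```python
def within_repeatability_limit(x1, x2, s_r):
    """Compare two single results obtained under repeatability conditions
    against the repeatability limit r = 2.8 * s_r (95% probability)."""
    r = 2.8 * s_r
    return abs(x1 - x2), r, abs(x1 - x2) <= r

# Hypothetical duplicate cadmium results (mg/kg) with s_r = 0.02 mg/kg
diff, r, ok = within_repeatability_limit(0.48, 0.53, 0.02)
print(f"|x1 - x2| = {diff:.3f}, r = {r:.3f}, within limit: {ok}")
```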
Repeatability (reproducibility) standard deviation6: "Standard deviation of test results or measurement results obtained under repeatability (reproducibility) conditions.

Note: It is a measure of the dispersion of the distribution of the test or measurement results under repeatability (reproducibility) conditions.

Reference: ISO Standard 3534-2: Vocabulary and Symbols Part 2: Applied Statistics, ISO, Geneva, 2006."
Repeatability (reproducibility) relative standard deviation6: "RSDr [RSDR] is computed by dividing the repeatability (reproducibility) standard deviation by the mean.

Note: Relative standard deviation (RSD) is a useful measure of precision in quantitative studies, because it allows the variability of sets with different means to be compared. RSD values are independent of the amount of analyte over a reasonable range and facilitate comparison of variabilities at different concentrations. The result of a collaborative test may be summarized by giving the RSD for repeatability (RSDr) and the RSD for reproducibility (RSDR).

Reference: AOAC International methods committee guidelines for validation of qualitative and quantitative food microbiological official methods of analysis, 2002."
Reproducibility conditions6: "Observation conditions where independent test/measurement results are obtained with the same method on identical test/measurement items in different test or measurement facilities with different operators using different equipment.

Reference: ISO Standard 3534-2: Vocabulary and Symbols Part 2: Applied Statistics, ISO, Geneva, 2006."
Result6: "Set of values being attributed to a measurand together with any other available relevant information.

Notes: A result of measurement generally contains 'relevant information' about the set of values, such that some may be more representative of the measurand than others. This may be expressed in the form of a probability density function. A result of measurement is generally expressed as a single measured value and a measurement uncertainty. If the measurement uncertainty is considered to be negligible for some purpose, the measurement result may be expressed as a single measured value. In many fields, this is the common way of expressing a measurement result. In the traditional literature and in the previous edition of the VIM, result was defined as a value attributed to a measurand and explained to mean an indication, an uncorrected result or a corrected result, according to the context.

Reference: VIM, International vocabulary for basic and general terms in metrology, 3rd edition, 2007, ISO, Geneva."
Representative Analyte18: "Analyte chosen to represent a group of analytes which are likely to be similar in their behaviour through a multi-residue analytical method, as judged by their physico-chemical properties, e.g. structure, water solubility, Kow, polarity, volatility, hydrolytic stability, pKa etc."

Represented Analyte18: "Analyte having physico-chemical properties which are within the range of properties of representative analytes."

Representative Commodity18: "Single food or feed used to represent a commodity group for method validation purposes. A commodity may be considered representative on the basis of proximate sample composition, such as water, fat/oil, acid, sugar and chlorophyll contents, or biological similarities of tissues etc."
Ruggedness: The ruggedness of an analytical method is the resistance to change in the results produced by the method when minor deviations are made from the experimental conditions described in the procedure. It is tested by deliberately introducing small changes to the procedure and examining the effect on the results. Ruggedness testing should not be used to determine critical control points (these should be determined earlier, during method development), and critical control points should not be included in ruggedness testing, as they are known to have a significant impact on the analysis.16, 19
Robustness (ruggedness)6: "A measure of the capacity of an analytical procedure to remain unaffected by small but deliberate variations in method parameters and provides an indication of its reliability during normal usage.

References:
ICH Topic Q2 Validation of Analytical Methods, The European Agency for the Evaluation of Medicinal Products: ICH Topic Q2A - Definitions and Terminology (CPMP/ICH/381/95), 1995.
Harmonized guidelines for single laboratory validation of methods of analysis, Pure and Appl. Chem., 2002."
Selectivity: This term is defined in the Codex Manual of Procedures as "the extent to which a method can determine particular analyte(s) in mixtures or matrices without interference from other components of similar behaviour".5 Other definitions include "The extent to which other substances interfere with the determination of a substance according to a given procedure."21 It has been defined in an AOAC guidance document as "the extent to which the (analytical) method can determine particular analyte(s) in a complex mixture without interference from the other components in the mixture."17

The IUPAC Gold Book16 defines selectivity in analysis as:

"(qualitative): The extent to which other substances interfere with the determination of a substance according to a given procedure.

(quantitative): A term used in conjunction with another substantive (e.g. constant, coefficient, index, factor, number) for the quantitative characterization of interferences."

[It is important to note that while many analytical chemistry texts and older papers in scientific journals use the term "specificity" for "selectivity", the term "selectivity" is now recommended and use of the term "specificity" is discouraged.5 It is considered that a method is either "specific" or it is "non-specific", while the term selectivity implies that there may be varying degrees of "selectivity".]
Selectivity6: "Selectivity is the extent to which a method can determine particular analyte(s) in mixtures or matrices without interference from other components of similar behaviour.

Note: Selectivity is the recommended term in analytical chemistry to express the extent to which a particular method can determine analyte(s) in the presence of other components. Selectivity can be graded. The use of the term specificity for the same concept is to be discouraged as this often leads to confusion.

References:
Selectivity in analytical chemistry, IUPAC, Pure Appl Chem, 2001.
Codex Alimentarius Commission, Alinorm 04/27/23, 2004.
Codex Alimentarius Commission, Procedural Manual, 17th edition, Food and Agriculture Organization of the United Nations, World Health Organization, 2007."
Sensitivity: Describes the change in instrument response for a given change in concentration. It is represented by the slope of the calibration curve and can be determined by a least squares procedure, or experimentally, using samples containing various concentrations of the analyte17. It is also defined as the "change in the response divided by the change in the concentration of a standard (calibration) curve; i.e., the slope, si, of the analytical calibration curve"5. The IUPAC Gold Book16 defines the term sensitivity, used "in metrology and analytical chemistry", as:

"The slope of the calibration curve. If the curve is in fact a 'curve', rather than a straight line, then of course sensitivity will be a function of analyte concentration or amount. If sensitivity is to be a unique performance characteristic, it must depend only on the chemical measurement process, not upon scale factors."

Sensitivity6: "Quotient of the change in the indication of a measuring system and the corresponding change in the value of the quantity being measured.

Notes: The sensitivity can depend on the value of the quantity being measured. The change considered in the value of the quantity being measured must be large compared with the resolution of the measurement system.

Reference: VIM, International vocabulary for basic and general terms in metrology, 3rd edition, 2007, ISO, Geneva."
Surrogate matrix: When authentic blank tissue does not exist, a surrogate may be used for validation experiments. This would consist of a closely related matrix (i.e., one of similar chemical composition) which may have low or non-detectable levels of the analyte(s) of interest. For biological matrices, surrogates should have similar contents of protein, fat, carbohydrate, moisture and salt.

Surrogate6: "Pure compound or element added to the test material, the chemical and physical behavior of which is taken to be representative of the native analyte.

Reference: Harmonized guidelines for the use of recovery information in analytical measurement, 1998."
Systematic error6: "Component of measurement error that in replicate measurements remains constant or varies in a predictable manner.

Notes: A reference value for a systematic error is a true quantity value, or a measured value of a measurement standard of negligible measurement uncertainty, or a conventional value. Systematic error and its causes can be known or unknown. A correction can be applied to compensate for a known systematic error. Systematic error equals measurement error minus random measurement error.

Reference: VIM, International vocabulary for basic and general terms in metrology, 3rd edition, 2007."
Trueness6: "The closeness of agreement between the expectation of a test result or a measurement result and the true value.

Notes: The measure of trueness is usually expressed in terms of bias. Trueness has been referred to as "accuracy of the mean"; this usage is not recommended. In practice the accepted reference value is substituted for the true value. Expectation is the expected value of a random variable, e.g. assigned value or long term average {ISO 5725-1}.

References:
ISO Standard 3534-2: Vocabulary and Symbols Part 2: Applied Statistics, ISO, Geneva, 2006.
ISO Standard 5725-1: Accuracy (trueness and precision) of measurement methods and results, Part 1: General principles and definitions, ISO, Geneva, 1994."
True value6: "Quantity value consistent with the definition of a quantity.

Notes: In the error approach to describing measurement, a true quantity value is considered unique and, in practice, unknowable. The uncertainty approach is to recognize that, owing to the inherently incomplete amount of detail in the definition of a quantity, there is not a single true quantity value, but rather a set of quantity values consistent with the definition of a quantity. However, this set of values is, in principle and in practice, unknowable. Other approaches dispense altogether with the concept of true quantity value and rely on the concept of metrological compatibility of measurement results for assessing their validity. When the definitional uncertainty associated with the measurand is considered to be negligible compared to the other components of the measurement uncertainty, the measurand may be considered to have an essentially "unique" true value.

Reference: VIM, International vocabulary for basic and general terms in metrology, 3rd edition, 2007, ISO, Geneva."
Validation6: "Verification, where the specified requirements are adequate for an intended use.

Reference: VIM, International vocabulary for basic and general terms in metrology, 3rd edition, 2007, ISO, Geneva."
Validated Test Method6: "An accepted test method for which validation studies have been completed to determine the accuracy and reliability of this method for a specific purpose.

Reference: ICCVAM Guidelines for the nomination and submission of new, revised and alternative test methods, 2003."
Validated range6: "That part of the concentration range of an analytical method which has been subjected to validation.

Reference: Harmonized guidelines for single-laboratory validation of methods of analysis, 2002."
Verification6: "Provision of objective evidence that a given item fulfills specified requirements.

Notes: When applicable, measurement uncertainty should be taken into consideration. The item may be, e.g., a process, measuring procedure, material, compound or measuring system. The specified requirement may be that a manufacturer's specifications are met. Verification in legal metrology, as defined in VIM, and in conformity assessment in general, pertains to the examination and marking and/or issuing of a verification certificate for a measuring system. Verification should not be confused with calibration. Not every verification is a validation. In chemistry, verification of the identity of the entity involved, or of the activity, requires a description of the structure and properties of that entity or activity.

Reference: VIM, International vocabulary for basic and general terms in metrology, 3rd edition, 2007, ISO, Geneva."
Performance Criteria
The Codex Committee on Methods of Analysis and Sampling is currently considering new guidance for inclusion in the Codex Manual of Procedures with respect to implementation of the criteria approach for analytical methods6. This guidance is based on accepted approaches to the establishment of performance criteria for analytical methods22,23,24 and will have been subject to extensive consultation by representatives of major international organizations and national regulatory authorities prior to acceptance and implementation. It is therefore recommended that this guidance be considered for inclusion in the guidance on single laboratory validation of methods developed for use by this working group.
Table 2: Guidelines for establishing numeric values for analytical method performance criteria, as proposed by the Codex Committee on Methods of Analysis and Sampling (CCMAS)6:

Method Applicability: The method has to be applicable for the specified provision, specified commodity and the specified level(s) (maximum and/or minimum) (ML).

Minimum applicable range: The minimum applicable range of the method depends on the specified level (ML) to be assessed, and can either be expressed in terms of the reproducibility standard deviation (sR) or in terms of LOD and LOQ.
  For ML ≥ 0.1 mg/kg: [ML - 3 sR, ML + 3 sR]
  For ML < 0.1 mg/kg: [ML - 2 sR, ML + 2 sR]
  sR (a) = standard deviation of reproducibility

Limit of Detection (LOD):
  For ML ≥ 0.1 mg/kg: LOD ≤ ML x 1/10
  For ML < 0.1 mg/kg: LOD ≤ ML x 1/5

Limit of Quantification (LOQ):
  For ML ≥ 0.1 mg/kg: LOQ ≤ ML x 1/5
  For ML < 0.1 mg/kg: LOQ ≤ ML x 2/5

Precision:
  For ML ≥ 0.1 mg/kg: HorRat value ≤ 2
  For ML < 0.1 mg/kg: RSDR < 22%
  RSDR (b) = relative standard deviation of reproducibility

Recovery (R):
  Concentration   Ratio    Unit                 Recovery (%)
  100             1        100% (100 g/100 g)   98 - 102
  ≥10             10^-1    ≥10% (10 g/100 g)    98 - 102
  ≥1              10^-2    ≥1% (1 g/100 g)      97 - 103
  ≥0.1            10^-3    ≥0.1% (1 mg/g)       95 - 105
  0.01            10^-4    100 mg/kg            90 - 107
  0.001           10^-5    10 mg/kg             80 - 110
  0.0001          10^-6    1 mg/kg              80 - 110
  0.00001         10^-7    100 μg/kg            80 - 110
  0.000001        10^-8    10 μg/kg             60 - 115
  0.0000001       10^-9    1 μg/kg              40 - 120
  Other guidelines are available for expected recovery ranges in specific areas of analysis. In cases where recoveries have been shown to be a function of the matrix, other specified requirements may be applied.

Trueness: For the evaluation of trueness, certified reference material should preferably be used.

(a) The sR should be calculated from the Horwitz/Thompson equation. When the Horwitz/Thompson equation is not applicable (for an analytical purpose or according to a regulation), or when "converting" methods into criteria, it should be based on the sR from an appropriate method performance study.
(b) The RSDR should be calculated from the Horwitz/Thompson equation. When the Horwitz/Thompson equation is not applicable (for an analytical purpose or according to a regulation), or when "converting" methods into criteria, it should be based on the RSDR from an appropriate method performance study.
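As a worked illustration of the numeric criteria in Table 2, the sketch below derives the minimum applicable range and the maximum LOD and LOQ for a specified level (ML), using a Horwitz/Thompson predicted sR. The example ML and the 0.12 mg/kg Thompson cut-off (taken from the HorRat note earlier in this document) are assumptions of the sketch, not additional requirements.

```python
def predicted_sr(ml_mg_per_kg):
    """Predicted reproducibility standard deviation sR (same units as ML),
    from the Horwitz equation with the Thompson modification at low levels.
    The concentration is converted to a dimensionless mass fraction C."""
    c = ml_mg_per_kg * 1e-6
    if c < 1.2e-7:                     # below ~0.12 mg/kg: Thompson, RSD = 22%
        rsd = 0.22
    else:                              # Horwitz: RSD = 2 * C**(-0.15) in %,
        rsd = 0.02 * c ** -0.15        # i.e. 0.02 * C**(-0.15) as a fraction
    return rsd * ml_mg_per_kg

def ccmas_criteria(ml_mg_per_kg):
    """Numeric criteria from Table 2 for a specified maximum level (ML)."""
    sr = predicted_sr(ml_mg_per_kg)
    if ml_mg_per_kg >= 0.1:
        rng = (ml_mg_per_kg - 3 * sr, ml_mg_per_kg + 3 * sr)
        lod, loq = ml_mg_per_kg / 10, ml_mg_per_kg / 5
    else:
        rng = (ml_mg_per_kg - 2 * sr, ml_mg_per_kg + 2 * sr)
        lod, loq = ml_mg_per_kg / 5, ml_mg_per_kg * 2 / 5
    return {"sR": sr, "minimum applicable range": rng,
            "max LOD": lod, "max LOQ": loq}

# Hypothetical example: a maximum level of 0.3 mg/kg
print(ccmas_criteria(0.3))
```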
Performance Characteristics
In order for a method to be fit-for-purpose, certain performance requirements should be evaluated and met. Listed below are the requirements for a quantitative method. A screening or confirmation method may require different, usually fewer, parameters.

- Ruggedness (completed during method development phase)
- Selectivity (completed during method development phase)
- Matrix Effects (may be completed during method development phase)
- Limit of Detection (LOD)
- Limit of Quantitation (LOQ)
- Analytical range
- Linearity
- Stability of analyte in standard solution
- Stability of analyte in matrix
- Stability of analyte in extract/digest
- Accuracy
- Repeatability of detection system (may be completed during method development phase)
- Repeatability of method
- Intermediate precision
- Reproducibility (if appropriate)
- Measurement Uncertainty
Technical Guidelines & Approaches
Linear Range and Calibration Curve
A typical chemical measurement process at trace concentrations involves two types of calibration: one involves the determination of the detector response to changing concentrations of pure standard (instrument response), while the second assesses the response to changes in analyte concentration in the presence of matrix co-extractives and reagents. The first, referred to as the calibration function, is defined as the "functional (not statistical) relationship for the chemical measurement process, relating the expected value of the observed (gross) signal or response variable E(y) to the analyte amount". The corresponding graphical display for a single analyte is referred to as the calibration curve. When extended to additional variables or analytes which occur in multicomponent analysis, the curve "becomes a calibration surface or hypersurface"25. The limit of detection and the limit of quantification, when obtained from the calibration function, are the instrumental detection and quantification limits. It should be specified whether these determinations are based on pure analyte only or on pure analyte in the presence of the reagents used in the method, as the detector responses may differ. This function is commonly used when external calibration is applied in a method.

The second type of calibration is referred to as the analytical function, defined as a "function which relates the measured value Ca to the instrument reading, X, with the value of all interferants, Ci, remaining constant. This function is expressed by the following regression of the calibration results, Ca = f(X)"26. This is the calibration result obtained when the response of the detector to the analyte is assessed in the presence of typical matrix co-extractives or digestion products from the sample material in which the analyte concentration is being measured. Detection and quantification limits derived from this calibration are the "method" detection and quantification limits and are considered to provide a more accurate portrayal of the actual performance capabilities of an analytical method. Since they reflect any interferences or matrix enhancement or suppression effects, as well as analyte recovery from the matrix during the performance of the analytical method, the detection and quantification limits determined from these experiments are in most cases (the exception being matrix enhancement effects on the detector) higher than the equivalent instrumental limits of detection and quantification determined using pure analyte, or pure analyte in the presence of method reagents. This method of calibration is used in internal standard calibration.
Instrument calibration may be determined by use of external or internal standard
10
calibrations, but the calibration approach used should be clearly stated. In most circumstances
11
involving elemental analysis, external standard calibration is the method of choice. Linear range
12
is determined by the injection of standard solutions in order to determine at what level the
13
instrument response no longer conforms to a linear equation (y = mx + b). This is determined in
14
the following manner:
15

16
17
the samples.

18
19
22
The concentrations of the solutions must be evenly spaced to determine the precise level
at which the calibration curve is no longer linear.

20
21
Injections of calibration solutions (minimum six) made up in similar solvent/reagent as
The range of concentration should encompass the expected concentration range from
routine samples if known.

The amount or concentration of analyte injected is plotted vs. the instrument response to
determine the linear portion of the curve.
23
The instrument linear range is used to determine the analyte concentration range for which the
24
method will be fit for purpose.
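As an illustration of the plotting and fitting step just described (a sketch only, assuming Python with NumPy, made-up concentration and response values, and an illustrative 5% deviation cut-off rather than any prescribed criterion):

```python
import numpy as np

# Illustrative, evenly spaced standard concentrations (e.g. µg/L) and detector responses
conc = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
resp = np.array([0.2, 10.1, 20.3, 29.8, 40.2, 47.5, 52.0])  # response flattens at the top

# Fit y = mx + b through the lower portion of the curve (here the first five points)
m, b = np.polyfit(conc[:5], resp[:5], 1)

# Flag standards whose response deviates from the fitted line by more than 5% (illustrative)
for c, y in zip(conc[1:], resp[1:]):
    deviation = 100 * (y - (m * c + b)) / (m * c + b)
    status = "linear" if abs(deviation) <= 5 else "outside linear range"
    print(f"{c:5.1f}  deviation {deviation:+6.1f}%  {status}")
```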
Matrix effect

Blank Matrix

Once the linearity has been determined, the effect of the matrix on the instrument response must be determined. The matrix may alter the results or create an enhanced or suppressed response from the detector. In order to determine matrix effect, calibration curves of neat and matrix-fortified standards must be prepared and compared. The matrix-fortified calibration curves are prepared by using extracted/digested blank matrix as diluent. Prior to reconstitution of the samples, fortify the extracts with aliquots of standards to provide the required concentration in the final solution, equivalent to that of the neat standards. The standards are analysed by duplicate or triplicate injections. When the results obtained for matrix-fortified standards are lower (or higher) than the results obtained for pure standards taken through the complete analysis, the differences may be due to low recovery of analyte from the matrix material (or the presence of interferences when high recoveries are obtained), or may be due to matrix suppression or enhancement effects changing detector response. To check on these possibilities, compare the results obtained for pure standards, pure standards taken through the complete analysis, standards spiked into blank matrix extract and standards added to matrix prior to extraction. The following comparisons can then be made:

• Pure standards versus pure standards taken through the analysis is indicative of any losses of analyte which are related to the method, while enhanced results may indicate reagent contamination.
• Pure standards taken through the analysis compared with pure standards added to extracted or digested extracts provides an indication of matrix enhancement or suppression effects on the detection system.
• Pure standards added to blank matrix after extraction or digestion, compared to pure standards fortified in matrix prior to extraction or digestion, provides an indication of losses of analyte during processing.

Calibration curves for the neat and matrix-fortified standards are prepared by plotting the average response of the standard solution against the standard concentration. Differences (>10%) in the slope of the matrix-fortified calibration curve relative to that of the neat standards, or significant changes in the elution profile, indicate that the matrix does indeed affect the instrument response. If this is the case, the routine analysis will have to be performed using matrix-fortified standards or possibly an internal standard.
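A minimal sketch of this slope comparison, assuming Python with NumPy and illustrative concentration and response values; only the 10% threshold comes from the text above:

```python
import numpy as np

conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0])          # standard concentrations (e.g. µg/L)
resp_neat = np.array([4.9, 10.2, 19.8, 40.5, 79.6])      # neat standards
resp_matrix = np.array([4.3, 8.9, 17.6, 35.1, 70.2])     # standards in digested blank matrix

slope_neat, _ = np.polyfit(conc, resp_neat, 1)
slope_matrix, _ = np.polyfit(conc, resp_matrix, 1)

slope_diff_pct = 100 * abs(slope_matrix - slope_neat) / slope_neat
print(f"Slope difference: {slope_diff_pct:.1f}%")
if slope_diff_pct > 10:
    print("Matrix effect evident: use matrix-fortified standards or an internal standard.")
else:
    print("No matrix effect evident: neat standards may be used.")
```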
There are several additional considerations which affect the experimental design and specifically the choice of matrices and analytes for validation of method performance. In a regulatory environment, such as testing of foods for the presence of residues or contaminants, there are many sample materials which potentially require testing. Resources are usually not available to fully validate each analytical method for all analytes and matrices to which it may be applied. Therefore, the concepts of representative commodity18 (matrix) and representative analytes18 have been proposed to facilitate method validation and routine application. Using this approach, for example, in validating a method for application to "fish", representative matrices are salmon for "high fat" finfish, tilapia for "low fat" finfish, and shrimp for "crustaceans". Apples may be the representative matrix for apples and pears, oranges for "citrus fruit" and strawberries for "berries", while head lettuce may represent "leafy vegetables" and carrots may represent "root crops". Once the method has been validated for an analyte or analytes on the "representative commodity", it is considered to be applicable to all commodities represented by that matrix until performance issues are observed when the method is applied for the first time to a less commonly analyzed member of the group. When this happens, further work is required to adapt and validate the method for that application.
Calibration using the analytical function, or internal standardization, approach usually assumes and requires the availability of representative blank matrix. However, situations will be encountered when no material is available for a particular commodity which is free of naturally incurred analyte. Ideally, in such situations, a "representative commodity" which is free of the analyte can be chosen as a surrogate material for the validation or to represent the commodity grouping of which the material is considered a member. In some situations, there is no such material available and mixing of materials may be required to approximate the composition of the target commodity. The following sections provide some approaches which may be used when no blank matrix material is available for use in method validation or for method calibration.
No Blank Matrix

If blank matrix cannot be found, as is the case with many elemental analysis techniques, a different approach is needed. First, the test material must be characterized to determine the analyte concentration in tissue by conducting a total of 20 determinations over 4 days. Then, prepare a solution with an analyte concentration similar to that of the matrix under investigation. Run the matrix and the prepared solution at varying fortification levels (i.e., 3 levels) in the same analytical run and repeat on a second day. Plot analyte concentration versus instrument response for both matrix and solution on the same graph. If the slopes of the curves diverge by >10%, or the final fortification level concentrations show a >10% difference, then a matrix effect is evident. If the curves do not diverge (<10% difference in slopes), or the final fortification level concentrations show a <10% difference, then no matrix effects are evident. When the response to standards (calibration function) and matrix-fortified standards (analytical function) does not differ, then response-related method performance parameters may be assessed using pure standards.
Limit of Detection and Limit of Quantification

There are many procedures typically used to determine LOD and LOQ; the technique chosen can be used as long as it can be defended scientifically. It is recommended to use an approach that is common to the field of analytical chemistry being practiced and that would be accepted by other scientific colleagues.

Blank Matrix

The limit of detection (LOD) must be determined for each analyte for which the method is validated. This is done by evaluating the noise level of 5 blank samples per run on 4 separate instrument runs (n=20). One approach that could be used to determine the LOD for the analyte in the matrix is to calculate the average noise of the 20 observations + 3SD. A procedure for estimation of the LOD and the LOQ from the y-intercept of the calibration curve is used in many laboratories27, as it is considered to provide a more realistic estimate of these parameters than a direct calculation from the observed noise level. With some techniques, a reagent blank taken through the method may be the only means of evaluating background noise. In this case, 5 reagent blank samples per run on 4 separate instrument runs (n=20) would be completed, and 3 standard deviations of the background analyte level may be used as a good indicator of LOD. The limit of quantification (LOQ) can be a mathematical determination based on the LOD. The LOQ is calculated by multiplying the LOD x 3 (in most cases).
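A brief sketch of the blank-based LOD/LOQ calculation described above, assuming Python with NumPy and illustrative blank readings:

```python
import numpy as np

# 20 blank-matrix readings: 5 blanks per run on 4 separate runs (illustrative values, µg/kg)
blanks = np.array([
    0.8, 1.1, 0.9, 1.0, 1.2,   # run 1
    0.7, 1.0, 1.1, 0.9, 0.8,   # run 2
    1.0, 1.2, 0.9, 1.1, 1.0,   # run 3
    0.8, 0.9, 1.1, 1.0, 0.9,   # run 4
])

mean_noise = blanks.mean()
sd_noise = blanks.std(ddof=1)       # sample standard deviation

lod = mean_noise + 3 * sd_noise     # average noise of the 20 observations + 3SD
loq = 3 * lod                       # LOQ = 3 x LOD (in most cases)

print(f"LOD = {lod:.2f} µg/kg, LOQ = {loq:.2f} µg/kg")
```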
No Authentic Blank Matrix

If no authentic blank matrix can be found and background levels of the analyte are appreciable, a different approach will be needed. Run 5 reagent blanks per day, for 4 days, through the digestion procedure and calculate the standard deviation (SD) of the background noise for these blanks. LOD = 3SD; LOQ = 3LOD.

Since all approaches may give varied results for LOQ, an experiment could be conducted where solutions of the analyte of interest are prepared at increasing intervals between the lowest and highest calculated LOQ. If multiple injections of a particular solution have acceptable precision, then this concentration would be indicative of the LOQ.
Method Recovery

The recovery of the analyte(s) by the method for each validated matrix is to be determined by the analysis of that matrix fortified with a specified amount of the analyte(s)20. Recovery studies are to be carried out at a minimum of three fortification levels. These levels should be chosen depending on the intended use for the method, and whether authentic blank matrix can be found. Five replicate analyses at each fortification level shall be carried out on 3 separate days. Calculate the mean, standard deviation and % relative standard deviation for each of the three levels.

Blank Matrix with MRL/Target Level

If authentic blank material is available, and there is a published concentration of importance (e.g., the Canadian MRL for mercury in fresh tuna is 0.5 µg/g), then spike levels should be based on this MRL. Spike at ½MRL, 1MRL, and 2MRL, with each level replicated 5 times over three days.

Blank Matrix with no MRL/Target Level

If authentic blank material is available, and there is no published concentration of importance, then spike levels should be based on the LOD. Spike at 3LOD, 10LOD, and the tissue-equivalent concentration of the upper limit of the calibration curve. Each level will be replicated 5 times over three days.

No Blank Matrix with MRL/Target Level

If authentic blank material cannot be found, a surrogate matrix (one in which low or non-detectable levels of the analyte(s) of interest are present) may be used to fulfill validation requirements. If an appropriate surrogate matrix cannot be found, spike solution is added to previously characterized tissue so that the target concentrations (background level + spike added) of the tissue are equal to ½MRL, 1MRL and 2MRL, with each level replicated 5 times over three days.

No Blank Matrix with no MRL/Target Level

If authentic blank matrix cannot be found and there is no published MRL or concentration of interest, a surrogate matrix (one in which low or non-detectable levels of the analyte(s) of interest are present) may be used to fulfill validation requirements. If an appropriate surrogate matrix cannot be found, spike levels will be determined based on the previously characterized analyte concentration(s). A low, medium, and high spike level will be used for this study, i.e., spikes equivalent to ½X, 1X, and 2X (or the upper limit of the calibration curve) of the characterized tissue concentration within the analytical range.

In all scenarios, spiking at the LOQ may be required for verification purposes (if possible). Calculate the average spike recovery for each level, the standard deviation and the percent relative standard deviation (%RSD), and compare to Table 2 above.
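As a sketch of the recovery calculation for a single fortification level (Python with NumPy; the spike amount and the fifteen replicate results, five per day over three days, are illustrative placeholders):

```python
import numpy as np

spike_added = 0.25            # amount added, e.g. µg/g (½ MRL for a 0.5 µg/g MRL)
# Measured concentrations minus any characterized background, 5 replicates x 3 days
found = np.array([
    0.24, 0.23, 0.26, 0.25, 0.24,   # day 1
    0.22, 0.25, 0.24, 0.26, 0.23,   # day 2
    0.25, 0.24, 0.23, 0.26, 0.25,   # day 3
])

recovery = 100 * found / spike_added      # percent recovery for each replicate

print(f"Mean recovery: {recovery.mean():.1f}%")
print(f"SD:            {recovery.std(ddof=1):.1f}%")
print(f"%RSD:          {100 * recovery.std(ddof=1) / recovery.mean():.1f}%")
# Compare the mean recovery and %RSD with the ranges given in Table 2.
```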
Repeatability

As noted previously in the discussion of calibration approaches, there are two types of repeatability that are to be determined. The first type is a function of the instrument. Instrument repeatability is determined by repeat injections of the standards as well as a fortified sample at each of the fortification levels. The second type is the method repeatability. It is determined by replicate extraction and analysis of a fortified or incurred material at or near each of the fortification levels. If a CRM is available, it may be used in repeatability experiments.

Instrument Repeatability

Inject each of the standard solutions that are used to prepare the working calibration curve, as well as an incurred or fortified sample at each of the spike levels, 5 times. These injections should be done in random order to prevent any sort of bias. Calculate the average, standard deviation and percent relative standard deviation (%RSD).

Method Repeatability

Prepare pools of sample material with levels of the analyte(s) at or near the same concentrations that were used for the method recovery studies. This may be done by using incurred material or by fortifying material (blank or incurred) with the required amount of the analyte(s). Prepare five replicate extracts of each of these samples and analyse on the same day. This process is to be repeated on two other days. Calculate the average, standard deviation and percent relative standard deviation (%RSD).
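A short sketch of how the within-day figures and a combined estimate across the three days might be tabulated, assuming Python with NumPy and illustrative replicate values:

```python
import numpy as np

# Five replicate extracts per day for one pooled material, over three days (e.g. µg/g)
results = {
    "day 1": np.array([0.48, 0.50, 0.49, 0.51, 0.47]),
    "day 2": np.array([0.52, 0.50, 0.53, 0.51, 0.52]),
    "day 3": np.array([0.49, 0.48, 0.50, 0.47, 0.49]),
}

for day, vals in results.items():
    rsd = 100 * vals.std(ddof=1) / vals.mean()
    print(f"{day}: mean {vals.mean():.3f}, SD {vals.std(ddof=1):.3f}, %RSD {rsd:.1f}")

all_vals = np.concatenate(list(results.values()))
overall_rsd = 100 * all_vals.std(ddof=1) / all_vals.mean()
print(f"All days combined: %RSD {overall_rsd:.1f}")
```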
Intermediate Precision

This parameter is used to determine if there are biases in the method. The bias can come from the analyst, instrumentation, or other sources. To study this parameter, prepare pools of sample material, either incurred or fortified, with levels of the analyte(s) at or near the same concentrations that were used for the recovery and repeatability studies. The same material may be used as was prepared for the repeatability studies if sufficient material remains. As a minimum, the study must be carried out by an additional analyst over three separate days. The second analyst is to prepare all fresh reagents, and the samples are to be extracted and analysed in 5 replicates over three separate days by the second analyst. If multiple instruments are available, then the study by the second analyst must be carried out on the second instrument, to take into account any instrument bias.
Measurement Uncertainty

The uncertainty of a result from a chemical analysis can be caused by many issues. In practice, the uncertainty on the result may arise from many possible sources, including incomplete definition, sampling, matrix effects and interferences, environmental conditions, uncertainties of masses and volumetric equipment, reference values, approximations and assumptions incorporated in the measurement method and procedure, and random variation.8, 28

A document which provides extensive guidance on the estimation of measurement uncertainty in analytical methods is available from Eurachem29. Rather than attempting to calculate the uncertainty from each factor independently and combining the results, our approach is to look at the methodology as a whole and group the uncertainty into two categories: accuracy and precision.

Data sets that are to be considered for accuracy are recovery, CRM data, PT samples, etc. Data to be included with precision are intermediate precision, in-house check samples, CRM data, etc. The relative uncertainty for the method is calculated by determining the square root of the sum of the squares of the respective relative uncertainties for accuracy and precision:

UofM = √[RU(accuracy)² + RU(precision)²]
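A minimal sketch of this combination, assuming Python and illustrative relative-uncertainty inputs:

```python
import math

# Relative uncertainties (as fractions) derived from the accuracy and precision data sets
ru_accuracy = 0.06    # e.g. from recovery, CRM and PT results (illustrative)
ru_precision = 0.04   # e.g. from intermediate precision and check samples (illustrative)

# UofM = sqrt(RU(accuracy)^2 + RU(precision)^2)
uofm = math.sqrt(ru_accuracy**2 + ru_precision**2)
print(f"Relative measurement uncertainty: {100 * uofm:.1f}%")
```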
Ruggedness

The ruggedness of an analytical method is its resistance to change in the results produced when minor deviations are made from the experimental conditions described in the procedure. The ruggedness of a method is tested by deliberately introducing small changes to the procedure and examining the effect on the results. Methods should be ruggedness tested as the last stage of method development, prior to method validation. Ruggedness testing should not be used to determine critical control points (these should be determined earlier during method development), and critical control points should not be included in ruggedness testing, as they are known to have a significant impact on the analysis. Ruggedness testing does not need to be performed for each matrix tested, as this examination of matrix effects should be performed in method development. The matrix used for ruggedness testing should be as representative as possible of the proposed workload, i.e., the most common matrix, or a matrix of intermediate quality when the typical matrices cover a wide range (non-fat, low-fat, high-fat).

Examples of variables to be tested include:

• pH
• temperature (extraction, evaporation)
• solvent/acids
• reagents (age, source, concentrations)
• delays in the method
• analytical columns
• SPE cartridges
• sample weight
• extraction/digestion (time, technique, solvents/acids)
• different instruments

The easiest approach is to use Youden's factorial approach10, where seven variables can be combined in a specific manner to determine the effects of all seven variables using eight combinations in a single experiment. If the method has fewer variables to be tested, then blanks can be included, or variables can be examined individually. The experiment should also be carried out in duplicate in order to eliminate the possibility of a single sample affecting the outcome. Values for each sample should be spike recoveries, or concentrations if incurred/fortified tissue is being used.
Sample    Factor Combinations    Measurement
1         ABCDEFG                s
2         ABcDefg                t
3         AbCdEfg                u
4         AbcdeFG                v
5         aBCdeFg                w
6         aBcdEfG                x
7         abCDefG                y
8         abcDEFg                z

To determine the effect of an individual factor:

Effect of A and a: [(s + t + u + v)/4] – [(w + x + y + z)/4] = J
This simplifies to: (4A/4) – (4a/4) = J
Effect of B and b: [(s + t + w + x)/4] – [(u + v + y + z)/4] = K
Effect of C and c: [(s + u + w + y)/4] – [(t + v + x + z)/4] = L
Effect of D and d: [(s + t + y + z)/4] – [(u + v + w + x)/4] = M
Effect of E and e: [(s + u + x + z)/4] – [(t + v + w + y)/4] = N
Effect of F and f: [(s + v + w + z)/4] – [(t + u + x + y)/4] = O
Effect of G and g: [(s + v + x + y)/4] – [(t + u + w + z)/4] = P
After calculating the differences for the factors (J–P), examine those values; small changes in factors showing larger differences can lead to significant changes in results. Determine which factors create statistically significant changes by performing a two-sample t-test assuming equal variance for each factor. If the p-value is <0.05 the factor is significant, if the p-value is >0.15 the factor is not significant, and if 0.05<p<0.15 the factor may be significant.
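A sketch of the factor-effect and t-test calculation, assuming Python with NumPy and SciPy; the duplicate recovery values are illustrative, and the high/low groupings follow the factor combinations tabulated above:

```python
import numpy as np
from scipy import stats

# Duplicate results (e.g. % recovery) for the eight Youden combinations s..z
results = {
    "s": [98.2, 97.8], "t": [96.5, 97.1], "u": [98.0, 98.4], "v": [97.3, 96.9],
    "w": [95.1, 94.8], "x": [95.6, 96.0], "y": [94.9, 95.3], "z": [96.2, 95.8],
}

# Combinations in which each factor is at its upper-case (changed) setting
high_runs = {
    "A": "stuv", "B": "stwx", "C": "suwy", "D": "styz",
    "E": "suxz", "F": "svwz", "G": "svxy",
}

for factor, runs in high_runs.items():
    high = np.concatenate([results[r] for r in runs])
    low = np.concatenate([results[r] for r in results if r not in runs])
    effect = high.mean() - low.mean()                      # the J..P difference
    t_stat, p = stats.ttest_ind(high, low, equal_var=True)
    flag = ("significant" if p < 0.05
            else "not significant" if p > 0.15
            else "may be significant")
    print(f"Factor {factor}: effect {effect:+.2f}, p = {p:.3f} ({flag})")
```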
If factors are determined to be significant, the procedural instructions dealing with those factors should be made more specific and the ruggedness testing repeated, including those factors with intermediate p-values (0.05<p<0.15). These factors may be determined to be critical control points, and if this is the case the procedural instructions should be changed to reflect an acceptable variance.

If intermediate precision data are available, then an additional comparison may be made. Comparing the standard deviation of the method as determined in intermediate precision testing to the standard deviation of the differences of factors examined during ruggedness testing may reveal that a combination of factors has a significant effect on the method even though no individual factor has a significant effect. In this case, the procedural instructions should be made more specific and the ruggedness testing should be repeated using the more specific instructions.
Example of an Experimental Method Validation Plan for Tin in Canned Foods

Method Title: Validation of a Method for the Determination of Tin (Sn) in Canned Pears
Project Participants: Analyst 1, Analyst 2, Analyst 3
Start Date: January 1, 2009
Projected Completion Date: March 31, 2009
Instrument(s): ICP-OES
MRL/Target Level: 250 µg/g
NOTE: For this validation plan, authentic blank pear material (no detectable Sn) was found.
Linearity Survey:
Analysis of standard solutions prepared at equal concentration intervals (minimum 6) to determine at what concentration the calibration curve is no longer linear. The range should include the expected concentration of samples if known.

Analytical Range:
Determined using the LOQ as the lower limit and the upper end of the linear range as the upper limit.

Matrix Effects:
Blank pear material is digested as per the method. The resulting digest is then used as diluent to prepare a calibration curve. A neat standard curve (i.e., in 10% HCl) is also prepared. The neat and matrix-fortified standards are run on the same day on the ICP-OES to determine if the slopes of the curves are similar. If similar, then neat standards can be prepared for the remainder of the validation experiments and for quantifying samples. If the slopes are different (>10% difference), then matrix-fortified calibration curves will need to be prepared for the remainder of the validation experiments and for quantifying samples.
LOD/LOQ:
Since authentic blank pears can be found, run 20 blank matrix tissue samples through digestion over 4 days and measure the noise level for each. Calculate the average noise and standard deviation of the 20 data points. The LOD may be calculated in several ways, such as LOD = 3SD or LOD = noise + 3SD, or using the approach outlined by Miller and Miller (1988). LOQ = 3LOD. Since all approaches may give varied results for LOQ, an experiment should be conducted where solutions of tin are prepared at increasing intervals between the lowest and highest calculated LOQ. If multiple injections of a particular solution have acceptable precision, then this concentration is the LOQ.
Stability:

Analyte in standard solution:
Compare a freshly prepared working standard to one that has been made and stored. Measure at intervals over a specified time period to determine Sn stability in solution.

Analyte in matrix:
Run a canned pear sample at specified time intervals over the time that the sample would typically be stored to see if Sn levels degrade or concentrate. Ensure standards used for the calibration curve are not degraded or expired.

Analyte in sample digest:
Digest a sample with a known concentration of tin and measure daily for a period of a week.

Recovery:
Since blank pear matrix is available and an MRL exists, fortify blank matrix at 3 levels: ½MRL, 1MRL and 2MRL. Run five samples per level, on 3 separate days.
Repeatability:

Instrument:
Inject each of the standards of the calibration curve, as well as a CRM or fortified samples, five times in random order on the same day.

Method:
Prepare pooled pear material at three levels: ½MRL, 1MRL and 2MRL. Run five samples per level over 3 separate days. If a CRM exists for Sn in canned pears (or canned fruit), incorporate it into these experiments.

Intermediate Precision:
A second analyst in the same laboratory repeats the procedure for method repeatability as above. If possible, use a different ICP-OES instrument.

Reproducibility:
Using our methodology, other laboratories would analyze pooled material at 3 levels, as well as CRMs if available.
Measurement Uncertainty:
Data from the recovery/repeatability experiments will be used to determine accuracy and precision. Upon implementation of the method, these values will be updated with analytical sample data as well as check sample data.

Other:
Other analyses may be required to further validate the method for use in the chemistry section. Analysis of CRMs, inter-laboratory samples, proficiency samples and in-house check samples adds to the method validation data and should be included as these data become available.
References

1. ISO (1999). ISO/IEC-17025: General requirements for the competence of calibration and testing laboratories. International Organization for Standardization, Geneva.
2. CAC (1997). CAC/GL 27-1997. Guidelines for the Assessment of the Competence of Testing Laboratories Involved in the Import and Export Control of Food.
3. Thompson, M., & Wood, R. (1995). Harmonized Guidelines for Internal Quality Control in Analytical Chemistry Laboratories. Pure & Appl. Chem. 67: 649-666.
4. Thompson, M., & Wood, R. (1993). International Harmonized Protocol for Proficiency Testing of (Chemical) Analytical Laboratories. Pure & Appl. Chem. 65: 2132-2144.
5. CAC (2009). Codex Alimentarius Commission Procedural Manual, 17th ed., Joint FAO/WHO Food Standards Program; ftp://ftp.fao.org/codex/Publications/ProcManuals/Manual_17e.pdf; accessed March 24, 2009.
6. CAC (2008). ALINORM 08/31/23; Report of the twenty-ninth session of the Codex Committee on Methods of Analysis and Sampling, Budapest, Hungary, 10-14 March 2008, Appendix II, pp. 31-33; http://www.codexalimentarius.net/download/report/699/al31_23e.pdf; accessed March 24, 2009.
7. ISO 8402 (1994).
8. Eurachem (1998). The Fitness for Purpose of Analytical Methods - A Laboratory Guide to Method Validation and Related Topics. http://www.eurachem.org/guides/valid.pdf; accessed March 24, 2009.
9. Thompson, M., Ellison, S.L.R., & Wood, R. (2002). Harmonized Guidelines for Single Laboratory Validation of Methods of Analysis (IUPAC Technical Report). Pure Appl. Chem. 74 (5): 835-855.
10. Youden, W.J., & Steiner, E.H. (1975). Statistical Manual of the AOAC, pp. 33-36.
11. CAC (2007). Codex General Standard for Contaminants and Toxins in Foods - Codex Stan 193-1995, Rev.3-2007.
12. EU (2006). Commission Regulation (EC) No 1881/2006 of 19 December 2006 setting maximum levels for certain contaminants in foodstuffs. Official Journal of the European Union, L 364: 5-24.
13.
14. EU (2001). Commission Directive 2001/22/EC of 8 March 2001 laying down the sampling methods and the methods of analysis for the official control of the levels of lead, cadmium, mercury and 3-MCPD in foodstuffs. Official Journal of the European Union, L 77: 14-21.
HC (2009). Food & Drug Act and Regulations, B.15.003; http://laws.justice.gc.ca/en/showtdm/cr/C.R.C.-c.870; accessed March 25, 2009.
15. HC (2007). Canadian Standards ("Maximum Limits") for Various Chemical Contaminants in Foods, Health Canada, Ottawa, ON, Canada; http://www.hc-sc.gc.ca/fnan/securit/chem-chim/contaminants-guidelines-directives-eng.php; accessed March 25, 2009.
16. IUPAC Compendium of Chemical Terminology (Gold Book). International Union of Pure and Applied Chemistry, copyright 2005-2008. http://goldbook.iupac.org
17. Anon (1998). AOAC Peer-Verified Methods Program - Manual on Policies and Procedures. AOAC International.
18. Alder, L., Holland, P.T., Lantos, J., Lee, M., MacNeil, J.D., O'Rangers, J., van Zoonen, P., & Ambrus, A. (2000). Guidelines for Single-Laboratory Validation of Analytical Methods for Trace-level Concentrations of Organic Chemicals. In: Principles and Practices of Method Validation, Fajgelj, A., & Ambrus, A. (eds.). ISBN 0-85404-783-2. The Royal Society of Chemistry, Cambridge, UK, pp. 179-248 (see also: Alder, L., Holland, P.T., Lantos, J., Lee, M., MacNeil, J.D., O'Rangers, J., van Zoonen, P., & Ambrus, A. (2000). Report of the AOAC/FAO/IAEA/IUPAC Expert Consultation on Single-Laboratory Validation of Analytical Methods for Trace-Level Concentrations of Organic Chemicals, Miskolc, Hungary, November 8-11, 1999. Report published on the website of the International Atomic Energy Agency (IAEA): http://www.iaea.org/trc).
19. EU (2002). European Communities Commission Decision 2002/657/EC, implementing Council Directive 96/23/EC concerning the performance of analytical methods and the interpretation of results. Off. J. European Communities 17.8: L221/8-L221/36.
20. Thompson, M., Ellison, S.L.R., Fajgelj, A., Willetts, P., & Wood, R. (1999). Harmonized guidelines for the use of recovery information in analytical measurement. Pure & Applied Chemistry 71 (2): 337-348.
21. den Boef, G., & Hulanicki, A. (1983). Recommendations for the usage of selective, selectivity and related terms in analytical chemistry. Pure & Applied Chemistry 55 (3): 553-556.
22. Horwitz, W., Kamps, L.R., & Boyer, K.W. (1980). J. Assoc. Off. Anal. Chem. 63: 1344.
23. Horwitz, W., & Albert, R. (1996). J. AOAC Int. 79: 589.
24. Thompson, M. (2000). Analyst 125: 385-386.
25. IUPAC Compendium of Chemical Terminology, electronic version; http://goldbook.iupac.org/C00778.html; accessed April 13, 2009.
26. IUPAC Compendium of Chemical Terminology, electronic version; http://goldbook.iupac.org/A00332.html; accessed April 13, 2009.
27. Miller, J.C., & Miller, J.N. (1988). Statistics for Analytical Chemistry, 2nd Edition. Ellis Horwood Limited, New York.
28. Standards Council of Canada (2005). CAN-P-4E (ISO/IEC 17025:2005) General Requirements for the Competence of Testing and Calibration Laboratories.
29. Ellison, S.L.R., Rosslein, M., & Williams, A. (eds.) (2000). Quantifying Uncertainty in Analytical Measurement, Second Edition. EURACHEM/CITAC Guide CG4; http://www.eurachem.org/guides/QUAM2000-1.pdf; accessed March 27, 2009.