National Air Quality Reference Laboratories and
the European Network – AQUILA:
Roles and Requirements for Measurement
Traceability, Accreditation, Quality
Assurance/Quality Control, and Measurement
Comparisons, at National and European Levels
Version 2, December 2009
This document can be found at:
http://ec.europa.eu/environment/air/quality/legislation/pdf/aquila.pdf
National Air Quality Reference Laboratories and the European Network –
AQUILA:
Roles and Requirements for Measurement Traceability, Accreditation,
Quality Assurance/Quality Control, and Measurement Comparisons, at
National and European Levels
Table of Contents
Chapter 1: National Air Quality Reference Laboratories and the AQUILA
Network: Introduction, General Objectives, and Terms & Definitions……...6
1.1 Overview………………………………………………………………………………...6
1.2 Drivers for the Establishment of “National Air Quality Reference
Laboratories”, and Requirements for the AQUILA Network, arising from
European Union Directives………………………………………………………….6
1.3 National Reference Laboratories: What are they? How are they
appointed? What are they responsible for?……………………………………...7
1.4 General Objectives and Main Aims of AQUILA…………………………………………9
Chapter 2: The Applications and Role of Measurement Traceability,
Traceable Calibration Standards, and Certified Reference Materials, in
Ambient Air Quality Measurements…………………………………………………11
2.1 Introduction……………………………………………………………………………11
2.2 What is Traceability?………………………………………………………………..12
2.3 The Realization and Dissemination of Traceable Measurement Standards
at a National Level: General……………………………………………………….14
2.4 Certified Reference Materials and their Roles in Quality Assurance and
Quality Control: Definitions and Roles…………………………………………..15
2.5 The Realization and Dissemination of Traceable Standards and Certified
Reference Materials at a National Level: Ambient Air……………………….16
2.5.1 Overview…………………………………………………………………………...17
2.5.2 Methods for the Production of Calibration Standards Traceable Directly to SI
Units: Ambient Air…………………………………………………………………17
2.5.2.1 Overview……………………………………………………………………………...17
2.5.2.2 Preparation of Gaseous Calibration Standards…………………………………18
2.5.2.3 Purity Requirements for Diluent Gases, Calibration Gas Species, and Zero Gases…………………………………………………………………..20
2.5.3 Methods using Certified Reference Materials, and their Role in Quality Assurance and Quality Control: Ambient Air…………………………….20
2.5.4 Calibration Methods that use “Primary” Measurement Techniques Not Directly Traceable to SI Units: Ambient Air……………………………………23
2.5.5 Methods not Traceable to SI Units or Primary Methods, Realized by Convention or Definition as Reference Methods: Ambient Air……………...24
2.6 Methods for Disseminating Traceability by Means of National or
International Standards or Reference Methods: Ambient Air………………25
2.6.1 Overview……………………………………………………………………………25
2.6.2 Dissemination for Methods Using Calibration Standards Produced with Concentration Values Linked Directly to SI Units: Ambient Air……………….26
2.6.3 Dissemination for Methods that Use CRMs for Quality Control and Quality
Assurance: Ambient air…………………………………………………………...28
2.6.4 Dissemination Using “Primary” Methods: Ambient Air…………………………30
Table of Contents (continued)
2.6.5 Dissemination using Reference Methods Designated by Convention or by Regulation: Ambient Air…………………………………………………………..30
2.7 International Comparisons to demonstrate Comparability of Measurements
between Countries…………………………………………………………………..31
2.7.1 Overview…………………………………………………………………………..31
2.7.2 Summary of Comparisons Organised by the EC Joint Research Centre…..31
2.7.3 International Comparisons carried out by the CCQM and Associated Organisations………………………………………………………………….32
2.7.3.1 Background…………………………………………………………………..32
2.7.3.2 International Comparisons of Ambient-Air Calibration Standards…….33
Chapter 3: Interpretation, Design, and Implementation, of Quality Systems
for European National Reference Laboratories………………………………….37
3.1 Initial Background……………………………………………………………………37
3.2 Development of the New European and Worldwide Quality-Assurance
Standard……………………………………………………………………………….38
3.3 General Quality Assurance and Quality Control Requirements for
European and National Reference Laboratories………………………………39
3.4 Additional Responsibilities of NRLs that Relate to Quality Assurance at a National
Level…………………………………………………………………………………….43
Chapter 4: Type Approval Testing and Product Certification of Automated
Instruments used for Ambient Air Quality Monitoring…………………………45
4.1 Definitions and Background………………………………………………………..45
4.2 Reference Methods/CEN Standards and Type Approval Requirements...46
4.2.1 CEN Standards with Requirements for Type Approval………………………..46
4.2.2 Summary of Type-approval Requirements within the CEN Standards………47
4.3 Requirements for the Certification of Type-approved Analysers…………..47
4.3.1 Overview……………………………………………………………………………47
4.3.2 EN 15267 Part 1: Certification of automated measuring systems – General
principles……………………………………………………………………………48
4.3.2.1 Need for Certification across Europe……………………………………..48
4.3.2.2 Scope of the EN 15267 Part 1 Standard………………………………….49
4.3.3 EN 15267 Part 2: Certification of automated monitoring systems – Initial
assessment of the AMS manufacturer’s quality management system, and
post-certification surveillance of the manufacturing process…………………49
4.4 Possible Roles for an NRL………………………………………………………50
4.5 Towards a Harmonised European Approach…………………………………51
Chapter 5: Quality Assurance and Quality Control in Ambient Air-quality
Monitoring Networks at a National Level, Operated for EU Regulatory
Purposes…………………………………………………………………………………….51
5.1 Introduction……………………………………………………………………………52
5.2 General Principles of Quality Assurance and Quality Control as Applied to
Air Quality Networks…………………………………………………………………54
5.3 Requirements of the CEN Standard Methods…………………………………55
5.3.1 Type Approval of Analysers………………………………………………………55
5.3.2 Field Operation and on-going QA/QC Activities………………………………. 55
5.3.2.1 Suitability Evaluation and Initial Installation………………………………55
5.3.2.2 Requirements for Ongoing QA/QC……………………………………….56
Table of Contents (continued)
5.3.3 Data Handling………………………………………………………………………56
5.3.4 Calculating the overall Expanded Uncertainty of the Measurement Results..57
5.4 Traceability and Other Requirements of the EN ISO 17025 Standard…57
5.5 Further Requirements of QA/QC Systems for EU Regulated Pollutants………57
5.6 QA/QC Activities Linked to the Collecting and Reporting of the Measurement Data…………………………………………………………..58
5.7 Demonstration of Equivalence……………………………………………..58
5.9 Inter-laboratory Comparisons at a National Level………………………..59
5.9.1 Background to the Requirements………………………………………………..59
5.9.2 Types of Inter-Laboratory Comparisons…………………………………………59
5.9.3 Evaluation of the Results of the ILC at a National Level……………………...60
5.9.4 Links to ILCs at a European Level………………………………………………62
5.10 Examples of Specific Recommendations for NRLs at a National Level.62
Chapter 6: Intercomparisons Carried Out By the EC’s Joint Research Centre, Italy………………………………………………………………….63
6.1 Overview……………………………………………………………………………..63
6.2 Examples for Inorganic Gaseous Compounds: Ozone, Carbon Monoxide, Nitrogen Oxides, and Sulphur Dioxide………………………………………..64
6.3 Examples for Organic Gaseous Compounds: BTEX and VOC Ozone Precursors…………………………………………………………………………..65
6.3.1 BTEX…………………………………………………………………………………65
6.3.2 VOC Ozone Precursors…………………………………………………………...66
6.4 Examples for PM Constituents: Heavy Metals and EC/OC……………….68
6.4.1 Heavy Metals……………………………………………………………………….68
6.4.2 EC/OC……………………………………………………………………………….69
6.5 Example of PM10 and PM2.5 Comparisons………………………………….70
References………………………………………………………………………………….71
Annex 1: Organisation of Intercomparison Exercises for Gaseous Air
Pollution for EU National Air Reference Laboratories and
Laboratories of the WHO EURO Region…………………………….72
ANNEX 2: Background to the Requirements for Quality Systems from the
Previous Directive 1996/62/EC…………………………………………81
Chapter 1: National Air Quality Reference Laboratories and the
AQUILA Network: Introduction, General Objectives, and Terms
& Definitions
1.1 Overview
This document provides the background, the context, and certain regulatory and scientific information on the European Network of National Air Quality Reference
Laboratories, known as AQUILA, which has been active since its constitution in
December 2001. The AQUILA Network is made up of ‘National Reference
Laboratories’ from European countries. This document also defines what constitutes
National Air Quality Reference Laboratories, and specifies their roles and
responsibilities in a number of important scientific and technical areas.
1.2 Drivers for the Establishment of “National Air Quality Reference Laboratories”, and Requirements for the AQUILA Network, arising from European Union Directives
Air quality across the whole European Union (EU) is a field where the European
Commission (EC) has been proactive in defining a comprehensive strategy and an
implementation methodology. Recently, this has resulted in the plans and initiatives
developed under the Clean Air for Europe (CAFE) programme. This Programme has also involved the publication, or the revision, of a range of EU directives, concerned both with ambient air quality and with pollutant emissions to the atmosphere.
The directives currently of most relevance to this document are:
2008/50/EC – The “new” Directive of the European Parliament and of the Council on
ambient air quality and cleaner air for Europe (Official Journal of the European Union
L152 11.6.2008).
2004/107/EC - “4th Daughter directive” relating to arsenic, cadmium, mercury, nickel and polycyclic aromatic hydrocarbons in ambient air;
In addition, for completeness, it should be noted that this document covers the
requirements of the previously published directives listed below, which will be repealed
by June 2011 as the new Directive is required to be transposed in all Member States:
1996/62/EC - “Air Quality Framework directive” on ambient air quality assessment and
management;
1999/30/EC - “1st Daughter directive” relating to limit values for sulphur dioxide,
nitrogen dioxide and oxides of nitrogen, particulate matter and lead in ambient air;
2000/69/EC - “2nd Daughter directive” relating to limit values for benzene and carbon
monoxide in ambient air;
2002/3/EC - “3rd Daughter directive” relating to ozone in ambient air;
The above directives, all concerning ambient air quality, and particularly the recent Directive 2008/50/EC, have a number of general objectives, which may be summarized as follows:
- define objectives for ambient air quality to avoid, prevent or reduce harmful effects on
human health and the environment as a whole,
- assess ambient air quality in Member States on the basis of common methods and
criteria,
- obtain information in order to combat air pollution and monitor long-term trends,
- ensure public information,
- maintain air quality where it is good and improve it in other cases, and
- promote cooperation between Member States in reducing air pollution.
The requirement for ensuring measurements of known measurement uncertainty that
are comparable across the EU is crucial for ensuring that appropriate abatement
measures are triggered, where necessary, and that a level-playing field exists for any
potential enforcement of the provisions of the ambient air quality Directives.
1.3 National Reference Laboratories: What are they? How are they
appointed? What are they responsible for?
The terms “National Air Quality Reference Laboratory” or the alternative “National Reference Laboratory”, and their abbreviation “NRL”, have been used colloquially for
a number of years, but there has been no explicit internationally-accepted formal
definition laid down of what constitutes an NRL, or who exactly is responsible for
specifying this, particularly in the context of the European Union’s ambient air-quality
directives outlined above. A brief explanation and interpretation of this is therefore
provided here, particularly from the context of the most recent EU Directive
2008/50/EC.
As is now well known, Article 3 of Directive 2008/50/EC, Responsibilities, (as well as
the previous directive 1996/62/EC) states:
Member States shall designate at the appropriate levels the competent authorities and
bodies responsible for the following:
(a) assessment of ambient air quality;
(b) approval of measuring systems (methods, equipment, networks and laboratories);
(c) ensuring the accuracy of measurements;
(d) analysis of assessment methods;
(e) coordination on their territory if Community-wide quality assurance programmes are
being organised by the Commission;
(f) cooperation with other Member States and the Commission.
Where relevant, the competent authorities and bodies shall comply with Section C of
Annex 1
(NOTE 1: The terms “competent authority” and “competent body” might be viewed in
the above as being used interchangeably. However, for the purposes of this document
the term “competent authority” is used to mean the national government ministry or the
national agency that has been empowered to make the designations listed above,
whilst a “competent body” is one of the organisations that are so designated by the
relevant competent authority.)
(NOTE 2: In Directive 2008/50/EC, and in the previous ambient air directives listed above, the term “Community-wide intercomparisons” is used. It is felt, however, that the prefix in the word “intercomparison” is unnecessary in English, and the word “comparison” is therefore used in this document whenever the term in the directive is not being specifically quoted.)
A number of points may be concluded and emphasized concerning Article 3 of
Directive 2008/50/EC. Many of these might already be clear to the reader of this
document. However, they are emphasized below for completeness, and in order to promote a uniform understanding of the intended concepts and conclusions across the AQUILA Network and other relevant organisations:
(i) It is the responsibility of the competent authority of the specific Member State to designate or appoint relevant organisation(s) to carry out all the tasks listed in (a) – (f) above; these competent authorities are, as defined above, generally national government ministries or national agencies.
(ii) In many Member States it may not be practical for all of the tasks listed above to be carried out by one organisation. For example, in some of the larger Member States there may be different organisations that cover different pollutants, while in others all may be covered by one organisation. In addition, some of these tasks are broader than, or different from, those required of an NRL (see below). This may also
allow implicitly for subcontracting of certain tasks to other organisations or
individuals, but it clearly leaves the designated body with its responsibilities
outlined above and discussed in more detail later in this Document. These issues
are all for the competent authority to define.
(iii) Some of the above tasks comprise mainly scientific and technical experimental
work associated with the requirements of Annex 1 Section C, whilst others do not.
This enables us to differentiate whether there are requirements for the designated
bodies for a particular task to be accredited to the EN ISO 17025 standard
discussed in Annex 1 or not – application of EN ISO 17025 is relevant to those
organisations that have involvement with scientific and/or technical experimental
quality assurance work, and not to the other tasks.
(iv) The competent bodies that are designated for the tasks involving the scientific and
technical activities associated with quality assurance and quality control of the
accuracy of the results of all the regulated ambient-air pollutants at a
national/Member State level and that are involved in participating in, or
coordinating, “Community-wide quality assurance programmes being
organised by the Commission” (Article 3) should be considered as NRLs.
(v) The NRL(s) in a given Member State are also those that take part in the
“Community-wide intercomparisons” covering one or more of the pollutants
regulated by the directive, as stated in Annex 1 Section C, and these require
accreditation to the EN ISO 17025 standard. The exact scope of the accreditation
required is discussed below.
(vi) There are a number of ambient air pollutants that are covered by Directive
2008/50/EC (and others covered by directive 2004/107/EC – the “4th daughter
directive”). It is therefore possible that all the scientific and technical quality
assurance tasks that are required to be implemented might be carried out by
different organisations within a Member State, and then there will be more than
one designated NRL in any Member State to cover all of these. This, as noted
above, is the responsibility of the competent authority of a Member State to define.
(vii) There may be a multi-level or tiered system of quality assurance and quality
control to ensure the accuracy of the measurements in a given Member State. This
is acceptable within the requirements of the directive(s), and this is again a
decision of the competent authority in that Member State.
(viii) Where there is a multi-level or tiered quality assurance system for ambient air
quality measurements relevant to the Directive 2008/50/EC (and by implication to
directive 2004/107/EC), the definition and designation of an NRL, and the issues
related to accreditation to the EN ISO 17025 standard apply only to the national
(highest-level) organisation that also participates in the Community-wide intercomparisons, unless national transposition of the Directive requires otherwise.
However, in this case it should be understood that the competent authority and/or
body still has certain responsibilities for ensuring the accuracy of all relevant
measurements within the Member State as discussed further below (Section
3.4).
(ix) The competent authority of a given Member State may, in principle, designate an
organisation outside of that Member State, where it can be assured of the quality
and accuracy of the measurements at a national or other level. This might occur if
there is no competent body with suitable expertise within the Member State.
(x) The competent authorities in Member States are, as stated previously, generally national government ministries or national agencies, and they may choose not to designate permanently any given organisation or laboratory as the competent body for a given range of the ambient air activities defined in Article 3. Instead, a competent authority may wish to retain this designation formally for itself. This may be, for example, because it allocates its scientific, technical, and other tasks to different organisations regularly on a competitive-tender basis, so that no one organisation remains permanently responsible. In this case it is anticipated that the competent authority will designate the organisation currently responsible for a given range of quality assurance and quality control activities; where possible, these organisations therefore become NRLs, participate in AQUILA and the other required activities, and will need to gain accreditation to the EN ISO 17025 quality-assurance standard with a scope that is described in detail in Section 3.3 of this document.
1.4 General Objectives and Main Aims of AQUILA
A formal European Network has been established, with support from the EC
Directorate-General Environment, and the Joint Research Centre, Ispra, Italy, comprising about 40 organisations across Europe. AQUILA is now the formally
constituted Network and is open to all National (Air Quality) Reference Laboratories
across Europe.
AQUILA, as the European Network of National Air Quality Reference Laboratories, has
taken on a number of roles, responsibilities and tasks, with the following overall
objectives:
1. Provide a forum for the regular exchange of scientific and technical information
between the National Reference Laboratories (NRLs), in order to improve their
knowledge, enhance monitoring methods, improve the accuracy of the results, and
harmonise quality-assurance and quality-control practices, across Europe;
2. Provide coherent, expert, internationally agreed judgements and advice, on issues
related to measurements and their strategy at an EU level;
3. To provide scientific and technical advice, where required, to NRLs that are less
well developed and to other organisations (European Environment Agency, World
Health Organization etc);
4. Provide appropriate scientific and technical advice to the European Commission to
support current EU legislation and aid in the development of future policy. This
includes, for example, the provision of quantitative performance data on new
monitoring methods under a range of realistic operating conditions, in order to
further environmental protection by substantially accelerating the acceptance and
use of improved and cost-effective technologies that provide equivalent results;
5. Provide technical advice to the Competent Authorities in the EU Member States, in
order to assist with the prompt dissemination of EU policies and monitoring
requirements, and ensure up-to-date, valid, and harmonised implementation of
these, and where needed, provide interpretation;
6. To raise the profile of the roles and responsibilities of AQUILA and the NRLs, in
order to enable greater input to, and thereby improve, where required, scientific and technical activities within Member States;
7. Coordinate, or contribute to the coordination of, international and national intercomparison exercises carried out for the purposes of demonstrating the
harmonisation of ambient air-quality measurements across Europe;
8. Participate, where possible, in European standardization activities in the field of
relevant ambient air-quality measurements, and/or provide technical advice to
these. This may be done by means of common representation of AQUILA
members on the relevant CEN Working Groups, and/or through recommendations
of the AQUILA group for consideration by the relevant CEN Working Group.
9. Act as a forum for the collation of practical experiences on published standards so
that future revisions are based on sound and up-to-date data.
The objectives listed above represent the main technical activities that the AQUILA Network has been, and is, involved with. However, the AQUILA activities become broader in detail as time progresses. For example, the AQUILA Group is now
considering the best methodology for handling and presenting monitoring data at close
to the detection limits to ensure a harmonised means of applying this across Europe.
The subsequent Chapters in this document provide background information on the
selection and the responsibilities of National Reference Laboratories that make up the
AQUILA Network, and also their requirements for accreditation as used in the context
of the European ambient-air directives summarized above. The subsequent Chapters
also provide details on the main scientific and technical requirements that the AQUILA
Members must conform to, and some of the most relevant activities of the AQUILA Network itself up to the present time are also discussed.
Chapter 2: The Applications and Role of Measurement
Traceability, Traceable Calibration Standards, and Certified
Reference Materials, in Ambient Air Quality Measurements
2.1 Introduction
The concept of measurement traceability is generally well known within the
metrological community. Within this community, it is generally known as “metrological traceability”, as defined in Reference 1; however, for the benefit of some readers, the equivalent term “measurement traceability” will be used in this document instead. In
this Chapter a brief discussion of the general principles of traceability and its
importance to AQUILA is provided first. This is followed by a discussion on what is
meant by measurement traceability in the context of the ambient air quality
measurements, of relevance to EU directives and related regulations in Europe.
This measurement traceability is described in the specific context of its requirements to
underpin, and hence support, the accuracy and comparability of measurements made
by European NRLs, and/or similar institutes and/or associations of organisations, that
carry out measurements of ambient air quality to meet the regulatory requirements in
the relevant EU directives.
(NOTE 3: In many Member States the NRLs do not carry out the actual measurements
in the networks or do not carry out the associated quality-assurance/quality-control activities at the network sites that are required for regulatory purposes. Instead these
are devolved to regional or other groups of laboratories or associations of laboratories.
However, these NRLs nevertheless have responsibilities that are specified within the
Directive(s) for ensuring the quality and accuracy of all such reported measurements in
the Member State, as discussed in more detail in Section 1.4.)
These network measurements, which are carried out to provide data in support of national regulations derived from the EU directives listed above, are made using a number of techniques and range from, for example, continuous automatic monitors to discontinuous manual methods. Nevertheless, the requirements for measurement
traceability of all of these (as defined in this metrological context) are given in Section
C of Annex 1 of Directive 2008/50/EC:
To ensure the accuracy of measurements and compliance with the data quality
objectives, the appropriate competent authorities and bodies designated pursuant to
Article 3 shall ensure that:
- All the measurements undertaken in relation to the assessment of ambient air quality
discussed in Articles 6 & 9 are traceable in accordance with EN ISO 17025 Section
5.6.2.2; and
- The national laboratories appointed by the competent authority, designated within
Article 3, that take part in Community-wide intercomparisons covering the regulated
air pollutants, are accredited according to EN ISO 17025 for the Reference Methods
given in Annex VI of the Directive.
The need for traceability of measurements is also specified in the EN ISO 17025
standard as one essential aspect for fulfilling the requirements of that standard (e.g.
see paragraph 5.6 of the standard). In addition, the concept of traceability to relevant
(primary or other) standards of the SI system of units of measurement is introduced as
a requirement in that standard, for use wherever practical. It should be understood that
this is considered important because traceability should provide a rigorous and robust
means of determining the uncertainty of such measurements. This is discussed in
more detail below in this Chapter, and in Chapter 3 of this document.
The concept of measurement traceability also leads naturally to requirements for
nationally- or internationally-traceable calibration standards. It leads, in some cases,
to requirements for validated Certified Reference Materials (CRMs). The applications
and roles of both of these types of calibration standards, and CRMs generally used for
validation, are also summarized below. These are used to ensure the accuracy and
international comparability of measurements, when monitoring ambient air quality
pollutants that are regulated by EU directives.
2.2 What is Traceability?
The concept of (metrological or measurement) “traceability”, as it is now generally
used, was initially and generally defined in metrological terms in the International
Vocabulary of Basic and General Terms in Metrology (VIM), revised in 2008 (Reference 1). This (metrological) traceability can be interpreted or paraphrased as:
“The property of a measurement result (or the value of a standard) whereby the
result can be related to stated references that are generally appropriate national or
international standards, through an unbroken chain of comparisons (calibrations), all
having stated (measurement) uncertainties that contribute to the overall measurement
uncertainty of the result”
This requires an unbroken chain of measurements, all with stated uncertainties, possibly from a primary or national realization of an SI unit, or another internationally-accepted unit, to the final measurement result, which may itself be the result of many steps in the chain. It is clear from this that each step down the measurement chain results in a poorer level of accuracy, as is represented in Figure 2.1.
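As a simple illustration of this, assuming uncorrelated uncertainty contributions and following the usual GUM approach (the symbols below are introduced here only for explanation and are not quoted from the Directive or the VIM), the standard uncertainties of the stated reference and of each calibration step in the chain combine in quadrature, and an expanded uncertainty is then obtained with a coverage factor:

u_c(y) = \sqrt{ u_{\mathrm{ref}}^{2} + \sum_{i=1}^{n} u_{\mathrm{cal},i}^{2} }, \qquad U = k \, u_c(y) \quad (k = 2 \text{ for approximately } 95\,\% \text{ coverage})

Each additional comparison adds a further positive term under the square root, which is why the working standards at the bottom of the hierarchy shown in Figure 2.1 carry larger uncertainties than the primary or national standards at the top.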
To further assist with the interpretation of the above definition of traceability from VIM,
it is useful also to note a further definition that is used in ISO standard 14111 “Natural
gas - Guidelines to traceability in analysis”:
“Ability to provide evidence of the uncertainty attributed to the measurement results
through documented calibrations, using measurement standards of known uncertainty,
and comparison measurements of known performance”
This emphasises the procedure that is used to assign the overall uncertainty to the
final result of the measurement process. It is also compatible with the above VIM
definition. The concept of measurement or metrological traceability is illustrated in
Figure 2.2 to demonstrate how it may be achieved in practice using different primary
methods. A discussion on what may potentially constitute a ‘primary’ method for the
measurement of amount of substance, relevant to the monitoring of atmospheric
pollutants, has been published (Reference 2).
[Figure 2.1 is a schematic diagram: it shows the traceability hierarchy from SI Units, through (inter)national (primary) standards, secondary standards, and work/in-house standards, linked by comparisons, with the measurement/calibration uncertainty increasing at each step down the hierarchy.]
Figure 2.1: Schematic of how the Hierarchy of Traceability is achieved, and the Resulting Measurement Uncertainties

[Figure 2.2 is a schematic diagram: starting from the SI system of units, primary direct methods (e.g. coulometry, FPD) and primary direct methods (e.g. gravimetry) lead to calibration standards and pure materials, and primary ratio methods (e.g. IDMS) lead to real-sample or matrix reference materials; these in turn link, through ‘secondary’ methods, to real samples.]
Figure 2.2: Examples of Practical Realizations of Measurement Traceability to the SI System of Units (FPD = flame photometric detection, IDMS = isotope-dilution mass spectrometry)
In addition, wherever the measurement results are to be described correctly as
traceable it is essential to specify to which specific types and values of ‘appropriate
measurement standards’ the traceability has been established. This can be, for
example, established either to:
(a) One of the base system of measurement units (the International SI System of
Units), such as a mass measurement;
(b) A reference material, or similar, that has one or more of its properties certified by
one or more internationally accepted measurement methods;
(c) An internationally defined and recognised scale, such as a pH value;
(d) A method defined in a widely recognised international standard;
(e) A practical realization of a well established measurement method.
(f) Traceability to a recognised national or similar laboratory (e.g. “NIST USA
traceable”) – this is not an acceptable claim on its own in the context of this
document.
The above routes for achieving traceability have all been adopted and employed in the
field of ambient air measurements, carried out for the purposes of compliance with the
regulatory requirements in the EU ambient-air directives. These will be discussed more
specifically in Section 2.5 below.
However, we first present some GENERAL background on the methods whereby
traceability is developed and disseminated in different countries worldwide in
order to illustrate the concepts involved. These involve dissemination using one of
the following methods:
- At a national level through ‘primary’ standards realized nationally or obtained from another National Metrology Institute (NMI); or
- At an international level through the application of appropriate Certified Reference Materials. This may not be an acceptable means of achieving rigorous traceability but may be necessary under some circumstances; or
- Using one of the other ‘primary’ or related methods given in (a) – (e) above in this Section.
2.3 The Realization and Dissemination of Traceable Measurement Standards at a
National Level: General
Many countries worldwide have appointed Institutes in their country to take
responsibility for the development, maintenance, and the dissemination of
measurement standards in order to provide calibrations and references for industry,
government, and the public. There are currently approximately 80 such Institutes
worldwide, which are known usually as National Metrology (or Measurement) Institutes
(NMIs) - also as National Standards Laboratories. These NMIs develop and maintain
measurement standards at a national level that are considered important to national
and international trade, the quality of life, etc., and from which traceability of most of the measurements in that country can be derived, wherever required.
Clearly it is difficult to develop and maintain in each country all of the measurement
standards that are required and that exist within the SI system, and any other
measurement systems as required, particularly in one institute. Therefore, most NMIs
develop and maintain a subset of the total, which are considered to be the most
important to that country, and they will then depend upon other NMIs in different
countries to produce those that they require but do not maintain themselves, and for
which national or international traceability is considered necessary. Alternatively, the
NMI in a given country may seek to identify and to designate another laboratory as its
representative for a certain range of measurement activities, where it does not have the expertise or the capacity to maintain these itself in that country.
The national standards developed in any specific NMI are generally maintained and
retained within that institute, and not disseminated or distributed to outside customers
or other interested organisations. Instead, there are generally two different
mechanisms for disseminating the traceability from a nationally established ‘primary’
measurement standard or reference measurement method. This is by:
(1) Bringing the equipment, a measurement method, or some form of calibration
artefact to the NMI, where it is calibrated with respect to the national standard by
means of experimental comparisons that have known performance and
measurement uncertainty.
(2) The NMI producing and marketing calibrated artefacts or other measurement techniques that have been ‘calibrated’ with respect to the national standard held by the NMI, where the calibration has been shown to be stable over time. These can therefore be provided for application by the end user externally to the NMI.
These two mechanisms may give rise to nationally traceable calibrations that have
different measurement uncertainties. Both of these mechanisms are used by NMIs
worldwide. The mechanism selected depends on the quantity being disseminated and
also on the optimum way that achieves the required traceability with the necessary
measurement uncertainty. A schematic diagram of the general hierarchy of traceability
applicable to many measurements is shown in Figure 2.1 above, indicating clearly that
the measurement uncertainties increase as the measurement becomes further from
the source of traceability.
In addition, it is important to recognise that many NMIs (or other laboratories at a
national level designated for a particular field of measurement expertise) participate
regularly in comparisons, usually between each other. This is necessary in order to
demonstrate the international comparability and accuracy of the national standards
they maintain. Some of the methods for doing this are discussed in Section 2.7 below.
Section 2.5 below summarizes the work that NMIs are carrying out in certain countries in the field of ambient air-quality measurements, and in particular for the provision of traceability to nationally maintained or internationally recognised standards, and/or using internationally-accepted Certified Reference Materials. This procedure for disseminating traceability is similar in principle to that shown in Figure 2.1, and is explained in more detail in Section 2.5 below.
2.4 Certified Reference Materials and their Roles in Quality Assurance and
Quality Control: Definitions and Roles
It is important to realise that in certain circumstances, the term “Certified Reference
Material” and the term “measurement standard” are used interchangeably. This usage is not fully valid, and is not adopted in this document. In addition, their roles and methods of
use are usually also different. Nevertheless Certified Reference Materials may be used
as methods of calibration, or methods of checking the validity of a calibration. They
may be considered effectively as calibration standards where certain of their properties
only are known, and they are therefore generally used differently to traceable
calibration standards of the type discussed above. However, they may provide one
method of supplying a methodology for establishing the comparability of measurement
results between one organisation or one country and another, with certain limitations,
particularly where no fully traceable method exists. For example, in general, CRMs can
only be considered calibrants where they are used in a manner such that they exhibit
the certified value for the quantity of concern in the method being used, with no other
effects due to their different composition, sample matrix effects, etc. Thus, where CRMs are used properly for calibrations they provide traceability, but where they are used in quality control and quality assurance only they do not strictly provide traceability.
A Reference Material (RM) may be defined as:
“Material, sufficiently homogeneous and stable with reference to specified properties,
which has been established to be fit for its intended use in measurement, or in the
examination of nominal properties (of a sample);”
This emphasizes that certain of the properties only may be homogeneous and stable.
It specifies nothing about the other properties of the RM, or its matrix. This therefore
means that it may only be applied where the effects of not knowing these other
properties, or the effects of the matrix material(s), are not significant for the validity of
the analysis being used. This also constrains how they should be applied.
The activities whereby Reference Materials are produced, characterised, and certified are among the key activities in improving and subsequently maintaining a comparable, coherent, stable, and accurate system worldwide for nearly all types of measurements. These activities result in what is known generally as a Certified Reference Material (CRM). The
definition of this may be considered as:
“Reference material, accompanied by documentation (usually in the form of a
recognised certificate) issued by an authoritative body using valid certification
procedures and providing one or more specified values for specified properties
with associated uncertainties and with specified traceabilities.”
The definition indicates some of the differences between nationally or internationally
traceable calibration standards that are discussed in Section 2.3 above, and RMs and
their associated CRMs:
- Only certain of their properties may be homogeneous and stable, with no certification of the behaviour of the other properties of the CRM, which may give rise to interference effects;
- The CRM is provided in a given matrix material (usually solid or liquid). This may or may not have an effect on a given analytical procedure, and this effect may give rise to a different result if the sample to be analysed is in a different matrix;
- Certification is carried out on a specific and finite batch of the RM. When this is used up, a new (and different) RM must be certified and this may have different properties, interference effects, matrix effects, etc.
The properties and limitations summarized above are also applicable to the CRMs
used in the validation of the measurements of ambient air quality that are regulated by
EU directives, as discussed in Section 2.5.3. They are rarely used as calibrants in
these ambient-air pollutant cases.
2.5 The Realization and Dissemination of Traceable Standards and
Certified Reference Materials at a National Level: Ambient Air
2.5.1 Overview
This Section outlines some of the methods by which nationally traceable calibrations
and/or calibration standards and Certified Reference Materials are realized in the field
of ambient air quality, in the context of the requirements of the EU directives:
- At a national level by the NMI concerned;
- Or by the alternative laboratories that are designated by the competent authority (known then as National Reference Laboratories - NRLs);
- Or by a laboratory designated by the NMI, with specialist scientific and technical expertise in the ambient air quality field.
The different methods of dissemination of these standards are also summarized below
in Section 2.6, which include the distribution of what are sometimes described as
‘Certified Reference Materials’ that are traceable to these national standards.
The use of other Certified Reference Materials, that are internationally recognised
and used widely across different countries, is discussed separately in Section 2.5.3 below.
The methods by which nationally traceable calibrations and/or calibration standards
are developed and maintained at a national level within the scope of this document fall
into a number of different categories. These cover calibration standards that are:
(1) Prepared in a manner that is traceable directly to SI units though mass, volume
(length), flow (mass and time) etc. These usually apply to calibration standards for
certain pollutant gases, although there are different means of realizing and
disseminating these (see Section 2.5.2 below);
(2) Not able to be produced in a manner that is directly traceable to SI units but
traceability can be achieved by means of the realization of a potentially ‘absolute’
method such as optical photometry, when implemented under controlled conditions
(see Sections 2.5.3 & 2.5.4 below);
(3) Not able to be realized in a traceable manner, and not able to be realized as a
‘primary’ or ‘absolute’ method, but it is realized by convention or by definition as the
reference method, preferably having the smallest measurement uncertainties
achievable using that method (see section 2.5.5 below);
2.5.2 Methods for the Production of Calibration Standards Traceable Directly to SI Units: Ambient Air
2.5.2.1 Overview
There are a number of methods for the preparation of calibration standards that are
considered as traceable directly to the SI system of units. These, which are described
in published EN ISO standards, are summarized below. They have been used by NMIs
to develop and maintain their national/primary standards, and several international
comparisons have been carried out between NMIs worldwide. The most important of
these comparisons are summarized in Section 2.7.3; these demonstrate comprehensively the level of the comparability and accuracy of the nationally-realized primary standards that have been developed and maintained using these
primary methods.
There are, however, a number of quality assurance requirements that are absolutely
essential to implement if accurate and comparable standards are to be produced and
maintained. Many of these are specified in detail in the relevant EN ISO standards
discussed below, but one common and important issue in the quality control is the
requirement in all the preparation methods for diluent gases of sufficient and monitored purity; this is discussed in Section 2.5.2.3 below.
2.5.2.2 Preparation of Gaseous Calibration Standards
The most convenient calibration standards that can be produced to provide traceability
directly to SI units at a national level are those for the gaseous pollutants covered by
the 2008 EU Directive:
(a) Sulphur dioxide, nitrogen dioxide (and nitrogen monoxide), ozone, carbon monoxide, and benzene, directly regulated under this Directive;
(b) Multi-component or single-component hydrocarbon ozone precursors as specified in Annex X of the Directive.
There are a number of ‘primary methods’ that can be used for preparing these
standards that are considered directly traceable to SI units as listed in Section 2.2 (a)
above (where traceability is achieved directly to one of the base units of the SI system
such as mass or length (volume), as well as to one or more of the derived SI units,
such as pressure). These are as referred to in Section 2.5.1 bullet (1).
These have been developed into international standards (norms) through the International Organization for Standardization (ISO), and mainly through its Technical Committee TC 158. This has produced a suite of ISO standards specifying different
methods for preparing and validating the prepared gas standards, as summarized in
Table 2.1, presented for the example of NOx calibration standards.
Outlines of the different preparation techniques listed in Table 2.1 are summarized
below for clarity and completeness. For more detailed information, the relevant ISO
standards given in Table 2.1 should be consulted:
- One of the methods used worldwide to prepare gas standards at all levels of
concentration, for many different species, is to use gravimetry to weigh specific
pollutant gases together with relevant matrix/diluent gases into high-pressure
cylinders, and then to dilute these gas mixtures to the required low concentrations
for ambient air concentration calibrations, as required. A number of NMIs and other
designated laboratories, carry out this process. An advantage is that these gas
mixtures can be prepared for use at the low concentrations required. A
disadvantage is that the lower the concentrations prepared the more likely the gas
standard is to be unstable for certain pollutant species. It is important to recognise,
however, that comprehensive quality assurance and quality control procedures are
required if valid standards are to be produced, and this becomes increasingly
difficult at the low concentrations required for ambient calibrations. This method is
an example of where gas standards are prepared that are directly traceable to the
SI mass unit, via a number of accurate weighings (gravimetry), as stated in Section
2.5.1 bullet (1).
- A further method, usable only for some pollutant species and concentration ranges,
is to employ permeation tubes. These permeation tubes are designed so that a
given mass of pollutant gas permeates from the tube per unit time, depending on
the temperature at which the tube is maintained. Then, if this gas is mixed into a
stream of diluent gas of known flow rate, a known concentration gas standard may
be produced. This requires that the mass loss of the permeation tube is
determined, and the flow of the diluent gas is measured. The mass loss is thus
traceable to the SI unit of mass, and the dilution process is traceable either to
pressure measurement, or it may alternatively be calibrated using gravimetry.
- Static injection can be used to prepare a known concentration in a vessel of known volume, with injections of known amounts of the pollutant gas into the vessel through a syringe or similar. The advantage is that the standard can be prepared
- A further method uses dynamic dilution of a higher concentration of the pollutant gas than required, usually prepared gravimetrically as above, blended with a diluent gas through flow controllers to achieve the required low concentration. The advantage is that the gas standard is possibly more stable, with the disadvantage that dilution is required. This is an example where the original standard is traceable through gravimetry directly to the mass unit, and the dilution process is traceable either to pressure measurements or it may be calibrated using gravimetry. Illustrative relations for the gravimetric, permeation, and dynamic-dilution routes are sketched after this list.
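The relations below are given only as an illustrative sketch of how each preparation route links the value of the prepared standard to SI-traceable quantities; the symbols are introduced here for explanation and are not quoted from the ISO standards listed in Table 2.1. For gravimetry, the amount fraction of component k follows from the weighed masses m_i and molar masses M_i; for a permeation tube, the concentration follows from the permeation (mass-loss) rate q_m and the diluent flow q_V; for dynamic dilution, the amount fraction follows from that of the parent mixture x_p and the (molar, or volumetric at identical conditions) flows of parent gas q_p and diluent q_d:

x_k = \frac{m_k / M_k}{\sum_i m_i / M_i} \qquad\qquad
C = \frac{q_m}{q_V} \qquad\qquad
x = x_p \, \frac{q_p}{q_p + q_d}

In each case the quantities on the right-hand side (weighings, mass loss, flows, pressures) are the points at which traceability to SI units, and the associated uncertainty contributions, enter the prepared standard.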
Table 2.1: Methods for the Preparation of Calibration Gases According to ISO Standards and other Guidelines: Example for NOx Standards

Method               NO(a)  NO2   Description                                         Standard or Guideline to be used
Cylinder             +      +     Gas cylinders containing nitrogen or synthetic      ISO 6142, ISO 6143
                                  air, as appropriate, at the required low
                                  concentrations for the direct calibration of NOx
                                  ambient air analysers
Permeation tubes     -      +     Determination of the weight loss of a permeation    ISO 6145-10
                                  tube containing NO2 liquid and gas
Static dilution      +      +     Preparation by means of injecting known amounts     ISO 6144
                                  of NO and NO2 into a known volume
Dynamic dilution     +      +     Dynamic blending of a cylinder of pollutant gas     ISO 6145-6/7
                                  at a higher concentration than required with
                                  nitrogen or synthetic air as appropriate
Gas phase titration  -      +     Conversion of NO-containing gas with ozone to       ISO FDIS 15337 & VDI 2453-2
                                  produce NO2

(a) + appropriate method, - not applicable
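For completeness, the gas phase titration entry in Table 2.1 relies on the quantitative reaction of nitrogen monoxide with ozone, so that the NO2 produced can be related to the SI-traceable NO standard consumed; the reaction is shown here only as an illustration of the principle:

\mathrm{NO} + \mathrm{O_3} \rightarrow \mathrm{NO_2} + \mathrm{O_2}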
These calibration standards are generally realized in different countries at a national
level either by the relevant NMI or its associated designated laboratory, or by an
appointed NRL (but not always – see below). It is also worth noting that, in general,
different laboratories in a given country may have responsibility for developing
and/or maintaining the calibration standards for different pollutants covered by the
EU ambient-air directives at a national level, since a large breadth of expertise is
required, and this is not always available in any one laboratory.
Comparisons carried out between these laboratories, and with other recognised laboratories either in Europe or worldwide, are used to demonstrate the international comparability of these national standards, and hence demonstrate their accuracy. These have been carried out for nearly all of the pollutant gases at ambient
concentrations discussed here (see Section 2.7 below).
The different methods used in different countries for disseminating traceability from
these national standards to the individual network sites are also summarized - see
Section 2.6 below.
NOTE 4: It is worth clarifying, in the context of definitions, that gas standards prepared by the ‘primary’ methods discussed above, which provide calibration standards directly traceable to one or more of the international units of measurement (and hence may be used to provide robust calibration methods when applied correctly), could fall within the definition of Certified Reference Materials (CRMs) as given above.
The gas standards produced by these methods may therefore sometimes be referred
to as CRMs. However, for the purpose of this document, and in relation to ambient air
calibrations, a distinction is made between these calibration standards described
above for ambient air, and the specific and different actual CRMs discussed below.
These CRMs are employed in a different manner within ambient air monitoring
applications.
2.5.2.3 Purity Requirements for Diluent Gases, Calibrant Gas Species, and Zero Gases
In all the above gas standard preparation techniques, there are critical requirements to
carry out analyses of the purity of all components used for the standard. There are a
number of reasons why measurements of impurities in the diluent and calibrant
components of the standard are essential, in order to ensure the accuracy of the
calibration and/or ensure the stability of the mixture with time:
(a) Impurities in the diluent gas may react with the calibrant gases and an incorrect
concentration of the prepared calibration gas will result. Examples include:
- trace levels of oxygen in the nitrogen diluent used to prepare nitrogen monoxide
standards, which will result in conversion of some of the nitrogen monoxide to
nitrogen dioxide;
- water vapour levels in diluent air or nitrogen when used to prepare sulphur
dioxide standards may result in mixtures that are unstable with time;
- when there is a significant concentration of the calibrant in the diluent gas, then
this will increase the concentration of the resulting calibration gas unless
measured and corrected for. This is most important for low concentrations.
(b) Impurities in the pure calibrant gases will give rise to incorrect calibrations unless
the impurities are analysed and corrected for. One example is the presence in
nitrogen monoxide of impurities comprising other oxides of nitrogen and nitrogen
itself.
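As an illustration of the correction referred to under (a) above (the symbols are introduced here only for explanation and are not taken from the EN ISO standards): if a parent mixture with calibrant amount fraction x_p is diluted by a factor D with a diluent that itself contains the calibrant species at amount fraction x_imp, the amount fraction of the resulting calibration gas is approximately

x \approx \frac{x_p}{D} + x_{\mathrm{imp}} \left( 1 - \frac{1}{D} \right)

so that, at the very low amount fractions used for ambient-air calibrations, even a small calibrant impurity in the diluent can contribute a significant fraction of the final value unless it is measured and corrected for.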
In addition, the presence of the calibration gas species, or of any other actively interfering gaseous species, in a zero gas used to provide the zero response of an analyser will clearly result in incorrect zeroing of the analyser.
It is therefore clear that all components used in the preparation of all calibration gases,
and wherever zero gases are employed, must be analyzed with sufficient sensitivity
and accuracy, or incorrect results may occur. These activities should form part of the NRL’s accreditation to the EN ISO 17025 standard, wherever relevant. However, it is not necessary for the NRL to perform such analyses itself, but the results should then be obtained from a suitable source. Certified zero “reference” gases (usually of
nitrogen or synthetic air) are available commercially with suitable levels of purity.
2.5.3 Methods using Certified Reference Materials, and their Role in Quality
Assurance and Quality Control: Ambient Air
The methods discussed in Section 2.5.2 above provide robust means of producing calibration standards, with applications to ambient air quality, directly traceable to one or more units of the international SI system. However, in some cases it is not
possible to produce calibration standards this way. Then internationally recognised
CRMs, where certain of their properties are well validated, are employed instead. Such
CRMs are generally prepared and certified by one or more expert laboratories
worldwide, where these have a range of different independent techniques to certify
such CRMs.
Those relevant to the monitoring of ambient air pollutants that are covered by the EU
Directive 2008/50/EC and the 4th daughter directive 2004/107/EC are discussed below:
(i) There are requirements in the above directives for the analyses of the heavy
metals lead, cadmium, arsenic and nickel. These are required to be monitored
using PM10 ambient-air manual samplers onto specified filter media in the field,
followed by laboratory analyses using Inductively-Coupled Plasma Mass
Spectrometry (ICP-MS) or by Graphite Furnace Atomic Absorption Spectroscopy
(GF-AAS), using comprehensive procedures that are specified in European
standard EN 14902. These procedures are examples of measurement methods
that are not directly traceable to the international system of SI units, but to a
prescribed procedure with partial traceability, and operated under tightly controlled
conditions, as prescribed in the EN standard. In this, traceability of the analysis is
achieved by preparing gravimetrically solutions of the metals concerned with
known concentrations. These are traceable to SI units via the national or the other
laboratory that prepared or certified them. However, they are used in these
applications for the calibration of the analytical part of the method only. The results
of the extraction/digestion of the ambient air sample cannot be made traceable in
such a manner. Instead, some CRMs are available to check that the
digestion/extraction remains under control. The CRM is thus used to check or audit
the validity of the procedure during use - to demonstrate that certain aspects of the
procedure are valid whenever it is implemented. Examples of relevant CRMs are
given in EN 14902:
- NIST SRM 1648a (subsequently 1648b has been issued), prepared and certified by the National Institute of Standards and Technology (NIST), USA;
- SRM NIES No. 28, certified by the National Institute for Environmental Studies (NIES), Japan.
More recently, work is being completed at the EU Joint Research Centre, Institute for Reference Materials and Measurements (IRMM), Geel, Belgium, on the
preparation of two new CRMs (one for Heavy metals, and the other for PAHs).
These will have improved characteristics when compared to the CRMs of NIST
SRM1648 & NIES SRM No.28 listed above, and to the equivalent NIST SRM for
PAHs (NIST SRM 1649a), and should be used widely for quality control and
audits of EU directive-regulated air pollutant measurements when they
become available.
There are certain limitations in the application of any such CRMs to these
analyses of heavy metals (and other analytes), and some are summarized below.
These limitations are given here to illustrate some of the differences between
CRMs and the calibration standards discussed in Section 2.5.2 above, and also to
show why CRMs are generally used as quality checks and audits of the analytical
procedures, rather than being applied directly for the calibrations themselves.
These limitations include:
- The metals are contained in a solid matrix (urban dust), but this is never identical
  to the particulate matrix that constitutes the real sampled PM10. Neither is this
  matrix material contained on a filter medium, as the sampled PM is, and it may
  therefore not be digested in an identical manner to the real sampled PM;
- The metals are usually present in different chemical forms to those routinely
  found in the ambient air, and these may not be digested in an identical manner;
- The masses of the metals contained within a representative homogeneous
  sample of the CRM that must be used for the analyses are generally not very
  similar to the masses present in the real atmospheric samples on filter media;
- The uncertainties assigned to the certified concentrations of the metals in the
  CRM are generally larger than would ideally be required of calibrants (and the
  CRMs are also not ideal as calibrants for the reasons above). They would
  therefore not satisfy the measurement uncertainty requirements of the relevant
  ambient air directives.
In addition, it should be understood that these CRMs are certified using a fixed and
finite amount of original material (usually solid in these cases), which represents a
fixed batch size of the specific CRM. When this is used up, another CRM must be
obtained and a fresh certification carried out, in which different values will be
assigned (in contrast to the gas standards discussed in Section 2.5.2 above, which
can be realized with respect to SI units at any time).
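By way of illustration, the following short calculation sketches how such a CRM-based check of the digestion/extraction step might be evaluated in practice: the measured value is compared with the certified value within their combined expanded uncertainty, and the recovery is reported. The numerical values and the code are illustrative only; they do not reproduce any certified CRM data or any procedure prescribed in EN 14902.

```python
# Illustrative check of a digestion/extraction procedure against a CRM.
# All numbers are placeholders, not certified values from any real CRM.

def crm_check(measured, u_measured, certified, u_certified, k=2.0):
    """Return recovery and whether the result agrees with the certified
    value within the combined expanded uncertainty (coverage factor k)."""
    recovery = measured / certified                       # often reported in %
    difference = measured - certified
    u_combined = (u_measured**2 + u_certified**2) ** 0.5  # standard uncertainties in quadrature
    agrees = abs(difference) <= k * u_combined
    return recovery, difference, k * u_combined, agrees

# Example: lead (Pb) mass fraction in a dust CRM, mg/kg (illustrative values)
recovery, diff, U, ok = crm_check(measured=642.0, u_measured=12.0,
                                  certified=655.0, u_certified=16.0)
print(f"Recovery: {recovery:.1%}  difference: {diff:+.1f} mg/kg  "
      f"U(k=2): {U:.1f} mg/kg  in control: {ok}")
```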
(ii) There are requirements in the 4th daughter directive for the monitoring of a range of
polycyclic aromatic hydrocarbons (PAHs), including benzo(a)pyrene (BaP). These
are required to be sampled and analysed according to the standard EN
15549:2008, which is to be considered as the Reference Method. The initial
requirement in the standard is to extract the PAHs collected on ambient particulate
filters using specified solvents and extraction techniques. The efficiency of this
extraction is known as the 'recovery' of the PAHs. There are then requirements
within the standard to set up calibration curves for specified analysis techniques,
with five different known concentrations of the PAHs for the analyses, using
traceable solutions of one or more PAHs dissolved in specified solvents. There is
also a requirement to assess the efficiency of this recovery process using a CRM.
Currently the CRM recommended in the standard is that produced by NIST USA
(NIST SRM 1649a). However, others may be used. The CRM is again used as a quality
check or audit check and is not the primary source of traceability in these PAH
measurements.
The limitations of CRMs for these types of PAH measurements are the same as
those given in item (i) above (except that the concentration of BaP in the CRM is
certified to an accuracy of ~4%, which would be satisfactory for its application as a
calibrant).
(iii) There are also certain requirements in the 4th daughter directive for monitoring the
concentrations of mercury in ambient air. The main requirement in this directive,
however, is for measurements of total gaseous mercury. This is covered by a CEN
standard that is discussed in Section 2.5.5 below, and no CRMs are available or
required for this. However, the CEN standard specifies that calibration must be
performed by means of mercury vapour contained in a vessel at a known
temperature, using an empirical equation to define the relationship between its
temperature and concentration. Where there are requirements to monitor mercury
and its compounds in particulates, the same CRMs as given in item (i) above may
be used, and the same limitations on their applicability apply; hence they are used
as quality assurance checks and/or audit samples, rather than for primary
calibrations.
A schematic of the means by which the traceability and hence the accuracy of these
methods for the analyses of heavy metals and PAHs on particulate samples is
achieved in general, including the use of ‘primary’ calibrations with gravimetrically
prepared solutions and CRMs, is shown in Figure 2.3 below.
[Figure 2.3 – schematic diagram: the sampling (as for PM10), sample preparation/extraction, and analysis steps maintained by the analytical laboratory, with the points of use of matrix-matched CRMs, internal standards, analytical CRMs and calibration standards indicated.]

Figure 2.3: Schematic of the Methodology for Achieving "Best Practice" in
Traceability for Heavy Metals and PAH Analyses, and Applications of CRMs
2.5.4 Calibration Methods that use “Primary” Measurement Techniques NOT
Directly Traceable to SI Units: Ambient Air
As referred to in Section 2.5.1 bullet (2) above, there are a number of methods that
can be considered as primary methods in the field of amount of substance (that
includes gas metrology). The definition of a primary method according to the BIPM’s
CCQM Group (see Section 2.2 above) is:
A primary method of measurement is a method having the highest metrological
qualities, whose operation can be completely described and understood, for which a
complete measurement uncertainty statement can be written down in SI units, and
whose results are, therefore, accepted without reference to a (separate) standard of
the quantity being measured.
There are a number of primary measurement methods that have been identified for
use in gas metrology (e.g. one method of operating isotope dilution mass
spectrometry for measurements of molar concentrations). However, the most relevant
in the context of measurements of air pollutants regulated by the EU directives are the
Standard Reference Photometers (SRPs) for measurements of ozone concentrations
in air at ambient levels, originally developed and maintained by NIST USA, and
currently also developed and maintained by the BIPM, Paris, France. A full
description of the SRPs manufactured by NIST has been published (Reference 3).
This shows how the SRPs conform to the above definition.
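As a simple illustration of the measurement principle (not of the full SRP design, which is described in Reference 3), the ozone concentration follows from the Beer-Lambert law applied to the measured transmittances. The absorption cross-section and optical path length used below are indicative values only and should be confirmed against Reference 3.

```python
# Minimal sketch of the measurement principle of a UV ozone photometer:
# the ozone number concentration follows from the Beer-Lambert law applied
# to the ratio of transmitted intensities with and without ozone in the cell.
# The cross-section and cell length below are indicative values only.
import math

SIGMA = 1.1476e-17   # commonly quoted conventional O3 cross-section at 253.65 nm, cm^2/molecule
L_CM = 89.5          # optical path length of the cell, cm (instrument-specific, assumed here)

def ozone_number_concentration(i_sample, i_reference):
    """Number concentration (molecules/cm^3) from transmitted intensities."""
    return -math.log(i_sample / i_reference) / (SIGMA * L_CM)

def to_nmol_per_mol(n_conc, temperature_k=293.15, pressure_pa=101325.0):
    """Convert molecules/cm^3 to a molar mixing ratio (nmol/mol) via the
    ideal gas law, at the cell temperature and pressure."""
    k_b = 1.380649e-23                                    # Boltzmann constant, J/K
    n_air = pressure_pa / (k_b * temperature_k) * 1e-6    # air density, molecules/cm^3
    return n_conc / n_air * 1e9

n = ozone_number_concentration(i_sample=0.9980, i_reference=1.0000)
print(f"{to_nmol_per_mol(n):.1f} nmol/mol")
```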
The mechanism for dissemination of the traceability of the results obtained by these
SRPs is outlined in Section 2.6.4.
The BIPM is involved in carrying out comparisons of the SRPs maintained by
laboratories worldwide, under the auspices of the CCQM. These are used to
demonstrate their comparability as primary methods when realized by the different
institutes that maintain them (see Section 2.7).
2.5.5 Methods not Traceable to SI Units or Primary Methods, Realized by
Convention or by Definition as Reference Methods: Ambient Air
As noted in Section 2.5.1 bullet (3), there are certain calibration standards and
calibration methods that are not required/not able to be developed and realized with
traceability, directly or indirectly to the SI system of units. Examples of most relevance
to measurements of ambient air pollutants currently regulated under the EU directives
are:
(i) The manual methods for particulate monitoring of PM10 and PM2.5 specified in the
European standards EN 12341 and EN 14907 are required to be established and
maintained at a national level by the relevant NRL. These two standards specify
methods that cannot be considered as rigorously traceable to the SI system of
units. The sampling is by a method defined by convention, but the mass
measurement can clearly be defined as traceable to SI units. However, the main
problem is that the measurand itself is not well defined, as the particulates sampled
will all have different shapes, sizes and chemical compositions.
Notwithstanding this, as is well known, these two European standards have been
specified as the Reference Methods in EU Directive 2008/50/EC. This therefore
imposes a requirement for these standards to be used as references against which
other measurement techniques used for monitoring PM10 and PM2.5 for EU
regulatory purposes may be compared, and the results must be demonstrated to
be "equivalent" to be accepted for the purposes of reporting results to the EC. This
concept of the demonstration of "equivalence" is specified in a document prepared
by the EC (Reference 4) and involves comparisons in the field of any other method
for monitoring PM10 or PM2.5 with the relevant reference method.
The above standards therefore represent reference methods that are specified by
definition and by convention, with no real traceability to the SI system of units (with
the exception of a requirement to determine the mass flow of air through the
monitor). Figure 2.5 shows the principle whereby a limited level of traceability is
achieved using these reference methods for monitoring particulates. This inability
to achieve full traceability is partially a result of the complex, variable and ill-defined
size, shape and composition of the particulate matter being monitored.
[Figure 2.5 – schematic diagram: classifier dimensions, sampler and filter materials, filter conditioning, filter weighing and sample flow rate, maintained by the calibration laboratory via secondary standards linked to primary standards (PS) of length, flow, temperature/humidity and mass.]

Figure 2.5: Representation of the Traceable Aspects of the Particulate Mass
Reference Methods (PS = Primary Standard)
(ii) As described above, the monitoring of total gaseous mercury is required under the
EU's 4th daughter directive. This monitoring is required to be carried out using the
procedures specified in European standard prEN 15862, which covers the
operation of approved (continuous) monitors (or other techniques that are
demonstrated as equivalent). The requirements for calibration within this standard
involve the injection of known volumes of elemental mercury vapour saturated in
air at a given temperature onto adsorbent media. These are then desorbed
thermally into the instrument to be calibrated, and are used to establish its
calibration curve. However, although it can be argued that the volume injected onto
the adsorbents, and the temperature at which the injection is made, can be
determined traceably to SI units, the empirical relationship between the saturation
vapour concentration and the temperature was determined some time ago through
a set of experiments that were not well controlled, and it is therefore not considered
traceable to SI units. At the present time, therefore, this relationship is accepted by
convention as the best formulation currently available, and is specified in the EN
standard. It is considered, however, that it has no clear and internationally
recognised traceability to SI units, and no internationally accepted uncertainty
budget. The current position on this has been published (Reference 5). Recently
some research was carried out at NIST USA on mercury vapour calibration
standards certified with respect to solid phase CRMs, but this is no longer
continuing. Instead, research is underway at NIST to provide a better equation for
the relationship discussed above.
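For illustration only, the sketch below shows how a calibration curve could be constructed from such injections. The empirical relationship between temperature and the saturation mass concentration is deliberately not reproduced here and must be taken from the relevant EN standard; the saturation value used below is a placeholder, and all instrument responses are invented.

```python
# Sketch of how a calibration curve could be built from injections of
# saturated mercury vapour, as described above. The empirical relation
# between temperature and the saturation mass concentration is NOT
# reproduced here; it must be supplied from the relevant EN standard.
import numpy as np

def injected_mass_ng(volume_ul, saturation_conc_ng_per_ml):
    """Mass of mercury (ng) in an injected volume of saturated vapour."""
    return volume_ul * 1e-3 * saturation_conc_ng_per_ml

# Placeholder: saturation concentration at the syringe temperature (ng/mL),
# to be obtained from the empirical equation given in the standard.
sat_conc = 14.0                                   # illustrative value only
volumes_ul = np.array([10.0, 25.0, 50.0, 100.0])
responses = np.array([0.34, 0.86, 1.71, 3.45])    # instrument peak areas (invented)

masses = injected_mass_ng(volumes_ul, sat_conc)
slope, intercept = np.polyfit(masses, responses, 1)
print(f"Calibration: response = {slope:.4f} * mass(ng) + {intercept:.4f}")
```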
2.6 Methods for Disseminating Traceability by Means of National or
International Standards or Reference Methods: Ambient Air
2.6.1 Overview
As is well known, the EU ambient air directives require the monitoring of eight different
types or groups of ambient air pollutants – SO2, NOx, O3, CO, PM10 & PM2.5, benzene
(and other VOCs), certain heavy metals (including mercury), and PAHs (particularly
but not exclusively benzo(a)pyrene). Therefore, as already outlined in Chapter 1,
the NRLs identified in each country will generally NOT comprise the same
laboratory for all of these, because of the large breadth of expertise that would be
required.
Moreover, the dissemination mechanisms will depend on the number of sites/sampling
locations that are required and present in a given country (ranging from less than 10 to
more than 1000 locations). In addition, the dissemination mechanisms for these
different air pollutant species will be different in many cases.
The dissemination procedures should therefore be designed differently in each case to
accommodate these issues. Examples of these dissemination mechanisms for the
different types of ambient air pollutants grouped according to the way traceability is
developed, as given in Section 2.5.1 bullets (1) – (3), are given below.
It is also useful to emphasize that there is a general requirement, irrespective of how
dissemination is realized, according to Directive 2008/50/EC Annex 1 section C:
These (designated NRL) laboratories shall coordinate at a national level the
appropriate realization of the Reference Methods and the demonstration of
equivalence of non-reference methods;
However, in practice the competent authority in a Member State may choose to define
another organisation to carry these activities out.
2.6.2 Dissemination for Methods Using Calibration Standards Produced with
Concentration Values Linked Directly to SI Units: Ambient Air
This is the most common situation in the context of ambient air pollutants regulated by
the EU directives, since as discussed above in Sections 2.5.1 & 2.5.2, this situation
covers all the gaseous species (except gaseous mercury), where calibration standards
may be produced traceably to SI units, using a number of different methods as shown
in Table FF. Even in these circumstances, however, there are in practice somewhat
different ways of disseminating national traceability, depending for example on the
number of network sites within a given country. Currently there are broadly two
methods in use:
(a) The use of a single QA/QC Unit for all of the network sites covering all these
gaseous pollutant species within the country. This then becomes by definition the
single NRL for these monitoring activities, and the traceable calibration standards
available to this NRL can be used directly at each monitoring site, without any
additional steps in the traceability chain. This is generally carried out using
calibration standards produced and certified as discussed above in Section 2.5.2.
Alternatively, the NRL may utilize traceable standards from another organisation
and then certify other mixtures in a manner that is traceable to these. Where this
latter approach is taken, the certification process should be accredited to EN ISO
17025 as a calibration accreditation, to ensure the accuracy of the certified
calibration standards used. This procedure maps closely onto the mechanism
shown in Figure AA. This structure means, for example:
- The number of sites may be large; this requires the relevant expertise to be
  available within a large organisation with a skilled workforce, and also means that
  a large amount of field work is required, encompassing all of the network sites at a
  national level;
- There is little requirement to audit the performance of staff at the individual
  network sites, since these activities should all be under the control of one
  laboratory, which must have traceable standards, and should have appropriate
  accreditations, to carry out the QA/QC activities in the field, as specified in the
  relevant CEN ambient air standards;
- The measurement uncertainty is generally smaller than it may be in a multi-step
or tiered approach from the NRL to the network site as discussed below.
(b) Alternatively a tiered approach may be used. This utilizes a single NRL as the
peak, or the centre, of the QA/QC activities in the country, but the NRL then
generally provides the means of dissemination of its traceability to regional or other
QA/QC centres. These then provide traceability downwards to the individual
network sites, or sometimes to a second tier of QA/QC laboratories. This
mechanism is generally employed when there are a large number of network sites
in the country covering the gaseous pollutant species, and/or it arises due to a
regional structure of government. This structure is illustrated in Figure 2.6.
[Figure 2.6 – schematic diagram showing: a National Metrology Institute or designated body (CIPM MRA; EN ISO 17025 calibration accreditation); calibration laboratories; national harmonization; the National Reference Laboratory, which audits Regional Network Authorities (EN ISO 17025 calibration/testing; ISO Guide 43); EC harmonization at European level; and the monitoring networks (EN ISO 17025 testing).]

Figure 2.6: Schematic of a Tiered or Multi-level Approach to Ambient-air Quality
Monitoring in a Member State
This tiered structure has certain consequences that are different from those when
employing a single national QA/QC unit in a country. For example:
- There is still the requirement that the NRL be accredited to the EN ISO 17025
  standard for the technical measurement activities it carries out, although these
  activities will not cover the routine field QA/QC activities that are specified in the
  relevant CEN standard. However, the accreditation should cover other technical
  activities, including any comparisons that the NRL participates in at an EU level,
  where relevant (see Section 3.3 (v)).
- There is, however, no EU requirement for any regional QA/QC unit to be so
  accredited, and thus accredited measurements may not be made at the individual
  network sites with the tiered approach, in contrast to the situation in (a) above.
- The regional QA/QC unit, on the other hand, may have closer and more direct
  contacts with the individual network sites than a QA/QC unit that operates
  nationally over a significantly larger geographical area. Such a localised QA/QC
  unit may have positive effects on the quality of the results;
- There are nevertheless requirements within Article 3 of Directive 2008/50/EC for
the NRL to ensure the accuracies of all the measurements within its Member
State that are reported under the requirements of the directive (clearly within the
overall responsibility of the competent authority). This may be more difficult to
ensure since the individual network sites may not operate within the direct control
of the NRL. This is discussed further in Chapter 5.
- There is therefore a need for the NRL, as part of its responsibilities within that EU
Directive, to carry out audits and other QA/QC checks, so as to convince itself
  and inform the competent authority that the regional or other QA/QC units are
  continuously achieving the required measurement uncertainties of all the results,
  and the other DQOs, at the individual network sites;
- The multi-step or tiered approach will usually result in larger measurement
  uncertainties in the results at the individual measurement sites.
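The following short calculation illustrates this final point: each additional calibration step in a tiered chain contributes an uncertainty component that is combined in quadrature (following the GUM approach), so a longer chain generally yields a larger uncertainty at the monitoring site. The percentage contributions used are purely illustrative.

```python
# Illustration of the last point above: each additional calibration step in a
# tiered chain contributes an uncertainty component, combined in quadrature
# (GUM approach), so more tiers generally mean a larger site uncertainty.
# The percentage contributions below are purely illustrative.
from math import sqrt

def combined_relative_uncertainty(steps_percent):
    """Combine relative standard uncertainties (in %) of successive
    calibration steps by quadrature summation."""
    return sqrt(sum(u**2 for u in steps_percent))

direct = [1.0, 1.5]              # NRL standard; NRL field calibration at the site
tiered = [1.0, 1.5, 1.5, 2.0]    # NRL standard; transfer to regional unit;
                                 # regional working standard; field calibration
print(f"Direct dissemination: u = {combined_relative_uncertainty(direct):.2f} %")
print(f"Tiered dissemination: u = {combined_relative_uncertainty(tiered):.2f} %")
```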
In the above cases, both for the single QA/QC unit at a national level, and for the
NRL at the centre of a tiered or regional QA/QC structure, it is useful to emphasize
that:
- These (designated NRL) laboratories shall coordinate, at a national level, the
  appropriate realization of the Reference Methods and the demonstration of the
  equivalence of non-reference methods, as specified in Directive 2008/50/EC
  Annex 1 Section C;
- The NRL is required to achieve accreditation to EN ISO 17025 for the relevant
  experimental and technical activities it carries out in the context of measurements
  under the requirements of that Directive (see Section C of Annex 1);
- Within Article 3 of that Directive the designated NRL has responsibility for
  "ensuring the accuracy of measurements.." (clearly within the overall
  responsibility of the national competent authority), whether there is a single
  national network of sites or a tiered or regional network of sites;
- In the case of a tiered network, the responsibility for organising any quality
  checks and audits, the related activities necessary to ensure the accuracy of the
  results, and the critical evaluation of such data, should lie with the NRL;
- The NRLs in many different European countries are already organising audit
  checks and round-robin tests at a national level, and the methods that are used
  for these are discussed in more detail in Chapter 5 of this document;
- The NRL has the responsibility for ensuring the relevant accuracies of
  measurement results through its national dissemination mechanisms as outlined
  above. In addition, it also has a further important (implicit) responsibility for
  ensuring that the national measurement standards are accurate, and this can
  generally only be demonstrated through international comparisons using one of
  the methods discussed in Section 1.4 and Section 2.7 below.
2.6.3 Dissemination for Methods that Use CRMs for Quality Control/Quality
Assurance: Ambient air
The EU Reference Methods for the ambient-air pollutants regulated under the
ambient-air directives that utilize CRMs as part of their QA/QC procedures are the
analytical procedures used for the measurements of particulate-bound heavy metals
and PAHs, as outlined above. The networks that are operated to monitor these
species may, in principle, also be operated either through one laboratory at a national
level analysing all the PM filter samples, or through the use of a regional approach, in
a similar manner to that discussed in Section 2.6.2. However:
- The methods for sampling the heavy metals and PAHs in the field are relatively
  simple compared with the comprehensive QA/QC procedures required in the field
  at the network sites for the gaseous species discussed in Section 2.6.2. It may
  therefore not be necessary to accredit these field sampling procedures for heavy
  metals and PAHs, and this may reduce the requirements for a tiered network;
- There are significant advantages in analysing all the samples in one laboratory at a
  national level from the perspective of the homogeneity of the results, and it is also
  easier to audit the performance of that one laboratory and the accuracy of the
  results obtained. However, this is not mandatory and is decided by the competent
  authority in the Member State;
- There may be cost benefits when all the samples are analysed in large batches.
The dissemination mechanisms and their related requirements are therefore clearly
different in the two cases:
a. For the case of one national laboratory carrying out all the analyses, a specific
dissemination mechanism is unnecessary, although there are requirements for this
laboratory to assess the methods whereby samples are taken in the field and
dispatched to the analytical laboratory to ensure the integrity of the samples. This
is more important in the case of PAH sampling, but is also relevant for heavy
metals. In addition, there are essential requirements to assess the quality of the
results obtained by the national analytical laboratory, and this should be carried out
through international comparisons, as discussed in Section 2.7 below.
b. For the case of a tiered approach, where regional or other laboratories carry out
analyses within the same country, the NRL should carry out audits and other
QA/QC checks, in order to establish and demonstrate that the results conform to
the EU DQOs. This should be done using similar procedures to those described in
Section 2.7.
c. It is considered important, in the case where there is a single laboratory carrying out
all analyses of heavy metals or PAHs in a country, that this laboratory is considered
as the NRL for those measurements, and it should therefore obtain accreditation to
the EN ISO 17025 standard according to the procedures of the relevant CEN
technical standard, as required by the relevant EU directive. It is also necessary to
take account of all the results of international comparisons and audit checks within
this accreditation.
d. There are as yet no specific requirements in Directive 2004/107/EC that these
heavy metals and PAH measurements be accredited to the EN ISO 17025
standard (unlike those in Directive 2008/50/EC). However, the CEN reference
methods require such accreditation (see, for example, EN 14902 clause 10.8).
e. When there is a tiered approach, the NRL should obtain accreditation to cover all
the relevant technical activities it carries out, wherever practical. It is not
necessarily required by the EU directives, however, that the regional analytical
laboratories are accredited to the EN ISO 17025 standard. It should be
emphasized that in these cases the NRL, as a representative of the national
competent authority, should take some responsibility for the accuracy of the
results obtained by these laboratories, and this will require account to be taken of
all quality issues, including the results of audits and other QA/QC checks (see
Chapter 5).
2.6.4 Dissemination Using ‘Primary’ Methods: Ambient air
The primary method used in the context of ambient air pollutants regulated within the
EU directives is ultraviolet photometry for ozone, as discussed in Section 2.5.4 above,
and realized through Standard Reference Photometers of the type developed by NIST
USA and certain other organisations.
The dissemination mechanism for the results obtained using these primary SRPs,
maintained at a national level or for international networks, generally involves the
direct calibration against an SRP of portable ultraviolet ozone photometers that have
sufficiently stable performance; these then act as transfer standards. These portable
photometers may then be used either directly to carry out field calibration checks as
specified in EN 14625, or for the calibration of other portable ultraviolet ozone
photometers that are themselves used for the field calibrations and other tests
according to that standard.
These calibrations will be carried out either by one QA/QC Unit operating at a national
level to cover all the network sites in a given country, or through a tiered approach by
means of regional networks having their own QA/QC units. The advantages and
disadvantages of these two approaches, the different consequences of choosing
one or the other, and the requirements for accreditation to EN ISO 17025, are all the
same as those summarized in Section 2.6.2 and detailed in Chapter 3 below.
2.6.5 Dissemination using Reference Methods Designated by Convention or by
Regulation: Ambient air
The most important methods that fall within this category are the manual particulate
monitoring methods specified in EN 12341 (PM10) and EN 14907 (PM2.5), defined as
Reference Methods in EU Directive 2008/50/EC Annex VI.
These Reference Methods, when operated according to the requirements of the
relevant CEN standard, provide correct results by definition within their prescribed
measurement uncertainty, with no systematic biases in their operation. Therefore,
these methods may be operated at individual measurement sites, and the filters from
these weighed by many different laboratories in a manner that conforms to the
requirements of the standards, and the results then obtained will be considered
correct, within the uncertainty specifications. There is therefore no need for
dissemination of such Reference Methods.
Where there are requirements to utilize other methods, such as other manual or
discontinuous monitoring methods, or those for continuous monitoring of particulates,
there is a requirement to demonstrate the “equivalence” of the results obtained by
these using the procedures specified in Reference 4 (or by another method that the
competent authority may choose); a simplified illustration of such a comparison is
sketched after the list below. In these circumstances, therefore, there is also no
requirement for the dissemination of traceability of such measurements, provided that
the instrumentation used is deemed equivalent and that the scope of the equivalence
testing covers the individual network sites of relevance. There are, however,
several other important requirements that should be fulfilled:
(1) Whenever the NRL is involved in some or all aspects of such measurements, the
NRL should be accredited to EN ISO 17025 for all relevant technical activities;
(2) Where the NRL has no direct involvement in these measurements, it should still be
seen as having responsibilities for the accuracy of these measurements, as
specified in the EU Directive, insofar as this can be done – see Chapter 3;
(3) There are international comparisons carried out in each country by the EC JRC,
Italy, and these provide audits of the quality of the results obtained. These are
discussed in Chapter 6. The results should be reviewed by all those involved with
these side-by-side comparisons, and by the relevant NRL, to ensure that the
quality of the results is as required by EU Directive 2008/50/EC.
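For illustration, the sketch below shows one element of the type of field comparison involved: an orthogonal regression of candidate-method results against reference-method results. It is a deliberately simplified example with invented data; the complete statistical procedure and acceptance criteria are those of Reference 4 and are not reproduced here.

```python
# A deliberately simplified sketch of one element of an equivalence comparison:
# an orthogonal regression of candidate-method PM10 values against the
# reference-method values from a side-by-side field campaign. The full
# acceptance criteria and uncertainty calculation are defined in Reference 4
# and are not reproduced here. Data are invented.
import numpy as np

reference = np.array([12.0, 18.5, 25.0, 31.2, 40.8, 55.3, 22.1, 35.6])  # ug/m3
candidate = np.array([11.5, 19.2, 26.1, 30.5, 42.0, 57.1, 21.4, 36.9])  # ug/m3

# Orthogonal (Deming, equal error variances) regression: candidate = a + b*reference
sxx = np.var(reference, ddof=1)
syy = np.var(candidate, ddof=1)
sxy = np.cov(reference, candidate, ddof=1)[0, 1]
b = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy**2)) / (2 * sxy)
a = candidate.mean() - b * reference.mean()

print(f"slope b = {b:.3f}, intercept a = {a:.2f} ug/m3")
# In the full procedure, b and a (with their uncertainties) and the combined
# uncertainty at the limit value are tested against the Directive's criteria.
```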
2.7 International Comparisons to demonstrate the Comparability of
Measurements between Countries.
2.7.1 Overview
There are a number of scientific and technical activities underway to demonstrate that
the calibration standards and measurements realized in different countries at a
national level are comparable and harmonised, and hence can be deemed to be
accurate. These normally entail some form of international comparisons. There are two
types of international comparisons that are important to the requirements of the EC
Directives 2008/50/EC and 2004/107/EC, and to the AQUILA group. These are:
- The "Community-wide intercomparisons" that are organised by the EC Joint
  Research Centre on a regular basis to cover the pollutants regulated by the
  directives. A summary of the methodology and rationale for these JRC activities is
  given below (Section 2.7.2). Important examples of these JRC intercomparisons
  are summarized in Chapter 6 of this document.
- International comparisons carried out between NMIs, either in Europe or worldwide,
  under the auspices of the BIPM CCQM activities. These are designed to be
  complementary to those of the JRC wherever possible, and are explained below in
  this chapter, with examples (Section 2.7.3).
2.7.2 Summary of Comparisons Organised by the EC Joint Research Centre
The EC Joint Research Centre’s Institute for Environment and Sustainability,
particularly the European Reference Laboratory for Air Pollution (ERLAP), Ispra, Italy,
has been given the responsibility by the EC for the organisation of intercomparison
exercises (IEs) and similar activities for the ambient-air pollutants that are regulated by
the above directives. This is required by Directive 2008/50/EC, where Article 1 requires
the assessment of air quality on the basis of common methods and criteria, Article 3
mentions European Community-wide quality assurance programmes, and Article 8
discusses the requirement to use Reference Measurement Methods or equivalent, and
provides criteria for these. Similar requirements were given in the original EU
Framework Directive 96/62/EC. The aims and objectives of these IEs organised by
ERLAP are to:
- Compare the calibration standards and the measurement capabilities of the NRLs
  making measurements in each EU Member State, so as to establish their validity,
  comparability, and uniformity, and thereby to assist with the harmonization of
  regulated European ambient air quality measurements;
- Establish the accuracy and the measurement uncertainties of the results obtained
  during these exercises, and confirm that these meet the requirements of the
  relevant directive;
- Check the status and levels of implementation of the EU Air Quality directives;
- Provide a forum to facilitate the exchange of scientific and technical information
  between the experts of the Member States.
The IEs organised by the ERLAP should, as described in the directives, involve all the
relevant NRLs designated for each of the pollutant species. Initially, these IEs
organised by ERLAP, which began in the early 1990s, were carried out for one
gaseous pollutant at a time. More recently, these have involved more than one
gaseous pollutant within each IE (see Chapter 6).
The World Health Organization (WHO) carries out similar IE activities, but with a view
to obtaining harmonised air quality data for health related studies, and this
organisation has integrated its programme within the WHO EURO Region, to include
public health institutes and other national institutes - especially from Central and
Eastern Europe, the Caucasus, and countries from Central Asia.
Recently, the IE activities of the JRC ERLAP have been merged with those of the
WHO, in order to:
- Prevent the duplication of participation by different institutes (including NRLs);
- Optimise the scientific and technical benefits of the IEs to the participants;
- Ensure the comparability and accuracy of results obtained beyond the existing EU
  borders;
- Optimise the technical capabilities of the participating laboratories;
- Facilitate the exchange of technical expertise between these two communities and
  all the institutes.
A document has been prepared on this merged IE activity by the JRC, agreed with the
WHO and AQUILA (Reference 6), and these IE activities are now carried out jointly
between the JRC and the WHO. The AQUILA and WHO agreed document describes
further the procedure and data evaluation to be followed for IEs and can be
downloaded from AQUILA’s website.
Until recently, the ERLAP IE activities concentrated on the gaseous pollutant species
of sulphur dioxide, the nitrogen oxides (particularly nitrogen dioxide), carbon
monoxide, and ozone. Examples are given in Chapter 6 of this document. Recently,
however, the IE activities of the JRC ERLAP have been extended to cover other
species such as the monitoring methods for benzene, for the 30 volatile organic
compounds specified in Annex X of Directive 2008/50/EC, for the analyses of heavy
metals, and the application of the PM10 and PM2.5 Reference Methods in the Member
States. These later comparisons have generally required somewhat different
methodologies than those described in Reference 6. They are all discussed in more
detail in Chapter 6 of this document, where examples of the results obtained are also
given.
2.7.3 International Comparisons carried out by the CCQM and Associated
Organisations
2.7.3.1 Background
As stated in Section 2.3 above, there are about 80 National Metrology Institutes
(NMIs) worldwide, with additional institutes designated by them for specific types of
measurements (including some involved with ambient air quality measurements and
the calibration standards related to these, some of which are also European NRLs).
Many of these NMIs have signed Mutual Recognition Agreements (MRAs) with each
other, within the framework established by the International Bureau of Weights and
Measures (the BIPM), based in France (see www.bipm.org/en/cipm.mra).
It is necessary to support and underpin these MRAs, in order to demonstrate that the
primary/national standards held by these NMIs (or their designated laboratories in
specific measurement fields) are comparable and harmonized internationally, and
hence may be considered as accurate and effectively interchangeable. This is done by
carrying out a wide range of international comparisons between these laboratories,
using structures that have been developed over a number of years. These include the
circulation of stable well-characterised calibration standards or other calibration
artefacts between the participants, the values of which are unknown to them when
they are analysed. These are organised by one of the NMIs worldwide with expertise
in the field. These are known as Key Comparisons (KCs) or Pilot Studies (PSs), and
when they are completed the results are all published so that the whole scientific
community can evaluate critically the levels of comparability between these NMIs
(see http://www.bipm.org/en/committees/cc/ccqm).
All the types of physical, chemical and biological measurements carried out worldwide
that require harmonisation are divided up into different measurement fields. The field
within which these international comparisons of ambient air pollution and related
measurements are carried out falls within the responsibility of the BIPM Consultative
Committee for Amount of Substance (CCQM), and particularly within its specialist
committee, the Gas Analysis Working Group (GAWG).
The BIPM, and its overseeing Technical Committees, have also divided the NMIs
worldwide into different Regional Metrology Organisations (RMOs). The region
comprising the European NMIs is known as the EURAMET Organisation (see
www.euramet.org). Each RMO may also organise international comparisons important
to that region, which are not necessarily so relevant worldwide. There are also linked
Groups within EURAMET for each field as noted above for CCQM.
2.7.3.2 International Comparisons of Ambient-Air Calibration Standards
As discussed above, there have been a large number of international comparisons of
gaseous calibration standards over the past ten years covering many different
applications such as natural gas, vehicle emissions, and industrial emissions. There
have also been several international comparisons of ambient-air calibration standards
at a national level, carried out under the auspices of the CCQM and/or EURAMET.
Some of the more important examples of these comparisons of ambient air standards
at a national level are given below. These serve to demonstrate the international
comparability, and accuracy, of such ambient air calibration standards that are used as
the first and primary national steps in the traceability chains as discussed in Section
2.5.2 above.
(a) International comparisons of standards of sulphur dioxide and nitrogen
monoxide at ambient concentrations
There have been two sets of international comparisons, organised simultaneously, to
evaluate the degree of international comparability and accuracy of standards of these
two gaseous species by distributing standards to a number of NMIs and designated
laboratories, some of which are also European NRLs. The comparisons also involved
the JRC ERLAP and certain NMIs worldwide including NIST USA, and KRISS Korea.
The standards had the following characteristics:
- nitrogen monoxide in a diluent gas of nitrogen, at a nominal molar concentration of
  780 ppb;
- sulphur dioxide in a diluent gas of synthetic air, at a nominal molar concentration of
  280 ppb.
These standards were prepared by one European Laboratory, and analysed by them
before and after they had been analysed by all the participants, in order to make
corrections for instabilities in their concentrations, if appropriate. The results of these
comparisons have now been published (References 7 & 8).
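The published results of such comparisons are commonly expressed as degrees of equivalence, i.e. the deviation of each laboratory's value from the comparison reference value, together with an expanded (k = 2) uncertainty. The following minimal sketch, with invented values and ignoring any correlation between a laboratory and the reference value, illustrates the calculation.

```python
# Sketch of how the published results of such key comparisons are commonly
# expressed: each laboratory's degree of equivalence is its deviation from
# the comparison reference value, with an expanded (k=2) uncertainty formed
# from the laboratory and reference standard uncertainties. Values invented.
from math import sqrt

x_ref, u_ref = 280.0, 1.0   # reference value and its standard uncertainty, nmol/mol

labs = {                    # laboratory: (reported value, standard uncertainty), nmol/mol
    "Lab A": (281.5, 1.5),
    "Lab B": (277.9, 2.0),
    "Lab C": (284.8, 1.2),
}

for name, (x_i, u_i) in labs.items():
    d_i = x_i - x_ref                      # degree of equivalence
    U_i = 2.0 * sqrt(u_i**2 + u_ref**2)    # expanded uncertainty of d_i
    consistent = abs(d_i) <= U_i
    print(f"{name}: d = {d_i:+.1f} ± {U_i:.1f} nmol/mol  consistent: {consistent}")
```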
As an example, the results obtained for sulphur dioxide (CCQM K26b) are shown in
Figure 2.7. It can be seen that most of the European laboratories that took part,
including the JRC Ispra, showed results comparable to the established reference
value, generally to within about ± 2% relative, at a concentration of about 280
parts-per-billion (ppb).
[Figure 2.7 – chart: degrees of equivalence (in nmol/mol) of the participating laboratories (including VNIIM, UBA, NPL, NMi, LNE, KRISS, DG-JRC, IPQ, FMI, CHMI and CERI/NMIJ), plotted on a scale of approximately ±20 nmol/mol.]

Figure 2.7: The Comparability Between NMIs of Ambient Concentration Sulphur
Dioxide Standards at ~300 nmol/mol, determined by CCQM Comparison K26b,
expressed as ppb by molar concentration ratio (i.e. a concentration of 280 × 10⁻⁹).
(NOTE 5: The term ppb for gas concentrations has been used conventionally by the
ambient air quality community for many years, and can represent either a molar, a
volume or a mass concentration ratio (although most frequently it has meant a volume
concentration). However, in the remainder of this Chapter this term is replaced by the
term nmol/mol. This has the same numerical value as a ppb expressed as a molar
concentration ratio, and is very nearly equal to a ppb expressed as a volume
concentration.)
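Since reporting under the directives is generally in mass-concentration units, the following short conversion between a molar mixing ratio and a mass concentration may be helpful. The reference conditions used below (293 K, 101.3 kPa) are those generally applied to gaseous pollutants under Directive 2008/50/EC and should be adjusted where other conditions apply.

```python
# Conversion between a molar mixing ratio (nmol/mol) and a mass concentration
# (ug/m3) at stated reference conditions. Directive 2008/50/EC standardises
# gaseous pollutant results to 293 K and 101.3 kPa; adjust if other conditions apply.
R = 8.314462618     # molar gas constant, J/(mol K)

def nmol_per_mol_to_ug_per_m3(x_nmol_mol, molar_mass_g_mol,
                              temperature_k=293.15, pressure_pa=101300.0):
    molar_volume_m3 = R * temperature_k / pressure_pa      # m3 per mol of air
    return x_nmol_mol * 1e-9 * molar_mass_g_mol / molar_volume_m3 * 1e6

# Example: 280 nmol/mol of SO2 (molar mass 64.06 g/mol)
print(f"{nmol_per_mol_to_ug_per_m3(280.0, 64.06):.1f} ug/m3")
```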
(b) International comparisons of ambient-air ozone measurements using
ultraviolet photometry
There are approximately 50 SRPs worldwide, originally manufactured and supplied by
NIST USA, that are currently in use as national calibration standards for ambient
ozone measurements (or for similar purposes, for example by GAW and by the US
EPA), together with other similar Reference Photometers manufactured by other
organisations (e.g. UMEG, Germany). These have been manufactured at different
times. There is therefore a requirement to carry out comparisons of these national
standards with one or more SRPs that act as international reference instruments, to
ensure the continued comparability of such measurements across different countries,
including those in the EU, and to demonstrate that the maintained national standards
are accurate and comparable before these calibrations are disseminated, as
discussed in Section 2.6.4. These comparisons were originally carried out by sending
the national SRPs back to NIST USA. However, with the proliferation of these SRPs
outside the USA, this became more difficult to manage. Therefore, it was decided that
the SRPs and similar Reference Photometers that are located outside the USA (i.e.
excluding those operated by the US EPA) and generally operated at a
national level, would have their calibrations checked, and if required adjusted, by
scientific staff within the International Bureau of Weights and Measures (BIPM), and
this would be carried out regularly through a series of international comparisons. The
first of these comparisons, organised under the auspices of the CCQM, was carried
out by BIPM, with SRPs from the different national laboratories sent to BIPM
sequentially between July 2003 and February 2005, using SRPs at BIPM and ones
provided by NIST USA as references. The results of this first comparison of Ozone
Reference Photometers carried out and reported by BIPM are shown in Figure 2.8. A
full report on this has been published (Reference 9).
(c) International comparisons of volatile organic compound ozone precursors
specified in EU Directive 2008/50/EC
Annex X of the above Directive requires Member States to monitor 30 volatile organic
compound (VOC) species that are considered the dominant species involved in the
photochemical production of ozone in the atmosphere near ground level. These
species are generally present at ppb concentrations or below in ambient air. The
analyses of these VOC species are generally carried out by using gas
chromatography, with cryogenic pre-concentration of the atmospheric samples.
However, there are no specified performance characteristics and criteria for these
measurements.
It is practical to prepare calibration standards for these multi-component gas mixtures
using the gravimetric techniques outlined in Section 2.5.2 above, and described in
more detail in the standards EN ISO 6142 and EN ISO 6143, at concentrations at
parts-per-billion (ppb) levels and below. There is, however, a requirement to
demonstrate the stability of these standards, since they contain reactive species at
low concentrations.
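The central calculation behind this gravimetric approach is, in principle, straightforward: the amount fraction of each component follows from the weighed masses and the molar masses of the parent gases. The following minimal sketch illustrates this for a single, invented, binary pre-mixture; the full EN ISO 6142 treatment of parent-gas impurities and uncertainty propagation is not reproduced here.

```python
# Minimal illustration of the gravimetric principle behind EN ISO 6142:
# the amount fraction of each component follows from the weighed masses and
# molar masses. The full standard also accounts for parent-gas impurities and
# propagates the weighing uncertainties; that is omitted here. Values invented.

def amount_fractions(masses_g, molar_masses_g_mol):
    moles = {k: masses_g[k] / molar_masses_g_mol[k] for k in masses_g}
    total = sum(moles.values())
    return {k: n / total for k, n in moles.items()}

# A benzene-in-nitrogen pre-mixture, later diluted further to reach ppb levels
masses = {"benzene": 0.0152, "nitrogen": 850.0}          # grams weighed in
molar_masses = {"benzene": 78.11, "nitrogen": 28.014}    # g/mol

x = amount_fractions(masses, molar_masses)
print(f"benzene amount fraction: {x['benzene'] * 1e6:.2f} umol/mol")
```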
The JRC ERLAP, in association with AQUILA and the EURAMET Group, proposed
and carried out jointly an international comparison exercise using gas mixtures at
these low concentrations in high-pressure gas cylinders. Batches of gas mixtures
were prepared, containing both synthetic mixtures of these 30 VOCs at ~4 ppb
concentrations and 'spiked' ambient air mixtures, generally at lower concentrations.
These standards were prepared by one European laboratory and provided to the
AQUILA and EURAMET participants, with concentrations unknown to them. Stability
trials were carried out during this IE, lasting about one year. This IE is now complete;
a report has been prepared by the JRC ERLAP (Reference 10), and another has been
prepared by EURAMET (Reference 11). Figure 2.9 shows the EURAMET results
obtained by the participating NMIs for some species.
Further comparison exercises covering specific volatile organic species were carried
out earlier by the CCQM. These involved comparisons between NMIs worldwide of the
concentrations of gas mixtures containing benzene, toluene, ethylbenzene and the
xylenes, at concentrations of about 100 ppb and 10 ppb, and of multi-component
mixtures of halogenated and other hydrocarbons at around 100 ppb. The results have
been published (References 12, 13 & 14).
Figure 2.8: Summary of the Results of the Worldwide International Comparison
of Ozone Standard Reference Photometers (SRPs) and Similar Instruments –
Accuracy of the Slopes (a), and Exemplar Results at a Concentration of
80 nmol/mol (b)
Figure 2.8(a): Accuracy of the Slopes of Nationally-operated SRPs with Respect
to the Reference SRPs of BIPM and NIST, USA (reference slope 1.000)
Figure 2.8(b): Example of the Differences in the Results Obtained with the
National SRPs at an Ozone Concentration of 80 nmol/mol
Figure 2.9: Results of the EURAMET Comparison (886) for certain of the 30
Component Hydrocarbon Ozone Precursor mixtures specified in
Annex X of Directive 2008/50/EC, described in Section 2.7.3.2(c)
above
Chapter 3: Interpretation, Design, and Implementation, of
Quality Systems for European National Reference Laboratories
3.1 Initial Background
The original requirements for implementing quality assurance procedures for
ambient air quality monitoring at an EU Member State level were specified in "Council
Directive 96/62/EC on ambient air quality assessment and management" (the
so-called 'Framework Directive'). Article 3 of this directive required that:
“The Member States shall designate at appropriate levels the competent authorities
and bodies responsible for, for example, ensuring the accuracy of measurement by
measuring devices and checking the maintenance of the accuracy of such devices, by
internal quality controls carried out in accordance, inter alia, with the requirements of
European quality assurance standards”.
At that time there was no published version of the ISO/IEC 17025 standard. There was
also a debate as to which European quality assurance standard was referred to in the
1996 directive listed above, since there were at that time at least two candidates,
including the ISO 9000 and the EN 45000 series of standards, as discussed in Annex
2. This led to different interpretations for some time. However, subsequent activities
overtook the issue of which candidate standard was intended by the then current
Directive 96/62/EC. These are discussed below.
3.2 Development of the New European and Worldwide Quality-Assurance
Standard
Following on from the above, certain developments took place which are essential to
consider in the context of the scope of this discussion. These changes had a
fundamental impact on the previous position summarized above:
(a) The new ISO/IEC standard 17025, entitled "General requirements for the
competence of testing and calibration laboratories", was first published in 1999.
This was accepted by national accreditation bodies worldwide as the basis for
future accreditation of calibration and testing laboratories that carry out technical
work and perform calibration and testing services, generally for external
customers. It was also accepted in the same way by the European co-operation
for Accreditation (EA).
It was then approved in 2005 by CEN and CENELEC and published as EN
ISO/IEC 17025 for the same applications as those considered by ISO. This
therefore replaced the previous EN 45001 standard (and the associated ISO/IEC
Guide 25), which were withdrawn as the European quality assurance standards
previously used.
This EN ISO 17025 standard has a number of advantages over other quality
assurance standards. For example:
- It covers all the quality management system requirements of the ISO 9001
  standard that the organisation would need to implement to enable its
  management practices, organisational structure, quality policy, process and
  resource management, etc. to be considered valid and defensible;
- It is not necessary for any calibration or testing laboratory to gain a separate
  certification to ISO 9001 etc. (see EN ISO 17025 p. vi). It is worth noting that if an
  organisation holds both EN ISO 17025 accreditation and ISO 9001 approval, then
  the ISO 9001 assessors will in general accept all of the technical/organisational
  aspects within the EN ISO 17025 accreditation without further review;
- The requirements for the technical work to be carried out by the calibration or
  testing laboratory are generally more comprehensive and rigorous than those
  specified within EN 45001, which it effectively replaced, as discussed in Annex 2;
- There are comprehensive and specific requirements in the EN ISO 17025
  standard for method validation, for traceability to recognised reference standards
  and/or reference materials, and for technical quality audits to be carried out;
- There are requirements for the measurement uncertainties to be evaluated
  rigorously, and for the measurement uncertainty of the results to be provided to
  customers (these issues are more comprehensively and rigorously specified than
  in EN 45001, and are not covered specifically in ISO 9001);
- The EN ISO 17025 standard is compatible with the current internationally
  accepted method for the determination of measurement uncertainties, the
  internationally recognised ISO "Guide to the Expression of Uncertainty in
  Measurement" (GUM), first published in 1993 (a minimal numerical illustration is
  given after this list);
- This was emphasized when the GUM document, published by ISO, was also
  transposed into a CEN standard, "Guide to the Expression of Uncertainty of
  Measurement" (ENV 13005:1999).
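The essence of the GUM approach referred to above can be illustrated with a minimal numerical example: uncorrelated standard uncertainty contributions are combined in quadrature and expanded with a coverage factor k = 2, corresponding to a confidence level of approximately 95 % (see also Section 3.3). The contributions used below are illustrative only.

```python
# Minimal numerical illustration of the GUM approach referred to above:
# uncorrelated relative standard uncertainties are combined in quadrature
# and expanded with a coverage factor k = 2, which corresponds to a
# confidence level of approximately 95 %.
from math import sqrt

# Relative standard uncertainty contributions for a site measurement (illustrative)
contributions = {
    "calibration standard": 0.8,    # %
    "calibration transfer": 1.0,    # %
    "analyser repeatability": 1.2,  # %
    "sampling losses": 0.5,         # %
}

u_c = sqrt(sum(u**2 for u in contributions.values()))   # combined standard uncertainty
U = 2.0 * u_c                                           # expanded uncertainty, k = 2
print(f"combined standard uncertainty: {u_c:.2f} %")
print(f"expanded uncertainty (k=2, ~95 %): {U:.2f} %")
```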
The adoption of the EN ISO 17025 standard as the definitive European (and
worldwide) quality assurance standard to underpin technical accreditations for
testing and calibration laboratories should therefore be considered as beneficial
and highly relevant to the 96/62/EC Framework Directive's original requirements
for a 'European quality assurance standard' as referred to in that directive. This
quality assurance standard, EN ISO 17025, is now referred to specifically in EU
Directive 2008/50/EC (see below).
Information on some of the more important scientific and technical requirements
within EN ISO 17025 and the GUM, which are specific and relevant to the scientific
and technical work of the NRLs, is given in Chapter 2 of this document.
(b) More recently, within a separate initiative, the European Commission developed
the new Directive within the Clean Air for Europe (CAFE) Programme, and this is
now published as “Directive 2008/50/EC on ambient air quality and cleaner air for
Europe” (Official Journal of the European Union L152 11.6.2008). In this 2008
Directive, in order to be consistent with the earlier Framework Directive
96/62/EC, there is an Article 3 with a very similar scope. However, this new
Directive has some additional text contained in Annex 1, which describes in more
detail than the 1996 directive, the methodology and context of ‘ensuring the
accuracy of measurements and compliance with the data quality objectives’. The
issues related to this requirement are discussed in Section 3.3 below.
3.3 General Quality Assurance and Quality Control Requirements for
European National Reference Laboratories
There are a number of explicit areas and activities that Directive 2008/50/EC specifies
as being the responsibility of the NRL concerned with a particular ambient air
monitoring activity, covering the pollutants that are regulated by the Directive. This new
Directive provides greater detail and more clarity than the previous Framework
directive 96/62/EC. For example it specifies:
(1) The ‘European quality assurance standard’ that shall be used - EN ISO 17025:
General Requirements for the Competence of Testing and Calibration
Laboratories. This describes a number of technical and quality requirements that a
calibration or testing laboratory must fulfil in order to provide acceptable
experimental calibration or testing results. It is clear therefore that this covers the
requirements of very specific scientific and technical activities associated with
testing and calibration that, in the case discussed here, are related to the
monitoring and the data quality requirements of the ambient-air pollutants specified
in the directives. It is therefore also clear that accreditation to the EN ISO 17025
standard of the competent authorities themselves, which as noted above are usually
national ministries or agencies, is generally not appropriate or relevant.
(2) The requirement for the accreditation of the competent body that is designated to
carry out a clearly specified range of technical, experimentally based quality-assurance activities, as outlined in Annex 1 Section C of Directive 2008/50/EC.
This Section also defines WHICH of the laboratories of the Member States require
this accreditation. That is:
The laboratories that are designated to take part in the “Community-wide
intercomparisons” that are regulated in the Directive for the reference
methods referred to in Annex VI of that Directive. Other laboratories in the
Member States that do NOT take part in the “Community-wide intercomparisons”
for the reference methods do not require any such accreditation, although
recommendations for these are made below in this document (Section 3.4).
The above definition is also compatible with the implicit definition of an NRL within
Article 3 of the Directive as discussed above in Section 1.4.
(NOTE 6: The previously published 4th Daughter directive 2004/107/EC is not
covered by this 2008 EU Directive, and thus the exact provisions and requirements
listed here do not formally apply to this. However, it is anticipated that this may be
aligned in future).
(3) The definition as to which laboratories need to be accredited to the EN ISO 17025
standard leads on naturally to a detailed discussion of the SCOPE of the
accreditation required. In this context it should be understood that:
(a) It is often the case that more than one NRL will be designated in order to cover
the range of ambient air pollutants that are specified in Directive 2008/50/EC,
(and more within directive 2004/107/EC should this be relevant in future),
although this is not mandatory and the competent authority decides. Therefore,
it should be clear that in this circumstance it is necessary for the NRL to have a
scope of accreditation that relates only to the reference method(s) it has been
given responsibility for by the competent authority in the Member State.
(b) The text of Annex 1 Section C of the Directive states that, in addition to
restricting the requirements for accreditation to that laboratory that participates
in “Community-wide intercomparisons” it is necessary for the NRL to be
involved in one (or more) reference methods (within the Member State) as
specified in Annex VI. This is important in arriving at the requirements and
scope of their accreditation because:
o The reference methods specified in Annex VI cover a number of different
technical tasks. These include, as discussed in Section 1.4, the “type
approval” of the instrument before use, the initial installation, the set up and
the first calibrations and other tests required on the complete monitoring
system at the selected sites in the Member State, and the subsequent ongoing quality assurance, quality control, and maintenance of the complete
system during its field operations, so as to ensure the on-going required
accuracy of the results.
o When the NRL is involved in realizing and/or implementing one or more
of the technical activities listed above, for one or more regulated
pollutants, as specified in the CEN standards defining the reference
methods, and as given in Annex VI of the Directive, this range of
technical activities therefore MUST BE ACCREDITED according to the
requirements of the EN ISO 17025 standard. This should be considered
the minimum scope of the required accreditation. Bullet (iii) and Section
3.3 bullet (a) below give strong recommendations on the OTHER
related TECHNICAL activities that should also be accredited.
(4) The methodology for determining the measurement uncertainties of the methods –
primarily ENV 13005:1999 and ISO GUM: 1993;
(5) The level of confidence of the measurement uncertainties that shall be used –
95%, and how these shall be determined;
(6) The NRL and the competent authority’s obligations wherever the reference
measurement method is not used for the Directive’s purposes – to carry out the
procedures specified in the EU Guidance on the Demonstration of Equivalence
(Reference 4) for non-reference Methods, or other relevant methods that are
specified by the national competent authority.
It is useful to emphasize at this point certain aspects concerning the laboratories that
require this accreditation, as a result of these definitions:
(i) This definition of which laboratories must be accredited is linked to their
participation in the “Community-wide intercomparisons”. This may not therefore
cover all laboratories within a given Member State that have national
responsibilities, due, for example, to a particular federal structure, or to the fact that a different laboratory provides the nationally-traceable calibration standards required.
In this case, the Directive limits (as a minimum) the requirements for
accreditation to the EN ISO 17025 standard to those laboratories that
participate in the “Community-wide intercomparisons”, as designated by the
relevant competent authority.
There are, however, some additional recommendations within this document to
cover these other laboratories that have national and possibly regional
responsibilities (see below Section 3.4).
(ii) It should not be possible for a laboratory to avoid the requirement for accreditation
to EN ISO 17025 by declining to participate in international comparisons that relate
to the relevant technical quality assurance activities of that NRL in the Member
State, since this would contravene the other requirements of Article 3 of the
directive, and EU treaty provisions on cooperation.
(iii) It may be argued from a reading of the Directive and its Annexes that the scope of
the accreditation should only cover and be limited to those technical activities that
are carried out directly within the “Community-wide intercomparisons”. However,
this is not practical since it is not possible to accredit a specific activity that may
take place for a short time once in every three to five years. In addition, the
activities carried out during these “Community-wide intercomparisons” should
generally be considered to be a sub-set of the activities that are carried out by the
NRLs during their role within the Member State, and thus the accreditation as
indicated above, should have a scope that covers all the technical quality-assurance activities that the specific NRL carries out at a national level within the CEN reference method(s) for which it has responsibility.
In addition, importantly, the results obtained during these Community-wide
intercomparisons must be made available during the assessments of the
accreditations to EN ISO 17025, in order to provide independent evidence that the
results obtained by the NRL are valid within their stated uncertainties. Any non-conformances with the measurement uncertainties stated in their scope of accreditation that become apparent as a result of these international comparisons must be investigated and rectified.
(iv) As discussed above, a number of CEN reference methods cover separable and
different experimental activities, including type approval of an automated method before its use, and the regular ongoing QA/QC activities carried out whilst the automated method is deployed in the field. There are clearly some common features of these, but in detail they are actually different activities. Therefore, where a given laboratory is involved in one (or more) of these technical activities, the scope of the accreditation shall cover this fully.
There is no specific requirement for a scope of accreditation that covers the
other activities within the CEN reference method that the NRL is not involved
with directly in a technical manner.
However, the NRL has certain responsibilities within these other activities and for
other technical activities outside of the scope of the CEN reference methods, as
outlined below in this document.
(v) The primary activities of the NRL that require accreditation to the EN ISO 17025
standard are those stated above. These may be considered to represent the
minimum scope of the required accreditation. This scope, and the related
detailed technical procedures and other quality management documents that
embrace all the quality-assurance and quality-control activities involved,
should be comprehensive and cover all of the national activities carried out
technically by the specific NRL to ensure the quality and accuracy of the
measurement results supplied to the Commission.
It is considered that this should also include and describe the relevant
“Community-wide intercomparisons” as these provide an essential means of
assessing, in an “audit” manner, the validity of the accredited activities carried out
at a national level by the NRL. These comparisons therefore should be considered,
as discussed above, as an integral part of the national QA/QC activities using
the reference method.
(vi) The accreditation requirements within the EN ISO 17025 standard cover both
calibration and testing activities for a range of technical activities, with (almost)
equal requirements (indeed most European accreditation bodies treat these now
as equally stringent). When defining whether the requirements discussed here are
for a testing or a calibration accreditation of the technical quality assurance
activities carried out by NRLs, within the ongoing QA/QC requirements of the CEN
reference methods, it should be noted that most should be considered as “testing”
activities (e.g. linearity and cross sensitivity tests on the complete analytical
system). However, some of the technical activities carried out may be considered
calibrations (e.g. the certification of a gas mixture used in the field with the
analytical system against a nationally-traceable calibration standard, in order to
confirm that the system remains within specified calibration responses as given in
the CEN reference method). Then wherever these gas mixtures are used to
calibrate/recalibrate the reference method this should be considered as a
calibration activity and be accredited accordingly.
In addition, in the past, it was recognised that testing accreditations did not
necessarily require demonstrable measurement uncertainty statements assigned
to the results. This is clearly not the case for ambient air quality measurements.
It is recommended, therefore, that the accreditation is normally that for
testing. However, the laboratory concerned, in consultation with the relevant
national accreditation body, should consider the issue of whether a
calibration or testing accreditation is appropriate in part or for the whole. In
certain cases where the quality of the results depends strongly on the accuracy of the calibrated artefact used in the field, this should be emphasized in the accreditation. However, in all cases there is
clearly a requirement to emphasize and make explicit the measurement
uncertainties of all of the tests covered, when this is deemed a testing
accreditation, in order to demonstrate that the requirements of the CEN
reference methods are addressed comprehensively and rigorously.
(vii) The requirement that the technical assessment of the NRL to EN ISO 17025 should be made by an independent authority with appropriate technical competence, and one that is acceptable to the European Accreditation body; this technical assessment should cover the scope of the accreditation to ensure it is appropriate, the validity of the methodologies carried out to ensure that the requirements of the EN standards for the Reference Methods are implemented correctly, and the NRL’s methodology for determining results, assessing measurement uncertainties, etc.;
It should be recognised, however, that although the above are listed as responsibilities
for the NRLs, the ultimate responsibility for these activities in a given Member
State lies with the national competent authority and the detailed national
transposition of the Directive that may modify or rescind the quality
assurance/quality control responsibilities of its appointed NRLs described above.
3.4 Additional Responsibilities of NRLs that Relate to Quality Assurance at a
National Level
There are a number of more implicit requirements that are not specified in detail.
Some of these are given below for information:
(a) In addition, as discussed above, Article 3 of the Directive specifies that NRLs (or other designated laboratories) have responsibilities, through their role in “ensuring the accuracy of measurements”, for the coordination:
o On the Member State’s territory of the Community-wide quality assurance programmes organised by the Commission;
o At a national level of the appropriate realization of the reference methods and the demonstration of equivalence of non-reference methods;
Therefore, it is strongly recommended and highly desirable that all the technical activities carried out by the NRL that ARE associated with the above two activities, and that are clearly additional (but related) to those discussed in Section 3.3 bullet (iv), should also be included in the scope and technical procedures covered by the accreditation of the NRL to the EN ISO 17025 standard, wherever applicable and part of their responsibilities.
(b) Furthermore, where the NRL is not involved in all of the activities specified in the CEN reference method - for example, in the tasks associated with type-approval of the method, or in its initial installation and commissioning at a field site - failure to implement these activities correctly may mean that the accuracy of the measurements is compromised. Therefore, in these circumstances, it can be argued that the NRL has a level of responsibility for ensuring that all such activities have been carried out appropriately, with the required measurement uncertainties, and within the required conditions (unless this has been delegated to another organisation by the competent authority).
Thus it is not necessary for a given NRL to carry out the type approval and the other tasks itself, but it is necessary for the NRL to satisfy itself that these activities have been carried out appropriately, and to the appropriate requirements, in order for it to perform its own function, unless the competent authority has taken on this responsibility.
(NOTE 7: The 2008 Directive also makes recommendations that the competent authorities accept the type-approval testing and results produced by laboratories in other Member States, provided these have been carried out by laboratories accredited appropriately to EN ISO 17025 for carrying out such testing (see 2008 Directive Annex VI Section E); the appropriateness or otherwise of the type approval should be considered by the relevant NRL, with advice supplied to the competent authority if required).
(c) Laboratories other than NRLs may operate, as outlined above, at a national level
but they may not participate in “Community-wide intercomparisons” and therefore
are not required by the Directive to gain an accreditation as discussed above to the
EN ISO 17025 standard.
In addition, however, Annex 1 Section C states “that institutions operating networks and individual stations shall have established quality assurance and quality control systems to assure the accuracy of the measuring devices”. This is intended to cover institutions at a regional, local, or more limited level that are not required by the Directive to gain accreditation to EN ISO 17025.
In these circumstances, it should be argued that the NRL has a level of
responsibility for “ensuring the accuracy of measurements” and requirements for
measurement traceability, although no accreditation exists for these, unless
otherwise delegated to another organisation by the competent authority. It is
recommended, nevertheless, that such organisations:
- gain the appropriate accreditation, where this is practical, or if not they should follow all the requirements of the EN ISO 17025 standard that are possible;
- be involved in an appropriate and regular programme of audits of their technical activities, defined in consultation with the NRL or another relevant body; these audits should include evaluations of whether the data quality objectives of the Directive, particularly those associated with the required measurement uncertainty of the results, are initially conformed with and continue to be conformed with throughout the ongoing monitoring activities.
It is important that the NRL plays a significant role in the above activities, in addition to its other responsibilities.
(d) There are further requirements in article 3 of the 2008 Directive that the NRL
ensures the accuracy of measurements covering pollutants that are regulated in
the 2008 Directive (and the 4th daughter directive). There are clearly a number of
mechanisms for demonstrating this, and some of these are discussed below as
specific examples. One of these relates to the requirement for traceability of all
measurements to recognised national or international standards, following the
requirements set out in Section 5.6.2 of the EN ISO 17025 standard. This
requirement is discussed in more detail in Chapter 2.
(e) The appropriate accreditation of the NRLs to the ISO/IEC quality-assurance
standard, with the scope as discussed above, must be in place by the end of the
year 2010.
Chapter 4: Type Approval and Product Certification of
Automated Instruments used for Ambient Air Quality
Monitoring
4.1. Definitions and Background
The definition of type approval (or pattern approval) may be given as follows:
“Decision taken by a designated body that the type or pattern of a given instrument or
analyser, conforms to all of the requirements that are laid down in a specific authorised
(agreed) document, generally through a series of tests and other examinations, known
as type-approval tests”;
Where type-approval tests may be defined as:
“The examination of instruments or analysers of the same type, or pattern, including all
tests necessary for the comprehensive approval of that pattern, which are submitted
(reported) to a designated body”;
In the context of this document it is important to realize that this concept of type
approval applies to:
- Instruments or analysers that are selected for use in order to meet the regulatory requirements in relevant EU directives (see below);
- Continuous and semi-continuous automated instruments or analysers (as specified below).
In addition, it can also be seen from this definition that the type-approval procedure and its associated tests, as applied to commercially manufactured instruments, may be implemented only once during the lifetime of ONE TYPE of instrument, throughout its entire time frame of manufacture and operation, which is generally five years or more. If this one-off type-approval procedure is carried out only at the beginning of the lifetime of one type of analyser, there is therefore no guarantee that modifications made during this lifetime will not affect the performance of that type of instrument with respect to the EU directives’ data quality objectives. Therefore CEN has developed standards (see Section 4.3 below) that provide ongoing requirements and control of the analyser performance throughout its lifetime.
The concept of instrument certification, or product certification, builds on the above
type approval, and its related tests, but it imposes additional on-going
requirements on the manufacturer of this instrument, on the designated body
and on the end-user, before and after these type-approval tests, to address the issue
of the possible one-off nature of these type-approval tests. The type-approval
requirements are summarized in Section 4.2 below. The additional requirements,
resulting from the need for instrument certification or for product certification have also
been incorporated into CEN standards, and these are discussed below in Section 4.3.
The drivers for the type approval of instruments used to monitor ambient air quality for
the purposes of meeting the regulatory requirements of the EU directives, arose
originally from the EU ‘Framework’ Directive, where Article 3 states:
“The Member States shall designate … the competent authorities or bodies
responsible for – approval of the measuring devices (measurement methods,
equipment….)”
This may be considered as an implicit requirement for either formal or informal type-approval testing to demonstrate conformance with the relevant EU directive’s data quality objectives.
The new Directive 2008/50/EC has more specific and explicit requirements:
- It contains the same text as in Article 3 of the above with the same implications;
- Article 11 calls for the use of Reference Measurement Methods as specified in Section A of Annex VI, and most of these Reference Methods, corresponding to certain CEN standards, contain comprehensive and specific requirements for type-approval tests to demonstrate conformance with the Directive’s DQOs;
- There are further requirements for a range of different tests if the Reference Methods are not used (Section B Annex VI);
- There are requirements on the acceptance by Member States of type-approval test reports produced by other Member States (Section E Annex VI), provided these tests are carried out within the accreditation to EN ISO 17025 for such tests.
4.2. Reference Methods/CEN Standards and Type Approval Requirements
4.2.1 CEN Standards With Requirements for Type Approval
The requirements for type approval are contained in a number of CEN ambient air
standards, that comprise continuous or semi-continuous measurement methods that
are covered by EU Directive 2008/50/EC, as indicated below:
1. EN 14211: 2005 Standard method for the measurement of the concentration of
nitrogen dioxide and nitrogen monoxide by chemiluminescence;
2. EN 14212: 2005 Standard method for the measurement of the concentration of
sulphur dioxide by ultraviolet fluorescence;
3. EN 14625: 2005 Standard method for the measurement of the concentration of
ozone by ultraviolet photometry;
4. EN 14626: 2005 Standard method for the measurement of concentration of carbon
monoxide by non-dispersive infrared spectrometry;
5. EN 14662: 2005 Part 3 Standard method for the measurement of benzene
concentrations (automated pumped sampling with in situ gas chromatography)
(NOTE 8: These are the currently published (2005) versions that are referred to
specifically in the above Directive. The first four of these are being revised and these
revised standards will eventually be published and replace the existing ones)
The other Reference Methods within Directive 2008/50/EC are standards for manual
methods and these contain no requirements for type approval. In addition, within
directive 2004/107/EC (4th daughter directive), nearly all of the standard methods are
manual sampling methods followed by laboratory analyses, and also do not require
type approval, with one exception:
– The EN standard 15852 for the measurement of total gaseous mercury, which is
based on a semi-continuous method;
4.2.2 Summary of Type-approval Requirements within the CEN Standards
The type-approval tests are used to evaluate a set of performance characteristics
determined under a prescribed series of tests. Procedures are defined for the
determination of the actual values of the performance characteristics for at least two
identical types of analysers in a laboratory and in the field. The standard specifies a
comprehensive and rigorous set of tests. The evaluation for type approval of an
analyser is based on the conformance of the analyser with all the individual
performance criteria that are specified in the standard and determined using these
tests. It also requires the calculation of the overall expanded uncertainty of the
measurement result based on the numerical values of all the relevant tested
performance characteristics, and this is compared with the relevant prescribed
maximum uncertainty specified in the Directive’s DQOs (e.g. ±15% at 95% confidence
level for NO2 at the hourly limit value). The tests and calculations carried out are:
Laboratory tests:
- Short-term drift tests at zero and span concentrations (12 hrs);
- Response times rising and falling;
- Repeatability at zero concentrations and at the highest numerical limit or target
value (where appropriate);
- Lack of fit (residuals from the linear regression function including the zero value);
- Sensitivity to sample gas pressure and sample temperature;
- Sensitivity to electrical voltage;
- Sensitivity to the temperature surrounding the analyser;
- Effects of interferences from other substances present in the atmospheric samples;
- Effect of fluctuating concentrations (averaging test);
- Converter efficiency where applicable (NOx)
- Carry over where applicable (benzene)
- Differences between sample and span inputs, where applicable.
Field tests (3 months):
- Long-term reproducibility between two analysers during the field measurements;
- Long-term drifts at zero and span levels;
- Period of unattended operation, and fraction of total time the analyser is available
for measurements.
Overall uncertainty of measurement results:
- Determined by combining the type approval results obtained from the relevant tests
above, expressed at a 95% confidence level.
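To illustrate how the individual type-approval results feed into the overall figure, the following sketch (in Python, and not part of any CEN standard) shows a simplified GUM-style combination of relative standard uncertainties into an expanded uncertainty at approximately the 95% confidence level, compared against an assumed DQO of 15%. All the contribution names and numerical values below are invented placeholders; the actual list of contributions, their determination, and any sensitivity coefficients are those defined in the relevant CEN standard (e.g. EN 14211 for NO2).

```python
import math

# Hypothetical standard-uncertainty contributions at the hourly limit value,
# expressed as relative values (fractions of the measured concentration).
# Real values come from the laboratory and field type-approval tests
# prescribed in the relevant CEN standard - these numbers are placeholders.
contributions = {
    "repeatability_at_limit_value": 0.010,
    "lack_of_fit": 0.018,
    "sample_pressure_sensitivity": 0.012,
    "sample_temperature_sensitivity": 0.008,
    "surrounding_temperature_sensitivity": 0.015,
    "voltage_sensitivity": 0.005,
    "interferents": 0.020,
    "averaging_effect": 0.010,
    "field_reproducibility": 0.025,
    "long_term_drift": 0.015,
    "calibration_gas": 0.020,
}

# Simplified GUM-style combination: root sum of squares of the
# (assumed uncorrelated) relative standard uncertainties.
u_combined = math.sqrt(sum(u ** 2 for u in contributions.values()))

# Expanded uncertainty at ~95% confidence using a coverage factor k = 2.
k = 2.0
U_expanded = k * u_combined

dqo = 0.15  # e.g. 15% DQO for NO2 at the hourly limit value
print(f"Combined relative standard uncertainty: {u_combined:.2%}")
print(f"Expanded relative uncertainty (k=2):    {U_expanded:.2%}")
print("Meets DQO" if U_expanded <= dqo else "Exceeds DQO")
```

The root-sum-of-squares combination and the coverage factor k = 2 shown here are the generic GUM approach; the CEN standards prescribe exactly which tested characteristics enter the budget and how each is converted to a standard uncertainty.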
4.3 Requirements for the Certification of Type-approved Analysers
4.3.1 Overview
The accepted and published CEN standards referred to in Section 4.2.1 above contain complete, comprehensive and autonomous technical requirements for the specification of type-approved analysers that are required to meet the data quality objectives, particularly the overall measurement uncertainty requirements, of EU Directive 2008/50/EC. All of these requirements must be fulfilled in order to demonstrate comprehensively that the analysers have the potential to serve as EU Reference Methods, provided they are subsequently installed appropriately at the selected monitoring sites
and have the required on-going quality assurance and quality control procedures
applied, as specified in the relevant CEN standards. These CEN standards also
contain rigorous laboratory and field tests to establish the specified numerical values
for all the relevant performance characteristics that contribute to the overall
measurement uncertainty of the method. Their implementation is mandatory in the EU Member States.
(NOTE 9: This situation is in contrast to the technical requirements for industrial stack emission analysers, where no mandatory standards were available at a European level before the publication of standard EN 15267 Part 3; similar technical information and test requirements are contained in that one standard for all pollutant species – see below).
There are, however, additional requirements for other standards at a European
level that do not provide the technical and the testing requirements of measurement
methods, but provide instead an acceptable formal management and organisational
structure under which these technical CEN standards can be operated. These
requirements are covered by two CEN standards that have recently been published.
These are EN 15267 Part 1 and EN 15267 Part 2. A summary of the scope and
objectives of these two standards is given below:
4.3.2 EN 15267 Part 1: Certification of automated measuring systems – General principles
4.3.2.1 Need for Certification across Europe
The type approval of Automated Measurement Systems (AMSs), and their subsequent
certification, is broader than that specified in Directive 2008/50/EC, and supports the
requirements of other EU directives, particularly those related to the emissions of air
pollutants to atmosphere from certain industrial plants that are regulated by EU
directives.
The responsibility for approving the AMSs that monitor ambient air quality under Directive 2008/50/EC in a given Member State is clearly assigned to the national competent authority in that Member State. However, no such explicit requirements are
specified in the relevant EU directives for approving AMSs for industrial sources.
Instead, the competent authorities in some Member States have set their own
arrangements in place.
In certain Member States the competent authorities have designated the type approval
or certification of AMSs that monitor both ambient air quality and industrial emissions
to one or more organisations that are accredited to standard EN 45011:1998 (General
requirements for bodies operating product certification schemes – identical to ISO/IEC
Guide 65:1996) by a national accreditation body. In some cases, however, the competent body itself is not accredited by an external body to EN 45011, whereas in other cases it may be. In some States the designated organisations may be public
bodies, in others private. These different approaches have been developed over a
number of years, and reflect the different legislative and administrative arrangements
that exist in different Member States (see also Section 4.5 below).
European standard EN 45011 specifies the general criteria that the certification body shall follow if it is to be recognised at a national or European level as competent and reliable. This is one step towards mutual recognition, as outlined further below.
4.3.2.2 Scope of the EN 15267 Part 1 Standard
This European Standard specifies the general principles, including common
management and organisational procedures and requirements, for the product
certification of automated measuring systems (AMSs) for monitoring both ambient air
quality and emissions from stationary industrial sources. This product certification
consists of the following sequential stages:
a) Type approval, or performance testing, of an AMS by comparisons with the
specified performance requirements (in this case those given in the relevant CEN
ambient air standards); - the technical procedure discussed in Section 4.2 above.
b) Initial assessment of the AMS manufacturer’s quality management system (usually using ISO 9001); - this process is outlined in Section 4.3.3 below.
c) Formal certification of the product type or product pattern – see below;
The process of formal certification is carried out through the organisation that is
accredited for product certification as outlined above, or similar (e.g. EN ISO/IEC
17021). The certification body carries out a critical review of the following:
- The technical report(s) from the type-approval testing of the AMS, to establish (i) that the tests are carried out fully in accordance with the relevant CEN standard, (ii) that the AMS fulfils all the performance criteria requirements specified in that standard, and (iii) that the tests have been carried out fully within an accreditation to the EN ISO 17025 standard.
- Evidence of the manufacturer’s compliance with EN ISO 9001:2000, if applicable;
- Evidence of the manufacturer’s compliance with the initial assessment of its quality management system as specified in EN 15267 Part 2.
If these are all found to be satisfactory, then the relevant body issues a certificate
of conformance.
d) Continuing surveillance of the AMSs in use, and audits of the manufacturer - as outlined in Section 4.3.3 below.
The product certification body should have a clear mandate to operate the
certification scheme with the participation of and on behalf of all the stakeholders
(public authorities, end users, AMS manufacturers, test institutes etc).
4.3.3 EN 15267 Part 2: Certification of automated monitoring systems – Initial
Assessment of the AMS manufacturer’s quality management system, and
post certification surveillance of the manufacturing process
This standard specifies the requirements that the manufacturer’s quality management
system must follow in order to fulfil the requirements of product certification, including:
- An initial assessment of the AMS manufacturer’s quality management system;
- The initial assessment of the manufacturer’s production control;
- The continuing surveillance of the effects on the performance of the AMS from subsequent design changes, in order to ensure that all such design changes do not alter the AMS in such a manner that it no longer conforms with its certified performance.
The standard generally follows the format and requirements of EN ISO 9001:2000.
However, it does not preclude the use of other quality management systems that are
comparable in objectives. Where no such quality management system exists, there are requirements for enhanced auditing of the manufacturer by skilled personnel under the control of the certification body. In addition, there will be continuing evidence available of the performance of the AMSs from their use in field measurements.
4.4 Possible Roles for an NRL
The certification procedures outlined above, and covered in the standards EN 15267 Parts 1 & 2, together with the relevant CEN standards discussed in Section 4.2.1 above, require, in principle, that in each Member State a certification body, and one or more type-approval and performance-testing organisations, are appointed and operate, together with the related activities that are needed, and with guidance from the relevant stakeholders. It is clearly the responsibility of the competent authority in the Member State:
- to agree this and to establish suitable bodies within the Member State or elsewhere;
- or to propose alternative methods that it can justify;
- and/or to agree that it will accept the appropriate test reports from other appropriate test laboratories in other Member States as specified in Directive 2008/50/EC Annex VI Section E.
Within this overall framework the NRL might or might not serve as a recognised testing
laboratory or the certification body. However, these roles and responsibilities should
not be considered as integral and implicit parts of the normal responsibilities of the
NRLs. Instead it may be considered in a Member State that the NRL, in its role of “ensuring the accuracy of the results”, should formally or informally act as a technical assessor or technical reviewer of these activities, in order for example to:
- Review the validity of the testing programme and the results obtained, so as to ensure that the technical quality of the Reference Method being implemented conforms to the requirements of the Directive’s DQOs;
- Review the results obtained through QA/QC procedures in the field, to check that the Reference Method continues to operate correctly in long-term use;
- Exchange information through AQUILA, and related mechanisms, about the testing, type-approval and certification mechanisms, so as to improve the existing CEN standards through future revisions;
- Thereby improve the confidence, the transparency, and the mutual acceptance of the test results.
An additional role for the NRL may be considered, where appropriate, as being
involved in the testing of methods other than the Reference Methods for equivalence,
or in overseeing or reviewing the technical data from such testing, in order to ensure
its validity. This should apply especially, but not exclusively, to PM equivalence testing.
An NRL should therefore also be willing to collate and exchange the technical
information and the data obtained from equivalence testing with other NRLs through
AQUILA, in order to achieve a more harmonised and rigorous approach across Europe
in future. It may also become more important for other pollutant monitoring techniques
including those related to passive sampling.
4.5 Towards a Harmonised European Approach
As outlined above, European standard EN 45011 specifies the general criteria that a certification body operating a product certification scheme must follow should it seek to be recognised at a European or wider international level. To support this,
Document EA–6/01 published by the International Accreditation Forum (IAF) provides
guidance on the application of EN 45011, with the purpose of harmonising
internationally the application of EN 45011 by accreditation bodies. This is one
important step towards the mutual recognition between certification bodies in different
countries through the IAF Multilateral Agreement.
The preparation and publication of the two European standards EN 15267 parts 1 & 2
discussed above by CEN Technical Committee 264 are also intended to facilitate this
in the field of ambient air quality and industrial emission monitoring, by providing
clarification and technical interpretation of the relevant generic standards. The aim
therefore is the mutual recognition, where practical, of the certifications and testing for
type approval or product conformance of AMSs carried out on behalf of recognised
bodies, so that there is no absolute requirement for each Member State to carry out
this certification process should it choose to accept the results from another Member
State.
This objective is also intended in Directive 2008/50/EC Annex VI section E:
“In carrying out the type approval to demonstrate that the instrument meets the
performance requirements of the reference method, designated competent authorities
and bodies shall accept test reports issued in other Member States by laboratories
accredited to EN ISO 17025 for carrying out such testing”
This mutual recognition of testing reports that conform to the EN ISO 17025 standard
requirements for the specified tests is already in place through international
agreements, either formal or informal, within certain Member States - for example
between UBA Germany (where type approval is carried out by TÜV and other
laboratories) and the UK MCERTS certification scheme established by the
Environment Agency of England and Wales. Other countries (e.g. France and Italy) now have in place, or are planning to establish, their own type-approval schemes for these types of AMSs (and industrial-emission monitors), and there is a wish for mutual recognition in these countries, and in certain others also. This is one important step
towards a more pan-European certification/type approval scheme in the future, to
avoid unnecessary and expensive duplication of the testing work required.
There will need, however, to be very considerable improvements to the harmonisation
of the testing, and more transparency of the tests themselves and the results obtained
from these in practice, if this is to be achieved widely across Europe. The European
standards discussed above, the overarching management and organisational
standards, EN 15267 parts 1 & 2, and the relevant Reference Methods, are intended
to provide the mechanisms for bringing this about, probably in the longer term.
Chapter 5: Quality Assurance and Quality Control in Ambient
Air-quality Monitoring Networks at a National Level, Operated
for EU Regulatory Purposes
5.1 Introduction
The tasks specified for the Competent Authority/National Reference Laboratory in the
Directive 2008/50/EC on Ambient Air Quality and Cleaner Air for Europe have been
summarized in Section 1.4 of this document and the roles of NRLs summarized
elsewhere. This Chapter covers in more detail the following specific technical
requirements at a national level:
Directive 2008/50/EC Article 3, point (c):
Member States shall designate at the appropriate levels the competent authorities and bodies responsible for the following:
• Ensuring the accuracy of measurements
Where relevant, the competent authorities and bodies shall comply with Section C of Annex I.
Directive 2008/50/EC Annex 1 Section C:
1. To ensure accuracy of measurements and compliance with the data quality
objectives laid down in Section A, the appropriate competent authorities and
bodies designated pursuant to Article 3 shall ensure the following:
- All measurements undertaken in relation to the assessment of ambient air quality pursuant to Articles 6 and 9 are traceable in accordance with the requirements set out in Section 5.6.2.2 of the EN ISO 17025:2005 standard;
- Institutes operating networks and individual stations have an established quality assurance and quality control system which provides for regular maintenance to assure the accuracy of measuring devices;
- Quality assurance/quality control procedures are established for the process of data collection and reporting, and that institutes appointed for this task actively participate in the related Community-wide quality assurance programmes; (NOTE 10: as yet no such exercises have been initiated but the AQUILA Network may become involved in this in future.)
- National Reference Laboratories, when appointed by the appropriate competent authority designated pursuant to Article 3, that are taking part in Community-wide intercomparisons covering pollutants regulated in that Directive, are accredited according to the standard EN ISO 17025 by 2010 for the reference methods referred to in Annex VI. These laboratories shall be involved in the coordination on Member States’ territory of the Community-wide quality assurance programmes to be organized by the Commission and shall also coordinate, on the national level, the appropriate realization of reference methods and the demonstration of equivalence of non-reference methods.
2. All reported data under Article 27 shall be deemed to be valid except data flagged
as provisional.
As discussed previously in this report, it is generally accepted that the Competent Authority of a given Member State is designated at the political level, whereas national laboratories or National Reference Laboratories are designated at the technical level. However, this distinction is not clear in the Directive and hence, on a number of occasions, it is not clear whether the responsibilities for certain of the activities discussed in this Chapter belong to the competent authority or to the National Reference Laboratory. Thus these two terms may be used in combination in this Chapter.
Article 3 of Directive 2008/50/EC, in so far as it concerns ensuring the accuracy of measurements, must be interpreted in the context of the required Data Quality Objectives (DQOs) listed in Annex I of that Directive. These specify, for each pollutant covered by the Directive, the uncertainty, minimum data capture and minimum time coverage (where appropriate) for fixed measurements, together with the requirements for indicative measurements, modelling and objective assessment. This Chapter will address fixed and indicative
measurements.
These DQO requirements apply to individual analysers/samplers at individual stations. In
order for these measurements to constitute a compliant overall assessment of air quality
in the Member State, the requirements for the appropriate numbers of monitoring points
in Zones and Agglomerations (Annex V and Annex IX), their locations and macro and
micro siting (Annex III and Annex VIII) must also be met. These requirements are
complex, with the need to maintain specific ratios of roadside and background sites for nitrogen dioxide, particulate matter, benzene and carbon monoxide, specific ratios of PM10 to PM2.5 sites, PM2.5 sites to assess exposure reduction, sites to assess compliance with critical levels for vegetation, and separate requirements for ozone precursor monitoring. It would therefore be beneficial if an EU-level Working Group were to be formed to provide additional guidance to Member States on the requirements for the appropriate numbers of monitoring stations. At the least, this working group should examine how these requirements have been interpreted in the different countries and make recommendations. In the absence of such a group, this should be the subject of a future AQUILA document additional to this current document.
As noted previously, the Reference Methods of measurement for the various pollutants
are defined as the relevant CEN Standard Methods:
SO2 - EN 14212:2005
NO and NO2 - EN 14211:2005
PM10 - EN 12341:1999
PM2.5 - EN 14907:2005
Benzene - EN 14662:2005
CO - EN 14626:2005
Ozone - EN 14625:2005
Pb, Cd, As, Ni in PM10 - EN 14902:2005
PAH (benzo[a]pyrene) - EN 15549:2008
Of the above, as is well known, the last two standards (apart from requirements for the
monitoring of the heavy metal lead) are not covered by Directive 2008/50/EC. However,
this AQUILA document and this discussion still embrace these, where applicable, since it
is considered that this is an appropriate method for ensuring the DQOs for those
pollutants also.
The CEN standard methods for gaseous pollutants provide details of the reference
method of measurement for each pollutant, specific analyser tests to be performed and
requirements for the on-going regular quality control and quality assurance (QA/QC)
activities for the analysers and associated systems for sampling, data collection and
data transmission. In addition, these standards provide a methodology for uncertainty
evaluation to enable the overall measurement uncertainty of the method to be directly
compared with the specified DQOs.
This Chapter outlines some of the general principles of QA/QC, and the detailed CEN
QA/QC requirements - many of which will need to be newly introduced into
measurement networks - and provides guidelines and recommendations to NRLs on the implementation of this QA/QC.
(NOTE 11: It will also be useful if the experience of network operators in the practical
implementation of these standards is fed back to the CEN working groups so that this
experience can be used when the current CEN standards are revised).
5.2 General Principles of Quality Assurance and Quality Control as Applied to Air Quality Networks
The International Organization for Standardization (ISO) defines QA and QC as follows:
Quality Assurance - all planned and systematic actions necessary to provide
adequate confidence that a product, process or service will satisfy given requirements
for quality;
Quality Control – the operational techniques and activities that are used to fulfil the
given requirements for quality.
These definitions are interpreted, in relation to the operation of air quality networks, as meaning:
- Quality assurance refers to the overall management of the process involved in obtaining the data;
- Quality control refers to the activities undertaken to check and optimise data accuracy and precision after collection.
Hence, quality assurance relates to the measurement process, whilst quality control is concerned primarily with its outputs.
For ambient air quality monitoring networks, quality assurance activities should
include:
• Network design;
• Station siting (macro and micro criteria);
• Instrument selection;
• Instrument operations;
• Instrument preventative and corrective maintenance;
• Operator training;
• Instrument calibration;
• Specifications for ongoing inspections and maintenance of the systems/stations etc;
• Site, analyser, and site calibration standard audits;
• Data handling, validation and management.
Quality control activities should include:
• On-site instrument checks;
• Data checking and validation;
• Quality review and feedback of this into the results;
• Participation in relevant comparisons.
In principle, where all of the quality assurance activities are undertaken correctly, are
fully in compliance with the relevant CEN standard, and compliant with the detailed
local operating procedures at all times, then there is an expectation that the
measurements will fulfil all the requirements of the EU directive(s) without further
checking. However, in practice in large and complex networks, often with many
hundreds of analysers and ancillary equipment and many site operators, this may
not always be the case. Hence, there is a need to control the quality of data by careful
data management and rigorous checking, preferably by very experienced personnel.
As an example, analyser faults must be identified and addressed quickly in order to
fulfil the Data Quality Objective for >90% data capture.
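As a simple illustration of the data-capture aspect of this requirement, the short sketch below (Python, purely illustrative and not taken from the Directive or the CEN standards) computes the fraction of valid hourly values in a year and flags a breach of an assumed 90% minimum data capture. Exactly which periods (for example, time lost to calibrations) are excluded from the denominator is governed by the Directive; the treatment shown here is only an assumption for the example.

```python
# Hypothetical annual statistics for one analyser; all numbers are placeholders.
hours_in_year = 8760
valid_hours = 8100          # count of valid hourly values after data validation
calibration_hours = 120     # assumed: time excluded for calibrations/zero-span checks

# Assumed interpretation: calibration time is excluded from the total
# before the data-capture fraction is computed.
data_capture = valid_hours / (hours_in_year - calibration_hours)

MIN_DATA_CAPTURE = 0.90  # assumed 90% minimum data capture DQO
print(f"Data capture: {data_capture:.1%}")
if data_capture < MIN_DATA_CAPTURE:
    print("Data capture DQO not met - investigate analyser downtime promptly")
```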
In addition, prior to submission of data to the EU, all suspect data must be identified
and investigated in order to fulfil the Directive’s requirement that “all data reported
under Article 27 shall be deemed to be valid except data flagged as provisional”
(Annex I, C). In addition to these technical requirements there is the need to ensure
that the data are reported correctly – these tasks all require different skills.
Laboratories familiar with the requirements of the EN ISO 17025 standard may be well aware of the need to review quality procedures and activities regularly in order to identify any generic issues that need to be addressed to ensure quality is maintained and improved in future. We recommend that all network operators undertake a regular review of all QA/QC activities, that the findings are documented, and that any necessary actions or improvements are implemented.
5.3 Requirements of the CEN Standard Methods
The CEN standard methods are now defined as reference methods of measurement
within Directive 2008/50/EC. As discussed above, these standards define “type
testing” procedures for gaseous analysers (discussed fully in Chapter 4), on-going
QA/QC requirements, and the methodology for assessing uncertainty.
The relevant CEN standard should be referred to for detailed requirements, but a
general overview is provided in this Chapter. It will be the responsibility of the NRL to
ensure that these requirements are met whenever the reference methods of
monitoring are used. Requirements for the demonstration of equivalence, use of the
equivalent method, and on-going QA/QC of other methods of measurement (Annex VI,
B) are briefly covered in Section 5.7 of this document.
5.3.1 Type Approval of Analysers
All measurements must be undertaken with analysers that have been type tested and
“approved” under the CEN requirements. It is not envisaged that all Competent
Authorities/National Reference Laboratories will undertake these tests or set up a suitable body within their country to do this (see Annex VI, E of Directive 2008/50/EC).
5.3.2 Field Operation and on-going QA/QC Activities
5.3.2.1 Suitability Evaluation and Initial Installation
The CEN standards set a number of operational condition checks that need to be
performed for each site and analyser deployed. These are summarized as follows:
- Estimate of the sample gas pressure and temperature variations;
- Estimate of the air temperature and mains voltage variations;
- Estimate of the concentration ranges of relevant interfering compounds in the local atmosphere;
- Functional check of the analyser and sampling system;
- Functional check of the data telemetry system;
- Lack of fit (linearity) test;
- Converter efficiency check for NOx analysers;
- Determination of the required frequency of filter changes and analyser consumables changes.
These parameters need to be reviewed in order to ensure that all the analysers at all
the sites are operating within the operating conditions for which they have been “type
approved”. The benefit for the end-user is that values of certain performance characteristics obtained during the type-approval tests (e.g. linearity, sensitivity coefficients, etc.) may then be used in his or her own uncertainty calculation. It is also necessary to determine the required frequency of certain maintenance actions, which may depend on the pollutant concentrations at the particular site, or on other factors. It is very useful to monitor continuously the temperature of the container housing the analysers, to ensure that the temperature control system (air conditioning) maintains the correct operating conditions for the analysers.
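To illustrate the lack-of-fit (linearity) test listed among the checks above, the sketch below (Python, illustrative only) fits a straight line to a set of calibration points and reports the largest relative residual. The concentration points, analyser readings and the 4% acceptance limit used here are invented assumptions; the actual test procedure and criteria are those defined in the relevant CEN standard.

```python
# Hypothetical analyser responses (e.g. in nmol/mol) at a set of generated
# test concentrations; real points and acceptance limits come from the CEN standard.
test_concentrations = [0.0, 40.0, 80.0, 120.0, 160.0, 200.0]
analyser_readings = [0.5, 39.2, 79.5, 121.0, 161.5, 198.0]

n = len(test_concentrations)
mean_x = sum(test_concentrations) / n
mean_y = sum(analyser_readings) / n

# Ordinary least-squares fit: reading = slope * concentration + intercept
sxx = sum((x - mean_x) ** 2 for x in test_concentrations)
sxy = sum((x - mean_x) * (y - mean_y)
          for x, y in zip(test_concentrations, analyser_readings))
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# Largest relative residual at the non-zero concentration points.
worst_relative = 0.0
for x, y in zip(test_concentrations, analyser_readings):
    predicted = slope * x + intercept
    if x > 0:
        worst_relative = max(worst_relative, abs(y - predicted) / predicted)

LACK_OF_FIT_LIMIT = 0.04  # assumed 4% relative limit, for illustration only
print(f"Slope = {slope:.3f}, intercept = {intercept:.2f}")
print(f"Worst relative residual: {worst_relative:.2%}")
print("PASS" if worst_relative <= LACK_OF_FIT_LIMIT else "FAIL - investigate")
```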
5.3.2.2 Requirements for Ongoing QA/QC
The CEN standards also specify a number of on-going QA/QC checks that need to be
undertaken whilst any gaseous analyser is in operation at a monitoring site:
- Calibration of the analyser using nationally-traceable reference gases or transfer standards - at least every 3 months;
- Certification of test gases on site - at least every 6 months;
- Zero/span checks - at least every 2 weeks;
- Lack of fit (linearity) check - every year, plus after every significant repair;
- Converter efficiency for NOx analysers - at least every year;
- Tests on the sample manifold, where applicable - at least every 3 years;
- Changes of the particulate filters - at least every 3 months;
- Tests of the sample lines - at least every 6 months;
- Changes of the analyser consumables - at least every 6 months;
- Regular analyser maintenance - as specified by the instrument manufacturer and further as required.
The National Reference Laboratory should ensure, wherever relevant, that procedures
are in place for these on-going QA/QC checks and that results are documented and
any necessary remedial actions taken. When analysers are found to be non-compliant
with any of these checks then the data and site/analyser records must be examined to
determine if data need to be rejected or corrected.
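A network operator may find it helpful to track these maximum intervals systematically. The sketch below (Python, an illustration only) flags checks that are overdue for a given analyser, using intervals copied from the list above; the interval values and the structure of the record are assumptions, and the governing figures should always be taken from the relevant CEN standard rather than from this example.

```python
from datetime import date, timedelta

# Maximum intervals for the on-going QA/QC checks summarized above
# (confirm the figures against the relevant CEN standard for each pollutant).
MAX_INTERVALS = {
    "analyser_calibration": timedelta(days=91),    # at least every 3 months
    "test_gas_certification": timedelta(days=182), # at least every 6 months
    "zero_span_check": timedelta(days=14),         # at least every 2 weeks
    "lack_of_fit_check": timedelta(days=365),      # every year
    "converter_efficiency": timedelta(days=365),   # NOx analysers only
    "particulate_filter_change": timedelta(days=91),
    "sample_line_test": timedelta(days=182),
    "consumables_change": timedelta(days=182),
}

def overdue_checks(last_done, today):
    """Return the names of checks whose maximum interval has been exceeded."""
    return [name for name, last in last_done.items()
            if today - last > MAX_INTERVALS[name]]

# Hypothetical record of when each check was last performed on one analyser.
last_done = {name: date(2009, 6, 1) for name in MAX_INTERVALS}
last_done["zero_span_check"] = date(2009, 9, 20)

print(overdue_checks(last_done, date(2009, 10, 1)))
```

A record of this kind also provides documentary evidence, of the sort an EN ISO 17025 assessment would expect, that the prescribed check frequencies are actually being met.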
5.3.3 Data Handling
The Competent Authority/National Reference Laboratory has the responsibility for
producing valid data. All reported data under Article 27 shall be deemed to be valid
except data flagged as provisional. Hence, all faulty data, and zero/span checks,
calibrations etc, must be removed from the dataset.
For NOx analysers, there is a specific requirement to correct data where converter
efficiencies are measured to be between 95% and 100% (note: it is possible with a
poorly calibrated analyser to determine more than 100% but this should be rectified
during QA/QC). If the converter efficiency of the analyser is below 95%, the data must
be rejected. Data for gaseous pollutants need to be reported at conditions of standard temperature and pressure, whereas PM10 and PM2.5 data need to be reported at ambient temperature and pressure conditions at the date of measurement (Annex VI, C).
Currently an EC Data Exchange Group (DEG) is in place, and any specific
requirements from this group, such as procedures for dealing with negative numerical
data, data below the detection limit of the analyser, and number of decimal places
required for data values, all need to be complied with before the data are submitted to
the Commission. The DEG will also specify what meta-data are required for each
monitoring station. The Competent Authority/National Reference Laboratory therefore
needs to ensure that these data are collected, accurate, and up-to-date, and reported
with the relevant measurement data.
(NOTE 12: The AQUILA Group is currently contributing to the future specifications of
these requirements, and the outcomes of this will be the subject of a separate report.)
5.3.4 Calculation of the Overall Expanded Uncertainty of the Measurement Results
The CEN standards provide a specific and comprehensive methodology for calculation
of the overall expanded uncertainty of measurement for direct comparisons with the
Directive’s DQOs. The CEN standard requires that this is calculated annually and it is
likely that the DEG will require this to be calculated individually from all the QA/QC
data for each analyser. This should then be reported annually to the Commission
along with the corresponding annual data set.
5.4 Traceability and Other Requirements of the EN ISO 17025 Standard
As discussed in Chapter 3, the Competent Authority/National Reference Laboratory is required to ensure that all measurements are traceable in accordance with the
requirements of the EN ISO 17025:2005 standard.
As described previously, this is not to be considered as a requirement that the whole
monitoring network is accredited, but rather that the measurements are traceable to
recognised national and/or international standards. In some Member States monitoring
networks are already fully accredited, but this may be difficult or expensive to achieve
for large and widely dispersed networks.
However, when this part of the Directive enters into effect (2010), it is envisaged that
the JRC will request that Member States use the reference methods of measurement
operated fully to the EN ISO 17025 standard during the “Community-wide
intercomparisons”. It is also anticipated that the JRC may request a certificate of
conformance to the EN ISO 17025 standard as the preferred method for Member
States to provide their results at such comparisons. Further details are provided in the
JRC Comparison Protocol paper, developed in cooperation with AQUILA. (Annex 1 of
this document)
5.5 Further Requirements of QA/QC Systems for EU Regulated Pollutants
The Competent Authority/National Reference Laboratory is required to ensure that
institutions operating networks and individual stations have an established quality
assurance and quality control system which provides for regular maintenance in order
to assure the accuracy of the results obtained from the measuring devices.
In order to comply with this requirement it is recommended that the QA/QC procedures
used by all institutions operating networks and individual stations within the Member
State are fully documented and that these are regularly inspected by the National
Reference Laboratory (see also Section 3.4(c)). These inspections and all follow-on actions also need to be documented.
The Competent Authority/National Reference Laboratory may also wish to audit these
other institutions to ensure that the written procedures are being implemented fully. In
addition, comparisons between these organisations and the Competent
Authority/National Reference Laboratory should be organised, where practical using
similar methodologies to those of the EU wide comparisons. In several Member States
these procedures are already in place. In some Member States, annual technical
meetings and/or calibration workshops are arranged as one alternative.
The JRC has also intermittently organised EU-wide “audits” of selected monitoring
sites in each Member State – for example a programme for particle measurements is
currently underway, and this is discussed in Chapter 6. Member States may also agree
to undertake mutual audits of each other’s sites, either bilaterally or within groups, to
assist in the harmonisation of the results and to facilitate exchanges of knowledge and
experience. This is an issue that should be discussed within the responsibilities of the
AQUILA Group.
5.6 QA/QC Activities Linked to the Collecting and Reporting of the Measurement Data
The Competent Authority/National Reference Laboratory has to ensure that a quality
assurance/quality control process is established for the process of data collection and
reporting and that institutions appointed for this task actively participate in the related
Community-wide quality-assurance programmes, when these become available in
future.
If the National Reference Laboratory is itself responsible for the process of data
collection and reporting then QA/QC for this task is covered by the laboratory’s overall
QA/QC procedures (and strong consideration should be given to the requirement to
accredit this to EN ISO 17025). However, if another institution is responsible for this
task, then the Competent Authority/National Reference Laboratory is required to
ensure that the QA/QC procedures of that institution are satisfactory. The procedure
described in the Sections above should then apply. The Competent Authority/National
Reference Laboratory needs to ensure that the institute responsible for the process of
data collection and reporting participates in the related Community-wide quality
assurance programmes, as far as is relevant.
There is a CEN requirement in the standards covering the EC Reference Methods to
test the telemetry system, and this provides assurance of the integrity of transmission
of data from the monitoring site to a central database.
As given above in Section 5.3.3, the procedures for reporting data are currently being
developed by the EU Data Exchange Group. The Competent Authority/National
Reference Laboratory is required to ensure that the institute that has been given
responsibility for data reporting within the Member State follows these procedures.
5.7 Demonstration of Equivalence
The Competent Authority/National Reference Laboratory shall also coordinate, on the
national level, the appropriate realization of Reference Methods and the demonstration
of equivalence of non-reference methods. Realization of the reference methods for the
regulated pollutants is covered in the CEN standards, and is summarized above in this
Chapter and elsewhere in this document.
Procedures for the demonstration of equivalence are described in Annex VI of Directive 2008/50/EC, and in the EU guidance document - Demonstration of Equivalence of Ambient Air Monitoring Methods (Reference 4).
In undertaking a Demonstration of Equivalence, Member States may draw upon
results determined in other Member States, provided that these are appropriate to the
conditions found in their territory. A report of the demonstration of equivalence must
be prepared and provided to the Commission.
There are a number of issues related to Demonstration of Equivalence and Equivalent
Methods that are not yet fully detailed at present. These are seen to include:
- Dealing with updates to equivalent or to reference methods;
- Details of the use of methods for indicative measurements where these do not fulfil the requirements of equivalence to the reference methods;
- Where equivalence is only demonstrated at one (of several) EU pollutant limit values;
- The requirement to demonstrate that the equivalent method for PM monitoring remains so over its lifetime of operation in the field;
- Centralisation of information on equivalence testing at the EU level.
Many of these issues are dealt with in the revision of the EC Guidance on
Equivalence published in June 2009 (Reference 4), and others will be resolved as
experience grows - with experimental activities carried out in future for demonstrating
equivalence and using equivalent methods. The AQUILA Group provides a suitable
vehicle for collecting such experiences and providing inputs to any further revisions of
the EU Guidance document and/or the related CEN standard(s).
5.9 Inter-laboratory Comparisons at a National Level
5.9.1 Background to the Requirements
The importance of inter-laboratory comparisons (ILC) at a national level is
predominantly dependent on the structure of ambient air monitoring in the respective
Member State (MS).
If a MS has only one national network – probably operated by the National Reference
Laboratory – ILCs at a national level should not be necessary.
However, in many MSs a number of different air-quality monitoring networks exist,
operated by different ‘Länder’ (e.g. Austria, Germany), by different regions (e.g. Spain,
France), or by local authorities (UK - although these are not reported to the EC). In
these cases, inter-laboratory comparisons are an important element in order to ensure
comparable results of these networks, and the comparability of all these results to
those obtained by the NRL, and suitable ILCs are therefore strongly recommended,
particularly where the data is used for reporting under the EU Directives’ requirements.
5.9.2 Types of Inter-Laboratory Comparisons
ILCs can be realized in various ways:
1. Selected samples are sent round between participants as in normal round-robin
tests, e.g. using gas cylinders containing inorganic or organic gases or gas mixtures
with known stable concentrations, or particulate-loaded filters for the analysis of
constituents, etc.
2. Inter-laboratory comparisons can also be carried out at one central laboratory by
means of special devices for the generation of test gases. For example, the
German air pollution monitoring networks come together once a year for an ILC at
the ‘finca’ (facility for inter-laboratory comparisons and analyses) operated by the
Landesumweltamt in Essen (LANUV NRW). A number of other MSs use this facility,
for example the Austrian networks operated by their Länder.
The technique mentioned above is predominantly suited for gases, and not for
particulate matter (PM). ILCs for PM and its constituents are normally implemented
through parallel measurements in ambient air. However, such experiments are
costly and time-consuming. In Germany, such an extensive campaign was carried
out by the German Länder in Wiesbaden (2003), organized by the German NRL.
3. Another scheme of operating an ILC has been used for several years in France.
This scheme is called ‘Interlabs’ and is organised by INERIS, as part of the French
NRL LCSQA. This involves bringing a set of at least 15 mobile stations together on
one site measuring real ambient atmospheric pollutants, in order to allow for the
possibility of having a robust statistical treatment of data. Data treatment is based
on the ISO 5725 standard, which leads to the determination of:
• the reproducibility confidence interval associated with any measurement provided
by one of the participants;
• the repeatability confidence interval for participants that use at least two
measuring systems.
These two characteristics may be compared to the expanded uncertainty provided
by the GUM approach.
For some pollutants, however, involvement in this type of field campaign may be
limited due to the fact that concentrations in the ambient air may be small and not
very variable. Furthermore, the uniformity of the atmospheric concentrations across
the measurement area may not be guaranteed, and this may therefore cause
apparent differences between the measurement results of the different laboratories.
In order to solve these problems, a specific gas supply system has been developed
and used together with the group of mobile vans. This is designed to guarantee
identical characteristics of the sampled air provided to the participants (including the
same concentrations of pollutants and the same residence time in the sampling
systems for all participants) using the fact that this air is "spiked" by a controlled
addition of pollutants. This system allows for the possibility of measuring over a
large range of concentrations with an actual and representative matrix, and thereby
to obtain a realistic estimation of uncertainty of the field measurements.
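As background for readers less familiar with the ISO 5725 treatment referred to above, the short Python sketch below shows one way repeatability and reproducibility standard deviations can be estimated from a balanced set of parallel measurements. It is only an illustration: the laboratory identifiers and values are invented, and it is not taken from any NRL's actual data treatment.

import statistics as st

# Illustrative replicate results (µg/m³) from parallel measurements of the same air;
# the laboratory identifiers and numbers are invented for the example.
results = {
    "lab_A": [21.3, 21.8, 20.9],
    "lab_B": [22.5, 23.1, 22.8],
    "lab_C": [20.4, 20.9, 20.6],
}

p = len(results)                          # number of laboratories
n = len(next(iter(results.values())))     # replicates per laboratory (balanced design)

cell_means = [st.mean(v) for v in results.values()]
within_vars = [st.variance(v) for v in results.values()]

s_r2 = sum(within_vars) / p           # repeatability variance (pooled within-laboratory)
s_d2 = st.variance(cell_means)        # variance of the laboratory means
s_L2 = max(s_d2 - s_r2 / n, 0.0)      # between-laboratory variance component
s_R2 = s_L2 + s_r2                    # reproducibility variance

print(f"repeatability s_r   = {s_r2 ** 0.5:.2f} µg/m³")
print(f"reproducibility s_R = {s_R2 ** 0.5:.2f} µg/m³")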
It should be emphasized that ILCs at a national level shall never be used as a
replacement for the establishment of traceability to national or international standards.
On the other hand, such comparisons are an effective tool for checking if traceability is
realized validly in practice, to detect systematic errors of participants, and also to
enhance the knowledge base by mutual exchange of information.
5.9.3 Evaluation of the Results of the ILC at a National Level
There are various statistical models for the evaluation of round-robin tests or inter-laboratory comparisons which cannot be discussed here in detail. In many cases, the
z-score method or En numbers are used (which are very similar in approach). In any
case, two basic specifications are needed:
• The reference value (‘true value’) of the known concentration artefact and its
measurement uncertainty: in the absence of a commonly accepted
value, the average or median of the participants’ results may be used as the
reference. For an ILC at a national level it is recommended that the reference value
and its uncertainty are established by the responsible NRL with traceability to
recognised national standards.
• The quality requirement: generally, the standard deviation (or multiples of it) is
used to define a quality requirement. The disadvantage of such an approach
is that this data quality criterion is dependent on the specific experiment carried out
and also on the proficiency of the participants and their calibration methods.
For air quality measurements that are required to be carried out under the
European Directives’ data quality objectives (DQOs), the measurement uncertainty
requirements are defined and have to be met in routine/regular measurements.
Therefore, the data quality requirements in the relevant ILC must also be linked to
the EU-DQO. This can be illustrated by the following example:
In Germany, private laboratories may be employed when these are notified by the
German states ('Länder') to perform air pollution measurements according to the
German Federal Air Pollution Control Act. In this framework of QA/QC measures,
these laboratories in Germany must participate in inter-laboratory comparisons for
sulphur dioxide, nitrogen dioxide, and benzene every three years. These round robin
tests are evaluated according to the so-called z-score method, based on ISO Guide 43-1, as follows:

z = (x − X) / σ

where
x: value of participant
X: reference ('true') value
σ: required precision
The z-values are categorised in three classes:
class 1: |z| ≤ 2        ('good')
class 2: 2 < |z| < 3    ('questionable')
class 3: |z| ≥ 3        ('poor')
The reference value (X) is generally determined and provided by the relevant national
reference laboratory. The required precision (σ) is based on the data quality objectives
of the relevant EU directive.
However, inter-laboratory comparisons take place within more or less ideal conditions,
since compared with field conditions, only some components of all the sources of the
overall measurement uncertainty are included. Examples of uncertainty sources not
considered in inter-laboratory comparisons may be:
• interference from humidity and other compounds;
• variations in the temperature of sample air;
• dependence of the results on sampled air pressure;
• long-term method drift.
Because of these considerations it was decided that the expanded uncertainty of the
participants’ values Ulab shall be below 50% of the data quality objectives, which
means:
• 7.5 % for SO2 and NO2
• 12.5 % for benzene
The inter-comparisons are then evaluated according to the following scheme:
Urequ = √(Ulab² + Uref²) = 2σ

with
Urequ: the maximum allowed deviation from the reference value;
Uref: the expanded uncertainty of the reference value given by LANUV;
Ulab: the (acceptable) expanded uncertainty of a participant’s result.

Under practical conditions in a round-robin test, test gases will be offered over a range
of concentrations. To overcome analyser performance limitations at low concentrations,
Ulab is replaced by U0 when Ulab < U0:
• SO2: 5 µg/m³
• NO2: 4 µg/m³
• benzene: 0.5 µg/m³
U0 is the 'uncertainty near zero', taken as a best estimate of 10 % of the annual limit value (for SO2: the national limit value).
Evaluation of the inter-laboratory comparison:
• Three gas concentrations are offered for each compound;
• A laboratory is judged to pass the tests with:
o 3 results with |z| ≤ 2 ('good'), or
o 2 results with |z| ≤ 2 ('good') and 1 result with 2 < |z| < 3 ('questionable').
Since 2004 this evaluation scheme has been routinely used and widely accepted for
the evaluation of inter-laboratory comparisons for private laboratories in Germany.
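Purely as an illustration, the following Python sketch reproduces the logic of the scheme described above: σ is derived from Urequ = √(Ulab² + Uref²) = 2σ, Ulab is floored at U0 near zero, each result is classified by its z-value, and the pass rule over the three concentration levels is applied. All numerical inputs in the example call are invented, and the function names are not taken from any real software.

import math

# Half of the EU data quality objective, as relative expanded uncertainty (see above)
U_LAB_REL = {"SO2": 0.075, "NO2": 0.075, "benzene": 0.125}
# 'Uncertainty near zero' U0 in µg/m³, which replaces Ulab at low concentrations
U_ZERO = {"SO2": 5.0, "NO2": 4.0, "benzene": 0.5}

def required_sigma(pollutant, x_ref, u_ref):
    """sigma from Urequ = sqrt(Ulab² + Uref²) = 2·sigma, with Ulab floored at U0."""
    u_lab = max(U_LAB_REL[pollutant] * x_ref, U_ZERO[pollutant])
    return math.sqrt(u_lab ** 2 + u_ref ** 2) / 2.0

def z_class(z):
    """Classify a z-value into the three classes used above."""
    if abs(z) <= 2:
        return "good"
    return "questionable" if abs(z) < 3 else "poor"

def evaluate(pollutant, runs):
    """runs: list of (participant value, reference value X, Uref), all in µg/m³."""
    classes = [z_class((x - x_ref) / required_sigma(pollutant, x_ref, u_ref))
               for x, x_ref, u_ref in runs]
    good = classes.count("good")
    passed = good == 3 or (good == 2 and "questionable" in classes)
    return classes, passed

# Invented NO2 round-robin with the three concentration levels offered
print(evaluate("NO2", [(52.0, 50.0, 1.5), (101.0, 98.0, 2.0), (24.0, 21.5, 1.0)]))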
5.9.4 Links to ILCs at a European Level
Where ILCs are organised at a national level in the MSs, it is important that they are
carried out rigorously, and are evaluated in a comparable way, as outlined above.
These procedures should in turn be linked and be consistent with those carried out at
a European level (see Chapter 6).
5.10 Examples of Specific Recommendations for NRLs at a National Level
There are a wide range of recommendations that should be given to NRLs concerning
all the technical activities discussed in the above Chapter at a national level, in order to
assist them in meeting their very broad ranging responsibilities. Examples of these are
given below:
1. The number, distribution, and locations, of all the monitoring stations in the
monitoring networks must conform to the requirements of the relevant directive;
2. All new analysers purchased for networks used for demonstrating compliance with
the Directive must be type-tested and approved to the comprehensive
specifications in the appropriate CEN standard – all analysers used in monitoring
networks need to be replaced with type-approved analysers within five years of the
implementation of Directive 2008/50/EC
3. All of the CEN QA/QC procedures must be introduced into the monitoring networks
4. Data uncertainties must be known for every analyser, every year; these should be
calculated using the CEN methodology. Development of semi-automated systems
of performance evaluation and calculation of uncertainty based on known
procedures, calibration results, etc., is recommended for effective processing and
report generation.
5. The appropriate accreditation for all the relevant technical activities, as discussed
in Chapter 4 of this document must be obtained, that conforms to the requirements
of the EN ISO 17025 standard;
6. The NRL must participate in the appropriate EU Community-wide quality
assurance programme(s).
7. The technical procedures and all the quality issues must be reviewed on a regular
basis in order to adapt and improve these.
8. Ensure that satisfactory documented quality-assurance procedures are in place for
all institutions operating networks and individual stations within the Member State;
9. Define and carry out appropriate technical audits of the measurements at all the
network stations, on an intermittent basis, with a frequency that is defined by the
quality of the results obtained;
10. Where equivalent analysers are used, ensure that the correct procedures have
been followed for the demonstration of equivalence and that a report has been
supplied to the Commission for approval;
11. Ensure that all data reported to the Commission comply with all the Directive’s
requirements and the recommendations of the Data Exchange Group, unless the
competent authority delegates this to another body.
Chapter 6: Intercomparisons Carried Out By the EC’s Joint
Research Centre, Italy
6.1 Overview
As outlined in Chapter 2.7.2, the EC Joint Research Centre’s Institute for Environment
and Sustainability, European Reference Laboratory for Air Pollution (ERLAP), Ispra,
Italy, is the responsible body of the European Commission for the organisation of
intercomparison exercises (IEs) and similar activities for the ambient-air pollutants at a
European level that are regulated by air quality legislation. This is required by Directive
2008/50/EC where Article 1 requires the assessment of air quality on the basis of
common methods and criteria, Article 3 describes European Community-wide quality
assurance programmes, and Article 8 discusses the requirement to use Reference
Measurement Methods or equivalent, and provides criteria for these. Details on the
background, the evaluation (z-score, En number, assigned value) and procedure for
the IEs of “classical” gaseous pollutants are given in Annex 1 of this document. This
Chapter will give examples of results achieved during IEs for inorganic gaseous
pollutants, organics (BTX and VOC precursors), particulate matter, and its
constituents.
6.2 Examples for Inorganic Gaseous Compounds: Ozone, Carbon
Monoxide, Nitrogen Oxides, and Sulphur Dioxide
A number of IEs have been organised since the 1990s for inorganic gaseous
compounds such as NO2. Recently, during the two years 2007/2008, four IEs have
been organised and the results can be summarised as follows: in terms of criteria
imposed by the European Commission, 30% to 50% of the results reported by AQUILA
laboratories were good both in terms of measured values and reported uncertainties.
Another 20% to 50% of the results had good measured values, but the reported
uncertainties were either too small or too high. The comparability during the past two
years of results among AQUILA participants can be considered as reasonable for the
O3 and CO measurement methods. However, the results obtained for the pollutant NO2
and also, though less frequently, SO2, need further improvement. Several EUR reports
with detailed discussions and results are available on the ERLAP server:
ftp://ipscftp.jrc.it/erlap/ERLAPDownload.htm
[Figure 6.1: reported SO2 values (nmol/mol) for each participating laboratory]
Figure 6.1 Example of one run of SO2 at a concentration level of about 45 ppb
(~ 130 µg/m3) during an IE in 2007 with 15 participating laboratories
[Figure 6.2: reported CO values (µmol/mol) for each participating laboratory]
Figure 6.2 Example of one run of CO at a concentration level of about 8 ppm
(~ 10 mg/m3) during an IE in 2007 with 15 participating laboratories
[Figure 6.3: reported O3 values (nmol/mol) for each participating laboratory]
Figure 6.3 Example of one test with ozone at a concentration level of about 60
ppb (~ 120 µg/m3) during an IE in 2007 with 15 participating laboratories
[Figure 6.4: reported NO2 values (nmol/mol) for each participating laboratory]
Figure 6.4 Example of one test with NO2 at a concentration level of about 13 ppb
(~ 30 µg/m3) during an IE in 2007 with 15 participating laboratories
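The approximate mass concentrations quoted in the captions above follow from the usual conversion between mole fraction and mass concentration. The minimal Python sketch below assumes the reference conditions of 293 K and 101.325 kPa used for gaseous pollutants under the Directive, together with rounded molar masses; since the caption values are themselves rounded, only approximate agreement should be expected.

# Convert a mole fraction in nmol/mol (ppb) to µg/m³ at 20 °C and 101.325 kPa.
R = 8.314462                                  # kPa·L/(mol·K)
MOLAR_VOLUME_L = R * 293.15 / 101.325         # ≈ 24.06 L/mol

MOLAR_MASS_G = {"SO2": 64.07, "CO": 28.01, "O3": 48.00, "NO2": 46.01}   # g/mol

def ppb_to_ug_m3(pollutant, ppb):
    return ppb * MOLAR_MASS_G[pollutant] / MOLAR_VOLUME_L

for pollutant, ppb in [("SO2", 45), ("CO", 8000), ("O3", 60), ("NO2", 13)]:
    print(f"{pollutant}: {ppb} nmol/mol ≈ {ppb_to_ug_m3(pollutant, ppb):.0f} µg/m³")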
6.3 Examples for Organic Gaseous Compounds: BTEX and VOC Ozone Precursors
6.3.1 BTEX
A first EC-JRC aromatic species (benzene, toluene, ethylbenzene, xylenes – BTEX)
inter-laboratory comparison using automatic analysers took place at the JRC Ispra
facility in 2005 [EUR 22523 EN], and the second BTEX comparison took place in 2008.
During the second BTEX IE for benzene, the average reproducibility standard
deviation for the exercise (about 18 %) is in contrast with the relatively low repeatability
standard deviation (1.4 %). This lack of robustness (γ ~ 17) is an indication of the need
for traceability in the calibration process and also for improvements in the instruments’
linearities.
Benzene reproducibility standard deviation values of about 15 % at the limit value (5
µg/m3) are very close to the reproducibility value obtained in the last intercomparison
(12.5%). Considering the higher associated uncertainty due to the limited number of
participating laboratories, no changes in the performance of the method can be
derived from this intercomparison with respect to the previous exercise.
The z’-score has proved to be an appropriate criterion for the evaluation of laboratory
performance when a limited number of participating laboratories are involved.
Nevertheless, the calculation given in Document N37 (Annex 1) for the reproducibility
standard deviation for proficiency assessment seems to be very restrictive for the
method. In spite of that, almost half of the participants passed the Z’-score test given in
N37 for benzene measurement proficiency testing.
Further harmonisation actions need to be implemented in order to obtain
reproducibility values that can satisfy the N37 criterion for benzene. Otherwise, the
proposed N37 standard deviation may need to be reconsidered in order to fit into the
methods’ performance limits.
Table 6.1 Average repeatability, reproducibility and gamma values of the
exercise – gamma values higher than two indicate a lack of robustness.

                 Repeatability, %   Reproducibility, %    γ
Benzene               1.4                17.8            17.2
Toluene               1.8                10.0             7.1
ethyl-benzene         2.2                 9.7             6.1
mp-xylene             4.2                 8.0             2.1
o-xylene              3.1                16.5             6.7
Detailed results of the BTEX IE are published as EUR 23792.
6.3.2 VOC Ozone Precursors
Between May 2007 and February 2008, cylinders from two mixtures (synthetic and
ambient air) circulated amongst 18 AQUILA laboratories to evaluate their performance
in the analysis of the 30 volatile organic compounds listed in an Annex of Directive
2008/50/EC.
The concentration of the compared mixtures was about 5 and 1 ppb per compound
in the synthetic and ambient air mixtures, respectively. Laboratories were asked to carry
out five independent measurements for each mixture and provide values and
associated uncertainties. A full description of the analytical method used by each
laboratory and their calculations for the estimation of the uncertainties was also
included in the report.
The results were finally expressed in terms of bias with respect to the reference value
and the En number of the measurement. In general, analytical uncertainties were
higher for the lower concentration mixture in comparison to the higher one. For the
synthetic mixture, half of the laboratories reported overall expanded uncertainties
better than 10 %, whilst for the ambient air mixture, the same percentage of
laboratories showed only values better than 20 %.
The heaviest hydrocarbons (1,2,3-, 1,2,4- and 1,3,5-trimethyl benzene, m,p- and o-xylene,
ethyl-benzene and benzene) and those more reactive and lighter (like 1,3 butadiene,
isoprene, acetylene and ethene) were analysed with more difficulty in comparison to
the rest of the compounds.
[Figure 6.5: radar plot of the median expanded uncertainties (Ulab, %) per compound for the synthetic and ambient air mixtures]
Figure 6.5 Median values for the associated expanded uncertainties of the
reported concentrations of the synthetic and ambient air mixtures
[Figure 6.6: median bias (%) per compound with respect to the reference value, for the synthetic and ambient air mixtures]
Figure 6.6 Median bias with respect to the reference value
6.4 Examples for PM Constituents: Heavy Metals and EC/OC
6.4.1 Heavy Metals
In 2007 the Joint Research Centre (JRC) carried out a first IE for the determination of
heavy metals in particulate matter (PM10). This IE focussed on lead (Pb), arsenic (As),
nickel (Ni) and cadmium (Cd), the heavy metals regulated by the 1st and 4th Daughter
Directives for Air Pollution. Copper (Cu), chromium (Cr) and zinc (Zn), the elements
included in the EMEP programme together with aluminium (Al), cobalt (Co), iron (Fe),
manganese (Mn) and vanadium (V), were also tested. Fourteen laboratories, generally
members of the Network of Air Quality Reference Laboratories (AQUILA), participated
in this IE. The participants mainly used microwave digestion with nitric acid and
hydrogen peroxide and Inductively Coupled Plasma Mass Spectrometry (ICP-MS) or
Graphite Furnace Atomic Absorption Spectrometry (GF-AAS) for analyses as
recommended in the reference method (EN 14902). However, a few participants used
other methods: Energy Dispersive X-ray Fluorescence (EDXRF), Inductively Coupled Plasma Optical Emission
Spectrometry (ICP-OES) and Voltammetry for analysis, while digestion was performed
by a number of methods - vaporisation on a hot plate before microwave digestion,
Soxhlet extraction, high pressure or cold hydrogen fluoride methods.
Each participant received 5 samples to be analysed: (1) a liquid sample prepared by
dilution of a Certified Reference Material (CRM), (2) a solution of a dust CRM sample
digested by the JRC13F, (3) a sub-sample of a dust CRM that each participating
laboratory had to digest and analyse, (4) a solution prepared by JRC after digestion of
an exposed filter and (5) a pair of filters (one blank filter and one exposed filter) to be
digested and analysed by each participant.
For 89 % of all types of samples, the Data Quality Objectives (DQOs) of the 1st and 4th
European Directives (uncertainty of 25 % for Pb and 40 % for As, Cd and Ni) were
met. Altogether, this is a good score. The best results were obtained for the liquid
CRM, a particulate CRM digested by JRC, a particulate CRM and a filter digested by
JRC with 92%, 90%, 96% and 93 % of DQOs being met respectively. It was found that
the DQOs were not met if the difference of acidity between test samples and
participants’ calibration standards was high.
Conversely, only 76 % of DQOs were met for the filter to be digested by each
participant with about 85 % for Cd and Ni, 73% for Pb and 64 % for As, the latter
element being the most difficult to determine. The worst results were associated with
special events: an explosion due to overpressure in the microwave oven during digestion for
two participants, a wrong dilution factor used by one participant, and a large
contamination in the blank filter for another participant. Of the two explosions,
one was probably the effect of a lack of temperature control in the digestion
vessel. For the other explosion, the microwave digestion and the digestion program
advised by EN 14902 should be considered. Moreover, satisfactory results were
obtained using Soxhlet extraction, the high-pressure method and cold hydrogen
fluoride digestion methods, which are not completely covered in EN 14902. The DQOs
of As and Cd could not be met with EDXRF, the limit of detection of which was too
high for these two elements, and for Cd using Voltammetry which suffered from a
strong interference for this element.
Regarding the methods of analysis, apart from the points mentioned above about
EDXRF and Voltammetry, good results were observed using ICP-OES for Cd, Ni and
Pb. A few discrepancies were also registered for GF-AAS and ICP-MS but they were
created by the special events or the acidity problem mentioned above. This shows that
even though GF-AAS and ICP-MS are found suitable, the implementation made by
each participant may be responsible for important mistakes.
The results of the IE showed that for 77 % of the analyses, the uncertainty of
measurements estimated by participants was consistent with the differences between
the participant results and the reference values of the test samples. On average,
participants claimed uncertainties that were consistent with the DQOs: about 10 % for
Pb and between 15% and 20 % for As, Cd and Ni. The discrepancies were mainly
produced either by some participants underestimating their uncertainty of
measurements or by the explosions, a wrong dilution factor, contamination, high limit
of detection, or the interference mentioned above.
Once these special events are discarded, the reproducibility for all samples and
participants shows values between 41 % and 54 %. These figures are consistent with
the DQOs if one takes into consideration that reproducibility should be compared to √2
of the DQOs. The repeatability remains between 5% and 12 % without much
difference according to the sample type. Only the analysis of As on filter gave a higher
repeatability of up to 20 %. The reproducibility was higher than the repeatability.
Furthermore, for a majority of participants the between day variability, determined on
three different days with different calibration, was higher than the within-day variability
of measurements. These two observations suggest that it should be still possible to
improve the quality of measurements by implementing more stringent procedures of
quality control.
More detailed discussions and results can be found in EUR 23219 EN – 2008.
[Figure 6.7: z'-scores per laboratory (NRL0–NRL16) for As, Cd, Ni and Pb in each of the test samples S1–S5, with the analytical method used (ED-XRF, ICP-MS, GF-AAS, ICP-OES or Voltammetry) indicated for each laboratory]
Fig 6.7 Summary of z-score results
6.4.2 EC/OC
A first pilot study carried out amongst AQUILA members on TC, EC and OC (total,
elemental and organic carbon) took place in 2008/2009. The TC reproducibility reached 5 %, whereas the EC
reproducibility was only about 30% among the 16 expert laboratories from AQUILA.
The laboratories implemented different protocols such as NIOSH and EUSAAR. The
graph below shows a z-score evaluation (assuming a data quality objective of 40 %)
obtained for the different laboratories on 13 different samples. (Z-scores above 3 are
considered as unsatisfactory.)
[Figure 6.8: z-scores for elemental carbon (EC) per laboratory (including LCSQA/INERIS, NPL, ERLAP, EMPA, JRC, UBA, NERI and others) on samples S1–S13]
Fig 6.8 Draft results for z-scores for an EC comparison
6.5 Example of PM10 and PM2.5 comparisons
For the purpose of harmonizing PM measurement methods the European
Commission’s Joint Research Centre and the AQUILA Network of National Air Quality
Reference Laboratories have organized a PM quality assurance campaign in Europe.
From 2006 until 2009, 18 measurement campaigns were carried out in the
European Member States by means of the JRC mobile laboratory, performing parallel
measurements alongside the Member State’s National Reference Laboratory and a routine
monitoring station. The JRC mobile PM laboratory has been equipped with reference
samplers for PM10, PM2.5, PM1, an optical particle counter, a continuous PM10
instrument (TEOM FDMS) and a semi-continuous elemental and organic carbon
analyzer. Results of the comparability of PM10 measurements in 16 of the 18 visited
countries are shown below. More detailed results will be published during 2009/2010.
[Figure 6.9: percentage deviation of 24-h average PM10 results from the ERLAP reference value, plotted over time for all laboratories]
Fig 6.9 Deviation from ERLAP reference PM10 value for 16 campaigns
References
1. International Vocabulary of metrology – Basic and general concepts and
associated Terms (VIM), JCGM 200:2008 (see www.bipm.org)
2. M.J.T. Milton and T.J. Quinn, Primary methods for the measurement of amount of
substance, Metrologia, 2001, 38, 289–296.
3. J. Viallon et al; A study of systematic biases and measurement uncertainties in
ozone mole fraction measurements with the NIST Standard Reference
Photometer; Metrologia 43 (2006) 441 – 450;
4. Demonstration of Equivalence of Ambient Air Monitoring Methods:
http://ec.europa.eu/environment/air/quality/legislation/assessment.htm
5. A.S. Brown, R.J. Brown, W.T. Corns, P.B. Stockwell, Establishing SI traceability for
measurements of mercury vapour, Analyst 133, (2008), 946 –953;
6. Organisation of Intercomparison Exercises for Gaseous Air Pollution for EU
National Air-Quality Reference Laboratories and Laboratories of the WHO Euro
Region.
7. International Key Comparison CCQM-K26a and Pilot Study CCQM P50a (NO):
http://kcdb.bipm.org/AppendixB/appbresults/ccqm-k26.a/ccqmk26.a_final_report.pdf
8. International Key Comparison CCQM-K26b and Pilot Study CCQM P50b (SO2),
http://kcdb.bipm.org/AppendixB/appbresults/ccqm-k26.b/ccqmk26.b_final_report.pdf
9. International Comparison CCQM- P28, ozone at ambient level (Pilot Study):
http://www.bipm.org/utils/common/pdf/final_reports/QM/P28/CCQM-P28.pdf
10. EC Intercomparison of VOC measurements between National Reference
Laboratories (AQUILA Network), European Commission Joint Research Centre
Report EUR 23529 – 2008;
11. EURAMET Comparison of multi-component ambient VOC measurements (886)
http://kcdb.bipm.org/AppendixB/appBresults/EUROMET.QM-S3/EUROMET.QMS3_Final_Report.pdf
12. International Comparison CCQM-K7: Benzene, Toluene, Ethylbenzene, m-Xylene
and o-Xylene in a nitrogen balance at levels higher than regulatory levels 1999:
http://kcdb.bipm.org/AppendixB/appbresults/ccqm-k7/ccqm-k7_final_report.pdf
13. International Comparison CCQM-K10: Benzene, Toluene and Xylene (BTX) in
Nitrogen at regulatory levels 2001:
http://kcdb.bipm.org/AppendixB/appbresults/ccqm-k10/ccqm-k10_final_report.pdf
14. International Comparison CCQM-K22: Volatile organic compounds in air 2003:
http://kcdb.bipm.org/AppendixB/appbresults/ccqm-k22/ccqm-k22_final_report.pdf
ANNEX 1:
ORGANISATION OF INTERCOMPARISON EXERCISES FOR GASEOUS
AIR POLLUTION FOR EU NATIONAL AIR QUALITY REFERENCE
LABORATORIES AND LABORATORIES OF THE WHO EURO REGION.
1 General
1.1 Background and objectives
The European Commission’s Joint Research Centre in Ispra, European Reference
Laboratory for Air Pollution (ERLAP) has been organising intercomparison exercises
(IEs) for European National Air Quality Reference Laboratories (NRLs) since the early
‘90s. The first of these IEs were dedicated to single pollutants, but for some years
several pollutants have been tested during each exercise. These IEs are organised with a
view to harmonizing European air quality measurements and for checking the status of
the implementation of Air Quality directives by the responsible bodies in the EU
Member States.
The World Health Organization (WHO) is carrying out similar activities, but with a
view to obtaining harmonised air quality data for health related studies, and integrating
their programme within the WHO EURO Region, which includes public health
institutes and other national institutes – especially from Central and Eastern Europe,
the Caucasus and Central Asia.
This document discusses bringing together the efforts of both these organisations, and
coordinating their activities as far as possible, in order to optimize resources to reach
greater international harmonisation.
The intercomparison exercises will thus have two purposes:
1) Quality control of air pollution measurements of the EU NRLs.
2) Harmonisation of Air Quality (AQ) measurements made by public health and
environmental institutes in the WHO EURO Region.
The NRLs, representing the EU Member States, are required to participate in the IEs.
These IEs are carried out in order to compare calibration standards and measurement
capabilities, and to facilitate exchange of technical information amongst the national
experts. The basis for the organization of these IEs is laid down in the FWD 96/62/EC,
in which Article 1 mentions the assessment of air quality on the basis of common
methods and criteria, Article 3 mentions the Community wide quality assurance
programmes organized by the EC, and Article 4 specifies criteria for reference
measurement and sampling techniques.
In the case of the WHO, laboratories in Member States of the WHO EURO Region are
invited to participate by the WHO Collaborating Centre for Air Quality Management
and Air Pollution Control, Berlin (WHO CC). In some cases these laboratories will also
be NRLs.
The objectives of this initiative are to merge these activities and therefore to:
o prevent duplication of participation,
o optimize the value of the IEs to the participants,
o ensure the comparability and accuracy of results obtained beyond the current EU
borders and
o optimize the technical capabilities of the participating laboratories.
IE procedure
These IEs are carried out according to the principles of ISO Guide 43-1 1 (1997).

1 ISO/IEC Guide 43-1:1997, Proficiency testing by interlaboratory comparisons – Part 1: Development and operation of proficiency testing schemes, ISO, Geneva, Switzerland.
Organiser and participants
The IEs are organized by the European Commission’s, DG JRC, European Reference
Laboratory for Air Pollution (ERLAP), in collaboration with the WHO European
Centre for Environment and Health (WHO/ECEH, Bonn Office) and WHO CC.
Reference documents
Registration form
Questionnaire
Reporting form
Complaint form (LAB-REC-0310)
Frequency, place, time
The IEs are usually organised twice a year. All NRLs are required to participate at least
once every three years. The same is recommended for the other laboratories.
The duration of an IE is about 3 days. However, additional time is needed for
installation, warming up and dismantling of the equipment.
Three IE facilities are currently available:
1) ERLAP
Joint Research Centre – IES, T.P. 441, I – 21020 Ispra (VA)
Contacts:
Annette Borowiak (annette.borowiak@jrc.it)
Friedrich Lagler (friedrich.lagler@jrc.it)
2) LANUV NRW, Wallneyer Str 6, D – 45133 Essen
Contacts:
Ulrich Pfeffer (ulrich.pfeffer@lanuv.nrw.de)
3) UBA, Paul-Ehrlich Str 29, D – 63225 Langen
Contacts:
Volker Stummer (volker.stummer@uba.de)
Hans-Guido Muecke (hans-guido.muecke@uba.de)
Invitation, measurements, communication, reporting and deadlines
An IE will be announced 6 months in advance to the AQUILA group and WHO CC
representative. The JRC will invite the NRLs, and the WHO CC will invite the other
laboratories. The compounds to be measured and their rough concentration levels will
be communicated in this announcement. The laboratories must confirm their interest in
participating within 6 weeks of this announcement. A selection process may then be
required - depending on the number of applicants. A formal invitation with technical
details will then be sent to the participating laboratories at the latest 2 months in
advance of the exercise.
Indicative timetable
1. JRC & WHO CC: Announcement of IE (6 months before IE)
2. NRLs and WHO contacts: Expression of interest (4.5 months before IE)
3. JRC & WHO CC: Formal invitation to selected laboratories with registration form for the actual
participants (4 months before IE)
4. Participants: Registration (3 months before IE)
5. JRC & WHO CC: Formal invitation to participants with technical details (2 months before IE)
6. Intercomparison exercise
7. Participating laboratories: Deadline for reporting results and questionnaires to JRC (0.5 month
after IE)
8. JRC: Contacting laboratories whose results were identified as statistical outliers (1.5 months after
IE)
9. Outlying laboratories: Return explanation and any potential corrected results (2 months after IE)
10. JRC: Deadline for distributing draft report to participating laboratories; corrections by participants
are no longer allowed. No anonymous treatment is foreseen. (4.5 months after IE)
11. Participating laboratories: Deadline for commenting on the draft report. Reasonable comments can
be included in the final report (5 months after IE)
12. JRC: Deadline for issuing the final report of the IE and distribution of a pdf copy to participants,
Directorate General Environment and WHO/ECEH (6 months after IE)
Measurements
The measurement methods to be used by the NRLs are those specified as reference
methods in the AQ Directives (or, alternatively, those that have formally been
recognized as equivalent). For the other laboratories, national measurement methods
may be used, but reference methods are recommended.
The participants must bring their own complete measuring equipment that is needed for
the analysis and data acquisition of the test gases, including, where possible, calibration
facilities.
Generation of test gases
It is possible that not all the compounds listed below will be tested at each IE.
At least three concentration steps will be generated per compound.
The following table indicates the concentration ranges of interest for the
intercomparison exercise (chosen as 75 % of the measurement ranges defined in the EN
standards 2):

Compound    SO2         NO          NO2         CO          O3          Benzene
Conc. min   0 µg/m³     0 µg/m³     0 µg/m³     0 mg/m³     0 µg/m³     0 µg/m³
Conc. max   750 µg/m³   900 µg/m³   375 µg/m³   75 mg/m³    400 µg/m³   50 µg/m³
EN 14212 (2005), Standard method for the measurement of the concentration of sulphur dioxide by ultraviolet fluorescence
EN 14211 (2005) Standard method for the measurement of the concentration of nitrogen dioxide and nitrogen monoxide by
chemiluminescence
EN 14625 Standard method for the measurement of the concentration of ozone by ultraviolet photometry
EN 14626 Standard method for the measurement of the concentration of carbon monoxide by nondispersive infrared spectroscopy
EN 14662-3 (2005) Ambient air quality - Standard method for measurement of benzene concentrations - Part 3: Automated
pumped sampling with in situ gas chromatography;
Each concentration level will normally be generated for a minimum of 2 hours. Shorter
durations may apply for particular studies. During an IE, the organiser may also
introduce interferences into the test gas, e.g. to check for compliance of equipment
according to the EN standards. Other tests regarding certain performance characteristics
may also be performed during the IE. These possible tests will be detailed in the
invitation to participate in the IEs.
Reporting of the measurement results
Each participating laboratory is required to deliver three 30-minute averaged values (in
nmol/mol for NO, NO2, NOx, SO2, O3 and benzene, and in μmol/mol for CO) and
their associated uncertainty (obligatory for NRLs and recommended for others) for each
compound and concentration.
Each participating laboratory is also required to complete the questionnaire on
traceability and the practices implemented concerning calibrations, measurements and
uncertainty evaluation. Further reporting requirements may be sent with the invitation.
Evaluation scheme
General
The evaluation of the results of the IE will be carried out according to the ISO Guide
43-1 and ISO 13528. Proficiency of participating laboratories will be evaluated by two
methods:
1. The z’-score method 3 will be used to demonstrate the capacity of NRLs to comply
with the uncertainty requirements for calibration gases stated in the relevant EN
standards (which are consistent with the data quality objective DQO of the
European Directives). Other criteria may apply to laboratories other than NRLs.
The z’-score will be evaluated for all participants of the IE and for all runs having a
reference value. For example the interference tests (see hereafter) may not have
reference values and therefore cannot be treated using the z’-score method.
2. The En-number method will be used to demonstrate that the difference between
participating laboratories’ results and assigned values remains within participating
laboratories’ claimed uncertainties and the uncertainty of assigned values. The Ennumbers are calculated for all participants reporting uncertainty of measurements,
this latter parameter being mandatory for NRLs.
Besides the proficiency of participating laboratories, the repeatability and reproducibility of
the standardized measurement methods will also be evaluated according to ISO 5725-2 4. This
group evaluation will be used as an indicator of the trend of the quality of
measurements from one IE to the next (ISO 13528 § 6.7).
In some IEs, tests of interference on the response of analysers will be performed. The
reporting of results of these tests is only informative and is not foreseen in this
document.
Assigned values for proficiency evaluation
Generally the measurements of ERLAP will be used as assigned values (X) of IEs. The
assigned values of tested concentration levels will be derived from a calibration against
the certified reference values of the CRMs (ISO 13528 § 5.4) and will be confirmed by
comparison to robust averages (ISO 13528 § 5.7). If ERLAP measurements fail to pass
this confirmation test, or if the IE takes place far from Ispra and ERLAP is not
able to implement its optimum measurement capability, the assigned values will be
calculated as the robust averages (ISO 13528 § 5.5) from a subset of expert NRLs.
Laboratories that participate in BIPM CCQM GAWG key comparisons and/or are
accredited with appropriately small uncertainty will be regarded as expert NRLs. For each IE
the list of participating expert NRLs will be given in step 5 of the indicative timetable (§ 4).

3 ISO 13528 (2005) Statistical methods for use in proficiency testing by interlaboratory comparisons. ISO, Geneva, Switzerland.
4 ISO 5725:1994, Accuracy (trueness and precision) of measurement methods and results – Part 2: Basic method for the determination of repeatability and reproducibility of a standard measurement method, ISO, Geneva, Switzerland.
The uncertainty of the assigned value will be calculated as combined uncertainty of the
ERLAP measurement uncertainty and the possible lack of homogeneity among the
different positions on the testing bench. However, if the assigned value is calculated
according to ISO 13528 § 5.5 instead of using ERLAP’s value, then the uncertainty
will be calculated with equation (1) (ISO 13528 § 5.5.2):

uX = (1.25 / p) · √( Σ ui² )     (1)

where p is the number of expert NRLs and ui are their standard uncertainties.
The calculation of the assigned value and its uncertainty will be documented and made
available in an annex to the report of the IE.
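As a minimal illustration of equation (1) as reconstructed above, the Python sketch below combines the standard uncertainties of a subset of expert NRLs; the number of laboratories and their uncertainties are invented.

import math

def assigned_value_uncertainty(expert_uncertainties):
    """Equation (1): u_X = (1.25 / p) * sqrt(sum of u_i²), p = number of expert NRLs."""
    p = len(expert_uncertainties)
    return 1.25 / p * math.sqrt(sum(u * u for u in expert_uncertainties))

# Invented standard uncertainties (nmol/mol) reported by four expert NRLs
print(assigned_value_uncertainty([0.8, 1.0, 0.9, 1.2]))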
z’-score
The z’-score will be calculated as follows:
z' = (xi − X) / √( σp² + uX² )     (2)

where xi is a participant’s value, X is the “assigned value”, σp is the fitness-for-purpose-based “standard deviation for proficiency assessment” and uX is the standard
uncertainty of the assigned value.
In the NO2, SO2, CO and O3 EN Standards the uncertainties for calibration gases used
in ongoing quality control are prescribed. In fact, it is stated that the maximum permitted
expanded uncertainty for calibration gases at the calibration point (75 % of the calibration
range) is 5 % and that ‘zero gas’ shall not give an instrument reading higher than the
detection limit. However, no criteria for detection limits are prescribed. As one of the
tasks of NRLs is to verify the accuracy of ‘zero gas’ and calibration gas mixtures the
‘standard deviation for proficiency assessment’ (σp) is derived in a fitness-for-purpose
manner from requirements given in the EN standards, where in place of detection limits
criteria the specifications for purity of zero gas used in type approval as defined in EN
Standards are taken. This general reference to the EN standards cannot be made for
benzene, where the measurement method used by NRLs for verification purposes can differ
from the method used at the IE. Therefore, for benzene, σp is set to 6 % at the calibration
point.
Over the whole measurement range, σp is calculated by linear interpolation between the
value at the calibration point and zero. The linear function parameters (a,b) of σp are
given in table 1. Figures 1 to 6 give the absolute and relative values of σp over the
concentration range for each compound.
Compound   σp (calibration point)   a        b            σp (zero)
           [nmol/mol]                        [nmol/mol]   [nmol/mol]
SO2        7.1                      0.0215   1            1
CO         1613                     0.0234   100          100
O3         4.7                      0.0197   1            1
NO         18.1                     0.0236   1            1
NO2        4.9                      0.0199   1            1
Benzene    0.7                      0.0566   0.04         0.04

Table 1: Standard deviation for proficiency assessment σp, with
σp [nmol/mol] = a · [Assigned value, nmol/mol] + b. The limits of detection are
derived from previous IEs.
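To show how equation (2) and the σp parameters of Table 1 combine in practice, the Python sketch below computes a z'-score for a single run; the participant value, the assigned value and its standard uncertainty are invented, and the dictionary simply transcribes the a and b parameters from Table 1.

import math

# Linear parameters from Table 1: sigma_p [nmol/mol] = a * assigned value [nmol/mol] + b
SIGMA_P_PARAMS = {
    "SO2":     (0.0215, 1.0),
    "CO":      (0.0234, 100.0),
    "O3":      (0.0197, 1.0),
    "NO":      (0.0236, 1.0),
    "NO2":     (0.0199, 1.0),
    "benzene": (0.0566, 0.04),
}

def z_prime(pollutant, x_i, assigned_value, u_assigned):
    """Equation (2): z' = (x_i - X) / sqrt(sigma_p² + u_X²), concentrations in nmol/mol."""
    a, b = SIGMA_P_PARAMS[pollutant]
    sigma_p = a * assigned_value + b
    return (x_i - assigned_value) / math.sqrt(sigma_p ** 2 + u_assigned ** 2)

# Invented run: SO2 at an assigned value of 45 nmol/mol with u_X = 0.5 nmol/mol
z = z_prime("SO2", 47.3, 45.0, 0.5)
print(f"z' = {z:.2f} -> {'satisfactory' if abs(z) <= 2 else 'questionable or unsatisfactory'}")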
[Figure 1: absolute (nmol/mol) and relative (%) values of the SO2 σp over the IE testing range of 0–300 nmol/mol]
Figure 1 Absolute and relative representation of SO2 σp over the IE testing range.
[Figure 2: absolute (µmol/mol) and relative (%) values of the CO σp over the IE testing range of 0–70 µmol/mol]
Figure 2 Absolute and relative representation of CO σp over the IE testing range.
[Figure 3: absolute (nmol/mol) and relative (%) values of the O3 σp over the IE testing range of 0–200 nmol/mol]
Figure 3 Absolute and relative representation of O3 σp over the IE testing range.
[Figure 4: absolute (nmol/mol) and relative (%) values of the NO σp over the IE testing range of 0–800 nmol/mol]
Figure 4 Absolute and relative representation of NO σp over the IE testing range.
[Figure 5: absolute (nmol/mol) and relative (%) values of the NO2 σp over the IE testing range of 0–200 nmol/mol]
Figure 5 Absolute and relative representation of NO2 σp over the IE testing range.
[Figure 6: absolute (nmol/mol) and relative (%) values of the benzene σp over the IE testing range of 0–11 nmol/mol]
Figure 6 Absolute and relative representation of benzene σp over the IE testing range.
En-number
The normalized deviations, according to ISO Guide 43-1 (ISO, 1997), will be used to
evaluate whether the differences between the results of laboratories that reported
expanded measurement uncertainty together with their measurement results and the IE
reference value remained within the stated uncertainties. The normalised deviations are
calculated using the following equation:
En = (xi − X) / √( Ux² + UX² )     (3)

where xi is the result of a participating laboratory with stated expanded uncertainty Ux,
while X is the assigned value with expanded uncertainty UX, determined according to
4.1. En values will be presented by plotting (xi – X) ± √(Ux² + UX²) for each
concentration level.
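A minimal Python sketch of equation (3), with invented numbers; the |En| ≤ 1 criterion applied at the end corresponds to the assessment rule given further below.

import math

def en_number(x_i, u_x, assigned_value, u_assigned):
    """Equation (3): En = (x_i - X) / sqrt(Ux² + UX²), using expanded (k = 2) uncertainties."""
    return (x_i - assigned_value) / math.sqrt(u_x ** 2 + u_assigned ** 2)

# Invented benzene run (nmol/mol): participant 1.55 ± 0.10, assigned value 1.48 ± 0.06
en = en_number(1.55, 0.10, 1.48, 0.06)
print(f"En = {en:.2f} -> {'within stated uncertainties' if abs(en) <= 1 else 'outside'}")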
Group evaluation/evaluation of precision of standardized measurement method
The procedure laid down in ISO 5725-2 will be implemented in order to evaluate the
repeatability and reproducibility of the measurement methods. Data consistency and
outlier tests will be performed, and the relationship between r and R and the concentration
levels of the IE test will be investigated.
Assessment
The z‘-score evaluation allows the following criteria to be used for the assessment of
the results:
o -2 ≤ z‘≤ 2 are designated satisfactory. Approximately 95 % of z-scores should fall
between –2 and +2.
o -3 ≤ z’ < -2 or 2 < z‘ ≤ 3 are designated questionable. They are expected about 1
time out of 20.
o z’ < -3 or z’ > 3 are designated unsatisfactory. Scores falling in this range are very
unusual and are taken to indicate that the cause of the event should be investigated
and remedied.
The En evaluation allows an assessment of whether the differences between participating
NRLs’ results and the assigned value remain within the assigned-value uncertainty and the NRL
uncertainty, the criterion being -1 ≤ En ≤ 1. In the bar plot representation, all results that touch or
cross the x-axis are satisfactory.
Further details concerning the standard deviation of the repeated measurements at the
same concentration will be given as additional information to each participant.
Measures
The IEs are organized in order to provide the NRLs with the possibility of comparing
their results and testing their proficiency. As the quality of the NRLs’ measurements is
connected to the data quality of the Member State, the European Commission requires
satisfactory results within the data quality objectives to be obtained. If an NRL obtains
overall unsatisfactory results in the z’-score evaluation (one unsatisfactory or two
questionable results per parameter (ISO 13528 § 7.4.2)), the EC requires it to repeat
participation in the next IE in order to demonstrate remediation measures. If an NRL fails
to participate in the IEs for more than 3 consecutive years, the JRC will inform
DG Environment.
Complaints
Complaints should be sent in writing to the organiser of the IE within 5 months of its completion
(annette.borowiak@jrc.it or friedrich.lagler@jrc.it).
Costs
The costs of participation by the NRLs in the IEs will be covered by the NRLs
themselves. Other laboratories may apply to the WHO CC or the JRC for financial
assistance.
ANNEX 2:
Background: The Requirements for Quality Systems from the Previous
Directive 1996/62/EC
The original requirements for implementing quality assurance procedures for ambient
air quality monitoring at an EU Member State level were specified in “Council directive
96/62/EC on ambient air quality assessment and management” (the so-called
‘Framework directive’). Article 3 of this directive requires that:
“The Member States shall designate at appropriate levels the competent authorities
and bodies responsible for, for example, ensuring the accuracy of measurement by
measuring devices and checking the maintenance of the accuracy of such devices, by
internal quality controls carried out in accordance, inter alia, with the requirements of
European quality assurance standards”.
The Framework directive’s requirements as to which ‘European quality assurance
standard(s)’ should be used, however, were not made explicit. At the time that this
directive was published in 1996, there were two types of such quality assurance (QA)
standards in existence:
1. Standards for the implementation of overall quality systems in organisations developed by the International Standardization Organisation (ISO), particularly
those in the ISO 9000 series of standards – initially ISO 9001:1994 & ISO
9002:1994, which were then both superseded by ISO 9001:2000. These standards
apply to the general and overarching quality system required within a particular
organisation, so as to ensure the ongoing quality, acceptability, and consistency of
the products and/or services that the organisation is involved with. These
activities may cover, for example, a particular manufacturing process, or a specific
type of non-technical service to customers. These standards are not intended to
cover in any level of detail the scientific, technical, and experimental activities of that
organisation, or to provide guidance on, for example, how measurements may be
carried out, or how the uncertainty of such measurements should be determined by
the organisation.
2. Standards developed initially at a national level in many countries to ensure that
specific technical and experimental activities, particularly those that are
measurement related, are carried out in a consistent, quality-controlled manner – so
as to ensure that the end result is delivered with a specified quality, and thus with a
defined and required accuracy (and hence measurement uncertainty) for all the
results produced. Such standards were subsequently produced at a European level by the
European standardization body, the European Committee for Standardization
(Comité Européen de Normalisation, known as CEN), and published originally
as the EN 45000 series of standards.
There was, however, no overall consensus for some time on which of the two types of
quality assurance standards the Framework directive intended to refer to. It could be
argued that the choice of the standard was implied, since the Framework directive
refers to measurements and to technical activities and to European standards.
From this it could be judged that the EN 45000 series was the most appropriate.
Additionally, there was some discussion as to how the quality assurance standards
should be implemented for the NRLs and other related laboratories, since this was
also not made explicit within the Framework directive. For example:
- Could the performance of a laboratory with respect to the quality assurance
standard, with its associated procedures and systems, be self-assessed by the
laboratory concerned?
- Or should the quality assurance standard and its procedures and systems, as
established by the laboratory, be assessed by an independent
body such as a national accreditation body or similar?
These issues were not fully resolved. The subsequent activities discussed in Chapter 3
made this unnecessary.