Pushing forward with ASPIRE
A System for Product Improvement, Review and Evaluation
Heather Bergdahl, Paul Biemer, Dennis Trewin
Q2014
Background
• Quality reporting to the Ministry of Finance
• Request for quantitative and objective measures of quality changes in statistical products
• External evaluators to ensure objectivity
• Focus on Accuracy, but other quality components are possible
• Ten key statistical products
• Inspiration for improvement work
Sources of error by product
Survey products – Foreign Trade of Goods Survey (FTG), Labour Force Survey (LFS), Annual Municipal Accounts (RS), Structural Business Survey (SBS), Consumer Price Index (CPI), Living Conditions Survey (ULF/SILC):
• Specification error
• Frame error
• Nonresponse error
• Measurement error
• Data processing error
• Sampling error
• Model/estimation error
• Revision error

Registers – Business Register (BR), Total Population Register (TPR):
• Specification error
• Frame error: overcoverage, undercoverage, duplication
• Missing data
• Content error

Compilations – Quarterly Gross Domestic Product (GDP), Annual GDP:
• Input data error
• Compilation error
• Data processing error
• Modelling error
• Balancing error
• Revision error
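If one wanted to carry this taxonomy into a tool, a plain mapping from product type to error sources would suffice. The sketch below only re-encodes the table above; the variable name and dictionary layout are illustrative choices, not part of ASPIRE.

```python
# Error-source taxonomy from the slide, encoded as a plain mapping.
# The name ERROR_SOURCES_BY_PRODUCT_TYPE is illustrative only.
ERROR_SOURCES_BY_PRODUCT_TYPE = {
    "Survey products": [        # FTG, LFS, RS, SBS, CPI, ULF/SILC
        "Specification error", "Frame error", "Nonresponse error",
        "Measurement error", "Data processing error", "Sampling error",
        "Model/estimation error", "Revision error",
    ],
    "Registers": [              # BR, TPR
        "Specification error",
        "Frame error (overcoverage, undercoverage, duplication)",
        "Missing data", "Content error",
    ],
    "Compilations": [           # Quarterly GDP, Annual GDP
        "Input data error", "Compilation error", "Data processing error",
        "Modelling error", "Balancing error", "Revision error",
    ],
}
```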
Quality criteria
• Knowledge (of the producers of statistics) of the risks affecting data quality for each error source,
• Communication of these risks to the users and suppliers of data and information,
• Available expertise to deal with these risks (in areas such as methodology, measurement or IT),
• Compliance with appropriate standards and best practices relevant to the given error source, and
• Plans and achievements for mitigating the risks.
Guidelines / Checklists
Example: guidelines for the criterion Knowledge of Risks, and the conversion of each guideline to a checklist item for the levels "Good" and "Very Good".

Guideline for "Good": Some work has been done to assess the potential impact of the error source on data quality, but evaluations have only considered proxy measures (e.g. error rates) of the impact, with no evaluations of MSE components.
Checklist item: Reports exist that gauge the impact of the error source on data quality using proxy measures (e.g. error rates, missing data rates, qualitative measures of error, etc.) – Yes or No. Yes is required to achieve the level of "Good".

Guideline for "Very Good": Studies have estimated the relevant bias and variance components associated with the error source and these are well documented, but the studies have not explored the implications of the errors for various types of data analysis, including subgroup, trend and multivariate analyses.
Checklist item: At least one component of the total MSE (bias and variance) of the key estimates that is most relevant for the error source has been estimated and is documented – Yes or No. Yes is required to achieve the level of "Very Good".
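As context for the checklist wording, the "components of the total MSE" are those of the standard decomposition of an estimator's mean squared error into squared bias and variance:

\[
\mathrm{MSE}(\hat{\theta}) \;=\; \mathbb{E}\!\left[(\hat{\theta} - \theta)^{2}\right] \;=\; \mathrm{Bias}(\hat{\theta})^{2} + \mathrm{Var}(\hat{\theta})
\]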
The review process
1. Self-assessment and documentation sent to evaluators
2. Quality interview:
   • discussion of notable changes,
   • review of quality declarations,
   • progress made on recommendations,
   • assignment of preliminary ratings using the checklists,
   • review of assigned ratings, discussion of results, and
   • recommendations for improvement
3. Control, feedback, possible correction and finalising of ratings
4. Process repeated annually
Results – Labour Force Survey
Accuracy (control for error sources):

Error source              Avg score round 2   Avg score round 3   Risk to data quality
Specification error       70                  70                  L
Frame error               58                  58                  L
Non-response error        52                  52                  H
Measurement error         56                  68                  H
Data processing error     62                  62                  M
Sampling error            78                  80                  M
Model/estimation error    60                  64                  M
Revision error            N/A                 N/A                 N/A
Total score               60.9                64.3

Note: the original slide also showed colour-coded ratings for each error source on each criterion (Knowledge of Risks, Communication, Available Expertise, Compliance with standards & best practices, Plans or Achievement towards mitigation of risks) and marked improvements and deteriorations from round 2; those cell markings are not reproducible here.
Score levels: Poor, Fair, Good, Very good, Excellent. Levels of risk: H = High, M = Medium, L = Low.
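The slides do not state how the numeric scores are derived from the five-level ratings. As a minimal sketch of the idea only, assuming a hypothetical mapping of the rating levels to 20/40/60/80/100 and a simple unweighted average over the five criteria (none of which is ASPIRE's actual scoring rule), an error-source score could be computed as follows:

```python
# Hypothetical sketch: turn per-criterion ratings into an error-source score.
# The 20/40/60/80/100 mapping and the unweighted averages are assumptions,
# not ASPIRE's actual scoring rules.

RATING_SCORE = {
    "Poor": 20,
    "Fair": 40,
    "Good": 60,
    "Very good": 80,
    "Excellent": 100,
}

CRITERIA = [
    "Knowledge of Risks",
    "Communication",
    "Available Expertise",
    "Compliance with standards & best practices",
    "Plans or Achievement towards mitigation of risks",
]

def error_source_score(ratings: dict[str, str]) -> float:
    """Average the (hypothetically mapped) criterion ratings for one error source."""
    return sum(RATING_SCORE[ratings[c]] for c in CRITERIA) / len(CRITERIA)

def product_score(error_sources: dict[str, dict[str, str]]) -> float:
    """Unweighted average over error sources (the slides do not give the real weighting)."""
    scores = [error_source_score(r) for r in error_sources.values()]
    return sum(scores) / len(scores)

# Illustrative ratings for one error source (not taken from the slides).
example_ratings = {
    "Knowledge of Risks": "Very good",
    "Communication": "Good",
    "Available Expertise": "Very good",
    "Compliance with standards & best practices": "Very good",
    "Plans or Achievement towards mitigation of risks": "Good",
}
print(error_source_score(example_ratings))  # 72.0
```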
Results: Structural Business Statistics
Accuracy (control over error sources):

Error source              Avg score round 2   Avg score round 3   Risk to data quality
Specification error       54                  58                  M
Frame error               64                  60                  M
Non-response error        70                  70                  M
Measurement error         52                  56                  H
Data processing error     60                  60                  H
Sampling error            84                  86                  M
Model/estimation error    56                  48                  H
Revision error            56                  54                  H
Total score               60.8                60.1

Note: as on the previous slide, criterion-level ratings were shown as colour-coded cells with improvements and deteriorations from round 2 marked; only the scores and risk levels are reproduced here.
Score levels: Poor, Fair, Good, Very good, Excellent. Levels of risk: H = High, M = Medium, L = Low.
Strengths of ASPIRE approach
• Comprehensive: covers the error sources and criteria that pose risks to product quality
• The checklists are effective for assigning reliable ratings
• ASPIRE identifies improvement areas ranked in terms of priority
Possible weaknesses
• Does not measure the true accuracy of a statistical product
• Relies on the skills and experience of the external evaluators, and on the information provided by the product staff – a certain degree of subjectivity
Concrete Results
1. Methods developed to explore measurement error
2. Improved quality of the information in quality declarations
3. Increasing activity in planning for studies and improvement projects
4. Major redesign of the Living Conditions Survey, with substantial improvements
5. Higher scores for products that systematically make use of methodological staff
6. In summary: there are no quick fixes for improving Accuracy.
ASPIRE – pointing us towards excellence and beyond…