ONLINE APPENDIX

TITLE: NCDR Data Quality Brief: The NCDR® Data Quality Program in 2012
AUTHORS: John C. Messenger, MD, Kalon K.L. Ho, MD, MSc, Christopher H. Young, PhD,
Lara E. Slattery, MHS, Jasmine C. Draoui, MS, Jeptha P. Curtis, MD, Gregory J. Dehmer, MD,
Frederick L. Grover, MD, Michael J. Mirro, MD, Matthew R. Reynolds, MD, MSc, Ivan Rokos,
MD, John A. Spertus, MD, MPH, Tracy Y. Wang, MD, MHS, MSc, Stuart A. Winston, DO, John
S. Rumsfeld, MD, PhD, Frederick A. Masoudi, MD, MSPH, on behalf of the National
Cardiovascular Data Registry Science and Quality Oversight Committee Data Quality
Workgroup
Appendix A: 2010 Audit Findings – CathPCI Registry
Background:
In 2003, NCDR developed a national onsite audit program for several of its suite of
hospital-based registries. The main purposes were (1) to assess the accuracy of the collected data
by comparing audit data to hospital medical records and (2) to help participants identify areas for
improvement in their data capture process and to re-educate participants when necessary.
Ultimately, the NCDR audit aims to improve data collection in a manner that supports overall
hospital quality improvement efforts.
In 2010, NCDR launched a joint nationwide onsite audit of CathPCI Registry version 3
and ICD Registry version 1. Twenty-five participants (24 enrolled in both registries and one
enrolled in each registry alone) with a green-light status on their Data Quality Report for two
quarters within a 3-year period (2007 to 2009) were randomly selected, along with 25 records
per participant, for a total of 625 records per registry. For the 2010 audit, 58 CathPCI Registry
audit variables were evaluated for accuracy. To complete the Inter-Rater Reliability Assessment
(IRRA), 6 participants were randomly selected for a review of a total of 60 records. West
Virginia Medical Institute, a quality improvement organization, was contracted to support this
effort.
Methods:
Using a clustered randomization methodology, participants submitting data were selected
for assessment of their data accuracy. Nurses with cardiology experience who were trained to
capture NCDR registry data reviewed randomly selected hospital charts, abstracted clinical data
for the selected variables, and, while blinded to the submitted data, entered the clinical data for
each participant into an audit data collection tool. The data captured during the audit were then
compared with the NCDR data submitted by the participants, and an accuracy score for each
variable and an overall accuracy score for each participant were calculated. The accuracy score
for each variable represents how often the NCDR data matched the auditors’ data. The crude
agreement rate (total number of agreements divided by total number of possible agreements) was
computed based on the presence of a match. Fuzzy matches were allowed for 14 CathPCI
Registry audit variables. Cohen’s kappa was used to evaluate inter-rater reliability, accounting
for agreement by chance in fields with varying numbers of categories.
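As a concrete illustration of the scoring, the short Python sketch below computes a crude
agreement rate from paired site and auditor abstractions, with an optional numeric tolerance
standing in for a fuzzy match. The field name, records, and tolerance value are hypothetical
assumptions for illustration, not taken from the NCDR audit specification.

    # Minimal sketch of the crude agreement calculation described above.
    # The field name and numeric tolerance are illustrative assumptions.
    def values_match(site_value, audit_value, tolerance=None):
        """Exact match by default; a numeric tolerance approximates a fuzzy match."""
        if tolerance is not None:
            try:
                return abs(float(site_value) - float(audit_value)) <= tolerance
            except (TypeError, ValueError):
                pass  # fall back to an exact comparison for non-numeric values
        return site_value == audit_value

    def crude_agreement_rate(site_records, audit_records, field, tolerance=None):
        """Total agreements divided by total possible agreements for one variable."""
        pairs = list(zip(site_records, audit_records))
        agreements = sum(
            values_match(s.get(field), a.get(field), tolerance) for s, a in pairs
        )
        return agreements / len(pairs)

    # Hypothetical ejection fraction values with a +/- 5 point tolerance.
    site = [{"ef_pct": 55}, {"ef_pct": 40}, {"ef_pct": 30}]
    audit = [{"ef_pct": 58}, {"ef_pct": 40}, {"ef_pct": 20}]
    print(crude_agreement_rate(site, audit, "ef_pct", tolerance=5))  # ~0.67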
Each participant received a confidential site-specific report displaying its overall audit
accuracy score and an individual accuracy score for each audit variable. ACCF provided a
mechanism by which participants could disagree with the audit findings and request adjudication
during a 45-day period. In addition, NCDR conducted an IRRA of the audit abstractors as
quality assurance of the audit itself. A Cohen’s kappa coefficient or Krippendorff’s alpha was
calculated to correct agreement rates for the possibility of agreement by chance.
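For reference, Cohen’s kappa corrects the observed agreement rate for the agreement expected
by chance; in standard textbook notation (not reproduced from the audit documentation):

    \kappa = \frac{p_o - p_e}{1 - p_e}

where p_o is the observed proportion of agreement between the two raters and p_e is the
proportion of agreement expected by chance given each rater’s marginal category frequencies.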
Results:
For the CathPCI Registry, the average accuracy across all participants was 93.1%, with
individual participants’ agreement rates ranging from 89.4% to 97.4%. The overall aggregate
score for the IRRA was 98.3%. Of the audited participants, 32% had an accuracy score above
93.5%. Four participants (16%) requested adjudication, which allowed two of them to improve
their overall accuracy scores. Adjudication had a modest impact on the overall accuracy score
for this audit, increasing it by three-tenths of a percentage point.
Of the 58 variables, 67% had an accuracy above 93.5%, 17% had an accuracy between
80.5% and 93.4%, 9% had an accuracy between 70.5% and 80.4%, and 7% had an accuracy of
70.4% or less. The variables with the lowest accuracy scores were Admission Symptoms at
Presentation, the Mid/Distal LAD and Proximal LAD Stenosis Percentages, and the Ejection
Fraction Percentage.
Disagreement between hospital and auditor data could result from subjective variables
such as NYHA Classification. Other reasons for low agreement scores could relate to vague
documentation or lack of information at the time of an audit.
(Table 1)
Conclusions:
Overall, the audit findings for both registries revealed high agreement rates and reliability
statistics. Two-thirds of the audited variables received an accuracy score above 93.5%, and
overall the registries received satisfactory accuracy scores in the 85% to 93% range.
Nonetheless, a few audit variables were identified as having poor accuracy scores. The
adjudication process helped recalibrate our assessment for some participating sites but had only
a modest impact on the overall accuracy score.
Future Considerations:
Based on the results presented above, ACCF plans to develop targeted educational
programs around audit variables with poor accuracy scores (e.g., training modules reviewing the
definitions and necessary documentation). ACCF is also considering other audit approaches,
such as offsite review, which would increase the number of participants and records audited
annually. An offsite audit would also increase the number of audit variables assessed. Other
considerations include clarifying the source documentation for variables, improving the data
capture process at participant sites, and strengthening quality assessment during audits.
Appendix B: 2010 Audit Findings – ICD Registry
Background:
In 2003, NCDR developed a national onsite audit program for several of its suite of
hospital-based registries. The main purposes were (1) to assess the accuracy of the collected data
by comparing audit data to hospital medical records and (2) to help participants identify areas for
improvement in their data capture process and to re-educate participants when necessary.
Ultimately, the NCDR audit aims to improve data collection in a manner that supports overall
hospital quality improvement efforts.
In 2010, NCDR launched a joint nationwide onsite audit of CathPCI Registry version 3
and ICD Registry version 1. Twenty-five participants (24 enrolled in both registries and one
enrolled in each registry alone) with a green-light status on their Data Quality Report for two
quarters within a 3-year period (2007 to 2009) were randomly selected, along with a total of
625 records. For the 2010 audit, 55 ICD Registry audit variables were evaluated for accuracy.
To complete the Inter-Rater Reliability Assessment (IRRA), 6 participants were randomly
selected for a review of a total of 60 records. West Virginia Medical Institute, a quality
improvement organization, was contracted to support this effort.
Methods:
Using a clustered randomization methodology, participants submitting data were selected
for assessment of their data accuracy. Nurses with cardiology experience who were trained to
capture NCDR registry data reviewed randomly selected hospital charts, abstracted clinical data
for the selected variables, and, while blinded to the submitted data, entered the clinical data for
each participant into an audit data collection tool. The data captured during the audit were then
compared with the NCDR data submitted by the participants, and an accuracy score for each
variable and an overall accuracy score for each participant were calculated. The accuracy score
for each variable represents how often the NCDR data matched the auditors’ data. The crude
agreement rate (total number of agreements divided by total number of possible agreements) was
computed based on the presence of a match. A variance from an exact match was allowed for 3
ICD Registry audit variables. Cohen’s kappa was used to evaluate inter-rater reliability,
accounting for agreement by chance in fields with varying numbers of categories.
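A minimal sketch of such a variance (tolerance) check follows; the variable names and
tolerance values are hypothetical, since the appendix does not specify which 3 ICD Registry
variables were involved or what variance was allowed.

    # Sketch of a tolerance ("variance") check for numeric audit variables.
    # Variable names and tolerances are illustrative assumptions only.
    TOLERANCES = {
        "qrs_duration_ms": 10,  # hypothetical: within 10 ms counts as a match
        "ef_pct": 5,            # hypothetical: within 5 percentage points
    }

    def within_variance(field, site_value, audit_value):
        """Treat numeric values within the allowed variance as agreements."""
        tolerance = TOLERANCES.get(field)
        if tolerance is None:
            return site_value == audit_value  # other fields require an exact match
        return abs(site_value - audit_value) <= tolerance

    print(within_variance("qrs_duration_ms", 120, 128))  # True: |120 - 128| <= 10
    print(within_variance("ef_pct", 35, 45))             # False: |35 - 45| > 5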
Each participant received a confidential site-specific report displaying its overall audit
accuracy score and an individual accuracy score for each audit variable. ACCF provides a
mechanism by which participants may disagree with the audit findings and request adjudication
during a 45-day period. In addition, NCDR conducted an IRRA of the audit abstractors as
quality assurance of the audit itself. A Cohen’s kappa coefficient or Krippendorff’s alpha was
calculated to correct agreement rates for the possibility of agreement by chance.
Results:
For the ICD Registry, the average accuracy across all participants was 91.2%, with
individual participants’ agreement rates ranging from 83.7% to 95.7%. Of the audited ICD
Registry participants, 12% recorded an accuracy score above 93.5%. The overall aggregate
score for the IRRA was 96.8% for the ICD Registry. One of the two participants who disagreed
with the findings was able to raise their overall accuracy score above 93.5%.
Of the audited variables, 63% had an accuracy score above 93.5%, while 19% scored
between 80.5% and 93.4% and 9% scored between 70.5% and 80.4%. The remaining 9% of
audit variables received an accuracy score of 70.4% or less and included variables such as CHF
Duration, EF Timeframe, and QRS Duration.
Disagreement between hospital and auditor data could result from subjective variables
such as NYHA Classification. Other reasons for low agreement scores could relate to vague
documentation or lack of information at the time of an audit.
(Table 2)
Conclusions:
Overall, the audit findings for both registries revealed high agreement rates and reliability
statistics. Two-thirds of the audited variables received an accuracy score above 93.5%, and
overall the registries received satisfactory accuracy scores in the 85% to 93% range.
Nonetheless, a few audit variables were identified as having poor accuracy scores. The
adjudication process helped recalibrate our assessment for some participants but had only a
modest impact on the overall accuracy score.
Future Considerations:
Based on the results presented above, ACCF plans to develop targeted educational
programs around audit variables with poor accuracy scores (e.g., training modules reviewing the
definitions and necessary documentation). ACCF is also considering other audit approaches,
such as offsite review, which would increase the number of participants and records audited
annually. An offsite audit would also increase the number of audit variables assessed. Other
considerations include clarifying the source documentation for variables, improving the data
capture process at participant sites, and strengthening quality assessment during audits.
Appendix C: 2010 Audit Findings – ACTION Registry-GWTG
Background:
In 2003, NCDR developed a national onsite audit program for several of its suite of
hospital-based registries. The main purposes were (1) to assess the accuracy of the collected
data by comparing audit data to hospital medical records and (2) to help participants identify
areas for improvement in their data capture process and to re-educate participants when necessary.
Ultimately, the NCDR audit aims to improve data collection in a manner that supports overall
hospital quality improvement efforts.
In 2010, NCDR launched a pilot onsite audit of ACTION Registry®-GWTG™ version 2.
Three hundred records from ten participants were randomly selected from a sampling frame of
participants with a green-light status on their Data Quality Report for two quarters within 2009
and at least 25 records submitted during 2009. For the 2010 audit, 59 ACTION Registry-GWTG
audit variables were evaluated for accuracy. In addition to the medical chart review, NCDR also
assessed record eligibility criteria as well as possible sampling. To complete the Inter-Rater
Reliability Assessment (IRRA), 3 participants were randomly selected for a review of a total of
30 records. West Virginia Medical Institute, a quality improvement organization, was contracted
to support this effort.
Methods:
Using a clustered randomization methodology, participants submitting data were selected
for assessment of their data accuracy. Nurses with cardiology experience who were trained to
capture NCDR registry data reviewed randomly selected hospital charts, abstracted clinical data
for the selected variables, and, while blinded to the submitted data, entered the clinical data for
each participant into an audit data collection tool. The data captured during the audit were then
compared with the NCDR data submitted by the participants, and an accuracy score for each
variable and an overall accuracy score for each participant were calculated. The accuracy score
for each variable represents how often the NCDR data matched the auditors’ data. The crude
agreement rate (total number of agreements divided by total number of possible agreements) was
computed based on the presence of a match. A Cohen’s kappa coefficient or Krippendorff’s
alpha was calculated to correct agreement rates for the possibility of agreement by chance. A
fuzzy match was allowed for 12 ACTION Registry-GWTG audit variables.
Each participant’s list of submitted records was compared with the hospital’s billing list
for a specific randomly chosen period to evaluate possible sampling of records on the part of the
participant. The inclusion criteria compliance rate equals the number of records meeting the
inclusion criteria divided by the total number of records audited at a facility.
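A minimal sketch of the billing-list comparison and the compliance rate follows, using set
differences over hypothetical record identifiers; the appendix does not detail the actual matching
procedure, and the denominators shown are one plausible reading of the rates reported below.

    # Sketch of the billing-list comparison, using set differences over
    # hypothetical record identifiers (not the actual NCDR procedure).
    submitted = {"A101", "A102", "A103", "A104"}            # records sent to the registry
    eligible_on_billing = {"A101", "A102", "A103", "A105"}  # eligible per billing list

    erroneous_submissions = submitted - eligible_on_billing  # {"A104"}
    erroneous_omissions = eligible_on_billing - submitted    # {"A105"}
    erroneous_submission_rate = len(erroneous_submissions) / len(submitted)
    erroneous_omission_rate = len(erroneous_omissions) / len(eligible_on_billing)

    # Inclusion criteria compliance rate: records meeting the criteria
    # divided by the total number of records audited at the facility.
    compliance_rate = 28 / 30  # e.g., 28 of 30 audited records met the criteria
    print(erroneous_submission_rate, erroneous_omission_rate, compliance_rate)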
Each participant received a confidential site-specific report displaying its overall audit
accuracy score and an individual accuracy score for each audit variable. ACCF provides a
mechanism by which participants may disagree with the audit findings and request adjudication
during a 45-day period. In addition, NCDR conducts an inter-rater reliability assessment (IRRA)
of the audit abstractors as quality assurance of the audit itself.
Results:
For the ACTION Registry-GWTG, the average accuracy across all participants was
89.7%, with individual participants’ agreement rates ranging from 87.2% to 94.2%. Of the
audited participants, 20% had an accuracy score above 93.5%. One participant (10%) requested
adjudication, which slightly improved its overall accuracy score. Overall, adjudication had a
modest impact on the overall accuracy score for this audit.
Of the audited variables, 42% had an average accuracy above 93.5%, 47.5% had an
accuracy between 80.5% and 93.4%, 8.5% had an accuracy between 70.5% and 80.4%, and 2%
had an accuracy of 70.4% or less. The variables with the lowest accuracy scores were Cardiac
Rehabilitation Referral, Left Ventricular Ejection Fraction Percentage, Weight, and Peak
Troponin Collected, as well as Peak Creatinine Date and Time.
Disagreement between hospital and auditor data could result from subjective variables
where a judgment call applies. Other reasons for low agreement scores could relate to vague
documentation or lack of information at the time of an audit.
(Table 3)
The overall erroneous submission rate of records in the registry was 0.3%, strongly
suggesting that the majority of records in the registry are appropriately included. However, the
erroneous omission rate was much higher, at 7.9%. Finally, NCDR examined the audit results to
identify any audit variables for which agreement rates differed between STEMI and NSTEMI
records. In general, the type of record made little difference in the final scores. However, there
were a few exceptions for variables such as the peak troponin fields and Reperfusion Candidate.
Conclusions:
Overall, the audit findings revealed high agreement rates and reliability statistics. Two-thirds
of the audited variables received an accuracy score above 90%, and overall the registry received
satisfactory accuracy scores in the 85% to 95% range. Nonetheless, a few audit variables were
identified as having poor accuracy scores. The adjudication process helped recalibrate our
assessment for some participants but had only a modest impact on the overall accuracy score. A
few facilities with abnormally high erroneous omission rates accounted for the majority of the
erroneous omissions. These hospitals indicated that resource issues (e.g., insufficient staff) were
the primary reason for not submitting records. This suggests that while a few records are either
accidentally submitted to or excluded from the registry, resource constraints at a few hospitals
have a disproportionately high impact on the registry.
Future Considerations:
Based on the results presented above, ACCF plans to develop targeted educational
programs around audit variables with poor accuracy scores (e.g., training modules reviewing the
definitions and necessary documentation). ACCF is also considering other audit approaches,
such as offsite review, which would increase the number of participants and records audited
annually. An offsite audit would also increase the number of audit variables assessed. Other
considerations include clarifying the source documentation for variables, improving the data
capture process at participant sites, and strengthening quality assessment during audits.
Table 1 – 2010 Audit Overall Results: CathPCI Registry (CathPCI v3)
Number of audited participants / records: 25 / 625
Overall accuracy score: 93.1%
10th percentile accuracy score: 90.9%
90th percentile accuracy score: 94.8%
Accuracy score range: 89.4%–97.4%
% of adjudicated participants: 16%
Number of audit variables: 58
% of variables above 93.5%: 67%
% of variables between 80.5% and 93.4%: 17%
% of variables between 70.5% and 80.4%: 9%
% of variables at or below 70.4%: 7%
Inter-Rater Reliability Assessment score: 98.2%
Table 2 – 2010 Audit Overall Results: ICD Registry (ICD v1)
Number of audited participants / records: 25 / 625
Overall accuracy score: 91.2%
10th percentile accuracy score: 89.0%
90th percentile accuracy score: 93.4%
Accuracy score range: 83.7%–95.7%
% of adjudicated participants: 8%
Number of audit variables: 55
% of variables above 93.5%: 63%
% of variables between 80.5% and 93.4%: 19%
% of variables between 70.5% and 80.4%: 9%
% of variables at or below 70.4%: 9%
Inter-Rater Reliability Assessment score: 96.8%
Table 3 – 2010 Audit Overall Results: ACTION Registry-GWTG (ACTION-GWTG v2)
Number of audited participants / records: 10 / 300
Overall accuracy score: 89.7%
10th percentile accuracy score: 87.2%
90th percentile accuracy score: 94.2%
Accuracy score range: 85%–95%
Overall kappa score: 0.69
Kappa score range: 0.57–0.80
% of adjudicated participants: 10%
Number of audit variables: 59
% of variables above 93.5%: 42%
% of variables between 80.5% and 93.4%: 47.5%
% of variables between 70.5% and 80.4%: 8.5%
% of variables at or below 70.4%: 2%
Inter-Rater Reliability Assessment score: 98.9%