PrECIs (Pragmatic-Explanatory Continuum Indicators) “Spokes”
Dave Sackett, on behalf of at least 23 collaborators
Why the smartass title?
1. We have to have an acronym to be in the same league as the cardiologists and many other trialists.
2. PrECIs = precis (in both Canadian languages) = a summary or abstract of a longer text or speech.
The traditional distinction - 1
• Some trials ask whether an intervention can work, under tightly controlled, ideal conditions.
• We call these “Explanatory” or “Efficacy” trials.
Example of an Explanatory Trial
• “Among patients with angiographically-confirmed, symptomatic 70-99% stenosis of a carotid artery, can the addition of carotid endarterectomy (performed by an expert vascular or neurosurgeon with an excellent track record) to best medical therapy, vs. best medical therapy alone, reduce the risk of major or fatal stroke over the next two years of rigorous follow-up?”
(NASCET: NEJM 1991;325:445-53)
Advantage of an explanatory trial:
If negative, you can abandon the
treatment (it won’t work anywhere)
Disadvantage of an explanatory trial:
If positive, you still don’t know whether it
will work in usual health care conditions
The traditional distinction - 2
• Other trials ask whether an intervention does work under the usual conditions that apply where it would be used.
• We call these “Pragmatic” or “Effectiveness” trials.
• They are the primary focus of PraCTiHC and SUPPORT.
Example of a Pragmatic Trial
Among women at 12-32 weeks gestation whose clinicians thought they were at sufficient risk for pre-eclampsia or IUGR to be uncertain whether they should be prescribed ASA, does simply prescribing ASA (compared with placebo), with no study follow-up visits, reduce the risk of a composite of bad outcomes for her or her baby?
(CLASP: Lancet 1994;343:619-29)
Advantage of a pragmatic trial:
If positive, it really works and you can implement
the treatment just about everywhere
Disadvantage of a pragmatic trial:
If negative, you can’t distinguish a worthless
treatment from an efficacious treatment that
isn’t applied/accepted widely enough.
Because of these differences in
interpretation and application . . .
It is important to be able to distinguish
Pragmatic from Explanatory trials
A UNC group developed a
diagnostic test to distinguish them
• Identified 7 “domains” they thought were
important.
• Asked each of 12 US & Canadian “Evidence-Based Practice Center” Directors to nominate 6 trials:
  - 4 to exemplify Pragmatic trials
  - 2 to exemplify Explanatory trials
• Two blinded raters applied the 7 domain criteria and decided yes/no for each
Domain criteria
1. Population was in primary care
2. Less stringent eligibility criteria
3. Health outcomes (function, QoL, mortality)
4. Long study duration; clinically relevant treatment modalities (considered compliance an outcome)
5. Assessment of adverse events
6. Adequate sample size to assess a minimally important difference from a patient perspective
7. ITT analysis
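As a concrete reading of this checklist, here is a hedged sketch (not the UNC group's actual instrument) of how a rater's seven yes/no judgments could be turned into a single pragmatic vs. explanatory call. The domain names are shorthand for the criteria above, and the 6-of-7 cut-point is the one reported on the next slide.

```python
# Hedged sketch (not the UNC instrument itself) of applying the seven domain
# criteria as a diagnostic test: a trial is called "pragmatic" when at least
# 6 of the 7 yes/no criteria are satisfied (the cut-point reported below).
DOMAINS = ["primary_care_population", "less_stringent_eligibility",
           "health_outcomes", "long_duration_relevant_modalities",
           "adverse_events_assessed", "adequate_sample_size", "itt_analysis"]

def classify(ratings: dict, cut_point: int = 6) -> str:
    """Return 'pragmatic' if the number of satisfied criteria meets the cut-point."""
    satisfied = sum(bool(ratings[d]) for d in DOMAINS)
    return "pragmatic" if satisfied >= cut_point else "explanatory"

# Example: a hypothetical trial meeting every criterion except ITT analysis
example = {d: True for d in DOMAINS}
example["itt_analysis"] = False
print(classify(example))   # pragmatic (6 of 7 satisfied)
```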
Results
Kappa for yes/no on the domains = 0.42
Decided best cut-point for a positive test was the satisfaction of 6 of the 7 criteria
Sensitivity = 72%
Specificity = 83%
LR+ = 4.3
LR- = 0.3
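For readers who want to check the arithmetic, a minimal sketch (not part of the original presentation) of how the likelihood ratios follow from the reported sensitivity and specificity; the reported LR+ of 4.3 and LR- of 0.3 are consistent with these formulas, allowing for rounding of the underlying counts.

```python
# Minimal check of the likelihood ratios implied by the reported sensitivity
# and specificity at the 6-of-7 cut-point.
sensitivity = 0.72
specificity = 0.83

lr_positive = sensitivity / (1 - specificity)   # 0.72 / 0.17 ~ 4.2
lr_negative = (1 - sensitivity) / specificity   # 0.28 / 0.83 ~ 0.34

print(f"LR+ = {lr_positive:.1f}, LR- = {lr_negative:.2f}")
```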
Their ROC Curve
But
• Pragmatic vs. Explanatory is not an
either/or dichotomy
• It is a continuum
• And individual methodological components
of a trial often vary in their “pragmaticness”
And in SUPPORT we want
to be able to describe
BOTH
Where a trial resides on that continuum …
AND
Where a trial’s individual components
reside on that continuum.
That is, we want a summary
or “precis” of the trial and its
individual methodological
components
So a group of us have been
working on:
Pr = Pragmatic (to)
E = Explanatory
C = Continuum
Is = Indicators
There are 8 PrECIs elements (“spokes”)
• Each is defined in terms of restrictions on an otherwise totally pragmatic trial
• The more restrictions in a trial, the higher its score, and the smaller the population to whom its results can be extrapolated
Spoke #1:
Participant Eligibility Criteria
• The extent to which restrictive eligibility
criteria were used in selecting study
participants/patients
• Eg, age, risk, responsiveness, past
compliance
Spoke #2:
Intervention Flexibility
• The extent to which restrictions were
placed on how to apply the primary
intervention and any co-interventions
• Eg, inflexible protocols for how every bit of
the primary intervention was to be applied,
and how many and which co-interventions
were permitted
Spoke #3:
Practitioner Expertise
• The extent to which restrictive demands
for ever-greater expertise were placed on
the practitioners who applied the
experimental maneuver.
• Eg, experience, certification, recognition,
validation of expertise through
examination of past patients’ records
Spoke #4:
Follow-Up Intensity
• The restriction of “usual” follow-up by
demands for increasing frequency and
intensity of follow-up of trial participants.
• Eg, more frequent follow-up, and attempts
to track down and re-enlist trial
participants who drop out.
Spoke #5:
Follow-Up Duration
• The restriction of follow-up duration so that
it becomes too short to capture important
health outcomes
• Eg, too short to capture long-term efficacy
and safety, restriction to surrogate
mechanistic “biomarkers”
Spoke #6:
Participant Compliance
• Restrictions on leaving trial participants
alone to follow/ not follow trial treatments
as they would in usual health care.
• Eg, compliance measurements, feed-back, and the employment of compliance-improving strategies.
Spoke #7:
Practitioner Adherence
• Restrictions on leaving trial practitioners
alone to offer and apply trial treatments as
they would in usual health care.
• Eg, adherence measurements, feed-back, and the employment of adherence-improving strategies.
Spoke #8:
Primary Analysis
• Restrictions (in the form of exclusions) on
the data that are incorporated in the
primary analysis.
• Eg, excluding drop-outs or non-compliant
patients from the primary analysis (“per
protocol”).
The results can be
displayed graphically
The PrECIs Spokes
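Since the wheel figure itself does not reproduce here, the following is a hedged sketch of how such a "spokes" display could be drawn as a radar (polar) chart. The spoke labels follow the talk; the scores are invented for illustration, and the radial axis is plotted as "pragmatic-ness" (an assumption) so that, as on the later slides, a larger wheel reads as more pragmatic.

```python
# Hedged sketch of a PrECIs "wheel": eight spokes drawn on a radar chart.
# Spoke names follow the talk; the values are hypothetical ratings of
# "pragmatic-ness" per spoke (higher = fewer restrictions = larger wheel).
import numpy as np
import matplotlib.pyplot as plt

spokes = ["Eligibility criteria", "Intervention flexibility",
          "Practitioner expertise", "Follow-up intensity",
          "Follow-up duration", "Participant compliance",
          "Practitioner adherence", "Primary analysis"]
scores = [3, 2, 4, 3, 1, 2, 2, 3]               # hypothetical ratings, 0-4

angles = np.linspace(0, 2 * np.pi, len(spokes), endpoint=False)
angles = np.concatenate([angles, angles[:1]])    # close the polygon
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, marker="o")
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(spokes, fontsize=8)
ax.set_ylim(0, 4)
ax.set_title("PrECIs spokes for one (hypothetical) trial")
plt.show()
```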
And the graphic form can be used
to display agreement among
readers/observers of the same trial
report
• For example, the latest group of Trout
Fellows read two low-dose aspirin trials for
preventing/treating pre-eclampsia
The CLASP Trial (Lancet ’94)
The Caritis et al trial (NEJM ’98)
And the graphic form can be
used to display an overall
pattern in a trial
Could “connect the dots” of
greatest agreement
• The resulting wheel could be informative
• Small = applies to only a small proportion of the target population = Explanatory
• Large = applies to a large proportion of the target population = Pragmatic
• Lumpy-bumpy = inconsistent/?confused protocol
A highly Explanatory “expert” surgical trial:
The CLASP Trial
The Caritis Trial
Might want a summary number
• Advantage: To give an overall indicator of “Pragmatic-ness”
• Disadvantage: Hides individual spoke scores, which may have extreme values
• Constructed in terms of “restrictions” to study participants, treatments, analyses, etc.
• Few restrictions = low # = Pragmatic
• Many restrictions = high # = Explanatory
Summary number
Simply add the scores for the individual
spokes
The NASCET trial scores 27!
The Caritis Trial scores 10
The CLASP Trial scores 6
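A minimal sketch (hypothetical, not the group's scoring sheet) of the summary number: the eight spoke scores are simply added, so few restrictions give a low, "pragmatic" total and many restrictions give a high, "explanatory" one. The per-spoke breakdown below is invented; only the NASCET total of 27 comes from the slide.

```python
# Hedged sketch of the summary number: sum the eight spoke scores.
# The per-spoke values are invented; only the total of 27 is from the talk.
nascet = {"eligibility": 4, "intervention_flexibility": 3,
          "practitioner_expertise": 4, "follow_up_intensity": 4,
          "follow_up_duration": 3, "participant_compliance": 3,
          "practitioner_adherence": 3, "primary_analysis": 3}

summary = sum(nascet.values())
print(f"NASCET summary score (hypothetical breakdown): {summary}")   # 27
```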
Progress to date
1. Have agreed on the 8 domains
2. Have developed 3rd drafts of criteria
for them
3. Have demonstrated moderate to
good agreement in applying criteria
Work yet to be done:
1. Further refinement of the criteria
for (at least some) spokes
2. Decide how they should be
scored
3. Do more face-validation studies
4. Get observer agreement up to
high levels
1. Further develop the criteria
for (some) spokes
• Do some individual elements need to be
added, altered, or eliminated?
2. How should they be scored?
• Independent of each other, and equal in
weight (1 point each, with their sum
naturally limited to 4 points)?
• Independent of each other, but weighted
by their importance (1-4 points each, with
their sum truncated at 4)?
• Mutually exclusive, and progressive
(maximum score of 4)?
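To make the three candidate schemes above concrete, here is a hedged sketch of how the criteria within one spoke could be combined under each option; the criterion names, which criteria are met, and the weights are all hypothetical.

```python
# Hedged sketch of the three candidate scoring schemes for one spoke.
# Criterion names, the criteria met, and the weights are all hypothetical.
criteria_met = {"age_restriction": True, "risk_restriction": True,
                "past_compliance_screen": False, "responsiveness_screen": True}
weights = {"age_restriction": 1, "risk_restriction": 2,
           "past_compliance_screen": 3, "responsiveness_screen": 2}

# Option 1: independent, equal weight (1 point each; with 4 criteria the sum
# is naturally limited to 4 points)
score_equal = sum(criteria_met.values())

# Option 2: independent but weighted by importance (1-4 points each),
# with the sum truncated at 4
score_weighted = min(sum(w for c, w in weights.items() if criteria_met[c]), 4)

# Option 3: mutually exclusive and progressive - only the single most
# restrictive criterion that applies counts (maximum score of 4)
score_progressive = max((w for c, w in weights.items() if criteria_met[c]),
                        default=0)

print(score_equal, score_weighted, score_progressive)   # 3 4 2
```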
3. More face-validation studies
• A comparison to the Gartlehner et al set of
trials is underway
4. Get observer agreement
up to high levels
• As part of our work in revising and
improving the spokes and
individual criteria
Please help us!
1. Please review the articles that Andy
distributed (Rodrigo Salinas & Eduardo Bergel)
2. Score them
3. Suggest improvements in the
individual criteria
4. Suggest how they should be scored
at every level (individual, spoke &
overall)
Last week at the Trout Centre
Results of our PrECIs exercise
Thanks, everyone!
Overall Ratings
How long (minutes) did it take to apply these PrECIs criteria? (each x = one rater)
  <10     x
  10-14   x
  15-19   xxxxxxxxx
  20-24   xxxxxxxx
  25-29
  30-34   xxxxxxx
  35-39   x
  40+     xx
How difficult was it to apply these criteria? (1 = very easy, 10 = very difficult)
  1
  2    xx
  3    x
  4    xxxx
  5    xxxxxxxxxx
  6    xx
  7    xxxxxx
  8    xxxxx
  9    xx
  10
How well were important properties captured? (1 = not at all, 10 = extremely well)
  1
  2
  3    xx
  4    xxx
  5    xx
  6    xxx
  7    xxxxxx
  8    xxxxxxxx
  9    xx
  10   xx
How much fun did you have? (1 = no fun at all, 10 = great fun)
  1    x
  2    x
  3    x
  4    xx
  5    xx
  6    xx
  7    xxxxx
  8    xxxxxxxxxxx
  9    xxx
  10   x
Observer Variation Results
Magpie Trial
Belfort Trial
Your suggestions for revisions
Keep them coming!
Need to define when a Spoke might
be “Not Applicable”
• Eg, Spoke 6: “Participant Compliance”
• When treatment is applied in a single
session at the start of the trial (an
operation, an immunization, etc.)
participants can’t not comply.
Further develop the criteria for Spoke 1:
Participant Eligibility Criteria
Don’t charge a restriction for a unisex
disorder.
Further develop the criteria for Spoke 5:
Follow-Up Duration
1. Spoke 5 really isn’t about follow-up
duration, it’s about restrictions on
the “events” chosen for the analysis.
2. So rename it and make it more clear
Further develop the criteria for Spoke 8:
Primary Analysis Inclusions
• Need to distinguish between (or combine):
  - Drop-outs
  - Non-compliant participants
Suggestions about scoring
• Make criteria within a spoke independent
of each other, but weighted by their
importance
• Could be worth 1-4 points for each
restriction
• Maximum score for a spoke could then exceed 4
More face-validation studies
• A comparison to the Gartlehner et al set of
trials is underway
Improve observer agreement
• As part of our work in revising and
improving the spokes and
individual criteria
Explore other issues
• How would PI ratings of their own trials
compare with ours?
• Do trials with differing scores also have
differing effect-sizes?
Results of our PrECIs exercise
Thanks, everyone!
Logo Breakfast Club
Fernando
Marian
Edgardo
Anna Maria
Kilgore