
Turning a clinical question into a testable hypothesis
Lauren A. Trepanier, DVM, PhD
Diplomate ACVIM, Diplomate ACVCP
Department of Medical Sciences
School of Veterinary Medicine
University of Wisconsin-Madison
Clinical questions
• Trust your clinical experience
• Common diseases
• Clinical controversies
• Standards of practice in human patients
Clinical questions
• New diagnostic tests
• Better treatment options
• Characterization of outcomes
• Prognostic indicators
• Underlying etiology
Getting ideas
• Journal club papers
• Logical follow-ups
• Specialty proceedings
• Knowledge gaps
• Discussions with senior faculty
Define the state of knowledge
• Literature search
• Multiple search terms
• Reference lists from papers
• Read full papers!!
• Beware abstracts that never made it to peer-reviewed publications
Define the knowledge gap
• Major conclusions from each paper
• Organize as a logical story
• Why it is important
• What is known in humans
• What is known in veterinary species of interest
Refining the clinical question
• What remains to be answered?
• Does your question need revising?
• What do you think you will find (your hypothesis)?
Framing your research approach
• Research objectives, or aims, to specifically test your hypothesis
• To compare
• To determine
• To evaluate
• To characterize
PICOT approach
• Population
• Intervention
• Comparators
• Outcomes
• Time frame
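One way to keep these five elements explicit is to draft the plan as a simple structured record. The sketch below is purely illustrative (the class name, fields, and example values are hypothetical, not from this talk); a notebook or spreadsheet serves the same purpose.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch: the five PICOT elements captured as one record so the
# plan can be reviewed and revised as a unit. All names and values are illustrative.
@dataclass
class PicotPlan:
    population: str         # who is enrolled: inclusion/exclusion criteria, diagnostic standard
    intervention: str       # drug, procedure, or diagnostic assay under study
    comparators: List[str]  # placebo, standard of care, untreated controls, etc.
    outcomes: List[str]     # primary outcome first, secondary outcomes after
    time_frame: str         # recruitment period, duration, evaluation time points

plan = PicotPlan(
    population="Client-owned dogs with biopsy-confirmed disease X, owner consent",
    intervention="Drug A orally once daily for 14 days",
    comparators=["Placebo", "Current standard of care"],
    outcomes=["Validated clinical severity score at day 14", "Time to resolution"],
    time_frame="12-month recruitment; evaluations at days 0, 7, and 14",
)
print(plan)
```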
Population
(Image: smallanimal.vethospital.ufl.edu)
• Inclusion criteria
• Gold standard for diagnosis
• Validated surrogate marker
Population
• Inclusion criteria
• Specific breed(s)
• Stage of disease
• Severity of illness
• Heterogeneity vs. homogeneity
Population
• Exclusion criteria
• Prior treatments allowed?
• Washout
• Patient size vs. blood drawn
• Exclude fractious animals?
• Owner consent
Intervention
• Drug treatment
• Surgical procedure
• Diagnostic assay
• What other care is allowed?
• Avoid “clinician discretion” without guidelines
Intervention
• Blinded vs. double blinded
• Applies to all evaluators
• Owners
• Managing clinicians
• Techs administering questionnaires
• Radiologists
• Pathologists
Comparators
• Clinically relevant
• Normal or suspected of disease?
• Placebo or standard of care?
• Concurrent
• Randomized
Randomization
• Random numbers
• Evaluators should be blinded to the scheme (see the sketch after the table below)
Random Numbers
00531 41784 44584 62742 81710 71692 28303
58470 94527 33239 70219 59279 38984 99868
17217 18285 15081 24694 95854 82373 96259
54602 79573 78101 09076 16149 21490 05468
53534 82778 68487 37916 03072 07604 47125
02004 10808 37512 57402 97732 23626 99059
72760 25098 68083 65688 19758 84105 17622
90514 98395 48193 98800 20421 08672 43920
38175 81969 24030 71287 56074 48597 71028
03736 32171 73424 49666 67824 13349 03331
59942 63551 26167 64879 75301 90918 70624
31507 48857 49925 46720 56333 00936 14013
27898 86241 11213 09740 40716 47788 53129
37107 85173 14417 00127 69556 34712 39243
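A random-number table like the one above can also be replaced by a seeded generator. Below is a minimal, hypothetical sketch of permuted-block randomization in Python: the group labels, block size, and seed are made-up choices, and the seed and allocation list would be held by someone outside the evaluation team so evaluators remain blinded to the scheme.

```python
import random

# Hypothetical sketch of a permuted-block randomization scheme.
# Blocks keep group sizes balanced as cases accrue; the seed makes the
# allocation list reproducible for whoever holds the (concealed) scheme.
def block_randomize(n_cases, groups=("A", "B"), block_size=4, seed=12345):
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_cases:
        block = list(groups) * (block_size // len(groups))
        rng.shuffle(block)          # shuffle within each block
        assignments.extend(block)
    return assignments[:n_cases]

# Example: allocation list for the first 12 enrolled animals
print(block_randomize(12))
```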
Outcomes
• Define a primary outcome
• Objective
• Easily measured
• Clinically available
• Validated for your species
• Relevant to clinical response
(Image: Dr. Noel Moens, Guelph)
Outcomes
• Subjective primary outcomes
• Validated scoring system
• Complement with objective outcomes whenever possible
• Blinded evaluators!!
(Image: Dr. Duncan Lascelles, NC State)
Outcomes
• Secondary outcomes
• Less important?
• May be harder to prove
• Can generate further hypotheses
• Add depth
Sample size and power
• Both prospective and retrospective designs
• Need enough cases to overcome variability within groups to show a difference between groups
(Figure: P = 0.0004; Viviano et al. J Vet Intern Med. 2009)
Sample size calculation
• Type I error: finding a difference when it is actually due to chance
• Type II error: missing a difference that is actually present
• With too few cases, you can have either type
Power
• Type I error: α = 0.05
• Type II error: often set at 10-20%
• Power = 100% - Type II error
• Power = ability to detect a true difference
• Power is often set at 80-90%
Sample size (or power) calculation
• Two approaches:
• Start with known sample size and calculate the power to find a difference
• Set a minimum power and calculate needed sample size
Sample size (or power) calculation
• Choose your stats test based on type of data
• Define the variability in your control population (SD)
• Define the difference you need to detect
http://www.stat.uiowa.edu/~rlenth/Power/
Sample size
• Consider drop-out when setting enrollment targets (see the sketch below)
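Both approaches from the previous slides can be sketched with a normal-approximation formula for a two-group comparison of means. The SD, detectable difference, and drop-out rate below are made-up numbers for illustration; for a real study, use a dedicated tool such as the power calculator linked above.

```python
from math import ceil, sqrt
from scipy.stats import norm

# Rough normal-approximation sketch for a two-group comparison of means.
# Inputs (SD, difference, drop-out rate) are illustrative placeholders.
def n_per_group(sd, difference, alpha=0.05, power=0.80):
    # Approach 2: set a minimum power and solve for the sample size per group.
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided Type I error threshold
    z_beta = norm.ppf(power)            # 1 - Type II error
    return ceil(2 * ((z_alpha + z_beta) * sd / difference) ** 2)

def power_for_n(n, sd, difference, alpha=0.05):
    # Approach 1: start with a known sample size and estimate the power.
    z_alpha = norm.ppf(1 - alpha / 2)
    return norm.cdf(difference / (sd * sqrt(2 / n)) - z_alpha)

def inflate_for_dropout(n, dropout_rate):
    # Enroll extra cases so the target n per group survives expected drop-outs.
    return ceil(n / (1 - dropout_rate))

n = n_per_group(sd=15.0, difference=10.0)          # ~36 per group
print(n, round(power_for_n(n, 15.0, 10.0), 2))     # 36, ~0.81
print(inflate_for_dropout(n, dropout_rate=0.15))   # enroll ~43 per group
```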
Time frame
• Recruitment period
• Timing of intervention
• Duration of intervention
• Time points for evaluation
Time frame
• Consider seasonal variables
• Follow-up
• Complicated?
• Prolonged?
Finalized PICOT research plan
• Still addresses the hypothesis
• Still relevant
• Feasible!
• Clinical expertise
• Caseload
• Support staff
• Funds
• Career time frame
Finalized PICOT research plan
• Question is of interest to pet
owners
• Intervention is low risk
• Follow-up is convenient
• Incentives are considered
Common roadblocks
• Disease is uncommon
• Studied outcome is rare
• Data collection too labor-intensive
Common roadblocks
• Samples banked without validated assays (!)
• Case identification out of your control
• Collaborators unmotivated
Key points
• Study what you know
• Choose straightforward aims using available assays/procedures
• Define the approach using PICOT
Key points
• Make sure you would volunteer your own pet to participate
• Results should be publishable no matter what the outcome
Questions or comments?