Critical Appraisal

Or Making Reading More Worthwhile
www.bradfordvts.co.uk
Bruce Davies, August 2001
The Problem
A vast and expanding literature.
Limited time to read.
Different reasons for reading call for different strategies:
Keeping up to date.
Answering specific clinical questions.
Pursuing a research interest.
Stages
Clarify your reasons for reading.
Specify your information need.
Identify relevant literature.
Critically appraise what you read.
Clarify Your Reasons for Reading.
Keeping up to date.
Skimming the main journals and summary bulletins.
Answering specific clinical questions.
Finding good-quality literature on the subject.
Pursuing a research interest.
Extensive literature searching.
Specify Your Information Need.
What kind of reports do I want?
How much detail do I need?
How comprehensive do I need to be?
How far back should I search?
The answers to these questions should flow from the reasons for reading.
Identify Relevant Literature.
There are many ways of finding literature.
Remember to ask a librarian – they are the experts.
Selectivity is the key to successful critical appraisal.
Critically Appraise What You Read.
Separating the wheat from the chaff.
Time is limited – aim to stop reading the dross quickly.
Some papers are pure dross; others contain useful information mixed with rubbish.
Simple checklists enable the useful information to be identified.
Questions to Ask
Is it of interest?
Why was it done?
How was it done?
What has been found?
What are the implications?
What else is of interest?
Questions to Ask
Is it of interest?
Title, abstract, source.
Why was it done?
Introduction.
Should end with a clear statement of the purpose of the study.
The absence of such a statement can imply that the authors had no clear idea of what they were trying to find out.
Or they didn’t find anything but wanted to publish!
Questions to Ask
How was it done?
Methods.
Brief, but should include enough detail to enable one to judge quality.
Must include who was studied and how they were recruited.
Basic demographics must be there.
The methods are an important guide to the quality of the paper.
Questions to Ask
What has been found?
Results.
The data should be there – not just statistics.
Are the aims in the introduction addressed in the results?
Look for illogical sequences, bland statements of results, flaws and inconsistencies.
All research has some flaws – this is not nit-picking; the impact of the flaws needs to be assessed.
Questions to Ask
What are the implications?
Abstract / discussion.
The whole value of research lies in how far the results can be generalised.
All authors tend to think their work is more important than the rest of us do!
What is new here?
What does it mean for health care?
Is it relevant to my patients?
Questions to Ask
What else is of interest?
Introduction / discussion.
Useful references?
Important or novel ideas?
Even if the results are discounted it doesn't mean there is nothing of value.
NOT JUST A FAULT-FINDING EXERCISE!
What Is the Method?
The first task – there are alternative checklists for different methods.
How was the study actually conducted? This is what confirms the method.
Authors sometimes use the wrong words to describe their work!
Surveys
Describe how things are now.
Samples of populations or special groups.
The samples must be randomly selected.
Surveys
Do not have separate control or comparison groups.
Comparisons may be made between subgroups – but this is not control.
Use of the term survey should identify the method – but beware its use for what is really a cohort study.
Surveys
Cross-sectional is seldom used to describe other methods.
There are many ways of selecting a sample, e.g. stratified, cluster and systematic sampling.
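As a rough illustration (not part of the original slides), the Python sketch below shows how simple random, systematic and stratified samples might be drawn; the population size and practice labels are invented for demonstration.

import random

# Hypothetical sampling frame: 1,000 patients, each registered with one of four practices.
population = [{"id": i, "practice": random.choice("ABCD")} for i in range(1000)]

# Simple random sample of 50 patients.
simple = random.sample(population, 50)

# Systematic sample: every k-th patient after a random start.
k = len(population) // 50
start = random.randrange(k)
systematic = population[start::k][:50]

# Stratified sample: the same fraction (about 5%) drawn from each practice.
strata = {}
for patient in population:
    strata.setdefault(patient["practice"], []).append(patient)
stratified = [p for group in strata.values()
              for p in random.sample(group, max(1, len(group) // 20))]

print(len(simple), len(systematic), len(stratified))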
Cohort Studies
Used to find out what happens to patients.
A group is identified and then watched to see what events befall them.
May have comparison or control groups – which must be identified from the start.
This is not an essential feature, though.
Cohort Studies
Must have the element of time flowing forward, from the point at which the subjects are identified.
That starting point may lie in the past, with follow-up reconstructed from records – this is sometimes called a retrospective cohort study.
The term cohort should be diagnostic.
Clinical Trials
Testing.
Always concerned with effectiveness.
The focus should always be on the outcome.
The outcomes may not be beneficial – in other words, side-effect trials.
Sometimes cohort studies are used to assess effectiveness. This is very poor research and can usually be dismissed.
Clinical Trials
When more than two things are compared, the study becomes more complex and harder to get right.
The key words to look for are: random allocation, double blind, single blind, placebo-controlled.
The term outcome is sometimes used in cohort studies as well.
Case-control Studies
Ask what makes groups of patients different.
Select a set of patients with a characteristic – e.g. a disease.
The characteristics of this set are then compared with a control group who do not have the characteristic being studied – but who in all other respects must be as similar as possible.
Case-control Studies
Case-control studies look backward – not forward as cohort studies do.
Other terms used are: case-referent, case-comparator, case-comparison.
May also be called retrospective – a term also used for some cohort studies.
The Results
The major mental challenge.
What do I think this really means?
CAUTION.
Large, unexpected results are rare.
Flawed studies and misleading findings are common.
Statistics
A subject in itself.
Some general thoughts are worth emphasising.
Size matters.
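A rough worked illustration (not from the original slides): the standard error of a mean is roughly SD / √n, so quadrupling the sample size only halves the uncertainty. A trial of 20 patients per arm can easily miss a modest but worthwhile effect, while a trial of thousands can make a clinically trivial difference look statistically significant.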
Probability
Is just that.
It is not proof.
Think of horse racing and the lottery.
Think of the odds.
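As a rough worked example (not from the original slides): a result significant at p = 0.05 would still arise by chance about 1 time in 20 if there were truly no effect – roughly the same odds as a 20-to-1 outsider winning a race. Unlikely things do happen, so a small p-value is evidence, not proof.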
Pitfalls
All statistical tests make assumptions about the raw data.
If no raw data are presented, you cannot know whether the tests are meaningful.
Outliers.
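A minimal sketch (invented numbers) of why a single outlier matters: one extreme value can drag the mean a long way while barely moving the median.

import statistics

readings = [4, 5, 5, 6, 6, 7]           # hypothetical values
with_outlier = readings + [60]          # one data-entry error or extreme case

print(statistics.mean(readings), statistics.median(readings))          # 5.5 5.5
print(statistics.mean(with_outlier), statistics.median(with_outlier))  # about 13.3 and 6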
Pitfalls
Skew.
Non-independence.
Serendipity masquerading as hypothesis – or data trawling!
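A small simulation (purely illustrative, not from the original slides) of why data trawling misleads: run enough tests on pure noise and some will look "significant" by chance alone. With 20 independent tests of true null hypotheses at p < 0.05, the chance of at least one spurious finding is 1 - 0.95**20, about 64%.

import random

# Under a true null hypothesis the p-value is (approximately) uniform on [0, 1].
# Simulate researchers who each run 20 unrelated tests on pure noise.
trials = 10_000
tests_per_trial = 20
hits = sum(
    any(random.random() < 0.05 for _ in range(tests_per_trial))
    for _ in range(trials)
)
print(hits / trials)  # roughly 0.64, i.e. 1 - 0.95**20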
Pitfalls
Black box analyses. Modern computers make statistical testing easy – the authors may not know what they are doing!
Bias – play devil's advocate.
Confounding.
A very common problem in medicine.
Colour televisions do not cause increases in hypertension.
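A toy simulation (invented numbers, not from the original slides) of the confounding behind the colour-television example: if age raises both the chance of owning a colour TV and blood pressure, TV ownership will appear to be associated with hypertension even though there is no causal link.

import random

random.seed(1)

def simulate_person():
    age = random.randint(20, 80)
    owns_tv = random.random() < age / 100                  # assumed: older people more likely to own one
    systolic_bp = 100 + 0.6 * age + random.gauss(0, 10)    # BP rises with age, not with TV ownership
    return owns_tv, systolic_bp

people = [simulate_person() for _ in range(5000)]
bp_tv = [bp for owns, bp in people if owns]
bp_no_tv = [bp for owns, bp in people if not owns]

# TV owners show a higher average BP purely because they are, on average, older.
print(sum(bp_tv) / len(bp_tv), sum(bp_no_tv) / len(bp_no_tv))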
Checklists
Checklists for particular types of literature are a quick and easy way of learning critical appraisal.
They all have 3 stages:
Basic questions.
Essential appraisal.
Detailed appraisal.
BUT, BUT, BUT
Checklists do not tell you about the quality or usefulness.
This is still a subjective question.
All the lists do is enable a more structured and thoughtful response to the subjective question.
References
The Pocket Guide to Critical Appraisal. Iain Crombie. BMJ Books, 1996.
How to Lie with Statistics. Darrell Huff. Pelican, 1989.