
CRITICALITY

There are two aspects to criticality of the evidence – both of which are required for
Masters level study:
 Critical thinking about the evidence itself
 Critical appraisal of the quality of the evidence.
CRITICAL THINKING
“MASTERY OF A SUBJECT”
Masters students have a systematic understanding of knowledge and a critical awareness
of current problems; a comprehensive understanding of techniques applicable to their
own research; and originality in the application of knowledge.
Conceptual understanding that enables one to critically evaluate research, evaluate
methodologies and develop critiques of them, and propose new hypotheses (where
appropriate).
Knowledge is contextual. It will have different meanings depending on how, when,
and to what the knowledge is applied. The value of reading is in interpreting
and making sense of the text, and then applying it to a particular context.
Academic Critique needs to be factual, objective, impartial and considered.
Being critical = not accepting things at face value, explore implications of
something
Recognise the value or quality of something
Critical thinkers exhibit the habit of thinking critically as part of their intellectual
repertoire / questioning attitude
Critical thinking may be part of the process of problem-solving, but may not lead to
a solution. No assumptions are made about the outcome.
Transformation of learners from an unquestioning position to accepting the
relativity of knowledge
Interpret, Analyse, Evaluate & Explain
Interrogate the current state of knowledge on a subject
“Find my own voice”
Expectation to develop and assert opinions by, for example, challenging conventional
wisdom by critiquing seminal texts or concepts.
This is done by reading widely and assimilating the different literature within a discipline.
Active Thinking
Fully engaged in process
Question underlying assumptions and scrutinise the evidence (vs. just being told /
passively learn)
Persistence
Evidence – must be subject to healthy scepticism and carefully evaluated.
Critical appraisal – carefully and systematically examining research to judge its
trustworthiness, value and relevance.
Further conclusion – a conclusion is pursued through reasoned thinking, but be aware
that a conclusion may not in itself provide an unequivocal answer to a question or a
resolution to an issue. The conclusion may be an increased understanding of the
issue and an acceptance of ambiguity.
Intellectual Skills of a Critical Thinker
 Open-mindedness
 Inquisitiveness
 Truth-seeking
 Analyticity
 Systematicity
 Self-confidence
 Maturity of judgement
Open questioning attitude – evaluate strengths/weaknesses (in a positive way).
Which viewpoint is weakest (and why), and which is strongest (and why)?
You need to support every comment made with research texts that back up the points
you are making.
You need to be familiar with all the material that you have before you can move on to
more detailed critical appraisal. It is always a good test of how well you know the
literature if you can discuss the literature you have found in detail with someone else
without referring back to the papers or at least with minimal reference!
Once you have become familiar with your literature, the next step is to decide how
you will critically appraise the literature that you have.
CRITICAL APPRAISAL
Reading critically. Treat a journal article as a debate – I am to be persuaded.
 One that views research writing as a contested terrain, within which
alternative views and positions may be taken up;
Make NOTES as you go through eg. statements/argument that are not convincing
• Don’t assume authors are right
• Look out for assumptions/bias
• Evaluate the argument as much as the conclusions
Relating what you are reading to the wider context or other reading you have done
Re-writing concepts or arguments in my own words
Points that may be useful or relevant for my assignment
When critiquing an article it is useful to use a checklist.
Most research books have checklists
Validity: rigour, soundness of study – the extent to which the conclusions of research are
true within the specific confines of the research (internal validity). A more valid
research design, methodology and procedure means less biased results. No study is perfect
and free from bias. Systematically check that the researchers have done all they can
to minimise bias, and that any biases that might remain are not likely to be so large
as to be able to account for the results observed. A study which is sufficiently free
from bias is said to have internal validity.
Studies are also subject to bias and it is important that researchers take steps to
minimise this bias; for example, use of a control group, randomisation and blinding.
The fact that many illnesses tend to get better on their own is one of the challenges
researchers face when trying to establish whether a treatment – be it a drug, device
or surgical procedure – is truly effective
Sometimes even randomisation can produce unequal groups, so another CASP
question asks whether baseline characteristics of the group were comparable
It is also important to monitor the dropout rate, or treatment withdrawals, from the
RCT, as well as the number of patients lost to follow-up, to ensure that the
composition of groups does not become different. In addition, patients should be
analysed in the group to which they were allocated even if they did not receive the
treatment they were assigned to (intention-to-treat analysis). Further discussion can
be found on the CONSORT Statement website.
1. Is it relevant? (It is easy to get carried away.)
AUTHOR
• Is the author an acknowledged expert? Are their qualifications relevant?
• What is their job title, academic background?
• Is it a peer-reviewed journal? (journal generally considered to be good quality if so)
2. Is it asking a good question?
Given the knowledge in the area, are the aims appropriate?
• INTRODUCTION
• Why was it done? Why are the authors writing this
– Is the purpose clear? Is the argument laid out logically/methodically?
– What are the aims, are they appropriate? Is there a clear statement of aims,
defining the population, intervention & outcomes?
ABSTRACT
• Gives summary of article
• Helps determine relevance to your topic / question
• Often written with a bias towards the positive aspects of the research (Muir Gray 2001)
3. Is there any literature glaringly missing from the literature review?
• Literature review
– Was relevant literature reviewed?
– Does it justify need for study?
– Is it unbiased?
What evidence is used to support the central argument
Is the literature review thorough, has up-to-date literature been reviewed?
Have they presented evidence that contradicts their ideas or is it biased towards the study
they want to carry out
4. Methodology - number of participants, interviews, questionnaires, appropriate
 Was it quantitative / qualitative / mixed method
What instruments were used? Are they standardised – valid and reliable?
If a questionnaire was used – is it standardised?
DID THEY MENTION how they did randomisation?
Reduce bias by randomly allocating participants to treatment. Authors may state they
randomised participants but not provide data on randomisation methods (they might
have done so properly, but without the details it is hard to assess validity).
 Was the methodology appropriate for the question?
eg. Views and experiences - doing RCT (would need qualitative instead)
 What are the variables – are there sufficient details for it to be reproduced?
A less detailed review (eg. NOT Cochrane) acknowledges that the search will not be
comprehensive but will identify which databases were searched
 Sampling – is it described in detail, justified, generalisable; is it large enough to justify
the aims? How were the participants chosen? Were participants relevant to the research
question selected appropriately?
 How representative of the population is it? Is it from one geographical area, 1 unit etc?
How similar was the sample to your patient group?
 If it appears to be a small sample, consider the population size eg number of people
with that condition etc. eg rare condition – so think deeper than ‘small sample size’
 is sampling & method appropriate - eg right group demographic / anyone that’s
excluded that could sway results eg. younger or older people / how is this applicable to
real world
5. Data collection – what methods were used, were they used well? Was it appropriate?
 What about the non-response rate? Is it significant? Drop-out rate, non-response – is
that information provided? Why did people drop out – that might have an effect on the
results, e.g. if they dropped out because there were problems with the intervention.
 Why might there have been a high non-response rate, and might that impact on the results?
6. Is it ethically sound?
 Confidentiality
 depriving clients of treatment
 sensitive subject - support for clients available
 ethical approval
Is there evidence of ethical approval? (It may be omitted simply due to word count limits.)
All indicate the strength of the research.
7. Results - what are they? How trustworthy are they?
What outcome measures were used? – can they be used in your area, are they
appropriate for the topic area? Are they reliable / valid?
• Frequency of outcome measurement – was it appropriate?
• How was the analysis undertaken? Was it appropriate for the methodology?
Quantitative - were the correct statistical tests done? You are not expected to remember
different statistical tests, you will need to refer back to statistics books, there are some
basic ones available.
Qualitative – how did they code into themes? Do the themes make sense? Are quotes
representative from the themes?
• What do the results suggest? (clinical or statistical significance)
• Are the authors' statements of findings accurate and precise, based on factors such as
the size of the p-value, the confidence interval, and the quality of the evidence?
• Do the authors address any counter-arguments?
• Could a different conclusion be drawn from the results?
• How does it relate to other research in this field / the existing body of evidence?
• If it were to entirely contradict a whole body of evidence – ask why. In an interventional
study, were the participants / populations / treatments similar? In a case-control study,
was exposure accurately and equally measured between groups?
• Odds ratios, risk ratios and number needed to treat are methods of analysing results in
order to determine if an intervention is effective.
8. Implications
• What are the implications for the topic I’m looking at
• How can I use this in my practice?
• Practical relevance / Who can the results be applied to, outside of the article
• Any local applications? Consider benefits, harms/costs
• Should I be changing what I'm doing? (Consider whether the study setting / participants
are sufficiently similar to your population, and the outcomes – who are they important
to: the therapist and the patients?)
• May need to look for other articles
DISCUSSION
• Does the discussion of the findings relate back to the literature and aims of the study?
• What was the clinical importance of the results?
• Were all important outcomes considered – not just to the researcher, but to the PT and patients too?
• Are reasonable inferences made from the results?
CONCLUSION
• Do the conclusions support what was found in the results?
• What were the main limitations or biases in the study?
• Do the authors discuss this, or have they missed out any that you consider important?
• Are recommendations for practice or future research provided?
References – even if the article is no good, the reference list might yield further references
that may be useful.
CRAAP TEST
C – Currency: When was it produced? How relevant is it? It may be old – if nothing
recent is published.
R – Relevance: Why was the study undertaken, or why was the paper written? Is it
useful to you?
A – Authority: Who produced it? What are their credentials?
A – Accuracy: Where does the information come from? Is it supported by evidence?
P – Purpose: What is the purpose? i.e. inform, argue, teach, sell, entertain, persuade.
Make sure the reader is aware that you have considered ALL THE ABOVE.
You need to have a clear understanding of research methodology.
You may find it helpful if you are not confident in conducting critical appraisal to use
a tool. There are many available. Some are subject specific. Some are research
method specific. Some are topic specific. Some have tick boxes. Some have guiding
comments. It is important to find one which works for you.
At Masters level, because of the need to undertake critical appraisal throughout any
work that you submit, all work should draw on research textbooks, and all reference
lists should include a range of research textbooks.
• Formulate own argument/original thought
YOU MUST SUPPORT YOUR APPRAISAL WITH RESEARCH TEXT BOOKS
 ALWAYS use a research text book to support your critical appraisal
 Use a range of research text books rather than relying on one or two
 For key points, perhaps support with 2 research texts
Eg. ‘Although random sampling was used in the trial, the researchers were not
blinded to the group allocations and so, as Denscombe (2010) tells us, this may cause
researcher bias, potentially influencing their actions and hence impacting on the validity
of the findings (Bryman 2012)’. (Both of these references are general research texts
which tell us about these aspects of research methods and the potential impact of
these on quality/validity.)
 General research textbooks are available as e-books via the University digital
library – many are available via Dawsonera.
 Generally within an assignment you will not be able to debate the strengths
and limitations of the paper in depth, but you will need to include the main
strength or limitation that you feel impacts on the weight you can give to the
paper within your argument.
Students studying the MSc Hand Therapy may find using a critical appraisal
table useful to support your appraisal within assignments as a lot of the
literature is quantitative.
• McMaster Critical Review form
http://www.google.co.uk/search?hl=en&source=hp&q=mcmaster+critical+review+for
m&meta=&aq=8&oq=McMas
• CASP (study-design specific – good to use)
http://www.phru.nhs.uk/pages/PHD/resources.htm
Cottrell (2005) has developed a generic appraisal tool
Essentials of Nursing Research by Polit and Beck (2005)
 Critical appraisal tools help you develop a consistent approach to the critique
of research and other information.
 However, they only help with the critical appraisal – they do not do the work for
you! If you do not understand the methods by which the research has been
undertaken, the tool will not help you. Therefore you need to understand
what is going on in the paper before you begin to appraise it.
 For the ‘Introduction to Masters Level Study’ module you are required to provide
an in-depth critical appraisal of the studies, as this module is aimed at helping you
develop these skills.
 In future assignments you will be expected to critically appraise the articles
but not include it in depth in your assignment.
Example
• Source not been considered at face value
• Shows originality
• Considers inconsistencies
• Logical structure/clear reasoning
• Draws on a range of evidence
• Clear understanding of different arguments
• Formulates own argument
• Careful consideration of assumptions made
• Clear evidence of interpretation of findings.
• Uses these to support, challenge and enhance the argument being formed.
• Evidence of consideration of the wider picture and the implications which
may arise
• Justifies the assumptions being made
 Not just ‘generalisability’; need to expand and consider the implications of this
 Consider country; as well as acknowledging the potential impact on
‘generalisability’ what were the key strengths of this study which mean we
potentially could make use of these findings in the UK (or in your country of
practice), for example
the strength of the study design
the sample size
the way sample was selected
At this level of study, you need to be detailed in your criticality and when appraising
evidence or a study, the ‘so what?’ element should be addressed. It is correct to
acknowledge that perhaps a study cannot be generalised but a good level of detail
must be given as to why this is the case and what the implications of this are for the
quality and trustworthiness of the study.
It is common practice for students to state that the study was not carried out in their
country and therefore cannot be generalised; however, you need to state exactly why this
is the case; is it because there is some characteristic of the sample that makes it
different or would impact on the findings? Is there a cultural issue? Or actually, does
it not matter that it was carried out in a different country because actually, this may
have no impact on any aspect of the research methodology and hence the
outcomes? Evaluate this and use a research text or texts to support it!
Look at the date – is it contemporary? If it is ‘old’ you need to acknowledge this – is
it still current despite it being ‘old’? Is it old but seminal? Is it old but no other
relevant studies carried out since? Is it old and perhaps out of date (why is it out of
date?)
 Consider date
If you have done a literature search and set yourself date parameters, you need to
be able to justify why these parameters have been set and ensure you are not
excluding any seminal work by limiting the dates.
All research is conducted under real-world constraints (e.g. the time and money available
to conduct it), so it might have flaws.
Accept flaws: most published literature has flaws. This is because the research
which has been undertaken has usually not been conducted for an article in a
journal. In many instances the research was collected as part of a larger project, and
in many instances what is presented in the journal is a very small part of the overall work.
Very often what you want it to support or prove is different from the intended
purpose, and therefore it is important to acknowledge this.
• Flaws in research should be considered in the context of the article and its
conclusions.
Get into a habit of asking myself / jotting down at the end of every learning activity –
what new information have I gained? What are the implications for my developing
understanding of this subject?
As you read more and appraise more articles - you are faced with an increasing
battery of information - how can you systematically decide which is better or worse
evidence?
 At first glance, a research paper might appear to address your research
question directly; however, on closer inspection you may realise that the scope of
the paper is very different from what your initial assessment had led you to
believe, and in fact has only indirect relevance to your research question.
 You might find that although the context of the paper is relevant to your
research question, the methods used in the paper have been poorly carried
out, and you are less confident in the results of the study as a result.
You need to appraise the type and strength of the evidence being given – a
hierarchy of evidence is the recommended scale for doing so.
RESULTS – ODDS RATIO / RISK RATIO
Results are presented in many different ways. In RCTs, cohort studies and case-control studies, two groups are compared and the results are often expressed as a
relative risk (for example, dividing the outcome in the intervention group by the
outcome in the control group). If the outcome is measured as the odds of an event
occurring (for example, being cured) in a group (those with the event / those without
the event), then the relative risk is known as the odds ratio (OR). If it is the frequency
with which an event occurs (those with the event / the total number in that group),
then the relative risk is known as the risk ratio (RR). When there is no difference
between the groups, the OR and the RR are 1. A relative risk (OR or RR) of more
than 1 means that the outcome occurred more in the intervention group than the
control group (if it is a desired outcome, such as stopping smoking, then the
intervention worked; if the outcome is not desired, for example death, then the
control group performed better). Similarly, if the OR or RR is less than 1, then the
outcome occurred less frequently in the intervention group.
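The OR/RR arithmetic above can be sketched in a few lines of Python. The 2×2 counts below are invented purely for illustration; 'cured' is taken as the desired outcome:

```python
# Hypothetical RCT counts:
#                  cured   not cured   total
# intervention       30        70       100
# control            20        80       100

def risk_ratio(events_a, total_a, events_b, total_b):
    """RR: frequency of the event in one group divided by the other."""
    return (events_a / total_a) / (events_b / total_b)

def odds_ratio(events_a, no_events_a, events_b, no_events_b):
    """OR: odds of the event in one group divided by the other."""
    return (events_a / no_events_a) / (events_b / no_events_b)

rr = risk_ratio(30, 100, 20, 100)
odds = odds_ratio(30, 70, 20, 80)
print(f"RR = {rr:.2f}")    # RR = 1.50
print(f"OR = {odds:.2f}")  # OR = 1.71
# Both are above 1: the desired outcome (cure) occurred more often in the
# intervention group, so in this invented example the intervention performed better.
```

With no difference between the groups both ratios would be 1, exactly as the text describes.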
Results are usually more helpful when they are presented as risk differences. In this
case you subtract the proportion of events in the control group from that in the
intervention group. The risk difference can also be presented as the number needed
to treat (NNT). This is the number of people to whom the treatment would have to be
given – rather than the control – to produce one extra outcome of interest.
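Continuing the same invented counts, the risk difference and NNT described above work out as follows:

```python
def risk_difference(events_tx, total_tx, events_ctrl, total_ctrl):
    """Absolute risk difference: event proportion in the intervention
    group minus the proportion in the control group."""
    return events_tx / total_tx - events_ctrl / total_ctrl

def number_needed_to_treat(rd):
    """NNT: how many patients must receive the treatment (rather than
    the control) to produce one extra outcome of interest."""
    return 1 / abs(rd)

# Hypothetical counts: 30/100 cured on treatment vs 20/100 on control.
rd = risk_difference(30, 100, 20, 100)
print(f"risk difference = {rd:.2f}")              # risk difference = 0.10
print(f"NNT = {number_needed_to_treat(rd):.0f}")  # NNT = 10
```

So with these figures roughly 10 patients would need the treatment rather than the control to produce one extra cure.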
There will always be some uncertainty about the true result because trials are only a
sample of possible results. The confidence interval (CI) gives the range of where the
truth might lie, given the findings of a study, for a given degree of certainty (usually
95% certainty). P-values report the probability of seeing a result such as the one
obtained if there were no real effect. P-values can range from 0 (absolutely
impossible) to 1 (absolutely certain). A p-value of less than 0.05 means that a result
such as the one seen would occur by chance on less than 1 in 20 occasions. In this
circumstance a result is described as statistically significant. This does not mean that
it is necessarily important.
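As an illustration of how such a confidence interval is typically obtained for a risk difference, here is a sketch using the standard normal approximation (the counts are again hypothetical, and real trial reports may use more refined methods):

```python
import math

def risk_difference_ci(events1, n1, events0, n0, z=1.96):
    """Approximate confidence interval for a risk difference
    (normal approximation; z = 1.96 gives ~95% confidence)."""
    p1, p0 = events1 / n1, events0 / n0
    rd = p1 - p0
    se = math.sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)
    return rd - z * se, rd + z * se

# Hypothetical counts: 30/100 events on treatment vs 20/100 on control.
low, high = risk_difference_ci(30, 100, 20, 100)
print(f"95% CI: {low:.3f} to {high:.3f}")  # 95% CI: -0.019 to 0.219
# The interval crosses 0 (no difference), so despite the apparent benefit this
# hypothetical result would not be statistically significant at the 5% level.
```

This is why the CI matters as much as the point estimate: it shows the whole range where the truth might plausibly lie.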
Costs are usually not reported in a trial but if a treatment is very expensive and only
gives a small health gain, it may not be a good use of resources. Usually an
economic evaluation is necessary to provide information on cost-effectiveness, but
sometimes a ‘back-of-the-envelope’ calculation can be performed. If the cost of
treating one patient and the NNT can be established, these values can be multiplied
to give a rough idea of the likely order of cost for producing one unit of benefit.
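The back-of-the-envelope calculation described above is just a multiplication; the figures here are invented for illustration only:

```python
# Hypothetical figures: treatment cost per patient and NNT from a trial.
cost_per_patient = 250  # assumed cost of treating one patient
nnt = 10                # assumed number needed to treat

# Rough cost of producing one extra unit of benefit (one extra good outcome):
# every group of NNT patients treated yields one extra outcome of interest.
cost_per_extra_outcome = cost_per_patient * nnt
print(cost_per_extra_outcome)  # 2500
```

A rough figure like this only indicates the likely order of cost; a proper economic evaluation is needed for a real cost-effectiveness judgement.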
Conclusions
When reading any research – be it a systematic review, RCT, economic evaluation
or other study design – it is important to remember that there are three broad things
to consider: validity, results, relevance. It is always necessary to consider the
following questions. ● Has the research been conducted in such a way as to
minimise bias? ● If so, what does the study show? ● What do the results mean for
the particular patient or context in which a decision is being made?