International Review of Psychiatry (2002), 14, 6–11
Evaluating new treatments in psychiatry: the potential value of
combining qualitative and quantitative research methods
MICHAEL J. CRAWFORD, TIMOTHY WEAVER, DEBORAH RUTTER, TOM SENSKY &
PETER TYRER
Imperial College of Science, Technology and Medicine, Faculty of Medicine, London, UK
Summary
Clinical trials have been used extensively to explore the effectiveness of drug treatments for psychiatric disorders. Double-blind, randomized controlled trials provide the potential to explore the effectiveness of treatments free from the effects of bias and confounding factors. More recently, clinical trials have been used to evaluate the effects of more complex interventions such as psychological treatments and alternatives to inpatient treatment. Clinical trials of complex interventions present several methodological challenges, including the need to identify their mechanisms of action, problems in ensuring the fidelity of the experimental intervention and variations in the effects that interventions have in sub-groups of people. We argue that some of these challenges can be met by incorporating qualitative research methods into the experimental evaluation of complex interventions. The benefits and some of the problems of combining
methods are illustrated by examples of recent and ongoing research.
The value of clinical trials
Evidence-based healthcare, a process which involves making clinical decisions on the basis of the best available evidence, is widely seen as a central component
of efforts to improve the quality of health services
(Sackett & Wennberg, 1997). Controlled trials and
especially randomized controlled trials (RCTs)
(collectively referred to in the rest of this document
as ‘clinical trials’) continue to be seen as one of the
key methodologies for ensuring that interventions
that aim to improve health are assessed in an unbiased way (Greenhalgh, 1997; Harris et al., 1998).
High quality clinical trials randomly allocate subjects to alternative interventions in a controlled
environment (Pocock, 1983). In doing so they seek
to measure the effects of experimental interventions
free from the confounding effects of other factors
that may influence findings in non-randomized
studies.
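The protective effect of randomization described above can be made concrete with a brief simulation (an illustrative sketch added here, not part of the original study; all parameter values are invented). When an unmeasured prognostic factor makes patients both sicker and more likely to be treated, a naive comparison is confounded; random allocation balances the factor across arms on average:

```python
import random

def simulate(randomize: bool, n: int = 200_000, seed: int = 1) -> float:
    """Estimated treatment effect (difference in mean outcome between arms).

    The true treatment effect is zero. 'Severity' (an unmeasured prognostic
    factor) worsens outcomes and, when allocation is not randomized, also
    makes treatment more likely -- a classic confounder.
    """
    rng = random.Random(seed)
    treated, control = [], []
    for _ in range(n):
        severe = rng.random() < 0.5
        if randomize:
            gets_treatment = rng.random() < 0.5          # allocation ignores severity
        else:
            gets_treatment = rng.random() < (0.8 if severe else 0.2)
        outcome = 0.5 if severe else 1.0                 # the treatment itself does nothing
        (treated if gets_treatment else control).append(outcome)
    return sum(treated) / len(treated) - sum(control) / len(control)

print(f"non-randomized estimate: {simulate(randomize=False):+.3f}")  # spuriously negative
print(f"randomized estimate:     {simulate(randomize=True):+.3f}")   # close to zero
```

With these invented parameters the non-randomized comparison suggests the treatment is harmful purely because the treated group is sicker, while the randomized comparison recovers the true null effect.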
The methodology of clinical trials was initially
developed to investigate the effects of drug treatments. The earliest trials investigated the efficacy of
antibiotics (e.g. MRC, 1948) and similar methods
were subsequently applied to the evaluation of drugs
used in the treatment of depression, schizophrenia
and other psychiatric disorders (e.g. Leff & Wing,
1971). By comparing the effects of active drugs with
placebos it may be possible to ‘blind’ patients and
raters to which treatment a particular subject is
receiving. So called ‘double-blind’ studies have the
added potential of reducing the likelihood of
observer and recall bias, which may affect the findings of studies where patients and raters know
whether or not the subject has received the ‘active’
treatment (Day & Altman, 2000). It has been shown
that clinical trials in which raters are not blind are
more likely to demonstrate positive effects than those
where raters are blinded to the treatment that
patients receive (Schulz et al., 1995).
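The mechanism behind this finding can be illustrated with a small simulation sketch (illustrative only; the size of the rater bias is invented): if an unblinded rater who expects the new treatment to work scores the active arm slightly more favourably, an apparent benefit emerges even when the true effect is zero.

```python
import random

def trial_estimate(raters_blind: bool, n: int = 100_000, seed: int = 7) -> float:
    """Estimated improvement (active minus placebo) on a symptom rating scale.

    The true drug effect is zero; an unblinded rater who expects the drug to
    work adds a small upward bias (0.5 points, invented) to active-arm scores.
    """
    rng = random.Random(seed)
    active, placebo = [], []
    for _ in range(n):
        on_drug = rng.random() < 0.5
        rated = rng.gauss(3.0, 1.0)           # true improvement, same in both arms
        if on_drug and not raters_blind:
            rated += 0.5                       # expectation bias of the rater
        (active if on_drug else placebo).append(rated)
    return sum(active) / len(active) - sum(placebo) / len(placebo)

print(f"blinded raters:   {trial_estimate(raters_blind=True):+.2f}")
print(f"unblinded raters: {trial_estimate(raters_blind=False):+.2f}")
```

The whole of the spurious "effect" in the unblinded arm comes from the rater, not the treatment, which is exactly the pattern the empirical comparisons of blinded and unblinded trials suggest.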
Many of the interventions that have been developed for the treatment of mental health problems are more complex than drug treatments. These
include psychological treatments and novel forms of
service delivery such as case management, home treatment and assertive community treatment. Such
‘complex’ interventions tend to:
• comprise multiple inter-connecting elements;
• have mechanisms of action that are difficult to
identify;
• have effects that depend on a range of factors
including the actions of the practitioners who
deliver them (MRC, 2000).
Difficulties encountered in clinical trials of
complex interventions
Clinical trials of complex interventions face a range
of methodological problems not shared by trials of
Correspondence to: Dr M. J. Crawford, Senior Lecturer in Psychiatry, Department of Public Mental Health, Imperial
College of Science, Technology and Medicine, Faculty of Medicine (St Mary’s Campus), Paterson Centre, 20 South Wharf
Road, London W2 1PD, UK. E-mail: m.crawford@ic.ac.uk
ISSN 0954–0261 print/ISSN 1369–1627 online/02/010006–06
DOI: 10.1080/09540260120114005
simple interventions (see Table 1).

Table 1. Common problems with trials of complex interventions

• Concealment of allocated groups is difficult
• Comparison groups are often not strictly comparable
• Differential contamination of experimental and control groups by other treatments
• Quality assurance of treatments delivered
• Differential attrition from intervention groups
• Problems in defining robust outcome measures
• Attribution of change to the specific intervention
• Controlling for patient expectancies

For example, it is seldom possible to devise trials of complex interventions that are properly ‘double-blind’. While it is sometimes possible to ‘blind’ those assessing patient outcomes, blinding study participants who receive complex interventions is often difficult or impossible to achieve (Taylor & Thornicroft, 1996). In order to
consent to take part in a study, patients need to be
provided with information about the treatments on
offer. Patients who agree to take part in a drug trial
have to agree to take a tablet in a way that is specified
by the study protocol. However patients taking part
in a trial of a psychological intervention are required
to play a more active part in their treatment. It is
sometimes possible to devise a control treatment
which is sufficiently similar to the experimental
intervention to allow the patients to remain ‘blind’
(e.g. Sensky et al., 2000). However, as interventions
become more complex, it is impossible to gain
informed consent for patients’ participation without
them (and the clinical teams involved in their care)
having sufficient information to judge which group
they have been assigned to. The effect of randomization on the person’s expectations and perceptions of
the care they subsequently receive may impact on
their willingness to engage in, and therefore benefit
from, the treatment they are assigned to (Brewin &
Bradley, 1989).
Clinical trials of complex interventions may also
encounter problems maintaining the fidelity of
experimental interventions. Because complex interventions are inevitably multi-factorial and usually
depend on the relationship between those who
deliver and those who receive the intervention,
ensuring that subjects allocated to experimental
treatments all receive ‘the same’ treatment can be
difficult to achieve. Variation in the content and
application of experimental interventions can further
weaken the external validity of trial findings. For
example, in studies of brief versus standard inpatient
care for psychiatric patients, results are impossible to
interpret without knowing what other facilities (e.g.
acute day hospitals, crisis intervention teams, etc.)
were available in the study settings. Furthermore, in
trials of novel forms of treatment, those delivering
the intervention may have a level of commitment and
enthusiasm for the treatment that is difficult to
replicate (Taylor & Thornicroft, 1996).
If evidence suggests that an intervention is effective, it is important to determine which aspects of the
intervention brought about these effects. In the development of new drug treatments, by the time clinical
trials are undertaken, the drug’s mechanism of action
has been well researched and is comprehensively
understood. By contrast, complex interventions in psychiatry are usually introduced as a whole, rather than in a step-wise manner, so that it is usually impossible to determine from basic trial data which component(s) of the intervention were responsible for the
outcomes recorded. Explanatory studies of complex
interventions require a fundamentally different
approach (Schwartz & Lellouch, 1967). Absence of
knowledge about the explanatory components of an
intervention can limit its generalizability and impede
its further development and refinement (Weaver
et al., 1996).
Attempts to modify traditional trial designs
Some of the problems faced when conducting clinical trials of complex interventions may be addressed
by making modifications to traditional trial designs
(Hotopf et al., 1999). If raters involved in baseline
assessments are likely to become aware of which
treatment the patient subsequently receives, independent ‘blinded’ raters need to be responsible for
collecting follow-up data (Banerjee, 1998). ‘Patient
preference trials’, in which only patients who have no
preference between alternative interventions are
randomized, have been recommended as a way of
avoiding problems with patient expectations and
motivation in trials of complex interventions (Brewin
& Bradley, 1989; Chilvers et al., 2001). Quality control measures, such as providing standardized training and monitoring the delivery of the intervention, can help to ensure that standards of treatment are maintained (Kamb et al., 1996). While quality controls are a prerequisite for trials of complex interventions, they may
be insufficient to allow comprehensive interpretation
of study findings or generalizability of their results.
For example, in psychotherapy trials, it is possible to
measure the therapist’s technical competence (e.g.
Shaw et al., 1999) but this does not equate with skill
or experience, and hence it is not surprising that the
correlations between patient outcomes and (basic)
therapist skills are often only modest.
Despite attempts to adapt trial designs to meet the
challenges of evaluating complex interventions,
problems persist. These include residual variation in
the delivery of the interventions being assessed
(Bryant et al., 1999), and the limited ability of quantitative research methods to adequately explore and
describe the active ingredients of interventions. As a
result of these and other concerns, the view that clinical trials provide the best evidence to guide
clinical practice is frequently criticized (Britton et al.,
1998; Graham, 2000; Slade & Priebe, 2001). Some
have argued that observational and other research
methods form a better basis for the evaluation of
health services. An alternative approach may lie in
combining qualitative research with experimental
designs in the evaluation and development of complex interventions (Stokols et al., 1996; Murphy
et al., 1998).
The value of qualitative research methods in
the evaluation of complex interventions
Qualitative research methods attempt to address
research questions with a holistic perspective that
‘preserves the complexities of human behaviour’
(Black, 1994). They have been used extensively in
medical research in order to explore the behaviour
and interactions of those who deliver and receive
healthcare (Murphy et al., 1998). In doing so qualitative research methods may be particularly adept at
exploring and describing the process and outcomes
of psychological and other complex interventions
used to treat mental disorders.
It has long been argued that qualitative research
methods have a particularly important role in ‘phase
one’ or the pre-trial, modelling phase of the evaluation of a complex intervention (MRC, 2000).
Qualitative methods used at this stage of an evaluation may help to identify variables for the study and
help in the development of appropriate testable
hypotheses. Less attention has been paid to the role
of qualitative methods at later stages in the evaluation of interventions. However, qualitative research
can perform three important functions in this context. First, collection and analysis of qualitative data
within a trial may assist the process of examining
the fidelity of the intervention. Second, in highly
complex experimental studies (e.g. trials of case
management) information about the process and
interaction of the intervention elements can help
identify ‘active ingredients’ and provide a more
secure empirical basis for investigators to interpret
their outcome data. Third, it may also provide an
opportunity for post hoc analysis of factors associated with positive and negative outcomes both
within and between study groups (e.g. Bradley
et al., 1999).
While clinical trials provide aggregate measures of
effect, any intervention (but especially complex
interventions) may produce very different effects in
sub-groups of subjects. Attempts to explore these
differences using sub-group analysis are problematic
(Davies et al., 2000). Qualitative research methods
applied to experimental research may enable an
exploration of the effects of interventions in subgroups of people and enhance the explanatory power
of the study (Powell & Davies, 2001). Similarly, if results show no difference between experimental and control groups, qualitative data may enable reasons for the apparent absence of effect to be explored (Murphy et al., 1998). The potential benefits of using qualitative research methods in the ways described earlier are summarized in Table 2.

Table 2. Potential benefits of qualitative research in the evaluation of complex interventions

Pre-trial
• Qualitative research can help to identify the active ingredients of interventions
• Contribute to the development and refinement of experimental interventions
• Assist with the development and selection of appropriate outcome measures
• Generate testable hypotheses for examination in the subsequent clinical trial

During a trial
• Qualitative research methods can examine the fidelity of treatment delivered to those receiving experimental and control treatments
• Provide further information about the active ingredients of interventions
• Be used to explore interactions between subjects, interventions and those who deliver them

Post-trial
• Qualitative research can be used to explore reasons for the positive (or negative) findings of a clinical trial
• Provide further information about components of interventions that may lead to differences in patient outcomes
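The pitfalls of sub-group analysis noted above can be illustrated with a short simulation sketch (illustrative, with invented parameters): when a trial with no true effect anywhere is divided into ten sub-groups and each is tested at the nominal 5% level without adjustment, roughly four in ten such trials will show at least one spuriously ‘significant’ sub-group.

```python
import math
import random

def spurious_subgroup_rate(n_trials: int = 500, n_subgroups: int = 10,
                           per_arm: int = 20, seed: int = 3) -> float:
    """Fraction of null trials showing >=1 nominally 'significant' sub-group.

    Every outcome is N(0, 1) in both arms (no real effect anywhere); each
    sub-group gets an unadjusted two-sided z-test at the 5% level.
    """
    rng = random.Random(seed)
    se = math.sqrt(2.0 / per_arm)            # known unit variance in both arms
    hits = 0
    for _ in range(n_trials):
        for _ in range(n_subgroups):
            arm_a = [rng.gauss(0.0, 1.0) for _ in range(per_arm)]
            arm_b = [rng.gauss(0.0, 1.0) for _ in range(per_arm)]
            diff = sum(arm_a) / per_arm - sum(arm_b) / per_arm
            if abs(diff / se) > 1.96:        # nominal p < 0.05
                hits += 1
                break
    return hits / n_trials

print(f"trials with a spurious sub-group finding: {spurious_subgroup_rate():.0%}")
```

This is why exploratory sub-group findings from quantitative data alone are so hard to interpret, and why qualitative data offer a complementary route to understanding differential effects.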
Examples of qualitative data collected in conjunction with clinical trials of psychiatric interventions
The authors are currently involved in a number of
studies that illustrate how qualitative research may
be employed either to develop future trials or to
enhance clinical trials that are undertaken
concurrently.
Qualitative research in advance of a trial: developing
new interventions, testable hypotheses and appropriate
outcome measures
A key priority for mental health services is to develop
effective methods for involving service users in the
management and development of services. However,
recent work completed by the team shows that
models of user involvement (UI) remain undeveloped and that there is no significant evidence base
relating to effectiveness. A further complicating
factor that inhibits formal evaluation of UI is the
absence of appropriate outcome measures. Outcome
measures conventionally employed in the evaluation
of psychiatric interventions (e.g. inpatient admission, symptomatology) represent inappropriate outcome measures for UI. Even ‘patient satisfaction’ measures fail to adequately capture the quality of the user’s experience of care, user-sensitivity and empowerment. We are applying qualitative research methods
to study UI practice and identify a model of user
involvement, which could potentially be assessed
using cluster-randomized designs.
Qualitative data aimed at improving generalizability of
study findings
As many as 40% of Accident and Emergency department (AED) attenders have consumed alcohol prior
to their presentation (Holt et al., 1980). While there
is good evidence to suggest that brief interventions
aimed at helping those who misuse alcohol can lead
to reduced alcohol consumption (Wilk et al., 1997)
the value of these interventions administered in the
context of the emergency medical setting is unclear.
Before an intervention can be offered to patients a
system for identifying those who drink excessively
needs to be established. A previous attempt to investigate the effects of an AED-based intervention failed because only a small minority of patients who attended the department were screened for excessive drinking (Peters et al., 1998). A clinical trial is
being conducted at St Mary’s Hospital where screening rates are far higher (Huntley et al., 2001). In
parallel with the trial, qualitative data is being collected from in-depth interviews with staff in the AED
in order to explore reasons for the high rate of screening that takes place. The rationale for the collection
of this data is that, should the intervention prove to
be effective, information about how to achieve high
levels of screening will be at least as important as
information about the intervention if this service is to
be developed at other sites.
Using qualitative research to assess the ‘active
ingredients’ of complex interventions
The value of case management for people with severe
mental illness is one of the enduring controversies of
community psychiatry (Tyrer et al., 1995; Marshall,
1996). Concerns about the efficacy of the Care Programme Approach (CPA) and Care Management
have led researchers to assess whether enhanced
patient outcomes can be achieved by incorporating
elements of Assertive Community Treatment
(ACT), for which there is a secure evidence-base
(Marshall et al., 2000). The recent UK700 case
management trial (Burns et al., 1999) is the example
par excellence of this trend. The UK700 study borrowed elements from the ACT model and tested the
hypothesis that outcomes could be improved by
reducing CPA caseloads to ACT norms of 10–15
patients. It was argued that this would enable more
intensive casework with individual patients and
encourage ‘assertive outreach’ (UK700 Group,
1999). Despite strong evidence that intervention
fidelity was achieved (Burns et al., 1999) the main
outcome findings (duration of hospitalization over
2 years) did not support the hypothesis. Burns et al. (1999) candidly acknowledged this pitfall of RCTs of complex interventions when they wrote: ‘We may
not have measured those aspects of care which have
most relevance to outcome’. We believe this problem
to be solvable and one of our team (TW) is currently
undertaking a multi-method analysis of process
during the trial in which qualitative methods have
been employed to identify factors and mechanisms
influencing outcome.
Qualitative data collection aimed at adding to the
explanatory power of a trial
Another recent clinical trial of case management
compared two models of care delivered by mental
health teams in an inner London Borough. The trial
compared standardized measures of patient outcome
among those randomized to either case managers
working as part of a multi-disciplinary healthcare
team or to care managers working with a separate
team providing social services. In recognition of the
importance of contextual factors in determining the
treatment that patients receive and the effects that
these factors can have on patient care, qualitative
data was also collected through patient interviews
and non-participant observation of meetings
between patients and staff. This data will be used to
generate hypotheses about reasons for positive or
negative outcomes from the trial. It will enable exploration of factors such as the impact of the trial on team working, and the financial and other constraints that influence the service that care managers provide.
Potential obstacles to the collection and
analysis of qualitative and quantitative data in
the evaluation of interventions in psychiatry
Despite increasing support for combining quantitative and qualitative research methods in the
evaluation of interventions aimed at improving
health (MRC, 2000), several factors may make this
approach difficult to achieve (Baum, 1995).
Researchers with expertise in qualitative and quantitative methods often have different academic
backgrounds and may have a poor understanding of
each other’s disciplines. Practical differences in the
approach to data collection may complicate the
study design. For instance, those collecting qualitative data are likely to require information about
which treatment patients are receiving but this may
affect arrangements for ‘blinding’ patients and those
collecting quantitative data.
Issues concerning the synthesis and interpretation
of data are yet to be fully considered. To what extent,
and under what circumstances, can the findings of
qualitative research inform (or even challenge) the
analysis or interpretation of data from the trial? How
do we measure and assess the quality of trials that
combine qualitative and quantitative methods? Such
questions can only be addressed through conducting
and reflecting on the results of further work that
combines these methods.
Conclusions
There continues to be a need to extend the evidence
base of psychiatry. Clinical trials should play a central part in generating evidence. By combining
qualitative and quantitative research methods it may
be possible to increase the explanatory power of trials. The collection and analysis of qualitative data
can enable the active ingredients of interventions to
be explored and provide a more complete description
of the process of interventions that take place during
a trial. In doing so, qualitative methods may help to
‘close the gap between discovery and implementation’ of new treatments (Jones, 1995) by providing a
more complete picture of the content and effects of
interventions used in the management of mental
disorders.
References
BAUM, F. (1995). Researching public health: behind the
qualitative–quantitative methodological debate. Social
Science and Medicine, 40, 459–468.
BANERJEE, S. (1998). Randomised controlled trials.
International Review of Psychiatry, 10, 291–303.
BLACK, N. (1994). Why we need qualitative research.
Journal of Epidemiology and Community Health, 48,
425–426.
BRADLEY, F., WILES, R., KINMONTH, A., MANT, D. &
GANTLEY, M. (1999). Development and evaluation of
complex interventions in health services research: case
study of the Southampton heart integrated care project
(SHIP). British Medical Journal, 318, 711–715.
BREWIN, C.R. & BRADLEY, C. (1989). Patient preferences
and randomised clinical trials. British Medical Journal,
299, 313–315.
BRITTON, A., THOROGOOD, M., COOMBES, Y. &
LEWANDO-HUNDT, G. (1998). Qualitative outcome
evaluation with qualitative process evaluation is best.
British Medical Journal, 316, 703.
BRYANT, M., SIMONS, A. & THASE, M. (1999). Therapist
skill and patient variables in homework compliance:
controlling an uncontrolled variable in cognitive therapy
outcome research. Cognitive Therapy and Research, 23,
381–399.
BURNS, T., CREED, F., FAHY, T., THOMPSON, S., TYRER,
P. & WHITE, I. (1999). Intensive versus standard case
management for severe psychotic illness: a randomised
trial. Lancet, 353, 2185–2189.
CHILVERS, C., DEWEY, M., FIELDING, K., GRETTON, V.,
MILLER, P., PALMER, B., WELLER, D., CHURCHILL, R.,
WILLIAMS, I., BEDI, N., DUGGAN, D., LEE, A. &
HARRISON, G. (2001). Antidepressant drugs and generic
counselling for treatment of major depression in primary
care: randomised controlled trial with patient preference
arms. British Medical Journal, 322, 772–775.
DAVIES, H.T.O., NUTLEY, S.M. & TILLEY, N. (2000).
Debates on the role of experimentation. In: H.T.O.
DAVIES, S.M. NUTLEY & P.C. SMITH (Eds), What
works? Evidence-based policy and practice in public services.
Bristol: Policy Press.
DAY, S.J. & ALTMAN, D.G. (2000). Blinding in clinical
trials and other studies. British Medical Journal, 321, 504.
GRAHAM, P. (2000). Treatment interventions and findings
from research: bridging the chasm in child psychiatry.
British Journal of Psychiatry, 176, 414–419.
GREENHALGH, T. (1997). How to read a paper: getting
your bearings. British Medical Journal, 315, 243–246.
HARRIS, L., DAVIDSON, L. & EVANS, I. (1998). Health
evidence bulletin Wales: mental health. Cardiff: Welsh
Office.
HOLT, S., STEWART, I.C., DIXON, J.M., ELTON, R.A.,
TAYLOR, T.V. & LITTLE, K. (1980). Alcohol and the
emergency service patient. British Medical Journal, 281,
638–640.
HOTOPF, M., CHURCHILL, R. & LEWIS, G. (1999). Pragmatic randomised controlled trials in psychiatry. British
Journal of Psychiatry, 175, 217–223.
HUNTLEY, J.S., BLAIN, C., HOOD, S. & TOUQUET, R.
(2001). Improving detection of alcohol misuse in
patients presenting to an accident and emergency
department. Journal of Accident and Emergency Medicine,
18, 99–104.
JONES, R. (1995). Why do qualitative research? British
Medical Journal, 311, 42.
KAMB, M.L., DILLON, B.A., FISHBEIN, M. & WILLIS, K.L.
(1996). Quality assurance of HIV prevention counselling in a multi-centre randomised controlled trial.
Project RESPECT study group. Public Health Reports,
111, 99–107.
LEFF, J.P. & WING, J.K. (1971). Trial of maintenance
therapy in schizophrenia. British Medical Journal, iii,
599–604.
MARSHALL, M. (1996). Case management: a dubious
practice. British Medical Journal, 312, 523–524.
MARSHALL, M., GRAY, A., LOCKWOOD, A. & GREEN, R.
(2000). Case management for people with severe mental
disorders (Cochrane Review). In: The Cochrane library
(Issue 4). Oxford: Update Software.
MEDICAL RESEARCH COUNCIL (MRC) (1948). Streptomycin treatment for pulmonary tuberculosis. British
Medical Journal, ii, 769–782.
MEDICAL RESEARCH COUNCIL (MRC) (2000). A framework for development and evaluation of RCTs for
complex interventions to improve health. London:
MRC.
MURPHY, E., DINGWALL, R., GREATBATCH, D., PARKER, S.
& WATSON, P. (1998). Qualitative research methods in
health technology assessment: a review of the literature.
Southampton: Health Technology Assessment.
PETERS, J., BROOKER, C., MCCABE, C. & SHORT, N.
(1998). Problems encountered with opportunistic
screening for alcohol-related problems in patients
attending an accident and emergency department.
Addiction, 93, 589–594.
POCOCK, S.J. (1983). Clinical trials: a practical approach.
Chichester: Wiley.
POWELL, A. & DAVIES, H.T.O. (2001). Qualitative
research may be more appropriate. British Medical
Journal, 322, 929.
SACKETT, D.L. & WENNBERG, J.E. (1997). Choosing the
best research design for each question. British Medical
Journal, 315, 1636.
SCHWARTZ, D. & LELLOUCH, J. (1967). Explanatory and
pragmatic attitudes in therapeutic trials. Journal of
Chronic Diseases, 20, 637–648.
SENSKY, T., TURKINGTON, D., KINGDON, D., SCOTT, J.L.,
SCOTT, J., SIDDLE, R., O’CARROLL, M. & BARNES, T.R.
(2000). A randomized controlled trial of cognitive-
behavioral therapy for persistent symptoms in schizophrenia resistant to medication. Archives of General
Psychiatry, 57, 165–172.
SHAW, B.F., ELKIN, I., YAMAGUCHI, J., OLMSTED, M.,
VALLIS, T.M., DOBSON, K.S., LOWERY, A., SOTSKY,
S.M., WATKINS, J.T. & IMBER, S.D. (1999). Therapist
competence ratings in relation to clinical outcome in
cognitive behaviour treatment of depression. Journal of
Consulting and Clinical Psychology, 67, 837–846.
SCHULZ, K.F., CHALMERS, I., HAYES, R. & ALTMAN, D.G.
(1995). Empirical evidence of bias: dimensions of
methodological quality associated with estimates of
treatment effects in controlled trials. Journal of the
American Medical Association, 273, 408–412.
SLADE, M. & PRIEBE, S. (2001). Are randomised controlled trials the only gold that glitters? British Journal of
Psychiatry, 179, 286–287.
STOKOLS, D., ALLEN, J. & BELLINGHAM, R.L. (1996). The
social ecology of health promotion—implications for
research and practice. American Journal of Health
Promotion, 10, 247–251.
TAYLOR, R. & THORNICROFT, G. (1996). Uses and
limits of randomised controlled trials in mental health
service research. In: G. THORNICROFT & M.
TANSELLA (Eds), Mental health outcome measures.
London: Springer.
TYRER, P., MORGAN, J., VAN HORN, E., JAYAKODY, M.,
EVANS, K., BRUMMELL, R., WHITE, T., BALDWIN, D.,
HARRISON-READ, P. & JOHNSON, T. (1995). A randomised controlled study of close monitoring of
vulnerable psychiatric patients. Lancet, 345, 756–759.
UK 700 GROUP (1999). Comparison of intensive and
standard case management for patients with psychosis:
rationale for the trial. British Journal of Psychiatry, 174,
74–78.
WEAVER, T., RENTON, A., TYRER, P. & RITCHIE, J. (1996).
Combining qualitative studies with randomised controlled trials is often useful. British Medical Journal, 313, 629.
WILK, A.J., JENSEN, N.M. & HAVIGHURST, T. (1997).
Meta-analysis of randomised control trials addressing
brief interventions in heavy alcohol drinkers. Journal of
General Internal Medicine, 12, 274–283.