***Please do not quote without permission***
A031: Insights into the modelling process: qualitative interviews
with modellers
Samantha Husbands, Susan Jowett, Pelham Barton, Joanna Coast
Health Economics Unit, School of Health & Population Sciences, University of Birmingham
Correspondence to:
Samantha Husbands
Health Economics Unit, School of Health and Population Sciences, University of
Birmingham, Birmingham, B15 2TT
Email: skh161@bham.ac.uk
Acknowledgements
We would like to thank all those who took part in this research and the School of Health &
Population Sciences, University of Birmingham for the doctoral funding that has made this
work possible.
Introduction
Decision-analytic models have increasingly become an essential component of health
technology adoption decisions within the UK (Williams et al, 2008) and within health care
systems internationally (Australian Government Department of Health, 2014; Canadian
Agency for Drugs and Technologies in Health, 2006). They are used in economic analyses
to help decision-makers ‘decide whether new health technologies represent sufficient value
for money to be funded’ (Drummond et al, 2005, p277). The models compare the costs and
health consequences of competing interventions by synthesizing all relevant evidence into a
framework and generating results in terms of relative cost-effectiveness (Briggs et al, 2006).
The National Institute for Health and Care Excellence (NICE) in the UK encourages the use of these models in their technology appraisal process through their request that cost-effectiveness be ‘considered over an appropriate time horizon to reflect UK practice and patients, and…compare treatment options that represent routine care and/or current best practice for the relevant patient groups’ (NICE, 2008, p28). Indeed, the advantage of using
decision-analytic models is that they allow for all information relevant to a decision problem
to be used via both extrapolation of data beyond that observed in a trial, and synthesis of
head-to-head comparisons where data on relevant interventions do not exist (Briggs et al,
2006).
A general search of the modelling literature has highlighted a number of papers which have
found errors in published and policy level decision models. Chilcott et al (2010a) cite the
results of a research study demonstrating that 30% of models submitted to Australia’s health
decision-making body, the Pharmaceutical Benefits Advisory Committee (PBAC), were
problematic. This study by Hill et al (2000) established that poor data assumptions and problems in the identification of utility estimates were responsible for the poor quality of the submitted
models. Indeed, Chilcott et al (2010a) argue that although mathematical and ‘technical’ errors
are an unavoidable part of model development, process errors can also be responsible for
poor model inputs and outcomes. Roberts et al (2006), too, found in their systematic review
of cost-effectiveness models for Chlamydia screening that most of the published models were
using inappropriate structures. The majority were using static rather than dynamic model
structures which cannot account for the ‘impact of re-infection, continued transmission, and
the change in prevalence over time that might result from a screening programme’ (Roberts et
al, 2006, p193) and thus are likely to lead to misleading cost-effectiveness estimates. Again
process issues, in identifying an appropriate model structure and understanding the natural
history of a disease, appear to be responsible for these modelling problems.
Chilcott et al (2010a) argue that there is very little good practice guidance available on the process of model development; indeed, a systematic review of the modelling literature conducted for this research (Husbands et al, 2013) found that only a small number of papers offer guidance on the process of model building. Of the 23 papers reviewed, only four present stage-by-stage guidance, and only one of the four offers a realistic depiction of the process, with iterations between modelling stages. The majority of papers focus on only one stage or present the stages as a chronological sequence of distinct modelling operations. The focus of most of the papers is on model structure, but only insofar as offering guidance on how to select an appropriate type of model for the decision problem. Guidance on the more methodological aspects of structural development and other areas of model building tends to be descriptive rather than procedural and lacking in detail. Many of the papers list guidance for
the specific stages without explaining how these modelling requirements may be achieved. A
common example of this is the instruction from Sun & Faunce (2008, p314) that ‘the
structure of the model is developed on the basis of an understanding of the nature of the
disease progression’. The methods for gaining this understanding are not covered in this or
most other papers. Perhaps if in-depth, process guidance on model development were more
widely available, the type of errors reported in the papers above may be less common.
Clearly, health care systems with limited resources need to be able to rely on models to produce optimal decisions and recommend health technologies which maximise expected total outcome, i.e. offer the most benefit for the least cost. The suggestion is that current modelling processes, or at least the documentation of them, could be improved. Therefore the
objective of this study was to establish through in-depth discussion with modellers, how
model development is currently being undertaken, and whether there are standard methods
and processes being used. The paper next reports on the methods and the preliminary findings
obtained in relation to current model processes and possibilities for further research and
developing guidance. The discussion will summarise the key findings of the study and how
they relate to the work of others who have undertaken similar research. The strengths and
weaknesses of the study will be reported and the implication of the findings for future
investigations considered.
Methods
The research was designed to capture views and perceptions about the process of decision-analytic modelling from modellers in two health systems and from different organisational perspectives. In-depth qualitative methods were used.
The informants were sampled purposively, with a focus on choosing ‘information-rich’ cases
for in-depth study of the issue under inquiry (Patton, 2002). Therefore informants were
selected on their ability to develop decision-analytic models and discuss their experiences and
opinions of the process. The intention was to sample a breadth of modellers to potentially
gain narratives on different modelling processes and opinion on the issues around model
building from a range of perspectives. This involved recruiting modellers who work in an
academic environment both in the UK and in Canada, as well as individuals working for
consultancies and pharmaceutical companies. The intention was to also sample individuals
who were at different stages of their career to see whether this raised different issues. In the
case of UK academics, informants were generally identified via their university online staff
profiles. Snowball sampling, where future interviewees are identified by existing informants
(Patton, 2002), was used to gain access to the majority of those working internationally and
in the private sector. Senior and more junior modellers were sampled on the basis of their job titles (e.g. professor, research assistant) and experience. The intention was not to sample
from the different groups of modellers equally but to cover the range of views.
Given that the objective of this study was to explore the process of decision-analytic model
building via the practice and opinions of modellers, in-depth, face-to-face, unstructured,
qualitative interviews were used. To establish a detailed account of the methods and
processes that modellers are using to develop models, as well as their reflections on them, the
‘free-flowing’, ‘formless’ and conversational nature (O’Reilly, 2009, p126) of the
unstructured interviews allowed informants to speak freely about their experiences.
Open-ended questioning was used, encouraging informants to lead the direction of discussion
and ‘talk about their experiences, perceptions and understandings’ (Rubin & Rubin, 2005,
p135) in terms of what they considered to be important and relevant to the research. Allowing
the informant to lead the interview was optimal given the specialist nature of the research
topic. Questions asked were broad in the first instance and then focused on probing further
the issues that informants discussed. To facilitate the flow of discussion and remind the
interviewer of relevant issues, a broad topic guide provided a general framework for the
interview. Informants were asked questions on current model development, good and poor
modelling processes, modelling guidance and direction of potential further research. The
initial topics reflect the objective to investigate the way in which informants are undertaking
modelling processes but also to encourage their reflections on model development in general.
It was anticipated that they would be more likely to discuss poor practice and improvements
away from their own personal experiences. Informants were also asked for their opinion on
guidance and where they felt that further investigation into the modelling process to develop
guidance would be beneficial.
Interviews were audio-recorded with the permission of the informant to provide a rich source
for data analysis. They were conducted and analysed in waves so that emerging themes could
be followed-up and discussed with future informants. All interview recordings were
transcribed verbatim and analysed using Strauss and Corbin’s (1998, p57) ‘microscopic examination of data’ approach, which involves scrutinizing the contents of interviews line-by-line. The analysis of the first wave of interviews was used to identify initial themes and
concepts, which were developed into codes to summarise a particular idea or issue. A coding
structure was generated to demonstrate the hierarchical relationship between codes and to
facilitate the comparison of informant experience and opinion on particular aspects of model
building. Primary codes were generated to highlight the main topics discussed by the
informants and secondary codes then to describe experience and opinion related to these. The
codes generated through the first set of interviews were used to code future interview
transcripts, with new codes being added where new themes emerged. The framework created
by the coding structure was then used to develop descriptive accounts, which involved
‘looking within a theme, across all cases in the study and noting the range of perceptions,
views, experiences or behaviours which have been labelled or tagged as part of that theme’
(Ritchie & Lewis, 2003, p238). The aim of this comparison of the informants’ responses was
particularly to see whether there appeared to be standard methods for model development,
common problems and similar suggestions for where further investigation and guidance is
needed to improve general modelling practice. Interviews were conducted until data
saturation was reached, where the descriptive accounts were demonstrating that little new
information was being gained on the above topics of interest (Ritchie, Lewis, & Elam 2009).
As this study is a work in progress, the findings presented here draw on the full analysis of the first eleven interviews conducted and early impressions from the remaining
thirteen. Findings are presented in relation to the primary themes generated through the
analysis conducted to date, with quotes to illustrate important issues relevant to the research
objectives. Quotes are presented on the basis of being typical of a theme or where the issues
discussed are particularly salient. Quotes are presented verbatim with ellipses used to indicate
missing text. Umms, errs and repeats of words that do not add to meaning are removed
without use of ellipses.
Findings
Twenty-four interviews were conducted. Table 1 summarises the characteristics of the sample
of interview informants. The experience of the informants in relation to modelling is then
discussed.
Table 1: Characteristics of interview informants

Characteristic                    Whole sample    Full analysis
Gender
  Males                           16              6
  Females                         8               5
Level of Modelling Experience
  Junior                          11              6
  Senior                          13              5
Nature of work
  Academic – UK                   11              10
  Academic – Canada               7               0
  Commercial – UK                 6               1
The characteristics demonstrate a gender difference of sixteen male to eight female
informants in the full sample. The sample is almost equally divided in terms of those
considered as junior and senior modellers. Senior informants are those who undertake a
management and supervisory role, whilst junior informants are those whose role is research
orientated. As expected, the senior informants generally had more years’ experience of
modelling than the junior informants. In terms of the nature of the informants’ modelling
work, the majority work within a UK academic environment, with the next biggest group
being Canadian academics. The remaining six 'commercial' informants work for either a
consultancy or pharmaceutical company within the UK. However, there are a number of
informants who have worked across more than one of these contexts, namely those who have
worked in academia and now work for a consultancy. The table also demonstrates the
characteristics of the whole sample as compared to the informants who have had their
interview transcripts fully analysed. It shows that the majority of informants whose
interviews have received a full analysis are UK academics, with only partial analysis of the
Canadian and commercial informants to be presented.
Analysis of the interview transcripts further demonstrated differences between the informants, in terms of the diseases and conditions that they have modelled and the types of models that they have used.
“Most of the models I’ve worked on have been in chronic conditions…” (Informant 6,
Academic UK, Junior)
“They’ve all been basic Markov models built in Excel…” (Informant 4, Academic
UK, Junior)
Informants also differed in terms of how they learnt to model and the nature of modelling
work that they currently undertake.
“The first modelling I did was at university…on the course [MSc Health Economics]”
(Informant 7, Academic UK, Junior)
“I had to look up decision trees and find out what they were all about and I did a very
basic model…" (Informant 1, Academic UK, Senior)
"Mainly supervision now rather than actually doing it…” (Informant 5, Academic
UK, Senior)
Two of these quotes demonstrate rather contrasting experiences of how these informants
learned to model, with Informant 7 doing so in a seemingly structured, educational
environment and Informant 1 appearing to have had to learn more informally. The majority
of informants appeared to have developed their first modelling skills through Master’s
courses, although some stated that they learnt through their own research and/or the guidance
of other modellers. The latter quote illustrates the situation of all of the senior modellers in
that where they were once involved in the 'hands on', technical aspect of modelling, they now
act in an almost exclusively supervisory capacity. The junior informants all reported
undertaking hands on modelling in the programming and software implementation of their
models, although a few of them discussed also having to supervise others.
The interviews generated two major themes: Process and Assessment. These in turn each
generated three subthemes, each of which is explored below.
Process
The accounts of the informants on the modelling process suggest that a similar set of stages is
followed by all, irrespective of the context of their work. The collective analysis of the
interviews found that the modelling process appears to involve three broad elements: building
the model structure, populating the structure and checking the structure. The majority of the
informants suggested that these three elements involve the iterative development of a
complete model structure, with the input of clinical experts.
Building the structure
Informants’ comments under this theme concerned their direct references to the building of
the structure itself, namely the model pathways. The processes used to develop the pathways
appeared to vary between informants in terms of how they use the literature, where they
involve clinicians and the methods used to draft an initial model structure.
The suggestion from the majority of the informants is that the modelling process begins with
a look at, or a search of, the literature. However, the reason for doing this appears to be
different among informants. While most imply that they use the literature initially to gain an
understanding of what they are going to be modelling and how, some also go further and say
that this information feeds directly into a model structure. In terms of understanding, the
informants cited methods such as reading the clinical literature in the disease area of the
model, reviewing the economic literature for similar models and first-hand visits to clinics or
hospitals where a particular treatment is being used. The majority of the informants suggested
that the initial use of the literature is concerned with gaining information on the natural
history of a disease and its progression, current practice and the patient population who are
affected, the intervention(s) being modelled and the outcomes and possible adverse events
associated with the various treatments.
“Then you’re looking at the epidemiology literature so what are the long-term bad
outcomes of a child having a [particular infection type], well they might get [severe
disease], they might get [serious health outcome], so obviously those are things that
you want the model to be able to cope with.” (Informant 10, Academic UK, Senior).
“The easiest way to get a handle on the condition you’re going to model…if there
have been any other models developed, what did they do…?” (Informant 6, Academic
UK, Junior).
“So I actually observed the intervention being done and I observed how it was done
and I observed a training day and I had all the training manuals, so I learnt about it
very much as a clinician would learn about it, because it was a new procedure...”
(Informant 8, Academic UK, Junior).
The next stage of the process is then divided between informants who use this information as
a basis on which to discuss and build an initial structure with clinicians, and those who
appear to use it immediately to develop a structure themselves. Further, there were
differences in the processes used by those who appear to initially build a structure directly
from the literature. This is in the sense that for some the literature only informs an initial draft
structure which can then be reviewed and discussed with clinicians at the beginning of the
process, whereas for others it forms the final structure that will be populated with data and
perhaps is only discussed with clinicians at a later stage.
“Yeah always at the start [we involved clinicians], say what we’re trying to do, often
try and sort of work out a diagram together” (Informant 24, Academic Canada,
Senior).
“We’ve got a structure from the literature that then we go back to them and say ‘okay
does this fit with what you think?’” (Informant 5, Academic UK, Senior).
“Very late in the model development process, that’s probably not a good practice, but
it has worked for me that way, very late on I involve my co-investigators
[clinicians]...” (Informant 19, Academic Canada, Senior).
The above quotations demonstrate a distinction between the informants who involve
clinicians in the development of their model structure and those who do not. A small number
of the informants suggested that they would only involve clinicians in the latter stages of
model development, perhaps to identify data or even only to provide a final external
validation of the structure in its entirety. These informants are all either Canadian academics
or commercial modellers. The reasons given for not involving clinicians early on in the
process were due to the informants’ already having a clinical background, being unable to
gain access to clinical opinion or because they could use the complete structures of models
that had already been published in the particular disease area. Informant 17 suggested that in
disease areas where there were a lot of previous models, he would not go through the process
of developing an entirely new structure but instead use those of existing models as well as his
own knowledge to build one. However, he then also stated that he would go through the
process of building a new structure if there were not many existing models or if the previous
models did not seem to be of a high standard.
“I am also a physician by training so I do model building with a pretty good
knowledge of the clinical know how’s that go into the health condition that I am
modelling” (Informant 19, Academic Canada, Senior).
“You get your data, you can build your model based on your data and at the end of
the day you have to validate your model...either talk to clinicians or experts to
validate your model” (Informant 21, Academic Canada, Junior).
“If for example, we were modelling a metastatic cancer I think the pathways are very
well established. For those that we know we build a three stage Markov model and we
wouldn’t give much thought to that, we’d probably just double check ‘is there any
difference at all about this particular type of cancer or that particular treatment that
means the pathways might be different?’ but more than nine times out of ten it
wouldn’t be so we would almost just launch straight in with the data analysis...”
(Informant 17, Commercial, Senior).
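The ‘three stage Markov model’ that Informant 17 refers to can be illustrated with a minimal cohort simulation. The sketch below is purely illustrative: the state names and all transition probabilities are hypothetical placeholders, not values taken from any informant’s model.

```python
import numpy as np

# Illustrative three-state cohort Markov model (e.g. stable -> progressed
# -> dead), of the kind Informant 17 describes for metastatic cancer.
# All transition probabilities are hypothetical placeholders.
states = ["stable", "progressed", "dead"]
P = np.array([
    [0.85, 0.10, 0.05],   # from stable
    [0.00, 0.80, 0.20],   # from progressed
    [0.00, 0.00, 1.00],   # dead is absorbing
])

cohort = np.array([1.0, 0.0, 0.0])   # whole cohort starts in 'stable'
trace = [cohort]
for cycle in range(20):              # run 20 model cycles
    cohort = cohort @ P              # one cycle's transitions
    trace.append(cohort)

# Each row of the trace sums to 1 (the cohort is conserved): a simple
# internal check that the transition matrix is well formed.
assert all(abs(row.sum() - 1.0) < 1e-9 for row in trace)
```

In practice the trace would be combined with per-state costs and utilities to produce cost-effectiveness outputs; the point here is only the structural simplicity that leads informants such as Informant 17 to reuse it for well-established pathways.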
Of the majority of informants who do use clinicians in the development of the model
pathways, different methods were cited for gaining the relevant information from them. A
number of the informants suggested that they would draft a structure using the literature
before their first meeting with the clinicians, to act as a starting point for discussion and
potential revision based on their opinions and comments. A reason given for the use of this
method was that it makes it easier to engage the clinicians and is a more practical way of
undertaking this stage of the process. Other informants however, suggested that clinician
involvement is required before any model structure is drafted, a reason being that a modeller
would not be able to easily translate clinical information into model pathways due to a lack of
understanding.
“We asked them to write down the pathways like in a narrative and we got some
really strange things coming back which, it wasn’t useable, so I’ve now gone back to
‘this is my idea of it, how would you like to change it’. Ideally, if we had time I’d
bring them in and I’d show them, like we’d do it together maybe with bits of paper
and place things somewhere, but it’s just not feasible with their working
hours...” (Informant 8, Academic UK, Junior).
“In terms of the clinical literature, obviously most of it goes over my head and I don’t
understand it so most of that information I try and get from people [clinicians]
because they can translate into a language I can understand” (Informant 4, Academic
UK, Junior).
Informant 8 appeared to reflect on the processes she uses as not being best practice,
suggesting that physically drawing up a model structure in conjunction with clinicians would
be better. Indeed, among the informants who have stated that they use clinicians in the
development of the first draft of their model structure, there were differences in terms of how
this process progresses. Whilst some stated that they develop a diagram of the model
structure in conjunction with clinicians in an initial meeting, others suggested that they may
use this meeting to gather the necessary information from them and then separately draft an
initial structure.
“We drew it up on the wall, the pathways and the assumptions and what would
happen, with five specialists in the room...” (Informant 1, Academic UK, Senior).
“I’m not one of those people who like to put everything down on paper massively first
before I start putting it in Excel because I find it a lot easier to try and link things and
see how they work and kind of develop the structure that way.” (Informant 13,
Commercial, Senior).
The above quotations also demonstrated the distinction between the informants who first
draft their model structures on paper, and those who implement it straight into a software
platform.
Populating the structure
This theme concerned the informants’ discussions of data and its use to inform model parameters. It demonstrated that the informants use data in different ways in relation to the model structure and use different methods for making data assumptions.
The majority of informants gave the impression that building the structure was separate from
populating the structure in that the structure is developed and 'fixed' before parameter data is
added. However, a significant number of the informants suggested that the initial structure
that is developed in line with the literature and/or clinician opinion could change based on the
availability of data.
"So you've got this iterative process of putting a model together and then a case of
applying the data to the model" (Informant 2, Academic UK, Junior).
“Then you try and fill it in and you know there’s data missing or you have to
restructure it to sort of make it fit with what you have” (Informant 23, Academic
Canada, Junior).
Where the impression given by the latter informants was that adapting the structure to fit the
available data is a standard aspect of the process, a number of the informants, including
Informant 17, suggested that this approach should not be considered as good practice.
“I’m never really a fan of building models just around the data you’ve got because that
can lead to all sort of biases especially if the data is provided by the manufacturer.”
(Informant 17, Commercial, Senior).
In the case of those who do not alter the structure for the data it appears that they would
instead make data assumptions to populate parameters for which data are not available. The
informants offered different methods for making data assumptions, either finding a proxy
value in the literature or using clinical opinion to populate a parameter.
“Let’s assume that dizziness quality of life is the same as nausea or something else
that we have got evidence for” (Informant 17, Commercial, Senior).
“If there are going to be big data gaps thinking about expert elicitation processes you
might need...” (Informant 9, Academic UK, Senior).
The suggestion from the majority of the informants however, was that even if data
assumptions are not made by clinicians they will be validated by them.
“You’d be foolish just to make an assumption and not have anything to back it up”
(Informant 12, Commercial, Senior).
Checking the structure
Almost all of the informants referred to checking the model structure in some respect, through external validation from clinicians and/or internal validation in terms of the workings of the
model. External validation appeared to be mentioned and carried out less often.
Most of the informants mentioned clinician validation as a method for checking the structure
of a model. The impression from the majority was that this is not a distinct stage but instead
something that is iterative and happens throughout the development of a structure at regular
time intervals or perhaps each time it has been updated. The informants suggested that
clinicians are also used to check data inputs, assumptions and model results to assess whether
they are representative of the nature of a particular disease.
“Every four to six weeks we have a meeting, we take it back to the clinicians and say
does it look right, does the model make sense in light of the clinical area?”
(Informant 7, Academic UK, Junior).
The majority of informants also stated that they carry out internal validation on a model
structure, involving a more ‘technical’ check to ensure that a model is working correctly
within its software platform, referred to as extreme value analysis.
“I just like run the model loads and check all of the results and stick in massive
numbers and small numbers and see what happens…” (Informant 6, Academic UK,
Junior).
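The kind of extreme value check Informant 6 describes can be sketched as follows. This is a minimal illustration only: the toy model function and every parameter value in it are hypothetical, invented for the example rather than drawn from any informant’s model.

```python
# Illustrative extreme value analysis on a hypothetical toy model.
def total_cost(treatment_cost, n_cycles, cost_per_cycle):
    """Toy model output: total cost of a treatment over n_cycles."""
    return treatment_cost + n_cycles * cost_per_cycle

# 'Stick in massive numbers and small numbers and see what happens':
# push each input to an extreme and check the output moves in the
# direction and magnitude expected.
assert total_cost(0, 0, 0) == 0                            # zero inputs give zero cost
assert total_cost(1e12, 100, 500) >= 1e12                  # a huge input dominates the output
assert total_cost(100, 10, 50) < total_cost(100, 10, 500)  # output is monotone in unit cost
```

The value of such checks is that a real model is far less transparent than this toy function, so an output that fails to respond sensibly to an extreme input is often the first visible symptom of a structural or programming error.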
What the informants appeared not to do as much was an external validation of the complete
model structure, either against the results of other models or data sets, or by rebuilding it in
alternative modelling software. A number of the informants discussed external validation
within their accounts but then appeared to imply that time and resource constraints often
prevented this activity from being undertaken.
“What I don’t do, others do...partly it’s due to the time and resources that we have to
do it, we don’t do an external validation so much...we don’t keep a data set to one
side that we haven’t used or calibrated against the data sets to say ‘does this
model…produce the same sort of results as the independent study found?’”
(Informant 10, Academic UK, Senior).
“There’s like what I’d like to do and what I really do, so I have on a couple of models
I have programmed it in different software which is really useful but I don’t do it all
the time because of time and resource constraints...” (Informant 24, Academic
Canada, Senior).
Assessment
All of the informants provided some assessment of the modelling processes carried out by
them and others, current modelling guidance and further research which could be undertaken
to improve model development. This resulted in three evaluative categories: reflection,
guidance and future research.
Reflection
The informants’ reflections on the modelling process involved discussion of what they
considered to be a good and poor standard of model and where problems have and could
occur. Clinician involvement appeared to be important to the integrity of a model but also
seemed to cause problems for the modeller.
Most of the informants discussed clinical input and validation as a strength of their previous
models. The impression given was that the involvement of clinicians in the modelling process
allowed the informant to feel more confident in the design and structure of their model and
believe that it would more likely stand up to outside scrutiny.
“Having so much input from clinicians, I think that was really good, and kind of
getting a really thorough understanding of both the treatment pathway and disease
pathway...” (Informant 4, Academic UK, Junior).
“Getting a model that the clinicians are happy with, because it doesn’t really matter
if the modeller is happy with it, if it’s not like real life you’re stuffed” (Informant 2,
Academic UK, Junior).
Despite the influential role that clinicians appear to play in model development, many of the informants spoke of the difficulties associated with their involvement, particularly their
understanding of the health economics associated with decision-analytic modelling. It was
suggested that clinicians find it difficult to discuss the experience of a typical patient to
inform the pathways and structure of a model.
"The hard thing with clinicians is getting them to abstract because they see
individuals, they don't see a group..." (Informant 14, Commercial, Senior).
"You're trying to understand what happens to people but some of them will quite often
get into talking about their clinical experiences and kind of anecdotal stuff which you
don't want" (Informant 4, Academic UK, Junior).
The majority of informants suggested that although some clinicians are able to engage with model development, most are not. The clinicians considered useful were those who understood the simplifying nature of models and the factors important to cost-effectiveness decisions, and who could communicate clinical information in a way that allowed it to directly inform a model.
“There are two types of clinicians…one just distrusts the whole thing and doesn’t
really buy into it and thinks modelling is somehow a quirk, quack science…and then
there are other clinicians, sometimes who have got training in it, who just totally get
and they’re really involved and they just really describe what your assumption is, they
understand it implicitly, they know which ones are a big deal and which ones don’t
really matter as much, or aren’t really going to influence your results” (Informant 24,
Academic Canada, Senior).
The informants held similar opinions on what constituted a poor model, with the process for
identifying data to populate parameters being cited as particularly problematic. This appeared
to be associated with the integrity and transparency of the methods used. The impression was
that modellers in general, intentionally or not, are biasing their models by not being robust in
the selection of data or clear in the write-up as to the process they have followed.
"You see a lot of studies, they haven't done a meta-analysis, they've just picked
whatever data they can out of thin air sometimes" (Informant 7, Academic UK,
Junior).
“Cherry picking the value you want, I’ve seen that happen…” (Informant 9,
Academic UK, Senior).
“I don’t think we do a very good job of recording how we identify evidence for
models…decision models are an evidence synthesis of a particular topic and if you
are biased, selective or not comprehensive in the way you look for evidence you may
get a biased model" (Informant 10, Academic UK, Senior).
Guidance
Informants discussed their opinions of existing guidance, with many stating that more is needed on the modelling process, but with some questioning how practical such guidance would be to use when actually building a model.
The informants' general attitude towards guidance appeared to be positive, in the sense that it was considered a useful tool to help modellers in model development. The prevailing view of the currently available guidance was that it is comprehensive in its advice on how to write up a model, but lacking in procedural guidance on developing the structure.
"They've [model checklists] got their uses certainly in terms of basics, reminding you
to get your cost year right, report your discounting, report the research question..."
(Informant 2, Academic UK, Senior).
"They show you what you should show out of a model but not how you should build it"
(Informant 7, Academic UK, Junior).
“There isn’t that much that can accompany you in the process of building the
model…" (Informant 4, Academic UK, Junior).
The attitude towards whether step-by-step, process guidance is useful, however, appeared divided. A number of the informants appeared to believe that more realistic and in-depth guidance would be helpful to modellers, whilst others seemed to be of the opinion that a detailed, stage-by-stage representation of the model development process would be impractical to use in the way it was intended.
“It [the existing guidance] gives the impression of it being simpler than it is…I think
it sort of hides that there’s this sort of art behind it, when you read it you think ‘oh it’s
just a cookbook, I just have to do this and this and I’ve got my perfect model’”
(Informant 9, Academic UK, Senior).
“If you find a one-size-fits-all then it is going to be a thousand pages long, no one
wants to read that, you’d just dip in and pick out the bits you need” (Informant 13,
Commercial, Senior).
Future research
The informants offered a range of suggestions as to where they thought future research should be focused, with a view to developing modelling guidance and improving current practice. These included opinions on specific aspects of model development which informants believed were likely to lead to errors, and ideas about where guidance would be most useful. Many of these suggestions focused on structure and the involvement of clinicians in its development, with the informants considering how the construction and outcome of a model could be improved through further investigation and, possibly, guidance on the structural process.
"The most vague part of modelling is designing the structure at the start..."
(Informant 17, Commercial, Senior).
"The current areas that are lacking is the translation of the clinical evidence into the
modelling itself...the interpretation step is the bit I think is a little bit more difficult
and makes a model either relevant or not..." (Informant 13, Commercial, Senior).
“They [the clinicians] didn’t really like them [models] because of the ‘black box’ but
at the same time they weren’t really interested in pursuing more in that area, so if
there was a sort of a lay guide to models, that would be really helpful.” (Informant
23, Academic Canada, Junior).
Discussion
The preliminary findings from these 24 interviews undertaken with a range of informants
provide an account of the methods that modellers are currently using to develop models, and
opinion on how current processes could be improved. The analysis found that although
informants appear to follow a similar set of stages in their model development, the methods
undertaken are different, particularly in terms of how they use clinical literature and where
and how they involve clinicians. This disparity in their accounts suggests that there is no
established practice for clinical involvement in the development of model pathways, and this
appears to be supported by the suggestion of the informants that future investigation and
research should focus on the clinicians' role in the structural process. Indeed, clinician involvement appeared as a major theme in model building, with the informants suggesting it was essential to the integrity of their models. However, they also reported problems with engaging clinicians and obtaining the relevant clinical information from them.
A clear strength of this study is that, to the authors’ knowledge, it is the first of its kind to use qualitative methods so extensively, in terms of the size and breadth of the modellers sampled, in the exploration of the model development process. A limitation of the
paper is that some of the results are only based on the initial analysis of the interview
transcripts. However, detailed knowledge of those transcripts remaining to be analysed suggests that the findings presented here have captured all of the major themes. Full analysis
of the remaining transcripts will ascertain whether any new sub-themes emerge concerning
the methods that the informants undertake in the modelling process and their assessments of
model development, guidance and where future research should focus. In addition, this paper has not been able to reflect fully on the differences between the UK, Canadian and commercial responses, because only a small number of transcripts from the latter two groups have been fully analysed; this further analysis may produce additional insights.
Two additional studies were identified that used qualitative methods and interviews with modellers to investigate the model development process. The findings of this research study
appear to support and build on those of Chilcott et al (2010b) whose interviews with twelve
modellers generated a similar account of the elements involved in model development. Their
study also highlighted a lack of consensus about the methods used within stages, in particular
how a model structure is conceptualised and developed. The authors conclude that the
variation in reported practice between the respondents demonstrates a ‘complete absence of a
common understanding of a model development process’ (Chilcott et al, 2010b, p17) and
warrants further investigation. In contrast, the preliminary findings reported here include the
informants’ assessments of their current modelling activities, allowing for further exploration
into why they undertake certain practices. Further analysis with the full data set will also seek
to determine whether variations in practice between informants and their opinions on the
modelling process can, in part, be explained by contextual variables such as the modellers’
background, how they developed their skills and the nature of their current role and work.
The aim of Squires’ (2014) PhD research was to develop guidance on how to conceptualise the structure of models. Her in-depth interviews with two modellers led to the establishment of a framework which offers stage-by-stage methods for building a structure, and highlights the importance of strong communication with stakeholders, including clinicians, in the modelling process. However, the study undertaken and the guidance produced are in the context of public health models, which require different methods from standard decision models, particularly in terms of structural development.
Future research should use qualitative methods to facilitate further investigation into the modelling process, with a particular focus on the structural development of a model and the involvement of clinicians, in line with the finding that there is no established practice in this area. These findings also relate to those of the systematic review, which found very little methodological guidance on the process of structure development (Husbands et al, 2013). Future
research should also investigate other areas where the informants report discrepancies in their practices, including how far available data should influence structure and how assumptions should be made. The differences in methods undertaken by the modellers suggest that further research and the development of guidance might be useful in ascertaining what constitutes best practice. Other important areas of focus include those where informants have reported problems, such as communication with clinicians and the identification of data for model parameters, and where they have stated that future research and guidance are needed. The implications of these findings, and the recommendations made from them, may result in the development of guidance which could improve modelling processes and practice in areas where modellers are currently uncertain or are reporting problems and potential errors.
It would be useful at HESG to discuss:
• Do the findings resonate with those who are involved in building models?
• With the full analysis, we will explore differences between the commercial and academic sectors, and between the UK and Canada. Are there other differences that it would be important for us to focus on in the analysis? Issues might be:
  o Senior/junior modellers
  o Those modelling in different disease areas
  o The academic background of the modeller
  o Others?
• Alongside the idea of guidance for modellers, we are quite interested in pursuing the notion of providing modelling guidance for clinicians (and possibly patient representatives?) who might be involved in the modelling process.
  o Is anyone aware of anything similar already in existence?
  o Would this be a good idea?
• What are the most interesting aspects to focus on in future papers?
References
Australian Government Department of Health (2014) PBAC Guidelines: Translation:
Adapting the clinical evaluation to the listing requested for inclusion in the economic
evaluation [online] Available at http://www.pbac.pbs.gov.au/section-c/section-c.html
[Accessed 20 April 2014].
Briggs, A., Claxton, K. and Sculpher, M. (2006) Decision Modelling for Health Economic
Evaluation. Oxford University Press: Oxford.
Canadian Agency for Drugs and Technologies in Health (2006) Guidelines for the
Economic Evaluation of Health Technologies: Canada (3rd Edition) [online]. Available
at: http://www.cadth.ca/media/pdf/186_EconomicGuidelines_e.pdf [Accessed 15 May 2013].
Chilcott, J., Tappenden, P., Rawdin, A., Johnson, M., Kaltenthaler, E., Paisley, S.,
Papaioannou, D. and Shippam, A. (2010a) Introduction. In: Avoiding and identifying errors
in health technology assessment models: qualitative study and methodological review
[Review]. Health Technology Assessment, 14(25): 1-2.
Chilcott, J., Tappenden, P., Rawdin, A., Johnson, M., Kaltenthaler, E., Paisley, S.,
Papaioannou, D. and Shippam, A. (2010b) The model development process. In: Avoiding
and identifying errors in health technology assessment models: qualitative study and
methodological review [Review]. Health Technology Assessment, 14(25): 7-18.
Drummond, M., Sculpher, M., Torrance, G., O’Brien, B. and Stoddart, G. (2005) Methods
for the Economic Evaluation of Health Care Programmes. Oxford: Oxford University
Press.
Hill, S., Mitchell, A. and Henry, D. (2000) Problems with the interpretation of
pharmacoeconomic analyses: a review of submissions to the Australian Pharmaceutical
Benefits Scheme. JAMA, 283: 2116-2121.
Husbands, S., Coast, J. and Andronis, L. (2013) Systematic review: What guidance
currently exists for the process of decision-analytic model building? Presented at
Conference on Quantitative Modelling in the Management of Health and Social Care,
London, March 2013.
National Institute for Health and Clinical Excellence (2008) Guide to the methods of
technology appraisal [online]. Available at:
http://www.nice.org.uk/media/B52/A7/TAMethodsGuideUpdatedJune2008.pdf [Accessed
8 November 2012].
O’Reilly, K. (2009) Key Concepts in Ethnography. London: SAGE.
Patton, M.Q. (2002) Qualitative Research and Evaluation Methods. London: SAGE.
Ritchie, J., Spencer, L. and O’Connor, W. (2003) Carrying out Qualitative Analysis. In:
Ritchie, J. and Lewis, J. (eds.) Qualitative Research Practice. London: SAGE, pp. 219-262.
Ritchie, J., Lewis, J. and Elam, G. (2009) Designing and Selecting Samples. In: Ritchie, J.
and Lewis, J. (eds.) Qualitative Research Practice. London: SAGE, pp. 77-108.
Rubin, H.J. and Rubin, I.S. (2005) Qualitative Interviewing: The Art of Hearing Data.
London: SAGE.
Squires, H. (2014) A methodological framework for developing the structure of Public
Health economic models. PhD Thesis. The University of Sheffield: UK.
Strauss, A., and Corbin, J.M. (1998) Basics of Qualitative Research: Techniques and
Procedures for Developing Grounded Theory. London: SAGE.
Sun, X. and Faunce, T. (2008) Decision-analytical modelling in health-care economic
evaluations. European Journal of Health Economics, 9: 313–323.
Williams, I., McIver, S., Moore, D. and Bryan, S. (2008) The use of economic evaluation in
NHS decision-making: a review and empirical investigation, Health Technology
Assessment, 12(7).