Cross-European Research Projects:
Developing and validating disease
management evaluation methods for
European health care systems
(DISMEVAL)
Jack Needleman
UCLA School of Public Health
AcademyHealth
June 2009
Goals for the Presentation
• Project background and goals
• Project scope
• Consortium members and individual projects
• Synergies
• Comments and reflections
Project background
Scope of current DM initiatives
• DM responds to extensive, expensive chronic illness
 Expand patient education, support & services outside the traditional health plan to facilitate patient self-management
 Education, support and reimbursement for providers to effectively manage patients
• Many models, inputs, strategies
• Many diseases, although diabetes is the major focus
• Many health care systems in Europe and elsewhere are beginning to embrace this strategy
 E.g., Germany encourages sickness funds to offer DM
Project background
State of DM evaluation
• While conceptually attractive, the ability of DM to reduce cost and improve care has not been empirically demonstrated.
• There are no universally accepted evaluation methods that measure performance in a scientifically sound fashion and are practicable for routine operations.
 Selection bias, with randomization difficult or not done
 Variations in program content
– How are variations in assessment of effectiveness associated with differences in program organization & content?
 Different measures, metrics for evaluating program performance
– Costs
– Utilization
– Clinical indicators
DISMEVAL goals
• Provide overview of approaches to chronic care, DM methods,
and DM evaluations across Europe
• Test and validate possible evaluation approaches
 Identify best practices
 Develop evidence-based recommendations for
policymakers, program operators and researchers on which
evaluation approach is most useful in a given context
• Promote and support the networking and coordination of
research and innovation activities on aspects related to scientific
knowledge and policy development in this area
 Build on existing work carried out in Member States and at
the wider European level
Consortium members
• RAND Europe Cambridge Ltd (RAND), UK
• London School of Hygiene and Tropical Medicine, UK
• Paracelsus Medizinische Privatuniversität Salzburg, Austria
• Københavns Universitet (UCPH), Denmark
• Johann Wolfgang Goethe-Universität Frankfurt am Main, Germany
• Université Paris XII - Val de Marne (UPVM), France
• Universiteit Maastricht (UM), The Netherlands
• Instituto de Salud Carlos III (ISCIII), Spain
• Centre Anticancéreux Léon Bérard (CLB), France
Review current state of DM programs and
evaluation strategies in Europe
• Objectives
 Review approaches to managing (chronic) conditions that
have been developed and/or implemented by different
countries in Europe
 Assess whether and how countries evaluate the approaches
to (chronic) disease management
• Update existing information for 5 countries; new information for 9
 Common template
 Obtain documentation/grey literature
• Products
 Description of programs
 Descriptions of evaluation approaches
 Synthesis of evaluation approaches – lessons/best practices
Examples of questions to be addressed
• Indicators
 Which domains (e.g. cost, quality, patient satisfaction) should be used to measure the effect of disease management programmes?
 What are appropriate and established measures for these domains, and what is the evidence behind them?
• Methods
 Which non-experimental attribution strategies have been well enough validated to consider them viable alternatives to experimental designs? (A matching sketch follows this list.)
 How do effect size estimates from non-experimental designs track those from experimental designs?
 How can you account for confounding factors in non-experimental designs?
 How do contextual factors influence the choice of evaluation design?
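To make the attribution question concrete, here is a minimal, hypothetical sketch of one common non-experimental strategy, propensity-score matching, run on simulated patient-level data (all variable names, parameters and the assumed program effect are invented for illustration):

```python
# Hypothetical sketch: propensity-score matching of DM enrollees to
# non-enrolled controls. All data are simulated; names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 2000

# Enrollment depends on observables (age, prior cost), so a naive
# enrolled-vs-non-enrolled comparison is confounded by selection.
age = rng.normal(60, 10, n)
log_base_cost = rng.normal(8, 0.5, n)
logit = 0.03 * (age - 60) + 0.8 * (log_base_cost - 8)
enrolled = rng.binomial(1, 1 / (1 + np.exp(-logit)))

true_effect = -500.0  # assumed program effect on annual cost
outcome = (np.exp(log_base_cost) + 50 * (age - 60)
           + true_effect * enrolled + rng.normal(0, 800, n))

# 1. Estimate propensity scores from the observed covariates.
X = np.column_stack([age, log_base_cost])
ps = LogisticRegression().fit(X, enrolled).predict_proba(X)[:, 1]

# 2. Match each enrollee to the nearest non-enrollee on the score.
nn = NearestNeighbors(n_neighbors=1).fit(ps[enrolled == 0].reshape(-1, 1))
_, idx = nn.kneighbors(ps[enrolled == 1].reshape(-1, 1))

# 3. Matched contrast vs. the naive (confounded) contrast.
naive = outcome[enrolled == 1].mean() - outcome[enrolled == 0].mean()
matched = (outcome[enrolled == 1]
           - outcome[enrolled == 0][idx.ravel()]).mean()
print(f"naive difference:   {naive:8.1f}")
print(f"matched difference: {matched:8.1f}  (true effect {true_effect})")
```

Matching removes only the part of the gap driven by measured covariates; unmeasured confounding remains the central threat to such designs, which is exactly why validating them against experimental benchmarks matters.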
Implement evaluation of 6 DM programs
Design in coordination
• Austria (Diabetes): Cluster RCT. Intervention then introduced into the control group as an open study. Some observational data from the same DMP introduced into two other regions.
• Denmark (Diabetes & COPD): Pre-post observational data during the 'project period', some follow-up data. Nested RCT of hospital vs. community care. Limited comparative data from the national diabetes database going back 2-3 years (hospital patients only).
• Germany (Diabetes): Longitudinal case-control study, some earlier data. Possibly limited data from wider sick fund data. Interest in testing different matching procedures.
• France (Diabetes/cancer): Pre-post observational data (patient and network level). Heterogeneous networks (some option of analyzing intensity of intervention). Data for matched controls from claims (a difference-in-differences sketch follows this list).
• Netherlands (Diabetes): Pre-post observational data for 10 pilots (and some data for 10 additional ones). Some intermediate outcomes for the rest of the Netherlands.
• Spain (CVD prevention): Observational data on risk factor reduction. Comparison of enrolled vs. non-enrolled patients (patient choice). Starting Dec 08, systematic differences in intervention by region.
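Several of these designs are pre-post observational comparisons with matched or claims-based controls. A minimal sketch of the difference-in-differences contrast such data can support, on simulated numbers with an assumed effect size:

```python
# Minimal difference-in-differences sketch for a pre-post design with a
# matched control group. All data are simulated; parameters are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
trend, effect = -0.2, -0.4  # shared secular trend; assumed DM effect

# Pre/post outcomes (e.g. HbA1c) for DM patients and matched controls.
pre_dm = rng.normal(8.0, 1.0, n)
post_dm = pre_dm + trend + effect + rng.normal(0, 0.5, n)
pre_ctrl = rng.normal(8.0, 1.0, n)
post_ctrl = pre_ctrl + trend + rng.normal(0, 0.5, n)

# A pre-post change alone misattributes the secular trend to the program;
# subtracting the control group's change nets that trend out.
pre_post_only = (post_dm - pre_dm).mean()
did = pre_post_only - (post_ctrl - pre_ctrl).mean()
print(f"pre-post change alone:     {pre_post_only:6.3f}")
print(f"difference-in-differences: {did:6.3f}  (true effect {effect})")
```

The identifying assumption, parallel trends between the groups, is untestable in strictly pre-post data; that is one reason comparing design variants across the consortium is valuable.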
Synergies: the value of a consortium
Designing in coordination offers…
• Peer consultation on design
 Additional methodological advice
 Encouragement to push beyond original design, e.g.
– Germany on case-control
– France on control group
 Applying lessons from one evaluation to others
 Clever expansions to allow methods to be contrasted
• Opportunities for pooling & comparing/contrasting results (a pooling sketch follows this list):
 By disease
 By DM strategy
 Measures of dose-response, or identification of more effective models
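If the evaluations report comparable effect sizes and standard errors, pooling could proceed by inverse-variance weighting, sketched below with invented numbers (a fixed-effect model, shown only because it is the simplest; with programs this heterogeneous a random-effects model would usually be more defensible):

```python
# Hypothetical inverse-variance (fixed-effect) pooling of per-evaluation
# effect estimates. All numbers below are invented for illustration.
import numpy as np

effects = np.array([-0.30, -0.10, -0.45, -0.20])  # per-study effect sizes
ses = np.array([0.10, 0.15, 0.20, 0.12])          # their standard errors

w = 1.0 / ses**2                     # weight each study by its precision
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
print(f"pooled effect: {pooled:.3f}  (SE {pooled_se:.3f})")
```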
Examples of discussion from first
consortium meeting
• What is the range of things you can measure? Knowing that the audiences of programme evaluations differ, which outcome measures should be of greatest interest?
• What are the selection issues, and with what confidence can we use tools for addressing selection bias (e.g. regression to the mean as a selection problem; see the simulation after this list)? That is, do results differ when we apply different tools to the same database in alternative ways? Which of these tools is easiest to apply?
• The dose-response issue: in theory, disease management should work, but in practice evaluations have been disappointing. What are the formative components that help a programme work? How can you measure the delivered dose? How can you set up your evaluation so that you can learn which formative components work?
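The regression-to-the-mean point is easy to demonstrate: in the hypothetical simulation below, patients selected in a high-cost year show large apparent "savings" the following year with no program at all (all parameters are invented):

```python
# Regression to the mean as a selection problem: enrolling patients in a
# high-cost year makes their costs fall "on their own". Simulated data.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Each patient has a stable underlying cost level plus year-to-year
# noise; there is no program and no trend anywhere in these data.
level = rng.lognormal(8, 0.4, n)
year1 = level * rng.lognormal(0, 0.5, n)
year2 = level * rng.lognormal(0, 0.5, n)

# Select "high-need" patients on year-1 cost, as many DMPs do.
selected = year1 > np.percentile(year1, 90)

# A naive pre-post evaluation would claim this drop as program savings.
print(f"mean year-1 cost (selected): {year1[selected].mean():10.0f}")
print(f"mean year-2 cost (selected): {year2[selected].mean():10.0f}")
```

Comparing estimators like this on a shared database, as the question above suggests, is precisely the kind of validation exercise DISMEVAL is designed to run.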
Consortia & EU sponsorship may require new recordkeeping and vehicles for administrative and program coordination
• EU expectations
 How expenses are reported
 Timekeeping
• Consortium administrative and substantive issues
 Joint authorship
 Data use/sharing/acknowledgment agreements
 Intra-consortium communication
– Listservs, websites, intranet
 Common dissemination strategy and “boilerplate” language,
templates
Personal reflections from initial work
• Frustration with additional administrative demands but…
• Strong teams with access to unique data
• Enthusiasm and curiosity about what consortium can mean for
own research
• Willingness to be open about weaknesses and limitations of
data, design
• The jury is out on whether, as pressure to complete the work grows, the promise of collaboration, exploration and stretching will be realized, but I am optimistic.