20 times 80 is enough
Ben van Hout
17/7/16
Julius Center for Health Sciences and Primary Care
Contents
• Introductory remarks
• Valuing health states
• The classic approach
• The Bayesian approach
• A comparison
• How to proceed
• Concluding remarks
Uncertainty surrounding costs and effects
[Figure: cost-effectiveness plane showing the difference in costs (in guilders) against the difference in event-free survival (-1.00 to 1.20)]
[Figure: probability (0 to 1) plotted against amounts from €0 to €100,000]
EuroQol
[Figure: EQ-5D index and MACCE-free survival (50%–100%) over days since randomization (0–1200), comparing SA and CABG patients, working vs. not working]
Valuing health states
• Members of the general public are asked to assign values to health states described in terms of scores on different dimensions
• EQ-5D
 – 5 dimensions
 – 3 levels per dimension
• Not all health states are valued
• There is an underlying structure
What do we want?
• To compare the results from different therapies
 – Using data from RCTs
 – Using models
• Using valuations from the general public
 – Medians
 – Which may be country-specific
Among the numerous problems
• Each country starts its own valuation study without learning from the other countries
• Utility estimates are hardly ever surrounded with uncertainty margins
 – Especially when collected alongside trials
The MVH study
• 3,395 respondents
• 41 health states + 11111 + unconscious
 – rescaled
• 15 states per respondent
• Mostly about 800 respondents per state
 – A few with 1,300
 – 3,333 for all
• Inconsistent respondents taken out
• 39,868 valuations
The 3,074 clean respondents (36,369 clean data points)
[Figure: individual valuations (-10 to 10) plotted against respondent number (1 to about 3,000), shown in two panels]
A typical good health state
[Figure: histogram of TTO valuations (-10 to 10) for state 11112; N = 1318, mean = 8.2, SD = 3.00]
A typical bad health state
[Figure: histogram of TTO valuations (-10 to 10) for state 33323; N = 844, mean = -3.8, SD = 4.95]
A typical health state
[Figure: histogram of TTO valuations (-10 to 10) for state 32211; N = 842, mean = 1.3, SD = 6.01]
Mean values + 95% confidence intervals
The classic approach (EQ-5D)

$U = f(x) + e$
$U = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_4 x_4 + \beta_5 x_5 + e$
$e \sim N(0, \sigma)$

$U = f(x) + e$
$U = \beta_0 + \sum_{i=1}^{11} \beta_i x_i + e$
$e \sim N(0, \sigma)$
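A minimal sketch of fitting such an additive model by ordinary least squares. The toy states and values and the simplified dummy coding (level-2 and level-3 indicators per dimension plus an N3 term) are illustrative assumptions, not the exact coding behind the B/D/S/N3 parameters reported later:

```python
import numpy as np

# Toy example, not the MVH data: each EQ-5D state is a string of five levels,
# y holds (rescaled) TTO valuations on the -10..10 scale used in the talk.
states = ["11112", "21312", "32211", "33323"]
y = np.array([8.2, 4.0, 1.3, -3.8])

def design_row(state):
    # Simplified dummy coding: for each dimension an indicator for level 2
    # and one for level 3 (10 dummies), plus an N3 term if any dimension
    # is at level 3, giving 11 regressors in total.
    levels = [int(c) for c in state]
    row = []
    for lv in levels:
        row += [float(lv == 2), float(lv == 3)]
    row.append(float(3 in levels))
    return row

X = np.column_stack([np.ones(len(states)),
                     np.array([design_row(s) for s in states])])

# U = beta_0 + sum_i beta_i x_i + e, estimated by least squares
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```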
Linear model; middle level = 2
[Figure: observed mean values with 95% confidence limits (lower/upper) and model predictions for the 41 health states, ordered from 21111 to 33333]
Linear model; middle level is free
[Figure: observed mean values with 95% confidence limits (lower/upper) and model predictions for the 41 health states, ordered from 21111 to 33333]
Linear model; middle level free + N3 term
Uncertainties surrounding the model estimates

$x'\hat{\beta} \sim N\left(x'\beta,\ \sigma^2\, x'(X'X)^{-1}x\right)$
Observations + model estimates with 95% confidence limits
Let’s go Bayesian
• The confidence intervals of my predictions of the average values are sometimes outside the range defined by the confidence intervals of my observed average values
• Wouldn’t it be nice if we also acknowledged that we are uncertain about our model?
• Samer Kharroubi, Tony O’Hagan and John Brazier
The Kharroubi approach
• The function is unknown and is treated as a random function
• The expected values of the function are described by a linear model
• All valuations are seen as random variables following a large 243-dimensional multivariate normal distribution, with correlations that decrease with the distance between the states (a toy illustration follows below)
• Respondents may differ by a parameter α
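A toy illustration of a correlation structure that decays with the distance between states. This is only a sketch of the idea, not the actual Kharroubi, O’Hagan and Brazier model; the mean trend, kernel and scale below are made up:

```python
import numpy as np

# All 3^5 = 243 EQ-5D states as rows of five levels
levels = np.array(np.meshgrid(*[[1, 2, 3]] * 5)).reshape(5, -1).T

# Distance between two states: sum of absolute level differences
dist = np.abs(levels[:, None, :] - levels[None, :, :]).sum(axis=2)

mean = 10.0 - levels.sum(axis=1)            # crude linear trend (assumed)
cov = 4.0 * np.exp(-dist / 3.0)             # correlation decays with distance

# One draw of a random value function over all 243 states
rng = np.random.default_rng(0)
draw = rng.multivariate_normal(mean, cov)
```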
The Kharroubi approach
• Successful in describing the SF-6D
• Unsuccessful in working with 40,000 data points
• So (see the sketch below):
 – A random sample of 38 × 80 points
 – Estimate
 – Predict the 3 remaining points
 – Compare with the classical approach
• Within a few minutes
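A rough sketch of that subsampling exercise, under the assumption that "38 × 80" means 38 of the 41 valued states with 80 randomly drawn respondents each, the 3 remaining states being held out for prediction. The data layout is hypothetical:

```python
import numpy as np

def split_states(data, n_keep=38, n_per_state=80, seed=0):
    # data: hypothetical array with columns (state_id, respondent_id, value)
    rng = np.random.default_rng(seed)
    states = np.unique(data[:, 0])
    keep = rng.choice(states, size=n_keep, replace=False)

    rows = []
    for s in keep:
        idx = np.flatnonzero(data[:, 0] == s)
        take = min(n_per_state, idx.size)
        rows.append(rng.choice(idx, size=take, replace=False))

    train = data[np.concatenate(rows)]                    # the 38 x 80 points
    test = data[np.isin(data[:, 0], keep, invert=True)]   # the 3 held-out states
    return train, test
```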
And while waiting for the results
• What if I don’t use all the data, but just the averages?
• And play around with a classical Bayesian alternative
Parameter estimates

            all data           averages only
            estimate   se      estimate   se
Constant     15.65    0.47      15.38    0.43
B1           -1.49    0.16      -1.58    0.14
B2           -1.33    0.15      -1.28    0.14
B3           -0.35    0.17      -0.29    0.16
B4           -2.02    0.14      -1.92    0.13
B5           -1.29    0.15      -1.20    0.14
D1            0.88    0.21       0.92    0.20
D2            0.22    0.23       0.12    0.21
S3            0.53    0.23       0.44    0.21
S4            0.88    0.23       0.88    0.22
D5            0.56    0.22       0.59    0.21
N3           -2.80    0.37      -3.03    0.35
Estimates based on averages (including 95% confidence intervals)
Standard Bayesian

$Y_i \sim N(\mu_i, \tau)$
$\mu_i = \beta_0 + \sum_{j=1}^{11} \beta_j x_{ij}$
$\tau \sim \Gamma(0.001, 0.001)$
$\beta_j \sim N(b_j, \tau_j)$
$b_j \sim N(0.001, 0.001)$
$\tau_j \sim \Gamma(0.001, 0.001)$
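A rough PyMC sketch of this WinBUGS-style model. The X and y below are toy stand-ins for the 11-column design matrix and the 41 observed averages, and the prior on the constant is an added assumption; the vague Gamma/Normal priors mirror the slide:

```python
import numpy as np
import pymc as pm

# Toy stand-ins for the 41 observed average values and the 41 x 11 design
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(41, 11)).astype(float)
y = 15.0 + X @ np.linspace(-2.0, 1.0, 11) + rng.normal(0.0, 0.5, 41)

with pm.Model():
    beta0 = pm.Normal("beta0", mu=0.0, tau=0.001)              # assumed prior
    b = pm.Normal("b", mu=0.001, tau=0.001, shape=11)          # hyper-means
    tau_b = pm.Gamma("tau_b", alpha=0.001, beta=0.001, shape=11)
    beta = pm.Normal("beta", mu=b, tau=tau_b, shape=11)
    tau = pm.Gamma("tau", alpha=0.001, beta=0.001)              # residual precision

    mu = beta0 + pm.math.dot(X, beta)
    pm.Normal("y_obs", mu=mu, tau=tau, observed=y)

    trace = pm.sample(2000, tune=1000)
```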
Parameter estimates

            Bayesian           Classical
            estimate   se      estimate   se
Constant     16.16    0.41      15.38    0.43
B1           -1.62    0.14      -1.58    0.14
B2           -1.21    0.14      -1.28    0.14
B3           -0.59    0.15      -0.29    0.16
B4           -2.08    0.13      -1.92    0.13
B5           -1.37    0.13      -1.20    0.14
D1            0.88    0.18       0.92    0.20
D2            0.18    0.20       0.12    0.21
S3            0.19    0.20       0.44    0.21
S4            0.84    0.20       0.88    0.22
D5            0.60    0.20       0.59    0.21
N3           -2.31    0.34      -3.03    0.35
Aren’t you neglecting something?
• Standard Bayesian approach using WinBUGS: σ = 0.52 (compared with 0.59 from the classical approach)
• We know the uncertainties surrounding the observed average values
• We can include those in WinBUGS
Partly promising Bayesian

With the known uncertainty of each observed average included:
$Y_i \sim N(\mu_i, 1/\sigma_i^2)$
$\mu_i = \beta_0 + \sum_{j=1}^{11} \beta_j x_{ij} + \delta_i$
$\delta_i \sim N(0, 1/\nu)$
$\nu \sim \Gamma(0.001, 0.001)$
$\beta_j \sim N(b_j, \tau_j)$
$b_j \sim N(0.001, 0.001)$
$\tau_j \sim \Gamma(0.001, 0.001)$

As before, for comparison:
$Y_i \sim N(\mu_i, \tau)$
$\mu_i = \beta_0 + \sum_{j=1}^{11} \beta_j x_{ij}$
$\tau \sim \Gamma(0.001, 0.001)$
$\beta_j \sim N(b_j, \tau_j)$
$b_j \sim N(0.001, 0.001)$
$\tau_j \sim \Gamma(0.001, 0.001)$
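A matching PyMC sketch of this variant, with the known standard errors of the averages plugged in as fixed observation scales. Again the inputs are toy stand-ins, the standard errors are assumed known, and the coefficient priors are simplified to plain vague normals:

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(41, 11)).astype(float)   # toy design, as before
y = 15.0 + X @ np.linspace(-2.0, 1.0, 11) + rng.normal(0.0, 0.5, 41)
se = np.full(41, 0.3)                                  # assumed known standard errors

with pm.Model():
    beta0 = pm.Normal("beta0", mu=0.0, tau=0.001)
    beta = pm.Normal("beta", mu=0.0, tau=0.001, shape=11)
    nu = pm.Gamma("nu", alpha=0.001, beta=0.001)
    delta = pm.Normal("delta", mu=0.0, tau=nu, shape=41)   # state-specific deviations

    mu = beta0 + pm.math.dot(X, beta) + delta
    pm.Normal("y_obs", mu=mu, sigma=se, observed=y)        # per-state SEs held fixed

    trace = pm.sample(2000, tune=1000)
```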
Hey, there is Samer
• Relatively speaking a very good fit, but not as good as mine
• But much better than mine without the dummies and the N3 term
• My predictions are better
A comparison
• The Bayesian approach is, of course, more intuitive
• It seems much more flexible in a natural way
• It may take far more computer time
• It may not handle large data sets very well
• I hope to be more convinced at the end of this week
How to proceed
• A better inclusion of the uncertainties surrounding the averages
• Can’t Samer work with 41 data points?
• Using the first data set as a prior for the next
• Designing the next country-specific study
Concluding remarks
• Bayesian analysis makes one feel good
• Samer for president
• I’m almost convinced