Network meta-analysis in SAS

Danish Society of Biopharmaceutical Statistics,
Elsinore, May 27, 2014
David A. Scott MA MSc
Senior Director, ICON Health Economics
Visiting Fellow, SHTAC, University of Southampton
Network Meta-Analysis: Software
• WinBUGS/OpenBUGS/JAGS (DSU series)
• R, e.g. rmeta, netmeta, mvmeta packages
• Stata: mvmeta
• SAS, e.g. proc glimmix, proc mcmc
A brief history of NMA in SAS
• Lots of different procedures to implement NMA in SAS
– proc mixed, proc nlmixed, proc genmod, proc glimmix1-3
– Frequentist techniques
– Difficult to fit complex hierarchical models2
• MCMC techniques
– proc genmod (using the “easy Bayes” bayes statement) -> proc mcmc
– SAS 9.2 (9.2M3), SAS 9.3 (SAS/STAT 12)
1 Glenny AM et al, Health Technology Assessment 2005; 9(26)
2 Jones B et al, Pharmaceutical Statistics 2011; 10:523-31
3 Piepho HP et al, Biometrics 2012; 68:1269-77
Potential barriers
• DSU series winBUGS-focused
• SAS not yet used in UK reimbursement
submissions
• ERG limited experience of SAS
• Limited published code/articles
• Validation exercise
Illustrative example 1 - binary data
Syntax: load data
proc format;
   /* mapping of treatment codes to labels (assumed; standard
      smoking-cessation NMA dataset) so that zero="No contact"
      in proc mcmc can match a formatted value */
   value trtf 1="No contact" 2="Self help"
              3="Individual counselling" 4="Group counselling";
run;

data smoking;
   input Study Treatment R N narm;
   format Treatment trtf.;
datalines;
1 2 11  78 3
1 3 12  85 3
1 4 29 170 3
2 1 75 731 2
…
;
run;
Study 1 = Mothersill 1988; Study 2 = Reid 1974
Syntax: fixed effects
proc mcmc data=smoking nmc=20000 seed=246810;
   random Studyeffect ~ general(0) subject=Study init=(0);
   random Treat ~ general(0) subject=Treatment init=(0)
          zero="No contact" monitor=(Treat);
   mu = Studyeffect + Treat;
   P = 1 - (1/(1 + exp(mu)));   /* inverse logit */
   model R ~ binomial(n=N, p=P);
run;
Syntax: random effects
proc mcmc data=smoking nbi=20000 nmc=200000 thin=10
          seed=246810 monitor=(mysd) dic;
   random Studyeffect ~ normal(0, var=10000) subject=Study init=(0);
   random Treat ~ normal(0, var=10000) subject=Treatment init=(0)
          zero="No contact" monitor=(Treat);
   parms mysd 0.2;
   prior mysd ~ uniform(0,1);
   /* per-arm heterogeneity with SD mysd/sqrt(2), so any treatment
      contrast has between-trial SD mysd */
   random RE ~ normal(0, sd=mysd/sqrt(2)) subject=_OBS_ init=(0);
   mu = Studyeffect + Treat + RE;
   P = 1 - (1/(1 + exp(mu)));
   model R ~ binomial(n=N, p=P);
run;
Diagnostics
• Trace
• Density
• Autocorrelation
– thin= option
• DIC (relative model fit)
– dic option
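These diagnostics can be requested directly in PROC MCMC; a minimal sketch reusing the fixed-effects call above (trace, density and autocorrelation are standard PLOTS= panel names):

```sas
ods graphics on;
proc mcmc data=smoking nmc=20000 seed=246810
          plots=(trace density autocorr)   /* diagnostic panels */
          dic;                             /* report relative model fit */
   random Studyeffect ~ general(0) subject=Study init=(0);
   random Treat ~ general(0) subject=Treatment init=(0)
          zero="No contact" monitor=(Treat);
   mu = Studyeffect + Treat;
   P = 1 - (1/(1 + exp(mu)));
   model R ~ binomial(n=N, p=P);
run;
ods graphics off;
```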
Diagnostics in SAS
Practical exercise 1
• Run the code as is
• Compare results for each model
• Amend the code to generate fewer MCMC samples: how many
are sufficient? How much burn-in is needed? Is thinning
necessary in the RE model?
• Which model is the better fit, fixed or random effects?
• Change the baseline from “no contact” to “self help”.
Are the results consistent?
• Try changing the priors to other vague priors1: does this
affect results?
1 Lambert PC et al, Statistics in Medicine 2005; 24:2401-28
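One way to swap in an alternative vague prior on the heterogeneity SD (a sketch: the half-normal choice here is an illustration, not from the slides; PROC MCMC's lower= argument truncates the distribution at zero):

```sas
/* in the random-effects program, replace
      prior mysd ~ uniform(0,1);
   with a half-normal prior on the between-trial SD */
parms mysd 0.2;
prior mysd ~ normal(0, sd=1, lower=0);
```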
Results from WinBUGS

Fixed effects
                          mean     sd
Self help                 0.25   0.13
Individual counselling    0.75   0.06
Group counselling         1.02   0.21
DIC                      485.0

Random effects
                          mean     sd
Self help                 0.46   0.40
Individual counselling    0.78   0.23
Group counselling         1.09   0.51
reSD                      0.79   0.18
DIC                      298.6
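To line the SAS output up against these WinBUGS figures, the posterior summary table can be captured as a dataset; a sketch (the ODS table name PostSummaries is an assumption worth checking against your SAS/STAT release):

```sas
ods output PostSummaries=sas_fe;   /* capture posterior means/SDs */
proc mcmc data=smoking nmc=20000 seed=246810 dic;
   random Studyeffect ~ general(0) subject=Study init=(0);
   random Treat ~ general(0) subject=Treatment init=(0)
          zero="No contact" monitor=(Treat);
   mu = Studyeffect + Treat;
   P = 1 - (1/(1 + exp(mu)));
   model R ~ binomial(n=N, p=P);
run;

proc print data=sas_fe; run;
```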
Illustrative example 2 - continuous data
Syntax: load data
data scott;
   input study Treatment baseline y SE;
   /* a format mapping treatment codes to labels such as "Placebo"
      is assumed, so that zero="Placebo" in proc mcmc can match */
datalines;
1 2 8.5 -1.08 0.12
1 3 8.5 -1.13 0.12
1 1 8.5  0.23 0.2
2 2 8.4 -1    0.1
…
;
run;
Syntax: fixed effects
proc mcmc data=scott nmc=200000 nthin=20 seed=246810;
   random Studyeffect ~ general(0) subject=study init=(0);
   random Treat ~ general(0) subject=Treatment init=(0)
          zero="Placebo" monitor=(Treat);
   Mu = Studyeffect + Treat;
   model Y ~ normal(mean=Mu, var=SE*SE);
run;
Syntax: random effects
proc mcmc data=scott nmc=200000 nthin=20 seed=246810
          monitor=(mysd) outpost=outp7 dic;
   random Studyeffect ~ normal(0, var=10000) subject=study init=(0);
   random Treat ~ normal(0, var=10000) subject=Treatment init=(0)
          zero="Placebo" monitor=(Treat);
   parms mysd 0.2;
   prior mysd ~ uniform(0,1);
   random RE ~ normal(0, sd=mysd/sqrt(2)) subject=_OBS_ init=(0);
   Mu = Studyeffect + Treat + RE;
   model Y ~ normal(mean=Mu, sd=SE);
run;
Syntax: fixed effects meta-regression
proc mcmc data=scott nmc=200000 nthin=20 seed=246810;
   random Studyeffect ~ general(0) subject=study init=(0);
   random Treat ~ general(0) subject=Treatment init=(0)
          zero="Placebo" monitor=(Treat);
   parms hba1c 0;
   prior hba1c ~ normal(0, var=10000);
   Mu = Studyeffect + Treat + baseline*hba1c;
   model Y ~ normal(mean=Mu, var=SE*SE);
run;
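Meta-regression coefficients often mix better when the covariate is centred; a sketch (the centring constant 8.5 is illustrative, roughly the baseline HbA1c in the rows of data shown):

```sas
/* centre baseline HbA1c so that Treat is the treatment effect
   at the centring value rather than at baseline = 0 */
Mu = Studyeffect + Treat + (baseline - 8.5)*hba1c;
```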
Practical exercise 2
• Run the code as is
• Compare results for each model
• Which model is the better fit: fixed effects, random effects,
or meta-regression?
• Amend the code to generate fewer MCMC samples: how many
are sufficient? How much burn-in is needed? Is thinning
necessary in the RE model?
• Change the baseline from “Placebo” to “Insulin Glargine”.
Are the results consistent?
• Compare results to WinBUGS output
• Try changing the priors to other vague priors1: does this
affect results?
1 Lambert PC et al, Statistics in Medicine 2005; 24:2401-28
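Changing the reference treatment only requires editing the zero= value (a sketch; the quoted label must exactly match the formatted value of the treatment variable):

```sas
/* in either program, re-base the treatment contrasts */
random Treat ~ general(0) subject=Treatment init=(0)
       zero="Insulin Glargine" monitor=(Treat);
```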
Results from WinBUGS

Fixed effects
                      mean     sd
Liraglutide 1.2mg    -1.04   0.07
Liraglutide 1.8mg    -1.21   0.06
Insulin glargine     -0.82   0.06
Exenatide BID        -0.79   0.05
Exenatide QW         -1.12   0.06

Fixed effects adjusting for baseline hba1c
                      mean     sd
Liraglutide 1.2mg    -1.02   0.07
Liraglutide 1.8mg    -1.20   0.06
Insulin glargine     -0.83   0.06
Exenatide BID        -0.82   0.05
Exenatide QW         -1.13   0.06
delta                -0.41   0.14

Random effects from paper