Module 1

Introduction to Adaptive Designs:
Definitions and Classification
Inna Perevozskaya, Merck & Co.
09/12/2008
AD course for Philadelphia ASA Chapter
Acknowledgement:
PhRMA Adaptive Designs Working Group

Co-Chairs:
Michael Krams
Brenda Gaydos

Authors:
Keaven Anderson
Suman Bhattacharya
Alun Bedding
Don Berry
Frank Bretz
Christy Chuang-Stein
Vlad Dragalin
Paul Gallo
Brenda Gaydos
Michael Krams
Qing Liu
Jeff Maca
Inna Perevozskaya
Jose Pinheiro
Judith Quinlan

Members:
Carl-Fredrik Burman
David DeBrota
Jonathan Denne
Greg Enas
Richard Entsuah
Andy Grieve
David Henry
Tony Ho
Telba Irony
Larry Lesko
Gary Littman
Cyrus Mehta
Allan Pallay
Michael Poole
Rick Sax
Jerry Schindler
Michael D Smith
Marc Walton
Sue-Jane Wang
Gernot Wassmer
Pauline Williams
Recent DIJ Publications by PhRMA Working Group
on Adaptive Designs
(Drug Information Journal, Vol. 40, 2006)
1. P. Gallo, M. Krams. Introduction.
2. V. Dragalin. Adaptive Designs: Terminology and Classification.
3. J. Quinlan, M. Krams. Implementing Adaptive Designs: Logistical and Operational Considerations.
4. P. Gallo. Confidentiality and Trial Integrity Issues for Adaptive Designs.
5. B. Gaydos, M. Krams, I. Perevozskaya, F. Bretz, Q. Liu, P. Gallo, D. Berry, C. Chuang-Stein, J. Pinheiro, A. Bedding. Adaptive Dose Response Studies.
6. J. Maca, S. Bhattacharya, V. Dragalin, P. Gallo, M. Krams. Adaptive Seamless Phase II/III Designs – Background, Operational Aspects, and Examples.
7. C. Chuang-Stein, K. Anderson, P. Gallo, S. Collins. Sample Size Re-estimation: A Review and Recommendations.
Outline

- Adaptive design: evolution of the term
  - Adaptive vs. static designs
  - Some adaptive designs were known under different names
- Formal classification effort:
  - Structure and key elements
  - Classification by objective and phase or stage
- Adaptive designs "ahead of others" (where effort should be focused):
  - Dose response
  - Seamless II/III
  - Sample size re-estimation
Adaptive vs. Traditional Designs

- In traditional drug development, most designs used (especially in Phase II and III) are "static":
  - Key elements driving the design are specified in advance:
    - Hypotheses to be tested
    - Population of interest
    - Maximum information to be collected (translated into power, sample size, and detectable treatment effect)
    - Randomization scheme
    - Early stopping rules
Adaptive vs. Traditional Designs (cont.)

- The "static" design framework:
  - Results observed during the trial are not used to guide its course
  - This setup provides solid inferential procedures
  - But it leaves some room for improvement in terms of efficiency
- Different ways to improve efficiency have been proposed over time, allowing dynamic modification of a trial's design during its course based on accumulating data
- That led to the formation of a broad group of methods known today as "adaptive designs"
Adaptive vs. Traditional Designs (cont.)

Definition (from the Executive Summary of the PhRMA Working Group):
Adaptive design refers to a clinical study design that uses accumulating data to decide how to modify aspects of the study as it continues, without undermining the validity and integrity of the trial.

- Essential components:
  - Changes are made by design, not on an ad hoc basis
  - Adaptation is a design feature, not a remedy for poor planning
Adaptive Designs: Evolution of the Term

- Many of the designs we call "adaptive" today have existed for quite some time as a "class of their own"
  - e.g., group-sequential designs, response-adaptive randomization, flexible designs, sample size re-estimation
- These designs:
  - Aim at improving some feature of a rigid traditional design (such as cost efficiency, or addressing an ethical dilemma)
  - Share the common feature of mid-course adaptation(s)
- As the number of such designs grew, so did the confusion...
- A strong need for a unified, structured approach to terminology has emerged
Key Reference:
V. Dragalin. "Adaptive Designs: Terminology and Classification." Drug Information Journal (2006), Vol. 40, pp. 425-435

- First attempt to develop a unified approach to AD
- Reflects discussions within the PhRMA working group on adaptive designs
- Major source of the AD review to follow
- Provides:
  - A general definition of adaptive designs
  - Structure (key components)
  - Classification (by objective)
  - Mapping against the drug-development process
Review of "AD: Terminology and Classification"

Adaptive Design Definition
Adaptive design refers to a multistage clinical study design that uses accumulating data to decide how to modify aspects of the study without undermining the validity and integrity of the trial.

- Validity:
  - Correct statistical inference
  - Ensuring consistency across different parts of the trial
  - Minimizing operational bias
- Integrity:
  - Providing results convincing to the scientific community
  - Adequate pre-planning and blinding procedures
Key Elements of an Adaptive Design

1. Allocation Rule
2. Sampling Rule
3. Stopping Rule
4. Decision Rule
(One or more may be applied during interim looks.)

Examples:
- Group sequential designs (stopping)
- Response-adaptive allocation (allocation)
- Sample size reassessment (sampling)
- Flexible designs (all)
Key Elements of an Adaptive Design (cont.)

1. Allocation Rules:
- Determine how patients are assigned to the available treatments at each stage
- Can be fixed (static) or adaptive (dynamic)
- Fixed allocation examples (Rosenberger and Lachin, 2002):
  - Complete randomization
  - Stratified randomization
  - Restricted randomization
- Adaptive allocation examples:
  - Covariate-adaptive randomization
  - Response-adaptive randomization (see the sketch below)
  - Bayesian response-adaptive randomization (Berry, 2001)
  - Drop-the-loser type (Sampson, 2005)
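To make the fixed vs. adaptive allocation distinction concrete, here is a minimal Python sketch (not from the slides) of a simplified success-driven urn in the spirit of randomized play-the-winner: allocation probabilities drift toward the arm that has responded better so far. The urn setup, response probabilities, and function name are illustrative assumptions.

```python
import random

def success_driven_urn(n_patients, p_a, p_b, initial_balls=1, seed=0):
    """Toy response-adaptive allocation for two arms with binary response.

    Each success on an arm adds one ball of that arm's color to the urn,
    so allocation gradually favors the better-performing treatment.
    p_a and p_b (unknown in a real trial) are used here only to simulate
    patient outcomes.
    """
    rng = random.Random(seed)
    urn = {"A": initial_balls, "B": initial_balls}
    assignments = []
    for _ in range(n_patients):
        # Draw an arm with probability proportional to its balls in the urn
        total = urn["A"] + urn["B"]
        arm = "A" if rng.random() < urn["A"] / total else "B"
        # Simulate this patient's binary response
        response = rng.random() < (p_a if arm == "A" else p_b)
        if response:
            urn[arm] += 1  # reward the successful arm
        assignments.append((arm, response))
    return assignments

if __name__ == "__main__":
    results = success_driven_urn(200, p_a=0.5, p_b=0.3)
    n_a = sum(1 for arm, _ in results if arm == "A")
    print(f"Allocated to A: {n_a}/200")  # typically more than half when A is better
```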
Key Elements of an Adaptive Design (cont.)

2. Sampling Rules
- How many subjects will be sampled at the next stage?
- Examples of designs with sampling rules:
  - Blinded SS re-estimation
    - Adjustment of the sample size based on an estimate of a nuisance parameter
  - Unblinded SS re-estimation
    - Adjustment of the sample size based on information about the treatment effect
  - Traditional group sequential
    - Fixed sampling rule
  - Flexible SSR based on conditional power (see the sketch below)
    - Probability of rejecting the null at the end of the study given first-stage data
    - Calculated for the originally specified treatment effect
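As a companion to the conditional-power bullet, here is a minimal Python sketch of the conditional power calculation for a two-arm z-test, assuming the final statistic combines stage-wise statistics with pre-planned weights sqrt(n1/N) and sqrt(1 - n1/N). The function name and the numbers in the usage example are illustrative, not taken from the slides.

```python
from math import sqrt
from scipy.stats import norm

def conditional_power(z1, n1, n_total, delta, alpha=0.025):
    """Conditional power of a two-arm z-test given first-stage data.

    z1      : observed first-stage z-statistic
    n1      : per-arm sample size at the interim analysis
    n_total : planned per-arm sample size at the end of the trial
    delta   : assumed standardized treatment effect (e.g., the originally
              specified effect, as on the slide)
    alpha   : one-sided significance level
    Assumes pre-planned combination weights sqrt(n1/N) and sqrt(1 - n1/N).
    """
    w1 = sqrt(n1 / n_total)
    w2 = sqrt(1 - n1 / n_total)
    z_crit = norm.ppf(1 - alpha)
    n2 = n_total - n1
    # Threshold the second-stage z-statistic must exceed, shifted by its mean
    return 1 - norm.cdf((z_crit - w1 * z1) / w2 - delta * sqrt(n2 / 2))

if __name__ == "__main__":
    # Illustrative call: halfway through a trial planned for 300 per arm,
    # interim z1 = 1.0, originally assumed standardized effect 0.25
    print(round(conditional_power(z1=1.0, n1=150, n_total=300, delta=0.25), 3))
```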
Key Elements of an Adaptive Design (cont.)

3. Stopping Rules
- Intended to protect patients from an unsafe drug or to expedite the approval of a beneficial treatment
- Based on satisfying power requirements in a hypothesis-testing framework
- "Crossing a boundary" methodology:
  - Superiority
  - Harm
  - Futility
- Example: classical group-sequential designs (Jennison & Turnbull, 2000); a boundary-calibration sketch follows below
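To illustrate the "crossing a boundary" idea, here is a minimal simulation sketch that calibrates a Pocock-type constant boundary for K equally spaced looks so that the overall two-sided type I error stays near alpha. The function name and simulation settings are illustrative assumptions, not part of the slides.

```python
import numpy as np

def pocock_constant(n_looks, alpha=0.05, n_sims=200_000, seed=1):
    """Monte Carlo calibration of a Pocock-type constant boundary.

    Simulates the standardized test statistic at equally spaced interim
    looks under H0 (a discretized Brownian motion) and returns the constant
    c such that the probability of |Z_k| > c at any look is about alpha.
    """
    rng = np.random.default_rng(seed)
    increments = rng.standard_normal((n_sims, n_looks))  # independent information increments
    cum = np.cumsum(increments, axis=1)
    info = np.arange(1, n_looks + 1)
    z_path = cum / np.sqrt(info)          # Z_k at each look
    max_abs = np.abs(z_path).max(axis=1)  # worst excursion per simulated trial
    return np.quantile(max_abs, 1 - alpha)

if __name__ == "__main__":
    for k in (1, 2, 5):
        print(k, round(pocock_constant(k), 3))
    # k = 1 gives ~1.96; more looks require a larger constant to keep
    # the overall two-sided type I error at 5%
```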
Key Elements of an Adaptive Design (cont.)

4. Decision Rules:
- Changing test statistics
- Redesigning multiple endpoints
- Selecting the hypotheses to be tested, or their hierarchy
- Changing the patient population
- Choosing the number of interim analyses based on current information
- For dose-response studies: selecting the next dose assignment
Classification of Adaptive Designs
Ref: V. Dragalin. "Adaptive Designs: Terminology and Classification." DIJ (2006)

- The key elements of an AD define its structure and describe its algorithm:
  - Allocation Rule
  - Sampling Rule
  - Stopping Rule
  - Decision Rule
- Another way to classify ADs is by:
  - What their objectives are
  - Applicability to a particular stage of clinical development
Classification of Adaptive Designs (cont.)

1. Single-arm trials
2. Comparing two treatments
3. Comparing more than two treatments
   - Model-based dose-response assessment
4. Seamless Phase II/III
1. Adaptive Designs for Single-Arm Trials
Applicability: Phase I / POC / Phase II

a) Screening trials for one treatment: used to screen candidate compounds based on a short-term response
   - Employ small sample sizes
   - Hypothesis testing: a minimum acceptable probability of response is pre-specified
   - Allow early stopping due to futility
   - Ex. 1: Two-stage designs (Gehan, 1961)
   - Ex. 2: Bayesian designs (Thall & Simon, 1994)
1. Adaptive Designs for Single-Arm Trials (cont.)

b) Designs for an entire screening program
   - Minimize the time to identify a promising compound
   - Control Type I and Type II risk for the entire program
   - Ref: Wang & Leung, 1998; Yao & Venkatraman, 1998; Hardwick & Stout, 2002
2. Adaptive Designs for Comparing Two Treatments

- Applicability: predominantly Phase III, but some can be used in Phase I-II
- Fully sequential designs
  - Check boundary crossing after each patient
- Group-sequential designs
  - Check boundary crossing after a group of patients
- Adaptive group-sequential designs
  - Extend the GSD methodology: allow an increase in sample size
  - Methodology based on p-value combination tests (see the sketch below)
- Flexible designs
  - A wide spectrum of decision rules can be applied after the 1st stage
  - Recursive application of 2-stage combination tests
  - Allow many mid-trial adaptations; not all pre-specified (in theory...)
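To illustrate the p-value combination machinery mentioned above, here is a minimal Python sketch of two standard two-stage combination tests: Fisher's product test and a weighted inverse-normal test. Stage-1 early-stopping boundaries are omitted for simplicity, and the function names and example p-values are illustrative assumptions.

```python
import math
from scipy.stats import norm, chi2

def fisher_combination_reject(p1, p2, alpha=0.025):
    """Two-stage Fisher product combination test.

    p1, p2 : independent one-sided stage-wise p-values.
    Rejects H0 if -2*(ln p1 + ln p2) exceeds the chi-square(4 df) critical
    value. Because only the stage-wise p-values enter, the second stage may
    be redesigned using first-stage data without inflating the type I error.
    """
    stat = -2.0 * (math.log(p1) + math.log(p2))
    return stat > chi2.ppf(1 - alpha, df=4)

def inverse_normal_reject(p1, p2, w1=math.sqrt(0.5), alpha=0.025):
    """Weighted inverse-normal combination test with pre-specified weights."""
    w2 = math.sqrt(1 - w1 ** 2)
    z = w1 * norm.ppf(1 - p1) + w2 * norm.ppf(1 - p2)
    return z > norm.ppf(1 - alpha)

if __name__ == "__main__":
    print(fisher_combination_reject(0.04, 0.03))   # True: combined evidence suffices
    print(inverse_normal_reject(0.04, 0.03))       # True
```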
3. Adaptive Designs for Comparing More Than Two Treatments

- Applicability: dose-response assessment studies (mostly Phase II; full range Phase I-III)
- "Late-stage dose-response development"
  - Group-sequential designs (Stallard & Todd, 2003)
  - Flexible designs (Bauer & Kieser, 1999)
- "Early exploratory development"
  - Dose-escalation studies (Phase I; e.g., CRM)
  - Model-based dose-response assessment:
    - D-optimal designs
    - Bivariate response
    - Penalized (constrained) designs
    - Bayesian dose-finding designs
  - Reviewed in depth in Gaydos et al. (2006)
4. Seamless Phase II/III Designs

- Combine traditional Phase IIb and Phase III
- "Learning and confirming" governed by one protocol
- Can be:
  - Operationally seamless
  - Inferentially seamless
- Explored in depth in Maca et al. (2006)
Dose-Finding AD Example:
Continual Reassessment Method (Ex. 1)

- Bayesian dose-escalation design
- Designed to converge to the MTD
- For a predefined set of doses to be studied and a binary response, it estimates the dose level (MTD) that yields a particular proportion of responses
- Updates the MTD distribution after each patient's response
- The next dose is selected as the one with predicted probability of response closest to the target level
- The procedure stops after N patients have been enrolled
Continual Reassessment Method (cont.)

[Flow diagram]
1. Choose an initial estimate of the response distribution and an initial dose.
2. Obtain the next patient's observation.
3. Update the dose-response model and estimate Prob(Resp.) at each dose.
4. Assign the next patient the dose with Prob(Resp.) closest to the target level.
5. Repeat steps 2-4 until the maximum N is reached; then stop and declare the MTD to be the dose with Prob(Resp.) closest to the target level.
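Below is a minimal Python sketch of a one-parameter CRM posterior update on a grid, in the spirit of the flow above. It uses a power ("skeleton") working model rather than the one-parameter logistic model of the nalmefene example on the next slides; the skeleton values, prior standard deviation, and example data are illustrative assumptions.

```python
import numpy as np

def crm_next_dose(doses_given, resp_observed, skeleton, target=0.20):
    """One-parameter CRM posterior update on a grid (illustrative sketch).

    Working model: P(response at dose i) = skeleton[i] ** exp(a), a ~ N(0, 1.34^2).
    Returns the posterior mean response probability at each dose and the
    index of the dose whose estimate is closest to the target.
    """
    a_grid = np.linspace(-4, 4, 801)
    prior = np.exp(-0.5 * (a_grid / 1.34) ** 2)             # unnormalized normal prior
    probs = np.array(skeleton)[:, None] ** np.exp(a_grid)   # P(resp) at each dose for each a
    # Binomial likelihood of the observed responses
    like = np.ones_like(a_grid)
    for d, y in zip(doses_given, resp_observed):
        like *= probs[d] if y else (1 - probs[d])
    post = prior * like
    post /= post.sum()
    post_mean = (probs * post).sum(axis=1)                  # posterior mean P(resp) per dose
    return post_mean, int(np.argmin(np.abs(post_mean - target)))

if __name__ == "__main__":
    skeleton = [0.05, 0.15, 0.30, 0.45]   # illustrative prior guesses for 4 doses
    # Illustrative data: two patients at dose 0 (no response), one at dose 1 (response)
    post_mean, next_dose = crm_next_dose([0, 0, 1], [0, 0, 1], skeleton)
    print(np.round(post_mean, 2), "next dose index:", next_dose)
```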
CRM Design Example (1)

- Post-anesthetic-care patients received a single IV dose of 0.25, 0.50, 0.75, or 1.00 μg/kg nalmefene.
- Response was Reversal of Analgesia (ROA): an increase in pain score of two or more integers above baseline on a 0-10 NRS after nalmefene.
- Patients entered sequentially, starting with the lowest dose.
- The maximum tolerated dose = the dose, among the four studied, with a final mean posterior probability of ROA closest to 0.20 (i.e., a 20% chance of causing reversal).
- A modified continual reassessment method (an iterative Bayesian procedure) selected the dose for each successive patient as the one with mean posterior probability of ROA closest to the preselected target of 0.20.
- A one-parameter logistic function for the probability of ROA was used to fit the data at each stage.

Dougherty et al., ANESTHESIOLOGY (2000)
CRM Example (1) Results

Dose (μg/kg)   # pts.   # w/ ROA   % w/ ROA   Mean post. prob. ROA   Median post. prob. ROA
0.25           4*       0          0%         0.09                   0.11
0.50 (MTD)     18       3          17%        0.18                   0.21
0.75           3        2          67%        0.37                   0.41
1.00           0        -          -          0.79^                  0.80^

* including the 1st patient treated
(MTD): estimated mean posterior probability of ROA closest to the 0.20 target
^ extrapolated
CRM Example (1) Results

[Figure: posterior ROA probability at each dose, with 95% probability intervals; vertical axis from 0.0 to 1.0]
Continual Reassessment Method (cont.)

- Allocation rule: model-based
- Sampling rule: cohort size
- Stopping rule: max N, or no rule
- Decision rule: posterior update, select the next dose
Example 2: Comparing 2 Treatments
Adaptive GS (Flexible) Design
Redesigned trial example from Cui et al. (1999)

- Actual design: group sequential design
- Proposed design: sample size re-estimation + combination test statistic
- Phase III trial for prevention of MI in patients undergoing coronary artery bypass graft surgery
- N = 600 per treatment group to detect a 50% reduction in incidence (predicted 22% for placebo vs. 11% for drug) with 95% power
- Interim analysis at 50% of the data:
  - N = 300 per treatment group
  - Observed incidence: ~16.5% for placebo, ~11% for drug
  - Given the observed data, power is 40% to detect a 25% reduction
Example 2 (cont.)

- The sponsor wanted to increase the 2nd-stage sample size to detect a smaller effect
  - The Type I error rate would be inflated with the usual group sequential test
  - The trial continued with the planned sample size and ended with a non-significant statistical result
- Instead, the authors proposed to increase the sample size and use a combination test (see the sketch below)
- Simulations were performed:
  - Increase the total sample size to 1400 per treatment group
  - Maintain the Type I error rate; 93% power to detect a 25% reduction
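Here is a minimal simulation sketch of the weighted combination statistic in the spirit of Cui, Hung, and Wang (1999): the second-stage sample size is changed based on the interim result, but the final test keeps the originally planned weights, so the type I error stays at alpha under H0. The sample-size rule, threshold, and function name are illustrative assumptions, not the actual rule from the paper.

```python
import numpy as np
from scipy.stats import norm

def simulate_type1(n1=300, n2_planned=300, alpha=0.025, n_sims=200_000, seed=2):
    """Type I error of a weighted two-stage combination test under H0.

    After stage 1, the second-stage sample size is increased whenever the
    interim z-statistic looks promising (an illustrative data-driven rule),
    but the final statistic still uses the original weights
    w1 = sqrt(n1/(n1+n2_planned)) and w2 = sqrt(n2_planned/(n1+n2_planned)),
    which is what keeps the type I error at alpha.
    """
    rng = np.random.default_rng(seed)
    w1 = np.sqrt(n1 / (n1 + n2_planned))
    w2 = np.sqrt(n2_planned / (n1 + n2_planned))
    z_crit = norm.ppf(1 - alpha)
    rejections = 0
    for _ in range(n_sims):
        z1 = rng.standard_normal()                 # stage-1 statistic under H0
        n2 = 1100 if z1 > 0.5 else n2_planned      # illustrative sample size increase rule
        z2 = rng.standard_normal()                 # stage-2 statistic under H0
        # Under H0 the stage-2 statistic is N(0,1) regardless of n2,
        # so the weighted combination remains N(0,1).
        z_final = w1 * z1 + w2 * z2
        rejections += z_final > z_crit
    return rejections / n_sims

if __name__ == "__main__":
    print(round(simulate_type1(), 4))   # close to 0.025
```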
Example 2 (cont.)

- Allocation rule: fixed randomization
- Sampling rule: the sample size of the next stage depends on results from the previous stage
- Stopping rule: p-value combination test
- Decision rule: adapting the alternative hypothesis and test statistics
Summary: Adaptive Designs Where Attention Needs to Be Focused

1. Dose-ranging studies:
   - B. Gaydos, M. Krams, I. Perevozskaya, F. Bretz, Q. Liu, P. Gallo, D. Berry, C. Chuang-Stein, J. Pinheiro, A. Bedding. Adaptive Dose Response Studies.
2. Seamless Phase II/III:
   - J. Maca, S. Bhattacharya, V. Dragalin, P. Gallo, M. Krams. Adaptive Seamless Phase II/III Designs – Background, Operational Aspects, and Examples.
3. Sample size re-estimation:
   - C. Chuang-Stein, K. Anderson, P. Gallo, S. Collins. Sample Size Re-estimation: A Review and Recommendations.
Conclusions

- Adaptive designs provide an opportunity to redesign trials based on accumulating data
- In some situations, they may be more efficient than traditional designs
- There is no "one-size-fits-all" recommendation for the choice of an AD
- In fact, an adaptive design may not be the best solution at all
- That decision will depend on:
  - Trial objectives
  - Regulatory guidelines
  - Logistical and practical considerations
- These are determined collectively by clinicians, regulators, statisticians, and data management => a complicated process
- As a result, implementation may be the biggest challenge
- However, there are successful examples out there, and that should be encouraging!
Additional References

1. Rosenberger WF, Lachin JM. Randomization in Clinical Trials: Theory and Practice. New York: Wiley; 2002.
2. Berry D. Adaptive trials and Bayesian statistics in drug development. Biopharm Rep. 2001;9:1-11.
3. Sampson AR, Sill MW. Drop-the-losers design: normal case. Biometrical J. 2005;47:257-268.
4. Cui L, Hung HMJ, Wang SJ. Modification of sample size in group sequential clinical trials. Biometrics. 1999;55:853-857.
5. Jennison C, Turnbull BW. Group Sequential Methods with Applications to Clinical Trials. Boca Raton, FL: Chapman and Hall; 2000.
6. Gehan EA. The determination of number of patients in a follow-up trial of a new chemotherapeutic agent. J Chronic Dis. 1961;13:346-353.
7. Wang YG, Leung DHY. An optimal design for screening trials. Biometrics. 1998;54:243-250.
8. Yao TJ, Venkatraman E. Optimal two-stage design for a series of pilot trials of new agents. Biometrics. 1998.