Interpreting RTI Using
Single-Case Time Series Analysis
Paul Jones, Ed.D.
Professor & Doctoral Program Coordinator
School Psychology & Counselor Education
Department of Educational Psychology
University of Nevada, Las Vegas
Las Vegas, NV
The Controversies of Our Time:
● Response to Intervention: a solution or just a different problem (or a little of each);
● Statistics in Single-Case Design: an essential addition or just an unnecessary complication (or a little of each);
● Is There Sex After Death?
The Law of Parsimony
(Occam's Razor)
"Entities should not be multiplied
unnecessarily."
"When you have two competing
theories which make exactly the same
predictions, the one that is simpler is
the better."
“Use the simplest design that is
sufficient to answer your research
question.”
Was there a response to the intervention?
● A - baseline
● B - treatment
● A - reversal

● B - baseline
● T - treatment
● F - followup
Visual Analysis: Enough?
Visual Analysis: Sometimes Not Enough!
A More Realistic Example
Analysis is often focused on three features:
● Level (mean of scores within a phase)
● Variability (s.d. of scores within a phase)
● Trend / Slope
● Trend / Magnitude
Level – Variability - Trend
Analyzing:
● Level - easy
● Variability - fairly easy
● Trend/Slope - not always difficult
● Trend/Magnitude - can be a problem
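The first three features can be computed directly. A minimal sketch, using hypothetical phase scores (any series of observations would do):

```python
# Quantifying level, variability, and trend for one phase of a
# single-case series. Data are hypothetical weekly scores.
from statistics import mean, stdev

phase = [12, 14, 13, 16, 15, 17, 18, 20]

level = mean(phase)          # level: mean of scores within the phase
variability = stdev(phase)   # variability: s.d. of scores within the phase

# trend/slope: ordinary least-squares slope of score on session number
n = len(phase)
xs = range(1, n + 1)
x_bar, y_bar = mean(xs), level
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, phase))
         / sum((x - x_bar) ** 2 for x in xs))

print(f"level={level:.2f}  variability={variability:.2f}  slope={slope:.2f}")
```

Magnitude, the fourth feature, is the one that calls for a statistical test such as the C statistic below.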
One Approach To Assess Magnitude:
Young's C Statistic (Young, 1941)
1. Requires only 8 data points within the baseline and treatment phases,
2. Easy to calculate,
3. Provides likelihood of random variation within and among phases in the form of the familiar p value.
C Statistic Formula
C = 1 - [ sum of (Xi - Xi+1)^2 ] / [ 2 x sum of (Xi - Mx)^2 ]
The X array is each point in the data series;
Mx is the mean of the X values.
C Statistic: Hand Calculation
● The numerator is calculated by subtracting from each obtained data point the data point that immediately follows it, squaring that difference, and summing the total of the n-1 calculations.
● For the denominator, after calculating the mean of the observations, the difference between each observation and the mean is squared. The squared differences are then summed and that total multiplied by two.
Statistical Significance of the C
SEc = sqrt[ (n - 2) / ((n - 1)(n + 1)) ]
z = C / SEc
If n is greater than or equal to 8, the critical z value for the one-tailed .05 level of significance is 1.64.
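The calculation steps above can be sketched in a few lines. The data here are hypothetical baseline and treatment scores, not from the presentation:

```python
# Sketch of Young's (1941) C statistic and its z test, following the
# hand-calculation steps described above.
from math import sqrt
from statistics import mean

def c_statistic(series):
    n = len(series)
    m = mean(series)
    # numerator: sum of squared differences between successive points
    num = sum((series[i] - series[i + 1]) ** 2 for i in range(n - 1))
    # denominator: twice the sum of squared deviations from the mean
    den = 2 * sum((x - m) ** 2 for x in series)
    c = 1 - num / den
    se = sqrt((n - 2) / ((n - 1) * (n + 1)))
    return c, c / se  # C and its z score

baseline = [10, 11, 10, 12, 11, 10, 12, 11]    # hypothetical stable baseline
treatment = [14, 15, 16, 17, 18, 18, 19, 20]   # hypothetical treatment phase
c, z = c_statistic(baseline + treatment)
print(f"C = {c:.3f}, z = {z:.2f}")  # z >= 1.64 -> significant, one-tailed .05
```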
Limitations of the C Statistic
Crosbie (1989) raised two major concerns:
● significant autocorrelation in the baseline creates an intolerable risk of Type I error (inappropriately rejecting the null hypothesis) when intervention data are added,
● formulae that make statistical corrections to create a stable baseline are particularly problematic when using the C statistic.
Solutions for These Limitations of the C Statistic
● While the C statistic can be used to determine if the baseline is stable (only random variation), analysis to determine the effect of adding the intervention SHOULD NOT be done until the baseline is stable.
● DO NOT use statistical corrections to artificially create a stable baseline.
Other Limitations of the C Statistic
The C statistic only identifies whether the magnitude of change when intervention data are added to baseline data is likely to have occurred by chance alone.
It does not address whether the change was "caused by" the intervention.
It does not address whether the change has clinical or practical significance.
For More Information:
Tryon (1982) and Tripodi (1994) provide detailed steps for calculating the C statistic.
A better idea is:
http://www.unlv.edu/faculty/pjones/singlecase/scsastat.htm
Did you know?
The name “Nevada” is from a Spanish word meaning snow-clad.
Nevada is the seventh largest state with 110,540 square miles,
85% of them federally owned including the secret Area 51.
Nevada is the largest gold-producing state in the nation. It is
second in the world behind South Africa.
Hoover Dam, the largest single public works project in the
history of the United States, contains 3.25 million cubic yards
of concrete, which is enough to pave a two-lane highway
from San Francisco to New York.
Did you know?
Camels were used as pack animals in Nevada as late as 1870.
Las Vegas has more hotel rooms than any other place on
earth.
The ichthyosaur is Nevada's official state fossil.
There were 16,067 slots in Nevada in 1960. In 1999 Nevada
had 205,726 slot machines, one for every 10 residents.
In Tonopah the young Jack Dempsey was once the bartender and the bouncer at the still popular Mizpah Hotel and Casino.
Famous lawman and folk hero Wyatt Earp once kept the
peace in the town.
A Bayesian Primer
Not often does a man born almost 300 years ago suddenly spring back to life.
But that is what has happened to the Reverend Thomas Bayes, an 18th-century Presbyterian minister and mathematician.
A statistical method outlined by Bayes in a paper published in 1763 has resulted in a blossoming of "Bayesian" methods in scientific fields ranging from archaeology to computing.
A Bayesian Primer
Imagine a (very) precocious newborn who
observes a first sunset and wonders if the
sun will ever rise again.
The newborn assigns equal probabilities to both possible outcomes and represents this by placing one white and one black marble in a bag.
A Bayesian Primer
Before dawn the next day, the odds that a
white marble will be drawn from the bag
are 1 out of 2.
The sun rises again, so the infant places
another white marble in the bag.
A Bayesian Primer
Before the next dawn, and with the
information from the previous day, the odds
for drawing a white marble from the bag
have now increased to 2 out of 3.
The sun rises again, another white marble
goes in the bag.
A Bayesian Primer
On the fourth day (this is beginning to sound Biblical), the predawn odds of drawing a white marble are now 3 out of 4.
The concept is that as new data become
available, the likelihood of a specific
outcome is changed.
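The marble-bag updating above amounts to a few lines of arithmetic, sketched here:

```python
# Minimal sketch of the newborn's marble bag: start with one white and
# one black marble; after each sunrise, add another white marble.
white, black = 1, 1
for day in range(1, 4):              # three sunrises observed
    odds = white / (white + black)   # chance of drawing a white marble
    print(f"day {day}: P(white) = {odds:.3f}")
    white += 1                       # the sun rose again
```

The printed values trace the slides' sequence: 1/2, then 2/3, then 3/4.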
A Bayesian Primer
The essence of the Bayesian approach
is to provide a mathematical rule
explaining how you should change your
existing beliefs in the light of new
evidence.
Observations are interpreted as
something that changes opinion, rather
than as a means of determining ultimate
truth.
(adapted from Murphy, 2000)
Bayesian Applications in School-Based Practice
A variety of applications of the Bayesian probability model have been suggested, including:
● scaling of tests
● interpreting test reliability
● interpreting test validity
Bayesian Applications in School-Based Practice
Most relevant in this context, however, is
the potential of a Bayesian approach to
combine or synthesize several replications
of the simple time series analysis to decide
if there has been a sufficient response to
an intervention.
Illustrating a Bayesian Application
Did the intervention result in a change
in the student's response, more than
would have been expected by chance
alone?
Illustrating a Bayesian Application
Using the time series analysis, the question is framed as whether the variation in the time series data:
● remained random after intervention data were added to the baseline data, or
● did not remain random after the intervention data were added to the baseline.
Illustrating a Bayesian Application
Before the intervention, our beliefs about the effect are equivocal. So, our prior beliefs about the outcome are:
● .50 probability that there will be no change in random variation, and
● .50 probability that the series will have more than random variation when intervention is added to baseline.
Illustrating a Bayesian Application
Our initial trial, using the time series
analysis, results in a statistically significant
outcome, p = .009.
The classical interpretation is that only 9
times in 1000 would we get the obtained
results if in fact the intervention provided
no real change in random variation.
Illustrating a Bayesian Application
From this trial, our belief about the efficacy
of the intervention changes from .50-.50
that the intervention will provide more than
a chance level effect to .009-.991.
(Said, more easily, “this seems to be working.”)
The Basic Bayesian Formula
P(H|E) = [ P(E|H) x P(H) ] / P(E)
P(H|E) = posterior probability
P(H) = prior probability of outcome
P(E|H) = likelihood of observed event given hypothesized outcome
P(E) = overall likelihood of observed event
The Basic Bayesian Formula
Initial Study p = .009

Hypothesis   Prior Belief   Likelihood   Prior x Likelihood   Posterior Belief
random           .50           .009            .0045          .0045/.5000 = .009
nonrandom        .50           .991            .4955          .4955/.5000 = .991
                                         sum = .5000
Not much (actually nothing) gained thus far.
This approach becomes useful when replications begin, for example:
● Same intervention, same student, different content, or
● Same intervention, different student, same content (confirming the efficacy of the intervention)
The Basic Bayesian Formula
First replication p = .310

Hypothesis   Prior Belief   Likelihood   Prior x Likelihood   Posterior Belief
random           .009          .310            .0028          .0028/.6866 = .004
nonrandom        .991          .690            .6838          .6838/.6866 = .996
                                         sum = .6866
The Basic Bayesian Formula
Second replication p = .980

Hypothesis   Prior Belief   Likelihood   Prior x Likelihood   Posterior Belief
random           .004          .980            .0039          .0039/.0238 = .164
nonrandom        .996          .020            .0199          .0199/.0238 = .836
                                         sum = .0238
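The three updates above can be chained in a short sketch. Following the slides' convention, the likelihood of the data under the "random" hypothesis is the obtained p value, and under "nonrandom" it is 1 - p:

```python
# Sequential Bayesian updating across the initial study and two
# replications, as in the tables above.
def update(prior_random, prior_nonrandom, p):
    joint_r = prior_random * p            # prior x likelihood, random
    joint_n = prior_nonrandom * (1 - p)   # prior x likelihood, nonrandom
    total = joint_r + joint_n             # P(E), the normalizing sum
    return joint_r / total, joint_n / total

post_r, post_n = 0.50, 0.50               # equivocal prior beliefs
for p in (0.009, 0.310, 0.980):           # initial study + two replications
    post_r, post_n = update(post_r, post_n, p)
    print(f"p = {p}: P(random) = {post_r:.3f}, P(nonrandom) = {post_n:.3f}")
```

Note that the tables round the posterior to three decimals before carrying it forward as the next prior, so they end at .836; carrying full precision gives approximately .833, the same practical conclusion.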
The Difference in a Bayesian Approach
A traditional practitioner would probably be quite discouraged. Three studies were done. In only one of the three was there a result that was statistically significant (p < .05).
But the traditional approach is extremely wasteful. Focusing only on the .05 level of significance makes everything from outcomes of .06 to .99 equal. That really doesn't make sense.
Instead of just counting statistically significant outcomes (the frequentist approach), Bayesian analysis allows for an ongoing synthesis of the actual data.
Paul Jones, Ed.D.
Mail:
jones@unlv.nevada.edu
Web:
http://www.unlv.edu/faculty/pjones/pj.htm
Single-Case Tutorial:
http://www.unlv.edu/faculty/pjones/singlecase/scsaguid.htm
Selected References
Bayes, T. (1763). An essay toward solving a problem in the doctrine of
chances. Philosophical Transactions of the Royal Society of London, 53,
370-418.
Crosbie, J. (1989). The inappropriateness of the C statistic for
assessing stability or treatment effects with single-subject data.
Behavioral Assessment, 11, 315-325.
Jones, W.P. (2003). Single-case time series with Bayesian
analysis: A practitioner's guide. Measurement and Evaluation
in Counseling and Development, 36, 28-39.
Jones, W.P. (1991). Bayesian interpretation of test reliability.
Educational & Psychological Measurement, 51, 627-635.
Jones, W.P. (1989). A proposal for the use of Bayesian
probabilities in neuropsychological assessment.
Neuropsychology, 3, 17-22.
Selected References
Jones, W.P., & Newman, F.L. (1971). Bayesian techniques for test
selection. Educational and Psychological Measurement, 31,
851-856.
Murphy, K. P. (2000). In praise of Bayes. Retrieved April 9, 2005,
from the World Wide Web:
http://www.cs.berkeley.edu/~murphyk/Bayes/economist.html
Phillips, L. D. (1973). Bayesian statistics for social scientists. New
York: Thomas Y. Crowell Company.
Tripodi, T. (1994). A primer on single-subject design for clinical
social workers. Washington, D.C.: NASW Press.
Tryon, W.W. (1982). A simplified time-series analysis for
evaluating treatment interventions. Journal of Applied Behavior
Analysis, 15, 423-429.
Young, L.C. (1941). On randomness in ordered sequences.
Annals of Mathematical Statistics, 12, 153-162.