Acknowledgements
Wendy Machalicek, University of Oregon (www.uoecs.org)
Rob Horner, Tom Kratochwill, Joel Levin, Chris Swoboda, George Sugai, Hariharan Swaminathan, Dan Maggin

Goals
- Define an emerging role for single-case designs
  - Defining features of single-case research
  - Programs of research
  - Grant applications
- Review standards for visual analysis of single-case studies
  - Design standards
  - Evidence standards

Single-case methods were developed and are used within Applied Behavior Analysis.

Single-Case Intervention Research Design Standards
Thomas R. Kratochwill, University of Wisconsin-Madison
John H. Hitchcock, Ohio University
Robert H. Horner, University of Oregon
Joel R. Levin, University of Arizona
Samuel M. Odom, University of North Carolina at Chapel Hill
David M. Rindskopf, City University of New York
William R. Shadish, University of California, Merced
Available at http://ies.ed.gov/ncee/wwc/documentsum.aspx?sid=229

Why the standards matter
- Better standards for training in single-case methods
- More precision in RFA stipulations
- Establish expectations for reviewers
- Better standards for reviewing single-case research (journal editors, grant reviewers)
- Development of meta-analysis technology

Defining features of single-case research
1. Experimental control: the design allows documentation of causal (e.g., functional) relations between independent and dependent variables.
2. Individual as the unit of analysis
   - The individual provides his or her own control.
   - A "group" can be treated as a participant, with the focus on the group as a single unit.
3. The independent variable is actively manipulated.
4. Repeated measurement of the dependent variable
   - Direct observation at multiple points in time
   - Inter-observer agreement to assess the "reliability" of the dependent variable
5. Baseline: documents the social problem and controls for confounding variables.
6. The design controls for threats to internal validity: opportunity for replication of the basic effect at three different points in time.
7. Visual analysis: documents the basic effect at three different points in time; statistical analysis options are emerging.
8. Systematic replication
   - Within a study, to document experimental control
   - Across studies, to document external validity
   - Across studies, researchers, contexts, and participants, to document evidence-based practice (EBP)
9. Experimental flexibility: designs may be modified or changed within a study.

Common designs
- Reversal/withdrawal designs
- Multiple baseline designs
- Alternating treatment designs
- Others: changing criterion, multiple probe

What Works Clearinghouse Standards
- Design standards
- Evidence standards
- Social validity

Review sequence
1. Evaluate the design: does the design allow documentation of experimental control?
   - Meets design standards / Meets with reservations / Does not meet design standards (stop)
2. Evaluate the evidence: do the data document experimental control?
   - Strong evidence / Moderate evidence / No evidence (stop)
3. Effect-size estimation
4. Social validity assessment: is the effect something we should care about?

Design standards
Meets Standard
- The independent variable is manipulated directly.
- Inter-observer agreement (IOA) is documented, meeting minimum thresholds (.80 percent agreement or .60 Kappa), on at least 20% of the data points in each phase (see the sketch following these standards).
- The design allows the opportunity to assess the basic effect at three different points in time and controls for threats to internal validity.
- At least five data points per phase (or the design equivalent; for alternating treatment designs, a four-comparison option applies).
Meets with Reservation
- All of the above, except at least three (rather than five) data points per phase
- Non-concurrent multiple baseline designs
Does not Meet Standard
- Designs that fail one or more of the criteria above
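The IOA thresholds above (.80 percent agreement, .60 Kappa) can be computed directly from two observers' records. The sketch below is a minimal illustration, not part of the standards document: it assumes interval-recording data coded 0/1 by two observers for the same sequence of intervals, and it omits the bookkeeping needed to verify that IOA was collected on at least 20% of the data points in each phase. The function names and example data are hypothetical.

```python
# Minimal sketch: interval-by-interval IOA for two observers.
# Assumes each observer's record is a list of 0/1 codes (behavior absent/present)
# for the same sequence of observation intervals.

def percent_agreement(obs1, obs2):
    """Proportion of intervals on which the two observers agree."""
    agreements = sum(a == b for a, b in zip(obs1, obs2))
    return agreements / len(obs1)

def cohens_kappa(obs1, obs2):
    """Cohen's kappa: agreement corrected for chance agreement."""
    n = len(obs1)
    p_observed = percent_agreement(obs1, obs2)
    # Chance agreement from each observer's marginal rate of coding "1".
    p1 = sum(obs1) / n
    p2 = sum(obs2) / n
    p_chance = p1 * p2 + (1 - p1) * (1 - p2)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical records for ten intervals.
observer_a = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
observer_b = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]

print(f"Percent agreement: {percent_agreement(observer_a, observer_b):.2f} (threshold .80)")
print(f"Cohen's kappa:     {cohens_kappa(observer_a, observer_b):.2f} (threshold .60)")
```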
Threats to internal validity (cf. the discussion in the White Paper appendix)
- Ambiguous temporal precedence (correlation)
- Selection
- History
- Maturation
- Statistical regression (regression toward the mean)
- Attrition
- Testing
- Instrumentation
- Additive and interactive effects

[Figure: A-B phase sequences and a multiple baseline design across four students, plotted against actual time, with numbered points (1-4) relating design features to the threats above.]

Reversal designs, multiple baseline designs, alternating treatment designs

Basic effect: a change in the pattern of responding (level, trend, variability) after manipulation of the independent variable.
Experimental control: at least three demonstrations of the basic effect, each at a different point in time.

Design standard: does the design allow the opportunity to assess experimental control?
1. A baseline is present.
2. Each phase has at least five data points (three with reservation).
3. The design allows assessment of the "basic effect" at three different points in time.
A simplified checker for these data-point rules is sketched after the alternating-treatment notes below.

[Figure: ABAB (reversal) example with Intervention X, annotated with the first, second, and third demonstrations of the basic effect.]

[Figures: Example designs that do not meet the standard (A-B with Intervention X only; a design mixing Interventions X and Y; designs with too few phase repetitions); an ABAB example with Intervention X that meets the standard with reservation; and a multiple baseline across three students, plotted against actual time, that meets the standard.]

Alternating treatment designs
Research question: is there a DIFFERENCE between the effects of two or more treatment conditions on the dependent variable?
Methodological issue: how many data points are needed to show a functional relation?
- No current agreement
- At least three data points per condition, preferably five
- The lower the separation (or the higher the overlap) between conditions, the more data points are needed to document experimental control.
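The data-point rules above (at least five points per phase to meet the standard, three to meet with reservation, and at least three opportunities to demonstrate the basic effect) lend themselves to a simple screening step. The sketch below is an illustration of only those counting rules, not the full review rubric; the function name, data layout, and example values are assumptions for the example.

```python
# Simplified sketch of the data-point rules in the design standards above.
# Each phase is a list of observations; `n_effect_opportunities` is the number
# of phase contrasts at which a basic effect could be demonstrated
# (e.g., 3 for an A-B-A-B reversal, one per staggered series in a multiple baseline).
# This checks only the counting rules, not IOA, direct manipulation of the IV,
# or the other criteria in the standards.

def screen_design(phases, n_effect_opportunities):
    if n_effect_opportunities < 3:
        return "Does not meet standard (fewer than 3 opportunities for a basic effect)"
    min_points = min(len(phase) for phase in phases)
    if min_points >= 5:
        return "Meets standard (data-point rule)"
    if min_points >= 3:
        return "Meets standard with reservation (3-4 data points in at least one phase)"
    return "Does not meet standard (a phase has fewer than 3 data points)"

# Hypothetical A-B-A-B reversal: four phases, three A/B contrasts.
abab = [
    [7, 8, 6, 7, 8],   # A1 (baseline)
    [3, 2, 2, 1, 2],   # B1 (intervention)
    [6, 7, 7, 6, 8],   # A2 (return to baseline)
    [2, 1, 2, 1, 1],   # B2 (intervention)
]
print(screen_design(abab, n_effect_opportunities=3))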
[Figure: Functional analysis for Felix: percent of intervals with screaming across Escape, Attention, Play, and Food conditions over 10 sessions.]
[Figure: Functional analysis for Alice: percentage of trials with screaming across Tangible, Escape, Attention, and Control conditions over 10 FA sessions.]
[Figure: The Felix functional analysis re-plotted as an example that meets the design standard with reservation.]

Review sequence (continued): once a design meets the design standards (or meets them with reservation), evaluate the evidence (strong, moderate, or none), then move to effect-size estimation and social validity assessment.

Evidence standards
Strong evidence
- Baseline: documentation of the research question "problem" and of a predictable pattern (at least 5 data points)
- Each phase of the analysis: documentation of a predictable pattern (at least 5 data points)
- Basic effects: documentation of the predicted change in the DV when the IV is manipulated
- Experimental control: three demonstrations of the basic effect, each at a different point in time, with no demonstrations of intervention failure
Moderate evidence
All of the "strong" criteria, with these exceptions:
- Only 3-4 data points per phase
- Three demonstrations of effect, but with additional demonstrations of failure to document an effect
- Non-concurrent multiple baseline designs
No evidence ("no evidence" is a misnomer: the evidence simply does not reach the moderate level)

Six variables considered in visual analysis
Within phase: level, trend, variability
Between phases: overlap, immediacy of effect, consistency across similar phases
Other: vertical analysis; intercept gap

- Level: the mean of the data within a phase; can also be used to assess the level of the last 3-5 data points within a phase.
- Trend: the slope of the best-fit straight line describing the data within a phase.
- Variability: the deviation of the data around the best-fit straight line (range, standard deviation).

[Figures: Example graphs illustrating within-phase level, trend, and variability; the variability panels contrast an incorrect way to assess variability with the correct way (deviation around the best-fit straight line).]

- Overlap: the percentage of data from one phase (typically the intervention phase) that overlaps with the range of data from the previous phase (typically the baseline phase).

[Figures: Overlap examples: one phase comparison with 100% overlap (0% non-overlap) and one with 2/8 = 25% overlap (6/8 = 75% non-overlap).]

- Immediacy of effect: the magnitude of change (in level, trend, or variability) between the last 3-5 data points in one phase and the first 3-5 data points in the next phase.
- Consistency of data pattern in similar phases: the extent to which phases with similar conditions are associated with similar data patterns.
A computational sketch of these within- and between-phase features follows.
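The definitions above map onto simple computations, sketched below for a single baseline (A) versus intervention (B) comparison: level as the phase mean, trend as the slope of an ordinary least-squares line, variability as the spread of residuals around that line, overlap as the percentage of intervention points falling within the baseline range, and immediacy as the difference between the last three baseline points and the first three intervention points. The function names, the three-point window (the standards allow 3-5), and the example data are illustrative assumptions, not prescribed procedures.

```python
# Sketch of the within- and between-phase features defined above,
# computed for one baseline (A) vs. intervention (B) comparison.
from statistics import mean, stdev

def level(phase):
    """Level: the mean of the data within a phase."""
    return mean(phase)

def trend(phase):
    """Trend: slope of the ordinary least-squares best-fit straight line."""
    n = len(phase)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(phase)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, phase))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

def variability(phase):
    """Variability: standard deviation of the data around the best-fit line."""
    slope = trend(phase)
    intercept = mean(phase) - slope * mean(range(len(phase)))
    residuals = [y - (intercept + slope * x) for x, y in enumerate(phase)]
    return stdev(residuals)

def percent_overlap(baseline, intervention):
    """Overlap: % of intervention points within the range of the baseline."""
    lo, hi = min(baseline), max(baseline)
    overlapping = sum(lo <= y <= hi for y in intervention)
    return 100 * overlapping / len(intervention)

def immediacy(baseline, intervention, window=3):
    """Immediacy: change from the last `window` baseline points
    to the first `window` intervention points."""
    return mean(intervention[:window]) - mean(baseline[-window:])

a_phase = [7, 8, 6, 7, 8]   # hypothetical baseline data
b_phase = [5, 3, 2, 2, 1]   # hypothetical intervention data

print("Level A/B:       ", level(a_phase), level(b_phase))
print("Trend A/B:       ", trend(a_phase), trend(b_phase))
print("Variability A/B: ", variability(a_phase), variability(b_phase))
print("Overlap (B in A range):", percent_overlap(a_phase, b_phase), "%")
print("Immediacy of effect:   ", immediacy(a_phase, b_phase))
```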
[Figures: Examples illustrating immediacy of effect at the phase change.]

Consistency across similar phases: level, trend, and variability define a "similar" pattern across phases with the same status of the independent variable. "A" phases should be more similar to each other than to "B" phases, and vice versa.

Interactions among the visual-analysis variables
Assessing change in level
- The greater the variability, the more data points are needed to build a confident index of "level."
- The greater the trend, the less useful "level" becomes for predicting future performance or assessing overlap.
Assessing change in trend
- Within-phase trends may be linear or non-linear.
- The greater the trend, the more data points are needed to document a predictable pattern.
- The trend at the end of a phase should not be moving in the direction of the anticipated IV effect.
Assessing change in variability
- The greater the variability, the more data points are needed within a phase to document a predictable pattern.
- A within-phase shift in variability requires more data points to establish a stable pattern.
- Outliers require more data points within a phase, and a phase should not end with an outlier.
Assessing overlap
- Overlap (using the range) becomes less relevant as trend increases; instead, use overlap projected by confidence intervals around the best-fit straight line.
Immediacy of effect
- Variability at the end of the first phase and the beginning of the second phase reduces the impact of "immediacy of effect."

Visual analysis: the core discriminations once the design is selected as meeting standards
- Is the baseline adequate?
- Are there sufficient data within each phase to document a "pattern"?
- Is there a "basic effect" between two phases? Ask this for each pair of phases.
- Is a functional relation documented by the full data set within the study?

Visual analysis considers (Parsonson & Baer, 1978; Kratochwill & Levin, 1992):
1. Change in level
2. Change in trend
3. Change in variability
4. Immediacy of effect
5. Overlap
6. Consistency of data patterns across similar phases

[Figures: ABAB and multiple baseline examples annotated with the first, second, and third demonstrations of effect.]

Additional considerations
- Comparison of actual against projected data (analysis of transition states versus analysis of steady states)
- Consistency in data pattern across similar phases (level, trend, variability, overlap, immediacy of effect)
- Stability in the non-intervened series when an effect is demonstrated in one series
- Intercept gap: the magnitude of the gap between the best-fit straight lines associated with two adjacent phases at each point of intervention.

[Figure: Assess the magnitude of the intercept gap at each point of intervention (Points X, Y, and Z).]

Visual analysis of alternating treatment designs (a computational sketch follows)
- Magnitude of separation: the greater the difference between two conditions, the stronger the demonstration of a functional relation.
- Consistency of separation: the greater the consistency of separation between two conditions (no overlap), the stronger the demonstration of a functional relation.
- Number of data points used to establish experimental control: at least 4 comparisons; the more points documenting separation, the stronger the demonstration of experimental control.
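The three criteria above can be summarized numerically for a pair of alternated conditions. The sketch below assumes the data are already organized as comparison pairs (one measurement under each condition per session block) and that lower scores are the desired direction; that data layout, the pairing rule, and the function name are assumptions for the example rather than a prescribed procedure.

```python
# Sketch of the separation criteria for an alternating treatment design.
# Assumes the data are organized as comparison pairs: in each block the
# dependent variable was measured once under condition X (e.g., intervention)
# and once under condition Y (e.g., control), with lower scores desired.

def atd_summary(x_points, y_points):
    pairs = list(zip(x_points, y_points))
    differences = [y - x for x, y in pairs]           # Y above X = separation in the expected direction
    n_comparisons = len(pairs)                        # at least 4 comparisons desired
    mean_separation = sum(differences) / n_comparisons
    proportion_consistent = sum(d > 0 for d in differences) / n_comparisons
    return {
        "n_comparisons": n_comparisons,               # number of data points documenting separation
        "mean_separation": mean_separation,           # magnitude of separation
        "proportion_consistent": proportion_consistent,  # consistency of separation
    }

# Hypothetical data: problem behavior under intervention (X) vs. control (Y).
intervention = [2, 3, 1, 2, 2, 1]
control      = [7, 6, 8, 7, 5, 7]
print(atd_summary(intervention, control))
```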
[Figure: The Alice functional analysis (percentage of trials with screaming across Tangible, Escape, Attention, and Control conditions over 10 FA sessions), annotated for magnitude of separation, number of comparisons (data points), and consistency of separation across comparisons.]
[Figure: Published social attention example (JABA, 1994).]

Increase the precision of research questions
- Define the conceptual logic for the research question.
- Define the research question with greater precision: is the IV related to a change in level, trend, or variability?
- Example: "Is there a functional relation between Functional Communication Training and a reduction in the level and variability of problem behavior?"

Development of visual-analysis training graphs
Bruce Wampold, Richard Freund, Rick Albin, Tom Kratochwill, Chris Swoboda
- Graphs vary in level, trend, variability, and overlap; the purpose is to emphasize their interaction.
- Examine and score each graph, using ALL of the data in the graph.
- Combine assessment of level, trend, variability, overlap, immediacy of effect, and consistency of data patterns in similar phases.
- Use the scoring metric: 0 = no functional relation, 5 = publishable, 7 = strong functional relation.
- Compare your scores with the Excel file of median scores from five "expert" single-case researchers.
- Available at http://www.singlecase.org (the full set contains 140 graphs).

Practice graphs: ABAB 1-7, ATD 1-7, MBL 1-7
Each graph is rated on the same 7-point scale of experimental control, anchored at no experimental control (1), publishable (5), and strong experimental control (7).
Example practice questions:
- "Is there a functional relation between introducing the intervention and a reduction in the level of the problem behavior?" (DV: proportion of intervals with social disruption)
- "Is there a difference in the percentage of 10-second intervals with social initiation under Condition B compared with Condition C?"

Visual analysis of single-case designs: summary
1. Determine the viability of the design first.
2. Assess the baseline.
3. Assess the within-phase pattern of each phase.
4. Assess the basic effects for each phase comparison.
5. Assess the extent to which the overall design documents experimental control (e.g., three demonstrations of the basic effect, each at a different point in time).
6. Assess effect size (a sketch of one commonly used non-overlap metric follows this list).
7. Assess social validity.
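The standards call for effect-size estimation after visual analysis but do not, in these slides, name a particular metric. As one illustration only, the sketch below computes the percentage of non-overlapping data (PND), one of several non-overlap effect sizes proposed for single-case research; it is not an endorsed estimator, and the "lower is better" direction, function name, and data are assumptions for the example.

```python
# Illustrative effect-size sketch: percentage of non-overlapping data (PND).
# PND is the percentage of intervention-phase points that fall beyond the most
# extreme baseline point in the therapeutic direction.  Shown only as an example
# of one proposed single-case effect size.

def pnd(baseline, intervention, lower_is_better=True):
    if lower_is_better:
        extreme = min(baseline)
        beyond = sum(y < extreme for y in intervention)
    else:
        extreme = max(baseline)
        beyond = sum(y > extreme for y in intervention)
    return 100 * beyond / len(intervention)

baseline_phase = [7, 8, 6, 7, 8]    # hypothetical problem behavior before intervention
treatment_phase = [5, 3, 2, 2, 1]   # hypothetical problem behavior during intervention
print(f"PND = {pnd(baseline_phase, treatment_phase):.0f}%")
```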