BIMAS™ (Behavior Intervention Monitoring Assessment System)
By James L. McDougal, Psy.D., Achilles N. Bardos, Ph.D., & Scott T. Meier, Ph.D.

What is the BIMAS?
• A 3-tier behavioral progress monitoring system for screening and assessing changes in response to behavior intervention; follows the RTI model
• For school-aged children and adolescents (ages 5-18)
• Two main components: BIMAS Standard & BIMAS Flex
• Multi-informant tool with forms for teachers, parents, youth, and clinicians
• Web-based data management and reporting system

Presentation Overview
• School Mental Health Services
• BIMAS: Rationale within an RTI context
• BIMAS: Theory and Structure
• BIMAS: Development and Psychometric Properties
• BIMAS: Applications in a School District
• BIMAS: Web-based Interface and Reports
• Q and A

Background & Development

Why is Behavioral/Emotional Screening Crucial?
• Students with behavioral/emotional problems have lower grades, poorer reading skills, a drop-out rate greater than 50%, and the worst social and academic outcomes of any disability group (Bradley, Doolittle, & Bartolotta, 2008)
• Loss of instructional time: teacher intervention & student disruption (Arnold, 1997)
• Early identification prevents pervasive problems (Kern, Hilt-Panahon, & Sokol, 2009)
• Systematic universal screening identifies children at risk for emotional/behavioral problems (Kamphaus & Reynolds, 2007)

Early Identification
• Early identification and intervention with children who are at risk for EBD appear to be the "most powerful course of action for ameliorating lifelong problems associated with children at risk for [EBD]" (Hester et al., 2004, p. 5)
• Younger children are more likely to respond to, and to maintain, the positive outcomes of early prevention and intervention programs (Bailey, Aytch, Odom, Symons, & Wolery, 1999)

RtI and Behavior: Rationale and Need for Universal Behavior Screening
• Strong link between behavior/emotions and academic performance
• How do we define health? Schools screen for vision, hearing, speech, and academic achievement, yet behavioral/emotional screening occurs in less than 2% of districts across the U.S.
• Screening of children is mostly done in primary care settings, so many children are missed for early identification and intervention (Pagano et al., 2000)
• Screening measures for children frequently focus on one disorder to the exclusion of others, an approach that neglects large numbers of children whose problems lie outside the target condition (August et al., 1992; Taylor et al., 2000; Matthey & Petrovski, 2002)
• Early problem identification allows for prevention and intervention; the negative impact of problems can be minimized
• Only 15-20% of children with emotional/behavioral problems receive mental health services in the U.S. (Ringel & Sturm, 2001)
• Teachers identify young children at high risk of academic and behavioral problems related to school adjustment with a great deal of accuracy (Taylor et al., 2000)
• Schools are the ideal setting for large-scale, broad-based mental health screening of children and adolescents (Wu et al., 1999)
How to Use BIMAS within the RtI Framework
Tier 1 — Universal Level
• Students without serious problem behaviors (80-90%)
• Use the BIMAS to screen
Tier 2 — Targeted Level
• Students at risk for problem behaviors (5-15%)
• Use the BIMAS to monitor and assess response to intervention/treatment
Tier 3 — Intensive Level
• Students with chronic/intense problem behaviors (1-7%)
• Use BIMAS data for decision making: treatment planning, special education eligibility, IEP review, program evaluation

Applications of BIMAS
1. Screening — to detect students in need of further assessment and to identify their respective areas of need.
2. Student Progress Monitoring — to provide feedback about the progress of individual students or clients.
3. Program Evaluation — to gather evidence that intervention services are effective.

Instrument Structure
The BIMAS consists of two levels:
1. Standard
2. Flex

BIMAS Standard
• 34 change-sensitive items
• Can be used as a brief screener, a treatment monitoring tool, or a program evaluation tool
• 4 forms: Teacher, Parent, Self (U.S. normed) & Clinician

Intervention Item Selection Rules (IISRs; Meier, 1997, 1998, 2000, 2004)
• The BIMAS was developed using an empirically derived model for designing change-sensitive measures to assess RTI: the IISRs
• A "state" scale as opposed to a "trait" scale
• Developed using clinical and school samples in field settings (a rare combination in psychotherapy research)
• Identified constructs that change as a result of emotional and behavioral intervention

IISRs (cont'd) — Items should:
1. Be grounded in theory, to provide for interpretation and minimize the effects of chance;
2. Be aggregated across individuals, to reduce the effects of random error;
3. Not evidence ceiling and floor effects;
4. Evidence change after a psychosocial intervention;
5. Change in the theoretically expected direction;
6. Evidence change relative to control and comparison groups;
7. Show no difference at pre-intervention;
8. Have no relation to relevant systematic errors; and
9. Be subject to cross-validation studies.
(Meier, 1997, 1998, 2000, 2004)

Rule 1. Items Based on Theory
• Review existing research and theory
• Select a pool of items from a variety of theories; ensure a large item pool
• Select items thought to be influenced by intervention

Rule 2. Aggregate Items
• Aggregation across individuals decreases error and increases the chances of detecting an intervention effect
• Individual responses are used to inform theory

Rule 3. Avoid Ceiling/Floor Effects
• Mean item scores should not be at the top or bottom of the possible range
• Drop items with means 2 or more SDs out

Rule 4. Detect Change Post-Intervention
• Significance vs. effect size
• Retain the subset of items showing pre/post change, then filter those that demonstrate treatment-group change

Rule 5. Change in the Expected Direction
• Retain items showing pre/post treatment change in the expected direction
• Items changing in the opposite direction are retained to investigate negative effects

Rule 6. Change Relative to a Comparison Group
• Compare to a no-treatment control to separate developmental/maturation effects from treatment effects
• Do items differentiate between treatment groups?
• Retain items with a pre/post difference in the treatment group; drop items with a pre/post difference in the control group

Rule 7. No Difference at Pre-Test
• Drop items showing a significant difference between samples drawn from the same population
• Random assignment is best

Rule 8. Remove Systematic Error
• Drop items reflecting systematic error (e.g., social desirability, as indexed by the Marlowe-Crowne Social Desirability Scale)

Rule 9. Cross-Validate
• Repeat steps 3-8 with new samples from the same population
• Emphasize items showing theoretically predicted change
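Rules 3 through 6 describe a statistical screening pipeline for candidate items. The sketch below is a minimal illustration of that logic under stated assumptions (0-4 Likert items, separate treatment and no-treatment control samples measured pre and post); it is not the authors' actual analysis code, and the reading of Rule 3 in the comment is only one interpretation of "2 or more SDs out."

```python
import numpy as np
from scipy import stats

def screen_items(pre_tx, post_tx, pre_ctl, post_ctl, alpha=0.05):
    """Illustrative filter for change-sensitive items (IISR Rules 3-6).

    Each argument is an (n_respondents, n_items) array of 0-4 Likert ratings
    for the treatment (tx) and no-treatment control (ctl) samples.
    Returns indices of items that survive all four checks.
    """
    item_means = pre_tx.mean(axis=0)

    # Rule 3 (one reading): flag items whose mean lies 2 or more SDs from the
    # average item mean, i.e., items sitting near the ceiling or the floor.
    extreme = np.abs(item_means - item_means.mean()) >= 2 * item_means.std(ddof=1)

    keep = []
    for j in range(pre_tx.shape[1]):
        if extreme[j]:
            continue

        # Rule 4: significant pre/post change in the treatment group.
        _, p_tx = stats.ttest_rel(post_tx[:, j], pre_tx[:, j])
        if p_tx >= alpha:
            continue

        # Rule 5: change must run in the theoretically expected direction
        # (problem behaviors are expected to decrease after intervention).
        if post_tx[:, j].mean() >= pre_tx[:, j].mean():
            continue

        # Rule 6: no comparable pre/post change in the no-treatment control.
        _, p_ctl = stats.ttest_rel(post_ctl[:, j], pre_ctl[:, j])
        if p_ctl < alpha:
            continue

        keep.append(j)
    return keep
```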
BIMAS Standard Scales
Behavioral Concern Scales:
• Conduct — anger management problems, bullying behaviors, substance abuse, deviance
• Negative Affect — anxiety, depression
• Cognitive/Attention — attention, focus, memory, planning, organization
Adaptive Scales:
• Social — social functioning, friendship maintenance, communication
• Academic Functioning — academic performance, attendance, ability to follow directions

BIMAS Flex
• A list of specific behavioral items corresponding to each Standard item, used for progress monitoring
• Users can select items based on an elevated Standard scale score for an individual student — customized treatment goals
• Ability to make notes describing specific behaviors, response to services, or other comments
• Teacher, Parent, Self, and Clinician forms

BIMAS Flex Example
Standard item: Fought with others (verbally, physically, or both)
Negatively worded Flex items:
• Argued with peers
• Argued with teachers
• Argued with parents
• Argued with siblings
• Talked back to parents
• Talked back to teachers
• Physically hurt peers
• Physically hurt parents
• Physically hurt teachers
• Physically hurt siblings
• Threatened peers
• Threatened teachers
• Threatened parents
• Threatened siblings
Positively worded Flex items:
• Showed regret after a fight
• Was respectful to adults
• Walked away from a fight
• Prevented a fight
• Stopped an argument
• Found a positive outlet for frustration
• Avoided a verbal confrontation
Or… custom-create your own!

Psychometric Properties: Large Normative Sample
Total sample N = 4,855
• Teacher: N = 1,938 (Normative N = 1,400; Clinical N = 538)
• Parent: N = 1,867 (Normative N = 1,400; Clinical N = 467)
• Self-Report: N = 1,050 (Normative N = 700; Clinical N = 350)

Age x Gender Distribution: Normative Sample
Age Group   Parent Ratings (M / F / Total)   Teacher Ratings (M / F / Total)   Self-Reports (M / F / Total)
5-6         100 / 100 / 200                  100 / 100 / 200                   --
7-9         150 / 150 / 300                  150 / 150 / 300                   --
10-11       100 / 100 / 200                  100 / 100 / 200                   --
12-13       100 / 100 / 200                  100 / 100 / 200                   100 / 100 / 200
14-16       150 / 150 / 300                  150 / 150 / 300                   150 / 150 / 300
17-18       100 / 100 / 200                  100 / 100 / 200                   100 / 100 / 200
Total       700 / 700 / 1,400                700 / 700 / 1,400                 350 / 350 / 700

Race/Ethnicity Distribution
• Highly comparable to the most recent U.S. Census
Teacher form (N = 1,361):
               Asian   African American   Hispanic   White   Other
N              55      218                203        836     50
%              4.0     16.0               14.9       61.4    3.7
Census %       3.8     15.7               15.1       61.9    3.5
Difference %   0.22    0.29               -0.22      -0.47   0.18
Parent form (weighted N = 1,400):
N              30      214                207        873     75
%              2.2     15.3               14.8       62.4    5.4
Census %       3.8     15.7               15.1       61.9    3.5
Difference %   -1.65   -0.39              -0.33      0.47    1.89
Self-Report form (weighted N = 703):
N              28      110                107        433     25
%              4.0     15.6               15.2       61.6    3.5
Census %       3.8     15.7               15.1       61.9    3.5
Difference %   0.23    -0.07              0.09       -0.29   0.03

Geographic Region Distribution
• Highly comparable to the most recent U.S. Census
Teacher form (N = 1,361):
               Northeast   Midwest   South   West
N              251         299       486     325
%              18.4        22.0      35.7    23.9
Census %       18.1        21.9      36.7    23.3
Difference %   0.35        0.08      -1.03   0.61
Parent form (weighted N = 1,400):
N              272         265       530     333
%              19.4        18.9      37.9    23.8
Census %       18.1        21.9      36.7    23.3
Difference %   1.39        -2.97     1.13    0.47
Self-Report form (weighted N = 703):
N              128         159       259     157
%              18.3        22.6      36.8    22.4
Census %       18.1        21.9      36.7    23.3
Difference %   0.21        0.70      0.03    -0.93

Parental Education Level
• Highly comparable to the most recent U.S. Census
Parent form (weighted N = 1,400):
               High school or lower   Apprenticeship/2-year college   University or higher
N              646                    385                             369
%              46.2                   27.5                            26.4
Census %       46.6                   27.2                            26.2
Difference %   -0.43                  0.28                            0.16

Norm Groups
Age groups by form:
• Teacher: 5-6, 7-9, 10-11, 12-13, 14-16, 17-18
• Parent: 5-6, 7-9, 10-11, 12-13, 14-16, 17-18
• Self: 12-13, 14-16, 17-18
• Age groups based on age-group mean differences and developmental settings (K-12)
• Combined-gender norms (males are flagged as often as females)
• Gender-specific norms available as an option

Development of Standard Scores
• T-scores are reported for every scale
• Raw scores were converted to percentile ranks to preserve the shape of the original distribution (behavior ratings do not follow a normal curve)
• The empirical percentile scores were then smoothed
• Final smoothed percentile scores were converted to standard T-scores (M = 50, SD = 10)
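A worked sketch of the general percentile-to-T-score technique described above, assuming a vector of raw scale scores and a norm sample. This is an illustration only, not the publisher's actual norming procedure; in particular, the smoothing step here (simple clipping of extreme percentiles) is a stand-in for whatever smoothing was actually applied.

```python
import numpy as np
from scipy import stats

def raw_to_t(raw_scores, norm_sample):
    """Convert raw scale scores to T-scores (M = 50, SD = 10) via
    empirical percentile ranks computed against a norm sample."""
    norm_sample = np.asarray(norm_sample, dtype=float)

    # Empirical percentile rank of each raw score within the norm sample
    # ("mean" handles the many ties typical of Likert-sum scale scores).
    pct = np.array([stats.percentileofscore(norm_sample, x, kind="mean")
                    for x in np.atleast_1d(raw_scores)])

    # Keep percentiles away from 0/100 so the normal inverse stays finite,
    # then map the percentile onto the T metric.
    pct = np.clip(pct, 0.5, 99.5)
    z = stats.norm.ppf(pct / 100.0)
    return 50 + 10 * z

# Example: a raw Conduct score of 22 against a hypothetical norm sample
# of 1,400 raw scores in the 0-32 range.
rng = np.random.default_rng(0)
norms = rng.integers(0, 33, size=1400)
print(raw_to_t(22, norms))
```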
Interpretation of BIMAS Scores: Scale-Level Descriptors
Behavioral Concern Scales (higher scores = greater concern):
• High Risk: T = 70+
• Some Risk: T = 60-69
• Low Risk: T < 60
Adaptive Scales (higher scores = better functioning):
• Strength: T = 60+
• Typical: T = 41-59
• Concern: T ≤ 40

Interpretation of BIMAS Scores: Item-Level Descriptors (useful in individualized intervention design)
• Likert scale: 0 = Never; 1 = Rarely; 2 = Sometimes; 3 = Often; 4 = Very Often
• Behavioral Concern scales: items are labeled No Concern, Mild Concern (at or above the 75th percentile), or Concern (more than 1 SD above the mean; at or above the 84th percentile)
• Adaptive scales: items are labeled Concern (at or below the 10th percentile), Mild Concern (10th-20th percentile; about 1 SD below the mean), Fair, or Positive (at or above the 75th percentile; about M + 0.67 SD)

Psychometric Properties
• Reliability: internal consistency; test-retest (stability)
• Validity: content; construct (scale structure, screening accuracy, progress monitoring)

Reliability: Internal Consistency
• The extent to which all items on the same scale measure the same content
• Cronbach's alpha (ranges from 0.0 to 1.0; higher = more reliable); good range: α = .80 to .89
• Clinical cases were added to the normative sample for the reliability analyses (85% normative, 15% clinical) to increase variability in the data; most schools serve a mixture of students with and without diagnoses

Internal Consistency (Cronbach's Alpha)
Form          Conduct   Negative Affect   Cognitive/Attention   Social   Academic Functioning
Teacher       .91       .85               .91                   .85      .81
Parent        .87       .82               .90                   .84      .77
Self-Report   .88       .85               .87                   .83      .75

Reliability (cont'd): Test-Retest
• The stability of test scores when an assessment is administered on two or more occasions (without intervention)
• Pearson's correlation (r) between Time 1 and Time 2 BIMAS scores over a 2-to-4-week interval
• Ranges from -1 to +1; higher = more reliable; good range: .70 or higher

Test-Retest Reliability Coefficients
Form                   Conduct   Negative Affect   Cognitive/Attention   Social   Academic Functioning
Teacher (N = 112)      .89       .85               .91                   .91      .91
Parent (N = 83)        .79       .91               .84                   .96      .80
Self-Report (N = 53)   .81       .87               .82                   .90      .85
All rs significant, p < .001; 2-4 week interval (non-clinical sample; no intervention in between).
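Both reliability statistics above are straightforward to compute. The sketch below is a minimal illustration, assuming an (n_respondents × n_items) matrix of item ratings for one scale and paired Time 1/Time 2 scale scores; it is not the publisher's analysis code, and the example data are hypothetical.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(item_scores):
    """Cronbach's alpha for one scale.
    item_scores: (n_respondents, n_items) array of item ratings."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of scale totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def test_retest(time1, time2):
    """Test-retest reliability: Pearson r between the two administrations."""
    r, p = stats.pearsonr(time1, time2)
    return r, p

# Hypothetical example: 120 students x 9 Conduct items rated 0-4.
rng = np.random.default_rng(1)
items = rng.integers(0, 5, size=(120, 9))
print(round(cronbach_alpha(items), 2))
```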
Across-Informant Correlations
• Correlation between parent and teacher ratings, and between self-report and parent/teacher ratings
• Are the behaviors assessed by the BIMAS consistently detected by raters in different settings? (Different informants observe the student in different contexts.)
• Parent to Teacher: r = .79 to .86
• Parent to Self: r = .59 to .69
• Teacher to Self: r = .54 to .59

Validity
The validity of a test refers to the quality of the inferences that can be made from its scores (i.e., how well the test measures what it is intended to measure, and how well the claims made for its use and applications are supported by empirical evidence).

BIMAS Validity
• Content
• Construct: relationship with other measures, screening accuracy, progress monitoring

Content Validity
• Behavioral items on the BIMAS Standard are empirically based, drawing on Meier's (1997, 1998, 2000, 2004) work on change-sensitive item selection (the IISRs described earlier), with input from colleagues in field-testing studies over an 8-year period
• Structure of items into scales: exploratory and confirmatory factor analysis; rational/clinical analysis

Construct Validity: Relationship with Other Tests
• Convergent validity: the degree to which results from theoretically related measures converge
• Pearson's correlation (r) between BIMAS scales and the Conners Comprehensive Behavior Rating Scales (Conners CBRS) on relevant scales
• r ranges between -1 and +1; higher = more convergence (the reverse holds for the strength-based Adaptive scales, which should correlate negatively with problem scales)

Pearson Correlation Coefficients between Conners CBRS and BIMAS Conduct
• All construct-related scales show convergent validity
Conners CBRS Scale               Teacher (N = 112)   Parent (N = 126)   Self (N = 107)
Defiant/Aggressive Behaviors     .519                .734               .642
Conduct Disorder                 .499                .681               .615
Oppositional Defiant Disorder    .493                .777               .622
All correlations p < .01 (2-tailed).

Pearson Correlation Coefficients between Conners CBRS and BIMAS Negative Affect
• All construct-related scales show convergent validity
Conners CBRS Scale               Teacher (N = 112)   Parent (N = 126)   Self (N = 107)
Emotional Distress               .474                .696               .540
Major Depressive Episode         .379                .615               .556
All correlations p < .01 (2-tailed).

Pearson Correlation Coefficients between Conners CBRS and BIMAS Cognitive/Attention
• All construct-related scales show convergent validity
Conners CBRS Scale               Teacher (N = 112)   Parent (N = 126)   Self (N = 107)
Hyperactivity/Impulsivity        .627                .563               .460
ADHD Inattentive                 .688                .531               .530
ADHD Hyperactive/Impulsive       .621                .562               .455
All correlations p < .01 (2-tailed).

Pearson Correlation Coefficients between Conners CBRS and BIMAS Social
• All construct-related scales show convergent validity (negative correlations, since higher BIMAS Adaptive scores = fewer concerns)
Conners CBRS Scale               Teacher (N = 112)   Parent (N = 126)   Self (N = 107)
Autistic Disorder                -.688               -.469              n/a
Asperger's Disorder              -.706               -.521              n/a
All correlations p < .01 (2-tailed).

Pearson Correlation Coefficients between Conners CBRS and BIMAS Academic Functioning
• All construct-related scales show convergent validity (negative correlations, since higher BIMAS Adaptive scores = fewer concerns)
Conners CBRS Scale               Teacher (N = 112)   Parent (N = 126)   Self (N = 107)
Academic Difficulties            -.494               -.395              -.488
Language (subscale)              -.436               -.308              n/a
Math (subscale)                  -.389               -.354              n/a
All correlations p < .01 (2-tailed).
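As a hedged illustration of the convergent-validity analysis (not the authors' code), the snippet below builds the kind of cross-instrument correlation matrix shown in the tables above, assuming each rater's BIMAS and Conners CBRS scale scores sit in one data frame; the column names and scores are hypothetical.

```python
import pandas as pd

# Hypothetical per-student scale scores from the same teacher raters.
df = pd.DataFrame({
    "BIMAS_Conduct":        [61, 48, 72, 55, 66, 44],
    "BIMAS_NegativeAffect": [58, 51, 69, 47, 63, 49],
    "CBRS_DefiantAggr":     [64, 50, 75, 52, 68, 46],
    "CBRS_EmotionalDistr":  [55, 49, 71, 45, 62, 51],
})

bimas_cols = [c for c in df.columns if c.startswith("BIMAS_")]
cbrs_cols = [c for c in df.columns if c.startswith("CBRS_")]

# Pearson correlations between every BIMAS scale and every CBRS scale;
# rows = BIMAS scales, columns = CBRS scales.
conv = df.corr(method="pearson").loc[bimas_cols, cbrs_cols]
print(conv.round(2))
```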
BIMAS Validity (cont'd)
Claims for its use:
1. A multi-informant screening tool to identify emotional and behavioral concerns (Teacher, Parent, Self)
2. A progress monitoring tool

1) The BIMAS as a Screening Tool
• Ratings offered by teachers, parents, and students (self)
• Clinical samples were identified during the standardization process; screening criteria were applied through the use of a Clinical Diagnostic Information Form

The BIMAS Clinical Samples
• Large clinical samples representing different diagnostic groups
Clinical diagnoses of the samples rated by teachers, parents, and the students themselves:
Clinical Group   Teacher N (%)   Parent N (%)   Self-Report N (%)   Total N
DBD              123 (22.9)      70 (15.0)      65 (18.6)           258
ADHD             109 (20.3)      117 (25.1)     89 (25.4)           315
Anxiety          55 (10.2)       67 (14.3)      56 (16.0)           178
Depression       60 (11.2)       73 (15.6)      62 (17.7)           195
PDD              95 (17.7)       86 (18.4)      65 (18.6)           246
LD               45 (8.4)        --             --                  45
DD               30 (5.6)        --             --                  30
Other            21 (3.9)        54 (11.6)      13 (3.7)            88
Total            538 (100.0)     467 (100.0)    350 (100.0)         1,355

The BIMAS as a Screening Tool: How Were the Data Analyzed?
• Can the BIMAS discriminate clinical from non-clinical cases?
• Discriminant function analysis:
  - Compare the means of the groups across the BIMAS subscales
  - Develop a discriminant function equation
  - "Hide" original group membership and let the "predicted" membership be determined from the BIMAS subscale scores
• Calculate percent-correct classification estimates for the clinical, non-clinical, and total samples
• Calculate other classification accuracy statistics

The Teachers as Screening Agents
BIMAS–Teacher scores can differentiate clinical from non-clinical cases.
Clinical sample (compared with the normative sample: N = 1,361, M = 50, SD = 10):
BIMAS-T Standard Scale   N     M      SD     Cohen's d
Conduct                  516   63.5   10.9    1.3
Negative Affect          537   66.4   10.4    1.6
Cognitive/Attention      538   66.6    9.8    1.7
Social                   538   35.6   10.3   −1.4
Academic Functioning     538   40.2    9.8   −1.0
Cohen's d values of |0.2| = small effect, |0.5| = medium effect, |0.8| = large effect.

BIMAS Standard T-score Cut-offs
Behavioral Concern Scales: T = 70+ High Risk; T = 60-69 Some Risk; T < 60 Low Risk
Adaptive Scales: T ≤ 40 Concern; T = 41-59 Typical; T = 60+ Strength

Group Classification as Predicted by BIMAS–Teacher Scale Scores (using cut-scores)
                          Predicted Normative   Predicted Clinical   Total
Actual Normative Sample   1,167                 233                  1,400
Actual Clinical Sample    107                   431                  538
Total                     1,274                 664                  1,938
True negatives = 1,167; true positives = 431.

Classification Accuracy of BIMAS–Teacher Scales (all satisfactory)
Statistic                        Full Range of Scores   Cut-Scores
Overall Correct Classification   85.2%                  82.5%
Sensitivity                      83.5%                  80.1%
Specificity                      85.8%                  83.4%
Positive Predictive Power        68.4%                  64.9%
Negative Predictive Power        93.4%                  91.6%

[Chart: BIMAS–Teacher mean T-scores by clinical group (DBD, ADHD, Anxiety, Depression, PDD, LD, DD) on the Behavioral Concern scales (Conduct, Negative Affect, Cognitive/Attention), plotted against the High Risk / Some Risk / Low Risk bands.]
[Chart: BIMAS–Teacher mean T-scores by clinical group on the Adaptive scales (Social, Academic Functioning), plotted against the Strength / Typical / Concern bands.]
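A compact sketch of the analysis just described: fit a discriminant function to the five BIMAS scale T-scores, predict group membership, and derive the classification accuracy statistics from the resulting 2×2 table. The scikit-learn fit below uses simulated scores and is an illustration rather than the authors' actual procedure; the final call, however, reproduces the reported cut-score statistics directly from the teacher table above.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def accuracy_stats(tp, fn, fp, tn):
    """Screening accuracy statistics from a 2x2 classification table."""
    return {
        "overall":     (tp + tn) / (tp + fn + fp + tn),
        "sensitivity": tp / (tp + fn),   # clinical cases correctly flagged
        "specificity": tn / (tn + fp),   # normative cases correctly passed
        "ppp":         tp / (tp + fp),   # positive predictive power
        "npp":         tn / (tn + fn),   # negative predictive power
    }

# Illustrative discriminant function analysis on simulated T-scores
# (columns: Conduct, Negative Affect, Cognitive/Attention, Social, Academic).
rng = np.random.default_rng(2)
X_norm = rng.normal(50, 10, size=(1400, 5))
X_clin = rng.normal([63, 66, 66, 36, 40], 10, size=(538, 5))
X = np.vstack([X_norm, X_clin])
y = np.array([0] * 1400 + [1] * 538)            # 0 = normative, 1 = clinical

lda = LinearDiscriminantAnalysis().fit(X, y)
pred = lda.predict(X)
tp = int(((pred == 1) & (y == 1)).sum()); fn = int(((pred == 0) & (y == 1)).sum())
fp = int(((pred == 1) & (y == 0)).sum()); tn = int(((pred == 0) & (y == 0)).sum())
print(accuracy_stats(tp, fn, fp, tn))

# The reported teacher cut-score figures follow directly from the 2x2 table
# above: TP = 431, FN = 107, FP = 233, TN = 1,167.
print(accuracy_stats(tp=431, fn=107, fp=233, tn=1167))
# -> overall 82.5%, sensitivity 80.1%, specificity 83.4%, PPP 64.9%, NPP 91.6%
```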
The Parents as Screening Agents
BIMAS–Parent scores can differentiate clinical from non-clinical cases.
Clinical sample (compared with the normative sample: N = 1,400, M = 50, SD = 10):
BIMAS-P Standard Scale   N     M      SD     Cohen's d
Conduct                  467   60.3   10.5    1.0
Negative Affect          467   61.5   10.3    1.1
Cognitive/Attention      467   60.7    9.9    1.1
Social                   467   38.4    9.9   −1.2
Academic Functioning     467   40.4    7.9   −1.0
Cohen's d values of |0.2| = small effect, |0.5| = medium effect, |0.8| = large effect.

BIMAS Standard T-score cut-offs (as above): Behavioral Concern Scales — T = 70+ High Risk, T = 60-69 Some Risk, T < 60 Low Risk; Adaptive Scales — T ≤ 40 Concern, T = 41-59 Typical, T = 60+ Strength.

Group Classification as Predicted by BIMAS–Parent Scale Scores (using cut-scores)
                          Predicted Normative   Predicted Clinical   Total
Actual Normative Sample   1,124                 276                  1,400
Actual Clinical Sample    124                   343                  467
Total                     1,248                 619                  1,867
True negatives = 1,124; true positives = 343.

Classification Accuracy of BIMAS–Parent Scales (all satisfactory)
Statistic                        Full Range of Scores   Cut-Scores
Overall Correct Classification   78.3%                  78.6%
Sensitivity                      80.1%                  73.4%
Specificity                      77.7%                  80.3%
Positive Predictive Power        54.6%                  55.4%
Negative Predictive Power        92.1%                  90.1%

[Chart: BIMAS–Parent mean T-scores by clinical group (DBD, ADHD, Anxiety, Depression, PDD) on the Behavioral Concern scales (Conduct, Negative Affect, Cognitive/Attention).]
[Chart: BIMAS–Parent mean T-scores by clinical group on the Adaptive scales (Social, Academic Functioning).]

The Students as Screening Agents
BIMAS–Self-Report scores can differentiate clinical from non-clinical cases.
Clinical sample (compared with the normative sample: N = 703, M = 50, SD = 10):
BIMAS-SR Standard Scale   N     M      SD    Cohen's d
Conduct                   350   57.3   9.7    0.7
Negative Affect           350   59.2   9.7    0.9
Cognitive/Attention       350   57.3   8.2    0.8
Social                    350   41.4   9.7   −0.9
Academic Functioning      350   42.3   8.3   −0.8
Cohen's d values of |0.2| = small effect, |0.5| = medium effect, |0.8| = large effect.

Classification Accuracy of BIMAS–Self-Report Scales (all satisfactory)
Statistic                        Full Range of Scores   Cut-Scores
Overall Correct Classification   71.5%                  71.8%
Sensitivity                      76.3%                  67.1%
Specificity                      69.1%                  74.1%
Positive Predictive Power        55.3%                  56.5%
Negative Predictive Power        85.3%                  81.9%

[Chart: BIMAS–Self-Report mean T-scores by clinical group (DBD, ADHD, Anxiety, Depression, PDD) on the Behavioral Concern scales (Conduct, Negative Affect, Cognitive/Attention).]
[Chart: BIMAS–Self-Report mean T-scores by clinical group on the Adaptive scales (Social, Academic Functioning).]
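The Cohen's d values in the screening tables above compare each clinical group's mean T-score with the normative mean. A minimal worked sketch follows; because BIMAS T-scores are normed to M = 50 and SD = 10, a pooled-SD version and the simple (M_clinical − 50)/10 shortcut give similar answers. The figures plugged in below come from the teacher Conduct row as an illustration.

```python
import numpy as np

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d for two independent groups using the pooled SD."""
    pooled_sd = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Teacher Conduct scale: clinical vs. normative sample (from the table above).
d = cohens_d(m1=63.5, sd1=10.9, n1=516, m2=50.0, sd2=10.0, n2=1361)
print(round(d, 1))   # ~1.3, a large effect (|0.8| or more is "large")
```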
BIMAS Validity Claim No. 2: The BIMAS as a Progress Monitoring Tool

Progress Monitoring with BIMAS
• Documenting and measuring change/progress with the BIMAS Standard and BIMAS Flex
• A case study: an anger management group

Anger Management Treatment Study
• N = 46 (ages 12 to 18 years)
• Gender: 32 males and 14 females
• Race/Ethnicity: 30 African American, 2 Hispanic, and 14 Caucasian students
• BIMAS scores showed good sensitivity to change in response to intervention, in the theoretically expected direction

[Chart: Anger Management Treatment Group, pre- to post-treatment BIMAS T-scores on the Behavioral Concern scales (Conduct, Negative Affect, Cognitive/Attention) for the Teacher, Parent, and Self-Report forms.]
[Chart: Anger Management Treatment Group, pre- to post-treatment BIMAS T-scores on the Adaptive scales (Social, Academic Functioning) for the Teacher, Parent, and Self-Report forms.]

Pre-Post Intervention Performance of the Anger Management Treatment Group: BIMAS–Teacher T-scores
• Statistically significant change in the theoretically expected direction
BIMAS-T Scale          Pre-Test M (SD)   Post-Test M (SD)   t      Cohen's d
Conduct                65.9 (4.8)        59.3 (3.7)          9.2    1.5
Negative Affect        63.0 (10.7)       53.9 (7.7)          6.6    1.0
Cognitive/Attention    63.3 (6.6)        55.3 (6.9)          7.3    1.2
Social                 30.0 (5.5)        34.4 (7.2)         −3.4   −0.7
Academic Functioning   41.9 (4.9)        45.7 (4.1)         −5.2   −0.8
Note. N = 46. All ts significant at p < .01. Cohen's d: |0.2| = small, |0.5| = medium, |0.8| = large effect.

Pre-Post Intervention Performance of the Anger Management Treatment Group: BIMAS–Parent T-scores
• Statistically significant change in the theoretically expected direction
BIMAS-P Scale          Pre-Test M (SD)   Post-Test M (SD)   t       Cohen's d
Conduct                66.6 (5.8)        53.5 (4.3)          12.7*   2.6
Negative Affect        60.8 (9.5)        47.1 (6.9)          10.4*   1.7
Cognitive/Attention    59.4 (5.4)        49.5 (4.6)          10.3*   2.0
Social                 31.7 (4.9)        37.5 (6.9)          −4.7*  −1.0
Academic Functioning   40.0 (4.4)        45.7 (4.1)          −7.3*  −1.3
Note. N = 46. *All ts significant at p < .01. Cohen's d: |0.2| = small, |0.5| = medium, |0.8| = large effect.

Pre-Post Intervention Performance of the Anger Management Treatment Group: BIMAS–Self-Report T-scores
• Statistically significant change in the theoretically expected direction
BIMAS-SR Scale         Pre-Test M (SD)   Post-Test M (SD)   t        Cohen's d
Conduct                65.5 (5.4)        52.2 (3.8)          13.8*    2.8
Negative Affect        59.2 (9.8)        44.6 (6.5)          11.5*    1.8
Cognitive/Attention    62.7 (6.6)        49.6 (4.2)          12.9*    2.4
Social                 35.1 (6.2)        39.5 (4.8)          −4.5*   −0.8
Academic Functioning   38.9 (5.0)        46.2 (3.0)         −10.1*   −1.8
Note. N = 46. *All ts significant at p < .01. Cohen's d: |0.2| = small, |0.5| = medium, |0.8| = large effect.
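A hedged sketch of the pre/post analysis summarized in the tables above: a paired t-test on the group's pre- and post-treatment T-scores and a within-group Cohen's d. The d shown here pools the pre- and post-test SDs, which approximately reproduces the reported values, though the original analysis may have used a different variant; the scores in the example are hypothetical.

```python
import numpy as np
from scipy import stats

def pre_post_change(pre, post):
    """Paired t-test plus a within-group Cohen's d that pools the
    pre- and post-test SDs (one common convention)."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    t, p = stats.ttest_rel(pre, post)
    pooled_sd = np.sqrt((pre.std(ddof=1) ** 2 + post.std(ddof=1) ** 2) / 2)
    d = (pre.mean() - post.mean()) / pooled_sd
    return t, p, d

# Hypothetical Conduct T-scores for a small treatment group.
pre  = [68, 63, 71, 60, 66, 65, 70, 62]
post = [60, 57, 63, 55, 59, 58, 61, 56]
t, p, d = pre_post_change(pre, post)
print(f"t = {t:.1f}, p = {p:.3f}, d = {d:.1f}")
```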
BIMAS Web Interface
• For access and product details, please contact the publisher (MHS Inc.); contact information appears at the end of this presentation.

Some Key Features
• Web-based: users can access the system from different locations
• Different levels of access, with the user interface tailored to the needs of each user
• Online or paper administration; paper tests can be generated and scanned in batches with any regular scanner
• Real-time reporting
• Many custom features

Levels of Access within the BIMAS System
School Setup Features:
• Schedule BIMAS Standard universal assessments across all schools in a district
• Tailor the schedule to each individual school
• Import feature (Excel template for easy upload)
Class Setup Features:
• Classes organized by school, grade, subject, and teacher
• Homeroom classes vs. subject classes
• Import feature
User Setup Features:
• Different access levels for different user types: T-score rights, read-only rights, multi-rater comparison rights
• Password-protected login for users
• Import feature
Student Setup Features:
• Demographics linked to students for effortless assessment and report generation
• Easy migration of students from year to year or school to school within a district
• Import feature

Screening/Progress Monitoring Administration & Scoring Features:
• Individual or group (batch) paper and online administrations
• Student information does not have to be re-entered for every administration
• Import paper-form data using any regular scanner

Online/Paper Administration & Scoring Options
Q: Does the rater have internet access?
• If YES — Online administration: the rater completes the assessment online via an emailed link or by logging directly into BIMAS Online with a user account. Online scoring: the assessment is automatically scored and reports are generated online.
• If NO — Paper-and-pencil administration: the assessor prints a paper form from BIMAS Online for the rater to fill out manually. Scoring then proceeds either by scanned scoring (the assessor scans the form with any regular scanner and imports the image files into BIMAS Online) or by online scoring (the assessor enters the responses from the paper form into BIMAS Online for automatic scoring and report generation).

Progress Monitoring Features:
• Searchable list of all students
• Quick status, student profile, and assessment history for each student
• Assign Flex items for each student
• Paper and online assessments linked to the student
• Reminders sent for student follow-up
• Notes section for general comments about the student

Application of BIMAS within the RtI Framework
• Tier 3 — Intensive Level: intensive services and frequent monitoring; reduce the complications, intensity, and severity of current cases
• Tier 2 — Targeted Level: intervention and progress monitoring for at-risk students; reduce current cases of problem behavior
• Tier 1 — Universal Level: universal screening; reduce new cases of problem behavior
[Pyramid graphic: sequences of BIMAS Standard (S) and Flex (F) administrations at each tier, with monitoring becoming more frequent moving up the tiers.]

Web-based Reports
Reports Features:
• Only reports accessible to the user are available
• Graphs are generated on screen and can easily be regenerated with different variables in real time
• Direct links between related reports, with drill-down options from group to individual reports
• Graphs can be copied easily into any user documentation
• Item-level norms available

Types of Reports
1) Assessment Reports — present BIMAS Standard results numerically and graphically for one specific assessment (i.e., static results from a single point in time).
• At the group level: ideal for universal screening or intervention design for an intervention group, class, grade, school, and/or district.
• At the individual level: Standard scale scores can assist universal screening by identifying problem areas for a student, while the item-level scores (also based on U.S. national norms) are particularly useful for individualized intervention design.

Types of Reports (cont'd)
2) Progress Reports — compare the results of two or more BIMAS Standard/Flex assessments for the same individual or group of individuals to monitor progress and measure change over time.
• At the group level: useful in program evaluation for an intervention group, class, grade, school, and/or district.
• At the individual level: student progress monitoring becomes effortless with scale-level as well as item-level time-series graphs.

Types of Reports (cont'd)
3) Comparative Reports — offer comparisons between the results of different BIMAS assessments.
• At the group level: assist universal screening or intervention design by comparing, numerically and graphically, group-average BIMAS Standard scale scores obtained at approximately the same point in time across different classes, grades, or schools.
• At the individual level: a versatile tool for student progress monitoring in which BIMAS Standard/Flex assessments by different raters for the same individual are compared.

Types of Reports (cont'd)
4) Demographic Reports — present graphically the percentage/numeric breakdown of students in a class, grade, school, or district along a particular demographic variable (e.g., age, gender, service code, or race/ethnicity).
• Group level only: ideal for identifying and targeting specific demographic risk factors in universal screening or intervention design.

Summary: Strengths of the BIMAS
• Empirically based and sensitive to change (excellent for RtI)
• Standard & Flex components
• Large norm samples and good psychometric properties
• Powerful web-based interface
• Easy paper and online administration and scoring options
• Wide selection of informative web-based reports

Author Contact Information
Please direct all theoretical/behavioral RtI implementation inquiries to:
Achilles N. Bardos, Ph.D.
Professor of School Psychology
University of Northern Colorado
College of Education & Behavioral Sciences
McKee Hall 289, CB 131
Greeley, CO 80639
(970) 405-6694
abardos@comcast.net

Publisher Contact Information
Please direct all product-related inquiries to:
Jane Wong, M.A.
Research Associate
Multi-Health Systems, Inc. (MHS)
Email: jane.wong@mhs.com
3770 Victoria Park Avenue
Toronto, ON, Canada M2H 3M6
Toll Free Tel: 1-800-456-3003 ext. 209
Toll Free Fax: 1-888-540-4484 (Attn: Jane Wong)
Visit our website: www.mhs.com/bimas