What Might Disability Practitioners Learn from STELLA, A Computerized System for Selecting Accommodations for ELLs?
Rebecca Kopriva, University of Maryland
rkopriva@umd.edu
STELLA stands for the Selection Taxonomy for English Language Learner Accommodations.
Federally funded by the U.S. Department of Education (USED).
A collaboration of:
• University of Maryland
• South Carolina Department of Education
• North Carolina Department of Public Instruction
• Maryland State Department of Education
• District of Columbia Public Schools
• Austin Independent School District
• American Association for the Advancement of Science (AAAS)
What is STELLA?
STELLA is a computerized decision-making system designed to provide a systematic mechanism for:
– Appropriately defining and identifying different types of English language learners, and
– Matching these students to the accommodation methods appropriate for each student.
Purpose of STELLA:
While appropriate accommodations are being
identified and integrated into large-scale
academic achievement systems, how do the
proper accommodations get to the correct
students?
This is the question that STELLA addresses.
Framework of STELLA (flow diagram):
Students → 3 Forms → Conversion and Consolidation Rules → Computerized Profile of Each Student → Decision Rules (drawing on the Test Accommodation Domain) → Test Accommodations for Each Student
Data Collection Mechanism
• Teacher, Parent, and Records Forms
Preloaded
• Relevant Student Information Variables
• Domain of Promising Test Accommodations
• Student Information Conversion and Consolidation Rules
• Test Accommodation Decision-Making Rules
Output
• Student Profile
• Accommodation Decisions for Each Student
Associated Materials
• Data Collection Forms Handbook
• Output Interpretation Handbook
• Technical Manual
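Taken together, the framework and data collection slides describe a pipeline: the three forms feed preloaded conversion, consolidation, and decision rules, which produce a student profile and accommodation decisions. A minimal, purely illustrative sketch of that flow (all function names, keys, and placeholder values below are invented, not STELLA's implementation) might look like:

# Hypothetical sketch of the pipeline described above. Function names,
# dictionary keys, and placeholder values are invented for illustration;
# they are not STELLA's actual implementation.

def collect_forms(student_id: str) -> dict:
    """Gather responses from the teacher, parent/guardian, and records forms."""
    return {"teacher": {}, "parent": {}, "records": {}}  # placeholder data

def build_profile(forms: dict) -> dict:
    """Apply conversion and consolidation rules to produce a student profile."""
    return {"english_proficiency": None, "l1_proficiency": None,
            "cultural_proximity": None, "us_schooling": None}

def select_accommodations(profile: dict, allowed_by_agency: set) -> list:
    """Apply decision-making rules, restricted to accommodations the agency allows."""
    return []  # decision rules would populate this list

def run_stella(student_id: str, allowed_by_agency: set) -> list:
    forms = collect_forms(student_id)
    profile = build_profile(forms)
    return select_accommodations(profile, allowed_by_agency)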
How might STELLA be Helpful to You?
Relevant questions might include:
• How did key student variables get identified?
• What was the thinking behind the prioritization of
variables in the organizational algorithms?
• Why did we choose the 3 sources of information
from whom to collect data?
• What might the results tell disability practitioners
about who might benefit from a systematic decision-making process?
Relevant Characteristics of Students
(Concept map built up over several slides)
• Language Proficiency
– English: Reading, Writing, Speaking, Listening
– L1: Reading, Writing, Speaking, Listening
• US Schooling
– Time in School: Time in US, Consistency
– Schooling Experiences: Structure of Academic Year, Resources
– Testing Experiences: Types/Purposes of Testing, Formats, Practices
• Cultural Proximity
• Classroom Experiences
Together, these characteristics define each student's Needs.
Current Domain of Accommodations
• Pre-Test Best Practice Accommodations
– Family Assessment Night
– Tailored Classroom Support
• Forms
– Standard or some Universal Design Forms
– Access-based Form
– L1 or Side-by-Side forms as available
• Tools
– Bilingual word list, general or test specific
– Picture-word dictionary
– Problem solving tools
• Administration
– Small group
– Individual Administration
– Oral English
– Oral L1
– Language Liaison
– Extra time
– More frequent breaks
• Response
– Written L1 or code switching
– Oral English
– Oral L1 or code switching
– Demonstrated or modeled response
Data Collection
• Forms (pull down menus and bubbles to
explain questions)
– Records Form
• Language of content instruction per content
area
• English language Proficiency information
• L1 test information (if any)
• ELL program (for student profile only)
– Parent/Guardian Interview Form*
• Interview protocol with rating scales
• L1 information, 4 domains
• Full-time academic programs in U.S.
– Length of time in U.S. schools
– Consistency
• School atmosphere in native country if
applicable
– Time (months, days/week, hours/day)
– Number of students in classroom
– Describing the school (e.g., chalkboards, desks, textbooks per student, other books, supplies for math or science, additional comments)
Parent/Guardian Interview Form, cont.*
• Types of tests, assessments in schools in
native country
– Formal high stakes, formal not high stakes, types of
ongoing classroom evaluations, methods or tasks
used to assign grades
– Accurate reflection of child’s achievement?
• Assessments in U.S. schools
– Accurate reflection of achievement?
– Experience with various test formats
*Older students may be able to complete this or a similar form.
Teacher Form
• English and L1 proficiency judgments
– Explanation of judgment criteria
– L1 judgment includes a don’t know option
• Standardized score accuracy and judgments
about reasons for inaccuracy
• Student’s experience with standard test
formats
• Student’s perceived purpose of standardized
testing
• Classroom test condition options
• Teacher’s judgment about condition options
that help student on classroom tests,
evaluations
Test Conversion Rules
• English language proficiency tests are currently preloaded in the system. Output from each of the tests is put on a common “scale” for the purposes of selecting accommodations.
Currently, output from 4 tests is preloaded. There is also the opportunity for personnel to add results from other tests.
In all cases score conversion rules place
students on a common scale with four levels:
– Beginner ELL
– Low intermediate ELL
– High intermediate ELL
– Grade level competitive ELL*
* Academically thriving in mainstream classrooms
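As a purely hypothetical sketch of what a preloaded conversion rule might look like in code (the test name and cut points below are invented, not STELLA's actual rules), a score-to-level mapping could be written as:

# Hypothetical conversion rule: places a raw score from one English language
# proficiency test onto the common four-level scale described above.
# The test name and cut points are invented for illustration only.

COMMON_LEVELS = [
    "Beginner ELL",
    "Low intermediate ELL",
    "High intermediate ELL",
    "Grade level competitive ELL",
]

def convert_score(test_name: str, raw_score: float) -> str:
    """Place a student's proficiency-test score on the common scale."""
    cut_points = {"example_elp_test": [20, 40, 60]}  # illustrative cuts only
    cuts = cut_points[test_name]
    level = sum(raw_score >= c for c in cuts)  # 0 through 3
    return COMMON_LEVELS[level]

print(convert_score("example_elp_test", 45))  # -> High intermediate ELL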
Student Information Consolidation Rules
In several cases, more than one piece of student information is used to make judgments about the student's level on relevant variables. Two types of consolidation rules are part of STELLA. These are:
– Consolidation of data from related items
– Consolidation of information from more than one
source
From More Than One Question: An Example
Time in School = LOW
If the student has been in the US less than 1 academic
year
OR
If the student has been in US between 1 and 2 years
AND has missed more than two months of school
per year for 1 or more years.
OR
If the student has been in US between 2 and 3 years
AND has missed more than two months of school
per year for more than one year.
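Read as code, this consolidation rule is a disjunction of conditions. The sketch below restates the rule from the slide directly; the function and parameter names are mine, not STELLA's:

# Restatement of the "Time in School = LOW" rule above. Names are invented;
# the logic follows the slide. years_missing_over_two_months counts the
# number of school years in which the student missed more than two months.

def time_in_school_is_low(years_in_us: float,
                          years_missing_over_two_months: int) -> bool:
    if years_in_us < 1:
        return True
    if 1 <= years_in_us <= 2 and years_missing_over_two_months >= 1:
        return True
    if 2 <= years_in_us <= 3 and years_missing_over_two_months > 1:
        return True
    return False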
From More Than One Source: An Example
For L1 proficiency, consolidation rules are
applied to information from teacher, parent
and records.
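The slide does not spell out the multi-source rules themselves. As a hypothetical illustration only, one conservative way to consolidate three ratings might be:

# Hypothetical multi-source consolidation of L1 reading ratings from teacher,
# parent, and records. The "take the lowest available rating" policy is an
# assumption for illustration, not STELLA's actual rule.

def consolidate_l1_reading(teacher=None, parent=None, records=None):
    ratings = [r for r in (teacher, parent, records) if r is not None]
    return min(ratings) if ratings else None  # None = insufficient information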
Decision-Making Rules for Accommodations
A beginning set of decision-making rules was developed and tested. These rules take relevant student information and pair it with relevant accommodation factors for individual students. The accommodation factors identify which student needs each specific accommodation was designed to remediate.
In this way, individual students are matched with accommodations appropriate to their particular needs.
An Example
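The original slide presented this example as a decision tree. As a hypothetical sketch only (the thresholds and need-to-accommodation pairings below are invented, not STELLA's published rules), a branch of such a tree might be expressed as:

# Hypothetical decision rule pairing student needs with accommodations whose
# accommodation factors address those needs. Proficiency levels run 0
# (Beginner ELL) to 3 (Grade level competitive ELL). Pairings and thresholds
# are invented for illustration; they are not STELLA's actual decision tree.

def recommend(profile: dict) -> list:
    recs = []
    # Low English reading but usable L1 reading -> L1 or side-by-side form.
    if profile["english_reading"] <= 1 and profile["l1_reading"] >= 2:
        recs.append("L1 or side-by-side form")
    # Oral English noticeably stronger than English reading -> oral administration.
    if profile["english_listening"] - profile["english_reading"] >= 2:
        recs.append("Oral English administration")
    return recs

print(recommend({"english_reading": 0, "l1_reading": 3, "english_listening": 2}))
# -> ['L1 or side-by-side form', 'Oral English administration']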
Current Status
At this time, the rules explicitly use information
from English language reading and oral
proficiency, and L1 reading proficiency to
make broad decisions.
Cultural proximity and US schooling variables
are informally used to make the final
decisions about the selection of
accommodations. Future decision trees will
be developed which formally identify how
these latter factors are used.
Current Adaptation Possibilities
1. STELLA recognizes that different states allow different accommodations. As such, the platform for STELLA is designed so that the decision-making rules can be adapted to suit individual agencies. The output in STELLA specifies the accommodations allowed for each agency using it.
In addition, since the accommodations selected for each student are based on research and other best-practice literature, STELLA also displays the results identified for each student that are based on this literature. In this way, teachers and others have guidance about additional accommodations that should be useful for ELL students with specified needs.
2. New accommodations can also be added to the system.
Example of Another Decision Tree
STELLA Output
1. Individual Student Profile
– English language proficiency, 4 domains
– L1 proficiency, 4 domains
– Cultural proximity, 5 variables
• Previous schooling experiences
• Time in U.S.
• U.S. experience with testing procedures
• ELL program record in U.S.
• Native country
2. Recommended Accommodations
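As a hypothetical sketch of the output record described above (the field names are mine, not STELLA's), the profile plus recommendations could be represented as:

# Hypothetical representation of STELLA's per-student output: ELP and L1
# proficiency across four domains, five cultural-proximity variables, and
# the recommended accommodations. Field names are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class StudentProfile:
    english_proficiency: dict    # reading, writing, speaking, listening
    l1_proficiency: dict         # reading, writing, speaking, listening
    cultural_proximity: dict     # schooling, time in U.S., testing experience,
                                 # ELL program, native country
    recommended_accommodations: list = field(default_factory=list)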
Student Profile Example
Recommended Accommodations
Initial Verification Studies
1. Cut-score Study
2. Independent Raters Study
1. Computer-based Cut-Score Study
Method
• 276 3rd and 4th grade Spanish-speaking ELL students were administered a mathematics test (30 multiple-choice and 3 constructed-response items).
• Based on information from teachers and schools, appropriate accommodations were identified for each student.
• Accommodations (picture-word, Spanish-English, oral English) were randomly assigned. Each student received none, 1, 2, or 3.
1. Cut-Score Study cont.
After completing test:
• Students were assigned to 3 groups: appropriate accommodation, random accommodation, and no accommodation (the independent variable); test scores served as the dependent variable.
Findings:
• ANOVA results indicate a significant difference among the three groups (F = 3.2, p = .04).
1. Cut-Score Study cont.
Findings cont.
• Appropriate accommodations group scored
significantly higher than other two groups:
t (no/app) = 2.24, p=.03;
t (random/app) = 2.33, p=.02
• No significant difference between random and no accommodations groups:
t (random/no) = 1.67, p = .49
• Regression results found that the accommodations addressed ELP reading and listening and L1 reading needs.
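The comparisons reported in this study follow a one-way ANOVA with follow-up pairwise t-tests. Purely to illustrate that analysis design (the score arrays below are dummy placeholders, not the study's data), the computation might be run as:

# Illustration of the analysis design reported above: one-way ANOVA across
# the three accommodation groups, then pairwise t-tests. The score arrays
# are dummy placeholders, not the study's actual data.

from scipy import stats

appropriate = [22, 25, 19, 24, 21]   # appropriate-accommodation group
random_acc  = [18, 20, 17, 19, 16]   # random-accommodation group
no_acc      = [17, 19, 18, 16, 18]   # no-accommodation group

f_stat, p_anova = stats.f_oneway(appropriate, random_acc, no_acc)
t_no_app, p_no_app = stats.ttest_ind(no_acc, appropriate)
t_rand_app, p_rand_app = stats.ttest_ind(random_acc, appropriate)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")
print(f"no vs. appropriate: t = {t_no_app:.2f}, p = {p_no_app:.3f}")
print(f"random vs. appropriate: t = {t_rand_app:.2f}, p = {p_rand_app:.3f}")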
1. Cut-Score Study cont.
Implications
• For accommodations assignment:
– Variables utilized appear to be among the most
salient.
• For validity of scores:
– When accommodations were not appropriate, scores did not increase over those of students receiving no accommodations. With appropriate accommodations, scores did increase. This pattern suggests improved validity.
2. Independent Raters Study
Method
• 5 sets of accommodations (STELLA, 3
teacher recommendations, 1 randomly
generated)
• 4 raters: 3 teachers/ELL specialists, 1
researcher with experience in ELL testing
• Reviewed completed forms for each student
• Blindly rated 5 sets of accommodations for
each student from most appropriate to least
appropriate
2. Raters Study cont.
Findings
• ANOVA results of ratings found a significant
difference by accommodation sets (F =
76.789, p <.001).
• Significant difference between STELLA findings and all other findings, with the best fit for STELLA (p < .001).
• No significant difference between any of the
other findings.
Rater by source interaction was not significant (F = 1.184, p = .288), suggesting that raters did not differentially assign accommodation sets.
2. Raters Study cont.
Implications:
• Findings suggest that teacher ratings are not
significantly different from random, no matter how
much targeted information they collect.
• Even when teachers know what variables they are
asked to focus upon, their results are not
significantly different from when they are asked to
assign accommodations based on only their
understanding of the students.
• STELLA results consistently and significantly provide the best fit.
Next Steps
• Continual refinement and customization are necessary.
• Additional and more explicit cut points need
to be validated.
• Data needs to be collected on the impact of
the system for different ages and content
areas.