Chapter 6
Alignment Among Secondary and Post-Secondary Assessments
in Oregon
The Oregon Assessment Environment
For more than ten years, Oregon has been involved in substantial educational
reform. The most visible components of the reform efforts are the Certificate of Initial
Mastery (CIM) and Certificate of Advanced Mastery (CAM), established by the Oregon
state legislature in 1991. These certificates are designed to be capstones to student
mastery of the standards at the tenth and twelfth grades, respectively, as part of the larger
standards and assessment system. In order to earn a CIM, Oregon students are expected
to meet standards based on performance on standardized assessments and in work
samples in the tenth grade in English (reading and writing), math, speaking, science and
social studies.1 Specifically, students are required to exceed a specified cut score on each
of the reading, writing, math, and science sections of the statewide achievement test, and
must also obtain a minimum score of 4 (on a 1 to 6 point scale) on their work samples
(which are classroom assignments). Currently, only the CIM is available, as the CAM is
still under development. Neither the CIM nor the CAM is required for graduation, but
prospective college students are encouraged to obtain the CIM and CAM because doing so automatically satisfies some of the admissions requirements of the Oregon University System.
Additionally, the Oregon University System established the Proficiency-Based
Admissions Standards System (PASS), which links the state’s secondary education
standards and curriculum to admissions standards at state universities. PASS describes
the skills and knowledge that students need to demonstrate before they can be admitted to
one of Oregon’s public higher education institutions. There are several ways in which
students can demonstrate proficiency in the PASS standards. Students can meet many,
though not all, of the PASS proficiencies by performing satisfactorily on the state
achievement tests, including the CIM and CAM.
1. In the spring of 2000, there were several changes to the CIM requirements. First, rather than a single certificate for achievement across all subjects, individual certificates will be awarded by subject. Second, students will be able to earn their CIM any time during their high-school years, and will no longer be required to fulfill the CIM requirements by 10th grade.
Oregon Assessments Included in this Study
For this study, we examined the standardized assessments that are part of the
PASS system, namely the CIM Reading, CIM Writing, and CIM Math. The remaining PASS proficiencies are not measured through standardized assessments and are not analyzed here.
The CIM Reading consists of a 65-item multiple-choice section, and the CIM Writing
consists of a single writing sample. The CIM Math test contains 55 multiple-choice
questions, and one open-ended problem-solving item. The CIM Reading and Math
assessments are level tests, which means that students of different proficiency take
different versions.2 There are three versions of the test: low, medium, and high. For this
study, we examined all three versions and averaged the results. Because students at
different levels may be exposed to different item content mixes, findings about content
coverage or cognitive demands should be interpreted with caution.
To identify the appropriate test level for the students, teachers may use
professional judgment and/or administer locator tests provided by the state. In reading,
the locator test is a 45-minute exam, consisting of 54 multiple-choice items. The math
locator exam contains 24 multiple-choice questions administered within 40 minutes. We included both locator tests in our study.
In addition to the CIMs, students applying to a public university in Oregon may
also be required to take placement tests in math and/or English. These tests are used to
determine whether admitted students possess entry-level math and English skills.
Colleges in Oregon administer a wide range of placement exams; we include the
assessments used at the University of Oregon as an example. At the University of
Oregon, students who do not meet the minimum achievement level on the SAT I or ACT
are required to take the Test of Standard Written English (TSWE), which is a 30-minute,
50-item multiple-choice exam that assesses use of basic grammar, sentence structure, and
word choice.3 Students who do not exceed a pre-specified cut-score on the TSWE are
required to enroll in a remedial English course.
In math, all students enrolling at the University of Oregon must take a placement
test except examinees with satisfactory scores on the AP Calculus exam, or those who
have transferred credit for college-level calculus from another institution. The math placement test, which will hereafter be referred to as the UO Math Placement Test, consists of 40 multiple-choice questions administered within 50 minutes. Scores on the UO Math Placement Test are used to determine which mathematics courses students will be eligible to enroll in.

2. The rationale for having different versions of the state test is to provide more precise measures of proficiency by allowing students to take exams that are more closely tailored to their achievement level.

3. The TSWE is published by the Educational Testing Service.
Tables 6.1 and 6.2, organized by test function, list these testing programs and the
type of information we were able to obtain for this study. For most tests, we used a
single form from a recent administration or a full-length, published sample test. In a few
instances where full-length forms were unavailable, we used published sets of sample
items. This was the case for the UO Math Placement Test, CIM Reading, and CIM Math.
For the ELA tests, Table 6.2 specifies whether the test includes each of three possible
skills: reading, editing, and writing.
Table 6.1
Technical Characteristics of the Mathematics Assessments

State Achievement Tests

Certificate of Initial Mastery Mathematics Assessment (CIM Math)
  Materials examined: Sample items
  Time limit: No time limit; two sessions, one each for MC and OE items
  Number and type of items: 55 MC, 1 OE
  Tools: Calculator
  Purpose: Monitor student achievement toward specified benchmarks
  Content as specified in test specifications: Calculations and estimations, measurement, statistics and probability, algebraic relationships, and geometry

CIM Locator Test
  Materials examined: Full sample form
  Time limit: 40 minutes
  Number and type of items: 24 MC
  Tools: Calculator
  Purpose: Identify the appropriate form of CIM to be administered
  Content as specified in test specifications: Calculations and estimations, measurement, statistics and probability, algebraic relationships, and geometry

College Admissions Tests

ACT
  Materials examined: Full sample form
  Time limit: 60 minutes
  Number and type of items: 60 MC
  Tools: Calculator
  Purpose: Selection of students for higher education
  Content as specified in test specifications: Prealgebra (23%), elementary algebra (17%), intermediate algebra (15%), coordinate geometry (15%), plane geometry (23%), and trigonometry (7%)

SAT I
  Materials examined: Full sample form
  Time limit: 75 minutes
  Number and type of items: 35 MC, 15 QC, 10 GR
  Tools: Calculator
  Purpose: Selection of students for higher education
  Content as specified in test specifications: Arithmetic (13%), algebra (35%), geometry (26%), and other (26%)

SAT II Mathematics Level IC
  Materials examined: Full sample form
  Time limit: 60 minutes
  Number and type of items: 50 MC
  Tools: Calculator
  Purpose: Selection of students for higher education
  Content as specified in test specifications: Elementary and intermediate algebra (30%); geometry (38%: plane Euclidean (20%), coordinate (12%), and three-dimensional (6%)); trigonometry (8%); functions (12%); statistics and probability (6%); and miscellaneous (6%)

SAT II Mathematics Level IIC
  Materials examined: Full sample form
  Time limit: 60 minutes
  Number and type of items: 50 MC
  Tools: Calculator
  Purpose: Selection of students for higher education
  Content as specified in test specifications: Algebra (18%); geometry (20%: coordinate (12%) and three-dimensional (8%)); trigonometry (20%); functions (24%); statistics and probability (6%); and miscellaneous (12%)

College Placement Tests

University of Oregon Math Placement Test
  Materials examined: Sample items
  Time limit: 50 minutes
  Number and type of items: 40 MC
  Tools: Calculator
  Purpose: Placement of students into appropriate math course
  Content as specified in test specifications: Elementary algebra, intermediate algebra, geometry, trigonometry

Notes. MC = multiple-choice; QC = quantitative comparison; GR = grid-in; OE = open-ended.
Table 6.2
Technical Characteristics of the English/Language Arts Assessments

State Achievement Tests

Certificate of Initial Mastery Reading Assessment (CIM Reading)
  Materials examined: Sample items
  Time limit: No time limit
  Number and type of items: 65 MC reading
  Sections: Reading: Yes; Editing: No; Writing: No
  Purpose: Measure student achievement toward specified benchmarks

Certificate of Initial Mastery Writing Assessment (CIM Writing)
  Materials examined: Sample writing samples
  Time limit: No time limit
  Number and type of items: 1 OE writing
  Sections: Reading: No; Editing: No; Writing: Yes
  Purpose: Measure student achievement toward specified benchmarks

CIM Locator
  Materials examined: Full sample form
  Time limit: 45 minutes
  Number and type of items: 54 MC reading
  Sections: Reading: Yes; Editing: No; Writing: No
  Purpose: Identify the appropriate form of CIM to be administered

College Admissions Tests

ACT
  Materials examined: Full sample form
  Time limit: 80 minutes (35 minutes reading, 45 minutes editing)
  Number and type of items: 40 MC reading, 75 MC editing
  Sections: Reading: Yes; Editing: Yes; Writing: No
  Purpose: Selection of students for higher education

AP Language and Composition
  Materials examined: Full sample form
  Time limit: 180 minutes (60 minutes reading, 120 minutes writing)
  Number and type of items: 52 MC reading, 1 OE reading, 2 OE writing
  Sections: Reading: Yes; Editing: No; Writing: Yes
  Purpose: Provide opportunities for high-school students to receive college credit and advanced course placement

SAT I
  Materials examined: Full sample form
  Time limit: 75 minutes
  Number and type of items: 40 MC reading, 38 MC editing
  Sections: Reading: Yes; Editing: Yes; Writing: No
  Purpose: Selection of students for higher education

SAT II Literature
  Materials examined: Full sample form
  Time limit: 60 minutes
  Number and type of items: 60 MC reading
  Sections: Reading: Yes; Editing: No; Writing: No
  Purpose: Selection of students for higher education

SAT II Writing
  Materials examined: Full sample form
  Time limit: 60 minutes (40 minutes editing, 20 minutes writing)
  Number and type of items: 60 MC editing, 1 OE writing
  Sections: Reading: No; Editing: Yes; Writing: Yes
  Purpose: Selection of students for higher education

College Placement Tests

Test of Standard Written English (TSWE)
  Materials examined: Full sample form
  Time limit: 30 minutes
  Number and type of items: 49 MC editing
  Sections: Reading: No; Editing: Yes; Writing: No
  Purpose: Evaluate student ability in recognizing standard written English

Notes. MC = multiple-choice; OE = open-ended.
Alignment Among Oregon Math Assessments
In this section, we describe the results of our alignment exercise for the math
assessments. The results are organized so that alignment among tests with the same
function is presented first, followed by a discussion of alignment among tests with
different functions. In some instances, there are only two tests that share the same
purpose, so it is important to recognize that patterns or comparisons between these tests
may not be indicative of more general trends within this category of tests.
Alignment is described by highlighting similarities and differences with respect to
technical features, content, and cognitive demands. That is, we first present how the
assessments vary on characteristics such as time limit, format, contextualized items,
graphs, diagrams, and formulas. We then document differences with respect to content
areas, and conclude with a discussion of discrepancies in terms of cognitive requirements.
Table 6.3 presents the alignment results for the math assessments. The numbers
in Table 6.3 represent the percent of items falling into each category. As an example of
how to interpret the table, consider the SAT I results: 58% of its items are multiple-choice, 25% are quantitative comparisons, and 17% are grid-in items. With respect to
contextualization, 25% of the SAT I questions are framed as a real-life word problem.
Graphs are included within the item-stem on 7% of the questions, but graphs are not
included within the response options (0%), and students are not asked to produce any
graphs (0%). Similarly, diagrams are included within the item-stem on 18% of the
questions, but diagrams are absent from the response options (0%), and students are not
required to produce a diagram (0%). With respect to content, the SAT I does not include
trigonometry (0%), and assesses elementary algebra (37%) most frequently. In terms of
cognitive demands, procedural knowledge (53%) is the focus of the test, but conceptual
understanding (32%) and problem solving (15%) are assessed as well. Results for the
other tests are interpreted in an analogous manner.
Table 6.3
Alignment Among the Technical, Content, and Cognitive Demands Categories for the Math Assessments
(Entries are the percentage of items falling into each category.)

Format, contextualization, graphs, diagrams, and formulas:

Test                        MC   QC   GR   OE    C   Graphs S/RO/P   Diagrams S/RO/P   Formulas M/G
State Achievement Tests
  CIM Math                  90    0    0   10   67     18/ 0/ 0         26/ 0/ 0          16/ 3
  CIM Locator              100    0    0    0   60     20/ 0/ 0         20/ 0/ 0          15/ 0
College Admissions Tests
  ACT                      100    0    0    0   22      5/ 2/ 0         13/ 0/ 0          15/ 0
  SAT I                     58   25   17    0   25      7/ 0/ 0         18/ 0/ 0           1/ 8
  SAT II Math Level IC     100    0    0    0   18      8/ 0/ 0         26/ 0/ 0          12/ 0
  SAT II Math Level IIC    100    0    0    0   12     12/ 2/ 0          2/ 0/ 0          10/ 0
College Placement Tests
  UO Math Placement Test   100    0    0    0    8      0/ 0/ 0          0/ 0/ 0           5/ 0

Content areas:

Test                        PA   EA   IA   CG   PG   TR   SP  MISC
State Achievement Tests
  CIM Math                  15   15    3    5   31    0   30     2
  CIM Locator               25    5    0   15   20    0   30     5
College Admissions Tests
  ACT                       17   22    5   15   25    8    3     5
  SAT I                     13   37    2    6   19    0   13    11
  SAT II Math Level IC       2   30   10   12   28    4    8     6
  SAT II Math Level IIC      2   14   22   12   14   18    6    12
College Placement Tests
  UO Math Placement Test     0   65   15    5    3   13    0     0

Cognitive demands:

Test                        CU   PK   PS
State Achievement Tests
  CIM Math                  18   61   21
  CIM Locator               20   75    5
College Admissions Tests
  ACT                       40   53    7
  SAT I                     32   53   15
  SAT II Math Level IC      34   58    8
  SAT II Math Level IIC     26   54   20
College Placement Tests
  UO Math Placement Test    10   90    0

Notes. Format: MC = multiple-choice items; QC = quantitative comparison items; GR = fill-in-the-grid items; OE = open-ended items. Contextualization: C = contextualized items. Graphs and Diagrams: S = graph/diagram within item-stem; RO = graph/diagram within response options; P = graph/diagram needs to be produced. Formulas: M = formula needs to be memorized; G = formula is provided. Content areas: PA = prealgebra; EA = elementary algebra; IA = intermediate algebra; CG = coordinate geometry; PG = plane geometry; TR = trigonometry; SP = statistics and probability; MISC = miscellaneous topics. Cognitive demands: CU = conceptual understanding; PK = procedural knowledge; PS = problem-solving.
Alignment Among Tests With the Same Function
State Achievement Tests
Two state achievement tests are included in this analysis: the CIM Math and CIM
Locator. The CIM Math is an untimed test, consisting of 55 multiple-choice items and
one open-ended item. It is administered in two sessions, one each for the multiple-choice
and open-ended items. In contrast, the CIM Locator is a 40-minute test consisting only
of multiple-choice items (see Table 6.1). In terms of technical characteristics, there are
few differences. Both assessments contain many items framed in a realistic context
(60%-67%). Questions that contain graphs within the item-stem constitute a similar
proportion of each exam (18%-20%), as do questions that contain diagrams within the
item stem (20%-26%). With respect to formulas, neither test includes many items that
require a memorized formula, and there are virtually no differences with respect to
proportion of such items included (15%-16%).
In terms of content areas, the CIM Math and CIM Locator show a similar
distribution, although the CIM Math contains a slightly higher proportion of planar
geometry (31%) and elementary algebra (15%) items, whereas the CIM Locator contains
more prealgebra (25%) and coordinate geometry (15%) items. With respect to cognitive
demands, both tests assess procedural knowledge most frequently (61%-75%), but the
CIM Math contains a moderate proportion of problem-solving items as well (21%).
College Admissions Tests
We examined four college admissions tests: the ACT, SAT I, SAT II Math Level
IC, and SAT II Math Level IIC. All tests, except the SAT I, have a one-hour time limit.
The SAT I has a 75-minute time limit. All four exams are also predominantly multiple-choice, although the SAT I includes quantitative comparison (25%) as well as grid-in
(17%) items. Contextualized questions are most prevalent on the SAT I (25%) and least
prevalent on the SAT II Math Level IIC (12%). Students are rarely asked to work with
graphs, and questions that contain graphs within the item-stem constitute no more than
12% of items on the college admissions measures. Questions that include diagrams
within the item-stem are more prevalent, comprising 26%, 18%, and 13% of items on the
SAT II Math Level IC, SAT I, and ACT, respectively. However, questions with
diagrams are infrequent on the SAT II Math Level IIC (2%). Formulas are also
uncommon, but there are differences with respect to the extent to which formulas are
necessary. Whereas the ACT, SAT II Math Level IC, and SAT II Math Level IIC include
some items in which a memorized formula is needed (15%, 12%, and 10%, respectively),
these items are largely absent from the SAT I (1%).
Although the college admissions exams generally sample from the same content
areas, they do not do so to the same extent. Elementary algebra accounts for the largest share of SAT I items (37%). The SAT II Math Level IC also emphasizes elementary algebra
(30%), but focuses on planar geometry as well (28%). The ACT shows a similar content
emphasis as that of the SAT II Math Level IC; 22% of its items assess elementary algebra
and 25% assess planar geometry. The SAT II Math Level IIC, on the other hand, draws
from more advanced content areas, such as intermediate algebra (22%) and trigonometry
(18%).
In terms of cognitive demands, all four tests assess procedural knowledge to a
similar degree. Procedural knowledge items constitute between 53% and 58% of the
items found on college admissions measures. However, there is more variation among
the exams with respect to emphasis on problem solving. The SAT I and SAT II Math
Level IIC place relatively greater emphasis on problem solving (15% and 20%, respectively) than do the ACT and SAT II Math Level IC (7% and 8%, respectively).
College Placement Tests
The only college placement test examined is the UO Math Placement Test. It is a
40-item multiple-choice test administered within 50 minutes (see Table 6.1). Few of its
items are framed in a realistic context (8%), and most of the items can be solved without
a memorized formula (5%). Additionally, the UO Math Placement Test is completely
devoid of problems requiring students to work with graphs or diagrams. Test questions
focus on elementary algebra (65%), but some items assess more advanced content such as
intermediate algebra (15%) and trigonometry (13%). In terms of cognitive demands,
virtually all items assess procedural knowledge (90%).
Alignment Among Tests with Different Functions
With the exception of the SAT I and CIM Math, none of the math assessments
requires students to generate their own answers. Questions framed within a realistic context represent a small proportion of the UO Math Placement Test (8%), a small to moderate proportion of college admissions tests (12%-25%), but a large proportion of state
achievement tests (60%-67%). Questions that contain graphs within the item-stem are
relatively uncommon, comprising less than 20% of any assessment. Excluding the UO
Math Placement Test, diagrams are included on every measure that we examined, but
typically constitute only a small or moderate share of a test. Questions that contain
diagrams within the item-stem represent 2%-26% of college admissions items and 20%-26% of state achievement items. Items calling for memorized formulas are also relatively
infrequent, comprising 15%-16% of state achievement tests, 1%-15% of college
admissions tests, and 5% of the UO Math Placement Test.
With respect to the content category, all exams, irrespective of purpose, generally
assess the same content areas, although there are differences with respect to extent. State
achievement tests focus mostly on statistics and probability (30%) and to a lesser extent,
planar geometry (20%-31%). College admissions tests also include a moderate proportion of
planar geometry items (14%-28%), but are more likely to include elementary algebra
items (14%-37%) than are state achievement tests. The UO Math Placement Test also
emphasizes elementary algebra, but to a much greater degree than any other test (65%).
In terms of cognitive requirements, all tests emphasize procedural knowledge, but
to varying degrees. Procedural knowledge items are most common on the UO Math
Placement Test (90%), followed by state achievement tests (61%-75%), and least
common on college admissions tests (53%-58%). Problem-solving items are absent
from the UO Math Placement Test, but constitute a small to moderate fraction of state
achievement (5%-21%) and college admissions (7%-20%) measures. Conceptual
understanding items are also uncommon, except on college admissions exams, where
they comprise a moderate to large proportion of the test (26%-40%).
Discussion
Below, we discuss the implications of the discrepancies among the math
assessments. We begin by highlighting instances in which differences are justifiable,
then address whether there were any misalignments that may send students confusing
signals. We then explore the possibility that state achievement and college admissions
tests can inform postsecondary course placement decisions.
Which Discrepancies Reflect Differences in Test Use?
As noted in Chapter 1, content discrepancies may reflect differences in intended
purpose. To illustrate, consider the SAT I and ACT. The SAT I places relatively greater
emphasis on problem-solving and non-routine logic problems than does the ACT,
whereas the ACT places relatively greater emphasis on procedural knowledge and
textbook-like items than does the SAT I. Given that the SAT I is intended to be a
reasoning measure, but the ACT is designed to assess content knowledge found in most
high-school math courses, these differences are warranted.
Is There Evidence of Misalignment?
In our analysis of the math tests, we could not find any examples of
misalignments, as discrepancies among college admissions, college placement, and state
achievement measures are either small or moderate, and could generally be predicted a
priori. To illustrate, consider that open-ended items are included on the CIM Math but
are absent from college admissions measures. As noted in Chapter 1, the inclusion of
open-ended items on state achievement exams is indicative of attempts to use these tests
as levers of instructional reform. College admissions exams, on the other hand, exclude
open-ended items because such items can potentially undermine the public’s perceptions
of these tests as “objective” measures with which to make fair comparisons of student
proficiency.4 Format differences, in this case, are not misalignments.
4. Open-ended items are also excluded because they are more costly than multiple-choice items.
Can State Achievement Tests and College Admissions Tests Inform Postsecondary
Course Placement Decisions?
Although there are many discrepancies among exams of different functions, it
may still be possible that a test can serve multiple purposes satisfactorily. Currently,
some measures are used for more than one purpose. Students applying to a public college
or university in Oregon can submit CIM scores in place of the ACT or SAT I scores.
Potentially, there may be other uses for the CIM, such as informing placement decisions.
If the CIM can indeed guide course placement decisions, testing burden can be reduced.
Below, we discuss the potential of CIM Math for placing students into appropriate
math courses.5 We also discuss the possibility that testing burden can be eased by using
the SAT II Math Level IIC scores for other purposes, namely placement into an
appropriate math course.
With respect to placing students into a math course commensurate with their prior
background, the CIM Math holds little potential as an alternative measure to the UO
Math Placement Test. Because the CIM Math does not include as many intermediate
algebra or trigonometry items as the UO Math Placement Test, it cannot provide much
information regarding whether students have the necessary background to enroll in
higher-level math courses such as calculus. For the same reason, the CIM Math is not a
viable alternative to either the SAT II Math Level IC or the SAT II Math Level IIC.
Because the CIM Math does not sample as extensively from advanced content areas (i.e.,
intermediate algebra or trigonometry), its discriminating power is more limited than that
of these two college admissions exams.
It may be possible to ease students’ testing burden in other ways beyond
expanding the use of CIM scores. For instance, the SAT II Math Level IIC can be used
to place students into an appropriate math course up to calculus. The SAT II Math Level
IIC contains a slightly higher proportion of intermediate algebra and trigonometry items
than the UO Math Placement Test. The SAT II Math Level IIC also assesses problem
solving (20%) to a greater extent than does the UO Math Placement Test (0%).
5. Because we do not have a remedial college placement test in our sample, we do not explore the possibility of using the CIM Math as an alternative to remedial college placement tests. However, because the CIM Math is currently used as an alternative to the ACT and SAT I, and because scores from these two measures have been used by many postsecondary institutions to exempt students from remedial college placement exams, it is logical to assume that the CIM Math could be used for the same purpose.
Conceivably, academic counselors can use the SAT II Math Level IIC instead of the UO
Math Placement Test to advise students which math course is most appropriate. To
determine the feasibility of the SAT II Math Level IIC as a measure that informs
placement decisions, more research is needed to explore the relationship between the
SAT II Math Level IIC and UO Math Placement Test.
Alignment Among Oregon ELA Assessments
Below we present the ELA results. As with math, we discuss discrepancies both
within and across test functions. The results are also organized by skill, namely reading,
editing, and writing. In some instances, there are only two tests in a given category, so it
is important to keep in mind that patterns or comparisons may not be representative of
more general trends within this category of tests.
Alignment is characterized by describing differences with respect to technical
features, content, and cognitive demands. Specifically, we discuss differences in time
limit and format, then document discrepancies with respect to topic, voice, and genre of
the reading passages, before concluding with variations in cognitive processes.
The alignment results for tests that measure reading skills are presented in Tables
6.4-6.5. Tables 6.6-6.7 provide the results for exams that assess editing skills, and Tables
6.8-6.9 provide the findings for exams that assess writing skills. For each table, the
numbers represent the percent of items falling in each category. To provide a concrete
example of how to interpret the findings, consider the content category results for the AP
Language and Composition, presented in Table 6.4. With respect to topic, 50% of the
reading passages included on the AP Language and Composition are personal accounts,
whereas 25% of the topics are about humanities, and the remaining 25% are about natural
science. It does not include topics from fiction or social science (0% each). In terms of
the author’s voice, 75% of the passages are written in a narrative style, whereas the other
25% are written in an informative manner. With respect to genre, only essays (100%) are
used; passages on the AP Language and Composition are not presented as letters, poems,
or stories (0% each). Results for the other tests are interpreted in a similar manner.
Table 6.4
Alignment Within the Content Category for the Reading Passages
(Entries are the percentage of reading passages falling into each category.)

Topic:

Test                            Fiction  Humanities  Natural Science  Social Science  Personal Accounts
State Achievement Tests
  CIM Reading                      63        13             0                0               25
  CIM Locator                      25         0            25               25               25
College Admissions Tests
  ACT                              25        25            25               25                0
  AP Language and Composition       0        25            25                0               50
  SAT I                            20        40            20               20                0
  SAT II Literature                63         0             0               13               25

Voice:

Test                            Narrative  Descriptive  Persuasive  Informative
State Achievement Tests
  CIM Reading                       75           0           0           25
  CIM Locator                       75           0           0           25
College Admissions Tests
  ACT                               50           0           0           50
  AP Language and Composition       75           0           0           25
  SAT I                             40           0           0           60
  SAT II Literature                100           0           0            0

Genre:

Test                            Letter  Essay  Poem  Story
State Achievement Tests
  CIM Reading                       0      13    13     50
  CIM Locator                       0      75    25      0
College Admissions Tests
  ACT                               0      75     0     25
  AP Language and Composition       0     100     0      0
  SAT I                             0      80     0     20
  SAT II Literature                13      25    50     13
Reading Measures
Alignment Among Tests of the Same Function
State Achievement Tests
There are two state achievement measures that assess reading proficiency, the
CIM Reading and CIM Locator. Both are multiple-choice tests, but the CIM Reading is
an untimed measure whereas the CIM Locator must be completed within 45 minutes (see
Table 6.2). Most of the reading passages on the CIM Reading are drawn from fiction
(63%), written in a narrative voice (75%), and presented as a story (50%). Reading
passages on the CIM Locator are also written in a narrative voice (75%), but topics tend
to be drawn from an array of areas, including fiction, natural science, social science, and
personal accounts (25% each) (see Table 6.4). Essay is the most common genre of
passages on the CIM Locator (75%). With respect to cognitive demands, both tests are
approximately evenly split between recall (45%-54%) and inference (46%-55%) items (see
Table 6.5).
Table 6.5
Alignment Within the Cognitive Demands Category for Tests Measuring Reading Skills

Test                            Recall  Inference  Evaluate Style
State Achievement Tests
  CIM Reading                      54       46           0
  CIM Locator                      45       55           0
College Admissions Tests
  ACT                              58       42           0
  AP Language and Composition      23       77           0
  SAT I                            18       83           0
  SAT II Literature                13       80           7
College Admissions Tests
Four college admissions exams assess reading proficiency: the ACT, AP
Language and Composition, SAT I, and SAT II Literature. With the exception of the AP
Language and Composition, no other college admissions test assesses reading skills with
open-ended items. Testing time devoted to measuring reading skills is 60 minutes for
both the SAT II Literature and the AP Language and Composition. Because the SAT I
does not contain separate sections for editing and reading items, we cannot determine
testing time earmarked specifically for assessing reading proficiency, although testing
time devoted to assessing both types of skills is 75 minutes (see Table 6.2).
Reading passage topics also vary from one measure to the next (see Table 6.4).
The SAT II Literature emphasizes fiction (63%) whereas the AP Language and
Composition emphasizes personal accounts (50%). The SAT I favors humanities (40%),
but the ACT is evenly distributed among fiction, humanities, natural science, and social
science (25% each). Narrative pieces are included on all college admissions measures,
and range from 40% of the SAT I passages to 100% of the SAT II Literature passages.
Essay is generally the most common genre, appearing on 75% of the ACT, 80% of the
SAT I, and 100% of the AP Language and Composition passages. However, the SAT II
Literature is more likely to include poems (50%) than essays (25%). With the exception
of the ACT, college admission exams place greatest emphasis on interpretation and
analysis of the reading passages. Inference items range from 42% of the ACT questions
to 83% of SAT I questions (see Table 6.5).
College Placement Tests
None of the college placement tests analyzed for this case study site assesses
reading proficiency.
Alignment Among Tests of Different Functions
Across all measures, reading skills are assessed primarily with multiple-choice
items. Testing time devoted specifically to assessing reading skills ranges from 35
minutes for the ACT to 60 minutes for the SAT II Literature and AP Language and
Composition. All assessments contain reading passages on two or more topics, and
every reading assessment includes a topic from fiction except the AP Language and
Composition (see Table 6.4). Every test also contains a reading passage in which the
author employs a narrative or informative voice. Essay is the most prevalent genre, but
the CIM Reading favors stories (50%) whereas the SAT II Literature favors poems
(50%).
State achievement tests tend to be evenly split between recall (45%-54%) and
inference items (46%-55%). In contrast, college admissions tests are more likely to
assess inference skills. Excluding the ACT, inference items comprise 77%-83% of the
items on college admissions exams. The ACT, however, resembles state achievement
tests in that it contains a significant proportion of recall items (58%).
Editing Measures
Alignment Among Tests of the Same Function
State Achievement Test
None of the state achievement tests in this case study site measures editing skills.
College Admissions Tests
Items measuring editing skills are included on three college admissions tests, the
ACT, SAT I, and SAT II Writing. The exams are predominantly multiple-choice, with
testing time ranging from 40 minutes for the SAT II Writing to 45 minutes for the ACT (see Table 6.2). As mentioned earlier, the amount of SAT I testing time devoted specifically to measuring editing skills cannot be determined.
The SAT I does not include a reading passage, but instead uses a few sentences as
prompts. In contrast, the ACT and SAT II Writing include reading passages. These
reading passages are typically essays about humanities, and written in either a narrative
or informative voice (see Table 6.6). The ACT and SAT II Writing items are distributed about evenly between recall (48% and 50%, respectively) and evaluate style items (48% and 47%, respectively), whereas the SAT I assesses only inference skills (100%) (see Table 6.7).
Table 6.6
Alignment Within the Content Category for the Editing Passages
(Entries are the percentage of editing passages falling into each category.)

Topic:

Test                   Fiction  Humanities  Natural Science  Social Science  Personal Accounts
College Admissions Tests
  ACT                     0         60            20               0               20
  SAT I                  N/A        N/A           N/A             N/A              N/A
  SAT II Writing          0        100             0               0                0
College Placement Test
  TSWE                   N/A        N/A           N/A             N/A              N/A

Voice:

Test                   Narrative  Descriptive  Persuasive  Informative
  ACT                      40           0           0           60
  SAT I                   N/A         N/A         N/A          N/A
  SAT II Writing           50           0           0           50
  TSWE                    N/A         N/A         N/A          N/A

Genre:

Test                   Letter  Essay  Poem  Story
  ACT                      0     100     0      0
  SAT I                   N/A    N/A   N/A    N/A
  SAT II Writing           0     100     0      0
  TSWE                    N/A    N/A   N/A    N/A

Note. N/A: the SAT I and TSWE use sentences rather than reading passages as editing prompts.
Table 6.7
Alignment Within the Cognitive Demands Category for Tests Measuring Editing Skills

Test                   Recall  Inference  Evaluate Style
College Admissions Tests
  ACT                     48        4           48
  SAT I                    0      100            0
  SAT II Writing          50        3           47
College Placement Tests
  TSWE                    90        0           10
College Placement Tests
There is only one test in this category, the TSWE. The TSWE is a 30-minute,
multiple-choice exam that assesses editing skills with sentences. Recall items comprise
the majority of the test (90%), although the test also includes a small proportion of
evaluate style items (10%) (see Table 6.7).
Alignment Among Tests of Different Functions
Editing skills are assessed solely with multiple-choice items. The ACT and SAT
II Writing include reading passages as prompts, whereas the SAT I and TSWE use
sentences as prompts. The reading passages on the ACT and SAT II Writing are
typically drawn from humanities (60%-100%), written in a narrative (40%-50%) or
informative (50%-60%) voice, and presented as essays (100%).
None of the editing exams assesses the full spectrum of the cognitive demands
category. The ACT and SAT II Writing emphasize recall and evaluate style items, but
are generally devoid of inference items, whereas the reverse is true for the SAT I. The
TSWE, on the other hand, focuses on recall (90%), and rarely includes evaluate style
(10%) or inference items (0%).
Writing Measures
Alignment Among Tests of the Same Function
State Achievement Tests
The CIM Writing requires students to provide one writing sample in an untimed
session (see Table 6.2). Topics are typically drawn from fiction or personal accounts (see
Table 6.8). With respect to scoring criteria, the CIM Writing requires students to
demonstrate mechanics, word choice, style, organization, and insight (see Table 6.9).
College Admissions Tests
Of the college admissions measures, only the SAT II Writing and AP Language
and Composition require a writing sample. The SAT II Writing provides students with a
one- or two-sentence writing prompt on a topic (usually humanities), and allows 20
minutes for students to respond (see Tables 6.2 and 6.8). In contrast, prompts on the AP
Language and Composition are typically reading passages, and students are required to
provide a total of three writing samples over the course of more than two hours (see Table 6.2).6 Topics can
vary, but are usually about humanities or personal accounts (see Table 6.8). The AP
Language and Composition emphasizes all elements of the scoring criteria, but the SAT
II Writing downplays the importance of insight (see Table 6.9).
Table 6.8
Alignment Among the Writing Prompt Topics

Test                            Fiction  Humanities  Natural Science  Social Science  Personal Accounts
State Achievement Test
  CIM Writing                       X                                                        X
College Admissions Tests
  AP Language and Composition                  X                                             X
  SAT II Writing                               X
6. The AP Language and Composition requires a total of three writing samples, two of which are produced during the 120-minute writing session and one during the 60-minute reading session. However, because examinees also respond to a set of multiple-choice items during the reading session, the amount of time students devote specifically to that writing sample cannot be determined.
Table 6.9
Alignment Among the Scoring Criteria for Tests Measuring Writing Skills

Test                            Mechanics  Word Choice  Organization  Style  Insight
State Achievement Test
  CIM Writing                       X           X             X          X       X
College Admissions Tests
  AP Language and Composition       X           X             X          X       X
  SAT II Writing                    X           X             X          X
College Placement Tests
None of the college placement tests analyzed for this case study site requires a
writing sample.
Alignment Among Tests of Different Functions
Writing measures can vary from 20 minutes for a single writing sample (SAT II
Writing) to more than two hours for three writing samples (AP Language and Composition); the CIM Writing itself is untimed.
Humanities and personal accounts are the most common topics, and every test includes a
writing prompt from at least one of these areas. The CIM Writing and AP Language and
Composition measures emphasize mechanics, word choice, organization, style, and
insight, but the SAT II Writing omits insight from its scoring criteria.
Discussion
Our discussion of the discrepancies among ELA assessments parallels that of the
math discussion. We first identify examples of discrepancies that are justifiable, then
discuss the implications of the misalignments.
Which Discrepancies Reflect Differences in Test Use?
Some discrepancies among the ELA assessments reflect differences in test use.
Consider, for instance, discrepancies in cognitive demands between two editing
measures, the SAT I and TSWE. The SAT I emphasizes inference skills, whereas the
TSWE is much more focused on recall skills. Given that the SAT I is designed to be a
measure of verbal reasoning and is used to distinguish among higher-proficiency
examinees, whereas the TSWE is a measure of knowledge of basic English conventions
and is used to identify students who may potentially need remedial English instruction,
discrepancies in the kinds of cognitive skills elicited are justifiable.
Even when two measures have similar test functions, discrepancies may still be
warranted. For example, the large discrepancies in inference items between the SAT I (100%) and the ACT (4%), and between the SAT I and the SAT II Writing (3%), are
attributable to subtleties in purpose. The SAT I is intended to be a measure of reasoning
proficiency, so great emphasis on inference questions is warranted. The ACT and SAT II
Writing, on the other hand, are curriculum-based measures, so relatively greater focus on
skills learned within English classes (i.e., recall and evaluate style skills) is to be
expected.
Is There Evidence of Misalignment?
Although the majority of the ELA discrepancies stems from variations in test
function, one instance of misalignment pertains to the scoring criteria of the SAT II
Writing. Insight is included within the scoring criteria of the CIM Writing and AP
Language and Composition, but is omitted from the scoring rubrics of the SAT II
Writing. Given that insight is included in the standards of most English courses, it
appears that the SAT II Writing standards are incongruent with those that are typically
expressed. Potentially, this misalignment can send students mixed messages about the
importance of insight with respect to writing proficiency. If the developers of the SAT II
Writing were to add insight to the scoring criteria, or provide a clear rationale for why
insight has been omitted from the scoring rubrics, students would receive a more
consistent signal about the importance of insight with respect to writing skills.