Cognitive Complexity Reading (Literacy) - DPS

Proposed Sources of Cognitive Complexity in PARCC Items and Tasks: ELA/Literacy
August 31, 2012
Text Complexity
Explanation and Justification
We expect text complexity to be one of several factors that will contribute to the determination of the
cognitive complexity of individual items. For example, an item that may be considered moderately
complex on the basis of factors such as Response Mode or Processing Demands may become highly
complex when paired with a highly complex text. For that reason, we include text complexity as a source
of item and task cognitive complexity.
We will determine the complexity of reading passages by following the process outlined in 1. Text
complexity 08-31-12.pdf. As a result of this process, a text will be assigned to one of three categories of
text complexity: Readily Accessible, Moderately Complex, or Very Complex. These categories correspond
with the categories of low, moderate, and high complexity used in the cognitive complexity framework.
Command of Textual Evidence
Explanation and Justification
We define this source of cognitive complexity as the amount of text that an examinee must process (i.e.,
select and understand) in order to respond correctly to an assessment item. The amount of text to be
processed is not simply the length of the text or the volume of reading required. Instead, this category
focuses on the number of details in one or more texts that must be processed in order to respond to the
requirements of Close Analytic Reading and Comparison and Synthesis of Ideas items. The amount of
text processed is influenced by both the cognitive complexity of items and tasks and the complexity of
the text or texts, so that ordinarily low complexity cognitive tasks can be adjusted upward when paired
with very complex text and vice versa.
This source of complexity is drawn directly from and is deeply grounded in the Common Core State
Standards. The ELA/Literacy standards require both Close Analytic Reading and Comparison and
Synthesis of Ideas, both within and across texts, and require examinees to command evidence from
various places in one or more texts.
This source of cognitive complexity was proposed originally in the June 25 Achieve memo as Amount of
Text Required to Answer the Question Asked (in Figure 2). The definitions of this source of cognitive
complexity are taken from that memo.
Cognitive Complexity: ELA/Literacy
Low Complexity
Items at this level require examinees to identify a single idea or detail in a text. For example, identifying
the main idea in a text may be as simple as locating the explicit statement of that main idea.
Moderate Complexity
Items at this level require synthesis of ideas and details across multiple sections of a single text. For
example, identifying the main idea or theme of a text may require inferring the main idea or theme or
integrating ideas and details from several locations in the text. Items at this level that require synthesis
of ideas and details from two texts will be only moderately complex if the two texts are closely related
in theme or genre.
High Complexity
Items at this level require synthesis of ideas and details across multiple texts. High complexity items may
require examinees to construct the main idea or theme that is common across multiple texts, especially
multiple texts that are not closely related in theme and/or genre.
Response Mode
Explanation and Justification
The way in which examinees are required to complete assessment activities influences an item’s
cognitive complexity. We propose that, in general, selecting a response from among given choices is less
cognitively complex than generating an original response. This difference is due in part to the response
scaffolding (i.e., response choices) that selected response items provide and constructed response items
lack. Selected response items can nonetheless be highly cognitively complex due to the influence of
other sources of complexity in test items. Response Mode interacts with other sources of complexity to
influence the level of complexity: in ELA/Literacy, with Text Complexity and Command of Textual
Evidence; in Mathematics, with Mathematical Content and Processing Demands. Further, the degree to
which response choices may be easily distinguishable or highly similar can be influenced by other
sources of complexity, such as Text Complexity and Mathematical Content.
Low Complexity
Items at this level primarily require the examinee to select a correct response rather than generate a
response. For example, an evidence-based selected response item, in which both parts A and B require
examinees to select the correct answer from a set number of options, is a low complexity item. Similarly,
a technology enhanced constructed response item in which part A requires examinees to select the
correct answer from a series of options provided, and part B requires a simple response construction
such as highlighting a single section of text, would also be considered low complexity.
Moderate Complexity
Items at this level require examinees to construct responses or to pair selected responses
appropriately and correctly. Multiple selection multiple choice items (i.e., evidence based selected
response items) might require examinees to choose one of two correct responses in part A and the only
corresponding correct choice in part B. Technology enhanced constructed response items require
examinees to construct one of several paths to a correct response, as is required in completing Venn
diagrams, for example. Or examinees may be required to select a correct response in part A of an
evidence based selected response item and use a technology tool in part B to bring together multiple
pieces of evidence. Moderate complexity prose constructed response summary items focus on main
ideas in text. While these ideas may be readily identifiable and in many ways of relatively low
complexity, the prose constructed response mode requires examinees to generate an essay response.
Such open ended responses must answer the assigned questions without the scaffolding provided by
selected response items, and are therefore best categorized at the moderate complexity level.
High Complexity
Prose constructed response items, including analytic, narrative, and summary tasks, require examinees
to generate an essay response and are equally unscaffolded. However, analytic and narrative prose
constructed response items, unlike summary PCRs, typically address more challenging standards and
evidence statements and often require synthesis of ideas or the use of more than one text. Such
characteristics place these items in the high complexity category. Response Mode may be coded as high
complexity for selected response items that correspond to high complexity for Text Complexity or
Command of Textual Evidence.
Processing Demands
Explanation and Justification
Though all items go through an intensive content and editorial review process, some level of linguistic
demand and reading load remains within items. Linguistic demands and reading load in item stems,
instructions for responding to an item, and response options contribute to the cognitive complexity of
items.
Linguistic demands include vocabulary choices, phrasing, and other grammatical structures. Item
development and review processes are designed to remove any such demands that are likely to be
construct irrelevant. The remaining linguistic demands may be construct relevant or at least construct
neutral. That said, linguistic demands contribute to the complexity and cognitive load in processing,
understanding, and formulating responses to test items and tasks. Research on the role of linguistic
demands in complexity and difficulty in mathematical problem solving goes back as far as 1972 (e.g.,
Jerman & Rees, 1972). In their study of mathematical problem solving and cognitive level, Days,
Wheatley, and Kulm (1979) identified linguistic demands that are related to the difficulty of mathematics
problem solving items, which they refer to as the problem’s syntax. More recently, Ferrara, Svetina,
Skucha, and Murphy (2011) and Shaftel, Belton-Kocher, Glasnapp, and Poggio (2006) have identified
linguistic demands that are related to mathematics item difficulty and discrimination indices. We
propose five linguistic demands, taken from these studies, to identify in PARCC items as sources of
complexity:
– Ambiguous, slang, multiple meaning, and idiomatic words or phrases
– Words that may be unusual or difficult and specific to English language arts (i.e., vocabulary)
– Complex verbs (i.e., verb forms of three words or more), such as had been going, would have gone
– Relative pronouns, specifically, that, who, whom, whose, which (sometimes), why
– Prepositional phrases
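Purely as an illustration of how such demands have been counted in research studies, a rough screen for two of the five demands could be sketched in code. The word lists, function names, and the three-word verb heuristic below are our own assumptions, not part of the PARCC process:

```python
import re

# Hypothetical heuristics for two of the five linguistic demands listed above.
# The word lists and matching rules are illustrative assumptions; they are
# not PARCC's operational definitions.
RELATIVE_PRONOUNS = {"that", "who", "whom", "whose", "which", "why"}
AUXILIARIES = {"had", "has", "have", "would", "will", "shall", "been", "be", "being"}

def count_relative_pronouns(stem: str) -> int:
    """Count tokens that appear in the relative-pronoun list."""
    tokens = re.findall(r"[a-z']+", stem.lower())
    return sum(1 for t in tokens if t in RELATIVE_PRONOUNS)

def count_complex_verbs(stem: str) -> int:
    """Approximate 'verb forms of three words or more' (e.g., 'had been going')
    as two consecutive auxiliaries followed by any further word."""
    tokens = re.findall(r"[a-z']+", stem.lower())
    return sum(
        1
        for i in range(len(tokens) - 2)
        if tokens[i] in AUXILIARIES and tokens[i + 1] in AUXILIARIES
    )

stem = "Explain why the narrator, who had been going home, changed course."
print(count_relative_pronouns(stem))  # 2 ('why' and 'who')
print(count_complex_verbs(stem))      # 1 ('had been going')
```

As the document notes below, PARCC itself relies on holistic judgment rather than counts of this kind; the sketch only shows the feature-counting approach used in the cited research.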
Similarly, lengthy item stems, instructions for responding to an item, and response choices also place
reading and processing demands on examinees and may give rise to additional complexity. Ferrara et al.
(2011) defined reading load and demonstrated its relationship to item difficulty and discrimination
indices for grades 3-5 mathematics items.
In research studies, linguistic demand and reading load have been identified by counting numbers of
words, prepositional phrases, and so forth. That approach is not feasible for the thousands of PARCC
items and tasks. We propose instead a holistic judgment approach to determining Processing Demands
complexity. These holistic judgments will account for the details in the reading load and linguistic
demands research frameworks. While research substantiates the impact of processing demands on item
complexity, best practices in item development always seek to minimize construct irrelevant sources of
complexity such as linguistic demands and reading load. Because the remaining features of an item
should be construct relevant or construct neutral, this source of complexity should account for no more
than 10% of the weighting in the overall complexity index.
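The weighting constraint can be illustrated with a minimal sketch. The document caps Processing Demands at no more than 10% of the overall index but does not publish the other weights, so the remaining weights, the category scores, and the function name below are assumptions for illustration only:

```python
# Illustrative sketch only: Processing Demands is capped at 10% per the text;
# the other three weights are assumed, not specified by the document.
# Scores: 1 = low, 2 = moderate, 3 = high complexity.
WEIGHTS = {
    "text_complexity": 0.30,      # assumed
    "textual_evidence": 0.30,     # assumed
    "response_mode": 0.30,        # assumed
    "processing_demands": 0.10,   # capped per the text
}

def overall_complexity(scores: dict) -> float:
    """Weighted average of the four per-source complexity scores."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

item = {
    "text_complexity": 3,      # Very Complex text
    "textual_evidence": 2,
    "response_mode": 2,
    "processing_demands": 1,   # low reading load and linguistic demand
}
print(round(overall_complexity(item), 2))  # 2.2
```

Under this assumed scheme, even the highest Processing Demands rating can shift the overall index by at most 0.2 of a complexity level, consistent with the 10% cap.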
Low Complexity
Low complexity is generally defined as a combination of low reading load and low linguistic demand.
Compared to moderate and high reading load, low reading load is characterized by simple language with
few words (approximately 25 words or fewer) in an item, including the item stem, response choices, and
other directions for responding. Low complexity for this source is also characterized by low linguistic
demands generally and by low frequencies of all five linguistic demands (see above).
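The approximate word-count criterion lends itself to a mechanical check, although PARCC's approach relies on holistic judgment rather than counts. This sketch, including the function name and the whitespace tokenization, is purely illustrative:

```python
def reading_load_is_low(item_text: str, threshold: int = 25) -> bool:
    """Illustrative check of the approximate criterion above: low reading load
    is roughly 25 words or fewer across the stem, response choices, and
    directions combined. Whitespace tokenization is an assumption."""
    return len(item_text.split()) <= threshold

print(reading_load_is_low("Which sentence states the main idea of the passage?"))  # True
```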
Moderate Complexity
Moderate complexity is defined as a combination of moderate reading load and moderate linguistic
demand. Moderate reading load is characterized, generally, by a range of simple to grade appropriate
language in items that are several sentences in length. Moderate complexity for this source is also
characterized by moderate linguistic demands (i.e., generally, a few instances of some of the five
linguistic demands; see above).
High Complexity
High complexity is defined as a combination of high reading load and high linguistic demand. High
reading load is characterized by grade appropriate language in prompts that are generally several
sentences in length, together with high linguistic demands (i.e., generally, multiple instances of the five
linguistic demands; see above).