Hayward, Stewart, Phillips, Norris, & Lovell
Test Review: Test of Early Reading Ability (TERA-3)
Name of Test: Test of Early Reading Ability (TERA-3)
Author(s): Reid, D. Kim, Hresko, Wayne P., and Hammill, Donald D.
Publisher/Year: PRO-ED 1981, 1989, 2001
Forms: Forms A and B (the kit also includes an examiner’s manual and a picture book for each form)
Age Range: 3 years, 6 months to 8 years, 6 months
Norming Sample: Norming was conducted between February 1999 and April 2000.
Total Number: 875
Number and Age: The sample was stratified by age as follows: 3 years (N = 89), 4 years (N = 104), 5 years (N = 160), 6 years
(N = 231), 7 years (N = 186), 8 years (N = 105)
Location: The test was administered in 22 states, with standardization sites established in four major regions: Northeast, South,
Midwest, and West. Specific sites were identified as having samples of students that “closely matched” the region.
Demographics: The authors used 2000 U.S. census data to guide the sample as they were previously criticized for their sample (as
noted in the Buros review, deFur & Smith, 2003).
Rural/Urban: 80% urban and 20% rural
SES: SES was determined by family income range.
Other: The level of parental educational attainment was also used.
Comment: This is a small norming sample, especially when stratified.
Summary Prepared By: Eleanor Stewart 17 May 2007
Test Description/Overview:
The TERA-3 is designed to test the reading ability of young children from preschool to grade two (ages 3 years, 6 months to 8 years,
6 months) by assessing developing skills rather than readiness for reading. The manual includes background information about
research on children’s reading development that is useful to those examiners less familiar with the field. In this introductory section,
the components of early reading are described: “mastering the alphabet and its functions, discovering the conventions of printed
material, and finding meaning in print” (Reid, Hresko, & Hammill, 2001, p. 5). These components form the rationale and framework
for the TERA-3 and are well known areas of skill development.
This new edition includes test items added at the lower and upper ends of the age range.
Cosmetic changes to the TERA-3 include colour pictures and the inclusion of pictured items featuring company logos (e.g.,
McDonald’s) that the examiner previously had to collect. These changes were incorporated into the new standardization.
Purpose of Test: The authors list five purposes (Reid, Hresko, & Hammill, 2001, p. 8): 1. to identify children who are behind their
peers in reading development, 2. to identify the strengths and weaknesses of an individual child’s abilities, 3. to document intervention
progress (measuring change), 4. to use as a measure in research on reading, and 5. to use in conjunction with other assessment tools.
Areas Tested: TERA-3 tests alphabet (knowledge and uses, 29 items), conventions (print rules, 21 items), and meaning (the
construction of meaning from print, 30 items).
Areas Tested:
• Print Knowledge
  Alphabet
  Other: Print conventions, “awareness of letters printed in different forms” (p. 7)
• Phonological Awareness
  Segmenting
  Blending
  Elision
  Rhyming
  Other: Sound-letter correspondence
• Reading
  Single Word Reading/Decoding
  Comprehension (meaning)
  Book handling
• Oral Language
  Relational vocabulary
  Sentence construction
  Paraphrasing
• Spelling
  Other: Letter names, identification of embedded capitalization, and spelling errors in sentences and paragraphs (Reid, Hresko, & Hammill, 2001, p. 7)
• Writing
  Letter Formation
  Capitalization
  Punctuation
  Conventional Structures
  Word Choice
  Details
Comment: The organization of the test items on the test record form is a bit confusing in terms of the areas targeted. It appears to jump
from one category to another. For example, in Subtest I: Alphabet, at the start point for ages 8:0 to 8:6 years, the items progress as
follows:
20. What word goes with this picture? (points to dog)
21. Point to the word up.
22. Point to the word that goes with this picture? (points to house)
23. What does this say? (says, “Daddy”)
24. Read these words aloud. (was, boy, girl, and man)
25. Look at this word. How many syllables does it have? (policeman)
I am used to test formats that group the targets and then proceed to test in sequence. There must be some logic to the way the TERA-3 authors have organized the items.
Who can Administer: No specific professional group is identified. Rather, the authors state, “examiners should have some formal
training in assessment” (Reid, Hresko, & Hammill, 2001, p. 12). The manual reviews elements of professional conduct regarding
testing. Potential examiners are encouraged to be fully acquainted with the test manual and to have practiced administration at least
three times (p. 12).
Administration Time: Approximately 30 minutes.
Test Administration (General and Subtests):
Start points are defined by age range. A basal is established with three consecutive correct responses. Similarly, a ceiling is
established with three consecutive incorrect responses.
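A minimal sketch (not taken from the TERA-3 manual) of how basal/ceiling rules like these can be applied when tallying a raw score. The function name, the stopping logic, and the convention of crediting items below the entry point as correct are assumptions for illustration only; the manual's exact scoring rules should be followed in practice.

```python
def raw_score(responses):
    """responses: list of (item_number, score) pairs in administration order,
    where score is 1 (correct) or 0 (incorrect).
    Basal = first run of three consecutive correct responses;
    ceiling = first run of three consecutive incorrect responses."""
    basal_found = False
    ceiling_start = None

    for i in range(2, len(responses)):
        window = [score for _, score in responses[i - 2:i + 1]]
        if not basal_found and window == [1, 1, 1]:
            basal_found = True
        if window == [0, 0, 0]:
            ceiling_start = i - 2
            break  # testing stops once the ceiling is reached

    if not basal_found or ceiling_start is None:
        raise ValueError("Basal and/or ceiling not established; continue testing.")

    first_item = responses[0][0]
    credited_below_entry = first_item - 1  # assumed convention: earlier items credited
    earned = sum(score for _, score in responses[:ceiling_start])
    return credited_below_entry + earned
```

Under these assumptions, a child who starts at item 12, passes items 12 to 15, fails 16, passes 17, and then fails 18 through 20 would be credited with items 1 to 11 plus the five items passed, for a raw score of 16.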
The front page of the record form has a section for a summary of scores and a graph for depicting a Profile of Scores. There is also a
section for recording comments and interpretations. This area can also be used to note test conditions, child behaviours, and other
factors that may influence the test outcome.
Test Interpretation:
Scores are recorded as 1 for correct and 0 for incorrect responses with examples of correct responses listed on the test form.
Throughout the manual, the authors emphasize that this test alone does not represent the entirety of a child’s reading ability profile.
They direct the examiner to go beyond merely scoring the child’s responses to consider how the child responded and what that might
mean. In this way, the authors are encouraging thoughtful observation of the child’s performance that could yield intervention
planning information.
Comment: As a novice in reading research, I found some test items ambiguous.
Comment: Number of subtest items per age range: there are not very many items at each age range. For example, on the Conventions
Subtest, there are only 4 items at the 3 years, 6 months to 5 years, 11 months level, a span of nearly two and a half years.
Also, I wonder whether there are sufficient items targeting specific skills, e.g., phonological awareness, at each age range. For
example, at the 3 years, 6 months to 5 years, 11 months level, only 3 of 9 items address phonological awareness. If phonological
awareness is an area known to affect reading success, shouldn’t it be explored with more test items? Is the same true of other skills
tested?
Standardization:
Age equivalent scores
Grade equivalent scores
Percentiles
Standard scores
Stanines
Other: The Reading Quotient is calculated by summing scores from the three subtests and converting the sum using tables provided
for this purpose. The Reading Quotient can also be expressed as a percentile rank (see the sketch below).
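Since the manual's conversion tables are not reproduced in this review, the sketch below only illustrates the flow described above: sum the three subtest scores, look the sum up in a table to obtain the Reading Quotient, and express the quotient as a percentile rank. The table excerpt is hypothetical, and the percentile conversion assumes the quotient is a normalized standard score with mean 100 and standard deviation 15 (a common convention for quotients of this kind, not confirmed here).

```python
from statistics import NormalDist

# Hypothetical excerpt of a sum-of-subtest-scores -> Reading Quotient table;
# the real values come from the normative tables in the TERA-3 manual.
SUM_TO_QUOTIENT = {24: 89, 27: 94, 30: 100, 33: 106, 36: 111}

def reading_quotient(alphabet, conventions, meaning):
    """Sum the three subtest scores and convert the sum via the table."""
    return SUM_TO_QUOTIENT[alphabet + conventions + meaning]

def percentile_rank(quotient, mean=100.0, sd=15.0):
    """Approximate percentile rank, assuming a normalized standard score
    with mean 100 and SD 15."""
    return round(100 * NormalDist(mean, sd).cdf(quotient))

# Example with hypothetical subtest scores of 10, 10, and 10:
# reading_quotient(10, 10, 10) -> 100, percentile_rank(100) -> 50
```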
Comments: Though age and grade level equivalents are provided, the authors state that they do so with “great reluctance” (Reid,
Hresko, & Hammill, 2001, p. xi). In the manual, the authors give their reasons for discouraging these scores and describe how they are misused.
Reliability:
Internal consistency of items: Using the normative sample in its entirety, coefficients (Cronbach’s alpha) were calculated for six
age intervals. The results ranged from .81 (Form A, age 3 years) to .95 (Meaning, age 4 years). Averages across subtests and the
Reading Quotient ranged from .83 to .95. Using Guilford’s (1954) formula, the coefficient alphas for the Reading Quotient ranged
from .91 at age 3 years to .97 at age 4 years.
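For readers unfamiliar with coefficient alpha, a minimal sketch of how values like those reported here are computed from an item-response matrix is given below. The function and the toy data are not from the TERA-3 materials; they simply illustrate the standard formula alpha = (k / (k − 1)) × (1 − Σ item variances / variance of total scores).

```python
def cronbach_alpha(item_scores):
    """item_scores: one list per examinee, one score per item."""
    k = len(item_scores[0])

    def variance(values):
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / (len(values) - 1)

    item_vars = [variance([row[i] for row in item_scores]) for i in range(k)]
    total_var = variance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Invented 0/1 responses for four examinees on five items:
data = [[1, 1, 1, 0, 1],
        [1, 0, 1, 0, 0],
        [0, 0, 1, 0, 0],
        [1, 1, 1, 1, 1]]
print(round(cronbach_alpha(data), 2))  # -> 0.81 for this toy matrix
```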
Subgroup reliability was reported for gender (N = 544 males, 542 females), ethnicity/race (Europeans = 734, African Americans
= 162, Hispanic Americans = 144), and clinical groups including learning disabled (N = 38), language impaired (N = 40), and reading
disabled (N = 64). Coefficients were high, ranging from .91 to .99 for all subtests and the Reading Quotient.
Test-retest: Using a two-week interval, the authors retested students and reported correlation coefficients ranging from .86 to .98 for
two age groupings (ages 4 to 6 years and ages 7 to 8 years) across all subtests and the Reading Quotient.
Inter-rater: One author and two graduate students were involved in inter-rater testing with .99 agreement (n = 40).
Other: Alternate-forms correlations (immediate administration) were reported to be .80 or higher.
SEMs were reported for both forms. For all ages, the subtests’ SEM was 1 while the Reading Quotient SEM was 3.
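The reported SEMs can be reconstructed with the standard relation SEM = SD × √(1 − r). The subtest and composite standard deviations used below (3 and 15) are the usual metrics for scores of this type and are assumptions, not values taken from this review.

```python
from math import sqrt

def sem(sd, reliability):
    """Standard error of measurement: SEM = SD * sqrt(1 - r)."""
    return sd * sqrt(1 - reliability)

# Assuming subtest scores with SD = 3 and a quotient with SD = 15,
# reliabilities near .90 and .96 reproduce the reported values:
print(round(sem(3, 0.90), 1))   # 0.9, consistent with the reported subtest SEM of 1
print(round(sem(15, 0.96), 1))  # 3.0, matching the reported Reading Quotient SEM of 3
```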
Comments: I would expect the author and graduate students to be better than average in their knowledge of and practice in
administering and scoring the TERA-3, so I wonder whether this level of reliability would hold for less experienced examiners. Raters
not connected with the test would have been more credible to me. More effort could have been put into inter-rater reliability, given that it is relatively easy to do.
Validity:
Content: The authors used a literature review, a curriculum review, a review of similar tests, and consultation with an expert panel as
sources of evidence for content validity. The authors also compared the test’s content with Valencia’s categories of early reading behaviours.
Conventional item analysis was conducted using the Pearson correlation index method. Coefficients (corrected for the part-whole effect) are
displayed for items that showed variance, as is conventional. Table 7.3 displays median percentages of difficulty and discrimination
for all subtests and ages. All items were found to meet the requirements for difficulty and discrimination. Median values for both forms
support their equivalence.
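"Corrected for the part-whole effect" usually means a corrected item-total correlation: each item is correlated with the total score computed without that item, so the item does not inflate its own coefficient. A minimal sketch under that assumption (the functions are illustrative, not taken from the TERA-3 analyses):

```python
def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def corrected_item_total(item_scores, item_index):
    """Correlate one item with the total score excluding that item."""
    item = [row[item_index] for row in item_scores]
    rest = [sum(row) - row[item_index] for row in item_scores]
    return pearson(item, rest)
```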
Differential item analysis was conducted to detect item bias. The authors report that they “failed to find any evidence of overt bias
with respect to the vocabulary, syntax, or underlying conceptual basis of the items” (Reid, Hresko, & Hammill, 2001, p. 65).
Comment from the Buros reviewer: although 13 items across the two forms exhibited some DIF, the amount observed was negligible;
“There is some concern that 7 of the 13 items demonstrating DIF were on subtest Alphabet, Form B” (deFur & Smith, 2003, p. 944).
Criterion Prediction Validity: The Stanford Achievement Test Series, Ninth Edition, and the Woodcock Reading Mastery Tests-Revised/Normative Update were used as criterion measures, with moderate to high correlations reported.
Construct Identification Validity: TERA-3 performance was correlated with age, and age differentiation was evidenced by high to
very high correlations. In terms of group differentiation, gender, race/ethnicity, and disability were examined. The results provide
further support for construct validity, with only slightly higher average scores for mainstream students. The authors also reported
comparisons between typical students and students with disabilities, with the latter group demonstrating predictably lower average scores.
Differential Item Functioning: see previous comments
Other: none
Summary/Conclusions/Observations: The TERA-3 is easy to administer with clear instructions and guidelines for scoring and
interpretation. Phonemic awareness is not well represented.
Clinical/Diagnostic Usefulness: As the authors state, the TERA-3 is useful in the context of multiple points of assessment. It is easy to
learn, use, and score. If adjunct observations are made, intervention planning will be enhanced. The Buros reviewer, Lisa F. Smith, notes
that validity will depend on the purpose for which the TERA-3 is used (deFur & Smith, 2003). I understand that comment to mean
that if the TERA-3 is used to identify children, the psychometric basis is available, but if, for example, the test is used to mark progress over
time with intervention, there may be insufficient data to support its validity as a measure sensitive to change.
References
deFur, S., & Smith, L. (2003). Test review of the Test of Early Reading Ability-3rd edition. In B. S. Plake, J. C. Impara, & R. A. Spies
(Eds.), The fifteenth mental measurements yearbook (pp. 940-944). Lincoln, NE: Buros Institute of Mental Measurements.
Reid, D. K., Hresko, W. P., & Hammill, D. D. (2001). TERA-3 examiner’s manual. Austin, TX: PRO-ED.
To cite this document:
Hayward, D. V., Stewart, G. E., Phillips, L. M., Norris, S. P., & Lovell, M. A. (2008). Test review: Test of early reading ability
(TERA-3). Language, Phonological Awareness, and Reading Test Directory (pp. 1-6). Edmonton, AB: Canadian Centre for Research
on Literacy. Retrieved [insert date] from http://www.uofaweb.ualberta.ca/elementaryed/ccrl.cfm.