
Hayward, Stewart, Phillips, Norris, & Lovell
Test Review: Emerging Literacy and Language Assessment (ELLA)
Name of Test: Emerging Literacy and Language Assessment (ELLA)
Author(s): Wiig, Elizabeth and Secord, Wayne
Publisher/Year: Super Duper Publications, 2006
Forms: one
Age Range: 4 years, 6 months to 9 years, 11 months
Norming Sample:
The test was originally piloted with a small group of 19 children in order to ensure that the instructions and tasks were clearly written
and that the pictures were easy to identify. The test, consisting of 245 items, was then field tested with 153 children in two age
groups, 4 years, 0 months to 5 years, 11 months and 6 years, 0 months to 9 years, 11 months. Two forms, A and B, composed of the
same 22 subtests were used. Some items on some of the subtests differed. Children receiving special education services were
included and represented approximately half of each group. These children demonstrated language or learning disabilities as
indicated by standardized test results. The children represented a variety of backgrounds (race/ethnicity, parent education levels, and
geographic regions). The purpose of the field-testing was “to identify items within each subtest that elicited correct and incorrect
responses from different age groups” (Wiig & Secord, 2006, p. 86). As a result of field testing, weak or inconsistent items were eliminated on the basis of descriptive statistics, new items were added, group differences were examined, ceilings and start ages were established, and a new subtest, Reading Comprehension, was added.
The standardization process, using a version with 22 subtests, included examination of concurrent validity, diagnostic validity, and test-retest reliability. A total of 1267 children participated. Included in the norming sample were children with language
disorders, learning disabilities and those receiving special reading services. The sample did not include other disability groups.
Tables 4.1 to 4.6.1 present the data by age, geographic region, gender, race/ethnicity, mother’s educational level, and language spoken at
home (English, Spanish, Other). Favorable comparisons to U.S. census percentages are presented as well. Test administrators, named
and listed by city and state, as well as key schools in various states are listed (and acknowledged with thanks) at the front of the
manual on p. viii. The test administrators are identified as speech-language pathologists, special educators, and reading specialists.
Comment: How these test administrators were recruited and/or trained is not stated.
Total Number: 1267
Number and Age: The norming sample included 100 or more children per age interval from 4 years, 0 months to 9 years, 11 months
(range n=101 to n=169). Ten age groups with 6-month intervals from 4 years, 0 months, to 8 years, 11 months, and a 12-month
interval at 9 years, 0 months to 9 years, 11 months were used. The authors state, “This decision was made because literacy and
language skills grow progressively and are easily discernible in six month intervals up to age nine. After age nine (9:0 to 9:11), the
growth is more gradual and yearly intervals are much more appropriate” (Wiig & Secord, 2006, p. 85).
Location: Testing sites comprised four geographic regions: South, Northeast, Midwest, and West. Individual states were not
identified.
Demographics: The sample was stratified by age, gender, ethnicity/race, mother’s educational level (SES proxy), and language
spoken at home.
Rural/Urban: not specified
SES: Mother’s educational level (no high school, high school, less than 3 years of college, 4 or more years of college, or not
specified) was used to determine SES.
Other (Please Specify): Language disorders, learning disabilities, and those receiving special reading instruction were indicated.
The authors provide comments about the data for each sample category. In this way, they address aspects of the sample that might
draw attention. For example, they note that there was an under-representation of Hispanic children and a slight over-representation of African-American children and those defined as Other. The category Other included children who were Asian or Native
American. In terms of language spoken at home, the authors state that English speaking children were over-sampled while children
from Spanish-speaking homes were under-sampled. However, the authors point out that the ELLA is not intended to be a bilingual
test. The authors state that, overall, the norming sample was comparable to the general U.S. population. In each norming category,
the authors address over- and under-representations. Comment: The authors’ discussion of the norming sample was a comprehensive
treatment of their data. Tables were easy to read and the accompanying text was well laid-out for easy reference to the tables. This
presentation was notable to me because I have reviewed manuals in which the information was placed in appendices and required
flipping pages to check the information. To do so was a bit time consuming and might lead a reader to overlook or dismiss this
important information that relates to the clinician’s specific caseload (i.e., are the children on my caseload represented in the
norming sample? Will this test be a fair comparison?).
Summary Prepared By (Name and Date): Eleanor Stewart 6-20 September 2007
Test Description/Overview
The test kit consists of an examiner’s manual, two stimulus books that fold into easels, record forms, Emerging Literacy Checklist
(for parents and caregivers), three story books with a trial item stimulus card, a clipboard, and CD Pronunciation Guide. The kit is
housed in a tote suitcase with wheels and an extendable handle. A stopwatch is supplied for timed tasks.
Four areas have been shown to be foundational to literacy: (a) phonological awareness and flexibility, (b) sign and symbol
identification and interpretation, (c) memory, retrieval and automaticity, and (d) oral language. The authors note that these abilities
are predictive of reading achievement and demonstrate the link between reading impairments and language disorders (Wiig &
Secord, 2006, p. 77). They provide a schema for visualizing these aspects of literacy, which they refer to as “the temple of literacy,” integrating the four key areas of growth necessary for achieving literacy (p. 1).
ELLA tests the first three areas mentioned above, but the authors recommend the use of norm-referenced language tests for the
assessment of oral language. Chapter 1, “Purpose and Characteristics”, includes a brief description of the U.S. Public Law (PL) 107-110 mandate intended to ensure that all children develop the necessary literacy skills for academic success. Two initiatives, Early
Reading First and Reading First, specifically state that assessment procedures be used to identify children at risk. The ELLA was
designed to align with PL 107-110’s recommendation for assessment in terms of its content, standardization, and statistical
properties.
The authors also provide a brief description of reading impairments (RI), indicating that the prevalence among school-aged children
is estimated to be 5 to 10% (compared with language impairments that range from 6 to 8%). The authors state, “Reading and
language impairments (LI) with associated linguistic (morphology, syntax, semantics, pragmatics) or modality-based problems
(receptive, expressive, or mixed type) tend to go together” (Wiig & Secord, 2006, p. 2). Further they state, “While both RI and LI
have genetic correlates, the environment and degree of exposure to language and reading opportunities play important roles” (p. 2).
In discussing the theoretical basis for the ELLA, the authors begin by outlining the developmental stages of reading acquisition
proposed by Frith (1986) and Morton (1989). They state that reading unfolds in stages that form the focus of each of the sections of the ELLA as follows: (1) logographic (Section 2), (2) alphabetic (Section 1), and (3) orthographic (Section 3) (Wiig & Secord, 2006, pp. 3-4).
The authors follow with brief sections that outline the research basis for each section thus providing the rationale for the content of
the ELLA.
The standardization, reliability and validity information are grouped into one chapter, “Development and Technical Characteristics”
(Chapter 4). The authors have cleverly organized this chapter around the ten criteria proposed by Rebecca McCauley and Lorraine
Swisher in their seminal 1984 article. In this way, they present their evidence to support that the criteria are met. The ten criteria, in
the ELLA’s authors’ words, are:
1. “The test manual should clearly define the standardization sample so that the test user can examine its appropriateness for a
particular test taker” (Wiig & Secord, 2006, p. 79).
2. “For each subgroup examined during the standardization of the test, an adequate sample size should be used” (p. 85).
3. “The reliability and validity of the test should be promoted through the use of systematic item analysis during item
construction and selection” (p. 86).
4. “Measures of central tendency and variability of test scores should be reported in the manual for relevant subgroups examined
during the objective evaluation of the test” (p. 86).
5. “Evidence of concurrent validity should be supplied in the test manual” (p. 90).
6. “Evidence of predictive validity should be supplied in the test manual” (p. 94).
7. “An estimate of test-retest reliability for relevant subgroups should be supplied in the test manual” (p. 95).
8. “Empirical evidence of inter-examiner reliability should be given in the test manual” (p. 96).
9. “Test administration procedures should be described in sufficient detail to enable the test user to duplicate the administration
and scoring procedures used during the test standardization” (p. 97).
10. “The test manual should supply information about the special qualifications required of the test administrator or scorer” (p.
98).
Comment: Surprisingly, McCauley and Swisher’s article is not listed in the References on page 193. A stunning omission, I think.
Purpose of Test: The authors identify the purpose of the ELLA as follows: “The ELLA is intended to be a developmental and
educational assessment tool rather than a diagnostic tool for identifying children as having literacy or language disorders. In other
words, the results can point to developmental, literacy, or language differences, but cannot be interpreted to reflect the presence of a
learning disability in reading or a language disorder” (Wiig & Secord, 2006, p. 90). As such, the ELLA is meant to guide instruction
in the areas needed by the student. Comment: I wonder if this distinction, educational assessment vs. diagnostic test, will be clear to
clinicians. I am confused because some of the psychometric information presented later in this review would seem to point to
diagnostic purposes; particularly the information on specificity and sensitivity. And, on page 11, the authors state, “The
Phonological Awareness and Flexibility section is a comprehensive form that can be used for diagnosis and identification of specific
problem areas in phonological awareness development” (Wiig & Secord, 2006).
Areas Tested: There are a total of 24 subtests divided into 3 sections: Section 1 (Phonological Awareness and Flexibility), Section 2
(Sign and Symbol Identification and Interpretation), and Section 3 (Memory, Retrieval, and Automaticity). The examiner must select
from these sections the subtests that address areas of concern identified in the referring information. The subtests under each section
are:
Section 1: Phonological Awareness and Flexibility
Letter-Sound Identification - start age 4 years, 6 months or older, 10 items
Rhyming Section
1. Letter-Sound Identification - start age 4 years, 6 months or older, 7 items
2. Rhyme Awareness Part 1 - start age 4 years, 6 months or older, 12 items
3. Rhyme Production - start age 5 years, 0 months or older, 7 items
Initial Sound Identification - start age 4 years, 6 months or older, 12 items
Blending Section
4. Blending Words - start age 4 years, 6 months or older, 7 items
5. Blending Syllables - start age 4 years, 6 months or older, 8 items
6. Blending Sounds - start age 5 years, 6 months or older, 8 items
Segmenting Section
7. Segmenting Words - start age 4 years, 6 months or older, 7 items
8. Segmenting Syllables - start age 5 years, 0 months or older, 8 items
9. Segmenting Sounds - start age 5 years, 6 months or older, 6 items
Deletion Section
10. Sound Deletion - Initial Position - start age 5 years, 6 months or older, 12 items
11. Sound Deletion - Final Position - start age 5 years, 6 months or older, 12 items
Substitution Section
12. Sound Substitution - Initial Position - start age 5 years, 6 months or older, 12 items
13. Sound Substitution - Final Position - start age 5 years, 6 months or older, 12 items
Section 2: Sign and Symbol Identification and Interpretation
Environmental Symbols - start age 4 years, 6 months or older, 12 items
Letter-Symbol Identification - start age 4 years, 6 months or older, 26 items
Word Reference Association - start age 5 years, 0 months or older, 15 items
Reading Comprehension - start age 5 years, 0 months or older, 20 items
Section 3: Memory, Retrieval, and Automaticity
Rapid Naming - start age 4 years, 6 months or older, 3 tasks
Word Association - start age 4 years, 6 months or older, 4 tasks
Story Retell - start age 5 years, 0 months or older, with 3 age-level stories:
Level 1: A Day at the Beach
Level 2: Ice Cream Surprise
Level 3: Making the Team
Oral Language: Vocabulary, Grammar, Narratives, Other (Please Specify)
Print Knowledge: Environmental Print, Alphabet, Other (Please Specify)
Phonological Awareness: Segmenting, Blending, Elision, Rhyming, Other (Please Specify)
Reading: Single Word Reading/Decoding, Comprehension
Listening: Lexical, Syntactic, Supralinguistic
Who can Administer (McCauley and Swisher Criterion No. 10): “Speech-language pathologists, special educators, resource
specialists, reading specialists, school psychologists, and other professionals who are knowledgeable about literacy and language”
(Wiig & Secord, 2006, p. 10) may administer the test.
Administration Time: Testing time depends upon which subtests are administered but is not specified in the manual.
Test Administration (General and Subtests):
The authors provide explicit information, much of it familiar to experienced clinicians, about the set-up for testing. However, for
novices, these sections are instructive of how to conduct a testing session. Of specific note, although the test may be interrupted for rest breaks or carried out over multiple sessions, the authors caution, “You should not administer the ELLA in sessions that span over a week” (Wiig & Secord, 2006, p. 10). There is also a caution against repeating directions.
The authors report that field-testing and standardization revealed that certain subtests and tasks were beyond the skills of younger
children. For this reason, start ages for subtests are specified. For example, the authors state that the Sound Deletion and Substitution
tasks were found to be too difficult for children at 4 years, 0 months to 5 years, 5 months. Therefore, a start age of 5 years, 6 months
was established. Ceiling criteria were also set so that “the examiner and the child can be spared time, effort, and frustration” (Wiig &
Secord, 2006, p. 11).
The examiner must listen to the Pronunciation Guide CD-ROM prior to test administration. The authors state that “Failure to
pronounce the items correctly may lead to invalid test results” (Wiig & Secord, 2006, p. 12). Comment: I have a small concern about
this statement. I wonder about this claim since we have such a variety of accents in Canadian English. What would constitute “invalid” results in our context? Would an Australian accent invalidate the results? Other tests also have pronunciation guides (the LAC-3 comes to mind). Perhaps this is an American concern?
A handy age calculator is referenced: www.superduperinc.com/agecalc.
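The arithmetic such a calculator performs can be sketched in a few lines. The borrow-from-months convention below is the usual one for test protocols, though individual manuals may differ in how they treat leftover days; the dates are hypothetical:

```python
from datetime import date

def chronological_age(birth, test):
    """Chronological age in completed years and months at the test date."""
    years = test.year - birth.year
    months = test.month - birth.month
    if test.day < birth.day:   # the day-of-month has not yet been reached
        months -= 1
    if months < 0:             # borrow twelve months from the year count
        years -= 1
        months += 12
    return years, months

# A child born March 15, 2001 and tested September 6, 2007
# is 6 years, 5 months old.
print(chronological_age(date(2001, 3, 15), date(2007, 9, 6)))  # (6, 5)
```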
The examiner is encouraged to select subtests based on the child’s age and presenting concerns. For children under six years of age,
the examiner chooses from any section those tasks which explore the child’s presenting concerns. For children six years and above,
the examiner is instructed to first administer Section 1: Blending, Deletion, and Substitution. If a problem appears in this area
(phonological awareness and flexibility), then the examiner will complete all tasks in Section 1. If no problem is found, the examiner
may continue with Sections 2 and 3.
Directions are printed in blue on both the record form and in the stimulus books. In the examiner’s manual, the directions are printed
in italics. The examiner is instructed to place emphasis on items printed in bold type. Phonetic symbols are set off with slash marks (/ /).
Beginning on page 13, the specific instructions for each subtest are outlined (30 pages). The subtests are presented in their groupings
under each section so that Subtest 1 (Letter-Sound Identification) is found in Section 1 (Phonological Awareness and Flexibility).
Each subtest follows the same outline:
1. Start Age
2. Statement (e.g., Subtest 6 Blending Words: “In this section, you will ask the child to blend two words together to make a compound word” (Wiig & Secord, 2006, p. 19)).
3. Objective (“To evaluate the child’s ability to form meaningful units by blending two separate words to form a new compound
word.”)
4. Curriculum and Classroom Connection
5. Materials
6. Scoring
7. Trial Items (1-3)
8. Discontinue and Ceiling rules
Comments: I found all of the tasks reasonable and easy to follow. The record booklet is well laid out, with colour coding and large print making it easy to locate important information in the midst of administering the test.
Test Interpretation:
Chapter 3, “Scoring Responses and Interpreting Results”, presents the specific instructions for scoring each of the subtests (Wiig &
Secord, 2006, pp. 46-58). Tables are provided that outline sample correct and acceptable responses for Rhyme Production, Word
Association, and Story Retell.
The calculation of standard scores and the interpretation of norm-referenced scores are described. Examples of completed score sheets are
provided. Scoring using the raw scores is specific to each section. The authors note that the scores “help identify the inter-personal
and intra-personal differences that signify a child’s relative strengths and weaknesses” (Wiig & Secord, 2006, p. 54).
All norm-referenced scores are well explained. The authors point to the limitations both of scores and of testing in general. On page
62 (Wiig & Secord, 2006), in their discussion of confidence intervals, they state, “All test scores are approximations due to
measurement error”. They provide an interpreting and reporting schema based on their 2001 publication (Wiig & Secord, 2001).
The schema is incorporated into the text through the case examples that follow. Sample case studies, eight in all, are presented with
interpretation and ideas for intervention. These cases include four typically developing children as well as four children identified
with language or reading difficulties. At each age group, from ages 6 to 9, there are two cases presented, one for a typically
developing child and one for a child with language or reading difficulties. Comment: I think that these case examples and the format
for reporting will be instructive for clinicians.
Standardization:
Age equivalent scores
Stanines
Grade equivalent scores
Percentiles
Standard scores with lower and upper limits
Other (Please Specify): Confidence intervals, explained on page 62, are labeled “lower” and “upper” limits in the tables in the
appendix. The confidence interval is 90%.
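The 90% intervals in such tables follow the standard psychometric construction: the obtained score plus or minus 1.645 standard errors of measurement, where the SEM depends on the scale SD and the subtest’s reliability. A minimal sketch; the score, SD, and reliability values below are assumed for illustration, not taken from the ELLA tables:

```python
import math

def confidence_interval(standard_score, sd, reliability, z=1.645):
    """90% confidence interval around an obtained standard score.

    SEM = SD * sqrt(1 - reliability); the interval is score +/- z * SEM.
    z = 1.645 corresponds to a 90% interval on a normal distribution.
    """
    sem = sd * math.sqrt(1 - reliability)
    return (standard_score - z * sem, standard_score + z * sem)

# Hypothetical example: standard score 95 on a scale with mean 100, SD 15,
# and an assumed reliability of 0.96.
lower, upper = confidence_interval(95, 15, 0.96)
print(round(lower, 1), round(upper, 1))  # roughly 90.1 to 99.9
```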
Chapter 5, “Norms Tables”, provides norms for Rhyming, Blending, Segmenting, Deletion, and Substitution as well as Total Norms
for Sections 1, 2, and 3 (Story Retell) and Age Equivalents.
Reliability:
Internal consistency of items: Cronbach’s alpha coefficients for each section are reported and displayed in table form by age interval. The coefficients for Sections 1 and 2 are high while those of Section 3 are considered moderate to moderately high. Total-items alpha for Section 1 ranged from 0.91 to 0.98 and for Section 2 from 0.70 to 0.95. Coefficients for Section 3 ranged from 0.64 to 0.80. The
authors explain the results as follows: “The results for Sections 1 and 2 were expected to some degree, given how similar the tasks
are within each section. In Section 3, the test items differ greatly because the stories vary considerably in both content and difficulty.
Overall, the reliabilities are good to excellent across all age ranges” (Wiig & Secord, 2006, p. 91).
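Cronbach’s alpha is computed from the item variances and the variance of examinees’ total scores. A minimal sketch with toy data (not ELLA data):

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a list of examinees' item-score rows.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    k = len(item_scores[0])                      # number of items
    items = list(zip(*item_scores))              # one tuple per item
    item_vars = [statistics.pvariance(i) for i in items]
    total_var = statistics.pvariance([sum(row) for row in item_scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Toy data: four examinees by three dichotomously scored items.
scores = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
print(round(cronbach_alpha(scores), 2))  # 0.75
```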
Test-retest (McCauley and Swisher Criterion No. 7): Sixty-one children aged 4 years, 6 months to 9 years, 11 months participated and were
tested twice within 10 to 30 days. Correlation r was reported as follows: Section 1 (0.99), Section 2 (0.99), and Section 3 (Memory,
Retrieval, and Automaticity) subtests: Rapid Naming # of Errors (0.65), Rapid Naming-Time (0.92), Word Associations (0.93), and
Story Retell (0.85). The authors note “there were not enough subjects to calculate correlations for Story Retell Level 1. Excepting Rapid Naming # of Errors, these correlations are high” (Wiig & Secord, 2006, p. 95). Comment: No details are given about who tested the children, the children’s characteristics other than their age range, where they lived, or the median retest interval in days. Other test manuals, particularly PRO-ED manuals, detail all of this information.
Inter-rater (McCauley and Swisher Criterion No. 8): Three raters were trained: two speech-language pathologists and one special
educator. The raters scored “recorded responses from 69 children” (Wiig & Secord, 2006, p. 96). The results demonstrated highly
consistent scoring among trained examiners with Pearson’s r correlations reported in table form for all sections/subtests (Table 4.21)
ranging from 0.93 (Word Reference Association, raters 1 and 3) to 1.00 across a large number of subtests. Comment: I presume that
the raters completed their task by reviewing completed record forms rather than conducting ratings live. Few tests reviewed report
live ratings, so it must be acceptable to do inter-rater reliability with completed protocols. However, I still think that live ratings (or
even audio-recorded) are more believable.
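The Pearson r reported for each rater pair is the ordinary product-moment correlation between the two raters’ scores. A self-contained sketch with hypothetical scores (not the study’s data):

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical rater pair: two raters' raw scores for the same six children.
rater1 = [12, 15, 9, 20, 17, 11]
rater2 = [13, 15, 8, 19, 18, 11]
print(round(pearson_r(rater1, rater2), 2))  # 0.98
```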
Other (Please Specify): none
Validity
Content (McCauley and Swisher Criterion No. 5): The authors refer to “Face Validity” in their discussion of the qualitative evidence
for content validity. Chapter 1, “Purpose and Characteristics”, provided the background information on which the content and
organization of the test was developed. Comment: Unlike other tests, the ELLA does not include a quantitative analysis of content, such as classical item analysis or differential item functioning (DIF) analysis. As a result, no quantitative information is provided about the process of item selection that would contribute to this aspect of validity. Also, test bias is not addressed. Does the ELLA avoid, reduce, or eliminate the possibility of a particular group being disadvantaged? Usually, DIF procedures are used to determine item bias; however, the ELLA does not contain any information specific to DIF.
Criterion Prediction Validity (McCauley and Swisher Criterion No. 5): To establish criterion validity, the ELLA was compared to a literacy measure, the DIBELS, and a language measure, the CELF-4 (created by the same authors as the ELLA).
The CELF-4 comparison involved 77 children in two age groups, 5 years, 6 months to 5 years, 11 months (n=36) and 7 years, 6
months to 8 years, 0 months (n=41). Using raw scores on the subtests, Pearson’s r was calculated for:
1. ELLA’s Sections 1 and 2 and Story Retell, Rapid Naming, Section 1 Phonological Awareness and Flexibility.
2. CELF-4 Core Language subtests (Concepts and Following Directions, Word Structure, Recalling Sentences, and Formulated
Sentences), Rapid Automatic Naming, and Phonological Awareness Subtest.
Results showed correlations that were significant at p < .01. According to the authors, all comparisons yielded strong results (.75 for Core and Sections 1, 2, and Story Retell; .74 for Rapid Naming; and .94 for Phonological Awareness). The authors explain this last
result by stating, “This could be explained by the fact that phonological awareness is an ability that is so closely tied to the age ranges
of the subjects tested and therefore the tasks themselves may be more uniquely similar” (Wiig & Secord, 2006, p. 92).
The ELLA was also studied in relation to the DIBELS. Thirty-six children ranging in age from 5 years, 6 months to 5 years, 11 months were administered all of the ELLA subtests and the DIBELS Kindergarten Benchmark Assessment. Correlations of .53 between DIBELS Letter Naming Fluency and ELLA Letter-Symbol Identification and .88 between DIBELS Phoneme Segmentation Fluency and ELLA Segmenting Sounds were found. A second group of children ages 7 years, 6 months to 8 years, 0 months were administered the ELLA and the DIBELS Second Grade Benchmark Assessment. Comparisons yielded a correlation of .76 between DIBELS Oral Reading Fluency, Retell Fluency, and Word Use Fluency and the ELLA Section 2, Story Retell.
Specificity, Sensitivity, and Total Accuracy: For ages 5 years, 6 months to 9 years, 11 months, with a -1 SD cutoff, the
specificity was 99.5%, sensitivity was 86%, and Total Correct was 92.8% for a group of 64 normal children and 64 children in a
clinical group (language/learning disability) matched for age, sex, race/ethnicity, and mother’s level of education. The authors state,
“In settings where children are taught or provided preschool education, classroom, or language resource services in groups, those
with ELLA total standard scores between 85 and 77 may be identified and grouped for intensive stimulation and intervention in the
existing educational context. Children with ELLA total scores below 77 (-1.5 SD below the mean) should, at minimum, be provided
with intensive, individualized intervention to develop the basic phonological awareness and/or sign and symbol interpretation skills
that are essential for access to the regular English and language arts curriculum” (Wiig & Secord, 2006, p. 95).
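Sensitivity, specificity, and total accuracy summarize a 2x2 classification table. The cell counts below are illustrative only; the manual reports the percentages, not the underlying true/false splits:

```python
def diagnostic_accuracy(tp, fn, tn, fp):
    """Sensitivity, specificity, and total accuracy from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)            # clinical cases correctly flagged
    specificity = tn / (tn + fp)            # typical children correctly passed
    total = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, total

# Illustrative counts for 64 clinical and 64 typically developing children,
# the group sizes in the ELLA validity study; the splits here are assumed.
sens, spec, total = diagnostic_accuracy(tp=55, fn=9, tn=64, fp=0)
print(f"{sens:.1%} {spec:.1%} {total:.1%}")  # 85.9% 100.0% 93.0%
```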
Construct Identification Validity (McCauley and Swisher Criterion No. 4):
Age differentiation: the authors state that “There is a steady age progression in both Section 1 (between ages 4.6 and 8.11) and
Section 2 (between ages 4.6 and 7.5 years). However, in Section 3, the age progression is more moderate for Story Retell Level…
This more moderate growth by age in Section 3 may be attributed in part to task variability as well as task difficulty” (Wiig &
Secord, 2006, p. 86).
Group difference: Raw score means and standard deviations for each section by age and race/ethnicity are presented in Tables 4.9,
4.11, and 4.13. While differences are evident in some categories, the authors acknowledge this and defend the use of one set of norms
on the basis that all children who perform differently deserve to receive “increased stimulation, teaching, or intervention to establish
literacy and language skills” (Wiig & Secord, 2006, p. 90).
Differential Item Functioning: No information is provided.
Other (Please Specify): No other validity information is provided.
Summary/Conclusions/Observations: The authors do state that the test is not diagnostic for language or learning disabilities.
Examiners will need to keep this in mind, especially when communicating the results to others. Specifically, they state: “The ELLA is
intended to be a developmental and educational assessment tool rather than a diagnostic tool for identifying children as having
literacy or language disorders. In other words, the results can point to developmental, literacy, or language differences, but cannot
be interpreted to reflect the presence of a learning disability in reading or a language disorder” (Wiig & Secord, 2006, p. 90). Do
the authors skirt some of the concerns I have outlined because they choose to describe the ELLA as not a diagnostic test? But then
again, they seem to be unsure themselves of whether or not it is diagnostic.
Clinical/Diagnostic Usefulness: It takes time to read through the manual in order to become familiar with which subtests to choose. It’s a bit off-putting to have to wade through so much; I’m not sure that most clinicians have spare time for this, likely only at home and on weekends. The text spans 98 pages. I spent quite a lot of time reviewing this test. I would use this test. It contains a wealth of information that, if implemented, I believe would enhance literacy development in young children.
References
Frith, U. (1986). A developmental framework for developmental dyslexia. Annals of Dyslexia, 36, 69-81.
Gilger, J. W., & Wise, S. E. (2004). Genetic correlates of language and literacy impairments. In C. A. Stone et al. (Eds.), Handbook of
language and literacy (pp. 25-48). New York: Guilford Press.
McCauley, R., & Swisher, L. (1984). The use and misuse of norm-referenced tests in clinical assessment: A hypothetical case. Journal
of Speech and Hearing Disorders, 49, 338-348.
Morton, J. (1989). An information-processing account of reading acquisition. In A.M. Galaburda (Ed.), From reading to neurons (pp.
43-66). Cambridge, MA: MIT Press.
Smith, S. D. (2002). Dyslexia and other language/learning disabilities. In D. L. Conner and R. E. Pyeritz (Eds.), Emery and Rimoin’s
principles and practice of medical genetics (pp. 2827-2865). New York: Churchill Livingstone.
Wiig, E. & Secord, W. (2006). Emerging literacy and language assessment (ELLA). Greenville, SC: Super Duper Publications.
Wiig, E. H., & Secord, W. A. (2001). Making Sense of Test Results. Columbus, OH: Red Rock Educational Publications, Inc.
To cite this document:
Hayward, D. V., Stewart, G. E., Phillips, L. M., Norris, S. P., & Lovell, M. A. (2008). Test review: Emerging literacy and language
assessment (ELLA). Language, Phonological Awareness, and Reading Test Directory (pp. 1-12). Edmonton, AB: Canadian
Centre for Research on Literacy. Retrieved [insert date] from http://www.uofaweb.ualberta.ca/elementaryed/ccrl.cfm.