
Join Together: A Nationwide On-Line Community of Practice and Professional Development School Dedicated to Instructional Effectiveness and Academic Excellence within Deaf/Hard of Hearing Education

Objective 2.4 – Assessment

Submitted by:

Topical Team Leaders – John Luckner, Ed.D. and Sandy Bowen, Ph.D.


Topic – Language

Rationale

It will be argued in this paper that classroom teachers of children who are deaf or hard of hearing have primary responsibility for assessing the language of their students. The term “assessment” will be defined as Stewart and Kluwin (2001) defined it: simply, the collection of data in a systematic way. Throughout this paper the words "language" and "English" will be used interchangeably; if a language other than English is being referred to, it will be specified. No attempt will be made in this paper to write a comprehensive treatise on language assessment. The topic is far too broad; indeed, a very sizable book would have to be written to do justice to such an undertaking, for it would have to incorporate the bodies of information existing in the many books already in print (McDaniel, McKee & Cairns, 1998; Lund & Duchan, 1988; Muma, 1998, 1978). It is hoped that this paper will address the issue of assessment from a different perspective: that of teachers.

It is recognized, of course, that not all children with a significant hearing loss experience serious difficulties in learning English, nor are serious language difficulties limited to children who have a profound/severe hearing loss.

The paper is intended to provide:

(a) A purpose for evaluating and teaching English to children who are deaf.

(b) Justification for involving classroom teachers with language assessment.

(c) A conceptual framework for making assessments.

(d) Criteria for making “strong and functional” assessments.

(e) Criteria for selecting assessment tools and strategies, and finally,

(f) Suggestions for making “teacher friendly assessments” within the classroom.

General Guidelines – Hearing Students

The purpose for assessing English competence and developing intervention activities designed to enhance English proficiency should be apparent: English is the dominant language of commerce and being able to function effectively in the world of commerce is essential if a person is to achieve a reasonable amount of economic independence. It is assumed that all people desire economic independence. While it may not be absolutely essential to know and be able to use English to secure economic independence, certainly that aim is greatly facilitated if one knows the language of commerce.

There are several reasons why language assessments should be made; each will be discussed only briefly. While the treatment of each is somewhat abbreviated, there should be no question about the importance and significance of each.

Baselining

Assessments of children’s language are made to determine what they know, and what their skill levels are, in a wide variety of domains. Educators typically believe that by finding out “where a child is” (baselining), a foundation is laid for determining “what a teacher should teach”. This process, of course, is often referred to as “diagnostic teaching”, and it seems to be such a universally accepted perspective that no effort will be made to justify it. Indeed, the adage that “If you don’t know where you are, you won’t know how to get where you’re going” captures this perspective. Diagnostic teaching should be the objective of all those who work with language-delayed or language-disordered children.

Establishing Language Targets (Language Goals)

Once a reasonably clear picture has been developed of those language skills a child has and does not have, it is possible to establish language targets. The writing of language targets is not unlike drawing a map: that is, “you have to know where you are" if you want to draw a map to “where it is you want to go". It will be assumed that teachers have the expertise to determine appropriate language targets if they know what abilities their students actually possess. The focus of this paper is upon developing greater skill in unmasking the linguistic abilities of students and quantifying them in reasonably easy and time-efficient ways.

Increase Effectiveness of Teaching

Assessments are also made to allow teachers to determine if their educational efforts are being successful. Certainly, it would not be possible to determine if a child is making progress in the domain of language as a result of “educational intervention” if no one assesses the child’s linguistic ability (competence) or measures it in objective terms. And it would be tragic, indeed, if a teacher persisted with an intervention strategy thinking it was efficacious when, in fact, it was not. To be perfectly clear, teachers have the primary responsibility for facilitating language development, and as a consequence, they should take primary responsibility for knowing about the language and language-learning curve of each student for whom they are responsible.

Teacher Accountability

Teachers should be accountable for their activities within the classroom. The current climate in education, and in the workplace generally, is such that all those employed to provide a service must be accountable for their success, or the lack thereof, as well as for their efforts to achieve success. Some teachers may be inclined to recoil from such a notion. The concept of taking assessment to the level of evaluation (making assessment objective through measurement) so that teachers can be held accountable (responsible) is well developed by Stewart and Kluwin (2001) and deserves serious consideration. This is particularly true in working with children who have a profound hearing loss coupled with serious language problems, for whom linguistic growth comes very slowly.

General Guidelines – Students who are Deaf or Hard of Hearing

In this discussion it is being proposed that the primary responsibility for assessment rests squarely upon the shoulders of the teacher. It is the classroom teacher who has the best opportunity to observe a child’s social behavior, emotional-adjustment style, academic performance, auditory processing capabilities, and his/her written, verbal and manual communication skills. It is the teacher who is in the best position to develop working hypotheses about what the child knows and does not know, and about what the child can do and cannot do under varying circumstances, and it is the teacher who can best test those hypotheses over and over again in the course of “classroom living”.

It should be made clear in this discussion that assessment within the classroom is not to be viewed as “traditional assessment”, where a standardized diagnostic instrument is brought into the classroom and administered to one or more children. Nor should it be thought of as assessment provided by an SLP or diagnostician outside the classroom and shared with a classroom teacher. Rather, it should be the kind of assessment where the teacher, as an astute observer, studies the language performance of students, makes insightful observations, and draws appropriate conclusions. To do this, teachers must be skilled in analyzing the dynamics of social interactions (social behavior and pragmatic behavior) along with the written work (journals and daily writings) of their students. Teachers should see themselves as the primary diagnosticians, and as such, they should seek the assistance of educational diagnosticians and SLPs to confirm the hypotheses that they have formulated about their students.

One of the most puzzling challenges before teachers is that of knowing where to begin the assessment process. It is seductively simple and easy to begin by selecting some well-recognized, standardized test of grammatical abilities to administer, and there are many available (Thompson et al., 1987). However, such a temptation should be avoided if the test is to be used as the primary strategy for assessment. As an alternative, it is suggested that the “place of beginning” should be that of finding answers to four sets of related questions.

Formal Assessment Best Practices – Students who are Deaf or Hard of Hearing

Vocabulary Assessment

For a person to construct a grammatical sentence, they must have vocabulary. The assessment of vocabulary has historically captured much attention, and for some it remains a domain of fascination. Although the Peabody Picture Vocabulary Test (Dunn and Dunn, 1981) has been used for years with deaf and hard of hearing children, standardized norms have never been developed for children with a hearing loss. Two noteworthy attempts have been made to develop a vocabulary test for children with a significant hearing loss, namely, the Total Communication Vocabulary Test (TCVT) (Scherer, 1981) and the Carolina Picture Vocabulary Test (CPVT) (Layton and Holmes, 1985). Of the two, clearly the CPVT has the more comprehensive set of norms. This fact, coupled with the fact that a sign-prompt picture is provided for the examiner, argues strongly that the CPVT is the more usable instrument. However, it is noteworthy that Muma (1998) has argued that the use of any vocabulary test is highly suspect. Additionally, White and Tischler (1999) have pointed out that any vocabulary sign test that does not take into consideration the iconic nature of signs is also of questionable value. Their research on the iconic nature of signs within the CPVT indicates that many of the test items could be answered correctly 100% of the time without any prior contact with, or understanding of, "signs". This finding renders the results of formal sign tests highly suspect.

Assessing Syntax – Traditional Approaches

The assessment of syntax has been a high priority since the early 1960s as a consequence of the insightful work of Chomsky (1957, 1965), which laid the foundation for so much of what has taken place in the domain of language assessment and language instruction over the past 40+ years. For a comprehensive review of language assessment instruments for deaf and hard of hearing children, the reader is referred to the work of Thompson, Biro, Vethivelu, Pious, and Hatfield (1987). Although their conclusions about the value of the various instruments cited are open to question and/or interpretation, their work remains noteworthy.


A variety of instruments have been used for the assessment of syntax with children who are profoundly deaf, but a relatively small number have been standardized on children who are deaf and hard of hearing (D/HH). These notables are listed chronologically below:

1) Maryland Syntax Evaluation Instrument (MSEI) (1975). (White, A. Support Systems for the Deaf, TX)

2) Test of Syntactic Abilities (TSA) (1978). (Quigley, S.P., Steinkamp, M.W., Power, D.J., & Jones, B. Dormac, Inc., Beaverton, OR)

3) Grammatical Analysis of Elicited Language (GAEL) (1979). (Moog, J.S., & Geers, A.E. Central Institute for the Deaf, St. Louis, MO)

4) Rhode Island Test of Language Structure (RITLS) (1983). (Engen, E., & Engen, T. Pro-Ed, Austin, TX)

The MSEI is unique in that it can be administered to a group of students; it invites students to write sentences spontaneously to a set of 10 pictures, and their written responses can be scored in about 5 to 10 minutes each. Norms exist for residential school deaf students from ages 6-0 to 18-11. Philosophically, it was developed on the premise that children cannot escape revealing their knowledge of English word order when they write; therefore, the strategy of “awarding points for components in English used properly” will ultimately result in scores that reflect their competence in English. Students cannot guess their way to successful scores. While the test was designed for in-class teacher use, it does require the teacher to be quite knowledgeable about English grammar.

The TSA can also be administered by teachers and given to a group of students. It was normed on older children between the ages of 10-0 and 18-11 years, and it relies heavily upon “word scrambles” presented as multiple-choice items. It is comprehensive in nature and scored objectively, but by virtue of the fact that it is an objective test, students are able to guess. Due to the length of the test, accompanied by small print, there is also a heavy “fatigue factor”.

The GAEL has been used extensively with younger children. It was normed on children 5-0 to 9-0 and relies heavily upon “manipulatives” and personal interaction between child and diagnostician, which is a virtue; however, it is seldom given by a teacher because it takes two or more hours to administer and longer to score. The assessment is restrictive in that the child’s spontaneous language is ignored and only language that is prompted and imitated can be scored.

The RITLS is teacher friendly in that it is both easy to give and easy to score. However, because it relies upon a format that allows students to point to one of three pictures based upon the target language (the prompt), a child has a 33% chance of guessing the right answer without fully understanding the language prompt. Additionally, a child might have no English competence but know ASL. If such were the case, it is predicted that a student could still “guess” the right answer based simply upon knowing signs for a few content and structure words. In other words, a child’s semantic knowledge base in ASL might be sufficient to cue him/her to make the proper selection without the English competence that the test is designed to assess.
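To see why a 33% guessing rate matters when interpreting scores, consider the binomial arithmetic sketched below. The item count used here is hypothetical, not the RITLS's actual length; the point is only that a sizable raw score can arise from guessing alone.

```python
from math import comb

def p_at_least(k: int, n: int, p: float = 1/3) -> float:
    """Probability of answering k or more of n three-choice items
    correctly by pure guessing (upper tail of a binomial)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 60  # hypothetical test length, for illustration only
print(f"Expected score from guessing alone: {n / 3:.0f} of {n}")

# Smallest score that pure guessing would produce less than 5% of the time:
cutoff = next(k for k in range(n + 1) if p_at_least(k, n) < 0.05)
print(f"Only scores of {cutoff} or more are unlikely to be chance (p < .05)")
```

On such a test, a raw score near one third of the items, or even several points above it, tells a teacher very little about the child's actual command of the target English structures.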

In addition to these standardized tests, check-off lists have been developed to help track the development of a child’s syntactic system. Although many schools and programs have assembled similar outlines or check-off lists for tracking their students, of significance are the following:

1) Teacher Assessment of Grammatical Structures (TAGS) (1983). (Moog, J.S., & Kozak, V.J. Central Institute for the Deaf, St. Louis, MO)

2) Developmental English Language List (DEL) (1988). (Compton, C.C., & Harder, K. University of Washington Press.) Adapted by Barbara Luetke-Stahlman in Language Issues in Deaf Education (Butte Publications, Inc., 1998).

In addition to these instruments, those working with the deaf and hard of hearing (D/HH) have utilized other standardized instruments that have been normed on hearing children. Three of significance are:

1) Carrow Elicited Language Inventory (CELI) (1974). (DLM Teaching Resources, Allen, TX)

2) Oral and Written Language Scales (OWLS) (1995). (American Guidance Service, Inc., Circle Pines, MN)

3) Brigance Diagnostic Inventory of Early Development (Brigance) (1978). (Curriculum Associates, Inc., Woburn, MA)

All of these assessment instruments, and many others that could be cited (Thompson et al., 1987), have made a significant contribution to the field of deaf education. They have helped educators relate the children they are working with to other children based upon assessment scores obtained from testing, and they have yielded valuable diagnostic information that can be helpful to teachers. Unfortunately, however, most assessment strategies, schemes, and tests are administered outside the classroom with minimal involvement of teachers.

Informal Best Practices – Deaf and Hard of Hearing

It would appear that a definite shift has occurred in the last decade. Much more attention is being paid to the individual. Indeed, there is greater interest in the child’s use of language within social context (Muma and Teller, 2003), as well as greater interest in the idiosyncratic nature of the individual’s language system as it is emerging. Additionally, assessment seems to be leaning more heavily upon the “content analysis” of a child’s individual language performance. There seems to be less concern about comparing a child to “other children” and much greater concern about comparing a child with him/herself, a trend whose time has surely come.

Grammatical Forms

Papers are available (White, 2003; White and Goin, 2002; Baudean, 2003) that argue convincingly that verbs are all-powerful in dictating the word order of the sentence predicate that follows them. This same thought can be expressed from a psychological perspective: when the human mind is trying to shape its infinite number of ideas into language, it is constrained to use only one or more of 13 different grammatical forms, which result from using verbs from one or more verb subsets (White, 2003); these forms are listed in Table 1. Because the sentence form becomes defined as a consequence of using a verb from one or more of the 13 verb subsets, a single, frequently used verb from each subset has been identified, and this set of 13 verbs has been “dubbed” the 13 Key Trigger Verbs (KTVs). At first, to speak of “constraints imposed upon the mind” seems unacceptable, but in fact it is quite logical. These sentence forms, hypothesized but never defined by Chomsky, then become the building blocks for full and rich communicative exchanges, and by being finite in nature, they make it possible to learn all the “base components or base forms” of communication.

Table 1. Defining Characteristics of 13 Key Trigger Verb Subsets

(Promoting the teaching of KTVs supports a cognitive agenda, namely, that humans are able to conceptualize the ideas numbered below. Key: I = intransitive; T = transitive; L = linking; DO = direct object; IDO = indirect object. Examples are expressed in past tense.)

1. People, animals and things can “act”.
KTV: Sleep (I). Example: The boy slept.
The KTV gives rise to: Cannot take a DO.

2. People, animals and things can act upon, or be acted upon by, people, animals and things. When the verb being used has a “set of one” possible items that can be acted upon, the DO need not be expressed.
KTV: Read (T). Example: The boy read (DO*) well. *The DO (printed material) need not be expressed unless it has significance.
The KTV gives rise to: The DO may or may not be expressed at the surface level. It must be expressed if it is very significant.

3. People, animals and things can act upon, or be acted upon by, people, animals and things. When the verb being used can take many different nouns as a DO, the DO must be expressed at the surface level.
KTV: Drop (T). Example: The boy dropped the book.
The KTV gives rise to: The DO must be expressed at the surface level.

4. Animate subjects can be recipients of things.
KTV: Give (T). Examples: (1) The boy gave some flowers to his girl friend. (2) The boy gave his girl friend flowers.
The KTV gives rise to: In establishing topic, the DO and IDO must be expressed at the surface level of the sentence.

5. People, animals and things have location and can be moved from one location to another by animate subjects.
KTV: Put (T). Example: The boy put the flowers on the table.
The KTV gives rise to: A “place adverbial” must follow the DO of “put” and be used at the surface level.

6. People, animals and things can be possessed (owned) by animate subjects.
KTV: Has (T)**. Example: The boy has a new bike.
The KTV gives rise to: The object (DO) of ownership must be expressed.

7. Some concepts require partnership between two or more words to convey meaning.
KTV: Play with (T) (double verb). Examples: (1) The boy played with his toys. (2) The boy stared at the beautiful girls.
The KTV gives rise to: A preposition must be used in conjunction with the verb to convey full meaning.

8. Objects that have been acted upon can, themselves, be described.
KTV: Make (T). Examples: (1) John made Mary happy. (2) John made Mary move. (3) John made Mary President.
The KTV gives rise to: An objective complement must follow this verb variant.

9. Attempted actions may or may not be successful.
KTV: Try (T). Example: The boy tried to climb the rope.
The KTV gives rise to: An infinitive* or gerund* must follow this verb variant, and it serves as the DO.

10. Concepts can be nested or embedded within other concepts.
KTV: Think (T). Example: The boy thinks he is strong.
The KTV gives rise to: A dependent clause must follow this verb variant, and it serves as the DO.

11. All things have defining characteristics.
KTV: BE (L). Example: The boy is happy.
The KTV gives rise to: Predicate adjectives describing the subject may follow a Be verb.

12. All things can be renamed in some way.
KTV: BE (L). Example: The boy is my friend.
The KTV gives rise to: Predicate nominatives may follow a Be verb.

13. All things have location.
KTV: BE (L). Example: The boy is home.
The KTV gives rise to: “Place adverbials” may follow Be verbs.

*It should be noted that once the semantic-syntactic features of a verb have been learned, when those verbs are used elsewhere in the sentence as “verbals” they will still compel the user to have the same elements follow them. (The boy tried to give some flowers to his girl, but she refused them.)
**The verb “has” has also been selected as a KTV because it must also be used as an auxiliary verb when speaking in the perfect tense, and children need to know that a word may have multiple functions within a language.

Any assessment of language would thus be incomplete if it did not assess the richness of a student’s functional and grammatical use of verbs across the 13 KTV subsets. Indeed, it is tacit knowledge of the semantic-syntactic features of verbs that gives language users the ability to know what constituents should and must follow in the predicate of a sentence to make it grammatical. Of course, making such assessments is predicated upon securing an adequate language sample. A simple check of this kind is sketched below.
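To make the idea concrete, the sketch below encodes the “gives rise to” column of Table 1 as a small template and flags missing constituents in a predicate. It is only an illustration: the constituent labels are assumed to come from a teacher's hand analysis of the sentence, and the function and dictionary names are hypothetical, not part of any published instrument.

```python
# Required predicate constituents for each Key Trigger Verb subset,
# paraphrasing the "gives rise to" column of Table 1. Entries with an
# empty list either forbid or merely permit further constituents.
KTV_TEMPLATE = {
    "sleep": [],                          # intransitive: cannot take a DO
    "read": [],                           # DO optional at the surface level
    "drop": ["DO"],                       # DO must be expressed
    "give": ["DO", "IDO"],                # DO and IDO must be expressed
    "put": ["DO", "PLACE_ADVERBIAL"],     # place adverbial must follow the DO
    "has": ["DO"],                        # object of ownership must be expressed
    "play with": ["PREPOSITION", "DO"],   # double verb: preposition needed
    "make": ["DO", "OBJECT_COMPLEMENT"],  # objective complement must follow
    "try": ["INFINITIVE_OR_GERUND"],      # verbal serves as the DO
    "think": ["DEPENDENT_CLAUSE"],        # clause serves as the DO
    "be": [],                             # pred. adj., nominative, or place adverbial may follow
}

def missing_constituents(ktv: str, observed: set) -> list:
    """Return the Table 1 constituents absent from a hand-coded predicate."""
    return [c for c in KTV_TEMPLATE[ktv] if c not in observed]

# Hand-coded analysis of "The boy put the flowers." (no place adverbial):
print(missing_constituents("put", {"DO"}))          # ['PLACE_ADVERBIAL']
print(missing_constituents("give", {"DO", "IDO"}))  # [] -> predicate complete
```

Tallying which KTV subsets appear in a child's samples, and how often their required constituents are complete, gives one concrete picture of the "richness" the paragraph above calls for.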

Language Samples

Securing language samples is a respectable practice. Indeed, it is hard to imagine how conclusions can be drawn about a child’s language abilities without access to samples of what the child has written, said or signed. Language sampling is justified on the basis that securing and assessing such samples is the only way a teacher can take a “peek” at the rule system that governs a child’s linguistic performance. Indeed, if one wants to make judgments about a person’s linguistic competence, one must first secure samples of linguistic performance. There really is no other way.

Suggestions have been made about the appropriate size of an acceptable language sample. Some have suggested that a sample should be at least 50 to 100 utterances (Darley and Moll, 1960; Luetke-Stahlman, 1998). Others have suggested a minimum of 200-300 utterances (Muma et al., 1998). Rather than suggesting any set number of utterances or words, it is suggested that teachers simply become diagnostic teachers, which means that securing a language sample becomes almost a daily activity. A diagnostically minded teacher will recognize that every time the child writes, speaks or signs, she/he should be noting the child’s linguistic performance, and from this multitude of “language samples” an insightful list of what the child knows and does not know about English can be specified.

T-Units

Hunt (1965) defined the “T-unit” as an independent clause along with all attached dependent clauses. He believed it constituted the ideal “unit of analysis” for assessing the linguistic maturity of speech and writing. More specifically, Hunt (1965) showed that as children get older (and, presumably, mature linguistically), the number of words per T-unit and clauses per T-unit increases. Thus, simply counting words and clauses per T-unit seems to be a simple, straightforward and logical way of measuring linguistic development. Indeed, counting words seems simple enough; however, the difficulty for most teachers of children who are deaf comes in (1) identifying T-units, which requires a rich understanding of English grammar (being able to recognize clauses), and (2) knowing how to count words when the sentence production of their children can be so flawed. Unfortunately, Hunt (1965) provided no rigorous guidelines (scoring procedures) for assessing seriously flawed English; however, White (1997) has created an instrument specifically designed to assess both perfect and flawed English based on T-unit analysis, namely, the SAWL (Structural Assessment of Written Language). This assessment strategy has the advantage of being able to assess and rank written textual material (White, Scott and Grant, 2002) as well as assess and rank spontaneously written English, regardless of whether the composition is written in perfect English or in seriously flawed English (Chappell, 2003; Graem, 2003).

The SAWL provides teachers with several indices that can be used individually or collectively in documenting a child’s linguistic ability, namely: (a) T-units per 100 words (TU/100), (b) words per T-unit (WTU), (c) morphemes per T-unit (MTU), (d) clauses per T-unit (CTU), and (e) the Word Efficiency Ratio (WER). These can be calculated on perfect T-units (Level-I), on perfect and flawed qualifying T-units (Level-II), or on everything written (Level-III). Two of the indices, the MTU and the WER, are unique to the SAWL: the MTU increases with the use of every perfectly used morpheme, and the WER provides a ratio of words used in English word strings over total words used. Obviously, as a child masters English, more words are incorporated into the numerator of that ratio, until it reaches 100%, or a ratio of 1.0.
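The arithmetic behind these indices is simple once the counts are in hand. The sketch below computes all five from tallies a scorer has already made; it illustrates the definitions given above, not the SAWL's published scoring procedure, and the qualifying rules that separate Levels I-III are not reproduced. The function name and the sample tallies are hypothetical.

```python
def t_unit_indices(t_units, words_in_t_units, morphemes, clauses,
                   words_in_english_strings, total_words):
    """Compute the five maturity indices described above from raw tallies."""
    return {
        "TU/100": 100 * t_units / total_words,  # T-units per 100 words
        "WTU": words_in_t_units / t_units,      # words per T-unit
        "MTU": morphemes / t_units,             # morphemes per T-unit
        "CTU": clauses / t_units,               # clauses per T-unit
        # Words inside English word strings over total words; approaches 1.0
        # as more of the child's output qualifies as English.
        "WER": words_in_english_strings / total_words,
    }

# Hypothetical tallies from one spontaneous writing sample:
indices = t_unit_indices(t_units=9, words_in_t_units=71, morphemes=98,
                         clauses=12, words_in_english_strings=71,
                         total_words=84)
for name, value in indices.items():
    print(f"{name}: {value:.2f}")
```

Because the same indices can be computed on a child's writing and on classroom reading material, comparisons of the kind described in the next paragraph fall out directly.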

After the SAWL was used to assess the written text of basal readers, second through fifth grade (White, Scott & Grant, 2002), it was suggested that the MTU appears to be the most sensitive indicator of linguistic maturity. The SAWL, if used, would also allow a teacher to make direct comparisons between written textual material and a child’s own language composition. While there are no definitive data detailing the degree of similarity that should exist between the complexity of a child’s writing and the reading material best suited for him/her, it is presumed that there is a “preferred relationship”. Future research will have to confirm or disconfirm this assertion.

To be certain, with the SAWL a child’s language performance can be documented and tracked across time using the same assessment instrument (the same indices), and this is a significant advantage. Additionally, Part II of the SAWL allows a teacher to go beyond the calculation of indices of maturity (scores) obtained by just counting words, morphemes and clauses; it also provides a strategy for doing (1) a detailed analysis of the verbs being used against a KTV template, and (2) a detailed analysis of all properly used grammatical structures (NPs, verb inflections, PPs, adverbs, clauses, conjoining, etc.).

Sociolinguistic Assessment

The importance of making sociolinguistic assessments (contextual assessments) has been underscored by Muma and Teller (2003) and cannot be minimized. Indeed, they are critical and of utmost importance, because the information gleaned from them should provide a solid foundation upon which vocabulary, grammatical and pragmatic assessments will be made and upon which teaching will be based. This critical point has been made exceedingly well by Stewart and Kluwin (2001).

In conducting these four assessments, it should be noted from Table 2 that the Social Awareness Assessment should be made first and the Pragmatic Assessment should be made last. Answers about intentionality and context (the Functional and Sociolinguistic Assessments, Question Clusters 2 and 3) will most likely be secured simultaneously, inasmuch as it is impossible to determine the functional use a child has for language without watching him/her perform linguistically within a social context.

Table 2: Four Sets of “Cluster Questions” Designed to Give Direction to Teachers Who Wish to Assess the Language Abilities of Their Students

CLUSTER 1: SOCIAL AWARENESS (Social Assessment)

1. Does the child relate to others socially in a positive way?
2. If so, with whom does the child interact: those younger, peers, or adults?
3. Does the child interact differently and appropriately to those of different social status?
4. Does the child initiate social interaction, or is the child merely responsive to those who interact with him/her?
5. What expectations do the significant others have for the child relative to communication and relative to the performance of age-appropriate activities?
6. What expectations does the child appear to hold for him/herself relative to communication and the performance of age-appropriate activities?

CLUSTER 2: COMMUNICATIVE INTENT (Functional Assessment)

1. What are the manifested reasons for which the child uses language? More specifically, does the child use language functionally* to (a) secure goods and services and to satisfy needs (Instrumental); (b) influence others and to achieve some control over others (Regulatory); (c) know, enjoy and relate to others in a satisfying way (Interactional); (d) express feelings, emotions, attributes, values, aspirations and individuality (Personal); (e) discover the world about him/her, how things work, why things are done, and what will happen to him/her (Heuristic); (f) speak and write in ways that bring pleasure to self and others (Imaginative); and (g) inform others of things he/she knows and wants to share with them (Informative)?

*Halliday (1975)

CLUSTER 3: SOCIOLINGUISTIC (Contextual Assessment)

1. Who are the people with whom a child must and/or chooses to interact in the course of a standard day?
2. What other people may the child be obliged to interact with from time to time?
3. What are the settings (sociolinguistic arenas) and circumstances associated with the recurring communicative events in the child’s life?
4. What are the child’s intentions/reasons for communication (see Cluster 2)?
5. What social role is the child obliged to take within his/her sociolinguistic arenas?
6. What linguistic symbol system (speech, print, ASL, MCE, etc.) is the child expected to use with his/her communication partners?
7. What linguistic symbol system does the child prefer to use?
8. Is the child expected to code switch, and if so, is he/she able to do so effectively?
9. Are there significant events in the life of the child for which the child needs better communication skills?
10. Are the child’s parents able to communicate effectively with him/her?
11. If the parents are not able to communicate effectively, are they willing to learn, and how much time will they commit to learning to communicate effectively?
12. Etc.

CLUSTER 4: PRAGMATIC KNOWLEDGE (Pragmatic Assessment)

1. Does the child have an appropriate cognitive base upon which to build conversation?
2. To what degree is the child able to effectively communicate his/her intentions with/without full grammatical competence?
3. Is the child able to initiate a conversation?
4. Is the child able to keep a conversation on topic, and if so, for how long?
5. Does the child make appropriate presuppositions that sustain a conversation?
6. Is the child able to appropriately change the topic of conversation?
7. Is the child able to appropriately close a conversation?
8. Is the child able to effectively repair a conversation and re-establish topic?
9. Is the child able to understand and use direct and indirect speech acts?
10. Is the child an effective and respectful turn-taker within a conversation?
11. To what degree does the child show the appropriate emotions and body language germane to a conversation?
12. What “language routines” does the child use, and how well are they formed?
13. Etc.

Summary

Language assessment has appropriately shifted from the assessment of vocabulary and syntax using predominantly, and sometimes exclusively, standardized tests to the assessment of language as a living, growing, emerging ability within various arenas of human communication. In this paper it has been argued that language assessment must begin by knowing about the child’s world: taking an inventory of communication partners and of the social contexts in which language is used. Vocabulary assessment should be functional and tied to social context, as should syntactic assessment. And most importantly, teachers should assume leadership and responsibility for making sure that a comprehensive language assessment is made. It was noted that teacher-friendly assessment instruments now exist to empower teachers to carry out their rightful responsibility.

References

Barry, A. K. (1998). English grammar: Language as human behavior. Prentice Hall, New Jersey.

Blackwell, P. M., Engen, E., Fischgrund, J. E., & Zarcadoolas, C. (1978). Sentences and other systems: A language and learning curriculum for hearing impaired children. A.G. Bell Association, Washington, D.C.

Bloom, L., & Lahey, M. (1978). Language development and language disorders. John Wiley & Sons, NY.

Brown, R. (1973). A first language: The early stages. Harvard University Press, Cambridge.

Chappell, A. (2003). Establishing inter-scorer reliability, test-retest reliability and construct validity for the SAWL (Structural Assessment of Written Language). Unpublished professional paper for the master’s degree. Texas Woman's University, Department of Communication Sciences and Disorders. Major Professor: Dr. Alfred H. White.

Chomsky, N. (1957). Syntactic structures. Mouton, The Hague.

Chomsky, N. (1965). A review of B. F. Skinner’s Verbal Behavior. In J. Fodor & J. Katz (Eds.), The structure of language: Readings in the philosophy of language. Prentice-Hall, Englewood Cliffs, NJ.

Crump, C. (1977). A comparison of free response and structured response test scores as measured by the Maryland Syntax Evaluation Instrument. Unpublished master’s thesis. Texas Woman's University, Department of Communication Sciences and Disorders. Major Professor: Dr. Alfred H. White.

Daiute, C. (Ed.) (1993). The development of literacy through social interaction. No. 61. Jossey-Bass Publishers, San Francisco.

Darley, F., & Moll, K. (1960). Reliability of language measures and size of language samples. Journal of Speech and Hearing Research, 3.

Dunn, L. M., & Dunn, L. M. (1981). Peabody Picture Vocabulary Test – Revised. American Guidance Service.

Gleitman, L. (1994). The structural sources of verb meanings. In P. Bloom (Ed.), Language acquisition: Core readings. The MIT Press.

Graem, D. (2003). Determining correlations between writing samples, test-retest reliability, scorer reliability, and time efficiency using the Structural Analysis of Written Language (SAWL). Unpublished graduate research paper. Texas Woman's University, Department of Communication Sciences and Disorders. Academic Advisor: Dr. Alfred White.

Halliday, M. (1977). Learning how to mean. Elsevier, New York.

Hunt, K. W. (1965). Grammatical structures written at three grade levels. National Council of Teachers of English, Urbana, Illinois.

Kretschmer, R. R., & Kretschmer, L. W. (1978). Language development and intervention with the hearing impaired. University Park Press, Baltimore.

Logemann, J. (1998). “Clinical Forum.” Language, Speech and Hearing Services in the Schools, October issue.

Luetke-Stahlman, B. (1998). Language issues in deaf education. Butte Publications, Inc., Hillsboro, Oregon.

Lund, N., & Duchan, J. (1988). Assessing children’s language in naturalistic contexts. Prentice-Hall, Inc., Englewood Cliffs, New Jersey.

McDaniel, D., McKee, C., & Cairns, H. (1998). Methods of assessing children’s syntax.

Moog, J. S., & Kozak, V. J. (1983). Teacher assessment of grammatical structures. Central Institute for the Deaf, St. Louis.

Moog, J., & Biedenstein, J. (1998). Teacher assessment of spoken language.

Muma, J. (1978). Language handbook: Concepts, assessment, intervention. Prentice-Hall, Inc., Englewood Cliffs, New Jersey.

Muma, J. (1998). Effective speech-language pathology: A cognitive socialization approach. Lawrence Erlbaum Associates, Inc., New Jersey.

Schemmel, C., Edwards, S., & Prickett, H. (1999). Reading?….Pah! (I got it!). American Annals of the Deaf, Vol. 144, No. 4.

Scherer, P. (1981). Total Communication Receptive Vocabulary Test. Mental Health & Deafness Resources, Northbrook, IL.

Stewart, D., & Kluwin, T. (2001). Teaching deaf and hard of hearing students: Content, strategies, and curriculum. Allyn and Bacon, Needham Heights, MA.

Thompson, M., Biro, P., Vethivelu, S., Pious, C., & Hatfield, N. (1987). Language assessment of hearing impaired school age children. The University of Washington Press.

White, A. (1979). The relationship of semantic and syntactic features of verbs to sentence structure. Unpublished paper. Texas Woman's University, Denton, TX.

White, A. (1980). Language curriculum guide for teachers of the deaf (primary writer). Statewide Project for the Deaf, Texas School for the Deaf, Austin, Texas.

White, A., & Tischler, S. (1999). Receptive sign vocabulary tests: Tests of single-word vocabulary or iconicity? American Annals of the Deaf, Vol. 144, No. 4.
