Running Head: VOCAL CONTROL OF PROSODIC PARAMETERS IN DYSARTHRIA
Vocal Control of Prosodic Parameters by an Individual with Severe Dysarthria
Rupal Patel
B.Sc., M.H.Sc., SLP (C), CASLPO (R)
Ph.D. Candidate
University of Toronto
Department of Speech-Language Pathology
6 Queen's Park Cres. W.
Toronto, ON M5S 3H2
r.patel@utoronto.ca
December 1, 1998
Abstract
This paper reports on a case study aimed at determining prosodic control of vocalizations
produced by an adult male with severe dysarthria due to cerebral palsy. Within each
prosodic parameter of frequency, intensity and duration, the subject was instructed to
sustain production of the vowel /a/ at three levels: low, medium and high. The resulting
data were analyzed to determine the number of distinct non-overlapping categories within
each prosodic parameter. Results for this individual indicate limited control of frequency,
greater control for duration and the most control over intensity. These findings provide
the impetus to further investigate prosodic control among individuals with severe
dysarthria as a potential channel to convey communicative information.
Key Words: dysarthria, cerebral palsy, prosody, automatic speech recognition
Introduction
Problem: Information Bottleneck
Slow, imprecise and variable speech characteristics of individuals with dysarthria impose
an information bottleneck, impeding efficient and effective communication. Concomitant
physical impairments among individuals with dysarthria due to cerebral palsy pose an
even greater challenge for communication efficiency. Through assistive interface
technologies, researchers and clinicians have attempted to improve information transfer
for individuals who are non-speaking.
Approach: Increasing the Number of Input Modes
A common approach for enhancing the expressive abilities of individuals who are
non-speaking is to increase the number of input modes through augmentative and alternative
communication (AAC) techniques and devices (Shein, Brownlow, Treviranus & Parnes,
1990). For example, an individual may use his1 eyes, arm movements, facial expressions
and/or head rotations to transmit information. Various interface adaptations have been
developed to make use of these residual motor skills including optical head pointers, sip
and puff switches, and scanning devices (Fried-Oken, 1985). Unfortunately, pointing and
scanning methods are slow and can be physically fatiguing. Additionally, although an
AAC system which is based on non-vocal input may be functional for vocational and
educational needs, it may not sufficiently satisfy the individual’s social communication
needs (Beukelman & Mirenda, 1992). In fact, many AAC users with severe dysarthria
often prefer speech over other input modes (Ferrier, Jarrell, Carpenter, & Shane, 1992;
Fried-Oken, 1985; Treviranus et al., 1992). These reasons provide motivation for
addressing the problem of utilizing the vocal channel of individuals with severe
dysarthria. A system capable of recognizing dysarthric vocalizations would expand the
1. For stylistic brevity, all third-person references are masculine throughout this paper, but the reader should read them as gender neutral.
information bandwidth of a communicative exchange by enabling the AAC
communicator to utilize an additional and socially satisfying channel of expression.
Background
Automatic speech recognition (ASR) is a hands-free interface that lets individuals with
physical limitations access computer-based assistive technology with their voice.
Surprisingly, ASR may also be used by individuals with dysarthria, since the underlying
technology only requires consistently articulated, and not necessarily intelligible, speech
(cf. Ferrier et al., 1992; Fried-Oken, 1985).
Some ASR systems may even outperform the ability of human listeners to recognize
dysarthric speech. Carlson and Bernstein (1987) reported that 35 of 46 individuals with
mild-moderate speech impairments were better recognized using a template-based
isolated word speech recognizer2 than by human listeners. Stevens and Bernstein (1985)
also compared the recognition accuracy of a speaker-dependent recognition system3 to
that of human listeners. Five hearing impaired individuals, only two of whom used speech
for daily communication, produced a list of 40 words. Recognition accuracy for the
speech recognition system was reportedly between 75-99%. Four of the five subjects were
also judged by human listeners using the same vocabulary. The speech recognition system
outperformed human listeners by levels greater than 60%.
Using vocalizations may be faster and inherently less physically fatiguing than direct
manual selection or scanning. Treviranus, Shein, Haataja, Parnes and Milner (1992)
compared two access techniques: traditional scanning and a hybrid approach of
combining scanning with speech recognition. For all eight individuals with severe
physical impairments and non-functional speech, the hybrid approach increased rate of
computer input compared to scanning alone. In addition, seven of their eight subjects
preferred the hybrid method to scanning alone. Similarly, Ferrier et al. (1992)
reported a case study of an adult male with mild-moderate dysarthria using the
DragonDictate4 speech recognition system. The subject achieved 80-90% recognition
accuracy by the second session. Although he demonstrated greater variability in rate and
accuracy of text creation between sessions compared to normal speakers, his oral typing
rate using DragonDictate was twice as fast as manual typing on a standard keyboard.
Improvements in communication rate and accuracy are particularly important to AAC
users given their typical message construction rates of six or fewer words per minute
(Vanderheiden, 1985). Possible benefits include increased naturalness of interactions and
reduced physical fatigue. Communication rate can be improved by increasing recognition
2. The speech recognition system was a noncommercial system developed at U.C. Berkeley (1986).
3. The speech recognition system was a noncommercial system developed at U.C. Berkeley (1986).
4. DragonDictate is a product of Dragon Systems, Inc., 90 Bridge Street, Newton, MA 02158. The version of DragonDictate used in this investigation is not specified.
accuracy. One approach is to train the ASR system to attend to the user's speech
characteristics. Schmitt and Tobias (1986) used the IntroVoice II5 speaker-dependent
speech recognition system with an 18-year-old girl with severe physical disabilities and
impaired speech. Ten isolated words were spoken three times each during three training
sessions. Recognition accuracy improved from 40% in the first session to 60% in the last
session. Another approach to improving ASR accuracy is to train speakers to change their
articulatory patterns in order to be better understood by the system. Kotler and
Thomas-Stonell (1997) demonstrated that speech training with a young man with mildly dysarthric
speech resulted in a 57% reduction in recognition errors using IBM VoiceType6 (version
1.00). The training focused on improving phonetic accuracy and consistency across
vocalizations.
Although ASR has been successfully used by individuals with mild dysarthria, the
technology has been less reliable in cases of severe dysarthria. In response, some
researchers have attempted to rebuild acoustic models for specific dysarthric populations
(cf. Deller, Hsu & Ferrier, 1991; Jayaram & Abdelhamied, 1995). Phonetic transitions
which require fine motor control of articulators have been found to be particularly
difficult for individuals with dysarthria (Doyle, Raade, St. Pierre & Desai, 1995). On the
assumption that phoneme steady states are physically easier to produce because they do
not require dynamic movement of the vocal tract, Deller et al., (1991) used techniques to
automatically remove transient segments from the speech signal. This resulted in
improved but still relatively poor recognition accuracy. Jayaram and Abdelhamied (1995)
developed two different artificial neural network based systems for an individual with
severe dysarthria. One network used fast Fourier transform coefficients as inputs and the
other used formant frequencies as inputs. Results indicated that the neural networks
outperformed the IntroVoice system7, a commercial system, and human listeners in
recognition tasks. These attempts, however, have been only moderately successful.
Considerable interspeaker variability in phonetic form remains a significant problem.
An underlying assumption in most previous work with ASR and AAC has been that the
primary channel of information in dysarthric speech is phonetic. This is largely driven by
the fact that all current commercial ASR systems only respond to phonetic aspects of the
speech signal. Non-phonetic information, including many important prosodic factors such
as fundamental frequency, intensity, and duration, is explicitly factored out based on the
premise that it does not convey linguistically salient information (Rabiner, 1993).
5. IntroVoice II, The Voice Connection, Irvine, CA 92714.
6. IBM VoiceType (version 1.00), IBM U.S.A., Department 2VO, 133 Westchester Ave., White Plains, NY 10604.
7. IntroVoice II, The Voice Connection, Irvine, CA 92714.
This paper calls for a paradigm change from attempting to discover phonetically
consistent features in severely dysarthric speech to identifying salient prosodic features.
The fact that individuals with severe speech impairments have highly variable phonetic
speech characteristics has been well documented in the literature (cf. Darley, Aronson, &
Brown, 1969; Dabbagh & Damper, 1985; Deller et al., 1991; Jayaram & Abdelhamied,
1995; Rosenbek & LaPointe, 1978). In contrast, although perceptual studies have noted a
restricted range of prosodic control among individuals with dysarthria, there is a paucity of
findings regarding the amount of control along these dimensions. In addition, the
potential for harnessing any residual prosodic control to achieve efficient communication
using vocalizations has not been explored.
This investigation aimed to determine the capacity of an individual with severe dysarthria
to control the frequency, intensity and duration of his vocalizations. It was hypothesized
that the subject with severe dysarthria would be able to consistently produce at least two
acoustically distinct and statistically non-overlapping levels of control in each prosodic
parameter. The assumption was that prosodic features may be more stable parameters of
severely dysarthric speech given that they span a larger temporal window than phonetic
units and therefore may offer greater vocal stability.
Participant
A 52 year old adult male with severe dysarthria secondary to cerebral palsy participated in
this investigation. He was functionally non-speaking and used an alphanumeric board as
his primary means of communication. This subject was selected on the premise that
severely dysarthric speech due to cerebral palsy is qualitatively similar to severe
dysarthria resulting from other etiologies such as traumatic brain injury, stroke,
and multiple sclerosis (Deller et al., 1991; Jayaram & Abdelhamied, 1995; Turner,
Tjaden and Weismer, 1995). In fact, since the variability of speech among
individuals with cerebral palsy is higher than the variability resulting from other
etiologies, findings based on this subject are more likely to be generalizable. Moreover, a
participant with severe dysarthria was chosen because previous studies have focused on
mild dysarthria; severe dysarthria provides a more stringent test of the approach.
Procedure
To ensure controlled experimental conditions, data collection was conducted in a sound-treated
audiometric booth at the University of Toronto, Department of Speech-Language
Pathology. The experiment was recorded using a Sony ® digital audio tape (DAT)
recorder (Model PCM-2300), and also videotaped with a Panasonic ® AF Piezo VHS
Reporter. The subject wore a Shure (Model SM10A) ® professional unidirectional
head-worn dynamic microphone which was directly connected to the DAT recorder.
Throughout the experiment, the mouth-to-microphone distance was carefully maintained
at 1 inch from the left corner of the mouth.
The subject's control over each of three prosodic parameters, frequency, intensity, and
duration, was examined separately. For each parameter, the subject was asked to sustain
the vowel /a/ at three target levels: low, medium and high. Seventeen trials were
requested in random order for each of the three target levels. This resulted in a total of 51
vocalizations per parameter. During data collection, approximately three seconds were
allowed between the end of the subject's production and the request for the next
stimulus8. There was no time pressure to produce vocalizations and breaks were provided
after data collection for each parameter, and whenever requested by the subject.
Seventeen trials per level (51 vocalizations per parameter) were chosen on
information-theoretic grounds: the aim was to determine whether the subject could produce
the target levels and, if so, with what accuracy and consistency.
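As a rough illustration of that information-theoretic framing (an interpretation offered here, not a calculation from the original study), the information conveyed by a single vocalization is bounded by the number of reliably distinguishable levels:

```latex
% Upper bound on information per vocalization, assuming m levels
% that can be produced and recognized reliably (illustrative only).
\[
  I \le \log_2 m \ \text{bits per vocalization};
  \qquad m = 3 \;\Rightarrow\; I \le \log_2 3 \approx 1.58 \text{ bits.}
\]
```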
For each prosodic parameter, an establishment phase preceded the experimental phase.
During establishment, the subject practiced calibrating his vocal system to produce the
three target levels. The Visi-Pitch9 system provided real time visual feedback for
monitoring his productions. Based on calibration vocalizations, visual guides in the form
of colored stickers were placed directly on the computer screen and served as targets
during the experiment.
The Computerized Speech Lab (CSL)10 was used for off-line acoustic analysis of
vocalizations recorded on the DAT. The mean fundamental frequency and standard
deviation, the average intensity and standard deviation, and the duration of each
vocalization were calculated and recorded for the frequency, intensity and duration
parameters, respectively.
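For readers who wish to approximate these measures, the sketch below computes analogous quantities with open-source tools. It is a minimal sketch only: the paper's analyses were performed with CSL, whose settings are not specified here, and the pitch search range and function names below are assumptions.

```python
# Approximate per-vocalization measures (mean F0, mean intensity,
# duration) analogous to those extracted with CSL. This is not the
# authors' pipeline; the analysis settings are assumptions.
import numpy as np
import librosa

def prosodic_measures(wav_path):
    y, sr = librosa.load(wav_path, sr=None, mono=True)

    # Duration of the recorded vocalization in seconds.
    duration = len(y) / sr

    # Fundamental frequency via pYIN, averaged over voiced frames;
    # the 60-400 Hz search range is an assumption for an adult male.
    f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
    mean_f0 = float(np.nanmean(f0))   # Hz
    sd_f0 = float(np.nanstd(f0))      # Hz

    # Intensity: frame-level RMS energy in dB. Without a calibration
    # tone these are relative dB values, a limitation the paper notes.
    rms = librosa.feature.rms(y=y)[0]
    intensity_db = 20.0 * np.log10(np.maximum(rms, 1e-10))
    mean_db = float(np.mean(intensity_db))
    sd_db = float(np.std(intensity_db))

    return {"mean_f0": mean_f0, "sd_f0": sd_f0,
            "mean_db": mean_db, "sd_db": sd_db,
            "duration": duration}
```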
8. If the subject produced a vocalization and then spontaneously corrected himself, the self-correction was accepted as the trial. Only one re-attempt was allowed. Spontaneous self-corrections were assumed to be accidental errors due to lack of concentration or anticipation of the next trial, rather than a lack of control.
9. Visi-Pitch Model 6087 DS, Kay Elemetrics Corporation, 12 Maple Ave., Pine Brook, NJ 07058.
10. Computerized Speech Lab (Model 4300), Kay Elemetrics Corporation, 12 Maple Ave., Pine Brook, NJ 07058.
Results and Discussion
Descriptive analysis was performed using the SAS statistical software package
(Version 6.11)11. In the frequency protocol, the analysis was carried out on the 51
values of mean fundamental frequency. Similarly, in the intensity protocol, the
average intensity level of each vocalization was used for analysis. In the duration protocol,
the total time of each vocalization was analyzed.
Distinct levels in each prosodic parameter were operationally defined as minimally
overlapping distributions. Within each protocol, a normal distribution with a sample
mean and standard deviation was determined for each level of the parameter using the
seventeen mean values collected. It was expected that 95% of the time the subject would
be able to produce a vocalization within a specified range corresponding to the intended
stimulus level. Lower and upper limits of the 95% confidence intervals around the mean
for each stimulus level were also calculated to test the hypothesis of “distinct” levels
within each parameter. The 95% confidence limits for two adjacent “distinct levels”
could not overlap.
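The reported intervals are consistent with a normal-approximation confidence interval, mean ± 1.96 × SD/√17 (for example, 173.97 ± 1.96 × 30.13/√17 ≈ 159.7 to 188.3 Hz, matching Table 1). The sketch below reconstructs this criterion; it is an inferred reconstruction, not the original SAS analysis.

```python
# Reconstructed (assumed) "distinct levels" criterion: two levels are
# distinct if their 95% confidence intervals for the mean do not overlap.
import math

def ci95(mean, sd, n=17):
    """Normal-approximation 95% confidence interval for the mean."""
    half_width = 1.96 * sd / math.sqrt(n)
    return mean - half_width, mean + half_width

def distinct(a, b):
    """True if the CIs of two (mean, sd) levels do not overlap."""
    (lo_a, hi_a), (lo_b, hi_b) = ci95(*a), ci95(*b)
    return hi_a < lo_b or hi_b < lo_a

# Frequency protocol, low vs. medium (Table 1): intervals overlap.
print(distinct((173.97, 30.13), (181.89, 22.04)))  # False
# Duration protocol, low vs. medium (Table 2): intervals are separate.
print(distinct((0.55, 0.13), (1.56, 0.67)))        # True
```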
Table 1
Summary Statistics for the Frequency Protocol

Level        Mean (Hz)   St. Dev. (Hz)   Minimum   Maximum   95% CI, lower   95% CI, upper
1 - Low      173.97      30.13           119.41    218.34    159.65          188.30
2 - Medium   181.89      22.04           135.25    218.89    171.41          192.37
3 - High     198.05      20.20           160.54    231.20    187.58          208.52
There was considerable variability of productions within the frequency protocol.
Confidence limits for all three target levels of frequency overlap. There is overlap
between each adjacent stimulus level, indicating fewer than 3 distinct categories of
frequency. There is also a small degree of overlap between the confidence limits of
level 1 (low) and level 3 (high), indicating fewer than 2 categories of frequency control.
A graphical illustration of the overlap between stimulus level distributions can be found
in Figure 1.
Table 2
Summary Statistics for the Duration Protocol

Level        Mean (sec)   St. Dev. (sec)   Minimum   Maximum   95% CI, lower   95% CI, upper
1 - Low      0.55         0.13             0.37      0.79      0.49            0.61
2 - Medium   1.56         0.67             0.67      2.64      1.24            1.88
3 - High     2.92         1.08             0.62      3.88      2.40            3.44

11. Statistical Application Software (SAS) (Version 6.11), SAS Institute, Inc., Cary, NC TS040.
In contrast to frequency, the subject showed better control over the duration of his
vocalizations. Applying the operational definition, he was able to produce three levels of
duration: the confidence limits of the three target levels do not overlap. It is important to
note, however, that the duration results should be interpreted with caution. When the subject
was asked to produce vocalizations of medium duration, his productions were not
normally distributed about the mean. In addition, clinical observation during the
experiment indicated that the subject began to experience fatigue toward the end of the
session. The variability of his productions of long duration /a/ was much greater than for
short duration /a/. It can be argued that factors of fatigue and learning may have influenced
his productions. Figure 2 provides a graphical illustration of the stimulus level
distributions for the duration protocol and the relative overlap.
Table 3
Summary Statistics for the Intensity Protocol

Level        Mean (dB)12   St. Dev. (dB)   Minimum   Maximum   95% CI, lower   95% CI, upper
1 - Low      67.13         2.91            62.02     72.17     65.74           68.51
2 - Medium   73.18         1.87            70.35     77.97     72.29           74.07
3 - High     78.24         2.99            69.37     82.25     76.82           79.67

12. A calibration tone was not used in this experiment. Decibel units reported here are dB HL rather than dB SPL.
This subject demonstrated the most control over the intensity parameter. There was no
overlap of the confidence limits between any of the three target levels. Applying the
operational definition of distinct levels, this subject demonstrated control of 3 distinct
levels of intensity. The distributions of soft (L1) and loud (L3) vocalizations were
maximally separated. Figure 3 provides a graphical illustration of the stimulus level
distributions and the relative overlap.
Figure 1 is a graph showing control of low (P1), medium (P2) and high (P3) frequency
vocalizations. The degree of overlap indicates limited differential control.
[Figure 1 about here: histograms of vocalization counts by frequency (Hz) for the low (P1), medium (P2) and high (P3) levels.]
Figure 2 shows control of short (D1), medium (D2) and long (D3) duration. The
distributions have less overlap indicating greater control over this parameter than for
frequency.
[Figure 2 about here: histograms of vocalization counts by duration (sec) for the short (D1), medium (D2) and long (D3) levels.]
Figure 3 shows control of soft (L1), medium (L2) and loud (L3) intensity vocalizations.
Distributions for soft and loud vocalizations overlap minimally, suggesting that two
distinct loudness levels are available for control of a vocalization-driven communication aid.
[Figure 3 about here: histograms of vocalization counts by intensity (dB) for the soft (L1), medium (L2) and loud (L3) levels.]
The goal of this paper was to demonstrate that the prosodic channel of individuals with
severe dysarthria is a viable channel to convey information. This experiment sheds light
on the degree of vocal control over prosodic parameters available to an individual with
severe dysarthria. Results indicate parameter-specific differences in prosodic control
during sustained vowel productions. This subject had limited control over frequency,
some control over duration and the most control over intensity. Although his performance
within the frequency parameter was highly variable, he did have better control over
intensity and duration, indicating reduced performance rather than reduced competence.
Perhaps with training, he may be able to produce at least two distinct levels of frequency.
Intensity appears to be the most reliable parameter of prosodic control for this
individual. The ability to consistently produce soft and loud sounds may serve as a vocal
switch for controlling some functions on a voice output communication aid (VOCA) that is
interfaced with a vocalization recognition system.
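To make the idea concrete, here is a hypothetical sketch of such a binary vocal switch. The threshold is derived from this subject's Table 3 confidence limits, and the mapped actions ("step" and "select") are invented for illustration; the paper does not specify switch semantics.

```python
# Hypothetical two-level vocal switch driven by mean vocalization
# intensity. Threshold = midpoint between the soft level's upper and
# the loud level's lower 95% confidence limits (Table 3). The switch
# semantics (step vs. select) are illustrative, not from the paper.
SOFT_UPPER_DB = 68.51          # level 1 (soft) upper confidence limit
LOUD_LOWER_DB = 76.82          # level 3 (loud) lower confidence limit
THRESHOLD_DB = (SOFT_UPPER_DB + LOUD_LOWER_DB) / 2.0   # about 72.7 dB

def vocal_switch(mean_intensity_db):
    """Map a vocalization's mean intensity to a VOCA action."""
    return "select" if mean_intensity_db >= THRESHOLD_DB else "step"

assert vocal_switch(78.2) == "select"   # a loud /a/ confirms a choice
assert vocal_switch(67.1) == "step"     # a soft /a/ advances the scan
```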
Conclusions and Future Directions
From a clinical perspective, consider a patient who is severely dysarthric but still wants
to use his voice for some aspects of his communication system. Others cannot understand
him, but a machine that can may serve as a translator to unfamiliar listeners, with output
presented as text or as speech synthesized with DECtalk.
Clinical and research applications of ASR technology to individuals with dysarthria have
focused on identifying phonetic consistencies toward improving recognition accuracy.
Moderate levels of success have been documented. As a result, ASR is commonly used as
a writing aid for individuals with motor and speech impairments. The possibility of using
this technology to enhance face-to-face communication has not received sufficient attention.
In order to pursue it, recognition accuracy rates would need to be considerably higher.
Application of speech recognition to AAC users has relied primarily on commercial
software. Effective adaptations of commercial systems may be sufficient to meet the
needs of individuals with mild speech impairments. In its current state, however,
commercial technology is not sufficiently tolerant to recognize the speech of individuals
with severe dysarthria. Yet AAC users with severe dysarthria report an unequivocal
preference to use speech for communication.
The objective of this investigation was to determine whether an individual with severe
dysarthria could control specific prosodic parameters of his vocalizations. This
information is necessary to support the development of software capable of analyzing
dysarthric vocalizations. First, we must identify acoustic parameters which can be
manipulated consistently and reliably. Next, we must determine the amount of control in
each parameter. A vocalization recognizer may be programmed to analyze the salient
prosodic parameters.
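One plausible form such a recognizer could take (a speculative sketch, not a system described in the paper) is a maximum-likelihood classifier: each level identified during a calibration phase is modeled as a normal distribution, and an incoming measurement is assigned to the most likely level.

```python
# Speculative sketch of a prosodic-level recognizer: model each
# calibrated level as a Gaussian and pick the most likely one.
import math

def gaussian_log_likelihood(x, mean, sd):
    return (-0.5 * math.log(2.0 * math.pi * sd * sd)
            - (x - mean) ** 2 / (2.0 * sd * sd))

def classify(x, levels):
    """levels: dict mapping level name -> (mean, sd) from calibration."""
    return max(levels, key=lambda name: gaussian_log_likelihood(x, *levels[name]))

# Calibration statistics for this subject's intensity levels (Table 3).
intensity_levels = {
    "soft":   (67.13, 2.91),
    "medium": (73.18, 1.87),
    "loud":   (78.24, 2.99),
}
print(classify(79.0, intensity_levels))  # -> "loud"
```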
There are several shortcomings in the methodology of this initial case study that may be
addressed in future work. This investigation examined the prosodic control abilities of
one individual with severe dysarthria secondary to cerebral palsy. It will be important to
repeat the experiment with other individuals with severe dysarthria. The subtype of
dysarthria will also need to be controlled in order to make generalizations about prosodic
control. For example, an individual with respiratory involvement may have more difficulty
with intensity and duration compared to frequency.
Future studies should address the issue of small sample bias that may have affected the
current results. For example, the medium duration vocalizations did not follow a
normal distribution. In this study, 17 vocalizations per level were requested based on previous
reports of subject fatigue with increased trials and on statistical calculations of sample size.
A relatively small sample of 17 vocalizations may not be sufficient to approximate
an underlying distribution, if in fact one exists. This may be addressed by requesting a
larger number of vocalizations at each stimulus level within each parameter.
This investigation focused on prosodic control of sustained vowel productions for an
individual with severe dysarthria; it does not make claims about utterance level control.
Future work will need to explore differences in prosodic control during spontaneous
vocalizations. Such work is currently in progress by the author at the University of
Toronto.
Rather than dwelling further on these findings, the more important task is to pursue the
new directions they suggest.
Acoustic Analyses of Dysarthric Speech
12
Control of non-linguistic, suprasegmental features such as frequency, intensity and
duration may enable an individual with severe dysarthria to control some functions on his
VOCA using vocalizations rather than pointing or scanning. In theory, individuals with
only a few discrete vocalizations could potentially use various combinations of their
vocalizations to formulate an increased number of unique messages, as illustrated below.
The supposition is that even the distorted speech signal is a channel for transmitting
information for the purpose of communication. Ultimately, the communication aid of the
future would act as a "familiar facilitator" supporting communication between the
non-speaking person and an unfamiliar listener. Some of the far-reaching implications of using
vocalizations include increased communication efficiency, reduced fatigue, increased
face-to-face communication, improved communication naturalness and social integration.
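An illustrative calculation (not a claim about any particular user): with m distinguishable vocalizations used in sequences of length k, the number of expressible messages grows exponentially.

```latex
% Message capacity of sequences of discrete vocalizations
% (illustrative): m distinguishable levels, sequences of length k.
\[
  N = m^{k}, \qquad \text{e.g., } m = 3,\; k = 3
  \;\Rightarrow\; N = 3^{3} = 27 \text{ unique messages.}
\]
```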
It should be noted that this was only a case study, but it offers an alternative way of
looking at these issues.
Acknowledgements
PP, Whipper Watson, BMC
References
Beukelman, D. R. & Mirenda, P. (1992). Augmentative and alternative communication: management
of severe communication disorders in children and adults. Baltimore: Paul H. Brookes.
Carlson, G.S. & Bernstein, J. (1987). Speech recognition of impaired speech. Proceedings of
RESNA 10th Annual Conference, 103-105.
Coleman, C.L. & Meyers, L.S. (1991). Computer recognition of the speech of adults with cerebral
palsy and dysarthria. Augmentative and Alternative Communication, 7(1), 34-42.
Dabbagh, H. H. & Damper, R. I. (1985). Text composition by voice: design issues and
implementations. Augmentative and Alternative Communication, 1, 84-93.
Darley, F. L., Aronson, A.E., & Brown, J.R. (1969). Differential diagnostic patterns of dysarthria.
Journal of Speech and Hearing Research, 12, 246-269.
Darley, F. L., Aronson, A.E., & Brown, J.R. (1975). Motor Speech Disorders. Philadelphia, PA: W.
B. Saunders Company.
Deller, J. R., Hsu, D., & Ferrier, L. (1991). On hidden Markov modelling for recognition of
dysarthric speech. Computer Methods and Programs in BioMedicine, 35(2), 125-139.
Doyle, F. L., Leeper, H. A., Kotler, A., Thomas-Stonell, N., Dylke, M., O’Neil, C., & Rolls, K.
(1993, November). Comparative perceptual and computerized speech recognition functions for dysarthric
speakers. Paper presented to the American Speech-Language and Hearing Association, Anaheim, CA
Doyle, P.C., Raade, A.S., St. Pierre, A. & Desai, S. (1995). Fundamental frequency and acoustic
variability associated with production of sustained vowels by speakers with hypokinetic dysarthria. Journal
of Medical Speech-Language Pathology, 3(1), 41-50.
Ferrier, L. J., Jarrell, N., Carpenter, T., & Shane, H., (1992). A case study of a dysarthric speaker
using the DragonDictate voice recognition system. Journal for Computer Users in Speech and Hearing, 8
(1), 33-52.
Ferrier, L. J., Shane, H. C., Ballard, H. F., Carpenter, T., & Benoit, A. (1995). Dysarthric speakers’
intelligibility and speech characteristics in relation to computer speech recognition. Augmentative and
Alternative Communication, 11, 165-173.
Fried-Oken, M. (1985). Voice recognition device as a computer interface for motor and speech
impaired people. Archives of Physical Medicine and Rehabilitation, 66, 678-681.
Jayaram, G., & Abdelhamied, K. (1995). Experiments in dysarthric speech recognition using
artificial neural networks. Journal of Rehabilitation Research and Development, 32(2), 162-169.
Kaplan, H., Gladstone, V. S., & Lloyd, L. L. (1993). Audiometric Interpretation: A Manual of Basic
Audiometry. Needham Heights, MA: Allyn and Bacon.
Kotler, A. & Thomas-Stonell, N. (1993 November). Improving voice recognition of dysarthric
speech through speech training. Paper presented to the American Speech-Language and Hearing
Association, Anaheim, CA.
Kotler, A. & Thomas-Stonell, N. (1997). Effects of speech training on the accuracy of speech
recognition for an individual with speech impairment. Augmentative and Alternative Communication, 13,
71-80.
Noyes, J.M., & Frankish, C.R. (1992). Speech recognition technology for individuals with
disabilities. Augmentative and Alternative Communication, 8, 297-303.
Rosenbek, J. & LaPointe, L. (1978). The dysarthrias: description, diagnosis, and treatment. In D.
Johns (Ed.), Clinical Management of Neurogenic Communicative Disorders (pp. 251-310). Boston: Little-Brown
and Company.
Schmitt, D.G. & Tobias, J. (1986). Enhanced Communication for the severely disabled dysarthric
individual using voice recognition and speech synthesis. Proceedings of RESNA 9th Annual Conference,
304-306.
Shane, H. C., & Roberts, G. (1989). Technology to enhance work opportunities for persons with
severe disabilities. In W. Kiernan (Ed.), Economics, industry, and disability (pp. 127-140). Baltimore: Paul
H. Brookes.
Shein, F., Brownlow, N., Treviranus, J., & Parnes, P. (1990). Climbing out of the rut: the future of
interface technology. In Beth A. Mineo (Ed.), 1990 ASEL Applied Science and Engineering Laboratories:
Augmentative and Alternative Communication in the Next Decade (pp. 17-19). Wilmington: University of
Delaware Press.
Stevens, G., & Bernstein, J. (1985). Intelligibility and machine recognition of deaf speech.
Proceedings of RESNA 8th Annual Conference, 308-310.
Treviranus, J., Shein, F., Haataja, S., Parnes, P., & Milner, M. (1991). Speech recognition to enhance
computer access for children and young adults who are functionally non-speaking. Proceedings of RESNA
14th Annual Conference, 308-310.
Turner, G. S., Tjaden, K., & Weismer, G. (1995). The influence of speaking rate on vowel space and
speech intelligibility for individuals with amyotrophic lateral sclerosis. Journal of Speech and Hearing
Research, 38(5), 1001-1013.
Vanderheiden, P. J. (1985). Writing aids. In J.G. Webster, A.M. Cook, W.J. Tompkins, & G.C.
Vanderheiden (Eds.), Electronic aids for rehabilitation, (pp. 262-282). London: Chapman and Hall.
Yorkston, K.M. , Beukelman, D.R., & Bell, K.R. (1988). Clinical Management of Dysarthric
Speakers. Austin, TX: PRO-ED, Inc.