Five myths about noise and hearing loss

Five Myths in Assessing the Effects of Noise on Hearing
Date Published:
Oct 24, 2000
Objectives:
* Participants will be able to name 3 terms for high frequency hearing loss and identify the preferred term.
* Participants will be able to describe the central tendency and range for percentage risk of sustaining a material impairment in hearing after a working lifetime.
* Participants will be able to identify four important sources of noise exposure outside the workplace, and name the one source that represents the most severe hazard in the United States.
* Participants will be able to identify two federal regulations that control excessive exposure to noise.
Instructor:
William W. Clark, Ph.D.
Five Myths in Assessing the Effects of Noise on Hearing
This short essay is a compilation of statements commonly used by audiologists
or others with some expertise related to the effects of noise on hearing. Like most
things in life, there is a kernel of truth in each statement, and some boundary
conditions within which the statement may be valid. However, I have certainly
witnessed each statement below being used both accurately and inaccurately by
students, experts, and other professionals in the more than two decades I have been
teaching, consulting, and lecturing on the subject.
Of course, the statements made here may seem controversial to some of my
colleagues. I’ve tried to support each statement with documentation. If readers
wish to discuss any issue in more detail, feel free to contact me by e-mail, or join
the Audiology Online National Chat on October 24, 2000.
Myth 1: A notch at 4 kHz is called a ''noise notch'' and it means that the hearing
loss was caused or contributed to by noise exposure.
It is not uncommon to refer to the characteristic notching of the audiogram as a
''noise notch'', and to assume that if the notch is present, noise was the cause.
While it is true that a ''notch'', that is, a hearing loss which is greater at 4 kHz
than at the adjacent frequencies (usually 3 kHz and 6 kHz), is commonly seen in
individuals with noise induced hearing loss, the presence of the notch, in and of
itself, is not diagnostic. Notches are also associated with other disorders,
including viral infections, head trauma, and perilymph fistula.
Here are some kernels:
1. The ''4 kHz notch'' has been known to be associated with excessive exposure to
noise for more than a century. Toynbee, in his 1860 textbook1, noted a
diminution in hearing ''of the 5th fork'' by patients who engaged in the hobby of
sport shooting. The ''5th fork'' is the tuning fork with a characteristic frequency of
4096 Hz, or 5 octaves - and thus 5 forks - above middle C (256 Hz). This loss was
also termed the ''C5 dip'' until the 1930s, when audiometers began to be used
and the ''4 kHz'' nomenclature was adopted.
2. The notch doesn’t always occur at 4 kHz. If individual subjects are tested at
finer frequency gradients than the traditional clinical half- or full-octave
registers, ''notches'' can be observed as low as 3 kHz, and as high as 8 kHz. This
is true for humans and other mammals, such as the chinchilla.
3. Despite numerous efforts, and several theories, there is, in my view, no good
explanation as to why mammals sustain injury in the 4 kHz region first. Theories
abound: vascular insufficiency in that region of the organ of Corti; torque from
the traveling wave bending around the first turn of the cochlea; energy transfer
through the basilar membrane; and the angle of the hair cell apparatus in that
region. What we do know, mostly from animal studies, is that it is very difficult
to damage the most basal end of the organ of Corti - the so-called hook region.
Thus, the very basalmost hair cells, which code the highest frequencies, are often
preserved with noise or other trauma.
4. Patients whose hearing losses had other causes, such as genetic or hereditary,
ototoxic, or traumatic, can also exhibit notches. Patients with permanent hearing
loss due to noise may not have notches at 4 kHz. These findings are discussed in
several medical textbooks2.
Take home message: The term “noise notch” implies causation, whereas “4 kHz
notch” describes the audiometric pattern without implying a cause, and is
therefore the more accurate and preferred term.
Myth 2: Asymmetric hearing losses are caused by asymmetric exposures.
This is a sticky one, with numerous complications. The danger here lies in over-interpreting the degree of asymmetry that a noise exposure can actually produce. Audiologists
usually look backward from the measure of hearing toward the cause, e.g., noise.
As a hearing scientist, I look forward from the causative agent, e.g., noise, to the
effect, hearing loss. So the question becomes: what is the maximum difference in
exposure between ears that can be produced by noise exposure, and what are the
limits of differential effects one would expect on hearing?
There are clinical dangers here, including the misdiagnosis of acoustic neuromas
as asymmetric noise induced hearing losses. In a survey of hearing levels of
railroad workers3, we found and referred six patients with AN (out of nearly
10,000 tested) based solely upon asymmetric audiograms. That’s a small, but
very important, yield.
Statements are commonly made in the literature or in litigation such as ''the patient's
asymmetric hearing loss was caused by his work environment, which positioned
his right/left ear nearer the noise source, and thus produced a greater hearing
loss on the right/left ear''. One otherwise very good paper about hearing levels
of truck drivers concluded that the poorer hearing in the left ear was caused by
noise coming in from the open window of the truck, giving the left ear more
exposure. Sorry, folks. It is more likely that wind blowing on the ear caused the
asymmetric hearing loss! The authors ignored the known facts about differential
hearing sensitivity and they never bothered to measure the noise levels at each
ear of the truckers studied. Their conclusion was nothing more than a guess
about the cause of the observation. Similar logic has been used to attribute the
poorer hearing in the left ear of railroad engineers, who sit on the right side of
the locomotive cab, to an asymmetric exposure of the left ear to the radio, which
sits on the opposite side of the cab. Here are the facts:
1. In most occupational environments, the ears are exposed to similar sound
levels bilaterally, even when the apparent noise source comes from one side. It’s
easy for the subject to think the noise is all on one side because of the precedence
effect, which causes a perceptual lateralization of the source to the ear getting the
higher exposure, even when the difference is only one dB or so. But the fact of
the matter is that in the vast majority of occupational environments, most of
which are somewhat reverberant, between-ear exposure differences are less than
2 dB (A-weighted measures), even if the noise source is positioned directly
toward one ear. Here’s why:
a. Head shadow. The diameter of the head is about 8'' (some say mine is
bigger). The wavelength of a 1 kHz tone in air is about 1 foot. For signals with
wavelengths longer than the head diameter (read: 1 kHz and below), the signal
bends around the head, and the sound pressure levels at the two ears differ by
less than 5 dB. At high frequencies, the shadow effect can be as much as 15 dB.
Therefore, for short duration sounds, the difference between the ears is negligible
at low frequencies and at most about 15 dB (in an anechoic environment) at high
frequencies. Short duration sounds such as the report of a rifle, the explosion of a
firecracker, or other impulse events can produce differences in exposure of up to
15 dB in the high frequencies, but never more than 5 dB in the low frequencies.
b. Reverberation and head movement. In occupational environments with
continuous noise sources, like the truck driver example, the exposure is usually
symmetric. The combination of the reverberant environment and the head
movement of the subject produces exposures that rarely differ by more than 2
dB. As the driver operates the truck he moves his head continually, checking
traffic, his mirrors, and adjusting the radio. This movement results in similar
exposure to both ears.
2. Two exceptions to the ''similar exposure bilaterally'' rule are shooters and
workers who use a headphone on one ear in their work environment. As
mentioned above, shooters can get asymmetries of as much as 15 dB at high
frequencies, with the left ear being more exposed than the right ear for shooters
resting the rifle on their right shoulder. Importantly, nearly all published surveys
of human hearing reported worse hearing in the left ear than the right. This is
particularly true for males. It is very likely that a large portion of the observed
differences is due to asymmetric exposure to firearms in males.
The other exception is workers using a single headphone in their job, such as
telephone operators or radio dispatchers. In this case, exposures can differ by as
much as 50 dB.
3. The ability of a continuous noise to cause a hearing loss grows with a slope of
about 1.7 dB per dB of exposure over a range of about 80 dB to about 100 dB. That
is, if the two ears' exposures differ by 10 dB, one would expect the maximum
difference in threshold shift potentially caused by the exposure to be about 17 dB
(a rough numerical sketch of this and the head-shadow arithmetic appears at the
end of this myth). Larger differences suggest another cause besides noise exposure.
The take home message here is three-fold:
A. The limit of asymmetric hearing losses from impulsive exposures is 5-10 dB in
the lower frequencies, and about 20-25 dB in the high frequencies.
B. For continuous exposures, one should not expect any asymmetry at all, even
though the offending noise originates from one side of the individual.
C. If asymmetries exceed these values, a medical referral is indicated.
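To make these rules of thumb concrete, here is a minimal numerical sketch of the head-shadow arithmetic from fact 1a and the 1.7 dB-per-dB growth rate from fact 3. The 8-inch head diameter, the roughly 1-foot wavelength at 1 kHz, and the 1.7 dB-per-dB slope come from the text above; the speed of sound, the example frequencies, and the function names are my own illustrative choices, not part of any clinical formula.

```python
# Rough numerical sketch of the head-shadow and growth-rate arithmetic
# described in facts 1a and 3 above. Figures from the text: 8-inch head,
# ~1 foot wavelength at 1 kHz, ~1.7 dB-per-dB growth between about
# 80 and 100 dB. Everything else here is illustrative.

SPEED_OF_SOUND = 343.0        # m/s in air at about 20 C
HEAD_DIAMETER = 8 * 0.0254    # 8 inches expressed in meters (~0.20 m)

def wavelength_m(freq_hz):
    """Wavelength of a pure tone in air, in meters."""
    return SPEED_OF_SOUND / freq_hz

for f in (250, 500, 1000, 2000, 4000, 8000):
    lam = wavelength_m(f)
    note = ("wave bends around the head (little shadow)"
            if lam > HEAD_DIAMETER else "head shadow possible (up to ~15 dB)")
    print(f"{f:>5} Hz: wavelength ~{lam:.2f} m -> {note}")

def max_threshold_shift_difference_db(exposure_diff_db, slope=1.7):
    """Fact 3: within roughly 80-100 dB, loss grows ~1.7 dB per dB of
    exposure, so a between-ear exposure difference can produce at most
    about 1.7 times that difference in threshold shift."""
    return slope * exposure_diff_db

print(max_threshold_shift_difference_db(10))   # ~17 dB, as noted in the text
```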
Myth 3: Occupational noise exposure is the most significant cause of noise
induced hearing loss in the United States.
Since the beginning of the industrial age, workers employed in noisy occupations
have suffered significant hearing loss caused by workplace noise. The first
reports, by Ramazzini in the 1700’s, described hearing loss in an Italian town
employing a number of copper workers. Later reports, mostly from western
Europe, characterized hearing losses occurring in metal smiths and boilermakers
during and after long careers pounding metals into useful shapes. In fact,
“boilermaker’s deafness” was the term coined in the early 1900’s to describe the
characteristic high frequency bilateral sensorineural hearing loss experienced by
these workers.
Efforts to regulate occupational noise began in the United States in the mid-1950s. Generally, knowledge about noise-induced hearing loss had come from
WWII military experience. Soldiers returning from combat with NIHL required
hearing assessment and inspired the military to establish hospitals specializing in
''auricular training''. Because the term ''auricular training'' seemed to imply a
training regimen designed to teach soldiers to wiggle their ears, Dr. Hallowell
Davis suggested a new term, ''audiology'', to colleagues Norton Canfield and Ray
Carhart, despite his misgivings about the mixing of Greek and Latin roots4. It
stuck.
Federal occupational noise regulations were implemented at the end of the
1960s, starting with the noise standard issued under the Walsh-Healey Public Contracts Act (1969) and
culminating with the Department of Labor’s Occupational Noise Standard and
its amendment, implemented in 1983. For a summary of federal noise
regulations, consult the NIOSH criteria document5. As a result of these
regulations, millions of Americans began to learn that too much noise could
cause hearing loss, and that NIHL could be prevented by reducing exposure and
by wearing hearing protection.
But how many Americans are exposed to what? The literature contains
references to the number of American industrial workers exposed to ''hazardous
noise'' that range from 1 million to about 30 million. Both the low number
estimates and the high number estimates were derived by extremely dubious
calculations. Probably the best summary statement comes from the National
Institute for Occupational Safety and Health5. Their data suggest that about 9
million American workers engaged in manufacturing or utilities, or about 1 in 5,
are exposed, at least once per week for 90% of the work weeks, to continuous
noise that exceeds 85 dBA.
Of those, nearly 90% work in environments that have daily, time-weighted
average noise levels of 80-95 dBA. Fewer than one million workers experience
on-the-job noise exposures higher than 95 dBA. You may be interested to know
that the lower levels of exposure are actually riskier than the higher exposures.
Because noise induced hearing loss is insidious, and because exposures below 95
dBA may be annoying but do not cause pain or discomfort, it is difficult to
induce workers to wear hearing protection consistently when they are working at
these levels. In the very noisiest jobs, those with exposures above 100 dBA,
compliance with hearing protection requirements is much easier; those noises are
very annoying, and they serve as their own warning to the worker.
But how risky are these exposures? A recently developed national standard
allows the calculation of the percent of workers who will suffer a material
impairment in hearing after a working lifetime of occupational noise exposure6.
You may be surprised at what it tells us: only about 3% of workers exposed to
daily noise levels of 85 dBA will just begin to sustain a material impairment in hearing after
40 years of daily exposure. That is, 97% of the workforce will not sustain a
significant occupational noise induced hearing loss after a working lifetime at 85
dBA. For daily exposures of 80 dBA, the risk is negligible (<1%). The risk only
becomes significant with exposures of 90 dBA (11%) and above, with the risk
increasing dramatically for a lifetime of daily exposures above 90 dBA.
Because the US occupational noise standard, as amended, requires a hearing
conservation program for industrial and manufacturing workers exposed at or
above daily levels of 85 dBA, it can be concluded that the current regulations, if
enforced, are sufficient to protect the nation’s workforce from occupational noise
induced hearing loss.
Myth 4: Occupational noise is far more hazardous than nonoccupational noise.
Although federal regulations have been in place for three decades, there are no
regulations limiting hazardous exposure to noise outside the workplace. And,
maybe there shouldn’t be. However, if our objective is to prevent noise induced
hearing loss and we don’t want to regulate or over-regulate our lives, then it
becomes incumbent upon us, as professionals, to better educate the public about
the real risks of excessive noise exposure within and outside of the workplace.
There are numerous sources of noise in the environment that have the potential
to produce noise-induced hearing loss. Stories about hearing risks from attending
rock concerts, listening to boom boxes, attending drag races, aerobic dance
exercise, watching movies, and eating in noisy restaurants abound in the popular
media. Most of these reports are myths (see Myth 5, below).
However, there is one source of recreational noise exposure that far exceeds the
others in terms of risk for producing noise induced hearing loss: hunting and
target shooting. Clinical reports of hearing loss after exposure to shooting have
appeared since the 1800s1. Reported peak sound levels
from rifles and shotguns have ranged from 132 dB SPL for small caliber rifles to
more than 172 dB SPL for high power rifles and shotguns.
Numerous studies have attempted to assess the prevalence of hunting or target
shooting in the general population. On the basis of these surveys it is estimated
that more than 50% of men in the American industrial workforce fire guns at
least occasionally. The National Rifle Association estimates that 60-65 million
Americans own more than 230 million guns. The severity of injury produced by
impulsive noise exposure and the prevalence of shooting by Americans make
gun noise America’s most serious nonoccupational noise hazard7,8.
Because of the logarithmic nature of the decibel scale, it is difficult to grasp how
much acoustic energy is in a single gunshot. The acoustic energy in a single
report from a high-power rifle or shotgun is equivalent to almost 40 hours of
continuous exposure at 90 dBA. In other words, one bullet equals one week of
hazardous occupational noise exposure. Because shells are often packaged in
boxes of 50, shooting one box of shells without hearing protection is equivalent
to working in a 90 dBA environment for a full year! An avid target shooter can
produce an entire year’s worth of hazardous occupational noise exposure in just
a few minutes on the target range.
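The equivalence claimed here follows from the equal-energy principle: acoustic energy is proportional to duration times 10^(L/10). The sketch below is a back-of-the-envelope check, not a measurement. The 90 dBA, 40-hour work week and the box of 50 shells come from the text; the effective level and duration assigned to a single rifle report are purely illustrative assumptions.

```python
# Back-of-the-envelope equal-energy check of the gunshot comparison above.
# The 90 dBA / 40-hour week and the box of 50 shells come from the text;
# the effective level and duration of one rifle report are illustrative
# assumptions, not measured values.

def relative_energy(level_db, duration_s):
    """Acoustic energy is proportional to duration times 10^(level/10)."""
    return duration_s * 10 ** (level_db / 10)

work_week = relative_energy(90.0, 40 * 3600)    # 40 hours at 90 dBA

# Hypothetical single rifle report: ~168 dB effective level for ~2.5 ms.
one_shot = relative_energy(168.0, 0.0025)

print(one_shot / work_week)         # ~1: one shot carries about a week's energy
print(50 * one_shot / work_week)    # ~50 weeks: a box of shells, roughly a work year
```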
One method of determining the role of shooting in hearing loss is to compare the
hearing in groups of individuals who engage in shooting with a matched group
who do not. Variations of such an approach have been reported in a number of
studies. These types of studies show significant detrimental effects on hearing
produced by gunfire noise, with the ear contralateral to the firearm exhibiting
thresholds worse than the ipsilateral ear by about 15 dB for high-frequency (3-8
kHz) stimuli, and up to 25-30 dB for avid shooters. For a right-handed
shooter who shoulders a rifle on the right, the left ear is pointed toward the
barrel of the rifle and is closer to the noise source than is the right ear. The right
ear points away from the noise source and is somewhat protected by the sound
shadow cast by the head.
Because shooting is so prevalent in our culture, it is the most important source of
excessive noise outside the workplace.
Myth 5: All loud leisure noise is dangerous noise.
I blame the media and gullible experts for this. There is a tendency among
nonprofessionals to consider only the level of noise exposure, and not the
duration of exposure, in considering risk. There is also a bit of sensationalizing
by the media, and even by governmental agencies, about the risks of
nonoccupational exposure. A list provided by the National Institute on Deafness
and Other Communication Disorders warns that rock concerts are ''130 dB SPL''.
The value cited by NIDCD as ‘representative’ is the highest level I have ever seen
reported for rock concert noise (see below for a better assessment). Finally, there
is confusion between the annoyance and temporary effects of a loud exposure (e.g.,
TTS, tinnitus, fullness, communication interference), which are widespread, and
the true risk of sustaining a permanent, material impairment in hearing, which is
minimal. The former is enough reason to limit or prevent the exposure; it is not
necessary to falsely invoke the latter as justification for eliminating the exposure.
Among the sources of noise outside the workplace that fall into what I consider
the ''minimal risk'' category are those associated with listening to amplified
music. A large body of research details exposures to individuals attending rock
concerts and noisy discotheques. An analysis of all the data indicated the
geometric mean of all published sound levels from rock concerts was 103.4 dBA8.
Hence, it is reasonable to conclude that attendees at rock concerts are routinely
exposed to sound levels above 100 dBA. Studies of temporary threshold shift
(TTS) after exposure to rock music have most often considered only the hearing
levels of performers; a few studies have shown TTSs in listeners attending rock
concerts. Generally, these studies show that most listeners sustain moderate TTSs
(up to 30 dB at 4 kHz), and recover within a few hours to a few days after the
exposure. The risk of sustaining a permanent hearing loss from attending rock
concerts is small, and is limited to those who frequently attend such events.
However, attendance at rock concerts remains an important contributor to
cumulative noise dose for many Americans.
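To see how a single concert figures into a daily noise dose, here is a minimal sketch using the roughly 103 dBA level cited above and the NIOSH-recommended criterion mentioned earlier (85 dBA for 8 hours with a 3-dB exchange rate). The three-hour concert duration is my own illustrative assumption, not a figure from the text.

```python
# Minimal daily-dose sketch for the concert level cited above, using the
# NIOSH-recommended criterion (85 dBA, 8 hours, 3-dB exchange rate).
# The ~103 dBA figure comes from the text; the 3-hour duration is an
# illustrative assumption.

def allowed_hours(level_dba, criterion_db=85.0, criterion_hours=8.0, exchange_db=3.0):
    """Time at a given level that reaches 100% of the daily allowable dose."""
    return criterion_hours / 2 ** ((level_dba - criterion_db) / exchange_db)

def dose_percent(level_dba, duration_hours):
    """Exposure expressed as a percentage of the daily allowable dose."""
    return 100.0 * duration_hours / allowed_hours(level_dba)

print(allowed_hours(103.0))        # ~0.125 hours, i.e. about 7.5 minutes
print(dose_percent(103.0, 3.0))    # ~2400% of the recommended daily dose
```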
Increased use and availability of personal stereos and CD players have led to
general concern about potentially hazardous exposures, particularly for younger
listeners. Whether or not listening to music through headphones causes hearing
loss depends on several variables. These include the volume level selected
by the listener, the amount of time spent listening, the pattern of listening
behavior, the susceptibility of the individual’s ear to noise damage, and other
noisy activities that contribute to the individual’s lifetime dose of noise.
Although some stereos can produce exposures above 120 dBA, fewer than 5% of
users select volume levels and listen frequently enough to risk hearing loss8. I’ve
been measuring the sound output of personal stereos for many years on a
periodic basis. I believe the personal stereo industry is responding to our
concerns about excessive exposure. Most stereos I’ve purchased recently include
a useful brochure about limiting sound exposure and the importance of our
sense of hearing. And the maximum volume of stereos has been turned down by
the manufacturers. In the late 1980’s every model I tested produced levels on an
acoustic manikin exceeding 115-120 dB SPL. More modern versions seldom
approach 100-105 dB SPL. Further, many of the newer models include an
automatic volume control adjustment, which limits exposure to about 85 dB SPL.
Summary
Despite the appearance of some skepticism about ''facts'' presented by other
experts and the media, I am personally convinced that understanding,
controlling, reducing, and preventing excessive exposure to noise, wherever it
occurs, is one of the most important responsibilities of audiologists. The key to
success, in my view, is education: education for consumers, students, industrial
hygienists, classroom architects, physicians, and others who are involved in the
process of producing, controlling, treating, or preventing excessive exposure and
its effects. Perhaps the most important objective is to educate ourselves about the
knowledge base concerning the true effects of excessive exposure to noise.
Stay tuned for the next chapter, and a few more myths. If you have questions or
comments, feel free to email me at wclark@cid.wustl.edu
References
1. Toynbee, J. (1860). Diseases of the Ear: Their Nature, Diagnosis, and
Treatment. London: Churchill.
2. Dobie, R.A. (1993). The Medico-Legal Evaluation of Hearing Loss. New York:
Van Nostrand Reinhold.
3. Clark, W. and Popelka, G. (1989). ''Hearing Levels of Railroad Trainmen''.
Laryngoscope, 99, 1151-1157.
4. Davis, H. (1990). The Memoirs of Hallowell Davis. St. Louis, CID Publications.
5. NIOSH (1998). Criteria for a Recommended Standard: Occupational Noise
Exposure. US Department of Health and Human Services, Public Health Service,
Centers for Disease Control and Prevention, National Institute for Occupational
Safety and Health.
6. ANSI (1996). Standard S3.44-1996: ''Determination of Occupational Noise
Exposure and Estimation of Noise-Induced Hearing Impairment''. American
National Standards Institute.
7. Clark, W. and Bohne, B.A. (1999). ''Effects of Noise on Hearing''. J. American
Medical Association, 281, 1658-1659.
8. Clark, W. (1991). ''Noise Exposures from Leisure Activities, a Review''. J.
Acoust. Soc. Am., 90, 175-181.