THE ANALYSIS OF INSTITUTION INCIDENT REPORTS AS PART OF THE
DEVELOPMENT OF HEARING STANDARDS FOR ENTRY-LEVEL STATE
CORRECTIONAL OFFICERS
A Project
Presented to the faculty of the Department of Psychology
California State University, Sacramento
Submitted in partial satisfaction of
the requirements for the degree of
MASTER OF ARTS
in
Psychology
(Industrial/Organizational Psychology)
by
Kelly Ann Hunley
SPRING
2012
THE ANALYSIS OF INSTITUTION INCIDENT REPORTS AS PART OF THE
DEVELOPMENT OF HEARING STANDARDS FOR ENTRY-LEVEL STATE
CORRECTIONAL OFFICERS
A Project
by
Kelly Ann Hunley
Approved by:
______________________________, Committee Chair
Lawrence Meyers, Ph.D.
______________________________, Second Reader
Lee Berrigan, Ph.D.
____________________________
Date
Student: Kelly Ann Hunley
I certify that this student has met the requirements for format contained in the University format manual, and that this project is suitable for shelving in the Library and credit is to be awarded for the project.
____________________________, Graduate Coordinator ___________________
Jianjian Qin, Ph.D.
Date
Department of Psychology
Abstract
of
THE ANALYSIS OF INSTITUTION INCIDENT REPORTS AS PART OF THE
DEVELOPMENT OF HEARING STANDARDS FOR ENTRY-LEVEL STATE
CORRECTIONAL OFFICERS
by
Kelly Ann Hunley
The job of a Correctional Officer (CO) in California State Prisons requires a high level of
sensory abilities, including the ability to hear speech and sounds. The hearing standard
for entry-level COs was outdated and in need of revision. Institution incident reports, in addition to several other resources, were used to perform the research necessary to update
this standard. The institution incident reports were analyzed for content and subsequently
used as a tool to guide where within the facility and at what times to conduct the on-site
sound measurements. In addition, the incident reports provided information about the
nature of the job, including what type of incidents occur, at what times, where within the
facility, and what type of sensory cues alert COs to respond. The incident reports also
provided evidence of the hearing-critical functions of the job and the need to hear.
______________________________, Committee Chair
Lawrence Meyers, Ph.D.
____________________________
Date
DEDICATION
I would like to dedicate this project to my family. Without their continued support of my
endeavors and aspirations this would not have been possible. I would especially like to
attribute this accomplishment to my mother Ginny. Thank you Mom for your unending
love and for providing me with a living example of what true work ethic, resilience, and
dedication really are.
ACKNOWLEDGEMENTS
I would like to acknowledge the support and expertise that both my project committee and the faculty in the Department of Psychology at CSUS have provided me. The knowledge and experience I have acquired during my time here have been truly invaluable.
Dr. Larry Meyers, thank you for your wisdom and dedication to your students. You have been and will continue to be a mentor to me. I will be forever grateful for the guidance you have provided me and attribute many of my accomplishments in this field to your willingness to aid and lead me in my graduate and professional career.
I would also like to especially thank the Corrections Standards Authority for the
opportunities they have provided me with, both as a Graduate Student Assistant and now
as a Research Program Specialist. In particular, I would like to acknowledge Debbie
Rives and Evonne Garner, for providing and facilitating an environment for growth and for always exemplifying true professionalism.
Lastly, I would like to acknowledge the work and contributions of Dr. Meyers, Dr. Soli,
Shelley Montgomery, Laurel Alvarez, and Stacey Fuller, all of whom played an
instrumental role in the Hearing Guidelines research project. Without the collaborative
efforts of every member of this multidisciplinary team, this project would not have been
possible. I am so grateful to have been both a part of this research endeavor and of this
team.
TABLE OF CONTENTS
Page
Dedication ..................................................................................................................... v
Acknowledgments....................................................................................................... vi
List of Tables ............................................................................................................ viii
Chapter
1. HISTORY OF MERIT EVALUATION ................................................................. 1
2. VALIDITY ........................................................................................................... 10
3. DESCRIPTION OF THE JOB.............................................................................. 16
California State Prison System and the Correctional Officer Job....................16
Corrections Standards Authority ...............................................................16
Standards and Training for Corrections Division ...............................17
4. PURPOSE OF THE PROJECT .............................................................................19
5. RESEARCH STRATEGY .................................................................................... 21
6. LITERATURE REVIEW ......................................................................................30
7. OBTAINING MATERIALS ................................................................................ 34
8. CATEGORIZING AND CODING OF INCIDENT REPORTS ......................... 40
9. RESULTS ............................................................................................................. 44
Tabulations ..........................................................................................44
Cross tabulations .................................................................................55
10. DISCUSSION ...................................................................................................... 61
11. CONCLUSION AND RECOMMENDATIONS .................................................63
References ................................................................................................................... 65
LIST OF TABLES
Tables
Page
1. Categories of Incident Types Used to Classify Incident Reports .......................... 41
2. Sensory Cues for Incidents .....................................................................................44
3. Incident Frequencies Across Watches ................................................................... 45
4. Frequencies of Incidents Across Locations ............................................................46
5. Frequencies of Incident Categories .........................................................................49
6. “Hearing Only” Incident Frequencies by Watch ....................................................51
7. “Hearing Only” Incident Frequencies by Location ............................................... 52
8. “Hearing Only” Incident Frequencies by Incident Category ................................. 54
9. “Hearing Only” Incidents - Incident Location by Watch ...................................... 56
10. “Hearing Only” Incidents - Incident Category by Watch ......................................58
11. “Hearing Only” Incidents - Incident Location by Category ................................. 60
Chapter 1
THE HISTORY OF MERIT EVALUATION
The origin of testing and merit evaluation for employment selection decisions can
be dated back to the year 2000 B.C.E. with its use in the Chinese civil service system. Many years prior to its establishment, Confucius (551 B.C.E.-479 B.C.E.) theorized that there was a better way to make employment decisions for Chinese citizens. Rather than assign jobs and positions based on the family to which a person belonged, which had been the previous practice, he suggested that employment selections should be based on a person's merit in performing the particular task. He concluded that testing on such merit would lead to a more virtuous, moral-based government. According to Kaplan and
Saccuzzo (2005), the Chinese civil service system utilized a relatively sophisticated
testing program more than 4000 years ago. Every third year, oral examinations were
given in China to determine work evaluations and promotion decisions.
By the time of the Han Dynasty (206 B.C.E. to 220 C.E.), test batteries were quite common. A test battery is the use of two or more tests in combination with one another in order to make selection decisions. This practice was later expanded during the
Sung Dynasty (960-1279) into an inclusive system for all important civil positions. The
Chinese civil service system employed what is commonly referred to as a hurdle system.
Hurdle systems require male candidates (females were not considered for examinations) to successfully pass one stage or test in order to advance to the next stage.
These multistage testing programs tested candidates on their knowledge or skill
in areas such as: agriculture, archery, civil law, music composition, Confucian history,
rituals, military affairs, revenue, geography, chariot riding, calligraphy, etc. By the Ming
Dynasty (1368-1644 C.E.), national multistage testing programs involved local and
regional testing centers equipped with special testing booths. Those who performed well
on the tests at the local level went on to the provincial capitals for more extensive essay
examinations. After the extensive essay examinations, those with the highest test scores
went on to the nation’s capital for a final round. Only those who passed this third set of
tests, or hurdle, were eligible for public office. It can be assumed that the Western world
learned most of what it knows about testing programs through the Chinese. This
influence ultimately led to the establishment of what came to be known as the American
Civil Service Commission in 1883 (Kaplan & Saccuzzo, 2005).
Following the establishment of the Chinese civil service system and its
subsequent influence on the Western world was the formation of the patronage system in
the U.S. The basic idea of the patronage system was to reward patrons, or supporters of political parties, by making governmental positions more accessible to the common citizen. This system came to be known as the spoils system. The name of this system was
supposedly derived from a speech by Senator William Learned Marcy in the 1830s in
which he stated, “to the victor belong the spoils” (Hoogenboom, 1968). Under the spoils
system, loyal supporters of winning political candidates could be appointed to
governmental office. Under new administrations, all persons holding an appointed office
could be removed and subsequently replaced with those who supported the political
candidates and/or political party during their campaign. The spoils system was an
accepted part of American federal government throughout the 19th century. It was highly
criticized for awarding appointments to the unqualified and non-meritorious, and for being inefficient, because even those jobs unrelated to public policy could change hands and be reappointed as a result of an election.
The spoils system hit its height during the administration of Andrew Jackson. In
1830, Andrew Jackson, the 7th president of the U.S., created the spoils system as a reform movement. He argued that a policy of “rotation” in office should be implemented in
order to democratize opportunities for public service. Under the new policy, Jackson
allowed everyday citizens who worked hard for the party to obtain government jobs
ordinarily given only to an elite class of citizens. By the 1850s the spoils system was
thoroughly ingrained as a tool of political warfare between the political parties. Calls for reform came about prior to the Civil War and gained momentum during the Reconstruction period that followed the Jackson and Ulysses S. Grant administrations. The system fared extremely poorly during the Civil War as government resources were stretched. Eradicating the system became a major crusade of the 1870s. The level of corruption had become so great early into Grant’s administration that in 1871 Congress authorized the creation of a Civil Service Commission as a way to reform the spoils system. However, not long thereafter Grant dismantled the Commission. Various attempts by presidents and political figures to establish a Civil Service Commission followed; however, none was entirely successful.
Opponents of the system demanded that federal employment be removed from
party politics and be based on merit determined through the use of competitive
examination. It was not until after the assassination of President James Garfield by a disgruntled office seeker that the spoils system was eliminated and a new civil service system was established in 1883. New York immediately adopted the system for
its state workers that same year and Massachusetts followed in 1884.
In 1883, the Pendleton Civil Service Act was passed. The Pendleton Act was
drafted during the administration of Chester Arthur and served as a response to the
assassination of President James Garfield. Named after Ohio Senator George H.
Pendleton, the act was written by Dorman Bridgeman Eaton who was a vehement
opponent of the patronage system. Eaton was later appointed the first chairman of the
U.S. Civil Service Commission. The passage of this act created a bipartisan American
Civil Service Commission through which the U.S. government was able to develop and
administer competitive examinations for certain jobs. The act also set standards for the
employment tests, requiring them to be practical in nature and related to the job. The
momentum of the testing movement in the Western World grew rapidly at that time
(Wiggins, as cited in Kaplan & Saccuzzo, 2005). The passage of the Pendleton Act in
1883 was the first step in introducing a nonpartisan merit system in the hiring of
government workers.
For the most part, the merit system had replaced the spoils system in determining governmental office appointees. This type of system established
the use of competitive civil service examinations in which only the most qualified
employees would be hired for government positions and consequently banned the
practice of appointing office holders for contributions toward party funds. The new law
called for open competitive exams for all jobs classified as civil service jobs. The law
however, only applied to federal jobs. As a result, most federal jobs were now under
civil service. It allowed the President to transfer jobs and their current holders into the
system, giving the holder a permanent job. As noted by Hoogeboom (1968), a result of
the switch to civil service was an influx of more expertise and less politics. Another
unintended result of the law was a shift of the parties to reliance on funding from
business. Since they could no longer depend on patronage hopefuls for support they had
to turn to businesses for campaign funding. The act also outlawed soliciting for
campaign donations on federal government property.
By 1900, most federal jobs were handled through civil service and the spoils
system was limited only to very senior positions. While the Pendleton Act signified the
beginning of the end for the spoils system it was not the ultimate solution to unfair
employment practices. The Civil Service System, while intended to use merit in hiring
government workers through administering competitive examinations, was not intended
to completely remove discrimination in employment practices. It was not until later that
provisions were made outlawing discrimination in employment selection and merit based
practices.
According to Kaplan and Saccuzzo (2005), the most important legal development
in the establishment of clear standards for the use of psychological tests was the passage
of the 1964 Civil Rights Act. Passed during the administration of President Lyndon B.
Johnson, the Civil Rights Act carried forward the mission that the Civil Service
Commission was to accomplish. The Act introduced several new provisions, including voting rights and the prohibition of discrimination in all public arenas, including facilities, schools, educational opportunities, legal suits, and federal assistance. In terms of fairness in employment practices, however, Title VII of the Civil Rights Act was the most significant.
Title VII and its following amendments dealt with discrimination in employee
selection practices. It made it illegal for employers to discriminate on the basis of race,
color, sex, national origin, or religion. Discrimination, as detailed by the act, included
refusing to hire, discharging, limiting employment opportunities, excluding, or refusing
to refer an individual based on their race, color, sex, national origin, or religion. Title VII
applies to businesses with 15 or more employees, but does not apply to an employer with respect to the employment of aliens, employment with a religious organization, or employment with an educational institution when the work involved requires carrying on the activities of the particular religious organization or educational institution. Title VII details and clearly defines the relevant legal
terms including: person, employer, employment agency, labor organization, commerce,
and labor affecting commerce. The Title also clearly explains what is considered a basis
for discrimination through specific definitions of sex, state, religion, etc. Further, it
defines, in respect to their legal meanings, the terms complaining party, demonstrates,
and respondent. Other provisions of Title VII protect individuals from discrimination because of their association with another individual of a particular race, color, sex, religion, or national origin. Further protections prohibit an employer from discriminating against a person because of his or her interracial association with another. Title VII also protects employees from retaliation when they oppose such unlawful discrimination.
Title VII of the Civil Rights Act of 1964 and its subsequent amendments created
the Equal Employment Opportunity Commission (EEOC). The EEOC, as determined by Title VII, is a commission consisting of five members, with no more than three from the same political party. All commission members are to be appointed by the President
with the consent of the Senate for terms lasting five years. The commission is responsible
for the prevention of any employer engaging in unlawful employment practices, as
detailed by Title VII. If such a claim is made against an employer, the EEOC is
responsible for investigating the charge and making subsequent determinations regarding the charge within a specified time period.
In 1967, Congress added a provision outlawing the discrimination of employees
based on age and subsequently passed the Age Discrimination in Employment Act
(ADEA). Under the ADEA, practices involving refusing to hire or discharging, limiting
or segregating, reducing the wage rate, or failing to refer employment for individuals
based on age were deemed unlawful discrimination and therefore illegal. The
prohibitions covered by the ADEA applied to individuals at least 40 years of age. In
1970, the EEOC wrote guidelines for fair employment practices in which fair employee-selection procedures were defined. The guidelines serve as regulations that all employers are required to follow. By 1978, the guidelines had been revised and simplified into what was named the Uniform Guidelines on Employee Selection
Procedures. The Uniform Guidelines were adopted by the EEOC, the Civil Service
Commission, and the Departments of Justice, Labor, and the Treasury.
The Uniform Guidelines are a set of guidelines, originally released by the EEOC
in 1970, to define fair employee-selection procedures. In 1978, the guidelines were
revised and simplified, and published as the Uniform Guidelines on Employee Selection
Procedures (Kaplan & Saccuzzo, 2005). They were adopted by the EEOC, the Civil
Service Commission and the Departments of Justice, Labor, and the Treasury. The
guidelines apply to most public employment and institutions that receive government
funding and state that an employer cannot discriminate on the basis of race, color, sex,
national origin, or religion. The Guidelines define what groups of people constitute a
“Protected class” and define what constitutes adverse impact through the use of the Four-Fifths Rule, or 80% Rule of Thumb. The Guidelines cover what the requirements are for
selection practices to be considered valid and differentiate between what is considered
“Disparate Impact” versus “Disparate Treatment” by employers and how to identify it.
The obligations pertaining to retesting applicants are presented as well as the obligations
applying to Affirmative Action programs. Also covered in the Guidelines are
instructions on how to properly document adverse impact and the various strategies for
assessing validity in selection procedures.
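To illustrate the Four-Fifths Rule mentioned above, the following minimal Python sketch compares two selection rates and flags potential adverse impact. The group labels and counts are hypothetical placeholders and are not drawn from the Guidelines or from this project.

def adverse_impact_ratio(selected_a, applicants_a, selected_b, applicants_b):
    """Compare the selection rate of group A against group B using the
    Four-Fifths (80%) Rule of Thumb. Returns the ratio of the two rates."""
    rate_a = selected_a / applicants_a
    rate_b = selected_b / applicants_b
    return rate_a / rate_b

# Hypothetical numbers for illustration only.
ratio = adverse_impact_ratio(selected_a=30, applicants_a=100,   # protected group
                             selected_b=60, applicants_b=120)   # highest-rate group
if ratio < 0.80:
    print(f"Ratio {ratio:.2f}: selection rate is below four-fifths of the "
          "highest group's rate, suggesting possible adverse impact.")
else:
    print(f"Ratio {ratio:.2f}: no adverse impact indicated by the 80% rule alone.")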
Ultimately, the Uniform Guidelines were developed in order to set clear standards
on how to identify discrimination and subsequently how to deal with it. According to
Kaplan & Saccuzzo (2005), following the Civil Rights Act of 1964, many employers
were still not following fair employment practices. Employers were failing to recognize
the legal requirements of equal protection; therefore, the EEOC released the specific set of
guidelines in order to enforce the law. Section 1.A of the Uniform Guidelines presents
the Federal government’s recognition of the pressing need for a uniform set of principles
on the use of tests and other selection procedures. As discussed in Section 1.B of the
Guidelines, the purpose in the development of the Guidelines was to incorporate a single
set of principles to assist employers, labor organizations, employment agencies, and
licensing and certification boards to comply with requirements of Federal law prohibiting
employment practices which discriminate on grounds of race, color, religion, sex, and
national origin. The Guidelines were designed to provide a framework for determining
the proper use of tests and other selection procedures. The Guidelines were not designed
to require an employer to conduct validity studies of selection procedures with no adverse
impact, but rather encourage employers to use valid selection procedures which are based
on merit.
In 1985 the American Educational Research Association, the American
Psychological Association and the National Council on Measurement in Education
released the Standards for Educational and Psychological Testing. The Standards, which
were later revised in 1999, were developed to promote the correct and ethical use of tests
and provide standards for evaluating the quality of testing practices.
Chapter 2
VALIDITY
As mentioned, both the Guidelines and the Standards are intended to provide
methods for ensuring and evaluating the quality of a testing and/or selection method. The
Standards provide the five sources of validity evidence by which the quality of a selection method or instrument is judged. The Standards define validity as follows:
Validity refers to the degree to which evidence and theory
support the interpretations of test scores entailed by the
proposed uses of tests.
According to the Standards, validity is a unitary concept: the degree to which all the
accumulated evidence supports the intended interpretation of test scores for the proposed
purpose. While a unitary concept, there are several types of validity evidence that might
be used in evaluating a proposed interpretation of test scores for particular purposes. The
five sources of validity evidence presented in the Standards are as follows: evidence
based on test content, evidence based on response process, evidence based on internal
structure, evidence based on relation of test to other variables and evidence based on
consequences of testing.
Evidence based on test content is determined through analysis of the relationship
between a test’s content and the construct the test is designed to measure. This type of
evidence is commonly referred to as content validity. To present evidence based on test
content the test must be shown to be a representative sample of the important portions of
the content domain. Determining the content domain may involve systematic
observations of behavior in a job along with judgments of Subject Matter Experts (SMEs)
used to assess the relative importance, criticality, and/or frequency of the various tasks.
According to the Standards, evidence about content can be used, in part, to address
questions about differences in the meaning or interpretation of test scores across relevant
subgroups of examinees. Once the test is developed, qualified experts in the field can
then judge the representativeness of the chosen set of items and subsequently the validity
evidence based on test content.
Evidence based on response processes concerns the fit between the construct
intended to be measured and the nature of performance or responses by test takers. That
is, responses should be assessing the test taker’s knowledge or degree of possession of
the construct and not be influenced by another construct or response pattern. This type of
evidence comes from analyses of individual responses and questioning test takers about
their performance strategies or responses to particular items. The Standards add that
validation evidence based on response processes may include empirical studies of how
observers or judges record and evaluate data along with the analyses of the
appropriateness of these processes to the intended interpretation or construct definition.
Evidence based on internal structure concerns the degree to which the relationships among test items and test components conform to the construct the test is intended to measure. Internal structure refers to the statistical features of items on the test. It
includes information such as the item characteristic curves, inter-item correlations, and
item-total correlations. According to the Standards the extent to which item
interrelationships bear out the presumptions of the framework of the test would be
relevant to validity. The framework for a test may imply a single dimension of behavior
or rather that several components intended to be similar are in fact distinctly different,
implying multidimensionality. This type of evidence is determined by looking at the
factor structure or performing a factor analysis of the test. Items selected to be combined
together to form a scale or subscale on a test should be rationally selected, with empirical
evidence supporting their inclusion. A factor analysis can identify which items share
common qualities meriting their use for measuring an underlying dimension.
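As a concrete illustration of the internal-structure evidence just described, the short Python sketch below computes corrected item-total correlations for a small set of item scores. The data and the use of NumPy are assumptions introduced for illustration only; they are not analyses performed in this project.

import numpy as np

# Hypothetical responses: rows are examinees, columns are items on one scale.
scores = np.array([
    [4, 5, 4, 2],
    [2, 1, 2, 3],
    [5, 4, 5, 1],
    [3, 3, 4, 2],
    [1, 2, 1, 4],
])

for item in range(scores.shape[1]):
    # Corrected item-total correlation: correlate each item with the total of the
    # remaining items, so the item does not inflate its own correlation.
    rest_total = scores.sum(axis=1) - scores[:, item]
    r = np.corrcoef(scores[:, item], rest_total)[0, 1]
    print(f"Item {item + 1}: corrected item-total r = {r:.2f}")

A low or negative value for an item in output like this would flag an item that may not belong with the others on the scale.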
Evidence based on relations to other variables refers to the relationship of test
scores to variables external to the test. The Standards note such external variables to
include measures of some criteria that the test is expected to predict, as well as
relationships to other tests. In employment arenas, performance criteria may serve as
relatable measures. As explicated in the Standards, there are three ways to assess such relationships to other variables: convergent and discriminant evidence, test-criterion relationships, and validity generalization.
Convergent evidence concerns relationships between test scores and other
measures intended to assess similar constructs, whereas discriminant evidence concerns
relationships between test scores and measures of supposedly different constructs. These
relationships can be quantified using stronger versus weaker correlations to the construct
being measured. For example, scores on a test of mathematic reasoning, if considered to
exhibit convergent evidence, would be expected to relate closely to other measures of
mathematic reasoning. Conversely, if exhibiting discriminant evidence, scores on the
same test would be expected to relate less closely to measures of different skills, such as
reading comprehension. Test-criterion relationships concern the degree to which test
scores accurately predict performance on the criterion. As described in the Standards, the
criterion is a measure of some attribute or outcome that is of primary interest. This
relationship has been referred to as criterion validity and encompasses two designs or
methods for its evaluation, predictive and concurrent studies. A predictive study indicates
how accurately test data can predict criterion scores that are obtained at a later time while
a concurrent study assesses predictor and criterion information at the same time. When
assessing the differences in test-criterion relationships, one must take into account that
measurement error may be present, especially when group means differ unnecessarily,
implying differences in score meaning. Validity generalization addresses the extent to which evidence of validity based on test-criterion relations can be generalized to different settings without additional validity studies. According to the Standards, when a test is used to predict the same or similar criteria at different times or in different places, it is
typically found that observed test-criterion relationships or correlations differ to a
considerable degree. Within the issue of validity generalization exist several different procedures dealing with this concern: meta-analysis, transportability, and synthetic validity.
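As a hedged numerical sketch of the convergent and discriminant evidence discussed above, the Python snippet below correlates a hypothetical mathematical-reasoning test with a second mathematics measure (convergent) and with a reading-comprehension measure (discriminant). All scores are invented for illustration.

import numpy as np

# Hypothetical scores for ten examinees.
math_test = np.array([55, 62, 47, 70, 58, 66, 51, 74, 60, 49])
math_other = np.array([57, 65, 45, 72, 55, 68, 50, 76, 63, 47])   # similar construct
reading_comp = np.array([61, 50, 58, 55, 67, 52, 60, 54, 66, 59])  # different construct

# Convergent evidence: strong correlation with a measure of the same construct.
convergent_r = np.corrcoef(math_test, math_other)[0, 1]
# Discriminant evidence: weaker correlation with a measure of a different construct.
discriminant_r = np.corrcoef(math_test, reading_comp)[0, 1]

print(f"Convergent r = {convergent_r:.2f}, discriminant r = {discriminant_r:.2f}")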
Meta-analysis attempts to compare the results of sets of separately published
research studies ranging from qualitative attempts using reviews of literature to
quantitative efforts that compare studies on the basis of various statistical information.
The Standards note that such statistical summaries of past validation studies in similar
situations may be useful in estimating test-criterion relationships in a new situation. The
concept of transportability is concerned with tying validity evidence for a selection
procedure’s original purpose to that of a different setting. In order for this “transport” of
evidence to be justified, there must be sufficient documentation that the two jobs, or
settings, share the same set of important tasks, knowledge, skills and other abilities (KSAs), and context, and that the content and nature of the selection procedure work for both settings. In addition, it must be shown that the candidate groups are comparable and that the new setting can be appropriately assessed/scored. Again, such transportability evidence
must be provided in order for one selection method to be used in or “generalized” to
another setting. Synthetic validity refers to the notion that validity evidence is tied to the
task groups representing a particular job. After a job analysis has revealed the tasks or
task groups that are represented in a particular job and that the tasks or task groups have
been tied to a particular job, then the KSAs necessary for satisfactory performance on the
job can be identified.
Evidence based on consequences of testing concerns both the intended and
unintended consequences of test use in a variety of settings (e.g., employment decisions,
the placement of children in special education classes, narrowing of school’s curriculum,
etc.) and how it can inform validity decisions. According to the Standards, although
information about the consequences of testing may influence decisions about test use,
such consequences do not in and of themselves detract from the validity of intended test
interpretations. Rather, the question of whether evidence for validity does or does not exist in light of testing consequences depends on a more searching inquiry into the sources of the consequences. Evidence about consequences may be directly relevant to validity
when it can be traced to a source of invalidity such as construct underrepresentation or
construct-irrelevant components in the selection procedure.
Tests and selection procedures are used in the hope that some benefit may result
from their associated scores. And as stated by the Standards, a fundamental purpose of
validation is to indicate whether these specific benefits are likely to be realized.
Ultimately, the validity of an intended interpretation of test scores relies on all of the
available and aforementioned evidence relevant to the quality of the testing/selection
instrument.
In a selection process there can be multiple methods of evaluation, all of which
need to be job related. Such methods of evaluation may include medical, physical, and psychological evaluations, as well as a selection exam. This particular
project focuses on a specific element in the scope of a larger selection process which
resulted in the establishment of a new hearing standard for entry-level Correctional
Officers (COs).
Chapter 3
DESCRIPTION OF THE JOB
California State Prison System and the Correctional Officer Job
The California State Prison System is the largest in the United States, which is in turn the largest in the world in terms of inmate population (Davis, 2006; Stephan, 2008). The system consists of 33 prisons, throughout which approximately 25,000 COs are employed and, as of August 2009, supervised the incarceration of over 166,000 inmates (Stephan, 2008). According to the California State Correctional Officer Job Description,
the CO class disarms, subdues and applies restraints to an inmate; runs to the scene of an
emergency; supervises the conduct of inmates in housing units, during meals and bathing,
at recreation, in classrooms, and on work and other assignments, and escorts them to and
from activities; stands watch on an armed post or patrols grounds, quarters, perimeter
security walls and fences, or shops. On a daily basis, COs are faced with compromising situations in which they risk their own personal safety, the safety of other institutional staff, and that of the inmates. Such safety issues are significant for the position of a CO. COs must prevent escapes, control fights and riots, search for dangerous contraband, etc. Personal safety risks are considerable, sometimes even fatal.
Description of the Corrections Standards Authority
The Corrections Standards Authority (CSA), formerly known as the Board of
Corrections, is a division of the California Department of Corrections and Rehabilitation
(CDCR). The Board of Corrections was established in 1944 as part of the reorganization
of the state prison system. On July 1, 2005, the CSA was established within the CDCR
replacing the Board of Corrections. The CSA works with city and county agencies to
develop and maintain standards for the construction and operation of local jails and
juvenile detention facilities and for the selection and training of state and local
correctional employees. The CSA performs a variety of other activities including
inspecting adult and juvenile detention facilities; administering and implementing grants
towards various juvenile and adult programs; responding to correctional facility
construction needs; and conducting special studies relative to the public safety of
California’s communities.
Standards and Training for Corrections Division
The role of the CSA in developing selection standards is set forth in the California
Penal Code, specifically Sections 13601-13603. These laws mandate the CSA to
develop, approve and monitor selection and training standards for state Correctional
Officers who work in the state prison system. The CSA consists of four divisions, each
formed to perform a specific group of responsibilities. The Standards and Training for
Corrections (STC) division, established in 1980 and the division within which this project
was developed, works with both state and local corrections and public and private
training providers in developing and administering programs designed to ensure the
competency of state and local correctional employees. This division performs a variety of
functions with the primary focus being on developing, monitoring, and updating the
selection and training standards for both state and local corrections agencies. STC
conducts projects such as: job analyses, test construction and validation, selection exam
portability studies and development and administration of job-related core training
curricula. Additionally, this division has the authority to administer training course
certifications and certify training instructors statewide as well as provide training to
corrections agencies in the areas of instructor development, curriculum design, training
management and other areas as determined by agencies’ needs. As mentioned, a primary responsibility of the STC division is to develop and update minimum selection and training standards; this also includes the establishment and maintenance of guidelines for medical, vision, and hearing screening of COs, the development of which serves as the focus of this project.
The STC division of the CSA is responsible for setting selection and training
standards for entry-level State COs. As part of the selection standards that STC develops,
COs must meet medical and physical requirements in order to be considered for the job.
Such requirements include physical agility as well as minimum vision and hearing capabilities. More specifically, and serving as the focus of this project, in order to establish employment with the department and be deemed medically and physically fit, COs must demonstrate the ability to hear at a level necessary to successfully perform the duties of the job.
Chapter 4
PURPOSE OF THE PROJECT
The impetus for this project was that the State hearing standards had not been updated in a considerable amount of time, the last revision having occurred in 1992. In addition, testing measures have changed with the influx of new auditory screening methods. Given both the safety
issues COs face, which are both important and considerable, and the need for updated
standards, the task of developing a new hearing standard was deemed necessary.
The overall goal of the research was to identify the hearing-critical job functions
performed by COs and to use this information to define valid hearing screening measures
that can be used to evaluate applicants for jobs as COs. The term “hearing-critical job
functions” in this context is defined as those functions where hearing is absolutely
essential, and no other sense modality or behavioral adaptation can be used to supplement
hearing to perform the function (Giguere et al., 2008; Laroche et al., 2003; Soli, 2003).
For these screening measures to be valid there must be evidence documenting the
necessity and importance of each hearing-critical job function. There also must be
additional evidence as to which functional hearing abilities (e.g., understanding of
speech, detection and recognition of sounds, localization of sounds) are most important in
the performance of these essential hearing-critical job functions. Further evidence about
the impact of the sound environment, especially background noise levels, on the
performance of hearing-critical job functions must also be demonstrated. Therefore, the
foundation of this research was to determine the hearing-critical job functions that COs
need to perform their job effectively and to protect the health and safety of those around
them, where and when these job functions are performed, and the background noise conditions under which these job functions are performed in the California State CO job environment.
The purpose of this larger project is to describe and document the process that
was undertaken to establish and validate an objective hearing screening standard for
individuals who apply for jobs as COs in the State of California. The research strategy
that was employed was designed as a sequence of steps, with each step intended to
establish a foundation of evidence for the next step. The goal of this approach was to
establish a chain of evidence linking the hearing screening measures and screening
criteria to the functional hearing abilities needed to perform the essential hearing-critical
job functions of the CO job.
Chapter 5
RESEARCH STRATEGY
The first step taken in beginning the research process was to review existing job
analyses research for the CO position. This provided the research team with a context of
the tasks a CO performs and a foundation to determine the specific hearing-critical job
functions. The next step involved collecting, reading and analyzing institution incident
reports based on their content. The incident reports provided information about the
hearing-critical job functions used on the job. The reports were categorized and analyzed
based on various elements, including the type of incident, time, location, sensory function
involved in the detection of the incident (vision, hearing or both), etc. These analyses
provided evidence that detecting and responding to incidents are hearing-critical job
functions and that speech communication is an important, and at times essential,
functional hearing ability used on the job. This step in the research strategy is the focus of this particular project and will be presented in more detail in the following chapters.
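As a sketch of how a single incident report might be coded for the analyses just described, the Python fragment below defines a simple record with the coding fields mentioned (incident type, watch, location, and sensory cue). The specific category labels are hypothetical placeholders; the actual categories used in this project are presented in later chapters.

from dataclasses import dataclass
from enum import Enum

class SensoryCue(Enum):
    HEARING = "hearing"
    VISION = "vision"
    BOTH = "both"

@dataclass
class CodedIncident:
    """One coded incident report (field values below are illustrative only)."""
    incident_type: str   # e.g., "fight", "alarm response" (placeholder labels)
    watch: str           # e.g., "First Watch", "Second Watch", "Third Watch"
    location: str        # e.g., "housing unit", "dining hall"
    cue: SensoryCue      # sensory function that alerted the CO to the incident

example = CodedIncident(
    incident_type="fight",
    watch="Second Watch",
    location="housing unit",
    cue=SensoryCue.HEARING,
)
print(example)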
The next step in the research strategy consisted of interviews with COs who
served as Subject Matter Experts (SMEs) for the purposes of identifying further hearing-critical job functions. Several SME groups were held, consisting of a sample of SMEs
from various prisons throughout the state. The groups were structured as panel style
interviews where the SMEs were asked to describe hearing-related activities that take
place during both normal days and during times when incidents or non-routine events
have occurred. The COs were asked to provide specific information with regard to their
need to hear sounds and communicate with speech. The information that was collected
was input into data style sheets where it was later tabulated and analyzed. The interviews
provided in depth information about the hearing abilities required on the job as well as
the times and locations within the facility where these hearing-critical job functions take
place. This information served as a foundation in developing the sampling plan by which
institution visits were planned in order to take sound recordings.
The next step in the research strategy involved synthesizing the information from
the two preceding steps, the incident reports and SME interview data. The information culled from this step reinforced the necessity and importance of speech
communication for effective performance of the CO job, for both routine and non-routine
activities. The data also indicated that much of this speech communication takes place in
the presence of high levels of background noise. It was determined that a CO needs the
ability to understand speech communication in both quiet and noisy environments, as
well as hear non-speech sounds in such environments in order to successfully perform the
job. This consideration guided the subsequent steps in the research strategy.
Using the information collected from the incident report analysis and SME
interviews on times, locations, and types of hearing-critical job functions, a sample of prison facilities was selected for visits to perform on-site observations and make noise measurements. The sample was representative of all of the prisons in terms of
geographical region of the state, security level, composition (male or female), and
physical plant design. A total of seven facilities were chosen to be included in the sample
for on-site observations and noise recordings. Upon selecting the sample of facilities to
visit, the next step involved selecting the locations and times within each prison where
noise levels should be measured. A list was developed prioritizing the times and locations
where hearing-critical functions are used. Prior to taking noise measurements, COs and
their supervisors were interviewed so that additional information regarding the facility,
times, locations and the CO job could be provided.
The next step was the actual recording of the noise environments within the
prison facilities. The research team made high quality calibrated digital sound recordings
at each of the sampled prisons at various times and locations. The goal of the
measurements was to provide evidence of the effect of noise on speech communication in
the prison environment. Once all of the noise measurements had been recorded, the data
was analyzed according to the procedures and methods described in the recent
publications describing and validating the Extended Speech Intelligibility Index (ESII,
Rhebergen & Versfeld, 2005; Rhebergen et al., 2006, 2008). The ESII is based on the
calculations and parameters from the Speech Intelligibility Index (SII) standard
(American National Standards Institute, 2007). These standardized calculations, when
applied to the ESII, make it possible to estimate the likelihood of accurate and effective
speech communication in each background noise environment for otologically normal
individuals. The results of these analyses provide objective evidence about the likelihood
that otologically normal COs can perform hearing-critical job functions that require
accurate and effective speech communication in each of the selected noise environments.
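The full SII and ESII procedures are defined in the standards and publications cited above. As a rough orientation only, the Python sketch below shows the core idea behind an intelligibility index: band-by-band audibility of speech relative to noise, weighted by band importance. The band levels and importance weights here are hypothetical, and the sketch omits the many corrections (level distortion, hearing thresholds, and the fluctuating-noise time slices used by the ESII) that the actual calculations require.

def simplified_intelligibility_index(speech_db, noise_db, importance):
    """Very simplified SII-style index: clamp each band's speech-to-noise ratio
    to [-15, +15] dB, map it to audibility in [0, 1], and take the
    importance-weighted sum. Illustrative only; not the ANSI S3.5 procedure."""
    index = 0.0
    for s, n, w in zip(speech_db, noise_db, importance):
        snr = max(-15.0, min(15.0, s - n))   # clamp the band SNR
        audibility = (snr + 15.0) / 30.0     # map [-15, +15] dB onto [0, 1]
        index += w * audibility
    return index

# Hypothetical octave-band levels (dB) and importance weights summing to 1.0.
speech = [60, 62, 58, 55, 50]
noise = [55, 57, 60, 52, 40]
weights = [0.15, 0.25, 0.30, 0.20, 0.10]

print(f"Simplified index: {simplified_intelligibility_index(speech, noise, weights):.2f}")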
Once the noise recordings from the sampled environments had been analyzed the
likelihood of effective speech communication within these environments was estimated.
This was the next step in the research strategy. Though the analyses of the noise
measurements provided objective evidence about the ability of a normal hearing CO to
effectively communicate with speech in each of the prison noise environments, the
proportion of time a CO was to be in each environment and the importance of the
hearing-critical job functions performed in each of those environments needed to be
determined. The evidence provided from the noise recordings enabled a weighted
composite to be calculated based on the ESII calculation for each environment
representing the overall likelihood of effective speech communication throughout the
entire typical day of an otologically normal CO. The weighting of each noise
environment was based on the analyses of the incident reports and the structured
interviews. Given the overall likelihood of effective speech communication throughout
the entire day for an otologically normal CO, it was possible to introduce the effects of
having a hearing impairment into the ESII calculations. Research staff then made
quantitative estimates of the impact of this impairment on the likelihood of effective
speech communication during the performance of hearing-critical job functions by COs.
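A minimal sketch of the weighting idea described in this step: each environment's estimated likelihood of effective speech communication is combined using weights reflecting how much of a typical day a CO spends there. All numbers below are invented placeholders, not values from the project's analyses, and the simple time-based weights stand in for the report- and interview-based weighting actually used.

# Hypothetical per-environment estimates (likelihood of effective speech
# communication for an otologically normal listener) and time-of-day weights.
environments = {
    "housing unit": {"likelihood": 0.92, "weight": 0.45},
    "dining hall": {"likelihood": 0.70, "weight": 0.20},
    "recreation yard": {"likelihood": 0.85, "weight": 0.25},
    "control booth": {"likelihood": 0.98, "weight": 0.10},
}

total_weight = sum(env["weight"] for env in environments.values())
composite = sum(env["likelihood"] * env["weight"]
                for env in environments.values()) / total_weight

print(f"Weighted composite likelihood across the day: {composite:.2f}")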
The process for obtaining these estimates of the impact of hearing impairment was as follows. While the weighted combination of ESII analyses provides the likelihood of
effective communication for normal hearing individuals, the hearing standard must take
into account and establish the amount of impairment that is acceptable for those CO
applicants that do not have normal hearing. Therefore it was necessary to determine how
hearing impairment affects speech communication. After making these determinations,
measures of hearing impairment could be used in the establishment of objective hearing screening criteria. Two types or aspects of hearing impairment were identified as
affecting functional hearing ability, both of which were explored as they relate to an
applicant’s ability for effective speech communication on the CO job. One aspect of
hearing impairment that affects functional hearing ability is audibility, the other is
distortion. Audibility refers to the inability to hear soft speech or other sounds. Both the
SII and the ESII standards provide methods for using reduced audibility into estimates of
the likelihood of effective speech communication. These estimates were used to
determine whether pure-tone audiometry, which measures audibility, is an accurate
predictor of the likelihood of accurate and effective speech communication on the CO
job. Distortion refers to the requirement for a larger signal-to-noise ratio (SNR) to
understand speech when both the speech and the background noise are audible. Neither
the SII nor the ESII standards provide a clear method for incorporating distortion as an
aspect of impaired hearing ability into the screening process. There have been, however, several recent publications (Bentler, 2000; Bilger, Nuetzel, Rabinowitz, & Rzeczkowski,
1984; Cox, Alexander, Gilmore, & Pusakulich, 1988; Kalikow, Stevens, & Elliott, 1977;
Killion & Niquette, 2000; Nilsson, Soli, & Sullivan, 1994, as cited in Corrections
Standards Authority, 2011) on the speech recognition threshold (SRT) in noise expressed
as the SNR at the threshold of speech understanding (speech intelligibility) that can be
used to determine the effects of distortion on the ability to understand speech in noisy
environments. Analyses of both audibility and distortion as an impairment for effective
speech communication indicated that the distortion aspect of hearing impairment is more
significant than audibility in screening the functional hearing abilities of CO applicants.
This evidence was then used to establish the screening materials, screening protocol, and
screening criteria using measures of the speech reception threshold in quiet and in noise.
The following step was to specify the screening materials for the hearing
standard. Evidence collected from the previous step indicated that each aspect of the
hearing standard, screening materials, screening protocol and criteria should be focused
on the ability to communicate with speech in both quiet and noisy environments, such as
the SRT. The SRT in quiet evaluates the effects of hearing impairment on audibility. The
SRT in audible noise assesses the effects of hearing impairment on distortion. Given the
fact that people hear with both ears, it was determined that measures of SRT should be
obtained using binaural presentation of speech and noise. When the SRT is measured
binaurally in noise, the spatial location of the speech and the noise have a substantial
effect on the ability to understand speech in noise (Nilsson, Soli, & Sullivan, 1994;
Plomp & Mimpen, 1981, as cited in Corrections Standards Authority, 2011). When the
speech and noise are spatially separated, as often happens in daily life, the SRT can be
much lower and, consequently, speech intelligibility is substantially better. Such effects
imply that to measure SRT in noise, the speech and noise measures must be separated
spatially. To make creating these conditions feasible, SRT measures should be taken
under headphones. In addition, the norms for how well otologically normal individuals
perform on each SRT measure must be known. Hearing impairment can reduce
audibility, causing SRTs in quiet to be poorer than the norms. Hearing impairment can
also increase distortion, causing SRTs in noise to be poorer than the norms. Thus,
elevation of SRTs provides evidence of reduced ability to communicate with speech in
quiet and/or in noise. The Hearing In Noise Test (HINT) (Nilsson, Soli, & Sullivan,
1994; Soli & Wong, 2008; Vermiglio, 2008; as cited in Corrections Standards Authority,
2011) was chosen as the screening material as it satisfies all of the aforementioned
screening requirements and is also currently employed as the screening instrument for a
variety of public safety jobs with hearing-critical job functions.
Upon selection of the HINT as the screening instrument, two different screening criteria were specified. The first criterion is based
on the SRT in quiet as measured by the HINT. This specification is to ensure that CO
applicants with reduced or limited audibility caused by hearing impairment can hear and
understand soft or whispered speech. The second specification is based on a composite of
three SRTs measured in noise. This specification is to ensure that CO applicants with
increased distortion caused by hearing impairment can hear and understand speech in
noise environments where hearing-critical job functions are routinely performed on the
CO job.
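To make the two criteria concrete, the sketch below shows how an applicant's HINT results might be checked against them: an SRT-in-quiet cutoff (audibility) and a cutoff on a composite of three SRTs measured in noise (distortion). The cutoff values and the simple averaging used for the composite are hypothetical placeholders; the actual cutoffs and composite formula are those specified in the standard itself.

def passes_hearing_screen(srt_quiet_db, srt_noise_dbs,
                          quiet_cutoff_db=30.0, noise_composite_cutoff_db=0.0):
    """Check the two screening criteria described in the text.
    Lower SRTs indicate better performance. Cutoff values are hypothetical."""
    # Criterion 1: SRT in quiet must not exceed the quiet cutoff (audibility).
    quiet_ok = srt_quiet_db <= quiet_cutoff_db
    # Criterion 2: composite (here, a simple mean) of three SRTs in noise
    # must not exceed the noise cutoff (distortion).
    composite = sum(srt_noise_dbs) / len(srt_noise_dbs)
    noise_ok = composite <= noise_composite_cutoff_db
    return quiet_ok and noise_ok

# Hypothetical applicant results: SRT in quiet (dB) and three SRTs in noise (dB SNR).
print(passes_hearing_screen(srt_quiet_db=24.0, srt_noise_dbs=[-2.5, -1.8, -3.0]))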
The next step in the research strategy was to evaluate the use of auditory
prostheses by COs. CO applicants with a hearing impairment may require up to two
auditory prostheses, such as hearing aids, to meet the screening requirement. If an
applicant possessing a hearing impairment is able to meet the criteria while wearing an
auditory prosthesis, they will be required to wear and use their prosthesis at all times on
the job. Making such a requirement led to other issues, particularly if there may be
aspects of the job that could not be performed by a CO while wearing an auditory
prosthesis. The research team conducted several additional interviews with SMEs to
explore this issue. The research provided evidence that COs are at times expected to don protective equipment, including helmets, which may interfere with the use of auditory prostheses. Given that a CO is required to wear an auditory prosthesis at all times, including while wearing a helmet, many of which cover the entire ear, there must be evidence that the prosthesis functions properly under the helmet. This led to the
inclusion of an additional screening requirement within the protocol, that being the need
to demonstrate proper functioning of an auditory prosthesis while wearing a helmet. The
research team surveyed experts from the hearing aid industry to determine whether hearing aids would in fact function properly while worn under a helmet. The experts
provided feedback consistent with previous responses in that proper function could not be
ensured under such conditions. This evidence provided the rationale for the inclusion of
additional measures in the screening protocol for applicants who must use auditory
prostheses to meet the hearing standard.
The final step in the research strategy was to adapt the screening protocol for
individuals with auditory prostheses. Given the evidence that protective headgear such as
helmets can create interference for proper functioning of hearing aids, it was necessary to
adapt the screening protocol to ensure that said prostheses would be able to function properly when worn under a helmet. Therefore, the screening protocol was supplemented
to specify a sound field administration of the HINT as well as direct observation of the
functionality of the applicant’s prostheses when worn under a helmet.
While a comprehensive research strategy was employed in developing this
standard, the remainder of this project will focus primarily on the role of the analysis of institution incident reports as part of the larger project. The resulting hearing
screening standard is based on systematic analyses of the job performed by COs,
specifically those aspects of the job that require a CO to hear. As part of a comprehensive
research strategy used to develop and update these standards, the research team sought to
determine what types of hearing-critical job functions take place on the CO job,
particularly when COs need to respond to incidents. To supplement the information taken
from CSA’s previous task analysis research of the CO job, the research team examined
written reports of incidents to which COs responded. The incident reports described
unlawful or unusual activities or events, often involving an emergency situation. The
purpose of examining these reports was to determine the hearing-critical functions
associated with responding to these incidents. From there, the content information was
extracted and analyzed as part of the determination of what and how well a CO needs to
hear on the job.
Chapter 6
LITERATURE REVIEW
In order to formulate objective data, analyze information, and make reasonable
assumptions about the hearing-critical tasks performed on the CO job, a content analysis
of institution incident reports was performed. Content analysis is a social research
method appropriate for studying human communications as well as other aspects of social
behavior (Babbie, 1995). The practice of content analysis can extend to a wide variety of
professions, including social services, library sciences, civil service positions, etc.
Content analysis, according to Kerlinger and Lee (2000), is an objective and
quantitative method for assigning verbal and other types of data to categories. Content
analysis has also been described as a research procedure for obtaining information from
written documents in which both descriptive and quantitative information can be obtained
(Tripodi & Epstein, 1980). Essentially, this involves making inferences about data
(usually text) by systematically and objectively identifying special characteristics (classes
or categories) within them (Gray, 2004). In the context of content analysis, the unit or
units of analysis refer to the informational units that will be categorized or coded. Terms
often used in the performance of such an analysis are usually “coding” or “categorizing.”
The term “coding” can also refer to the assignment of qualitative pieces of information
into classes or categories. Identifying the unit or units of analysis involves specifying the
information to which the content analyst should pay attention in developing categories
for analysis (Tripodi & Epstein, 1980). In this case, and as will be discussed in further
sections, elements that were analyzed included those of incidents occurring during a
Correctional Officer’s job (e.g., type of incident, time of day, location, etc.). While
content analyses may range from highly detailed, analyzing every word in a document, to quite broad, analyzing whole sections of documents, the level of detail ultimately depends on the needs of the research and on the time and financial resources that can be allotted to it. As recommended by Tripodi and Epstein (1980), however, the
unit of analysis should be small enough so that it provides reliable and valid information
pertaining to the purpose of the research, but not so small that it requires an excessive amount of resources to process and analyze the data.
Once the data measures are coded or categorized they can then be tabulated.
Tabulation is the recording of the numbers of types of responses in the appropriate
categories, after which statistical analysis follows: percentages, averages, relational
indices, and appropriate tests of significance. The analyses of the data are studied,
collated, assimilated, and interpreted. Finally, the results of this interpretative process can
be reported (Kerlinger & Lee, 2000).
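To make the tabulation step concrete, the short sketch below (written in Python; it is not part of the original research, and the category labels and counts shown are illustrative stand-ins) shows how coded category assignments might be counted and expressed as percentages.

    from collections import Counter

    # Hypothetical coded data: one incident-type label per analyzed report.
    coded_reports = [
        "Contraband", "Medical Intervention", "Contraband",
        "Non-Assaultive/Oppositional Behavior", "Contraband",
    ]

    # Tabulation: count how many reports fall into each category...
    counts = Counter(coded_reports)
    total = sum(counts.values())

    # ...then express each count as a percentage of all coded reports.
    for category, frequency in counts.most_common():
        print(f"{category}: {frequency} ({frequency / total:.1%})")

The same counting step, applied to the actual coded reports, is what produces frequency and percentage tables such as those reported later in this project.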
According to Babbie (1995) the first step in performing a content analysis
involves developing operational definitions of the key variables under investigation.
The next step is to determine what to observe or analyze, which in this case was the types
of incidents that occurred, the time of day, location within the facility, and the sensory
cue which was used to detect the incident (e.g., hearing, vision, or both). After these
categories are established the next step involves observing or reading, classifying, and
making the recordings.
Upon establishing the categories, observing, classifying, and recording the
information, the next step is the actual analysis of the content data. The key part of the analysis is to reduce the volume of textual material (Gray, 2004). Gray (2004)
distinguishes three steps in the analysis process. The first step is the summarizing content
analysis. This involves paraphrasing the material and putting similar paraphrases together
and eliminating information not relevant to the key variables or focus of the analysis.
Next is the explicating content analysis, which clarifies confusing or contradictory
passages by introducing context material into the analysis. This may include definitions
of terms or providing contextual information. A clarifying paraphrase is formulated and
tested in this step. The final step is a structuring content analysis, which seeks
to identify formal structures in the materials or information. The structuring content
analysis may extract features within the information and explicate them further or in
more detail. If the information included is based on a rating scale, it can offer a frequency
count of the various similar features within the data.
Gray (2004) notes that content analysis is potentially a very important tool in
research as it can offer useful information and is highly cost-effective as well. That is,
there is often no need to design or issue surveys which are oftentimes rather costly for
agencies, but rather existing documentation such as reports, memorandums, or electronic
documents can provide much of the data. Conversely, this feature may be construed as a
disadvantage also, as the approach uses older, sometimes archival, materials rather than
collecting fresh information.
Content analysis proved very useful for the purposes of this particular project. As
will be discussed in more detail in the following sections, a content analysis of incident
reports was performed as one part of the larger research strategy developed to establish
the entry-level hearing guidelines for State Correctional Officers. Utilizing the concepts
and methods of content analysis, a substantial amount of information was culled out of
institution incident reports, tabulated, and analyzed. This information provided evidence
in determining the hearing-critical functions of the CO job, including what types of incidents occur, where within the facility they occur, and at what times.
Chapter 7
OBTAINING MATERIALS
As part of the larger project to develop the appropriate hearing standard and in an
effort to determine the hearing-critical functions of the CO job, the research team
collected and chose to analyze a sample of institution incident reports. Incident reports
are work-related documents prepared by COs to document unusual or unlawful activities
and events that have occurred within the prison facility. Most often the activities and
events documented in the incident reports involve responses to emergency situations and
are intended to be as detailed as possible in describing what took place. The hope in
analyzing these reports was that the content analysis would reveal hearing-critical job functions that are commonly performed on the job and that are essential to the safety of both facility staff and inmates.
Common examples of activities documented in incident reports are events such as
the discovery of contraband, inmate altercations, assaults (or attempted assaults) on COs
or other prison staff, disruptive or unusual inmate behavior, suicide attempts, medical
emergencies, etc. Incident reports generally include information about the incident or
event that occurred, how the incident was initially detected, the location within the
facility where the incident or event took place, the time of day and how it was resolved
by the responding CO. Several examples of incident reports are included below. Please note, all identifying information has been removed to preserve the confidentiality of inmates and staff. Incorrect grammar may be present in the examples; it was retained in order to provide exact replicas of the text written by the COs.
Example 1.
On Tuesday September 16, 2008, at approximately 0803 hours, as I was in
the R&R office, I heard Officer T yelling “Get down”. I immediately
responded to the R&R hallway and saw Officer T instructing inmates to
stay down and that Officer T had administered his O.C. spray. Officer T
informed me that as he processed Inmate P, inside R&R holding tank #3,
Inmate P began striking Inmate H with his fists about the upper body area.
Officer T pointed out Inmate H who was face down in the tank near the
door. I ordered Inmate H to crawl out and instructed Officer T to place
him in restraints and escort Inmate H outside to decontaminate with fresh
air. I then instructed Officer E to retrieve gas masks and extra handcuffs
from the equipment cage. I then ordered Inmate P, who was face down
near the back wall with pepper spray on his upper body and facial area to
crawl out backwards from the tank. Officer C then placed Inmate P in
restraints and escorted him outside of the R&R at the south end out the
R&R sally port to be decontaminated with fresh air. I made the decision to
have the inmates escorted outside due to the R&R building was overcome
with pepper spray.
Upon reading and analyzing the content of the incident documented in Example 1
above, it was determined that the initial sensory cue that alerted the CO to the incident was hearing, in that the CO reported he heard another officer yell “Get down.” Several other communications took place throughout the officers’
response to the incident thus indicating that performance of the job in this
instance required “hearing-critical” functions to be performed.
Example 2.
On April 5, 2008 at approximately 0835 hours, while assisting with cell
feeding the morning meal, I heard Inmate X complaining that he did not
receive his breakfast tray. I approached the cell and told Inmate X that he
did not receive his tray because he refused it. Inmate X then stated “This is
why cops get hurt, why you guys are always getting pounded on, I’ll hurt
you mother ******!” In order to stop Inmate X’s attempt to incite the
other inmates in the housing unit I ordered Inmate X to cuff up and he
complied by placing his hands through the food tray slot. I placed Inmate
X in handcuffs. As I was escorting Inmate X out of the building, he
continued to incite other inmates by shouting. As Inmate X and I exited
the housing unit C2, Inmate X became resistive and attempted to pull
away from my grasp. Officer Z responded to assist me. Inmate X then
lowered his left shoulder and charged Officer Z and striking him in the
chest, knocking him off balance. I gained compliance of Inmate X by
grabbing him with my left hand on his right wrist and my right hand on his
shoulder and guided him to the ground utilizing my body weight. I
performed a clothed body search with negative results for weapons or
contraband. Responding staff took control of Inmate X and escorted him
to the facility C Program office.
In Example 2, the officer who was the initial respondent to this incident was
alerted to the issue and the need to respond by hearing an inmate make verbal complaints, which subsequently led to verbal threats. This incident report also illustrates the importance of hearing on the job: an officer’s ability to hear communication allows him or her to be alerted to potential incidents and oftentimes to prevent escalation of the incident as well.
Example 3.
On Thursday May 15, 2008, at approximately 0535 hours, Officer F,
Control Booth Officer heard Inmate X yell “man down”. Officer F
contacted Officer S, Floor Officer via state telephone and asked if he
could report to the Unit A to look inside the cell. Officer S. arrived at the
cell and saw inmate C laying on the bottom face up with blood on his
shirt. Officer S. announced via state radio that a medical cart was needed
in the cell. Officer F then activated his personal alarm. Correctional
Sergeant L responded to the area. Upon assessing the situation Sergeant L
advised staff via state radio that the medical emergency was a possible
stabbing. Sergeant L saw Inmate O standing in front of the cell door
stating, “I didn’t do it, I didn’t know what happen”. Sergeant L instructed
Inmate O to allow staff to place handcuffs on him. Correctional Officer B,
responded to the area and placed handcuffs on Inmate O. Officer B
escorted Inmate O to the B Section Upper Shower area where Officer B
conducted a clothed body search with negative results for weapons or
contraband. Officer B then placed Inmate O in the shower stall. Officer S
entered the cell and placed Inmate O in handcuffs. Correctional Officer D,
Floor Officer and Correctional Sergeant E, responded to the area and
assisted in placing Inmate C on the stokes litter and subsequently on to the
medical cart. Sergeant L and Correctional Officer W, Search and Escort,
escorted Inmate O to the triage treatment area via medical cart for medical
evaluation. Officer B subsequently escorted Inmate O to the Family
Medical Clinic. Correctional Officer R, Drug Interdiction Sergeant,
responded to the triage unit and conducted photographs of the injuries.
Sergeant R asked Inmate C some questions but Inmate C refused to
answer any of the questions. Sergeant R subsequently responded to
Facility C Medical Clinic and began photographing Inmate O. Sergeant R
observed no injuries or blood on Inmate O. Correctional Officer B and
Correctional Officer M responded to the area and began photographing the
cell. Correctional Officer G, Squad Sergeant responded to the Medical
Center where Inmate C had been taken for further treatment. Upon
arriving Sergeant G informed Inmate C who he was and that he needed to
ask him (Inmate C) some questions. During the course of the questioning,
Inmate C stated “leave my cellie out of this” several times. Sergeant G
asked Inmate C if his cellie had done this to him, Inmate C stated “no, I
did it”.
The incident documented above in Example 3 again illustrates the hearing-critical
functions of the CO job. The officer who initially responded here was alerted to a potentially dangerous and/or fatal incident by hearing an inmate yell “man down”. Upon responding, the officer relied on verbal communication using radios, telephones, and direct speech with the inmates involved. This particular
incident again demonstrates the critical need for hearing on the job.
In an attempt to adequately determine the necessary hearing critical tasks
performed on the CO job, the research staff sought to obtain a sufficient sample of
incident reports to review and analyze. Therefore, the research team requested a
representative sample of reports from each of the 33 state prisons. By collecting reports
from every facility, the research team could better ensure that the sample was
representative of the job and CO population in terms of size of facility, jurisdiction,
capacity, inmate populations, gender compositions, physical structure and geographic
location. Fifteen incident reports were requested from each of the prisons, 5 from each of
the three shifts or watches. Once received, each report was labeled with a unique
identifying code and logged into a database.
Each of the 33 facilities provided incident reports, with some providing more than the 15 that were requested; in total, approximately 500 reports were received. A coding and categorization system was developed in order to perform the content analysis of the incident reports (this will be discussed in the next chapter). To ensure inter-rater reliability during this process, each report was read and coded by two research team members. However, because the coding/categorization system involved several pieces of information, and because of resource and time constraints, only a subset of the 500 reports could be dually read and coded/categorized. Given these circumstances, out of the 500 incident reports received, 275 were read by two research staff and agreed upon in terms of their respective codings. These 275 reports were selected for analysis to ensure that there was a representative sample of all shifts and facilities included. Given that the Northern region of the state has fewer facilities relative to the Central and Southern regions, a slightly smaller subset of incident reports was selected from that particular region: a total of 75 reports from the Northern region were analyzed, along with 100 from the Central region and 100 from the Southern region. Among each subset of incident reports selected from each region, the reports were selected to represent each of the three shifts (watches) of the CO job.
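A minimal sketch of how such a region- and watch-stratified selection could be carried out is shown below; the report pool, the field names, and the even-split rule are illustrative assumptions rather than the research team's actual procedure.

    import random
    from collections import defaultdict

    # Hypothetical pool of dually coded reports, each tagged with a region and a watch.
    reports = [
        {"id": f"R{i:03d}", "region": region, "watch": watch}
        for i, (region, watch) in enumerate(
            (r, w)
            for r in ("North", "Central", "South")
            for w in (1, 2, 3)
            for _ in range(40)  # assume 40 agreed-upon reports per region/watch cell
        )
    ]

    # Regional quotas mirroring the 75/100/100 breakdown described above.
    quotas = {"North": 75, "Central": 100, "South": 100}

    # Group the pool by region and watch, then draw roughly equal numbers per watch.
    by_cell = defaultdict(list)
    for report in reports:
        by_cell[(report["region"], report["watch"])].append(report)

    selected = []
    for region, quota in quotas.items():
        base, remainder = divmod(quota, 3)
        for watch in (1, 2, 3):
            k = base + (1 if watch <= remainder else 0)
            selected.extend(random.sample(by_cell[(region, watch)], k))

    print(len(selected))  # 275 under these illustrative quotas

Under these assumptions the selection returns 275 reports, matching the regional quotas described above.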
Chapter 8
CATEGORIZING AND CODING OF INCIDENT REPORTS
Before reading and attempting to perform a content analysis on the entire sample
of incident reports, the research team sought to establish categories into which the reports
could be classified. One feature of the incidents that was coded was the time at which the incident occurred. Reports were given a code according to the shift (or watch) in which the incident took place: Watch 1 (10:00 pm – 6:00 am) was coded as 1, Watch 2 (6:00 am – 2:00 pm) was coded as 2, and Watch 3 (2:00 pm – 10:00 pm) was coded as 3. Another element of the reports that was coded was the location within the facility where the incident took place. Research staff began by going through the reports and recording
each location within the facility where the incidents took place. Once the list was
exhaustive, each location was given a number code. Since different facilities refer to
similar areas using different terms, similar location names across facilities were grouped
together under one code (e.g., chow hall and dining hall). Next was the development of
the incident type categories. This was done by reading a sample of the reports and
identifying common incidents that occur within all of the facilities. Initially the research team tried to categorize each incident according to the specific occurrence the CO was responding to. After identifying a host of various incidents, the common ones were
grouped together and named according to the general category describing the incidents
that it would encompass. For instance, oftentimes COs write incident reports when they
find and confiscate an item that an inmate is not permitted to have. While this can be a
variety of items (e.g., a knife, an extra pair of socks, alcohol), any item an inmate
possesses that is not permitted is considered contraband, therefore “Contraband”
represented one category. The resulting nine categories that were identified comprised the
common incidents that occur across all of the prisons at varying frequencies; one category, “Miscellaneous,” was included to capture less common occurrences, and another, “Multiple Elements,” to capture incident reports that fall into more than one category. Table 1 below displays each of the categories that were
established and their definitions, with examples of incidents.
Table 1
Categories of Incident Types Used to Classify Incident Reports

Contraband: Weapons, drugs, or any other unauthorized items (e.g., an extra blanket, extra socks, etc.).

Medical Intervention: Death, bleeding, collapse, seizure, physical trauma, unintentional self-injury; need for First Aid, CPR.

Physical Assault/Battery/Altercation One-on-One (2 People): Physical altercations, assaults, or battery; does not include physical threats such as fist clenching, or injuries against self.

Physical Assault/Battery/Altercation Group (3+ People): Physical altercations, assaults, or battery among a group of three or more individuals; does not include physical threats such as fist clenching, or injuries against self.

Non-Assaultive/Oppositional Behavior: Active verbal/vocal interaction, oppositional behavior, not following instructions, banging on walls with attempts to be disruptive, and non-assaultive threatening behaviors such as fist clenching. Recounts of vocal/verbal events, summaries, or third-party accounts are not considered here.

Unusual/Abhorrent Behavior: Crying, indecent exposure, hallucinations, intoxication, altered emotional states, etc. Threats of suicide, however, are not included in this category.

Suicide, Suicide Threat, Suicide Attempt/Self-Injury: Suicide, suicide threats, attempts, or other instances of self-injury; banging head on wall or floor, punching/kicking walls or other inanimate objects (with intent to harm oneself). Unintentional self-injury is not considered here.

Miscellaneous: Miscellaneous incidents, including an escape or an attempted escape and reports from individuals who were not the first responders.

Multiple Elements: An incident that involves multiple elements or instances of the above categories.
Finally, categories based on the sensory cue that caused the CO to initially detect the
incident (Vision, Hearing, or Both) were established. If the incident was categorized as
“Vision”, it meant that the CO was alerted to the incident because they visually saw
something occurring. If the incident was categorized as “Hearing,” it meant that the CO
was alerted to the incident’s occurrence because they heard something. If the incident
was categorized as “Both” it meant that the CO was alerted to the incident because they
both heard and saw something occur. Vision only was coded as 1, hearing only as 2, and both vision and hearing as 3.
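As a rough illustration only, the sketch below represents one coded report as a simple record holding the watch, location, incident type, and sensory cue codes described in this chapter; the field names, the location code value, and the example itself are hypothetical.

    from dataclasses import dataclass

    # Code values follow the scheme described in this chapter:
    #   watch: 1 = 10:00 pm - 6:00 am, 2 = 6:00 am - 2:00 pm, 3 = 2:00 pm - 10:00 pm
    #   sensory_cue: 1 = vision only, 2 = hearing only, 3 = both vision and hearing
    SENSORY_CUES = {1: "Vision", 2: "Hearing", 3: "Both Vision and Hearing"}

    @dataclass
    class CodedIncidentReport:
        report_id: str       # unique identifying code assigned when the report was logged
        watch: int           # 1, 2, or 3
        location_code: int   # numeric code from the exhaustive list of facility locations
        incident_type: str   # one of the nine categories shown in Table 1
        sensory_cue: int     # 1, 2, or 3

    # A hypothetical coded report loosely resembling Example 1 (an altercation heard in
    # R&R at 0803 hours): Watch 2, a made-up location code, and a hearing-only cue.
    example = CodedIncidentReport(
        report_id="A-001",
        watch=2,
        location_code=7,
        incident_type="Physical Assault/Battery/Altercation One-on-One (2 People)",
        sensory_cue=2,
    )
    print(example.incident_type, "-", SENSORY_CUES[example.sensory_cue])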
Each of the 275 incident reports was read by two raters so that the degree of agreement could be assessed. Upon reading all of the reports, the raters had almost perfect agreement. Those reports with discrepant codings were discussed by the raters until agreement was reached.
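A minimal sketch of how percent agreement between the two raters could be checked is given below; the codes listed are invented for illustration and this is not the research team's actual reliability computation.

    # Hypothetical sensory-cue codes assigned to the same ten reports by two raters
    # (1 = vision, 2 = hearing, 3 = both).
    rater_a = [2, 1, 1, 3, 2, 2, 1, 3, 1, 2]
    rater_b = [2, 1, 1, 3, 2, 1, 1, 3, 1, 2]

    # Simple percent agreement: the proportion of reports given identical codes.
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    print(f"Agreement on {matches} of {len(rater_a)} reports ({matches / len(rater_a):.0%})")

    # Reports with discrepant codings would then be flagged for the raters to resolve.
    discrepancies = [i for i, (a, b) in enumerate(zip(rater_a, rater_b)) if a != b]
    print("Reports needing resolution (by position):", discrepancies)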
Chapter 9
RESULTS
Tabulations
Table 2 below displays the percentage of the incidents where vision, hearing, or
both vision and hearing alerted Correctional Officers that an incident was occurring. Of
the 275 Incident Reports that were read and analyzed for content, more than a quarter (28.7%) of the incidents were detected exclusively through the sensory cue of hearing, and another 22.9% involved hearing in combination with vision. Thus, hearing was used in the detection of more than half (51.6%) of the incidents.
These tabulations make it clear that hearing is a very important sensory ability for
Correctional Officers in their detection and responsiveness to incidents.
Table 2
Sensory Cues for Incidents

Sensory Cue                 Frequency   Percentage
Vision                      133         48.4%
Hearing                     79          28.7%
Both Vision and Hearing     63          22.9%
Total                       275         100%
Table 3 tabulates the percentage of incidents that occurred across each of the three
shifts or “Watches.” While the percentages for Watch 2 and Watch 3 are relatively close, 34.9% and 40.4%, respectively, Watch 3 appeared to be the time during which more of the incidents occurred. Nearly a quarter of the incidents occurred during Watch 1, indicating that incidents occur at substantial rates at all times of the day.
Table 3
Incident Frequencies Across Watches

Watch Incident Occurred     Frequency   Percentage
Watch 1 (10 pm – 6 am)      68          24.7%
Watch 2 (6 am – 2 pm)       96          34.9%
Watch 3 (2 pm – 10 pm)      111         40.4%
Total                       275         100%
Table 4 presents the percentages of locations within the institutions in which incidents
occurred. The cell area had the highest frequency of incidents, comprising over 30% of
the incidents. Following the cell area, several other facility locations had relatively
similar frequencies of incident occurrences, including the dorms/barracks, undetermined
facility areas (miscellaneous), dayroom, and yard, constituting 12.7%, 9.8%, 8.0% and
7.6% of the incidents, respectively. The dorms/barracks areas are different from the cell
area in that they are larger and house multiple inmates, whereas a cell generally houses only 1-2 inmates at one time. Other undetermined facility areas encompass the incidents in which the incident report did not specify a location. The dayroom is an indoor area where inmates usually participate in recreational activities, and the yard is generally a larger outdoor area of the facility in which inmates may participate in sports or other recreational activities. The remainder of the locations listed in Table 4,
such as Undetermined Housing, which is an area within the housing that was not
specifically documented on the incident report, the Chow Hall where dining takes place,
and Receiving and Releasing, where inmates are both booked and allowed to exit the
facility, all had relatively equally-distributed frequencies of incidents that occurred.
Table 4
Frequencies of Incidents Across Locations

Location of Incident                                Frequency   Percentage
Cell                                                83          30.2%
Dorms/Barracks                                      35          12.7%
Other Undetermined Facility Area (Miscellaneous)    27          9.8%
Dayroom                                             22          8.0%
Yard                                                21          7.6%
Undetermined Housing                                12          4.4%
Receiving and Releasing                             11          4.0%
Laundry and Vocational/Classroom                    9           3.3%
Chow Hall                                           7           2.5%
Shower Area                                         6           2.2%
Hallway                                             6           2.2%
Bathroom                                            4           1.5%
Parking Lot                                         3           1.1%
Program Office                                      3           1.1%
Gym                                                 3           1.1%
Upper Tier                                          2           0.7%
Visitation                                          2           0.7%
Medical/Infirmary                                   2           0.7%
Kitchen                                             2           0.7%
Administration Area                                 2           0.7%
Sergeant’s Office                                   2           0.7%
Nurses’ Station                                     1           0.4%
Hospital                                            1           0.4%
Medication Window                                   1           0.4%
Education Building                                  1           0.4%
Control Booth                                       1           0.4%
Gate                                                1           0.4%
Sally Port                                          1           0.4%
Telephone Area                                      1           0.4%
Captain’s Office                                    1           0.4%
Court                                               1           0.4%
Entrance Gate                                       1           0.4%
Total                                               275         100%
Table 5 tabulates the frequency of each incident category occurrence. The category of
Non-Assaultive/Oppositional Behavior had the highest frequency of occurrence, accounting for 28.4% of the incidents. Again, this refers to inmate acts of verbal/vocal
interaction, oppositional behavior, not following instructions, banging on walls with
attempts to be disruptive, and non-assaultive threatening behaviors such as fist clenching.
Accounting for 19.6% of the incidents was the category of Physical Assaults – one on
one. Following one-on-one physical assaults was the miscellaneous category comprising
14.2% of the incidents. The miscellaneous category includes incidents such as an escape
or an attempted escape and reports from individuals who were not the first responders.
The remaining categories, contraband, physical assault/battery/altercation – group (3 or
more people), unusual/abhorrent behavior, medical intervention, suicide/suicide
threat/suicide attempt/self-injury, and multiple elements accounted for 13.5%, 8.0%,
4.7%, 4.4%, 3.6%, and 3.6% of the remaining incidents, respectively.
Table 5
Frequencies of Incident Categories

Incident Category                                                 Frequency   Percentage
Non-Assaultive/Oppositional Behavior                              78          28.4%
Physical Assault—One on One                                       54          19.6%
Miscellaneous                                                     39          14.2%
Contraband                                                        37          13.5%
Physical Assault/Battery/Altercation - Group (3 or more people)   22          8.0%
Unusual/Abhorrent Behavior                                        13          4.7%
Medical Intervention                                              12          4.4%
Suicide, Suicide Threat, Suicide Attempt/Self-Injury              10          3.6%
Multiple Elements                                                 10          3.6%
Total                                                             275         100%
Table 6 displays the shift or Watch during which the “Hearing Only” sensory-cued incidents occurred. Hearing only incidents were relatively well distributed across all three of the watches. Watch 1, however, appeared to have the highest frequency of incident occurrences, with 43% of hearing only incidents occurring during that shift. Following Watch 1, Watch 2 had the next highest proportion of incidents with 32.9%, followed by Watch 3 with 24.1% of the incidents. The tabulations indicate that incidents cued by hearing-only sensory triggers occur throughout all times of the day.
Table 6
“Hearing Only” Incident Frequencies by Watch

Watch                       Frequency   Percentage
Watch 1 (10 pm – 6 am)      34          43.0%
Watch 2 (6 am – 2 pm)       26          32.9%
Watch 3 (2 pm – 10 pm)      19          24.1%
Total                       79          100%
Table 7 below presents the locations within the institution in which “hearing only”
incidents occurred. The cell area had the highest frequency of incidents, comprising
34.2% of the incidents. The next highest frequency of hearing-only incidents occurred in
the dorm/barrack areas with a proportion of 15.2%. Following the dorm/barrack areas,
several other facility locations had relatively similar frequencies of incident occurrences,
including undetermined facility areas, undetermined housing, dayroom, yard, receiving
and releasing, constituting 6.3%, 5.1%, 5.1%, 5.1%, and 5.1% of the incidents,
respectively. The hallway and vocation/classroom locations each accounted for 3.8% of
the incidents. The shower and kitchen areas of the facility had a frequency proportion of
2.5% each. Several other locations within the facilities accounted for 1.3% each of the
hearing only incidents, including the nurse’s station, upper tier, hospital, program office,
administration area, Sergeant’s office, telephone area, Captain’s office, and entrance gate.
The remainder of the locations included in Table 7 did not have any hearing-only incident occurrences.
Table 7
“Hearing Only” Incident Frequencies by Location

Location of Incident                                Frequency   Percentage
Cell                                                27          34.2%
Dorms/Barracks                                      12          15.2%
Other Undetermined Facility Area (Miscellaneous)    5           6.3%
Undetermined Housing                                4           5.1%
Dayroom                                             4           5.1%
Yard                                                4           5.1%
Receiving and Releasing                             4           5.1%
Hallway                                             3           3.8%
Laundry and Vocational Classroom                    3           3.8%
Shower Area                                         2           2.5%
Kitchen                                             2           2.5%
Nurses’ Station                                     1           1.3%
Upper Tier                                          1           1.3%
Hospital                                            1           1.3%
Program Office                                      1           1.3%
Administration Area                                 1           1.3%
Sergeant’s Office                                   1           1.3%
Telephone Area                                      1           1.3%
Captain’s Office                                    1           1.3%
Entrance Gate                                       1           1.3%
Bathroom                                            0           0.0%
Chow Hall                                           0           0.0%
Visitation                                          0           0.0%
Medical/Infirmary                                   0           0.0%
Medication Window                                   0           0.0%
Education Building                                  0           0.0%
Control Booth                                       0           0.0%
Parking Lot                                         0           0.0%
Gym                                                 0           0.0%
Gate                                                0           0.0%
Sally Port                                          0           0.0%
Court                                               0           0.0%
Total                                               79          100%
Table 8 shows the percentages of each incident category that occurred for the “Hearing
Only” incidents. The non-assaultive/oppositional behavior category encompassed nearly a third (32.9%) of the hearing-only incidents. The miscellaneous category followed, accounting for 21.5% of the hearing only incidents. Nearly 14% (13.9%) of the incidents were categorized as one-on-one physical assaults, followed by medical interventions, accounting for 11.4% of the incidents. The remaining categories (contraband; suicide, suicide threat/attempt, self-injury; physical assault/battery/altercation involving 3 or more people; unusual/abhorrent behavior; and multiple elements) accounted for 6.3%, 5.1%, 3.8%, 2.5%, and 2.5% of the incidents, respectively.
Table 8
“Hearing Only” Incident Frequencies by Incident Category

Incident Category                                                 Frequency   Percentage
Non-Assaultive/Oppositional Behavior                              26          32.9%
Miscellaneous                                                     17          21.5%
Physical Assault—One on One                                       11          13.9%
Medical Intervention                                              9           11.4%
Contraband                                                        5           6.3%
Suicide, Suicide Threat, Suicide Attempt/Self-Injury              4           5.1%
Physical Assault/Battery/Altercation - Group (3 or more people)   3           3.8%
Unusual/Abhorrent Behavior                                        2           2.5%
Multiple Elements                                                 2           2.5%
Total                                                             79          100%
Cross-tabulations
Table 9 below presents the percentages of locations within the institutions by watch in
which hearing alerted Correctional Officers that an incident was occurring. Out of the 79
incidents that were categorized as requiring “Hearing Only” as a means for detecting the
incident, Watch 2 had the highest frequency of incidents occurring during that time, constituting 43% of the incidents, while Watch 3 and Watch 1 comprised approximately a third and a quarter of all of the incidents, respectively. The cell area had
the highest frequency of “Hearing Only” incidents, with nearly half of the incidents
occurring during Watch 3. The Dorm/Barracks area comprised the next highest frequency
of “Hearing Only” incidents, with times of occurrence relatively equally-distributed
across the three Watches.
Table 9
“Hearing Only” Incidents – Incident Location by Watch
(Each cell shows the frequency, with the percentage of that location’s hearing-only incidents in parentheses; the Total row shows percentages of all 79 hearing-only incidents.)

Location of Incident        Watch 1 (2200–0600 hrs)   Watch 2 (0600–1400 hrs)   Watch 3 (1400–2200 hrs)
Housing
  Cell                      10 (37.0)                 4 (14.8)                  13 (48.1)
  Dorm/Barracks             3 (25.0)                  5 (41.7)                  4 (33.3)
  Undetermined Housing      2 (50.0)                  1 (25.0)                  1 (25.0)
  Shower Area               0 (0.0)                   0 (0.0)                   2 (100.0)
  Dayroom                   0 (0.0)                   2 (50.0)                  2 (50.0)
  Upper Tier                0 (0.0)                   1 (100.0)                 0 (0.0)
  Hallway                   0 (0.0)                   2 (66.7)                  1 (33.3)
Yard                        0 (0.0)                   3 (75.0)                  1 (25.0)
Medical
  Nurses’ Station           0 (0.0)                   1 (100.0)                 0 (0.0)
  Hospital                  1 (100.0)                 0 (0.0)                   0 (0.0)
Receiving and Releasing     2 (50.0)                  2 (50.0)                  0 (0.0)
Laundry and Vocation
  Classroom                 0 (0.0)                   3 (100.0)                 0 (0.0)
Kitchen                     0 (0.0)                   2 (100.0)                 0 (0.0)
Other
  Program Office            0 (0.0)                   1 (100.0)                 0 (0.0)
  Undetermined Area         1 (20.0)                  3 (60.0)                  1 (20.0)
  Administration Area       0 (0.0)                   0 (0.0)                   1 (100.0)
  Telephone Area            0 (0.0)                   1 (100.0)                 0 (0.0)
  Captain’s Office          0 (0.0)                   1 (100.0)                 0 (0.0)
  Sergeant’s Office         0 (0.0)                   1 (100.0)                 0 (0.0)
  Entrance Gate             0 (0.0)                   1 (100.0)                 0 (0.0)
Total                       19 (24.1)                 34 (43.0)                 26 (32.9)
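As a rough sketch of how a cross-tabulation such as Table 9 could be produced from coded records, the example below counts hearing-only incidents by location group and watch; the handful of records shown are invented for illustration.

    from collections import Counter

    # Hypothetical coded "hearing only" records: (location group, watch).
    hearing_only = [
        ("Housing", 3), ("Housing", 1), ("Housing", 2),
        ("Yard", 2), ("Receiving and Releasing", 1), ("Housing", 3),
    ]

    # Cross-tabulation: joint counts of location group by watch.
    crosstab = Counter(hearing_only)

    for location in sorted({loc for loc, _ in hearing_only}):
        row_total = sum(crosstab[(location, w)] for w in (1, 2, 3))
        cells = ", ".join(
            f"Watch {w}: {crosstab[(location, w)]} ({crosstab[(location, w)] / row_total:.0%})"
            for w in (1, 2, 3)
        )
        print(f"{location} -- {cells}")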
Table 10 tabulates the frequency of each “Hearing Only” incident category occurrence
across each of the three watches. Of the 79 incidents categorized as hearing only, the
category of Non-Assaultive/Oppositional Behavior had the highest frequency of occurrence, accounting for 32.9% of the incidents, the largest share of which occurred during
Watch 3. The next highest cross-tabulation of hearing only incident categories across
watches was the miscellaneous category. Again, this category includes incidents such as
an escape or an attempted escape and reports from individuals who were not the first
responders. This category accounted for 21.5% of all of the hearing only incidents, of
which over 80% occurred during Watch 2. Medical Intervention and One-on-One
Physical Assault comprised relatively similar proportions of the hearing only incidents, 11.4% and 13.9% respectively, with most of the occurrences for both categories taking place during Watch 3.
Table 10
“Hearing Only” Incidents – Incident Category by Watch
(Each cell shows the frequency, with the percentage of that category’s hearing-only incidents in parentheses; the Total row shows percentages of all 79 hearing-only incidents.)

Incident Category                      Watch 1 (2200–0600 hrs)  Watch 2 (0600–1400 hrs)  Watch 3 (1400–2200 hrs)  Total  % of All Hearing Only Incidents
Contraband                             5 (100.0)                0 (0.0)                  0 (0.0)                  5      6.3
Medical Intervention                   3 (33.3)                 1 (11.1)                 5 (55.6)                 9      11.4
Physical Assault – one-on-one          1 (9.0)                  5 (45.5)                 5 (45.5)                 11     13.9
Physical Assault – 3 or more people    1 (33.3)                 1 (33.3)                 1 (33.3)                 3      3.8
Non-Assaultive/Oppositional Behavior   7 (26.9)                 8 (30.8)                 11 (42.3)                26     32.9
Unusual Behavior                       0 (0.0)                  1 (50.0)                 1 (50.0)                 2      2.5
Suicide, Suicide Attempt/Threat        1 (25.0)                 3 (75.0)                 0 (0.0)                  4      5.1
Miscellaneous                          1 (5.9)                  14 (82.4)                2 (11.8)                 17     21.5
Multiple Elements                      0 (0.0)                  1 (50.0)                 1 (50.0)                 2      2.5
Total                                  19 (24.1)                34 (43.0)                26 (32.9)                79     100.0
Table 11 below displays cross-tabulations for each of the “Hearing Only” incident
locations across each of the incident categories. Several of the specific locations in which
hearing only incidents occurred were collapsed into larger categories in order to simplify
the table. Specific locations such as the cell, dorms/barracks, hallways, shower areas,
upper tiers, dayroom, and those housing areas that were deemed undetermined, were
collapsed into Housing. The location category for Medical includes the Nurse’s Station
and Hospital, Laundry and Vocation comprises the classroom area, and Other includes
specific locations such as Program Office, Undetermined Area, Administration Area,
Telephone Area, Captain’s Office, Sergeant’s Office, and Entrance Gate. The Housing
areas had the highest frequency of incidents across all of the categories. Twenty-four percent of the hearing-only incidents occurred in housing areas and were categorized as Non-Assaultive/Oppositional Behavior, 11.4% occurred in housing areas and were categorized as miscellaneous, and the remainder of incidents were well-distributed across each of the other incident categories. Hearing only incidents categorized as Medical Intervention and as Physical Assault/Battery/Altercation (One-on-One) accounted for 11.5% and 14.0% of the incidents, respectively, and in both cases the majority took place in housing areas (7.6% and 7.6% of all hearing-only incidents, respectively).
Table 11
“Hearing Only” Incidents – Incident Location by Category
(Each cell shows the frequency, with the percentage of all 79 hearing-only incidents in parentheses.)

Location                  Contraband  Medical       Assault       Assault      Non-Assaultive/  Unusual/    Suicide/      Miscellaneous  Multiple
                                      Intervention  (One-on-One)  (3+ People)  Oppositional     Abhorrent   Self-Injury                  Elements
Housing                   4 (5.1)     6 (7.6)       6 (7.6)       3 (3.9)      19 (24.0)        4 (5.1)     2 (2.5)       9 (11.4)       1 (1.3)
Yard                      0 (0.0)     1 (1.3)       0 (0.0)       0 (0.0)      1 (1.3)          0 (0.0)     0 (0.0)       2 (2.5)        0 (0.0)
Medical                   0 (0.0)     1 (1.3)       0 (0.0)       0 (0.0)      0 (0.0)          0 (0.0)     0 (0.0)       1 (1.3)        0 (0.0)
Receiving and Releasing   0 (0.0)     1 (1.3)       0 (0.0)       0 (0.0)      1 (1.3)          0 (0.0)     1 (1.3)       0 (0.0)        1 (1.3)
Laundry and Vocation      0 (0.0)     0 (0.0)       2 (2.5)       0 (0.0)      1 (1.3)          0 (0.0)     0 (0.0)       0 (0.0)        0 (0.0)
Kitchen                   0 (0.0)     0 (0.0)       1 (1.3)       0 (0.0)      0 (0.0)          0 (0.0)     0 (0.0)       1 (1.3)        0 (0.0)
Other                     1 (1.3)     0 (0.0)       2 (2.6)       0 (0.0)      3 (3.9)          0 (0.0)     1 (1.3)       4 (5.1)        0 (0.0)
Total                     5 (6.4)     9 (11.5)      11 (14.0)     3 (3.9)      25 (31.8)        4 (5.1)     4 (5.1)       17 (21.6)      2 (2.6)
Chapter 10
DISCUSSION
The analyses of the coded incident reports revealed information about the
frequency of each type of incident, the location where the incident occurred, and the time
of day or watch during which the incident took place. Tabular and crosstab summaries
were reported and explicated earlier. Although the information obtained in the incident
reports provided substantial detail about the nature of the incidents and when and where
they occurred, the incident report analysis served as only one part of the research strategy
involved in establishing the hearing standard. While content analysis of the incident
reports helped to establish a broader context of the job of Correctional Officer, it alone
was not able to provide sufficient details of the hearing-critical job functions and their
specific context. Such information was collected concurrently as part of the research
project in the form of panel interviews with COs who served as Subject Matter Experts
(SMEs). The data gathered from analysis of the incident reports and the interviews with
SMEs was used to guide the on-site measurements, in which research staff took a large sample of audio recordings across a range of different prison facilities. Using both
the information extracted from the reports as well as the interview data helped guide the
strategy for planning where and when to conduct the audio measurements, particularly
the location and time of day in which to perform the recordings. That is, based on the
results of the incident report analysis and the information offered by the SMEs, research staff determined the areas and times at which the facility would be especially noisy, as well as the locations where, and times when, a CO spends a large portion of his/her shift and where measurement would therefore be important in determining the minimum hearing ability required to perform the job. In terms of limitations, one potential disadvantage of the content analysis was that some of the incident reports that were read and coded were several years old. While it is unlikely that
the job and necessary hearing ability required to perform the job has changed
substantially since the time some of the reports were written, using archival information
as opposed to only collecting and using fresh information can sometimes be seen as a disadvantage when conducting a content analysis (Gray, 2004). Overall, however, the incident reports served as a very useful tool in this research project, providing information about the CO job, and they were extremely cost effective as well, in that there was little to no cost associated with obtaining them.
Chapter 11
CONCLUSION AND RECOMMENDATIONS
In the larger scheme of this project, the content analysis of the incident reports proved useful in providing descriptive and analytic insight into where, when, and what types of incidents occur within a facility, and it emphasized the critical need for hearing ability to perform the job of a CO. A thorough analysis of the
reports, in conjunction with the SME interviews and the various other steps undertaken as part of this particular research strategy, made evident the need to perform
hearing-critical tasks on the CO job. In addition, several analyses of incident reports, job
analyses and job tasks, and SME interview panels, showed that the need to hear and
understand speech communication underlies many critically important job functions for
the job of a CO.
Once the on-site audio measurements had been completed, the research team’s
Hearing Consultant analyzed all of the audio measurements, which corresponded to
detailed logs of where, at what time, and what activities were taking place during each
recording. As previously described in the Research Strategy section, several tools were
utilized in the analysis of the recordings, including the Extended Speech Intelligibility
Index (ESII) as a means of determining the primary functional hearing ability and to
calculate proportional likelihoods of effective speech communication at various
distances, vocal effort levels, with the effects of hearing loss, and at different speech
reception thresholds. Incorporating the information obtained by research staff through the job analyses, incident report analysis, and SME panels, the research team concluded that the recommended hearing assessment tool is the Hearing In Noise Test (HINT; Nilsson, Soli, & Sullivan, 1994). The HINT was deemed the most appropriate and valid test for evaluating the functional hearing ability of applicants for the Correctional Officer position; according to experts in the field, it provides better objective prediction of an applicant’s
ability to perform hearing-critical job functions than do measures of hearing sensitivity
obtained with other methods such as pure-tone audiometry. Beyond the work and
analyses conducted by the research team, research by the Hearing Consultant proceeded
further in determining the screening protocol, calibrations, as well as how to adequately
screen and accommodate entry-level CO candidates who possess one or more auditory
prostheses (e.g., hearing aids, cochlear implants, etc.).
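As a purely illustrative sketch of how a functional screening criterion of this kind might be applied, the fragment below compares a measured HINT speech reception threshold against a cutoff; the cutoff value, the default arguments, and the helmet-verification flag are hypothetical placeholders rather than the standard that was actually adopted.

    def meets_hearing_standard(hint_srt_db: float,
                               cutoff_db: float = -2.0,
                               uses_prosthesis: bool = False,
                               prosthesis_verified_under_helmet: bool = True) -> bool:
        """Illustrative screening decision only; the -2.0 dB cutoff is a made-up placeholder.

        hint_srt_db is a measured HINT speech reception threshold in dB signal-to-noise
        ratio; lower (more negative) values indicate better functional hearing in noise.
        """
        if hint_srt_db > cutoff_db:
            return False
        # Applicants tested with an auditory prosthesis would additionally need the device's
        # proper functioning demonstrated while worn under a helmet, as described above.
        if uses_prosthesis and not prosthesis_verified_under_helmet:
            return False
        return True

    print(meets_hearing_standard(-3.5))  # passes under the placeholder cutoff
    print(meets_hearing_standard(-3.5, uses_prosthesis=True,
                                 prosthesis_verified_under_helmet=False))  # fails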
In conclusion, the purpose of this project was to document and describe one part
of a valid, evidence-based method to develop a hearing standard for State of California
Correctional Officers. This portion of the larger research strategy focused specifically on
the coding and content analysis of institution incident reports in an effort to provide
information about the CO job and where, when, and what types of incidents occur in
which a CO must respond. Additionally, a primary focus of the analysis was what types of sensory triggers a CO is required to detect in responding to incidents within facilities. The particular research strategy, steps, and content analysis described in this
project detail the approach the research team employed in developing this state-wide
hearing standard. It can be used as a guide in developing future standards and can be modified to fit the needs of a particular organization and a specific job.
REFERENCES
Hoogenboom, A. A. (1968). Outlawing the spoils: A history of the civil service reform movement. In Encyclopedia of American History. Retrieved October 6, 2007, from http://www.answers.com/topic/spoils-system

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: Author.

American National Standards Institute. (2007). Methods for calculation of the speech intelligibility index (ANSI S3.5-1997, reaffirmed 2007). New York: American National Standards Institute.

Babbie, E. (1995). The practice of social research (7th ed.). Belmont, CA: Wadsworth Publishing Company.

Corrections Standards Authority, California Department of Corrections and Rehabilitation. (2011). Hearing standard for selection of entry-level correctional officers. Retrieved from https://www.cdcr.ca.gov/Divisions_Boards/CSA

Equal Employment Opportunity Commission, Civil Service Commission. Uniform guidelines on employee selection procedures. Federal Register, 43(166), 38290-38309.

Giguère, C., Laroche, C., Soli, S. D., & Vaillancourt, V. (2008). Functionally-based screening criteria for hearing-critical jobs based on the Hearing In Noise Test. International Journal of Audiology, 47, 319-328.

Gray, D. E. (2004). Doing research in the real world. Thousand Oaks, CA: SAGE Publications.

Kaplan, R. M., & Saccuzzo, D. P. (2005). Psychological testing: Principles, applications, and issues (6th ed.). Belmont, CA: Thomson Wadsworth.

Kerlinger, F. N., & Lee, H. B. (2000). Foundations of behavioral research (4th ed.). Orlando, FL: Harcourt College Publishers.

Laroche, C., Soli, S. D., Giguère, C., Lagacé, J., Vaillancourt, V., & Fortin, M. (2003). An approach to the development of hearing standards for hearing-critical jobs. Noise & Health, 6, 17-37.

Nilsson, M., Soli, S. D., & Sullivan, J. A. (1994). Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. Journal of the Acoustical Society of America, 95, 1085-1099.

Soli, S. D. (2003). Hearing and job performance. Paper commissioned by the Committee on Disability Determination for Individuals with Hearing Impairment, National Research Council, National Academy of Sciences.

Rhebergen, K. S., & Versfeld, N. J. (2005). A speech intelligibility index-based approach to predict the speech reception threshold for sentences in fluctuating noise for normal-hearing listeners. Journal of the Acoustical Society of America, 117, 2181-2192.

Rhebergen, K. S., Versfeld, N. J., & Dreschler, W. A. (2006). Extended speech intelligibility index for the prediction of the speech reception thresholds in fluctuating noise. Journal of the Acoustical Society of America, 120, 3988-3997.

Rhebergen, K. S., Versfeld, N. J., & Dreschler, W. A. (2008). Prediction of the intelligibility for speech in real-life background noises for subjects with normal hearing. Ear & Hearing, 29, 169-172.

Tripodi, T., & Epstein, I. (1980). Research techniques for clinical social workers. New York: Columbia University Press.