Improving Public-Sector Performance Management:
One Step Forward, Two Steps Back?
Carolyn J. Heinrich
University of Wisconsin-Madison
La Follette School of Public Affairs
and Institute for Research on Poverty
August 2003
Please do not quote, cite or distribute without permission.
This research was funded by a grant from the IBM Endowment
for the Business of Government, and the author thanks Mark
Abramson for his support and guidance throughout the project.
Stephen Wandner and Jonathan Simonetta of the U.S. Department
of Labor provided data, technical assistance and feedback that
were also vital to this work. Will DuPont and Lynette Mooring
were excellent research assistants.
Abstract
The U.S. Department of Labor introduced performance standards and outcome measures to its
public employment and training programs more than two decades ago, and a strong emphasis on
performance accountability continues as a key feature of the current Workforce Investment Act
(WIA) programs. This study uses the WIA performance management system to identify
challenges and prospects in implementing performance management systems effectively in public
agencies. The process for setting performance standards is studied, and empirical analyses
investigate relationships among these standards, states’ attained performance levels, and
differentials between states’ performance and the standards. The study findings show that setting
performance targets is a key task that determines the nature of incentives in the performance
management system. In the absence of regular adjustments to these standards for changing local
conditions, however, the WIA system appears to have promoted increased risk for program
managers rather than shared accountability. Program managers appeared to make undesirable
post-hoc accommodations to improve measured performance. This study produces both general
lessons about the implementation of performance management systems and more specific
feedback and strategies for improving the effectiveness of the WIA system.
Introduction
Although performance measurement as a management tool has a long history dating back
to the 1800s, it is primarily in the last two decades that public-sector performance management
has shifted to an explicit focus on measuring outcomes and rewarding results (Heinrich, 2003;
Radin, 2000). The Government Performance and Results Act (GPRA) of 1993 mandated the
development of outcomes-based performance measurement systems in federal agencies, including
annual performance plans specifying quantitatively measurable goals and levels of performance to
be achieved and annual reports comparing actual performance with goals. This study is one in a
growing body of work that aims to describe and draw lessons from public agencies’ experiences
in implementing these systems and to identify ways to increase their effectiveness, in addition to
improving agency performance (Hatry et al., 2003; Heckman, Heinrich and Smith, 2002).1
Among federal government agencies, the Department of Labor (DOL) has been a
“pioneer” in the development of performance management systems (Barnow, 2000). Before
GPRA, the Job Training Partnership Act (JTPA) of 1982 introduced performance standards for
public employment and training program outcomes (e.g., job placement rates and trainee earnings)
and the use of budgetary incentives based on performance to motivate agency staff. In addition,
two randomized experimental evaluations, of the JTPA program in the 1980s and the Job Corps
program in the 1990s, provided important information for assessing how well these performance standards systems measured program impacts. Policymakers and public managers
have since drawn from the results of these studies to inform the design and operation of
performance standards systems in government programs.
In the Workforce Investment Act (WIA) of 1998 that recently replaced the JTPA
program, a greater emphasis on performance accountability has been described as a “hallmark” of
the legislation (Sheets, 2002; U.S. DOL-ETA, 2001). Some of the broader principles guiding the
evolution of this performance management system include those originating in “total quality
management” and “reinventing government” reforms--the measurement and analysis of results,
continuous performance improvement, shared accountability, and a customer and market focus.
The DOL is also actively supporting the use of the Malcolm Baldrige Criteria for Performance
Excellence as a tool for improving organizational effectiveness.2 Two new features of the WIA
performance management system that were intended to strengthen these principles in
implementation are: (1) a new approach to setting performance standards that involves the
negotiation of performance targets with states, and (2) new performance measures of customer
(participant and employer) satisfaction.
This study uses the WIA performance management system as a case study to elucidate
some of the challenges and prospects for making basic principles and components of performance
management systems work effectively in public agencies. Early studies of the WIA performance
management system have suggested that the system is working poorly and is in need of important
reforms (U.S. GAO, 2002). Through the analysis of data from states’ five-year WIA
implementation plans, DOL records on state negotiated standards and performance, and other
sources of data on participant and local area characteristics, this study produces both general
lessons about the implementation of performance management systems and more specific
feedback and strategies for improving the effectiveness of the WIA system. The information
generated in this study should also contribute to ongoing debate and discussions associated with
the reauthorization of WIA.
The paper proceeds as follows. An overview of the WIA program, the specific goals of
the WIA performance management system, and notable changes compared to the JTPA system
are presented first. The data and methods for the study are briefly described next. A qualitative
analysis of how states determined performance goals, the levels of performance standards, and
adjustments to standards under WIA is followed by empirical analyses of variation in and
relationships among negotiated standards, states’ attained performance levels, and differentials
between states’ performance and their negotiated standards. The larger question these analyses
address is: How effective is the WIA performance management system in gauging program
performance and creating the right incentives to guide program administration and operations in
improving outcomes for workers and employers? The paper concludes with recommendations for
how the WIA performance management system and similar systems in other government
programs might be improved.
The WIA performance management system: background information, key features and
issues
Since the inception of the JTPA program, federal workforce development programs have
sought to actively engage the private sector and to promote strong local governance so that
employment and training services can be tailored to meet local employer and worker needs.
Although the WIA program retains the basic structure and operational components of the JTPA
program, important changes were made in the eligibility criteria for workforce development
services, the types of services made available, and the processes for performance accountability
under WIA.
In brief, WIA makes available a broader range of core services to the general public (e.g.,
labor market information and job search assistance), not solely to those who qualify based on low-income criteria. Individuals’ access to more intensive levels of service (e.g., comprehensive
assessment and case management, vocational or on-the-job training) proceeds sequentially if they
fail to achieve success in the labor market following receipt of basic services. These services are
typically provided through one-stop centers that include programs of the DOL, the Department of
Education, the Department of Health and Human Services, and the Department of Housing and
Urban Development. The DOL does not require monitoring and tracking of participants using
self-directed, core services or non-WIA services at the one-stop centers, but rather only those
participants who receive substantial staff assistance in the WIA programs.
WIA also established new performance measures and requirements for using specific types
of data to evaluate performance. Table A-1 in Appendix A shows the current WIA performance
measures and indicates which of these are new to WIA. The addition of the participant and
employer satisfaction performance measures was intended to make the workforce development
programs broadly accountable to their primary customers: participants and employers. Other new
measures are the credential rates for adults, dislocated workers and older youth, which indicate
the attainment of a degree, the certification of skills or training completed.
The DOL directed states to develop management information systems (MIS) for tracking
performance and to use unemployment insurance (UI) records to compute the employment and
earnings outcomes of participants. Although some states were able to modify their existing JTPA
MIS systems, a number of states had to develop new procedures and systems to collect these
data. As the GAO (2002) reports, states have struggled to meet DOL requirements for these
systems, including the need to maintain lists of participants (i.e., a sampling frame) to use in
supplemental data collection through follow-up surveys, the collection of performance data at
different time points for different measures, and the use of different participant subgroups (e.g.,
employed at registration, type of program or level of service received) in calculating performance
outcomes.
A key feature of the new WIA performance management system (and a primary focus of
this study) is the negotiation of performance standards that states are required to meet. Under
JTPA, the DOL established expected performance levels using a regression-based model with
national departure points. States could use the optional DOL adjustment model or develop their
own adjustment procedures, although the state-developed procedures and any adjustments made
by the governor had to conform to the DOL’s parameters (see Social Policy Research Associates,
1999). A majority of states adopted these models and used the DOL-provided performance
standards worksheets (see Appendix A, Table A-2) to determine performance targets, although
some with modifications.
Under WIA, states negotiate with the DOL and local service delivery areas to establish
performance targets, using estimates based on historical data that are similarly intended to take
into account differences in economic conditions, participant characteristics and services delivered.
The stated rationale for making this change to a system of negotiated standards was to promote “shared
accountability,” described as one of the “guiding principles” of the Workforce Investment Act
(U.S. DOL-ETA, 2001, p. 8). States’ own reports of procedures used to determine WIA
performance standards suggest that there is substantially greater discretion and variation in both
the processes and types of information used to establish the state-level standards.
Because there are strong incentives (rewards and sanctions) for performance outcomes in
WIA, it is important that the data collected and measures used are comparable across states and
localities. The level of the negotiated standard is also critical in the determination of performance
bonuses and sanctions. In order to be eligible for an incentive grant (up to $3 million), states are
required to achieve at least 80 percent of the negotiated performance level for all 17 measures.
States that do not meet their performance goals for two consecutive years may be penalized with
up to a 5-percent reduction in their WIA grant. Reflecting the increased emphasis on continuous
performance improvement in the WIA system, the targeted levels of performance negotiated by
states increase over each of the first three years of WIA (PY 2000-2002) for most states. And
although the law allows states to renegotiate standards in cases of unanticipated circumstances,
the data show that few states exercised this option in the first three years of the program.
The history of the JTPA performance management system suggests important reasons for
concern about the determination of performance standards and the incentives they create for
program managers and staff. In the mid to late 1980s, reports emerged describing how JTPA
program administrators and case workers limited access to program services for more
disadvantaged applicants in the effort to improve measured performance, a practice more widely
known as “cream-skimming” (Dickinson et al., 1988; Orfield and Slessarev, 1986; Anderson et al., 1993). In addition, Courty and Marschke (1997) showed how program managers strategically
organized their “trainee inventories” and timed participant program exits to maximize end of the
year performance levels. Other studies associated a shift to shorter-term, less intensive service
provision under JTPA with the pressure to produce more immediate, low-cost job placements
(Zornitsky and Rubin, 1988; Barnow, 1992).
A recent U.S. General Accounting Office (GAO) report (2002) suggests that history may
be repeating itself. The GAO interviewed WIA program administrators in 50 states and visited
five sites to assess the effectiveness of the WIA performance management system. The report
notes that many states have indicated that “the need to meet performance levels may be the
driving factor in deciding who receives WIA-funded services at the local level” (p. 14). It also
describes how some local areas are limiting access to services for individuals who they perceive
are less likely to get and retain a job. Observing the serious challenges that states and localities
have faced in implementing the system, the GAO suggests that “even when fully implemented,
WIA performance measures may still not provide a true picture of WIA-funded program
performance” (U.S. GAO, 2002, p. 3). In a summary report to the U.S. Department of Labor on
the implementation of WIA, Barnow and Gubits (forthcoming, fn. 12) also found, based on
meetings with officials from about 20 states, that “the greatest dissatisfaction in every instance has
been with the way the performance management system has been implemented.”
Study data and methods
As described in the preceding section, three elements are key to the WIA performance
management system and to similar systems in other government programs: (1) performance
measures to evaluate progress toward performance goals, (2) a method for setting standards and
measuring performance against the standards, and (3) rewards and sanctions that generate
incentives for the achievement of performance goals. This analysis begins with an investigation,
primarily qualitative, of how performance goals and performance standard levels were established
under WIA’s new system. Data for this first part of the study come from:
1. Five-year plans, mandated by the DOL and developed by states, describing how states would implement the WIA program and the performance management system.

2. Guidelines issued by the DOL for performance standards negotiations and parameters recommended for use as baseline values in negotiations. The DOL also established national goals for the WIA performance measures.

3. Data from the DOL on the final levels of negotiated performance standards set by the states.

4. Data from the DOL’s Standardized Program Information Reports (SPIR) on local participant characteristics and services delivered by JTPA agencies in program year 1998, the baseline year used by a majority of states in determining performance standards.
In the WIA five-year plans, states had to indicate the performance standards established
for each of the core indicators (see Appendix A, Table A-1) for program years 2000-2002 and to
explain how they determined the levels of performance goals. They were also required to
describe the management information systems and reporting processes used to track performance,
and how these data would be disseminated and used to improve services and customer
satisfaction. States were given the option to submit the plans for early transition by July 1, 1999
or to submit them later by April 2000, before the July 1, 2000 WIA start date.
Although the WIA state plans are stored in the DOL electronic archives, less than half of
the electronic links were functional in early 2003.3 Contact with state WIA officials and website
searches produced a total of 50 (out of 52) of these plans.4 Information in these plans about
states’ negotiated performance targets, the process by which these performance levels or
standards were established, and how they compared to national goals and projected national
averages of the standards was extracted for analysis. Forty-four states had complete information
about their specific performance targets.
The second part of the study applies correlation and regression analysis to investigate
relationships among negotiated performance standards, states’ attained performance, and
differentials between states’ performance and their negotiated standards. The data for these
analyses include:
1. Information from the DOL on states’ reported (actual) performance levels in 8 quarters under WIA (2nd quarter PY 2000 through 1st quarter PY 2002) for each of 17 performance standards.

2. DOL data on the final levels of negotiated performance standards set by the states.

3. Bureau of Labor Statistics data on state economic conditions by year, in addition to the SPIR data on other state and local characteristics.
Four sets of analyses are conducted to compare states’ workforce development
performance to negotiated standards and other relevant variables. The first set of analyses
computes the differential between states’ performance and their negotiated standards and
examines how these differentials vary across states and by program year. A second set
investigates the relationship between states’ attained performance levels and baseline participant
and area characteristics to determine if there are associations between performance and these
variables for which adjustments to standards were intended to be made. A third set of analyses
examines associations between the performance differentials and states’ baseline participant and
area characteristics. If, in fact, the process of negotiating standards effectively adjusts states’
standards to account for local participant and area characteristics, then the relationships among
these variables should be weaker than those in the second set of analyses described above.
Finally, the last set of analyses focuses on the new participant and employer satisfaction
performance measures and investigates whether there are significant associations between these
measures and the more objective measures of employment, earnings, retention, education and skill
attainment of WIA participants.
Determination of Performance Standards under WIA
As this research and some of the studies discussed above suggest, the determination of
performance standards (or minimum levels of performance to be achieved) is a key task in the
design and implementation of performance management systems that significantly influences the
incentives for public managers and staff operating programs. An important concern for those
involved in setting standards is to use a process that creates a “level playing field” (Social Policy
Research Associates, 1999). Public managers do not want to be held accountable for factors
outside their control or to be unfairly compared to other agencies with different client
populations, economic environments, and other extenuating factors. At the same time,
policymakers want to use the system to motivate performance improvements, and in the case of
WIA, to promote “shared accountability” for results. This is
likely to require an approach that engages public managers in the process of setting performance
standards and makes an attempt to balance risks (e.g., for unanticipated conditions or changes in
external factors that affect outcomes) among the parties involved.
Procedures for Setting State Performance Standards in WIA
One important source of data for setting performance standards is historical (or baseline)
information on past levels of performance achievement, to the extent that these data are available.
Since performance data were collected in the JTPA program, more than half of the states used
some baseline performance measures to determine appropriate levels for the WIA negotiated
performance standards. The baseline data typically came from several different sources: projected
national averages for the negotiated standards provided by the DOL (based on the experiences of
seven early implementation states), federal baseline numbers (available in the federal performance
tracking system, i.e., SPIR data),5 unemployment insurance data, and states’ own performance
baselines from previous program years. Georgia, for example, used program year (PY) 1998
state performance records combined with the projected national averages in negotiations with
regional office representatives and local-level officials to determine the performance targets for
the first three years of WIA. Indiana reported that it used PY 1999 performance data to
determine the performance standards, but it did not have time for consultations with local
workforce development officials in setting the goals; only first-year (PY 2000) goals were
presented in Indiana’s five-year plan. Some states, such as New Hampshire and Ohio, used UI
data from earlier periods (PY 1994-1997) combined with DOL performance data available in the
SPIR to set performance levels.
About one-half of the states also explicitly indicated that negotiations with local workforce
development officials were important in determining performance standards, and many of these
also used some type of baseline data to inform the discussions. States were instructed to take into
account differences in economic conditions, participant characteristics, and services provided.
For a majority, these adjustments to standards were made informally during the review of baseline
information and negotiations. For example, Wisconsin reported using PY 1997 data and the
projected averages in negotiations with local officials to set the standards. A comparison of these
data in Wisconsin’s five-year plan shows that when Wisconsin’s PY 1997 baseline was above the
projected national averages, the projected averages were established as the targets. When
Wisconsin’s baseline numbers were below the projected national averages, the baseline values
were typically set as the targets. The states of Washington, Nebraska, South Carolina and others
followed a similar process. It was rare, as in the case of the state of New York, that all of the
state’s performance baseline measures were above the national targets and were set as the
standards for PY 2000. Only the states of Texas, Maryland and the District of Columbia reported
using statistical models to determine the performance standards.6
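For concreteness, the sketch below expresses the risk-balancing rule that Wisconsin’s plan implies: whichever is lower, the state’s own baseline or the DOL’s projected national average, becomes the target. This is a minimal illustration in Python with hypothetical values, not any state’s actual figures or procedure.

```python
# A minimal sketch of the target-setting rule implied by Wisconsin's plan:
# the lower of the state baseline and the projected national average becomes
# the negotiated target. Values below are illustrative, not actual figures.
def negotiated_target(state_baseline: float, projected_national_avg: float) -> float:
    """Return the lower of the two candidate values as the performance target."""
    return min(state_baseline, projected_national_avg)

# Baseline above the projected average: the projected average is the target.
print(negotiated_target(72.0, 68.0))  # 68.0
# Baseline below the projected average: the baseline itself is the target.
print(negotiated_target(63.0, 68.0))  # 63.0
```

Under this rule, a state never commits to a first-year target above what it has already demonstrated, which is why it is characterized here as a risk-balancing strategy.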
Table 1 presents descriptive statistics on the levels of performance standards set by the
states using data from the DOL on the final negotiated standards. These statistics confirm that
there is considerable variation across the states in the levels of performance standards established
through the negotiation process. Interestingly, among the new WIA performance standards (for
which no historical performance information was available), the participant and employer
satisfaction standards vary the least across the states (standard deviations 2.8 and 3.2,
respectively), while the credential rate measures (for all groups) have comparatively large
standard deviations (7.3-8.7).
Comparison of State Performance Targets with National Goals
The DOL also established national goals for the first three years of WIA performance
measures (see Appendix A, Table A-3) that reflect the articulated objective of continuous
performance improvement. As reported by the Employment and Training Administration (DOL,
2002: 2), WIA “envisions a high-performance workforce system that is continuously improving
and delivering high quality services to its customers.” States’ negotiated performance standards
were compared to these national goals. For about one-third of the states for which information
on the specific levels of negotiated performance is included in their five-year plans, some of the
state targets are above the national goals, and some are below, likely reflecting risk-balancing,
standard-setting strategies such as those used by Wisconsin. The negotiated standards are mostly
or all above the national goals for another third of these states, although only four had standards
set higher than all of the national targets. Arkansas was unique in setting each of its standards
(with the exception of the earnings change measures) exactly 1 percentage point above the
national goals in the first year. Among the others, just three states had established performance
standards that were all below the national goals. North Carolina, for example, used PY 1997
baseline data in its determination of performance standards, and all of the standards were set
significantly below both the state baseline measures and national goals. A few states, such as
Alabama, also adopted a more risk-averse approach, setting some performance standards lower
than baseline values to allow time for adjustment to the new system.
In general, the performance targets established in the state 5-year plans for WIA
implementation reflected the continuous performance improvement objective. These planned
targets and the final negotiated standards for the states7 for the first three years of WIA show that
the negotiated standards, on average, increased about 1 to 2 ½ percentage points between PY
2000 and PY 2001 and between PY 2001 and PY 2002 (see Table 2). In addition, the mean
expected increase in performance levels is larger between PY 2001 and PY 2002 than that going
from PY 2000 to PY 2001 for most standards. The states, in effect, set target levels that not only
required that they improve over time, but also that the magnitude of the improvements increase
from year to year.
Adjustments to Performance Standards
In addition to accounting for factors (demographic, economic or others) known at the
time that performance standards are established, it is important to allow for adjustments to
standards that will offset future or unknown risks of poor performance due to conditions or
circumstances beyond the control of public managers. As described above, many states used
baseline performance data from program years 1999, 1998, 1997 or earlier to establish
performance standards for the first year of WIA and then also built in anticipated performance
improvements for the two subsequent years.
Economic conditions changed significantly, however, between the pre-WIA period and
first three years of WIA implementation. Between 1998 and 1999, unemployment rates were
declining on average, with a median decline of 0.2 percentage points and 75 percent of all states
experiencing a decline. This pattern continued in the year before WIA (1999 to 2000). Between
2000 and 2001, however, this trend reversed. More than 75 percent of the states experienced an
increase in unemployment rates over the course of this year, with a median increase of 0.7 percentage points. Increases in unemployment rates were even greater between 2001 and 2002, with all
states experiencing an increase in unemployment except one that was unchanged. Thus, at the
same time that unemployment rates were increasing and creating adverse labor market conditions
for trainees in the first three years of WIA, the standards for performance achievement in the
program were increasing. (See Figure 1 below.)
Figure 1. Performance goals and local labor market conditions

                                                  PY 2000   PY 2001   PY 2002
Mean entered employment rate standard, adults      66.44     69.17     70.94
Mean unemployment rate                              3.94      4.59      5.35
Despite these dramatic changes in economic conditions, less than a third of the states’ final
negotiated standards were changed from those proposed in their 5-year plans. A few states’ final
negotiated standards, such as those in North Carolina and Delaware, were higher than originally
presented in their plan. Where changes were made, however, it was more common to lower the
negotiated standards. The District of Columbia, Georgia, Idaho, Missouri, New York, Oregon
and Washington, for example, adjusted one or more of their performance standards downward
over these program years. Among the small number of states that made changes, they were most
likely to lower their older youth or dislocated worker standards. One Texas official expressed her
concern in a phone conversation that even with Texas’ relatively sophisticated statistical model
for setting performance standards, adequate adjustments had not been made for economic
conditions. She noted that older youth were most likely to experience poor labor market
outcomes in a recession, as adults would take any job and thereby displace the older youth.
Under JTPA, performance standards were adjusted annually, as shown in Appendix A,
Table A-2. The WIA guidelines directed that the negotiated performance targets take into
account local economic conditions, participant characteristics, and services delivered in the states.
Renegotiation appears to be more of an exception, however, than a routine procedure. The
relationship of the final negotiated (or re-negotiated) standards to these local variables was
examined empirically in correlation and regression analyses using DOL SPIR data that were only
available through program year 1998, the baseline year used by a majority of states in determining
performance standards.8
The question of interest in this analysis is whether the negotiated or re-negotiated
performance standards appear to account (or adjust) for differences across states. The simple
correlation analyses showed only two consistent associations among negotiated performance
standards and participant characteristics. States with higher percentages of Hispanic and limited
English proficiency populations had significantly lower performance targets for all adult,
dislocated worker, and youth performance measures (correlation coefficients ranging from r = -0.214 to -0.575, p<0.0001 for all). The correlation between percentage Hispanic and limited
English proficiency, not surprisingly, was very high at r=0.819; the percentage of the participant
population that was Hispanic was also significantly and positively correlated (p<0.0001) with the
percentage who were single heads of households, had less than a high school degree, lacked work
experience, had a skills deficiency, and were not in the labor force. Among the three states with
performance targets in their 5-year plan that were all below the national goals, California had the
largest proportion of Hispanics among its PY 1998 participant population (34.4%), nearly one-fourth of Rhode Island’s participant population was Hispanic, and North Carolina had the fastest-growing Hispanic population (over 400% increase) between the 1990 and 2000 U.S. Censuses.
In addition, correlations with state unemployment rates (in 1998) showed that states with higher
rates of unemployment had significantly lower standards for adult, dislocated worker and older
youth entered employment rates and younger youth employment retention rates.
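The correlation analysis itself is straightforward to express. The following is a minimal sketch in Python; the values and column names are hypothetical stand-ins for the actual negotiated-standards and SPIR baseline data.

```python
# A minimal sketch of the simple correlation analysis: each state's negotiated
# standard is correlated with baseline (PY 1998) participant and area
# characteristics. All values and column names here are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

states = pd.DataFrame({
    "adult_entered_employment_std": [66.0, 70.0, 64.0, 71.5, 62.0, 68.5],
    "pct_hispanic": [34.4, 8.0, 24.0, 5.5, 30.0, 12.0],
    "pct_limited_english": [20.0, 4.0, 15.0, 3.0, 18.0, 6.0],
    "unemployment_rate_1998": [5.9, 4.2, 5.1, 3.8, 6.2, 4.5],
})

for covariate in ["pct_hispanic", "pct_limited_english", "unemployment_rate_1998"]:
    r, p = pearsonr(states[covariate], states["adult_entered_employment_std"])
    print(f"{covariate}: r = {r:.3f}, p = {p:.4f}")
```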
Ordinary least squares regressions indicated a few more statistically significant
relationships among baseline participant and economic characteristics and the performance
standards negotiated by the states, although these relationships tended to vary across the different
standards.9 For example, a more highly educated participant population in 1998 was significantly
and positively associated with higher standards for entered employment rates, although not for
earnings change or employment retention standards. In addition, the most important factors
affecting entered employment rate standards for adults and dislocated workers were
unemployment rates in 1998 and the change in unemployment rates between 1998 and 1999. The
regression models also included measures of the employment and earnings outcomes of PY1998
participants to account for past performance, and these variables were statistically significant and
positively associated with performance standard levels in most models. However, it was still the
case that the most consistent relationships across standards were the statistically significant and
negative associations between higher percentages of Hispanics or participants with limited English
proficiency and performance standard levels.
Although documentation is not available to confirm that adjustments were being made
deliberately in negotiations to account for these specific baseline characteristics, the empirical
findings above suggest this may be occurring. Interestingly, Table A-2 in Appendix A shows that
the PY 1998 JTPA performance standards adjustment worksheet did not explicitly allow for
adjustments for the proportion of Hispanics,10 although it did adjust for lack of work history, less
than a high school education and not in the labor force, all of which are positively correlated with
the percentage of Hispanics. The JTPA adjustment model did take into account the percentage of
the local area that was black, however, where being black was significantly correlated with having
less than a high school degree and a skills deficiency. The analyses presented in the next section
provide some indication of how effectively the new system of negotiated performance standards
works in adjusting for local characteristics and economic conditions.
States’ Performance under WIA: Is It Up to Standards?
States’ performance relative to negotiated targets
The difference between a state’s attained performance level in a given quarter and the
performance target for that particular program year was computed for each of the 17 performance
standards over the 8 quarters for which performance data were available. Table 3 presents some
descriptive statistics on the magnitude of these differentials by program year. A positive
differential indicates that, on average, states were exceeding their negotiated targets. The
relatively large standard deviations associated with each of these measures suggest that there is
considerable variation among the states in their performance achievements (relative to standards).
Table 4 shows the proportion of states that met or exceeded their performance targets in the 2nd-4th quarters of PY 2000, PY 2001, and the first quarter of PY 2002.
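The differential computation is simple to state. Below is a minimal sketch in Python; the data frames and column names are illustrative stand-ins for the DOL performance and negotiated-standards files, not the actual data.

```python
# A minimal sketch of the performance differential: attained quarterly
# performance minus the negotiated target for that program year, computed
# per state and per standard. Inputs here are illustrative.
import pandas as pd

performance = pd.DataFrame({
    "state": ["WI", "WI", "NC", "NC"],
    "standard": ["adult_entered_employment"] * 4,
    "program_year": [2000, 2001, 2000, 2001],
    "attained": [68.0, 66.5, 63.0, 61.0],
})
targets = pd.DataFrame({
    "state": ["WI", "WI", "NC", "NC"],
    "standard": ["adult_entered_employment"] * 4,
    "program_year": [2000, 2001, 2000, 2001],
    "negotiated_target": [66.4, 69.2, 60.0, 62.0],
})

merged = performance.merge(targets, on=["state", "standard", "program_year"])
# A positive differential means the state exceeded its negotiated target.
merged["differential"] = merged["attained"] - merged["negotiated_target"]
print(merged.groupby(["standard", "program_year"])["differential"].describe())
```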
Simple correlations among the computed performance differentials for the 17 different
measures showed nearly all positive correlations, some weak and others stronger and statistically significant. This is a result program managers should like to see, as it suggests that there are
not likely to be tradeoffs in directing resources towards improving specific aspects of program
performance.
Examining performance across the different measures, however, it is clear that states
struggled to achieve success on some dimensions more than others. Tables 3 and 4 show, for
example, that a majority of states consistently failed to meet their planned goals for the new
credential rate measures (indicating the attainment of a degree, the certification of skills or
training completed) for adults, dislocated workers and older youth. A majority of states also
failed to meet their targets for youth diploma rates, although their performance against this
standard improved over time, even though states’ targets were set at higher levels in 2001 and
2002.
Of potentially greatest concern, however, is the obvious negative turn in performance
differentials going from PY 2001 to the first quarter of PY 2002. A quarter by quarter
examination of the average differentials indicates that up through the last quarter of PY 2001
(June 30, 2002), the performance differentials were generally positive, with the exception of the
credential rate and youth diploma rate measures as discussed above. In the first quarter of PY 2002,
states were below targets on more than half (9) of the 17 measures. Table 4 also shows that the
proportion of states meeting or exceeding their performance targets dropped between PY 2001
and PY 2002 for nearly all measures, some dramatically, such as the 21 percent decrease in the
proportion of states meeting their older youth entered employment rates. Recall that Table 2
showed that states were attempting to achieve larger increases in performance between the 2001
and 2002 program years (in terms of the levels of their negotiated standards) than between the
2000 and 2001 years, at a time when labor market conditions were becoming increasingly
unfavorable.
The GAO’s interviews with WIA program administrators confirm that concerns about
meeting performance targets were widespread across the states. The GAO (2002) reported that
all state program administrators believed that some of the performance targets were set too high
for them to meet, and that the process of performance standards negotiations did not allow for
adequate adjustments to varying economic conditions and demographics. In addition, states
noted the absence of baseline data to use in establishing targets for the new credential rate and
customer satisfaction measures. Some states responded to these pressures by augmenting the
screening process for determining registrations or by limiting registrations of harder-to-serve job
seekers, including dislocated workers whose pre-program earnings were more difficult to replace.
The GAO report also included a comment from a Texas official who indicated that without
Texas’ regression model, which adjusts standards for differences in economic conditions and
participant characteristics, the Texas WIA programs would have also registered fewer workers.
This same GAO (2002) report described the WIA performance management system as a
“high-stakes game” (p. 27). As indicated earlier, states are required to achieve at least 80 percent
of the negotiated performance level for all 17 measures in order to be eligible for incentive grants.
States that do not meet their performance goals for two consecutive years may be penalized with
up to a 5-percent reduction in their WIA grant. Thus, the rewards and sanctions for performance
can have an important impact on states’ resources for WIA program operations.
States’ performance relative to the 80 percent levels of their negotiated targets was
computed, and the percent meeting these minimum performance requirements for each standard
was also computed for each program year. In addition, although performance data are incomplete
for PY 2002, the states that appeared to be at risk of sanctions for failing to achieve at least 80
percent of their performance goals for two consecutive years were identified. The results of these
analyses are presented in Table 5.
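The eligibility and sanction-risk rules just described can be stated compactly. The sketch below is a minimal Python illustration under simplifying assumptions; the dictionaries and flags are hypothetical stand-ins for states’ reported performance records.

```python
# A minimal sketch of the incentive and sanction rules: a state qualifies for
# an incentive grant only by reaching at least 80 percent of its negotiated
# level on all 17 measures, and risks a sanction after two consecutive program
# years below that bar. Inputs are hypothetical.
def meets_minimum(attained: dict, targets: dict, floor: float = 0.80) -> bool:
    """True if the state reaches at least floor * target on every measure."""
    return all(attained[m] >= floor * targets[m] for m in targets)

def at_risk_of_sanction(met_by_year: list) -> bool:
    """True if the state missed the all-measures minimum in two consecutive years."""
    return any(not a and not b for a, b in zip(met_by_year, met_by_year[1:]))

# A state that misses the minimum in PY 2000 and PY 2001 is flagged at risk.
print(at_risk_of_sanction([False, False, True]))  # True
```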
Table 5 shows that there is no performance measure for which all of the states meet 80
percent of their targeted goal in any given program year. Although for most performance
standards and program years, a majority of states are meeting the minimum requirements, when
the percent that achieve 80 percent of targeted levels for all measures (as required to be eligible
for incentive grants) is computed, the numbers drop dramatically. These calculations show that in
PY 2000, only 4 states met the minimum requirements for all 17 performance measures. In PY
2001, 9 states met their 80 percent target levels for performance, and 9 states (although not the
same ones) were also meeting all minimum performance requirements in the first quarter of PY
2002.
What may be most alarming to WIA program administrators, however, is the number of
states that appear to be at risk for sanctions based on the performance management system rules.
Thirty-eight states were identified as failing to achieve at least 80 percent of their performance
goals (for all measures) for two consecutive years. This number also holds when using only PY
2000 and PY 2001 performance data in these calculations. There were no regional patterns or
apparent relationships to the absolute levels of standards set among those states that did not
appear to be at risk for sanctions. These results alone might go a long way toward explaining
WIA program administrators’ great dissatisfaction with the new WIA performance management
system.
State characteristics and performance
What accounts for the relatively poor performance of the states in meeting targeted goals
under WIA? The relationship between states’ quarterly performance on the 17 standards and
states’ baseline participant population characteristics and economic conditions over time was
assessed using correlation and regression analysis. The simple correlations show strong,
statistically significant and negative correlations between the percentage of Hispanics and limited
English proficiency participants (at baseline) and attained performance levels. This finding is
particularly interesting given the earlier observation that states with higher proportions of
Hispanics and limited English proficiency participants in their populations negotiated significantly
lower performance targets. Higher percentages of single heads of households and participants
without a high school degree are also fairly consistently associated with poorer performance. In
terms of economic conditions, there are some significant, negative associations between states’
unemployment rates in 2000, 2001 and 2002 and their attained performance levels, the strongest
being the negative associations between the older youth entered employment rates and
unemployment rates. This finding is concordant with the observation of the Texas official
regarding the additional challenges these youth face during poor labor market conditions, and
with the actions of some states to lower performance targets for older youth after planned targets
were submitted to the DOL.
Regression analyses were performed using both the states’ attained performance levels and
the differentials between their attained performance levels and negotiated standards as dependent
variables to assess the relationship of local participant and area characteristics to performance.
Separate regressions were estimated for performance relative to each of the 17 performance
standards for these two dependent variables, and thus, the detailed results are not presented for all
models. In general, if the states’ initial processes for adjusting performance standards through
negotiations worked as intended, one would expect to see fewer or weaker relationships between
the performance differentials (versus attained performance levels) and these baseline participant
and area characteristics.
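For each standard, then, a pair of regressions is estimated. A minimal sketch follows, assuming the statsmodels library; the data and variable names are hypothetical. Weaker covariate effects in the differential model than in the levels model would suggest that negotiation adjusted the standards appropriately.

```python
# A minimal sketch of the regression pair estimated for each standard: the
# attained performance level, and the performance differential, each regressed
# on baseline participant and area characteristics. Data are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "attained": [68.0, 72.5, 61.0, 70.2, 65.5, 74.0],
    "differential": [1.6, 3.0, -4.2, 0.8, -1.5, 4.1],
    "pct_hispanic": [34.4, 8.0, 24.0, 5.5, 30.0, 12.0],
    "pct_no_hs_degree": [28.0, 18.0, 25.0, 15.0, 27.0, 20.0],
    "unemployment_rate_2001": [5.9, 4.2, 5.1, 3.8, 6.2, 4.5],
})

X = sm.add_constant(df[["pct_hispanic", "pct_no_hs_degree", "unemployment_rate_2001"]])
for outcome in ["attained", "differential"]:
    results = sm.OLS(df[outcome], X).fit()
    print(outcome, results.params.round(3).to_dict())
```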
Table 6 presents results for six of these models. The first two models show the results of
regressions of attained performance and the performance differential for adult entered
employment rates. The other four regression models are of older youth entered employment rate
and employment retention rate performance and performance differentials.
The findings of the model of adult entered employment rate performance show that, first
of all, none of the participant population characteristics are significantly related to entered
employment rate performance levels. The sole statistically significant explanatory variable in
this model is the unemployment rate in 2001, which is negatively related (as expected) to entered
employment rates. In the second model of the differential between attained performance and the
negotiated standard, there are no statistically significant predictors, suggesting that the process of
negotiating entered employment rate standards for adults may have effectively accounted for local
economic factors. Looking at Table 4, one can see that a higher percentage of states met their performance targets for this standard than for other standards, with the exception of
participant and employer satisfaction and youth skill attainment rates in the first two years.
The regression results for all other models presented a less optimistic picture, however, of
the effectiveness of the performance standards adjustments under WIA. Turning to the subset of
these results shown in Table 6, the third model for older youth entered employment rate
performance shows that the percentage of Hispanics among the participant population is
significantly and negatively related to entered employment rate performance levels, as is the
unemployment rate in 2001. In addition, past state performance (as measured by the wage at
placement) is positively related to higher entered employment rate levels for older youth. In the
fourth model of the differential between older youth entered employment levels and the negotiated
standard, the percentage of Hispanics in the population and past performance are no longer
statistically significant predictors, but the unemployment rate is still a very strong predictor.
Thus, these results appear to confirm the verbal expressions of state program administrators that
the WIA performance standards system did not adequately adjust for changes in economic
conditions that hit older youth particularly hard.
The final two models in Table 6 are more representative of the regression findings for the
other performance standards, and the results are rather discouraging. The adjustments made to
standards during the negotiation process appear to do little to account for demographic and
economic factors that significantly affect older youth retention rate performance. In both the
model of attained performance levels and the retention rate performance differential, race,
education level, work history, past state performance and unemployment rates are all statistically
significant predictors of performance. Among the other regression models not shown,
unemployment rates were the most consistent, negative predictors of performance (levels or
differentials), suggesting again that states were not prepared or in a position to adjust for what
turned out to be significant risks of failure to meet performance targets due to the economic
downturn.
Customer satisfaction performance
The WIA measures of participant and employer satisfaction were intended to add a new
dimension to performance evaluation that makes program administrators accountable to the
primary customers of WIA services. Customer satisfaction measures, typically described as “soft”
or more subjective measures of performance, have received mixed reviews in terms of their value
in providing useful feedback to program administrators. Kelly and Swindell (2002) note that they
typically do not correlate strongly with more objective measures of program performance. This is
not necessarily a problem, however, if these measures are picking up on other dimensions of
service effectiveness that are within the purview of program administrators to affect or change.
The only statistically significant correlation between participant satisfaction performance
and other WIA performance outcomes is that with employer satisfaction (r=0.416, p<0.0001).
Likewise, there are no other statistically significant correlations of employer satisfaction with
other objective measures of performance (i.e., employment, job retention, earnings, etc.) in the
WIA system. One hypothesis for the observed relationship between participant and employer
satisfaction might be that better trained or prepared participants are of greater value to employers, who in turn provide better compensation or work environments that produce greater levels of satisfaction among these workers. If this were the case, however, one might expect a
stronger relationship between the participant satisfaction measures and other measures of labor
market outcomes. It may be more likely that these two measures are picking up on other more
subjective (or administrative) dimensions of performance that do not overlap with the labor
market outcome measures.
In fact, the specific wording of the questions used to assess customer satisfaction makes it
practically impossible to determine what particular aspects of the WIA program or post-program
experiences participants or employers might be rating in response to these questions. Three
questions are asked of participants and employers statewide, with respondents rating their
satisfaction levels on a scale of 1 (lowest satisfaction) to 10: (1) Was the participant (employer)
satisfied with services? (2) Did the services meet the expectations of the customer? and (3) How
well did the service compare to the ideal set of services? (GAO, 2002). Since these data on
customer satisfaction are collected (according to WIA rules) in the second or third quarter after a
participant exits from the program, this may broaden the scope of responses, covering both time
during and following the program to evaluate service effectiveness. Furthermore, because
customer satisfaction performance is measured at the state-level rather than the point of service
(i.e., the local board or provider level), feedback for program managers attempting to improve
programs at the local level is likely to be limited.
Conclusions and Recommendations
A goal of the WIA performance management system was to standardize the types of
performance data collected, including the use of unemployment insurance data to track labor
market outcomes, and to compel states to develop the management information system capacity
to produce more accurate and comparable measures of program performance. At the same time,
WIA increased the flexibility and discretion of states in determining the levels of the performance
standards that they would be expected to meet. The regression model approach used in the JTPA
program to make annual adjustments in performance targets was abandoned under WIA, and states were allowed to determine their own procedures for negotiating and establishing final
standards. Thus, as one component of the performance equation (the post-program performance
level) was seemingly measured more rigorously and reliably under WIA, the other key component
(the performance standard) was determined by more widely varying approaches with various
types of baseline data and different groups involved in the negotiation processes. It is plausible,
therefore, that rather than increasing the comparability of performance achievements across the
states, the WIA system added a new source of arbitrariness to the measures that could
compromise their effectiveness as a tool for performance evaluation and improvement.
The reported responses of WIA program administrators and staff to the new WIA
performance management system incentives confirmed that the system may not be working
effectively to promote the program’s goals. In the absence of an adequate process for establishing
and adjusting performance standards over time, program managers appeared to be making
undesirable post-hoc accommodations, e.g., restricting participant registrations in discriminatory
ways. The fact that the baseline data used by a majority of states (PY ‘94-‘99) to determine
performance targets up to three years ahead of time were a particularly poor approximation of the
actual conditions faced by the states in the first three years of WIA likely exacerbated these
problems. In effect, in a program where performance is judged primarily by labor market
outcomes, it appears that the performance management system failed to fully account for changes
in labor market conditions and other factors that likely directly (and negatively) affected
participant outcomes.
In addition, to advance the continuous performance improvement goals of WIA,
expectations for continually increasing performance levels were built into the negotiated
performance targets. In a system that appropriately adjusts standards for context or local
conditions, a subsequently lower (in absolute terms) performance level could be documented as a
performance improvement if it is achieved under more adverse conditions. As shown in this
study, however, both national goals and state standards set higher absolute levels of performance
requirements for nearly all measures in each year of the PY 2000-2002 period of WIA. In the
absence of regular adjustments for changing local conditions, the system appeared to promote
increased risk for program managers, rather than “shared accountability,” holding managers
accountable for some factors outside of their control.
In a system where the rewards (up to $3 million in grants) and sanctions (up to a 5%
reduction in grants) could have important implications for operating budgets, the performance
measures should provide feedback to managers, staff and other service providers about the
effectiveness of their activities in improving service quality and participant outcomes. The
current WIA performance management system, like the earlier JTPA system, appears to instead
be generating inappropriate incentives for program managers to improve measured performance
rather than service access or quality. In its effort to improve the WIA performance management
system, the DOL has focused on making its WIA Standardized Record Data (WIASRD) system,
which replaced the SPIR, fully functional. By using wage records rather than administrative records and survey data, and by reporting performance quarterly instead of annually, the DOL aims to produce more complete, accurate and timely performance information at the program level to inform operational
decisions. If these technological improvements are going to ameliorate some of the apparent
flaws of the current system, however, they also need to address the information and procedures
used by states in establishing performance standards.
In decentralized government programs like WIA, imparting a role to state and local
officials in the determination of performance standards or targets should improve their validity and
fairness. Local program managers appear to have specific knowledge about client populations
and area characteristics important to making appropriate accommodations for local factors that
influence program performance. However, state and local managers also need to have up-to-date,
readily accessible information (e.g., on local characteristics, economic conditions, etc.) to provide
useful input into the process of setting performance targets. Using data that were 2-3 years old to
project performance targets 1-3 years in advance created fundamental problems with the WIA
performance targets. In addition, the procedures followed by local areas to provide input into the
performance standard setting process should be fairly uniform or consistent across areas,
particularly if regional, state, or local comparisons of performance outcomes are made to
determine performance awards. The DOL might consider returning to a system of statistical
model adjustments to performance standards that allows for the continued input of state and local
managers via the statistical model specification.
If the DOL continues to evaluate WIA program performance on a quarterly basis, data
regularly collected in the management information system should be used to make ongoing and
systematic adjustments in the performance targets for factors that influence participant outcomes
but are outside the control of program managers. To facilitate this, the DOL should continue to
work with states, localities and third parties contracted to manage data systems to develop local
management information system capacity that is necessary for timely and effective use of these
data at their point of origination. A system designed to promote continuous performance
improvement should also include a mechanism for gathering information from program managers
and independent sources at regular intervals to assess the influence of both organizational and
environmental factors on program operations and outcomes.
In terms of customer satisfaction, the usefulness of the new customer satisfaction
measures to program managers might be enhanced by several changes. First, the dimensions of
participant or employer satisfaction that are being measured should be made more explicit in the
survey questions. Questions should address the interactions of customers with program staff and
their experiences with the WIA service process separately from participants’ (or employers’)
perceived value of services delivered in helping them to achieve labor market success (or to meet
employers’ labor market needs). In addition, performance on these measures should be evaluated
at the point of service, (i.e., tracking the center at which services were received), so that these
data will be more useful to local program managers in making performance improvements. Kelly
and Swindell (2002) also found that disaggregating citizen satisfaction measures by smaller units
provides a fairer assessment of quality and effectiveness. As currently designed, and with the
important differences across states in the number of service delivery areas, participant populations
and service approaches, the WIA customer satisfaction measures are probably of more symbolic
value to customers than they are of use to program managers.
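As a rough illustration of what point-of-service tracking could look like, the sketch below averages the three survey items described in Appendix Table A-1 and disaggregates the scores by service center; the data, field names, and center identifiers are all hypothetical.

# Illustrative sketch of disaggregating customer satisfaction by service point.
# The three 1-10 survey items mirror the measure described in Appendix Table A-1;
# the records and column names are hypothetical.
import pandas as pd

surveys = pd.DataFrame({
    "center_id":        ["A", "A", "B", "B", "B"],
    "satisfied":        [8, 6, 9, 7, 8],  # overall satisfaction, 1-10
    "met_expectations": [7, 6, 9, 8, 8],  # services met expectations, 1-10
    "vs_ideal":         [6, 5, 8, 7, 7],  # comparison with the "ideal set", 1-10
})

# Score each response as the average of the three items, then summarize by center
surveys["score"] = surveys[["satisfied", "met_expectations", "vs_ideal"]].mean(axis=1)
by_center = surveys.groupby("center_id")["score"].agg(["mean", "count"])
print(by_center)  # center-level results a local program manager could act on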
Finally, “high stakes” performance management systems need to incorporate adequate
buffers for errors and imprecision in performance measurement to balance risks and rewards for
managers. Careful case reviews should be undertaken before sanctions are applied or large
bonuses awarded.
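A minimal sketch of this decision rule, assuming the 80 percent minimum described in Table 5 and a hypothetical flag-for-review step, might look as follows; it is illustrative, not the DOL's actual sanctioning procedure.

# WIA-style decision rule: a state "meets" a measure if attained performance is
# at least its negotiated standard, and satisfies the minimum requirement at
# 80 percent of the standard (the 20 percent buffer). The flag-for-review step
# reflects this study's recommendation, not current DOL practice.

def classify(attained: float, standard: float, buffer: float = 0.20) -> str:
    if attained >= standard:
        return "met or exceeded standard"
    if attained >= (1.0 - buffer) * standard:
        return "within buffer (no sanction)"
    return "below minimum: flag for case review before sanction"

# Using the mean adult entered employment rate standard from Table 1 (68.8):
print(classify(attained=70.2, standard=68.8))  # met or exceeded standard
print(classify(attained=57.0, standard=68.8))  # within buffer (no sanction)
print(classify(attained=52.0, standard=68.8))  # below minimum: flag for case review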
The difficulties and setbacks described in this study of the WIA performance management
system are indicative of ongoing challenges that public managers more generally face in the effort
to design and implement outcomes-based performance management systems in government
programs. Particularly in social programs, it is practically infeasible to distinguish precisely the
contributions of program services and management to customer outcomes from the influence of
other local factors that can aid or harm program performance. Thus, as the standards and stakes
for performance outcomes are raised, it is not surprising that public managers sometimes turn to
counterproductive means of achieving higher measured performance at the expense of other
program goals. No matter how advanced our management information systems for tracking
performance or our statistical models for adjusting for external factors become, some degree of
bias and error will remain in our measures. Is a 20 percent buffer, as used in the WIA
performance management system to judge performance below standards, sufficient to insure
against unfairly applied sanctions or denied rewards? These are the kinds of questions that
policymakers and public managers will have to ask in their ongoing efforts to improve public
program outcomes through the use of performance management systems.
References
Anderson, Kathryn H., Burkhauser, Richard V. and Raymond, Jennie E. (1993) The Effect of
Creaming on Placement Rates Under the Job Training Partnership Act. Industrial and Labor
Relations Review, 46(4): 613-624.
Barnow, Burt S. (1992) The Effects of Performance Standards on State and Local Programs. In
Charles F. Manski and Irwin Garfinkel, eds., Evaluating Welfare and Training Programs.
Cambridge, MA: Harvard University Press.
Barnow, Burt S. and Gubits, Daniel B. (Forthcoming) Review of Recent Pilot, Demonstration,
Research, and Evaluation Initiatives to Assist in the Implementation of Programs under the
Workforce Investment Act. Chapter 5 of the Strategic Plan for Pilots, Demonstrations, Research,
and Evaluations, 2002-2007.
Courty, Pascal and Marschke, Gerald R. (1997) Empirical Investigation of Gaming Responses to
Performance Incentives. Working paper, The University of Chicago.
Dickinson, Katherine P. and West, Richard W. (1988) Evaluation of the Effects of JTPA
Performance Standards on Clients, Services, and Costs. National Commission for Employment
Policy Research Report No. 88-17.
Dyke, Andrew, Heinrich, Carolyn J., Mueser, Peter and Troske, Kenneth. (2003) The Effects of
Welfare-to-Work Program Activities on Labor Market Outcomes. Working paper, University of
North Carolina at Chapel Hill.
Hatry, H.P., Morley, E., Rossman, S.B. and Wholey, J.S. (2003) How Federal Programs Use
Outcome Information: Opportunities for Federal Managers. IBM Endowment for the Business of
Government Report, May.
Heckman, James J., Heinrich, Carolyn J. and Smith, Jeffrey. (2002) The Performance of
Performance Standards. Journal of Human Resources, 37(4): 778-811.
Heinrich, Carolyn J. (2003) Measuring Public Sector Performance and Effectiveness. In Guy
Peters and Jon Pierre, eds., Handbook of Public Administration. London: Sage Publications, pp.
25-37.
Kelly, J. M. and Swindell, D. (2002) A Multiple-Indicator Approach to Municipal Service
Evaluation: Correlating Performance Measurement and Citizen Satisfaction across Jurisdictions.
Public Administration Review, 62(5): 610-621.
Orfield, Gary and Slessarev, Helene. (1986) Job Training Under the New Federalism: JTPA in
the Industrial Heartland. Report to the Subcommittee on Employment Opportunities, Committee
on Education and Labor, U.S. House of Representatives.
Radin, Beryl A. (2000) Beyond Machiavelli: Policy Analysis Comes of Age. Washington, DC:
Georgetown University Press.
Sheets, Robert G. (2002) From Programs to Systems to Markets: Rethinking the Role and Scope
of Performance Management in Workforce Development. Working paper, Northern Illinois
University.
Social Policy Research Associates. (1999) Guide to Performance Standards for the Job Training
Partnership Act for Program Years 1998 and 1999.
U.S. Department of Labor, Employment and Training Administration. (2001) 2002 Annual
Performance Plan for Committee on Appropriations.
U.S. General Accounting Office. (2002) Improvements Needed in Performance Measures to
Provide a More Accurate Picture of WIA's Effectiveness. GAO Report No. 02-275.
Zornitsky, Jeffrey and Rubin, Mary. (1988) Establishing a Performance Management System for
Targeted Welfare Programs. National Commission for Employment Policy Research Report No.
88-14, August.
Table 1: Descriptive Information on the Level of Negotiated Performance Standards^a

Negotiated Performance Standard               Mean       Std. dev.  Minimum    Maximum
Adult entered employment rate                 68.8%      5.0%       45.0%      78.0%
Adult employment retention rate               78.4       4.1        60.0       88.0
Adult earnings change                         $3227.71   $562.96    $674.00    $4638.00
Adult credential rate                         51.8%      8.5%       30.0%      71.0%
Dislocated worker entered employment rate     75.0       5.0        61.0       84.4
Dislocated worker employment retention rate   85.3       4.9        59.0       93.2
Dislocated worker earning replacement rate    91.1       5.0        80.0       106.0
Dislocated worker credential rate             52.9       8.7        27.0       72.0
Older youth entered employment rate           63.5       4.9        50.0       75.0
Older youth employment retention rate         75.3       5.0        59.0       83.6
Older youth earnings change                   $2744.92   $565.76    $517.00    $4075.00
Older youth credential rate                   44.3       7.3        21.0       55.0
Younger youth retention rate                  52.5       6.6        35.0       74.0
Younger youth skill attainment rate           67.7       7.3        50.0       90.0
Younger youth diploma rate                    50.3       7.9        25.0       66.0
Employer satisfaction                         66.3       3.2        60.0       78.0
Participant satisfaction                      68.9       2.8        63.0       78.0

a. For 51 states and Puerto Rico.
Table 2: Differences in Negotiated Performance Standards Levels between Program Years
2000-2001 and Program Years 2001-2002

                                              Change in standard    Change in standard
                                              PY 2000-2001          PY 2001-2002
Performance Standard                          Mean (Std. dev.)      Mean (Std. dev.)
Adult entered employment rate                 1.41 (1.16)           1.67 (1.56)
Adult employment retention rate               1.29 (1.30)           1.56 (1.92)
Adult earnings change                         71.18 (212.40)        95.63 (285.34)
Adult credential rate                         2.51 (4.07)           2.48 (3.09)
Dislocated worker entered employment rate     1.47 (2.22)           0.13 (10.27)
Dislocated worker employment retention rate   1.36 (1.13)           1.35 (1.41)
Dislocated worker earning replacement rate    1.14 (1.75)           1.47 (1.33)
Dislocated worker credential rate             1.91 (2.55)           2.58 (3.72)
Older youth entered employment rate           1.61 (1.20)           1.69 (1.27)
Older youth employment retention rate         1.35 (1.09)           1.74 (1.43)
Older youth earnings change                   59.80 (157.86)        99.72 (169.90)
Older youth credential rate                   1.52 (4.47)           2.75 (3.36)
Younger youth retention rate                  1.39 (2.34)           1.97 (2.21)
Younger youth skill attainment rate           2.16 (2.17)           2.34 (2.41)
Younger youth diploma rate                    1.67 (3.74)           2.28 (2.80)
Employer satisfaction                         1.62 (2.08)           1.79 (1.34)
Participant satisfaction                      1.62 (1.25)           1.59 (1.20)
Table 3: Descriptive Statistics on Performance Differentials (Differences between Attained
Quarterly Performance Levels and Negotiated Performance Standards)

                                              PY 2000               PY 2001               PY 2002
Performance Measure/Standard                  Mean (Std. dev.)      Mean (Std. dev.)      Mean (Std. dev.)
Adult entered employment rate                 1.50 (9.34)           3.19 (10.15)          -1.32 (12.41)
Adult employment retention rate               2.18 (9.92)           1.62 (10.27)          1.34 (7.94)
Adult earnings change                         $379.37 (1370.39)     $264.86 (1409.74)     -$39.00 (1212.22)
Adult credential rate                         -5.32 (24.08)         -2.78 (14.90)         -3.33 (14.37)
Dislocated worker entered employment rate     1.19 (10.20)          2.11 (12.45)          -0.04 (12.86)
Dislocated worker employment retention rate   -0.39 (10.61)         -0.05 (11.24)         0.52 (6.38)
Dislocated worker earning replacement rate    13.55 (28.33)         12.02 (24.58)         13.51 (29.09)
Dislocated worker credential rate             -4.67 (25.42)         1.10 (18.50)          -0.65 (16.16)
Older youth entered employment rate           4.59 (13.04)          4.42 (13.97)          -0.72 (15.61)
Older youth employment retention rate         5.90 (13.33)          1.88 (13.94)          0.48 (9.16)
Older youth earnings change                   $649.66 (1613.48)     $543.32 (1406.05)     $180.43 (1258.43)
Older youth credential rate                   -6.26 (25.47)         -6.18 (16.93)         -8.97 (17.25)
Younger youth retention rate                  8.35 (27.12)          1.05 (23.17)          -0.42 (17.95)
Younger youth skill attainment rate           15.92 (21.78)         3.88 (22.44)          0.13 (21.52)
Younger youth diploma rate                    -9.53 (27.56)         -2.58 (21.80)         -0.46 (18.24)
Employer satisfaction                         7.85 (7.09)           8.08 (7.03)           5.78 (7.23)
Participant satisfaction                      8.35 (7.88)           8.38 (6.65)           5.87 (6.77)
Table 4: Percent of States Meeting or Exceeding their Negotiated Performance Targets in
Program Years 2000-2002

                                              Percent of states meeting or exceeding
                                              their negotiated performance target
Performance Measure/Standard                  PY 2000    PY 2001    PY 2002
Adult entered employment rate                 56.7%      66.5%      61.5%
Adult employment retention rate               54.0       60.7       57.7
Adult earnings change                         49.3       64.6       48.1
Adult credential rate                         36.7       45.6       46.2
Dislocated worker entered employment rate     52.7       65.5       55.8
Dislocated worker employment retention rate   42.0       58.7       51.9
Dislocated worker earning replacement rate    54.7       74.8       61.5
Dislocated worker credential rate             36.7       58.7       55.8
Older youth entered employment rate           58.7       63.6       42.3
Older youth employment retention rate         52.0       61.2       48.1
Older youth earnings change                   52.7       64.6       59.6
Older youth credential rate                   29.3       31.6       23.1
Younger youth retention rate                  38.0       59.2       57.7
Younger youth skill attainment rate           72.0       69.4       53.9
Younger youth diploma rate                    25.3       45.6       50.0
Employer satisfaction                         45.3       75.7       69.2
Participant satisfaction                      51.3       78.6       76.9
Table 5: Percent of States Meeting their Minimum (80 percent of Negotiated Performance
Standards) Requirements in Program Years 2000-2002

                                              Percent of states meeting their minimum
                                              performance requirements
Performance Measure/Standard                  PY 2000    PY 2001    PY 2002
Adult entered employment rate                 90.0%      93.7%      86.5%
Adult employment retention rate               77.5       60.7       57.7
Adult earnings change                         69.3       77.7       73.1
Adult credential rate                         48.7       73.3       75.0
Dislocated worker entered employment rate     92.0       89.8       88.5
Dislocated worker employment retention rate   77.3       93.7       92.3
Dislocated worker earning replacement rate    74.7       91.8       94.2
Dislocated worker credential rate             48.7       76.2       73.1
Older youth entered employment rate           88.0       89.3       80.8
Older youth employment retention rate         71.3       90.3       86.5
Older youth earnings change                   61.3       80.1       69.2
Older youth credential rate                   37.3       51.0       44.2
Younger youth retention rate                  48.0       72.3       73.1
Younger youth skill attainment rate           75.3       83.5       76.9
Younger youth diploma rate                    34.7       58.2       67.3
Employer satisfaction                         50.00      85.44      88.46
Participant satisfaction                      57.33      84.47      90.38
Met all minimum performance requirements      8.67       17.48      17.31
Table 6: Regression models of attained performance levels and performance differentials

                              Adult entered                    Older youth entered              Older youth employment
                              employment rate (n=398)          employment rate (n=392)          retention rate (n=364)
Participant and local
area characteristics          Attained        Differential     Attained        Differential     Attained        Differential
Intercept                     20.17 (33.38)   -39.19 (33.66)   23.65 (39.71)   -29.73 (42.90)   114.1 (41.37)*  49.72 (42.66)
Mean age                      0.98 (0.65)     0.79 (0.65)      0.90 (0.77)     0.43 (0.83)      0.72 (0.81)     0.28 (0.83)
% black                       -0.02 (0.05)    0.04 (0.05)      -0.01 (0.06)    0.08 (0.07)      -0.11 (0.07)    -0.32 (0.07)*
% Hispanic                    -0.03 (0.06)    0.02 (0.06)      -0.17 (0.07)*   -0.04 (0.08)     -0.14 (0.07)*   -0.29 (0.07)*
% female                      0.16 (0.12)     0.19 (0.12)      0.03 (0.14)     -0.03 (0.15)     -0.04 (0.15)    -0.12 (0.15)
% w/high school degree        0.07 (0.15)     -0.09 (0.15)     -0.11 (0.17)    -0.03 (0.19)     -0.61 (0.18)*   -0.61 (0.19)*
% greater than high school    0.20 (0.19)     0.19 (0.19)      -0.25 (0.22)    0.08 (0.24)      -0.78 (0.23)*   -0.63 (0.24)*
% lack work history           -0.02 (0.06)    -0.02 (0.06)     -0.11 (0.17)    0.12 (0.08)      -0.20 (0.08)*   -0.35 (0.08)*
% single head of household    -0.32 (0.23)    0.14 (0.23)      0.25 (0.27)     0.58 (0.29)*     0.07 (0.27)     0.22 (0.28)
% basic skills deficient      -0.03 (0.06)    0.01 (0.06)      0.09 (0.07)     0.12 (0.07)      -0.04 (0.07)    0.01 (0.07)
% not in labor force          0.02 (0.06)     0.01 (0.06)      -0.02 (0.07)    0.01 (0.08)      -0.002 (0.07)   -0.08 (0.08)
% welfare recipient           -0.08 (0.08)    -0.02 (0.08)     -0.07 (0.10)    -0.11 (0.10)     0.04 (0.10)     0.18 (0.10)
Mean pre-program wage         0.17 (1.75)     2.33 (1.76)      -0.98 (2.06)    0.78 (2.23)      -5.50 (2.14)*   -4.90 (2.21)*
Unemployment rate 2001        -1.51 (0.73)*   -1.14 (0.74)     -4.20 (0.87)*   -5.51 (0.94)*    -2.72 (0.90)*   -2.03 (0.93)*
Change in unemployment
  rate (2001-02)              2.64 (1.51)     0.93 (1.52)      -0.64 (1.79)    -1.44 (1.93)     0.20 (1.89)     -0.80 (1.94)
Entered employment rate 1998  0.20 (0.18)     0.06 (0.18)      0.12 (0.21)     -0.15 (0.22)     0.10 (0.21)     -0.24 (0.22)
Wage at placement 1998        0.66 (1.70)     -1.62 (1.71)     5.16 (2.02)*    3.28 (2.18)      7.02 (2.07)*    7.47 (2.14)*
Adjusted R-squared            6.4%            0.1%             10.5%           10.9%            10.2%           14.0%

Note: Entries are coefficients with standard errors in parentheses.
Appendix A: Basic Information on the JTPA and WIA Performance Standard Systems

Table A-1: Workforce Investment Act program performance measures (*indicates measure new
to WIA)

Adults
  Entered employment rate: The percentage of adults who obtained a job by the end of the first
  quarter after program exit (excluding participants employed at registration).
  Employment retention rate at 6 months: Of those who had a job in the first quarter after exit,
  the percentage of adults who have a job in the third quarter after exit.
  Average earnings change in 6 months: Of those who had a job in the first quarter after exit, the
  post-program earnings increase relative to pre-program earnings.
  Employment and credential rate*: Of those adults who received WIA training services, the
  percentage who were employed in the first quarter after exit and received a credential by the
  end of the third quarter after exit.

Dislocated workers
  Entered employment rate: The percentage of dislocated workers who obtained a job by the end
  of the first quarter after program exit (excluding those who were employed at registration).
  Employment retention rate at 6 months: Of those who had a job in the first quarter after exit,
  the percentage of dislocated workers who have a job in the third quarter after exit.
  Earnings replacement rate in 6 months: Of those who had a job in the first quarter after exit,
  the percentage of pre-program earnings that are earned post-program.
  Employment and credential rate*: Of those dislocated workers who received WIA training
  services, the percentage who were employed in the first quarter after exit and received a
  credential by the end of the third quarter after exit.

Older youth (19-21)
  Entered employment rate: The percentage of older youth who were not enrolled in
  post-secondary education or advanced training in the first quarter after program exit and
  obtained a job by the end of the first quarter after exit (excluding those who were employed at
  registration).
  Employment retention rate at 6 months: Of those who had a job in the first quarter after exit
  and were not enrolled in post-secondary education or advanced training in the third quarter
  after program exit, the percentage of older youth who have a job in the third quarter after exit.
  Average earnings change in 6 months: Of those who had a job in the first quarter after exit and
  were not enrolled in post-secondary education or advanced training, the post-program earnings
  increase relative to pre-program earnings.
  Employment/education/training and credential rate*: The percentage of older youth who are in
  employment, post-secondary education, or advanced training in the first quarter after exit and
  received a credential by the end of the third quarter after exit.

Younger youth
  Retention rate: The percentage in employment, post-secondary education, advanced training,
  or apprenticeships in the third quarter after exit.
  Skill attainment rate: The percentage attaining at least two goals relating to basic skills, work
  readiness, skill attainment, entered employment and skill training.
  Diploma rate: The percentage earning a secondary school diploma or its recognized equivalent
  (GED).

Customer satisfaction
  Participant satisfaction*: The average of three statewide survey questions, rated 1 to 10
  (1=very dissatisfied to 10=very satisfied), asking whether participants were satisfied with
  services, whether services met customer expectations, and how the services compared to the
  "ideal set" of services.
  Employer satisfaction*: The average of three statewide survey questions, rated 1 to 10 (1=very
  dissatisfied to 10=very satisfied), asking whether employers were satisfied with services,
  whether services met customer expectations, and how the services compared to the "ideal set"
  of services.
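For concreteness, a minimal sketch of how one of these measures (the adult entered employment rate) might be computed from individual exit records follows; the DataFrame columns are hypothetical stand-ins for WIASRD fields, not the actual record layout.

# Sketch of computing the adult entered employment rate defined in Table A-1
# from individual exit records (hypothetical fields).
import pandas as pd

exiters = pd.DataFrame({
    "employed_at_registration": [False, False, True, False],
    "employed_q1_after_exit":   [True,  False, True, True],
})

# The denominator excludes participants already employed at registration (Table A-1)
base = exiters[~exiters["employed_at_registration"]]
rate = base["employed_q1_after_exit"].mean()
print(f"Adult entered employment rate: {rate:.1%}")  # 2 of 3 -> 66.7%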
Table A-2: Program Year (PY) 1998 JTPA performance standards adjustment worksheet

PY 98 JTPA Performance Standards Worksheet
A. Service Delivery Area's Name    B. SDA #    C. Performance Period
D. Type of Standard    E. Performance Measure

                                                   G. SDA    H. National  I. Difference  J. Weights  K. Effect of
F. Local Factors                                   factor    averages     (G - H)                    local factors
                                                   values                                            (I * J)
1. % Female                                                  71.3                        -0.050
2. % Age 55 or more                                          1.9                         -0.130
3. % Not a high school graduate                              17.8                        -0.066
4. % Post-high school                                        26.1                        0.008
5. % Dropout under age 30                                    8.1                         -0.015
6. % Black (not Hispanic)                                    26.4                        -0.027
7. % Minority male                                           11.6                        -0.026
8. % Cash welfare recipient                                  40.9                        -0.031
9. % Long-term TANF recipient                                15.3                        -0.018
10. % SSI recipient                                          3.3                         -0.133
11. % Basic skills deficient                                 47.0                        -0.037
12. % Individual with disability                             8.1                         -0.096
13. % Lacks significant work history                         32.4                        -0.055
14. % Homeless                                               1.7                         -0.043
15. % Vietnam-era veteran                                    2.2                         -0.081
16. % Not in labor force                                     32.2                        -0.108
17. % Unemployed 15 or more weeks                            31.9                        -0.073
18. % UI claimant or exhaustee                               13.2                        0.022
19. Unemployment rate                                        5.7                         -0.608
20. 3-year growth in earnings in trade                       0.0                         0.245
21. Annual earnings in retail and wholesale trade            17.3                        -0.539
22. % Families with income below poverty                     10.6                        -0.211

L. Total (sum of K)
M. National departure point                                  60.0
N. Model-Adjusted Performance Level (L + M)
O. Governor's Adjustment
P. SDA Performance Standard

Note: Numbers in this table were used for the adult follow-up employment rate measure in PY
'98. Performance standards worksheets for earnings performance measures include an
adjustment for % limited English proficiency and pre-program wage.
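Restated algebraically (the notation is ours; the quantities are the worksheet's lettered items), the computation in Table A-2 is:

\[
N \;=\; M + \underbrace{\sum_{i=1}^{22} J_i \,(G_i - H_i)}_{\text{item L}},
\qquad
P \;=\; N + O,
\]

where $G_i$ is the SDA's value on local factor $i$, $H_i$ the national average, $J_i$ the regression weight, $M$ the national departure point (60.0 for this measure), $O$ the governor's adjustment, and $P$ the resulting SDA performance standard.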
Table A-3: National performance goals^a

Performance Standard                          PY 2000 goal   PY 2001 goal   PY 2002 goal
Adult entered employment rate                 67%            68%            70%
Adult employment retention rate               77             78             80
Adult earnings change                         $3264          $3361          $3423
Adult credential rate                         60%            n.a.           n.a.
Dislocated worker entered employment rate     71             73             75
Dislocated worker employment retention rate   82             83             85
Dislocated worker earning replacement rate    90             91             92
Dislocated worker credential rate             60             n.a.           n.a.
Older youth entered employment rate           63             63             63
Older youth employment retention rate         69             70             77
Older youth earnings change                   $3150          n.a.           n.a.
Older youth credential rate                   50%            n.a.           n.a.
Younger youth retention rate                  48             50             53
Younger youth skill attainment rate           n.a.           60             60
Younger youth diploma rate                    55             65             65
Employer satisfaction                         65             66             68
Participant satisfaction                      67             69             70

a. Source: U.S. Department of Labor Employment and Training Administration 2002 Annual
Performance Plan for Committee on Appropriations, March 2001.
Notes
1. See, for example, the regular section in the Public Administration Review that features articles
on performance measurement and frequently highlights governments’ experiences with
performance measurement initiatives and systems.
2. The Malcolm Baldrige criteria include: (1) leadership, (2) strategic planning, (3) customer and
market focus, (4) information and analysis, (5) human resource development and management, (6)
process management, and (7) business results, which include customer satisfaction, financial and
marketplace performance, human resource, supplier and partner performance, and operational
performance.
3. The web link to WIA state five-year plans is www.doleta.gov/usworkforce/asp/planstatus.asp.
4. The WIA state five-year plans for Puerto Rico and Illinois could not be located.
5. Standardized Program Information Reports (or SPIR data) are maintained by Social Policy
Research Associates and can be accessed at: http://wdr.doleta.gov/opr/spir/.
6. The District of Columbia and Maryland contracted with Mathematica Policy Research to
conduct the statistical analysis for establishing performance targets.
7. The Department of Labor provided data on the final negotiated standards used by the states to
evaluate performance.
8. The JTPA participant population demographics include: race, gender, single head of household,
education level, limited English proficiency, public welfare recipient, labor force participation and
work history, pre-program wage, basic skills deficiency and record of criminal offense.
9. The detailed results of these regression analyses are available from the author.
10. Adjustments to earnings performance standards were made for the proportion with limited
English proficiency.