NO “ONE BEST WAY” TO MANAGE CHANGE:
UNDERSTANDING ADMINISTRATIVE REFORM IN ITS MANAGERIAL,
POLICY, AND POLITICAL CONTEXTS
ACROSS THE FIFTY AMERICAN STATES
Brendan Burke
Bridgewater State University
<bburke@bridgew.edu>
Chung-lae Cho
Kookmin University, Korea
<clcho@kookmin.ac.kr>
Jeffrey L. Brudney
University of Georgia
<jbrudney@uga.edu>
Deil S. Wright
University of North Carolina
<dswright@mindspring.com>
Paper presented at the 8th Public Management Research Conference, University of
Southern California, School of Policy, Planning, and Development, Los Angeles, CA,
September 29 - October 1, 2005. The authors thank the Earhart Foundation of Ann
Arbor, Michigan, for its research support.
No “One Best Way” to Manage Change: Understanding Administrative Reform
in Its Managerial, Policy, and Political Contexts Across the Fifty American States
INTRODUCTION
Administrative Reform (AR) is in some respects like Heinz's famed "57 varieties" or the 31 flavors of Baskin-Robbins ice cream. It is imported, marketed, sold, and consumed by a diverse array of jurisdictional and/or organizational "customers." Thus the character and composition of the commodity, as well as the complexion of the consumers, produce a complex collage on the intellectual and practical landscapes of public management. Kettl (2005) sketched the global "market" for reform and offered
two dominant patterns, the Westminster and American “styles.” The scope and
significance of AR is demonstrated by his claim (2005: 60) that, “Perhaps never before
have so many governments tried to change so much so fast in such similar ways.”
Transcending style (Borins, 1995), consolidation and reorganization (Berkman
and Reenock, 2004), emergent models (Peters, 1996) and reinvention (Osborne and
Gaebler, 1992; Brudney and Wright, 2002), a core proposition emerges from the
scholarly literature and the realm of management practice. Kettl (2005: 61) captures this
perspective in his chapter on “Reform as Governance,” stating, “reforms of government
management are inevitably tied up with fundamental issues of policy and political and
economic change.”
This analysis of reform in American state governments in the 1990s takes the governance proposition as its starting point. With this base we first offer a descriptive
analysis of configurations or patterns of AR across the states. A host of factors, many
self-evident, make the 50 American states natural “laboratories” for comparative
analysis. Beyond systematic description, however, we are intent on pursuing a more
probing and theoretically grounded understanding of AR. Ultimately, we are in search of
one or more models that contribute substantially and significantly toward explaining the
phenomenon. Given the variety of AR as practiced by the states and the range of
explanatory variables, our analysis must be exploratory.
Reform and Reinvention
The public management field faces a quandary in grappling with the cluster of
issues associated with contemporary AR. The issues involve, among others, the
identification, classification, understanding, explanation, and prescription (or advocacy)
of administrative reform(s). It is clear that a global public management “revolution” has
occurred (Kettl, 2005). It is common knowledge that many national and subnational governments have, since the 1980s and 1990s, functioned with updated techniques, approaches, and cultural underpinnings. Difficulties remain, however, in estimating the character, content, extent, and results of the so-called revolution. Assessments of New Public Management reforms are fraught with methodological challenges (Poister and Streib, 1999; Calista, 2002; Thompson, 2002). To what extent have governments "reinvented"
themselves? Specific successful cases (New Zealand, Texas, and Sunnyvale, California)
support affirmative responses (Boston, 1996; Tucker, 2002; Epstein, Campbell, and
Tucker, 2002), but others highlight the complications and pitfalls of these reforms
(Mintzberg, 1994; Frederickson, 1996; Peters and Savoie, 1996; Behn, 2002).
This article proposes and demonstrates a different approach to measuring,
understanding, and explaining contemporary administrative reform. A focus on
management changes in the American states enables us to identify two dimensions of
reform and to assess them as alternative though not mutually exclusive strategies. This
effort contrasts with previous approaches that assess the success or failure of New Public
Management (reinvention) reforms in general (Brudney, Hebert, and Wright, 1999;
Kearney, Feldman, and Scavo, 2000). It also departs from analysis of single reform
techniques such as performance management (Poister and Streib, 1999; Willoughby and
Melkers, 2001). The reality of AR is that jurisdictions, organizations, and administrative
leaders often pick and choose packages of reform techniques or tools in an incremental or piecemeal fashion rather than in a comprehensive manner (Conant, 1988; Berkman and
Reenock, 2004). The two dimensions of reform features proposed and examined in this
article consist of (a) specific efforts to streamline administrative processes, rules, and
structures, and (b) performance-focused changes, with an emphasis on results or
outcomes. After describing the historical basis for the two dimensions, and validating
their usage in state governments during the 1990s, the article compares and contrasts the
two dimensions in different state governments and agency-specific organizational
settings.
Reinvention is not the only possible line of reform under the banner of New Public Management. It has nevertheless been the dominant expression since the publication of its guiding documents (Osborne and Gaebler, 1992; Gore, 1993). Usually depicted as a new paradigm for the functioning of public administration systems, reinvention is considered a reorientation of the bureaucracy simultaneously toward a customer focus,
market options, enhanced accountability, and continuous improvement (Osborne and
Plastrik, 2000). Most discussions of reinvention, including critiques, endorsements, and
empirical assessments, attempt to unify the different components as a single or
unidimensional phenomenon. Some analysts hint at multiple strategies involving
reinvention implementation. Osborne and Gaebler (1992) outlined several principles of
reinvention (“Catalytic Government,” “Competitive Government,” etc.) and attached case
examples. But rarely did a “reinventing” government align itself with more than two or
three of the principles. At the end of the Twentieth Century, Osborne and Plastrik (2000)
provided specific guidance on implementing reinvention in the Reinventor's Fieldbook.
Several years into the (American) reform wave, these consultants described the difficulty
of implementing the reform and the relative scarcity of fully developed examples of
reinvention. Not surprisingly, the "user" of their Reinventor's Fieldbook was advised to
assess and implement the various techniques differently depending on whether their
political and administrative systems were favorably disposed toward AR.
Why is it so difficult for public organizations to reinvent themselves in a
comprehensive manner? Does full-scale AR exceed the typical organization’s capacity
for change? Critics identified philosophical inconsistencies within the “reinvention”
reforms, starting with the impossibility of empowering employees and work groups
simultaneously with efforts to enhance their coordination across service areas (Peters and
Savoie, 1996; Thompson and Riccucci, 1998).
The executive focus of AR may work against legislative goals and incentives, and reform efforts may confront further problems as executive (governor) leadership changes
hands (Kettl, 2000; Aberbach and Rockman, 2001). The shift from a traditional focus on
effective bureaucratic processes toward an outcome orientation requiring major cultural
changes can be daunting (Durant, 1999; Behn, 2002). The major difficulties associated
with implementing the Government Performance and Results Act of 1993 (GPRA) are a classic case in point insofar as AR involves managing for results (Kamensky and Morales,
2005). The pressures from the political environment and commitments within the
administrative system must be strong in order to pursue such dramatic changes.
Given different historical, political, and cultural circumstances across the
landscape of governance, it is natural to find contrasting meanings for AR even within
the specific definitions of reinvention. Kettl’s (1999) history of the National
Performance Review describes three relatively distinct phases, including a movement
from the broadest possible design in 1993, toward a “rightsizing” focus with the change
in Congress in 1995, and finally a targeted improvement effort (in agencies like the
Internal Revenue Service) by 1997.
Political demands can create an "ebb and flow" to reform efforts, with movements aimed at shoring up efficiency and effectiveness in governmental operations as well as enhancing accountability mechanisms (Light, 1997; Thompson and Riccucci, 1998). Partial use of reinvention tools may be criticized by some as "tireless tinkering" (Peters, 1996, viii), but selective attention to administrative specifics, such as strategic planning (Berry, 1993), has its supporters. Narrower efforts provide "amplification" or pyramiding
of reform possibilities already under way (Thompson, 1999). These efforts occasionally
have had some impact on administrative performance as an undercurrent within countless
Western governments (Borins, 1995).
Kettl (2005) elaborates “American style” reform from a joint historical and
analytic perspective. He notes (2005:19) that the two Hoover Commissions (1949, 1955)
concentrated successively on “the structure of government” and on “governmental
processes.” He concludes that these features “guided most of the government reform
movement for the rest of the Twentieth Century.” Toward the end of the century,
however, Kettl and a number of other observers identify (2005: 22) "substantial and long-lasting reforms" that "make government work better and…make it smaller and cheaper too."
This attention marked the advent of reforms which, regardless of terms and tools,
put an emphasis on performance, results, and outcomes rather than structure(s) and
process(es). For Light (1997) this development was “liberation management,” the fourth
tide of AR. For the Clinton Administration it was the National Performance Review
(NPR). For Osborne and Gaebler (1992) it was Reinventing Government. In these
analytic and temporal groundings rest the foundations for our excavation of two
dimensions of AR: (a) structure/process and (b) performance/results.
Distinct Strains of Administrative Reform
This analysis relies on comprehensive surveys of administrative reform (reinvention) strategies employed at the state agency level during the 1990s. The range of administrative tools analyzed in the surveys allows for the conceptual and empirical development of two separate dimensions during the first decade of "reinvention." The data, extraction methods, and validation of the two dimensions are discussed in a subsequent section of the article. The next section sketches the two dimensions and
identifies their roots in American bureaucratic reform history.
Structure and Process Reforms
The Winter Commission (Thompson, 1994) advocated for a reduction of certain
administrative structures that had grown over time. Knott and Miller (1987) describe the
growth of regulatory impediments in American government during the course of the
Twentieth Century, natural accompaniments to the Weberian bureaucratic design and its reliance on rules. At the same time, growth in regulatory mechanisms became a central repository of perverse incentives and unintended consequences that burdened the efficient functioning of administrative systems. The deregulatory movement was seen as a distinct
focus in public administration (DiIulio, 1994), in keeping with common findings from
process analysis, “reengineering” (Hammer and Champy, 1994) and other techniques of
the Total Quality Movement (Deming, 1986).
Deregulation drove reforms in three areas especially: the reduction of constraints on moving budget appropriations; the easing of restrictive procurement rules; and the streamlining of personnel procedures to hire and fire government workers. The simplification of governmental structures may even involve a more comprehensive option to circumvent "red tape": the privatization or contracting out of governmental functions, under the assumption that private or non-profit service providers will not be hamstrung by the same level of rules as found in governmental service agents (Savas, 1987; Milward, 1992). What one stakeholder views as a program's burdensome constraint, another views as the protection of rights and liberties (Kaufman, 1978). A strong review process is therefore necessary to determine whether deregulation is appropriate.
Performance-based Reforms
American state governments steadily enhanced their managerial and policy
implementation capacity throughout the Twentieth Century (Bowman and Kearney,
1986). Many if not most of the reforms were oriented around stronger executive capacity
and increases in the efficiency and effectiveness of administrative systems (Conant,
1988). Organizational structures were streamlined, independent commissions were consolidated under the governor, and executive staff agencies were strengthened. In keeping with executive accountability, state governments moved toward more rational decision-making systems, designed to enhance management's ability to improve the administration of public programs. Beginning in the 1930s, performance became an appropriate criterion for administrative strategy (Ridley and Simon, 1938).
The performance focus first drove budget allocation processes at the national level, especially with the advent of Planning-Programming-Budgeting (PPB) systems and, further, in the use of cost-benefit analysis (Schick, 1966). PPB would prove too onerous and complex for all but the largest national agencies (Wildavsky, 1969), but some of the underlying components of the complex analytic design were retained for use in many governmental settings. By the 1970s, state and local governments had developed the ability to allocate budgets, evaluate programs, and even evaluate personnel performance around outcome measures. Strategic planning, another part of PPB systems, became increasingly useful to the best of state governments by the 1980s. By the end of the 1990s, the Government Performance Project (GPP) would show that some state governments had a strong planning and performance measurement focus, while others had a long way to go (Barrett and Greene, 1999).
The major commission to inspire state-level reform worked in temporal proximity
to the publication of Osborne and Gaebler’s Reinventing Government (1992), and would
share many of the same ideas and approaches. The Winter Commission (1993) described
a performance deficit in American state governments and the need for a reorientation
toward customers or clients of government services. A dialogue was necessary among
governmental agencies, legislative sponsors, and either interest groups or direct
beneficiaries of state services. The commission provided impetus for much of the state
and local reform effort of the 1990s, as strategic planning and performance-based
budgeting spread across the state capitals (Berry, 1994; Willoughby and Melkers, 2001).
Measuring Administrative Reform (AR) Dimensions
We contend that AR strategies can be clustered around two dimensions: (a)
structure/process and (b) performance/results. The next step and challenge is to construct
measures (questions) for each dimension and present them to practicing public managers.
Timing was especially relevant because the 1990s were the “reform” decade (Kettl 2000;
Brudney and Wright 2002). Measurement was accomplished through two successive
surveys of the American State Administrators Project (ASAP) in 1994 and 1998 (Wright
and Cho 2001). The 2,000-plus state agency heads who responded to the ASAP surveys
were asked the following lead or stem question:
From time to time state agencies undertake to change the way they do
things. Please indicate the extent to which your agency has implemented
each of the following changes in the 1990s.
A set of five response options was posed, ranging from "No change considered" and "Considered, no action taken" to "Fully implemented." Six structure/process and five performance/results queries were included in the battery of "reform" changes. The eleven-item set of questions was lodged in a larger section titled "Administrative Changes and Reforms."
Structure/Process Reform Indicators
Below we summarize the information we have obtained on the structure/process
reform measures across the states. We present the items along with the percentage of
respondents who indicate that each change was partially or fully implemented by their
agency in each of the two ASAP survey years, 1994 and 1998.
Partially/Fully Implemented (percentages)

                                              1994     1998
Decentralization of decision-making             55       60
Reduction in hierarchical levels                39       44
Greater procurement discretion                  36       44
Simplify human resources rules                  29       34
Privatization of major programs                 23       24
Discretion to use carryover funds               21       22
n =                                          1,135    1,075
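Percentages such as those in the tabulation above can be computed directly from the five-option response scale described earlier. The following Python sketch illustrates one way to do so; it is not the authors' original code, and it assumes a pandas DataFrame with one row per responding agency head and hypothetical column names for the six structure/process items, with responses coded 0 ("No change considered") through 4 ("Fully implemented").

import pandas as pd

# Hypothetical column names standing in for the six ASAP structure/process items.
STRUCTURE_ITEMS = ["decentralize", "delayer", "procurement",
                   "hr_rules", "privatize", "carryover"]

def percent_partial_or_full(df: pd.DataFrame, items: list) -> pd.Series:
    # A response of 3 (partially implemented) or 4 (fully implemented)
    # counts toward the reported percentage for each item.
    return (df[items] >= 3).mean().mul(100).round(0)

# Example usage, assuming one DataFrame per survey wave:
# print(percent_partial_or_full(asap_1994, STRUCTURE_ITEMS))
# print(percent_partial_or_full(asap_1998, STRUCTURE_ITEMS))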
Two features of the tabulation merit mention. First, for each item the percentage
for 1998 was slightly higher than for 1994. This pattern suggests a degree of consistency
and continuity involving state-level AR in the 1990s. In other words, reform was more
than a ripple, possibly a wave, but hardly a tide or a tsunami (Thompson 2002; Brudney
and Wright 2002).
Second, inclusion of the privatization item under structure/process seems
questionable. We had initial doubts given the prominence and recognition accorded
contracting for services as a “liberation management” feature (Light 1997). We knew,
however, that contracting was extensive and longstanding (Light 1999). Our cautions and
concerns were alleviated when we examined responses to a related battery of questions in
the 1998 ASAP survey. We found, for example, that roughly 60%, 70%, and 80% of state
agencies contracted for service delivery with, respectively, other governments, non-profit
entities, and private firms (Choi, Cho, Wright, and Brudney 2005). In sum and in short,
privatizing (contracting out) is a process or tool well established and extensively
employed through a variety of intermediaries. The long-standing legacy of this tool
prompted us to place it in the structure/process cluster (Salamon 2002).
Performance/Results Reform Indicators
Any historical or other set of reflections about AR must recognize that
administrative performance in the form of economy, efficiency, and effectiveness was
never far from the aims and aspirations of reform efforts (Ridley and Simon 1938).
Whether prompted by “War on Waste” or “Watchful Eye” efforts (Light 1997), or other
reform themes, underlying much of the long history of AR is a central value of quality in
public service delivery. Performance or results, therefore, is a matter of relative emphasis, not of exclusivity or opposition to structure/process changes.
What appeared distinctive in the 1990s, however, was the elevation and
veneration of tools, strategies, or actions that placed “managing for results,”
“performance management,” and other phrases in the forefront of most reform
vocabularies. The national GPRA legislation in 1993 seemed to cement the emphasis,
along with the NPR (National Performance Review) of the Clinton Administration (Kettl
1999).
At the state and local levels, REGO, or Reinventing Government in its various "RE" forms, seemed to predominate. As Kettl (2005:22) noted, reform "focused on changing bureaucrats' behavior rather than transforming fundamental government
structures and processes.” He also observed (2005:73) that “the reform movement has
shifted the focus of many governments away from inputs . . . to results (outputs or
outcomes).”
Following considerable discussion and pre-testing, five administrative change
items were selected for inclusion in the 1994 and 1998 ASAP surveys with the aim of
measuring performance and/or results. These items are listed below together with the
percentages of state agency heads indicating that the action(s) had been partially or fully
implemented in 1994 and 1998.
Partially/Fully Implemented (percentages)

                                                    1994     1998
Strategic plans for clear mission statements          79       87
Training programs to improve customer service         82       83
Quality improvement programs                          72       79
Benchmarking to measure outcomes                      62       73
Measurement of customer satisfaction                  52       64
n =                                                1,135    1,075
Once again a modest but consistent rise in the percentages from 1994 to 1998
reveals the staying power of reform among the American states in the 1990s. Strategic
planning, benchmarking, and measuring customer satisfaction displayed noteworthy
percentage-point rises across the four-year span. Purpose, performance, and results
clearly carried the day in reform implementation from 1994 to 1998 across the states.
This point is reinforced and solidified by an examination of the percentages for
structure/process tools reported earlier. The average percentage of partial/full
implementation for the six structure/process items was 34% and 38%, respectively, in 1994 and 1998. The comparable averages for the five performance items were 69% and
77%. Clearly the past decade was one in which performance/results reforms substantially
overshadowed more traditional structure/process reforms.
A final point merits mention with regard to both clusters of reform items. It
echoes but modifies Kettl’s earlier observation about so many governments making so
many changes so fast in such similar ways. A reinspection of the percentages (or averages) for 1994 and 1998 reveals that, although the figures for the latter year were consistently higher, the margins were only slight or modest.
These minor differences underscore Kettl’s assertion about American style reform
(2005:72): “[American] reforms have been incremental rather than sweeping or
comprehensive.”
With details, clarifications, and measurement of the two dimensions of AR appropriately specified, we proceed to the next stage in the analysis. That stage addresses the validity and reliability of dividing the eleven reform items into the two hypothesized dimensions of AR.
Confirmatory Factor Analysis
Two major statistical approaches can be used to extract underlying factors or latent variables across a set of observed variables: (1) exploratory factor analysis and (2) confirmatory factor analysis. Beyond exploratory factor analysis (EFA), confirmatory factor analysis (CFA) provides the opportunity and flexibility to hypothesize the relations among latent variables, observed variables, and measurement errors. EFA
seems most appropriate when we have limited substantive knowledge about the numbers
of latent variables or the relationships between latent and observed variables. If,
however, we can formalize a model representing the relationship among the three elements (latent variables, observed variables, and measurement errors) with theoretical justification, CFA is more advantageous. Since EFA relies exclusively on statistical
criteria, such as retaining only factors associated with an eigenvalue greater than or equal
to one, problematic solutions and interpretations may emerge (Bollen 1989, 232). By
contrast, CFA requires researchers to apply theoretical and substantive knowledge to
construct a model. The model can then be estimated to determine its fit to empirical data.
The employment of both substantive and statistical knowledge produces a more
reasonable and reliable solution.
CFA contributes two main practical advantages to this research. First, CFA
allows correlations among measurement errors. In EFA, measurement errors cannot be
correlated. Measurement errors are part of the variance that is not explained by latent
variables, and it is quite plausible that they are correlated. Since few, if any, measurements are perfect, measurement errors need to be included in a formal and
statistical/empirical model. The second utility of CFA for this research is that the
covariance of latent variables can be estimated or set to any value. As presented below,
we estimate the degree of correlation (or independence) between the two dimensions of
reform we have proposed: structure and performance. These important advantages lead
to the choice of CFA as the statistical strategy for testing propositions about the
dimensionality of reform across the U.S. states.
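To make this modeling strategy concrete, the sketch below shows how a two-factor CFA of the kind proposed here, with a freely estimated covariance between the latent factors, could be specified in Python with the semopy package. The item names are hypothetical stand-ins for the eleven ASAP reform variables, and the code is an illustrative sketch rather than a reproduction of the authors' original estimation.

import semopy

# Measurement model: six items load on STRUCTURE, five on PERFORM,
# and the covariance between the two latent factors is freely estimated.
MODEL_DESC = """
STRUCTURE =~ decentralize + delayer + procurement + hr_rules + privatize + carryover
PERFORM =~ strat_plan + training + quality + benchmark + cust_satisfaction
STRUCTURE ~~ PERFORM
"""

def fit_cfa(df):
    # df: pandas DataFrame with one row per agency head and the
    # hypothetical item columns named in MODEL_DESC.
    model = semopy.Model(MODEL_DESC)
    model.fit(df)
    return semopy.calc_stats(model)  # chi-square, df, CFI, GFI, RMSEA, etc.

# Example: print(fit_cfa(asap_items).T)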
Figure 1 shows the hypothesized relationship between the structural-process
reform dimension and the six indicators from the ASAP survey thought to tap that aspect
of reform. As hypothesized, all six reform items display a positive relationship with the
structure-process dimension. Factor loadings of the specific reform techniques on the structure construct lie in the range between .39 and .58, all statistically significant. The model fit indexes are very good (see Figure 1), indicating that the theoretical model proposing a structure/process reform dimension fits the data well.
INSERT FIGURE 1 ABOUT HERE
Figure 2 presents analogous findings for the hypothesized relationship between
the performance/results dimension and the five relevant ASAP reform indicators
assessing that type of reform. These reform items all load at between .43 and .62, and
again, the overall model displays good fit indexes. The only potentially troublesome
statistical test is the low chi-square value (3.02); yet, the chi-square statistic is
problematic when sample size is large. Given that the ASAP data set is large (N =
2,229), and the other model fit indexes are good, we are confident that our model fits the
data well in spite of the chi-square value. Thus, our hypothesis of a distinct
performance/results dimension seems to be borne out in the ASAP data.
INSERT FIGURE 2 ABOUT HERE
Exploring the Structural-Process and Performance-Results Dimensions
In previous research (Brudney, Hebert, and Wright, 1999; Brudney and Wright,
2002), some of the present authors reported and analyzed “overall” reform scores for
reinvention across the states based on all eleven of the reform items discussed above.
Those scores consist of the mean responses of individual agency heads for their agency in
the implementation of the eleven reinvention tools, aggregated to the state level. By
contrast, here we present and examine the two dimensions of reform that were verified by
the CFA: structure-process and performance-results. Table 1 presents mean state-level scores for the implementation of the performance-results reforms across agencies in a state, and Table 2 displays state-level mean scores for the structure-process dimension. The individual response options for each reform tool were: 0, No changes considered; 1, Considered, no action yet; 2, Action(s) planned; 3, Partially implemented; and 4, Fully implemented.
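The state-level scores presented in Tables 1 and 2 are straightforward aggregations of these 0-4 responses. A minimal sketch of that aggregation, again using hypothetical column names rather than the actual ASAP variable names, is given below.

import pandas as pd

STRUCTURE_ITEMS = ["decentralize", "delayer", "procurement",
                   "hr_rules", "privatize", "carryover"]
PERFORM_ITEMS = ["strat_plan", "training", "quality",
                 "benchmark", "cust_satisfaction"]

def state_dimension_scores(df: pd.DataFrame) -> pd.DataFrame:
    # Average each respondent's 0-4 item responses within a dimension,
    # then average those respondent-level scores within each state.
    scored = df.assign(
        structure=df[STRUCTURE_ITEMS].mean(axis=1),
        performance=df[PERFORM_ITEMS].mean(axis=1),
    )
    return (scored.groupby("state")[["performance", "structure"]]
                  .agg(["mean", "count"]).round(2))

# Example: print(state_dimension_scores(asap))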
INSERT TABLES 1 AND 2 ABOUT HERE
An initial interpretation of these data reveals that the structural reforms were in the incipient stage during the 1990s at the state level. Only four states display a mean above 2.0. Two of these states are not surprising as leaders in structural advancement: Texas has a reputation as one of the early leaders in reinvention, at least in relaxing its budgetary processes (Tucker, 2002), and Florida's state government aligned itself early with Total Quality Management philosophies (Braunbach and Burke, 1992). We should bear in mind that these statewide scores do not preclude greater (or lesser) individual agency achievement with respect to the structural reforms (and/or the performance-results reforms). As intimated by our comparison of the structural-process and performance-results reforms above, the performance-related reforms appear much more successful, with a mean score across all states of 2.81 and only two states with means below 2.5 (Alabama and Rhode Island).
INSERT FIGURE 3 ABOUT HERE
Are there in fact distinct strains of administrative reform across the American
states, that is, states that choose structural approaches and others that choose to address
performance characteristics? Figure 3 maps the contrast and convergence of the two reform strains, indicating each state's placement on both reform dimensions. The figure displays some linearity (the Pearson correlation for the two
dimensions is .62), such that the majority of states are either low in their implementation
of both reform strains, or high on both dimensions. However, the figure shows that some
state leadership teams appear to have chosen structure as their reform focus (Montana,
Michigan, and Georgia), while other statewide groupings of political and administrative
leaders concern themselves more actively with performance-results reforms (Maine,
Louisiana, and Arizona).
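For reference, the correlation and scatter plot summarized in Figure 3 can be reproduced from the state-level scores with standard Python tooling. In the sketch below, state_scores is assumed to be a DataFrame indexed by state with "performance" and "structure" columns, such as the means produced by the aggregation sketched earlier; it is an illustration, not the authors' plotting code.

import matplotlib.pyplot as plt
from scipy.stats import pearsonr

def plot_reform_dimensions(state_scores):
    # Pearson correlation between the two state-level reform dimensions.
    r, p = pearsonr(state_scores["performance"], state_scores["structure"])
    fig, ax = plt.subplots()
    ax.scatter(state_scores["performance"], state_scores["structure"])
    for state, row in state_scores.iterrows():
        ax.annotate(state, (row["performance"], row["structure"]), fontsize=7)
    ax.set_xlabel("Performance Reform Scores")
    ax.set_ylabel("Structure Reform Scores")
    ax.set_title("Pearson r = %.2f (p = %.3f)" % (r, p))
    return fig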
Table 3 adopts a different frame or level of analysis, to begin to review the
hypothesis that reform efforts differ not only across states, but within states as well. The
table presents the mean scores on implementation of the structure-process reforms and
the performance-results reforms by agency type or function. The ASAP survey tracks the
function of the different state agencies; Table 3 displays mean scores for respondents grouped into 13 major agency types.
INSERT TABLE 3 ABOUT HERE
The table shows that Criminal Justice agencies and those led by elected officials
had lower mean implementation scores on both dimensions of reform. By contrast,
Income Security and Social Service agencies scored high on structure-process and
performance-results reforms -- a strong possible match with the welfare reform
devolution during the middle and latter years of the 1990s. Economic Development
agencies appear to have a relatively greater focus on performance reforms with lower
attention to structural reform options. The only apparent agency grouping to fall higher
on the structural reform dimension and lower on the performance dimension is the set of
Non-fiscal Staff agencies. The variation in mean scores across the functional
classification revealed in Table 3 suggests that agency-level variables are important to the
explanation of administrative reform.
Concluding Observations
This article has explored the dimensionality of Administrative Reform (AR)
across the American states. We engage the body of research and commentary that treats
a package of reforms commonly referred to as “reinvention” (but not always by that
name) as a single, unidimensional phenomenon. This presumption is common and has
guided research by us (and others) in the past (Burke and Wright, 2002; Brudney and
Wright, 2002; Brudney, Hebert, and Wright, 1999). Based on a reexamination of the
reform literature, however, we propose two different reform dimensions --
structure/process reforms and performance/results reforms. We test their existence
empirically with data from two successive surveys of the American State Administrators
Project (ASAP), in 1994 and 1998. The database consists of responses from over 2,200
agency heads from all 50 states. Confirmatory factor analysis (CFA) of the 11 reform
items presented on the surveys provides convincing evidence of the existence of the
structure/process and performance/results dimensions across the states.
We also initiate alternative explanations for the adoption of the two types of administrative reforms across the states. Results of such explanatory analyses may yield insights into the reform of state bureaucracies. This area provides the foundation for our continuing research into the adoption of the two types of administrative reforms by the states, as one unit of analysis, and by the more than 2,200 administrative agencies, as another.
Agency Analysis
One guiding hypothesis with the agency as the unit of analysis is that structure/process reforms will have lower rates of implementation across the states than the performance/results reforms. Isn't it easier to effect change in the
way an agency operates and/or relates to clients than to change the structure, rules, and/or
procedures of an organization? While the former change may be at the discretion of the
agency head, the latter may require approval by the state legislature or the governor.
The present analysis provides tentative support for this hypothesis -- the average
percentage of partial/full implementation for the six structure/process items was 34% and
38% respectively in 1994 and 1998, compared to the averages for the five performance
items of 69% and 77% (see also Tables 1 and 2). In further research we need to
determine if these rates might be moderated by other variables: For example, does
incremental or comprehensive reorganization of state government (Berkman and
Reenock, 2004) make the achievement of the two types of reform equally likely? Prior
empirical research based on 1,000 agencies as the unit of analysis indicates that reorganization of state government is, indeed, associated with a single reinvention dimension (Brudney, Hebert, and Wright, 1999). Reorganization's effects on the two
more nuanced dimensions of structure and performance reforms are unknown.
Our model also incorporates a number of theoretically-informed hypotheses at the
agency level. The 1994 and 1998 American State Administrators Project surveys provide
the necessary data for empirical evaluation. First, as demonstrated above in the analysis
of the reform dimensions by agency type, some functional areas are more amenable to
change than others. Agencies differ in the level of adoption of structure/process reforms
versus performance/results reforms (see Table 3). Second, we consider the environment
of the agency with respect to the perceived influence of the governor, legislature, interest
groups, professional associations, and the courts, as a factor in reform. We propose that
agencies with greater (perceived) influence by interest groups will come under greater
pressure to effect performance/results reforms to accommodate them; by contrast, when
the governor, legislature, and the courts are more influential we anticipate more basic
structural/process reforms in the way an agency is structured or operates.
Third, the manner of appointment of the agency head is also crucial. With the
governor often spearheading reform, agencies that feature an appointment procedure in
which the governor has control or strong influence (such as appointment by the governor
or appointment with legislative consent) are likely to evidence greater reform than those
in which the agency has a much freer (appointment) rein from the governor (for example,
agencies headed by a popularly elected official).
Fourth, we examine the effect of another agency-level variable, size, as measured
by agency personnel. On the one hand, size is an impediment to change because it can
act to make implementation more difficult. But on the other, the success of the reform
effort rests in part on the adequacy of resources behind it, so that larger agencies may
possess greater slack resources essential to implementation. In past research, agency size
was positively related to the implementation of (unidimensional) reform across state
agencies (Brudney, Hebert, and Wright, 1999). We hypothesize that larger size may be
associated with greater adoption of performance/results reforms because they are easier
to effect and do not require fundamental change. By contrast, the cost and difficulty of
changing organizational structure increases with size, so that larger agencies will likely be much slower to register reforms in basic structure and administrative processes.
State-Level Analysis
Furthermore, we can test the dual-dimension patterns of reform with the 50 states as the unit of analysis. What state economic, social, political, policy, and institutional variables might help reveal relationships involving the extent as well as the types of reform dimensions pursued by different states in the 1990s?
To explain implementation of the structure/process reforms and the
performance/results reforms across the American states, we propose a model that
incorporates three levels of analysis: state, agency, and individual. At the state level,
because the governor often leads the reform charge, we anticipate that the formal powers
of the governor will be associated with adoption of both structure/process and
performance/results reforms. We also expect political culture to bear a relationship, so
that states with a more progressive culture should register higher in the adoption of the
reforms. Finally, because the reforms are often predicated on an application of economic
reasoning to public institutions, we anticipate that state ideology would also bear a
relationship to the adoption of reforms. With regard to the performance/results
dimension, especially, a relationship might be anticipated: these reforms are drawn directly from the philosophy and practices of businesses and corporations (i.e.,
strategic plans for clear mission statements, training programs to improve customer
service, quality improvement programs, benchmarking to measure outcomes, and
measurement of customer satisfaction) and, thus, may be more likely in those states with
a more conservative political outlook or ideology.
Reform and the State Agency Public Manager
The final set of variables intended to explain administrative reform pertains to the
individual level, that is, the head or director of the agency. A variety of variables are
available through the American State Administrators Project to test intriguing hypotheses
regarding the role and influence of the agency director in the adoption of both
structure/process and performance/results reforms across the American states.
First, we propose that the administrator’s perceived influence over the agency will
be strongly related to the adoption of reforms: Unless directors feel that they possess
control over the agency, they are hardly likely to risk a major reform initiative. Reform is
not only time, labor, and resource intensive, but also often unpopular and resisted by
some agency stakeholders; in addition, reform may go unappreciated and unrewarded in
government. In our view, an agency head is much more likely to undertake reform if she
or he perceives sufficient influence over the agency to proceed. Second and relatedly, the professed goals of agency directors (for example, to be recognized as a leader) should also correlate with their interest in and willingness to undertake reform. Third, as mentioned
above, since the performance/results reforms are predicated on a private business-corporate philosophy, agency heads who identify themselves personally as more conservative are more likely to implement them; we expect no relationship, or certainly a weaker one, to the other reform dimension of structure/process changes. Similarly, we
hypothesize that prior years of experience of the agency director in the private sector will
result in a stronger penchant to apply the performance/results reforms in the public (state
government) sector.
Our model of the adoption of structure/process reforms and performance/results
reforms by state agencies entails analysis of state, agency, and individual factors. It calls
for the use of a statistical technique such as hierarchical linear modeling (HLM) that can
estimate and distinguish effects at multiple levels of analysis. Our continuing research is
predicated on the belief that a more variegated -- dimensional -- treatment of reform
combined with an analysis of its context -- state, agency, and individual -- will provide a
more comprehensive understanding of the important, yet elusive phenomenon of
administrative reform across the states.
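As a rough indication of how such a multilevel specification might be estimated, the sketch below fits a random-intercept mixed model in Python's statsmodels, with agency-level observations nested in states. The predictor names are hypothetical placeholders for the state, agency, and individual variables discussed above, and a full HLM treatment would extend this skeleton.

import statsmodels.formula.api as smf

def fit_multilevel_reform_model(df):
    # df: one row per agency head, with an agency-level reform score,
    # agency/individual covariates, state covariates, and a "state" column.
    model = smf.mixedlm(
        "performance_score ~ agency_size + head_conservatism"
        " + private_experience + gov_formal_powers + perceived_influence",
        data=df,
        groups=df["state"],  # random intercept for each state
    )
    return model.fit()

# Example: print(fit_multilevel_reform_model(asap_merged).summary())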
References
Aberbach, Joel D., and Bert A. Rockman. 2000. In the Web of Politics: Three Decades
of the U.S. Executive. Washington, D.C.: Brookings.
Barzelay, Michael, and Babak J. Armajani. 1992. Breaking Through Bureaucracy: A
New Vision for Managing in Government. Berkeley, CA: University of
California Press.
Berkman, Michael B., and Christopher Reenock. 2004. “Incremental Consolidation and
Comprehensive Reorganization of American State Executive Branches.”
American Journal of Political Science. 48(October):796-812.
Bollen, Kenneth. 1989. Structural Equations with Latent Variables. New York: John
Wiley and Sons.
Borins, Sandford. 1995. “The New Public Management Is Here To Stay.” Canadian
Public Administration. 38(Spring/Printemps):122-132.
Bowman, Ann O’M., and Richard C. Kearney. 1986. The Resurgence of the States.
Englewood Cliffs, NJ: Prentice-Hall.
Braunbach, Kelli, and Brendan Burke. 1992. “Improving Internal Processes: Focus on
Quality.” MIS Report. 24(January):1-20.
Brudney, Jeffrey L., F. Ted Hebert, and Deil S. Wright. 1999. “Reinventing Government
in the American States: Measuring and Explaining Administrative Reform.”
Public Administration Review. 59(January/February):19-30.
Brudney, Jeffrey L., and Deil S. Wright. 2002. “Revisiting Administrative Reform in the
American States: The Status of Reinventing Government in the 1990’s.” Public
Administration Review. 62(May/June):353-361.
Burke, Brendan F., and Deil S. Wright. 2002. “Reassessing and Reconciling
Reinvention in the American States: Exploring State Administrative
Performance.” State and Local Government Review. 34(Winter):7-19.
Calista, Donald J. 2002. “A Critique of ‘Reinventing Government in the American
States: Measuring and Explaining Administrative Reform.’” Public
Administration Review. 62(May/June):347-352.
Cho, Chung-lae, and Deil S. Wright. 2004. “The Devolution Revolution in
Intergovernmental Relations in the 1990’s: Changes in Cooperative and Coercive
State-National Relations as Perceived by State Administrators.” Journal of
Public Administration Research and Theory. 14(October):469-494.
Conant, James K. 1988. “In the Shadows of Wilson and Brownlow: Executive Branch
Reorganization in the States, 1965 to 1987.” Public Administration Review.
48(September/October):892-902.
de Lancer Julnes, Patria, and Marc Holzer. 2001. “Promoting the Utilization of
Performance Measures in Public Organizations: An Empirical Study of Factors
Affecting Adoption and Implementation.” Public Administration Review.
61(November/December):693-708.
Denhardt, Janet Vinzant, and Robert B. Denhardt. 2001. “Creating a Culture of
Innovation: 10 Lessons from America’s Best-Run City.” Arlington, VA: The
PricewaterhouseCoopers Endowment for the Business of Government.
DiIulio, John J., ed. 1994. Deregulating the Public Service: Can Government Be
Improved? Washington, D.C.: Brookings.
Durant, Robert F. 1999. “The Political Economy of Results-Oriented Management in
the ‘Neoadministrative State’: Lessons from the MCDHHS Experience.”
American Review of Public Administration. 29(December):307-331.
Eliadis, Pearl, Margaret M. Hill, and Michael Howlett, eds. 2005. Designing
Government: From Instruments to Governance. Montreal: McGill-Queen’s
University Press.
Fox, Charles J. 1996. “Reinventing Government as Postmodern Symbolic Politics.”
Public Administration Review. 56(May/June):256-262.
Frederickson, H. George. 1997. The Spirit of Public Administration. San Francisco:
Jossey-Bass Publishers.
Gore, Al. 1993. Creating a Government that Works Better and Costs Less: The Report
of the National Performance Review. New York: Penguin.
Hennessey, J. Thomas, Jr. 1998. “’Reinventing’ Government: Does Leadership Make a
Difference?” Public Administration Review. 58(November/December):522-532.
Kamensky, John M., and Albert Morales, eds. 2005. Managing for Results 2005.
Lanham, MD: Rowman & Littlefield.
Kearney, Richard C., Barry M. Feldman, and Carmine P.F. Scavo. 2000. “Reinventing
Government: City Manager Attitudes and Actions.” Public Administration
Review. 60(November/December):535-548.
Kettl, Donald F. 1998. Reinventing Government: A Fifty-Year Report Card.
Washington, D.C.: Brookings Institution.
Kettl, Donald F. 2005. The Global Public Management Revolution: A Report on the
Transformation of Governance. Second Edition. Washington, D.C.: Brookings
Institution.
Kettl, Donald F., and H. Brinton Milward, eds. 1996. The State of Public Management.
Baltimore, MD: Johns Hopkins University Press.
King, David C., Richard J. Zeckhauser, and Mark T. Kim. 2004. “The Management
Performance of the U.S. States.” Kennedy School of Government, Harvard
University.
Knott, Jack H., and Gary J. Miller. 1987. Reforming Bureaucracy: the Politics of
Institutional Choice. Englewood Cliffs, NJ: Prentice-Hall.
Light, Paul C. 1997. The Tides of Reform: Making Government Work, 1945-1995. New
Haven, CT: Yale University Press.
Light, Paul C. 1999. The True Size of Government. Washington, D.C.: Brookings
Institution.
March, James G., and Johan P. Olsen. 1983. “Organizing Political Life: What
Administrative Reorganization Tells Us About Government.” American Political
Science Review. 77(March):281-296.
Mintzberg, Henry. 1994. The Rise and Fall of Strategic Planning: Reconceiving Roles
for Planning, Plans, and Planners. New York: The Free Press.
National Commission on State and Local Public Service. 1993. Hard Truths/Tough
Choices: An Agenda for State and Local Reform. Albany, NY: Nelson
Rockefeller Institute of Government.
Osborne, David, and Ted Gaebler. 1992. Reinventing Government: How the
Entrepreneurial Spirit is Transforming the Public Sector. New York: Penguin
Books.
Osborne, David, and Peter Plastrik. 2000. The Reinventor’s Fieldbook: Tools for
Transforming Your Government. San Francisco: Jossey-Bass.
Peters, B. Guy. 1996. The Future of Governing: Four Emerging Models. Lawrence,
KS: University Press of Kansas.
Peters, B. Guy, and Donald J. Savoie. 1996. “Managing Incoherence: The Coordination
and Empowerment Conundrum.” Public Administration Review.
56(May/June):281-290.
Poister, Theodore H., and Gregory Streib. 1999. “Performance Measurement in
Municipal Government: Assessing the State of the Practice.” Public
Administration Review. 59(July/August):325-335.
Salamon, Lester M., ed. 2002. The Tools of Government. Oxford: Oxford University
Press.
Tucker, Laura. 2002. “Case Study: State of Texas.” Washington, D.C.: Governmental
Accounting Standards Board. Accessible at:
http://www.accounting.rutgers.edu/raw/seagov/pmg/sea_gasb_project/state_tx.pdf
Figure 1: Model for Measuring Structure-Process Dimension Using Confirmatory Factor Analysis.
[Path diagram: the six ASAP structure/process items (decentralization of decision-making, reduction in hierarchical levels, procurement discretion, human resources rules, privatization, and carryover-fund discretion) each load on a single latent STRUCTURE factor.]
Note: N = 2,229. Goodness of Fit Index: 1.00. Adjusted Goodness of Fit Index: 1.00. Normed Fit Index: .99. Nonnormed Fit Index: .98. Chi-Square = 26.34, df = 7, p-value = 0.00044, RMSEA = 0.035.
Figure 2: Model for Measuring Performance Dimension Using Confirmatory Factor Analysis.
[Path diagram: the five ASAP performance/results items (strategic planning, customer-service training, quality improvement, benchmarking, and customer-satisfaction measurement) each load on a single latent PERFORM factor.]
Note: N = 2,229. Goodness of Fit Index: 1.00. Adjusted Goodness of Fit Index: 1.00. Normed Fit Index: 1.00. Nonnormed Fit Index: 1.00. Chi-Square = 3.02, df = 3, p-value = 0.38920, RMSEA = 0.002.
Figure 3. Scatter Plot of States' Performance and Structure Scores
[Each state is plotted by its Performance Reform Score (horizontal axis, approximately 2.1 to 3.2) and its Structure Reform Score (vertical axis, approximately 1.0 to 2.1).]
Table 1. State Performance Reform Scores

State    Performance     N
AL           2.14       48
AK           2.56       58
AZ           3.20       45
AR           2.56       36
CA           2.82       28
CO           3.05       52
CT           2.79       30
DE           2.65       45
FL           3.16       43
GA           2.79       47
HI           2.54       57
ID           2.92       44
IL           2.79       34
IN           2.79       47
IA           2.87       40
KS           2.77       40
KY           2.81       54
LA           3.01       33
ME           2.86       40
MD           2.79       58
MA           2.93       30
MI           2.73       53
MN           2.91       59
MS           2.72       42
MO           2.88       65
MT           2.62       61
NE           2.70       48
NV           2.68       42
NH           2.66       43
NJ           2.51       32
NM           2.73       49
NY           2.95       19
NC           2.75       65
ND           2.80       48
OH           3.08       55
OK           2.96       57
OR           3.18       49
PA           2.81       46
RI           2.38       50
SC           2.88       41
SD           2.77       36
TN           2.64       42
TX           3.13       36
UT           3.08       62
VT           2.64       47
VA           2.81       24
WA           2.89       40
WV           2.84       60
WI           2.97       63
WY           2.81       56
Total        2.81    2,299
Table 2. State Structure Reform Scores

State    Structure Reform     N
AL              0.97         48
AK              1.17         55
AZ              1.36         43
AR              1.14         34
CA              1.71         27
CO              1.46         51
CT              1.26         27
DE              1.32         43
FL              2.05         40
GA              1.82         47
HI              1.26         57
ID              1.56         41
IL              1.37         34
IN              1.51         44
IA              1.70         38
KS              1.41         40
KY              1.57         54
LA              1.22         28
ME              1.23         36
MD              1.21         57
MA              1.48         29
MI              1.73         52
MN              1.84         58
MS              1.32         42
MO              1.53         65
MT              1.66         61
NE              1.38         45
NV              0.98         43
NH              1.00         41
NJ              1.22         31
NM              1.17         43
NY              1.60         19
NC              1.42         61
ND              1.48         48
OH              1.52         55
OK              2.10         57
OR              1.87         47
PA              1.35         45
RI              1.06         49
SC              2.14         39
SD              1.41         38
TN              1.00         38
TX              2.03         33
UT              1.75         62
VT              1.26         47
VA              1.76         24
WA              1.65         38
WV              1.37         60
WI              1.45         60
WY              1.58         55
Total           1.47      2,229
Table 3. Performance and Structure Reform Scores by Function

Function                              Performance Reform      N    Structure Reform      N
Elected Officials                            2.55            82          1.35           78
Staff: Fiscal                                2.83           141          1.68          135
Staff: Non-Fiscal                            2.69           130          1.72          128
Income Security & Social Services            3.00           240          1.65          228
Education                                    2.86           128          1.45          124
Health                                       2.85           138          1.42          134
Natural Resources                            2.82           254          1.57          250
Environmental & Energy                       2.72           226          1.29          214
Economic Development                         2.90           184          1.45          177
Criminal Justice                             2.73           178          1.38          175
Regulatory                                   2.72           292          1.28          283
Transportation                               2.93           132          1.58          129
Other                                        2.75           177          1.41          177
Total                                        2.80         2,302          1.47        2,232