
Effectiveness, Implementation Capacity, and Policy Diffusion: Or, "Can We
Make that Work for Us and Do We Care?"
Sean Nicholson-Crotty*, Sanya Carley
School of Public and Environmental Affairs, Indiana University
*Corresponding author: 1315 E. 10th St., Bloomington, IN, 47408; seanicho@indiana.edu
Abstract
Policy learning has arguably been one of the primary mechanisms by which policy
innovations are assumed to diffuse from one jurisdiction to another. Recent research
suggests, however, that learning is more than simply observing policy adoption in other
jurisdictions and must also include an assessment of the outcomes or effectiveness of
those policies. This paper argues that implementation considerations can also help
scholars distinguish learning from simple emulation. It argues that lawmakers using the
experience of others as a decision criterion are likely to ask not only “was the policy
effective in other states that adopted it?” but also “can we make the policy work for us?” We
test hypotheses drawn from this general argument in analyses of renewable portfolio
standards (RPS) in the American states between 1990 and 2009. Results indicate that both
shared implementation environments among jurisdictions and internal implementation
capacity help determine the impact that policy effectiveness information has on adoption.
This moderating impact is evident not only in the initial decision to adopt, but also in
decisions regarding how stringent to make RPS policies. These findings confirm the
occurrence of policy learning, and suggest the importance of implementation in
distinguishing learning from other diffusion mechanisms.
Keywords: public policy, energy policy, policy adoption, renewable portfolio standard,
diffusion, social learning, effectiveness, compliance
Introduction
Because states play a crucial role as incubators for policy innovation in the U.S.
federal system, scholars have long been interested in the factors that affect policy
adoption decisions within the American states. In a large and growing body of work,
studies have demonstrated that both internal characteristics—such as ideology, wealth,
and innovativeness—as well as external factors—including most notably the behavior of
other states—influence those decisions.
The most persistent questions in the literature on policy adoption and diffusion
focus on those external influences. When early studies recognized that public policies
often spread among the states in patterns of regional contagion, they surmised that states
must be learning about policies from their neighbors prior to adopting them themselves.
In the ensuing decades, authors have spent a good deal of time and effort refining the
concept of policy learning. They have investigated the mechanisms, such as interest
groups and professional networks, by which information might travel from one state to
another. They have studied which states are most likely to learn from one another,
concluding that both ideological and geographic peers might serve as exemplars for
potential adopters. And, most recently, they have sought to understand the ways in which
interstate learning can be distinguished from intrastate decision processes that may lead
to the same outcome.
It is to the general conversation about policy learning, and to this final line of
research more specifically, that this paper aims to contribute. The work seeking to
distinguish learning from internal decision processes has emphasized the importance of
policy effectiveness, suggesting that successful policies are more likely to be emulated if
learning is really driving the adoption process. We suggest herein that a consideration of
implementation can also serve as an indicator of policy learning. In other words, we
believe that lawmakers using the experience of others as a decision criterion are likely to
ask not only “was the policy effective in other states that adopted it?” but also “can we
make the policy work for us?” In the following pages we further develop the theoretical
argument that lawmakers truly interested in learning about the effectiveness of a policy
will consider their ability to implement it. Based on that general argument we offer the
following expectations: 1) information about policy effectiveness will have a greater
impact on adoption when a state shares implementation related characteristics with
previous adopters; and 2) information about effectiveness should have less influence over
the adoption decision when a state has high levels of internal implementation capacity.
We test these assertions in an analysis of the adoption of renewable portfolio
standards (RPS) in the American states between 1990 and 2009. Specifically, we examine
whether the impact of RPS policy effectiveness, measured as local utility compliance
with policy targets, on policy adoption is moderated by similarity in the levels of
electricity market deregulation between previous and potential adopters. We also test
whether the importance of information about effectiveness diminishes as a state’s general
environmental enforcement capacity increases. We look for these relationships across
multiple operationalizations of the policy “adoption” variable, including a simple
dichotomous indicator, a categorical variable that captures several dimensions of RPS
policies at the time of adoption, and a continuous measure that captures both policy
strength as well as adjustments to the policy after adoption.
The results consistently suggest that factors related to implementation help to
determine the impact that effectiveness information has on the adoption decision. We
conclude from these findings that states are indeed engaged in policy learning in the case
of RPS policies and, more importantly, offer the observed consideration of
implementation by lawmakers as a potential mechanism for verifying policy learning in
other contexts. We also conclude that the results point to implementation capacity as an
underexplored but potentially important influence on policy adoption decisions in the
states.
Policy Learning in the Literature
The idea that state policymakers may learn from one another in the diffusion
process grows from the earliest work on the diffusion of innovations among individuals,
which suggested that the spread of something new is a social process dependent on
communication among users and potential users (see Rogers 1995 for a review). Walker
(1969) focused the discussion on governments and policy innovations, and suggested that
jurisdictional decisions are driven by both internal state characteristics and information
from other states, the latter providing a heuristic cognitive shortcut for policymakers
considering an innovation. Later research suggests that this internal-external diffusion
model has defined and continues to dominate the study of policy diffusion (Berry and
Berry 1990).
Walker (1969) emphasized that state policymakers were most likely to imitate the
policy choices of “similar” states, and as a proxy for this similarity he used geographic
contiguity. The argument that neighboring states are likely to share relevant
characteristics is an intuitive one, and the empirical research has consistently confirmed
that a jurisdiction is more likely to adopt a policy innovation if a higher proportion of its
neighbors has done so (see for example Berry & Berry, 1990, 1992; Gray, 1973;
Mintrom, 1997; Mintrom & Vergari, 1996; Volden, 2002; Karch 2007a).
Despite the persistence of results confirming geographical diffusion, scholars
have also begun to look for other “peers” that states may choose to emulate when
considering policies. In this vein, scholars have demonstrated that policymakers are likely
to learn from states that share their political ideology, particularly for policies that are
ideologically charged (Grossback, Nicholson-Crotty, and Peterson 2004; Volden 2006).
Research has also demonstrated that states learn from the policy example of the federal
government (Gray 1973; Karch 2007a), which often serves to disseminate or amplify
previous state-level innovations. Finally, they have shown that policies can diffuse up to
state governments that can, under certain circumstances, learn from local governments
within the state (Shipan and Volden 2006).
In addition to asking who states learn from, scholars have also investigated the
mechanisms and volume of information transfer among states. For example, one body of
work has explored the degree to which unofficial political actors (i.e. interest groups,
professional associations, and other policy entrepreneurs) help facilitate learning between
states (see, e.g., Balla 2001; Haider-Markel 2001; Mintrom 2000). Recent scholarship has
also demonstrated that characteristics of a policy itself, such as salience, complexity, and
trialability, affect the incentives that lawmakers have to gather information from peers
(Nicholson-Crotty 2009; Boushey 2010; Makse and Volden 2011).
Interestingly, scholars have recently begun to question long held assumptions
regarding the prevalence and importance of interjurisdictional learning in the diffusion
process. As an example, Boehmke and Whitmer (2004) argue that social learning is often
conflated with economic competition as a motivation for state behavior in the diffusion
process and demonstrate that while the former can explain initial adoption decisions, the
latter is more likely responsible for subsequent changes to policy. Similarly, research has
suggested that what is often termed “learning” is simply an emulation of behavior in other
jurisdictions, rather than a conscious search for information about the effectiveness of a
policy (see for example Weyland 2004).
In a related argument, Volden (2006) argues that a desire to avoid policy failure
gives both lawmakers and administrators incentives to replicate only successful policies.
He empirically confirms that effective State Children’s Health Insurance Program
innovations were more likely to diffuse. The key implication of this finding is that
conclusions about policy learning are more robust when scholars find evidence that
policymakers rationally mimic only those policies that work. Work on policy diffusion in
the developing world, though rarely cited in studies of U.S. diffusion, similarly argues
that, in order to conclude that learning is occurring, studies should find evidence that
potential adopters examine both policy actions and outcomes in previously adopting
jurisdictions, rather than simply the first (See Weyland 2005; 2007; Meseguer 2005).
Volden, King, and Carpenter (2008) extend and refine this argument formally.
They demonstrate that a decision theoretic model that does not allow potential adopters to
learn from one another can predict the same adoption outcome as a game theoretical
approach that allows learning to be a central feature of the adoption decision. One of the
key takeaways from this finding is that looking for emulation of effective policies is one
of the primary methods by which scholars can distinguish adoption decisions driven by
learning rather than by internal experimentation.1
Learning and Attention to Implementation in the Diffusion Process
We propose in this paper that, in addition to assessments of effectiveness, true
policy learning will likely involve a consideration by lawmakers of their ability to
1 See Karch (2007) for the argument that the emulation of effectiveness is particularly likely when a policy
is not highly controversial and potential adopters are more concerned with achieving substantive policy
objectives than with simple political desirability.
replicate that success. Our theoretical argument rests primarily on the widely accepted
premises that policy effectiveness is inexorably linked to implementation choices and that
lawmakers are aware of this connection and work hard to make sure that policies are
administered in a way that matches their preferences. From those premises, we make what
we believe is a relatively straightforward argument that, for the same reasons they might
prefer adoption information from states with which they share demographic and
ideological characteristics, lawmakers may place greater weight on effectiveness
information from states that share similar implementation capacities. Additionally, we
make the argument that information about effectiveness is likely to become less valuable
to potential adopters with very high implementation capacity, who are likely to trust their
ability to produce desired outcomes regardless of the experience of previous states. It is
important to note that we are not suggesting that a focus on implementation will replace
other criteria, such as an overweighting of neighbors’ policy decisions or attention to
policy effectiveness, in the adoption decision, but rather that it will interact with these
well-established influences.
Initially, we can note that an argument regarding the importance of
implementation for lawmakers concerned with policy effectiveness should be
uncontroversial for a couple of reasons. First, for decades scholars have
demonstrated empirically that policy effectiveness and implementation choices are
inexorably intertwined (see Pressman and Wildavsky 1973 for an early example). While
the goals of some policies are intractable, thus attenuating the linkage between
implementation and success (see Mazmanian and Sabatier 1983), in the vast majority of
cases choices about the allocation of resources, stakeholder involvement, discretion
afforded to street level personnel, collaborations, and various other implementation
related factors have an enormous impact on the outcomes of public policy (see Hupe and
Hill 2004 for a good modern review).
Even more germane to our argument is the very large literature suggesting that
policy makers are well aware of the linkage between implementation and policy. Indeed,
there exists a mountain of evidence that legislators go to great lengths to ensure that
bureaucratic agents do not use their discretion to implement policies in a way that
deviates from legislative preferences. Lawmakers overcome information asymmetries
and other factors that facilitate policy drift through agency design (Moe 1989). They
“stack the deck” in favor of bureaucratic decisions that match their preferences through
ex ante controls such as advisory commissions and reporting requirements (McCubbins,
Noll, and Weingast 1989; Balla 2001). They identify and seek to correct undesirable
implementation decisions through ex post controls such as monitoring and auditing
(McCubbins and Schwartz 1984). And, finally, lawmakers seek to control policy
outcomes by writing detailed legislation that gives bureaucratic agents very little
discretion in the implementation process (Huber, Shipan, and Pfahler 2001; Huber and
Shipan 2002).
So, there is consistent evidence that 1) jurisdictions are more likely to learn from
and emulate effective policies and that 2) lawmakers care about the implementation of
laws they write. It is a relatively short leap, therefore, to assert that they will pay attention
to implementation when assessing the effectiveness of policies previously adopted by
other jurisdictions. Indeed, this fits quite well with Rogers’ (1962: 173) assertion that
knowledge regarding “how to use it correctly” is one of the three primary pieces of
information that potential adopters are likely to gather about any innovation. We can also
turn back to existing work on the use of information in the learning process in order to
develop some specific expectations about the relationship between effectiveness,
implementation, and adoption. Specifically, we can draw on the large literature
suggesting that potential adopters prefer information from states with which they share
relevant characteristics, and the findings that certain characteristics make information
about policy less relevant to potential adopters.
One of the oldest and most consistent findings in work on diffusion is that those
considering an innovation are more likely to trust information about its advantages and
disadvantages from a previous adopter who shares their characteristics. Focusing
primarily on individual adoption decisions, Rogers (1963) argued that “interpersonal
diffusion networks are mostly homophilous,” or characterized by contacts among similar
individuals. Walker (1969) extended the argument of diffusion networks to state-level
actors and developed the expectation that regional peers would be considered as
“legitimate guides to action” for potential adopters because of the relative homogeneity
of states within a given region.
Interestingly, Walker did not find particularly strong empirical support for his
expectations of regional clustering, but numerous other studies have hypothesized and
demonstrated that adoptions in neighboring states have a large impact on the decisions of
potential adopters (See for example Berry and Berry 1990; 1992; Volden 2002). The
standard explanation for these results is that information from proximal states means
more because shared demographic, economic, and political characteristics give potential
adopters a better idea of the compatibility of a policy solution with their needs and values.
Grossback, Nicholson-Crotty, and Peterson (2004) extended this “sameness” argument
by suggesting that ideological compatibility may not always be captured by geographic
proximity and that potential adopters will be more likely, therefore, to trust policy
information from ideological as well as regional peers (see also Volden 2006).
Thus, the literature is clear that shared characteristics increase the trust that
potential adopters of an innovation have in information from previous adopters. In this
analysis, we are simply offering the implementation environment as another element of
sameness, which might increase the weight that potential adopters assign to
information—particularly information about whether or not a policy worked in
previously adopting jurisdictions.
By the implementation environment we mean those characteristics that lawmakers
might logically think would affect the outcomes of a policy or, more importantly, the
degree to which they could produce outcomes observed in other states. On the one hand,
these could be structural or institutional characteristics that likely condition success, such
as tax and expenditure limits, local government autonomy, the characteristics of the
regulatory environment, or interest group strength, to name just a few. Alternatively, they
could be things that bear on lawmakers’ ability to ensure that policies are implemented
according to their preferences, such as monitoring and enforcement capacity. Whichever
of these is the focus, we expect that evidence of policy success will mean more when it
comes from states where the implementation environment is comparable to the one faced
by a potential adopter.
We turn now to a second argument from the literature that bears on the potential
relationship between previous effectiveness, implementation, and the decision to adopt.
Research on diffusion has demonstrated fairly consistently that policy information is not
always of the same value to potential adopters. Specifically, studies have shown that
more complex innovations encourage the collection of more and better information about
characteristics such as effectiveness, relative to those policies that are simpler and easier
to understand (see Rogers 1963 for the original argument). Some of this work has
focused on technical complexity (see for example Nicholson-Crotty 2009), but other
studies have emphasized the importance of administrative complexity in the search for
information (see for example Gormley 1986; Boushey 2010; Makse and Volden 2011).
These latter studies focus on Rogers’ (1963) argument that complexity reflects “how
difficult an innovation is to use” and argue that this characterization speaks to the
challenges of implementing a policy (Makse and Volden 2011). The upshot of all of this
research is that complex innovations diffuse more slowly because potential adopters take
more time to gather desired information before making an adoption decision.
In our minds, administrative complexity is the natural inverse of administrative
capacity. If complexity, and resultant uncertainty about the ability to “use” a policy,
makes information more valuable to potential adopters, then a proven ability to
implement laws should reduce that uncertainty and make information about things like
effectiveness less important in the adoption decision. Indeed, a large literature
demonstrates that capacity, measured as relevant technical skills, adequate resources and,
most commonly, human capital or adequate personnel resources, correlates with the
effectiveness of policy implementation at the federal, state, and local levels (see for
example May 1993; Spillane and Thompson 1997; McDermott 2004; Howlett 2009).
Because of this demonstrated linkage between capacity and policy effectiveness, we
expect that the need for information about effectiveness among previous adopters will be
lower among potential adopters with high administrative capacity.
U.S. State Energy Policy: The Renewable Portfolio Standard
We evaluate the relationship between implementation factors and policy learning
through the lens of U.S. state energy policy. Over the past three decades, state
governments have taken a prominent role in the energy policy arena. While the federal
government over this time has adopted transportation policies, such as renewable fuel
standards and increased the corporate average fuel economy standards, updated the
production tax credit periodically after letting it lapse time and again, and, most recently,
provided a number of incentives to alternative energy industries through the American
Recovery and Reinvestment Act of 2009, its commitment to alternative energy in the
electricity sector has been little more than rhetoric.2 State governments have responded to
this absence of national leadership in energy policy by designing and implementing
policies of their own. Collectively, state governments as well as American territories have
adopted over 3,000 individual renewable energy or energy efficiency policies that are still
active as of 2014 (NC Solar Center, 2014).3
One of the most popular state policies is the renewable portfolio standard, present
in 45 states as of the beginning of 2014. While there is a wide degree of variation among
states in their RPS design,4 all states’ policies set a target for renewable energy by a
specific year. The vast majority of policies set a percentage target for renewable energy
out of total electricity generation or sales (e.g., 20% renewable energy by 2020); as the
only two exceptions, Iowa and Texas set total capacity targets (e.g., Texas’ target is 5,880
MW by 2015). Once a state establishes a final target, it then sets annual benchmarks that
must be achieved by all participating utilities, usually documented as annual megawatt-hour (MWh) obligations. Utilities must then meet their annual obligations through
developing and deploying their own renewable energy, paying an alternative compliance
payment, or paying for renewable energy credits. A renewable energy credit (REC) is a
certificate that represents one MWh of renewable electricity. RECs can be sold within
states or across states, and are generally traded through specifically designed REC
transaction and tracking systems.
RPS policies provide a good opportunity to test for the relationship between
implementation concerns, policy information, and adoption for a number of reasons.
First, as illustrated in Figure 1, the policy has diffused in a relatively traditional manner,
suggesting that this is a case in which policy learning has played a role in diffusion
decisions (see Nicholson-Crotty 2009; Boushey 2010).
[Insert Figure 1 about here]
Second, this policy is also a convenient platform in which to test a new theoretical
argument about policy learning, because it has already been the focus of a number of
empirical diffusion studies. The majority of these papers have evaluated which factors are
associated with policy adoption and a consistent finding across them is that political
ideology is one of the key factors in the adoption decision (Huang et al. 2007, Matisoff
2 The obvious exception to this statement is recent developments with the Environmental Protection
Agency’s regulation of greenhouse gas emissions through the Clean Air Act. This type of activity,
however, is arguably climate policy, and is not driven by the objective of increasing renewable energy,
energy efficiency, or other energy alternatives.
3 Of the total 3,123 policies active in 2014, 1,142 are classified as renewable energy incentives, 1,452 as
energy efficiency incentives, 391 as renewable energy regulations, and 138 as energy efficiency regulations
(NC Solar Center, 2014).
4 Policies may differ in the following design features: which renewable and alternative energy resources are
eligible; whether a portion of the energy target must come from a specific resource (i.e., “carve-outs” or
“set-asides”); whether some resources count more than others (i.e., “multipliers”); whether a certain
percentage of a target must come from in-state generation; whether all utilities in a state must comply with
this regulation, or if it only pertains to investor-owned utilities; what level and how well enforced the
penalty for non-compliance is; whether an alternative compliance payment option is offered; and whether
the policy is voluntary or binding.
2008, Chandler 2009, Lyon and Yin 2010, Carley and Miller 2012, Yi and Feiock 2012).
Many also find that state affluence is important (Huang et al. 2007, Matisoff 2008,
Chandler 2009, Wiener and Koontz 2010). Other factors are less consistent across
studies. Of notable importance for this study, evidence on the effect of peer influence is
mixed. All energy policy studies that include peer relations do so with a measure of
geographic neighbors—usually operationalized as the percent of contiguous or regional
neighbors that have a policy in the previous year. The majority of these studies do not
find neighborly influence to be statistically significant (Matisoff 2008, Yi and Feiock
2012, Carley and Miller 2012; see also Stoutenborough and Beverlin 2008 for a similar
finding regarding the adoption of net metering, a different energy policy), while one does
(Chandler 2009). Of course, none of these studies have included information about policy
effectiveness in previous adopters, similarities across the implementation environments
of potential and previous adopters, or the administrative capacity of those considering a
policy, which are the core of our theoretical contribution.
Before moving on to a description of our operationalization of those and other
concepts, however, it is important to touch upon one additional advantage of RPS
policies. Renewable portfolio standards offer an intriguing test of our theoretical
argument because there is significant variation in the character and content of these
policies as they have been adopted and amended across the nation. As one example, some
states demanded that utilities produce 12.5% of electricity from renewable sources, while
others set a much higher standard of 40%. Similarly, some states gave local utilities 4
years to meet targets, while others demanded compliance in 28 years. Finally, 44% of
states that adopted RPS amended their standard at least once during the period under
study. This variation allows us to test for the relationship between implementation factors
and learning by examining not just policy adoption, but also the choice among different
policy characteristics and the amendment of existing policies among adopters.
Data, Dependent Variables, and Estimation Strategy
We model RPS policies in the American states between 1990 and 2009. The RPS
policy data, including the information used to devise the stringency score, are extracted
from the Database for State Incentives for Renewables and Efficiency, DSIRE (NC Solar
Center 2014). The RPS is assumed to be present in a state on the date by which it is
registered as effective. Because we are interested in not only whether a state decides to
adopt RPS, but also the choice among policy characteristics in the adoption process, and
policy amendment post adoption, we use several different measures of the RPS adoption
variable and thus estimate several different models.
In the first model, we operationalize policy adoption as a simple dichotomous
measure, where a state either has a policy in a given year or does not. We employ a
traditional event history analysis technique for this model, in which we drop a state from
the sample in the year following first adoption. Specifically, we employ a Cox proportional
hazards model, where “failure” is coded as the adoption of the RPS policy, either a fully
binding or voluntary policy. Because there are numerous “ties” or adoptions in the same
year in our data, we use the Efron rather than the Breslow method for dealing with these.
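The event history setup described above, in which each state contributes one observation per year until its first adoption and is censored thereafter, can be sketched as follows. This covers only the risk-set construction, not the Cox estimation itself (which requires a statistical package); the state names and adoption years are hypothetical.

```python
# Minimal sketch of event history (survival) data construction: each state
# contributes one row per year until it first adopts an RPS, and is dropped
# from the risk set in the years after adoption. Names and years are
# hypothetical, not the paper's actual data.

def risk_set_rows(states, first_adoption, start=1990, end=2009):
    """Yield (state, year, adopted_this_year) rows, censoring post-adoption."""
    rows = []
    for state in states:
        adopt_year = first_adoption.get(state)  # None if the state never adopts
        for year in range(start, end + 1):
            if adopt_year is not None and year > adopt_year:
                break  # drop state-years after first adoption
            rows.append((state, year, int(year == adopt_year)))
    return rows

rows = risk_set_rows(["A", "B"], {"A": 1995})  # B never adopts in the window
# State A contributes 1990-1995 (a "failure" in 1995); B contributes all 20 years.
print(sum(r[2] for r in rows))  # 1 adoption event
print(len(rows))                # 26 state-years (6 for A, 20 for B)
```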
9
In the second model, policy adoption is operationalized as a categorical variable
that measures the characteristics of RPS policy at the time of adoption. Specifically, we
model the stringency of a state’s policy using a score first proposed by Carley and Miller
(2012). This score is a measure of the total percentage of renewable energy generation
that must be added due to the RPS divided by the total number of years that a state has
given to achieve this percentage, multiplied by the percent of a state’s electricity load that
is regulated under this mandate. This score provides an estimate of how quickly a state
must develop and deploy new renewable energy, weighted by how much of the state’s
electricity market is actually regulated. The variable in our second model is equal to zero
when a state has no RPS policy, one when a state has a voluntary policy, two when a
state’s stringency score is under the median value of stringency scores in that year, and
three when the stringency score is over the median (see Carley and Miller 2012).
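The stringency score and the four-category coding described above can be illustrated with a short sketch; the formula follows the verbal description in the text, but all input numbers below are hypothetical.

```python
# Illustrative sketch of the Carley and Miller (2012) stringency score: the
# renewable percentage a state must add, divided by the years allowed to reach
# the target, weighted by the share of the electricity load the mandate covers.
# The numbers used here are hypothetical.

def stringency_score(pct_to_add, years_to_target, share_of_load_covered):
    """(percentage points of renewables to add / years allowed) x load share."""
    return (pct_to_add / years_to_target) * share_of_load_covered

def stringency_category(has_rps, voluntary, score, yearly_median):
    """0 = no RPS, 1 = voluntary policy, 2 = below that year's median score,
    3 = at or above the median."""
    if not has_rps:
        return 0
    if voluntary:
        return 1
    return 2 if score < yearly_median else 3

# A state that must add 20 percentage points of renewables over 10 years,
# with the mandate covering half of its electricity load:
score = stringency_score(20.0, 10, 0.5)
print(score)                                         # 1.0
print(stringency_category(True, False, score, 1.5))  # 2 (below the median)
```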
We also employ an event history approach in this model and thus drop observations
post-adoption. Because the dependent variable is categorical, we estimate a multinomial
probit and deal with time dependence via the inclusion of three cubic splines, as
suggested by Beck, Katz, and Tucker (1998). We also test whether dependent variable
categories should be collapsed using a Cramer-Ridder test and the independence of
irrelevant alternatives (IIA) assumption using the Hausman test. Neither category mix nor
IIA is found to present problems.
In the final model, we operationalize the stringency of an RPS policy with a
continuous measure. In this model, we are not only interested in how strong or weak the
policy is at the time of adoption, but also over the entire study period. This model,
therefore, does not drop observations following policy adoption; the dependent variable
also varies over time for those states that revise their policy at some point over the study
period. More specifically, we use a continuous measure of the calculated stringency score
for each state in each year, again according to the approach devised by Carley and Miller
(2012).5 If a state revises its policy in a given year, the stringency score will change
accordingly. We model this variable via a standard panel data estimation using state and
year fixed effects and robust standard errors.
Independent Variables
Because we are interested in the degree to which the impact of information about
policy effectiveness is moderated by concerns about implementation, our first
independent variable needs to capture both who a potential adopter is most likely to learn
from and the information about policy effectiveness contained in the information they
receive from those peers. In terms of effectiveness, we focus on compliance measured as
the average percent of a state’s RPS annual mandate that is achieved by utilities in each
year. So if, for example, a state must deploy 500,000 MWh to achieve its annual
benchmark, but its utilities deploy a total of only 400,000 MWh, the compliance value for
that state in that year would be 0.8.6 In terms of those exemplars from which potential
adopters are most likely to value information, we focus first on neighbors, the variable
that enjoys the greatest support in the literature.

5 All states with voluntary RPS policies have stringency scores equal to zero.
6 These data are utility reported and tracked by DSIRE.
As noted above, our key independent variable combines these separate
components, effectiveness and geographic proximity, into a single measure. Specifically,
for each state and year the variable measures the average compliance rate in the previous
year among those contiguous neighbors that had already adopted an RPS standard.
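The construction of this key variable can be sketched as follows, with made-up neighbor lists and compliance rates; the treatment of states with no adopting neighbors (a value of 0.0 here) is an assumption, not the paper's stated rule.

```python
# (state, year) -> compliance rate, recorded only for states that had adopted
compliance = {
    ("B", 2004): 0.8,
    ("C", 2004): 0.6,
}
# Contiguous neighbors (hypothetical)
neighbors = {"A": ["B", "C", "D"]}

def neighbor_compliance(state: str, year: int) -> float:
    """Average prior-year compliance among adopting contiguous neighbors."""
    rates = [
        compliance[(n, year - 1)]
        for n in neighbors[state]
        if (n, year - 1) in compliance  # only neighbors that had adopted
    ]
    return sum(rates) / len(rates) if rates else 0.0

# D had not adopted, so only B and C contribute: (0.8 + 0.6) / 2
print(round(neighbor_compliance("A", 2005), 2))  # 0.7
```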
Our next set of independent variables captures the relative and absolute
implementation environment within a state. For the first, we use a dimension upon which
we believe lawmakers might look for similarities when considering whether they could
produce the RPS compliance rates observed in neighboring states. Specifically, we focus
on the regulation of electricity markets within a state and, more precisely, on the similarity in
the regulatory environments of potential and previous adopters.
Though our primary interest is not in the direct impact of deregulation on RPS
compliance, there are reasons to expect that this relationship would be positive.
Intuitively, we might expect that states operating a regulated monopoly would see higher
compliance rates because they avoid the multiple agent problem and can, therefore, more
easily monitor a producer. However, there are several strains of literature that suggest
that deregulation could increase compliance with RPS standards among local utilities.
First, research suggests that deregulation incentivizes producers to take advantage of
policies like RPS in order to “environmentally differentiate” themselves and court
customers who value green power (Delmas et al. 2007). Additionally, research on vertical
monopolies suggests that regulatory regimes in these arrangements are often weak and
tend to suffer from very large information asymmetries, which makes the confirmation of
compliance with regulations difficult (Fabrizio et al. 2007).7 Finally, empirical work
suggests that deregulation correlates positively with the total renewable energy utilized
within a state (Carley 2009).
However deregulation affects compliance, our argument is simply that states will
value compliance information more when it comes from previous adopters with a similar
regulatory environment. Thus, while we include the deregulated indicator in all models as
a control, our independent variable is actually the similarity between the regulatory
environment of a potential adopter and the environments in surrounding states. In order to
create this measure, we first calculate the proportion of a state’s neighbors that are
deregulated. The “same regulatory environment” is that proportion for potential adopters
that are deregulated and the inverse of that proportion for those that are not.
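A minimal sketch of this construction using invented data; `regulatory_sameness` is a hypothetical helper name.

```python
# Hypothetical regulatory status of each state's retail electricity market
deregulated = {"A": True, "B": True, "C": False, "D": False}
neighbors = {"A": ["B", "C", "D"]}

def regulatory_sameness(state: str) -> float:
    """Share of a state's neighbors that match its own regulatory status."""
    share_dereg = sum(deregulated[n] for n in neighbors[state]) / len(neighbors[state])
    # Deregulated states get the deregulated share; regulated states its inverse
    return share_dereg if deregulated[state] else 1 - share_dereg

# A is deregulated and 1 of its 3 neighbors is deregulated -> 1/3
print(regulatory_sameness("A"))
```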
Along with the expectation that the value of effectiveness information from peers
will increase as similarities with the implementation environment in peer states increases,
we also expect that such information will become less valuable as a state’s own
implementation capacity grows. In order to begin testing that second hypothesis, we need
7 Interestingly, analyses of our own data confirm that, in 2009, public utilities commissions in
deregulated states employed significantly more staff than those in regulated states, even after
controlling for a host of demographic, political, and economic factors.
an indicator of internal capacity. Ideally, we would use the size of public utilities
commissions as our measure of capacity. These data, however, are only available for all
50 states in a very limited set of years. Instead we use the measure of “traditional”
environmental enforcement created by Konisky (2007). The measure captures total
enforcement actions initiated by state inspectors in each state and year. Unfortunately,
these data are only available from 1985 to 2000, which does not overlap sufficiently with
our period of study. It is a relatively long time frame, however, and enforcement at year t
correlates at over .91 with enforcement at t-1 throughout the series and in the presence of
numerous other controls. Because of these features of the data, we are able to create a
stable “average” enforcement figure for each state based on the 1985 to 2000 values. We
divide that figure by GSP in each year in order to normalize for state size and the need for
environmental regulation. This procedure creates a time-variant variable that we use as
our measure of capacity. This is not a perfect proxy of the regulation of public utilities,
but we believe that it adequately captures the resources that a state is willing to devote to
regulatory enforcement. We expect that states with a high enough commitment may have
faith in their ability to produce high RPS compliance regardless of the expectation of
their peers.
Finally, all models discussed below include multiplicative interaction terms,
which are the variables that actually allow us to test our hypotheses. Specifically, we
estimate one model for each of the dependent variables discussed above that contains an
interaction between the measure of regulatory sameness and the average compliance rate
in neighboring adopters. We also estimate three models where the interaction term
contains the state’s own enforcement per GSP and the measure of compliance in
neighboring states.
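Because the hypotheses are tested through interactions, the quantity of interest is a conditional marginal effect: the effect of neighboring compliance equals its own coefficient plus the interaction coefficient times the moderator. A sketch with placeholder coefficients (not estimates from the paper):

```python
# Placeholder coefficients: main effect of neighbor compliance and its
# interaction with the moderator (e.g., regulatory sameness)
b_comp, b_int = 0.2, 1.5

def marginal_effect(moderator: float) -> float:
    """d(outcome)/d(compliance) = b_comp + b_int * moderator."""
    return b_comp + b_int * moderator

# The effect of compliance information grows with the moderator
for m in (0.0, 0.5, 1.0):
    print(m, marginal_effect(m))
```

With a negative interaction coefficient, as in the capacity models, the same formula instead shows the effect of compliance information shrinking as the moderator grows.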
Control Variables
In addition to the independent variables discussed above, we, of course, control
for other influences on adoption identified in previous diffusion and state energy policy
adoption studies. The first of these is a more traditional measure of peer effects. While
recent work suggests that information about policy effectiveness from peers is really what
drives the learning process, there are also a number of studies that have shown a
correlation between adoption decisions and adoptions in neighboring states, regardless of
effectiveness. In order to capture this effect, we include a measure of the proportion of
neighboring states that have adopted an RPS for each state and year.
Next, we include a second variable to capture the impact of policy effectiveness
among a state's ideological peers. In this paper, we only test for the interaction between
implementation factors and information about effectiveness coming from neighboring
states both for the sake of parsimony and because we believe states will have better
information about the implementation environment in neighboring versus geographically
distant but ideologically similar states. We also do not include this variable directly in the
tests of our theoretical expectations because we are not as confident in its validity as a
measure of effectiveness information as we are in the measure of contiguous-state
compliance discussed above. This is because the ideological peer variable's construction
is more complicated than the neighboring compliance measure. In a given year for a
given state, we calculate the average difference in ideological values between the state of
interest and all other states that had adopted the policy in the year prior, and then multiply
this value by the average compliance value of all of those peers. The same calculation is
performed for all policy adoptions in previous years, and the two values are then
averaged together and lagged by one year.8 This series of calculations generates a
measure that is weighted simultaneously by difference in ideology (i.e., peer status),
policy compliance, and policy vintage.
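Under one reading of this construction, the cohort-by-cohort calculation can be sketched as follows; all states, ideology scores, and compliance rates are invented, and `ideological_peer_signal` is a hypothetical helper name.

```python
# Hypothetical ideology scores (0-100 scale) and adoption cohorts
ideology = {"ME": 60.0, "TX": 30.0, "CA": 80.0}
adopted_in = {2003: ["TX"], 2004: ["CA"]}   # year -> states adopting that year
compliance = {"TX": 0.5, "CA": 0.9}         # average compliance of each adopter

def ideological_peer_signal(state: str, year: int) -> float:
    """For each past adoption cohort, multiply the average ideological distance
    to that cohort by the cohort's average compliance, then average across
    cohorts (the one-year lag is enforced by the cohort_year < year filter)."""
    values = []
    for cohort_year, states in adopted_in.items():
        if cohort_year >= year:
            continue
        dist = sum(abs(ideology[state] - ideology[s]) for s in states) / len(states)
        comp = sum(compliance[s] for s in states) / len(states)
        values.append(dist * comp)
    return sum(values) / len(values) if values else 0.0

# ME vs 2003 cohort: |60-30| * 0.5 = 15; vs 2004 cohort: |60-80| * 0.9 = 18
print(ideological_peer_signal("ME", 2005))  # (15 + 18) / 2 = 16.5
```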
The next set of controls includes economic and demographic measures, including
the price of electricity, the status of a state’s electricity regulatory structure, the
population growth rate, and gross state product per capita. The price of electricity,
gathered from the Energy Information Administration (2010), measures the annual
average real price of electricity in each state, in cents/kWh, averaged across all end users.
The deregulation variable is dichotomous, equal to one if the state’s retail market is
deregulated and equal to zero otherwise. A state's annual population growth is extracted
from annual Census Bureau data (1999, 2009), and GSP per capita is derived from
Bureau of Economic Analysis data (2010).9
We control for three political variables. We capture state-level political ideology
with Berry et al.'s (2010) measures of citizen and government ideology. These
variables range from 0 to 100, where 100 represents the highest level of liberalism and 0
represents the lowest. We also control for fossil fuel industry presence with a measure of
carbon dioxide emissions per capita, drawn from the Environmental Protection Agency
(2010).
We additionally include two other factors. First, we control for wind and solar
energy potential as the total GWh possible per year. Wind potential is based on the
available windy land area, after exclusions, with a wind turbine capacity factor of 30
percent at a height of 80 meters (DOE 2011). Solar potential represents average solar
radiation measured between 1961 and 1990 for a south-facing flat-plate collector, with zero
degree tilt, multiplied by the total area of land and the number of sunny days per year
(NREL 1991). Second, we include a measure of whether the state has another energy
policy instrument, a net metering policy, which is even more prevalent than the RPS. Net
metering allows a customer who owns a small-scale renewable energy system at or below a
certain capacity to connect that system to the electric grid and exchange
electricity with the grid. We include this variable on the premise that previous adoption
of energy policies encourages future adoption of other energy policies (Yi and Feiock,
2012). This variable is lagged one year and is constructed using policy information from
DSIRE (NC Solar Center, 2014).
Findings and Discussion
8 This technique helps to separate the impact of very recent adoptions among ideological peers,
which the literature suggests should be more consequential (see Grossback et al. 2004), from
older ones.
9 This variable is based on Standard Industrial Classification (SIC) codes before 1997 and North
American Industry Classification System (NAICS) codes from 1997 onward.
The results from our empirical tests are presented in Tables 1 and 2 and Figures
2-5. Table 1 contains the three models examining whether similarity in the regulatory
environment moderates the impact of information regarding RPS compliance rates in
neighboring states on adoption decisions. The first column presents the results from a
Cox model of the dichotomous adoption choice, while the second contains the
multinomial probit regression using the categorical stringency at time of adoption, and
the third contains the fixed effects panel regression with the continuous measure of
stringency over the life of the policy. We will present the findings related to each
hypothesis in order, dealing first with the expectation that shared implementation
capacity should increase the value of information on policy effectiveness before turning
to the assertion that high internal implementation capacity should decrease the impact
that such information has on the adoption decision.
[Insert Tables 1 and 2 about here]
Before discussing the two primary hypotheses, we can note that, as the literature
would suggest, political ideology is a consistent predictor of RPS policies, where more
liberal states are more likely to adopt any RPS, to adopt more binding and stringent
policies, and to adjust stringency upward through the policy amendment process.
Additionally, the influence of the fossil fuel industry is evident in the CO2 per capita
figure in the multinomial probit model, where higher rates of CO2 per capita decrease the
likelihood that a state will adopt a binding policy. Finally, the models suggest that
renewable energy potential and deregulation correlate positively with increases in the
stringency level of RPS over the life of these policies.
Turning now to the independent variables of primary interest, we see that, across
all three models, the main effect of average compliance scores among neighboring
adopters is not statistically significant. This is not at all surprising because, in the
presence of the interaction between neighboring states’ compliance and similar regulatory
environment, that coefficient represents the impact when there is no information on
effectiveness coming from peers that share the same regulatory environment.
The real variable of interest for hypothesis testing is the interaction term and, in
all three models, it is significant and in the expected positive direction. Turning first to
the Cox model, the impact of the interaction is easier to see graphically. Figure 2 graphs
the survival function at the mean level of neighboring compliance rates and at 1-standard
deviation below (line 1) and 1-standard deviation above (line 2) the mean value of the
proportion of neighbors with the same regulatory environment. As the figure suggests, the
probability of a state “surviving” each year without adopting an RPS is lower for states
where regulatory sameness with neighbors is high, even though both states are getting the
same information about policy effectiveness from geographic peers.
[Insert Figure 2 here]
If we examine the model of the categorical RPS variable presented in Column 2,
the interaction is again significant, but only in the final equation. Compliance rates in
neighboring states had a positive, though only marginally significant, impact on the
decision to adopt a voluntary RPS. That impact was not, however, moderated by
similarity in the implementation environment. Neither neighboring compliance rates nor
their interaction with regulatory sameness were significant predictors of the decision to
adopt a mandatory but relatively low-stringency RPS policy.
When we turn to those cases where states adopted a mandatory policy that was
more stringent than the median, however, the interaction term is highly significant and in
the expected positive direction. This suggests that compliance information has a larger
impact on the decision to adopt this type of RPS, relative to no policy, when a greater
proportion of neighboring states have the same regulatory environment. An assessment of
predicted effects suggests that neighboring compliance rates do not begin to have a
significant impact on the adoption decision until the proportion of neighbors with the
same regulatory environment increases to 0.6.
The final column of Table 1 presents a model of the continuous measure of RPS
stringency. Again, the interaction term is positive and significant, suggesting that
neighboring compliance information has a larger impact on the adjustment of RPS
stringency levels when a greater proportion of neighbors share the implementation
environment. Figure 3 graphs the marginal effect of an increase in compliance rates
across the range of the regulatory similarity measure for this dependent variable. Again,
the graph suggests that neighboring compliance does not begin to have a significant
impact on RPS stringency until the proportion of neighbors with the same regulatory
environment reaches 0.7. From that point, an increase of 1-standard deviation in the
sameness measure increases stringency by approximately one-third of a standard deviation.
[Insert Figure 3 about here]
Table 2 presents models testing the hypothesis that internal implementation
capacity reduces the impact of policy effectiveness information on adoption decisions. As
a reminder, our measure of capacity is state-level enforcement actions for clean air, clean
water, and hazardous materials violations normalized by GSP. As in the first table,
Column 1 contains results from a survival analysis of the dichotomous adoption choice.
The interaction between enforcement actions and neighboring compliance is negative and
significant, suggesting that the impact of compliance information from neighbors on the
adoption decision decreases as internal enforcement capacity goes up.10 A graph of the
survival function, as displayed in Figure 4, with enforcement actions per GSP set to
1-standard deviation below (line 1) and 1-standard deviation above (line 2) the mean shows
that the probability of a state “surviving” each year without adopting an RPS is
consistently higher at the greater level of enforcement in states whose neighbors report
identical levels of compliance.
[Insert Table 2 and Figure 4 about here]
10 The interaction term is individually significant at .1 on a 1-tailed test, but the interaction
terms are jointly significant at .05 on a 2-tailed test.
Column 2 of the second table presents the model with the categorical measure of
RPS stringency at initial adoption. In this case, an increase in neighboring compliance
rates has a positive impact on the likelihood of adopting both a voluntary RPS and a
mandatory RPS that is less stringent than the median. In the latter case, however, that
impact is moderated by enforcement capacity. The predicted effects suggest that, in the
case of the adoption of mandatory but low stringency policies, the impact of compliance
information from neighboring states begins diminishing significantly once enforcement
capacity rises above the mean level.
Results from the model of the continuous measure of RPS stringency are
presented in the third column of Table 2. The interaction between enforcement actions
and neighboring compliance information is negative and statistically significant,
suggesting that the influence of effectiveness information from peers decreases as
enforcement capacity increases. However, an examination of the marginal effects (Figure
5) suggests that enforcement capacity has a relatively limited impact on the importance of
neighboring compliance information. The former only begins to significantly moderate the
latter once enforcement capacity reaches 2-standard deviations above the mean level.
Nonetheless, when taken together, the findings presented above provide
considerable evidence for our hypotheses. Whether we are modeling a simple adoption
decision, or taking account of the different policy characteristics lawmakers might choose
during and after adoption, implementation concerns consistently appear to be relevant. In
all three models presented in Table 1, we find that information about compliance rates in
neighboring states has a greater impact when a greater proportion of those states have the
same regulatory environment as a potential adopter. In at least two of the three models in
Table 2, depending on which statistical significance threshold one values, the results
suggest that the influence of neighboring compliance rates decreases as the internal
enforcement capacity of a potential adopter goes up. It appears, therefore, that states are
concerned not only with how effective RPS has been for previous adopters, but also
whether they will be able to duplicate that effectiveness within their borders. That
concern leads them to prefer effectiveness information from states with similar
implementation environments, but discount the importance of previous effectiveness
when their own implementation capacity is sufficiently high.
Conclusion
In recent years, much of the debate in the literature on diffusion has focused on
the challenges of distinguishing and understanding policy learning. Scholars have noted
that it can be very difficult to tell the difference between the careful weighing of costs
and benefits by potential adopters and other decision processes that may produce the
same adoption outcome. As a result, they suggest that diffusion patterns ascribed to
“learning” may, in some cases, be simple imitation or even the result of intrajurisdictional
experimentation free from any external influence.
Fortunately, previous work has also suggested a method by which students of
diffusion can distinguish policy learning from other decision processes. Specifically, they
have argued that a focus on policy effectiveness is the best way to do so. The logic of this
argument is that the simple spread of policies from one jurisdiction to another could be
caused by any number of factors, but if we see that jurisdictions are copying successful
policies and forgoing those that prove ineffective, then we can assert with greater
confidence that policy learning is contributing to diffusion.
This paper offers another test that scholars can use to determine if observed
diffusion patterns are a result, at least in part, of interjurisdictional learning. There is
significant evidence that lawmakers are highly attuned to the relationship between
implementation and policy success. We argue that if potential adopters are gathering
information about the effectiveness of previously adopted policies, they are also likely to
gather information about the ways in which those policies were implemented. If learning
is driving the adoption decision, we should see evidence that lawmakers considering an
innovation weight effectiveness information based on their ability to reproduce that
success in the implementation process. Thus, we should see them give more weight to
effectiveness information from states that share similar implementation conditions, but
discount the importance of success among previous adopters when internal
implementation capacity is high.
Based on these expectations, it appears that in the case of RPS policies learning
did influence the diffusion of the innovation among the American states. The success of
these policies really only had a meaningful effect on the likelihood of adoption in states
that shared the regulatory environment of previous adopters and, thus, had the reasonable
expectation that they would experience similar levels of success. Similarly, we find that
states with high capacity to monitor policy outcomes post adoption were less likely to let
low success rates among previous adopters deter them from embracing an RPS. It is very
hard to imagine a process other than policy learning that could produce this pattern of
results.
Before concluding, it is important to note that examination of the effectiveness-implementation interaction cannot offer a critical test of the learning hypothesis. In other
words, if we fail to observe implementation factors moderating the impact of
effectiveness information, it is not sufficient evidence to conclude that policy learning is
not present. The observation of a moderating relationship can, however, provide an
affirmative test, which gives researchers more confidence that learning is influencing
diffusion patterns.
Of course, there is still a great deal of research that needs to be done to confirm
that attention to implementation can help us better understand policy learning. First and
foremost, the results from this study need to be replicated in a policy where “success” is
not as easily observed. In the case of RPS, lawmakers face what is primarily an agency
problem and by observing compliance with established targets they can easily conclude
whether the policy has been successful or not. Such determinations are not so easily made
in the case of numerous other policies where success is more difficult to measure. It is an
open empirical question whether policy makers will give the same weight to
implementation factors in these types of policies.
References
Balla, Steven J. 2001. Interstate Professional Associations and the Diffusion of Policy
Innovations. American Politics Research 39:221-45.
Berry, William D., Richard C. Fording, Evan J. Ringquist, Russell L. Hanson, and Carl E.
Klarner. 2010. “Measuring Citizen and Government Ideology in the U.S. States: A
Re-appraisal.” State Politics & Policy Quarterly 10(2): 117 -135. Data through 2008
updated online at <http://rcfording.wordpress.com/state-ideology-data/>; retrieved
January 19, 2011.
Boehmke, F. J., & Witmer, R. (2004). Disentangling diffusion: The effects of social
learning and economic competition on state policy innovation and expansion.
Political Research Quarterly, 57(1), 39-51.
Boushey, Graeme. 2010. Policy Diffusion Dynamics in America. New York, NY:
Cambridge University Press.
Carley, Sanya, and Chris J. Miller. 2012. “Regulatory Stringency and Policy Drivers: A
Reassessment of Renewable Portfolio Standards.” Policy Studies Journal.
Chandler, Jess. 2009. “Trendy Solutions: Why do States Adopt Sustainable Energy
Portfolio Standards?” Energy Policy 37:3274-3281.
Delmas, M., Russo, M. V., & Montes-Sancho, M. J. (2007). Deregulation and
environmental differentiation in the electric utility industry. Strategic Management
Journal, 28(2), 189-209.
Gray, Virginia. 1973. Innovation in the States: A Diffusion Study. American Political
Science Review 67:1174-85.
Grossback, Lawrence J., Sean Nicholson-Crotty, and David A. M. Peterson. 2004.
“Ideology and Learning in Policy Diffusion.” American Politics Research 32(5): 521-545.
Haider-Markel, Donald P. 2001. Policy Diffusion as a Geographical Expansion of the
Scope of Political Conflict: Same-Sex Marriage Bans in the 1990s. State Politics
and Policy Quarterly 1:5-26.
Howlett, M. 2009. Policy analytical capacity and evidence-based policy-making: Lessons
from Canada. Canadian Public Administration, 52(2): 153-175.
Huang, Ming-Yuan, Janaki R.R. Alavalapati, Douglas R. Carter, and Matthew H.
Langholtz. 2007. “Is the Choice of Renewable Portfolio Standards Random?” Energy
Policy 35(11): 5571-5575.
Huber, J. D., Shipan, C. R., & Pfahler, M. (2001). Legislatures and statutory control of
bureaucracy. American Journal of Political Science, 330-345.
Huber, J. D., & Shipan, C. R. (2002). Deliberate discretion?: The institutional
foundations of bureaucratic autonomy. Cambridge University Press.
Hill, M. J., & Hupe, P. L. (2002). Implementing public policy: governance in theory and
practice. London: Sage.
Karch, Andrew. 2007a. Democratic Laboratories: Policy Diffusion among the American
States. Ann Arbor: University of Michigan Press.
Lyon, Thomas P., and Haitao Yin. 2010. “Why Do States Adopt Renewable Portfolio
Standards? An Empirical Investigation.” Energy Journal 31(3):133-157.
Makse, Todd, and Craig Volden. 2011. The Role of Policy Attributes in the Diffusion of
Innovations. Journal of Politics 73:108-24.
Matisoff, Daniel C. 2008. “The Adoption of State Climate Change Policies and
Renewable Portfolio Standards: Regional Diffusion or Internal Determinants?”
Review of Policy Research 25(6):527-546.
May, P. J. (1993). Mandate design and implementation: Enhancing implementation
efforts and shaping regulatory styles. Journal of Policy Analysis and Management, 12(4),
634-663.
Mazmanian, D. A., & Sabatier, P. A. (1983). Implementation and public policy.
Glenview, IL: Scott, Foresman.
McCubbins, M. D., Noll, R. G., & Weingast, B. R. (1989). Structure and process, politics
and policy: Administrative arrangements and the political control of agencies.
Virginia Law Review, 431-482.
McCubbins, M. D., & Schwartz, T. (1984). Congressional oversight overlooked: Police
patrols versus fire alarms. American Journal of Political Science, 165-179.
McDermott, K. A. (2006). Incentives, capacity, and implementation: Evidence from
Massachusetts education reform. Journal of Public Administration Research and
Theory, 16(1), 45-65.
Meseguer, C. (2005). Policy learning, policy diffusion, and the making of a new order.
The Annals of the American Academy of Political and Social Science, 598(1), 67-82.
Mintrom, Michael. 1997. Policy Entrepreneurs and the Diffusion of Innovation.
American Journal of Political Science 42:738-770.
Mintrom, Michael. 2000. Policy Entrepreneurs and School Choice. Washington, DC:
Georgetown University Press.
Mintrom, Michael, and Sandra Vergari. 1998. Policy Networks and Innovation Diffusion:
The Case of Education Reform. Journal of Politics 60:126-148.
Moe, T. M. (1989). The politics of bureaucratic structure. Can the government govern,
267, 285-323.
Moe, T. M. (1991). Politics and the Theory of Organization. Journal of Law, Economics,
and Organization, 7(special issue), 106-129.
National Renewable Energy Laboratory. 1991. Solar Radiation Data Manual for
Flat-Plate and Concentrating Collectors. Available at
<http://rredc.nrel.gov/solar/pubs/redbook/>. Accessed January 3, 2008.
Nicholson-Crotty, Sean. 2009. “The Politics of Diffusion: Public Policy in the American
States.” The Journal of Politics 71 (1): 192-205.
North Carolina Solar Center, 2014. Database of State Incentives for Renewable Energy
(DSIRE). Available at <http://www.dsireusa.org>. Accessed Feb. 16, 2014.
Pressman, Jeffrey, and Aaron Wildavsky. 1973. Implementation. Berkeley: University of
California Press.
Rogers, Everett M. 1995. Diffusion of Innovations. 4th ed. New York: The Free Press.
Shipan, Charles R., and Craig Volden. 2008. “The Mechanisms of Policy Diffusion.”
American Journal of Political Science 52(4): 840-857.
Spillane, J. P., & Thompson, C. L. (1997). Reconstructing conceptions of local capacity:
The local education agency’s capacity for ambitious instructional reform. Educational
Evaluation and Policy Analysis, 19(2), 185-203.
U.S. Bureau of Economic Analysis. Nov. 24, 2010. Regional Economic Accounts: Gross
Domestic Product by State. Online at <http://www.bea.gov/regional/gsp>. Retrieved
Nov. 24, 2010.
U.S. Census Bureau. 1999 and 2009. State Population Estimates: Annual Time Series.
Population Estimates Program, Population Division, Washington, DC. Retrieved
online at <http://www.census.gov/popest/data/historical/index.html>, April 21, 2012.
U.S. Department of Energy. 2011. Energy Efficiency and Renewable Energy: Wind
Powering America. Available online at
<http://www.windpoweringamerica.gov/wind_maps.asp>. Retrieved September 21,
2011.
U.S. Energy Information Administration. 2010. Electricity Price by State and End-User,
1990–2008. Available at <http://www.eia.doe.gov/cneaf/electricity/page/>. Retrieved
November 18, 2010.
U.S. Environmental Protection Agency. 2010. “State CO2 Emissions from Fossil Fuel
Combustion.” Available at <
http://www.epa.gov/statelocalclimate/resources/state_energyco2inv.html>. Retrieved
Mar. 28, 2012.
Volden, Craig. 2002. The Politics of Competitive Federalism: A Race to the Bottom in
Welfare Benefits? American Journal of Political Science 46: 352-363.
Volden, Craig. 2006. “States as Policy Laboratories: Emulating Success in the Children's
Health Insurance Program.” American Journal of Political Science 50(2): 294-312.
Volden, Craig, Michael Ting, and Daniel Carpenter. 2008. “A Formal Model of Learning and
Policy Diffusion.” American Political Science Review 102: 319-332.
Walker, Jack L. 1969. “The Diffusion of Innovations Among the American States.”
American Political Science Review 63(3): 880-899.
Weyland, K. G. (2004). Learning from foreign models in Latin American policy reform.
Johns Hopkins University Press.
Weyland, K. G. (2005). Theories of policy diffusion: lessons from Latin American
pension reform. World politics, 57(2), 262-295.
Wiener, Joshua G., and Tomas M. Koontz. 2010. “Shifting Winds: Explaining Variation
in State Policies to Promote Small-Scale Wind Energy.” Policy Studies Journal 38(4):
629-651.
Wiser, R., Barbose, G., Holt, E., 2011. Supporting solar power in renewables portfolio
standards: Experience from the United States. Energy Policy 39(7), 3894-3905.
Yi, Hongtao, and Richard Feiock. 2012. “Policy Tool Interactions and the Adoption of
State Renewable Portfolio Standards.” Review of Policy Research 29(2): 193-206.
Table 1 Regulatory Similarity and the Impact of Compliance Information on RPS Adoption and Adjustment

| | Cox Model | Multinomial Probit: Voluntary | Multinomial Probit: Mand. Low | Multinomial Probit: Mand. High | Regression |
|---|---|---|---|---|---|
| Neighbor Compliance | 0.743 (0.805) | 2.441 (1.260) | -0.458 (0.917) | -1.174 (1.094) | -8.524 (3.829) |
| Regulatory Similarity | 0.177 (0.188) | 0.325 (1.073) | -0.285 (0.904) | -2.865 (1.300) | 4.226 (3.243) |
| Similar X Comp. | 30.559 (43.160) | -1.088 (1.537) | 0.922 (1.262) | 4.578 (1.577) | 17.849 (5.129) |
| Ideo. Peer Comp. | 1.430 (0.828) | 0.067 (0.152) | -0.099 (0.158) | 0.001 (0.138) | -2.083 (2.158) |
| Electricity Price | 1.058 (0.115) | -0.009 (0.167) | -0.205 (0.135) | 0.091 (0.119) | 4.077 (0.601) |
| Deregulated | 0.923 (0.497) | -1.612 (0.746) | -0.051 (0.552) | 0.499 (0.692) | 10.200 (2.058) |
| Citizen Ideology | 1.084 (0.025) | 0.084 (0.032) | 0.076 (0.027) | 0.018 (0.021) | 0.319 (0.102) |
| Govt. Ideology | 1.005 (0.010) | -0.014 (0.011) | -0.007 (0.013) | 0.049 (0.011) | 0.181 (0.034) |
| Renew. Eng. Poten. | 1.001 (0.001) | -0.002 (0.001) | 0.002 (0.001) | 0.001 (0.001) | 41.201 (16.843) |
| CO2 Emissions | 0.975 (0.016) | -0.014 (0.017) | -0.036 (0.013) | -0.045 (0.031) | 0.248 (0.316) |
| Gross State Product | 241.134 (7657.109) | 65.048 (44.791) | 51.103 (28.123) | 45.080 (41.121) | -86.776 (237.742) |
| Net Metering Policy | 0.704 (0.267) | -0.554 (0.525) | -0.216 (0.490) | -0.984 (0.504) | 7.558 (1.798) |
| Adopting Neighbors | 0.602 (0.474) | 0.309 (1.233) | 1.040 (0.855) | -0.903 (0.786) | 3.686 (3.550) |
| Pop. Growth Rte. | 1.36E+31 (4e+32) | 33.811 (22.731) | 4.532 (13.934) | 13.419 (33.598) | -119.944 (89.826) |
| Intercept | — | -89.850 (54.993) | -12.356 (2.893) | -10.402 (2.438) | -12116.720 (4928.824) |
| Number of Observations | 813 | 814 | 814 | 814 | 1000 |
| R-squared | — | — | — | — | 0.509 |

Numbers in parentheses are robust standard errors. Models 2, 3, and 4 contain cubic splines of time. Model 5 includes year and state fixed effects.
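In a Cox event-history model with an interaction term, the effect of neighbor compliance is conditional on regulatory similarity: the hazard ratio at a given similarity level is exp(b_comp + b_interact × similarity). A minimal sketch of that calculation, using hypothetical coefficients chosen for illustration (not the estimates in the table):

```python
import numpy as np

# Hypothetical log-hazard coefficients (illustration only, not the
# paper's estimates): the conditional hazard ratio of neighbor
# compliance at a given regulatory similarity is
# exp(b_comp + b_interact * similarity).
b_comp = -0.30      # main effect of neighbor compliance (log-hazard scale)
b_interact = 3.40   # compliance-x-similarity interaction (log-hazard scale)

def compliance_hazard_ratio(similarity):
    """Conditional hazard ratio of compliance information on adoption."""
    return float(np.exp(b_comp + b_interact * similarity))

for sim in (0.0, 0.5, 1.0):
    print(f"similarity={sim:.1f}  hazard ratio={compliance_hazard_ratio(sim):.3f}")
```

With these illustrative values the ratio is below 1 at zero similarity and grows as similarity rises, which is the qualitative pattern a positive interaction implies.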
Table 2 Enforcement Capacity and the Impact of Compliance Information on RPS Adoption and Adjustment

| | Cox Model | Multinomial Probit: Voluntary | Multinomial Probit: Mand. Low | Multinomial Probit: Mand. High | Regression |
|---|---|---|---|---|---|
| Neighbor Compliance | 6.285 (5.683) | 3.209 (1.482) | 1.400 (0.885) | -0.255 (0.843) | 4.696 (5.263) |
| Enforcement Capacity | 454.186 (403.190) | -198.449 (120.347) | 93.978 (66.999) | -397.624 (268.469) | 423.858 (265.654) |
| Capacity X Comp. | 7.22E-80 (8.1E-78) | -313.840 (270.548) | -734.727 (289.973) | 304.797 (280.240) | -1038.058 (519.794) |
| Ideo. Peer Comp. | 1.485 (0.839) | 0.125 (0.159) | 0.105 (0.129) | 0.037 (0.142) | -2.161 (4.494) |
| Electricity Price | 1.147 (0.131) | -0.387 (0.203) | -0.114 (0.162) | 0.267 (0.111) | 3.573 (1.104) |
| Deregulated | 0.852 (0.418) | -1.572 (0.639) | -0.407 (0.555) | 0.410 (0.623) | 8.426 (5.028) |
| Citizen Ideology | 1.108 (0.029) | 0.097 (0.029) | 0.099 (0.040) | 0.040 (0.020) | 0.154 (0.106) |
| Govt. Ideology | 1.001 (0.010) | -0.009 (0.012) | -0.015 (0.016) | 0.040 (0.010) | 0.152 (0.048) |
| Renewable Eng. Potential | 1.003 (0.001) | -0.003 (0.002) | 0.004 (0.001) | 0.001 (0.001) | 0.029 (0.006) |
| CO2 Emissions | 0.991 (0.010) | -0.035 (0.019) | -0.028 (0.011) | -0.005 (0.020) | 0.013 (0.051) |
| Gross State Product | 7.68E+26 (2.33E+28) | -52.124 (47.082) | 109.979 (49.016) | 47.193 (41.145) | 338.358 (422.357) |
| Net Metering Policy | 0.733 (0.321) | 0.216 (0.595) | 0.061 (0.496) | -0.818 (0.549) | 6.811 (3.359) |
| Prop. Adopting Neighbors | 0.613 (0.496) | 1.378 (0.923) | 1.077 (0.958) | -0.953 (0.857) | 3.465 (7.416) |
| Pop. Growth Rte. | 1.08E+23 (3.38E+24) | 46.145 (26.217) | 1.183 (15.385) | 11.658 (26.230) | -23.814 (137.387) |
| Intercept | — | -6.559 (2.432) | -13.001 (3.331) | -12.007 (2.310) | -42.998 (17.120) |
| Number of Observations | 767 | 767 | 767 | 767 | 940 |
| R-squared | — | — | — | — | 0.509 |

Numbers in parentheses are robust standard errors. Models 2, 3, and 4 contain cubic splines of time. Model 5 includes year fixed effects.
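Both tables report robust (sandwich) standard errors rather than classical ones. A self-contained sketch on synthetic data (not the paper's data) of how an HC1 sandwich covariance differs from the classical OLS covariance when errors are heteroskedastic:

```python
import numpy as np

# Synthetic illustration (not the paper's data) of "robust standard
# errors": OLS coefficients with a White/Huber HC1 sandwich covariance.
rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5 + x, n)  # heteroskedastic noise

X = np.column_stack([np.ones(n), x])      # design matrix with intercept
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y                  # OLS coefficient estimates
resid = y - X @ beta

# Classical covariance assumes a constant error variance.
k = X.shape[1]
sigma2 = resid @ resid / (n - k)
se_classical = np.sqrt(np.diag(sigma2 * XtX_inv))

# HC1 "sandwich": bread * meat * bread, scaled by n / (n - k).
meat = X.T @ (X * resid[:, None] ** 2)
cov_hc1 = n / (n - k) * XtX_inv @ meat @ XtX_inv
se_robust = np.sqrt(np.diag(cov_hc1))

print("beta:        ", beta)
print("classical SE:", se_classical)
print("robust SE:   ", se_robust)
```

The coefficients are identical under both estimators; only the standard errors change, which is why the parenthetical entries (not the point estimates) depend on the robust correction.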
Figure 1. Cumulative adoption of RPS policies over time
[Line graph: x-axis = year (1990–2009); y-axis = percentage of states with an RPS (0 to 0.8).]
Figure 2. Impact of compliance information on RPS survival rate at different levels of regulatory similarity
[Survival curves for low-similarity and high-similarity states: x-axis = year (1990–2010); y-axis = survival rate (0.75 to 1).]
Figure 3. Impact of compliance information on RPS stringency at different levels of regulatory similarity
[Marginal-effects plot: x-axis = regulatory sameness (0 to 1); y-axis = marginal effects (-20 to 20).]
Dashed lines give 90% confidence interval.
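A conditional marginal effect of this kind is ME(sim) = b_comp + b_interact × sim, with a delta-method variance Var(b_comp) + sim² Var(b_interact) + 2·sim·Cov(b_comp, b_interact). A sketch of the computation with hypothetical coefficient and covariance values (the covariance terms in particular are assumptions, not reported estimates):

```python
import numpy as np

# Hypothetical estimates and covariance terms (assumptions for
# illustration, not the paper's): delta-method 90% CI for the
# conditional marginal effect ME(sim) = b_comp + b_int * sim.
b_comp, b_int = -8.5, 17.8
var_comp, var_int, cov = 14.7, 26.3, -15.0

def marginal_effect(sim):
    """Return (ME, lower, upper) at a given regulatory similarity."""
    me = b_comp + b_int * sim
    se = np.sqrt(var_comp + sim**2 * var_int + 2 * sim * cov)
    z = 1.645  # critical value for a 90% confidence interval
    return me, me - z * se, me + z * se

for sim in np.linspace(0.0, 1.0, 6):
    me, lo, hi = marginal_effect(sim)
    print(f"sim={sim:.1f}  ME={me:7.2f}  90% CI=({lo:7.2f}, {hi:7.2f})")
```

With a positive interaction term, the marginal effect is negative at low similarity and turns positive as similarity rises, matching the general shape of an upward-sloping marginal-effects plot.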
Figure 4. Impact of compliance information on RPS survival rate at different levels of enforcement capacity
[Survival curves for low-capacity and high-capacity states: x-axis = year (1990–2010); y-axis = survival rate (0.5 to 1).]
Figure 5. Impact of compliance information on RPS stringency at different levels of enforcement capacity
[Marginal-effects plot: x-axis = enforcement capacity (0 to 0.025); y-axis = marginal effects (-30 to 10).]
Dashed lines give 90% confidence interval.