The challenges of managing and
evaluating partnered programs, from
program theory to implementation
PADM 522—Fall 2011
Dr. Mario A. Rivera
Lecture 2
The changing nature of public program management and service delivery
• We have moved from hierarchy (command and control within organizations) to markets and market-modelled management (the "New Public Management") to collaborative network management organized around public-private and cross-sector partnerships
• Move toward recognizing a plurality of stakeholders for virtually any program
• Move from governments to governance
• "Public management is getting things done through other organizations" (Metcalfe & Richards, 1990, p. 220)
Networked or partnered programs
It has become a critical managerial ability, and an essential part of the public manager's role, to create and leverage participation in network-based programs. Pertinent organizational processes include increased internal-external linkages and connectivity; partnerships, alliances, and coalitions; cross-boundary teams; interdependencies; strategic alliances (with agencies that are at times competitors, at times collaborators) involving people from disparate venues and agencies; building consensus; ensuring that contractors are accountable; encouraging the involvement of stakeholders; participating in public dialogue; and creating a joint knowledge base.
Collaborative Relationships
• Some are informal, essentially interpersonal, one-to-one relationships across agencies
• There are also more formalized, contractual collaborative relationships
• Partnerships between and among organizations are mediated by individual brokers and 'boundary-spanners'
• Strategic alignment along interorganizational lines is crucial
• Collaborative networks require holistic analysis, consistent with systems theory (Chen, Chapter 1)
• While it is important to fashion strategic consensus, one needs to be careful about the insular tendencies of networks (Rivera-Rogers)
• One also needs to balance participatory management and evaluation with direction sufficient to a particular program
Systems complexity and systems-level interventions
System dynamics are inherently complex:
• Constantly changing;
• Governed by feedback;
• Non-linear, history-dependent;
• Adaptive and evolving;
• Characterized by trade-offs;
• Characterized by complex causality—coordination complexity, sequencing complexity, causal complexity due to multiple actors and influences, etc.
• There is too much focus in evaluation on a single intervention as the unit of analysis;
• Understanding connectivity between programs is important;
• Many complex interventions require programming (and therefore also evaluation) at multiple levels, e.g., at the community, neighborhood, school, and individual levels;
• Multilevel alignment is required across interventions.
Network Theory
Derived from 'group theory' in sociology and anthropology, network theory describes intricate sets of formal and informal relationships that create both functional and dysfunctional forms of complexity:
• Actor autonomy is pulled by forces of interdependency, and of power- and resource-dependency
• Networks involve the exchange of information and resources
• They involve complementarity of roles, functions, and capabilities
• They reflect and grow out of the 'hollowing out' of the state
• They entail shared knowledge and interorganizational learning
• They range from relatively stable internal networks to dynamic and changeable trans-organizational networks, and may institutionalize over time (e.g., the United Way)

Network questions
• Why enter network relationships (partnerships, alliances, collaborative networks)?
• What resources become available when you enter a network?
• What position is held in the network by one's own organization? (A minimal sketch of network-position analysis follows this list.)
• How easy is access?
• How do members view each other?
• What is to be gained by joining a network?
• Is there true partnership or mere agglomeration?
• How do you derive value from network-building activities?
  – Increased visibility?
  – Fundraising with more connections?
  – Empowered policy advocacy?
• How much do you give up building the network rather than your organization? What are the tradeoffs?
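Where partner-survey or program data permit, the question of network position can also be looked at quantitatively. The following is a minimal, illustrative sketch (not part of the lecture): it represents a small, entirely hypothetical partnership network as an adjacency list and computes each organization's degree centrality, that is, the share of other members it is directly tied to.

```python
# Minimal sketch: degree centrality in a hypothetical partnership network.
# All organization names and ties are invented for illustration.
ties = {
    "County Health Dept": ["Food Bank", "School District", "United Way"],
    "Food Bank":          ["County Health Dept", "United Way"],
    "School District":    ["County Health Dept", "United Way"],
    "United Way":         ["County Health Dept", "Food Bank", "School District"],
    "Legal Aid":          ["United Way"],
}

# Make ties symmetric so every listed relationship counts for both partners.
orgs = set(ties)
undirected = {org: set() for org in orgs}
for org, partners in ties.items():
    for partner in partners:
        if partner in orgs:
            undirected[org].add(partner)
            undirected[partner].add(org)

# Degree centrality: direct ties divided by the maximum possible (n - 1).
n = len(orgs)
for org in sorted(orgs):
    centrality = len(undirected[org]) / (n - 1)
    print(f"{org:20s} degree centrality = {centrality:.2f}")
```

A more central position typically means easier access to information and resources; a peripheral one may signal mere agglomeration rather than true partnership.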
Managing Interorganizational Relationships
• Managing across jurisdictions
• Managing different stakeholders
• Managing implementation
• Managing exchanges
• Managing incrementally
• Managing negotiations
• Managing multiple network functions (a sketch of the diffusion function follows this list):
  – Diffusion: Rapid spread of ideas and policy innovations via influential opinion leaders and early technology adopters (Rogers)
  – Alignment: Aligning ideas and efforts toward a common set of goals
  – Mobilization: Reaching stakeholders and motivating them to act
  – Exchange: Sharing of information, knowledge, and other resources
  – Advocacy: Development of a collective voice for change
  – Delivery: Bringing resources and assistance to increase capacity
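To make the diffusion function concrete, here is a minimal sketch (not from the lecture) of a simple contagion process in the spirit of Rogers' diffusion of innovations: an idea spreads outward from a few initial adopters to any member directly tied to a prior adopter. Members, ties, and the seed set are all hypothetical.

```python
# Minimal sketch: an idea diffusing through a hypothetical network of members.
# Members, ties, and the initial adopters are invented for illustration.
ties = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
adopters = {"A"}  # initial opinion leaders / early adopters

# Each round, any member tied to at least one adopter adopts as well,
# until no new adoptions occur.
round_no = 0
while True:
    newly_adopting = {m for m, neighbors in ties.items()
                      if m not in adopters and any(n in adopters for n in neighbors)}
    if not newly_adopting:
        break
    round_no += 1
    adopters |= newly_adopting
    print(f"Round {round_no}: {sorted(newly_adopting)} adopt")
```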

Program partnerships as types of network
• Leverage the expertise and strengths of each program to deliver improved outcomes
• Often requested in grants
• Like individual members of teams, agency network members bring their own network resources to the given enterprise
• Examples of program partnerships? When you have worked in one, have you felt as though collaboration was real?
• Is it possible to develop partnership-wide plans (strategic, functional—fund-raising, collaborative research)?
• What are the challenges of evaluating partnerships and other networks?

Defining characteristics of effective interorganizational and inter-sectoral networks
• They are integrated (some with substantial centralization), even if the centrality of different actors changes and the identities and roles of central actor(s) change;
• They are most likely to be found in situations where fiscal and human resources are relatively plentiful and where information asymmetries are minimal;
• They are most likely under conditions of stability in intergovernmental and interorganizational fiscal relations;
• They require social capital (i.e., trust-based relationships) as much as they do fiscal and human capital;
• They have, or come to have, shared values and commitments;
• They are capable of generating aligned, conjoint strategies.

Performance measurement and program evaluation in networks
• Tracking and analyzing performance-indicator data enables evaluation and accountability of collaborative work, and it is therefore a critical contributor to the capacity to manage for results across agencies. It also increases the legitimacy of collaboration, introducing a new level of accountability.
• There needs to be reasonable agreement on both goals and the way to best measure and evaluate the extent of goal attainment. There also needs to be agreement as to whom a program or policy initiative is accountable.
• Most existing performance measures assess the accomplishments of individual agencies, yet most meaningful policy results require interagency collaboration—participatory, collaborative planning, implementation, and evaluation. We need new tools to assess networked programs when multiple agencies are involved in program design and implementation.
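As a minimal illustration (not part of the lecture) of what cross-agency indicator tracking can look like, the sketch below records indicator values by agency and rolls them up to the partnership level, so results can be reported both for the network as a whole and for each partner's share. Agency names, indicators, and figures are all hypothetical.

```python
# Minimal sketch: rolling shared performance indicators up across partner agencies.
# Agencies, indicators, and values are invented for illustration.
from collections import defaultdict

records = [
    {"agency": "Health Dept",     "indicator": "families_served",     "value": 120},
    {"agency": "Food Bank",       "indicator": "families_served",     "value": 85},
    {"agency": "School District", "indicator": "families_served",     "value": 60},
    {"agency": "Health Dept",     "indicator": "referrals_completed", "value": 40},
    {"agency": "Food Bank",       "indicator": "referrals_completed", "value": 25},
]

# Partnership-level totals, plus each agency's share of every indicator.
totals = defaultdict(int)
by_agency = defaultdict(lambda: defaultdict(int))
for r in records:
    totals[r["indicator"]] += r["value"]
    by_agency[r["agency"]][r["indicator"]] += r["value"]

for indicator, total in totals.items():
    print(f"{indicator}: partnership total = {total}")
    for agency, values in by_agency.items():
        if indicator in values:
            share = values[indicator] / total
            print(f"  {agency}: {values[indicator]} ({share:.0%})")
```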
Program theory: what is it?
• A plausible and sensible model of how a program is supposed to work (L. Bickman, 1987)
• A systematic delineation of cause-and-effect relationships (M. Scheirer, 1987)
• Identification of the resources, activities, and outcomes of a program and the causal assumptions that connect these (J. Wholey, 1987)
• Scientific explanations of how social change is possible (C. McClintock, 1987)
• A specification of what must be done to achieve the desirable goals, what other important impacts may also be anticipated, and how these goals and impacts would be generated (H. Chen, 1990, cited in our text, p. 16)
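Following Wholey's formulation above (resources, activities, outcomes, and the causal assumptions that connect them), a program theory can be written down explicitly enough to be examined. The sketch below is purely illustrative, loosely modeled on the spouse abuse program example discussed later in the lecture; the program elements and assumptions are hypothetical.

```python
# Minimal sketch: a program theory as resources, activities, outcomes,
# and explicit causal assumptions linking them. The program is hypothetical.
from dataclasses import dataclass

@dataclass
class CausalLink:
    cause: str
    effect: str
    assumption: str  # why we believe the cause produces the effect

program_theory = {
    "resources":  ["case managers", "court liaison", "shelter beds"],
    "activities": ["victim intake", "safety planning", "court advocacy"],
    "outcomes":   ["reduced repeat victimization"],
    "links": [
        CausalLink("victim intake", "safety planning",
                   "intake identifies victims early enough to plan"),
        CausalLink("safety planning", "reduced repeat victimization",
                   "plans are followed and partner agencies (courts, police) cooperate"),
    ],
}

print("Resources: ", ", ".join(program_theory["resources"]))
print("Activities:", ", ".join(program_theory["activities"]))
print("Outcomes:  ", ", ".join(program_theory["outcomes"]))
for link in program_theory["links"]:
    print(f"  {link.cause} -> {link.effect}  [assumes: {link.assumption}]")
```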
How should theory be constructed?
There are essentially two camps, and Chen's "both/and" approach:
• Program theory should be constructed by key program stakeholders, in a fully participatory fashion (Wholey)
• Program theory should stress social and behavioral science theory and knowledge (Rossi)
• Program theory should be based on stakeholder knowledge (stakeholder theory) as well as science or social science theory (Chen)
Prescriptive theories
Prescriptive theory addresses how to go about solving the problem or bringing about change: How should activities be constructed? How should program interventions be implemented? What is the causal connection to behavioral, social, or other change?
Chen's model and partnerships
• Chen divides program theory into two component parts: an action model and a change model. The action model incorporates both the program's ecological context and inter-agency collaboration. Some programs require collaborative partnership. Using the example of a spouse abuse program (p. 26), Chen points out that it would fail if it lacked a working relationship with the courts, police, and community partners, including advocacy groups. It is therefore essential to craft strategies for collaboration with other agencies.
• Partnered programs may have different program theories and models of change at work, or they may operate on different concepts of a single set of theories and models.
Evaluating Partnerships
What do we most want to know?
1. That the partnership has been effective in achieving its aims
2. That the partners have all benefitted from their involvement
3. That the partnership approach was or is the best way to do it
What are the key underlying questions?
• We need to understand the underlying elements that create partnership but that are not obvious and that we cannot readily appreciate
• In particular, we need to understand the relationship between the component elements of partnership functioning and the synergies of partnership-level functioning
AN EMERGING EVALUATION APPROACH FOR PARTNERSHIPS:
1. Impact assessment
2. Partnership relationship and development review
3. Evaluating partnering as a mechanism
Evaluating partnering as a mechanism
We evaluate partnerships so as to understand:
• Transaction-cost reduction
• Value added
• Sustainability of outcomes
• Strategic influences
• Systemic impact
In sum, whether a partnership approach has been, or will ultimately be, better than the next best alternative.
One formula for assessing the 'added value' of a partnership
AV = (OP + SC) – (RC + NA + EC + OC + FC)
Key:
AV = Added Value of a Partnership
OP = Outcomes of Partnership
SC = Social Capital
RC = Resources Contributed
NA = Net Benefit of the Next Most Likely Alternative
EC = Environmental Contributions
OC = Opportunity Costs (e.g., time spent)
FC = Facilitation Costs
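Purely as a worked illustration with made-up numbers, the sketch below applies the formula: each term is scored on whatever common scale the partners agree to (monetized or rated), and AV = (OP + SC) - (RC + NA + EC + OC + FC) is computed.

```python
# Minimal sketch of the added-value formula with hypothetical scores.
# All figures are invented; in practice each term must be estimated
# on a scale the partners have agreed on (monetized or rated).
scores = {
    "OP": 90,  # Outcomes of Partnership
    "SC": 25,  # Social Capital
    "RC": 40,  # Resources Contributed
    "NA": 30,  # Net Benefit of the Next Most Likely Alternative
    "EC": 10,  # Environmental Contributions
    "OC": 15,  # Opportunity Costs (e.g., time spent)
    "FC": 5,   # Facilitation Costs
}

# AV = (OP + SC) - (RC + NA + EC + OC + FC)
av = (scores["OP"] + scores["SC"]) - (
    scores["RC"] + scores["NA"] + scores["EC"] + scores["OC"] + scores["FC"]
)
print(f"Added value of the partnership: {av}")  # a positive AV suggests net added value
```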
Partner relationship evaluation
To reveal:
• Value created by the partnership for partner organizations (and other stakeholders)
  – expected
  – unexpected
  – potential
• Extent of strategic alignment and goal congruence among partners
• Degree of effectiveness/efficiency/impact
• Relative level of influence (sectoral/strategic)
• Relative impact of different partners on program outcomes—can this be determined from program data?
Complex effects chain in partnered programs
[Diagram: Partners 1, 2, 3, etc. contribute to shared common outcomes, creating attribution difficulties and transparency and accountability challenges.]
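One crude way to see why attribution is difficult: if the only program data available are each partner's contributed inputs and a single shared outcome, the best one can do is a proportional allocation, which assumes contributions translate linearly and independently into the outcome. The sketch below, with invented figures, makes that assumption explicit; it illustrates the limitation rather than resolving it.

```python
# Minimal sketch: naive proportional attribution of a shared outcome.
# Partner contributions and the outcome figure are invented for illustration.
contributions = {"Partner 1": 50_000, "Partner 2": 30_000, "Partner 3": 20_000}
shared_outcome = 400  # e.g., households reaching a shared outcome target

total = sum(contributions.values())
for partner, amount in contributions.items():
    share = amount / total
    # Assumes inputs map linearly and independently onto the outcome,
    # which is exactly what complex causality in networks calls into question.
    print(f"{partner}: {share:.0%} of inputs -> credited with "
          f"{share * shared_outcome:.0f} of {shared_outcome} outcomes")
```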
This form of evaluation links to:
• A review of partnering principles in practice (i.e., is the partnership equitable, transparent, and mutually beneficial?)
• Whether the partnership is achieving the individual goals, and satisfying the underlying interests, of partner organizations—along with partnership-level goals and corporative interests
• Exploring whether the partners have made maximum use of the range of resources available
• Whether the partnership could work better—if so, how?
• Whether the partnership could do something quite different—if so, what?
Other elements of partnership evaluation
Any truly valid and effective evaluation of a partnered program always needs to:
• Involve all partners and key stakeholders in design and data collection
• Include a genuine feedback loop, so that the process effectively informs the development of the partnership
• Find a good balance between external projection of the partnership as a corporative entity and the internal experience of the partnership—the strategic dynamics of partnership identity
General principles of evaluation
• Needs → Inputs → Outputs → Impacts: logic models in an often static sequence, focused on single-agency actors, remain the usual frame for most evaluation questions
• Identifying causal links and quantifiable indicators is stressed
• Evaluations should specify minimum requirements concerning the representativeness of the data to be collected or analyzed, and state how the analysis of causality will be approached
• Partnered, participatory evaluation is complex, requiring a correspondingly complex evaluative approach that can adequately deal with multiple layers of causality and dynamic effects


Case study is an approach to evaluation that, as such, can capture this complexity.
Kushner: “This capacity for case study to both provide insight into
the unique, but also to generalise helped the evaluation
community develop a methodology and a practice freed from the
normal constraints of academic and professional disciplines. For
example, the study of curriculum and pedagogical change in one
school, say, provides a platform for generalising to other schools,
curriculum projects and pedagogical strategies. But, beyond that,
that same school case study provided insights into, say, the link
between professional and organisational development; the
integration of knowledge and action; the tension between policy,
institutional management and professional practice; or the
grounding of professional action in biographical experience. These
insights were transportable to other contexts . . .”
Best practice guidelines
• Benefits of the comparative case study framework
• Definition and selection
• Pluralistic methods to identify causal structure and process
• Interpretation by iterative evaluation
• Validation
The trouble with case studies has to do with:
• Generalizability
• Difficulty
• Comprehensibility
• Validity
• Cost
Sources
1. Huxham, C. (ed.) 1996. Creating Collaborative Advantage. London: Sage.
2. Kickert, W., Klijn, E-H. & Koppenjan, J.F.M. (eds.) 1997. Managing Complex Networks: Strategies for the Public Sector. London: Sage.
3. Marsh, D. & Rhodes, R. (eds.) 1992. Policy Networks in British Government. Oxford: Clarendon.
4. Richardson, J. & Jordan, G. 1979. Governing Under Pressure. Oxford: Martin Robertson.
5. Rose, A. & Lawton, A. 1999. Public Services Management. Harlow: Prentice Hall.
6. Rhodes, R. 1997. Understanding Governance. Buckingham: Open University Press.
7. Rhodes, R. 1996. 'The new governance: governing without government.' Political Studies, Vol. 44, pp. 652-667.
Program theory sources
• Bickman, L. (ed.) Using program theory in evaluation. New Directions for Program Evaluation, no. 33. San Francisco: Jossey-Bass, Spring 1987. (Multiple authors.)
• Chen, H-T. Issues in constructing program theory. New Directions for Program Evaluation, no. 47, Fall 1990.
• Chen, H-T. & Rossi, P. H. The theory-driven approach to validity. Evaluation and Program Planning, 1987, 10, 95-103.
• Patton, M. Q. A context and boundaries for theory-driven approach to validity. Evaluation and Program Planning, 1989, 12, 375-377.
• Wilder Research Center. Program theories and logic models. St. Paul, MN: Wilder Research Center. http://chicano.umn.edu/pdf/resourcesProgram%20Theories%20Example%20Logic%20Model.pdf