2009 benchmarking study

Use of Evaluative Information in
Foundations: Benchmarking Data
Patrizi Associates
June 2010
Study Purpose
 Benchmark foundation practices regarding evaluation functions and responsibilities and how evaluation
resources are deployed.
 Explore perceptions of how well foundations use evaluative information.
 Explore patterns of “demand” for evaluative information.
Set the stage for the July 2010 Evaluation Roundtable Meeting to:
 Consider how evaluative information can be used effectively to advance foundation capacity to develop and
guide strategy in complex and challenging environments.
2
Study Overview
About the study
 The focus of this study is on “evaluative information” rather
than on “evaluation” in order to capture the range of
functions and products used by foundations to gauge their
own effectiveness.
 The questions were posed to those responsible for
evaluation in each of the participating foundations.
 Although we’ve conducted benchmarking studies in the past,
they were more narrowly focused on “evaluation,” more
qualitative in nature, and included 10 to 14
foundations. We are reluctant to use these as true points of
comparison. However, we’ve included some historical
references in this presentation based on these previous
Evaluation Roundtable studies and from interviews
conducted as part of this study.
Approach
 33 foundations (US and Canada) with a history of strong
evaluation use were invited to participate.
 We sent a web-based survey to the person who led the
evaluation unit or had major responsibility for evaluation.
 31 foundations completed the survey.
 26 foundations returned foundation expenditure information.
 29 participated in follow-up phone interviews.
 Study conducted in Summer 2009. Respondents were asked to provide data for 2007 and 2008 and to reflect on changes over the last five years.
Analysis
 We examined overall responses to identify patterns across
respondents and segmented responses by:
– Size of the foundation’s yearly grantmaking:
• Foundations under $50 million
• Foundations between $50 million and $200 million
• Foundations of $200 million and over
– Reporting structure: Whether the evaluation unit reports
to the CEO, program leader, or administrative leader.
Caveat
 The sample size is small, although it includes over 50% of
foundations with grantmaking over $200 million annually and
nearly 15% of those awarding over $50 million annually. The
universe of foundations of interest is even smaller, in light of our criterion that participating foundations have an expressed or demonstrated strong interest in evaluation.
3
Participating Foundations by Grantmaking Size and Reporting Structure
[Table: the 31 participating foundations, arrayed by yearly grantmaking size (under $50 million; $50 to $200 million; above $200 million) and by the evaluation unit's reporting line (CEO, program, or administrator).]
Annie E. Casey Foundation, Atlantic Philanthropies, Barr Foundation*, Bruner Foundation*, California Endowment, California Health Care Foundation, California Wellness Foundation, Cleveland Foundation, Colorado Trust, Edna McConnell Clark Foundation, Ford Foundation, Gates Global Development*+, Gates Global Health*+, Gates U.S. Program*+, Hewlett Foundation, Hilton Foundation, James Irvine Foundation, J.W. McConnell Family Foundation, Kauffman Foundation, Kellogg Foundation, Knight Foundation, Lumina Foundation, Marin Community Foundation, New York State Health Foundation, Ontario Trillium Foundation, Packard Foundation, Pew Charitable Trusts, Robert Wood Johnson Foundation, Rockefeller Foundation, Wallace Foundation, William Penn Foundation
* These foundations did not provide evaluative information expenditure data.
+ Because of the unique nature of the BMGF's operations, it was counted as three separate foundations. At the time of the survey, evaluation staff reported to an administrator; shortly after the survey administration, the Foundation delegated this function to the three program presidents.
4
Study Context: Forces Shaping the Evolution of Evaluation
in Foundations
 The evaluation function in philanthropy is relatively new, dating to the late 1970s and early 1980s, with a large expansion in the number of foundations with dedicated evaluation staff in the 1990s.
 Evaluation emerged during a period of professionalization of philanthropy, when many foundations shifted their approach from that of a charitable grantmaker responding to grant requests toward a more directive and purposeful role as a "strategist."
 With their expanded role as strategic actors, foundations increased their attention to evaluation. Being a strategic philanthropy soon became linked to being an "effective" philanthropy, and with this came an increased focus on measurement.
 Reflecting this overall shift toward strategic philanthropy, evaluation units have expanded their focus from assessing whether grantees are effective toward assessing whether foundation strategies are effective and whether foundations "add value."
 A look at the trends in evaluation unit titles is revealing. We hypothesize that much of this evolution corresponds to the growth of strategic philanthropy.
Trends in Evaluation Unit Titles
1980s through the 2000s: Research and evaluation → Planning and evaluation → Organizational learning → Impact "something" → Strategic "something"
5
Study Context: Tensions and Challenges
 The role and function of evaluation sits astride several philosophical debates related to this
evolution of philanthropy, largely pivoting around the degree to which evaluation serves to
increase “learning” vs. its role in assuring accountability.
 This core tension is played out in numerous questions regarding reporting, responsibility,
orientation (internal or external) and level of resources relative to value.
 In the shift toward strategic philanthropy, foundations have been challenged to reframe evaluation
from an older model of “post hoc” assessment of grantees for accountability to one that examines
their own work and is structured to inform strategy from start to finish.
 This shift has resulted in many changes in the roles, responsibilities and structure of the evaluation
function. This survey sought to describe how foundations use “evaluative information” (in its
many forms) to guide their work and to analyze whether reporting relationship or size affects
the use of evaluative information. We looked at:
– The range of foundation activities employed to produce evaluative information
– The resources available for these functions
– Perception of internal use and demand for evaluative information by different groups internal to foundations
This study sets the stage for discussion at the Evaluation Roundtable Meeting: Information and Its Use in
Supporting Strategy. We hope to reflect on the data and its implications for what foundations need to enhance
strategic learning.
6
Benchmarking Evaluation in Foundations:
2009 Findings
I. Evaluation Functions and Responsibilities
II. Evaluation Resources: Staffing and Budget
III. Perceptions of Demand for and Use of Evaluative Information
7
Evaluation Functions and Responsibilities
What Functions Do Evaluation Units Perform?
What are the principal responsibilities of your unit?
Percent of evaluation units citing each as a principal responsibility:
Evaluation: 97%
Aiding in development of program strategy: 90%
Performance metrics/indicators: 87%
Knowledge management: 60%
Research other than evaluation: 50%
 Most foundations have expanded the role of the unit beyond supporting evaluations, including but not limited to
the functions illustrated in the table.
 Evaluation units have a major role in knowledge management in 60% of responding foundations, and this is more
often the case when staff report to the CEO (73%) or to program (67%). Only 40% of those reporting to
administrators are involved in knowledge management in any way.
 Two developments have emerged as important aspects of the job: involvement in performance metrics and in
program strategy. We know from prior work that an evaluation unit’s involvement in strategy development was
relatively rare five years ago; now 90% have a role in strategy.
8
Evaluation Functions and Responsibilities
What Types of Evaluative Activities Do
Evaluation Units Do?
Which of the following types of evaluation/performance metric activities does your foundation do?
Individual Grant Evaluations: 90%
Initiative Evaluations: 93%
Strategy Evaluations: 83%
Entire Program Area Assessments: 76%
Foundation-wide Assessments: 76%
Satisfaction/Perception Surveys: 83%
Identifying Indicators of Grantmaking Performance: 80%
Tracking Grantmaking Indicators: 83%
Identifying Indicators of Foundation Performance: 73%
Tracking Foundation Indicators: 70%
 There has been an increase in all types of evaluative information. In the past, individual grant and initiative evaluations made up the majority of the portfolio of foundation evaluation work. Now, although nearly all foundations surveyed still evaluate individual grants and initiatives, the vast majority also support the full spectrum of evaluative activities listed above.
 Indicators work and strategy evaluations are an increasingly important part of the portfolio of work, whereas they were relatively rare five years ago.
 Evaluations of larger aspects of foundation work, such as entire program area assessments and foundation-wide assessments, are
also more common; however, about 25% of the foundations responding typically do not conduct these types of assessments.
* This chart does not include when “other units” have primary responsibility. 4% of respondents indicated that another unit had
primary responsibility for each activity except perception surveys, where 21% of foundations delegated this responsibility to an
“other unit.”
9
Evaluation Functions and Responsibilities
How is Responsibility for Evaluation Distributed
Throughout Foundations?
 We asked a series of questions about which unit in the foundation takes “primary”
responsibility for an evaluative task.
 Response options were:
– My unit has primary responsibility
– Program has primary responsibility
– My unit and program share responsibility
– Other unit has responsibility
– My foundation does not do this type of work
 The purpose of these questions was to surface information regarding the role of
evaluation staff in relation to program staff in the work of evaluation.
10
Evaluation Functions and Responsibilities
Evaluation Is Not Exclusively the Responsibility of
Evaluation Staff
Which unit has primary responsibility: evaluation, program, or shared?
[Stacked-bar chart: for each type of evaluative activity (individual grant evaluations, initiative evaluations, strategy evaluations, entire program area assessments, foundation-wide assessments, satisfaction/perception surveys, identifying indicators of grantmaking performance, tracking grantmaking indicators, identifying indicators of foundation performance, tracking foundation indicators), the percent of foundations in which the evaluation unit, program, or a shared arrangement holds primary responsibility.]
 Evaluation units tend to have primary responsibility when the focus of assessment is larger (i.e., strategy or
foundation-wide evaluations) and conversely, program staff tend to have primary responsibility for individual
evaluations, a smaller focus of assessment.
 In most foundations, program staff assume a great deal of the responsibility for most types of evaluation.
– Most foundations give evaluation staff primary responsibility for foundation level assessments (including perception surveys)
– Program staff have at least shared if not primary responsibility for identifying and tracking indicators of grantmaking
performance
11
Evaluation Functions and Responsibilities
Allocation of “Primary Responsibility” Differs
Considerably Based on Reporting Structure
Evaluation Reports to CEO
 CEO reports are much more likely to have
“primary responsibility” for every type of
evaluation and assessment.
Evaluation Reports to Administrator
 Units reporting to an administrator are also
more likely to have “primary responsibility” for
most evaluative activities, except identifying
and tracking foundation indicators.
 A much different picture emerges if evaluation reports to program. Not one evaluation unit reporting to program has "primary responsibility" for individual grant evaluations, strategy evaluations, entire program area assessments, or for identifying grantmaking performance indicators.
[Bar charts: percent of evaluation units holding "primary responsibility" for each of the ten evaluative activities, shown separately for units reporting to the CEO, to an administrator, and to program.]
12
Evaluation Functions and Responsibilities: Program Strategy Assistance
Evaluation Units Assume an Increased Role in
Program Strategies
 Evaluation units now have at least some role in the strategy development process,
with nearly all respondents reporting that they are at least “somewhat” or “heavily
involved” at both the start and end/renewal points of program strategy.
 Overall, 64% report that they are “heavily involved” in strategy discussions in the early
stages of strategy development and nearly the same (61%) report heavy involvement
at the end or renewal point.
 Overall participation in strategy drops off considerably in the "ongoing" stages of strategy evolution, with only 27% reporting that they are heavily involved in "providing feedback or critique" on an ongoing basis.
 However, respondents who report to the CEO are more involved in program strategy
at every stage of the strategy cycle (outset, on-going and end) than those reporting to
others.
13
Summation/Questions/Points to Consider: Functions and
Responsibilities
 The role of evaluation has expanded, particularly in program strategy
 There is an increase in types of evaluative activities employed
 Reporting structure varies and seems to affect who has primary responsibility
 Thought starters:
– How does the distribution of responsibility reflect learning or accountability? What
are the tradeoffs?
– How do you interpret the way in which the role of evaluation differs under each reporting relationship studied?
– What kind of skills do program staff need to meet their responsibilities in the design
and implementation of evaluations/ evaluative activities?
– Why does the evaluation role in program strategy drop off during implementation?
14
Benchmarking Evaluation in Foundations:
2009 Findings
I. Evaluation Functions and Responsibilities
II. Evaluation Resources: Staffing and Budget
III. Perceptions of Demand for and Use of Evaluative Information
15
Evaluation Resources
Tracking Spending on Evaluative Information
 Nearly every foundation found it difficult to provide data on evaluation spending.
Most foundations do not systematically track these costs.
 We asked respondents to submit estimates* of their spending on a range of
“evaluative information” activities where the foundation is a “primary user,” including:
– Evaluations
– Collection of data for indicators of foundation or program performance, and
– Other related expenditures to gather data to inform knowledge of foundation effectiveness
 We received expenditure data from 26 foundations. Those not submitting these data are asterisked on slide 4. Neither the largest foundation (BMGF) nor the smallest (Bruner) is included in the analysis.
 To augment these data, we asked staff for their perceptions about how spending for
evaluative information compared to spending on grants over the last five years.
* We have every reason to believe that respondents submitted as accurate a number as possible. Those respondents who felt they could not produce good estimates did not participate in this part of the survey.
16
Evaluation Resources
The Percentage Spent on Evaluative Information
(Relative to the Grant Budget) Varies Greatly
Dollars Spent on Evaluative Information (and as a Percentage of the Grantmaking Budget)
Foundation Size | Mean | Median | Minimum | Maximum
Overall | $4,664,652 (3.7%) | $1,629,313 (2.2%) | $212,451 (0.3%) | $28,719,575 (17.8%)
Tier 1: Under $50 million | $1,150,068 (7.2%) | $1,054,500 (7.4%) | $212,451 (0.8%) | $3,000,000 (17.8%)
Tier 2: $50 to $200 million | $2,354,650 (2.4%) | $1,513,145 (1.6%) | $273,281 (0.3%) | $10,650,000 (6.5%)
Tier 3: Over $200 million | $12,139,240 (2.6%) | $6,037,536 (2.3%) | $500,000 (0.3%) | $28,719,575 (4.9%)
 Although the highest percent spent on evaluative information was 17.8%, nearly 40% of all foundations surveyed
invest less than 1% on these activities.
 Smaller foundations tend to invest a greater portion of their grantmaking budget than those in the other two tiers, with the majority spending over 7% of their grantmaking budget.
 In addition, those who report to the CEO or to an administrator invested at least 33% more on evaluative information than those who report to program.
These data are based on two years of spending for both evaluative information and the grants: FY 2007 and 2008. These data should
be considered estimates as many foundations do not formally track this information.
17
Evaluation Resources
Most Respondents Perceived an Increase in Evaluation
Investments Prior to the Economic Downturn
Not considering the recent economic downturn – Over the last five years, what is your perception of
how funding levels for evaluation have changed relative to shifts in the size of the grants budget?
All Respondents (availability of evaluation funds compared to grantmaking spending, net of the recent economic downturn):
Increased dramatically: 3% | Increased somewhat: 59% | Stayed about the same: 31% | Decreased somewhat: 7% | Decreased dramatically: 0%
By Reporting Structure (availability of evaluation funds compared to grantmaking spending):
Increased: CEO 67% | Admin 78% | Program 38%
Stayed the same: CEO 33% | Admin 22% | Program 38%
Decreased: CEO 0% | Admin 0% | Program 25%
Not Considering the Economic Downturn:
 Most respondents believe that evaluation spending “increased somewhat” compared to their foundations’
grantmaking spending.
 Respondents who perceived increases in investment were largely those reporting to the CEO or an administrator.
Compared to those reporting to program, more than twice as many administrator reports and 60% more of those
reporting to CEOs responded that their foundation increased their investments in evaluation.
 Decreases in spending on evaluation were perceived only in units reporting to program.
18
Evaluation Resources
What was the Impact of the Economic Downturn
on Evaluation?
How has the recent economic downturn affected the amount of funds available for evaluation
compared to those available to grantmaking?
All Respondents (availability of evaluation funds compared to grantmaking spending, considering the economic downturn):
Increased dramatically: 0% | Increased somewhat: 10% | Stayed about the same: 62% | Decreased somewhat: 21% | Decreased dramatically: 7%
By Reporting Structure (availability of evaluation funds compared to grantmaking spending):
Increased: CEO 8% | Admin 20% | Program 0%
Stayed the same: CEO 75% | Admin 50% | Program 57%
Decreased: CEO 17% | Admin 30% | Program 43%
Considering the Economic Downturn:
 In most foundations, the poor economy did not affect investment in evaluation more than it did grantmaking. The majority of respondents report that evaluation spending relative to grantmaking spending remained constant despite the economic downturn.
 However, there were clear differences based on reporting structure:
– All respondents who perceived an increase in evaluation spending were in units reporting to CEOs or administrators.
– Decreases in spending occurred most frequently in units reporting to program. This is notable given the size of the differences.
19
Evaluation Resources
How is the Evaluation Function Staffed?
Evaluation Unit Professional Staffing (in FTEs)
Foundation Size | Mean | Median | Minimum | Maximum
Overall | 3.0 | 2.0 | 0 | 14
Tier 1: Under $50 million | 2.1 | 2.0 | 0.8 | 4.8
Tier 2: $50 to $200 million | 2.0 | 2.0 | 0 | 4.0
Tier 3: Over $200 million | 5.0 | 3.5 | 0 | 14
 The smallest foundations (tier 1) have more evaluation staff relative to their size than those in the other tiers.
 Staffing within the largest foundations (tier 3) varies tremendously and is heavily skewed by two foundations, each with 14 FTE staff, almost three times the tier mean.
 Overall, staffing has gone down since 2005, from an overall mean of 3.9 FTEs and a median of 3.5 FTEs.
20
Evaluation Resources
Number of Evaluation FTE Staff by Reporting Structure
Reporting Structure | Mean | Median | Minimum | Maximum
Total (in FTEs) | 3.0 | 2.0 | 0 | 14.0
CEO | 3.3 | 2.0 | 0.25 | 14.0
Administrator | 4.0 | 2.75 | 0.75 | 14.0
Program | 1.6 | 1.3 | 0 | 4.0
 Reporting structure again greatly influences the number of FTEs allocated to evaluation functions:
– Those reporting to administrators have 2.5 times the staff size of those reporting to program.
– Those reporting to CEOs have over twice the staff of those reporting to program.
These data are based on two years of spending for both evaluative information and the grants, FY 2007 and 2008. These data
should be considered estimates as many foundations do not formally track this information.
21
Evaluation Resources
Perceptions on the Level of Investment: Is It Appropriate?
How would you assess the amount your foundation invests (both in terms of
staff and funding) in each of…?
[Stacked-bar chart: for each of six areas (knowledge management, formal learning functions, performance metrics/indicators, evaluation, foundation strategy, program strategy), the percent of respondents rating their foundation's investment as "far too much," "too much," "appropriate amount," "too little," or "far too little."]
 Half of the respondents believe that their foundation invests an appropriate amount (in dollars and staff time) in program strategy, foundation strategy, performance metrics, and evaluation functions. Still, sizeable percentages (31% to 47%) believe too little (or far too little) is invested in these areas.
 Dissatisfaction was articulated most frequently regarding foundation investment in knowledge management and formal learning
functions, where 67% say foundation investments were inadequate.
 Program strategy was the only area where a number of respondents felt that their foundation invested
“far too much.”
“There is a lot of ya ya ya-ing out there on learning—but it doesn’t happen.”
22
Evaluation Resources
Dissatisfaction with the Level of Investment was Highest Among
Those Reporting to Program and Those in Mid-size Foundations
How would you assess the amount your foundation invests in…?
Percent of responses indicating a "less than appropriate amount," by reporting structure (CEO | Administrator | Program):
Evaluation: 25% | 50% | 75%
Performance Metrics/Indicators: 25% | 40% | 75%
Knowledge Management: 75% | 50% | 75%
Formal Learning Functions: 42% | 70% | 62%
Program Strategy: 27% | 10% | 62%
Foundation Strategy: 25% | 30% | 62%
Percent of responses indicating a "less than appropriate amount," by grantmaking size (Under $50M | $50M-$200M | Above $200M):
Evaluation: 25% | 55% | 60%
Performance Metrics/Indicators: 37% | 55% | 40%
Knowledge Management: 62% | 62% | 70%
Formal Learning Functions: 50% | 64% | 60%
Program Strategy: 43% | 45% | 0%
Foundation Strategy: 38% | 36% | 30%
 Knowledge management and formal
learning functions were of greatest
concern across all reporting structures
and across all size foundations.
 About 2 out of 3 (or more) of those reporting to program believe that a less than appropriate amount is invested in all of these functions.
 A majority of those reporting to administrators are most concerned about the level of investment in evaluation, knowledge management and learning.
 The greatest dissatisfaction was expressed by those in foundations with over $50 million in grantmaking.
23
Summation/Questions/Points to Consider: Evaluation Resources—
Staffing and Budget
 Overall financial support appears to be holding for evaluative activities.
 Staffing for evaluation, however, has dropped considerably since 2005.
 Knowledge management and learning were the areas of largest concern regarding the adequacy of
foundation investment, across all size segments and reporting relationships.
 Those reporting to program expressed the greatest concern about the adequacy of investment
made in strategy, evaluation, learning, knowledge management and indicators.
 Thought starters:
– Are resources (across the organization) adequate to meet the knowledge challenge of strategic
philanthropy?
– How do you interpret the influence of reporting relationships on evaluative information
investment decisions?
24
Benchmarking Evaluation in Foundations:
2009 Findings
I. Evaluation Functions and Responsibilities
II. Evaluation Resources: Staffing and Budget
III. Perceptions of Demand for and Use of Evaluative Information
25
Perceptions of Demand for and Use of Evaluative Information
Management Demand for Evaluative Information
Increased in Most Foundations
Over the last five years, what is your perception of trends in management demand for the following:
[Stacked-bar chart: percent of respondents reporting that management demand "increased dramatically," "increased somewhat," or "stayed about the same" over the last five years for each of nine types of evaluative information: program performance metrics, program strategy evaluations, foundation performance metrics, full program area assessments, perception surveys, research for strategy, overall foundation assessments, program initiative evaluations, and individual grant evaluations.]
 No decrease in demand was reported by any respondent.
 Respondents perceived large increases in demand for all types of evaluative activities. Most respondents perceived “dramatic
increases” in demand for program performance metrics and strategy evaluations.
 Although overall demand for individual grant evaluations increased, it was not perceived to be as strong as that experienced for
other forms of evaluative work.
 Differences by reporting structure:
– Units that report to program experience much greater demand for individual grant evaluations, research to inform program strategy, and program metric data.
– Units that report to CEOs experience greater demand for strategy evaluations and overall foundation assessments.
 Our interviews revealed that increases in demand were largely driven by CEO or board interest.
“We have a new president and (s)he’s driving the change here. Having the CEO focused on [evaluation] is critical.
Before (s)he came in, it was hard to get program officers’ attention – I can’t overstate how vital that is.”
“The board is asking in a more explicit way what results are being achieved with our grantmaking and that is the
reason for the increase.”
26
Perceptions of Demand for and Use of Evaluative Information
Most Respondents Believe Program Staff Use of
Evaluation is at Least “Acceptable”
How would you assess program staff’s use of evaluation to inform:
Good | Acceptable | Poor
Programmatic Development Work: 23% | 57% | 20%
Mid-course Decisions During Implementation: 33% | 33% | 33%
Summative Performance Assessments: 43% | 36% | 21%
 About 2/3 or more of respondents report at least acceptable use of evaluation at all the different stages of program work.
 The most frequently cited problem area for evaluation use was how well it informs midcourse decision making.
“We spend a lot of time and resources in developing strategies, but they can become like railroad tracks—once
you get going, the mechanism for switching tracks on the basis of evaluative information is difficult. The ways in
which strategies get realigned is still a work in progress.”
27
How Much are Evaluation Findings Disseminated:
An Early Indicator of Use
What portion of your evaluations – either in full or summary form – do you estimate are shared
with the following audiences:
Percent reporting “large majority” or “all or almost all”
Evaluation Unit Reports to: CEO | Administrator | Program
Management: 92% | 40% | 75%
Board: 58% | 40% | 38%
Program Staff: 92% | 80% | 87%
Grantees: 67% | 60% | 50%
Broader Field: 58% | 20% | 50%
 Reporting structure is a factor in how evaluation findings are shared.
 More respondents reporting to CEOs share their evaluation results with every major audience listed than do those reporting to administrators or program.
– This is even the case in sharing with grantees (17 point difference from program) and the broader field (8 point difference).
– Fewer of those reporting to administrators share reports with the broader field.
“Frankly, there is a lot of cherry picking going on [regarding sharing evaluation findings]. Access to findings is
more managed here. Findings get shared, but warts and all? Not generally.”
“The CEO controls the message to the board and s/he errs on the side of less information. S/he would not [share]
a full evaluation report that had any equivocation.”
28
Comments on Issues of Use from Interviews
 “It gets to be more difficult when people feel that there is more at stake. … a program
director is stepping down or if someone’s under threat, it’s very difficult to do healthy
learning.”
 “We have many meetings where a program director will pitch for an idea for funding,
but there won’t be a question on how these grants build on prior experience or
investments. There are no moments where people are asked to look back on what
they’ve learned.”
 “When I came to the Foundation I looked at all the evaluations that they did over the
last 4 to 5 years and I could not tell the impact that those evaluations had had on
business. Partly, entire programs had changed. Partly, officers/ directors are no longer
there, so there is no ownership of information as it comes back into the foundation—so
it gets ignored.”
29
Perceptions of Demand for and Use of Evaluative Information
Most Respondents Also Believe Program Use of Metrics is at
Least Acceptable
How would you assess program staff’s use of performance metrics to inform:
Good | Acceptable | Poor
Programmatic Development Work: 17% | 50% | 33%
Mid-course Decisions During Implementation: 8% | 67% | 25%
Summative Performance Assessments: 22% | 44% | 35%
 Again, about 2/3 or more of respondents rated staff use of performance metrics as acceptable or higher.
30
Perceptions of Demand for and Use of Evaluative Information
There Were Surprisingly Few Differences Between How Respondents
Viewed Evaluation and Metrics in Terms of Use
How important are the following potential uses of evaluation and performance metrics
at your foundation?
Percent reporting "very" or "moderately" important (Evaluation | Performance Metrics):
Provide information about implementation: 97% | 89%
Sharpen focus or operationalize a goal or strategy: 87% | 78%
Provide periodic information on program performance at regular intervals: 83% | 93%
Provide information to make summative judgments: 76% | 64%
Provide information about foundation progress at regular intervals: 69% | 82%
 In considering these purposes, few respondents saw differences in the importance of use between evaluation and metrics. We were surprised by the relative lack of variation in responses.
 Across the spectrum of uses cited, both evaluation and performance metrics were largely seen as "very" or "moderately important" tools. The largest spreads were:
– Performance metrics were seen as less useful (by 12%) than evaluation in making summative judgments.
– Evaluation was seen as less useful (by 13%) than performance metrics in providing information about foundation progress at regular intervals.
31
Perceptions of Demand for and Use of Evaluative Information
Respondents Raised Several Issues Regarding the Use of
Metrics in Our Interviews
 Evaluation vs. metrics:
– “Executive leadership is looking less to evaluation and more to performance indicators without
understanding the difference.”
 Program vs. operational metrics:
– “Each program can define their own metrics they want to measure. [Staff tend] to rely on what
you would report to finance, which isn’t necessarily helpful as program impact indicators. The
indicators we have now for the board work well except for assessing program performance.”
– “Interestingly, the first metrics out the door are more operational than program (e.g., turnaround time of grants, volume, etc.).”
 Using appropriate metrics in complex systems:
– “If we looked only at the [metrics] it was a failure. But if we looked at the dynamics of schools
and community, you’d see what was happening that explained the initial drop. It took looking at
the broader dynamics and beyond metrics alone. It took the program officer really reaching into
the community to understand what was behind those metrics.”
 When to identify metrics:
– “Should we focus on metrics when we haven’t defined strategy yet?”
32
Perceptions of Demand for and Use of Evaluative Information
Management Support for Evaluation Was Seen as Strong; CEO
Reports Experience the Most Consistent Support
How well would you assess managerial support of evaluative
information at your foundation?
All Respondents
[Stacked-bar chart: percent of respondents reporting "frequently," "often," "from time to time," or "rarely/never" for each of three statements: management communicates its value for the use of evaluation and evaluative information; management values evaluative efforts that illustrate problems or shortcomings in the work of the foundation; management addresses foundation problems identified in evaluations.]
 About 2/3 of respondents report management support as being frequent or often.
 Respondents who report to the CEO were more likely to perceive strong managerial support.
 Only 25% of evaluation units reporting to program believe that management often or frequently addresses foundation problems identified in evaluations.
By Reporting Structure
[Bar chart: percent of responses indicating "frequently" or "often" for the same three statements, for units reporting to the CEO, an administrator, or program.]
33
Perceptions of Demand for and Use of Evaluative Information
Board Support Reveals a Similar Pattern to
Management Support
How would you assess the board’s position on:
All Respondents
[Stacked-bar chart: percent of respondents reporting "high support," "moderate support," "limited/low support," or "don't know" for each of four items: hearing a third-party perspective; reinforcing the importance of evaluative information; the role and functions of the evaluation unit; and spending on evaluation.]
 About half of the respondents report high support from their boards.
 Relatively few respondents feel limited or low support from their boards.
 Perceptions of board support tend to be lowest in evaluation units that report to program.
By Reporting Structure
[Bar chart: percent of responses indicating "high support" for the same four items, for units reporting to the CEO, an administrator, or program.]
34
Many Respondents Raised Four Organizational Cultural
Factors as Impeding Good Learning at Their Foundation
To what extent are the following organizational culture factors an impediment to good learning at
your foundation?
Not a Factor | Sometimes | Often | Always
Highly individualistic grantmaking: 20% | 53% | 20% | 7%
Lack of attention to implementation: 27% | 50% | 23% | 0%
Limited constructive feedback: 37% | 40% | 20% | 3%
Limited thinking or reflecting with others: 28% | 52% | 10% | 10%
Isolation from others in the field: 45% | 41% | 14% | 0%
Over-commitment when knowledge is limited and uncertainty is high: 45% | 41% | 14% | 0%
Pressure to make larger grants: 52% | 34% | 10% | 3%
Predisposition toward relatively untested grantmaking: 33% | 57% | 7% | 3%
Unwillingness to make small exploratory grants: 63% | 33% | 3% | 0%
 The top four factors identified by over 20% of respondents as "often" or "always" impeding good learning are:
– Highly individualistic grantmaking
– Limited thinking or reflecting with others
– Lack of attention to implementation issues
– Limited constructive feedback
 These tendencies were cited as a factor "sometimes" in at least 63% of foundations.
 Relatively few respondents see isolation from the field, over-commitment despite uncertainty, pressure to make larger grants or unwillingness to make small exploratory grants as impeding learning in their foundations.
35
Perceptions of Demand for and Use of Evaluative Information
Respondents Who Report to an Administrator or Program Believe that Organizational
Culture Factors More Frequently Impede Learning at Their Foundation
To what extent are the following organizational culture factors an impediment to good learning at
your foundation?
Percent of Responses Indicating “always a factor” or “often a factor”
By Reporting Structure
Evaluation Unit Reports to: CEO | Administrator | Program
Limited thinking or reflecting with others: 18% | 30% | 12%
Highly individualistic grantmaking: 17% | 30% | 38%
Over-commitment when knowledge is limited and uncertainty is high: 17% | 11% | 12%
Pressure to make large grants: 9% | 30% | 0%
Isolation from others in the field: 9% | 20% | 12%
Limited constructive feedback: 8% | 40% | 25%
Unwillingness to make small exploratory grants: 8% | 0% | 0%
Lack of attention to implementation: 8% | 20% | 50%
Predisposition toward relatively untested grantmaking approaches: 0% | 20% | 12%
 Respondents who report to the CEO were generally less likely than others to believe that these cultural factors inhibit learning.
 Among those who report to administrators, about a third identify the following as factors that reduce institutional learning:
– Limited thinking with others
– Highly individualistic grantmaking
– Pressure to make large grants
– Limited constructive feedback
 Among those who report to program:
– Half cite a lack of attention to implementation as a factor that often impedes foundation learning.
– Highly individualistic grantmaking is another common impediment to learning.
36
Perceptions of Demand for and Use of Evaluative Information
Respondents from Larger Foundations are More Likely to Believe
that Culture Factors Impede Learning
To what extent are the following organizational culture factors an impediment to good learning at
your foundation?
Percent of Responses Indicating “always a factor” or “often a factor”
By Foundation Size
Under $50M | $50M to $200M | Above $200M
Limited constructive feedback: 22% | 18% | 30%
Unwillingness to make small exploratory grants: 11% | 0% | 0%
Limited thinking or reflecting with others: 11% | 27% | 22%
Predisposition toward relatively untested grantmaking approaches: 0% | 18% | 10%
Pressure to make large grants: 0% | 0% | 40%
Over-commitment when knowledge is limited and uncertainty is high: 0% | 10% | 30%
Lack of attention to implementation: 0% | 36% | 30%
Isolation from others in the field: 0% | 9% | 30%
Highly individualistic grantmaking: 0% | 54% | 20%
 Those working in foundations with grantmaking under $50 million report few organizational factors that impede learning.
 Among mid-size grantmaking institutions, more than half report highly individualistic grantmaking as a serious impediment to learning. A lack of attention to implementation was also cited as a factor inhibiting learning by about one-third of the foundations in this tier.
 Those respondents from larger foundations saw
a greater number of cultural factors as impeding
good learning.
37
Summation/Questions/Points to Consider: Perceptions of Demand
for and Use of Evaluative Information
 Demand is increasing for all evaluative services and products.
 Information use is seen as most problematic when programs and strategies are “ongoing.” This is
strongly supported by the qualitative interviews.
 For the most part, those reporting to program expressed greater dissatisfaction with program use
of evaluative information.
 Program reports also felt their management and board were less supportive of evaluation than
those reporting to the CEO or an administrator.
 Thought Starters:
– Demand has increased, yet staff size is down and spending remains steady. Are foundations able to do more with less? Is something being sacrificed?
– We expected that reporting to program would result in greater evaluation use and more general "buy-in," yet this seems not to be the case. How do you interpret this?
– What are the other sources of evaluative information available to program staff outside of the
evaluation unit? How are all sources integrated and made available to program and
management?
– How is program learning surfaced and facilitated?
– Do/should performance metrics and evaluation serve different purposes and what are they?
38
In Sum: If Strategy Is Learned, How Well Are We Doing?
 “Strategy is learned; not planned.” Henry Mintzberg
 As foundations have increasingly engaged in strategic philanthropy, the evaluation function has evolved in a corresponding manner, with a greater focus on issues of strategy. Examples abound of highly valued theory of change work serving to improve and clarify foundation intent, focus, feasibility, etc.
 Yet, we also see areas where improvement is needed, particularly related to the deep and ongoing learning necessary for strategy evolution. We believe that, as important strategic actors, foundations need strong capacities and a commitment to learn throughout strategy evolution.
 The question becomes: How can philanthropic organizations deepen their capacity to learn and
adapt their strategies?
 This is not just an evaluation issue, but an organizational challenge requiring the commitment of
evaluation, program and management.
39