
Institute of Government
Executive Summary Report
Data Driven Modeling for Strategic Planning and Program Evaluation: A
Developing Model for Decision Making
September 30, 2013
Dr. Johnny Gilleylen
Institute Associate
Associate Professor, Interim Executive Director
School of Policy and Planning
Jackson State University
The purpose of this activity was to introduce and refine a model for data-driven decision making in strategic planning and program evaluation. Dr. Gilleylen’s model for strategic planning and program development has been developed incrementally over several years of work as an external evaluator of public and nonprofit programs. The Institute of Government seeks to refine the model and gauge the marketability of the topic and technique to public and nonprofit agencies in Mississippi. The initial target audience was identified as Jackson State University faculty and graduate research students.
Introduction
There is an unquestionable need for what you do, whether you are a public, non-profit, or quasi-governmental organization. Your constituents need your services. In Mississippi the need is great! The State’s per capita income is almost $7,000 below the national average. The proportion of families living below the poverty line is 8 percentage points higher, and the proportion of children living in poverty almost 3 percentage points higher, than the national averages. The rate of persons who are uninsured in our state is a full 10 percentage points higher. The proportion of our population that is disabled is more than 4 percent higher. The State’s unemployment rate stands at 6.6 percent, compared to the national average of 5.9 percent. Our higher education attainment lags behind the national average by 5.2 percentage points for bachelor’s degrees and almost 4 percentage points for graduate degrees. The statistics presented here are not intended to be exhaustive; unfortunately, there are many more like them. Clearly, there is a need for what you do: your constituents need your services; empirical evidence of the need your agency meets, and of your performance in meeting that need, is increasingly required and has become the cost of doing business; and your ability to continue providing services to your constituents depends on your survival, that is, on your ability to acquire and keep your funding. In a climate of shrinking budgets, funders are now requiring almost unprecedented levels of accountability. Agencies must have evidence that they are effective and efficient.
The Impact of the Current Economic Climate on Programs and Services
Over the past decade the US economy has been in crisis. With an economic growth rate of 2.8 percent, public debt ballooned to $11.959 trillion, or about 75 percent of the nation’s GDP. Though the federal budget deficit will reach its lowest level in a decade this year, federal spending is still expected to expand to $3.5 trillion. The demand is for effective spending at all levels of government: do more with less. In the spirit of increased accountability, grant applications now require detailed formative and summative evaluation plans.
The national “Sequester” legislation passed in December 2012 and began to take effect in March 2013, when more than $85 billion in budgetary cuts were applied indiscriminately, affecting thousands of public, non-profit, and for-profit agencies. Another round of cuts of approximately $109 billion is expected to launch in January 2014. Over a decade, the “Sequester” removes approximately $1.2 trillion from the national budget, cutting or eliminating programs from the military to education and the national parks.
The impact of these factors constraining federal and state funding is resonating at the local levels of service delivery. According to the National League of Cities, fiscal years 2009 and 2010 saw the largest percentage of cities less able to meet their financial obligations in more than two decades, rivaled only by the Great Depression of the late 1920s and 1930s. Across those two years, 88 percent of cities said that they were less able to meet their financial needs in the first year, and 87 percent said so in the following year. The nation’s cities have had to make across-the-board cuts in services. The largest cuts usually occurred in personnel and in the delay or cancellation of capital projects. All Americans and the organizations and services that support them are being impacted: the poor, the elderly, the disabled, healthcare, police protection, education, research, transportation, and even Meals-on-Wheels. Clearly nothing was spared. There have been glimmers of hope: in 2011 the nation’s cities saw an upswing and began to grow more optimistic.
Justifying Your Existence
The public policy implication for service delivery organizations facing this climate of financial constraints is that they must now compete at the highest levels, and not just against programs in their own niche but also against other vitally needed and often better financed programs. When the US Department of Education’s Talent Search Program issued its 2011-2016 request for proposals, there were 968 grant applicants. The Program funded only 435 requests, 45 percent of total applicants. The most significant statistic to come out of an analysis of that funding cycle was that 7 percent of the programs that were not funded had been funded for years previously. This was an indication that those applications were retreads and were no longer sufficient to acquire funding; the programs had ceased to compete effectively. No longer could a program simply blow the dust off its last application and submit it. The demand was for performance-based outcomes. There are 35 federal government programs intended to improve participation in higher education, and the Department of Education saw a $3 billion reduction in its budget. Will it be necessary to further prioritize programs? Has the Department begun to prioritize its funding? Of course it has!
In many cases, for your organization to continue to exist or remain viable, you must be able to argue effectively for its existence, or more succinctly, for its continued funding. The economic and political climates of our time demand empirical evidence that you are effective and efficient. Can you provide evidence of what you do? Of why you do it? That you have been effective? Or that you have improved? Do you have evidence of effective and continuous improvement in your operations? Why should your organization, above all others applying for funding, deserve continued support?
Rationale for Evidence Based Decision Making
All levels of government are embracing the call for data-driven decision making. The US Federal Government has begun to implement and require formative and summative evaluations at all levels. The Government Performance and Results Modernization Act of 2010 “calls for agencies to focus on high priority goals and create a culture where data and analytics play a larger role.” A very short list of agencies employing and calling for data-driven evaluations includes:
1. Centers for Medicare and Medicaid Services
2. Federal Aviation Administration
3. Office of Management and Budget
4. The Network of Associated Schools of Public Affairs, Policy and Administration
5. US Census Bureau
6. US Department of Education
7. US Department of Housing and Urban Development
8. US Department of Veterans Affairs
9. US Federal Reserve
In your grant applications you must justify your request for funding and thus your existence. Most federal guidelines for grant applications require formative and summative evaluations: formative, or process, evaluation to provide evidence that you are continuously improving your processes, and summative, or impact, evaluation to provide evidence that you are meeting your program objectives. The guidelines further recommend that programs use third-party evaluators to ensure that the evaluations are objective and unbiased, and they look for these evaluations to be catalysts for change. Historically, expanded evaluation has been the usual reaction to significant expansions in government spending.
A specific example of the rationale for evidence-based decision making is the US Department of Education’s competitive priorities. The Department awards 10 competitive preference priority points in three categories:
1. Turning around persistently low-achieving schools—projects that are designed to
identify and address low performance in program operations, not just program
outcomes.
2. Enabling more data-based decision making—projects that are designed to collect (or
obtain), analyze, and use high quality and timely data, including data on program
outcomes.
3. Improving productivity—projects that are designed to significantly increase
efficiency in the use of time, staff, money, or other resources while improving
performance outcomes.
The traditional “Black Box” approach to evaluation is no longer good enough. The model only
required a program to broadly specify its inputs, such as funding match, personnel and
institutional support. The operations to achieve the program objectives were almost entirely
inside a black box. The funders rarely knew exactly what they were, how they were linked, or
how they functioned. Another term for this approach is summative evaluation, in which only the long-range program objectives are evaluated or analyzed. This was problematic in that many of the programs involved outcomes that could only be measured effectively long after the money had been spent. A case in point would be a college access program targeting junior high school students, whose objectives are to get poor children to enroll in and graduate from college. From junior high school to college graduation is at least 8 to 10 years. Until that point, programs simply said what they did and counted the number of students with whom they interacted. All agreed that this was not a solid measure of effectiveness or efficiency.
Legislators and funders wanted and demanded empirical evidence of effectiveness and
efficiency.
Evaluation, Myths and Benefits
Many have found excuses to conduct insufficient evaluation of their programmatic outcomes, or none at all, let alone any evaluation of their processes. The Pell Institute identifies the three most common myths used to avoid program evaluation.
Myth #1: Evaluation unnecessarily diverts resources and services from my program. The Institute counters that evaluation is an investment whose gains in efficiency and effectiveness most often greatly exceed its cost.
Myth #2: Evaluation automatically determines whether or not a program will be eliminated. The Institute points out that program evaluation is for program improvement, not elimination. It is program evaluation that permits programs and organizations to identify the processes that are strong and effective as well as those that are ineffective and in need of improvement, removal, or replacement. Evidence of continuous improvement is considered a program strength and a highly regarded characteristic. Revising or eliminating a weak process is far less costly than losing your program to a better managed one.
Myth #3: Evaluation is too complicated. Yes, it can be if it is not well thought out and constructed. Evaluation can be very easy, especially after a plan is developed. This analysis presents such a plan.
The benefits of evaluation are varied and often uniquely tied to a program’s specific characteristics. Among the touted benefits of evaluation are that it:
• Produces programmatic self-understanding and self-accountability;
• Allows you to demonstrate program outcomes to grant providers and other constituents;
• Substantiates requests for increased funding by providing evidence of effectiveness;
• Identifies and leverages program strengths;
• Identifies and modifies ineffective practices;
• Provides documentation for performance/funding reports;
• Creates a foundation for strategic planning; and
• Produces credibility and visibility.
Program evaluation is so essential to program survival that issue advocacy groups frequently provide external funding for evaluation to the agencies for which they advocate, through programs such as the Ruddie Memorial Youth Foundation’s Evaluation Grant and the Grassroots Grants Program of the Meyer Memorial Trust. The Grassroots Grants Program specifically targets smaller organizations with operating budgets under $250,000, awarding grants of up to $25,000 that can be used for projects such as organizational technical assistance, hiring advisers or consultants, strategic planning, and marketing. The Ruddie Memorial Youth Foundation also awards grants of up to $25,000 to programs for evaluations that are outcome based, logic-module based, quantifiable, valid, utilization focused, and professional. In 2005, Ernest Brewer and Jama McMahan-Landers of the University of Tennessee, in a “Longitudinal Study of the Talent Search Program,” drew this conclusion regarding evaluation in education: “highlighting participants’ enrollment rates in post-secondary education is an important first step toward demonstrating the effectiveness of programs aimed at removing barriers to higher education for first generation college students from low-income backgrounds. However, in the current climate of quantitative accountability, Federal TRIO program administrators must push themselves to document all stages of success. College enrollment rates are not enough; programs also must verify progress.” It is a clarion call for formative evaluation.
Six Sigma and Shainin as a Basis for Continuous Improvement
Long before the recession of 2008 was full blown, the discussion of data-driven decision making had begun. The manufacturing industry has long embraced the concept, from Frederick Taylor’s industrial engineering to Value Analysis Engineering, Shainin, and Six Sigma. The competitive environment of the for-profit sector of our economy made efficiency and effectiveness requirements of survival: to stay in business, to compete. This is where the public and non-profit sectors are today. They must compete. We must compete.
The proposed model is rooted in Six Sigma and Shainin Red X techniques. Six Sigma is a set of strategies, tools, and techniques for improving processes; it is a structured process for problem solving. Since its inception at Motorola, the technique has been employed worldwide. Implementation of the methodology was rooted in manufacturing; however, the concept has found its way into virtually all business sectors. The overall technique is defined by the cycle of Define, Measure, Analyze, Improve, and Control (DMAIC).
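As a rough illustration only, and not part of the published model, the DMAIC cycle can be expressed as a simple checklist-style data structure. The Python sketch below is a minimal, hypothetical example: the phase names come from the text, while the field names and one-line summaries are assumptions made for illustration.

from dataclasses import dataclass

# A minimal sketch of the DMAIC cycle as a data structure.
# Phase names follow the text; field names and summaries are illustrative assumptions.
@dataclass
class DMAICPhase:
    name: str     # Define, Measure, Analyze, Improve, or Control
    purpose: str  # what the phase is meant to accomplish

dmaic_cycle = [
    DMAICPhase("Define", "Identify the program's high-level processes and the objective of each"),
    DMAICPhase("Measure", "Agree with stakeholders on process components, objectives, and the data needed"),
    DMAICPhase("Analyze", "Identify and validate root causes; flag BOBs and WOWs"),
    DMAICPhase("Improve", "Implement changes that address the validated root causes"),
    DMAICPhase("Control", "Monitor the change with before-and-after comparison and document the results"),
]

for phase in dmaic_cycle:
    print(f"{phase.name}: {phase.purpose}")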
The Evaluation Planning Process
The evaluation process begins with the development of an organizational evaluation plan, preferably the organization’s strategic plan. The participants in the evaluation plan’s development are its stakeholders: the owner of the process and the supplier. A plan is developed for both the process owner and the supplier and should begin at the highest level of the organization.
Define the purpose and scope of the program by identifying its high-level processes. The identification technique requires not only naming the processes but also identifying the objective of each.

Measure the current level of performance of each process. This step brings all stakeholders into the evaluation process: all agree on the components of the process, the objectives of each component, and the data needed for effective evaluation. The constraint on the data to be collected is that it should be used not just for reporting but also for process monitoring and improvement.
Analyze the data to identify root causes and validate them against the captured data. This process determines the improvements that need to be made, but it is also an opportunity to identify Shainin’s Best of the Best (BOBs) and Worst of the Worst (WOWs). The BOBs are highlighted and projected externally as program strengths. The WOWs are recognized as opportunities for improvement and become the focus of the program’s improvement plan. This step helps to pinpoint problem locations and identify potential areas for improvement. All the tools of the Six Sigma toolbox are utilized, soft tools (qualitative) and hard tools (quantitative) alike; selection is determined by the level of available data.
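To make the BOB and WOW idea concrete, the hypothetical Python sketch below ranks a handful of invented process measures and flags the strongest as a BOB and the weakest as a WOW. The process names and scores are illustrative assumptions, not data from any actual program; a real analysis would use the measures agreed on in the Measure step.

# Hypothetical sketch: flagging BOBs and WOWs from process performance scores.
# Process names and scores are invented for illustration only.
performance = {
    "tutoring_sessions": 0.92,        # share of sessions meeting their objective
    "parent_outreach": 0.41,
    "college_visits": 0.78,
    "financial_aid_workshops": 0.55,
}

ranked = sorted(performance.items(), key=lambda item: item[1], reverse=True)
bobs = ranked[:1]    # Best of the Best: highlighted externally as program strengths
wows = ranked[-1:]   # Worst of the Worst: the focus of the improvement plan

print("BOBs:", bobs)
print("WOWs:", wows)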
Implement the improvements that have been determined to most effectively address the root causes of the major issues. Implementation is an iterative, ongoing process. Data collection is the responsibility of the program owner and the service provider, while data analysis is performed by the consultant. Improvement becomes project based: the program holds workshops that identify projects, solutions, and their effectiveness. Continuous improvement becomes part of the contract or working agreement between the stakeholders at all levels.
Control the change by performing before and after analyses. The change is monitored and
the results are documented.
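As a simple illustration of the before-and-after comparison this step calls for, the sketch below compares one measure across the two periods. The measure name and values are hypothetical; a real analysis would use the data defined in the Measure step and, where sample sizes allow, a formal statistical test on the underlying observations.

# Hypothetical before-and-after comparison for the Control step.
# The measure and values are invented for illustration only.
before = {"avg_practice_test_score": 16.2}   # baseline period
after = {"avg_practice_test_score": 18.1}    # period after the improvement

for measure, baseline in before.items():
    change = after[measure] - baseline
    pct_change = 100 * change / baseline
    print(f"{measure}: {baseline} -> {after[measure]} ({pct_change:+.1f}%)")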
The initial evaluation planning process is a collaboration among the consultant, the program owner, and the service provider. The initial planning session normally involves two 4- to 6-hour workshops to develop the evaluation and data plans. There are two levels of planning involved: Level 1 includes the program owner and the consultant, and Level 2 consists of the program owner, the service provider, and the consultant. The benefit of the process is that the owner and the supplier learn to appreciate each other’s process and information needs. The resulting plan is a mutual agreement between the partners.
The Program Level Evaluation Plan Logic Module
There are two phases to the Program Level Evaluation Plan (PLEP). The participants in the plan’s development at both levels are the program owner and their supplier. Phase 1 lays out the evaluation plan and Phase 2 develops the data collection plan.
Phase 1: The Evaluation Plan
Step 1: Program goals and objectives are identified first because the summative goals are known and therefore measurable, as opposed to the mission. The mission is usually described in very broad terms, so broad that it cannot be effectively measured.
Step 2: Links the mission of the supplier to the goals and objectives of the process owner.
Step 3: Identifies the activity performed by the supplier that addresses its mission and is linked to the program’s goals and objectives.
Step 4: Requires an explanation of why the activity is performed; these are the process goals, and they must be measurable.
Step 5: Describes how the activity can be effectively measured.
Step 6: Describes how the measures from Step 5 reflect the program’s goals and objectives.
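As an illustration only, one row of a Phase 1 evaluation plan built from the six steps above might be represented as follows. The field names and the example content are assumptions made for this sketch, not the published PLEP template.

from dataclasses import dataclass

# Hypothetical sketch of one row of a Phase 1 evaluation plan.
# Field names and example content are illustrative assumptions.
@dataclass
class EvaluationPlanRow:
    program_goal: str   # Step 1: measurable program goal or objective
    mission_link: str   # Step 2: how the supplier's mission links to that goal
    activity: str       # Step 3: supplier activity that addresses the goal
    process_goal: str   # Step 4: why the activity is performed (measurable)
    measures: list      # Step 5: how the activity is measured
    goal_linkage: str   # Step 6: how the measures reflect the program goal

row = EvaluationPlanRow(
    program_goal="Increase participants' college enrollment",
    mission_link="The supplier's college-readiness mission supports the enrollment goal",
    activity="Weekly ACT preparation sessions",
    process_goal="Raise participants' average practice-test scores",
    measures=["session attendance", "pre/post practice-test scores"],
    goal_linkage="Higher scores support admission and, in turn, enrollment",
)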
Phase 2: The Data Collection Plan
Again, the participants in the planning are the program owner and their supplier.
Step 1: List all the measures identified in Phase 1’s Step 5.
Step 2: Determine whether the required data is currently being collected. If it is not, a collection method should be worked out; the considerations should be time and effort.
Step 3: Determine the needed frequency of collection. The more frequent the collection, the more reliable the predictive ability of the data. Again, the considerations should be time and effort.
Step 4: Determine the supplier’s operational use of the data. Collecting data strictly for reporting should be avoided.
Step 5: Provide the program owner with a timeline for when the data will be available for analysis. The analysis of the data is the responsibility of the program owner.
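For illustration, a single entry in the Phase 2 data collection plan, covering one measure carried over from Phase 1, might be sketched as follows. The structure mirrors Steps 1 through 5 above; the field names and values are hypothetical.

from dataclasses import dataclass

# Hypothetical sketch of one Phase 2 data collection plan entry.
# Field names and values are illustrative assumptions.
@dataclass
class DataCollectionItem:
    measure: str                 # Step 1: measure carried over from Phase 1
    currently_collected: bool    # Step 2: is the data already being collected?
    collection_method: str       # Step 2: how it will be collected if it is not
    frequency: str               # Step 3: how often it is collected
    operational_use: str         # Step 4: how the supplier uses the data day to day
    available_for_analysis: str  # Step 5: when the program owner can expect it

item = DataCollectionItem(
    measure="pre/post practice-test scores",
    currently_collected=False,
    collection_method="score sheet entered after each session",
    frequency="monthly",
    operational_use="adjust session focus toward low-scoring topics",
    available_for_analysis="end of each semester",
)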
A Few Evaluation “isms”
• Keep the process as simple as possible. There is no need to impress anyone with your analytical acumen.
• Involve stakeholders to identify meaningful measures.
• Collect only data that you plan to use. Evaluation is more than just reporting; it is operations. You have limited resources, so dig deeper only when it is necessary.
• Identify the greatest opportunities for improvement, but pick the low-hanging fruit first. A lack of success brings with it a loss of enthusiasm and effort.
Conclusion
The answer to the question “Can you effectively argue for the existence of your program and its funding?” is “Yes, we can!” Your response should be:
We do this!
This is why we exist!
We have been effective. Here is our data.
We measure what we do.
These are the things that we do well.
These are the things that we are working to improve.
These are the processes that we changed and their results.
These are the people who have benefitted from our program, and here is the data on how they have benefitted.
Here is the data on the impact without our services.
We are continuously improving.
We are working to become a national best practice.
Construct your evaluation plan, plan your data collection, analyze your data, improve your processes and outcomes, document your progress, and then tell the world your story!
Remember: the goal is continuous improvement.