Measuring Your Mission

Every nonprofit has a mission statement that defines the organization’s reason for existence,
and this statement should be the basis of all strategy and programming. That said, it can be
difficult to determine whether these plans or activities are actually achieving what they intend to.
In this Topic of the Month, we will outline ways to set up measurements to demonstrate your
organization’s effectiveness at accomplishing its mission.
Reactive or Proactive Evaluation
There are two mindsets nonprofits use to evaluate progress toward their mission. First, some
nonprofits use reactive evaluation. This mindset takes a look back at activities performed and
determines what worked well, what didn’t, and what lessons were learned for next time. The
other mindset is proactive evaluation. Instead of reacting to past performance, proactive
evaluation determines how to measure the activity’s success before the activity happens. This
mindset not only lets you measure progress as you implement your mission but also allows you
to compare yourself with other organizations that provide similar services once the activity is
completed.
For instance, if you are an after-school program with a mission to increase the reading level of
students, you can determine the reading level of each student at the end of the year. An
increase from one year to the next will indicate the success of your mission. This is a reactive
evaluation. Using a proactive evaluation, you can measure reading levels at the beginning of the
school year, set up expectations for performance throughout the year, and periodically measure
the progress of your students according to these expectations. When the activity concludes, you
can reflect on your periodic and final outcomes and compare them with other after-school
programs to see whether you are more or less effective than other organizations with a similar
mission.
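To make the proactive approach concrete, here is a minimal Python sketch of checking periodic reading measurements against pre-set expectations; the baseline score, expected gain, and quarterly results are all hypothetical numbers invented for illustration.

```python
# A minimal sketch of proactive evaluation: expectations are set before the
# school year starts, then periodic measurements are checked against them.
# All numbers below are hypothetical.

baseline_score = 400              # reading score measured in September
expected_gain_per_quarter = 15    # expectation set before the year begins

# Periodic measurements taken during the year: quarter -> measured score.
measurements = {1: 418, 2: 425, 3: 448}

for quarter, score in measurements.items():
    expected = baseline_score + quarter * expected_gain_per_quarter
    status = "on track" if score >= expected else "behind: adjust the program"
    print(f"Q{quarter}: measured {score}, expected {expected} -> {status}")
```

Because the expectations exist before the program begins, a shortfall in any quarter can prompt a correction along the way rather than a retrospective lesson learned.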
While using reactive evaluation is better than no evaluation, Cathedral recommends proactive
evaluation. We have found that planning the evaluation prior to the activity not only helps solidify
the activity's goals but also determines ahead of time what data needs to be collected, while
allowing for corrections along the way. Furthermore, proactive evaluation allows the
organization to better understand its areas of expertise and will help position the organization
with donors, parents, and community partners during the fundraising season.
Qualitative vs. Quantitative Data
There are two types of data that you can collect in your evaluation: quantitative and qualitative.
It is important to know that one type is not superior to the other, and both are scientifically
based. Cathedral recommends collecting both to demonstrate your organization’s success.
Quantitative Data: Quantitative data is expressed through numbers. How many children
participated in your program? How many reading classes did they take? How many hours
did they read per day? How many tutors were involved? What is the high school
graduation rate of students in your after-school program? Because quantitative data is
inherently objective, it is often used to clarify qualitative data, which is more subjective in
nature.
Unique counts: Many foundations are asking for a program's "unique counts," i.e.,
the unique number of individuals served. For example, a homeless shelter's unique
count is the number of people served, not the total number of meals served.
Why is this? Of the 350 meals served at the homeless shelter during the day (24
breakfasts, 76 lunches, and 250 dinners, for a total of 350 meals), some people
were served multiple meals. Thus, 350 meals does not mean 350 people served.
Let’s do the math. After an evaluation, you found the following:
1. 24 people came to all three meals (breakfast, lunch, and dinner).
2. 52 people came only to lunch and dinner.
3. 174 people came to dinner only.
Looking at each step above, we can therefore see the following people at each
meal:

            Breakfast   Lunch   Dinner
  Step 1:      24         24       24
  Step 2:                 52       52
  Step 3:                         174
  Total:       24         76      250
To get the unique count for that day, we need to make sure that we do not
double count those who received more than one meal. Thus, the 24 lunches and
24 dinners from Step 1 (those who also ate breakfast) and the 52 dinners from
Step 2 (those who also ate lunch), 100 meals in all, should not be counted
again. Therefore, the daily unique count for this homeless shelter is 250 even
though 350 meals were served. It is entirely appropriate to report both numbers, as
long as you explain that 250 is the total number of unique individuals served and
350 is the total number of meals served.
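To make the unique-count arithmetic concrete, here is a minimal Python sketch that derives both figures from individual meal records; the person IDs and record layout are assumptions for illustration, not a prescribed data format.

```python
# A minimal sketch of computing a daily unique count from meal records,
# using the numbers from the example above. Person IDs are hypothetical.

records = []  # each record is a (person_id, meal) pair
for person in range(24):          # 24 people ate all three meals
    records += [(person, "breakfast"), (person, "lunch"), (person, "dinner")]
for person in range(24, 76):      # 52 people ate only lunch and dinner
    records += [(person, "lunch"), (person, "dinner")]
for person in range(76, 250):     # 174 people ate dinner only
    records.append((person, "dinner"))

total_meals = len(records)                             # 350 meals served
unique_count = len({person for person, _ in records})  # 250 individuals

print(f"{total_meals} meals served to {unique_count} unique individuals")
```

Keeping a person identifier on every record is what makes the unique count possible; counting meals alone cannot recover it.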
Qualitative Data: Qualitative data is expressed through stories or quotes gathered from
questionnaires, testimonials, and interviews. Qualitative researchers measure success by
observing and engaging in activities rather than counting the number of participants or
measuring test results. Data that is qualitative is more subjective in nature, as the goal is
to obtain a more experiential sense of whether a program participant had a successful or
positive experience in your program.
Going back to the after-school program aimed at increasing the reading levels of students, the
organization can use quantitative data to track the number of children who attended the
after-school program, how many improved their reading scores, and by what percentage those
scores increased. Using qualitative data, the organization can articulate students' or parents'
opinions about the program as well as narrate challenges that have been overcome because of
the students' participation.
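As a brief sketch of the quantitative side, assuming hypothetical student names and reading scores, those three measures could be computed like this:

```python
# A minimal sketch of the quantitative measures described above.
# Student names and reading scores are hypothetical.

fall_scores   = {"Ana": 420, "Ben": 390, "Cara": 450}
spring_scores = {"Ana": 470, "Ben": 410, "Cara": 445}

attended = len(fall_scores)
improved = [s for s in fall_scores if spring_scores[s] > fall_scores[s]]
percent_gain = {
    s: 100 * (spring_scores[s] - fall_scores[s]) / fall_scores[s]
    for s in improved
}

print(f"{attended} children attended; {len(improved)} improved their scores")
for student, gain in percent_gain.items():
    print(f"  {student}: +{gain:.1f}%")
```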
Knowing and Understanding Your Stakeholders
Lastly, before designing your evaluation tool, you need to know who is interested in your
program’s evaluation and what they want to know. Every organization has different stakeholders
– donors, program participants, board members – and each stakeholder has a different
expectation for your program.
For instance, an employee will probably want to know if you are making progress toward
meeting your mission. The board of directors may be more focused on making sure the
organization is being fiscally responsible as it works toward its mission. A donor will want to
know that their contribution is making a positive difference in your program and that your
program is worthy of their support. Other stakeholders include your local community,
organizational partners, government and local officials, program participants and anyone else
affiliated with your organization. By understanding the motivations and interests of each of
these stakeholders, you will learn what is important to measure.
In the past, organizations could rely on sharing a compelling story of a client who embodied the
“success” of their mission. However, the funding environment has become more competitive,
and organizations that rely on funding from foundations or corporate entities need to realize that
these entities are increasingly looking for strong evaluation measurements. This trend is also
growing with individual donors, especially younger generation donors. More and more, funders
and donors who do not give primarily out of a sense of loyalty are looking beyond the classic
success stories and want to know that organizations have a system in place to measure
incremental and systematic change directly related to the mission statement.
Measuring the Mission: Using a Logic Model
The key to measuring your mission is to break down the all-encompassing mission statement
into smaller activities with measurable outcomes for each. We recommend creating a logic
model to assist you in this process.
A logic model demonstrates your organization's theory, i.e., "What will our program do and
what is the expected result?" It lays out the steps that need to take place before the final
objective is achieved; its components are described below.
Resources: Sometimes referred to as inputs, these are the things that need to be brought to
bear in order for the program to begin, and they include funding, space, volunteers, and staff.
Activities: The specific events or pieces of programming that you will offer. These
activities should directly relate to the resources that you outlined in the resources/inputs
section. Activities should also include those things that need to happen before the
program is launched, i.e., pre-program activities such as hiring, training, and recruiting staff,
and advertising to build awareness. Note that activities can be very simple or complex. You may
find that you have many items in the activities column or only one or two.
After you review what will be used to build the program, the next step is to project or
estimate your intended results. This is done by looking at your:
Outputs: The direct results of the activities that demonstrate the program or a piece of the
program has been implemented. This is often quantitative in nature and says “what we
did” and “who we reached.” It is important that you have an output for every activity. For
the after-school program, it is the number of participants and the number of participating
tutors. If your activity is to advertise the program, the output is the number of
advertisements placed. If your activity is staff training, it is the number of training sessions
held. Qualitative outputs can include the feedback received from those participating in the program.
Outcomes: The direct results of the output/direct implementation, i.e., the difference
between the past and the present as a result of the activity and its output. Note that
while an outcome is an accomplishment, it is not the final impact, and not every output has
an outcome. For our after-school program example, the output of 30 participants and 4 tutors
resulted in an outcome of 23 participants reading at grade level after the program concluded.
Another example is a men's program with an output of 12 participants. The outcome is
that 5 men graduated with a designation that they are clean and sober.
Impact: The final intended effects of the activity. For the after-school program, the
outcome of 23 participants reading at grade level translates into an impact of 17 on-time
graduates. A larger impact may be enrollment in a college or vocational program. For the
men's program, the outcome of 5 participants clean and sober may mean the impact of
functioning citizens of society.
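One way to make the model concrete is to capture it in a simple data structure. The Python sketch below uses the after-school numbers from the examples above; the field names and layout are our own illustration, not a standard logic model format.

```python
# A minimal sketch of a logic model captured as a data structure, populated
# with the after-school numbers used above. The field names and layout are
# illustrative assumptions, not a standard logic model format.
from dataclasses import dataclass

@dataclass
class LogicModel:
    resources: list[str]        # inputs needed before the program begins
    activities: list[str]       # the specific programming you will offer
    outputs: dict[str, int]     # direct, countable results of activities
    outcomes: dict[str, int]    # changes produced by the outputs
    impact: dict[str, int]      # final intended effects

after_school = LogicModel(
    resources=["funding", "space", "volunteers", "staff"],
    activities=["recruit and train tutors", "advertise the program",
                "hold reading sessions"],
    outputs={"participants": 30, "tutors": 4},
    outcomes={"reading_at_grade_level": 23},
    impact={"on_time_graduates": 17},
)
print(after_school.impact)  # {'on_time_graduates': 17}
```

Writing the model down this way makes gaps obvious: an activity with no corresponding output, or an outcome with no path to the stated impact, stands out immediately.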
Measuring Your Mission: Questions
Thinking through a logic model will ensure that you know what you need to successfully run
your program (resources/inputs and activities) and how you will measure its effectiveness
(outputs, outcomes, and impact). As you develop each of these areas, it may be helpful
to begin with your impact (the final intended effects of your activity) and work backward
toward its resources. For example, if you run a job-training program and your intended impact
is that program graduates will attain full-time employment, you can work backwards to develop
the outcomes and outputs against which you can measure progress toward this goal. In this
scenario, an output could be having each participant take part in three mock interviews, and an
outcome would be having each participant interview for current open positions at five
companies. From there, you can work backward again to determine the additional training
activities that need to take place as well as the resources that you need in order to
complete your program.
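As a minimal sketch, here is that backward-planning chain for the job-training example expressed in Python; the impact, outcome, and output come from the text above, while the activities and resources rows are assumptions filled in for illustration.

```python
# A minimal sketch of working backward from impact to resources for the
# job-training example. The activities and resources rows are hypothetical
# assumptions; the impact, outcome, and output come from the text above.

plan = [
    ("impact",     "graduates attain full-time employment"),
    ("outcome",    "each participant interviews for open positions at 5 companies"),
    ("output",     "each participant completes 3 mock interviews"),
    ("activities", "interview training sessions; employer outreach"),
    ("resources",  "trainers, interview space, employer contacts, funding"),
]

# Planning proceeds top-down from impact; delivery runs bottom-up.
for stage, description in plan:
    print(f"{stage:>10}: {description}")
```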
While this can be a time-consuming process, remember that measuring your mission
proactively is how you ensure that you are achieving your organization's purpose.
Articles for Further Reading
1. Key Steps In Outcome Management. This article provides a tutorial on how to
create outcomes for your organization and includes examples of other
organizations' outcomes. It is an excellent "how to" guide.
http://www.urban.org/UploadedPDF/310776_KeySteps.pdf
2. Reactive vs. Predictive Evaluation. This short article underscores the importance of
using outcome measurement as an evaluation tool instead of relying on analyzing
activities that have already taken place.
http://www.nten.org/articles/2012/predictive-program-evaluation-dont-react-predict?utm_source=ntenconnect&utm_medium=email&utm_content=consultant&utm_campaign=oct12
Kimberly Reeve is a Managing Director and Michelle Fitzgerald is a former Senior Associate in
the New York office. Rachelle Bottenbley is an Associate in the Amsterdam office. This article was
written for our Topic of the Month for January 2013 as part of our General Executive Counsel
program.
For more information, please visit Cathedral Consulting Group, LLC online at
www.cathedralconsulting.com or contact us at info@cathedralconsulting.com.