Journal of the National Grants Management Association, 2009
Using Automated Logic Models to Enhance Grants Management
by Barry Nazar, Frederick Richmond, Manuel Valentin, and Barbara Dorf
Logic models have been advanced previously in this journal for their ability to enhance
strategic management of grants programs1 and for their ability to improve the
development of proposals2. In the wider literature, logic models are promoted for
planning3 and, of course, their original purpose; viz., program evaluation4. In all the
applications of logic models heretofore, the merits of logic modeling are wrought by
aiding processes of conceptualization, communication, and sometimes, teamwork. The
logic models per se are largely a terminal, static by-product of these processes.
What ensues beyond the creation of the logic models is subject to the proclivity of the
persons involved.
We describe here some techniques whereby logic models not only yield the conceptual
enhancements described above, they act dynamically to monitor the fidelity of program
implementation and support the focus of ongoing management. When applied to
hundreds, or thousands, of grantees implementing programs of various design within a
grantor’s scope of purpose, these techniques provide both performance measurement
and assessments for the putative “theories of change” in use.
The HUD Logic Model Experiment
Despite the wide acclaim for logic models, there are many limitations to their practical
utility for grants management, at least in the form in which they are typically prescribed. Many
federal, state, and foundation grant makers now require logic models as part of their
grant applications, but HUD currently uses those logic models for managerial purposes
after the grant awards are made. The story of how HUD arrived at this level of practice
highlights the shortcomings of traditional logic models and the features of automated
logic models that make them a viable tool for grants management.
In 2003, HUD’s Office of Departmental Grants Management and Oversight (ODGMO)
sought to find some way to track the activities and outcomes resulting from the several
billion dollars of discretionary funding through its SuperNOFA grants. This was no small
1. Michaelson, L. (2005). Using logic models to enhance strategic management of federal grant programs. Journal of the National Grants Management Association, 14(1), 4-12.
2. Carruthers, W.L., Thivierge-Rikard, R.V., & Thivierge-Rikard, L.T. (2008). Using logic models to manage the development of grant proposals. Journal of the National Grants Management Association, 16(1), 4-10.
3. W.K. Kellogg Foundation. (2004). Logic model development guide: Using logic models to bring together planning, evaluation, and action. Battle Creek, MI: W.K. Kellogg Foundation. Retrieved January 20, 2009, from http://www.wkkf.org/Pubs/Tools/Evaluation/Pub3669.pdf
4. Frechtling, J.A. (2007). Logic modeling methods in program evaluation. San Francisco: Jossey-Bass.
task as there are over 30 different HUD programs in the SuperNOFA, including lead
remediation, housing counseling, economic development, rural initiatives, urban
initiatives, construction, financing, and fair housing. Furthermore, HUD receives over
10,000 applications each year and funds 3,000 to 4,000 grantees.
Each HUD program has its own statutory set of eligible activities. Some eligible
activities are common to multiple programs and some are unique to one or a very few.
Similarly, the goals and outcomes differ across HUD programs, with only occasional
overlap.
ODGMO elected to pursue this newfound accountability with a framework set forth by
logic models. Specifically, it was patterned after logic models used in the Results
Oriented Management Accountability (ROMA)5 schema. The ROMA version of logic
models differs slightly from that of the Kellogg Foundation in that “Inputs” are omitted
and replaced by “Needs, Problems, or Situation.” Also, the ROMA model places an
added emphasis on “Measurement Tools” and a plan for collecting, storing, processing,
and reporting data about activities and outcomes in the model. Finally, ROMA models
include a projection of activities and outcomes.
The initiative was launched with the 2004 and 2005 SuperNOFAs. All grantees
submitted a logic model with their filed applications. Detailed instructions were given
and video tutorials posted online. The models were submitted on paper in 2004, and in
MS Word™ files as part of an electronic application in 2005, and were, therefore,
essentially “paper” logic models. The models were subjected to a content analysis by
The Center for Applied Management Practices (CAMP) and entered into a relational
database (MS Access™). Although these efforts were aimed at learning about HUD
grantee activities, the analysis revealed more about the inadequacies of paper logic
models in grants management.
Chief among the adverse findings were:

• Applicants used different and sometimes ambiguous terminology to describe needs, activities, and outcomes.
• Applicants used different units of measure when projecting activities and outcomes, sometimes referring, for example, to: families, households, individuals; or buildings, units, square feet; or mortgages, dollars, borrowers.
• Many applicants omitted specifying any quantitative projection of activities or outcomes whatsoever.
• Some applicants used cryptic labels for content and others wrote narrative essays. One model was 72 pages long and others scarcely a half page.
• Some applicants specified items that were not among the grants’ eligible activities.
• The timeframes for outcomes; i.e., short, intermediate, and long term, were too ambiguous to make meaningful comparisons among projections.
5. ROMA is a national peer-to-peer training regimen developed in large part by Frederick Richmond of the Center for Applied Management Practices (CAMP) and funded by the ACF/Office of Community Services to provide a response for the Community Action Agencies to GPRA.
• Many applicants did not distinguish between outputs and outcomes.
• Some applicants failed to put their applicant name on the logic model.
• The level of effort required to conduct content analyses was burdensome.
• Some applicants corrupted the template formatting while entering their data.
• Many models showed poor correspondence among needs, activities, and outcomes, thus obscuring whether a “theory of change” might be evident.
• Despite compiling all content into a relational database, it was not possible to generate meaningful aggregate summaries of applicant activities and proposed outcomes for most of the HUD programs because of the many inconsistencies of model content.
Despite all these difficulties, a number of favorable findings also emerged, which
propelled the initiative through subsequent rounds of development. Chief among the
favorable findings were:
• Processing of electronic applications with attached logic model files through Grants.gov to the various HUD program offices, ODGMO, and the consultant offices (CAMP) for content analysis was easily manageable.
• A desktop database, MS Access™, was more than adequate to handle the volume of content necessary to carry out the processing of all logic models nationwide.
• About 50 percent of the logic models submitted rendered an excellent executive summary of the grantees’ proposed program.
• Many applicants reported that the logic model provided a useful and desirable format for presenting their program.
• The content analysis effort provided a foundation for specifying a data element dictionary for needs, activities, and outcomes in subsequent logic model creation.
The mixed findings make it clear that logic models generated in the usual way; i.e., a
prescribed template with open-ended content, are of limited usefulness for grants
management, even with detailed instructions and video training. Rather than abandon
the approach, the ODGMO and CAMP, its consulting partner, devised an alternative
approach that overcomes the disadvantages while retaining the advantages of logic
modeling from a grants management perspective.
The HUD eLogic Model®6
Our solution lies in a stable, electronic form arranged in a logic model format with a
built-in knowledge base (data model) that informs the user about appropriate choices of
content for each component of the logic model. Most content is entered by selecting
from a pick list or drop down menu. Each HUD program has a finite set of Needs,
6. The term “eLogic Model®” is a registered trade name of The Center for Applied Management Practices, Inc., but the concepts and technologies described here are public domain and grants managers are encouraged to apply similar approaches to their grants management protocols.
Activities, and Outcomes for which its funding is authorized. Each applicant, in turn,
selects the particular combination of these Needs, Activities, and Outcomes which
correspond to their proposal. As the applicants do so, the electronic form automatically
presents the appropriate unit of measure for an Activity or Outcome and prompts the
applicant for a quantitative projection of how many of these units they intend to deliver
or attain. It operates as a logic model expert system. The layout of the 2007 HUD
eLogic Model® is presented in Figure 1.
An astute reader may object that this seems too confining to elicit creative proposals.
Although the knowledge base for pick lists is fairly comprehensive, occasionally
grantees have something in mind that is not listed. The pick lists for Activities and
Outcomes have an option for “other.” If an applicant selects “other,” the logic model
pops up an interactive dialogue window prompting the applicant to define a new Activity
or Outcome and the unit of measure to associate with it. Upon doing so, the new item is
added to the knowledge base so that the applicant may select it again as needed. The
item is also tagged with a prefix, “new-“ to alert HUD staff that an applicant is proposing
something that was not in the knowledge base. The tagging of the new items allows
HUD staff to gather further information on how applicants operate programs and
determine if additional activities or outcomes should be added to the logic model in the
future.
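The “other” mechanism described above can be sketched in a few lines. The production eLogic Model® implements this in Excel VBA against worksheet pick lists; the Python below is only an illustrative stand-in, and the activity names, units, and function are invented for the example:

```python
# Illustrative sketch of the knowledge base and "other" handling.
# The real system is an Excel/VBA form; names and structures here are assumptions.

knowledge_base = {
    # Each eligible Activity maps to its prescribed unit of measure.
    "Housing Counseling": "households",
    "Home Buyer Training": "individuals",
    "Lead Remediation": "housing units",
}

def select_activity(name, unit_of_measure=None):
    """Return (activity, unit). Unknown items are treated as 'other':
    they are added to the knowledge base under a 'new-' prefix so that
    staff can spot applicant-defined content."""
    if name in knowledge_base:
        return name, knowledge_base[name]
    if unit_of_measure is None:
        raise ValueError("New activities must define a unit of measure.")
    tagged = "new-" + name
    knowledge_base[tagged] = unit_of_measure  # now selectable in any year
    return tagged, unit_of_measure

# A known item posts its unit automatically...
print(select_activity("Housing Counseling"))  # ('Housing Counseling', 'households')
# ...while an "other" item is defined once and is reusable thereafter.
print(select_activity("Foreclosure Hotline", "calls"))
print(select_activity("new-Foreclosure Hotline"))
```

Because every new item carries the "new-" prefix, a later query over the collected models can list all applicant-defined content for possible promotion into the standard lexicon.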
Finally, we should note that the automated logic model form has multiple page layouts
that correspond to fiscal years of funding. Some HUD programs are one year grants
and some span multiple years. The applicant completes a logic model page for each of
the first three years of their grant funding.7 This solves the ambiguity of short,
intermediate, and long term outcomes. If a grantee adds a new item to the knowledge
base, it is accessible from any year of the logic model. A logic model page is not limited
to the size of a printed page. Each logic model year is a scrolling page that could be as
many as six or seven printed pages. Few applicants require this much space.
Typically, an expert system user interface of this type is built as a “front end” attached to
a sophisticated relational database. Practical logistics prohibit such arrangements in
the federal grants management environment. For one, the estimated costs of rolling out
an enterprise application via the internet on a nationwide basis run into millions of
dollars. For another, such an arrangement is incompatible with the existing Grants.gov
application filing system, where all application materials must pass through a single
portal for submission. What is required is (1) a small, portable computer file that can be
attached to a grant application, (2) that has the capacity to store data and run
programmed routines that interact with the user, (3) that is functional on a typical PC as
prescribed by Grants.gov for submitting applications, and finally (4) that is capable of
passing its content to a common database for comparative and aggregate analyses.
The ideal platform is the MS Excel™ spreadsheet file. While we typically think of Excel
as a grid of cells, its worksheets can be formatted with remarkable variety. The formats
7. Some projects operate for longer than three years, but because of the difficulty in accurately projecting beyond three years, grantees are asked to update their logic models in the fourth year of the project.
can be protected, cells can be locked and unlocked, and cells can be configured as
drop down “combo boxes” that look up information from other worksheets in the file (the
knowledge base). Better still, Excel has a script language, VBA (Visual Basic for
Applications), that allows routines (macros) to be executed in response to the user’s
actions. These routines check for errors, add new content to the knowledge base, and
adjust the formatting as the user works. After thousands of logic models are received
by HUD, the VBA script also provides the means to automate the process of uploading
content of the models to a common database for summary analyses.
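The upload stage can be sketched as a batch insert of each model’s rows into a shared table. HUD’s pipeline does this with VBA pushing content into MS Access™; the sketch below substitutes Python and sqlite3, and the table layout and sample rows are invented for illustration:

```python
# Illustrative batch upload of logic model content to a common database.
# HUD uses VBA and MS Access(TM); sqlite3 stands in here, and the schema
# (table and column names) is an assumption, not HUD's actual design.
import sqlite3

def upload_models(conn, rows):
    """rows: (grantee, program, year, activity, unit, projected) tuples
    extracted from the submitted logic model files."""
    conn.execute("""CREATE TABLE IF NOT EXISTS logic_model_rows (
        grantee TEXT, program TEXT, year INTEGER,
        activity TEXT, unit TEXT, projected INTEGER)""")
    conn.executemany(
        "INSERT INTO logic_model_rows VALUES (?, ?, ?, ?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
upload_models(conn, [
    ("ABC Housing", "Housing Counseling", 1, "Home Buyer Training", "individuals", 250),
    ("XYZ CDC", "Housing Counseling", 1, "Home Buyer Training", "individuals", 120),
])
count = conn.execute("SELECT COUNT(*) FROM logic_model_rows").fetchone()[0]
print(count)  # 2
```

Because every submitted model draws its content from the same knowledge base, rows from thousands of grantees load into one table without the terminology conflicts that defeated the paper models.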
Performance Measurement with Logic Models
The logic models submitted with the application specify what and how much an
applicant proposes to deliver (services) and achieve (outcomes). Upon making grant
awards, HUD returns the logic model to the selected grantee for follow up reporting.
The returned models function differently, however, as the original model content is
locked and certain fields for specifying date periods, actual activities/services delivered,
and actual outcomes attained are now unlocked. As part of the reporting requirements,
the grantees fill in these performance fields on a periodic basis and email the file back
to the corresponding HUD program. The reporting periods differ among programs, with
some HUD programs requiring quarterly submissions, some semiannual, and some
annual.
The returned logic models go through a similar routine of review by project officers and
uploading to a common database. For the project officers, the returned logic models
provide a ready and detailed executive summary of the grantee’s project and its
implementation progress. For policy makers, the database summaries
provide a “big picture” assessment of what’s going on. That is, they can know what mix
and amount of activities and services are occurring and what mix and amount of
outcomes are resulting. Database queries also support more ambitious inquiries about
the relationships between activities and outcomes.
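Such “big picture” summaries reduce to simple grouped queries once the content is standardized. Again sqlite3 stands in for MS Access™, and the schema and figures are invented for illustration:

```python
# Illustrative aggregate query: what mix and amount of activities are occurring,
# and how do actuals compare with projections, program by program?
# sqlite3 stands in for MS Access(TM); schema and data are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE performance (
    grantee TEXT, program TEXT, activity TEXT,
    projected INTEGER, actual INTEGER)""")
conn.executemany("INSERT INTO performance VALUES (?, ?, ?, ?, ?)", [
    ("ABC Housing", "Housing Counseling",  "Home Buyer Training", 250, 210),
    ("XYZ CDC",     "Housing Counseling",  "Home Buyer Training", 120, 130),
    ("ABC Housing", "Lead Hazard Control", "Lead Remediation",     40,  35),
])

summary = conn.execute("""
    SELECT program, SUM(projected), SUM(actual)
    FROM performance GROUP BY program ORDER BY program""").fetchall()
for program, projected, actual in summary:
    print(program, projected, actual)
```

The same grouping, applied to thousands of grantees, gives policy makers the mix and amount of activities and outcomes across an entire SuperNOFA program.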
The Database Platform
The HUD logic model initiative uses MS Access™ as the receiving database. While
there are many among the IT profession who disparage Access as too limited for
serious database management, these sweeping judgments bear some qualification.
The principal weakness of Access is its limited multi-user support. If more than five or
ten persons are logged in at the same time, performance degrades precipitously. That
is not an issue with the HUD logic model operation, however, because the data loading
is accomplished by automated batch uploads of the logic models. As for disseminating
the information, the portability of Access makes it possible to simply give each HUD
program its own copy of the uploads to examine and query at will.
In addition to its portability, Access offers superior analytic features by comparison
with enterprise-level database platforms.8 A user-friendly interface allows for generation
8. Alexander, M. (2006). Microsoft Access data analysis. Indianapolis, IN: Wiley Publishing.
of ad hoc queries as well as standardized reports, data transformations, pivot tables,
and even export back to Excel files for those more comfortable with spreadsheet
operations. Best of all, the platform for implementing the entire scope of the HUD logic
model initiative lies within the Microsoft Office™ suite of software, which makes it very
affordable and accessible.
For grant managers contemplating an eLogic Model® type operation, probably the most
difficult or least accessible task in setting up the operation is developing the VBA code
that automates certain aspects of the logic model and the uploading routines. The script
embedded in the distributed logic models is password protected. This is not to prevent
others from adapting the script, but to prevent grantees from inadvertently corrupting
their logic model files. The authors are happy to share the underlying technology with
grant managers from government or foundation grant makers upon request.
The Evolution of eLogic Model® Applications (CDBG)
Most systems arise through a process of successive approximations and adjustments.
The same is true for the HUD logic model initiative. During the first several years, 2004
through 2006, the development focused on correcting problems. A presentation was
delivered to the 2004 NGMA Annual Training Conference.9 In the years since 2006,
however, development has focused on expanding the utility and features of the logic
model. Interim period and year-to-date performance measures are captured. DUNS
numbers allow for cross-tab links to SF424 forms for incorporating budgetary and
geographic factors in the analyses. A web-served distribution of all grantee HUD logic
models is now available to the public for review and is searchable by a number of
factors or combinations of factors; e.g., region, state, city, zip code, grantee name,
project name, year of funding, etc.10
The adaptability of the HUD logic model paradigm is aptly demonstrated by a recent
deployment of the system for the Community Development Block Grant (CDBG)
program in Jersey City, NJ.11 The CDBG is a formula grant system that awards funds to
local agencies, who, in turn, become grant makers at a local level. In Jersey City, there
are typically over 75 sub-grantees. The only changes required to make this work were to
populate the knowledge base with the appropriate needs, services, and outcomes that
pertain to the local CDBG scope and mission. A matrix of demographic data fields was also
added to support reporting of the number and characteristics of persons served.
This adaptation was feasible because logic models are inherently generic and eLogic
Models are customized by simply providing a suitable data model in the knowledge
9. Dorf, B., Richmond, F., & Nazar, B. (2004). Performance measurements: U.S. Dept. of Housing and Urban Development. Retrieved from www.ngma-grants.org/docs/2004conference/attachments/sessionE3.ppt
10. Access to the web-served distribution of HUD logic models is available at www.hudgrantperformance.org by entering ID = “hud” and Password = “hud” (without the quotes). This deployment was created by Chad Harnish, IT Director at Temple University, Harrisburg, PA.
11. On-line access to the Jersey City logic model grants management system is available at http://www.cityofjerseycity.com/hedc.aspx?id=2638
base for content selection. The other factor that made this so readily adaptable is that
the basic software platform is nothing more than Microsoft Office™.
Summary of Lessons Learned
The notion of using logic models in connection with grants management is attractive,
but putting the notion into practice deserves some thoughtfulness. The chief advantage
of logic models lies in identifying information about programs in ways that correspond to
the “theory of change” or functional rationale of the program structure. Not all persons
are accustomed to thinking along these lines and they sometimes interchange goals,
needs, outputs, and outcomes. There is also a strong likelihood that grantees will
specify these elements with different terminology or different units of counting. All these
inconsistencies make it difficult to quantify information. Automated logic models can
overcome these problems by supplying a structured guide to creating the model and a
defined lexicon for specifying content and units of measure.
The approaches to automating logic models are many and, again, deserve some
thoughtfulness. The likelihood of success is enhanced if the platform is generally
accessible, affordable, easily adaptable, and feature rich. The features should include a
user-friendly interface, programmability, portability, and connectivity to a relational
database. As with any instrument, pilot testing is essential. The gremlins who enforce
Murphy’s Law are especially attracted to automated logic models.
Automating the process of logic modeling does not preclude the need for training. As
noted above, thinking in terms of logic model frameworks is novel for many. The
“garbage in – garbage out” rule still applies. The amount or intensity of training required
may be reduced, but not eliminated. Clear instructions and, perhaps, video tutorials
may be sufficient.
Finally, logic modeling, even when automated, is not a panacea. While it yields an
abundance of information that can support grants management, there are limits to what
that information can show. Logic model data are not a substitute for experimental
and quasi-experimental evaluation research. Causal relationships are implied in a logic
model, but the aggregate nature of the data collected lacks the empirical controls to make
statistically valid inferences about causality. With a large number of grantees (as with
HUD), however, some insightful correlations are possible. That is, one can ask: Does
the presence or amount of a certain activity correspond to the presence or amount of a
certain outcome?
Some types of information do not lend themselves readily to a logic model format. In
the Jersey City adaptation, for example, a matrix of demographic characteristics was
added to fulfill certain reporting requirements. In the HUD logic models, we add a series
of management questions specific to each HUD program which are aimed at developing
estimates of return on investment. Fortunately, the platform we use to disseminate and
collect logic models can readily accommodate add-on data capture.
References
Alexander, M. (2006). Microsoft Access data analysis. Indianapolis, IN: Wiley
Publishing.
Carruthers, W.L., Thivierge-Rikard, R.V., & Thivierge-Rikard, L.T. (2008). Using logic
models to manage the development of grant proposals. Journal of the National
Grants Management Association, 16(1), 4-10.
Dorf, B., Richmond, F., & Nazar, B. (2004). Performance measurements: U.S. Dept. of
Housing and Urban Development. Retrieved from www.ngma-grants.org/docs/2004conference/attachments/sessionE3.ppt
Frechtling, J.A. (2007). Logic modeling methods in program evaluation. San Francisco:
Jossey-Bass.
Michaelson, L. (2005). Using logic models to enhance strategic management of
federal grant programs. Journal of the National Grants Management
Association, 14(1), 4-12.
W.K. Kellogg Foundation. (2004). Logic model development guide: Using logic models
to bring together planning, evaluation, and action. Battle Creek, MI: W.K.
Kellogg Foundation. Retrieved January 20, 2009, from
http://www.wkkf.org/Pubs/Tools/Evaluation/Pub3669.pdf
Authors
Barry Nazar, D.P.A. Barry is Assistant Professor at Temple University, School of Social
Administration, where he teaches research methods. He is also a Fellow of The Center
for Applied Management Practices where he studies antipoverty initiatives.
nazar@temple.edu
Frederick Richmond, M.S./M.H.A. Fred is the CEO of The Center for Applied
Management Practices in Camp Hill, Pennsylvania. He is a founding contributor to the
national Results Oriented Management Accountability (ROMA) curriculum and co-author
of The Accountable Case Manager. frichmond@appliedmgt.com
Manuel Valentin, C.C.A.P. Manny is Senior Manager at The Center for Applied
Management Practices and Master Trainer for the ROMA National Peer-to-Peer training
program. Manny is co-author of The Accountable Case Manager and one of the
principal architects of the eLogic Model®. mvalentin@appliedmgt.com
Barbara Dorf, M.U.P. Barbara is the Director of HUD’s Office of Departmental Grants
Management and Oversight (ODGMO). She is a member of NGMA and recipient of the
National Public Service Award (2007) from the National Academy of Public
Administration, and the Harry Hatry Award for Distinguished Performance Measurement
Practices (2007) from the American Society for Public Administration.
barbara.dorf@hud.gov
The views presented in this paper are the opinions of the authors and do not
necessarily represent the views of the Department of Housing and Urban Development.
Figure 1: Sample Logic Model from SuperNOFA 2007, Partially Completed. (The figure’s callouts identify the Reporting Period and Grantee Name fields; Needs and Activities selected from drop down menus; projected and actual Workshops; a new Outcome defined by the user; and the Unit of Measure posted automatically.)
Figure 2: eLogic Models® Driven by Drop Down Pick Lists. (The figure links the Knowledge Base to the Logic Model Form, with columns for Goals/Policy, Needs, Activities, and Outcomes; an Activities drop down menu lists Housing Counseling, Personal Budgeting, Credit Repair, Home Buyer Training, Home Maintenance, and OTHER. The sample Need shown is “Lack skills and information,” under the goal “Provide Affordable Housing.”)
Figure 3: Flow of HUD Logic Model Processing. (The diagram traces the Excel logic model file attached to the application forms through Grants.gov to HUD; HUD returns models to awarded grantees for performance updates, grantees submit updated performance results, and HUD extracts the data for reports.)