Improving the Performance of the Department’s Investments
Please accept this message in response to the Secretary’s request for ideas to further HUD’s vision
related to “building a stronger HUD” through the measurement of outcomes and improved
accountability. The current measurement approaches used within the Department are quite diverse
and lack alignment. This “flexibility” is good when there is significant variation in program goals and
objectives. However, the Department’s programs have much in common when looking at desired
outcomes, indicating an integrated approach is needed to assess and report outcomes on a common set
of performance measures.
Before moving forward with an integrated approach, we need to answer this fundamental question: Do
we want only an integrated performance reporting system for HUD grants or do we want a performance
management system? Reporting performance outcomes is much less involved than managing
performance outcomes.
A performance reporting system involves the periodic reporting of actual performance on one or more
indicators. The focus of a performance reporting system is describing how grant funds were spent and
the results, planned and unplanned. Benchmarks based on prior performance may or may not be set for
these indicators. The emphasis of this approach is on describing what happened rather than on defining and consistently applying a set of rewards and sanctions to hold programs and grantees accountable for the use of federal funds.
A performance management system builds upon the performance reporting system by adding various
features to promote and ensure accountability in the use of public monies. Typical features of a
performance management system are:
a. Defining the core data elements for decision-making. Core data elements are those elements
deemed critical for decision-making purposes and reporting to Congress and other stakeholders.
Data elements include variables that are demographic in nature, process or product oriented
(such as elements describing the services/products delivered to customers) and descriptive of
program outputs and outcomes. Where a data element is used by two or more programs, it is
important to ensure the definition of the data element is common to all.
b. Defining core performance measures. A performance measure is a data element/indicator
described earlier plus a numeric standard or goal to assess progress. A core performance
measure is a key business metric used to evaluate factors that are crucial to the success of
individual programs and grants and the overall organization. Where a performance indicator is
used by two or more programs, it is imperative to ensure the definition of the indicator is
common to all. Organization-wide measures reflect the Department’s overall progress in its
mission to reach its vision. These measures have common definitions and are typically applied
to each office or division in the Department. Measures can be programmatic and financial in
nature.
o The numeric targets for the core performance measures are typically based on historical baseline data for the overall organization, individual program or type of grant. Targets or individual goals may vary by the type of program and grant. They are value-based and typically derived from performance results from the prior year(s). These numeric targets should reflect adjustments for circumstances outside the control of the program or grant (such as natural and man-made disasters). Targets should ideally include an improvement factor to promote continuous improvement in outcomes. Performance goals should be established for the organization overall and for the various programs and their grantees.
o There are different methods for interpreting a result on an individual measure. The interpretation is typically done in one of two ways: a knife-edge approach, where performance above the target is exceeding, performance equal to the goal is meeting and performance below the target is failing; or a range approach, where exceeding, meeting minimum expectations and failing are defined for each measure. For example, exceeding expectations on a particular measure may be defined as a result above the target or goal. Meeting minimum expectations may be defined as performance that is less than the goal but higher than a predetermined lower-bound threshold (say 90% of the goal, or another acceptable tolerance level). Performance falling below that lower-bound threshold would be considered failure to meet minimum expectations.
o Determine whether the measures are to be weighted equally or differently. One critical implementation issue that arises when incorporating multiple performance measures in a performance management system is determining the relative weights to place on the various measures where some are deemed more important than others. A minimal sketch of target-setting and result interpretation follows this item; weighting is illustrated in the sketch after item c.
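To make the mechanics concrete, the following minimal Python sketch derives a target from a historical baseline with an improvement factor and then interprets a result under both approaches. The 3% improvement factor, the 90% tolerance and the occupancy-rate figures are illustrative assumptions, not Departmental standards.

    # Illustrative sketch only: the factor, tolerance and figures are assumptions.
    def set_target(prior_year_baseline: float, improvement_factor: float = 0.03) -> float:
        """Derive a numeric target from historical baseline data, adding
        an improvement factor to promote continuous improvement."""
        return prior_year_baseline * (1 + improvement_factor)

    def interpret_knife_edge(result: float, target: float) -> str:
        """Knife-edge approach: above the target exceeds, equal meets, below fails."""
        if result > target:
            return "exceeding"
        if result == target:
            return "meeting"
        return "failing"

    def interpret_range(result: float, target: float, tolerance: float = 0.90) -> str:
        """Range approach: results between the lower-bound threshold
        (here 90% of the goal) and the goal meet minimum expectations."""
        if result > target:
            return "exceeding"
        if result >= target * tolerance:
            return "meeting minimum expectations"
        return "failing"

    # Example: a hypothetical occupancy-rate measure.
    target = set_target(prior_year_baseline=88.0)  # 90.64
    print(interpret_knife_edge(89.5, target))      # failing
    print(interpret_range(89.5, target))           # meeting minimum expectations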
c. Determining performance in the aggregate when multiple core performance measures exist. By
default, overall failure for an organization, program or grantee is defined as failing to meet
minimum performance expectations on all measures when multiple measures are used to assess
performance. The Department should consider adopting more rigorous methodologies for
assessing performance overall. One approach is to apply an average rate of attainment
threshold. For example, the organization, program or grant must achieve an average rate of
attainment on the measures of 90% to be considered to have met minimum expectations
overall. A rate below 90% in this example could be considered failure in the aggregate, while
performance above 100% could be considered exceeding expectations in the aggregate.
Another typical approach is to specify the number of key measures that must be met to define an aggregate minimum performance expectation. For instance, exceeding performance expectations in the aggregate could be defined as meeting all individual goals and exceeding expectations on over half of the key core measures. Individual measures can also be weighted.
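The average-rate-of-attainment approach, including unequal weights, might be coded along the following lines. The measure names, the weights and the 90% floor are illustrative assumptions.

    # Illustrative sketch only: names, weights and thresholds are assumptions.
    def aggregate_attainment(results: dict[str, float],
                             targets: dict[str, float],
                             weights: dict[str, float]) -> str:
        """Classify overall performance using a weighted average rate of
        attainment (result / target) across the core measures."""
        total_weight = sum(weights.values())
        avg = sum((results[m] / targets[m]) * weights[m] for m in results) / total_weight
        if avg > 1.00:      # above 100%: exceeding in the aggregate
            return "exceeding"
        if avg >= 0.90:     # 90% average-attainment floor
            return "meeting minimum expectations"
        return "failing"

    # Three measures, the first weighted twice as heavily as the others.
    results = {"units_rehabbed": 95, "occupancy_rate": 88, "reports_on_time": 4}
    targets = {"units_rehabbed": 100, "occupancy_rate": 90, "reports_on_time": 4}
    weights = {"units_rehabbed": 2.0, "occupancy_rate": 1.0, "reports_on_time": 1.0}
    print(aggregate_attainment(results, targets, weights))  # meeting minimum expectations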
d. Rewarding or sanctioning performance on the core performance measures. Rewards can be financial (e.g., a grantee qualifying for supplemental funding based on exemplary performance in addition to need), non-financial in nature (e.g., official recognition of outstanding performance by the Secretary) or both. The application of sanctions should be structured to lead to improved performance. For example, failure in the aggregate for a set period of performance, such as a year (assuming the grant spans multiple years), would result in technical assistance from HUD plus a HUD-approved performance improvement action plan from the grantee. Failure could also result in a high-risk designation from the Grant Officer and possible termination of the grant award for continued failure in the aggregate (e.g., failing two or more consecutive years). Different approaches to rewarding and sanctioning performance can be applied depending on whether the grant is formula or discretionary. Grant risk assessment information discussed later in this paper should also be considered in developing a performance improvement action plan.
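The escalation logic above could be expressed as a simple decision rule; the function name, rating labels and history format below are hypothetical.

    # Illustrative sketch only: labels and thresholds are assumptions.
    def sanction_for(aggregate_history: list[str]) -> str:
        """Map year-by-year aggregate ratings (most recent last) to the
        escalating responses described above."""
        consecutive_failures = 0
        for rating in reversed(aggregate_history):
            if rating != "failing":
                break
            consecutive_failures += 1
        if consecutive_failures == 0:
            return "no sanction"
        if consecutive_failures == 1:
            # First failing year: technical assistance plus a HUD-approved
            # performance improvement action plan from the grantee.
            return "technical assistance + improvement action plan"
        # Two or more consecutive failing years: high-risk designation and
        # possible termination by the Grant Officer.
        return "high-risk designation / possible termination"

    print(sanction_for(["meeting", "failing"]))  # technical assistance + improvement action plan
    print(sanction_for(["failing", "failing"]))  # high-risk designation / possible termination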
e. Requiring improved performance on one or more core performance measures as a condition for
receiving approval from the Secretary to waive a regulation (see 42 U.S.C. 3535(q)). The
Department, for example, could opt not to renew a waiver of a regulation for a grantee if improved results were not realized.
f. Including results on the core performance measures in post-award grant risk assessment and risk
management activities. Grant risk assessment and risk management should play a significant
role in any performance management system established by the Department. After a cursory
review of grant risk assessment activities, it’s clear each program office is responsible for
developing and applying its own criteria or indices of risk to the successful operation of a grant.
While there are differences in deliverables among the various grants, there are significant similarities in the functional activities related to the operation or management of grants, enough to support the development and application of uniform grant risk assessment criteria.
Ideally, periodic risk assessments should be completed on each grant using a common or
uniform approach to the identification, assessment and estimation of the levels of risks, their
comparison to benchmarks or standards, and determination of acceptable levels of risks. While
there are a wide range of approaches to grant risk assessment, the focus of these assessments
should be on identifying the threats, real or potential, to the successful implementation of the
Department’s grants and working with at-risk grantees to improve or effectively address these
threats. Poor performance on one or more core performance measures could be used as an
indication of threat to the successful operation of a grant. There is an old adage that says, “If you can’t measure it, you can’t manage it.” Consequently, it’s important to describe risk in quantifiable terms, including the use of weights to distinguish more significant rating elements from less significant ones.
In order to make the risk assessment information useful for grant management purposes, it is
recommended the Grant Officer’s technical representative assigned to the grant be responsible
for completing a risk assessment desk review of grant activities on a quarterly basis. Each grant
would receive a quarterly risk assessment and risk designation (e.g., no risk, minor risk,
moderate risk, etc.). A grant determined to be at-risk would be targeted for risk management
activities based on its risk designation. Risk management activities include onsite monitoring,
technical assistance and related follow-up actions such as conditions and sanctions initiated by
the Grant Officer.
The areas to be covered in grant risk assessments should be consistent with the Departmental factors outlined in the HUD Monitoring Desk Guide: Policies and Procedures for Program Oversight:
o Financial,
o Physical,
o Management,
o Satisfaction, and
o Services (including programmatic reporting and performance).
A key element of the risk assessment process is determining risk tolerance or, in other words,
identifying the acceptable level of risk associated with each of the factors outlined above. The
results from the risk assessment could also be categorized into levels like 1 - no risk, 2 - insignificant risk, 3 - minor risk, 4 - moderate risk, 5 - major risk and 6 - catastrophic risk in order to prioritize any Departmental action or response.
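To illustrate how the five factors and six levels might combine into a quarterly designation, consider the sketch below. The factor weights and sample scores are assumptions for illustration and are not drawn from the Desk Guide.

    # Illustrative sketch only: weights and scores are assumptions.
    FACTOR_WEIGHTS = {
        "financial": 0.30,
        "physical": 0.15,
        "management": 0.25,
        "satisfaction": 0.10,
        "services": 0.20,  # includes programmatic reporting and performance
    }

    RISK_LEVELS = ["no risk", "insignificant risk", "minor risk",
                   "moderate risk", "major risk", "catastrophic risk"]

    def risk_designation(factor_scores: dict[str, float]) -> str:
        """Combine per-factor scores (each rated 1-6 in a quarterly desk
        review) into a weighted score and map it to a designation."""
        weighted = sum(FACTOR_WEIGHTS[f] * s for f, s in factor_scores.items())
        level = min(6, max(1, round(weighted)))  # clamp to the 1-6 scale
        return RISK_LEVELS[level - 1]

    # Quarterly desk-review ratings for one grant (illustrative).
    scores = {"financial": 4, "physical": 3, "management": 4,
              "satisfaction": 3, "services": 4}
    print(risk_designation(scores))  # moderate risk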
Additionally, the Department should consider pursuing uniformity and efficiencies by centralizing
automated data collection and reporting activities for all programs and their grants. Without such
uniformity, it will be difficult for the Department to accurately assess its overall progress. Ideally, there
should be uniformity in the following areas:
a. The definitions of individual data elements collected and reported to HUD,
b. The specification of allowable data sources for the collection of data on each element, and
c. The approach to the validation of the accuracy of data collected and reported to HUD, including
any grantee documentation requirements.
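As a sketch of what a centralized, uniform definition might look like, the entry below captures one core data element with a shared definition, allowable sources and a validation rule; the element, field names and programs listed are hypothetical examples.

    # Illustrative sketch only: the element, fields and programs are assumptions.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class CoreDataElement:
        name: str
        definition: str                     # one definition shared by all programs
        allowable_sources: list[str]        # where grantees may collect the value
        programs: list[str]                 # every program reporting this element
        validate: Callable[[object], bool]  # uniform accuracy check

    household_income = CoreDataElement(
        name="annual_household_income",
        definition="Gross annual income of all household members, in dollars",
        allowable_sources=["pay stubs", "tax returns", "employer verification"],
        programs=["CDBG", "HOME", "Housing Choice Vouchers"],
        validate=lambda v: isinstance(v, (int, float)) and v >= 0,
    )

    print(household_income.validate(42500))  # True: passes the uniform check
    print(household_income.validate(-10))    # False: flagged for follow-up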
The centralization of data collection and reporting activities in HUD will lead to reductions in operational costs in three ways: exploiting economies of scale, minimizing duplication of efforts or procedures, and reducing regulatory costs. Because of the technical nature of data collection and reporting and the strong reliance on automation and technology, these centralized data collection and reporting activities are typically located in the organization’s information and technology section or department. Some organizations have also located centralized data collection and reporting activities in their research and evaluation section or department.
In the event the Department pursues a performance management system, this performance
management function should be centrally located in the Department to align strategic planning with
performance management. Developing a strategic performance management approach drives sustainable
performance by aligning the activities of the management team and employees with the Department’s strategy.
Ideally, a performance management approach will include the use of performance-based scorecards or
report cards to monitor and manage performance, including the use of the core performance measures.
Some organizations use established management methodologies with their performance management
systems, such as balanced scorecard or Six Sigma. The activities of the performance management
system should also be aligned with the activities of the Office of Research, Evaluation, and Monitoring
housed in the Office of Policy Development and Research (PD&R).