March 3, 2002
An Approach to the Development of Performance Measures
For Public Sector Service Providers
By John M. Greacen
Greacen Associates, LLC
Increasingly, public entities are being asked to specify performance measures for
their activities or to provide information on their performance organized according to
such measures. The pressure for creating performance measures and reporting
performance data is part of the call for ever more accountability on the part of public
entities of all kinds – from branches of government, to government agencies, to
organizations funded by government agencies.
A number of state legislatures have adopted "performance-based budgeting."1
The appropriations acts in these states include performance measures and performance
targets for all or most entities to whom funds are appropriated. The entities are required
to make periodic reports on these measures during the course of the fiscal year. In
theory, the legislature will make future funding decisions in light of the entities’ success
in meeting the prior year’s performance targets.
In all of these states, the entities are invited to suggest appropriate performance
measures for inclusion in the appropriations act. However, the measures ultimately
included in the act are the product of the legislature and the governor. They may bear
little or no resemblance to the measures suggested.
Legislatures are not the only sources of performance measures for publicly funded
bodies. Funding authorities, such as grant making organizations or agencies, may specify
them as a condition of a grant. An oversight board may promulgate them. Or an entity
may decide to initiate performance measurement proactively for its internal improvement
or external accountability.
Greacen Associates, LLC, will be developing performance measures for legal
services entities to serve as the basis for a questionnaire to be used by LSC staff to
evaluate state level planning efforts. We will be assisted by a “design team” of
representatives of local and state legal services providers and other experts from
academia and the courts.
Initial efforts at defining performance measures for public bodies have
encountered a number of problems. The measures may prove to be unclear. The cost of
gathering data bearing on a measure may outweigh the benefit of the information
gathered. The measures may not address all, or the most significant, activities of the
public body.

[Footnote 1: Florida, Louisiana, Texas, and New Mexico. Other states, such as North Carolina, Oregon, and Virginia, have implemented similar budgeting approaches. Arizona has adopted a form of program budgeting that approaches performance-based budgeting.]

It is important as a matter of sound public administration that performance
measures address all of the key activities of an agency and all of the agency's purposes or
objectives for each activity. When key activities are left out of the measurement process,
they may well get short shrift in day-to-day operations. To the extent that an agency's
program is measured on some, but not all, of its purposes, its program efforts will become
unbalanced by excessive focus on the measured purposes. The unmeasured purpose will
be “suboptimized.” An example would be measurement of timeliness and cost to resolve
cases without measuring the quality of service provided. Over time, if the performance
measurement process is taken seriously, it is certain to subordinate the quality of service
provided in the pursuit of faster, cheaper service. Paying attention to the speed with
which cases are resolved, and to the amount they cost, will lead to faster, cheaper
dispositions. But there is a basic tradeoff among speed, cost, and quality. Maximizing
speed and minimizing cost will ultimately lead to a reduction in quality. Optimizing
services on the measured dimensions will undoubtedly, over time, lead to a sacrifice of
unmeasured objectives.
Quality measures are usually much more difficult to obtain than quantity
measures. Quality is hard to define. Data on quality is more difficult and expensive to
obtain. There is a general tendency, therefore, to ignore – or at least to postpone paying
attention to – measures of quality and to focus on more easily measured quantitative
program attributes.
This discussion will focus on the first problem encountered in performance
measure development – lack of clarity in the measure itself. Measures are often stated in
what seems to be a straightforward manner but one which contains major ambiguity.
That ambiguity leads to confusion in the gathering and reporting of data pertaining to the
measure. And it produces inconsistent results when multiple agencies, or even multiple
units within the same agency, report performance data. Take, for instance, “number of
cases handled.” This is a basic attempt to measure workload. But the simple phrase set
forth leaves many key issues unanswered.
What is a “case?” Is a case limited to a matter in which a client is represented in
court – a “court case?” When a client seeks modification of a child support order, is that
a new case, or is it merely the continuation of a pre-existing case? Is a case a
consultation with a client, including the opening of a file and screening for eligibility and
conflicts? If the potential client fails to qualify for assistance, does the time spent on the
qualification process justify counting it as a “case?” Does a case include inquiries to a
telephone hotline? If a person seeking a divorce obtains forms, instructions and an
outline of the divorce process from a website, is that a “case?” If an agency provides
unbundled legal services, is each instance of advice provided a separate “case?”
When is a case “handled?” When is the case counted? When it is opened or
when it is concluded? This issue is, of course, tied to the definition of a “case.” If a
“case” is a court case pursued on a client’s behalf, is the case concluded when the client’s
court case reaches a disposition in the trial court? When an appeal is concluded? How
and when do you count a reopening of a previously concluded case? If the definition of
“case” includes counseling of clients who will be handling their own cases in court, when
is the matter concluded? At the end of the first major consultation? At the conclusion of
the client’s case? If the latter, how would a legal services agency know when the client’s
case is resolved? Counting hits on a website is much easier, but not necessarily a good
measure. Should a website capture downloads of forms separately from hits?
Even “number” can be problematic. Are all “cases” lumped together into a single
count, or are they broken down into different categories? It is pretty obvious that it
would make no sense to lump together court cases and website hits in a single count.
But if “cases” are to be broken down into categories, what are the categories? Are they
based on substantive case types? If so, what case types will be used? Where are the case
types defined? How frequently are the “numbers” computed? Weekly, monthly,
annually? How are the “numbers” aggregated? Is it important to know how many cases
are “handled” by a specific legal services program or by all programs statewide?
In developing performance measures with the state courts in New Mexico, I found
it useful to address these issues one at a time. We used a matrix to display the issues.
The matrix contained these factors:
Name of measure – What is the most descriptive name for this measure? It
should be as precise as possible, but no more than three or four words.
Purpose of measure – Is the measure an outcome, an output, an input, a quality,
cost, or efficiency measure? How will the measure be used?
Unit of analysis – Is the measure to be reported for a specific agency or for all
agencies statewide? It may be that a particular measure should be reported by individual
agencies and aggregated for all agencies statewide. This relates back to the purpose for
which data pertaining to the measure is being collected. If one of the purposes is to
compare agencies against each other, then the data needs to be reported for each. If the
purpose is to identify statewide workload increases for purposes of calculating budget
requirements, a compiled number will be more appropriate.
Case type definitions – For what types of cases is data to be collected? Is it to be
reported separately by case type? If so, what are the case type categories? Where are
those categories defined?
Applicable business rules – This is the most important and difficult definitional
task. What are the business rules for determining when a matter should be counted? If
data is to be gathered from a sample of cases rather than from all cases, how is the sample
drawn? It is in this analysis that tradeoffs can be made between the completeness of the
data to be gathered and the cost of data collection. Creative solutions can be found. Here
is an example. The New Mexico general jurisdiction trial courts wanted to know – and to
report to the legislature – how long it takes from the time a document is filed in a court to
the appearance of that document on the electronic docket. A traditional approach would
have required that the two times – date and time of filing and date and time of docketing
– be entered into an automated system for tracking and analysis. While there is a field in
the database for the date of filing, there is no such field for the time of day of filing. And
if a field were added, the entry of a new data element for every court filing would mean a
significant increase in work for court staff. The courts decided to approach the problem
from the other direction. They would check at the end of each business day to find out
the date of the “oldest” undocketed document. The number of elapsed business days
would be calculated. To the extent that number exceeded two (which was determined to
be a good standard) it would be reported as a measure of docketing backlog. (For
instance, if today is Friday the 28th of February and staff have not finished entering all
documents filed on Monday, February 24th [but have completed the processing of all
documents filed the week before or earlier] the docketing backlog on that date is two
days. Four days have elapsed, but the backlog is only the excess days beyond the two
day docketing standard.) It does not require much time or effort to record this measure
each day and to report it at the end of each month.
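The end-of-day backlog check described above can be sketched in a few lines. This is a hypothetical illustration: the function names are mine, and only the two-day standard and the business-day counting come from the text.

```python
from datetime import date, timedelta

# Documents should appear on the docket within two business days of filing
# (the standard the New Mexico courts adopted, per the discussion above).
DOCKETING_STANDARD_DAYS = 2

def business_days_elapsed(filed: date, today: date) -> int:
    """Count business days (Mon-Fri) after `filed`, up to and including `today`."""
    days = 0
    d = filed
    while d < today:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 ... Friday=4
            days += 1
    return days

def docketing_backlog(oldest_undocketed_filing: date, today: date) -> int:
    """Backlog = business days elapsed beyond the two-day docketing standard."""
    elapsed = business_days_elapsed(oldest_undocketed_filing, today)
    return max(0, elapsed - DOCKETING_STANDARD_DAYS)

# The article's worked example: a document filed on a Monday is still
# undocketed at the end of Friday. (Dates chosen so the 24th is a Monday.)
print(docketing_backlog(date(2003, 2, 24), date(2003, 2, 28)))  # -> 2
```

Four business days have elapsed, so the backlog is the two days beyond the standard, matching the example in the text.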
Unit of measure – Exactly how will the data be reported? Is the measure a
number or a ratio or a percentage? Exactly how is it to be calculated? In the instance of
the docketing backlog measure, the committee decided to require a monthly count of the
number of days with a zero backlog, a one day backlog, a two day backlog, a three day
backlog, a four day backlog, a five day backlog, or a backlog of more than five days. It
could have been a monthly average, but the committee concluded that the more complete
data would provide a more accurate picture of a court staff’s performance.
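The monthly tally the committee chose (a count of days at each backlog level rather than a monthly average) might look like the following sketch; the function name and report format are my assumptions.

```python
from collections import Counter

def monthly_backlog_report(daily_backlogs: list[int]) -> dict[str, int]:
    """Tally one month of end-of-day backlog values into the report
    categories: 0, 1, 2, 3, 4, 5, and 'more than 5' days."""
    # Collapse any backlog above five into a single bucket (6).
    counts = Counter(min(b, 6) for b in daily_backlogs)
    report = {str(n): counts.get(n, 0) for n in range(6)}
    report["more than 5"] = counts.get(6, 0)
    return report

# A hypothetical month: two days fully caught up, one day far behind.
print(monthly_backlog_report([0, 0, 1, 2, 7, 3]))
```

Reporting the full distribution, as here, preserves the information a single monthly average would smooth away.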
Frequency of measure – How frequently is data to be collected? How frequently
is it to be reported? Is this a daily, weekly, monthly, annual, or semi-annual report?
Data source(s) – Where does the data come from? Is it available routinely from
the case management system? Must it be gathered separately by hand? Is there a
standard format for data collection? Must the agency conduct a periodic survey of
some sort to obtain the data?
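As a sketch of how one row of the completed matrix might be held in software, here is a hypothetical record type. The field names mirror the factors above; the example values paraphrase the docketing-backlog discussion and are not quoted from the actual New Mexico matrix.

```python
from dataclasses import dataclass

@dataclass
class MeasureDefinition:
    """One row of the definitional matrix described above."""
    name: str               # Name of measure (three or four words at most)
    purpose: str            # Outcome, output, input, quality, cost, or efficiency
    unit_of_analysis: str   # Specific agency, statewide, or both
    case_type_definitions: str
    business_rules: str     # When is a matter counted? How is any sample drawn?
    unit_of_measure: str    # Number, ratio, or percentage, and how calculated
    frequency: str          # How often collected and reported
    data_sources: str

# Illustrative values paraphrased from the docketing-backlog example.
docketing_backlog_measure = MeasureDefinition(
    name="Docketing backlog",
    purpose="Efficiency measure of timeliness of docketing",
    unit_of_analysis="Each general jurisdiction trial court",
    case_type_definitions="All filed documents, regardless of case type",
    business_rules=("At the end of each business day, find the oldest "
                    "undocketed document; backlog is elapsed business days "
                    "beyond the two-day docketing standard"),
    unit_of_measure="Monthly count of days at each backlog level (0-5, more than 5)",
    frequency="Recorded daily, reported monthly",
    data_sources="Manual end-of-day check by court staff",
)
```

Keeping all seven factors in one record makes it harder for a measure to reach the reporting stage with a factor, such as its business rules, left undefined.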
I attach a page showing the application of these principles to a single, relatively
straightforward performance measure for the general jurisdiction trial courts of New
Mexico.
Greacen Associates LLC will be working with the Legal Services Corporation
and a group of representatives of legal services providers to apply this general approach
to the challenge of developing performance measures that can be used consistently by
LSC grantees and on which LSC staff can capture data on a standard questionnaire.
Attachment: Example Measure – Dispositions (New Mexico General Jurisdiction Trial Courts)

Name of measure – Dispositions

Purpose of measure – Output measure to show the number of cases resolved.

Unit of analysis – By district, and statewide for all districts combined.

Case type definitions – Reported separately for general civil, criminal, domestic relations, domestic violence, juvenile delinquency, abuse and neglect, requests for judicial action, and youthful offender cases.

Applicable business rules – New filings adjudicated and reopened cases closed during the reporting period. New filings are adjudicated as follows: criminal, juvenile delinquency, and youthful offender – pronouncement of sentence; civil – entry of final judgment on the original complaint; domestic relations, abuse and neglect, and requests for judicial action – entry of final judgment on the original petition; domestic violence – entry of a permanent protection order or dismissal of the case. Reopened cases are closed when the reason for their being reopened is accomplished.

Unit of measure – Number of new filings adjudicated plus number of reopened cases closed.

Frequency of measure – Annually.

Data source(s) – Standard report from the case management information system.