Metrics & Dashboards

advertisement
Metrics & Dashboards
Survey Results
With help from Marty Klubeck at Notre Dame and Brenda Osuna at
USC
Who are we?
Brown
Carnegie Mellon
Columbia
Cornell
CU-Boulder
Duke University
Georgetown University
Harvard*
Michigan State University
New York University
Penn State
Princeton
Stanford University UC San
Diego
UCSF
University of Chicago*
University of Iowa
University of Michigan
University of Minnesota
University of Notre Dame
University of Washington
University of Wisconsin
Virginia Tech
2
How often do we collect the following types of
metrics around service health (effectiveness)?
80%
59%
60%
48%
45%
40%
34%
31%
26%
25%
20%
10%10%
14%
7%
6%
30%
33%
11%
9%
0%
Demographics
Usage/demand
Performance
Customer
Satisfaction
Weekly/Daily/Continuously
Monthly or by Semester
Annually
Less frequently than annual or not at all
3
For what services do we collect metrics?
100%
80%
60%
40%
33%
29%
29%
20%
10%
0%
0%
Most of our
services
A few
services
Good news is that no one said zero!
Only key
services
All of our
services
No metrics
collected
4
And, our metrics to measure business efficiency
and delivery of goals?
100%
80%
60%
40%
20%
0%
Time (speed of
delivery)
Cost
Quality
(defect/error
rates)
Other
OTHER:
1) It widely varies depending on the service
2) We do not collect any business efficiency metrics
3) Project delivery
# of calls abandoned; # change requests; # e-mails; # abandoned calls,
resolution time, cycle time, abandonment, etc.; capacity, mean time to repair
5
Our use of targets
100%
80%
68%
60%
40%
63%
53%
32%
21%
20%
16%
0%
Expectations Targets based Expectations Targets based Other (please We don’t use
based on our
on
specify)
based on our on our peers’
any
Service-Level- historical/trend customers’
performance
demarcation of
Agreements
data
requests/needs
“health”
OTHER:
1) Working on the use of ITIL Information Technology Infrastructure Library
2) Note: we don't do this consistently though
3) We do not use any service target range metrics
4) Industry Practices / Standards
6
Metrics are collected and analyzed primarily as…
100%
80%
67%
60%
43%
40%
20%
10%
5%
0%
Grass roots
Organizational/departmental
effort
Institutional
effort
effort
Other (please specify)
OTHER:
System performance metrics in transition to organizational effort.
7
Who is the audience for our metrics?
95%
100%
95%
80%
67%
60%
43%
40%
38%
29%
20%
0%
Internal IT staff
OTHER:
Post publicly
IT
management
University
executive
leadership
Your user
community
Peers in other
institutions (for
benchmarking)
Other
leadership
8
How do we share them?
100%
80%
67%
60%
53%
40%
40%
33%
27%
20%
0%
Published for Other (please
the organization
specify)
(Intranet)
Published
publicly (web
with open
access)
Directly to
Published for
customers
current and
(electronic or
potential
hardcopy
customers (web
reports)
with controlled
access)
9
20%
OTHER:
1) right-sizing the organization; metrics enable us to tune documentation
and training and better prepare support providers
Other
60%
Early-warning-system,
enabling us to prevent
problems
Insights to the causes of
problems or innovations
71%
Improvements
71%
Communicate better
with our leadership
80%
Communicate better
with our customers
Made process/project
adjustments
Benefits so far
100%
81%
71%
62%
43%
40%
10%
0%
10
How do we rate the maturity of our
organization’s use of metrics?
100%
80%
60%
52%
40%
33%
20%
10%
0%
0%
Fully
Mature
5%
0%
Maturing Managed Developing
Novice
Totally
novice
11
Our use of external data sources
100%
84%
80%
72%
68%
77%
56%
60%
Compare
data to
40%
28%
20%
Provide
data to
23%
16%
17%
8% 8%
11%
Don't use
0%
Educause Core Data
Use for
defining
metrics
IPEDS
OTHER:
1) Gartner for Benchmarking
2) Used to participate in the campus computing survey
3) Gartner
COFHE
12
Any BI action?
100%
80%
60%
45%
40%
30%
15%
20%
10%
5%
0%
Have
considered
Have not
considered
In the process
Have a
of developing functioning BI
environment
OTHER:
Currently considering an environment, platform selection pending
Other
13
Our biggest challenges
67%
43%
33%
33%
Other (please
specify)
Lack of consistent
comparison/benchm
ark data
Lack of expertise in
data analysis
Lack of expertise in
the development of
metrics
Lack of automated
collection tools
14%
5%
Lack of support from
leadership
81%
Lack of dedicated
resources
100%
80%
60%
40%
20%
0%
OTHER:
1) Continuing engagement from mid-level leadership to respond to metrics findings
2) Organizations ability to identify specific KPI's to measure specific objectives
3) Changing leadership/definition of what is necessary and relevant; metrics must
mean something to be used effectively; lack of a plan; staff resent
14
What would we find useful?
0%
20%
40%
60%
80%
90%
Standard definitions
68%
Guidelines on developing/reporting
63%
A template for basic metrics
58%
Review by peers of possible tools
37%
On-call expertise
Other (please specify)
100%
11%
OTHER:
1) None of the above
2) Unified approach to metrics from an organizational perspective; lack
of a plan; dedicated resources would be better. No one is going to use
another template and different services would be measured by different
metrics unless the metrics were provided at a very very very high level
15
Tools – what have we used, what do we think?
20
95%
100%
18
16
80%
14
12
11
60%
44%
10
8
40%
2
6
2
5
4
2
0
38%
1
6
4
41%
1
Excel
2
Cognos
18%
13%
3
1
1
1
1
2
SPSS
SAS
Other
BSC
Inadequate
Fair
Good
20%
13%
2
7%
7%
1
1
Power Pivot SigmaXL
Excellent
Tableau
0%
0%
iDashboard
Vision
0%
Outstanding
“Believe that the process and commitment to consistent data collection is
far more important than the tool”
Used
16
Lessons learned
•
Metrics have helped to highlight areas of significant service difficulties
(e.g., with BlackBerry services) and to note some low-level points of
problems (e.g., around some of our network measures.) At the same time,
our current metrics processes are highly manual in nature and require
significant time investment to collect and report. We have seen challenges
in getting service management engaged on the data writ large, which can
lead to problems when errors due to service changes are missed thus
impacting trending. Goals for us in coming year include focusing on trend
analysis/reporting through executive summarization (done), gaining more
mileage out of system-generated metrics on availability and low-level
alarms, improving automated collection of non-availability data, and looking
to focus data aggregation of human-generated, automated and other data
into a dashboard to reduce effort level required to visualize service data.
•
Benchmarking is very challenging because of the variance environments
at each institution. Cost components may be different, service features and
SLAs may not match, accounting practices can be problematic, tracking
labor is different, etc.
17
Lessons Learned
•
We had a nascent metrics program under development with dedicated
resources, focusing on helping service managers to develop metrics with
their local data. With the departure of that resource in October, we are
choosing to re-prioritize the work away from dedicated attention to metrics
at this time. Instead, we watch with great interest the aggressive agenda
that EDUCAUSE has developed with the reinvigoration of ECAR under
Susan Grajeck. We will continue to monitor the progress of the various
EDUCAUSE initiatives around research, data, and analytics and pursue
collaboration opportunities based on our own priorities and resources.
•
We did quite a push to get a metrics dashboard going a couple of years ago
which was quite successful. However, the backend work of building a data
metrics repository was never completed. This has limited us from
getting deeper analytics questions answered and still requires us to perform
manual queries often. On the other hand, when we recently needed to pull
together a metrics dashboard for a large client (a hospital) we were able to
reuse much of the work we had done previously.
18
Lessons Learned
•
We collect a lot of operational performance data using traditional tools
(Cricket, Nagios, home-grown scripts) but don't have a reasonable
dashboard or approach to making the data useful. We have recently
started measuring performance of our service desk and groups behind them
to track delivery against SLAs in our service catalog. We've started a
Service-Now implementation and expect to use metrics delivered by that
tool.
•
Challenge getting consistent operational definitions both for internal use
and benchmarking; Data collection is still a time consuming, manual
process that we are working to automate through the collection of metrics
from disparate systems into a BI environment; We are exploring the use of
Microsoft BI tools (e.g. PowerPivot, SQL Server 2012, PowerView)
19
THANK YOU
20
Download