OPIM 658: Service Operations - NYU Stern School of Business

Data Envelopment Analysis (DEA)
Which Unit is most productive?
DMU   labor hrs.   #cust.
 1       100        150
 2        75        140
 3       120        160
 4       100        140
 5        40         50
DMU = decision making unit
DEA (Charnes, Cooper & Rhodes '78)
A multiple-input, multiple-output productivity measurement tool
DMU   labor hrs.   #cust.   #cust./hr.
 1       100        150        1.50
 2        75        140        1.87
 3       120        160        1.33
 4       100        140        1.40
 5        40         50        1.25
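A quick way to reproduce these single-input, single-output ratios (a minimal sketch; the data are just the table values above):

```python
# Single-input, single-output productivity: customers served per labor hour.
labor_hrs = [100, 75, 120, 100, 40]
customers = [150, 140, 160, 140, 50]
for dmu, (hrs, cust) in enumerate(zip(labor_hrs, customers), start=1):
    print(f"DMU {dmu}: {cust / hrs:.2f} cust./hr.")
# DMU 2 comes out highest at 1.87 cust./hr.
```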
Basic intuition
(DMU = decision making unit)
[Scatter plot: #cust. (0-200) vs. labor hrs. (0-100), one point per DMU. DMUs 1, 3, 4, and 5 are dominated by DMU 2.]
Extending to multiple outputs ...
Ex: Consider 8 M.D.’s working at Shouldice Hospital for the same
160 hrs. in a month. Each performs exams and surgeries.
Which ones are most “productive”?
Doctor   #Exams   #Surgeries
  1        48         68
  2        12         80
  3        35         76
  4        31         71
  5        20         70
  6        20        105
  7        36         53
  8        15         65
Note: There is some “efficient” trade-off between the number of
surgeries and exams that any one M.D. can do in a month, but
what is it?
Efficient M.D.'s: These two M.D.'s (#1 and #6) define the most efficient trade-off between the two outputs.
[Scatter plot of outputs: #Surgeries (0-120) vs. #Exams (0-60), one point per M.D. The remaining points are dominated by #1 and #6.]
"Pareto-Koopmans efficiency" along the frontier: a unit cannot increase an output (or decrease an input) without a compensating decrease in some other output (or increase in some other input).
How bad are the inefficient M.D.'s, and where are the gaps?
[Plot: #Surgeries (0-120) vs. #Exams (0-60), with M.D. #5 inside the frontier defined by #1 and #6. Efficiency score of #5 = 73.4%.]
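As a worked check on that 73.4% (my own back-calculation from the data above, assuming the radial measure the figure suggests): the ray from the origin through #5 = (20, 70) meets the frontier segment between #1 = (48, 68) and #6 = (20, 105) where

$$y = 105 - \tfrac{105 - 68}{48 - 20}(x - 20) \quad\text{and}\quad y = \tfrac{70}{20}\,x \;\;\Rightarrow\;\; (x^*, y^*) \approx (27.3,\ 95.4),$$

so

$$E_5 \approx \frac{20}{27.3} = \frac{70}{95.4} \approx 73.4\%.$$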
"Nearest" efficient points define a reference set, and a linear combination of the reference set's inputs and outputs defines a hypothetical composite unit (HCU).
Reference set for #5 is {1, 6}.
[Plot: #Surgeries (0-120) vs. #Exams (0-60) showing #5, the frontier units #1 and #6, and the HCU formed from them.]
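For intuition, the frontier point found above can itself be written as a weighted combination of the two reference units (the weights below are my back-calculation from the geometry, not given in the slides):

$$(27.3,\ 95.4) \;\approx\; 0.26\,(48,\ 68) + 0.74\,(20,\ 105),$$

i.e. a hypothetical composite of roughly one quarter "M.D. #1" and three quarters "M.D. #6", using the same 160 labor hours, would produce at least #5's outputs.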
DEA summary so far:
DEA uses an efficient frontier to define multiple-input/output productivity.
– The frontier defines the (observed) efficient trade-off among inputs and outputs within a set of DMUs.
– Relative distance to the frontier defines efficiency.
– The "nearest point" on the frontier defines an efficient comparison unit, the hypothetical composite unit (HCU).
– Differences in inputs and outputs between a DMU and its HCU define productivity "gaps" (improvement potential).
How do we do this analysis systematically?
A real-world example:
NY Area Sporting Goods Stores
Productivity
Conceptually ...
Productivity = Outputs / Inputs
Reality is more complex ...
Inputs: equipment, facility space, server labor, mgmt. labor
→ [Technology + Decision Making] →
Outputs: #type A cust., #type B cust., quality index, $ oper. profit
Operating Units Differ
– Mix of customers served
– Availability and cost of inputs
– Facility configuration
– Processes/practices used
– Examples
  – bank branches, retail stores, clinics, schools, etc.
Questions:
– How do we compare productivity of a diverse set of operating units serving a diverse set of markets?
– What are the "best practice" and under-performing units?
– What are the trade-offs among inputs and outputs?
– Where are the improvement opportunities and how big are they?
Some approaches
– Operating ratios
  – e.g. labor-hrs./transaction, $sales/sq.-ft.
  – Good for highly standardized operations
  – Problem: does not reflect the varying mix of inputs and outputs found in more diverse operations
– Financial approach: Convert everything to $$$! Compare $Outputs to $Inputs.
  – Problems?
  – Some inputs/outputs cannot be valued in $ (non-profit)
  – Profitability is not the same as operating efficiency (e.g. variances in margins and local costs matter as well)
Profitability vs. efficiency
– Profitability is a function of 3 elements:
  – Input prices (costs)
  – Output prices
  – Technical efficiency (how much input is required to generate the firm's output)
– Improving operations requires understanding technical efficiency, not just overall profitability.
LP Formulation:

Data
K = # operating units (DMUs), k = 1, ..., K
N = # inputs, i = 1, ..., N
M = # outputs, j = 1, ..., M
O_jk = observed level of output j from DMU k
I_ik = observed level of input i from DMU k

Model variables
v_i = weight on input i
u_j = weight on output j
E_k = efficiency of DMU k (0 - 100%):

$$E_k = \frac{\sum_{j=1}^{M} u_j O_{jk}}{\sum_{i=1}^{N} v_i I_{ik}}$$

To evaluate a given unit e, choose nonnegative weights to solve

$$\max E_e \quad \text{s.t.} \quad E_k \le 100, \; k = 1, \dots, K,$$

which can be formulated as the linear program

$$
\begin{aligned}
\max \;& \sum_{j=1}^{M} u_j O_{je} \\
\text{s.t.}\;& \sum_{i=1}^{N} v_i I_{ie} = 1 && \text{(normalize the weighted input of } e \text{ to one)} \\
& \sum_{j=1}^{M} u_j O_{jk} \le 100 \sum_{i=1}^{N} v_i I_{ik}, && k = 1, \dots, K \\
& u_j \ge 0, \; j = 1, \dots, M; \qquad v_i \ge 0, \; i = 1, \dots, N
\end{aligned}
$$
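As a concrete illustration, here is a minimal sketch of this multiplier-form LP using scipy.optimize.linprog, applied to the eight Shouldice M.D.'s (one input of 160 labor hours each, two outputs). The array names and the solver choice are mine, not part of the course material:

```python
# DEA multiplier form: for each unit e, maximize its weighted output subject to the
# input normalization and the "no unit scores above 100%" constraints.
import numpy as np
from scipy.optimize import linprog

I = np.array([[160.0]] * 8)                      # I[k][i]: input i of DMU k (single input here)
O = np.array([[48, 68], [12, 80], [35, 76], [31, 71],
              [20, 70], [20, 105], [36, 53], [15, 65]], dtype=float)   # O[k][j]: output j of DMU k
K, N = I.shape
M = O.shape[1]

def dea_score(e):
    """Efficiency (0-100%) of DMU e from the LP above."""
    c = np.concatenate([-O[e], np.zeros(N)])     # variables [u_1..u_M, v_1..v_N]; minimize -(weighted output of e)
    A_ub = np.hstack([O, -100.0 * I])            # sum_j u_j O_jk - 100 * sum_i v_i I_ik <= 0, one row per k
    b_ub = np.zeros(K)
    A_eq = np.concatenate([np.zeros(M), I[e]]).reshape(1, -1)   # sum_i v_i I_ie = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], method="highs")
    return -res.fun

for e in range(K):
    print(f"M.D. #{e + 1}: efficiency = {dea_score(e):.1f}%")
# M.D.'s #1 and #6 come out at 100%; #5 at roughly 73.4%, matching the graphical analysis.
```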
Output analysis

$\lambda_k$ = dual variable associated with DMU k
$\lambda_k > 0 \Rightarrow$ DMU k is in the reference set of DMU e

These dual variables can be used to construct an efficient hypothetical composite unit (HCU) with

$$\hat{O}_j = \sum_{k=1}^{K} \lambda_k O_{jk}, \; j = 1, \dots, M \quad \text{(output } j \text{ of the HCU)}$$

$$\hat{I}_i = \sum_{k=1}^{K} \lambda_k I_{ik}, \; i = 1, \dots, N \quad \text{(input } i \text{ of the HCU)}$$

satisfying

$$\hat{O}_j \ge O_{je}, \; j = 1, \dots, M \qquad \text{and} \qquad \hat{I}_i \le I_{ie}, \; i = 1, \dots, N.$$

The HCU can be used to measure excess use of inputs and the potential increase in outputs:

$$\Delta\text{Output}_j = \hat{O}_j - O_{je} \ge 0, \; j = 1, \dots, M$$

$$\Delta\text{Input}_i = I_{ie} - \hat{I}_i \ge 0, \; i = 1, \dots, N$$
Refer to spreadsheet examples.
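For readers without the spreadsheets, the reference weights $\lambda_k$, the HCU, and the gaps can also be obtained by solving the dual ("envelopment") LP directly rather than reading dual values off the solver. The sketch below uses the standard input-oriented envelopment form on a 0-1 efficiency scale; the formulation details and names are common DEA conventions, not something specific to these slides:

```python
# Envelopment (dual) LP for one unit e:
#   min theta   s.t.   sum_k lam_k * O[k] >= O[e],   sum_k lam_k * I[k] <= theta * I[e],   lam >= 0
# lam_k > 0 marks the reference set; lam @ O and lam @ I are the HCU's outputs and inputs.
import numpy as np
from scipy.optimize import linprog

I = np.array([[160.0]] * 8)                                    # inputs I[k][i] (160 hrs for every M.D.)
O = np.array([[48, 68], [12, 80], [35, 76], [31, 71],
              [20, 70], [20, 105], [36, 53], [15, 65]], dtype=float)   # outputs O[k][j]

def dea_hcu(e, I, O):
    K, N = I.shape
    M = O.shape[1]
    c = np.concatenate([np.zeros(K), [1.0]])                   # variables [lam_1..lam_K, theta]; minimize theta
    A_out = np.hstack([-O.T, np.zeros((M, 1))])                # -sum_k lam_k O_jk <= -O_je  (outputs at least O_e)
    A_in = np.hstack([I.T, -I[e].reshape(-1, 1)])              # sum_k lam_k I_ik - theta I_ie <= 0
    res = linprog(c, A_ub=np.vstack([A_out, A_in]),
                  b_ub=np.concatenate([-O[e], np.zeros(N)]),
                  bounds=[(0, None)] * (K + 1), method="highs")
    lam, theta = res.x[:K], res.x[-1]
    return theta, lam, lam @ O - O[e], I[e] - lam @ I          # efficiency, weights, output gaps, input gaps

theta, lam, out_gap, in_gap = dea_hcu(4, I, O)                 # M.D. #5 is index 4
print(round(theta, 3), lam.round(3), out_gap.round(1), in_gap.round(1))
# theta ~ 0.734; lam is nonzero only for #1 and #6; the input gap of ~43 hrs is #5's excess labor.
```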
Using the results: Eff.-Profit Matrix

                Low Eff.                                 High Eff.
High Profit     Under-performing, potential leaders      Best practice comparison group
Low Profit      Under-performing, possibly profitable    Candidates for closure
Designing DEA Studies
– #Inputs/#Outputs: keep K > 2(N + M), i.e. the number of DMUs should exceed twice the number of inputs plus outputs
– "Ambivalence" about inputs and outputs: all should be relatively important!
– "Approximate similarity" among DMUs
  – objectives
  – technology
– Provides relative efficiency only
  – choice of units to include matters
  – inclusion of a "global leader" unit may be desirable
– Experimenting with different I/O combinations may be necessary
DEA Summary
– Addresses fundamental productivity measurement problems due to ...
  – complexity of service outputs
  – variability in service outputs
– Takes advantage of the service operating environment
  – large numbers of similar facilities
  – diversity of practices/management/environment
– Provides useful information
  – objective measures of productivity
  – reference set of comparable units
  – excess use of inputs measure
  – returns to scale measure
DEA Summary (cont.)
– Role of DEA
  – "data mining" to generate hypotheses
  – evaluation/measurement
  – benchmarking to identify "best practice" units
– Caveats
  – "black box": no information on root causes of inefficiency
  – Be aware of assumptions (e.g. linearity)
  – Can be sensitive to selection of inputs/outputs