Simulation Study of an Inbound Call Center

DISCRETE EVENT SIMULATION
MSCI 632 SPRING ‘05
COURSE INSTRUCTOR
DR. J. BOOKBINDER
PROJECT REPORT
Simulation Study of an Inbound Call Center
Submitted by
SACHIN JAYASWAL
&
GAURAV CHHABRA
Table of Contents
Abstract ........................................................................................................................................... 3
1 Introduction ............................................................................................................................. 4
2 Problem Definition .................................................................................................................. 5
3 Call Centre Data ...................................................................................................................... 6
3.1 Data Collection ............................................................................................................... 6
3.2 Inter-Arrival Times Distribution..................................................................................... 7
3.3 Service Times Distribution ........................................................................................... 11
3.4 Balking and Reneging................................................................................................... 15
4 Simulation Model.................................................................................................................. 16
5 Verification, Validation and Testing..................................................................................... 19
6 Experimentation and Results ................................................................................................ 21
7 Conclusions and Future Research Directions ....................................................................... 29
References..................................................................................................................................... 31
List of Tables
Table 3-1 Goodness of Fit Test output for Lognormal Inter-Arrival Times ................................. 11
Table 3-2 Goodness of Fit Test output for Lognormal Service Times.......................................... 14
Table 6-1 Service Levels: Regular Customers (5 minutes) Priority Customers (1 minute) ......... 23
Table 6-2 Service Levels: Regular Customers (3 minutes) Priority Customers (1 minute) ......... 27
Table 6-3 Service Levels: Regular Customers (2 minutes) Priority Customers (1 minute) ......... 27
Table 6-4 Service Levels: Regular Customers (2 minutes) Priority Customers (30 seconds)...... 29
List of Figures
Figure 3-1 Pearson5 Distribution for Inter-Arrival Times.............................................................. 9
Figure 3-2 Log Logistic Distribution for Inter-Arrival Times........................................................ 9
Figure 3-3 Log Normal Distribution for Inter-Arrival Times....................................................... 10
Figure 3-4 Difference Graph for Log Normal Distribution of Inter-Arrival Times ..................... 10
Figure 3-5 Log Logistic Distribution for Service Times .............................................................. 12
Figure 3-6 Pearson5 Distribution for Service Times .................................................................... 13
Figure 3-7 Log Normal Distribution for Service Times ............................................................... 13
Figure 3-8 Difference Graph for Log Normal Distribution of Service Times.............................. 14
Figure 3-9 Renege Probability Distribution.................................................................................. 16
Figure 4-1 Simulation Model in Arena Environment ................................................................... 18
Figure 6-1 Service Level for Priority Customers versus Number of Agents................................ 24
Figure 6-2 Abandonment Rate for Priority Customers versus Number of Agents....................... 25
Figure 6-3 Abandonment Rate of Regular Customers versus Number of Agents........................ 25
Figure 6-4 Average Waiting Time for Priority Customers versus Number of Agents................. 26
Figure 6-5 Average Waiting Time for Regular Customers versus Number of Agents ................ 26
Figure 6-6 Sensitivity Analysis on Service Level thresholds for Regular Customers.................. 28
Copyright: University of Waterloo
Department of Management Sciences
Page 2 of 31
Abstract
This paper examines the design and development of a simulation model of a call center
environment. Two classes of call center customers are considered, priority customers and regular
customers. The Call Center operation under study guarantees a higher service level to its priority
customers. An animated simulation model is developed in ARENA to capture the impact on
various system performance measures such as abandonment rate, average waiting time, agent
utilization and service level, based on the call mix of the priority and regular customers.
Distributions of the inter-arrival and service times are drawn from health care call center data
and incorporated into the simulation model. Balking and reneging effects are modeled for
customers who find all agents busy. We find the optimal number of agents required
to serve the call center operations in order to meet the business objectives of minimal target
service levels and abandonment rates set by management. Verification, validation and testing
techniques, categorized as informal, static and dynamic, are used throughout the design
and development of the call center simulation model. A terminating simulation study is
conducted and confidence intervals are constructed on the measures of performance. Sensitivity
analysis is done by varying the target service levels and call-mix of the priority and regular
customers.
1 Introduction
The past decade has witnessed rapid growth in the call center industry as businesses have
increasingly embraced the telephone as a means of providing their customers with services
such as telemarketing and technical support. Call Centers are locations “where calls are
placed, or received, in high volume for the purpose of sales, marketing, customer service,
telemarketing, technical support or other specialized business activity” (Dawson 1996). Call
Center operations are now part of many manufacturing and service industries. A finance
company, for example, may have its call center operations that provide its customers with online
information on its various kinds of financial products available; a software company may have
its call center operations to provide technical support to its customers while a health clinic may
provide online health services. All these are examples of Inbound Call Centers where calls are
received in high volumes from customers seeking services. All Inbound call centers face the
classical planning problems of forecasting and scheduling under uncertainty.
Customer calls at these centers are answered by staff called agents. Most call centers
target a specific level of service to their customers. Service level may be defined as the percentage
of callers who wait on hold for less than a particular period of time. For example, a particular call
center may aim to have 90% of callers wait for less than 30 seconds. Other related measures of
service level are the average customer wait time on hold and the abandonment rate, which is
defined as the percentage of callers who hang up while on hold before talking to an agent. It is
quite intuitive that customer abandonment rates and customer waiting times are highly
correlated. High abandonment rates create a negative impression of the company and a likely
loss of business, given the economic cost of customer dissatisfaction. To guarantee a specific
level of customer service, a call center
needs to carefully plan its staffing (agent) level to match demand. Too low a staffing level
makes the target service level unattainable, while a staffing level higher than demand requires
increases costs and squeezes the profit margin.
2 Problem Definition
The problem that we propose to study here is that of a typical Call Center. It has two classes
of customers - Priority Customers and Regular Customers. The Call Center guarantees a service
level to its priority customers, which is better than that for regular customers. Priority
customers may pay a service charge in return for prompt service, or they could
be some special customers contributing towards significant business value to the organization.
This guaranteed service level can be achieved by keeping a sufficiently large number of agents to
serve the calls. We used the following target service levels for our study:
i. At least 90 percent of high priority calls should be attended within 1 minute (SL_PC1).
ii. Abandonment rate for high priority calls should not exceed 5 percent (SL_PC2).
iii. At least 90 percent of regular calls should be attended within 5 minutes (SL_RC1).
iv. Abandonment rate for regular calls should not exceed 10 percent (SL_RC2).
We developed an animated simulation model to study the problem. Simulation was chosen
because the problem requires modeling two classes of customers, and because simulation easily
represents one of the most important dynamic features of the system, call abandonment, while
gathering output on a variety of performance measures. The objective of the
simulation study is to find the optimal number of agents that achieves these service levels
while maximizing agent utilization. We used the following performance measures for the
study, the first three defined for each class of customers:
i. Average waiting time: the amount of time a customer waits in the queue before being served.
ii. Service Level: the percentage of served customers who spent less than the target time on hold.
iii. Abandonment Rate: the percentage of customers who hung up without being served.
iv. Agent Utilization: the percentage of time an agent was busy talking to a customer.
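As a concrete illustration, the four measures above can be computed from a log of simulated calls. The record layout below (a wait time in minutes plus a served flag, with busy time and run length supplied separately) is a hypothetical sketch, not the actual Arena output format:

```python
def performance_measures(calls, target_min, busy_time, n_agents, run_len):
    """Compute the four measures from per-call records of one customer class.

    calls: list of (wait_minutes, served) pairs, where served is False for
    callers who balked or reneged. busy_time and run_len are in minutes.
    The record layout is a hypothetical sketch, not Arena's output format.
    """
    served_waits = [w for w, served in calls if served]
    avg_wait = sum(served_waits) / len(served_waits)
    service_level = 100.0 * sum(w < target_min for w in served_waits) / len(served_waits)
    abandonment = 100.0 * sum(not served for _, served in calls) / len(calls)
    utilization = 100.0 * busy_time / (n_agents * run_len)
    return avg_wait, service_level, abandonment, utilization

# A 12-hour (720-minute) day with four calls, one abandoned:
calls = [(0.5, True), (2.0, True), (6.0, True), (1.5, False)]
avg_w, sl, ab, util = performance_measures(calls, target_min=5.0,
                                           busy_time=430.0, n_agents=1,
                                           run_len=720.0)
```

Note that the service level is computed over served calls only, so a caller who abandons counts toward the abandonment rate but not against the service level.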
3 Call Centre Data
3.1 Data Collection
We collected a limited amount of data on call arrivals and service times from a call centre
serving the health industry. The system under study requires the following input data:
i. Pattern of call arrivals: specified as a probability distribution of the number of calls in a
given time. This is discussed further in the Inter-Arrival Times Distribution sub-section below.
ii. Pattern of call service: specified as a probability distribution of the amount of time needed
to serve a call. This is discussed further in the Service Times Distribution sub-section below.
iii. Call mix: the proportion of calls that are high priority. We did a sensitivity analysis by
varying the call mix from 10% to 50% and observed the optimal number of agents required
with other business constraints unchanged.
iv. Balking percentage: the percentage of callers who hang up as soon as they find the server
busy, before talking to an agent. This is discussed further in the Balking and Reneging
sub-section below.
v. Reneging probability: the probability that a customer abandons a call after waiting some
time in the queue before being served. This is discussed further in the Balking and Reneging
sub-section below.
vi. Target service levels: for each class of customers, a lower limit on the percentage of
customers whose calls should be answered within a target amount of time, and an upper limit
on the abandonment rate. As discussed in the previous section, we fixed a service level of
90% of calls answered within one minute for priority customers and 90% within five minutes
for regular customers. The maximum permissible abandonment rates are set at 5% and 10%
for priority and regular customers respectively. We further conducted a sensitivity analysis
by changing the threshold times in queue for both the regular and priority customers with the
service levels fixed at 90% for both classes.
3.2 Inter-Arrival Times Distribution
Data on call arrivals was available in the form of number of incoming calls every 15
minutes during the hours of operation. The inter-arrival time was assumed to be constant within
each 15 minute interval. As an example, if there were 20 incoming calls in a time step of 15
minutes between 1330 hours to 1345 hours, then the inter-arrival time was assumed to be 15/20
(=0.75) minutes. We used BestFit to find the distribution of the inter-arrival times. The following
were the top three best fits for the inter-arrival time data based on the Kolmogorov-Smirnov
Goodness of Fit test.
i. Pearson5(5.8906, 1.2856) Shift=+0.25189
ii. LogLogistic(0.31440, 0.16754, 2.9835)
iii. Lognorm(0.21240, 0.12779) Shift=+0.30195
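The conversion from 15-minute call counts to inter-arrival times, followed by a lognormal fit, can be sketched as follows. The counts shown are illustrative, not the actual data set, and BestFit's shifted three-parameter fit is simplified here to a plain unshifted maximum-likelihood fit:

```python
import math
import statistics

# Hypothetical 15-minute call counts; the actual data set is not reproduced here.
counts = [20, 25, 18, 30, 22]

# Constant inter-arrival time within each 15-minute bin (e.g. 20 calls in a bin
# give 15/20 = 0.75 minutes), repeated once per call in that bin.
interarrivals = [15.0 / c for c in counts for _ in range(c)]

# Unshifted maximum-likelihood lognormal fit: mu and sigma are the mean and
# standard deviation of log(x). (BestFit also estimated a location shift,
# which this sketch omits.)
logs = [math.log(x) for x in interarrivals]
mu = statistics.fmean(logs)
sigma = statistics.pstdev(logs)
```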
Even though Pearson5 and Log Logistic distributions were ranked higher than the Log
Normal distribution, we chose the Log Normal distribution for our simulation model to generate
the inter-arrival times since the modeling software Arena, used for the simulation study, does not
support the Pearson5 and Log Logistic distributions.
The distribution of the K-S test statistic does not depend on the underlying cumulative
distribution function being tested. Moreover, it is an exact test, unlike the chi-squared
goodness-of-fit test, which requires an adequate sample size for its approximations to be valid
(and we had limited arrival data). These were the main reasons for using the K-S test to rank
the distributions. Finally, the K-S test tends to be more sensitive near the centre of the
distribution than at the tails, which suits our purpose, since the objective of the simulation
model is to determine an upper limit on the staffing level, for which the busy-hour data
matters most.
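A minimal sketch of the one-sample K-S statistic against a fitted (shifted) lognormal CDF. Note that mu and sigma here parameterize the underlying normal distribution, which is a different convention from BestFit's Lognorm(mean, sd):

```python
import math

def lognorm_cdf(x, mu, sigma, shift=0.0):
    """CDF of a shifted lognormal: Phi((ln(x - shift) - mu) / sigma).

    mu and sigma are parameters of the underlying normal distribution
    (not BestFit's mean/sd convention).
    """
    if x <= shift:
        return 0.0
    z = (math.log(x - shift) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic D = sup_x |F_n(x) - F(x)|."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # The empirical CDF jumps at x: compare F(x) to both i/n and (i+1)/n.
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d
```

Usage would be, e.g., `ks_statistic(times, lambda x: lognorm_cdf(x, mu, sigma, shift))` with the fitted parameters.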
[Figure omitted: histogram with fitted PEARSON5(5.8906, 1.2856) Shift=+0.25189 density; 90% of the fitted mass lies between 0.3759 and 0.7578 minutes]
Figure 3-1 Pearson5 Distribution for Inter-Arrival Times
[Figure omitted: histogram with fitted LOGLOGISTIC(0.31440, 0.16754, 2.9835) density; 90% of the fitted mass lies between 0.3768 and 0.7639 minutes]
Figure 3-2 Log Logistic Distribution for Inter-Arrival Times
[Figure omitted: histogram with fitted LOGNORM(0.21240, 0.12779) Shift=+0.30195 density; 90% of the fitted mass lies between 0.3749 and 0.7560 minutes]
Figure 3-3 Log Normal Distribution for Inter-Arrival Times
[Figure omitted: difference graph (fit minus data) for LOGNORM(0.21240, 0.12779) Shift=+0.30195 over 0.3 to 1.1 minutes]
Figure 3-4 Difference Graph for Log Normal Distribution of Inter-Arrival Times
                 Chi-Sq     A-D       K-S
Test Value       6.742      0.3697    0.117
P Value          0.2406     N/A       N/A
Rank             4          3         3
C.Val @ 0.75     2.6746     N/A       N/A
C.Val @ 0.5      4.3515     N/A       N/A
C.Val @ 0.25     6.6257     N/A       N/A
C.Val @ 0.15     8.1152     N/A       N/A
C.Val @ 0.1      9.2364     N/A       N/A
C.Val @ 0.05     11.0705    N/A       N/A
C.Val @ 0.025    12.8325    N/A       N/A
C.Val @ 0.01     15.0863    N/A       N/A
Table 3-1 Goodness of Fit Test output for Lognormal Inter-Arrival Times
3.3 Service Times Distribution
The call service time data, i.e. the time an agent spends serving a customer on a call, was given
in seconds. The time unit was converted to minutes and BestFit was used to find the distribution.
The following were the top three best fits for the service time data based on the
Kolmogorov-Smirnov Goodness of Fit test.
i. LogLogistic(2.1757, 3.9236, 2.5351)
ii. Pearson5(5.0463, 26.680) Shift=+0.47587
iii. Lognorm(5.2343, 3.5904) Shift=+1.8160
Even though the Log Logistic and Pearson5 distributions were ranked higher than the Log
Normal distribution, we chose the Log Normal distribution for our simulation model to generate
the service times, for the same reasons cited in the previous section. As before, we used the
K-S test statistic to rank the distributions.
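Arena's LOGNORM is parameterized by the mean and standard deviation of the lognormal variable itself, so generating equivalent variates outside Arena requires converting to the parameters of the underlying normal distribution. A sketch under that assumption:

```python
import math
import random

def lognorm_params(mean, sd):
    """Convert the mean and standard deviation of a lognormal random variable
    into the (mu, sigma) parameters of the underlying normal distribution."""
    sigma2 = math.log(1.0 + (sd / mean) ** 2)
    return math.log(mean) - 0.5 * sigma2, math.sqrt(sigma2)

def sample_service_time(rng=random):
    """Draw one service time (minutes) from LOGNORM(5.2343, 3.5904) + 1.8160,
    assuming Arena's mean/sd parameterization of LOGNORM."""
    mu, sigma = lognorm_params(5.2343, 3.5904)
    return 1.8160 + rng.lognormvariate(mu, sigma)
```

The conversion follows from the moment formulas of the lognormal distribution: mean = exp(mu + sigma^2/2) and variance = (exp(sigma^2) - 1) exp(2 mu + sigma^2).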
[Figure omitted: histogram with fitted LOGLOGISTIC(2.1757, 3.9236, 2.5351) density; 90% of the fitted mass lies between 3.40 and 14.71 minutes]
Figure 3-5 Log Logistic Distribution for Service Times
[Figure omitted: histogram with fitted PEARSON5(5.0463, 26.680) Shift=+0.47587 density; 90% of the fitted mass lies between 3.37 and 13.82 minutes]
Figure 3-6 Pearson5 Distribution for Service Times
[Figure omitted: histogram with fitted LOGNORM(5.2343, 3.5904) Shift=+1.8160 density; 90% of the fitted mass lies between 3.37 and 13.80 minutes]
Figure 3-7 Log Normal Distribution for Service Times
[Figure omitted: difference graph (fit minus data) for LOGNORM(5.2343, 3.5904) Shift=+1.8160 over 2 to 16 minutes]
Figure 3-8 Difference Graph for Log Normal Distribution of Service Times
                 Chi-Sq     A-D       K-S
Test Value       4.4        0.2277    0.08944
P Value          0.6227     N/A       N/A
Rank             4          3         3
C.Val @ 0.75     3.4546     N/A       N/A
C.Val @ 0.5      5.3481     N/A       N/A
C.Val @ 0.25     7.8408     N/A       N/A
C.Val @ 0.15     9.4461     N/A       N/A
C.Val @ 0.1      10.6446    N/A       N/A
C.Val @ 0.05     12.5916    N/A       N/A
C.Val @ 0.025    14.4494    N/A       N/A
C.Val @ 0.01     16.8119    N/A       N/A
Table 3-2 Goodness of Fit Test output for Lognormal Service Times
3.4 Balking and Reneging
Some call centre customers may decide not to join the system upon arrival if the server is
busy, i.e. there is no agent to serve their call. Such customers are said to have balked. Others may
leave after spending some time in the queue. They are said to have reneged. Of course there are
also those who stay on until service completion. Balking and reneging are common occurrences
in all real-life queueing systems and have a direct impact on the quality of service delivered,
despite the inherent complexity involved in estimating them.
Harris, Hoffman, and Saunders (1987) argued that human behavior is such that a very
long wait often keeps the caller on the line, because of increased expectation and the overhead
already incurred, although future delays in any M/M/c queue are independent of the waiting
time already spent. In the simulation model that they developed for the IRS telephone taxpayer
information system, they assigned different hold-on probabilities based on accumulated waiting
time. Parkan (1987) developed a simulation model for the operation of a fast food kiosk where
dissatisfied customers renege. He argued that the customers do not join a queue with the
intention of reneging, and therefore, it is more appropriate not to presume that a person will balk
at a queue or renege from it later with a certain probability. He further argues that one must look
into the decision process that leads to balking or reneging of a person and try to understand the
relationship between this process and the operational characteristics (such as speed and quality of
service) of the system. His approach to customers' reneging decisions in a queue is based on
Bayesian analysis, in which he assumes that the initial expectation of the waiting time before
service, common to all customers, follows a gamma distribution.
The renege probability distribution that we modeled, based on our literature review, is a
piecewise-linear function of wait time: CONT [(0.0, 1), (0.25, 2), (0.40, 3), (0.50, 4),
(0.70, 5), (1.0, 6)], where each pair gives a cumulative probability and a waiting time in
minutes. We argue that the probability of reneging decreases with waiting time only up to a
certain point, after which customers tend to lose patience and the reneging probability
increases. This can be observed in Figure 3-9 below, where the slope of the probability
distribution function decreases until a certain time and increases thereafter. We further
assume that customers who join the queue wait at least one minute before deciding to renege.
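Arena's CONT construct is an inverse-CDF lookup with linear interpolation between the listed (cumulative probability, value) pairs. A sketch of drawing a renege time from the distribution above:

```python
import random

# Arena CONT pairs: (cumulative probability, waiting time in minutes)
RENEGE_CDF = [(0.0, 1), (0.25, 2), (0.40, 3), (0.50, 4), (0.70, 5), (1.0, 6)]

def sample_renege_time(u=None, rng=random):
    """Inverse-CDF sample from the piecewise-linear renege distribution."""
    if u is None:
        u = rng.random()          # uniform draw on [0, 1)
    for (c0, t0), (c1, t1) in zip(RENEGE_CDF, RENEGE_CDF[1:]):
        if u <= c1:
            # Linear interpolation within this segment of the CDF.
            return t0 + (u - c0) / (c1 - c0) * (t1 - t0)
    return RENEGE_CDF[-1][1]      # u == 1.0: the 6-minute upper bound
```

Because the first pair is (0.0, 1), every sampled renege time is at least one minute, matching the assumption above.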
[Figure omitted: renege probability versus waiting time (0 to 7 minutes), rising from 0 at 1 minute to 1 at 6 minutes]
Figure 3-9 Renege Probability Distribution
4 Simulation Model
Figure 4-1 shows the simulation model, built in Arena 7.01, a module-based graphical
simulation software package (Kelton, Sadowski, and Sadowski, 1998), for the system under
study. Calls are generated using the “Call Arrival” module with the inter-arrival time
arrival are tagged with the following attributes: Service time, Renege Time, Customer Number
and the Arrival Time. Upon arrival, the entity is duplicated with the same value of the attributes.
The original entity joins the queue if it does not balk. The duplicated entity is delayed using the
“Delay Module” by its renege time obtained from the Renege Probability Distribution. The
original entity balks with a probability of 5% using the Decide module. The duplicated entity
after being delayed enters a Search module, which searches the queue before the server to see if
the corresponding original entity is still present in the queue. If present, it is reneged using the
remove module; otherwise the original entity has already been served. After the Process module,
the customer waiting time, and the percentage of calls that waited for less than the upper limit on
wait time are noted. Similarly, statistics are calculated for the percentage renege and percentage
balk for each type of call, the sum of which gives the abandonment rate. Finally, served calls
are disposed of using the Dispose module, “Call Served”.
We defined several essential run characteristics, such as the number of replications and the
length of each replication, in the Simulation model. Model parameters that were held constant for
a given scenario, such as the percentage of Priority class customers defining the call mix, the
service level target answer times by call type, etc. were also defined. The distributions for
drawing random variates for inter-arrival time, service time, and the amount of time a caller
waits on hold before abandoning were defined in the simulation model. During execution, the
model tracks a number of system performance measures for each class of calls and continually
updates their values: the percentage of callers who hung up without getting served (Abandoned),
the average number of minutes served callers spent on hold (Average Queue Time), and the
percentage of served customers who spent less than the target time on hold (Service Level).
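The duplicate-entity renege logic described above, in which the delayed duplicate searches the hold queue and removes its original if still waiting, can be sketched as a plain function over a queue list. The dictionary layout is hypothetical, standing in for Arena's entity attributes:

```python
def renege_check(hold_queue, caller_id):
    """Mimic Arena's Search + Remove modules: when a caller's renege delay
    expires, drop it from the hold queue if it has not yet reached an agent.

    hold_queue: list of dicts with an 'id' key (a hypothetical stand-in for
    Arena's entity attributes). Returns True if the caller reneged, False if
    the original entity was already served (the duplicate is simply disposed).
    """
    for i, caller in enumerate(hold_queue):
        if caller['id'] == caller_id:
            del hold_queue[i]      # the Remove module: caller abandons
            return True            # counted toward the renege statistic
    return False                   # not found: the call was already answered

queue = [{'id': 1}, {'id': 2}]
```

Calling `renege_check(queue, 2)` removes caller 2 from the queue and reports a renege; a second call with an id no longer in the queue reports that the original was already served.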
[Screenshot omitted: Arena model logic showing the Call Arrival, attribute-assignment, balk decision, duplicate-and-delay renege, Call Service, and statistics-recording modules]
Figure 4-1 Simulation Model in Arena Environment
5 Verification, Validation and Testing
Simulation model verification, validation and testing (VV&T) plays an important role in any
simulation study. VV&T is the structured process of increasing one’s confidence in a model,
thereby providing a basis for confidence in the modeling study's results (Swisher 2001). Model
verification substantiates that the model has been properly transformed from one form to another
(e.g. from a flowchart to an executable program). Model validation, on the other hand,
substantiates that the model behaves with sufficient accuracy in light of the study’s objectives.
We were able to ascertain that the model we developed closely reflects the call center operation.
However, we could not employ the validation techniques such as high face validity (a model
which on surface seems reasonable to people who are knowledgeable about the system under
study) or Turing Test (comparison of the model outputs to those observed in actual system) due
to the lack of industrial contacts and the unavailability of the data on true system performance
measures of the call center under study. Finally, model testing is the process of revealing errors
in a model. Testing procedures may be designed to perform either model verification or model
validation. The VV&T techniques used throughout the design and development of the call center
simulation model can be categorized into informal, static or dynamic (Balci 1997).
Balci (1997) states that well-structured informal VV&T techniques applied under formal
guidelines can be very effective. Informal VV&T techniques employed in the call center
simulation modeling effort were review and walk-through. Arena has a completely graphical
user interface, many automated bookkeeping features that greatly reduce the likelihood of
programming error, and debugging capabilities that allow the user to stop execution and examine
the values of any variable or caller attribute.
Static VV&T techniques are concerned with assessing the accuracy of a model based
upon characteristics of the static simulation model design (Swisher 2001). They do not require
computational execution of the model. The static VV&T technique employed in this modeling
effort was fault/failure analysis. We examined under what conditions the model should logically
fail. This helped us in identifying the logic problems in the definition of the call flow and made it
easier to define the possible paths that the customer call may take in the model.
Dynamic VV&T techniques require model execution and are intended to evaluate the
model based on its execution behavior (Balci 1997). Examples of dynamic VV&T techniques
applied to the simulation model include assertion checking, debugging, functional testing, and
sensitivity analysis. The feasibility of critical state variables was monitored using the assertion
checking technique. The simulation program was developed and debugged modularly to avoid
any critical bug fixes in the final model. Further we ran the model with simplifying assumptions
(adequate servers, no reneging, etc.) which is an essential part of debugging. Functional testing is
used to assess the accuracy of a model based upon its outputs, given a specific set of inputs
(Balci 1997). The model was tested with several arrival call rates. As an example with low call
rates (achieved by changing the parameters of the Inter-Arrival time distribution) the agent
utilization was low for a fixed number of agents. Finally, sensitivity analysis on the number of
agents, call-mix, and the threshold values of time on hold for regular and priority customers was
done. We discuss this aspect of the VV&T technique in the Experimentation and Results section
of the paper.
6 Experimentation and Results
The call center is a terminating system that begins each morning empty of calls and ends
hours later when agents go home after serving their last calls. We took each replication of the
model to be exactly 12 hours; calls still in the system at the end of the day were counted as
served even though they were not served to completion. We evaluated our results for the following
scenarios:
i. Specific combinations of the number of agents (S) and the percentage of priority (P) class
callers, with the service levels defined in Section 2, Problem Definition. We used three values
of S and five values of P, for a total of 15 scenarios. The results for these 15 scenarios are
presented in Table 6-1.
ii. A sensitivity analysis in which we changed the service levels defined previously, with P
fixed at 30%. The results of the 3 scenarios thus evaluated are presented in Table 6-2 to
Table 6-4.
We performed 20 independent replications for each scenario, and Arena's Output Analyzer
calculated summary statistics for the performance measures discussed in Section 2, Problem
Definition. Arena reports 95% confidence intervals around the mean by default. We converted
these to 90% confidence intervals by recovering the standard error of the mean from the
reported 95% half-length and the number of replications, and then rescaling by the t-statistic
at the 90% level with the same number of replications. Table 6-1 to Table 6-4 report these
confidence intervals at 90%.
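The conversion can be sketched as follows, using the standard two-sided Student-t critical values for 19 degrees of freedom (20 replications):

```python
# Two-sided Student-t critical values for 20 replications (19 d.o.f.):
T95, T90 = 2.093, 1.729      # t_{0.975,19} and t_{0.95,19}

def ci90_from_ci95(lo95, hi95):
    """Rebuild a 90% confidence interval from a reported 95% interval.

    The 95% half-length is h95 = t_{0.975,19} * s / sqrt(n), so s / sqrt(n)
    is recovered as h95 / t_{0.975,19} and rescaled by t_{0.95,19}.
    """
    mean = 0.5 * (lo95 + hi95)
    h90 = 0.5 * (hi95 - lo95) * T90 / T95
    return mean - h90, mean + h90
```

The 90% interval is always narrower than the 95% interval it is derived from, by the fixed factor 1.729 / 2.093.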
% Priority  Performance Measure            12 agents        13 agents        14 agents
50%         Service Level – Priority       [80.61,83.63]    [86.35,89.37]    [92.74,94.30]
            Service Level – Regular        [98.30,98.70]    [98.78,98.98]    [98.82,99.06]
            Abandonment Rate – Priority    [5.08,6.18]      [3.98,4.82]      [2.36,3.00]
            Abandonment Rate – Regular     [18.64,21.40]    [10.74,13.34]    [4.74,5.98]
            Avg. wait time – Priority      [0.46,0.52]      [0.33,0.39]      [0.18,0.22]
            Avg. wait time – Regular       [0.91,1.03]      [0.56,0.68]      [0.30,0.38]
            Agent Utilization              [0.980,0.980]    [0.962,0.962]    [0.931,0.931]
40%         Service Level – Priority       [81.04,84.14]    [87.27,89.67]    [93.12,94.84]
            Service Level – Regular        [98.48,98.82]    [98.82,99.02]    [98.86,99.08]
            Abandonment Rate – Priority    [5.40,6.96]      [3.70,4.86]      [2.12,3.10]
            Abandonment Rate – Regular     [16.99,19.51]    [9.59,11.71]     [4.63,6.07]
            Avg. wait time – Priority      [0.45,0.51]      [0.32,0.36]      [0.17,0.21]
            Avg. wait time – Regular       [0.93,1.03]      [0.56,0.68]      [0.29,0.37]
            Agent Utilization              [0.982,0.982]    [0.965,0.965]    [0.932,0.932]
30%         Service Level – Priority       [82.73,85.65]    [90.09,92.23]    [93.52,94.86]
            Service Level – Regular        [98.77,98.97]    [98.87,99.05]    [98.95,99.11]
            Abandonment Rate – Priority    [5.01,6.51]      [3.42,4.70]      [2.18,2.98]
            Abandonment Rate – Regular     [15.79,17.59]    [8.54,10.22]     [4.30,5.08]
            Avg. wait time – Priority      [0.44,0.48]      [0.29,0.33]      [0.18,0.20]
            Avg. wait time – Regular       [0.95,1.07]      [0.57,0.65]      [0.31,0.37]
            Agent Utilization              [0.982,0.982]    [0.964,0.964]    [0.936,0.936]
20%         Service Level – Priority       [83.89,86.85]    [89.83,92.17]    [93.43,95.47]
            Service Level – Regular        [98.83,99.05]    [98.95,99.13]    [98.99,99.15]
            Abandonment Rate – Priority    [4.61,6.57]      [2.97,4.41]      [2.27,3.07]
            Abandonment Rate – Regular     [14.69,16.31]    [7.68,9.28]      [3.78,4.84]
            Avg. wait time – Priority      [0.42,0.46]      [0.27,0.31]      [0.16,0.20]
            Avg. wait time – Regular       [0.94,1.06]      [0.53,0.63]      [0.28,0.34]
            Agent Utilization              [0.983,0.983]    [0.963,0.963]    [0.933,0.933]
10%         Service Level – Priority       [84.06,87.46]    [89.36,93.00]    [92.37,95.09]
            Service Level – Regular        [98.95,99.15]    [99.04,99.18]    [99.04,99.18]
            Abandonment Rate – Priority    [3.43,6.81]      [3.09,4.97]      [1.29,3.17]
            Abandonment Rate – Regular     [13.24,14.46]    [7.12,8.48]      [3.83,4.61]
            Avg. wait time – Priority      [0.38,0.42]      [0.25,0.31]      [0.17,0.21]
            Avg. wait time – Regular       [0.90,1.00]      [0.54,0.64]      [0.29,0.37]
            Agent Utilization              [0.983,0.983]    [0.966,0.966]    [0.937,0.937]
Table 6-1 Service Levels: Regular Customers (5 minutes) Priority Customers (1 minute)
For (P = 10%, S = 12) we were able to achieve SL_RC1 (at least 90 percent of regular calls answered within 5 minutes), but we were unable to achieve SL_PC1 (at least 90 percent of high-priority calls answered within 1 minute). Furthermore, SL_PC2 (abandonment rate for high-priority calls not exceeding 5 percent) and SL_RC2 (abandonment rate for regular calls not exceeding 10 percent) were not met. We therefore need more agents to meet the targets. For (P = 10%, S = 13) all four service levels (SL_PC1, SL_PC2, SL_RC1, SL_RC2) are achieved if we consider only the means of the performance measures. However, as Table 6-1 shows, SL_PC1 cannot be ascertained at 90% confidence: the lower limit of the confidence interval is 89.36%, below the target of 90%. This could be examined further by increasing the number of replications in the simulation. For (P = 10%, S = 14) all the service levels are achieved with 90% confidence. We also observe a drop in average waiting times for both regular and priority customers as the number of agents increases: for P = 10%, the average waiting time varies between 0.29 and 1 minute for regular customers and between 0.17 and 0.42 minutes for priority customers. Intuitively, average agent utilization should fall as agents are added, and indeed for P = 10% it drops from 98.3% to 93.7% as the number of agents increases from 12 to 14. Finally, if the confidence intervals for a performance measure overlap as the number of agents increases, we cannot claim a significant difference in performance. For (P = 10%, S = 13) and (P = 10%, S = 14) the confidence intervals for the priority-customer abandonment rate overlap, so we cannot conclude a significant improvement even though the mean abandonment rate decreases when moving from 13 to 14 agents. Figure 6-1 to Figure 6-5 plot the output performance measures against the number of agents at different percentages of priority-class callers.
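The overlap comparison above is mechanical once the interval endpoints are known; a small sketch using the Table 6-1 values (function and variable names are ours):

```python
def intervals_overlap(a, b):
    """True if two confidence intervals, given as (lo, hi) tuples, overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

# Priority abandonment rate at P = 10%: S = 13 vs. S = 14 (Table 6-1)
print(intervals_overlap((3.09, 4.97), (1.29, 3.17)))  # True: difference not significant
```

The same check applied to non-overlapping pairs (e.g. agent utilization at 12 versus 14 agents) confirms where the differences are unambiguous.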
[Figure: line plot of the percentage of priority customers waiting less than 1 minute versus number of agents (12–14), one curve per priority mix P = 10% to 50%]
Figure 6-1 Service Level for Priority Customers versus Number of Agents
[Figure: line plot of the abandonment rate (%) for priority customers versus number of agents (12–14), one curve per priority mix P = 10% to 50%]
Figure 6-2 Abandonment Rate for Priority Customers versus Number of Agents
[Figure: line plot of the abandonment rate (%) for regular customers versus number of agents (12–14), one curve per priority mix P = 10% to 50%]
Figure 6-3 Abandonment Rate of Regular Customers versus Number of Agents
[Figure: line plot of the average waiting time (minutes) for priority customers versus number of agents (12–14), one curve per priority mix P = 10% to 50%]
Figure 6-4 Average Waiting Time for Priority Customers versus Number of Agents
[Figure: line plot of the average waiting time (minutes) for regular customers versus number of agents (12–14), one curve per priority mix P = 10% to 50%]
Figure 6-5 Average Waiting Time for Regular Customers versus Number of Agents
Overall, 14 agents appeared sufficient to guarantee the target service levels across the range of call mixes. However, with 14 agents we achieved almost a 98% service level (SL_RC1) for regular calls, owing to the generous threshold of 5 minutes. We therefore investigated the sensitivity of the percentage of calls answered to the service-level threshold, fixing the percentage of priority calls at 30%. We observed a decrease in the service level (SL_RC1) as the maximum allowable waiting time was reduced from 5 minutes to 3 minutes and then to 2 minutes. The results are presented in Table 6-2 and Table 6-3 below, and Figure 6-6 plots this sensitivity analysis.
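The service-level statistic itself is simply the fraction of answered calls whose queue wait falls within the threshold, so tightening the threshold can only lower it. A minimal sketch (function and variable names are ours, not Arena's):

```python
def service_level(wait_times, threshold):
    """Percentage of answered calls whose queue wait is at most
    `threshold` (both measured in the same unit, minutes here)."""
    within = sum(1 for w in wait_times if w <= threshold)
    return 100.0 * within / len(wait_times)

waits = [0.4, 1.2, 2.6, 4.8]      # illustrative waits, in minutes
print(service_level(waits, 5.0))  # 100.0
print(service_level(waits, 2.0))  # 50.0
```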
% of priority calls: 30%

| Performance Measure | 12 Agents | 13 Agents | 14 Agents |
|---|---|---|---|
| Service Level – Priority (%) | [82.73, 85.65] | [90.09, 92.23] | [93.52, 94.86] |
| Service Level – Regular (%) | [92.03, 93.95] | [96.82, 97.52] | [98.24, 98.68] |
| Abandonment Rate – Priority (%) | [5.01, 6.51] | [3.42, 4.70] | [2.18, 2.98] |
| Abandonment Rate – Regular (%) | [15.79, 17.59] | [8.54, 10.22] | [4.30, 5.08] |
| Avg. wait time – Priority (min) | [0.44, 0.48] | [0.29, 0.33] | [0.18, 0.20] |
| Avg. wait time – Regular (min) | [0.95, 1.07] | [0.57, 0.65] | [0.31, 0.37] |
| Agent Utilization | [0.982, 0.982] | [0.964, 0.964] | [0.936, 0.936] |
Table 6-2 Service Levels: Regular Customers (3 minutes) Priority Customers (1 minute)
% of priority calls: 30%

| Performance Measure | 12 Agents | 13 Agents | 14 Agents |
|---|---|---|---|
| Service Level – Priority (%) | [82.73, 85.65] | [90.09, 92.23] | [93.52, 94.86] |
| Service Level – Regular (%) | [79.63, 83.69] | [90.31, 92.63] | [95.24, 96.76] |
| Abandonment Rate – Priority (%) | [5.01, 6.51] | [3.42, 4.70] | [2.18, 2.98] |
| Abandonment Rate – Regular (%) | [15.79, 17.59] | [8.54, 10.22] | [4.30, 5.08] |
| Avg. wait time – Priority (min) | [0.44, 0.48] | [0.29, 0.33] | [0.18, 0.20] |
| Avg. wait time – Regular (min) | [0.95, 1.07] | [0.57, 0.65] | [0.31, 0.37] |
| Agent Utilization | [0.982, 0.982] | [0.964, 0.964] | [0.936, 0.936] |
Table 6-3 Service Levels: Regular Customers (2 minutes) Priority Customers (1 minute)
[Figure: percentage of regular customers answered within the threshold versus number of agents (12–14), with one curve for the 3-minute threshold and one for the 2-minute threshold]
Figure 6-6 Sensitivity Analysis on Service Level Thresholds for Regular Customers
The sensitivity analysis on threshold waiting times for regular customers described above showed that, with 14 agents, 90% of regular calls could be answered within two minutes. We therefore examined how system performance changes when the targets are tightened to 90% of priority calls answered within 30 seconds and 90% of regular calls within 2 minutes. We found that to assure this service level the number of agents must increase from 14 to 16. The results are presented in Table 6-4 below.
% of priority calls: 30%

| Performance Measure | 14 Agents | 15 Agents | 16 Agents |
|---|---|---|---|
| Service Level – Priority (%) | [82.30, 84.58] | [89.58, 92.00] | [94.89, 96.21] |
| Service Level – Regular (%) | [95.24, 96.76] | [97.11, 98.53] | [98.54, 98.86] |
| Abandonment Rate – Priority (%) | [2.18, 2.98] | [0.73, 1.47] | [0.54, 1.06] |
| Abandonment Rate – Regular (%) | [4.30, 5.08] | [1.93, 2.77] | [0.76, 1.16] |
| Avg. wait time – Priority (min) | [0.18, 0.20] | [0.09, 0.11] | [0.03, 0.05] |
| Avg. wait time – Regular (min) | [0.31, 0.37] | [0.13, 0.19] | [0.05, 0.07] |
| Agent Utilization | [0.935, 0.935] | [0.892, 0.892] | [0.849, 0.849] |
Table 6-4 Service Levels: Regular Customers (2 minutes) Priority Customers (30 seconds)
7 Conclusions and Future Research Directions
We developed a simulation model of the call center operation under the assumed business constraints on service levels and abandonment rates, and obtained several output performance measures from it. The sensitivity analysis we conducted by varying these business constraints helped identify the service levels that can be offered to both regular and priority customers, and allowed us to derive the number of agents required to serve the two customer classes.
An important direction for future research is to construct a scalar performance measure as a weighted linear combination of the output performance measures discussed in this paper. From the management's perspective, the optimal call center configuration should simultaneously maximize call center profit (by employing fewer agents and thereby incurring lower trunk setup and maintenance costs), the satisfaction of both regular and priority customers (through good service levels), and staff satisfaction (through lower agent utilization). Determining an optimal staffing level is complicated by the conflicting nature of these objectives; for example, a configuration that maximizes profit may produce lower service levels and higher agent utilization. Collapsing these attributes into a single measure would therefore give decision-makers a simple, concise and intuitive measurement of call center effectiveness. Further, a fractional factorial experimental design should be conducted to determine the significance of the input parameters and to obtain good estimates of the main effects and of selected higher-order interactions.
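A scalar measure of the kind proposed above might be sketched as follows; the attribute names and weights are illustrative assumptions, not values from this study:

```python
def call_center_score(metrics, weights):
    """Weighted linear combination of normalized performance measures.

    metrics: dict mapping measure name -> value scaled to [0, 1], where
             larger is always better (costs/abandonment inverted first).
    weights: dict mapping measure name -> nonnegative weight summing to 1.
    """
    return sum(weights[name] * metrics[name] for name in weights)

# Illustrative usage with made-up attribute values and weights:
score = call_center_score(
    {"service_level": 0.93, "low_abandonment": 0.95, "profit": 0.80},
    {"service_level": 0.4, "low_abandonment": 0.3, "profit": 0.3},
)
```

The design question deferred to future work is how to choose the normalization and the weights so that the single score reflects management's actual trade-offs.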
Further, we have not considered the transient state of the system in this paper. There are generally fewer calls when the call center opens, so we need to identify the warm-up period before the system reaches steady state and truncate those observations when evaluating our performance measures. We propose to use Welch's procedure of plotting a moving average, adjusting the window size, to determine the length of the warm-up period. We also propose to design a reneging distribution that models human behavior more closely for the problem at hand. In this paper we used a piecewise-linear function of waiting time: a lower probability of abandoning the call up to a certain threshold time and a higher probability thereafter, with no calls dropped in the first minute and all calls dropped within six minutes of waiting. Actual data on call drops should be examined to fit this distribution. Finally, we propose to use a non-stationary inter-arrival process, owing to the varying traffic at different times of the day; this would also allow different staffing levels to be derived for different times of day.
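The piecewise-linear reneging distribution described above can be sampled by inverse transform. In the sketch below, the 3-minute breakpoint and its CDF value of 0.25 are illustrative assumptions; the report fixes only the one- and six-minute endpoints:

```python
import random

def sample_patience(threshold=3.0, p_threshold=0.25):
    """Sample a caller's patience (reneging) time in minutes from a
    piecewise-linear CDF with F(1) = 0, F(threshold) = p_threshold,
    and F(6) = 1, via inverse transform."""
    u = random.random()
    if u <= p_threshold:
        # slower abandonment between one minute and the threshold
        return 1.0 + (u / p_threshold) * (threshold - 1.0)
    # faster abandonment between the threshold and six minutes
    return threshold + ((u - p_threshold) / (1.0 - p_threshold)) * (6.0 - threshold)
```

A caller would renege if the sampled patience time is exceeded by the time spent in queue; fitting the breakpoint and probabilities to observed call-drop data is the future work proposed above.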
References
[1] Andrews, B.H. and Parsons, H.L., "Establishing telephone-agent staffing levels through economic optimization," Interfaces, Vol. 23(2), 1993, pp. 15-20.
[2] Balci, O., "Verification, validation and testing," in: Banks, J. (ed.), The Handbook of Simulation, Wiley, New York, 1997, pp. 335-393.
[3] Dawson, K., The Call Center Handbook, Flatiron Publishing, New York, 1996.
[4] Harris, C.M., Hoffman, K.L., and Saunders, P.B., "Modeling the IRS Telephone Taxpayer Information System," Operations Research, Vol. 35(4), 1987, pp. 504-523.
[5] Kelton, W.D., Sadowski, R.P., and Sadowski, D.A., Simulation with Arena, McGraw-Hill, New York, 1998.
[6] Mehrotra, V., Profozich, D., and Bapat, V., "Simulation: The best way to design your call center," Telemarketing and Call Center Solutions, Vol. 16(5), 1997, pp. 28-29.
[7] Mehrotra, V., "The call center workforce management cycle," Proceedings of the 1999 Call Center Campus, Purdue University Center for Customer-Driven Quality, Vol. 27, 1999, pp. 1-21.
[8] Parkman, C., "Simulation of a Fast-Food Operation Where Dissatisfied Customers Renege," The Journal of the Operational Research Society, Vol. 38(2), 1987, pp. 137-148.
[9] Saltzman, R.M. and Mehrotra, V., "A call center uses simulation to drive strategic change," Interfaces, Vol. 31(3), 2001, pp. 87-101.
[10] Swisher, J.R., Jacobson, S.H., Jun, J.B., and Balci, O., "Modeling and analyzing a physician clinic environment using discrete-event (visual) simulation," Computers and Operations Research, Vol. 28, 2001, pp. 105-125.