An Integer Programming Heuristic for Component Allocation
in Printed Circuit Card Assembly Systems
Gail W. DePuy
University of Louisville, Louisville, Kentucky
Martin W.P. Savelsbergh
Jane C. Ammons
Leon F. McGinnis
Georgia Institute of Technology, Atlanta, Georgia
Abstract
Component allocation is an important element of process planning for printed circuit card
assembly systems. The component allocation problem directly impacts the productivity and cost
of a circuit card assembly system. Many companies have recognized the importance of component
allocation and have started to develop a better decision process. Also, a few commercial software
packages have been developed that provide environments to support process planning. However,
optimization methods are not yet widely used. We demonstrate that component allocation is
amenable to improvement using optimization methods. We present an integer programming
heuristic for the component allocation problem and report on several case studies that have been
conducted and that demonstrate its effectiveness. The heuristic is based on a mixed integer
programming formulation of the component allocation problem that incorporates estimates of
downstream process planning decisions.
1. Introduction
This paper addresses the problem of allocating component types to coupled automated placement
machines so as to balance and minimize the workload for a printed circuit card assembly system. The
component allocation problem involves determining which component types are placed by which
placement machines. These automated placement machines typically are coupled using conveyors to form
an assembly line. Therefore, to avoid one machine becoming a bottleneck and slowing down the entire
line of machines, a card should spend an equal amount of time at each machine. Balancing the machine
workload not only reduces line cycle time, it also reduces work-in-process. The component allocation
problem is difficult for a single card type, but obtaining an allocation of component types that
simultaneously balances and minimizes the workload for each card type in a group of cards is especially
hard. The integer programming heuristic presented in this paper can be used to solve even these difficult
multiple card component allocation problems.
The component allocation decision is only one of several decisions that have to be made in the
design of a printed circuit card assembly system. It is therefore complicated by its interaction with several
other process planning decisions. The main process planning decisions that have to be made (see, e.g.,
Ammons et al. 1997) are:
1. Setup Strategy - select machine groups and card families and assign families to groups;
2. Component Allocation - decide which component types are placed by which machines;
3. Feeder Arrangement and Placement Sequencing - stage component feeders on each machine and
sequence the placement operations for each machine and card type.
Together, these three sets of interrelated decisions determine the cycle time for producing each card type,
and the associated equipment utilization. Once component types have been allocated to machines, the
feeder arrangement and placement sequencing decisions determine the cycle time for each individual
placement machine.
A number of studies (Ahmadi et al. 1995, Ammons et al. 1992, Ammons et al. 1993, Ball and
Magazine 1988, Crama et al. 1990, Gavish and Seidmann 1987, Grotzinger 1988, Grotzinger 1992, Leipala
and Nevalainen 1989, McGinnis et al. 1992) have demonstrated that optimization methods for feeder
arrangement and placement sequencing have the potential to significantly reduce placement machine cycle
times. Component allocation has not been studied as extensively as feeder allocation and placement
sequencing, but studies have indicated potential opportunities for significant impacts (Ahmadi and
Kouvelis 1994, Ammons et al. 1997, Askin et al. 1994, Ben-Arieh and Dror, 1990, Berrada and Stecke
1986, Carmon et al. 1989, Crama et al. 1990, DePuy 1995, Hillier and Brandeau 1993, Lofgren and
McGinnis 1986, Zijm and Van Harten 1993).
These process planning decisions are an important contributor to the productivity and cost of
electronics assembly systems. Many companies have recognized this and have started to develop a better
decision process. Also, a few commercial software packages have been developed that provide
environments to support process planning. However, optimization methods are not yet widely used.
In this paper, we report on the design and implementation of an optimization-based heuristic for
the solution of component allocation problems, and on the results of several case studies that have been
conducted and that demonstrate its effectiveness. The heuristic is based on a large-scale mixed integer
programming formulation of the component allocation problem that incorporates estimates of downstream
process planning decisions. The emphasis of this paper is on the design and implementation of the
optimization-based heuristic and the computational experiments. A more elaborate discussion of the mixed
integer programming model can be found in a companion paper (DePuy et al. 1997).
The paper is organized as follows. In Section 2, we describe the component allocation problem in
greater detail. We focus on the modeling issues related to the incorporation of lower level machine
optimization considerations. A mixed 0-1 integer programming formulation of the component allocation
problem is presented. In Section 3, we discuss the techniques that were used to customize a linear
programming based branch-and-bound algorithm to develop an integer programming heuristic for the
component allocation problem. In Section 4, we present several case studies that demonstrate the viability
of our approach. Finally, in Section 5, we draw some conclusions and discuss future extensions of this
research.
2. Problem Description and Model Formulation
The main objective of the component allocation problem is to balance machine workload while minimizing
cycle time. Therefore, accurate estimates of the time required to place components on the cards are
important. Unfortunately, obtaining accurate estimates of the placement time of components is nontrivial.
To understand why it is hard to obtain accurate placement time estimates we briefly discuss how a typical
placement machine works.
Both the movement of the feeder carriage and the card locator affect the time required to place a
component. Typically, a turret type machine is comprised of a feeder carriage that moves in the X
direction, a card assembly locator that moves in the X and Y directions, and a pick/place device that only
moves in the Z direction (see Figure 1). Both the card assembly locator, which positions the card for the
next placement to be made, and the feeder carriage, which moves to allow the next component to be
retrieved, must stop moving before the pick/place device can move. Therefore, the time the pick/place
device has to wait is usually determined by the maximum of the feeder carriage movement time and the
card assembly locator movement time. This delay time, i.e., the time the pick/place device is ready and
waiting for the card locator or feeder carriage to arrive at the proper location, is called latency. It should be
noted that the card assembly locator and the feeder carriage commonly move simultaneously.
The card locator and feeder carriage have a certain amount of time (the component placement
time) to get to their proper positions without incurring latency. The distance the card locator and feeder
carriage can move during the component placement time, i.e., without causing latency, is referred to as the
free card distance and free feeder distance (Ahmadi et al. 1988).
The “standard” placement time estimate for each component type provided by a machine's
manufacturer is usually the fastest time with which a component type can be placed.
FIGURE 1 HERE
The standard placement times are often not realized, due to latency, which is determined by the feeder arrangement and
placement sequencing decisions. Of course, feeder arrangement and placement sequencing cannot be
decided without an allocation of component types to machines. Different allocations will lead to different
feeder arrangements and placement sequences and, therefore, may affect the cycle time. Clearly there is a
circular interaction between the component allocation and machine optimization problems (i.e. feeder
arrangement and placement sequencing). The typical decomposition strategy is to do the process planning
in two stages: first, solve the component allocation problem; second, solve the feeder arrangement and
placement sequencing problem. A unique feature of the model presented below is that it includes a
component placement time estimate which incorporates aspects of the feeder assignment and placement
sequencing problems within the component allocation problem. Note that our goal is not to optimize the
component allocation, feeder assignment, and placement sequencing decisions simultaneously, but, by
approximating latency resulting from feeder assignment and placement sequencing decisions, to obtain
better solutions to the component allocation problem.
As discussed above, the component placement time estimate can be affected by latency, or a
delay time, caused by either the feeder carriage or card locator. To estimate the feeder carriage latency,
first the total feeder movement distance is calculated. The total feeder movement distance for a card type j
on a machine k is estimated by determining the largest and smallest index of a slot on machine k occupied
by a component type being placed on card type j. This total feeder movement distance is adjusted by the
total free feeder distance (i.e. the number of slots that can be moved without incurring latency during the
standard placement time) to determine the feeder latency distance. This feeder latency distance is then
used to estimate the additional time it takes, due to feeder latency, to populate a card. This additional
population time will be included in the workload estimate to calculate a more accurate estimate of the
actual shop floor population time.
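To make this estimate concrete, the feeder-latency calculation described above can be sketched in Python. The function name and data layout are our own illustration; the paper embeds this logic in the mixed integer program rather than in code like this.

```python
def feeder_latency_time(slots_used, free_feeder_slots, slot_move_time):
    """Estimate the extra population time due to feeder-carriage latency
    for one card type on one machine.

    slots_used        -- slot indices on this machine holding components of the card
    free_feeder_slots -- slots the carriage can cover per placement without latency
    slot_move_time    -- time for the carriage to move one slot distance
    """
    if not slots_used:
        return 0.0
    # Total carriage travel is approximated by the span between the
    # largest and smallest occupied slot index.
    span = max(slots_used) - min(slots_used)
    # Each of the (placements - 1) moves can cover free_feeder_slots
    # slots "for free" during the standard placement time.
    free_distance = free_feeder_slots * (len(slots_used) - 1)
    latency_slots = max(0, span - free_distance)
    return latency_slots * slot_move_time

# Example: components in slots 3, 10, and 25; 4 free slots per placement;
# 0.05 time units per slot move.
extra = feeder_latency_time([3, 10, 25], 4, 0.05)
```

This mirrors constraints [11] of the model below: the slot span corresponds to MXS + MNS - (cls + 1), and the free distance to freef times (number of placements - 1).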
Although the majority of total latency time can be attributed to feeder latency (DePuy 1995, Crama
et al. 1997), a rough estimate of card locator latency for each component type i on card type j, cmove_{i,j}, is
also included in the component allocation model. The interested reader is referred to DePuy
(1995) for a detailed discussion of an algorithm that can be used to calculate cmove_{i,j}. The card locator
latency estimates can be computed in advance of the mixed integer program and allow an estimate of
card locator latency to be included in the workload estimate.
The estimates for feeder latency and card locator latency are incorporated in the following mixed integer
programming formulation of the component allocation problem. This component allocation model also
includes considerations for whether a particular component type can be placed by a particular machine (not
all machines can place all component types), the number of adjacent feeder slots each component type
requires on each machine (typically large component types require several adjacent slots to be staged on a
machine while many smaller component types only require one slot), and the amount of standard time
required to place a specific component type by each machine (typically large component types require
more time to place than smaller component types).
Let
i = index for component types, i = 1, ..., n
j = index for card types, j = 1, ..., m
k = index for machines, k = 1, ..., p
u = index for slots, u = 1, ..., cls_p
cls_k = last slot index on machine k
q_j = number of card type j to be produced
d_{i,j} = quantity of component type i used on card type j
S_k = total number of slots on machine k
s_{i,k} = number of slots required by component type i if assigned to machine k
t_{i,k} = standard time to place/insert component type i using machine k
fmove_k = time to move one slot distance on machine k's feeder carriage
freef_k = number of slots that can be moved in the average time to place a component on machine k
cmove_{i,j} = card latency time for component type i on card type j
and let the decision variables be
Y_{i,j,u} = quantity of component type i placed/inserted on card type j from slot u
W_{i,j,u} = 1 if component type i from card type j is assigned to slot u, 0 otherwise
X_{i,u} = 1 if component type i is assigned to slot u, 0 otherwise
d_j = estimated assembly time for card type j
FMD_{j,k} = feeder latency distance of card type j on machine k
MXS_{j,k} = maximum machine k slot index occupied by a component from card type j
MNS_{j,k} = used to determine the minimum machine k slot index occupied by a component from card type j
OBJECTIVE

Minimize \sum_{j=1}^{m} d_j   [1]

SUBJECT TO:

d_j \ge q_j \Big( \sum_{i=1}^{n} \sum_{u=cls_{k-1}+1}^{cls_k} ( t_{i,k} Y_{i,j,u} + cmove_{i,j} W_{i,j,u} ) + fmove_k FMD_{j,k} \Big)
    \forall j = 1, ..., m, k = 1, ..., p   [2]

\sum_{u=1}^{cls_p} Y_{i,j,u} = d_{i,j}
    \forall i = 1, ..., n, j = 1, ..., m   [3]

Y_{i,j,u} \le d_{i,j} W_{i,j,u}
    \forall i = 1, ..., n, j = 1, ..., m, u = 1, ..., cls_p   [4]

Y_{i,j,u} \ge W_{i,j,u}
    \forall i = 1, ..., n, j = 1, ..., m, u = 1, ..., cls_p   [5]

W_{i,j,u} \le X_{i,u}
    \forall i = 1, ..., n, j = 1, ..., m, u = 1, ..., cls_p   [6]

\sum_{i=1}^{n} \sum_{u=cls_{k-1}+1}^{cls_k} s_{i,k} X_{i,u} \le S_k
    \forall k = 1, ..., p   [7]

\sum_{i'=1}^{n} \sum_{u'=u+1}^{u+s_{i,k}-1} X_{i',u'} \le s_{i,k} ( 1 - X_{i,u} )
    \forall i = 1, ..., n, k = 1, ..., p, u = cls_{k-1}+1, ..., cls_k   [8]

MXS_{j,k} \ge u W_{i,j,u}
    \forall i = 1, ..., n, j = 1, ..., m, k = 1, ..., p, u = cls_{k-1}+1, ..., cls_k   [9]

MNS_{j,k} \ge ( cls_k + 1 - u ) W_{i,j,u}
    \forall i = 1, ..., n, j = 1, ..., m, k = 1, ..., p, u = cls_{k-1}+1, ..., cls_k   [10]

FMD_{j,k} \ge MXS_{j,k} + MNS_{j,k} - ( cls_k + 1 ) - freef_k \Big[ \Big( \sum_{u=cls_{k-1}+1}^{cls_k} \sum_{i=1}^{n} W_{i,j,u} \Big) - 1 \Big]
    \forall j = 1, ..., m, k = 1, ..., p   [11]

MXS_{j,k} \ge 0
    \forall j = 1, ..., m, k = 1, ..., p   [12]

MNS_{j,k} \ge 0
    \forall j = 1, ..., m, k = 1, ..., p   [13]

FMD_{j,k} \ge 0
    \forall j = 1, ..., m, k = 1, ..., p   [14]

W_{i,j,u}, X_{i,u} \in \{0, 1\}
    \forall i = 1, ..., n, j = 1, ..., m, u = 1, ..., cls_p   [15]

Y_{i,j,u} \in \{0, 1, 2, ..., d_{i,j}\}
    \forall i = 1, ..., n, j = 1, ..., m, u = 1, ..., cls_p   [16]
The objective function minimizes the assembly time for a group of cards. The objective function
serves to reduce the bottleneck machine time for each card type and, hence, balance the workload. The
assembly time (weighted by a card type's production volume, q_j) for each card type on each machine is
calculated in constraints [2] by summing the placement time ( t_{i,k} Y_{i,j,u} ), the card latency time
( cmove_{i,j} W_{i,j,u} ), and the feeder latency time ( fmove_k FMD_{j,k} ). As mentioned previously, the card
latency time estimate may overestimate the amount of latency due to the card assembly locator, especially
if a component type i is placed on card type j from more than one feeder location. A correction factor may
need to be considered if many component types are anticipated to be assigned to more than one feeder
location.
Constraints [3] ensure all placements of each component type are made. Constraints [4], [5], and
[6] specify the relationship requirements among decision variables. Constraints [7] ensure that each
machine's slot capacity is not violated. Constraints [8] guarantee all slots required by a component type
that uses more than one slot are located adjacently on the feeder carriage. Constraints [9] and [10] are
used to find the largest numbered machine k slot containing a card type j component type ( MXS_{j,k} ) and
the smallest numbered machine k slot with a card type j component type ( cls_k + 1 - MNS_{j,k} ). These
largest and smallest slot numbers on a machine are then used in constraints [11] to estimate the feeder
distance moved that will incur a latency penalty (i.e. the feeder latency distance). Constraints [12], [13],
and [14] ensure that all feeder distances are nonnegative. The integrality constraints are shown in [15]
and [16].
As stated earlier, this model more closely represents the component allocation problem found in
industry than previous models by including machine optimization considerations. To our knowledge, this is the
first mathematical model for component allocation that considers feeder arrangement issues (DePuy
1995). Crama et al. (1997) develop a heuristic to assign components to machines based on a workload
estimate which includes feeder assignment considerations. Including feeder latency in the workload
calculation provides realism that previous models lack. Because the above model assigns component types
to specific slots, an initial solution to the feeder arrangement problem is found in addition to the allocation
of component types to machines.
3. Solution Approach
While developing this model has increased our understanding of the component allocation and feeder
assignment problems, a mathematical model is only a valuable tool for practical problems if it can be
solved. The size of the mixed integer program is huge for any realistic instance of the component
allocation problem; e.g., a group of 4 card types, 140 component types, and 3 machines leads to 57,834
binary variables, 35,206 integer variables, and 287,344 constraints. No currently available commercial
mathematical programming solver has been able to find even a feasible solution in a reasonable amount of
time (i.e. less than 100 CPU hours) to a realistic instance.
Fortunately, because of the myriad of unforeseeable manufacturing complications that arise on the
factory floor, an optimal solution is neither needed nor necessarily desired by industry. An automated solution
methodology is needed that finds a high quality solution in an acceptable amount of time. An 'acceptable
amount of time' has to be seen in relation to the amount of time it takes to actually produce the group of
cards. For example, a data set for a daily production run should not require more than one or two hours to
solve.
Because commercial general purpose solvers cannot even find feasible solutions for many realistic
instances, another solution methodology was created to find good solutions in an acceptable amount of
time. We have taken two approaches to accomplish this goal: reducing the problem size by model
simplification and data aggregation, and developing a special purpose solver for the component allocation
problem. The latter was accomplished by customizing a linear programming based branch-and-bound
algorithm by incorporating problem specific knowledge about the component allocation problem.
3.1 Problem Size Reduction
The approach used to reduce the size of the mixed integer program is twofold: model simplification and
slot aggregation. The model can be simplified by assuming each component type requires one slot on each
machine to which it is assigned (i.e., s_{i,k} = 1 \forall i, k). With this simplification, constraints [8] can be
removed from the model and constraints [7] become:

\sum_{i=1}^{n} \sum_{u=cls_{k-1}+1}^{cls_k} X_{i,u} \le S_k
    \forall k = 1, ..., p
This model simplification could alter the accuracy of the feeder movement estimates if there is a wide
variation in the number of slots each component type requires on a particular machine. However, if each
component type requires approximately the same number of slots on a machine, the assumption that each
component type requires 1 slot will not significantly degrade the accuracy of the feeder movement
estimate. Note this simplification may require a few proportional machine characteristic changes. For
example, if most component types i require two slots on machine k (i.e., s_{i,k} = 2), then simplifying the model
by assuming each component type requires one slot (i.e., s_{i,k} = 1) would require the total number of slots
on machine k, S_k, to be reduced by half, as well as the number of slots that can be moved in the average
time to place a component on machine k, freef_k. In addition, the time to move one slot distance on
machine k, fmove_k, would need to be doubled for this example.
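The proportional parameter changes for the one-slot simplification can be sketched as a small helper. The function and dictionary keys are our own illustration, and the scaling factor stands for the typical slots-per-component footprint on that machine.

```python
def rescale_machine(total_slots, free_feeder_slots, slot_move_time,
                    typical_slots_per_component):
    """Rescale machine-k parameters when every component type is modeled
    as occupying one slot instead of its typical multi-slot footprint."""
    f = typical_slots_per_component
    return {
        "S_k": total_slots // f,           # capacity shrinks: fewer "unit" slots
        "freef_k": free_feeder_slots / f,  # free distance in the coarser slot units
        "fmove_k": slot_move_time * f,     # one new slot is f old slots wide
    }

# Example from the text: most components need 2 slots on machine k.
params = rescale_machine(total_slots=60, free_feeder_slots=4,
                         slot_move_time=0.05, typical_slots_per_component=2)
```

With f = 2 this halves S_k and freef_k and doubles fmove_k, exactly the adjustment described above.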
The other approach used to reduce the size of the mixed integer program is slot aggregation. Slot
aggregation models several slots as a machine section. Component types are now assigned to a specific
section of the machine rather than to a specific slot. Machine sections can be as small as one slot and as
large as the entire feeder carriage. If a machine section contains more than one slot, the exact slot within
the section to which a component is assigned is not determined. Assigning component types to individual
slots (where each machine section is made up of one slot) leads to the best estimation of feeder carriage
movement since the distance the feeder carriage moves can be calculated with the most precision.
However, machine sections of 1 slot can dramatically increase the number of variables and size of the
problem.
The model must be slightly altered to accommodate slot aggregation. The index u is now used for
machine sections (i.e., Y_{i,j,u} = quantity of component type i placed on card type j from machine section u)
and cls_k now references the last machine section on machine k. In addition, the number of slots
per machine section ( sps_k ) is now included in the feeder capacity constraints. It is assumed that every
machine section on machine k comprises the same number of slots. Constraints [7] can be written as
follows to include slot aggregation:

\sum_{i=1}^{n} X_{i,u} \le sps_k
    \forall k = 1, ..., p, u = cls_{k-1}+1, ..., cls_k
In addition, constraints [11] must be altered to reflect the slot aggregation. The variables MXS_{j,k} and
MNS_{j,k} are now used to find the maximum and minimum machine k section index occupied by a
component type from card type j. To estimate the feeder carriage distance traveled in number of slots,
these section distances must be multiplied by the number of slots in each section ( sps_k ). Constraints [11]
can be written as follows to include slot aggregation:

FMD_{j,k} \ge sps_k ( MXS_{j,k} + MNS_{j,k} - ( cls_k + 1 ) ) - freef_k \Big[ \Big( \sum_{u=cls_{k-1}+1}^{cls_k} \sum_{i=1}^{n} W_{i,j,u} \Big) - 1 \Big]
    \forall j = 1, ..., m, k = 1, ..., p
As discussed above, the number of slots per machine section ( sps_k ) determines the total number of
variables. Machine sections of one slot are best for more accurately estimating the latency due to feeder
carriage movement. However, as the number of machines and total number of slots increases, modeling
one slot machine sections may not be practicable. For example, our earlier formulation with one slot per
machine section generated 93,040 variables and 287,344 constraints. This problem size can be reduced to
15,550 variables and 48,124 constraints by increasing the number of slots per machine section to 6.
Computational experiments using several industry representative data sets (Table 1) show that the
number of slots per machine section seems to have little effect on the objective value that can be obtained.
For each of the data sets and each of the aggregation levels, we ran our integer programming heuristic for a
limited amount of time. Then, we compared, for the different aggregation levels, how long it takes to reach
solutions of comparable quality, i.e., to reach a solution that is within 5 percent of the best solution found
for the lowest aggregation level. Table 2 shows the effect of various levels of slot aggregation on several
sets of industry representative data. It illustrates that, within limits, slot aggregation can be used to
significantly reduce the solution time without significantly distorting the quality of the solution obtained.
For example, the solution time for Data Set E was reduced from 16560 seconds using 4 slots per machine
section to 3595 seconds using 8 slots per machine section, a 78% reduction in solution time, while the
objective value (i.e. estimate of the cycle time) was only changed by 1.4%.
It should be noted that the quality of the estimate of the distance the feeder carriage moves is
influenced by slot aggregation. The more slots per machine section, the less accurate the estimate of
feeder movement time may be since it is not known to which slot in a machine section a component type is
assigned. Consequently, the objective value (i.e. cycle time estimate) is also less accurate when a large
number of slots are aggregated. If the number of slots in a machine section is larger than the number of
component types assigned to the machine (for a given card type) it could be the case that all feeder latency
distance information is lost. This observation suggests that the number of slots per machine section should
be some value less than the number of component types assigned to that machine for a particular card type.
An upper bound for the recommended number of slots per machine section can be calculated as follows:

sps_k < \frac{ \min_j ( \text{number of component types on card } j ) }{ \text{number of machines} }
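The bound above can be computed directly from the card data; the helper name below is our own.

```python
def max_slots_per_section(components_per_card, num_machines):
    """Exclusive upper bound on the recommended slots per machine section:
    the smallest per-card component-type count divided by the number of
    machines, so feeder latency distance information is not lost."""
    return min(components_per_card) / num_machines

# Example: cards with 45, 60, and 33 component types on 3 machines.
bound = max_slots_per_section([45, 60, 33], 3)
# bound == 11.0, so one would choose sps_k strictly below 11
```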
TABLE 1 HERE
TABLE 2 HERE
3.2 A Special Purpose Optimizer for the Component Allocation Problem
Problem size reduction, although beneficial, is not enough to be able to solve realistic component
allocation problems. Currently available commercial mathematical programming solvers rarely find a
feasible solution in a reasonable amount of time even for the reduced instances (i.e. in less than 100 CPU
hours). To be able to produce high quality solutions to realistic instances, we have developed an integer
programming heuristic by modifying and enhancing a linear programming based branch-and-bound
algorithm as follows:
1. Performing a truncated tree search;
2. Fixing variables with LP values close to one of their bounds;
3. Incorporating primal heuristics;
4. Employing a dedicated branching scheme.
Each of these methods is described in more detail below.
First, the number of nodes that will be evaluated is reduced through the use of optimality
tolerances. Optimality tolerances are used to terminate or prevent the evaluation of a node in the search
tree when its associated lower bound value is within a certain range. In our case, this range is 10%
of the value of the current best primal solution, i.e., the best feasible IP solution found so far. Note that this
results in a truncated tree search algorithm. Using optimality tolerances trades optimality for speed.
However, the solution produced by the algorithm still comes with a guaranteed quality; its value is never
more than the specified optimality tolerance above the best possible solution. This is a major advantage
over greedy type heuristics that do not provide such a quality measure. As indicated before, we are not
interested in optimal solutions, but in high quality solutions that can be obtained in an acceptable amount
of time.
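The tolerance-based pruning rule can be sketched as follows. This is a simplified illustration under the assumption of a minimization problem with a positive incumbent value; the function name is ours.

```python
def should_prune(node_lower_bound, incumbent_value, tolerance=0.10):
    """Truncated tree search: discard a node whose LP lower bound is
    already within `tolerance` of the best feasible (incumbent) value,
    since any solution found there can improve the incumbent by less
    than the tolerance."""
    return node_lower_bound >= incumbent_value * (1.0 - tolerance)

# With an incumbent of 100 and a 10% tolerance, a node with lower bound
# 95 is pruned, while a node with lower bound 80 is still explored.
```

This is exactly the quality guarantee mentioned in the text: any pruned node cannot contain a solution more than the tolerance better than the one returned.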
Secondly, the number of nodes that will be evaluated is reduced by fixing variables with LP values
close to one of their bounds. If the LP value of a binary variable is close to either 0 or 1 it suggests that a
high quality solution exists in which the variable is set to 0 or 1, respectively. Again, we trade quality for
speed. By fixing such a variable, we reduce the solution space and simplify the problem. Furthermore, the
size of the active LP is reduced which leads to faster solution times. In our current implementation, binary
variables with LP values greater than 0.90 are fixed to 1 and binary variables with values less than 0.05 are
fixed to 0. These threshold values have been determined experimentally. Data sets were solved using
various threshold values and those giving the best solution value in the shortest amount of time were
selected.
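The variable-fixing rule with the experimentally chosen thresholds can be sketched as a few lines of Python (the function name and the variable-name keys are our own illustration):

```python
def fix_binaries(lp_values, hi=0.90, lo=0.05):
    """Fix binary variables whose LP-relaxation values are near a bound:
    values greater than `hi` are fixed to 1, values less than `lo` are
    fixed to 0, and everything in between is left free."""
    fixed = {}
    for name, value in lp_values.items():
        if value > hi:
            fixed[name] = 1
        elif value < lo:
            fixed[name] = 0
    return fixed

# Example LP solution fragment: one variable near 1, one near 0, one free.
fixes = fix_binaries({"X_1_4": 0.97, "X_2_4": 0.03, "X_3_7": 0.55})
# fixes == {"X_1_4": 1, "X_2_4": 0}
```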
Thirdly, two primal heuristics, a construction heuristic and an improvement heuristic, have been
incorporated to construct and improve feasible solutions at all nodes in the search tree. Creating feasible
solutions is not difficult since the only limiting factor is machine capacity and in all practical cases the
machine capacity is greater than the number of component types to be assigned. However, creating high
quality feasible solutions is not so easy. The construction heuristic uses the current LP solution to guide
the construction of a feasible IP solution. A large Xi ,u value in the LP solution suggests that component
type i should be assigned to machine section u. The construction heuristic begins by first assigning each
component type i to a single machine section u based on the Xi ,u values from the LP. Starting with the
largest Xi ,u value, component i is assigned to machine section u as long as (1) the Xi ,u value is larger
than some specified value (we chose 0.5 through experimentation), (2) the component type i has not been
assigned to any other machine section, and (3) there is available space in machine section u for component
type i. Assignments are made in this fashion in decreasing order of Xi ,u values, until all component types
with large Xi ,u values have been examined. Then, each remaining unassigned component type is assigned
to a machine section such that feasibility is maintained and workload balance is improved. The
construction heuristic then considers multiple assignment of component types, again based on the Xi ,u LP
values. For each machine section u with available space, component types not already assigned to machine
section u are considered for multiple assignment in decreasing order of Xi ,u values. So first the
component type i, not already assigned to machine section u, with the largest Xi ,u value is considered for
multiple assignment in machine section u. To determine the quantity of component type i for board j to be
placed from the new machine section u (i.e. Yi, j, u ) that leads to the largest reduction in workload all
possibilities are examined. If a reduction in workload is achieved, the multiple assignment is made.
Multiple assignments continue to be made until no reduction in workload can be achieved or the machine
capacity has been reached. We allow multiple assignment of component types on the same machine, but
also on different machines.
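The first phase of the construction heuristic, assigning each component type to a single machine section guided by the LP values, can be sketched as follows. This is a simplified illustration with our own data layout; the fill-in of remaining components by workload balance and the multiple-assignment phase described above are omitted for brevity.

```python
def construct_assignment(x_lp, capacity, threshold=0.5):
    """Phase one of the construction heuristic: walk the LP values X[i, u]
    in decreasing order and give each component type one machine section
    while section capacity remains.

    x_lp     -- dict mapping (component, section) -> LP value of X_{i,u}
    capacity -- dict mapping section -> number of free slots
    """
    assignment = {}
    remaining = dict(capacity)
    for (comp, sec), value in sorted(x_lp.items(), key=lambda kv: -kv[1]):
        if value < threshold:
            break                      # only trust sufficiently large LP values
        if comp in assignment:
            continue                   # each component gets one section here
        if remaining.get(sec, 0) <= 0:
            continue                   # section is full
        assignment[comp] = sec
        remaining[sec] -= 1
    return assignment

# Two components compete for section "u1", which has one free slot; the
# larger LP value wins and the other component is left for the fill-in step.
a = construct_assignment({("c1", "u1"): 0.9, ("c2", "u1"): 0.8, ("c2", "u2"): 0.4},
                         {"u1": 1, "u2": 1})
# a == {"c1": "u1"}
```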
The improvement heuristic analyzes a feasible solution to determine if any improvements can be
made by rearranging the component types assigned to a machine, i.e., if the component types assigned to a
machine can be moved to other machine sections on that machine to reduce the feeder latency. Both
heuristics are applied, one after the other, at every node of the search tree. Consequently, at each node of
the search tree a feasible solution is generated.
Finally, a special branching scheme has been designed and incorporated to assist the search for
high quality solutions. A branching scheme specifies which variable is selected to branch on and in which
order unevaluated nodes are processed. The selection of a variable is based on two considerations. First,
we have divided the set of variables into priority classes and will always select a fractional variable with
highest priority. This is done to make sure that we branch on variables representing important decisions.
For example, it is more important in the component allocation problem to decide whether or not a
component type is assigned to a particular machine section ( W_{i,j,u} ) than to decide how many placements of
a component type are made from a machine section ( Y_{i,j,u} ). Therefore, we always branch on W-variables before branching on Y-variables. Second, within a priority class, we select the fractional
variable with LP value closest to 1. The motivation for selecting the variable with value closest to 1 is our
desire to find high quality solutions quickly. The LP solution indicates that it likes this variable to be 1, so
we force it to be one in one of the two child nodes and always process this node before the other. The
standard approach of selecting a fractional variable with LP value closest to 0.5 is motivated by lower
bound improvement, which is not our prime concern. For the node evaluation order, we have chosen best
bound search. The motivation for using best bound search instead of other strategies such as depth first
search is that best bound search tends to visit many different parts of the search tree and is likely to
generate many different LP solutions, which is beneficial to our primal heuristics.
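The two branching rules described above can be sketched as follows. The names and data structures are illustrative, not MINTO's API, and the sketch assumes 0-1 LP values for the candidate branching variables.

```python
import heapq

def select_branching_variable(lp_values, priority):
    """Pick the fractional variable in the highest priority class (lowest
    class number, e.g. W-variables before Y-variables) whose LP value is
    closest to 1 (assuming LP values in [0, 1])."""
    eps = 1e-6
    fractional = [v for v, x in lp_values.items() if eps < x < 1 - eps]
    if not fractional:
        return None  # LP solution is integral; nothing to branch on
    top = min(priority[v] for v in fractional)
    return max((v for v in fractional if priority[v] == top),
               key=lambda v: lp_values[v])

class NodeQueue:
    """Unevaluated nodes ordered by LP bound: best-bound search evaluates
    the node with the smallest lower bound (minimization) first."""
    def __init__(self):
        self._heap = []
        self._count = 0  # tie-breaker so nodes never compare directly
    def push(self, bound, node):
        heapq.heappush(self._heap, (bound, self._count, node))
        self._count += 1
    def pop(self):
        return heapq.heappop(self._heap)[2]
```

In this sketch, priority classes encode the observation that W-decisions are more important than Y-decisions, and the heap realizes best-bound node selection.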
The integer programming heuristic for the component allocation problem has been implemented
using MINTO, a Mixed INTeger Optimizer (Savelsbergh and Nemhauser 1994). MINTO is a software
system that solves mixed-integer linear programs by a branch-and-bound algorithm with linear relaxations.
It allows its users to build a special purpose mixed integer optimizer by providing various mechanisms for
incorporating problem specific knowledge.
4. Case Studies
As discussed in the previous section, the two approaches that have been applied to enable us to produce
high quality solutions to the component allocation model in an acceptable amount of time are problem size
reduction and algorithm customization. To evaluate the efficiency and effectiveness of these approaches,
we have conducted the following experiment. We have used our integer programming heuristic to solve a
large data set with 6 card types, 200 component types, and 3 machines. In three hours of CPU time on an
HP 9000/735 workstation with a PA-7200 processor, it produced a solution with an objective function
value that was within 6.5% of the LP value. Without slot aggregation and without algorithm customization
no feasible solution could be found in 100 hours of CPU time. For a smaller data set with 4 card types, 50
component types, and 4 machines, it took our integer programming heuristic less than 10 minutes of CPU
time to produce a solution with an objective function value that was within 8.8% of the LP value. Without
slot aggregation and without algorithm customization it took almost 10 hours of CPU time to produce a
solution with objective function value within 8.8% of the LP value. The benefit of the solution
methodology presented in this paper is apparent, especially on larger instances. Using the customized
component allocation optimizer, good solutions were obtained in a reasonable amount of time whereas no
feasible solution could be obtained using a general purpose mathematical programming solver.
It should be noted that although all the customizations that we have done contribute to the overall
performance of the integer programming heuristic, the incorporation of primal heuristics is probably the
most important, since only a relatively small number of nodes of the search tree could be evaluated within
the given time limits. Below, three case studies are presented that were performed to evaluate the quality of
the solutions obtained by the integer programming heuristic discussed in Section 3 for the component
allocation model formulated in Section 2. These case studies were performed to determine if our solution
methodology provides a viable solution approach for industry-sized problems as well as to determine the
accuracy of the model.
4.1 Case Study 1
The objective of this case study is to show that the integer programming heuristic represents a viable
solution approach for industry-sized problems and produces high quality solutions in an acceptable amount
of time. Table 3 presents the characteristics of the data sets used. It reports the problem characteristics, i.e.,
number of component types, number of card types, number of machines, and degree of slot aggregation as
well as the characteristics of the associated integer program, i.e., number of variables and number of
constraints. The data sets in Table 3 are representative of daily or weekly production runs in industry.
These data sets were solved on an HP 9000/735 workstation with a PA-7200 processor. The results are
presented in Table 4, which shows the time required to find the first feasible IP solution, first feasible IP
objective value, time to best IP solution, best IP solution objective value, LP objective value and the
integrality gap. Each of the smaller, daily data sets (i.e., data sets 1-4) was allowed to run for 2 hours,
while the larger data sets (i.e., data sets 5-8) were allowed to run for 6 hours. These maximum run times
were chosen in light of the time frame that the actual decisions had to be made. It should be noted that the
optimal IP value is not known, so our solution values may be closer to the optimal IP value than is implied
by the integrality gap.
Case study 1 indicates that solutions to the component allocation problem as modeled in Section 2
can be obtained in an acceptable amount of time using the integer programming heuristic presented in
Section 3. It should be noted that for the data sets included in this case study most of the time was spent
on solving linear programs. In fact, for some of the larger data sets almost half of the time was spent on
solving just the first LP. This explains why the first solution found is often the best solution found for the
large data sets; there is only enough time to generate a couple of solutions.
TABLE 3 HERE
TABLE 4 HERE
4.2 Case Study 2
The objective of this case study is to validate the accuracy of the cycle times computed by the model. For
each data set, we were given the component allocation used and the shop floor cycle times of each of the
card types. With the component allocation fixed, our component allocation model was solved to
optimality, which effectively means the cycle times were computed using our model since all allocation
decisions were fixed in advance. The solution values were then compared to the shop floor cycle times of
the cards to determine how well the model cycle times compare to the shop floor cycle times.
The cards in this data set are currently manufactured in this industrial partner's assembly
plant. Each card type is produced on two machines. Card types 1 and 3 are produced on a Panasert MVII
machine and a Panasert MVIIC machine. Card types 2 and 4 are produced on two Panasert MSH
machines. The problem characteristics for these data sets are
Number of component types: 13-38
Number of placements per card type: 38-372
Number of slots required by a component type (s_{i,k}): 1-4
Time to place a component type (t_{i,k}): 0.14-1.00 seconds.
Table 5 reports the shop floor cycle times, as provided by the industry partner, the model cycle times, and
their difference. Cycle times computed by our model were found to be within 7.25% of the actual factory
floor cycle times.
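The comparison reported here is a simple relative gap between model and shop floor cycle times. A minimal helper, with made-up placeholder numbers rather than the case study data, is:

```python
def percent_difference(model, actual):
    """Relative gap between a model cycle time and the shop floor cycle
    time, expressed as a percentage of the shop floor value."""
    return abs(model - actual) / actual * 100.0

# Illustrative placeholder values (not the case study data):
# a model cycle time of 93 s against a shop floor time of 100 s.
gap = percent_difference(93.0, 100.0)  # 7.0 percent
```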
The data sets supplied by the industrial partner were then solved by our integer programming
heuristic, now without the component allocation fixed, to determine if any allocation improvements could
be found. Table 6 shows the cycle times for the component allocation of the industrial partner and the
cycle times produced by the integer programming heuristic. The results imply that the industrial partner's
cycle times may be improved by changing the component allocation. These improved solutions have not
yet been setup and run on the industrial partner's shop floor, so the true shop floor savings are not
available.
TABLE 5 HERE
TABLE 6 HERE
The results of case study 2 show that the model cycle times are close to the shop floor cycle times
for the data sets solved. A model cycle time that is close to the actual cycle time indicates that the model
does a relatively good job of portraying the machine optimization considerations of feeder and card locator
latency. Because the model cycle times seem to be a fairly good estimate of the actual cycle times, one
would be led to believe that balanced model solutions will translate to balanced machine workloads on the
factory floor. The case study solutions show the maximum workload difference between the machine with
the highest workload and the machine with the lowest workload for a particular card type was only 6.7
percent (see DePuy, 1995, for more details). The solutions obtained using the component allocation model
indicate that a good workload balance can be achieved.
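One plausible way to express the workload balance figure quoted above is the spread between the most and least loaded machines as a percentage of the maximum workload. The exact definition used in the case study is in DePuy (1995); this sketch only illustrates one such measure with placeholder data.

```python
def workload_imbalance(workloads):
    """Spread between the most and least loaded machines for one card type,
    as a percentage of the maximum workload (one plausible definition)."""
    hi, lo = max(workloads), min(workloads)
    return (hi - lo) / hi * 100.0

# Illustrative placeholder workloads (seconds per card on each machine):
spread = workload_imbalance([100.0, 95.0, 90.0])  # 10.0 percent
```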
4.3 Case Study 3
Ideally, another case study, similar to case study 2, would be conducted to validate the accuracy of the
cycle times computed for a multiple card type scenario. However, an industrial partner willing to share
their multiple card allocations and machine setups could not be found. The next best way to evaluate the
performance of our integer programming heuristic on multiple card type instances would be to compare
our solutions to the solutions of other software packages that can solve multiple card component allocation
and machine optimization problems. However, no such multiple card machine optimizer was readily
available. In fact, it is the opinion of many of the industrial partners consulted for this research that a good
multiple card optimizer does not exist today. Consequently, the cycle times for multiple card allocation
solutions cannot be directly validated.
To estimate the increase in individual card cycle times that results from setting up a machine for
several card types rather than optimizing it for one specific card type, the multiple card type cycle times
can be compared to the single card type cycle times. It
should be noted that there is often a tradeoff between increased population time and decreased setup time
when using a multiple card type setup. Usually some population time degradation occurs when a group of
card types is set up at once on a machine since the feeder arrangement ordinarily cannot be optimized for
each card type in the group. However, a savings in machine setup time is often realized when using a
group setup since the machines only need to be set up once for the entire group of card types.
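The degradation comparison used in this case study can be sketched as a per-card percent increase of the group-setup cycle time over the individually optimized cycle time. The numbers below are illustrative placeholders, not the case study data.

```python
def degradation(single_times, multiple_times):
    """Per-card percent increase of the group-setup (multiple card) cycle
    time over the individually optimized (single card) cycle time."""
    return [(m - s) / s * 100.0
            for s, m in zip(single_times, multiple_times)]

# Illustrative placeholder cycle times (seconds) for two card types:
# single card setups of 10 s and 20 s, group setups of 11 s and 21 s.
pct = degradation([10.0, 20.0], [11.0, 21.0])  # [10.0, 5.0] percent
```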
In this case study, the multiple card allocation solutions obtained using the model were compared
to the best case scenario of single card allocation in an effort to evaluate the multiple card assignment
solutions. Five industry representative data sets, each consisting of several card types, were used for this
case study. The characteristics of the five industry representative data sets are shown in Table 7.
First, the integer programming heuristic was used to produce solutions for the component
allocation problem for each card type in each data set. This resulted in an allocation of a single card's
component types to several machines and an associated cycle time for each card.
TABLE 7 HERE
Next, a multiple card allocation for each data set was found. In other words, an allocation of all component types for each data
set was found using the integer programming heuristic. This allocation is for the case where the same
machine setup is used to produce all the cards in the group (data set). The model cycle times from the
multiple card allocation were then compared to the model cycle times from the single card allocation to
evaluate the potential attractiveness of the multiple card allocation. If the card cycle times of the data set
solved with multiple card allocation are close to the single card allocation cycle times, it indicates the
model is good at allocating component types on a single card level even when a multiple card setup is
implemented. It should be noted that the converse of the preceding statement is not true. If the total cycle
time of a data set solved with multiple card allocation is not close to that of the single card allocation, the
multiple card allocation is not necessarily bad because it may be the best multiple card allocation available.
Table 8 compares the single card allocation population cycle times and multiple card allocation population
cycle times.
Table 8 indicates that the multiple card allocation cycle times are close to the single card
allocation cycle times. This result implies that our component allocation model probably does a relatively
good job of allocating component types for a group of cards. The solutions balance the multiple card type
workload fairly well compared to the best possible situation of producing each card type individually and
therefore being able to achieve the best workload balance for each card.
TABLE 8 HERE
While the above card-by-card comparison of the cycle times from multiple card allocation and single card
allocation can be made, no tools are available to evaluate the accuracy of the cycle time estimation for the
entire group of cards set up
at once. As mentioned previously, if setup times were included in this comparison, the difference between
the multiple allocation cycle times and the single card cycle times may be smaller. As shown in Table 8,
the degradation in cycle times for producing the group of cards in one setup rather than individually seems
to be comparable to those degradation times found by Ammons et al. (1992).
5. Conclusions and Extensions
Component allocation is an important element of process planning for printed circuit card assembly
systems, and directly impacts productivity and cost. As demonstrated here, component allocation is
amenable to improvement using optimization methods. We have developed an integer programming
heuristic for solving component allocation problems that performs well on realistic instances.
The implication for the electronics industry is that high quality solutions to real and economically
important component allocation problems can be obtained in an acceptable amount of time. The
implication for operations research is that integer programming based solution techniques can be used
effectively to successfully solve large-scale real-world optimization problems.
There are several potential areas of future research based on the work presented in this paper.
First, the model can easily be extended to handle the situations in which component types require more
than one slot. Cut generation techniques can be used to include only the relevant machine capacity
constraints. Secondly, the component allocation model can be adapted to other placement machines. The
current model was formulated for general turret type placement machines, which account for the majority
of placement machines in use today. If a completely different machine is used in the future, some aspects
of the model, e.g., the feeder movement calculations, may have to be altered. Finally, this work may be
expanded to include greater detail of the higher level process planning issues interrelated to component
allocation, such as the setup strategy decisions. A method of including more of these process planning
issues in the component allocation problem may lead to a better allocation since more elements of the real
world problem are considered. Similarly, incorporation of component allocation with other process
planning issues may facilitate better solutions for the grouping and production planning problems.
This work was motivated and guided by involvement with companies who make process planning
decisions. What we have demonstrated is the potential opportunity for improvement through
optimization-based heuristics. What remains is the challenge of translating these models and methods into widely used,
reliable, and affordable decision support tools.
REFERENCES
Ahmadi, J., R. Ahmadi, M. Hirofumi, and D. Tirupati. (1995). “Component Fixture
Positioning/Sequencing for Printed Circuit Card Assembly with Concurrent Operations.”
Operations Research 43, 444-457.
Ahmadi, J., S. Grotzinger, and D. Johnson. (1988). “Component Allocation and Partitioning for Dual
Delivery Placement Machine.” Operations Research 36, 176-191.
Ahmadi, R., and P. Kouvelis. (1994). “Staging Problem of a Dual Delivery Pick-and-Place Machine in
Printed Circuit Assembly.” Operations Research 42, 81-91.
Ammons, J. C., M. Carlyle, L. Cranmer, G.W. DePuy, K.P. Ellis, L.F. McGinnis, C.A. Tovey, and H. Xu.
(1997). “Component Allocation to Balance Workload in Printed Circuit Card Assembly Systems.”
IIE Transactions 29, 265-275.
Ammons, J. C., M. Carlyle, G.W. DePuy, K.P. Ellis, L.F. McGinnis, C.A. Tovey, and H. Xu. (1992).
“Computer Aided Process Planning in Printed Circuit Card Assembly.” Proceedings of
International Electronics Manufacturing Technology Symposium, Baltimore, Maryland,
September 28-30.
Ammons, J. C., M. Carlyle, G.W. DePuy, K.P. Ellis, L.F. McGinnis, C.A. Tovey, and H. Xu. (1993).
“Computer Aided Process Planning in Printed Circuit Card Assembly.” IEEE Transactions on
Components, Hybrids, and Manufacturing Technology 16, 370-376.
Askin, R.G., M. Dror, and A.J. Vakharia. (1994). “Printed Circuit Card Family Grouping and Component
Allocation for a Multimachine, Open-Shop Assembly Cell.” Naval Research Logistics 41, 587-608.
Ball, M.O. and M.J. Magazine. (1988). “Sequencing of Insertions in Printed Circuit Card Assembly.”
Operations Research 36, 192-201.
Ben-Arieh, D. and M. Dror. (1990). “Part Assignment to Electronic Insertion Machines: Two Machine
Case.” International Journal of Production Research 28, 1317-1327.
Berrada, M. and K.E. Stecke. (1986). “A Branch and Bound Approach for Machine Load Balancing in
Flexible Manufacturing Systems.” Management Science 32, 1316-1335.
Carmon, T.F., O.Z. Maimon and E.M. Dar-El. (1989). “Group Set-Up for Printed Circuit Card
Assembly.” International Journal of Production Research 27, 1795-1810.
Crama, Y., O.E. Flippo, J. van de Klundert, and F.C.R. Spieksma. (1997). “The Assembly of Printed
Circuit Boards: A Case with Multiple Machines and Multiple Board Types.” European Journal of
Operational Research 98, 457-472.
Crama, Y., A.W.J. Kolen, A.G. Oerlemans, and F.C.R. Spieksma. (1990). “Throughput Rate
Optimization in the Automated Assembly of Printed Circuit Cards.” Annals of Operations
Research 26, 455-480.
DePuy, G.W. (1995). “Component Allocation to Balance Workload in Printed Circuit Card Assembly
Systems.” Ph.D. Dissertation, Georgia Institute of Technology, Atlanta, Georgia.
DePuy, G.W., J.C. Ammons, and L.F. McGinnis. (1997). “Formulation of a General Component
Allocation Model for Printed Circuit Card Assembly Systems.” Proceedings of the 1997 Sixth
Industrial Engineering Research Conference, Miami, Florida, May 17-18.
Gavish, B. and A. Seidmann. (1987). “Printed Circuit Cards Assembly Automation - Formulations and
Algorithms.” Proceedings of IXth International Conference on Production Research, Cincinnati,
Ohio, 662-673.
Grotzinger, S. (1988). “Positioning for a Dual Delivery Placement Machine.” IBM Research Division,
T.J. Watson Research Center, Yorktown Heights, New York.
Grotzinger, S. (1992). “Feeder Assignment Models for Concurrent Placement Machines.” IIE
Transactions 24, 31-46.
Hillier, M.S. and M.L. Brandeau. (1993). “Optimal Operation Assignment and Production Grouping in
Printed Circuit Card Manufacturing.” Working Paper, Department of Industrial Engineering and
Engineering Management, Stanford University.
Leipala, T., and O. Nevalainen. (1989). “Optimization of the Movements of a Component Placement
Machine.” European Journal of Operational Research 38, 167-177.
Lofgren, C.B. and L.F. McGinnis. (1986). “Soft Configuration in Automated Insertion.” Proceedings of
the 1986 IEEE Conference on Robotics and Automation, San Francisco, California.
McGinnis, L.F., J.C. Ammons, M. Carlyle, L. Cranmer, G.W. DePuy, K.P. Ellis, C.A. Tovey, and H. Xu.
(1992). “Automated Process Planning for Printed Circuit Card Assembly.” IIE Transactions 24,
18-30.
Savelsbergh, M.W.P. and G.L. Nemhauser. (1994). “Functional Description of MINTO, a Mixed INTeger
Optimizer, Version 2.0b.” Report COC-91-03 from Computational Optimization Center, Georgia
Institute of Technology, Atlanta, Georgia.
Zijm, W.H.M. and A. van Harten. (1993). “Process Planning for a Modular Component Placement
System.” International Journal of Production Economics 31, 123-135.