European Journal of Operational Research 188 (2008) 140–152
Production, Manufacturing and Logistics
Machine-part cell formation in group technology using
a modified ART1 method
Miin-Shen Yang, Jenn-Hwai Yang
Department of Applied Mathematics, Chung Yuan Christian University, Chung-Li 32023, Taiwan
Corresponding author: M.-S. Yang (msyang@math.cycu.edu.tw); Tel.: +886 3 265 3119; fax: +886 3 265 3199
Received 17 August 2004; accepted 9 March 2007
Available online 29 April 2007
Abstract
Group Technology (GT) is a useful way of increasing productivity for manufacturing high quality products and improving the flexibility of manufacturing systems. Cell formation (CF) is a key step in GT. It is used in designing good cellular manufacturing systems using the similarities between parts in relation to the machines in their manufacture, and it can identify part families and machine groups. Recently, neural networks (NNs) have been widely applied in GT due to their robust and adaptive nature, and they are well suited to CF in a wide variety of real applications. Although Dagli and Huggahalli adopted the ART1 network with an application to machine-part CF, there are still several drawbacks to this approach. To address these concerns, we propose a modified ART1 neural learning algorithm in which the vigilance parameter can be estimated directly from the data, making it more efficient and reliable than Dagli and Huggahalli's method for selecting a vigilance value. We then apply the proposed algorithm to machine-part CF in GT. Several examples are presented to illustrate its efficiency. In comparison with Dagli and Huggahalli's method, based on the performance measure proposed by Chandrasekaran and Rajagopalan, our modified ART1 neural learning algorithm provides better results. Overall, the proposed algorithm is vigilance parameter-free and very efficient to use in CF with a wide variety of machine/part matrices.
© 2007 Elsevier B.V. All rights reserved.
Keywords: Group technology; Cell formation; ART1 neural network; Learning algorithm; Group efficiency
1. Introduction
Profit in manufacturing can be achieved by lowering costs and improving product quality. There
are some general guidelines for reducing the cost
of products without any decrease in quality. These
include improving production methods, minimizing
flaws, increasing machine utilization and reducing
transit and setup time. Research and development
(R&D) engineering is the first line of defense in
addressing these issues through the design of a
unique product and competitive production techniques. Keeping a close watch over the production
process is also important in the pursuit of profit.
Although the traditional statistical process control
(SPC) technique has several merits, control chart
pattern recognition has become a popular tool
for monitoring abnormalities in the manufacturing
process by recognizing unnatural control chart patterns. This approach not only decreases waste but also prevents defects more efficiently. Many
researchers have applied neural network models to
the manufacturing process with generally good
results (see Chang and Aw, 1996; Cheng, 1997;
Guh and Tannock, 1999; Yang and Yang, 2002).
The production process requires a variety of
machines and often some complex procedures. Frequently, parts have to be moved from one place to
another. This results not only in machine idle time
but also wastes the manpower required for the physical movement of the parts. On the other hand, an
increasing number of companies are encountering
small to medium size production orders. In this situation, more setup changes and frequent part or
machine movements occur. Group technology
(GT) has proven to be a useful way of addressing
these problems by creating a more flexible manufacturing process. It can be used to exploit similarities
between components to achieve lower costs and
increase productivity without losing product quality. Cell formation (CF) is a key step in GT. It is a
tool for designing cellular manufacturing systems
using the similarities between parts and machines to identify part families and machine groups. The parts processed by the same machine group have similar requirements, reducing travel and setup time.
In CF, a binary machine/part matrix of dimension m × p is usually provided (see Fig. 1(a)). The m rows indicate m machines and the p columns represent p parts. Each binary element of the m × p matrix indicates a relationship between parts and machines: a "1" ("0") represents that the corresponding part should (should not) be worked on the corresponding machine. The matrix also displays all similarities in parts and machines. Our objective is to group parts and machines into cells based on their similarities. If we
consider the machine/part matrix shown in Fig. 1(a), the result shown in Fig. 1(b) is obtained by a CF clustering method based on the similarities in parts and machines. Fig. 1(b) demonstrates that parts 1 and 4, and machines 1 and 3, are in one cell while parts 3, 5 and 2, and machines 2 and 4, are in another cell. In
this case, there are no "1" entries outside the diagonal blocks and no "0" entries inside the diagonal blocks, so we call it a perfect result. That is, the two cells are completely independent, and each part family is processed only within its machine group. Unfortunately, such a perfect result for a machine/part matrix is rarely seen in real situations. On the other hand, another machine/part matrix is shown in Fig. 1(c) with its result in Fig. 1(d). We see that there is a "1" outside the diagonal block. In this case, part 3 is called an "exceptional part" because it is worked on by two or more machine groups, and machine 1 is called a "bottleneck machine" as it processes two or more part families. There is also a "0" inside the diagonal block in Fig. 1(d); such an entry is called a "void".
In general, an optimal result for a machine/part matrix produced by a CF clustering method should satisfy the following two conditions:
(a) minimize the number of 0s inside the diagonal blocks (i.e., voids);
(b) minimize the number of 1s outside the diagonal blocks (i.e., exceptional elements).
Fig. 1(a). Machine/part matrix.
Fig. 1(b). Optimal result of Fig. 1(a).
Fig. 1(c). Machine/part matrix.
Fig. 1(d). Optimal result of Fig. 1(c).
Based on these optimality conditions, Fig. 1(b) is an optimal result of Fig. 1(a), and Fig. 1(d) is an optimal result of Fig. 1(c).
There are many CF methods in the literature (see
Singh, 1993; Singh and Rajamani, 1996). Some of
them use algorithms with certain energy functions
or codes to sort the machine/part matrix. Examples
include the bond energy algorithm (BEA) (McCormick et al., 1972), rank order clustering (ROC)
(King, 1980), modified rank order clustering
(MODROC) (Chandrasekaran and Rajagopalan,
1986a) and the direct clustering algorithm (DCA)
(Chan and Milner, 1982). Others use similarity-based hierarchical clustering (Mosier, 1989; Wei
and Kern, 1989; Gupta and Seifoddini, 1990; Shafer
and Rogers, 1993) or a simulated annealing approach
(see Xambre and Vilarinho, 2003). Examples of
these methods include single linkage clustering
(SLC), complete linkage clustering (CLC), average
linkage clustering (ALC) and linear cell clustering
(LCC). However, these CF methods all assume
well-defined boundaries between machine/part cells.
These crisp boundary assumptions may fail to fully
describe cases where machine/part cell boundaries
are fuzzy. This is why fuzzy clustering and fuzzy
logic methods are applied in CF (see Xu and Wang,
1989; Chu and Hayya, 1991; Gindy et al., 1995;
Narayanaswamy et al., 1996).
Neural networks have been studied for many years and widely applied in various areas. A neural network is a learning scheme that uses mathematical models to simulate the parallel operation of biological nervous systems. Lippmann (1987) gave a tutorial review of neural computing and surveyed six important neural network models that can be used in pattern classification. In general, neural network models are of three
types: feedforward networks (e.g., multilayer perceptron, see Rumelhart et al., 1986), feedback networks
(e.g., Hopfield network, see Hopfield, 1982) and
competitive learning networks (e.g., self-organizing
map (SOM), see Kohonen, 1981; adaptive resonance
theory (ART1), see Carpenter and Grossberg, 1988).
Both feedforward and feedback networks are supervised; the competitive learning network, on the other hand, is unsupervised. By applying neural network learning, GT becomes more adaptive in a variety of situations. Recently, more research has been conducted
in applying neural networks to GT by using backpropagation learning (Kao and Moon, 1991), competitive learning (Malave and Ramachandran,
1991), ART1 (Kaparthi and Suresh, 1992; Dagli
and Huggahalli, 1995) and SOM (Venugopal and
Narendran, 1994; Guerrero et al., 2002).
Since the competitive learning network is an
unsupervised approach, it is very suitable for use in
GT. SOM is best used in GT when the number of neural nodes is known a priori, but this number is unknown in most real cases. ART1, in contrast, is a competitive learning network with a flexible number of neural nodes, which makes it better suited to GT than SOM. However, some problems may be encountered in directly applying ART1
to GT. Thus, Dagli and Huggahalli (1995) revised
ART1 and then applied it to the machine-part CF.
Although Dagli and Huggahalli (1995) presented a
good application of ART1 to the machine-part
CF, we find that their method still has several drawbacks. In this paper, we first propose a modified
ART1 to overcome these drawbacks and then apply
our modified ART1 to the machine-part CF in GT.
The remainder of the paper is organized as follows.
Section 2 reviews the original ART1 with Dagli
and Huggahalli’s application to the machine-part
CF in GT. We describe the drawbacks that arise when ART1 is applied to CF as in Dagli and Huggahalli (1995), and we then propose a modified ART1 to correct
these problems. Several examples with some
machine/part matrices are presented and compared
in Section 3. Conclusions are made in Section 4.
2. A modified ART1 algorithm for cell formation
Although GT has been studied and used for more than three decades, neural network applications in GT began only during the last ten years. Neural network learning is beneficial for GT in a variety of real cases because it is robust and adaptive. Among neural network models, competitive learning is unsupervised, which makes it valuable for GT; for examples, see the applications of Malave and Ramachandran (1991), Kaparthi and Suresh (1992), Venugopal and Narendran (1994) and
Guerrero et al. (2002). In competitive learning,
Kohonen’s SOM has widely been studied and
applied (see Kohonen, 1998, 2001; Guerrero et al.,
2002; Lin et al., 2003), but the SOM node number needs to be known a priori, and this number is unknown in most real cases. An appropriate learning algorithm should be able to function without being given the node number. Moreover, the SOM learning system often encounters the "stability and plasticity dilemma" (Grossberg, 1976): learning (plasticity) is essential, but a stability mechanism that resists random noise is also important. ART neural networks were proposed to solve this stability and plasticity dilemma (Carpenter and Grossberg, 1987, 1988). On the other
hand, the machine/part matrix data type in GT is
binary, making ART1 a good choice. Thus, Kaparthi and Suresh (1992) first applied the following
ART1 algorithm to machine-part CF in GT.
ART1 algorithm

Step 1. Give a vigilance parameter $\rho$ and set the initial weights as
$$t_{ij}(0) = 1, \qquad b_{ij}(0) = \frac{1}{1+n}, \qquad i = 1,\ldots,n,\ j = 1,\ldots,c,$$
where $t_{ij}(t)$ is the top-down weight representing the centers, and $b_{ij}(t)$ is the bottom-up connective weight between input node i and output node j at time t, used to evaluate matching scores in the training stage.
Step 2. Input a training vector $x$.
Step 3. Use the bottom-up weights to evaluate matching scores and determine the winner: node $j^*$ is the winner when $\mathrm{node}_{j^*} = \max_j \{\mathrm{node}_j\}$, where $\mathrm{node}_j = \sum_{i=1}^{n} b_{ij}(t)\, x_i$.
Step 4. Set $\|X\| = \sum_{i=1}^{n} x_i$ and $\|T_{j^*} X\| = \sum_{i=1}^{n} t_{ij^*}\, x_i$, and test whether the similarity measure $\|T_{j^*} X\| / \|X\| > \rho$. IF the similarity measure is larger than $\rho$, THEN go to Step 6; ELSE go to Step 5.
Step 5. Disable the node $j^*$ so that it will not become a candidate in the next iteration and go to Step 3. If there is no winner node, then activate a new node and go to Step 2.
Step 6. Update the winner as follows:
$$t_{ij^*}(t+1) = t_{ij^*}(t)\, x_i, \qquad b_{ij^*}(t+1) = \frac{t_{ij^*}(t)\, x_i}{0.5 + \sum_{i=1}^{n} t_{ij^*}(t)\, x_i}.$$
Step 7. Go to Step 2 until all the training data have been input.
However, Dagli and Huggahalli (1995) pointed out that directly applying the above ART1 algorithm might present the following drawbacks:
(a) A vector with few "1" elements is called a sparse vector, in contrast to a dense vector. The stored patterns grow sparser as more input data are applied.
(b) The input vector order influences the results. That is, if a sparse vector is input first, it will easily cause the final stored patterns to grow sparse.
(c) Determining the vigilance parameter $\rho$ in ART1 is important but always difficult. If the similarity between the winner and the input X is larger than the vigilance parameter, as shown in Step 4 of the ART1 algorithm, the winner is updated; otherwise a new node is activated as a new group center. Obviously, a larger vigilance yields more plasticity and generates more groups, whereas a smaller vigilance yields greater stability and may result in only one group. Thus a suitable vigilance parameter is very important.
To address the first two drawbacks, Dagli and Huggahalli (1995) re-ordered the input vectors according to the number of 1s in each vector and applied them to the network in descending order of the number of 1s. Then, when a comparison between two vectors is successful, instead of storing Y, the result of "AND"ing the vectors X and $T_{j^*}$, the vector having the higher number of 1s among Y and $T_{j^*}$ is stored. This ensures that the stored patterns become denser as the algorithm progresses. To address the third drawback, Dagli and Huggahalli (1995) first ran a pre-process on the machine/part matrix to determine appropriate vigilance values $\rho_1$ and $\rho_2$ for part families and machine groups, obtaining group numbers N and M, respectively. The values are increased to obtain different part family and machine group numbers, and the final vigilance value is chosen as the one that satisfies N = M.
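This pre-process can be read as a grid search over candidate vigilances, keeping the pairs where the part-family count N equals the machine-group count M. The sketch below is a hedged approximation only: `count_groups` is a hypothetical stand-in (a greedy one-pass grouping in the style of the ART1 vigilance test), since the exact pre-process details of Dagli and Huggahalli are not reproduced here.

```python
import numpy as np

def count_groups(vectors, rho):
    # Hypothetical stand-in for one grouping pass: a new center is
    # opened whenever no existing center matches the input closely enough.
    centers = []
    for x in vectors:
        if not any((c * x).sum() / max(x.sum(), 1) > rho for c in centers):
            centers.append(x)
    return len(centers)

def vigilance_preprocess(A, rho_grid):
    """Return (rho1, rho2, c) triples with equal part/machine group counts."""
    matches = []
    for rho1 in rho_grid:
        N = count_groups(list(A.T.astype(float)), rho1)      # part families
        for rho2 in rho_grid:
            M = count_groups(list(A.astype(float)), rho2)    # machine groups
            if N == M:
                matches.append((rho1, rho2, N))
    return matches
```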
However, on further examining Dagli and Huggahalli's solutions, we find that several drawbacks remain. The first two modifications of Dagli and Huggahalli (1995) often affect the subsequent input vectors, giving them no opportunity to update the output layer; that is, there is no learning behavior after that. This can be demonstrated by an example with the machine/part matrix shown in Fig. 2, where there are 9 machines and 9 parts. The objective is to identify machine groups. We use $\{x_1, \ldots, x_9\}$ to represent the 9 machine data vectors of the machine/part matrix shown in Fig. 2. Chu and Hayya (1991) pointed out that a better and more reasonable result for the machine groups in the machine/part matrix shown in Fig. 2 is as follows:
Fig. 2. Machine/part matrix.
Machine Group 1: 3, 4, 7, and 8.
Machine Group 2: 1 and 5.
Machine Group 3: 2, 6, and 9.
Suppose all competitions satisfy our expectations, so that the center of Group 1 is updated in the order of $x_8$, $x_4$, $x_3$, and $x_7$. As far as the final center results are concerned, Fig. 3 shows that $x_4$, $x_3$ and $x_7$ cannot update the node during the learning process because they are sparser than $x_8$. This shows that Dagli and Huggahalli's (1995) revision of ART1 is not good enough.

The third modification of ART1 by Dagli and Huggahalli (1995) resembles a validity index for clustering algorithms, which needs to be run several times to find an optimum cluster number; this redundant evaluation destroys the original idea of ART1. On the other hand, there may be more than one pair of $\rho_1$ and $\rho_2$ for which N = M. We will explain this with an example later. After examining all steps of the ART1 algorithm in more detail, we find that the main problem in applying ART1 to the machine/part matrix is caused by Step 6, because the update process is executed with a logical "AND". We show this phenomenon by using the machine/part matrix shown in Fig. 2, with inputs in the order $x_3$, $x_4$, $x_7$, and $x_8$. The center change results are shown in Fig. 4. We find that the 7th component of the input vectors is "1" in all four cases except $x_4$, yet the 7th component of the final center becomes "0". Obviously, this final center vector is unreasonable because three of the four input vectors have a "1" there. This happens because the centers in ART1 become sparse after updating.

To prevent ART1 from developing sparse reference vectors after the learning stages, we propose an annealing mechanism that gives a component the opportunity to move from 0 toward 1, by replacing the logical "AND" with a weighted average of the reference vector $W_{ij}$ and the input vector $x_i$ as follows:

$$W_{ij}(t+1) = \beta\, W_{ij}(t) + (1-\beta)\, x_i. \qquad (1)$$

Here we adopt $\beta = 0.5$. Using the update formula (1) with the same example in Fig. 2 and the same input order $x_3$, $x_4$, $x_7$, and $x_8$, we obtain the center change results shown in Fig. 5. The final value of the 7th component of the center is now 0.875; this value lies between 0 and 1 but is much closer to 1. The final result is more acceptable given that three of the four input vectors have a "1" in that component.
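The effect of replacing the "AND" update by Eq. (1) can be checked on the 7th component alone. Below is a minimal sketch, assuming component values (1, 0, 1, 1) for $x_3$, $x_4$, $x_7$, $x_8$ as in Figs. 4 and 5:

```python
def and_update(w, x):
    return w * x                      # original ART1 Step 6 (logical AND)

def annealed_update(w, x, beta=0.5):
    return beta * w + (1 - beta) * x  # modified update, Eq. (1)

seq = [1.0, 0.0, 1.0, 1.0]            # 7th component of x3, x4, x7, x8
w_and = w_new = seq[0]
for x in seq[1:]:
    w_and = and_update(w_and, x)      # collapses to 0 after x4 and stays 0
    w_new = annealed_update(w_new, x) # recovers: 0.5 -> 0.75 -> 0.875

print(w_and, w_new)                   # 0.0 0.875, as reported for Fig. 5
```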
We have already noted that Dagli and Huggahalli's (1995) selection method for the vigilance parameter is not reliable. In fact, to enable ART1 to be applied to most real cases of cell formation, the vigilance value should be adjusted automatically from the structure and information of the data. The distances (similarities) between sample vectors play an important role in deciding the vigilance parameter, since data sets have differing degrees of dispersion. If the data are more dispersed, a larger vigilance value is needed to avoid generating too many groups; if the data are less dispersed, a smaller vigilance value should be used for effective classification. According to this analysis, we may
take the data dispersion value as an index for estimating the vigilance parameter.
Fig. 3. Variation of center.
Fig. 4. Variation of center.
Fig. 5. New approach of center variation.
Suppose that there
are n vectors for training. We adopt the similarity measure with absolute error as an estimator for the vigilance parameter as follows:

$$\hat{\rho} = \frac{\sum_{i=1}^{n} \sum_{j=i+1}^{n} |x_i - x_j|}{f(n) + \sum_{k=1}^{n-1} k}, \qquad (2)$$

where $|x_i - x_j|$ denotes the sum of the absolute componentwise differences between the vectors $x_i$ and $x_j$. In general, more data results in more groups, whereas a smaller $\rho$ also generates more groups. Thus, we divide the total absolute error by a monotone increasing function f(n), as shown in Eq. (2), to adjust the similarity measure and make the estimator more flexible.
Thus, a modified ART1 algorithm for CF can be
described as follows:
Modified ART1 algorithm

Step 1. Determine the vigilance parameter $\rho$ by (2); set $\beta = 0.5$ and assign the first training vector to $W_1$.
Step 2. Input the training vector $x$.
Step 3. Calculate the matching scores and find the winner node $j^*$ by $\mathrm{node}_{j^*} = \min_j \sum_{i=1}^{n} |W_{ij} - x_i|$.
Step 4. Test the degree of similarity: IF $\sum_{i=1}^{n} |W_{ij^*} - x_i| < \rho$, THEN go to Step 6; ELSE go to Step 5.
Step 5. Activate a new node and go to Step 2.
Step 6. Update the winner as follows:
$$W_{ij^*}(t+1) = \beta\, W_{ij^*}(t) + (1-\beta)\, x_i.$$
Step 7. Go to Step 2 until all the training data have been input.
Both Dagli and Huggahalli's method and the proposed modified ART1 algorithm can group data into machine-part cells. To accomplish diagonal blocking of the machine/part matrix, both methods have to group parts into part families and machines into machine groups. However, they may generate different numbers of groups when the algorithms are run separately on parts and on machines. Therefore, we can group the row vectors (machines) and then assign parts to machine groups, or group the column vectors (parts) and then assign machines to part families, as sketched below. Suppose we have already grouped the m machines into k groups; part i is then assigned to the family corresponding to the machine group in which the proportion of its operations is higher than in any other machine group.
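A sketch of this assignment rule, where "proportionately higher" is read as the largest share of a part's operations per machine of a group (an assumption on our part; the paper does not make the normalization explicit):

```python
import numpy as np

def assign_parts(A, machine_labels, k):
    """Assign each part (column of binary matrix A) to one of k machine groups."""
    rows = [np.flatnonzero(np.asarray(machine_labels) == g) for g in range(k)]
    families = []
    for col in A.T:
        # Share of the part's operations falling in each machine group,
        # normalized by group size so larger groups are not favored (assumed).
        share = [col[r].sum() / max(len(r), 1) for r in rows]
        families.append(int(np.argmax(share)))
    return families
```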
3. Numerical examples
In order to measure the grouping efficiency of an
algorithm for machine-part CF, a performance
measure is needed. Due to its simplicity of calculation, the grouping efficiency measure proposed by
Chandrasekaran and Rajagopalan (1986b) is the
most widely used method. They define the grouping
efficiency g as a weighted mean of g1 and g2 as follows:
$$g = w\, g_1 + (1-w)\, g_2,$$

where
$$g_1 = \frac{o-e}{o-e+v}, \qquad g_2 = \frac{mp-o-v}{mp-o-v+e}, \qquad 0 \le w \le 1,$$
and
m  number of machines,
p  number of parts,
o  number of 1s in the machine/part matrix,
e  number of 1s outside the diagonal blocks,
v  number of 0s inside the diagonal blocks.
An optimal result should have two features: a higher proportion of 1s inside the diagonal blocks and a higher proportion of 0s outside the diagonal blocks. The values of g1 and g2 measure these two features, respectively. Of course, w allows the designer to modify the emphasis placed on the two features. Since w is a weight between g1 and g2, w = 0.5 is generally suggested and will be used in all of the examples presented next.
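The measure is easy to compute directly from the matrix and the two label vectors. A small sketch (our own helper, with w = 0.5 as suggested):

```python
import numpy as np

def grouping_efficiency(A, machine_labels, part_labels, w=0.5):
    """Grouping efficiency g = w*g1 + (1-w)*g2 for a binary matrix A."""
    m, p = A.shape
    o = int(A.sum())
    # inside[i, j] is True when machine i and part j share a cell,
    # i.e., entry (i, j) lies in a diagonal block.
    inside = np.equal.outer(np.asarray(machine_labels),
                            np.asarray(part_labels))
    e = int(A[~inside].sum())            # 1s outside the diagonal blocks
    v = int(((A == 0) & inside).sum())   # 0s inside the diagonal blocks
    g1 = (o - e) / (o - e + v)
    g2 = (m * p - o - v) / (m * p - o - v + e)
    return w * g1 + (1 - w) * g2
```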
Example 1. In this example, we first use several different machine/part matrices to demonstrate the behavior of the grouping efficiency g defined above. We obtain the optimal clustering results for these machine/part matrices using the proposed modified ART1 algorithm; in some of these results, 1s appear outside the diagonal blocks and some 1s are missing from the diagonal blocks. These machine/part matrices and optimal clustering results with their grouping efficiencies are shown in Figs.
6(a)–6(d). Fig. 6(a) illustrates a machine/part matrix with its clustering result without any exceptional element or void and a grouping efficiency g = 100%. Fig. 6(b) illustrates another machine/part matrix with its clustering result having 8 exceptional elements and a grouping efficiency g = 97.7%. Fig. 6(c) demonstrates a machine/part matrix with its clustering result having 9 voids and a grouping efficiency g = 91.7%. Finally, Fig. 6(d) has both exceptional elements and voids, with a grouping efficiency g = 89.3%. Of course, Fig. 6(a) is a perfect result without any exceptional element or void, so a grouping efficiency g = 100% is
obtained. For Fig. 6(b), we have m = 15, p = 15, o = 54, e = 8, and v = 0. We find
$$g_1 = \frac{54-8}{54-8+0} = 1 \quad\text{and}\quad g_2 = \frac{15\times 15-54-0}{15\times 15-54-0+8} = 0.9532.$$
Thus, we have the grouping efficiency $g = 0.5\, g_1 + 0.5\, g_2 = 97.7\%$. Similarly, the grouping efficiencies for Figs. 6(c) and 6(d) are 91.7% and 89.3%, respectively. Our proposed modified ART1 method obtains the optimal clustering results for these different machine/part matrices with the advantage that the number of groups need not be given and is generated automatically from the data. Moreover, the vigilance parameter is estimated automatically in our modified ART1 algorithm. Comparing the grouping efficiencies of Figs. 6(a)–6(d), we find that more exceptional elements or voids in the final clustering results lead to a lower grouping efficiency.
Fig. 6a. Machine/part matrix and final clustering result with grouping efficiency = 100%.
Fig. 6b. Machine/part matrix and final clustering result with grouping efficiency = 97.7%.
Fig. 6c. Machine/part matrix and final clustering result with grouping efficiency = 91.7%.
Fig. 6d. Machine/part matrix and final clustering result with grouping efficiency = 89.3%.
Example 2. This example uses a machine/part matrix with 35 parts and 28 machines, as shown in Fig. 7. We use this data set to compare the results of our method with those of Dagli and Huggahalli (1995). The pre-process results for determining a suitable vigilance based on Dagli and Huggahalli's (1995) method are shown in Fig. 8. Table 1 shows the efficiencies obtained for the different combinations of vigilance values, where $\rho_1$ and $\rho_2$ are the vigilance values used to group parts and machines, respectively. We see that the group number can be c = 5, 6 or 7. In fact, it is difficult to pick a suitable group number c with Dagli and Huggahalli's (1995) method. If c = 5 is picked, the efficiency is g = 75.08%; if c = 6 is picked, the efficiency is g = 87.81%. Even if c = 7 is chosen, with a best efficiency of g = 89.11% for Dagli and Huggahalli's method, our approach gives a final result of c = 6 with an efficiency of g = 90.68%, as shown in Fig. 9. Our proposed method thus provides a simple and efficient alternative by using an auto-adjusted estimate based on the structure of the data set itself.
Fig. 7. Machine/part matrix with 35 parts and 28 machines.
Table 1
Grouping efficiency (%) for different vigilance values by part (ρ1) and machine (ρ2)

c = 5:
ρ2 \ ρ1    0.25    0.3
0.2        71.65   71.65
0.3        75.08   75.08
0.35       75.08   75.08

c = 6:
ρ2 \ ρ1    0.2     0.35    0.4     0.45
0.25       87.81   87.81   87.81   88.43
0.4        87.81   87.81   87.81   88.43
0.45       87.81   87.81   87.81   88.43

c = 7:
ρ2 \ ρ1    0.5     0.55
0.5        89.11   89.11
0.55       89.11   89.11
Fig. 8. Variation of group number with vigilance.
Example 3. In this example, the machine/part matrix shown in Fig. 2, with 9 parts and 9 machines, is used. The pre-process results for determining a suitable vigilance based on Dagli and Huggahalli's (1995) method are shown in Fig. 10. Table 2 shows the efficiencies for the different combinations of vigilance values. Dagli and Huggahalli's (1995) method gives the final result with
c = 2 for this machine/part matrix as shown in
Fig. 11(a). The grouping efficiency is g = 81.62%.
Our proposed modified ART1 gives the final results
with c = 3 for this machine/part matrix as shown in
Fig. 11(b). The grouping efficiency is g = 89.06%.
The final results from our proposed modified ART1 algorithm present better machine-part cells and a higher grouping efficiency than Dagli and Huggahalli's method.
Example 4. The last example uses a larger machine/
part matrix with 105 parts and 46 machines as
shown in Fig. 12. The final clustering matrix from
our proposed modified ART1 is shown in Fig. 13.
Fig. 9. Final results for machine/part matrix of Fig. 7 using the modified ART1 algorithm with grouping efficiency = 90.68%.
Table 2
Grouping efficiency (%) for different vigilance values by part (ρ1) and machine (ρ2)

c = 2:
ρ2 \ ρ1    0.2     0.25    0.3     0.35    0.4     0.45
0.2        81.62   81.62   81.62   81.62   81.62   81.62
0.25       81.62   81.62   81.62   81.62   81.62   81.62
0.3        81.62   81.62   81.62   81.62   81.62   81.62
0.35       81.62   81.62   81.62   81.62   81.62   81.62
0.4        81.62   81.62   81.62   81.62   81.62   81.62
0.45       81.62   81.62   81.62   81.62   81.62   81.62
Fig. 10. Variation of group number with vigilance.
Fig. 11. Final results for the machine/part matrix of Fig. 2. (a) Using Dagli and Huggahalli's method with grouping efficiency = 81.62%. (b) Using the modified ART1 algorithm with grouping efficiency = 89.06%.
Fig. 12. Machine/part matrix with 105 parts and 46 machines.
Fig. 13. Final results for machine/part matrix of Fig. 12 using the modified ART1 algorithm with grouping efficiency = 87.54%.
The results show that the estimated optimal group number is c = 7 and the grouping efficiency is g = 87.54%. These are good results for a machine-part CF algorithm. They also follow the expectation that more bottleneck machines and exceptional parts decrease the final grouping efficiency, as is the case in this example.
4. Conclusions
Our main objective in this paper is to provide a
neural network application in GT cell formation
with a special focus on the ART1 algorithm.
Although ART1 has been applied to GT by Kaparthi and Suresh (1992) and Dagli and Huggahalli (1995), problems arise when it is applied to GT directly. In this paper, we analyze these drawbacks and propose a modified ART1 suited to GT applications. Several examples are given and
comparisons are made. Based on the performance
measure proposed by Chandrasekaran and Rajagopalan (1986b), we find that our proposed method is
vigilance parameter-free and also more efficient in
CF with different machine/part matrices than the
previous methods.
Acknowledgements
The authors are grateful to the anonymous referees for their critical and constructive comments and
suggestions. This work was supported in part by the
National Science Council of Taiwan, R.O.C., under
Grant NSC-92-2118-M-033-001.
References
Carpenter, G.A., Grossberg, S., 1987. A massively parallel
architecture for a self-organizing neural pattern recognition
machine. Computer Vision, Graphics, and Image Processing
37, 54–115.
Carpenter, G.A., Grossberg, S., 1988. The ART of adaptive pattern recognition by a self-organizing neural network. Computer 21, 77–88.
Chan, H.M., Milner, D.A., 1982. Direct clustering algorithm for
group formation in cellular manufacture. Journal of Manufacturing Systems 1 (1), 64–76.
Chandrasekaran, M.P., Rajagopalan, R., 1986a. MODROC: An extension of rank order clustering for group technology. International Journal of Production Research 24 (5), 1221–1233.
Chandrasekaran, M.P., Rajagopalan, R., 1986b. An ideal seed
non-hierarchical clustering algorithm for cellular manufacturing. International Journal of Production Research 24, 451–
464.
Chang, S.I., Aw, C.A., 1996. A neural fuzzy control chart for
detecting and classifying process mean shifts. International
Journal of Production Research 34, 2265–2278.
Cheng, C.S., 1997. A neural network approach for the analysis of
control chart patterns. International Journal of Production
Research 35, 667–697.
Chu, C.H., Hayya, J.C., 1991. A fuzzy clustering approach to
manufacturing cell formation. International Journal of Production Research 29 (7), 1475–1487.
Dagli, C., Huggahalli, R., 1995. Machine-part family formation
with the adaptive resonance theory paradigm. International
Journal of Production Research 33, 893–913.
Gindy, N.N.G., Ratchev, T.M., Case, K., 1995. Component grouping for GT applications - a fuzzy clustering approach with validity measure. International Journal of Production Research 33 (9), 2493–2509.
Grossberg, S., 1976. Adaptive pattern classification and universal
recoding I: Parallel development and coding of neural feature
detectors. Biological Cybernetics 23, 121–134.
Guerrero, F., Lozano, S., Smith, K.A., Canca, D., Kwok, T.,
2002. Manufacturing cell formation using a new self-organizing neural network. Computers and Industrial Engineering
42, 377–382.
Guh, R.S., Tannock, J.D.T., 1999. Recognition of control chart
concurrent patterns using a neural network approach. International Journal of Production Research 37, 1743–1765.
Gupta, T., Seifoddini, H., 1990. Production data based similarity
coefficient for machine-component grouping decisions in the
design of a cellular manufacturing system. International
Journal of Production Research 28 (7), 1247–1269.
Hopfield, J.J., 1982. Neural networks and physical systems with
emergent collective computational abilities. Proceedings of
the National Academy of Sciences, USA 79, 2554–2558.
Kao, Y., Moon, Y.B., 1991. A unified group technology
implementation using the backpropagation learning rule of
neural networks. Computers and Industrial Engineering 20
(4), 425–437.
Kaparthi, S., Suresh, N.C., 1992. Machine-component cell formation in group technology: A neural network approach. International Journal of Production Research 30 (6), 1353–1367.
King, J.R., 1980. Machine-component grouping in production
flow analysis: An approach using rank order clustering
algorithm. International Journal of Production Research 18
(2), 213–232.
Kohonen, T., 1998. The self-organizing map. Neurocomputing
21, 1–6.
Kohonen, T., 2001. Self-Organizing Maps, 3rd ed. Springer-Verlag, Berlin.
Lin, K.C.R., Yang, M.S., Liu, H.C., Lirng, J.F., Wang, P.N.,
2003. Generalized Kohonen’s competitive learning algorithms
for ophthalmological MR image segmentation. Magnetic
Resonance Imaging 21, 863–870.
Lippmann, R.P., 1987. An introduction to computing with neural nets. IEEE ASSP Magazine 4, 4–22.
Malave, C.O., Ramachandran, S., 1991. Neural network-based
design of cellular manufacturing systems. Journal of Intelligent Manufacturing 2, 305–314.
McCormick, W.T., Schweitzer, P.J., White, T.W., 1972. Problem
decomposition and data reorganization by a clustering
technique. Operations Research 20 (5), 993–1009.
Mosier, C.T., 1989. An experiment investigating the application
of clustering procedures and similarity coefficients to the GT
machine cell formation problem. International Journal of
Production Research 27 (10), 1811–1835.
Narayanaswamy, P., Bector, C.R., Rajamani, D., 1996. Fuzzy
logic concepts applied to machine-component matrix formation in cellular manufacturing. European Journal of Operational Research 93, 88–97.
Rumelhart, D.E., Hinton, G.E., Williams, R.J., 1986. Learning representations by back-propagating errors. Nature 323, 533–536.
Shafer, S.M., Rogers, D.F., 1993. Similarity and distance
measures for cellular manufacturing, Part 1: A survey.
International Journal of Production Research 31 (5), 1133–
1142.
Singh, N., 1993. Design of cellular manufacturing systems - an invited review. European Journal of Operational Research 69 (3), 284–291.
Singh, N., Rajamani, D., 1996. Cellular Manufacturing Systems.
Chapman & Hall, New York.
Venugopal, V., Narendran, T.T., 1994. Machine-cell formation
through neural network models. International Journal of
Production Research 32, 2105–2116.
Wei, J.C., Kern, G.M., 1989. Commonality analysis: A linear cell
clustering algorithm for group technology. International
Journal of Production Research 27 (12), 2053–2062.
Xambre, A.R., Vilarinho, P.M., 2003. A simulated annealing
approach for manufacturing cell formation with multiple
identical machines. European Journal of Operational
Research 151, 434–446.
Xu, H., Wang, H.P., 1989. Part family formation for GT
applications based on fuzzy mathematics. International
Journal of Production Research 27 (9), 1637–1651.
Yang, M.S., Yang, J.H., 2002. A fuzzy soft learning vector
quantization for control chart pattern recognition. International Journal of Production Research 40, 2721–2731.