International Journal of Application or Innovation in Engineering & Management (IJAIEM)
Web Site: www.ijaiem.org Email: editor@ijaiem.org, editorijaiem@gmail.com
Volume 2, Issue 4, April 2013
ISSN 2319 - 4847
A Comparative Study of Fuzzy K-Means, Fuzzy K-NN and Fuzzy Back Propagation
Manan Bhavsar 1 and Prof. Mahesh Panchal 2
1 ME CSE, 1,2 KITRC, Kalol
ABSTRACT
Organizing data into sensible groupings is one of the most fundamental modes of understanding and learning. Classification and cluster analysis is the formal study of methods and algorithms for grouping. Most real-life classification problems have ill-defined, imprecise or fuzzy class boundaries. We provide a brief overview of fuzzy logic methods, summarize well-known fuzzy logic classification methods, and discuss the major challenges and key issues in designing combinations of fuzzy and neural network algorithms. In this study, feed-forward neural networks that use the back propagation learning algorithm with fuzzy objective functions are investigated. A learning algorithm is proposed that minimizes an error term reflecting fuzzy classification from the point of view of the possibility approach. Since the proposed algorithm has possibilistic classification ability, it can encompass different back propagation learning algorithms based on crisp and constrained fuzzy classification. A methodology is presented for adequate classification of fuzzy information. In this system, the fuzzy rules are used as input-output stimuli. The system consists of a layered fuzzy-neural network architecture. The first hidden layer generates the degree of membership for all input-output pattern pairs. This vector of membership degrees is then fed into the neural network, which produces the final classification of rules using the back propagation algorithm. Thus, the neural network as a whole performs two tasks: first it generates the degrees of membership, and then it classifies the rules into different classes on the basis of the membership function.
Keywords: Fuzzy classification, FBP, Fuzzy K-NN, Fuzzy K-Means, neural networks, back propagation.
1. INTRODUCTION
Fuzzy logic was first developed by Zadeh [1] in the mid-1960s for representing uncertain and imprecise knowledge. It
provides an approximate but effective means of describing the behavior of systems that are too complex, ill-defined, or
not easily analyzed mathematically. Fuzzy variables are processed using a system called a fuzzy logic controller. It
involves fuzzification, fuzzy inference, and defuzzification. The fuzzification process converts a crisp input value to a
fuzzy value. The fuzzy inference is responsible for drawing conclusions from the knowledge base. The defuzzification
process converts the fuzzy control actions into a crisp control action.
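As an illustration of this fuzzification-inference-defuzzification pipeline, the following minimal Python sketch implements one step of a toy fuzzy controller with two triangular membership functions, min-based rule firing and centroid defuzzification; the rule base, the heater-power universe and all numeric values are assumptions made for illustration only.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def fuzzy_controller_step(temp):
    # Fuzzification: convert the crisp temperature into degrees of membership.
    cold = tri(temp, 0.0, 10.0, 25.0)
    hot = tri(temp, 15.0, 30.0, 40.0)
    # Inference: "if cold then heater high", "if hot then heater low";
    # each rule's output set is clipped at the firing strength of its antecedent.
    power = np.linspace(0.0, 1.0, 101)  # universe of heater-power values
    high = np.minimum(cold, np.array([tri(p, 0.5, 1.0, 1.5) for p in power]))
    low = np.minimum(hot, np.array([tri(p, -0.5, 0.0, 0.5) for p in power]))
    aggregated = np.maximum(high, low)
    # Defuzzification: the centroid of the aggregated fuzzy output is the crisp action.
    return float((power * aggregated).sum() / aggregated.sum())

print(round(fuzzy_controller_step(12.0), 2))  # mostly cold -> high heater power
print(round(fuzzy_controller_step(32.0), 2))  # hot -> low heater power
```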
Fuzzy logic techniques have been successfully applied in a number of applications: computer vision, decision making,
and system design including ANN training. The most extensive use of fuzzy logic is in the area of control, where
examples include controllers for cement kilns, braking systems, elevators, washing machines, hot water heaters, air-conditioners, video cameras, rice cookers, and photocopiers.
This paper presents an extension of the standard back propagation (SBP) algorithm. The proposed learning algorithm is based on the fuzzy integral of Sugeno and is therefore called the fuzzy back propagation (FBP) algorithm. Necessary and sufficient conditions for convergence of the FBP algorithm for single-output networks in the case of single and multiple training patterns are proved. A computer simulation illustrates and confirms the theoretical results. The FBP algorithm shows a considerably greater convergence rate compared to the SBP algorithm. Other advantages of the FBP algorithm are that it moves toward the target value without oscillations and requires no assumptions about the probability distribution and independence of the input data. The convergence conditions enable training by automating the weight-tuning process (quasi-unsupervised learning), pointing out the interval to which the target value belongs. This supports the acquisition of implicit knowledge and ensures wide applicability, e.g. for the creation of adaptable user interfaces, assessment of products, intelligent data analysis, etc.
Neural networks, fuzzy logic and evolutionary computing have shown capability on many problems, but have not yet been able to solve the really complex problems that their biological counterparts can (e.g., vision). It is useful to fuse neural networks, fuzzy systems and evolutionary computing techniques so that the demerits of one technique are offset by the merits of another. Some of these techniques are fused as follows:
Neural networks for designing fuzzy systems
Fuzzy systems for designing neural networks
Evolutionary computing for the design of fuzzy systems
Evolutionary computing in automatically training and generating neural network architectures.
2. Fuzzy K-Means Clustering [11][13]
In the classical k-means procedure, each data point is assumed to belong to exactly one cluster. In fuzzy k-means clustering, we relax this condition and assume that each sample has some graded or "fuzzy" membership in a cluster. Fuzzy K-Means (also called Fuzzy C-Means) [11] is an extension of K-Means, the popular simple clustering technique. While K-Means discovers hard clusters (a point belongs to only one cluster), Fuzzy K-Means [13] is a more statistically formalized method and discovers soft clusters, in which a particular point can belong to more than one cluster with a certain probability.
Algorithm:
Like K-Means, Fuzzy K-Means works on objects that can be represented in an n-dimensional vector space and for which a distance measure is defined. The algorithm is similar to k-means:
1. Initialize k clusters.
2. Until converged:
3. Compute the probability of a point belonging to a cluster for every <point, cluster> pair.
4. Recompute the cluster centers using the above membership values of points to clusters.
Basic K-Means Algorithm [11]:
(1) Select K points as initial centroids.
(2) Repeat:
(3) Assign each point to its closest centroid to form K clusters.
(4) Recompute the centroid of each cluster.
(5) Until a termination condition is met.
A few termination conditions that may be used: (i) the maximum number of iterations is reached, (ii) there is no further change in the centroid positions.
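To make the update steps above concrete, the following is a minimal Python sketch of fuzzy k-means (fuzzy c-means); the fuzzifier m = 2, the tolerance, and the random initialization of the memberships are illustrative assumptions not specified in the text.

```python
import numpy as np

def fuzzy_kmeans(X, k, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy k-means (fuzzy c-means) sketch.
    X: (n_points, n_dims) data, k: number of clusters, m: fuzzifier (> 1)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Initialize the membership matrix U so that each row sums to 1.
    U = rng.random((n, k))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        # Recompute cluster centers as membership-weighted means.
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # Recompute memberships from the distances of every point to every center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:  # termination: no further change
            U = U_new
            break
        U = U_new
    return centers, U

# Example: two obvious groups in 2-D.
X = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]])
centers, U = fuzzy_kmeans(X, k=2)
print(centers)       # approximately the two group means
print(U.round(2))    # soft memberships: each row sums to 1
```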
Figure 1. K-means clustering flow chart
Advantages: The k-means algorithm is a popular method for automatically classifying vector-based data. It is used in various applications such as vector quantization, density estimation, workload behavior characterization, and image compression [11].
Limitations: It is computationally very expensive, as it involves several distance calculations of each data point from all the centroids in each iteration. The final cluster results depend heavily on the selection of the initial centroids, which can cause the algorithm to converge to a local optimum.
3. Fuzzy K-NN [14]
Fuzzy K-Nearest Neighbor algorithms [14] have two main advantages over the traditional (crisp) K-Nearest Neighbor algorithms [15][16]. First, while determining the class of the current residue, the algorithm is capable of taking into consideration the ambiguous nature of the neighbors, if any. The algorithm has been designed so that these ambiguous neighbors do not play a crucial role in the classification of the current residue. The second advantage is that residues are assigned a membership value in each class rather than a binary decision of "belongs to" or "does not belong to". For each residue r, the predicted membership value u_i in class i can be calculated as follows [17]:
BEGIN
Initialize i = 1.
DO UNTIL (r assigned membership in all classes)
Compute u_i(r) using the membership equation given in [17].
Increment i.
END DO UNTIL
END
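The membership equation itself is not reproduced above; as a stand-in, the sketch below uses the commonly cited fuzzy K-NN rule in which each of the k nearest neighbors contributes its class membership weighted by inverse distance raised to 2/(m−1). The fuzzifier m = 2, the choice k = 3 and the crisp (one-hot) neighbor labels are illustrative assumptions.

```python
import numpy as np

def fuzzy_knn_memberships(x, X_train, y_train, n_classes, k=3, m=2.0):
    """Graded class memberships of query x (values sum to 1).
    Each of the k nearest neighbors votes with weight 1 / distance^(2/(m-1))."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]                       # indices of the k nearest neighbors
    weights = 1.0 / (d[nearest] ** (2.0 / (m - 1.0)) + 1e-12)
    u = np.zeros(n_classes)
    for w, idx in zip(weights, nearest):
        u[y_train[idx]] += w                          # accumulate weighted class evidence
    return u / u.sum()                                # graded membership, not a hard label

# Example: a query near class 0, but with one class-1 point among its neighbors.
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [1.1, 0.9]])
y_train = np.array([0, 0, 1, 1])
print(fuzzy_knn_memberships(np.array([0.2, 0.1]), X_train, y_train, n_classes=2))
```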
Fuzzy techniques alone have some limitations, so the neuro-fuzzy concept is introduced next.
4. Neuro -Fuzzy Technologies
To enhance fuzzy logic systems with learning capabilities, you can integrate neural net technologies. The combination
of fuzzy logic and neural net technology is called "NeuroFuzzy" and combines the advantages of the two
technologies[10].
The first part of this presentation introduces the basic principles of neural nets and their training methods. The second
part shows the combination of neural nets with fuzzy logic.
The sparse use of neural nets in applications is due to a number of reasons. First, neural net solutions remain a "black
box". You can neither interpret what causes a certain behavior nor can you modify a neural net manually to change a
certain behavior. Second, neural nets require prohibitive computational effort for most mass-market products. Third,
selection of the appropriate net model and setting the parameters of the learning algorithm is still a "black art" and
requires long experience. Of the aforementioned reasons, the lack of an easy way to verify and optimize a neural net solution is probably the major limitation.
5. Training Fuzzy Logic Systems with NeuroFuzzy[10]
Many alternative ways of integrating neural nets and fuzzy logic have been proposed in the scientific literature. The
first artificial neural net implementation dates over 50 years back. Since then, most research dealt with learning
techniques and algorithms. One major milestone in the development of neural net technology was the invention of the so-called error back propagation algorithm.
6. Learning by Error Back Propagation [10]
The error back propagation algorithm soon became the standard for most neural net implementations due to its high performance. First, it selects one of the examples of the training data set. Second, it computes the neural net output values for the current training example's inputs. Then, it compares these output values to the desired output values of the training example. The difference, called the error, determines which neurons in the net shall be modified and how. The mathematical mapping of the error back into the neurons of the net is called error back propagation.
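As a concrete illustration of these steps, the following minimal sketch trains a single sigmoid neuron on one example by gradient descent; the learning rate, the initial weights and the toy data are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One training example (inputs and desired output) and initial weights.
x = np.array([0.5, 0.8, 1.0])    # the last component acts as a bias input
d = 1.0                           # desired (target) output
w = np.array([0.1, -0.2, 0.05])   # initial weights (made up)
eta = 0.5                         # learning rate (assumed value)

for step in range(200):
    o = sigmoid(w @ x)            # compute the network output for this example
    error = d - o                 # compare the output with the desired value
    grad = -error * o * (1.0 - o) * x  # back-propagate: gradient of 0.5*(d-o)^2 w.r.t. w
    w -= eta * grad               # modify the weights against the gradient

print(round(float(sigmoid(w @ x)), 3))  # the output has moved close to the target 1.0
```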
If the error back propagation algorithm is so powerful, why not use it to train fuzzy logic systems too? Alas, this is not
straightforward. To determine which neuron has what influence, the error back propagation algorithm differentiates the
transfer functions of the neurons. One problem here is that the standard fuzzy logic inference cannot be differentiated.
To solve these problems, some neuro-fuzzy development tools use extended fuzzy logic inference methods. The most
common approach is to use so-called Fuzzy Associative Memories (FAMs). A FAM is a fuzzy logic rule with an
associated weight. A mathematical framework exists that maps FAMs to neurons in a neural net. This enables the use
of a modified error back propagation algorithm with fuzzy logic. As a user of NeuroFuzzy tools, you do not need to worry about the details of the algorithm. Today's NeuroFuzzy tools work as an "intelligent" assistant during your design. They help you generate and optimize membership functions and rule bases from sample data.
As studied above, the Fuzzy K-Means and Fuzzy K-NN algorithms suffer from major limitations [13]:
(i) They are computationally very expensive, as they involve several distance calculations of each data point from all the centroids in each iteration.
(ii) The final cluster results depend heavily on the selection of the initial centroids, which can cause convergence to a local optimum. The solution to this problem is to find a better way to choose the initial centers.
(iii) Sometimes the results are unexpected and hard to debug.
(iv) They lack the ability to generalize.
(v) They are computationally complicated.
(vi) According to the literature, fuzzy logic alone is not recommended for non-linear data.
7. Methodology and Expected Outcome
In simple words, both neural nets and fuzzy logic are powerful design techniques that have their own strengths and weaknesses. Neural nets can learn from data sets, while fuzzy logic solutions are easy to verify and optimize. If you look at these properties in a portfolio, the idea becomes obvious that a clever combination of the two technologies delivers the best of both worlds. Combine the explicit knowledge representation of fuzzy logic with the learning power of neural nets, and you get NeuroFuzzy [10].
7.1 LR-type Fuzzy Number [10]
A fuzzy number M is said to be of LR-type if there exist reference functions L (left) and R (right) and scalars α > 0, β > 0 such that
µ_M(x) = L((m − x)/α) for x ≤ m and µ_M(x) = R((x − m)/β) for x ≥ m,
where m is the mean (modal) value of M, and α and β are called the left and right spreads. M is then denoted by the triple (m, α, β).
Symmetric triangular LR-type fuzzy number:
Figure 2. Symmetric triangular LR-type fuzzy number
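For illustration, the sketch below evaluates the membership function of a symmetric triangular LR-type fuzzy number, assuming the linear reference functions L(z) = R(z) = max(0, 1 − z); the particular triple used in the example is arbitrary.

```python
def lr_membership(x, m, alpha, beta):
    """Membership of x in the LR-type fuzzy number (m, alpha, beta)
    with linear (triangular) reference functions L(z) = R(z) = max(0, 1 - z)."""
    if x <= m:
        z = (m - x) / alpha   # left branch uses the left spread alpha
    else:
        z = (x - m) / beta    # right branch uses the right spread beta
    return max(0.0, 1.0 - z)

# Example: the symmetric triangular fuzzy number (0.7, 0.2, 0.2).
for x in (0.5, 0.6, 0.7, 0.8, 0.9):
    print(x, round(lr_membership(x, 0.7, 0.2, 0.2), 2))
# The membership rises linearly to 1 at the mean value 0.7 and falls back to 0.
```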
7.2 Back Propagation Algorithm with Fuzzy Objective Functions
Fuzzy Neuron:
The fuzzy neuron is the basic element of FBP. Given the input vector I = (I0, I1, ..., Il) and the weight vector W = (W0, W1, ..., Wl), both consisting of LR-type fuzzy numbers, the crisp output O is computed as:
O = f(NET) = f(CE(Σi Wi Ii)).
Here I0 = (1, 0, 0) is the bias. The fuzzy weighted summation
net = Σi Wi Ii
is first computed, and NET = CE(net). The function CE is the centroid of a triangular fuzzy number and is treated as the defuzzification operation. If net = (net_m, net_α, net_β), then:
CE(net) = net_m + (1/3)(net_β − net_α) = NET.
The activation function is then defined as:
f(NET) = 1 / (1 + exp(−NET)),
which is the final computation that yields the crisp output value O.
Fuzzy BP uses a three-layered feed-forward architecture with input, hidden and output layers. FBP proceeds in two stages, namely:
1. Learning or training.
2. Inference.
Figure 3. Fuzzy neuron
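A minimal sketch of this forward computation is given below, with each LR-type fuzzy number stored as a triple (m, α, β). The product of two fuzzy numbers is approximated here by (m1·m2, m1·α2 + m2·α1, m1·β2 + m2·β1), a first-order approximation assumed only for illustration, and the example inputs and weights are made up.

```python
import numpy as np

# An LR-type fuzzy number is represented as a triple (m, alpha, beta).

def lr_add(a, b):
    """Sum of two LR-type fuzzy numbers: modal values and spreads add."""
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def lr_mul(a, b):
    """Approximate product of two positive LR-type fuzzy numbers
    (first-order approximation -- an assumption made for this sketch)."""
    return (a[0] * b[0], a[0] * b[1] + b[0] * a[1], a[0] * b[2] + b[0] * a[2])

def CE(net):
    """Centroid defuzzification of a triangular fuzzy number (m, alpha, beta)."""
    m, alpha, beta = net
    return m + (beta - alpha) / 3.0

def fuzzy_neuron(inputs, weights):
    """Crisp output O = f(CE(sum_i W_i * I_i)) of one fuzzy neuron."""
    net = (0.0, 0.0, 0.0)
    for I, W in zip(inputs, weights):
        net = lr_add(net, lr_mul(W, I))
    NET = CE(net)
    return 1.0 / (1.0 + np.exp(-NET))   # sigmoid activation

# Example: bias input (1, 0, 0) plus one fuzzy input, with made-up fuzzy weights.
inputs = [(1.0, 0.0, 0.0), (0.7, 0.2, 0.2)]
weights = [(0.5, 0.1, 0.1), (1.0, 0.2, 0.2)]
print(round(fuzzy_neuron(inputs, weights), 3))
```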
Learning in FBP:
The learning process of FBP follows the gradient method of minimizing the error. The squared error function for pattern p is:
Ep = (1/2)(Dp − Op)²,
where Dp is the target output value and Op is the computed value of the output neuron.
The total error over all training patterns is E = Σp Ep.
During the learning phase, the weights are adjusted so that the change at time t is:
ΔW(t) = −η ∇Ep(t) + α ΔW(t−1),
where η is the learning rate and α is the momentum factor.
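The weight update with momentum can be sketched as follows; the gradient argument here is only a placeholder, and the values η = 0.9 and α = 0.1 match those assigned later in the algorithm.

```python
def update_weight(W, grad_Ep, delta_prev, eta=0.9, alpha=0.1):
    """One gradient-descent step with momentum:
    Delta W(t) = -eta * grad(E_p)(t) + alpha * Delta W(t-1)."""
    delta = -eta * grad_Ep + alpha * delta_prev
    return W + delta, delta  # updated weight and the momentum term for the next step

# Example: one scalar weight over three steps with a constant, made-up gradient.
W, delta = 0.5, 0.0
for _ in range(3):
    W, delta = update_weight(W, grad_Ep=0.2, delta_prev=delta)
    print(round(W, 3), round(delta, 3))
```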
Applications of FBP [10]:
1. Knowledge base evaluation.
2. Earthquake damage evaluation.
Example: evaluating the problem-solving ability of an employee.
Table 1. Data of Employee

No | Worth value | Employee acceptance | Salutation availability | Teach ability | Risk | Output
1  | High        | Positive            | None                    | Frequent      | Low  | Good

Figure 4. Worth value of employee
In this data, Good = 1 and Poor = 0.
The linguistic values are converted into LR-type fuzzy numbers:
Negative: (0, 0.0001, 4)
Low: (0.5, 0.2, 0.2)
Moderate: (0.7, 0.2, 0.2)
High: (1, 0.2, 0.0001)
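Assuming these triples are LR-type fuzzy numbers (mean, left spread, right spread), a small lookup that encodes a table row for the network could look like the following sketch; only the attributes whose linguistic values appear in the list above are encoded, and the row dictionary is illustrative.

```python
# Linguistic value -> LR-type fuzzy number (m, alpha, beta), exactly as listed above.
FUZZY_CODE = {
    "Negative": (0.0, 0.0001, 4.0),
    "Low": (0.5, 0.2, 0.2),
    "Moderate": (0.7, 0.2, 0.2),
    "High": (1.0, 0.2, 0.0001),
}

# Row 1 of Table 1, keeping only the attributes that have fuzzy codes defined here.
row = {"Worth value": "High", "Risk": "Low"}
pattern = [FUZZY_CODE[value] for value in row.values()]
target = 1.0  # crisp target: Output "Good" = 1
print(pattern, target)
```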
Algorithm:
1. Initially generate the weight sets for the input-hidden layer (W1) and the hidden-output layer (W2).
2. Convert the inputs into LR-type fuzzy numbers and set the crisp target outputs D.
3. COUNT_OF_ITRNS = 0; p = 1.
4. Assign values for η and α and set ΔW(t−1) = 0. Set the output O = (1, 0, 0) for the input neurons.
5. Compute at the hidden layer: O = f(NET) = f(CE(Σi W1i Ii)).
6. Compute at the output layer: O = f(NET) = f(CE(Σj W2j Oj)).
7. Compute the changes in weights for the input-hidden layer and the hidden-output layer, with ΔW(t−1) = 0, η = 0.9 and α = 0.1.
8. Update the weights of the hidden-output layer.
9. Compute the change in weights of the input-hidden layer and update.
10. p = p + 1.
11. COUNT_OF_ITRNS = COUNT_OF_ITRNS + 1;
If COUNT_OF_ITRNS < ITRNS
{ Reset to the first pattern in the training set;
p = 1;
go to Step 4;
}
After training with FBP: expected output = 1; the FBP computed output = 0.956.
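A simplified sketch of the training loop above is given below. To stay short, it trains a single fuzzy neuron rather than the full three-layer network, and it estimates the gradient of Ep numerically instead of using the analytic FBP update rules; the initial weights and the example pattern ("High" and "Low" inputs with crisp target 1) are illustrative.

```python
import numpy as np

def CE(net):
    """Centroid defuzzification of a triangular fuzzy number (m, alpha, beta)."""
    return net[0] + (net[2] - net[1]) / 3.0

def forward(W, pattern):
    """Crisp output of one fuzzy neuron: O = f(CE(sum_i W_i * I_i)).
    The fuzzy product uses the same first-order approximation as the earlier sketch."""
    net = np.zeros(3)
    for w, x in zip(W, pattern):
        net += np.array([w[0] * x[0],
                         w[0] * x[1] + x[0] * w[1],
                         w[0] * x[2] + x[0] * w[2]])
    return 1.0 / (1.0 + np.exp(-CE(net)))

def train(pattern, D, eta=0.9, alpha=0.1, iterations=200, eps=1e-4):
    """Simplified FBP-style training of a single fuzzy neuron on one pattern,
    with a finite-difference gradient of E_p = 0.5 * (D - O)^2."""
    rng = np.random.default_rng(0)
    W = rng.random((len(pattern), 3)) * 0.5           # fuzzy weights (m, alpha, beta)
    dW_prev = np.zeros_like(W)
    for _ in range(iterations):
        E = 0.5 * (D - forward(W, pattern)) ** 2
        grad = np.zeros_like(W)
        for idx in np.ndindex(*W.shape):              # numerical gradient, component-wise
            W_eps = W.copy()
            W_eps[idx] += eps
            grad[idx] = (0.5 * (D - forward(W_eps, pattern)) ** 2 - E) / eps
        dW = -eta * grad + alpha * dW_prev            # Delta W(t) = -eta*grad + alpha*Delta W(t-1)
        W, dW_prev = W + dW, dW
        W[:, 1:] = np.maximum(W[:, 1:], 0.0)          # keep the spreads non-negative
    return W

# Bias (1, 0, 0) plus "High" and "Low" inputs encoded as LR-type fuzzy numbers.
pattern = [(1.0, 0.0, 0.0), (1.0, 0.2, 0.0001), (0.5, 0.2, 0.2)]
W = train(pattern, D=1.0)
print(round(forward(W, pattern), 3))                  # output approaches the crisp target 1
```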
Benefits of this algorithm:
Fuzzy rules can be discovered after training on the data set.
The advantages of the proposed algorithm over the Fuzzy K-Means algorithm and SBP are:
(1) Greater convergence speed, implying a significant reduction in computation time, which is important in the case of large neural networks;
(2) It always moves toward the target value without oscillations, and there is no possibility of falling into a local minimum;
(3) It requires no assumptions about the probability distributions and independence of the input data (pattern inputs);
(4) It does not require the initial weights to be different from each other;
(5) Better accuracy and an optimal solution;
(6) Another advantage of the FBP algorithm is that it presents a generalization of the fuzzy perceptron algorithm [19]. The FBP algorithm can support knowledge acquisition and presentation, especially in cases of uncertain implicit knowledge.
8. Conclusions and Future Work
This paper presents a comparison between classification methods based on fuzzy sets. It also explains the implementation strategy of the proposed LR-type FBP algorithm and provides guidance on how to improve the performance and efficiency of the LR back propagation algorithm through modifications. The requirements to implement the proposed algorithm are as follows:
The database used is SQL Server.
The platform is either Windows XP or Windows 7.
The implementation language is C# and the tool is Visual Studio.
References
[1.] L. A. Zadeh, "Fuzzy Sets", Information and Control, vol. 8, pp. 338–353, 1965.
[2.] L. A. Zadeh, "Fuzzy logic, neural networks and soft computing", one-page course announcement of CS 294-4, Spring 1993, University of California at Berkeley, 1992.
[3.] S. Jakubek and T. Strasser, "Artificial neural networks for fault detection in large-scale data acquisition systems", Engineering Applications of Artificial Intelligence, vol. 17, pp. 233–248, May 2004.
[4.] S. E. Fahlman, "An Empirical Study of Learning Speed on BP Networks", Carnegie Mellon Report No. CMU-CS-88-162, 1988.
[5.] K. Asakawa and H. Takagi, “Neural networks in Japan,” Commun. ACM, vol. 37, no. 3, pp. 106–112, Mar.
1994.
[6.] J. C. Bezdek, E. C.-K. Tsao, and N. R. Pal, “Fuzzy Kohonen clustering networks,” in IEEE Int. Conf. Fuzzy Syst.,
pp. 1035–1043,1992.
[7.] J. J. Buckley, P. Krishnamraju, K. Reilly, and Y. Hayashi, “Genetic learning algorithms for fuzzy neural nets,” in
Proc. 3rd IEEE Int. Conf. Fuzzy Syst., vol. 3, pp. 1969–1974,1994.
[8.] C.-D. Bei and R. M. Gray, "An improvement of the minimum distortion encoding algorithm for vector quantization", IEEE Transactions on Communications, vol. 33, pp. 1132–1133, 1985.
[9.] A. Kandel, Y.-Q. Zhang, and T. Miller, “Hybrid fuzzy neural system for fuzzy moves,” in Proc. 3rd World Congr.
Expert Syst,pp. 718–725,1996.
[10.] S. Rajasekaran and G. A. Vijayalakshmi Pai, Neural Networks, Fuzzy Logic and Genetic Algorithms: Synthesis and Applications.
[11.] http://en.wikipedia.org/wiki/Data_clustering#Fuzzy_c-means_clustering.
[12.] M. M. Gupta and D. H. Rao, “On the principles of fuzzy neural networks,” Fuzzy Sets Syst., vol. 61, pp. 1–18,
1994.
[13.] C. Elkan, “Using the triangle inequality to accelerate k-means,” International Conference on Machine Learning ,
pp. 147-153, 2003.
[14.] A. Yajnik and S. Rama Mohan, "Identification of Gujarati Characters Using Wavelets and Neural Network", Proc. of 10th IASTED International Conference on Artificial Intelligence and Soft Computing, Acta Press, pp. 150-155, 2006.
[15.] H. Ishigami et al., “Structure optimization of fuzzy neural network by genetic algorithm,” Fuzzy Sets Syst., vol.
71, pp. 257–264, 1995.
[16.] J. Yen and Wang L. “Application of statistical information criteria for optimal fuzzy model construction”. IEEE
Transactions on Fuzzy Systems, 6(3):362 – 372, August 1998.
[17.] J. B. Kiszka and M. M. Gupta, “Fuzzy logic neural network,” BUSEFAL, vol. 4, pp. 104–109, 1990.
[18.] D. Nauck and R. Kruse, "A fuzzy neural network learning fuzzy control rules and membership functions by fuzzy error back propagation", in Proc. IEEE Int. Conf. on Neural Networks, 1993.
[19.] Y. Wang and G. Rong, “A self-organizing neural network- based fuzzy system “, Fuzzy Sets and Systems, 1999.
[20.] Y. Yang, X. Xu and W. Zang ,“Design neural networks based fuzzy logic”, Fuzzy Sets and Systems, 2000.
[21.] C. Yao and Y. Kuo, “A fuzzy neural network model with three layered structure,” in Proc. of FUZZ-IEEE/IFES,
vol. III, pp.1503-1511,1995.
[22.] Y. Shi and M. Mizumoto, "A new approach of neuro-fuzzy learning algorithm for tuning fuzzy rules", Fuzzy Sets and Systems, 2000.
AUTHORS:
Manan N. Bhavsar received the B.E. degree in Computer Engineering in 2009 and is pursuing a Master of Engineering in Computer Science and Engineering at KITRC, Gujarat, India. He has been actively involved in the academic field for the last year. His research interests include classification, rule extraction, artificial intelligence and data mining.
Mahesh Panchal is Associate Professor and Head of the Computer Engineering Department at KITRC, Gujarat, India. He received his B.E. (Computer Engineering) in 2002 and M.E. (Computer Engineering) in 2009. He has been actively involved in the academic field for the last 9 years. At present he is pursuing a Ph.D. from Nirma University, Gujarat, India. His areas of interest include machine learning, data mining and artificial intelligence. He is a member of the editorial board and reviewer committee of many technical journals.