A neuro-fuzzy approach to FMOLP problems∗

Robert Fullér†
Dept. of Computer Science, Eötvös Loránd University,
Muzeum krt. 6-8, H-1088 Budapest, Hungary

Silvio Giove
Dip. di Informatica e Studi Aziendali, University of Trento,
via Inama 5-7, I-38100 Trento, Italy

Abstract

We propose the use of fuzzy neural networks for finding a good compromise solution to fuzzy multiple objective linear programs (FMOLP).

Key words: Fuzzy neural network, fuzzy multiple objective program

∗ in: Proceedings of CIFT'94, June 1-3, 1994, Trento, Italy, University of Trento, 1994, 97-101.
† Presently visiting professor at Dip. di Informatica e Studi Aziendali, University of Trento. Partially supported by the Hungarian National Scientific Research Fund OTKA under contracts T 4281, T 7598, T 14144, 816/1991 and I/3-2152.

1 The statement of the problem

We consider fuzzy decision problems of the form

$$\max_x \; (C_1x, \dots, C_kx) \qquad (1)$$

where $C_i = (C_{i1}, \dots, C_{in})$ is a vector of fuzzy numbers and $x \in \mathbb{R}^n$ is the vector of crisp decision variables. Suppose that for each objective function of (1) we have two reference fuzzy numbers, denoted by $m_i$ and $M_i$, which represent the undesired and desired levels for the $i$-th objective, respectively. We can now restate (1) as follows: find an $x^* \in \mathbb{R}^n$ such that $C_ix^*$ is as close as possible to the desired point $M_i$ and as far as possible from the undesired point $m_i$, for each $i$.

2 Fuzzy neural networks

Suppose that we have a fuzzy neural network with the set of input-output pairs $\{(A_i, B_i),\ i = 1, \dots, m\}$, where $A_i = (A_{i1}, \dots, A_{in})$ is a vector of fuzzy numbers and the output $B_i$ is a fuzzy number. In computer applications we usually use discrete versions of the continuous fuzzy sets. The discrete version of the above system is

Inputs                              Outputs
$A_1(x_1), \dots, A_1(x_M)$         $B_1(y_1), \dots, B_1(y_N)$
...                                 ...
$A_m(x_1), \dots, A_m(x_M)$         $B_m(y_1), \dots, B_m(y_N)$

where $(x_1, \dots, x_M)$ and $(y_1, \dots, y_N)$ are well-chosen partitions of the input and output spaces.
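To make the discretization step concrete, here is a minimal Python sketch (not from the paper; the triangular membership function and the particular partition are illustrative assumptions):

```python
# Sketch: discretizing a fuzzy number A over a partition (x_1, ..., x_M).
# The triangular shape (a, b, c) and the chosen grid are illustrative assumptions.

def triangular(a, b, c):
    """Membership function of the triangular fuzzy number (a, b, c)."""
    def mu(x):
        if a < x <= b:
            return (x - a) / (b - a)
        if b < x < c:
            return (c - x) / (c - b)
        return 1.0 if x == b else 0.0
    return mu

def discretize(mu, partition):
    """Discrete version (A(x_1), ..., A(x_M)) of a fuzzy set A."""
    return [mu(x) for x in partition]

A1 = triangular(0.0, 1.0, 2.0)
partition = [0.0, 0.5, 1.0, 1.5, 2.0]
print(discretize(A1, partition))  # [0.0, 0.5, 1.0, 0.5, 0.0]
```

Each training pattern of the table above is then just such a vector of membership grades, one entry per partition point.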
Another possibility is to present the α-level sets of the fuzzy numbers as inputs and outputs [6]. We then have to find weights such that, for every input vector $A_i$, all the computed α-level sets are as close as possible to the α-level sets of the target fuzzy numbers $B_i$. The number of inputs and outputs depends on the number of α-level sets considered.

[Figure: a discretization of a fuzzy input $A_{ij}$, where $a_{ij} = A_i(x_j)$.]

In the architecture of [5], input vectors and target outputs are given by fuzzy numbers. Weights and biases, however, are given by real numbers, as in the standard backpropagation algorithm. The following extension of the backpropagation algorithm to train neural networks from fuzzy training patterns was proposed in [5].

[Figure: the architecture of the fuzzy neural network proposed in [5], with fuzzy inputs $A_{p1}, \dots, A_{pn}$, hidden-layer weights $w_{ij}$ and output weights $w_i$.]

The output of the $i$-th hidden unit is
$$O_i^p = f\Big(\sum_{j=1}^{n} w_{ij}A_{pj}\Big),$$
and for the output unit
$$O_p = f\Big(\sum_{i=1}^{k} w_iO_i^p\Big).$$
Let us denote the α-level sets of the computed output $O_p$ and the target output $B_p$ by
$$[O_p]^\alpha = [O_p^L(\alpha), O_p^R(\alpha)], \qquad [B_p]^\alpha = [B_p^L(\alpha), B_p^R(\alpha)].$$
The cost function to be minimized can be stated as
$$e_p(\alpha) := e_p^L(\alpha) + e_p^R(\alpha),$$
where
$$e_p^L(\alpha) = \big(B_p^L(\alpha) - O_p^L(\alpha)\big)^2/2, \qquad e_p^R(\alpha) = \big(B_p^R(\alpha) - O_p^R(\alpha)\big)^2/2.$$
The cost function for the training pattern $p$ is
$$e_p = \sum_\alpha \alpha\, e_p(\alpha).$$
From the cost function $e_p(\alpha)$ the following learning rules can be derived:
$$\Delta w_i(t+1) = \eta\alpha\Big(-\frac{\partial e_p(\alpha)}{\partial w_i}\Big) + \beta\,\Delta w_i(t), \qquad \Delta w_{ij}(t+1) = \eta\alpha\Big(-\frac{\partial e_p(\alpha)}{\partial w_{ij}}\Big) + \beta\,\Delta w_{ij}(t),$$
where $\eta > 0$ is the learning rate and $\beta$ is a momentum coefficient.

3 Fuzzy linear equations

Let us consider the system of fuzzy linear equations
$$A_{11}x_1 + \dots + A_{1n}x_n = B_1$$
$$\vdots \qquad (2)$$
$$A_{m1}x_1 + \dots + A_{mn}x_n = B_m$$
where $A_{ij}$ and $B_i$ are fuzzy numbers. The problem is to find a crisp vector $x \in \mathbb{R}^n$ satisfying this system of equations as well as possible. We use the following single-layer fuzzy neural network for finding an approximate solution to (2):

[Figure: a single-layer network with inputs $A_{i1}, \dots, A_{in}$, weights $x_1, \dots, x_n$ and output $O_i = \sum_j A_{ij}x_j \approx B_i$.]

where the training set is $\{(A_{i1}, \dots, A_{in}; B_i),\ i = 1, \dots, m\}$ and the weights are the components of the decision vector $x$.
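This single-layer scheme can be sketched in Python under simplifying assumptions: triangular fuzzy coefficients, a small fixed α-grid, and a numerical gradient standing in for the analytic learning rules of Section 2.

```python
# Sketch: solving a fuzzy linear system A x = B approximately, by gradient
# descent on the alpha-level cost of Section 2. Triangular fuzzy numbers,
# the alpha grid, the learning rate and the numerical gradient are all
# illustrative assumptions, not the paper's exact procedure.

def alpha_cut(tri, alpha):
    """alpha-level interval of a triangular fuzzy number (a, b, c)."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

def row_output(row, x, alpha):
    """alpha-cut of sum_j A_ij * x_j for crisp weights x."""
    lo = hi = 0.0
    for (a_l, a_r), xj in zip((alpha_cut(t, alpha) for t in row), x):
        lo += a_l * xj if xj >= 0 else a_r * xj
        hi += a_r * xj if xj >= 0 else a_l * xj
    return lo, hi

def cost(A, B, x, alphas):
    """e = sum_p sum_alpha alpha * (e_p^L(alpha) + e_p^R(alpha))."""
    e = 0.0
    for row, b in zip(A, B):
        for alpha in alphas:
            o_l, o_r = row_output(row, x, alpha)
            b_l, b_r = alpha_cut(b, alpha)
            e += alpha * ((b_l - o_l) ** 2 + (b_r - o_r) ** 2) / 2
    return e

def solve(A, B, n, alphas=(0.25, 0.5, 0.75, 1.0), eta=0.01, steps=2000):
    x, h = [0.0] * n, 1e-6
    for _ in range(steps):
        grad = []
        for j in range(n):  # forward-difference gradient of the cost
            xp = x[:]
            xp[j] += h
            grad.append((cost(A, B, xp, alphas) - cost(A, B, x, alphas)) / h)
        x = [xj - eta * g for xj, g in zip(x, grad)]
    return x

# A crisp special case as a sanity check: 2x1 + x2 = 5, x1 + 3x2 = 10.
crisp = lambda v: (v, v, v)
A = [[crisp(2.0), crisp(1.0)], [crisp(1.0), crisp(3.0)]]
B = [crisp(5.0), crisp(10.0)]
print(solve(A, B, 2))  # close to the exact solution x = (1, 3)
```

For genuinely fuzzy coefficients the same loop minimizes the distance between the α-cuts of $\sum_j A_{ij}x_j$ and $B_i$, which is exactly the closeness criterion used in the text.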
At the end of the learning process we obtain an $x$ that best satisfies (in the sense of closeness of α-level sets) the system of equations (2).

It is well known that fuzzy expert systems and neural networks are universal approximators. However, as was pointed out by Buckley and Hayashi [1], fuzzy neural networks cannot approximate all continuous fuzzy functions, which means that the extended backpropagation algorithm from [5] cannot always be used in the process of finding approximate solutions to systems of fuzzy equations.

4 Application of fuzzy neural networks to FMOLP

Consider the following FMOLP
$$\max\{(C_1x, \dots, C_kx) \mid x \in \mathbb{R}^n\} \qquad (3)$$
where $C_i$ is a vector of fuzzy numbers, $i = 1, \dots, k$. Suppose that for each objective function of (3) we have two reference fuzzy numbers, denoted by $m_i$ and $M_i$, which represent the undesired and desired levels for the $i$-th objective, respectively. We should now find an $x^* \in \mathbb{R}^n$ such that $C_ix^*$ is as close as possible to the desired point $M_i$ and as far as possible from the undesired point $m_i$, for each $i$.

Let $d_i$ denote the maximal distance between the α-level sets of $m_i$ and $M_i$, and let $m_i'$ be the fuzzy number obtained by shifting $m_i$ by the value $d_i$ in the direction of $M_i$. We then consider $m_i'$ as the reference level for the largest acceptable value of the $i$-th objective function.

[Figure: $C_1x^*$ lies beyond $m_1'$, so $C_1x^*$ is too far from $M_1$.]

It is clear that good compromise solutions should be sought between $M_i$ and $m_i'$, and we can introduce weights measuring the importance of "closeness" and "farness".

[Figure: $C_1x^*$ is close to $M_1$, but not far enough from $m_1$.]

Let $\Omega \in [0,1]$ be the grade of importance of "closeness" to the desired level; then $(1-\Omega)$ denotes the importance of "farness" from the undesired level. We can then use the following training set for our single-layer fuzzy neural network:

Input        Output
$C_1$        $\Omega M_1 + (1-\Omega)m_1'$
...          ...
$C_k$        $\Omega M_k + (1-\Omega)m_k'$

[Figure: a good compromise solution, with $C_1x^*$ close to $\Omega M_1 + (1-\Omega)m_1'$.]
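The training targets $\Omega M_i + (1-\Omega)m_i'$ can be computed in a few lines for triangular fuzzy numbers. In this sketch the distance between α-level sets (maximum endpoint distance over a small α-grid) and the grid itself are assumptions of the illustration, since the paper does not fix a particular metric:

```python
# Sketch: building the compromise target Omega*M + (1-Omega)*m' of Section 4
# for triangular fuzzy numbers (a, b, c). The endpoint-based distance between
# alpha-level sets and the alpha grid are illustrative assumptions.

def alpha_cut(tri, alpha):
    """alpha-level interval of a triangular fuzzy number (a, b, c)."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

def max_level_distance(m, M, alphas):
    """d = maximal distance between the alpha-level sets of m and M."""
    d = 0.0
    for alpha in alphas:
        (m_l, m_r), (M_l, M_r) = alpha_cut(m, alpha), alpha_cut(M, alpha)
        d = max(d, abs(M_l - m_l), abs(M_r - m_r))
    return d

def compromise_target(m, M, omega, alphas=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Omega*M + (1-Omega)*m', where m' is m shifted towards M by d."""
    d = max_level_distance(m, M, alphas)
    shift = d if M[1] >= m[1] else -d        # shift in the direction of M
    m_shifted = tuple(v + shift for v in m)  # the reference level m'
    # For triangular numbers, a nonnegative linear combination acts
    # componentwise on the parameters (a, b, c).
    return tuple(omega * Mv + (1 - omega) * mv
                 for Mv, mv in zip(M, m_shifted))

m1 = (0.0, 1.0, 3.0)  # undesired level
M1 = (4.0, 5.0, 6.0)  # desired level
print(compromise_target(m1, M1, omega=0.8))
```

Feeding these targets, together with the inputs $C_i$, to the single-layer network of Section 3 then trains the decision variables $x$ towards the compromise described in the text.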
At the end of training we obtain the optimal weights (decision variables), for which the values of the fuzzy objectives are as close as possible to the desired levels and as far as possible from the undesired levels, in the sense of the chosen importance degrees.

References

[1] J. J. Buckley and Y. Hayashi, Can fuzzy neural nets approximate continuous fuzzy functions?, Fuzzy Sets and Systems, 61(1994) 43-51.

[2] T. Hashiyama, T. Furuhashi and Y. Uchikawa, A study on a multi-attribute decision making process using a fuzzy neural network, in: Proceedings of the Fifth IFSA World Congress, 1993, 810-813.

[3] T. Hashiyama, T. Furuhashi and Y. Uchikawa, A decision making model using a fuzzy neural network, in: Proceedings of the 2nd International Conference on Fuzzy Logic & Neural Networks, Iizuka, Japan, 1992, 1057-1060.

[4] E. H. L. Aarts, J. Wessels and P. J. Zwietering, The applicability of neural nets for decision support, in: Proceedings of the First European Conference on Fuzzy and Intelligent Technologies, Aachen, September 7-10, 1993, Verlag der Augustinus Buchhandlung, Aachen, 1993, 379-386.

[5] H. Ishibuchi, K. Kwon and H. Tanaka, Implementation of fuzzy if-then rules by fuzzy neural networks with fuzzy weights, in: Proceedings of the First European Conference on Fuzzy and Intelligent Technologies, Aachen, September 7-10, 1993, Verlag der Augustinus Buchhandlung, Aachen, 1993, 209-215.

[6] H. Ishibuchi, H. Tanaka and H. Okada, Interpolation of fuzzy if-then rules by neural networks, Int. J. of Approximate Reasoning, 10(1994) 3-27.