LOGARITHMIC BARRIER OPTIMIZATION
PROBLEM USING NEURAL NETWORK
A. K. Ojha, C. Mallick, D. Mallick
Abstract—The combinatorial optimization problem is one of the important applications of neural network computation. Linearly constrained continuous optimization problems are difficult to solve with an exact algorithm, but an algorithm for such problems can be derived by using a logarithmic barrier function. In this paper we attempt to solve the linearly constrained optimization problem by using a general logarithmic barrier function to obtain an approximate solution. The barrier parameter behaves as a temperature, decreasing to zero from a sufficiently large positive number satisfying convexity of the barrier function. We develop an algorithm that generates a sequence of solutions along a decreasing sequence of barrier parameters with zero limit.
Index Terms— Barrier function, weight matrix, barrier parameter, Lagrange multiplier, s-t max cut
problem, descent direction.
—————————— ‹ ——————————
1 INTRODUCTION
Let us consider a graph G = (V, E), where V = {1, 2, ..., n} is the set of nodes and E is the set of edges. The edge between two nodes i and j is denoted by (i, j). Let (w_ij)_{n×n} be a symmetric weight matrix, so that

  w_ij = w_ji,  i, j = 1, 2, ..., n.   (1.1)

Given two nodes s and t with s < t, there exists a partition {U, V − U} of the node set V, with s and t not belonging to the same set, such that the cut weight

  W(U) = Σ_{i∈U} Σ_{j∈V−U} w_ij   (1.2)

is maximized. This problem is called the s-t max cut problem. Writing x_i = 1 for i ∈ U and x_i = −1 otherwise, we have W(U) = (1/4)(Σ_{i,j} w_ij − x^T W x), so the problem is equivalent to

  min x^T W x  subject to x_s + x_t = 0, x_i ∈ {−1, 1}, i = 1, 2, ..., n.   (1.3)

Though the exact solution is difficult to obtain, it can be approximated through the linearly constrained continuous relaxation of (1.3) developed in Section 2.
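To make the problem concrete, the following short Python sketch evaluates the cut weight (1.2) and finds an s-t max cut by brute force for a tiny graph; the function names and the dense-matrix representation are our own illustrative choices, not part of the paper.

import itertools
import numpy as np

def cut_weight(W, U, V):
    # Cut weight (1.2): sum of w_ij over i in U, j in V - U.
    Uc = [j for j in V if j not in U]
    return sum(W[i, j] for i in U for j in Uc)

def st_max_cut_bruteforce(W, s, t):
    # Enumerate all partitions separating s and t (tiny graphs only).
    n = W.shape[0]
    V = list(range(n))
    others = [v for v in V if v not in (s, t)]
    best_U, best_w = None, -np.inf
    for r in range(len(others) + 1):
        for subset in itertools.combinations(others, r):
            U = set(subset) | {s}          # s in U, t in V - U
            w = cut_weight(W, U, V)
            if w > best_w:
                best_U, best_w = U, w
    return best_U, best_w

For a small random instance with integer weights between 1 and 50 (as in Section 4), st_max_cut_bruteforce(W, 0, n-1) returns the optimal s-t cut against which an approximate method can be checked.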
Benson, Ye and Zhang [2] found an algorithm to obtain an approximate solution in this direction for combinatorial and quadratic optimization problems. Neural computation for combinatorial and quadratic optimization goes back to Hopfield and Tank [10], who laid the foundation stone for solving such problems; many computational models and combinatorial optimization algorithms have since been developed. Aiyer, Niranjan and Fallside [1] extended this idea in a theoretical study of such problems. Similar algorithms were used for solving the travelling salesman problem by Durbin and Willshaw [6] and the multiple travelling salesman problem, by means of a neural network, by Wacholder, Han and Mann [15].
————————————————
used in solving travelling salesman problem by Durbin
and Willshaw [6] and multiple traveling salesman
• A.K. Ojha is with the School of Basic Sciences, Indian Institute of
Technology, Bhubaneswar, Orissa-751014, India.
problem by using Neural Network, due to Wacholder,
• C. Mallick is with the Department of Mathematics, B.O.S.E, Cuttack,
Orissa-753007, India.
• D. Mallick is with the Department of Mathematics, Centurion Institute of
Technology, Bhubaneswar, Orissa-751050, India.
Gee, Aiyer and Prager [7] made a framework for studying optimization problems by neural networks, while in a different paper Gee and Prager [8] used polyhedral combinatorics in neural networks. Urahama [13] used a gradient projection network for solving linear and nonlinear programming problems. Some related works can be found in Van den Bout and Miller [3], Gold and Rangarajan [9], Rangarajan, Gold and Mjolsness [11], Simic [12], Waugh and Westervelt [16], Wolfe, Parry and MacMillan [17], Xu [18], Yuille and Kosowsky [19], etc. A systematic study of neural computational models, given by Van den Berg [14] and by Cichocki and Unbehauen [4], helps to formulate algorithms that solve such problems in an efficient manner. All these works are of the deterministic annealing type: they study the global minimum of an effective energy at high temperature and track it as the temperature decreases. In this paper we generalize the work of Dang [5] by considering an equivalent linearly constrained optimization problem with a general logarithmic barrier function. We develop an algorithm for which the network converges to at least a local minimum point if a local minimum point of the barrier problem is generated for a decreasing sequence of barrier-parameter values with zero limit. The organization of the paper is as follows. Following the introduction, in Section 2 we consider the logarithmic barrier function and minimize the problem with a Lagrange multiplier. In Section 3 we prove the main theorem for the general logarithmic barrier function. A numerical example is presented in Section 4 to show the efficiency of the algorithm. Finally, the paper concludes with a remark in Section 5.

2 LINEAR LOGARITHMIC BARRIER FUNCTION

In order to find the solution of the problem, we first convert the s-t max cut problem into a linearly constrained continuous optimization problem and then derive some theoretical results from an application of the linear logarithmic barrier function. From the s-t max cut problem we have

  min f(x) = (1/2) x^T (W − cI) x
  subject to x_s + x_t = 0, x ∈ B,   (2.1)

where c is any positive number, I is the n×n identity matrix, W = (w_ij)_{n×n} is the symmetric weight matrix, and B = {x ∈ R^n : −1 ≤ x_i ≤ 1, i = 1, 2, ..., n} is the box containing the variables x_i, i = 1, 2, ..., n, of the objective function. Let us define the function

  d(x) = Σ_{i=1}^{n} [(q + p x_i) ln(q + p x_i) + (q − p x_i) ln(q − p x_i)],  −q/p < x_i < q/p,   (2.2)

which is used as a barrier function for a positive barrier parameter β in order to find an approximate solution; the parameters p and q are positive reals. The barrier parameter behaves as a temperature decreasing to zero from a sufficiently large positive number satisfying convexity of the barrier function. The barrier problem is then

  min f_β(x) = f(x) + β d(x)
  subject to x_s + x_t = 0.   (2.3)

From the definition of f(x) it is clear that f_β is bounded on the set B, and hence there exists an interior point of B that is a local or global minimum point of the problem defined in (2.1). We now develop an algorithm for approximating a solution of

  min f(x)  subject to x_s + x_t = 0

and for finding a minimum point with every component equal to −1 or 1 from the solution of (2.1) at the limiting value of β, which converges to zero.
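As a minimal numerical sketch of this setup (assuming the quadratic form in (2.1) and the generalized entropy-type barrier in (2.2) as reconstructed above; names such as f_beta are ours), the objective and barrier can be coded as:

import numpy as np

def f(x, W, c=1.0):
    # Quadratic objective (2.1): (1/2) x^T (W - cI) x.
    n = len(x)
    return 0.5 * x @ (W - c * np.eye(n)) @ x

def d(x, p=1.0, q=1.0):
    # Generalized logarithmic barrier (2.2), finite for -q/p < x_i < q/p.
    a, b = q + p * x, q - p * x
    return np.sum(a * np.log(a) + b * np.log(b))

def f_beta(x, W, beta, c=1.0, p=1.0, q=1.0):
    # Barrier objective (2.3): f(x) + beta * d(x).
    return f(x, W, c) + beta * d(x, p, q)

For p = q = 1 the barrier reduces to the entropy barrier used by Dang [5].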
JOURNAL OF COMPUTING, VOLUME 1, ISSUE 1, DECEMBER 2009, ISSN: 2151-9617
HTTPS://SITES.GOOGLE.COM/SITE/JOURNALOFCOMPUTING/
For any given barrier parameter β > 0, consider the Lagrangian

  L(x, λ) = f(x) + β d(x) + λ (x_s + x_t).   (2.4)

The first-order necessary optimality condition of (2.3) says that if x(β) is a minimum point of (2.3), then there exists a Lagrange multiplier λ satisfying

  ∇f(x(β)) + β ∇d(x(β)) + λ (e_s + e_t) = 0,   (2.5)

where e_s and e_t are the s-th and t-th unit coordinate vectors. For i ≠ s, t the multiplier term vanishes, while for i = s or t we have the corresponding component condition with λ.
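Written out componentwise (a worked step under the barrier (2.2) as reconstructed above; the closed form below follows by elementary algebra and is our addition), condition (2.5) can be solved explicitly for x_i:

\nabla d(x)_i = p \,\ln\frac{q + p x_i}{q - p x_i},
\qquad
g_i(x) + \beta p \,\ln\frac{q + p x_i}{q - p x_i} + \lambda(\delta_{is} + \delta_{it}) = 0,

so that for $i \neq s, t$, with $g(x) = \nabla f(x) = (W - cI)x$,

x_i = \frac{q}{p}\cdot\frac{e^{-g_i(x)/(\beta p)} - 1}{e^{-g_i(x)/(\beta p)} + 1}
    = -\frac{q}{p}\,\tanh\!\left(\frac{g_i(x)}{2\beta p}\right).

This update is the basis of the algorithm sketched in Section 3.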
From (2.5) we derive, for i = 1, 2, …, n,

  …   (2.6)

and, for i = s or t,

  …   (2.7)

We further derive

  …   (2.8)

and similarly

  …   (2.9)

From the above discussion it follows that … is a descent direction of f_β at x if … . Before proving our main theorem we will prove the following lemma.

2.1. Lemma
Assume that … . Then:
(i) if … , then … ;
(ii) if … , then … ;
(iii) if … , then … ;
(iv) if … , then … ;
(v) if … and … , then … .

Proof. Before proving these claims we first show that … if … .
It is clear that … , and that for any point … the quantity … can only be zero, negative or positive.

Case I: … . Considering … , from … we obtain … . Thus … , so … and … .

Case II: … . From … , using (2.8) and (2.9), we have … > 0.

Case III: … . Here we have … .
Using this result, the rest of the lemma can be proved similarly.

With the Lagrange multiplier λ we have developed an algorithm for approximating a solution of (2.1). Let {β_m} be any given sequence of positive numbers decreasing to zero. The value of β_1 should be sufficiently large so that f_{β_1} is convex over B. Let x^0 be an arbitrary nonzero interior point of B satisfying x_s^0 + x_t^0 = 0. For m = 1, 2, …, starting at x^{m−1}, we generate the feasible descent direction of the lemma to search for a better feasible interior point. Since the direction satisfies the desired property, the constraint x_s + x_t = 0 is always satisfied automatically while searching, provided the step length is a number between … . When β is sufficiently small, a feasible solution in which every component equals either −1 or 1 can be generated by rounding off the final iterate.
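The loop just described can be sketched compactly. The sketch below assumes the componentwise update x_i = −(q/p)·tanh(g_i/(2βp)) derived after (2.5); the fixed step rule, the heuristic choice of β_1, and the way we re-impose x_s + x_t = 0 are our own simplifications for illustration, not the paper's exact rule.

import numpy as np

def anneal_st_max_cut(W, s, t, c=1.0, p=1.0, q=1.0,
                      beta0=None, decay=0.95, iters=200):
    # Deterministic annealing sketch for (2.1) subject to x_s + x_t = 0.
    n = W.shape[0]
    rng = np.random.default_rng(0)
    x = 0.1 * rng.uniform(-1.0, 1.0, n)    # nonzero interior point of B
    x[t] = -x[s]                           # enforce x_s + x_t = 0
    A = W - c * np.eye(n)
    # beta_1 chosen large enough that f + beta*d is convex (heuristic here)
    beta = beta0 if beta0 is not None else abs(np.linalg.eigvalsh(A).min())
    for _ in range(iters):
        g = A @ x                                      # gradient of f
        target = -(q / p) * np.tanh(g / (2.0 * beta * p))
        x = x + 0.5 * (target - x)                     # feasible descent step
        x[t] = -x[s]                                   # keep the constraint
        beta *= decay                                  # temperature -> 0
    return np.sign(x)                                  # round off to ±1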
3 MAIN RESULTS

Using the above lemmas we can prove the following main theorem.

Theorem 1. Every limit point of … is a stationary point of (2.1).

Proof. Let {β_k} be any given sequence of positive numbers decreasing to zero, with β_1 sufficiently large so that f_{β_1} is convex over B. Let x^0 be an arbitrary nonzero interior point of B satisfying x_s^0 + x_t^0 = 0, and we want to solve (2.1). For k = 1, 2, …, starting from x^{k−1}, we derive the feasible descent direction for updating … , satisfying

  …   (2.10)

Given any point … generated by … , … is a solution of (2.10); the sequence so generated is bounded.
Let h(…) = … with … . As … , since … , we have

  h(…) = min … + …

It implies that h(…) … and, since … and … are bounded, … , which proves that h(…) is bounded. Thus … . According to the definition of A(x), it follows that … and … . For any … , every component of … is equal to either −1 or 1 for i = 1, 2, …, n, and … is a feasible solution of (2.1). From the above lemma we obtain … .

By the Bolzano–Weierstrass theorem we can extract a convergent subsequence from the sequence … . Let … be a convergent subsequence of the sequence … and let x* be the limit point of the subsequence. Clearly … converges to x* as … , since … is continuous and … .

Now we prove that A(x) is closed at every point. Considering the sequence … , let … be an arbitrary point of … . According to the definition, let … be a sequence convergent to … , r = 1, 2, …; then … converges to … . Since A(…) is closed, we need to show that … . From … and … , … is continuous; thus … as … . Since … , we have … , where … converges to … ; then there exists a number … satisfying … , and by (2.10) we can write … . If … is a convergent subsequence and A(x) is closed, then … , which contradicts that … converges as … . Hence every limit point of … is a stationary point of (2.1), which completes the proof.

The use of the logarithmic barrier function thus finds a minimal point for solving the linearly constrained continuous optimization problem equivalent to the s-t max cut problem.
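Since the theorem guarantees limit points whose components are ±1, the approximate cut can be read off directly from the final iterate. The helper below (our own illustration, reusing the weight-matrix convention of Section 1) recovers the partition and its cut weight (1.2):

def partition_from_vector(x, W):
    # U collects the +1 components; V - U collects the rest.
    n = len(x)
    U = [i for i in range(n) if x[i] > 0]
    Uc = [j for j in range(n) if x[j] <= 0]
    weight = sum(W[i, j] for i in U for j in Uc)   # cut weight (1.2)
    return U, weight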
4 NUMERICAL EXAMPLE

In order to establish the effectiveness and efficiency of the algorithm, we have solved the problem using MATLAB. For our solution we have considered a tolerance of 0.000001 and β_1 = 1 − S_min/2, with S_min being the minimum eigenvalue of W − I.

In our computation we have used the following variables:

NI = number of iterations;
OBJM = objective value of (1.4);
OBJU = greatest integer value of … subject to trace … = 0;
RATIO = (OBJU − OBJM)/OBJU.

In the computation we have considered weights w_ij that are random integers between 1 and 50. The results computed are given in the following table for the value m = .6.

Table for the numerical results for m = .6

Test No   NI    OBJM    OBJU    RATIO
1         100   35670   35798   .03
2         150   36150   37630   .02
3         200   46260   47350   .02
4         250   20350   21450   .02
5         300   35672   36505   .02

The above computation shows that the ratio is nearly equal to .02, which indicates convergence of the solution to a local minimum point and shows that the algorithm is effective.
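A rough reproduction of this experiment can be written in a few lines (in Python rather than the authors' MATLAB; the instance size n and the reuse of anneal_st_max_cut and partition_from_vector from the sketches above are our assumptions):

import numpy as np

rng = np.random.default_rng(1)
n, s, t = 30, 0, 1
# Random symmetric integer weights between 1 and 50, as in the paper.
W = rng.integers(1, 51, size=(n, n)).astype(float)
W = np.triu(W, 1)
W = W + W.T                                   # symmetric, zero diagonal

x = anneal_st_max_cut(W, s, t)                # approximate ±1 solution
U, objm = partition_from_vector(x, W)         # cut and its weight (OBJM)
# RATIO = (OBJU - OBJM)/OBJU, where OBJU is an upper bound on the cut value.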
5 CONCLUSION

In this paper we have taken a general logarithmic barrier function to solve the continuous optimization problem obtained by transforming the s-t max cut problem, and we have developed an algorithm based on the generalized logarithmic barrier function. The algorithm generates, for a decreasing sequence of barrier parameters with zero limit, a sequence of solutions that converges to at least a local minimum point of (2.1) with every component equal to either −1 or 1.
REFERENCES

[1] S. Aiyer, M. Niranjan and F. Fallside, "A theoretical investigation into the performance of the Hopfield model," IEEE Transactions on Neural Networks, 1, 204-215, 1990.
[2] S. J. Benson, Y. Ye and X. Zhang, "Mixed linear and semidefinite programming for combinatorial and quadratic optimization," Preprint, 1998.
[3] D. E. Van den Bout and T. K. Miller, "Graph partitioning using annealed neural networks," IEEE Transactions on Neural Networks, 1, 192-203, 1990.
[4] A. Cichocki and R. Unbehauen, Neural Networks for Optimization and Signal Processing, New York: Wiley, 1993.
[5] C. Dang, "Approximating a solution of the s-t max cut problem with a deterministic annealing algorithm," Neural Networks, 13, 801-810, 2000.
[6] R. Durbin and D. Willshaw, "An analogue approach to the travelling salesman problem using an elastic net method," Nature, 326, 689-691, 1987.
[7] A. Gee, S. Aiyer and R. Prager, "An analytical framework for optimizing neural networks," Neural Networks, 6, 79-97, 1993.
[8] A. Gee and R. Prager, "Polyhedral combinatorics and neural networks," Neural Computation, 6, 161-180, 1994.
[9] S. Gold and A. Rangarajan, "Softassign versus softmax: benchmarks in combinatorial optimization," Advances in Neural Information Processing Systems 8, 626-632, Cambridge, MA: MIT Press, 1996.
[10] J. Hopfield and D. Tank, "Neural computation of decisions in optimization problems," Biological Cybernetics, 52, 141-152, 1985.
[11] A. Rangarajan, S. Gold and E. Mjolsness, "A novel optimizing network architecture with applications," Neural Computation, 8, 1041-1060, 1996.
[12] P. Simic, "Statistical mechanics as the underlying theory of 'elastic' and 'neural' optimizations," Network, 1, 89-103, 1990.
[13] K. Urahama, "Gradient projection network: analog solver for linearly constrained nonlinear programming," Neural Computation, 8, 1061-1073, 1996.
[14] J. van den Berg, "Neural Relaxation Dynamics," Ph.D. thesis, Erasmus University of Rotterdam, The Netherlands, 1996.
[15] E. Wacholder, J. Han and R. Mann, "A neural network algorithm for the multiple traveling salesman problem," Biological Cybernetics, 61, 11-19, 1989.
[16] F. Waugh and R. Westervelt, "Analog neural networks with local competition: I. Dynamics and stability," Physical Review E, 47, 4524-4536, 1993.
[17] W. Wolfe, M. Parry and J. MacMillan, "Hopfield-style neural networks and the TSP," Proceedings of the 7th IEEE International Conference on Neural Networks, 4577-4582, New York: IEEE Press, 1994.
[18] L. Xu, "Combinatorial optimization neural nets based on a hybrid of Lagrange and transformation approaches," Proceedings of the World Congress on Neural Networks, San Diego, CA, 399-404, 1994.
[19] A. Yuille and J. Kosowsky, "Statistical physics algorithms that converge," Neural Computation, 6, 341-356, 1994.
Dr. A. K. Ojha received a Ph.D. (Mathematics) from Utkal University in 1997. Currently he is an Asst. Prof. in Mathematics at I.I.T. Bhubaneswar, India. He carries out research in neural networks, genetic algorithms, geometric programming and particle swarm optimization. He has served for more than 27 years in different Govt. colleges in the state of Orissa. He has published 22 research papers in different journals and 7 books for degree students, such as Fortran 77 Programming, A Text Book of Modern Algebra, and Fundamentals of Numerical Analysis.

Dr. C. Mallick received a Ph.D. (Mathematics) from Utkal University in 2008. Currently he is a Lecturer in Mathematics at B.O.S.E. Cuttack, India. He carries out research in neural networks. He has published 3 books for degree students, such as A Text Book of Engineering Mathematics and Interactive Engineering Mathematics.

Mr. D. Mallick received an M.Phil. (Mathematics) from Sambalpur University in 2002. Currently he is an Asst. Prof. in Mathematics at Centurion Institute of Technology, Bhubaneswar, India. He carries out research in neural networks and optimization theory. He has served for more than 6 years in different colleges in the state of Orissa. He has published 6 research papers in different journals.