Project Midterm Report
Qianli Deng (Shally)
12-19-2013
Solving minimax problems with feasible sequential quadratic programming
Qianli Deng (Shally)
E-mail: dqianli@umd.edu
Advisor: Dr. Mark A. Austin
Department of Civil and Environmental Engineering
Institute for Systems Research
University of Maryland, College Park, MD 20742
E-mail: austin@isr.umd.edu
Abstract:
As an SQP-type method, FSQP solves quadratic programs as subproblems. The algorithm is particularly suited to classes of engineering applications in which the number of variables is not too large but the evaluations of the objective or constraint functions, and of their gradients, are highly time consuming. This project implements the Feasible Sequential Quadratic Programming (FSQP) algorithm as a tool for solving nonlinear constrained minimax optimization problems with feasible iterates.
Keywords: Nonlinear optimization; minimax problem; sequential quadratic programming; feasible iterates
1. Introduction
The role of optimization in both engineering analysis and design is continually expanding. As a result, faster and more powerful optimization algorithms are in constant demand. Motivated by problems from engineering analysis and design, feasible sequential quadratic programming (FSQP) was developed to reduce the amount of computation dramatically while still enjoying the same global and fast local convergence properties as standard SQP.
Applications of FSQP span all branches of engineering, as well as medicine, physics, astronomy, economics, and finance. The algorithm is particularly appropriate for problems in which the number of variables is not too large, the function evaluations are expensive, and feasibility of the iterates is desirable; for problems with large numbers of variables, FSQP might not be a good fit. Minimax problems with large numbers of objective functions or inequality constraints, such as finely discretized semi-infinite optimization problems, can be handled effectively; problems involving time or frequency responses of dynamical systems are one instance.
The typical constrained minimax problem has the format shown in eq. (1):

  min_{x∈X} max_{i∈I^f} { f_i(x) }    (1)

where each f_i is smooth. FSQP generates a sequence {x_k} such that g_j(x_k) ≤ 0 for all j and k. Here I^f = {1, …, n_f}, where n_f stands for the number of objective functions f; if n_f = 0, then I^f = ∅. X is the set of points x ∈ ℝⁿ satisfying the constraints shown in eq. (2):
  X = { x ∈ ℝⁿ :
        b_l ≤ x ≤ b_u,
        g_j(x) ≤ 0,                        j = 1, …, n_i,
        ⟨c_{j−n_i}, x⟩ − d_{j−n_i} ≤ 0,    j = n_i + 1, …, t_i,
        h_j(x) = 0,                        j = 1, …, n_e,
        ⟨a_{j−n_e}, x⟩ − b_{j−n_e} = 0,    j = n_e + 1, …, t_e }    (2)
where g_j and h_j are smooth; n_i stands for the number of nonlinear inequality constraints; t_i stands for the total number of inequality constraints; n_e stands for the number of nonlinear equality constraints; and t_e stands for the total number of equality constraints. b_l and b_u stand for the lower and upper bounds on the variables, and a_j, b_j, c_j, and d_j stand for the parameters of the linear constraints.
2. Algorithm
To solve the constrained minimax problem using FSQP, four steps are taken in sequence: 1) initialization; 2) computation of a search arc; 3) arc search; and 4) updates. Figure 1 below shows a flowchart indicating how FSQP works for solving the constrained minimax problem.
[Flowchart: Start → Initialization → Computation of d⁰ → stop condition satisfied? (Yes → End; No → continue) → Computation of d¹ → Computation of d̃ → Arc search → Updates → maximum iterations reached? (Yes → End; No → return to Computation of d⁰)]
Figure 1. Flowchart for solving the constrained minimax problem using FSQP
Parameters are set as α = 0.1, ν = 0.01, η = 0.1, β = 0.5, κ = 2.1, τ₁ = τ₂ = 2.5, t̄ = 0.1, with further constants set to 1, 1, and 5, and a scalar in (0, 1].
Data: x₀ ∈ ℝⁿ, ε > 0, ε_e > 0, and p_{0,j} = 2 for j = 1, …, n_e.
2.1 Initialization
FSQP solves the original problem, which contains nonlinear equality constraints, by solving a modified optimization problem with only linear constraints and nonlinear inequality constraints. x₀, H₀, p₀, and k are initialized in the first step and then updated within each loop. To find the initial feasible point x₀, an initial guess x₀⁰ is generated; if x₀⁰ is infeasible for some constraint other than a nonlinear equality constraint, another feasible point is substituted, as summarized in Table 1.
Table 1. Methods for finding an initial feasible point

  Initial guess x₀⁰                                    Method
  Infeasible for linear constraints                    Strictly convex quadratic programming
  Infeasible for the nonlinear inequality constraints  Armijo-type line search
If x₀⁰ is infeasible for the linear constraints, a point x₀ satisfying these constraints is generated by solving a strictly convex quadratic program. If x₀⁰ (or the newly generated point) is infeasible for the nonlinear inequality constraints, a point x₀ satisfying all constraints is generated by iterating on the problem of minimizing the maximum of the nonlinear inequality constraints; an Armijo-type line search is used in this minimization to generate the initial feasible point x₀.
H₀ is set to the identity matrix as the initial Hessian matrix. Since the constraints h_j(x) = 0, j = 1, 2, …, n_e, are handled by penalization, the original objective function max_{i∈I^f} { f_i(x) } is replaced by the modified objective function

  f_m(x, p) = max_{i∈I^f} f_i(x) − Σ_{j=1}^{n_e} p_j h_j(x)    (3)

where the p_j, j = 1, …, n_e, are positive penalty parameters that are iteratively adjusted. For j = 1, …, n_e, h_j(x) is replaced by −h_j(x) whenever h_j(x₀) > 0. Set the loop index k = 0.
2.2 Computation of a search arc
Starting from the initial point x₀, the following steps are repeated as x_k converges to the solution. Four steps are used to compute a search arc: 1) compute d_k⁰; 2) compute d_k¹; 3) compute d_k; and 4) compute d̃_k. Here d_k⁰ stands for the direction of descent for the objective function, and d_k¹ stands for an arbitrary feasible descent direction. d_k stands for a feasible descent direction lying between the directions d_k⁰ and d_k¹, and d̃_k stands for a second-order correction that can be viewed as a "bending" of the search direction. The relations among these quantities are displayed in Figure 2.
Figure 2. Calculations of direction d in FSQP
2.2.1 Compute d_k⁰
Solve the quadratic program QP(x_k, H_k, p_k) for d_k⁰. At each iteration k, the quadratic program QP(x_k, H_k, p_k) that yields the SQP direction d_k⁰ is defined at x_k, for H_k symmetric positive definite, by

  min_{d⁰}  ½ ⟨d⁰, H_k d⁰⟩ + f′(x_k, d⁰, p_k)
  s.t.  b_l ≤ x_k + d⁰ ≤ b_u,
        g_j(x_k) + ⟨∇g_j(x_k), d⁰⟩ ≤ 0,   j = 1, …, t_i,
        h_j(x_k) + ⟨∇h_j(x_k), d⁰⟩ = 0,   j = 1, …, n_e,
        ⟨a_j, x_k + d⁰⟩ = b_j,            j = 1, …, t_e − n_e    (4)
Given I ⊆ I^f, the following notation is used:

  f_I(x) = max_{i∈I} { f_i(x) }    (5)

  f′(x, d, p) = max_{i∈I^f} { f_i(x) + ⟨∇f_i(x), d⟩ } − f_{I^f}(x) − Σ_{j=1}^{n_e} p_j ⟨∇h_j(x), d⟩    (6)
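As an illustration, a minimal Java sketch of evaluating eq. (6) follows; the Fun interface (value plus gradient callbacks) is an assumption made for this sketch, not the project's actual design.

```java
// Hypothetical sketch: the linearized max-objective f'(x, d, p) of eq. (6).
public final class DirectionalObjective {
    interface Fun {
        double value(double[] x);
        double[] gradient(double[] x);
    }

    static double dot(double[] a, double[] b) {
        double s = 0.0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    /** f'(x,d,p) = max_i { f_i(x) + <grad f_i(x), d> } - max_i f_i(x) - sum_j p_j <grad h_j(x), d>. */
    static double fPrime(Fun[] f, Fun[] h, double[] x, double[] d, double[] p) {
        double maxLin = Double.NEGATIVE_INFINITY;
        double maxF = Double.NEGATIVE_INFINITY;
        for (Fun fi : f) {
            double fx = fi.value(x);
            maxF = Math.max(maxF, fx);                               // f_{I^f}(x)
            maxLin = Math.max(maxLin, fx + dot(fi.gradient(x), d));  // linearized max
        }
        double penalty = 0.0;
        for (int j = 0; j < h.length; j++) {
            penalty += p[j] * dot(h[j].gradient(x), d);
        }
        return maxLin - maxF - penalty;
    }
}
```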
2.2.2 Stop check
If ‖d_k⁰‖ ≤ ε and Σ_{j=1}^{n_e} |h_j(x_k)| ≤ ε_e, the iteration stops and the value x_k is returned.
2.2.3 Compute d_k¹
Compute d_k¹, the solution of QP̃(x_k, d_k⁰, p_k), by solving the strictly convex quadratic program

  min_{(d¹, γ) ∈ ℝⁿ×ℝ}  (η/2) ⟨d_k⁰ − d¹, d_k⁰ − d¹⟩ + γ
  s.t.  b_l ≤ x_k + d¹ ≤ b_u,
        f′(x_k, d¹, p_k) ≤ γ,
        g_j(x_k) + ⟨∇g_j(x_k), d¹⟩ ≤ γ,   j = 1, …, n_i,
        ⟨c_j, x_k + d¹⟩ ≤ d_j,            j = 1, …, t_i − n_i,
        h_j(x_k) + ⟨∇h_j(x_k), d¹⟩ ≤ γ,   j = 1, …, n_e,
        ⟨a_j, x_k + d¹⟩ = b_j,            j = 1, …, t_e − n_e    (7)
2.2.4 Compute d_k
Set d_k = (1 − ρ_k) d_k⁰ + ρ_k d_k¹ with ρ_k = ‖d_k⁰‖^κ / (‖d_k⁰‖^κ + v_k), where v_k = max(0.5, ‖d_k¹‖^{τ₁}).
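In code, this convex combination is a one-liner per component; the sketch below assumes κ = 2.1 and τ₁ = 2.5 are passed in as the parameter values listed above.

```java
// Sketch of step 2.2.4: blend the SQP direction d_k0 with the feasible
// descent direction d_k1.
public final class SearchDirection {
    static double[] combine(double[] d0, double[] d1, double kappa, double tau1) {
        double vk = Math.max(0.5, Math.pow(norm(d1), tau1));   // v_k
        double n0k = Math.pow(norm(d0), kappa);                // ||d_k0||^kappa
        double rho = n0k / (n0k + vk);                         // rho_k
        double[] d = new double[d0.length];
        for (int i = 0; i < d.length; i++) {
            d[i] = (1.0 - rho) * d0[i] + rho * d1[i];          // d_k
        }
        return d;
    }

    static double norm(double[] v) {
        double s = 0.0;
        for (double vi : v) s += vi * vi;
        return Math.sqrt(s);
    }
}
```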
2.2.5 Compute d̃_k
In order to avoid the Maratos effect and guarantee a superlinear rate of convergence, a second-order correction d̃ = d̃(x, d, H) ∈ ℝⁿ is used to "bend" the search direction; that is, an Armijo-type search is performed along the arc x + t d + t² d̃. The Maratos correction d̃_k is taken as the solution of the strictly convex quadratic program

  min_{d̃ ∈ ℝⁿ}  ½ ⟨d_k + d̃, H_k (d_k + d̃)⟩ + f′_{I_k^f(d_k)}(x_k + d_k, x_k, d̃, p_k)
  s.t.  b_l ≤ x_k + d_k + d̃ ≤ b_u,
        g_j(x_k + d_k) + ⟨∇g_j(x_k), d̃⟩ ≤ −min(ν‖d_k‖, ‖d_k‖^{τ₂}),   j ∈ I_k^g(d_k), j ≤ n_i,
        ⟨c_{j−n_i}, x_k + d_k + d̃⟩ ≤ d_{j−n_i},                       j ∈ I_k^g(d_k), j > n_i,
        h_j(x_k + d_k) + ⟨∇h_j(x_k), d̃⟩ = −min(ν‖d_k‖, ‖d_k‖^{τ₂}),   j = 1, …, n_e,
        ⟨a_j, x_k + d_k + d̃⟩ = b_j,                                   j = 1, …, t_e − n_e    (8)

where, for I ⊆ I^f,

  f′_I(x + d, x, d̃, p) = max_{i∈I} { f_i(x + d) + ⟨∇f_i(x), d̃⟩ } − f_I(x) − Σ_{j=1}^{n_e} p_j ⟨∇h_j(x), d̃⟩    (9)

and the set of active constraints is given by

  I_k^g(d_k) = { j ∈ {1, …, t_i} : |g_j(x_k)| ≤ 0.2 ‖d_k‖ ‖∇g_j(x_k)‖ } ∪ { j ∈ {1, …, t_i} : λ_{k,j} > 0 }    (10)
2.3 Arc search
If n_i = n_e = 0, set Δ_k = f′(x_k, d_k, p_k); otherwise, set Δ_k = −⟨d_k⁰, H_k d_k⁰⟩. Compute t_k, the first number t in the sequence {1, β, β², …} satisfying

  f_m(x_k + t d_k + t² d̃_k, p_k) ≤ f_m(x_k, p_k) + t α Δ_k    (11)

  g_j(x_k + t d_k + t² d̃_k) ≤ 0,   j = 1, …, n_i    (12)

  ⟨c_{j−n_i}, x_k + t d_k + t² d̃_k⟩ ≤ d_{j−n_i},   j > n_i, j ∉ I_k^g(d_k)    (13)

  h_j(x_k + t d_k + t² d̃_k) ≤ 0,   j = 1, …, n_e    (14)
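A minimal Java sketch of this backtracking arc search follows, assuming hypothetical callbacks for f_m and for the feasibility tests (12)-(14); a practical implementation would also bound t away from zero.

```java
// Sketch of the arc search of Section 2.3.
public final class ArcSearch {
    interface Oracle {
        double fm(double[] x, double[] p);    // modified objective, eq. (3)
        boolean feasible(double[] x);         // conditions (12)-(14) at a trial point
    }

    static double search(Oracle oracle, double[] xk, double[] dk, double[] dTilde,
                         double[] pk, double deltaK, double alpha, double beta) {
        double fmk = oracle.fm(xk, pk);
        double t = 1.0;                        // try t in {1, beta, beta^2, ...}
        while (true) {
            double[] trial = new double[xk.length];
            for (int i = 0; i < xk.length; i++) {
                trial[i] = xk[i] + t * dk[i] + t * t * dTilde[i];  // x_k + t d_k + t^2 d~_k
            }
            // accept the first t satisfying the descent condition (11) and feasibility
            if (oracle.feasible(trial) && oracle.fm(trial, pk) <= fmk + t * alpha * deltaK) {
                return t;
            }
            t *= beta;                         // backtrack
        }
    }
}
```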
2.4 Updates
x_k, H_k, p_k, and k are updated within each loop. First, the new point x_{k+1} is calculated as

  x_{k+1} = x_k + t_k d_k + t_k² d̃_k    (15)

Then, the updating scheme for the Hessian estimate uses the BFGS formula with Powell's modification to compute the new approximation H_{k+1} to the Hessian of the Lagrangian, where H_{k+1} = H_{k+1}ᵀ > 0. Define

  δ_{k+1} = x_{k+1} − x_k    (16)

  γ_{k+1} = ∇_x L(x_{k+1}, λ̂_k) − ∇_x L(x_k, λ̂_k)    (17)

where L(x, λ̂) stands for the Lagrangian function and λ̂_k stands for the Lagrange multiplier estimate. A scalar θ_{k+1} ∈ (0, 1] is then defined by

  θ_{k+1} = 1                                                                        if ⟨δ_{k+1}, γ_{k+1}⟩ ≥ 0.2 ⟨δ_{k+1}, H_k δ_{k+1}⟩
  θ_{k+1} = 0.8 ⟨δ_{k+1}, H_k δ_{k+1}⟩ / (⟨δ_{k+1}, H_k δ_{k+1}⟩ − ⟨δ_{k+1}, γ_{k+1}⟩)  otherwise    (18)

Defining η_{k+1} ∈ ℝⁿ as

  η_{k+1} = θ_{k+1} γ_{k+1} + (1 − θ_{k+1}) H_k δ_{k+1}    (19)

the rank-two Hessian update is

  H_{k+1} = H_k − (H_k δ_{k+1} δ_{k+1}ᵀ H_k) / (δ_{k+1}ᵀ H_k δ_{k+1}) + (η_{k+1} η_{k+1}ᵀ) / (δ_{k+1}ᵀ η_{k+1})    (20)
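The damped update of eqs. (16)-(20) can be coded directly with dense arrays, as in the following sketch; a linear algebra library would normally be used instead.

```java
// Sketch of the BFGS update with Powell's modification, eqs. (16)-(20).
public final class HessianUpdate {
    static double[][] update(double[][] H, double[] delta, double[] gamma) {
        int n = delta.length;
        double[] Hd = multiply(H, delta);            // H_k delta_{k+1}
        double dHd = dot(delta, Hd);                 // <delta, H_k delta>
        double dg = dot(delta, gamma);               // <delta, gamma>

        // theta_{k+1}, eq. (18): damp only when the curvature is too small.
        double theta = (dg >= 0.2 * dHd) ? 1.0 : 0.8 * dHd / (dHd - dg);

        // eta_{k+1} = theta gamma + (1 - theta) H_k delta, eq. (19).
        double[] eta = new double[n];
        for (int i = 0; i < n; i++) eta[i] = theta * gamma[i] + (1.0 - theta) * Hd[i];
        double dEta = dot(delta, eta);

        // Rank-two update, eq. (20); keeps H_{k+1} symmetric positive definite.
        double[][] Hnew = new double[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                Hnew[i][j] = H[i][j] - Hd[i] * Hd[j] / dHd + eta[i] * eta[j] / dEta;
            }
        }
        return Hnew;
    }

    static double[] multiply(double[][] M, double[] v) {
        double[] r = new double[v.length];
        for (int i = 0; i < v.length; i++) {
            for (int j = 0; j < v.length; j++) r[i] += M[i][j] * v[j];
        }
        return r;
    }

    static double dot(double[] a, double[] b) {
        double s = 0.0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }
}
```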
To update the penalty parameters, solve the unconstrained quadratic problem in μ ∈ ℝ^{n_e}:

  min_{μ ∈ ℝ^{n_e}} ‖ Σ_{j=1}^{n_f} ξ_{k,j} ∇f_j(x_{k+1}) + Σ_{j=1}^{t_i} λ_{k,j} ∇g_j(x_{k+1}) + Σ_{j=n_e+1}^{t_e} λ_{k,j} ∇h_j(x_{k+1}) + Σ_{j=1}^{n_e} μ_j ∇h_j(x_{k+1}) ‖²    (21)

For j = 1, …, n_e, update the penalty parameters as

  p_{k+1,j} = p_{k,j}                   if p_{k,j} + μ_j ≥ 1
  p_{k+1,j} = max{1 − μ_j, δ p_{k,j}}   otherwise    (22)

Set k = k + 1.
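A small sketch of the penalty update, as reconstructed in eq. (22), is given below; the scaling constant delta is assumed here to be the value 2 listed in Appendix A.

```java
public final class PenaltyUpdate {
    /** Eq. (22): p_{k+1,j} = p_{k,j} if p_{k,j} + mu_j >= 1, else max(1 - mu_j, delta * p_{k,j}). */
    static void update(double[] p, double[] mu, double delta) {
        for (int j = 0; j < p.length; j++) {
            if (p[j] + mu[j] < 1.0) {
                p[j] = Math.max(1.0 - mu[j], delta * p[j]);  // strengthen a weak penalty
            } // otherwise p_j is left unchanged
        }
    }
}
```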
3. Implementation
Hardware: the program will be developed and implemented on my personal computer.
Software: the program will be developed in Java.
4. Validation
Example. For x ∈ ℝ³, the objective function is

  min_{x∈ℝ³} f(x) = (x₁ + 3x₂ + x₃)² + 4(x₁ − x₂)²

subject to the constraints

  x₁³ − 6x₂ − 4x₃ + 3 ≤ 0
  1 − x₁ − x₂ − x₃ = 0
  x_i ≥ 0,  i = 1, 2, 3

Feasible initial guess: x₀ = (0.1, 0.7, 0.2)ᵀ, f(x₀) = 7.2.
Global minimizer: x* = (0, 0, 1)ᵀ, f(x*) = 1.
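For reference, the validation problem can be coded directly; the method names below are illustrative only.

```java
// The validation example of Section 4 as plain Java functions.
public final class ValidationProblem {
    /** Objective: f(x) = (x1 + 3 x2 + x3)^2 + 4 (x1 - x2)^2. */
    static double objective(double[] x) {
        double a = x[0] + 3.0 * x[1] + x[2];
        double b = x[0] - x[1];
        return a * a + 4.0 * b * b;
    }

    /** Nonlinear inequality constraint: x1^3 - 6 x2 - 4 x3 + 3 <= 0. */
    static double g1(double[] x) {
        return x[0] * x[0] * x[0] - 6.0 * x[1] - 4.0 * x[2] + 3.0;
    }

    /** Linear equality constraint: 1 - x1 - x2 - x3 = 0. */
    static double h1(double[] x) {
        return 1.0 - x[0] - x[1] - x[2];
    }

    public static void main(String[] args) {
        double[] x0 = {0.1, 0.7, 0.2};           // feasible initial guess
        System.out.println(objective(x0));       // prints approximately 7.2
    }
}
```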
5. Testing
Example. The objective function is

  min_{x∈ℝ⁶} max_{i=1,…,163} f_i(x)

subject to

  −x₁ + s ≤ 0
  x₁ − x₂ + s ≤ 0
  x₂ − x₃ + s ≤ 0
  x₃ − x₄ + s ≤ 0
  x₄ − x₅ + s ≤ 0
  x₅ − x₆ + s ≤ 0
  x₆ − 3.5 + s ≤ 0

where

  f_i(x) = −1/15 − (2/15) ( cos(7π sin θ_i) + Σ_{j=1}^{6} cos(2π x_j sin θ_i) ),
  θ_i = π(8.5 + 0.5 i)/180,  i = 1, 2, …, 163

and s = 0.425. The feasible initial guess is x₀ = (0.5, 1, 1.5, 2, 2.5, 3)ᵀ, with the corresponding value of the objective function max_{i=1,…,163} f_i(x₀) = 0.2205.
Other practical problems, such as wind turbine siting, could also be addressed if time permits.
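The test functions, as reconstructed above, can be evaluated with a few lines of Java; running the sketch reproduces the quoted starting value.

```java
// The 163-objective minimax test problem of Section 5.
public final class MinimaxTestProblem {
    /** f_i(x) = -1/15 - (2/15)( cos(7 pi sin(theta_i)) + sum_j cos(2 pi x_j sin(theta_i)) ). */
    static double fi(double[] x, int i) {
        double theta = Math.PI * (8.5 + 0.5 * i) / 180.0;   // theta_i
        double u = Math.sin(theta);
        double sum = Math.cos(7.0 * Math.PI * u);
        for (double xj : x) {
            sum += Math.cos(2.0 * Math.PI * xj * u);
        }
        return -1.0 / 15.0 - (2.0 / 15.0) * sum;
    }

    static double maxF(double[] x) {
        double max = Double.NEGATIVE_INFINITY;
        for (int i = 1; i <= 163; i++) max = Math.max(max, fi(x, i));
        return max;
    }

    public static void main(String[] args) {
        double[] x0 = {0.5, 1.0, 1.5, 2.0, 2.5, 3.0};   // feasible initial guess
        System.out.println(maxF(x0));                   // approximately 0.2205
    }
}
```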
6. Project Schedule

October - December:
- Literature review;
- Specify the implementation module details;
- Structure the implementation;
- Develop the quadratic programming module;
  - Unconstrained quadratic program;
  - Strictly convex quadratic program;
- Validate the quadratic programming module;
- Develop the gradient and Hessian matrix calculation module;
- Validate the gradient and Hessian matrix calculation module;

January - May:
- Midterm project report and presentation;
- Develop the Armijo line search module;
- Validate the Armijo line search module;
- Develop the feasible initial point module;
- Validate the feasible initial point module;
- Integrate the program;
- Debug and document the program;
- Validate and test the program with case applications;
- Add the arc search variable d̃ in;
- Compare the calculation efficiency of the line search and arc search methods;
- Develop the user interface if time is available;
- Final project report and presentation;
7. Deliverables
- Project proposal;
- Algorithm description;
- Java code;
- Validation results;
- Test database;
- Test results;
- Project reports;
- Presentations;
8. Bibliography
Charalambous, C., and A. R. Conn. "An Efficient Method to Solve the Minimax Problem Directly." SIAM Journal on Numerical Analysis 15, no. 1 (1978): 162-187.
Goldfarb, Donald, and Ashok Idnani. "A Numerically Stable Dual Method for Solving Strictly Convex Quadratic Programs." Mathematical Programming 27, no. 1 (1983): 1-33.
Lawrence, Craig T., and André L. Tits. "Nonlinear Equality Constraints in Feasible Sequential Quadratic Programming." Optimization Methods and Software 6, no. 4 (1996): 265-282.
Lawrence, Craig T., and André L. Tits. "A Computationally Efficient Feasible Sequential Quadratic Programming Algorithm." SIAM Journal on Optimization 11, no. 4 (2001): 1092-1118.
Li, Wu, and John Swetits. "A New Algorithm for Solving Strictly Convex Quadratic Programs." SIAM Journal on Optimization 7, no. 3 (1997): 595-619.
Madsen, Kaj, and Hans Schjaer-Jacobsen. "Linearly Constrained Minimax Optimization." Mathematical Programming 14, no. 1 (1978): 208-223.
Mayne, D. Q., and E. Polak. "Feasible Directions Algorithms for Optimization Problems with Equality and Inequality Constraints." Mathematical Programming 11, no. 1 (1976): 67-80.
Watson, G. A. "The Minimax Solution of an Overdetermined System of Non-Linear Equations." IMA Journal of Applied Mathematics 23, no. 2 (1979): 167-180.
Appendix A – Nomenclature

  Symbol    Interpretation
  f_i       Sequence of objective functions
  n_f       Number of objective functions f
  g_j       Inequality constraint functions
  h_j       Equality constraint functions
  n_i       Number of nonlinear inequality constraints
  t_i       Total number of inequality constraints
  n_e       Number of nonlinear equality constraints
  t_e       Total number of equality constraints
  b_l       Lower bounds on the variables
  b_u       Upper bounds on the variables
  a_j       Parameters of the linear equality constraints
  b_j       Parameters of the linear equality constraints
  c_j       Parameters of the linear inequality constraints
  d_j       Parameters of the linear inequality constraints
  x₀⁰       Initial guess
  x₀        Initial feasible point
  H₀        Initial Hessian matrix (the identity matrix)
  x_k       Feasible point at iteration k
  H_k       Hessian matrix at iteration k
  p_{k,j}   Positive penalty parameters at iteration k
  k         Iteration index
  d_k⁰      Descent direction for the objective function at iteration k
  d_k¹      An arbitrary feasible descent direction at iteration k
  d_k       A feasible descent direction between the directions d_k⁰ and d_k¹ at iteration k
  d̃_k       A second-order correction direction at iteration k
  ρ_k       Weight between the directions d_k⁰ and d_k¹ at iteration k
  Symbol    Value
  α         0.1
  ν         0.01
  η         0.1
  β         0.5
  κ         2.1
  τ₁        2.5
  τ₂        2.5
  t̄         0.1
  ·         1
  ·         2
  δ         2
Appendix B – Solving strictly convex quadratic programming
Quadratic programming is the problem of optimizing a quadratic function of several variables subject to linear constraints on these variables. Consider a convex quadratic programming problem of the following form: minimize, with respect to the vector x, the function shown in eq. (B1),

  F(x) = ½ x′Cx + d′x    (B1)

where n stands for the number of variables, x is a column vector of the n variables x₁, x₂, …, x_n, C is an n × n symmetric positive definite matrix (C = C′), and d is an n-dimensional column vector. The minimization is subject to the linear inequality constraints shown in eqs. (B2) and (B3):
Ax  b
(B2)
x0
(B3)
where m stands for the number of constraints. A is an m n matrix. b is a given column vector of m
elements. All the variables are constraint to be nonnegative. Combining the objective function as well as
the constraints, the Lagrangian function for the quadratic program is formed as the following expression
eq.(4).
  L(x, λ) = ½ x′Cx + d′x + λ′(Ax − b)    (B4)

where λ is a column vector of the m Lagrange multipliers λ₁, λ₂, …, λ_m. The Karush-Kuhn-Tucker conditions are first-order necessary conditions for a solution of a nonlinear program to be optimal; for this problem they are given by eqs. (B5)-(B8):
  ∂L/∂λ = Ax − b ≤ 0    (B5)

  ∂L/∂x = x′C + d′ + λ′A ≥ 0, i.e., Cx + d + A′λ ≥ 0    (B6)

  λ′ (∂L/∂λ) = λ′(Ax − b) = 0    (B7)

  x′ (∂L/∂x) = x′(Cx + d + A′λ) = 0    (B8)
To solve the problem, slack variables are added to convert the inequality constraints into equality constraints: υ is the m-dimensional slack vector for the constraints and μ is the n-dimensional slack vector for the optimality conditions. The following eqs. (B9)-(B12) are then derived:

  Ax − b + υ = 0    (B9)

  Cx + d + A′λ − μ = 0    (B10)

  λ′(Ax − b + υ) = 0    (B11)

  x′(Cx + d + A′λ − μ) = 0    (B12)
where all the variables are nonnegative: x ≥ 0, λ ≥ 0, υ ≥ 0, and μ ≥ 0. Eqs. (B13)-(B16) are derived from eqs. (B9)-(B12) and constitute an equivalent form of the original quadratic program:

  Ax + υ = b    (B13)

  Cx + A′λ − μ = −d    (B14)

  λ′υ + μ′x = 0    (B15)

  x, λ, υ, μ ≥ 0    (B16)
To solve the above problem, artificial variables s are generated for the initial basis composition. υ and λ, as well as μ and x, are complementary slack variables. Accordingly, a coefficient vector γ, acting as Lagrangian multipliers, is placed in the objective function so that the complementary slackness conditions are satisfied naturally. The problem then becomes the following optimization problem, with the objective function of eq. (B17) and the constraints of eqs. (B18)-(B20):

  minimize g(x, λ, μ, υ, s) = γ′s + λ′υ + μ′x    (B17)

  subject to
  Ax + υ = b    (B18)
  Cx + A′λ − μ + s = −d    (B19)
  x, λ, μ, υ, s ≥ 0    (B20)

where s is an n-dimensional column vector. Here the coefficient of λ′υ + μ′x is set to 1, and γ is an n-dimensional column vector that is adjusted as the coefficient of s accordingly.
In order to solve the modified quadratic programming problem, Wolfe's extended simplex method has been chosen, which does not require gradient or Hessian computations. Accordingly, three major steps are taken: 1) set up the initial tableau; 2) repeat the loop until the optimal solution is found; and 3) return the final solution. Details are explained below.
Step 1. Set up the initial tableau
According to the constraints eqs. (B18) and (B19), Table A1 is set up for further computation. The variables υ and s are chosen as the initial basis.

Table A1. Set-up of the initial tableau for the simplex method

  Basic variables   Values of basic variables   x   λ    μ    υ   s
  υ                 b                           A   0    0    I   0
  s                 -d                          C   A′   -I   0   I
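A minimal Java sketch of this set-up step follows, with the column order of Table A1 (x, λ, μ, υ, s) hard-coded; initialTableau is an illustrative helper, not the project's final data structure.

```java
// Sketch of Step 1: assemble the initial tableau of Table A1 from the
// problem data (C, d, A, b). The last column holds the basic-variable values.
public final class WolfeTableau {
    static double[][] initialTableau(double[][] C, double[] d, double[][] A, double[] b) {
        int n = d.length, m = b.length;
        int cols = 3 * n + 2 * m + 1;                 // x, lambda, mu, upsilon, s | values
        double[][] T = new double[m + n][cols];

        for (int i = 0; i < m; i++) {                 // rows for  A x + upsilon = b
            for (int j = 0; j < n; j++) T[i][j] = A[i][j];
            T[i][2 * n + m + i] = 1.0;                // upsilon (initially basic)
            T[i][cols - 1] = b[i];
        }
        for (int i = 0; i < n; i++) {                 // rows for  C x + A' lambda - mu + s = -d
            for (int j = 0; j < n; j++) T[m + i][j] = C[i][j];
            for (int j = 0; j < m; j++) T[m + i][n + j] = A[j][i];  // A transpose
            T[m + i][n + m + i] = -1.0;               // -mu
            T[m + i][2 * n + 2 * m + i] = 1.0;        // s (artificial, initially basic)
            T[m + i][cols - 1] = -d[i];
        }
        return T;
    }
}
```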
Since feasible values of x, λ, μ, and υ have not been reached yet, the following coefficients are set as a starting point to simplify the calculation.

Table A2. Set-up of the initial price values for the simplex method

  Variables   x   λ   μ   υ   s
  Values      0   0   0   0   1′
Step 2. Repeat the loop until the optimal solution is found
Three sub-steps are conducted within the loop: 1) search for the pivot element; 2) perform the pivoting; and 3) update the prices. Each time a new feasible solution is found through pivoting, the price table for the variables is updated in order to check whether the new combination can further reduce the value of the objective function g(x, λ, μ, υ, s). The pivot element selects the new variable to enter the basis, while the previous one leaves. Two rules are followed when searching for the pivot element in the loop, as sketched in the code after this list.

- Rule 1. Search for the pivot column: find the largest negative indicator in the price table after the canonical transformation.
- Rule 2. Search for the pivot row: find the smallest nonnegative indicator for the ratio.
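The two rules translate directly into code; the following sketch assumes a dense tableau T with the basic-variable values in its last column and a separate array of price (reduced-cost) values.

```java
// Sketch of the pivot selection of Step 2.
public final class PivotSelection {
    /** Rule 1: column with the most negative price; -1 means no candidate (optimal). */
    static int pivotColumn(double[] price) {
        int col = -1;
        double most = 0.0;
        for (int j = 0; j < price.length; j++) {
            if (price[j] < most) { most = price[j]; col = j; }
        }
        return col;
    }

    /** Rule 2: row with the smallest nonnegative ratio value / pivot-column entry. */
    static int pivotRow(double[][] T, int col) {
        int row = -1;
        double best = Double.POSITIVE_INFINITY;
        for (int i = 0; i < T.length; i++) {
            if (T[i][col] > 0.0) {                    // only positive entries give valid ratios
                double ratio = T[i][T[i].length - 1] / T[i][col];
                if (ratio >= 0.0 && ratio < best) { best = ratio; row = i; }
            }
        }
        return row;
    }
}
```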
Step 3. Return the final solution
The stopping criterion for the loop iteration is that changes of the variables can no longer reduce the value of the objective function g(x, λ, μ, υ, s), which means that the updated price values after the canonical transformation are all nonnegative.
In order to demonstrate the feasibility of solving quadratic programming problems, the following test case has been conducted:

  minimize F(x) = x₁² + 2x₂² − 2x₁x₂ − 4x₁    (B21)

  subject to 2x₁ + x₂ ≤ 6.0    (B22)

  x₁ − 4x₂ ≤ 0.0    (B23)

  x₁, x₂ ≥ 0.0    (B24)
Accordingly, the input parameters are set as: number of constraints m = 2, number of variables n = 2, x = (x₁, x₂)′, and

  A = | 2   1 |    b = | 6 |    C = |  2  -2 |    d = | -4 |
      | 1  -4 |        | 0 |        | -2   4 |        |  0 |

The initial tableau and price values are set up as Tables A3 and A4, respectively.
Table A3. Application of the simplex method for quadratic programming: initial tableau

  Basic      Values of basic
  variables  variables         x₁    x₂     λ₁   λ₂    μ₁    μ₂    υ₁   υ₂   s₁   s₂
  υ₁         6.0               2.0   1.0    0.0  0.0   0.0   0.0   1.0  0.0  0.0  0.0
  υ₂         0.0               1.0   -4.0   0.0  0.0   0.0   0.0   0.0  1.0  0.0  0.0
  s₁         4.0               2.0   -2.0   2.0  1.0   -1.0  0.0   0.0  0.0  1.0  0.0
  s₂         0.0               -2.0  4.0    1.0  -4.0  0.0   -1.0  0.0  0.0  0.0  1.0
Table A4. Set-up of the initial price values for the simplex method

  Variables  x₁   x₂   λ₁   λ₂   μ₁   μ₂   υ₁   υ₂   s₁   s₂
  Values     0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  1.0  1.0
Switching Table A4 to canonical form, the price values are shown in Table A5 below.
Table A5. Price values after the canonical transformation

  Variables  x₁   x₂    λ₁    λ₂   μ₁   μ₂   υ₁   υ₂   s₁   s₂
  Values     0.0  -2.0  -3.0  3.0  1.0  1.0  0.0  0.0  0.0  0.0
As a result, variable λ₁ is chosen as the pivot variable, indicating the pivot column. The pivot row is selected from the smallest nonnegative ratio of the basic-variable values (6.0, 0.0, 4.0, 0.0) to the pivot-column entries (0.0, 0.0, 2.0, 1.0); the pivot element based on the chosen column and row is therefore the 1.0 in row s₂. Accordingly, pivoting has been conducted and Table A6 is obtained as follows.
Table A6. Application of the simplex method for quadratic programming: after the first pivot

  Basic      Values of basic
  variables  variables         x₁    x₂     λ₁   λ₂    μ₁    μ₂    υ₁   υ₂   s₁   s₂
  υ₁         6.0               2.0   1.0    0.0  0.0   0.0   0.0   1.0  0.0  0.0  0.0
  υ₂         0.0               1.0   -4.0   0.0  0.0   0.0   0.0   0.0  1.0  0.0  0.0
  s₁         4.0               6.0   -10.0  0.0  9.0   -1.0  2.0   0.0  0.0  1.0  -2.0
  λ₁         0.0               -2.0  4.0    1.0  -4.0  0.0   -1.0  0.0  0.0  0.0  1.0
1
2
s1
1
From Table 6, 1  6.0 , 2  0.0 , s1  4.0 and 1  0.0 built a feasible solution. Then the price values
of the corresponding complementary variables are set. For instance, the complementary variable of 1 is
1 . The updated price value for 1 is 6.0. Table 7 below indicates the details.
Table A7. Price values after the canonical transformation

  Variables  x₁   x₂   λ₁   λ₂   μ₁   μ₂   υ₁   υ₂   s₁   s₂
  Values     0.0  0.0  6.0  0.0  0.0  0.0  0.0  0.0  1.0  1.0
Repeating the process, the subsequent tableau updates are demonstrated in Table A8.

Table A8. Application of the simplex method for quadratic programming: iterations 3-5

  Iteration  Basic      Values of basic
             variables  variables         x₁    x₂    λ₁     λ₂     μ₁     μ₂     υ₁     υ₂   s₁     s₂
  3          υ₁         6.0               2.5   0.0   -0.25  1.0    0.0    0.25   1.0    0.0  0.0    -0.25
             υ₂         0.0               -1.0  0.0   1.0    -4.0   0.0    -1.0   0.0    1.0  0.0    1.0
             s₁         4.0               1.0   0.0   2.5    -1.0   -1.0   -0.5   0.0    0.0  1.0    0.5
             x₂         0.0               -0.5  1.0   0.25   -1.0   0.0    -0.25  0.0    0.0  0.0    0.25
  4          x₁         2.4               1.0   0.0   -0.1   0.4    0.0    0.1    0.4    0.0  0.0    -0.1
             υ₂         2.4               0.0   0.0   0.9    -3.6   0.0    -0.9   0.4    1.0  0.0    0.9
             s₁         1.6               0.0   0.0   2.6    -1.4   -1.0   -0.6   -0.4   0.0  1.0    0.6
             x₂         1.2               0.0   1.0   0.2    -0.8   0.0    -0.2   0.2    0.0  0.0    0.2
  5          x₁         2.46              1.0   0.0   0.0    0.35   -0.04  0.08   0.38   0.0  0.04   -0.08
             υ₂         1.85              0.0   0.0   0.0    -3.12  0.35   -0.69  0.54   1.0  -0.35  0.69
             λ₁         0.62              0.0   0.0   1.0    -0.54  -0.38  -0.23  -0.15  0.0  0.38   0.23
             x₂         1.08              0.0   1.0   0.0    -0.69  0.08   -0.15  0.23   0.0  -0.08  0.15

Finally, the optimal solution is reached in the fifth iteration: x₁* = 2.46, x₂* = 1.08, F(x*) = -6.77.