Karmarkar Algorithm
Anup Das, Aissan Dalvandi, Narmada Sambaturu, Seung Min,
and Le Tuan Anh
Contents
• Overview
• Projective transformation
• Orthogonal projection
• Complexity analysis
• Transformation to Karmarkar’s canonical form
LP Solutions
• Simplex
  • Dantzig, 1947
• Ellipsoid
  • Khachiyan, 1979
• Interior Point
  • Affine method, 1967
  • Log barrier method, 1977
  • Projective method, 1984
    • Narendra Karmarkar, from AT&T Bell Labs
Linear Programming
General LP
Minimize cᵀx
Subject to
• Bx ≥ t
• x ≥ 0

Karmarkar's Canonical Form
Minimize cᵀx
Subject to
• Ax = 0
• eᵀx = 1
• x ≥ 0
Minimum value of cᵀx is 0

Ω ≡ {x | Ax = 0} ≡ null space of A
Δ ≡ {x | x ≥ 0, eᵀx = 1} ≡ simplex
P ≡ Ω ∩ Δ ≡ polytope (the feasible region)
Karmarkar's Algorithm
• Step 1: Take an initial point x^(0), set k = 0
• Step 2: Repeat until ‖x^(k) − x^(k−1)‖ ≤ γ or cᵀx^(k) ≤ ε:
  • 2.1 Transformation: T: Δ → Δ′ such that x′^(k) is the center of Δ′
    • This gives us the LP problem in the transformed space
  • 2.2 Orthogonal projection and movement: Project c′ onto the feasible
    region and move x′^(k) in the steepest descent direction
    • This gives us the point x′^(k+1)
  • 2.3 Inverse transform: T⁻¹: Δ′ → Δ
    • This gives us x^(k+1)
  • 2.4 Use x^(k+1) as the start point of the next iteration
    • Set k = k + 1
Step 2.1
• Transformation: T: Δ → Δ′ such that x′^(k) is the center of Δ′ - Why?
[Figure: the current iterate x^(k) in the simplex is mapped to the center
x′^(k) = a of the transformed simplex]
Step 2.2
• Projection and movement: Project c′ to the feasible region and
move x′^(k) in the steepest descent direction - Why?
[Figure: the cost vector c′ and its projection onto the feasible region;
the movement is along −c′ projected, i.e. the steepest feasible descent]
Big Picture
[Figure: the iteration alternates between the original and transformed spaces:
x^(k) → x′^(k) (transform), x′^(k) → x′^(k+1) (step), x′^(k+1) → x^(k+1)
(inverse transform); then x^(k+1) → x′′^(k+1), step to x′′^(k+2),
inverse transform to x^(k+2), and so on]
Projective Transformation
• Transformation: T: Δ → Δ′ such that x′^(k) is the center of Δ′

Transform the current iterate x^(k) = (x_1^(k), x_2^(k), …, x_n^(k))ᵀ
into the center of the simplex:

x′^(k) = a = (1/n, 1/n, …, 1/n)ᵀ = e/n,  where eᵀ = (1 1 … 1)
Projective transformation
• We define the transform x′ = T_x^(k)(x) such that
  • x^(k) ∈ Δ → e/n ∈ Δ′
• We transform every other point with respect to the point x^(k)

Mathematically:
• Defining D = diag(x^(k)) = diag(x_1^(k), x_2^(k), …, x_n^(k))
• Projective transformation (x → x′):
  x′ = T_x^(k)(x) = D⁻¹x / (eᵀD⁻¹x)
• Inverse transformation (x′ → x):
  x = T_x^(k)⁻¹(x′) = Dx′ / (eᵀDx′)
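The transform and its inverse can be sketched in a few lines (a minimal pure-Python sketch; the function names `transform` and `inverse_transform` are illustrative, not from the deck):

```python
def transform(x, xk):
    """Projective transform T_x(k): x'_i = (x_i / x(k)_i) / sum_j (x_j / x(k)_j)."""
    y = [xi / di for xi, di in zip(x, xk)]   # D^-1 x
    s = sum(y)                               # e^T D^-1 x
    return [yi / s for yi in y]

def inverse_transform(xp, xk):
    """Inverse transform: x_i = (x(k)_i * x'_i) / sum_j (x(k)_j * x'_j)."""
    y = [di * xpi for di, xpi in zip(xk, xp)]  # D x'
    s = sum(y)                                 # e^T D x'
    return [yi / s for yi in y]

xk = [0.5, 0.3, 0.2]        # current iterate on the simplex
center = transform(xk, xk)  # x^(k) itself maps to the center (1/3, 1/3, 1/3)
```

Applying `inverse_transform` after `transform` recovers any simplex point, since points on the simplex are already normalized.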
Problem transformation:
• Substituting the inverse transformation x = Dx′ / (eᵀDx′) into the problem:

Original problem:
Minimize cᵀx
s.t.
• Ax = 0
• eᵀx = 1
• x ≥ 0

Transformed problem:
Minimize cᵀDx′ / (eᵀDx′)
s.t.
• ADx′ = 0
• eᵀx′ = 1
• x′ ≥ 0

(The constraint Ax = 0 becomes ADx′ = 0 because the strictly positive
denominator eᵀDx′ can be dropped.)
The new objective function is not a linear function.
Problem transformation:
Minimize cᵀDx′ / (eᵀDx′)
s.t.
• ADx′ = 0
• eᵀx′ = 1
• x′ ≥ 0

Convert: since the optimal value of cᵀx is 0 and the denominator eᵀDx′ is
strictly positive, minimizing the ratio is equivalent to minimizing its
numerator. With c′ = Dc and A′ = AD:

Minimize c′ᵀx′
s.t.
• A′x′ = 0
• eᵀx′ = 1
• x′ ≥ 0
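In matrix terms the conversion is just two rescalings of the problem data (a NumPy sketch; the instance and the name `transform_problem` are illustrative):

```python
import numpy as np

def transform_problem(A, c, xk):
    """Form the transformed data A' = A D and c' = D c, with D = diag(x^(k))."""
    D = np.diag(xk)
    return A @ D, D @ c

A = np.array([[1.0, -1.0, 0.0]])   # one equality constraint A x = 0
c = np.array([2.0, 1.0, 3.0])
xk = np.array([0.3, 0.3, 0.4])     # feasible interior point: A @ xk = 0, sums to 1
Ap, cp = transform_problem(A, c, xk)
# the center e/n of the transformed simplex satisfies A' x' = 0
```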
Orthogonal Projection
Minimize c′ᵀx′
s.t.
• A′x′ = 0
• eᵀx′ = 1
• x′ ≥ 0

• Projecting c′ onto the constraints:
  • A′x′ = 0 ⇒ projection onto N(A′), the null space of A′
  • eᵀx′ = 1 ⇒ normalization
Orthogonal Projection
• Given a plane and a point outside the plane, which direction
should we move from the point to guarantee the fastest
descent towards the plane?
• Answer: the perpendicular direction
• Consider a plane and a vector c′
[Figure: the vector c′, its component p in the plane, and the perpendicular
component c_p, which lies in N(A′)]
Orthogonal Projection
• Let a₁ and a₂ be a basis of the plane orthogonal to N(A′),
i.e. the rows of A′
• That plane is spanned by the columns of A′ᵀ = [a₁ a₂]
• p lies in the plane, so p = y₁a₁ + y₂a₂ = A′ᵀy
• c_p = c′ − p = c′ − A′ᵀy
• c_p is perpendicular to a₁ and a₂:
  • A′(c′ − A′ᵀy) = 0
Orthogonal Projection
• A′A′ᵀy = A′c′
• y = (A′A′ᵀ)⁻¹A′c′
• p = A′ᵀy = A′ᵀ(A′A′ᵀ)⁻¹A′c′ = Pc′
• P = A′ᵀ(A′A′ᵀ)⁻¹A′ = projection matrix onto the row space of A′
• We need the component c_p ∈ N(A′)
  • Remember A′c_p = 0
• c_p = c′ − p = (I − A′ᵀ(A′A′ᵀ)⁻¹A′)c′ = P_N c′
• P_N is the projection matrix with respect to N(A′)
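The null-space projector is easy to check numerically (NumPy sketch; `null_space_project` is an illustrative name, and we solve a linear system rather than forming the inverse explicitly):

```python
import numpy as np

def null_space_project(Ap, v):
    """Project v onto N(A'): c_p = (I - A'^T (A' A'^T)^{-1} A') v."""
    y = np.linalg.solve(Ap @ Ap.T, Ap @ v)  # y = (A' A'^T)^{-1} A' v
    return v - Ap.T @ y                     # subtract the row-space component

Ap = np.array([[1.0, 2.0, -1.0],
               [0.0, 1.0,  1.0]])
cprime = np.array([3.0, -1.0, 2.0])
cp = null_space_project(Ap, cprime)
# A' c_p = 0: the projected vector lies in the null space of A'
```

Projecting a second time leaves the vector unchanged, as expected of an orthogonal projector.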
Orthogonal Projection
• What is the direction of c_p?
• Steepest descent = −c_p
• We now have the direction of movement: we projected c′ onto N(A′)
• For the constraint eᵀx′ = 1, normalize:
  ĉ_p = c_p / ‖c_p‖
Calculate step size
Minimize c′ᵀx′
s.t.
• A′x′ = 0
• eᵀx′ = 1
• x′ ≥ 0

• r = radius of the incircle of the simplex = 1 / √(n(n−1))
• x′^(k+1) = a − α r ĉ_p, where a = e/n is the center and 0 < α < 1
Movement and inverse transformation
[Figure: x^(k) is transformed to the center x′^(k) = a = e/n; the step
produces x′^(k+1), which the inverse transform maps back to x^(k+1)]
Matlab Demo
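The deck defers to a Matlab demo at this point. As a stand-in, here is a minimal Python/NumPy sketch of the full iteration loop on a toy canonical-form instance (the instance, the names, and the iteration count are illustrative; the simplex constraint eᵀx′ = 1 is handled by stacking it with A′ before projecting):

```python
import numpy as np

def karmarkar_iteration(A, c, x, alpha=0.25):
    """One step: transform, project, move from the center, transform back."""
    n = x.size
    D = np.diag(x)
    Ap = A @ D                             # A' = A D
    cp = D @ c                             # c' = D c
    B = np.vstack([Ap, np.ones(n)])        # stack e^T x' = 1 with A' x' = 0
    y = np.linalg.solve(B @ B.T, B @ cp)
    d = cp - B.T @ y                       # projection of c' onto N(B)
    d /= np.linalg.norm(d)                 # steepest feasible descent direction
    r = 1.0 / np.sqrt(n * (n - 1))         # inradius of the simplex
    xt = np.ones(n) / n - alpha * r * d    # move from the center a = e/n
    x_new = D @ xt
    return x_new / x_new.sum()             # inverse transform back to Delta

# toy instance: min x1 s.t. x1 + x2 - 2 x3 = 0, e^T x = 1, x >= 0 (optimum 0)
A = np.array([[1.0, 1.0, -2.0]])
c = np.array([1.0, 0.0, 0.0])
x = np.ones(3) / 3                         # interior feasible starting point
for _ in range(60):
    x = karmarkar_iteration(A, c, x)
# c^T x should now be close to the optimal value 0
```

Since the step length α r is less than the inradius, every iterate stays strictly inside the simplex, and A x = 0 is preserved exactly by construction.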
Running Time
• Total complexity of an iterative algorithm =
(# of iterations) × (operations in each iteration)
• We will prove that the # of iterations is O(nL)
• Operations in each iteration: O(n^2.5) (amortized, using rank-one updates)
• Therefore the running time of Karmarkar's algorithm is O(n^3.5 L)
# of iterations
• Let us calculate the change in the objective function value at the end of
each iteration
• The objective function cᵀx^(k) changes upon transformation
• Therefore use Karmarkar's potential function

f(x) = Σ_{j=1}^n ln( cᵀx / x_j )

f(x^(k)) = Σ_j [ ln(cᵀx^(k)) − ln x_j^(k) ]

f(x^(k)) = n ln(cᵀx^(k)) − Σ_j ln x_j^(k)

• The potential function is invariant under the transformation (up to an
additive constant)
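The two forms of the potential above are the same quantity, which is easy to check numerically (pure-Python sketch; `potential` is an illustrative name):

```python
import math

def potential(c, x):
    """Karmarkar's potential f(x) = sum_j ln(c^T x / x_j)."""
    ctx = sum(ci * xi for ci, xi in zip(c, x))
    return sum(math.log(ctx / xj) for xj in x)

c = [2.0, 1.0, 3.0]
x = [0.2, 0.5, 0.3]
n = len(x)
ctx = sum(ci * xi for ci, xi in zip(c, x))
expanded = n * math.log(ctx) - sum(math.log(xj) for xj in x)
# potential(c, x) equals the expanded form n ln(c^T x) - sum_j ln(x_j)
```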
# of iterations
• The improvement in the potential function is
f(x^(k)) − f(x^(k−1))
= [ n ln(cᵀx^(k)) − Σ_j ln x_j^(k) ] − [ n ln(cᵀx^(k−1)) − Σ_j ln x_j^(k−1) ]
• Rearranging,
f(x^(k)) − f(x^(k−1)) = n ln( cᵀx^(k) / cᵀx^(k−1) ) − Σ_j ln( x_j^(k) / x_j^(k−1) )
# of iterations
• Since the potential function is invariant under the transformation, we can use
f(x^(k)) − f(x^(k−1)) ≈ f(x′^(k)) − f(x′^(k−1))
where x′^(k) and x′^(k−1) are points in the transformed space.
• At each step in the transformed space,
x′^(k) = x′^(k−1) − α r c_p/‖c_p‖ = a₀ − α r c_p/‖c_p‖
where a₀ = e/n is the center.
• Hence (dropping the Σ_j ln term, which is second order),
f(x^(k)) − f(x^(k−1)) ≈ n ln( c′ᵀ(a₀ − α r c_p/‖c_p‖) / (c′ᵀa₀) )
# of iterations
• Simplifying, with β = α r n and using c′ᵀc_p = ‖c_p‖²,
f(x^(k)) − f(x^(k−1)) ≈ n ln( 1 − (β/n) ‖c_p‖ / (c′ᵀa₀) )
• For small x, ln(1 − x) ≈ −x
• So, supposing β ‖c_p‖ / (n c′ᵀa₀) is small,
f(x^(k)) − f(x^(k−1)) ≈ −β ‖c_p‖ / (c′ᵀa₀) = −n β ‖c_p‖ / Σ_j c′_j
(since c′ᵀa₀ = (1/n) Σ_j c′_j)
# of iterations
• It can be shown that
R ≥ ‖c_p‖ / Σ_j c′_j ≥ r > 1/n
• The worst case for f(x^(k)) − f(x^(k−1)) ≈ −n β ‖c_p‖ / Σ_j c′_j
would be if ‖c_p‖ / Σ_j c′_j = 1/n
• In this case
f(x^(k)) − f(x^(k−1)) ≈ −β   (β = α r n)
# of iterations
• Thus we have proved that
f(x^(k)) − f(x^(k−1)) ≤ −δ
for some constant δ > 0
• Thus
f(x^(k)) − f(x^(0)) ≤ −kδ
where k is the number of iterations to converge
• Plugging in the definition of the potential function,
Σ_{j=1}^n ln( cᵀx^(k) / x_j^(k) ) − Σ_{j=1}^n ln( cᵀx^(0) / x_j^(0) ) ≤ −kδ
# of iterations
• Rearranging (and using x_j ≤ 1 on the simplex and x_j^(0) = 1/n), we get
• n ln( cᵀx^(k) / cᵀx^(0) ) ≤ n ln n − kδ
• cᵀx^(k) / cᵀx^(0) ≤ n e^(−kδ/n)
# of iterations
• Therefore there exists some constant C such that after
k = C · n(q + ln n)/δ iterations,
cᵀx^(k) / cᵀx^(0) ≤ n e^(−kδ/n) ≤ e^(−q) ≤ 2^(−q)
where q = O(L), L being the number of bits in the input
• 2^(−q) is sufficiently small to cause the algorithm to converge
• Thus the number of iterations is O(nL)
Rank-one modification
• The computational effort is dominated by:
c_p = (I − A′ᵀ(A′A′ᵀ)⁻¹A′) Dc,  with A′ = AD
• Substituting A′ in, we have
A′A′ᵀ = (AD)(AD)ᵀ = AD Dᵀ Aᵀ = AD²Aᵀ
• The only quantity that changes from step to step is D (D_ii = x_i^(k))
• Intuition:
  • Let D and D′ be n × n diagonal matrices
  • If D and D′ are close, then calculating (AD′²Aᵀ)⁻¹ given (AD²Aᵀ)⁻¹
    should be cheap!
Method
• Define a diagonal matrix D′^(k), a "working approximation" to D^(k)
at step k, such that
1/2 ≤ ( D′_ii^(k) / D_ii^(k) )² ≤ 2   for i = 1, 2, …, n
• We update D′ by the following strategy
  • D′^(k+1) = λ^(k) D′^(k)  (we will explain this scaling factor in
    later slides)
  • For i = 1 to n do
    if ( D′_ii^(k+1) / D_ii^(k+1) )² ∉ [1/2, 2]
    then let D′_ii^(k+1) = D_ii^(k+1) and update (A D′^(k+1)² Aᵀ)⁻¹
Rank-one modification (cont)
• We have (the Sherman-Morrison formula):
(M + uvᵀ)⁻¹ = M⁻¹ − (M⁻¹u)(vᵀM⁻¹) / (1 + vᵀM⁻¹u)
• Given M⁻¹, computation of (M + uvᵀ)⁻¹ can be done in 3n²,
i.e. O(n²), steps.
• If D and D′ differ only in the i-th entry, then
AD²Aᵀ = AD′²Aᵀ + ( D_ii² − D′_ii² ) a_i a_iᵀ
where a_i is the i-th column of A ⇒ rank-one modification.
• If D and D′ differ in m entries, then the complexity is O(n²m).
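The update formula is easy to verify numerically (NumPy sketch; `sm_update` is an illustrative name):

```python
import numpy as np

def sm_update(M_inv, u, v):
    """Sherman-Morrison: inverse of (M + u v^T), given M^{-1}, in O(n^2)."""
    Mu = M_inv @ u
    vM = v @ M_inv
    return M_inv - np.outer(Mu, vM) / (1.0 + v @ Mu)

M = np.diag([2.0, 3.0, 4.0])
u = np.array([1.0, 0.0, 1.0])
v = np.array([0.0, 1.0, 1.0])
fast = sm_update(np.linalg.inv(M), u, v)       # O(n^2) rank-one update
direct = np.linalg.inv(M + np.outer(u, v))     # O(n^3) direct inverse
# fast and direct agree
```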
Performance Analysis
• In each step, the move in the transformed space is bounded:
‖b′ − a₀‖ ≤ α r
where a₀ = e/n is the center and b′ = x′^(k+1)
• Coordinate-wise this gives
Σ_i ( x′_i^(k+1) − 1/n )² ≤ α² r² = α² / (n(n−1))
• Applying the inverse transformation T⁻¹(x′) = Dx′/(eᵀDx′),
x_i^(k+1) = x_i^(k) x′_i^(k+1) / Σ_j x_j^(k) x′_j^(k+1)
• Let Δ_i^(k) = n x′_i^(k+1), so that x_i^(k+1) = λ^(k) Δ_i^(k) x_i^(k)
with the scalar λ^(k) = 1 / ( n Σ_j x_j^(k) x′_j^(k+1) )
• Then
Σ_i ( Δ_i^(k) − 1 )² = n² Σ_i ( x′_i^(k+1) − 1/n )² ≤ α² n/(n−1) ≤ β²
Performance analysis - 2
• First we scale D′ by the factor λ^(k):
D′^(k+1) = λ^(k) D′^(k)
• Then for any entry i such that
( D′_ii^(k+1) / D_ii^(k+1) )² ∉ [1/2, 2]
we reset D′_ii^(k+1) = D_ii^(k+1)
• Define the discrepancy between D^(k) and D′^(k) as
δ_i^(k) = ln | D′_ii^(k) / D_ii^(k) |
• Then, just after an updating operation for index i,
δ_i^(k) = 0
Performance Analysis - 3
• Just before an updating operation for index i,
| δ_i^(k) | ≥ (ln 2)/2
• Between two successive updates for index i, say at steps k₁ and k₂:
(ln 2)/2 ≤ | δ_i^(k₂) − δ_i^(k₁) |
• Let Δδ_i^(k) = δ_i^(k+1) − δ_i^(k). Then
δ_i^(k₂) − δ_i^(k₁) = Σ_{k=k₁}^{k₂−1} Δδ_i^(k)
• Assume that no updating operation was performed for index i at step k. Then
Δδ_i^(k) = ln | D′_ii^(k+1) / D_ii^(k+1) | − ln | D′_ii^(k) / D_ii^(k) |
         = ln | ( D_ii^(k) / D_ii^(k+1) ) · ( D′_ii^(k+1) / D′_ii^(k) ) |
         = ln | λ^(k) D_ii^(k) / D_ii^(k+1) |
         = −ln Δ_i^(k)
(using D′^(k+1) = λ^(k) D′^(k) and D_ii^(k+1) = λ^(k) Δ_i^(k) D_ii^(k))
Performance Analysis - 4
• Let m_i be the number of updating operations corresponding to index i in N
steps, and M = Σ_{i=1}^n m_i be the total number of updating operations in
N steps.
• We have
(ln 2)/2 · m_i ≤ Σ_k | ln Δ_i^(k) |   for each i,
so
(ln 2)/2 · M ≤ Σ_{k=1}^N Σ_{i=1}^n | ln Δ_i^(k) |
• Σ_i ( Δ_i^(k) − 1 )² ≤ β² ⇒ | Δ_i^(k) − 1 | ≤ β
• Because | x | ≤ β < 1 implies | ln(1 + x) − x | ≤ x² / (2(1−β)),
| ln Δ_i^(k) − ( Δ_i^(k) − 1 ) | ≤ ( Δ_i^(k) − 1 )² / (2(1−β))
• Also, by Cauchy-Schwarz,
Σ_i | Δ_i^(k) − 1 | ≤ √n β
Performance Analysis - 5
• Hence
Σ_{i=1}^n | ln Δ_i^(k) |
≤ Σ_{i=1}^n | ln Δ_i^(k) − ( Δ_i^(k) − 1 ) | + Σ_{i=1}^n | Δ_i^(k) − 1 |
≤ β² / (2(1−β)) + √n β
• So
(ln 2)/2 · M ≤ Σ_{k=1}^N Σ_{i=1}^n | ln Δ_i^(k) | ≤ N ( β²/(2(1−β)) + √n β )
• M = O(N √n)
• Finally, with N = O(nL) iterations and O(n²) work per rank-one update,
the total work on inverse updates is O(nL · √n · n²) = O(n^3.5 L)
Transform to canonical form
General LP
Minimize cᵀx
Subject to
• Bx ≥ t
• x ≥ 0

Karmarkar's Canonical Form
Minimize cᵀx
Subject to
• Ax = 0
• eᵀx = 1
• x ≥ 0
Step 1: Convert LP to a feasibility problem
• Combine the primal and dual problems

Primal
Minimize cᵀx
Subject to
• Bx ≥ t
• x ≥ 0

Dual
Maximize tᵀu
Subject to
• Bᵀu ≤ c
• u ≥ 0

Combined
• cᵀx − tᵀu = 0
• Bx ≥ t
• Bᵀu ≤ c
• x, u ≥ 0

• By LP duality, any feasible point of the combined system is optimal for
both primal and dual, so the LP becomes a feasibility problem
Step 2: Convert inequality to equality
• Introduce slack and surplus variables
• cᵀx − tᵀu = 0
• Bx − y = t
• Bᵀu + v = c
• x, u, y, v ≥ 0
Step 3: Convert feasibility problem to LP
• Introduce an artificial variable λ to create an interior starting point.
• Let x₀, y₀, u₀, v₀ be strictly interior points of the positive orthant.
• Minimize λ
• Subject to
  • cᵀx − tᵀu + (−cᵀx₀ + tᵀu₀)λ = 0
  • Bx − y + (t − Bx₀ + y₀)λ = t
  • Bᵀu + v + (c − Bᵀu₀ − v₀)λ = c
  • x, y, u, v, λ ≥ 0
• (x₀, y₀, u₀, v₀, λ = 1) is then a strictly interior feasible point, and
the original system is feasible iff the minimum of λ is 0.
Step 3: Convert feasibility problem to LP
• With a change of notation, the LP above is written compactly as
Minimize cᵀx
Subject to
• Ax = b
• x ≥ 0
Step 4: Introduce the constraint eᵀx′ = 1
• Let a = [a₁, a₂, …, a_n]ᵀ be a feasible point of the original LP
• Define a transformation
x′_i = ( x_i / a_i ) / ( Σ_{j=1}^n x_j/a_j + 1 )   for i = 1, 2, …, n
x′_{n+1} = 1 − Σ_{i=1}^n x′_i
• Define the inverse transformation
x_i = a_i x′_i / x′_{n+1}   for i = 1, 2, …, n
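This embedding into the simplex is easy to check numerically (pure-Python sketch; the function names are illustrative):

```python
def to_simplex(x, a):
    """Map x >= 0 into the (n+1)-simplex using the feasible point a."""
    ratios = [xi / ai for xi, ai in zip(x, a)]
    s = sum(ratios) + 1.0                 # sum_j x_j/a_j + 1
    xp = [ri / s for ri in ratios]
    return xp + [1.0 - sum(xp)]           # last coordinate makes it sum to 1

def from_simplex(xp, a):
    """Inverse map: x_i = a_i * x'_i / x'_{n+1}."""
    last = xp[-1]
    return [ai * xpi / last for ai, xpi in zip(a, xp[:-1])]

a = [2.0, 1.0, 4.0]   # a feasible point of the original LP (illustrative)
x = [1.0, 3.0, 2.0]
xp = to_simplex(x, a)
# xp sums to 1, and from_simplex recovers x exactly
```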
Step 4: Introduce the constraint A′x′ = 0
• Let A_i denote the i-th column of A
• If Σ_{i=1}^n A_i x_i = b
• Then Σ_{i=1}^n A_i a_i x′_i − b x′_{n+1} = 0
(substitute x_i = a_i x′_i / x′_{n+1} and multiply through by x′_{n+1})
• We define the columns of a matrix A′ by these equations
  • A′_i = a_i A_i   for i = 1, 2, …, n
  • A′_{n+1} = −b
• Then Σ_{i=1}^{n+1} A′_i x′_i = 0
Step 5: Get the minimum value of the canonical form
• Substituting
x_i = a_i x′_i / x′_{n+1}
• we get
Σ_{i=1}^n c_i a_i x′_i / x′_{n+1} = 0
• Define c′ ∈ ℝ^{n+1} by
c′_i = c_i a_i   for i = 1, …, n
c′_{n+1} = 0
• Then cᵀx = 0 implies c′ᵀx′ = 0
Step 5: Get the minimum value of the canonical form
• The transformed problem is
Minimize c′ᵀx′
Subject to
• A′x′ = 0
• Σ_{i=1}^{n+1} x′_i = 1
• x′ ≥ 0
References
• Narendra Karmarkar (1984). "A New Polynomial-Time Algorithm for Linear
Programming". Combinatorica 4 (4): 373-395.
• Strang, Gilbert (1987). "Karmarkar's algorithm and its place in applied
mathematics". The Mathematical Intelligencer 9 (2): 4-10.
doi:10.1007/BF03025891.
• Robert J. Vanderbei; Meketon, Marc; Freedman, Barry (1986). "A
Modification of Karmarkar's Linear Programming Algorithm". Algorithmica
1: 395-407. doi:10.1007/BF01840454.
• Kolata, Gina (1989-03-12). "IDEAS & TRENDS; Mathematicians Are Troubled
by Claims on Their Recipes". The New York Times.
References
• Gill, Philip E.; Murray, Walter; Saunders, Michael A.; Tomlin, J. A.;
Wright, Margaret H. (1986). "On projected Newton barrier methods for
linear programming and an equivalence to Karmarkar's projective method".
Mathematical Programming 36 (2): 183-209. doi:10.1007/BF02592025.
• Anthony V. Fiacco; Garth P. McCormick (1968). Nonlinear Programming:
Sequential Unconstrained Minimization Techniques. New York: Wiley. ISBN
978-0-471-25810-0. Reprinted by SIAM in 1990 as ISBN 978-0-89871-254-4.
• Andrew Chin (2009). "On Abstraction and Equivalence in Software Patent
Doctrine: A Response to Bessen, Meurer and Klemens". Journal of
Intellectual Property Law 16: 214-223.
Q&A