Topic one: Production line profit maximization subject to a production rate constraint
Production line profit maximization
The profit maximization problem
$$\max_{\mathbf{N}}\; J(\mathbf{N}) = A\,P(\mathbf{N}) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(\mathbf{N})$$
$$\text{s.t.}\quad P(\mathbf{N}) \ge \hat{P}, \qquad N_i \ge N_{\min},\ \forall i = 1, \dots, k-1,$$
where
$P(\mathbf{N})$ = production rate, parts/time unit
$\hat{P}$ = required production rate, parts/time unit
$A$ = profit coefficient, \$/part
$\bar{n}_i(\mathbf{N})$ = average inventory of buffer $i$, $i = 1, \dots, k-1$
$b_i$ = buffer cost coefficient, \$/part/time unit
$c_i$ = inventory cost coefficient, \$/part/time unit
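To make the notation concrete, here is a minimal sketch (not from the original slides) of how the problem data and the objective $J(\mathbf{N})$ could be represented in code. The `evaluate_line` argument is a hypothetical evaluator, e.g. a decomposition or simulation routine, that returns the production rate and the average buffer inventories for a given buffer allocation.

```python
from dataclasses import dataclass
from typing import Callable, Sequence, Tuple

@dataclass
class LineProblem:
    A: float                # profit coefficient, $/part
    b: Sequence[float]      # buffer cost coefficients b_i
    c: Sequence[float]      # inventory cost coefficients c_i
    P_hat: float            # required production rate, parts/time unit
    N_min: float            # minimum buffer size

def profit(prob: LineProblem, N: Sequence[float],
           evaluate_line: Callable[[Sequence[float]], Tuple[float, Sequence[float]]]) -> float:
    """J(N) = A*P(N) - sum_i b_i*N_i - sum_i c_i*nbar_i(N)."""
    P, nbar = evaluate_line(N)  # hypothetical: returns (P(N), [nbar_1, ..., nbar_{k-1}])
    return (prob.A * P
            - sum(bi * Ni for bi, Ni in zip(prob.b, N))
            - sum(ci * ni for ci, ni in zip(prob.c, nbar)))
```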
An example about the research goal
"data.txt"
"feasible.txt"
1205
1200
1190
1180
1160
1140
1120
1100
1080
1060
Optimal
boundary
1220
1200
1180
1160
J(N) 1140
1120
1100
1080
1060
1040
0
10
20
30
40
N1
50
60
70
80
90
1000
10
20
30
40
50
60
70
80
90
100
N2
Figure 2: �(N) vs. �1 and �2
Two problems
Original constrained problem
$$\max_{\mathbf{N}}\; J(\mathbf{N}) = A\,P(\mathbf{N}) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(\mathbf{N})$$
$$\text{s.t.}\quad P(\mathbf{N}) \ge \hat{P}, \qquad N_i \ge N_{\min},\ \forall i = 1, \dots, k-1.$$
Simpler unconstrained problem (Schor’s problem)
$$\max_{\mathbf{N}}\; J(\mathbf{N}) = A\,P(\mathbf{N}) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(\mathbf{N})$$
$$\text{s.t.}\quad N_i \ge N_{\min},\ \forall i = 1, \dots, k-1.$$
An example for algorithm derivation
Data
$r_1 = .1$, $p_1 = .01$, $r_2 = .11$, $p_2 = .01$, $r_3 = .1$, $p_3 = .009$, $\hat{P} = .88$
Cost function
$$J(\mathbf{N}) = 2000\,P(\mathbf{N}) - N_1 - N_2 - \bar{n}_1(\mathbf{N}) - \bar{n}_2(\mathbf{N})$$
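Under the same assumptions as the sketch above, this three-machine example corresponds to the instance below; the value of $N_{\min}$ is not given on this slide and is set to 1 purely for illustration.

```python
example = LineProblem(A=2000.0, b=(1.0, 1.0), c=(1.0, 1.0),
                      P_hat=0.88, N_min=1.0)  # N_min assumed for illustration
# J = profit(example, N=(30.0, 60.0), evaluate_line=some_evaluator)
```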
[Figure 3: surface plot of $J(\mathbf{N})$ vs. $N_1$ and $N_2$.]

[Figure 4: contours of $P(\mathbf{N})$ in the $(N_1, N_2)$ plane, showing the regions $P(N_1, N_2) > \hat{P}$, $P(N_1, N_2) = \hat{P}$, and $P(N_1, N_2) < \hat{P}$.]
An example for algorithm derivation
[Figure 5: surface plot of $J(\mathbf{N})$ vs. $N_1$ and $N_2$.]
Algorithm derivation
Two cases
Case 1
The solution of the unconstrained problem is $\mathbf{N}^u$ s.t. $P(\mathbf{N}^u) \ge \hat{P}$. In this case, the solution of the constrained problem is the same as the solution of the unconstrained problem. We are done.
Unconstrained problem:
$$\max_{\mathbf{N}}\; J(\mathbf{N}) = A\,P(\mathbf{N}) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(\mathbf{N})$$
$$\text{s.t.}\quad N_i \ge N_{\min},\ \forall i = 1, \dots, k-1.$$
[Figure: contours of $P(N_1, N_2)$; the unconstrained optimum $(N_1^u, N_2^u)$ lies inside the region $P(N_1, N_2) > \hat{P}$.]
Algorithm derivation
Two cases (continued)
Case 2
$\mathbf{N}^u$ satisfies $P(\mathbf{N}^u) < \hat{P}$. This is not the solution of the constrained problem.
[Figure: contours of $P(N_1, N_2)$; the unconstrained optimum $(N_1^u, N_2^u)$ lies outside the region $P(N_1, N_2) > \hat{P}$.]
Algorithm derivation
Two cases (continued)
Case 2 (continued)
In this case, we consider the following unconstrained problem:
$$\max_{\mathbf{N}}\; J(\mathbf{N}) = A' P(\mathbf{N}) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(\mathbf{N})$$
$$\text{s.t.}\quad N_i \ge N_{\min},\ \forall i = 1, \dots, k-1,$$
in which $A$ is replaced by $A'$. Let $\mathbf{N}^\star(A')$ be the solution to this problem and $P^\star(A') = P(\mathbf{N}^\star(A'))$.
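Putting the two cases together, the top level of the method can be sketched as follows; `solve_unconstrained(A)` and `production_rate(N)` are hypothetical helpers, and `find_A_prime` is the one-dimensional search sketched later in the algorithm summary.

```python
def maximize_profit(A, P_hat, solve_unconstrained, production_rate):
    """Case dispatch for the constrained profit maximization."""
    N_u = solve_unconstrained(A)          # solve Schor's unconstrained problem
    if production_rate(N_u) >= P_hat:
        return N_u                        # Case 1: N^u is already feasible
    # Case 2: search for A' > A such that P(N*(A')) = P_hat.
    _, N_star = find_A_prime(A, P_hat, solve_unconstrained, production_rate)
    return N_star
```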
Assertion
The constrained problem
$$\max_{\mathbf{N}}\; J(\mathbf{N}) = A' P(\mathbf{N}) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(\mathbf{N})$$
$$\text{s.t.}\quad P(\mathbf{N}) \ge \hat{P}, \qquad N_i \ge N_{\min},\ \forall i = 1, \dots, k-1,$$
has the same solution for all $A'$ for which the solution of the corresponding unconstrained problem
$$\max_{\mathbf{N}}\; J(\mathbf{N}) = A' P(\mathbf{N}) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(\mathbf{N})$$
$$\text{s.t.}\quad N_i \ge N_{\min},\ \forall i = 1, \dots, k-1,$$
has $P^\star(A') \le \hat{P}$.
Interpretation of the assertion
We claim
If the optimal solution of the unconstrained problem is not that of the constrained problem, then the solution of the constrained problem, $(N_1^\star, \dots, N_{k-1}^\star)$, satisfies
$$P(N_1^\star, \dots, N_{k-1}^\star) = \hat{P}.$$
[Figure: cost ($) vs. $N$ for a one-buffer example, showing the curve $A'P(N)$ and comparing $\max_N J(N) = 500\,P(N) - N - \bar{n}(N)$ s.t. $P(N) \ge \hat{P}$, $N \ge N_{\min}$ with $\max_N J(N) = 500\,\hat{P} - N - \bar{n}(N)$ s.t. $P(N) \ge \hat{P} \Rightarrow P(N) = \hat{P}$, $N \ge N_{\min}$; the unconstrained optimum $N^\star(A')$ has $P(N^\star(A')) < \hat{P}$.]

We formally prove this by the Karush-Kuhn-Tucker (KKT) conditions of nonlinear programming.
Interpretation of the assertion
[Figure: surface plots of $J(\mathbf{N})$ vs. $N_1$ and $N_2$ for $A = 1500$ (the original problem) and for $A' = 2500$, $A' = 3500$, and $A' = 4535.82$ (the final $A'$), each showing the feasible region, the infeasible region, and the optimal boundary.]
Karush-Kuhn-Tucker (KKT) conditions
Let $x^\star$ be a local minimum of the problem
$$\min\ f(x) \quad \text{s.t.}\quad h_1(x) = 0, \dots, h_m(x) = 0, \qquad g_1(x) \le 0, \dots, g_r(x) \le 0,$$
where $f$, $h_i$, and $g_j$ are continuously differentiable functions from $\Re^n$ to $\Re$. Then there exist unique Lagrange multipliers $\lambda_1^\star, \dots, \lambda_m^\star$ and $\mu_1^\star, \dots, \mu_r^\star$, satisfying the following conditions:
$$\nabla_x L(x^\star, \lambda^\star, \mu^\star) = 0,$$
$$\mu_j^\star \ge 0, \quad j = 1, \dots, r,$$
$$\mu_j^\star g_j(x^\star) = 0, \quad j = 1, \dots, r,$$
where $L(x, \lambda, \mu) = f(x) + \sum_{i=1}^{m} \lambda_i h_i(x) + \sum_{j=1}^{r} \mu_j g_j(x)$ is called the Lagrangian function.
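As a quick illustration of these conditions (not from the original slides), take the one-dimensional problem $\min f(x) = x^2$ s.t. $g(x) = 1 - x \le 0$. The Lagrangian is $L(x, \mu) = x^2 + \mu(1 - x)$, so the conditions read
$$2x^\star - \mu^\star = 0, \qquad \mu^\star \ge 0, \qquad \mu^\star(1 - x^\star) = 0.$$
If $\mu^\star = 0$ then $x^\star = 0$, which violates $1 - x \le 0$; so the constraint is active, $x^\star = 1$ and $\mu^\star = 2$. The same active-constraint logic is used below.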
Convert the constrained problem to minimization form
Minimization form
The constrained problem
$$\min_{\mathbf{N}}\; -J(\mathbf{N}) = -A\,P(\mathbf{N}) + \sum_{i=1}^{k-1} b_i N_i + \sum_{i=1}^{k-1} c_i \bar{n}_i(\mathbf{N})$$
$$\text{s.t.}\quad \hat{P} - P(\mathbf{N}) \le 0, \qquad N_{\min} - N_i \le 0,\ \forall i = 1, \dots, k-1.$$
We have argued that we treat the $N_i$ as continuous variables, and $P(\mathbf{N})$ and $J(\mathbf{N})$ as continuously differentiable functions.
Applying KKT conditions
The Slater constraint qualification for convex inequalities guarantees the existence of Lagrange multipliers for our problem. So, there exist unique Lagrange multipliers $\mu_i^\star$, $i = 0, \dots, k-1$, for the constrained problem satisfying the KKT conditions:
$$-\nabla J(\mathbf{N}^\star) + \mu_0^\star \nabla(\hat{P} - P(\mathbf{N}^\star)) + \sum_{i=1}^{k-1} \mu_i^\star \nabla(N_{\min} - N_i) = 0 \tag{1}$$
or
$$-\begin{pmatrix} \dfrac{\partial J(\mathbf{N}^\star)}{\partial N_1} \\ \dfrac{\partial J(\mathbf{N}^\star)}{\partial N_2} \\ \vdots \\ \dfrac{\partial J(\mathbf{N}^\star)}{\partial N_{k-1}} \end{pmatrix} - \mu_0^\star \begin{pmatrix} \dfrac{\partial P(\mathbf{N}^\star)}{\partial N_1} \\ \dfrac{\partial P(\mathbf{N}^\star)}{\partial N_2} \\ \vdots \\ \dfrac{\partial P(\mathbf{N}^\star)}{\partial N_{k-1}} \end{pmatrix} - \mu_1^\star \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} - \cdots - \mu_{k-1}^\star \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \tag{2}$$
Applying KKT conditions
and
$$\mu_i^\star \ge 0,\ \forall i = 0, \dots, k-1, \tag{3}$$
$$\mu_0^\star (\hat{P} - P(\mathbf{N}^\star)) = 0, \tag{4}$$
$$\mu_i^\star (N_{\min} - N_i^\star) = 0,\ \forall i = 1, \dots, k-1, \tag{5}$$
where $\mathbf{N}^\star$ is the optimal solution of the constrained problem. Assume that $N_i^\star > N_{\min}$ for all $i$. In this case, by equation (5), we know that $\mu_i^\star = 0$, $\forall i = 1, \dots, k-1$.
Applying KKT conditions
The KKT conditions are simplified to
$$-\begin{pmatrix} \dfrac{\partial J(\mathbf{N}^\star)}{\partial N_1} \\ \vdots \\ \dfrac{\partial J(\mathbf{N}^\star)}{\partial N_{k-1}} \end{pmatrix} - \mu_0^\star \begin{pmatrix} \dfrac{\partial P(\mathbf{N}^\star)}{\partial N_1} \\ \vdots \\ \dfrac{\partial P(\mathbf{N}^\star)}{\partial N_{k-1}} \end{pmatrix} = \begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix}, \tag{6}$$
$$\mu_0^\star (\hat{P} - P(\mathbf{N}^\star)) = 0, \tag{7}$$
where $\mu_0^\star \ge 0$. Since $\mathbf{N}^\star$ is not the optimal solution of the unconstrained problem, $\nabla J(\mathbf{N}^\star) \ne 0$. Thus, $\mu_0^\star \ne 0$, since otherwise condition (6) would be violated. By condition (7), the optimal solution $\mathbf{N}^\star$ satisfies $P(\mathbf{N}^\star) = \hat{P}$.
Applying KKT conditions
In addition, conditions (6) and (7) reveal how we could find $\mu_0^\star$ and $\mathbf{N}^\star$. For every $\mu_0^\star$, condition (6) determines $\mathbf{N}^\star$, since there are $k-1$ equations and $k-1$ unknowns. Therefore, we can think of $\mathbf{N}^\star = \mathbf{N}^\star(\mu_0^\star)$. We search for a value of $\mu_0^\star$ such that $P(\mathbf{N}^\star(\mu_0^\star)) = \hat{P}$. As we indicate in the following, this is exactly what the algorithm does.
Applying KKT conditions
Replacing $\mu_0^\star$ by $\mu_0 > 0$ in constraint (6) gives
$$-\begin{pmatrix} \dfrac{\partial J(\mathbf{N}^\mu)}{\partial N_1} \\ \dfrac{\partial J(\mathbf{N}^\mu)}{\partial N_2} \\ \vdots \\ \dfrac{\partial J(\mathbf{N}^\mu)}{\partial N_{k-1}} \end{pmatrix} - \mu_0 \begin{pmatrix} \dfrac{\partial P(\mathbf{N}^\mu)}{\partial N_1} \\ \dfrac{\partial P(\mathbf{N}^\mu)}{\partial N_2} \\ \vdots \\ \dfrac{\partial P(\mathbf{N}^\mu)}{\partial N_{k-1}} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \tag{8}$$
where $\mathbf{N}^\mu$ is the unique solution of (8). Note that $\mathbf{N}^\mu$ is the solution of the following optimization problem:
$$\min_{\mathbf{N}}\; -\bar{J}(\mathbf{N}) = -J(\mathbf{N}) + \mu_0(\hat{P} - P(\mathbf{N})) \tag{9}$$
$$\text{s.t.}\quad N_{\min} - N_i \le 0,\ \forall i = 1, \dots, k-1.$$
Applying KKT conditions
The problem above is equivalent to
$$\max_{\mathbf{N}}\; \bar{J}(\mathbf{N}) = J(\mathbf{N}) - \mu_0(\hat{P} - P(\mathbf{N})) \tag{10}$$
$$\text{s.t.}\quad N_{\min} - N_i \le 0,\ \forall i = 1, \dots, k-1,$$
or
$$\max_{\mathbf{N}}\; \bar{J}(\mathbf{N}) = A\,P(\mathbf{N}) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i - \mu_0(\hat{P} - P(\mathbf{N})) \tag{11}$$
$$\text{s.t.}\quad N_{\min} - N_i \le 0,\ \forall i = 1, \dots, k-1,$$
or, dropping the constant term $-\mu_0 \hat{P}$, which does not affect the maximizer,
$$\max_{\mathbf{N}}\; \bar{J}(\mathbf{N}) = (A + \mu_0)\,P(\mathbf{N}) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i \tag{12}$$
$$\text{s.t.}\quad N_i \ge N_{\min},\ \forall i = 1, \dots, k-1.$$
Applying KKT conditions
or, finally,
$$\max_{\mathbf{N}}\; \bar{J}(\mathbf{N}) = A' P(\mathbf{N}) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i \tag{13}$$
$$\text{s.t.}\quad N_i \ge N_{\min},\ \forall i = 1, \dots, k-1,$$
where $A' = A + \mu_0$. This is exactly the unconstrained problem, and $\mathbf{N}^\mu$ is its optimal solution. Note that $\mu_0 > 0$ implies $A' > A$.

In addition, the KKT conditions indicate that the optimal solution of the constrained problem, $\mathbf{N}^\star$, satisfies $P(\mathbf{N}^\star) = \hat{P}$. This means that, for every $A' > A$ (or $\mu_0 > 0$), we can find the corresponding optimal solution $\mathbf{N}^\mu$ satisfying condition (8) by solving problem (13). We need to find the $A'$ such that the solution to problem (13), denoted $\mathbf{N}^\star(A')$, satisfies $P(\mathbf{N}^\star(A')) = \hat{P}$.
Applying KKT conditions
Then $\mu_0 = A' - A$ and $\mathbf{N}^\star(A')$ satisfy conditions (6) and (7):
$$-\nabla J(\mathbf{N}^\star(A')) + \mu_0^\star \nabla(\hat{P} - P(\mathbf{N}^\star(A'))) = 0,$$
$$\mu_0^\star (\hat{P} - P(\mathbf{N}^\star(A'))) = 0.$$
Hence, $\mu_0^\star = A' - A$ is exactly the Lagrange multiplier satisfying the KKT conditions of the constrained problem, and $\mathbf{N}^\star = \mathbf{N}^\star(A')$ is the optimal solution of the constrained problem.
Consequently, solving the constrained problem through our algorithm is
essentially finding the unique Lagrange multipliers and optimal solution
of the problem.
Algorithm summary for case 2
Solve unconstrained problem
Solve, by a gradient method, the unconstrained problem for fixed $A'$:
$$\max_{\mathbf{N}}\; J(\mathbf{N}) = A' P(\mathbf{N}) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(\mathbf{N})$$
$$\text{s.t.}\quad N_i \ge N_{\min},\ \forall i = 1, \dots, k-1.$$
Search
Do a one-dimensional search on $A' > A$ to find the $A'$ such that the solution of the unconstrained problem, $\mathbf{N}^\star(A')$, satisfies $P(\mathbf{N}^\star(A')) = \hat{P}$.
[Flowchart: choose $A'$ → solve the unconstrained problem → is $P(\mathbf{N}^\star(A')) = \hat{P}$? If no, choose a new $A'$ and repeat; if yes, quit.]
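A minimal sketch of this outer loop follows. It assumes a hypothetical `solve_unconstrained(A_prime)` that returns the maximizer $\mathbf{N}^\star(A')$ of the unconstrained problem (e.g., by a gradient method), a hypothetical `production_rate(N)` evaluator for $P(\mathbf{N})$, and, as the sequence of surfaces in the earlier figure suggests, that $P(\mathbf{N}^\star(A'))$ increases with $A'$; the bracketing and bisection details are not specified in the slides.

```python
def find_A_prime(A, P_hat, solve_unconstrained, production_rate,
                 tol=1e-6, max_iter=100):
    """Search on A' > A for P(N*(A')) = P_hat (Case 2 outer loop)."""
    # Grow A' geometrically until the unconstrained optimum becomes feasible.
    lo, hi = A, 2.0 * A
    while production_rate(solve_unconstrained(hi)) < P_hat:
        lo, hi = hi, 2.0 * hi
    # Bisect on A' in (lo, hi].
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        N = solve_unconstrained(mid)
        P = production_rate(N)
        if abs(P - P_hat) < tol:
            return mid, N
        if P < P_hat:
            lo = mid
        else:
            hi = mid
    mid = 0.5 * (lo + hi)
    return mid, solve_unconstrained(mid)
```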
Numerical results
Numerical experiment outline
Experiments on short lines.
Experiments on long lines.
Computation speed.
Method we use to check the algorithm
$\hat{P}$ surface search in $(N_1, \dots, N_{k-1})$ space. All buffer size allocations $\mathbf{N}$ such that $P(\mathbf{N}) = \hat{P}$ compose the $\hat{P}$ surface.
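For a two-buffer line, such a check could be sketched as follows (the slides do not give the search procedure): scan a grid over $N_1$, solve $P(N_1, N_2) = \hat{P}$ for $N_2$ by bisection, and keep the point on the surface with the largest profit. Here `production_rate` and `profit_at` are hypothetical one-argument evaluators over the problem data, and $P$ is assumed to be increasing in $N_2$.

```python
def surface_search_2buf(P_hat, production_rate, profit_at,
                        N1_grid, N2_max, tol=1e-4):
    """Brute-force check: maximize profit over the P(N) = P_hat surface."""
    best = None
    for N1 in N1_grid:
        lo, hi = 1.0, N2_max
        if production_rate((N1, hi)) < P_hat:
            continue  # the surface is not reachable at this N1
        while hi - lo > tol:   # bisect on N2 for P(N1, N2) = P_hat
            mid = 0.5 * (lo + hi)
            if production_rate((N1, mid)) < P_hat:
                lo = mid
            else:
                hi = mid
        J = profit_at((N1, hi))
        if best is None or J > best[0]:
            best = (J, (N1, hi))
    return best  # (profit, (N1, N2)) on the P_hat surface
```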
$\hat{P}$ surface search
[Figure 6: $\hat{P}$ surface search; the surface $P(\mathbf{N}) = \hat{P}$ in $(N_1, N_2, N_3)$ space, with the optimal solution marked on it.]
Experiment on short lines (4-buffer line)
Line parameters: $\hat{P} = .88$
machine | M1 | M2 | M3 | M4 | M5
$r$ | .11 | .12 | .10 | .09 | .10
$p$ | .008 | .01 | .01 | .01 | .01
Machine 4 is the least reliable machine (bottleneck) of the line.
Cost function
$$J(\mathbf{N}) = 2500\,P(\mathbf{N}) - \sum_{i=1}^{4} N_i - \sum_{i=1}^{4} \bar{n}_i(\mathbf{N})$$
Experiment on short lines (4-buffer line)
Results
Optimal solutions
Quantity | $\hat{P}$ Surface Search | The algorithm | Error | Rounded $\mathbf{N}^\star$
Prod. rate | .8800 | .8800 | — | .8800
$N_1^\star$ | 28.85 | 28.8570 | 0.02% | 29.0000
$N_2^\star$ | 58.46 | 58.5694 | 0.19% | 59.0000
$N_3^\star$ | 92.98 | 92.9068 | 0.08% | 93.0000
$N_4^\star$ | 87.39 | 87.4415 | 0.06% | 87.0000
$\bar{n}_1$ | 19.0682 | 19.0726 | 0.02% | 19.1791
$\bar{n}_2$ | 34.3084 | 34.3835 | 0.23% | 34.7289
$\bar{n}_3$ | 48.7200 | 48.6981 | 0.04% | 48.9123
$\bar{n}_4$ | 31.9894 | 32.0063 | 0.05% | 31.9485
Profit ($) | 1798.2 | 1798.1 | 0.006% | 1797.4000
The maximal error is 0.23% and appears in $\bar{n}_2$.
Computer time for this experiment is 2.69 seconds.
Experiment on long lines (11-buffer line)
Line parameters: $\hat{P} = .88$
machine | M1 | M2 | M3 | M4 | M5 | M6
$r$ | .11 | .12 | .10 | .09 | .10 | .11
$p$ | .008 | .01 | .01 | .01 | .01 | .01

machine | M7 | M8 | M9 | M10 | M11 | M12
$r$ | .10 | .11 | .12 | .10 | .12 | .09
$p$ | .009 | .01 | .009 | .008 | .01 | .009
Cost function
$$J(\mathbf{N}) = 6000\,P(\mathbf{N}) - \sum_{i=1}^{11} N_i - \sum_{i=1}^{11} \bar{n}_i(\mathbf{N})$$
Experiment on long lines (11-buffer line)
Results
Optimal solutions, buffer sizes:
Quantity | $\hat{P}$ Surface Search | The algorithm | Error | Rounded $\mathbf{N}^\star$
Prod. rate | .8800 | .8800 | — | .8799
$N_1^\star$ | 29.10 | 29.1769 | 0.26% | 29.0000
$N_2^\star$ | 59.20 | 59.2830 | 0.14% | 59.0000
$N_3^\star$ | 97.80 | 97.7980 | 0.002% | 98.0000
$N_4^\star$ | 107.50 | 107.4176 | 0.08% | 107.0000
$N_5^\star$ | 84.50 | 84.4804 | 0.02% | 84.0000
$N_6^\star$ | 70.80 | 70.6892 | 0.17% | 71.0000
$N_7^\star$ | 63.10 | 63.1893 | 0.14% | 63.0000
$N_8^\star$ | 53.10 | 52.9274 | 0.33% | 53.0000
$N_9^\star$ | 47.20 | 47.2232 | 0.05% | 47.0000
$N_{10}^\star$ | 47.90 | 47.7967 | 0.22% | 48.0000
$N_{11}^\star$ | 48.80 | 48.7716 | 0.06% | 49.0000
Experiment on long lines (11-buffer line)
Results (continued)
Optimal solutions, average inventories:
Quantity | $\hat{P}$ Surface Search | The algorithm | Error | Rounded $\mathbf{N}^\star$
$\bar{n}_1$ | 19.2388 | 19.2986 | 0.31% | 19.1979
$\bar{n}_2$ | 34.9561 | 35.0423 | 0.25% | 34.8194
$\bar{n}_3$ | 52.5423 | 52.6032 | 0.12% | 52.6833
$\bar{n}_4$ | 45.1528 | 45.1840 | 0.07% | 45.0835
$\bar{n}_5$ | 34.4289 | 34.4770 | 0.14% | 34.2790
$\bar{n}_6$ | 30.7073 | 30.7048 | 0.01% | 30.8229
$\bar{n}_7$ | 28.0446 | 28.1299 | 0.30% | 28.0902
$\bar{n}_8$ | 21.5666 | 21.5438 | 0.11% | 21.5932
$\bar{n}_9$ | 21.5059 | 21.5442 | 0.18% | 21.4299
$\bar{n}_{10}$ | 22.6756 | 22.6496 | 0.11% | 22.7303
$\bar{n}_{11}$ | 20.8692 | 20.8615 | 0.04% | 20.9613
Profit ($) | 4239.3 | 4239.2 | 0.002% | 4239.5000
Computer time is 91.47 seconds.
Experiments for Tolio, Matta, and Gershwin (2002) model
Consider a 4-machine, 3-buffer line with the constraint $\hat{P} = .87$. In addition, $A = 2000$ and all $b_i$ and $c_i$ are 1.
machine | M1 | M2 | M3 | M4
$r_{i1}$ | .10 | .12 | .10 | .20
$p_{i1}$ | .01 | .008 | .01 | .007
$r_{i2}$ | – | .20 | – | .16
$p_{i2}$ | – | .005 | – | .004

Quantity | $\hat{P}$ Surf. Search | The algorithm | Error
$P(\mathbf{N}^\star)$ | .8699 | .8699 | —
$N_1^\star$ | 29.8600 | 29.9930 | 0.45%
$N_2^\star$ | 38.2200 | 38.0206 | 0.52%
$N_3^\star$ | 20.6800 | 20.7616 | 0.39%
$\bar{n}_1$ | 17.2779 | 17.3674 | 0.52%
$\bar{n}_2$ | 17.2602 | 17.1792 | 0.47%
$\bar{n}_3$ | 6.1996 | 6.2121 | 0.20%
Profit ($) | 1610.3000 | 1610.3000 | 0.00%
Experiments for Levantesi, Matta, and Tolio (2003) model
Consider a 4-machine, 3-buffer line with the constraint $\hat{P} = .87$. In addition, $A = 2000$ and all $b_i$ and $c_i$ are 1.
machine | M1 | M2 | M3 | M4
$\mu_i$ | 1.0 | 1.02 | 1.0 | 1.0
$r_{i1}$ | .10 | .12 | .10 | .20
$p_{i1}$ | .01 | .008 | .01 | .012
$r_{i2}$ | – | .20 | – | .16
$p_{i2}$ | – | .005 | – | .006

Quantity | $\hat{P}$ Surf. Search | The algorithm | Error
$P(\mathbf{N}^\star)$ | .8699 | .8700 | —
$N_1^\star$ | 27.7200 | 27.9042 | 0.66%
$N_2^\star$ | 38.7900 | 38.9281 | 0.34%
$N_3^\star$ | 34.0700 | 34.1574 | 0.26%
$\bar{n}_1$ | 15.4288 | 15.5313 | 0.66%
$\bar{n}_2$ | 19.8787 | 19.9711 | 0.46%
$\bar{n}_3$ | 13.8937 | 13.9426 | 0.35%
Profit ($) | 1590.0000 | 1589.7000 | 0.02%
Computation speed
Experiment
Run the algorithm on a series of lines having identical machines, to see how fast the algorithm can optimize longer lines.
Length of the line varies from 4 machines to 30 machines.
Machine parameters are $p = .01$ and $r = .1$.
In all cases, the required production rate is $\hat{P} = .88$.
The objective function is
$$J(\mathbf{N}) = A\,P(\mathbf{N}) - \sum_{i=1}^{k-1} N_i - \sum_{i=1}^{k-1} \bar{n}_i(\mathbf{N}),$$
where $A = 500k$ for the line of length $k$. (A sketch of a timing harness for this experiment follows.)
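A minimal timing harness for this experiment might look as follows; `optimize_line` stands for the whole algorithm above and is a hypothetical interface, not one given in the slides.

```python
import time

def timing_experiment(optimize_line, lengths=range(4, 31)):
    """Time the optimizer on lines of identical machines (r = .1, p = .01)."""
    results = {}
    for k in lengths:
        machines = [{"r": 0.1, "p": 0.01}] * k
        t0 = time.perf_counter()
        optimize_line(machines, A=500 * k, P_hat=0.88)  # A = 500k, as above
        results[k] = time.perf_counter() - t0
    return results
```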
Computation speed
[Figure: computer time (seconds) vs. length of production line, from 4 to 30 machines, with annotations marking 1 minute, 3 minutes, and 10 minutes.]
Algorithm reliability
We run the algorithm on 739 randomly generated 4-machine 3-buffer lines.
98.92% of these experiments have a maximal error less than 6%.
[Figure: histogram of maximal error over the 739 experiments; frequency vs. maximal error from 0 to 1.]
Algorithm reliability
Taking a closer look at those 98.92% of experiments gives a finer view of the distribution of the maximal error: out of the 739 experiments, 83.90% have a maximal error of less than 2%.
[Figure: histogram of maximal error for these experiments; frequency vs. maximal error from 0 to 0.1.]
MIT OpenCourseWare
http://ocw.mit.edu
2.852 Manufacturing Systems Analysis
Spring 2010
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.