Homework 7: Solutions

Problem 4.20
The Jacobian for g(x, y) = (x^3 + y^3 - 1, y e^{-x} - sin(y) - a)^T is
\[
J_g(x, y) =
\begin{pmatrix}
3x^2 & 3y^2 \\
-y e^{-x} & e^{-x} - \cos(y)
\end{pmatrix}.
\]
Newton’s iteration for a = 0.5 and starting value (x0 , y0 ) = (1, 1) is coded in the following
script file:
n=0;      % initialize iteration counter
eps=1;    % initialize error
a=0.5;    % set parameter
x=[1;1];  % set starting value
% Computation loop
while eps>1e-10 && n<101
  g=[x(1)^3+x(2)^3-1;x(2)*exp(-x(1))-sin(x(2))-a];              % g(x)
  eps=abs(g(1))+abs(g(2));                                      % error
  Jg=[3*x(1)^2,3*x(2)^2;-x(2)*exp(-x(1)),exp(-x(1))-cos(x(2))]; % Jacobian
  y=x-Jg\g;                                                     % iterate
  x=y;                                                          % update x
  n=n+1;                                                        % counter+1
end
n,eps,x   % display end values
Output in the command window:
n =
11
eps =
1.450617403975230e-012
x =
-0.28900596152510
1.00798246496238
Result for (x0 , y0 ) = (−1, −1):
n = 10, eps = 5.551115123125783e−017, x = (1.18596121011778, −0.87418827633574).
For a = 1 and the same starting values the Matlab outputs are:
(x0, y0)    n    eps                        x
(1, 1)      14   1.998401444325282e−015     (−0.57009135942431, 1.05829620147583)
(−1, −1)    22   7.701217441535846e−011     (26.70403569679139, −26.70356824960749)
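As an independent cross-check of these results, the same Newton scheme can be transcribed to Python/NumPy. The sketch below mirrors the Matlab script's stopping rule; names and structure are my own, not part of the original solution:

```python
import numpy as np

a = 0.5  # parameter, as in the Matlab script

def g(v):
    """The system g(x, y) from Problem 4.20."""
    x, y = v
    return np.array([x**3 + y**3 - 1,
                     y*np.exp(-x) - np.sin(y) - a])

def Jg(v):
    """Jacobian of g."""
    x, y = v
    return np.array([[3*x**2, 3*y**2],
                     [-y*np.exp(-x), np.exp(-x) - np.cos(y)]])

# Newton iteration with the same stopping rule as the Matlab script
x = np.array([1.0, 1.0])
n, eps = 0, 1.0
while eps > 1e-10 and n < 101:
    gx = g(x)
    eps = np.abs(gx).sum()               # 1-norm of the residual
    x = x - np.linalg.solve(Jg(x), gx)   # Newton step: solve Jg*d = g
    n += 1
print(n, eps, x)
```

From (1, 1) this reproduces the root near (−0.289, 1.008) listed above.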
Problem 4.21
To apply Newton's method we need the gradient ∇f and the Hessian Hf:
\[
(\nabla f(x, y))^T =
\begin{pmatrix}
14x + 2y + 4x^3 + 1 \\
2x + 2y + 4y^3 - 1
\end{pmatrix},
\qquad
H_f(x, y) =
\begin{pmatrix}
14 + 12x^2 & 2 \\
2 & 2 + 12y^2
\end{pmatrix}.
\]
Matlab implementation of Newton’s iteration scheme for starting value (x0 , y0 ) = (1, 1):
n=0;eps=1;
x=[1;1];
while eps>1e-10 && n<101
gradf=[14*x(1)+2*x(2)+4*x(1)^3+1;2*x(1)+2*x(2)+4*x(2)^3-1];
eps=abs(gradf(1))+abs(gradf(2));
Hf=[14+12*x(1)^2,2;2,2+12*x(2)^2];
y=x-Hf\gradf;
x=y;
n=n+1;
end
n,eps,x
Output in the command window:
n =
6
eps =
2.687405853407654e-012
x =
-0.13519805076535
0.45132879399569
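The same iteration transcribes directly to Python/NumPy as a cross-check (my own sketch, with the script's gradient and Hessian):

```python
import numpy as np

def gradf(v):
    x, y = v
    return np.array([14*x + 2*y + 4*x**3 + 1,
                     2*x + 2*y + 4*y**3 - 1])

def Hf(v):
    x, y = v
    return np.array([[14 + 12*x**2, 2.0],
                     [2.0, 2 + 12*y**2]])

# Newton's method for minimization: solve Hf*d = gradf at each step
x = np.array([1.0, 1.0])
n, eps = 0, 1.0
while eps > 1e-10 and n < 101:
    gx = gradf(x)
    eps = np.abs(gx).sum()
    x = x - np.linalg.solve(Hf(x), gx)
    n += 1
```

Since Hf is positive definite for every (x, y) (positive diagonal and det Hf = (14 + 12x²)(2 + 12y²) − 4 ≥ 24 > 0), the objective is strictly convex, so the computed stationary point is the unique global minimum.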
Problem 4.22
Matlab script for starting value (x0, y0) = (1, 1) and α = 0.04 (α = a in the script):
n=0;eps=1;
a=0.04;
x=[1;1];
while eps>1e-10 && n<101
gradf=[14*x(1)+2*x(2)+4*x(1)^3+1;2*x(1)+2*x(2)+4*x(2)^3-1];
eps=abs(gradf(1))+abs(gradf(2));
y=x-a*gradf;
x=y;
n=n+1;
end
n,eps,x
The results of this computation and the computations for the other values of α are:
α      n     eps                        x
0.04   101   1.712168873346798e−008     (−0.13519805134746, 0.45132879695525)
0.06   83    9.091261077287527e−011     (−0.13519805076814, 0.45132879400989)
0.08   58    8.236544779549604e−011     (−0.13519805076761, 0.45132879400718)
0.1    48    6.471267965935112e−011     (−0.13519805076379, 0.45132879398775)
0.12   11    NaN                        (NaN, NaN)
The table shows that when α is too small, convergence can be very slow, whereas when
α gets too large the iteration can "overshoot" the minimum and diverge.
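This behaviour is easy to reproduce outside of Matlab; the following Python/NumPy sketch (my own transcription of the script) runs the fixed-step descent for each α, with floating-point warnings suppressed so the divergent run simply ends in NaN, as in Matlab:

```python
import numpy as np

def gradf(v):
    x, y = v
    return np.array([14*x + 2*y + 4*x**3 + 1,
                     2*x + 2*y + 4*y**3 - 1])

def descend(alpha, x0=(1.0, 1.0), tol=1e-10, maxit=101):
    """Fixed-step gradient descent: x <- x - alpha*gradf(x)."""
    x = np.array(x0)
    n, eps = 0, 1.0
    with np.errstate(over='ignore', invalid='ignore'):
        while eps > tol and n < maxit:   # a NaN eps also ends the loop
            g = gradf(x)
            eps = np.abs(g).sum()
            x = x - alpha*g
            n += 1
    return n, eps, x

results = {alpha: descend(alpha) for alpha in (0.04, 0.06, 0.08, 0.1, 0.12)}
```

For α = 0.04 the tolerance is not reached within 101 steps, while α = 0.12 blows up to NaN, matching the table above.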
Problem 4.23
Matlab code of the objective function with the gradient option:
function [f,gradf]=objfun(x)
f=x(1)^6+x(1)^2*x(2)^2+x(2)^4+x(3)^4+exp(-x(3)^2)*sin(x(1)+x(2));
gradf=[6*x(1)^5+2*x(1)*x(2)^2+exp(-x(3)^2)*cos(x(1)+x(2));...
2*x(1)^2*x(2)+4*x(2)^3+exp(-x(3)^2)*cos(x(1)+x(2));...
4*x(3)^3-2*x(3)*exp(-x(3)^2)*sin(x(1)+x(2))];
The call (using a script file)
x0=[1;1;1];
options = optimset('GradObj','on');
[x, fval] = fminunc('objfun',x0,options)
generates the following solution x and minimal value fval:
x =
-0.56899205519790
-0.41489638426987
0.00004179192746
fval =
-0.71336068456945
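At an unconstrained minimizer the gradient must vanish, so the printed solution can be verified without rerunning fminunc. A short Python/NumPy check (my own transcription of objfun, evaluated at the solution above):

```python
import numpy as np

def objfun(v):
    """f and its gradient, transcribed from the Matlab objfun above."""
    x1, x2, x3 = v
    f = x1**6 + x1**2*x2**2 + x2**4 + x3**4 + np.exp(-x3**2)*np.sin(x1 + x2)
    gradf = np.array([
        6*x1**5 + 2*x1*x2**2 + np.exp(-x3**2)*np.cos(x1 + x2),
        2*x1**2*x2 + 4*x2**3 + np.exp(-x3**2)*np.cos(x1 + x2),
        4*x3**3 - 2*x3*np.exp(-x3**2)*np.sin(x1 + x2)])
    return f, gradf

# reported minimizer from the fminunc run
xstar = np.array([-0.56899205519790, -0.41489638426987, 0.00004179192746])
f, g = objfun(xstar)
```

f matches the printed fval, and the gradient is zero up to the optimizer's tolerance.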
Problem 4.24
(a) Minimum: Matlab code of objective and constraint functions with gradient option:
function [f,gradf]=objfun(x)
f=x(1)^3+x(2)^2-x(1)*x(2);
gradf=[3*x(1)^2-x(2);2*x(2)-x(1)];
function [c,ceq,gradc,gradceq]=constraint(x)
c=x(1)^2+4*x(2)^2-2;
gradc=[2*x(1);8*x(2)];
ceq=[];gradceq=[];
Call of fmincon in script file:
x0=[1;1];
options = optimset('GradObj','on','GradConstr','on');
[x,fval] = fmincon('objfun',x0,[],[],[],[],[],[],'constraint',options)
Answer in command window:
x =
-1.40651377678752
-0.07368683044933
fval =
-2.88069128010844
(b) Maximum: Minimizing the negated objective function from (a) yields:
fmax = 2.89438478876845, x = (1.40195517936740, −0.09290014170954).
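Both answers can be sanity-checked against the first-order (KKT) conditions: at the reported minimizer the inequality constraint should be active and ∇f + μ∇c = 0 for some μ ≥ 0. A Python/NumPy check of part (a) (written here as a sketch, not part of the original solution):

```python
import numpy as np

x = np.array([-1.40651377678752, -0.07368683044933])  # minimizer from (a)
fval = x[0]**3 + x[1]**2 - x[0]*x[1]                  # objective value
c = x[0]**2 + 4*x[1]**2 - 2                           # constraint (active: c = 0)
gradf = np.array([3*x[0]**2 - x[1], 2*x[1] - x[0]])
gradc = np.array([2*x[0], 8*x[1]])
mu = -gradf[0]/gradc[0]             # multiplier from the first component
residual = gradf + mu*gradc         # KKT stationarity residual (should be ~0)
```

The constraint is active, the multiplier is positive, and both components of ∇f + μ∇c vanish to the solver's tolerance, confirming the minimum lies on the ellipse boundary.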
Problem 4.25
Objective and constraint function files with gradient options:
function [f,gradf]=objfun(x)
f=sin(x(1)+x(2))+cos(x(2)+x(3))-exp(-x(1)^2);
gradf=[cos(x(1)+x(2))+2*x(1)*exp(-x(1)^2);...
cos(x(1)+x(2))-sin(x(2)+x(3));-sin(x(2)+x(3))];
function [c,ceq,gradc,gradceq]=constraint(x)
c=[x(2)^2-x(1)^2;x(3)^2-1];
gradc=[-2*x(1),0;2*x(2),0;0,2*x(3)];
ceq=x(1)^2+x(2)^2-1;
gradceq=[2*x(1);2*x(2);0];
(a) Call (from script file):
x0=[1;1;1];
options = optimset('GradObj','on','GradConstr','on');
[x,fval] = fmincon('objfun',x0,[],[],[],[],[],[],'constraint',options)
Answer:
x =
-0.70710678118734
-0.70710678118734
-1.00000000000000
fval =
-1.73018533177725
(b) Call (same options and starting value):
[x,fval] = fmincon('objfun',x0,[],[],[],[],...
[0;-Inf;-Inf],[Inf;0;Inf],'constraint',options)
Answer:
x =
0.70710678118655
-0.70710678118734
-1.00000000000000
fval =
-0.74241938578575
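Both reported points can be verified directly against the objective and the constraint functions (Python/NumPy transcription, my own sketch):

```python
import numpy as np

def f(v):
    return np.sin(v[0] + v[1]) + np.cos(v[1] + v[2]) - np.exp(-v[0]**2)

def c(v):
    """Inequality constraints; feasible when both entries are <= 0."""
    return np.array([v[1]**2 - v[0]**2, v[2]**2 - 1])

def ceq(v):
    """Equality constraint x1^2 + x2^2 = 1."""
    return v[0]**2 + v[1]**2 - 1

xa = np.array([-0.70710678118734, -0.70710678118734, -1.0])  # part (a)
xb = np.array([0.70710678118655, -0.70710678118734, -1.0])   # part (b)
```

At both points the equality constraint holds, the inequality constraints are satisfied (in fact active), and f reproduces the printed fval; in (b) the lower bound x1 ≥ 0 forces the sign change in the first component.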