Project 3 (Solution)
> with(LinearAlgebra):
Below you can see how the data vectors were generated (the generating code is now commented out).
> #_seed:=3467:
#vv2:=RandomVector(10):
#vv3:=RandomVector(10):
#vv4:=RandomVector(10):
#vv1:=2*vv2-vv3:
#ww:=RandomVector(10):
#save vv1, vv2, vv3, vv4, ww, "proj3data.m";
We read the vectors from the file:
> read "proj3data.m";
Problem 1
a) We compute the norms:
> Norm(vv1,2)^2+Norm(vv2,2)^2,Norm(vv1+vv2,2)^2;
156675, 272945
and we see that the results do not agree. We conclude that the vectors vv1 and vv2 are not orthogonal.
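The same Pythagoras test is easy to sketch outside Maple. Here is a minimal NumPy version; the two vectors are stand-ins copied from the basis printed later in part c) (an assumption — the real data lives in proj3data.m):

```python
import numpy as np

# Assumed stand-ins for the worksheet's vv1 and vv2 (taken from the basis in part c).
vv1 = np.array([-115, -47, -178, 20, -59, 40, 131, -174, -108, 118], dtype=float)
vv2 = np.array([-90, -21, -56, -8, -50, 30, 62, -79, -71, 28], dtype=float)

lhs = vv1 @ vv1 + vv2 @ vv2          # |vv1|^2 + |vv2|^2
rhs = (vv1 + vv2) @ (vv1 + vv2)      # |vv1 + vv2|^2
# Pythagoras lhs == rhs holds iff vv1 . vv2 = 0; the gap is exactly 2*(vv1 . vv2).
print(lhs, rhs, rhs - lhs == 2 * (vv1 @ vv2))
```

With these stand-ins the two printed norms are 156675 and 272945, matching the Maple output above.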
b) We first find the cosine of the angle using the standard definition: cos(a) = vv1.vv2/(|vv1||vv2|).
> cosa:=DotProduct(vv1,vv2)/(Norm(vv1,2)*Norm(vv2,2));
evalf(%);
cosa := 58135 √31406 √31051 / 1950375412
0.9308154266
> a:=evalf(arccos(cosa));
a := 0.3741587302
and we see that the vectors are not orthogonal, confirming what we said in a).
c) We find a basis by using Maple's Basis command (we could, alternatively, have used the row reduction process).
> B:=Basis([vv1,vv2,vv3,vv4]);
  -115  -90  5 

 
 


 
 

  -47  -21  -75 

 
 


 
 


 
 

  -178  -56  38 

 
 


 
 

  20  -8  97 

 
 


 
 


 
 

  -59  -50  -82 

 
 

, 
, 

B :=  





  40  30  -66 

 
 


 
 

  131  62  55 

 
 


 
 


 
 

  -174  -79  68 

 
 


 
 

  -108  -71  26 

 
 


 
 


 
 

  118  28  13 
Notice that the result is a list of vectors, so the basis vectors are B[1], B[2], B[3]. Of course, they are B[1]=vv1, B[2]=vv2 and B[3]=vv4. Thus dim(S)=3, the number of vectors in the basis.
d) We start by normalizing the first vector of the basis, B[1].
> w[1]:=B[1]/Norm(B[1],2):
Next we subtract from B[2] its projection on w[1]. After that we normalize this vector.
> BB[2]:=B[2]-DotProduct(w[1],B[2])*w[1]:
w[2]:=BB[2]/Norm(BB[2],2):
Finally, we subtract from B[3] its projection on <w[1],w[2]>, and normalize the resulting vector.
> BB[3]:=B[3]-DotProduct(w[1],B[3])*w[1]-DotProduct(w[2],B[3])*w[2]:
w[3]:=BB[3]/Norm(BB[3],2):
Thus, our orthonormal basis is w[1], w[2], w[3], that we display as a matrix:
> <w[1]|w[2]|w[3]>;
 115 31406
4620635 16364806044194
822493655 9710436533570654523702 
 −
,−
,−


62812
32729612088388
3236812177856884841234

 47 31406 94241 16364806044194
20059237688 9710436533570654523702 
 −
,
,−


62812
32729612088388
4855218266785327261851

 89 31406 1656543 16364806044194 8706504367 9710436533570654523702 
 −

,
,

31406
16364806044194
4855218266785327261851

 5 31406
541923 16364806044194 49874873527 9710436533570654523702 

,−
,

 15703
8182403022097
9710436533570654523702

 59 31406
2851235 16364806044194
7596917461 9710436533570654523702 
 −
,−
,−


62812
32729612088388
1618406088928442420617

 10 31406 360830 16364806044194
16355939672 9710436533570654523702 

,
,−

 15703
8182403022097
4855218266785327261851

 131 31406 173003 16364806044194 10593235567 9710436533570654523702 


,
,
 62812
32729612088388
3236812177856884841234

 87 31406 95597 16364806044194 10502558707 9710436533570654523702 
 −

,
,

31406
16364806044194
3236812177856884841234

 27 31406
660181 16364806044194 9670748431 9710436533570654523702 
 −
,−
,


15703
8182403022097
9710436533570654523702

 59 31406
1671229 16364806044194 7758395939 9710436533570654523702 

,−
,

 31406
16364806044194
9710436533570654523702

and, for a more convenient view, its decimal form:
> evalf(%);
 -0.3244602746 -0.5711053604 -0.02504000631




 -0.1326055035 0.01164808306 -0.4071224423 






 -0.5022080773 0.4094937547

0.1767072795




0.05642787384 -0.2679243266 0.5061304082 






 -0.1664622278 -0.3524094832 -0.4625612855 




 0.1128557477
0.1783927510
-0.3319602773 





 0.3696025737 0.02138297889 0.3225005857 






 -0.4909225025 0.02363136631 0.3197400183 




 -0.3047105188 -0.3263905571 0.09813879223 






 0.3329244556 -0.4131241012 0.07873223180 
e) The QRDecomposition command takes a matrix whose columns are a basis of a subspace and
returns the QR-decomposition of that matrix. That is, two matrices whose product is the original
matrix. The columns of the Q-part (the first matrix) form an orthonormal basis of the subspace
spanned by the columns of the original matrix. Thus:
> evalf(QRDecomposition(<B[1]|B[2]|B[3]>));
 -0.3244602746 -0.5711053604 -0.02504000631




 -0.1326055035 0.01164808306 -0.4071224423 






 -0.5022080773 0.4094937547

0.1767072795




0.05642787384 -0.2679243266 0.5061304082 






 -0.1664622278 -0.3524094832 -0.4625612855 



,
 0.1128557477
0.1783927510
-0.3319602773 





 0.3696025737 0.02138297889 0.3225005857 






 -0.4909225025 0.02363136631 0.3197400183 




 -0.3047105188 -0.3263905571 0.09813879223 






 0.3329244556 -0.4131241012 0.07873223180 
354.4347612 164.0217223 -15.73491262






0.
64.40399522
-8.107145039






0.
0.
189.1128941 

and by looking at the first matrix we read the orthonormal basis of S. Notice that this basis coincides
with the basis that we found in the previous item.
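NumPy's `qr` performs the same factorization, which gives a quick cross-check of the Maple result (column signs may differ between implementations, so only absolute values of R's diagonal are compared):

```python
import numpy as np

# Columns are B[1], B[2], B[3] from part c).
A = np.array([
    [-115, -90, 5], [-47, -21, -75], [-178, -56, 38], [20, -8, 97],
    [-59, -50, -82], [40, 30, -66], [131, 62, 55], [-174, -79, 68],
    [-108, -71, 26], [118, 28, 13],
], dtype=float)

q, r = np.linalg.qr(A)                  # A = q @ r, q has orthonormal columns
print(np.allclose(q @ r, A))            # True
print(np.round(np.abs(np.diag(r)), 4))  # |diag(R)|: 354.4348, 64.404, 189.1129
```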
f) We define the vector vs = vv1+vv2+vv3+vv4:
> vs:=vv1+vv2+vv3+vv4:
Then we compute the coordinates of vs with respect to the basis B' = {w[1], w[2], w[3]}. Since the basis B' is orthonormal, each coefficient is just the dot product of vs with the corresponding w[j]:
> c[1]:=DotProduct(vs,w[1]);evalf(%);
c[2]:=DotProduct(vs,w[2]);evalf(%);
c[3]:=DotProduct(vs,w[3]);evalf(%);
c1 := 42207 √31406 / 15703
476.3302543
c2 := 374406355 √16364806044194 / 8182403022097
185.1048406
c3 := √9710436533570654523702 / 521072599
189.1128941
Last, we check our computation:
> vs,c[1]*w[1]+c[2]*w[2]+c[3]*w[3];
 -265  -265

 


 

 -138  -138

 


 


 

 -130  -130

 


 

 73  73

 


 


 

 -232  -232

 


 

 24,  24

 


 


 

 241  241

 


 


 

 -169  -169

 


 

 -187  -187

 


 


 

 97  97
Notice that we could have compared the two vectors using Equal:
> Equal(vs,c[1]*w[1]+c[2]*w[2]+c[3]*w[3]);
true
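In matrix form the whole of item f) is two products, c = Wᵀ·vs and vs = W·c. A NumPy sketch, using the identity vs = 3·vv2 + vv4 (which follows from the generating relation vv1 = 2·vv2 − vv3):

```python
import numpy as np

# Columns are B[1], B[2], B[3] from part c).
A = np.array([
    [-115, -90, 5], [-47, -21, -75], [-178, -56, 38], [20, -8, 97],
    [-59, -50, -82], [40, 30, -66], [131, 62, 55], [-174, -79, 68],
    [-108, -71, 26], [118, 28, 13],
], dtype=float)
W, _ = np.linalg.qr(A)                 # orthonormal basis of S (as columns)

vs = A @ np.array([0.0, 3.0, 1.0])     # vv1+vv2+vv3+vv4 = 3*vv2 + vv4, since vv3 = 2*vv2 - vv1
c = W.T @ vs                           # coordinates w.r.t. the orthonormal basis
print(np.allclose(W @ c, vs))          # True: vs is recovered exactly, so vs lies in S
```

The coordinate magnitudes agree with the worksheet's c[1], c[2], c[3] up to the sign convention of `qr`.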
g) Since we already know an orthonormal basis of S, we know that the matrix of PS with respect to the
canonical basis is W.Transpose(W), where W is the matrix whose columns are the orthonormal basis
of S:
> W:=<w[1]|w[2]|w[3]>:
MPS:=W.Transpose(W);
MPS := (a 10 × 10 symmetric matrix of exact rational entries, with denominators such as 3105912865183, 6211825730366, 9317738595549 and 18635477191098; the full exact display is omitted here)
That result was not very illuminating, so we show it again in floating point form:
> evalf(MPS,3);
0.432 , 0.0466 , -0.0753 , 0.122 , 0.267 , -0.130 , -0.140 , 0.138 , 0.283 , 0.126




0.0466 , 0.183 , -0.000576 , -0.217 , 0.206 , 0.122 , -0.180 , -0.0648 , -0.00335 , -0.0810






-0.0753 , -0.000576 , 0.451 , -0.0486 , -0.142 , -0.0423 , -0.120 , 0.313 , 0.0367 , -0.322




0.122 , -0.217 , -0.0486 , 0.331 , -0.149 , -0.209 , 0.178 , 0.128 , 0.120 , 0.169






0.267 , 0.206 , -0.142 , -0.149 , 0.366 , 0.0719 , -0.218 , -0.0745 , 0.120 , 0.0538




-0.130 , 0.122 , -0.0423 , -0.209 , 0.0719 , 0.155 , -0.0615 , -0.157 , -0.125 , -0.0623






-0.140 , -0.180 , -0.120 , 0.178 , -0.218 , -0.0615 , 0.241 , -0.0778 , -0.0880 , 0.140






0.138 , -0.0648 , 0.313 , 0.128 , -0.0745 , -0.157 , -0.0778 , 0.344 , 0.173 , -0.148




0.283 , -0.00335 , 0.0367 , 0.120 , 0.120 , -0.125 , -0.0880 , 0.173 , 0.209 , 0.0411






0.126 , -0.0810 , -0.322 , 0.169 , 0.0538 , -0.0623 , 0.140 , -0.148 , 0.0411 , 0.288
A simple test: since vv1 is in S, its projection should be the same vector:
> MPS.vv1=vv1;
 -115  -115

 


 

 -47  -47

 


 


 

 -178  -178

 


 

 20  20

 


 


 

 -59  -59

 


=

 40  40

 


 


 

 131  131

 


 


 

 -174  -174

 


 

 -108  -108

 


 


 

 118  118
h) We first find the projection of ww on S (wProj) and the part of ww that is orthogonal to S (wOrth):
> wProj:=MPS.ww:
> wOrth:=ww-wProj:
Next we find the cosine of the angle determined by wProj and wOrth
> cosa:=DotProduct(wProj,wOrth)/(Norm(wProj,2)*Norm(wOrth,2));
cosa := 0
and the angle itself:
> a:=arccos(cosa);
a := π/2
So the vectors are orthogonal, as was expected since one was the orthogonal projection on S and the
other was in the orthogonal complement of S.
i) We remember that the reflection on S is computed by RS(v) = 2PS(v) − v, so its matrix with respect to the canonical basis is MRS = 2*MPS − Identity:
> MRS:=2*MPS-IdentityMatrix(10);
MRS := (a 10 × 10 symmetric matrix of exact rational entries, with denominators 3105912865183 and 9317738595549; the full exact display is omitted here)
Again, we show the floating point form:
> evalf(MRS,3);
-0.136 , 0.0931 , -0.151 , 0.244 , 0.534 , -0.260 , -0.280 , 0.276 , 0.566 , 0.252




0.0931 , -0.633 , -0.00115 , -0.433 , 0.413 , 0.245 , -0.360 , -0.130 , -0.00670 , -0.162






-0.151 , -0.00115 , -0.0978 , -0.0972 , -0.285 , -0.0846 , -0.240 , 0.625 , 0.0734 , -0.645




0.244 , -0.433 , -0.0972 , -0.338 , -0.298 , -0.419 , 0.357 , 0.256 , 0.240 , 0.339






0.534 , 0.413 , -0.285 , -0.298 , -0.268 , 0.144 , -0.436 , -0.149 , 0.241 , 0.108




-0.260 , 0.245 , -0.0846 , -0.419 , 0.144 , -0.690 , -0.123 , -0.315 , -0.250 , -0.125






-0.280 , -0.360 , -0.240 , 0.357 , -0.436 , -0.123 , -0.518 , -0.156 , -0.176 , 0.279






0.276 , -0.130 , 0.625 , 0.256 , -0.149 , -0.315 , -0.156 , -0.312 , 0.347 , -0.296




0.566 , -0.00670 , 0.0734 , 0.240 , 0.241 , -0.250 , -0.176 , 0.347 , -0.582 , 0.0822






0.252 , -0.162 , -0.645 , 0.339 , 0.108 , -0.125 , 0.279 , -0.296 , 0.0822 , -0.425
j) We apply MRS to ww:
> wRef:=MRS.ww;
evalf(wRef,3);
 -59204081226337 


 3105912865183 






-684003815887756


 9317738595549 






-495096105186397


 9317738595549 






 346626320457703 




 9317738595549 




-119786177288358




 3105912865183 


wRef := 

 622300840178522 




 9317738595549 




 -2574510788877 




 3105912865183 






 222080591321103 


 3105912865183 






-498810861749711


 9317738595549 






-204266699745337


 9317738595549 
 -19.1 




 -73.4 






 -53.1 




 37.2 






 -38.6 




 66.8 






-0.829






 71.5 




 -53.5 






 -21.9 
Then we compare the norms:
> Norm(wRef,2)=Norm(ww,2);
59 √7 = 59 √7
An orthogonal transformation must preserve the length of every vector. We have seen that RS
preserves the length of a vector, so, in principle, RS could be an orthogonal transformation.
k) We know that a transformation is orthogonal if and only if its associated matrix is orthogonal, that
is, if Transpose(M) =M^(-1).
> Equal(Transpose(MRS),MatrixInverse(MRS));
true
and we conclude that MRS is an orthogonal matrix so that the reflection is an orthogonal
transformation.
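Items i)-k) can be bundled into one numerical check: build R = 2P − I and verify that it is orthogonal, involutive, and length-preserving. A NumPy sketch (the vector `w_test` is a made-up stand-in for ww):

```python
import numpy as np

# Columns are B[1], B[2], B[3] from part c); their span is S.
A = np.array([
    [-115, -90, 5], [-47, -21, -75], [-178, -56, 38], [20, -8, 97],
    [-59, -50, -82], [40, 30, -66], [131, 62, 55], [-174, -79, 68],
    [-108, -71, 26], [118, 28, 13],
], dtype=float)
W, _ = np.linalg.qr(A)
P = W @ W.T
R = 2 * P - np.eye(10)                    # reflection through S

w_test = np.arange(1.0, 11.0)             # hypothetical stand-in for ww
print(np.allclose(R.T @ R, np.eye(10)))   # True: orthogonal, R^T = R^(-1)
print(np.allclose(R @ R, np.eye(10)))     # True: reflecting twice is the identity
print(np.isclose(np.linalg.norm(R @ w_test), np.linalg.norm(w_test)))  # length preserved
```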
Problem 2
a) We start by checking that a[0] has length 1:
> int((1/sqrt(2))^2,t=-Pi..Pi)/Pi;
1
Next we consider the cosines: a[n] for n>0.
> int((cos(n*t))^2,t=-Pi..Pi)/Pi;
(cos(π n) sin(π n) + π n)/(n π)
The result is correct, but we would like to see a simpler answer. For that we have to tell Maple that n is an integer. Since we will also be using the integer m below, we declare both facts:
> assume(n::integer,m::integer);
and ask Maple to repeat the previous computation:
> int((cos(n*t))^2,t=-Pi..Pi)/Pi;
1
Finally we check the sines b[n]:
> int((sin(n*t))^2,t=-Pi..Pi)/Pi;
1
b) We now take any two sin(n*t) and cos(m*t):
> int(cos(m*t)*sin(n*t),t=-Pi..Pi)/Pi;
0
and sin(nt) against sin(mt) with n different from m:
> int(sin(m*t)*sin(n*t),t=-Pi..Pi)/Pi;
0
Finally, the same for the cosines:
> int(cos(m*t)*cos(n*t),t=-Pi..Pi)/Pi;
0
Remark: notice that whenever we wrote n and m, Maple treated them as different numbers (that is, Maple assumes that they are distinct).
We conclude with the product of a[0] and a[n] or b[n]:
> int(1/sqrt(2)*cos(n*t),t=-Pi..Pi)/Pi;
> int(1/sqrt(2)*sin(n*t),t=-Pi..Pi)/Pi;
0
0
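The same orthonormality checks carry over to SymPy; a sketch with the worksheet's inner product, using concrete small indices n and m (where a fully symbolic integral could return a piecewise answer):

```python
import sympy as sp

t = sp.symbols('t')

# Inner product from the worksheet: <f, g> = (1/pi) * integral of f*g over [-pi, pi].
ip = lambda f, g: sp.integrate(f * g, (t, -sp.pi, sp.pi)) / sp.pi

print(ip(1/sp.sqrt(2), 1/sp.sqrt(2)))   # 1  (the a[0] check)
print(ip(sp.cos(3*t), sp.cos(3*t)))     # 1  (an a[n] check, n = 3)
print(ip(sp.sin(2*t), sp.sin(2*t)))     # 1  (a b[n] check, n = 2)
print(ip(sp.cos(3*t), sp.sin(2*t)))     # 0  (mixed products vanish)
print(ip(sp.sin(2*t), sp.sin(3*t)))     # 0  (distinct sines are orthogonal)
```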
c) Here and below we will be finding orthogonal projections, so it will be convenient to define a few
things. We start with the inner product:
> IP:=(f,g)->int(f(t)*g(t),t=-Pi..Pi)/Pi;
IP := (f, g) → (1/π) ∫_{−π}^{π} f(t) g(t) dt
and continue with a few of the a[n] and b[n] functions:
> a[0]:= t->1/sqrt(2):
a[1]:= t->cos(1*t):
a[2]:= t->cos(2*t):
a[3]:= t->cos(3*t):
a[4]:= t->cos(4*t):
a[5]:= t->cos(5*t):
a[6]:= t->cos(6*t):
b[1]:= t->sin(1*t):
b[2]:= t->sin(2*t):
b[3]:= t->sin(3*t):
b[4]:= t->sin(4*t):
b[5]:= t->sin(5*t):
b[6]:= t->sin(6*t):
Finally we declare the function ff(t) = t^2 − 1.
> ff:=t->t^2-1;
ff := t → t² − 1
Next, we could find the projection of ff on <a[0],a[1],b[1]> by
P1(ff)=IP(ff,a[0])*a[0]+IP(ff,a[1])*a[1]+ IP(ff,b[1])*b[1]. Instead, since we are going to use it several
times, we define a function that does this for an arbitrary n. For example, the previous projection
should correspond to n=1.
> P:=(f,n,t)->sum('IP(f,a[j])*a[j](t)','j'=0..n)+sum('IP(f,b[j])*b[j](t)','j'=1..n);
P := (f, n, t) → sum('IP(f, a[j]) a[j](t)', 'j' = 0..n) + sum('IP(f, b[j]) b[j](t)', 'j' = 1..n)
Finally, the projection of ff on T1 is
> proj:=t->P(ff,1,t);
> simplify(proj(t));
proj := t → P(ff, 1, t)
π²/3 − 1 − 4 cos(t)
We see the plot of ff and the projection:
> plot([ff(t),proj(t)],t=-Pi..Pi);
(plot of ff(t) and proj(t) on t = −π..π)
The last thing is the computation of the "error":
> sqrt(IP(ff-proj,ff-proj));evalf(%);
√(8π⁵/45 − 16π) / √π
1.147681032
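The projection function P and the error computation translate directly to SymPy; a sketch reproducing the n = 1 result:

```python
import sympy as sp

t = sp.symbols('t')
ff = t**2 - 1

ip = lambda f, g: sp.integrate(f * g, (t, -sp.pi, sp.pi)) / sp.pi

def proj(f, n):
    # Orthonormal trigonometric basis of T_n: 1/sqrt(2), cos(j t), sin(j t).
    basis = [1/sp.sqrt(2)] + [sp.cos(j*t) for j in range(1, n + 1)] \
                           + [sp.sin(j*t) for j in range(1, n + 1)]
    return sum(ip(f, e) * e for e in basis)

p1 = sp.expand(proj(ff, 1))
print(p1)                          # equals pi**2/3 - 1 - 4*cos(t)
err = sp.sqrt(ip(ff - p1, ff - p1))
print(sp.N(err, 10))               # ≈ 1.147681032, matching the Maple value
```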
d) We repeat the steps of the previous item, but this time we use n=2:
> proj:=t->P(ff,2,t);
simplify(proj(t));
err:=evalf(sqrt(IP(ff-proj,ff-proj)));
plot([ff(t),proj(t)],t=-Pi..Pi);
proj := t → P(ff, 2, t)
π²/3 − 1 − 4 cos(t) + cos(2 t)
err := 0.5631800334
(plot of ff(t) and proj(t) on t = −π..π)
e) Finally, we compute the errors for n=3,4,... until the error is < 0.2
> proj:=t->P(ff,3,t):
err:=evalf(sqrt(IP(ff-proj,ff-proj)));
err := 0.3458914419
Not there yet...
> proj:=t->P(ff,4,t):
err:=evalf(sqrt(IP(ff-proj,ff-proj)));
err := 0.2390416071
Close...
> proj:=t->P(ff,5,t):
err:=evalf(sqrt(IP(ff-proj,ff-proj)));
err := 0.1775975522
Done. So the projection that achieves the error margin is
> simplify(proj(t));
π²/3 − 1 − 4 cos(t) + cos(2 t) − (4/9) cos(3 t) + (1/4) cos(4 t) − (4/25) cos(5 t)
and the plot is:
> plot([ff(t),proj(t)],t=-Pi..Pi);
(plot of ff(t) and proj(t) on t = −π..π)
Problem 3
a) We start by defining the vectors:
> u[1]:=<1,1>;
u[2]:=<0,1>;
 1

u1 := 

 1
 0

u2 := 

 1
The volume (area) of the parallelogram spanned by u[1] and u[2] is computed as the absolute value of the determinant of the matrix that has u[1] in the first column and u[2] in the second:
> abs(Determinant(<u[1]|u[2]>));
1
b) For this (and the next) item we could make a list of all the possible candidates by hand, but I prefer to have Maple do it for me. In order to achieve this I need the combinat package:
> with(combinat):
Warning, the protected name Chi has been redefined and unprotected
Next we let N be the dimension of our vectors (2 in this case) and vals the possible values for the
components:
> vals:={0,1};
vals := {0, 1}
> N:=2;
N := 2
The following lines use the cartprod command to obtain all possible vectors with components in vals.
The list of all the possible vectors is left in bas.
> T:=cartprod([seq(vals,i=1..N)]):
> bas:={}:
while not T[finished] do
bas:=bas union {convert(T[nextvalue](),Vector)}:
end do:
nops(bas);
4
We can see all the vectors:
> bas;
 1  1  0  0
, 
, 
, 
}
{
 
 
 

 1  0  1  0
But we need a list of all pairs of such vectors. For that we use the choose function:
> Nples:=choose(bas,N):
> nops(Nples);
6
and we can see all the pairs:
> Nples;
 1  0  1  0  1  1  1  0  1  0
, 
 }, {
, 
 }, {
, 
 }, {
, 
 }, {
, 
 },
{{
 
 
 
 
 
 
 
 
 

 1  0  0  0  1  0  1  1  0  1
 0  0
, 
 }}
{
 

 1  0
Next we define a function that computes the (absolute value of the) determinant of a pair of vectors.
Actually, the function returns the value of the determinant as well as the vectors.
> computeDet:=s->[abs(Determinant(convert(convert(s,list),Matrix),method=integer)),s];
computeDet := s → [abs(LinearAlgebra:-Determinant(convert(convert(s, list), Matrix), method = integer)), s]
For example:
> computeDet(Nples[1]);

 1  0 
 0, {
, 
 }


 
 

 0  0 
> result:=map(computeDet,Nples):
> result;

 1  0  
 1  0  
 1  1  
 0  0 
, 
 },  0, {
, 
 },  1, {
, 
 },  0, {
, 
 },
{ 0, {
 
  

 
  

 
  

 
 

 0  0  
 1  0  
 1  0  
 1  0 

 1  0  
 1  0 
 1, {
, 
 },  1, {
, 
 } }


 
  

 
 

 1  1  
 0  1 
To make it easier to see the result we create a new set where we just keep the values of the determinants:
> justDeterminant:={seq(result[i][1],i=1..nops(result))};
justDeterminant := {0, 1 }
So, the maximum value of the absolute value of the determinant is 1, and this is also the maximum volume of the parallelogram spanned by the vectors. To find vectors whose parallelogram has volume 1, just explore the list result. In view of the next items, we may want to do this in a more "automatic" way. The following code finds an element of result that has volume 1 (the maximum value).
> for i from 1 to nops(result) do
if (result[i][1]=1) then
print(result[i][2]);
break;
end if;
end do;
 1  0
, 
}
{
 

 0  1
c) We repeat step by step everything that we did above, only that now we use N=3:
> N:=3;
N := 3
> T:=cartprod([seq(vals,i=1..N)]):bas:={}:
while not T[finished] do
bas:=bas union {convert(T[nextvalue](),Vector)}:
end do:
nops(bas);
8
> Nples:=choose(bas,N):
nops(Nples);
56
> result:=map(computeDet,Nples):
> justDeterminant:={seq(result[i][1],i=1..nops(result))};
justDeterminant := {0, 1, 2 }
So now the maximum volume is 2. To find a triple of vectors with maximal volume we do as we did
above:
> for i from 1 to nops(result) do
if (result[i][1]=2) then
print(result[i][2]);
break;
end if;
end do;
{<1, 1, 0>, <0, 1, 1>, <1, 0, 1>}
d) Again we repeat everything with N=4:
> N:=4;
N := 4
> T:=cartprod([seq(vals,i=1..N)]):bas:={}:
while not T[finished] do
v:=convert(T[nextvalue](),Vector);
if (not Equal(v,ZeroVector(N))) then
bas:=bas union {v}:
end if:
end do:
nops(bas);
15
> Nples:=choose(bas,N):
nops(Nples);
1365
> result:=map(computeDet,Nples):
> justDeterminant:={seq(result[i][1],i=1..nops(result))};
justDeterminant := {0, 1, 2, 3 }
and we see that the maximum volume is 3. A pattern seems to be emerging: in R^N the maximum volume is N − 1... Could it be a general fact?
Finally, we show a quadruple of vectors with maximal volume:
> for i from 1 to nops(result) do
if (result[i][1]=3) then
print(result[i][2]);
break;
end if;
end do;
 0  0  1  1 

 
 
 


 
 
 

 1  0  1  0 

 
 
 

, 
, 
, 
 

 
 
 

 1  1  0  1 

 
 
 


 
 
 
 
 0  1  1  0 