Linear Algebra 2270-2

Due in Week 2
For the second week, the plan is to evaluate work from chapter 1. Here’s the list of problems, followed by
problem notes and a few answers.
Problem week2-1. State the Three Possibilities for the solution ~x of a matrix system A~x = ~b. Give an
example of each possibility in dimension 3, then describe the geometry of each example.
Problem week2-2. Strang defines the nullspace of a matrix A to be the set of all solutions ~x to the equation
A~x = ~0. Give a 3 × 3 example in both matrix form and equation form. Then define nullspace for a
system of equations.
Problem week2-3. The transpose of matrix A has associated nullspace A^T ~y = ~0. Both A~x = ~0 and A^T ~y = ~0
appear in the four-subspace figure on the cover of the textbook. Give an example of a matrix A for
which both nullspaces have a simple geometric realization, but ~x and ~y are not the same size.
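A small numerical illustration of the size bookkeeping behind these problems (a sketch only, with an arbitrary 2 × 3 matrix chosen for illustration, not taken from the textbook): solutions ~x of A~x = ~0 live in R^3, while solutions ~y of A^T ~y = ~0 live in R^2.

    # Python/numpy sketch: x and y for A x = 0 and A^T y = 0 have different sizes
    # when A is not square.
    import numpy as np

    A = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0]])     # arbitrary 2 x 3 example

    x = np.array([1.0, 0.0, -1.0])      # a nonzero vector in R^3 with A x = 0
    y = np.array([0.0, 0.0])            # in R^2; here only y = 0 solves A^T y = 0

    print(A @ x)                        # [0. 0.]
    print(A.T @ y)                      # [0. 0. 0.]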
Section 1.2. Exercises 4, 13, 19, 21, 22, 31
Section 1.3. Exercises 4, 6, 10, 12
Problem Notes
The problems not specifically from Strang’s book use the online reference about linear algebraic equations (no
matrices), found at the course web site in week 1. Questions about the problems will be received by email, in JWB 113, or by telephone at 581-6879. Worthwhile remarks will be recorded here.
Resolution of issues for Strang’s problems will appear here in response to communications. If there is a
difficulty, like a missing definition, incomplete information, a strange statement, or a total impasse, then please send email, call 581-6879, or visit JWB 113.
1.2-13. A vector (x, y, z) is perpendicular to (1, 0, 1) provided the dot product of the two vectors is zero,
which is the scalar equation x + z = 0. View this as a 3 × 3 system A~u = ~0 with

   ~u = [ x ]              [ 1  0  1 ]
        [ y ] ,        A = [ 0  0  0 ] .
        [ z ]              [ 0  0  0 ]

System A~u = ~0 is the infinitely many solution case, with general solution x = t1, y = t2, z = −t1. Because there are two parameters, the dimension is 2, and a standard basis is found by taking the partial derivatives ∂/∂t1, ∂/∂t2 of ~u. These are (1, 0, −1) and (0, 1, 0). What are they? They are a basis for the plane perpendicular to (1, 0, 1), because the linear combinations of the two vectors reproduce the general solution just found. Read Strang’s answer below, and then conceive of a plan for decoding Strang’s answers into a problem-solving method.
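A quick numerical check of this note (a sketch only; it verifies the basis vectors just computed):

    # Python/numpy sketch: check that (1, 0, -1) and (0, 1, 0) solve A u = 0,
    # are perpendicular to (1, 0, 1), and that their combinations reproduce the
    # general solution x = t1, y = t2, z = -t1.
    import numpy as np

    A = np.array([[1.0, 0.0, 1.0],
                  [0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
    n = np.array([1.0, 0.0, 1.0])       # the given vector (1, 0, 1)
    v1 = np.array([1.0, 0.0, -1.0])
    v2 = np.array([0.0, 1.0, 0.0])

    print(A @ v1, A @ v2)               # both are the zero vector
    print(n @ v1, n @ v2)               # both dot products are 0
    t1, t2 = 2.0, -3.0                  # any parameter values
    print(t1 * v1 + t2 * v2)            # [ 2. -3. -2.]: x = t1, y = t2, z = -t1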
1.3-6. Attacks on the problem should try to minimize the algebra. No one expects you to write all the details, but if you give an answer, then the justification has to appear. For instance, the matrix

   [ 1  3  5 ]
   [ 1  2  4 ]
   [ 1  1  c ]

with c = 3
has column 3 equal to 2 times column 1 plus column 2. The justification should display the details of this
claim, on paper, not just claim it is true or write the answer without details. How is the answer discovered?
One way is to try to make column 3 a combination of the first two columns by just looking at the matrix,
then narrowing down what the two multipliers might be. This method assumes that you have intuition gained
from actually doing such computations before, on paper, with details that solve for ~x in the equation A~x = ~0.
In truth, this problem is a mechanical one with the twist of a symbol c in the mechanics. The fundamental
calculation is finding the nullspace, the solution set of A~x = ~0.
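One way to organize that mechanical calculation (a sketch under the assumption that the first two rows determine the two multipliers, which is how the intuition above can be checked on paper or by machine):

    # Python/numpy sketch: find multipliers m1, m2 with
    #     m1 * column 1 + m2 * column 2 = column 3
    # from rows 1 and 2 of the matrix above, then read off c from row 3.
    import numpy as np

    rows12 = np.array([[1.0, 3.0],      # entries of columns 1 and 2 in rows 1, 2
                       [1.0, 2.0]])
    rhs12 = np.array([5.0, 4.0])        # entries of column 3 in rows 1, 2

    m1, m2 = np.linalg.solve(rows12, rhs12)
    print(m1, m2)                       # 2.0 1.0
    print(m1 * 1.0 + m2 * 1.0)          # 3.0 = c, so c = 3 makes column 3 a combination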
1.3-10. Working entirely with equations will make the problem mechanically easy, although it gives little
intuition about practical methods for finding an inverse matrix. All that is required is back-substitution in
order to solve for z1, z2, z3 in terms of b1, b2, b3. Delaying further work on inverses until chapter 2 seems to be a good plan, because practical methods require not just proof details but on-paper experience computing inverses by clumsier methods like this one, whose details are entirely in terms of systems of equations.
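A minimal back-substitution sketch for an upper triangular 3 × 3 system Uz = b (the particular matrix U below is only a placeholder, chosen so that its inverse agrees with the answer displayed in the Some Answers section; only the procedure matters here):

    # Python/numpy sketch: solve U z = b by back-substitution, starting from the
    # last equation and working upward, which is the hand procedure described above.
    import numpy as np

    def back_substitute(U, b):
        n = len(b)
        z = np.zeros(n)
        for i in range(n - 1, -1, -1):          # last equation first
            z[i] = (b[i] - U[i, i + 1:] @ z[i + 1:]) / U[i, i]
        return z

    U = np.array([[-1.0,  1.0,  0.0],           # placeholder upper triangular matrix
                  [ 0.0, -1.0,  1.0],
                  [ 0.0,  0.0, -1.0]])
    b = np.array([1.0, 2.0, 3.0])
    z = back_substitute(U, b)
    print(z, U @ z)                             # U z reproduces b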
Some Answers
1.2. Exercises 4, 21, 22 have textbook answers.
1.2-13. Strang’s answer: The plane perpendicular to (1, 0, 1) contains all vectors (c, d, −c). Then (1, 0, −1)
and (0, 1, 0) are two perpendicular vectors in the plane.
1.2-19. A proof does not have an answer as such, but a sequence of written details. There are many ways to write the proof. Here is one way, chosen by Strang. Start from the rules (1) v · w = w · v, (2) u · (v + w) = u · v + u · w and (3) (cv) · w = c(v · w). Use rule (2) with u replaced by v + w; then the left side is the required ||v + w||^2. Then use rules (1) and (2) repeatedly on the right side to obtain the identity. These details are not the proof as it is presented on paper, but rather instructions for how to construct the proof on paper, especially the background theorems that must be cited.
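Assuming the identity in question is the expansion of ||v + w||^2 (that is what the instructions above describe), one way to lay out the steps on paper is:

    ||v + w||^2 = (v + w) · (v + w)
                = (v + w) · v + (v + w) · w         by rule (2) with u = v + w
                = v · v + w · v + v · w + w · w     by rules (1) and (2) again
                = ||v||^2 + 2 v · w + ||w||^2       by rule (1).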
1.2-31. Find four perpendicular unit vectors with all components equal to 1/2 or −1/2. The first theorem needed is something about independent vectors: perpendicular nonzero vectors are independent, and in three dimensions the maximum number of independent vectors is three, so the constructed vectors cannot be three dimensional. The first possible dimension is four. But the request does not fix the dimension, so there are many possible solutions to this problem. Secondly, each vector has to have length one (it is a unit vector), so there is an equation constraint: the sum of the squares of the components equals one. One possible answer is four vectors of dimension 8, the first (c, c, c, c, 0, 0, 0, 0) with c = 1/2, the other three built from the same block of four entries with sign changes, e.g., the second is (c, −c, c, −c, 0, 0, 0, 0). See if you can give an answer for dimension 4, which is what Strang had in mind.
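A numerical check of a dimension-8 construction of this kind (a sketch; the sign pattern below is one possible choice, not Strang's intended dimension-4 answer):

    # Python/numpy sketch: four perpendicular unit vectors in R^8, each built from
    # the block (c, c, c, c) with sign changes, c = 1/2, padded with zeros.
    import numpy as np

    c = 0.5
    V = np.array([[ c,  c,  c,  c, 0, 0, 0, 0],
                  [ c, -c,  c, -c, 0, 0, 0, 0],
                  [ c,  c, -c, -c, 0, 0, 0, 0],
                  [ c, -c, -c,  c, 0, 0, 0, 0]])

    # The Gram matrix V V^T is the 4 x 4 identity: unit lengths on the diagonal,
    # zero dot products off the diagonal.
    print(V @ V.T)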
1.3. Exercises 4, 12 have textbook answers.
1.3-6. Strang’s answers: col(A, 3) = 2 col(A, 1) + col(A, 2) for c = 3; col(B, 3) = (−1) col(B, 1) + col(B, 2)
for c = −1; col(C, 3) = 3 col(C, 1) + (−1) col(C, 2) for c = 0.

1.3-10. Strang’s answer:

   ∆^(-1) = [ −1  −1  −1 ]
            [  0  −1  −1 ] .
            [  0   0  −1 ]