The Art of Linear Algebra
– Graphic Notes on “Linear Algebra for Everyone” –
Kenji Hiranabe∗
with the kindest help of Gilbert Strang†
September 1, 2021 / updated March 23, 2023
Abstract
I tried intuitive visualizations of important concepts introduced in "Linear Algebra for Everyone".¹ This is aimed at promoting understanding of vector/matrix calculations and algorithms from the perspective of matrix factorizations. They include Column-Row (CR), Gaussian Elimination (LU), Gram-Schmidt Orthogonalization (QR), Eigenvalues and Diagonalization (QΛQ^T), and Singular Value Decomposition (UΣV^T). All the artwork, including this article, is maintained in the GitHub repository https://github.com/kenjihiranabe/The-Art-of-Linear-Algebra/.
Foreword
I am happy to see Kenji Hiranabe's pictures of matrix operations in linear algebra! The pictures are an excellent way to show the algebra. We can think of matrix multiplication by row · column dot products, but that is not all – it is "linear combinations" and "rank 1 matrices" that complete the algebra and the art. I am very grateful to see the books in Japanese translation and the ideas in Kenji's pictures.
– Gilbert Strang
Professor of Mathematics at MIT
Contents
1 Viewing a Matrix – 4 Ways
2 Vector times Vector – 2 Ways
3 Matrix times Vector – 2 Ways
4 Matrix times Matrix – 4 Ways
5 Practical Patterns
6 The Five Factorizations of a Matrix
  6.1 A = CR
  6.2 A = LU
  6.3 A = QR
  6.4 S = QΛQ^T
  6.5 A = UΣV^T

∗ twitter: @hiranabe, k-hiranabe@esm.co.jp, https://anagileway.com
† Massachusetts Institute of Technology, http://www-math.mit.edu/~gs/
¹ "Linear Algebra for Everyone": http://math.mit.edu/everyone/ with Japanese translation started by Kindai Kagaku.
1 Viewing a Matrix – 4 Ways
A matrix (m × n) can be seen as 1 matrix, mn numbers, n columns and m rows.
Figure 1: Viewing a Matrix in 4 Ways

$$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix} = \begin{bmatrix} | & | \\ \mathbf{a}_1 & \mathbf{a}_2 \\ | & | \end{bmatrix} = \begin{bmatrix} -\,\mathbf{a}_1^*\,- \\ -\,\mathbf{a}_2^*\,- \\ -\,\mathbf{a}_3^*\,- \end{bmatrix}$$
Here, the column vectors are in bold, as in $\mathbf{a}_1$. Row vectors carry a ∗, as in $\mathbf{a}_1^*$. Transposed vectors and matrices are indicated by T, as in $\mathbf{a}^T$ and $A^T$.
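If you like to check the four views numerically, here is a minimal NumPy sketch (the 3 × 2 matrix is a made-up example, not one from the book):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])

print(A.shape)   # (3, 2): one matrix holding m * n = 6 numbers
print(A[:, 0])   # the column vector a1 -> [1 3 5]
print(A[1, :])   # the row vector a2*  -> [3 4]
```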
2 Vector times Vector – 2 Ways
Hereafter I point to specific sections of “Linear Algebra for Everyone” and present graphics which illustrate
the concepts with short names in colored circles.
• Sec. 1.1 (p.2) Linear combination and dot products
• Sec. 1.3 (p.25) Matrix of Rank One
• Sec. 1.4 (p.29) Row way and column way
!"
!
!
!"#$%&"'()#$$*+(,-.&/
!
!#
!
01+2$3$41#&56
!"$ 57$1$,1#&56$#!"$ $ %&8$9:$+.5#;.&$'( ) 1&.$*(
#;.$&.7(<#$% 57$1$&1+2$3$,1#&568
!"#$%&"'()# *! + "/$57$.6%&.77.'$17$!$" 5+$
,1#&56$<1+=(1=.$1+'$>5.<'7$1$+(,-.&8
$!
$!
!
! " # $" & " ' $" & $! ( "$" ( #$#
$#
$#
#
!
" $
#
$
% & "$
#$
%
"%
#%
Figure 2: Vector times Vector - (v1), (v2)
(v1) is an elementary operation on two vectors, but (v2) multiplies a column by a row and produces a rank 1 matrix. Knowing this outer product (v2) is the key to the later sections.
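As a quick sketch of the two ways, assuming nothing beyond plain NumPy and two made-up vectors:

```python
import numpy as np

v = np.array([1, 2, 3])
w = np.array([4, 5, 6])

# (v1) dot product: row times column gives a number
print(v @ w)                      # 32

# (v2) outer product: column times row gives a rank 1 matrix
P = np.outer(v, w)
print(P)                          # a 3 x 3 matrix
print(np.linalg.matrix_rank(P))   # 1
```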
3 Matrix times Vector – 2 Ways
A matrix times a vector creates a vector of three dot products (Mv1) as well as a linear combination (Mv2)
of the column vectors of A.
• Sec. 1.1 (p.3) Linear combinations
• Sec. 1.3 (p.21) Matrices and Column Spaces
Figure 3: Matrix times Vector - (Mv1), (Mv2)
At first, you learn (Mv1). But when you get used to viewing it as (Mv2), you can understand Ax as a
linear combination of the columns of A. Those products fill the column space of A denoted as C(A). The
solution space of Ax = 0 is the nullspace of A denoted as N(A).
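The same contrast can be played with in NumPy; this small sketch (with a made-up 3 × 2 matrix) computes Ax both as dot products (Mv1) and as a combination of columns (Mv2):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])
x = np.array([7, 8])

# (Mv1) dot products, one per row of A
mv1 = np.array([A[i, :] @ x for i in range(3)])

# (Mv2) a linear combination of the columns of A
mv2 = x[0] * A[:, 0] + x[1] * A[:, 1]

print(mv1, mv2, A @ x)   # all three agree: [23 53 83]
```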
Also, (vM1) and (vM2) show the same patterns for a row vector times a matrix.
Figure 4: Vector times Matrix - (vM1), (vM2)
The products fill the row space of A, denoted as C(A^T). The solution space of yA = 0 is the left-nullspace of A, denoted as N(A^T).
The four subspaces are N(A) and C(A^T) (perpendicular to each other) in $\mathbb{R}^n$, and N(A^T) and C(A) (perpendicular to each other) in $\mathbb{R}^m$.
• Sec. 3.5 (p.124) Dimensions of the Four Subspaces
Figure 5: The Four Subspaces
See A = CR (Sec 6.1) for the rank r.
4 Matrix times Matrix – 4 Ways
“Matrix times Vector” naturally extends to “Matrix times Matrix”.
• Sec. 1.4 (p.35) Four Ways to Multiply AB = C
• Also see the back cover of the book
Figure 6: Matrix times Matrix - (MM1), (MM2), (MM3), (MM4)
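All four views compute the same product, which a short NumPy sketch can confirm (the 2 × 2 matrices are arbitrary examples):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])

# (MM1) entries as row-column dot products
C1 = np.array([[A[i, :] @ B[:, j] for j in range(2)] for i in range(2)])

# (MM2) columns of C are A times the columns of B
C2 = np.column_stack([A @ B[:, j] for j in range(2)])

# (MM3) rows of C are the rows of A times B
C3 = np.vstack([A[i, :] @ B for i in range(2)])

# (MM4) sum of rank 1 matrices: columns of A times rows of B
C4 = sum(np.outer(A[:, k], B[k, :]) for k in range(2))

assert np.allclose(C1, A @ B) and np.allclose(C2, A @ B)
assert np.allclose(C3, A @ B) and np.allclose(C4, A @ B)
```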
5 Practical Patterns
Here, I show some practical patterns which help you grasp the coming factorizations more intuitively.
Figure 7: Pattern 1, 2 - (P1), (P2)
Pattern 1 is a combination of (MM2) and (Mv2). Pattern 2 is an extension of (MM3). Note that Pattern 1 is a column operation (multiplying a matrix from the right), whereas Pattern 2 is a row operation (multiplying a matrix from the left).
Figure 8: Pattern 1′, 2′ - (P1′), (P2′)
(P1′) multiplies the diagonal numbers into the columns of the matrix, whereas (P2′) multiplies the diagonal numbers into the rows of the matrix. Both are variants of (P1) and (P2).
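A small NumPy sketch of (P1′) and (P2′), with a made-up matrix and diagonal:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
D = np.diag([10., 100., 1000.])

# (P1') A times a diagonal matrix scales the columns of A
print(A @ D)

# (P2') a diagonal matrix times A scales the rows of A
print(D @ A)
```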
Figure 9: Pattern 3 - (P3)
This pattern appears when you solve differential equations and recurrence equations:
• Sec. 6 (p.201) Eigenvalues and Eigenvectors
• Sec. 6.4 (p.243) Systems of Differential Equations
$$\frac{du(t)}{dt} = Au(t), \quad u(0) = u_0$$
$$u_{n+1} = Au_n, \quad \text{starting from } u_0$$

In both cases, the solutions are expressed with eigenvalues $(\lambda_1, \lambda_2, \lambda_3)$ and eigenvectors $X = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix}$ of $A$, and the coefficients $c = \begin{bmatrix} c_1 & c_2 & c_3 \end{bmatrix}^T$, which are the coordinates of the initial condition $u(0) = u_0$ in terms of the eigenvectors $X$:

$$u_0 = c_1 x_1 + c_2 x_2 + c_3 x_3$$
$$c = \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = X^{-1} u_0$$

and the general solutions of the two equations are:

$$u(t) = e^{At} u_0 = X e^{\Lambda t} X^{-1} u_0 = X e^{\Lambda t} c = c_1 e^{\lambda_1 t} x_1 + c_2 e^{\lambda_2 t} x_2 + c_3 e^{\lambda_3 t} x_3$$
$$u_n = A^n u_0 = X \Lambda^n X^{-1} u_0 = X \Lambda^n c = c_1 \lambda_1^n x_1 + c_2 \lambda_2^n x_2 + c_3 \lambda_3^n x_3$$
See Figure 9: Pattern 3 (P3) above again to get XDc.
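A minimal NumPy sketch of this recipe for the recurrence $u_{n+1} = Au_n$, assuming a made-up diagonalizable matrix A:

```python
import numpy as np

# A made-up symmetric matrix, so the eigenvectors are well behaved.
A = np.array([[2., 1.], [1., 2.]])
u0 = np.array([1., 0.])

lam, X = np.linalg.eig(A)    # eigenvalues in lam, eigenvectors in the columns of X
c = np.linalg.solve(X, u0)   # c = X^{-1} u0, coordinates of u0 in the eigenbasis

n = 5
un = X @ np.diag(lam**n) @ c                # X Lambda^n c, the (P3) pattern
print(un)
print(np.linalg.matrix_power(A, n) @ u0)    # same as A^n u0
```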
Figure 10: Pattern 4 - (P4)
This pattern (P4) works in both the eigenvalue decomposition and the singular value decomposition. Both decompositions are expressed as a product of three matrices with a diagonal matrix in the middle, and also as a sum of rank 1 matrices with the eigenvalue/singular value coefficients. More details are discussed in the next section.
6 The Five Factorizations of a Matrix
• Preface p.vii, The Plan for the Book
A = CR, A = LU, A = QR, S = QΛQ^T, A = UΣV^T are illustrated one by one.
A = CR: Independent columns in C, row echelon form in R. Leads to column rank = row rank.
A = LU: LU decomposition from Gaussian elimination. (Lower triangular)(Upper triangular).
A = QR: QR decomposition as Gram-Schmidt orthogonalization. Orthogonal Q and triangular R.
S = QΛQ^T: Eigenvalue decomposition of a symmetric matrix S. Eigenvectors in Q, eigenvalues in Λ.
A = UΣV^T: Singular value decomposition of all matrices A. Singular values in Σ.

Table 1: The Five Factorizations
6.1 A = CR
• Sec.1.4 Matrix Multiplication and A = CR (p.29)
Every rectangular matrix A has the same row rank as column rank, and this factorization is the most intuitive way to understand the theorem. C consists of independent columns of A, and R is the row reduced echelon form of A. A = CR reduces A to r independent columns in C times r independent rows in R.
$$A = CR$$
$$\begin{bmatrix} 1 & 2 & 3 \\ 2 & 3 & 5 \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 2 & 3 \end{bmatrix}\begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix}$$
Procedure: look at the columns of A from left to right. Keep the independent ones; discard the dependent ones, which can be created from the earlier columns. Columns 1 and 2 survive, and column 3 is discarded because it is the sum of the first two. To rebuild A from the independent columns 1 and 2, the row echelon form R appears on the right.
"
!
!
" !
#
!
"" !
"" !
"" !
#$%&'
!"
Figure 11: Column Rank in CR
Now you see the column rank is two because there are only two independent columns in C and all the
columns of A are linear combinations of the two columns of C.
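For this small example, C and R can be read off by hand and checked in NumPy (a sketch, not a general CR algorithm):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 3, 5]])

C = A[:, :2]                     # the two independent columns survive
R = np.array([[1, 0, 1],
              [0, 1, 1]])        # row reduced echelon form of A
assert np.array_equal(C @ R, A)
print(np.linalg.matrix_rank(A))  # 2 = column rank = row rank
```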
Figure 12: Row Rank in CR
And you see the row rank is two because there are only two independent rows in R and all the rows of A
are linear combinations of the two rows of R.
6.2 A = LU
Solving Ax = b via Gaussian elimination can be expressed as an LU factorization. Usually, you apply elementary row operation matrices (E) to A to make an upper triangular U:

$$EA = U$$
$$A = E^{-1}U$$
$$\text{let } L = E^{-1}, \quad A = LU$$

Now solve Ax = b in 2 steps: (1) forward Lc = b and (2) back Ux = c.
• Sec.2.3 (p.57) Matrix Computations and A = LU
Here, we directly calculate L and U from A.
 

  
 

$$A = \begin{bmatrix} | \\ \mathbf{l}_1 \\ | \end{bmatrix}\begin{bmatrix} -\,\mathbf{u}_1^*\,- \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & A_2 \end{bmatrix} = \begin{bmatrix} | \\ \mathbf{l}_1 \\ | \end{bmatrix}\begin{bmatrix} -\,\mathbf{u}_1^*\,- \end{bmatrix} + \begin{bmatrix} | \\ \mathbf{l}_2 \\ | \end{bmatrix}\begin{bmatrix} -\,\mathbf{u}_2^*\,- \end{bmatrix} + \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & A_3 \end{bmatrix} = LU$$

Figure 13: Recursive Rank 1 Matrix Peeling from A
To find L and U, peel off the rank 1 matrix made of the first row and the first column of A. This leaves A2. Do this recursively and decompose A into the sum of rank 1 matrices.
Figure 14: LU rebuilds A
To rebuild A from L times U , use column-row multiplication.
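The peeling procedure translates directly into code. This is a sketch of LU by rank 1 peeling, assuming no row exchanges are needed (every pivot is nonzero); `lu_by_peeling` is a hypothetical helper name:

```python
import numpy as np

def lu_by_peeling(A):
    """LU without pivoting: peel off one rank 1 matrix per step.
    A sketch only; it assumes every pivot A[k, k] is nonzero."""
    A = A.astype(float)
    n = A.shape[0]
    L = np.zeros((n, n))
    U = np.zeros((n, n))
    for k in range(n):
        L[:, k] = A[:, k] / A[k, k]       # column l_k (1 on the diagonal)
        U[k, :] = A[k, :]                 # row u_k*
        A = A - np.outer(L[:, k], U[k, :])  # peel off l_k u_k*, leaving A_{k+1}
    return L, U

A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])
L, U = lu_by_peeling(A)
assert np.allclose(L @ U, A)
```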
6.3 A = QR
A = QR changes the columns of A into perpendicular columns of Q, keeping C(A) = C(Q).
• Sec.4.4 Orthogonal matrices and Gram-Schmidt (p.165)
In Gram-Schmidt, the normalized a1 is q1 . Then a2 is adjusted to be perpendicular to q1 to create q2 .
This procedure gives:
$$q_1 = a_1 / \|a_1\|$$
$$q_2 = a_2 - (q_1^T a_2)\,q_1, \qquad q_2 = q_2 / \|q_2\|$$
$$q_3 = a_3 - (q_1^T a_3)\,q_1 - (q_2^T a_3)\,q_2, \qquad q_3 = q_3 / \|q_3\|$$
In the reverse direction, letting $r_{ij} = q_i^T a_j$, you get:
$$a_1 = r_{11} q_1$$
$$a_2 = r_{12} q_1 + r_{22} q_2$$
$$a_3 = r_{13} q_1 + r_{23} q_2 + r_{33} q_3$$
The original A becomes QR: orthogonal Q times upper triangular R.

$$A = \begin{bmatrix} | & | & | \\ q_1 & q_2 & q_3 \\ | & | & | \end{bmatrix}\begin{bmatrix} r_{11} & r_{12} & r_{13} \\ & r_{22} & r_{23} \\ & & r_{33} \end{bmatrix} = QR, \qquad QQ^T = Q^T Q = I$$
Figure 15: A = QR
Each column vector of A can be rebuilt from Q and R. See Pattern 1 (P1) again for the graphic interpretation.
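A sketch of classical Gram-Schmidt following the formulas above, assuming the columns of A are independent (`gram_schmidt_qr` is a hypothetical helper, not a library routine):

```python
import numpy as np

def gram_schmidt_qr(A):
    """Classical Gram-Schmidt; a sketch assuming independent columns."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        q = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # r_ij = q_i^T a_j
            q -= R[i, j] * Q[:, i]        # subtract the projection onto q_i
        R[j, j] = np.linalg.norm(q)
        Q[:, j] = q / R[j, j]             # normalize
    return Q, R

A = np.array([[1., 1.], [1., 0.], [0., 1.]])
Q, R = gram_schmidt_qr(A)
assert np.allclose(Q @ R, A)
assert np.allclose(Q.T @ Q, np.eye(2))    # orthonormal columns
```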
6.4 S = QΛQ^T
All symmetric matrices S must have real eigenvalues and orthogonal eigenvectors. The eigenvalues are the
diagonal elements of Λ and the eigenvectors are in Q.
• Sec.6.3 (p.227) Symmetric Positive Definite Matrices

$$S = Q\Lambda Q^T = \begin{bmatrix} | & | & | \\ q_1 & q_2 & q_3 \\ | & | & | \end{bmatrix}\begin{bmatrix} \lambda_1 & & \\ & \lambda_2 & \\ & & \lambda_3 \end{bmatrix}\begin{bmatrix} -\,q_1^T\,- \\ -\,q_2^T\,- \\ -\,q_3^T\,- \end{bmatrix}$$
$$= \lambda_1 q_1 q_1^T + \lambda_2 q_2 q_2^T + \lambda_3 q_3 q_3^T = \lambda_1 P_1 + \lambda_2 P_2 + \lambda_3 P_3$$
$$P_1 = q_1 q_1^T, \quad P_2 = q_2 q_2^T, \quad P_3 = q_3 q_3^T$$

Figure 16: S = QΛQ^T
A symmetric matrix S is diagonalized into Λ by an orthogonal matrix Q and its transpose, and it is broken down into a combination of rank 1 projection matrices $P = qq^T$. This is the spectral theorem. Note that Pattern 4 (P4) is at work in the decomposition.
$$S = S^T = \lambda_1 P_1 + \lambda_2 P_2 + \lambda_3 P_3$$
$$QQ^T = P_1 + P_2 + P_3 = I$$
$$P_1 P_2 = P_2 P_3 = P_3 P_1 = O$$
$$P_1^2 = P_1 = P_1^T, \quad P_2^2 = P_2 = P_2^T, \quad P_3^2 = P_3 = P_3^T$$
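A NumPy check of the spectral theorem on a made-up 2 × 2 symmetric matrix, building the projections $P_i = q_i q_i^T$ explicitly:

```python
import numpy as np

S = np.array([[2., 1.], [1., 2.]])   # a made-up symmetric matrix
lam, Q = np.linalg.eigh(S)           # real eigenvalues, orthonormal Q

# rank 1 projection matrices P_i = q_i q_i^T
P = [np.outer(Q[:, i], Q[:, i]) for i in range(2)]

assert np.allclose(S, lam[0] * P[0] + lam[1] * P[1])   # spectral sum
assert np.allclose(P[0] + P[1], np.eye(2))             # P1 + P2 = I
assert np.allclose(P[0] @ P[1], np.zeros((2, 2)))      # P1 P2 = O
```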
6.5 A = UΣV^T
• Sec.7.1 (p.259) Singular Values and Singular Vectors
Every matrix (including rectangular ones) has a singular value decomposition (SVD). A = UΣV^T has the singular vectors of A in U and V. The following illustrates the 'reduced' SVD.
Figure 17: A = UΣV^T

You can find V as an orthonormal basis of $\mathbb{R}^n$ (eigenvectors of $A^T A$) and U as an orthonormal basis of $\mathbb{R}^m$ (eigenvectors of $AA^T$). Together they diagonalize A into Σ. This is also expressed as a combination of rank 1 matrices.

$$A = U\Sigma V^T = \begin{bmatrix} | & | & | \\ u_1 & u_2 & u_3 \\ | & | & | \end{bmatrix}\begin{bmatrix} \sigma_1 & \\ & \sigma_2 \\ & \end{bmatrix}\begin{bmatrix} -\,v_1^T\,- \\ -\,v_2^T\,- \end{bmatrix} = \sigma_1 u_1 v_1^T + \sigma_2 u_2 v_2^T$$
Note that:
$$UU^T = I_m, \qquad VV^T = I_n$$
See Pattern 4 (P4) for the graphic notation.
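A NumPy sketch of the reduced SVD and its rank 1 rebuild, on a made-up 2 × 3 matrix:

```python
import numpy as np

A = np.array([[3., 1., 1.],
              [-1., 3., 1.]])          # a made-up 2 x 3 matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # reduced SVD

# rebuild A as a sum of rank 1 matrices sigma_i u_i v_i^T
A_rebuilt = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))
assert np.allclose(A, A_rebuilt)
```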
Conclusion and Acknowledgements
I presented systematic visualizations of matrix/vector multiplication and their application to the Five Matrix
Factorizations. I hope you enjoyed them and will use them in your understanding of Linear Algebra.
Ashley Fernandes helped me beautify the typesetting of this paper, making it much more consistent and professional.
To conclude this paper, I'd like to thank Prof. Gilbert Strang for publishing "Linear Algebra for Everyone". It guides us through a new vision of these beautiful landscapes in Linear Algebra. Everyone can reach a fundamental understanding of its underlying ideas in a practical manner that introduces us to contemporary as well as traditional data science and machine learning, an important part of the matrix world.
References and Related Works
1. Gilbert Strang (2020), Linear Algebra for Everyone, Wellesley-Cambridge Press, http://math.mit.edu/everyone
2. Gilbert Strang (2016), Introduction to Linear Algebra, 5th ed., Wellesley-Cambridge Press, http://math.mit.edu/linearalgebra
3. Kenji Hiranabe (2021), Map of Eigenvalues, slide deck, https://github.com/kenjihiranabe/The-Art-of-Linear-Algebra/blob/main/MapofEigenvalues.pdf
Figure 18: Map of Eigenvalues
4. Kenji Hiranabe (2020), Matrix World, slide deck, https://github.com/kenjihiranabe/The-Art-of-Linear-Algebra/blob/main/MatrixWorld.pdf
Figure 19: Matrix World
5. Gilbert Strang, artwork by Kenji Hiranabe, The Four Subspaces and the solutions to Ax = b
" , &+0*
&+
*
&
" #$%&
$
'" # !
!#"$
)
!"# # ,
! "
!"# # ,
'$ # !
&
(
!
(
'& # (
%#") $
%#"$
!"# # - . ,
!"# # / . ,
&* ' % " ( ! ")
% " ) ! ")
&+ ' ! " ( % ")
* " ) + ")
Figure 20: The Four Subspaces and the solutions to Ax = b