A Note on the Equality of the Column and Row Rank of a Matrix

GEORGE MACKIW
Loyola College in Maryland
Baltimore, MD 21210

Mathematics Magazine, Vol. 68, No. 4, October 1995, pp. 285-286
A basic result in linear algebra is that the row and column spaces of a matrix have dimensions that are equal. In this note we derive this result using an approach that is elementary yet different from that appearing in most current texts. Moreover, we do not rely on the echelon form in our arguments.

The column and row ranks of a matrix A are the dimensions of the column and row spaces, respectively, of A. As is often noted, it is enough to establish that

    row rank A ≤ column rank A    (1)

for any matrix A. For, applying the result in (1) to A^t, the transpose of A, would produce the desired equality, since the row space of A^t is precisely the column space of A. We present the argument for real matrices, indicating at the end the modifications necessary in the complex case and the case of matrices over arbitrary fields.
We take full advantage of the following two elementary observations:

1) for any vector x in R^n, Ax is a linear combination of the columns of A, and
2) vectors in the null space of A are orthogonal to vectors in the row space of A, relative to the usual Euclidean product.

Both of these remarks follow easily from the very definition of matrix multiplication. For example, if the vector x is in the null space of A then Ax = 0, and thus the inner product of x with the rows of A must be zero. Since these rows span the row space of A, remark 2) follows. An immediate consequence is that only the zero vector can be common to both the null space and row space of A, since 2) requires such a vector to be orthogonal to itself.
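As a quick illustration (not part of the original note), the following NumPy sketch checks both observations numerically for one arbitrarily chosen integer matrix; the particular matrix, the random seed, and the 1e-10 rank tolerance are assumptions made only for the demonstration.

```python
# Minimal numerical sketch of observations 1) and 2) for one test matrix.
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 6)).astype(float)   # an arbitrary 4 x 6 real matrix

# Observation 1): Ax is the linear combination of the columns of A whose
# coefficients are the entries of x.
x = rng.standard_normal(6)
combo = sum(x[j] * A[:, j] for j in range(A.shape[1]))
assert np.allclose(A @ x, combo)

# Observation 2): null-space vectors are orthogonal to the rows of A, hence to
# the whole row space.  A null-space basis is read off from the SVD: right
# singular vectors belonging to (numerically) zero singular values.
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))            # numerical rank of A
null_basis = Vt[r:]                   # rows span the null space of A
assert np.allclose(A @ null_basis.T, 0)
assert np.allclose(null_basis @ A.T, 0)   # inner products with every row of A vanish
print("observations 1) and 2) verified numerically")
```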
Given an m by n matrix A, let the vectors x_1, x_2, ..., x_r in R^n form a basis for the row space of A. Then the r vectors Ax_1, Ax_2, ..., Ax_r are in the column space of A and, further, we claim they are linearly independent. For, if c_1 Ax_1 + c_2 Ax_2 + ... + c_r Ax_r = 0 for some real scalars c_1, c_2, ..., c_r, then A(c_1 x_1 + c_2 x_2 + ... + c_r x_r) = 0 and the vector v = c_1 x_1 + c_2 x_2 + ... + c_r x_r would be in the null space of A. But v is also in the row space of A, since it is a linear combination of basis elements. So, v is the zero vector, and the linear independence of x_1, x_2, ..., x_r guarantees that c_1 = c_2 = ... = c_r = 0. The existence of r linearly independent vectors in the column space requires that r ≤ column rank A. Since r is the row rank of A, we have arrived at the desired inequality (1).
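A minimal numerical sketch of this argument, again an illustration added here rather than anything from the note: for one randomly chosen matrix it extracts a row-space basis from the SVD, applies A to each basis vector, and confirms that the images are linearly independent, giving r ≤ column rank A. The test matrix and tolerance are arbitrary choices.

```python
# Sketch of the argument: images of a row-space basis under A are independent.
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(-2, 3, size=(5, 7)).astype(float)

# Right singular vectors with nonzero singular values give a row-space basis.
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))            # r = row rank of A
row_basis = Vt[:r]                    # rows are x_1, ..., x_r

images = A @ row_basis.T              # columns are A x_1, ..., A x_r
assert np.linalg.matrix_rank(images) == r   # the images are linearly independent

col_rank = np.linalg.matrix_rank(A)   # dimension of the column space
assert r <= col_rank                  # inequality (1): row rank <= column rank
print(f"row rank = {r}, column rank = {col_rank}")
```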
This approach also yields additional information. Let y_1, y_2, ..., y_k form a basis for the null space of A. Since the row space of A and the null space of A intersect trivially, it follows that the set {x_1, x_2, ..., x_r, y_1, y_2, ..., y_k} is linearly independent. Further, if z is a vector in R^n then Az is in the column space of A and hence expressible as a linear combination of basis elements; since row rank and column rank are now known to be equal, the r linearly independent vectors Ax_1, Ax_2, ..., Ax_r form such a basis of the column space. Thus, we can write Az = Σ_{j=1}^{r} d_j Ax_j, for scalars d_j. But then the vector z - Σ_{j=1}^{r} d_j x_j is in the null space of A and can be written as a linear combination of y_1, y_2, ..., y_k. Thus z can be expressed as a linear combination of the vectors in the set {x_1, x_2, ..., x_r, y_1, y_2, ..., y_k}, and this set then forms a basis for R^n. A dimension count yields r + k = n, giving the rank and nullity theorem without use of the echelon form.
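The dimension count can also be checked numerically. The sketch below is an added illustration with an arbitrary test matrix: it takes a row-space basis consisting of rows of A, a null-space basis from the SVD, and confirms that together they form a basis of R^n, so r + k = n.

```python
# Sketch of the dimension count: row-space basis plus null-space basis spans R^n.
import numpy as np

rng = np.random.default_rng(2)
A = rng.integers(-2, 3, size=(4, 6)).astype(float)
m, n = A.shape

# x_1, ..., x_r: a maximal linearly independent set of rows of A (row-space basis).
rows = []
for row in A:
    if np.linalg.matrix_rank(np.array(rows + [row])) == len(rows) + 1:
        rows.append(row)
row_basis = np.array(rows)
r = row_basis.shape[0]

# y_1, ..., y_k: a null-space basis, read off from the SVD of A.
U, s, Vt = np.linalg.svd(A)
null_basis = Vt[int(np.sum(s > 1e-10)):]
k = null_basis.shape[0]

combined = np.vstack([row_basis, null_basis])   # the r + k candidate basis vectors
assert np.linalg.matrix_rank(combined) == n     # independent, hence a basis of R^n
assert r + k == n                               # the rank and nullity theorem
print(f"r + k = {r} + {k} = {n} = n")
```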
Note that the reliance on orthogonality in the arguments above is elementary and does not require the Gram-Schmidt process. The ideas used here can be readily applied to complex matrices with minor modifications. The hermitian inner product is used instead in the complex vector space C^n, as is the hermitian transpose. Observation 2) would note then that vectors in the null space of A are orthogonal to those in the row space of A.
The row and column rank theorem is a well-known result that is valid for matrices over arbitrary fields. The notion of orthogonal complement can be generalized using linear functionals and dual spaces, and the general structure of the arguments here can then be carried over to arbitrary fields. The text [1, pp. 97 ff.] contains such an approach.
Many authors base their discussion of rank on the echelon form. The fact that the non-zero rows of the echelon form are a basis for the row space, or that columns in the echelon form containing lead ones can be used to identify a basis for the column space in the original matrix A, are central to such a development of rank. Results such as these follow easily if it is established independently that row rank and column rank must be equal.
The author would like to thank the referee for several valuable suggestions.
REFERENCE
1. Kenneth Hoffman and Ray Kunze, Linear Algebra, 2nd edition, Prentice-Hall, Englewood Cliffs, NJ, 1971.
A One-Sentence Proof That √2 Is Irrational

DAVID M. BLOOM
Brooklyn College of CUNY
Brooklyn, NY 11210
If √2 were rational, say √2 = m/n in lowest terms, then also √2 = (2n - m)/(m - n) in lower terms, giving a contradiction.

(The three verifications needed, namely that the second denominator is less than the first and still positive, and that the two fractions are equal, are straightforward.)
The argument is not original; it's the algebraic version of a geometric argument given in [1, p. 84], and it was presented (in slightly different form) by Ivan Niven at a lecture in 1985. Note, though, that the algebraic argument, unlike the geometric, easily adapts to an arbitrary √k where k is any positive integer that is not a perfect square. Indeed, let j be the integer such that j < √k < j + 1. If we had √k = m/n in lowest terms (m, n ∈ Z+), then also √k = (kn - jm)/(m - jn) in lower terms, a contradiction.
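The algebra behind this descent step can be checked symbolically. The SymPy sketch below is an illustration added here, not part of Bloom's note; it assumes √k = m/n with j < √k < j + 1 and verifies that the new fraction equals √k and that the new denominator is n(√k - j), which lies strictly between 0 and n under that assumption.

```python
# Symbolic check of the descent step for sqrt(k):
# if sqrt(k) = m/n and j < sqrt(k) < j + 1, then sqrt(k) = (k*n - j*m)/(m - j*n),
# and the new denominator m - j*n = n*(sqrt(k) - j) is strictly between 0 and n.
import sympy as sp

k, n, j = sp.symbols('k n j', positive=True)
m = sp.sqrt(k) * n                      # the assumed equality sqrt(k) = m/n

new_fraction = (k * n - j * m) / (m - j * n)
assert new_fraction.equals(sp.sqrt(k))  # the two fractions are equal

new_denominator = sp.factor(m - j * n)  # n*(sqrt(k) - j)
print(new_denominator)                  # 0 < this < n whenever j < sqrt(k) < j + 1
```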
REFERENCE
1. H. Eves, An Introduction to the History of Mathematics, 6th edition, W. B. Saunders Co., Philadelphia, 1990.