About
Definition flashcards for M1P2 - Algebra I, Imperial College London. These flashcards and the accompanying LaTeX are at bit.ly/imperialmaths. For brevity, important detail is often omitted from proofs - you should make sure you can do them fully!

Subspace Sum & Intersection (Definition)
U + W = {u + w : u ∈ U, w ∈ W} = Span{u_1, ..., u_r, w_1, ..., w_s}
U ∩ W = {v : v ∈ U and v ∈ W}
Solve v = Σ a_i u_i = Σ b_i w_i to find U ∩ W.

Theorem. dim(U + W) = dim U + dim W − dim(U ∩ W).
Proof. Let {v_1, ..., v_m} be a basis for U ∩ W; extend it to bases {v_1, .., v_m, u_1, .., u_{r−m}} of U and {v_1, .., v_m, w_1, .., w_{s−m}} of W. Then B = {v_1, .., v_m, u_1, .., u_{r−m}, w_1, .., w_{s−m}} spans U + W. Suppose Σ α_i v_i + Σ β_i u_i + Σ γ_i w_i = 0. Then Σ γ_i w_i = −Σ α_i v_i − Σ β_i u_i ∈ U ∩ W, so Σ γ_i w_i = Σ δ_i v_i. {v_1, .., v_m, w_1, .., w_{s−m}} is a basis =⇒ γ_i = δ_i = 0. Then Σ α_i v_i + Σ β_i u_i = 0, and {v_1, .., v_m, u_1, .., u_{r−m}} is a basis =⇒ α_i = β_i = 0. So B is a basis for U + W, and counting gives dim(U + W) = m + (r − m) + (s − m) = r + s − m.

Rank, Row-Rank, Column-Rank (Definition)
Row-rank is dim RSp(A); column-rank is dim CSp(A). Rank is either, since they are equal. RSp(A) = RSp(A_ech), so row-rank can be read off the echelon form.

Theorem. Row-rank = column-rank.
Proof. Write r_i = (a_{i1}, a_{i2}, .., a_{in}) for the rows of A and c_j = (a_{1j}, a_{2j}, .., a_{mj})^T for the columns. Let {v_1, ..., v_k} be a basis for RSp(A), so r_i = λ_{i1} v_1 + ... + λ_{ik} v_k. If v_i = (b_{i1}, ..., b_{in}), then a_{ij} = λ_{i1} b_{1j} + ... + λ_{ik} b_{kj}, so
c_j = (λ_{11} b_{1j} + ... + λ_{1k} b_{kj}, ..., λ_{m1} b_{1j} + ... + λ_{mk} b_{kj})^T,
a linear combination of the k vectors (λ_{1l}, ..., λ_{ml})^T, l = 1, .., k. Hence dim CSp(A) ≤ k = dim RSp(A). Applying the same argument to A^T, with RSp(A^T) = CSp(A) and CSp(A^T) = RSp(A), gives dim RSp(A) ≤ dim CSp(A). So dim RSp(A) = dim CSp(A).

Linear Transformation (Definition)
T : V → W such that:
1. T(v_1 + v_2) = T(v_1) + T(v_2) (preserves +)
2. T(λv) = λT(v) (preserves scalar ×)
Matrix transformations are always linear.

Proposition. If {v_1, ..., v_n} is a basis for V and w_1, ..., w_n ∈ W, then ∃! linear T : V → W s.t. T(v_i) = w_i.
Proof. If v = Σ λ_i v_i, let T(v) = Σ λ_i w_i. Then for u = Σ μ_i v_i and v = Σ λ_i v_i, T(u + v) = Σ (μ_i + λ_i) w_i = T(u) + T(v), so T preserves +. If π ∈ F, T(πv) = Σ πλ_i w_i = πT(v), so T preserves scalar ×. T is unique since a linear map is determined by its values on a basis.

Kernels & Images (Definition)
Ker(T) = {v ∈ V : T(v) = 0}
Im(T) = {T v : v ∈ V} (CSp(T) if T is a matrix)

Theorem. If {v_1, ..., v_n} is a basis for V, then Im(T) = Span{T v_1, ..., T v_n}.
Proof. ∀w ∈ Im(T), ∃v ∈ V with T(v) = w. Writing v = Σ λ_i v_i gives w = T v = Σ λ_i T v_i ∈ Span{T v_1, ..., T v_n}, so Im(T) ≤ Span{T v_1, ..., T v_n}. Conversely ∀i, T v_i ∈ Im(T), so Span{T v_1, ..., T v_n} ≤ Im(T).
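The row-rank = column-rank theorem can be checked numerically. Below is a minimal sketch (my own code, not part of the flashcards) that computes row-rank by reduction to echelon form, using RSp(A) = RSp(A_ech), with exact Fraction arithmetic:

```python
from fractions import Fraction

def rank(rows):
    """Row-rank via reduction to echelon form (RSp(A) = RSp(A_ech))."""
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0  # number of pivots found so far
    for c in range(len(M[0]) if M else 0):
        pivot = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]    # row 2 = 2 * row 1
At = [list(col) for col in zip(*A)]      # rows of A^T are columns of A
assert rank(A) == rank(At) == 2          # row-rank equals column-rank
```

The matrix A here is an arbitrary example; any matrix and its transpose should give the same rank.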
Rank-Nullity Theorem
For T : V → W: dim Im(T) + dim Ker(T) = dim V.
Proof. Let {u_1, ..., u_s} and {w_1, ..., w_r} be bases for Ker(T) and Im(T) respectively, and choose v_1, ..., v_r ∈ V such that T(v_i) = w_i. Claim: B = {u_1, .., u_s, v_1, .., v_r} is a basis for V (which gives the theorem).
Spans V: if v ∈ V then T v ∈ Im(T), so T v = Σ μ_i w_i. Let v̄ = Σ μ_i v_i; then T v̄ = Σ μ_i w_i = T v, so v − v̄ ∈ Ker(T), say v − v̄ = Σ λ_i u_i. Hence v = Σ μ_i v_i + Σ λ_i u_i ∈ Span B.
Linearly independent: suppose Σ λ_i u_i + Σ μ_i v_i = 0. Applying T gives Σ μ_i w_i = 0 =⇒ μ_i = 0. Then Σ λ_i u_i = 0 =⇒ λ_i = 0.

Vector with respect to basis B (Definition)
If v = Σ λ_i v_i for basis B = {v_1, ..., v_n}, then [v]_B = (λ_1, ..., λ_n)^T.

Matrix with respect to basis B (Definition)
If T v_i = Σ_j a_ji v_j for basis B = {v_1, ..., v_n}, then [T]_B = (a_ij), the n × n matrix whose ith column is [T v_i]_B.

Eigenvalues & Eigenvectors (Definition)
v is an eigenvector of T if v ≠ 0 and T v = λv. Then λ is v's eigenvalue.

Proposition. [T v]_B = [T]_B [v]_B.
Proof. Let [T]_B = (a_ij), B = {v_1, ..., v_n}, v = Σ λ_i v_i. Then T v = Σ_i λ_i T v_i = Σ_i λ_i Σ_j a_ji v_j = Σ_j (Σ_i λ_i a_ji) v_j, so [T v]_B = (Σ_i λ_i a_1i, ..., Σ_i λ_i a_ni)^T = [T]_B (λ_1, ..., λ_n)^T = [T]_B [v]_B.

Proposition.
1. The eigenvalues of T are the same as those of [T]_B.
2. The eigenvectors of T are the vectors v such that [v]_B is an eigenvector of [T]_B.

Proposition. [T]_B is diagonal iff every basis vector in B is an eigenvector for T.

Proposition (properties of the change of basis matrix P from B to C).
1. P is invertible, and P^{−1} is the change of basis matrix from C to B.
2. ∀v ∈ V, P[v]_C = [v]_B.

Proposition.
1. P = [T]_B, where T is the linear map with T(v_i) = w_i.
2. [T]_C = P^{−1} [T]_B P.
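The identity [T v]_B = [T]_B [v]_B can be sanity-checked on a concrete example (my own, not from the cards): take T : R² → R² with standard matrix A and a basis B whose vectors are the columns of Q, so that [v]_B = Q⁻¹v and [T]_B = Q⁻¹AQ.

```python
from fractions import Fraction as F

def matmul(X, Y):
    """Matrix product of nested-list matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    det = F(M[0][0] * M[1][1] - M[0][1] * M[1][0])
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

A = [[2, 1], [0, 3]]          # T in the standard basis (arbitrary example)
Q = [[1, 1], [1, 2]]          # basis B = {(1,1), (1,2)} as columns
v = [[5], [7]]

Qinv = inv2(Q)
T_B = matmul(matmul(Qinv, A), Q)     # [T]_B
v_B = matmul(Qinv, v)                # [v]_B
Tv_B = matmul(Qinv, matmul(A, v))    # [T v]_B
assert Tv_B == matmul(T_B, v_B)      # [T v]_B = [T]_B [v]_B
```

Exact Fractions avoid floating-point equality issues in the final comparison.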
Proof ([T]_B diagonal iff basis vectors are eigenvectors). For the standard basis e_1, ..., e_n, a matrix A is diagonal iff the e_i are eigenvectors of A. So [T]_B is diagonal iff all the e_i are eigenvectors of [T]_B. But e_i = 0v_1 + ... + 1v_i + ... + 0v_n in coordinates, i.e. e_i = [v_i]_B. So e_i is an eigenvector of [T]_B iff v_i is an eigenvector of T.

Proof (eigenvalues/eigenvectors of T and [T]_B). T v = λv ⇐⇒ [T v]_B = [λv]_B ⇐⇒ [T]_B [v]_B = λ[v]_B ⇐⇒ [v]_B is an eigenvector of [T]_B with eigenvalue λ.

Change of Basis Matrix, P (Definition)
If B = {v_1, ..., v_n}, C = {w_1, ..., w_n} and w_j = Σ_i λ_ij v_i, then P (from B to C) = (λ_ij). So the jth column of P is [w_j]_B.

Diagonalisable (Definition)
T is diagonalisable iff there is a basis of V every element of which is an eigenvector of T.

Proof (P invertible; [T]_C = P^{−1}[T]_B P). Let Q be the change of basis matrix from C to B, so Q[v]_B = [v]_C. Then P Q e_i = P Q[v_i]_B = P[v_i]_C = [v_i]_B = e_i =⇒ Q = P^{−1}. Also P^{−1}[T]_B P e_i = P^{−1}[T]_B P[w_i]_C = P^{−1}[T]_B [w_i]_B = P^{−1}[T w_i]_B = [T w_i]_C = [T]_C [w_i]_C = [T]_C e_i. Thus P^{−1}[T]_B P = [T]_C.

Proof (P = [T]_B for T(v_i) = w_i). We know [T]_B [v_i]_B = [T v_i]_B = [w_i]_B. Since P[v_i]_B = P e_i = [w_i]_B, the two matrices agree on e_1, ..., e_n, so P = [T]_B.

Proof (P[v]_C = [v]_B). If v ∈ V, write v = Σ a_i w_i, so [v]_C = (a_1, ..., a_n)^T = Σ a_i e_i. Then P[v]_C = Σ a_i P e_i = Σ a_i [w_i]_B = [Σ a_i w_i]_B = [v]_B.

Proposition. Let (G, ∗) be a group. Then:
1. G has exactly one identity.
2. Every element has exactly one inverse.
3. Left cancellation: x ∗ y = x ∗ z =⇒ y = z.
4. Right cancellation: x ∗ z = y ∗ z =⇒ x = y.
Proof. If e, f are identities, e ∗ f = f and e ∗ f = e, so e = f. If y, z are inverses for x, then z = (y ∗ x) ∗ z = y ∗ (x ∗ z) = y. If x ∗ y = x ∗ z, let w = x^{−1}; then w ∗ (x ∗ y) = w ∗ (x ∗ z) =⇒ (w ∗ x) ∗ y = (w ∗ x) ∗ z =⇒ y = z. Right cancellation is similar.

Group (Definition)
A set G with a binary operation ∗ (a function G × G → G) such that:
1. Associativity: (a ∗ b) ∗ c = a ∗ (b ∗ c)
2. Identity: ∃e ∈ G s.t. x ∗ e = e ∗ x = x for all x
3. Inverses: ∀x ∈ G, ∃y ∈ G s.t. x ∗ y = y ∗ x = e
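The change of basis facts above can be made concrete (my own example, not from the cards): with B the standard basis of R², the jth column of P is [w_j]_B = w_j, and P[v]_C = [v]_B.

```python
w1, w2 = (1, 1), (1, 2)          # the basis C, written in B-coordinates
P = [[w1[0], w2[0]],
     [w1[1], w2[1]]]             # jth column of P is [w_j]_B

v_C = (3, 2)                     # v = 3*w1 + 2*w2, i.e. [v]_C = (3, 2)
v_B = tuple(3 * a + 2 * b for a, b in zip(w1, w2))   # v in B-coordinates

Pv_C = (P[0][0] * v_C[0] + P[0][1] * v_C[1],
        P[1][0] * v_C[0] + P[1][1] * v_C[1])
assert Pv_C == v_B == (5, 7)     # P [v]_C = [v]_B
```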
Abelian Group (Definition)
A group G such that ab = ba for all a, b ∈ G (a and b commute).

Permutation (Definition)
A bijective function f : X → X.

Symmetric Group (Definition)
Sym(X) is the set of all permutations of X (a group under composition).

Subgroup (Definition)
A subset of G which is itself a group under ∗, denoted H ≤ G.

Proposition (Subgroup Criteria). H ≤ G iff:
1. e ∈ H, where e is the identity of G
2. If a, b ∈ H, then ab ∈ H
3. If a ∈ H, then a^{−1} ∈ H.
Proof. Obvious.

Proposition.
1. (g^n)^{−1} = g^{−n}
2. g^m g^n = g^{m+n}
3. (g^m)^n = g^{mn}
Proof. Omitted.

Cyclic Subgroup (Definition)
⟨g⟩ = {g^n : n ∈ Z} ≤ G.

Cyclic Group (Definition)
G is cyclic if ∃ a generator, g ∈ G, s.t. ⟨g⟩ = G.

Order of g (Definition)
ord(g) = min k ∈ Z^+ s.t. g^k = e (if no such k exists, ord(g) = ∞).

Proposition. |⟨g⟩| = ord(g) (ord(g) = ∞ iff ⟨g⟩ is infinite).
Proof. If ord(g) = ∞: suppose g^n = g^m with wlog n > m, and let k = n − m. Then g^n = g^{m+k} = g^m g^k =⇒ g^k = e, contradicting ord(g) = ∞. So the powers g^n are distinct and ⟨g⟩ is infinite. If ord(g) = k: write n = pk + q, 0 ≤ q ≤ k − 1, so g^n = g^{pk+q} = (g^k)^p g^q = g^q ∈ {g^0, ..., g^{k−1}}, hence ⟨g⟩ = {g^0, ..., g^{k−1}}. If g^i = g^j with 0 ≤ i < j ≤ k − 1, let l = j − i; then g^i = g^{i+l} =⇒ g^l = e with 0 < l < k, contradicting ord(g) = k. So |⟨g⟩| = k.

Proposition. Every permutation in S_n can be written as a product of disjoint cycles.
Proof. Omitted.

Cycle Shape (Definition)
The sequence of cycle lengths in descending order when the permutation is written in disjoint cycle notation, including 1-cycles.

Proposition. If a, b commute, then:
1. a^{−1} b = b a^{−1}
2. a^i b^j = b^j a^i
3. (ab)^k = a^k b^k
Proof. Easy.

Proposition. If ord(g) = d, then for all k ∈ Z, g^k = e ⇐⇒ d | k.
Proof. By the division algorithm, k = xd + y with 0 ≤ y < d. Then g^k = g^{xd+y} = (g^d)^x g^y = g^y, so g^k = e ⇐⇒ g^y = e ⇐⇒ y = 0 (as 0 ≤ y < d = ord(g)) ⇐⇒ d | k.

Proposition. If f is a permutation with cycle shape (r_1, ..., r_k), then ord(f) = lcm(r_1, ..., r_k).
Proof. Easy.

Right Coset & Index (Definition)
A right coset is Hx = {hx : h ∈ H}. The index of H in G is |G|/|H|, the number of right cosets of H in G.

Residue Class of a modulo m (Definition)
[a]_m = {s ∈ Z : s ≡ a (mod m)}. ∀m ∈ N, [0]_m ≤ (Z, +), and the other classes are its cosets.
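The cycle-shape result ord(f) = lcm(r_1, ..., r_k) is easy to check by brute force; the sketch below (my own code, not from the cards) represents a permutation as a dict {i: f(i)}.

```python
from math import lcm

def cycle_shape(perm):
    """Cycle lengths in descending order, including 1-cycles."""
    seen, lengths = set(), []
    for start in perm:
        if start in seen:
            continue
        n, x = 0, start
        while x not in seen:
            seen.add(x)
            x = perm[x]
            n += 1
        lengths.append(n)
    return sorted(lengths, reverse=True)

def order(perm):
    """Smallest k >= 1 with perm^k = identity, by repeated composition."""
    k, p = 1, dict(perm)
    while any(p[i] != i for i in p):
        p = {i: perm[p[i]] for i in p}   # p becomes perm^(k+1)
        k += 1
    return k

f = {1: 2, 2: 3, 3: 1, 4: 5, 5: 4, 6: 6}   # (1 2 3)(4 5)(6) in S_6
assert cycle_shape(f) == [3, 2, 1]
assert order(f) == lcm(*cycle_shape(f)) == 6
```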
Lagrange's Theorem
For H ≤ G, |H| divides |G|.
Proof. Consider S = {Hx : x ∈ G}, the set of right cosets. Each x ∈ G lies in exactly one member of S, so |G| = Σ_{X∈S} |X|. Every coset has size |H|, so |G| = |S||H|.

Proposition. (Z_m, +) is a group.
Proof. Obvious.

Proposition. (Z_m^*, ×) is a group iff m is prime (Z_m^* = Z_m \ {[0]_m}).

Theorem. If 2^m − 1 is prime, then m is prime.
Proof. Suppose m is not prime, say m = ab with a, b > 1. Then
2^m − 1 = 2^{ab} − 1 = (2^a)^b − 1 = (2^a − 1)((2^a)^{b−1} + ... + 2^a + 1).
So 2^a − 1 is a non-trivial divisor of 2^m − 1, which is therefore not prime.

Fermat's Little Theorem
If p ∤ n, then n^{p−1} ≡ 1 (mod p).
Proof. p ∤ n =⇒ [n]_p ∈ Z_p^*. So ord([n]_p) divides |Z_p^*| = p − 1 by Lagrange's Theorem. Thus [n]_p^{p−1} = [1]_p.

Proposition. Let n = 2^p − 1. If q is a prime divisor of n, then q ≡ 1 (mod p).
Proof. q | n =⇒ q | 2^p − 1, so [2]_q^p = [1]_q. So ord([2]_q) | p, so it is 1 or p. Since 2 ≢ 1 (mod q), [2]_q ≠ [1]_q, so ord([2]_q) = p. So p divides |Z_q^*| = q − 1 by Lagrange's Theorem, i.e. q ≡ 1 (mod p).

Symmetry & Dihedral Group (Definition)
The symmetry group of S is H = {A ∈ GL_n(R) : A preserves S}, where A preserves S iff AS = {Av : v ∈ S} = S. D_{2n} is the symmetry group of P_n, the regular n-gon.

Ring (Definition)
A set R with + and × such that:
1. (R, +) is an abelian group
2. × is associative
3. × distributes over +
4. ∃ an identity for × (ring with unity)
5. × is commutative (commutative ring)

Polynomial Ring (Definition)
F[x] is the set of polynomials in x with coefficients from a field F, with the usual + and ×. The degree of f(x) is the highest power of x in f(x).

Unit (Definition)
A unit u is a multiplicatively invertible element of R, i.e. ∃w ∈ R such that uw = wu = 1. R^× is the set of units in R.

Proposition. (R^×, ×) is the unit group of R.
Proof. Easy from the group axioms.
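Fermat's Little Theorem, and the fact that (Z_m^*, ×) is only a group when m is prime, can both be checked by brute force (my own sketch, not from the cards):

```python
# For prime p, every n with p not dividing n satisfies n^(p-1) ≡ 1 (mod p).
for p in (2, 3, 5, 7, 11, 13):
    for n in range(1, p):                 # representatives of Z_p^*
        assert pow(n, p - 1, p) == 1

# For composite m, some non-zero residues have no multiplicative inverse,
# so (Z_m^*, x) fails the group axioms.
m = 6
no_inverse = [a for a in range(1, m)
              if all(a * b % m != 1 for b in range(1, m))]
assert no_inverse == [2, 3, 4]            # zero divisors mod 6
```

Three-argument `pow` does modular exponentiation, so this stays fast even for larger primes.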
Field (Definition)
F, a commutative ring such that F^× = F^* = F \ {0} (every non-zero element is a unit).

Zero Divisor & Integral Domain (Definition)
r is a zero divisor if ∃s ≠ 0 ∈ R such that rs = 0. A unit is not a zero-divisor. An integral domain (ID) is a ring with no zero-divisors (except 0). Every field is an ID.

Proposition. If a | b and b | a, then ∃ a unit u ∈ R such that b = au.
Proof. Trivial.

Irreducible (Definition)
If r is a non-zero non-unit in an ID R, then r is irreducible if there are no non-units s, t ∈ R with r = st. Otherwise it is reducible.

Ring Z[√d] (Definition)
Z[√d] = {x + y√d : x, y ∈ Z}, with the usual + and ×.

Norm Map (Definition)
N : Z[√d] → Z by N(x + y√d) = x² − dy².

Proposition. r is a unit iff N(r) = ±1.
Proof. Omitted.

Highest Common Factor (Definition)
c = hcf(a, b) iff:
1. c | a and c | b (so c is a common factor)
2. If d | a and d | b, then d | c.

Proposition. If c = hcf(a, b), then d = hcf(a, b) iff d = cu for a unit u.
Proof. Suppose c and d are both hcfs; then c | d and d | c, so by the proposition above d = cu for a unit u. Conversely let d = cu for a unit u; then d | a and d | b, and if e | a and e | b then e | c, so e | cu = d. Hence d = hcf(a, b).

Lemma (Division in F[x]). For f(x), g(x) ∈ F[x] with g ≠ 0, ∃q(x), r(x) ∈ F[x] s.t. f(x) = q(x)g(x) + r(x) with r = 0 or deg r < deg g.
Proof. Omitted.

Proposition. If a = bq + r, then d = hcf(a, b) iff d = hcf(b, r).
Proof. Omitted.

Euclidean Function & Domain (Definition)
A Euclidean function f : R^* → Z_{≥0} satisfies:
1. f(ab) ≥ f(a) for a, b ∈ R^*
2. For all a, b ∈ R with b ≠ 0, ∃q, r s.t. a = qb + r with r = 0 or f(r) < f(b).
A Euclidean Domain is an ID with such a function.

Proposition. f(a) = |N(a)| is a Euclidean function for Z[√d] if d ∈ {−2, −1, 2, 3}.
Proof. Omitted.

Bezout's Lemma
If d = hcf(a, b), then ∃s, t ∈ R s.t. as + bt = d.
Proof. Like M1F.

Unique Factorisation Domain (Definition)
An ID such that:
1. Every non-zero non-unit a can be written a = p_1 · · · p_s with each p_i irreducible
2. The factorisation is unique (up to reordering and units).

Proposition. In a UFD, p is irreducible =⇒ p is prime (a non-unit such that p | ab =⇒ p | a or p | b).
Proof. VERY LONG!
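Bezout's Lemma in the Euclidean domain Z can be made constructive (my own implementation, not from the cards): the extended Euclidean algorithm produces s, t with as + bt = hcf(a, b), using exactly the step hcf(a, b) = hcf(b, r) for a = bq + r.

```python
def ext_hcf(a, b):
    """Return (d, s, t) with d = hcf(a, b) and a*s + b*t = d."""
    if b == 0:
        return (a, 1, 0)
    q, r = divmod(a, b)          # a = b*q + r, so hcf(a, b) = hcf(b, r)
    d, s, t = ext_hcf(b, r)
    return (d, t, s - q * t)     # d = b*s + r*t = a*t + b*(s - q*t)

d, s, t = ext_hcf(240, 46)
assert d == 2 and 240 * s + 46 * t == 2
```

The returned coefficients are one valid Bezout pair; they are not unique (any (s + kb/d, t − ka/d) also works).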
Coprime (Definition)
a and b in an ID are coprime if 1 is a hcf of a and b.

Prime (Definition)
p, a non-zero non-unit, is prime iff p | ab =⇒ p | a or p | b.

Theorem. Euclidean Domain ⊂ UFD (proof in M2P2).
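In the UFD Z, "irreducible =⇒ prime" can be checked by brute force over a small range (my own sketch, not from the cards): the irreducibles of Z are the prime numbers (up to sign), and each satisfies the prime property.

```python
def is_irreducible(n):
    """n > 1 is irreducible in Z iff it has no non-unit factorisation."""
    return n > 1 and all(n % k for k in range(2, n))

for p in (x for x in range(2, 20) if is_irreducible(x)):
    for a in range(1, 30):
        for b in range(1, 30):
            if (a * b) % p == 0:
                assert a % p == 0 or b % p == 0   # p | ab => p | a or p | b
```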