Propositional and Mathematical Statements:

Connectives and Truth Tables: Connectives such as AND (∧), OR (∨), NOT (¬), IMPLIES (→), and IF AND ONLY IF (↔) modify and combine propositional statements. Their behavior is summarized in a truth table:

P | Q | P ∧ Q | P ∨ Q | ¬P | P → Q | P ↔ Q
T | T |   T   |   T   |  F |   T   |   T
T | F |   F   |   T   |  F |   F   |   F
F | T |   F   |   T   |  T |   T   |   F
F | F |   F   |   F   |  T |   T   |   T

1. Inverse: If the original statement is "if P then Q" (symbolically P → Q), then the inverse is "if not P then not Q" (¬P → ¬Q). We negate both the hypothesis and the conclusion.
2. Converse: For the original statement "if P then Q" (P → Q), the converse is "if Q then P" (Q → P). The hypothesis and the conclusion are swapped.
3. Contrapositive: The contrapositive of "if P then Q" (P → Q) is "if not Q then not P" (¬Q → ¬P). Both the hypothesis and the conclusion are negated and then swapped.

It is worth noting that an implication and its contrapositive always have the same truth value (if one is true, so is the other; if one is false, so is the other). However, an implication and its converse, or an implication and its inverse, do not necessarily share the same truth value.

Rules of Logical Equivalence:
Identity laws: (P ∧ T) ≡ P and (P ∨ F) ≡ P
Domination laws: (P ∧ F) ≡ F and (P ∨ T) ≡ T
Negation laws: (P ∨ ¬P) ≡ T and (P ∧ ¬P) ≡ F
Double Negation law: (¬¬P) ≡ P
Idempotent laws: (P ∧ P) ≡ P and (P ∨ P) ≡ P
Commutative laws: (P ∧ Q) ≡ (Q ∧ P) and (P ∨ Q) ≡ (Q ∨ P)
Associative laws: (P ∧ (Q ∧ R)) ≡ ((P ∧ Q) ∧ R) and (P ∨ (Q ∨ R)) ≡ ((P ∨ Q) ∨ R)
Distributive laws: (P ∧ (Q ∨ R)) ≡ ((P ∧ Q) ∨ (P ∧ R)) and (P ∨ (Q ∧ R)) ≡ ((P ∨ Q) ∧ (P ∨ R))
De Morgan's laws: ¬(P ∧ Q) ≡ (¬P ∨ ¬Q) and ¬(P ∨ Q) ≡ (¬P ∧ ¬Q)
Absorption laws: (P ∨ (P ∧ Q)) ≡ P and (P ∧ (P ∨ Q)) ≡ P
Material implication: (P → Q) ≡ (¬P ∨ Q)

Rules of Inference:
Modus Ponens: P → Q, P ⊢ Q
Modus Tollens: P → Q, ¬Q ⊢ ¬P
Hypothetical Syllogism: P → Q, Q → R ⊢ P → R
Disjunctive Syllogism: P ∨ Q, ¬P ⊢ Q
Addition: P ⊢ P ∨ Q
Simplification: P ∧ Q ⊢ P and P ∧ Q ⊢ Q
Conjunction: P, Q ⊢ P ∧ Q
Resolution: (P ∨ Q), (¬P ∨ R) ⊢ (Q ∨ R)

Predicates and Quantifiers:
Predicates: Predicates are functions that take one or more variables and produce a truth value. For instance, P(x) might denote "x is an even number."
Quantifiers:
Universal Quantifier (∀): Denotes "for all" or "for every." For example, ∀x (x > 0) means "for all x, x is greater than 0."
Existential Quantifier (∃): Denotes "there exists." For instance, ∃x (x > 5) means "there exists an x greater than 5."
Theorems:
Universal Instantiation: From ∀x P(x), deduce P(c) for a particular element c.
Existential Instantiation: From ∃x P(x), deduce P(c) for some chosen element c.
Universal Generalization: If P(a) holds for an arbitrary element a in the domain, then ∀x P(x) holds.
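Logical equivalences such as De Morgan's laws and the implication/contrapositive relationship can be checked mechanically by enumerating every row of the truth table. Below is a minimal Python sketch of that idea; the helper names implies and equivalent are illustrative choices, not part of the notes above.

```python
from itertools import product

def implies(p, q):
    # P -> Q is false only when P is true and Q is false
    return (not p) or q

def equivalent(f, g, n_vars):
    # Two formulas are logically equivalent if they agree on every
    # combination of truth values of their n_vars variables.
    return all(f(*vals) == g(*vals) for vals in product([True, False], repeat=n_vars))

# De Morgan's law: not (P and Q)  is equivalent to  (not P) or (not Q)
print(equivalent(lambda p, q: not (p and q),
                 lambda p, q: (not p) or (not q), 2))     # True

# An implication and its contrapositive always agree...
print(equivalent(lambda p, q: implies(p, q),
                 lambda p, q: implies(not q, not p), 2))  # True

# ...but an implication and its converse need not.
print(equivalent(lambda p, q: implies(p, q),
                 lambda p, q: implies(q, p), 2))          # False
```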
Set Theory:

Set Identities:
Commutative Laws: A ∪ B = B ∪ A and A ∩ B = B ∩ A
Associative Laws: (A ∪ B) ∪ C = A ∪ (B ∪ C) and (A ∩ B) ∩ C = A ∩ (B ∩ C)
Distributive Laws: A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C) and A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
Identity Laws: A ∪ ∅ = A and A ∩ U = A
Complement Laws: A ∪ A' = U and A ∩ A' = ∅

Relations:
Relation Properties:
Reflexivity: for every x, (x, x) belongs to the relation.
Symmetry: if (a, b) belongs to the relation, then (b, a) also belongs.
Transitivity: if (a, b) and (b, c) belong to the relation, then (a, c) also belongs.
Equivalence Relation: a relation that is reflexive, symmetric, and transitive.

Functions: Functions map elements from one set (the domain) to another (the codomain). Different types of functions include:
Injective (One-to-One): each element of the domain maps to a distinct element in the codomain.
Surjective (Onto): every element in the codomain has at least one corresponding element in the domain.
Bijective: combines the properties of injective and surjective functions.

Summation and Its Different Formulas:
Summation is a mathematical operation that represents the addition of a sequence of numbers or terms. It is symbolized by the Greek letter sigma (Σ).
Basic Summation Notation: Σ_{i=m}^{n} f(i) denotes the sum of the terms f(i) from i = m to i = n.
Here are some key summation formulas and properties:
Arithmetic Series: the sum of an arithmetic series a_1, a_2, a_3, …, a_n with n terms, first term a_1, and common difference d is
S_n = (n/2)(a_1 + a_n) = (n/2)(2a_1 + (n − 1)d)
Geometric Series: the sum of a geometric series a_1, a_2, a_3, …, a_n with n terms, first term a_1, and common ratio r is
S_n = a_1(r^n − 1)/(r − 1) for r ≠ 1
Summation Properties:
Linearity: Σ_{i=m}^{n} (c·f(i) + g(i)) = c·Σ_{i=m}^{n} f(i) + Σ_{i=m}^{n} g(i)
Changing the Index of Summation: re-indexing a sum (for example, substituting j = i − m) does not change its value.
Splitting a Sum: Σ_{i=m}^{n} f(i) = Σ_{i=m}^{k} f(i) + Σ_{i=k+1}^{n} f(i) for m ≤ k < n
Telescoping Sums: sums in which most terms cancel one another, leaving only a few remaining terms; for example, Σ_{i=1}^{n} (f(i+1) − f(i)) = f(n+1) − f(1).
Arithmetic Mean of a Summation: (1/n) Σ_{i=1}^{n} f(i) represents the average of the terms.
Double and Triple Summations: double (ΣΣ) and triple (ΣΣΣ) summations involve nested sums. For example:
Double Summation: Σ_{i=1}^{m} Σ_{j=1}^{n} f(i, j)
Triple Summation: Σ_{i=1}^{m} Σ_{j=1}^{n} Σ_{k=1}^{p} f(i, j, k)

Arithmetic:
GCD (Greatest Common Divisor) and LCM (Least Common Multiple):
GCD: gcd(a, b) is the largest integer that divides both a and b without leaving a remainder.
LCM: lcm(a, b) is the smallest positive integer that is divisible by both a and b.
For example, the GCD of 8 and 12 can be found from their prime factorizations:
Factors of 8: 2^3
Factors of 12: 2^2 × 3
Common factors: 2^2 = 4
Greatest Common Divisor: 4
Modulo Arithmetic: modulo arithmetic involves operations such as a mod b, which returns the remainder when a is divided by b. For instance, 17 mod 5 = 2 means that dividing 17 by 5 leaves a remainder of 2.
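The series formulas and the GCD/LCM definitions above are easy to sanity-check numerically. The following Python sketch compares the closed-form arithmetic and geometric series formulas against direct summation and computes gcd/lcm with the standard library; the variable names and the specific numbers are illustrative, not taken from the notes.

```python
import math

# Arithmetic series: a1, a1 + d, ..., a1 + (n - 1)d
a1, d, n = 3, 4, 10
terms = [a1 + i * d for i in range(n)]
closed_form = n * (2 * a1 + (n - 1) * d) / 2     # S_n = (n/2)(2a1 + (n - 1)d)
print(sum(terms), closed_form)                    # 210 210.0

# Geometric series: a1, a1*r, ..., a1*r^(n-1), valid for r != 1
a1, r, n = 2, 3, 8
terms = [a1 * r**i for i in range(n)]
closed_form = a1 * (r**n - 1) / (r - 1)           # S_n = a1 (r^n - 1) / (r - 1)
print(sum(terms), closed_form)                    # 6560 6560.0

# GCD and LCM (math.lcm requires Python 3.9+): gcd(8, 12) = 4, lcm(8, 12) = 24
print(math.gcd(8, 12), math.lcm(8, 12))           # 4 24

# Modulo arithmetic: 17 mod 5 = 2
print(17 % 5)                                     # 2
```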
Matrix Algebra

Zero Matrix: a zero matrix, denoted by O, is a matrix in which all the elements are zero.
Example:
O = [ 0 0 ]
    [ 0 0 ]

Transpose of a Matrix: the transpose of a matrix is obtained by interchanging its rows and columns. It is denoted by A^T, where A is the original matrix.
Example: if
A = [ 1 2 3 ]
    [ 4 5 6 ]
then
A^T = [ 1 4 ]
      [ 2 5 ]
      [ 3 6 ]

Symmetric Matrix: a symmetric matrix is a square matrix that is equal to its transpose, i.e., the original matrix is the same as the transposed matrix: A = A^T.
Example:
A = [ 1 2 ]
    [ 2 3 ]
A is symmetric because A = A^T.

Vector Dot Product: the dot product of two vectors is a scalar equal to the product of their magnitudes and the cosine of the angle between them; componentwise, it is the sum of the products of corresponding entries.
Example: if v = (1, 2, 3) and u = (4, 5, 6), then v · u = 1 × 4 + 2 × 5 + 3 × 6 = 32.

Identity Matrix: the identity matrix, denoted by I, is a square matrix with ones (1) on its main diagonal and zeros (0) elsewhere.
Example:
I = [ 1 0 ]
    [ 0 1 ]
When multiplied by another matrix, the identity matrix leaves that matrix unchanged.

Determinant of a Matrix: the determinant of a square matrix is a scalar value calculated from its elements. It describes certain properties of the matrix and is denoted by det(A) or |A|.
Example: for a matrix
A = [ a b ]
    [ c d ]
the determinant is det(A) = ad − bc.

Matrix Operations

Matrix Addition: matrix addition combines two matrices of the same order (dimension). To add two matrices, add corresponding elements.
2x2 Matrix Example:
A = [ 1 2 ]    B = [ 3 4 ]
    [ 3 4 ]        [ 5 6 ]
A + B = [ 1+3 2+4 ] = [ 4  6 ]
        [ 3+5 4+6 ]   [ 8 10 ]
3x3 Matrix Example:
A = [ 1 2 3 ]    B = [ 9 8 7 ]
    [ 4 5 6 ]        [ 6 5 4 ]
    [ 7 8 9 ]        [ 3 2 1 ]
A + B = [ 10 10 10 ]
        [ 10 10 10 ]
        [ 10 10 10 ]

Matrix Subtraction: similar to matrix addition, but subtracts corresponding elements.
2x2 Matrix Example: with the same A and B as above,
A − B = [ 1−3 2−4 ] = [ −2 −2 ]
        [ 3−5 4−6 ]   [ −2 −2 ]
3x3 Matrix Example: with the same 3x3 A and B,
A − B = [ 1−9 2−8 3−7 ]   [ −8 −6 −4 ]
        [ 4−6 5−5 6−4 ] = [ −2  0  2 ]
        [ 7−3 8−2 9−1 ]   [  4  6  8 ]

Scalar Multiplication: multiplies every element of a matrix by a scalar (single number).
2x2 Matrix Example: with k = 2,
kA = [ 1×2 2×2 ] = [ 2 4 ]
     [ 3×2 4×2 ]   [ 6 8 ]
3x3 Matrix Example: with k = 2,
kA = [  2  4  6 ]
     [  8 10 12 ]
     [ 14 16 18 ]

Matrix Multiplication: combines two matrices to produce a new matrix. The number of columns in the first matrix must equal the number of rows in the second matrix for the product to be defined; each entry of the product is the dot product of a row of the first matrix with a column of the second.
2x2 Matrix Example:
A = [ 1 2 ]    B = [ 3 4 ]
    [ 5 6 ]        [ 5 6 ]
AB = [ 1×3+2×5  1×4+2×6 ] = [ 13 16 ]
     [ 5×3+6×5  5×4+6×6 ]   [ 33 40 ]
3x3 Matrix Example: with A and B as in the addition example,
AB = [ 1×9+2×6+3×3  1×8+2×5+3×2  1×7+2×4+3×1 ]   [  30  24 18 ]
     [ 4×9+5×6+6×3  4×8+5×5+6×2  4×7+5×4+6×1 ] = [  84  69 54 ]
     [ 7×9+8×6+9×3  7×8+8×5+9×2  7×7+8×4+9×1 ]   [ 138 114 90 ]

Zero-One Matrix and Its Associated Operations: Join and Meet
A zero-one (Boolean) matrix has every entry equal to 0 or 1 and is commonly used to represent a relation, such as the ordering of a partially ordered set. The join of two zero-one matrices of the same size, A ∨ B, takes the OR of corresponding entries; the meet, A ∧ B, takes the AND of corresponding entries. These operations combine and compare the relationships between elements that the matrices represent.
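The matrix operations above translate directly into code. Here is a minimal pure-Python sketch (the function names are illustrative) that reproduces the worked 3x3 examples for addition, scalar multiplication, and multiplication, plus the transpose example.

```python
def mat_add(A, B):
    # Add corresponding elements of two matrices with the same dimensions.
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mul(k, A):
    # Multiply every element of A by the scalar k.
    return [[k * a for a in row] for row in A]

def mat_mul(A, B):
    # Entry (i, j) of AB is the dot product of row i of A and column j of B;
    # the number of columns of A must equal the number of rows of B.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    # Interchange rows and columns.
    return [list(col) for col in zip(*A)]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
B = [[9, 8, 7], [6, 5, 4], [3, 2, 1]]

print(mat_add(A, B))      # [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
print(scalar_mul(2, A))   # [[2, 4, 6], [8, 10, 12], [14, 16, 18]]
print(mat_mul(A, B))      # [[30, 24, 18], [84, 69, 54], [138, 114, 90]]
print(transpose([[1, 2, 3], [4, 5, 6]]))  # [[1, 4], [2, 5], [3, 6]]
```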