LECTURE 1: VECTORS
Reference: It is a good idea to start by having a look at Pemberton & Rau Section 1.3,
especially the discussion of Gaussian elimination. The main reference for this lecture is
Pemberton and Rau, Section 11.1.
A vector is a list of numbers written as a column. The entries of a vector are called its
components. Vectors are usually denoted by lower case bold letters a,b,x,y and so on.
The number in the ith position of the vector a is denoted by a_i and is referred to as the ith
component of a.
A vector with n components is called an n-vector and we shall use the symbol R^n to mean
the set of all n-vectors, with real numbers as components.
The term scalar is used to denote a member of the number system from which
components of vectors are drawn. For our purposes a scalar is the same as a real number.
Two vectors are equal if and only if they have the same number of components and
corresponding components are equal. Thus, if x and y are both n-vectors, then
x = y
if and only if
x_i = y_i for i = 1, 2, ..., n.
We therefore see that a single vector equation is equivalent to n scalar equations.
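As a concrete illustration (not from the text), here is a minimal Python sketch of this equality test, representing a vector as a plain list of components; the helper name vectors_equal is ours:

```python
def vectors_equal(x, y):
    # Two vectors are equal iff they have the same number of components
    # and corresponding components are equal: one vector equation
    # packs n scalar equations x_i == y_i.
    return len(x) == len(y) and all(xi == yi for xi, yi in zip(x, y))

print(vectors_equal([1, 2, 3], [1, 2, 3]))  # True
print(vectors_equal([1, 2], [1, 2, 0]))     # False: different lengths
```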
Vector arithmetic
Addition of two vectors is defined when and only when they have the same number of
components, and is performed component by component.
Multiplication by a scalar is also performed component by component.
The vector (-1)a is often denoted by -a.
We can represent members of R² by points in the plane. The operations of addition and
multiplication by scalars are interpreted geometrically as follows. To obtain a+b from a
and b, complete the parallelogram. To obtain 2a from a, stretch the line from the origin
to a by a factor of 2. To obtain -b from b, reflect b in the origin.
The two kinds of operation are put together in the obvious way. In particular, subtraction
is defined by letting
a - b = a + (-b)
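The arithmetic above can be sketched in Python, treating a vector as a list of components; the helper names add, scale and sub are illustrative, not from the text:

```python
def add(a, b):
    # Addition is defined only for vectors with the same number of
    # components, and is performed component by component.
    assert len(a) == len(b)
    return [ai + bi for ai, bi in zip(a, b)]

def scale(alpha, a):
    # Multiplication by a scalar, component by component.
    return [alpha * ai for ai in a]

def sub(a, b):
    # Subtraction is defined by a - b = a + (-1)b.
    return add(a, scale(-1, b))

print(add([1, 2], [3, 4]))   # [4, 6]
print(scale(2, [1, 2]))      # [2, 4]
print(sub([3, 4], [1, 1]))   # [2, 3]
```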
The zero n-vector, whose components are all zero, is denoted by 0_n, or simply 0 if there is
no ambiguity about n. Addition and multiplication by scalars obey laws similar to those
of ordinary arithmetic. In particular, we have:
the commutative law of vector addition: a + b = b + a
the associative law of vector addition: (a + b) + c = a + (b + c)
Linear dependence
A linear combination of two vectors a and b in R^n is a vector of the form αa + βb, where
α and β are scalars; a linear combination of four vectors a, b, c, d is a vector of the form
αa + βb + γc + δd, where α, β, γ and δ are scalars; and so on.
We say that k n-vectors b_1, b_2, ..., b_k are linearly dependent if it is possible to express
one of them as a linear combination of the others.
We say that k n-vectors b_1, b_2, ..., b_k are linearly independent if they are not linearly
dependent; thus b_1, b_2, ..., b_k are linearly independent if none of the vectors can be
expressed as a linear combination of the others.
A simple criterion for linear dependence is the following: the vectors b_1, b_2, ..., b_k are
linearly dependent if and only if there exist scalars λ_1, λ_2, ..., λ_k, not all zero, such that
λ_1 b_1 + λ_2 b_2 + ... + λ_k b_k = 0.
This criterion provides a simple way of testing whether a given set of vectors is linearly
dependent or independent. Start with a relation of the above form. If you can show that
the only scalars satisfying it are λ_1 = λ_2 = ... = λ_k = 0, then the vectors are linearly
independent; if you can find λ_1, λ_2, ..., λ_k, not all zero, satisfying it, then the vectors
are linearly dependent.
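One way to carry out this test in practice is Gaussian elimination (the topic recommended as background at the start of this lecture): the vectors are independent exactly when the matrix having them as rows has rank k. The following pure-Python sketch assumes this rank-based formulation; the helper names row_rank and linearly_independent are ours:

```python
def row_rank(rows, eps=1e-12):
    # Gaussian elimination on a copy of the matrix; the rank is the
    # number of pivots found.
    m = [list(map(float, row)) for row in rows]
    nrows, ncols = len(m), len(m[0]) if m else 0
    pivots = 0
    for col in range(ncols):
        # Find a row at or below the current pivot row with a nonzero entry.
        sel = next((r for r in range(pivots, nrows) if abs(m[r][col]) > eps), None)
        if sel is None:
            continue
        m[pivots], m[sel] = m[sel], m[pivots]
        # Eliminate this column from all rows below the pivot.
        for r in range(pivots + 1, nrows):
            f = m[r][col] / m[pivots][col]
            for c in range(col, ncols):
                m[r][c] -= f * m[pivots][c]
        pivots += 1
        if pivots == nrows:
            break
    return pivots

def linearly_independent(vectors):
    # b_1, ..., b_k are independent iff the only solution of
    # l_1 b_1 + ... + l_k b_k = 0 is l_1 = ... = l_k = 0, which holds
    # iff the matrix with the b_j as rows has rank k.
    return row_rank(vectors) == len(vectors)

print(linearly_independent([[1, 0], [0, 1]]))  # True
print(linearly_independent([[1, 2], [2, 4]]))  # False: b_2 = 2 b_1
```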
Exercises: Pemberton and Rau 11.1.1-11.1.4