Chapter 9 Vector Space

9.1 Introduction
Vectors have both magnitude and direction.
9.2 Vectors; Geometrical Representation

Fig. 1 Geometric representation of v.
Denoting the magnitude, or norm, of any vector v as ║v║, we have ║v║ = 8 for the vector v in Fig. 1.
Observe that the location of a vector is not specified, only
its magnitude and direction. Thus, the two unlabeled arrows in
Fig. 1 are equally valid alternative representations of v. That is
not to say that the physical effect of the vector will be entirely
independent of its position.
We say that two vectors are equal if and only if their lengths are identical and their directions are identical as well.
We now define the addition of any two vectors u and v. Placing the tail of v at the head of u, the sum, or resultant, u + v is defined as the arrow from the tail of u to the head of v. Reversing the order, v + u is as shown in Fig. 3b. Equivalently, we may place u and v tail to tail, as shown in Fig. 3c. Vector addition satisfies

u + v = v + u,   (1)
(u + v) + w = u + (v + w).   (2)
We define any vector of zero length to be a zero vector, denoted 0.
We define a negative inverse “-u” such that if u is any
nonzero vector, then -u is determined uniquely, as shown
in Fig. 4a; that is, it is of the same length as u but is
directed in the opposite direction.
We denote u + (-v) as u – v but emphasize that it is really
the addition of u and (-v), as shown in Fig. 4b.
Scalar multiplication
If α≠ 0 and u ≠ 0, then αu is a vector
whose length is │α│ times the length of u
and whose direction is the same as that of
u if α >0, and the opposite if α < 0; if α=
0 and/or u=0, then αu=0. The definition
is illustrated in Fig. 5. It follows from this
definition that scalar multiplication has the
following algebraic properties:
α(βu) = (αβ)u,   (4a)
(α + β)u = αu + βu,   (4b)
α(u + v) = αu + αv,   (4c)
1u = u,   (4d)
where α, β are any scalars and u, v are any vectors.
9.3 Introduction of Angle and Dot Product
The angle θ between two nonzero vectors u and v will be understood to mean the ordinary angle between the two vectors when they are arranged tail to tail as in Fig. 1. For definiteness, we choose θ to be the interior angle, as shown in Fig. 1, and angular measure will be understood to be in radians:
0 ≤ θ ≤ π.   (1)

Fig. 1 The angle θ between u and v.
We define the so-called dot product, u·v, as

u·v = ║u║║v║ cos θ   if u, v ≠ 0,   (2a)
u·v = 0   if u = 0 or v = 0.   (2b)
u.v =║u║║v║cosθ =(║v║)(║u║cosθ) is the length of
v times the length of the orthogonal projection of u on the
line of action of v.
Example 1. Work done by a force
In mechanics, the work W done when a body undergoes a linear displacement from an initial point A to a final point B, under the action of a constant force F (Fig. 3), is

W = F · AB.   (3)
An important special case of the dot product occurs when θ = π/2, in which case u·v = 0. Another important special case is v = u:

u·u = ║u║║u║ cos 0 = ║u║²   if u ≠ 0,
u·u = 0   if u = 0,   (5)

so that we have

║u║ = √(u·u).   (6)
9.4 n-Space
The 2-tuple (a1, a2) denotes the point P indicated in Fig. 1a,
where a1, a2 are the x, y coordinates, respectively. But it can
also serve to denote the vector OP in Fig. 1b or, indeed, any
equivalent vector QR.
Fig. 1 2-tuple representation
Thus the vector is now represented as the 2-tuple (a1, a2)
rather than as an arrow. The set of all such real 2-tuple vectors
will be called 2-space and will be denoted by the symbol R2;
that is,
R2 = {(a1, a2) │ a1, a2 real numbers}.   (1)
Vectors u=(u1,u2) and v=(v1, v2) are defined to be equal if
u1=v1, u2=v2.
u + v = (u1 + v1, u2 + v2),
αu = (αu1, αu2),   0 = (0, 0),   (2)

where α is any scalar.
Similarly, for 3-space:

R3 = {(a1, a2, a3) │ a1, a2, a3 real numbers},   (6)
u + v = (u1 + v1, u2 + v2, u3 + v3).   (7)
One may introduce the set of all ordered real n-tuple vectors,
even if n is greater than 3. We call this n-space, and denote it as
Rn, that is,
Rn = {(a1, ..., an) │ a1, ..., an real numbers}.   (8)
Consider two vectors, u = (u1,…, un) and v = (v1,…,vn), in Rn.
The scalars u1,…, un and v1,…,vn are called the components of
u and v. u and v are said to be equal if u1 =v1, …, un = vn, and
we define
u + v = (u1 + v1, ..., un + vn),   (addition)   (9a)
αu = (αu1, ..., αun),   (scalar multiplication)   (9b)
0 = (0, ..., 0),   (zero vector)   (9c)
-u = (-1)u,   (negative inverse)   (9d)
u - v = u + (-v).   (9e)
From these definitions we may deduce the following properties:
u + v = v + u,   (commutativity)   (10a)
(u + v) + w = u + (v + w),   (associativity)   (10b)
u + 0 = u,   (10c)
u + (-u) = 0,   (10d)
α(βu) = (αβ)u,   (associativity)   (10e)
(α + β)u = αu + βu,   (distributivity)   (10f)
α(u + v) = αu + αv,   (distributivity)   (10g)
1u = u,   (10h)
0u = 0,   (10i)
(-1)u = -u,   (10j)
α0 = 0.   (10k)
The state of the system in Fig. 4 may be defined at any instant by the four currents i1, i2, i3, i4, and these may be regarded as the components of a single vector i = (i1, i2, i3, i4) in R4. Of course, the interpretation of (u1,…, un) as a point or arrow in an “n-dimensional space” can be realized graphically only if n ≤ 3; if n > 3, the interpretation is valid only in an abstract sense.
9.5 Dot Product, Norm, and Angle for n-Space

9.5.1. Dot product, norm, and angle.
If u ≡ (u1, u2, …, un), we wish to define the norm or “length” of u, denoted as ║u║, in terms of the components u1, u2, …, un of u.
uv  u
v cos 
(1)
If u and v are vectors in R2 as shown
in Fig. 1, formula (1) may be expressed
in terms of the components of u and v
as follows:
u  v  u v cos 
 u v cos     
 u v
 cos  cos   sin  sin  
Fig. 1 u‧v in terms of
components
  u cos   v cos     u sin   v sin  
 u1v1  u2 v2
(2)
Generalizing (2) and (3) to Rn, it is eminently reasonable to
define the (scalar-valued) dot product of two n-tuple vectors
u = (u1,…, un) and v = (v1,…,vn) as
u·v = u1v1 + u2v2 + ··· + unvn = ∑ⱼ₌₁ⁿ ujvj.   (4)
Likewise, the norm of u is

║u║ = √(u·u) = √( ∑ⱼ₌₁ⁿ uj² ),   (5)
 uv 
uv
  cos 
 1 or u  v  u v
 ,  1 
u v
 u v 
1
(6)&(11)
Other dot products and norms are sometimes defined for n-space, but we choose to use (4) and (5), which are known as the Euclidean dot product and Euclidean norm; accordingly, we refer to the space as Euclidean n-space, rather than just n-space.
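As a minimal sketch of formulas (4)-(6) in Python/NumPy (the two vectors are chosen arbitrarily for illustration):

    import numpy as np

    u = np.array([1.0, 2.0, 2.0])
    v = np.array([3.0, 0.0, 4.0])

    dot = np.dot(u, v)                    # Euclidean dot product (4)
    norm_u = np.sqrt(np.dot(u, u))        # Euclidean norm (5); same as np.linalg.norm(u)
    norm_v = np.sqrt(np.dot(v, v))
    theta = np.arccos(dot / (norm_u * norm_v))   # angle (6), in radians

    print(dot, norm_u, norm_v, theta)     # 11.0  3.0  5.0  ~0.75 rad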
9.5.2. Properties of the dot product.

Commutative: u·v = v·u,   (12a)
Nonnegative: u·u > 0 for all u ≠ 0,
             u·u = 0 for u = 0,   (12b)
Linear: (αu + βv)·w = α(u·w) + β(v·w),   (12c)

for any scalars α, β and any vectors u, v, w. The linearity condition (12c) is equivalent to the two conditions (u + v)·w = (u·w) + (v·w) and (αu)·v = α(u·v).
Let us now prove the promised inequality (11), namely, the Schwarz inequality

│u·v│ ≤ ║u║║v║.   (13)
For any scalars α and β,

(αu + βv)·(αu + βv) ≥ 0,   (14)

which, upon expanding and using the properties (12), gives

α²║u║² + 2αβ(u·v) + β²║v║² ≥ 0.

Choosing α = ║v║² and β = -(u·v) reduces this to

║v║⁴║u║² - 2║v║²(u·v)² + (u·v)²║v║² ≥ 0,

or

║v║² [ ║u║²║v║² - (u·v)² ] ≥ 0.   (15)

If v ≠ 0, then ║v║² > 0, so (15) gives (u·v)² ≤ ║u║²║v║², and taking square roots yields (13); if v = 0, then both sides of (13) are zero and the inequality again holds.
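The Schwarz inequality (13) is also easy to spot-check numerically; a minimal Python/NumPy sketch over random vectors:

    import numpy as np

    rng = np.random.default_rng(0)
    for _ in range(1000):
        u = rng.standard_normal(5)
        v = rng.standard_normal(5)
        # Schwarz inequality (13): |u.v| <= ||u|| ||v|| (tolerance for roundoff)
        assert abs(np.dot(u, v)) <= np.linalg.norm(u) * np.linalg.norm(v) + 1e-12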
9.5.3. Properties of the norm.

Since the norm is related to the dot product according to

║u║ = √(u·u),   (16)

the properties (12) of the dot product should imply certain corresponding properties of the norm. These properties are as follows:
Scaling: ║αu║ = │α│║u║,   (17a)
Nonnegative: ║u║ > 0 for all u ≠ 0,
             ║u║ = 0 for u = 0,   (17b)
Triangle inequality: ║u + v║ ≤ ║u║ + ║v║.   (17c)
The scaling property follows from (16):

║αu║ = √( (αu)·(αu) ) = √( α²(u·u) ) = │α│√(u·u) = │α│║u║.

For the triangle inequality, write

║u + v║² = (u + v)·(u + v) = u·u + 2(u·v) + v·v
         = ║u║² + 2(u·v) + ║v║²
         ≤ ║u║² + 2│u·v│ + ║v║²
         ≤ ║u║² + 2║u║║v║ + ║v║²   [by the Schwarz inequality (13)]
         = ( ║u║ + ║v║ )²,

and take square roots.
9.5.4. Orthogonality.
If u and v are nonzero vectors such that u.v = 0, then
θ = cos⁻¹( u·v / (║u║║v║) ) = cos⁻¹(0) = π/2,   (18)
and we say that u and v are perpendicular.
We will say that u and v are orthogonal if
uv  0
(19)
Only if u and v are both nonzero does their orthogonality
imply their perpendicularity. Finally, we say that a set of
vectors, say {u1,…,uk}, is an orthogonal set if every vector in
the set is orthogonal to every other one:
ui·uj = 0 if i ≠ j.   (20)
9.5.5. Normalization.
Any nonzero vector u can be scaled to have unit length by multiplying it by 1/║u║; we say that the vector

û = (1/║u║) u   (21)

has been normalized, and the normalized vector has unit length.
A vector of unit length is called a unit vector.
A set of vectors is said to be orthonormal if it is orthogonal
and if each vector is normalized (i.e., is a unit vector). Thus,
{u1,…,uk} is ON if and only if ui.uj = 0 whenever i ≠ j (for
orthogonality), and uj.uj = 1 for each j (so ║uj║ = 1, so the
set is normalized). The symbol
δij = 1 if i = j,   δij = 0 if i ≠ j,   (22)

will be used, which is known as the Kronecker delta. Thus, {u1,…,uk} is ON if and only if

ui·uj = δij   (23)

for i = 1, 2,…, k and j = 1, 2,…, k.
9.6 Generalized Vector Space

9.6.1. Vector space.

In Section 9.5 we generalized our vector concept from the familiar arrow vectors of 2- and 3-space to n-tuple vectors in abstract n-space. It is interesting to wonder if further generalization is possible.
Definition 9.6.1 Vector Space
We call a set S of “objects,” which are denoted by boldface type and referred to as vectors, a vector space if the following requirements are met:
(i) An operation, which will be called vector addition and denoted as +, is defined between any two vectors in S in such a way that if u and v are in S, then u + v is too. Furthermore,

u + v = v + u,   (commutative)   (1)
(u + v) + w = u + (v + w).   (associative)   (2)
(ii) S contains a unique zero vector 0 such that
u+0=u
(3)
for each u in S.
(iii) For each u in S there is a unique vector “-u” in S, called
the negative inverse of u, such that
u + (-u) = 0
(4)
We denote u + (-v) as u - v for brevity, but emphasize
that it is actually the + operation between u and -v.
(iv) Another operation, called scalar multiplication, is defined
such that if u is any vector in S and α is any scalar, then
the scalar multiple αu is in S, too.
Furthermore,

(αβ)u = α(βu),   (associativity)   (5)
(α + β)u = αu + βu,   (distributivity)   (6)
α(u + v) = αu + αv,   (distributivity)   (7)
1u = u.
Theorem 9.6.1 Properties of scalar multiplication
If u is any vector in a vector space S and α is any scalar, then

0u = 0,
(-1)u = -u,
α0 = 0.
9.6.2. Inclusion of inner product and/or norm

A vector space S need not have a dot product (also called an inner product) or a norm defined on it. If a vector space does have an inner product, it is called an inner product space; if it has a norm, it is called a normed vector space; and if it has both, it is called a normed inner product space.
Requirements of inner product:

Commutative: u·v = v·u,   (16a)
Nonnegative: u·u > 0 for all u ≠ 0,
             u·u = 0 for u = 0,   (16b)
Linear: (αu + βv)·w = α(u·w) + β(v·w).   (16c)
Requirements of norm:

Scaling: ║αu║ = │α│║u║,   (17a)
Nonnegative: ║u║ > 0 for all u ≠ 0,
             ║u║ = 0 for u = 0,   (17b)
Triangle inequality: ║u + v║ ≤ ║u║ + ║v║.   (17c)
Example 3. Rn-space. If we wish to add an inner product to
the vector space Rn, we can use the choice
u·v = u1v1 + ··· + unvn = ∑ⱼ₌₁ⁿ ujvj.   (18a)
A variation of (18a) that still satisfies (16) is

u·v = w1u1v1 + ··· + wnunvn = ∑ⱼ₌₁ⁿ wjujvj,   (18b)

where the wj’s are fixed positive constants known as “weights” because they attach more or less weight to the different components of u and v.
Note that for (18b) to be a legitimate inner product we must
have wj > 0 for each j.
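A minimal Python/NumPy sketch of the weighted inner product (18b); the weight values here are arbitrary, but all must be positive:

    import numpy as np

    def weighted_dot(u, v, w):
        # Weighted inner product (18b): sum of w_j * u_j * v_j.
        # All weights w_j must be > 0 for this to be a legitimate inner product.
        return np.sum(w * u * v)

    u = np.array([1.0, 2.0, 3.0])
    v = np.array([4.0, 5.0, 6.0])
    w = np.array([1.0, 0.5, 2.0])    # fixed positive weights (illustrative)

    print(weighted_dot(u, v, w))     # 1*1*4 + 0.5*2*5 + 2*3*6 = 45.0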
If for any vector space S we already have an inner product, then a legitimate norm can always be obtained from that inner product as ║u║ = √(u·u), and that choice is called the natural norm. Thus, the natural norms corresponding to (18a) and (18b) are

║u║ = √( ∑ⱼ₌₁ⁿ uj² )   and   ║u║ = √( ∑ⱼ₌₁ⁿ wj uj² ),   (19a,b)

respectively.
However, we do not have to choose the natural norm. For instance, we could use (18a) as our inner product, and choose

║u║ = │u1│ + ··· + │un│ = ∑ⱼ₌₁ⁿ │uj│   (20)

as our norm.
Example 4.
Let us imagine approximating any given function u(x) in S in a piecewise-constant manner as depicted in Fig. 1. That is, divide the x interval (0 ≤ x ≤ 1) into n equal parts and define the approximating piecewise-constant function over each subinterval as the value of u(x) at the left endpoint of that subinterval.
If we represent the piecewise-constant function as the n-tuple (u1, …, un), then we have, in a heuristic sense, the correspondences below.
For a function u(x) in S,

u(x) ↔ (u1, …, un),   (21)

and similarly, for any other function v(x) in S,

v(x) ↔ (v1, …, vn).   (22)

The n-tuple vectors on the right-hand sides of (21) and (22) are members of Rn. For that space, let us adopt the inner product

(u1, …, un) · (v1, …, vn) = ∑ⱼ₌₁ⁿ uj vj Δx,   (23)
that is, (18b) with all of the wj weights the same, namely, the subinterval width Δx. If we let n → ∞, the “staircase approximations” approach u(x) and v(x), and the sum in (23) tends to the integral ∫₀¹ u(x)v(x) dx. This heuristic reasoning suggests the inner product

⟨u(x), v(x)⟩ = ∫₀¹ u(x) v(x) dx.   (24a)
We can denote it as u·v and call it the dot product, or we can denote it as ⟨u(x), v(x)⟩ and call it the inner product. Just as (18b) is a legitimate generalization of (18a) (if wj > 0 for 1 ≤ j ≤ n), we expect that

⟨u(x), v(x)⟩ = ∫₀¹ u(x) v(x) w(x) dx   (24b)

is a legitimate generalization of (24a).
Naturally, if we wish to define a norm as well, we could use a natural norm based on (24a) or (24b), for instance

║u║ = √( ⟨u(x), u(x)⟩ ) = √( ∫₀¹ u²(x) w(x) dx ).   (25)
The inequality │u·v│ ≤ ║u║║v║ holds for any normed inner product space with natural norm ║u║ = √(u·u). In Rn with the dot product (18a) it says

│ ∑ⱼ₌₁ⁿ ujvj │ ≤ √( ∑ⱼ₌₁ⁿ uj² ) √( ∑ⱼ₌₁ⁿ vj² );   (26)
in the function space of Examples 2 and 4, with the inner product (24b) and norm (25), it says

│ ∫₀¹ u(x) v(x) w(x) dx │ ≤ √( ∫₀¹ u²(x) w(x) dx ) √( ∫₀¹ v²(x) w(x) dx ),   (27)

and so on.
9.7 Span and Subspace
Definition 9.7.1 Span
If u1,…,uk are vectors in a vector space S, then the set of all linear combinations of these vectors, that is, all vectors of the form

u = α1u1 + ··· + αkuk,

where α1,…,αk are scalars, is called the span of u1,…,uk and is denoted as span{u1,…,uk}.
The set {u1,…,uk} is called the generating set of span{u1,…,uk}. Span{u1,u2} is, once again, the line L in Fig. 1, for both u1 and u2 lie along L.
If S denotes any set of vectors {u1,…,uk} in a vector space V, then the set of all linear combinations of the vectors u1,…,uk in S, {α1u1 + α2u2 + ··· + αkuk}, where the αi, i = 1, 2,…, k, are scalars, is called the span of the vectors.
Definition 9.7.2 Subspace
If a subset T of a vector space S is itself a vector space, then T is a subspace of S. (If a subset W of a vector space V is itself a vector space under the operations of vector addition and scalar multiplication defined on V, then W is called a subspace of V.)
Theorem 9.7.1 Span as Subspace
If u1,…,uk are vectors in a vector space S, the span {u1,…,uk}
is itself a vector space, a subspace of S.
9.8 Linear Dependence
Definition 9.8.1 Linear dependence and linear independence
A set of vectors {u1,…,uk} is said to be linearly dependent if
at least one of them can be expressed as a linear combination
of the others. If none can be so expressed, then the set is linearly independent.
Theorem 9.8.1 Test for linear dependence/independence
A finite set of vectors {u1,…,uk} is LD if and only if there exist scalars αj, not all zero, such that

α1u1 + ··· + αkuk = 0.   (1)

If (1) holds only if all the αj’s are zero, then the set is LI.
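In Rn, the test (1) amounts to asking whether the matrix with the uj’s as columns has full column rank k. A minimal Python/NumPy sketch (the vectors are illustrative):

    import numpy as np

    def is_linearly_independent(vectors):
        # Test (1) in R^n: LI iff only the trivial combination gives 0,
        # i.e., iff the matrix with the vectors as columns has full column rank.
        A = np.column_stack(vectors)
        return np.linalg.matrix_rank(A) == len(vectors)

    u1 = np.array([1.0, 0.0, 0.0])
    u2 = np.array([1.0, 1.0, 0.0])
    u3 = np.array([2.0, 1.0, 0.0])    # u3 = u1 + u2, so {u1, u2, u3} is LD

    print(is_linearly_independent([u1, u2]))       # True
    print(is_linearly_independent([u1, u2, u3]))   # False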
Definition 9.8.2 Linear dependence/independence of two vectors
A set of two vectors {u1,u2} is LD if and only if one is expressible as a scalar multiple of the other.
Definition 9.8.3 Linear dependence of sets containing the zero vector
A set containing the zero vector is LD.
Definition 9.8.4 Equating coefficients
Let {u1,…,uk} be LI. Then, for
a1u1 + …+ akuk = b1u1 + …+ bkuk
to hold, it is necessary and sufficient that aj = bj, for each
j = 1,…, k. That is, the coefficients of corresponding vectors
on the left- and right-hand sides must match.
Theorem 9.8.5 Orthogonal sets
Every finite orthogonal set of nonzero vectors is LI.

To see why, let {u1,…,uk} be an orthogonal set of nonzero vectors and suppose that α1u1 + ··· + αkuk = 0. Dotting u1 into both sides gives u1·(α1u1 + ··· + αkuk) = u1·0 = 0; by orthogonality all the cross terms u1·uj (j ≠ 1) vanish, leaving α1(u1·u1) = 0, and since u1 ≠ 0 it follows that α1 = 0. The same argument with u2,…, uk shows that every αj = 0, so the set is LI.
9.9 Bases, Expansions, Dimension
9.9.1. Bases and expansions.
A given function f(x) can be “expanded” as a linear
combination of powers of x,
f(x) = a0 + a1x + a2x² + ···.   (1)
We call a0, a1, a2, …, the “expansion coefficients,” and these
can be computed from f(x) as aj = f⁽ʲ⁾(0)/j!.
Likewise, the expansion of a given vector u in terms of a set
of “base vectors” e1,…, ek can be expressed:
u = α1e1 + ··· + αkek.   (2)
Definition 9.9.1 Basis
A finite set {e1,…, ek} in a vector space S is a basis for S if each vector u in S can be expressed (i.e., “expanded”) uniquely in the form

u = α1e1 + ··· + αkek = ∑ⱼ₌₁ᵏ αjej.   (5)
Theorem 9.9.1 Test for Basis
A finite set {e1,…, ek} in a vector space S is a basis for S if
and only if it spans S and is LI.
9.9.2. Dimension.

Definition 9.9.2 Dimension
If the greatest number of LI vectors that can be found in a vector space S is k, where 1 ≤ k < ∞, then S is k-dimensional, and we write

dim S = k.

If S is the zero vector space, we define dim S = 0. If an arbitrarily large number of LI vectors can be found in S, we say that S is infinite-dimensional.
Theorem 9.9.2 Test for Dimension
If a vector space S admits a basis consisting of k vectors,
then S is k-dimensional.
To prove this, let {e1,…, ek} be a basis for S, and suppose that the vectors e′1, …, e′k+1 in S are LI. According to Eq. (5), each of these can be expanded in terms of the given base vectors {e1,…, ek} as

e′1 = a11e1 + ··· + a1kek,
⋮
e′k+1 = ak+1,1e1 + ··· + ak+1,kek.   (15)

Putting these expressions into the test equation

α1e′1 + α2e′2 + ··· + αk+1e′k+1 = 0   (16)

and grouping terms gives

(α1a11 + ··· + αk+1ak+1,1)e1 + ··· + (α1a1k + ··· + αk+1ak+1,k)ek = 0.

Because the set {e1,…, ek} is LI (since it is a basis), each coefficient in the preceding equation must be zero:
a11α1 + ··· + ak+1,1αk+1 = 0,
⋮
a1kα1 + ··· + ak+1,kαk+1 = 0.   (17)

These are k linear homogeneous equations in the k+1 unknowns α1 through αk+1, and such a system necessarily admits nontrivial solutions. Thus, the α’s in (17) need not all be zero, so the vectors e′1, …, e′k+1 could not have been LI after all.
Theorem 9.9.3 Dimension of Rn
The dimension of Rn is n: dim Rn = n.
Theorem 9.9.4 Dimension of Span {u1,…, uk}
The dimension of span {u1,…,uk}, where the uj’s are not all
zero, denoted as dim [span {u1,…,uk}], is equal to the greatest
number of LI vectors within the generating set {u1,…,uk}.
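In Rn, Theorem 9.9.4 likewise reduces to a rank computation; a minimal Python/NumPy sketch with an illustrative generating set:

    import numpy as np

    # Generating set in R3; u3 = u1 + u2, so only two of the three are LI.
    u1 = np.array([1.0, 0.0, 1.0])
    u2 = np.array([0.0, 1.0, 1.0])
    u3 = np.array([1.0, 1.0, 2.0])

    # dim [span {u1, u2, u3}] = greatest number of LI vectors in the set
    #                         = rank of the matrix with the uj's as columns.
    print(np.linalg.matrix_rank(np.column_stack([u1, u2, u3])))   # 2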
9.9.3. Orthogonal bases.

Suppose {e1,…, ek} is an orthogonal basis for S; that is, it is not only a basis but also happens to be an orthogonal set:

ei·ej = 0 if i ≠ j.   (19)
Suppose that we wish to expand a given vector u in S in terms of that basis; that is, we wish to determine the coefficients α1, α2, …, αk in the expansion

u = α1e1 + α2e2 + ··· + αkek.   (20)

To accomplish this, dot (20) with e1, e2,…, ek, in turn; we obtain the linear system

u·e1 = (e1·e1)α1 + 0α2 + ··· + 0αk,
u·e2 = 0α1 + (e2·e2)α2 + 0α3 + ··· + 0αk,
⋮
u·ek = 0α1 + ··· + 0αk-1 + (ek·ek)αk,   (21)
where all of the quantities u·e1,…, u·ek, e1·e1,…, ek·ek are computable since u, e1,…, ek are known. The crucial point is that even though (21) is still k equations in the k unknown αj’s, the system is uncoupled and readily gives

α1 = (u·e1)/(e1·e1),   α2 = (u·e2)/(e2·e2),   …,   αk = (u·ek)/(ek·ek).   (22)
Thus, if the {e1,…, ek} basis is orthogonal, the expansion of any given u is simply

u = ((u·e1)/(e1·e1)) e1 + ··· + ((u·ek)/(ek·ek)) ek = ∑ⱼ₌₁ᵏ ((u·ej)/(ej·ej)) ej.   (23)
If, besides being orthogonal, the ej’s are normalized (║ej║ = 1) so that they constitute an ON basis, then (23) simplifies slightly to

u = (u·ê1) ê1 + ··· + (u·êk) êk = ∑ⱼ₌₁ᵏ (u·êj) êj.   (24)
9.10 Best Approximation
Let S be a normed inner product vector space (i.e., a vector
space with both a norm and an inner, or dot, product defined).
9.10.1. Best approximation and orthogonal projection.

For a given vector u in S, and an ON set {ê1, …, êN} in S, what is the best approximation

u ≈ c1ê1 + ··· + cNêN = ∑ⱼ₌₁ᴺ cjêj ?   (1)
That is, how do we compute the cj coefficients so as to render the error vector

E = u - ∑ⱼ₌₁ᴺ cjêj

as small as possible? In other words, how do we choose the cj’s so as to minimize the norm ║E║ of the error vector? If ║E║ is a minimum, then so is ║E║², so let us minimize ║E║².
║E║² = E·E = ( u - ∑ⱼ₌₁ᴺ cjêj ) · ( u - ∑ⱼ₌₁ᴺ cjêj )
      = u·u - 2 ∑ⱼ₌₁ᴺ cj (u·êj) + ∑ⱼ₌₁ᴺ cj²,

where the step

( ∑ⱼ₌₁ᴺ cjêj ) · ( ∑ⱼ₌₁ᴺ cjêj ) = ∑ⱼ₌₁ᴺ cj²

follows from the orthonormality of the êj’s. Defining u·êj ≡ αj and noting that u·u = ║u║²,

║E║² = ∑ⱼ₌₁ᴺ cj² - 2 ∑ⱼ₌₁ᴺ αjcj + ║u║²
      = ∑ⱼ₌₁ᴺ (cj - αj)² + ║u║² - ∑ⱼ₌₁ᴺ αj².   (4)

Thus, in seeking to minimize the right-hand side of (4), the only term at our disposal is the nonnegative sum ∑(cj - αj)², which is smallest when

cj = αj = u·êj,

so the best approximation is

u ≈ ∑ⱼ₌₁ᴺ (u·êj) êj.   (5)
Theorem 9.10.1 Best Approximation
Let u be any vector in a normed inner product vector space S with natural norm (║u║ = √(u·u)), and let {ê1,…, êN} be an ON set in S. Then the best approximation (1) is obtained when the cj’s are given by cj = u·êj, as indicated in (5).
9.10.2. Kronecker delta.

When working with ON sets it is convenient to use the Kronecker delta symbol δjk, defined as

δjk = 1 if j = k,   δjk = 0 if j ≠ k.   (11)

Clearly, δjk is symmetric in its indices j and k:

δjk = δkj.   (12)
To illustrate the use of the Kronecker delta, suppose that {ê1,…, êN} is an ON basis for some space S, and that we wish to expand a given u in S as

u = ∑ⱼ₌₁ᴺ cjêj.   (13)

Using the fact that êj·êk = δjk,

u·êk = ( ∑ⱼ₌₁ᴺ cjêj ) · êk = ∑ⱼ₌₁ᴺ cj (êj·êk) = ∑ⱼ₌₁ᴺ cj δjk = ck.   (14)

Thus, ck = u·êk for each k = 1, 2,…, N, so (13) becomes

u = ∑ⱼ₌₁ᴺ (u·êj) êj.   (15)
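A minimal Python/NumPy sketch of Theorem 9.10.1 in R3, with an illustrative ON set of N = 2 vectors: the coefficients cj = u·êj leave an error orthogonal to the set, and any perturbed coefficients do worse:

    import numpy as np

    e1 = np.array([1.0, 0.0, 0.0])    # an ON set in R3 (illustrative), N = 2
    e2 = np.array([0.0, 1.0, 0.0])
    u = np.array([3.0, 4.0, 5.0])

    # Best-approximation coefficients (5): c_j = u . e_j.
    c1, c2 = np.dot(u, e1), np.dot(u, e2)
    E = u - (c1 * e1 + c2 * e2)        # error vector

    print(np.linalg.norm(E))               # 5.0: the part of u outside span{e1, e2}
    print(np.dot(E, e1), np.dot(E, e2))    # 0.0 0.0: E is orthogonal to the set

    # Perturbing a coefficient always increases the error norm.
    worse = u - ((c1 + 0.1) * e1 + c2 * e2)
    assert np.linalg.norm(worse) > np.linalg.norm(E)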
Problems for Chapter 9

Exercise 9.2: 8; 9
Exercise 9.3: 1(a), (d); 2(c); 3; 4(i), (k); 6
Exercise 9.4: 1(g), (n); 3(c); 4(b), (d)
Exercise 9.5: 1(g); 6(d), (g); 8(a); 11(a); 14(d)
Exercise 9.6: 8; 11; 12(b), (d)
Exercise 9.7: 1(c), (e), (h), (m), (q); 4(b), (d), (e); 5(b); 6(a), (c), (h)
Exercise 9.8: 2(d); 3(b), (c), (e), (f), (g); 6(a)
Exercise 9.9: 1(b), (e), (g), (h), (p); 2(b); 3(b); 4(a), (b), (h); 8(a), (d), (f), (h); 12(a), (b), (c), (d), (i)
Exercise 9.10: 1; 2(a), (b); 3; 5; 6; 8(a)