3.3: LINEAR INDEPENDENCE
AND THE WRONSKIAN
KIAM HEONG KWA
Generally, two functions f(t) and g(t) are said to be linearly independent on an interval I if the linear relation

(1)    c1 f(t) + c2 g(t) = 0,

where c1 and c2 are constant scalars, holds for all t ∈ I only if c1 = c2 = 0. Otherwise, f(t) and g(t) are said to be linearly dependent on I. In the latter case, there are constants c1 and c2, at least one of them nonzero, such that the linear relation (1) holds on I. Note that if c1 ≠ 0, then f(t) = −(c2/c1)g(t), while if c2 ≠ 0, then g(t) = −(c1/c2)f(t) throughout the interval I. In other words, f(t) and g(t) are linearly dependent on I if one of them is a constant multiple of the other throughout the interval I.
A relationship between the Wronskian and the linear independence
of two functions is provided by the following theorem.
Theorem 1.¹ If f(t) and g(t) are differentiable functions on an open interval I such that W(f, g)(t0) ≠ 0 for some point t0 in I, then f(t) and g(t) are linearly independent on I. Equivalently, if f(t) and g(t) are linearly dependent on I, then W(f, g)(t) = 0 for all t in I.
Proof. Let c1 and c2 be constants such that c1 f(t) + c2 g(t) = 0 on I. Then, clearly, c1 f′(t) + c2 g′(t) = 0 on I as well. In particular,

(2a)    c1 f(t0) + c2 g(t0) = 0,
(2b)    c1 f′(t0) + c2 g′(t0) = 0.

Multiplying (2a) and (2b) by g′(t0) and g(t0), respectively, gives

(3a)    c1 f(t0)g′(t0) + c2 g(t0)g′(t0) = 0,
(3b)    c1 g(t0)f′(t0) + c2 g(t0)g′(t0) = 0.
Date: January 17, 2011.
¹This is theorem 3.3.1 in the text.
Now subtracting (3b) from (3a) yields c1 W(f, g)(t0) = 0. Likewise, multiplying (2a) and (2b) by f′(t0) and f(t0) instead shows that c2 W(f, g)(t0) = 0. Hence if W(f, g)(t0) ≠ 0, then c1 = c2 = 0, and so f(t) and g(t) are linearly independent.
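Theorem 1 can be checked computationally. The following sketch uses sympy's `wronskian` helper; the pair of functions is an illustrative choice, not one from the text:

```python
import sympy as sp

t = sp.symbols('t')
# Illustrative pair: f(t) = exp(t), g(t) = t*exp(t)
f, g = sp.exp(t), t * sp.exp(t)
# W(f, g) = f*g' - g*f'
W = sp.simplify(sp.wronskian([f, g], t))
print(W)             # exp(2*t)
print(W.subs(t, 0))  # 1, nonzero at t0 = 0, so f and g are linearly independent by theorem 1
```

A single point t0 with W(f, g)(t0) ≠ 0 is enough; there is no need to examine the whole interval.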
Remark 1. The converse of theorem 1 does not hold without additional
conditions on the pair of functions involved. That is to say, it is not
true in general that two differentiable functions f (t) and g(t) with a
vanishing Wronskian on an open interval must be linearly dependent on
the same interval. See the following example due to Giuseppe Peano,
a famous Italian mathematician.
Example 1. Consider the differentiable functions f(t) = t^2 |t| and g(t) = t^3 on R. It can be shown that

f′(t) = −3t^2 if t < 0,   and   f′(t) = 3t^2 if t ≥ 0.

Hence W(f, g)(t) = 0 for all t in R. On the other hand, let c1 and c2 be constants such that c1 f(t) + c2 g(t) = 0 on R. Then c1 + c2 = c1 f(1) + c2 g(1) = 0 and c1 − c2 = c1 f(−1) + c2 g(−1) = 0. It follows that c1 = c2 = 0 and thus f(t) and g(t) are linearly independent.
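Peano's example can be verified with a computer algebra system. A sketch using sympy, with f(t) = t^2 |t| written as a piecewise function:

```python
import sympy as sp

t = sp.symbols('t', real=True)
# Peano's pair: f(t) = t^2*|t| = -t^3 for t < 0 and t^3 for t >= 0; g(t) = t^3
f = sp.Piecewise((-t**3, t < 0), (t**3, True))
g = t**3
W = sp.wronskian([f, g], t)
# The Wronskian vanishes identically, even though f and g are linearly independent on R
print([W.subs(t, v) for v in (-2, -1, 1, 2)])  # [0, 0, 0, 0]
```

This is exactly the situation remark 1 warns about: a vanishing Wronskian alone does not force linear dependence.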
The converse of theorem 1 may hold if additional conditions are
imposed on the pair of functions involved. One of the results in this
direction is given by [1]
Theorem 2. If f(t) and g(t) are differentiable functions on an open interval I such that f(t) ≠ 0 and W(f, g)(t) = 0 for all t ∈ I, then f(t) and g(t) are linearly dependent. In particular, g(t) = kf(t) for a constant k.
Proof. This follows from the fact that

(g(t)/f(t))′ = [f(t)g′(t) − g(t)f′(t)]/f(t)^2 = W(f, g)(t)/f(t)^2 = 0,

so that

g(t)/f(t) = k

for some constant k.
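The conclusion of theorem 2 can be recovered by solving the first-order equation W(f, g) = 0 for g. A sketch using sympy's `dsolve`, with an illustrative choice of f:

```python
import sympy as sp

t = sp.symbols('t')
g = sp.Function('g')
# Take f(t) = exp(t), which is nonzero everywhere (an illustrative choice)
f = sp.exp(t)
# Impose W(f, g) = f*g' - g*f' = 0 and solve for g
ode = sp.Eq(f * g(t).diff(t) - g(t) * f.diff(t), 0)
sol = sp.dsolve(ode, g(t))
print(sol)  # Eq(g(t), C1*exp(t)): g is a constant multiple of f, as theorem 2 asserts
```

The hypothesis f(t) ≠ 0 matters: it is what allows dividing by f(t)^2 in the proof (and by f in the ODE above).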
There are quite a number of extensions to this result. For instance,
see [1, 2, 3]. We can also say more about the Wronskian of two functions if the functions are solutions to a common second-order linear
homogeneous equation.
Theorem 3 (Abel's Theorem). Let y1(t) and y2(t) be solutions of the equation

(4)    y″ + p(t)y′ + q(t)y = 0

with continuous coefficients on an open interval I. Then

(5)    W(y1, y2)(t) = c exp(−∫ p(t) dt)

on I for a constant c that depends on y1(t) and y2(t). To be more explicit, if t0 is any given point in I, then

(6)    W(y1, y2)(t) = W(y1, y2)(t0) exp(−∫_{t0}^{t} p(s) ds)

for all t ∈ I.
Proof. For notational convenience, let W = W(y1, y2)(t). Note that (5) is equivalent to the first-order linear equation

(7)    W′ + p(t)W = 0.

In terms of y1(t) and y2(t) and their derivatives, this reads

(8)    [y1(t)y2″(t) − y2(t)y1″(t)] + p(t)[y1(t)y2′(t) − y2(t)y1′(t)] = 0.

To verify this equation, recall that

(9a)    y1″(t) + p(t)y1′(t) + q(t)y1(t) = 0,
(9b)    y2″(t) + p(t)y2′(t) + q(t)y2(t) = 0

since y1(t) and y2(t) are solutions of (4). Multiplying (9a) and (9b) by y2(t) and y1(t), respectively, yields

(10a)    y2(t)y1″(t) + p(t)y2(t)y1′(t) + q(t)y1(t)y2(t) = 0,
(10b)    y1(t)y2″(t) + p(t)y1(t)y2′(t) + q(t)y1(t)y2(t) = 0.

Now subtracting (10a) from (10b) gives (8).
Remark 2. By (5) or (6), the Wronskian of any two solutions of (4)
is either always negative, always zero, or always positive on the interval
I.
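Abel's formula can be confirmed on a concrete equation. A sketch using sympy; the equation y″ + 3y′ + 2y = 0, with p(t) = 3 and solutions exp(−t) and exp(−2t), is an illustrative choice:

```python
import sympy as sp

t, s = sp.symbols('t s')
# Two independent solutions of y'' + 3y' + 2y = 0, so p(t) = 3
y1, y2 = sp.exp(-t), sp.exp(-2*t)
W = sp.simplify(sp.wronskian([y1, y2], t))
print(W)  # -exp(-3*t)
# Abel's formula (6) with t0 = 0: W(t) = W(0) * exp(-integral of p from 0 to t)
abel = W.subs(t, 0) * sp.exp(-sp.integrate(3, (s, 0, t)))
print(sp.simplify(W - abel))  # 0
```

Since W(t) = −exp(−3t) is negative for every t, this pair also illustrates remark 2: the Wronskian of two solutions never changes sign on I.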
As a corollary to theorem 3, we have
Theorem 4.² Let y1(t) and y2(t) be solutions of (4) on an open interval I. Then y1(t) and y2(t) are linearly independent if and only if W(y1, y2)(t) ≠ 0 for all t ∈ I. Equivalently, y1(t) and y2(t) are linearly dependent if and only if W(y1, y2)(t) = 0 for all t ∈ I.
Proof. By Abel's theorem, the conditions that W(y1, y2)(t) ≠ 0 for all t ∈ I and that W(y1, y2)(t) = 0 identically on I are mutually exclusive and exhaustive. On the other hand, by theorem 1, if W(y1, y2)(t) ≠ 0 for all t ∈ I, then y1(t) and y2(t) are linearly independent. Equivalently, by the same theorem, if y1(t) and y2(t) are linearly dependent, then W(y1, y2)(t) = 0 for all t ∈ I. Hence it suffices to show that if W(y1, y2)(t) = 0 throughout I, then y1(t) and y2(t) are linearly dependent. To this end, we want to find two constants c1 and c2, at least one of them nonzero, such that

(11)    c1 y1(t) + c2 y2(t) = 0

for all t ∈ I, on the assumption that W(y1, y2)(t) = 0 identically. To begin with, note that (11) implies that

(12)    c1 y1′(t) + c2 y2′(t) = 0

on I. Let t0 be a point in I, but otherwise arbitrary. Then (11) and (12) give rise to the linear equations

(13)    c1 y1(t0) + c2 y2(t0) = 0 and c1 y1′(t0) + c2 y2′(t0) = 0

with unknowns c1 and c2. We need the following lemmas to proceed.
Lemma 1. Let

| a  b |
| c  d | = ad − bc = 0.

Then there exists a real constant β such that either a = βc and b = βd or c = βa and d = βb.

Proof of lemma 1. Consider the cross product of the vectors ai + bj and ci + dj in three-dimensional Euclidean space:

(ai + bj) × (ci + dj) = (ad − bc)k = 0.

This shows that |(ai + bj) × (ci + dj)| = |ai + bj||ci + dj| sin θ = 0, where θ is the angle between the vectors ai + bj and ci + dj, so that these vectors are collinear. (If either vector is the zero vector, the conclusion is immediate.) Hence there exists a constant β such that either ai + bj = β(ci + dj) or ci + dj = β(ai + bj).
²This is theorem 3.3.3 in the text.
Lemma 2. Let

| a  b |
| c  d | = ad − bc = 0.

Then the system of linear equations

(14)    au + bv = 0 and cu + dv = 0

with unknowns u and v has a nontrivial solution.
Proof of lemma 2. By lemma 1, there is a constant β such that either
a = βc and b = βd or c = βa and d = βb. In the case a = βc and
b = βd, we have β(cu + dv) = au + bv. Hence any pair of values for
u and v such that cu + dv = 0 is a solution of (14). Clearly, there are
infinitely many of them besides the trivial one. In the case c = βa and
d = βb, we only need to require that au + bv = 0 which, again, has
infinitely many solutions.
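Lemma 2 is easy to see in action with a computer algebra system. A sketch using sympy's matrix tools; the singular matrix below is an illustrative choice:

```python
import sympy as sp

# A singular coefficient matrix for (14): a = 2, b = 4, c = 1, d = 2, so ad - bc = 0
A = sp.Matrix([[2, 4], [1, 2]])
print(A.det())  # 0
# The null space of A contains a nontrivial solution (u, v) of (14)
v = A.nullspace()[0]
print(v.T)           # Matrix([[-2, 1]]), i.e., (u, v) = (-2, 1)
print(list(A * v))   # [0, 0]: v indeed solves both equations of (14)
```

In the proof of theorem 4, lemma 2 is applied with a = y1(t0), b = y2(t0), c = y1′(t0), d = y2′(t0), whose determinant is precisely W(y1, y2)(t0).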
Getting back to (13), recall that

W(y1, y2)(t0) = | y1(t0)   y2(t0)  |
                | y1′(t0)  y2′(t0) | = 0.

Hence, by lemma 2, (13) has a nontrivial solution. In other words, there is a pair of values for c1 and c2 such that either c1 ≠ 0 or c2 ≠ 0 and (13) holds. Define y(t) = c1 y1(t) + c2 y2(t) for all t ∈ I. By the principle of superposition, y is a solution of (4). By the choice of c1 and c2, y(t0) = y′(t0) = 0. So, by the uniqueness of the solution to an initial-value problem, y(t) = 0 identically on I. This, of course, implies that (11) holds, so that y1(t) and y2(t) are linearly dependent on I.
Recall that a function f(t) is said to be analytic about a point t0 if it has a convergent power series representation in a neighborhood of t0; that is,

(15)    f(t) = Σ_{k=0}^∞ fk (t − t0)^k

is convergent in a nontrivial interval containing t0. In addition, the coefficients fk are uniquely determined by

(16)    fk = f^(k)(t0)/k!

for each k ∈ N0. Lemma 1 also gives the following converse of theorem 1.
Theorem 5. Let f (t) and g(t) be analytic functions on an open interval
I. If W (f, g)(t) = 0 on I, then f (t) and g(t) are linearly dependent.
Proof. By a suitable change of coordinates, it can be assumed that the center of I is 0, so that

f(t) = Σ_{k=0}^∞ fk t^k   and   g(t) = Σ_{l=0}^∞ gl t^l.

Since f(t) and g(t) are analytic, they are differentiable and [4, Theorem 8.1]

f′(t) = Σ_{k=1}^∞ k fk t^{k−1} = Σ_{k=0}^∞ (k + 1) f_{k+1} t^k

and

g′(t) = Σ_{l=1}^∞ l gl t^{l−1} = Σ_{l=0}^∞ (l + 1) g_{l+1} t^l.

It follows that W(f, g)(t) is also analytic and

W(f, g)(t) = (Σ_{k=0}^∞ fk t^k)(Σ_{l=0}^∞ (l + 1) g_{l+1} t^l) − (Σ_{l=0}^∞ gl t^l)(Σ_{k=0}^∞ (k + 1) f_{k+1} t^k)
           = Σ_{k=0}^∞ fk t^k Σ_{l=0}^∞ (l + 1) g_{l+1} t^l − Σ_{k=0}^∞ gk t^k Σ_{l=0}^∞ (l + 1) f_{l+1} t^l
           = Σ_{k+l=0}^∞ (l + 1)(fk g_{l+1} − gk f_{l+1}) t^{k+l}
           = Σ_{m=0}^∞ [Σ_{k=0}^m (m − k + 1)(fk g_{m−k+1} − gk f_{m−k+1})] t^m

for all t ∈ I.
By assumption, W(f, g)(t) = 0 on I, so

(17)    Σ_{k=0}^m (m − k + 1)(fk g_{m−k+1} − gk f_{m−k+1}) = 0

for each m ∈ N0 because the coefficients of a convergent power series are unique. In particular, for m = 0, we have

f0 g1 − g0 f1 = | f0  f1 |
                | g0  g1 | = 0.

This, together with lemma 1, implies that there is a constant β such that either f0 = βg0 and f1 = βg1 or g0 = βf0 and g1 = βf1. We consider the former case, i.e., f0 = βg0 and f1 = βg1. In addition, we can assume that g0 ≠ 0, for otherwise we consider the series Σ_{k=1}^∞ fk t^{k−1} and Σ_{l=1}^∞ gl t^{l−1} instead. The case g0 = βf0 and g1 = βf1 is similar.
Suppose, inductively, that fk = βgk for k = 0, 1, ..., n for an n ∈ N. Setting m = n in (17) yields

(18)    (n + 1)(f0 g_{n+1} − g0 f_{n+1}) + Σ_{k=1}^n (n − k + 1)(fk g_{n−k+1} − gk f_{n−k+1})
            = Σ_{k=0}^n (n − k + 1)(fk g_{n−k+1} − gk f_{n−k+1}) = 0.

On the other hand, note that for 1 ≤ k ≤ n, we have 1 ≤ n − k + 1 ≤ n and therefore

Σ_{k=1}^n (n − k + 1)(fk g_{n−k+1} − gk f_{n−k+1}) = Σ_{k=1}^n (n − k + 1)(β gk g_{n−k+1} − gk β g_{n−k+1}) = 0

by the induction hypothesis. Hence, by (18),

g0 (β g_{n+1} − f_{n+1}) = f0 g_{n+1} − g0 f_{n+1} = 0.

Consequently, f_{n+1} = β g_{n+1} since g0 ≠ 0. Thus fk = βgk for all k ∈ N0 by induction, from which it follows that f(t) = βg(t).
Corollary 1. Two polynomials p(t) and q(t) are linearly dependent if
and only if W (p, q)(t) = 0 everywhere.
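Corollary 1 can be checked directly on concrete polynomials. A sketch using sympy; the polynomials are illustrative choices:

```python
import sympy as sp

t = sp.symbols('t')
p = t**2 + 1   # an illustrative polynomial
q = 3 * p      # a constant multiple of p, hence linearly dependent on p
print(sp.simplify(sp.wronskian([p, q], t)))      # 0, as corollary 1 predicts
print(sp.simplify(sp.wronskian([p, t**3], t)))   # nonzero: p and t^3 are linearly independent
```

Polynomials are analytic on all of R, so theorem 5 applies to them with no further hypotheses; this is what makes the corollary an "if and only if", unlike theorem 1 alone.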
References
[1] Maxime Bocher. Certain cases in which the vanishing of the Wronskian is a sufficient condition for linear dependence. Transactions of the American Mathematical Society, 2(2):139–149, April 1901.
[2] Alin Bostan and Philippe Dumas. Wronskians and linear independence. American Mathematical Monthly, 117(8):722–727, October 2010.
[3] Mark Krusemeyer. Why does the Wronskian work? The American Mathematical Monthly, 95(1):46–49, January 1988.
[4] Walter Rudin. Principles of Mathematical Analysis. McGraw-Hill, 3rd edition, 1976.