CHAPTER 13


Calculus of Variations with Applications

13.1 Variational Problems with Fixed Boundaries

13.2 Applications to Concrete Problems

13.3 Variational Problems with Moving Boundaries

13.4 Variational Problems Involving Derivatives of Higher Order and Several Independent Variables

13.4.1 Functionals Involving Several Dependent Variables

13.5 Sufficient Conditions for an Extremum - Hamilton-Jacobi Equation

13.6 Exercises

Calculus of variations is a branch of mathematics dealing with finding minima and maxima of a certain type of function called a functional. In a first course on the calculus of variations, functionals are usually taken to be integrals of functions of independent variables and possibly their derivatives. Results of the calculus of variations have found applications in many fields of science and engineering. The subject has a long history, but its systematic study began with the work of Euler (1707-1783) and Lagrange (1736-1813). The aim of this chapter is to introduce those concepts which are useful from the applications point of view. It may be observed here that this is a special class of optimization problems dealing with functionals (functions defined on a vector space, say $\mathbb{R}^n$ or $C[a,b]$, the space of all continuous functions on $[a,b]$) represented by integrals.


13.1 Variational Problems with Fixed Boundaries

In an elementary course of calculus we study the concepts of maxima and minima and the problems of their existence. Let us recall some of those results. A real-valued function f defined on an interval I = [a,b] of $\mathbb{R}$ is said to have a minimum at an interior point of I, say x = c, if

$$f(c) \le f(x) \tag{13.1}$$

for all x in a neighbourhood of c, that is, in the open interval $(c-\delta,\, c+\delta)$, $\delta > 0$ arbitrarily small. f is said to have a maximum at c if the reversed inequality

$$f(c) \ge f(x) \tag{13.2}$$

holds in a neighbourhood of c.

A point x = c is called an extremum (extremal or critical point) if either (13.1) or (13.2) holds. The value of f at an extremum is called a stationary value. The following result provides a necessary condition for an extremum.

Theorem A Let f(x) be defined on an interval (a,b) and have an extremum at x = c, $c \in (a,b)$, at which f'(c) exists. Then f'(c) = 0.

The point c at which f'(c) = 0 is called a critical or stationary point, and f(c) is a stationary value. This theorem does not tell us whether c is a maximum, a minimum, or a point of inflection (saddle point). That question is dealt with by the following theorem.


Theorem B Let f'(c) = 0. Then f(c) is a minimum if f''(c) > 0 and f(c) is a maximum if f''(c) < 0. If f''(x) changes sign as x passes through c, then c is a point of inflection.

In this section we extend these concepts of introductory calculus to functionals of the following type:

$$J(u) = \int_a^b F(x, u(x), u'(x))\, dx, \tag{13.3}$$

where $u'(x) = \dfrac{du}{dx}$.

Minima and maxima of functions of several variables and stationary points

Let X denote the Euclidean space $\mathbb{R}^n$, $n = 1,2,3,\ldots$, or a space of continuous or twice continuously differentiable functions; that is, $X = \mathbb{R}^n$ or $C[a,b]$ or $C^2[a,b]$, where $C^2[a,b]$ is the set of all functions $f : [a,b] \to \mathbb{R}$ such that $f''$ is continuous.

The distance or metric between two points x and y of $\mathbb{R}^n$ is defined as

$$d(x,y) = \left( \sum_{i=1}^n |x_i - y_i|^2 \right)^{1/2},$$

where $x = (x_1, x_2, \ldots, x_n)$ and $y = (y_1, y_2, \ldots, y_n)$. The magnitude, length, or norm of an element $x \in \mathbb{R}^n$ is defined as

$$\|x\| = \left( \sum_{i=1}^n |x_i|^2 \right)^{1/2}.$$

The distance or metric between two elements f, g of $C[a,b]$ or $C^2[a,b]$ is defined as

$$d(f,g) = \left( \int_a^b |f(x) - g(x)|^2\, dx \right)^{1/2} \quad \text{or} \quad d(f,g) = \sup_{a \le x \le b} |f(x) - g(x)|.$$

The norm or magnitude of $f \in C[a,b]$ or $C^2[a,b]$ is defined as

$$\|f\| = \left( \int_a^b |f(x)|^2\, dx \right)^{1/2} \quad \text{or} \quad \|f\| = \sup_{a \le x \le b} |f(x)|.$$

A mapping or transformation $T : X \to Y$, where Y is also either $\mathbb{R}^n$ or $C[a,b]$ or $\mathbb{R}$, is called an operator. If $X = \mathbb{R}^n$, $n = 1,2,3,\ldots$, or $C[a,b]$ or $C^2[a,b]$, and $Y = \mathbb{R}$, then T is called a functional and is usually denoted by capital letters F, G, etc. Let $x, t \in X$ and suppose that

$$\lim_{\varepsilon \to 0} \left\| \frac{T(x + \varepsilon t) - T(x)}{\varepsilon} - DT(x)t \right\|_Y = \lim_{\varepsilon \to 0} d\!\left( \frac{T(x + \varepsilon t) - T(x)}{\varepsilon},\; DT(x)t \right) = 0$$

for every $t \in X$, where $\varepsilon \to 0$ in $\mathbb{R}$. Then $DT(x)t \in Y$ is called the value of the Gâteaux derivative of T at x in the direction t, and T is said to be Gâteaux differentiable at x in the direction t. The Gâteaux derivative of an operator T is itself an operator, denoted by DT(x). If T is a functional, that is, $Y = \mathbb{R}$, and it is denoted by F, then the mapping $x \mapsto DF(x)$ is called the gradient of F and is very often denoted by $\nabla F$.

It is clear that if $X = \mathbb{R}^n$, $Y = \mathbb{R}$ and $e_1 = (1,0,\ldots,0)$, $e_2 = (0,1,0,\ldots,0)$, ..., $e_n = (0,0,\ldots,1)$, then the Gâteaux derivative of $F : \mathbb{R}^n \to \mathbb{R}$ in the direction $e_i$ is $\partial F / \partial x_i$. The Gâteaux derivative of $F : \mathbb{R}^n \to \mathbb{R}$ in the direction of an arbitrary $t = (t_1, t_2, \ldots, t_n) \in \mathbb{R}^n$ is given by

$$DF(x)t = \sum_{k=1}^n \frac{\partial F(x)}{\partial x_k}\, t_k,$$

where $\partial F / \partial x_k$ denotes the partial derivative of F with respect to $x_k$ at $x = (x_1, x_2, \ldots, x_n)$.

Let x be a fixed point of X. An operator $S : X \to Y$ is called the Fréchet derivative of a given operator $T : X \to Y$ at x if

$$\lim_{\|t\| \to 0} \frac{\| T(x + t) - T(x) - S(t) \|}{\|t\|} = 0.$$

The Fréchet derivative is usually denoted by dT(x) or T'(x). It is clear from the definitions of the Gâteaux and Fréchet derivatives that if the Fréchet derivative exists then it is equal to the Gâteaux derivative. It is also evident that the existence of the classical derivative implies the existence of the Fréchet derivative.

For r > 0, $S_r(x_0) = \{ x \in X : \|x - x_0\| < r \}$ is called an open sphere of X. Let A be a subset of X and $F : A \to \mathbb{R}$ a functional. F is said to have a relative or local minimum (local maximum) at $x_0 \in A$ if there is an open sphere $S_r(x_0)$ of X such that $F(x_0) \le F(x)$ (respectively $F(x) \le F(x_0)$) holds for all $x \in S_r(x_0) \cap A$. If F has either a relative minimum or a relative maximum at $x_0$, then F is said to have a relative extremum on the set A (this set may be taken as an interval [a,b] or (a,b)). The set A on which an extremum problem is defined is often called the admissible set.

Theorem 13.1 Let $F : X \to \mathbb{R}$ be a Gâteaux differentiable functional at $x_0 \in X$ and let F have a local extremum at $x_0$. Then $DF(x_0)t = 0$ for all $t \in X$.

Remark 13.1 It may be observed that the conclusion of the theorem holds for the Fréchet derivative and classical derivatives as well.


Proof of Theorem 13.1 For every $t \in X$, the function $\varepsilon \mapsto F(x_0 + \varepsilon t)$ of the real variable $\varepsilon$ has a local extremum at $\varepsilon = 0$. Since it is differentiable at 0, it follows from ordinary calculus that

$$\left. \frac{d}{d\varepsilon} F(x_0 + \varepsilon t) \right|_{\varepsilon = 0} = 0.$$

This means that $DF(x_0)t = 0$ for all $t \in X$, which proves the theorem.

Problem of Calculus of Variations (P):

Find a function u(x) on the interval [a,b] satisfying the boundary conditions $u(a) = \alpha$ and $u(b) = \beta$, and extremizing (minimizing or maximizing) the functional (13.3), where u is a twice continuously differentiable function on [a,b], that is, $u \in C^2[a,b]$; F is continuous in x, u and u', and has continuous partial derivatives with respect to u and u'.

The solution of Problem P is provided by the following theorem, often called the Euler-Lagrange theorem.

Theorem 13.2 A necessary condition for the functional J(u) given by (13.3), where F and u satisfy the conditions mentioned in Problem P, to have an extremum at u is that u must satisfy the following equation in $a \le x \le b$ with the boundary conditions $u(a) = \alpha$ and $u(b) = \beta$:

$$\frac{\partial F}{\partial u} - \frac{d}{dx}\frac{\partial F}{\partial u'} = 0. \tag{13.4}$$

Equation (13.4) is called the Euler-Lagrange equation.

We use the following lemma in the proof.

Lemma 13.1 (Euler-Lagrange Lemma) If h(x) is continuous in [a,b], and if

$$\langle f, h \rangle = \int_a^b f(x)\, h(x)\, dx = 0 \tag{13.5}$$

for every continuous function f(x) such that f(a) = f(b) = 0, then h(x) = 0 for all x in [a,b].

Proof of Lemma 13.1 We prove the lemma by contradiction. Let the conditions of the lemma be satisfied but $h(x) \neq 0$, say h(x) > 0, for some x in [a,b]. Then, by continuity, h(x) > 0 in some interval $[a_1, b_1]$ containing this point and contained in [a,b]. If we set

$$f(x) = \begin{cases} (x - a_1)^2 (b_1 - x)^2, & a_1 \le x \le b_1, \\ 0, & \text{otherwise,} \end{cases}$$

then f(x) satisfies the conditions of the lemma. However,

$$\int_a^b f(x)\, h(x)\, dx = \int_{a_1}^{b_1} (x - a_1)^2 (b_1 - x)^2 h(x)\, dx > 0,$$

since the integrand is positive in $(a_1, b_1)$. This contradicts (13.5), which proves the lemma.

Proof of Theorem 13.2

We assume that J(u) has an extremum at some $u \in C^2[a,b]$. Then we consider the set of all variations u + tv, for an arbitrary fixed $v \in C^2[a,b]$ such that v(a) = v(b) = 0. Then

$$J(u + tv) - J(u) = \int_a^b \left[ F(x, u + tv, u' + tv') - F(x, u, u') \right] dx. \tag{13.6}$$

Using the Taylor series expansion

$$F(x, u + tv, u' + tv') = F(x, u, u') + t \left( v \frac{\partial F}{\partial u} + v' \frac{\partial F}{\partial u'} \right) + \frac{t^2}{2!} \left( v \frac{\partial}{\partial u} + v' \frac{\partial}{\partial u'} \right)^{\!2} F + \cdots,$$

it follows from (13.6) that

$$J(u + tv) = J(u) + t\, dJ(u,v) + \frac{t^2}{2!}\, d^2 J(u,v) + \cdots, \tag{13.7}$$

where the first and second Fréchet differentials are given by

$$dJ(u,v) = \int_a^b \left( v \frac{\partial F}{\partial u} + v' \frac{\partial F}{\partial u'} \right) dx, \tag{13.8}$$

$$d^2 J(u,v) = \int_a^b \left( v \frac{\partial}{\partial u} + v' \frac{\partial}{\partial u'} \right)^{\!2} F\, dx. \tag{13.9}$$

By Theorem 13.1 and Remark 13.1, the necessary condition for the functional J to have an extremum at u is that dJ(u,v) = 0 for all $v \in C^2[a,b]$ such that v(a) = v(b) = 0, that is,

$$0 = dJ(u,v) = \int_a^b \left( v \frac{\partial F}{\partial u} + v' \frac{\partial F}{\partial u'} \right) dx. \tag{13.10}$$

Integrating the second term of the integrand in (13.10) by parts, we obtain

$$\int_a^b \left( \frac{\partial F}{\partial u} - \frac{d}{dx}\frac{\partial F}{\partial u'} \right) v\, dx + \left[ v \frac{\partial F}{\partial u'} \right]_a^b = 0. \tag{13.11}$$

Since v(a) = v(b) = 0, the boundary terms vanish and the necessary condition becomes

$$\int_a^b \left( \frac{\partial F}{\partial u} - \frac{d}{dx}\frac{\partial F}{\partial u'} \right) v\, dx = 0 \tag{13.12}$$

for all functions $v \in C^2[a,b]$ vanishing at a and b. By Lemma 13.1 we have

$$\frac{\partial F}{\partial u} - \frac{d}{dx}\frac{\partial F}{\partial u'} = 0. \tag{13.13}$$

Remark 13.2

After we have determined the solution of (13.13) which makes J(u) stationary, the question arises whether J(u) has a minimum, a maximum, or a saddle point there. To answer this question, we look at the second differential in (13.7). If terms of $o(t^2)$ can be neglected in (13.7), or if they vanish, as in the case of quadratic F, it follows that a necessary condition for the functional J(u) to have a minimum at $u_0$ is that $d^2 J(u,v) \ge 0$ for all v. Similarly, a necessary condition for the functional J(u) to have a maximum at $u_0$ is that $d^2 J(u,v) \le 0$ for all v. These results enable us to determine upper and lower bounds for the stationary value $J(u_0)$ of the functional.
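The Euler-Lagrange expression (13.4) can be formed symbolically as a quick sanity check. The sketch below is a minimal illustration, assuming SymPy is available; the integrand $F = u'^2 - u^2$ is a sample of our own choosing, not one taken from the text.

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')
up = sp.Derivative(u(x), x)          # u'(x)

# Sample integrand (an assumption for illustration): F = (u')^2 - u^2
F = up**2 - u(x)**2

# Euler-Lagrange expression (13.4): dF/du - d/dx (dF/du')
EL = sp.diff(F, u(x)) - sp.diff(sp.diff(F, up), x)

# For this F, setting EL = 0 and dividing by -2 gives u'' + u = 0
print(sp.simplify(-EL / 2))
```

The same two-line pattern (differentiate with respect to $u$, subtract the total x-derivative of the derivative with respect to $u'$) applies to any integrand of the form $F(x, u, u')$.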

13.2 Applications to Concrete Problems

Example 13.1 (Minimum Arc Length).

Determine the form of the curve in a plane which will make the distance between two points in the plane minimum.

Suppose the plane curve y = y(x) passes through the points $(x_1, y_1)$ and $(x_2, y_2)$. The length of such a curve is given by the functional

$$J(y) = \int_{x_1}^{x_2} \sqrt{1 + (y')^2}\, dx. \tag{13.14}$$

Thus, the problem is to determine the curve for which the functional J(y) is minimum. Since $F = \sqrt{1 + (y')^2}$ depends on y' only, the Euler-Lagrange equation becomes

$$\frac{d}{dx}\left( \frac{\partial F}{\partial y'} \right) = 0.$$

Hence, y'' = 0. This means that the curve extremizing J(y) is a straight line:

$$y = \frac{y_2 - y_1}{x_2 - x_1}(x - x_1) + y_1. \tag{13.15}$$

Example 13.2 Determine the meridian curve joining two points in a plane which, when revolved about the x-axis, gives the surface of revolution with minimum area.

This is a problem of the minimum surface of revolution generated by the rotation of the curve y = y(x) about the x-axis. In this case, the area is given by

$$S = 2\pi \int_{x_1}^{x_2} y(x) \sqrt{1 + (y')^2}\, dx,$$

so that the functional to be minimized is

$$J(y) = \int_{x_1}^{x_2} y(x) \sqrt{1 + (y')^2}\, dx,$$

subject to the conditions

$$y_1 = y(x_1), \qquad y_2 = y(x_2). \tag{13.16}$$

This corresponds to

$$F(x, y, y') = y \sqrt{1 + (y')^2},$$

which does not depend on x explicitly. The Euler-Lagrange equation reduces to

$$y y'' - (y')^2 - 1 = 0. \tag{13.17}$$

Writing p for y', we have $y'' = dp/dx = p\, dp/dy$, and (13.17) becomes

$$p y \frac{dp}{dy} = p^2 + 1.$$

Separating the variables and integrating, we obtain $y = a\sqrt{1 + p^2}$, and hence

$$\frac{dy}{dx} = \sqrt{\frac{y^2}{a^2} - 1}.$$

Integrating again, we find

$$y = a \cosh\frac{x - b}{a}, \tag{13.18}$$

where a and b are constants of integration, which can be determined from the conditions (13.16). The curve defined by (13.18) is called a catenary, and the resulting surface is called a catenoid of revolution.
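It is easy to confirm symbolically that the catenary (13.18) satisfies the Euler-Lagrange equation (13.17). A minimal sketch, assuming SymPy is available:

```python
import sympy as sp

x, a, b = sp.symbols('x a b', positive=True)

# Candidate extremal from (13.18): the catenary y = a*cosh((x - b)/a)
y = a * sp.cosh((x - b) / a)

# It should satisfy (13.17): y*y'' - (y')^2 - 1 = 0
residual = y * sp.diff(y, x, 2) - sp.diff(y, x)**2 - 1
print(sp.simplify(residual))   # 0
```

The residual vanishes identically because $\cosh^2 - \sinh^2 = 1$, for every choice of the constants a and b.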

Example 13.3 Consider the functional

$$J(u) = \int_a^b \left[ \frac{1}{2} p(x) (u')^2 - \frac{1}{2} q(x) u^2 + f(x) u \right] dx, \tag{13.19}$$

where p, q, and f are given functions and u belongs to an admissible set of J. Clearly, the Euler-Lagrange equation associated with the functional J(u) is

$$\frac{\partial F}{\partial u} - \frac{d}{dx}\frac{\partial F}{\partial u'} = 0, \tag{13.20}$$

where

$$F(x, u, u') = \frac{1}{2} p(x) (u')^2 - \frac{1}{2} q(x) u^2 + f(x) u.$$

Consequently, (13.20) becomes

$$(p u')' + q u = f.$$

This is a non-homogeneous ordinary differential equation of the Sturm-Liouville type.
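The reduction of (13.19) to the Sturm-Liouville equation can also be checked symbolically. A minimal sketch, assuming SymPy; p, q, f are left as unspecified functions:

```python
import sympy as sp

x = sp.symbols('x')
u, p, q, f = (sp.Function(n) for n in 'upqf')
up = sp.Derivative(u(x), x)

# Integrand of (13.19)
F = sp.Rational(1, 2)*p(x)*up**2 - sp.Rational(1, 2)*q(x)*u(x)**2 + f(x)*u(x)

# Euler-Lagrange expression (13.20): dF/du - d/dx (dF/du')
EL = sp.diff(F, u(x)) - sp.diff(sp.diff(F, up), x)

# EL = 0 should be exactly the Sturm-Liouville form (p u')' + q u = f
sturm = sp.diff(p(x)*sp.Derivative(u(x), x), x) + q(x)*u(x) - f(x)
print(sp.simplify(EL + sturm))   # 0
```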

Example 13.4 (Hamilton's Principle). According to Hamilton's principle, a particle moves on a path which makes the time integral

$$J = \int_{t_1}^{t_2} L(q_i, q_i', t)\, dt \tag{13.21}$$

stationary, where the Lagrangian L = T - V is the difference between the kinetic energy T and the potential energy V. In coordinate space, there are numerous possible paths joining any two positions. Of all these paths, which start at a point A at time $t_1$ and end at another point B at time $t_2$, nature selects the path $q_i = q_i(t)$ for which dJ = 0. Consequently, the Euler-Lagrange equations assume the form

$$\frac{\partial L}{\partial q_i} - \frac{d}{dt}\left( \frac{\partial L}{\partial q_i'} \right) = 0, \quad i = 1, 2, \ldots, n. \tag{13.22}$$

In classical mechanics, these are simply called the Lagrange equations of motion.

The Hamilton function (or Hamiltonian) H is defined in terms of the generalized coordinates $q_i$, the generalized momenta $p_i = \partial L / \partial q_i'$, and L as

$$H = \sum_{i=1}^n p_i q_i' - L = \sum_{i=1}^n q_i' \frac{\partial L}{\partial q_i'} - L(q_i, q_i'). \tag{13.23}$$

When L does not depend on t explicitly, it follows, using (13.22), that

$$\frac{dH}{dt} = \frac{d}{dt}\left( \sum_{i=1}^n q_i' \frac{\partial L}{\partial q_i'} - L \right) = \sum_{i=1}^n q_i' \left( \frac{d}{dt}\frac{\partial L}{\partial q_i'} - \frac{\partial L}{\partial q_i} \right) = 0.$$

Hence, the Hamiltonian H is a constant of the motion.
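Equations (13.22) and (13.23) can be illustrated on a concrete Lagrangian. The sketch below, assuming SymPy, uses the harmonic oscillator $L = \tfrac{1}{2}m q'^2 - \tfrac{1}{2}k q^2$ (a sample of our own choosing) and checks that dH/dt vanishes once the equation of motion is used.

```python
import sympy as sp

t, m, k = sp.symbols('t m k', positive=True)
q = sp.Function('q')
qd = sp.Derivative(q(t), t)

# Sample Lagrangian (an assumption for illustration): harmonic oscillator
L = sp.Rational(1, 2)*m*qd**2 - sp.Rational(1, 2)*k*q(t)**2

# Lagrange equation (13.22): dL/dq - d/dt (dL/dq') = 0  ->  m q'' + k q = 0
EL = sp.diff(L, q(t)) - sp.diff(sp.diff(L, qd), t)

# Hamiltonian (13.23): H = q' * dL/dq' - L, which here is T + V
H = qd*sp.diff(L, qd) - L
print(sp.simplify(H))            # kinetic plus potential energy

# dH/dt vanishes once the equation of motion q'' = -(k/m) q is substituted
dHdt = sp.diff(H, t).subs(sp.Derivative(q(t), (t, 2)), -(k/m)*q(t))
print(sp.simplify(dHdt))         # 0
```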


Example 13.5 (Fermat's Principle in Optics). This principle states that, in an optically homogeneous isotropic medium, light travels from one point $(x_1, y_1)$ to another point $(x_2, y_2)$ along a path y = y(x) for which the travel time is minimum. Since the velocity v is constant in such a medium, the time is minimum along the shortest path. In other words, the path y = y(x) minimizes the integral

$$J = \int_{x_1}^{x_2} \frac{\sqrt{1 + (y')^2}}{v}\, dx = \int_{x_1}^{x_2} F(y, y')\, dx, \tag{13.24}$$

with $y(x_1) = y_1$ and $y(x_2) = y_2$. Since F does not depend on x explicitly, the Euler-Lagrange equation gives

$$\frac{d}{dx}\left( F - y' \frac{\partial F}{\partial y'} \right) = 0.$$

Hence,

$$F - y' \frac{\partial F}{\partial y'} = \text{constant}, \quad \text{or} \quad \frac{1}{v} \cdot \frac{1}{\sqrt{1 + (y')^2}} = \text{constant}. \tag{13.25}$$

In order to give a physical interpretation, we rewrite (13.25) in terms of the angle $\phi$ between the tangent to the minimum path and the vertical y-axis, so that

$$\sin\phi = \frac{1}{\sqrt{1 + (y')^2}},$$

and hence

$$\frac{\sin\phi}{v} = \text{constant} \tag{13.26}$$

for all points on the minimum path. For a ray of light, 1/v must be proportional to the refractive index n of the medium through which the light is travelling. Equation (13.26) is known as Snell's law of refraction of light. This law is often stated as

$$n \sin\phi = \text{constant}. \tag{13.27}$$
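The algebra behind the first integral in (13.25) is a one-line computation. A minimal check, assuming SymPy, with the symbol p standing for y':

```python
import sympy as sp

p, v = sp.symbols('p v', positive=True)   # p stands for y'

# Travel-time integrand from (13.24): F = sqrt(1 + y'^2)/v
F = sp.sqrt(1 + p**2) / v

# First integral (13.25): F - y' dF/dy' should reduce to 1/(v*sqrt(1 + y'^2))
first_integral = F - p*sp.diff(F, p)
print(sp.simplify(first_integral - 1/(v*sp.sqrt(1 + p**2))))   # 0
```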

Example 13.6 (Abel's Problem of Tautochronous Motion).

The problem is to determine the plane curve y = y(x) for which the time of descent of a particle sliding freely along the curve, which passes through the origin and the point $(x_1, y_1)$, is minimum.

The velocity of the particle at an intermediate point (x,y) is found from the energy equation

$$\frac{1}{2} m \left( \frac{ds}{dt} \right)^{\!2} = m g (y - 0),$$

so that the velocity is

$$\frac{ds}{dt} = \sqrt{2gy}, \quad \text{or} \quad \sqrt{1 + (y')^2}\, dx = \sqrt{2gy}\, dt,$$

which gives the time required for the particle to descend from the origin to the point $(x_1, y_1)$ on a frictionless curved path in a plane as

$$T(y) = \int_0^{x_1} \frac{\sqrt{1 + (y')^2}}{\sqrt{2gy}}\, dx.$$

The problem is to minimize this functional subject to the conditions y(0) = 0, $y(x_1) = y_1$. This case corresponds to

$$F(x, y, y') = F(y, y') = \frac{\sqrt{1 + (y')^2}}{\sqrt{2gy}}.$$

Thus, the Euler-Lagrange equation

$$\frac{\partial F}{\partial y} - \frac{d}{dx}\left( \frac{\partial F}{\partial y'} \right) = 0$$

can be written as

$$0 = y' \left( \frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'} \right) = \frac{d}{dx}\left( F - y' \frac{\partial F}{\partial y'} \right),$$

so that

$$F - y' \frac{\partial F}{\partial y'} = c,$$

where c is a constant. More explicitly,

$$c \sqrt{2gy \left( 1 + (y')^2 \right)} = 1, \quad \text{or} \quad y' = \pm \sqrt{\frac{a - y}{y}},$$

where $a^{-1} = 2gc^2$. This can be integrated to obtain

$$x = \int_0^y \frac{y\, dy}{\sqrt{ay - y^2}},$$

where the positive sign is appropriate. We rewrite this integral in the form

$$x = -\frac{1}{2} \int_0^y \frac{(a - 2y)\, dy}{\sqrt{ay - y^2}} + \frac{a}{2} \int_0^y \frac{dy}{\sqrt{ay - y^2}},$$

so that the first integral can be evaluated at once, and the second one can be evaluated by making the substitution $(a/2) - y = (a/2)\cos\theta$. The final result is

$$x = -\sqrt{ay - y^2} + \frac{a}{2} \arccos\left( \frac{a - 2y}{a} \right).$$

This is the equation of the curve of minimum time of descent, where the constant a is to be determined so that the curve passes through the point $(x_1, y_1)$. It is convenient to write the equation in parametric form by letting

$$\frac{a}{2} - y = \frac{a}{2} \cos\theta.$$

Then

$$x = \frac{a}{2} (\theta - \sin\theta), \qquad y = \frac{a}{2} (1 - \cos\theta).$$

These equations represent a cycloid.
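The first integral above says that $y\left(1 + (y')^2\right) = a$ along the extremal (this is the relation $c\sqrt{2gy(1+(y')^2)} = 1$ with $a = 1/(2gc^2)$). The sketch below, assuming SymPy, verifies that the parametric cycloid satisfies it, with the slope computed as $y' = (dy/d\theta)/(dx/d\theta)$:

```python
import sympy as sp

theta, a = sp.symbols('theta a', positive=True)

# Parametric cycloid from Example 13.6
x = (a/2)*(theta - sp.sin(theta))
y = (a/2)*(1 - sp.cos(theta))

# Slope along the curve: y' = (dy/dtheta)/(dx/dtheta)
yp = sp.diff(y, theta) / sp.diff(x, theta)

# First integral of the descent problem: y*(1 + y'^2) = a
print(sp.simplify(y*(1 + yp**2) - a))   # 0
```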

Remark 13.3 (The Brachistochrone Problem)

The problem is to find the curve joining two given points A and B, which is traversed by a particle moving under gravity from A to B in the shortest possible time.

Isoperimetric Problem

The determination of the extremum of the functional

$$J(u) = \int_{x_1}^{x_2} F(x, u, u')\, dx$$

subject to the isoperimetric constraint (condition)

$$\int_{x_1}^{x_2} G(x, u, u')\, dx = \ell \quad (\text{a constant})$$

is called an isoperimetric problem.

The Euler equation for this problem is

$$\frac{\partial f}{\partial u} - \frac{d}{dx}\frac{\partial f}{\partial u'} = 0,$$

where $f = F + \lambda G$ and the constant $\lambda$ is an undetermined multiplier, to be determined, together with the constants of integration, from the boundary conditions $u(x_1) = u_1$, $u(x_2) = u_2$ and the constraint.

The isoperimetric problem may be generalized as follows: determine the extremum of the functional

$$J(u_1, u_2, \ldots, u_n) = \int_{x_1}^{x_2} F(x, u_1, \ldots, u_n, u_1', \ldots, u_n')\, dx$$

subject to the conditions

$$u_i(x_1) = a_i, \quad u_i(x_2) = b_i \quad (i = 1, 2, \ldots, n)$$

and

$$\int_{x_1}^{x_2} G_j(x, u_1, \ldots, u_n, u_1', \ldots, u_n')\, dx = \ell_j, \quad j = 1, 2, \ldots, m,$$

the $\ell_j$ being constants. Here m may be greater than, equal to, or less than n.

The Euler equations are given by

$$\frac{\partial}{\partial u_i}\left( F + \sum_{j=1}^m \lambda_j G_j \right) - \frac{d}{dx}\left[ \frac{\partial}{\partial u_i'}\left( F + \sum_{j=1}^m \lambda_j G_j \right) \right] = 0, \quad i = 1, 2, \ldots, n.$$

Example 13.7 The problem is to find the curve y = y(x) of shortest length between two points $(x_1, y_1)$ and $(x_2, y_2)$ such that the area under the curve is A.

The length of the curve is given by the functional

$$J(y) = \int_{x_1}^{x_2} \sqrt{1 + (y')^2}\, dx \tag{13.28}$$

and the area under the curve is

$$\int_{x_1}^{x_2} y(x)\, dx = A. \tag{13.29}$$

This is a constrained optimization problem, which reduces to finding the extremum of

$$J_1(y) = \int_{x_1}^{x_2} \left[ \sqrt{1 + (y')^2} + \lambda y \right] dx. \tag{13.30}$$

The associated Euler-Lagrange equation is

$$\frac{\partial}{\partial y}\left[ \sqrt{1 + (y')^2} + \lambda y \right] - \frac{d}{dx}\left\{ \frac{\partial}{\partial y'}\left[ \sqrt{1 + (y')^2} + \lambda y \right] \right\} = 0,$$

or

$$\frac{d}{dx}\left[ \frac{y'}{\sqrt{1 + (y')^2}} \right] = \lambda. \tag{13.31}$$

This differential equation can be integrated twice to obtain the equation for y:

$$(x - \alpha)^2 + (y - \beta)^2 = \frac{1}{\lambda^2}. \tag{13.32}$$

Thus, the curve of shortest length is an arc of a circle, where the constants of integration $\alpha$ and $\beta$, together with the multiplier $\lambda$, can be determined from the condition that the curve passes through the points $(x_1, y_1)$ and $(x_2, y_2)$ and the given constraint (13.29).
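A circular arc of radius $1/\lambda$ does satisfy (13.31), since the left side of (13.31) is the curvature of the graph y = y(x). The sketch below, assuming SymPy, checks this numerically for a sample circle (our own choice of $\lambda = 1/2$, $\alpha = \beta = 0$, upper arc); on the upper arc the constant comes out as $-\lambda$, the lower arc giving $+\lambda$:

```python
import sympy as sp

x = sp.symbols('x')
lam, alpha, beta = sp.Rational(1, 2), 0, 0   # sample values: radius 1/lam = 2

# Upper arc of the circle (13.32): (x - alpha)^2 + (y - beta)^2 = 1/lam^2
y = beta + sp.sqrt(1/lam**2 - (x - alpha)**2)
yp = sp.diff(y, x)

# Left side of (13.31): d/dx [ y' / sqrt(1 + y'^2) ]
lhs = sp.diff(yp / sp.sqrt(1 + yp**2), x)

# Evaluate at several points inside the arc; each value should equal -lam
vals = [float(lhs.subs(x, pt)) for pt in (-1.5, -0.5, 0.3, 1.2)]
print(vals)
```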

13.3 Variational Problems with Moving Boundaries

Case (a) Suppose in Problem P, u(a) and u(b) are not specified. In this case the necessary condition for the functional J(.) given by (13.3) to have an extremum at u is that, besides (13.4), the following condition is satisfied:

$$\frac{\partial F}{\partial u'} = 0 \quad \text{at } x = a \text{ and } x = b. \tag{13.33}$$

Example 13.8

Let

$$J(u) = \int_0^2 (u')^2\, dx. \tag{13.34}$$

Find u extremizing J(.) for

(i) fixed end points u(0) = 0, u(2) = 2;

(ii) free end points, u(0) and u(2) not prescribed.

Solution (i) With $F = (u')^2$ we have $\partial F/\partial u = 0$ and $\partial F/\partial u' = 2u'$, so the Euler-Lagrange equation (13.4),

$$\frac{\partial F}{\partial u} - \frac{d}{dx}\frac{\partial F}{\partial u'} = 0,$$

takes the form u'' = 0. Its general solution is $u = \alpha x + \beta$, with $\alpha, \beta$ constants. From $u(0) = 0 = \beta$ we get $\beta = 0$, and from $u(2) = 2\alpha = 2$ we get $\alpha = 1$. Hence J(.) is extremized at u = x, and its extremized value is 2.

(ii) In this case u must satisfy the two conditions

$$\frac{\partial F}{\partial u} - \frac{d}{dx}\frac{\partial F}{\partial u'} = 0, \qquad \frac{\partial F}{\partial u'} = 0 \text{ at } x = 0 \text{ and } x = 2.$$

We have $\partial F/\partial u' = 2u' = 0$, which implies u' = 0 at x = 0 and x = 2. Since $u = \alpha x + \beta$ from the Euler-Lagrange equation, $u' = \alpha = 0$, and so J(.) is extremized at the constant function $u = \beta$, and the extremized value of J(u) is zero.
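Case (i) can also be checked numerically. Discretizing $J(u) = \int_0^2 (u')^2\,dx$ on a uniform grid and setting the derivative of the discrete sum with respect to each interior value to zero gives $u_{i-1} - 2u_i + u_{i+1} = 0$, a linear system whose solution should reproduce the straight line u = x. A minimal sketch, assuming NumPy:

```python
import numpy as np

# Discretize J(u) = ∫_0^2 (u')^2 dx on n interior grid points, u(0)=0, u(2)=2
n, a, b, ua, ub = 49, 0.0, 2.0, 0.0, 2.0
xs = np.linspace(a, b, n + 2)

# Stationarity of the discrete functional: u_{i-1} - 2 u_i + u_{i+1} = 0
A = np.diag(-2*np.ones(n)) + np.diag(np.ones(n-1), 1) + np.diag(np.ones(n-1), -1)
rhs = np.zeros(n)
rhs[0] -= ua       # boundary values moved to the right-hand side
rhs[-1] -= ub
u = np.linalg.solve(A, rhs)

# The discrete extremal should match the straight line u = x of Example 13.8(i)
print(np.max(np.abs(u - xs[1:-1])))   # close to machine precision
```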

Case (b) End points variable in both the x and u directions

Let $u(a) = u_a$, $u(b) = u_b$; that is, the end points are variable points and u is defined over $(a + \delta a,\, b + \delta b)$. Then the functional of Problem P is extremized at u satisfying the following conditions: the Euler-Lagrange equation

$$\frac{\partial F}{\partial u} - \frac{d}{dx}\frac{\partial F}{\partial u'} = 0, \quad a < x < b, \tag{13.35}$$

and

$$\Big[ p\, \delta u - H\, \delta x \Big]_{x=a}^{x=b} = 0, \quad \text{where } p = \frac{\partial F}{\partial u'}, \; H = p u' - F. \tag{13.36}$$

We shall see a more general situation in Section 13.5.

13.4 Variational Problems Involving Several Independent Variables and Higher Derivatives

In this section we consider the extremum of the following functionals:

$$J(u) = \int_c^d \int_a^b F\left( x, y, u, \frac{\partial u}{\partial x}, \frac{\partial u}{\partial y} \right) dx\, dy, \quad \text{where } u = f(x,y), \tag{13.37}$$

$$J(u) = \int_a^b F(x, u, u', u'')\, dx, \quad \text{where } u' = \frac{du}{dx}, \; u'' = \frac{d^2 u}{dx^2}. \tag{13.38}$$

If the edges of the surface u = f(x,y) are fixed, then the necessary condition for the extremum of the functional given in equation (13.37) is

$$\frac{\partial F}{\partial u} - \frac{\partial}{\partial x}\left( \frac{\partial F}{\partial p} \right) - \frac{\partial}{\partial y}\left( \frac{\partial F}{\partial q} \right) = 0, \tag{13.39}$$

where $p = \dfrac{\partial u}{\partial x}$, $q = \dfrac{\partial u}{\partial y}$.

Equation (13.39) is called the Ostrogradsky equation, named after the Russian mathematician M.V. Ostrogradsky.

Here our variational problem is to find the surface u = f(x,y) with fixed edges on which (13.37) has an extremum. The above result carries over to n variables:

$$\frac{\partial F}{\partial u} - \sum_{i=1}^n \frac{\partial}{\partial x_i}\left( \frac{\partial F}{\partial p_i} \right) = 0, \quad \text{where } p_i = \frac{\partial u}{\partial x_i}.$$

If the integrand F depends on higher derivatives, then the Ostrogradsky equation is

$$\frac{\partial F}{\partial u} - \frac{\partial}{\partial x}\left( \frac{\partial F}{\partial p} \right) - \frac{\partial}{\partial y}\left( \frac{\partial F}{\partial q} \right) + \frac{\partial^2}{\partial x^2}\left( \frac{\partial F}{\partial r} \right) + \frac{\partial^2}{\partial x\, \partial y}\left( \frac{\partial F}{\partial s} \right) + \frac{\partial^2}{\partial y^2}\left( \frac{\partial F}{\partial t} \right) = 0,$$

where

$$r = \frac{\partial^2 u}{\partial x^2}, \quad s = \frac{\partial^2 u}{\partial x\, \partial y}, \quad t = \frac{\partial^2 u}{\partial y^2}.$$

The necessary condition for the functional given in (13.38) to have an extremum at u is

$$\frac{\partial F}{\partial u} - \frac{d}{dx}\frac{\partial F}{\partial u'} + \frac{d^2}{dx^2}\frac{\partial F}{\partial u''} = 0. \tag{13.40}$$

This result holds in the following general form. The functional

$$J(u) = \int_a^b F\left( x, u, u', u'', \ldots, u^{(n)} \right) dx$$

has an extremum at u only if u satisfies

$$\frac{\partial F}{\partial u} - \frac{d}{dx}\frac{\partial F}{\partial u'} + \frac{d^2}{dx^2}\frac{\partial F}{\partial u''} - \cdots + (-1)^n \frac{d^n}{dx^n}\frac{\partial F}{\partial u^{(n)}} = 0. \tag{13.41}$$

Equation (13.41) is called the Euler-Poisson equation.

Proofs of these results are straightforward extensions of the proof of Theorem 13.2.

13.4.1 Functionals Involving Several Dependent Variables

The functional given by

$$J = \int_{x_1}^{x_2} F(x, u_1, u_2, \ldots, u_n, u_1', u_2', \ldots, u_n')\, dx \tag{13.42}$$

has an extremum at $(u_1, \ldots, u_n)$ only if it satisfies the equations

$$\frac{\partial F}{\partial u_j} - \frac{d}{dx}\frac{\partial F}{\partial u_j'} = 0, \quad j = 1, \ldots, n, \tag{13.43}$$

which are known as the Euler-Lagrange equations.

Example 13.9 Find the Ostrogradsky equation for the functional

$$J(u) = \int_c^d \int_a^b \left[ \left( \frac{\partial u}{\partial x} \right)^{\!2} + \left( \frac{\partial u}{\partial y} \right)^{\!2} \right] dx\, dy.$$

Solution: Here

$$F = \left( \frac{\partial u}{\partial x} \right)^{\!2} + \left( \frac{\partial u}{\partial y} \right)^{\!2} = p^2 + q^2, \quad \text{where } p = \frac{\partial u}{\partial x}, \; q = \frac{\partial u}{\partial y}.$$

This gives us $\partial F/\partial u = 0$, $\partial F/\partial p = 2p$ and $\partial F/\partial q = 2q$. Putting these values in (13.39), we have

$$0 - \frac{\partial}{\partial x}(2p) - \frac{\partial}{\partial y}(2q) = 0, \quad \text{or, dividing by } -2, \quad \frac{\partial p}{\partial x} + \frac{\partial q}{\partial y} = 0,$$

that is,

$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0.$$

This is the two-dimensional Laplace equation, which is solved in Section 12.4.
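The Ostrogradsky computation of Example 13.9 can be reproduced symbolically. A minimal sketch, assuming SymPy:

```python
import sympy as sp

x, y = sp.symbols('x y')
u = sp.Function('u')
p = sp.Derivative(u(x, y), x)
q = sp.Derivative(u(x, y), y)

# Integrand of Example 13.9
F = p**2 + q**2

# Ostrogradsky expression (13.39): dF/du - d/dx(dF/dp) - d/dy(dF/dq)
ostro = sp.diff(F, u(x, y)) - sp.diff(sp.diff(F, p), x) - sp.diff(sp.diff(F, q), y)

# Setting ostro = 0 and dividing by -2 gives the Laplace equation
lap = sp.Derivative(u(x, y), (x, 2)) + sp.Derivative(u(x, y), (y, 2))
print(sp.simplify(ostro + 2*lap))   # 0
```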

Example 13.10 Find u for which

$$J(u) = \int_0^1 \left( 1 + (u'')^2 \right) dx$$

is extremized (J has a minimum or maximum value), subject to the boundary conditions u(0) = 0, u'(0) = 1, u(1) = 1, u'(1) = 1.

Solution: The Euler-Poisson equation (13.40) takes the following form for $F = 1 + (u'')^2$:

$$\frac{d^2}{dx^2}(2u'') = 0.$$

Integrating four times we get

$$u = c_1 x^3 + c_2 x^2 + c_3 x + c_4.$$

Using the boundary conditions we get $c_1 = 0$, $c_2 = 0$, $c_3 = 1$, $c_4 = 0$. Therefore the given functional is extremized along the straight line u = x.

Example 13.11 Find the extremal of

$$\int_0^1 \left( u'^2 + v'^2 + 2u \right) dx, \quad u(0) = 1, \; u(1) = \frac{3}{2}, \; v(0) = 0, \; v(1) = 1.$$

Solution: With $F = u'^2 + v'^2 + 2u$, the Euler equations give u'' = 1 and v'' = 0, whence

$$u = \frac{x^2}{2} + c_1 x + c_2, \qquad v = c_3 x + c_4.$$

From u(0) = 1 we get $c_2 = 1$, and $u(1) = \frac{3}{2}$ implies $c_1 = 0$; v(0) = 0 gives $c_4 = 0$, and v(1) = 1 gives $c_3 = 1$. Therefore the extremals are

$$u = \frac{x^2}{2} + 1, \qquad v = x.$$

13.5 Sufficient Conditions for an Extremum - Hamilton-Jacobi Equation

Figure 13.1


Let

$$J(u) = \int_a^b F(x, u, u')\, dx \tag{13.44}$$

and suppose that for any two end points $A(a, u_a)$ and $B(b, u_b)$ there is only one curve C which extremizes J. Take A fixed and consider two right-hand end points $B_1(b, u_b)$ and $B_2(b + \Delta b,\, u_b + \Delta u_b)$.

The corresponding curves along which J(.) given by (13.44) is extremized are $C_1$ and $C_2$, as shown in Figure 13.1. The integral (13.44) evaluated along any curve which extremizes it is just a function of the end points A and B, and since A is fixed, one can consider (13.44) as a function of B alone. Thus

$$J(C_1) = \int_{C_1} F(x, u, u')\, dx \tag{13.45}$$

is a function S of $B_1$, which we can write as

$$S = S(b, u_b). \tag{13.46}$$

Similarly,

$$S + \Delta S = S(b + \Delta b,\, u_b + \Delta u_b) \tag{13.47}$$

is the corresponding value for the extremizing curve $C_2$ joining A and $B_2$. From these we have, to first order,

$$\Delta S = p\, \Delta u_b - H\, \Delta b. \tag{13.48}$$

Therefore

$$\frac{\partial S}{\partial u_b} = p, \qquad \frac{\partial S}{\partial b} = -H, \quad \text{where } H = p u' - F. \tag{13.49}$$

Now $B_1(b, u_b)$ may be any end point, and so we can replace it by the point B(x,u), changing b to x and $u_b$ to u. Then (13.49) becomes

$$\frac{\partial S}{\partial u} = p, \qquad \frac{\partial S}{\partial x} = -H, \tag{13.50}$$

where

$$p = p(x, u) = \frac{\partial F}{\partial u'} \tag{13.51}$$

and

$$H = H(x, u, p) = p u' - F. \tag{13.52}$$

In (13.51), u' denotes the derivative du/dx calculated at the point B for the extremizing curve C going from A to B.

From (13.50) we have

$$\frac{\partial S}{\partial x} + H\!\left( x, u, \frac{\partial S}{\partial u} \right) = 0. \tag{13.53}$$

The partial differential equation (13.53) is called the Hamilton-Jacobi equation.

Theorem 13.3 (Hamilton-Jacobi Theorem). Let $S = S(x, u, \alpha)$ be a solution of the Hamilton-Jacobi equation (13.53) depending on a parameter $\alpha$ (a constant of integration). Then

$$\frac{\partial S}{\partial \alpha} = \text{constant}$$

along each extremizing curve.

Proof Let $S = S(x, u, \alpha)$, with u = u(x) an extremizing curve, be a solution of (13.53) depending on the parameter $\alpha$. Then we consider

$$\frac{d}{dx}\left( \frac{\partial S}{\partial \alpha} \right) = \frac{\partial^2 S}{\partial x\, \partial \alpha} + \frac{\partial^2 S}{\partial u\, \partial \alpha} \frac{du}{dx}. \tag{13.54}$$

By differentiating (13.53) with respect to $\alpha$, we have

$$\frac{\partial^2 S}{\partial x\, \partial \alpha} + \frac{\partial H}{\partial p} \frac{\partial^2 S}{\partial u\, \partial \alpha} = 0, \quad \text{or} \quad \frac{\partial^2 S}{\partial x\, \partial \alpha} = -\frac{\partial H}{\partial p} \frac{\partial^2 S}{\partial u\, \partial \alpha}. \tag{13.55}$$

(We obtain this keeping in mind that $\alpha$ occurs only in the third argument of H, which was originally denoted by p.)

Putting the value of $\partial^2 S / \partial x\, \partial \alpha$ from (13.55) into (13.54), we obtain

$$\frac{d}{dx}\left( \frac{\partial S}{\partial \alpha} \right) = \frac{\partial^2 S}{\partial u\, \partial \alpha}\left( \frac{du}{dx} - \frac{\partial H}{\partial p} \right). \tag{13.56}$$

Now, since $du/dx = \partial H / \partial p$ (canonical equation) along each extremizing curve, it follows that

$$\frac{d}{dx}\left( \frac{\partial S}{\partial \alpha} \right) = 0, \quad \text{or} \quad \frac{\partial S}{\partial \alpha} = \text{constant}$$

on each extremizing curve. This proves the theorem.

Example 13.12

Illustrate Theorem 13.3 with the help of

$$J(u) = \int_a^b u'^2\, dx. \tag{13.57}$$

Solution: $F(x, u, u') = u'^2$, $p = \partial F / \partial u' = 2u'$. Thus the Hamiltonian is given by

$$H(x, u, p) = p u' - F = \frac{1}{2} p^2 - \frac{1}{4} p^2 = \frac{1}{4} p^2.$$

The Hamilton-Jacobi equation (13.53) takes the form

$$\frac{\partial S}{\partial x} + \frac{1}{4}\left( \frac{\partial S}{\partial u} \right)^{\!2} = 0. \tag{13.58}$$

This is a first-order non-linear partial differential equation. Let

$$S = S(x, u) = v(x) + w(u), \tag{13.59}$$

which gives

$$\frac{dv}{dx} + \frac{1}{4}\left( \frac{dw}{du} \right)^{\!2} = 0. \tag{13.60}$$

It follows from (13.60) that dv/dx must be constant, because dv/dx does not depend on u and $(dw/du)^2$ does not depend on x. Hence $v = -\alpha^2 x$ ($\alpha$ constant). Then

$$-\alpha^2 + \frac{1}{4}\left( \frac{dw}{du} \right)^{\!2} = 0,$$

which gives $dw = 2\alpha\, du$, or $w = 2\alpha u + \beta$, where $\beta$ is another constant. So, by (13.59),

$$S = -\alpha^2 x + 2\alpha u + \beta. \tag{13.61}$$

By Theorem 13.3, the extremizing curves are given by $\partial S / \partial \alpha = $ constant, that is,

$$u = \alpha x + c \quad (\alpha, c \text{ constants}). \tag{13.62}$$

The extremizing curves in (13.62) are straight lines. This is in agreement with Example 13.8.
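Both steps of Example 13.12 can be verified symbolically: that (13.61) solves the Hamilton-Jacobi equation (13.58), and that $\partial S/\partial\alpha$ is linear in x and u, so the level sets are straight lines. A minimal sketch, assuming SymPy:

```python
import sympy as sp

x, u, alpha, beta = sp.symbols('x u alpha beta')

# Complete integral (13.61) of the Hamilton-Jacobi equation (13.58)
S = -alpha**2*x + 2*alpha*u + beta

# Check S_x + (1/4) S_u^2 = 0
hj = sp.diff(S, x) + sp.Rational(1, 4)*sp.diff(S, u)**2
print(sp.simplify(hj))      # 0

# Theorem 13.3: dS/dalpha = const gives -2*alpha*x + 2*u = const,
# i.e. the straight lines u = alpha*x + c of (13.62)
print(sp.diff(S, alpha))
```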

13.6 Exercises

1. Find functions u(x) which extremize the functional
$$J(u(x)) = \int_0^{\pi/2} \left( (u')^2 - u^2 \right) dx,$$
subject to the boundary conditions u(0) = 0, $u(\pi/2) = 1$.

2. Find functions which extremize the functional
$$J(u(x)) = \int_0^1 \left( (u')^2 + 12xu \right) dx,$$
subject to the boundary conditions u(0) = 0, u(1) = 1.

3. Find u such that
$$J(u) = \int_a^b u^2\, dx, \quad u(a) = u_a, \; u(b) = u_b,$$
is extremized along u.

4. Solve the calculus of variations problem:
$$J(u(x)) = \int_0^1 \left( u^2 + x^2 u' \right) dx, \quad u(0) = 0, \; u(1) = 1.$$

5. Solve the calculus of variations problem:
$$J(u(x)) = \int_2^6 \left( u + x u' \right) dx, \quad u(2) = u_0, \; u(6) = u_1.$$

6. Find u for which the functional
$$J(u) = \int_{x_0}^{x_1} \left( \frac{1 + u'^2}{x} \right)^{1/2} dx$$
is extremized.

7. Find the extremals of the functional
$$J(u(x), v(x)) = \int_0^{\pi/2} \left( u'^2 + v'^2 + 2uv \right) dx, \quad u(0) = 0, \; u(\pi/2) = 1, \; v(0) = 0, \; v(\pi/2) = -1.$$

8. Find the extremals of the functional
$$J(u(x), v(x)) = \int_a^b F(u', v')\, dx.$$

9. Find the extremals of the functional
$$J(u(x)) = \int_0^{\pi/2} \left( u''^2 - u^2 + x^2 \right) dx,$$
satisfying the boundary conditions u(0) = 1, u'(0) = 0, $u(\pi/2) = 0$, $u'(\pi/2) = -1$.

10. Find the extremals of the functional
$$J(u(x)) = \int_{-l}^{l} \left( \frac{1}{2} \mu u''^2 + \rho u \right) dx,$$
satisfying the boundary conditions u(-l) = 0, u'(-l) = 0, u(l) = 0, u'(l) = 0.

11. Find the Ostrogradsky equation for the functional
$$J(u(x,y)) = \int_c^d \int_a^b \left[ \left( \frac{\partial u}{\partial x} \right)^{\!2} + \left( \frac{\partial u}{\partial y} \right)^{\!2} + 2uf(x,y) \right] dx\, dy,$$
where on the boundary of the rectangle of sides b-a and d-c the values of the function u are given in advance and fixed.

12. Write down the Ostrogradsky equation for the functional
$$J(u(x,y)) = \iint_D \left[ \left( \frac{\partial u}{\partial x} \right)^{\!2} - \left( \frac{\partial u}{\partial y} \right)^{\!2} \right] dx\, dy.$$

13. Examine whether the problem of the calculus of variations for the following functional, subject to the given boundary conditions, has a solution or not:
$$J(u) = \int_0^1 \left( xu + u^2 - 2u^2 u' \right) dx, \quad u(0) = 1, \; u(1) = 2.$$

14. Find the curve which gives an extremum value of the functional J(.) given by
$$J(u) = \int_a^b \left( u^2 + u'^2 - 2u\sin x \right) dx.$$

15. Examine whether the functional
$$J(u) = \int_a^b \frac{(u')^2}{x^3}\, dx$$
has an extremum or not.

16. Find the function which extremizes the functional
$$J(u) = \int_0^{\pi/2} \left( 2u\sin x - u'^2 \right) dx,$$
subject to the boundary conditions u(0) = 0, $u(\pi/2) = 1$.

17. Discuss the problem of finding the shortest distance between two points in the plane. Write down the problem and solve it.

18. Find the extremal of the functional
$$J(u,v) = \int_a^b \left( 2uv - 2u^2 + u'^2 - v'^2 \right) dx,$$
where u and v are functions of x.

19. Find the extremals of the functional
$$\int_1^2 \left( u'^2 + v^2 + v'^2 \right) dx, \quad u(1) = 1, \; u(2) = 2, \; v(1) = 0, \; v(2) = 1.$$

20. A uniform elastic beam of length l is fixed at each end. The beam, of line density $\rho$, cross-sectional moment of inertia I, and modulus of elasticity E, performs small transverse oscillations in the horizontal xy-plane. Derive the equation of motion of the beam.

21. Find the Ostrogradsky equation for the functionals

(i) $$\iiint \left[ \left( \frac{\partial u}{\partial x} \right)^{\!2} + \left( \frac{\partial u}{\partial y} \right)^{\!2} + \left( \frac{\partial u}{\partial z} \right)^{\!2} + 2uf(x,y,z) \right] dx\, dy\, dz,$$

(ii) $$\iint \left[ \left( \frac{\partial^2 u}{\partial x^2} \right)^{\!2} + \left( \frac{\partial^2 u}{\partial y^2} \right)^{\!2} + 2\left( \frac{\partial^2 u}{\partial x\, \partial y} \right)^{\!2} + 2uf(x,y) \right] dx\, dy.$$

22. A geodesic is a curve of minimum length between two points on a smooth surface S(x,y,z) = 0, the whole curve being confined to the surface. Find the geodesics on a sphere.

23. Derive the equation of motion of free vibration of an elastic string of length l and line density $\rho$ using the method of the calculus of variations.

24. Solve the isoperimetric problem, that is, maximize the area under a curve,
$$\int_{x_1}^{x_2} u(x)\, dx,$$
subject to the fixed arc length
$$\int_{x_1}^{x_2} \left( 1 + (u'(x))^2 \right)^{1/2} dx = l.$$

25. Show that the geodesic on a cylinder is a spiral curve.
