Minimum Time Control


From the point of view of the minimum principle, the minimum time problem is relatively difficult compared to other control problems. This is because of the nature of the two point boundary value problem (TPBVP). For a two variable problem, e.g.,

\[
\frac{dx_1}{dt} = x_2, \qquad x_1(0) \ \text{fixed}, \qquad x_1(t_f) = 0
\]

\[
\frac{dx_2}{dt} = u, \qquad x_2(0) \ \text{fixed}, \qquad x_2(t_f) = 0
\]

The equations for the adjoint variables (λ) have no boundary conditions associated with them. In addition, t_f, the final time, is unknown. Graphically,

[Sketch: the state trajectories x1(t) and x2(t) have known values at t = 0 and at t_f, while the adjoint trajectories λ1(t) and λ2(t) have unknown values at both ends, and t_f itself is unknown.]
Unfortunately, the minimum principle is not too helpful for solving this problem, except in the sense that we know the control for linear o.d.e.’s must be on-off, or bang-bang (u=±1).

For the particular case of the two variable problem, a special technique called phase plane analysis (also called state space flooding) can be used. This approach requires defining what is called a switching curve, which determines the switches in the control from u=+1 to u=-1 or vice versa. Graphically, this is a type of feedback method: we follow the state trajectory until, at some point in time, it reaches the switching curve, and we then change the control from one bound to the other.

Phase plane analysis can also be used for second order nonlinear systems, but unfortunately most systems are of higher order than second order, and the phase plane approach cannot be used for them. The result is then a programmed or open loop control, which has significant disadvantages compared to feedback control. Also, for linear systems we need to be concerned with the nature of the eigenvalues of the A matrix, since this determines the number of switches. It can be shown that for real distinct eigenvalues, an nth order system has at most n-1 switches; for complex eigenvalues, however, no such prediction can be made. Either case can still be handled for simple systems by phase plane analysis.


A summary of computational algorithms for general minimum time problems along with advantages and disadvantages is listed below:

1. Phase plane analysis
   a) limited to two dimensions
   b) can handle linear or nonlinear problems
   c) requires knowledge of the optimal control structure

2. Solution of the TPBVP via search for the initial (t = 0) adjoint variables
   a) error propagation with integration of adjoint variables
   b) converges very slowly
   c) handles control discontinuities (switches) well

3. Switching Function
   a) limited to simple systems
   b) analytical in nature

4. Search for switching times (see the sketch following this list)
   a) requires knowledge of the number of switches
   b) usually very fast computationally
   c) sometimes unreliable for large systems

5. Linear Programming
   a) very fast, most efficient method
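As a concrete illustration of approach 4, here is a minimal sketch of a switching-time search in Python, assuming the double integrator plant treated in the next section (dx1/dt = x2, dx2/dt = u, |u| ≤ 1), a target at the origin, and a single switch. The function names and the use of SciPy's Nelder-Mead optimizer are choices made for this sketch, not something prescribed by these notes.

```python
# Sketch of approach 4: search for the switch time and final time of a
# bang-bang control for the double integrator dx1/dt = x2, dx2/dt = u,
# |u| <= 1, driving the state to the origin with exactly one switch.
import numpy as np
from scipy.optimize import minimize

def propagate(x, u, tau):
    """Exact state update for the double integrator under constant u over time tau."""
    x1, x2 = x
    return np.array([x1 + x2 * tau + 0.5 * u * tau**2, x2 + u * tau])

def terminal_error(durations, x_init, s):
    """Squared distance from the origin after u = s for d1, then u = -s for d2."""
    d1, d2 = np.abs(durations)                  # phase durations must be nonnegative
    x = propagate(np.asarray(x_init, dtype=float), s, d1)
    x = propagate(x, -s, d2)
    return float(x @ x)

def search_switch_times(x_init):
    """Try both initial control signs; keep the one that actually reaches the origin."""
    best = None
    for s in (+1.0, -1.0):
        res = minimize(terminal_error, x0=[1.0, 1.0], args=(x_init, s),
                       method="Nelder-Mead")
        d1, d2 = np.abs(res.x)
        if best is None or res.fun < best[0]:
            best = (res.fun, s, d1, d1 + d2)
    return best[1:]                             # (initial sign, switch time, final time)

# Example: from x1 = 1, x2 = 0 the search recovers u = -1 first,
# a switch at t = 1, and arrival at the origin at t_f = 2.
s, t_switch, t_f = search_switch_times([1.0, 0.0])
print(f"initial u = {s:+.0f}, switch at t = {t_switch:.3f}, t_f = {t_f:.3f}")
```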

Minimum Time Control of a Double Integrator

Consider the time-optimal control of a double integrator

\[
\frac{dx_1}{dt} = x_2, \qquad \frac{dx_2}{dt} = u \qquad (1)
\]

It is desired to find the control input function u(t) that will transfer the system state from a specified initial state x^0 to a specified final state x^1. Let us also assume that the input u is subject to a magnitude constraint defined by

\[
|u| \le 1 \qquad (2)
\]
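As a quick sanity check of (1) and (2), here is a minimal simulation sketch, assuming a forward-Euler discretization; the step size and the test input are arbitrary illustrative choices (the particular input used below anticipates the bang-bang solution derived later).

```python
# Sketch: simulate the double integrator (1) with the input clipped to |u| <= 1,
# as required by (2), using forward Euler.
import numpy as np

def simulate(u_of_t, x_init, t_final, dt=1e-3):
    """Integrate dx1/dt = x2, dx2/dt = u from x_init up to t_final."""
    x = np.array(x_init, dtype=float)
    for k in range(int(round(t_final / dt))):
        u = np.clip(u_of_t(k * dt), -1.0, 1.0)   # enforce the magnitude constraint (2)
        x += dt * np.array([x[1], u])            # Euler step of (1)
    return x

# Example: full negative input for 1 s, then full positive input for 1 s,
# starting from x1 = 1, x2 = 0, lands (approximately) at the origin.
print(simulate(lambda t: -1.0 if t < 1.0 else +1.0, [1.0, 0.0], 2.0))
```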


The Hamiltonian H has the following form:

\[
H = p_1 x_2 + p_2 u + 1 \qquad (3)
\]

\[
\frac{dH}{du} = p_2 \qquad \text{(cannot solve for the optimal } u\text{)}
\]

For a given set of values p1, p2, and x2, the function H takes on its minimum value if u is equal to one or the other of its extreme values +1 or -1. The optimal value of u(t), denoted by u*(t), is given by

\[
u^*(t) = -\operatorname{sgn}\big(p_2(t)\big) \qquad (4)
\]

where sgn denotes the signum function, defined by

\[
\operatorname{sgn}(x) =
\begin{cases}
+1 & \text{for } x > 0 \\
-1 & \text{for } x < 0 \\
\text{undefined} & \text{for } x = 0
\end{cases}
\]
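To spell out why only the extreme values of u appear: H in (3) depends on u only through the term p2 u, which is linear in u, so its minimum over the admissible interval -1 ≤ u ≤ 1 is attained at an endpoint determined by the sign of p2:

\[
\min_{|u| \le 1} \big( p_1 x_2 + p_2 u + 1 \big) = p_1 x_2 - |p_2| + 1,
\qquad \text{attained at } u = -\operatorname{sgn}(p_2) \quad (p_2 \ne 0).
\]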

The adjoint equations for p1 and p2 are

\[
\frac{dp_1}{dt} = 0 \quad\Rightarrow\quad p_1 = a \ \text{(const)} \qquad (5)
\]

\[
\frac{dp_2}{dt} = -p_1 \qquad (6)
\]

The four boundary conditions required for the complete solution of Equations (1) and (6) are provided by the two specified initial and final values of the state variables x1 and x2. These equations, together with Equation (4) and the specified boundary conditions, allow the time-optimal control input u*(t) to be found.

It is very easy to find the form of the optimal control input u* in this case. Integrating the adjoint equations gives

\[
p_1(t) = a, \qquad p_2(t) = -a t + b \qquad (7)
\]

where b is another constant.


Equations (7) show that, as t varies over any range whatever, the value of p2(t) changes sign not more than once. Therefore the optimal control input u*, during the minimum-time transition from any specified initial state to any specified final state, takes on only the values +1 and -1, and changes sign not more than once during the transition.

In order to study the form of the optimal trajectories in a particular case, let us assume the following:

1. The final state x^1 is the origin (0, 0).

2. u* takes on only the values +1 and -1 during a time-optimal transition.

We can solve this minimum time problem using phase plane analysis. The state trajectories of this system under the influence of the input u = +1 are

\[
x_2 = t + c \qquad (8)
\]

and

\[
x_1 = \frac{t^2}{2} + c t + d \qquad (9)
\]

where c and d are constants. From (8) and (9) we see that

\[
x_2^2 = t^2 + 2 c t + c^2 = 2 x_1 + e
\]

where e = c² − 2d is another constant. For different values of e, the relationship between x1 and x2 can be illustrated by the family of parabolas shown in Fig. 1. The arrows on the trajectories show the direction of increasing time t. If the representative point reaches the origin while u = +1, it must do so by following the trajectory AO in Fig. 1, because no other trajectory on the diagram passes through the origin.

Next consider the state trajectories under the influence of the input u = -1. The relationship between x1 and x2 is of the form

\[
x_2^2 = -2 x_1 + f \qquad (10)
\]

where f is a constant. For different values of f, the relationship between x1 and x2 can be illustrated by the family of parabolas shown in Fig. 2. If the representative point reaches the origin while u = -1, it must do so by following the trajectory BO shown in Fig. 2, as no other trajectory on that diagram passes through the origin.


Figure 1. State trajectories for u=+1.

Figure 2. State trajectories for u=-1.
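The two parabola families can be reproduced directly from the relations x2² = 2x1 + e and x2² = -2x1 + f. Below is a minimal plotting sketch, assuming matplotlib is available; the plotted ranges and the set of constants are arbitrary illustrative choices, and the arrows indicating increasing time are omitted.

```python
# Sketch: parabola families of Fig. 1 (u = +1) and Fig. 2 (u = -1).
import numpy as np
import matplotlib.pyplot as plt

x2 = np.linspace(-2.0, 2.0, 400)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4), sharey=True)

for const in (-2, -1, 0, 1, 2):
    ax1.plot((x2**2 - const) / 2.0, x2)   # u = +1: x1 = (x2^2 - e)/2
    ax2.plot((const - x2**2) / 2.0, x2)   # u = -1: x1 = (f - x2^2)/2

ax1.set_title("u = +1 (e varies)")
ax2.set_title("u = -1 (f varies)")
for ax in (ax1, ax2):
    ax.set_xlabel("x1")
ax1.set_ylabel("x2")
plt.show()
```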

Recall that the optimal input takes on only the values +1 and -1, and changes sign at most once (see Eqs. 4 and 6). If the initial state is not on arc AO of Fig. 1 or arc BO of Fig. 2, the initial input must be able to transfer the state to some point on one of these curves. When the point reaches one or the other of these curves, the input must therefore switch from its maximum positive value to its maximum negative value, or vice versa. The family of time-optimal trajectories from arbitrary initial states to the origin of the state space can be constructed as shown in Fig. 3. The curve AOB is formed by the combination of AO of Fig. 1 and BO of Fig. 2. If the initial state point is above AOB in Fig. 3, the input is first u* = -1; when the trajectory meets AO, u* switches to +1 and the curve AO is followed to the origin. If the initial point is below AOB, the input is first u* = +1; when the trajectory meets BO, u* switches to -1 and the curve BO is followed to the origin.


Figure 3. Family of time-optimal trajectories for double integrator.

The curve AOB is therefore a switching curve, which divides the phase plane into two regions corresponding to u* = +1 and u* = -1. A time-optimal controller is thus required to choose u* in accordance with the following logical rules:

u* = +1 if x is below AOB or on AO
u* = -1 if x is above AOB or on BO

Fig. 3 shows that it is possible to reach the origin from any initial state with at most one sign change in the control input u*. It should be noted that the optimal input at any instant is a function of the instantaneous state of the system. Once the origin is reached, setting u = 0 will keep the states at the origin, which you should be able to verify. It is also possible to reach the origin using more switches in u (using Figs. 1 and 2), but these paths will take longer than the minimum time.
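The rules above can be written compactly in terms of the switching function s(x) = x1 + ½ x2|x2|, which is zero exactly on AOB (this follows from the arcs AO: x1 = x2²/2 with x2 ≤ 0, and BO: x1 = -x2²/2 with x2 ≥ 0), positive above the curve, and negative below it. Below is a minimal sketch of the resulting feedback controller; the sample time, tolerance, and function names are choices made for this sketch, and a discrete-time implementation will chatter slightly about the switching curve and the origin.

```python
# Sketch: time-optimal switching-curve feedback for the double integrator.
import numpy as np

def u_star(x, tol=1e-9):
    """Bang-bang feedback read off Fig. 3: +1 below AOB or on AO, -1 above AOB or on BO."""
    x1, x2 = x
    s = x1 + 0.5 * x2 * abs(x2)    # s = 0 on AOB, s > 0 above the curve, s < 0 below
    if abs(s) > tol:
        return -np.sign(s)         # off the curve: drive the state toward it
    if abs(x2) > tol:
        return -np.sign(x2)        # on the curve: ride AO (u = +1) or BO (u = -1) to the origin
    return 0.0                     # at the origin: stay put

def step(x, u, dt):
    """Exact update of dx1/dt = x2, dx2/dt = u over one sample of length dt."""
    x1, x2 = x
    return np.array([x1 + x2 * dt + 0.5 * u * dt**2, x2 + u * dt])

# Closed-loop simulation from an initial state above AOB.
x, dt = np.array([1.0, 0.0]), 1e-3
for _ in range(5000):              # 5 s of simulated time
    x = step(x, u_star(x), dt)
print(x)                           # ends near the origin, within discretization error
```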

For a second order system with imaginary eigenvalues, the time-optimal solution involves sinusoids, and thus a minimum time solution can have many switches.

