Math 2250-010   Fri Apr 18

The superquiz will be take-home, and due at the start of class on Monday.

Let's talk about the "train" example in Wednesday's notes, because it's interesting and because it relates to two of your homework problems due this afternoon that you may be stuck on. Then begin Chapter 9:

9.1-9.2  Nonlinear systems of first order differential equations, with an emphasis on autonomous systems of two first order DEs. This chapter will not only introduce some new ideas, but it will also tie together and review many of the key course concepts for Math 2250.

The general (non-linear) system of two first order differential equations for $x(t), y(t)$ can be written as
$$x'(t) = F(x(t), y(t), t)$$
$$y'(t) = G(x(t), y(t), t),$$
which we often abbreviate by writing
$$x' = F(x, y, t)$$
$$y' = G(x, y, t).$$
If the rates of change $F, G$ depend only on the values of $x(t), y(t)$ but not on $t$, i.e.
$$x' = F(x, y)$$
$$y' = G(x, y),$$
then the system is called autonomous. Autonomous systems of first order DEs are the focus of Chapter 9, and are the generalization of a single autonomous first order DE, as we studied in Chapter 2. In Chapter 9 we will restrict to systems of two equations as above, although most of the ideas generalize to more complicated autonomous systems with three or more interacting functions.

Constant solutions to an autonomous differential equation or system of DEs are called equilibrium solutions. Thus, equilibrium solutions $x(t) \equiv x^*$, $y(t) \equiv y^*$ correspond to solutions $[x^*, y^*]^T$ of the nonlinear algebraic system
$$F(x, y) = 0$$
$$G(x, y) = 0.$$

Exercise 1)  Consider the "competing species" model from 9.2, shown below. For example, and in appropriate units, $x(t)$ might be a squirrel population and $y(t)$ might be a rabbit population, competing on the same island sanctuary.
$$x'(t) = 14x - 2x^2 - xy$$
$$y'(t) = 16y - 2y^2 - xy.$$
1a)  Notice that if either population is missing, the other population satisfies a logistic DE. Discuss how the signs of the third terms on the right sides of these DEs indicate that the populations are competing with each other (rather than, for example, acting in symbiosis, or so that one of them is a predator of the other). Hint: to understand why this model is plausible for $x(t)$, consider the normalized birth rate $\frac{x'(t)}{x(t)}$, as we did in Chapter 2.
1b)  Find the four equilibrium solutions to this competition model, algebraically. (A Maple check appears after part 1e.) System repeated for convenience:
$$x'(t) = 14x - 2x^2 - xy$$
$$y'(t) = 16y - 2y^2 - xy.$$

Equilibrium solutions $[x^*, y^*]^T$ to first order autonomous systems
$$x' = F(x, y)$$
$$y' = G(x, y)$$
are called stable if solutions to IVPs starting close (enough) to $[x^*, y^*]^T$ stay as close as desired.
Equilibrium solutions are unstable if they are not stable.
Equilibrium solutions $[x^*, y^*]^T$ are called asymptotically stable if they are stable and, furthermore, IVP solutions that start close enough to $[x^*, y^*]^T$ converge to $[x^*, y^*]^T$ as $t \to \infty$.
(Notice these definitions are completely analogous to our discussion in Chapter 2.)

1c)  Discuss the pplane phase portrait below for the competition model we're studying. Discuss the sorts of stability/instability apparently exhibited by the four equilibrium solutions.
1d)  Locate the two phase diagrams from Chapter 2, for the logistic behavior when there is only one population, inside the phase portrait below.
1e)  If both initial populations are positive, what do you expect their limiting populations will be, based on the phase portrait?
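For checking part 1b with a computer algebra system, here is a minimal Maple sketch in the same style as the worksheet code in the appendix; the names F and G are labels chosen here, not notation from the problem statement.

>  # rates of change for the competing-species model of Exercise 1
>  F := 14*x - 2*x^2 - x*y:
>  G := 16*y - 2*y^2 - x*y:
>  # equilibrium solutions are the simultaneous solutions of F = 0, G = 0
>  solve({F = 0, G = 0}, {x, y});

The solve command returns four solution pairs, which you can match against the four equilibrium points visible in the pplane phase portrait of part 1c.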
Linearization near equilibrium solutions is a recurring theme in differential equations and in this Math 2250 course, as we've already seen multiple times. It's important to understand how to linearize in general, because the linearized differential equations can often be used to understand stability and solution behavior near the equilibrium point for the original differential equations.

An easy case of linearization in the present example is near the equilibrium solution $[x^*, y^*]^T = [0, 0]^T$. It's pretty clear that our population system
$$x'(t) = 14x - 2x^2 - xy$$
$$y'(t) = 16y - 2y^2 - xy$$
linearizes to
$$x'(t) = 14x$$
$$y'(t) = 16y$$
at the origin. And we can see directly that for the linearized problem the solutions are
$$x(t) = c_1 e^{14t}, \quad y(t) = c_2 e^{16t}.$$
Notice that we would also have concluded this if we hadn't noticed that this system decouples into two separate first order DEs for $x(t), y(t)$, and had instead tried the eigenvalue-eigenvector way of solving it:
$$\begin{bmatrix} x'(t) \\ y'(t) \end{bmatrix} = \begin{bmatrix} 14 & 0 \\ 0 & 16 \end{bmatrix} \begin{bmatrix} x(t) \\ y(t) \end{bmatrix}.$$
The eigenvalues are the diagonal entries, and the eigenvectors are the standard basis vectors, so
$$\begin{bmatrix} x(t) \\ y(t) \end{bmatrix} = c_1 e^{14t} \begin{bmatrix} 1 \\ 0 \end{bmatrix} + c_2 e^{16t} \begin{bmatrix} 0 \\ 1 \end{bmatrix},$$
as we knew. Notice how the phase portrait for the linearized system looks like that for the non-linear system, near the origin.

How to linearize with multivariable Calculus:  (This would work for systems of $n$ autonomous first order differential equations, but we focus on $n = 2$ in this chapter.) Notice how we're not assuming the equilibrium point is the origin, or that we have polynomial expressions for $F(x, y)$, $G(x, y)$. Here's the general system:
$$x'(t) = F(x, y)$$
$$y'(t) = G(x, y).$$
Let $x(t) \equiv x^*$, $y(t) \equiv y^*$ be an equilibrium solution, i.e.
$$F(x^*, y^*) = 0$$
$$G(x^*, y^*) = 0.$$
For solutions $[x(t), y(t)]^T$ to the original system, define the deviations from equilibrium $u(t), v(t)$ by
$$u(t) := x(t) - x^*$$
$$v(t) := y(t) - y^*.$$
Thus
$$u' = x' = F(x, y) = F(x^* + u,\, y^* + v)$$
$$v' = y' = G(x, y) = G(x^* + u,\, y^* + v).$$
Using partial derivatives, which measure rates of change in the coordinate directions, we can approximate
$$u' = F(x^* + u,\, y^* + v) = F(x^*, y^*) + \frac{\partial F}{\partial x}(x^*, y^*)\,u + \frac{\partial F}{\partial y}(x^*, y^*)\,v + \varepsilon_1(u, v)$$
$$v' = G(x^* + u,\, y^* + v) = G(x^*, y^*) + \frac{\partial G}{\partial x}(x^*, y^*)\,u + \frac{\partial G}{\partial y}(x^*, y^*)\,v + \varepsilon_2(u, v).$$
For differentiable functions, the error terms $\varepsilon_1, \varepsilon_2$ shrink more quickly than the linear terms as $(u, v) \to 0$. Also, note that $F(x^*, y^*) = G(x^*, y^*) = 0$ because $(x^*, y^*)$ is an equilibrium point. Thus the linearized system that approximates the non-linear system for $u(t), v(t)$, written in matrix-vector form, is
$$\begin{bmatrix} u'(t) \\ v'(t) \end{bmatrix} = \begin{bmatrix} \dfrac{\partial F}{\partial x}(x^*, y^*) & \dfrac{\partial F}{\partial y}(x^*, y^*) \\[1ex] \dfrac{\partial G}{\partial x}(x^*, y^*) & \dfrac{\partial G}{\partial y}(x^*, y^*) \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix}.$$
The matrix of partial derivatives is called the Jacobian matrix for the vector-valued function $[F(x, y), G(x, y)]^T$, evaluated at the point $(x^*, y^*)$. Notice that it is evaluated at the equilibrium point. We will often use the subscript notation for partial derivatives to save writing, e.g. $F_x$ for $\frac{\partial F}{\partial x}$ and $F_y$ for $\frac{\partial F}{\partial y}$.

Exercise 2)  We will linearize the rabbit-squirrel (competition) model of the previous example, near the equilibrium solution $[4, 6]^T$. For convenience, here is that system:
$$x'(t) = 14x - 2x^2 - xy$$
$$y'(t) = 16y - 2y^2 - xy.$$
2a)  Use the Jacobian matrix method of linearizing the system at $[4, 6]^T$. In other words, as on the previous page, set
$$u(t) = x(t) - 4$$
$$v(t) = y(t) - 6,$$
so $u(t), v(t)$ are the deviations of $x(t), y(t)$ from 4, 6, respectively. Then use the Jacobian matrix computation to verify that the linearized system of differential equations that $u(t), v(t)$ approximately satisfy is
$$\begin{bmatrix} u'(t) \\ v'(t) \end{bmatrix} = \begin{bmatrix} -8 & -4 \\ -6 & -12 \end{bmatrix} \begin{bmatrix} u(t) \\ v(t) \end{bmatrix}.$$
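If you want to check the Jacobian computation in 2a by machine, here is a minimal Maple sketch; the names F, G, J, and J46 are labels chosen here, not notation from the notes. It builds the matrix of partial derivatives with diff, evaluates it at the equilibrium (4, 6), and asks for the eigendata used in 2b.

>  with(LinearAlgebra):
>  F := 14*x - 2*x^2 - x*y:
>  G := 16*y - 2*y^2 - x*y:
>  # Jacobian matrix of [F, G] for general x, y
>  J := Matrix(2, 2, [diff(F, x), diff(F, y), diff(G, x), diff(G, y)]);
>  # evaluate at the equilibrium point (4, 6) -- this is the matrix in 2a
>  J46 := eval(J, {x = 4, y = 6});
>  # exact eigendata; evalf gives the decimal approximations quoted in 2b
>  lam, vec := Eigenvectors(J46):
>  evalf(lam); evalf(vec);

Maple's eigenvectors may differ from those quoted in 2b by nonzero scalar multiples, since eigenvectors are only determined up to scaling.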
2b)  The matrix in the linear system of DEs above has approximate eigendata:
$$\lambda_1 \approx -4.7, \quad \mathbf{v}_1 \approx [0.79, -0.64]^T$$
$$\lambda_2 \approx -15.3, \quad \mathbf{v}_2 \approx [0.49, 0.89]^T.$$
Use the eigendata above to write down the general solution to the homogeneous (linearized) system. Use this information to make a rough sketch of the solution trajectories to the linearized problem near $[u, v]^T = [0, 0]^T$. Then compare your work to the pplane output on the next page. Notice how your phase portrait for the linearized problem near $[u, v]^T = [0, 0]^T$ looks very much like the phase portrait for $[x, y]^T$ near $[4, 6]^T$. This is sensible, since the correspondence between $(x, y)$ and $(u, v)$ involves a translation of the $x$-$y$ coordinate axes to the $u$-$v$ coordinate axes, via the formulas
$$x - 4 = u$$
$$y - 6 = v.$$

Linearization allows us to approximate and understand solutions to non-linear problems near equilibria.
The non-linear problem and representative solution curves:
pplane will do the eigenvalue-eigenvector linearization computation for you, if you use the "find an equilibrium solution" option under the "solution" menu item. The solutions to the linearized system near $[u, v]^T = [0, 0]^T$ are close to the exact solutions for the non-linear deviations, so under the translation of coordinates $u = x - x^*$, $v = y - y^*$, the phase portrait for the linearized system looks like the phase portrait for the non-linear system.

Exercise 3)  Linearize the same competition model at the equilibrium solution $[7, 0]^T$, find the general solution to the linearized system, and compare your work to the pplane picture for the original nonlinear system. (A Maple check appears just before the appendix material below.) Here's the system and the Jacobian matrix we worked out, for general values of $x, y$:
$$x'(t) = F(x, y) = 14x - 2x^2 - xy$$
$$y'(t) = G(x, y) = 16y - 2y^2 - xy$$
$$J(x, y) = \begin{bmatrix} 14 - 4x - y & -x \\ -y & 16 - 4y - x \end{bmatrix}.$$

Summary:  In case the matrix $A$ for the linearized system is diagonalizable with real eigenvalues, the linearized system for $\mathbf{u}(t) = [u(t), v(t)]^T$,
$$\mathbf{u}'(t) = A\,\mathbf{u},$$
has general solution
$$\mathbf{u}(t) = c_1 e^{\lambda_1 t}\,\mathbf{v}_1 + c_2 e^{\lambda_2 t}\,\mathbf{v}_2.$$
If each eigenvalue is non-zero, the three possibilities are a nodal sink (both eigenvalues negative), a nodal source (both eigenvalues positive), or a saddle (eigenvalues of opposite sign):

Exercise 4)  Verify how the two eigenvectors of the matrix $A$ in the linear system show up in the three phase portraits above. (By the way, in the nodal cases, if the eigenvalues are equal then all solution trajectories follow rays from the origin, and the nodes are called "proper" nodes, as opposed to the case of unequal eigenvalues, in which case they're called "improper" nodes.)

Remark:  In case the $2 \times 2$ matrix $A$ is not diagonalizable, then the single eigenvalue $\lambda_1$ must be real, and its eigenspace must be only one-dimensional. In this case one can check that the general solution to the linearized system is
$$\mathbf{u}(t) = c_1 e^{\lambda_1 t}\,\mathbf{v} + c_2 e^{\lambda_1 t}(\mathbf{w} + t\,\mathbf{v}),$$
where $\mathbf{v}$ is the eigenvector for $\lambda_1$ and $(A - \lambda_1 I)\,\mathbf{w} = \mathbf{v}$. Qualitatively the phase portraits look like those of improper stable or unstable nodes, depending on whether $\lambda_1 < 0$ or $\lambda_1 > 0$. We probably won't run across any examples where this actually happens.

Monday:  Linearization with complex eigendata, and applications to first order systems arising from mechanical oscillations (section 9.4).

Appendix:  Discussion of transverse mass-spring oscillations, and connection to modeling of earthquake vibrations in multi-story buildings. (We may not have time to go over this section 7.4 material in class.)
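Before turning to the appendix material, here is the Maple check mentioned in Exercise 3: a minimal sketch (the labels J and J70 are mine) that evaluates the Jacobian at the equilibrium $[7, 0]^T$ and extracts the eigendata for the linearized system.

>  with(LinearAlgebra):
>  F := 14*x - 2*x^2 - x*y:
>  G := 16*y - 2*y^2 - x*y:
>  J := Matrix(2, 2, [diff(F, x), diff(F, y), diff(G, x), diff(G, y)]):
>  # linearization matrix at the equilibrium (7, 0)
>  J70 := eval(J, {x = 7, y = 0});
>  Eigenvectors(J70);  # eigenvalues and eigenvectors for the linearized system

The matrix at $[7, 0]^T$ comes out triangular, so the eigenvalues can be read off its diagonal; their signs tell you which of the three phase-portrait types from the summary occurs at this equilibrium.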
As it turns out, for our physics lab springs, the modes and frequencies are almost identical.

An interesting shake-table demonstration: http://www.youtube.com/watch?v=M_x2jOKAhZM
Below is a discussion of how to model the unforced "three-story" building shown shaking in the video above, from which we can see which modes will be excited. There is also a "two-story" building model in the video, and its matrix and eigendata follow.

Here's a schematic of the three-story building:

For the unforced (homogeneous) problem, the accelerations of the three massive floors above ground (the top one is the roof), each of mass $m$, are given by
$$\begin{bmatrix} x_1''(t) \\ x_2''(t) \\ x_3''(t) \end{bmatrix} = \frac{k}{m} \begin{bmatrix} -2 & 1 & 0 \\ 1 & -2 & 1 \\ 0 & 1 & -1 \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \\ x_3(t) \end{bmatrix}.$$
Note the $-1$ value in the last diagonal entry of the matrix. This is because $x_3(t)$ measures displacements for the top floor (roof), which has nothing above it. The "$k$" is just the linearization proportionality factor, and depends on the tension in the walls, the height between floors, etc., as discussed on the previous page.

Here is eigendata for the unscaled matrix ($\frac{k}{m} = 1$). For the scaled matrix you'd have the same eigenvectors, but the eigenvalues would all be multiplied by the scaling factor $\frac{k}{m}$, and the natural frequencies would all be scaled by $\sqrt{\frac{k}{m}}$. Symmetric matrices like ours (i.e. matrix equals its transpose) are always diagonalizable with real eigenvalues and eigenvectors... and you can choose the eigenvectors to be mutually perpendicular. This is called the "Spectral Theorem for symmetric matrices" and is an important fact in many math and science applications... you can read about it here: http://en.wikipedia.org/wiki/Symmetric_matrix. If we tell Maple that our matrix is symmetric, it will not confuse us with unsimplified numbers and vectors that may look complex rather than real.

>  with(LinearAlgebra):
>  A := Matrix(3, 3, [-2.0, 1, 0, 1, -2, 1, 0, 1, -1]);
     # I used at least one decimal value so Maple would evaluate in floating point

        A := [ -2.0    1     0 ]
             [   1    -2     1 ]
             [   0     1    -1 ]

>  Digits := 5:  # 5 digits should be fine, for our decimal approximations
>  eigendata := Eigenvectors(Matrix(A, shape = symmetric)):
     # to take advantage of the spectral theorem
   lambdas := eigendata[1]:      # eigenvalues
   evectors := eigendata[2]:     # corresponding eigenvectors - for fundamental modes
   omegas := map(sqrt, -lambdas);          # natural angular frequencies
   f := x -> 2*evalf(Pi)/x:
   periods := map(f, omegas);              # natural periods
   eigenvectors := map(evalf, evectors);   # get digits down to 5

        omegas := [ 1.8019, 1.2470, 0.44504 ]
        periods := [ 3.4870, 5.0386, 14.118 ]
        eigenvectors := [ -0.59101   -0.73698    0.32799 ]
                        [  0.73698   -0.32799    0.59101 ]
                        [ -0.32799    0.59101    0.73698 ]

>  B := Matrix(2, 2, [-2, 1.0, 1, -1]):
   eigendata := Eigenvectors(Matrix(B, shape = symmetric)):
     # to take advantage of the spectral theorem
   lambdas := eigendata[1]:      # eigenvalues
   evectors := eigendata[2]:     # corresponding eigenvectors - for fundamental modes
   omegas := map(sqrt, -lambdas);          # natural angular frequencies
   f := x -> 2*evalf(Pi)/x:
   periods := map(f, omegas);              # natural periods
   eigenvectors := map(evalf, evectors);   # get digits down to 5

        omegas := [ 1.6180, 0.61804 ]
        periods := [ 3.8834, 10.166 ]
        eigenvectors := [ -0.85065   -0.52573 ]
                        [  0.52573   -0.85065 ]

Remark:  One can interpret the data above in terms of the natural modes for the shaking building (see the mode formula at the end of these notes). In the youtube video the first mode to appear is the slow and dangerous "sloshing mode", where all three floors oscillate in phase, with amplitude ratios 33 : 59 : 74 from the first to the third floor. What's the second mode that gets excited? The third mode?
(They don't show the third mode in the video.) Remark) All of the ideas we've discussed in section 7.4 also apply to molecular vibrations. The eigendata in these cases is related to the "spectrum" of light frequencies that correspond to the natural fundamental modes for molecular vibrations.
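As a closing worked formula (added here for reference, using only the eigendata displayed above): if $A\mathbf{v} = \lambda \mathbf{v}$ with $\lambda = -\omega^2 < 0$, then
$$\mathbf{x}(t) = \big(a\cos(\omega t) + b\sin(\omega t)\big)\,\mathbf{v}$$
solves the unforced building system $\mathbf{x}'' = A\,\mathbf{x}$, since
$$\mathbf{x}''(t) = -\omega^2\big(a\cos(\omega t) + b\sin(\omega t)\big)\mathbf{v} = \big(a\cos(\omega t) + b\sin(\omega t)\big)\,\lambda\mathbf{v} = A\,\mathbf{x}(t).$$
So each eigenpair gives a fundamental mode in which every floor oscillates with angular frequency $\omega = \sqrt{-\lambda}$ and with amplitude ratios given by the entries of $\mathbf{v}$; for example, the eigenvector column $(0.32799, 0.59101, 0.73698)^T$ of the three-story matrix, with the slowest frequency $\omega \approx 0.44504$, is exactly the 33 : 59 : 74 sloshing mode described in the remark above.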