The logistic equation

Didier Gonze

September 30, 2015

Introduction

The logistic equation (sometimes called the Verhulst model or logistic growth curve) is a model of population growth first published by Pierre-François Verhulst (1845, 1847). The model is continuous in time, but a modification of the continuous equation to a discrete quadratic recurrence equation, known as the logistic map, is also widely studied.

The continuous version of the logistic model is described by the differential equation

dN/dt = r N (1 − N/K)    (1)

where r is the Malthusian parameter (the maximum rate of population growth) and K is the carrying capacity (i.e. the maximum sustainable population). Dividing both sides by K and defining X = N/K then gives the differential equation

dX/dt = r X (1 − X)    (2)

The discrete version of the logistic model is written

X_{n+1} = r X_n (1 − X_n)    (3)

Here we will first describe the fascinating properties of the discrete version of the logistic equation and then present the continuous form of the equation.

Discrete logistic equation

Before reading the present notes, you are invited to do some exploration using a computer. The goal is to understand the behaviour of the following innocent-looking difference equation:

X_{n+1} = f(X_n) = r X_n (1 − X_n)    (4)

Let r = 0.5 and X_0 = 0.1, and compute X_1, X_2, ..., X_30 using equation (4). Now repeat the process for r = 2.0, r = 2.7, r = 3.2, r = 3.5, and r = 3.8. We will limit our analysis to 0 ≤ r ≤ 4 (which guarantees that 0 ≤ X_n ≤ 1). As r increases you should observe some changes in the type of solution you get:

 n     r = 0.5   r = 2.0   r = 2.7   r = 3.2   r = 3.5   r = 3.8
 0     0.1000    0.1000    0.1000    0.1000    0.1000    0.1000
 1     0.0450    0.1800    0.2430    0.2880    0.3150    0.3420
 2     0.0215    0.2952    0.4967    0.6562    0.7552    0.8551
 3     0.0105    0.4161    0.6750    0.7219    0.6470    0.4707
 4     0.0052    0.4859    0.5923    0.6424    0.7993    0.9467
 5     0.0026    0.4996    0.6520    0.7351    0.5614    0.1916
 6     0.0013    0.5000    0.6126    0.6231    0.8618    0.5886
 7     0.0006    0.5000    0.6407    0.7515    0.4168    0.9202
 8     0.0003    0.5000    0.6215    0.5975    0.8508    0.2790
 9     0.0002    0.5000    0.6351    0.7696    0.4443    0.7645
 10    0.0001    0.5000    0.6257    0.5675    0.8641    0.6842
 11    0.0000    0.5000    0.6323    0.7854    0.4109    0.8211
 12    0.0000    0.5000    0.6277    0.5393    0.8472    0.5583
 13    0.0000    0.5000    0.6310    0.7951    0.4531    0.9371
 14    0.0000    0.5000    0.6287    0.5214    0.8673    0.2240
 15    0.0000    0.5000    0.6303    0.7985    0.4029    0.6606
 16    0.0000    0.5000    0.6292    0.5148    0.8420    0.8519
 17    0.0000    0.5000    0.6300    0.7993    0.4657    0.4793
 18    0.0000    0.5000    0.6294    0.5133    0.8709    0.9484
 19    0.0000    0.5000    0.6298    0.7994    0.3936    0.1861
 20    0.0000    0.5000    0.6295    0.5131    0.8353    0.5755
 21    0.0000    0.5000    0.6297    0.7995    0.4814    0.9284
 22    0.0000    0.5000    0.6296    0.5131    0.8738    0.2527
 23    0.0000    0.5000    0.6297    0.7995    0.3860    0.7177
 24    0.0000    0.5000    0.6296    0.5130    0.8295    0.7699
 25    0.0000    0.5000    0.6296    0.7995    0.4950    0.6731
 26    0.0000    0.5000    0.6296    0.5130    0.8749    0.8362
 27    0.0000    0.5000    0.6296    0.7995    0.3830    0.5206
 28    0.0000    0.5000    0.6296    0.5130    0.8271    0.9484
 29    0.0000    0.5000    0.6296    0.7995    0.5005    0.1860
 30    0.0000    0.5000    0.6296    0.5130    0.8750    0.5753

The first thing you should notice about eq. (4) is that it is non-linear, since it involves a term X_n^2. Because of this non-linearity, this equation has remarkable, non-trivial properties, but it cannot be solved analytically. We must therefore resort to other methods to explore its behaviour. This equation and its variants still puzzle mathematicians. As we will see in the following, this equation allows us to introduce many fundamental concepts pertaining to non-linear systems.
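These iterations are easy to reproduce numerically. The short Python sketch below (the language and the helper name logistic_map are our own choices, not part of the original notes) iterates eq. (4) for the parameter values listed in the table above:

```python
# Iterate the logistic map X_{n+1} = r X_n (1 - X_n), eq. (4)
def logistic_map(r, x0=0.1, n_steps=30):
    """Return the trajectory X_0, X_1, ..., X_{n_steps}."""
    xs = [x0]
    for _ in range(n_steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

r_values = [0.5, 2.0, 2.7, 3.2, 3.5, 3.8]
trajectories = {r: logistic_map(r) for r in r_values}

# Print one row per iteration step n, one column per value of r
print("n     " + "".join(f"r = {r:<6}" for r in r_values))
for n in range(31):
    print(f"{n:<6}" + "".join(f"{trajectories[r][n]:<10.4f}" for r in r_values))
```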
Steady state and stability

The concept of steady state (or equilibrium) relates to the absence of changes in a system. In the context of difference equations, the steady state X_ss is defined by

X_{n+1} = X_n = X_ss    (5)

For the logistic equation, the steady state thus satisfies

X_ss = r X_ss (1 − X_ss)    (6)

r X_ss^2 − X_ss (r − 1) = 0    (7)

Two steady states are therefore possible:

X_ss1 = 0   and   X_ss2 = 1 − 1/r    (8)

By definition, a stable steady state is a state to which the system returns from neighbouring states, whereas an unstable steady state is a state that the system leaves as soon as a small perturbation moves it out of this state.

Stability is a local property. It can be assessed by applying a small perturbation x_n to the steady state and then looking at the evolution of this perturbation. If the perturbation decreases (x_n > x_{n+1} > x_{n+2} > ...), it is damped out and the steady state is stable. Otherwise (x_n < x_{n+1} < x_{n+2} < ...), the perturbation is amplified, the system leaves its steady state, and the latter is unstable.

Let's apply this to the logistic equation (for the steady states X_ss1 and X_ss2). Consider a small perturbation x_n that moves the system out of its steady state:

X_ss → X_n = X_ss + x_n    (9)

At the next step,

X_{n+1} = X_ss + x_{n+1}    (10)

Combining eq. (4) and eq. (10) we get

x_{n+1} = X_{n+1} − X_ss = f(X_n) − X_ss = f(X_ss + x_n) − X_ss    (11)

Unfortunately, eq. (11) is still not directly usable because it involves the evaluation of the function f at X_ss + x_n, which is unknown. Fortunately, there is a trick to overcome this difficulty. We can exploit the fact that x_n is small compared to X_ss and develop the function as a Taylor expansion around X_ss:

f(X_ss + x_n) = f(X_ss) + (df/dX)|_{X=X_ss} x_n + O(x_n^2)    (12)

The very small terms O(x_n^2) can be neglected, at least close to the steady state (i.e. when x_n is small). This approximation results in a cancellation of terms in eq. (11) because f(X_ss) = X_ss. Thus the approximation

x_{n+1} ≃ f(X_ss) − X_ss + (df/dX)|_{X=X_ss} x_n = (df/dX)|_{X=X_ss} x_n    (13)

can be written as

x_{n+1} ≃ a x_n    (14)

where

a = (df/dX)|_{X=X_ss}    (15)

Clearly, if |a| < 1 the steady state is stable (the perturbation x_n tends to 0 as n increases), while if |a| > 1 the steady state is unstable (the perturbation x_n increases as n increases).

In the case of the logistic equation, we have for the steady state X_ss1

a = (df/dX)|_{X=X_ss1} = (r − 2rX)|_{X=0} = r    (16)

Thus the steady state X_ss1 is stable if r < 1. Similarly, for the second steady state, X_ss2, we have

a = (df/dX)|_{X=X_ss2} = (r − 2rX)|_{X=1−1/r} = 2 − r    (17)

We conclude that the steady state X_ss2 of the logistic equation is stable when 1 < r < 3. The steady state X_ss1 thus becomes unstable when the second steady state X_ss2 starts to exist and becomes stable.

Now the $1000 question is: what happens when r > 3?

Figure 1: Steady states as a function of r. X_ss1 = 0 is stable for r < 1 and unstable beyond; X_ss2 = 1 − 1/r is stable for 1 < r < 3 and unstable beyond.
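The stability condition |a| < 1 can be checked numerically for a few values of r. The following minimal Python sketch (helper names are illustrative, not from the original notes) evaluates the multiplier a = df/dX at both steady states:

```python
# Multiplier a = df/dX = r - 2 r X evaluated at a steady state (eqs. 15-17)
def multiplier(r, x_ss):
    return r - 2.0 * r * x_ss

for r in [0.5, 2.0, 2.7, 3.2]:
    # Note: for r < 1, X_ss2 = 1 - 1/r is negative and lies outside [0, 1]
    for label, x_ss in [("X_ss1 = 0", 0.0), ("X_ss2 = 1 - 1/r", 1.0 - 1.0 / r)]:
        a = multiplier(r, x_ss)
        verdict = "stable" if abs(a) < 1 else "unstable"
        print(f"r = {r}: {label}: a = {a:+.2f} ({verdict})")
```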
Graphical method

In this section we examine a simple technique to visualize the solution of a first-order difference equation such as the logistic equation. First, let us draw the graph of f(X), the next-generation function. In our case, f(X) = rX(1 − X), so that f(X) is a parabola passing through 0 at X = 0 and X = 1, with a maximum at X = 1/2 (red curve in fig. 2). Choosing an initial value X_0, we can read X_1 = f(X_0) directly from the parabolic curve.

To continue finding X_2 = f(X_1), X_3 = f(X_2), and so on, we need to evaluate f(X) at each succeeding value of X_n in the same way. One way of achieving this is to use the line X_{n+1} = X_n to reflect each value of X_{n+1} back to the X_n axis (blue trajectory in fig. 2). This process, which is equivalent to bouncing between the line X_{n+1} = X_n (the diagonal) and the curve X_{n+1} = f(X_n) (the parabola), is a recursive graphical method (also called cobwebbing) for determining the population level at each iterative step n.

As we can see in figure 2 (for r = 2.8), the sequence of points converges to a single point at the intersection of the parabola with the diagonal line. This point satisfies X_{n+1} = X_n; it is by definition the steady state of the equation. Recall that the condition for stability is |a| = |df/dX|_{X=X_ss} < 1. Interpreted graphically, this condition means that the tangent line to f(X) at the steady state must have a slope not steeper than 1.

Figure 2: Graphical resolution (cobweb diagram) for r = 2.8. Top panel: cobweb in the (X_n, X_{n+1}) plane, with X_0, X_1, X_2 indicated; bottom panel: the corresponding time series X_n versus step n.

In figure 3, several time sequences corresponding to different values of the parameter r are shown. When the parameter r increases, the steepness of the parabola increases, which makes the slope of the tangent at the steady state steeper, so that eventually the stability condition is violated. The steady state then becomes unstable and the system undergoes oscillations. When r increases further, the periodic solution becomes unstable in turn and higher-period oscillations are observed. When all the cycles have become unstable, chaos is observed. In the next section, we discuss the period-2 oscillations observed just beyond r = 3 and their stability.

Figure 3: Graphical resolution for various values of r (from top to bottom: r = 2, 3.2, 3.5, 3.8). In the left panels, the complete trajectory from the initial condition (here X_0 = 0.2) is shown. In the middle panels, the transients have been removed. In the right panels, the complete time series is shown.
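The cobweb construction of figures 2 and 3 can be drawn with a few lines of code. The sketch below is one possible implementation, assuming numpy and matplotlib are available; these choices, and the function name cobweb, are ours rather than the author's:

```python
import numpy as np
import matplotlib.pyplot as plt

def cobweb(r, x0=0.2, n_steps=30):
    """Cobweb diagram of the logistic map X_{n+1} = r X_n (1 - X_n)."""
    f = lambda x: r * x * (1.0 - x)
    x = np.linspace(0.0, 1.0, 200)
    plt.plot(x, f(x), "r-", label="X_{n+1} = f(X_n)")   # the parabola
    plt.plot(x, x, "k-", label="X_{n+1} = X_n")         # the diagonal
    xn = x0
    for _ in range(n_steps):
        xn1 = f(xn)
        plt.plot([xn, xn], [xn, xn1], "b-", lw=0.8)     # vertical step to the parabola
        plt.plot([xn, xn1], [xn1, xn1], "b-", lw=0.8)   # horizontal step back to the diagonal
        xn = xn1
    plt.xlabel("X_n")
    plt.ylabel("X_{n+1}")
    plt.title(f"r = {r}")
    plt.legend()
    plt.show()

cobweb(2.8)   # reproduces the qualitative behaviour of fig. 2
```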
Beyond r = 3...

We present here the approach proposed by May (1976) to show that, as r increases slightly beyond r = 3, stable oscillations of period 2 appear. A stable oscillation is a periodic behaviour that is maintained despite small perturbations. Period 2 implies that successive generations alternate between two fixed values of X, which we will call X_1* and X_2*. Thus period-2 oscillations (sometimes called two-point cycles) simultaneously satisfy the two equations:

X_{n+1} = f(X_n)    (18)

X_{n+2} = X_n    (19)

These two equations can be combined to give:

X_{n+2} = f(X_{n+1}) = f(f(X_n)) = X_n    (20)

Let us call the composite function by the new name g:

g(X) = f(f(X))    (21)

and let k be the new index that skips every two generations:

k = n/2    (22)

With this new notation, equation (20) becomes

X_{k+1} = g(X_k)    (23)

The steady state X* of this equation, i.e. a fixed point of g(X), is the period-2 solution of equation (4). Note that there must be two such values, X_1* and X_2*, since by assumption X oscillates between two fixed values.

By this trick, we have reduced the new problem to one with which we are familiar. Indeed, the stability of a period-2 oscillation can be determined by using the method described above. Briefly, suppose an initial small perturbation x: X → X + x. Stability implies that the periodic behaviour will be re-established, i.e. that the deviation x from this behaviour will decrease. This happens when

|dg/dX|_{X=X*} < 1    (24)

By the chain rule, the derivative of g = f(f(X)) at a point of the cycle is the product of the derivatives of f at the two points of the cycle, so this condition is equivalent to

|df/dX|_{X=X_1*} · |df/dX|_{X=X_2*} < 1    (25)

From this equation, we conclude that the stability of period-2 oscillations depends on the magnitude of df/dX at both X_1* and X_2*.

We will now apply this approach to the logistic equation. First we have to determine the two fixed points X_1* and X_2* of the composite map. To do so, we first make the composite function g(X) = f(f(X)) explicit:

g(X) = r (rX(1 − X)) (1 − rX(1 − X)) = r^2 X (1 − X) (1 − rX(1 − X))    (26)

Next, in this equation, we set X* = g(X*) to obtain

X* = r^2 X* (1 − X*) (1 − rX*(1 − X*))

Dividing by X* (which discards the trivial root X* = 0),

1 = r^2 (1 − X*) (1 − rX*(1 − X*))

0 = r^2 (1 − X*) (1 − rX*(1 − X*)) − 1    (27)

In order to solve this third-order polynomial equation, we will make use of the fact that the solution of eq. (7) is also a solution of eq. (27). Indeed,

X_ss = X_n = X_{n+1} = X_{n+2}    (28)

X_ss = f(f(X_ss)) = f(X_ss) = g(X_ss)    (29)

As we have seen above, the non-trivial solution of eq. (7) is X = 1 − 1/r, and this must be a solution of eq. (27). This enables us to factor the polynomial, so that the problem is reduced to solving a quadratic equation. To do this, we expand eq. (27):

X^3 − 2X^2 + (1 + 1/r) X + (1/r^3 − 1/r) = 0    (30)

Factoring out the term (X − (1 − 1/r)), we get

(X − (1 − 1/r)) (X^2 − (1 + 1/r) X + (1/r^2 + 1/r)) = 0    (31)

The second factor is a quadratic expression whose roots are solutions of the equation

X^2 − ((r + 1)/r) X + (r + 1)/r^2 = 0    (32)

Hence

X* = (1/2) [ (r + 1)/r ± sqrt( ((r + 1)/r)^2 − 4 (r + 1)/r^2 ) ]    (33)

X_1*, X_2* = [ (r + 1) ± sqrt((r − 3)(r + 1)) ] / (2r)    (34)

The possible roots, denoted X_1* and X_2*, are real if r < −1 or r > 3. Thus, for positive values of r, these fixed points of the two-generation map f(f(X_n)) exist only when r > 3. Note that this occurs exactly when X_ss = 1 − 1/r ceases to be stable.

With X_1* and X_2* computed, it is possible (albeit algebraically messy) to test their stability. To do so, it is necessary to compute dg/dX and to evaluate this derivative at the values X_1* and X_2*. When this is done, we obtain a second range of behaviour: the two-point cycle is stable for 3 < r < 1 + √6 ≈ 3.449.

In fig. 4 the composite (fourth-order) function g(X) of eq. (26) is represented in red. The steady states correspond to the intersections of this curve with the diagonal line. Their stability is determined by the slope dg/dX at the steady states.

Again, we could ask a $10000 question: what happens beyond r = 3.449? In theory, the trick used in exploring period-2 oscillations could be used for any higher period. Because the analysis becomes increasingly cumbersome, this method will not be applied further here. We will rather discuss the results obtained by numerical simulations.

Figure 4: Determination of the stability of the period-2 cycles. The second-iterate map X_{n+2} versus X_n is shown for r = 2, r = 2.8, and r = 3.5.
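The fixed points (34) and the stability condition (25) are easy to check numerically. A minimal Python sketch (the helper names and the sample values of r are our own choices):

```python
import math

def period2_points(r):
    """Period-2 points of the logistic map, eq. (34); they are real only for r > 3."""
    sq = math.sqrt((r - 3.0) * (r + 1.0))
    return (r + 1.0 - sq) / (2.0 * r), (r + 1.0 + sq) / (2.0 * r)

def fprime(r, x):
    """df/dX = r - 2 r X."""
    return r - 2.0 * r * x

# Stability of the two-point cycle should be lost just above r = 1 + sqrt(6) ~ 3.449
for r in [3.2, 3.4, 3.5]:
    x1, x2 = period2_points(r)
    slope = fprime(r, x1) * fprime(r, x2)   # dg/dX on the cycle, cf. eq. (25)
    verdict = "stable" if abs(slope) < 1 else "unstable"
    print(f"r = {r}: X1* = {x1:.4f}, X2* = {x2:.4f}, dg/dX = {slope:+.3f} ({verdict})")
```

For r = 3.2 this returns X_1* ≈ 0.5130 and X_2* ≈ 0.7995, the two values the trajectory alternates between in the exploration table of the introduction.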
Bifurcation diagram

One way of summarizing the range of behaviours encountered when r increases is to construct a bifurcation diagram. Such a diagram gives the value and stability of the steady state and of the periodic orbits (fig. 5). In this diagram, the asymptotic values reached by X_n (the steady state, or the local maxima and minima of the oscillations) are reported for each value of r. The transition from one regime to another is called a bifurcation.

Figure 5: Bifurcation diagram. The inset is a zoom on the right part of the diagram. This diagram is obtained by computing, for each value of r, the steady state or the maxima and minima of X_n after the transients, e.g. from X_100 to X_1000.

Period doubling and chaos

The schematic representation shown in fig. 6 highlights the structure of the bifurcation diagram: as r increases, the system successively undergoes cycles of period 2, 4, 8, 16, ... Such a sequence is called a period-doubling cascade. It ultimately leads to a chaotic attractor. This is the most typical "route to chaos". Note that on a chaotic attractor, X_n never takes the same value twice.

Figure 6: Schematic representation of the bifurcation diagram. The chaotic attractor is obtained after an infinite number of period-doubling bifurcations.

Feigenbaum (1978) studied the behaviour of the ratio

δ_k = (r_k − r_{k−1}) / (r_{k+1} − r_k)    (35)

where r_k denotes the value of r at which the k-th period-doubling bifurcation occurs, and found that

lim_{k→∞} δ_k = 4.66920160910299067185320382...    (36)

This limit appears to be a fundamental constant and is referred to as the Feigenbaum constant.

Periodic windows and intermittency

In the chaotic domain, there are windows of periodic behaviour. A large window with a period-3 cycle is visible in the bifurcation diagram (see the zoom in the inset). This period-3 cycle can be explained by cobweb diagrams of the third-iterate map, as was done for the period-2 cycle (Fig. 7).

Interestingly, at the border just before the period-3 window, we can observe something that looks like a period-3 cycle but is interrupted by irregular "bursts". This behaviour is called intermittency (Fig. 8). As the control parameter r is moved further away from the periodic window, the irregular bursts become more frequent until the system becomes fully chaotic. This progression is known as the "intermittency route to chaos".

Figure 7: Graphical analysis of period-3 cycles: the third-iterate map X_{n+3} versus X_n is shown for r = 3.8 and r = 3.85. Blue dots: stable points; open dots: unstable points.

Figure 8: Intermittency (r = 3.8282). Top: the third-iterate map X_{n+3} versus X_n, with a zoom near the diagonal; bottom: the time series X_n, showing nearly period-3 episodes interrupted by irregular bursts.
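A bifurcation diagram such as the one in fig. 5 can be generated by discarding transients and plotting the remaining iterates for each value of r. A minimal sketch, assuming numpy and matplotlib are available (the parameter grid and iteration counts below are illustrative, not necessarily those used for the figure):

```python
import numpy as np
import matplotlib.pyplot as plt

r_values = np.linspace(2.5, 4.0, 1500)
n_transient, n_keep = 200, 100

for r in r_values:
    x = 0.2
    for _ in range(n_transient):            # discard the transients
        x = r * x * (1.0 - x)
    xs = np.empty(n_keep)
    for i in range(n_keep):                 # keep the values on the attractor
        x = r * x * (1.0 - x)
        xs[i] = x
    plt.plot(np.full(n_keep, r), xs, ",k")  # one tiny marker per retained value

plt.xlabel("Parameter r")
plt.ylabel("X_n (after transients)")
plt.show()
```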
Sensitivity to initial conditions and Lyapunov exponent

Chaotic behaviours are characterized by a high sensitivity to initial conditions: starting from initial conditions arbitrarily close to each other, the trajectories rapidly diverge (Fig. 9). Said otherwise, a small difference in the initial condition produces large differences in the long-term behaviour of the system. This property is sometimes called the "butterfly effect".

Figure 9: Sensitivity to initial conditions. Both curves have been obtained for r = 3.8 but differ by their initial conditions: X_0 = 0.4 for the blue curve and X_0 = 0.41 for the red curve.

Another way to appreciate the sensitivity to initial conditions is to observe the evolution of a small interval of initial conditions (Fig. 10).

Figure 10: Evolution of the interval of initial conditions [0.47, 0.48] (left panel: interval versus iteration; right panel: frequency distribution of X). The red dots indicate the initial boundaries of the interval (i.e. X_0 = 0.47 and X_0 = 0.48). These results have been obtained for r = 4. The values obtained after some iterations cover the whole range [0, 1], but they are not distributed uniformly.

The sensitivity to initial conditions can be quantified by the Lyapunov exponent. Given an initial condition x_0, consider a nearby point x_0 + δ_0, where the initial separation δ_0 is extremely small. Let δ_n be the separation after n iterations. If

|δ_n| ≈ |δ_0| e^{nλ}    (37)

then λ is called the Lyapunov exponent. A positive value is a signature of chaos.

Figure 11: Lyapunov exponent (schematic representation).

A precise and computationally useful formula for λ can be derived. By definition,

δ_n = f^n(x_0 + δ_0) − f^n(x_0)    (38)

From eq. (37) we obtain

λ = (1/n) ln |δ_n / δ_0| = (1/n) ln |(f^n(x_0 + δ_0) − f^n(x_0)) / δ_0| = (1/n) ln |(f^n)'(x_0)|    (39)

where we have taken the limit δ_0 → 0 and applied the definition of the derivative:

f'(x) = lim_{Δx→0} (f(x + Δx) − f(x)) / Δx    (40)

The term inside the logarithm can be expanded by the chain rule:

(f^n)'(x_0) = (f(f(f(...f(x_0)))))' = f'(x_0) · f'(f(x_0)) · f'(f(f(x_0))) · ... = f'(x_0) · f'(x_1) · f'(x_2) ··· f'(x_{n−1}) = ∏_{i=0}^{n−1} f'(x_i)    (41)

Hence:

λ = (1/n) ln |∏_{i=0}^{n−1} f'(x_i)| = (1/n) ∑_{i=0}^{n−1} ln |f'(x_i)|    (42)

If this expression has a limit as n → ∞, we define that limit as the Lyapunov exponent for the trajectory starting at x_0:

λ = lim_{n→∞} [ (1/n) ∑_{i=0}^{n−1} ln |f'(x_i)| ]    (43)

Note that λ depends on x_0. However, it is the same for all x_0 in the basin of attraction of a given attractor. The sign of λ is characteristic of the attractor type: for stable fixed points (steady states) and (limit) cycles, λ is negative; for chaotic attractors, λ is positive.

For the logistic map,

f(x) = r x (1 − x)    (44)

f'(x) = r − 2rx    (45)

and we have, by definition:

λ(r) = lim_{n→∞} [ (1/n) ∑_{i=0}^{n−1} ln |r − 2r x_i| ]    (46)

In practice, the value of λ converges after a few hundred iterations:

λ(r) ≈ (1/1000) ∑_{i=0}^{999} ln |r − 2r x_i|    (47)

Figure 12 shows the Lyapunov exponent computed for the logistic map, for 3 < r < 4. We notice that λ remains negative for r < r* ≈ 3.57 and approaches 0 at each period-doubling bifurcation. The negative spikes correspond to the 2^n-cycles. The onset of chaos is visible near r = r*, where λ becomes positive. For r > r*, windows of periodic behaviour are clearly visible (spikes with λ < 0).

Figure 12: Lyapunov exponent for 3 < r < 4 (transients = 200; number of iterations = 5000).
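Equation (47) translates directly into code. The sketch below estimates λ(r) along a single trajectory, using the transient and iteration counts quoted in fig. 12 (the function name and the starting point x0 = 0.2 are our own choices):

```python
import math

def lyapunov(r, x0=0.2, n_transient=200, n_iter=5000):
    """Estimate the Lyapunov exponent of the logistic map, eqs. (42) and (47)."""
    x = x0
    for _ in range(n_transient):                 # let the trajectory settle on the attractor
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(r - 2.0 * r * x))  # ln |f'(x_i)|, with f'(x) = r - 2 r x
        x = r * x * (1.0 - x)
    return total / n_iter

for r in [2.8, 3.2, 3.5, 3.8, 4.0]:
    print(f"r = {r}: lambda ~ {lyapunov(r):+.3f}")
```

For r = 4 the estimate should be close to ln 2 ≈ 0.693, the known exact value for the fully chaotic logistic map, which provides a simple sanity check.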
Generalization of the logistic map

Marotto (1982) studied the following discrete map:

X_{n+1} = r X_n^2 (1 − X_n)    (48)

The analysis of this system is left as an exercise.

The logistic map can be generalized as:

X_{n+1} = r X_n^p (1 − X_n)^q    (49)

The dynamical properties of such equations have been investigated notably by Levin & May (1976), Hernandez-Bermejo & Brenig (2006), Briden & Zhang (1994, 1995), and others.

Delayed logistic map

A delayed version of the discrete logistic equation was proposed by Maynard Smith (1968):

X_{n+1} = r X_n (1 − X_{n−1})    (50)

Note that if we define the new variable Y_n = X_{n−1}, this second-order equation can be converted into a system of two first-order equations:

X_{n+1} = r X_n (1 − Y_n)
Y_{n+1} = X_n    (51)

The analysis of these equations is left as an exercise.

A variant of the logistic map: the tent map

The tent map is defined by:

X_{n+1} = r X_n          if X_n < 1/2
X_{n+1} = r (1 − X_n)    if X_n ≥ 1/2    (52)

The analysis of this system is left as an exercise.

Continuous logistic equation

The continuous form of the logistic equation is written

dX/dt = r X (1 − X)    (53)

This equation can be solved either numerically, using standard integration algorithms, or analytically.

From the continuous equation to the discrete map

Let's apply Euler's method to the differential equation (taking r = 1 in eq. (53) for simplicity):

dX/dt = X (1 − X)    (54)

X_{n+1} − X_n = X_n (1 − X_n) Δt    (55)

X_{n+1} = X_n + X_n (1 − X_n) Δt = X_n (1 + Δt − X_n Δt)    (56)

If we define Y_n = (Δt / (1 + Δt)) X_n and r = 1 + Δt, then we find

Y_{n+1} = (Δt / (1 + Δt)) X_n (1 + Δt − X_n Δt) = Y_n (r − r Y_n) = r Y_n (1 − Y_n)    (57)

which is the discrete logistic map.

Solving numerically the continuous equation

The Euler method does not work well for solving the continuous logistic equation. Other integration methods (such as Runge-Kutta methods), however, give reliable solutions (Fig. 13).

Figure 13: Numerical solution of the logistic equation obtained for r = 0.5, for various initial conditions X(0).

Solving analytically the continuous equation

The logistic equation (53) can be solved analytically and the solution is

X(t) = 1 / (1 + (1/X_0 − 1) e^{−rt})    (58)

where X_0 = X(0). The demonstration is left as an exercise.

Solution map (Ruelle plot)

It sometimes occurs that we have a differential equation for a system, but we are only interested in the behaviour at fixed time intervals. For instance, the logistic differential equation is sometimes used to model population growth, but we might only have census data at intervals of five or ten years. It then makes little sense to look at the whole continuous solution.

Suppose for instance that we want solutions of the logistic differential equation (not solutions of some numerical approximation like the Euler method iterates) at fixed intervals T. The solution of the logistic differential equation is (cf. eq. (58))

X(t) = 1 / (1 − e^{−rt} (X_0 − 1)/X_0)    (59)

X(t) = X_0 / (X_0 − e^{−rt} (X_0 − 1))    (60)

Solving eq. (60) for the exponential gives

e^{−rt} = X_0 (X(t) − 1) / (X(t) (X_0 − 1))    (61)

At time t + T,

X(t + T) = X_0 / (X_0 − e^{−r(t+T)} (X_0 − 1))    (62)

If we substitute for the exponential in eq. (62) using eq. (61), we get, after a little rearranging,

X(t + T) = X(t) / (X(t) − e^{−rT} (X(t) − 1))    (63)

This last equation is a solution map. It lets us calculate X(t + T) knowing only X(t) and some parameters. Unlike equation (57), which gives approximations to the solution of the logistic differential equation at fixed intervals Δt, equation (63) is exact. We were able to obtain this equation because we were able to solve the differential equation. In general, of course, we can't do that, but we can still obtain numerical representations of the solution map by sampling the numerical solution (obtained with a good numerical method, of course) at fixed time intervals and plotting X(t + T) versus X(t). This is sometimes called a Ruelle plot.

Figure 14: Ruelle plot of the logistic equation (r = 0.5). The continuous solution is compared with the solution map (T = 1) and with the Euler iterates (dt = 0.5).

Generalization of the logistic equation

As for the discrete version, the continuous logistic equation can be generalized:

dX/dt = r X^p (1 − X/K)^q    (64)
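Since eq. (63) is exact, iterating it from X_0 must reproduce the analytic solution (58) sampled every T time units. The following sketch cross-checks the two (the parameter values r = 0.5, X_0 = 0.1, T = 1 are illustrative choices, not taken from the figures):

```python
import math

def x_exact(t, x0, r):
    """Analytic solution of dX/dt = r X (1 - X), eq. (58)."""
    return 1.0 / (1.0 + (1.0 / x0 - 1.0) * math.exp(-r * t))

def solution_map(x, r, T):
    """Exact solution map, eq. (63): X(t + T) computed from X(t)."""
    return x / (x - math.exp(-r * T) * (x - 1.0))

r, x0, T = 0.5, 0.1, 1.0
x = x0
for k in range(11):
    t = k * T
    print(f"t = {t:4.1f}: map = {x:.6f}   exact = {x_exact(t, x0, r):.6f}")
    x = solution_map(x, r, T)   # advance by one sampling interval T
```

The two columns should agree to machine precision, which is precisely what distinguishes the solution map from an approximate discretization such as the Euler iterates of eq. (57).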
Delay version of the logistic equation

A delay variant of the logistic equation was studied by Cunningham (1954), Wangersky & Cunningham (1956), and more recently by Arino et al. (2006). It can be formulated as:

dX/dt = r X (1 − X(t − τ)/K)    (65)

References

Text books

• Edelstein-Keshet L (2005; originally 1988) Mathematical Models in Biology. SIAM Editions.
• Glass L, Mackey MC (1988) From Clocks to Chaos. Princeton Univ. Press.
• Murray JD (1989) Mathematical Biology. Springer, Berlin.
• Nicolis G (1995) Introduction to Nonlinear Science. Cambridge Univ. Press.

Original papers

• Verhulst PF (1845) Recherches mathématiques sur la loi d'accroissement de la population. Nouv. mém. de l'Académie Royale des Sci. et Belles-Lettres de Bruxelles 18:1-41.
• Verhulst PF (1847) Deuxième mémoire sur la loi d'accroissement de la population. Mém. de l'Académie Royale des Sci., des Lettres et des Beaux-Arts de Belgique 20:1-32.
• Cunningham WJ (1954) A non-linear differential-difference equation of growth. Proc Natl Acad Sci USA 40:708-713.
• Wangersky PJ, Cunningham WJ (1956) On time lag in equations of growth. Proc Natl Acad Sci USA 42:699-702.
• Maynard Smith J (1968) Mathematical Ideas in Biology. Cambridge University Press (p. 23).
• May RM (1974) Biological populations with nonoverlapping generations: stable points, stable cycles, and chaos. Science 186:645-647.
• May RM (1975) Biological populations obeying difference equations: stable points, stable cycles, and chaos. J Theor Biol 51:511-524.
• May RM (1976) Simple mathematical models with very complicated dynamics. Nature 261:459-467.
• Levin SA, May RM (1976) A note on difference-delay equations. Theor Popul Biol 9:178-187.
• Feigenbaum MJ (1978) Quantitative universality for a class of non-linear transformations. J Stat Phys 19:25-52.
• Marotto FR (1982) The dynamics of a discrete population model with threshold. Math Biosci 58:123-128.

More recent papers

• Briden W, Zhang S (1994) Stability of solutions of generalized logistic difference equations. Periodica Mathematica Hungarica 9:81-87.
• Arino J, Wang L, Wolkowicz GS (2006) An alternative formulation for a delayed logistic equation. J Theor Biol 241:109-119.
• Hernandez-Bermejo B, Brenig L (2006) Some global results on quasipolynomial discrete systems. Nonlinear Anal Real World Appl 7:486-496.