High Frequency Shaking induced by Low Frequency
Forcing: Periodic Oscillations in a spring cable system
L.D. Humphreys, Rhode Island College, Providence, RI 02908-1991
P.J. McKenna, University of Connecticut, Storrs, CT 06269-2000
K.M. O'Neill, Aerospace Corporation, Los Angeles, CA 90009-2957
Abstract

We investigate the response of a piecewise linear spring to very low frequency forcing. It has been observed that the bifurcation picture is very complicated, with many multiple-periodic responses having a high-frequency component. We show many new aspects of this phenomenon, including what happens if the nonlinearity is smoothed out. Methods used include Newton's method, in combination with continuation algorithms, and a steepest descent method for finding new isolated branches of solutions.
1 Introduction
Beginning with the publication of [1], an interesting new principle in nonlinear oscillations has been observed: large-amplitude, very low frequency external forces can give rise to oscillations in nonlinear models with a pronounced high-frequency aspect. This is an important and new observation because generally, when mechanical systems begin to exhibit any high-frequency oscillation, the "default" explanation is that there must be some high-frequency forcing term present to cause this. (Usually, this is ascribed to aerodynamic forces.)
The basic idea of the model that we investigate was developed in [2, 3, 4, 5, 6, 7]. A
nice exposition can be found in [8]. This model was motivated by early accounts of vertical
oscillations in suspension bridges. A common feature of these oscillations was that in identical
wind conditions, multiple periodic behaviors were observed [9, 10]. The key feature focused
on in this model is that cables resist expansion but not compression.
If we simplify a suspension bridge to a single oscillating particle, we expect to find two
different restoring forces acting on it. The first is a unilateral force from the cables that
hold the bridge up (but not down), and the second is a linear force that resists deflection
from equilibrium in both directions. This second force is due to the inherent stiffness of the
roadbed. For example, the George Washington Bridge is very stiff, whereas the original Tacoma Narrows Bridge was extremely flexible and thus had little resistance to vertical deflection. The weight of the bridge pushes the bridge down and extends the cables to an equilibrium around which the forces from the cables remain linear. However, if the upward deflections become too large, the cables slacken, causing this part of the restoring force to disappear.
These considerations lead to the following simplified mathematical model. Consider a
mass attached to a vertical spring with a cable providing additional support. While the
spring causes a restoring force in both the upward and downward directions, the cable only
resists expansion. Let y(t) denote the downward displacement of the mass at time t, where
y = 0 denotes the position before elongation of the spring by the addition of the mass (see
Fig. 1).
Figure 1: Two different types of forces hold up the mass: a linear spring that resists displacement in either direction, and a cable-like force that resists displacement only in the
downward direction.
There are three main forces acting on the mass: a restoring force from the cable, a linear
restoring force from the spring, and gravity. Without the cable, the restoring force in the
linear model of the spring is given by k_1 y. A cable has a restoring force of k_2 y^+, where y^+ = max{y, 0}, because it resists deflections in the downward direction (expansion) but not in the upward direction (compression). For example, in the George Washington Bridge, k_1 would be large and k_2 relatively small. In the more flexible Tacoma Narrows Bridge, k_1 would be small and k_2 relatively large. Thus the combined restoring force is given by ay^+ − by^-, where a ≥ b and y^- = −min{y, 0}. We are interested in the response of this
system to periodic forcing, and we take f (t) = λ sin µt as our generic periodic force. A small
amount of damping leads us to the following model:
y'' + 0.01y' + ay^+ − by^- = 10 + λ sin(µt).    (1)
Observe that if λ = 0, there is a natural equilibrium y ≡ 10/a and as long as the particle
stays in the region y ≥ 0, the oscillator will obey the linear equation
y'' + 0.01y' + ay = 10 + λ sin(µt).    (2)
Since we fully understand this equation, we can safely say that as long as the initial values
and forcing λ stay reasonably small, any low-frequency forcing will result in a corresponding
low-frequency oscillation. Naturally, at this stage, we can say very little about what happens
if the particle is forced by the external forces into the region y ≤ 0.
A word of warning: we are interested in the violence of the oscillation induced by the
external shaking. Thus, with a certain abuse of language, we measure the amplitude of the
oscillation as half the difference between the maximum and minimum of the oscillation over a
time-period. Strictly speaking, this is really the amplitude of the distance from equilibrium,
not of the actual solution y(t).
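To make the setup concrete, here is a minimal numerical sketch (not the production code used for the results below; the value of λ, the tolerances, and the length of the transient are arbitrary illustrative choices) that integrates Eq. (1) for the "17, 1, 0.17" problem and measures the amplitude in the sense just described:

    import numpy as np
    from scipy.integrate import solve_ivp

    a, b, lam, mu = 17.0, 1.0, 9.94, 0.17          # the "17, 1, 0.17" problem
    T = 2 * np.pi / mu                             # period of the forcing

    def rhs(t, z):
        # z = (y, y'); restoring force a*y^+ - b*y^- equals a*max(y,0) + b*min(y,0)
        y, v = z
        return [v, -0.01 * v - a * max(y, 0.0) - b * min(y, 0.0)
                + 10.0 + lam * np.sin(mu * t)]

    # run through a long transient, then sample one more forcing period
    z = solve_ivp(rhs, [0.0, 200 * T], [10 / a, 0.0], rtol=1e-9, atol=1e-9).y[:, -1]
    sol = solve_ivp(rhs, [0.0, T], z, rtol=1e-9, atol=1e-9, dense_output=True)
    y = sol.sol(np.linspace(0.0, T, 4000))[0]
    print(0.5 * (y.max() - y.min()))               # "amplitude" as defined above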
The earliest investigations [8, 11] concentrated on the equation

y'' + 0.01y' + 17y^+ − 13y^- = 10 + λ sin(µt)    (3)

with µ in the neighborhood of 4. For λ small, this equation has multiple periodic solutions: one being the intuitive linear oscillation close to equilibrium, but in addition several large amplitude oscillations that change sign.
This paper focuses on a different phenomenon. We make two significant changes to the
model. First, we make the model more nonlinear by increasing the dependence on the cable,
which translates into increasing the gap between a and b. Second, we investigate the response
of this system to extremely low frequency forcing. The model becomes
y'' + 0.01y' + 17y^+ − y^- = 10 + λ sin(0.17t),    (4)

and we will search for periodic responses, namely solutions satisfying the boundary conditions y(0) = y(T), y'(0) = y'(T) for various periods T as we vary λ from small to large amplitudes.
When describing the structure of the set of solutions of a nonlinear equation that includes
a parameter, we will use the language of bifurcation theory [12, 13] to describe the solution
set as a plot of the amplitude of the solution versus a parameter of the equation, in our case
the amplitude of the forcing term.
We expect to see curves of solutions, arcs of which correspond to families of either stable or unstable solutions. A curve of solutions can change from stable to unstable. This usually happens in one of two ways. The curve, viewed as parametrized by our λ, can reverse direction (this is called a turning point). As one rounds the turning point, one expects the solutions to change from stable to unstable or vice versa.
A second situation is where two different pieces of curve intersect, typically when there is period doubling. Here again, as one crosses the point on, say, a T-periodic curve where a 2T-periodic curve intersects it, one expects a change in stability along the T-periodic curve. As we shall see, this change in stability of the T-periodic curve can alert us to the possibility of period doubling.
In [14], we introduced the method of steepest descent along with the pseudoarclength continuation method for further study of Eq. (1) with µ = 0.17, in order to investigate the bifurcation curve and behavioral traits of the solution for varying λ. We spent considerable time and effort building the main branch, as it was much more intricate than the "less" nonlinear problems. In searching for methods to create the curve, the method of steepest descent for finding new solutions proved most helpful when the continuation algorithm failed, or when finding the many isolae (closed bifurcation curves not attached to any other branch) which had periodicity of T, and even nT. Still other branches were found at points associated with unstable solutions which were not near turning points. This led the authors to suspect bifurcation into new branches. There was a need to better identify regions of unstable solutions and to characterize these regions as those associated with turning points or as a starting point for a new branch.
The new feature in [1], [15], and [14] was the high frequency component, which was found to increase in severity as λ increased, and which appeared as soon as the motion of the roadbed fell into the range y < 0 in the presence of low frequency forcing. We utilize Fourier analysis to investigate this in Section 4.
In this treatment we continue the study of the "17, 1, 0.17" problem (i.e., a = 17, b = 1, and µ = 0.17). Using the continuation algorithm we have performed a more extensive investigation of the problem. We investigate stable and unstable solutions as mentioned above using Floquet analysis (see Section 2). With this technique and the help of the steepest descent method, we will get considerably more information about the bifurcation curve.

In addition, we investigate the effects of smoothing the nonlinear restoring force in Section 5. The question of what behavior, if any, is lost was posed to the authors in [15, 14, 1]. We will show the resulting bifurcation curves and conclude that they show virtually the same behavior as in the piecewise linear case.
A final remark: for such a highly idealised thought experiment as this, it probably does not make sense to place undue emphasis on units or numerical values for our constants. In particular, the spring constants for expansion and compression of 17 and 1 are really meant to indicate a strong resistance to expansion and a weak resistance to compression. The constant 10 is motivated by the gravitational acceleration of 9.8 m/sec^2 in the MKS system. The damping coefficient of 0.01 is common in the engineering literature for primarily steel structures.
2 Numerical techniques
We will now outline the various numerical techniques used to understand our model. More
detailed descriptions can be found in [1], [14], and [15]. The most basic of all methods is to
treat the boundary value problem as an initial value problem with various initial positions
and velocities and then observe long-term behavior. Surprisingly, different sets of initial
conditions yield different periodic solutions. Although simple to execute, this method has one major drawback: as one might expect, it only converges to stable solutions. Even if we started exactly on an unstable solution, the small amount of numerical error would quickly send it to another region of phase space.
Continuing to treat the problem as an initial value problem, we often employ a Newton solver. This has the advantage of improving the numerical accuracy of our solutions. Newton schemes require the initial starting point to be sufficiently close to the solution, and thus it is a tool for refining solutions such as the ones found through long-term runs. To employ Newton's method we search for the ideal initial position and velocity by defining a function of two variables (initial position and velocity) that gives the difference between the starting values and the values reached when one period has been completed in a traditional solver such as Runge-Kutta. The zeros of this new function are the sought-after initial conditions, and thus a one-period run with these values will yield the desired solution. Newton's method is able to refine unstable solutions if we know where they are located. Being only a local method, however, Newton's method does not help us search for unstable solutions.
To describe the Newton solver in more detail, we let c and d denote the initial position and initial velocity, respectively, and let G(c, d) denote the position and velocity of a solution to the equation at t = 2π/µ. Thus, finding periodic solutions of (1) is equivalent to finding zeros of

F(c, d) = (c, d) − G(c, d).    (5)
To implement Newton's method, we follow the iterative scheme

z_{n+1} = z_n − (F')^{-1}[z_n] F[z_n],    (6)

where z_n = (c_n, d_n) and the Jacobian of F is

F'[z_n] = [ 1 − ∂G_1/∂c     −∂G_1/∂d   ]
          [   −∂G_2/∂c    1 − ∂G_2/∂d  ].
As one can see, we must calculate the partial derivatives in the Jacobian matrix given
in Eq. (6). One way to do this is to use a central difference method. A more effective
technique is to differentiate equation (1) with respect to initial conditions. This results in
two additional differential equations. Thus, to implement one iteration of Newton’s method,
we have to simultaneously solve the following coupled system of three equations,
y'' + 0.01y' + ay^+ − by^- = 10 + λ sin(µt),    (7)

(∂y/∂c)'' + 0.01(∂y/∂c)' + a·(∂y/∂c if y > 0; 0 if y ≤ 0) − b·(0 if y > 0; −∂y/∂c if y ≤ 0) = 0,    (8)

(∂y/∂d)'' + 0.01(∂y/∂d)' + a·(∂y/∂d if y > 0; 0 if y ≤ 0) − b·(0 if y > 0; −∂y/∂d if y ≤ 0) = 0,    (9)

with the initial conditions

y(0) = c,  y'(0) = d,  (∂y/∂c)(0) = 1,  (∂y/∂c)'(0) = 0,  (∂y/∂d)(0) = 0,  (∂y/∂d)'(0) = 1    (10)
over the interval (0, 2π/µ).
Having solved the initial value problem (7)-(10), the term ∂G_1/∂c is given by (∂y/∂c)(2π/µ) and ∂G_2/∂c is given by (∂y/∂c)'(2π/µ), and similarly for the partial derivatives with respect to d. We then iteratively compute z_{n+1} = z_n − (F')^{-1}[z_n] F[z_n] until our error is sufficiently small. Our final output is a set of initial conditions corresponding to the desired solution.
(A word of warning: it is not at all clear that these derivatives really exist! Certainly, we
end up solving a system in which many of the terms are discontinuous, which should give us
a certain discomfort with the quality of the solutions of the initial value problem. As long as
the solutions are not identically zero on an open interval, one can justify this approach [16].)
Newton's method is extremely efficient and accurate in refining our solutions. Naturally, it was not effective in global searches: if, for example, we wanted to find a periodic solution at a fixed λ without any hint of what we were looking for, Newton's method usually wasn't successful. As we will discuss later, another, more global method was needed.
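A minimal sketch of this shooting procedure follows (the function and variable names are our own for illustration; for brevity the Jacobian is approximated here by central differences, one of the two options mentioned above, rather than by the variational equations (7)-(10)):

    import numpy as np
    from scipy.integrate import solve_ivp

    a, b, lam, mu, delta = 17.0, 1.0, 9.94, 0.17, 0.01
    T = 2 * np.pi / mu

    def G(c, d):
        # position and velocity of the solution after one forcing period
        def rhs(t, z):
            y, v = z
            return [v, -delta * v - a * max(y, 0.0) - b * min(y, 0.0)
                    + 10.0 + lam * np.sin(mu * t)]
        return solve_ivp(rhs, [0.0, T], [c, d], rtol=1e-10, atol=1e-10).y[:, -1]

    def F(z):
        # F(c, d) = (c, d) - G(c, d); its zeros are T-periodic solutions, Eq. (5)
        return z - G(z[0], z[1])

    def newton(z, steps=20, h=1e-6, tol=1e-10):
        z = np.asarray(z, dtype=float)
        for _ in range(steps):
            Fz = F(z)
            if np.linalg.norm(Fz) < tol:
                break
            J = np.zeros((2, 2))               # central-difference Jacobian of F
            for j in range(2):
                e = np.zeros(2)
                e[j] = h
                J[:, j] = (F(z + e) - F(z - e)) / (2 * h)
            z = z - np.linalg.solve(J, Fz)
        return z

    print(newton([0.6, 0.0]))                  # refined initial position and velocity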
Our main goal is to understand the behavior of our model in terms of individual solutions and how the whole solution space is structured. So far we have discussed two methods for finding individual solutions, but we haven't yet seen how these solutions arise. We are indeed interested in the totality of behavior exhibited by the solution space of Eq. (1), and we employ a bifurcation curve to examine it. Specifically, for given a, b, µ, λ we plot the amplitude amp_λ of the solution versus the forcing amplitude λ. In this way, we can track the occurrence and characteristics of multiple responses (e.g. small and large amplitude solutions, and subharmonic solutions) as λ varies. To begin this process, start with a known solution resulting from a specific λ value. Plot (λ, amp_λ) and then increment λ by a small amount, using the previous converged-to solution as the new initial guess in the Newton solver discussed above. In this way we can slowly step along a solution curve.
This method works well while stepping along intervals of regular points, or points on the curve (λ, amp_λ) for which the matrix of derivatives DF is nonsingular. The justification for this is the uniqueness of these solutions as implied by the Implicit Function Theorem [?]. Unfortunately, the theorem no longer applies as we approach singularities on the path, such as turning points or points where new branches are created. The method will likely fail since there exists more than one solution in the neighborhood of such points. One remedy for this is to search for another solution past the singularity, then restart our simple continuation procedure in λ, extending backward to the point of the failure and then forward, for a continuous curve. Although not ideal, in this way one could piece together intervals of the path in order to fill out the solution space.
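In code, the simple continuation is just a loop of the following kind (a sketch under the same assumptions as the previous listing; here the corrector is handed to scipy's fsolve rather than to a hand-written Newton iteration, purely for brevity):

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import fsolve

    a, b, mu, delta = 17.0, 1.0, 0.17, 0.01
    T = 2 * np.pi / mu

    def flow(z, lam):
        def rhs(t, w):
            y, v = w
            return [v, -delta * v - a * max(y, 0.0) - b * min(y, 0.0)
                    + 10.0 + lam * np.sin(mu * t)]
        return solve_ivp(rhs, [0.0, T], z, rtol=1e-9, atol=1e-9, dense_output=True)

    guess = np.array([10 / 17, 0.0])
    for lam in np.arange(0.5, 9.51, 0.5):
        # correct: solve F(c, d) = (c, d) - G(c, d) = 0, starting from the previous solution
        guess = fsolve(lambda z: z - flow(z, lam).y[:, -1], guess)
        y = flow(guess, lam).sol(np.linspace(0.0, T, 2000))[0]
        print(f"lambda = {lam:5.2f}   amplitude = {0.5 * (y.max() - y.min()):.4f}")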
A method for smoothly handling turning points does exist. We utilize a continuation algorithm in which the curve is parametrized by pseudoarclength instead of λ [12]. The main idea of the algorithm is to introduce an additional equation, a pseudoarclength normalization. This addresses the difficulty of continuation in λ, since the Jacobian of the resulting augmented map is nonsingular even if the matrix DF is singular [11, 12, 13]. This new equation depends on the initial conditions, λ, and the arclength parameter. Let (c_0, d_0, λ_0) be a zero of F and set (c_0, d_0, λ_0) = (c(s_0), d(s_0), λ(s_0)). The pseudoarclength normalization N is given by

N(c, d, λ, s) = ψ |λ(s) − λ(s_0)|^2 / (s − s_0) + (1 − ψ) ‖(c(s), d(s)) − (c(s_0), d(s_0))‖^2 / (s − s_0) − (s − s_0),    (11)
where ψ ∈ (0, 1) and s is a chord-length parameter. We use a version of Newton's method to search for zeros of the augmented map ℱ given by

ℱ(c, d, λ, s) = ( F(c, d, λ), N(c, d, λ, s) ).    (12)
The Jacobian matrix

ℱ' = [ ∂F_1/∂c   ∂F_1/∂d   ∂F_1/∂λ ]
     [ ∂F_2/∂c   ∂F_2/∂d   ∂F_2/∂λ ]    (13)
     [ ∂N/∂c     ∂N/∂d     ∂N/∂λ   ]
is nonsingular even if the matrix F' is singular. Four of the partial derivatives in Eq. (13) have already been computed in our Newton solver. To compute ∂F_1/∂λ and ∂F_2/∂λ we differentiate our original equation (1) with respect to λ, as we did previously with respect to the initial conditions, and solve that new initial value problem simultaneously with the others. The derivatives in the third row of Eq. (13) are computed directly.

We can now roughly describe the continuation algorithm. For a value of s_0, choose a good guess for λ_0, c_0, d_0 which yields a periodic solution to Eq. (1). Solve the system (7)-(10) and update (c_0, d_0, λ_0) via Newton's method. Upon finding the zero of ℱ, plot the amplitude of the periodic solution versus λ and repeat, incrementing s_0 to s_1 and computing the value of N using the updated arclength value. Throughout, we regard c, d, and λ as parameterized by s: λ(s), c(s), d(s).
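A stripped-down sketch of one pseudoarclength step is given below (our own illustrative names; the augmented system ℱ = 0 of Eq. (12) is solved with scipy's fsolve instead of the Newton iteration with the analytic Jacobian (13), and a crude predictor is used, so this conveys the structure of the algorithm rather than its exact implementation):

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import fsolve

    a, b, mu, delta, psi = 17.0, 1.0, 0.17, 0.01, 0.5
    T = 2 * np.pi / mu

    def G(c, d, lam):
        def rhs(t, z):
            y, v = z
            return [v, -delta * v - a * max(y, 0.0) - b * min(y, 0.0)
                    + 10.0 + lam * np.sin(mu * t)]
        return solve_ivp(rhs, [0.0, T], [c, d], rtol=1e-9, atol=1e-9).y[:, -1]

    def aug(u, u_prev, ds):
        # u = (c, d, lambda): F(c, d, lambda) together with the normalization N of Eq. (11)
        c, d, lam = u
        c0, d0, lam0 = u_prev
        F = np.array([c, d]) - G(c, d, lam)
        N = (psi * (lam - lam0) ** 2 / ds
             + (1 - psi) * ((c - c0) ** 2 + (d - d0) ** 2) / ds - ds)
        return np.array([F[0], F[1], N])

    # start from a converged point on the branch and take one step of size ds
    lam0 = 9.0
    c0, d0 = fsolve(lambda z: z - G(z[0], z[1], lam0), [10 / 17, 0.0])
    u_prev = np.array([c0, d0, lam0])
    u_next = fsolve(aug, u_prev + [0.0, 0.0, 0.05], args=(u_prev, 0.05))
    print(u_next)                                  # next (c, d, lambda) along the curve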
ℱ may yet fail at points where bifurcation into a new branch occurs and, regardless, it cannot detect where such new branches stem from. We need another, more global method to search for other, seemingly random solutions. Once found, we can restart the continuation algorithm from them. The continuation algorithm can generate the bifurcation curve in both a forward and a backward direction. Combining our outcomes, we can piece together a more complete picture of the solution curves.
Recently, we implemented a new method based on the notion of Steepest Descent. This
method is based on a simple and familiar idea from multivariable calculus, namely, the
directional derivative. To find a minimum of a scalar functional f of two variables, c and
d, one evaluates the gradient ∇f at some initial point and, assuming that this vector is not
zero, then moves in the opposite direction for a small distance. So long as the distance is
small, this should lower the value of the function. We then continue to iterate until the
function can no longer be reduced and presumably we have found a critical point that is
generically a minimum.
One problem with this process is that it will come to a stop if it encounters any critical
point of the functional, such as a saddle point. Remarkably, we have not encountered this
in our investigations. Also, we will only be interested in finding zeroes of a nonnegative
function E, and this process would stop at any local minimum, not just the ones we want.
This technique was first used in [14] and [15]. As far as we know, this method has not
previously been used to find periodic solutions of nonlinear ordinary differential equations.
The idea is to find a zero of the error function given by
E(c, d) = (c − y(2π/µ, c, d))^2 + (d − y'(2π/µ, c, d))^2.    (14)
Notice that we have emphasized the differentiable dependence of y(2π/µ) on the initial
condition pair (c, d). As remarked, one way to find zeros of this function (i.e., the places
where it takes its minimum value) is to start at an arbitrary point (c, d) and move in the
direction opposite the gradient. Naturally, this involves computing the partial derivatives ∂E/∂c and ∂E/∂d, which we have already discussed. One elementary way to do this would be to evaluate finite differences of the form

(E(c + h, d) − E(c − h, d))/(2h)    (15)

for some small h. Calculating this one finite difference involves solving the initial-value problem two more times on the interval [0, 2π/µ], and twice more to find ∂E/∂d.
Fix T = 2nπ/µ. The integer n depends on whether we are looking for a periodic solution with the same period (n = 1) or a subharmonic response (n ≥ 2). Another way to calculate the gradient is to take the partial derivatives of equation (14) with respect to c and d. This gives

∂E/∂c = 2(c − y(T))(1 − (∂y/∂c)(T)) + 2(d − y'(T))(−(∂y/∂c)'(T)),    (16)

∂E/∂d = 2(c − y(T))(−(∂y/∂d)(T)) + 2(d − y'(T))(1 − (∂y/∂d)'(T)).    (17)

The various partial derivatives like (∂y/∂c)(T) are exactly the ones we have already discussed in Newton's method. We then iterate z_{n+1} = z_n − h∇E, where h is taken to be sufficiently small. This procedure is relatively easy to implement since the calculations for all the partial derivatives have already been carried out in the Newton solver.
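The descent loop itself is only a few lines (a sketch with our own names; the gradient is formed here from the finite differences of Eq. (15) rather than the analytic expressions (16)-(17), and the step size, increment, and starting point are arbitrary illustrative choices):

    import numpy as np
    from scipy.integrate import solve_ivp

    a, b, lam, mu, delta, n = 17.0, 1.0, 9.94, 0.17, 0.01, 1
    T = 2 * n * np.pi / mu                  # n = 1 for T-periodic, n >= 2 for subharmonics

    def G(c, d):
        def rhs(t, z):
            y, v = z
            return [v, -delta * v - a * max(y, 0.0) - b * min(y, 0.0)
                    + 10.0 + lam * np.sin(mu * t)]
        return solve_ivp(rhs, [0.0, T], [c, d], rtol=1e-9, atol=1e-9).y[:, -1]

    def E(c, d):
        # error function of Eq. (14)
        yT, vT = G(c, d)
        return (c - yT) ** 2 + (d - vT) ** 2

    def descend(c, d, step=1e-3, h=1e-5, iters=500):
        for _ in range(iters):
            grad = np.array([(E(c + h, d) - E(c - h, d)) / (2 * h),
                             (E(c, d + h) - E(c, d - h)) / (2 * h)])
            c, d = (c, d) - step * grad
            if E(c, d) < 1e-12:             # close enough to a zero of E: a periodic solution
                return c, d
        return c, d                         # may be a nonzero local minimum; check E(c, d)

    print(descend(1.0, -0.5))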
Steepest descent had, for us, the advantage of finding solutions, including unstable ones,
without having a good guess. This method proved unreasonably effective, especially in
finding solutions that were not connected to any obvious branches of solutions [7].
A priori, there was no reason why this process should not converge to some local minimum of the function E that was not zero (and therefore not a periodic solution), but most times it did in fact converge to zero, giving us a periodic solution. One curious result of our calculations was the observation that the only local minima to which the process converged were zeros of the scalar function E(c, d). One cannot help wondering whether this is a theorem, or whether there are other, nonzero local minima.
The method of steepest descent has proved to be exceptionally effective in searching globally to find the good beginning "guess" which leads us to both stable and unstable solutions [15, 14]. Once such a guess has been found, we can then refine it using Newton's method. Then we can continue off these individual solutions in the continuation algorithm and piece together the bifurcation curves. This process requires some careful, methodical work, but will yield a fairly complete analysis. During this process, we often converged to solutions of different periods. In bifurcation theory, it is well known that period doubling arises along the curve where changes in stability occur, when period two branches bifurcate from the main branch. For this reason, we needed to find an effective and efficient way to detect changes in stability along the bifurcation curves. We employed Floquet analysis, which we will discuss next.
Although effective in locating solutions and the bifurcation curves, the continuation algorithm provides no information on the stability of solutions along these curves. This information is important because changes in stability give information on where subharmonic behavior may be arising. To determine where stability changes along branches we utilize Floquet theory, which additionally enables us to identify where new branches arise from those already established, and to determine whether the new branch is comprised of solutions having the same period as that of the external forcing, or perhaps having doubled in period, quadrupled in period, or having some other period altogether. The numerical results will be shown in the next section.
Recall our model

y'' + δy' + ay^+ − by^- = 10 + λ sin(µt);    (18)

letting y_1 = y, y_2 = y', Y = [y_1, y_2]^T, and η = Y(0), it becomes the system

dY/dt = f(t, Y) = ( y_2,  −δy_2 − ay_1^+ + by_1^- + 10 + λ sin(µt) )^T.    (19)
We consider the T-periodic solution Y_η(t) obtained from the initial conditions η and, once it has been found, examine its stability and search for subharmonic behavior.
Linearizing Eq. (19) about the initial conditions leads us to the system

d/dt (∂Y/∂η) = (∂f/∂Y)(∂Y/∂η),    (20)

where the matrix ∂f/∂Y has first row (0, 1) and second row (−a, −δ) when y_1 > 0 and (−b, −δ) when y_1 ≤ 0.
Note that we do not have the necessary continuity of partial derivatives although as long
as the solutions take on the value of zero only a finite number of times in one cycle we can
justify this approach [16].
For this linearized system we calculate the Floquet multipliers. By taking two linearly independent vectors as initial conditions, represented by the columns in the matrix

(∂Y/∂η)(0) = [ 0  1 ]
             [ 1  0 ],    (21)

two linearly independent solution vectors result, which give our desired fundamental matrix

(∂Y/∂η)(T).    (22)

We find the eigenvalues ρ_j, or Floquet multipliers, for j = 1, 2, whereupon we know whether the original solution Y_η(t) of Eq. (19) is stable or unstable.
As we step forward along the bifurcation curve (λ, amp_λ) we will be able to observe when we leave or enter regions of stable or unstable solutions. We do so by tracking the calculated Floquet multipliers. For instance, if for some λ we begin in an interval on the bifurcation curve where both multipliers lie inside the unit circle, we are of course in a region of stable
solutions. But if as λ is increased, the multipliers move so that one exits the unit circle, the
solutions change from stable to unstable. Further, if the direction of escape is such that one
(complex) multiplier has the value (1, 0), the point of escape marks a turning point on the
bifurcation curve. If upon exiting, one multiplier has the value (−1, 0), the point of escape
indicates the possible beginning of a new branch of solutions having double the period of the
original branch. Some more background on this can be found in [17], ca. page 100.
3 The bifurcation curve
We are interested in the solutions to

y'' + δy' + ay^+ − by^- = 10 + λ sin(µt),
y(0) = y(T),    y'(0) = y'(T),    (23)

with a, b, λ, µ, δ > 0, a > b, and T = 2π/µ, insofar as how the solutions change as we vary the different parameters a, b, λ, and µ. In this treatment we will fix a, b, δ, and µ and then track changes in the response amplitude amp_λ of the T-periodic solution y as the forcing amplitude λ changes. As we do this we will note multiple responses, stability of solutions, and the existence of subharmonic solutions.
In our first experiment [1], we focus on

y'' + 0.01y' + 17y^+ − y^- = 10 + λ sin(0.17t),    (24)

where T = 2π/0.17, and track the continuum of points (λ, amp_λ). With λ = 0, the unique solution is the equilibrium value y = 10/17 with amplitude 0. As λ increases from 0 but remains small, there is the unique small amplitude linear solution that is approximately

y(t) = 10/17 + (λ/(17 − 0.17^2)) sin(0.17t),    (25)

(modulo a minor correction due to the damping term), which oscillates about the equilibrium value. In a sense, these solutions are the intuitively obvious ones; a constant term gives a displacement to the new equilibrium 10/17, and the oscillatory piece λ sin(0.17t) gives us oscillation about the new equilibrium whose magnitude is proportional to λ.
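For the record, (25) is just the standard particular solution of the undamped linear problem: substituting a trial solution y = c_0 + c_1 sin(0.17t) into y'' + 17y = 10 + λ sin(0.17t) forces 17c_0 = 10 and (17 − 0.17^2)c_1 = λ, which gives the constant 10/17 and the coefficient λ/(17 − 0.17^2) appearing above.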
As we move along this branch, this solution merely gains amplitude until the increase in λ causes the oscillations to grow to amp_λ = 10/17 or more. This occurs when the vertical displacement is carried into the region where y becomes non-positive. Recall this is the point when the cable becomes slack and we see the contribution of the nonlinearity in the restoring force. Referring to Fig. 2, the bifurcation curve for Eq. (24), we see this as the linear component that emanates from the origin and approaches the amplitude value of amp = 10/17, denoted by the dotted line.
Figure 2: The main bifurcation curve of T -periodic solutions for equation (24). Note the
linear behavior until λ becomes approximately 10, at which point slackening occurs and
nonlinear behavior arises.
This curve was first constructed in [14]. Note that as soon as the amplitude reaches amp = 10/17, multiple solutions become evident. Things begin to change rapidly as we enter this portion of the branch. See also Fig. 3. The corresponding solutions immediately begin to exhibit a slight wiggle. As we progress up the branch, we can clearly see high frequency responses superimposed on the outline of a linear solution (see, e.g., Fig. 3). In this paper we utilize the Floquet multipliers to indicate where period doubling solutions occur. These places are indicated with a *.
Before we examine more closely the regions where period doubling occurs, we should note
that the first part of this main branch turns quite sharply and needs closer examination. Two
closer views of these corners along with changes in stability are shown in Fig. 5 and Fig. 6.
In Fig. 5, we see both where the solution changes stability and where the curve turns. We
now need to search for additional branches at the places where stability changes. We use the
method of Steepest Descent described in Section 2 to locate a solution and then continue it
and find a branch composed entirely of solutions having period 2T (Fig. 6). Two specific
examples of 2T-periodic solutions are shown in Fig. 7. In addition, the Floquet multipliers indicate regions of solutions which are unstable, signifying both turning points and, again, new branches having doubled in period to 4T-periodic solutions. These 4T branches are shown in Fig. 8.
Figure 3: Two periodic solutions from the bifurcation curve shown in Fig. 2 for λ = 9.94.
Solution (a) remains in the linear range, whereas Solution (b) goes into the negative range,
resulting in high frequency shaking.
Figure 4: A closer view of the main branch shown in Fig. 2. Curves of stable solutions are indicated with darker lines, and curves of unstable solutions with lighter. Stability, as measured by the Floquet multipliers, changes when rounding a turning point (marked with a circle) or in the presence of period doubling (marked with a star).
Figure 5: A close-up view of the first set of turns in Fig. 2. Curves of stable solutions are indicated with darker lines, and curves of unstable solutions with lighter. Notice the sharpness of the turns compared to the smoother changes further up the branch in Fig. 4. This can also be compared to Fig. 14 in Section 5, where the nonlinearity is smoother.
Figure 6: The first occurrence of a 2T -periodic solution branch bifurcating from the main
branch. We were alerted to the presence of this branch by the change in stability indicated
by the Floquet multipliers. The dotted curve is the main branch, while the solid curve
is the 2T -periodic solution branch. The steepest descent algorithm found an initial point
on the 2T -periodic solution branch and the curve was then completed by the continuation
algorithm.
Figure 7: Two period-two solutions. For each solution we plot (t, y(t)) over four periods.
The period-two solution (b) goes through one period of small vibration followed by a second
period of more violent vibration.
Figure 8: A view of four 4T-periodic solution branches. The dotted curve represents the T-periodic and 2T-periodic curves, while the solid curves are 4T-periodic solution branches. One appears to be a figure-8 isola. Again, we were alerted to the presence of these branches by changes in stability indicated by the Floquet multipliers on the 2T-periodic solution. As before, stars represent points where new period-doubling branches are located, and circles indicate turning points. A point on each curve was located by the steepest descent algorithm, and then the continuation algorithm was used to complete each curve.
Figure 9: The solid curves show more 4T-periodic solutions as the amplitude of the forcing term increases. The dotted curve represents the T-periodic and 2T-periodic curves, while the solid curves are 4T-periodic solution branches. As before, stars represent points where new period-doubling branches are located, and circles indicate turning points.
Note the progression of the 2T branch from the first such bifurcation (Fig. 6) to the fourth, and the differing number of 4T branches which occur, including a 4T isola (the figure-8 curve projected onto the other 4T S-shaped curve) in the middle grouping of Fig. 9. Regions of unstable and stable solutions are marked as light or dark respectively. There is evidence of new 8T-periodic branches.

Besides the subsequent regions for large λ > 11 or so, there was actually an additional region indicating period doubling activity. It was in the small interval on the main curve for 10.02979 ≤ λ ≤ 10.03025, 0.672173 ≤ amp ≤ 0.672345.

Additionally, in [14], 3T-periodic isolae were found. Fig. 10 shows all the period three arcs that have been found to date. Fig. 11 shows two 3T-periodic solutions, Solution (b) being quite counterintuitive.

There is evidence of solutions having period 6T, although they are not shown here.
4 Determination of the high frequency
For a given T-periodic function, the Fast Fourier Transform [18] decomposes y(t) as

y(t) ≈ (1/2)A_0 + Σ_{n=1}^{N/2} [ A_n cos(2πnt/T) + B_n sin(2πnt/T) ].    (26)
Figure 10: The many 3T isolae shown as solid curves. These are not connected in any way
to the main branch. They were found by using the steepest descent algorithm followed by
the continuation algorithm. Some of the solutions found are shown in Fig. 11.
Figure 11: Two examples of period three solutions from the curves shown in Fig. 10. The period-three solution (a) seems similar to versions of various period one solutions. On the other hand, the period-three solution (b) seems particularly counterintuitive, with periods of violent shaking alternating with periods of relative calm. Note that we have not found examples of "two periods" of calm and "one period" of violent shaking.
The large values of A_n and B_n will indicate the significant frequencies of the oscillation.
Figure 12: All the bifurcation curves found to date. Of course we have no reason to believe
that this picture is complete.
There are certain components which we expect in the solution: the constant 10/17, for instance, since the solutions all oscillate about this equilibrium position. Therefore we expect, and indeed find, the value n = 0 (so the amplitude A_0) appearing as one of the main contributors in the decomposition. Additionally, since we are subjecting the system to an external force of λ sin(µt) = λ sin(0.17t), we expect, and again find, sin(0.17t) (so n = 1) to be the most significant contributor in the solution y(t) besides the constant term.
The most significant amplitudes were B_n and B_{n+1}, where

nµ ≤ √a ≤ (n + 1)µ,    (27)

or, specifically for our "17, 1, 0.17" problem,

24 · 0.17 ≤ √17 ≤ 25 · 0.17.    (28)

That is, after the constant and sin(µt) terms, also evident were sin(24 · 0.17t) and sin(25 · 0.17t), which leads us to believe that a resonant frequency has been excited. When the oscillation moves into the nonlinear range, the periodic solution becomes more complex in the sense that it is not merely a multiple of the forcing term. The high frequency shaking effect seems to correspond to what would be linear resonance if the oscillation had remained in the positive linear range.
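A sketch of this decomposition follows (our own illustrative script; it assumes a periodic solution has already been settled onto and sampled at N equally spaced points over one forcing period, and it reads the A_n and B_n of Eq. (26) off numpy's real FFT):

    import numpy as np
    from scipy.integrate import solve_ivp

    a, b, lam, mu, delta = 17.0, 1.0, 10.5, 0.17, 0.01
    T, N = 2 * np.pi / mu, 1024

    def rhs(t, z):
        y, v = z
        return [v, -delta * v - a * max(y, 0.0) - b * min(y, 0.0)
                + 10.0 + lam * np.sin(mu * t)]

    # settle onto an attractor (a stable solution suffices), then sample one period
    z = solve_ivp(rhs, [0.0, 300 * T], [10 / a, 0.0], rtol=1e-9, atol=1e-9).y[:, -1]
    sol = solve_ivp(rhs, [0.0, T], z, rtol=1e-9, atol=1e-9, dense_output=True)
    y = sol.sol(np.linspace(0.0, T, N, endpoint=False))[0]

    coef = np.fft.rfft(y) / N
    A, B = 2 * coef.real, -2 * coef.imag    # A_n, B_n of Eq. (26); A[0]/2 is the mean
    for n in np.argsort(-np.abs(B))[:4]:
        print(f"n = {n:3d}   frequency = {n * mu:.3f}   B_n = {B[n]:+.4f}")
    # for solutions that enter y < 0, n = 1 and n near sqrt(a)/mu (about 24) should dominate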
For example, we performed the decomposition on solutions of the "16, 1, 0.10" problem. Indeed we found that besides the constant and the sin(0.10t) term, which were each expected, the next most significant contribution was from the value of n such that

nµ = √a,    (29)

or specifically

40 · 0.10 = √16 = 4.    (30)

That is, sin(40µt) = sin(4t) = sin(√16 t) is one of the main components in the solution curve.

In summary, the nonlinear coupling that occurs when the particle moves into the negative range causes a resonant excitation in the linear range. The frequency corresponds to the resonant frequency of the linear behavior near equilibrium.
5 The effect of smoothing the nonlinearity
The final topic we consider is the question: is the errant behavior of the model specifically due to the piecewise linear restoring force

f(y) = 17y^+ − y^-,    (31)

with its attendant discontinuous derivative at y = 0, or is this behavior also mirrored in smoother (more generic) nonlinearities with the same approximate form?

We introduce the nonlinearity

g(y, ε) = 9y + 8 √(y^2 + ε) − 8 √ε    (32)

for various ε > 0 as a smoothed alternative to the restoring force f above. Note that g is now differentiable with respect to y at y = 0 for ε > 0. Clearly, as ε approaches zero, the nonlinearity g becomes a better approximation to the piecewise nonlinearity f of equation (31).
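As a quick sanity check on this approximation (a small sketch; the grid and the sample values of ε are arbitrary), one can tabulate the largest deviation between g and f:

    import numpy as np

    def f(y):
        # piecewise linear restoring force of Eq. (31): 17*y^+ - y^-
        return 17.0 * np.maximum(y, 0.0) + np.minimum(y, 0.0)

    def g(y, eps):
        # smoothed restoring force of Eq. (32)
        return 9.0 * y + 8.0 * np.sqrt(y ** 2 + eps) - 8.0 * np.sqrt(eps)

    y = np.linspace(-1.5, 1.5, 2001)
    for eps in (1e-2, 1e-3, 5e-5):
        print(eps, np.max(np.abs(g(y, eps) - f(y))))   # deviation shrinks as eps -> 0

The maximum deviation is of order 8√ε, so the choice ε = 5.0 × 10^-5 used below keeps g within about 0.06 of f.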
We choose ε = 5.0 × 10^-5 and investigate

y'' + 0.01y' + 9y + 8 √(y^2 + ε) − 8 √ε = 10 + λ sin(0.17t).    (33)

We compare the results to those found for our original problem,

y'' + 0.01y' + 17y^+ − y^- = 10 + λ sin(0.17t).    (34)
We investigate whether the same behavior is exhibited as in the piecewise linear case. That is, does the bifurcation curve show similar regions of stable and unstable solutions? Are there new branches of period doubling and, if so, do they occur in the same number? Does the high frequency component emerge and, again, if so, does the decomposition using the FFT produce the same frequencies with the greatest contribution in the solution y?
The bottom line is that the global picture remains substantially the same as that seen in Fig. 2; see Fig. 13. The main branch is almost identical to what we found for the piecewise linear nonlinearity. The same profusion of solutions of multiple periods also occurred. All solutions manifested a high frequency component, with the frequency corresponding to linearized resonance about the equilibrium.
Not surprisingly, there were some differences when we looked at the curves close up. One would expect that smoothing the nonlinearity would have a concomitant smoothing effect on the bifurcation curves. Indeed, we can see this by comparing Fig. 5 and Fig. 14. The sharp corners typical of the piecewise linear restoring force have been replaced by smoother, more "analytic" curves.
Finally, in Fig. 15 we show all of the branches found thus far for the smoothed restoring force incorporated in Eq. (33) for ε = 5.0 × 10^-5. Refer to the bifurcation curve of the piecewise linear "17, 1, 0.17" problem in Fig. 12 in order to compare.
6 Concluding remarks
We have answered several of the problems raised in our opening survey of earlier papers. In particular, we have identified the high frequency component of the response to low frequency forcing as a coupling between the slackening of the cable and the resonant frequency near equilibrium. We have also identified (rather laboriously) many additional bifurcating branches of different types, including new isolae of multiple periods. However, we do not claim that we have exhausted all possibilities, even for the limited ranges of the parameters a and b that we have used. We have also shown that this new phenomenon of high frequency responses to low frequency forces is not peculiar to the piecewise linear problem but also occurs in more generic, smoothed versions of the nonlinearity.
At least two areas could be more fully understood. The first involves counter intuitive
solutions like those of Fig. 7, Solution (b) and Fig. 11, Solution (b). Are these solutions
typical and under what circumstances? One could imagine solutions of period eight which
are quiet for seven of those periods and violent for the remaining period. How to find these
solutions is not obvious, if they even exist.
The second area of interest is exactly how this complex solution behavior varies as we
vary the physical parameters a and b. If a and b are close, say a = 17, b = 13, no high
frequency response to low frequency forcing was found. However, when the gap between a
and b becomes large, as in a = 17, b = 1, we get the complex behavior observed in this
paper. Exactly how this depends on a and b needs further study.
Finally, it would be interesting to see if this new physical phenomenon can be observed
in other contexts. For example, this should be observable in simple mechanical experiments
with periodic shaking. It should also be found in equations reflecting more complex structures, such as nonlinear differential equations modeling bridges or ships [19].
Figure 13: The main branch for the smoothed version, including intervals of unstable solutions (compare to Fig. 4). Note how the global picture remains basically the same.
Figure 14: A view of the first set of turns for the smoothed version including intervals of
unstable solutions (compare to Fig. 5).
Figure 15: The global picture of all solutions found for the smoothed restoring force (compare to Fig. 12).
References
[1] L. D. Humphreys, R. Shammas, Finding unpredictable behavior in a simple ordinary differential equation, College Mathematics Journal, 31, (2000), 338-346.
[2] J. Glover, A. C. Lazer, P. J. McKenna, Existence and stability of large scale nonlinear oscillations in suspension bridges, Zeitschrift für Angewandte Mathematik und Physik, 40, (1989), 171-200.
[3] A. C. Lazer, P. J. McKenna, Large scale oscillatory behavior in loaded asymmetric systems, Annales de l'Institut Henri Poincaré (C) Analyse Non Linéaire, 4, (1987), 243-274.
[4] A. C. Lazer, P. J. McKenna, Large amplitude periodic oscillations in suspension bridges;
some new connections with nonlinear analysis, SIAM Review, 32, (1990), 537-578.
[5] P. J. McKenna, Large torsional oscillations in suspension bridges revisited: fixing an
old approximation, American Mathematical Monthly, 106, (1999), 1-18.
[6] S. H. Doole, S. J. Hogan, Non-linear dynamics of the extended Lazer-McKenna bridge
oscillation model, Dynamics and Stability of Systems, 15 (2000) 43–58.
[7] W.T. van Horssen, On oscillations in a system with a piecewise smooth coefficient,
Journal of Sound and Vibrations, 283 (2005), 1229–1234.
[8] P. Blanchard, R. L. Devaney, G. R. Hall, Differential Equations, Second Ed.,
Brooks/Cole, Pacific Grove, 2002.
[9] O.H. Amann, T.von Karman, and G.B. Woodruff, The Failure of the Tacoma Narrows
Bridge, Federal Works Agency, 1941.
[10] F. Bleich, C.B. McCullough, R. Rosecrans, and G.S. Vincent, The Mathematical Theory
of Suspension Bridges. U.S. Dept. of Commerce, Bureau of Public Roads, 1950.
[11] K. C. Jen, Numerical investigation of periodic solutions for a suspension bridge model,
Doctoral Thesis, University of Connecticut, (1990).
[12] H. B. Keller, Numerical solution of bifurcation and nonlinear eigenvalue problems in:
Applications of Bifurcation Theory (P. H. Rabinowitz, ed.). Academic Press, New York,
1977, 359-384.
[13] W. C. Rheinboldt, Numerical Analysis of Parameterized Nonlinear Equations, Wiley,
New York, 1986.
[14] L. D. Humphreys, P. J. McKenna, When a mechanical model goes nonlinear: unexpected responses to periodic shaking, American Mathematical Monthly, 112, (2005), 861-875.

[15] L. D. Humphreys, P. J. McKenna, Using a gradient vector to find multiple periodic oscillations in suspension bridge models, College Mathematics Journal, 36, (2005), 16-26.
[16] S. Solimini, Some remarks on the number of solutions of some nonlinear elliptic equations, Annales de l'Institut Henri Poincaré (C) Analyse Non Linéaire, 2, (1985), 143-156.
[17] M. Kubı́ček, M. Marek, Computational Methods in Bifurcation Theory and Dissipative
Structures, Springer-Verlag, New York, 1983.
[18] S. W. Smith, The Scientist and Engineer’s Guide to Digital Signal Processing, Second
Ed., California Technical Publishing, San Diego, 1999.
[19] A.C. Lazer, P.J. McKenna, Nonlinear periodic flexing in a floating beam. Oscillations
in nonlinear systems: applications and numerical aspects. J. Comput. Appl. Math. 52
(1994), 287–303.