Characteristics of a One-Dimensional Longitudinal Wave Propagating in an Elastic Solid
Andrew Foose
MEAE4960
December 9, 1999
Table of Contents:

List of Symbols Used
Abstract
Introduction
Problem Description
    Figure 1: Typical Section of a Longitudinal Wave
Exact Formulation
Numerical Formulation
Results and Discussion
    Table 1: Constants Used
    Table 2: Explicit Method Effect of Reduced Time Steps
    Table 3: Explicit Method Effect of Increased M
    Figure 2: Relative Error vs. λ for Constant N, E, and ρ
    Table 4: Explicit vs. Implicit for λ = 3
Conclusions
References
Appendices
    Appendix I: Typical Excel Output for the Explicit Method
    Appendix II: Presentation for Class
List of Symbols Used:

U(x,t)      Longitudinal displacement of the beam at point x and time t
x           Position in the beam; 0 ≤ x ≤ l
l           Length of the beam
t           Time; 0 ≤ t < ∞
c           Constant in the wave equation
E           Modulus of the beam
ρ           Density of the beam
A           Area of the beam
F           Force applied to the beam
m           Mass
a           Acceleration
F, f, G, g  Generic functions of x and t
i           Integer designating different length steps
M           Maximum number of length steps
j           Integer designating different time steps
N           Maximum number of time steps
h           Length of an individual length step; l/M
k           Length of an individual time step; (maximum time)/N
λ           Constant used in the finite difference algorithm; ck/h
μ, ξ        Variables of time and distance in the error calculations
τ           Error estimate of the explicit finite difference equation
Abstract:
This paper discusses the explicit and implicit finite difference methods used to analyze the one-dimensional wave equation with given initial conditions. The explicit method yields accurate results as λ approaches one from below. When λ equals one, the explicit method has no truncation error. When λ exceeds one, the explicit method will not converge, but an implicit method can be used to achieve accurate results. Both methods become less accurate as the number of time steps is increased, because the error from each time step is carried on to the next. Increasing the number of length steps increases the accuracy as long as the algorithm used remains stable.
Introduction:
In this paper, I will be looking into the characteristics of a one-dimensional longitudinal
wave propagating in an elastic solid. A good example that is easy to visualize, although
not a solid, is a toy Slinky or a stretched spring. If you deform the Slinky into an initial
shape and let go, the toy will attempt to restore itself to its initial position. Of course, as
each particle of the spring reaches its initial position it will have some velocity and it will
be unable to stay there. This motion will continue indefinitely unless there is damping
in the spring that will gradually dissipate energy. However, for the scope of this report,
damping will be neglected. Even without damping, the characteristics of this type of
wave can be important in several practical applications.
One of the practical applications that can be understood is how sound waves travel through an elastic medium such as a submarine hull or the casing of a commercial jet engine. While doing the research for this paper, I discovered another important application on a web site1. For large construction, such as bridges and highways, large piles are driven into the ground to support the structure. These piles are driven in with very large hammering loads that initiate longitudinal waves in the pile itself. These waves must be understood to prevent damage to the pile. The longitudinal waves can also be used to aid in the pile driving process itself; this is called "vibratory pile driving".
To begin looking at this problem we must understand that it is governed by a hyperbolic
partial differential equation. The equation at which we will be looking, called the wave
equation for obvious reasons, is as follows.
\[ \frac{\partial^2 u}{\partial t^2}(x,t) - c^2\,\frac{\partial^2 u}{\partial x^2}(x,t) = 0 \tag{1} \]
Problem Description:
To develop the wave equation we can refer to Timoshenko and Goodier2. The hyperbolic partial differential equation can be found by summing the forces on the beam section shown in Figure 1.
Figure 1: Typical Section of a Longitudinal Wave in a 1-D Beam
\[ \sum F = ma \]
\[ AE\left(\frac{\partial u}{\partial x} + \frac{\partial^2 u}{\partial x^2}\,\delta x\right) - AE\,\frac{\partial u}{\partial x} = AE\,\frac{\partial^2 u}{\partial x^2}\,\delta x = ma \tag{2} \]
Applying the equations of motion (with m = ρA δx and a = ∂²u/∂t²) and substituting c² for E/ρ results in the wave equation previously mentioned in Equation 1.
\[ AE\,\frac{\partial^2 u}{\partial x^2} = \rho A\,\frac{\partial^2 u}{\partial t^2}, \qquad c^2 = \frac{E}{\rho} \tag{3} \]
\[ \frac{\partial^2 u}{\partial t^2} = c^2\,\frac{\partial^2 u}{\partial x^2} \]
Exact Formulation:
A general solution to this equation was found by d'Alembert3 to be the following, where F and G can be arbitrary functions.

\[ u(x,t) = F(x - ct) + G(x + ct) \tag{4} \]
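To see that this form satisfies Equation 1, consider the first term (the second is identical with the sign of c reversed):

\[ \frac{\partial^2}{\partial t^2}\,F(x-ct) = c^2\,F''(x-ct), \qquad \frac{\partial^2}{\partial x^2}\,F(x-ct) = F''(x-ct) \]

so the time term and c² times the space term cancel exactly.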
This solution is for an infinitely long string or beam. However, for practical purposes,
and to be able to solve it numerically, we will consider only a finite length, fixed at both
ends. This results in the following boundary conditions.
\[ u(0,t) = u(l,t) = 0, \qquad u(x,0) = f(x), \qquad \frac{\partial u}{\partial t}(x,0) = g(x) \tag{5} \]
Numerical Formulation:
To solve this equation numerically, I will use the finite difference method developed in
Burden and Faires4. The formulation is as follows.
\[ x_i = ih, \qquad t_j = jk \]
\[ \frac{\partial^2 u}{\partial t^2}(x_i,t_j) - c^2\,\frac{\partial^2 u}{\partial x^2}(x_i,t_j) = 0 \]
\[ \frac{\partial^2 u}{\partial t^2}(x_i,t_j) = \frac{u(x_i,t_{j+1}) - 2u(x_i,t_j) + u(x_i,t_{j-1})}{k^2} - \frac{k^2}{12}\,\frac{\partial^4 u}{\partial t^4}(x_i,\mu_j) \]
\[ \frac{\partial^2 u}{\partial x^2}(x_i,t_j) = \frac{u(x_{i+1},t_j) - 2u(x_i,t_j) + u(x_{i-1},t_j)}{h^2} - \frac{h^2}{12}\,\frac{\partial^4 u}{\partial x^4}(\xi_i,t_j) \]

Neglecting the error terms and combining yields:

\[ \frac{u(x_i,t_{j+1}) - 2u(x_i,t_j) + u(x_i,t_{j-1})}{k^2} - c^2\,\frac{u(x_{i+1},t_j) - 2u(x_i,t_j) + u(x_{i-1},t_j)}{h^2} = 0 \tag{7} \]

Defining λ = ck/h and solving for the newest time level:

\[ u(x_i,t_{j+1}) = 2(1-\lambda^2)\,u(x_i,t_j) + \lambda^2\bigl[u(x_{i+1},t_j) + u(x_{i-1},t_j)\bigr] - u(x_i,t_{j-1}) \]
The last equation gives the prediction for each subsequent time step. To begin using it, we must first provide approximations for the first two time points, and for the end points at every time point after that. The boundary conditions provide that u = 0 at all end points (x = 0, x = l) and that u(x,0) = f(x). Using the fourth and final boundary condition, the initial velocity, in conjunction with the initial position, a solution for the second time point can be found. The following formulation provides an equation to calculate u(x,t1).
\[ \frac{\partial u}{\partial t}(x_i,0) = \frac{u(x_i,t_1) - u(x_i,0)}{k} - \frac{k}{2}\,\frac{\partial^2 u}{\partial t^2}(x_i,\tilde{\mu}_i) \]
\[ u(x_i,t_1) = u(x_i,0) + k\,\frac{\partial u}{\partial t}(x_i,0) + \frac{k^2}{2}\,\frac{\partial^2 u}{\partial t^2}(x_i,\tilde{\mu}_i) \]

Substituting the final boundary condition and neglecting the error term:

\[ u(x_i,t_1) = u(x_i,0) + k\,g(x_i) \tag{8a} \]
A more accurate approximation, from Burden and Faires4, can also be used for the
explicit algorithm. This is shown in Equation 8b.
\[ u(x_i,t_1) = (1-\lambda^2)\,f(x_i) + \frac{\lambda^2}{2}\bigl[f(x_{i+1}) + f(x_{i-1})\bigr] + k\,g(x_i) \tag{8b} \]
We now have everything we need to solve the wave equation numerically. The following sections will discuss the limitations, accuracy, and stability of this explicit algorithm. For this discussion we will use the f(x) and g(x) shown in Equation 9. We use these because they fit the requirements of the boundary conditions, and because an exact solution to them can be found, which is necessary to determine the accuracy of this method. The exact solution is shown in the third part of Equation 9.
\[ f(x) = \sin\frac{\pi x}{l}, \qquad g(x) = 0, \qquad u(x,t) = \sin\frac{\pi x}{l}\,\cos\frac{\pi c t}{l} \tag{9} \]
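To make the procedure concrete, the following is a minimal Python sketch of the explicit algorithm assembled from Equations 7, 8b, and 9. This is my own illustration; the results in this paper were produced in Microsoft Excel, so the array layout and variable names (for example `lam` for λ) are assumptions of the sketch, and the constants are those listed later in Table 1.

```python
import numpy as np

# Constants from Table 1
c, length, max_time = 10000.0, 240.0, 0.03
M, N = 6, 8                          # number of length steps and time steps
h, k = length / M, max_time / N
lam = c * k / h                      # lambda = ck/h; must be <= 1 to converge

f = lambda x: np.sin(np.pi * x / length)   # initial displacement, Equation 9
g = lambda x: np.zeros_like(x)             # initial velocity, Equation 9

x = np.linspace(0.0, length, M + 1)
u = np.zeros((N + 1, M + 1))         # u[j, i] approximates u(x_i, t_j)

# First two time rows: u(x,0) = f(x), then the starting values of Equation 8b
u[0, :] = f(x)
u[1, 1:-1] = ((1 - lam**2) * f(x[1:-1])
              + 0.5 * lam**2 * (f(x[2:]) + f(x[:-2]))
              + k * g(x[1:-1]))

# March forward with the explicit update of Equation 7;
# the end points stay zero because the beam is fixed at both ends.
for j in range(1, N):
    u[j + 1, 1:-1] = (2 * (1 - lam**2) * u[j, 1:-1]
                      + lam**2 * (u[j, 2:] + u[j, :-2])
                      - u[j - 1, 1:-1])

# Compare against the exact solution at the final time
exact = np.sin(np.pi * x / length) * np.cos(np.pi * c * max_time / length)
print(abs(u[N] - exact).max())
```

With M = 6 and N = 8 this gives λ = 0.9375, the first case reported in Table 2.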
One final formulation is that for an implicit finite difference method that will be
discussed in the results. The method I chose, from Smith5 and Forsythe and Wasow6,
uses Equation 10 in place of Equation 7 in the algorithm.
u ( xi 1 , t j 1 )  u ( xi 1 , t j 1 ) 
1  12 2
1
4
2
u ( xi , t j 1 )  2u ( xi 1 , t j ) 
 u ( xi 1 , t j 1 ) 
1  12 2
1
4
2
2  2
1
4
2
u ( xi , t j )  2u ( xi 1 , t j )
(10)
u ( xi , t j 1 )  u ( xi 1 , t j 1 )
This equation is again used to determine the results at time points with j ≥ 2. To determine the first two time points, I used the same method as in the explicit algorithm, found in Equations 5 and 8b. The algorithm provides a system of linear equations for the interior points, one equation and one unknown per interior point, with the ends fixed at zero. This system can be solved using any number of the algorithms designed for systems of linear equations. Because I was using Microsoft Excel to produce my results, I could not easily implement one of these algorithms. Instead, I set u(x1,tj) for j ≥ 2 to a constant and used the program's "Goal Seek" function to change the value of u(x1,tj) until u(l,tj) = 0.0.
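If a full linear solver is available, the Goal Seek step can be avoided by solving the tridiagonal system of Equation 10 directly. The Python sketch below shows one time step under the same assumptions as the explicit sketch above (fixed ends, with `u_prev` and `u_curr` holding the rows at times t_{j-1} and t_j). The function name is hypothetical, and a dense solve is used for clarity where a specialized tridiagonal routine would be more efficient.

```python
import numpy as np

def implicit_step(u_prev, u_curr, lam):
    """Advance one time step with the implicit scheme of Equation 10."""
    M = len(u_curr) - 1
    # Right-hand side of Equation 10 at the interior points i = 1..M-1
    rhs = ((2 - lam**2) * u_curr[1:-1]
           + 0.5 * lam**2 * (u_curr[2:] + u_curr[:-2])
           - u_prev[1:-1]
           + 0.25 * lam**2 * (u_prev[2:] - 2 * u_prev[1:-1] + u_prev[:-2]))
    # Tridiagonal left-hand side: (1 + lam^2/2) on the diagonal,
    # -lam^2/4 on the off-diagonals
    A = (np.diag(np.full(M - 1, 1 + 0.5 * lam**2))
         + np.diag(np.full(M - 2, -0.25 * lam**2), 1)
         + np.diag(np.full(M - 2, -0.25 * lam**2), -1))
    u_next = np.zeros_like(u_curr)
    u_next[1:-1] = np.linalg.solve(A, rhs)   # fixed ends stay at zero
    return u_next
```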
Results and Discussion:
This section contains the results of the finite difference algorithms compared with the exact solution. I will discuss the effects of changing the time and length steps, as well
as the stability of the explicit algorithm. I will also discuss the benefits of implicit
algorithms under certain conditions. Finally, I will discuss how the errors mentioned
here relate to the theoretical errors of the finite difference method.
To run my program I chose to use the constants shown in Table 1.
E              1e11
ρ              1000
Length         240
Maximum Time   0.03
c = √(E/ρ)     10000

Table 1: Constants Used
The reason I chose a high length and a low time was to guarantee convergence of the explicit method at low values of N. I learned very quickly that the value of λ, introduced in Equation 7, must be less than or equal to 1.0 to guarantee convergence of the explicit algorithm developed above. Once I picked these values, I could look at how changing the number of time steps, N, and the number of length steps, M, would affect my results.
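As a concrete check of how λ follows from these constants, take the first case of Table 2 below, with M = 6 and N = 8:

\[ h = \frac{l}{M} = \frac{240}{6} = 40, \qquad k = \frac{0.03}{8} = 0.00375, \qquad \lambda = \frac{ck}{h} = \frac{10000 \times 0.00375}{40} = 0.9375 \]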
\[ u(x,t) = \sin\frac{\pi x}{240}\,\cos\frac{10000\,\pi t}{240} \tag{11} \]
Table 2 shows the results of the program at different x values at the maximum time.
Keeping M constant at 6, I changed N and kept track of error by comparing it to the
actual solution in Equation 11.
M = 6, N = 8, λ = 0.9375:
u(x,t)       Actual       Prediction   Rel. Error   Abs. Error
u(40,.03)    -0.3535534   -0.355515    -0.555%      0.0019616
u(80,.03)    -0.6123724   -0.61577     -0.555%      0.0033976
u(120,.03)   -0.7071068   -0.71103     -0.555%      0.0039232
u(160,.03)   -0.6123724   -0.61577     -0.555%      0.0033976
u(200,.03)   -0.3535534   -0.355515    -0.555%      0.0019616

M = 6, N = 16, λ = 0.46875:
u(x,t)       Actual       Prediction   Rel. Error   Abs. Error
u(40,.03)    -0.3535534   -0.3657488   -3.449%      0.0121954
u(80,.03)    -0.6123724   -0.6334955   -3.449%      0.0211231
u(120,.03)   -0.7071068   -0.7314977   -3.449%      0.0243909
u(160,.03)   -0.6123724   -0.6334955   -3.449%      0.0211231
u(200,.03)   -0.3535534   -0.3657488   -3.449%      0.0121954

M = 6, N = 32, λ = 0.234375:
u(x,t)       Actual       Prediction   Rel. Error   Abs. Error
u(40,.03)    -0.3535534   -0.3681948   -4.141%      0.0146414
u(80,.03)    -0.6123724   -0.6377321   -4.141%      0.0253597
u(120,.03)   -0.7071068   -0.7363897   -4.141%      0.0292829
u(160,.03)   -0.6123724   -0.6377321   -4.141%      0.0253597
u(200,.03)   -0.3535534   -0.3681948   -4.141%      0.0146414

Table 2: Explicit Method Effect of Reduced Time Steps (Increased N)
As you can see, the error increases with the number of time steps. The error term that was neglected in Equation 7 follows in Equation 12. This equation shows that the error for each time step with j ≥ 2 is proportional to k². Therefore, the error should decrease with increasing N. This seems to contradict my previous statement, but on further inspection it is apparent that only the error per time step decreases. Because any finite difference algorithm uses the values of the previous step to calculate the new values, errors compound on each other, resulting in a total error that actually increases with N.
\[ \tau = \frac{1}{12}\left[k^2\,\frac{\partial^4 u}{\partial t^4}(x_i,\mu_j) - c^2 h^2\,\frac{\partial^4 u}{\partial x^4}(\xi_i,t_j)\right] \tag{12} \]
Because λ is inversely proportional to the number of time steps, and λ must be less than or equal to one for the explicit method to converge, N must often be large to get a converging solution. In this case, we must live with the error and understand that it is there.
Changing the value of M also has a significant effect on the results and error. Table 3 shows the effect of increasing M while holding all other inputs constant. You will see that as the value of M increases, the error decreases, until there is no error at all. This conclusion is in complete agreement with the error term in Equation 13, which was derived from Equation 12 by bringing h²/c² outside of the brackets. The error term is proportional to h²: as M increases, h decreases, so the error goes down as well.
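Explicitly, the factoring is just the definition of λ: since λ = ck/h,

\[ k^2 = \frac{\lambda^2 h^2}{c^2}, \qquad c^2 h^2 = \frac{h^2}{c^2}\,c^4 \]

so both terms of Equation 12 share the common factor h²/(12c²) that appears in Equation 13.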
\[ \tau = \frac{h^2}{12c^2}\left[\lambda^2\,\frac{\partial^4 u}{\partial t^4}(x_i,\mu_j) - c^4\,\frac{\partial^4 u}{\partial x^4}(\xi_i,t_j)\right] \tag{13} \]

M = 6, N = 16, λ = 0.46875:
u(x,t)       Actual       Prediction   Rel. Error   Abs. Error
u(40,.03)    -0.3535534   -0.3657488   -3.4494%     0.0121954
u(80,.03)    -0.6123724   -0.6334955   -3.4494%     0.0211231
u(120,.03)   -0.7071068   -0.7314977   -3.4494%     0.0243909
u(160,.03)   -0.6123724   -0.6334955   -3.4494%     0.0211231
u(200,.03)   -0.3535534   -0.3657488   -3.4494%     0.0121954

M = 12, N = 16, λ = 0.9375:
u(x,t)       Actual       Prediction   Rel. Error   Abs. Error
u(40,.03)    -0.3535534   -0.3540365   -0.1365%     0.0004827
u(80,.03)    -0.6123724   -0.6132084   -0.1365%     0.000836
u(120,.03)   -0.7071068   -0.7080721   -0.1365%     0.0009653
u(160,.03)   -0.6123724   -0.6132084   -0.1365%     0.000836
u(200,.03)   -0.3535534   -0.3540365   -0.1365%     0.0004827

M = 12, N = 15, λ = 1:
u(x,t)       Actual       Prediction   Rel. Error   Abs. Error
u(40,.03)    -0.3535534   -0.3535534   0.0000%      0
u(80,.03)    -0.6123724   -0.6123724   0.0000%      0
u(120,.03)   -0.7071068   -0.7071068   0.0000%      0
u(160,.03)   -0.6123724   -0.6123724   0.0000%      0
u(200,.03)   -0.3535534   -0.3535534   0.0000%      0

Table 3: Explicit Method Effect of Increased M
Viewed in terms of λ, the error approaches zero as λ approaches one. This is shown graphically in Figure 2.
[Figure 2: Relative Error vs. λ for constant N, E, and ρ (N = 16, E = 1e11, ρ = 1000); relative error on the vertical axis (0% to -16%), λ on the horizontal axis (0 to 1.2)]
Of course, as λ goes over one, as mentioned earlier, the explicit method will no longer converge. This is shown in Table 4, along with the results of the implicit method discussed in the formulation and in Equation 10. As the table shows, the explicit method does not converge, while the implicit method gives values within about 2%. The error in the implicit method could be reduced if a better algorithm were used to solve the system of linear equations instead of the Goal Seek function in Excel. However, to demonstrate convergence at values of λ greater than 1, this method was adequate.
Explicit, M = 36, N = 15, λ = 3:
u(x,t)       Actual       Prediction   Rel. Error   Abs. Error
u(40,.03)    -0.3535534    2.68E+05    -            -
u(80,.03)    -0.6123724    8.73E+05    -            -
u(120,.03)   -0.7071068    8.00E+05    -            -
u(160,.03)   -0.6123724   -1.49E+06    -            -
u(200,.03)   -0.3535534   -2.73E+06    -            -

Implicit, M = 36, N = 15, λ = 3:
u(x,t)       Actual       Prediction   Rel. Error   Abs. Error
u(40,.03)    -0.3535534   -0.3610345   -2.12%       -0.0074811
u(80,.03)    -0.6123724   -0.6246158   -2.00%       -0.0122434
u(120,.03)   -0.7071068   -0.7222455   -2.14%       -0.0151387
u(160,.03)   -0.6123724   -0.6249708   -2.05%       -0.0125546
u(200,.03)   -0.3535534   -0.3608687   -2.07%       -0.0073153

Table 4: Explicit vs. Implicit for λ = 3
I also looked at values of λ orders of magnitude higher than 1 (λ > 100). Neither of the methods discussed here converged in this case. This is due to the method used to determine u(xi,t1), shown in Equation 8b, which is not accurate for very large values of λ. A better approximation for the starting values could possibly solve this problem. Burden and Faires4 show that the error involved in predicting u(x,t1) is on the order of k³ + h²k². Because the time step k appears cubed, the error depends more strongly on the time step than on the length step. This supports my conclusion that the method is inaccurate for large values of λ: λ is proportional to k/h, so as k increases, so do λ and the error.
Conclusions:
1) The explicit finite difference method can be used accurately for any problem with values of λ ≤ 1. At higher values of λ, the algorithm becomes divergent.
2) As λ approaches 1 from below, the explicit method becomes more accurate until, with f(x) and g(x) in this form, the explicit algorithm exactly predicts the position of the wave at time t.
3) As the number of time steps increases, the error for each individual step decreases, but due to compounding error, the total error increases.
4) As the number of length steps increases, the error decreases as long as the algorithm remains stable.
5) An implicit finite difference method can be used to accurately predict longitudinal waves for values of λ > 1.
References:
1) http://www.geocities.com/CapeCanaveral/Hangar/2955/
2) Timoshenko and Goodier, "Theory of Elasticity", 2nd Edition, McGraw-Hill; New York, 1951. pp. 438-444
3) Walet, Niels; http://walet.phy.umist.ac.uk/2C2/Handouts/Handout5/node2.html, 1999
4) Burden and Faires, "Numerical Analysis", 6th Edition, Brooks/Cole; Pacific Grove, 1997. pp. 698-704
5) Smith, "Numerical Solutions of Partial Differential Equations", 2nd Edition, Clarendon Press; Oxford, 1978. pp. 186-200
6) Forsythe & Wasow, "Finite Difference Methods for Partial Differential Equations", John Wiley & Sons, Inc.; New York, 1960