[100] 004 Tutorial diff - msharpmath, The Simple is the Best

Chapter 4 Differentiation
Tutorial by www.msharpmath.com
revised on 2012.12.03
cemmath, The Simple is the Best

Chapter 4 Differentiation
4-1 Single Variable Function
4-2 Derivatives and Plot
4-3 Higher Derivatives
4-4 Numerical Theories
4-5 Vector Calculus
4-6 Multi-Variable Functions
4-7 Summary
In this chapter, we discuss the Umbrella ‘diff’, which handles differentiation of
functions. Vector calculus operations such as the gradient, divergence, curl, and
Laplacian are handled by the operators ‘.del’, ‘.del*’, ‘.del^’ and ‘.del2’. For
multi-variable functions, the gradient and the Jacobian matrix are implemented by
the operators ‘.del[]’ and ‘.del[][]’. Numerical theories for differentiating a
function are briefly addressed.
Section 4-1 Single Variable Function
โ–  Syntax of Umbrella ‘diff’. Differentiation of a function with respect to
a single independent variable can be written as
๐‘‘๐‘“
๐‘“(๐‘Ž + โ„Ž) − ๐‘“(๐‘Ž)
(๐‘Ž) = lim
โ„Ž→0
๐‘‘๐‘ฅ
โ„Ž
The syntax of our new Umbrella ‘diff’ for this situation is
diff .x(a) ( <<opt>>, f(x), g(x), h(x), … ) .Spoke .Spoke. …
and the relevant Spokes are
.central              central difference scheme (default Spoke)
.forward              forward difference scheme
.backward             backward difference scheme
.releps(e=1.e-6)      relative step size $\epsilon_{rel}$ (default Spoke)
.eps/abseps(e=1.e-6)  absolute step size $\epsilon_{abs}$
.togo(F,…)            take out derivatives as matrices F,…
.plot                 plot derivatives for the one-variable case
โ–  Central Difference Scheme. Differentiation by the central difference
scheme is defined as
$$ f'(x) \approx \frac{f(x+\epsilon) - f(x-\epsilon)}{2\epsilon} $$
where the default value of $\epsilon$ is $\epsilon = 10^{-5}$. The Spoke ‘central’ is used to
evaluate the derivative, for example
#> diff .x(0) ( exp(x) ).central.eps(0.1);
ans =
1.0016675
is equivalent to evaluating
$$ \frac{e^{0.1} - e^{-0.1}}{2(0.1)} = 1.0016675 $$
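The central difference quotient is easy to reproduce outside Cemmath. The following Python sketch (the helper name `central_diff` is ours, not part of Cemmath) evaluates the same quotient:

```python
import math

def central_diff(f, x, eps=1e-5):
    """Central difference approximation f'(x) ~ (f(x+eps) - f(x-eps)) / (2*eps)."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# Reproduce the tutorial example: d/dx exp(x) at x = 0 with eps = 0.1
print(central_diff(math.exp, 0.0, eps=0.1))   # 1.0016675...
```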
■ Forward and Backward Difference Schemes. Differentiation by the
forward- and backward-difference schemes is defined as
$$ f'(x) \approx \frac{f(x+\epsilon) - f(x)}{\epsilon}, \qquad f'(x) \approx \frac{f(x) - f(x-\epsilon)}{\epsilon} $$
respectively. Using $\epsilon = 0.1$, Spokes ‘forward’ and ‘backward’ are used to
evaluate the following two approximate derivatives
$$ \frac{e^{0.1} - e^{0}}{0.1} = 1.0517092, \qquad \frac{e^{0} - e^{-0.1}}{0.1} = 0.9516258 $$
by
#> diff .x(0) ( exp(x) ).forward.eps(0.1);
#> diff .x(0) ( exp(x) ).backward.eps(0.1);
ans =
1.0517092
ans =
0.9516258
Note that the central difference shows better agreement with the exact value of
$e^0 = 1$.
■ Relative Step Size. Consider the derivative of $f(x) = \sqrt{x}$ at $x = 10^{10}$.
If we apply the same procedure
#> diff .x(1e10) ( sqrt(x) ).forward.eps(1.e-5);
ans =
 0.0000044
the result differs from the expected value
$$ \frac{1}{2\sqrt{10^{10}}} = 0.000005 $$
This is due to the round-off that occurs in computing $x + \epsilon = 10^{10} + 10^{-5}$.
Therefore, we introduce a relative step size
$$ h = |\epsilon_{rel}\, x|, \qquad f'(x) \approx \frac{f(x+h) - f(x)}{h} $$
for the case of the forward difference scheme. In particular, $h = \epsilon_{rel}$ is used
when $h < \epsilon_{rel}$. For example,
#> diff .x(1e10) ( sqrt(x) ).forward.releps(1.e-5);
ans =
 0.0000050
yields the correct result $1/(2\sqrt{10^{10}}) = 0.000005$. If not specified explicitly, the
central difference scheme and the default value $\epsilon_{rel} = 10^{-5}$ are used. This
means that
.central .releps(1.e-5)
are the default Spokes of Umbrella ‘diff’.
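The effect of the relative step size can be checked directly in double-precision arithmetic. This Python sketch (helper name ours) contrasts an absolute step of 1.e-5 with a relative one for $\sqrt{x}$ at $x = 10^{10}$:

```python
import math

def forward_diff(f, x, h):
    """Forward difference approximation f'(x) ~ (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

x = 1e10
exact = 0.5 / math.sqrt(x)                 # exact derivative: 5e-6

# Absolute step: the increment 1e-5 is almost lost when added to 1e10
bad = forward_diff(math.sqrt, x, 1e-5)

# Relative step: h = eps_rel * |x| keeps the increment representable
good = forward_diff(math.sqrt, x, 1e-5 * abs(x))

print(exact, bad, good)   # `bad` is noticeably off, `good` is close to 5e-6
```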
โ–  Multiple Functions. When a single function is differentiated, the final
result is expressed as a number (i.e., double data). However, differentiation of
multiple functions is expressed as a matrix. For example,
#> diff .x(0) ( sin(x) );
#> diff .x(0) ( sin(x),cos(x) );
ans =
 1.0000000
ans =
[    1    0 ]
show the difference between the single and multiple functions.
Section 4-2 Derivatives and Plot
โ–  Derivative Functions. In the above, we evaluated the derivative at a
given point. If we consider an interval, the result becomes a derivative function.
This can be treated by the following syntax
diff .x[n=26,g=1](a,b) ( <<opt>>, f(x), g(x), h(x), … )
or
x = (a,b).span(n,g=1); diff [x] (a,b) ( <<opt>>, f(x), g(x), … )
The numerical result is expressed as a matrix. The differentiation method is by
default based on the central difference scheme and the relative step size $\epsilon_{rel}$, as
was discussed above. For example,
#> diff .x[5](0,1) ( x, x*x );   // .central.releps(1.e-5)
yields the following unnamed matrix
ans =
[    1    0   ]
[    1    0.5 ]
[    1    1   ]
[    1    1.5 ]
[    1    2   ]
When each derivative function is required separately, use of Spoke ‘togo’ fulfills
this purpose. For example,
#> diff .x[5](0,1) ( x, x*x ) .togo(A,B);
#> A; B;
creates two matrices
A =
[    1 ]
[    1 ]
[    1 ]
[    1 ]
[    1 ]
B =
[    0   ]
[    0.5 ]
[    1   ]
[    1.5 ]
[    2   ]
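A derivative function over an interval is just the point scheme applied at each sample. A small Python sketch (uniform spacing assumed, names ours) mimics the behavior of `diff .x[5](0,1) ( x, x*x )`:

```python
def diff_on_span(funcs, a, b, n, releps=1e-5):
    """Sample each f in funcs at n evenly spaced points on [a, b] and return
    the central-difference derivative at each point, one column per function."""
    rows = []
    for i in range(n):
        x = a + (b - a) * i / (n - 1)
        h = max(abs(x) * releps, releps)   # relative step, floored at releps near 0
        rows.append([(f(x + h) - f(x - h)) / (2 * h) for f in funcs])
    return rows

# Derivatives of x and x^2 on [0, 1] at 5 points: columns ~ 1 and 2x
for row in diff_on_span([lambda x: x, lambda x: x * x], 0.0, 1.0, 5):
    print(row)
```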
โ–  Plot of Derivatives. Using Spoke ‘plot’ or ‘plot+’, it is possible to plot
the derivatives. As an example, we consider two functions
$$ f(x) = \sin x + 3, \qquad g(x) = \sin 2x + 3 $$
the derivatives of which are
$$ f'(x) = \cos x, \qquad g'(x) = 2\cos 2x $$
Over an interval of 0 ≤ ๐‘ฅ ≤ 2๐œ‹, both the functions and their derivatives are
plotted by the following commands
#> plot .x(0,2*pi) ( sin(x)+3,sin(2*x)+3 );          // .x[101](a,b)
#> diff .x(0,2*pi) ( sin(x)+3,sin(2*x)+3 ) .plot+;   // .x[ 26](a,b)
where the results are shown in Figure 1.

Figure 1  Plot derivatives
Section 4-3 Higher Derivatives
■ Numerical Differentiation of Higher Derivatives. To evaluate higher
derivatives, the numerical theories discussed in the next section can be employed.
However, due to cancellation in the subtraction of close numbers, round-off becomes
the most critical factor in the approximation. For this reason, we support only the
second-order differentiation of functions. The modified Umbrella ‘diff2’ is
adopted to find the second-order derivative. For derivatives higher than
2nd-order, users can adopt the theoretical finite-difference approximations
discussed in the next section.
The 2nd-order derivatives are approximated as
$$ f_i'' = \frac{f_{i+1} - 2f_i + f_{i-1}}{h^2} + O(h^2) \qquad \text{central} $$
$$ f_i'' = \frac{f_{i+2} - 2f_{i+1} + f_i}{h^2} + O(h) \qquad \text{forward} $$
$$ f_i'' = \frac{f_i - 2f_{i-1} + f_{i-2}}{h^2} + O(h) \qquad \text{backward} $$
where the degree of accuracy varies depending on the numerical scheme. For the
special case of $f(x) = e^x$ and $h = 0.1$, $f''(0)$ can be evaluated in three
different ways by noting that
$$ e^{-0.2} = 0.8187308, \quad e^{-0.1} = 0.9048374, \quad e^{0} = 1, \quad e^{0.1} = 1.1051709, \quad e^{0.2} = 1.2214028 $$
The results are
$$ f_i'' = \frac{0.9048374 - 2(1) + 1.1051709}{(0.1)^2} = 1.0008336 \qquad \text{central} $$
$$ f_i'' = \frac{1 - 2(1.1051709) + 1.2214028}{(0.1)^2} = 1.1060922 \qquad \text{forward} $$
$$ f_i'' = \frac{0.8187308 - 2(0.9048374) + 1}{(0.1)^2} = 0.9055917 \qquad \text{backward} $$
These formulas can be confirmed by the following Cemmath commands
#> diff2 .x(0) ( exp(x) ).eps(0.1).central;
#> diff2 .x(0) ( exp(x) ).eps(0.1).forward;
#> diff2 .x(0) ( exp(x) ).eps(0.1).backward;
ans =
 1.0008336
ans =
 1.1060922
ans =
 0.9055917
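The three second-derivative quotients above can be checked with a few lines of Python (helper names ours):

```python
import math

def d2_central(f, x, h):
    """Central scheme: (f(x+h) - 2 f(x) + f(x-h)) / h^2."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

def d2_forward(f, x, h):
    """Forward scheme: (f(x+2h) - 2 f(x+h) + f(x)) / h^2."""
    return (f(x + 2 * h) - 2 * f(x + h) + f(x)) / h**2

def d2_backward(f, x, h):
    """Backward scheme: (f(x) - 2 f(x-h) + f(x-2h)) / h^2."""
    return (f(x) - 2 * f(x - h) + f(x - 2 * h)) / h**2

# f(x) = exp(x), exact f''(0) = 1, step h = 0.1
print(d2_central(math.exp, 0.0, 0.1))    # 1.0008336...
print(d2_forward(math.exp, 0.0, 0.1))    # 1.1060922...
print(d2_backward(math.exp, 0.0, 0.1))   # 0.9055917...
```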
โ–  Spokes for 2nd-Order Differentiation. All the Spokes for differentiation
are equally applicable to the second-order differentiation, i.e. ‘diff2’. For
example, Spoke ‘plot’ yields a plot of the 2nd-order derivative, as follows
#> diff2 .x(0,2) ( x^4-3*x^3+5*x-5 ).plot;
#> plot+ .x(0,2) ( x^4-3*x^3+5*x-5 );
where the results are shown in Figure 2.
Figure 2  Plot function and 2nd-order derivatives
Section 4-4 Numerical Theories
โ–  Finite Difference Approximation. In the literature, it is well known that
the finite-difference approximations are derived from the Taylor series. A few
examples for approximating $f'(x)$ are
$$ f_i' = \frac{3f_i - 4f_{i-1} + f_{i-2}}{2h} + O(h^2) $$
$$ f_i' = \frac{11f_i - 18f_{i-1} + 9f_{i-2} - 2f_{i-3}}{6h} + O(h^3) $$
In order to find the theoretical expressions for approximating derivatives, we
provide a Tuple function ‘dfdx’, the syntax of which is
(n1,n2).dfdx(order)
where order represents the order of differentiation, and both n1 and n2 are the
indices designating $f_{n1}$ and $f_{n2}$. The two approximations listed above can be
confirmed from the following Cemmath commands
#> (-2,0).dfdx(1) ;
#> (-3,0).dfdx(1) ;
which result in
// f'[i] = 1/(2h) ( f[i-2] -4f[i-1] +3f[i] ) + (-1/3)h^2 f^(3) + ...
// f'[i] = 1/(6h) ( -2f[i-3] +9f[i-2] -18f[i-1] +11f[i] ) + (-1/4)h^3 f^(4) + ...
Note that the error estimate is also printed as a result. The last result can be
interpreted directly as
๐‘“๐‘–′ =
−2๐‘“๐‘–−3 + 9๐‘“๐‘–−2 − 18๐‘“๐‘–−1 + 11๐‘“๐‘– 1 3
− โ„Ž ๐‘“′′′′ + โ‹ฏ
6โ„Ž
4
Also, the numerical values of the coefficients can be returned as a matrix if the
Tuple function ‘dfdxm’ is used. From the following commands
#> (-2,0).dfdxm(1) ;
#> (-3,0).dfdxm(1) ;
two matrices representing the coefficients
ans =
[    0.5   -2    1.5 ]
ans =
[   -0.33333    1.5   -3    1.8333 ]
are displayed on the screen.
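The coefficients returned by ‘dfdx’/‘dfdxm’ can be reproduced by matching Taylor-series terms on the stencil. The Python sketch below (our own helper, not the Cemmath implementation) solves the resulting linear system in exact rational arithmetic:

```python
from fractions import Fraction
from math import factorial

def fd_coeffs(n1, n2, order):
    """Finite-difference weights on the stencil f[i+n1], ..., f[i+n2] for the
    `order`-th derivative, found by matching Taylor-series terms:
        sum_j c_j * j**k = k! * (k == order),  k = 0 .. n-1
    so that f^(order)(x_i) ~ (1/h**order) * sum_j c_j * f(x_i + j*h)."""
    offsets = list(range(n1, n2 + 1))
    n = len(offsets)
    A = [[Fraction(j) ** k for j in offsets] for k in range(n)]
    b = [Fraction(0)] * n
    b[order] = Fraction(factorial(order))
    # Gauss-Jordan elimination with exact Fractions (the matrix is nonsingular)
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(n):
            if r != col and A[r][col] != 0:
                m = A[r][col] / A[col][col]
                A[r] = [a - m * p for a, p in zip(A[r], A[col])]
                b[r] -= m * b[col]
    return [b[r] / A[r][r] for r in range(n)]

# Weights of (f[i-2] - 4f[i-1] + 3f[i]) / (2h) for f', as in dfdxm above
print(fd_coeffs(-2, 0, 1))
```

For instance, `fd_coeffs(-2, 0, 1)` returns the weights 1/2, -2, 3/2, matching the first ‘dfdxm’ matrix above.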
โ–  Higher Derivative Approximation. For higher derivatives, it is also
possible to derive finite-difference approximations. From the following
Cemmath commands
#> for.n(2,4) (0,n).dfdx(2) ;
we get forward-difference approximations for the second derivatives
// f''[i] = 1/(h^2) ( f[i] -2f[i+1] +f[i+2] ) + (1/1)h^1 f^(3) + ...
// f''[i] = 1/(h^2) ( 2f[i] -5f[i+1] +4f[i+2] -f[i+3] ) + (-11/12)h^2 f^(4) + ...
// f''[i] = 1/(12h^2) ( 35f[i] -104f[i+1] +114f[i+2] -56f[i+3] +11f[i+4] ) + (5/6)h^3 f^(5) + ...
The above results can be expressed in a clearer form
$$ f_i'' = \frac{f_i - 2f_{i+1} + f_{i+2}}{h^2} + h f_i''' + \cdots $$
$$ f_i'' = \frac{2f_i - 5f_{i+1} + 4f_{i+2} - f_{i+3}}{h^2} - \frac{11}{12}h^2 f_i'''' + \cdots $$
$$ f_i'' = \frac{35f_i - 104f_{i+1} + 114f_{i+2} - 56f_{i+3} + 11f_{i+4}}{12h^2} + \frac{5}{6}h^3 f_i^{(5)} + \cdots $$
These expressions can be alternatively used to evaluate numerical
approximations of derivatives, and this point will be discussed later.
Section 4-5 Vector Calculus
โ–  ‘.del’ for Gradient. In the three-dimensional space, the gradient of a
function ๐‘“(๐‘ฅ, ๐‘ฆ, ๐‘ง) is defined as
∇๐‘“ = ๐ข
∇๐‘“ = ๐ž๐‘Ÿ
∇๐‘“ = ๐ž๐‘Ÿ
๐œ•๐‘“
๐œ•๐‘“
๐œ•๐‘“
+๐ฃ
+ ๐ค
๐œ•๐‘ฅ
๐œ•๐‘ฆ
๐œ•๐‘ง
๐œ•๐‘“
1 ๐œ•๐‘“
๐œ•๐‘“
+ ๐ž๐œƒ
+ ๐ž๐‘ง
๐œ•๐‘Ÿ
๐‘Ÿ ๐œ•๐œƒ
๐œ•๐‘ง
๐œ•๐‘“
1 ๐œ•๐‘“
1
๐œ•๐‘“
+ ๐ž๐œ‘
+ ๐ž๐œƒ
๐œ•๐‘Ÿ
๐‘Ÿ ๐œ•๐œ‘
๐‘Ÿ sin ๐œ‘ ๐œ•๐œƒ
where the cylindrical (๐‘Ÿ, ๐œƒ, ๐‘ง) and the spherical (๐‘Ÿ, ๐œ‘, ๐œƒ) coordinates are also
considered. The operator ‘.del’ is innovated to handle these gradients in three
coordinates.
For a given point $(a,b,c)$, the gradient $\nabla f$ is numerically calculated from
the following syntax
.del ( f(x,y,z) ) (a,b,c)
.del ( f(x,y,z) ) (a,b,c).cyl
.del ( f(x,y,z) ) (a,b,c).sph
In these cases, both the cylindrical and spherical coordinates must be expressed
in terms of x, y, z for consistency.
Operator ‘.del’ returns a three-dimensional vertex which can be easily
converted to a matrix of dimension 3 × 1 (the date structure of vertex will be
discussed later though). Another Spoke ‘eps’ controls the relative step size for
evaluating the partial derivative. To examine the role of Spoke ‘eps’, let us
execute the followings
#> for.n(1,5) .del( exp(x)*cos(y) )(2,pi/6,0).eps(10^-n);
The results are
ans = <    7.084   -4.008    0 >
ans = <    6.464   -3.726    0 >
ans = <    6.406   -3.698    0 >
ans = <    6.4     -3.695    0 >
ans = <    6.399   -3.695    0 >
It is obvious that the use of eps = 1.e-5 is reasonable, and therefore this is
assigned as the default value. Using this default value,
#> .del( x^3*y^2*z )( 2,3,4 ) ;
#> .del( x^3*y^2*z )( 3,pi/3,4 ) .cyl;
#> .del( x^3*y^2*z )( 3,pi/4,pi/3 ) .sph;
produce the following results
ans = <    432        192       72    >
ans = <    118.4       75.4     29.61 >
ans = <     17.44      14.8      7.851 >
■ Operator ‘.del*’ for Divergence. The divergence of a vector field
$\mathbf{u} = (u,v,w)$ is defined as
$$ \nabla\cdot\mathbf{u} = \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial w}{\partial z} $$
$$ \nabla\cdot\mathbf{u} = \frac{1}{r}\frac{\partial}{\partial r}(r u) + \frac{1}{r}\frac{\partial v}{\partial \theta} + \frac{\partial w}{\partial z} $$
$$ \nabla\cdot\mathbf{u} = \frac{1}{r^2}\frac{\partial}{\partial r}(r^2 u) + \frac{1}{r\sin\varphi}\frac{\partial}{\partial \varphi}(\sin\varphi\, v) + \frac{1}{r\sin\varphi}\frac{\partial w}{\partial \theta} $$
The divergence is numerically calculated from the following syntax
.del * ( u(x,y,z),v(x,y,z),w(x,y,z) ) (a,b,c)
.del * ( u(x,y,z),v(x,y,z),w(x,y,z) ) (a,b,c).cyl
.del * ( u(x,y,z),v(x,y,z),w(x,y,z) ) (a,b,c).sph
For example,
#> .del*( x*y^2*z, x*x*y, x*z*z ) (2,3,4);
#> .del * ( x*y^2*z, x*x*y, x*z*z ) (3,pi/3,4).cyl;
#> .del * ( x*y^2*z, x*x*y, x*z*z ) (3,pi/4,pi/3).sph;
yields the following results
ans =
 56.0000800
ans =
 35.7731456
ans =
 10.2560429
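The Cartesian case can be checked with a central-difference sketch in Python (helper name ours; this illustrates the numerical idea, not the ‘.del*’ internals):

```python
def divergence(u, v, w, p, releps=1e-5):
    """Central-difference divergence of the vector field (u, v, w) at point p,
    in Cartesian coordinates: du/dx + dv/dy + dw/dz."""
    comps = (u, v, w)
    total = 0.0
    for i, (fi, xi) in enumerate(zip(comps, p)):
        h = max(abs(xi) * releps, releps)
        plus  = list(p); plus[i]  = xi + h
        minus = list(p); minus[i] = xi - h
        total += (fi(*plus) - fi(*minus)) / (2 * h)
    return total

u = lambda x, y, z: x * y**2 * z
v = lambda x, y, z: x * x * y
w = lambda x, y, z: x * z * z
print(divergence(u, v, w, (2, 3, 4)))   # exact value is 56
```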
■ Operator ‘.del^’ for Curl. The curl of a vector field $\mathbf{u} = (u,v,w)$ is
defined as
$$ \nabla\times\mathbf{u} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z} \\ u & v & w \end{vmatrix} $$
$$ \nabla\times\mathbf{u} = \frac{1}{r}\begin{vmatrix} \mathbf{e}_r & r\mathbf{e}_\theta & \mathbf{e}_z \\ \dfrac{\partial}{\partial r} & \dfrac{\partial}{\partial \theta} & \dfrac{\partial}{\partial z} \\ u & rv & w \end{vmatrix} $$
$$ \nabla\times\mathbf{u} = \frac{1}{r^2\sin\varphi}\begin{vmatrix} \mathbf{e}_r & r\mathbf{e}_\varphi & r\sin\varphi\,\mathbf{e}_\theta \\ \dfrac{\partial}{\partial r} & \dfrac{\partial}{\partial \varphi} & \dfrac{\partial}{\partial \theta} \\ u & rv & rw\sin\varphi \end{vmatrix} $$
The curl is numerically calculated from the following syntax
.del ^ ( u(x,y,z),v(x,y,z),w(x,y,z) ) (a,b,c)
.del ^ ( u(x,y,z),v(x,y,z),w(x,y,z) ) (a,b,c).cyl
.del ^ ( u(x,y,z),v(x,y,z),w(x,y,z) ) (a,b,c).sph
For example,
#> .del ^ ( x*y^2*z, x*x*y, x*z*z ) (2,3,4);
#> .del ^ ( x*y^2*z, x*x*y, x*z*z ) (3,pi/3,4).cyl;
#> .del ^ ( x*y^2*z, x*x*y, x*z*z ) (3,pi/4,pi/3).sph;
yields the following results
ans = <
ans = <
ans = <
0
0
1.097
2
-12.71
-1.321
-36 >
1.047 >
5.424 >
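The Cartesian curl can likewise be sketched with central differences in Python (helper name ours, not the ‘.del^’ internals):

```python
def curl(u, v, w, p, releps=1e-5):
    """Central-difference curl of (u, v, w) at point p, Cartesian coordinates:
    (dw/dy - dv/dz, du/dz - dw/dx, dv/dx - du/dy)."""
    def d(f, i):
        # partial derivative of f with respect to coordinate i
        h = max(abs(p[i]) * releps, releps)
        plus  = list(p); plus[i]  = p[i] + h
        minus = list(p); minus[i] = p[i] - h
        return (f(*plus) - f(*minus)) / (2 * h)

    return (d(w, 1) - d(v, 2),
            d(u, 2) - d(w, 0),
            d(v, 0) - d(u, 1))

u = lambda x, y, z: x * y**2 * z
v = lambda x, y, z: x * x * y
w = lambda x, y, z: x * z * z
print(curl(u, v, w, (2, 3, 4)))   # close to ( 0, 2, -36 )
```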
■ Operator ‘.del2’ for Laplacian. The Laplacian of a function $f(x,y,z)$
is defined as
$$ \nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2} $$
$$ \nabla^2 f = \frac{1}{r}\frac{\partial}{\partial r}\!\left(r\frac{\partial f}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 f}{\partial \theta^2} + \frac{\partial^2 f}{\partial z^2} $$
$$ \nabla^2 f = \frac{1}{r^2}\frac{\partial}{\partial r}\!\left(r^2\frac{\partial f}{\partial r}\right) + \frac{1}{r^2\sin\varphi}\frac{\partial}{\partial \varphi}\!\left(\sin\varphi\,\frac{\partial f}{\partial \varphi}\right) + \frac{1}{r^2\sin^2\varphi}\frac{\partial^2 f}{\partial \theta^2} $$
Then, the Laplacian is numerically calculated from the following syntax
.del2 ( f(x,y,z) ) (a,b,c)
.del2 ( f(x,y,z) ) (a,b,c).cyl
.del2 ( f(x,y,z) ) (a,b,c).sph
An example for evaluating Laplacian is
#> .del2 ( x^2*y^2*z^3 ) (2,3,4);
#> .del2 ( x^2*y^2*z^3 ) (3,pi/3,4).cyl;
#> .del2 ( x^2*y^2*z^3 ) (3,pi/4,pi/3).sph;
which yields the following results
ans =
 2528.0051988
ans =
 645.6077851
ans =
 16.1024906
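The Cartesian Laplacian can be sketched by summing second central differences in each direction (helper name ours; a numerical illustration, not the ‘.del2’ internals):

```python
def laplacian(f, p, releps=1e-5):
    """Central second-difference Laplacian of f at p, Cartesian coordinates:
    sum over i of (f(..x_i+h..) - 2 f(p) + f(..x_i-h..)) / h^2."""
    total = 0.0
    for i, xi in enumerate(p):
        h = max(abs(xi) * releps, releps)
        plus  = list(p); plus[i]  = xi + h
        minus = list(p); minus[i] = xi - h
        total += (f(*plus) - 2 * f(*p) + f(*minus)) / h**2
    return total

f = lambda x, y, z: x**2 * y**2 * z**3
print(laplacian(f, (2, 3, 4)))   # close to the exact value 2528
```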
Section 4-6 Multi-Variable Functions
■ Operator ‘.del[]’ for Gradient. The gradient of a function
$f(x_1, x_2, \ldots, x_n)$ with multiple variables is defined as
$$ \nabla f = \mathbf{e}_1\,\frac{\partial f}{\partial x_1} + \mathbf{e}_2\,\frac{\partial f}{\partial x_2} + \cdots + \mathbf{e}_n\,\frac{\partial f}{\partial x_n} $$
The syntax for the gradient of a multi-variable function is
.del[] .x1.x2 … .xn ( f(x1,x2, …,xn) ) (c1,c2, …, cn)
where all the Hub variables x1, x2, …, xn are dummy, and the operator returns a
matrix (instead of a vertex). The Spoke ‘eps’ also applies to operator ‘.del[]’.
An example is taken for a function
$$ f(x_1, x_2, x_3, x_4) = x_1^3 x_2^2 x_3 x_4^2 $$
at the point $(x_1, x_2, x_3, x_4) = (2,3,4,5)$. For this case, let us find $\nabla f$ by the
following command
#> .del[] .x1.x2.x3.x4 ( x1^3*x2^2*x3*x4^2 ) (2,3,4,5);
which results in
ans =
[    10800    4800    1800    2880 ]
■ Operator ‘.del[][]’ for Jacobian Matrix. Let us consider a vector
function defined as
$$ (f_1, f_2, \ldots, f_m) $$
where each function $f_i$ is a multi-variable function of $x_j$, $j = 1,2,3,\ldots,n$. Then,
the Jacobian matrix $\mathbf{J}$ of dimension $m \times n$ is defined as
$$ \mathbf{J} = \left[\frac{\partial f_i}{\partial x_j}\right] = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \dfrac{\partial f_1}{\partial x_2} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\ \dfrac{\partial f_2}{\partial x_1} & \dfrac{\partial f_2}{\partial x_2} & \cdots & \dfrac{\partial f_2}{\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial f_m}{\partial x_1} & \dfrac{\partial f_m}{\partial x_2} & \cdots & \dfrac{\partial f_m}{\partial x_n} \end{bmatrix} $$
The syntax for the Jacobian matrix is
.del[][] .x1.x2 … .xn ( f1,f2,f3, … , fm ) (c1,c2, …, cn)
The operator ‘.del[][]’ plays a simple role of differentiation only, similar to
the operator ‘.del[]’.
Consider a vector function $\mathbf{f} = (x^2 y,\; x + 3y,\; e^x(y+1))$, and find its
Jacobian matrix at the point $\mathbf{r} = (2,3)$. This can be solved by
#> .del[][] .x.y ( x*x*y, x+3*y, exp(x)*(y+1) )(2,3);
and the result is
ans =
[    12        4      ]
[     1        3      ]
[    29.557    7.3891 ]
Therefore, we get the Jacobian matrix at the point $\mathbf{r} = (2,3)$
$$ \mathbf{J} = \left[\frac{\partial f_i}{\partial x_j}\right] = \begin{bmatrix} 12 & 4 \\ 1 & 3 \\ 29.557 & 7.3891 \end{bmatrix} $$
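A numerical Jacobian is just the multi-variable gradient applied row by row. This Python sketch (helper name ours, not the ‘.del[][]’ internals) reproduces the matrix above:

```python
import math

def jacobian(funcs, p, releps=1e-5):
    """Central-difference Jacobian J[i][j] = d f_i / d x_j at point p,
    using a relative step h = releps*|x_j|, floored at releps near zero."""
    J = []
    for f in funcs:
        row = []
        for j, xj in enumerate(p):
            h = max(abs(xj) * releps, releps)
            plus, minus = list(p), list(p)
            plus[j], minus[j] = xj + h, xj - h
            row.append((f(*plus) - f(*minus)) / (2 * h))
        J.append(row)
    return J

fs = (lambda x, y: x * x * y,              # f1 = x^2 y
      lambda x, y: x + 3 * y,              # f2 = x + 3y
      lambda x, y: math.exp(x) * (y + 1))  # f3 = e^x (y + 1)
for row in jacobian(fs, (2, 3)):
    print(row)   # rows close to [12, 4], [1, 3], [29.556, 7.389]
```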
Section 4-7 Summary
โ–  Derivatives with Umbrellas ‘diff’ and ‘diff2’. Derivatives of a
function are calculated by the following syntax.
• Point value (double)
  diff  .x(a) ( <<opt>>, f(x), g(x), h(x), … )
  diff2 .x(a) ( <<opt>>, f(x), g(x), h(x), … )
• Derivative function (matrix)
  diff  .x[n=26,g=1](a,b) ( <<opt>>, f(x), g(x), h(x), … )
  diff2 .x[n=26,g=1](a,b) ( <<opt>>, f(x), g(x), h(x), … )
  x = (a,b).span(n,g=1); diff  [x] (a,b) ( <<opt>>, f(x), g(x), h(x), … )
  x = (a,b).span(n,g=1); diff2 [x] (a,b) ( <<opt>>, f(x), g(x), h(x), … )
โ–  Spokes. A number of Spokes are available as follows
.central              central difference scheme (default Spoke)
.forward              forward difference scheme
.backward             backward difference scheme
.releps(e=1.e-6)      relative step size $\epsilon_{rel}$ (default Spoke)
.eps/abseps(e=1.e-6)  absolute step size $\epsilon_{abs}$
.togo(F,…)            take out derivatives as matrices F,…
.plot                 plot derivatives for the one-variable case
โ–  Finite Difference Approximation.
(n1,n2).dfdx(order)
(n1,n2).dfdxm(order)
โ–  Vector Calculus. Vector calculus (gradient, divergence, curl, and
Laplacian) can be numerically calculated by the following syntax.
.del   ( f(x,y,z) ) ( a,b,c )
.del * ( u(x,y,z),v(x,y,z),w(x,y,z) ) ( a,b,c )
.del ^ ( u(x,y,z),v(x,y,z),w(x,y,z) ) ( a,b,c )
.del2  ( f(x,y,z) ) ( a,b,c )
โ–  Multi-Variable Functions. For the gradient and Jacobian matrix of a
multi-variable function, we use
.del[]   .x1 .x2 … .xn ( f ) ( c1,c2, …,cn )    // gradient
.del[][] .x1 .x2 … .xn ( f1,f2,f3, … , fm ) ( c1,c2, …,cn )