
Actes, Congrès intern. math., 1970. Tome 1, p. 151 à 161.
METHODS AND PROBLEMS
OF COMPUTATIONAL MATHEMATICS
by G. I. MARCHUK
Computational mathematics, being part of mathematics, currently has at its disposal powerful techniques for solving problems of science and engineering. The range of computational methods is so wide that it is practically impossible to cover them fully in one report. A series of interesting investigations by Bellman, Dreyfus et al., devoted to dynamic programming and some related problems, was discussed at the previous Congress of Mathematicians. Therefore we shall confine ourselves to some selected questions connected with the theory of approximate operations in finite- and infinite-dimensional functional spaces with which the author has been concerned. Even so, however, it is impossible to cover many interesting studies in the field because of the time limit given to the report. For the same reason the author, regretfully, had to reduce to a minimum the references to the original studies.
Large-scale electronic computers gave rise to algorithmic constructions and mathematical experimentation over a wide area of science and engineering. This attracted
new research personnel to the problems of computational mathematics. The valuable
experience we had had in solving applied problems was later used to devise effective
methods and algorithms of computational mathematics.
The methods of computational mathematics are closely related to the state of the computer art. New concepts and methods in computational mathematics and its numerous applications are formed under the essential influence of every new stage of computer technology.
The standard of research in computational mathematics is largely dependent on
the actual connection with fundamental areas of mathematics. First of all I should
like to mention functional analysis, differential equations, algebra and logic, the
theory of probability, calculus of variations, etc. A mutual exchange of the ideas
between different branches of mathematics has been intensified in the recent decade.
This is true in the first place for computational mathematics which has used the results
of fundamental mathematical areas to develop new and more sophisticated methods
and to improve the old ones.
At the same time it should be emphasized that applications have an important influence on computational mathematics. Mathematical simulation, for instance, has often stimulated the discovery of new approaches which are now among the most valuable possessions of computational mathematics. Such applied areas as hydrodynamics,
atomic physics, mathematical economics and the control theory are most important
examples.
1. The theory of approximation, stability and convergence of difference schemes.
The wide use of finite-difference methods in the differential equations of mathematical physics has required a detailed study of those features of difference equations that affect, in the first place, the quality of difference schemes. Foremost among them are the stability and convergence conditions.
This unfavourable feature of difference equations, namely the possibility of instability, together with the corresponding studies of John von Neumann, initiated theoretical investigations aimed at determining the relation between convergence and stability and at finding effective stability criteria for difference schemes.
Later on several authors formulated the following fundamental theorem, called the equivalence theorem: if a difference scheme approximates a linear homogeneous differential equation for a properly posed problem, then the stability of the difference scheme is a necessary and sufficient condition for its convergence. The final formulation and the proof of this theorem for an abstract evolution equation in a Banach space were given by Lax. A generalization of the equivalence theorem to the non-homogeneous linear differential equation was given by Richtmyer. One can make the stability conditions of the scheme less strict provided that the initial data are sufficiently smooth. This idea is implemented in the Strang equivalence theorem using the concept of weak stability.
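In modern notation the equivalence theorem is usually stated for an abstract evolution equation; the following is a hedged restatement in standard notation, not a quotation of Lax's formulation.

```latex
% du/dt = Au on a Banach space, properly posed on 0 <= t <= T,
% C(\Delta t) a one-step difference operator consistent with the equation.
\text{stability:}\quad \|C(\Delta t)^{\,n}\| \le K
   \quad\text{for all } n,\ \Delta t \text{ with } 0 \le n\,\Delta t \le T;
\qquad
\text{convergence:}\quad
   \max_{0 \le n\Delta t \le T}\bigl\|C(\Delta t)^{\,n} u_0 - u(n\,\Delta t)\bigr\|
   \;\longrightarrow\; 0 \quad (\Delta t \to 0).
```

Under the consistency assumption, the theorem asserts that these two properties are equivalent.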
Speaking of effective stability conditions, it is necessary to mention the 1950 paper of John von Neumann and Richtmyer, in which a so-called local stability criterion was formulated. New notions were then introduced, such as the symbol of a difference scheme, the spectrum of a family of difference operators and the kernel of the spectrum of the family, which made it possible to estimate the norms of the powers of the step operators. These estimates were in many cases used effectively in the stability analysis.
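As a simple illustration, not drawn from the report itself, the symbol of the explicit scheme for the heat equation u_t = u_xx is obtained by substituting a Fourier mode, and the local criterion bounds it by one:

```latex
\frac{u_j^{n+1}-u_j^{n}}{\Delta t}=\frac{u_{j+1}^{n}-2u_j^{n}+u_{j-1}^{n}}{h^{2}},
\qquad u_j^{n}=g^{\,n}e^{ikjh}
\;\Longrightarrow\;
g(k)=1-\frac{4\,\Delta t}{h^{2}}\,\sin^{2}\frac{kh}{2},
```

and the requirement |g(k)| <= 1 for all k gives the familiar restriction \Delta t <= h^2/2.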
An interesting approach to difference schemes with variable coefficients is associated with the idea of dissipativity. This idea was implemented in the studies of Kreiss. His theorems relate the order of dissipativity of the difference equations approximating systems of hyperbolic equations to the order of their accuracy. Important results have been derived by a so-called energy method which is based on the concept of strong stability. The idea of the method is to choose some norm for the vector solution in which the norm grows from step to step not faster than 1 + O(Δt). The energy method was first introduced by Courant, Friedrichs and Lewy and developed by other authors, in particular by Ladyzhenskaya and Lees.
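The underlying estimate can be sketched in one line (the notation is mine): if a norm can be found in which

```latex
\|u^{n+1}\| \le (1 + c\,\Delta t)\,\|u^{n}\|
\;\Longrightarrow\;
\|u^{n}\| \le (1 + c\,\Delta t)^{\,n}\,\|u^{0}\| \le e^{\,cT}\,\|u^{0}\|,
\qquad 0 \le n\,\Delta t \le T,
```

then growth by not more than 1 + O(Δt) per step yields a uniform bound on any finite time interval.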
Here it is necessary to mention the theory of the convergence of difference schemes developed by Samarsky, who has used energy inequalities and a priori estimates. The theory gives necessary and sufficient stability conditions for two- and three-layer schemes, formulated in the form of inequalities involving the operator coefficients of the difference schemes.
Of late the interest of mathematicians has been attracted to the stability of boundary-value problems for hyperbolic equations. A considerable contribution here has been made by Kreiss, who has formulated necessary and sufficient stability conditions for some classes of problems. Ryabenky has studied in depth the theory of boundary-value problems for difference equations with constant coefficients. As before, the theory of difference equations for boundary-value problems of mathematical physics remains of supreme concern to mathematicians.
2. A numerical solution of the problems of mathematical physics.
The studies of approximation, stability and convergence have provided the necessary basis for wide-ranging research into effective difference schemes applied to the problems of mathematical physics. The algorithms of finite difference methods combine, as a rule, the construction of a difference analogue of the equation with the method of its solution. Therefore the advance of the constructive theory of finite difference methods depends on a mutually coordinated development of the two aspects mentioned above.
If we try to summarize the vast experience of recent years in the development of
finite difference methods we can conventionally distinguish some main trends.
2.1 One such trend is concerned with finding efficient algorithms for multi-dimensional stationary problems of mathematical physics.
As a result of the success achieved in the solution of simultaneous linear algebraic equations with Jacobi (tridiagonal) and block-tridiagonal matrices, a few excellent algorithms have emerged in which factorization of the difference operator is used. Special mention should be made of the different variants of the direct factorization method proposed at the Institute of Applied Mathematics (Academy of Sciences of the USSR), which have been effectively applied to the solution of various classes of problems.
One can see that, besides the exact factorization methods, there is a rapid development of approximate factorization methods, in which factorization of the operator is performed by means of iterations.
The early sixties were marked by a major contribution to computational mathematics associated with the names of Douglas, Peaceman and Rachford, who suggested the alternating direction method. The success of the method was ensured by a simple reduction of a multi-dimensional problem to a sequence of one-dimensional problems with Jacobi (tridiagonal) matrices, which are convenient to handle. The theory of the alternating direction method has been developed by Douglas and Gunn, Birkhoff, Wachspress, Varga and also by Kellogg, Bakhvalov, Vorobjov, Widlund et al.
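For the two-dimensional heat equation u_t = Λ₁u + Λ₂u, with Λ₁ and Λ₂ the one-dimensional second-difference operators along each coordinate, the Peaceman-Rachford scheme takes the following standard form, reproduced here only for illustration:

```latex
\frac{u^{n+1/2}-u^{n}}{\Delta t/2}= \Lambda_1 u^{n+1/2} + \Lambda_2 u^{n},
\qquad
\frac{u^{n+1}-u^{n+1/2}}{\Delta t/2}= \Lambda_1 u^{n+1/2} + \Lambda_2 u^{n+1},
```

so that each half-step requires the solution of tridiagonal (Jacobi-matrix) systems along one coordinate direction only.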
Later the Soviet mathematicians Yanenko, Diakonov, Samarsky and others developed the so-called splitting-up method. The point is that the approximation of the initial operator by each auxiliary operator separately is not required; it is enough that such an approximation hold on the whole, in special norms.
A series of investigations has been devoted to a choice of optimization parameters
of splitting-up schemes by means of spectral and variational techniques.
2.2 The experience we have in the solution of one-dimensional problems represents a solid basis for the development of algorithms for the problems of
mathematical physics. An important role in the development of new approaches to a
solution of non-stationary two-dimensional problems belongs to the alternating
direction method.
Further advancement of the methods for multi-dimensional non-stationary problems is connected with splitting-up techniques based, as a rule, on non-homogeneous difference approximations of the initial differential operators. The mathematical technique rests on the splitting of a compound operator into simpler ones. With this approach the given equation can be solved by integrating simpler equations.
In this case the intermediate schemes have to satisfy the approximation and stability
conditions only as a whole which permits flexible schemes to be constructed for practically all problems of mathematical physics.
Splitting-up schemes for implicit approximations have been suggested by Yanenko,
Diakonov, Samarsky et al. and applied in various problems. Such schemes have
stimulated a more general computational approach to the problems of mathematical
physics which has been called a weak approximation method.
The French scientists Lions, Temam, Bensoussan, Glowinski et al. have made an important contribution to the splitting-up methods and have theoretically substantiated a number
of new approaches. These investigations are especially important for fluid dynamics,
the theory of plasticity and the control theory. The method of decomposition and
decentralization formulated by these scientists should be specially mentioned. It
is closely connected with the method of weak approximation.
Recently a class of splitting-up schemes has been found which is equivalent in accuracy to the Crank-Nicolson difference scheme and is applicable to non-stationary operators. These schemes are absolutely stable for systems of equations with
positive semi-definite operators depending explicitly on space and time coordinates.
This method is easily extended to quasi-linear equations.
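The report does not write these schemes out; one representative construction of this kind, given here only as a sketch, is the symmetrized (two-cycle) splitting for du/dt + (A₁ + A₂)u = 0 with positive semi-definite A₁, A₂, in which Crank-Nicolson factors are applied in the order A₁, A₂ on one step and A₂, A₁ on the next:

```latex
\Bigl(I+\tfrac{\Delta t}{2}A_1\Bigr)u^{n+1/2}=\Bigl(I-\tfrac{\Delta t}{2}A_1\Bigr)u^{n},\qquad
\Bigl(I+\tfrac{\Delta t}{2}A_2\Bigr)u^{n+1}=\Bigl(I-\tfrac{\Delta t}{2}A_2\Bigr)u^{n+1/2},
```
```latex
\Bigl(I+\tfrac{\Delta t}{2}A_2\Bigr)u^{n+3/2}=\Bigl(I-\tfrac{\Delta t}{2}A_2\Bigr)u^{n+1},\qquad
\Bigl(I+\tfrac{\Delta t}{2}A_1\Bigr)u^{n+2}=\Bigl(I-\tfrac{\Delta t}{2}A_1\Bigr)u^{n+3/2}.
```

Each factor has norm at most one when the corresponding operator is positive semi-definite, which gives the absolute stability, and the symmetric ordering restores second-order accuracy over the double step.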
Lax and Wendroff have suggested a kind of predictor-corrector scheme. This approach is used in hydrodynamic, meteorological and oceanographic problems.
2.3 In recent years there has been a rapid development of the so-called particle-in-cell method suggested by Harlow and applied to multi-dimensional problems of mathematical physics. It is widely used to calculate multi-dimensional hydrodynamic flows with strong deformation of the fluid, large relative displacements and colliding surfaces. We can expect that in the years to come the applicability of the method will be extended to still wider classes of multi-dimensional problems.
2.4 The Monte-Carlo method suggested by John von Neumann and Ulam has been developed now for more than two decades. From the very beginning it turned out that the Monte-Carlo method was effective only on very fast computers, because a great number of samples is required to reduce the mean squared error of a solution. However, in spite of the difficulties of implementing this method on medium-scale computers, and perhaps because of them, the theory of the method has been considerably improved, which has increased its efficiency. The basic ideas intended for a considerable improvement of the method comprise the use of conditional probabilities and of statistical weight coefficients, which can be found when information on the solutions of conjugate (adjoint) equations is used, the latter being related to the essential functionals inherent in the problems.
The simplicity and universality of this method will undoubtedly make it an important
tool of computational mathematics.
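A toy illustration of the role of statistical weights (the integrand, the density and the function names below are illustrative assumptions of mine, not taken from the report): the functional J = ∫₀¹ e^(-5x) dx is estimated first by plain sampling and then with samples drawn from a density proportional to the integrand, each sample carrying the weight f(x)/p(x). In this idealized case the weight is constant, so the second estimator has essentially zero variance; in practice adjoint information is used to approximate such densities.

```python
import math
import random

def f(x):
    return math.exp(-5.0 * x)

def plain_mc(n):
    # uniform samples on [0, 1]; the variance is governed by the spread of f
    return sum(f(random.random()) for _ in range(n)) / n

def importance_mc(n):
    # importance sampling from p(x) = 5 exp(-5x) / c on [0, 1], c = 1 - exp(-5)
    c = 1.0 - math.exp(-5.0)
    total = 0.0
    for _ in range(n):
        x = -math.log(1.0 - c * random.random()) / 5.0   # sample from p by inversion
        p = 5.0 * math.exp(-5.0 * x) / c                  # density of the sample
        total += f(x) / p                                 # statistical weight
    return total / n

if __name__ == "__main__":
    exact = (1.0 - math.exp(-5.0)) / 5.0
    print(exact, plain_mc(10000), importance_mc(10000))
```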
2.5 Lately there has been much interest in variational methods applied to problems of mathematical physics. The variational methods of Ritz, Galerkin, Trefftz and others have long since become classical in computational mathematics.
Not long ago there emerged a new trend in variational methods, the so-called method of finite elements (or finite functions). Its main idea was expressed by Courant as far back as the nineteen-forties. The essence of this method is that one seeks an approximate
solution in the form of a linear combination of functions with compact support of the order of the mesh width h. In other words, one takes as trial functions special functions of polynomial form identically equal to zero outside a fixed domain having a characteristic dimension of several h's. The main problem here is the theory of approximation of functions by a given system of finite elements.
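In the simplest one-dimensional case the finite elements are the piecewise-linear "hat" functions; the following sketch (standard, not from the report) shows why the resulting Galerkin system is sparse:

```latex
\varphi_i(x)=
\begin{cases}
(x-x_{i-1})/h, & x_{i-1}\le x\le x_i,\\
(x_{i+1}-x)/h, & x_i\le x\le x_{i+1},\\
0, & \text{otherwise},
\end{cases}
\qquad
u_h(x)=\sum_i c_i\,\varphi_i(x),
```

and the Galerkin equations a(u_h, φ_j) = (f, φ_j) couple only neighbouring coefficients, since φ_i and φ_j have overlapping supports only for |i − j| ≤ 1.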
An important contribution to the finite element method has been made by Birkhoff, Schultz, Varga et al. A systematic study of the theory and applications of the method has been carried out by Aubin, Babuska, Fix and by Strang, Bramble, Douglas and others.
Usually the main obstacle one comes across in using variational methods is the choice of simple functions satisfying the boundary conditions. It can be overcome by means of special variational functionals. For this purpose one employs a so-called penalty method or weight method, which reduces the initial problem to one with natural boundary conditions. The finite element method is close in its idea to the method of spline functions.
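For the Dirichlet problem −Δu = f in Ω, u = g on Γ, the penalty idea can be sketched as follows (my notation): the boundary condition is not imposed on the trial functions but is added to the functional with a large weight,

```latex
J_\varepsilon(v)=\int_\Omega\Bigl(\tfrac12\,|\nabla v|^{2}-f\,v\Bigr)\,dx
 \;+\;\frac{1}{2\varepsilon}\int_\Gamma (v-g)^{2}\,ds ,\qquad \varepsilon>0,
```

so that the Dirichlet condition becomes a natural condition of the modified problem and is satisfied approximately as ε → 0.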
The finite element method is closely associated with the application of a variational
approach to constructing finite difference equations corresponding to differential
equations of mathematical physics. Lions, Cea, Aubin, Raviart and other authors
have contributed to this area of research.
There is no doubt that the scope of variational methods will grow as the problems
become more and more complicated. The variational approach in combination with
other methods will be a powerful tool in computational mathematics.
3. Conditionally properly posed problems.
The correctness (well-posedness) of a problem plays an important role in the numerical solution of the equations of mathematical physics. The concept of correctness was introduced by Hadamard at the beginning of our century. We know a variety of classical problems properly posed in the sense of Hadamard. However, with a more profound study of various problems in the natural sciences and engineering it became necessary to solve so-called conditionally properly posed problems. Tykhonov has formulated the requirements which proved to be natural in the formulation of problems that are improperly posed in the sense of Hadamard, and introduced the concept of regularization.
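In its simplest form (a sketch, not Tykhonov's original formulation) regularization replaces the unstable equation Au = f, given with data error δ, by the minimization of a stabilized functional,

```latex
M_\alpha(u)=\|Au-f_\delta\|^{2}+\alpha\,\Omega(u),\qquad \alpha>0,
```

where Ω is a stabilizing functional (for instance a norm of u or of its derivatives); if α = α(δ) → 0 is matched suitably to the data error, the minimizers converge to the desired solution.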
The results of the investigations of conditionally properly posed problems are
presented in M. M. Lavrentiev's well-known monograph " Some improperly posed
problems of mathematical physics ".
An interesting approach to the formulation of the improperly posed problems in
the sense of Hadamard is based on probabilistic methods. Most complete investigations have been made by M. M. Lavrentiev and Vasiliev. Different aspects of the
theory of these problems in mathematical physics are discussed by Jones, Douglas,
S. Krein, Miller, Cannon and others.
Lions and Lattes have formulated a numerical method for the inverse evolution
equation using a so-called quasi-inversion.
As is evident from the current approaches to solving conditionally properly posed problems, the techniques used here are closely associated with the theory of the optimization of computations, which is briefly reviewed later in this paper.
4. Numerical methods in linear algebra.
The solution of simultaneous algebraic equations and the computation of eigenvalues and eigenvectors of matrices are important problems of computational mathematics.
Speaking about the numerical methods and problems in linear algebra of recent years
it is necessary first of all to emphasize the growing interest in the solution of large
systems of the corresponding equations, in the solution of ill-conditioned systems and
in spectral problems for arbitrary matrices. Much attention has been paid to the use
of a priori information in the process of the solution. Under the influence of computer
development the old numerical methods in linear algebra have been reconsidered.
The increasing use of computers has stimulated a creation of new algorithms well
suited for automatic calculation.
4.1 Direct methods play an important role when simultaneous linear algebraic
equations are solved or inverse matrices and determinants are found.
Direct methods have been considerably developed first by Faddeeva, Bauer, Householder, Wilkinson and then by Henrici, Forsythe, Golub, Kublanovskaya, Voevodin
and others. Using some elementary transformations one can represent the initial
matrix as a product of two matrices, each being easily inverted.
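A minimal sketch of this idea in the triangular case (Doolittle elimination without pivoting, so it assumes nonzero leading minors; an illustration of mine, not a production algorithm): A is factored as L·U and the system is then solved by one forward and one back substitution.

```python
def lu_factor(A):
    # A = L U with L unit lower triangular and U upper triangular
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):                       # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):                   # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):                              # forward substitution: L y = b
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):                    # back substitution: U x = y
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

A = [[4.0, 3.0], [6.0, 3.0]]
L, U = lu_factor(A)
print(lu_solve(L, U, [10.0, 12.0]))                 # expect [1.0, 2.0]
```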
We used to compare computational methods according to the number of arithmetic operations and the memory requirements. Now we ought also to pay attention to their accuracy. This means that round-off error analysis has become an essential feature of the method itself.
The corresponding investigations were started by John von Neumann, Goldstine, Turing, Givens et al. A systematic study of errors was first made by Wilkinson. His results were later systematized in his excellent monograph "The algebraic eigenvalue problem", where the method of equivalent perturbations was taken as the basic mathematical technique. As a result, estimates of the norms of perturbations were obtained for all the fundamental transformations of linear algebra.
In parallel with the method of equivalent perturbations there was an intensive
development of the statistical error theory. The results obtained by Bakhvalov,
Voevodin, Kim et al. initiated an investigation of the real distribution of round-off errors.
The statistical methods are certain to play an important role in the round-off error
analysis.
4.2 Iterative methods remain very important in linear algebra. Active progress in these methods has resulted in a number of powerful algorithms which are effectively
used on computers.
At present there are several trends in the construction of iterative processes and methods aimed at minimizing the number of arithmetic operations needed to obtain a solution, with the emphasis put on the use of the spectral characteristics of the operators involved. The choice of the iteration parameters is part of the optimization of the computational algorithm. The major difficulty here is, as a rule, to determine the boundaries of the spectra of the matrices.
Spectral optimization of iterative methods stimulates the formulation of a number of problems. We shall discuss only two of them here.
The first is the Lanczos transform of arbitrary matrices, which has recently attracted more attention; it leads to an equivalent system of equations with a symmetric matrix whose spectrum occupies two segments symmetric with respect to zero.
The second problem is the search for effective methods intended to determine the matrix eigenvalue of minimum modulus.
Let us discuss the application of variational principles to iterative methods. Such methods allow a successive minimization of some functional which attains its minimum on the desired solution. There has been much interest in such problems. Kantorovich, Lanczos, Hestenes and Stiefel, as well as Krasnoselsky, Krein et al., have developed a variational approach to iterative methods. I should also like to mention the recent papers of Petryshyn, Forsythe, Daniel, Yu. Kuznetsov, Godunov and others.
When the variational approach to iterative methods is used, one can select relaxation parameters on the basis of a posteriori information obtained at each step. This is the case, for example, for the steepest descent method and the iterative method of minimal residuals. Herein lies the merit of the variational approach. The rate of convergence appears to be not lower than the rate obtained using Chebyshev polynomials.
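The Hestenes-Stiefel conjugate gradient method is the classic example of such a variational construction; the sketch below (for a symmetric positive definite matrix, with a small hypothetical example of my choosing) selects every step parameter from a posteriori information, namely the current residuals, while minimizing the energy functional F(x) = ½(Ax, x) − (b, x).

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cg(A, b, tol=1e-10, max_iter=100):
    # conjugate gradients for A x = b, A symmetric positive definite
    x = [0.0] * len(b)
    r = b[:]                        # residual b - A x for x = 0
    p = r[:]
    rr = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rr / dot(p, Ap)     # relaxation parameter from current data
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rr_new = dot(r, r)
        if rr_new < tol ** 2:
            break
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
print(cg(A, [1.0, 2.0]))            # expect approximately [0.0909, 0.6364]
```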
There are also probabilistic techniques intended for choosing the optimization parameters of iterative processes. A series of interesting results has been obtained by Vorobjov.
The Young-Frankel overrelaxation method has not yet lost its importance. It has become classical and is treated, with generalizations, in the monographs of Wasow and Forsythe, Varga, Isaacson et al.
4.3 Let us consider how the complete eigenvalue problem for arbitrary matrices can be solved by iterations.
We shall discuss only power methods, which have been advanced by Wilkinson, Bauer, Rutishauser, Collatz, Voevodin and by Francis, Kublanovskaya, Eberlein and many others. Until recently there have been effective eigenvalue algorithms only for symmetric matrices, for instance, the Jacobi method and the bisection method. It is hoped that the discovery of the LR-algorithm and the generalized method of rotation will allow us to deal with arbitrary matrices. At present different modifications of the QR-algorithm are being developed most intensively. These are widely used in science and engineering.
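For completeness, the simplest power iteration is sketched below (assumptions mine: a single eigenvalue dominant in modulus); the LR- and QR-type algorithms mentioned above can be viewed as far-reaching refinements of this idea of repeated multiplication combined with factorizations.

```python
def power_method(A, iters=200):
    # iterate x -> A x / ||A x||; x aligns with the dominant eigenvector,
    # and the scaling factor approaches the dominant eigenvalue in modulus
    n = len(A)
    x = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        y = [sum(a * xi for a, xi in zip(row, x)) for row in A]
        lam = max(abs(v) for v in y)        # scale to avoid overflow
        x = [v / lam for v in y]
    return lam, x

A = [[2.0, 1.0], [1.0, 2.0]]
print(power_method(A))                      # dominant eigenvalue is 3
```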
5. Optimization of numerical algorithms.
An important goal of computational mathematics is to find the most profitable methods for the solution of problems, i.e. to optimize algorithms. One must study the problem of optimization under given constraints by means of general mathematical theorems and estimate the minimal possible cost of solving a particular problem or a sequence of problems. Local optimization of one isolated part of a solution does not, in practice, resolve the problem we are interested in. However, if one can find the best way of handling every local problem using the existing computing facilities, one is thus led to its solution. This concept of optimization theory has been formulated by Sobolev and Babuska, and it represents sufficiently well the essence of the problem.
Yet in many cases it is either impossible to build an optimal algorithm or the latter
turns out to be very costly. Nevertheless it appears possible to build an algorithm
close to an optimal one. This is the case, for example, when asymptotically optimal
algorithms are constructed. It will be noted that at present the theory of asymptotic
estimates is an effective tool of algorithm optimization.
The concept of ε-entropy introduced by Kolmogorov has been very useful too. A hypothesis has been proposed that the efforts spent to find a solution are essentially associated in many instances with the ε-entropy of the set of elements on which the solution depends. Using the concept of ε-entropy one can estimate both upper and lower
bounds of the number of operations needed for the solution of many computational
problems.
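For reference, Kolmogorov's definition can be recalled in one line (standard notation, not quoted from the report):

```latex
H_\varepsilon(X)=\log_2 N_\varepsilon(X),
```

where N_ε(X) is the minimal number of sets of diameter at most 2ε needed to cover the set X; roughly speaking, bounds on the work of an algorithm then follow from the amount of information that must be processed to distinguish the elements of X to within ε.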
Sobolev, Bakhvalov, Lebedev and others have studied a number of algorithms for the
problems of mathematical physics using finite-difference methods.
A considerable contribution to the theory of computation and its optimization has been made by Babuska, Dahlquist, Henrici et al. Babuska, Vitasek and Prager have introduced the notion of a K-sequence of computational processes. This implies that if the length of a sequence of operations in the problems of mathematical analysis is increased, the accuracy of computation increases exponentially.
The idea has also been expressed of introducing operations on intervals. This trend, named interval arithmetic, can be applied to the study of approximation errors in mathematical analysis and to the analysis of round-off errors.
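A small self-contained sketch of the idea (the class and the example are mine; the outward rounding that a real interval library would add is omitted): every quantity is carried as a pair [lo, hi] guaranteed to contain the true value, so data uncertainty and rounding propagate as widening intervals.

```python
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return "[%g, %g]" % (self.lo, self.hi)

x = Interval(0.99, 1.01)          # a datum known only to one per cent
y = Interval(2.0, 2.0)
print(x * y - x)                  # encloses the exact value of 2x - x for every x in [0.99, 1.01]
```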
6. Trends in computational mathematics.
6.1 The progress in computational technology has had an important influence on many branches of computer science, which show a tendency towards integration. The relations between software, the methods of computational and applied mathematics, and the theory of programming and languages have become so close that the choice of a strategy for the solution of particular problems is now of paramount importance. Though optimization of the individual components of the computational process remains, as before, a fundamental factor of the theory, attention is becoming more and more concentrated on
optimization of the whole process. Optimization of computation is obviously one
of the central goals of computational mathematics which stimulates exploration of
new algorithms and new ways of their computer implementation.
6.2 The second trend is connected with the solution of whole classes of problems and with algorithm standardization. A large amount of computer-processed information must be systematized and put in order. The valuable experience which we have in the solution of the problems of science and engineering allows us in many cases to set as an ultimate goal the creation of universal methods suitable for handling more or less wide classes of mathematical problems of the same type. At present care must be taken not to waste the efforts of society on the creation of numerous individual algorithms for individual and rare problems. It seems that a rational strategy for the solution of various rare problems is to construct universal algorithms which adjust themselves to optimal operating conditions because they use a posteriori information. A rational strategy for the solution of frequently repeated problems is a careful implementation of specific algorithms.
These two approaches combined will help to save the social resources spent on the creation of effective software. The first steps have been made in the theory of universal algorithms which adjust themselves to optimal operating conditions, and a course of further research has been outlined.
6.3 Software is becoming a materialization of the society's intellect. The process
of the mathematization of sciences has given rise to an active development of the
methods to simulate the phenomena occurring in nature and society. High-speed,
large-memory computers of new generations can store immediately available valuable
information and multi-access computers allow new forms of man-machine interaction
using a conversational mode of operation. Therefore the standardization of software in general, and of computational algorithms in particular, is an urgent problem of scientific and technological progress.
6.4 The problem of software has stimulated the formulation of new problems in computational mathematics, such as the construction of grids for complicated domains. For two-dimensional domains this problem is close to an effective solution, while for three- and multi-dimensional domains it is only just being stated. This problem is closely connected with the construction of high-accuracy algorithms for large problems by difference, variational and other techniques, or perhaps by a combination of different methods. The solution of problems with non-linear monotone operators is especially important. The corresponding theory is at present being intensively developed.
6.5 The success achieved in analytic transformations on a computer brings us, in practice, to the solution of mathematical physics problems by the well-known techniques of continuous function analysis. As the supply of aids for analytic computations grows, these methods will penetrate more and more into software. The success achieved in analytic transformations on computers will give computer science new possibilities which should be taken into account even now.
Finally, I should like to note that the further development of computational mathematics depends on the standard of research in the fundamental branches of mathematics, the importance of the latter growing essentially in this age of great technological progress. Only a harmonious combination of research in all branches of mathematics will provide the necessary and favourable conditions for the self-development of mathematics and its applications.
MONOGRAPHS AND REVIEW PAPERS (*)
I. BABUSKA and S. L. SOBOLEV. — The optimization of numerical methods (Russian). Aplikace
Matematiky, 10, No. 2 (1965).
—, M. PRAGER and E. VITASEK. — The numerical solution of differential equations (Russian).
« Mir », M. (1965).
(*) References to the original literature mentioned in the present paper, as well as other papers on the subject, can be found in the bibliographies of the monographs and review papers listed here.
F. L. BAUER and C. T. FIKE. — Norms and exclusion theorems. Numerische Math., 2 (1960).
N. S. BAKHVALOV, N. M. KOROBOV and N. N. CHENTSOV. — Application of the number-theoretic
nets to the problems of approximate analysis (Russian). Trudy IV Vsesojuznogo sjezda, II,
Leningrad, 1961.
G. BIRKHOFF and S. MACLANE. — A survey of modern algebra (revised edition). MacMillan,
New York, 1953.
W. WASOW and G. FORSYTHE. — Finite-difference methods for partial differential equations.
John Wiley and Sons, Inc., 1959.
V. V. VOEVODIN. — Approximation and stability errors in direct methods of linear algebra
(Russian), M. (1959).
J. H. WILKINSON. — The algebraic eigenvalue problem. Oxford, Clarendon Press (1965).
S. K. GODUNOV and V. S. RYABENKY. — Introduction to the theory of difference schemes (Russian). Fizmatgiz (1962).
G. H. GOLUB and R. S. VARGA. — Chebyshev's semi-iterative methods, successive overrelaxation iterative methods and second order Richardson's iterative methods. Parts I, II.
Num. Math., v. 3 (1961).
A. A. DORODNITSYN. — A method of integral relations (Russian). Trudy IV Vsesojuznogo
sjezda, II, Leningrad, 1961.
J. DOUGLAS and H. RACHFORD. — On the numerical solution of heat conduction problems
in two and three space variables. Trans. Amer. Math. Soc. (1956).
— and J. E. GUNN. — A general formulation of alternating direction methods. Part I. Parabolic and hyperbolic problems. Numerische Math. (1964).
J. DOUGLAS, jr. and T. DUPONT. — Galerkin methods for parabolic equations. Synspade (1970).
E. G. DIAKONOV. — Some iterative methods for the systems of finite-difference equations, arising
in the solution of elliptic partial differential equations (Russian). Vychislitelnye Metody i
Programmirovanie III, Izdat. MGU (1965).
E. ISAACSON and H. B. KELLER. — Analysis of numerical methods. John Wiley and Sons, Inc. (1966).
L. V. KANTOROVICH. — Functional Analysis and Applied Mathematics, UMN, 3, vyp. 2,
89 (1948).
— and G. P. AKILOV. — Functional analysis in normed spaces (Russian). Fizmatgiz (1959).
L. COLLATZ. — Funktionalanalysis und numerische Mathematik. Springer-Verlag (1964).
A. KOLMOGOROFF. — Über die beste Annäherung von Funktionen einer gegebenen Funktionenklasse, Ann. of Math., 37 (1936).
N. M. KOROBOV. — The calculation of multiple integrals by method of optimal coefficients,
Vest. MGU, 4, 19 (1959).
H.-O. KREISS. — On difference approximations of the dissipative type for hyperbolic differential equations, Comm. Pure Appl. Math., vol. XVII, No. 3 (1964).
M. M. LAVRENTIEV. — Some improperly posed problems of mathematical physics. Springer Tracts in Natural Philosophy, vol. 11 (1967).
O. A. LADYZHENSKAYA. — A mixed problem for a hyperbolic equation (Russian). Gostehizdat,
M. (1953).
P. D. LAX. — Nonlinear partial differential equations and computing, SIAM Review, vol. 11, No. 1 (1969).
C. LANCZOS. — Applied analysis. Prentice-Hall, Inc., Englewood Cliffs, New Jersey (1956).
R. LATTES et J. LIONS. — Méthodes de quasi-réversibilité et applications. Dunod, 1967.
J. LIONS. — Quelques méthodes de résolution des problèmes aux limites non linéaires. Dunod
(1969).
G. I. MARCHUK and N. N. YANENKO. — The application of the splitting-up method (fractional
steps) to the problems of mathematical physics. IFIP, New York (1965).
—. — On the theory of splitting-up methods. Maryland, Synspade (1970).
J. P. AUBIN. — Behaviour of the error of the approximate solutions of boundary value problems for linear elliptic operators by Galerkin's and finite difference methods, Ann. Scuola Norm. Sup. Pisa, XXI (1967).
D. W. PEACEMAN and H. H. RACHFORD, Jr. — The numerical solution of parabolic and elliptic differential equations, J. Soc. Industr. Appl. Math., 3 (1955).
R. D. RICHTMYER and K. W. MORTON. — Difference Methods for initial-value Problems (part 1,
General considerations; part 2, Applications). Interscience publishers, a division of John
Wiley and Sons, New York, London, Sydney (1967).
H. RUTISHAUSER and H. R. SCHWARZ. — Handbook Series Linear Algebra. The LR transformation method for symmetric matrices, Numerische Math. (1963).
V. S. RYABENKY and A. F. FILIPPOV. — On the stability of finite difference equations (Russian),
Gostehizdat (1956).
A. A. SAMARSKY. — Lectures on the theory of finite difference schemes (Russian), M. (1969).
S. L. SOBOLEV. — Lectures on the theory of numerical integration formulas. Parts I, II (Russian),
Izd. NGU (1964).
G. STRANG. — The finite element method and approximation theory, The Symposium on the
Numerical Solution of Partial Differential Equations. The Univ. of Maryland (1970).
A. N. TYKHONOV. — Regularization of improperly posed problems (Russian), Dokl. Akad.
Nauk SSSR, 153, No. 1 (1963).
D. K. FADDEEV and V. N. FADDEEVA. — The computational methods of linear algebra (Russian),
M.-L., Fizmatgiz (1969).
G. FORSYTHE and C. B. MOLER. — Computer solution of linear algebraic systems, Prentice-Hall, Inc., Englewood Cliffs, New Jersey (1967).
A. S. HOUSEHOLDER. — The theory of matrices in numerical analysis, Blaisdell, New York
(1964).
P. HENRICI. — Discrete variable methods in ordinary differential equations. John Wiley and Sons,
Inc. (1962).
N. N. YANENKO. — The method of fractional steps for multi-dimensional problems of mathematical physics (Russian), « Nauka », Novosibirsk (1967).
Computing Center
Novosibirsk 90 (U. S. S. R.)