Proceedings CESA 2003 conference - July

The Other Side of Moore's Law :
The Rise of Scientific Computing in the 20th Century
Robert Vichnevetsky
Rutgers University
President, IMACS
vichneve@cs.rutgers.edu
The rise of computing is all too often presented as a succession of innovations, hardware developments and novel ideas used as a thread to reconstruct the past. But this is only part of the story. What is missing, what needs to be emphasized more, is what computers are used for, what applications were important enough to bring about the finances that have gone into their development. While most have heard of Moore's law, which predicts the rise in the number of transistors that find their place on a single computer chip (from 32 in the mid-1960's to something on the order of 100 million today), little is said about where the demand for the corresponding increase in computing power came from, and few are aware of the emergence of new applications, new scientific disciplines, that have been enabled along the way. But it is precisely the (political, economic, commercial, long and short term) importance of those applications that has been at the source of those sine qua non finances, of the material support without which the modern development of electronic computers would simply not have taken place. Many brilliant ideas and innovations never amounted to anything because financial support, motivated by the need or the promise of benefits of significant applications, was not there at the time. To place this in perspective, one may note that today's overall funding of computing and information processing research and development reaches well into the tens of billions of dollars a year, more than the Gross Domestic Product of a number of countries in the world. And virtually all of it is, in one form or other, earmarked for specific applications.
It was for what we now call scientific computing - solving problems related to phenomena of the physical world - that the first generation of electronic computers came about half a century ago. It is still for scientific computing that today's leading-edge machines, those that make the news, are on the drawing board. This is a path that I shall try to illustrate with landmarks that, in line with my previous remarks, give applications their proper place in the story, a story that has changed much of the face of the Earth.
the rise of large numbers
The significance of Moore's law is of course not just in its prediction of smaller and smaller computer components, but more so in the resulting exponential growth in the speed of computation and in the amount of data that can be handled in solving a single problem.
One of the things that characterize science at the end of the 20th century is, expressed in numbers of independent parameters, the sheer size of the objects it deals with.
some history
It is appropriate to go back to questions that were of concern to natural philosophers of the Renaissance (why that is relevant shall become clear later on). One of importance that had been around for centuries was that of explaining the motion of celestial objects, in particular the motion of the Sun's planets. Perhaps the first "exact" mathematical description of the latter was that given by Kepler. A fortunate circumstance concerning what is known as Kepler's laws was that the Greeks had, some two thousand years earlier, developed a branch of geometry known as the theory of conic sections (Apollonius - ca 200 B.C.). Johannes Kepler (1571-1630) managed to show that the planets move around the Sun following elliptical trajectories: ellipses happen to be one of the families of curves that are part of Apollonius' geometry. This was an empirical result, a mathematical formulation indeed, but not supported by any theoretical explanation.
Almost contemporaneous contributions were those of Galileo Galilei (1564–1642) who
was well aware of the importance of mathematics in the natural sciences (or natural philosophy, as it
was still called) as illustrated by the often quoted :
Philosophy is written in that great book which ever lies before our eyes. I mean the
universe, but we cannot understand it if we do not first learn the language and grasp the
symbols in which it is written. This book is written in the mathematical language and the
symbols are triangles, circles and
other geometrical figures without whose help it is
humanly impossible to comprehend a single word of it, and without which one wanders in vain
through a dark labyrinth.
Galileo Galilei “The Assayer” 1623
He had managed to derive algebraic equations that describe uniformly accelerated motion,
leading him to find that the trajectory of a projectile above the earth is a parabola, another curve in
the family of conic sections. This he did with rational reasoning, considering finite time increments
across which things could be managed with simple mathematics.
Galileo Galilei, "Two New Sciences", 1638
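In modern notation (which is of course not Galileo's own), the result of his finite-increment reasoning can be restated in a few lines: uniform horizontal motion combined with uniformly accelerated fall gives

$$x(t) = v_0 t, \qquad y(t) = h - \tfrac{1}{2} g t^2, \qquad \text{hence} \qquad y = h - \frac{g}{2 v_0^2}\, x^2 ,$$

which is indeed a parabola, one of Apollonius' conic sections.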
Conditions were right for what was to come next, which is what others did. Letting the finite increments used by Galileo become infinitely small, and accepting in so doing the concept of limits, of infinitesimals, gave a basis for the mathematical definition of continuous change, of time-dependent velocity, of acceleration. This was to mature into calculus, something that was accomplished by several philosophers (not always in agreement as to who was the father of which idea). They included Christiaan Huygens (1629-1695), Isaac Newton (1642-1727) and Gottfried Wilhelm Leibniz (1646-1716).
The most celebrated among them was perhaps Isaac Newton, not only because he was one of the inventors of calculus (which he called the "theory of fluxions") but perhaps more so because he applied it successfully to showing that mechanics is governed by laws (among which he included the principle of universal gravitation), laws that may be expressed in the language of this new tool. The verification of his theories - or rather confirmation, since this had been done at least partially by others, in particular Huygens, who is somewhat forgotten in history - was provided by showing that Kepler's elliptical orbits, and by extension the dynamics of much of the physical world, could be explained by writing equations, then solving those equations analytically (this last point is not to be overlooked, an aspect on which I shall further comment later on). That all of this was abundantly documented in Newton's "Principia" (1687) did in no small way contribute to his fame (if not "publish or perish", it was more a case of "publish and be remembered").
Everything in the physical world could now be computed, predicted (at least so it came to be
believed). Rational thought had replaced dogma and authority that had prevailed for millennia in the
natural sciences. This was indeed, as it has been called, a scientific revolution (if not “the” scientific
revolution).
where computers come in
But what does this have to do with computers ? The relation with computers comes from the
simple fact that not all of the problems coming from the natural world, those described with
mathematics, with differential equations, can be solved analytically. The "two body problem" can be (which is what Newton had done when confirming Kepler's laws analytically), but the "three body problem" (e.g. the motion of the Moon under the influence of both the Earth and the Sun) cannot.
Then there are a great many problems that could theoretically be solved, but doing so with traditional pen and paper was, because of size or complexity, far too cumbersome to be feasible. Of the problems coming from simple physics and related sciences, and later from the emerging industrial developments, the number of those that could actually be solved (i.e. that produced useful numbers) was very small indeed, in spite of the vast arsenal of new tools in applied mathematics that had been vigorously developed for that purpose after the appearance of calculus.
All of this had created a situation where machines that could compute, assist in solving those
many problems for which solutions were not accessible had become an attractive proposition. But
things had to wait for appropriate conditions, appropriate technology to come about.
Devices computing with beads had been used at least since the Romans; then came calculators with toothed wheels (Pascal - 1642, Leibniz - 1672, Babbage in the 1820's-30's), but they remained of limited power.
Then came the feasibility of machines that perform the mathematical operations of calculus. One of the first on record emerged when James Thomson invented a mechanical device executing (analogically, as we say today) the operation of integration. It was not the first man-made analog integrating device, by the way: water clocks, used by the Egyptians several millennia earlier and later by the Chinese, were also analog computers, essentially counting time by the integration of a flow of water. Lord Kelvin, who was James Thomson's brother, came up with the idea of hooking together several of those devices with rotating shafts and gears to obtain contraptions capable of more. Whereupon he wrote1:
“….thus I was led to a conclusion that was wholly unexpected; and it seems to me very
remarkable that the general [differential] equation of the second order with variable coefficients
may be rigorously, and in a single process solved by a machine”
1 Thomson W. (Lord Kelvin), Proc. Roy. Soc. 24, 1876
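To make Kelvin's remark concrete, the short Python sketch below emulates digitally what chaining two integrators does. It is only an illustrative modern rendering of the principle (the original device worked with rotating shafts and gears, and the function name and numbers here are invented for the example): a second-order equation with variable coefficients, y'' + a(t) y' + b(t) y = f(t), is solved "in a single process" by feeding the output of each integrator back into the setup.

import math

# Digital emulation (illustrative only) of two chained integrators solving
#   y'' + a(t)*y' + b(t)*y = f(t)
def solve_second_order(a, b, f, y0, yp0, t_end, dt=1e-4):
    t, y, yp = 0.0, y0, yp0
    while t < t_end:
        ypp = f(t) - a(t) * yp - b(t) * y   # the "adder": forms y'' from the feedback loop
        yp += ypp * dt                      # first integrator:  y'' -> y'
        y += yp * dt                        # second integrator: y'  -> y
        t += dt
    return y

# Example: y'' + y = 0 with y(0)=0, y'(0)=1 has the exact solution sin(t)
print(solve_second_order(lambda t: 0.0, lambda t: 1.0, lambda t: 0.0,
                         0.0, 1.0, math.pi / 2))   # close to 1.0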
But the technology of the time did not allow for the construction of such machines, except maybe for the solution of very simple equations. Nor was there much of a demand for solving practical problems of the kind; hence there were no sources of finances for further exploration of the possibilities, and not much happened to those Thomson/Kelvin contributions.
It was later - in the early 20th century, with the rise of industry, both private and governmental, war related - that an increased demand and funding for obtaining solutions to sizeable scientific problems surfaced (by "scientific" I mean mostly problems formulated with differential or algebraic equations). And new technology not available in the 19th century was by now emerging. This allowed in particular for the development of torque amplifiers, electromechanical devices capable of transmitting the position of a rotating shaft, transmitting information with minimal need for either force or energy, making it now possible to build larger machines espousing Lord Kelvin's concept. The first to use this new development were Vannevar Bush and colleagues at the Massachusetts Institute of Technology. Interestingly though, H. Hazen (Bush's collaborator) said that neither of them had been aware at the time of the Thomson/Kelvin precedent2.
Bush's differential analyzers (a strange choice of words, I was told by an old colleague who had been part of the team building one of those at the National Physical Laboratory in Teddington, England - strange since the machines did not differentiate nor analyze anything) were analog computers. They were fully operational, successful at what they were doing. Copies were made both in the U.S. and elsewhere. One had been built with Meccano parts in England in 1934 by D.R. Hartree and colleagues; it did well and was followed by more substantial machines that were used for calculations in theoretical physics3. The early demand, and significant use, in the U.S. had been in the late 1930's and early 40's for computing trajectories of artillery shells needed as part of the ongoing war preparation effort, something that was otherwise done by large numbers, armies, of human "computers" sitting at a desk, equipped with pencil, paper and (noisy, it has been noted) mechanical desk calculators4. This was taking place in particular at the Proving Ground in Aberdeen, Maryland, where the Army Ballistics Research Laboratory was located, not far from the University of Pennsylvania in Philadelphia, with which the Laboratory maintained a relationship. Copies of Bush's MIT machines had been built for both Aberdeen and Philadelphia and were by the late 1930's used intensively for those ballistics calculations.
It is in that context that electronics entered the scene.
computing with electronics
The beginning of the new era (appropriately referred to as the computer revolution) came in
the early 1940’s with the construction of a machine that was to reproduce with digital electronics
what Vannevar Bush’s differential analyzers were doing mechanically. A proposal made by John
Mauchly who was on the staff of the University had been approved and funded by the U.S. Army (a
decision with which H.H. Goldstine, then in charge of those calculations at Aberdeen, had a lot to do). It resulted in the computer called ENIAC (for Electronic Numerical Integrator and Computer, where the "Numerical Integrator" part was to specifically refer to the numerical integration of differential equations)5. The internal information flow of ENIAC was similar to that of Bush's differential analyzer, but the mathematics were implemented with digital electronics instead of mechanics. It had been just a case of proving the feasibility of replacing mechanical integrators with digital accumulators, driven by the knowledge that, if it did work, electronics had the potential of being much faster. And it worked.

2 in Annals of the History of Computing, Vol. 3, No. 1, January 1982
3 see D.R. Hartree, "Calculating Instruments and Machines", Cambridge University Press, 1950
4 from N. Wiener, commenting on his time at Aberdeen in the 1930's: "..When we were not working on the noisy hand-computing machines which we knew as 'crashers', we were playing bridge together … using the same computing machines to record our scores", in Ex-Prodigy (1953)
Then, on the more prosaic side, there was also the promise of easier programming. The actual implementation of a new program on the mechanical differential analyzer (i.e. preparing it for the solution of a new problem) was a long process done with wrenches and lead hammers, rearranging shafts and gears. With electronics, it would be just a matter of changing electrical connections (something that became even simpler later on - just reading in new instructions - after the concept of stored programs had been introduced).
By contrast with Vannevar Bush's machines, which were analog, continuous-time computers, the integration of differential equations was now to be performed by numerical, discrete-time approximation, using the theoretical tools of what we today call numerical analysis ("numerical analysis" as a name did not make it until the 1950's). This is the way human "computers" were doing it at Aberdeen and elsewhere - though limited by the slowness of the process. ENIAC ended up being thousands of times faster than a human "computer", and methods of numerical approximation were, following the rise in computer power, to become serious business. Numerical analysis became a significant branch of applied mathematics, with its own journals, its own professional societies, its own sources of funding.
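As an illustration of what such a discrete-time approximation looks like, here is a small Python sketch of a shell trajectory stepped forward in finite time increments. It is only a schematic modern rendering - the drag model and the numbers are invented for the example, and this is not the actual procedure used for the ballistic tables - but it is the kind of step-by-step calculation that human "computers", and then ENIAC, carried out.

import math

# Schematic sketch of stepping a shell trajectory forward in finite time
# increments (illustrative drag model and values, not real ballistic data).
def shell_range(v0, angle_deg, k=1e-4, g=9.81, dt=0.01):
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x = y = 0.0
    while y >= 0.0:
        v = math.hypot(vx, vy)
        vx -= k * v * vx * dt          # air drag slows the horizontal motion
        vy -= (g + k * v * vy) * dt    # gravity plus drag act on the vertical motion
        x += vx * dt
        y += vy * dt
    return x

print(round(shell_range(400.0, 45.0)), "metres (approximately)")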
ENIAC at the Moore School of the University of Pennsylvania in February 1946, when it was shown to the Press. It had become operational in 1945 but was kept secret from the public until the war was over.
5 Mauchly had evidently taken the idea of computing with electronic tubes from John Vincent Atanasoff, who had started building such a machine at Iowa State University in 1937, and who was visited by Mauchly for a week in the summer of 1941.
Interesting (philosophically speaking) is the fact that what numerical approximation does is return to a description of change in natural systems with discrete increments, pushing aside the concepts of infinitesimals and continuity that had been essential to the birth of calculus in the 17th century. Nor is the computer solution of differential equations, analog or digital, concerned with whether those equations can be solved analytically or not: it just does not matter any more.
Where then does calculus fit in the grand scheme of modeling nature? Its original success had been not just in describing dynamics, but equally so in its ability to provide analytic solutions to the resulting differential equations (at least some of them), something that computers have now shown is not essential. The evident place that calculus had occupied in the development and practice of the natural sciences is no longer what it used to be. To be noted of course is that there remain entire (and essential) provinces of applied mathematics and mathematical physics that investigate properties of the natural world without the need to actually solve problems to obtain what are called particular solutions (the kind of thing computers do).
So, the most significant contribution of the 17th century had perhaps been that of accepting the concept of infinitesimals, of continuity, of continuous change, and of providing a language (calculus6) for dealing algebraically with those quantities. This may appear trivial to today's students and practitioners of the sciences, those concepts having been spoon-fed to them when they were young and unaware, but, however surprising this may appear to us, it had taken a couple of millennia for natural philosophers to get there. And even well after the correctness of using infinitesimals had been demonstrated by Newton and others, there was still disbelief in the air, for instance that voiced by Bishop George Berkeley:

..And what are these .. evanescent increments? They are neither finite quantities, nor quantities infinitely small, nor yet nothing. May we not call them ghosts of departed quantities?

"The Analyst" 1734

6 the form of that language we use today, the standard notations of calculus, are due to Leibniz.
But back to computers: other numerical machines had been under development at the time, some using electromechanical relay technology instead of electronic tubes, for instance the Mark I funded by the U.S. Navy at Harvard - a technology that was slower than electronics and was soon to become obsolete7. Also to be mentioned is the famous electronic machine called Colossus (or rather Colossi - several were built), developed in England, where it became operational in 1943 to crack encrypted German radio messages during the Second World War, a development with which Alan Turing had been associated. The Colossi left the British with experience in digital electronics. But details about the machine's architecture were kept secret after the war, whence Colossus did not otherwise have a significant effect on the development of electronic computing. Britain, as well as the U.S., did adopt the post-ENIAC von Neumann architecture - of which I shall talk next - in the development of their computer industry.
One of the main sequels to ENIAC came with the definition (or invention) of what is referred to as the "von Neumann architecture", a logic organization that was to be significantly different from that of ENIAC, having essentially a single computing/processing unit. The idea may well have come from that theoretical concept known as a Turing Machine8. The concept of this new architecture may, I feel, have been von Neumann's foremost contribution, not, as is often claimed, the "stored program" concept, for the paternity of which he had a lot of competition anyway9.
Von Neumann had been only a consultant to the project, but he played a major role - in the summer of 1946 - in a course at the University of Pennsylvania where the experience gained from ENIAC, together with what was called "a proposal" for what the logic design, the architecture, of electronic computers should look like, was given to the participants10. This sequence of events was rightfully called the "second computer revolution" by A. Burks11, and it resulted, between 1946 and the early 1950's, in some 15 projects for the construction of electronic computers based on the new concept, with a panoply of names such as EDSAC, EDVAC, MANIAC, UNIVAC, NORC, SEAC and the like. All were originally intended for applications coming from the physical world, though some were later used for other things having to do with business and data processing.
7 the name of (computer) "bug" came from a live moth that had found its way inside Mark I, fouling one of the relays and ending up taped into the daily logbook by Grace Hopper, one of the pioneers of computer programming, who coined the name.
8 Turing, A., "On Computable Numbers, With an Application to the Entscheidungsproblem", Proceedings of the London Mathematical Society, Series 2, Volume 42, 1936; reprinted in M. Davis (ed.), The Undecidable, Hewlett, NY: Raven Press, 1965
9 though the stored program feature was essential to the von Neumann architecture.
10 There was a report to the Army containing the same information: Burks, A.W., H.H. Goldstine and J. von Neumann, "Preliminary Discussion of the Logical Design of an Electronic Computing Instrument", report prepared for the U.S. Army Ordnance Dept., 1946. Parts of this report have often been reprinted; see for instance G. Bell and A. Newell, "Computer Structures: Readings and Examples", 1971.
11 Burks, A., in "A History of Computing in the Twentieth Century", ed. by N. Metropolis, J. Howlett and Gian-Carlo Rota, 1980

One of them, Whirlwind (started in 1944 at the Massachusetts Institute of Technology under U.S. Navy funding), had been initially intended to be an analog airplane simulator for training pilots and testing automatic control instruments. But it was soon decided (in 1945, by Jay Forrester, who was the project leader and knew of the ENIAC computer that was by then becoming operational, though not made known to the public) to switch to digital, numerical technology. Whirlwind turned out to be one of the most important and most heavily funded projects, leaving as one of its indirect results integrated circuits, the technology that is found in today's ubiquitous personal computers.
And as for the question of impact and historical precedence, Presper Eckert, who was with John Mauchly a co-father of ENIAC, later wrote12: "Neither I nor John [Mauchly] had any knowledge of Babbage's ideas and had never heard of Lady Lovelace at that time". Which suggests that Babbage's contribution to the rise of computing also falls short of what it is often claimed to be.
what computers were used for at first
Reflecting on the period that goes between the early 1940's - when electronic computers were all but unheard of - and the late 1950's - when they had asserted their presence and were beginning to be seen all over the place - each step in their evolution can, with few exceptions, be associated with the intention to apply them to solving existing problems coming essentially from the natural sciences, material things that could at least in principle be seen, be felt. The range of applications that had started with ballistics and theoretical physics in the 1930's/40's continued to come from physics, engineering, the natural sciences. Taking advantage of a reasonably successful industry producing electromechanical business machines that had started in the second half of the 19th century, the application of electronics extended to other things, to business, data processing, all the way to video games and ATM machines, even playing chess, but this is not where the basic concepts had originated.
Electronic analog computers are also to be mentioned in this historical sketch. Significantly, their importance had to do not with computers or computing technology as such, but with the introduction of computer simulation to a variety of down-to-earth fields of industrial application and also, indirectly, with forcing a broader mathematical education upon the research and development personnel engaged in those fields. Electronic analog computers had appeared in the early 1950's,
intended for the solution of the same class of problems, the same differential equations as those
solved with Bush’s differential analyzers. They had a similar logic organization but with the calculus
operation of integration now implemented with electronics. Programming analog computers was
accomplished by interconnecting several electronic “integrators” (and other mathematical operators)
not with rotating shafts and gears but with electrical wires, with patch cords.
A general purpose electronic analog computer of the 1950's
12 in "A History of Computing in the Twentieth Century", N. Metropolis, J. Howlett and Gian-Carlo Rota (Eds), 1980
They stayed around for a good while. They were reasonably affordable and reliable. And, as late as the early 1960's, they were faster than digital computers for solving many of the differential equations of routine applications in industry, in applied science and engineering. More analog computers were built than Bush's machines, and they were easy to program. They had in the 1960's been essential to the U.S. space program, used among other things for the simulation of the dynamics of space vehicles, allowing "hands on" training for astronauts before they had to actually "fly" those vehicles in outer space. And they were also among the first to assist engineers in solving a variety of more mundane problems. I was attached at the time to a Computer Center in Brussels, equipped with what were the most powerful analog computers then available to European industry13. Other than having been involved with simulating the dynamics of airplanes and of nuclear power plants then on the drawing board, I remember working with engineers from Philips/Netherlands, finding ways to get rid of an instability in a rotating disk (that was to become the head of the Norelco shaver, the one I use today), and with engineers of Peugeot/France, finding parameters of the steering system of a car that would minimize vibrations in the steering wheel (the car ended up being the Peugeot 404). But analog computers were eventually to be displaced by the growing power and declining cost of digital computers, and they had all but disappeared by the late 1960's, early 1970's.
Digital computers had by then gone through several generations of development. The so-called third generation had come in the mid 1960's with machines using solid state electronics, intended for a variety of applications (as reflected by the name given by IBM to their "360 Series": they wanted to make it known that all azimuths were covered). But significantly, the first copies of the CDC 6000 computer, another member of that third generation, were specifically aimed at applications coming from physics, helping U.S. industry, in particular Westinghouse and General Electric, in the design of nuclear reactors intended for electric power plants.
Third generation computers of the mid 1960's had roughly 1000 times the speed of the first generation computers, those of the early 1950's.
the post 1960’s : riding on Moore’s law
One generally talks of Moore's law in the context of the increased technological performance of computers. The number of transistors on a computer chip, so says that empirical law formulated in 1965 by Gordon Moore (later to become a co-founder of INTEL), is to increase by a factor of two every eighteen months or so. And by way of consequence, so will computing power, the computing speed of individual machines. Seen from the other side of the fence, this increased computing power has resulted in an increase in the size, in the complexity, of problems that scientists and engineers can and do solve.
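As a back-of-the-envelope check (my own arithmetic, not a figure from the report quoted below): a doubling every eighteen months is a growth factor of $2^{t/1.5}$ over $t$ years, and over the roughly four decades from the mid 1960's to the early 2000's this gives $2^{40/1.5} \approx 10^{8}$, consistent with the growth from a few tens of transistors per chip to the order of 100 million quoted at the beginning of this paper.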
As we have seen, the beginnings of electronic computing started with a pressure brought
about by the demand to do large amounts of calculations coming from groups concerned with
specific problems, specific applications, calculations of the trajectory of artillery shells that had been
done until then laboriously with mechanical desk calculators by armies of human "computers", workers actually computing for a living, who ended up being replaced by ENIAC and its descendants.

13 The European Computer Center, established in Brussels in 1957 by Electronic Associates, Inc., the leading American manufacturer of analog computers/simulators at the time.
But the scientific community was quick to discover the range of new things that had become
accessible to quantitative investigation, now that machines could do calculations, solve equations,
solve problems orders of magnitude larger than anything that they could have done before, and do it
much faster. And financial support for research was becoming available, compliments of the
industrialization of the world in the second half of the 20th century. It was a perfect case of symbiosis,
with more powerful computers enabling new territory in research and development to be explored,
with parties benefiting in one way or other from the products of this research and development
willing to pay for the development of those computers.
The end of this process did not come when computers had become powerful enough to solve all the current problems to be solved by science and industry of the pre-computer days. What took
place instead was the burgeoning of new generations of researchers, scientists of all stripes that
literally found, created new fields of investigation, discovering new provinces of science that could
now be explored with this new tool in hand. This was to lead to new philosophies in the analysis of the phenomena of the world in which we live. One of the latest examples thereof is that of
biotechnology, a new industry that depends entirely on computers for its very existence, with cash
flows in the billions of dollars a year of which a significant portion finds its way back to the
computer industry (more on this later on).
Also important had been the development of programming assistance tools referred to as compilers, such as FORTRAN, which appeared in the late 1950's, making it possible for a much larger population of scientists to use computers. Before that, programming was done with tools not much different from machine language, the kind of thing prone to turn off all but those from the circle of computer professionals (a small circle indeed at the time). Compilers thus contributed to the democratization of computing.
A representative example of the new things that were being done was the (quantitative) study of the dynamics of systems in the social sciences, something entirely new, conceivable only with the assistance of computers. One of the pioneers was Jay Forrester14, who had also been one of the important actors in the development of the first generation/post-ENIAC computers at the Massachusetts Institute of Technology. He had the tools and was thus well placed to initiate their use in new areas. Another example, somewhat later, was that of Mathematical Ecology, a field that also became possible by the simple fact that one could now express the dynamics of interacting species with mathematical models whose solutions could be observed and recorded by way of computer simulation15.
And with the increase in computer power and in the number of users came new subdisciplines, representative of a whole service community that had entered the scene in the 1960's and 70's. This was illustrated by new names explicitly describing what those disciplinary groups sitting at the interface were doing, names such as computational physics, computational fluid dynamics, computational acoustics.
Personal computers appeared in the early 1980’s bringing computer power out of the narrow
landscape of universities and laboratories that were equipped with a "mainframe" computer. Personal computers (and the internet that extended their reach) made computers accessible to a much wider segment of the population, literally.

14 Books by Jay Forrester: "Industrial Dynamics", 1961; "Urban Dynamics", 1969; "World Dynamics", 1971
15 A representative illustration is "Stability and Complexity in Model Ecosystems" by Robert May, Princeton University Press, 1973
An interesting summary of that period of development in the (mostly large scale) application and impact of computers in the last quarter of the 20th century was given, part history and part prediction, in a 1991 report prepared by a special committee for presentation to the US Congress with the intent of justifying the request for funding of the High Performance Computing and Communications program (to the tune of close to 700 million U.S. dollars for the year 1992, a sum that was to exceed one billion dollars a year in subsequent years).
from "Grand Challenges: High Performance Computing and Communications", a report to the US Congress presented to support funding of large computer and communications development as part of the President's Fiscal Year 1992 Budget
And we may, in passing, note that the increase of problem size with time illustrated by the
above does quantitatively indeed follow Moore’s law. Quoting from the accompanying text :
"High performance computing has become a vital enabling force in the conduct of
science and engineering research over the past three decades, Computational science and
engineering has joined, and in some areas displaced. the traditional methods of theory and
experiment, For example. in the design of commercial aircraft, many engineering issues are
resolved through computer simulation rather than through costly wind tunnel experiments "
And we find, elsewhere in the same report:
“Although the U.S. is the world leader in most of the critical aspects of computing
technology, this lead is being challenged”.
which illustrates the degree to which national superiority in science and engineering was used as the motivation to have the government invest significant finances in developing ever more powerful computers (and also more powerful means of communication between computers, something that has become an integral part of the scene).
Meshing of the surface of the Earth used to solve numerically, with computers, the equations that describe the dynamics of the atmosphere. The refinement of the mesh over the years is a direct measure of the growing power of computers, allowing finer and finer meshes to be handled and better and better prediction power to be obtained (from the NCAR annual report, 1982).
Discretization of the exterior surface of an airplane used as part of the numerical calculation of the exterior air flow (creating a "numerical wind tunnel"). This had, with the increased speed and memory capacity of computers, become a feasible proposition in the 1980's.
Not to mention the top computers developed for military purposes (such as the ASCI machines used among other things for the simulation of nuclear explosions, motivated by the ban on conducting experimental nuclear explosions in the atmosphere), two of the largest "supercomputers" that are either operational or soon to be completed in 2003 deserve mention. The first is Japan's NEC Earth Simulator16, which became operational in 2002 and is intended to simulate and predict the Earth's climate.
16 see SIAM NEWS, 2003
The second, called "Blue Gene" by IBM, is conceived with the purpose of simulating the folding of proteins. Much of microbiology has, with the help of computers, become an exact science, something it had not been thus far; representative in that respect is the fact that the Human Genome project was started in 1990, when computer power had reached into the billions of operations per second. The folding of proteins is (at the nano scale) a classical-dynamics multi-body problem described by the laws of physics. But simulating it is a task that, according to a recent article, "…makes simulating a nuclear explosion or the collision of two galaxies look like a picnic in comparison"17. The number of calculations of forces between individual atoms is enormous, which is what makes the computation so intensive. Blue Gene will cost IBM on the order of 100 million dollars and will have the combined power of 30,000 desktop PC's, but the size of the market for computers needed for this and related applications in microbiology is estimated to range into the tens of billions of dollars.
17 The Economist, December 9, 2000
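To see why the force calculations dominate (illustrative arithmetic of my own, not figures from the article): with $N$ atoms there are $N(N-1)/2$ atom pairs, so a solvated protein system of, say, $10^{5}$ atoms involves on the order of $5\times 10^{9}$ pairwise force evaluations per time step, and the femtosecond-scale time steps needed to follow folding number in the many billions; multiplied together, the operation counts reach far beyond what desktop machines can deliver in any reasonable time.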
Figuring out the shape of proteins turns out to be an enormously demanding computational problem. It is one for which there is a large demand coming from a part of microbiology - that related to genetics - that would be all but impossible to exist without the assistance of supercomputers, those that appeared in the 1990's.
Moving outside of the physical sciences, the beginning of Artificial Intelligence (AI) in the late 1950's, and its subsequent growth, gives another illustration of the increase with time not only of what has been done, but also of what is feasible - another illustration of Moore's law supported by a range of applications. As the name suggests, AI aims at reproducing human intelligence with computers. The following chart, pulled together by Hans Moravec at Carnegie Mellon University, gives, among other things, a comparison of the data processing power of generations of electronic computers with that of the brains of living species18.
18 Hans Moravec, "When will computer hardware match the human brain?", Journal of Evolution and Technology, 1998
I feel compelled to note that data processing power and intelligence are not the same thing. That IBM's Deep Blue computer beat the world chess champion Garry Kasparov in 1997 was just a case of brute force. The machine consisted of 256 chips orchestrated by a 32-processor mini-supercomputer that could examine 200 million chess positions a second, well beyond what human chess players are capable of. But intelligence it was not.
That the data processing power of computers will - as predicted by Moore’s law - approach
that of the human brain in the next few decades does not necessarily mean that the range of
applications they will be capable of will de facto be the same as those accessible to humans, those
things we refer to as intelligent and conscious behavior. Much remains to be seen.
***********************************************************************
LEFTOVERS
But this is not the way things evolved. Computers have been built for specific applications,
their development was financed by
The history of ideas, the evolution of technological innovations, is often presented as a chronological succession of facts, of novelties, with little analysis of their interaction, of their impact. The end result may well give a plausible picture, but in the absence of having taken cause and effect into account, this picture is bound to be inaccurate, incomplete at best.
Historical reconstructions of the rise of electronic computing over the past century are no exception. They all too often display a sequence of inventions, original concepts, new devices, interspersed with occasional names of known people, but little documentation of the place (if any) those devices, those people have taken in this evolution. A story making sense should include numbers (yes, numbers) telling how many copies of a particular new device were built, quantifying the volume and the importance of the tasks to which it was applied, measuring its impact. A story making sense should include an evaluation of the importance of what the intended use had been, and of what that device was actually used for. That is, in the final analysis, looking at applications as the leading thread of the story. Which is what I shall attempt in the following.
--------------------

And when computers making use of those concepts eventually saw the light over half a century later, those who built them (Vannevar Bush and H. Hazen) went on record to say that they had not been influenced by, nor even been aware of, this precedent (more on this below).
Which brings me to comment on that all too often overlooked aspect of the evolution of the tools of science. It is not just because someone had the idea, the concept, for a new device that this device will be built (in numbers, not just as a prototype) or that it will have an impact on subsequent developments. There must be a demand, there must be potential applications, and there must be finances to back up those applications (today's finances for technological innovations come either from short term commercial interests or from governmental interests motivated by longer term political or economic objectives). A true history of computers should - with few exceptions - be given in terms of the succession of applications that motivated their development and for which they were used, leaving out (or mentioning only as commentary) ideas and original concepts in technology that did not amount to more than published descriptions and possibly one or two prototypes.