Early British Computer Development and the Emergence of Americanisation: 1945–63
Robert Reid, University of Glasgow
A tempting and fruitful method of studying the history of computers is to provide a rich and textured story, detailing the sequential and evolutionary development of individual projects set against an implicit historical framework. This traditional approach, however, yields an internalist and less engaging historical account. This research project is set apart from that approach by taking into account, and emphasising the importance of, the ‘culture of computing’ in the UK: it sets computing in its historical context and understands the development of the technology and its concomitant industry as a process of construction in which the structure and ideology of innovation are moulded by existing culture. Within this structure, or surface of emergence, the research styles and ideologies of these early actors in computer development, and the material agency of the codified technological structure that they created, pushed the British industry in a particular direction, unique to the British experience. More significantly, this distinct approach to computing history demands that the distinction between the experience of innovation in the UK and in the US be made explicit.
In making this distinction I will make reference to Foucault’s ‘surface of emergence’ as a methodological construct for understanding the form of computer innovation at work in this period.1 One can understand the structure of human agency in developing technology to be based on ‘performative human agency’, in which the intentionality of that agency is moulded by existing culture.2 In other words, the culture in which humans conduct themselves is the ‘surface of emergence’ for their agency. Performative human agency is Pickering’s understanding of how innovation proceeds; that is, science is the ‘doings’ of human agents in a world of material agents. The intentionality of human agency is defined by the existing culture, and material agency takes its form through the temporally emergent character of innovation.
By understanding the social groups that influenced both the innovation of computing and the culture in which that innovation took place, one can arrive at an understanding of the ‘performative human agency’ of computing, set against the extant culture of research configuring the human agency of these actors. These actors fall into a number of broad groups, including scientists, users, the computer industry and government. In looking at these actors, one can plot the divergence between the UK and the US in terms of their cultures of computing. Ultimately this led to the emergence of Americanisation as a rhetorical determinant of computer development within the UK. The impact of Americanisation is seen to go beyond the mere use of American ideas and models of development: America was used as a rhetorical construct by British actors to push the development of the UK computer industry towards the single, rationalised industry that emerged in the sixties. To put the paper in the context of the overall research project, which explores the process of Americanisation as an influence on UK computer development throughout the post-war period, this paper will examine the genesis of the distinct British culture of computing. By considering this cultural aspect of computing, a more detailed and nuanced account of the history of computing in the UK can be developed than is possible through a more traditional approach.
A range of sources were used in order to arrive at a detailed understanding of the innovative process and the existing culture in which that innovation proceeded. Primary materials from the company archives of the key computer companies operating in this period, held in the National Archive for the History of Computing, were used to develop an understanding of the process of innovation within the computer industry itself. A wide range of material in the National Archives was consulted to provide the governmental aspect throughout the long run of the period in question. This included not only documents specifically related to the computer industry but, more widely, sources that point to the ideological framework of science policy and its development. The personal recollections of key actors are used to explore these themes fully, alongside academic and trade journal research of the period. Through these sources one is able to develop an understanding of the surface of emergence from which innovation in the computer industry sprang and, as a result, to understand the culture that underpins human agency and how it is modified by rhetorical constructions such as Americanisation, which are perceived rather than actual.
How, then, are we to develop a particular understanding of the early influences that led to this process of emergence of a British culture of computing, distinct from the US? It is in the field of computer memory that one can tease out the nature of the distinction. Memory is the principal tool for distinguishing between the tabulating machines of the pre-computer world and the stored-programme computer that can rightfully claim the title of ‘computer’ in the modern conceptualisation. It is clear that the innovation of memory is intricately bound up with the question ‘who first conceptualised the computer?’ This may seem a rather impotent tool by which to
1 Foucault, M., The Archaeology of Knowledge, Tavistock Publications; London; 1972.
2 Pickering, A., The Mangle of Practice: Time, Agency, and Science, University of Chicago Press; Chicago; 1995, pp. 20-21.
question the development of computing. ‘Who first?’ questions tend to yield little in the way of useful history.
However, in considering ‘who first conceptualised the computer?’ in the interconnected world of the US and
the UK, the question proves surprisingly potent at distinguishing significant differences between these two
closely related programmes, and provides a useful outline of two distinct cultures of computing: one in the US focused around John von Neumann, and one in the UK around Alan Turing. The key feature of the difference between them is memory.
Memory indexing may seem a rather eccentric place to start when considering the heroic innovations of computing; however, it is within this rather lowly-sounding field that the genesis of two cultures of computing is to be found, cultures which in turn influenced the development of two distinct industries on either side of the Atlantic. Two reports, both of which detailed the direction of future computer developments after the War, were issued in 1945-46: the ACE report in the UK and the EDVAC report in the US.3 Both sprang from research into the use of computing for defence applications during the war. The EDVAC was based on work by the ENIAC team, consisting of Eckert and Mauchly together with von Neumann, who had built computers for the computation of firing tables for the Ballistics Research Laboratory. The ACE was based on the work carried out at Bletchley Park by Max Newman, Tommy Flowers and Alan Turing, amongst others, in decoding German encryption methods. The ACE report is distinctive in that its intellectual framework differs markedly from that of the EDVAC report.
Two significant differences can be discerned. The first, rather general, observation is the clear difference in styles. Alan Turing had developed a theory of computing in 1936 in his paper ‘On Computable Numbers, with an Application to the Entscheidungsproblem’.4 Though his work tended towards conceptualising machine intelligence rather than the specifics of computer design, it provided a conceptual basis for the mathematical rules of computing. In the thirties and throughout the War, the effects of this early grounding in ‘universal machines’ provided a strong foundation for a number of automated decoding machines, including those needed for decoding the Fish and, more famously, the Enigma signals. Through the work at Bletchley Park, the stored-programme principle developed rapidly within this closely-knit UK culture of electrical engineers. The ACE report was a detailed and systematic account of the requirements for building a machine secure in its conceptualisation of the stored-programme principle, and it even contained costings! The EDVAC report was clearly unfinished and unreferenced when it was made public; indeed, it became known as ‘The First Draft’. Recently, Martin Davis has suggested that Turing influenced von Neumann far more than von Neumann influenced Turing.5 Indeed, von Neumann, humiliated by an earlier failure in mathematical logic, did not mention Turing’s work on computable numbers when recommending his application for a PhD at Princeton in 1936-1938. Von Neumann was focused on other areas of mathematics that he found more compelling in 1936. It seems more likely that Turing’s influence reignited von Neumann’s interest in the concept of mechanical devices for computation. Indeed, the celebrated von Neumann architecture appears to have been born largely from von Neumann regurgitating Turing, with a little help from Eckert and Mauchly and their practical experience. The ACE report comes, in general, from a more mature intellectual culture of computing and became recognised as unique: a novel and far-sighted approach to computing.6
This maturity had an impact on the approach to memory evident in the two documents, and it highlights the second difference between them: their treatment of memory architecture. It is here that a more concrete distinction can be made. Strangely, it is not only in Turing’s ACE project at the NPL that this second distinction can be made, but also in a related project at the University of Manchester under his long-term collaborator Max Newman. Newman had been a maverick mathematician with Turing while they were both at Cambridge and had worked closely with him on wartime projects at Bletchley Park in the early 1940s. The culmination of the work of Newman, who supervised machine development at Bletchley Park, was the Colossus, which struggled towards limited programmability and was a stepping stone to the universal machine envisioned by Turing in his 1936 paper.7 Following the War, Newman, under the auspices of Patrick Blackett, moved to Manchester, together with his Royal Society grant to study computable numbers in the same field as Turing. Blackett and Newman were instrumental in securing the move of FC Williams, the inventor of the cathode-ray storage tube, to Manchester after the war to meet the engineering requirements of such a task. The ‘Williams tube’, initially developed independently of computing, became the standard device for storing data in early computers. The aim was to establish a rival to Turing’s work at the NPL and to provide the necessary practical applications of Newman’s grant.8 The team at Manchester quickly surpassed the somewhat inept management of the ACE effort at the NPL, and Turing came to Manchester to work on the Manchester Mark 1. However, the extent to which Williams concerned himself with machine architecture is
3 ACE and EDVAC reference.
4 Turing, A.M., ‘On Computable Numbers, with an Application to the Entscheidungsproblem’, Proceedings of the London Mathematical Society, Series 2, vol. 42 (1936-37), pp. 230-265.
5 Davis, M., The Universal Computer: The Road from Leibniz to Turing, W.W. Norton & Co.; New York; 2000, pp. 166-170.
6 Lavington, S., Early British Computers, MUP; Manchester; 1980, p. 46, and Davis (2000), p. 188.
7 Agar, J., Turing and the Universal Machine, Icon Books; Cambridge; 2001, p. 111.
8 Ibid., p. 349.
uncertain; Williams himself claimed that Newman described the stored-programme concept to him in ‘all of half an hour’.9 Newman did, however, have an influence on the work, and his sizable Royal Society grant was equally important initially. Williams appears to suggest that the overall architecture of the machine was an assumption and that his job was simply to get the thing to work.10 Indeed, the reason it was an assumed architecture, rather than the explicit von Neumann architecture, was that Turing’s stored-programme principle and the need for workable memory were understood in the UK well in advance of the US, as was their significance as a basis for future computing. UK actors clearly understood this and were prepared to deviate from the American line of innovation. Indeed, it was the existing culture of computing, and an understanding of the future shape of innovation flowing from a more developed ‘philosophical’ understanding of the computer amongst wartime researchers such as Blackett and Newman, that led them to recognise the need for someone like Williams and his work on cathode-ray tubes (CRT) as a memory device for computing.
The key to understanding this is to look at the mercury delay-line memory used in machines like the EDVAC and UNIVAC and the CRT storage of the Mark 1. Superficially they both do the same thing, storing data to be operated on by the processor. However, delay-line storage mimics the early conception of machine memory as suggested by Turing in 1936: that is, a sequential memory that is accessed in a consistent and linear order. This is the archetypal Turing tape that he discusses at length in ‘On Computable Numbers’. CRT storage is, conceptually, a more advanced form of memory and a clear improvement on this early conception: the memory can be accessed randomly and out of sequence, as and when the machine requires the data. The advantages of this were clearly understood by the UK actors at a time when their US counterparts had yet to grasp the importance of the distinction. The EDVAC report’s discussion of CRT storage suggests that the indexing of random-access memory would be impossible for the foreseeable future. Both Turing at the NPL and Flowers at the Post Office, who in 1946 were working on their own computing engines, recognised that CRT storage, of which Williams had a unique working knowledge, would be a distinct advantage over the delay lines they intended to use in their machines. This knowledge flowed from the strong ‘philosophical’ understanding of the stored-programme principle and all that it entails. As a result, the necessary and wide-ranging network of actors like Williams, drawn from fields such as radar and radio, could be gathered centrally within a university infrastructure, capture government funding, focus on the key difficulties of computer development, and produce workable memory solutions earlier than their US counterparts.
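By way of illustration only (nothing below is drawn from the ACE or EDVAC reports; the class names and figures are hypothetical), the contrast between the two conceptions of memory can be sketched in modern code: a delay-line-style store must wait for a word to circulate round to the read point, whereas a Williams-tube-style store can address any word immediately.

```python
# A minimal, hypothetical sketch of sequential versus random-access storage.

class DelayLineStore:
    """Sequential store: words circulate, and a word can only be read
    when it comes round, as on Turing's 1936 'tape'."""

    def __init__(self, words):
        self.words = list(words)
        self.position = 0          # word currently emerging from the line
        self.cycles_waited = 0     # crude measure of access cost

    def read(self, address):
        # Step through the circulating words until the wanted one appears.
        while self.position != address:
            self.position = (self.position + 1) % len(self.words)
            self.cycles_waited += 1
        return self.words[address]


class RandomAccessStore:
    """Random-access store: any word can be read immediately, as with the
    addressable spots on the face of a Williams tube."""

    def __init__(self, words):
        self.words = list(words)
        self.cycles_waited = 0     # always zero: no waiting for circulation

    def read(self, address):
        return self.words[address]


if __name__ == "__main__":
    data = list(range(32))         # 32 hypothetical stored words
    delay = DelayLineStore(data)
    crt = RandomAccessStore(data)

    # Reading addresses out of sequence is costly for the delay line
    # but free for the random-access store.
    for addr in (30, 2, 17):
        delay.read(addr)
        crt.read(addr)

    print("delay-line cycles waited:", delay.cycles_waited)  # non-zero
    print("random-access cycles waited:", crt.cycles_waited)  # zero
```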
The university environment from which these early UK actors came, the roots of their approach to computing in mathematics research at Cambridge and Manchester, and the central place of these actors in later computing advances all point to a strong university culture in computing, extant from the pre-war era. If the infrastructure of a university is the key to successful innovation, as Florida and Cohen contend, then it was the infrastructure of actors collected by the Edison-like network builder Blackett around universities and government funding that afforded the UK an early lead in computing and maintained it.11 However, throughout the fifties LEO, BTM and the other commercial firms remained outwith the NRDC (which became the principal government actor in commercialising computer technology) and the Ferranti/Manchester axis of computing that built up around Blackett, Newman, Turing and Williams at Manchester. They remained cut off from these developments for the best part of a decade, conducting, at best, ‘ad hoc’ development, as Campbell-Kelly calls it.
This is recognisably different from the experience of their private US counterparts, who received high levels of funding from the US government. In the US, the culture of computing was less focused on the mathematical and intellectual pursuit of computing than it was in the UK, and the opportunity for network builders such as Blackett, Newman and Turing to create a strong, centralised network of experts within the university structure that had conceived of the computer was smaller. However, this less university-led culture allowed the focus of research to move into the private sector, where the US culture of computing began, earlier in its development, to form around more commercial applications. This in turn precipitated competition amongst the data-processing companies in the US and the emergence of IBM. The British industry was unable to develop a similar culture, as computing remained an intellectual and philosophical pursuit that failed to integrate with private concerns. It became the goal of UK industrial reorganisation in the sixties to reverse this culture and, through the use of America, with its now significantly distinct culture of computing, as a rhetorical construct, to direct the UK industry along a path quite distinct from its origins.
9 Ibid., p. 390.
10 NAHC/MUC/Series 1, ‘Frederic Calland Williams 1911-1977’, T. Kilburn & L.S. Piggott, reprint from Biographical Memoirs of Fellows of the Royal Society 24 (1978), A2, p. 595.
11 Florida, R. & Cohen, W.M., ‘Engine or Infrastructure? The University Role in Economic Development’ in Branscomb, L.M., Kodama, F. & Florida, R. (eds), Industrializing Knowledge, MIT Press; Cambridge, MA; 1999, p. 606.