IF NELSON AND WINTER ARE ONLY HALF RIGHT
ABOUT TACIT KNOWLEDGE, WHICH HALF? A REPLY TO
DAVID, FORAY AND COWAN
PAUL NIGHTINGALE Email: p.nightingale@sussex.ac.uk (CoPS Innovation Centre, SPRU, University of Sussex,
Falmer, UK)
‘The search for certainty reveals itself as a fear of the truth’ Hegel
Abstract
The paper explores how knowledge is conceptualised in the science policy literature by looking at how the theories
of Polanyi and Simon relate to one another. This is used to critique recent theories that propose that information
technologies allow the codification of tacit knowledge. This paper analyses what ‘codification’ means, the
relationship between codified and tacit knowledge and how the more extreme versions of codification theory
replace tacit knowledge with a ‘non-manifest’ codebook. Problems with the idea of codification are traced back to
Simon’s ‘programme level’ of explanation and how it relates causes to effects.
Introduction
This paper explores some potential tensions in the literature that draws on Nelson and Winter’s attempt to bring
together two ways of thinking about knowledge - the tacit knowledge tradition and the more objectivist information
processing approach.1
These differences can be related to wider differences in how both knowledge and firms are conceptualised. In one
approach, that links Nelson and Winter’s book back to Simon and Arrow, firms and knowledge are understood in
terms of abstract information processing. An alternative more empirical approach looks at the hardware involved in
problem solving - highlighting the tacit, physically embodied and socially embedded nature of knowledge. In this
approach Nelson and Winter’s book forms part of an appreciative theory tradition that links to Polanyi and
Schumpeter.
This more appreciative literature seems happy with mathematical models of real processes but is largely
unconvinced by abstract explanations, tending to regard questions about knowledge as empirical ones that draw
on real-world social, cultural, organisational and technical features. As a consequence, it is somewhat unsure how
Polanyi's phenomenology integrates with its hardware explanations, and tends to treat tacit knowledge as a not
particularly well understood, but empirically important, residual. The abstract approach (probably quite rightly)
regards much of Polanyi's talk about tacit knowledge as a bit too much like mystification. It generally doesn't go
into any empirical detail, as it regards the use of knowledge as really only problem-solving processes that are
essentially abstract. As a consequence, the biological and psychological features of problem-solving in people,
and the cultural, organisational and technical features of problem-solving in firms, that at first glance might appear
important, can be ignored as illusory, and not part of the real, essential features.
As Mary Midgley highlights, when one looks at this suggestion one needs to pay close attention to the language –
words and phrases like in reality, illusory, appear, and essential features invoke not physics but ontological
metaphysics. The tensions that have followed Nelson and Winter's attempt to bring these two traditions together
are consequences of the fact that Polanyi and Simon are incompatible, because what they regard as 'real', and
consequently the nature of their explanations, is fundamentally different.2 The aim of this paper is to map out those
relationships and tensions to see how the tacit level explanation, the empirical hardware level explanation, and the
abstract information processing level explanation fit together.
These theoretical tensions can generally be ignored as most people take a pragmatic position that tacit knowledge
is a useful but limited concept that explains some of the failings of the information processing perspective, which in
turn is a limited but useful way of understanding the world. But every now and then the tensions flare up – and
since the debate is about what can be regarded as evidence, rather than evidence itself, the debates tend to be
rather polemical.3

1 Tacit knowledge is a category of unconscious neuro-physiological causation that provides the basis and context to actions
and conscious mental states. Objectivism ‘stems from the notion that objects in the world come in fixed categories, that
things have essential descriptions, that concepts and language rely on rules that acquire meaning by formal assignment to
fixed world categories, and that the mind operates through what are called mental representations [that can be described
algorithmically].’ (Edelman 1996:228).
2 Their position at extremes is rhetorical and our interest is in the overlap.
Recently these tensions have resurfaced in the science policy literature in what has become known as the
codification debate. The proponents of codification argue that IT allows tacit knowledge to be codified into codified
knowledge because the boundary between them is determined by costs and benefits (David and Dasgupta 1994,
Foray and Cowan 1995a & b). They argue that the concept of codified knowledge is useful and can provide insight
into the transfer of knowledge and simulation use (Cowan and Foray 1995). More extreme versions of the
codification position argue that the concept of tacit knowledge has outlived its usefulness and can be replaced by
the concept of non-manifest code-books (Cowan et al 2000). Most people would accept that the concept of tacit
knowledge needs a critical overhaul, and in some quarters has been imbued with almost magical qualities, but
might question whether it can be dispensed with just yet.4
The next section examines the codification position of David, Foray, Dasgupta and Cowan, highlighting a number of
contentious points. It is followed by a section that explores the different levels of explanation that are used to
understand knowledge in individuals – the hardware level (the neurological basis of psychology), the program level
(Simon’s abstract algorithms), and the knowledge level (Polanyi’s phenomenology) – to see how they fit together.
The following section then uses an evolutionary explanation to show how Polanyi’s phenomenology level and the
biological, hardware level fit together. It provides an explanation of the relationship between tacit knowledge and
articulated words and symbols (rather than codified knowledge) to show how they are complements rather than
alternatives. This is used to suggest that the extreme codification position is incompatible with Darwinian theory
because nature reuses and retains earlier adaptations such as non-conscious knowledge to support more recent
evolutionary adaptations like speech and conscious planning. The discussion and conclusion look at the concept
of the codebook in more detail and criticise it. The paper ends by looking at the types of explanations that can be
derived from Nelson and Winter’s theory, and argues that we don’t need to think they might only be half right about
tacit knowledge.
Part 2. The Codification Debate
The codification debate is about the nature of knowledge and its relationship to technical change. Everyone agrees
that writing things down is useful, that people can learn from books, and that IT has increased the amount of
electronic information around. Consequently, the concept of codification must be more than this. Unfortunately,
the meaning of codification has changed over time making it difficult to pin down. The reason for the slippery
nature of the concept will be analysed in the discussion. For now it is enough to note that, as would be expected in
a debate, positions have shifted and meanings have changed.
The traditional view of knowledge that the proponents of codification differentiate themselves from could be
represented by Alic (1993) who argued that as science policy shifted from military to commercial aims in the 1980s
knowledge transfer became more important. He argued that ‘much shop floor learning has this [tacit] character:
people find better ways of doing their jobs without ever realising quite how… some of this they are aware of but
much remains unconscious and unexamined. For instance it takes 8 or 10 years to become highly skilled in the
design of plastic parts to be made by injection moulding.’ (1993:372). This tacit knowledge makes technology
difficult to transfer, and Alic proposes that government policy should recognise this as part of its commitment to
open science by ‘helping to codify and interpret knowledge created through government funding’ (1993:382). Thus
for Alic, and the traditional view as a whole, codified and tacit knowledge are complements, and technology transfer
needs to take both into account.
Dasgupta and David (1994) disagree and argue that tacit and codified knowledge are to an extent ‘substitutable
inputs’ rather than complements and that IT is reducing the costs of codification - leading to the increased
codification of tacit knowledge. They note: 'Insofar as codified and tacit knowledge are substitutable inputs... the
relative proportion in which they are used is likely to reflect their relative access and transmission costs to the
users... Similarly, differences in the extent to which knowledge... gets codified... as information rather than retained
in tacit form will reflect the reward structures within which the researchers are working, as well as the costs of
codification' (1994:494). Consequently, the 'falling costs of information transmittal deriving in large part from
computer and telecommunications advances, have lately been encouraging a general social presumption favouring
more circulation of timely information and a reduced degree of tacitness' (1994:502, emphasis added). This was
the first meaning of codification – namely, that IT allows the codification of tacit knowledge and makes its
codification cheaper, which reduces the need for tacit knowledge and allows its transfer.

3 In the 1950s Polanyi used tacit knowledge to attack J. D. Bernal and defend open science. More recently, the
mathematician G. C. Rota attacked the excessive reliance on axiomatics by pointing out that objectivists miss the important
distinction between truth and truth games and noted ‘The snobbish symbol dropping one finds nowadays… raises eyebrows
among mathematicians. It is as if you were at the grocery store and you watched someone trying to pay his bill with
Monopoly money.’ (1990). Similarly, John Searle has ridiculed the proponents of information processing approaches to
knowledge and the Nobel prize-winning neurologist Gerald Edelman has referred to objectivism as ‘an intellectual swindle’
(1992:229).
4 See for example, Langlois (2001)
Similarly, Foray and Cowan (1995) and David and Foray (1995) argued that not enough attention has been paid
to the distribution of knowledge and that the introduction of information technology has led to the ‘codification of
knowledge’. By this they mean something more than the traditional notion of writing things down. They define the
codification of knowledge as ‘the process of conversion of knowledge into messages which can then be processed
as information’ (Foray and Cowan 1997:3). This process of ‘codification involves reducing knowledge to
information so that it can be transmitted, verified, sorted and reproduced at less cost’ (Foray and Cowan 1995).
Cowan and Foray note that the digital revolution has increased the extent to which tacit knowledge gets embedded
in machines (1995). In this work, the second meaning of codification becomes apparent, codification is a term that
is applied to three different causal processes – creating messages, creating models and creating languages. The
effect of each of these different causes is to increase knowledge transfer and reduce the tacit requirements.
Although they explicitly note that tacit knowledge can never be completely codified, this already implies a
particular view of the relationship between the two.
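The cost-and-benefit logic running through these claims can be made explicit. What follows is a minimal sketch in standard notation – my own illustrative reading, not a formalisation that the proponents of codification themselves offer. Let $c$ be the (largely fixed) cost of codifying a given piece of knowledge, $m$ the marginal cost of transmitting the resulting message, and $t$ the per-recipient cost of transferring the same knowledge tacitly through training and personal contact. For $n$ recipients, the knowledge gets codified whenever

$$c + n\,m < n\,t .$$

Falling IT costs reduce $m$ (and arguably $c$), so the inequality holds more often and the ‘boundary’ between tacit and codified knowledge shifts towards codification. Stating the argument this way also makes the later critique easy to locate: it only goes through if $c$ is finite, and if what is transmitted for cost $m$ really is the knowledge rather than mere information (see footnote 14 and Part 3).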
Given that the proponents of codification are attempting to over-turn widely held views it is surprising that they
provide almost no non-anecdotal empirical support for their suggestions. Empirical evidence still seems to suggest
that tacit knowledge and information are complements rather than alternatives and that the notion that tacit
knowledge gets codified by information technology is more complicated than the proponents of codification suggest
(see for example, Nightingale 1997, 1998, 2000a, 2000b).5 In particular, the empirical evidence suggests that the
introduction of IT-based knowledge tools removed some jobs, while allowing far more complex, and therefore more
skilled work in others. Moreover, their use was sector specific, and fundamentally different for generating new
knowledge than it was for transferring old (ibid.).
Rather than addressing this empirical anomaly, Cowan, David and Foray (2000) developed a theoretical framework
for launching a polemical attack on the tacit knowledge position.6 They argue that tacit knowledge has become a
‘loaded buzzword’ (2000:212). Moreover, ‘the first-order result of all this [discussion of tacit knowledge] would
seem to have been the creation of a considerable amount of semantic and taxonomic confusion…. [which] might
be both expected and tolerable as a transient phase in any novel conceptual development. Unfortunately, one
cannot afford to be so sanguine…such claims in many instances are neither analytically nor empirically warranted.’
(2000:213). Their ‘scientific skepticism’ causes them to ‘question whether the economic functioning and attributes
of tacit and codified knowledge are well understood by those who would invoke those matters in the context of
current innovation policy debates’ (2000:224).
In response to this misuse of tacit knowledge they propose to ‘develop a more coherent re-conceptualisation’
(2000:213). This is needed because the concept of tacitness ‘now obscures more than it clarifies. Among the
matters that thereby have been hidden are some serious analytical and empirical flaws in the newly emerging
critique of the old economics of R&D. Equally serious flaws in the novel rationale that has recently been developed
for continuing public support of R&D …’ (2000:213-4).
They do this with a rather extreme objectivist theory based on the concept of the ‘code-book’. While they don’t go
as far as signing up to what Cohendet and Steinmueller (2000) call the ‘strong codification’ position on knowledge
which ‘strictly interpreted, implies the absence of any meaningful distinction between information and knowledge.
… all the cognitive and behavioural capabilities of whatever human or non-human agent is being described must
have been reduced to ‘code’, that is, to structured data and the necessary instructions for its processing.’ (Cowan
et al 2000:217), they do come pretty close.
While they note that humans and machines differ in that humans can form new categories, they argue that ‘it is no
less important to notice that the capacities of humans to ‘decode’, interpret, assimilate and find novel applications for
particular items of information entail the use of still other items of information [which may form the cognitive
context, but] … there is nothing in this observation that would imply a lack of awareness … or an inability to
transmit it to others.’ (2000:17). In doing so they seem to be ignoring the considerable empirical literature on the
geographical localisation of scientific and technical knowledge – which may or may not be explained by its tacit
nature – and collapse the traditionally important distinction between knowledge and information.

5 Arora and Gambardella (1994) came up with almost the opposite conclusions, suggesting either that I am wrong, they are
wrong, we are both wrong, or we need a taxonomy.
6 These attacks are academic rather than personal. I have always found Profs. David, Foray and Cowan extremely pleasant
and helpful and I have a great respect for their work.
They suggest that with regard to knowledge traditionally regarded as tacit ‘if one can describe behaviour in terms of
‘rule conformity’, then it is clear that the underlying knowledge is codifiable – and indeed may have previously been
codified’ (2000:220). Note the switch from an epistemological claim that behaviour “can be described” as rule
following to a ‘clear’ ontological claim that the underlying knowledge “is” codifiable. I am not convinced that this
conceptual switch is that ‘clear’, and will argue later that this switch is the cause of their problems. But for
now it is enough to note that to put forward their case they need to introduce an extremely controversial new
concept – the codebook.
The concept of the codebook
The codification theory ‘makes extensive use of the concept of a codebook’ (2000:215). ‘Knowledge that is
recorded in some codebook serves inter alia as a storage depository, as a reference point and possibly as an
authority. But information written in a code can only perform those functions when people are able to interpret the
code;… Successfully reading the code in this last sense may involve prior acquisition of considerable specialised
knowledge (quite possibly including knowledge not written down anywhere)’ (2000:225).7 They thereby seem to
equate know-how with know-that.
The codebook is then defined to ‘refer to what might be considered a dictionary that agents use to understand
written documents and to apply it also to cover the documents themselves. This implies… First, codifying a piece
of knowledge adds content to the codebook. Second, codifying a piece of knowledge draws on pre-existing
contents of the codebook. This creates a self-referential situation,[8] which can be severe… initially there is no
codebook, either in the sense of a book of documents or in the sense of a dictionary. Thus initial codification
activity involves creating the specialized dictionary. Models must be developed, as must a vocabulary with which to
express those models.’9 (2000)
It is unclear whether Cowan, David and Foray think that all knowledge or just most knowledge can be codified. But
they take the central issue to be the extent of codification, implying, as in their previous work, that more codification
is in general a good thing. They divide knowledge up into unarticulable (which they set aside as ‘not very interesting
for the social sciences’ without explaining why), codified – in which case a codebook ‘clearly exists, since this is
implicit in knowledge being or having been codified’, and unarticulated.10 However, when faced with the obvious
point that people don’t carry big books around with them, they invoke an even more contentious notion – the
displaced codebook.
In the discussion of the displaced codebook, attention must be paid to the metaphysical language. They note that
‘To the outside observer, this group [with the displaced codebook] appears to be using a large amount of tacit
knowledge in its normal operations. A ‘displaced codebook’ implies that a codified body of common knowledge is
present, but not manifestly so.’ (2000:232, their emphasis). Thus within this category, what appears to be tacit
knowledge is in reality the non-manifest presence of a ‘displaced codebook’. Thus, their disagreement with the
proponents of tacit knowledge is metaphysical and an argument about what is real.11

7 They continue, ‘As a rule there is no reason to presuppose that all the people in the world possess the knowledge needed
to interpret the codes properly. This means that what is codified for one person or group may be tacit for another and an utter
impenetrable mystery for a third. Thus context – temporal, spatial, cultural and social – becomes an important consideration
in any discussion of codified knowledge.’ (2000:225)
8 Cowan et al are technically incorrect here. What they call self-referential is instead an infinite regress. This sentence is in
English and is an example of a self-referential sentence because it refers to itself, but this doesn’t stop it being true (it is in
English and is referring to itself). With infinite regressions, explanations include the thing being questioned and are therefore
false.
9 They continue: ‘When models and a language have been developed, documents can be written. Clearly, early in the life of
a discipline or technology, standardisation of the language (and of the models) will be an important part of the collective
activity of codification. When this ‘dictionary’ aspect of the codebook becomes large enough to stabilize the ‘language’, the
‘document’ aspect can grow rapidly… But new documents will inevitably introduce new concepts, notions and terminology, so
that ‘stabilization’ must not be interpreted to imply a complete cessation of dictionary-building.’
10 This last category is divided further into situations where there once was a codebook, but that codebook has been
displaced, and where there has never been a codebook but it is ‘technically possible to produce one’ (2000:232).
Consequently, they claim that the tacit knowledge found by Collins, Latour etc., is not tacit at all. Instead it could
simply involve the ‘mental ‘repackaging’ of formal algorithms and other codified material for more efficient retrieval
and frequent applications, including those involved in the recombinant creation of new knowledge – [which] rests
upon the pre-existing establishment of a well-articulated body of codified, disciplinary tools’ (2000:238).12
Within their domain of science they repeat a central tenet of codification theory - that the extent to which knowledge
is tacit or codified depends only on ‘costs & benefits’ (2000:241). Moreover, they suggest that when knowledge is
codified (and meanings are stable) the transfer of knowledge can be conceptualised as the transfer of messages.
As a consequence, codification allows a greater de-localisation of knowledge (c.f. 1997:5).
This is the third codification position, which suggests that knowledge can not only be reduced to, but actually is, a
code-book, and that tacit knowledge is really the non-manifest possession of a codebook, and has to be because
‘codification precedes articulation’ (2000:228).
It would seem that if the codification position, which is being proposed as an explanatory theory, is to be anything
more than the traditional, pragmatic position that ‘it is useful to write things down’, ‘people can learn from books’
and ‘IT allows you to send information’, a causal chain and a number of assumptions should hold. I am not
suggesting that the proponents of codification believe all of these assumptions, in fact I am convinced they don’t.
My criticism of the codification position is going to be precisely that they don’t produce a causal chain. But if it were
true that ability could be reduced to know-how, which could be reduced to know-what, which could be completely
codified and used to transmit the full ability to people who are distant in time, space and culture, at a very low cost
that radically reduces the amount of learning required (and/or the tacit components of knowledge), and all this
could be done without exploring the local social, cultural, technical and organisational aspects of knowledge use,
then the codification position would be correct, and the traditional position would be wrong. Whatever the
correctness or not of these assumptions, the concept of tacit knowledge is over-used and ready for critical analysis,
having become in some quarters an ‘explain-all’ elixir. I would suggest that the proponents of codification are
absolutely correct to suggest that tacit knowledge has become a badly defined residual explanation, that when
badly used amounts to little more than mystification.
However, is their total rejection of the concept really justified? Does articulating something and reading it really
amount to knowing it? Is ‘knowing how to do it’ in the sense of being able ‘to say what the steps are’ really the
same as having the actual capability? Would spending time questioning Lennox Lewis about boxing, really put me
in a position to get in the ring with Mike Tyson without being beaten to a bloody pulp? Should I be worried if the
person operating on me learnt surgery from a book? Intuitively, one might think that the novelty and complexity of
the surgery might make a difference, but they are ignored by codification theory. Is it really the case, knowing what
we do about the nature of knowledge and the role of knowledge representation, that the codification position is
plausible?13
Moreover, given that Nelson and Winter used the notion of tacit knowledge extensively, are they wrong? If, as I
shall argue, Polanyi and Simon might be incompatible on the same explanatory level, it would seem that Nelson
and Winter can be only half right about tacit knowledge, but which half?
Part 3: Codification’s Problems and its Incompatibility with Darwinism
There are a number of points of disagreement between David, Dasgupta, Cowan and Foray, and academics who
are sceptical about their causal explanations. The ones I will analyse within this paper are the notions that:
• Tacit knowledge and codified knowledge (meaning information) are largely alternatives rather than complements.
• IT allows tacit knowledge to be codified, into codified knowledge.
• The boundary between tacit and codified knowledge is determined by costs and benefits in a non-trivial sense.14
• The concept of codified knowledge, in its new sense, is analytically more useful than old-fashioned ‘information’.
But my main line of investigation is to see if:
• The concept of the non-manifest codebook is useful.

11 For example, Cowan et al argue that ‘most studies fail to prove what is observed is the effect of ‘true tacitness’, rather
than highly codified knowledge without explicit reference to the codebook… that is not manifest.’ (2000:233).
12 In doing so, they criticise Collins but do not explain why his extensive arguments (against the possibility of codifying
knowledge) in Collins (1990) are invalid.
13 Should we take a theory of knowledge that largely ignores learning seriously? If codifying knowledge is so easy, why isn’t it
done more? Doesn’t more IT simply create the same old knowledge/technology transfer problem, but this time with lots of
information?
While I think that the proponents of codification have advanced an intellectually interesting and creative set of ideas
I am going to argue that most of them are flawed.15 I am going to argue that the causal chain that must hold if the
codification explanation is going to be anything more than the traditional view has breaks in it. But, I am also going
to argue that in the end this does not matter because codification theory is not a causal theory. And in making this
second point I am going to say something about the relationship between Polanyi and Simon.16
Given that this paper was written to celebrate Nelson and Winter’s work I would like to explore the causal chain
between embodied capabilities and articulated information using an evolutionary argument that links Edelman’s
hardware level explanation to Polanyi’s phenomenology. Darwinian theory implies that our innate, conscious and
linguistic cognitive abilities were built from and complement earlier developments. Since the ability to symbolise is
a recent evolutionary development, we would expect it to be intertwined with, rather than an alternative to, older
non-conscious cognitive processes. The extreme codification explanation ignores these earlier unconscious
cognitive abilities and is therefore potentially inconsistent with evolutionary theory if the theory suggests that some
capabilities cannot be reduced to know-how, and some know-how cannot be reduced to articulated know-what.17
The Evolution of Knowledge
Human cognitive capacities are rooted in our biology, which has developed through a process of evolutionary
redundancy whereby nature adapts pre-existing features to new functions by duplication and divergence, symbiosis
and epigenesis. Humans, like the organisms they evolved from, need to maintain the parameters of their body
within a fairly narrow range. Single celled organisms, such as bacteria, maintain themselves using self-correcting
chemical feedback loops and bio-chemical regulation mechanisms that act as chemical ‘buffer solutions’,
maintaining the organisms within the range of parameters that is consistent with life.
As organisms become more complex, more sophisticated processes are required to maintain homeostasis.
Typically, this involves the secretion and transportation of specialised chemicals that regulate behaviour. In most
multi-celled creatures, these earlier evolutionary adaptations are complemented by neurological mechanisms
which, at their most basic level, moderate behaviour through reflexes, where the activation of one neurone
activates another. More sophisticated mechanisms involve intervening in and moderating the neural pathways, which
allows more complex and sophisticated responses to external stimuli.18 These neural mechanisms are
evolutionary complements to the biochemical mechanisms and together generate ‘emotions’, a technical term for
chemically and neurologically induced changes which automatically regulate organisms (Damasio 1994, 1999).19

14 It makes a lot of difference if some of the costs are infinite.
15 I have, for instance, no problem with the ideas in Foray and Cowan (1997), which I regard as an excellent paper. My only
criticism might be that the ideas haven’t been properly tested.
16 Because Simon’s work is so central to both my argument and Nelson and Winter’s work, I am going to make a quick point
about theories in which symbols have propositional content. The point is that it is a methodological error to move from
explaining knowledge with symbols to saying things about the non-importance of tacit knowledge. Specifically, it is a confusion
between ‘an absence of evidence and evidence of absence’. If you are drawing your ideas about knowledge from the
research of Newell and Simon (1972), and I am specifically not suggesting this about the proponents of codification (even
though I think they do come pretty close in Cowan (2000)), then you are not in a position to talk about tacit knowledge.
Simon’s work relied on a methodology called protocol analysis. Protocol analysis involved asking people how they solved
problems, and assuming that the answers they provided gave direct insight into their short-term memory. Moreover, it made
another implicit assumption that things that could be articulated (which they equated with short-term memory) were the only
things involved in problem solving. Any neurological processes involved in problem-solving that work below the level of
consciousness cannot be articulated and are not going to be picked up by protocol analysis. Simon and Newell’s theory
therefore did not incorporate any of the underlying tacit causation. Followers of Simon and Newell have used protocol
analysis to deny the importance of tacit knowledge for their explanations, but one should not assume that it is unimportant in
real life. To do so would indirectly rely on a methodology that cannot detect it, let alone conceptualise its importance. To
deny tacit knowledge exists based on protocol analysis is like assuming that the person on the end of the telephone doesn’t
exist just because we can’t see them.
17 This is not meant to imply that understanding individual cognition is the only way, or even a good way, to understand
knowledge and technical change, only to explore some potential problems. Any assumption that innovation takes place in
one person’s head is going to lead to serious errors.
These emotional control mechanisms are generally, but not exclusively, centred on the brain. As the brain has
evolved it has allowed representations of the environment and the body to form. These can be related to learnt
memories through an embodied process of categorisation, allowing more sophisticated responses to the
environment (Edelman 1992). These dispositional memories constitute both our innate and learnt knowledge, but
are not necessarily conscious or able to be made conscious.20
Together, these systems produce emotional responses to neural images (either perceived or imagined) in the form of
chemical and neurological changes within the body.21 Some of these responses can be modified by learning,
which involves linking the earlier limbic brain-stem system, that unconsciously defines emotional values that are
important to the survival of the organism, to the later-evolving cortical systems of the senses that categorise the
world (Edelman 1992). Lesion studies have shown that if any part of these multiple systems is disabled then
learning and categorisation cannot take place properly (Damasio 1994).
However, it is not always the case that we need to be conscious to learn, or are conscious of emotional changes or
even the perceptions that cause them. Consciousness is after all a relatively recent evolutionary development and
anatomical evidence suggests that it is dependent on earlier non-conscious systems that are capable of learning on
their own (Edelman 1992). Moreover, neural images require more permanence to become conscious than they do
to influence the body and brain. If experimental stimuli are delivered to the thalamus for over 150 ms they will
produce sensory detection, but they must be delivered for over 500 ms in order to become conscious (Tononi
1998:1848, Libet, 1992). Thus, much of our learning is tacit and implicit (Reber 1989, Kihlstrom 1987, Lewicki
1986, Sternberg 1986) and there would appear to be large breaches in the causal chain required by the
proponents of codification.
Things are only made conscious (or felt) if the brain has evolved enough infrastructure to recognise how changes
in the body relate to the thing producing those responses (Damasio 1999).22 That is, neural structures that
produce an image of you changing in response to an object, allowing you to feel changes produced by external
objects (Damasio 1999).23
Such neural systems allow selected images to be set out and brought from what Polanyi called implicit subsidiary
awareness to focal awareness (Posner 1994). This has obvious evolutionary advantages such as being able to
concentrate on escaping from a predator, or attacking prey. These advantages can be improved if conscious
attention can be linked to memory and categorisation (both at a conscious and unconscious level) to allow learning
from errors. This allows higher primates, and a range of other animals, to live in what Edelman has called the
‘remembered present’ where our attention to external objects is related to previous memories, learnt categories
and their unconscious emotional responses (1989).24
18 While the hedonic regulation systems tend to deal with the slow internal systems such as appetite, body cycles, sleep and
sex, these thalamocortical regulation systems tend to react very fast to external signals and link sensory sheets within the
brain via the rest of the nervous system to voluntary muscles (Edelman 1994:117).
19 These are produced by brain devices that automatically and non-consciously regulate and represent body states in the
brain stem and higher brain. As organisms become more complex so do their emotional responses. In humans the emotions
include the primary emotions ‘happiness, anger, fear, surprise or disgust’, the secondary social emotions of ‘embarrassment,
jealousy, guilt or pride’ and background emotions such as ‘well being or malaise, calm or tension’ (Damasio 1999:49-50).
20 The dispositions concerning biological regulation generally do not become images in the mind, and regulate metabolism,
drives and body cycles below the level of consciousness in the early brain systems such as the hypothalamus, brain stem
and limbic system (Damasio 1994:104). By contrast the dispositions for learnt knowledge are in the higher order cortices and
the grey matter below (Damasio 1994:105). These generate motor movement, related dispositions and neural images in the
sensory cortices.
21 This includes signals along the autonomic nervous system via peripheral nerves, changes to the viscera such as heart,
lungs, skin etc., changes to the skeletal muscles and changes within the endocrine glands and the immune system (Damasio
1994, 1999).
22 To feel something one needs to have evolved three inter-related neural systems (ibid). The first is what Damasio calls the
proto-self, an unconscious, dynamic neural image that monitors, maps and emotionally responds to changes within the body.
The second system generates unconscious images of external objects (the phenomenon of blindsight shows that it is possible
to perceive something and not be conscious of perceiving it) and produces emotional responses to them. And the third
produces second order neural maps of the relationship between the previous two (Damasio 1999).
23 The process is somewhat more complicated as neural images can also be generated from memory.
24 This conscious attention is personal, changing, continuous and intentional (Edelman 1994:111).
However, the neurological processes that could produce consciousness are constrained in three ways. Firstly, they
must be able to produce an integrated ‘scene’ that cannot be decomposed. Secondly, they must be able to quickly
discriminate between the extremely large number of different inter-related neural images that make up our
subsidiary awareness, and thirdly, they are constrained by time, as they must be stable enough to permit decision
making (Tononi et al 1999).
Based on these constraints, Tononi et al have proposed a “dynamic core” hypothesis that highlights how
consciousness is produced by a constantly changing core of neuronal groups that interact with each other far more
than with the rest of the brain, and do so for sufficiently long to select images from an extremely large repertoire
(1999:1849). This temporally integrated neural firing generates consciousness and allows it to act as a search-light
over unattended mental images that structure our experience within Gestalts based on similarity. It is these
unattended but linked neural images that provide the context to our conscious attention. As Polanyi noted, they are
involved in all our actions even when they are not being consciously attended to (c.f. Edelman 1989).
However, Tononi et al’s ‘dynamic core’ excludes much of the brain and is consistent with imaging studies that show
that many brain structures outside the thalamocortical system do not influence consciousness (Tononi and
Edelman 1998:1846).25 Thus most of the highly automated knowledge needed to read, talk, listen, write etc., in an
effortless and coherent way will forever be tacit and no amount of changes to the costs and benefits is going to
affect it.26
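In the information-theoretic terms in which Tononi et al frame the hypothesis, this can be stated schematically (the notation here is the standard textbook one, not a reproduction of their exact measures): the dynamic core is a subset $X$ of the neural system $S$ whose internal integration is high while its mutual information with the rest of the brain,

$$I(X; S \setminus X) = H(X) + H(S \setminus X) - H(S),$$

is low, creating the functional boundary quoted in footnote 25. Highly automated, tacit performances are carried by neural processes that lie largely outside $X$, which is why they are predicted to show lower complexity than consciously controlled behaviour, and why no change in costs and benefits can bring them within reach of articulation.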
These neurological structures produce consciousness as a subjective, inner and qualitative state (Searle 1998).
Consequently, there are many types of local, first person knowledge that cannot be fully articulated in a global, third
person perspective (Nagel 1986).27 This process, however, has nothing to do with codification and the whole
process can be explained neurologically without invoking any algorithms or code-books.
Beyond Basic Consciousness
The theories of neurologists such as Damasio, Tononi and Edelman suggest that the conscious ability to direct
attention from subsidiary to focal awareness is based on anatomical structures that are about 300 million years old.
Damasio’s core-self is therefore extremely basic.28 The three later evolving systems that will be examined here are
human learning-categorisation-memory, human problem solving, and language (which includes symbolic
understanding).
Learning, categorisation and memory are traditionally regarded as distinct but neurologically they are extremely
difficult to separate from each other or the emotional systems (Edelman 1992:100). Edelman has proposed a
theory of neuronal group selection to explain how continuous dynamic changes within neural populations
(specifically their synaptic strengths), mediated by the hippocampus, allow sensory inputs to be categorised and
learnt (1992).29
These processes are not passive. Because it takes time (typically 500 ms) for perceptions to be made conscious,
the brain actively hypothesises how the world will be by activating implicit, dispositional memories within the
appropriate sensory cortices. As Bartlett showed in the 1950s, memory is not like a computer file, as required by
the extreme codification argument, but an active process of reconstruction.30 This process of reconstruction
generates memories within Gestalts that cannot be fully articulated. Consequently, we always know more than we
can say (Polanyi 1969).

25 As Tononi et al note ‘The dynamic core is a process, since it is characterised in terms of time-varying neural interactions,
not as a thing or a location. It is unified and private, because its integration must be high at the same time as its mutual
information with what surrounds it is low, thus creating a functional boundary between what is part of it and what is not… [it
predicts] that neural processes underlying automatic behaviours, no matter how sophisticated, should have lower complexity
than neural processes underlying consciously controlled behaviours.’ (ibid).
26 The systems of the brain responsible for this would ‘typically include posterior corticothalamic regions involved in
perceptual categorisation interacting re-entrantly with anterior regions involved in concept formation, value-related memory
and planning’ (1849-50). Consequently, no amount of money will allow us to articulate how the vestibular ocular reflex keeps
our vision stable as we move.
27 Our neurological hardware generates a whole range of inter-related neural images that can in some instances be turned into
conscious, subjective mental images. As a result, tacit knowledge should really be contrasted with conscious mental states
(which potentially can be articulated) rather than with codified knowledge. Only some of the underlying neural states that
comprise our tacit knowledge can be brought into consciousness to form subjective mental images, which in turn can be
articulated with words and gestures.
28 Lesion studies have shown that damage to the areas of the human brain responsible for these capabilities does not destroy
consciousness. However, the neuro-anthropology of Oliver Sacks suggests that consciousness can be profoundly affected.
29 Similar ideas have been put forward by Walter Freeman and are supported by simulation studies.
30 A series of dispositional neural patterns in the higher order association cortices fire back to the early sensory cortices and
generate images and activity there that topologically relates to the thing being remembered. These neural hypotheses are
activated as the body moves within its environment (because memory is procedural and involves continual motor activity and
rehearsal). ‘Ballet dancers’ muscles and pianists’ fingers have learnt to do their job just as much as their brains’ (Edelman
1992). Memory is embodied, and damage to brain regions such as the cerebellum that are concerned with the timing and
smoothing of body movements has deep effects on memory, learning and categorisation. Edelman notes ‘categorisation
depends on smooth gestures and postures as much as it does on sensory sheets’ (1992:105). This is why neurological
disorders, such as Parkinson’s disease, that compromise our ability to move smoothly also disrupt our memories.
The anatomical distinctness of different memory systems is revealed in how they can be remembered. In contrast
to the proponents of codification, many forms of memory can only be recalled by doing (Anderson 1983), and these
include many scientific and technical procedures.31 At a very crude level this can be usefully conceptualised as a
distinction between fact memory and skill memory which is mainly implicit or tacit (Schacter 1987). However, the
biological basis is far more complex and the human brain probably has several hundred different memory systems
rather than just the traditional procedural, semantic, and episodic ones (Squire and Butters 1984, Squire 1987,
Tulving 1983). Much of this dispositional memory is found in brain regions outside the areas accessible by Tononi
et al’s ‘dynamic core’ and consequently, can never be articulated from a first person perspective and is extremely
difficult to learn from a third person perspective. This is again at odds with the extreme codification position
whereby almost all knowledge needs to be reduced to articulated rule following.
Moreover, imaging studies have shown that the distribution of neural activity within the brain changes as one learns
and moves from consciously following explicit rules (which is widely spread) to tacit, unconscious, automated
performance (which is more localised), often outside the region accessible to consciousness (Tononi et
al:1847).32 This functional isolation that accompanies learning produces a ‘gain in speed and precision, but a loss
in context-sensitivity, accessibility, and flexibility’ (ibid.). This makes expert knowledge extremely difficult to
articulate and creates a distinction between know what and know how, which is built up through practice.
Thus when someone is learning how to ski, for example, they may start being told to ‘bend the knees’ and ‘lean into
the mountain’, following a set of explicit rules. However, as the person learns, these explicit rules become irrelevant
as tacit understanding takes over and the brain changes to automate the procedures. As Searle puts it (1983:150):
‘As the skier gets better he does not internalize the rules better, but rather the rules become progressively
irrelevant. The rules do not become ‘wired in’ as unconscious Intentional contents, but the repeated experiences
create physical capacities, presumably realised as neural pathways, that make the rules simply irrelevant.
“Practice makes perfect” not because practice results in a perfect memorisation of the rules, but because repeated
practice enables the body to take over and the rules to recede into the [tacit] Background’.33
Thus we have a fundamental split between the neurological evidence and the position of objectivists like Simon
and Newell.34 The neurological evidence shows that learning can start with written rules, but as we learn, the rules
become redundant and we rely on neurological processes that cannot be articulated, making teaching most expert
behaviour, without lengthy practice, extremely difficult. Simon and Newell argue that as we learn, knowledge that
was unarticulated (meaning uncodified rather than biologically distinct) becomes codified in problem solving
heuristics and routines. As Polanyi pointed out, the correctness of each position can be established by asking
whether it is tourists or locals who are likely to be navigating with a map.
The neurological differences between different types of knowledge, and the corresponding difficulties in bringing
them into consciousness and articulating them, suggest that Cowan et al’s (2000:17) position that ‘… there is
nothing … that would imply a lack of awareness … or an inability to transmit it to others.’ may be an
oversimplification. Moreover, it seems to be in direct contrast with a range of established psychological findings
that now have neurological explanations.35
31 Amnesiacs for example can learn many new skills but will have no memory of having taken the lessons.
32 That involves distributed neural activity in the thalamocortical system that is essential for learning.
33 As Henry Maudsley, of Maudsley hospital fame, put it in 1876 ‘If an act became no easier after being done several times, if
the careful direction of consciousness were necessary to its accomplishment in each occasion, it is evident that the whole
activity of a lifetime might be confined to one or two deeds.’ (Quoted in Edelman 1994). Since we can only really concentrate
on one task at a time there are obvious evolutionary advantages to moving established behaviours out of conscious control
and into our tacit background knowledge where they can improve and develop on their own.
34 I am not suggesting that any of the proponents of codification accept Simon’s view. Although, Cowan and Foray for
example (1997) argue that ‘a procedure that was developed to produce some end becomes routinised, and repeatable, which
implies that it can be broken down into component pieces, each of which is sufficiently simple that it can be described
verbally, or embodied in a machine. This again, is a process in which tacit knowledge gets codified.’
35 See for example, Dixon 1971, Reber 1989, the collected edition by Carlo Umilta 1994, especially Berry 1994, Buckner et al
1995, Cheeseman and Merikle 1984, Merikle 1992, and Schacter 1992 and the extensive references therein.
Problem solving: Similarly, problem solving relies on and is intertwined with earlier evolutionary brain structures
linked to emotions. The abstract objectivist model of problem solving is basically false (Damasio 1994:172)
because it fails to recognise that rationality is dependent on earlier evolving emotional systems (Damasio 1994).36
Brain lesions that break the link between emotion systems working just below the level of consciousness and the
later evolving conceptual systems make decision making almost impossible. Damasio gives an example of a
patient with ventromedial prefrontal damage who was asked to choose which of two dates he would like to return to
the hospital on.
‘The patient pulled out his appointment book and began consulting the calendar… For the better part of
a half-hour, the patient enumerated reasons for and against each of the two dates: previous engagements,
proximity to other engagements, possible meteorological conditions, virtually anything that one could
reasonably think about concerning a simple date. Just as calmly as he had driven over the ice and
recounted that episode, he was now walking us through a tiresome cost benefit analysis, an endless
outlining and fruitless comparison of options and possible consequences. It took enormous discipline to
listen to all of this without pounding on the table and telling him to stop, but we finally did tell him, quietly, that
he should come on one of the days’ (1994:193).
Rationality can therefore be severely compromised when these unconscious biochemical and neurological
emotions fail to act.37 In non-pathological people, Damasio has proposed a somatic marker hypothesis in which the
generation and selection of alternative courses of action (produced by linking previous learnt memories to current
situations) occurs with the help of evolutionary earlier brain structures that mark each neural image with an
emotional response (1996, 1997). These emotional markers help focus attention on the positive and negative
implications of choices (often tacitly below the level of consciousness) and bring appropriate ways of problem
solving from subsidiary awareness into focal awareness and attention. This allows choices to be made between
fewer alternatives increasing the accuracy and efficiency of decision making (Damasio 1994). Because this
process is evolutionary it is not perfect – rationality, like the QWERTY keyboard and the Panda’s thumb, is path
dependent and therefore often sub-optimal. Consequently, Kahneman and Tversky’s findings, that even
statisticians are more likely to choose a medical procedure in which they have a 90% chance of living over one in
which they have a 10% chance of dying, should not come as a surprise.38
This is not to suggest that it is not possible to disengage our knowledge from our own emotions, biases and first
person perspectives. The process of scientific research is normally explicit in attempting to do this. Turro (1986)
has provided a model of problem solving that highlights its embodied and embedded nature. He suggests that
normal cognitive activity generates a set of mental states or beliefs about the world. These allow us to actively
pre-empt and predict the behaviour of the world; when we predict well we reinforce our understanding and ‘breed [a
feeling of] satisfaction’, while when we fail (i.e., when we don’t understand the world, and cannot resolve our
problems) we ‘breed [a feeling of] tension and conflict’ (1986:882).39
36 The notion that people perform some heuristic, cost-benefit analysis based on subjective expected utility by exploring the
consequences of options, fails to provide a biological basis for the impossible task of holding all the mental images in place
long enough to perform the calculation. It fails to explain the units in which the ‘apples and oranges’ of the future will be
compared. It ignores the huge time needed to perform these types of calculations and fails to recognise that working memory
and attention are limited (ibid). It fails to acknowledge the weaknesses of rationality that Tversky and Kahneman have
highlighted, nor our ‘ignorance and defective use of probability theory and statistics’ (ibid). Again I am not suggesting that the
proponents of codification are objectivists.
37 Psychopaths, for example, behave in irrational ways that are dangerous to themselves and others because
their emotions influence their thinking only at very high levels. They are normally ‘the very picture of a cool
head’ and by self-reports are ‘unfeeling and uncaring’ (Damasio 1999:178).
38 The error that David et al make with their concept of the codebook is a common one in which they ontologise a
normative feature of rationality into a theory of rationality (c.f. Taylor 1995). They do this by moving from a position
whereby “we should try and make our decision making as objective as possible, and not subject to our subjective, petty
dislikes”, to propose that this objective decision making process is what rationality actually is. This error has important
implications, especially as there appears to be an implicit normative teleology in David, Foray and Cowan’s arguments –
in short ‘more codification is a good thing’. This teleology is directed away from tacit rationality and towards a pathological
situation of pure disembodied rationality.
39 This is consistent with Damasio’s emotional response theory whereby we have negative emotional responses when we fail
to understand the world and positive ones when we do, both of which we ‘feel in our guts’. Turro states that the origin of this
‘could be traced to its survival value and the associated evolution of an appropriate genetic composition and constitution of
the human brain’ (1986:882).
These gut feelings are trained during the scientist’s apprenticeship so that ‘students who learn to tolerate the
tensions that normally accompanies the process of resolving such intellectual conflicts often feel an excitement that
is stimulating and rewarding in itself’ (1986:882). But they are not things, in chemistry at least, that can be
articulated, codified and transmitted. They have to be learnt at the bench. This suggests that even if it were
possible to codify the knowledge required to perform an already perfected task (and the evidence so far goes
against this), it would be far more difficult to codify the knowledge required to perform an innovative task.
Consequently, the codification position, even on already perfected tasks, will be made even more problematic if
local knowledge is needed to adapt already existing technologies to local conditions.40
Language: These processes of learning, categorisation and problem solving are influenced and enhanced by our
symbolic representation capabilities. Human language is a rather recent evolutionary development and is closely
related to the steady increase in brain size over the last 2 million years. However, it is probable that the ability to
use language fully only emerged in the last 40,000 years (Maynard Smith and Szathmary 1999). Whatever the
case, language required a range of anatomical changes to the brain mechanisms that process sounds (Lieberman
1984).41 After the evolutionary development of the supralaryngeal tract, the Broca’s and Wernicke’s areas in the
brain developed to allow the re-categorisation of phonemes. This extra symbolic memory allows new concepts to
be related, connected and refined, and in turn related to sounds and gestures.
However, there is nothing particularly advanced from an evolutionary perspective about the ability to have
representational systems. The sea anemone Stomphia coccinea can distinguish between 11 species of starfish
and forms a category of representations that requires similar responses (Maynard Smith 1995:284). As Maynard
Smith notes ‘A category is a tacit concept: that is why animals must be granted the faculty of concept formation.
Language has merely provided labels for concepts derived from prelinguistic experience’ (ibid., emphasis added).
The biological evidence suggests that the codification proposal that codification precedes articulation (Cowan et al 2000:228) is wrong.42 One would seem to have a choice: either evolutionary theory or codification. If Darwin is correct, we would agree with John Locke that words refer to constructed mental categories that are dependent on shared tacit understanding for their successful articulation.43 Consequently, the neurological and evolutionary evidence suggests that the concept of codified knowledge, as used in its new sense by the proponents of codification, is potentially problematic. The next section criticises the notion of codified knowledge when it is taken to imply that knowledge can be used without tacit knowledge, and shows how communication is structured by tacit knowledge.
Communication, Speech Acts and the Transfer of Knowledge
The difference between the codification and tacit knowledge positions becomes clearer if we take a simple example: the case of someone who knows how to bake a cake writing down the recipe and sending it by post to a friend in a different country. The friend then follows the recipe and produces the cake. Here we have an instance where one person's capability was transferred to another person, who successfully produced a cake. The codification position is that knowledge in one person's head has been successfully codified and transmitted as codified knowledge, and is now in the head of another person. Furthermore, its proponents might add that with a modern fax machine the recipe could be translated into another, digital code and transmitted far more easily to a wider range of people.
40 Turro's (1986) model of science is consistent with much of the empirical work within the 'mental models' tradition of cognitive science, especially Shepard and Metzler's work on imagery (1971), Kosslyn's work on mental images (1980), and Paivio's work on dual coding (1971). All of this work is consistent with the view that spatial mental representations (meaning dispositional memories, rather than computer code) are embodied to reflect their spatial properties rather than the properties of codes or sentences that might describe them (Garnham and Oakhill 1994:36). Like most models in the tacit knowledge tradition, however, it only deals with the individual level, while the problems for the codification position become more extreme once the role of specialisation and the division of scientific labour is introduced.
41 These changes possibly came about due to adaptations to brain systems used for object manipulation. Evidence shows
that damage to the Broca’s area of the brain not only reduces the ability of people to produce syntactically organised speech
but also the ability to conceptualise object manipulation (ibid.).
42 They note ‘articulation being social communication, presupposes some degree of codification’ (2000:228).
43 “Words by long and familiar use, … come to excite in men certain ideas, so constantly that they are apt to suppose a natural connexion between them. But that they signify only men’s particular ideas, and that by a perfectly arbitrary imposition is evident, in that they often fail to excite in others … the same ideas,… and every man has so inviolable a liberty, to make words stand for what ideas he pleases, that no one hath the power to make others have the same ideas in their minds [Compare with Horace Epistle 1, 14 ‘Nullius addictus jurare in verba magistri’ ‘not pledged to echo the opinions of any Master’, which forms the basis of the Royal Society’s motto.], that he has, when they use the same words, that he does… ‘Tis true, common use, by a tacit consent, appropriates certain sounds to certain ideas … which limits the signification of that sound, that unless a man applies it to the same idea, he does not speak properly ….” (1689 III, 2, 8).
The traditional position is slightly different. While its proponents agree that the ability to produce a cake, initially possessed by only one person, is now possessed by two, they would disagree with the notion that what was transmitted was codified knowledge, in the sense of knowledge that is independent of tacit understanding.44 What was sent through the post was a recipe, and a recipe cannot contain any knowledge because recipes are not capable of containing knowledge. To suggest they can is to commit a category mistake and confuse knowledge, which is a capacity, with information, which is a state. While it may be convenient to refer to the recipe as codified knowledge, which everyone does, it is a big theoretical leap to conclude that it reduces or eliminates the tacit knowledge that is required to use it. In more technical terms, symbols with propositional contents need someone with tacit knowledge to assign them as such.
Rather than containing pure “codified knowledge” in the strict ‘no (or vastly reduced) tacit knowledge needed’ sense, the recipe contained a series of what Austin calls ‘speech acts’, which are the basic units of communication (1975, Searle 1969:16). The structure of speech acts is dependent on the neural structures (discussed earlier) that allow us to have intentional mental states, i.e., mental states ‘about’ things in the outside world. For example, they allow us to have subjective thoughts, expectations, fears and beliefs about the world, which creates a distinction between the type of intentionality – i.e., hopes, beliefs, fears etc. – and its content – ‘it will rain’, ‘there is a monster under the bed’ etc. (Searle 1969).45 This distinction between type and content constrains the structure of speech acts and the processes involved in communication. This makes human communication more complicated than having knowledge, codifying it, transmitting it and the other person understanding it.
Taking an example from Searle (1969:22) ‘Imagine a speaker and a hearer and suppose that in the appropriate
circumstances the speaker utters one of the following sentences:
1. Sam smokes habitually.
2. Does Sam smoke habitually?
3. Sam, smoke habitually.
4. Would that Sam smoked habitually.’
These all contain a referring act which refers to the same thing, namely Sam. The predicating act is also the same – ‘smoking habitually’ – and these two together make up the propositional act, which tells us what the person is talking about. Creating a full speech act requires an illocutionary force indicating act. This can be generated by word order, as in sentence 1, which is a statement; by the question mark and word order, which indicate that sentence 2 is a question; and so on, with 3 being an order and 4 being a wish.46 All three parts of the speech act are dependent on, rather than alternatives to, tacit knowledge.47 All presuppose a whole range of tacit skills and knowledge that cannot be reduced to codified knowledge because they are needed for the process of codification to even take place. This is why, in normal conversations, despite having the same structure, you automatically know that the sentence ‘I haven’t had breakfast’ refers to today, while ‘I haven’t been to Tibet’ refers to my entire life (Searle 1969).48 In essence, ‘meaning is a form of derived intentionality. The original or intrinsic intentionality of a speaker’s thought is transferred onto words, sentences, marks, symbols, and so on’ (Searle 1998:141), which makes the codification proposal to separate tacit-intentionality from codes potentially problematic.
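To make Searle's decomposition easier to follow, the short sketch below (my own illustration in Python, not anything proposed by Searle or in the codification literature; the names SpeechAct, referent, predicate and force are hypothetical labels chosen for the example) represents the three components as a simple data structure, with the same propositional act appearing four times under different illocutionary forces, mirroring the four sentences above.

from dataclasses import dataclass

@dataclass
class SpeechAct:
    referent: str   # referring act, e.g. 'Sam'
    predicate: str  # predicating act, e.g. 'smokes habitually'
    force: str      # illocutionary force: 'assertion', 'question', 'order' or 'wish'

examples = [
    (SpeechAct("Sam", "smokes habitually", "assertion"), "Sam smokes habitually."),
    (SpeechAct("Sam", "smokes habitually", "question"), "Does Sam smoke habitually?"),
    (SpeechAct("Sam", "smokes habitually", "order"), "Sam, smoke habitually."),
    (SpeechAct("Sam", "smokes habitually", "wish"), "Would that Sam smoked habitually."),
]

# The same propositional act (referring + predicating) carries four different
# illocutionary forces. What the data structure cannot capture is precisely the
# point made in the text: assigning 'referent', 'predicate' and 'force' to these
# strings already presupposes a reader with the tacit, intentional understanding
# needed to interpret them.
for act, sentence in examples:
    print(f"[{act.force}] {sentence}")

The point of the sketch is negative rather than positive: the structure can label the parts of a speech act, but the labelling itself is an act of interpretation that depends on the tacit understanding discussed above.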
44 Again, this is not the position of all proponents of codification. The early work argued that tacit knowledge will always be required, though the stronger claim does seem to be the implication of the later, admittedly more extreme, code-book position.
45 Importantly, we can have intentional states about things that don’t even exist – as when I walk into the house to get a drink
but get distracted and never actually get it.
46 Thus speech acts have a referring act and a predicating act, which together make up the propositional act, which together with the illocutionary force indicating act makes up the complete speech act (Searle 1998).
47 For referring to take place successfully the reference must be part of a larger speech act, the thing being referred to
must exist, and the referring expression must contain an identifying description of the referent that the receiver will
understand (Searle 1969, cf. Austin 1975). This presupposes that the receiver and the communicator both have the
ability to understand contextual similarity - which is an embodied tacit skill (Nightingale 1998). It cannot be learnt because
it is needed for learning to take place (Pinker 1994:417). Similarly, with predicating acts. For predicating to take place
successfully ‘X [the thing being referred to has to be] … of a type or category such that it is logically possible for P [the
predicate] to be true or false about X’ (Searle 1969:126). Thus in order for predicating to take place the person
performing the speech act needs a lot of unarticulated, tacit background knowledge about the thing being referred to. So,
to take an extreme example, if you say ‘the cat flows’ or ‘codification speaks Chinese’ the receiver will be confused
because they both ascribe logically impossible properties that clash with our normal automatic understanding of speech.
Consequently, because speech acts relate to mental images that exist as Gestalts, we will often have to ask people what
they mean by what they say. As Fotion (2001:35) notes predicating is not a complete speech act and ‘has more to do
with knowing the meaning of a term, or knowing how to use it than it does with point to anything’.
48 Because language is used for social communication meanings have to be shared. Paul Grice pointed out that we only
succeed in communicating when people recognise our intention to produce understanding in them. This is why it is the stage
manager and not the actors who tell the theatre audience that there is a fire and everyone must leave (c.f. Searle 1969,
Austin 1975, Grice 1989, Searle 1998:141).
We can now criticise some of the tenets of the codification approach.
1. Tacit knowledge gets codified, in the sense that tacit knowledge and codified symbols are substitutable alternatives and tacit knowledge gets codified into codified knowledge. This is the Dasgupta and David (1994) assertion and originally the main tenet of the codification approach. Speech act theory suggests that this is problematic because tacit knowledge does not get codified into codes, and speech acts and tacit knowledge are complements not alternatives. Indeed, the analysis of predication and referring shows that both acts are dependent on tacit understanding. This goes deeper than Chomsky’s argument that language depends on unconscious brain processes. Words and symbols refer to ideas that are interconnected neural images structured in Gestalts, so that the transfer of the word is not going to transfer the full idea.
2. The extent of codification is dependent on costs and benefits. Again, speech act theory and the neurological evidence are in full agreement with the traditional view that you can pay people to write things down, which other people can then read and learn from. But they disagree with the codification position that capabilities can to a large extent be reduced to know-how, that know-how can be reduced to know-that, which can be articulated, and that this articulated know-that can be read by someone who will then, to a large extent (i.e., more than the traditional view would suppose), have the original capabilities with considerably less need for learning methods that build up tacit skills. The evidence suggests that the relationship between tacit knowledge and written words is much more complex than the proponents of codification propose. Some knowledge, including much expert knowledge, cannot be readily articulated. It can, however, be described, but having a description in the form of speech acts and having the knowledge are not the same thing.49
3. The concept of codified knowledge makes sense, in the sense of “capabilities that have tacit components being codified into codified knowledge” which is enough to allow you to have access to the capabilities. Again we find a disagreement between codification theory and the neurological and linguistic evidence. No-one is denying the traditional view that reading books which contain ‘codified knowledge’, in its normal sense, can help performance. The codification position is more than this – it is that tacit knowledge gets codified into codified knowledge. However, tacit knowledge and codified knowledge are not endogenously chosen alternatives. The dichotomy is confused. There are i) causal neurological processes, some of which can generate neural images that can form ii) subjective mental images, and some of those can have their intentionality imposed on words and symbols to become iii) speech acts. Not all capabilities can be reduced to causal neurological processes – no amount of reading is going to make me a good ballet dancer. Not all causal neurological processes generate neural images – I’m not going to get a Fields medal by asking a mathematician how they think. Not all neural images can be articulated, because many have a local, first-person perspective – being male I will never ‘know’ what it is like to have a baby even if I read a book about it. And because our ability to symbolise our knowledge depends on actively reconstructing Gestalts, speech acts are not able to transfer memories completely, let alone capabilities, as suggested by codification theory. As Polanyi noted, we know more than we can say.
In short, the concept of codified knowledge, as it is used by the proponents of codification, comes from a category mistake that confuses knowledge (a capacity) with information (a state). This in turn is the result of their ‘clear’ switch from an epistemological description to an ontological property. This confusion becomes apparent when we move from very simple speech acts where both sides share tacit understanding, such as the recipe situation, to situations where that tacit understanding can’t be assumed.
For example, complex technical jargon, such as ‘how you get a TOP but non-DIFF 4-manifold by surgery on a Kummer surface there’s this fascinating cohomology intersectional form related to the exceptional Lie algebra E8...’, does not transmit knowledge freely (Stewart 1992:x). In this instance it is clear that communication fails for people who don’t understand what the words refer to, and the fact that it is ‘codified’ does not make it any easier to transmit or understand. Given that most scientific and technical communication takes the form of jargon, and that innovation by definition involves new things where the categories are not fixed, assuming this tacit understanding away is going to lead to flawed understanding and dangerous policy.
Similarly, if having ‘codified knowledge’ is equated with having codes, then it is not at all clear from the speech act perspective that having codes implies more knowledge. One only has to point out that the British buy the most cookbooks per capita in the world, far more than the French and Italians, to see that possession of cookbooks is not to be equated with possession of cooking knowledge or skills.50
49 Moreover, the description and the capability are not the same thing. If the extent of codification were endogenous, it would raise the obvious question of why firms do not replace all their expensive people with cheap computers – if all their knowledge could be turned into computer programmes, why haven’t firms done it? Indeed, why employ any people at all?
50 Moreover, there is a danger that by concentrating on the coded part, we overlook changes in our explanations and understanding which are so essential to science. The suggestion that science involves taking badly understood tacit notions and codifying them into algorithms (see for example Figure 2 in Cowan et al (2000)) is directly contradicted by the history of quantum mechanics. Starting in 1900 (with a shake-up in 1927-8), quantum mechanics began, rather than ended, with a codified algorithm, and the last century has been spent trying to understand what it means. Moreover, while the early papers are probably understandable by most scientists, the more recent papers are impossible to understand without a long mathematical apprenticeship. This suggests that Hicks was correct in arguing that the important issue is the tacit understanding of what ‘codes’ refer to and not the codes themselves. This in turn creates rather serious problems for the notion that if you codify knowledge you make it easy to learn and transfer. Could it be that by using speech acts you can exploit divisions of scientific labour and specialise your knowledge more easily, and in doing so deal with more complex phenomena and produce understanding that makes tacit knowledge far more important, and the transfer of knowledge more difficult? If this is the case then ‘codification’, in the traditional sense of writing things down, could make some knowledge harder, rather than easier, to transmit.
Discussion and Conclusion
So far the paper has argued that there are potential problems with the concept of codification and a number of breaks in the causal chain needed to convince us that the traditional view of knowledge needs to be replaced. It has criticised the notions that tacit knowledge is codified, that tacit and codified knowledge are determined by costs and benefits in a non-trivial sense, and that codified knowledge divorced from the tacit knowledge needed to understand it is a useful concept. In each case, the traditional view has been supported while the codification position, especially in its more extreme form, has been found problematic.
However, while the explanations outlined above may convince some, they are not going to be at all convincing to the more extreme objectivism of Cowan et al (2000). To proponents of this more extreme version all the previous talk of tacit understanding, neurology, speech acts etc., is irrelevant, because ‘in reality’ it can all be reduced to code-books. This is a metaphysical doctrine that is immune from empirical criticism: anyone who proposes that tacit knowledge might be important is ‘clearly’ wrong, since tacit knowledge is itself, in reality, only a set of non-manifest code-books.51 Consequently, refuting this position requires more than empirical analysis.
In this final section of the paper the concept of the code-book will be analysed and shown to be without Fregean
sense. The main point is to explore how a theory of codification is possible in the first place and shed some light
on the reasons behind the tensions in the science policy literature.
How is an extreme theory of codification even possible?
The paper has hopefully shown that the traditional view of tacit knowledge doesn’t need any radical revision and has more life in it than the proponents of codification suggest. Biology and speech act theory seem to support the widely held traditional view. But in Cowan et al (2000) this view is rejected as ‘serious taxonomic and semantic confusion’, to be replaced by an abstract code-book that is real even when it is not manifestly present. What is so remarkable about this more extreme codification theory, and what forms the starting point for this discussion, is the very possibility of it existing – that is, that it is possible at all to produce a theory of knowledge without looking at the brain. This, I would argue, is like having a theory about how cars work without mentioning engines. What is more, this extreme codification theory, although plausible, was found to have a problematic relationship with the empirical evidence, which raises two questions: what made the theory potentially problematic, and if there is this theory, may there be others out there?
To understand how a theory of the code-book is even possible one needs to look at the type of explanation that it provides. To understand these explanations I have drawn extensively on Searle’s 1993 discussion. Newell (1982) explains that there are three levels of explanation – the hardware level (biology and neurology), the program level (algorithms and code-books) and the knowledge level (beliefs, ideas etc.), which following Searle (1993:215) I will refer to as the intentionality level. Polanyi’s phenomenology worked at the intentionality or knowledge level, Simon, Newell and their followers worked at the program level, and neurologists like Edelman work at the hardware level. The previous section showed how the intentionality and hardware levels were mediated by biology. This section looks at how they relate to program level explanations.
51 Cowan et al (2000:218) seem to take this line to extremes by arguing that even mathematical theorems that prove that mathematical truth cannot be reduced to information processing are potentially solvable by improved information processing.
Knowing about Knowing
If I wanted to know how to buy a drink on a hot day, the question I might ask my subject would be ‘how do you buy a drink?’, or ‘what do I do in order to buy a drink?’ It is important to note that the question has an intentional aim and consequently an implicit teleology.
It is possible to produce a hierarchy of causes going back in time that are more or less appropriate levels of
analysis. Thus we will want to know where to go, but can probably do without explanations of how the concept of
money was developed, or how heat energy is dissipated when we move our arms.
Our subject might explain that buying a drink involves a four-step process: 1) go down the hill, 2) pick a drink, 3) take it to the counter, and 4) pay. This provides a causal chain that explains how to buy a drink. Taking a hugely over-simplified view of causation, each of these tasks can be broken down further. For example, the first task, going down the hill, can be performed by car, by bicycle or on foot; the second, picking a drink, can be based on it being hot or cold, its taste, or the colour of the can; the third, taking it to the counter, can be done by walking, hopping or skipping; and the last, paying for it, can be done with either cash or credit card. If we asked our subject for more details, we could exclude alternatives at the next level down. This could be repeated until we reach our problem solving heuristic.
However, and this is the important point, each of the causes in our explanation is the name of the effect of the level of explanation lower down the hierarchy. For example, at the top level, going down the hill is the one part of the chain of causes that allows the person to buy the drink. It is also the effect of going by car, walking or cycling, which are the causes at the next level down the hierarchy.
What we have, therefore, is a chain of causes and effects, and at each level of our explanation we have what Polanyi has called an operational principle – that is, an explanation of how a task is performed that fits within an unarticulated, teleological framework. From this operational principle it is possible to infer how the world works and gain scientific understanding. In general, the more detail we get the better our understanding will be.
When we have accurate enough understanding we can fill in any gaps and model that operational principle with
simplified mathematical truth-games which have the potential to tell us interesting things about the world.
Moreover, they can explore the implications of our causal explanations and test them against statistical data.
However, it is also possible to take another, non-empirical route from this process. This involves taking the operational principles (i.e., tasks 1, 2, 3 and 4 outlined above), looking at the next level down in the causal hierarchy (whereby 1 would be the effect of walking, driving or cycling) and assuming that they are the result of a single common abstract cause. That is, instead of the top level being the common effect of a range of different causes, all the different lower level processes are taken to be expressions of a single underlying abstract cause. For example, walking down the hill, driving down the hill, and cycling down the hill all become expressions of an underlying common cause, say a ‘going down the hill algorithm’, that is being expressed in three different ways. The previous method involved assuming that common causes will have common effects and using that to produce models, while this involves assuming that common behaviour (or effects) has a common abstract cause.52
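To make the contrast concrete, the sketch below (my own illustration, not part of the original argument; the dictionary keys and entries are hypothetical labels taken from the drink-buying example) writes the four-step example as a small Python structure in which each top-level step is simply the name of an effect produced by several distinct lower-level causes.

# The 'buying a drink' hierarchy: each top-level step names an effect that can be
# produced by several distinct lower-level causes.
buy_a_drink = {
    "go down the hill": ["walk", "cycle", "drive"],
    "pick a drink": ["by temperature", "by taste", "by colour of the can"],
    "take it to the counter": ["walk", "hop", "skip"],
    "pay": ["cash", "credit card"],
}

# Empirical route: treat each label as the common effect of the different
# lower-level causes listed under it.
for effect, causes in buy_a_drink.items():
    print(f"'{effect}' is the effect of any of: {', '.join(causes)}")

# Non-empirical route criticised in the text: invert the relationship and treat
# the label itself (e.g. a 'going down the hill algorithm') as a single abstract
# cause that walking, cycling and driving merely 'express'. Nothing in the data
# changes; only a name has been promoted into a cause.

The sketch is only meant to show where the inversion happens: the dictionary key is a convenient name for a family of causes, and the extreme codification move treats that name as if it were itself a causal mechanism.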
Because this common abstract cause is shared among all the various operational principles that will produce a desired behaviour, its proponents can argue it must be ‘deeper’ and ‘broader’ than normal causes. Consequently, because it is more abstract, its proponents think it must be more ‘scientific’.
However, because there are an infinite number of theories that can produce a given range of effects, the choice of this abstract causation comes down to pre-empirical and often unarticulated ideas about what the explanation should be like – Hegel’s ‘search for certainty’. Sociobiologists, for example, put behaviour down to abstract genes,53 AI people put behaviour down to abstract algorithms, and the proponents of extreme codification theory put behaviour down to abstract code-books. Whatever they choose, their level of explanation exists between the hardware level of real genes, real world organisational routines, real neuronal processes etc., and the intentionality level of explanation in terms of ideas, beliefs etc. (Searle 1993). This is the program level of explanation that allows the proponents of code-books to ignore the neurological hardware level of explanation and its production of Polanyi’s intentionality level.
52 The tension here is therefore not between empirical evidence and abstract mathematical models. Instead it is between empirical and mathematical explanations that use real causes, and empirical and mathematical explanations that use abstract causes that are really just imposed names for common effects.
53 Despite there being plenty of opportunity to prove that such genes exist and function as they suggest.
Choosing between the levels comes back to a metaphysical argument about reality. The argument in Cowan et al
(2000) is that tacit knowledge is an appearance while the code-book, codified knowledge, and even non-manifest
code-books are real.54 By this they are asserting that ‘algorithmic information processing’ and ‘code-books’
including ‘non-manifest code-books’ form part of the fabric of reality and are intrinsic properties with causal power –
in the same way that the laws of physics are (c.f. Searle 1993).
But from our perspective programme level explanations don’t provide intrinsic causal properties. Instead of investigating intrinsic causality, the proponents of codification have taken a motley of different causes with similar effects, given them a new name, and then given that name causal powers. The causality isn’t intrinsic at all; it is imposed from the outside (Searle 1992, c.f. Taylor 1995).55
So what appears to be a metaphysical difference is instead a conceptual error, which explains why the theory is so convincing and so immune from empirical criticism. This is because it adopts a pre-Newtonian standard of proof – ‘formal consistency’ – rather than the post-Newtonian standard of ‘growth of knowledge’. As a consequence, if they find a mathematical model or a computer program that will mimic the behaviour they are investigating, or another process with a different cause but the same effect, they can claim that this proves they have found some abstract underlying cause. Instead, it shows that they have just found another very different cause that produces the same effects, which they can simply add to their collection.
If code-books are meant to be real and intrinsic then it would seem their proponents would be forced into claiming their underlying algorithms or codes exist in some Platonic sense. We might therefore ask what exactly this means. If code-books are meant to exist in reality, in the sense that we can pick them up, then the claim is obviously false. People can do many things that aren’t written down, and knew how to do many things before writing was invented. Moreover, our sea anemones knew how to respond to different types of starfish without being able to read. If code-books are meant to exist in some abstract sense, we could ask where? And how exactly (i.e. can we have a mechanism) do they interact with the real world? How do these code-books, especially the non-manifest code-books, interact with our cognitive processes? Do they use a wave or a particle to carry the information? If I stand behind a wall will it get through? If so, how come it interacts with the brain and not the wall? Does the transmission take place instantaneously or is there a delay? If there is a delay, what causes it? If there is no delay, does this imply information transfer faster than the speed of light? (c.f. Deutsch 1997:160).
Moreover, how do they know it is a code-book? Could it not be another mysterious force? Hegelian Geist? Descartes’ Demon? Platonic essences? Or, worst of all, Polanyi’s mystical tacit knowledge? Could ‘code-book’ and any one of these terms be co-extensive, to the extent that anything we could ascribe to a code-book could be ascribed to them? If we were to take the codification literature and replace ‘code-book’ with some other abstract entity the theory would seem obviously problematic. But the terms seem to be co-extensive, with no way of experimentally choosing between them. In short, it seems that the whole concept of the ‘code-book’ lacks Fregean sense, and should be rejected.
Understanding the Tensions in the Science Policy Literature
Hopefully, this paper will have shown that there are potential problems with the concept of the code-book and more generally with the idea of codification. However, the paper is not going to convince many people, because the theoretical treatment of causation makes the more extreme version of the theory immune from criticism. But, for those who are at least a little convinced, its plausibility should be a cause for concern. The question could then be asked: if the theory is potentially problematic and yet so convincing, are there others like it about? While deciding would require case-by-case analysis, my guess would be that there are.
54 Recall: ‘To the outside observer, this group appears to be using a large amount of tacit knowledge in its normal operations. A ‘displaced codebook’ implies that a codified body of common knowledge is present, but not manifestly so.’ (Cowan et al 2000:232, their emphasis).
55 This argument is derived from Searle’s attack on the AI position (1993). Algorithms are held to be intrinsic properties that
can be run on any 'computationally equivalent' hardware. Thus a computer could be made out of mechanical cogs,
hydraulics, silicon chips, cats and mice or pigeons trained to peck (Searle 1993:206-207). 'The physics is irrelevant in so far
as it admits of the assignment of 0s and 1s and of state transitions between them.' (ibid.) This has two consequences.
'1. The same principle that implies multiple realizability would seem to imply universal realizability... everything would be a
digital computer, because … you could describe anything in terms of 0s and 1s.
2. Worse yet, syntax is not intrinsic to the physics. The ascription of syntactical properties is always relative to an agent or
observer who treats certain physical phenomena as syntactical... The multiple realizability of computationally equivalent
processes in different physical media is not just a sign that the processes are abstract, but that they are not intrinsic to the
system. They depend on an interpretation from outside.' (Searle 1993:208).
For a start, a large part of the theory of codification itself. This is not just the idea of the code-book but the wider theory. As Foray and Cowan (1995) note, codification is for them a term that is applied to three different processes – creating messages, creating models and creating languages. Each of these affects the nature of knowledge in three different ways, and each will have an influence on how it is transferred and used. But the underlying causality in each case is different. The three different causes may have the same effect (helping knowledge get transferred), so it is useful, when analysing the world at a level where exactly how things get transferred is not relevant, to call them codified knowledge, as we do in normal conversations. This is what the traditional theory has done. If you examine the ideas of the proponents of codification, they have gone further. Codification has gone from being the name of an effect of three different causes to being an abstract cause in itself.56
This, I would argue, is why the idea of the codification of tacit knowledge is so convincing, why it is largely immune from empirical attack, why it is so adaptable, why it is so slippery to define and, moreover, why it should be rejected. The theory of codification lacks explanation and a causal mechanism. If this analysis is correct, then the extent to which IT changes how knowledge is used is going to be an almost purely empirical question.57
However, what is good for the goose is good for the gander – most of the criticisms of codification could be applied to individualistic treatments of tacit knowledge that ignore social interactions. The present author has argued that tacit knowledge explains why firms hold together, why project teams need to co-locate and why technology transfer is difficult. While tacit knowledge no doubt plays a small role in these processes, there are many more important causal processes at work, such as those relating to social interaction between individuals, that cannot be reduced to aggregated individual tacit knowledge. Therefore, using tacit knowledge, as understood at the individual level, as a name for these and imbuing it with mystical causal powers is a ‘serious semantic and taxonomic confusion’, and the proponents of codification are absolutely correct to attack it.
Are Nelson and Winter only half right about tacit knowledge?
The initial aim of this paper was to break explanations down into a three-fold typology and divide the world, as Searle has attempted to divide cognitive science, into good (intentionality-hardware) and bad (program).58 This aim is unobtainable: most explanations incorporate different levels of explanation because it is so convenient to talk in program level terms when looking at the causal levels below. For example, when one is talking about knowledge interactions within a firm it is far easier and more useful to talk of knowledge being transferred in an abstract sense without going into the details of how it actually happened. The danger only emerges when one assumes that the abstract program level is real and can be used instead of the causal, hardware level. This interplay can perhaps best be explained by looking at Nelson and Winter’s theory.
Nelson and Winter position their theory largely at Simon’s middle, program level and concentrate on problem solving routines (1982). The importance of their work is that they re-articulate the categories involved in industrial dynamics. Using an evolutionary theory, they conceptualise the higher levels in the causal hierarchy in terms of technological trajectories that exist within market and non-market selection environments. They conceptualise dynamic search routines in the middle, just above the industry specific lower levels, which they relate to sectoral diversity. Moving along these slowly changing technological trajectories typically involves finding sector specific economies of scale, improving control, and increasing mechanisation. These are found using problem solving search heuristics that are analysed in great depth and related to Polanyi’s discussion of tacit knowledge.
So when Nelson and Winter are discussing what routines are, tacit knowledge is invoked in causal terms, but when
talking about how routines are used within the firm the underlying tacit aspect can be ignored, and they can be
conceptualised in Simon-like terms as search routines. Even though Polanyi and Simon are incompatible when
talking about knowledge, there is no need to choose which is correct as they are being used at different levels.
56 A similar process of questioning could be applied here: exactly what causal mechanisms are at work during the codification process? How exactly does IT relate to knowledge? Will the codification be different in pharmaceuticals than it will in banking, or in 20th century Japan than in 16th century England? Why?
57 Given that the empirical analysis of the importance of IT based codified sources of knowledge, and their relationship to proxies for tacit knowledge, is being undertaken by Brusoni, Salter and Marsili (forthcoming), the analysis here could be used to make some testable predictions. Firstly, codified IT based sources of knowledge will not be as important as more tacit local sources; secondly, the ability to use any IT based knowledge sources will be dependent on prior accumulated learning; and thirdly, there will be a correlation between the sophistication of the use of IT and prior investments in learning. If these predictions turn out to be false, as they might, then the traditional view will be wrong, and codification theory may have some life in it yet. But until then, on theoretical grounds alone, it seems problematic.
58 Explanations that stop at one operational principle level can still be used to infer causation. So unlike Searle, who regards Simon’s work in cognitive science as without merit, I would argue that he has produced very useful work by specifying operational principles which can be used to infer scientific causation, especially at the organisational level.
But what about the work that has followed? The Nelson and Winter theory can then be taken as a starting point for
empirical analysis or used to develop mathematical models that explore how different causal processes relate to
the empirical evidence.59 This could involve looking for example at the role of tacit knowledge in problem solving or
the use of technology at the level of routines. Or it could involve looking at how routines are used and taking the
tacit part for granted. In both cases the causal processes that are operating at the hardware level of explanation
can be examined and modelled mathematically.
Alternatively, the theory could be taken in another direction. Rather than stating that routines are used to travel
along technological trajectories by providing sector specific sources of scale, mechanisation and control etc., one
could switch cause and effect, so that search routines become abstract things that cause movement along
trajectories in their own right.60 So rather than being a means towards scale economies and mechanisation etc,
they become ends in themselves. As a consequence, the explanation is at the program level and not at the
hardware level.
Unlike the hardware level explanation, the program level explanation can explain everything in every industry; it is immune from empirical criticism (because any contradictory empirical evidence is really a search routine), provides a deep and general explanation, and can be used instantly for policy and consulting. Moreover, it can piggyback on search models that are mathematically exploring our understanding at the hardware level (for example, history friendly models within specific industries) by incorrectly using them as yet more pre-Newtonian empirical confirmation.
Obviously, the world doesn’t conveniently divide up along these lines, as people will mix the hardware and program level explanations. But if the analysis in this paper is correct then there will be tensions between them. A bit of amateur Actor Network theory would suggest that these different explanations would not confront the theoretical tensions between themselves head on. Instead they would consolidate themselves within different journals, departments, invisible colleges and funding sources, and take over the peer review processes (c.f. Becher 1999). Moreover, they would strengthen themselves by linking to similar theories and explanations, carefully avoiding any tensions unless they opened up opportunities for network strengthening. As a consequence, what looks like agreement and calm could be balanced tensions waiting to come apart.
It would seem, then, that these tensions are going to be extremely difficult to untangle. This problem is made worse because, as Nelson showed in The Moon and the Ghetto, different ways of looking at the world are more or less useful at different levels of analysis, and moreover, one doesn’t have to be correct to be useful for policy. Given that program level analysis is relatively easy to do, is great for policy because it has an in-built teleology (and is far better than hardware level explanations, which are pretty dismal for policy), and is arguably, from a Richard Whitley-like perspective, actively encouraged by both the institutional and publication structure of academia, it isn’t going to go away. Maybe the proponents of codification were right after all. We can improve our understanding by making our tacit, unstated assumptions explicit.
Acknowledgements: I am grateful to Ed Steinmueller for ‘wise words’ that have helped reduce a lot of conceptual
confusion.
59 For example, in Nightingale (2000) I analysed the impact of IT on innovation in pharmaceutical firms. I found that genetics technology had transgressed the Simon middle ground, that the search routines were themselves now achieving economies of scale and scope, and that the process could be better conceptualised as a socio-technical system than a hierarchy. Moreover, in unpublished work I found that the ‘natural’ trajectories were increasingly being constructed by financial institutions. In contrast to a Simon type assumption that they could be taken as constant and ignored, they were instead actively involved in causing change at Nelson and Winter’s middle level of routines. ABB, for example, changed its structure from a seven-division company to a three-division company to provide more transparent accounting information for financial markets, not to improve its search routines.
60 ‘Routines as genes’ can then be either an operational principle related to sector specific industrial change or an abstract program level explanation.
Bibliography
Alic J. A., (1993) ‘Technical Knowledge and Technology Diffusion: New Issues for US Government Policy, Technology
Analysis and Strategic Management, Vol 5., No., 4., 369-383
Anderson, J. R., (1983), The Architecture of Cognition, Harvard University Press, Cambridge MA
Arora A., and Gambardella A., (1994) ‘The changing technology of technical change: general and abstract knowledge and the
division of innovative labour’ Research Policy, 23:523-532
Austin, J. L., (1975), How to do things with Words, 2nd Edition, Harvard University Press
Bartlett, F.C, (1954) Remembering: A Study in Experimental and Social Psychology, Cambridge University Press, Cambridge
Becher T., (1999) Professional Practices: Commitment and Capability in a Changing Environment, Transactions Publishers,
New Brunswick.
Berry D. C. (1994) Implicit Learning: Twenty-Five Years on: A Tutorial’ pg. 755-82 in Attention and Performance 15: Conscious
and Non-conscious Information Processing, Attention and Performance Series, M. M. Carlo Umilta (ed.), MIT Press,
Cambridge Mass
Brusoni S., Salter A., and Marsili O., (2001) Innovation and Codified Knowledge in Dutch Manufacturing: Innovation Policy
Implications of Codification – Working Paper
Buckner R. L., et al (1995), Functional Anatomical Studies of Explicit and Implicit Memory, Journal of Neuroscience, 15, 12-29
Cohendet P., and W. E., Steinmueller (2000) ‘The Codification of Knowledge: A Conceptual and Empirical Exploration’
Industrial and Corporate Change, 9, 2, 195-211
Collins H. M., (1990) Artificial Experts: Social Knowledge and Intelligent Machines, Cambridge MA, MIT Press
Carlo Umilta, M. M. (1994) (ed.) ‘Introduction’ in Attention and Performance 15: Conscious and Non-conscious Information
Processing, Attention and Performance Series, MIT Press, Cambridge Mass
Cheeseman, J. M., and Merikle, P. M., (1984) ‘Priming with and without awareness’ Perception and Psychophysics, 36, 387-95
Cowan, R., David, P. A., and Foray, D., (2000), ‘The Explicit Economics of Knowledge Codification and Tacitness’, Industrial and Corporate Change, 9, 2, 211-253
Damasio, A., (1994) Descartes Error, Emotion Reason and the Human Brain Putnam Books New York
Damasio, A. R., (1996), ‘The Somatic Marker Hypothesis and the Functions of the Prefrontal Cortex’ Philosophical
Transactions of the Royal Society of London, Series B: Biological Sciences, 351, 1413-20
Damasio, A., (1997), ‘Deciding advantageously before knowing the advantageous strategy’ Science, 275:1293-95
Damasio, A., (1999) The Feeling of What Happens, Body and Emotion in the Making of Consciousness, William Heinemann,
London
Damasio A., and Damasio, H., (1993) ‘Language and Brain’, Scientific American 267: (3) 89-95
David, P. A., and D Foray 1995, Accessing and Expanding the Science and Technology Knowledge Base, STI Review of
OECD
Deutsch D., (1997) The Fabric of Reality, OUP
Dixon, N. F., (1971) Subliminal Perception: The Nature of a Controversy, New York, McGraw-Hill
Edelman G., (1992), Bright Light, Brilliant Fire: On the Matter of the Mind, Basic Books, New York
Edelman G., (1989), The Remembered Present: A Biological Theory of Consciousness, Basic Books, New York
Edelman G., (1987), Neural Darwinism: The Theory of Neuronal Group Selection, New York: Basic Books
Fotion, N., (2000) John Searle, Acumen Press, London
Foray, D., and Cowan R., (1995a) The Changing Economics of Technical Learning IIASA working Paper
Foray D., and Cowan R., (1997), ‘The Economics of Codification and the Diffusion of Knowledge’, Industrial and Corporate
Change, 6, 595-622
Garnham, A., and Oakhill, J., (1994), Thinking and Reasoning, Blackwell, London
Grice, H., P., (1989), Studies in the Way of Words, Harvard University Press, London
Kosslyn, S. M., (1980), Image and the Mind, Cambridge MA. Harvard University Press.
Langlois, R. N., (2001) Knowledge Consumption and Endogenous Growth J. Evolutionary Economics. 2001, 11:77-93
Lewicki, P, Hill T., and M Czyzewska, (1992), ‘Non-conscious Acquisition of Information’ American Psychologist, 47, 796-801
Lewicki, P., (1986), Non-conscious Social Information Processing, New York Academic Press,
Lieberman, (1984), The Biology and Evolution of Language’ Harvard University Press.
Libet, B., Ciba Foundation Symposia, 174, 123
Kihlstrom, J. F., (1987), ‘The Cognitive Unconscious’, Science, 237, 1445-1452
Locke J (1689) An Essay Concerning Human Understanding, Penguin (1989)
Marcel, A. J., (1983), ‘Conscious and Unconscious Perception: An Approach to the Relations between Phenomenal and
Perceptual Processes’, Cognitive Psychology,15, 2807-12
Merikle, P. M., (1992), ‘Perception without Awareness: Critical Issues’, American Psychologist, 47, 792-95
Maynard-Smith J., and Szathmary, E., (1999) ‘The Major Transitions in Evolution’, W H Freeman, London
Nagel T., (1986) The View From Nowhere, Oxford, OUP
Nelson R. R. and Winter S. G., (1982) An Evolutionary Theory of Economic Change, Belknap Harvard
Newell, A., (1982) The Knowledge Level, Artificial Intelligence, 18:87-127
Nightingale (1997) Knowledge and Technical Change, Computer Simulations and the Changing Innovation Process,
Unpublished D. Phil, SPRU, University of Sussex, UK
Nightingale, P., (1998), A Cognitive Theory of Innovation, Research Policy, 27, 689-709
Nightingale, P., (2000a), Economies of Scale in Pharmaceutical Experimentation, Industrial and Corporate Change, 9,
Nightingale, P., (2000b) The Product-process-organisation relationship in complex development projects, Research Policy, 29,
913-930
Paivio, A. (1971), Imagery and Verbal Processes, New York, Holt, Rinehart and Winston.
Penrose, R., (1988), The Emperor’s New Mind, Oxford University Press.
Pinker, S., (1994) The Language Instinct Morrow New York
Polanyi (1967), The Tacit Dimension, London Routledge
Posner M. I., (1994), ‘Attention: The Mechanism of Consciousness’, Proceedings of the National Academy of Sciences of the
United States of America, 91, 7398-403
Reber, A. S., (1989), Implicit Learning and Tacit Knowledge: An Essay in the Cognitive Unconscious, Oxford Psychological
Series no. 19. Oxford University Press
Rota G. C., (1990), ‘Mathematics and Philosophy: The Story of a misunderstanding’, Review of Metaphysics, 44, 259-71
Schacter, D. L., (1992), ‘Implicit Knowledge: New Perspectives on Unconscious Processes’ Proceedings of the National
Academy of Sciences of the United States of America, 89, 11113-17
Searle, J., (1969), Speech Acts: An Essay in the Philosophy of Language, Cambridge University Press
Searle, J., (1983), Intentionality, An Essay in the Philosophy of Mind, Cambridge University Press
Searle, J., (1993), The Rediscovery of the Mind, MIT Press, Cambridge MA
Searle, J., (1995), The Construction of Social Reality, New York, Free Press
Searle, J., (1998), Mind Language and Society, Philosophy in the Real World, New York, Basic Books
Simon, H., and Newell A., (1972) Human Problem Solving, Englewood Cliffs, NJ: Prentice Hall
Shepard, R. and Metzler, J. (1971) Mental Rotation of Three Dimensional Objects, Science, 171, pp. 701-3
Squire, L R., (1987) Memory and Brain, Oxford, Oxford University Press
Squire and Butters 1984 (eds.) The Neuropsychology of Memory, OUP
Stewart, I., (1992) The Problems of Mathematics, Oxford University Press
Sternberg R. J., (1986), Intelligence Applied, New York Harcourt.
Taylor, C., (1995) ‘After Epistemology’ in Philosophical Arguments Harvard University Press.
Tononi, G., and Edelman G., (1999), ‘Consciousness and Complexity’, Science, 282, 1846-51
Tulving, R., (1983) Elements of Episodic Memory, OUP.
Turro, N. J. (1986) 'Geometric and Topological Thinking in Organic Chemistry' Angew Chem International Edition English 25, 882-901