THE CONCEPT OF ACTIVITY IN SOVIET PSYCHOLOGY

O. K. Tikhomirov
THE PSYCHOLOGICAL CONSEQUENCES OF
COMPUTERIZATION*
Editor’s Introduction
In this paper Tikhomirov argues that we have often misconstrued the connection
between computerization and human activity. The way he tries to rectify this is by going
back to Vygotsky’s notions about the importance of sign systems in mediating higher
mental functions.
Before getting into the details of how he analyzes the role of the computer,
Tikhomirov reviews some of the other approaches that have been used. First he deals
with the “theory of substitution”. According to this theory, the computer’s role is to
replace the human in intellectual spheres. Although Tikhomirov agrees that in the case
of certain types of problems the computer can have the same input and output as a
human being, he rejects the notion that this means that a program of a computer’s
work is a theory of human thinking. Next he deals with the “theory of supplementation”,
which argues that the computer’s role is to increase the volume and speed of human
processing of information. According to Tikhomirov, this approach relies on our ability
to make a formal analysis of human mental processes in terms of information theory.
The main objection he raises against the theory of supplementation is that it assumes
that the inclusion of a computer in human activity does nothing more than provide a
purely quantitative extension of existing activity.
Tikhomirov argues that both of these approaches are misguided because they fail
to understand the essential role of mediation in human activity. He claims that if we
view computerization in terms of the Soviet idea of mediation, we will begin to see it in
a completely new light – that is, he maintains that the real question is not how the
computer can replace mental processes or how it can make a purely quantitative
addition to already existing psychological processes, but rather that computer programs
should be viewed as a new kind of sign system that can mediate human activity.
As Vygotsky and his followers have pointed out, when a new form of mediation
is introduced into activity, it does not simply expand the capacity of the existing activity
but often also causes a qualitatively new stage to emerge. Just as Vygotsky argued that a
new form of mediation (speech) gives rise to a qualitatively new stage of thinking in
ontogenesis, in this paper Tikhomirov is arguing that a new form of mediation
(computerization) gives rise to a qualitatively new stage of thinking in history. (He also
connects it with ontogenesis.)
In building his argument Tikhomirov uses some of the tenets of the theory of
activity introduced by Vygotsky. For example, he relies on the historical approach (one
form of genetic explanation) and the notion of the tool as the most important component
of human activity; and he utilizes several of the concepts and distinctions Vygotsky
proposed in connection with his analysis of sign systems, the distinction between sense
[smysl] and meaning [znachenie], among others.
J. V. W.
* From O. K. Tikhomirov (Ed.), [Man and computer]. Moscow: Moscow University Press, 1972.
Among the new theoretical problems that have confronted psychology in the
course of the scientific-technological revolution is study of the psychological
consequences of computers.
Does the computer affect the development of human mental activity? If so, how?
In order to answer these questions, we must compare how human beings and computers
solve the same problem. Such an analysis allows us to establish whether or not human
activity is reproduced in the computer. First, let us deal with the computers that have
already been created; with regard to future models, I shall limit myself to evaluating
concrete schemes for perfecting functional potential.
In the last few years, the analogy between thought (and the behavior of organisms
in general) and the working principles of computers has come to be used widely.
Special significance has been given to so-called heuristic programs (Simon, Newell).
The very term heuristic is a reflection of a certain stage in the development of the theory
of programming problems for the computer. It designates any principle or system that
contributes to decreasing the mean number of steps needed to make a decision.
Heuristics are mechanisms that guide a search so that it becomes more selective, and
therefore efficient. It is important to point out that this interpretation does not
correspond to the broader meaning of the term heuristic, which is attributed to Pappus
and means “the skill to solve problems.” This latter meaning, which had very indefinite
content, was used until the sciences that study human thinking were differentiated, and
referred to devices of analysis and synthesis.
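The claim that a heuristic "decreases the mean number of steps needed to make a decision" can be made concrete with a small sketch (not from Tikhomirov's text; the puzzle and function names are illustrative). Both searches below reach a target number by repeated +1 or ×2 moves; the blind breadth-first search expands states in order of discovery, while the heuristic version always expands the state ranked closest to the goal, and so examines fewer states.

```python
from heapq import heappush, heappop

def blind_search(start, goal):
    """Breadth-first search: no guidance, expands states in discovery order."""
    frontier, seen, steps = [start], {start}, 0
    while frontier:
        state = frontier.pop(0)
        steps += 1
        if state == goal:
            return steps  # number of states examined
        for nxt in (state + 1, state * 2):
            if nxt <= goal and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)

def heuristic_search(start, goal):
    """Best-first search: a distance heuristic makes the search selective."""
    frontier, seen, steps = [(goal - start, start)], {start}, 0
    while frontier:
        _, state = heappop(frontier)
        steps += 1
        if state == goal:
            return steps
        for nxt in (state + 1, state * 2):
            if nxt <= goal and nxt not in seen:
                seen.add(nxt)
                heappush(frontier, (goal - nxt, nxt))

# The guided search reaches the goal after examining no more states
# than the blind one, illustrating the selectivity Tikhomirov describes.
assert heuristic_search(1, 97) <= blind_search(1, 97)
```

Note that this illustrates only the narrow programming sense of "heuristic" discussed above, not the broader human skill of solving problems.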
The possibility of solving with a computer problems that earlier had been solved
by humans led several scholars to the conclusions that:
1. A program of a computer’s work is a theory of human thinking.
2. The possibility of reproducing some functions on a machine is the criterion of the
correctness or incorrectness of a truly psychological explanation of activity.
3. A negative response to the traditional question “Can a machine think?” is
“unscientific and dogmatic,” since a comparison of the behavior of machine and
human being often reveals identical results.
On the basis of these ideas, the influence of computers on intellectual activity is
viewed as follows. The computer replaces the human being or substitutes for him/her in
all spheres of intellectual work. This is the theory of substitution. However, in order to
check the validity of this theory of the influence of computers on human mental
processes (a theory that begins with the assumption that the heuristic program
reproduces human creative thinking), we must analyze how closely human processes for
directing a search for the solution of a problem correspond to those used by the computer in
performing the same task. As has been shown in our laboratory, these processes are not
the same. A large part of the control mechanisms of search in humans in general are not
represented in existing heuristic programs for computers. When computer heuristics do
resemble human ones, they are significantly simpler and differ from them in some
essential way. Computer reproduction of some externally observed results of human
activity has been carried out without reproducing human heuristics.
On the basis of the data collected in the course of experimental psychological
investigations, we can state that the idea of substitution does not express the real
relations between human thought and the work of the computer. It does not accurately
represent how the latter influences the development of the former.
One can hardly determine how computers influence the development of human
mental processes without considering what human thought is and what important
historical stages in the development of thinking can be identified up to the time of the
appearance of computers. We approach the problem in this way in order to examine the
question of computerization in a broader historical perspective – the perspective of the
development of human culture.
In recent years the information theory of thought has become very popular. We
think it necessary to contrast this theory with the psychological theory of thought (they
are sometimes incorrectly viewed as being identical). The former theory is often
formulated as a description of thought at the level of elementary information processes
and is concerned primarily with the characteristics of information processes.
The information theory of thought consists of the following ideas. Any behavior,
including thought, can and must be studied relatively independently of the study of its
neurophysiological, biochemical, or other foundations. Although the differences
between the brain and the computer are evident, there are important functional
similarities. The idea that complex processes of thought consist of elementary processes
of symbol manipulation is the main premise of the explanation of human thought at the
level of information processing. In general, these elementary processes are described as
follows: Read the symbol, write the symbol, copy the symbol, erase the symbol, and
compare two symbols. It is not difficult to see that the “elementary informational
processes” or “elementary processes of symbol manipulation” are nothing but the
elementary operations in the operation of a calculating machine. Thus, the desire to study thought
“at the level of elementary informational processes” is actually interpreted as a demand
to explain human thought exclusively within a system of concepts that describe the
operation of a calculating machine.
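The five "elementary processes of symbol manipulation" listed above can be rendered as operations on a simple tape-like store. This is a minimal sketch of the calculating-machine view being described, not code from the source; the class and method names are ours.

```python
class SymbolTape:
    """A minimal store supporting the five elementary operations named above."""

    def __init__(self, cells):
        self.cells = list(cells)

    def read(self, i):             # read the symbol
        return self.cells[i]

    def write(self, i, symbol):    # write the symbol
        self.cells[i] = symbol

    def copy(self, src, dst):      # copy the symbol
        self.cells[dst] = self.cells[src]

    def erase(self, i):            # erase the symbol
        self.cells[i] = None

    def compare(self, i, j):       # compare two symbols
        return self.cells[i] == self.cells[j]

tape = SymbolTape(["a", "b", "a", None])
assert tape.read(0) == "a"
tape.copy(0, 3)           # now cells 0 and 3 hold the same symbol
assert tape.compare(0, 3)
tape.erase(1)
assert tape.read(1) is None
```

Tikhomirov's point is precisely that such operations, however combined, capture only one isolated aspect of human thinking.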
The fundamental concepts within this framework are: (1) information, (2) the
processing of information, and (3) the information model. Information is in essence a
system of signs or symbols. The processing of information deals with the various types
of processing of these symbols according to given rules (“symbol manipulation,” as
some authors refer to this). The information model (or “the space of the problem,” as
opposed to the environment of the problem) is the information about the problem
represented or collected (in the form of a coded description) in the memory of the
system that solves the problems. A complicated, but finite and fully defined, set of rules
for processing information was seen as forming the basis for the behavior of a thinking
human. This became, as it were, the position differentiating “scientific” approaches
from “nonscientific” ones (i.e., those tolerating “mysticism”).
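The three fundamental concepts can be given a toy rendering (an illustration of the informational framework being described, not code from the source): the "information" is a set of symbols, the "processing of information" is a fixed set of rules that rewrite them, and the "information model" (the space of the problem) is the set of states reachable from the coded initial description.

```python
# Toy rendering of the informational framework: a problem "space"
# generated from a coded initial state by fixed rewriting rules.
def problem_space(initial, rules, limit=100):
    """Enumerate every state reachable from `initial` under `rules`."""
    space, frontier = {initial}, [initial]
    while frontier and len(space) < limit:
        state = frontier.pop()
        for rule in rules:
            nxt = rule(state)
            if nxt is not None and nxt not in space:
                space.add(nxt)
                frontier.append(nxt)
    return space

# "Information": strings over the symbols A and B.
# "Processing of information": rules that rewrite those strings.
rules = [
    lambda s: s + "B" if len(s) < 4 else None,  # append a symbol
    lambda s: s.replace("AB", "BA", 1),         # swap adjacent symbols
]
space = problem_space("A", rules)
assert "ABB" in space and "BA" in space
```

On this view the "fully defined set of rules" exhausts the system's behavior, which is exactly the assumption the following paragraphs contest.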
If we accept the information theory of thought (i.e., the description of thought as
analogous to the work of the computer), there can be one answer to the question of how
computers affect human thought: computers supplement human thought in the
processing of information, increasing the volume and speed of such processing. This
point of view can be labeled the theory of supplementation. Within the framework of
the theory of supplementation, the relations between the functioning of humans and the
computer, if combined into one system, are the relations of two parts of one whole – the
“processing of information.” With the aid of the computer, humans process more
information faster, and perhaps, more accurately. A purely quantitative increase in their
resources takes place. Inasmuch as the theory of supplementation is directly connected
with the information theory of thought, it is necessary to examine it in more detail.
Psychologically, what does thought mean? Does the informational approach
describe actual processes of human thinking, or has it abstracted from it those
characteristics that are most essential? We shall not find the answers to these questions
through “modeling” mental processes, but through the theoretical and experimental
analysis of thought processes.
Psychologically, thought often emerges as the activity of solving a problem. The
problem is usually defined in terms of a goal in a certain situation. However, the goal is
not always “given” initially. Even if it is externally imposed, it is often quite undefined
and allows for complex interpretation. Therefore, the formulation and attainment of
goals are among the most important manifestations of thinking activity. On the other
hand, the conditions in which a goal is formulated are not always “defined”. It is still
necessary to distinguish them from the general situation of activity on the basis of
orientation or an analysis of the situation. The problem of a given goal in defined
conditions still must be analyzed. Consequently, thinking is not the simple solution of
problems: it also involves formulating them.
What enters into the conditions of a problem? In other words, with what do
humans who are solving a problem have to concern themselves? If we examine cases of
so-called visual-active thought, it may be real objects or things, and/or it may be people.
In the case of verbal thinking, it may be symbols. Is it sufficient to say of human verbal
thought that it “operates with signs” in order to express the essential aspect of thinking?
No, this is not sufficient. Following Vygotsky, in the analysis of verbal thinking I shall
distinguish the sign itself, its referent, and its meaning. In “operating with signs,”
humans operate with meanings. In the final analysis, they operate with objects of the real
world through meanings. Thus, if we describe human thought only as manipulation by
means of signs, we are extracting and focusing on a single, isolated aspect of the
thinking activity of a real person. This is precisely what the information theory of
thought does.
Actual objects or named objects that enter into the problem have important
characteristics such as values. Actions with these objects (i.e., transformations of a
situation) also have different values. There are different sources of the formation of the
values of the same element of a situation and different interrelations among these
values. The formal representation of the conditions of a problem (for example, in the
form of a graph or list of signs), which reflects some reality, is artificially isolated from
such accompanying objective situational characteristics. The conditions of the problem
may include factors such as (1) the correlation of various values of elements and means
for transforming the situation, and (2) the intention of the formulator of the problem.
These characteristics, which are lost in a formal representation, not only exist but even
determine (sometimes more than anything else) the flow of activity in solving the
problem. Thus, the psychological and informational characteristics of a problem
obviously are not the same.
Mental activity often results in humans’ generating signs (for example, in
identifying the actions that lead to a goal). However, these signs have a definite
meaning (for example, embody the principle of the actions) and value. For someone
solving a problem the meaning of a sign must be formulated, and the value emerges as
an appraisal.
Psychological examination of human intellectual activity must include an analysis
of the following areas: (1) the operational sense* of the person solving the problem, (2)
the sense of concrete attempts at solving the problem, (3) the sense of reinvestigating
the problem, (4) the sense of various elements of the situation as opposed to their
objective meaning, (5) the emergence of the senses of the same set of elements of the
situation and of the situation as a whole at different stages of problem solving, (6)
correlation of the non-verbal and verbal senses of the various types of formation in the
course of problem solving, (7) the processes of the interaction of sense formation, (8)
* Tikhomirov’s notion of “sense” [smysl] is basically the same as that of A. N. Leont’ev and V. P.
Zinchenko. See the footnote on this term in the paper in the volume by Zinchenko & Gordon. – J.V.W.
the role of sense formation in the organization of research activity in the definition of its
range (selectivity) and directionality, (9) the process of the emergence and satisfaction
of search requirements, (10) change in the subjective value or meaningfulness of the
same set of elements of a situation and actions expressed in the change of their
emotional coloring (with constant motivation), (11) the role of a changing scale of
subjective values in the organization of the flow of a search, and (12) the formation and
dynamics of the personal sense of the situation of a problem and its role in the
organization of activity in problem solving.
In human mental problem-solving, such real functional forms as sense
(operational and personal) and the values of the objects for the problem solver are not
simply neutral with regard to the informational characteristics of the material; rather,
they take part in the processes of directing the activity of problem solving in an
important way. It is this great importance that above all creates the qualitative
distinctiveness of mental activity in comparison with information processing. This is
what differentiates the psychological and the informational theories of thought.
Thus, we cannot accept the theory of supplementation in our discussion of the
problem of the influence of computers on the development of human intellectual
activity, inasmuch as the informational approach on which it is based does not express
the actual structure of human mental activity.
It is impossible to discuss the problem of the influence of computers on the
development of human mental processes without taking into account research on
artificial intelligence.
In cybernetics it is usually the case that not only narrow, specialized problems but
also general, theoretical ones are dealt with. Analyzing how these problems are
interpreted is a required step not only for evaluating the contribution of cybernetics to
the establishment of a world view but also for predicting the development of certain
areas of this science.
In the course of its development, cybernetics, which Wiener understood as a
theory of guidance and connections in the living organism and machine, was divided
into a number of areas: self-regulating systems, the modeling of human thought, and
artificial intelligence. Currently, the third area has become the leading one in a number
of countries. This has happened because, over the course of its development, the first
area encountered significant difficulties and the second and third in fact were combined.
Artificial intelligence is not simply a theme for novels. It is a scientific trend that also
requires thorough analysis from the psychological point of view. I shall look at the
interpretation of this trend that can be found in the works of Minsky, McCarthy, Simon,
and other investigators who have developed positions on the issue.
The most widespread definition is the following: Artificial intelligence is the
science whose goal is to develop methods that will enable machines to solve problems
that would require intelligence if they were solved by humans. At the same time, the
expression artificial intelligence is often used to signify human functional possibilities:
a machine is intelligent if it solves human problems. Initially, in the theory of
programming problems for computers there was a distinction, even opposition, between
two scientific courses – “artificial intelligence” and “modeling mental processes.” The
differentiation was along the following lines: The first course involved programming
problems for the computer without regard to how these problems were solved by
humans; the second course proposed programming with an attempt to duplicate human
means of solving problems in machine programs. After a time, the boundaries between
these two areas became practically nonexistent. However, the term artificial intelligence
is significantly more popular than the second, modeling mental processes. Investigators
concerned with modeling have expressed their dissatisfaction with the term simulation
since it implies an imitation (i.e., a purely external similarity of two objects that does
not reflect the scientific aspirations of the authors). On the other hand, investigators
concerned with artificial intelligence have emphasized that machine programs
eventually must be based on an account of human problem-solving.
Thus artificial intelligence now is interpreted differently than it was five to ten
years ago. Making machine methods of problem solving approximate human ones is the
strategic goal of artificial intelligence research. It is often said that there are no longer any
restrictions on the possibilities for duplicating human abilities by machine programs.
A strategy that has been thus formulated touches on a series of fundamental
problems. It is thought, for instance, that a positive answer to the traditional question
“Can a machine think?” must be linked with a materialist world-view (Borko). Neisser
expresses a similar idea: in his opinion the analogy between humans and computers is
based on materialism.
It is sometimes argued that the creation of artificial intelligence will influence
human beings’ ideas about themselves and destroy the illusion of their “uniqueness.”
With regard to the formation of a world view, this influence will be even more
significant than understanding “the place of our planet in the galaxy” or the laws of
“evolution from more primitive forms of life.” Other authors write about “undermining
human beings’ egocentric conception of themselves.” At the same time, we are already
beginning to see predictions of the peculiarities of the “world view” of future machines.
In Minsky’s opinion, when intellectual machines are constructed, we must not be
surprised to find that they, like humans, will have false ideas about mind, matter,
consciousness, free will, etc.
It is sometimes thought that the creation of intelligent machines will shed light on
the eternal mind-body problem and the role of humans in the universe. In Slagle’s
opinion, the existence of intelligent machines will support the “mechanistic conception”
that humans are only machines, and the answer to the psychophysiological problem
supposedly will be that only the body exists.
Thus, the position of mechanistic materialism is sometimes explicitly formulated
as the methodological foundation of research in artificial intelligence. No distinction is
made, however, between mechanistic and dialectic forms of materialism. The second is
ignored, and the first is proposed as the sole form of materialism.
What can one say about the influence of computers on the development of human
mental processes from a mechanistic point of view? If “only the body exists” and
humans are “only machines,” then at best one can imagine the “synthesis of two
machines” or the substitution of one machine for another. Consequently, in research on
artificial intelligence we again run into theories of supplementation and substitution.
The creation of artificial intelligence in past years relates to the future. To appraise the
probability of achieving the strategic goal of this program means simultaneously
appraising the theories of supplementation and substitution in another context.
When humans solve certain problems, what, exactly, is most important? Which
component or mechanism must one, as it were, try to transfer to machine programs?
Specialists in artificial intelligence answer as follows: We must give the computer the
system of concepts necessary for the solution of problems of a given class. We must give
machines more “semantic information.”
The interest in semantic problems is, to a significant degree, based on critical
reinterpretation of unsuccessful experiences in machine translation. In Minsky’s
opinion, the machine needs to acquire knowledge on the order of 100,000 bytes in order
for it to be able to act adequately in relation to a simple situation. A million bytes,
along with the proper corresponding organization, would be sufficient for “powerful”
intelligence. If these arguments do not appear adequate, it is suggested that these figures
be multiplied by ten.
In analyzing an artificial intelligence program it is necessary to clarify what
meaning is given to such terms (often used in philosophy and psychology) as thought,
knowledge, and solution of a problem. Usually, in the context of research in artificial
intelligence, these terms emerge in the following way. Knowledge is the ability to
answer questions. If some system answers a question, this serves as an indicator that it
possesses “knowledge.” This is the so-called empirical definition of knowledge. The
way to identify the knowledge necessary for someone to solve a given class of problems
is to observe oneself in the process of self-instruction. Human thinking is the process of
realizing an algorithm. The source of information about human thinking with which the
work of the computer is compared is often simply the definitions taken from a
dictionary (a dictionary of definitions). All four positions have a direct relation to
sciences that study human thought, to psychology in particular. Therefore, it is
advisable to examine to what degree the concrete scientific results of psychological
research are taken into account by investigators in artificial intelligence.
A distinction between formal and descriptive knowledge has existed in
psychology for a long time. Let us assume that a student knows three questions that will
be on an examination and learns the answers mechanically, “by heart.” A good teacher
would hardly be satisfied with the success of the pupil. Psychologically, knowledge is
the reflection of some essential relations among surrounding objects. It is a system of
generalizations. When a person learns “mechanically,” he/she ascertains only the
connection between the question and the answer (it is another matter that, in its pure
form, this phenomenon is seldom seen). When some information is meaningfully
acquired, it always is included in some system of a person’s past experience. Thus, in
“artificial intelligence” knowledge is treated formally and has only an external
similarity to genuine human knowledge.
The method that the advocates of this scientific trend propose for revealing the
nature of the human knowledge used in problem solving (through observations of the
processes of self-instruction) seems to be extremely limited. The fact is that in human
actions some components are consciously realized and some are not, including
generalizations. It has long been known that studies of how human beings perform
actions with objects reveal that they use practical generalizations that they do not fully
realize at the conscious level but that play an active role in the process of solving certain
problems. If we fail to take these generalizations into account, the answer to the
question of what kind of knowledge a person uses in solving a given class of problems
would be incomplete in an important way.
As pointed out above, the psychological investigation of thought reveals a
somewhat more complex organization of search processes in human problem-solving
than the analogous processes in the computer. These investigations, in particular, show that
the actual process of search in human problem-solving is not based on an algorithm.
The algorithm, at best, emerges as the product of human creative activity. In this
area the appeal is only to dictionaries, in which case the comparison of human thought
with the work of the computer appears outmoded.
It is interesting that critical analyses of attempts to identify the features of thought
that are unique to humans (as opposed to computers) have unearthed only one factor.
Slagle has “refuted” only one thesis: “The computer can do only what it is told to do,
and people can do more.” In his opinion, in some sense humans also can do only what
they are “told.” For example, heredity supposedly tells humans what to do and
determines how they derive experience from the environment. On the basis of this
reasoning, the author arrives at the conclusion that those arguments are wrong that favor
the notion that the machine in principle cannot be as intelligent as humans. It is well
known, however, that even at the level of animals’ so-called instinctive behavior,
hereditary factors do not predetermine their orientation toward the surrounding
environment in a simple way. Consequently, Slagle’s argument is not strictly correct.
The defense of machine thought we have discussed contains an assumption that
can be formulated as follows: If a technical system solves the same problems that a
human solves by using thought, this means that the
technical system also possesses thought. There is good reason, however, to consider this
initial (intuitive) assumption simply incorrect. The fact is that the scientific concept of
thought is concerned not only with the nature of the results but also with the procedural
side of human cognitive activity. Therefore, one can say that a technical system
possesses thought only when it not only solves the same problems that humans solve
but solves them in the same way. To label as thought the work of the machine that only
solves the same problems as humans do has no more foundation than to label an
airplane a bird in a scientific (for instance, biological), rather than everyday, context.
Our analysis shows that the initial assumptions of the scientific trend that has
come to be known as artificial intelligence consist of a fundamental simplification of
ideas about thinking and knowledge and about the correlation of thought and knowledge
in humans. This situation gives reason to doubt whether the advocates of the present
trend (i.e., those who want to create an automaton that reproduces all human thinking
abilities) have proved their point. One can also conclude that there are no foundations
for thinking that a dialectical materialistic world-view, being the result of the
development of a scientific system, must yield to a mechanical picture of the world.
Given this, what can we say about “artificial intelligence”? We must regard
“artificial intelligence” as a theory of programming problems for computers. The very
label is no more than a metaphor reflecting the romanticism about which, for instance,
Minsky and Papert write in their book Perceptrons (1969, p. 10).
Psychology is now developing in its own ways. We shall examine how the
problem with which we are concerned can be posed if defined within the framework of Soviet
psychology.
Traditional Soviet psychology has evolved on the basis of a historical approach to
the development of human mental processes. Vygotsky played a considerable role in
establishing this principle. In analyzing practical activity, psychologists emphasize the
tool as the most important component of human activity. This component creates the
qualitative uniqueness of human activity in comparison with animal behavior. The tool
is not simply added on to human activity: rather, it transforms it. For example, the
simplest action with a tool – chopping wood – produces a result that could not have
been achieved without the use of an axe. Yet the axe itself did not produce this result.
Action with a tool implies a combination of activation and human creative adaptation.
Tools themselves appear as supplementary organs created by humans. The mediated
nature of human activity clearly plays a leading role in the analysis of practical activity.
One of Vygotsky’s central theses is that mental processes change in human beings
as their processes of practical activity change (i.e., the mental processes become
mediated). The sign (language, mathematical sign, mnemotechnic means, etc.) emerges
as such a mediational link. Language is the most important form of the sign. In using
auxiliary means and signs (for example, in making a notch in a stick in order to
remember), humans produce changes in external things; but these changes subsequently
have an effect on their internal mental processes. By changing the milieu, humans can
master their own behavior and direct their mental processes. Mediated mental processes
initially emerge as functions distributed between people, as new forms of interfunctional relations.
emergence of: (1) logical thought as opposed to situational thought that is unmediated,
(2) mediated memory as opposed to unmediated memory, and (3) voluntary attention as
opposed to involuntary attention are all examples of the development of higher mental
functions. Writing is mankind’s artificial memory, which gives human memory an
immense advantage over animal memory. With the help of speech, humans master
thought, since the logical processing of their perception is conducted through a verbal
formulation.
We approach the analysis of the role of computers by relying on these traditions
of the historical approach to human activity. For us, the computer and other machines
are organs of the human brain created by the human hand. If, at the stage of the creation
of engines, automobiles served as tools in human activity for carrying out work that
required great expenditures of energy, at the stage of the development of computers, the
latter became the tools of human intellectual activity. Mental activity has its own
mediated structure, but the means are novel. As a result, the question of the influence of
computers on the development of human mental processes must be reformulated as
follows: How does mediation of mental processes by the computer differ from
mediation by signs? Do the new means introduce new changes into the very structure of
intellectual processes? In other words, can we distinguish a new stage in the
development of human higher mental processes?
As a result of using computers, a transformation of human activity occurs, and
new forms of activity emerge. These changes are one expression of the scientific-technological revolution. The distribution of bibliographic information and computations
in a bank, the planning of new machines and the adoption of complex decisions in a
system of management, medical diagnosis and the control of airplane movement,
scientific research, instruction, and the creation of works of art are all constructed in new ways.
The problem of the creation of “human-computer” systems is very real. The creation of
such systems involves many scientific problems: technical, logico-mathematical,
sociological, and psychological. I shall examine the psychological ones in more detail.
What is the specific nature of human activity in “human-computer” systems as
opposed to other forms of activity? In accordance with the definition generally used,
“human-computer” systems cannot function without at least one human and one
program stored in the computer’s memory. Consequently, we are talking about forms of
human activity that cannot take place without the participation of the computer.
(“Human-computer” systems generally are not taken to include the use of computers for
control processes in which a human is included only for reasons of safety or
becomes involved only when deficiencies in system functioning arise.)
One of the characteristic features of human activity in the systems we are
examining is the immediate receipt of information about one or another result of
actions. In their time, Anokhin and Bernshtein described the principle of reverse
afferentation, or the mechanism of sensory correction, as a necessary link in the
regulation of activity. Later, the mechanism of feedback as a universal principle was
formulated in Wiener’s works. The reorganization of this regulation in human
intellectual activity is one of the ways in which computers have come to be studied.
This reorganization of mechanisms of feedback makes the process more controlled.
We shall compare the process of regulating human activity through normal verbal
commands with the same process when it is aided by a computer. The similarity here is obvious, but
there is an important difference: possibilities for immediate feedback are much greater
in the second case. In addition, the computer can appraise and provide information
about intermediate results of human activity that are not perceived by an outside
observer (for example, changes in states revealed by an electroencephalogram). Thus,
with regard to the problem of regulation we can say not only that the computer is a new
means of mediating human activity, but also that the very reorganization of this activity
differs from what occurs when the means described by Vygotsky are used.
In concrete terms, what is involved in the change in human intellectual activity
when a person is working in a “human-computer” system? Usually this change is
described in terms of liberation from technical, performance components. We shall
examine in more detail what the performance component of mental activity means from
the psychological point of view.
Let us take the situation of work in a “human-computer” system. The computer
has an algorithm for solving two-move chess problems. The need to
discover the solution for a specific problem arises in the person. The solution consists of
enumerating or listing the succession of moves that would inevitably lead to a
checkmate (i.e., it consists of a plan of actions). It is sufficient to give the computer the
goal and a description of the conditions of the concrete problem in order to obtain the
plan of actions necessary to attain the goal. Can one say that the computer has freed the
person from an algorithm? No, one cannot say this, since this algorithm was not known
to the person. This means that, from the psychological point of view, this case involves
freeing the person from the necessity of going through the actual search for the solution
of the problem. As psychological studies have shown, even the solution of a chess
problem often emerges as a genuinely creative activity that includes the complex
mechanisms of the regulation of a search, which we described above. Consequently, in
this example the human is freed not from “mechanistic work,” but from “creative
work.” On a broader plane we can say that, with regard to problems for which the algorithm
either is not known to humans or is so complex that it is impossible or irrational to use it, the
computer can free humans from creative forms of search.
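The kind of "fully formalized procedure" under discussion can be illustrated with a toy sketch. This is not Tikhomirov's program; the function, the game tree, and the move labels below are hypothetical stand-ins for a real chess position. The procedure exhaustively enumerates candidate first moves and accepts one only if every opponent reply admits a mating second move — exactly the exhaustive plan-finding that replaces the human's creative search.

```python
# A minimal illustrative sketch (not Tikhomirov's program): "mate in two"
# as exhaustive enumeration over a toy game tree. All names are hypothetical.

def forced_mate_in_two(moves):
    """Return a first move that guarantees mate: for every opponent
    reply there must exist a second move whose outcome is 'mate'."""
    for first_move, replies in moves.items():
        if replies and all(
            any(label == "mate" for label in second_moves.values())
            for second_moves in replies.values()
        ):
            return first_move
    return None  # no forced plan exists in this position

# Structure: first move -> {opponent reply -> {second move -> outcome}}
position = {
    "Qh5": {
        "g6":  {"Qxg6": "mate"},
        "Nf6": {"Qxf7": "mate"},
    },
    "a3": {
        "e5": {"b4": "safe"},
    },
}

print(forced_mate_in_two(position))  # prints "Qh5"
```

A user who calls such a procedure obtains the plan of actions without mastering the enumeration itself, which is precisely the mediation by an unmastered formalized procedure discussed below.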
At first glance our conclusions support the theory of substitution, in which the
work of the computer replaces creative activity. Such substitution can actually take
place, but it is relevant for only a certain class of problems, namely, those for which an
algorithm can be, and actually has been, worked out. With regard to this class of
problems, the psychological structure of the activity of the user and that of the
programmer will be essentially different.
The user who obtains an answer to a problem from the computer may not know
completely the algorithm for its solution. This algorithm, which was developed by
another person, is used after its creation without being mastered (i.e., it remains in
computer memory). An algorithm is a fully formalized procedure, worked out by the
programmer, for the solution of a given class of problems. Consequently, the user’s activity
is mediated by this formalized procedure. It has a character external to the mediated
activity.
At first glance this activity is quite analogous to any assignment given to another
person. For instance, instead of solving a chess problem myself, I request a colleague to
do it. The difference is revealed when we note that the colleague does not solve this problem
in accordance with an algorithm. This means that we simply have a transmission of the
solution from one person to another, not a transformation into a formalized process. We
can justifiably state that users’ activity that is mediated by a procedure they have not
mastered is a new form of human activity.
Now let us examine the psychological features of the activity of programmers
who earlier worked out an algorithm for the solution of chess problems and introduced
it into the memory of the computer. They were confronted with a new problem
consisting of two moves. The solution of the problem can be achieved by two means:
either without turning to the computer, in which case it will not have the nature of a
fully formalized process, or by turning to the computer, in which case it will use a fully
formalized procedure worked out earlier. Can one say that the user was freed from a fully
formalized process in the second case? No, because the problem is not solved even by
programmers in a purely formal way if they do it without the help of the computer.
Programmers do not free themselves from a formalized process; rather, they specially carry out
formalization in order to relieve other people and themselves of the necessity of solving
a certain class of problems.
This means a machine can perform “mechanical” work not because human
mechanical work is transferred to it, but because work formerly done in a
nonmechanical way is transformed into mechanical work. Viewing computerization as
the liberation of human activity from its technical aspects is correct only in some cases
– namely, when an activity is formalized in the course of the development of human
activity itself and in the division of labor, i.e., the activity begins to be composed of
monotonous, repeated actions that are executed in accordance with rigidly fixed rules.
Of course, it is never completely “mechanical” in the precise meaning of the word; but
the meaningfulness of the activity can stop being essential from the point of view of
achieving the unmediated results of the actions accomplished. In this case
computerization can mean transmitting to the machine those elements that begin to be
formalized in human activity itself.
The cases of greatest interest are not those in which the computer takes over the
solution of some problems solved earlier by humans (no matter how), but those in
which a problem is solved jointly by humans and computer (i.e., “human-computer”
systems proper). From our point of view, this type of system (not those of “artificial
intelligence”) is of primary interest for the future of computerization. We know that
creative thought is impossible without the use of previously prepared knowledge, which
is often stored in the “artificial human memory” (reference books, encyclopedias,
magazines, books, etc.). At the same time, the accumulation of information means that
the search for information in external memory often turns into an independent task,
which is sometimes so complex that it distracts the problem solver from the solution of
the fundamental problem. Often such search activity turns out to be impossible, which is
why we hear that it is sometimes easier to make a discovery anew by oneself than to
check whether or not others have already made it. The use of computers for storing
information is a new stage in the development of what Vygotsky called the “artificial
memory of the human race.” Effective use of computers for the search for information
in this memory reorganizes human activity in the sense that it makes it possible to focus
on solving truly creative problems.
Thus, we are confronted not with the disappearance of thought, but with the
reorganization of human activity and the appearance of new forms of mediation in
which the computer as a tool of mental activity transforms this very activity. I suggest
that the theory of reorganization, rather than the theories of substitution and
supplementation, reflects the real facts of historical development.
The influence of computers on mental activity must be examined not only in terms
of the historical development of human activity but also in ontogenetic and functional
terms. Elaboration of the theory of ontogenetic development led to the formulation of
the position that acquisition of the experience of society is the most characteristic trait
of the processes of human ontogenetic development. With the appearance of the
computer, the very form of storing the experience of society (the “electronic brain” vs.
the library) is changed, as is the process of acquisition of knowledge when
teacher-student relations begin to be mediated by the computer. Moreover, the process of acquiring
knowledge is changed (i.e., it is now possible to reduce the number of formal
procedures to be acquired thanks to the use of computers). This gives us a basis for
stating that as a result of computerization, a new stage in the ontogenetic development
of thinking has also emerged.
Psychological research has shown that the solution of a complex cognitive
problem by humans is a process of functional development, or a process of the
succession of different stages and mechanisms that realize this activity – for example,
intuitive guesses and strictly logical verification of these guesses, the feeling of being
close to a solution and the fully developed logical analysis of the solution. The
computer drastically changes (or can change) this process of functional development.
By translating the formal components of activity in the solution of a problem into the
form of an external mediating chain, the computer makes it possible to reveal and
develop the intuitive component of thought and the chain of the generation of
hypotheses, since the complexity of the task of verifying these hypotheses often, as it
were, overwhelms the intuitive components of thought. The results of the analysis that
has been conducted allow us to state that even on a functional plane, changes occur in
the intellectual processes of a person solving complex problems in conjunction with a
computer.
In elaborating the theory of higher mental functions, Vygotsky drew a parallel
between the historical and the ontogenetic development of activity. The child acquires
signs that were developed earlier in human history. In different instances, external
mediation was interpreted by Vygotsky as a stage in the path to internal mediated
activity. For example, speech, which evolves into inner speech, is the foundation of
discursive thought, voluntary memory, attention, etc. A new stage of mediation (by
computers) is not a stage in the path to internal mediation: it is the further development
of external mediation or interpsychological functioning (according to Vygotsky), which
exerts an influence on the development of intrapsychological functioning. Here we see
yet another feature of the new forms of mediated human mental activity.
In this paper I have given special attention to showing how the computer changes
the structure of human intellectual activity. Memory, the storage of information, and its
search (or reproduction) are reorganized. Communication is changed, since human
communication with the computer, especially in the period when languages that are
similar to natural language are being created, is a new form of communication. Human
relations are mediated through use of computers. Of course, the computer creates only
the possibility for human activity to acquire a more complete structure. Such
possibilities are realized only when certain technical, psychological, and social
conditions are met. The technical condition is that the computer must be adequate;
the psychological condition is that the computer must be adapted to human activity, and
the human must adapt to the conditions of working with a computer. The main
conditions are social – what the goals are for which the computer is used in a given
social system. How society formulates the problem of advancing the creative content of
its citizens’ labor is a necessary condition for the full use of the computer’s possibilities.
From our point of view, analysis of the reorganization of human intellectual,
mnemonic, communicative, and creative activity as a result of the use of computers and
optimization of these reorganizations, including both expansion of the computer’s
functional possibilities and coordination of human activity with the use of tools, must
constitute the content of a new branch of psychology, which I propose to label the
psychology of computerization. The development of this new branch of psychology will
permit more complete use of the possibilities of the development of human activities,
which are revealed thanks to computers, and will permit us to avoid secondary negative
consequences of technical progress in this area.
This paper was published in: Wertsch, J. V. (Ed.), The Concept of Activity in Soviet
Psychology. New York: M. E. Sharpe, 1981, pp. 256-278.
THE CONCEPT OF ACTIVITY IN SOVIET PSYCHOLOGY
Translated and edited by James V. Wertsch
Preface
It is now just twenty years since I began studying Russian while a graduate
student in psychology. It was considered a strange thing to do. What did I think I would
learn? My student colleagues and my professors wondered if the rewards would be
worth the effort. And the effort turned out to be considerable.
Just think, for example, about getting into a theoretical discussion with
Alexei Leont’ev. (In fact, you could not avoid such a discussion; Leont’ev was the
theorist of Soviet psychology, famous for his complex formulations.) He might try to
tell you that “...internal activities emerge out of practical activity developed in
human society based on labor, [and] internal mental activities are formed in the
course of each person’s ontogenesis in each new generation” (A. N. Leont’ev, this
volume). Now, how does an American psychologist trained to study learning processes
in rats and humans as part of a general theory of stimulus-response learning interpret
something like that? Not very fully, if my experience at the beginning of my career was
any indication.
As I have discussed elsewhere (Cole, 1979), my continued exposure to
Soviet psychology gradually began to have an impact on my thinking, especially when I
found that I had to act as an interpreter in addition to being a translator (a distinction
that is often difficult to maintain). Citations such as the one from Leont’ev ceased to be
“Greek to me” and started to make sense. Not all of it and not always, but some of it
sometimes. There was certainly something there worth delving into.
One of the most important ideas I had to understand in order for Soviet
psychology to make much sense for me was the concept of deyatel’nost’, “activity,” as it is
called in this book. What does it mean, and why write a book about it? A large Soviet
dictionary isn’t very helpful. Activity is called “work, a task of some kind,” or “the
work of some kind of organ.” Examples include references to “pedagogical activity,”
“business activity,” or “the heart’s activity” – not exactly the most obvious units to use
for a psychology of learning.
Ironically, our own Webster’s dictionary offers some additional ideas that
begin to hint at what the Soviets are talking about. It refers to “any process actually or
potentially involving mental function” and – even more relevant – “an organizational
unit for performing a specific function.” When we put these two aspects of Webster’s
idea of “activity” together, we get “an organizational unit for performing a mental
function.” That definitely puts us on the right track. Activity is a unit of analysis that
includes both the individual and his/her culturally defined environment.
Now if we go back and reread the quotation from Leont’ev with which I
began this discussion, we can begin to appreciate Leont’ev’s meaning. He begins by
saying that internal organizational units for performing a mental function arise out of
practical, external, organizational units for performing that function. If we keep on with
this interpretative process, we arrive at a theory so ambitious in scope that it exceeds
anything in American academic psychology.
Vygotsky often wrote about the “crisis” in psychology, a term then very
much in fashion among continental psychologists in the early decades of this century.
They were arguing about the successor to the crumbled empire of Wundtian
psychology, established at the beginning of the discipline in 1879. The concept of
activity is central to the Soviet strategy for overcoming that crisis.
As Luria (1979) explains very clearly, Vygotsky was arguing against the
views of his contemporaries (except insofar as they represented, in his opinion,
mistaken strategies for overcoming a crisis that had its roots in Wundt’s methods, which
in turn embodied Wundt’s kind of theory). Vygotsky was returning to the argument
between Dilthey and Wundt about the very possibility of an experimental psychology.
He argued (correctly, I believe) that when Wundt gave up on the laboratory study of
higher psychological functions, he created a methodological dualism that would limit
psychology so long as it was accepted. An “explanatory psychology” would live
alongside a “descriptive psychology,” and the two would happily deal with their own
phenomena, fight for hegemony, and never arrive at a principled basis for a whole
psychology built upon a single set of principles with a single, overarching theory.
Surveying the current scene in psychology, who could say that Vygotsky was wrong?
It is one thing to criticize and to moan about crises; it is another to resolve
them. A linchpin in the Soviet resolution of that crisis is provided by the concept of
deyatel’nost discussed in these pages. As Leont’ev says, it breaks down the distinction
between the external world and the world of internal phenomena. But it produces new
problems of its own. From an “activity perspective” the psychological experiment can
no longer be set up entirely to model philosophical speculation, as in Wundt’s day: it
must model the phenomena of everyday, “practical activity.” Among other distinctions
that such a perspective erases is that between basic and applied research, for applied
settings are the locus of practical activity. And, of course, by definition the individual and
those around him/her engaged in the same activity are the basic unit of analysis of
psychology. Social psychology and cognitive psychology are one area of inquiry. Man-machine
systems are simply an extreme version of human-environment interactions in which,
instead of people to interact with, one has the products of human activity.
It is heady stuff, the Soviet version of “general systems theory.” Like any
approach that attempts to provide an alternative world view, it is difficult to grasp in its
entirety and difficult to interpret in its details.
American psychology, driven by the limitations of the computer as an
analogy for human thinking, is beginning to deal.