COMPUTING MACHINERY AND EMERGENCE:
THE AESTHETICS AND METAPHYSICS OF VIDEO GAMES
Jon Cogburn1 and Mark Silcox2
[A]
…even if the real world is not a Matrix, video games still raise interesting metaphysical questions.
How does one best characterize causality in a video game? What is the moral and metaphysical status of
quasi-intelligent agents inside of games? Nearly any metaphysical question we can ask about the real
world will have as an analog some question about the diagenic world of video games. This is perhaps most
clearly the case with respect to the issue of emergent properties. Most of us believe that, in some sense,
minds are emergent properties of brains, and in counterfactual situations, things at least appropriately like
brains. Unfortunately, discerning what such a belief actually comes to is frustratingly difficult, and perhaps
the only consensus in the philosophy of mind is that there is nothing approaching a consensus on this issue.
[B]
Prima facie, philosophical investigation into the sorts of virtual environments that we experience
through video games is promising here. Philosophers generally seem to think that any satisfactory account
of how the mind is emergent upon the brain should capture our somewhat contradictory intuitions that
(1) the mind is nothing over and above the brain,
and
(2) the way we describe properties of the brain is completely different from the way we describe properties
of the mind.
[…]
Of all contemporary philosophers of mind, Daniel Dennett has arguably done the most to reconcile the two
contradictory intuitions undergirding the claim that the mental emerges upon the physical. This
reconciliation involves three steps: first, Dennett poses the problem in an innovative way; second, he
attends to relevant empirical results; and, finally, he attempts to dissolve any residual problems.
Any attempt to account for the emergence of the mental upon the physical must grapple with Dennett’s
contributions to this discussion.
A key part of Dennett’s attempted dissolution of the metaphysical quandary about mental/physical
emergence is the way he poses the quandary in terms of stances interpreters take towards objects. Recall
that our problem was reconciling (1) the intuition that the mind is nothing over and above the brain
with (2) the fact that the way we describe the brain is completely different from the way we describe mental
properties. It is by thinking about (2) as a problem of interpretation that Dennett is able to make progress.
Dennett describes the ways we interpret objects in terms of the physical stance, the design stance,
and the intentional stance. The physical stance is utilized when one interprets a system solely in terms of
efficient causation. The paradigmatic example is scientists discerning laws that govern the evolution of
systems over time. These allow us to predict a system’s evolution in terms of causes and effects. When a
cannon is fired, the laws plus relevant environmental information about temporally prior states of affairs
will allow us to determine where the ball will land. The design stance, by way of contrast, is employed
when one considers a system as having been built for a given purpose. For example, if one knows that an
instrument was built to play music on a western scale, it will be much easier to predict what noises will
come out when you blow into it. Finally, one takes the intentional stance towards a system when one
appeals to the beliefs and desires of the system to predict what it will do. For example, we can predict that
our students will usually frown and grumble when they get certain sorts of grades, because we know they
don’t desire those grades.
1
Jon Cogburn
Louisiana State University Philosophy Dept.
email: jcogbu1@lsu.edu
2
Mark Silcox
Auburn University Philosophy Dept.
email: silcoma@auburn.edu
It is not immediately obvious how Dennett’s classification of these stances can help with the
problem of emergence. For Dennett acknowledges that the same system can be considered from the
perspective of different stances. Thus, for Dennett there is no hard and fast metaphysical divide between
things with minds (toward which it is appropriate to take the intentional stance) and things without. One might
therefore think that, since any stance can be taken (by someone) toward any system, Dennett leaves us with a
schoolboy relativism in which anything can count as an intentional system and hence anything can count as a
mind. This is not the case, however. It is not until one turns to Dennett’s more metaphysical writings that
this becomes clear.
For Dennett, a system is really intentional to the extent that the intentional stance allows one to
predict the behavior of that system in ways that other stances do not. That is, something is really an
intentional system to the extent that taking the intentional stance gives us more explanatory/predictive
power than the other stances do. While this is still relativized to predictors, there is a fact of the matter
about whether or not a particular system possesses a particular sort of intentional property: it does iff the
intentional properties isolated allow one to predict more than she would otherwise be able to. It is in this
sense that Dennett considers intentional properties “real.”
Now we can see how Dennett’s reformulation of the mind/body problem permits progress in the
task of explaining how mind/body emergence is possible. For the Dennettian, emergent mental properties
are those that are attributed via any predictively indispensable adoption of the intentional stance. The problem
of reconciling our contradictory intuitions about emergence is thus reduced to the problem of explaining
why the intentional stance works. The need to adopt the intentional stance is consistent with the system’s
being basically physical, but at the same time we are constrained to describe the two in completely different
ways.
[…]
the Dennettian perspective does illuminate a pervasive aspect of human-computer interface. We do take
the intentional stance with respect to our machines and entities existing in the machine’s diagenic realms.
Moreover, doing so does enable us to predict the evolution of the machine in ways we couldn’t manage
otherwise.
There is also, however, an important disanalogy here. In order to be successful and flourishing
inhabitants of the Matrix, we would have to take all three stances towards objects within the diagenic world
of the video game, as well as toward the machines upon which the game was running. Successful play of
any game requires the ability to predict how one’s own input will affect the evolution of the game, and
doing this requires taking the appropriate stances. This is true even of well-designed games that
(unlike The Matrix) we are under no temptation to confuse with our natural environment. For example, in
order successfully to play Warcraft one must interpret the causal properties of wood correctly, one must
interpret weapons as being designed in certain ways, and one must correctly interpret the beliefs and desires
of the characters. In describing a bout of play to ourselves or another after its conclusion, we will take all
of these stances, and for each of them one can note the difference between the player’s description and an
engineer’s description of what’s going on inside of the machine.
[C]
The sorts of emergent properties that we attribute to objects within the virtual environments
generated by video games share a common feature that those of objects in the real world pretty clearly lack
– the distinctive sort of interest that we have in using them to make predictions about how that environment
evolves arises more or less exclusively as a result of the fact that we behold them as distinctively aesthetic
objects. And in the recent literature of analytic aesthetics we find extensive discussion of the connection
between the existence of aesthetic properties and the attitudes or “stances” adopted by the readers of stories
and the viewers of pictures. This discussion about the viewer-relativity of aesthetic properties was more or
less initiated in contemporary philosophy by the famous discussion, in E.H. Gombrich’s Art and Illusion, of the
“beholder’s share” in constructing 3-D objects out of 2-D representations. It has also been carried out at
length amongst literary critics of the “reader-response” school of thinking about the properties of narrative
art. In his influential essay, “Interpreting the Variorum,” the reader-response critic Stanley Fish makes the
radical-sounding suggestion that
there are no fixed texts, but only interpretive strategies making them, and…interpretive strategies are not
natural, but learned…meanings are not extracted but made and made not by encoded forms but by
interpretive strategies that call forms into being.
Fish’s claim here sounds remarkably similar in spirit to Dennett’s position that intentional systems are just
those toward which we find ourselves disposed to take the intentional stance.
[D]
Since one can hardly doubt that the relevant properties (i.e. that of being a sentence that is not a
logical truth and that of being a Turing machine that has halted for some input) do exist, it does not seem
unreasonable to view the examples that we have described from the theory of computation to be
paradigmatic instances of a philosophically tractable notion of emergence. And if we do, then it starts to
look as though the notion of aesthetic properties being dependent upon the viewer that we have imported
from literary criticism might actually represent a sort of precisification of the comparatively informal
notion of an explanatory “stance” that we found in Dennett. One advantage of taking this approach is that
our earlier objection to using Dennett’s notion of an intentional “stance” to describe the emergent
properties of a diagenic game-world will also lose most of its bite. For when we are trying to make
explanations of what goes on in the game by reference to the aesthetic properties of the narrative that it
instantiates, how we describe the player’s interaction with that game will depend upon the characteristics of
that game’s user interface (command-line input in a work of interactive fiction, mouse clicks in a
hypertext, or a complicated subdural interface with the participant’s sensorimotor system in the Matrix),
rather than upon issues that have merely to do with the terms in which the player might be disposed to
think of objects in the game world when he is cooking up plans to find the treasure or kill the evil wizard.
[E]
Our problem was to explain how properties of objects within the diagenic realm relate to
properties of objects outside of that realm. But before we follow this route any further, we
should examine more closely the relevant properties of computers that properties of video games emerge
upon.
The underlying physical processes that take place in computers when they are running the sorts of
programs that we have been interested in are all describable using computability theory and physics…it is
fair to characterize computationally tractable properties as physical ones, since they are able to be
instantiated in machines in such a transparent manner.
[…]
Now one of the strangest things in computability theory is the way that computationally intractable objects
can be fully defined in terms of computationally tractable ones. For example, the set of sentences of first
order logic that are logically true is enumerable. There is an algorithm for listing them. However, the
complement of this set (the set of sentences that are not logically true) is not enumerable. There is no
Turing machine program that will halt on 1 for all and only (Gödel numbers of) sentences that are not
logically true.
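The asymmetry can be put schematically (a sketch in our own notation, writing Val for the set of logically true sentences):

```latex
% Val is enumerable: by G\"odel's completeness theorem, a sentence is logically
% true iff it is provable, and a machine can generate all proofs one by one.
\mathrm{Val} = \{\varphi : {\models}\,\varphi\} \quad \text{is recursively enumerable.}
% Its complement is not: if both Val and its complement were enumerable, we
% could run the two enumerations in parallel until a given sentence appeared
% in one of them, yielding a decision procedure for logical truth and
% contradicting Church's theorem that first-order validity is undecidable.
\overline{\mathrm{Val}} = \{\varphi : {\not\models}\,\varphi\} \quad \text{is not recursively enumerable.}
```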
This is simultaneously both strange and wholly normal. The set of sentences that are not logically
true is definable in terms of the set of sentences that are. Once you have one set defined, you get the other
for free, so to speak. But there is also a sense in which the language used to describe one set is radically
different from the language used to describe the other. The first set carries with it a computationally
tractable procedure by which the members of that set can be enumerated, yet the second set does not. Our
discourse about whether arbitrary sentences of first order logic are members of either set must therefore be
radically different for each set. Here, then, at the very heart of computability theory, we have objects that
satisfy our two contradictory intuitions about emergence. One set is clearly nothing over and above the
other, in the sense that the second can be defined in terms of the first given nothing more than the basic
vocabulary of set theory. Yet with respect to at least one sort of explanatory procedure we must talk about
the two sets in radically different ways.
[F]
Can we enumerate the set of Turing machine programs? To say that we can is equivalent to the
claim that there exists a super-Turing machine program that will halt on one for all and only the Gödel
numbers of Turing machine programs. Moreover, this “Universal” Turing machine will itself be in our
initial enumeration of Turing machines.
Now, for each Turing machine and each initial entry on the tape that feeds into that machine, the
machine will either halt after running through the program or it will not halt. So as well as the universal
Turing machine, we can now contemplate a “Meta-” Turing machine that decides this property of halting.
This one will take as inputs two numbers (two successive strings of ones, with a blank space in between
them). The meta-Turing machine will halt on one if the first of the two numbers entered is the number of a
regular Turing machine, and the second is the number that designates an input that that Turing machine
halts on. The meta-Turing machine will halt on zero if the two numbers entered are the number of a regular
Turing machine program together with an input that it does not halt on. As Turing famously proved,
however, this meta-Turing machine cannot exist. The halting problem is unsolvable.
Here again we see the strange dual aspect of emergence. We’ve got a set of objects (ordered
triples consisting of the number of a Turing machine, an entry, and a one or zero) that is in a sense nothing
over and above what already exists once we enumerate the set of Turing machine programs. Yet the fact
that it is not Turing-machine computable (and hence, by Church’s Thesis, not decidable) means that the
way we talk about this set must be radically different from the way we talk about Turing machines
themselves.
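The impossibility argument behind this can be sketched in a few lines of code. This is an illustration rather than a formal proof, and the names `make_diagonal` and `always_no` are our own: given any candidate halting decider, we can build a “diagonal” program that does the opposite of whatever the candidate predicts about it, so no candidate can be right about every program.

```python
def make_diagonal(halts):
    """Given any candidate decider halts(program) -> bool, build a
    program that the candidate is guaranteed to misjudge."""
    def diagonal():
        if halts(diagonal):
            while True:      # decider said "halts", so loop forever
                pass
        return "halted"      # decider said "loops", so halt at once
    return diagonal

# A candidate decider that always answers "does not halt":
def always_no(program):
    return False

d = make_diagonal(always_no)
print(d())  # prints "halted": the decider was wrong about d
```

Running the other branch (a decider that answers “halts”) would make `diagonal` loop forever, which is exactly why the contradiction cannot be escaped.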
To reiterate, in both cases there is a sense in which non-computationally tractable properties
“emerge” in a thoroughly non-mysterious way out of computationally tractable ones. We can imagine a
world where all that exists are computationally tractable objects (an enumerated set of Turing machines)
but where nonetheless non-computationally tractable properties exist too, and moreover exist solely in
virtue of the computationally tractable objects. A world consisting solely of machines for executing
algorithms, indeed of physical instantiations of every algorithm, contains a procedure that cannot
be executed by an algorithm! A human being, however, clearly can execute the relevant procedures. In the
case of the set of logical non-truths, one simply has to examine each well-formed sentence in a given
language and check whether one can construct a logically possible countermodel.
[G]
To see the tremendous relevance of the halting problem to the issue of emergence, one need only
focus on a property of games that has obvious aesthetic significance, the property of winning.
[H]
Now consider a video game played by two human players against one another. First the two
players decide how long the game will last, the winner being the player with the highest score at the end of
the allotted time. Then, the computer presents each player with ten flowcharts of Turing machine
programs, and an input for each. The players then each try to figure out whether their Turing machine
programs will halt on the assigned inputs. Each player can pick “yes” or “no.” If a player picks “no,” then
the computer will assign the player one point for that machine. If a player picks “yes,” and the computer
determines that the Turing machine program does in fact halt for the given input within the time frame of
the game, the player gets two points. However, if the player picks “yes,” and the Turing machine program
does not in fact halt for the input, then the computer goes into an “infinite” search (one that terminates only
when the game is over) and the player gets zero points for that
input. Again, the winner of the game is the player who has the most points when the assigned time limit
for the game is over.
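The game is tractable because the computer only ever has to run a machine for a bounded number of steps, and step-bounded halting, unlike halting itself, is decidable. A minimal sketch of the scoring (the machine encoding and the helper names `run_tm` and `score_guess` are our own, hypothetical choices):

```python
def run_tm(transitions, max_steps):
    """Simulate a one-tape Turing machine for at most max_steps steps.
    transitions maps (state, symbol) -> (new_state, new_symbol, move);
    the machine starts in state 'A' on a blank (all-0) tape and halts on
    entering state 'H'. Returns True iff it halts within the budget."""
    state, pos, tape = 'A', 0, {}
    for _ in range(max_steps):
        if state == 'H':
            return True
        symbol = tape.get(pos, 0)
        if (state, symbol) not in transitions:
            return True  # no applicable rule: the machine halts
        state, tape[pos], move = transitions[(state, symbol)]
        pos += 1 if move == 'R' else -1
    return state == 'H'

def score_guess(says_halts, transitions, time_budget):
    """One point for a safe 'no'; two for a 'yes' the computer can verify
    within the game's time budget; zero for an unverified 'yes'."""
    if not says_halts:
        return 1
    return 2 if run_tm(transitions, time_budget) else 0

halting_machine = {('A', 0): ('H', 1, 'R')}   # halts immediately
looping_machine = {('A', 0): ('A', 0, 'R')}   # runs right forever
print(score_guess(True, halting_machine, 100))   # 2
print(score_guess(True, looping_machine, 100))   # 0
print(score_guess(False, looping_machine, 100))  # 1
```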
Such a game is possible and completely computationally tractable. Moreover, it allows us to
extend the notion of computational emergence to the realm of video games. Now imagine a third person
leaning over the first two players, giving one of the players advice about how to pick. Clearly a human
could do this, and do it quite effectively too, if he was just especially good at parsing Turing machine
diagrams. But in virtue of the unsolvability of the halting problem there is no algorithm that that person
could be instantiating that would give the player the correct answer 100% of the time. Thus, no computer
program, even if programmed by a God who could avail herself of arbitrarily large resources, could select
the winning strategies for the game. Thus, when we look at the property of winning, which any gamer is
bound to make reference to when she evaluates a particular game or a particular play-through of that game
for its aesthetic value, our comparison between aesthetic properties of video games and the examples of
emergent properties from computability reveals itself to be more than just a suggestive metaphor.
Let’s ask this question in the broadest possible generality. Given an arbitrarily large, yet finite,
amount of computer space and time, could a god-like being devise an algorithm such that, when given any
video game that is “winnable” in a way that is at least partially dependent upon scores that are attached to a
player’s inputs, the algorithm could give players of that game effective strategies for beating the game? Of
course no human could do this such that it would work for any arbitrary game, but that is not our question.
There are an infinite number of possible algorithms, and restrictions on our intelligence and the size of
computers restrict humans to a finite number of these. We’re wondering if a God-like creature could in
principle design such an algorithm.
He can’t. He can’t do it for this class of games, because the class includes the halting
game described in this section. If God could design a meta-strategy program to give us all advice about
how to play any game that we chose (from among those having the fairly anodyne aesthetic properties
described above), then the halting problem would be solvable. So even though games are nothing over and
above algorithms, they have non-algorithmic properties.
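The argument is a standard reduction, which can be set out schematically (notation ours, idealizing away the game's time limit):

```latex
% Suppose a meta-strategy algorithm W existed: for every score-based game G,
% W(G) outputs an optimal strategy. Apply W to the halting game H_{M,x} built
% from Turing machine M and input x. An optimal strategy answers ``yes'' for
% (M,x) exactly when M halts on x (two points beat one), so
\mathrm{halts}(M,x) \;=\; \bigl[\, W(H_{M,x})\ \text{says ``yes'' for}\ (M,x) \,\bigr]
% would be a decider for the halting problem. No such decider exists, so no
% such W exists.
```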
[I]
Most popular computer games about which it makes any sense at all to talk about winning are sold
with so-called “strategy guides.” Interestingly enough, these guides are typically not written by the
programming team, but by players who have discerned the relevant counterfactual dependencies in the
diagenic world, and figured out how to exploit these for higher scores. As with a procedure for
determining whether an arbitrary Turing machine halts on a given input, the procedure used by the players
writing these guides is not an instance of a master algorithm.
This characterization clearly makes sense of the emergent strategic properties of video games, but
what of other emergent properties, such as an atmosphere being sad? Again, we can ask whether a god-like
creature could discern an algorithm to correctly judge the affective properties of video games. If there is no
such algorithm, then the affective properties of video games are genuinely emergent in the way we have
described. If there is, then the affective properties are purely computational properties and hence not
emergent.
[J]
Now we are in a position to return to the considerations that originally motivated us. We have
argued that some aesthetically significant properties of video games are dependent on the player in the
strictly literal sense that a discrete set of actions with fairly precise identity conditions that she performs
represents a necessary condition for the instantiation of these properties. This was also true for both of the
computationally intractable procedures we described earlier. A computer can list the logical truths, but it
takes a person to list the sentences that are not logical truths. A computer can enumerate the set of Turing
machine programs, but it takes a person to determine, for the members of this enumeration and arbitrary
inputs, whether those members will halt on those inputs.
Since one can hardly doubt that the relevant properties (i.e. that of being a sentence that is not a
logical truth and that of being a Turing machine that has halted for some input) do exist, it does not seem
unreasonable to view the examples that we have described from the theory of computation to be
paradigmatic instances of a philosophically tractable notion of emergence. And if we do, then it starts to
look as though the notion of aesthetic properties being dependent upon the viewer that we have imported
from literary criticism might actually represent a sort of precisification of the comparatively informal
notion of an explanatory “stance” that we found in Dennett. One advantage of taking this approach is that
our earlier objection to using Dennett’s notion of an intentional “stance” to describe the emergent
properties of a diagenic game-world will also lose most of its bite. For when we are trying to make
explanations of what goes on in the game by reference to the aesthetic properties of the narrative that it
instantiates, how we describe the player’s interaction with that game will depend upon the characteristics of
that game’s user interface (command-line input in a work of interactive fiction, mouse clicks in a
hypertext, or a complicated subdural interface with the participant’s sensorimotor system in the Matrix),
rather than upon issues that have merely to do with the terms in which the player might be disposed to
think of objects in the game world when he is cooking up plans to find the treasure or kill the evil wizard.
One way of understanding the computational theory of mind (as the view has been defended by
philosophers of the past thirty years or so) is that intentional states are emergent properties of the brain or
neurosystem in a way that is closely analogous to that in which it makes sense to ascribe to computers
properties that are emergent upon the computer’s hardware. But if our own analogy between emergent properties in
the theory of computation and those of game-worlds like the Matrix can be taken as suggestive of a more
general strategy for understanding the phenomenon of emergence, this actually turns out to create a
problem for the computational theory of mind.
For Dennett, facts about the interests and needs of the interpreter play a crucial role in explaining
why the adoption of the intentional stance is unavoidable. We need to think of our fellow human beings as
having minds because, while this may not be necessary to the explanation of how they get indigestion or
show signs of aging, it is needed to predict how they will respond when faced with an arithmetic problem
or a choice about what to pack for lunch. But no explicit reference to the specific interests of the interpreter
who adopts the intentional stance needs to be made in the characterization of the mental phenomena
themselves – just to say that a person P has certain beliefs is to say nothing directly about why it is in my
interest to suppose that he does. Characterizing the sorts of aesthetic properties that we attribute to the types
of interactive narratives instantiated by computer games, however, must always involve just this very sort
of explicit reference. To say that a particular game was “sad,” say, because the player’s character died
while trying to save the princess, is just to say (for the reasons we noted above) that certain decisions were
made by the player which brought about some possible in-game events while suppressing others. That this
is more than a merely trivial difference between these two different ways of appealing to the notion of
player- or interpreter-relativity in the characterization of emergent properties becomes clear once we realize
that simply being a player or an interpreter is itself a mental property. So if we try to extend the analogy
that we have drawn from the emergence of strictly computational properties to aesthetic ones to cover the
case of mental phenomena, we will be caught in a vicious circle.
References:
Aarseth, E. Cybertext: Perspectives on Ergodic Literature (Johns Hopkins, 1997).
Boolos, G. and Jeffrey, R. Computability and Logic (Cambridge University Press, 1989).
Chomsky, N. Aspects of the Theory of Syntax (MIT, 1969).
Dennett, D. The Intentional Stance (MIT, 1989).
Dennett, D. Consciousness Explained (Little, Brown and Co., 1992).
Dennett, D. Brainchildren (MIT, 1998).
Dennett, D. “Real Patterns,” in [Dennett, 1998], pp. 95-120.
Fish, S. Is There a Text in This Class? (Harvard, 1980).
Fish, S. “Interpreting the Variorum,” in [Fish, 1980], pp. 174-180.
Gombrich, E.H. Art and Illusion (Princeton, 2000).
Seager, W. Theories of Consciousness: An Introduction and Assessment (Routledge, 1999).
Tennant, N. “The Withering Away of Formal Semantics?,” Mind and Language 1, 1986, pp. 302-318.