Manifesto

Charles Baran
Patea is a chatter-bot created using the AIML programming language. It – though many would refer to the program as a she – is a simulation of artificial intelligence. Users can interact with it through several media, the most common of which is a textual interface, similar in fashion to a common chat-room. The existence of such technology brings to mind several important questions: Precisely what is this? Is it a simulation of an intelligence, a game, or some form of literature? Can such a program ever actually be considered intelligent, and what sort of standards could we hold a program like this to in order to test its intelligence?
Alan Turing, one of the most famous thinkers to consider the problem of Artificial Intelligence, had much to say about these questions – or at least about the intelligence aspect of them. Turing proposed a game, which he referred to as the "Imitation Game," in which the following occurs: one has two humans and a computer. One of the humans is called the "Interrogator," and must attempt to determine which of the two beings he is communicating with over a textual medium is the human, and which is the computer.
Turing predicted that by the end of the century it would be possible to create a program such that, in playing this game, thirty percent of the time a human would not be able to tell which player was the human and which was the computer – an easily obtainable goal today, though it seemed impossible only a short while ago. He considered the question "Can a machine think?" to be utterly meaningless, since machines are not men. Perhaps an easier way of explaining the point he was trying to elucidate is to consider the question of animal intelligence. The issue of whether or not animals are sentient has been raised for many years, yet none would deny that they are at least intelligent. That issue is made simpler, however, by the fact that they, at least, are made out of the same 'stuff' as we are – at least in terms of organic matter. No one disputes that they think, though the processes that go through the head of a mouse or a worm or a fish are certainly nothing like the processes that go through that of a human.
Lucy Suchman, author of "Plans and Situated Actions," held some similar and some differing opinions on this topic. She does not seem so much interested in endowing computers with the ability to communicate in a human-like fashion as she is in discussing and clarifying the means through which we communicate, and the reasonable things we can come to expect from them. For example, she points out that humans have a tendency to assume that "a computer system has sophisticated linguistic abilities after displaying elementary ones," and goes further to discuss ELIZA, a chatter-bot similar in nature to Patea, and the various limitations it exhibits.
In terms of interactions with Patea, one can clearly see her point; it is quite easy to anthropomorphize the program and ascribe to it human intelligence and motivations – I even found myself doing this after a brief experiment where I set up a voice and an animated picture. Take, for example, the following conversation with my sister:
Human: yes, bow down
Patea: What do you do in your spare time, mortal? Do you have spare time?
Human: no i don't
Patea: I heard you did.
Human: im a college student
Patea: Which university?
Human: im not at a unvi
Patea: I am glad to hear that you are not at a unvi.
Human: im at geneva college
Patea: Where is that: geneva college?
Human: beaver falls
Patea: Tell me what you like and dislike, mortal
As you can see in the previous conversation, Patea was at least as intelligent and coherent – if not more so – than my sister herself, who was clearly treating the chatter-bot as if it were another person. If you were to add voice and animated pictures to this conversation, the illusion would be greater still – but that gets back to Turing's point: is this necessarily an illusion? We do have a tendency to anthropomorphize creations such as Patea, but that does not necessarily mean that doing so is incorrect. Indeed, regardless of whether or not Patea is an intelligent creation (setting aside completely the issue of sentience for now), that does not mean that the anthropomorphization cannot be used as a new and perhaps extremely effective literary tool – one which to date has not often been used intentionally, aside perhaps from the ELIZA script entitled "DOCTOR," which, as Suchman pointed out, uses a minimalistic philosophy to cause anthropomorphization in the mind of the user; it intentionally interacts in such a way as to create a cohesive illusion.
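DOCTOR's trick of reflecting the user's own words back can be reproduced in a few lines of AIML. The category below is a hypothetical sketch in the style of that script – it is not taken from Patea's or ELIZA's actual source:

```xml
<!-- Hypothetical AIML category in the style of the DOCTOR script.    -->
<!-- The * wildcard captures whatever follows "I FEEL", and <star/>   -->
<!-- echoes that captured text back in the reply, creating the        -->
<!-- illusion of attentive listening with no understanding at all.    -->
<category>
  <pattern>I FEEL *</pattern>
  <template>Why do you feel <star/>?</template>
</category>
```

Given the input "I feel lonely," a category like this would answer "Why do you feel lonely?" – precisely the minimalism Suchman describes: the program comprehends nothing, yet the mirrored phrasing invites the user to supply the intelligence himself.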
No one, of course, would argue that a character in a novel is intelligent or sentient by any means – but by seeing how they think and interact with others and their world, we come to think of them as being people with character. This, then, could be seen as a further extension of one of the basic tenets of literature – for in what better way could one create the illusion of an intelligent, cohesive character than through one which is willing to actually engage the audience in what is seemingly intelligent conversation and drama? This point of view, obviously, does not answer the original, fundamental question – but it is a possibility that bears mentioning.
The aforementioned conversation is merely one example of the various lucid threads Patea has managed to sustain. There are many others (many of which are included) where she exhibits what appears to be a stable conversation on a consistent topic – and many examples where she does not. The main difference between the instances where Patea makes sense and those where she does not seems to go back to the two modes of interaction Suchman mentioned; namely, planned action as opposed to situated action. More specifically, Patea tends to interact in two main ways: she either understands a response and gives a specific answer to that inquiry, "considering" (or the programming equivalent) only the specific comment given to her (whether or not that comment is something she understands is another, separate issue), or she acts from a plan – giving a question, and then a planned response to continue the conversation.
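In AIML terms, these two modes correspond roughly to a plain pattern (situated: only the current input matters) and a pattern qualified by a <that> tag (planned: the match also depends on the bot's own previous line). The categories below are a hypothetical sketch modeled on the conversation with my sister above, not Patea's actual source:

```xml
<!-- Situated action: this category fires on the current input alone, -->
<!-- ignoring everything that came before it in the conversation.     -->
<category>
  <pattern>IM A COLLEGE STUDENT</pattern>
  <template>Which university?</template>
</category>

<!-- Planned action: the <that> element matches against the bot's own -->
<!-- previous utterance, so this reply can only fire as the scripted  -->
<!-- next step after the bot has asked about spare time.              -->
<category>
  <pattern>NO I DONT</pattern>
  <that>DO YOU HAVE SPARE TIME</that>
  <template>I heard you did.</template>
</category>
```

The second category is what makes "I heard you did." feel like a planned rejoinder rather than a reflex: stripped of its <that> condition, the same reply to "no i don't" could surface anywhere and would usually be nonsense.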
Contrary to what Suchman seemed to have in mind – or at least, taking her philosophy to its ultimate extreme – situated actions seem less satisfying than planned actions on the whole, although planned actions are much harder to implement. For example, if you examine only single input/output exchanges from Patea, virtually any of them appear rational and cogent. During most conversations taken as a whole, however, it quickly becomes apparent that considering each input in isolation is unsatisfying – and that is the ultimate extreme to which the philosophy of situated actions can be taken. Likewise, as evidenced by the occasional sentence or remark from Patea which makes no sense grammatically (and usually simply involves the human's own words rearranged and spit back at him), we can see that planned actions alone do not always make sense. However, as with the conversation above, when Patea seems to be acting on a conversation path while simultaneously considering the individual inputs, she appears most intelligent.
It is ironic, therefore, looking back on Weizenbaum's writings in "Computer Power and Human Reason" and upon his own attitudes towards ELIZA, that he essentially rejected the most advanced model for artificial intelligence we have created to date. Virtually anything Patea is capable of, ELIZA was able to do as well – and (as he laments in his article) the program was famous for drawing out and creating empathy in his staff members, who persisted in treating ELIZA as if it were a real being – an anthropomorphism which Weizenbaum detested. Even Dr. Richard S. Wallace, in his article "From ELIZA to A.L.I.C.E.," had something to say about this, noting that while ELIZA was (and essentially still is) the most advanced model of interactive AI we yet possess, it has been neglected – in large part due to the efforts of its creator himself.
Even today, after such chatter-bots have become common on the internet, I was unable to find a single chatter-bot with an especially distinctive personality – they all appear to be virtually lifeless automata, exhibiting (at times) rational conversation, but with none of the literary elements that make communication worthwhile. They seem to be growing in popularity, however, and it is quite possible that as time goes on this form of media will develop to a more widely-appreciated extent.