Mind and Machine: Turing's Test, Searle's Objection

Alan Turing
A brilliant mathematician and computer scientist, and a leader of the team that cracked the German Enigma code. He died by suicide in 1954, after being sentenced to chemical castration for homosexuality.

Can Machines Think?
'Can a machine think?' is hard to answer straight up: what do we mean by 'think'? How do we tell when something is thinking? From 'can a machine think?' Turing retreats to asking, 'can a machine do the things we take to demonstrate thinking in humans?' Can a machine fool us into mistaking it for a human in a teletyped conversation?

Question and Answer
Turing suggests that a question-and-answer format allows us to compare the machine and a person in a fair sort of way. It is a bit anthropocentric as an approach to deciding whether a machine can think. This biases the test against the machine, but a machine that passes the test will still be a pretty convincing thinker!

What sort of machine?
For Descartes, the 'machine' would have been a bit of clockwork. For Turing, it is a digital computer (a discrete state machine, or DSM). Nothing physical is really a DSM, but many things are good approximations. Consider Turing's switch/lamp and lever machine: a digital computer can 'mimic' or simulate it (see the short simulation sketch at the end of these notes).

The computer
So Turing's final question is: could a suitably programmed computer, with a fast enough system, 'do well' in the imitation game (with player B being a man)? Turing thinks that a computer with a high enough storage capacity and speed could fool the average interrogator at least 30% of the time (after five minutes of questioning) by the end of the twentieth century.

Consciousness
If thinking requires consciousness, and a machine can't be conscious, then a machine can't think. This argument appeals to 'feeling' and emotions, and to the notion that machines simply cannot be in such states. Turing suggests that, pressed consistently, it leads to solipsism: the only way to be sure anything thinks would be to be that thing.

Limitations?
A long list here. Do we have any additions? What does Turing have to say here? What about creativity? This raises two questions. If the machine does unexpected things, what more do you want? And if a human does something that seems creative, how do we know that it wasn't just hard to predict, rather than truly novel or creative?

Breaking the rules?
What rules? We break some rules ('rules of conduct'), but do we ever break the basic rules ('rules of behaviour') that our minds/brains function by? How would we know? The rules of behaviour are not always obvious even when we observe the behaviour over a long time; cf. Turing's 1000-bit number-changing program.

A learning machine
Could a suitably programmed computer be set up to learn a language, over time, in something like the way our brains come to enable us to speak a language? We can make it responsive to correction and to encouragement, and also provide other avenues of input for richer information. This anticipates chess programs, which now play better than any but the very best humans. (Go is next.)

Searle's Objection
The Chinese Room: a person who knows no Chinese sits in a room, following a rule book for manipulating Chinese symbols, and produces replies that look fluent to those outside. Does such a room understand Chinese? Searle claims it doesn't. No individual part of the room understands Chinese. But (as the first objection goes) that doesn't show the room doesn't, or does it?

Leibniz' machine
G. W. Leibniz objected to the idea that a machine could think with a similar example: the 'expanded' thinking machine. Take any machine you think is actually thinking. Magnify it to the size of a mill, so you can wander through and watch the works. Where's the thinking? Call this the 'we'll know it when we see it' criterion.

The Turk
A famous chess-playing machine.
It defeated many famous opponents (including Napoleon and Benjamin Franklin). A human chess player hidden inside operated the apparatus.

Do we know thinking when we see it?
What are you aware of when you're thinking? Suppose physicalism is true: if you watched someone else's brain (or your own brain, or a computer's electronic activity) while they (you) were thinking, would you expect to see that thinking was going on? How would you decide whether the processes you were watching really were some kind of thinking?

Thinking is as thinking does?
One popular view holds that our concept of thinking is really quite abstract: thinking guides or produces certain kinds of behaviour; it has a role or function in generating speech, in leading to various kinds of decisions or choices, and so on. What makes a thought the thought it is, is the difference it makes (the effect it has) on such behaviour.

Upshot
From this point of view, anything that produces the kinds of behaviour we regard as characteristic of thinking is thinking, even if it's thinking in a way that we don't (i.e. via different physical processes). Searle clearly rejects this kind of abstract view of thinking: for him, it's not understanding unless it has concrete properties of some special kind.

Responses to Searle
The system reply: the whole room understands, even though no part of it does. The robot reply: a real Chinese speaker has lots of other capacities (to see and report what s/he sees in Chinese, to move around in response to requests in Chinese, etc.); a robot with these capacities really understands Chinese.

More Responses
The brain simulation reply. A combination of all three (system/robot/brain simulation). What is missing here? What is Searle looking for that these answers don't provide? I suspect it's what Leibniz was looking for: the private, immediate awareness of something as thinking.

Searle's view
Other minds: Searle's position leads to doubts about whether anyone else really thinks or understands. But Searle says: "It is no answer to this argument to feign anesthesia." (338) The program response: 'understanding' can't be (merely) formal; a storm simulation is not a storm... But a storm simulation doesn't blow down buildings either, or else it would be an artificial storm. And a program playing the imitation game does do something that thinking things do. So why not call it an 'artificial intelligence'?
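A discrete state machine, sketched in code
To make concrete the point (from 'What sort of machine?' above) that a digital computer can simulate a simple discrete state machine, here is a minimal sketch in Python. It assumes a three-position wheel, a lever input, and a lamp output; the particular transition table is illustrative, not a quotation of Turing's own.

```python
# A toy discrete state machine, loosely modelled on Turing's
# wheel/lever/lamp example: three positions, a lever input, and a
# lamp that lights in one designated position.
# The transition table below is an illustrative assumption.

# TRANSITIONS[state][lever_pressed] -> next state
TRANSITIONS = {
    "q1": {False: "q2", True: "q1"},  # the wheel advances unless the lever is held
    "q2": {False: "q3", True: "q2"},
    "q3": {False: "q1", True: "q3"},
}

LAMP_ON_IN = {"q3"}  # the lamp is on only in this position


def run(inputs, state="q1"):
    """Step the machine once per input; return the sequence of lamp readings."""
    readings = []
    for lever_pressed in inputs:
        state = TRANSITIONS[state][lever_pressed]
        readings.append(state in LAMP_ON_IN)
    return readings


if __name__ == "__main__":
    # With the lever never pressed, the lamp blinks every third tick.
    print(run([False] * 6))  # [False, True, False, False, True, False]
```

Everything about the machine's future behaviour is fixed by its current state and its inputs, which is exactly what allows a digital computer to mimic it step for step.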