Homework #1: Suggested Answers


Warning!! The questions for this homework are VERY subjective. Hence, my answers should not be viewed as THE correct answers, but merely suggestions.

1. Create and justify your own definition of AI (Ex. 1, pg. 31)

If we take the dictionary definition of "artificial" as "non-natural or imitation", then AI is imitation intelligence in media other than biological nervous systems. So here I'm assuming that (a) intelligence is a property of many (if not all - see below) organisms, and (b) intelligence largely results from a nervous system. So now, all we have to do is define intelligence :)

I will define intelligence as "doing the right thing at the right time and in the right place". Now all we have to define is "right"! Relative to some goal, an organism will have certain behaviors that will help to achieve it, and others that will not. Intelligence involves choosing those actions (the "right" ones) that will aid in progress toward the goal.
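
To make "doing the right thing relative to a goal" a bit more concrete, here is a tiny sketch of a goal-directed agent (my own toy illustration - the state, the actions, and the distance measure are all invented): at each step it simply picks whichever available action moves it closest to its goal.

```python
# Minimal sketch of goal-directed choice: pick the action whose outcome is
# nearest the goal. Everything here (the 1-D state, the three actions, the
# distance measure) is an invented toy example, not a general theory.

def choose_action(state, actions, goal, distance):
    """Return the action whose resulting state is closest to the goal."""
    return min(actions, key=lambda act: distance(act(state), goal))

def distance(s, g):
    return abs(s - g)

# Toy world: a creature on a number line whose goal is position 10.
actions = [lambda s: s + 1, lambda s: s - 1, lambda s: s]  # right, left, stay

state = 0
while state != 10:
    state = choose_action(state, actions, 10, distance)(state)
print(state)  # 10 -- "intelligent" behavior here is just goal-directed choice
```

To an outside observer, this creature "does the right thing at the right time", even though there is no awareness of the goal anywhere in the loop - which is exactly the point about "lower" organisms made below.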

Some organisms that we attribute "consciousness" to (I don't want to get into the bottomless pit of defining that one!) will actually be aware of their goals, whereas other "lower" organisms will not have an explicit awareness, but to any outside observer, it will appear as if these lower organisms are acting in accordance with certain goals. The highest-level goal is "survival", which involves subgoals like finding food, avoiding becoming food, etc. The subgoal of finding a mate can be viewed as servicing a higher goal such as "propagating one's genes".

Under this definition, all species of living organisms exhibit intelligence. Don't try to tell me that bacteria are not intelligent, when they have survived for several billion years, whereas the human race shows no signs of being able to last any longer than, say, the dinosaurs - at the rate we're killing each other and the planet! Humans are clearly more complex than bacteria, and exhibit a much more diverse intelligence, but both ends of the complexity scale exhibit goal-directed behavior.

Other definitions of intelligence bring in stuff like consciousness, which nobody can properly and unequivocally define, so the border between intelligent and unintelligent life forms is hard to fix. I prefer to view all life forms as intelligent, just in different degrees and manners. I'm certainly not alone in this view.

Discuss your thoughts on the mind-body problem and its importance for a theory of artificial intelligence (Ex. 3, pg. 31)

The mind and body are overlapping entities, with at least the great majority of mental activity occurring in the brain and nervous system in general, which, of course, are parts of the body. I don't think that this is as controversial a statement as it was a few hundred years ago. The nervous system is a storage area for imprints and impressions of the real world, which have been colored by our past experiences and our present state (of body and mind). So we're not taking snapshots! We're taking in sensory information, and it's getting filtered and massaged with respect to all sorts of other activities that are going on in the body AT THE TIME OF SENSORY INPUT, as well as by other information already stored in the brain.

Classic AI, or Good-Old-Fashioned AI (GOFAI), has traditionally treated the mind and (remaining) body so separately that intelligence was seen as pure knowledge manipulation in the brain/mind, with the coupling to the rest of the body and to the physical world being ignored. In fact, the sufficiency side of Newell and Simon's Physical Symbol System Hypothesis (PSSH) implies that ANY symbol system, implemented in ANY medium, can exhibit intelligence. Hence, we can completely divorce A mind from a body and still call what the mind does "intelligent".
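
Just to illustrate what "symbol system" means here, a minimal sketch (the rules and symbols are entirely my own invention): a tiny production system that matches and emits symbols with no reference whatsoever to a body, sensors, or the physical medium it runs on.

```python
# A toy physical-symbol-system sketch: rules map symbol patterns to symbols,
# with no reference to any body or medium. The rules themselves are invented
# purely for illustration.

rules = {
    ("hungry", "has_food"): "eat",
    ("hungry", "no_food"): "search",
    ("sated", "has_food"): "store",
}

def infer(facts):
    """Return the action symbol designated by the first matching rule."""
    for condition, action in rules.items():
        if all(fact in facts for fact in condition):
            return action
    return None

print(infer({"hungry", "has_food"}))  # -> "eat"
```

The same rules could be run on silicon, on paper, or in a brain; under the sufficiency reading of the PSSH, that is all that matters.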

This, of course, was great news for AI researchers, who could maintain a dualistic separation between mind and body and just focus on the mind. Current robotics (and some general AI) work challenges this medium-independence assumption, and for good reason.

It now appears that so much of our knowledge is not objective facts about A REAL world, but merely the impressions that an individual has gotten from that world, and those impressions are strongly influenced by an individual's state, both mental and physical. Also, one's skill knowledge (e.g., how to ride a bike) is believed to be strongly influenced by one's physical sensors and actuators. So, for example, the "program" for bicycling in the brain of a light person with two equal-length legs may be quite different from the program for a heavy person with a longer left leg.

One of the real gems of intelligence is common sense. Yet this is one of the hardest things to pound into a computer. You just can't give it a bunch of rules as to how to be commonly sensible. Plenty of logicians have tried...with no major success. I believe that much of human common sense arises from our physical experience in the world and that most of that information simply cannot be transferred directly to another human or to a machine. They have to learn it themselves, by experiencing the world. So the key to truly intelligent machines is situated learning: robots placed in the real world and given lots of time to bootstrap themselves to Harvard!

In short, I don't believe that we'll ever have extremely intelligent machines until we give them real-world experience. So we cannot study intelligence in a vacuum (mind without (remaining) body). Integrated mind-body understanding is critical to the future of AI.

Criticize Turing's criteria for computer software being "intelligent" (Ex. 4, pg. 31)

My criticism is largely positive. If we're assessing the intelligence of the software alone (not software plus a body, as in a robot), then Turing's test is quite useful. It essentially tests for intelligent reasoning, regardless of whether the METHOD by which reasoning is carried out parallels that which occurs in humans: the focus is on the ends, not the means.

In a United States Supreme Court case on pornography, one of the judges said that he could not define pornography, but he knew it when he saw it. The Turing test follows the same philosophy: the observer may not have a clear definition of artificial intelligence, but (s)he will know it when (s)he cannot distinguish it from human intelligence.

Based on all the philosophical quagmires related to definitions of things like intelligence, rationality, consciousness, etc., I find performance-oriented tests like Turing's much more useful.

Describe your own criteria for computer software being intelligent (Ex. 5, pg. 31)

One improvement over the Turing test would be to couple the software to the physical environment: to test a robot.

The Turing test as such might be too difficult at this point, since the outside observer could probably tell the difference between a robot and a human; and to know exactly WHAT the robot is doing at all times, the observer would probably need an actual view of the robot, rather than just a description of what it was doing.

I therefore propose the Survivor Test. In this one, we stick a robot with a group of real people, just like in those popular reality-TV programs. The robot has to interact with everyone else for several weeks. As on television, a vote must be taken at regular intervals to remove people from the game. In this case, we might fix the criteria for the judging to questions such as:

1. Sympathetic actions toward others.
2. Ability to get what he/she/it wants.
3. Ability to cooperate.
4. General intelligence.

Just like in the Turing test, the criteria are all subjective. But now we focus on both individual intelligence and social intelligence.

Discuss why you think the problem of machines "learning" is so difficult (Ex. 10, pg. 31)

Learning is hard for at least two reasons:

1. We don't know the right primitives to give the computer so that it can BUILD knowledge in many useful ways.
2. State-of-the-art AI does not provide robots that can function fully in the physical world in order to build up the necessary common-sense knowledge that is needed as scaffolding for the learning of more complex things.

If we could solve problem 2, then we could solve problem 1 by giving our robots VERY simple primitive structures such as simulated neurons connected into a large structure. We could also give the neurons simple adaptive abilities, like those seen in real brains. Over time, those neural networks could develop sophisticated intelligence. This amounts to very little hard-wiring of the mental processes, since the individual develops its neural circuitry based on its experiences.
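
As a sketch of what I mean by "very simple primitive structures with simple adaptive abilities", here is a toy Hebbian update rule ("neurons that fire together wire together"); the network size, the random "experiences", and the learning rate are arbitrary choices of mine, purely for illustration.

```python
import numpy as np

# Toy sketch of adaptive neural primitives: a Hebbian update rule that
# strengthens connections between co-active "neurons". Sizes, inputs, and
# the learning rate are arbitrary illustrative choices.

rng = np.random.default_rng(0)
n_neurons = 8
weights = np.zeros((n_neurons, n_neurons))    # connection strengths
learning_rate = 0.1

for _ in range(100):                          # 100 random "experiences"
    activity = rng.integers(0, 2, n_neurons)  # which neurons fired (0 or 1)
    # Hebbian rule: co-active neurons get more strongly connected.
    weights += learning_rate * np.outer(activity, activity)
    np.fill_diagonal(weights, 0.0)            # no self-connections

print(weights.round(1))  # structure built from experience, not hard-wired
```

Nothing here is hard-wired except the update rule itself; whatever structure the weights end up with is a product of the experiences (random here, real-world for a robot) fed to the network.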

The problem with symbolic attempts at machine learning has to do with the conceptual bias imposed by the designer, who chooses a set of symbolic primitives that are useful for exactly those problems that the designer can foresee, but not much else. Hence, the system does learn, but it eventually reaches a limit from which further improvement is impossible. Human learners also hit such limits, but work-arounds are often possible by going back to basics and learning more primitives (i.e., studying more basic topics) before returning to the hard problems. Artificial systems do not usually have that option of autonomous improvement: a human engineer must step in and change the primitive set.

In short, real machine learning on par with human learning is always severely limited by the starting point (primitives) and the constraints on further development, such as time and possibilities for exposure to enough situations to learn something from.

List and discuss 2 potentially negative effects on society of the development of artificial intelligence techniques (Ex. 12, pg. 31)

An obvious negative effect could be a decline in the intelligence of the human population. We already observe this in children who begin using calculators at too early an age: they don't really understand arithmetic.

Some argue that this will just free the mere mortal intelligences to work on more difficult tasks, while others argue that you simply cannot get too far off into the wilds of productive creative thought without a firm grounding in the basics.

Another potential problem is that, in the extreme case, the AI robots of the future could decide that the mere mortals are a big waste of resources and should be exterminated. Granted, the humans will program these artificial intelligences and thus, supposedly, have full control over their behavior. However, it's quite clear that AI programs can surprise their developers, both positively and negatively. So if AI systems are given the capability to learn (i.e., if the roadblocks discussed in the previous section are overcome), then it will be hard to predict the eventual behaviors of our learning robots.

Now to take over a planet, these robots would also need to be able to reproduce. Artificial life research has made some interesting steps in this direction, with robots able to generate plans for new robots (both their bodies and brains) and send the plans to a special machine that makes all the parts. Currently, a human is still needed to assemble those parts, but robots are already prevalent on assembly lines all over the world. Why not assembly lines for other robots?

This is all, admittedly, science fiction, but many of the pieces of the fictional puzzle are, in fact, technologically possible today. I won't lose any sleep over it...but it is hard to give any convincing arguments why these worst-case scenarios are undeniably impossible.

Criticize Ada Lovelace's claim that computers can never do anything original or surprising, since they only do what they are programmed to do

First, programs only do what you program them to do, but who is capable of predicting what a 1000-line program will do when run for 5 days? As results in theoretical computer science show, the program itself is often the most concise description of its own behavior: we cannot analyze the program without running it and say anything conclusive about the content or scope of the final results...or even whether the system will EVER stop running! So nothing, aside from another identical computer that runs THE SAME program, can predict what the first computer will do in the time it takes the first computer to run...assuming that our program is running on the fastest computer available at the time.
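
A concrete little illustration of this point (my example, not from the text): the loop below is only a few lines long, yet nobody has a general argument that it even halts for every starting value - that is the open Collatz conjecture - so in practice the only way to find out what it does for a given input is to run it.

```python
# A few lines of code whose long-run behavior nobody can predict in general:
# whether this loop terminates for every positive n is the open Collatz
# conjecture. The only practical way to know what it does for a given input
# is to run it.

def collatz_steps(n):
    """Count the steps the Collatz process takes to reach 1 from n."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps -- hard to foresee without running it
```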

Second, computers can be programmed to learn. This is really not a separate issue, since the human also programs the learning mechanism. So in theory, the human could predict what the program would learn and how that learned information would affect future performance. But then it boils down to the same problem as before: how is anyone going to predict the behavior of this learning program faster than the program itself executes?

So if we cannot come up with a prediction of the program's behavior before it has finished, then we will, in many cases, be surprised by the result. You could counter this argument by claiming that time is unimportant. So we let the computer run for 5 minutes, lock the results in a secret vault, and then let a (very bored) human do some sort of predictive analysis that takes, say, 5 years! Then, if the human makes no major errors (talk about long odds!), he (I would imagine that no woman is stupid enough to take on that job!) could claim to be unsurprised by the computer's result. Unfortunately, if the computer's conclusion was that all mortals should be banished to Siberia, then the human prediction would be of little practical value.

Finally, the computer can be attached to sensors and thus situated in a physical environment. The computer's behavior will then be a function of real-world events, many of which are completely unpredictable. Once again, there is no way for a human to predict the computer's ultimate behavior without following along in time and watching each and every real-world event that impinges upon the computer's sensors.
