I've made available a set of Review Questions. They are on the Blackboard site under
"Course Materials" and on the class home page, on the schedule page
<http://www.cs.utk.edu/~mclennan/Classes/102/schedule.html>, under Mar. 4.
These questions are of the same kind I will be asking on the exam. Try doing them
just as you would on an exam, then check the Key to see how you did. Some of the
questions deal with things we haven't gotten to yet; just answer them as best you
can, even if it's only a guess, because it will help you learn these ideas when we
get to them.
Reactive Robot Programming:
Early AI focused on human intellectual activities (things that clearly require human
intelligence): for example, playing chess and other board games, proving
mathematical theorems, doing logic, and so on.
Early robotics took a similar, intellectual approach to controlling robot
behavior, based on:
– memory
– knowledge
– planning (analyzing goals and how to achieve them)
– representations of the environment (internal maps and the like).
Problems:
The robot controlled its own behavior by thinking about it (reasoning about it).
This turned out to be very slow (because thinking is slow).
We do not normally think about walking, for example, we just do it.
Memory and representations can both be wrong (either bad data was stored
initially, or it has become obsolete). This leads to problems of knowledge update
and to what is called non-monotonic reasoning.
All of this is very complicated and computationally expensive.
As a result, some early robots (such as "Shakey") took hours to plan each move.
Paradox: How do insects, which have very tiny brains, behave so intelligently?
(Note: this is intelligent behavior, not intellectual intelligence.) Behavioral
intelligence is exactly what we want for many robotic applications.
They don't seem to be using a lot of memory or much planning. Their brains don't
seem to be big enough to hold complex representations of the environment. They
don't seem to be capable of reasoning.
There is much that we don't understand about the information processing power of
neurons. Nevertheless, innovative approaches to AI and robotic control have been
able to achieve results more comparable to those of insects. This suggests that
there might be other ways of controlling behavior, ways that don't involve a lot of
complex reasoning or representations of the environment.
We also have to be careful about apparently sophisticated behavior. Herb Simon
(Nobel laureate) noted that complex behavior is often the result of a simple system
operating in a complex environment. (He used the example of an ant.)
All of these observations led to a new approach to robotics called reactive control,
which means the agent reacts to its current environment to determine its current
behavior. There is very little, if any, memory or planning.
(Insects are not purely reactive.)
As much as possible, in reactive control there is no memory, planning, or
representation. The environment represents itself. The result is that some problems
simply disappear: if there are no representations, they cannot be wrong or out of
date.
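To make the idea concrete, here is a minimal sketch in C of a purely reactive
controller for obstacle avoidance. The functions readBumper and setMotorSpeed,
and the LEFT/RIGHT constants, are hypothetical stand-ins for whatever API your
robot actually provides:

#define LEFT  0
#define RIGHT 1

int  readBumper (int side);               /* hypothetical: 1 if pressed */
void setMotorSpeed (int side, double v);  /* hypothetical: -1.0 .. 1.0 */

void reactiveLoop (void) {
    for (;;) {
        /* No memory, no map, no plan: the current sensor readings
           alone determine the current motor commands. */
        if (readBumper(LEFT)) {            /* obstacle on the left */
            setMotorSpeed(LEFT, 1.0);      /* spin right, away from it */
            setMotorSpeed(RIGHT, -1.0);
        } else if (readBumper(RIGHT)) {    /* obstacle on the right */
            setMotorSpeed(LEFT, -1.0);     /* spin left */
            setMotorSpeed(RIGHT, 1.0);
        } else {
            setMotorSpeed(LEFT, 1.0);      /* clear: drive straight */
            setMotorSpeed(RIGHT, 1.0);
        }
    }
}

Notice that nothing is stored between iterations: the controller's "decision" is
recomputed from scratch from the current sensor readings on every pass.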
Braitenberg's Vehicles:
He was trying to understand how the brain controls behavior, and how even simple
reactive control systems could lead to apparently complex behavior.
Law of uphill analysis and downhill synthesis:
It is often easier to design a system with a given behavior (downhill synthesis)
than to take a system with that behavior and analyze it to figure out how it works
(uphill analysis).
So, for example, it's often easier to design a robot with insect-like behavior than
to figure out how insects work. Designing an agent with a given behavior gives you
insights into how to do it, the problems that have to be solved, potential solutions,
etc. Braitenberg (in his book Vehicles) described a series of increasingly complex
reactive robots, exhibiting behaviors suggestive of intelligence.
Normalization:
Taking raw data and putting it in a form that is more useful for computation.
For example, taking light sensor values in the 0 .. 5000 range, where lower
numbers are brighter (which is what the sensor delivers), and putting them in a
more useful form, such as 0.0 .. 1.0, where bigger numbers mean brighter.
To do this, we write a little normalization function:
double normalize (int sensorValue) {
    /* Map a raw reading in 0 .. 5000 (lower = brighter) to
       0.0 .. 1.0 (higher = brighter). */
    return 1.0 - (double) sensorValue / 5000;
}
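For example, a raw reading of 1000 (fairly bright, since lower means brighter)
normalizes to 1.0 - 1000/5000 = 0.8.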
To be even more useful, we really want to know the brightness relative to the
ambient light intensity. This is a simple example of adaptation, because the robot
adapts to the ambient light intensity.
There are a number of ways to do this.
double normalize (int sensorValue) {
    /* Ambient is the sensor's reading under ambient light (see below).
       Readings darker than ambient are clipped, so the result stays
       in 0.0 .. 1.0. */
    if (sensorValue > Ambient) {
        sensorValue = Ambient;
    }
    return 1.0 - (double) sensorValue / Ambient;
}
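The notes don't say where Ambient comes from; one plausible approach (a sketch,
with readLightSensor as a hypothetical stand-in for the robot's API) is to sample
the sensor at startup, before the run begins:

int Ambient = 5000;   /* sensor reading under ambient light; set at startup */

int readLightSensor (int side);   /* hypothetical: raw 0 .. 5000 reading */

void calibrateAmbient (void) {
    /* Average several readings taken under ambient conditions. */
    long sum = 0;
    int i;
    for (i = 0; i < 10; i++) {
        sum += readLightSensor(0);   /* 0 = left sensor, say */
    }
    Ambient = (int) (sum / 10);
}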
These examples have an inverse linear relation between the sensor value and the
normalized value, but in some cases you might want a logarithmic or other
dependency, because your sensor might not respond linearly to light intensity, and
you want to correct for that.
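For instance, if the sensor's raw reading fell off logarithmically with light
intensity, you could exponentiate the normalized value to undo that. This is only
a sketch of the idea; the exact correction would come from calibrating your
particular sensor:

#include <math.h>

double normalizeLog (int sensorValue) {
    /* Hypothetical correction for a sensor whose reading varies
       logarithmically with intensity: normalize as before, then
       exponentiate, rescaling the result back into 0.0 .. 1.0. */
    double n = 1.0 - (double) sensorValue / 5000;
    return (exp(n) - 1.0) / (exp(1.0) - 1.0);
}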
Some Braitenberg Vehicles:
These can exhibit simple but intelligent, even intentional-looking, behaviors.
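For example, Braitenberg's Vehicle 2 connects two light sensors directly to two
motors. Here is a sketch using the normalize function above (readLightSensor and
setMotorSpeed are again hypothetical stand-ins for the robot's API): with
uncrossed connections (2a) the brighter side's motor runs faster, so the vehicle
turns away from the light; with crossed connections (2b) it turns toward it.

#define LEFT  0
#define RIGHT 1

int  readLightSensor (int side);          /* hypothetical: raw 0 .. 5000 */
void setMotorSpeed (int side, double v);  /* hypothetical: 0.0 .. 1.0 */
double normalize (int sensorValue);       /* as defined above */

void vehicle2 (int crossed) {
    for (;;) {   /* purely reactive: sense, then act */
        double left  = normalize(readLightSensor(LEFT));
        double right = normalize(readLightSensor(RIGHT));
        if (crossed) {
            /* 2b ("aggression"): each sensor drives the opposite
               motor, so the vehicle turns toward the light. */
            setMotorSpeed(LEFT, right);
            setMotorSpeed(RIGHT, left);
        } else {
            /* 2a ("fear"): each sensor drives its own side's motor,
               so the vehicle turns away from the light. */
            setMotorSpeed(LEFT, left);
            setMotorSpeed(RIGHT, right);
        }
    }
}

The wiring alone, not any internal reasoning, is what makes one vehicle look
"afraid" of the light and the other look "aggressive" toward it.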