Intelligent Robotics – Recap

Lecture 1:
What is a robot?
Definition 1 – not adequate nowadays (example: slide with different ‘robots’)
Definition 3 – the right one (important: reprogrammable -> leads to multipurpose)
Sales of robots are increasing every year
South Korea has the most robots, then Japan, Germany, …
Robots out there are mostly employed in the automotive industry and in the production of
electronics, metal structures, and chemicals
Lecture 2:
Additional papers
1) Sowbug (behavior-based robotics, BBR)
Abstract
Reintroduces & evaluates the Sowbug, proposed in the 1930s (based on purposive
behaviorism; implements a behavior-based architecture). The Sowbug navigates its
environment based on two vectors: orientation & progression.
Something else of Tolman’s: the concept of the cognitive map, introduced by Tolman.
Tolman studied how people and rats store information about their physical
location with respect to the environment (past/present/future)
In Tolman’s purposive behaviorism, behavior initially implied performance. In
other words, the goal was to investigate how motivation, cognition, and purpose
were interconnected with stimulus and response.
Tolman derived a formula to compute the value of
behavior from environmental stimuli,
psychological drive,
heredity,
previous training, and
age
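This is often summarized as Tolman’s equation (standard shorthand, not on the slide):
B = f(S, P, H, T, A)
where B is behavior, S the environmental stimuli, P the drive, H heredity, T previous
training, and A age/maturity.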
Purposive behaviorism considers the same issues as modern behavior-based
robotics:
How to produce intelligent behavior from multiple, concurrent and parallel
sensori-motor pathways
How to interpret outputs
How to introduce goal-oriented behavior
How to include motivation & emotion
How to support developmental growth to influence behavior
Answer: Sowbug model (based on purposive behaviorism)
a) Receptor Organ – photo-sensors that perceive light in environment
b) Orientation Distribution, Orientation need, and Orientation tension
a. Orientation Distribution – Output of photo-sensors – affects
curve on the front of the Sowbug
b. Orientation Need – connected with orientation tension
c. Orientation Tension – motivational demand (if hungry, the
tension is higher if Sowbug sees food)
c) Orientation Vector – Interaction between various Orientations (causes
only rotational movements of Sowbug)
d) Progression Distribution, Hypothesis, Progression Tension
e) Progression Vector
With the combination of the Orientation Vector & Progression Vector, the Sowbug is
expected to respond to stimuli in the environment (rotating + moving
[based on the Orientation Need])
The Orientation Need is an internal state of the Sowbug
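A minimal sketch of how the two vectors might combine into motion (hypothetical names
and gains, not the paper’s code): the orientation vector only rotates the agent, the
progression vector only drives it forward.

import math

class Sowbug:
    def __init__(self, x=0.0, y=0.0, heading=0.0, hunger=0.5):
        self.x, self.y, self.heading = x, y, heading
        self.hunger = hunger  # motivational state driving Orientation Tension

    def orientation_vector(self, stimulus_bearing):
        # Rotational demand: turn toward the stimulus, scaled by Orientation Tension.
        error = stimulus_bearing - self.heading
        error = math.atan2(math.sin(error), math.cos(error))  # wrap to [-pi, pi]
        return self.hunger * error

    def progression_vector(self, stimulus_distance):
        # Forward demand: stronger when the stimulus is closer.
        return 1.0 / (1.0 + stimulus_distance)

    def step(self, stimulus_bearing, stimulus_distance, dt=0.1):
        self.heading += self.orientation_vector(stimulus_bearing) * dt   # rotate only
        speed = self.progression_vector(stimulus_distance)
        self.x += speed * math.cos(self.heading) * dt                    # progress only
        self.y += speed * math.sin(self.heading) * dt

bug = Sowbug()
for _ in range(50):
    bug.step(stimulus_bearing=1.0, stimulus_distance=2.0)
print(round(bug.heading, 2))   # heading drifts toward the stimulus bearing (~1.0)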
The Sowbug is inspired by Lewin’s “psychological life space” and Loeb’s “tropism
theory”. Lewin attempted to form an equation that could predict a person’s behavior
for a given event.
Phototactic behavior – connect each motor to the light sensor on the opposite side –
record & compare statistical results
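A tiny sketch of the cross-wiring idea (assumed names and gains, Braitenberg-style, not
the paper’s implementation):

def phototaxis_step(left_light, right_light, gain=1.0):
    # Each motor is driven by the light sensor on the OPPOSITE side,
    # so the robot turns toward the brighter side (positive phototaxis).
    left_motor = gain * right_light
    right_motor = gain * left_light
    return left_motor, right_motor

# Light is brighter on the left -> right motor spins faster -> robot turns left.
print(phototaxis_step(left_light=0.9, right_light=0.2))   # (0.2, 0.9)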
Behavior-based robotics architecture – subsumption architecture
Implementation of Sowbug
2) A Robust Layered Control System For a Mobile Robot
Abstract
A new architecture for controlling mobile robots
Asynchronous layers of control; each level is a simple computational
machine
Higher-level layers can subsume lower-level layers by suppressing their
outputs
Introduction
A completely autonomous mobile robot must be able to perform many complex
information-processing tasks in the environment (whose boundaries
change rapidly)
The usual way to build a control system for a mobile autonomous robot is to
decompose the processing of everything perceived by the sensors into a series
(roughly) of functional units (vertical slicing)
Use task-achieving behaviors to decompose a problem (horizontal slicing)
Vertical slicing (old) as opposed to Horizontal slicing (new)
Each slice is implemented explicitly & tied together to form a robot controller
Horizontal slicing is more robust
Requirements for a controller of an autonomous mobile robot:
Multiple Goals
Some goals may be conflicting
Example: Reach destination while avoiding obstacles
Relative importance of goals
System must be responsive to high-priority goals while still
servicing necessary “low-level” goals (e.g. getting off the tracks while still
maintaining balance)
Multiple Sensors
All sensors have an error component in their readings
Some sensors will overlap in the data they measure
Robustness
When some sensors fail, the robot must adapt and cope by relying on
those that are still functioning
When the environment changes drastically, the robot must still achieve
some sensible behavior
Ideally, the robot should still function even if it has faults in its
processors
Extensibility
Robot needs more processing power as new sensors/capabilities
are added
Other approaches:
New languages have been designed to support multiple objectives
In practice, the data from one sensor tends to dominate the others
(sensors are not really combined)
Little work has been done on the robustness of processors
Assumptions
Complex behavior does not have to be the product of a complex
controller
Things should be kept simple
We want to be able to build cheap robots that can do things
autonomously
The robot must model the world in 3D in order to be integrated into a
human environment
Relational maps are more useful than absolute maps
Visual data is much better for interaction than sonar data
The robot must perform well even if sensors fail. Recovery must be
fast (self-calibration must be occurring all the time) -> we try to make all
processing steps self-calibrating
We are interested in building artificial robots that can survive for a
long time without human assistance
There are many approaches for building an autonomous robot:
The traditional one is to decompose the real-world problem into sub-problems, solving each sub-problem independently
This approach does things differently
Traditional robots slice the problem into: (horizontal decomposition
into vertical slices)
 sensing
 mapping sensor data onto a model of the real world
 planning
 task execution
 motor control
sensing -> mapping -> planning -> execution -> action (from the
environment -> through the robot -> back to the environment) ->
feedback loop through the world + internal feedback loops
Disadvantage: an instance of each slice must be built before the robot can
operate at all. If changes are made to one slice, changes to the connected slices could be
required as well
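As a toy illustration of this serial pipeline (stub logic and names are assumptions, not
from the paper):

def control_step(sonar_readings):
    # Traditional vertical-slice controller: every stage must exist before the robot acts.
    # Sensing happens upstream: sonar_readings come straight from the sensors.
    world_model = {"obstacles": [r for r in sonar_readings if r < 0.5]}   # mapping
    plan = "stop" if world_model["obstacles"] else "forward"              # planning
    command = {"stop": (0.0, 0.0), "forward": (1.0, 1.0)}[plan]           # task execution
    return command                                                        # motor control

print(control_step([0.9, 0.3]))   # -> (0.0, 0.0): an obstacle was sensed, so the plan is to stop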
A level of competence of a robot is a specification of a class of behaviors of the
robot for all environments it might encounter -> the basis for the vertical
decomposition (into horizontal layers) presented in this paper
Higher level of competence means more specialized class of behaviors
(higher level <-> more specific behaviors)
Levels of competence used: (each level includes the earlier levels [as its
subset]. Higher levels provide additional constraints)
0) Avoid contact with objects
1) Wander aimlessly around without hitting things
2) Explore the world by seeing things in front and approaching
them
3) Build a map of the environment and plan routes around
4) Notice changes in static environment
5) Reason about the environment in terms of identified objects
6) Formulate and execute plans that would change the state of the
world in some desirable way
7) Reason about the behavior of objects and modify plans with
respect to possible objects’ behaviors
Key Idea: can build layers of a control system corresponding to each level of
competence (and add new layer to move to higher level of overall
competence)
We start building our robot with level 0 competence (we never alter that
system later) and then we build each new level on top of the preceding level.
The preceding level is unaware of the level above itself
^^^ called a subsumption architecture ^^^
In the subsumption architecture, we have a working controller very early –
as soon as we have the first layer
With subsumption architecture:
Multiple goals:
Individual layers can work on individual goals concurrently
Advantage: No need to know in advance which goal is to be
pursued (pursuing all the goals can lead to the ultimate solution)
Multiple sensors:
The sensor data fusion problem can largely be ignored – not all sensors need to
feed into a central representation (a layer may use only the readings it considers extremely reliable)
Other layers may be processing other sensors’ values
(so every sensor is still given a fair chance of being used)
Robustness:
1) Multiple sensors + multiple objectives at different layers
2) Lower levels continue to function when higher levels
are added; if a higher level fails to act, the lower levels still
produce sensible results, albeit at a lower level of competence
Extensibility:
Each level can run on a separate processor
When we construct each level, we do not need to account for EVERY desired
perception/processing/generated behavior
Example:
Level 1 – does local navigation + avoids obstacles
Level 2 – delivers the robot to a desired location
Suppressor -> a layer higher in competence can suppress a lower layer’s signal,
replacing it with its own signal
Inhibitor -> a higher layer can also inhibit (block) a lower layer’s output signal,
without providing a replacement
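A minimal sketch of suppressor/inhibitor wiring for the Level 1 / Level 2 example above
(toy signals and assumed names; not Brooks’s actual network):

def level1(sonar, suggested_heading=0.0):
    # Level 1: local navigation + obstacle avoidance.
    if min(sonar) < 0.3:
        return "turn away"
    return "go heading %.2f" % suggested_heading

def level2(goal_bearing):
    # Level 2: deliver the robot to a desired location by suggesting a heading (or None).
    return goal_bearing

def suppressor(higher, lower):
    # Suppressor node: the higher layer's signal, when present, REPLACES the lower one.
    return higher if higher is not None else lower

def inhibitor(signal, inhibit):
    # Inhibitor node: when the inhibiting side is active, the signal is simply BLOCKED.
    return None if inhibit else signal

# Level 2 suppresses the default heading suggestion fed into Level 1.
heading_in = suppressor(level2(goal_bearing=1.2), lower=0.0)
print(level1(sonar=[0.8, 0.9], suggested_heading=heading_in))   # "go heading 1.20"
print(level1(sonar=[0.1, 0.9], suggested_heading=heading_in))   # "turn away" (avoidance still wins)
# An inhibitor could instead silence an output entirely, e.g. stop wandering for a while:
print(inhibitor("wander", inhibit=True))                        # None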