Engineering Self-Adaptive Modular Robotics: A Bio-Inspired Approach
Chih-Han Yu
Radhika Nagpal
chyu@fas.harvard.edu
rad@eecs.harvard.edu
School of Engineering & Applied Sciences, Harvard University, Cambridge, MA, USA
Wyss Institute for Biologically Inspired Engineering, Harvard University, Cambridge, MA, USA
Abstract— In nature, animal groups achieve robustness and scalability with each individual executing a simple and adaptive strategy. Inspired by this phenomenon, we propose a decentralized control framework for modular robots that achieves coordinated, self-adaptive tasks with each module performing simple distributed sensing and actuation [1]. In this demonstration, we show that such a framework allows several different modular robotic systems to achieve self-adaptation tasks scalably and robustly; example tasks include a module-formed table and bridge that adapt to a constantly perturbed environment, a 3D relief display that renders sophisticated objects, and a tetrahedral robot that performs adaptive locomotion.
I. INTRODUCTION

Biological systems gain a tremendous advantage by using vast numbers of simple and independent agents to collectively achieve group behaviors, e.g., fish schooling. In such systems, each local agent executes a simple adaptive strategy while the system as a whole achieves coordinated behavior and is highly adaptive to local changes. In addition, such a strategy scales with the number of agents. This biological self-organizing principle has inspired our recent control framework for programming modular robots [1].

In this paper, we demonstrate several modular robotic systems and tasks that can be formulated within our control framework, including forming self-adaptive structures, performing adaptive grasping, and achieving adaptive locomotion. Although these robotic systems and tasks are fundamentally different, our control framework is able to capture all of them and allows them to achieve the desired self-adaptive tasks. In addition, it is scalable in the number of modules and robust to real-world sensing and actuation noise. In [2], we explore the theoretical properties of this control strategy and analytically show that this class of algorithms exhibits superior performance in self-adaptive tasks. This demonstration is also a composition of various self-adaptive modular robotic tasks that coincides with our previous theoretical results.

Our algorithmic framework is simple and fully decentralized. Each module in the system is viewed as an independent and autonomous agent, and modules achieve the desired global tasks through inter-module cooperation. In addition, the system can autonomously adapt to changing environments, even though each module executes only a very simple control law. Recently, we also showed that this control strategy is closely related to a class of multi-agent algorithms called distributed consensus [1], [2]. Here, we demonstrate several tasks on two different types of modular robot systems. In the first type, each module is equipped with a rotary actuator and a tilt sensor; we show how our control framework allows such a robotic system to achieve environmentally-adaptive structure formation, e.g., forming a self-adaptive table. In the second type, each module is composed of a linear actuator and a pressure sensor; we show that a tetrahedral robot formed from such modules is capable of performing locomotion that adapts to different terrain conditions.

We first describe the design of our modular robot, and we then present a brief overview of the algorithmic framework.¹
II. APPROACH
A. Module Model
Here, we describe the modular robot model and the
capabilities associated with each module. We view each
module as an independent and autonomous unit that has the
following capabilities:
Computation: All modules execute an identical control law. We assume that the computational power of a single module is limited, and our focus is on simple local rules that do not require complex calculations.
Communication: Each module can communicate with its
immediate physically-connected neighbors.
Actuation: Each module is equipped with an actuator.
This can be either a rotary or a linear actuator. For example,
we equip each module with a rotary motor for the modular
gripper applications, while we provide linear actuators for
the terrain-adaptive bridge and tetrahedral robot.
Sensing: Each module is equipped with a sensor suited to the robotic application at hand. In the self-balancing table and terrain-adaptive bridge, a tilt sensor is incorporated into each module. In the modular gripper and pressure-adaptive column, a pressure sensor is associated with each module.
Task: Each task is described in terms of inter-agent
state constraints. For example, in the self-balancing table
application, we specify that each agent needs to maintain
zero-tilt angles with all of its neighbors. In the case of the
modular gripper, we specify that each module must achieve
an equal pressure state with all of its neighbors.
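As a concrete (hypothetical) sketch of this module model, the capabilities above can be summarized in code. The names, the scalar-sensor simplification, and the `constraint_error` helper are our illustrative assumptions, not the authors' implementation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Module:
    """One autonomous module: local sensing, actuation, neighbor links."""
    sensor: float = 0.0                    # e.g. tilt angle or pressure reading
    actuator: float = 0.0                  # rotary angle or linear extension
    neighbors: List["Module"] = field(default_factory=list)  # physical links

    def constraint_error(self) -> float:
        # Task as inter-agent state constraints, e.g. "equal pressure
        # (or zero relative tilt) with all neighbors": the summed
        # pairwise difference is zero exactly when the task is satisfied.
        return sum(n.sensor - self.sensor for n in self.neighbors)

a, b = Module(sensor=1.0), Module(sensor=3.0)
a.neighbors, b.neighbors = [b], [a]
print(a.constraint_error())  # 2.0: a reads less pressure than its neighbor
```

The point of the sketch is that each task is expressed purely through pairwise differences with physically connected neighbors, so no module needs global state.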
B. Algorithm
In this section, we provide a brief overview of our approach. Our framework is based on an iterative sensing and
actuation procedure: once a desired task has been specified, all modules sense the environment and coordinate to perform actuation until the desired goal has been reached. Here, we use a self-balancing table and a modular gripper as examples to illustrate our proposed algorithm (as shown in Fig. 2). Our algorithm can be divided into the following three steps:

Step 1 (Initialization): Modules start sending messages to their neighbors. In the modular gripper case, this process is initiated as soon as one of the modules starts sensing the presence of an object. Modules transmit these messages to identify their neighbors and to map local connectivity.

Step 2 (Sensing): At each time step, the modules that are equipped with sensors start propagating their sensor readings to neighboring modules. In the self-balancing table, these messages contain sensory information and are aggregated by pivot modules (dark red modules), i.e., modules that are both horizontally and vertically connected to other modules. After collecting all the messages, the pivot modules then transmit the aggregated information to the supporting modules (blue modules in Fig. 2 (a)). In the case of the modular gripper, each module simply transmits its current pressure readings to its immediate neighbors.

Step 3 (Actuation): After receiving the sensory information, each module's controller uses that data as input from which to compute appropriate actuation parameters. In [2], we propose a generalized distributed consensus approach to derive our control law. The generalized form of the control law is as follows:

x_i(t + 1) = x_i(t) + α · Σ_{a_j ∈ N_i} g(θ_i, θ_j),  (1)

where x_i(t) represents the state of module i at time t, N_i is the set of module i's neighbors, and g(θ_i, θ_j) is the feedback that module i receives from neighboring module j. We showed that as long as g is appropriately designed, i.e., it satisfies the three conditions outlined in [4], our modules will eventually achieve the desired state [1].

Empirically, this approach shows robustness against real-world sensing and actuation noise. For further details, please refer to [1], [2], [4].

¹ For further details of the hardware design and algorithm, please see [1].

CONFIDENTIAL. Limited circulation. For review only.

Fig. 1. Different self-adaptation tasks. (a) Self-balancing table. The module-formed legs are capable of maintaining the table surface level at all times irrespective of tilt changes. (b) Terrain-adaptive bridge. The bridge is able to achieve a flat bridge surface when it is placed on rough terrain. (c) A 3D relief display that is capable of rendering complicated shapes irrespective of environmental conditions. (d) A modular gripper. The modules cooperatively form a configuration that grasps a fragile object, e.g., a balloon, with each module coordinating only with its local neighbors. (e) A modular tetrahedral robot performs locomotion by a sequence of self-adaptations to the external environment. The robot is capable of moving toward the light source.

Fig. 2. Algorithmic overview of the self-adaptation tasks. (a) Self-balancing table example. Step 1: Modules start sending messages to their neighbors to identify their local connectivity. Step 2: Some modules sense environmental changes and propagate their sensory information to other modules. Step 3: Supporting modules perform actuation based on the sensory information they received. (b) Modular gripper. Step 1: The first contacted module starts propagating messages to its neighbors. Step 2: Each module sends its current pressure reading to its neighbors. Step 3: Each module performs actuation based on the pressure readings it receives from its neighbors.
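To make the iterative procedure concrete, here is a minimal numerical sketch of update rule (1) with the illustrative choice g(θ_i, θ_j) = θ_j − θ_i, a standard feedback that satisfies the required convergence conditions. The ring topology, the gain α = 0.2, and all names are our assumptions, not the authors' code:

```python
def consensus_step(x, neighbors, alpha):
    # One synchronous application of Eq. (1) with g(a, b) = b - a:
    # each module nudges its state toward its neighbors' states.
    return [xi + alpha * sum(x[j] - xi for j in neighbors[i])
            for i, xi in enumerate(x)]

# Four modules in a ring; each communicates only with its physical neighbors.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
x = [0.0, 2.0, 5.0, 1.0]   # initial local states (e.g. tilt readings)
for _ in range(200):
    x = consensus_step(x, neighbors, alpha=0.2)
print(x)  # all states converge to the common value 2.0 (the average)
```

With α small enough relative to the number of neighbors, each step is the local corrective actuation of Step 3, and the fixed point is exactly the consensus state that the inter-agent constraints specify.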
III. REAL ROBOT DEMONSTRATIONS

In this video demonstration, we show five different experimental results from robots operating under our framework.

Self-Balancing Table/Bridge: The robot is composed of four supporting groups (legs), each of which consists of three modules. The legs support a single rigid surface that has the tilt sensor mounted in the middle. In the first experiment, we placed the self-balancing table on several terrains of varying roughness and tilt angle. Our results showed that the robot is capable of maintaining a level table surface irrespective of initial conditions and subsequent perturbations (Fig. 1 (a)(b)). In the second experiment, we placed the terrain-adaptive bridge on rough terrain and showed that our modular robot was able to maintain a level bridge surface after running the algorithm for 60 iterations.

Self-Adaptive Structure Simulations: We also constructed several self-adaptive structure simulations to examine the scalability of our algorithm, including an adaptive bridge, a 3D building, and a 3D relief structure. Our results show that this approach is scalable in the number of modules (Fig. 1 (c)).

Adaptive Grasping: We further demonstrate how a module-formed gripper achieves adaptive grasping via "pressure consensus" formation among modules. We gave the robot an inflated balloon to carry, and the gripper was capable of grasping the balloon with uniform pressure. In addition, the robot autonomously adjusts to maintain a uniform pressure state when it is perturbed by an exogenous force (Fig. 1 (d)).

Adaptive Locomotion: Finally, we demonstrate that this approach can be applied to a module-formed tetrahedral robot's adaptive locomotion tasks. In our experiment, the robot successfully moved toward the light source at a speed of 10 cm/sec on flat terrain. When the environmental conditions allow, the agents exploit gravity to assist locomotion; e.g., on a steeper slope, the locomotion cycle time adapts to become shorter (Fig. 1 (e)).

REFERENCES

[1] C.-H. Yu, F.-X. Willems, D. Ingber, and R. Nagpal, "Self-organization of environmentally-adaptive shapes on a modular robot," in Proc. Intl. Conf. on Intelligent Robots and Systems (IROS 07), 2007.
[2] C.-H. Yu and R. Nagpal, "Self-adapting modular robotics: A generalized distributed consensus framework," submitted to the International Conference on Robotics and Automation, 2009.
[3] R. Olfati-Saber, J. Fax, and R. Murray, "Consensus and cooperation in networked multi-agent systems," in Proc. of the IEEE, 2007.
[4] C.-H. Yu and R. Nagpal, "Sensing-based shape formation tasks on modular multi-robot systems: A theoretical study," in Proc. Intl. Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS), 2008.
[5] S. Murata, E. Yoshida, A. Kamimura, H. Kurokawa, K. Tomita, and S. Kokaji, "M-TRAN: Self-reconfigurable modular robotic system," IEEE/ASME Trans. Mechatron., vol. 7, no. 4, pp. 431–441, 2002.

Preprint submitted to 2009 IEEE International Conference on Robotics and Automation. Received September 15, 2008.