From: AAAI Technical Report SS-94-04. Compilation copyright © 1994, AAAI (www.aaai.org). All rights reserved.

MONITORING THE EXECUTION OF SENSORY ROBOT PROGRAMS*

Vincenzo Caglioti (1, presently 3), Massimo Danieli (2), Domenico Sorrenti (3)

1  International Computer Science Institute, 1947 Center Street, Berkeley, California 94704-1105, U.S.A.
2  ABB SAE Sadelmi s.p.a., P.le Lodi 3, 20137 Milano, Italy
3  Artificial Intelligence and Robotics Project, Dipartimento di Elettronica e Informazione, Politecnico di Milano, P.zza Leonardo da Vinci 32, 20133 Milano, Italy
Abstract - A system is presented which monitors the execution of an assigned robot program in order to detect execution errors. The assigned program is supposed to include sensor instructions, which allow the execution to adapt to variable environment conditions. In the presented system, the task information is represented in terms of a relationship between environment conditions and workcell evolution. The selection of the monitoring sensor detections is based on the accuracy in checking the state variables and on the ambiguity in matching the sensor measures to the variables to be measured.
1. Introduction

Robot programs including sensor instructions are generally aimed at producing a given, desired workcell evolution, independent of the environment conditions: e.g., whatever the entrance position of an input object, this object is assembled to other objects in a predefined workcell location. This situation is in accordance with the "one product" assumption adopted by Fielding et al. [4]. However, this is not the only possibility. In fact, a sensorized program could also be aimed at producing a workcell evolution that depends on the environment conditions: e.g., the set of objects assembled with an input object may depend on the geometry of the input object. In this way, the robot flexibility can be exploited in order to perform different types of assemblies.

In this work a monitoring system is illustrated that: (i) plans and executes sensing strategies aimed at assessing the execution correctness of a sensory robot program, and (ii) if an execution error is detected, plans sensing strategies aimed at determining the error state.

* This work has been supported by: the Italian National Research Council, C.N.R., PFR2 "MANUEL"; IBM Semea (contract: Error Recovery in Assembly Robots).

Many current approaches to error recovery ([1], [3], [4], [5], [13]) are of limited applicability in most practical cases, where: (i) task level information, or even object level information, is not directly available, since the robot task is specified by manipulator level programs (AML and VAL are among the most widely used programming languages), and (ii) several sensor instructions are often included in realistic programs, in order to let the workcell evolution vary according to the variable environment conditions.

In the system presented, the task information is represented by means of a relationship (step conditions) between the environment conditions and the workcell evolution. These conditions have to be satisfied at the end of the execution of each program step. The task information is organized in a forest structure (a forest is a set of trees) called skeleton. Each node of the forest represents a workcell state. The roots of the forest represent the possible initial states, and the leaves represent the possible final states of the program.
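For concreteness, such a skeleton can be pictured as the following data structure; this is only an illustrative Python sketch, and the class and field names (SkeletonNode, step_conditions, children) are ours rather than the paper's.

# Illustrative sketch (not the authors' implementation): a skeleton is a
# forest of workcell states linked by program steps, where each node also
# carries the step conditions to be satisfied at the end of the step
# leading into it.

class SkeletonNode:
    def __init__(self, state, step_conditions=()):
        self.state = state                            # symbolic workcell state (object poses, contacts, ...)
        self.step_conditions = list(step_conditions)  # predicates to verify at the end of the step
        self.children = []                            # possible successor states

    def add_child(self, node):
        self.children.append(node)
        return node

    def is_leaf(self):
        return not self.children                      # leaves are possible final states

# A skeleton is simply the list of roots (possible initial states).
skeleton = [SkeletonNode(state="O2 is of class OBJ2"),
            SkeletonNode(state="O2 is of class OBJ3")]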
The task information is automatically extracted from the given program, written in the manipulator level language AML [2], [15]. The set of the correct (or nominal) evolutions associated to an assigned program can be obtained by means of the simulation of an errorless execution of the program [16].

The monitoring system selects, for each program step, a set of physical variables that are to be checked in order to verify the step conditions. Each of the selected physical variables has to be checked at least once within a non-critical sequence, which is a sequence of steps between two critical events for it (for instance, if v is a variable related to a physical object in the workcell, e.g., its position, a critical event for v is the modification of the contacts with the other objects). The possible sensor detections (SDs) are evaluated a priori with respect to their accuracy and ambiguity in measuring a considered physical variable. The ambiguity is related to the risk that the sensed features and the scene features are not matched correctly. Both coefficients are determined by the simulation of the sensor detections. The best SD, among those capable of checking the considered physical variable in the considered non-critical sequence, is then selected. The chosen SDs, related to the whole set of physical variables, are then combined and give rise to an augmented set of SDs; the new entries of the SD set are built by aggregating the SDs that can share some computation (e.g., at the end of step i the two SDs aimed at checking the positions of Obj-j and Obj-l can be aggregated because both exploit the actual line drawing of camera-k). By selecting different sequences of SDs, different sensing strategies can be generated. At this point the problem reduces to the search for the (time) best sensing strategy. The problem can be viewed as a weighted set covering problem [6]: the set of the physical variables to be verified represents the set to be covered, and the SDs represent the covering elements. Each SD covers one or more physical variables, and each physical variable is considered once for each of its non-critical sequences. The weight of each SD is represented by the estimate of its time-cost. We solved the problem by means of a specialization of the Branch and Bound algorithm for the zero-one problem [6].
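As a rough illustration of this formulation, and not of the authors' implementation, the selection can be sketched as a zero-one branch and bound over the candidate SDs; the data layout, the function name and the example entries and costs below are our own assumptions.

# Illustrative sketch of the SD selection as weighted set covering,
# solved by a simple branch-and-bound over 0/1 inclusion decisions.
# "variables" are the (physical variable, non-critical sequence) items
# to be covered; each candidate SD covers a subset of them and carries an
# estimated time-cost as weight.

def select_sds(variables, sds):
    """variables: set of items to cover; sds: list of (name, covered_set, cost)."""
    best = {"cost": float("inf"), "chosen": None}

    def branch(i, covered, chosen, cost):
        if cost >= best["cost"]:
            return                          # bound: already worse than the best plan found
        if covered >= variables:
            best["cost"], best["chosen"] = cost, list(chosen)
            return
        if i == len(sds):
            return                          # some variable remains uncovered on this branch
        name, cov, c = sds[i]
        branch(i + 1, covered | cov, chosen + [name], cost + c)   # include SD i
        branch(i + 1, covered, chosen, cost)                      # exclude SD i

    branch(0, set(), [], 0.0)
    return best["chosen"], best["cost"]

# Example use (invented entries and costs): two loc detections and one photo detection.
chosen, cost = select_sds(
    variables={"xO1", "rO1", "xO2"},
    sds=[("loc(tv1,OBJ1)", {"xO1", "rO1"}, 2.0),
         ("loc(tv2,OBJ2)", {"xO2"}, 2.0),
         ("photo", {"xO2"}, 0.1)])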
The considered sensor detections are:

1. A photo primitive, whose output is a binary signal relative to a photocell mounted between the fingers of the robot gripper. If an object is present between the fingers, then the output is active.

2. A loc(x, y, theta, phi, psi, Obj) primitive, which determines the position and orientation of a modeled object based on the linear segments extracted from a single image. The loc primitive performs a 3D (i.e., six degrees of freedom) localization [17].

3. A match(x, y, theta, phi, psi, Obj) primitive, which compares an expected scene line-drawing with the actually extracted line-drawing, and then performs a 3D localization.

In [14] a criterion is proposed for planning the best sensor configuration in order to check an object feature; it can be seen as a first step toward an active monitoring system. The presented system, on the other hand, is only capable of determining that some physical variables can be poorly checked (or even not checked at all), due to the configuration of the workcell objects and sensors.
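Read as an interface, the three primitives could be stubbed as follows; this is an assumed Python rendering of the signatures quoted above, not an API defined in the paper.

# Assumed interface stubs for the three sensor detections described above.
# Return types and parameter meanings follow the text; implementations are
# deliberately omitted.

def photo():
    """Binary photocell between the gripper fingers: 1 if an object
    interrupts the beam, 0 otherwise."""
    raise NotImplementedError

def loc(camera, obj_model):
    """Localize obj_model from the line segments extracted from a single
    image of `camera`; returns a six degrees of freedom pose."""
    raise NotImplementedError

def match(camera, obj_model, expected_line_drawing):
    """Compare the expected line-drawing with the one actually extracted
    from `camera`, then return a pose as in loc()."""
    raise NotImplementedError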
2. An example of skeleton

Our experimental workcell (Fig. 1) is equipped with a four degrees of freedom SCARA IBM-7547 robot. A photocell is mounted on the robot gripper. Two fixed cameras, indicated by tv1 and tv2, are present. On both cameras the primitives match and loc are implemented.
Two objects are involved in the example. The first object (O1), belonging to the OBJ1 class, is a holed parallelepiped with dimensions (11x7x3) along, respectively, the x, y, z axes of the object-centered reference. The origin of this reference is on a vertex of the base of the parallelepiped. The hole is centered on the upper face of the parallelepiped, and its dimensions are (1x3x1.5). The second object (O2) can belong to either of the two classes OBJ2 and OBJ3. Both are parallelepipeds of dimensions (1x3xh): for the OBJ2 class h=5, while for the OBJ3 class h=2. In the initial state of the program execution O2 is present in the workcell in the position xO2=70, yO2=70, zO2=-50, rO2=0 (rO2 indicates the roll angle). The initial position of object O1 is xO1=-55, yO1=0, zO1=-50, rO1=0. The robot program consists first in determining the class of O2: if its class is OBJ2, then O2 is inserted into O1; otherwise, O2 is put into a basket. The AML/E program ([2]) is:

HOME: NEW PT(103,0,0,0);        -- the parking position
BASKET: NEW PT(0,90,-30,90);    -- the basket position
x1,y1,r1: STATIC COUNTER;       -- cartesian coordinates and roll angle of O1
pmove(PT(70.5,71.5,-30,90));    -- move to the vertical of O2
zmove(-46.5);                   -- move to mid height between OBJ2 and OBJ3
compc(photo = 0, GO-TO-BASKET); -- if O2 = OBJ3 (h = 2) go to basket
zmove(-30);
pmove(PT(-49.5,0,-30,90));      -- move to the vertical of O1
zmove(-48.5);                   -- move to grasp
grasp;                          -- grasp O1
zmove(-30);
pmove(PT(80, 80, -30, 90));     -- move to the vertical of the assembly position
zmove(-48.5);                   -- move O1 to the assembly position
release;                        -- open hand
zmove(-30);
pmove(PT(70.5,71.5,-30,90));    -- move to the vertical of O2
zmove(-46.5);
grasp;                          -- grasp O2
zmove(-30);
pmove(PT(80, 80, -30, 90));     -- move toward the assembly position
zmove(-45.0);                   -- insert O2 into O1
release;
branch(GO-HOME);
:GO-TO-BASKET;
zmove(-49);
grasp;                          -- grasp O2
zmove(-30);
pmove(BASKET);
release;
:GO-HOME;
pmove(HOME);

Fig. 1  Experimental Setup
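The class test in the program above works because the photocell beam is brought exactly midway between the two possible object tops; the fragment below is a hypothetical rendering of that arithmetic, assuming the objects rest at z = -50, as the initial positions suggest.

# Hypothetical rendering of the class test performed by the photocell.
# With the object base at z = -50, the top of an OBJ3 (h = 2) is at -48 and
# the top of an OBJ2 (h = 5) is at -45; the beam at z = -46.5 lies between
# them, so only an OBJ2 interrupts it.

Z_BASE = -50.0
Z_BEAM = -46.5

def photo_reading(height):
    top = Z_BASE + height
    return 1 if top > Z_BEAM else 0   # beam interrupted only by the taller object

assert photo_reading(5) == 1          # OBJ2: insert O2 into O1
assert photo_reading(2) == 0          # OBJ3: go to the basket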
The program skeleton, extracted by the nominal-case simulator, is reported in Fig. 2. The node N0 contains the initial state corresponding to the case in which O2 belongs to the class OBJ2, while N1 contains the initial state corresponding to the case in which O2 belongs to the class OBJ3. The forest actually consists of two linear sequences: one starting from N0, the other starting from N1.
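One way such a skeleton could be derived is sketched below: the program is executed symbolically from each possible initial state, and every sensor test is resolved on the nominal state, so each root unfolds into a single linear sequence. This is only an illustrative sketch, not the simulator of [16]; the instruction encoding and function names are ours.

# Illustrative sketch: for each possible initial state the program is
# executed symbolically; each sensor test (compc) is evaluated on the
# nominal state, so every root expands into one linear sequence of
# skeleton nodes, as in Fig. 2.

def label_index(program, label):
    return next(i for i, ins in enumerate(program) if ins[0] == "label" and ins[1] == label)

def build_skeleton(program, initial_states, predict):
    """predict(state, sensor) -> nominal sensor reading in that state."""
    skeleton = []
    for state in initial_states:
        nodes, pc = [("root", state)], 0
        while pc < len(program):
            op = program[pc][0]
            if op == "compc":
                sensor, value, label = program[pc][1]
                # the nominal reading decides which branch the evolution follows
                pc = label_index(program, label) if predict(state, sensor) == value else pc + 1
            elif op == "branch":
                pc = label_index(program, program[pc][1])
            elif op == "label":
                pc += 1
            else:
                nodes.append((program[pc], state))   # one skeleton node per executed step
                pc += 1
        skeleton.append(nodes)
    return skeleton   # e.g. two linear sequences, one per initial class of O2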
3. An example of monitoring plan

For simplicity, the monitoring example illustrated here involves a skeleton constituted by a single node sequence.
Two objects are involved in this example. The first object (O1), which belongs to the OBJ1 class, is a holed parallelepiped with dimensions 11x7x3. The hole is centered on the upper face of the parallelepiped and its dimensions are 3x1x1.5. The second object (O2), belonging to the OBJ2 class, is a parallelepiped with dimensions 1x3x5. In the initial state of the program execution, only O2 is present in the workcell. Its initial position is (70, 70, -50, 0). O1 is introduced into the workcell at the initial position (-55, -2.5, -50, 5.01). The robot program consists first in grasping O2 in its initial position and then in inserting it into the hole of O1. The AML/E program is reported below, and the corresponding program skeleton in Fig. 3.

HOME: NEW PT(103, 0, 0, 0);                 -- the parking position
O2VERT: NEW PT(70.5, 71.5, -30, 90);        -- a point along the vertical of O2
O1VERT: NEW PT(-49.826, 1.467, -30, 5.01);  -- a point along the vertical of O1
pmove(O2VERT);                              -- move to the vertical of O2
zmove(-46);                                 -- move toward the grasping position of O2
grasp;                                      -- grasp O2
zmove(-30);
pmove(O1VERT);                              -- move to the vertical of O1
zmove(-44.5);                               -- insert O2 into O1
release;                                    -- open hand
zmove(-30);
pmove(HOME);                                -- return to the parking position
:END.

Fig. 2  A program skeleton

Fig. 3  A sequential program skeleton
The node sequence is analyzed in order to find the physical variables to be verified and to determine the corresponding non-critical sequences. For instance, at node N0 the physical variable xO2=70 is found, with a related non-critical sequence (N0, N1, N2). The non-critical sequence for xO2 ends at node N2, since in N3 the object O2 is grasped by the robot.
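A possible way to compute such non-critical sequences is sketched below; the representation of nodes and critical events is assumed by us and is not taken from the paper.

# Illustrative sketch: split the node sequence of a variable into
# non-critical sequences, i.e. maximal runs of nodes between two critical
# events (here: any step that changes the contacts of the object the
# variable refers to, such as grasping or releasing it).

def non_critical_sequences(nodes, variable, is_critical):
    """nodes: ordered skeleton nodes; is_critical(node, variable) -> bool."""
    sequences, current = [], []
    for node in nodes:
        if is_critical(node, variable):
            if current:
                sequences.append(current)
            current = []                 # the variable must be re-checked afterwards
        else:
            current.append(node)
    if current:
        sequences.append(current)
    return sequences

# Example matching the text: grasping O2 at N3 is a critical event for xO2,
# so (N0, N1, N2) is the first non-critical sequence for xO2.
nodes = ["N0", "N1", "N2", "N3", "N4"]
seqs = non_critical_sequences(nodes, "xO2",
                              lambda n, v: n == "N3" and v == "xO2")
assert seqs[0] == ["N0", "N1", "N2"]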
After the analysis has been completed, the sensor simulation is executed along the node sequence. The table below reports the accuracy (2σ) and the ambiguity (γ) relative to the activation of the loc primitive on tv1 at node N8, with respect to the position parameters of objects O1 and O2, as calculated on the basis of the expected line drawing.

  variable    2σ            γ
  xO1         0.5583        0.5
  yO1         0.0092        0.5
  zO1         0.3791        0.5
  rO1         5.821 10^-7   0.5
  xO2         0.122         0.25
  rO2         8.735 10^-4   0.25
The results relative to the y and z coordinates of the object O2 are neglected, since their accuracy is below a given threshold. The plan generated in correspondence to the node sequence is composed of the following sensor instructions, in which the spatial positions are indicated by the roll, pitch and yaw angles and by the x, y, z coordinates, referred to the frames attached to the cameras tv1 and tv2:

(:read (photo1) :correct (value 0) :node
(:read (tv2 (LOC OBJ2)) :correct (value (OBJ2 (0, 0, 120, 0))) :node
(:read (photo1) :correct (value 1) :node
(:read (photo1) :correct (value 1) :node
(:read (tv1 (LOC OBJ1)) :correct (value (OBJ1 -0.782, -3.055, 4.246, -2.5, 72.46))) :node
(:read (photo1) :correct (value 1) :node
(:read (tv2 (LOC OBJ1)) :correct (value (OBJ1 0.087, -105, 47.5, 0))) :node
(:read (photo1) :correct (value 1) :node
(:read (tv1 (LOC OBJ1)) :correct (value (OBJ1 -0.782, -3.055, 4.246, -2.5, 72.46))) :node
(:read (tv2 (LOC OBJ2)) :correct (value (OBJ2 -1.483, -101.381, 52.095, 1.5))) :node N8)
(:read (tv1 (LOC OBJ1 OBJ2)) :correct (value ((OBJ1 3.018, -0.782, -3.055, 4.246, -2.5, 72.46)) (OBJ2 -1.644, -0.073, 2.359, 0.614, 1.833, 73.97)))) :node N9)
(:read (photo1) :correct (value 0) :node

where the keyword :read indicates the sensor that must be activated; the keyword :correct is followed by the predicates to be satisfied by the sensor data; the keyword :node indicates the node to which the instruction is associated. (Several node identifiers were lost in the available copy of the listing and are left blank.)
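A plan entry of this form could be represented and checked roughly as follows; the entry layout mirrors the listing above, but the sensor registry, the tolerance and the function name are our own assumptions.

# Illustrative sketch (not the authors' code): execute one monitoring-plan
# entry by activating the named sensor and testing the :correct predicate.
# The sensor registry and the tolerance on pose comparisons are assumptions.

TOLERANCE = 1.0   # assumed acceptance band on each pose parameter

def check_entry(entry, sensors):
    """entry: dict with 'read', 'correct', 'node'; sensors: name -> callable."""
    actual = sensors[entry["read"]]()              # activate the sensor detection
    expected = entry["correct"]
    if isinstance(expected, (int, float)):         # binary readings (photo)
        ok = actual == expected
    else:                                          # pose readings (loc / match)
        ok = all(abs(a - e) <= TOLERANCE for a, e in zip(actual, expected))
    return ok, entry["node"]

# Example with stubbed sensors, using the tv2 entry at node N8 above.
sensors = {"photo1": lambda: 1,
           "tv2.loc.OBJ2": lambda: (-1.483, -101.381, 52.095, 1.5)}
ok, node = check_entry({"read": "tv2.loc.OBJ2",
                        "correct": (-1.483, -101.381, 52.095, 1.5),
                        "node": "N8"}, sensors)
assert ok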
4. Current developments

During the program execution, the selected sensor detections may eventually detect an error. In order to proceed with the recovery of the robot operations, the cause of the occurred error must be determined. To this aim a diagnosis system has been designed; this system is described briefly in the sequel (experimental results are not yet available).

As an error is detected by the monitoring system, the diagnosis system is called into operation in order to determine the error state. Wrongly positioned objects and defective objects are among the considered classes of error causes. In order to determine the workcell state which is the most "likely", the diagnosis system uses information about the past actual execution, e.g., past actions and past sensor detections (both the actual and the expected readings). To consistently integrate the available information and thus discriminate the most likely state hypotheses, an uncertainty management mechanism is adopted, based on evolutions ([7], [8], [9]) of Dempster-Shafer's theory of evidence ([10], [11]).
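For reference, the sketch below shows the standard Dempster-Shafer support (belief) and plausibility of a hypothesis under a basic probability assignment; it illustrates the quantities on which the categories of [8] are built, not the specific evolutions of the theory adopted by the system.

# Standard Dempster-Shafer quantities (illustrative only; the diagnosis
# system uses evolved variants of the theory, see [7]-[9]).
# m: basic probability assignment, mapping frozensets of error hypotheses
# to masses that sum to 1.

def belief(m, hypothesis):
    """Total mass committed to subsets of the hypothesis (its support)."""
    return sum(mass for focal, mass in m.items() if focal <= hypothesis)

def plausibility(m, hypothesis):
    """Total mass not committed against the hypothesis."""
    return sum(mass for focal, mass in m.items() if focal & hypothesis)

# Example with two error causes: O2 wrongly positioned (W) or defective (D).
m = {frozenset({"W"}): 0.5, frozenset({"D"}): 0.2, frozenset({"W", "D"}): 0.3}
h = frozenset({"W"})
print(belief(m, h), plausibility(m, h))   # 0.5 and 0.8: support and plausibility of W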
As a result of the incremental analysis of the error hypotheses, the belief state of each hypothesis is evaluated. The belief state is categorized according to its support and its plausibility: in [8] six categories are proposed. The hypotheses whose belief state falls in either of the groups believed and rather believed than disbelieved are considered for further validation. For this last task, these hypotheses are handled as correctness conditions to be verified by the monitoring system; namely, a sensing strategy is planned and executed in order to verify each hypothesis. If the results of the executed sensor detections allow one of the considered hypotheses to be confirmed, then the analysis task is concluded and a recovery strategy can be planned. Otherwise, the current set of hypotheses (together with their belief states) is updated in view of the results of the executed sensor detections, and the process is re-iterated until either a hypothesis is confirmed or no sensor is available to check the error state.
References

[1] R. Smith, M. Gini, "Reliable real-time robot operation employing intelligent forward recovery", Int. Journal of Robotic Systems, Vol. 3, n. 3, pp. 281-300, (1986)
[2] AML/Entry v.4 User's Guide, 2nd ed., IBM Press, Aug. 1985
[3] A. Hoermann, W. Meier, J. Scholen, "A control architecture for an advanced fault-tolerant robot system", Robotics and Autonomous Systems, Vol. 7, pp. 211-225, (1991)
[4] P. Fielding, F. Di Cesare, G. Goldbogen, "Error recovery in automated manufacturing through the augmentation of programmed processes", Int. Journal of Robotic Systems, Vol. 5, n. 4, pp. 337-362, (1988)
[5] G. R. Meijer, L. O. Hertzberger, T. L. Mai, E. Gaussens, F. Arlabosse, "Exception handling systems for autonomous robots based on PES", Robotics and Autonomous Systems, Vol. 7, pp. 197-209, (1991)
[6] H. M. Salkin, K. Mathur, "Foundations of Integer Programming", North Holland, (1989)
[7] A. Bonarini, E. Cappelletti, A. Corrao, "Network-Based Management of Subjective Judgements: A Proposal Accepting Cyclic Dependencies", IEEE Transactions on Systems, Man and Cybernetics, vol. 22, n. 5, 1992
[8] D. Driankov, "Uncertainty calculus with verbally defined belief-intervals", Int. Journ. on Intelligent Systems, vol. 1, 1986
[9] D. Driankov, "Toward a many-valued logic of quantified belief: The information lattice", Int. Journ. on Intelligent Systems, vol. 6, 1991
[10] G. Shafer, "A Mathematical Theory of Evidence", Princeton University Press, Princeton (1976)
[11] G. Shafer, "Theory and Practice of Belief Functions", Int. Journ. on Approximate Reasoning, vol. 4, n. 5/6, 1990
[13] T. Cao, A. C. Sanderson, "Sensor-based error recovery for robotic task sequences using fuzzy Petri nets", Proc. IEEE Intnl. Conf. on Robotics and Automation, (1992)
[14] K. Tarabanis, R. Y. Tsai, P. K. Allen, "Automatic sensor planning for robotic vision tasks", Proc. IEEE Int. Conf. on Robotics and Automation, (1991)
[15] R. H. Taylor, P. D. Summers, J. M. Meyer, "AML: a manufacturing language", The Int. Journ. of Robotics Research, vol. 1, n. 3, 1982
[16] V. Caglioti, M. Somalvico, "Symbolic simulation of sensory robot programs", in B. Torby, T. Jordanides (eds.), "Expert Systems and Robotics", NATO ASI Series, Springer-Verlag, (1990)
[17] V. Caglioti, "The planar three-lines junction perspective problem, with application to the recognition of polygonal patterns", Pattern Recognition, Vol. 26, n. 11, (1993)