SICE Journal of Control, Measurement, and System Integration, Vol. 6, No. 5, pp. 322–330, September 2013
Passivity-Based Visual Feedback Pose Regulation Integrating
a Target Motion Model in Three Dimensions
Tatsuya IBUKI∗, Takeshi HATANAKA∗, and Masayuki FUJITA∗
Abstract : This paper investigates passivity-based visual feedback pose regulation whose goal is to control a vision
camera pose so that it reaches a desirable configuration relative to a moving target object. For this purpose, we present
a novel visual feedback estimation/control structure including a vision-based observer called visual motion observer
under the assumption that a pattern of the target motion is available for control. We first focus on the evolution of the
orientation part and the resulting estimation/control error system is proved to be passive from the observer/control input
to the estimation/control error output. Accordingly, we also prove that the control objective is achieved by just closing
the loop based on passivity. Then, we prove convergence of the remaining position part of the error system. We moreover
extend the present velocity input to force/torque input taking account of camera robot dynamics. Finally, the effectiveness
of the present estimation/control structure is demonstrated through simulation.
Key Words : vision-based motion estimation, pose regulation, rigid body motion, passivity.
1. Introduction
Numerous research works have been devoted to integration
of control theory and computer vision [1]–[7]. Early works are
mainly motivated by robot control as in [8]. On the other hand, the motivating scenarios of the integration currently spread beyond robotic systems into security and surveillance systems [9], medical imaging procedures [10], and even the understanding of biological perceptual processing [11].
In this paper, we study a vision-based estimation/control
problem for a target object moving in three dimensions as in
[12]–[14]. The goal of this paper is to regulate the camera pose
(position and orientation) to the desired configuration relative
to the target by using only visual measurements. To achieve
this objective, one of our previous works [14] presents a vision-based 3D pose estimation mechanism, called the visual motion observer, for a moving target based on passivity of rigid-body motion. The authors in [14] also propose pose control laws of the camera so that it tracks the target, and analyze the tracking performance in the framework of $L_2$-gain, where the target velocities are viewed as unknown external disturbances.
On the other hand, the paper [15] presents a novel estimation
mechanism integrating target object motion models in order to
eliminate the estimation errors for the moving target. Here, the authors assume certain target motion patterns, namely constant or periodic motion, and add an integral term, inspired by classical control theory, to the observer input to cancel tracking errors.
These patterns are commonly used in visual servoing and visual tracking [1],[2],[16]–[18]. However, the paper [15] does
not consider control of the camera pose at all.
This paper tackles a vision-based 3D pose regulation problem investigated in [14] and incorporates the approach of [15]
into the framework of [14]. Namely, we present a novel visual
∗ Department of Mechanical and Control Engineering, Tokyo Institute of Technology, Tokyo 152-8550, Japan
E-mail: fujita@ctrl.titech.ac.jp
(Received October 11, 2012)
(Revised January 9, 2013)
Fig. 1 Coordinate frames for visual feedback system.
feedback estimation/control structure integrating the target object motion model. After introducing the observer, we formulate a total estimation/control error system. We first focus only
on the orientation part of the error system. Then, we show that
an appropriate line connection recovers passivity of the subsystem. We thus prove from the property that just closing the
loop via a negative feedback regulates the orientation and angular velocity to desirable states. Then, we shift our focus to the
remaining position part of the error system and prove that the
position and linear velocity are also regulated to desirable states
by the structure. We moreover extend the present velocity input
to force/torque input taking account of camera robot dynamics. Finally, the effectiveness of the present estimation/control
mechanism is demonstrated through simulation.
2. Problem Statement
2.1 Rigid Body Motion
In this paper, we consider a visual feedback system illustrated in Fig. 1, where $\Sigma_w$, $\Sigma_c$ and $\Sigma_o$ represent the world frame, the camera frame and the object frame, respectively. We denote the pose of the origin of $\Sigma_c$ relative to $\Sigma_w$ by $g_{wc} = (p_{wc}, e^{\hat\xi_{wc}\theta_{wc}}) \in SE(3)$. Here, $\xi_{wc} \in \mathbb{R}^3$ ($\|\xi_{wc}\| = 1$) and $\theta_{wc} \in \mathbb{R}$ specify the direction and angle of rotation, respectively. For simplicity, we use $\xi\theta_{wc}$ to denote $\xi_{wc}\theta_{wc}$ hereafter. The notation '$\wedge$' is the operator such that $\hat a b = a \times b$, $a, b \in \mathbb{R}^3$, for the vector cross product $\times$. The notation '$\vee$' is the inverse operator to '$\wedge$'. Similarly, we denote the pose of the origin of $\Sigma_o$ relative to $\Sigma_w$ by $g_{wo} = (p_{wo}, e^{\hat\xi\theta_{wo}}) \in SE(3)$.
We also denote the body velocity of the camera relative to $\Sigma_w$ by $V^b_{wc} = [v_{wc}^T\ \omega_{wc}^T]^T \in \mathbb{R}^6$, where $v_{wc} \in \mathbb{R}^3$ and $\omega_{wc} \in \mathbb{R}^3$ respectively represent the linear and angular velocities of the origin of $\Sigma_c$ relative to $\Sigma_w$ [19]. Similarly, the body velocity of the object relative to $\Sigma_w$ is denoted by $V^b_{wo} = [v_{wo}^T\ \omega_{wo}^T]^T \in \mathbb{R}^6$.

Throughout this paper, we use the following homogeneous representations of $g$ and $V^b$:

$$ g = \begin{bmatrix} e^{\hat\xi\theta} & p \\ 0 & 1 \end{bmatrix} \in \mathbb{R}^{4\times4}, \qquad \hat V^b = \begin{bmatrix} \hat\omega & v \\ 0 & 0 \end{bmatrix} \in \mathbb{R}^{4\times4}. $$

Then, the body velocities $V^b_{wc}$ and $V^b_{wo}$ are simply given by $\hat V^b_{wc} = g_{wc}^{-1}\dot g_{wc}$ and $\hat V^b_{wo} = g_{wo}^{-1}\dot g_{wo}$, respectively. Additionally, the adjoint transformation [19] associated with $g$ is denoted by

$$ \mathrm{Ad}_{(g)} := \begin{bmatrix} e^{\hat\xi\theta} & \hat p\, e^{\hat\xi\theta} \\ 0 & e^{\hat\xi\theta} \end{bmatrix} \in \mathbb{R}^{6\times6}, $$

which satisfies $V' = \mathrm{Ad}_{(g)} V$ if $\hat V' = g \hat V g^{-1}$.

We finally define the pose of $\Sigma_o$ relative to $\Sigma_c$ as $g_{co} = (p_{co}, e^{\hat\xi\theta_{co}}) := g_{wc}^{-1} g_{wo} \in SE(3)$ and the body velocity as $\hat V^b_{co} = g_{co}^{-1}\dot g_{co}$. Then, the definition of $\hat V^b_{co}$ yields the relative rigid body motion [19]:

$$ \dot g_{co} = -\hat V^b_{wc}\, g_{co} + g_{co} \hat V^b_{wo}. \tag{1} $$
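The geometric objects above are straightforward to manipulate numerically. The following is a minimal sketch (our illustration, not code from the paper), assuming NumPy and SciPy are available; it implements the $\wedge$ operator, the homogeneous representation, the adjoint transformation, and one integration step of the relative rigid body motion (1).

```python
import numpy as np
from scipy.linalg import expm

def hat3(a):
    """'^' on R^3: hat3(a) @ b equals the cross product a x b."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def hat6(V):
    """Homogeneous representation of a body velocity V = [v; omega]."""
    Vh = np.zeros((4, 4))
    Vh[:3, :3] = hat3(V[3:])
    Vh[:3, 3] = V[:3]
    return Vh

def adjoint(g):
    """Ad(g) in R^{6x6} for g = [[R, p], [0, 1]] in SE(3)."""
    R, p = g[:3, :3], g[:3, 3]
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R
    Ad[:3, 3:] = hat3(p) @ R
    Ad[3:, 3:] = R
    return Ad

def rrbm_step(g_co, V_wc, V_wo, dt):
    """Integrate (1): g_co_dot = -hat(V_wc) g_co + g_co hat(V_wo).
    The matrix exponentials keep g_co on SE(3) and are exact when
    both body velocities are constant over the step."""
    return expm(-dt * hat6(V_wc)) @ g_co @ expm(dt * hat6(V_wo))
```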
2.2 Visual Measurement

In this subsection, we define the visual measurement of the vision camera which is available for estimation/control. Throughout this paper, we use the pinhole camera model¹ with the perspective projection [19] (Fig. 2).

Fig. 2 Simple vision camera model.

Suppose that the target object has $k$ ($k \ge 4$) feature points whose positions relative to $\Sigma_o$ are denoted by $p_{oi} \in \mathbb{R}^3$, $i \in \{1, \cdots, k\}$. Then, a coordinate transformation yields the positions of the feature points relative to $\Sigma_c$ as $p_{ci} = g_{co} p_{oi}$, where $p_{ci}$ and $p_{oi}$ should be respectively regarded as $[p_{ci}^T\ 1]^T$ and $[p_{oi}^T\ 1]^T$. Let the $k$ feature points of the object on the image plane coordinate $f := [f_1^T \cdots f_k^T]^T \in \mathbb{R}^{2k}$ be the visual measurement of the vision camera. Then, it is well known [19] that each $f_i \in \mathbb{R}^2$ is given by the perspective projection:

$$ f_i = \frac{\lambda}{z_{ci}} \begin{bmatrix} x_{ci} \\ y_{ci} \end{bmatrix}, \quad p_{ci} = [x_{ci}\ y_{ci}\ z_{ci}]^T, \tag{2} $$

where $\lambda > 0$ is the focal length of the vision camera (Fig. 2).

We assume that the feature points $p_{oi}$ are known a priori. Then, the visual measurement vector $f(g_{co})$ depends only on the relative pose $g_{co}$. Figure 3 depicts the block diagram of the relative rigid body motion with the camera model.

Fig. 3 Block diagram of camera model with RRBM (RRBM is the acronym for relative rigid body motion).

¹ Note that the subsequent discussions are applicable to panoramic camera models through the appropriate modifications as in [20].
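Continuing the sketch above (again our hypothetical illustration), the visual measurement $f(g_{co})$ of (2) follows directly; the default focal length matches the value $\lambda = 0.005$ m used in the simulation of Section 6.

```python
def visual_measurement(g_co, p_o, lam=0.005):
    """Perspective projection (2): stack f_i = (lam / z_ci) [x_ci, y_ci]^T
    for k feature points p_o (a k x 3 array of coordinates in Sigma_o)."""
    k = p_o.shape[0]
    p_h = np.hstack([p_o, np.ones((k, 1))])   # homogeneous [p_oi; 1]
    p_c = (g_co @ p_h.T).T[:, :3]             # p_ci = g_co p_oi, in Sigma_c
    f = (lam / p_c[:, 2:3]) * p_c[:, :2]      # (x_ci, y_ci) scaled by lam/z_ci
    return f.reshape(-1)                      # f in R^{2k}
```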
2.3 Visual Pose Regulation

In this subsection, we formulate the control objective of this paper. Before mentioning it, we suppose that the target object body velocity $V^b_{wo}$ is (approximately) given in the form of a finite Fourier series expansion:

$$ V^b_{wo}(t) = c + \sum_{i=1}^{n} \left(a_i \sin w_i t + b_i \cos w_i t\right), \tag{3} $$

where $a_i = [a_{v,i}^T\ a_{\omega,i}^T]^T \in \mathbb{R}^6$, $b_i = [b_{v,i}^T\ b_{\omega,i}^T]^T \in \mathbb{R}^6$, $c = [c_v^T\ c_\omega^T]^T \in \mathbb{R}^6$ and the frequencies $w_i > 0$, $i \in \{1, \cdots, n\}$, are known a priori. Here, it should be noted that a constant velocity model is the special case of (3) ($a_i = b_i = 0\ \forall i$).
Under the assumption of (3), the objective of this paper is to design the camera velocity input $V^b_{wc}$ in order to achieve the following pose regulation conditions:

$$ \lim_{t\to\infty} \left(V^b_{wc} - \mathrm{Ad}_{(g_{co})} V^b_{wo}\right) = 0, \tag{4a} $$
$$ \lim_{t\to\infty} E_R(g_d^{-1} g_{co}) = 0, \quad E_R(g) := [p^T\ e_R^T(e^{\hat\xi\theta})]^T \in \mathbb{R}^6, \tag{4b} $$

based only on the visual measurement $f(g_{co})$. Here,

$$ e_R(e^{\hat\xi\theta}) := \mathrm{sk}(e^{\hat\xi\theta})^\vee \in \mathbb{R}^3, \quad \mathrm{sk}(e^{\hat\xi\theta}) := \frac{1}{2}\left(e^{\hat\xi\theta} - e^{-\hat\xi\theta}\right) \in \mathbb{R}^{3\times3}, $$

and $g_d = (p_d, e^{\hat\xi\theta_d}) \in SE(3)$ specifies a fixed desirable configuration of the camera relative to the object. Throughout this paper, the problem is called visual pose regulation.
Let us now give some properties of (3) necessary for the subsequent discussions. We define

$$ z_{v,0} = c_v, \quad z_{v,i} = a_{v,i} \sin w_i t + b_{v,i} \cos w_i t \in \mathbb{R}^3, $$
$$ z_{\omega,0} = c_\omega, \quad z_{\omega,i} = a_{\omega,i} \sin w_i t + b_{\omega,i} \cos w_i t \in \mathbb{R}^3, $$
$$ z_v = [z_{v,1}^T \cdots z_{v,n}^T]^T, \quad z_\omega = [z_{\omega,1}^T \cdots z_{\omega,n}^T]^T \in \mathbb{R}^{3n}, $$
$$ x = [x_v^T\ x_\omega^T]^T := [z_{v,0}^T\ z_v^T\ \dot z_v^T\ z_{\omega,0}^T\ z_\omega^T\ \dot z_\omega^T]^T \in \mathbb{R}^{12n+6}. $$

Then, it is straightforward to see that the time evolution of $V^b_{wo}$ is represented by the following linear time-invariant system:

$$ \dot x = Ax, \quad A := \begin{bmatrix} A_v & 0 \\ 0 & A_\omega \end{bmatrix} \in \mathbb{R}^{(12n+6)\times(12n+6)}, \tag{5a} $$
$$ V^b_{wo} = Cx, \quad C := \begin{bmatrix} C_v & 0 \\ 0 & C_\omega \end{bmatrix} \in \mathbb{R}^{6\times(12n+6)}, \tag{5b} $$
$$ A_v = A_\omega := \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & I_{3n} \\ 0 & -\mathrm{diag}(w_1^2, \cdots, w_n^2) \otimes I_3 & 0 \end{bmatrix}, $$
$$ C_v = C_\omega := \begin{bmatrix} \mathbf{1}_{n+1}^T \otimes I_3 & 0 \end{bmatrix}, \quad \mathbf{1}_n = [1 \cdots 1]^T \in \mathbb{R}^n. $$
The following fact is also proved to be true in [15].
Fact 1 [15] Let $B_\omega = C_\omega^T$. Then, the linear system $(A_\omega, B_\omega, C_\omega, 0)$ with state $x_\omega$ is passive with respect to the storage function $S(x_\omega) := (1/2)x_\omega^T P x_\omega$ with

$$ P := \begin{bmatrix} I_{3(n+1)} & 0 \\ 0 & \mathrm{diag}(1/w_1^2, \cdots, 1/w_n^2) \otimes I_3 \end{bmatrix} \in \mathbb{R}^{(6n+3)\times(6n+3)}. $$
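As a concrete aid to (5) and Fact 1, the following hypothetical helper (our sketch, not the authors' code) assembles one 3-axis block of the model, i.e. $A_v = A_\omega$ and $C_v = C_\omega$, together with the storage-function weight $P$, from the known frequencies $w_1, \cdots, w_n$:

```python
def motion_model(freqs):
    """One block of (5): A_v = A_w, C_v = C_w, plus P of Fact 1,
    for the state [z_0; z_1..z_n; z_1_dot..z_n_dot] of 3-vector entries."""
    freqs = np.asarray(freqs, dtype=float)
    n = len(freqs)
    W2 = np.diag(np.repeat(freqs ** 2, 3))        # diag(w_i^2) (x) I_3
    A = np.zeros((6 * n + 3, 6 * n + 3))
    A[3:3 + 3 * n, 3 + 3 * n:] = np.eye(3 * n)    # d/dt z_i = z_i_dot
    A[3 + 3 * n:, 3:3 + 3 * n] = -W2              # d/dt z_i_dot = -w_i^2 z_i
    C = np.hstack([np.tile(np.eye(3), (1, n + 1)),
                   np.zeros((3, 3 * n))])          # sums z_0 + z_1 + ... + z_n
    P = np.eye(6 * n + 3)
    P[3 * (n + 1):, 3 * (n + 1):] = np.diag(np.repeat(1.0 / freqs ** 2, 3))
    return A, C, P
```

For the target motion used later in Section 6, `motion_model([1.0, 0.5])` gives the angular (and, identically, the linear) block; the full pair of (5a)-(5b) is the block diagonal of two such copies.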
Remark 1 A key example of (3) is a constant velocity $V^b_{wo} = c$ [1],[2],[16] or a typical rectangular wave. The model is in practice useful not only for genuinely constant velocities, since any signal can be approximated by a piecewise step function, which in turn can be approximated by finite Fourier series expansions. Similarly, (3) is helpful even if $V^b_{wo}$ is not really periodic, since a future profile of $V^b_{wo}$ over a finite interval can be approximated as (3). Namely, it is possible to regard the estimation process over the infinite time interval as repeats of the estimation over a finite time interval. A variety of real periodic motions are also approximately described in the form of (3).
3. Structure Design for Visual Pose Regulation
3.1 Full Information Feedback
In this subsection, we present a feedback control system to achieve pose regulation under the assumption that the relative pose $g_{co}$ and the target object body velocity $V^b_{wo}$ are available for control.

We define the control error $g_{ce}^f = (p_{ce}, e^{\hat\xi\theta_{ce}}) := g_d^{-1} g_{co} \in SE(3)$ and $V_{wc}^f := \mathrm{Ad}_{(g_d^{-1})} V_{wc}^b \in \mathbb{R}^6$. Then, we obtain

$$ \dot g_{ce}^f = -\hat V_{wc}^f\, g_{ce}^f + g_{ce}^f \hat V_{wo}^b \tag{6} $$

from (1). We now fix the form of the velocity input $V_{wc}^f$ as

$$ V_{wc}^f = -u_c + \mathrm{Ad}_{(g_{ce}^f)} V_{wo}^b, \tag{7} $$

where $u_c = [u_{cp}^T\ u_{cR}^T]^T \in \mathbb{R}^6$ is the new input to be determined so as to drive $g_{co}$ to the desirable $g_d$. Substituting (7) into (6) cancels the second term $g_{ce}^f \hat V_{wo}^b$ of (6), and thus we get

$$ \dot g_{ce}^f = \hat u_c g_{ce}^f. \tag{8} $$

Let $u_c$ and $\nu_f := E_R(g_{ce}^f) \in \mathbb{R}^6$ be respectively the input and output vectors of the system (8), whose block diagram is illustrated in Fig. 4.

Fig. 4 Block diagram of full information feedback control.

Then, the following fact is proved to be true in [14].
Fact 2 [14] The system (8) is passive from $u_c$ to $\nu_f$ with the storage function $\psi(g_{ce}^f) := (1/2)\|p_{ce}\|^2 + \phi(e^{\hat\xi\theta_{ce}}) \ge 0$.

Here, $\phi(e^{\hat\xi\theta}) \ge 0$ is defined as follows and has the following property [21]:

$$ \phi(e^{\hat\xi\theta}) := \frac{1}{4}\|I_3 - e^{\hat\xi\theta}\|_F^2 = \frac{1}{2}\mathrm{tr}(I_3 - e^{\hat\xi\theta}), \quad \dot\phi(e^{\hat\xi\theta}) = e_R(e^{\hat\xi\theta})^T \omega^b. $$
Fact 2 implies that closing the loop with

$$ u_c = -k_c \nu_f = -k_c E_R(g_{ce}^f), \quad k_c > 0 \tag{9} $$

assures the asymptotic stability of the equilibrium point $\nu_f = 0$. We see from the definition of $\nu_f$ that $\nu_f = E_R(g_{ce}^f) = 0$ means (4b). Furthermore, in the case of $E_R(g_{ce}^f) = 0$, we obtain $u_c = 0$ and $V_{wc}^f = V_{wo}^b$ from (7), which implies (4a). Namely, the pose regulation is achieved by (7) with (9). The resulting control input $V_{wc}^b$ is given by

$$ V_{wc}^b = k_c \mathrm{Ad}_{(g_d)} E_R(g_d^{-1} g_{co}) + \mathrm{Ad}_{(g_{co})} V_{wo}^b. \tag{10} $$
We now see that (10) consists of the state feedback term $k_c \mathrm{Ad}_{(g_d)} \nu_f$ and the disturbance feedforward term $\mathrm{Ad}_{(g_{co})} V_{wo}^b$. However, the target object velocity $V_{wo}^b$ is not available for control in practice, and it is not straightforward to extract $g_{co}$ from the visual measurement $f(g_{co})$. Therefore, we will introduce a vision-based observer to estimate these variables only from $f(g_{co})$ in the next subsection.
3.2 Observer Design
In this subsection, we introduce an estimation mechanism [15] of the relative pose $g_{co}$ and the target object body velocity $V^b_{wo}$ from the visual measurement $f(g_{co})$ in the case that $V^b_{wo}$ is given as (3).
Similarly to [14], we first prepare a model of the 3D object motion (1) and (5) as

$$ \dot{\bar g}_{co} = -\hat V_{wc}^b\, \bar g_{co} - \bar g_{co} \hat u_e + \bar g_{co} \hat{\bar V}_{wo}^b, \tag{11a} $$
$$ \dot{\bar x} = A\bar x - B u_v, \quad B := C^T, \tag{11b} $$
$$ \bar V_{wo}^b = C\bar x, \tag{11c} $$

where $\bar g_{co} = (\bar p_{co}, e^{\hat{\bar\xi}\bar\theta_{co}}) \in SE(3)$, $\bar x = [\bar x_v^T\ \bar x_\omega^T]^T \in \mathbb{R}^{12n+6}$ and $\bar V_{wo}^b = [\bar v_{wo}^T\ \bar\omega_{wo}^T]^T \in \mathbb{R}^6$ are the estimates of $g_{co}$, $x$ and $V_{wo}^b$, respectively. The new inputs $u_e \in \mathbb{R}^6$ and $u_v \in \mathbb{R}^6$ should be designed so that these estimates asymptotically converge to their actual values. Note that the paper [14] builds a model only of (1) with $\bar V_{wo}^b = 0$ since [14] assumes no prior information on the target object motion.
We now define the estimation errors as follows:

$$ g_{ee} = (p_{ee}, e^{\hat\xi\theta_{ee}}) := \bar g_{co}^{-1} g_{co}, \quad x_e = [x_{e,v}^T\ x_{e,\omega}^T]^T := x - \bar x, \quad V_e = [v_e^T\ \omega_e^T]^T := V_{wo}^b - \bar V_{wo}^b. $$

Then, we derive the following estimation error system from (1), (5) and (11):

$$ \dot g_{ee} = \hat u_e g_{ee} - \hat{\bar V}_{wo}^b\, g_{ee} + g_{ee} \hat V_{wo}^b, \tag{12a} $$
$$ \dot x_e = A x_e + B u_v, \tag{12b} $$
$$ V_e = C x_e. \tag{12c} $$

The paper [14] also shows that, as long as $k \ge 4$, the estimation error vector $E_R(g_{ee})$ is approximately reconstructed from $f_e := f - \bar f \in \mathbb{R}^{2k}$ as

$$ E_R(g_{ee}) = J^\dagger(\bar g_{co}) f_e, \tag{13} $$

where $\bar f \in \mathbb{R}^{2k}$ is computed by (2) using $\bar g_{co}$ instead of $g_{co}$, and $J^\dagger(\bar g_{co}) \in \mathbb{R}^{6\times 2k}$ is the pseudo-inverse of the image Jacobian [19].
We finally introduce the following inputs of the observer:

$$ u_v = -k_v E_R(g_{ee}), \tag{14a} $$
$$ u_e = -K_e E_R(g_{ee}), \quad K_e := \begin{bmatrix} k_e I_3 - \hat{\bar\omega}_{wo} & 0 \\ 0 & k_e I_3 \end{bmatrix}, \tag{14b} $$

where $k_v, k_e > 0$. Then, we have the following fact.
Fig. 5 Block diagram of visual motion observer (ee := ER (gee )).
Fig. 6 Block diagram of total error system (ec := ER (gce )).
Fact 3 [15] Suppose that the target object body velocity is given in the form of (5). Then, all the state trajectories along the estimation error system (12) with (14) satisfy $\lim_{t\to\infty} [V_e^T\ e_e^T]^T = 0$.

Fact 3 means that the present estimation mechanism correctly estimates the relative pose $g_{co}$ and the object body velocity $V_{wo}^b$.
The total estimation mechanism is formulated as

$$ \begin{cases} \dot{\bar x} = A\bar x - B u_v, \quad \bar V_{wo}^b = C\bar x & \cdots \text{(11b), (11c)} \\ \dot{\bar g}_{co} = -\hat V_{wc}^b\, \bar g_{co} + \bar g_{co}(\hat{\bar V}_{wo}^b - \hat u_e) & \cdots \text{(11a)} \\ E_R(g_{ee}) = J^\dagger(\bar g_{co}) f_e & \cdots \text{(13)} \\ u_v = -k_v E_R(g_{ee}), \quad u_e = -K_e E_R(g_{ee}) & \cdots \text{(14)} \end{cases} \tag{15} $$

which is called the visual motion observer throughout this paper. The block diagram of the resulting estimation mechanism (15) together with the relative rigid body motion (1) and the velocity generator block (3) is depicted in Fig. 5.
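To make (15) concrete, the sketch below (hypothetical; it reuses hat3/hat6, expm and visual_measurement from the earlier snippets, and takes the block-diagonal pair (A, C) of (5)) runs one cycle of the visual motion observer. The analytic image Jacobian behind (13) is replaced here by a finite-difference estimate around the current estimate, which is only the crudest stand-in for $J^\dagger(\bar g_{co})$.

```python
def K_e_matrix(omega_bar, k_e):
    """Observer gain K_e of (14b): blkdiag(k_e I3 - hat(omega_bar), k_e I3)."""
    Ke = np.zeros((6, 6))
    Ke[:3, :3] = k_e * np.eye(3) - hat3(omega_bar)
    Ke[3:, 3:] = k_e * np.eye(3)
    return Ke

def observer_step(g_bar, x_bar, f_meas, V_wc, A, C, p_o, k_v, k_e, dt, eps=1e-6):
    """One cycle of the visual motion observer (15)."""
    # (13): reconstruct E_R(g_ee) from the image residual f_e = f - f_bar.
    f_bar = visual_measurement(g_bar, p_o)
    J = np.zeros((f_bar.size, 6))
    for j in range(6):
        d = np.zeros(6)
        d[j] = eps
        J[:, j] = (visual_measurement(g_bar @ expm(hat6(d)), p_o) - f_bar) / eps
    e_ee = np.linalg.pinv(J) @ (f_meas - f_bar)     # ~ E_R(g_ee)
    # (14): observer inputs, using the current velocity estimate (11c).
    V_bar = C @ x_bar
    u_v = -k_v * e_ee
    u_e = -K_e_matrix(V_bar[3:], k_e) @ e_ee
    # (11a), (11b): Euler propagation (g_bar drifts slightly off SE(3);
    # an exponential update or projection would repair that).
    x_bar = x_bar + dt * (A @ x_bar - C.T @ u_v)    # B = C^T
    g_bar = g_bar + dt * (-hat6(V_wc) @ g_bar
                          + g_bar @ (hat6(V_bar) - hat6(u_e)))
    return g_bar, x_bar
```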
3.3 Total Estimation/Control Error System
In this subsection, we present a novel visual feedback structure to achieve visual pose regulation, based on the contents in Subsections 3.1 and 3.2.

Similarly to Subsection 3.2, we build the 3D target object motion model (11), formulate the estimation error system (12) and close the loop with

$$ u_v = -k_v E_R(g_{ee}). \tag{16} $$
We next formulate the control error system. Basically, we try to imitate the structure of the full information feedback case in Subsection 3.1. However, since neither $V_{wo}^b$ nor $g_{co}$ is available for control, we replace $V_{wo}^b$ and $g_{co}$ by their estimates $\bar V_{wo}^b$ and $\bar g_{co}$ and fix the form of $V_{wc}^b$ as

$$ V_{wc} := \mathrm{Ad}_{(g_d^{-1})} V_{wc}^b, \quad V_{wc} = -u_c + \mathrm{Ad}_{(g_{ce})} \bar V_{wo}^b \tag{17} $$

in imitation of (7). Here, $g_{ce} := g_d^{-1} \bar g_{co} \in SE(3)$ is the control error. The time evolution of the control error $g_{ce}$ is then described by

$$ \dot g_{ce} = \hat u_c g_{ce} - g_{ce} \hat u_e \tag{18} $$

from (11a) and (17).
From (5), (12), (16) and (18), the total estimation/control error system is formulated by

$$ \dot x = Ax, \quad V_{wo}^b = Cx, \tag{19a} $$
$$ \dot x_e = A x_e - k_v B E_R(g_{ee}), \quad V_e = V_{wo}^b - \bar V_{wo}^b = C x_e, \tag{19b} $$
$$ \dot g_{ce} = \hat u_c g_{ce} - g_{ce} \hat u_e, \tag{19c} $$
$$ \dot g_{ee} = \hat u_e g_{ee} - \hat{\bar V}_{wo}^b\, g_{ee} + g_{ee} \hat V_{wo}^b. \tag{19d} $$

Notice now that the time evolution of the orientation part $(x_\omega, x_{e,\omega}, e^{\hat\xi\theta_{ce}}, e^{\hat\xi\theta_{ee}})$ is independent of that of the position part $(x_v, x_{e,v}, p_{ce}, p_{ee})$, while the evolution of $(x_v, x_{e,v}, p_{ce}, p_{ee})$ depends on that of $(x_\omega, x_{e,\omega}, e^{\hat\xi\theta_{ce}}, e^{\hat\xi\theta_{ee}})$. The block diagram of the total estimation/control error system (19) is depicted in Fig. 6.
4. Main Result
In this section, we close the loops of $u_c = [u_{cp}^T\ u_{cR}^T]^T$ and $u_e = [u_{ep}^T\ u_{eR}^T]^T$ and show that visual pose regulation is achieved.

4.1 Visual Feedback System: Orientation Part
Extracting the orientation part from (19) yields

$$ \dot x_\omega = A_\omega x_\omega, \quad \omega_{wo} = C_\omega x_\omega, \tag{20a} $$
$$ \dot x_{e,\omega} = A_\omega x_{e,\omega} - k_v B_\omega e_R(e^{\hat\xi\theta_{ee}}), \quad \omega_e = C_\omega x_{e,\omega}, \tag{20b} $$
$$ \dot e^{\hat\xi\theta_{ce}} = \hat u_{cR} e^{\hat\xi\theta_{ce}} - e^{\hat\xi\theta_{ce}} \hat u_{eR}, \tag{20c} $$
$$ \dot e^{\hat\xi\theta_{ee}} = (\hat u_{eR} - \hat{\bar\omega}_{wo}) e^{\hat\xi\theta_{ee}} + e^{\hat\xi\theta_{ee}} \hat\omega_{wo}. \tag{20d} $$

We define the controlled output of the system (20) as

$$ \nu_R := N \begin{bmatrix} e_R(e^{\hat\xi\theta_{ce}}) \\ e_R(e^{\hat\xi\theta_{ee}}) \end{bmatrix} \in \mathbb{R}^6, \quad N := \begin{bmatrix} I_3 & 0 \\ -I_3 & I_3 \end{bmatrix} \in \mathbb{R}^{6\times6}. $$

Then, the block diagram from $[u_{cR}^T\ u_{eR}^T]^T$ to $\nu_R$ is illustrated in Fig. 7, and the following lemma holds true.
Lemma 1 The total error system (20) is passive from $[u_{cR}^T\ u_{eR}^T]^T$ to $\nu_R$ with the storage function $U_R := \phi(e^{\hat\xi\theta_{ce}}) + \phi(e^{\hat\xi\theta_{ee}}) + (1/k_v) S(x_{e,\omega}) \ge 0$, where $S(x_{e,\omega}) = (1/2) x_{e,\omega}^T P x_{e,\omega}$.
Proof. We first consider the velocity error system (20b). Note that $B_\omega = C_\omega^T$ holds from (11b). Then, compared with (5a) and (5b), Fact 1 means that the velocity error system (20b) is passive from $-k_v e_R(e^{\hat\xi\theta_{ee}})$ to $\omega_e$. Namely, the time derivative of $S(x_{e,\omega})$ along (20b) satisfies

$$ \dot S(x_{e,\omega}) = -k_v \omega_e^T e_R(e^{\hat\xi\theta_{ee}}). \tag{21} $$

Let us next consider the time derivative of $\phi(e^{\hat\xi\theta_{ee}})$ along the trajectories of (20d). Then, we obtain

$$ \dot\phi(e^{\hat\xi\theta_{ee}}) = e_R^T(e^{\hat\xi\theta_{ee}})(e^{-\hat\xi\theta_{ee}} \dot e^{\hat\xi\theta_{ee}})^\vee = e_R^T(e^{\hat\xi\theta_{ee}})\{e^{-\hat\xi\theta_{ee}}(u_{eR} - \bar\omega_{wo}) + \omega_{wo}\} = e_R^T(e^{\hat\xi\theta_{ee}}) u_{eR} + e_R^T(e^{\hat\xi\theta_{ee}})\, \omega_e, \tag{22} $$

where we use the following property [21]:

$$ e^{\hat\xi\theta}\hat\omega e^{-\hat\xi\theta} = (e^{\hat\xi\theta}\omega)^\wedge, \quad e^{\hat\xi\theta} e_R(e^{\hat\xi\theta}) = e_R(e^{\hat\xi\theta}). $$

From (21) and (22), we get
$$ \frac{1}{k_v}\dot S(x_{e,\omega}) + \dot\phi(e^{\hat\xi\theta_{ee}}) = e_R^T(e^{\hat\xi\theta_{ee}}) u_{eR}. \tag{23} $$

On the other hand, the time derivative of $\phi(e^{\hat\xi\theta_{ce}})$ along the trajectories of (20c) is given by

$$ \dot\phi(e^{\hat\xi\theta_{ce}}) = e_R^T(e^{\hat\xi\theta_{ce}})(e^{-\hat\xi\theta_{ce}}\dot e^{\hat\xi\theta_{ce}})^\vee = e_R^T(e^{\hat\xi\theta_{ce}})(e^{-\hat\xi\theta_{ce}} u_{cR} - u_{eR}) = e_R^T(e^{\hat\xi\theta_{ce}}) u_{cR} - e_R^T(e^{\hat\xi\theta_{ce}}) u_{eR}. \tag{24} $$

Therefore, from (23) and (24), we obtain

$$ \dot U_R = e_R^T(e^{\hat\xi\theta_{ce}}) u_{cR} - \left(e_R^T(e^{\hat\xi\theta_{ce}}) - e_R^T(e^{\hat\xi\theta_{ee}})\right) u_{eR} = \begin{bmatrix} u_{cR} \\ u_{eR} \end{bmatrix}^T \begin{bmatrix} I_3 & 0 \\ -I_3 & I_3 \end{bmatrix} \begin{bmatrix} e_R(e^{\hat\xi\theta_{ce}}) \\ e_R(e^{\hat\xi\theta_{ee}}) \end{bmatrix} = \begin{bmatrix} u_{cR} \\ u_{eR} \end{bmatrix}^T \nu_R. $$

This completes the proof.

Fig. 7 Block diagram of total error system: Orientation part ($e_{cR} := e_R(e^{\hat\xi\theta_{ce}})$, $e_{eR} := e_R(e^{\hat\xi\theta_{ee}})$).

From Lemma 1, we propose the following input:

$$ \begin{bmatrix} u_{cR} \\ u_{eR} \end{bmatrix} = -k_c \nu_R = k_c \begin{bmatrix} -I_3 & 0 \\ I_3 & -I_3 \end{bmatrix} \begin{bmatrix} e_R(e^{\hat\xi\theta_{ce}}) \\ e_R(e^{\hat\xi\theta_{ee}}) \end{bmatrix}. \tag{25} $$

Then, the following lemma holds true.

Lemma 2 Suppose that the present input (25) is applied to the total error system (20). Then, all the trajectories of its state $x_R := (x_\omega, x_{e,\omega}, e^{\hat\xi\theta_{ce}}, e^{\hat\xi\theta_{ee}})$ satisfy

$$ \lim_{t\to\infty} [\omega_e^T\ e_R^T(e^{\hat\xi\theta_{ce}})\ e_R^T(e^{\hat\xi\theta_{ee}})]^T = 0. \tag{26} $$

Proof. From Lemma 1, we immediately obtain

$$ \dot U_R = -k_c \|\nu_R\|^2 \le 0. $$

Since the matrix $N$ is nonsingular, $\dot U_R = 0$ holds if and only if $e_R(e^{\hat\xi\theta_{ce}}) = e_R(e^{\hat\xi\theta_{ee}}) = 0$.

We next consider the set $\mathcal{S} := \{x_R \mid \dot U_R = 0\} = \{x_R \mid e_R(e^{\hat\xi\theta_{ce}}) = e_R(e^{\hat\xi\theta_{ee}}) = 0\}$. In the set $\mathcal{S}$, $e^{\hat\xi\theta_{ee}} = I_3$, $e_R(e^{\hat\xi\theta_{ee}}) = 0$ and hence $u_{eR} = 0$ and $\dot e^{\hat\xi\theta_{ee}} = 0$ hold. Then, substituting these equations into (20d) yields

$$ 0 = \dot e^{\hat\xi\theta_{ee}} = -\hat{\bar\omega}_{wo} + \hat\omega_{wo} = \hat\omega_e. $$

In addition, since all elements of $x_\omega$ are bounded from the definition of $x_\omega$, the system (20a)-(20d) with (25) has a compact positively invariant set. Namely, we see from LaSalle's Invariance Principle [22] that all the state trajectories asymptotically converge to the set $\{x_R \mid e_R(e^{\hat\xi\theta_{ce}}) = e_R(e^{\hat\xi\theta_{ee}}) = 0,\ \omega_e = 0\}$.

The proof of Lemma 2 relies on the passivity in Lemma 1. It should be noted that the inner loops (16) and (17), closed before determining $(u_{cR}, u_{eR})$, allow the system to be passive. Namely, the operations in (16) and (17) are regarded as a kind of passivation of the estimation/control error system.

4.2 Visual Feedback System: Position Part

In this subsection, we close the loop of $(u_{cp}, u_{ep})$ and show that visual pose regulation is achieved.

The time evolution of $(x_v, x_{e,v}, p_{ce}, p_{ee})$ in the total error system (19) with (25) is described by

$$ \dot x_v = A_v x_v, \quad v_{wo} = C_v x_v, \tag{27a} $$
$$ \dot x_{e,v} = A_v x_{e,v} - k_v B_v p_{ee}, \quad v_e = C_v x_{e,v}, \tag{27b} $$
$$ \dot p_{ce} = u_{cp} - u_{ep} + \delta_{ce}, \tag{27c} $$
$$ \dot p_{ee} = u_{ep} + v_e + \hat p_{ee}\bar\omega_{wo} + \delta_{ee}, \tag{27d} $$

where

$$ \delta_{ce} := k_c \hat p_{ce}\, e_R(e^{\hat\xi\theta_{ce}}) + (I_3 - e^{\hat\xi\theta_{ce}}) u_{ep}, \tag{28} $$
$$ \delta_{ee} := (e^{\hat\xi\theta_{ee}} - I_3) v_{wo} + k_e \hat p_{ee}\,(e_R(e^{\hat\xi\theta_{ee}}) - e_R(e^{\hat\xi\theta_{ce}})). \tag{29} $$

Note that $\lim_{t\to\infty}\delta_{ce} = 0$ and $\lim_{t\to\infty}\delta_{ee} = 0$ hold for any bounded $u_{ep}$ from Lemma 2.

We next close the loop of $(u_{cp}, u_{ep})$ as

$$ \begin{bmatrix} u_{cp} \\ u_{ep} \end{bmatrix} = \begin{bmatrix} -k_c I_3 & \hat{\bar\omega}_{wo} \\ 0 & -k_e I_3 + \hat{\bar\omega}_{wo} \end{bmatrix} \begin{bmatrix} p_{ce} \\ p_{ee} \end{bmatrix}. \tag{30} $$

Then, substituting (30) into (27) yields

$$ \dot p_{ce} = -k_c p_{ce} + k_e p_{ee} + \delta_{ce}, \quad \dot p_{ee} = -k_e p_{ee} + v_e + \delta_{ee}. $$

Let us now define $x_p := [x_{e,v}^T\ p_{ee}^T\ p_{ce}^T]^T \in \mathbb{R}^{6n+9}$. Then, we obtain

$$ \dot x_p = \Phi x_p + \begin{bmatrix} 0 \\ \delta_{ee} \\ \delta_{ce} \end{bmatrix}, \quad \Phi := \begin{bmatrix} A_v & -k_v B_v & 0 \\ C_v & -k_e I_3 & 0 \\ 0 & k_e I_3 & -k_c I_3 \end{bmatrix} \tag{31} $$

and the following theorem holds true.

Theorem 1 All the trajectories of the total error system (27) with (30) satisfy

$$ \lim_{t\to\infty}[v_e^T\ p_{ce}^T\ p_{ee}^T]^T = 0. \tag{32} $$

Proof. Let us view the system (31) as a linear time invariant system $\dot x_p = \Phi x_p$ with perturbations $\delta_{ce}$ and $\delta_{ee}$. Remark that $\delta_{ce}$ and $\delta_{ee}$ vanish as time goes to infinity from their definitions and Lemma 2. It is thus immediately proved from the stability theory of perturbed systems ([22], Lemma 9.6) that $\lim_{t\to\infty} x_p = 0$ as long as the origin of the nominal system $\dot x_p = \Phi x_p$ is exponentially stable, which is equivalent to stability of $\Phi$ from linearity of the system. From the structure of $\Phi$, its stability is equal to stability of the matrix

$$ \Gamma := \begin{bmatrix} A_v & -k_v B_v \\ C_v & -k_e I_3 \end{bmatrix}. $$

Let $(y_0, \cdots, y_{2n+1})$, $y_i \in \mathbb{R}^3$, be an eigenvector of $\Gamma$ corresponding to an eigenvalue $\sigma$. Then, from the definition of $\Gamma$, we get the following equations:

$$ (\sigma + k_e) y_{2n+1} = \sum_{i=0}^{n} y_i, \quad k_v y_{2n+1} = -\sigma y_0, \tag{33} $$
$$ k_v y_{2n+1} = y_{n+i} - \sigma y_i, \quad \sigma y_{n+i} = -w_i^2 y_i, \quad i \in \{1, \cdots, n\}. \tag{34} $$

From (34), we have
$$ y_i = -\frac{k_v \sigma}{\sigma^2 + w_i^2}\, y_{2n+1}. \tag{35} $$

Substituting the second equation of (33) and (35) into the first equation of (33) yields

$$ \sigma + k_e = -\frac{k_v}{\sigma} - \sum_{i=1}^{n} \frac{k_v\sigma}{\sigma^2 + w_i^2}. $$

We now denote $\sigma = \sigma_1 + \sqrt{-1}\,\sigma_2$. Then, by comparing the coefficients of the real part, we have

$$ \sigma_1 + k_e = -k_v\sigma_1\left(\frac{1}{\sigma_1^2 + \sigma_2^2} + \sum_{i=1}^{n}\frac{\tilde\sigma_i}{\bar\sigma_i}\right), \quad \tilde\sigma_i = \sigma_1^2 + \sigma_2^2 + w_i^2, \quad \bar\sigma_i = (\sigma_1^2 - \sigma_2^2 + w_i^2)^2 + 4\sigma_1^2\sigma_2^2. $$

Since $k_v\left(\frac{1}{\sigma_1^2+\sigma_2^2} + \sum_{i=1}^{n} \frac{\tilde\sigma_i}{\bar\sigma_i}\right) \ge 0$, we see that $\sigma_1$ has to be negative. This completes the proof.

The combination of (26) and (32) is equivalent to (4a) and (4b). The control objective is thus proved to be achieved by the present visual feedback system. The block diagram of the total estimation/control structure is illustrated in Fig. 8.

Fig. 8 Present estimation/control structure.

We finally give a remark on the present estimation/control structure. The velocity input $V_{wc}^b$ is given by

$$ V_{wc}^b = k_c \mathrm{Ad}_{(g_d)} E_R(g_d^{-1}\bar g_{co}) + \mathrm{Ad}_{(\bar g_{co})} \bar V_{wo}^b - \mathrm{Ad}_{(g_d)}\begin{bmatrix} \hat{\bar\omega}_{wo}\, p_{ee} \\ 0 \end{bmatrix} \tag{36} $$

from (17), (25) and (30). We see that the first and second terms of (36) have the same form as the full information feedback (10). However, since the actual values of $g_{co}$ and $V_{wo}^b$ are not available, we replace these variables by the estimates produced by the observer (11) with the input

$$ \begin{bmatrix} u_v \\ u_e \end{bmatrix} = \begin{bmatrix} -k_v I_6 & 0 \\ -k_e I_6 & \begin{bmatrix} 0 & 0 \\ 0 & k_e I_3 \end{bmatrix} \end{bmatrix} \begin{bmatrix} E_R(g_{ee}) \\ E_R(g_{ce}) \end{bmatrix} + \begin{bmatrix} 0 \\ \hat{\bar\omega}_{wo}\, p_{ee} \\ 0 \end{bmatrix} \tag{37} $$

using the technique in (13). This interconnection structure (Fig. 8) is based on [23] and not straightforward from [14],[15]. Namely, the present control mechanism has the same form as the structure of [23] except for the third term of (36) and the second term of (37). These terms are correction terms of the error between $(g_{ee}^{-1}\dot g_{ee})^\vee$ and $V_e = V_{wo}^b - \bar V_{wo}^b$, and appear since we impose the assumption of (3) on the body velocity.

5. Passivity-based Dynamic Visual Feedback Pose Regulation

So far, we tackle a visual feedback pose regulation problem under the assumption that the camera robot can directly input desired velocities. In this section, we extend the proposed velocity input to force/torque input taking account of the camera robot dynamics.

5.1 Dynamic Pose Estimation/Control Mechanism

The dynamics (derived by the Newton-Euler equations in body coordinates [21]) of the rigid body equipped with a camera is represented by

$$ \begin{bmatrix} m I_3 & 0 \\ 0 & J \end{bmatrix}\begin{bmatrix} \dot v_{wc} \\ \dot\omega_{wc} \end{bmatrix} + \begin{bmatrix} m\hat\omega_{wc} & 0 \\ 0 & -(J\omega_{wc})^\wedge \end{bmatrix}\begin{bmatrix} v_{wc} \\ \omega_{wc} \end{bmatrix} = \begin{bmatrix} f_c \\ \tau_c \end{bmatrix}. \tag{38} $$

Here, $m \in \mathbb{R}$ and $J \in \mathbb{R}^{3\times3}$ are the mass and inertia tensor of the camera robot, respectively. Also, $f_c \in \mathbb{R}^3$ and $\tau_c \in \mathbb{R}^3$ are the force and torque inputs. Let us now rewrite (38) as

$$ M\dot V_{wc}^b + C(\omega_{wc}) V_{wc}^b = F_{wc}^b. \tag{39} $$
Here, $F_{wc}^b = [f_c^T\ \tau_c^T]^T \in \mathbb{R}^6$, and note that $\dot M - 2C = -2C$ is skew-symmetric. By using this property, we can easily see that the dynamics (39) is passive from $F_{wc}^b$ to $V_{wc}^b$ with respect to the storage function $U_d := (1/2)(V_{wc}^b)^T M V_{wc}^b \ge 0$.
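A small numerical illustration of this passivity argument (our hypothetical sketch, reusing hat3 from the earlier snippets): for the model (38)-(39), both blocks of $C(\omega_{wc})$ are skew-symmetric, so $\dot M - 2C = -2C$ is as well, and the quadratic term drops out of $\dot U_d$.

```python
def dynamics_matrices(m, J_inertia, omega):
    """M and C(omega) of (39) for the Newton-Euler model (38)."""
    M = np.zeros((6, 6))
    M[:3, :3] = m * np.eye(3)
    M[3:, 3:] = J_inertia
    Cmat = np.zeros((6, 6))
    Cmat[:3, :3] = m * hat3(omega)                 # coupling on v_wc
    Cmat[3:, 3:] = -hat3(J_inertia @ omega)        # coupling on omega_wc
    return M, Cmat

# M is constant, so d/dt M - 2C = -2C; skew-symmetry of C makes
# (V_wc)^T C V_wc vanish in the derivative of U_d:
M, Cm = dynamics_matrices(1.0, np.eye(3), np.array([0.2, -0.3, 0.4]))
assert np.allclose(Cm + Cm.T, 0.0)
```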
Based on [14], we propose the following force/torque input law:

$$ F_{wc}^b = M\dot V_d + C(\omega_{wc})V_d + \begin{bmatrix} 0 \\ e^{\hat\xi\theta_d} e_R(e^{\hat\xi\theta_{ce}}) \end{bmatrix} + u_r, $$

where $V_d \in \mathbb{R}^6$ is the desired body velocity of the camera, and we set this velocity as (36) to achieve (4). The new input $u_r \in \mathbb{R}^6$ is to be determined in order to drive the actual $V_{wc}^b$ to the desired $V_d$. Also, the third term of $F_{wc}^b$ is derived from the passivity property of the total error system (discussed later). Then, by introducing the velocity error $r = [r_p^T\ r_R^T]^T := V_{wc}^b - V_d \in \mathbb{R}^6$, we get the following error dynamics:

$$ M\dot r + C(\omega_{wc})r - \begin{bmatrix} 0 \\ e^{\hat\xi\theta_d} e_R(e^{\hat\xi\theta_{ce}}) \end{bmatrix} = u_r. \tag{40} $$

We finally present the following simple feedback law based on the passivity of (39):

$$ u_r = -k_r r, $$

where $k_r > 0$ is a positive scalar gain.
In summary, we propose the following visual feedback dynamic pose estimation/control mechanism:

$$ \begin{cases} F_{wc}^b = M\dot V_d + C(\omega_{wc})V_d + \begin{bmatrix} 0 \\ e^{\hat\xi\theta_d} e_R(e^{\hat\xi\theta_{ce}}) \end{bmatrix} + u_r & \text{(force/torque input)} \\ u_r = -k_r(V_{wc}^b - V_d) & \text{(new force/torque input)} \\ V_d = k_c \mathrm{Ad}_{(g_d)} E_R(g_{ce}) + \mathrm{Ad}_{(\bar g_{co})} \bar V_{wo}^b - \mathrm{Ad}_{(g_d)}\begin{bmatrix} \hat{\bar\omega}_{wo}\, p_{ee} \\ 0 \end{bmatrix} & \text{(desired velocity input)} \\ \dot{\bar x} = A\bar x - B u_v, \quad \bar V_{wo}^b = C\bar x & \text{(target motion model)} \\ u_v = -k_v E_R(g_{ee}) & \text{(velocity observer input)} \\ \dot{\bar g}_{co} = -\hat V_{wc}^b\, \bar g_{co} + \bar g_{co}(\hat{\bar V}_{wo}^b - \hat u_e) & \text{(RRBM model)} \\ u_e = -K_e E_R(g_{ee}) + \begin{bmatrix} 0 \\ k_e e_R(g_{ce}) \end{bmatrix} & \text{(pose observer input)} \end{cases} \tag{41} $$
Here, $\dot V_d$ depends on $\dot p_{ee}$, and hence on $v_{wo}^b$, which is not available only from the visual measurements. In this paper, we avoid the problem numerically by just replacing $\dot p_{ee}$ with the difference approximation of $p_{ee}$, following [14]. A future work of this paper is to work out this issue rigorously by assuming more information like optical flow [19].
Though we newly introduce the camera robot dynamics in this section, we aim at the same goal as (4) and we also assume that the target object body velocity is given by (3). The block diagram of the present dynamic estimation/control structure is depicted in Fig. 9.

Fig. 9 Dynamic estimation/control structure.

5.2 Convergence Analysis

We first note that the present velocity input $V_{wc}^b$ is replaced by the desired input $V_d$ in (41), and $V_{wc}^b$ can be written as $V_{wc}^b = r + V_d$. Here, we utilize the same form as (17) for $V_d$:

$$ \mathrm{Ad}_{(g_d^{-1})} V_d = -u_c + \mathrm{Ad}_{(g_{ce})} \bar V_{wo}^b. $$

Then, the time evolution of the control error $g_{ce}$ is described by

$$ \dot g_{ce} = -g_d^{-1}\hat r\, \bar g_{co} + \hat u_c g_{ce} - g_{ce} \hat u_e. \tag{42} $$

Namely, $\dot g_{ce}$ is influenced by $r$. On the other hand, $\dot g_{ee}$ does not change, since $V_{wc}^b$ does not influence $g_{ee}$ in our estimation mechanism, as in (12a).

Then, the total error system of the error dynamics (40) and the orientation part of the estimation/control error systems is written as

$$ M\dot r + C(\omega_{wc})r - \begin{bmatrix} 0 \\ e^{\hat\xi\theta_d} e_R(e^{\hat\xi\theta_{ce}}) \end{bmatrix} = u_r, \tag{43a} $$
$$ \dot x_\omega = A_\omega x_\omega, \quad \omega_{wo} = C_\omega x_\omega, \tag{43b} $$
$$ \dot x_{e,\omega} = A_\omega x_{e,\omega} - k_v B_\omega e_R(e^{\hat\xi\theta_{ee}}), \quad \omega_e = C_\omega x_{e,\omega}, \tag{43c} $$
$$ \dot e^{\hat\xi\theta_{ce}} = -e^{-\hat\xi\theta_d}\hat r_R\, e^{\hat{\bar\xi}\bar\theta_{co}} + \hat u_{cR} e^{\hat\xi\theta_{ce}} - e^{\hat\xi\theta_{ce}} \hat u_{eR}, \tag{43d} $$
$$ \dot e^{\hat\xi\theta_{ee}} = (\hat u_{eR} - \hat{\bar\omega}_{wo}) e^{\hat\xi\theta_{ee}} + e^{\hat\xi\theta_{ee}} \hat\omega_{wo}. \tag{43e} $$

Let the input and output of the system (43) be $[u_r^T\ u_{cR}^T\ u_{eR}^T]^T$ and $[r^T\ \nu_R^T]^T$, respectively. Then, the following lemma holds true.

Lemma 3 The total error system (43) is passive from $[u_r^T\ u_{cR}^T\ u_{eR}^T]^T$ to $[r^T\ \nu_R^T]^T$ with the storage function $U := (1/2)r^T M r + \phi(e^{\hat\xi\theta_{ce}}) + \phi(e^{\hat\xi\theta_{ee}}) + (1/k_v)S(x_{e,\omega}) \ge 0$.

Proof. Since $C(\omega_{wc})$ is skew-symmetric, the time derivative of $(1/2)r^T M r$ yields

$$ r^T M\dot r = r^T u_r + r_R^T e^{\hat\xi\theta_d} e_R(e^{\hat\xi\theta_{ce}}). $$

On the other hand, we get the following relation from (43d):

$$ \dot\phi(e^{\hat\xi\theta_{ce}}) = -e_R^T(e^{\hat\xi\theta_{ce}}) e^{-\hat\xi\theta_d} r_R + e_R^T(e^{\hat\xi\theta_{ce}}) u_{cR} - e_R^T(e^{\hat\xi\theta_{ce}}) u_{eR}. $$

Moreover, the time derivatives of $\phi(e^{\hat\xi\theta_{ee}})$ and $(1/k_v)S(x_{e,\omega})$ are the same as in Lemma 1. We thus get

$$ \dot U = [u_r^T\ u_{cR}^T\ u_{eR}^T]\begin{bmatrix} r \\ \nu_R \end{bmatrix}. $$

This completes the proof.

It should be noted that $u_r$, $u_{cR}$ in $V_d$ and $u_{eR}$ in the present estimation/control mechanism (41) are just simple negative feedbacks of the corresponding outputs. Also, the third term of $F_{wc}^b$ cancels the effect of $r_R$ in (43d). Then, similarly to Lemma 2 and Theorem 1, we immediately get the following theorem.

Theorem 2 Suppose that the input (41) is applied to the total error system. Then, all the trajectories of the system satisfy

$$ \lim_{t\to\infty}[r^T\ V_e^T\ E_R^T(g_{ce})\ E_R^T(g_{ee})]^T = 0. $$

Proof. We first note that we can easily see, from Lemma 3 and the same analysis as in Lemma 2, that all the trajectories of the system (43) satisfy

$$ \lim_{t\to\infty}[r^T\ \omega_e^T\ e_R^T(e^{\hat\xi\theta_{ce}})\ e_R^T(e^{\hat\xi\theta_{ee}})]^T = 0. \tag{44} $$

We next consider the position part of the total error system. Similarly to (27), the time evolution of $(x_v, x_{e,v}, p_{ce}, p_{ee})$ of the remaining total error system is described by

$$ \dot x_v = A_v x_v, \quad v_{wo} = C_v x_v, $$
$$ \dot x_{e,v} = A_v x_{e,v} - k_v B_v p_{ee}, \quad v_e = C_v x_{e,v}, $$
$$ \dot p_{ce} = u_{cp} - u_{ep} + \delta_{ce}, $$
$$ \dot p_{ee} = u_{ep} + v_e + \hat p_{ee}\bar\omega_{wo} + \delta_{ee}. $$

Here, $\delta_{ee}$ is the same as (29), but $\delta_{ce}$ is given by

$$ \delta_{ce} := -e^{-\hat\xi\theta_d} r_p + e^{-\hat\xi\theta_d} \hat{\bar p}_{co}\, r_R + k_c \hat p_{ce}\, e_R(e^{\hat\xi\theta_{ce}}) + (I_3 - e^{\hat\xi\theta_{ce}}) u_{ep} $$

from (42). Note that it differs from (28) only in the first two terms. However, we see from (44) that $\delta_{ce}$ also satisfies $\lim_{t\to\infty}\delta_{ce} = 0$ for any bounded $u_{ep}$. We can thus apply the same convergence analysis as in Theorem 1 to the remaining part of the proof.

The equation $[V_e^T\ E_R^T(g_{ce})\ E_R^T(g_{ee})]^T = 0$ means the achievement of (4b), and then not $V_{wc}^b$ but $V_d$ is equal to $\mathrm{Ad}_{(g_{co})} V_{wo}^b$. Therefore, the additional condition $r = 0$ guarantees the achievement of (4a).
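To tie the pieces together, here is a hypothetical sketch of the force/torque computation in (41); `vee3` and `dynamics_matrices` are from the earlier snippets, $\dot V_d$ is formed by the difference approximation mentioned in Subsection 5.1, and all argument names are our own.

```python
def force_torque_input(M, Cmat, V_d, V_d_prev, V_wc, R_d, R_ce, k_r, dt):
    """Force/torque input of (41):
    F_wc = M Vd_dot + C(omega_wc) V_d + [0; R_d e_R(R_ce)] + u_r."""
    Vd_dot = (V_d - V_d_prev) / dt                 # difference approximation
    e_R_ce = vee3(0.5 * (R_ce - R_ce.T))           # e_R of the control error
    bias = np.concatenate([np.zeros(3), R_d @ e_R_ce])
    u_r = -k_r * (V_wc - V_d)                      # simple negative feedback
    return M @ Vd_dot + Cmat @ V_d + bias + u_r
```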
6. Verification
We finally demonstrate the present estimation/control structure through simulation. We first set the initial pose of the camera robot as $p_{wc}(0) = [0\ 0\ 0]^T$ m, $\xi\theta_{wc}(0) = [0\ 0\ 0]^T$ rad, and suppose that the camera sees in the direction of the z-axis of its own frame $\Sigma_c$ with the focal length $\lambda = 0.005$ m. We also assume that the target object is a cube 1 m on a side, has eight feature points on its vertices and moves with $V_{wo}^b(t) = [\cos t\ \ 0\ \ 0.5\ \ 0\ \ 0\ \ 0.3\sin(0.5t)]^T$ [m/s, rad/s] from the initial state $p_{wo}(0) = [0\ 0\ 5]^T$ m and $\xi\theta_{wo}(0) = [0\ 0\ 1]^T$ rad. Then, the objective here is to lead the camera robot pose to the desirable configuration $g_d = ([0\ 2\ 5]^T, I_3)$ with respect to the moving object. We finally set $m = 1$ kg and $J = I_3$ kg·m² and run the present method (41) with $k_r = 10$, $k_c = 10$, $k_v = 5$ and $k_e = 10$.
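This velocity profile is exactly of the Fourier form (3) with n = 2 and frequencies $w_1 = 1$, $w_2 = 0.5$ rad/s, as the following snippet (our illustration) makes explicit; all coefficient vectors not listed are zero.

```python
import numpy as np

w1, w2 = 1.0, 0.5
c = np.array([0.0, 0.0, 0.5, 0.0, 0.0, 0.0])     # constant linear z velocity
b1 = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0])    # cos(t) in the x linear velocity
a2 = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.3])    # sin(0.5 t) in the z angular velocity

def V_wo(t):
    """Target body velocity of the simulation, in the form of (3)."""
    return c + b1 * np.cos(w1 * t) + a2 * np.sin(w2 * t)
```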
Figure 10 illustrates the trajectories of the camera and the target in $\Sigma_w$. Circles and squares describe the initial and the final positions, respectively. We see from this figure that the camera robot successfully tracks the target object moving along the complicated trajectory.
Fig. 10 Position in Σw.
Fig. 11 Linear velocity of target object.
Fig. 12 Angular velocity of target object.
Fig. 13 Linear velocity in common frame Σc.
Fig. 14 Angular velocity in common frame Σc.
Fig. 15 Relative position.
Fig. 16 Relative orientation.
On the other hand, Figs. 11-16 show the time responses of the variables. Figures 11 and 12 illustrate the responses of the variables associated with the linear and angular velocities, where the solid curves show the actual target object velocities and the dashed ones the estimated object velocities, respectively. We see from the figures that the velocity observer correctly estimates $V_{wo}^b$.

The camera body velocity and the object velocity transformed to the camera frame are depicted in Figs. 13 and 14. We see that (4a) is actually achieved by the present estimation/control mechanism. Finally, Figs. 15 and 16 show the time responses of the relative pose $g_{co}$ and its estimate $\bar g_{co}$. We also see from these figures that both the actual and estimated relative poses asymptotically converge to the prescribed $g_d$, which implies (4b). In summary, we conclude that the present estimation/control mechanism achieves visual pose regulation (4a) and (4b).

Although the gain setting in this verification is heuristic so far, higher estimation gains than control gains seem to be generally better. Obtaining a guideline for setting up desirable gains is one of our future directions.

7. Conclusion
This paper has investigated passivity-based visual pose regulation whose objective is to lead a vision camera pose to a
desirable configuration relative to a moving target object. For
this purpose, we have presented a novel visual feedback estimation/control structure including a vision-based observer with a
3D target object motion model. Then, we have proved, based
on passivity-based control theory and the stability theory of perturbed systems, that the control objective is achieved by using the present estimation/control structure. We have moreover taken account of the camera robot dynamics and presented a force/torque input to achieve the same goal. Finally, the effectiveness of the present estimation/control structure has been
demonstrated through simulation.
Acknowledgments
The authors would like to thank Mr. P. Wittmuess and Mr. Y.
Namba for invaluable help.
References
[1] F. Chaumette and S. Hutchinson: Visual servo control, Part I: Basic approaches, IEEE Robotics and Automation Magazine, Vol. 13, No. 4, pp. 82-90, 2006.
[2] F. Chaumette and S. Hutchinson: Visual servo control, Part II: Advanced approaches, IEEE Robotics and Automation Magazine, Vol. 14, No. 1, pp. 109-118, 2007.
[3] Y. Iwatani, K. Watanabe, and K. Hashimoto: Unmanned helicopter control via visual servoing with occlusion handling, Lecture Notes in Control and Information Sciences, Vol. 401, G. Chesi and K. Hashimoto (Eds.), pp. 361-374, Springer, 2010.
[4] S. Onishi and K. Kogiso: Super-resolution image reconstruction using an observer of a motorized camera head, Proceedings of the SICE Annual Conference 2010, pp. 1114-1122, 2010.
[5] T. Kodama, T. Yamaguchi, and H. Harada: A method of object tracking based on particle filter and optical flow to avoid degeneration problem, Proceedings of the SICE Annual Conference 2010, pp. 1529-1533, 2010.
[6] K. Kogiso, M. Tatsumi, and K. Sugimoto: A remote control technique with compensation for transmission delay and its application to pan control of a network camera, Proceedings of the 17th IEEE International Conference on Control Applications, pp. 55-60, 2008.
[7] S. Takahashi and B.K. Ghosh: Motion and shape identification with vision and range, IEEE Transactions on Automatic Control, Vol. 47, No. 8, pp. 1392-1396, 2002.
[8] G.D. Hager and S. Hutchinson (Eds.): Special section on vision-based control of robot manipulators, IEEE Transactions on Robotics and Automation, Vol. 12, No. 5, pp. 649-650, 1996.
[9] B. Song, C. Ding, A.T. Kamal, J.A. Farrell, and A.K. Roy-Chowdhury: Distributed camera networks: Integrated sensing and analysis for wide area scene understanding, IEEE Signal Processing Magazine, Vol. 28, No. 3, pp. 20-31, 2011.
[10] V. Mohan, G. Sundaramoorthi, and A. Tannenbaum: Tubular surface segmentation for extracting anatomical structures from medical imagery, IEEE Transactions on Medical Imaging, Vol. 29, No. 12, pp. 1945-1958, 2010.
[11] S. Han, A. Censi, A.D. Straw, and R.M. Murray: A bio-plausible design for visual pose stabilization, Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5679-5686, 2010.
[12] A.P. Aguiar and J.P. Hespanha: Robust filtering for deterministic systems with implicit outputs, Systems & Control Letters, Vol. 58, No. 4, pp. 263-270, 2009.
[13] A.P. Dani, N.R. Fischer, Z. Kan, and W.E. Dixon: Globally exponentially stable observer for vision-based range estimation, Mechatronics, Vol. 22, No. 4, pp. 381-389, 2012.
[14] M. Fujita, H. Kawai, and M.W. Spong: Passivity-based dynamic visual feedback control for three dimensional target tracking: Stability and L2-gain performance analysis, IEEE Transactions on Control Systems Technology, Vol. 15, No. 1, pp. 40-52, 2007.
[15] T. Hatanaka and M. Fujita: Passivity-based visual motion observer integrating three dimensional target motion models, SICE Journal of Control, Measurement, and System Integration, Vol. 5, No. 5, pp. 276-282, 2012.
[16] P.A. Vela and I.J. Ndiour: Estimation theory and tracking of deformable objects, Proceedings of the 2010 IEEE Multi-conference on Systems and Control, pp. 1222-1233, 2010.
[17] F. Bensalah and F. Chaumette: Compensation of abrupt motion changes in target tracking by visual servoing, Proceedings of the 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 181-187, 1995.
[18] J. Gangloff, M. de Mathelin, L. Soler, M.M.A. Sanchez, and J. Marescaux: Active filtering of physiological motion in robotized surgery using predictive control, IEEE Transactions on Robotics, Vol. 21, No. 1, pp. 67-79, 2005.
[19] Y. Ma, S. Soatto, J. Kosecka, and S.S. Sastry: An Invitation to 3-D Vision, Springer, 2003.
[20] H. Kawai, T. Murao, and M. Fujita: Passivity-based visual motion observer with panoramic camera for pose control, Journal of Intelligent and Robotic Systems, Vol. 64, No. 3-4, pp. 561-583, 2011.
[21] F. Bullo and A.D. Lewis: Geometric Control of Mechanical Systems: Modeling, Analysis, and Design for Simple Mechanical Control Systems, Springer, 2004.
[22] H.K. Khalil: Nonlinear Systems, Third Edition, Prentice Hall, 2002.
[23] W.M. Wonham and J.B. Pearson: Regulation and internal stabilization in linear multivariable systems, SIAM Journal on Control, Vol. 12, No. 1, pp. 5-18, 1974.
Tatsuya IBUKI (Student Member)
He received his B.Eng. and M.Eng. degrees in Mechanical and Control Engineering from Tokyo Institute
of Technology, Japan, in 2008 and 2010, respectively. He
is currently a Ph.D. student at Tokyo Institute of Technology and a research fellow of the Japan Society for the
Promotion of Science. His research interests include cooperative control, vision-based estimation and control.
Takeshi HATANAKA (Member)
He received his B.Eng. degree in informatics and
mathematical science, M.Inf. and Ph.D. degrees in applied mathematics and physics all from Kyoto University, Japan in 2002, 2004 and 2007, respectively. He
is currently an Assistant Professor in the Department of
Mechanical and Control Engineering, Tokyo Institute of
Technology, Japan. His research interests include cooperative control, vision-based control/estimation, and energy management.
Masayuki FUJITA (Member)
He is a Professor with the Department of Mechanical
and Control Engineering at Tokyo Institute of Technology. He is also a Program Officer for Japan Science and
Technology Agency (JST) Core Research for Evolutional
Science and Technology (CREST). He received the Dr.
of Eng. degree in Electrical Engineering from Waseda
University, Tokyo, in 1987. Prior to his appointment at
Tokyo Institute of Technology, he held faculty appointments at Kanazawa
University and Japan Advanced Institute of Science and Technology. His
research interests include passivity-based control in robotics and applied
robust control. He is currently the IEEE CSS Vice President Conference
Activities. He was a member of CSS Board of Governors. He serves as
the Head of SICE Technical Division on Control and served as the Chair
of SICE Technical Committee on Control Theory and a Director of SICE.
He served as the General Chair of the 2010 IEEE Multi-conference on
Systems and Control. He has served/been serving as an Associate Editor
for the IEEE Transactions on Automatic Control, the IEEE Transactions
on Control Systems Technology, Automatica, Asian Journal of Control,
and an Editor for the SICE Journal of Control, Measurement, and System
Integration. He is a recipient of the 2008 IEEE Transactions on Control
Systems Technology Outstanding Paper Award. He also received the 2010
SICE Education Award and the Outstanding Paper Awards from the SICE.