APISAT2014, 2014 Asia-Pacific International Symposium on Aerospace Technology
Research on 3D Virtual Environment Modeling Technology for
Space Tele-robot
Biyu Zhu, Aiguo Song*, Xiaonong Xu, Song Li
School of Instrument Science and Engineering, Southeast University, Nanjing 210096, China
Abstract
To overcome the time delay in a tele-robot system, a new 3D virtual environment modeling technology is proposed in this paper. The interactions between the slave robot and its environment can be previewed in a 3D virtual environment on the master side, so that accurate operation of the space tele-robot can be realized and the effect of time delay minimized. Firstly, the 3D virtual interaction scenario is modeled from a point cloud data image, and the target objects are recognized from the image and reconstructed using the Random Sample Consensus (RANSAC) algorithm. Secondly, an effective method is proposed to modify the position of the virtual robot and to calculate the virtual interactive force between the virtual robot and the virtual objects. Lastly, an experiment is carried out under time delay. When the virtual interaction force is compared with the real force measured by the force sensor, the error is below 10% F.S. The experimental results show that this 3D virtual environment modeling method is effective and reliable for space tele-robot control.
© 2014 The Authors. Published by Elsevier Ltd.
Peer-review under responsibility of Chinese Society of Aeronautics and Astronautics (CSAA).
Keywords: Virtual environment; 3D modeling; Tele-robot system; Point cloud data; RANSAC
1. Introduction
Space tele-robots play an important role in space exploration, since they can perform tasks in unknown or hazardous environments. However, a large time delay in the communication between the tele-robot in space and the operator at the local site is inevitable, and it degrades the stability and maneuverability of the system.
* Corresponding author. Tel.: +86 13951804055; fax: 025-83793293.
E-mail address: a.g.song@seu.edu.cn
Constructing a tele-robot system with virtual reality is an effective way to solve the time delay problem [1]. The virtual environment provides the operator with real-time visual and force feedback [2]. Therefore, the accuracy of the virtual environment modeling is the key point of the space tele-robot system [3]. Hironao Yamada introduced the methods of auto point of view (APV) and semi-transparent object (STO), which improved the efficiency as well as the safety of tele-operation and overcame the shortcomings of the conventional three-screen visual display [4]. Huber et al. used combined data from a laser scanner and a video camera to generate a photo-realistic 3D model of a vehicle's environment, which was displayed to the remote operator in real time [5]. However, these methods neither calculate the virtual interaction force nor compare it with the real force.
This paper proposes a new 3D virtual environment modeling technology. First of all, a virtual robot model identical to the real slave robot is established with 3DS MAX and VTK (an object-oriented encapsulation of OpenGL) [6]. Then, a 3D virtual interaction scenario is established. Because the exact locations and shapes of the objects in the real environment are unknown, identifying them is important. Point cloud data collected by a Kinect [7] on the slave side is displayed on the master side. The shape, size and color of the target objects can be estimated initially from the point cloud data, but their exact geometry and position are uncertain, especially when the target is translucent. All colors except the target color are filtered out, and then Random Sample Consensus (RANSAC) [8] is used to detect specific models and their parameters in the point clouds. Thirdly, an effective method is proposed to modify the position of the virtual robot and to calculate the virtual interactive force between the virtual robot and the virtual objects. Lastly, an experiment is carried out under time delay.
The results of this research lay the foundation for high-efficiency, high-precision operation of space tele-robot systems. They demonstrate that a tele-robot system based on 3D virtual environment modeling can improve the ability of a space robot to fulfill tasks such as scientific experiments, daily maintenance of a space station, and on-orbit satellite repair [9].
2. Space tele-robot system with visual and force feedback
The space tele-robot system in this research is mainly composed of two parts, the master system and the slave system, as shown in Fig. 1. The master system consists of an operator, a force feedback hand controller [10], and a computer (PC1) that displays the virtual environment. The virtual robot in the virtual environment is controlled by the operator through the force feedback hand controller. The slave system is a 4-DOF robot equipped with a Kinect, a CCD camera, position sensors, force sensors and a control computer (PC2). Since the space tele-robot system is currently set up in the laboratory, TCP/IP is temporarily used for communication between the master and the slave.
Fig. 1. Space tele-robot system with virtual environment.
The main working process of the system is as follows. The control computer (PC2) collects the data from the sensors, the CCD camera and the Kinect on the slave side and sends them to the computer (PC1) on the master side. PC1 then processes the image information and reconstructs the virtual environment, which includes the virtual robot, the virtual interaction scenario and the target objects. The data from the position sensors are used to modify the position of the virtual robot. The operator controls the virtual robot and the remote robot through the force feedback hand controller while watching the virtual environment. The remote robot moves in the same way as the virtual robot with a 3-second delay. The operator can view the virtual environment from all directions and receives force from the force feedback hand controller when the gripping end-effector contacts the target object.
3. 3D virtual environment reconstruction
3.1. Reconstruction of virtual robot
The three-dimensional virtual robot model is the same as the real robot, which means that the scale and shape of the model are accurate. The robot is divided into five parts at the joints according to its degrees of freedom: the base, the transfer device, the big arm, the forearm and the gripping end-effector. Each part is modeled in 3DS MAX and combined into a complete robot model through a series of graphic transformations such as translation, rotation and scaling. Finally, the virtual robot is displayed in the virtual environment through VTK programming. The virtual robot model is shown in Fig. 2.
Fig. 2. 3D model of virtual robot.
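As an illustration of the graphic transformations involved, the sketch below poses one imported part with VTK; the importer, file name and joint values are assumptions for illustration, not the authors' actual code.

```cpp
// Minimal sketch (assumed file name and joint values): import one robot
// part modeled in 3DS MAX and pose it with a VTK transform.
#include <vtk3DSImporter.h>
#include <vtkActor.h>
#include <vtkActorCollection.h>
#include <vtkNew.h>
#include <vtkRenderer.h>
#include <vtkRenderWindow.h>
#include <vtkTransform.h>

int main() {
  vtkNew<vtk3DSImporter> importer;
  importer->SetFileName("forearm.3ds");  // hypothetical exported part
  importer->Read();                      // builds a renderer and actors

  // Pose the part: translate to the joint origin, then rotate about
  // the joint axis (illustrative values).
  vtkNew<vtkTransform> pose;
  pose->Translate(0.0, 0.0, 0.35);       // joint offset, metres
  pose->RotateZ(30.0);                   // joint angle, degrees

  vtkActor* part = importer->GetRenderer()->GetActors()->GetLastActor();
  part->SetUserTransform(pose);

  importer->GetRenderWindow()->Render();
  return 0;
}
```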
3.2. Representation of virtual interaction scenario
A conventional CCD camera can only collect two-dimensional images, which does not satisfy the needs of the operator, so a Kinect is used in this research to capture three-dimensional images.
The Kinect is equipped with an RGB camera and an infrared camera that have been calibrated by its internal chip, directly capturing RGB image data and depth information on the slave side. Three-dimensional point cloud data (x, y, z, r, g, b) is obtained through the Kinect image capture function encapsulated in the PCL open source library.
Considering the relatively low precision of the Kinect itself and the influence of environmental factors, some noise is inevitably present in the acquired point cloud data. A bilateral filter algorithm [11] is adopted to reduce the noise, preserve the depth information and, at the same time, fill in missing depth information. The filtered point cloud data, which represents the slave environment, is displayed via PCL [12].
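As a rough sketch of this denoising step, PCL's FastBilateralFilter is one available implementation of a bilateral filter for organized Kinect clouds; the sigma values below are placeholders, and the filter actually used in [11] may differ.

```cpp
// Sketch: smooth the depth of an organized Kinect cloud while keeping
// depth edges, using PCL's fast bilateral filter (parameters assumed).
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/fast_bilateral.h>

pcl::PointCloud<pcl::PointXYZRGB>::Ptr
denoise(const pcl::PointCloud<pcl::PointXYZRGB>::ConstPtr& raw)
{
  pcl::FastBilateralFilter<pcl::PointXYZRGB> bilateral;
  bilateral.setInputCloud(raw);  // requires an organized cloud (e.g. 640x480)
  bilateral.setSigmaS(5.0f);     // spatial support, in pixels (assumed)
  bilateral.setSigmaR(0.05f);    // range support: preserves depth edges
  pcl::PointCloud<pcl::PointXYZRGB>::Ptr smoothed(
      new pcl::PointCloud<pcl::PointXYZRGB>);
  bilateral.filter(*smoothed);
  return smoothed;
}
```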
3.3. Reconstruction of target object
In order to achieve interaction between the virtual robot and the target objects, the target objects need to be identified in the virtual interaction scenario. However, only the shape, size and color of the target objects can be estimated initially from the point cloud data; their exact geometry and position are uncertain, especially when the target is translucent. All colors except the target color are filtered out in the HSL color space [13], and then Random Sample Consensus (RANSAC) is used to detect specific models and their parameters in the point clouds.
The RGB image data (r, g, b) are the red, green and blue coordinates (r, g, b ∈ [0, 1]). Their maximum and minimum component values are defined as max and min respectively, with
\[ \max = \max\{r,\, g,\, b\}, \qquad \min = \min\{r,\, g,\, b\} \tag{1} \]

The image color is converted from RGB to HSL, and the HSL color space coordinates (h, s, l) are given by:

\[
h = \begin{cases}
0^{\circ}, & \text{if } \max = \min \\[4pt]
60^{\circ} \times \dfrac{g-b}{\max-\min} + 0^{\circ}, & \text{if } \max = r \text{ and } g \ge b \\[4pt]
60^{\circ} \times \dfrac{g-b}{\max-\min} + 360^{\circ}, & \text{if } \max = r \text{ and } g < b \\[4pt]
60^{\circ} \times \dfrac{b-r}{\max-\min} + 120^{\circ}, & \text{if } \max = g \\[4pt]
60^{\circ} \times \dfrac{r-g}{\max-\min} + 240^{\circ}, & \text{if } \max = b
\end{cases} \tag{2}
\]

\[
s = \begin{cases}
0, & \text{if } l = 0 \text{ or } \max = \min \\[4pt]
\dfrac{\max-\min}{\max+\min} = \dfrac{\max-\min}{2l}, & \text{if } 0 < l \le \dfrac{1}{2} \\[4pt]
\dfrac{\max-\min}{2-(\max+\min)} = \dfrac{\max-\min}{2-2l}, & \text{if } l > \dfrac{1}{2}
\end{cases} \tag{3}
\]

\[ l = \tfrac{1}{2}(\max + \min) \tag{4} \]
where Hue h∈[0,360°], Saturation s∈[0,1], and Lightness l∈[0,1].
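As a concrete reference, Eqs. (1)-(4) transcribe directly into the following conversion routine (a plain sketch, independent of any library):

```cpp
// Direct transcription of Eqs. (1)-(4); r, g, b are each in [0, 1].
#include <algorithm>
#include <cmath>

struct HSL { double h, s, l; };  // h in [0, 360), s and l in [0, 1]

HSL rgbToHsl(double r, double g, double b)
{
  const double mx = std::max({r, g, b});  // Eq. (1)
  const double mn = std::min({r, g, b});
  const double d  = mx - mn;

  HSL out;
  out.l = 0.5 * (mx + mn);                // Eq. (4)

  // Eq. (2): hue. fmod folds the "+360 when g < b" branch into one case.
  if (d == 0.0)      out.h = 0.0;
  else if (mx == r)  out.h = std::fmod(60.0 * (g - b) / d + 360.0, 360.0);
  else if (mx == g)  out.h = 60.0 * (b - r) / d + 120.0;
  else               out.h = 60.0 * (r - g) / d + 240.0;  // mx == b

  // Eq. (3): saturation.
  if (out.l == 0.0 || d == 0.0)  out.s = 0.0;
  else if (out.l <= 0.5)         out.s = d / (2.0 * out.l);
  else                           out.s = d / (2.0 - 2.0 * out.l);
  return out;
}
```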
Hue h is selected as the basis for filtering, and all colors except the target color are filtered out. The filtered data is then processed to remove outliers (points whose number of neighbors within a certain radius is less than a certain threshold), further reducing the points that do not lie on the target object.
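The outlier rule above (drop points with too few neighbors inside a given radius) matches PCL's RadiusOutlierRemoval filter; a sketch with assumed thresholds:

```cpp
// Sketch: remove points with fewer than a threshold number of
// neighbours within a search radius (both values assumed).
#include <pcl/point_types.h>
#include <pcl/filters/radius_outlier_removal.h>

pcl::PointCloud<pcl::PointXYZRGB>::Ptr
removeOutliers(const pcl::PointCloud<pcl::PointXYZRGB>::ConstPtr& in)
{
  pcl::RadiusOutlierRemoval<pcl::PointXYZRGB> ror;
  ror.setInputCloud(in);
  ror.setRadiusSearch(0.01);        // 1 cm radius (assumed)
  ror.setMinNeighborsInRadius(10);  // neighbour threshold (assumed)
  pcl::PointCloud<pcl::PointXYZRGB>::Ptr kept(
      new pcl::PointCloud<pcl::PointXYZRGB>);
  ror.filter(*kept);
  return kept;
}
```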
Using the geometric models in the sample_consensus module of PCL and the RANSAC algorithm, the target object is matched against a geometric model and its parameters are obtained. The point clouds of the remote side and the identified target object are shown in Fig. 3. The radius of the identified ball is 34.92 mm, while the actual one is 35 mm. The error is relatively small (0.2% F.S.), so the recognition algorithm is effective. The target object can then be reconstructed from the identified parameters.
Fig. 3. (a) Point clouds about the remote side; (b) The identified target object.
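For a spherical target such as the ball above, the fit can be expressed with the sample_consensus machinery of PCL; the sketch below uses pcl::SACSegmentation with a sphere model, where the inlier threshold and radius limits are assumptions:

```cpp
// Sketch: fit a sphere to the colour-filtered cloud with RANSAC and
// recover its centre and radius (cf. Fig. 3). Thresholds are assumed.
#include <pcl/point_types.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/segmentation/sac_segmentation.h>

bool fitSphere(const pcl::PointCloud<pcl::PointXYZRGB>::ConstPtr& cloud,
               pcl::ModelCoefficients& sphere)
{
  pcl::SACSegmentation<pcl::PointXYZRGB> seg;
  seg.setModelType(pcl::SACMODEL_SPHERE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.002);  // 2 mm inlier band (assumed)
  seg.setRadiusLimits(0.02, 0.05);  // ball of roughly 2-5 cm expected
  seg.setMaxIterations(1000);
  seg.setInputCloud(cloud);

  pcl::PointIndices inliers;
  seg.segment(inliers, sphere);
  // On success, sphere.values = {cx, cy, cz, r},
  // e.g. r = 0.03492 m for the ball identified above.
  return !inliers.indices.empty();
}
```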
4. Modification of virtual robot’s position
The control commands from the force feedback hand controller are sent to the virtual robot and the slave robot simultaneously. Because of the time delay in the communication, the motion of the slave robot lags, whereas the motion of the virtual robot is updated in real time according to the instructions. The accumulated error from time delay and collisions therefore leads to a position difference between the slave robot and the virtual robot. The real position of the slave robot, collected by the position sensors mounted on it, is used to modify the position of the virtual model.
Let the one-way transmission delay be denoted as τ, the position of the virtual robot as x(t), the output position of the force feedback hand controller as x1(t), and the position sensor information as x2(t). The transmission of the position sensor information is delayed by τ seconds, and the position of the slave robot results from the hand controller command sent 2τ seconds earlier, so the correction term is:

\[ \Delta x(t-2\tau) = x_2(t-\tau) - x_1(t-2\tau) \tag{5} \]

The corrected position of the virtual robot is then expressed as:

\[ x(t) = x_1(t) + \Delta x(t-2\tau) \tag{6} \]

Substituting Eq. (5) into Eq. (6) gives the modified position of the virtual robot:

\[ x(t) = x_1(t) + \left( x_2(t-\tau) - x_1(t-2\tau) \right) \tag{7} \]
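On the master side, Eq. (7) can be realized by buffering the hand controller output so that x1(t − 2τ) is available when the sensor reading x2(t − τ) arrives; the sketch below assumes a fixed control tick and a constant, known delay.

```cpp
// Sketch of Eq. (7): x(t) = x1(t) + (x2(t - tau) - x1(t - 2*tau)).
// Assumes a fixed control tick and a constant delay of delaySteps ticks.
#include <deque>

class PositionCorrector {
public:
  explicit PositionCorrector(int delaySteps) : delaySteps_(delaySteps) {}

  // Called every tick with the current hand controller output x1(t)
  // and the latest received sensor packet, which carries x2(t - tau).
  double correct(double x1Now, double x2Delayed)
  {
    history_.push_back(x1Now);
    if (static_cast<int>(history_.size()) > 2 * delaySteps_ + 1)
      history_.pop_front();  // keep exactly 2*tau ticks of history

    // Until 2*tau of history exists, no correction can be formed yet.
    if (static_cast<int>(history_.size()) <= 2 * delaySteps_)
      return x1Now;

    const double x1TwoTauAgo = history_.front();  // x1(t - 2*tau)
    const double dx = x2Delayed - x1TwoTauAgo;    // Eq. (5)
    return x1Now + dx;                            // Eqs. (6)-(7)
  }

private:
  int delaySteps_;
  std::deque<double> history_;
};
```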
5. Calculation of virtual interactive force
In the simulation experiment of catching the target object, the virtual interactive force is calculated and passed to the operator through the force feedback hand controller. The interactive force felt by the operator is the same as the real force on the slave side. The key to determining whether a virtual interactive force exists is detecting whether the gripping end-effector is in contact with the target object.
The force analysis for the case in which the gripping end-effector contacts the target object is shown in Fig. 4. The deformations on the two sides of the ball are assumed to be equal (the influence of the gravity component is ignored) when the gripping end-effector collides with the ball. Considering the virtual interactive force F as a function of the elastic deformation Δl and the elastic coefficient K of the ball, we have:

\[ F = K \Delta l \tag{8} \]
Fig. 4. The force analysis when gripping end-effector contacts with the target object.
In ideal circumstances, when the friction force f equals the component of the gravity mg that it opposes (the mass of the ball is m), the end-effector can seize the ball and lift it without dropping it, with

\[ f = 2\mu F = 2\mu K \Delta l = mg\cos\theta \tag{9} \]

where μ is the static friction coefficient and θ is the supplementary angle of the angle between the gravity and the friction force. The deformation of the ball therefore has a critical value:

\[ \Delta l_0 = \frac{mg\cos\theta}{2\mu K} \tag{10} \]
The positions of the contact points on the ball are P1(x1, y1, z1) and P2(x2, y2, z2), the center of the ball is O(x, y, z), and the radius of the ball is r. According to the positional relationship between the two contact points and the ball, there are four conditions, as follows.
(1) If |OP1| > r and |OP2| > r, the gripping end-effector is not in contact with the ball.
(2) If r − Δl0 < |OP1| ≤ r or r − Δl0 < |OP2| ≤ r, at least one contact point exists and the ball is still within its elastic deformation range, so the gripping end-effector can apply a larger force.
(3) If r − Δlmax ≤ |OP1| ≤ r − Δl0 or r − Δlmax ≤ |OP2| ≤ r − Δl0, only one contact point exists, and the gripping end-effector cannot apply a larger force.
(4) If r − Δlmax ≤ |OP1| ≤ r − Δl0 and r − Δlmax ≤ |OP2| ≤ r − Δl0, both contact points exist and the elastic deformation exceeds the critical value.
Only under the fourth condition can the virtual ball be grasped by the gripping end-effector. The virtual interactive force F is decomposed into three components Fx, Fy and Fz, with

\[ F_x = F\cos\alpha \tag{11} \]
\[ F_y = F\cos\beta \tag{12} \]
\[ F_z = F\cos\gamma \tag{13} \]

where α, β and γ are the orientation angles of the force vector F. The virtual interactive force is displayed on the master computer as three rectangles, one per component; the length of each rectangle represents the magnitude of the corresponding force component.
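Combining Eqs. (8) and (11)-(13) with the four contact conditions, the grasp test can be sketched as follows; the names, the mean-deformation estimate and the Δlmax bound are illustrative assumptions, not the authors' implementation.

```cpp
// Sketch of the contact test (conditions (1)-(4)) and the force
// decomposition of Eqs. (8) and (11)-(13). Names are illustrative.
#include <cmath>

struct Vec3 { double x, y, z; };

static double dist(const Vec3& a, const Vec3& b) {
  return std::sqrt((a.x - b.x) * (a.x - b.x) +
                   (a.y - b.y) * (a.y - b.y) +
                   (a.z - b.z) * (a.z - b.z));
}

struct GraspResult { bool caught; double Fx, Fy, Fz; };

// p1, p2: contact points; o: ball centre; r: ball radius; K: elastic
// coefficient; dl0/dlMax: critical and maximum deformation; dir: unit
// direction of F, i.e. (cos alpha, cos beta, cos gamma).
GraspResult checkGrasp(const Vec3& p1, const Vec3& p2, const Vec3& o,
                       double r, double K, double dl0, double dlMax,
                       const Vec3& dir)
{
  const double d1 = dist(p1, o), d2 = dist(p2, o);
  GraspResult res{false, 0.0, 0.0, 0.0};

  // Condition (1): neither finger touches the ball.
  if (d1 > r && d2 > r) return res;

  // Condition (4): both fingers compressed past the critical value dl0.
  const bool deep1 = (d1 >= r - dlMax && d1 <= r - dl0);
  const bool deep2 = (d2 >= r - dlMax && d2 <= r - dl0);
  if (deep1 && deep2) {
    const double dl = r - 0.5 * (d1 + d2);  // mean elastic deformation
    const double F  = K * dl;               // Eq. (8)
    res.caught = true;
    res.Fx = F * dir.x;                     // Eq. (11): F cos(alpha)
    res.Fy = F * dir.y;                     // Eq. (12): F cos(beta)
    res.Fz = F * dir.z;                     // Eq. (13): F cos(gamma)
  }
  // Conditions (2)-(3): contact without a stable grasp; no force output.
  return res;
}
```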
6. Experimental result and analysis
An experiment is implemented under the influence of a 3-second time delay. After an initial test, the virtual robot and the virtual interaction scenario are displayed on the master computer simultaneously, and their positions are consistent. During the experiment, the operator controls the motion of the robot with the force feedback hand controller. The real robot in the image is found to lag behind the virtual robot, as shown in Fig. 5. The operator can control the virtual robot in real time to perform specified tasks with visual and force feedback. After a successful operation, the commands are sent to the slave robot, and the slave robot then moves along the planned route. In this way, the time delay problem is overcome.
Fig. 5. The 3D virtual environment with time delay.
In order to test the accuracy of the 3D virtual environment modeling method, the virtual interaction force is compared with the real force measured by the force sensor. After the data is processed, the two force curves and the error of the virtual force are shown in Fig. 6.
Fig. 6. (a) The virtual interaction force and the real force lines; (b) The error of the virtual force.
The overall trends of the two curves are similar, and the force eventually remains unchanged. However, the fluctuation of the real force curve is obvious, because jitter exists when the gripping end-effector contacts the ball. The error is below 10% F.S. after removing the unstable points, which shows that this 3D virtual environment modeling method is effective and reliable for space tele-robot control.
7. Conclusion
In this paper, a space tele-robot system subject to time delay was constructed using a new 3D virtual environment modeling method. The operator controls the remote operation of the space robot by using the force feedback hand controller and observing the reconstructed virtual environment on the computer. Accurate force and visual feedback ensure the success of the operation. The virtual environment provides the operator with the real-time state of the robot, which allows fluent operation, and the force from the hand controller gives an immersive experience. Experimental results show that this 3D virtual environment modeling method is effective and reliable for space tele-robot control. However, the RANSAC algorithm can currently only identify target objects with simple, regular shapes, since this research is at an early stage. In the future, more complicated and irregular objects are expected to be reconstructed from the point cloud data, further improving the 3D virtual environment modeling method.
Acknowledgements
This work is supported by the Natural Science Foundation of China (Grant Nos. 61325018 and 61272379).
References
[1] H. Yamada, Tao Ni, Dingxuan Zhao, Construction tele-robot system with virtual reality, in: IEEE international conference on robotics,
automation and mechatronics, 2008, pp. 36–40.
[2] Chung Hyuk Park, Ayanna M. Howard, Vision-based force guidance for improved human performance in a teleoperative manipulation
system, in: IEEE/RSJ international conference on intelligent robots and systems, 2007, pp. 2126–2131.
[3] Huijun Li, Aiguo Song, Virtual-Environment Modeling and Correction for Force-Reflecting Teleoperation With Time Delay, in: IEEE
Transactions on Industrial Electronics, 2007, 54(2), pp. 1227–1233.
[4] Xinxing Tang, Hironao Yamada, Tele-operation Construction Robot Control System with Virtual Reality Technology, in: Advances in Control Engineering and Information Science, 2011, pp. 1071–1076.
[5] Daniel Huber, Herman Herman, Alonzo Kelly, Peter Rander, Jason Ziglar, Real-time photo-realistic visualization of 3-D environments for
enhanced teleoperation of vehicles, in: Proceedings of the international conference on 3-D digital imaging and modeling, 2009, pp. 1518–
1525.
[6] Jian Dong, Jian Wang, Ge Jin, Research of 3D modeling of Virtual Reality for LAMOST based on OpenGL, in: Second International
Workshop on Education Technology and Computer Science, 2010, pp. 296–298.
[7] Jan Smisek, Michal Jancosek, Tomas Pajdla, 3D with Kinect, in: IEEE International Conference on Computer Vision Workshops, 2011, pp.
1154–1160.
[8] M. Xu, J. Lu, Distributed RANSAC for the robust estimation of three-dimensional reconstruction, Computer Vision, IET, 2012, 6(4), pp.
324–333.
[9] Michael Goeller, Jan Oberlaender, Klaus Uhl, et al, Modular Robots for On-Orbit Satellite Servicing, in: IEEE International Conference on
Robotics and Biomimetics, 2012, pp. 2018–2023.
[10] Rebecca M. Pierce, Elizabeth A. Fedalei, Katherine J. Kuchenbecker, Control of a Virtual Robot with Fingertip Contact, Pressure,
Vibrotactile, and Grip Force Feedback, in: IEEE Haptics Symposium, 2014, pp. 23–26.
[11] Li Chen, Hui Lin, Shutao Li, Depth Image Enhancement for Kinect Using Region Growing and Bilateral Filter, in: 21st International
Conference on Pattern Recognition, 2012, pp. 3070–3073.
[12] Cheng-Tiao Hsieh, An efficient development of 3D surface registration by Point Cloud Library (PCL), in: IEEE International Symposium
on Intelligent Signal Processing and Communication Systems, 2012, pp. 729–734.
[13] Yuhua Zou, Weihai Chen, Jianbin Zhang, Edge Map Guided Stereo Matching in HSL Color Space for Mobile Robot Navigation, in: IEEE
International Conference on Robotics and Biomimetics, 2011, pp.