International Journal of Engineering Trends and Technology (IJETT) – Volume 8 Number 6 - Feb 2014
Ground Target Following Tri-copter and its Combined Communication and Radio Navigation
Anoop M
Electrical and Electronics Engineering Department, Anna University
The Kavery Engineering College, Salem Dist., Tamil Nadu, India
Guided by M. G. Anand, Assistant Professor, The Kavery Engineering College, Salem
Abstract: The use of unmanned aerial vehicle systems is spreading rapidly, and they are used in many fields. Safety of the system is very important, so GPS and radio communication are used. All of these parts are fixed on a tri-copter; among the multi-copters, tri-copters consume the least power. A vision-based ground target following system is attached to the tri-copter for following a ground target. An onboard software system is developed based on a multithread technique capable of coordinating multiple tasks.
Index Terms – unmanned aerial vehicle (UAV), GPS, tri-copter, multi-copter, ground target following.
I. Introduction
Unmanned aerial vehicles, commonly known as drones, are aircraft without a human pilot, and they come in different types. According to the number of brushless motors, they can be divided into several types: helicopter, tri-copter, quad-copter, hexa-copter, octocopter, and so on. UAVs typically fall into one of six categories: first, target and decoy, providing ground and aerial gunnery a target that simulates an enemy aircraft or missile; second, reconnaissance, providing battlefield intelligence; third, combat, providing attack capability for high-risk missions; fourth, logistics, UAVs specifically designed for cargo and logistics operations; fifth, research and development, used to further develop UAV technologies to be integrated into field-deployed UAV aircraft; and sixth, civil and commercial UAVs, specifically designed for civil and commercial applications. They can also be categorized in terms of range and altitude: hand-held, about 2 km range; close, up to 10 km range; NATO type, up to 50 km range; tactical, about 160 km range; MALE (medium altitude, long endurance), range over 200 km; and HALE (high altitude, long endurance), indefinite range.
The tri-copter is the hardware part of the project. A vision-based algorithm is used to autonomously track and chase a moving target with a small flying tri-copter. The challenges associated with the tri-copter led to considering a density-based representation for tracking. The proposed approach, which estimates the target's position, orientation, and scale, is built on a robust color-based tracker using a multi-part representation. The information obtained from the visual tracker is then used to control the position and yaw angle of the UAV in order to chase the target object. A hierarchical control scheme is designed to achieve the tracking. Experiments on a tri-copter UAV following a small moving car are provided to validate the proposed method. The rest of this work is organized as follows. In the next section, the concept of the tri-copter is discussed. Section III describes ground target following. Section IV gives an overview of target detection, and Section V an overview of image tracking. Section VI describes the target following control. The conclusion is drawn in Section VII.
II. Tri-copter
Flying platforms that are small, agile, and able to take off vertically are important for working in hazardous environments. Equipment that fulfills this requirement is a UAV (unmanned aerial vehicle) in the form of a multi-copter combined with excellent control. A multi-copter is a rotorcraft with more than two rotors, and a rotorcraft with three rotors is called a tri-copter. Multi-copters have fixed-pitch blades, and the speeds of the rotors are varied to achieve motion control of the tri-copter.
The tri-copter is a small model rotorcraft with three arms, each with a brushless electric motor attached at its end. The arms are attached to a plate and arranged in the shape of the letter Y. The angle between any two arms is 120° and the length of each arm is 50 cm. A servo motor is attached to one of the motors and can tilt it, thereby achieving a change in motion. The controller consists of an inner and an outer loop; the controller uses only the inner loop to stabilize the rotational rates of the tri-copter. A rotor is attached to each of the motors. The rotor blades have fixed pitch angles, and therefore the airflow depends on the direction of rotation of the blade. The blades can be regarded as identical, which results in the same rotational direction and the same airflow for a given angular rate.
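To make the motion control concrete, the following is a rough sketch of how such a Y-frame tri-copter could mix throttle, roll, and pitch commands into the three motor thrusts, with yaw produced by tilting the rear motor's servo; the arm geometry, sign conventions, and gains are illustrative assumptions rather than values from this work.

```python
# Rough sketch of a Y-frame tri-copter mixer: throttle, roll, and pitch commands are
# distributed over three motors, and yaw is generated by tilting the rear motor's servo.
# Arm angles, signs, and gains are assumptions for illustration only.
import numpy as np

ARM_ANGLES_DEG = [60.0, -60.0, 180.0]     # front-right, front-left, rear (0 deg = nose)

def mix(throttle, roll_cmd, pitch_cmd, yaw_cmd):
    """Return three normalized motor commands and the rear servo tilt (radians)."""
    motors = []
    for ang in np.radians(ARM_ANGLES_DEG):
        x, y = np.cos(ang), np.sin(ang)   # arm direction: x forward, y right
        # More thrust on the left arm (negative y) rolls the craft to the right (positive roll);
        # more thrust on the front arms (positive x) pitches the nose up (sign convention assumed).
        motors.append(throttle - roll_cmd * y + pitch_cmd * x)
    rear_servo = yaw_cmd                  # tilting the rear rotor produces the yaw moment
    return np.clip(motors, 0.0, 1.0), rear_servo

# Example: hover throttle with a small roll-right command.
print(mix(throttle=0.5, roll_cmd=0.1, pitch_cmd=0.0, yaw_cmd=0.0))
```

In practice such a mixer would sit inside the inner rate loop mentioned above, fed by the outputs of the rate controller.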
III. Ground Target Following
Vision-based control of the unmanned aerial vehicle is one of the main parts of this project. Vision can provide a cheap, passive, and rich source of information, and a lightweight camera can be embedded on small UAVs. The main efforts have been concentrated on developing vision-based control methods for autonomous take-off, landing, stabilization, and navigation. The visual information is obtained using a known model of the target or of the environment, key images, or texture points for motion estimation or optical-flow estimation. The reliability of the visual information is an important factor for the good realization of the vision-based control task. To perform such a task automatically, one must be able to robustly extract the object location from images despite difficult constraints: large displacements, occlusions, image noise, illumination and pose changes, or image blur.
One way to describe an object is the image template, which stores luminance or color values and their locations. Because they describe what the object looks like pixel-wise, image templates can accurately recover a large range of motions, but they are very sensitive to modifications in the object's appearance due to pose changes, lighting variations, blur, or occlusions. Color histograms are density-based descriptors; they represent an attractive alternative because of their low computational complexity and robustness to appearance changes.
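As a rough illustration of how such a density-based color tracker can recover position, scale, and orientation, the sketch below uses OpenCV's hue-histogram back-projection with CamShift; the video file name, the initial target box, and the thresholds are assumptions for illustration, not values from this work.

```python
# Minimal sketch of a color-histogram (density-based) tracker using OpenCV.
import cv2

cap = cv2.VideoCapture("target.mp4")          # hypothetical input video
ok, frame = cap.read()
x, y, w, h = 300, 200, 60, 60                 # assumed initial target box
track_window = (x, y, w, h)

# Build a hue histogram of the target region (the density-based descriptor).
hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, (0, 60, 32), (180, 255, 255))   # ignore dark/washed-out pixels
hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    # CamShift returns a rotated box: position, scale, and orientation of the target.
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    (cx, cy), (tw, th), angle = rot_rect
    print(f"target at ({cx:.0f}, {cy:.0f}), size {tw:.0f}x{th:.0f}, angle {angle:.0f} deg")
```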
UAVs are a challenging application field, and in this context strong simplifying assumptions are usually made in the vision algorithms. A color-based algorithm can be used to track a fixed target and autonomously stabilize a UAV above it. A full vision-based system that uses a color-based tracking method can robustly localize a moving object through the frames and control the UAV to chase it. To the author's knowledge, this has not been done before while also considering potential loss due to occlusions and estimating not only the position of the object but also its rotation and scale changes in the image, which allows the UAV's attitude and yaw to be controlled.
To realize vision-based ground target detection, many vision approaches have been proposed worldwide, such as template matching, background subtraction, optical flow, stereovision-based technologies, and feature-based approaches. Fig. 1 shows an overview of the system. Based on the vision sensing data and the navigation sensors, the distance to the target is measured. This measurement is integrated with the flight control system to guide the UAV to follow the ground target in flight.
Fig. 1: Overview of the system
IV. Target Detection Overview
The purpose of target detection is to identify the target of interest in the image automatically, based on a database of preselected targets. A toy car can be used as the ground target. Except for the low-level embedded attitude control, the computations are offloaded to a ground station; the data are transmitted between the ground station and the UAV through a radio link. Figure 2 gives an overview of the proposed system. A classical pattern recognition procedure can be used to identify the target automatically, which consists of three main steps: segmentation, feature extraction, and pattern recognition.
Fig. 2: Flow chart of the ground target detection, tracking, and following.
1. Segmentation: The segmentation step aims to separate the objects of interest from the background. To simplify further processing, two assumptions are made: first, the target and the environment exhibit Lambertian reflectance, in other words, their brightness is unchanged regardless of the viewing direction; second, the target has a distinct color distribution compared to the surrounding environment.
2. Feature Extraction: Generally, multiple objects will be found in the segmented images, including the true target and false objects. Geometric and color features are used as descriptors to identify the true target.
3. Pattern Recognition: The purpose of pattern recognition is to identify the target among the extracted foreground objects in terms of the extracted features. A straightforward classifier uses the nearest-neighbor rule: it calculates a metric or "distance" between an object and a template in a feature space and assigns the object to the class with the highest score. However, to take advantage of a priori knowledge of the feature distribution, the classification problem is formulated under a model-based framework and solved using a probabilistic classifier. A discriminant function derived from Bayes' theorem is employed to identify the target; this function is computed from the measured feature values of each object and the known distribution of features obtained from training data (a minimal sketch of this pipeline is given after the list).
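The sketch below illustrates the three steps with a color segmentation, two simple blob features, and a Gaussian (Bayes) discriminant score; the color thresholds, feature choices, and training statistics are illustrative assumptions, not the actual values used in this work.

```python
# Hedged sketch of the detection pipeline: color segmentation, simple geometric features,
# and a Gaussian (Bayes) discriminant to accept or reject each blob.
import cv2
import numpy as np

def segment(frame_bgr):
    """Color segmentation: keep blobs whose hue matches the (assumed) target color."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))   # assumed red-ish toy car
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    return [stats[i] for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > 50]

def features(stat):
    """Geometric features of one blob: area and aspect ratio."""
    w, h, area = stat[cv2.CC_STAT_WIDTH], stat[cv2.CC_STAT_HEIGHT], stat[cv2.CC_STAT_AREA]
    return np.array([area, w / max(h, 1)], dtype=float)

# Feature distribution learned offline from training images (illustrative numbers).
mu = np.array([800.0, 1.6])            # mean feature vector of the true target
cov = np.diag([200.0**2, 0.3**2])      # assumed diagonal covariance
cov_inv, log_det = np.linalg.inv(cov), np.log(np.linalg.det(cov))

def discriminant(x):
    """Bayes discriminant (Gaussian log-likelihood up to a constant)."""
    d = x - mu
    return -0.5 * (d @ cov_inv @ d) - 0.5 * log_det

def detect(frame_bgr, threshold=-10.0):
    """Return the blob with the highest discriminant score, if it passes the threshold."""
    scored = [(discriminant(features(s)), s) for s in segment(frame_bgr)]
    if not scored:
        return None
    best_score, best = max(scored, key=lambda t: t[0])
    return best if best_score > threshold else None
```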
V. Image Tracking Overview
The purpose of image tracking is to find the region or point corresponding to the given target. Ground target following depends on image tracking, so image tracking supports the ground target following task. Unlike detection, a search over the entire image is not required, so the processing speed of image tracking is faster than that of detection. The image-tracking problem can be solved using two main approaches: 1) filtering and data association and 2) target representation and localization.
Fig. 3: Flow chart of image tracking
Filtering and Data Association: The filtering and data association approach can be considered a top-down process. Filtering is used to estimate the states of the target, such as static appearance and location. Typically, the state estimation is achieved using filtering techniques. Most tracking algorithms are model based, because a good model-based tracking algorithm will greatly outperform any model-free tracking algorithm if the underlying model turns out to be a good one. When the measurement noise follows a Gaussian distribution, the optimal solution can be achieved by the Kalman filtering technique. In more general cases, particle filters are more suitable and robust; however, the computational cost increases, and sample degeneracy is also a problem in this system. When multiple targets are tracked in the image sequence, the validation and association of the measurements become a critical issue, and association techniques such as the probabilistic data association filter (PDAF) and the joint PDAF are widely used.
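As a minimal sketch of the filtering step, the constant-velocity Kalman filter below estimates the target's pixel position and velocity; the frame rate and noise covariances are assumed values for illustration, not parameters of this system.

```python
# Minimal constant-velocity Kalman filter for the target's image position.
import numpy as np

dt = 1.0 / 30.0                                  # assumed frame rate of 30 Hz
F = np.array([[1, 0, dt, 0],                     # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],                      # pixel position is measured
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.5                              # assumed process noise
R = np.eye(2) * 4.0                              # assumed measurement noise (pixels^2)

x = np.array([320.0, 240.0, 0.0, 0.0])           # initial state at the image centre
P = np.eye(4) * 100.0

def predict():
    global x, P
    x = F @ x
    P = F @ P @ F.T + Q
    return x[:2]                                 # predicted pixel position

def update(z):
    """z is the measured target position from the image tracker."""
    global x, P
    y = np.asarray(z, float) - H @ x             # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P

# Example: predict, then correct with a (hypothetical) measurement.
predict()
update([330.0, 244.0])
print("estimated position:", x[:2], "velocity:", x[2:])
```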
Target Representation and Localization: Aside from using motion prediction to find the corresponding region or point, the target representation and localization approach is considered another efficient way and is referred to as a bottom-up approach. Among the searching methods, the mean-shift approach using the density gradient is commonly used; it searches for the peak of the object probability density. Its efficiency is limited when the spatial movement of the target changes greatly. To take advantage of the aforementioned approaches, the use of multiple trackers is widely adopted in image-tracking applications; tracking schemes integrating color, motion, and geometric features have been proposed to realize robust image tracking. In conclusion, combining motion filtering with advanced searching algorithms makes the tracking process more robust, but the computational load is heavier. Instead of using multiple trackers simultaneously, a hierarchical tracking scheme can be used to balance the computational cost and performance. In the model-based image tracking, the Kalman filtering technique is employed to provide accurate estimation and prediction of the position and velocity of a single target. If the model-based tracker fails to find the target, a mean-shift-based image-tracking method is activated to retrieve the target in the image.
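A rough sketch of such a hierarchical scheme is given below: the Kalman prediction gates the model-based result, and a mean-shift search on the color back-projection is used only when that gate fails. The gate size, window handling, and helper functions (kf_predict, kf_update, and the hue histogram hist) are assumptions carried over from the earlier sketches, not the actual implementation of this work.

```python
# Sketch of a hierarchical tracker: validate the model-based window against the Kalman
# prediction; if it falls outside the gate, fall back to a mean-shift search (cv2.meanShift).
import cv2
import numpy as np

def hierarchical_track(frame_bgr, hist, track_window, kf_predict, kf_update, gate_px=40):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)

    predicted = kf_predict()                         # Kalman prediction of the pixel position

    # Model-based step: accept the current window only if it lies inside the gate.
    x, y, w, h = track_window
    cx, cy = x + w / 2.0, y + h / 2.0
    if np.hypot(cx - predicted[0], cy - predicted[1]) < gate_px:
        kf_update([cx, cy])
        return track_window

    # Fallback: mean-shift on the color back-projection to retrieve the target.
    term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, track_window = cv2.meanShift(back_proj, track_window, term_crit)
    x, y, w, h = track_window
    kf_update([x + w / 2.0, y + h / 2.0])
    return track_window
```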
VI. Target Following Control
We proceed to design a comprehensive target-following system in this section. It consists of two main layers: the pan/tilt servomechanism control and the UAV following control. As mentioned in Section II, a pan/tilt servomechanism is employed in the first layer to control the orientation of the camera so as to keep the target at an optimal location in the image plane, namely, eye-in-hand visual servoing, which makes target tracking in the video sequence more robust and efficient. The overall structure of the target-following control is shown in Fig. 4. In the second layer, the UAV is controlled to maintain a constant relative distance to the moving target in flight.
Assume that the ground is flat and that the height h of the UAV above the ground is known; it can then be calculated from the measurements of the onboard navigation sensors. To estimate the relative distance between the target and the UAV, the camera model has to be combined with this transformation, giving the overall geometric model from an ideal image to the NED frame. Based on the assumption that the target is on the ground, zn is equal to zero, and λ can then be derived. The relative distance between the target and the UAV is thus estimated and is employed as the reference signal to guide the UAV to follow the motion of the target. The tracking reference for the UAV is defined in terms of cx and cy, the desired relative distances between the target and the UAV along the Xb- and Yb-axes, respectively; h0, the predefined height of the UAV above the ground; ψ0, the predefined heading angle of the UAV; and Rn/b, the rotation matrix from the body frame to the local NED frame, which can be calculated from the output of the onboard navigation sensors.
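As a hedged sketch of this geometry, the code below back-projects the tracked image point through an assumed pinhole camera model, rotates the ray into the NED frame, and scales it so that the ray reaches the flat ground; the intrinsics, camera attitude, and height are illustrative assumptions rather than the calibration used in this work.

```python
# Flat-ground range estimation: intersect the line of sight with the ground plane
# (the target's vertical offset below the UAV equals the height h).
import numpy as np

K = np.array([[600.0,   0.0, 320.0],      # assumed pinhole intrinsics
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

def target_offset_ned(u, v, R_nc, h):
    """NED-frame offset from the UAV to the ground target.

    u, v : pixel coordinates of the target centre from the image tracker
    R_nc : rotation from the camera frame to the local NED frame
           (obtained from the navigation sensors and the pan/tilt angles)
    h    : UAV height above the flat ground, in metres
    """
    ray_c = np.linalg.inv(K) @ np.array([u, v, 1.0])   # line of sight in the camera frame
    ray_n = R_nc @ ray_c                                # line of sight in the NED frame
    lam = h / ray_n[2]                                  # scale so the ray reaches the ground
    return lam * ray_n                                  # [north, east, down] offset

# Example: UAV facing north at 10 m height, camera pitched 45 degrees down. Columns of
# R_nc are the camera axes (x: image right, y: image down, z: optical axis) in NED.
c = np.cos(np.radians(45.0))
R_nc = np.array([[0.0,  -c,   c],
                 [1.0, 0.0, 0.0],
                 [0.0,   c,   c]])
print(target_offset_ned(320.0, 240.0, R_nc, h=10.0))    # -> roughly [10, 0, 10]
```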
Fig. 4: Block diagram of the tracking control scheme
Control of the Pan/Tilt Servomechanism: As shown in Fig. 4, given a generic point P, pi and p∗i are the measured and desired locations of the projected point P in the image plane, respectively; e = [eφ, eθ]T is the tracking error; u = [uφ, uθ]T is the output of the tracking controller; and v = [vφ, vθ]T is the output of the pan/tilt servomechanism. M is the camera model, which maps points in 3-D space to projected points in the 2-D image frame. N is a function that calculates the orientation of an image point pi with respect to the UAV under the current v. As mentioned in the definitions of the coordinate systems, the orientation of P with respect to the UAV can be defined using azimuth and elevation angles in the spherical coordinate system, described by the two rotation angles pe = [pφ, pθ]T.
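A minimal sketch of this eye-in-hand servo loop is given below: the image-plane error is converted to small azimuth/elevation errors and fed to a proportional correction of the pan/tilt angles. The focal length, gain, servo limits, and sign conventions are assumptions for illustration, not the controller actually used in this work.

```python
# Minimal eye-in-hand pan/tilt servo step: image error -> angular error -> proportional correction.
import numpy as np

FX = FY = 600.0                  # assumed focal lengths in pixels
KP = 0.5                         # assumed proportional gain
PAN_LIMIT = np.radians(90.0)     # assumed servo travel
TILT_LIMIT = np.radians(60.0)

def pan_tilt_step(p_i, p_star, v):
    """One control step.

    p_i    : measured image location of P, in pixels
    p_star : desired image location of P (e.g. the image centre)
    v      : current pan/tilt angles [v_phi, v_theta] in radians
    """
    # Image-plane tracking error converted to small angular errors (azimuth, elevation).
    e_phi = np.arctan2(p_i[0] - p_star[0], FX)
    e_theta = np.arctan2(p_i[1] - p_star[1], FY)
    # Proportional correction; sign chosen so the camera turns toward the target
    # (assumes positive pan moves the view toward positive image x, and likewise for tilt).
    u = KP * np.array([e_phi, e_theta])
    v_new = v + u
    return np.clip(v_new, [-PAN_LIMIT, -TILT_LIMIT], [PAN_LIMIT, TILT_LIMIT])

# Example: the target is measured to the right of and below the desired image point.
print(pan_tilt_step(p_i=(400.0, 300.0), p_star=(320.0, 240.0), v=np.array([0.0, 0.0])))
```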
VII. Conclusion
Unmanned aerial vehicle systems have become an essential part of today's world. Applications of UAVs can now be found in a number of areas, from military, civilian, surveillance, and agricultural uses to academic research and wildlife conservation. In military applications, UAVs can be used for ground target following. Finding a suitable segmentation algorithm based on the application and the type of input image is therefore very important. The comprehensive design and implementation of the vision system for the UAV, including hardware construction, software development, and an advanced ground target seeking and following scheme, will provide an efficient tri-copter that can be used for many other types of applications.
REFERENCES
[1] IEEE Transactions on Industrial Electronics, vol. 59, no. 2, p. 1038, February 2012.
[2] The French ANR national project SCUAV (ANR Psirob SCUAV project, ref. ANR-06-ROBO-0007-02).
[3] K.-J. Barsk, "Model Predictive Control of a Tricopter," Master's thesis in Automatic Control, Linköping University, LiTH-ISY-EX--12/4607--SE, Linköping, 2012.
[4] "Vision Based Terrain Recovery for Landing Unmanned Aerial Vehicles," 43rd IEEE Conference on Decision and Control, Atlantis, Paradise Island, Bahamas, December 14-17, 2004.
[5] N. R. Corby, Jr., "Machine Vision for Robotics," IEEE Transactions on Industrial Electronics, vol. IE-30, no. 3, August 1983.
[6] S. Hrabar and G. S. Sukhatme, "Combined Optic-Flow and Stereo-Based Navigation of Urban Canyons for a UAV," Robotic Embedded Systems Laboratory, University of Southern California, Los Angeles, California, USA ({shrabar, gaurav}@robotics.usc.edu).
[7] "Visual Servoing Approach for Tracking Features in Urban Areas Using an Autonomous Helicopter," Proceedings of the 2006 IEEE International Conference on Robotics and Automation, Orlando, Florida, May 2006.
[8] N. Guenard, T. Hamel, and R. Mahony, "Practical Visual Servo Control for an Unmanned Aerial Vehicle," IEEE Transactions on Robotics, vol. 24, no. 2, April 2008.
[9] M. Blösch, S. Weiss, D. Scaramuzza, and R. Siegwart, "Vision Based MAV Navigation in Unknown and Unstructured Environments," Autonomous Systems Lab, ETH Zurich, 2010 IEEE International Conference on Robotics and Automation, Anchorage, Alaska, USA, May 3-8, 2010.
[10] J. Durrie, T. Gerritsen, E. W. Frew, and S. Pledgie, "Vision-Aided Inertial Navigation on an Uncertain Map Using a Particle Filter," 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, May 12-17, 2009.
M. G. Anand was born in Erode in 1985. He received the B.E. degree in Electronics and Communication Engineering from Vellalar College of Engineering and Technology in 2007 and the M.E. degree in Power Electronics and Drives from K.S.R. College of Engineering in 2011. He is now working as an Assistant Professor in the Department of Electrical and Electronics Engineering at The Kavery Engineering College. His research interests include power electronics, renewable power generation, and distribution systems. He is a lifetime member of ISTE. Email: anandeee1985@yahoo.com
Anoop M received the B.Tech. degree from the Department of Electronics and Communication Engineering, PRIST University, Thanjavur, India, in 2012. Since 2013, he has been working toward the M.E. degree at Anna University. His research interests include embedded systems, real-time software, vision-based control and navigation, target tracking, and unmanned aerial vehicles.