Mid Sweden University
Department of Information Technology and Media (ITM)
Självständigt arbete på avancerad nivå
Independent degree project - second cycle
Electronics Engineering, 120 credits
Classification and prediction for flying objects based on behavior model
HUI LIU
2014-06-09
MID SWEDEN UNIVERSITY
Examiner: Prof. Mattias O’Nils, mattias.onils@miun.se
Supervisor: Dr. Najeem Lawal, najeem.lawal@miun.se
Author: Hui Liu, huli1100@student.miun.se
Degree programme: Electronics Design, 120 credits
Main field of study: Machine vision system
Semester, year: HT2013
Abstract
This master's thesis relates to machine vision systems and object
classification. The aim is to classify flying objects, for example an eagle,
a kite or an airplane, in the images of a computer vision system. In this
thesis, large amounts of data are analysed and a behavior model is built
for each object as important steps towards improving and automating
the object classification system. The application of this work is to reduce
the deaths of golden and bald eagles caused by wind turbine blades.
In this thesis work, a new and effective method is presented, namely a
stereo vision system, which is applied to feature selection for this object
classification. Several features are extracted, including the flying height,
speed, size and the degree of change in the object parameters.
For image processing and feature extraction, video acquisition is the
first and essential step. Due to limitations of both equipment and
location, the captured videos still do not allow for the collection of
sufficient data. For the classification of two objects, a Support Vector
Machine (SVM) and Library for Support Vector Machine (LIBSVM) have
been employed and implemented in MATLAB. In addition, a
preliminary study in relation to the idea of multi-class classification has
been conceived and tested by means of an experiment.
In relation to building a behavior model, the various feature properties
and characteristics were beneficial for improving the accuracy and
robustness of the final classification and recognition results. The results
gathered from the two methods, SVM and LIBSVM, are compared and
analysed in order to identify their differences and determine the better
solution. Additionally, possible future work for this project is discussed.
Results show that 98% of the flying objects can currently be classified by
using the OVO SVMs and the OVR SVMs. Based on the classification
results, 85.82% of the flying objects could be predicted correctly.
Key words: machine vision system, object classification, behavior model,
stereo vision system, image processing, feature extraction, SVM,
LIBSVM, MATLAB
Acknowledgements
First of all, I would like to express my sincere gratitude to all who have
helped me during the researching and writing of this thesis. Meanwhile,
my thanks go to all the teachers who have taught me during these two
years and who have helped me enrich and broaden my knowledge.
My deepest gratitude goes first and foremost to my professor Dr.
Najeem Lawal for accepting me as his student and giving me his
constant encouragement, patient guidance and valuable suggestions
during my entire research work. Without his consistent instruction,
insightful criticism and professional guidance, the completion of this
project thesis would not have been possible.
Secondly, I am deeply indebted to my beloved family for their loving
consideration and great support of my overseas study.
Thirdly, I owe much to my friends and classmates, whose valuable
suggestions and critique were of great help in completing this thesis.
Fourthly, I would also like to express my gratitude to Mid
Sweden University.
Finally, I am so proud to contribute my effort to this significant project
which is to protect rare eagles and other species and to assist in
preventing them from dying by flying into wind turbines.
Table of Contents
Abstract............................................................................................................... ii
Acknowledgements......................................................................................... iii
Table of Contents.............................................................................................. iv
List of Figures.................................................................................................... vi
List of Tables...................................................................................................... ix
Terminology............................................................................................... x
1 Introduction.................................................................................................... 1
1.1 Background and problem motivation..............................................2
1.2 Overall aim / High-level problem statement.................................. 3
1.3 Problem statement.............................................................................. 3
2 Related work................................................................................................... 4
2.1 Background subtraction................................................................... 4
2.2 Stereo vision system........................................................................... 4
2.3 Camera calibration.............................................................................. 5
2.4 Object classification............................................................................ 5
2.5 Flying object detection....................................................................... 6
2.6 Flying object tracking......................................................................... 7
2.7 Classification methods....................................................................... 7
3 Methodology................................................................................................... 9
3.1 Background subtraction................................................................... 9
3.2 Stereo vision system......................................................................... 11
3.3 Camera calibration............................................................................ 14
3.4 Object classification.......................................................................... 15
3.4.1 Linear SVM algorithm.......................................................... 15
3.4.2 Feature selection.................................................................... 17
4 Implementation............................................................................................ 18
4.1 Program structure............................................................................. 18
4.2 Video acquisition.............................................................................. 19
4.3 Camera calibration............................................................................ 20
4.4 Frame conversion.............................................................................. 23
4.5 Feature extraction of different flying objects................................ 26
4.6 Flying object classification............................................................... 27
4.6.1 Flying object classification based on flying track..............27
4.6.2 Flying object classification based on selected features.....35
5 Results............................................................................................................ 39
5.1 OVO SVMs classification................................................................. 41
5.1.1 OVO SVMS classification for height & speed features.... 41
5.1.2 OVO SVMS classification for height & size features....... 50
5.1.3 OVO SVMS classification for speed & size features......... 58
5.2 OVR SVMs classification................................................................. 67
5.2.1 OVR SVMS (1) classification for height & speed features.... 67
5.2.2 OVR SVMS (2) classification for height & size features... 71
5.2.3 OVR SVMS (3) classification for speed & size features....74
5.3 Prediction for unknown data.......................................................... 77
6 Conclusions and future work..................................................................... 81
References......................................................................................................... 83
List of Figures
Figure 1.1 Wind farm base captured in Visby
Figure 1.2 Wind farm potential effect on birds
Figure 3.1.1 Block diagram of background subtraction
Figure 3.1.2 Block diagram of background subtraction
Figure 3.2.1 Structure of stereo vision system
Figure 3.2.2 Overlap of two cameras range
Figure 3.2.3 Depth Error measurement based on different baseline
Figure 3.2.4 Depth Error measurement based on different baseline
Figure 3.3.1 Stereo image principle
Figure 3.4.1 Maximum-margin hyperplane for a SVM trained
Figure 4.1 Block diagram of flying object classification
Figure 4.2 The shooting scene of flying objects
Figure 4.3.1 Testing model explanation
Figure 4.3.2 Error of distance calculation
Figure 4.4.1 Flow chart of frame conversion
Figure 4.4.2 Original image and thresholded image
Figure 4.5 Flow chart of frame conversion
Figure 4.6.1 Overview of intelligent visual monitor system
Figure 4.6.1.3 Flying track with different FPS values
Figure 4.6.1.4 Flying track with different FPS values comparison
Figure 4.6.2.1 Behavior model (BM)
Figure 4.6.2.3 4 nodes (distance between each node is 20m) built in the
wind farm in Gotland
Figure 5.1 Entire flying objects plotted in same plane
Figure 5.2 4 flying objects in same plane
Figure 5.3 Seabird and kite in same plane
Figure 5.1 4 classification groups (group number is 1-5, 1-4, 1-3 and 1-2
from left to right)
Figure 5.1.1.1 Eagle and seabirds classification in 2D version(Train set)
Figure 5.1.1.2 Eagle and kite classification in 2D version(Train set)
Figure 5.1.1.3 Eagle and airplane classification in 2D version(Train set)
Figure 5.1.1.4 Eagle and helicopter classification in 2D version(Train set)
Figure 5.1.1.5 Flow chart of entire groups (5*(5-1) /2=10 classifiers)
Figure 5.1.1.6 Helicopter and seabird classification in 2D version(Train
set)
Figure 5.1.1.7 Helicopter and kite classification in 2D version(Train set)
Figure 5.1.1.8 Helicopter and Airplane classification in 2D version(Train
set)
Figure 5.1.1.9 Airplane and Seabird classification in 2D version(Train
set)
Figure 5.1.1.10 Airplane and Kite classification in 2D version(Train set)
Figure 5.1.1.11 Kite and Seabird classification in 2D version(Train set)
Figure 5.1.2.1 Eagle and seabirds classification in 2D version(Train set)
Figure 5.1.2.2 Eagle and kite classification in 2D version(Train set)
Figure 5.1.2.3 Eagle and Airplane classification in 2D version(Train set)
Figure 5.1.2.4 Eagle and Helicopter classification in 2D version(Train set)
Figure 5.1.2.5 Helicopter and seabird classification in 2D version(Train
set)
Figure 5.1.2.6 Helicopter and kite classification in 2D version(Train set)
Figure 5.1.2.7 Helicopter and airplane classification in 2D version(Train
set)
Figure 5.1.2.8 Airplane and seabird classification in 2D version(Train
set)
Figure 5.1.2.9 Airplane and kite classification in 2D version(Train set)
Figure 5.1.2.10 Kite and seabird classification in 2D version(Train set)
Figure 5.1.3.1 Eagle and seabirds classification in 2D version(Train set)
Figure 5.1.3.2 Eagle and kite classification in 2D version(Train set)
Figure 5.1.3.3 Eagle and airplane classification in 2D version(Train set)
Figure 5.1.3.4 Eagle and helicopter classification in 2D version(Train set)
Figure 5.1.3.5 Eagle and seabird classification in 2D version(Train set)
Figure 5.1.3.6 Helicopter and kite classification in 2D version(Train set)
Figure 5.1.3.7 Helicopter and airplane classification in 2D version(Train
set)
Figure 5.1.3.8 Airplane and seabird classification in 2D version(Train
set)
Figure 5.1.3.9 Airplane and kite classification in 2D version(Train set)
Figure 5.1.3.10 Kite and seabird classification in 2D version(Train set)
Figure 5.2 Sketch of OVR SVMs
Figure 5.2.1.1 Eagle and rest classification in 2D version (Train set)
Figure 5.2.1.2 Seabird and rest classification in 2D version (Train set)
Figure 5.2.1.3 Kite and rest classification in 2D version (Train set)
Figure 5.2.1.4 Airplane and rest classification in 2D version (Train set)
Figure 5.2.1.5 Helicopter and rest classification in 2D version (Train set)
Figure 5.2.2.1 Eagle and rest classification in 2D version (Train set)
Figure 5.2.2.2 Seabird and rest classification in 2D version (Train set)
Figure 5.2.2.3 Kite and rest classification in 2D version (Train set)
Figure 5.2.2.4 Airplane and rest classification in 2D version (Train set)
Figure 5.2.2.5 Helicopter and rest classification in 2D version (Train set)
Figure 5.2.3.1 Eagle and rest classification in 2D version (Train set)
Figure 5.2.3.2 Seabird and rest classification in 2D version (Train set)
Figure 5.2.3.3 Kite and rest classification in 2D version (Train set)
Figure 5.2.3.4 Airplane and rest classification in 2D version (Train set)
Figure 5.2.3.5 Helicopter and rest classification in 2D version (Train set)
Figure 5.3.1 Procedures of the model prediction
Figure 5.3.2 Sketch of unknown data prediction
Figure 5.3.3 RGB images for result recognition
List of Tables
Table 3.4.2 Basic features selection and computed features
transformation
Table 4.3.1 Focal length results for camera calibration
Table 4.3.2 Error analysis results
Table 4.6.1.1 Flying track of eagle summary
Table 4.6.1.2 Flying track of rest summary
Table 4.6.1.3 Data of Eagle_2
Table 4.6.1.4 Data of Eagle_4,5,6
Table 4.6.1.5 Data of Airplane
Table 4.6.1.3 Error data collection in different FPS values
Table 4.6.2.1 SVM and Libsvm comparison
Table 4.6.2.2 Eagle (EA) & Airplane (Air ) data collection in 4 nodes
Table 5.1.1 Training data of OVO SVMs classification (1) for Height &
Speed
Table 5.1.2 Training data of OVO SVMs classification (2) for Height &
Size
Table 5.1.3 Training data of OVO SVMs classification (3) for Speed &
Size
Table 5.2.1 Training data of OVR SVMs classification (1) for height &
speed
Table 5.2.2 Training data of OVR SVMs classification (2) for height &
size
Table 5.2.3 Training data of OVR SVMs classification (3) for speed & size
Table 5.3.1 OVR SVMs (1) prediction results
Table 5.3.2 OVR SVMs (2) prediction results
Table 5.3.3 OVR SVMs (3) prediction results
Terminology
Acronyms
SVM          Support Vector Machine
LIBSVM       Library for Support Vector Machines
ICA          Independent Component Analysis
FOV          Field of View
RGB          Red, Green and Blue image
B&W          Black and White
FPS          Frames Per Second
BM           Behavior Model
OVO SVMs     One-Versus-One SVMs
OVR SVMs     One-Versus-Rest SVMs
1 Introduction
A wind farm is a group of wind turbines at the same location used to
produce clean, renewable wind energy with far less environmental
impact than a fossil-fuelled system.
Figure 1.1 Wind farm base captured in Visby
Wind turbines, like helicopter propeller blades or a huge fan, depend on
moving air to generate mechanical power or electricity. Everything
appears to be satisfactory; however, one potential effect on the
immediate surroundings, including eagles and other bird species, has
been ignored for many years.
According to the Associated Press, more than 573,000 birds are killed by
wind turbines each year in the United States, including 83,000 hunting
birds such as hawks, falcons and eagles.
On the other hand, eagles are large birds that reproduce more slowly
than other species, so their population is affected by even a small
number of deaths of this kind. An ideal solution would be wind turbines
that can detect birds and stop working when birds are flying across the
wind farm. Meanwhile, it is clearly impossible for humans to observe
and report bird activity at all times and thereby solve this problem.
Therefore, instead of manual monitoring, an automatic visual
monitoring system could be established to track and detect flying
objects, building on the rapid development of computer vision
technology, in order to save large numbers of flying species from death
caused by the huge blades of the wind turbines.
Figure 1.2 Wind farm potential effect on birds
The main objective of this project is the classification of flying objects
and the building of a behavior model, relying on a large quantity of
videos captured by our research group. Additionally, it should be
possible to improve and extend the image processing functions in order
to extract useful information for investigating the features of flying
objects in video sequences based on a stereo vision system.
1.1 Background and problem motivation
Wind power, as an alternative to fossil fuels, is renewable, clean and
readily available, and helps to reduce global warming. Sadly, wind farms
are faced with one major difficulty: large numbers of flying birds are
injured or killed every year by collisions with the wind towers and their
giant blades.
In addition, object recognition is an active and significant research
direction in the field of computer vision. In previous projects, the
research focused on the relationship between the parameters speed, size,
height and location of the eagle on the one hand and the FPS value on
the other. In addition, camera calibration and video acquisition were
carried out in cooperation with Ming Gu.
1.2 Outline of the thesis
The remaining parts of this thesis are organized as follows: Section 2
covers related work. The methodology of the approach is described in
Section 3. Section 4 presents the implementation. Section 5 describes the
classification results. Conclusions and future work concerning this
project are given in Section 6.
1.3 Problem statement
The aim of this project is to answer the following questions:
1. How can an appropriate camera calibration method be adopted as a
vital step before capturing the videos?
2. How can videos of flying objects be captured outdoors, given the
constraints on equipment and shooting locations, and how can the
resulting large amounts of data be analysed and processed?
3. Which features are suitable for object classification, given that the
choice of features directly determines the accuracy of the classification
results?
4. How can stereo vision theory be applied to estimate the size, speed
and flying height of birds, eagles and kites, given that the measurement
is not accurate for airplanes and helicopters?
5. How can background subtraction segment objects from a complex
background under various weather conditions?
6. How can more data be captured in order to build a comprehensive
behavior model for eagles and other flying objects?
2 Related work

2.1 Background subtraction
Background subtraction has been widely used to detect moving objects
in recent years. Most previous work has focused on static or stationary
cameras. Here, a review of a few related works is provided; readers are
referred to [1] for a comprehensive survey of background subtraction
techniques, both parametric and non-parametric. In paper [2], the main
research work is focused on a stationary camera and applies simple
techniques such as frame differencing and adaptive median filtering. A
new computationally efficient approach to background subtraction for
real-time tracking is presented in [3]; it respects the resource constraints
of embedded camera networks while providing accurate and robust
results, and compressive sensing is used to reduce the dimensionality of
the data. A fast background subtraction scheme using independent
component analysis (ICA), which contributes to both home-care and
health-care monitoring, is presented in [4]; this scheme covers both
training and detection. Background subtraction is used extensively in
traffic monitoring, object recognition and tracking in the field of
computer vision.
2.2 Stereo vision system
A great deal of research has been presented in support of stereo vision
systems. In paper [5], the accuracy of different camera calibrations in a
3D stereo vision system with CCD cameras is described. This new model
reduces some of the difficulties of stereo vision, and significant
improvements were also made with regard to operating speed. A
correlation-based stereo vision method for the real-time interpretation of
traffic systems is presented in [6]; its main advantages are the accuracy
of the distance estimates and the conversion of stereo disparities with
sub-pixel accuracy.
During standard image processing, the moving object is recognized by
using stereo vision to obtain its position with respect to the two cameras
[7]. For capturing videos based on a stereo vision system, paper [8]
describes a system with the ability to capture images at a high video
rate. Additionally, this application requires 3D tracking of moving
objects; it improves the local discriminability at each pixel and is thus
able to achieve a satisfactory match.
2.3 Camera calibration
Primarily, camera calibration can be used to examine images or videos
and thus deduce the camera parameters. Camera calibration is a
complicated and tedious process, but modern software applications
have made it possible to achieve it easily. In paper [9], measurements are
used to directly evaluate the performance of the calibration and to make
comparisons, with both real image and synthetic data, among different
systems. Compared with traditional techniques, which use expensive
equipment such as two or three orthogonal planes, a new flexible
technique for camera calibration is presented in [10]; this method is easy
to move from a laboratory environment to real-world usage and obtains
very good results in computer simulation. In paper [11], a linear
approach for solving the parameters of the inverse model is presented,
which builds on the classical calibration procedure and includes two
additional steps to compensate for distortion and to correct the distorted
image coordinates; it has proved to be quite flexible and efficient.
2.4 Object classification
Several research works have been presented in support of object
classification. In paper [12], an object classification system for digital
video surveillance is designed to distinguish humans from vehicles.
Classification of flying objects and planes, in order to obtain useful
features in terms of height, area and direction, is proposed in [13]. In
addition, most object classification information from various companies
and industries is also summarized there.
In paper [14], the authors explore the problem of classifying images into
object categories, using random forests as a multi-way classifier trained
and tested on different image data.
Another study of image classification for cancer diagnosis and Gleason
grading is presented in [15], in which they compare the Gaussian,
support vector machine and nearest neighbor respectively with a
sequential feature selection algorithm in order to improve the accuracy
of the diagnostic performance. In paper [16], three procedures including
object detection, classification and recognition are developed in an
intelligent sensor network. The biggest advantage of this system is to
reduce the processing time by means of a hierarchical image extraction
method. In addition, object features such as size, direction and motion
are converted into a number of classes.
In this thesis, an online feature selection method has been chosen which
collects suitable features during the classification task. The investigation
of feature selection matters because the chosen features directly affect
the accuracy of the performance, and a combination of suitable features
and a particular classifier can then be exploited most efficiently.
2.5 Flying object detection
As mentioned in Chapter 1, an intelligent monitoring system should be
established to track and detect eagles or other birds and provide
sufficient warning in time for appropriate action; this can be regarded as
one means of preventing deaths caused by the huge blades of the wind
turbines.
Another approach is to temporarily stop the wind turbine blades when
eagles or other birds are flying across a wind farm. Several methods for
detecting flying objects have been proposed in recent years. In paper
[17], a special distributed camera network has been designed to develop
algorithms for the detection, tracking and recognition of objects. A
technique based on image feature matching for the operation of
unmanned aerial vehicles is presented in [18]; it has also been applied to
helicopter motion compensation and object detection within the
framework of the COMETS multi-UAV system.
In paper [19], NASA, together with members of the aircraft industry, has
developed techniques involving the use of video cameras and image
processing equipment to detect airplanes in the sky. Furthermore, this
method provides a high detection probability and alleviates the
background clutter of the videos. A probabilistic approach for moving
object detection from a mobile robot using a single camera in an outdoor
environment is presented in [20]. Two motions, those of the moving
objects and of the sensor, are compensated for and used to detect the
objects by means of corresponding feature sets and the EM algorithm.
2.6 Flying object tracking
With the development of computer vision technology, object tracking is
widely applied in surveillance systems. It is mainly used to estimate the
position and other relevant information of moving objects in image
sequences in order to reduce the need for human participation, as
presented in [20]. Flying object tracking, in general, is still a challenging
problem due to the variety of object appearances. Different tracking
methods, namely point tracking [21], kernel tracking [22] and silhouette
tracking [23], are categorized on the basis of the tracked object and their
pros and cons are examined in [24]. In paper [25], the KLT algorithm
shows the best performance, with higher accuracy and less computation,
in a comparison among KLT, MeanShift and CAMShift. A real-time
system for detecting and tracking humans from mobile vehicles is
presented in [26]; it uses depth information obtained from a stereo
vision system to limit the search space of the detector. Based on the
theories above, it is possible to build a simpler and faster system to track
flying objects in this thesis.
2.7 Classification methods
The Support Vector Machine (SVM) was first proposed by Vapnik [27].
Studies reveal that SVMs are generally capable of delivering higher
classification accuracy and robustness than other data classification
algorithms.
By nature, SVMs are essentially binary, two-class classifiers. However,
handling multi-class classification [28] tasks is still a difficult research
topic in the field of machine vision. The two approaches commonly used
are the One-Versus-One (OVO) and One-Versus-Rest (OVR) techniques.
According to paper [29], the OVO strategy is substantially faster to train
and seems preferable for problems with a very large number of classes,
whereas [30] finds the OVR strategy significantly more accurate for digit
recognition. Moreover, LIBSVM, a library for Support Vector Machines
[31], helps users to easily apply SVM to their applications in terms of
classification, regression and distribution estimation.
3 Methodology

3.1 Background subtraction
Moving object detection has in recent years become an important area of
research in the field of video tracking for real-time surveillance, traffic
monitoring and classification systems. A large number of different
methods have been proposed for detecting moving objects, such as
template matching [32], the temporal difference method [33], optical
flow algorithms [34] and background subtraction [35]. One of the most
commonly used techniques is background subtraction, here performed
in the MATLAB environment.
In most situations, a video captured outdoors has a complicated
background. Hence, segmenting the foreground objects from the
background is a critical step in the processing of a video.
In the following equations, B0(x,y,0) denotes the background image at
time t = 0, where x and y are the pixel coordinates. Firstly, the frame
difference at time t is

D(t) = | B(x,y,t) - B0(x,y,0) |        (1)

The background model is assumed to be a static image captured without
any object present; in this case, all background pixels are static and all
foreground pixels belong to moving objects. Secondly, a threshold value
Th is applied to this difference image to improve the subtraction and
obtain complete information about the target:

| B(x,y,t) - B0(x,y,0) | > Th        (2)
The accuracy of this method depends on the speed of movement in the
video sequence; faster movement generally requires a higher threshold
value. In this thesis work, most of the background is sky under different
weather conditions, such as cloudy, foggy, overcast and sandstorm, and
the performance of background subtraction depends on the prevailing
weather conditions. The processing flow of background subtraction on
the MATLAB platform is illustrated as follows:
Figure 3.1.1 Block diagram of background subtraction
After the binary image is obtained by using background subtraction, it
becomes possible to perform the further steps, including feature
extraction, object classification and identification.
Figure 3.1.2 Block diagram of background subtraction
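As a concrete illustration of the processing flow in figures 3.1.1 and 3.1.2, the following MATLAB sketch performs frame differencing against a fixed background frame and thresholds the result. It is only a minimal sketch under stated assumptions: the file name, the grey-level threshold of 30 and the 20-pixel noise limit are placeholder values, not settings used in this project.

% Minimal background-subtraction sketch: frame differencing + thresholding.
% 'sky_clip.avi', Th = 30 and the 20-pixel blob limit are placeholder values.
v  = VideoReader('sky_clip.avi');
bg = rgb2gray(read(v, 1));            % first frame used as background B0(x,y,0)
Th = 30;                              % grey-level threshold for equation (2)

for k = 2:v.NumberOfFrames
    frame = rgb2gray(read(v, k));     % current frame B(x,y,t)
    D     = imabsdiff(frame, bg);     % |B(x,y,t) - B0(x,y,0)|, equation (1)
    fg    = D > Th;                   % binary foreground mask, equation (2)
    fg    = bwareaopen(fg, 20);       % suppress blobs smaller than 20 pixels
end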
3.2 Stereo vision system
Stereo vision is an important and efficient approach in the field of
computer vision, in which the same scene is captured with two different
cameras in order to extract useful 3D information, such as height,
distance and area, from the two images. For human beings, each eye
captures its own view and the two separate images are transferred to
the brain for processing; the brain combines them into one picture when
the two images arrive at the same time. In this way, humans can judge
where objects are in relation to their bodies with much greater accuracy,
and the same holds for objects moving towards or away from us in the
depth dimension [36]. Figure 3.2.1 illustrates the stereo vision system
used in this thesis work: the left and right cameras capture two images
of the same target simultaneously.
Figure 3.2.1 Structure of stereo vision system
The reason for setting up two cameras is that, with only a single camera,
the pixel coordinates of an image point do not correspond to a unique
position in the real world; in particular, depth information is lost during
the projection. Therefore, in order to recover and reconstruct the depth
information in the three-dimensional world, the deviation of the pixels
between the two images is used with the triangulation principle, as
shown below [37]:
D f 
B
B
 f
x 2  x1
x
11
(3)
1
1
D  k    → x  k   
 x 
D
The distance D of the object from the cameras is inversely proportional
to the pixel disparity. One interpretation is that the smaller the distance
Δx between the object and the cameras means there will be a large Δx
disparity.
In formula (3), f is the focal length of both cameras, B is the baseline
between the two cameras, and x1 and x2 are the horizontal locations of
the object in the two camera images, so that Δx is the horizontal
disparity of the object between the two cameras. As a result, the distance
between the ground and the object can be calculated using the above
equation. In addition, the value of the baseline B between the two
cameras must be chosen carefully, because a different baseline leads to a
significantly different depth error and a different field of view (FOV).
Figure 3.2.2 Overlap of two cameras range
In figure 3.2.2, the left image shows two cameras with a short baseline,
whose advantage is a large overlapping FOV but which causes a large
depth error. The right image shows a long baseline between the cameras,
which reduces the depth error but causes an occlusion problem. Hence,
determining a suitable value for the baseline is a key step in this part.
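To make this trade-off concrete, the short MATLAB sketch below evaluates equation (3) for a fixed object distance and several candidate baselines, and shows how a one-pixel disparity error translates into a depth error. The focal length is an illustrative assumption and the 6 m distance merely mirrors the test setting of figure 3.2.3; neither is a calibrated value from this project.

% Depth error versus baseline for a fixed object distance (equation (3)).
% f_px is an assumed focal length in pixels; D_true mirrors the 6 m test distance.
f_px   = 3000;                        % focal length in pixels (assumed)
D_true = 6;                           % real camera-to-object distance in metres
B      = 1:6;                         % candidate baselines in metres

d      = f_px .* B ./ D_true;         % ideal disparity (pixels) for each baseline
d_err  = d + 1;                       % disparity with a one-pixel measurement error
D_est  = f_px .* B ./ d_err;          % distance recovered from the noisy disparity
err    = abs(D_est - D_true);         % depth error shrinks as the baseline grows
disp([B(:) d(:) err(:)]);             % baseline, disparity, depth error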
Figure 3.2.3 Depth Error measurement based on different baseline
In figure 3.2.3, the real distance between the cameras and the object is
fixed at 6 m in order to test the depth error for baselines between the
two cameras varying from 1 m to 6 m. In order to verify the accuracy
and reliability of the measurement result, the range of the distance D
between the object and the cameras is then enlarged.
Figure 3.2.4 Depth Error measurement based on different baseline
From figure 3.2.4, a suitable baseline between the two cameras is 5
metres in this case, which ensures a measurement result with a small
depth error. Additionally, the view angle of the camera used in this
project is 72 degrees at an 18 mm focal length.
3.3 Camera calibration
One popular method is Zhengyou Zhang's camera calibration [10],
which can be used without any specialized knowledge of computer
vision and gives very good experimental results.
Compared to other classical methods, Zhang's technique is easy to use
and flexible and does not require expensive equipment such as two or
three orthogonal planes. However, Zhang's calibration requires the
model planes or calibration boards to occupy at least two-thirds of the
camera image, which proves very impractical when the two cameras are
set up far apart. In this thesis, a simple and flexible calibration approach
is therefore presented; it makes it easy to calibrate a camera and can be
applied both indoors and outdoors.
Figure 3.3 Stereo image principle
In figure 3.3, two separate images or videos are captured simultaneously
by the left and right cameras in a typical stereo vision model. More
specifically, stereo vision is obtained when the two cameras are
displaced from each other, horizontally and vertically, while viewing the
same scene. During stereo processing, depth information can then be
calculated by examining the positions of targets in the two perspectives.
According to equation (3), a change in the distance between the object
and the cameras causes a corresponding change in the disparity of the
apparent position between the two perspectives.
Hence, in order to compute the parameter D in equation (3), the focal
length value of both cameras should be converted from a millimeter unit
into a pixel scale, in advance.
Equation (4) is the supplement to equation (3) which is used for
calculating the focal length value.
f = D · (x2 - x1) / Bi        (4)
In formula (4), D represents the distance between the cameras and the
object, which can be measured in advance, x2 and x1 are the horizontal
coordinates of the target in the left and right images of the same scene,
and Bi is the baseline between the two cameras, where the index i refers
to the frame number in the video sequence.
After that, the focal length value within the pixel scale will be obtained
in order to perform further operations. In this project, various
simulations and tests will be performed in both indoor and outdoor
environments in order to obtain a suitable focal length value.
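A minimal MATLAB sketch of this conversion is shown below. For each measured distance, equation (4) yields the focal length in pixels, and the averaged value is then reused in equation (3) to recover distances. The coordinate values are taken from single measurement positions of the 10 m, 15 m and 20 m rows in table 4.3.1 purely for illustration; they are not the full data set used in the project.

% Focal length in pixels via equation (4), f = D * (x2 - x1) / B, then reuse of the
% averaged value in equation (3). Values below are illustrative samples only.
B  = 5;                               % baseline between the two cameras (m)
D  = [10 15 20];                      % measured camera-to-target distances (m)
x2 = [2657 2433 2326];                % target x-coordinate in the left camera (px)
x1 = [1075 1375 1540];                % target x-coordinate in the right camera (px)

f_px  = D .* (x2 - x1) ./ B;          % per-measurement focal length in pixels
f_avg = mean(f_px);                   % average value applied in further operations

D_est = f_avg .* B ./ (x2 - x1);      % equation (3): recovered distances
err_p = (D_est - D) ./ D * 100;       % relative distance error in percent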
3.4 Object classification
The Support Vector Machine (SVM) is a popular object classification
technique and a powerful algorithm for classifying data into groups,
and it is considered easier to use than a neural network. A large
quantity of data, including videos and images, has to be processed, and
data from different types of flying objects is very difficult to analyse and
summarize manually. Thus, SVM is applied in this case in order to
automate this complicated process and to assign the different flying
objects to their own classes.
In general, a classification task is divided into separate training and
testing sets. Each instance in the training set contains a class label and
several feature values of the object. The aim of this project is to build a
behavior model based on the training data which can predict the class
labels of the testing data, given only their feature values, by using SVM
in a MATLAB environment.
3.4.1 Linear SVM algorithm
The following equations make use of the training set of instance-label
pairs D = {(xi, yi) | xi ∈ R^p, yi ∈ {-1, 1}}, i = 1, ..., l, where xi is a
p-dimensional real feature vector and yi is a label of either 1 or -1
indicating the class to which xi belongs [38]. Firstly, the separating
hyperplane can be written as:

w · xi - b = 0        (5)

In equation (5), xi is a point on the hyperplane and w is the normal
vector, i.e. the vector perpendicular to the hyperplane. The offset b shifts
the hyperplane away from the origin and, together with w, determines
the classification margin between the two classes. Therefore, the two
margin hyperplanes can be described as follows:

w · xi - b = 1, for yi = 1
or
w · xi - b = -1, for yi = -1        (6)
Figure 3.4.1 Maximum-margin hyperplane for a trained SVM
According to geometry, the distance between these two hyperplanes is
2/||w||, which follows from:

w · x1 - b = 1;   w · x2 - b = -1        (7)

w · (x1 - x2) = 2        (8)

(w/||w||) · (x1 - x2) = 2/||w||        (9)

In order to maximize the margin, it is necessary to minimize ||w||
while preventing the training data points from falling into the margin:

w · xi1 - b ≥ 1, for yi = 1        (10)

w · xi2 - b ≤ -1, for yi = -1        (11)

xi1 and xi2 belong to the first and second class, respectively. Both of the
above constraints can be combined as below:
yi (w · xi - b) ≥ 1        (12)
Hence, finding the most suitable hyperplane can be formulated as a
quadratic programming problem.
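As a sketch of how the linear SVM described above can be trained and evaluated in MATLAB, the snippet below uses the LIBSVM interface (svmtrain/svmpredict) that is introduced in chapter 4; it assumes the LIBSVM MATLAB package is installed and takes precedence over the built-in function of the same name. The height/speed feature values and labels are invented placeholders, not data from the captured videos.

% Two-class linear SVM with the LIBSVM MATLAB interface (assumed to be on the path).
% Feature columns: [height, speed]; labels: +1 = eagle, -1 = other. Placeholder data.
train_x = [120 10; 130 12; 115  9; 500 60; 550 65; 480 58];
train_y = [  1;   1;   1;  -1;  -1;  -1];

model = svmtrain(train_y, train_x, '-s 0 -t 0 -c 1');   % C-SVC with a linear kernel

test_x = [125 11; 520 62];
test_y = [  1;  -1];
[pred, acc, ~] = svmpredict(test_y, test_x, model);     % predicted labels, accuracy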
3.4.2 Feature selection
Suitable feature selection is an essential step for classifying different
flying objects accurately, and it is implemented in the SVM by using
MATLAB. Moreover, the feature extraction process should be extremely
robust across many hours of captured video so as to capture the motion
of different flying objects. Therefore, several features should be
considered in this project, as illustrated in the following table:
Basic Feature         Explanation
centroid(x1,y1)       centroid point of object in left camera
centroid(x2,y2)       centroid point of object in right camera
perimeter             perimeter of object
area                  size of object
bbox_left             low x-coordinate of bounding box
bbox_bottom           low y-coordinate of bounding box
bbox_right            high x-coordinate of bounding box
bbox_top              high y-coordinate of bounding box

Computed Feature      Explanation
H_object              Flying height of object: D = f · B / (x2 - x1) = f · B / Δx
Sp_object             Flying speed of object: sqrt((centroid_y2 - centroid_y1)^2 + (centroid_x2 - centroid_x1)^2) / t
Dir_object            Flying angle change of object: atan((centroid_y2 - centroid_y1) / (centroid_x2 - centroid_x1)) · 180/π
Aspect-ratio_object   Length-width ratio of bounding box

Table 3.4.2 Basic features selection and computed features transformation
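A hedged MATLAB sketch of how the computed features in table 3.4.2 can be derived from the basic features is given below. All numerical values are placeholders; the focal length corresponds to the averaged calibration value of chapter 4, the frame interval assumes the 10 FPS setting discussed in section 4.6.1.3, and atan2 replaces atan only to avoid division by zero.

% Computed features of table 3.4.2 derived from basic features (placeholder values).
f_px   = 2964;  B = 5;                   % focal length (px) and baseline (m)
xL     = 976;   xR = 812;                % object x-coordinate in left/right camera (px)
c_prev = [431 300]; c_curr = [436 312];  % centroids in two consecutive frames (px)
bbox   = [800 420 30 14];                % bounding box [x y width height] (px)
dt     = 1/10;                           % frame interval at 10 FPS (s)

H_obj   = f_px * B / (xL - xR);                                   % flying height, eq. (3)
Sp_obj  = hypot(c_curr(1)-c_prev(1), c_curr(2)-c_prev(2)) / dt;   % speed (px per second)
Dir_obj = atan2(c_curr(2)-c_prev(2), c_curr(1)-c_prev(1))*180/pi; % flying angle (degrees)
AR_obj  = bbox(3) / bbox(4);                                      % length-width ratio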
4 Implementation
This chapter presents the structure of the program used to implement
the stereo vision method for object classification, together with the
camera calibration used to improve the accuracy of the distance
measurement. The implementation has been conducted in MATLAB
R2013a on a Microsoft Windows 7 operating system. Both cameras are
Nikon D90 SLRs with an HD movie mode, using an AF-S DX NIKKOR
lens with an 18-105 mm focal length. In this case, the focal length is set to
18 mm in order to obtain a broader field of view when capturing the
videos. This section introduces the program in detail. In order to extract
appropriate features for the classification of different flying objects and
to obtain a precise comparison of the results, both indoor and outdoor
tests and simulations should be performed to confirm the accuracy of
the methodology.
4.1 Program structure
Figure 4.1 Block diagram of flying object classification
Figure 4.1 describes the flowchart for the classification of flying objects
in this project. In general, video acquisition, image processing, feature
extraction and training and testing data are critical components of a
machine vision system and usually determine the quality of the final
result in terms of accuracy and robustness. Video pre-processing, frame
conversion and the behavior model are additional stages introduced in
order to adapt this system to stereo vision. Moreover, the final prediction
results are obtained by using this improved behavior model.
4.2 Video acquisition
In this project, the very first and essential step is to capture videos of
different flying objects, which involves difficulties and challenges, for
instance limitations of equipment and shooting locations.
In spite of this situation, an attempt has been made to capture videos in
different places, such as the seaside in Sundsvall, a basketball ground in
Sundsvall, Sundsvall-Harnosand airport, Arlanda airport in Stockholm
and Visby on the island of Gotland. Figure 4.2 shows some of the
locations:
Figure 4.2 The shooting scene of flying objects
4.3 Camera calibration
As previously mentioned in chapter 3.3, the focal length of both cameras
should first be converted from millimetre units into a pixel scale. In
order to have a large testing area, an open area near the seaside in
Sundsvall was chosen. The distance between the two SLR cameras is
5 m, in accordance with the stereo vision setup described in chapter 3.2.
The testing target is a one-metre reference object held by a person, and
the goal is to obtain the coordinates of the testing target in both cameras.
The distance between the cameras and the test person varies from 5 m to
55 m, and at each distance images are captured three times in different
positions, as shown in figure 4.3.1.
Figure 4.3.1 Testing model explanation
Hence, the coordinates of the target in three different positions are
collected, and the focal length value can be calculated using equation (4).
All of the computed focal length values are listed in table 4.3.1, and an
average value will be applied in further operations.
Real distance (m)   C1: Target coordinates (left camera)   C2: Target coordinates (right camera)   Average value (pixel)
 5                  3384   4043   2078                     180    812    \                          2145
10                  2657   3013   2357                     1075   1415   764                        3182
15                  2433   2672   2190                     1375   1620   1141                       3159
20                  2326   2512   2143                     1540   1725   1350                       3155
25                  2248   2396   2074                     1637   1760   1445                       3127
30                  2227   2352   2086                     1733   1840   1571                       3042
35                  2204   2313   2069                     1780   1880   1635                       3012
40                  2172   2268   2055                     1815   1885   1681                       2971
45                  2164   2247   2065                     1855   1920   1741                       2880
50                  2140   2217   2047                     1865   1940   1752                       2823
55                  2147   2217   2057                     1895   1955   1794                       2849
Average                                                                                              2964
*\ means the test tag is out of the test area
Table 4.3.1 Focal length results for camera calibration
In order to verify the accuracy of this camera calibration method against
the real distance values, the average focal length value is inserted into
equation (3), and the resulting errors in the distance between the
cameras and the object are shown below.
Real distance (m)   Computed distance (m)   Error (m)   Error (%)
 5                  4.63                    -0.37       -7.49%
10                  9.37                    -0.63       -6.32%
15                  14.01                   -0.99       -6.62%
20                  18.85                   -1.15       -5.73%
25                  24.26                   -0.74       -2.98%
30                  30.00                    0            0.00%
35                  34.95                   -0.05       -0.13%
40                  41.51                    1.51        3.78%
45                  47.96                    2.96        6.58%
50                  53.89                    3.89        7.78%
55                  58.81                    3.81        6.93%
Table 4.3.2 Error analysis results
Figure 4.3.2 Error of distance calculation
From figure 4.3.2 and table 4.3.2 above, it can clearly be observed that
the algorithm achieves a low error rate (below 8%) and good stability in
this experiment. This is the horizontal implementation of the distance
measurement; the vertical height of a flying object in the sky can be
calculated by a similar method for the classification.
Generally speaking, the errors are acceptable for further operations.
4.4 Frame conversion
In this case, the videos captured in different places are required to use
the same parameters, such as a continuous shooting time of 20 minutes
and a video resolution of 640×424. Normally, it is easier to extract
features of flying objects from higher-resolution images than from
lower-resolution images; however, in the MATLAB simulation
environment the feature extraction results are almost the same for both
lower and higher resolutions.
Considering that each video is 20 minutes long and that some parts are
not usable, it is necessary to split the captured videos into clips and
segments in order to select and extract the most satisfactory parts for the
further image processing steps; this is done with the "LIWO" Video
Converter software.
Figure 4.4.1 Flow chart of frame conversion
In this case, the reason why colour images are converted into binary
images is that a binary image represents the objects in the image that are
directly relevant to object recognition and feature extraction, and it is
easier to perform image processing and feature extraction on it in
further operations. However, converting an image from colour to binary
(black and white) causes a significant loss of colour information if the
segmentation algorithm is inefficient.
For feature extraction, a binary image is a good choice for extracting the
parameters of the flying objects. For object recognition, on the other
hand, it is difficult to distinguish and recognize a flying object in a
binary image against a black background.
Firstly, the colour images of the flying objects are obtained; these are
then converted to grey-scale images and finally to binary images. There
are many ways to obtain a black and white image from a grey-scale
image during image processing, and these methods have been
investigated in earlier works.
In this case, a threshold level method was chosen to produce a binary
image on the MATLAB platform. More specifically, all points of the
image above the threshold value are converted to white and all points
below the threshold value become black. Figure 4.4.2 shows the original
image and the thresholded image:
Figure 4.4.2 Original image and thresholded image
Normally, it is not an easy task to choose a threshold level that extracts
the target object without noise or other background objects; however,
here the background is a largely uniform sky and the flying object is
rather dark, which makes it possible to apply a threshold level within
the range 0.2-0.8.
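The frame conversion described above can be sketched in MATLAB as follows. The file name is a placeholder and the 0.5 threshold is simply an example value inside the 0.2-0.8 range mentioned above; the inversion step assumes, as in this project, a dark object against a bright sky.

% RGB frame -> grey-scale -> binary image, then the basic features of table 3.4.2.
% 'frame_0001.png' and the 0.5 threshold are placeholder choices.
rgb  = imread('frame_0001.png');
grey = rgb2gray(rgb);
bw   = im2bw(grey, 0.5);              % points above the threshold become white
bw   = ~bw;                           % invert: the dark flying object becomes white
bw   = bwareaopen(bw, 10);            % drop tiny noise regions

stats = regionprops(bw, 'Centroid', 'Area', 'Perimeter', 'BoundingBox');
% stats(k) now holds centroid, area, perimeter and bounding box for each object.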
4.5 Feature extraction of different flying objects
In order to succeed in building a behavior model that classifies flying
objects, suitable descriptors for certain features should be selected.
Ideally, a single descriptor would be enough to classify the objects into
different classes; however, it is very difficult to find such a descriptor. If
a set of descriptors can be applied which separates all the flying objects,
better results can be obtained. As mentioned previously in chapter 3.4,
suitable features in terms of height, speed, direction (angle) and aspect
ratio should be considered as a set of descriptors in this project.
Figure 4.5 illustrates the data used for training and object classification.
Figure 4.5 Feature extraction summary
Normally, a combination of features is extracted from each flying object
in order to recognize it, as in common object recognition. After that, the
features in figure 4.5 are sent to a classifier for training and testing.
Hence, the selected features should represent the essential differences
between the different flying objects.
4.6 Flying object classification
In general, object classification is the recognition of an interesting target
in an image as belonging to a class of similar objects. Here, the classified
objects are flying objects, such as eagles, seabirds and helicopters.
Different flying objects can vary widely in appearance, and this
variability leads to large problems and difficulties. Moreover, it is
difficult to capture the essential and comprehensive features of one
category and separate them from another, and because of their
complicated flying habits it is quite difficult to give a precise definition
covering all possible variations of flying eagle images.
The human visual system learns such characteristics and features from
experience; in the future, an intelligent computer system could
distinguish and learn the variability of flying objects automatically. Two
approaches to object classification are presented in this part: one method
is based on the flying track of the different objects, and the other is
based on feature selection.
4.6.1 Flying object classification based on flying track
Figure 4.6.1 Overview of intelligent visual monitor system
For flying object classification, the flying track can be considered an
efficient and visual method for recognizing and classifying different
flying objects by means of a plotted graph. Hence, the flying track of
flying objects is an important subject worth investigating as an aid to
classification.
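As a minimal illustration of how such a track can be plotted once per-frame positions and heights are available, the MATLAB sketch below draws a 2D and a 3D track. The circular trajectory is synthetic placeholder data standing in for extracted centroid and height values, chosen only because circling is typical of the eagle tracks summarized below.

% Plot a flying track in 2D and 3D from per-frame positions (synthetic placeholder data).
t = 0:0.1:10;                          % time stamps corresponding to 10 FPS
x = 50*cos(0.6*t);  y = 50*sin(0.6*t); % circling pattern, typical of eagle tracks
h = 120 + 2*t;                         % slowly increasing flying height (m)

subplot(1,2,1); plot(x, y);
xlabel('x (m)'); ylabel('y (m)'); title('2D flying track');

subplot(1,2,2); plot3(x, y, h); grid on;
xlabel('x (m)'); ylabel('y (m)'); zlabel('height (m)'); title('3D flying track');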
4.6.1.1 Eagle flying track
Eagle category No.   Flying track (2D and 3D plots)   Distinguishing feature
Eagle_1              [see track plots]                One circle; smooth line; cross flying
Eagle_2              [see track plots]                Two circles; smooth line; continuous cross flying
Eagle_3              [see track plots]                One circle; smooth line; cross flying
Eagle_4              [see track plots]                Two circles; smooth line; continuous cross flying
Eagle_5              [see track plots]                Two circles; smooth line; soaring; continuous cross flying
Eagle_6              [see track plots]                Two circles; smooth line; continuous cross flying
Table 4.6.1.1 Flying track of eagle summary
4.6.1.2 Rest flying object track
Rest category No.   Flying track (plot)   Distinguishing feature
Seabird_1           [see track plot]      Irregular track; sometimes soaring
Seabird_2           [see track plot]      Irregular track; sometimes soaring
Kite                [see track plot]      Concentrated in a certain area; many crossings
Airplane            [see track plot]      Straight line
Helicopter          [see track plot]      Ups and downs; sometimes straight line
Table 4.6.1.2 Flying track of rest summary
The flying-track graphs of the eagles are illustrated as follows:
Figure 4.6.1.2.1 Flying track of eagle_1
Figure 4.6.1.2.2 Flying track of eagle_2
Figure 4.6.1.2.3 Flying track of eagle_3
Figure 4.6.1.2.2 Flying track of eagle_4,5,6
4.6.1.3 FPS analysis in flying track
After analysing the videos captured outdoors in Visby and Sundsvall, it
became obvious that a few challenges had to be considered in this
project. Due to the limited power supply outdoors, the FPS (frames per
second) value should be lowered in order to increase the shooting time.
Meanwhile, the quality of the frame sequences should remain both
smooth and clear. Hence, investigating the relationship between
different FPS values and the resulting errors helps to determine a
suitable FPS value. Figure 4.6.1.3.1 shows the flying tracks obtained at
various FPS values:
Figure 4.6.1.3.1 Flying track with different FPS values
Figure 4.6.1.3.2 Flying track with different FPS values comparison
In the images above, different colours represent flying tracks obtained at
different FPS values. It can be observed that the flying tracks at FPS
values from 25 down to 10 are sufficient to recognize the object, whereas
the tracks at FPS values from 5 down to 1 are quite irregular and make it
difficult to distinguish objects. Table 4.6.1.3 summarizes the errors for
the various FPS values:
FPS No. | Ave_area | Ave_perimeter | Ave_aspectratio | All_error | Ave_error
30 | 114.66 | 64.36 | 2.3541 | 0 | 0
25 | 117.23 | 65.63 | 2.3953 | 26.9602 | 0.5617
15 | 118.80 | 66.08 | 2.4400 | 64.1484 | 1.3364
10 | 119.82 | 65.94 | 2.4212 | 138.1920 | 2.8790
5 | 124.89 | 67.11 | 2.4567 | 294.5712 | 6.1369
2 | 138.75 | 71.00 | 2.3940 | 1347.8352 | 28.0799
1 | 196.00 | 88.50 | 2.1397 | 2467.1088 | 51.3981
Table 4.6.1.3 Error data collection in different FPS values
Combining the flying tracks at different FPS values with the error calculations, it was concluded that the camera's FPS value should be set to 10: this is the lowest value that still preserves the flight-path accuracy and keeps the error acceptable compared to the default setting, while extending the recording time.
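A minimal sketch of such an FPS comparison is given below. It assumes a hypothetical N-by-3 matrix 'features' of per-frame measurements [area, perimeter, aspect ratio] taken from the 30 FPS video; the error measure is purely illustrative and is not necessarily the exact one used for table 4.6.1.3.

% Simulate lower FPS by keeping only frames spaced at 30/FPS, then compare
% the averaged features against the 30 FPS reference.
fpsList = [30 25 15 10 5 2 1];
ref     = mean(features, 1);                          % 30 FPS reference averages
for k = 1:numel(fpsList)
    idx = unique(round(1 : 30/fpsList(k) : size(features, 1)));
    avg = mean(features(idx, :), 1);                  % averages of the kept frames
    err = sum((avg - ref).^2);                        % illustrative deviation measure
    fprintf('FPS %2d: area %.2f, perimeter %.2f, aspect %.4f, error %.4f\n', ...
            fpsList(k), avg(1), avg(2), avg(3), err);
end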
4.6.2 Flying object classification based on selected features
In general, classification is concerned with describing an object as belonging to a natural class of similar objects, such as an eagle or a seabird. In this thesis work, the flying objects of interest, namely eagles, seabirds, kites, helicopters and airplanes, are the main targets to be classified and discriminated from each other.
The visual monitoring system is constantly faced with flying objects that differ from all those seen in the past, and it must classify each new image correctly on the basis of the behavior model. Hence, building a behavior model is a significant step for flying object classification. Figure 4.6.2.1 illustrates the workflow of the behavior model used in this project:
Figure 4.6.2.1 Behavior model (BM)
As the figure shows, the eagle data extracted during frame conversion and feature extraction are fed into the classifier model (behavior model), which performs the training, testing and further prediction. In general, an SVM toolbox can be used for object classification in the MATLAB environment, which is easier to analyze than a neural network.
However, the improved LIBSVM package is also widely applied in this area. Compared to the basic SVM toolbox, LIBSVM has its own advantages, which are listed in table 4.6.2.1 below (a minimal usage sketch follows the table):
Performance | SVM | LIBSVM
Function model selection | C-SVC | C-SVC; v-SVC; v-SVR; one-class-SVC; e-SVR
Classification/Regression | Only supports classification | Supports classification and regression
Multi-class classification | Only two-class classification | Supports multi-class classification
Kernel function gamma | Does not support adjustment | Supports parameter adjustment
Quadratic programming problem solution | DONLP2; SMO; LOQO | SMO
Table 4.6.2.1 SVM and Libsvm comparison
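The sketch below illustrates basic Libsvm usage in MATLAB. It assumes that the LIBSVM MATLAB interface (svmtrain/svmpredict) has been compiled and added to the path; the variable names, feature choice and parameter values are illustrative assumptions only, not the exact settings used in this project.

% X_train/X_test: feature matrices (e.g. [height speed]); y_train/y_test: +1/-1 labels.
opts  = '-s 0 -t 2 -c 1 -g 0.5';        % C-SVC, RBF kernel, adjustable C and gamma
model = svmtrain(y_train, X_train, opts);
[predicted, accuracy, ~] = svmpredict(y_test, X_test, model);
fprintf('Test set accuracy: %.2f%%\n', accuracy(1));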
Moreover, a toolbox named Libsvm-farutoUltimateVersion, which implements support vector machines on top of libsvm, is also used in this project. To build a behavior model from the eagle data, the videos captured with 4 nodes in Visby have been analyzed and summarized in order to collect useful information for eagle and airplane classification. Figure 4.6.2.3 illustrates the layout of the 4 nodes, and table 4.6.2.2 shows the details of the nodes' recordings:
Figure 4.6.2.3 4 nodes (distance between each node is 20 m) built in the wind farm in Gotland
Section Num | Node1 | Node2 | Node3 | Node4
1 | × | × | × | ×
2 | × | × | 4s(Ea) | ×
3 | × | 14s(Air) | × | ×
4 | × | × | × | ×
5 | × | 38s(Ea) | 131s(Air) | ×
6 | 19s(Air) | × | × | 18s(Air)
7 | × | 49s(Air) | 55s(Air) | ×
8 | 10s(Ea) | × | 20s(Air) | ×
9 | × | 20s(Air) | 20s(Air) | ×
10 | × | 14s(Air) | – | –
11 | 25s(Air) | × | – | –
12 | × | 19s(Air) | – | –
Table 4.6.2.2 Eagle (Ea) & Airplane (Air) data collection in 4 nodes
From the table above, the video sections containing flying objects, i.e. an airplane and an eagle, were extracted manually second by second. For section No. 6, the same airplane was captured at both Node1 and Node4, so that stereo vision could be used to perform frame conversion in any further operation. In addition, it is quite difficult to capture videos of the flying objects: they can be different kinds of birds, and even for a particular eagle the appearance changes with the imaging conditions, such as flying height, speed, viewing angle and illumination, as well as with the eagle's posture.
5 Results
In this project, the eagle has been considered the main target of interest compared to the other flying objects. Due to the bad weather in Visby, the captured videos of flying eagles are quite limited. On the other hand, the seabird can replace the eagle to some extent, considering its frequent activity in Sundsvall. For multi-class classification, the “one versus one (OVO SVMs)” and the “one versus rest (OVR SVMs)” strategies are considered the two most popular for SVM. According to the literature review, it appears to be difficult to determine which one is better for object classification, as each possesses its own advantages and disadvantages. Hence, both methods are used for classification in this part.
At the very beginning, it was possible to plot all the flying objects together in one graph so as to separate them visually. The following figures illustrate the different flying objects in the same plane based on the height, speed and size features:
Figure 5.1 Entire flying objects plotted in same plane
The height of the different objects differs greatly. Hence, it is easy to classify them, and it is thus possible to remove the yellow part, which represents the airplane.
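A minimal sketch of how such a combined plot in the height-speed-size feature space can be produced is shown below; the variable names and color assignments are hypothetical.

% Each entry of 'data' is assumed to be an N-by-3 matrix [height speed size].
data   = {eagleF, seabirdF, kiteF, airplaneF, helicopterF};
colors = {'g', 'b', 'r', 'y', 'k'};
figure; hold on;
for k = 1:numel(data)
    F = data{k};
    scatter3(F(:,1), F(:,2), F(:,3), 12, colors{k}, 'filled');
end
xlabel('Height'); ylabel('Speed'); zlabel('Size');
grid on; view(3); hold off;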
Figure 5.2 4 flying objects in same plane
Then the green and black parts, representing the eagle and helicopter respectively, can be removed as well. In order to separate the kite and the seabirds clearly, it is necessary to use SVM or Libsvm in this case. The figure below shows the seabird and the kite in the same plane, and it can be observed that some data overlap in a certain area.
Figure 5.3 Seabird and kite in same plane
For object classification, the accuracy of the classification result depends on the selected feature categories and on a large quantity of data. In order to build a classification model, it is necessary to increase the number of data points and thereby improve the database of the behavior model.
5.1 OVO SVMs classification
This method classifies using only two objects at a time. For example, with the eagle as the main classification target, the data can be divided into 4 groups (eagle and seabird, eagle and kite, eagle and airplane, and eagle and helicopter) in order to build, step by step, 4 classifiers for eagle classification in the Libsvm environment.
Figure 5.1 4 classification groups (group numbers are 1-5, 1-4, 1-3 and 1-2 from left to right)
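A minimal sketch of building these pairwise ("one versus one") classifiers with the LIBSVM MATLAB interface is given below. The cell array 'data' is a hypothetical container in which data{k} holds the feature vectors of class k (1 = eagle, 2 = seabird, 3 = kite, 4 = airplane, 5 = helicopter); the SVM parameters are illustrative only.

nClass = 5;
models = cell(nClass, nClass);
for i = 1:nClass
    for j = i+1:nClass
        X = [data{i}; data{j}];
        y = [ones(size(data{i}, 1), 1); -ones(size(data{j}, 1), 1)];
        % min-max scaling of every feature to [0, 1], as described in 5.1.1
        X = bsxfun(@rdivide, bsxfun(@minus, X, min(X)), max(X) - min(X));
        models{i, j} = svmtrain(y, X, '-s 0 -t 2 -c 1 -g 0.5');
    end
end
% nClass*(nClass-1)/2 = 10 pairwise classifiers, matching figure 5.1.1.5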
5.1.1 OVO SVMs classification for height & speed features
In this case, height and speed have been selected as the features to separate the flying objects. For Libsvm classification, the coordinate axes are normalized to between 0 and 1 in order to simplify the data processing and to increase the speed of the program. The following table and graphs illustrate the results of the classification:
Two-class category name | Train set | Test set | Train set accuracy (%) | Libsvm test set accuracy (%) | SVM error (%)
Eagle & Seabirds | 550 | 200 | 100 | 100 | 0
Eagle & Kite | 400 | 200 | 100 | 100 | 0
Eagle & Airplane | 400 | 200 | 100 | 100 | 0
Eagle & Helicopter | 450 | 200 | 100 | 100 | 0
Helicopter & Seabirds | 600 | 200 | 100 | 100 | 0
Helicopter & Kite | 300 | 200 | 100 | 100 | 0
Helicopter & Airplane | 400 | 200 | 100 | 100 | 0
Airplane & Seabirds | 550 | 200 | 100 | 100 | 0
Airplane & Kite | 400 | 200 | 100 | 100 | 0
Kite & Seabirds | 550 | 200 | 100 | 94 | 3.57
Table 5.1.1 Training data of OVO SVMs classification (1) for Height & Speed
• Eagle & Seabirds classification
Figure 5.1.1.1 Eagle and seabirds classification in 2D version(Train set)
• Eagle & Kite classification
Figure 5.1.1.2 Eagle and kite classification in 2D version(Train set)
• Eagle & Airplane classification
Figure 5.1.1.3 Eagle and airplane classification in 2D version(Train set)
• Eagle & Helicopter
Figure 5.1.1.4 Eagle and helicopter classification in 2D version(Train set)
Secondly, an additional 6 classifiers need to be built in order to classify every flying object by applying the “one versus one” method. Figure 5.1.1.5 illustrates all of the groups:
Figure 5.1.1.5 Flow chart of entire groups (5*(5-1)/2 = 10 classifiers)
Generally, the speed of classification with the “one versus one” method is quite fast, and there is no problem of overlapping classifications or of an unclassifiable situation, which makes it more accurate and reliable.
For example, the system first operates the “1 versus 5” classifier. If the answer is “5”, the system continues with “2 versus 5”. Similarly, if the answer is still “5”, the next classifier is “3 versus 5”. Hence, the system performs the minimum number of classifiers needed to complete the task in a very short period.
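The sketch below illustrates this elimination scheme: each pairwise model removes one candidate class until a single class remains. It reuses the hypothetical cell array 'models' from the earlier sketch, and 'x' is one normalised feature vector; both names are assumptions.

candidates = 1:5;                               % 1 = eagle ... 5 = helicopter
while numel(candidates) > 1
    i = candidates(1);
    j = candidates(end);
    label = svmpredict(0, x, models{i, j});     % dummy true label; +1 means class i wins
    if label > 0
        candidates(end) = [];                   % class j is eliminated
    else
        candidates(1) = [];                     % class i is eliminated
    end
end
predictedClass = candidates;                    % index of the remaining class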
• Helicopter & Seabird
Figure 5.1.1.6 Helicopter and seabird classification in 2D version(Train set)
• Helicopter & Kite
Figure 5.1.1.7 Helicopter and kite classification in 2D version(Train set)
• Helicopter & Airplane
Figure 5.1.1.8 Helicopter and Airplane classification in 2D version(Train set)
• Airplane & Seabird
Figure 5.1.1.9 Airplane and Seabird classification in 2D version(Train set)
• Airplane & Kite
Figure 5.1.1.10 Airplane and Kite classification in 2D version(Train set)
• Seabird & Kite
Figure 5.1.1.11 Kite and Seabird classification in 2D version(Train set)
According to the graphs above, the classification result is satisfactory in both the SVM and Libsvm environments. The classification accuracy of all groups, with the exception of “kite & seabirds”, is 100%, as the gap in altitude between the airplane, helicopter and eagle and the remaining objects is quite large.
5.1.2 OVO SVMs classification for height & size features
In this case, height and size have been selected as the features to separate all the flying objects. Compared to the speed parameter, the size parameter is quite varied, unstable and complicated to select under different height conditions. It is necessary to scale the altitude of all flying objects to the same level in order to perform the classification based on this parameter. The following table and graphs illustrate the result of the classification:
Two-class category name | Train set | Test set | Train set accuracy (%) | Libsvm test set accuracy (%) | SVM error (%)
Eagle & Seabirds | 550 | 200 | 100 | 100 | 0
Eagle & Kite | 400 | 200 | 100 | 100 | 0
Eagle & Airplane | 400 | 200 | 100 | 100 | 0
Eagle & Helicopter | 450 | 200 | 100 | 100 | 0
Helicopter & Seabirds | 600 | 200 | 100 | 100 | 0
Helicopter & Kite | 450 | 200 | 100 | 100 | 0
Helicopter & Airplane | 450 | 200 | 100 | 100 | 0
Airplane & Seabirds | 550 | 200 | 100 | 100 | 0
Airplane & Kite | 400 | 200 | 100 | 100 | 0
Kite & Seabirds | 550 | 200 | 100 | 89.5 | 3.21
Table 5.1.2 Training data of OVO SVMs classification (2) for Height & Size
• Eagle & Seabirds classification
Figure 5.1.2.1 Eagle and seabirds classification in 2D version(Train set)
• Eagle & Kite classification
Figure 5.1.2.2 Eagle and kite classification in 2D version(Train set)
• Eagle & Airplane classification
Figure 5.1.2.3 Eagle and Airplane classification in 2D version(Train set)
• Eagle & Helicopter classification
Figure 5.1.2.4 Eagle and Helicopter classification in 2D version(Train set)
• Helicopter & Seabird classification
Figure 5.1.2.5 Helicopter and seabird classification in 2D version(Train set)
• Helicopter & Kite classification
Figure 5.1.2.6 Helicopter and kite classification in 2D version(Train set)
• Helicopter & Airplane classification
Figure 5.1.2.7 Helicopter and airplane classification in 2D version(Train set)
• Airplane & Seabird classification
Figure 5.1.2.8 Airplane and seabird classification in 2D version(Train set)
• Airplane & Kite classification
Figure 5.1.2.9 Airplane and kite classification in 2D version(Train set)
• Seabird & Kite classification
Figure 5.1.2.10 Kite and seabird classification in 2D version(Train set)
According to the graphs above, the classification result is similar to that in section 5.1.1. The SVM error is below 3.21% and the Libsvm accuracy is above 89.5%, owing to the overlapping region of the height parameter in the group of seabirds and kites. All in all, the classification result is good.
5.1.3 OVO SVMs classification for speed & size features
In order to analyze and compare the different combinations of these three features, speed and size are used to separate the flying objects in this case. The following table and graphs illustrate the result of the classification:
Two-class category name | Train set | Test set | Train set accuracy (%) | Libsvm test set accuracy (%) | SVM error (%)
Eagle & Seabirds | 550 | 200 | 100 | 100 | 0
Eagle & Kite | 400 | 200 | 100 | 100 | 0
Eagle & Airplane | 400 | 200 | 100 | 100 | 0
Eagle & Helicopter | 450 | 200 | 100 | 100 | 0
Helicopter & Seabirds | 600 | 200 | 100 | 100 | 0
Helicopter & Kite | 450 | 200 | 100 | 100 | 0
Helicopter & Airplane | 450 | 200 | 100 | 100 | 0
Airplane & Seabirds | 550 | 200 | 100 | 100 | 0
Airplane & Kite | 400 | 200 | 100 | 100 | 0
Kite & Seabirds | 550 | 200 | 84.5455 | 54.5 | 32.18
Table 5.1.3 Training data of OVO SVMs classification (3) for Speed & Size
• Eagle & Seabirds classification
Figure 5.1.3.1 Eagle and seabirds classification in 2D version(Train set)
• Eagle & Kite classification
Figure 5.1.3.2 Eagle and kite classification in 2D version(Train set)
• Eagle & Airplane classification
Figure 5.1.3.3 Eagle and airplane classification in 2D version(Train set)
• Eagle & Helicopter classification
Figure 5.1.3.4 Eagle and helicopter classification in 2D version(Train set)
• Helicopter & Seabird classification
Figure 5.1.3.5 Helicopter and seabird classification in 2D version(Train set)
• Helicopter & Kite classification
Figure 5.1.3.6 Helicopter and kite classification in 2D version(Train set)
• Helicopter & Airplane classification
Figure 5.1.3.7 Helicopter and airplane classification in 2D version(Train set)
• Airplane & Seabird classification
Figure 5.1.3.8 Airplane and seabird classification in 2D version(Train set)
• Airplane & Kite classification
Figure 5.1.3.9 Airplane and kite classification in 2D version(Train set)
• Kite & Seabird classification
Figure 5.1.3.10 Kite and seabird classification in 2D version(Train set)
According to the graphs above, the classification result is quite satisfactory, with the exception of the group of kites and seabirds. Both the speed and the size parameters of the kite and the seabird overlap in this combination. Hence, it can be concluded that the combination of height and speed as selected features is a better choice for separating a kite and a seabird.
5.2 OVR SVMs classification
In this project, the focus has been on the eagle, while the other objects can be grouped into one common category. According to this idea, the “one versus rest” decomposition can be implemented to separate the eagle from among all the flying objects. Figure 5.2 shows how this method divides the multi-label data set into different binary subsets:
Figure 5.2 Sketch of OVR SVMs
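A minimal sketch of "one versus rest" training with the LIBSVM MATLAB interface is given below. The variable names are hypothetical: XAll holds the feature vectors of all objects and yAll the numeric class labels 1..5; each binary model separates one class (+1) from the remaining classes (-1).

nClass    = 5;
ovrModels = cell(1, nClass);
for k = 1:nClass
    yk = -ones(size(yAll));
    yk(yAll == k) = 1;                          % current class versus the rest
    ovrModels{k} = svmtrain(yk, XAll, '-s 0 -t 2 -c 1 -g 0.5');
end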
5.2.1 OVR SVMs (1) classification for height & speed features
OVR category name | Train set | Test set | Train set accuracy (%) | Test set accuracy (%)
Eagle & Rest | 760 | 200 | 100 (760/760) | 100 (200/200)
Seabird & Rest | 780 | 80 | 99.6154 (777/780) | 96.25 (77/80)
Kite & Rest | 780 | 200 | 98.7179 (770/780) | 96 (192/200)
Airplane & Rest | 680 | 200 | 100 (680/680) | 100 (200/200)
Helicopter & Rest | 530 | 200 | 100 (530/530) | 100 (200/200)
Table 5.2.1 Training data of OVR SVMs classification (1) for height & speed
Figure 5.2.1.1 Eagle and rest classification in 2D version (Train set)
Figure 5.2.1.2 Seabird and rest classification in 2D version (Train set)
Figure 5.2.1.3 Kite and rest classification in 2D version (Train set)
Figure 5.2.1.4 Airplane and rest classification in 2D version (Train set)
Figure 5.2.1.5 Helicopter and rest classification in 2D version (Train set)
According to the pictures above, the parameters of the kite and the seabird overlap, which affects the final accuracy of the classification result. However, the average accuracy is quite satisfactory when using this method based on the height and speed features.
5.2.2 OVR SVMs (2) classification for height & size features
OVR category name | Train set | Test set | Train set accuracy (%) | Test set accuracy (%)
Eagle & Rest | 1200 | 200 | 100 (1200/1200) | 100 (200/200)
Seabird & Rest | 1200 | 200 | 95.0833 (1141/1200) | 98.75 (79/80)
Kite & Rest | 1200 | 200 | 100 (1200/1200) | 86.5 (173/200)
Airplane & Rest | 1200 | 200 | 100 (1200/1200) | 100 (200/200)
Helicopter & Rest | 1200 | 200 | 100 (1200/1200) | 100 (200/200)
Table 5.2.2 Training data of OVR SVMs classification (2) for height & size
Figure 5.2.2.1 Eagle and rest classification in 2D version (Train set)
Figure 5.2.2.2 Seabird and rest classification in 2D version (Train set)
Figure 5.2.2.3 Kite and rest classification in 2D version (Train set)
Figure 5.2.2.4 Airplane and rest classification in 2D version (Train set)
Figure 5.2.2.5 Helicopter and rest classification in 2D version (Train set)
In this case, another feature combination, height and size, was chosen in order to classify the flying objects. The classification result is similar to that of the previous section, with an average accuracy above 86.5%. The altitudes of the different flying objects differ greatly, which can be considered the main reason why they are recognized and separated so easily.
5.2.3 OVR SVMs (3) classification for speed & size features
OVR category name | Train set | Test set | Train set accuracy (%) | Test set accuracy (%)
Eagle & Rest | 1200 | 200 | 100 (1200/1200) | 100 (200/200)
Seabird & Rest | 1200 | 200 | 90.25 (1083/1200) | 98.75 (79/80)
Kite & Rest | 1200 | 200 | 90.25 (1083/1200) | 48 (96/200)
Airplane & Rest | 1200 | 200 | 100 (1200/1200) | 100 (200/200)
Helicopter & Rest | 1200 | 200 | 100 (1200/1200) | 100 (200/200)
Table 5.2.3 Training data of OVR SVMs classification (3) for speed & size
Figure 5.2.3.1 Eagle and rest classification in 2D version (Train set)
Figure 5.2.3.2 Seabird and rest classification in 2D version (Train set)
Figure 5.2.3.3 Kite and rest classification in 2D version (Train set)
Figure 5.2.3.4 Airplane and rest classification in 2D version (Train set)
Figure 5.2.3.5 Helicopter and rest classification in 2D version (Train set)
For the categories “Kite & Rest” and “Seabird & Rest”, the classification accuracy is significantly reduced: the selected “speed & size” features of the kite and the seabirds lie close together, which results in an unsatisfactory classification.
For multi-class classification, the OVO SVMs and the OVR SVMs are the two most popular strategies, and both have been applied in this project; however, according to the literature review it appears difficult to conclude which one is better for object recognition and classification.
More specifically, for OVO SVMs it is necessary to build 10 classifiers in advance to separate the 5 flying objects in this project, while for OVR SVMs only 5 classifiers are required and the classification result is still accurate. To conclude, the OVR SVMs strategy appears to be more suitable when only a few classes have to be classified.
5.3 Prediction for unknown data
Figure 5.3.1 Procedures of the model prediction
In the previous chapters, SVM and Libsvm were used to produce the training data sets, the testing data sets and the model construction. In this part, the goal is to predict the target values of a sequence of data of unknown type belonging to the flying objects. In addition, these target values should satisfy the attributes and be easily used for further predictions. The procedure is shown in figure 5.3.1 above, and figure 5.3.2 sketches the prediction of unknown data:
Figure 5.3.2 Sketch of unknown data prediction
In chapter 5.1, 10 model classifiers were built in advance in order to perform the prediction for unknown data belonging to a flying object; according to figure 5.1.1.5, the unknown input data can be predicted step by step. In this part, however, the OVR SVMs models are applied to perform the prediction for unknown data. Using Libsvm, the prediction results are shown in the tables below. The overall prediction accuracy is 85.82%, which means that 2094 out of 2440 out-of-sample predictions are correct.
OVR model classifier | Input unknown data No. | Testing target No. | Real target No. | Prediction accuracy (%)
Eagle & Rest | 500 | 200 | 200 | 100 (200/200)
Seabird & Rest | 330 | 78 | 80 | 97.5 (78/80)
Kite & Rest | 690 | 187 | 200 | 93.5 (187/200)
Airplane & Rest | 510 | 150 | 150 | 100 (150/150)
Helicopter & Rest | 560 | 200 | 200 | 100 (200/200)
Table 5.3.1 OVR SVMs (1) prediction results
OVR model classifier | Input unknown data No. | Testing target No. | Real target No. | Prediction accuracy (%)
Eagle & Rest | 880 | 200 | 200 | 100 (200/200)
Seabird & Rest | 800 | 98 | 80 | 81.63 (80/98)
Kite & Rest | 700 | 178 | 200 | 89 (178/200)
Airplane & Rest | 650 | 150 | 150 | 100 (150/150)
Helicopter & Rest | 610 | 100 | 100 | 100 (100/100)
Table 5.3.2 OVR SVMs (2) prediction results
OVR model classifier | Input unknown data No. | Testing target No. | Real target No. | Prediction accuracy (%)
Eagle & Rest | 480 | 200 | 200 | 100 (200/200)
Seabird & Rest | 680 | 184 | 80 | 43.48 (80/184)
Kite & Rest | 400 | 94 | 200 | 47 (94/200)
Airplane & Rest | 600 | 199 | 200 | 99.5 (199/200)
Helicopter & Rest | 500 | 200 | 200 | 100 (200/200)
Table 5.3.3 OVR SVMs (3) prediction results
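A minimal sketch of this out-of-sample prediction step is given below, with hypothetical variable names: every OVR model from section 5.2 is applied to the unknown feature vectors XUnknown, and the predictions are checked against the manually verified labels yUnknown (1..5) to obtain an overall accuracy figure.

correct = 0;
total   = 0;
for k = 1:numel(ovrModels)
    yk = -ones(size(yUnknown));
    yk(yUnknown == k) = 1;                      % ground truth for "class k versus rest"
    pred    = svmpredict(yk, XUnknown, ovrModels{k});
    correct = correct + sum(pred == yk);
    total   = total + numel(yk);
end
fprintf('Overall prediction accuracy: %.2f%%\n', 100 * correct / total);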
The prediction results thus indicate to which categories these unknown flying-object data belong. However, the unknown test data are extracted and computed from the binary images converted from the captured videos. Hence, it is necessary to inspect the RGB images of the unknown flying objects in order to guarantee that the prediction result is correct. Figure 5.3.3 illustrates the recognition from the RGB images:
Figure 5.3.3 RGB images for result recognition
Flying objects such as eagles and seabirds consist of distinguishable parts such as wings and legs. Unlike the faces in human face detection, eagles and seabirds vary widely in appearance as they change their posture during flight.
On the one hand, the classifier models are built so that the prediction results determine whether the unknown data belong to the class of object that is of interest. On the other hand, the recognition of the unknown data relies on the information in the RGB images. Overall, the final result makes it possible to recognize a real eagle, and this information is sent from the intelligent computer system to control the operation of the windmill.
6 Conclusions and future work
In this project, the classification of flying objects, and predictions, for eagles, seabirds, kites, airplanes and helicopters based on the behavior classifier model have been achieved. A stereo vision system is presented as the main method for extracting useful information from the captured videos. The video acquisition, performed by two identical system cameras using the stereo vision approach, is a vital step in this method. Accurate and robust feature selection is largely determined by the quality of the videos. Additionally, camera calibration also contributed to the accuracy of the computed results in the feature selection. SVM and Libsvm can be considered as two approaches to classify the different flying objects; however, only Libsvm was applied to perform the classification and prediction in this project. For the classification of flying objects, the main difficulty faced by the classification system, which is implemented in the MATLAB environment, is the problem of variability. Hence, it appears impossible for the classifier model to be absolutely precise in classifying eagles and other flying objects, due to the limitation of the training data.
Some future work should be considered to perfect this project. Specifically, the video acquisition depends on factors such as the weather conditions and the equipment, which directly determine the accuracy of the features obtained from the converted images. At the same time, comprehensive data should be collected in different environments, which will contribute to the final classification and prediction accuracy given the complicated situation of a wind farm. For three-class classification, the results are not yet satisfactory with either SVM or Libsvm; future work can build on this framework to further develop multi-class classification. The equipment should also be improved and modified to make sure that the two system cameras are aligned on the same line, without any rotation or tilt, in order to avoid further errors. Finally, the project contributes to the protection of eagles and other flying species from death caused by the huge blades of wind turbines.
Consequently, it could be possible to cooperate with a Chinese eagle protection agency on this significant project, as it would then be easier to observe and analyze the birds' flying habits and activities and thereby capture more good-quality videos for further classification and prediction.
Appendix
The plotted graphs above are based on the centroid data of the flying objects, which are extracted from videos captured at different places. Tables 4.6.1.2.2 to 4.6.1.2.5 list part of these data:
Frame | X-coordinate | Y-coordinate
120 | 2055.1 | 1878.9
121 | 2049.2 | 1748.7
122 | 2012.9 | 1628.8
123 | 1951.7 | 1522.3
124 | 1795.3 | 1334.7
125 | 1706.5 | 1264.7
126 | 1619.8 | 1231.4
127 | 1549.5 | 1237.1
128 | 1483.2 | 1308.5
129 | 1488.4 | 1352.8
130 | 1523.3 | 1394.9
131 | 1583.8 | 1427.5
132 | 1763.9 | 1442.5
133 | 1872.4 | 1422.7
134 | 1986.6 | 1384.6
135 | 2101.5 | 1328.8
136 | 2311.5 | 1178.2
137 | 2401.5 | 1089.7
138 | 2479.6 | 993.8
139 | 2548.3 | 894
Table 4.6.1.2.2 Data of Eagle_1
Frame | X-coordinate | Y-coordinate
179 | 1053.1 | 1845.4
180 | 964.6 | 1615.3
181 | 908.5 | 1537.2
182 | 849.1 | 1483.5
183 | 784.3 | 1469.7
184 | 681.7 | 1536.1
185 | 665.7 | 1590.2
186 | 671.6 | 1645.3
187 | 698.2 | 1697
188 | 822.3 | 1765.2
189 | 919.2 | 1765.4
190 | 1023.4 | 1736.5
191 | 1115.8 | 1666.6
192 | 1220.7 | 1411.1
193 | 1231.5 | 1261.3
194 | 1223.3 | 1119.8
195 | 1196.1 | 999.3
196 | 1108.7 | 824.7
197 | 1051.7 | 771
198 | 989.5 | 734
199 | 925.2 | 716.7
200 | 815.3 | 758.8
201 | 784.3 | 804.5
202 | 769.5 | 852.6
203 | 774 | 898.8
204 | 829 | 971.7
205 | 882.4 | 994.7
206 | 954.6 | 1002
207 | 1040.7 | 986.9
208 | 1229 | 896
Table 4.6.1.2.3 Data of Eagle_2
Frame | X-coordinate | Y-coordinate
245 | 1465.6 | 1614.8
246 | 1495.7 | 1277.1
247 | 1478.9 | 967.9
248 | 1424.1 | 699.8
249 | 1220 | 351.1
250 | 1090.5 | 292.8
251 | 961.4 | 329.1
252 | 864.1 | 455.5
253 | 862.8 | 901.5
254 | 970.8 | 1156.3
255 | 1131.2 | 1393.7
256 | 1327.4 | 1600.7
257 | 1786 | 1867.8
258 | 2047.7 | 1896.4
259 | 2292.4 | 1890.1
260 | 2536 | 1850.9
Table 4.6.1.2.4 Data of Eagle_3
Frame | X-coordinate | Y-coordinate | X-coordinate | Y-coordinate | X-coordinate | Y-coordinate
65 | 765.6 | 1820.8 | 909.1 | 1916.7 | 1004.4 | 1816.5
66 | 825.7 | 1744.3 | 881.8 | 1895.1 | 1002.9 | 1739.9
67 | 880.9 | 1659 | 859.2 | 1854.6 | 993.7 | 1669.7
68 | 959.2 | 1475.8 | 838.2 | 1734.5 | 963.8 | 1543.9
69 | 984.9 | 1389.4 | 840.9 | 1660.8 | 944.2 | 1480.7
70 | 1008.3 | 1309.9 | 852.5 | 1582.6 | 925.9 | 1415.3
71 | 1027.6 | 1236.7 | 872.4 | 1503.1 | 912.5 | 1348.2
72 | 1046.8 | 1088.6 | 918.6 | 1341.6 | 915.8 | 1204.3
73 | 1046.4 | 1018.6 | 944.8 | 1262.2 | 935.7 | 1131.6
74 | 1036.8 | 955.4 | 980 | 1184.2 | 969 | 1060.8
75 | 1014.5 | 901.7 | 1021.4 | 1108.3 | 1015.8 | 993.4
76 | 932.3 | 850.5 | 1127.8 | 970.5 | 1154.2 | 877.9
77 | 891.7 | 855.9 | 1193.3 | 917.4 | 1236.3 | 836.2
78 | 869.9 | 874 | 1263.2 | 878.3 | 1316.4 | 808.9
79 | 870.4 | 892.5 | 1333.2 | 852.6 | 1388.2 | 794.6
80 | 919.5 | 916.7 | 1462.3 | 828.9 | 1497.7 | 795.6
81 | 959.8 | 926.6 | 1518.4 | 827.3 | 1536.5 | 807.6
82 | 1006.3 | 940.7 | 1566.9 | 826.5 | 1566.9 | 826.5
83 | 1054.7 | 958.2 | 1598.4 | 837.6 | 1598.4 | 837.6
84 | 1146.1 | 990.7 | 1622.9 | 855.2 | 1643.4 | 869.6
85 | 1190.2 | 1003.1 | 1622.7 | 864.7 | 1643.4 | 882.8
86 | 1233.9 | 1010.9 | 1610.6 | 872.9 | 1632.8 | 890.4
87 | 1280.7 | 1015.1 | 1590.6 | 876.6 | 1614.9 | 889
88 | 1391.6 | 1020 | 1536.5 | 854.4 | 1569.1 | 846.8
89 | 1457.4 | 1020.4 | 1510.3 | 822.5 | 1547.5 | 805.5
90 | 1530.1 | 1013.2 | 1489.7 | 773.9 | 1532 | 750.6
91 | 1606.3 | 993.3 | 1478.9 | 713.5 | 1523.7 | 684.8
92 | 1751.8 | 912.1 | 1491.8 | 572.3 | 1529.1 | 538.1
93 | 1815.2 | 856.5 | 1517.7 | 502.8 | 1545.8 | 466.2
94 | 1869.5 | 790.9 | 1555.9 | 439.9 | 1573.7 | 394.6
95 | 1913.7 | 716.7 | 1606.5 | 385.6 | 1611.2 | 324.8
96 | 1963 | 548.4 | 1745.2 | 312.5 | 1716.4 | 204
97 | 1969.3 | 460.2 | 1820.5 | 294 | 1783.4 | 161.9
98 | 1966.5 | 373.4 | 1889 | 285.2 | 1852.5 | 133.2
99 | 1952.4 | 288.2 | 1952.4 | 288.2 | 1920.1 | 116.3
Table 4.6.1.2.5 Data of Eagle_4,5,6