International Journal of Engineering Trends and Technology (IJETT) – Volume 33 Number 3 – March 2016, ISSN: 2231-5381
Human Computer Interaction using Eye Movements for Hand Disabled People

Gebremaryam Dagnew
M-Tech student, Dept. of CSE/IT, Symbiosis Institute of Technology, Pune, Maharashtra, India

Gagandeep Kaur
Assistant Professor, Dept. of CS/IT, Symbiosis Institute of Technology, Pune, Maharashtra, India
Abstract—Most computers use a mouse and keyboard as input devices, which hand disabled people are unable to operate. This paper proposes a new method for hand disabled people to interact with the computer using their eyes only. In the proposed system, eye detection is performed using pixel value calculation and sclera detection. To avoid noise coming from the background, face detection is applied before eye detection. After this, the pair of eyes is detected, cropped, and further divided into a left eye image and a right eye image. The brightness of the images is then adjusted so that the system works under both good and poor lighting conditions. The resulting image is converted to a gray scale image, and noise is removed using image enhancement methods (median filter and Wiener filter). After this, the image is converted to a binary (black and white) image using a threshold value in order to decide whether the eye is opened or closed. This decision depends on the value of the pixels calculated from the binary image; accordingly, operations such as cursor movement based on quadrants, left click, right click, double click, typing using a virtual keyboard, selection, and drag and drop are performed. The proposed system shows good performance even in poor lighting conditions.

Keywords—hand disabled people, pixel value calculation, sclera detection, quadrant division, human computer interaction.

I. INTRODUCTION
As the use of computers has dramatically increased, the quality of people's lives has dramatically changed. The rapidly increasing ability of computers to process information has turned the world into a computer-based world. People who have healthy hands access computers with a keyboard and mouse. But since most computers use a mouse and keyboard as input devices, people with hand disabilities are not able to access information as easily as healthy people. To address such problems, many researchers are working on how disabled people can interact with computers. Therefore, human computer interaction for hand disabled people can be used to fill the gap between computers and users with hand disabilities.

Object detection, and especially eye detection, is a very important component in computer vision systems [11], which include human computer interaction systems, biometrics, driver drowsiness recognition [12], intelligent transportation systems, and eye medical systems. Human computer interaction (HCI) is an innovative and efficient technique and an active research field for a number of experts. This paper proposes human computer interaction using eye movements for hand disabled people. Sclera detection and pixel value calculation are the main methods used to estimate eye movements. Eye movement is used to enable hand disabled people to interact with computer interfaces directly, without the need for a mouse and keyboard.

The techniques proposed in this paper are easy and user friendly, as they do not require physical interaction by the user. A built-in webcam is used to capture an image of the user, and the mouse pointer is controlled by a natural and efficient method of interaction: eye movement. Currently, disabled people type on the computer keyboard by holding a long stick in their mouth [8]; the technique proposed here helps hand disabled people become independent of such sticks.

To remove noise and improve image quality, the median filter and Wiener filter are the two image enhancement methods used in this paper. The median filter is used to remove salt and pepper noise, and the Wiener filter is used to remove additive noise and to invert the blurring that occurs during user motion. These two methods make the system accurate and error-free in detecting the sclera of the eye. Eye conditions are classified as opened eye, closed eye, left aligned, right aligned, upward, and downward based on the pixel value calculated from the sclera.

The remainder of this paper is structured as follows. Related works are presented in section II. Section III presents the system design of the proposed system. Experiments are presented in section IV. Section V presents the results and discussion of the proposed system. Conclusion and future work are presented in section VI.
II. RELATED WORKS
In recent years, eye detection has attracted a large number of studies on eye tracking, eye recognition, eye gaze estimation, and other eye-based approaches. S. S. Deepika and G. Murugesan [1] introduced a human computer interface with a novel approach based on eye movements. In this system, the eye is tracked using a low resolution webcam, and eye blink, straight, left, right, and upward movements of the eye can be detected. The limitation of this system is that it cannot detect downward movements, and mouse functions such as selection, typing using a virtual keyboard, and drag and drop are not implemented. The eye tracking algorithm is also affected by illumination: if there is not a sufficient amount of light, the system cannot perform. Muhammad Awais, Nasreen Badruddin, and Micheal Drieberg [2] introduced automated eye blink detection and tracking based on template matching and a similarity measure. In this system, eye detection and tracking are performed using the golden ratio and the correlation coefficient; mouse functions are not implemented. Ryo Shimata, Yoshihiro Mitani, and Tsumoru Ochiai [3] introduced a human computer interaction system using pupil detection and tracking. This system combined an infrared light-emitting diode, a sensitive infrared camera, and an infrared filter to avoid the influence of illumination. In this system, only the direction of the eyes is determined; mouse functions such as cursor movement, mouse click, selection, and drag and drop are not implemented.
Jianbin Xiong, Weichao Xu, Wei Liao, Qinruo Wang, Jianqi Liu, and Qiong Liang [6] proposed a system called Eye Control System Based on Ameliorated Hough Transform Algorithm. In this system, an ameliorated Hough transform algorithm is developed using the pupil-corneal reflection technique. A typing function and an efficient blink detection function are designed in this system; using these functions and an eye control device, users can enter numbers into the computer with their eyes only. Improving pupil localization and the calibration method, and raising the number of keys to more than 30 to include all English letters, are future work for this system.
Aleksandra Krolak and Paweł Strumiłło [7] introduced an eye-blink detection system for human–computer interaction. This system has an interface that detects voluntary eye blinks and interprets them as control commands, and it consists of algorithms for face detection, eye-region extraction, eye-blink detection, and eye-blink classification. The limitation of this system is that users suffered from eye fatigue after using the interface for more than 15 minutes; the cause is the high intensity involved in picking up the candidates on the screen, and reducing it by means of auto-fill predictive text options is the future work of this system. Muhammad Usman Ghani, Sarah Chaudhry, Maryam Sohail, and Muhammad Nafees Geelani [8] proposed a system called GazePointer, a real time mouse pointer control implementation based on eye gaze tracking, which uses eye gaze to control the mouse pointer for interaction with a computer. That system does not perform well under bad lighting conditions. Enhancing image quality, increasing webcam resolution, introducing head posture features, and introducing gaze estimation are suggested to improve it.
Generally, eye detection seeks to localize the eye position [5] and gaze direction [15] to answer the question "how can hand disabled people interact with computers?" Although a large amount of research has been done on this issue, mouse functions such as selection, drag and drop, and typing using a virtual keyboard have not been implemented. Most recent research is also affected by illumination: when there is an insufficient amount of light, the system cannot work.
Unlike the other methods, the proposed system detects all eye conditions, such as open eye, closed eye, left aligned, right aligned, up aligned, and one eye opened with the other closed, and accordingly implements mouse functions such as cursor movement, left click, right click, double click, selection, typing using a virtual keyboard, and drag and drop.
As the input images are taken from a low resolution webcam, noise removal is an important step in the proposed system. The proposed system removes such noise using the median filter and Wiener filter, and also adjusts the contrast and brightness of the image so that it works well under both poor and good lighting conditions [19].
III. SYSTEM DESIGN
The algorithm of the proposed system is presented in Fig. 1. The system starts by capturing an image of the human face; it then detects the eye from the face, converts it to gray level, removes noise, converts it to a binary image, calculates the pixel value, detects the sclera, divides the eye and screen into quadrants, and finally performs mouse functions such as mouse move, left click, right click, double click, selection, drag and drop, and typing using the virtual keyboard according to the pixel value. If the sum of white pixel values is zero, a mouse click operation is performed; if the sum of white pixel values of both eyes is one or more, mouse movement is performed.
Fig. 1. Flow chart of the proposed system.

A. Face Detection
As shown in Fig. 1, a webcam or USB camera attached to the computer captures images of the person using the system. From the captured image, the human face is detected and cropped in order to detect the eyes. Face detection has been researched with different methods, often motivated by the technique of the face detector. Such techniques can use colors, textures, features, and templates. The following two techniques were tried in the proposed system to select the best one.

1) Skin Color Analysis Method
Skin color analysis is often used as part of a face detection technique. Various techniques and color spaces can be used to separate pixels that belong to skin from pixels that are likely to be background. This technique faces a big problem, as skin colors usually differ from person to person. In addition, in some cases skin colors may be similar to the background colors; for example, a red floor covering or a red wooden door in the image can cause the system to fail.

2) Viola Jones Algorithm Method
This method computes a set of features at a number of scales and at different locations and uses them to identify whether an image region is a face or not. A simple, yet competent, classifier is built by identifying a few efficient features out of the whole set of Haar-like features, which can be generated using the AdaBoost [18] technique. To provide real time processing, a number of classifiers, each containing a set of features, are combined in a cascaded structure. According to the Viola Jones algorithm [4], face detection exploits the facts that the human eye region is darker than the upper cheeks and forehead, as presented in Fig. 2 (c), and that there is a brighter part between the two eyes separating the left eye from the right eye, as presented in Fig. 2 (b). The features used by the detection framework are generally sums of image pixels within rectangular areas, as presented in Fig. 2 (a), and rely on more than one rectangular area. Fig. 2 (a) presents the four types of features used in the Viola Jones algorithm; Fig. 2 (b) presents a feature that resembles the bridge of the nose; and Fig. 2 (c) presents a feature reflecting that the eye region is darker than the upper cheeks. The value of a given feature is the sum of the pixels of the unshaded rectangle subtracted from the sum of the pixels within the shaded rectangle [18].
Fig.2 Viola Jones algorithm features
These rectangular filters are very fast to evaluate at any scale and location for capturing characteristics of the face. However, the collection of all possible features of the four types that can be produced on an image window is very large; applying all of them would be computationally intensive and would generate redundant activity, so only a small subset of the large set of features is used. The advantage of the Viola Jones algorithm is its robustness, with a very high detection rate and real time processing.
In the proposed system, the Viola Jones algorithm is selected to detect a real human face while reducing the effect of background noise that has structures and colors similar to human faces. Fig. 3 presents a sample of face detection using the Viola Jones method.

Fig. 3. Face detection using Viola Jones.
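The feature computation described above can be made concrete with an integral image, which lets any rectangular pixel sum be evaluated in constant time. The sketch below is an illustrative NumPy example of a two-rectangle feature (the function names and the 4×4 test image are hypothetical, not the paper's code):

```python
import numpy as np

def rect_sum(ii, top, left, h, w):
    """Sum of pixels in a rectangle using a zero-padded integral image ii."""
    return (ii[top + h, left + w] - ii[top, left + w]
            - ii[top + h, left] + ii[top, left])

img = np.arange(16, dtype=np.int64).reshape(4, 4)
# Integral image with a leading row and column of zeros.
ii = np.zeros((5, 5), dtype=np.int64)
ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)

# Two-rectangle feature: shaded (bottom) half minus unshaded (top) half.
top_half = rect_sum(ii, 0, 0, 2, 4)     # rows 0-1, sum = 28
bottom_half = rect_sum(ii, 2, 0, 2, 4)  # rows 2-3, sum = 92
feature = bottom_half - top_half
print(feature)  # → 64
```

Each `rect_sum` call touches only four entries of the integral image, which is why the cascade can evaluate features at every scale and location in real time.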
B. Eye Pair Detection
Eye movement analysis [13] can be used to analyze the performance of eye-to-cursor integration. The eye pair is detected and cropped from the cropped face by eliminating other face parts such as the mouth, nose, and ears. The resulting image is divided into two parts: left eye and right eye. The left and right eye images are converted from RGB to gray scale, and noise is then removed using image enhancement techniques (median filter and Wiener filter). After this, the image is converted into a binary (black and white) image using a threshold value. The processing structure of the proposed method is shown in Fig. 4.

Fig. 4. Processing structure of the proposed method.

C. Image Enhancement
Removing noise and improving image quality leads to better accuracy in computer vision. The noise may be Gaussian noise, balanced noise, or impulse noise [10]. Impulse noise is distributed over the image as light and dark noise pixels and corrupts the correct information of the image; reducing impulse noise is therefore of key importance in computer vision. In this paper, two image enhancement methods (median filter and Wiener filter) are used to remove noise.

1. Median Filter
The median filter is a nonlinear, rank-order filter [10] that is effective at reducing impulse noise and salt and pepper noise. In the median filter, the pixels covered by the mask (neighborhood) are ranked in order of their gray levels, and the median value of the ordered mask replaces the noisy value. The median filter output is presented [10] in equation 1:

g(x, y) = med{ f(x − s, y − t) : (s, t) ∈ w } ............. (1)

where f(x, y) is the original image, g(x, y) is the output image, and w is a two-dimensional mask; the mask is an n×n matrix such as 3×3, 5×5, etc. Fig. 5 shows an example of the median filter on a 5×5 image. To obtain the median value of Fig. 5 (a), the pixel values are ranked as 0, 25, 67, 117, 132, 145, 182, 191, 197, 204, 210, 214, 217, 221, 227, 229, 229, 229, 234, 235, 241, 245, 246, 246, 247. The median value is then 217, as indicated in Fig. 5 (b).

Fig. 5. (a) Original image and its pixel matrix; (b) image after median filtering, with the median value at the center.

The noise reduction performance of the median filter for an image with zero-mean noise is calculated [10] as in equation 2:

σ²_med = 1 / (4 n f²(n̄)) ≈ (σ²_i / (n + π/2 − 1)) · (π/2) ............. (2)

where σ²_i is the input noise power (variance), n is the size of the median filtering mask, and f is a function of the noise density.
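The ranking-and-replace step of equation 1 can be sketched as follows. The paper's implementation is in MATLAB; this self-contained NumPy version is only an illustration of the same idea:

```python
import numpy as np

def median_filter(img, n=3):
    """Apply an n-by-n median filter; borders use edge replication."""
    pad = n // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + n, x:x + n]  # the n x n mask (neighborhood)
            out[y, x] = np.median(window)      # median of the ranked gray levels
    return out

# Salt-and-pepper corrupted constant image: the filter restores it exactly.
noisy = np.full((5, 5), 128, dtype=np.uint8)
noisy[1, 2], noisy[3, 3] = 255, 0              # one salt and one pepper pixel
clean = median_filter(noisy)
print(np.all(clean == 128))                    # → True
```

In MATLAB the equivalent built-in is medfilt2; the explicit loop above is kept only to mirror the description of the mask.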
2. Wiener Filter
Wiener filtering is an image enhancement technique used for inverse filtering and noise smoothing. In this paper, the Wiener filter is used to remove additive noise and to reduce the blurring that occurs due to sudden eye movement. This filter minimizes the mean square error between the estimated process and the targeted process. The operations in the time and frequency domains are described [16] in equations 3 to 6:

y(t) = h(t) ∗ x(t) + n(t) ............. (3)

where x(t) is the original signal at time t, h(t) is the known impulse response, n(t) is additive noise, y(t) is the observed output, and ∗ denotes convolution. The goal is to find g(t) estimating x(t) as

x̂(t) = g(t) ∗ y(t) ............. (4)

where x̂(t) is an estimate of x(t) that minimizes the mean square error. In the frequency domain, the Wiener filter is given by

G(f) = H*(f) S(f) / ( |H(f)|² S(f) + N(f) ) ............. (5)

where G(f) and H(f) are the Fourier transforms of g and h, respectively, at frequency f, S(f) is the power spectral density of the original signal x(t), and N(f) is the mean power spectral density of the noise n(t). The estimate is then computed as

X̂(f) = G(f) Y(f) ............. (6)

where X̂(f) and Y(f) are the Fourier transforms of x̂(t) and y(t), respectively. Fig. 6 (a) shows the original image and Fig. 6 (b) shows the result of the Wiener filter.

(a) Original image (b) Filtered image
Fig. 6. Wiener filtering.
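Equations 3 to 6 can be sketched with NumPy's FFT. This is an illustrative toy example, not the paper's data: the signal, the circular blur kernel, and the noise-to-signal ratio are all assumptions.

```python
import numpy as np

def wiener_deconvolve(y, h, nsr):
    """Frequency-domain Wiener filter: G(f) = H*(f) / (|H(f)|^2 + N(f)/S(f)),
    X_hat(f) = G(f) Y(f).  nsr is the (assumed constant) ratio N/S."""
    H = np.fft.fft(h, n=len(y))
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)          # eq. (5), divided by S(f)
    return np.real(np.fft.ifft(G * np.fft.fft(y)))   # eq. (6)

# Blur a known signal with a mild circular kernel, then recover it.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
h = np.zeros(8)
h[0], h[1], h[-1] = 0.6, 0.2, 0.2                    # symmetric blur, H(f) > 0
y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))  # y = h * x, no noise
x_hat = wiener_deconvolve(y, h, nsr=1e-6)
print(np.allclose(x_hat, x, atol=1e-3))              # → True
```

Note that equation 5 is used here in the algebraically identical form H*/(|H|² + N/S), which avoids estimating S(f) and N(f) separately when only their ratio is known.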
D. Image Binarization Using a Threshold Value
In most vision systems, it is helpful to separate the parts of the image we are interested in from the parts that correspond to the background. Thresholding usually gives an easy and suitable way to carry out this segmentation based on the gray level intensities of the image. A single threshold level or multiple threshold levels can be chosen for the image to be segmented. For a single threshold level, every pixel in the image is compared with the given threshold value: if the intensity value of the pixel is higher than the threshold, the pixel is represented as white in the output; on the contrary, if the intensity value is less than the threshold, the pixel is represented as black. For multiple threshold levels, there are groups of intensities that are represented as white, while intensities outside these groups are represented as black. Generally, thresholding is useful for rapid image segmentation due to its simplicity and fast processing speed [14]. Image binarization is the process of converting a gray level image into a black and white image using a threshold value [17], as shown in equations 7 and 8. Equation 7 shows a single threshold level:

g(x, y) = { 1 (white), if f(x, y) > T; 0 (black), otherwise } ............. (7)

where f(x, y) is the image intensity and T is the threshold value. If the intensity value f(x, y) is less than the threshold value T, the pixel is replaced with a black pixel, and if f(x, y) is greater than T, the pixel is replaced with a white pixel. Thresholding with multiple levels can be presented as in equation 8:

g(x, y) = { 1, if f(x, y) ∈ Z; 0, otherwise } ............. (8)

where g(x, y) is the image resulting from multiple threshold values and Z is the set of intensity values mapped to white. Fig. 7 (a) presents the gray level image and Fig. 7 (b) presents the binary image after single and multiple thresholding.

(a) Gray level image (b) Binary image
Fig. 7. Image binarization.
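Equations 7 and 8 map directly onto boolean masks; a minimal NumPy sketch (the threshold values and the 2×2 test image are arbitrary illustrations):

```python
import numpy as np

def binarize_single(gray, T):
    """Eq. (7): white (1) where intensity exceeds threshold T, else black (0)."""
    return (gray > T).astype(np.uint8)

def binarize_multiple(gray, low, high):
    """Eq. (8): white where intensity lies in the band Z = [low, high]."""
    return ((gray >= low) & (gray <= high)).astype(np.uint8)

gray = np.array([[10, 200], [90, 160]], dtype=np.uint8)
print(binarize_single(gray, T=128).tolist())                # → [[0, 1], [0, 1]]
print(binarize_multiple(gray, low=80, high=180).tolist())   # → [[0, 0], [1, 1]]
```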
E. Eye and Mouse-Cursor Integration
When both eyes are opened, the left eye is divided into four quadrants to integrate with mouse-cursor movement. To divide the eye into four quadrants, the center of the eye is used as a reference point. The eye corner location is used to find the width and height of the eye, which in turn are used to calculate the center of the eye. Using the x- and y-coordinates created at the corner of the eye, the center of the eye is calculated [8] as in equations 9 and 10:

COE_x = x + (w_e / 2) ............. (9)
COE_y = y + (h_e / 2) ............. (10)

where COE_x and COE_y are the center points on the x- and y-coordinates, (x, y) is the eye corner coordinate, and w_e and h_e are the width and height of the eye.

The computer screen is also divided into four quadrants. Since the height and width of the screen are constant, the center point of the screen is calculated as in equations 11 and 12:

COS_x = W_s / 2 ............. (11)
COS_y = H_s / 2 ............. (12)

where COS_x and COS_y are the center of the screen on the x- and y-coordinates, and W_s and H_s are the width and height of the screen.

After calculating the center point of the eye (COE) and the center point of the screen (COS), a horizontal line and a vertical line crossing each other perpendicularly at these center points divide the eye or the screen into four quadrants for further processing.
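Equations 9 to 12 amount to simple midpoint arithmetic; a minimal sketch, assuming the eye bounding box corner (x, y) and its size come from the eye detection step (the example numbers are illustrative):

```python
def center_of_eye(x, y, w_e, h_e):
    """Eq. (9)-(10): eye center from its corner coordinate and size."""
    return x + w_e / 2, y + h_e / 2

def center_of_screen(w_s, h_s):
    """Eq. (11)-(12): screen center from its constant width and height."""
    return w_s / 2, h_s / 2

# Example: an eye box cropped at (40, 60), 32x16 pixels; a 640x480 screen.
coe = center_of_eye(40, 60, 32, 16)
cos = center_of_screen(640, 480)
print(coe, cos)  # → (56.0, 68.0) (320.0, 240.0)
```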
Fig. 8 (a) presents the eye quadrants labeled 1, 2, 3, and 4, and Fig. 8 (b) presents the quadrants of the computer screen, also labeled 1, 2, 3, and 4.

(a) Eye quadrant (b) Screen quadrant
Fig. 8. Eye and screen quadrants.

According to the four quadrants of the eye and screen, sclera movement is translated into cursor movement using equations 13 to 28.
1) If the sum of white pixels of the eye in quadrant 1 is greater than in the other quadrants, then
R_x = W_s / W_e ............. (13)
R_y = H_s / H_e ............. (14)
CMC_x = COS_x ± (R_x · s_x) ............. (15)
CMC_y = COS_y ± (R_y · s_y) ............. (16)
with the signs chosen so that CMC lies in quadrant 1 of the screen.
2) If the sum of white pixels of the eye in quadrant 2 is greater than in the other quadrants, the same form applies with the signs placing CMC in screen quadrant 2 (equations 17 to 20).
3) If the sum of white pixels in quadrant 3 is the greatest, the signs place CMC in screen quadrant 3 (equations 21 to 24).
4) If the sum of white pixels in quadrant 4 is the greatest, the signs place CMC in screen quadrant 4 (equations 25 to 28).
Here R, W_s, W_e, H_s, H_e, CMC, and s are the screen-to-eye ratio, width of the screen, width of the eye, height of the screen, height of the eye, the cursor movement coordinate, and the distance of the sclera from the center on the x- and y-coordinates of the four quadrants, respectively.

F. Pixel Calculation and Sclera Detection
The sclera is the white part of the eye [9]. To determine whether an eye is opened or closed, the value of the pixels is calculated from the binary image as shown in equation 29:

S = Σx Σy I(x, y) ............. (29)

where S is the sum of white pixel values and I is the binary image. If the value of the pixels is one or more (S >= 1), the eye is opened and the sclera is detected; if the value of the pixels is zero (S == 0), the eye is closed and the sclera is not detected.

G. Mouse-Cursor Movement and Click Operations
In the proposed system, the mouse cursor moves according to the quadrants: if both eyes are opened, the sclera of the left eye is split into four quadrants, and the cursor accordingly moves over the quadrants of the screen with the values of CMC_x and CMC_y in each quadrant, using the mouseMove(CMC_x, CMC_y) function.
If both eyes are closed, as shown in experiment 3 Fig. 13 (a), the value of the pixels is zero, and this is taken as input to perform operations like double click and typing using the virtual keyboard. As shown in Fig. 9, typing using the virtual keyboard is performed by moving the cursor with the eyes to the required key and then closing both eyes. With both eyes closed, the image consists of black pixels only, which means the value of the pixels is zero; this is taken as input to perform the clicking operation.
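The quadrant-based cursor mapping (equations 13 to 28) and the white-pixel sum of equation 29 can be sketched as follows. The sign convention for the s offsets is an assumption for illustration (quadrants 1 and 2 above the center, quadrants 1 and 3 to its left); the paper fixes the signs per quadrant without listing them.

```python
import numpy as np

def white_pixel_sum(binary_eye):
    """Eq. (29): S = sum of white pixels; S == 0 means the eye is closed."""
    return int(binary_eye.sum())

def cmc(quadrant, cos_x, cos_y, w_s, h_s, w_e, h_e, s_x, s_y):
    """Eq. (13)-(28): cursor movement coordinate for the winning quadrant.
    Quadrant layout is an assumed convention: 1/3 left of center, 1/2 above."""
    r_x, r_y = w_s / w_e, h_s / h_e            # screen-to-eye ratios
    sgn_x = -1 if quadrant in (1, 3) else 1
    sgn_y = -1 if quadrant in (1, 2) else 1
    return cos_x + sgn_x * r_x * s_x, cos_y + sgn_y * r_y * s_y

eye = np.array([[0, 1], [0, 0]], dtype=np.uint8)
print(white_pixel_sum(eye))                      # → 1 (open, sclera detected)
print(cmc(4, 320, 240, 640, 480, 32, 16, 2, 1))  # → (360.0, 270.0)
```

Because the ratios r_x and r_y scale eye-space offsets up to screen space, a small sclera displacement produces a proportionally larger cursor displacement.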
Fig.9 Typing using virtual keyboard
If the left eye is opened and the right eye is closed, as shown in experiment 3 Fig. 13 (b), the value of the pixels of the left eye image is one or more (S >= 1) and the value of the pixels of the right eye image is zero (S == 0). These pixel values are taken as input to perform operations like selection and drag and drop, using the mousePress() and mouseRelease() functions. Similarly, if the right eye is opened and the left eye is closed, as shown in experiment 3 Fig. 13 (c), the value of the pixels of the right eye image is one or more (S >= 1) and the value of the pixels of the left eye image is zero (S == 0). These values are given as input to perform the right click operation.
The left eye is divided into four quadrants, and the quadrant with the highest sum of white pixel values is taken as input to move the cursor according to the x- and y-coordinate values CMC_x and CMC_y on the screen. Fig. 12 shows the sum of white pixel values in each quadrant, which is used for cursor movement: Fig. 12 (a) shows S having its highest value in Q1; Fig. 12 (b) in Q2; Fig. 12 (c) in Q3; and Fig. 12 (d) in Q4, where S is the sum of white pixel values and Q1, Q2, Q3, and Q4 are quadrants 1 to 4, respectively.
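The click decisions of this section reduce to a small mapping from the two per-eye pixel sums to an action; a minimal sketch (the action names are illustrative labels, not the paper's API):

```python
def eye_action(s_left, s_right):
    """Map per-eye white-pixel sums (eq. 29) to the operations of Section G:
    both open -> cursor movement; both closed -> click/typing;
    left open only -> selection / drag and drop; right open only -> right click."""
    left_open, right_open = s_left >= 1, s_right >= 1
    if left_open and right_open:
        return "move_cursor"
    if not left_open and not right_open:
        return "click_or_type"       # double click / virtual-keyboard typing
    if left_open:
        return "select_drag_drop"    # via mousePress() / mouseRelease()
    return "right_click"

print(eye_action(3, 5))  # → move_cursor
print(eye_action(0, 0))  # → click_or_type
print(eye_action(2, 0))  # → select_drag_drop
print(eye_action(0, 2))  # → right_click
```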
IV. EXPERIMENTS
a) Experiment 1: Face and Eye Detection
In the proposed system, face and eye detection gives sufficient results, and its accuracy remains the same under different lighting conditions: the accuracy is 100% when 200 frames of data sets are tested. A few samples of face and eye detection are displayed in Fig. 10.

Fig. 10. Samples of face and eye detection.

From the detected face and eyes, pairs of eyes are cropped and extracted for further processing, as in Fig. 11. Fig. 11 also shows eye conditions such as centre, left aligned, right aligned, left eye closed, right eye closed, and both eyes closed.

Fig. 11. Cropped and extracted eye results.

Results of experiment 1 are presented in Table I.

Table I. Experiment 1: Accuracy results
Face detection: 100%
Eye detection: 100%
Eye crop and extract: 100%

b) Experiment 2: Eye to Cursor Integration
Experiments with the proposed system under different eye conditions are shown in Fig. 12 and Fig. 13. If both eyes are opened, as shown in Fig. 12, cursor movement is performed according to the quadrant with the highest sum of white pixel values.

(a) Q1 (b) Q2 (c) Q3 (d) Q4
Fig. 12. Sum of white pixel values in each quadrant.

Results of experiment 2 are presented in Table II.

Table II. Experiment 2: Accuracy results
Sclera detection: 100%
Quadrant division: 100%
Calculation of pixel value (S): 100%
CMC computation (CMC_x, CMC_y): 100%

c) Experiment 3: Eye Blink for Mouse Click Operation
To perform mouse click operations (double click, left click, right click, selection, drag and drop, and typing using the virtual keyboard), at least one eye must be closed. In Fig. 13 (a) both eyes are closed; such images are taken as input to perform operations like double click and typing using the virtual keyboard. Fig. 13 (b) shows a result when the left eye is opened and the right eye is closed, which is used for the right-click operation. If the left eye is closed and the right eye is opened, as shown in Fig. 13 (c), operations like left click, selection, and drag and drop are performed.

(a) Both closed (b) Left open (c) Right open
Fig. 13. Eye closing types.

Results of experiment 3 are presented in Table III.

Table III. Experiment 3: Accuracy results
Both eyes closed: 100%
Left eye closed: 100%
Right eye closed: 100%
V. RESULTS AND DISCUSSIONS
The proposed system is implemented with a webcam of 640x480 pixel resolution. The algorithms are programmed in MATLAB, and the Java API is used within MATLAB to implement mouse functions such as cursor movement, left click, right click, double click, selection, typing using the virtual keyboard, and drag and drop.
The experiments are performed under both good and poor lighting conditions. From experiment 1 (Table I), it can be seen that the success rate of face and eye detection is 100%, which serves as a key point for further processing. From experiment 2 (Table II), the success rate of pixel value calculation and CMC computation is 100%; the pixel value calculation is used to detect whether the eye is opened or closed, and the CMC computation is used to move the cursor on the x- and y-coordinates of each quadrant. From experiment 3 (Table III), the success rate of detecting the different eye closing types is 100%, which is used for all mouse click operations (double click, left click, right click, selection, drag and drop, and typing using the virtual keyboard).
The proposed method gives better results compared to the experimental results of previous work [1]. A comparative analysis of the previous work and the proposed method is shown in Table IV.
The proposed system performs well even under poor lighting conditions, but the results can still be improved to attain more efficient cursor movement.

Table IV. Comparative analysis of previous work with the proposed system
VI. CONCLUSION AND FUTURE WORK
The proposed system helps hand disabled people use the computer by taking eye movements as input and detecting eye conditions such as opened and closed to perform various mouse functions. The system is easy to use and requires only minimal training before use. The proposed system performs well under both good and poor lighting conditions.
The proposed system can be enhanced by improving the eye movement detection technique for more efficient cursor movement. Also, the system currently detects downward eye movement as closed eyes; this can be improved to add more functionality to the system.
ACKNOWLEDGMENTS
First and foremost, praises and thanks to God, the Almighty, who has been giving me every blessing that has made me who I am today and enabled me to accomplish this thesis: patience, health, wisdom, and blessing. Without all these things, I could not have finished this thesis successfully.
I would like to express my deepest gratitude to my advisor, Professor Gagandeep Kaur, for guiding me, giving her precious time, and sharing great ideas that enabled me to complete this thesis.
I would also like to express my heartfelt thanks to Prof. Praveen Gubbala for his coordination and valuable advice.
My appreciation must also be addressed to Dr. Shraddha Phansalkar, HOD, for her patience, invaluable advice and suggestions, help, and support.
My appreciation must also be addressed to Dr. T. P. Singh, Director, for his patience, invaluable advice and suggestions, help, and support.
I would like to whole-heartedly thank the secretary of CSE, Prachi Jagtap, for her patience, help, and kindness.
I want to convey great thanks to all my lecturers for their contribution in sharing knowledge and advice during my academic years.
I would like to thank the Ethiopian Ministry of Education and the Ethiopian Embassy, India, for providing me with a scholarship during my study at Symbiosis International University.
I am extremely grateful to my beloved parents for their love, prayers, caring, and sacrifices in educating and preparing me for my future.
I am very much thankful to my wife for her love, understanding, prayers, and continuing support to complete this research work.
REFERENCES
[1] S. S. Deepika and G. Murugesan, "A Novel Approach for Human Computer Interface Based on Eye Movements for Disabled People", IEEE, 978-1-4799-6085-9/15, 2015.
[2] Muhammad Awais, Nasreen Badruddin and Micheal Drieberg, "Automated Eye Blink Detection and Tracking Using Template Matching", IEEE, 978-1-4799-2656-5/13, 2013.
[3] Ryo Shimata, Yoshihiro Mitani and Tsumoru Ochiai, "A Study of Pupil Detection and Tracking by Image Processing Techniques for a Human Eye–Computer Interaction System", IEEE, 978-1-4799-8676-7/15, SNPD 2015, June 1-3, 2015, Takamatsu, Japan.
[4] M. Mangaiyarkarasi and A. Geetha, "Cursor Control System Using Facial Expressions for Human-Computer Interaction", ISSN: 0976-1353, Vol. 8, Issue 1, April 2014.
[5] V. B. Raskar, Priyanka E. Borhade, Monali R. Gayake and Sujata B. Pimpale, "Tracking a Gaze Using A Method of Eye Localization", International Journal of Advanced Research in Computer Engineering & Technology (IJARCET), Vol. 4, Issue 3, March 2015.
[6] Jianbin Xiong, Weichao Xu, Wei Liao, Qinruo Wang, Jianqi Liu and Qiong Liang, "Eye Control System Base on Ameliorated Hough Transform Algorithm", IEEE Sensors Journal, Vol. 13, No. 9, September 2013.
[7] Aleksandra Krolak and Paweł Strumiłło, "Eye-Blink Detection System for Human–Computer Interaction", Springer, Univ Access Inf Soc, 11:409-419, 2012.
[8] Muhammad Usman Ghani, Sarah Chaudhry, Maryam Sohail and Muhammad Nafees Geelani, "GazePointer: A Real Time Mouse Pointer Control Implementation Based On Eye Gaze Tracking", IEEE, 978-1-4799-3043-2/13, 2013.
[9] Sumeet Agrawal and Yash Khandelwal, "Human Computer Interaction using Iris and Blink Detection", International Journal of Advanced Research in Computer Science and Software Engineering, 5(8), August 2015, pp. 641-646.
[10] Youlian Zhu and Cheng Huang, "An Improved Median Filtering Algorithm for Image Noise Reduction", SciVerse ScienceDirect, Elsevier, Physics Procedia 25 (2012) 609-616.
[11] Bin Xiong and Xiaoqing Ding, "A Generic Object Detection Using a Single Query Image Without Training", Tsinghua Science and Technology, April 2012, 17(2): 194-201.
[12] Wei Zhang, Bo Cheng and Yingzi Lin, "Driver Drowsiness Recognition Based on Computer Vision Technology", Tsinghua Science and Technology, June 2012, 17(3): 354-362.
[13] Ziho Kang and Steven J. Landry, "An Eye Movement Analysis Algorithm for a Multielement Target Tracking Task: Maximum Transition-Based Agglomerative Hierarchical Clustering", IEEE Transactions on Human-Machine Systems, Vol. 45, No. 1, February 2015.
[14] Moe Win, A. R. Bushroa, M. A. Hassan, N. M. Hilman and Ari Ide-Ektessabi, "A Contrast Adjustment Thresholding Method for Surface Defect Detection Based on Mesoscopy", IEEE Transactions on Industrial Informatics, Vol. 11, No. 3, June 2015.
[15] Sankha S. Mukherjee and Neil Martin Robertson, "Deep Head Pose: Gaze-Direction Estimation in Multimodal Video", IEEE Transactions on Multimedia, Vol. 17, No. 11, November 2015.
[16] Wiener deconvolution (n.d.). In Wikipedia. Retrieved February 22, 2016, from https://en.wikipedia.org/wiki/Wiener_deconvolution
[17] Thresholding (image processing) (n.d.). In Wikipedia. Retrieved February 22, 2016, from https://en.wikipedia.org/wiki/Thresholdin_%28image_processing%29
[18] Viola–Jones object detection framework (n.d.). In Wikipedia. Retrieved February 22, 2016, from https://en.wikipedia.org/wiki/Viola%E2%80%93Jones_object_detection_framework
[19] Yash Shaileshkumar Desai, "Natural Eye Movement & its application for paralyzed patients", IJETT, Volume 4, Issue 4, 2013.