www.ijecs.in International Journal Of Engineering And Computer Science ISSN:2319-7242
Volume 3 Issue 10 October, 2014 Page No.8795-8797
Controlling a Car Buggy Using a NIR Camera with
Hand Gestures
Manish Bhutada, Meenakshi Raut, Mohammed Esoofally, Sanjyot Agureddy
Dept. of Computer Engineering, Trinity College of Engineering & Research, Pune, India
Abstract: We live in the digital era, where the use of digital components in automotive engines, steering systems and other devices is increasing, giving rise to new user interfaces and to growth in Human-Vehicle Interaction (HVI). HVI devices must have key characteristics such as stability, reliability and robustness. They track a gesture, recognize it and perform the desired operation in real time; this can be implemented using a hand gesture control mechanism as one of their components. Gesture control has become a new trend in many human-oriented electronic products. In this paper we use a near-infrared (NIR) camera to capture images, which are forwarded to a gesture recognition architecture that applies a series of recognition algorithms to recognize the gesture and perform the desired action.
Keywords: Hand gestures, gesture recognition, Human-Vehicle Interaction, pixel.
I. INTRODUCTION
Hand gesture recognition has played a significant role in technology over the last few decades. In simple terms, hand gesture control means using hand movements to control devices. Beyond technology, this research has also benefited people in day-to-day life and helped change the way they live. It has likewise helped improve the lives of many people with disabilities [3]. People who suffer from disabilities of the lower limbs can move from place to place using a wheelchair controlled by a joystick; however, a person suffering from an upper-limb injury cannot use a joystick, and therefore uses hand and head gestures to carry out the movements [2]. Although hand gesture recognition is a significant invention in the field of science and technology, it goes hand in hand with image processing [8], which involves manipulation at the pixel level.
In this paper, we introduce a NIR camera that tracks hand gesture movements, processes them and performs the desired action, recognizing the movement in order to control the device [12]. We consider five hand gestures.
II. LITERATURE SURVEY
Hand gesture recognition consists of three basic stages: in the first stage the object is detected, in the second stage the gesture is recognized, and in the third stage the gestures are analyzed in a sequential manner to identify the instructions. The sole purpose of the first stage is to detect hand objects in the digital images; to achieve recognition accuracy, the most common image problems, including noise, poor contrast, poor resolution and unstable brightness, should be solved at this stage. In the second stage, object recognition is programmed to differentiate between the operator image and the obstacle image, where an obstacle image can be any image or hand gesture in the background. In the third stage, the gestures are analyzed using various algorithms to perform the respective actions, as sketched below.
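As a minimal sketch, the three stages can be read as one processing loop; the helper names below (detect_hand, recognize_gesture, gesture_to_instruction) are hypothetical placeholders, not code from the paper.

# Skeleton of the three recognition stages; all helper names are hypothetical.
def detect_hand(frame):
    """Stage 1: detect the hand object after handling noise/contrast problems."""
    ...  # detection logic goes here

def recognize_gesture(hand):
    """Stage 2: separate the operator's hand from background 'obstacle' shapes."""
    ...  # recognition logic goes here

def gesture_to_instruction(history):
    """Stage 3: analyze the gesture sequence and map it to an instruction."""
    ...  # sequential analysis goes here

def run_pipeline(frames):
    history = []
    for frame in frames:
        hand = detect_hand(frame)                     # Stage 1
        if hand is not None:
            history.append(recognize_gesture(hand))   # Stage 2
            yield gesture_to_instruction(history)     # Stage 3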
III. SYSTEM ARCHITECTURE
In our system we propose hand detection and recognition, on the basis of which the required data is fetched from the database. This data is then responsible for the movement of the device: commands are sent via a Radio Frequency Transmitter (RFT) to a Radio Frequency Receiver placed on the device.
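The paper does not specify how commands reach the RFT. Assuming it is driven over a serial (UART) link, a recognized gesture could be forwarded as a one-byte command, as in this sketch; the port name, baud rate and command codes are all hypothetical.

import serial  # pyserial; assumes the RFT accepts raw bytes over a UART link

# Hypothetical one-byte command codes for the five gestures used in this paper.
COMMANDS = {
    "move": b"M",
    "stop": b"S",
    "reverse": b"B",
    "turn_right": b"R",
    "turn_left": b"L",
}

def send_gesture(port: serial.Serial, gesture: str) -> None:
    """Forward a recognized gesture to the RF transmitter as a command byte."""
    port.write(COMMANDS[gesture])

# '/dev/ttyUSB0' and 9600 baud are placeholder values for the RFT link.
with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as rft:
    send_gesture(rft, "stop")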
IV. PROPOSED SYSTEM
This paper proposes a number of environmental conditions that assure good performance of the system.
Figure 1: Gesture recognition stages (Stage 1: Detection of object; Stage 2: Gesture recognition; Stage 3: Instruction to perform)
Likewise, hand gesture recognition has made a drastic change in robot navigation, where handling big robots by voice command was difficult; with the introduction of hand gestures this became quite simple, as only simple hand movements are required to control the robot. For example, we use certain gestures to control the device; for robot handling, the various hand gestures given in the figure can be used.
a. A gesture recognition system can be built using this proposed system and utilized as an indoor or in-vehicle system.
b. The camera captures the images that are in front of it.
c. The background of the hand image is blurred and a blob of the hand is created.
In our system, image capture is done in a straightforward way and the interface is very user-friendly, as the sketch below illustrates.
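As a minimal sketch of the capture step (assuming the NIR camera is exposed to the operating system as an ordinary video device, which the paper does not state), frames can be grabbed with OpenCV:

import cv2  # OpenCV; assumes the NIR camera appears as a standard video device

cap = cv2.VideoCapture(0)  # device index 0 is an assumption
if not cap.isOpened():
    raise RuntimeError("NIR camera not found")
while True:
    ok, frame = cap.read()  # grab one frame (OpenCV delivers it in BGR order)
    if not ok:
        break
    cv2.imshow("NIR frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to stop capturing
        break
cap.release()
cv2.destroyAllWindows()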
The image is grabbed using the NIR camera; the obtained image is converted from the RGB model to the Hue-Saturation-Value (HSV) model, and HSV thresholding is then applied to it. The resulting image is blurred to reduce noise from the background. After thresholding, a blob is obtained that consists only of the required image. Vector calculations are performed with respect to the Center of Gravity (COG): the translation of the hand from one portion of the screen to another helps the system recognize the gesture movement.
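The following per-frame sketch traces this pipeline with common OpenCV calls. The threshold range, blur kernel and the translation-to-gesture mapping are illustrative assumptions; the paper does not publish its parameter values, and taking the largest contour as the hand blob is a simplification.

import cv2
import numpy as np

# Illustrative HSV range for the hand; the paper does not give its thresholds.
HSV_LOW = np.array([0, 40, 60])
HSV_HIGH = np.array([25, 255, 255])

def hand_cog(frame):
    """RGB->HSV, thresholding, blur and blob detection; returns the COG or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)   # OpenCV frames arrive as BGR
    mask = cv2.inRange(hsv, HSV_LOW, HSV_HIGH)     # HSV thresholding
    mask = cv2.GaussianBlur(mask, (9, 9), 0)       # blur away background noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)      # keep only the hand blob
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # Center of Gravity (COG)

def classify_translation(prev_cog, cog, min_shift=40):
    """Map the COG's translation across the screen to a gesture (assumed rule)."""
    dx, dy = cog[0] - prev_cog[0], cog[1] - prev_cog[1]
    if abs(dx) < min_shift and abs(dy) < min_shift:
        return "stop"
    if abs(dx) > abs(dy):
        return "turn_right" if dx > 0 else "turn_left"
    return "reverse" if dy > 0 else "move"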
Figure 2: Images of the gestures recognized by the system
Fig. 2 above shows the gestures recognized by the system: (a) Move, (b) Stop, (c) Reverse, (d) Turn Right, (e) Turn Left.
Figure 3: System flow (Image grab → RGB to HSV conversion → HSV thresholding → Blur → Blob detection → Gesture recognition → Output)
ACKNOWLEDGEMENT
We take this opportunity to thank all the people who helped us complete our project; it would not have been possible without them. First, we thank our ever-supportive guide, Prof. Anup H. Raut, for his unending guidance at all times, for providing direction without complaint, and for all the ideas and effort he put into our project.
We would also like to thank Prof. S. B. Chaudhari (HOD, Comp.) for his interest in and co-operation with the project. We express our sincere gratitude to TCOER, Pune, for providing facilities such as the library and Internet access, and we thank all the people who have helped directly or indirectly in completing our project.
REFERENCES
1. B. Ionescu, V. Suse, C. Gadea, B. Solomon, D. Ionescu, S. Islam and M. Cordea, "Using a NIR Camera for Car Gesture Control," IEEE Latin America Transactions, vol. 12, no. 3, May 2014.
2. F. Wang, J. Zeng and Y. Sun, "Natural hand gesture for intelligent human-computer interaction and medical assistance."
3. H. Stern, J. Wachs, Y. Edan, M. Gillam, M. Smith, C. Feied and J. Handler, "A Real-Time Hand Gesture Interface for Medical Visualization Applications."
4. A. I. Wuryandari and Y. Ariyana, "Basic 3D Interaction Techniques in Augmented Reality," School of Electrical Engineering and Informatics, Institut Teknologi Bandung, Bandung, Indonesia.
5. F. Kishino, A. Utsumi and T. Miyasato, "Multi-Camera Hand Pose Recognition System Using Skeleton Image," in Proc. IEEE International Workshop on Robot and Human Communication, pp. 219-224, 1995.
6. C.-H. Lai, Smart Network System Institute, Institute for Information Industry, Taipei City, Taiwan, "A Fast Gesture Recognition Scheme for Real-Time Human-Machine Interaction Systems."
7. Y. Sun, J. Zeng and F. Wang, Brunel University, Uxbridge UB8 3PH, United Kingdom, "A natural hand gesture system for intelligent human-computer interaction and medical assistance."
8. S. Davidoff, M. K. Lee, J. Zimmerman and A. Dey, "Socially-aware requirements for a smart home," in Proc. International Symposium on Intelligent Environments, pp. 41-44, 2000.
9. P. A. Cabarcos, F. A. Mendoza, R. S. Guerrero, D. Díaz-Sánchez and A. Marín López, "FamTV: An architecture for presence-aware personalized television," IEEE Transactions on Consumer Electronics, vol. 57, no. 1, pp. 6-13, 2011.
10. M. Pantic, A. Vinciarelli and H. Bourlard, "Social signal processing: survey of an emerging domain," Image and Vision Computing, vol. 27, no. 12, pp. 1743-1759, 2009.
11. W.-L. Lu and J. J. Little, "Simultaneous tracking and action recognition using the PCA-HOG descriptor," in Proc. Third Canadian Conference on Computer and Robot Vision, p. 6, 2006.
12. L. Gorelick, M. Blank, E. Shechtman, M. Irani and R. Basri, "Actions as space-time shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 12, pp. 2247-2253, 2007.
13. S. Calderara, R. Cucchiara and A. Prati, "Action signature: a novel holistic representation for action recognition," in Proc. International Conference on Advanced Video and Signal Based Surveillance, IEEE Computer Society Press, Washington, pp. 121-128, 2008.
14. C. Schuldt, I. Laptev and B. Caputo, "Recognizing human actions: a local SVM approach," in Proc. International Conference on Pattern Recognition, vol. 3, IEEE Computer Society Press, Cambridge, UK, pp. 32-36, 2004.
15. J. Davis and M. Shah, "Visual Gesture Recognition," IEE Proceedings - Vision, Image and Signal Processing, vol. 141, no. 2, April 1994.