www.ijecs.in International Journal Of Engineering And Computer Science ISSN: 2319-7242, Volume 4, Issue 2, February 2015, Page No. 10325-10332

Identify Obstacles of Different Types in the Path of UGV Using Region Based Image Segmentation

Rajinder Kaur, Department of Information Technology, Chandigarh University, Mohali, India, rajinder1023@gmail.com
Amanpreet Kaur, Assistant Professor, CSE Department, Chandigarh University, Mohali, India, amanpreet_boparai@yahoo.co.in

Abstract— An unmanned ground vehicle (UGV) is a smart autonomous vehicle that is capable of performing tasks without the need for a human operator. Such automated vehicles work during off-road navigation and are mainly used in military operations such as bomb detection and border patrolling. Automated vehicles are also needed for driving road vehicles, where human errors cause major loss of life and property. For this purpose, the functionality of the unmanned ground vehicle can be enhanced by using region based image segmentation, which helps to identify the obstacles that come in the path of the UGV. In this paper, a region based image segmentation algorithm is proposed to identify obstacles (car, human, tree etc.). This method is compared with an edge based image segmentation method on three parameters (angle of projection, angle of disjunction and number of segmented regions). Based on the comparisons, the region based image segmentation algorithm produces more accurate results than the edge based segmentation algorithm.

Keywords— Image segmentation, unmanned ground vehicle, region, edge, sensor, remote, autonomous, segments.

I. INTRODUCTION

Nowadays, UGVs are used in many applications such as military and civilian operations, border patrolling, surveillance, law enforcement, hostage situations, and specific police missions such as detecting and defusing bombs. A UGV has the ability to detect obstacles [1, 2, 15]. It is a smart autonomous vehicle that is capable of performing tasks in a structured or unstructured environment without the help of a human operator. It uses different types of sensors to sense the structured or unstructured environment, takes an action based on these readings, and then passes the sensed information, through a communication medium, to computer operators at different locations. This type of automated vehicle can carry out tasks that a human cannot easily do [1, 2, 3, 5].

A. Classes of UGV

UGVs are mainly classified into two classes: 1) remote operated and 2) autonomous operated.

1) Remote Operated: A remote operated vehicle operates with the help of a human operator through a communication medium. The tasks it executes are observed by the operator using direct visualization or a sensor such as a digital video camera. Examples are a toy remote control car and explosive/bomb disabling vehicles [1, 2].

2) Autonomous Operated: An autonomous vehicle mainly operates without any help from a human operator. These vehicles use sensors to sense the environment, and control algorithms take actions to achieve a goal based on the sensed data. They have the ability to learn autonomously. An example is VisLab's autonomous car [1, 2].

B. How Does a UGV Work

In an autonomous vehicle, a sensor fitted on the vehicle senses the structured or unstructured environment; on the basis of these readings the control algorithm takes a decision to achieve the goal, and the output is then forwarded, through a communication medium, to a computer operator at a different location, where it is checked by a human.

Fig 1: How an unmanned ground vehicle works (sense the environment, apply the control algorithm, take a decision, forward information to the human operator).
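To make the sense-decide-report loop of Fig 1 concrete, the following minimal Python sketch shows one possible structure for such a loop. The Sensor, ControlAlgorithm and OperatorLink classes are hypothetical placeholders introduced only for illustration; they are not tied to any particular UGV platform described in this paper.

```python
# Minimal sketch of the sense -> decide -> report loop of Fig 1.
# Sensor, ControlAlgorithm and OperatorLink are hypothetical placeholders.

import time

class Sensor:
    def read(self):
        """Return a (stubbed) reading of the surrounding environment."""
        return {"obstacle_ahead": False, "distance": None}

class ControlAlgorithm:
    def decide(self, reading):
        """Map a sensor reading to a driving action."""
        if reading["obstacle_ahead"]:
            return "stop"
        return "move_forward"

class OperatorLink:
    def forward(self, reading, action):
        """Pass the sensed data and the chosen action to a remote operator."""
        print(f"reading={reading}, action={action}")

def control_loop(sensor, controller, link, cycles=3, period=1.0):
    for _ in range(cycles):
        reading = sensor.read()               # 1. sense the environment
        action = controller.decide(reading)   # 2. control algorithm takes a decision
        link.forward(reading, action)         # 3. forward information to the operator
        time.sleep(period)

if __name__ == "__main__":
    control_loop(Sensor(), ControlAlgorithm(), OperatorLink())
```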
The rest of the paper is organised as follows. Section II describes the previous work on UGVs and image segmentation. Section III describes the proposed work and Section IV contains the experimental results. Section V contains the conclusion.

II. RELATED WORK

Different types of techniques are used in the field of unmanned ground vehicles to detect and avoid obstacles, but the existing techniques have several problems, which are discussed in the following survey.

K. Alonzo et al. [4], in "Obstacle detection for unmanned ground vehicles: A progress report", developed a real-time stereo vision system that is used to sense the environment geometry. The system can work under any condition, such as night, day and low visibility. However, stereo is computationally very expensive for an unmanned ground vehicle, it can identify obstacles only during off-road navigation, and it works only within a limited range.

Gavrila et al. [20], in "Real-time object detection for smart vehicles", present an efficient shape-based object detection method based on distance transforms and describe its use for real-time vision on board vehicles. The method can be used to detect objects of arbitrary shape. It uses a template hierarchy to capture the variety of object shapes; efficient hierarchies can be generated offline for given shape distributions using stochastic optimization techniques. However, it is not possible to provide an analytical expression for the speed-up, because it depends on the actual image data and the template distribution.

B. Francois et al. [5], in "Video-based event recognition: activity representation and probabilistic recognition method", present a recognition method that examines events detected and segmented from a video using a probabilistic analysis of the features of moving objects, i.e. shape, motion and trajectory. The main problem with this method is that tracking a crowd of people is very difficult because of the natural occlusion of body parts.

Antonio et al. [9], in "Visual surveillance by dynamic visual attention method", propose a dynamic visual attention method that divides the view into moving objects, i.e. vehicles and pedestrians, and background. The main problem with this algorithm is that it can extract only those frames from a video whose situation is already predefined.

A. Broggi et al. [6], in "VisLab and the Evolution of Vision-Based UGVs", propose the TerraMax vehicle, which could move autonomously only up to 68 km/h; it cannot work during the night, and its performance is limited by the vehicle's size and height.

J. Pangyu [10], in "Local Difference Probability (LDP)-Based Environment Adaptive Algorithm for Unmanned Ground Vehicle", proposes an LDP algorithm for the detection and recognition of the road area. The method solves problems that arise with generic classification and/or predefined model-based road-detection methods. It can work in known or unknown environments, under different road conditions, with different image quality and with different camera angles.
However, the main problem with this algorithm is that it is used only for road area detection and recognition; it cannot detect different types of obstacles.

B. H. Udaya et al. [11], in "An Autonomous Unmanned Ground Vehicle for Non-Destructive Testing of Fiber-Reinforced Polymer Bridge Decks", propose a non-destructive testing technique for FRP bridge decks that uses two algorithms, IRT and GPR, to enable the UGV to gather the information needed to detect both air-filled and water-filled defects. It is mainly used to minimize human errors, but it is not an entirely efficient inspection system.

B. McKinley et al. [7], in "A Real-Time, Interactive Simulation Environment for Unmanned Ground Vehicles: The Autonomous Navigation Virtual Environment Laboratory (ANVEL)", propose a tool, ANVEL, that uses video game and physics-based techniques which are perceptive, interactive and genuinely useful for UGVs. However, it mainly operates during off-road navigation, and the UGV can detect and avoid obstacles only in a static environment.

A. J. Sterling et al. [12], in "A Constraint-Based Approach to Shared-Adaptive Control of Ground Vehicles", propose a constraint-based technique for semi-autonomous vehicles. The technique is designed and imposed to make sure that the controlled system avoids hazards and loss of strength without the control of a human operator. However, the technique can make the controller unable to turn the wheels fast enough to avoid collisions, and the controller can avoid hazards only during off-road navigation.

T. Saurabh [8], in "Visualization Technique for Unmanned Ground Vehicles Using Point Clouds", proposes a visualization technique for UGVs based on 3D point clouds. A point cloud is a data structure commonly used to represent three-dimensional data. The method uses a 3D scanner that scans the environment in front of the vehicle in one plane and presents the result as a 3D point cloud; cluster extraction helps to extract the clusters in the point cloud, which are then used to identify objects of interest, e.g. a bomb. However, it is mainly used in unmanned ground vehicles for homeland security.

A. U. J. Mewes et al. [16], in "Improved watershed transform for medical image segmentation using prior information", propose a new segmentation algorithm that is able to overcome over-segmentation, to find important areas with low-contrast boundaries, and to improve the otherwise poor detection of slight structures. The algorithm is applied to two important applications, knee cartilage segmentation and white matter/gray matter segmentation in MR images, and it provides good accuracy on both. However, it is only used for medical image segmentation problems.
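As an illustration of the basic watershed idea that [16] builds on, the following sketch shows a generic marker-based watershed segmentation using OpenCV. This is only a hedged illustration of the plain transform, assuming OpenCV and NumPy are available; it does not include the prior-information extensions proposed in [16], and the input file name is illustrative.

```python
# Generic marker-based watershed segmentation (basic transform only,
# without the prior-information extensions of [16]).
# Assumes an input image "cells.png"; requires OpenCV and NumPy.

import cv2
import numpy as np

img = cv2.imread("cells.png")                       # BGR input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Sure background (dilation) and sure foreground (distance-transform peaks)
kernel = np.ones((3, 3), np.uint8)
sure_bg = cv2.dilate(binary, kernel, iterations=3)
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
sure_fg = np.uint8(sure_fg)
unknown = cv2.subtract(sure_bg, sure_fg)

# Markers: background = 1, each object gets its own label, unknown = 0
_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1
markers[unknown == 255] = 0

markers = cv2.watershed(img, markers)               # boundary pixels become -1
img[markers == -1] = (0, 0, 255)                    # draw watershed lines in red
cv2.imwrite("watershed_result.png", img)
```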
F. Guoliang et al. [15], in "Segmenting human from photo images based on a coarse-to-fine scheme", propose an effective coarse-to-fine segmentation algorithm. It works on static images to segment the human body without human intervention, and it is built on two segmentation-based algorithms: the multicue coarse torso detection (MCTD) algorithm and multiple oblique histograms (MOH). MCTD is used to identify the torso, and MOH is a robust iterative algorithm used to obtain a better lower-body segmentation. The method is simple and makes use of a face detector that can currently find only the head position; it does not handle other face orientations.

A. Smolic et al. [14], in "Distinguishing Texture Edges from Object Boundaries in Video", present a very simple method to distinguish object boundary edges from texture edges in video. It is easy to implement, performs well on a variety of datasets and allows for texture/object boundary separation, but it is not sufficient to truly determine all object boundaries.

H. Shi-Min et al. [17], in "Timeline Editing of Objects in Video", propose a novel real-time method for editing object motions in video. The main function of the algorithm is to adjust the timing so that object interactions match the background. Preprocessing is required to separate the moving objects from the background, which is very time consuming. The method is not suitable for videos in which it is difficult to extract the objects from the frames, i.e. to separate the moving objects from the background; it effectively works only on images. Sometimes the method fails to track and separate the foreground or background, and it does not fulfil the requirements of the user when the scene is complex.

Correia et al. [18], in "Objective Evaluation of Video Segmentation Quality", propose parameters to evaluate the quality of segmentation; these parameters approximate the quality of segmentation according to what a human wants to observe. Video segmentation controls segmentation quality by identifying a number of objects, so creating a partition of the video is necessary. The method is used to estimate the performance of segmentation algorithms in different applications, and it can work either for a single object or for whole partitions. However, it generally does not address the temporal aspect of video sequences.

Brejl et al. [21], in "Object localization and border detection criteria design in edge-based image segmentation: automated learning from examples", provide a fully automated model-based image segmentation method. The information necessary to perform the segmentation is automatically derived from a training set. Two models are used to segment images: an object shape model and a border appearance model. With the first model, an approximate location of the object of interest is determined; with the second, accurate border segmentation is performed. The method finds objects of arbitrary shape, rotation or scaling and can handle object variability. However, the model is not well suited to manual outlining of 3-D surfaces because of difficulties that include its complexity and time requirements.

Chien et al. [22], in "Efficient moving object segmentation algorithm using background registration technique", propose an efficient moving object segmentation algorithm suitable for real-time content-based multimedia communication systems. First, a background registration technique is used to construct a reliable background image from the frames. The moving object region is then separated from the background region by comparing the current frame with the constructed background image. Finally, a post-processing step is applied to the obtained object mask to remove noise regions and smooth the object boundary. In situations where object shadows appear in the background region, a pre-processing gradient filter is applied to the input image to reduce the shadow effect. The main problem with this algorithm is that its computational complexity is very high, because both the watershed algorithm and the motion estimation are computationally intensive operations.
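The background-registration idea of [22] can be illustrated, in simplified form, by maintaining a running-average background image and differencing each frame against it. The sketch below is an approximation under that assumption; the file name, update rate and threshold are illustrative, and this is not the exact algorithm of [22].

```python
# Simplified sketch of background registration followed by frame differencing,
# in the spirit of [22]; an illustration, not the authors' exact method.
# Assumes a video file "traffic.avi"; requires OpenCV and NumPy.

import cv2
import numpy as np

cap = cv2.VideoCapture("traffic.avi")
background = None
alpha = 0.05            # background update rate (illustrative)
diff_threshold = 30     # intensity difference treated as "moving" (illustrative)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)

    if background is None:
        background = gray.copy()          # register the first frame as background
        continue

    # Running-average background registration
    background = (1 - alpha) * background + alpha * gray

    # Moving object mask: pixels far from the registered background
    diff = np.abs(gray - background)
    mask = (diff > diff_threshold).astype(np.uint8) * 255

    # Post-processing: remove small noise regions and smooth the boundary
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    cv2.imshow("moving objects", mask)
    if cv2.waitKey(30) == 27:             # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```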
D. Shih-Huai et al. [19], in "A Neuro-Fuzzy Approach for Segmentation of Human Objects in Image Sequences", propose a neuro-fuzzy approach for automatically extracting human objects from video streams based on spatial and temporal features, but the method can extract only one person from a video stream. Its main difficulty is that its processing time is high.

III. PROPOSED WORK

To overcome the problems of the existing techniques, a region based image segmentation algorithm is proposed to identify different types of obstacles. It consists of the following steps (a minimal code sketch of this pipeline is given after Fig 2):

1) Record a video that contains the road, cars, humans and trees using a camera.
2) Extract from the recorded video the frames that meet the requirements of preprocessing; for example, blurred and distorted frames are not suitable.
3) Convert the extracted frames into gray level.
4) Apply the region based segmentation algorithm to detect the different objects in each frame.
5) Bound the different segmented objects (i.e. cars, humans) in rectangular boxes in the frame.
6) Implement an efficient decision making table for deciding the moves of the vehicle.
7) Compare the proposed segmentation technique with the existing technique.

Fig 2: Data flow diagram (record a video, extract frames from the video, convert frames into gray level, apply region based segmentation, bound the segmented obstacles in rectangular boxes, compare region based segmentation with existing techniques).
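The following is a minimal sketch of the pipeline in Fig 2, assuming OpenCV is available. Otsu thresholding followed by connected-component labelling is used here only as a simple stand-in for the region based segmentation step, and the file name, threshold and minimum region size are illustrative, not taken from the paper.

```python
# Minimal sketch of the pipeline in Fig 2: extract a frame, convert it to gray
# level, apply a simple region based segmentation (Otsu threshold + connected
# components as a stand-in), and bound the segmented objects in rectangles.
# File names, thresholds and the minimum region size are illustrative only.

import cv2

cap = cv2.VideoCapture("ugv_road.avi")      # steps 1-2: recorded video, one frame
ok, frame = cap.read()
cap.release()

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)            # step 3: gray level

# Step 4: region based segmentation (here: Otsu threshold + labelling)
_, regions = cv2.threshold(gray, 0, 255,
                           cv2.THRESH_BINARY + cv2.THRESH_OTSU)
n_regions, labels, stats, _ = cv2.connectedComponentsWithStats(regions)

# Step 5: bound each sufficiently large segmented object in a rectangular box
for i in range(1, n_regions):                              # label 0 = background
    x, y, w, h, area = stats[i]
    if area > 500:                                         # ignore tiny regions
        cv2.rectangle(frame, (int(x), int(y)),
                      (int(x + w), int(y + h)), (0, 255, 0), 2)

print("number of segmented regions:", n_regions - 1)
cv2.imwrite("segmented_obstacles.png", frame)
```

The region counter printed at the end corresponds to the "number of segmented regions" parameter used later for the comparison; the decision making table of step 6 is sketched separately in Section IV.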
IV. EXPERIMENTAL RESULTS

Image segmentation is a method of separating an image into several segments, i.e. sets of pixels, so that meaningful information can be collected from each segment to support a decision. Image segmentation is widely used in many fields such as object recognition, image compression, medical imaging and satellite imaging. A region based image segmentation method is proposed that helps to identify objects (cars, humans, trees etc.) that come in the path of the unmanned ground vehicle during on-road navigation, together with parameters (angle of projection, angle of disjunction and number of segmented regions) that are used to calculate the moves of the vehicle. Using this method, different types of objects such as cars, humans and trees are identified. A comparison is then performed between region based segmentation and edge based segmentation. Based on the comparisons, region based segmentation provides more accurate results than edge based segmentation on these parameters; edge based segmentation does not perform well because it does not properly calculate the distance between the vehicle and the identified obstacles, nor the velocity of the identified object. The experiment is performed on ten different images.

The results are calculated based on three parameters: angle of projection, angle of disjunction and number of segmented regions. The comparative study includes the region based and edge based segmentation techniques.

Fig 1 shows the original image, which is extracted from the video. In Fig 2 the original image is converted into gray level; a gray level image contains only intensity (black-to-white) values, so it is easier to segment than a colour image. Fig 3 shows the result of applying region based segmentation to identify the objects that appear in the image, i.e. that come in front of the vehicle. In Fig 4, the segmented object is placed in a rectangular box. The angle of projection, the distance between the vehicle and the identified object (a human), is 53 units; the angle of disjunction, the velocity of the identified object, is 51 units; and the number of segmented regions is 45.

Fig 5 shows the original image extracted from the video. In Fig 6 it is converted into gray level. In Fig 7, region based segmentation identifies the object (a car) that appears in front of the vehicle, and in Fig 8 the segmented object is placed in a rectangular box. The angle of projection between the vehicle and the identified object is 10 units, the angle of disjunction (the velocity of the identified car) is 16 units, and the number of segmented regions is 170.

Fig 9 shows the original image extracted from the video. In Fig 10 it is converted into gray level. In Fig 11, region based segmentation identifies the object (a tree) that appears in the image, and in Fig 12 the segmented tree is placed in a rectangular box. The angle of projection between the vehicle and the identified object is 0 units, because the tree is on the right side of the vehicle; the angle of disjunction (the velocity of the tree) is 0 units; and the number of segmented regions is 170.

Fig 13 shows the original image extracted from the video. In Fig 14 it is converted into gray level. In Fig 15, region based segmentation identifies the object (a car) that appears in front of the vehicle, and in Fig 16 the segmented object is placed in a rectangular box. The angle of projection between the vehicle and the identified object is 1 unit, the angle of disjunction (the velocity of the identified car) is 16 units, and the number of segmented regions is 170.

Parameter analysis:

Angle of projection: This parameter gives the distance between the identified obstacle and the vehicle. Its value should be large, meaning that the distance between the identified object (car, tree, human etc.) and the vehicle is larger.

Angle of disjunction: This parameter gives the speed of the identified obstacle in each frame, calculated from its position. Its value should be small.

Number of segmented regions: This parameter is the number of regions into which the image is segmented, based on a threshold value T; it is used to segment obstacles such as cars and humans. Its value should be small, because with fewer regions it is easier to cover every pixel of the image.

A hedged sketch of how a decision rule over these parameters could be organised is given after the figure list below.

Fig 1: Original image. Fig 2: Gray scale image. Fig 3: Segmented image. Fig 4: Obstacle in rectangular box.
Fig 5: Original image. Fig 6: Gray scale image. Fig 7: Segmented image. Fig 8: Obstacle in rectangular box.
Fig 9: Original image. Fig 10: Gray scale image. Fig 11: Segmented image. Fig 12: Obstacle in rectangular box.
Fig 13: Original image. Fig 14: Gray scale image. Fig 15: Segmented image. Fig 16: Obstacle in rectangular box.
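The decision making table mentioned in Section III can be organised around the three parameters described above. The sketch below shows one possible mapping from the parameters to a move of the vehicle; the thresholds and the move names are hypothetical and are not taken from the paper.

```python
# Hedged sketch of a decision rule over the three parameters described above.
# The thresholds and returned moves are hypothetical, chosen only to show how
# a decision making table for the vehicle could be organised.

def decide_move(angle_of_projection, angle_of_disjunction, n_segmented_regions,
                min_projection=20, max_disjunction=40):
    """Return a driving move for the UGV from the three measured parameters.

    angle_of_projection   -- distance between the vehicle and the obstacle
    angle_of_disjunction  -- velocity of the identified obstacle
    n_segmented_regions   -- number of regions produced by the segmentation
    """
    if n_segmented_regions == 0:
        return "move_forward"                  # nothing segmented in the frame
    if angle_of_projection < min_projection:
        return "stop"                          # obstacle too close
    if angle_of_disjunction > max_disjunction:
        return "slow_down"                     # obstacle moving fast
    return "move_forward"

# Example: the first result frame (human), projection 53, disjunction 51, 45 regions
print(decide_move(53, 51, 45))                 # -> "slow_down"
```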
Graph 1: Comparison between the region based and edge based image segmentation methods based on the number of segmented regions.

This graph shows the comparison between region based and edge based segmentation based on the number of segmented regions. The number of segmented regions obtained with region based segmentation shows a better result than edge based segmentation, because region based segmentation produces fewer regions, so it is easier to cover every pixel.

Graph 2: Comparison between the region based and edge based image segmentation methods based on the angle of projection.

This graph shows the comparison between region based and edge based segmentation based on the angle of projection. The angle of projection obtained with region based segmentation shows a better result than edge based segmentation, because the edge based method cannot calculate the distance to the obstacles properly. Edge based segmentation does not provide good results because in some cases, even though a distance exists between the identified obstacle and the vehicle, it reports the distance as zero. The edge based algorithm also does not calculate the velocity of the identified obstacles.

Table 1: Comparison between region based and edge based segmentation based on the parameters. For edge detection, no angle of disjunction is reported because that technique is not appropriate for finding the angle of disjunction of the identified obstacles.

Frame     | Region based segmentation (no. of segmented regions / angle of projection / angle of disjunction) | Edge detection (no. of segmented regions / angle of projection)
Frame 1   | 45  / 53  / 51   | 616  / 156
Frame 2   | 49  / 104 / 51   | 297  / 104
Frame 3   | 171 / 56  / 46   | 1084 / 51
Frame 4   | 170 / 10  / 16   | 1101 / 0
Frame 5   | 170 / 1   / 1    | 1056 / 0
Frame 6   | 179 / 145 / 113  | 1235 / 75
Frame 7   | 139 / 69  / 0    | 985  / 9
Frame 8   | 229 / 0   / 0    | 1939 / 0
Frame 9   | 132 / 138 / 138  | 943  / 22
Frame 10  | 129 / 101 / 72   | 1010 / 105

The region based segmentation method is compared with the edge based image segmentation method on three parameters: number of segmented regions, angle of disjunction and angle of projection. The region based method provides much better results than the edge based method, because the edge based method, in some cases, reports the distance between the identified obstacle and the vehicle as zero even though a distance exists.
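For reference, an edge based baseline of the kind compared in Table 1 could be reproduced along the following lines, using Canny edge detection followed by contour extraction and contrasting the resulting region count with the region based count. The parameters and file name are illustrative and are not necessarily the configuration used to produce Table 1.

```python
# Illustrative sketch of an edge based baseline for the comparison in Table 1:
# Canny edges plus contour extraction, whose region count is contrasted with
# the region based count. Parameters and file name are illustrative only.

import cv2

frame = cv2.imread("frame_01.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Edge based segmentation: detect edges, then extract external contours
edges = cv2.Canny(gray, 100, 200)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)   # OpenCV 4.x signature
print("edge based: number of segmented regions =", len(contours))

# Region based segmentation (as in the sketch after Fig 2) for comparison
_, regions = cv2.threshold(gray, 0, 255,
                           cv2.THRESH_BINARY + cv2.THRESH_OTSU)
n_regions, _, _, _ = cv2.connectedComponentsWithStats(regions)
print("region based: number of segmented regions =", n_regions - 1)
```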
V. CONCLUSIONS AND FUTURE SCOPE

A UGV is a smart autonomous vehicle that is capable of performing tasks in a structured or unstructured environment without the help of a human operator. Different types of algorithms are used in the field of unmanned ground vehicles for the detection and identification of obstacles, but the existing algorithms have many problems, as discussed in the literature survey. In this paper, we presented a region based image segmentation algorithm to improve the functionality of an unmanned ground vehicle during on-road navigation. The algorithm identifies the obstacles (cars, humans, trees etc.) in images extracted from a video. After the obstacles have been identified, it uses three parameters, the angle of projection, the angle of disjunction and the number of segmented regions, to calculate the moves of the vehicle, i.e. which decision has to be taken based on the measurements. The algorithm was tested on several frames extracted from a video and compared with an edge based image segmentation algorithm. Based on the comparisons, the region based image segmentation algorithm produces more accurate results than the edge based segmentation algorithm, because it properly identifies the obstacles in the frames and properly calculates the distance and velocity of the identified obstacles. In future, the region based image segmentation algorithm can be extended to identify ditches and further types of obstacles in the path of the UGV.

ACKNOWLEDGMENT

I would like to express my sincere gratitude to my supervisor, Ms. Amanpreet Kaur, for assisting me in writing this paper. I thank her for giving me confidence and, most importantly, for keeping the paper on track whenever I needed it.

REFERENCES

[1] A. Mugan, A. Bner, A. Apak, C. Dikilita, H. Heceoglu, V. Sezer, Z. Ercan and M. Gokasan, "Conversion of a conventional electric automobile into an unmanned ground vehicle (UGV)", Proceedings of the IEEE International Conference on Mechatronics, 2012.
[2] A. Mohebbi, M. Keshmiri, S. Safaee and S. Mohebbi, "Design, Simulation and Manufacturing of a Tracked Surveillance Unmanned Ground Vehicle", Proceedings of the IEEE International Conference on Robotics and Biomimetics, pp. 14-18, 2010.
[3] A. Bouhraoua, N. Merah, M. AlDajani and M. ElShafei, "Design and Implementation of an Unmanned Ground Vehicle for Security Applications", Proceedings of the 7th International Symposium on Mechatronics and its Applications, pp. 1-6, 2010.
[4] Matthies, Larry, Alonzo Kelly, Todd Litwin and Greg Tharp, "Obstacle detection for unmanned ground vehicles: A progress report", Springer London, pp. 475-486, 2000.
[5] Somboon, H., Ram Nevatia and Francois Bremond, "Video-based event recognition: activity representation and probabilistic recognition method", Computer Vision and Image Understanding, Vol. 96, pp. 129-162, 2004.
[6] Massimo Bertozzi, Alberto Broggi and Alessandra Fascioli, "VisLab and the Evolution of Vision-Based UGVs", IEEE Computer Society, p. 33, 2006.
[7] Durst, P. J., Goodin, C., Cummins, C., Gates, B., McKinley, B., George, T. and Crawford, "A Real-Time, Interactive Simulation Environment for Unmanned Ground Vehicles: The Autonomous Navigation Virtual Environment Laboratory (ANVEL)", IEEE Fifth International Conference on Information and Computing Science, pp. 7-10, 2012.
[8] Saurabh Trikande, "Visualization Technique for Unmanned Ground Vehicles Using Point Clouds", IEEE International Conference on Advances in Computing, Communications and Informatics (ICACCI), pp. 1832-1836, 2013.
[9] María T. López, Antonio Fernández-Caballero, Miguel A. Fernández, José Mira and Ana E. Delgado, "Visual surveillance by dynamic visual attention method", Pattern Recognition, Vol. 39, pp. 2194-2211, 2006.
[10] Pangyu, J. and Sergiu, N., "Local Difference Probability (LDP)-Based Environment Adaptive Algorithm for Unmanned Ground Vehicle", IEEE Transactions on Intelligent Transportation Systems, Vol. 7, No. 3, September 2006.
[11] Powsiri Klinkhachorn, A. Scott Mercer, Udaya B. Halabe and Hota GangaRao, "An Autonomous Unmanned Ground Vehicle for Non-Destructive Testing of Fiber-Reinforced Polymer Bridge Decks", IEEE Instrumentation & Measurement Magazine, Vol. 10, pp. 28-33, 2007.
[12] A. J. Sterling, K. B. Sisir and I. Karl, "A Constraint-Based Approach to Shared-Adaptive Control of Ground Vehicles", IEEE Intelligent Transportation Systems Magazine, pp. 45-55, 2013.
[13] DeSouza, G. N. and Kak, A. C., "Vision for mobile robot navigation: a survey", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, pp. 237-267, 2002.
[14] D. Martina, G. Markus, S. Aljoscha and W. Oliver, "Distinguishing Texture Edges from Object Boundaries in Video", IEEE Transactions on Image Processing, Vol. 22, No. 12, 2013.
[15] Lu, Huchuan, "Segmenting human from photo images based on a coarse-to-fine scheme", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 42, No. 3, pp. 889-899, 2012.
[16] A. U. J. Mewes, Grau, K. Ron, M. Alcaniz, W. K. Simon and Vicente, "Improved watershed transform for medical image segmentation using prior information", IEEE Transactions on Medical Imaging, Vol. 23, pp. 447-458, 2004.
[17] H. Shi-Min, L. Shao-Ping, R. M. Ralph, W. Jin and Z. Song-Hai, "Timeline Editing of Objects in Video", IEEE Transactions on Visualization and Computer Graphics, Vol. 19, pp. 1218-1227, 2013.
[18] L. C. Paulo and Pereira, F., "Objective Evaluation of Video Segmentation Quality", IEEE Transactions on Image Processing, Vol. 12, No. 2, pp. 186-200, 2003.
[19] L. Shie-Jue, H. D. Shih and O. Chen-Sen, "A Neuro-Fuzzy Approach for Segmentation of Human Objects in Image Sequences", IEEE Transactions on Systems, Man, and Cybernetics, Vol. 33, pp. 420-437, 2003.
[20] Gavrila, D. M. and Philomin, V., "Real-time object detection for 'smart' vehicles", Proceedings of the Seventh IEEE International Conference on Computer Vision, Vol. 1, pp. 87-93, 1999.
[21] Brejl, M. and Sonka, M., "Object localization and border detection criteria design in edge-based image segmentation: automated learning from examples", IEEE Transactions on Medical Imaging, Vol. 10, pp. 973-985, 2000.
[22] Chien, S. Y., Ma, S. Y. and Chen, L. G., "Efficient moving object segmentation algorithm using background registration technique", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 7, pp. 577-586, 2002.