Blob Detection
Blob detection was a branch of the user interface (UI) development undertaken to satisfy the requirement [LINK] of detecting the robot through the UI. Object or colour detection was necessary so that removal functions could be added to augment the virtual reality we create through maps, removing the green path state and virtually 'cleaning' the environment. A camera placed parallel to the lens that projected the image was the primary means of interfacing between the two major software requirements, the robot hardware and the software UI, meaning these requirements were isolated from each other and could be tested independently and integrated at the end.
Initially we had to decide what type of detection method to use and what object to affix to the robot for tracking, ensuring that there was no interference with the robot's sensors or its ability to gather sensory input. Methods were proposed in the [G53ARS link] to achieve object tracking through 'blob detection' techniques.
This part of the software was partially volatile and required more input from our supervisor to validate the direction we were going in; because of this, the requirements specification had to be flexible enough to accommodate different approaches. The main approaches researched and discussed were detection by object colour and detection by shape.
Beginning
Through our formal meetings, our supervisor suggested reading material in Python [bib link 1] that we could use to develop blob detection, if we wanted to use that language. The minutes of 29.10.2013 show the first mention of blob detection and possible ways to implement it.
Because the UI was being programmed in Java, a Java-compatible library would make the integration between the UI and blob detection easier. Choosing Python would have made it easier to prototype the functionality for this requirement, since the tutorials and the general programming paradigm of Python make programs shorter. For compatibility with the design choice already in development, however (Java for our UI, with JavaFX libraries to model the view), a computer vision suite that allowed us to implement these functions in Java would be ideal.
Initially we found libraries that could satisfy the requirement of adding the webcam stream into the UI [BIB link 04]. After researching different methods for implementing the blob detection, a Java-based variant of the suite was found in the same resource: OpenCV was the framework on which the libraries for the different functions were built, and luckily a Java implementation was available, called JavaCV.
Through development, the methods already had a structure of feeding in images and processing them: converting their colour scale, blurring them and making minor changes. In late January/early February I finally managed to get the code to detect my webcam. From there, small steps were made, such as taking each frame from the webcam and applying similar image processing.
Finally I started working on binary images (black & white); this showed me that with the right variable values I could isolate the colour and outline an object which we might then be able to detect. This led me to start searching for different methods of either colour or shape detection.
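As a rough sketch of what this early pipeline looked like (written against the 0.x JavaCV API we were using at the time; the device index, blur kernel size and threshold value are illustrative assumptions rather than our actual settings):

    import com.googlecode.javacv.OpenCVFrameGrabber;
    import static com.googlecode.javacv.cpp.opencv_core.*;
    import static com.googlecode.javacv.cpp.opencv_imgproc.*;

    public class FramePipeline {
        public static void main(String[] args) throws Exception {
            // Open the default webcam (device index 0 is an assumption).
            OpenCVFrameGrabber grabber = new OpenCVFrameGrabber(0);
            grabber.start();

            IplImage frame = grabber.grab();
            IplImage gray = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
            IplImage binary = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);

            // Convert the colour scale, blur, then threshold down to a
            // binary (black & white) image that outlines the object.
            cvCvtColor(frame, gray, CV_BGR2GRAY);
            cvSmooth(gray, gray, CV_GAUSSIAN, 9);
            cvThreshold(gray, binary, 128, 255, CV_THRESH_BINARY);

            grabber.stop();
        }
    }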
Colour
The development process of using the blob detection libraries allowed us to manipulate images between different colour spaces: red, green, blue (RGB) => hue, saturation, value (HSV). One of the methods made possible by this sort of colour processing was colour filtering. Here an image is fed in through the webcam and two copies are made, one HSV and one black and white (binary). Two variables are then needed as a minimum colour value and a maximum colour value; this can be done using CvScalar, where the values of the three colour channels are passed in as in the method below:
"CvScalar min = cvScalar(0,0,0), max = cvScalar(255,255,255);"
These bounds take in every possible colour value in the red, green and blue channels, giving a full colour spectrum to work within; from there the cvInRangeS command can be used to check whether any value in the captured image falls within that range.
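A minimal sketch of this filtering step, again assuming the 0.x JavaCV API; the wide-open bounds mirror the snippet above and would be narrowed to the marker colour in practice:

    import static com.googlecode.javacv.cpp.opencv_core.*;
    import static com.googlecode.javacv.cpp.opencv_imgproc.*;

    public class ColourFilter {
        // Returns a binary mask: 255 where a pixel's HSV value lies
        // between min and max, 0 everywhere else.
        public static IplImage filterColour(IplImage frame) {
            IplImage hsv = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 3);
            IplImage mask = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);

            cvCvtColor(frame, hsv, CV_BGR2HSV);

            // Bounds spanning every possible channel value, as in the
            // snippet above (the fourth component is unused here).
            CvScalar min = cvScalar(0, 0, 0, 0);
            CvScalar max = cvScalar(255, 255, 255, 0);

            cvInRangeS(hsv, min, max, mask);
            return mask;
        }
    }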
This method allowed the UI to pick up only certain colours; however, we identified several issues. First, the variables would have to be very precise in order to pick up only the colour we put on the robot, and this colour might change slightly if other colours were projected on top of it, for example the green path drawn, the virtual border, and the avoidance spots on the image. This made it seem like it could be quite temperamental. Also, because of how the removal function was set up, a centre point would be needed, and if the colour wasn't going to be picked up consistently this would add more problems, as the track might then not be cleared.
We used the reading material below, implemented in JavaCV, for implementing colour detection using the suite [link bib 2].
Shape
Maintaining the idea of tracking an object or colour on the robot (a circle), we started looking for methods to pick up shapes instead of colours. We found a method called the 'Hough Circle Transform'. This was highly useful, as OpenCV already has an inbuilt detection method. The code takes in an image through the webcam and applies a blur filter so that the circle appears smoother and is therefore easier to pick up.
The 'Hough Circle Transform' was straightforward to adopt, as I already had the code to read in images from the webcam, and the constructor for the method allowed us to pick up circles in different scenarios, such as being 4 metres away from the webcam. This was highly important because at first we couldn't pick up the circle we were using from even 2 metres away.
We ended up adhering to shape detection and the 'Hough Circle Transform' algorithm to fulfil our requirements. The pipeline where frames from the webcam were fed in and displayed to the screen was already implemented, so the only things that needed to be added were the 'Hough Circle Transform' constructor and a loop that steps through the detected circles and finds the centre of each (this was particularly useful, as the coordinates of this point are where the removal method runs).
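As a rough sketch of how this fitted together in JavaCV (the cvHoughCircles parameter values below are illustrative placeholders, not the tuned values we settled on):

    import com.googlecode.javacpp.FloatPointer;
    import static com.googlecode.javacv.cpp.opencv_core.*;
    import static com.googlecode.javacv.cpp.opencv_imgproc.*;

    public class CircleDetection {
        public static void detect(IplImage gray, CvMemStorage storage) {
            // Blur first so the circle edge appears smoother and is
            // easier to pick up.
            cvSmooth(gray, gray, CV_GAUSSIAN, 9);

            // dp=1, minDist=100, Canny/accumulator thresholds=100,
            // radii 10-200 px: all placeholder values.
            CvSeq circles = cvHoughCircles(gray, storage, CV_HOUGH_GRADIENT,
                    1, 100, 100, 100, 10, 200);

            // Step through each detected circle and read its centre;
            // this point is where the removal method runs.
            for (int i = 0; i < circles.total(); i++) {
                FloatPointer c = new FloatPointer(cvGetSeqElem(circles, i));
                float centreX = c.get(0);
                float centreY = c.get(1);
                float radius = c.get(2);
                System.out.printf("Circle at (%.0f, %.0f), radius %.0f%n",
                        centreX, centreY, radius);
            }
        }
    }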
Here is an approach we found using this circle method, which we adapted to fit our requirements with a different capture method and environment-specific constructor variables that determined the behaviour: [link bib 3].
Tests
Initially, tests for the Hough Circle Transform were performed in my own workspace, not under the conditions of the lab. This resulted in several tests performing well in general but not for the conditions set up as a baseline in the robotics lab. That being said, several test cases from my own workspace did prove useful:

- Testing how far away an object could be and still be picked up, in connection with the Hough Circle constructor arguments. I was able to test at just over 4 metres away (the maximum space I had available, and twice the space needed to work in the lab) with several differently sized objects.

- Testing the reflective properties of materials that would interfere with the detection of a circle, due to distorted values being picked up despite the Gaussian filter. This led me to find we couldn't use any plastic materials, as they reflected a large amount of light and could easily be confused with the white background we were using. This also made me realise the need for a high-contrast object to further combat the problem of reflection.
These tests were useful, but more testing still needed to be done in the lab. Several of the objects that I used in my workspace didn't get picked up in the lab; I've put this down to a difference in environmental conditions. We realised, though, that the circle detection worked far better on rings than on filled-in circles, and we have some examples in the form of screen captures showing this.
(Picture Miles has)
Once we realised that rings were detected far better than filled-in circles, we moved on to extreme cases of detection. We wanted to make sure that circles could still be detected even in the furthest corners of the camera's vision. We tested this quite simply by cutting the viewable area into a grid and placing a circle of similar size into each section, thus proving the camera could detect circles in every single part of the 'track'.
(2nd picture that Miles has)
One major issue we came across during prolonged periods of testing was that the program began to crash after 20 minutes or so. After a fair bit of testing, and finally opening the Task Manager, we realised that there was a memory leak somewhere in the program. We initially thought that the cause was saving the values of every circle ever detected in a CvMemStorage. I tried combatting this by clearing the storage at the end of every processing loop, but unfortunately the memory leak was still present. After some further testing we found that the memory leak came from feeding the camera images into the JavaFX set-up. As of 1st April 2014 we have no resolution to this problem.
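For reference, the mitigation we tried looked roughly like this (a sketch with placeholder parameter values): one CvMemStorage is reused and cleared every frame, which bounds the Hough storage, but the leak persisted because it was in the JavaFX hand-off, not here:

    import com.googlecode.javacv.OpenCVFrameGrabber;
    import static com.googlecode.javacv.cpp.opencv_core.*;
    import static com.googlecode.javacv.cpp.opencv_imgproc.*;

    public class LeakMitigation {
        public static void main(String[] args) throws Exception {
            OpenCVFrameGrabber grabber = new OpenCVFrameGrabber(0);
            grabber.start();

            CvMemStorage storage = CvMemStorage.create();
            IplImage frame = grabber.grab();
            IplImage gray = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);

            for (int i = 0; i < 1000; i++) { // bounded for the sketch
                frame = grabber.grab();
                cvCvtColor(frame, gray, CV_BGR2GRAY);
                cvHoughCircles(gray, storage, CV_HOUGH_GRADIENT,
                        1, 100, 100, 100, 10, 200);
                // Clear (rather than reallocate) the storage at the end
                // of every processing loop so old circle results are
                // reclaimed.
                cvClearMemStorage(storage);
            }
            grabber.stop();
        }
    }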
Testing of Materials in my own workspace
Material    Size (cm)   Distance (m)   Detected?
Card        8           4              Yes
Card        6           4              Yes
Card        4           4              No
Plastic     8           4              Yes
Plastic     6           4              No
Plastic     4           4              No
Porcelain   8           4              No
Porcelain   4           4              No
Paper       8           4              Yes
Paper       6           4              Yes
Paper       6           4              Yes
Testing of Materials in the Lab's partially controlled conditions.
Material    Size (cm)   Distance (m)   Detected?
Card        8           2              Yes
Card        6           2              No
Card        4           2              No
Plastic     8           2              No
Plastic     6           2              No
Plastic     4           2              No
Porcelain   8           2              No
Porcelain   4           2              No
Paper       8           2              Yes
Paper       6           2              Yes
Paper       4           2              No
(2 images from testing in my workspace using plastic and porcelain. The porcelain was VERY
temperamental and was rarely ever actually picked up.)
Detection criterion for the tables above: the circle must be detected for at least ¾ of every second.
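As a hypothetical illustration of how this criterion can be checked mechanically (the frame rate and every name here are assumptions, not part of our code):

    public class DetectionCriterion {
        private static final int FRAMES_PER_SECOND = 30; // assumed camera rate
        private int framesSeen = 0;
        private int framesWithCircle = 0;

        // Call once per processed frame; returns the pass/fail result
        // at the end of each one-second window, or null mid-window.
        public Boolean recordFrame(boolean circleDetected) {
            framesSeen++;
            if (circleDetected) framesWithCircle++;
            if (framesSeen == FRAMES_PER_SECOND) {
                // Pass if detections cover at least 3/4 of the window.
                boolean pass = 4 * framesWithCircle >= 3 * FRAMES_PER_SECOND;
                framesSeen = 0;
                framesWithCircle = 0;
                return pass;
            }
            return null;
        }
    }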
Conclusion
For our needs and the project's requirements, the method we've gone with works fine. However, given more time I would have liked to try to pick up more circles of different sizes and materials so that there is a lower failure rate on detection. I also would like to have done more with actual colour detection, but given how colour properties change under the high intensity of the projector we're using, and with the primary colour values already being used for other behaviours, it may not have been that viable in the first place.
Bibliography
1. OpenCV-Python tutorials: http://docs.opencv.org/trunk/doc/py_tutorials/py_tutorials.html
2. Coloured object tracking in Java with JavaCV: http://ganeshtiwaridotcomdotnp.blogspot.co.uk/2012/04/colored-object-tracking-in-java-javacv.html?m=1
3. Hough Circle in JavaCV: http://opencvlover.blogspot.co.uk/2012/07/hough-circle-in-javacv.html
4. Webcam Capture API for Java: https://github.com/sarxos/webcam-capture