
A Real-Time Coin Money Counting
Application using LabVIEW
Alexis M. Rodríguez-Díaz
M.E. CpE student
Alexis.Rodriguez@ece.uprm.edu
Abstract- Segmentation and pattern recognition methods are used in a wide variety of applications for object and region detection in images. A fair number of functions exist in MATLAB and LabVIEW, including automatic threshold selection and edge detection. However, LabVIEW proves better suited for our implementation due to its much broader set of built-in functions. We also show that this "coin money counting" scheme can be done in real time.
I. PROGRAMMING PLATFORM SELECTION
MATLAB was initially the preferred option to implement the real-time "Coin Money Counting Application" (CMCA) for several reasons, but mainly because it is widely used in academia. However, several issues had to be confronted with MATLAB. First, MATLAB has no built-in functionality for lens distortion correction. Toolkits are available from academia (see Caltech's Camera Calibration Toolbox for MATLAB), but their usage and implementation may not be trivial.
Second, some methods used for machine vision (like pattern matching) require combining MATLAB with user-defined functions, which are prone to bad programming practices that degrade performance as a whole. Finally, MATLAB code is interpreted. Romer et al. [1] demonstrated how an interpreter's performance is linked to a variety of factors outside the programmer's control, like "the way the virtual machine names and accesses memory". This, combined with bad programming practices, makes MATLAB a second option.
LabVIEW overcomes these issues simply because it has a greater variety of these methods built in. Because built-in methods are designed with the best performance in mind, it is preferable to use them when possible. Gross [2], however, notes that although the effort and complexity of programming in LabVIEW's graphical programming language G seemed high compared to MATLAB, LabVIEW clearly has advantages concerning speed of data acquisition and flexibility of the measuring procedure.
If the application required functions that did not exist in LabVIEW, they would have had to be created using the G programming language (not practical in terms of time to implement) or in MATLAB (decreasing overall performance). But this was not the case, since LabVIEW's built-in functions proved to be more than enough for this implementation.
II. MACHINE VISION METHOD SELECTION
LabVIEW has many methods used for object detection. Method selection, although flexible, must be scrutinized and kept specific to avoid unnecessary operations. The selection of certain methods and the reasons behind them are explained below.
A. Calibration
To effectively detect coins, image calibration is necessary even if distortion is not considerable. The reason is that distortion tends to make round objects look somewhat elliptical when they are located in the distorted areas of the image. As an object becomes more elliptical, its apparent radius decreases enough for one coin to be confused with another in this application.
Image calibration in LabVIEW is straightforward and
simple, yet effective. It is done only once, and recalibration is
only required if camera orientation or its working distance
changes.
B. Segmentation
Segmentation is normally used to separate objects from the background, when possible. Images of scenes depend on illumination, camera parameters, and camera location [3]. When these cannot be controlled or known, they must be estimated by observation. But in most cases, including CMCA, they can be manipulated in a way that allows for optimal or easier detection.
Controlling illumination in the field of view is vital. Since the objects to be detected have a high gray-level, the background should have a very low gray-level to increase contrast.
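With such a high-contrast setup, segmentation can be reduced to a simple global threshold. The following is a minimal sketch of that idea outside LabVIEW, using Python with OpenCV; the file names and the morphological cleanup step are illustrative assumptions, not part of the original implementation.

```python
import cv2

# Load the captured scene as a grayscale image (path is illustrative).
frame = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

# With bright coins on a black background, Otsu's method can pick the
# threshold automatically; the coins become the white (foreground) pixels.
_, mask = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Optional cleanup: remove small specks caused by reflections or dust.
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                        cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))

cv2.imwrite("coin_mask.png", mask)
```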
C. Pattern Matching/Recognition
A very popular method for object detection uses pre-defined templates of the object to be found. At the low end, it can be as "simple" as Optical Character Recognition (OCR), used in processing checks and sorting mail, among other tasks [4]. This, however, is only practical when tight constraints are imposed on color (or gray-level, for that matter), size, positioning, and especially the orientation of the characters (coins, in CMCA).
The material that composes coins makes them highly
reflective to light, so small changes in illumination are quickly
noticeable. Any considerable change in illumination would
require the coin template to be updated.
Even when
illumination remains unchanged, object orientation must be
determined for proper matching.
For coins, the matching process may require rotating the template against the object until it gathers enough votes (matching points) to be considered a match. The process is computationally intensive and inefficient when applied to many coins at a time.
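To illustrate why this is costly, here is a rough sketch of a rotation-swept template match in Python with OpenCV; the angle step, score threshold, and file names are assumptions. Every candidate orientation requires a full correlation pass over the scene, which is what makes the approach inefficient when many coins are present.

```python
import cv2

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("quarter_template.png", cv2.IMREAD_GRAYSCALE)

best_score, best_angle, best_loc = -1.0, 0, (0, 0)
h, w = template.shape

# Sweep the template through orientations; each step correlates the rotated
# template over the whole scene, so the cost grows with the angle resolution.
for angle in range(0, 360, 5):
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(template, M, (w, h))
    result = cv2.matchTemplate(scene, rotated, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val > best_score:
        best_score, best_angle, best_loc = max_val, angle, max_loc

if best_score > 0.8:  # score threshold is an assumption
    print(f"match at {best_loc}, angle {best_angle}, score {best_score:.2f}")
```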
III. HOW IT WORKS
A. Assumptions
Assumptions exist in the design process. They allow us to specify what to expect in order to find what we are looking for and reject anything else.
First, U.S. Treasury coins with the following denominations are expected: $0.25 (quarter), $0.10 (dime), $0.05 (nickel), and $0.01 (cent). Figure 1 (left) shows these coins on both sides (heads and tails).
Second, the scene is controllable. Illumination, working
distance and other parameters can be increased or decreased to
better fit the application. Of course, after obtaining a good
setup these parameters should remain as unchanged as possible
so that changes in the application are minimal.
Third, the background selected was black paper. This increases the contrast of the objects in the image. Even so, uneven illumination is possible, but it is handled; this is explained later in this section.
Finally, round objects smaller or bigger than coins can be discarded by their radius. Non-coin objects with a radius similar to that of a coin are assumed not to be placed in the working area.
B. Calibration
The first step is image calibration. The calibration process first requires that an image with a grid-like pattern be taken, shown in Figure 2. For the program to determine the conversion of "real-world" coordinates to "image" coordinates, certain data must be provided: 1) specify that the calibration is nonlinear, 2) provide the center pixel of the upper-left dot, and 3) provide the "real-world" distance between each dot in the grid (horizontally and vertically). The real-world distance between dots can be specified using a wide variety of units, including millimeters, centimeters, and inches, among others.
Fig. 2. Image of the grid-like pattern paper used to correct distortion in software. The first dot in the upper left is used as a reference or starting point.
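The calibration itself is configured through LabVIEW's Vision tools rather than written as code. Purely as an illustration of the same grid-based idea, a sketch using OpenCV's circle-grid camera calibration might look as follows; the grid dimensions, dot spacing, and file names are assumptions, and OpenCV's camera-model approach is a stand-in for, not a reproduction of, LabVIEW's nonlinear grid mapping.

```python
import cv2
import numpy as np

GRID_COLS, GRID_ROWS = 10, 7   # dots per row / column (assumed values)
DOT_SPACING_MM = 10.0          # real-world distance between dots (assumed)

gray = cv2.imread("dot_grid.png", cv2.IMREAD_GRAYSCALE)

# Locate the dot centers in image coordinates (symmetric grid by default).
found, centers = cv2.findCirclesGrid(gray, (GRID_COLS, GRID_ROWS))
if not found:
    raise RuntimeError("calibration grid not found")

# Matching real-world coordinates of the dots (planar grid, Z = 0).
objp = np.zeros((GRID_ROWS * GRID_COLS, 3), np.float32)
objp[:, :2] = np.mgrid[0:GRID_COLS, 0:GRID_ROWS].T.reshape(-1, 2) * DOT_SPACING_MM

# Estimate the camera matrix and lens-distortion coefficients, then undistort.
_, K, dist, _, _ = cv2.calibrateCamera([objp], [centers],
                                       gray.shape[::-1], None, None)
undistorted = cv2.undistort(gray, K, dist)
cv2.imwrite("undistorted.png", undistorted)
```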
C. Handling uneven illumination
Even though a dark background is used, which allows for better contrast, the illumination devices used for the application setting do not create a perfectly evenly illuminated field of view (the area captured in the image). A simple technique is used to account for and remove (or at least reduce) small variations in illumination. This technique only requires an image of the background (with no objects in it). Once that image is obtained, the new image is computed as

NewImage(i,j) = Original(i,j) - Background(i,j).        (1)
The resulting image, shown in Figure 1 (right), contains little or no unevenly illuminated area. Consequently, objects can be better recognized, not only by a person but also by a machine vision algorithm.
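A minimal sketch of Equation (1) in Python with OpenCV follows (file names are illustrative); cv2.subtract saturates at zero, so background pixels brighter than the scene do not wrap around.

```python
import cv2

# Grayscale images of the empty background and of the scene with coins
# (paths are illustrative assumptions).
background = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)
original = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

# Equation (1): NewImage(i, j) = Original(i, j) - Background(i, j).
# cv2.subtract clips at 0, avoiding unsigned-integer wrap-around.
new_image = cv2.subtract(original, background)

cv2.imwrite("corrected.png", new_image)
```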
D. Calculating coins
We can now effectively find the coins in sight using a circle-finding algorithm built into LabVIEW. The function is fast and very useful. In it, you can specify the range of the objects' radii so that only coins are detected. The method returns each detected coin with its relative centroid position (x, y) and radius, and also an image with circles indicating the location and size of the coins detected (see Fig. 3).
To count the amount of money in sight we calculate

(QuartersFound * 0.25) + (FiveCentsFound * 0.05) + (DimesFound * 0.10) + (OneCentsFound * 0.01).        (2)
Note that to distinguish between coins, the radii range of each denomination is used (e.g., 44-55 pixels for a quarter).
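In our implementation this step is performed by LabVIEW's built-in circle-finding function. As a rough stand-in, the sketch below uses a Hough-circle transform in Python with OpenCV; apart from the 44-55 pixel quarter range quoted above, the radii ranges (and the detector parameters) are setup-dependent assumptions that would have to be re-measured for any given camera distance.

```python
import cv2
import numpy as np

# Radii ranges in pixels per denomination. Only the quarter range comes from
# the text; the others are assumptions for this particular camera setup.
COIN_RANGES = [
    (0.25, 44, 55),   # quarter
    (0.05, 41, 43),   # nickel (assumed)
    (0.01, 37, 40),   # cent (assumed)
    (0.10, 34, 36),   # dime (assumed)
]

gray = cv2.imread("corrected.png", cv2.IMREAD_GRAYSCALE)
gray = cv2.medianBlur(gray, 5)

# Detect circles whose radii span all coin denominations.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                           param1=100, param2=30, minRadius=30, maxRadius=60)

total = 0.0
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        for value, r_min, r_max in COIN_RANGES:
            if r_min <= r <= r_max:
                total += value   # Equation (2), applied coin by coin
                break

print(f"Total in sight: ${total:.2f}")
```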
Fig. 1. The result (right) of a simple technique that enhances an image with
uneven illumination (left).
Fig. 3. A view of applying the circle finding algorithm (right) to the image
captured with the camera (left).
IV. OBSERVATIONS
A. Dimes and Cents
In many other implementations of coin detection, there is a considerable issue with the close similarity of dimes and cents in terms of their radii. Although each one had its own radii range, the ranges were very small. This is the main reason why actual dimes could sometimes be detected as cents and vice versa.
Illumination, however, played a big role in this confusion. It was observed that as illumination increased, the radii of the coins slightly increased. This initially appeared to be a good solution, because cents would have a lower probability of being confused with dimes, or would be more detectable (see further below for details). But for every action there is a reaction: the dimes' radii also increased enough for them to be detected as cents.
Another solution that has been proposed for this particular case is to calculate the mean gray-level of a coin that can be either a dime (high mean) or a cent (low mean). This, however, could not be effectively implemented in time.
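As a sketch of that proposed (but unimplemented) check, the mean gray-level inside a detected circle could be compared against a threshold, for example in Python with NumPy; the threshold value and the gray-level convention (silvery dime brighter than copper cent) are assumptions tied to the illumination setup.

```python
import numpy as np

def classify_dime_or_cent(gray, x, y, r, threshold=140):
    """Return 0.10 if the coin at (x, y) with radius r looks silvery (dime),
    or 0.01 if it looks darker/copper (cent). Threshold is setup-dependent."""
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (xx - x) ** 2 + (yy - y) ** 2 <= r ** 2   # pixels inside the coin
    mean_level = gray[mask].mean()
    return 0.10 if mean_level >= threshold else 0.01
```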
It was also noticeable that some cents could not be detected. This only happened when the cents were dirty or opaque, causing very little light to be reflected and, consequently, making them barely distinguishable from the background.
B. Overlapping
This application considered a slight degree of coin overlapping. The circle detection function of LabVIEW could detect overlapped coins in most cases. The cases where coins were not detected (in most instances) were:
• Cent over a dime
  o If the cent is too opaque, neither coin was detected
• Nickel or quarter over a dime
  o The dime could not be detected
• Cent over a cent
  o One, the other, or both were not detected, or were confused with dimes
REFERENCES
[1] T. H. Romer, D. Lee, G. M. Voelker, A. Wolman, W. A. Wong, J.-L. Baer, B. N. Bershad, and H. M. Levy, "The structure and performance of interpreters," in Architectural Support for Programming Languages and Operating Systems (ASPLOS-VII), pp. 150-159, 1996.
[2] B. Gross, M. Kozek, and H. Jörgl, "Identification and Inversion of Magnetic Hysteresis Using LabVIEW and Matlab," in International Symposium on Remote Engineering and Virtual Instrumentation, IEEE, Villach, 2004.
[3] R. Jain, R. Kasturi, and B. G. Schunck, Machine Vision, McGraw-Hill, 1995, pp. 462-465.
[4] J. C. Russ, Computer-Assisted Microscopy: The Measurement and Analysis of Images, New York: Plenum Press, 1990, pp. 267-269.