Image processing algorithms for a computer assisted allergy testing system
D. Lymperis¹, A. Diamantis¹, G. Syrcos¹, S. Karagiannis²
¹ Dpt. of Automation, Technological Institute of Piraeus, Piraeus, Greece
  Tel: +210-5381188, Fax: +210-5381249, E-mail: gsyrcos@teipir.gr
² Dpt. of Automation, Technological Institute of Piraeus, Piraeus, Greece
  Tel: +210-5381704, Fax: +210-5381249, E-mail: skaragian@teipir.gr
This article is part of an overall project that aims to develop a system that will
automatically perform and evaluate common skin allergy tests on the human arm. The
complete system has three main branches of development: (1) the robotic arm and its
movement in space; (2) the image processing and vision system for guidance and
result evaluation; and (3) the expert system for classification of results. This paper is a
progress report for the vision system and the image processing algorithms. It presents
a method for preliminary image processing for image enhancement, followed by the
main digital processing section, which includes corrections for non-uniform
illumination, hair removal, adaptive thresholding and morphological operations.
1. Introduction
The design of the image processing and vision system branch of the diagnosis
system is not only demanding and crucial for the performance of the system as a
whole, but also critical in medical terms, because the test must be performed on
specific skin areas of the arm that are designated with the help of anatomical criteria.
For example, the avoidance of veins is critical because a strong reaction to a
stimulant may cause a potentially lethal allergic shock to the patient. Therefore,
careful image processing and test planning is essential for the design of the whole
system.
The first step in determining candidate locations for the allergy agent placement is
to scan the skin and exclude the areas that have lesions, wounds and veins.
Previous research [1]-[7] has shown the effectiveness of vein imaging under
infrared (IR) illumination. The same approach has been used in our system, but the
IR image is supplemented with normal visible-light images, and the scene is
illuminated with a combination of visible, infrared and laser sources.
For the complete automation of the allergy test via a robotic arm, the help of a
machine vision system is a strong prerequisite. The machine vision system needs to
address the following issues: i) Detection of areas not suitable for performing the
test. ii) Monitoring of the subject position and location for the safe and accurate
guidance of the robotic arm. iii) Selection of the areas for stimulant dispensing. iv)
Evaluation of reactions with respect to the blood concentration (erythema). v)
Evaluation of reactions in case of rash development.
In the following sections we present the experimental prototype implementation of
such a system. The experimental setup of this application is shown in Fig.1.
Figure 1: Experimental Setup.
2. Description of the Imaging System
The imaging system will consist of two cameras, one for the IR imaging and one for
the normal light imaging. The IR camera is used primarily at the first stage of the
test for the determination of vein location and any unusual concentration of blood on
the skin. The secondary use of the infrared camera is for the detection of the
reaction results after the placement of the allergens. Any concentration of
blood that shows up as an erythema will also be visible in IR as a dark blob,
because of the absorption of IR light by the blood hemoglobin. The visible light
camera serves multiple purposes: for robot arm guidance; for measuring the
distance of the arm to the camera via calibrated laser beam marks projected on the
subject; for measurement of the real-life dimensions of the reactions on the skin; and for
scanning the three-dimensional relief of reactions, in case blisters appear, with the
help of a projected laser line on the skin.
The illumination of the work area is performed via both IR and visible (white) light
LED sources that are sufficiently diffused via air/polymer diffusers. Reflections are
minimised by polarizing filters in the sources and cameras, oriented in perpendicular
directions. The infrared optical system setup is shown in Fig.2. The sensitivity of the
IR camera is selectively limited to the IR region via proper rejection filters that do
not allow visible light into the camera. In Fig.3, a sample image from a working
prototype setup is shown, clearly displaying the benefits of IR imaging for
subcutaneous vein detection.
3. Algorithms
3.1 Camera Calibration
In order to be able to perform measurements on the allergy reactions, the cameras
have to be calibrated so that all real-lens projection distortion on the image can be
compensated in the image processing software. Camera calibration is the process
used to estimate the intrinsic and extrinsic parameters of the camera. In
most cases the reliability and the output of a machine vision system depend on the
accurate determination of these parameters. The calibration needs to be performed only
once; after the camera parameters are estimated they can be reused in the
calculations, since the optical system remains invariant. The estimation of the
intrinsic camera parameters is achieved by presenting printed checkerboard images
of known dimensions to the camera. From these known images the camera model
can be iteratively refined until the parameters are estimated to sufficient precision. Extrinsic
camera parameters can be estimated by varying the distance and angle of the
presented patterns to the camera. After the camera calibration is performed, images
acquired with the cameras can be compensated for the lens distortions and the
measured image distances can be correlated to the real world distances on the
patient skin.
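For illustration only, the following MATLAB sketch outlines such a checkerboard-based calibration. It assumes the Computer Vision Toolbox is available; the image file names and the square size are assumptions of the sketch, not details of the prototype described above.

    % Hedged calibration sketch (assumes the MATLAB Computer Vision Toolbox).
    imageFiles = {'cal1.png', 'cal2.png', 'cal3.png'};    % assumed checkerboard views
    [imagePoints, boardSize] = detectCheckerboardPoints(imageFiles);
    squareSize  = 25;                                     % assumed square size in mm
    worldPoints = generateCheckerboardPoints(boardSize, squareSize);
    params      = estimateCameraParameters(imagePoints, worldPoints);
    undistorted = undistortImage(imread(imageFiles{1}), params);  % lens-distortion compensation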
Figure 2: Infrared optical system.
Figure 3: Capture with IR source illumination.
3.2 Image Acquisition
At the current stage of the project the image processing algorithms are prototyped
and developed in MATLAB®. We have assumed that the arm can be held still for
about one second, so we can capture multiple frames of the same picture by using
a capturing rate of 10 frames/sec. Of course, the capture rate depends on camera
characteristics and specifications. The multiple images are then averaged as a first
step for noise removal. Averaging a number of images creates a clearer picture,
with less noisy pixel values [8].
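A minimal MATLAB sketch of this averaging step is shown below; it assumes the captured frames are already available in a cell array named frames, which is an assumption of the sketch rather than part of the acquisition code.

    % Average the frames of the (assumed still) forearm to reduce temporal noise.
    acc = zeros(size(frames{1}), 'double');
    for k = 1:numel(frames)
        acc = acc + im2double(frames{k});    % accumulate in double precision
    end
    avgImg = acc / numel(frames);            % per-pixel mean over all frames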
3.3 Image Processing
Since the final system is destined for real-time operation, the algorithms were
developed with moderate computational complexity in mind, so that they can be
executed close to real time on mainstream PC computational platforms. Therefore, the
simplest possible algorithmic route was often selected for the prototype run of the experiment. The
complete system will consist of two processing sections, one for the determination
of the test points and the robot guidance and another for the recording and
classification of the results. In this paper we present only the steps for the first part
of the experiment that involves the following procedures: i. Background detection, ii.
Hair removal, iii. Non-uniform illumination correction, iv. Contrast enhancement, v.
Thresholding, vi. Dispersion of test points.
Background Detection
For the optimal extraction of the test points, the calculations need to be constrained
to the actual image area that contains the subject’s skin. Therefore, a special setup
with a matte black backdrop is used to ease the exclusion of the background pixels
from the calculations. The background pixels can therefore be extracted with the
use of simple global thresholding. The threshold Tb that was used was set at the
level of the mean value of the image minus its standard deviation, Equation (1); i.e.,
for the initial greyscale image I(x,y), where (x,y) are the spatial coordinates, with
dimensions M×N pixels, the threshold Tb equals:
T_b = \frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N} I(x,y) \;-\; \sqrt{\frac{1}{MN-1}\sum_{x=1}^{M}\sum_{y=1}^{N}\Big(I(x,y) - \frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N} I(x,y)\Big)^{2}}    (1)
Afterwards, the greyscale image is converted to binary (bw) using the calculated
threshold Tb:

bw(i,j) = \begin{cases} 0, & \text{if } I(i,j) < T_b \\ 1, & \text{if } I(i,j) \geq T_b \end{cases}    (2)
where (i, j) are the coordinates of each image pixel. The result of the above process
is shown in Fig.4. Knowing the background pixels' coordinates, the algorithm
processes only the forearm's pixels, which reduces the execution time and improves
the vein detection procedure.
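A minimal MATLAB sketch of Equations (1) and (2) is given below; the input variable name Igray and the comparison direction (the dark backdrop lying below the threshold) are assumptions of the sketch.

    % Global background thresholding (Eqs. 1-2): threshold = mean minus standard deviation.
    I  = im2double(Igray);        % Igray: averaged greyscale input image (assumed)
    Tb = mean2(I) - std2(I);      % Eq. (1)
    bw = I >= Tb;                 % Eq. (2): 1 = forearm pixels, 0 = background pixels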
Hair Removal
Before we can proceed with the vein detection we must first remove all the
elements that may interfere with the final result, such as any hair that
may be present. We remove the traces of hair via morphological closing. The
result of this process is shown in Fig.5. The selection of the proper structuring
element (SE) for the closing operation is very important [9]-[10].
Experiments have shown that a cross-shaped SE, composed of a vertical and a
horizontal line, should be used because of the irregular orientation of the hair. The size
of the SE is estimated on a per-image basis from the average hair thickness measured
on the images.
Figure 4: Background Detection.
Figure 5: Hair Removal.
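A minimal MATLAB sketch of the hair removal step is shown below; the structuring element length L is an assumed illustrative value rather than the per-image estimate described above.

    % Greyscale morphological closing with a cross-shaped SE removes thin dark
    % hair traces regardless of their orientation.
    L = 9;                         % assumed hair-thickness estimate in pixels
    nhood = false(L);
    c = ceil(L / 2);
    nhood(c, :) = true;            % horizontal arm of the cross
    nhood(:, c) = true;            % vertical arm of the cross
    seCross = strel(nhood);        % flat, cross-shaped structuring element
    noHair  = imclose(I, seCross); % closing of the greyscale forearm image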
Uneven Illumination Correction
One of the most frequent problems in image processing is uneven illumination. The
result of the undesirable non-uniform illumination is that regions of the image are dark
and their contrast is too low. The human eye has no trouble perceiving such images,
because the brain compensates for the differences, taking into account the ambient
illumination. Unfortunately, the computer cannot automatically compensate for this
effect, so the next goal of the processing is to flatten the illumination component in
the image. An image can be modelled as the product of two components, the
reflectance and the illumination, which are combined multiplicatively [11]-[12]. The
desired goal is to remove the illumination gradient from the image and leave only the
objects' reflectance.
The illumination component can be found in the low frequencies of the image. In
order to calculate this component, the image is filtered with a very low-pass filter in
the frequency domain. This way, an estimate of the background of the image is
created, which is dominated by the illumination component. To eliminate any border
effects, the image is transformed to and from the frequency domain via
symmetric extension of the image boundaries. The result of the low-pass filtering is
the original image subjected to heavy smoothing. Finally, the reflectance component
of the image can be calculated by dividing the original image (Fig.5) by the
smoothed image. Division is the appropriate operation for extracting the reflectance
component because it is combined with the illumination signal by multiplication (Fig.6).
Figure 6: Uneven illumination correction.
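As a rough illustration, the MATLAB sketch below stands in for the frequency-domain low-pass filter with heavy spatial Gaussian smoothing under symmetric padding; the smoothing scale sigma is an assumed value.

    % Illumination flattening: estimate the slowly varying illumination field by
    % heavy smoothing (symmetric padding avoids border effects), then divide it out.
    sigma = 40;                                            % assumed smoothing scale (pixels)
    illum = imgaussfilt(noHair, sigma, 'Padding', 'symmetric');
    refl  = noHair ./ (illum + eps);                       % reflectance = original / smoothed
    refl  = mat2gray(refl);                                % rescale to [0, 1]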
Contrast Enhancement
The image produced after the illumination correction has very low contrast. In order
to enhance the veins, an intensity transformation was used which is called contrast
stretching. This transformation function compresses the input levels lower than m
into a narrow range of dark levels in the output image; similarly, it compresses the
values above m into a narrow band of light levels in the output. The value of m
could be determined experimentally to optimize the results. In this case the
coefficient m is calculated from the statistics of the image I(x,y):
m = b\left[\frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N} I(x,y) \;-\; \sqrt{\frac{1}{MN-1}\sum_{x=1}^{M}\sum_{y=1}^{N}\Big(I(x,y) - \frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N} I(x,y)\Big)^{2}}\right]    (3)
The contrast stretching transformation is described by the limiting function of Equation (4):

T(r) = \frac{1}{1 + (m/r)^{E}}    (4)
where r represents the intensities of the input image, T(r) the corresponding
intensity values in the output image, and E controls the slope of the function. Fig.7
depicts the contrast stretching effect. It must be noted that the algorithm does not
take into account the background regions that were identified during the background
detection process [13].
Figure 7: Contrast Enhancement.
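A minimal MATLAB sketch of Equations (3) and (4) follows; E and b are assumed illustrative values, and skin denotes the forearm mask obtained from Equation (2).

    % Contrast stretching (Eqs. 3-4) applied only to the forearm pixels.
    E = 4;  b = 1;                                   % assumed illustrative parameters
    skin = bw;                                       % forearm mask from Eq. (2)
    m  = b * (mean(refl(skin)) - std(refl(skin)));   % Eq. (3) over the skin region
    Ts = 1 ./ (1 + (m ./ (refl + eps)).^E);          % Eq. (4), elementwise
    Ts(~skin) = 0;                                   % background regions are ignored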
Vein Extraction
After the contrast enhancement procedure, the pixels that belong to the veins
have darker intensity levels than the pixels that belong to the rest of the skin area of the
forearm. Although this enhancement contributes to the extraction of the skin areas
that are suitable for performing the allergy test, some image segmentation problems
may still remain. This situation is caused by the non-uniform intensity levels of the
pixels that form the veins, so the use of a global thresholding method will not
produce satisfactory results. Many methods of global, adaptive and dynamic
threshold calculation have been tested in order to extract the optimal shape of the
veins. Finally, a multilevel thresholding method that produces satisfactory results is used.
Firstly, the profile of the pixel intensity levels of each row of the image is
computed. After intensity profiling, the maximum and the minimum value of each
distribution is calculated, Fig.8. These values are the transition limits of the vein
pixel values. The maximum and minimum values from the previous stage of the
process are averaged to estimate the lower and the upper limiting levels of the
multilevel thresholding, Equations (5) and (6):

T_1 = \mathrm{mean}(\min) - b\,(\mathrm{mean}(\max) - \mathrm{mean}(\min))    (5)

T_2 = \mathrm{mean}(\max) - b\,(\mathrm{mean}(\max) - \mathrm{mean}(\min))    (6)
Figure 8: Pixel Intensity Profiling.
Afterwards, the greyscale image is converted to binary (bw) using the calculated
threshold levels:

bw(i,j) = \begin{cases} 1, & \text{if } T_1 \leq I(i,j) \leq T_2 \\ 0, & \text{otherwise} \end{cases}    (7)
where (i, j) are the coordinates of each image pixel. The result of the above process
is shown in Fig.9. The coefficient b was determined experimentally and set to
0.38 for best results with the data set used.
Figure 9: Veins extraction.
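The MATLAB sketch below illustrates Equations (5)-(7); it assumes that rows containing only background pixels have been excluded from the row profiles, and Ts is the contrast-stretched image from the previous step.

    % Multilevel vein thresholding (Eqs. 5-7) from row-wise intensity profiles.
    b = 0.38;                                                % experimentally reported value
    rowMin = min(Ts, [], 2);                                 % per-row minimum intensity
    rowMax = max(Ts, [], 2);                                 % per-row maximum intensity
    T1 = mean(rowMin) - b * (mean(rowMax) - mean(rowMin));   % Eq. (5)
    T2 = mean(rowMax) - b * (mean(rowMax) - mean(rowMin));   % Eq. (6)
    veins = (Ts >= T1) & (Ts <= T2);                         % Eq. (7): binary vein mask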
Test point dispersion
The most crucial stage of the application is the selection of points suitable for
performing the allergy test. These points must satisfy rules that are set according to
the standard medical procedure for allergic tests. Let d denote the maximum
diameter that a test result can reach (we assume that the test results will be
circularly shaped). The minimal distance of two adjacent test points must be greater
than d to avoid overlapping of the test results. Another factor to consider in the
algorithm is that certain areas on the forearm, such as veins, glands, scratches,
etc., must be avoided. In order to build a fast dispersion algorithm and to ensure that
the test points will not overlap undesirable areas, a morphological dilation
operation is used. A structuring element with a disk shape is applied because of the
roughly circular shape of the allergic test results. Its size must be slightly larger than
the estimated maximum diameter d of an allergy reaction to secure a safe distance
from the prohibited areas. The test points are then checked by calculating the
distance between any two candidate points: for every new candidate test point, the
distance between it and all of the previously registered points is calculated. The
result of this process is shown in
Fig.10.
Figure 10: Test point dispersion.
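A simplified MATLAB sketch of the dispersion step follows; the reaction diameter d and the greedy scan over candidate pixels are assumptions of the sketch, not the exact procedure of the prototype.

    % Test point dispersion: dilate the prohibited regions with a disk slightly
    % larger than the reaction radius, then greedily keep points at least d apart.
    d = 30;                                              % assumed maximum reaction diameter (px)
    forbidden = imdilate(veins | ~skin, strel('disk', ceil(d/2) + 2));
    [rows, cols] = find(~forbidden);                     % allowed candidate pixels
    pts = zeros(0, 2);                                   % accepted test points
    for k = 1:numel(rows)
        p = [rows(k), cols(k)];
        if isempty(pts) || all(sqrt(sum((pts - p).^2, 2)) > d)
            pts(end + 1, :) = p;                         %#ok<AGROW> register the point
        end
    end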
4. Reading skin test reactions
After the allergens have reacted on the skin, the stage of result recognition follows. Ten
to twenty minutes after the placement of the allergens at the designated positions on
the skin, the results of the reactions appear. An allergen reagent causes a small
blister, called a wheal, and a red region, called an erythema, appears
surrounding the wheal. To obtain the best results, the dimensions of the allergen
reactions are measured under high illumination.
The mean values of the maximum vertical and horizontal diameter of the wheal and
erythema are calculated. These values are used to grade the reactivity of the
allergic response. The grading is implemented by a scoring system [14]. The exact
shape of the wheal can be recovered in 3D via the scanning of the area with a
projected laser line. This helps pinpoint the exact boundary of the wheal without
being distracted by any colour variations of the skin or the erythema.
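As an illustration of the measurement step, the hedged MATLAB sketch below averages the maximum horizontal and vertical extents of each detected reaction blob; reactionMask and the pixel-to-millimetre scale pxToMm are assumed inputs, not part of the procedure described above.

    % Mean of the maximum horizontal and vertical diameters of each reaction
    % region, converted to millimetres with a calibration factor.
    pxToMm = 0.1;                                   % assumed scale from camera calibration
    stats  = regionprops(reactionMask, 'BoundingBox');
    meanDiam_mm = zeros(numel(stats), 1);
    for k = 1:numel(stats)
        bb = stats(k).BoundingBox;                  % [x, y, width, height]
        meanDiam_mm(k) = pxToMm * (bb(3) + bb(4)) / 2;
    end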
5. Conclusion
In this paper we have presented a computer aided testing and diagnosis system for
allergy testing. We have focused mainly on the design of the vision and image
processing system, which is based on the principle that the blood's hemoglobin
absorbs IR light, therefore making the detection of regions with high blood
concentrations, like veins, easier. The development of this system is still in its early
stages and the work is still in progress; therefore, some described features may
change for the benefit of speed and accuracy. In the current stage of development,
several methods are being evaluated on a laboratory prototype setup to determine
the optimal procedure for allergy detection (results) and classification.
Acknowledgements
This research has been conducted within the framework of the “Archimedes:
Funding of research groups in TEI of Piraeus” project, co-funded by the European
Union (75%) and the Greek Ministry of Education (25%).
References
[1] H. D. Zeman, G. Lovhoiden and H. Deshmukh, “Optimization of subcutaneous vein contrast
enhancement”, Proc. SPIE 2000 Biomedical Diagnostic, Guidance and Surgical-Assist Systems II,
vol.3911, pp.50-57, May 2000.
[2] H. D. Zeman, G. Lovhoiden and H. Deshmukh, “Design of a Clinical Vein Contrast Enhancing
Projector”, Proc. SPIE 2001 Biomedical Diagnostic, Guidance and Surgical-Assist Systems III,
vol.4254, pp.204-215, June 2001.
[3] G. Lovhoiden, H. Deshmukh and H. D. Zeman, “Clinical Evaluation of Vein Contrast Enhancement”,
Proc. SPIE 2002 Biomedical Diagnostic, Guidance and Surgical-Assist Systems IV, vol.4615, pp.61-70,
May 2002.
[4] G. Lovhoiden, H. Deshmukh, C. Vrancken, Y. Zhang, H. D. Zeman and D. Weinberg,
“Commercialization of Vein Contrast Enhancement”, Proc. of SPIE 2003 Advanced Biomedical and
Clinical Diagnostic Systems, vol.4958, pp.189-200, July 2003.
[5] G. Lovhoiden, H. Deshmukh and H. D. Zeman, “Prototype vein contrast enhancer”, Proc. of SPIE
2004 Advanced Biomedical and Clinical Diagnostic Systems II, vol.5318, pp.39-49, July 2004.
[6] Ph. Schmid and S. Fischer, “Colour Segmentation for the Analysis of Pigmented Skin Lesions”, Proc.
of the Sixth International Conference on Image Processing and its Applications 1997, vol.2, pp.688-692,
July 1997.
[7] Ph. Schmid-Saugeon, J. Guillod and J. P. Thiran, “Towards a computer-aided diagnosis system for
pigmented skin lesions”, Computerized Medical Imaging and Graphics, vol.27, no.1, pp.65-78, 2003.
[8] R. C. Gonzalez, R. E. Woods, Digital Image Processing, Prentice Hall, New Jersey, United States of
America, pp.10-18, 2002.
[9] T. Lee, V. Ng, R. Gallagher, A. Coldman and D. McLean, “DullRazor®: A software approach to hair
removal from images”, Computers in Biology and Medicine, vol.27, no.6, pp.533-543, November
1997.
[10] P. Soille, Morphological Image Analysis: Principles and Applications, Springer, Berlin, Germany, pp.
105-133, 2003.
[11] S. W. Smith, The Scientist and Engineer’s Guide to Digital Signal Processing, California Technical
Publishing, San Diego, United States of America, pp.407-410, 1999.
[12] R. C. Gonzalez, R. E. Woods, Digital Image Processing, Prentice Hall, New Jersey, United States of
America, pp.28-31, 2002.
[13] R. C. Gonzalez, R. E. Woods, S. L. Eddins, Digital Image Processing Using MATLAB, Prentice Hall,
New Jersey, United States of America, pp.68-70, 2004.
[14] R. G. Slavin, R. E. Reisman, Expert Guide to Allergy & Immunology, American College of Physicians,
p.44, 1999.