Unsupervised Color Correction Using Cast Removal for
Underwater Images
R. A. SALAM, A. O. H. EE
School of Computer Sciences
Universiti Sains Malaysia
11800 Penang
MALAYSIA
M. S. HITAM
Department of Computer Science
University College of Science and Technology Malaysia
Mengabang Telipot, 210300 Kuala Terengganu, Terengganu
MALAYSIA
Abstract: - Image enhancement for underwater images is very important, especially for marine biologists, who require an automated tool to help them observe underwater scenes. The aquatic environment introduces problems into underwater images, such as blurring of image features and clutter and lack of structure in the regions of interest, due to the transmission properties of the medium. Underwater images also tend to be blurry or bluish. The focus of this paper is to investigate and evaluate existing image enhancement methods and algorithms and to apply these techniques to underwater images. We propose a framework for underwater image enhancement that starts with unsupervised color correction using cast removal, followed by contrast enhancement using histogram equalization, and finally noise filtering and image sharpening. These stages are essential for underwater image enhancement. As a result, an automatic tool will be developed to enhance underwater images.
Key-Words: - underwater images, image enhancement, contrast, sharpness, color correction, cast removal.
1 Introduction
In general, three factors determine an image taken by a camera: the physical content of the scene, the illumination of the scene, and the characteristics of the digital camera. The physical content of the scene is the main interest in many applications. The images produced by digital cameras are often quite good, but rarely perfect, and may not be sufficient for recording a scene. We may need to edit the brightness, contrast and color of all or parts of the image.
In modern computing, many scientific applications involve image processing, such as underwater research, medical diagnosis, forensic recognition, document imaging and industrial product monitoring. The most common problem in image processing applications is the occurrence of noise. Image enhancement is one process used to address such computer vision problems. It is the part of image processing that comprises techniques for improving the appearance of an image. These techniques are used to emphasize and sharpen image features for display and analysis, and to extract important information from a poor-quality image. However, image enhancement techniques are application specific: a technique that works well in one application may not be the best approach for another.
Image enhancement is used as a preprocessing step in some computer vision applications to ease the vision task, for example, enhancing the edges of markings in machine vision inspection. Image enhancement is also used in post-processing to produce a more desirable visual image. For instance, an output image may lose most of its contrast after image restoration is performed to eliminate distortion; enhancement techniques can then be applied to restore the contrast. Post-processing enhancement can likewise improve image quality after a compressed image is decompressed. For example, applying a smoothing filter will improve the appearance of an image containing the noise generated by the standard JPEG compression algorithm [1].
For scientists studying aquatic environments, image processing techniques for enhancing images taken in underwater conditions are very important. The aquatic environment introduces problems into underwater images, such as blurring of image features and clutter and lack of structure in the regions of interest, due to the transmission properties of the medium [2].
Underwater image enhancement techniques are also crucial for studying archaeological sites in the world's oceans, since most historical objects found underwater have to be analyzed directly from the images taken. Underwater photography presents a number of challenges. Apart from sky reflection, the prominent blue color of ocean water is due to selective absorption by water molecules: the water tends to filter out reddish tones. Red light diminishes as depth increases, producing blue-to-gray images, and the effect worsens at greater depths and distances. In fact, nearly all red light has disappeared by a depth of about 3 meters [3]. In addition, underwater light is often less than ideal for picture taking. As a result, colors as seen and photographed underwater are severely affected by the enormous blue/green filter effect of ocean water [3].
Underwater images therefore tend to be blurry or bluish. One of the major intrinsic problems with underwater images processed by traditional image processing is an exaggerated green or blue color in the image; in addition, the resulting images frequently appear blurry.
This research looks at available techniques for image enhancement. The main aim is to investigate how existing image enhancement techniques can be used for underwater images. A framework that combines different methods is proposed to enhance underwater images and reduce the effects of the aquatic environment. One of the important techniques is color balancing, which refers to the process of removing an overall color cast from an image.
Section 2 describes the problem addressed, related image enhancement methods and color balancing. Section 3 presents the proposed framework for improving enhancement techniques and algorithms for underwater images.
2 Background
In general, image enhancement falls into two categories: spatial domain methods, which are based on direct manipulation of the pixels in an image, and frequency domain methods, which are based on modifying the Fourier transform of an image. Some image enhancement algorithms combine both domains.
2.1 Color Balancing
One class of image enhancement algorithms is color balancing. Color balancing techniques are useful for correcting color casts in images; color balancing is the process of removing an overall color bias from an image. For example, if an underwater image appears too blue, it has a blue cast, and removing this blue cast brings the image back into balance. Color casts can be introduced by the way the original scene was illuminated, by the film and filters used, by differences in film processing, or by the scanning process. Controlling color imbalance in an image is a challenging task; usually, the easiest place to correct the problem is at the end of the processing chain.
A powerful method for identifying a color cast is to measure the color of pixels that should, in principle, be neutral gray. Neutrals must have equal red, green and blue components; if they are not equal, this indicates the presence of a color cast, and a color correction must be made.
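As an illustrative sketch (not part of the original method), a cast of this kind can be flagged by comparing the channel means over content expected to be neutral; the tolerance threshold and function name are assumptions:

```python
import numpy as np

def detect_color_cast(image, tolerance=10.0):
    """Flag a color cast by comparing the mean R, G, B values.

    image: (H, W, 3) uint8 array. For cast-free near-neutral content
    the three channel means should be roughly equal; a large spread
    suggests a cast toward the strongest channel.
    """
    means = image.reshape(-1, 3).mean(axis=0)   # per-channel means
    spread = means.max() - means.min()
    channels = ("red", "green", "blue")
    dominant = channels[int(np.argmax(means))]
    return (spread > tolerance), dominant

# A synthetic bluish image: the blue mean is far above red and green.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[..., 0] = 60      # red
img[..., 1] = 80      # green
img[..., 2] = 200     # blue
has_cast, dominant = detect_color_cast(img)
```

On a genuinely neutral gray patch the spread stays near zero and no cast is reported.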
The Deep Submergence Laboratory [3] has developed software for underwater imaging that uses Laplacian filtering and the 2-dimensional Fast Fourier Transform. Another underwater project was a prototype jellyfish tracking system [6] that helps remotely operated vehicle pilots observe deep ocean life. It applies gradient and color thresholding segmentation in parallel; statistics for each segmented region form a pattern vector, and the error between a new pattern vector and the existing one is then calculated.
2.2 Enhancement by Point Processing
Enhancement by point processing is based on the direct transformation of gray or color values into other gray or color values; the transformation is usually represented by a 2-D plot [4].
Gray values can also be modified to create a desired histogram shape, such as a flat histogram in which every gray value has the same probability. In point processing, each output pixel is a function of one input pixel, and the transformation is implemented with a look-up table (LUT) [4]. A LUT provides a flexible and powerful way of adjusting the appearance of an image.
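The LUT mechanism can be sketched as follows; the gamma-correction table is just one possible transformation, chosen here for illustration:

```python
import numpy as np

def apply_lut(image, lut):
    """Point processing: each output pixel is looked up from a
    256-entry table, so any gray-value transformation costs one
    table lookup per pixel."""
    return lut[image]

# Example LUT: gamma correction (gamma < 1 brightens mid-tones).
gamma = 0.5
lut = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)

img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
out = apply_lut(img, lut)
```

Because the table is built once, the same LUT can be reused across frames, which is why point processing is cheap compared with neighbourhood operations.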
2.3 Enhancement by Mask Processing
Mask processing, also known as image filtering, aims to change the properties of an image by removing noise, sharpening contrast or highlighting contours.
Often, the goal in designing image enhancement is to smooth the image in its more uniform regions while preserving edges; another design goal is to sharpen the image. Both goals require neighbourhood processing, in which the output pixel is a function of some neighbourhood of the input pixels.
In general, filters fall into two common categories: spatial domain filters and frequency domain filters. These operations can be performed using linear operations in either the frequency or the spatial domain. There is also a third class of filters, derived from frequency domain filters but applied spatially, using the Fourier-transformed representation of an image [4].
Frequency domain filters can be categorized as low-pass filters (LPF) or high-pass filters (HPF). An LPF leaves the low-frequency components unchanged and is used for operations such as noise removal and image smoothing. Another frequently used smoothing filter is the median filter, which replaces the central pixel with the local median value and is effective at removing speckle noise. An HPF acts to sharpen details in an image; its disadvantage is that it tends to enhance high-frequency noise along with the details of interest [5].
A filter can be expressed as a matrix of weights applied to the neighborhood of the central pixel. Linear filters, also known as convolution filters, can be represented by matrix multiplication in linear algebra. A convolution is a mathematical operation that replaces each pixel by a weighted sum of its neighbors; the convolution mask is the matrix that defines the neighborhood of a pixel and specifies the weight assigned to each neighbor [5].
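A minimal sketch of such a convolution, assuming edge-replicated borders (a choice the text does not specify):

```python
import numpy as np

def convolve2d(image, mask):
    """Replace each pixel by the weighted sum of its neighborhood,
    with weights taken from the convolution mask (flipped, as true
    convolution requires). Borders are handled by edge replication."""
    mh, mw = mask.shape
    ph, pw = mh // 2, mw // 2
    kernel = np.flipud(np.fliplr(mask))
    padded = np.pad(image.astype(float), ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros(image.shape, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = np.sum(padded[y:y + mh, x:x + mw] * kernel)
    return out

# 3x3 averaging mask: a low-pass (smoothing) filter.
mask = np.ones((3, 3)) / 9.0
img = np.zeros((5, 5))
img[2, 2] = 9.0                     # a single bright spike
smoothed = convolve2d(img, mask)
```

The spike is spread evenly over its 3x3 neighborhood, which is exactly the smoothing behavior described above.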
Another technique that uses spatial domain information is the adaptive quadratic filter [5]. Apart from point processing and mask (neighbourhood) processing, there are also global processing techniques, in which every output pixel depends on all the pixels of the whole image. Histogram methods are usually global, but they can also be applied within neighbourhoods.
3 Methodology
Typical underwater image properties requiring correction are color, contrast and sharpness. Fig. 1 shows our approach: a reliable framework for an automatic tool and a simple enhancing procedure that can improve the overall quality of underwater images. Each module can be considered an independent component, and the modules can be combined to form a more complex algorithm.
Fig. 1. Underwater image enhancement techniques
3.1 Color Correction
Color correction is the first module of the proposed framework, as shown in Fig. 1. This paper focuses on unsupervised color correction: the framework is meant to automatically and reliably remove the color cast (predominantly blue) in underwater images. The solution we have designed therefore uses simple statistics to perform the color correction while preventing artifacts.
Cast removal is the primary step in achieving color correction. As most users obtain underwater images from the web or from digital cameras, they have little or no knowledge of digital image processing, so an automatic color adjustment tool is most desirable. In this module, an unsupervised algorithm that adjusts for the color cast (via color space conversion, histogram analysis and a linear transformation) makes it possible to remove this overlaid, predominantly blue color in underwater images. The algorithm can easily be integrated into more complex procedures without a major loss of computational time.
A cast is usually due to a different imaging device or to changes in lighting conditions: the measured tristimulus values vary with the viewing conditions. Chromatic adaptation is the human ability to compensate for the color of the light source and to approximately preserve scene color [7]. These transformations cannot be achieved by color balancing alone. The measured tristimulus values must be transformed to recover the original scene appearance under various lighting and display conditions. These transformations are called chromatic adaptation models, and they are the foundation of cast removal methods. A chromatic adaptation model does not compare appearance attributes such as lightness, chroma and hue [7]; it simply provides a transformation of tristimulus values from one viewing condition to another.
Most chromatic adaptation models are based on the Von Kries hypothesis, which states that chromatic adaptation is an independent gain regulation of the three cone signals, through three different gain coefficients [8]:
L’ = kL L
M’ = kM M
S’ = kS S
where L, M, S represent the original tristimulus values, and kL, kM, kS represent the gain coefficients that scale the original tristimulus values into the post-adaptation tristimulus values L’, M’, S’. Different adaptation models correspond to different choices of the gain coefficients.
The RGB channels are often considered an
approximation of L, M, S retinal wavebands [8],
therefore:
R’ = kR R
G’ = kG G
B’ = kB B
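As an illustrative sketch (not the paper's implementation), such per-channel gains can be applied as follows; here the gains are chosen by the gray world rule, with an assumed target gray of 128:

```python
import numpy as np

def gray_world_correct(image, gray=128.0):
    """Von Kries-style correction: scale each RGB channel by an
    independent gain k so the channel means move to a common gray
    value (the gray world assumption)."""
    img = image.astype(float)
    means = img.reshape(-1, 3).mean(axis=0)   # Rmean, Gmean, Bmean
    gains = gray / means                      # kR, kG, kB
    corrected = np.clip(img * gains, 0, 255)
    return corrected.astype(np.uint8), gains

# A bluish image: the blue gain comes out below 1 (blue is attenuated),
# while the red gain comes out above 1 (red is boosted).
img = np.full((2, 2, 3), (64, 100, 200), dtype=np.uint8)
corrected, gains = gray_world_correct(img)
```

After correction the three channel means coincide at the chosen gray, which is precisely how the gains remove a dominant blue cast.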
The most commonly used algorithms are the gray world algorithm and the white balance algorithm, i.e. average-to-gray and normalize-to-white [9]. The gray world algorithm suggests that, given an image with a sufficient amount of color variation, the mean values of the R, G, B components of the image will average out to a common gray value [9]:

kR = GrayR / Rmean
kG = GrayG / Gmean
kB = GrayB / Bmean

where Rmean, Gmean, Bmean are the averages of the RGB channels, and GrayR, GrayG, GrayB represent the gray value chosen.

The basic concept of the white balance algorithm is to set at white a point or region that appears realistically white in the original scene. One way to perform white balancing is to set the maximum value of each channel (Rmax, Gmax, Bmax) to a reference white (WhiteR, WhiteG, WhiteB) [9]:

kR = WhiteR / Rmax
kG = WhiteG / Gmax
kB = WhiteB / Bmax

Another approach sets a potential white region to the reference white [9]:

kR = WhiteR / RaverageW
kG = WhiteG / GaverageW
kB = WhiteB / BaverageW

where RaverageW, GaverageW, BaverageW represent the averages of the three RGB channels over a potential white object, which is a subset of the original image.

Here we use a modified cast removal method proposed by Gasparini and Schettini [10], which is based on the Von Kries hypothesis. The Von Kries coefficients kR, kG, kB are estimated by setting at white a particular region called the white balance region (WB region) [10]:

kR = WhiteR / RWB
kG = WhiteG / GWB
kB = WhiteB / BWB

The WB region depends on the strength of the cast. It represents the object or group of objects that will be forced to white. Only objects with brightness above a set limit are accepted (by setting a threshold, for example 30), because the components to be whitened must not be too dark.

3.2 Contrast Enhancement
The second module is the contrast enhancement module, as shown in Fig. 1. Here we use histogram equalization to enhance the contrast of underwater images. Histogram equalization is a common technique for enhancing the appearance of images; it aims to produce an image with intensity levels equally distributed over the whole intensity scale. For example, consider an underwater image that is predominantly dark blue: its blue histogram is skewed towards the lower end of the scale, with all the image detail compressed into the dark end. If we stretch the levels at the dark end to produce a more uniformly distributed histogram, the underwater image becomes much clearer. In effect, histogram equalization spreads the pixel intensities over the full range, raising the contrast between the darkest and brightest regions.
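A minimal sketch of histogram equalization for a single channel, using the standard CDF-based mapping (the implementation details are an assumption, not the paper's code):

```python
import numpy as np

def equalize_histogram(channel):
    """Histogram equalization via the cumulative distribution:
    intensities are remapped so they spread over the full 0-255
    range, stretching detail out of the dark end."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    total = channel.size
    # Standard normalization of the CDF to [0, 255].
    lut = np.round((cdf - cdf_min) / float(total - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[channel]

# A dark, low-contrast channel is stretched toward the full range.
dark = np.array([[50, 51], [52, 53]], dtype=np.uint8)
eq = equalize_histogram(dark)
```

Applied to the blue channel of a dark-blue underwater image, this is exactly the "stretch the levels at the dark end" step described above.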
3.3 Noise Filtering and Sharpness
Noise can degrade an underwater image so that important features are no longer observable. Noise can occur during image capture, transmission or processing. Noise is classified by the shape of its probability density function, or of its histogram in the discrete case. One kind of noise that occurs in all recorded images is detector noise, which is due to the discrete nature of radiation [11]. Under some assumptions, this noise can be modeled as independent and additive: each pixel in the noisy image is the sum of the true pixel value and a random, Gaussian-distributed noise value. Gaussian noise is the most common type of detector noise.
Another common form of noise is impulsive noise, caused by errors in data transmission. The corrupted pixels are either set to the maximum value or have single bits flipped, so the image is corrupted with white and/or black pixels, while unaffected pixels remain unchanged. This noise is usually quantified by the percentage of corrupted pixels.
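These two noise models can be simulated as follows (a sketch with assumed parameter values; the standard deviation and corrupted fraction are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(image, sigma=10.0):
    """Additive model: each pixel is the true value plus an
    independent Gaussian-distributed noise sample."""
    noisy = image.astype(float) + rng.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def add_impulse_noise(image, fraction=0.05):
    """Impulsive (salt-and-pepper) model: the given fraction of
    pixels is set to black or white; the rest stay unchanged."""
    noisy = image.copy()
    mask = rng.random(image.shape) < fraction
    noisy[mask] = rng.choice([0, 255], size=int(mask.sum()))
    return noisy

img = np.full((32, 32), 128, dtype=np.uint8)
g = add_gaussian_noise(img)
s = add_impulse_noise(img)
```

Simulated noise of both kinds is useful for testing which filter (mean, median or adaptive) best restores a clean image.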
Filtering is an image processing operation in which the value of a pixel depends on the values of its neighboring pixels. Each pixel is processed with a predefined mask: a weighted sum of the pixels inside the mask is calculated using the weights given by the mask. For images this is called spatial filtering. Spatial filtering helps to remove noise and to sharpen blurred images; some spatial filtering operations are linear and others nonlinear [11].
Filtering techniques for noise removal include median filtering, adaptive filtering and averaging filtering [12],[13]. The median filter is a nonlinear spatial filter: it orders the N pixels covered by the filter mask and sets the center pixel to the median intensity value [9]. The adaptive filter is a low-pass filter that estimates the additive noise power from statistics of a local neighborhood of each pixel before filtering. The averaging (mean) filter is the simplest linear spatial filter and a low-pass filter; it passes a mask over the image, calculates the mean intensity within the mask, and sets the central pixel to this value.
In general, the median filter allows a great deal of high-spatial-frequency detail to pass while remaining very effective at removing noise, provided that fewer than half of the pixels in a smoothing neighborhood are affected.
The median filter, the third module of the framework, will be used to remove noise in underwater images, since it has two main advantages over the averaging filter. First, it is a more robust average: a single very unrepresentative pixel in a neighborhood does not affect the median value significantly. Second, since the median value must actually be one of the values in the neighborhood, the filter does not create new, unrealistic pixel values, so it is much better at preserving sharp edges.
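A minimal sketch of the median filter, assuming a 3x3 mask and edge-replicated borders:

```python
import numpy as np

def median_filter(image, size=3):
    """Nonlinear filter: each pixel is replaced by the median of its
    size x size neighborhood. A single outlier cannot drag the output,
    and the output is always an existing neighborhood value, so sharp
    edges survive better than under mean filtering."""
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = np.median(padded[y:y + size, x:x + size])
    return out

# One unrepresentative (speckle) pixel in a flat region is removed
# completely, while a mean filter would smear it over 9 pixels.
img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255
clean = median_filter(img)
```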
4 Conclusion
Marine biologists and other researchers who work with underwater images would prefer to receive clear images with less noise, yet underwater images are often unclear and very bluish. The framework we propose will make it possible to build an automatic tool that enhances underwater images: the images will be much clearer and the blue cast will be removed.
The main focus of the research is unsupervised color correction using cast removal. For real-time images, however, a parallel processing approach will be the better solution; this is the next stage of the research.
References:
[1] M. Sonka, V. Hlavac, R. Boyle, Image Processing, Analysis, and Machine Vision, Second Edition, ITP, 1999.
[2] E. Wolanski, R. Richmond, L. Cook, H. Sweatman, Mud, Marine Snow and Coral Reefs, American Scientist Online Magazine, January-February, 2003.
[3] Woods Hole Oceanographic Institution, Woods Hole Oceanographic Institution Improves Underwater Imaging with MathWorks Tools, [Online] Available: http://www.sciencedaily.com/encyclopedia/woods_hole_oceanographic_institution, 2002.
[4] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison Wesley, 2nd Edition, 2002.
[5] C. Vertan, M. Malcie, V. Buzulio and V. Popescu, Median Filtering Technique for Vector Valued Signals, Technical Report, Faculty of Electronics and Telecommunications, Politehnica University of Bucharest, Bucharest, Romania, July 2000.
[6] J. Rife, S. M. Rock, Field Experiments in the Control of a Jellyfish Tracking ROV, Proceedings of the IEEE OCEANS Conference, pp. 2031-2038, 2002.
[7] M. D. Fairchild, Color Appearance Models,
Addison Wesley, 1997.
[8] E. H. Land, J. McCann, Lightness and Retinex Theory, Journal of the Optical Society of America, Vol. 61, No. 1, pp. 1-11, 1971.
[9] K. Barnard, V. Cardei, B. Funt, A Comparison of Computational Color Constancy Algorithms - Part I: Experiments with Image Data, IEEE Trans. on Image Processing, Vol. 11, No. 9, Sept. 2002.
[10] F. Gasparini, R. Schettini, P. Gallina, Tunable Cast Remover for Digital Photographs, Proc. SPIE, 5008, pp. 92-100, 2003.
[11] M. Elad, A. Feuer, Superresolution Restoration of an Image Sequence: Adaptive Filtering Approach, IEEE Transactions on Image Processing, Vol. 8, No. 3, March 1999.
[12] C.Y. Suen, M. Berthold, S. Mori, Automatic
Recognition of Handprinted Characters, The
State of The Art, Proceedings of the IEEE,
Vol. 68, No. 4, April 1980.
[13] Q. Wang and C. L. Tan, Matching of Double-sided Document Images to Remove Interference, IEEE Conference on Computer Vision and Pattern Recognition, Hawaii, USA, 2001.