Enhancement of Color Images Using LP Algorithm

International Journal of Engineering Trends and Technology (IJETT) – Volume 4 Issue 5- May 2013
Nisha Devi. K
Student, University College of Engineering, Nagercoil, Tamil Nadu, India
Abstract— The Luminance Preserving Fuzzy Algorithm for color image enhancement modifies the luminance-preserving technique to improve images of poor quality, retaining its luminance-preservation and contrast-enhancement abilities while reducing its computational complexity. The technique uses fuzzy statistics of digital images for their representation and processing, and improves contrast while preserving edge quality. The proposed algorithm avoids the problems of existing techniques by applying fuzzy logic in the RGB color space. PDE-based methods are used to denoise the image.
Keywords— Luminance, enhancement, denoise, fuzzy, statistics, contrast.
I. INTRODUCTION
The captured image is used in several applications, each of which places its own requirements on image quality. Acquired images are often degraded by blur, noise, or both simultaneously. The processing applied to these images depends on how the desired information is to be extracted. A frequent problem in low-level computer vision is therefore to eliminate noise and uninteresting details from an image without blurring semantically important structures such as edges [1], [2]. Two operations are involved: denoising and sharpening. Several deconvolution and denoising techniques have been proposed in the literature: statistics-based filters [3]–[5], wavelets [6], [7], partial-differential-equation-based (PDE) algorithms [8], [9], and variational methods [10], [11]. In particular, a large number of PDE-based methods have been proposed to tackle the problem of image denoising with good preservation of edges and to explicitly account for intrinsic geometry. In this paper, we are interested in PDE-based methods and the luminance-preserving algorithm.
The extension of these methods to multivalued
images can be achieved in two ways. The first one
consists in using a marginal approach that enhances
each color component of the multivalued image
separately using a scalar method.
The second way consists in using a single vector
processing, where different components of the
image are enhanced by considering the correlation
between them [15].
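For illustration, the following Python sketch applies a scalar smoothing step channel by channel in the marginal fashion described above. The smoothing routine is only a stand-in (a plain neighbourhood average), not one of the cited scalar methods; a vector scheme would instead drive all three channels from jointly computed quantities.

```python
import numpy as np

def scalar_smooth(channel):
    # Stand-in for any scalar enhancement method (here, a simple 4-neighbour average).
    return 0.2 * (channel
                  + np.roll(channel, 1, axis=0) + np.roll(channel, -1, axis=0)
                  + np.roll(channel, 1, axis=1) + np.roll(channel, -1, axis=1))

def marginal_enhance(rgb):
    # Marginal approach: every colour component is processed independently,
    # which is simple but can introduce false colours near edges.
    return np.stack([scalar_smooth(rgb[..., c]) for c in range(3)], axis=-1)
```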
II. BACKGROUND
The PDE-based approaches consist in evolving
in time the filtered image under a PDE. When
coupling diffusion and a shock filter, the PDE is a
combination of three terms, i.e.,
∂u/∂t = Cη uηη + Cξ uξξ − Csk F(uηη) |∇u|        (1)

where u(t = 0) = u0 is the input image, |∇u| is the gradient magnitude, η is the gradient direction, and ξ is the direction perpendicular to the gradient; uηη and uξξ therefore represent the diffusion terms in the gradient and level-set directions, respectively. Cη and Cξ are some flow-control coefficients. The first kind of diffusion smooths edges, whereas the second one smooths parallel to the edge on both sides. The last term in (1), which is weighted by Csk, represents the contribution of the shock filter to the enhancement of the image.
Function F(s) should satisfy the conditions F(0) = 0 and s·F(s) ≥ 0. The choice F(s) = sign(s) gives the
classical shock filter. Hence, by considering
adaptive weights as functions of the local contrast,
we can favor the smoothing process under diffusion
terms in homogeneous parts of the image or
enhancement operation under the shock filter at
edge locations.
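As a rough illustration of (1), the sketch below performs one explicit time step on a grayscale image with the classical choice F(s) = sign(s). The finite-difference stencils, the constant weights, and the time step are illustrative assumptions, not the discretization of the cited works.

```python
import numpy as np

def shock_diffusion_step(u, dt=0.1, c_eta=0.2, c_xi=0.8, c_sk=0.5, eps=1e-8):
    """One explicit step of (1) on a 2-D float image u, with F(s) = sign(s)."""
    ux = np.gradient(u, axis=1)
    uy = np.gradient(u, axis=0)
    uxx = np.gradient(ux, axis=1)
    uyy = np.gradient(uy, axis=0)
    uxy = np.gradient(ux, axis=0)
    grad2 = ux**2 + uy**2 + eps
    # Second derivatives along the gradient (eta) and level-set (xi) directions.
    u_eta = (ux**2 * uxx + 2 * ux * uy * uxy + uy**2 * uyy) / grad2
    u_xi = (uy**2 * uxx - 2 * ux * uy * uxy + ux**2 * uyy) / grad2
    shock = np.sign(u_eta) * np.sqrt(grad2)      # F(u_eta) * |grad u|
    return u + dt * (c_eta * u_eta + c_xi * u_xi - c_sk * shock)
```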
The Gilboa coupling model smooths the image while enhancing weak edges [10]. The imaginary part of the solution, which approximates a smoothed second derivative, is used as an edge detector.
On the other hand, Fu developed a region-based shock–diffusion scheme in which the directional diffusion and shock terms are weighted adaptively.
In a more recent work, Bettahar and Stambouli proposed a reliable and stable scheme that couples diffusion to a shock filter with a reactive term.
III. COLOR IMAGES
Only a very few works tackle the shock diffusion
coupling using an approach specifically dedicated
to color images.
A. Tschumperlé–Deriche Model
To avoid the appearance of false colors, the processing applied to the image must be driven in a common and coherent manner for all image components. This type of approach is denoted as "vector processing," as opposed to marginal (multiscalar) processing. Thus, in order to describe vector-valued image variations and structures, Di Zenzo and Lee proposed to use the local variation of the vector gradient norm, which detects edges and corners when its value becomes high. It can be computed from the eigenvalues of a symmetric, positive-semidefinite matrix built from the per-channel gradients.
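A minimal sketch of this idea is given below: the per-channel gradients are accumulated into a 2×2 structure tensor whose eigenvalues grow at edges and corners. The exact weighting and smoothing used in the Tschumperlé–Deriche model are not reproduced here.

```python
import numpy as np

def di_zenzo_eigenvalues(rgb):
    """Eigenvalues of the 2x2 structure tensor G = sum_c grad(u_c) grad(u_c)^T."""
    g11 = np.zeros(rgb.shape[:2], dtype=float)
    g12 = np.zeros_like(g11)
    g22 = np.zeros_like(g11)
    for c in range(rgb.shape[-1]):
        gy, gx = np.gradient(rgb[..., c].astype(float))
        g11 += gx * gx
        g12 += gx * gy
        g22 += gy * gy
    # Closed-form eigenvalues of a symmetric 2x2 matrix; lam_plus is large at
    # edges and corners, while both eigenvalues stay small in flat regions.
    trace = g11 + g22
    disc = np.sqrt(np.maximum((g11 - g22) ** 2 + 4 * g12**2, 0.0))
    return (trace + disc) / 2.0, (trace - disc) / 2.0
```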
This model gives satisfactory results, in that it removes noise and enhances multivalued images. However, despite the use of a vector approach, the shock filter still generates some false colors.
B. PDE-based Method
The PDE-based method extends the Bettahar–Stambouli model to multivalued images: each color component of the enhanced image is processed while taking into account the correlation between the three components.
IV. PROPOSED METHOD
The Luminance Preserving Fuzzy Algorithm for color image enhancement modifies the luminance-preserving technique to improve images that suffer from poor quality, retaining its luminance-preservation and contrast-enhancement abilities while reducing its computational complexity. The technique uses fuzzy statistics of digital images for their representation and processing, and improves contrast while preserving edge quality.
The proposed algorithm avoids the problems of existing techniques by using fuzzy logic in the RGB color space. The luminance-preserving algorithm is based on ant colony optimization. A fuzzy model represents the system by means of a set of fuzzy rules that linguistically describe the relation between the input and the output. This linguistic description reinforces the interpretability of the result.
Although this interpretative capability is one of the distinguishing features of fuzzy modeling, it has often been underrated in favor of improving the accuracy of the fuzzy model. Nevertheless, in recent years, research efforts have been redirected to preserving and enhancing the interpretability of this type of model [2].
One way to improve the interpretability of a fuzzy model consists of identifying rules that are as general as possible, so that each rule covers the highest number of examples and the size of the rule base diminishes. Moreover, extending the syntax of the rules by adding new relational operators to the usual equal-to operator makes it possible to obtain more compact rules.
However, finding the optimal set of such general rules is not an easy task. In this paper, we propose a two-stage method: the first stage provides a fuzzy model with the usual single fuzzy rules, and the second searches for the best combination of general rules that describe this fuzzy model.
In particular, since there are many methods for the first stage in the literature, we focus on the second one, which is mainly a combinatorial problem. In order to solve this combinatorial problem, an ant colony optimization (ACO) method is proposed [3]. This method is a global optimization technique that relies on the emergent behavior arising from the cooperative search of a set of agents called artificial ants. These artificial ants communicate among themselves in an indirect way, called stigmergy, by means of a shared memory that emulates the pheromone deposit that real ants leave along the paths they trace between the nest and the food.
The following subsection introduces the syntax of the general rules (compound rules), which enhance the interpretability of fuzzy models. The subsequent subsections first propose a simple strategy for the aforementioned first stage and then detail the ACO algorithm used for the second stage. Finally, experimental results are provided to show the suitability of the proposal.
A. Single and Compound Rules in Fuzzy Models
Multiple-input single-output (MISO) systems are considered, with n input variables X = {X1, . . . , Xn} defined over the input domain of discourse X = X1 × . . . × Xn, and one output variable Y defined over the output domain of discourse Y. The fuzzy domain of the ith input variable Xi is denoted X̃i = {LXi,1, . . . , LXi,pi}, where pi is the number of fuzzy values associated with the variable and LXi,j represents both the membership function and the linguistic label of the jth value. Analogously, Ỹ = {LY1, . . . , LYq} is the output fuzzy domain, q being the number of fuzzy values and LYj both the jth membership function and its label. Usually, the fuzzy rules for MISO systems contain in their antecedent one premise per input variable, which associates that variable with a label from its corresponding fuzzy domain.
In order to improve the compactness and, thus, the interpretability of the fuzzy rules, the syntax of the rules can be extended both by associating more than one label with each input variable in the antecedent of the rule and by using relational operators other than the usual equal-to operator. Such rules are called compound rules.
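One possible in-memory representation of the two kinds of rules is sketched below (class and field names are hypothetical). A compound rule simply stores a set of admissible labels per premise, which is what lets one compound rule cover many single rules.

```python
from dataclasses import dataclass
from typing import Dict, Set

@dataclass
class SingleRule:
    antecedent: Dict[str, int]        # one label index per input variable, e.g. {"X1": 2, "X2": 0}
    consequent: int                   # index of the output label LY_k

@dataclass
class CompoundRule:
    antecedent: Dict[str, Set[int]]   # admissible labels per premise, e.g. {"X1": {1, 2, 3}, "X2": {0}}
    consequent: int

    def covers(self, rule: "SingleRule") -> bool:
        # A compound rule covers a single rule (over the same variables) when
        # every premise of the single rule uses one of the admissible labels.
        return all(rule.antecedent[v] in labels for v, labels in self.antecedent.items())
```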
B. Fuzzy Modeling With Interpretability Enhancement
In this section, an ACO algorithm is presented
that tries to improve the interpretability of a fuzzy
model described initially as a set of single rules. In
order to do that, the algorithm searches for the best
transformation of these rules in a set of compound
rules. Thus, the complete process of fuzzy modeling
is divided into two sequential phases: first, the
identification of a set of single rules from the
training set of examples, and, second, the
interpretability enhancement of the previously
identified fuzzy model by using the ACO algorithm.
In the following, both phases are detailed.
C. Fuzzy Modeling with the Mixed Method
In order to focus the attention of the paper on the
design of the ACO algorithm, a simple, rapid
prototyping method for fuzzy modeling will be used.
In particular, the following strategy translates the
training examples into a set of single rules.
Given a set of examples E = {e1, . . . , em}, with ei = ([xi1, . . . , xin], yi), the rule base RBE representing them is obtained as the set of single rules

Ri1...in^LYk : LX1,i1 , . . . , LXn,in → LYk,   with k = arg max j=1...q Ω(Ri1...in^LYj),

for all (i1, . . . , in) ∈ {1, . . . , p1} × · · · × {1, . . . , pn}, where Ω is a certainty measure based on E.
This strategy, which is widely used in the literature (e.g., [5], [6]), divides the input space into a fuzzy grid and takes the rule with the maximum certainty degree in every input fuzzy region of this grid. If no example covers a region [LX1,i1 , . . . , LXn,in], no rule is selected; if several rules attain the maximum certainty degree, one of them is selected randomly.
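A sketch of this grid-based selection is given below; the certainty measure is passed in as a user-supplied function, since the exact measure Ω of the paper is not reproduced here.

```python
import itertools
import random

def build_single_rules(examples, labels_per_input, q, certainty):
    """Keep, for every cell of the fuzzy grid, the consequent with maximal certainty."""
    rules = {}
    for region in itertools.product(*(range(p) for p in labels_per_input)):
        scores = [certainty(region, k, examples) for k in range(q)]
        best = max(scores)
        if best <= 0.0:
            continue                                  # no example covers this region
        winners = [k for k, s in enumerate(scores) if s == best]
        rules[region] = random.choice(winners)        # ties broken at random
    return rules
```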
In this paper we use the Mixed Method (MM), recently proposed by the authors in [7], which exploits the ideas previously presented in [8]. This method combines the Wang and Mendel method (WMM) [6] with an extension of Ishibuchi's rule generation method [5] that deals with fuzzy consequents. The WMM first translates each example into the fuzzy rule that best covers it, i.e., with the labels having the highest membership degree, and, second, once all the examples are processed, it selects the rules with maximum certainty degree among all the conflicting ones.
Basically, the MM extends the WMM by adding
rules in the input regions where WMM did not
identify rules. If there are examples covering one of such fuzzy regions, the MM adds the rule in that region having the maximum certainty degree.
D. Interpretability Enhancement with an ACO Algorithm
The general mechanism proposed here for
building a solution consists of the translation of the
set of initial rules (single rules) obtained with the
MM to an equivalent set of compound rules,
equivalent in the sense that it covers positively all
the aforementioned initial rules. In order to do that,
during its journey, an ant can add initial rules to its
description of the fuzzy model or it can extend the
covering of the compound rules already included in
the fuzzy model by adding labels to the premises
that form its antecedent. This second action of an
ant will be called an amplification of the rule.
Since each compound rule stems from a specific initial rule, we will identify a compound rule by the order number of the initial rule from which it stems. In short, the ith compound rule, Ri, will be the one that arose from the ith initial rule.
In the following, the different components and
aspects of the ACO algorithm designed for this
purpose are detailed.
1) Construction graph: According to the description of the mechanism outlined above, the construction graph over which the ants travel has two types of edges: edges that allow the ant to include an initial rule in the set of compound rules, and edges that allow the ant to amplify a compound rule by adding one label to one of its premises.
The second type is denoted by a triple < i, j, k >, which represents the addition to the ith compound rule, Ri, of the kth label, LXj,k, from the fuzzy domain of the jth input variable, Xj. For the first type of edges, in order to describe a sequence of steps homogeneously, the notation < i, 0, 0 > is used, representing the inclusion of the ith initial rule in the set of compound rules to form the ith compound rule, Ri.
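The sketch below enumerates these steps as < i, j, k > triples under the convention just described (1-based indices are assumed for readability); feasibility of each step is checked separately against constraints (c1)-(c3).

```python
from typing import List, Tuple

Step = Tuple[int, int, int]      # (rule index i, input variable j, label k)

def candidate_steps(num_initial_rules: int, labels_per_var: List[int]) -> List[Step]:
    steps: List[Step] = []
    for i in range(1, num_initial_rules + 1):
        steps.append((i, 0, 0))                  # <i, 0, 0>: include the i-th initial rule
        for j, p_j in enumerate(labels_per_var, start=1):
            for k in range(1, p_j + 1):
                steps.append((i, j, k))          # <i, j, k>: add label LX_{j,k} to rule R_i
    return steps
```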
2) How to Build a Solution: Each ant will be randomly
located in an initial rule (no more than one ant for each initial
rule), so that this initial rule will be included in the set of
compound rules of the ant. Subsequently, the ant will select
one step among the feasible transitions at each ant state.
Briefly, the set of feasible transitions will be the steps that can
provide some benefit and that can be taken without leading to
inconsistencies or malformations in the final fuzzy model. An
inconsistency is produced when a compound rule covers
negatively an initial rule (i.e., the compound rule has a
different consequent than some of the initial rules it covers). A
malformation will be provoked when two or more compound rules with the same consequent overlap in the input space, since they provide, partially or totally, redundant information.
Specifically, a feasible step can be described as follows. If the step is an inclusion of an initial rule in the set of compound rules, it will be feasible when it satisfies the following constraint: (c1) the initial rule is not yet covered by any of the rules in the set of compound rules.
If the step is an amplification of a compound rule Ri, it will
be feasible when it satisfies both the following constraints: (c2)
the amplification zone (i.e., the new zone of the input space
covered by the rule after the amplification) does not cover
negatively any initial rule, and (c3) the overlapping zone in
the input space of Ri with any other compound rule Rj in the
set of compound rules is equal to the covering region of Rj
(that is, each compound rule Rj that overlaps with Ri is completely subsumed in it).
It must be stressed that the constraint (c3) does not require
the subsumed rule to be consistent with the rule in
amplification (i.e., that both Ri and Rj have the same
consequent). This is because such a requirement is indirectly assured by constraint (c2): a conflicting rule Rj subsumed in Ri would cover positively at least the initial rule from which it stems, and constraint (c2) would therefore be violated.
Also, it must be noticed that constraints (c2) and (c3) assure
that the amplification zone covers, at least, some not-yet-covered input region of the fuzzy grid or some positive initial
rule (already covered or not yet covered). On the one hand,
constraint (c2) guarantees that the amplification zone does not
cover any negative initial rule and, on the other hand,
constraint (c3) guarantees that either the amplification zone
does not overlap with any other compound rule (and, therefore,
it covers not-yet-covered regions with positive initial rules
and/or with no rule at all) or the amplification covers
completely some other consistent compound rule.
3)Heuristic information: The heuristic information
provides a way to guide the search to the paths containing (a
priori) somewhat promising steps. In order to do that, a
heuristic function must be provided that measures how promising each step is.
Briefly, the heuristic function proposed here will attend to
both accuracy issues, such as the number of positive initial
rules the step covers, and interpretability issues, such as the
width of the covering gained with the step.
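A hypothetical heuristic of this flavour is sketched below; the actual weighting between the accuracy and interpretability terms in the original design is not specified here.

```python
def step_heuristic(newly_covered_positive_rules: int, covering_width: int,
                   w_accuracy: float = 1.0, w_interpretability: float = 0.5) -> float:
    # Larger when the step covers more positive initial rules (accuracy) and
    # when the covering gained by the step is wider (interpretability).
    return w_accuracy * newly_covered_positive_rules + w_interpretability * covering_width
```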
4) Memoristic information: The memoristic information
refers to information shared by all the ants in the algorithm
that stores the past experience collected by all of them about
the good/bad selection of the steps. Therefore, memoristic
information refers to pheromone trails.
The initial amount of pheromone deposited on each arc of the construction graph is inversely proportional to the number of initial rules, NIR, and to the number of input variables, n.
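The defining formula is not reproduced in this copy; a natural reading of "inversely proportional to NIR and n" is sketched below and should be taken as an assumption rather than the paper's exact definition.

```python
def initial_pheromone(num_initial_rules: int, num_input_variables: int) -> float:
    # Assumed form tau_0 = 1 / (NIR * n); any constant of proportionality
    # consistent with "inversely proportional" would serve the same purpose.
    return 1.0 / (num_initial_rules * num_input_variables)
```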
After the fuzzy logic rules are produced, the YCbCr values of the RGB image are given to the fuzzy system and membership values are calculated in order to apply the rules. This leads to the next concept, the membership function. The membership function is a graphical representation of the magnitude of participation of each input. It associates a weighting with each of the inputs that are processed, defines the functional overlap between inputs, and ultimately determines an output response. The rules use
the input membership values as weighting factors to determine
their influence on the fuzzy output sets of the final output
conclusion. Once the functions are inferred, scaled, and
combined, they are defuzzified into a crisp output which
drives the system. There are different membership functions
associated with each input and output response.
There is a unique membership function associated with each
input parameter. The membership functions associate a
weighting factor with values of each input and the effective
rules. These weighting factors determine the degree of
influence or degree of membership (DOM) each active rule
has. By computing the logical product of the membership
weights for each active rule, a set of fuzzy output response magnitudes is produced. All that remains is to combine and
defuzzify these output responses.
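The sketch below illustrates this pipeline on a normalised luminance channel, using triangular membership functions and weighted-average defuzzification; the set shapes, the output centres, and the restriction to the Y channel are illustrative choices, not the paper's exact design.

```python
import numpy as np

def tri(x, a, b, c, eps=1e-9):
    # Triangular membership function with support [a, c] and peak at b.
    return np.maximum(np.minimum((x - a) / (b - a + eps), (c - x) / (c - b + eps)), 0.0)

# Illustrative fuzzy sets over a normalised luminance channel y in [0, 1].
SETS_IN = {"dark": (-0.5, 0.0, 0.5), "mid": (0.0, 0.5, 1.0), "bright": (0.5, 1.0, 1.5)}
OUT_CENTRES = {"dark": 0.15, "mid": 0.5, "bright": 0.9}   # crisp centres of the output sets

def enhance_luminance(y):
    # Degree of membership of each pixel in each input set (the weighting factors) ...
    weights = {name: tri(y, *abc) for name, abc in SETS_IN.items()}
    # ... combined by a weighted average of the output centres (defuzzification).
    num = sum(w * OUT_CENTRES[name] for name, w in weights.items())
    den = sum(weights.values()) + 1e-9
    return num / den
```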
5) Membership Value Determination: Determination methods break down broadly into the following categories. Subjective evaluation and elicitation: since fuzzy sets are usually intended to model people's cognitive states, they can be determined from either simple or sophisticated elicitation procedures. At the very least, subjects simply draw or otherwise specify different membership curves appropriate to a given problem; these subjects are typically experts in the problem area. Alternatively, they are given a more constrained set of possible curves from which to choose, and under more complex methods, users can be tested using psychological methods. Ad hoc forms: while there is a vast array of possible membership function forms, most actual fuzzy control applications draw from a very small set of curves, for example simple forms of fuzzy numbers. This simplifies the problem, for example to choosing just the central value and the slope on either side. Converted frequencies or probabilities: sometimes information taken in the form of frequency histograms or other probability curves is used as the basis to construct a membership function. There are a variety of possible conversion methods, each with its own mathematical and methodological strengths and weaknesses.
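As one illustration of such a conversion, the sketch below rescales a frequency histogram to [0, 1] so that the most frequent value receives full membership; this is only one of the possible methods mentioned above.

```python
import numpy as np

def membership_from_histogram(samples, bins=32):
    # Normalise observed frequencies so the most frequent bin gets membership 1.
    hist, edges = np.histogram(samples, bins=bins)
    centres = (edges[:-1] + edges[1:]) / 2.0
    return centres, hist / max(hist.max(), 1)
```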
V. EXPERIMENTAL RESULTS
The performance of the model is evaluated by comparing it to the marginal, channel-by-channel methods of Alvarez–Mazorra, Kornprobst, Gilboa, Fu, and Bettahar–Stambouli, and to the vector regularization of Tschumperlé–Deriche. These methods were developed particularly to enhance degraded images in the presence of blur and additive noise simultaneously.
A. Direct Observation
For this comparison, the parameters that give the best results for each filter are chosen, except for the number of iterations, which must be the same for an objective comparison. The number of iterations is chosen as a function of the visual quality of the result. For each test image the same number of iterations is used, and for the time step a small value is preferred in order to converge to the solution with more precision in the objective criteria while retaining more detail in the visual aspect of the restored images. Therefore, we can converge to the solution with a small number of iterations relative to the number used in this paper, except for the Tschumperlé–Deriche filter, which employs an adaptive time step.
All models are applied to blurred and noisy images. Artificially blurred images are produced by Gaussian convolution of the original test images, and noisy images are produced by adding random Gaussian noise to the blurred images. The first criterion used is the color peak signal-to-noise ratio (PSNR); an increase in PSNR indicates an improvement in the enhancement.
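A sketch of this evaluation protocol is given below; the blur width, the noise level, and the exact colour PSNR variant are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(rgb, sigma_blur=1.5, sigma_noise=10.0, seed=0):
    # Gaussian convolution of each channel followed by additive Gaussian noise.
    rng = np.random.default_rng(seed)
    blurred = np.stack([gaussian_filter(rgb[..., c].astype(float), sigma_blur)
                        for c in range(3)], axis=-1)
    return np.clip(blurred + rng.normal(0.0, sigma_noise, rgb.shape), 0.0, 255.0)

def color_psnr(reference, restored, peak=255.0):
    mse = np.mean((reference.astype(float) - restored.astype(float)) ** 2)
    return 10.0 * np.log10(peak**2 / mse)      # higher PSNR means better restoration
```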
VI. COMPARISON WITH EXISTING SYSTEM
VII. CONCLUSION
A novel filter based on fuzzy logic for color image enhancement in the RGB space is proposed. This filter efficiently reduces noise and sharpens edges. Our analysis shows that the proposed method is more efficient than the Alvarez–Mazorra, Kornprobst, Gilboa, Fu, Bettahar–Stambouli, and Tschumperlé–Deriche models at color image restoration in the presence of blur and noise simultaneously: it denoises homogeneous parts of the multivalued image while keeping edges enhanced. Moreover, because it uses a single vector processing with the specific reaction term, the filter does not create the false colors that can appear when each component of the image is enhanced separately.
REFERENCES
[1] H. Kaiqi, W. Zhenyang, and W. Qiao, "Image enhancement based on the statistics of visual representation," Image Vis. Comput., vol. 23, no. 1, pp. 51–57, Jan. 2005.
[2] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "Image denoising by sparse 3D transform-domain collaborative filtering," IEEE Trans. Image Process., vol. 16, no. 8, pp. 2080–2095, Aug. 2007.
[3] G. Y. Chen and T. D. Bui, "Multi-wavelet denoising using neighboring coefficients," IEEE Signal Process. Lett., vol. 10, no. 7, pp. 211–214, Jul. 2003.
[4] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Trans. Pattern Anal. Mach. Intell., vol. 12, no. 7, pp. 629–639, Jul. 1990.
[5] L. Alvarez, P. L. Lions, M. Morel, and T. Coll, "Image selective smoothing and edge detection by nonlinear diffusion II," SIAM J. Numer. Anal., vol. 29, no. 3, pp. 845–866, Jun. 1992.
[6] F. Catté, P. L. Lions, M. Morel, and T. Coll, "Image selective smoothing and edge detection by nonlinear diffusion," SIAM J. Numer. Anal., vol. 29, no. 1, pp. 182–193, Feb. 1992.
[7] J. Weickert, B. M. ter Haar Romeny, and M. A. Viergever, "Efficient and reliable schemes for nonlinear diffusion filtering," IEEE Trans. Image Process., vol. 7, no. 3, pp. 398–410, Mar. 1998.
[8] S. Osher and L. I. Rudin, "Feature-oriented image enhancement using shock filters," SIAM J. Numer. Anal., vol. 27, no. 4, pp. 919–940, Aug. 1990.
[9] P. Kornprobst, R. Deriche, and G. Aubert, "Image coupling, restoration and enhancement via PDE's," in Proc. Int. Conf. Image Process., Santa Barbara, CA, 1997, vol. 2, pp. 458–461.
[10] G. Gilboa, N. Sochen, and Y. Y. Zeevi, "Image enhancement and denoising by complex diffusion processes," IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 8, pp. 1020–1036, Aug. 2004.
[11] S. Fu, Q. Ruan, W. Wang, and J. Chen, "Region-based shock-diffusion equation for adaptive image enhancement," in Proc. IWICPAS, 2006, vol. 4153, Lect. Notes Comput. Sci., pp. 387–395.
[12] S. Bettahar and A. B. Stambouli, "Shock filter coupled to curvature diffusion for image denoising and sharpening," Image Vis. Comput., vol. 26, no. 11, pp. 1481–1489, Nov. 2008.
[13] M. Dorigo, A. Colorni, and V. Maniezzo, "The ant system: Optimization by a colony of cooperating agents," IEEE Trans. Syst., Man, Cybern. B, vol. 26, no. 1, pp. 29–41, 1996.
[14] L. Wang and J. Mendel, "Generating fuzzy rules by learning from examples," IEEE Trans. Syst., Man, Cybern., vol. 22, no. 6, pp. 1415–1427, 1992.
[15] P. Carmona, J. Castro, and J. Zurita, "Strategies to identify fuzzy rules directly from certainty degrees: A comparison and a proposal," IEEE Trans. Fuzzy Syst., vol. 12, no. 5, pp. 631–640, 2004.
[16] J. Casillas, O. Cordón, and F. Herrera, "COR: A methodology to improve ad hoc data-driven linguistic rule learning methods by inducing cooperation among rules," IEEE Trans. Syst., Man, Cybern. B, vol. 32, no. 4, pp. 526–537, 2002.
[17] M. Dorigo and L. Gambardella, "Ant colony system: A cooperative learning approach to the traveling salesman problem," IEEE Trans. Evol. Comput., vol. 1, no. 1, pp. 53–66, 1997.
[18] Yuksel, "Application of Type-2 fuzzy logic filtering to reduce noise in color images," IEEE Trans., vol. 7, 2012.