VECTOR AMPLIFICATION FOR COLOR-DEPENDENT IMAGE FILTERING
Stephen J. Sangwine and Barnabas N. Gatsheni
University of Essex
Department of Electronic Systems Engineering
Wivenhoe Park, Colchester, CO4 3SQ, UK
Email: s.sangwine@ieee.org

Todd A. Ell
5620 Oak View Court,
Savage, Minnesota, USA
Email: t.ell@ieee.org
ABSTRACT
In previous work the authors have studied the problem of
color-dependent linear vector image filtering, particularly
based on resolution of pixel vectors into color-space directions parallel to and perpendicular to the color-space direction of a chosen color of interest (COI). A significant problem with this approach has been the lack of directional resolution, and consequent lack of good color-specificity in the
resulting filters. In this paper for the first time we present
an approach to increasing the directional resolution using
a concept which we call vector amplification, that is, increasing the magnitude of pixel vectors close to the direction of the COI. The amplification is reversible, and thus
it is possible to implement a filter by parallel/perpendicular
resolution coupled with vector amplification; filtering of the
image component in the direction parallel to the COI; and
combination of the resolved components, coupled with vector attenuation.
1. INTRODUCTION
Linear vector filters are a recent development in color image
processing and in their most recent papers [1] the authors
have studied approaches to color-dependent linear vector filtering. A color-dependent filter has a response that depends
on the direction in color space of pixel vectors as well as
the conventional dependence on horizontal and vertical spatial frequencies within the image. This makes possible, for
example, a filter which is low-pass for pixel vectors in a
chosen direction in color-space, but all-pass for other pixel
vectors. This could be combined with conventional directional sensitivity in the image plane to yield more complex
filters. The approach studied so far, and developed further
in this paper, relies on the resolution of pixel vectors into
components parallel to and perpendicular to a chosen direction in color-space which we refer to as the color of interest,
or COI.
The work of Barnabas N. Gatsheni was funded by the UK Engineering
and Physical Sciences Research Council under grant GR/M 45764.
In this paper we present a new idea to make such color-dependent filters more directionally specific. We call this
concept vector amplification because it increases the magnitude of pixel vectors with directions close to that of the
COI. A somewhat similar idea has been used in non-linear
vector filtering, using weights which depend on direction in
color-space.
A central concern in the authors' work is the use of
quaternion algebra to provide geometrically meaningful algebraic manipulation of filter coefficients, so that a filter developed conceptually using a number of elementary operations can be implemented using, at most, two hypercomplex convolutions.
We also utilise gray-centered RGB color-space. In this
color space, the unit RGB cube is translated so that the coordinate (0, 0, 0) represents mid-gray (half-way between
black and white). The translation is achieved by subtracting (½, ½, ½) from each pixel value in unit RGB space.
This translation is trivially reversed by
adding (½, ½, ½). To explain what this translation of origin
achieves, think of all pixel values as vectors directed away
from the origin of the image color space. Since the origin
in the gray-centered RGB color space represents mid-gray,
the pixel vector represents by its length and direction how
much the pixel color differs from mid-gray. All pixels with
a common direction in this color-space share the same hue.
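As an illustration of this translation and its inverse, a minimal NumPy sketch follows (the helper names are illustrative, not from the paper, and the image is assumed to hold unit-RGB values in [0, 1]):

import numpy as np

def to_gray_centered(rgb):
    # Subtract (1/2, 1/2, 1/2) so that (0, 0, 0) represents mid-gray.
    return np.asarray(rgb, dtype=float) - 0.5

def from_gray_centered(gc):
    # Reverse the translation by adding (1/2, 1/2, 1/2).
    return np.asarray(gc, dtype=float) + 0.5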
The starting point for the discussion in this paper is the
basic filtering scheme shown in Figure 1 [1]. The aim of
this filtering scheme is to apply low-pass filtering to a particular
color in the image (the COI), ideally with minimal effect on other
colors. This is effected by resolving the image into two components,
one parallel to the COI, and one perpendicular to the COI, and then
applying the filtering operation to the parallel component only, as
shown in Figure 1. It should be noted that the separation into
parallel and perpendicular images need not actually be done: it is
possible to merge this algebraically into the convolution coefficients
using quaternion algebra, as was shown in [1].

Fig. 1. Color selective filtering scheme used in [1].

Fig. 2. Vector p resolved into the direction of a colour of interest (COI).
The resolution of an image into parallel and perpendicular directions consists of resolving each pixel vector as
shown in Figure 2. This can be done by the usual vector
methods based on dot products, but it can also be expressed
algebraically in quaternion algebra as [2]:
p⊥ = ½ (p + vpv)        (1)

p∥ = ½ (p − vpv)        (2)
where v is a unit vector in the direction of the COI, and
the products are quaternion products. A significant problem with this approach is that the resolution into parallel
and perpendicular directions is relatively insensitive to vector direction. This is because the magnitude of the parallel
component is determined by the dot product of the vector p
and a unit vector in the COI direction:
|p∥| = p · v = |p| cos θ        (3)
and the cosine function decreases only slowly with angle
between 0° and 45°. This is clearly not adequate because
a 45° cone around the COI direction includes a significant
fraction of color-space.
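Equations (1)-(3) can be illustrated for a single pixel with a short NumPy sketch built around a general quaternion product; the names qmul and resolve are inventions of the sketch rather than the authors' code, and v_rgb is assumed to be a unit vector in the COI direction:

import numpy as np

def qmul(a, b):
    # Hamilton product of two quaternions given as (w, x, y, z) arrays.
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def resolve(p_rgb, v_rgb):
    # Resolve a gray-centered pixel vector p about the unit COI direction v
    # using equations (1)-(2): p_perp = (p + vpv)/2, p_par = (p - vpv)/2.
    p = np.concatenate(([0.0], p_rgb))   # pure quaternion 0 + R i + G j + B k
    v = np.concatenate(([0.0], v_rgb))
    vpv = qmul(qmul(v, p), v)
    p_perp = 0.5 * (p + vpv)
    p_par = 0.5 * (p - vpv)
    return p_par[1:], p_perp[1:]         # return the vector parts only

# The length of p_par equals p . v = |p| cos(theta), as in equation (3).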
2. VECTOR AMPLIFICATION
To improve the directional selectivity of our filter, we propose some
form of vector amplification. By this we mean an operation that will
increase the magnitude of the resolved parallel component for pixel
vectors whose direction is close to the direction of the COI, leaving
nearly unchanged those vectors whose direction is far from that of
the COI. Therefore, when we filter the parallel component of the
image, greater weight is given to pixels with directions close to the
COI.

Fig. 3. Piecewise-linear function used in vector amplification.
Ideally, this amplification would be expressed algebraically in the quaternion algebra, but it is not yet known
whether such a method exists. An affine transformation of
color-space is a possible candidate, but there are many such
transformations, and we are, as yet, in the early stages of
exploring the possibilities.
So far we have considered and experimented with one
approach, as follows. We calculate a gain factor G, for each
pixel in the image, which depends on the ratio, M, of the
magnitudes of the parallel and perpendicular components:
M = |p∥| / |p⊥|        (4)
This is clearly infinite when the perpendicular component
vanishes (the pixel has the same direction as the COI), but
we overcome this by the definition of the gain G:

        ⎧ 1,   M ≤ 1
    G = ⎨ M,   1 < M < K        (5)
        ⎩ K,   M ≥ K
where K is a maximum gain value which determines the
degree of directional sensitivity. (Clearly K = 1 gives no
vector amplification.) The inverse of this function is simply
1/G. Graphically, the piecewise linear function is as shown
in Figure 3. To show what the vector amplification operation achieves, Figures 4 and 5 show an image before and
after vector amplification in the direction of the yellow color
of the inside of the flowers. Notice that the yellow color has become
brighter and more saturated in the amplified image. This amplification
is reversible (provided the amplified image is not stored in a format
with limited dynamic range) by dividing by the gain used in the
amplification. Figure 6 is a grayscale visualization of the gain image
used to create the amplified image (the original image is multiplied
point-by-point by the gain image).

Fig. 4. Original tulips image.

Fig. 5. Tulips image with vector amplification applied in the yellow
color-space direction.

Fig. 6. Tulips gain image represented as a grayscale image. Dark areas
represent low gain, bright areas represent high gain.
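As a concrete illustration of equations (4) and (5) and of the reversibility of the amplification, the following minimal NumPy sketch (illustrative only, not the authors' implementation) computes the per-pixel gain and scales the parallel component; the small eps guard for a vanishing perpendicular component and the helper names gain and amplify are assumptions of the sketch.

import numpy as np

def gain(p_par, p_perp, K, eps=1e-12):
    # Per-pixel gain G of equation (5), from the ratio M of equation (4).
    # Clipping M into [1, K] reproduces the piecewise-linear definition;
    # the eps guard against a zero perpendicular component is an addition.
    M = np.linalg.norm(p_par, axis=-1) / (np.linalg.norm(p_perp, axis=-1) + eps)
    return np.clip(M, 1.0, K)

def amplify(p_par, G):
    # Scale each parallel pixel component by its gain; dividing by the
    # same gain (multiplying by 1/G) reverses the operation exactly.
    return p_par * G[..., None]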
3. FILTER IMPLEMENTATION

For experimental purposes, we have used an explicit implementation of
the filtering method, based on parallel/perpendicular separation,
vector amplification, convolution of the parallel component with a
scalar mask, vector attenuation, and addition of the filtered parallel
component to the unfiltered perpendicular component. Figure 7 shows
the scheme. The blocks labelled G and G−1 calculate (per pixel) the
piecewise-linear function G shown in Figure 3 using the ratio of the
moduli of the components of the pixel parallel and perpendicular to
the COI (that is, these blocks also compute M, as defined in
equation 4). The amplifier blocks implement amplification and
attenuation of the parallel component of the pixel, and the filter
block F implements convolution with a scalar mask. Notice that the
attenuation values are recomputed after convolution rather than using
the inverse of the gain values before convolution.

Fig. 7. Color selective filtering scheme based on filtering of the
amplified parallel image component.
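A compact end-to-end sketch of the scheme in Figure 7 is given below in NumPy/SciPy. It is illustrative rather than the authors' code: the function name color_selective_filter and its parameters are inventions of the sketch, the eps guard is an added safeguard, and recomputing the gain from the filtered parallel component is one plausible reading of the remark about the G−1 block.

import numpy as np
from scipy.ndimage import convolve

def color_selective_filter(img_rgb, coi_rgb, mask, K, eps=1e-12):
    # Work in gray-centered RGB and form a unit vector in the COI direction.
    p = np.asarray(img_rgb, dtype=float) - 0.5
    v = np.asarray(coi_rgb, dtype=float) - 0.5
    v = v / np.linalg.norm(v)

    # Resolve every pixel into components parallel and perpendicular to v.
    p_par = np.tensordot(p, v, axes=([-1], [0]))[..., None] * v
    p_perp = p - p_par

    def gain(par, perp):
        # Equations (4)-(5): M = |p_par| / |p_perp|, limited to the range [1, K].
        M = np.linalg.norm(par, axis=-1) / (np.linalg.norm(perp, axis=-1) + eps)
        return np.clip(M, 1.0, K)[..., None]

    # Amplify the parallel component, low-pass filter it channel by channel
    # with the scalar mask, then attenuate with gains recomputed after the
    # convolution, and add back the unfiltered perpendicular component.
    amplified = p_par * gain(p_par, p_perp)
    filtered = np.stack([convolve(amplified[..., c], mask, mode='nearest')
                         for c in range(3)], axis=-1)
    attenuated = filtered / gain(filtered, p_perp)
    return attenuated + p_perp + 0.5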
4. RESULTS

Figure 8 shows the result of applying the filtering method described
in the previous section to the image of Figure 4. The COI was the
direction of the yellow color inside the petals in gray-centered RGB
color-space, and the amplified parallel component was filtered with a
binomial 5 × 5 averager. The amplification limit K was 1.5. Figure 9
is the difference between the images in Figures 4 and 8, scaled to
make the differences more visible. For comparison with Figure 8,
Figure 10 shows the result when the same binomial mask is convolved
with the whole image, rather than just with the separated and
amplified component in the yellow color-space direction.
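For reference, the binomial 5 × 5 averager can be built as the outer product of the 1-D binomial kernel (1, 4, 6, 4, 1)/16; this construction and the example yellow COI value below are assumptions of the sketch, not values taken from the paper (apart from K = 1.5).

import numpy as np

# 5 x 5 binomial averager as the outer product of (1, 4, 6, 4, 1)/16.
b = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
mask = np.outer(b, b)

# Hypothetical call of the sketch above, with an illustrative yellow COI
# and the amplification limit K = 1.5 used in the experiment:
# result = color_selective_filter(img, coi_rgb=(0.9, 0.9, 0.1), mask=mask, K=1.5)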
Fig. 8. Filtered tulips image using vector amplification applied in
the yellow color-space direction, and low-pass filtering with a
binomial 5 × 5 mask.

Fig. 9. Difference between the original and filtered images in
Figures 4 and 8 respectively (with a mid-gray offset).

Fig. 10. Filtered tulips image after low-pass filtering with the same
binomial 5 × 5 mask.
5. DISCUSSION

The results in the previous section show the feasibility of
implementing color-dependent filters using some form of vector
amplification. This is a new approach to the problem, and it improves
on what has been done before because it is more color-selective.
However, a significant problem is the lack of an algebraic method of
implementing the amplification step. The piecewise-linear function
proposed in equation 5 is not a very satisfactory method to amplify
vectors in the direction of the COI. It does implement a variable gain
dependent on the direction of the vector p relative to the COI, so
that vectors with directions close to the COI are amplified more than
those with directions further away, and vectors subtending angles of
more than 45° to the COI are left untouched.
6. CONCLUSION
We have presented a new approach to the construction of
linear vector color-image filters based on the idea of amplifying pixel vectors aligned with a chosen direction in color-space. We have shown one approach to the problem of vector amplification to illustrate the concept, and we have outlined the limitations of the approach.
Future work must include finding a better way to implement the vector amplification step so that the process can be
expressed algebraically.
7. REFERENCES
[1] S. J. Sangwine, B. N. Gatsheni, and T. A. Ell, “Linear colour-dependent image filtering based on vector
decomposition,” in Proceedings of EUSIPCO 2002,
XI European Signal Processing Conference, Toulouse,
France, 3–6 Sept. 2002, vol. II, pp. 274–277, European
Association for Signal Processing.
[2] T. A. Ell and S. J. Sangwine, "Hypercomplex Wiener-Khintchine theorem with application to color image
correlation,” in IEEE International Conference on Image Processing (ICIP 2000), Vancouver, Canada, 11–14
Sept. 2000, vol. II, pp. 792–795, Institute of Electrical
and Electronics Engineers.