A locally anisotropic model for image texture extraction

Loïc Piffet
1 Introduction
This work follows the one done in [10], in which we proposed a second order variational model to extract the texture from an image $u_d$, supposing that $u_d$ can be decomposed into two components: a cartoon part $u$ and a part $v$ containing strong oscillations (noise and texture). The model was the following:
$$\inf_{(u,v)\in BV(\Omega)\times BV^2(\Omega)} \frac{1}{2}\|u_d - u - v\|^2_{L^2(\Omega)} + \lambda\,|u|_{BV(\Omega)} + \mu\,|v|_{BV^2(\Omega)}, \qquad (1)$$
with $\lambda, \mu \ge 0$, and where $\|u_d - u - v\|^2_{L^2(\Omega)}$ is the fitting data term. Here, we will no longer work with this model, but with a Tychonov-like model introduced in [4] for image denoising, supposing that the studied image lies in the space of bounded Hessian functions $BV^2(\Omega)$, which is equivalent to asserting that $u = 0$ in the previous model. So, we use the following problem:
$$\inf_{v\in BV^2(\Omega)} \frac{1}{2}\|u_d - v\|^2_{L^2(\Omega)} + \mu\,|v|_{BV^2(\Omega)}, \qquad (2)$$
Loïc Piffet
MAPMO, Université d'Orléans, FRANCE. e-mail: loic.piffet@univ-orleans.fr
with $\mu \ge 0$.
We prefer to use this last model rather than (1) for numerical reasons: the computation time is significantly lower.
The $BV^2$-component $v$ represents the regular part of the image, and the remainder term $w := u_d - v$ is the part containing noise, texture, and probably a part of the contour lines. We can consider that a decomposition model is efficient for texture extraction when all the oscillating information is contained in $w$, and when $v$ only contains the regular part together with the geometrical information. It turns out that (2), even though it is a good model for image denoising, is not efficient for texture extraction: in this case it is the component $w$ that contains the major part of the geometrical information of the original image. The goal of this work is therefore to propose a way of removing contour lines from the component $w$. To that end, we keep the idea introduced in [10], which consists in making the model anisotropic. This improves the decomposition but does not make it really satisfactory. We propose here to refine this idea by replacing, in the model, the global anisotropy by a local anisotropy, so that each contour line is treated specifically.
We first show the limits of the original and globally anisotropic models, and then introduce the locally anisotropic one. For now, we only propose numerical results for two images (figure 1).
Fig. 1 Test images (a) and (b).
2 The second order variational model
2.1 Problem in infinite dimension
We consider the following function, defined on $BV^2(\Omega)$:
$$\mathcal{F}(v) = \frac{1}{2}\|u_d - v\|^2_{L^2(\Omega)} + \lambda\, TV^2(v,\Omega), \qquad (3)$$
where $\lambda \ge 0$. We are looking for a solution to the optimization problem
$$\inf\left\{\, \mathcal{F}(v) \;\middle|\; v\in BV^2(\Omega),\ \frac{\partial v}{\partial n}\Big|_{\partial\Omega} = 0 \,\right\}. \qquad (4)$$
We give a general existence and uniqueness result.
Theorem 1. Assume that λ > 0. Problem (4) has a unique solution v.
We give here the proof in broad strokes.
Proof. Let $(v_n)_n$ be a minimizing sequence of (4). It is clear that $(J_2(v_n))_n$ and $(\|v_n\|_{L^2(\Omega)})_n$ are bounded. The Poincaré-Wirtinger inequality proved by M. Bergounioux in [3] allows us to deduce that $(v_n)_n$ is bounded in $W^{1,1}(\Omega)$, and thus bounded in $BV^2(\Omega)$ (we recall that $\|v\|_{BV^2(\Omega)} = J_2(v) + \|v\|_{W^{1,1}(\Omega)}$). Injection theorems given by F. Demengel in [8] imply that $(v_n)_n$ converges (up to a subsequence) in $W^{1,1}(\Omega)$ to $v^* \in BV^2(\Omega)$. We then show that $v^*$ is a solution of (4), which proves existence. Uniqueness comes from the strict convexity of the cost functional $\mathcal{F}$.
If the image is noisy, the noise (as the texture) will appear in the oscillating part of the picture $w := u_d - v$. In the denoising case, such an approach has already been used by Hinterberger and Scherzer in [12] with the space $BV^2(\Omega)$. However, their algorithm is different from the one proposed in [4].
2.2 The discretized problem
All the results given in this part are proven in [4]. The problem (4) can be rewritten as
$$\inf_{v\in BV^2(\Omega)} \frac{\|u_d - v\|^2_{L^2(\Omega)}}{2\lambda} + TV^2(v,\Omega). \qquad (5)$$
We still have an existence and uniqueness result.
Theorem 2. Problem (5) has a unique solution for every $\lambda > 0$.
Proof. The function $\mathcal{F}_d(v) := \frac{\|u_d - v\|^2_{L^2(\Omega)}}{2\lambda} + TV^2(v,\Omega)$ is continuous, coercive and strictly convex.
In order to implement the solution, we first specify the discretization process.
2.2.1 Discretization of the problem (5)
As for the ROF model, we only consider here square images of size $N \times N$. To define a discrete version of the second total variation, we have to introduce a discrete version of the Hessian operator. If $v \in X := \mathbb{R}^{N\times N}$, the Hessian of $v$, denoted $(Hv)$, is a vector of $Z = X^4$, defined by
$$(Hv)_{i,j} = \big((Hv)^{11}_{i,j}, (Hv)^{12}_{i,j}, (Hv)^{21}_{i,j}, (Hv)^{22}_{i,j}\big),$$
with, for $i,j = 1,\dots,N$,
$$(Hv)^{11}_{i,j} = \begin{cases} v_{i+1,j} - 2v_{i,j} + v_{i-1,j} & \text{if } 1 < i < N,\\ v_{i+1,j} - v_{i,j} & \text{if } i = 1,\\ v_{i-1,j} - v_{i,j} & \text{if } i = N, \end{cases}$$
$$(Hv)^{12}_{i,j} = \begin{cases} v_{i,j+1} - v_{i,j} - v_{i-1,j+1} + v_{i-1,j} & \text{if } 1 < i \le N,\ 1 \le j < N,\\ 0 & \text{otherwise}, \end{cases}$$
$$(Hv)^{21}_{i,j} = \begin{cases} v_{i+1,j} - v_{i,j} - v_{i+1,j-1} + v_{i,j-1} & \text{if } 1 \le i < N,\ 1 < j \le N,\\ 0 & \text{otherwise}, \end{cases}$$
$$(Hv)^{22}_{i,j} = \begin{cases} v_{i,j+1} - 2v_{i,j} + v_{i,j-1} & \text{if } 1 < j < N,\\ v_{i,j+1} - v_{i,j} & \text{if } j = 1,\\ v_{i,j-1} - v_{i,j} & \text{if } j = N. \end{cases}$$
The discrete second total variation of $v$ is then defined by
$$J_2(v) = \sum_{1\le i,j\le N} \big\|(Hv)_{i,j}\big\|_{\mathbb{R}^4}. \qquad (6)$$
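For readers who want to experiment with the discrete operators, the following NumPy sketch implements the Hessian $H$ and the discrete second total variation $J_2$ exactly as defined above (images are stored as $N \times N$ arrays; the function names are ours and are not taken from any published code).

import numpy as np

def hessian(v):
    """Discrete Hessian (Hv) of an N x N image v, returned as an array of
    shape (N, N, 4) with components ordered (11, 12, 21, 22).
    Boundary cases follow the one-sided differences given above."""
    N = v.shape[0]
    H = np.zeros((N, N, 4))
    # (Hv)^{11}: second difference in i, one-sided at i = 1 and i = N
    H[1:-1, :, 0] = v[2:, :] - 2 * v[1:-1, :] + v[:-2, :]
    H[0, :, 0] = v[1, :] - v[0, :]
    H[-1, :, 0] = v[-2, :] - v[-1, :]
    # (Hv)^{12}: mixed difference, zero on the first row and last column
    H[1:, :-1, 1] = v[1:, 1:] - v[1:, :-1] - v[:-1, 1:] + v[:-1, :-1]
    # (Hv)^{21}: mixed difference, zero on the first column and last row
    H[:-1, 1:, 2] = v[1:, 1:] - v[:-1, 1:] - v[1:, :-1] + v[:-1, :-1]
    # (Hv)^{22}: second difference in j, one-sided at j = 1 and j = N
    H[:, 1:-1, 3] = v[:, 2:] - 2 * v[:, 1:-1] + v[:, :-2]
    H[:, 0, 3] = v[:, 1] - v[:, 0]
    H[:, -1, 3] = v[:, -2] - v[:, -1]
    return H

def J2(v):
    """Discrete second total variation (6): sum over pixels of |(Hv)_{i,j}|_{R^4}."""
    return np.sqrt((hessian(v) ** 2).sum(axis=2)).sum()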
We get the following discretized problem:
$$\inf_{v\in X} \mathcal{F}_d, \qquad (7)$$
where
$$\mathcal{F}_d = \frac{\|u_d - v\|^2_X}{2\lambda} + J_2(v). \qquad (8)$$
In the following, we adapt methods used by A. Chambolle for the ROF model to the space $BV^2(\Omega)$.
2.2.2 Computation of the solution of the discrete problem (8)
Optimality conditions associated with the problem (8) lead us to consider the Legendre-Fenchel conjugate of $J_2$. Theorem 3 gives the expression of $J_2^*$. We first define the adjoint operator of $H$ (which is the discretized version of the second divergence operator):
$$\forall p \in X^4,\ \forall v \in X,\qquad \langle H^* p, v\rangle_X = \langle p, Hv\rangle_{X^4}.$$
We check that $H^*: X^4 \to X$ satisfies, for all $p = (p^{11}, p^{12}, p^{21}, p^{22}) \in X^4$,
$$
(H^* p)_{i,j} =
\begin{cases} p^{11}_{i-1,j} - 2p^{11}_{i,j} + p^{11}_{i+1,j} & \text{if } 1 < i < N,\\ p^{11}_{i+1,j} - p^{11}_{i,j} & \text{if } i = 1,\\ p^{11}_{i-1,j} - p^{11}_{i,j} & \text{if } i = N, \end{cases}
\;+\;
\begin{cases} p^{22}_{i,j-1} - 2p^{22}_{i,j} + p^{22}_{i,j+1} & \text{if } 1 < j < N,\\ p^{22}_{i,j+1} - p^{22}_{i,j} & \text{if } j = 1,\\ p^{22}_{i,j-1} - p^{22}_{i,j} & \text{if } j = N, \end{cases}
$$
$$
+\;
\begin{cases} p^{12}_{i,j-1} - p^{12}_{i,j} - p^{12}_{i+1,j-1} + p^{12}_{i+1,j} & \text{if } 1 < i,j < N,\\ p^{12}_{i+1,j} - p^{12}_{i+1,j-1} & \text{if } i = 1,\ 1 < j < N,\\ p^{12}_{i,j-1} - p^{12}_{i,j} & \text{if } i = N,\ 1 < j < N,\\ p^{12}_{i+1,j} - p^{12}_{i,j} & \text{if } 1 < i < N,\ j = 1,\\ p^{12}_{i,j-1} - p^{12}_{i+1,j-1} & \text{if } 1 < i < N,\ j = N,\\ p^{12}_{i+1,j} & \text{if } i = 1,\ j = 1,\\ -p^{12}_{i+1,j-1} & \text{if } i = 1,\ j = N,\\ -p^{12}_{i,j} & \text{if } i = N,\ j = 1,\\ p^{12}_{i,j-1} & \text{if } i = N,\ j = N, \end{cases}
$$
$$
+\;
\begin{cases} p^{21}_{i-1,j} - p^{21}_{i,j} - p^{21}_{i-1,j+1} + p^{21}_{i,j+1} & \text{if } 1 < i,j < N,\\ p^{21}_{i,j+1} - p^{21}_{i,j} & \text{if } i = 1,\ 1 < j < N,\\ p^{21}_{i-1,j} - p^{21}_{i-1,j+1} & \text{if } i = N,\ 1 < j < N,\\ p^{21}_{i,j+1} - p^{21}_{i-1,j+1} & \text{if } 1 < i < N,\ j = 1,\\ p^{21}_{i-1,j} - p^{21}_{i,j} & \text{if } 1 < i < N,\ j = N,\\ p^{21}_{i,j+1} & \text{if } i = 1,\ j = 1,\\ -p^{21}_{i,j} & \text{if } i = 1,\ j = N,\\ -p^{21}_{i-1,j+1} & \text{if } i = N,\ j = 1,\\ p^{21}_{i-1,j} & \text{if } i = N,\ j = N. \end{cases}
\qquad (9)
$$
We can prove the following result (see [4]):
Theorem 3. The Legendre-Fenchel conjugate of $J_2$ is $J_2^* = \mathbf{1}_{K_2}$, where
$$K_2 := \big\{ H^* p \;\big|\; p \in X^4,\ \|p_{i,j}\|_{\mathbb{R}^4} \le 1,\ \forall i,j = 1,\dots,N \big\} \subset X. \qquad (10)$$
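For a numerical implementation, formula (9) need not be typed case by case: since $H^*$ is by definition the adjoint of the discrete Hessian, each slice update used to compute $Hv$ can simply be transposed into an accumulation. The sketch below (our naming; it assumes the $(N, N, 4)$ component layout of the `hessian` listing above) applies $H^*$ that way; one can check numerically that $\langle H^*p, v\rangle_X = \langle p, Hv\rangle_{X^4}$.

import numpy as np

def hessian_adjoint(p):
    """Adjoint H* of the discrete Hessian (discrete second divergence),
    obtained by transposing each slice assignment of `hessian`."""
    N = p.shape[0]
    out = np.zeros((N, N))
    # adjoint of the (Hv)^{11} block
    q = p[:, :, 0]
    out[2:, :] += q[1:-1, :]; out[1:-1, :] -= 2 * q[1:-1, :]; out[:-2, :] += q[1:-1, :]
    out[1, :] += q[0, :]; out[0, :] -= q[0, :]
    out[-2, :] += q[-1, :]; out[-1, :] -= q[-1, :]
    # adjoint of the (Hv)^{12} block
    q = p[:, :, 1]
    out[1:, 1:] += q[1:, :-1]; out[1:, :-1] -= q[1:, :-1]
    out[:-1, 1:] -= q[1:, :-1]; out[:-1, :-1] += q[1:, :-1]
    # adjoint of the (Hv)^{21} block
    q = p[:, :, 2]
    out[1:, 1:] += q[:-1, 1:]; out[:-1, 1:] -= q[:-1, 1:]
    out[1:, :-1] -= q[:-1, 1:]; out[:-1, :-1] += q[:-1, 1:]
    # adjoint of the (Hv)^{22} block
    q = p[:, :, 3]
    out[:, 2:] += q[:, 1:-1]; out[:, 1:-1] -= 2 * q[:, 1:-1]; out[:, :-2] += q[:, 1:-1]
    out[:, 1] += q[:, 0]; out[:, 0] -= q[:, 0]
    out[:, -2] += q[:, -1]; out[:, -1] -= q[:, -1]
    return out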
As for the ROF model, we have a characterisation of the solution of (5).
Theorem 4. The solution $v$ of (5) satisfies
$$v = u_d - P_{\lambda K_2}(u_d),$$
where $P_{\lambda K_2}$ is the orthogonal projection operator onto $\lambda K_2$.
2.3 A fixed point algorithm to compute Pλ K2
We propose here a Chambolle-like fixed point algorithm: we extend to the second order the results given in [6]. To compute $P_{\lambda K_2}(u_d)$ we have to solve
$$\min\Big\{ \|\lambda H^* p - u_d\|^2_X \;\Big|\; p \in X^4,\ \|p_{i,j}\|^2_{\mathbb{R}^4} - 1 \le 0,\ i,j = 1,\dots,N \Big\}.$$
Thus, we define the fixed point problem
$$p^0 = 0, \qquad (11a)$$
$$p^{n+1}_{i,j} = \frac{p^n_{i,j} - \tau\big(H[H^* p^n - u_d/\lambda]\big)_{i,j}}{1 + \tau\big\|\big(H[H^* p^n - u_d/\lambda]\big)_{i,j}\big\|_{\mathbb{R}^4}}. \qquad (11b)$$
In addition, a convergence result is provided (see [4]):
Theorem 5. Assume that $\tau$ satisfies $\tau \le 1/64$. Then $\lambda (H^* p^n)_n$ converges to $P_{\lambda K_2}(u_d)$.
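A minimal sketch of the resulting projection algorithm, reusing the `hessian` and `hessian_adjoint` functions above, could look as follows; the number of iterations is an arbitrary stopping choice of ours, not something prescribed by Theorem 5.

import numpy as np

def project_lambda_K2(ud, lam, tau=1.0 / 64, n_iter=200):
    """Fixed point iteration (11a)-(11b): returns lambda * H* p^n, which
    converges to P_{lambda K2}(ud) as soon as tau <= 1/64 (Theorem 5)."""
    p = np.zeros(ud.shape + (4,))
    for _ in range(n_iter):
        g = hessian(hessian_adjoint(p) - ud / lam)           # H[H* p^n - ud/lambda]
        norm = np.sqrt((g ** 2).sum(axis=2, keepdims=True))  # |(.)_{i,j}|_{R^4} per pixel
        p = (p - tau * g) / (1.0 + tau * norm)
    return lam * hessian_adjoint(p)

# Solution of (7)-(8) via Theorem 4:
# v = ud - project_lambda_K2(ud, lam)   # cartoon (BV^2) component
# w = ud - v                            # oscillating component (noise + texture)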
3 Improvement of the model
We observed in [4] that the second order model generates a blur effect on the regular part. This means that contour lines are somewhat over-extracted from the $BV^2$ component, i.e. they appear in the oscillating component. As a result, this decomposition model is not efficient for texture extraction. This problem is illustrated by figure 2.
Fig. 2 (a) and (b) are the $BV^2$-components of the test images. (c) and (d) are the rescaled oscillating components.
We can see that, while the oscillating component contains all the texture, it also contains too much geometrical information. Indeed, figure 2 shows that contour lines (the arm and the table legs in image 1(a), the contour lines of the square in image 1(b), ...) appear in the texture part. We proposed in [10] a very partial answer to this problem, by making the Hessian operator anisotropic in order to give priority to chosen directions.
3.1 Globally anisotropic model
We can make the model anisotropic by modifying the Hessian operator. We recall the definition of the discretized Hessian operator:
$$(Hv)_{i,j} = \big((Hv)^{11}_{i,j}, (Hv)^{12}_{i,j}, (Hv)^{21}_{i,j}, (Hv)^{22}_{i,j}\big), \qquad v \in X,$$
with, for $i,j = 1,\dots,N$,
$$(Hv)^{11}_{i,j} = \begin{cases} v_{i+1,j} - 2v_{i,j} + v_{i-1,j} & \text{if } 1 < i < N,\\ v_{i+1,j} - v_{i,j} & \text{if } i = 1,\\ v_{i-1,j} - v_{i,j} & \text{if } i = N, \end{cases}$$
$$(Hv)^{12}_{i,j} = \begin{cases} v_{i,j+1} - v_{i,j} - v_{i-1,j+1} + v_{i-1,j} & \text{if } 1 < i \le N,\ 1 \le j < N,\\ 0 & \text{otherwise}, \end{cases}$$
$$(Hv)^{21}_{i,j} = \begin{cases} v_{i+1,j} - v_{i,j} - v_{i+1,j-1} + v_{i,j-1} & \text{if } 1 \le i < N,\ 1 < j \le N,\\ 0 & \text{otherwise}, \end{cases}$$
$$(Hv)^{22}_{i,j} = \begin{cases} v_{i,j+1} - 2v_{i,j} + v_{i,j-1} & \text{if } 1 < j < N,\\ v_{i,j+1} - v_{i,j} & \text{if } j = 1,\\ v_{i,j-1} - v_{i,j} & \text{if } j = N. \end{cases}$$
We just have to nullify one of the components to cancel its effects during the processing. In concrete terms, we observe that if we set $(Hv)^{11} = 0$, horizontal jumps are no longer detected, which means that vertical contour lines are no longer smoothed. Thus, we recover these contour lines in the cartoon part, while they no longer appear in the oscillating part.
In the same way, we can decide to act on horizontal contour lines ($(Hv)^{22} = 0$), or on both vertical and horizontal contour lines ($(Hv)^{11} = (Hv)^{22} = 0$).
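In an implementation, this amounts to zeroing the chosen components of the discrete Hessian before running iteration (11); a hedged sketch, building on the `hessian` function of Section 2.2.1, is given below. The same flags must of course also be applied to $H^*$ so that the two operators remain adjoint to each other.

def hessian_anisotropic(v, keep=(False, True, True, False)):
    """Globally anisotropic Hessian: components flagged False are nullified.
    The flags correspond to ((Hv)^{11}, (Hv)^{12}, (Hv)^{21}, (Hv)^{22});
    the default cancels (Hv)^{11} and (Hv)^{22}, so that vertical and
    horizontal contour lines are no longer smoothed."""
    H = hessian(v)
    for k, flag in enumerate(keep):
        if not flag:
            H[:, :, k] = 0.0
    return H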
It is this last possibility that we illustrate in figure 3, comparing results with and without anisotropy. We notice that horizontal and vertical contour lines have disappeared from the texture part $w$, whereas it still contains almost all the oscillating information of the studied image.
Fig. 3 (a) and (c) are the rescaled oscillating components of the images obtained with the original model. (b) and (d) are the ones obtained with the globally anisotropic model. We notice that horizontal and vertical contour lines disappear. Nevertheless, oblique contour lines remain (Barbara's legs and arms). We also notice the loss of the oscillating information oriented horizontally and vertically (horizontal bands on Barbara's headscarf).
Unfortunately, even if we can deal with oblique lines by rotating the image, we can only treat two directions simultaneously, which are moreover necessarily perpendicular to each other. Additionally, adding anisotropy in order to make contour lines disappear in a certain direction implies that we also lose all the texture oriented in that direction. Figure 4 illustrates all these problems: we run tests on an image without texture but with contour lines in every direction, and on an image very close to the one in 1(b), with texture oriented in the same direction as the contour lines we want to remove from the oscillating component. We thereby underline the limits of the globally anisotropic model (figure 4).
Fig. 4 (a) and (c) are the original images. (b) and (d) show the texture extracted with the globally anisotropic model. We observe the inefficiency of the model: the contours of the circle are almost intact while the horizontal texture is totally lost.
To remedy these difficulties, we are going to make the model locally anisotropic.
3.2 Locally anisotropic model
3.2.1 Principle
We underlined the limits of the globally anisotropic model by pointing out that we could only treat two directions simultaneously, and that we necessarily lose the oscillating information in these two directions. So, we propose to work on contour lines locally. Here is the approach (a minimal sketch is given after this list):
• Step 1: First, we detect the problematic points (figure 5), that is to say the points on contour lines that appear in the texture component and that we want to remove from it. Every remaining pixel is treated with the original model.
• Step 2: Then, we compute at these points the gradient vector, which gives us the angle α between the contour line and the horizontal line.
• Step 3: We extract matrices of size p × p centered on each problematic point, and rotate these submatrices by the angle α previously computed. Then, we just have to apply to each rotated submatrix the original model. We have to consider submatrices big enough to avoid side effects (for example p = 5). This is illustrated in figure 6, for a pixel detected at the first step.
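The following sketch illustrates Steps 2 and 3 for a single detected pixel. It is only an illustration of the principle: the angle convention of `ndimage.rotate`, the boundary handling, and the helper `solve_anisotropic` (assumed to run iteration (11) with $(Hv)^{22} = 0$ on the patch) are our assumptions, not the exact implementation used for the figures.

import numpy as np
from scipy import ndimage

def treat_pixel_locally(ud, i, j, lam, p=5):
    """Steps 2-3 for one problematic pixel (i, j): estimate the contour angle
    from the gradient, rotate a p x p patch so the contour becomes roughly
    horizontal, apply the anisotropic model there, and rotate back."""
    gi, gj = np.gradient(ud)
    # the contour direction is orthogonal to the gradient (sign convention assumed)
    alpha = np.degrees(np.arctan2(gi[i, j], gj[i, j])) - 90.0
    half = p // 2
    patch = ud[i - half:i + half + 1, j - half:j + half + 1]
    rotated = ndimage.rotate(patch, alpha, reshape=False, mode='nearest')
    cartoon = solve_anisotropic(rotated, lam)   # hypothetical anisotropic ROF2 solver
    return ndimage.rotate(cartoon, -alpha, reshape=False, mode='nearest')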
Remark 1. We point out that the process can be systematized by introducing rotation matrices that are locally applied to the discrete Hessian operator (see [5]).
Doing this, we hope to make the contour lines disappear significantly while preserving almost all of the texture. Figure 7 illustrates the result when every detected pixel is treated. We notice that the contour lines disappear from the oscillating part while almost all of the texture is preserved. In return, the $BV^2$-component still contains a part of the texture, especially near contour lines.
We observe that we recover almost all of the texture while almost all of the contour lines have disappeared. In practice, we meet difficulties that we describe in the next part.
Fig. 5 (c) highlights the pixels on which we apply the anisotropic model. The other points are treated with the original ROF2 model.
Fig. 6 (a) Choice of one of the pixels detected in figure 5(c). After having computed the angle α between the direction of the contour at this point and the horizontal direction, we rotate the image to make the contour horizontal. Then, we just have to apply the globally anisotropic model ($(Hv)^{22} = 0$) on a neighbourhood of the pixel.
Fig. 7 (a) is the texture part obtained with the original ROF2 model. (b) is the texture part obtained with the locally anisotropic model. We see that the texture is well preserved while most of the contour lines disappear, in every direction.
3.2.2 Implementation
The first big difficulty is to precisely detect the points constituting contour lines. As a matter of fact, we have to detect as many contour points as possible, without detecting points from the texture. We therefore rule out classic contour detectors (such as a gradient), which do not allow us to differentiate contour lines from texture (the gradient is high in both cases).
Furthermore, a more precise contour detector would have to be ruled out too. Indeed, we noticed in [4] that, even if the global model preserves contours very well, it tends to disperse them. This dispersion becomes more important as the number of iterations and the value of λ increase. This means in particular that a point which is not part of a contour but which is close to one will be affected by the regularization. So, a contour line detector, however efficient it may be, cannot detect all the problematic points, and detects too many points from the texture. This phenomenon is illustrated in figure 8, on an image without any texture. We can see that, in spite of the anisotropy, vertical lines can still be seen.
In order to remedy this problem, we first treat the image with the original algorithm. We choose the parameter λ, depending on the image, large enough to make almost all of the texture disappear. Contour lines are thereby diffused, and a gradient applied to this smoothed image allows us to detect every problematic point, which a classic contour detector applied to the original image could not do. Figure 8 highlights pixels that have to be treated with the anisotropic model and that do not appear in the original image. We can then notice that almost all of the contour lines disappear from the oscillating component.
Moreover, since this preprocessing removes almost all of the texture from the image, the points we then detect mostly belong to contour lines. Figure 9 shows the result obtained on the full image 1(a).
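A sketch of this detection step, reusing `project_lambda_K2` from Section 2.3, could read as follows; the threshold value is a hypothetical, image-dependent parameter.

import numpy as np

def detect_contour_pixels(ud, lam_smooth, threshold):
    """Pre-smooth with the original ROF2 model using a large lambda so that
    almost all texture disappears, then threshold the gradient magnitude of
    the resulting BV^2 component to locate the problematic pixels."""
    v_smooth = ud - project_lambda_K2(ud, lam_smooth)
    gi, gj = np.gradient(v_smooth)
    return np.hypot(gi, gj) > threshold   # boolean mask of pixels to treat locally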
Fig. 8 Detection of the points that have to be treated with the locally anisotropic model, on an image without texture. The images on the left highlight the locally treated points; on the right, we show the texture part of the associated solution. We do not apply the locally anisotropic model on the borders but the original ROF2 model, in order to compare results. On (a), we detect contour lines with a gradient on the original image and treat these points with the locally anisotropic model; we observe on (b) that, while a part of the contour lines disappears, many of them remain in the oscillating component. On (c), we show the points detected thanks to a gradient on the $BV^2$ component of the original image previously treated with the original ROF2 model. We observe that we detect more points this way, and on (d) that almost all of the contour lines have disappeared from the texture component.
While the loss of useful information (texture) in the oscillating component is quite acceptable compared with the geometrical information we remove, the cartoon part is not really satisfactory. Indeed, too much oscillating information still remains. Finally, it seems more accurate to speak of a texture extraction model rather than a decomposition model.
Fig. 9 Result obtained with the locally anisotropic model applied to the full image. We see on (b) that the oscillating component contains almost all of the texture, despite an acceptable loss of information near contour lines. The $BV^2$ component keeps in these places a part of the oscillating information.
4 Numerical tests on a natural image
We propose to compare results from the original ROF2 model and the locally anisotropic one on a natural image, for different values of λ.
Fig. 10 Original image
Fig. 11 $BV^2$ components of the solutions obtained with the original ROF2 model (left) and the locally anisotropic model (right), for λ = 20, 50 and 100.
Fig. 12 Texture components of the solutions obtained with the original ROF2 model (left) and the locally anisotropic model (right), for λ = 20, 50 and 100.
We observe that, on this image, contour lines in the $BV^2$ component are well preserved using the anisotropic model, so that we can talk about a cartoon component as for the ROF model (see [2]). Moreover, we can see on the texture component that contour lines and edges visibly disappear with the addition of anisotropy. Even if the locally anisotropic model gives pretty good results for texture extraction, we still have to carefully understand what really happens and why
simple modifications of the Hessian operator give such results. This will be covered in a forthcoming work.
References
1. Ambrosio, L., Fusco, N., Pallara, D.: Functions of bounded variation and free discontinuity problems. Oxford Mathematical Monographs, Oxford University Press (2000)
2. Aujol, J.-F.: Some first-order algorithms for total variation based image restoration. Journal of Mathematical Imaging and Vision, 34, 307–327 (2009)
3. Bergounioux, M.: On Poincaré-Wirtinger inequalities in spaces of functions of bounded variation. hal-00515451 (2010)
4. Bergounioux, M., Piffet, L.: A second-order model for image denoising. Set-Valued and Variational Analysis.
5. Bergounioux, M., Tran, M.P.: A second order model for 3D-texture extraction. hal-00530816 (2010)
6. Chambolle, A.: An algorithm for total variation minimization and applications. Journal of Mathematical Imaging and Vision, 20, 89–97 (2004)
7. Chambolle, A., Lions, P.-L.: Image recovery via total variation minimization and related problems. Numerische Mathematik, 76, 167–188 (1997)
8. Demengel, F.: Fonctions à hessien borné. Annales de l'Institut Fourier, 34, no. 2, 155–190 (1984)
9. Duval, V., Aujol, J.-F., Gousseau, Y.: The TV-L1 model: a geometric point of view. SIAM Journal on Multiscale Modeling and Simulation, 8(1), 154–189 (2009)
10. Echegut, R., Piffet, L.: A variational model for image texture identification. http://hal.archives-ouvertes.fr/hal-00439431/fr/, submitted
11. Evans, L.C., Gariepy, R.: Measure theory and fine properties of functions. CRC Press (1992)
12. Hinterberger, W., Scherzer, O.: Variational methods on the space of functions of bounded Hessian for convexification and denoising. Computing, 76, no. 1-2, 109–133 (2006)
13. Rudin, L., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Physica D, 60, 259–268 (1992)
14. Osher, S., Solé, A., Vese, L.: Image decomposition and restoration using total variation minimization and the $H^{-1}$ norm. SIAM Journal on Multiscale Modeling and Simulation, 1-3, 349–370 (2003)
15. Osher, S., Vese, L.: Modeling textures with total variation minimization and oscillating patterns in image processing. Journal of Scientific Computing, 19, no. 1-3, 553–572 (2003)
16. Osher, S.J., Vese, L.A.: Image denoising and decomposition with total variation minimization and oscillatory functions. Special issue on mathematics and image analysis. Journal of Mathematical Imaging and Vision, 20, no. 1-2, 7–18 (2004)
17. Piffet, L.: Modèles variationnels pour l'extraction de textures 2D. PhD Thesis, Université d'Orléans (2010)
18. Rockafellar, R.T.: Convex Analysis. Princeton University Press (1972)