Image Super-Resolution Using Sparse Representation

Single Image Super-Resolution
Using Sparse Representation *
Michael Elad
The Computer Science Department
The Technion – Israel Institute of Technology
Haifa 32000, Israel
* Joint work with Roman Zeyde
and Matan Protter
MS45: Recent Advances in Sparse and
Non-local Image Regularization - Part III of III
Wednesday, April 14, 2010
The Super-Resolution Problem
[Diagram: the high-resolution image y_h is blurred (H) and decimated (S), and white Gaussian noise (WGN) v is added, producing the low-resolution image z:]

z = S H y_h + v

Our Task: Reverse the process – recover the high-resolution image y_h from the low-resolution image z.
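The degradation model z = S H y_h + v can be sketched in a few lines of NumPy. This is a toy illustration: the 3-tap box blur standing in for H, the noise level, and the test image are our assumptions, not the talk's.

```python
import numpy as np

def degrade(y_h, s=2, noise_std=2.0, seed=0):
    """Simulate z = S H y_h + v: blur (H, here a separable 3-tap box filter),
    decimation by a factor s (S), and additive white Gaussian noise v."""
    k = np.ones(3) / 3.0
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1,
                                  y_h.astype(float))
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0,
                                  blurred)
    z = blurred[::s, ::s]                            # S: keep every s-th pixel
    rng = np.random.default_rng(seed)
    return z + rng.normal(0.0, noise_std, z.shape)   # + v ~ N(0, noise_std^2)

y_h = np.tile(np.linspace(0.0, 255.0, 64), (64, 1)) # toy 64x64 "high-res" image
z = degrade(y_h)                                    # 32x32 low-res observation
```

Recovering y_h from z means undoing all three of these operations at once.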
Single Image Super-Resolution
[Diagram: a recovery algorithm maps the given low-resolution image z to the high-resolution estimate ŷ_h.]
The reconstruction process should rely on:
 The given low-resolution image z,
 The knowledge of S, H, and the statistical properties of v, and
 The image behavior (a prior).
In our work:
 We use a patch-based prior built on sparse and redundant representations, and
 We follow the work by Yang, Wright, Huang, and Ma [CVPR 2008; IEEE-TIP, to appear], proposing an improved algorithm.
Core Idea (1) - Work on Patches
[Diagram: the low-resolution image z is interpolated (Interp., operator Q) to y_l, aligning its coordinate system with that of y_h.]

We interpolate the low-resolution image in order to align the coordinate systems:
Every patch from y_l should go through a process of resolution enhancement. The improved patches are then merged together into the final image (by averaging).
y_l = Q z
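The extract-enhance-merge flow above can be sketched as follows. A minimal NumPy sketch: the `extract_patches`/`merge_patches` names and the toy image are ours, not from the talk, and the round-trip leaves the patches untouched instead of enhancing them.

```python
import numpy as np

def extract_patches(img, n=9):
    """All overlapping n x n patches, each tagged with its top-left position."""
    H, W = img.shape
    return [(i, j, img[i:i+n, j:j+n])
            for i in range(H - n + 1)
            for j in range(W - n + 1)]

def merge_patches(patches, shape, n=9):
    """Put (possibly enhanced) patches back in place, averaging the overlaps."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for i, j, p in patches:
        acc[i:i+n, j:j+n] += p
        cnt[i:i+n, j:j+n] += 1.0
    return acc / np.maximum(cnt, 1.0)

img = np.arange(144, dtype=float).reshape(12, 12)     # toy "image"
rec = merge_patches(extract_patches(img), img.shape)  # round-trip, patches untouched
```

With untouched patches the round-trip reproduces the image exactly; in the algorithm, each patch is resolution-enhanced between the two calls.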
Core Idea (2) – Learning [Yang et al. ’08]
We shall perform a sparse decomposition of the low-res patch w.r.t. a learned dictionary A_l. We shall then construct the high-res patch using the same sparse representation, imposed on a dictionary A_h.
And now, let's go into the details …
The Sparse-Land Prior [Aharon & Elad, ’06]
Extraction of an n-pixel patch from y_h at location k is performed by

p_h_k = R_k y_h

Model Assumption: Every such patch can be represented sparsely over the dictionary A_h (of size n×m):

p_h_k ≈ A_h q_k  such that  ||q_k||_0 ≪ n
Low Versus High-Res. Patches
[Diagram: the same position k in y_h and in the interpolated image y_l, from which the n-pixel patches p_h_k and p_l_k are extracted.]

Since z = S H y_h + v and y_l = Q z, the two patches satisfy

||L p_h_k − p_l_k||_2 ≤ ε

 L = blur + decimation + interpolation.
 This expression relates the low-res patch to the high-res one, based on the known degradation.
Low Versus High-Res. Patches
Plugging p_h_k = A_h q_k into the relation ||L p_h_k − p_l_k||_2 ≤ ε gives

||L A_h q_k − p_l_k||_2 ≤ ε

q_k is ALSO the sparse representation of the low-resolution patch, with respect to the dictionary A_l = L A_h.
Training the Dictionaries – General
[Diagram: from the training pair(s), the low-res image is pre-processed (interpolation) and the high-res image is pre-processed, yielding a set of matching patch-pairs {p_l_k, p_h_k}_k.]

These patch-pairs will be used for the dictionary training.
Alternative: Bootstrapping
The given image z, which is to be scaled up, is put through a simulation of the degradation process (blur H, decimation S, additive WGN v):

x = S H z + v

Use this pair of images (z, x) to generate the patches for the training.
Pre-Processing High-Res. Patches
[Diagram: the high-res training image and the interpolated low-res image, from which the high-res patches {p_h_k}_k are extracted. Patch size: 9×9.]
Pre-Processing Low-Res. Patches
The interpolated low-res image is filtered with four high-pass filters – the gradients [0 1 -1], [0 1 -1]^T and the Laplacians [-1 2 -1], [-1 2 -1]^T. We extract patches of size 9×9 from each of these four filtered images and concatenate them to form one vector of length 324. Dimensionality reduction by PCA then yields

p̃_l_k = B p_l_k

with a reduced patch size of ~30.
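This pre-processing (high-pass filtering, concatenation to 4·81 = 324 entries per location, PCA down to ~30) can be sketched as follows. The function names and the random stand-in image are ours, not from the talk.

```python
import numpy as np

def highpass_features(img):
    """Four filtered versions of the interpolated low-res image:
    gradients ([0 1 -1] and transpose) and Laplacians ([-1 2 -1] and transpose)."""
    gx = np.zeros_like(img); gx[:, 1:] = img[:, 1:] - img[:, :-1]
    gy = np.zeros_like(img); gy[1:, :] = img[1:, :] - img[:-1, :]
    lx = np.zeros_like(img); lx[:, 1:-1] = 2 * img[:, 1:-1] - img[:, :-2] - img[:, 2:]
    ly = np.zeros_like(img); ly[1:-1, :] = 2 * img[1:-1, :] - img[:-2, :] - img[2:, :]
    return [gx, gy, lx, ly]

def feature_vectors(img, n=9):
    """Concatenate the four n x n filtered patches per location: 4 * 81 = 324 entries."""
    feats = highpass_features(img)
    H, W = img.shape
    return np.array([np.concatenate([f[i:i+n, j:j+n].ravel() for f in feats])
                     for i in range(H - n + 1) for j in range(W - n + 1)])

def pca_matrix(X, dim=30):
    """Projection B (dim x 324) built from the leading principal components of X."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:dim]

rng = np.random.default_rng(0)
img = rng.normal(size=(20, 20))   # stand-in for the interpolated low-res image
X = feature_vectors(img)          # one 324-long vector per patch location
B = pca_matrix(X)                 # the dimensionality-reduction matrix
low_dim = X @ B.T                 # ~30 numbers per patch instead of 324
```

In the algorithm, B is learned once from the training features and then reused on every patch at test time.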
Training the Dictionaries: A_l
The low-res patches {p̃_l_k}_k are used to train A_l with the K-SVD (40 iterations, L=3). For an image of size 1000×1000 pixels, there are ~12,000 examples to train on. Remember: A_l has m=1000 atoms of dimension n=30.
Given a low-res patch to be scaled up, we start the resolution enhancement by sparse coding it, to find q_k:

min_{q_k} ||p̃_l_k − A_l q_k||_2^2  s.t.  ||q_k||_0 ≤ L
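This sparse-coding step is solved greedily with OMP (the talk notes OMP is used for sparse coding). A minimal NumPy sketch of OMP, with a toy dictionary of our own making:

```python
import numpy as np

def omp(A, p, L):
    """Greedy OMP: approximate min ||p - A q||_2 s.t. ||q||_0 <= L."""
    q = np.zeros(A.shape[1])
    support = []
    residual = p.astype(float).copy()
    for _ in range(L):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], p, rcond=None)
        q[:] = 0.0
        q[support] = coef                            # least-squares fit on the support
        residual = p - A @ q
        if np.linalg.norm(residual) < 1e-10:
            break
    return q

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 100))
A /= np.linalg.norm(A, axis=0)        # dictionary with normalized atoms
q_true = np.zeros(100); q_true[3], q_true[50] = 1.5, -2.0
p = A @ q_true                        # a patch that truly is 2-sparse over A
q = omp(A, p, L=3)
```

Each iteration adds the atom most correlated with the current residual and refits all active coefficients by least squares.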
Training the Dictionaries: A_h
And then, the high-res patch is obtained by

p̂_h_k = A_h q_k

Thus, A_h should be designed such that the representation error over the training pairs is minimized:

min_{A_h} Σ_k ||p_h_k − A_h q_k||_2^2
Training the Dictionaries: A_h (cont.)
min_{A_h} Σ_k ||p_h_k − A_h q_k||_2^2

 However, this approach disregards the fact that the resulting high-res patches are not used directly as the final result, but are averaged due to the overlaps between them.
 A better method (leading to a better final scaled-up image) would be – find A_h such that the following error is minimized:

min_{A_h} ||y_h − ŷ_h||_2^2 = min_{A_h} ||y_h − y_l − W Σ_k R_k^T A_h q_k||_2^2

where W Σ_k R_k^T A_h q_k is the constructed image, W accounting for the averaging of the overlaps.
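The simpler, patch-wise version of this training has a closed-form solution: stacking the high-res patches as columns of P and their codes as columns of Q, the minimizer of Σ_k ||p_h_k − A_h q_k||² is A_h = P Q⁺ (pseudo-inverse). A minimal sketch, with synthetic patches and codes of our own making:

```python
import numpy as np

def train_high_dict(P, Q):
    """Least-squares dictionary: A_h = P Q^+ minimizes sum_k ||p_h_k - A_h q_k||^2,
    where columns of P are high-res patches and columns of Q their sparse codes."""
    return P @ np.linalg.pinv(Q)

rng = np.random.default_rng(2)
m, num, dim = 200, 300, 81                     # m atoms, 300 patch-pairs, 9x9 patches
Q = rng.normal(size=(m, num)) * (rng.random((m, num)) < 0.02)   # sparse codes q_k
A_true = rng.normal(size=(dim, m))
P = A_true @ Q                                 # synthetic high-res patches
A_h = train_high_dict(P, Q)
err = np.linalg.norm(P - A_h @ Q)              # training-set reconstruction error
```

Since the synthetic patches lie exactly in the model P = A_true Q, the identity Q Q⁺ Q = Q makes the fitted dictionary reproduce them perfectly; the improved criterion above replaces this patch-wise error with the error of the final averaged image.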
Overall Block-Diagram
[Block diagram: the high-res dictionary A_h and the low-res dictionary A_l are trained – on-line or off-line – from z or from a training set, and are then fed to the recovery algorithm (next slide).]
The Super-Resolution Algorithm
The given image z is interpolated to y_l and filtered with [0 1 -1], [0 1 -1]^T, [-1 2 -1], [-1 2 -1]^T. Patches {p_l_k}_k are extracted and multiplied by B to reduce their dimension. Sparse coding using A_l computes q_k; multiplying by A_h and combining the results by averaging the overlaps yields ŷ_h.
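Putting the pieces together, here is a deliberately simplified end-to-end sketch of the recovery pass. The simplifications are all ours, not the talk's: 1-sparse coding instead of OMP, nearest-neighbor instead of a proper interpolation, and random stand-ins for the trained A_l, A_h, and B; the predicted patches are treated as detail added to the interpolated image.

```python
import numpy as np

def scale_up(z, A_l, A_h, B, n=9):
    """Simplified recovery pass: interpolate z, code each reduced feature over A_l
    (1-sparse for brevity), apply the same code to A_h, and average the overlaps."""
    y = np.kron(z, np.ones((2, 2)))                  # crude 2x nearest-neighbor interp.
    H, W = y.shape
    acc = np.zeros_like(y)
    cnt = np.zeros_like(y)
    for i in range(H - n + 1):
        for j0 in range(W - n + 1):
            feat = B @ y[i:i+n, j0:j0+n].ravel()     # reduced low-res feature
            j = int(np.argmax(np.abs(A_l.T @ feat))) # best-matching atom (1-sparse q_k)
            coef = A_l[:, j] @ feat
            acc[i:i+n, j0:j0+n] += coef * A_h[:, j].reshape(n, n)
            cnt[i:i+n, j0:j0+n] += 1.0
    return y + acc / np.maximum(cnt, 1.0)            # interpolation + averaged detail

rng = np.random.default_rng(3)
m, n = 100, 9
A_l = rng.normal(size=(30, m)); A_l /= np.linalg.norm(A_l, axis=0)
A_h = rng.normal(size=(n * n, m))                    # random stand-in dictionaries
B = rng.normal(size=(30, n * n))                     # stand-in for the PCA projection
z = rng.normal(size=(12, 12))
y_hat = scale_up(z, A_l, A_h, B)                     # 24x24 estimate
```

With trained dictionaries and OMP in place of the 1-sparse shortcut, this loop is the whole test-time algorithm.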
Relation to the Work by Yang et al.
Compared with their scheme:
 We use a direct approach to the training of A_h.
 We use OMP for the sparse coding.
 We interpolate to avoid coordinate ambiguity.
 We train using the K-SVD.
 We can train on the given image or on a training set.
 We reduce the dimension prior to training.
 We avoid post-processing.
 We train on a difference image.
Bottom line: the proposed algorithm is much simpler, much faster, and, as we show next, it also leads to better results.
Results (1) – Off-Line Training
The training image: 717×717
pixels, providing a set of
54,289 training patch-pairs.
Results (1) – Off-Line Training
[Figure: Given Image, Ideal Image, Bicubic interpolation (PSNR=14.68dB), and SR Result (PSNR=16.95dB).]
Results (2) – On-Line Training
[Figure: the given image, and its scale-up (factor 2:1) by the proposed algorithm – PSNR=29.32dB, a 3.32dB improvement over bicubic.]
Results (2) – On-Line Training
[Figure: The Original, Bicubic Interpolation, SR result.]
Results (2) – On-Line Training
[Figure: The Original, Bicubic Interpolation, SR result (a second set of crops).]
Comparative Results – Off-Line
              Bicubic         Yang et al.     Our alg.
              PSNR    SSIM    PSNR    SSIM    PSNR    SSIM
Barbara       26.24   0.75    26.39   0.76    26.77   0.78
Coastguard    26.55   0.61    27.02   0.64    27.12   0.66
Face          32.82   0.80    33.11   0.80    33.52   0.82
Foreman       31.18   0.91    32.04   0.91    33.19   0.93
Lenna         31.68   0.86    32.64   0.86    33.00   0.88
Man           27.00   0.75    27.76   0.77    27.91   0.79
Monarch       29.43   0.92    30.71   0.93    31.12   0.94
Pepper        32.39   0.87    33.33   0.87    34.05   0.89
PPT3          23.71   0.87    24.98   0.89    25.22   0.91
Zebra         26.63   0.79    27.95   0.83    28.52   0.84
Average       28.76   0.81    29.59   0.83    30.04   0.85
Comparative Results - Example
[Figure: a comparative example – Yang et al. vs. Ours.]
Summary
 Single-image scale-up – an important problem, with many attempts to solve it in the past decade.
 The rules of the game: use the given image, the known degradation, and a sophisticated image prior.
 Yang et al. [2008] – a very elegant way to incorporate a sparse & redundant representation prior.
 We introduced modifications to Yang's work, leading to a simpler, more efficient algorithm with better results.
 More work is required to further improve the results obtained.
Thank You Very Much!