
Compressed Sensing for
Loss-Tolerant Audio Transport
Clay, Elena, Hui
6.829 Computer Networks
Introduction to CS
Basic idea:
Given a signal S of length d (large), S can be recovered from a much smaller measurement vector v (if S is sparse).
[Figure: a sparse signal and its compressed measurement vector]
Introduction to CS
signal: s = (0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1)
measurements: projections of s onto some small
number of basis vectors
Questions:
1. what basis vectors?
2. how many measurements are enough?
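As a concrete illustration (a minimal sketch, not the project's implementation): the Python/NumPy snippet below projects the example signal above onto a few random Gaussian vectors, which is one common answer to question 1, though not one fixed by this slide.

```python
# Minimal sketch, assuming random Gaussian basis vectors (a common choice,
# not specified on this slide) and NumPy.
import numpy as np

s = np.array([0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1], dtype=float)  # the example signal, d = 13
d = s.size
N = 10                                # number of measurements, N < d
rng = np.random.default_rng(0)
Phi = rng.standard_normal((N, d))     # each row is one measurement/basis vector
v = Phi @ s                           # measurements: projections of s onto the rows of Phi
print(v.shape)                        # (10,): shorter than the original signal
```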
Intro to CS
Sometimes imperfection is OK! We only need to transmit enough for a "reasonable" reconstruction.
Goal: reduce the number of bits used to transmit a signal.
Motivation
Direct applicability to low-power sensor
networks (data is sparse)
Applications to medical imaging
How does CS apply to audio signals?
CS and sound reconstruction
Compressed Sensing is:
loss-tolerant
universal
But:
Is it practical, particularly for audio?
What about the quality of the reconstructed sound?
Approach/contributions
1. Use a modified version of the classical Orthogonal Matching Pursuit (OMP)
2. Optimized the main iterative step
3. Dealt with MATLAB memory overflow for matrix storage: split the original large data samples into smaller frames and combine them at the end (see the sketch below)
4. Quantify the relationship between quality and the compression parameters m, c
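A rough sketch of item 3, under stated assumptions (NumPy arrays; process_frame is a hypothetical placeholder for the per-frame measure-and-recover step described on the later slides):

```python
# Conceptual sketch only: process_frame stands in for the per-frame
# measurement + OMP recovery detailed later in the deck.
import numpy as np

def process_piecewise(signal, frame_len, process_frame):
    """Split a long sample vector into smaller frames, process each frame
    independently (so each measurement matrix stays small in memory),
    and combine the recovered frames at the end."""
    frames = [signal[i:i + frame_len] for i in range(0, len(signal), frame_len)]
    return np.concatenate([process_frame(f) for f in frames])
```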
Parameters
• m: sparsity level of the original data
• d: data space dimension
• c: constant factor in the measurement count (the second compression parameter, alongside m)
• N: # of measurements
N = c · m · ln(d)
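The formula translates directly into code; the values in the usage line below are purely illustrative, not taken from the slides.

```python
import math

def num_measurements(m, d, c):
    """Measurement count from the slide's formula: N = c * m * ln(d)."""
    return math.ceil(c * m * math.log(d))

print(num_measurements(256, 8192, 2))   # ~4614 for these illustrative values
```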
OMP (Orthogonal Matching Pursuit)
• Input
Φ: N x d measurement matrix
v: N-dimensional data vector
m: data sparsity
• Output
s: estimated signal in R^d
v = Φ · s
• Procedure
OMP Procedure
Determine which columns of Φ participate in the
measurement vector v, in greedy fashion.
1. Initialization
2. Iteration
In each iteration, choose the column of Φ that is most strongly correlated with the remaining part of v. Then subtract off its contribution to v and iterate on the residual.
3. Reconstruction
Use the chosen columns of Φ and the approximation to reconstruct the signal.
I-OMP on Audio Signal Recovery
• Original sound signal (Source: s4d.wav)
• Reconstructed by setting m = 256 and 500
Tests
Test the impact of the parameters m, c on
the quality of the reconstruction
Method: MOS (Mean Opinion Score)
Sparsity and MOS
[Plot: MOS score vs. m as a fraction of the number of samples]
Quality of reconstruction
[Plot: sum of squared differences between the original and reconstructed signal vs. c, for m = 1233 and d = 8821]
Piecewise Compression
• Original: [audio clip]
• Recovered: [audio clip]
• MOS = 2.8
I-OMP on image recovery
Different values of m: 256, 512, 1024
Source: moon.bmp
I-OMP on Image Recovery
• Different values of the parameter c
• Original, then c = 2, 4, 20
The End
OMP Procedure
• 1. Initialization
residual r = v;
index set Λ = ∅;
• 2. Iteration
• 3. Reconstruction
OMP Procedure
• 1. Initialization
• 2. Iteration
For t = 0 : m−1
• Find the index λ that solves λ = arg max_{j=1,…,d} |⟨r, φ_j⟩|
• Λ = Λ ∪ {λ}
• Re-compute the projection P onto φ_Λ:
A = P · v
r = v − A
• 3. Reconstruction
OMP Procedure
• 1. Initialization
• 2. Iteration
• 3. Reconstruction
The estimate s for the ideal signal has non-zero coefficients s_λ at the components listed in Λ:
A = Σ_{λ∈Λ} s_λ · φ_λ
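A short sketch of the projection and reconstruction steps above, assuming NumPy, with Phi the N x d measurement matrix, v the measurements, and Lambda the index set built by the greedy iteration:

```python
# Sketch of the projection/reconstruction steps of classical OMP
# (assumes NumPy; Lambda is a list of the column indices chosen so far).
import numpy as np

def omp_project_and_reconstruct(Phi, v, Lambda):
    # Least-squares coefficients of v on the chosen columns phi_lambda
    coeffs, *_ = np.linalg.lstsq(Phi[:, Lambda], v, rcond=None)
    s = np.zeros(Phi.shape[1])
    s[Lambda] = coeffs                # non-zero coefficients only at indices in Lambda
    A = Phi[:, Lambda] @ coeffs       # A = sum over lambda in Lambda of s_lambda * phi_lambda
    return s, A, v - A                # estimate, approximation, and residual r = v - A
```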
Iterative OMP
• 1. Initialization
r = v;
s = 0_d (the zero vector in R^d);
A = 0;
• 2. Iteration
For t = 0 : m−1
• Find the index λ that solves λ = arg max_{j=1,…,d} |⟨r, φ_j⟩|
• s_λ = ⟨r, φ_λ⟩ / ||φ_λ||²
• r = r − s_λ · φ_λ
• A = A + s_λ · φ_λ
• 3. Reconstruction
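A minimal sketch of this iterative variant in Python/NumPy (the project's own code was MATLAB), with Phi the N x d measurement matrix, v the measurement vector, and m the sparsity level:

```python
# Minimal sketch of the iterative OMP above, assuming NumPy.
import numpy as np

def iterative_omp(Phi, v, m):
    N, d = Phi.shape
    r = v.astype(float).copy()        # 1. Initialization: r = v
    s = np.zeros(d)                   #    s = 0 in R^d
    A = np.zeros(N)                   #    running approximation of v
    for _ in range(m):                # 2. Iteration: t = 0 .. m-1
        corr = Phi.T @ r                                      # <r, phi_j> for every column j
        lam = int(np.argmax(np.abs(corr)))                    # lambda = arg max_j |<r, phi_j>|
        coef = corr[lam] / np.dot(Phi[:, lam], Phi[:, lam])   # s_lambda = <r, phi_lambda> / ||phi_lambda||^2
        s[lam] += coef                # accumulate the coefficient for column lambda
        r = r - coef * Phi[:, lam]    # r = r - s_lambda * phi_lambda
        A = A + coef * Phi[:, lam]    # A = A + s_lambda * phi_lambda
    return s, A                       # 3. Reconstruction: sparse estimate s and approximation A of v
```

Paired with a measurement step v = Phi @ s_true (as in the earlier sketch), this gives a small end-to-end toy recovery loop.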
Iterative OMP (2)