Wideband and Cooperative Spectrum Sensing in
Cognitive Radios
Amir H. Banihashemi
(joint work with Zeinab Zeinalkhani and Ebrahim Karami)
BCWS Centre
Dept. of Systems and Computer Engineering
Carleton University
Feb 17, 2011
Compressive Sensing
[In part courtesy of R. Baraniuk, Rice University]
The Digital Universe
• Size: 281 billion gigabytes of digital data generated in 2007
  - more digital bits than stars in the universe
  - growing by a factor of 10 every 5 years
• Growth fueled by multimedia data
  - audio, images, video, surveillance cameras, sensor networks, …
• In 2007, the digital data generated exceeded the total available storage
• Solution: compression
Why Compressive Sensing?
• Problem: Today's multimedia sensor systems acquire massive amounts of data only to throw much or most of it away.
• Solution: compressive sensing
  - enables the design of radically new sensors and systems, such as new cameras, imagers, ADCs, …
  - goes beyond sensing to inference on massive data sets
Digital Data Acquisition
• Foundation: Shannon sampling theorem
  "If you sample densely enough (at the Nyquist rate), you can perfectly reconstruct the original analog data."
[Figure: Nyquist-rate sampling in time and in space]
Sense by Sampling
[Figure: an image acquired sample by sample, producing too much data!]
Sense (Sample) then Compress
[Figure: sample, then compress (JPEG, JPEG2000, …), then decompress]
Sparsity / Compressibility (Transform Coding)
• Image pixels: a few large wavelet coefficients (blue = 0)
• Wideband signal samples (time): a few large Gabor (time-frequency) coefficients
What's Wrong with this Picture?
• Why go to all the work to acquire N samples only to discard all but K pieces of data?
[Figure: sample, compress, decompress]
Compressive Sensing
• Directly acquire "compressed" data
• Replace samples by more general "measurements"
[Figure: compressive sensing followed by recovery]
Sampling
• Signal x of length N is K-sparse in some basis/dictionary Ψ
  - WLOG assume x is sparse in the canonical domain (Ψ = I)
• Sampling is a matrix multiplication: samples y = Φx of the sparse signal (K nonzero entries)
Compressive Sampling
• When the data is sparse/compressible, one can directly acquire a condensed representation with no/little information loss through linear dimensionality reduction: M measurements y = Φx, with M << N, of a sparse signal with K nonzero entries
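To make the measurement model above concrete, here is a minimal Python/NumPy sketch (mine, not part of the original deck; the sizes N, K and M are arbitrary example values) of acquiring M random linear measurements of a K-sparse length-N signal.

# Minimal compressive sampling sketch: y = Phi @ x with a random Gaussian Phi
import numpy as np

rng = np.random.default_rng(0)
N, K, M = 256, 8, 64                  # signal length, sparsity, number of measurements

# K-sparse signal in the canonical domain
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)

# Random measurement matrix (i.i.d. Gaussian), M << N rows
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x                           # the "condensed representation"
print(y.shape)                        # (64,)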
How Can It Work?
• Φ has M rows and N columns with M < N, so in general there is loss of information: infinitely many x's map to the same y
• But we are only interested in sparse vectors, so Φ is effectively M x K
• Design Φ so that each of its M x K submatrices is full rank (ideally close to an orthobasis)
• Goal: design Φ so that for any K-sparse vector x, the norm of Φx is "close" to the norm of x [Restricted Isometry Property (RIP)]
• Unfortunately, designing such a Φ (or verifying the RIP for a given Φ) is NP-hard.
• Good news: draw Φ at random, e.g., i.i.d. Gaussian or i.i.d. ±1 Bernoulli. Then Φ has the RIP with high probability provided M = O(K log(N/K)).
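A hedged numerical illustration of the "good news" above: with a random i.i.d. Gaussian Φ (scaled by 1/sqrt(M)) and M a small multiple of K log(N/K), the norm of Φx stays close to the norm of x over many random K-sparse x. The constant 4 and the trial count below are arbitrary choices, not values from the deck.

# Empirical check of near-isometry for random Gaussian measurement matrices
import numpy as np

rng = np.random.default_rng(1)
N, K = 512, 10
M = int(4 * K * np.log(N / K))             # "mild oversampling"
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

ratios = []
for _ in range(200):
    x = np.zeros(N)
    x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    ratios.append(np.linalg.norm(Phi @ x) / np.linalg.norm(x))

print(min(ratios), max(ratios))            # typically both close to 1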
Compressive Data Acquisition
• Measurements y = Φx are random linear combinations of the entries of x
• With high probability, no information is lost about a sparse signal (K nonzero entries) from the M measurements
CS Signal Recovery
• Goal: recover the (sparse) signal x from the measurements y = Φx
• Problem: ill-posed inverse problem; the null space of Φ is an (N - M)-dimensional hyperplane at a random angle, so y alone does not determine x
• Solution: exploit the sparse/compressible nature of the acquired signal and search the translated null space {x : Φx = y} for the "best" x according to some criterion
  - l2 (least squares / pseudoinverse): fast, but wrong; the minimum-energy solution is not sparse
  - l0 (number of nonzero entries): "find the sparsest x in the translated null space"; correct, but slow (NP-hard)
  - l1: correct and efficient; a linear program, requiring only mild oversampling, M = O(K log(N/K)) measurements [Candès, Romberg, Tao; Donoho]
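The l1 recovery above is a linear program. The sketch below (an illustration, not the authors' code) poses basis pursuit, min ||x||_1 subject to Φx = y, as a standard LP by splitting x = u - v with u, v >= 0, and solves it with scipy.optimize.linprog; problem sizes are arbitrary example values.

# Basis pursuit (l1 recovery) via a linear program
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
N, K, M = 128, 5, 40
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x_true

c = np.ones(2 * N)                        # minimize sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([Phi, -Phi])             # enforce Phi @ (u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]
print(np.linalg.norm(x_hat - x_true))     # should be ~0 with enough measurements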
"Single-Pixel" CS Camera [with Kevin Kelly]
[Figure: scene, DMD array displaying a random pattern, single photon detector, image reconstruction or processing]
• Flip the mirror array M times to acquire M measurements
• Sparsity-based (linear programming) recovery
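A rough, purely illustrative simulation of the single-pixel measurement process (my own sketch, not the Rice hardware or its software): each random 0/1 DMD pattern selects which mirrors steer light toward the detector, and one inner product of the scene with that pattern is recorded per flip.

# Simulated single-pixel camera measurements with random 0/1 DMD patterns
import numpy as np

rng = np.random.default_rng(3)
n_pixels = 32 * 32
scene = rng.random(n_pixels)                        # stand-in for the imaged scene
M = 300
patterns = rng.integers(0, 2, size=(M, n_pixels))   # random 0/1 mirror patterns
y = patterns @ scene                                # M single-pixel measurements
# y would then be passed to a sparsity-based (e.g., linear-programming) recovery.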
Wideband Spectrum Sensing Based on Compressive Sensing
Spectrum Sensing in Cognitive Radio
• Motivation: to detect and use spectrum holes efficiently
• The frequency range of interest is often wide (wideband spectrum sensing)
• Sampling at the Nyquist rate is very challenging
• Solution: compressive spectrum sensing
Compressive Spectrum Sensing
• Sub-Nyquist sampling: Analog-to-Information Conversion (AIC)
[Figure: y(t) enters the AIC and y[m] comes out at M/N times the Nyquist rate, driven by a random sequence p(t) at the Nyquist rate]
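A discrete-time sketch of one common AIC architecture (a random demodulator) consistent with the block diagram above: the Nyquist-rate signal is multiplied by a random ±1 sequence p(t) running at the Nyquist rate and then integrated and dumped at M/N times the Nyquist rate. The exact analog front end used in practice is an assumption on my part, and the input signal below is just an example.

# Random-demodulator-style AIC, simulated at the Nyquist rate
import numpy as np

rng = np.random.default_rng(4)
N, M = 1024, 128                         # Nyquist-rate samples and sub-Nyquist samples
L = N // M                               # integration window length

t = np.arange(N)
x = np.cos(2 * np.pi * 0.05 * t)         # example input, sampled at the Nyquist rate
p = rng.choice([-1.0, 1.0], size=N)      # random chipping sequence at the Nyquist rate

chipped = p * x
y = chipped.reshape(M, L).sum(axis=1)    # integrate-and-dump at M/N of the Nyquist rate
print(y.shape)                           # (128,)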
Compressive Spectrum Sensing
• The signal x is in the time domain and is sparse in the frequency domain:
  y = Φ_{m×n} x,   s = F_{n×n} x   ⇒   y = Φ_{m×n} F⁻¹_{n×n} s = Θ_{m×n} s
• We solve the following optimization problem to find s:
  min ||s||_1   s.t.   y = Θ_{m×n} s
• Application of compressive sensing to spectrum sensing: Tian and Giannakis, 2007
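A sketch of the formulation above in Python. To keep everything real-valued, an orthonormal DCT stands in for the Fourier transform F (an assumption made only for this illustration); Θ = ΦF⁻¹, and the sparse spectrum s is recovered with the same LP splitting used in the basis pursuit sketch earlier.

# Recover a frequency-sparse signal from sub-Nyquist measurements
import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

rng = np.random.default_rng(5)
n, m, k = 128, 60, 6
s_true = np.zeros(n)
s_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

Finv = idct(np.eye(n), axis=0, norm='ortho')   # inverse transform matrix (DCT stand-in for F)
x = Finv @ s_true                              # time-domain signal
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x                                    # compressive measurements
Theta = Phi @ Finv

# min ||s||_1  s.t.  Theta s = y, via the split s = u - v with u, v >= 0
res = linprog(np.ones(2 * n), A_eq=np.hstack([Theta, -Theta]), b_eq=y,
              bounds=(0, None))
s_hat = res.x[:n] - res.x[n:]
print(np.linalg.norm(s_hat - s_true))          # small when recovery succeeds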
Background
• Application of compressive sensing to spectrum sensing (l1 optimization): Tian and Giannakis, ICASSP 2007.
• Improving the performance for a block-sparse signal (mixed l1/l2 optimization): Stojnic, Parvaresh, and Hassibi, IEEE Trans. Signal Proc., Aug. 2009.
• Application of l1/l2 optimization to spectrum sensing: Liu and Wan, arXiv, 2010.
• Improving the performance of l1 optimization (Iterative Support Detection (ISD)): Wang and Yin, SIAM Journal on Imaging Sciences, Aug. 2010.
• Our contribution: application of ISD-l1 and ISD-l1/l2 optimization to spectrum sensing
l1/l2 Minimization
• The received signal is block sparse: each licensed (primary) user (PU) operates in a certain known band
• Uses the a priori information about the spectrum boundaries between PUs
  min ||s_1||_2 + ||s_2||_2 + … + ||s_K||_2   s.t.   y = Θ_{m×n} s
  where s is partitioned into K blocks s_1, …, s_K corresponding to the PU bands
• Can be formulated as a convex optimization problem
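One way to solve the mixed l1/l2 program above is with a generic convex solver; the sketch below uses cvxpy. The block partition, sizes, and number of active bands are arbitrary illustrations of "known spectrum boundaries between PUs", not the setup from the cited papers.

# Block-sparse recovery via the mixed l1/l2 (sum of block l2 norms) program
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(6)
n, m, B = 120, 60, 10                    # signal length, measurements, block size
blocks = [(i, i + B) for i in range(0, n, B)]

# Block-sparse spectrum: only 2 of the 12 blocks are active
s_true = np.zeros(n)
for b in rng.choice(len(blocks), 2, replace=False):
    a, e = blocks[b]
    s_true[a:e] = rng.standard_normal(B)

Theta = rng.standard_normal((m, n)) / np.sqrt(m)
y = Theta @ s_true

s = cp.Variable(n)
objective = cp.Minimize(sum(cp.norm(s[a:e], 2) for a, e in blocks))
problem = cp.Problem(objective, [Theta @ s == y])
problem.solve()
print(np.linalg.norm(s.value - s_true))  # small residual when recovery succeeds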
Iterative Support Detection (ISD)
• Requires fewer measurements than classical l1 minimization
• Recovers the signal iteratively, in a small number of iterations
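A hedged sketch of threshold-based ISD: entries detected as support are dropped from the l1 penalty (truncated l1) and the problem is re-solved. The fixed-fraction support-detection rule below is an illustrative simplification of the thresholding rule in Wang and Yin's paper, and the LP solver reuses the basis pursuit splitting from earlier.

# Iterative Support Detection via repeated truncated-l1 minimization
import numpy as np
from scipy.optimize import linprog

def truncated_l1(A, y, support):
    # min sum_{i not in support} |x_i|  s.t.  A x = y, via x = u - v, u, v >= 0
    m, n = A.shape
    support = np.asarray(support, dtype=int)
    c = np.ones(2 * n)
    c[support] = 0.0            # no l1 penalty on the detected support (u part)
    c[n + support] = 0.0        # ... and on the v part
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
    return res.x[:n] - res.x[n:]

def isd(A, y, iters=4, keep=0.1):
    x = truncated_l1(A, y, [])                # iteration 0: plain l1 (basis pursuit)
    for _ in range(iters):
        k = max(1, int(keep * A.shape[1]))
        support = np.argsort(np.abs(x))[-k:]  # crude support detection: largest entries
        x = truncated_l1(A, y, support)
    return x

# Example use with the Theta, y from the earlier spectrum-sensing sketch:
# s_hat = isd(Theta, y)

With fewer measurements m, this ISD-aided recovery typically succeeds where plain l1 fails, which is the comparison made in the simulation slides that follow.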
Simulations
• n = 256, m = 140, sparsity ratio = 30% (k = 77)
[Figure: recovered PSD vs. frequency bin n (0 to 255) for the input PSD, l1/l2, and ISD-l1/l2, m = 140]
Simulations
• n = 256, m = 120, sparsity ratio = 30% (k = 77)
[Figure: recovered PSD vs. frequency bin n (0 to 255) for the input PSD, l1/l2, and ISD-l1/l2, m = 120]
Simulations
• n = 256, m = 85, sparsity ratio = 30% (k = 77)
[Figure: recovered PSD vs. frequency bin n (0 to 255) for the input PSD, l1/l2, and ISD-l1/l2, m = 85]
MSE Comparison
• n = 256, sparsity ratio = 30% (k = 77)
[Figure: MSE vs. number of measurements m (70 to 140) for l1, ISD-l1, l1/l2, and ISD-l1/l2]
Cooperative Spectrum Sensing
Introduction
• Probability of detection (PD) and probability of false alarm (PF) are two important measures of the performance of spectrum sensing algorithms
• The interference induced by the secondary user (SU) on the primary user (PU) is proportional to (1 - PD)
• The throughput of the SU is proportional to (1 - PF)
• Goal: minimize PF for a given PD
Introduction
• Spectrum sensing can be performed in either a distributed or a cooperative fashion
• Cooperative spectrum sensing
  - Advantage: reduces PF for a given PD
  - Disadvantage: requires extra bandwidth for communication between SUs
• Goal: optimize the cluster size N for maximum effective throughput
  R(P_d^tot, N, γ) = [1 - P_f^tot(P_d^tot, N, γ)] (1 - mN / T_s)
  where m is the per-SU sensing/reporting overhead and T_s is the frame duration
- E. Karami and A. H. Banihashemi, "Cluster Size Optimization in Cooperative Spectrum Sensing," 9th Conference on Communication Networks and Services Research (CNSR 2011), May 2011.
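To see the trade-off the throughput expression captures, here is a small numerical sketch: for a fixed target P_d^tot under the OR fusion rule, each additional cooperating SU relaxes the per-user detection requirement (which lowers the fused P_f^tot), but also adds mN/T_s of overhead. The per-user relation between P_d and P_f below uses a Gaussian-approximated energy detector, and all parameter values are illustrative assumptions, not the operating points from the CNSR 2011 paper.

# Normalized throughput vs. cooperation cluster size, OR fusion rule
import numpy as np
from scipy.stats import norm

def throughput_or_rule(N, pd_target=0.9, snr_db=-5.0, n_samples=25, m_over_Ts=0.05):
    gamma = 10 ** (snr_db / 10)
    # OR rule: the cluster detects if any SU detects, so each SU needs a lower Pd
    pd_user = 1 - (1 - pd_target) ** (1.0 / N)
    # Assumed energy-detector model (Gaussian approximation):
    # Pf_user = Q( Q^{-1}(Pd_user) * sqrt(2*gamma + 1) + gamma * sqrt(n_samples) )
    pf_user = norm.sf(norm.isf(pd_user) * np.sqrt(2 * gamma + 1)
                      + gamma * np.sqrt(n_samples))
    pf_tot = 1 - (1 - pf_user) ** N
    return (1 - pf_tot) * (1 - m_over_Ts * N)

for N in range(1, 16):
    print(N, round(throughput_or_rule(N), 3))
# The printed column first rises (cooperation lowers Pf_tot for the same Pd_tot)
# and then falls (the overhead m*N/Ts grows), so an intermediate cluster size is
# optimal, which is exactly the optimization studied in the paper.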
Simulations
• Normalized achievable throughput for m = 0.05 T_s
[Figure: normalized throughput vs. cooperation cluster size (2 to 20) for the AND and OR fusion rules at SNR = -5, 0, and 5 dB]
Simulations
• Normalized achievable throughput for m = 0.2 T_s
[Figure: normalized throughput vs. cooperation cluster size (2 to 20) for the AND and OR fusion rules at SNR = -5, 0, and 5 dB]
Future Research
• Low-complexity wideband compressive spectrum sensing: sparse measurement matrices, adaptive algorithms
• Cooperative spectrum sensing: optimal fusion rules, distributed fusion schemes
• Cooperative wideband spectrum sensing
Compressive Sensing with Sparse Graphs
Graphical Model of CS
• Signal x with entries x_i; measurements y with entries y_j = Σ_i φ_ji x_i, i.e., y = Φx
[Figure: bipartite graph connecting signal nodes x_i to measurement nodes y_j through edges weighted by φ_ji]
• RIP: the graph is dense
Revisiting CS with Dense Graphs
• Recovery complexity using LP: O(N³), high!
• Number of measurements: M = O(K log(N/K)), low (good)!
• What if we use sparse graphs instead?
• Why?
  - Fast measurement
  - Fast (iterative) recovery
CS with Sparse Graphs
• Graph structure (degree distribution)? Random regular (d_v, d_c) [similar to a regular Tanner graph]
• Edge weights? 0 and 1 [similar to a Tanner graph]
• Recovery algorithms? Verification-based [similar to iterative decoding algorithms over the BEC]
• Performance analysis? Asymptotic (N → ∞) [same general techniques as in density evolution]
Example: Graph
• N = 8, M = 6, d_v = 3, d_c = 4
[Figure: bipartite graph with variable nodes x_1, …, x_8 and measurement nodes y_1, …, y_6; for example, y_6 = x_3 + x_4 + x_6 + x_7]
Example: Recovery Algorithm
• Signal: a random integer signal, each entry 0 with probability 1 - α
[Animation: the verification-based algorithm processes the measurements and resolves the variable nodes iteratively, a few at a time, until the signal is recovered]
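A sketch of the verification-based recovery walked through above, using the two basic verification rules (a zero-valued residual verifies all of a measurement's unverified neighbours as zero; a measurement with a single unverified neighbour forces that neighbour's value). The variable-regular random graph construction and the parameter values are my simplifications of the random regular (d_v, d_c) graphs in the slides; the SBB algorithm on the next slide adds further rules.

# Verification-based recovery over a sparse 0/1 measurement graph
import numpy as np

rng = np.random.default_rng(7)
N, M, dv = 1000, 750, 3
alpha = 0.1                                   # P(signal entry is nonzero); chosen so recovery typically completes

# Random nonnegative integer signal: 0 w.p. 1 - alpha, otherwise in {1, ..., 9}
x = np.where(rng.random(N) < alpha, rng.integers(1, 10, size=N), 0)

# Sparse 0/1 graph: each variable node connects to dv distinct random check nodes
checks = [[] for _ in range(M)]               # variables attached to each check node
for i in range(N):
    for j in rng.choice(M, size=dv, replace=False):
        checks[j].append(i)

y = np.array([sum(x[i] for i in cs) for cs in checks])   # measurements

# Iterative verification
x_hat = np.zeros(N, dtype=int)
verified = np.zeros(N, dtype=bool)
changed = True
while changed:
    changed = False
    for j in range(M):
        unv = [i for i in checks[j] if not verified[i]]
        if not unv:
            continue
        residual = y[j] - sum(x_hat[i] for i in checks[j] if verified[i])
        if residual == 0:                     # zero check: all remaining neighbours are 0
            for i in unv:
                verified[i], x_hat[i] = True, 0
            changed = True
        elif len(unv) == 1:                   # degree-one check: the value is forced
            verified[unv[0]], x_hat[unv[0]] = True, residual
            changed = True

print(verified.all(), np.array_equal(x_hat, x))

With the sparsity used here the loop typically verifies every variable; with denser signals it stalls partway, which is exactly the threshold behaviour plotted on the next slide.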
The Evolution of the Ratio of Unresolved Variables with Iterations, Above and Below the Threshold
• (5,6) graphs, original SBB algorithm
[Figure: ratio of unresolved variables vs. iteration number (1 to 25); simulated and theoretical curves, below and above the threshold]
Sparse vs. Dense Graphs
• Encoding complexity: sparse < dense
• Recovery complexity: sparse < dense
• Measurement size: sparse ? dense