Estimation of Distribution Algorithm for Sensor Selection Problems

Muhammad Naeem and Daniel C. Lee
Motivation
The motivations to choose a subset of sensors depend on applications, but in general a small subset means less energy consumption of sensors and simpler computation than operating all sensors. Especially in sensor networks, activating only a subset of sensors can be important in prolonging the network's lifetime.
THE SENSOR SELECTION PROBLEM
The sensor selection problem can be viewed as a combinatorial optimization problem: from m potential sensor measurements, select a subset of measurements with the conflicting goals of maximizing the utility of the selected measurements and minimizing the cost.
Problem Formulation
• m = total number of sensors
• k = number of selected sensors
• n = number of parameters to be estimated

The problem is to select k sensors from the set of m sensors in a system estimating n parameters.
Problem Formulation
x1, x2, …, xn : the n parameters to be estimated
Y1, Y2, …, Ym : scalar measurements from the m sensors

• The measurements and parameters have a linear relation:

\[ (Y_1, Y_2, \ldots, Y_m)^T = A\,(x_1, x_2, \ldots, x_n)^T + (V_1, V_2, \ldots, V_m)^T \]

where A = (a_1, a_2, …, a_m)^T is an m×n real-valued matrix known to the system designer (a_i, i = 1, 2, …, m, is an n-dimensional real-valued column vector).

• V_1, V_2, …, V_m are i.i.d. additive white Gaussian noise with variance σ².
• The maximum likelihood (ML) estimator of (x_1, x_2, …, x_n)^T is

\[ (A^T A)^{-1} A^T (Y_1, Y_2, \ldots, Y_m)^T = \Big( \sum_{i=1}^{m} a_i a_i^T \Big)^{-1} \sum_{i=1}^{m} a_i Y_i \]

• This is an unbiased estimator, and the estimation error is

\[ (A^T A)^{-1} A^T (Y_1, Y_2, \ldots, Y_m)^T - (x_1, x_2, \ldots, x_n)^T = (A^T A)^{-1} A^T (V_1, V_2, \ldots, V_m)^T \]
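As a numerical sanity check (not part of the slides), the matrix form (A^T A)^{-1} A^T Y and the equivalent sum form over the a_i can be compared on synthetic data; the dimensions and noise level below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, sigma = 100, 5, 0.1             # assumed sizes for illustration

A = rng.standard_normal((m, n))        # row i is a_i^T, known to the designer
x_true = rng.standard_normal(n)        # parameters to estimate
Y = A @ x_true + sigma * rng.standard_normal(m)   # noisy measurements

# ML estimate via the normal equations: (A^T A)^{-1} A^T Y
x_ml = np.linalg.solve(A.T @ A, A.T @ Y)

# Equivalent sum form: (sum_i a_i a_i^T)^{-1} sum_i a_i Y_i
G = sum(np.outer(A[i], A[i]) for i in range(m))
b = sum(A[i] * Y[i] for i in range(m))
x_sum = np.linalg.solve(G, b)

assert np.allclose(x_ml, x_sum)        # the two forms agree
```

Solving the normal equations directly (rather than forming the explicit inverse) is the standard numerically preferable route.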
• The covariance of this estimation error is

\[ \Lambda \equiv \sigma^2 (A^T A)^{-1} = \sigma^2 \Big( \sum_{i=1}^{m} a_i a_i^T \Big)^{-1} \]

• Let us denote by S ⊂ {1, 2, …, m} the set of selected sensors. Then the ML estimator is

\[ \Big( \sum_{i \in S} a_i a_i^T \Big)^{-1} \sum_{i \in S} a_i Y_i \]

• The volume of the α-confidence ellipsoid of the estimator is

\[ K(\alpha, n) \det(\Lambda_S)^{1/2} = K(\alpha, n) \det\Big( \sigma^2 \Big( \sum_{i \in S} a_i a_i^T \Big)^{-1} \Big)^{1/2} \]

• The mean radius of the α-confidence ellipsoid is

\[ B(\alpha) \det(\Lambda_S)^{1/(2n)} = B(\alpha) \det\Big( \sigma^2 \Big( \sum_{i \in S} a_i a_i^T \Big)^{-1} \Big)^{1/(2n)} \]
A larger value of det(∑_{i∈S} a_i a_i^T) implies a smaller volume or mean radius of the α-confidence ellipsoid, and thus better utility of the estimation. Therefore, the sensor selection problem can be formulated as

\[
\begin{aligned}
\underset{S}{\text{maximize}} \quad & \log \det \Big( \sum_{i \in S} a_i a_i^T \Big) \\
\text{subject to} \quad & |S| = k
\end{aligned}
\]
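For a feel of why this formulation is hard, the objective can be maximized exhaustively only for tiny instances; the sketch below (illustration only, sizes are assumptions) enumerates all C(m, k) subsets.

```python
import numpy as np
from itertools import combinations
from math import comb

rng = np.random.default_rng(3)
m, n, k = 12, 3, 5                      # tiny instance, chosen for illustration
A = rng.standard_normal((m, n))

def logdet_S(S):
    """log det( sum_{i in S} a_i a_i^T ) for an index set S."""
    rows = A[list(S)]
    sign, ld = np.linalg.slogdet(rows.T @ rows)
    return ld if sign > 0 else -np.inf  # -inf when the subset is rank-deficient

# C(12, 5) = 792 subsets is tractable; C(100, 30) is about 3e25 and is not.
best_S = max(combinations(range(m), k), key=logdet_S)
print(best_S, logdet_S(best_S))
```

The exponential growth of C(m, k) is exactly what motivates heuristics such as the EDA presented next.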
We denote by Φ the collection of all possible sensor selections, and by φ ∈ Φ a selection of sensors. Each selection is represented as a binary string

\[ z = [z_1 \; z_2 \; \ldots \; z_m], \quad z_i \in \{0, 1\} \]

We can rewrite the sensor selection problem as

\[
\begin{aligned}
\text{maximize} \quad & \log \det \Big( \sum_{i=1}^{m} z_i a_i a_i^T \Big) \\
\text{subject to} \quad & \Theta^T z = k, \quad z_i \in \{0, 1\}, \; i = 1, \ldots, m, \quad z \in \mathbb{R}^m
\end{aligned}
\]

Here Θ is a vector with all entries equal to one.
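The binary-string objective above maps directly to a fitness function. A minimal sketch (names and sizes are my own, not from the slides), using the sign-aware log-determinant so that rank-deficient selections are penalized rather than crashing:

```python
import numpy as np

def fitness(z, A):
    """log det( sum_i z_i a_i a_i^T ) for a binary selection vector z.

    Returns -inf when the selected a_i do not span R^n (singular matrix),
    which penalizes infeasible selections.
    """
    rows = A[z.astype(bool)]                    # rows a_i^T with z_i = 1
    sign, logdet = np.linalg.slogdet(rows.T @ rows)
    return logdet if sign > 0 else -np.inf

rng = np.random.default_rng(1)
m, n, k = 100, 18, 24                           # assumed sizes for illustration
A = rng.standard_normal((m, n))

z = np.zeros(m)
z[rng.choice(m, k, replace=False)] = 1          # a random feasible selection
print(fitness(z, A))
```

Since each rank-one term a_i a_i^T can only increase the determinant, selecting more sensors never decreases this fitness; the |S| = k constraint is what makes the trade-off nontrivial.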
EDA Flow Diagram

[Flow diagram: generate a random population of Δl individuals (binary strings over X1, X2, …, Xn); evaluate their fitness values F1, F2, F3, …; sort the individuals; if the convergence criterion is satisfied, terminate; otherwise select the ηl−1 best individuals, estimate the conditional probability vector Γ = P(θ1, θ2, …, θn | ηl−1), generate Δl − ηl new individuals from Γ, update the counter l = l + 1, and repeat.]
EDA
EDA can be characterized by the following parameters and notation:
1. Is is the space of all potential solutions.
2. F denotes a fitness function.
3. Δl is the set of individuals (population) at the lth iteration.
4. ηl is the set of best candidate solutions selected from the set Δl at the lth iteration.
5. We denote βl ≡ Δl − ηl ≡ Δl ∩ ηl^C, where ηl^C is the complement of ηl.
6. ps is the selection probability. The EDA algorithm selects ps|Δl| individuals from the set Δl to make up the set ηl.
7. We denote by Γ the distribution estimated from ηl (the set of selected candidate solutions) at each iteration.
8. ITer is the maximum number of iterations.
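The loop above can be sketched in a few lines. This is a minimal univariate-marginal (UMDA-style) sketch under my own simplifying assumptions — exactly k ones per individual, elite preservation, and a smoothed marginal model — not the authors' exact probability model:

```python
import numpy as np

def eda_sensor_selection(A, k, pop_size=100, ps=0.3, iters=50, seed=0):
    """UMDA-style EDA sketch for the log-det sensor selection problem.
    Each individual is a boolean string with exactly k ones."""
    rng = np.random.default_rng(seed)
    m = A.shape[0]
    n_best = max(2, int(ps * pop_size))          # |eta_l| = ps * |Delta_l|

    def fitness(z):
        sign, logdet = np.linalg.slogdet(A[z].T @ A[z])
        return logdet if sign > 0 else -np.inf

    def sample(p):
        # draw exactly k distinct sensors, with probability proportional to p
        z = np.zeros(m, dtype=bool)
        z[rng.choice(m, size=k, replace=False, p=p / p.sum())] = True
        return z

    p = np.full(m, 1.0 / m)                      # initial uniform model
    pop = [sample(p) for _ in range(pop_size)]
    best_z, best_f = None, -np.inf
    for _ in range(iters):
        pop.sort(key=fitness, reverse=True)      # evaluate and sort
        elite = pop[:n_best]                     # select the best individuals
        f0 = fitness(elite[0])
        if f0 > best_f:
            best_f, best_z = f0, elite[0].copy()
        # estimate marginal selection probabilities from the elite set
        p = np.mean(elite, axis=0) + 1e-3        # smoothing keeps every sensor reachable
        # keep the elite, resample the rest from the updated model
        pop = elite + [sample(p) for _ in range(pop_size - n_best)]
    return best_z, best_f
```

The smoothing constant and the fixed-k sampling are design choices of this sketch; other EDAs repair infeasible strings or model dependencies between bits instead of independent marginals.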
Simulation Results
[Figure: fitness value vs. k (k = 20, …, 30), curves for CvxSS and EDA at n = 18 and n = 20.]
Performance comparison of EDA and convex optimization with m=100, n = {18, 20}, |Δ|=100, Ψ=100 and ps=0.3.
[Figure: fitness value vs. n (n = 10, …, 20), curves for CvxSS and EDA at K = 20 and K = 24.]
Performance comparison of EDA and convex optimization with m=100, k = {20, 24}, |Δ|=100, Ψ=100 and ps=0.3.
[Figure: fitness value vs. iterations (10, …, 100), EDA curves for n = 16, 18, 20 against CvxSS values.]
EDA performance per iteration with m=100, k = 30, |Δ|=100, Ψ=100 and ps=0.3.
Conclusions
• The complexity of the sensor selection problem grows exponentially with the number of sensors.
• The relaxation of the binary constraints can only give an upper bound.
• The performance of the EDA algorithm is better than that of convex optimization.
• The EDA surpasses the convex optimization within a few iterations.
Thank You