Transmission of Correlated Sources Over a Fading Multiple Access Channel

R Rajesh and Vinod Sharma
Dept. of Electrical Communication Engg.
Indian Institute of Science, Bangalore, India
Email: rajesh@pal.ece.iisc.ernet.in, vinod@ece.iisc.ernet.in
Abstract— In this paper we address the problem of transmission of correlated sources over a fading multiple access channel
(MAC). We provide sufficient conditions for transmission with
given distortions. Next these conditions are specialized to a
Gaussian MAC (GMAC). Transmission schemes for discrete
and Gaussian sources over a fading GMAC are considered.
Various power allocation strategies are also compared.
Keywords: Fading MAC, Power allocation, Random TDMA,
Amplify and Forward, Correlated sources.
I. INTRODUCTION AND SURVEY
Sensor nodes are often deployed for monitoring a random
field. Due to the spatial proximity of the nodes, sensor
observations are correlated. One often needs to transmit these
observations to a fusion center through wireless links which
experience multipath fading. A fundamental building block
for such a network is a fading Multiple Access Channel
(MAC). We study this system in this paper.
In the following we survey the related literature. Cover,
El Gamal and Salehi [3] provided sufficient conditions for
transmitting losslessly discrete correlated observations over
a discrete MAC. They also showed that, unlike for independent sources, source-channel separation does not hold. These techniques were extended to more general models with discrete sources and channels and lossless transmission in [1].
References [15], [20] extend the result in [3] and obtain sufficient conditions for lossy transmission of correlated sources
over a MAC with side information. Joint source-channel
coding schemes for transmission of correlated sources over
a MAC are also discussed in [5] and [10].
The capacities of a fading Gaussian channel with channel state information (CSI) at both the transmitter and the receiver, and at the receiver alone, are provided in [6]. It was shown that the optimal power adaptation when CSI is available at both the transmitter and the receiver is 'water filling' in time.
The capacity region of the GMAC with independent inputs
is available in [4]. The distributed Gaussian source coding
problem is discussed in [12], [21]. Exact rate region for two
users is provided in [21]. Hanly and Tse ([18], [19]) have
generalized the results on a GMAC with independent inputs
to the case with flat fading and CSI available at both the
transmitter and receiver. They obtain the Shannon (ergodic)
capacity and delay limited capacity.
This work was partially supported by the DRDO-IISc program on
Advanced Research in Mathematical Engineering.
The multi-access fading channel with independent inputs was also considered in the excellent survey [2]. They also show that, unlike in the single user case, in the multi-access realm optimal power control yields a substantial gain in capacity. The optimal power allocation strategy for the symmetric case is to allow only the user with the best channel to transmit at any time (Random TDMA) ([8]). The instantaneous power allocated to that user is given by the well known 'water-filling' algorithm in time. The optimal strategy remains valid when there are different average power constraints on different users; the only change is that the fading values are normalized by the Lagrange coefficients [8]. The extension of this strategy to frequency selective channels is given in [9].
Multiple-access techniques for fading cellular uplink
model with adjacent cell interference are discussed in [17].
An explicit characterization of the ergodic capacity region
and a simple encoding-decoding scheme for a fading GMAC
with common data is given in [11] for lossless transmission.
Optimum power allocation schemes are also provided.
This paper makes the following contributions. Sufficient
conditions for lossless and lossy transmission of correlated
sources over a fading MAC are obtained. The source alphabet
and/or the channel alphabet can be discrete or continuous.
The conditions are specialized for GMAC and an optimal
power allocation policy is derived. It is shown that the
‘random TDMA’ (i.e., optimum power allocation for a fading
MAC with independent inputs) is strictly sub-optimal in this
case. Suitable examples are given and the optimum power
allocation policy is compared with other commonly used
policies.
The paper is organized as follows. Sufficient conditions
for transmission of correlated sources over a fading MAC are
provided in Section II. These conditions are specialized to a
fading GMAC in Section III. Section IV obtains the optimal
power allocation policy and its comparison with other power
allocation policies. Section V concludes the paper. Proof of
the main result is given in the Appendix.
II. TRANSMISSION OF CORRELATED SOURCES OVER A FADING MAC
In this section we consider the transmission of memoryless
dependent sources, through a memoryless fading multiple
access channel. The sources and/or the channel input/output
alphabets can be discrete or continuous. The transmitters and
the receiver have perfect knowledge of the fade state of the
channel at that time.
We consider two sources (U1 , U2 ) with a known
joint distribution F (u1 , u2 ). The random vector sequence
{(U1n , U2n ), n ≥ 1} formed from the source outputs with
distribution F is independent identically distributed (iid) in
time. The sources transmit their codewords Xi to a single decoder through a memoryless, flat, fast fading multiple access channel. Let (H1n, H2n) be the fade state at time
n, n ≥ 1. We assume {(H1n , H2n ), n ≥ 1} to be iid,
although (H1n , H2n ) can be dependent and can be discrete
or continuous valued. The channel output Y has distribution
p(y|x1 , x2 , h1 , h2 ) if x1 and x2 are transmitted at that time
and the channel is in the fade state (h1 , h2 ). The decoder
receives Y and estimates the sensor observations Ui as
Ûi , i = 1, 2.
It is of interest to find encoders and a decoder such
that {U1n , U2n , n ≥ 1} can be transmitted over the given
fading MAC with E[di (Ui , Ûi )] ≤ Di , i = 1, 2 where di
are non-negative distortion measures and Di are the given
distortion constraints. We will assume that di are such that di(u, u′) = 0 if and only if u = u′. If the distortion measures are unbounded, we also assume that there exist ui*, i = 1, 2, such that E[di(Ui, ui*)] < ∞, i = 1, 2.
Source channel separation does not hold in this case.
We will denote (Ui1, ..., Uin) by Ui^n, i = 1, 2.
Definition: The sources (U1^n, U2^n) can be transmitted over the fading multiple access channel with distortions D ≜ (D1, D2) if for any ε > 0 there is an n0 such that for all n > n0 there exist encoders fE,i^n : Ui^n × H1^n × H2^n → Xi^n, i = 1, 2, and a decoder fD : Y^n × H1^n × H2^n → (Û1^n, Û2^n) such that (1/n) E[Σ_{j=1}^n d(Uij, Ûij)] ≤ Di + ε, i = 1, 2, where (Û1^n, Û2^n) = fD(Y^n, H1^n, H2^n) and Ui, Hi, Xi, Y, Ûi are the sets in which Ui, Hi, Xi, Y and Ûi take values.
Since the MAC is memoryless, p(y^n | x1^n, x2^n, h1^n, h2^n) = ∏_{j=1}^n p(yj | x1j, x2j, h1j, h2j). In the following, X ↔ Y ↔ Z will indicate that (X, Y, Z) forms a Markov chain.
Now we state the main Theorem.
Theorem 1: A source can be transmitted over a fading
multiple access channel with distortions (D1 , D2 ) if there
exist random variables (W1 , W2 , X1 , X2 ) such that
1. p(u1, u2, w1, w2, x1, x2, y, h1, h2) = p(u1, u2) p(w1|u1) p(w2|u2) p(x1|w1, h1, h2) p(x2|w2, h1, h2) p(y|x1, x2, h1, h2).
2. There exists a function fD : W1 × W2 × H1 × H2 → (Û1 × Û2) such that E[d(Ui, Ûi)] ≤ Di, i = 1, 2, and the constraints

I(U1; W1 | W2) < I(X1; Y | X2, W2, H1, H2),
I(U2; W2 | W1) < I(X2; Y | X1, W1, H1, H2),      (1)
I(U1, U2; W1, W2) < I(X1, X2; Y | H1, H2),

are satisfied, where Wi is the set in which Wi takes values.
If the channel input alphabets are continuous valued, then the Xi's should also satisfy the given power constraints E[Xi^2] ≤ P̄i, i = 1, 2.
In Theorem 1 it is possible to include other distortion
constraints. For example, in addition to the bounds on
E[d(Ui , Ûi )] one may want a bound on the joint distortion
E[d((U1 , U2 ), (Û1 , Û2 ))]. Then the only modification needed
in the statement of the above theorem is to include this also
as a condition in defining fD .
The proof of the theorem is given in the Appendix.
The proof extends directly to the multi-user case (with
the number of users > 2). Let S = {1, 2, ..., M} be the set of sources with joint distribution p(u1, u2, ..., uM). HS denotes the set {H1, H2, ..., HM}.
Theorem 2: Sources (Ui^n, i ∈ S) can be communicated in a distributed fashion over the memoryless fading multiple access channel p(y | xi, i ∈ S) with distortions (Di, i ∈ S) if there exist auxiliary random variables (Wi, Xi, i ∈ S) satisfying
1. p(ui, wi, xi, y, hi, i ∈ S) = p(ui, i ∈ S) p(y | xi, hi, i ∈ S) ∏_{j∈S} p(wj | uj) p(xj | wj, hS).
2. There exists a function fD : ∏_{j∈S} Wj × HS → (Ûi, i ∈ S) such that E[d(Ui, Ûi)] ≤ Di, i ∈ S, and the constraints

I(UA; WA | WA^c) < I(XA; Y | XA^c, WA^c, HS)      (2)

are satisfied for all A ⊂ S (in case of continuous channel alphabets we also need the power constraints E[Xi^2] ≤ P̄i, i = 1, ..., M).
The proof is along the lines of the proof of Theorem 1
and is omitted for the sake of brevity.
One of the main problems in using (1) in practice is in
obtaining efficient, explicit W1 , W2 , X1 , X2 , fD for a given
(U1 , U2 ), (D1 , D2 ) and the channel. In the rest of this paper
we obtain good coding-decoding schemes for a GMAC.
III. FADING GAUSSIAN MAC
In a fading Gaussian MAC the channel output Yn at time n is given by Yn = H1n X1n + H2n X2n + Nn, where X1n and X2n are the channel inputs at time n and {Nn} is iid with a Gaussian distribution, independent of X1n and X2n. Also, E[Nn] = 0 and var(Nn) = σ_N^2. H1n and H2n are the fade states of the channel at time n. The power constraints on the channel inputs are E[Xi^2] ≤ P̄i, i = 1, 2. The distortion measure will be Mean Square Error (MSE). Let ρ̃ be the correlation between the channel inputs X1, X2.
From the maximum correlation theorem ([16]), ρ̃ ≤ ρ, where ρ is the correlation between U1 and U2. Also, it is shown in [14] that if (U1, U2) are the correlated sources and X1 ↔ U1 ↔ U2 ↔ X2, where X1 and X2 are jointly Gaussian, then the correlation between (X1, X2) satisfies ρ̃^2 ≤ 1 − 2^{−2I(U1;U2)}. The proof uses the data processing inequality, and the result sometimes gives a tighter upper bound than the maximum correlation theorem. For example, consider (U1, U2) with the joint distribution P(U1 = 0, U2 = 0) = P(U1 = 1, U2 = 1) = 1/3, P(U1 = 1, U2 = 0) = P(U1 = 0, U2 = 1) = 1/6. The correlation between the sources is 0.33, but from the above result the correlation between (X1, X2) cannot exceed 0.327.

For this GMAC, following the experience in [14], we relax the first two inequalities in (1) to make them more explicit. These are then used to obtain efficient signaling schemes to satisfy (1). For this, the R.H.S. of the first two inequalities in (1) are replaced by the upper bounds I(X1; Y | X2, H1, H2) and I(X2; Y | X1, H1, H2) respectively. It is shown in [14] that these upper bounds are quite tight whenever these two inequalities are active (generally it is the third inequality which is tight). Also, it is shown in [14] that for a given (h1, h2), these upper bounds and the R.H.S. of the third inequality in (1) are maximized by choosing (X1, X2) to be zero mean, jointly Gaussian r.v.s with E[Xi^2] = Pi(h1, h2). If such (X1, X2) have correlation ρ̃, these three bounds provide

I(U1; W1 | W2) < 0.5 E[ log( 1 + |H1|^2 P1(H1, H2)(1 − ρ̃^2) / σ_N^2 ) ],
I(U2; W2 | W1) < 0.5 E[ log( 1 + |H2|^2 P2(H1, H2)(1 − ρ̃^2) / σ_N^2 ) ],      (3)
I(U1, U2; W1, W2) < 0.5 E[ log( 1 + ( |H1|^2 P1(H1, H2) + |H2|^2 P2(H1, H2) + 2|H1||H2| ρ̃ √(P1(H1, H2) P2(H1, H2)) ) / σ_N^2 ) ].

We also need to choose the power control policies Pi(h1, h2) such that the average power constraints

E[Pi(H1, H2)] ≤ P̄i, i = 1, 2,      (4)

are satisfied. This motivates us to consider Gaussian coding schemes.
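The correlation bound ρ̃^2 ≤ 1 − 2^{−2I(U1;U2)} quoted above can be checked numerically for the example distribution (a sketch; entropies in bits):

```python
import numpy as np

# Joint pmf of (U1, U2) from the example:
# P(0,0) = P(1,1) = 1/3, P(1,0) = P(0,1) = 1/6.
p = np.array([[1/3, 1/6], [1/6, 1/3]])

def H(q):
    q = np.ravel(q)
    return -np.sum(q[q > 0] * np.log2(q[q > 0]))   # entropy in bits

I12 = H(p.sum(axis=1)) + H(p.sum(axis=0)) - H(p)   # I(U1; U2)

# Source correlation rho: U_i in {0, 1} with mean 1/2 and variance 1/4.
EU1U2 = p[1, 1]                                    # only the (1,1) cell contributes
rho = (EU1U2 - 0.25) / 0.25

# Upper bound on the channel-input correlation from [14].
rho_t_max = np.sqrt(1 - 2.0 ** (-2 * I12))
print(round(rho, 3), round(rho_t_max, 3))          # 0.333 0.327
```

This reproduces the numbers quoted in the text: the source correlation is 1/3 ≈ 0.33, while the channel-input correlation cannot exceed 0.327.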
An advantage of (3) is that we will be able to obtain explicit source-channel coding schemes that satisfy it; these may be difficult to identify from (1) itself. Once we have obtained such coding schemes, we can verify the sufficient conditions (1) themselves. If they are satisfied, the coding schemes ensure transmission with the given distortions; if not, one can change ρ̃ until (1) is finally satisfied. Thus in the rest of the paper we consider some power allocation policies, along with Gaussian signaling schemes, which can be used to satisfy the conditions (1).
An important special case is when the correlated sources (U1, U2) are U1 = (Z1, Z0), U2 = (Z2, Z0), where Z0, Z1, Z2 are independent. Z0 represents the common information. We vector quantize Z1^n, Z2^n, Z0^n separately into W1^n, W2^n and W0^n. These in turn are independently coded into Gaussian codebooks. The corresponding codewords are scaled into X1^n, X2^n, X0^n, with powers (1 − α)P̄1, (1 − β)P̄2 reserved for X0^n at the two encoders and the rest spent on X1^n and X2^n respectively. Then, using conditions (1) for three users with lossless transmission, we can recover the capacity result in [11].
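A quick numerical sanity check of this superposition power split (a sketch; the fractions α, β and the square-root scaling are our own illustrative choices, made so that the average power constraints are met with equality):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
P1b, P2b = 5.0, 5.0        # average power constraints (illustrative)
alpha, beta = 0.6, 0.4     # fractions spent on the private codewords (illustrative)

# Unit-variance Gaussian codeword streams for W1, W2 and the common W0.
S1, S2, S0 = rng.normal(size=(3, n))

# Superposition: private part plus common part, scaled to the power split.
X1 = np.sqrt(alpha * P1b) * S1 + np.sqrt((1 - alpha) * P1b) * S0
X2 = np.sqrt(beta * P2b) * S2 + np.sqrt((1 - beta) * P2b) * S0

print(np.mean(X1**2), np.mean(X2**2))  # both close to 5.0
```

Since the private and common streams are independent, E[Xi^2] equals P̄i exactly; the shared component S0 also makes X1 and X2 correlated, which is the point of the construction.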
IV. OPTIMAL POWER ALLOCATION
We consider an optimal power allocation policy such that the R.H.S. in the third inequality of (3) is maximized while the other conditions remain satisfied. This is done because the third inequality is often the constraining condition. The optimal power allocation policy is obtained numerically. It depends upon ρ̃, which in turn depends on the source correlation ρ. To find a ρ̃ such that all three inequalities are satisfied by the optimal policy, we use the following iterative procedure: the channel correlation ρ̃ is first chosen so that the third inequality is satisfied; we then check the other two inequalities, and if they fail, ρ̃ is decreased until all three conditions are satisfied, if possible.
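The iterative search described above can be sketched as follows. Here `lhs` and `rhs` stand in for the three left and right hand sides of (3) as functions of ρ̃; the monotone placeholders below are toy stand-ins (not the paper's actual bounds), chosen only to exercise the search:

```python
def find_rho_tilde(lhs, rhs, rho_max, step=0.01):
    """Decrease rho_tilde from rho_max until all three
    inequalities lhs[k](rho) < rhs[k](rho) hold, if possible."""
    rho = rho_max
    while rho >= 0:
        if all(l(rho) < r(rho) for l, r in zip(lhs, rhs)):
            return rho
        rho -= step
    return None  # no feasible rho_tilde found

# Toy placeholders: the first two RHS shrink with rho (the (1 - rho^2)
# factor in (3)), the third RHS grows with rho (the cross term in (3)).
lhs = [lambda r: 0.5, lambda r: 0.5, lambda r: 1.2]
rhs = [lambda r: 1.0 - r**2, lambda r: 1.0 - r**2, lambda r: 1.0 + r]

print(find_rho_tilde(lhs, rhs, rho_max=0.9))   # ~0.70 for these placeholders
```

With these placeholders the search stops at the largest ρ̃ satisfying both the shrinking and the growing constraints; in the actual procedure the functions would be the expectations in (3) under the chosen power allocation policy.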
The optimal policy is compared with Random TDMA (RTDMA), Modified Random TDMA (MRTDMA) and Uniform Power Allocation (UPA) policies, defined below. In RTDMA only the user having the best channel conditions transmits (as in [2], [8], [9]). If the best channel state is obtained by more than one user, then one of these users is selected with equal probability. Once a user is chosen, the power allocation is by water-filling over the channel states of that user. This policy maximizes the sum of channel rates for independent sources under symmetric channel statistics and power constraints. Thus for ρ̃ = 0 it equals the optimal policy under symmetric conditions, because then the third inequality gives the sum rate. In MRTDMA also, only the user having the best channel conditions transmits; however, if the fading values are the same for both users, then the total power is split and both users transmit simultaneously. In UPA both users transmit all the time at powers P̄1 and P̄2.
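A minimal sketch of the water-filling step used by RTDMA, assuming a finite set of fade states for the selected user (the bisection on the water level is our own implementation choice):

```python
import numpy as np

def waterfill(gains, probs, p_avg, var_n=1.0, iters=60):
    """Water-filling over fade states: P(h) = max(0, mu - var_n / h^2),
    with the water level mu chosen by bisection so that
    sum_h probs[h] * P(h) = p_avg."""
    g = np.asarray(gains, float) ** 2 / var_n          # effective SNR slopes h^2 / var_n
    lo, hi = 0.0, p_avg + np.max(1.0 / g)              # valid bracket for the water level
    for _ in range(iters):
        mu = 0.5 * (lo + hi)                            # candidate water level
        if np.dot(probs, np.maximum(0.0, mu - 1.0 / g)) > p_avg:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / g)

# Fade amplitudes {1, 0.5} as in the example of Section IV-A; under RTDMA
# the best of the two users transmits, so the better state occurs w.p. 3/4.
P = waterfill([1.0, 0.5], [0.75, 0.25], p_avg=5.0)
print(P)   # the better state gets more power
```

For this two-state example the water level comes out above both noise floors, so both states are active, with the better state receiving the larger share.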
The optimal power allocation policy expends equal power on both users when the fade states are the same. When the users have unequal fading, the user with the worse channel gets a smaller fraction of the power. Its share increases with ρ̃ and becomes close to 0.5 as the two fade states approach each other.
In the following we compare these power allocation policies on discrete and Gaussian sources. In particular, we show that RTDMA may not be optimal for correlated sources even under symmetric channel and power constraints, and that even though MRTDMA may improve over RTDMA significantly, it is still suboptimal.
A. Discrete Sources
We show the sub-optimality of RTDMA and MRTDMA
via an example.
Consider the lossless transmission of discrete sources (U1, U2) over a fading GMAC; such a system is commonly encountered in practice. The sources (U1, U2) have joint distribution P(U1 = 0, U2 = 0) = P(U1 = 1, U2 = 1) = P(U1 = 0, U2 = 1) = 1/3, P(U1 = 1, U2 = 0) = 0. The fade states h1, h2 take values in {1, 0.5} with equal probability and are independent of each other. The power constraints on the channel inputs (X1, X2) are (5, 5). The channel noise is zero mean with unit variance.

For lossless transmission the L.H.S. in (3) become H(U1|U2), H(U2|U1) and H(U1, U2) respectively, and they evaluate to 0.667, 0.667 and 1.585. Let the sources be mapped to channel codewords with correlation ρ̃ = 0.3; such correlation preserving mappings are discussed in [14]. If we use UPA, the R.H.S. in the third inequality evaluates to 1.5030; with RTDMA it evaluates to 1.5273. Thus in both cases the third inequality is violated, and lossless transmission with these power control schemes may not be possible. With MRTDMA the R.H.S. in the third inequality improves to 1.6036, and the optimal scheme provides 1.6071. The R.H.S. in the first two inequalities evaluates to 0.8627 for MRTDMA and to 0.8755 for the optimal policy. With the coding scheme of [14] discussed above, the original bounds in the first two inequalities of (1) evaluate to 0.873 for optimal power allocation and 0.8315 for MRTDMA. Hence the first two inequalities are also satisfied. This ensures that the sources can be transmitted losslessly with these two power allocation schemes.
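The entropy values in this example, and the violation of the third inequality under UPA, can be reproduced numerically. This is a sketch; our reading of the fade-state averaging convention in (3) may differ slightly from the one behind the quoted 1.5030, but the qualitative conclusion is unchanged:

```python
import numpy as np
from itertools import product

# Joint pmf: P(0,0) = P(1,1) = P(0,1) = 1/3, P(1,0) = 0.
p = np.array([[1/3, 1/3], [0.0, 1/3]])

H = lambda q: -sum(x * np.log2(x) for x in np.ravel(q) if x > 0)
H12 = H(p)                        # H(U1, U2)
H1g2 = H12 - H(p.sum(axis=0))     # H(U1 | U2)
H2g1 = H12 - H(p.sum(axis=1))     # H(U2 | U1)
print(round(H1g2, 3), round(H2g1, 3), round(H12, 3))   # 0.667 0.667 1.585

# R.H.S. of the third inequality in (3) under UPA, rho_tilde = 0.3,
# fade amplitudes {1, 0.5} equiprobable and independent, unit noise.
P1 = P2 = 5.0
rho_t, var_n = 0.3, 1.0
rhs3 = np.mean([
    0.5 * np.log2(1 + (h1**2 * P1 + h2**2 * P2
                       + 2 * h1 * h2 * rho_t * np.sqrt(P1 * P2)) / var_n)
    for h1, h2 in product([1.0, 0.5], repeat=2)
])
print(rhs3 < H12)   # True: the third inequality is violated under UPA
```

The entropies match the quoted 0.667, 0.667 and 1.585 exactly, and the UPA sum-rate bound falls below H(U1, U2), confirming the violation.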
Hence from the above example we see that, unlike for independent sources, RTDMA does not perform optimally with correlated sources even for a symmetric system.

B. Gaussian Sources

Consider the transmission of correlated Gaussian sources over a GMAC. The sources are assumed to have zero mean, unit variance and correlation ρ. The power constraints on the channel inputs are (P̄1, P̄2) and the channel noise variance is unity. The performance measure is the minimum distortion at the decoder for the given sources, power constraints and channel noise variance. The distortion criterion is mean square error (MSE).

We consider three joint source-channel coding schemes discussed in [13]. The schemes discussed below have also been used in [5], [7] and [10]. Each coding-decoding scheme is used with the power allocation schemes RTDMA, MRTDMA, UPA and the optimal scheme obtained above from (3), which minimizes the distortions.

The first scheme is amplify and forward (AF), where the sources are amplified to the powers allocated for each state and sent. The power allocated to each state depends on the power allocation policy used. The channel inputs preserve the correlation of the source outputs (U1, U2). The decoding is performed by estimating the sources (U1, U2) from the channel output Y as (Û1, Û2) = E[(U1, U2)|Y]. This scheme was shown to be optimal in [10] for the symmetric case at low SNR on channels without fading.

In the second scheme, called Separation Based (SB), the source and channel coding are separated. The source coding is done by vector quantization followed by Slepian-Wolf compression ([4]), and the compressed outputs are mapped to independent channel r.v.s. These are amplified to the state dependent powers to obtain (X1, X2) and sent.

In the Lapidoth-Tinguely (LT) scheme (developed in [10]) the sources are vector quantized and mapped to correlated Gaussian codewords.

The decoding for both the above schemes is jointly typical decoding followed by estimation of the sources.

We compare the power allocation strategies (optimal, RTDMA, MRTDMA and UPA) for these joint source-channel coding schemes for various values of source correlation. We take (P̄1, P̄2) = (5, 5). The fading processes are as in Section IV-A. The mean distortion of each user is provided in Figs. 1-3; because of symmetry the mean distortion is the same for each user. Fig. 1 provides the results for AF, Fig. 2 for SB and Fig. 3 for LT. For comparison we have also included the results for the channel with no fading.

Fig. 1. Minimum distortion for AF, P1 = P2 = 5
Fig. 2. Minimum distortion for SB, P1 = P2 = 5
Fig. 3. Minimum distortion for LT, P1 = P2 = 5

From Figs. 1-3 we find that RTDMA performs worse than the optimal scheme and MRTDMA for AF and LT. Note also that in AF, RTDMA is sub-optimal even when the sources are independent. This is because water-filling does not give the power allocation that minimizes the distortion in AF. In SB the optimal power allocation scheme is RTDMA itself, as the sources after Slepian-Wolf compression become (asymptotically) independent. In AF, UPA is close to the optimal. In LT, MRTDMA performs close to the optimal.

Fig. 4. Comparison of the optimum power allocation schemes, P1 = P2 = 5

In Fig. 4 we have plotted the performance of the optimal scheme for AF, SB and LT. It can be seen that LT performs better than SB for all ρ, and AF performs better than the other two schemes at high ρ. This is due to the fact that, for the symmetric case (P̄1 = P̄2 = P), AF is optimal for P/σ_N^2 ≤ ρ/(1 − ρ^2) ([10]). These observations are in conformance with the conclusions in [13].

One of the performance measures is to maximize the R.H.S. in the third inequality of (3). The optimal power allocation policy discussed at the beginning of this section maximizes this. We compare RTDMA, MRTDMA and UPA for this criterion. We take P̄1 = P̄2 = 1 with the fading as above. The channel correlation ρ̃ achievable depends on the source correlation and the scheme used. We compare the power allocation policies for different ρ̃ in Fig. 5.

Fig. 5. Maximum sum rate for different power allocation policies with P1 = P2 = 1

From the figure it is clear that, unlike in the case of independent inputs, RTDMA based power allocation is strictly suboptimal for correlated sources. We also see that a slight modification of RTDMA, resulting in MRTDMA, can give performance close to the optimal scheme. At high channel correlation even UPA performs better than RTDMA.

V. CONCLUSIONS

In this paper, sufficient conditions for transmission of correlated sources over a fading MAC are provided. These conditions are specialized to a GMAC and an optimal power allocation policy is discussed. The optimal policy is compared with other well known policies in the literature, and it is found that 'random TDMA', optimal for independent sources, does not perform optimally for transmission of correlated sources over a GMAC.

APPENDIX
PROOF OF THEOREM 1
First we consider discrete sources and channel inputs; comments on including continuous sources/channel inputs are provided at the end of the proof.

The scheme involves distributed quantization (W1^n, W2^n) of the sources, followed by a correlation preserving mapping to the channel codewords depending on the channel state. The decoder first decodes the quantized versions (W1^n, W2^n) of (U1^n, U2^n) and then obtains the estimates (Û1^n, Û2^n) as a function of (W1^n, W2^n). We show the achievability of all points in the rate region (1).
Proof: Fix p(w1|u1), p(w2|u2), p(x1|w1, h1, h2), p(x2|w2, h1, h2) as well as fD^n satisfying the distortion constraints.

Codebook generation: Let R'i = I(Ui; Wi) + δ, i = 1, 2, for some δ > 0. Generate 2^{nR'i} codewords of length n, sampled iid from the marginal distribution p(wi), i = 1, 2. For each wi^n and (h1^n, h2^n), independently generate a sequence Xi^n according to ∏_{j=1}^n p(xij | wij, h1j, h2j), i = 1, 2. Call these sequences xi^n(wi^n, h1^n, h2^n), i ∈ {1, 2}. Reveal the codebooks to the encoders and the decoder.
Encoding: For i ∈ {1, 2}, given the source sequence Ui^n and h1^n, h2^n, the ith encoder looks for a codeword Wi^n such that (Ui^n, Wi^n) ∈ Tε^n(Ui, Wi) and then transmits Xi^n(Wi^n, h1^n, h2^n), where Tε^n(.) is the set of ε-typical sequences ([4]) of length n.

Decoding: Upon receiving Y^n, for a given (h1^n, h2^n) the decoder finds the unique pair (W1^n, W2^n) such that (W1^n, W2^n, x1^n(W1^n, h1^n, h2^n), x2^n(W2^n, h1^n, h2^n), Y^n) ∈ Tε^n. If it fails to find such a unique pair, the decoder declares (u1*^n, u2*^n).
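The ε-typicality test the decoder applies can be illustrated for a single iid source (a sketch of weak typicality via the empirical log-likelihood, as in [4]; the pmf below is a toy example):

```python
import numpy as np

rng = np.random.default_rng(2)
p = np.array([0.7, 0.2, 0.1])            # pmf of a toy source W
Hw = -np.sum(p * np.log2(p))             # entropy H(W) in bits

def is_typical(seq, p, eps):
    """Weakly eps-typical: | -(1/n) log2 p(w^n) - H(W) | < eps."""
    ll = -np.mean(np.log2(p[seq]))
    return abs(ll - Hw) < eps

n, eps = 10_000, 0.1
seq = rng.choice(len(p), size=n, p=p)    # iid draw from p
print(is_typical(seq, p, eps))           # True with high probability for large n
```

By the law of large numbers the empirical log-likelihood concentrates around H(W), so a genuinely iid draw passes the test, while, for instance, a constant sequence of the rarest symbol fails it.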
In the following we show that the probability of error for the above encoding-decoding scheme tends to zero as n → ∞. By the Markov Lemma ([4]), P{(U1^n, U2^n, W1^n(U1^n), W2^n(U2^n), X1^n(W1^n, h1^n, h2^n), X2^n(W2^n, h1^n, h2^n), Y^n) ∈ Tε^n} → 1 as n → ∞. An error can occur because of the following three events E1-E3; we show that P(Ei) → 0 for i = 1, 2, 3. The state sequence (h1^n, h2^n) is known both at the encoders and the decoder. For simplicity we take δ = ε.

E1: The encoders do not find the codewords. However, from rate distortion theory ([4], p. 356), lim_{n→∞} P(E1) = 0 if R'i > I(Ui; Wi), i ∈ {1, 2}.
E2: There exists another codeword ŵ1^n such that (ŵ1^n, W2^n, x1^n(ŵ1^n, h1^n, h2^n), x2^n(W2^n, h1^n, h2^n), Y^n) ∈ Tε^n. Define α ≜ (ŵ1^n, W2^n, x1^n(ŵ1^n, h1^n, h2^n), x2^n(W2^n, h1^n, h2^n), Y^n). Then,

P(E2) = P{There is ŵ1^n ≠ w1^n : α ∈ Tε^n}
      ≤ Σ_{ŵ1^n ≠ W1^n : (ŵ1^n, W2^n) ∈ Tε^n} P{α ∈ Tε^n}.      (5)

Denote {(x1^n(.), x2^n(.), y^n) : α ∈ Tε^n} by A. The probability term inside the summation in (5) is

≤ Σ_A P{x1^n(ŵ1^n, h1^n, h2^n), x2^n(w2^n, h1^n, h2^n), y^n | ŵ1^n, w2^n}
≤ Σ_A P{x1^n(ŵ1^n, h1^n, h2^n) | ŵ1^n, h1^n, h2^n} P{x2^n(w2^n, h1^n, h2^n), y^n | w2^n, h1^n, h2^n}      (6)
≤ Σ_A 2^{-n{H(X1|W1,H1,H2) + H(X2,Y|W2,H1,H2) - 4ε}}
≤ 2^{nH(X1,X2,Y|W1,W2,H1,H2)} 2^{-n{H(X1|W1,H1,H2) + H(X2,Y|W2,H1,H2) - 6ε}}.

But from the hypothesis we have

H(X1, X2, Y | W1, W2, H1, H2) − H(X1|W1, H1, H2) − H(X2, Y|W2, H1, H2) = −I(X1; Y | X2, W2, H1, H2).

Hence,

P{(ŵ1^n, W2^n, x1^n(ŵ1^n, h1^n, h2^n), x2^n(W2^n, h1^n, h2^n), Y^n) ∈ Tε^n} ≤ 2^{-n{I(X1;Y|X2,W2,H1,H2) - 6ε}}.

Then from (5),

P(E2) ≤ Σ_{ŵ1^n ≠ w1^n : (ŵ1^n, w2^n) ∈ Tε^n} 2^{-n{I(X1;Y|X2,W2,H1,H2) - 6ε}}
      ≤ |{ŵ1^n : (ŵ1^n, w2^n) ∈ Tε^n}| 2^{-n{I(X1;Y|X2,W2,H1,H2) - 6ε}}
      ≤ |{ŵ1^n}| P{(ŵ1^n, w2^n) ∈ Tε^n} 2^{-n{I(X1;Y|X2,W2,H1,H2) - 6ε}}
      ≤ 2^{n{I(U1;W1) + ε}} 2^{-n{I(W1;W2) - ε}} 2^{-n{I(X1;Y|X2,W2,H1,H2) - 6ε}}
      = 2^{n I(U1;W1|W2)} 2^{-n{I(X1;Y|X2,W2,H1,H2) - 8ε}}.      (7)

The R.H.S. of the above inequality tends to zero if I(U1; W1|W2) < I(X1; Y|X2, W2, H1, H2). In (7) we have used the fact that I(U1; W1) − I(W1; W2) = I(U1; W1|W2). Similarly, by symmetry of the problem we require

I(U2; W2|W1) < I(X2; Y|X1, W1, H1, H2).      (8)

E3: There exist other codewords ŵ1^n and ŵ2^n such that α ≜ (ŵ1^n, ŵ2^n, x1^n(ŵ1^n, h1^n, h2^n), x2^n(ŵ2^n, h1^n, h2^n), y^n) ∈ Tε^n. Then,

P(E3) = P{There is (ŵ1^n, ŵ2^n) ≠ (w1^n, w2^n) : α ∈ Tε^n}
      ≤ Σ_{(ŵ1^n, ŵ2^n) ≠ (w1^n, w2^n) : (ŵ1^n, ŵ2^n) ∈ Tε^n} P{α ∈ Tε^n}.      (9)

Denote {(x1^n(.), x2^n(.), y^n) : α ∈ Tε^n} by A. The probability term inside the summation in (9) is

≤ Σ_A P{x1^n(ŵ1^n, h1^n, h2^n), x2^n(ŵ2^n, h1^n, h2^n), y^n | ŵ1^n, ŵ2^n}
≤ Σ_A P{x1^n(ŵ1^n, h1^n, h2^n) | ŵ1^n, h1^n, h2^n} P{x2^n(ŵ2^n, h1^n, h2^n) | ŵ2^n, h1^n, h2^n} P{y^n | h1^n, h2^n}
≤ Σ_A 2^{-n{H(X1|W1,H1,H2) + H(X2|W2,H1,H2) + H(Y|H1,H2) - 5ε}}
≤ 2^{nH(X1,X2,Y|W1,W2,H1,H2)} 2^{-n{H(X1|W1,H1,H2) + H(X2|W2,H1,H2) + H(Y|H1,H2) - 7ε}}.

But from the hypothesis we have

H(X1, X2, Y | W1, W2, H1, H2) − H(X1|W1, H1, H2) − H(X2|W2, H1, H2) − H(Y|H1, H2) = −I(X1, X2; Y | H1, H2).

Hence,

P{(ŵ1^n, ŵ2^n, x1^n(ŵ1^n, h1^n, h2^n), x2^n(ŵ2^n, h1^n, h2^n), y^n) ∈ Tε^n} ≤ 2^{-n{I(X1,X2;Y|H1,H2) - 7ε}}.      (10)

Then from (9),

P(E3) ≤ Σ_{(ŵ1^n, ŵ2^n) ≠ (w1^n, w2^n) : (ŵ1^n, ŵ2^n) ∈ Tε^n} 2^{-n{I(X1,X2;Y|H1,H2) - 7ε}}
      ≤ |{(ŵ1^n, ŵ2^n) : (ŵ1^n, ŵ2^n) ∈ Tε^n}| 2^{-n{I(X1,X2;Y|H1,H2) - 7ε}}
      ≤ |{ŵ1^n}| |{ŵ2^n}| P{(ŵ1^n, ŵ2^n) ∈ Tε^n} 2^{-n{I(X1,X2;Y|H1,H2) - 7ε}}
      ≤ 2^{n{I(U1;W1) + I(U2;W2) + 2ε}} 2^{-n{I(W1;W2) - 2ε}} 2^{-n{I(X1,X2;Y|H1,H2) - 7ε}}
      = 2^{n I(U1,U2;W1,W2)} 2^{-n{I(X1,X2;Y|H1,H2) - 13ε}}.

The R.H.S. of the above inequality tends to zero if I(U1, U2; W1, W2) < I(X1, X2; Y|H1, H2).
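The identity I(U1; W1) − I(W1; W2) = I(U1; W1|W2) used in (7) relies on the Markov structure W1 ↔ U1 ↔ U2 ↔ W2 imposed by condition 1 of Theorem 1. It can be sanity-checked numerically on a random discrete distribution with that structure (a sketch):

```python
import numpy as np

rng = np.random.default_rng(3)

def rand_cond(rows, cols):
    """Random conditional pmf p(col | row)."""
    m = rng.random((rows, cols))
    return m / m.sum(axis=1, keepdims=True)

# Build p(u1, u2, w1, w2) = p(u1, u2) p(w1|u1) p(w2|u2).
pu = rng.random((2, 2)); pu /= pu.sum()
pw1_u1 = rand_cond(2, 3)
pw2_u2 = rand_cond(2, 3)
p = np.einsum('ab,ax,by->abxy', pu, pw1_u1, pw2_u2)  # axes: u1, u2, w1, w2

def H(q):
    q = np.ravel(q)
    return -np.sum(q[q > 0] * np.log2(q[q > 0]))

def I(pxy):
    return H(pxy.sum(1)) + H(pxy.sum(0)) - H(pxy)

p_u1w1 = p.sum(axis=(1, 3))          # p(u1, w1)
p_w1w2 = p.sum(axis=(0, 1))          # p(w1, w2)
p_u1w1w2 = p.sum(axis=1)             # p(u1, w1, w2)

# I(U1; W1 | W2) = H(U1, W2) + H(W1, W2) - H(W2) - H(U1, W1, W2)
I_cond = (H(p_u1w1w2.sum(axis=1)) + H(p_w1w2)
          - H(p_w1w2.sum(axis=0)) - H(p_u1w1w2))
print(np.isclose(I(p_u1w1) - I(p_w1w2), I_cond))   # True
```

Given the factorization p(u1, u2) p(w1|u1) p(w2|u2), the two sides agree to numerical precision, as the proof requires.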
Thus as n → ∞, with probability tending to 1, the decoder finds the correct sequence (W1^n, W2^n), which is jointly weakly ε-typical with (U1^n, U2^n).

The fact that (W1^n, W2^n) are weakly ε-typical with (U1^n, U2^n) does not guarantee that fD^n(W1^n, W2^n) will satisfy the distortions D1, D2. For this, one needs that (W1^n, W2^n) are distortion-ε-weakly typical ([4]) with (U1^n, U2^n). Let T_{D,ε}^n denote the set of distortion typical sequences ([4]). Then by the strong law of large numbers P(T_{D,ε}^n | Tε^n) → 1 as n → ∞. Thus the distortion constraints are also satisfied by the (W1^n, W2^n) obtained above with probability tending to 1 as n → ∞. Therefore, if the distortion measure d is bounded, lim_{n→∞} E[d(Ui^n, Ûi^n)] ≤ Di + ε, i = 1, 2.

If there exist ui* such that E[di(Ui, ui*)] < ∞, i = 1, 2, then the result extends to unbounded distortion measures as follows. Whenever the decoded (W1^n, W2^n) are not in the distortion typical set, we estimate (Û1^n, Û2^n) as (u1*^n, u2*^n). Then for i = 1, 2,

E[di(Ui^n, Ûi^n)] ≤ Di + ε + E[d(Ui^n, ui*^n) 1{(T_{D,ε}^n)^c}].      (11)

Since E[d(Ui^n, ui*^n)] < ∞ and P[(T_{D,ε}^n)^c] → 0 as n → ∞, the last term of (11) goes to zero as n → ∞.

The above proof also holds for continuous sources and continuous channels: the Markov lemma and weakly typical decoding, the devices used to prove the theorem, continue to hold, and the proof extends with E[Xi^2] ≤ P̄i, i = 1, 2.
R EFERENCES
[1] R. Ahlswede and T. Han. On source coding with side information via
a multiple access channel and related problems in information theory.
IEEE Trans. Inform. Theory, IT-29(3):396–411, May 1983.
[2] E. Biglieri, J. Proakis, and S. Shamai. Fading channels: Information
theoretic and communication aspects. IEEE Trans. Inform. Theory,
44(6):2619–2692, Oct. 1998.
[3] T. M. Cover, A. E. Gamal, and M. Salehi. Multiple access channels with arbitrarily correlated sources. IEEE Trans. Inform. Theory, IT-26(6):648–657, Nov 1980.
[4] T. M. Cover and J. A. Thomas. Elements of Information theory. Wiley
Series in Telecommunication, N.Y., 2004.
[5] M. Gastpar and M. Vetterli. Power spatio-temporal bandwidth and
distortion in large sensor networks. IEEE JSAC, 23(4):745–754, April
2005.
[6] A. J. Goldsmith and P. P. Varaiya. Capacity of fading channels with
channel side information. IEEE Trans. Inform. Theory, 43(6):1986–
1992, Nov 1997.
[7] P. Ishwar, R. Puri, K. Ramchandran, and S. S. Pradhan. On rate
constrained distributed estimation in unreliable sensor networks. IEEE
JSAC, 23(4):765–775, April 2005.
[8] R. Knopp and P. A. Humblet. Information capacity and power
control in single-cell multiuser communication. Proc. Int. Conf. on
Communication, ICC’95, Seattle,WA, pages 331–335, June 1995.
[9] R. Knopp and P. A. Humblet. Multiple-accessing over frequency-selective fading channels. 6th IEEE Int. Symp. on Personal Indoor and Mobile Radio Communication, PIMRC'95, Toronto, Canada, pages 1326–1331, Sept. 1995.
[10] A. Lapidoth and S. Tinguely. Sending a bivariate Gaussian source over a Gaussian MAC. IEEE ISIT 06, 2006.
[11] N. Liu and S. Ulukus. Capacity region and optimum power control strategies for fading Gaussian multiple access channels with common data. IEEE Trans. Communications, 54(10):1815–1826, Oct. 2006.
[12] Y. Oohama. Gaussian multiterminal source coding. IEEE Trans. Inform. Theory, IT-43(6):1912–1923, Nov. 1997.
[13] R. Rajesh and V. Sharma. Source channel coding for Gaussian sources
over a Gaussian multiple access channel. Proc. 45 Allerton conference
on Computing Control and Communication, 2007.
[14] R. Rajesh and V. Sharma. Distributed joint source-channel coding on a multiple access channel with side information. Technical Report, available: arXiv, http://arxiv.org/PS_cache/arxiv/pdf/0803/0803.1445v1.pdf, 2008.
[15] R. Rajesh, V. K. Varsheneya, and V. Sharma. Distributed joint source-channel coding on a multiple access channel with side information. In Proc. IEEE ISIT 2008, Toronto, Canada.
[16] S. Ray, M. Medard, M. Effros, and R. Kotter. On separation for
multiple access channels. Proc. IEEE Inform. Theory Workshop, 2006.
[17] S. Shamai and A. D. Wyner. Information-theoretic considerations for
symmetric cellular, multiple-access fading channels part I. IEEE Trans.
Inform. Theory, 43(6):1877–1893, Nov 1997.
[18] D. Tse and S. V. Hanly. Multiaccess fading channels-Part I: Polymatroid structure, optimal resource allocation and throughput capacities. IEEE Trans. Inform. Theory, 44(7):2796–2815, Nov 1998.
[19] D. Tse and S. V. Hanly. Multiaccess fading channels-Part II: Delay-limited capacities. IEEE Trans. Inform. Theory, 44(7):2816–2831, Nov 1998.
[20] V. K. Varsheneya and V. Sharma. Distributed coding for multiple
access communication with side information. Proc. IEEE Wireless
Communication and Networking Conference (WCNC), April 2006.
[21] A. B. Wagner, S. Tavildar, and P. Viswanath. The rate region of the
quadratic Gaussian two terminal source coding problem. IEEE Trans.
Inform. Theory, 54(5):1938–1961, May 2008.