Journal of Inequalities in Pure and Applied Mathematics

http://jipam.vu.edu.au/
Volume 2, Issue 1, Article 5, 2001
FURTHER REVERSE RESULTS FOR JENSEN’S DISCRETE INEQUALITY AND
APPLICATIONS IN INFORMATION THEORY
I. BUDIMIR, S.S. DRAGOMIR, AND J. PEČARIĆ
Department of Mathematics, Faculty of Textile Technology, University of Zagreb, Croatia.
School of Communications and Informatics, Victoria University of Technology, P.O. Box 14428, Melbourne City MC, Victoria 8001, Australia.
sever.dragomir@vu.edu.au
URL: http://rgmia.vu.edu.au/SSDragomirWeb.html
Department of Applied Mathematics, University of Adelaide, Adelaide, 5001.
URL: http://mahazu.hazu.hr/DepMPCS/indexJP.html
Received 12 April, 2000; accepted 6 October, 2000
Communicated by L.-E. Persson
ABSTRACT. Some new counterparts of Jensen's discrete inequality, which improve the recent results from [4] and [5], are given. A related result for generalized means is established. Applications in Information Theory are also provided.
Key words and phrases: Convex functions, Jensen’s Inequality, Entropy Mappings.
2000 Mathematics Subject Classification. 26D15, 94Xxx.
1. INTRODUCTION
Let $f : X \to \mathbb{R}$ be a convex mapping defined on the linear space $X$ and $x_i \in X$, $p_i \ge 0$ $(i = 1, \ldots, m)$ with $P_m := \sum_{i=1}^m p_i > 0$.
The following inequality is well known in the literature as Jensen's inequality:
(1.1) $f\left(\frac{1}{P_m}\sum_{i=1}^m p_i x_i\right) \le \frac{1}{P_m}\sum_{i=1}^m p_i f(x_i)$.
There are many well known inequalities which are particular cases of Jensen’s inequality, such
as the weighted arithmetic mean-geometric mean-harmonic mean inequality, the Ky-Fan inequality, the Hölder inequality, etc. For a comprehensive list of recent results on Jensen’s inequality, see the book [25] and the papers [9]-[15] where further references are given.
In 1994, Dragomir and Ionescu [18] proved the following inequality which counterparts (1.1)
for real mappings of a real variable.
ISSN (electronic): 1443-5756
© 2001 Victoria University. All rights reserved.
007-00
Theorem 1.1. Let $f : I \subseteq \mathbb{R} \to \mathbb{R}$ be a differentiable convex mapping on $\mathring{I}$ ($\mathring{I}$ is the interior of $I$), $x_i \in \mathring{I}$, $p_i \ge 0$ $(i = 1, \ldots, n)$ and $\sum_{i=1}^n p_i = 1$. Then we have the inequality
(1.2) $0 \le \sum_{i=1}^n p_i f(x_i) - f\left(\sum_{i=1}^n p_i x_i\right) \le \sum_{i=1}^n p_i x_i f'(x_i) - \sum_{i=1}^n p_i x_i \sum_{i=1}^n p_i f'(x_i)$,
where $f'$ is the derivative of $f$ on $\mathring{I}$.
Using this result and the discrete version of the Grüss inequality for weighted sums, S.S.
Dragomir obtained the following simple counterpart of Jensen’s inequality [5]:
Theorem 1.2. With the above assumptions for $f$ and if $m, M \in \mathring{I}$ and $m \le x_i \le M$ $(i = 1, \ldots, n)$, then we have
(1.3) $0 \le \sum_{i=1}^n p_i f(x_i) - f\left(\sum_{i=1}^n p_i x_i\right) \le \frac{1}{4}(M - m)\left(f'(M) - f'(m)\right)$.
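As a quick numerical sanity check of (1.2) and (1.3) (not part of the original argument), one can sample random weights and points for an illustrative convex choice, here $f(x) = x^2$; a Python sketch:

```python
import random

def jensen_gap(p, x, f):
    """Jensen difference: sum p_i f(x_i) - f(sum p_i x_i), assuming sum p_i = 1."""
    mean = sum(pi * xi for pi, xi in zip(p, x))
    return sum(pi * f(xi) for pi, xi in zip(p, x)) - f(mean)

f = lambda t: t * t        # illustrative convex mapping
fp = lambda t: 2.0 * t     # its derivative

random.seed(0)
for _ in range(1000):
    n = random.randint(2, 8)
    w = [random.random() + 1e-9 for _ in range(n)]
    p = [wi / sum(w) for wi in w]
    x = [random.uniform(1.0, 5.0) for _ in range(n)]
    gap = jensen_gap(p, x, f)
    # upper bound of (1.2)
    b12 = (sum(pi * xi * fp(xi) for pi, xi in zip(p, x))
           - sum(pi * xi for pi, xi in zip(p, x)) * sum(pi * fp(xi) for pi, xi in zip(p, x)))
    # upper bound of (1.3), with m = min x_i, M = max x_i
    b13 = 0.25 * (max(x) - min(x)) * (fp(max(x)) - fp(min(x)))
    assert -1e-12 <= gap <= b12 + 1e-12
    assert gap <= b13 + 1e-12
print("(1.2) and (1.3) verified on random data")
```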
This result was subsequently applied in Information Theory to Shannon's and Rényi's entropies. In this paper we point out some other counterparts of Jensen's inequality that are similar to (1.3), some of which are better than the above inequalities.
2. SOME NEW COUNTERPARTS FOR JENSEN'S DISCRETE INEQUALITY
The following result holds.
Theorem 2.1. Let $f : I \subseteq \mathbb{R} \to \mathbb{R}$ be a differentiable convex mapping on $\mathring{I}$ and $x_i \in \mathring{I}$ with $x_1 \le x_2 \le \cdots \le x_n$, and $p_i \ge 0$ $(i = 1, \ldots, n)$ with $\sum_{i=1}^n p_i = 1$. Then we have
(2.1) $0 \le \sum_{i=1}^n p_i f(x_i) - f\left(\sum_{i=1}^n p_i x_i\right) \le (x_n - x_1)\left(f'(x_n) - f'(x_1)\right) \max_{1 \le k \le n-1} P_k \bar{P}_{k+1} \le \frac{1}{4}(x_n - x_1)\left(f'(x_n) - f'(x_1)\right)$,
where $P_k := \sum_{i=1}^k p_i$ and $\bar{P}_{k+1} := 1 - P_k$.
Proof. We use the following Grüss type inequality due to J. E. Pečarić (see for example [25]):
(2.2) $\left|\frac{1}{Q_n}\sum_{i=1}^n q_i a_i b_i - \frac{1}{Q_n}\sum_{i=1}^n q_i a_i \cdot \frac{1}{Q_n}\sum_{i=1}^n q_i b_i\right| \le |a_n - a_1|\,|b_n - b_1| \max_{1 \le k \le n-1} \frac{Q_k \bar{Q}_{k+1}}{Q_n^2}$,
provided that $a$, $b$ are two monotonic $n$-tuples, $q$ is a positive one, $Q_n := \sum_{i=1}^n q_i > 0$, $Q_k := \sum_{i=1}^k q_i$ and $\bar{Q}_{k+1} := Q_n - Q_k$.
If in (2.2) we choose $q_i = p_i$, $a_i = x_i$, $b_i = f'(x_i)$ (and then $a_i$, $b_i$ are monotonic nondecreasing), we may state that
(2.3) $\sum_{i=1}^n p_i x_i f'(x_i) - \sum_{i=1}^n p_i x_i \sum_{i=1}^n p_i f'(x_i) \le (x_n - x_1)\left(f'(x_n) - f'(x_1)\right) \max_{1 \le k \le n-1} P_k \bar{P}_{k+1}$.
Now, using (1.2) and (2.3) we obtain the first inequality in (2.1).
For the second inequality, we observe that
$P_k \bar{P}_{k+1} = P_k(1 - P_k) \le \frac{1}{4}(P_k + 1 - P_k)^2 = \frac{1}{4}$
for all $k \in \{1, \ldots, n-1\}$, and then
$\max_{1 \le k \le n-1} P_k \bar{P}_{k+1} \le \frac{1}{4}$,
which proves the last part of (2.1).
Remark 2.2. It is obvious that the inequality (2.1) is an improvement of (1.3) if we assume that
the order for xi is as in the statement of Theorem 2.1.
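The improvement noted in Remark 2.2 can be observed numerically; a Python sketch (the convex choice $f(x) = x^2$ and the random data are illustrative, not from the paper):

```python
import random

random.seed(1)
f = lambda t: t * t
fp = lambda t: 2.0 * t

for _ in range(1000):
    n = random.randint(2, 8)
    w = [random.random() + 1e-9 for _ in range(n)]
    p = [wi / sum(w) for wi in w]
    x = sorted(random.uniform(1.0, 5.0) for _ in range(n))   # x_1 <= ... <= x_n
    mean = sum(pi * xi for pi, xi in zip(p, x))
    gap = sum(pi * f(xi) for pi, xi in zip(p, x)) - f(mean)
    P = [sum(p[:k + 1]) for k in range(n)]                   # partial sums P_k
    max_term = max(P[k] * (1.0 - P[k]) for k in range(n - 1))
    spread = (x[-1] - x[0]) * (fp(x[-1]) - fp(x[0]))
    # chain (2.1): gap <= spread * max_term <= spread / 4, the bound of (1.3)
    assert -1e-12 <= gap <= spread * max_term + 1e-12 <= spread / 4.0 + 2e-12
print("(2.1) verified; the middle bound never exceeds the (1.3) bound")
```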
Another result is embodied in the following theorem.
Theorem 2.3. Let $f : I \subseteq \mathbb{R} \to \mathbb{R}$ be a differentiable convex mapping on $\mathring{I}$ and $m, M \in \mathring{I}$ with $m \le x_i \le M$ $(i = 1, \ldots, n)$ and $p_i \ge 0$ $(i = 1, \ldots, n)$ with $\sum_{i=1}^n p_i = 1$. If $S$ is a subset of the set $\{1, \ldots, n\}$ minimizing the expression
(2.4) $\left|\sum_{i \in S} p_i - \frac{1}{2}\right|$,
then we have the inequality
(2.5) $0 \le \sum_{i=1}^n p_i f(x_i) - f\left(\sum_{i=1}^n p_i x_i\right) \le Q(M - m)\left(f'(M) - f'(m)\right) \le \frac{1}{4}(M - m)\left(f'(M) - f'(m)\right)$,
where $Q = \sum_{i \in S} p_i \left(1 - \sum_{i \in S} p_i\right)$.
Proof. We use the following Grüss type inequality due to Andrica and Badea [2]:
(2.6) $\left|Q_n \sum_{i=1}^n q_i a_i b_i - \sum_{i=1}^n q_i a_i \cdot \sum_{i=1}^n q_i b_i\right| \le (M_1 - m_1)(M_2 - m_2) \sum_{i \in S} q_i \left(Q_n - \sum_{i \in S} q_i\right)$,
provided that $m_1 \le a_i \le M_1$, $m_2 \le b_i \le M_2$ for $i = 1, \ldots, n$, and $S$ is the subset of $\{1, \ldots, n\}$ which minimises the expression $\left|\sum_{i \in S} q_i - \frac{1}{2} Q_n\right|$.
Choosing $q_i = p_i$, $a_i = x_i$, $b_i = f'(x_i)$, we may state that
(2.7) $0 \le \sum_{i=1}^n p_i x_i f'(x_i) - \sum_{i=1}^n p_i x_i \sum_{i=1}^n p_i f'(x_i) \le (M - m)\left(f'(M) - f'(m)\right) \sum_{i \in S} p_i \left(1 - \sum_{i \in S} p_i\right)$.
Now, using (1.2) and (2.7), we obtain the first inequality in (2.5). For the last part, we observe that
$Q \le \frac{1}{4}\left(\sum_{i \in S} p_i + 1 - \sum_{i \in S} p_i\right)^2 = \frac{1}{4}$
and the theorem is thus proved.
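The subset $S$ in (2.4) can be found by brute force for small $n$; a Python sketch verifying the chain (2.5) (the convex choice $f(x) = x^2$ and the random data are illustrative):

```python
import random
from itertools import combinations

def Q_of(p):
    """Q = s(1 - s), where s = sum_{i in S} p_i and S minimizes |s - 1/2| (brute force)."""
    n = len(p)
    subset_sums = [sum(p[i] for i in S)
                   for k in range(n + 1) for S in combinations(range(n), k)]
    s = min(subset_sums, key=lambda t: abs(t - 0.5))
    return s * (1.0 - s)

random.seed(2)
f = lambda t: t * t
fp = lambda t: 2.0 * t

for _ in range(300):
    n = random.randint(2, 7)
    w = [random.random() + 1e-9 for _ in range(n)]
    p = [wi / sum(w) for wi in w]
    x = [random.uniform(1.0, 5.0) for _ in range(n)]
    mean = sum(pi * xi for pi, xi in zip(p, x))
    gap = sum(pi * f(xi) for pi, xi in zip(p, x)) - f(mean)
    Q = Q_of(p)
    spread = (max(x) - min(x)) * (fp(max(x)) - fp(min(x)))
    # chain (2.5): gap <= Q * spread <= spread / 4
    assert -1e-12 <= gap <= Q * spread + 1e-12 <= spread / 4.0 + 2e-12
print("(2.5) verified on random data")
```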
The following inequality is well known in the literature as the arithmetic mean-geometric mean-harmonic mean inequality:
(2.8) $A_n(p, x) \ge G_n(p, x) \ge H_n(p, x)$,
where
$A_n(p, x) := \sum_{i=1}^n p_i x_i$ (the arithmetic mean),
$G_n(p, x) := \prod_{i=1}^n x_i^{p_i}$ (the geometric mean),
$H_n(p, x) := \frac{1}{\sum_{i=1}^n \frac{p_i}{x_i}}$ (the harmonic mean),
and $\sum_{i=1}^n p_i = 1$, $p_i \ge 0$ $(i = 1, \ldots, n)$.
Using the above two theorems, we are able to point out the following reverse of the AGH inequality.
Proposition 2.4. Let $x_i > 0$ $(i = 1, \ldots, n)$ and $p_i \ge 0$ with $\sum_{i=1}^n p_i = 1$.
(i) If $x_1 \le x_2 \le \cdots \le x_{n-1} \le x_n$, then we have
(2.9) $1 \le \frac{A_n(p, x)}{G_n(p, x)} \le \exp\left[\frac{(x_n - x_1)^2}{x_1 x_n} \max_{1 \le k \le n-1} P_k \bar{P}_{k+1}\right] \le \exp\left[\frac{1}{4} \cdot \frac{(x_n - x_1)^2}{x_1 x_n}\right]$.
(ii) If the set $S \subseteq \{1, \ldots, n\}$ minimizes the expression (2.4), and $0 < m \le x_i \le M < \infty$ $(i = 1, \ldots, n)$, then
(2.10) $1 \le \frac{A_n(p, x)}{G_n(p, x)} \le \exp\left[Q \cdot \frac{(M - m)^2}{mM}\right] \le \exp\left[\frac{1}{4} \cdot \frac{(M - m)^2}{mM}\right]$.
The proof goes by the inequalities (2.1) and (2.5), choosing $f(x) = -\ln x$. A similar result can be stated for $G_n$ and $H_n$.
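The reverse arithmetic-geometric inequality (2.10) is easy to probe numerically; a Python sketch (random data are illustrative):

```python
import math
import random
from itertools import combinations

def best_subset_sum(p):
    """Brute-force s = sum_{i in S} p_i closest to 1/2, i.e. the minimizer of (2.4)."""
    return min((sum(p[i] for i in S)
                for k in range(len(p) + 1) for S in combinations(range(len(p)), k)),
               key=lambda t: abs(t - 0.5))

random.seed(3)
for _ in range(300):
    n = random.randint(2, 7)
    w = [random.random() + 1e-9 for _ in range(n)]
    p = [wi / sum(w) for wi in w]
    x = [random.uniform(0.5, 4.0) for _ in range(n)]
    A = sum(pi * xi for pi, xi in zip(p, x))
    G = math.prod(xi ** pi for pi, xi in zip(p, x))
    s = best_subset_sum(p)
    Q = s * (1.0 - s)
    m, M = min(x), max(x)
    assert 1.0 - 1e-12 <= A / G <= math.exp(Q * (M - m) ** 2 / (m * M)) + 1e-9
print("(2.10) verified on random data")
```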
Proposition 2.5. Let $p \ge 1$ and $x_i > 0$, $p_i \ge 0$ $(i = 1, \ldots, n)$ with $\sum_{i=1}^n p_i = 1$.
(i) If $x_1 \le x_2 \le \cdots \le x_{n-1} \le x_n$, then we have
(2.11) $0 \le \sum_{i=1}^n p_i x_i^p - \left(\sum_{i=1}^n p_i x_i\right)^p \le p(x_n - x_1)\left(x_n^{p-1} - x_1^{p-1}\right) \max_{1 \le k \le n-1} P_k \bar{P}_{k+1} \le \frac{p}{4}(x_n - x_1)\left(x_n^{p-1} - x_1^{p-1}\right)$.
(ii) If the set $S \subseteq \{1, \ldots, n\}$ minimizes the expression (2.4), and $0 < m \le x_i \le M < \infty$ $(i = 1, \ldots, n)$, then
(2.12) $0 \le \sum_{i=1}^n p_i x_i^p - \left(\sum_{i=1}^n p_i x_i\right)^p \le pQ(M - m)\left(M^{p-1} - m^{p-1}\right) \le \frac{p}{4}(M - m)\left(M^{p-1} - m^{p-1}\right)$.
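A numerical sanity check of (2.11), with the exponent $p = 3$ as an illustrative choice (the bound $p(x_n - x_1)(x_n^{p-1} - x_1^{p-1})$ comes from $f(x) = x^p$, $f'(x) = p x^{p-1}$ in (2.1)):

```python
import random

random.seed(4)
EXP = 3.0  # the exponent p >= 1 of Proposition 2.5 (illustrative choice)

for _ in range(1000):
    n = random.randint(2, 8)
    w = [random.random() + 1e-9 for _ in range(n)]
    pr = [wi / sum(w) for wi in w]
    x = sorted(random.uniform(0.5, 3.0) for _ in range(n))
    lhs = (sum(pi * xi ** EXP for pi, xi in zip(pr, x))
           - sum(pi * xi for pi, xi in zip(pr, x)) ** EXP)
    P = [sum(pr[:k + 1]) for k in range(n)]
    max_term = max(P[k] * (1.0 - P[k]) for k in range(n - 1))
    spread = EXP * (x[-1] - x[0]) * (x[-1] ** (EXP - 1) - x[0] ** (EXP - 1))
    assert -1e-12 <= lhs <= spread * max_term + 1e-12 <= spread / 4.0 + 2e-12
print("(2.11) verified on random data")
```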
Remark 2.6. The above results are improvements of the corresponding inequalities obtained in
[5].
Remark 2.7. Similar inequalities can be stated if we choose other convex functions such as:
f (x) = x ln x, x > 0 or f (x) = exp (x), x ∈ R. We omit the details.
3. A CONVERSE INEQUALITY FOR CONVEX MAPPINGS DEFINED ON $\mathbb{R}^n$
In 1996, Dragomir and Goh [15] proved the following converse of Jensen's inequality for convex mappings on $\mathbb{R}^n$.
Theorem 3.1. Let $f : \mathbb{R}^n \to \mathbb{R}$ be a differentiable convex mapping on $\mathbb{R}^n$ and let
$(\nabla f)(x) := \left(\frac{\partial f(x)}{\partial x_1}, \ldots, \frac{\partial f(x)}{\partial x_n}\right)$
be the vector of the partial derivatives, $x = (x_1, \ldots, x_n) \in \mathbb{R}^n$. If $x_i \in \mathbb{R}^n$ $(i = 1, \ldots, m)$, $p_i \ge 0$ $(i = 1, \ldots, m)$ with $P_m := \sum_{i=1}^m p_i > 0$, then
(3.1) $0 \le \frac{1}{P_m}\sum_{i=1}^m p_i f(x_i) - f\left(\frac{1}{P_m}\sum_{i=1}^m p_i x_i\right) \le \frac{1}{P_m}\sum_{i=1}^m p_i \langle \nabla f(x_i), x_i \rangle - \left\langle \frac{1}{P_m}\sum_{i=1}^m p_i \nabla f(x_i), \frac{1}{P_m}\sum_{i=1}^m p_i x_i \right\rangle$.
The result was applied to different problems in Information Theory by providing counterpart inequalities for Shannon's entropy, conditional entropy, mutual information, conditional mutual information, etc. For generalizations of (3.1) in normed spaces and other applications in Information Theory, see Matić's Ph.D. dissertation [23].
Recently, Dragomir [4] provided an upper bound for Jensen's difference
(3.2) $\Delta(f, p, x) := \frac{1}{P_m}\sum_{i=1}^m p_i f(x_i) - f\left(\frac{1}{P_m}\sum_{i=1}^m p_i x_i\right)$,
which, even though it is not as sharp as (3.1), provides a simpler and, for applications, better way of estimating the Jensen difference $\Delta$. His result is embodied in the following theorem.
Theorem 3.2. Let $f : \mathbb{R}^n \to \mathbb{R}$ be a differentiable convex mapping and $x_i \in \mathbb{R}^n$, $i = 1, \ldots, m$. Suppose that there exist vectors $\phi, \Phi \in \mathbb{R}^n$ such that
(3.3) $\phi \le x_i \le \Phi$ (the order is considered on the co-ordinates)
and $m, M \in \mathbb{R}^n$ are such that
(3.4) $m \le \nabla f(x_i) \le M$
for all $i \in \{1, \ldots, m\}$. Then for all $p_i \ge 0$ $(i = 1, \ldots, m)$ with $P_m > 0$, we have the inequality
(3.5) $0 \le \frac{1}{P_m}\sum_{i=1}^m p_i f(x_i) - f\left(\frac{1}{P_m}\sum_{i=1}^m p_i x_i\right) \le \frac{1}{4}\|\Phi - \phi\|\,\|M - m\|$,
where $\|\cdot\|$ is the usual Euclidean norm on $\mathbb{R}^n$.
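A numerical sketch of (3.5) in Python, with the illustrative choice $f(x) = \|x\|^2$, for which $\nabla f(x) = 2x$, so the coordinatewise bounds $m = 2\phi$, $M = 2\Phi$ follow from those on the $x_i$:

```python
import math
import random

def norm(v):
    return math.sqrt(sum(t * t for t in v))

random.seed(5)
for _ in range(500):
    m_pts = random.randint(2, 6)
    xs = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(m_pts)]
    w = [random.random() + 1e-9 for _ in range(m_pts)]
    Pm = sum(w)
    f = lambda v: sum(t * t for t in v)               # f(v) = ||v||^2, grad f(v) = 2v
    bar = [sum(wi * x[d] for wi, x in zip(w, xs)) / Pm for d in range(3)]
    gap = sum(wi * f(x) for wi, x in zip(w, xs)) / Pm - f(bar)
    phi = [min(x[d] for x in xs) for d in range(3)]   # coordinatewise bounds on x_i
    Phi = [max(x[d] for x in xs) for d in range(3)]
    # grad f(x_i) = 2 x_i lies between m = 2 phi and M = 2 Phi coordinatewise
    bound = 0.25 * norm([Phi[d] - phi[d] for d in range(3)]) \
                 * norm([2.0 * (Phi[d] - phi[d]) for d in range(3)])
    assert -1e-12 <= gap <= bound + 1e-12
print("(3.5) verified on random data")
```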
He applied this inequality to obtain different upper bounds for Shannon’s and Rényi’s entropies.
In this section, we point out another counterpart for Jensen’s difference, assuming that the
∇−operator is of Hölder’s type, as follows.
Theorem 3.3. Let $f : \mathbb{R}^n \to \mathbb{R}$ be a differentiable convex mapping and $x_i \in \mathbb{R}^n$, $p_i \ge 0$ $(i = 1, \ldots, m)$ with $P_m > 0$. Suppose that the $\nabla$-operator satisfies a condition of $r$-$H$-Hölder type, i.e.,
(3.6) $\|\nabla f(x) - \nabla f(y)\| \le H\|x - y\|^r$ for all $x, y \in \mathbb{R}^n$,
where $H > 0$, $r \in (0, 1]$ and $\|\cdot\|$ is the Euclidean norm.
Then we have the inequality:
(3.7) $0 \le \frac{1}{P_m}\sum_{i=1}^m p_i f(x_i) - f\left(\frac{1}{P_m}\sum_{i=1}^m p_i x_i\right) \le \frac{H}{P_m^2}\sum_{1 \le i < j \le m} p_i p_j \|x_i - x_j\|^{r+1}$.
Proof. We recall Korkine's identity,
$\frac{1}{P_m}\sum_{i=1}^m p_i \langle y_i, x_i \rangle - \left\langle \frac{1}{P_m}\sum_{i=1}^m p_i y_i, \frac{1}{P_m}\sum_{i=1}^m p_i x_i \right\rangle = \frac{1}{2P_m^2}\sum_{i,j=1}^m p_i p_j \langle y_i - y_j, x_i - x_j \rangle$, $x_i, y_i \in \mathbb{R}^n$,
and simply write
$\frac{1}{P_m}\sum_{i=1}^m p_i \langle \nabla f(x_i), x_i \rangle - \left\langle \frac{1}{P_m}\sum_{i=1}^m p_i \nabla f(x_i), \frac{1}{P_m}\sum_{i=1}^m p_i x_i \right\rangle = \frac{1}{2P_m^2}\sum_{i,j=1}^m p_i p_j \langle \nabla f(x_i) - \nabla f(x_j), x_i - x_j \rangle$.
Using (3.1) and the properties of the modulus, we have
$0 \le \frac{1}{P_m}\sum_{i=1}^m p_i f(x_i) - f\left(\frac{1}{P_m}\sum_{i=1}^m p_i x_i\right) \le \frac{1}{2P_m^2}\sum_{i,j=1}^m p_i p_j \left|\langle \nabla f(x_i) - \nabla f(x_j), x_i - x_j \rangle\right| \le \frac{1}{2P_m^2}\sum_{i,j=1}^m p_i p_j \|\nabla f(x_i) - \nabla f(x_j)\|\,\|x_i - x_j\| \le \frac{H}{2P_m^2}\sum_{i,j=1}^m p_i p_j \|x_i - x_j\|^{r+1}$,
and the inequality (3.7) is proved.
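A numerical check of (3.7), again with the illustrative choice $f(v) = \|v\|^2$, for which $\|\nabla f(x) - \nabla f(y)\| = 2\|x - y\|$, so $H = 2$ and $r = 1$:

```python
import math
import random

def sq_norm(v):
    return sum(t * t for t in v)

def jensen_gap(w, xs):
    """Jensen difference of f(v) = ||v||^2 with weights w (not necessarily normalized)."""
    Pm = sum(w)
    bar = [sum(wi * x[d] for wi, x in zip(w, xs)) / Pm for d in range(len(xs[0]))]
    return sum(wi * sq_norm(x) for wi, x in zip(w, xs)) / Pm - sq_norm(bar)

random.seed(6)
H, r = 2.0, 1.0  # for f(v) = ||v||^2: ||grad f(x) - grad f(y)|| = 2 ||x - y||
for _ in range(500):
    m_pts = random.randint(2, 6)
    xs = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(m_pts)]
    w = [random.random() + 1e-9 for _ in range(m_pts)]
    Pm = sum(w)
    rhs = (H / Pm ** 2) * sum(w[i] * w[j] * math.dist(xs[i], xs[j]) ** (r + 1)
                              for i in range(m_pts) for j in range(i + 1, m_pts))
    assert -1e-12 <= jensen_gap(w, xs) <= rhs + 1e-12
print("(3.7) verified on random data")
```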
Corollary 3.4. With the assumptions of Theorem 3.3 and if $\Delta := \max_{1 \le i < j \le m} \|x_i - x_j\|$, then we have the inequality
(3.8) $0 \le \frac{1}{P_m}\sum_{i=1}^m p_i f(x_i) - f\left(\frac{1}{P_m}\sum_{i=1}^m p_i x_i\right) \le \frac{H\Delta^{r+1}}{2P_m^2}\left(P_m^2 - \sum_{i=1}^m p_i^2\right)$.
Proof. Indeed,
$\sum_{1 \le i < j \le m} p_i p_j \|x_i - x_j\|^{r+1} \le \Delta^{r+1} \sum_{1 \le i < j \le m} p_i p_j$.
However,
$\sum_{1 \le i < j \le m} p_i p_j = \frac{1}{2}\left(\sum_{i,j=1}^m p_i p_j - \sum_{i=j} p_i p_j\right) = \frac{1}{2}\left(P_m^2 - \sum_{i=1}^m p_i^2\right)$,
and the inequality (3.8) is proved. (For normalized weights, $P_m = 1$, this factor reduces to $\frac{1}{2}\left(1 - \sum_{i=1}^m p_i^2\right)$.)
The case of Lipschitzian mappings is embodied in the following corollary.
Corollary 3.5. Let $f : \mathbb{R}^n \to \mathbb{R}$ be a differentiable convex mapping and $x_i \in \mathbb{R}^n$, $p_i \ge 0$ $(i = 1, \ldots, m)$ with $P_m > 0$. Suppose that the $\nabla$-operator is Lipschitzian with the constant $L > 0$, i.e.,
(3.9) $\|\nabla f(x) - \nabla f(y)\| \le L\|x - y\|$ for all $x, y \in \mathbb{R}^n$,
where $\|\cdot\|$ is the Euclidean norm. Then
(3.10) $0 \le \frac{1}{P_m}\sum_{i=1}^m p_i f(x_i) - f\left(\frac{1}{P_m}\sum_{i=1}^m p_i x_i\right) \le L\left[\frac{1}{P_m}\sum_{i=1}^m p_i \|x_i\|^2 - \left\|\frac{1}{P_m}\sum_{i=1}^m p_i x_i\right\|^2\right]$.
Proof. The argument is obvious by Theorem 3.3, taking into account that for $r = 1$,
$\sum_{1 \le i < j \le m} p_i p_j \|x_i - x_j\|^2 = P_m \sum_{i=1}^m p_i \|x_i\|^2 - \left\|\sum_{i=1}^m p_i x_i\right\|^2$,
where $\|\cdot\|$ is the Euclidean norm.
Moreover, if we assume more about the vectors $(x_i)_{i=1,\ldots,m}$, we can obtain a simpler result that is similar to the one in [4].
Corollary 3.6. Assume that $f$ is as in Corollary 3.5. If
(3.11) $\phi \le x_i \le \Phi$ (on the co-ordinates), $\phi, \Phi \in \mathbb{R}^n$ $(i = 1, \ldots, m)$,
then we have the inequality
(3.12) $0 \le \frac{1}{P_m}\sum_{i=1}^m p_i f(x_i) - f\left(\frac{1}{P_m}\sum_{i=1}^m p_i x_i\right) \le \frac{1}{4} \cdot L \cdot \|\Phi - \phi\|^2$.
Proof. It follows by the fact that in $\mathbb{R}^n$ we have the following Grüss type inequality (as proved in [4]):
(3.13) $\frac{1}{P_m}\sum_{i=1}^m p_i \|x_i\|^2 - \left\|\frac{1}{P_m}\sum_{i=1}^m p_i x_i\right\|^2 \le \frac{1}{4}\|\Phi - \phi\|^2$,
provided that (3.11) holds.
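Both the $r = 1$ identity used in the proof of Corollary 3.5 and the Grüss type inequality (3.13) can be checked numerically; a Python sketch (random data are illustrative):

```python
import math
import random

def sq_norm(v):
    return sum(t * t for t in v)

def pairwise_sum(w, xs):
    """sum_{i<j} w_i w_j ||x_i - x_j||^2"""
    return sum(w[i] * w[j] * math.dist(xs[i], xs[j]) ** 2
               for i in range(len(xs)) for j in range(i + 1, len(xs)))

random.seed(7)
for _ in range(500):
    m_pts = random.randint(2, 6)
    xs = [[random.uniform(-2.0, 2.0) for _ in range(3)] for _ in range(m_pts)]
    w = [random.random() + 1e-9 for _ in range(m_pts)]
    Pm = sum(w)
    mom2 = sum(wi * sq_norm(x) for wi, x in zip(w, xs))
    bar = [sum(wi * x[d] for wi, x in zip(w, xs)) for d in range(3)]
    # the r = 1 identity from the proof of Corollary 3.5
    assert abs(pairwise_sum(w, xs) - (Pm * mom2 - sq_norm(bar))) < 1e-8
    # the Gruss type inequality (3.13), with coordinatewise bounds phi, Phi
    phi = [min(x[d] for x in xs) for d in range(3)]
    Phi = [max(x[d] for x in xs) for d in range(3)]
    gap = mom2 / Pm - sq_norm([b / Pm for b in bar])
    assert gap <= 0.25 * sq_norm([Phi[d] - phi[d] for d in range(3)]) + 1e-12
print("Corollary 3.5 identity and (3.13) verified on random data")
```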
Remark 3.7. For some Grüss type inequalities in Inner Product Spaces, see [7].
4. SOME RELATED RESULTS
We start with the following definitions from [3].
Definition 4.1. Let −∞ < a < b < ∞. Then CM [a, b] denotes the set of all functions with
domain [a, b] that are continuous and strictly monotonic there.
Definition 4.2. Let $-\infty < a < b < \infty$, and let $f \in CM[a, b]$. Then, for each positive integer $n$, each $n$-tuple $x = (x_1, \ldots, x_n)$, where $a \le x_j \le b$ $(j = 1, 2, \ldots, n)$, and each $n$-tuple $p = (p_1, p_2, \ldots, p_n)$, where $p_j > 0$ $(j = 1, 2, \ldots, n)$ and $\sum_{j=1}^n p_j = 1$, let $M_f(x, p)$ denote the (weighted) mean
$f^{-1}\left\{\sum_{j=1}^n p_j f(x_j)\right\}$.
We may state now the following result.
Theorem 4.1. Let $S$ be the subset of $\{1, \ldots, n\}$ which minimizes the expression $\left|\sum_{i \in S} p_i - \frac{1}{2}\right|$. If $f, g \in CM[a, b]$, then
$\sup_x \left\{|M_f(x, p) - M_g(x, p)|\right\} \le Q \cdot \left\|\left(f^{-1}\right)'\right\|_\infty \cdot \left\|\left(f \circ g^{-1}\right)''\right\|_\infty \cdot |g(b) - g(a)|^2$,
provided that the right-hand side of the inequality is finite, where, as above,
$Q = \left(\sum_{i \in S} p_i\right)\left(1 - \sum_{i \in S} p_i\right)$,
and $\|\cdot\|_\infty$ is the usual sup-norm.
Proof. Let, as in [3], $h = f \circ g^{-1}$, $n > 1$, and let
$x = (x_1, x_2, \ldots, x_n)$ and $p = (p_1, p_2, \ldots, p_n)$
be as in Definition 4.2, and $y_j = g(x_j)$ $(j = 1, 2, \ldots, n)$. By the mean-value theorem, for some $\alpha$ in the open interval joining $f(a)$ to $f(b)$, we have
$M_f(x, p) - M_g(x, p) = f^{-1}\left\{\sum_{j=1}^n p_j f(x_j)\right\} - f^{-1}\left[h\left\{\sum_{j=1}^n p_j g(x_j)\right\}\right]$
$= \left(f^{-1}\right)'(\alpha)\left[\sum_{j=1}^n p_j f(x_j) - h\left\{\sum_{j=1}^n p_j g(x_j)\right\}\right]$
$= \left(f^{-1}\right)'(\alpha)\left[\sum_{j=1}^n p_j h(y_j) - h\left\{\sum_{j=1}^n p_j y_j\right\}\right]$
$= \left(f^{-1}\right)'(\alpha)\left[\sum_{j=1}^n p_j \left\{h(y_j) - h\left(\sum_{k=1}^n p_k y_k\right)\right\}\right]$.
Using the mean-value theorem a second time, we conclude that there exist points $z_1, z_2, \ldots, z_n$ in the open interval joining $g(a)$ to $g(b)$, such that
$M_f(x, p) - M_g(x, p) = \left(f^{-1}\right)'(\alpha)\big[p_1\{(1 - p_1)y_1 - p_2 y_2 - \cdots - p_n y_n\} h'(z_1) + p_2\{-p_1 y_1 + (1 - p_2)y_2 - \cdots - p_n y_n\} h'(z_2) + \cdots + p_n\{-p_1 y_1 - p_2 y_2 - \cdots + (1 - p_n)y_n\} h'(z_n)\big]$
$= \left(f^{-1}\right)'(\alpha)\big[p_1\{p_2(y_1 - y_2) + \cdots + p_n(y_1 - y_n)\} h'(z_1) + p_2\{p_1(y_2 - y_1) + \cdots + p_n(y_2 - y_n)\} h'(z_2) + \cdots + p_n\{p_1(y_n - y_1) + \cdots + p_{n-1}(y_n - y_{n-1})\} h'(z_n)\big]$
$= \left(f^{-1}\right)'(\alpha) \sum_{1 \le i < j \le n} p_i p_j (y_i - y_j)\{h'(z_i) - h'(z_j)\}$.
Using the mean-value theorem a third time, we conclude that there exist points $\omega_{ij}$ $(1 \le i < j \le n)$ in the open interval joining $g(a)$ to $g(b)$, such that
$\left(f^{-1}\right)'(\alpha) \sum_{1 \le i < j \le n} p_i p_j (y_i - y_j)\{h'(z_i) - h'(z_j)\} = \left(f^{-1}\right)'(\alpha) \sum_{1 \le i < j \le n} p_i p_j (y_i - y_j)(z_i - z_j) h''(\omega_{ij})$.
Consequently,
$|M_f(x, p) - M_g(x, p)| \le \left|\left(f^{-1}\right)'(\alpha)\right| \sum_{1 \le i < j \le n} p_i p_j |y_i - y_j| \cdot |z_i - z_j| \cdot |h''(\omega_{ij})|$
$\le \left\|\left(f^{-1}\right)'\right\|_\infty \cdot \left\|h''\right\|_\infty \cdot \sum_{1 \le i < j \le n} p_i p_j |y_i - y_j| \cdot |z_i - z_j|$
(by the Cauchy-Bunyakovsky-Schwarz inequality)
$\le \left\|\left(f^{-1}\right)'\right\|_\infty \cdot \left\|\left(f \circ g^{-1}\right)''\right\|_\infty \cdot \sqrt{\sum_{1 \le i < j \le n} p_i p_j |y_i - y_j|^2} \cdot \sqrt{\sum_{1 \le i < j \le n} p_i p_j |z_i - z_j|^2}$
(by the Andrica and Badea result)
$\le \left\|\left(f^{-1}\right)'\right\|_\infty \cdot \left\|\left(f \circ g^{-1}\right)''\right\|_\infty \cdot \sqrt{\left(\sum_{i \in S} p_i\right)\left(1 - \sum_{i \in S} p_i\right)|g(b) - g(a)|^2} \cdot \sqrt{\left(\sum_{i \in S} p_i\right)\left(1 - \sum_{i \in S} p_i\right)|g(b) - g(a)|^2}$
$= Q \left\|\left(f^{-1}\right)'\right\|_\infty \cdot \left\|\left(f \circ g^{-1}\right)''\right\|_\infty \cdot |g(b) - g(a)|^2$,
and the theorem is proved.
Corollary 4.2. If $f, g \in CM[a, b]$, then
$\sup_x \left\{|M_f(x, p) - M_g(x, p)|\right\} \le Q \cdot \left\|\frac{1}{f'}\right\|_\infty \cdot \left\|\frac{1}{g'}\left(\frac{f'}{g'}\right)'\right\|_\infty \cdot |g(b) - g(a)|^2$,
provided that the right-hand side of the inequality exists.
Proof. This follows at once from the fact that
$\left(f^{-1}\right)' = \frac{1}{f' \circ f^{-1}}$
and
$\left(f \circ g^{-1}\right)'' = \frac{(g' \circ g^{-1})(f'' \circ g^{-1}) - (f' \circ g^{-1})(g'' \circ g^{-1})}{(g' \circ g^{-1})^3} = \left[\frac{1}{g'}\left(\frac{f'}{g'}\right)'\right] \circ g^{-1}$.
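Theorem 4.1 can be probed numerically for the illustrative pair $f(t) = t^2$, $g(t) = t$ on $[a, b] \subset (0, \infty)$, for which $M_f$ is the quadratic mean, $M_g$ the arithmetic mean, $\left\|\left(f^{-1}\right)'\right\|_\infty = \frac{1}{2a}$ and $\left\|\left(f \circ g^{-1}\right)''\right\|_\infty = 2$ (these norm values are computed for this specific pair, not taken from the paper):

```python
import math
import random
from itertools import combinations

def best_subset_sum(p):
    """Brute-force s = sum_{i in S} p_i closest to 1/2 (the minimizer in Theorem 4.1)."""
    return min((sum(p[i] for i in S)
                for k in range(len(p) + 1) for S in combinations(range(len(p)), k)),
               key=lambda t: abs(t - 0.5))

random.seed(8)
a, b = 1.0, 2.0   # interval [a, b] with a > 0, so f(t) = t^2 is in CM[a, b]
for _ in range(300):
    n = random.randint(2, 7)
    w = [random.random() + 1e-9 for _ in range(n)]
    p = [wi / sum(w) for wi in w]
    x = [random.uniform(a, b) for _ in range(n)]
    Mf = math.sqrt(sum(pi * xi ** 2 for pi, xi in zip(p, x)))   # M_f for f(t) = t^2
    Mg = sum(pi * xi for pi, xi in zip(p, x))                   # M_g for g(t) = t
    s = best_subset_sum(p)
    Q = s * (1.0 - s)
    # ||(f^{-1})'||_inf = 1/(2a) on [f(a), f(b)];  ||(f o g^{-1})''||_inf = 2
    assert abs(Mf - Mg) <= Q * (1.0 / (2.0 * a)) * 2.0 * (b - a) ** 2 + 1e-12
print("Theorem 4.1 verified for f(t) = t^2, g(t) = t")
```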
Remark 4.3. This establishes Theorem 4.3 from [3] and replaces the multiplicative factor $\frac{1}{4}$ by $Q$. In Corollary 4.2, we also replaced the multiplicative factor $\frac{1}{4}$ by $Q$.
5. APPLICATIONS IN INFORMATION THEORY
We give some new applications for Shannon's entropy
$H_b(X) := \sum_{i=1}^r p_i \log_b \frac{1}{p_i}$,
where $X$ is a random variable with the probability distribution $(p_i)_{i=1,\ldots,r}$.
Theorem 5.1. Let $X$ be as above and assume that $p_1 \ge p_2 \ge \cdots \ge p_r$ or $p_1 \le p_2 \le \cdots \le p_r$. Then we have the inequality
(5.1) $0 \le \log_b r - H_b(X) \le \frac{(p_1 - p_r)^2}{\ln b \cdot p_1 p_r} \max_{1 \le k \le r-1} P_k \bar{P}_{k+1}$.
Proof. We choose in Theorem 2.1, $f(x) = -\log_b x$, $x > 0$, $x_i = \frac{1}{p_i}$ $(i = 1, \ldots, r)$. If $p_1 \ge p_2 \ge \cdots \ge p_r$, then $x_1 \le x_2 \le \cdots \le x_r$, and since $f'(x) = -\frac{1}{x \ln b}$, by (2.1) we obtain
$0 \le \log_b r - H_b(X) \le \frac{1}{\ln b}\left(\frac{1}{p_r} - \frac{1}{p_1}\right)(p_1 - p_r) \max_{1 \le k \le r-1} P_k \bar{P}_{k+1}$,
which is equivalent to (5.1). The same inequality is obtained if $p_1 \le p_2 \le \cdots \le p_r$.
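A numerical check of (5.1) in Python (the base $b = 2$ is an illustrative choice; the factor $\ln b$ comes from $f'(x) = -1/(x \ln b)$):

```python
import math
import random

random.seed(9)
b = 2.0  # logarithm base (illustrative)
for _ in range(500):
    r = random.randint(2, 8)
    w = sorted((random.uniform(0.1, 1.0) for _ in range(r)), reverse=True)
    p = [wi / sum(w) for wi in w]                 # p_1 >= p_2 >= ... >= p_r
    H = sum(pi * math.log(1.0 / pi, b) for pi in p)
    # partial sums P_k taken in the order of increasing x_i = 1/p_i
    P = [sum(p[:k + 1]) for k in range(r)]
    max_term = max(P[k] * (1.0 - P[k]) for k in range(r - 1))
    bound = (p[0] - p[-1]) ** 2 / (math.log(b) * p[0] * p[-1]) * max_term
    assert -1e-12 <= math.log(r, b) - H <= bound + 1e-12
print("(5.1) verified on random data")
```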
Theorem 5.2. Let $X$ be as above and suppose that
$p_M := \max\{p_i \mid i = 1, \ldots, r\}$, $p_m := \min\{p_i \mid i = 1, \ldots, r\}$.
If $S$ is a subset of the set $\{1, \ldots, r\}$ minimizing the expression $\left|\sum_{i \in S} p_i - \frac{1}{2}\right|$, then we have the estimate
(5.2) $0 \le \log_b r - H_b(X) \le Q \cdot \frac{(p_M - p_m)^2}{\ln b \cdot p_M p_m}$.
Proof. We choose in Theorem 2.3,
$f(x) = -\log_b x$, $x > 0$, $x_i = \frac{1}{p_i}$ $(i = 1, \ldots, r)$.
Then $m = \frac{1}{p_M}$, $M = \frac{1}{p_m}$, $f'(x) = -\frac{1}{x \ln b}$, and the inequality (2.5) becomes
$0 \le \log_b r - \sum_{i=1}^r p_i \log_b \frac{1}{p_i} \le Q \cdot \frac{1}{\ln b}\left(\frac{1}{p_m} - \frac{1}{p_M}\right)(p_M - p_m) = Q \cdot \frac{1}{\ln b} \cdot \frac{(p_M - p_m)^2}{p_M p_m}$,
hence the estimate (5.2) is proved.
Consider the Shannon entropy
(5.3) $H(X) := H_e(X) = \sum_{i=1}^r p_i \ln \frac{1}{p_i}$
and Rényi's entropy of order $\alpha$ $(\alpha \in (0, \infty) \setminus \{1\})$
(5.4) $H_{[\alpha]}(X) := \frac{1}{1 - \alpha} \ln\left(\sum_{i=1}^r p_i^\alpha\right)$.
Using the classical Jensen's discrete inequality for convex mappings, i.e.,
(5.5) $f\left(\sum_{i=1}^r p_i x_i\right) \le \sum_{i=1}^r p_i f(x_i)$,
where $f : I \subseteq \mathbb{R} \to \mathbb{R}$ is a convex mapping on $I$, $x_i \in I$ $(i = 1, \ldots, r)$ and $(p_i)_{i=1,\ldots,r}$ is a probability distribution, for the convex mapping $f(x) = -\ln x$ we have
(5.6) $\ln\left(\sum_{i=1}^r p_i x_i\right) \ge \sum_{i=1}^r p_i \ln x_i$.
Choose $x_i = p_i^{\alpha-1}$ $(i = 1, \ldots, r)$ in (5.6) to obtain
$\ln\left(\sum_{i=1}^r p_i^\alpha\right) \ge (\alpha - 1)\sum_{i=1}^r p_i \ln p_i$,
which is equivalent to
$(1 - \alpha)\left[H_{[\alpha]}(X) - H(X)\right] \ge 0$.
Now, if $\alpha \in (0, 1)$, then $H_{[\alpha]}(X) \ge H(X)$, and if $\alpha > 1$ then $H_{[\alpha]}(X) \le H(X)$. Equality holds iff $(p_i)_{i=1,\ldots,r}$ is a uniform distribution, and this fact follows by the strict convexity of $-\ln(\cdot)$. This inequality also follows as a special case of the following well known fact: $H_{[\alpha]}(X)$ is a nonincreasing function of $\alpha$. See for example [26] or [22].
Theorem 5.3. Under the above assumptions, given that $p_m = \min_{i=1,\ldots,r} p_i$ and $p_M = \max_{i=1,\ldots,r} p_i$, we have the inequality
(5.7) $0 \le (1 - \alpha)\left[H_{[\alpha]}(X) - H(X)\right] \le Q \cdot \frac{\left(p_M^{\alpha-1} - p_m^{\alpha-1}\right)^2}{p_M^{\alpha-1} p_m^{\alpha-1}}$,
for all $\alpha \in (0, 1) \cup (1, \infty)$.
Proof. If $\alpha \in (0, 1)$, then
$x_i := p_i^{\alpha-1} \in \left[p_M^{\alpha-1}, p_m^{\alpha-1}\right]$, for $i \in \{1, \ldots, r\}$,
and if $\alpha \in (1, \infty)$, then
$x_i = p_i^{\alpha-1} \in \left[p_m^{\alpha-1}, p_M^{\alpha-1}\right]$, for $i \in \{1, \ldots, r\}$.
Applying Theorem 2.3 for $x_i := p_i^{\alpha-1}$ and $f(x) = -\ln x$, and taking into account that $f'(x) = -\frac{1}{x}$, we obtain
$(1 - \alpha)\left[H_{[\alpha]}(X) - H(X)\right] \le \begin{cases} Q\left(p_m^{\alpha-1} - p_M^{\alpha-1}\right)\left(-\dfrac{1}{p_m^{\alpha-1}} + \dfrac{1}{p_M^{\alpha-1}}\right) & \text{if } \alpha \in (0, 1), \\[2mm] Q\left(p_M^{\alpha-1} - p_m^{\alpha-1}\right)\left(-\dfrac{1}{p_M^{\alpha-1}} + \dfrac{1}{p_m^{\alpha-1}}\right) & \text{if } \alpha \in (1, \infty), \end{cases}$
$= \begin{cases} Q \cdot \dfrac{\left(p_m^{\alpha-1} - p_M^{\alpha-1}\right)^2}{p_m^{\alpha-1} p_M^{\alpha-1}} & \text{if } \alpha \in (0, 1), \\[2mm] Q \cdot \dfrac{\left(p_M^{\alpha-1} - p_m^{\alpha-1}\right)^2}{p_M^{\alpha-1} p_m^{\alpha-1}} & \text{if } \alpha \in (1, \infty), \end{cases}$
$= Q \cdot \frac{\left(p_M^{\alpha-1} - p_m^{\alpha-1}\right)^2}{p_M^{\alpha-1} p_m^{\alpha-1}}$
for all $\alpha \in (0, 1) \cup (1, \infty)$, and the theorem is proved.
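A numerical check of (5.7) over several values of $\alpha$ (the grid of $\alpha$ values and the random distributions are illustrative):

```python
import math
import random
from itertools import combinations

def shannon(p):
    return -sum(pi * math.log(pi) for pi in p)

def renyi(p, alpha):
    return math.log(sum(pi ** alpha for pi in p)) / (1.0 - alpha)

def Q_of(p):
    s = min((sum(p[i] for i in S)
             for k in range(len(p) + 1) for S in combinations(range(len(p)), k)),
            key=lambda t: abs(t - 0.5))
    return s * (1.0 - s)

random.seed(10)
for alpha in (0.3, 0.7, 1.5, 3.0):
    for _ in range(200):
        r = random.randint(2, 7)
        w = [random.uniform(0.1, 1.0) for _ in range(r)]
        p = [wi / sum(w) for wi in w]
        lhs = (1.0 - alpha) * (renyi(p, alpha) - shannon(p))
        a1, a2 = max(p) ** (alpha - 1.0), min(p) ** (alpha - 1.0)
        assert -1e-10 <= lhs <= Q_of(p) * (a1 - a2) ** 2 / (a1 * a2) + 1e-10
print("(5.7) verified on random data")
```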
Using a similar argument to the one in Theorem 5.3, we can state the following direct application of Theorem 2.3.
Theorem 5.4. Let $(p_i)_{i=1,\ldots,r}$ be as in Theorem 5.3. Then we have the inequality
(5.8) $0 \le (1 - \alpha) H_{[\alpha]}(X) - \ln r - \alpha \ln G_r(p) \le Q \cdot \frac{\left(p_M^{\alpha-1} - p_m^{\alpha-1}\right)^2}{p_M^{\alpha-1} p_m^{\alpha-1}}$,
for all $\alpha \in (0, 1) \cup (1, \infty)$.
Remark 5.5. The above results improve the corresponding results from [5] and [4], with the constant $Q$, which is less than $\frac{1}{4}$.
Acknowledgement. The authors would like to thank the anonymous referee for valuable comments and for the references [26] and [22].
R EFERENCES
[1] A. RÉNYI, On measures of entropy and information, Proc. Fourth Berkeley Symp. Math. Statist. Prob., 1 (1961), 547–561, Univ. of California Press, Berkeley.
[2] D. ANDRICA AND C. BADEA, Grüss’ inequality for positive linear functionals, Periodica Math.
Hung., 19(2) (1988), 155–167.
[3] G.T. CARGO AND O. SHISHA, A metric space connected with general means, J. Approx. Th., 2
(1969), 207–222.
[4] S.S. DRAGOMIR, A converse of the Jensen inequality for convex mappings of several variables
and applications. (Electronic Preprint:
http://matilda.vu.edu.au/~rgmia/InfTheory/ConverseJensen.dvi)
[5] S.S. DRAGOMIR, A converse result for Jensen’s discrete inequality via Grüss’ inequality and
applications in information theory, Analele Univ. Oradea, Fasc. Math., 7 (1999-2000), 178–189.
[6] S.S. DRAGOMIR, A further improvement of Jensen’s inequality, Tamkang J. Math., 25(1) (1994),
29–36.
[7] S.S. DRAGOMIR, A generalisation of Grüss’s inequality in inner product spaces and applications,
J. Math. Anal. Appl., 237 (1999), 74–82.
[8] S.S. DRAGOMIR, A new improvement of Jensen’s inequality, Indian J. Pure Appl. Math., 26(10)
(1995), 959–968.
[9] S.S. DRAGOMIR, An improvement of Jensen’s inequality, Bull. Math. Soc. Sci. Math. Romania.,
34 (1990), 291–296.
[10] S.S. DRAGOMIR, On some refinements of Jensen’s inequality and applications, Utilitas Math., 43
(1993), 235–243.
[11] S.S. DRAGOMIR, Some refinements of Jensen’s inequality, J. Math. Anal. Appl., 168 (1992), 518–
522.
[12] S.S. DRAGOMIR, Some refinements of Ky Fan’s inequality, J. Math. Anal. Appl., 163 (1992),
317–321.
[13] S.S. DRAGOMIR, Two mappings in connection with Jensen’s inequality, Extracta Math., 8(2-3)
(1993), 102–105.
[14] S.S. DRAGOMIR AND S. FITZPATRICK, s-Orlicz convex functions in linear spaces and Jensen's discrete inequality, J. Math. Anal. Appl., 210 (1997), 419–439.
[15] S.S. DRAGOMIR AND C.J. GOH, A counterpart of Jensen’s discrete inequality for differentiable
convex mappings and applications in Information Theory, Math. Comput. Modelling, 24(2) (1996),
1–11.
[16] S.S. DRAGOMIR AND C.J. GOH, Some bounds on entropy measures in information theory, Appl.
Math. Lett., 10(3) (1997), 23–28.
[17] S.S. DRAGOMIR AND C.J. GOH, Some counterpart inequalities for a functional associated with
Jensen’s inequality, J. Inequal. Appl., 1 (1997), 311–325.
[18] S.S. DRAGOMIR AND N.M. IONESCU, Some Converse of Jensen’s inequality and applications,
Anal. Num. Theor. Approx. (Cluj-Napoca), 23 (1994), 71–78.
[19] S.S. DRAGOMIR, C.E.M. PEARCE AND J. E. PEČARIĆ, New inequalities for logarithmic map
and their applications for entropy and mutual information, Kyungpook Math. J., in press.
[20] S.S. DRAGOMIR, C.E.M. PEARCE AND J. E. PEČARIĆ, On Jensen’s and related inequalities for
isotonic sublinear functionals, Acta Sci. Math., (Szeged), 61 (1995), 373–382.
[21] S.S. DRAGOMIR, J. E. PEČARIĆ AND L.E. PERSSON, Properties of some functionals related to
Jensen’s inequality, Acta Math. Hungarica, 20(1-2) (1998), 129–143.
[22] T. KOSKI AND L.E. PERSSON, Some properties of generalized entropy with applications to compression of data, J. of Int. Sciences, 62 (1992), 103–132.
[23] M. MATIĆ, Jensen’s Inequality and Applications in Information Theory, (in Croatian), Ph.D. Dissertation, Split, Croatia, 1998.
[24] R.J. McELIECE, The Theory of Information and Coding, Addison Wesley Publishing Company,
Reading, 1977.
[25] J. E. PEČARIĆ, F. PROSCHAN AND Y.L. TONG, Convex Functions, Partial Orderings and Statistical Applications, Academic Press, 1992.
[26] J. PEETRE AND L.E. PERSSON, A general Beckenbach’s inequality with applications, In: Function Spaces, Differential Operators and Nonlinear Analysis, Pitman Research Notes in Math., 211
(1989), 125–139.