Z-OAXN: An Approach to Reducing the
Pixel Changes in LSB-Based Image
Steganography
Abstract
Image steganography is a way of hiding information in image files. LSB is one of the
most commonly used algorithms in this field; it embeds data in the least significant bits
of the bytes representing the visual properties of pixels in the image file. This paper
introduces a reversible logic function called the Zolfaghari function and exploits the
properties of this function to propose an approach called Z-OAXN, which modifies the
traditional LSB algorithm in order to reduce the average pixel change probability.
Analytical modeling as well as experimental results are used to evaluate the approach,
and both demonstrate that it can reduce the average pixel change probability by 64 to
100 percent compared to the traditional LSB technique.
1 Introduction and Basic Concepts
Steganography is a technique in which one type of data is hidden in another. For example,
in image steganography, any type of data can be hidden in an image file. One of the most
commonly used algorithms for image steganography is the Least Significant Bit (LSB)
algorithm. LSB uses the least significant bits of the bytes representing the pixels of the
image in a bitmap file to embed the bits of the data to be hidden. For example, suppose
that a short text message like “Hello” is to be hidden in a 24-bit bitmap image file. The
ASCII code for the character H is 01001000; this code is 01100101 for e, 01101100
for l and 01101111 for o. Thus, the message can be shown as
0100100001100101011011000110110001101111 in the form of a bit stream. The bit
stream is accommodated in the image file as shown by figure 1.
Figure 1: The message “Hello” embedded in a 24-bit bit map file by LSB
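The embedding just described can be sketched in a few lines of Python. This is our own illustrative model, not code from the paper: it works on a flat list of cover bytes rather than a parsed bitmap file, and the function names are ours.

```python
def to_bits(message):
    """Convert an ASCII message to a list of bits, most significant bit first."""
    return [(ord(ch) >> k) & 1 for ch in message for k in range(7, -1, -1)]

def lsb_embed(cover_bytes, bits):
    """Hide each bit in the least significant bit of one cover byte."""
    if len(bits) > len(cover_bytes):
        raise ValueError("cover is too small for the message")
    stego = list(cover_bytes)
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | b  # clear the LSB, then write the bit
    return stego

def lsb_extract(stego_bytes, n_bits):
    """Read the hidden bits back out of the LSBs."""
    return [stego_bytes[i] & 1 for i in range(n_bits)]
```

For “Hello”, `to_bits` returns exactly the 40-bit stream quoted above, and each modified byte differs from its cover byte by at most 1.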
Suppose that the traditional LSB algorithm is applied to a 24-bit bitmap file to hide a
given stream of bits. If we designate the probability that a pixel is changed after
embedding the data by $P_{pc}(LSB)$ and the complementary probability (the probability that
a pixel remains unchanged) by $P_{pnc}(LSB)$, we can interpret $P_{pnc}(LSB)$ as
the probability that none of the LSBs of the 3 bytes representing a pixel is changed, that
is, each of the 3 LSBs is already the same as the corresponding bit of the data to be hidden.
Thus, $P_{pnc}(LSB)$ can be given by equation 1.

$$P_{pnc}(LSB) = \frac{1}{2} \cdot \frac{1}{2} \cdot \frac{1}{2} = \frac{1}{8} \quad \text{(Equation 1)}$$

Accordingly, $P_{pc}$ can be calculated as follows:

$$P_{pc}(LSB) = 1 - P_{pnc}(LSB) = \frac{7}{8} \quad \text{(Equation 2)}$$
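Equations 1 and 2 can be checked by exhaustive enumeration. The sketch below is our own check, assuming (as the equations do) that the three cover LSBs and the three message bits are independent fair bits.

```python
from itertools import product

def lsb_pixel_change_probability():
    """Exact probability that plain LSB embedding changes a 24-bit pixel,
    assuming the 3 cover LSBs and the 3 message bits are independent fair bits."""
    changed = total = 0
    for lsbs in product((0, 1), repeat=3):      # current LSBs of the pixel
        for data in product((0, 1), repeat=3):  # the 3 bits to be embedded
            total += 1
            changed += lsbs != data             # any mismatch alters the pixel
    return changed / total
```

The enumeration yields 7/8, matching equation 2.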
In this paper, we refer to the average probability that a pixel is changed as the Average
Pixel Change Probability, or APCP for short. Our proposed approach to reducing APCP
is called Z-OAXN. We use 24-bit bitmap pictures to explain the approach, but it is
applicable, with minor changes, to some other picture formats, especially those that use
lossless compression, such as GIF. The main idea behind the Z-OAXN approach is to use
a reversible function called the Zolfaghari function to replace the LSBs of the bytes
representing the pixels of the image. In order to explain the Z-OAXN approach, we first
present some basic concepts and definitions.
Definition: A gate (or, more generally, a circuit) is reversible if it implements a bijective
logic function [56]. A bijective function is one that is both surjective and injective. A
reversible function has identical numbers of input and output lines, so its output can be
considered a permutation of its input patterns. The reversible gates used in this paper
leave their inputs unchanged except for the last input, whose change depends on the
values of the other inputs (and possibly the last input itself). Bijective functions
(permutations) can be denoted with different notations, one of which is the cycle
notation [56]. In this notation, disjoint cycles are used to show the permutation. For
example, a function that swaps 000 (0) with 001 (1) and also swaps 110 (6) with 111 (7)
between the input and the output is denoted by (0, 1)(6, 7).
Definition: An m-CNOT gate is a gate with m+1 inputs (and the same number of
outputs) which transfers its first m inputs to the output without any changes and
complements the last input if all of the first m inputs are equal to 1; otherwise the last
output is left unchanged like the others. An m-CNOT gate can be implemented by AND
and XOR functions as figure 2 shows.
Figure 2: An implementation of a CNOT gate
It can be observed from figure 2 that the last output of an m-CNOT gate whose inputs are
$I_1, I_2, \ldots, I_{m+1}$ can be expressed as $(I_1 \cdot I_2 \cdots I_m) \oplus I_{m+1}$.
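The m-CNOT behavior described above can be modeled directly; the few lines below are our own illustrative sketch of the gate, not production code.

```python
def m_cnot(lines):
    """m-CNOT (generalized Toffoli): the first m lines pass through unchanged;
    the last line is complemented iff every control line carries a 1."""
    *controls, target = lines
    if all(bit == 1 for bit in controls):
        target ^= 1
    return controls + [target]
```

Applying the gate twice restores the input, which is exactly the reversibility exploited later in the paper.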
Definition: An m-input OAXN (OR-AND-XNOR) function is a function whose last
output is 1 if all the inputs have identical values and is 0 otherwise. Other outputs are
equal to their corresponding inputs. It is obvious that this function gives 1 as its last
output in only two cases. The first case is when all the inputs are equal to 1. This is the
only case in which the AND function of the m inputs will be equal to 1. The second case
is when all the inputs have 0 values. The latter case is the only case in which the OR
function of the inputs can be 0 or equally the NOR function is 1. Therefore the last output
of an m-input OAXN function can be realized by an m-input AND function, an m-input
NOR function and a two-input OR function as figure 3 shows.
Figure 3: The last output of an m-input OAXN function
But for simplicity in demonstrating the OAXN function in terms of well-known
reversible gates, we propose another implementation for it. This implementation is
shown in figure 4.
Figure 4: The XNOR Implementation of an m-input OAXN function
In figure 4, the output of the XNOR gate is 1 only if the OR gate and the AND gate have
identical outputs and this is possible only in two cases. The first case is when the AND
and the OR both are 1, which requires all the inputs to be 1. The second case is when
both AND and OR produce 0 outputs, which requires all the inputs to be 0. It is obvious
that the implementation shown in figure 4 is equivalent to the one demonstrated in figure
3 from the output point of view. The name OAXN (OR-AND-XNOR) has been derived
from the idea behind the implementation shown in figure 4.
For consistency with previous works, we can express the OAXN function in terms of NOT
and CNOT functions. To do this, note that if we denote the last output of a CNOT
gate by $CNOT_m$ and the last output of an OAXN function by $OAXN_m$,
we will have:

$$OAXN_m(I_1, I_2, \ldots, I_{m+1}) = \left((I_1 \cdot I_2 \cdots I_m) \odot (I_1 + I_2 + \cdots + I_m)\right) \oplus I_{m+1} \quad \text{(Equation 3)}$$

Or:

$$OAXN_m(I_1, I_2, \ldots, I_{m+1}) = (\overline{I_1} \cdot \overline{I_2} \cdots \overline{I_m}) \oplus \left((I_1 \cdot I_2 \cdots I_m) \oplus I_{m+1}\right)$$
$$= (\overline{I_1} \cdot \overline{I_2} \cdots \overline{I_m}) \oplus CNOT_m(I_1, I_2, \ldots, I_{m+1})$$
$$= CNOT_m(\overline{I_1}, \overline{I_2}, \ldots, \overline{I_m}, CNOT_m(I_1, I_2, \ldots, I_{m+1})) \quad \text{(Equation 4)}$$

Here, $\odot$ denotes XNOR, $\overline{I_j}$ denotes the complement of $I_j$, and $I_{m+1}$ is the
target line of the reversible realization.
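The equivalence of the figure-3 form (OR of the AND and the NOR) and the figure-4 form (XNOR of the OR and the AND) of the OAXN last output can be checked by enumeration; the sketch below is ours.

```python
from itertools import product

def oaxn_fig3(bits):
    """Figure 3 form: OR of the m-input AND and the m-input NOR."""
    return int(all(bits)) | int(not any(bits))

def oaxn_fig4(bits):
    """Figure 4 form: XNOR of the m-input OR and the m-input AND."""
    return 1 ^ (int(any(bits)) ^ int(all(bits)))

# The two realizations agree on every input pattern for several widths m.
for m in range(1, 7):
    for bits in product((0, 1), repeat=m):
        assert oaxn_fig3(bits) == oaxn_fig4(bits)
```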
Equation 4 shows that the OAXN function can be implemented by the NOT and CNOT
gates as shown in figure 5.
Figure 5: Implementing the OAXN function with NOT and CNOT gates
The NOT gates on the output lines of the second CNOT gate in figure 5 are necessary
because the OAXN function should not change its inputs except for the last one.
Definition: An m-Zolfaghari function is a function with m+1 inputs and m+1 outputs
which returns the complement of the last input as its last output if all its first m inputs
have identical values and returns its last input unchanged in any other case. Such a
function does not change its other inputs at all. The last output of an m-Zolfaghari
function can be easily implemented by an OAXN between the first m inputs and an XOR
function between the last input and the output of the OAXN function. Figure 6 shows
such an implementation.
Figure 6: The last output of an m-Zolfaghari function
An m-Zolfaghari function is a permutation that can be designated in the cycle notation as
$(0, 1)(2^{m+1}-2,\ 2^{m+1}-1)$, since it swaps the 00…00 pattern with 00…01 and 11…10
with 11…11 between the input and the output.
According to figure 6, an m-Zolfaghari function can be implemented using an OAXN
function and a CNOT gate as shown in figure 7. The reasoning is similar to that presented
for the case of the OAXN function.
Figure 7: Implementing the Zolfaghari function using the OAXN and CNOT
If we replace the OAXN function in figure 7 by the implementation given for it in figure
5, we will observe that the Zolfaghari function can be implemented using NOT and
CNOT functions. Such an implementation has been shown in figure 8.
Figure 8: The Zolfaghari function in terms of NOT and CNOT
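A direct Python model of the m-Zolfaghari function (our own sketch) makes its cycle structure easy to verify.

```python
def zolfaghari(lines):
    """m-Zolfaghari function on m+1 lines: the last line is complemented
    iff the first m lines all carry the same value."""
    *first, last = lines
    if all(bit == first[0] for bit in first):
        last ^= 1
    return first + [last]
```

Reading each (m+1)-bit pattern as an integer, with the last input as the least significant bit, the function is exactly the permutation (0, 1)(2^(m+1)−2, 2^(m+1)−1): for m = 3 it swaps 0 with 1 and 14 with 15 and fixes every other pattern.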
The Zolfaghari function has two important properties. The first is its reversibility. As
figure 8 shows, the Zolfaghari function is reversible, because it can be implemented
using the reversible NOT and CNOT functions. This is important because it allows the
function to be exploited in our proposed steganographic approach; in fact, this
reversibility allows the hidden data to be revealed when required. The second important
property of the Zolfaghari function is that it changes its last input with a relatively low
probability. This property is important in steganography as well, because it helps reduce
the changes imposed on the cover image and makes it more difficult to discover that
some secret data has been hidden in it. The probability that the last input is changed
by an m-Zolfaghari function is obviously equal to the probability that the output of
the OAXN function of the first m inputs is 1. Since the OAXN function is 1 in only 2
of the $2^m$ possible input patterns, this probability can simply be calculated as follows.

$$P_{lc}(m) = \frac{2}{2^m} = \frac{1}{2^{m-1}} \quad \text{(Equation 5)}$$
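Equation 5 can be confirmed by counting, for each m, the input patterns that make the function flip its last input; this helper is our own illustration.

```python
from fractions import Fraction
from itertools import product

def last_input_change_probability(m):
    """Fraction of the 2^m patterns of the first m inputs that make an
    m-Zolfaghari function complement its last input (all inputs identical)."""
    flips = sum(all(b == bits[0] for b in bits)
                for bits in product((0, 1), repeat=m))
    return Fraction(flips, 2 ** m)
```

Only the all-zeros and all-ones patterns cause a flip, so the count is always 2 and the probability is 1/2^(m−1), as equation 5 states.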
The rest of this paper is organized as follows. Section 2 discusses related works,
Section 3 introduces the Z-OAXN approach, Section 4 evaluates the proposed approach,
and Section 5 is dedicated to conclusions and future work.
2 Related Works
There has been a lot of research on steganography and steganalysis in recent years.
Among the relevant topics on which large numbers of research works have been done, we
can refer to developing steganography methods [19,32,33,38], developing
techniques for steganography in video, audio, etc. [8], applying statistical methods to
steganography [5,35], benchmarking and performance evaluation of steganographic
techniques [4,15,17,18,41], presenting techniques to choose among steganographic
algorithms [27], presenting techniques for choosing proper host and cover files [29],
developing steganalysis methods [20,22,23,39,47], applying statistical methods to
steganalysis [28], applying artificial intelligence to steganalysis [6,16,24] and developing
steganographic techniques that resist steganalysis [3,7]. The works most relevant to this
paper, however, are those that analyze, optimize or introduce new variants of the LSB
algorithm, or find new applications for it. In the following, we discuss some of these
works.
Chan and Cheng [50] proposed an optimization method which can augment the simple
LSB algorithm and enhance the quality of the stego image with minor computational
complexity and without any visually sensible changes in the cover image. They
analytically modeled the worst case for their proposed method from the point of view of
the mean square error between the cover image and the stego image.
Dabeer, et al [48, 54, 55], proposed an approach that exploits hypothesis testing theory in
the steganalysis of images containing data hidden by the LSB algorithm. Their approach reduces
the problem of composite hypothesis testing to a simple hypothesis testing problem for an
image with a known Probability Mass Function (PMF). They used images with known
PMFs to obtain tests based on estimation of the host PMF and showed that their tests
have superior self calibration and receiver operating characteristics compared with
previously known tests.
Goljan [51] presented a modular method for estimating the length of a bit stream
embedded in randomly distributed pixels of an image using the LSB algorithm. They also
refined their approach to produce more accurate results for different types of natural
images. They showed that their proposed approach has the advantage that it fits well to
statistical image models.
Celik, et al [40], modified the traditional LSB method to obtain a reversible technique for
embedding data. Their approach makes it possible to fully recover the host data when
recovering the embedded data. The main idea behind their technique is transmitting parts
of the host data that are susceptible to embedding distortion along with the hidden data
after compression.
Cvejic and Seppänen [45, 52, 53] proposed an LSB audio watermarking approach which
decreases the distortion imposed on the host audio. Their approach achieves higher
robustness against noise by embedding the watermark bits in higher LSB layers.
They showed through listening tests that their approach also improves the perceptual
quality of the watermarked audio.
Ker [46, 49, 54, 37] argued that LSB matching is harder to discover than the LSB
replacement method and therefore proposed modified variants of this method for
embedding data in gray-scale and color bitmaps. He argued that the Histogram
Characteristic Function (HCF) introduced by Harmsen [57] is effective for color images
but ineffective on gray-scale images. He used two techniques to make the HCF more
applicable: the first is calibrating the output using a down-sampled image, and the
second is exploiting the adjacency histogram instead of the usual histogram. He
demonstrated that his approach outperforms the traditional LSB matching detection
technique, especially in cases where the cover image is stored in JPEG files.
Ker [42] combined the structural and combinatorial descriptors exploited by previous
frameworks to introduce a general framework for the steganalysis of simple LSB
replacement in image files, which can result in more powerful detection algorithms. He
also introduced and studied a novel descriptor and showed through experiments that this
descriptor can outperform previously known ones.
Wu, et al [34], introduced a new steganographic method which combines the ideas of
LSB replacement and Pixel Value Differencing (PVD) in order to obtain high embedding
capacity and minimize the difference between the cover image and the stego image. Their
method computes the difference between adjacent pixels to distinguish smooth areas from
edge areas. It then hides the data in smooth areas using the PVD technique and in the
edge areas through the use of LSB replacement. They showed that the security level of
the stego image produced by their method is similar to that of the stego images produced
by pure PVD method, but its hiding capacity shows notable improvement over PVD.
Draper, et al [43], proposed a statistical approach based on Probability Mass Functions
(PMFs) and frequency counts of pixel intensities for revealing messages hidden through
LSB replacement. They generalized the tests proposed by Westfeld and Pfitzmann [62]
and Dabeer, et al [48,55], by dropping the assumption that pixel intensities are
independent, identically distributed random values and instead considering the PMFs of
neighboring pixel intensities. They evaluated their test method and compared it to the RS
test suggested by Fridrich, et al [60], which does not exploit PMFs. They showed that
their test outperforms its PMF-based predecessors but that the RS test performs better
than their test, and used these results to argue that making use of statistical methods
based on PMFs to reveal messages hidden by LSB replacement is inherently inefficient.
Brisbane, et al [31], argued that the steganographic method proposed by Seppänen,
Makela and Keskinarkaus causes a high level of noise and unreasonably decreases the
quality of the image, although it exhibits high embedding capacity. They presented a
novel coding structure that causes low amounts of noise in addition to high embedding
capacity. The maximum size of the coding structure was limited, and this enhanced the
capacity-to-distortion ratio. They also proposed an algorithm that helps identify pixels
having high capacity-to-distortion ratios.
Yu, et al [36], proposed a steganalysis method for LSB in cases where the data has been
embedded in L>0 least significant bits instead of being hidden only in the LSB.
proved that the method had a high accuracy in both detecting the hidden message and
estimating its length. They showed the efficiency of their method through the use of
experimental results as well as analytical evaluation.
Lee, et al [44], proposed a steganalysis methodology based on analyzing chains of
horizontally and vertically adjacent pixels. They used experimental results to show that
their methodology outperforms its predecessors and can detect secret data embedded at
low rates.
Raja, et al [30], combined the LSB algorithm with Discrete Cosine Transform (DCT) and
compression techniques to present a novel image steganography method that improves
the security of the embedded payload. Their proposed method uses the LSB algorithm to
embed the payload in the cover image, DCT to transform the resulting stego image from
the spatial domain to the frequency domain, and, in the last step, quantization and
run-length coding algorithms to compress the transformed image. They showed that their
proposed method has the advantage over its predecessors that it allows the transfer of
images with low Bit Error Rates (BERs) without any need for passwords.
Luo, et al [21], introduced an LSB steganography method that cannot be revealed by
steganalysis methods such as RS, SPA and DIH, which are based on sample pair analysis.
The idea behind their method is adopting chaotic and dynamic compensation techniques.
They showed that their technique can cause sample-pair-analysis-based methods to
estimate very small embedding rates even if the actual embedding rate is almost 100%.
Rocha and Goldenstein [25] introduced an approach called Progressive Randomization
(PR), to reveal data embedded in LSBs. Their approach is based on generating different
images from the input image that differ only in the LSBs. Each of the produced images
indicates a different possible stream of bits hidden in the image. The PR approach
increases size and entropy in each step. They demonstrated that their approach can
perform equally well or even better than the previously known techniques.
Liu, et al [26], proposed a method based on feature extraction and pattern recognition to
reveal data hidden using LSB matching. Their proposed method measures picture
complexity using the Generalized Gaussian Distribution (GGD) in the wavelet domain.
They trained and classified their proposed feature sets using several statistical pattern
recognition algorithms. They demonstrated that their proposed method is more
convenient for color images and low-complexity gray-scale images.
Ker [13] stated that highly-sensitive bit replacement detection methods do not perform
very well on images in which the payload is concentrated at the start of the cover image,
and proposed an approach to attack this problem. His proposed approach is in fact a
modified variant of the Weighted Stego (WS) image steganalysis technique introduced by
Fridrich and Goljan. He demonstrated that the modified WS method can exhibit about 10
times more accuracy than the traditional WS technique.
Ker [14] modified standard LSB steganalysis methods to give them the ability to reveal
data hidden in two Least Significant Bits instead of one LSB in an image. He
demonstrated that his approach was much more sensitive and accurate than the only
previously proposed method to reveal data hidden in multiple LSBs. He also showed that
embedding data in two LSBs can be preferable to embedding in only one LSB.
Charles, et al [10], suggested a new method to reveal data embedded by both LSB
replacement and ±1 LSB embedding techniques. Their method is based on a support
vector machine classifier that uses statistics produced by a lossless compression method.
They compared their method with one of the most renowned methods, called the Pairs
method, and showed that both performed equally well, but their proposed approach can
detect ±1 LSB embedding while the Pairs method could not detect data hidden by this
technique.
Luo, et al [9], referred to Sample Pairs Steganalysis (SPA) [58] as one of the most
powerful methods and proposed a variant of the LSB steganography method that resists
against the mentioned steganalysis method. Their proposed technique applies a dynamic
compensation on the stego image that contains a secret data in randomly chosen pixels
selected by a chaotic system. The dynamic compensation causes the SPA steganalysis to
obtain very small estimate values and consequently fail to make correct judgments about
the stego image, even in cases where the embedding rate is close to 100%. They
demonstrated that their steganographic technique can also resist against Different Image
Histogram (DIH) [60, 61] and Regular and Singular groups (RS) [59] steganalysis
methods as well as different variants of SPA and RS methods.
Zhang et al. [11] argued that with previous steganalysis methods, high-frequency noise
is often mistaken for messages hidden in the picture using LSB matching techniques.
They proposed a novel approach to overcome this shortcoming. The main idea behind
their proposed approach is the fact that the intensity histograms of stego images show
smaller absolute differences between local minima/maxima and their neighbors than
those of cover images. They demonstrated that their proposed approach is more
convenient for cases such as never-compressed scans of photographs or video.
Liu, et al [1,12], proposed combining a dynamic evolving neural fuzzy inference system
(DENFIS) with feature selection based on support vector machine recursive feature
elimination (SVMRFE) to introduce an approach based on feature mining and pattern
classification for discovering LSB matching steganography in gray-scale images. They also
proposed a special feature set and argued that this set could outperform previously known
sets of features.
Liu, et al [2], introduced image complexity as a new metric for performance evaluation of
steganalysis methods. They also used feature mining and pattern recognition ideas to
build a steganalysis scheme that can be applied to images that contain secret data
embedded by LSB matching. They demonstrated that, in addition to the embedding rate,
the image complexity influences the significance of features and the performance of the
detection method, and that these dependencies cause their approach to outperform other
previously proposed techniques.
3 The Z-OAXN Approach
The Z-OAXN approach exploits the properties of the Zolfaghari function to improve the
LSB steganography algorithm from the pixel change probability point of view. Before
explaining the way Z-OAXN works, we will need some definitions which are presented
in the following.
Definition: The cover image is the image in which some data is going to be hidden.
Definition: The secret data is the data to be hidden in the cover image.
Definition: The change domain is the set of pixels in the cover image which are selected
to accommodate the secret data. These pixels may be adjacent, randomly distributed over
the cover image, or selected according to some predetermined strategy.
Definition: The Storage Capacity indicates the number of bits of the secret data that can
be embedded in each byte of the change domain in the cover image file. For example, the
storage capacity of the traditional LSB technique is equal to 1, since it can store one bit of
the secret data in each byte of the change domain in the cover file.
Now we can explain the Z-OAXN technique, which works as follows. First, n-1 triples
of bytes, representing n-1 pixels of the image, are selected in the cover image file. We
designate these triples by $p_1, p_2, \ldots, p_{n-1}$. The Least Significant
Bits (LSBs) of the corresponding bytes in these triples are given to 3 n-Zolfaghari
functions as their first n-1 inputs. Each n-Zolfaghari function has n+1 inputs: the n-th
input of each function is the LSB of the corresponding byte of the n-th pixel, and the
last input is one bit of the secret data. The last output of each n-Zolfaghari function is
then stored in the corresponding LSB of the n-th pixel. This way, 3 bits of the secret data
are stored in the n-th pixel. Figure 9 demonstrates the details of the mentioned
mechanism for n=3. This figure shows the case where the 3 bits to be hidden are 000.
Figure 9: Storing 3 bits of the secret data in the fourth pixel
In figure 9, the values in the squares designate the bits of the secret data. The technique
can also be explained in terms of the Zolfaghari function as figure 10 shows.
Figure 10: The Z-OAXN technique explained in terms of the Zolfaghari function
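The per-pixel embedding step just described can be sketched as follows for n = 3. Pixels are modeled as [R, G, B] byte triples and `oaxn` computes the last output of the OAXN function; the helper names are ours, not from the paper.

```python
def oaxn(bits):
    """Last output of the OAXN function: 1 iff all inputs are identical."""
    return int(all(b == bits[0] for b in bits))

def embed_three_bits(pixels, target, secret_bits, n=3):
    """Hide 3 secret bits in pixel `target` of `pixels` (a list of [R, G, B]
    byte triples): one n-Zolfaghari function per color channel, fed with the
    LSBs of the n-1 preceding pixels, the target's own old LSB, and the bit."""
    for c in range(3):
        window = [pixels[j][c] & 1 for j in range(target - n + 1, target)]
        old = pixels[target][c] & 1
        new = secret_bits[c] ^ oaxn(window + [old])   # Zolfaghari last output
        pixels[target][c] = (pixels[target][c] & 0xFE) | new
```

Only the LSBs of the target pixel can change; the n-1 preceding pixels are left untouched.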
Figure 11 shows the LSBs of the first n+1 pixels after the 000 sequence has been
embedded by the Z-OAXN approach with n=3.
Figure 11: The first four pixels after embedding the first three bits of the secret data
As figure 11 shows, the first n-1 pixels remain unchanged. Thus, we can feed the LSBs of
n-1 of them into the Zolfaghari functions, which helps store the next 3 bits of the secret
data. The next 3 bits of the secret data are stored in $p_{n+1}$. This is done by giving the
corresponding LSBs of $p_2, \ldots, p_n$, along with the bits of the secret data, to the
n-Zolfaghari functions and storing the outputs of the functions in the corresponding LSBs
of $p_{n+1}$. This is demonstrated in figure 12.
Figure 12: Storing the next 3 secret bits in the next pixel of the cover image
Embedding the next triples of the bits of the secret data continues this way until triple
n-1 is embedded in $p_{2n-2}$; to do this, the LSBs of the pixels $p_{n-1}, \ldots, p_{2n-3}$ are
used. When this is done, $3(n-1)$ bits of the secret data have been embedded in
$2(n-1)$ pixels of the cover image, while the first n-1 pixels are absolutely
unchanged and the next n-1 pixels may or may not have been changed, depending on
the outputs of the n-Zolfaghari functions. This way, $3(n-1)$ bits of the secret
data can be embedded in each $2(n-1)$ pixels, or equally $3 \cdot 2(n-1)$ bytes, of the
change domain in the cover file. Thus, the storage capacity of the Z-OAXN approach can
be obtained from equation 6.

$$SC_{Z\text{-}OAXN} = \frac{3(n-1)}{3 \cdot 2(n-1)} = \frac{1}{2}\ \frac{bit}{byte} \quad \text{(Equation 6)}$$
Equation 6 shows that the storage capacity of Z-OAXN is one half that of the
traditional LSB technique, regardless of the value chosen for n. This means that the
Z-OAXN approach is useful for applications in which the pixel change probability is more
important than the storage capacity.
We can further clarify the way Z-OAXN works by giving a numerical example. To do
this, we have selected a 16*16 window of a real 24-bit bitmap image and hidden the
message “Z-OAXN” in it using the Z-OAXN approach with n=3.
The changes made to the original image by the Z-OAXN approach are visually difficult
to distinguish, but we have not included the original and modified images here, because
their small size gives them low visual quality.
The string “Z-OAXN” is converted by the ASCII code to the bit stream “010 110 100 100
010 101 001 111 010 000 010 101 100 001 001 110”. The length of this bit stream is
equal to 48. Each pixel includes 3 LSBs in the 24-bit bitmap format, and the storage
capacity of the Z-OAXN approach is equal to $\frac{1}{2}$ bit per byte. Therefore, the number of
pixels required to embed the mentioned message is $\frac{48}{3 \cdot \frac{1}{2}} = 32$. The 32
pixels have been selected from the beginning of the image, just after the header.
Table 1 shows the contents of the pixels before embedding the message.

Table 1: The contents of the pixels before embedding the message

Pixel  Content    Pixel  Content    Pixel  Content    Pixel  Content
1      424E58     9      5C685C     17     4C5961     25     586359
2      535D64     10     596957     18     525D61     26     576756
3      4E575B     11     5B6C57     19     525B5E     27     586A59
4      515959     12     5D6D5B     20     515B5B     28     5C6D5F
5      5C6360     13     5D6D5C     21     555E5B     29     5C6D60
6      556663     14     5B695D     22     59625F     30     596860
7      59605B     15     5A675F     23     5B655F     31     586660
8      586059     16     5A6562     24     5A655D     32     5A6765
In table 1, the columns entitled “Pixel” show the numbers of pixels and the columns
entitled “Content” show the contents of the pixels.
Table 2 gives the contents of the first 32 pixels after hiding the message in the image.

Table 2: The contents of the pixels after embedding the message

Pixel  Content    Pixel  Content    Pixel  Content    Pixel  Content
1      424E58     9      5C685C     17     4C5961     25     586359
2      535D64     10     596957     18     525D61     26     576756
3      4E575B     11     5A6D56     19     535A5E     27     596B58
4      515858     12     5D6D5B     20     505A5B     28     5C6D5E
5      5C6360     13     5D6D5C     21     555E5B     29     5C6D60
6      556663     14     5B695D     22     59625F     30     596860
7      59605A     15     5B675F     23     5B645F     31     596660
8      586158     16     5A6462     24     5A655C     32     5A6665
In table 2, the new contents of the changed pixels differ from their counterparts in
table 1. This table shows that $\frac{15}{32} \approx 47\%$ of the 32 pixels have been changed. As we will see in
section 4, the expected average pixel change probability for n=3 is almost 22%. This
difference is due to the similarity between corresponding LSBs in adjacent pixels; in fact,
this example shows a bad case for the Z-OAXN approach. However, if we consider an
adequate number of images and messages, we will reach results closer to those
given in table 4.
As shown in figure 8, the Zolfaghari function can be expressed in terms of the reversible
NOT and CNOT functions. Thus, this function is reversible, and we can use this
reversibility to restore the hidden data using the original image and the cover
image in which the data has been embedded. The reverse of a CNOT function is a CNOT
function, and the case is similar for a NOT function. Therefore, considering the
implementation shown in figure 8, the reverse of the Zolfaghari function can be
implemented as demonstrated in figure 13.
Figure 13: The reverse of the Zolfaghari function in terms of NOT and CNOT
According to figures 9, 10 and 12, the values of the LSBs of the pixels in the cover image
are changed during the embedding of the secret data as equation 7 shows.

$$P_{\lfloor i/3 \rfloor}[i\%3]_{new} = \Bigg(\Big(\prod_{j=\lfloor i/3 \rfloor - n + 1}^{\lfloor i/3 \rfloor - 1} P_j[i\%3]_{new}\Big) \cdot P_{\lfloor i/3 \rfloor}[i\%3]_{old}\ \oplus\ \Big(\prod_{j=\lfloor i/3 \rfloor - n + 1}^{\lfloor i/3 \rfloor - 1} \overline{P_j[i\%3]_{new}}\Big) \cdot \overline{P_{\lfloor i/3 \rfloor}[i\%3]_{old}}\Bigg) \oplus I_i \quad \text{(Equation 7)}$$
In equation 7, $P_j[k]_{old}$ is the LSB of the k-th byte of the j-th pixel before
embedding the secret data, $P_j[k]_{new}$ is the same LSB after hiding the secret data,
and $I_i$ is the i-th bit of the secret data which has been hidden in the cover
image.
Equation 8, which has been derived from equation 7, gives us the way to reveal each
individual bit of the hidden data.

$$I_i = P_{\lfloor i/3 \rfloor}[i\%3]_{new} \oplus \Bigg(\Big(\prod_{j=\lfloor i/3 \rfloor - n + 1}^{\lfloor i/3 \rfloor - 1} P_j[i\%3]_{new}\Big) \cdot P_{\lfloor i/3 \rfloor}[i\%3]_{old}\ \oplus\ \Big(\prod_{j=\lfloor i/3 \rfloor - n + 1}^{\lfloor i/3 \rfloor - 1} \overline{P_j[i\%3]_{new}}\Big) \cdot \overline{P_{\lfloor i/3 \rfloor}[i\%3]_{old}}\Bigg) \quad \text{(Equation 8)}$$
As equation 8 shows, each bit of the hidden data can be directly revealed without a need
for retrieving previous or next bits. But to do this, the original image and the cover image
in which the data has been embedded should both be available.
Applying equation 8 to the old and new values of the LSBs of the pixels in tables 1 and 2
will reveal the bit sequence “010 110 100 100 010 101 001 111 010 000 010 101 100 001
001 110” which is the ASCII coded equivalent of the message “Z-OAXN”.
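Equations 7 and 8 can be exercised end to end with a small sketch. The block layout (the targets are the last n-1 pixels of each 2(n-1)-pixel block) follows the description in section 3; the helper names are ours, and the code assumes a flat list of [R, G, B] triples rather than a real bitmap file.

```python
def oaxn(bits):
    """Last output of the OAXN function: 1 iff all inputs are identical."""
    return int(all(b == bits[0] for b in bits))

def z_oaxn_embed(cover, bits, n=3):
    """Embed per equation 7: bit i goes to channel i % 3 of a target pixel;
    each block of 2(n-1) pixels keeps its first n-1 pixels untouched and
    uses its last n-1 pixels as targets."""
    block = 2 * (n - 1)
    stego = [p[:] for p in cover]
    for i, bit in enumerate(bits):
        b, k = divmod(i // 3, n - 1)       # block number, triple within block
        t, c = b * block + (n - 1) + k, i % 3
        window = [stego[t - d][c] & 1 for d in range(n - 1, 0, -1)]
        new = bit ^ oaxn(window + [stego[t][c] & 1])
        stego[t][c] = (stego[t][c] & 0xFE) | new
    return stego

def z_oaxn_extract(cover, stego, n_bits, n=3):
    """Recover the bits per equation 8, using the stego image for the window
    LSBs and the original cover image for the target's old LSB."""
    block, bits = 2 * (n - 1), []
    for i in range(n_bits):
        b, k = divmod(i // 3, n - 1)
        t, c = b * block + (n - 1) + k, i % 3
        window = [stego[t - d][c] & 1 for d in range(n - 1, 0, -1)]
        bits.append((stego[t][c] & 1) ^ oaxn(window + [cover[t][c] & 1]))
    return bits
```

As equation 8 requires, extraction needs both the stego image and the original image, and each bit is recovered directly, without retrieving the previous or next bits.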
4 Performance Evaluations
In this section, we evaluate the impact of the Z-OAXN approach on the average pixel
change probability (or APCP for short) through analytical modeling and experimental
methods. Section 4-1 is dedicated to deriving an analytical model in order to predict the
impact of the approach on the pixel change probability and section 4-2 uses experimental
results to verify the correctness of the derived model.
4-1 The Analytical Model
Suppose that the first 3 bits of the secret data are going to be embedded in $p_n$. The
probability that an individual LSB in $p_n$ is changed is equal to the probability that the
OAXN function of the previous n-1 corresponding LSBs and $I_1$ (the first bit of the
secret data) is equal to 1. This probability follows from equation 7 and is given by
equation 9.

$Z\text{-}OAXN_{bc}(n) = 2\cdot\frac{1}{2^n} = \frac{1}{2^{n-1}}$ (Equation 9)
Thus, the probability that an individual LSB in the last pixel does not change can be
calculated as follows:

$Z\text{-}OAXN_{bnc}(n) = 1 - \frac{1}{2^{n-1}} = \frac{2^{n-1}-1}{2^{n-1}}$ (Equation 10)
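Equations 9 and 10 can be checked by exhaustive enumeration: out of the $2^n$ equally likely input patterns, only the all-zeros and all-ones patterns make the OAXN output 1. A small illustrative sketch (the function name is ours):

```python
from itertools import product

def bit_change_prob(n):
    # Enumerate all 2**n patterns of the n-1 previous LSBs plus the
    # secret bit; the LSB changes exactly when all of them are equal.
    patterns = list(product((0, 1), repeat=n))
    hits = sum(1 for bits in patterns if len(set(bits)) == 1)
    return hits / len(patterns)
```

For every n this returns $2/2^n = 1/2^{n-1}$, in agreement with equation 9; the no-change probability of equation 10 is simply its complement.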
The probability that the last pixel is not changed is equal to the probability that all 3
LSBs in the pixel remain unchanged and is obtained as follows:

$Z\text{-}OAXN_{pnc}(n) = \left(\frac{2^{n-1}-1}{2^{n-1}}\right)^3 = \frac{2^{3n-3} - 3\cdot 2^{2n-2} + 3\cdot 2^{n-1} - 1}{2^{3n-3}}$ (Equation 11)
Now we can obtain the probability that the n-th pixel is changed as equation 12 shows.

$Z\text{-}OAXN_{pc}(n) = 1 - \frac{2^{3n-3} - 3\cdot 2^{2n-2} + 3\cdot 2^{n-1} - 1}{2^{3n-3}} = \frac{3\cdot 2^{2n-2} - 3\cdot 2^{n-1} + 1}{2^{3n-3}}$ (Equation 12)

Or:

$Z\text{-}OAXN_{pc}(n) = \frac{3\cdot 2^{2n-2}}{2^{3n-3}} - \frac{3\cdot 2^{n-1}}{2^{3n-3}} + \frac{1}{2^{3n-3}}$ (Equation 13)

Equation 13 can be rewritten in the following form.

$Z\text{-}OAXN_{pc}(n) = \frac{3}{2^{n-1}} - \frac{3}{2^{2n-2}} + \frac{1}{2^{3n-3}}$ (Equation 14)
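The algebra leading from equation 12 to equation 14 is easy to double-check with exact rational arithmetic; a short sketch (function names are ours):

```python
from fractions import Fraction

def pixel_change_prob(n):
    # Equation 14: 3/2^(n-1) - 3/2^(2n-2) + 1/2^(3n-3)
    q = Fraction(1, 2 ** (n - 1))
    return 3 * q - 3 * q ** 2 + q ** 3

def pixel_change_prob_cube(n):
    # Equation 12: one minus the probability that all 3 LSBs survive
    q = Fraction(1, 2 ** (n - 1))
    return 1 - (1 - q) ** 3
```

Both forms agree exactly for every n greater than or equal to 2.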
Now let us analyze the case where the second 3 bits of the secret data are going to be
embedded in $p_{n+1}$. In this case the probability that an individual LSB in $p_{n+1}$ is
changed should be calculated as the sum of two distinct probabilities. The first is the
probability that the LSB is changed while the corresponding LSB in $p_n$ has been
changed too. We designate this probability by $Z\text{-}OAXN_{bc-1}(n+1)$. The
second probability is related to the case where the LSB in $p_{n+1}$ is changed while the
corresponding LSB in $p_n$ has not been changed. The latter probability is denoted
by $Z\text{-}OAXN_{bc-2}(n+1)$. We know that the change in any LSB depends on
the related OAXN function. The first probability corresponds to the case where the OAXN
function related to the corresponding LSB in $p_n$ and its counterpart in $p_{n+1}$ both
generate 1s in their outputs. Therefore, we can calculate the first probability as follows.

$Z\text{-}OAXN_{bc-1}(n+1) = \frac{1}{2^{n-1}}\cdot\frac{1}{2} = \frac{1}{2^n}$ (Equation 15)

In equation 15, $\frac{1}{2^{n-1}}$ is the probability that the corresponding LSB in $p_n$ has been
changed (the immediate result is that all the first n-1 LSBs have had identical values) and
$\frac{1}{2}$ is the probability that the corresponding bit of the secret data has the same value.
The second probability represents the case where the OAXN function related
to $p_n$ has generated a 0 while its corresponding OAXN function in $p_{n+1}$
generates a 1. This is possible only in the case that the corresponding LSBs
in $p_2$, ..., $p_n$ have identical values but either the LSB in $p_1$ or the related bit of the
secret data differs from them in value. Therefore, the second probability can be obtained
from equation 16.

$Z\text{-}OAXN_{bc-2}(n+1) = \frac{2}{2^{n-1}}\cdot\left(1-\frac{1}{4}\right)$ (Equation 16)

On the right side of equation 16, $\frac{2}{2^{n-1}}$ is the probability that the corresponding LSBs
in $p_2$, ..., $p_n$ have identical values and $\left(1-\frac{1}{4}\right)$ is the probability that the secret
bit or the LSB in the first pixel has a different value. This equation can be rewritten as
follows.

$Z\text{-}OAXN_{bc-2}(n+1) = \frac{3}{2^n}$ (Equation 17)
Now we can obtain the probability that an individual LSB in $p_{n+1}$ is changed as shown
by equation 18.

$Z\text{-}OAXN_{bc}(n+1) = Z\text{-}OAXN_{bc-1}(n+1) + Z\text{-}OAXN_{bc-2}(n+1)$ (Equation 18)

Or:

$Z\text{-}OAXN_{bc}(n+1) = \frac{1}{2^n} + \frac{3}{2^n} = \frac{4}{2^n} = \frac{1}{2^{n-2}}$ (Equation 19)
Accordingly, equation 20 gives the probability that an individual LSB in
$p_{n+1}$ remains unchanged.

$Z\text{-}OAXN_{bnc}(n+1) = 1 - \frac{1}{2^{n-2}} = \frac{2^{n-2}-1}{2^{n-2}}$ (Equation 20)

It is obvious that the probability that the whole pixel $p_{n+1}$ remains unchanged can be
calculated as follows.

$Z\text{-}OAXN_{pnc}(n+1) = \frac{(2^{n-2}-1)^3}{2^{3n-6}}$ (Equation 21)
Now we can calculate the probability that the pixel $p_{n+1}$ is changed as demonstrated
by equation 22.

$Z\text{-}OAXN_{pc}(n+1) = 1 - \frac{(2^{n-2}-1)^3}{2^{3n-6}} = \frac{3\cdot 2^{2n-4} - 3\cdot 2^{n-2} + 1}{2^{3n-6}}$ (Equation 22)

Another form of equation 22 can be as follows.

$Z\text{-}OAXN_{pc}(n+1) = \frac{3}{2^{n-2}} - \frac{3}{2^{2n-4}} + \frac{1}{2^{3n-6}}$ (Equation 23)
Through similar reasoning, the change probability for the next pixels is obtained as
follows.

$Z\text{-}OAXN_{pc}(n+i) = \frac{3}{2^{n-1-i}} - \frac{3}{2^{2(n-1-i)}} + \frac{1}{2^{3(n-1-i)}}, \quad i \in [0, n-2]$ (Equation 24)
Among each group of 2n-2 pixels, the first n-1 pixels remain unchanged. Therefore, the
average pixel change probability of the Z-OAXN approach can be calculated as follows.

$Z\text{-}OAXN_{pc} = \frac{\sum_{i=0}^{n-2}\frac{3}{2^{n-1-i}} - \sum_{i=0}^{n-2}\frac{3}{2^{2(n-1-i)}} + \sum_{i=0}^{n-2}\frac{1}{2^{3(n-1-i)}}}{2n-2}$ (Equation 25)
Each of the sums in the numerator of equation 25 is the sum of a geometric progression
with n-1 terms, and the sum of n-1 terms of a geometric progression can be obtained
from equation 26.

$S_{n-1} = \frac{\alpha\cdot(1-d^{n-1})}{1-d}$ (Equation 26)
In equation 26, $\alpha$ is the first term and $d$ is the common ratio of the progression. In
the first progression, we have $\alpha_1 = \frac{3}{2^{n-1}}$ and $d_1 = \frac{1}{2}$. Thus
the first sum can be written as follows.

$\sum_{i=0}^{n-2}\frac{3}{2^{n-1-i}} = \frac{\frac{3}{2^{n-1}}\cdot\left(1-\frac{1}{2^{n-1}}\right)}{1-\frac{1}{2}} = \frac{3\cdot(2^{n-1}-1)}{2^{2n-3}}$ (Equation 27)
In the second progression, we have $\alpha_2 = \frac{3}{2^{2n-2}}$ and $d_2 = \frac{1}{4}$.
Therefore, equation 28 can be used to calculate the second sum.

$\sum_{i=0}^{n-2}\frac{3}{2^{2(n-1-i)}} = \frac{\frac{3}{2^{2n-2}}\cdot\left(1-\frac{1}{2^{2n-2}}\right)}{1-\frac{1}{4}} = \frac{2^{2n-2}-1}{2^{4n-6}}$ (Equation 28)
As for the third progression, we observe that
$\alpha_3 = \frac{1}{2^{3n-3}}$ and $d_3 = \frac{1}{8}$. Thus, we can obtain the third sum from
equation 29.

$\sum_{i=0}^{n-2}\frac{1}{2^{3(n-1-i)}} = \frac{\frac{1}{2^{3n-3}}\cdot\left(1-\frac{1}{2^{3n-3}}\right)}{1-\frac{1}{8}} = \frac{2^{3n-3}-1}{7\cdot 2^{6n-9}}$ (Equation 29)
Thus, the average pixel change probability of the Z-OAXN approach can be obtained
from equation 30.

$Z\text{-}OAXN_{pc} = \frac{\frac{3\cdot(2^{n-1}-1)}{2^{2n-3}} - \frac{2^{2n-2}-1}{2^{4n-6}} + \frac{2^{3n-3}-1}{7\cdot 2^{6n-9}}}{2n-2} = \frac{21\cdot 2^{4n-6}\cdot(2^{n-1}-1) - 7\cdot 2^{2n-3}\cdot(2^{2n-2}-1) + (2^{3n-3}-1)}{7\cdot 2^{6n-9}\cdot(2n-2)} = \frac{21\cdot 2^{5n-7} - 7\cdot 2^{4n-5} - 21\cdot 2^{4n-6} + 2^{3n-3} + 7\cdot 2^{2n-3} - 1}{7\cdot 2^{6n-9}\cdot(2n-2)}$ (Equation 30)
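The combination of the three sums of equations 27-29 into the single fraction of equation 30 can likewise be verified mechanically with exact rational arithmetic; a sketch (function names are ours, formulas transcribed from the equations above):

```python
from fractions import Fraction

def apcp_from_sums(n):
    # Equations 27, 28 and 29, divided by the group size 2n-2 (equation 25)
    s1 = Fraction(3 * (2 ** (n - 1) - 1), 2 ** (2 * n - 3))
    s2 = Fraction(2 ** (2 * n - 2) - 1, 2 ** (4 * n - 6))
    s3 = Fraction(2 ** (3 * n - 3) - 1, 7 * 2 ** (6 * n - 9))
    return (s1 - s2 + s3) / (2 * n - 2)

def apcp_closed_form(n):
    # Equation 30, fully expanded over the common denominator
    num = (21 * 2 ** (5 * n - 7) - 7 * 2 ** (4 * n - 5) - 21 * 2 ** (4 * n - 6)
           + 2 ** (3 * n - 3) + 7 * 2 ** (2 * n - 3) - 1)
    return Fraction(num, 7 * 2 ** (6 * n - 9) * (2 * n - 2))
```

The two functions return identical rationals for every n greater than or equal to 2, confirming the expansion step.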
The derivative of the $Z\text{-}OAXN_{pc}$ function shows that it decreases as n grows.
This trend can be analyzed better through the calculation of the following limit.

$\lim_{n\to\infty} Z\text{-}OAXN_{pc} = \lim_{n\to\infty}\frac{21\cdot 2^{5n-7}}{7\cdot 2^{6n-9}\cdot(2n-2)} = \lim_{n\to\infty}\frac{3}{2^{n-2}\cdot(2n-2)} = 0$ (Equation 31)

Equation 31 shows that when n grows large enough, the average change probability of an
individual pixel in the Z-OAXN approach declines until it approaches zero.
Now let us quantitatively examine the improvement in the average pixel change
probability gained by the Z-OAXN approach. Since the APCP of the traditional LSB
technique is equal to $\frac{7}{8}$, the improvement ratio of this probability can be calculated as
follows.

$I_{pc} = \frac{\frac{7}{8} - Z\text{-}OAXN_{pc}}{\frac{7}{8}} = \frac{7 - 8\cdot Z\text{-}OAXN_{pc}}{7}$ (Equation 32)
Or:

$I_{pc} = \frac{7 - 2^3\cdot Z\text{-}OAXN_{pc}}{7} = 1 - \frac{2^3\cdot Z\text{-}OAXN_{pc}}{7}$ (Equation 33)
According to equation 30, equation 33 can be rewritten as follows after some simple
algebraic operations.

$I_{pc} = 1 - \frac{21\cdot 2^{5n-4} - 7\cdot 2^{4n-2} - 21\cdot 2^{4n-3} + 2^{3n} + 7\cdot 2^{2n} - 8}{49\cdot 2^{6n-9}\cdot(2n-2)}$ (Equation 34)
The importance of equation 34 becomes more obvious when we realize that we will have
$I_{pc} \geq 0.64$ for $n = 2$ (it is obvious that in our approach n must be greater than
or equal to 2) and $I_{pc}$ quickly approaches unity with the growth of n. This trend can be
shown more clearly by the following limit.

$\lim_{n\to\infty} I_{pc} = \lim_{n\to\infty}\left(1 - \frac{21\cdot 2^{5n-4}}{49\cdot 2^{6n-9}\cdot(2n-2)}\right) = \lim_{n\to\infty}\left(1 - \frac{3\cdot 2^{5-n}}{7\cdot(2n-2)}\right) = 1$ (Equation 35)
Equation 35 demonstrates that Z-OAXN can improve the APCP from 64% to nearly 100%
when compared to the traditional LSB algorithm. Table 3 shows the average pixel change
probability obtained from the model for $n \in [2, 20]$.
Table 3: The average pixel change probabilities obtained from the model

n     APCP            Improvement
2     0.3125000000    0.6428571429
3     0.2182617188    0.7505580357
4     0.0987497965    0.8871430897
5     0.0419649482    0.9520400592
6     0.0177703314    0.9796910499
7     0.0076087060    0.9913043360
8     0.0033045799    0.9962233373
9     0.0014553028    0.9983367968
10    0.0006489219    0.9992583749
11    0.0002924919    0.9996657236
12    0.0001330592    0.9998479323
13    0.0000610103    0.9999302739
14    0.0000281643    0.9999678122
15    0.0000130776    0.9999850541
16    0.0000061032    0.9999930249
17    0.0000028610    0.9999967303
18    0.0000013463    0.9999984613
19    0.0000006358    0.9999992734
20    0.0000003012    0.9999996558
In table 3, the first column shows the value of n, the second column shows the average
pixel change probability for each value of n and the third column shows the improvement
over the traditional LSB technique from the average pixel change probability point of
view. As table 3 shows, the greatest possible value for APCP in the Z-OAXN approach is
0.3125, which is obtained for n=2. A comparison to the APCP of the traditional LSB
($\frac{7}{8} = 0.875$) shows that the minimum improvement over LSB will be greater
than 64%, and this improvement quickly approaches 100% with the growth of n.
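The improvement ratio of equation 33 is a one-line computation; for instance, feeding the n=2 value of table 3 into it reproduces the tabulated improvement (a sketch, function name ours):

```python
def improvement(apcp):
    # Equation 33: I_pc = 1 - 8 * APCP / 7, where 7/8 is the APCP
    # of the traditional LSB technique.
    return 1 - 8 * apcp / 7
```

`improvement(0.3125)` evaluates to 9/14, approximately 0.6428571429, matching the first entry of the Improvement column; for an APCP equal to 7/8 the improvement is zero, as expected.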
Figures 14 and 15 show the curves which demonstrate how the APCP varies with the
value of n and how much improvement over LSB can be gained through the use of the
Z-OAXN approach.
Figure 14: The average pixel change probability obtained from the model
In figure 14, the horizontal axis shows the values of n and the vertical axis shows APCP.
Figure 15: The improvement over the traditional LSB
In figure 15, the horizontal axis shows the values of n and the vertical axis shows the
improvement of the APCP over traditional LSB.
4-2 Experimental Results
In this section, we present the APCPs obtained from experiments in order to compare
with those obtained from the analytical model. Section 4-2-1 explains the experimental
methodology and section 4-2-2 gives the obtained results.
4-2-1 Experimental Methodology
To verify the analytical model, 1000 experiments have been done to calculate the APCP
for each $n \in [2, 20]$. For each value of n, 100 secret messages with lengths between
500 and 1000 bytes have each been stored in 10 different cover images with sizes between
500 and 800 Kbytes, and the average percentage of changed pixels has been measured. The
cover images vary in visual properties in addition to size; they have been chosen from
natural scenes, personal photos, paintings, etc.
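The measured quantity is simply the fraction of pixels whose RGB triple differs between the original cover image and the image after embedding. A minimal sketch of that measurement (names ours, pixels modeled as RGB tuples):

```python
def changed_pixel_fraction(cover_pixels, stego_pixels):
    # A pixel counts as changed if any of its three channel bytes differs
    # between the cover image and the image carrying the secret data.
    changed = sum(1 for a, b in zip(cover_pixels, stego_pixels) if a != b)
    return changed / len(cover_pixels)
```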
4-2-2 Results
Table 4 shows the experimentally obtained APCPs along with their relative differences
from their analytically obtained counterparts.

Table 4: The results obtained from experiments

n     APCP            Difference
2     0.3215244414    0.0288782124
3     0.2182993178    0.0001722657
4     0.0985954124    0.0015633864
5     0.0422001758    0.0056053355
6     0.0181132798    0.0192989338
7     0.0076202679    0.0015195609
8     0.0033123614    0.0023547726
9     0.0012815252    0.1194098931
10    0.0006490419    0.0001849362
11    0.0002913697    0.0038366736
12    0.0001330546    0.0000342919
13    0.0000610234    0.0002152021
14    0.0000281384    0.0009211001
15    0.0000158550    0.2123796732
16    0.0000060114    0.0150447919
17    0.0000028529    0.0028150073
18    0.0000013481    0.0013639322
19    0.0000006407    0.0077048549
20    0.0000003013    0.0004866451
Average               0.0223047089
In table 4, the second column shows the APCPs and the third column gives their relative
differences from the corresponding values obtained from the model (given in table 3).
The single cell labeled "Average" shows the average difference ratio between the values
obtained from experiments and those obtained from the model. As shown in table 4, the
average difference ratio is a little more than 0.02, which verifies the correctness of the
model. To observe the similarity between the APCPs presented in table 4 and those shown
in table 3 in a more visual form, we have shown the experimentally obtained APCPs as a
curve in figure 16.
Figure 16: The pixel change probabilities given by experiments
The similarity between the above diagram and the one in figure 14 means that
experimental results verify the correctness of the model with a high precision.
4-3 Disadvantages and Shortcomings
Despite its improved pixel change probability, the Z-OAXN approach has an important
disadvantage from the storage capacity point of view. This approach can store the bits of
the secret data in the LSBs of only n-1 bytes out of each 2n-2 bytes, so its storage
capacity is equal to $\frac{n-1}{2n-2} = \frac{1}{2}$, while the traditional LSB technique
can store in every LSB and has a storage capacity equal to 1. In other words, Z-OAXN
reduces the storage capacity by 50% in contrast to the traditional LSB: it improves the
APCP at the cost of a reduction in the storage capacity, and since the improvement in the
APCP grows with n, the approach works more and more efficiently as we choose larger
values for n.
5 Conclusions and Further Work
This paper proposed and evaluated a novel approach called Z-OAXN that uses the
properties of a reversible logic function called the Zolfaghari function to improve the
traditional LSB from the average pixel change probability point of view. The
improvement gained by this approach varies from 64 to 100 percent. On the other hand,
this approach reduces the storage capacity by 50% compared with the traditional LSB.
This work can be continued by improving the storage capacity, proposing similar
approaches to apply to compressed image formats such as JPEG, proposing schemes for
random distribution of the change domain among the pixels of the cover image,
introducing other reversible functions, etc.