Robust Image Authentication Based on Predictive Lossless Coder
Manoba J P R [1], Prathibha S Nair [2]
[1] M.Tech CSE Student, Department of Computer Science
[2] Assistant Professor, Department of Computer Science
Mohandhas College of Engineering & Technology, Anad
manoba123@gmail.com
Abstract— Image authentication plays a vital role in file sharing systems, where intermediaries in the sharing process may modify the original content. In this study, an attempt is made to identify malicious attacks on the original image by predictive lossless coding. To authenticate the received image, a predictively encoded quantized image is used. This lossless method provides robustness against legitimate variations while detecting illegitimate modifications. Predictive lossless coding encodes the projection efficiently by exploiting the correlation between the projections of the original and received images.
Keywords— Image authentication, Predictive lossless coder
I. INTRODUCTION
Image authentication is becoming very important for certifying data integrity. Digital images are now transmitted over non-secure channels such as the Internet, so images used in fields such as medicine and the military must be protected against attempts to manipulate them, since such manipulations may tamper with the images. To protect the authenticity of multimedia images, various approaches have been proposed. Predictive lossless coding provides effective lossless compression of both photographic and graphics content in images and video, and it provides better tamper resistance. It can operate on a macroblock basis for compatibility with existing image formats, choosing and applying one of the available differential pulse-code modulation (DPCM) modes to individual macroblocks. One survey indicates that more than 50% of popular songs shared in peer-to-peer systems are corrupted, i.e., replaced with noise or with different mixed songs [2]. Distinguishing legitimately encoded versions from maliciously modified ones is important in applications that transmit media content through untrusted intermediaries. Some legitimate adjustments, such as cropping and resizing of an image, are allowed in addition to lossy compression. Users may also be interested in identifying the modified regions. In this paper, the tampered areas are localized, and legitimate encodings are distinguished, using predictive lossless compression. In the earlier approach, the Slepian-Wolf encoder fails to localize tampering. Predictive lossless coding and statistical methods solve the image authentication problem with high tamper resistance.
Section II describes previous approaches to image authentication using distributed source coding. Section III introduces the image authentication system using predictive lossless coding, where the image authentication problem is formulated as a hypothesis testing problem. The original image is projected, quantized, and encoded using predictive lossless coding, a form of distributed source coding. To authenticate images that have undergone editing, such as contrast, brightness, and affine warping adjustments, the authentication decoder learns the editing parameters directly from the target image by decoding the authentication data with a predictive lossless algorithm.
II. RELATED WORKS
Early approaches to image authentication are classified into three groups: watermarking, forensics, and robust hashing. In watermarking, a semi-fragile watermark is embedded into the host signal with minimal distortion, and the user checks authenticity by extracting the watermark from the received content. In digital forensics, the user verifies the authenticity of an image or other media by examining only the received content. A third option is authentication based on robust hashing, which is inspired by cryptographic hashing. A cryptographic hash function takes a block of data of arbitrary length and returns a fixed-length cryptographic hash value; any change to the data changes the hash value. The Slepian-Wolf encoder used in the distributed source coding technique encodes the data or image: the encoder compresses the data, which is then encrypted at the sender side. At the receiver side, the data is decoded (decompressed) and then decrypted using the RSA algorithm.
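The change-sensitive, fixed-length behaviour of a cryptographic hash described above can be sketched with Python's standard hashlib; this is a generic illustration, not the paper's implementation, and the byte strings are toy values:

```python
import hashlib

def digest(data: bytes) -> str:
    """Return a fixed-length (256-bit) hash of arbitrary-length input."""
    return hashlib.sha256(data).hexdigest()

original = b"quantized image projection"
tampered = b"quantized image projectioN"     # a single-byte modification

print(len(digest(original)))                 # always 64 hex characters
print(digest(original) == digest(tampered))  # False: any change alters the hash
```

The same property is what lets the verifier detect tampering without comparing full images: only the short digests need to match.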
III. PROPOSED SYSTEM
Image authentication can be considered a hypothesis testing problem. The authentication data provides information about the original image to the user, who performs authentication by comparing the reference image with the input image. To that end, a two-state channel that models the target image is first described, and the image authentication system using predictive lossless encoding is then presented. The encoding is performed on a predictive lossless DCT-coded image projection. The steps involved in this proposed system are:
Fig 1. a) Original image; b) 16x16 block division.

• Preprocessing is the first step in image authentication. In preprocessing, mean removal and a grayscale transformation are applied to the image; the DCT transform is then applied, followed by an iterative phase comprising thresholding, quantization, dequantization, and the inverse DCT. This version can be correctly decoded with the help of an authentic image as side information.
• The next step is the proposed adaptive scanning, which provides, for each n x n DCT block, a corresponding vector of its coefficients.
• The last step is the application of a modified lossless decoder.
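The preprocessing phase above can be sketched as follows: a minimal pure-Python illustration of mean removal, 2-D DCT, uniform quantization, dequantization, and inverse DCT on a single block. The 8x8 block size, the quantizer step, and the toy input are assumptions for brevity (the paper's projection uses 16x16 blocks), and the adaptive scanning and lossless decoder stages are omitted:

```python
import math

N = 8  # illustration block size; the projection in this paper uses 16x16 blocks

def dct1(v):
    """Orthonormal 1-D DCT-II of a length-N sequence."""
    out = []
    for k in range(N):
        a = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(a * sum(v[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                           for n in range(N)))
    return out

def idct1(v):
    """Inverse (DCT-III) of dct1."""
    out = []
    for n in range(N):
        s = 0.0
        for k in range(N):
            a = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
            s += a * v[k] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
        out.append(s)
    return out

def transpose(m):
    return [list(r) for r in zip(*m)]

def dct2(block):   # separable 2-D DCT: transform rows, then columns
    return transpose([dct1(c) for c in transpose([dct1(r) for r in block])])

def idct2(coef):   # inverse in the same separable fashion
    return transpose([idct1(c) for c in transpose([idct1(r) for r in coef])])

# toy grayscale block; mean removal centres it around zero
block = [[float((3 * i + 5 * j) % 32) for j in range(N)] for i in range(N)]
mean = sum(map(sum, block)) / (N * N)
centered = [[p - mean for p in row] for row in block]

coef = dct2(centered)
step = 4.0                                            # stand-in quantizer step
q = [[round(c / step) for c in row] for row in coef]  # quantization
deq = [[v * step for v in row] for row in q]          # dequantization
recon = idct2(deq)                                    # inverse DCT

err = max(abs(recon[i][j] - centered[i][j]) for i in range(N) for j in range(N))
print(err < step * N)   # quantization error stays bounded
```

Because the transform is orthonormal, the round trip without quantization is exact up to floating-point error; the quantizer is the only lossy step in this sketch.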
This paper uses predictive lossless coding for image authentication. In the earlier approach, the Slepian-Wolf encoder fails to localize tampering and has low tamper resistance. The predictive lossless coder compresses the data without losing any of it and has high tampering resistance. The key idea is to provide a DCT-encoded image projection as authentication data; this version can be correctly decoded with the help of an authentic image as side information.
The original image is projected using a random seed value. The image is then quantized, and predictive lossless DCT compression is applied by passing it through one channel. Through the other channel, the image is encrypted using a private key. At the receiver side, the image is decompressed using the predictive lossless decoder in one channel, and in the other channel it is decrypted using the public key. The images obtained through the two channels are compared using a cryptographic hash function; the tampered region is identified, and authentication is thereby achieved. The following are the steps involved in the image authentication system.
A. Image Projection
In image projection, the original image is divided into 16x16 blocks. Each block is rotated by 0, 90, 180, or 270 degrees, with the rotation chosen by a pseudorandom number generator; a secret key value seeds the generation of the random numbers.
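The block-rotation projection can be sketched as below. This is a hedged pure-Python illustration: the function name, the list-of-rows image representation, and the use of random.Random as the keyed pseudorandom generator are all assumptions, not the paper's implementation:

```python
import random

def project(image, secret_key, n=16):
    """Rotate each n x n block by a pseudorandom multiple of 90 degrees.

    `image` is a list of pixel rows whose dimensions are multiples of n;
    `secret_key` seeds the generator, so a sender and receiver sharing
    the key reproduce exactly the same projection.
    """
    rng = random.Random(secret_key)
    out = [list(row) for row in image]
    for bi in range(0, len(image), n):
        for bj in range(0, len(image[0]), n):
            block = [out[bi + i][bj:bj + n] for i in range(n)]
            for _ in range(rng.randrange(4)):    # 0, 90, 180 or 270 degrees
                block = [list(r) for r in zip(*block[::-1])]  # one 90-degree turn
            for i in range(n):
                out[bi + i][bj:bj + n] = block[i]
    return out

img = [[(i * 64 + j) % 251 for j in range(64)] for i in range(64)]
p1 = project(img, secret_key=42)
p2 = project(img, secret_key=42)
print(p1 == p2)          # True: same key yields the same projection
```

Note that rotation only permutes pixels within each block, so the projection is invertible by anyone who knows the secret key.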
B. DCT Compression
The discrete cosine transform (DCT) is a technique for converting a signal into elementary frequency components and is commonly used in image compression. Here, simple functions are developed to compute the DCT and to compress images. For a color image, each of the three planes (YCbCr) is partitioned into blocks; block sizes of 8x8, 16x16, 32x32, and 64x64 were tested. Each block is transformed by the DCT, which concentrates the greater part of the block's energy in a few representative coefficients. A quantizer with a resolution of 7 bits is enough to keep the quality of the compressed image within tolerable bounds for the different DCT block sizes.
C. Digital Signature
A digital signature is a mathematical approach for demonstrating the authenticity of a digital message or document. A valid digital signature gives the receiver reason to believe that the message was created by an authorized sender, that the sender cannot deny having sent it, and that the message was not altered in transit; digital signatures are therefore very important for detecting tampering. A digital signature scheme consists of three algorithms: a key generation algorithm that selects a private key from a set of possible private keys and outputs that private key together with a corresponding public key; a signing algorithm that produces a signature; and a verification algorithm that, given a message, a public key, and a signature, either accepts or rejects the message's claim to authenticity.
After DCT compression, the authentication data comprises the DCT bit streams, which are the output of a DCT encoder based on LDPC codes, and the digital signature D(Xq, Ks). The digital signature consists of the seed Ks and a cryptographic hash value of Xq, signed with a private key. All the authentication data are generated by a server upon request; each response uses a different seed value Ks, which is given to the decoder as part of the authentication data.
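The structure of D(Xq, Ks), a fresh per-request seed plus a signed hash of the quantized projection, can be sketched as below. This is a hedged stand-in: HMAC with a shared key replaces the asymmetric private/public-key signature (e.g. RSA) that the scheme assumes, and the byte string standing for Xq is a toy value:

```python
import hashlib
import hmac
import os

def make_auth_data(xq: bytes, key: bytes):
    """Return (Ks, signature): a per-request random seed and a signed
    hash of the quantized projection Xq, mirroring D(Xq, Ks)."""
    ks = os.urandom(16)                       # fresh seed for each response
    h = hashlib.sha256(ks + xq).digest()      # cryptographic hash of Ks and Xq
    return ks, hmac.new(key, h, hashlib.sha256).digest()

def verify_auth_data(xq: bytes, ks: bytes, sig: bytes, key: bytes) -> bool:
    """Recompute the hash and check the signature in constant time."""
    h = hashlib.sha256(ks + xq).digest()
    return hmac.compare_digest(sig, hmac.new(key, h, hashlib.sha256).digest())

key = b"demo-signing-key"                     # stands in for the key pair
xq = b"quantized projection bits"
ks, sig = make_auth_data(xq, key)
print(verify_auth_data(xq, ks, sig, key))                 # True
print(verify_auth_data(b"tampered bits!", ks, sig, key))  # False
```

With a real asymmetric scheme, the server would sign with its private key and any receiver could verify with the public key; the hash-then-sign structure is the same.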
D. Decoding: DCT Decompression
The discrete cosine transform converts a signal into elementary frequency components and attempts to decorrelate the image data; after decorrelation, each transform coefficient can be encoded independently without losing compression efficiency.
The image is broken into 8x8 blocks of pixels. Working from left to right and top to bottom, the DCT is applied to each block, and each block is compressed through quantization. The array of compressed blocks that constitutes the image is stored in a drastically reduced amount of space. When desired, the image is reconstructed through decompression, a process that uses the inverse DCT (IDCT). If the decoding fails, the hash value of the reconstructed image projection does not match the signature.
E. Verification
Receiver operating characteristic (ROC) curves for tampering detection with different numbers of quantization bits are obtained by sweeping the decision threshold T in the likelihood ratio test. Plotting the ROC equal error rate against the authentication data size demonstrates that distributed source coding reduces the data size by more than 80% compared to conventional fixed-length coding at an equal error rate of 2%.
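The threshold sweep can be illustrated with the toy sketch below, where per-block distortion scores for authentic and tampered blocks are drawn from assumed Gaussian models; the score distributions, sample sizes, and threshold values are all illustrative, not the paper's data:

```python
import random

def roc_points(authentic_scores, tampered_scores, thresholds):
    """For each decision threshold T, return (false-positive rate,
    true-positive rate): the fraction of authentic and of tampered
    blocks whose score exceeds T."""
    pts = []
    for t in thresholds:
        fpr = sum(s > t for s in authentic_scores) / len(authentic_scores)
        tpr = sum(s > t for s in tampered_scores) / len(tampered_scores)
        pts.append((fpr, tpr))
    return pts

rng = random.Random(0)
authentic = [abs(rng.gauss(0.0, 1.0)) for _ in range(2000)]  # small legitimate distortion
tampered = [abs(rng.gauss(4.0, 1.0)) for _ in range(2000)]   # tampering raises the score

curve = roc_points(authentic, tampered, thresholds=[0.5, 1.0, 2.0, 3.0])
for fpr, tpr in curve:
    print(round(fpr, 3), round(tpr, 3))   # tpr stays well above fpr
```

Sweeping T trades false alarms against missed detections; the equal error rate is the operating point where the two error rates coincide.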
The image authentication system using predictive lossless encoding is represented in Figs. 2 and 3. In Fig. 2, the target image is projected using a random seed value; the image is then quantized, and predictive lossless DCT compression is applied by passing it through one channel. Through the other channel, the image is encrypted using a private key.
In this system, the authentication data consists of a predictive lossless encoded quantized pseudorandom projection of the original image, a random seed, and a signature of the image projection. The target image is modeled as an input of the two-state channel. The user projects the target image using the same projection to yield the side information and tries to decode the DCT bit stream using that side information. If the decoding fails, the hash value of the reconstructed image projection does not match the signature, and the verification decoder declares the image tampered or modified; otherwise, the reconstructed image projection together with the side information is examined using hypothesis testing.
Fig 2. Predictive lossless encoding and encryption at transmitter side.
At the receiver side, the image is decompressed using the predictive lossless decoder in one channel, and in the other channel it is decrypted using the public key. The images obtained through the two channels are compared using a cryptographic hash function; the tampered region is identified, and authentication is thereby achieved. The authentication decoder projects the received image in the same way as at the transmitter side. Finally, the image digest of the transmitter side is compared with the image digest received from the server by decrypting the digital signature D(Xq, Ks). If these two image digests are not identical, the received image is declared inauthentic.
Fig 3. Predictive lossless decoding and decryption at receiver side (block diagram: target image, random projection, predictive lossless decoder, decryption with public key, comparison using a cryptographic hash function).

Fig 3 shows the image authentication process at the receiver side, where the reverse of the encoding and encryption takes place. The user projects the target image using the same image projection that was applied, and dequantized, at the transmitter side. The DCT bit stream is decoded using the side information. If the decoding fails, the hash value of the reconstructed image projection does not match the signature, and the decoder declares the image modified. The image digest obtained from the transmitter side is compared with that of the receiver side, and the tampered regions are detected.

IV. CONCLUSION
This paper presents and investigates a novel image authentication scheme that distinguishes modifications in the image. Lossless coding has a large variety of important applications, such as high-quality digital photography, filmography, and graphics. It also applies to professional-grade video coding, for encoding video frames at the highest possible quality setting, i.e., losslessly. Tampering degradations are captured by a statistical model. The predictive lossless coder compresses the content without losing any of the original data. The DCT decoder is extended using predictive lossless algorithms to address target images that have undergone contrast, brightness, and warping adjustments. The lossless decoder infers the tampered locations and decodes the DCT bit streams by applying the algorithm over a factor graph that represents the relationship among the DCT bit stream, the projections of the original and target images, and the block state.

ACKNOWLEDGMENT
I thank Yao-Chung Lin and David Varodayan for helping with the generation of the reference data and for providing the imagery used in this paper. I also thank my guide, Mrs. Prathibha S Nair, for her support of this work.

V. REFERENCES
[1] Y.-C. Lin, D. Varodayan, and B. Girod, "Image authentication using distributed source coding," IEEE Trans. Image Process., vol. 21, no. 1, Jan. 2012.
[2] J. Liang, R. Kumar, Y. Xi, and K. W. Ross, “Pollution in P2P file sharing
systems,” in Proc. IEEE Infocom, Mar. 2005, vol. 2, pp. 1174–1185.
[3] D. Slepian and J. K. Wolf, "Noiseless coding of correlated information sources," IEEE Trans. Inf. Theory, vol. IT-19, no. 4, pp. 471–480, Jul. 1973.
[4] H. Farid, “Image forgery detection,” IEEE Signal Process. Mag., vol. 26,
no. 2, pp. 16–25, Mar. 2009.
[5] J. Lukas and J. Fridrich, “Estimation of primary quantization matrix in
double compressed JPEG images,” presented at the Digital Forensic Research
Workshop, Cleveland, OH, Aug. 2003.
[6] A. Popescu and H. Farid, “Exposing digital forgeries in color filter array
interpolated images,” IEEE Trans. Signal Process., vol. 53, no. 10, pp. 3948–
3959, Oct. 2005.
[7] I. J. Cox, J. Kilian, T. Leighton, and T. Shamoon, “Secure spread
spectrum watermarking for images, audio and video,” in Proc. IEEE Int.Conf.
Image Process., Lausanne, Switzerland, Sep. 1996.
[8] J. J. Eggers and B. Girod, “Blind watermarking applied to image
authentication,” in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal
Process., Salt Lake City, UT, May 2001.
[9] R. B. Wolfgang and E. J. Delp, “A watermark for digital images,” in Proc.
IEEE Int. Conf. Image Process., Lausanne, Switzerland, Sep. 1996.
[10] W. Diffie and M. E. Hellman, "New directions in cryptography," IEEE Trans. Inf. Theory, vol. IT-22, no. 6, pp. 644–654, Nov. 1976.
[11] C.-Y. Lin and S.-F. Chang, “Generating robust digital signature for
image/video authentication,” in ACM Multimedia: Multimedia and Security
Workshop, Bristol, U.K., Sep. 1998, pp. 49–54.
[12] C.-Y. Lin and S.-F. Chang, “A robust image authentication method
surviving JPEG lossy compression,” in Proc. SPIE Conf. Storage and
Retrieval for Image and Video Database, San Jose, CA, Jan. 1998.