ICGST-GVIP Journal, Volume 7, Issue 3, November 2007

Designing Quantization Table for Hadamard Transform based on Human Visual System for Image Compression

K. Veeraswamy*, S. Srinivaskumar#, B.N. Chatterji§
*Research scholar, ECE Dept., JNTUCE, Kakinada, A.P, India.
#Professor, ECE Dept., JNTUCE, Kakinada, A.P, India.
§Former Professor, E&ECE Dept., IIT, Kharagpur, W.B, India.
[kilarivs@yahoo.com, samay_ssk2@yahoo.com, bnchatterji@gmail.com]

the transform domain [1]. The Wavelet Transform (WT) [2, 3] and the Contourlet Transform (CT) [4] are used as methods of information coding. The ISO/CCITT Joint Photographic Experts Group (JPEG-2000) has selected the WT for its baseline coding technique [5]. The Discrete Cosine Transform (DCT) has attracted widespread interest as a method of information coding, and JPEG has selected the DCT for its baseline coding technique [6]. In this work, the Hadamard Transform is used for image compression. The elements of the basis vectors of the Hadamard Transform take only the binary values ±1 and are therefore well suited for digital hardware implementations of image processing algorithms. The Hadamard Transform offers a significant advantage in terms of shorter processing time, since the processing involves simple integer manipulation (compared with floating-point processing for the DCT), and it admits easier hardware implementation than many common transform techniques. It is thus computationally less expensive than many other orthogonal transforms. Integer transforms are essential in video compression, and the Hadamard Transform is used for video compression [7]. The quantization table is central to the compression. Several approaches have been tried in order to design quantization tables for particular distortion or rate specifications. The most common of these is to use a default table and scale it up or down by a scalar multiplier to vary the quality and compression. Methods for determining the quantization table are usually based on rate-distortion theory.
These methods do achieve better performance than the JPEG default quantization table [8, 9]. However, quantization tables based on rate-distortion methods are image-dependent, and the complexity of the encoder is rather high. In this work, the quantization table is designed based on the human visual system (HVS) [10]. The HVS model is easy to adapt to the specified viewing resolution. The mind does not perceive everything the eye sees, and this knowledge is used to design a new quantization table for the Hadamard Transform.

Abstract
This paper discusses lossy image compression using the Hadamard Transform (HT). The quantization table plays a significant role in lossy image compression, improving the compression ratio without sacrificing visual quality. In this work, the human visual system (HVS) is considered to derive a quantization table applicable to the Hadamard Transform. By incorporating the human visual system with the uniform quantizer, a perceptual quantization table is derived. This quantization table is easy to adapt to the specified viewing resolution. Results show that this quantization table improves peak signal to noise ratio (PSNR) and normalized cross correlation (NCC) and reduces blocking artifacts. The work is extended to test the robustness of watermarking against various attacks.

Keywords: Image compression, Hadamard Transform, human visual system, quantization table, watermarking.

1. Introduction
There are many applications requiring image compression, such as multimedia, the internet, satellite imaging, remote sensing, and preservation of art work. Decades of research in this area have produced a number of image compression algorithms. Most of the effort expended over the past decades on image compression has been directed towards the application and analysis of orthogonal transforms. The orthogonal transform exhibits a number of properties that make it useful.
First, it generally conforms to a Parseval constraint, in that the energy present in the image is the same as that in the image's transform. Second, the transform coefficients bear no resemblance to the image, and many of the coefficients are small; if discarded, they provide image compression with nominal degradation of the quality of the reconstructed image. Third, certain sub-scenes of interest, such as some targets or particular textures, have easily recognized signatures in

This paper is organized as follows. The human visual system is discussed in Section 2. The 2D-Hadamard Transform is discussed in Section 3. The proposed method is presented in Section 4. Experimental results are given in Section 5. Concluding remarks are given in Section 6.

2. Human Visual System
The human visual system has been investigated by several researchers [11, 12]. Simplicity, together with visual sensitivity and selectivity to model and improve perceived image quality, are the requirements for the design of an HVS model. The HVS is based on the psychophysical process that relates psychological phenomena (contrast, brightness, etc.) to physical phenomena (light sensitivity, spatial frequency, wavelength, etc.). The HVS is complicated: it is a nonlinear and spatially varying system. Putting its multiple characteristics into a single equation, especially a linear one, is not an easy task. Mannos and Sakrison's work [13] was perhaps the first breakthrough in incorporating the HVS in image coding.
The HVS is modeled as a nonlinear point transformation followed by a modulation transfer function (MTF), given by

H(f) = a(b + cf) exp(-(cf)^d)    (1)

where f is the radial frequency in cycles/degree of the visual angle subtended, and a, b, c and d are constants. The HVS model proposed by Daly [14] is applied for generating the quantization table. This HVS model is a modified version of Mannos and Sakrison's work, with a = 2.2, b = 0.192, c = 0.114 and d = 1.1. The MTF of the HVS has been successfully applied to optimal image halftoning and image compression.

The MTF as reported by Daly [14] is

H(u,v) = 2.2 (0.192 + 0.114 f̂(u,v)) exp(-(0.114 f̂(u,v))^1.1)  if f̂(u,v) > f_max;
H(u,v) = 1.0  otherwise    (2)

where f̂(u,v) is the radial spatial frequency in cycles/degree and f_max is the frequency of 8 cycles/degree at which the exponential peaks. To implement this, it is necessary to convert the discrete horizontal and vertical frequencies {f(u), f(v)} into radial visual frequencies. For a symmetric printing grid, the horizontal and vertical discrete frequencies are periodic and are given in terms of the dot pitch Δ and the number of frequencies N by

f(u) = (u-1)/(ΔN),   f(v) = (v-1)/(ΔN)    (3)

Converting these to radial frequencies, and scaling the result to cycles/visual degree for a viewing distance dis in millimeters, gives

f(u,v) = π sqrt(f(u)^2 + f(v)^2) / (180 arcsin(1/sqrt(1 + dis^2))),  u, v = 1, 2, ..., N    (4)

Finally, to account for variations in the visual MTF as a function of viewing angle θ, these frequencies are normalized by an angular-dependent function s(θ(u,v)), such that

f̂(u,v) = f(u,v) / s(θ(u,v))    (5)

where s(θ(u,v)) is given by Daly as

s(θ(u,v)) = ((1-ω)/2) cos(4θ(u,v)) + (1+ω)/2    (6)

with ω being a symmetry parameter, and

θ(u,v) = arctan(f(u)/f(v))    (7)

Equation (6) indicates that as ω decreases, s(θ(u,v)) decreases.

3. 2D-Hadamard Transform
The 2D-HT has been used in image processing and image compression [15]. Let [x] represent the original image and [T] the transformed image. The 2D-Hadamard Transform is given by [16]

[T] = H_n [x] H_n / N    (8)

where H_n represents an NxN Hadamard matrix, N = 2^n, n = 1, 2, 3, ..., with element values either +1 or -1. The Hadamard Transform H_n is real, symmetric, and orthogonal [17], that is,

H_n = H_n* = H_n^t = H_n^-1    (9)

The inverse 2D-Hadamard Transform is given as

[x] = H_n [T] H_n / N    (10)

The Hadamard matrix of order n is generated from the Hadamard matrix of order n-1 using the Kronecker product '⊗' [18]:

H_1 = (1/sqrt(2)) [1 1; 1 -1],   H_n = H_(n-1) ⊗ H_1    (11)

The HT matrix has its AC components in a random order. Since the processing is performed on 8x8 sub-blocks of the whole image, the third-order HT matrix H_3 is used. By applying (11), H_3 becomes

H_3 = (1/sqrt(8)) x
[ 1  1  1  1  1  1  1  1
  1 -1  1 -1  1 -1  1 -1
  1  1 -1 -1  1  1 -1 -1
  1 -1 -1  1  1 -1 -1  1
  1  1  1  1 -1 -1 -1 -1
  1 -1  1 -1 -1  1 -1  1
  1  1 -1 -1 -1 -1  1  1
  1 -1 -1  1 -1  1  1 -1 ]    (12)

Table 1: The human visual frequency weighting matrix for Hadamard transform
1.0000 0.6571 1.0000 0.9599 1.0000 0.7684 1.0000 0.8746
0.6571 0.1391 0.4495 0.3393 0.6306 0.1828 0.5558 0.2480
1.0000 0.4495 0.7617 0.6669 1.0000 0.5196 0.8898 0.5912
0.9599 0.3393 0.6669 0.5419 0.9283 0.3930 0.8192 0.4564
1.0000 0.6306 1.0000 0.9283 1.0000 0.7371 1.0000 0.8404
0.7684 0.1828 0.5196 0.3930 0.7371 0.2278 0.6471 0.2948
1.0000 0.5558 0.8898 0.8192 1.0000 0.6471 0.9571 0.7371
0.8746 0.2480 0.5912 0.4564 0.8404 0.2948 0.7371 0.3598

The quantization table is

Q(u,v) = q / H(u,v)    (14)

where q is the step size of the uniform quantizer. The HVS-based quantization table can then be derived by

Q_HVS(u,v) = Round[Q(u,v)]    (15)
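The Kronecker construction of (11) is easy to check in code. The sketch below, in plain Python, omits the 1/sqrt(2)^n normalization (which only scales the matrix) and verifies two properties used in this paper: the unnormalized matrix satisfies H3 · H3^t = 8I, consistent with (9), and its rows have the sequencies quoted in the proposed method.

```python
def hadamard(n):
    # Sylvester / Kronecker recursion of Eq. (11): H_n = [[H, H], [H, -H]],
    # starting from H_0 = [1]; the 1/sqrt(2)^n normalization is omitted.
    H = [[1]]
    for _ in range(n):
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

H3 = hadamard(3)

# sequency = number of sign changes along each row
sequency = [sum(1 for a, b in zip(row, row[1:]) if a != b) for row in H3]
print(sequency)  # [0, 7, 3, 4, 1, 6, 2, 5]

# orthogonality: H3 . H3^t = 8 I, so the 1/sqrt(8)-scaled matrix is its own inverse
gram = [[sum(a * b for a, b in zip(r1, r2)) for r2 in H3] for r1 in H3]
assert all(gram[i][j] == (8 if i == j else 0) for i in range(8) for j in range(8))
```

The same recursion generates H_n for any block size that is a power of two.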
4. Proposed method
Given the Hadamard matrix, the number of sign changes in rows 1 to 8 of the Hadamard Transform matrix is 0, 7, 3, 4, 1, 6, 2 and 5 respectively, and likewise for columns 1 to 8. The number of sign changes is referred to as sequency; the concept of sequency is analogous to frequency for the Fourier transform. Therefore,

R = [0 7 3 4 1 6 2 5],   C = [0 7 3 4 1 6 2 5]

The horizontal and vertical discrete frequencies in the Hadamard domain are given by

f(u) = R(u)/(2NΔ) for u = 1, 2, ..., N;   f(v) = C(v)/(2NΔ) for v = 1, 2, ..., N    (13)

The dot pitch Δ of a high-resolution computer display is about 0.25 mm; such a display is about 128 mm high and 128 mm wide when displaying a 512x512 pixel image. The appropriate viewing distance is four times the height, hence the distance is taken as 512 mm. The constant ω is a symmetry parameter, derived from experiments and set to 0.7 [19]. Thus, the human visual frequency weighting matrix H(u,v) of (2) is calculated for the HT using equations (4) and (5), as given in Table 1. The human visual frequency weighting matrix H(u,v) indicates the perceptual importance of the transform coefficients.

Table 2: Proposed quantization table, Q_HVS(u,v)
16  24  16  17  16  21  16  18
24 115  36  47  25  88  29  65
16  36  21  24  16  31  18  27
17  47  24  30  17  41  20  35
16  25  16  17  16  22  16  19
21  88  31  41  22  70  25  54
16  29  18  20  16  25  17  22
18  65  27  35  19  54  22  44

Table 2 shows the proposed luminance quantization table derived with q = 16. Varying levels of image compression and quality are obtainable through selection of specific quantization matrices by varying q. This enables the user to decide on quality levels ranging from 1 to 100, where 1 gives the worst quality and highest compression, while 100 gives the best quality and lowest compression. The above quantization matrix works well for quality level 50.
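Under the constants stated above (Δ = 0.25 mm, dis = 512 mm, ω = 0.7, q = 16), the whole table derivation can be sketched in a few lines. This is my reading of Eqs. (2), (4)-(7) and (13)-(15), not code from the paper, but it reproduces the tabulated values.

```python
import math

# Constants from the text: dot pitch 0.25 mm, viewing distance 512 mm, omega = 0.7, q = 16
N, delta, dis, omega, q = 8, 0.25, 512.0, 0.7, 16.0
R = [0, 7, 3, 4, 1, 6, 2, 5]  # row/column sequencies of H3

def mtf(f_hat):
    # Daly's MTF, Eq. (2): flat below the 8 cycles/degree peak
    if f_hat <= 8.0:
        return 1.0
    t = 0.114 * f_hat
    return 2.2 * (0.192 + t) * math.exp(-t ** 1.1)

deg = 180.0 * math.asin(1.0 / math.sqrt(1.0 + dis * dis))  # cycles/mm -> cycles/degree factor
Q = [[0] * N for _ in range(N)]
for u in range(N):
    for v in range(N):
        fu, fv = R[u] / (2 * N * delta), R[v] / (2 * N * delta)      # Eq. (13)
        fr = math.pi * math.hypot(fu, fv) / deg                      # Eq. (4)
        theta = math.atan2(fu, fv)                                   # Eq. (7)
        s = (1 - omega) / 2 * math.cos(4 * theta) + (1 + omega) / 2  # Eq. (6)
        Q[u][v] = round(q / mtf(fr / s))                             # Eqs. (5), (14), (15)

print(Q[0])  # first row of Table 2 -> [16, 24, 16, 17, 16, 21, 16, 18]
```

Changing `q` rescales the whole table, which is how the quality levels mentioned above are obtained.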
After multiplying the 64 Hadamard coefficients by the human visual frequency weighting matrix H(u,v), the weighted Hadamard coefficients contribute the same perceptual importance to human observers. The compression method is given as:
1. The input image is first divided into non-overlapping 8x8 blocks.
2. Each block is transformed into 64 Hadamard coefficients via the 2D-HT.
3. These 64 HT coefficients are then uniformly quantized by the HVS-based quantization table and rounded.
The image is reconstructed through decompression, a process that uses the inverse HT. Nonzero coefficients are used to reconstruct the original image. Flow charts for the proposed method are shown in Figures 1 and 2 respectively.

Figure 1. Flowchart for image compression (image x -> block-wise Hadamard Transform -> quantization of each block with the proposed HVS-based table -> rounding of the quantized coefficients -> compressed image in the HT domain)

For watermarking, the bit is embedded in one of the AC coefficients as follows:
AC_p = AC_a - T to embed bit '0'
AC_p = AC_a + T to embed bit '1'
where p and a are different coefficient locations.

5. Experimental results
Experiments are performed on the two gray images given in Figure 3 [20] to verify the proposed compression technique.

Figure 3: (a) Lena (b) Peppers

These two images are represented by 8 bits/pixel, and each image is of size 512 x 512. The entropy E [21] is calculated as

E = -Σ_(e∈s) p(e) log2 p(e)    (16)

where s is the set of coefficients and p(e) is the probability of coefficient e. An often used global objective quality measure is the mean square error (MSE), defined as

MSE = (1/((n-1)(m-1))) Σ_(i=0..n-1) Σ_(j=0..m-1) (x_ij - x'_ij)^2    (17)

where n and m are the image dimensions, and x_ij and x'_ij are the pixel values in the original and reconstructed images.
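The three compression steps and the inverse transform can be sketched as a round trip on a single 8x8 block. This is a plain-Python sketch using an unnormalized ±1 Hadamard matrix, so Eqs. (8) and (10) each carry a 1/N factor; the flat all-16 quantization table is a stand-in assumption for illustration, not the proposed table.

```python
def hadamard(n):
    # Sylvester recursion; entries are +/-1, normalization handled by the 1/N in Eqs. (8), (10)
    H = [[1]]
    for _ in range(n):
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

N = 8
H = hadamard(3)
Q = [[16] * N for _ in range(N)]  # stand-in quantization table (assumption)

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def compress(block):
    T = matmul(matmul(H, block), H)  # Eq. (8), before the 1/N scaling
    return [[round(T[i][j] / (N * Q[i][j])) for j in range(N)] for i in range(N)]  # scale, quantize, round

def decompress(coded):
    T = [[coded[i][j] * Q[i][j] for j in range(N)] for i in range(N)]  # dequantize
    x = matmul(matmul(H, T), H)                                        # Eq. (10)
    return [[v / N for v in row] for row in x]

flat = [[100] * N for _ in range(N)]
print(decompress(compress(flat))[0][0])  # a flat block survives the round trip exactly: 100.0
```

For a non-flat block the reconstruction error stays bounded by the quantization step, which is the lossy part of the scheme.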
The peak signal to noise ratio (PSNR, in dB) is calculated as

PSNR = 10 log10(255^2 / MSE)    (18)

where the usable gray level values range from 0 to 255. The other metric used to test the quality of the reconstructed image is the normalized cross correlation (NCC), defined as

NCC = Σ_i Σ_j (x_ij - x̄)(x'_ij - x̄') / sqrt([Σ_i Σ_j (x_ij - x̄)^2][Σ_i Σ_j (x'_ij - x̄')^2])    (19)

where x̄ indicates the mean of the original image and x̄' the mean of the reconstructed image.

Figure 2. Flowchart for image reconstruction (compressed image in the HT domain -> multiply each block by the proposed quantization table to obtain the coefficients -> block-wise inverse HT -> reconstructed image)

The human visual frequency weighting matrix for the Hadamard transform indicates that the middle and high frequency bands in the HT are distributed in a random order. This property increases the reliability of the watermark. The steps of the watermarking algorithm are as follows:
1. Identify two AC coefficients in each transformed (HT) block of the image.
2. Embed the watermark bit in one of the AC coefficients, as given earlier.
The proposed HVS-based quantization table can thus achieve high performance without increasing complexity. A performance comparison of the proposed HVS-based Hadamard quantization table with other quantization tables is shown in Figures 4 and 5 respectively.

Comparative performance is studied in terms of PSNR considering the following methods:
1. The standard quantization matrix as used in the JPEG algorithm, applied to quantize the Hadamard coefficients (denoted Q1).
2. The HVS-based quantization matrix (as applied for the DCT), used to quantize the Hadamard coefficients (denoted Q2).
3. The proposed HVS-based quantization matrix, used to quantize the Hadamard coefficients.
The experiments are performed on the Lena and Peppers images, and the results are presented in Tables 3 and 4.
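Both quality metrics, Eqs. (18) and (19), are straightforward to implement. A plain-Python sketch follows; note that it uses the conventional 1/(nm) averaging for the MSE inside the PSNR rather than the 1/((n-1)(m-1)) factor printed in Eq. (17).

```python
import math

def psnr(orig, rec):
    # Eq. (18), with the MSE averaged over all n*m pixels
    n, m = len(orig), len(orig[0])
    mse = sum((orig[i][j] - rec[i][j]) ** 2 for i in range(n) for j in range(m)) / (n * m)
    return float("inf") if mse == 0 else 10 * math.log10(255 ** 2 / mse)

def ncc(orig, rec):
    # Eq. (19): zero-mean normalized cross correlation
    xs = [v for row in orig for v in row]
    ys = [v for row in rec for v in row]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = math.sqrt(sum((a - mx) ** 2 for a in xs) * sum((b - my) ** 2 for b in ys))
    return num / den

a = [[0, 255], [255, 0]]
b = [[16, 255], [255, 0]]    # one pixel off by 16, so MSE = 256/4 = 64
print(round(psnr(a, b), 2))  # 10*log10(255^2 / 64) = 30.07
```

NCC is 1.0 for identical images and drops toward 0 as the reconstruction decorrelates from the original.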
Table 3: PSNR results for Lena image using different quantization tables
Entropy (bits/pixel) | Q1      | Q2      | Proposed method
0.3                  | 27.1810 | 27.9678 | 28.0495
0.5                  | 29.2018 | 30.4706 | 30.9096
0.7                  | 30.7308 | 32.3234 | 32.8423
0.9                  | 32.1647 | 33.7089 | 34.3915
1.0                  | 32.7591 | 34.4589 | 35.2633

Figure 4: Performance comparison for the Lena image

Table 4: PSNR results for Peppers image using different quantization tables
Entropy (bits/pixel) | Q1      | Q2      | Proposed method
0.3                  | 26.5786 | 27.5926 | 27.7241
0.5                  | 28.9478 | 30.2501 | 30.5620
0.7                  | 30.5664 | 32.0616 | 32.3596
0.9                  | 31.8695 | 33.3461 | 33.8205
1.0                  | 32.6403 | 34.2340 | 34.3731

Figure 5: Performance comparison for the Peppers image

The bit rate and the decoded quality are determined simultaneously by the quantization table; therefore, the quantization table has a strong influence on the overall compression performance. Experimental results indicate that the proposed method achieves better performance in terms of PSNR at the same level of compression.

The process of quantization results in both blurring and blocking artifacts. Blocking artifacts occur due to discontinuities at block boundaries. The blockiness is estimated as the average difference across block boundaries; this feature is used to constitute a quality assessment model that calculates a score [22]. The score typically takes a value between 0 and 10, where 10 and 0 represent the highest and lowest quality respectively. A performance comparison in terms of normalized cross correlation (NCC) and score is given in Table 5 for Lena and Peppers at 0.9 bits per pixel.

JPEG gives slightly better PSNR (by less than 1 dB) than the proposed method; this is because the DCT is a high-gain transform. To test the reliability of watermarking, a binary logo of size 64x64 and the Lena image of size 512x512 are considered. Watermark bits are embedded in the location AC(2,2), modified relative to the contents of AC(2,6). A threshold value T = 8 is used for experimentation.
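Under my reading of the embedding rule (AC_p = AC_a ∓ T) and an assumed detector that simply compares the two coefficients (the paper does not spell out the extraction step), the per-block scheme can be sketched as:

```python
T = 8  # threshold used in the experiments

def embed_bit(block, bit, p=(2, 2), a=(2, 6)):
    # AC_p = AC_a - T for bit '0', AC_a + T for bit '1'; the locations
    # (2,2) and (2,6) follow the experiment described in the text.
    block[p[0]][p[1]] = block[a[0]][a[1]] + (T if bit else -T)
    return block

def extract_bit(block, p=(2, 2), a=(2, 6)):
    # assumed detector: bit is 1 when AC_p exceeds AC_a (not stated in the paper)
    return 1 if block[p[0]][p[1]] > block[a[0]][a[1]] else 0

blk = [[0] * 8 for _ in range(8)]
print(extract_bit(embed_bit(blk, 1)))  # 1
```

A full embedder would apply this to each 8x8 HT block and then inverse-transform; robustness to attacks then depends on the margin T.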
Logo and watermarked images are shown in Figure 7.

Table 5: NCC and score results for Lena and Peppers using different quantization tables
              | Q1     | Q2     | Proposed
Lena NCC      | 0.9913 | 0.9940 | 0.9949
Lena Score    | 3.4139 | 3.8274 | 4.1027
Peppers NCC   | 0.9936 | 0.9954 | 0.9959
Peppers Score | 3.5986 | 4.1087 | 4.4115

Figure 7: (a) Logo (64x64) (b) Watermarked Lena image using DCT (c) Watermarked Lena image using HT

The results of the retrieved watermarks with the DCT and HT techniques are given in Table 7.

Table 6: PSNR results for different images using different quantization tables
Image    | Entropy | Q1   | Q2   | Proposed
Mandrill | 2.17    | 29.0 | 30.4 | 31.2
Barbara  | 1.47    | 31.8 | 33.4 | 34.2
Zelda    | 0.89    | 35.3 | 36.5 | 37.1
Airplane | 1.11    | 34.2 | 36.0 | 36.7
Washsat  | 0.92    | 34.0 | 35.0 | 35.6

Table 7: Retrieved watermarks and NCC results (64x64 logo embedded)
Attack                                 | DCT (PSNR = 34.2 dB) | HT (PSNR = 47.1 dB)
Salt and pepper noise (density 0.01)   | NCC = 0.8320         | NCC = 0.8687
Bit plane removal (4th bit plane = 0)  | NCC = 0.4888         | NCC = 0.5190
Cropping (25%)                         | NCC = 0.9909         | NCC = 0.9932
Histogram equalization                 | NCC = 0.6896         | NCC = 0.9277

The results of the proposed method on other images are given in Table 6. The reconstructed Lena and Peppers images using the proposed HVS-based quantization table are shown in Figure 6.

Figure 6: (a) Reconstructed Lena image (0.5 bpp, PSNR 30.90 dB) (b) Reconstructed Peppers image (0.5 bpp, PSNR 30.56 dB)

The watermarked images are attacked by image cropping, histogram equalization, bit plane removal and noise. The extracted watermarks are given in Table 7. Experiments demonstrate that the HT-based watermarking scheme is more robust to these attacks than the DCT-based scheme, and the PSNR of the watermarked image is higher with HT-based watermarking. The proposed method performs well in all tests. The error introduced by quantization of a particular Hadamard coefficient will not be visible if its quantization error is less than the just noticeable difference.
Using the JPEG method, the PSNR is 34.9759 dB and 36.1517 dB for the reconstructed Peppers and Lena images respectively (1.0 bpp).

6. Conclusions
In this paper, a simple approach to the generation of an optimal quantization table based on an HVS model is presented. This quantization table is used to quantize the HT coefficients and obtain superior image compression over the standard quantization tables available in the literature. The superiority of this method is observed in terms of PSNR and NCC. It is observed that the Hadamard transform has more useful middle and high frequency bands for image watermarking than several high-gain transforms, such as the DCT, DFT (Discrete Fourier Transform) and DST (Discrete Sine Transform). The HT matrix has its AC components in a random order, and this is exploited for digital image watermarking. The digital image watermarking scheme presented here with the HT is robust compared to the DCT-based watermarking scheme.

International Journal of Computer Science and Network Security, Vol. 6, pp. 168-174, Sept 2006.
[13] J.L. Mannos and D.J. Sakrison. The effect of a visual fidelity criterion in the encoding of images. IEEE Trans. Information Theory, Vol. 20, pp. 525-536, July 1974.
[14] S. Daly. Subroutine for the generation of a two dimensional human visual contrast sensitivity function. Tech. Rep., Eastman Kodak, 1987.
[15] A.O. Osinubi and R.A. King. One-dimensional Hadamard naturalness-preserving transform reconstruction of signals degraded by nonstationary noise processes. IEEE Transactions on Signal Processing, Vol. 40, No. 3, pp. 645-659, March 1992.
[16] J. Johnson and M. Puschel. In search of the optimal Walsh-Hadamard transform. ICASSP, 2000.
[17] Khalid Sayood. "Introduction to Data Compression", second edition, Academic Press, 2000.
[18] T.S. Anthony, Jun Shen, Andrew K.K. Chow, Jerry Woon. Robust digital image-in-image watermarking algorithm using the fast Hadamard transform. Volume ASSP-33, No. 4, pp. 1006-1012, August 1985.
[19] J.P. Allebach and D.L. Nehhoff. Model based digital halftoning. IEEE Signal Processing Magazine, pp. 14-27, July 2003.
[20] University of Waterloo, "Waterloo Repertoire", http://links.uwaterloo.ca/greyset2.base.html
[21] David Salomon. "Data Compression", second edition, Springer, 2000.
[22] Z. Wang, H.R. Sheikh and A.C. Bovik. No-reference perceptual quality assessment of JPEG compressed images. IEEE International Conference, pp. 477-480, Sept 2002.

Acknowledgements
The first author thanks Prof. B. Chandra Mohan and Sri Ch. Srinivasa Rao, research scholars at JNTU College of Engineering, Kakinada, for valuable discussions related to this work. The authors would like to thank the reviewers for their review of the paper and valuable suggestions.

References
[1] Z. Fan and R.D. Queiroz. Maximum likelihood estimation of JPEG quantization table in the identification of bitmap compression. IEEE, pp. 948-951, 2000.
[2] R. Sudhakar and R. Karthiga. Image compression using coding of wavelet coefficients - a survey. GVIP Journal, Volume 5, Issue 6, 2005.
[3] G.K. Kharate, A.A. Ghatol and P.P. Rege. Image compression using wavelet packet tree. GVIP Journal, Volume 5, Issue 6, pp. 37-40, 2005.
[4] S. Esakkirajan, T. Veerakumar, V. Senthil Murugan and R. Sudhakar. Image compression using contourlet transform and multistage vector quantization. GVIP Journal, Volume 6, Issue 1, pp. 19-28, 2006.
[5] W. Fourati and M.S. Bouhlel. A novel approach to improve the performance of JPEG 2000. GVIP Journal, Volume 5, Issue 5, pp. 1-9, 2005.
[6] V. Ratnakar and M. Livny. RD-OPT: An efficient algorithm for optimizing DCT quantization tables. Data Compression Conference, IEEE, pp. 332-341, March 1995.
[7] O. Tasdizen and I. Hamzaoglu. A high performance and low cost hardware architecture for H.264 transform and quantization algorithms.
Sabanci University, www.sabanciuniv.edu/~hamzaoglu.
[8] D.M. Monro and B.G. Sherlock. Optimum DCT quantization. Data Compression Conference, IEEE, Dec 1993.
[9] D.M. Bethel, D.M. Monro and B.G. Sherlock. Optimal quantization of the discrete cosine transform for image compression. Conference Publication No. 443, IEE, pp. 69-72, 1997.
[10] L.W. Chang, C.Y. Wang and S.M. Lee. Designing JPEG quantization tables based on human visual system. Signal Processing: Image Communication, Volume 16, Issue 5, pp. 501-506, Jan 2001.
[11] J. Sullivan, L. Ray and R. Miller. Design of minimum visual modulation halftone patterns. IEEE Transactions on Systems, Man and Cybernetics, Volume 21, pp. 33-38, 1991.
[12] P. Poda and A. Tamtaoui. On the enhancement of unequal error protection performances in images transmission over time-varying channels. IJCSNS

Biographies
B.N. Chatterji is a former Professor in the E&ECE Department, IIT Kharagpur. He received his B.Tech. and Ph.D. (Hons.) from the E&ECE Department, IIT Kharagpur, in 1965 and 1970 respectively. He has served the institute in various administrative capacities, including Head of Department and Dean (Academic). He has chaired many international and national symposia and conferences organized in India and abroad, apart from organizing 15 short-term courses for industries and engineering college teachers. He has guided 35 Ph.D. scholars and is presently active in research, guiding three research scholars. He has published more than 150 papers in reputed international and national journals, apart from authoring three scientific books. His research interests are low-level vision, computer vision, image analysis, pattern recognition and motion analysis.

K. Veeraswamy is currently working as an Associate Professor in the ECE Department, Bapatla Engineering College, Bapatla, India. He is working towards his Ph.D. at JNTU College of Engineering, Kakinada, India, and received his M.Tech. from the same institute.
He has nine years of experience teaching undergraduate and postgraduate students. His research interests are in the areas of image compression, image watermarking and networking protocols.

S. Srinivas Kumar is currently Professor and Head of the ECE Department, JNTU College of Engineering, Kakinada, India. He received his M.Tech. from Jawaharlal Nehru Technological University, Hyderabad, India, and his Ph.D. from the E&ECE Department, IIT Kharagpur. He has nineteen years of experience teaching undergraduate and postgraduate students and has guided a number of post-graduate theses. He has published 15 research papers in national and international journals. Presently he is guiding five Ph.D. students in the area of image processing. His research interests are in the areas of digital image processing, computer vision, and the application of artificial neural networks and fuzzy logic to engineering problems.