International Journal of Emerging Technologies and Engineering (IJETE) Volume 1 Issue 1, January 2014, ISSN 2348 – 8050

CONGLOMERATION OF REFLECTION SEPARATION METHODS
P. Santhana Kumari and Prof. K. Madhan Kumar

Abstract
Reflection arises when a photograph is taken of an object placed behind glass. Objects also suffer from another reflection problem, namely surface reflection. Reflection is a major problem, and many algorithms have been proposed to eliminate it, yet each has drawbacks. The recently proposed constrained optimization technique overcomes those drawbacks: it formulates reflection separation as an energy minimization problem, with the energy function derived from Bayes' rule for a maximum a posteriori (MAP) estimate. This survey discusses the existing reflection separation methods and analyses their performance.

Keywords— Independent Component Analysis, Specular-to-Diffuse Mechanism, Sparsity Prior, Maximum a Posteriori Estimation.

I. INTRODUCTION
The problem of reflection separation occurs naturally in everyday life whenever a desired scene contains another scene reflected off a transparent or semi-reflective medium. Common examples include photographs of scenes taken through windows, or photographs of objects placed inside glass showcases in retail stores and museums. The aim is to solve this reflection problem by post-processing the images. An image with reflection consists of two layers: the background layer, which is the image of the object, and the reflection layer, which is the unwanted scene reflected in the glass. To reduce the effect of reflection, a polarizing filter can be placed in front of the camera, but this does not completely eliminate the reflection, because the degree of polarization depends on the angle of incidence and the polarizer removes the reflection only partially. Therefore several images of the object are taken from the same viewpoint but at different polarization angles, obtained by rotating the polarizer mounted in front of the camera. To accomplish this, a simple assumption is made: the gradients of the reflection layer and of the background layer are mutually exclusive.

Objects also exhibit another reflection problem, namely surface reflection, which consists of two components: a diffuse component and a specular component. In [1] the two components are separated by independent component analysis. In [2] the specular and diffuse components are analysed based on a color ratio, with both the input image and the diffuse pixel candidates normalized by the illumination chromaticity. The method of [3] is based solely on colors: the intensity logarithmic differentiation of the input image is compared with that of a specular-free image. The specular-to-diffuse mechanism is used in both [2] and [3]. In [4] a sparsity prior is used and optimized by an iterative reweighted least squares approach. In [5] the image gradients are classified into background-layer gradients and reflection-layer gradients using information from multiple input images, and reflection separation is then formulated as a constrained optimization problem over the reflection layer, the background layer, and the "matte" that determines the mixing coefficients of the two layers.

Fig. 1 shows the general block diagram of image processing. In general, the image of an object is captured by a camera fitted with a polarizer; by rotating the polarizer, multiple polarized images are captured. These input images are read and then converted into pixels or image gradients using derivative methods. The images are then processed for reflection separation, which can be done by ICA, the specular-to-diffuse mechanism, or an optimization formulation. Finally, the desired reflection-separated image is obtained and displayed on the monitor.

Fig. 1. The general block diagram of image processing.
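To make the pre-processing step above concrete, the following minimal Python/NumPy sketch reads a set of polarized captures of the same scene and converts each into image gradients with simple forward differences. The file names are hypothetical placeholders, and the snippet only illustrates the pipeline; it is not the implementation of any particular method surveyed below.

    import numpy as np
    from PIL import Image

    # Hypothetical captures of the same scene at different polarizer angles.
    files = ["scene_pol_000.png", "scene_pol_045.png", "scene_pol_090.png"]
    images = [np.asarray(Image.open(f), dtype=np.float64) / 255.0 for f in files]

    def gradients(img):
        # Forward-difference gradients along x and y for each color channel.
        gx = np.zeros_like(img)
        gy = np.zeros_like(img)
        gx[:, :-1] = img[:, 1:] - img[:, :-1]
        gy[:-1, :] = img[1:, :] - img[:-1, :]
        return gx, gy

    grads = [gradients(I) for I in images]   # fed to the separation stage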
II. LITERATURE REVIEW

A. INDEPENDENT COMPONENT ANALYSIS
Shinji Umeyama et al. [1] proposed a separation technique for diffuse and specular reflections based on Independent Component Analysis (ICA). It is realized by observing the surface reflection through a polarizer at several orientations. A stable separation algorithm is given based on the dichromatic reflection model. The separation results are very good in spite of several approximations, such as the assumptions of unpolarized diffuse reflection and a distant light source, and they are significantly better than the naive separation obtained with a polarizer alone, since specular reflection cannot be eliminated completely simply by using a polarizer.

ICA is a statistical data analysis method: when mixtures of probabilistically independent source signals are observed, ICA recovers the original source signals from the observed mixtures without knowing how the sources are mixed. Since both the original signals and the mixing coefficients are unknown, this estimation seems impossible at first sight.

An ICA algorithm for reflection component separation: by rotating the polarizer, a series of M surface reflection images is captured. The images are scanned and vectorized into row vectors x_j, and the observation matrix X is composed of these row vectors stacked vertically:

    X = [ x_1 ; x_2 ; … ; x_M ]   (1)

Let d and s be row vectors representing the diffuse and specular reflection images; the source matrix S is composed of these vectors:

    S = [ d ; s ]   (2)

The mixing matrix A can then be written as

    A = [ 1  f(ψ_1) ; 1  f(ψ_2) ; … ; 1  f(ψ_M) ]   (3)

so that

    X = AS   (4)

The problem of separating diffuse and specular reflections is to decompose a given observation matrix X into the product of the two matrices A and S. ICA achieves this by using the probabilistic independence of the source signals: the original signals are estimated as mixtures of the observed signals, and the mutual independence of the estimated signals is maximized by tuning the mixing coefficients. Hence, ICA may be able to separate the reflection components. In ICA, the number of different observed signals must be greater than or equal to the number of original signals. The major disadvantage of this technique is that it is iterative and sometimes converges with difficulty.
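As a rough illustration of how such a decomposition can be computed in practice, the sketch below builds a synthetic observation matrix X = AS from two random source layers and recovers the sources with scikit-learn's FastICA. The polarizer factors f(ψ_j), the image size, and the use of FastICA are illustrative assumptions rather than the authors' algorithm, and ICA recovers the sources only up to order and scale.

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    H, W, M = 64, 64, 4

    # Two synthetic source layers standing in for the diffuse and specular images.
    diffuse = rng.random((H, W))
    specular = rng.random((H, W))
    S = np.stack([diffuse.ravel(), specular.ravel()])        # source matrix, shape (2, H*W)

    # M observations x_j = d + f(psi_j) * s, stacked into the observation matrix X = A S.
    f = np.linspace(0.1, 1.0, M)                             # illustrative polarizer factors
    A = np.column_stack([np.ones(M), f])                     # mixing matrix, shape (M, 2)
    X = A @ S                                                # observation matrix, shape (M, H*W)

    # ICA treats every pixel as one sample and the M captures as mixed signals.
    ica = FastICA(n_components=2, random_state=0)
    S_est = ica.fit_transform(X.T)                           # estimated sources, (H*W, 2)
    layers = [S_est[:, k].reshape(H, W) for k in range(2)]   # recovered up to order and scale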
B. SPECULAR-TO-DIFFUSE MECHANISM
Robby T. Tan, Ko Nishino et al. [2] introduced the specular-to-diffuse mechanism to separate the reflection components of uniformly colored surfaces from a single input image. The method works on chromaticity, particularly on the distribution of specular and diffuse points in maximum chromaticity–intensity space. Briefly, the method is as follows. Given a single colored image taken under uniformly colored illumination, the diffuse pixel candidates are first identified based on the color ratio u (a scalar value computed per pixel, Eq. (5)) and a noise analysis, particularly of camera noise. The chromaticity is defined as

    σ(x) = I(x) / (I_r(x) + I_g(x) + I_b(x))   (6)

and the maximum chromaticity as

    σ̃(x) = max(I_r(x), I_g(x), I_b(x)) / (I_r(x) + I_g(x) + I_b(x))   (7)

where I(x) = {I_r(x), I_g(x), I_b(x)} is a color vector and σ̃(x) is the maximum chromaticity.

Both the input image and the diffuse pixel candidates are normalized simply by dividing their pixel values by the known illumination chromaticity; color constancy algorithms can be employed to estimate the illumination chromaticity. From the normalized diffuse candidates, the diffuse maximum chromaticity is estimated using histogram analysis. Having obtained the normalized image and the normalized diffuse maximum chromaticity, the separation can be done straightforwardly using the newly introduced specular-to-diffuse mechanism.

Fig. 2 shows the flow diagram of the specular-to-diffuse mechanism. Given an input image of uniformly colored surfaces, the pixels of the image are first grouped according to their color ratio (u) values. Then, for every group of u, the diffuse point candidate is identified, which implies identifying the diffuse pixel candidates. Using the estimated illumination chromaticity, all diffuse pixel candidates as well as the input image are normalized. Based on the normalized diffuse pixel candidates, a unique value of the normalized diffuse maximum chromaticity is calculated using histogram analysis. Knowing the normalized diffuse chromaticity, the normalized input image is separated by the specular-to-diffuse mechanism, producing normalized diffuse and specular components. Finally, to obtain the actual components, both normalized components are multiplied by the estimated illumination chromaticity. The method can be extended to handle multicolored surfaces by using color-ratio or hue-based color segmentation; both the color ratio and the hue value are independent of specularity if the specular reflection component is pure white.

Fig. 2. Flow diagram of the specular-to-diffuse mechanism.

Fig. 3(a) shows the estimation of the actual diffuse maximum chromaticity for the Victor KY-F70 camera. Although some points, due to ambient light in shadow regions, produce an uncharacteristic distribution, the diffuse chromaticity is still correctly obtained. The separation result using this camera is shown in Figs. 3(c) and 3(d). The mechanism is accurate in separating the reflection components when given the correct diffuse chromaticity of the normalized image; however, inaccurate illumination chromaticity estimation implies inaccurate grouping and hence inaccurate separation by the specular-to-diffuse mechanism.

Fig. 3. (a) Diffuse maximum chromaticity estimation for an image taken by a Victor KY-F70. (b) Input image. (c) Diffuse reflection component. (d) Specular reflection component.
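A minimal NumPy sketch of the specular-to-diffuse computation is given below. It assumes the image has already been normalized by the illumination chromaticity (so the specular component is roughly achromatic) and that the diffuse maximum chromaticity lambda_max has been estimated, for example by the histogram analysis described above. The closed-form expression for the diffuse weighting factor follows from the dichromatic model with a white specular component, but the snippet is a simplified illustration rather than the authors' implementation.

    import numpy as np

    def specular_to_diffuse(img, lambda_max):
        # img:        illumination-normalized RGB image, float array of shape (H, W, 3)
        # lambda_max: estimated diffuse maximum chromaticity, scalar in (1/3, 1]
        eps = 1e-8
        i_sum = img.sum(axis=2) + eps                  # I_r + I_g + I_b
        i_max = img.max(axis=2)                        # max(I_r, I_g, I_b)
        sigma_max = i_max / i_sum                      # maximum chromaticity, Eq. (7)

        # Diffuse weighting factor; achromatic pixels (sigma_max near 1/3) degenerate to 0.
        m_d = i_max * (3.0 * sigma_max - 1.0) / (sigma_max * (3.0 * lambda_max - 1.0) + eps)
        m_s = np.clip(i_sum - m_d, 0.0, None)          # specular weighting factor

        specular = np.repeat((m_s / 3.0)[..., None], 3, axis=2)   # achromatic specular part
        diffuse = np.clip(img - specular, 0.0, None)
        return diffuse, specular

    # Toy usage with a random "image"; in practice lambda_max comes from histogram analysis.
    img = np.random.default_rng(0).random((4, 4, 3))
    d, s = specular_to_diffuse(img, lambda_max=0.7)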
C. INTENSITY LOGARITHMIC DIFFERENTIATION
Robby T. Tan et al. [3] used a novel method to separate diffuse and specular reflection components. The main insight of the method lies in a chromaticity-based iteration combined with the logarithmic differentiation of a specular-free image. The specular-free image is described as

    Ĩ(x) = m̃(x) Λ̃(x)   (8)

where Ĩ = {Ĩ_r, Ĩ_g, Ĩ_b} is the image intensity of the specular-free image, Λ̃ = {Λ̃_r, Λ̃_g, Λ̃_b} is the diffuse chromaticity, and m̃ is the diffuse weighting factor.

Fig. 4. (a) Normalized input image. (b) Specular-free image obtained by setting Λ = 0.5. The specular components are perfectly removed, but the surface color is different.

Fig. 4a shows a real image of a multicolored scene. By setting Λ = 0.5 for all pixels, an image is obtained that is geometrically identical to the diffuse component of the input image (Fig. 4b); the difference between the two lies solely in their surface colors. This technique successfully removes highlights mainly because the saturation values of all pixels are made constant with regard to the maximum chromaticity, while their hue is retained. It is well known that, if the specular component's color is pure white, then diffuse and specular pixels that have the same surface color have identical hue values.

Consider a diffuse pixel x_1 in Fig. 4a that is not located at a color discontinuity; it is described as I(x_1) = m_d(x_1) Λ, where the spatial parameter (x_1) is dropped from Λ since the pixel is not located at a color discontinuity. Applying the logarithm to this pixel gives

    log I(x_1) = log m_d(x_1) + log Λ   (9)

and differentiating removes the constant chromaticity term:

    ∇ log I(x_1) = ∇ log m_d(x_1)   (10)

These two processes (specularity reduction and diffuse verification) are applied iteratively until no specularity remains in the normalized image. All processes require only two adjacent pixels, and this local operation is indispensable in dealing with highly textured surfaces; it also serves as the termination condition of the iterative framework, which removes the specular components step by step until no specular reflection exists in the image. With this method, the separation problem on textured surfaces with complex multicolored scenes can be solved without explicit color segmentation. The drawback is that the computational time and complexity are high.

In summary, given a single colored image, it is normalized by the illumination color using the known illumination chromaticity, which produces an image with a pure-white specular component. From this image, a specular-free image is generated by simply shifting the intensity and maximum chromaticity of the pixels nonlinearly while retaining their hue. This image has diffuse geometry exactly identical to that of the input image; the difference is only in their surface colors. Thus, by applying intensity logarithmic differentiation to both the normalized image and its specular-free counterpart, it can be determined whether the normalized image contains only diffuse pixels. Fig. 5 illustrates the basic idea of the separation method: first, given a normalized image, a specular-free image is generated; after that, the diffuse verification checks once again whether the normalized image now contains only diffuse pixels.

Fig. 5. Flow diagram of the separation method.
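The verification step can be sketched as follows: the specular-free image is generated by reusing the specular_to_diffuse helper from the previous sketch with an arbitrary diffuse maximum chromaticity of 0.5, and the logarithmic x-derivatives of the total intensities of the two images are compared; pixels where they disagree are taken as still containing specular reflection. The threshold tau and the simple forward difference are illustrative assumptions, not the authors' exact settings.

    import numpy as np

    def log_diff_x(intensity, eps=1e-8):
        # Horizontal derivative of the log of the total intensity (I_r + I_g + I_b).
        log_i = np.log(intensity + eps)
        return log_i[:, 1:] - log_i[:, :-1]

    def specular_candidates(normalized, specular_free, tau=1e-2):
        # normalized:    illumination-normalized input image, shape (H, W, 3)
        # specular_free: image produced with the specular-to-diffuse mechanism, lambda_max = 0.5
        d_norm = log_diff_x(normalized.sum(axis=2))
        d_free = log_diff_x(specular_free.sum(axis=2))
        # Away from color discontinuities, differing log-derivatives indicate that the
        # normalized image still contains a specular component at that location.
        return np.abs(d_norm - d_free) > tau

    # Usage (with the helper from the previous sketch):
    #   sf, _ = specular_to_diffuse(normalized, lambda_max=0.5)
    #   mask = specular_candidates(normalized, sf)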
D. USER-ASSISTED SEPARATION
Anat Levin and Yair Weiss [4] introduced a quantitative comparison of different likelihood models and different filter sets. Their technique works on arbitrarily complex images, but the problem is simplified by allowing user assistance: the user manually marks certain edges (or areas) in the image, typically on the order of a hundred edges, as belonging to one of the two layers, and each marked edge gives an additional constraint for the problem.

A prior derived from the statistics of natural scenes is used, namely a prior on images based on the sparsity of derivative filter responses. This sparsity prior is optimized using the iterative reweighted least squares (IRLS) approach, which poses the problem as a sequence of standard least squares problems, each reweighted by the solution of the previous step; it yields excellent separations using a relatively small number of labeled gradients. The input image I is a linear combination of two unknown images: the image behind the glass, I_1, and the image reflected by the glass, I_2. These two images sum linearly:

    I(x, y) = I_1(x, y) + I_2(x, y)   (11)

Fig. 6 shows the input images with labeled gradients and the results, comparing the Laplacian prior and the sparse prior as a function of the number of labeled points. The Laplacian prior gives good results, although some ghosting effects can still be seen (i.e., remainders of layer 2 in the reconstructed layer 1); these ghosting effects are fixed by the sparse prior. Good results can also be obtained with a Laplacian prior when more labeled gradients are provided. The sparsity prior is a supervised approach, which translates to finding a likelihood function that, when combined with the user marks, minimizes the error of the decomposed images; however, the amount of user interaction required to achieve good results is still quite large.

Fig. 6. Comparing the Laplacian prior with a sparse prior. (left) When only a few gradients are labeled, the sparse prior gives noticeably better results. (right) When more gradients are labeled, the Laplacian prior results are similar to the sparse prior.
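The following sketch illustrates the IRLS idea on a 1-D toy analogue of this problem, assuming a mixture of two piecewise-constant signals and a handful of "marked" gradients per layer. The exponent p, the penalty weight lam, and the labeling scheme are illustrative choices for the sketch, not the authors' settings.

    import numpy as np

    def decompose_1d(m, idx1, idx2, p=0.7, n_iter=30, lam=1e3, eps=1e-4):
        # Split a mixed 1-D signal m into layers x1 and x2 = m - x1 under a
        # sparse-gradient prior |.|^p on both layers, optimized by IRLS.
        # idx1 / idx2: indices of gradients labeled as belonging to layer 1 / layer 2.
        n = len(m)
        D = np.diff(np.eye(n), axis=0)            # forward-difference operator, (n-1, n)
        g = D @ m                                 # gradients of the mixture
        x1 = 0.5 * m                              # initial guess
        c = np.zeros(n - 1); t = np.zeros(n - 1)  # labeled-gradient constraints
        c[idx1] = lam; t[idx1] = g[idx1]          # layer-1 edges keep the mixture gradient
        c[idx2] = lam                             # layer-2 edges: zero gradient in layer 1
        for _ in range(n_iter):
            r1 = D @ x1                           # current gradients of layer 1
            r2 = g - r1                           # current gradients of layer 2
            w1 = np.maximum(np.abs(r1), eps) ** (p - 2.0)   # reweighting from previous step
            w2 = np.maximum(np.abs(r2), eps) ** (p - 2.0)
            lhs = D.T @ ((w1 + w2 + c)[:, None] * D)
            rhs = D.T @ (w2 * g + c * t)
            x1 = np.linalg.lstsq(lhs, rhs, rcond=None)[0]   # one reweighted LS step
        return x1, m - x1

    # Toy mixture of two step signals; the step at index 19 is marked as layer 1,
    # the step at index 39 as layer 2.
    m = np.concatenate([np.zeros(20), np.ones(30)]) + np.concatenate([np.zeros(40), 0.5 * np.ones(10)])
    x1, x2 = decompose_1d(m, idx1=[19], idx2=[39])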
E. CONSTRAINED OPTIMIZATION TECHNIQUE
It is difficult to directly measure the relevant physical quantities from images without prior knowledge [4]. Such physical quantities could be estimated indirectly by incorporating them as unknown variables into an optimization formulation for reflection separation, but this would make the formulation over-complicated. To address these issues, a reflection model based on a smooth alpha-matte assumption is introduced.

a. Reflection Model and Assumptions
The input consists of multiple polarized images captured from the same viewpoint but with different rotation angles of the polarizer. For each input image, the effect of reflection is modeled, for each of the three color channels, as

    I_i(x) = α_i(x) R(x) + B(x)   (12)

where I_i, R and B are the input image, the reflection layer and the background layer, respectively, x denotes pixel coordinates, i is an image index, and α_i is a matte that represents the amount of reflection remaining in each polarized input image. Two assumptions are made:
1. The gradients of the reflection layer and those of the background layer are mutually exclusive.
2. The spatial variation of α_i within an image is smooth, that is, ∇α_i(x) = 0.
The second assumption comes from the fact that the method targets planar (smooth) surface reflection, for which α_i varies smoothly with the angle of incidence and other physical quantities.

b. Gaussian Pyramid Construction
Each input image I_i(x) is downsampled to construct a Gaussian image pyramid. At each scale, the mask image and the reflection guide map are built; this pyramid serves as the base for the subsequent operations.

c. Guide Map and Mask Image Computation
Differentiating the reflection model gives

    ∇I_i(x) = R(x) ∇α_i(x) + α_i(x) ∇R(x) + ∇B(x)   (13)

where ∇ = (∂/∂x, ∂/∂y)^T is the gradient operator. Since the spatial variation of α_i within an image is smooth (∇α_i(x) = 0), this equation can be rewritten as

    ∇I_i(x) = α_i(x)∇R(x) or ∇B(x),    if max_j |∇I_j(x)| ≥ t
    ∇I_i(x) = α_i(x)∇R(x) + ∇B(x),     otherwise   (14)

where max_j |∇I_j(x)| is the maximum gradient magnitude among all ∇I_i(x) and t is the threshold for image gradients in the first assumption. The threshold is determined by selecting the top two percent of pixels with the largest gradient magnitudes among all pixels in the input images. According to Eq. (14), the contribution of ∇B(x) to ∇I_i(x) is fixed for all input images, while that of ∇R(x) varies with the values of α_i(x). Hence, if the variance of ∇I_i(x) over the input images is large, the gradient ∇I_i(x) is likely to come from the reflection layer; if the variance is small, it is likely to come from the background layer. Therefore the large-gradient pixels, that is, the pixels with max_j |∇I_j(x)| ≥ t, can be classified into the two layers depending on their gradient variances over the images. A mask image M(x) identifies the pixels with large gradients: M(x) = 1 if max_j |∇I_j(x)| ≥ t, and M(x) = 0 otherwise. The mask image consists of two parts, M_R(x) and M_B(x), which indicate the large-gradient pixels from the reflection layer and from the background layer, respectively.
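A rough NumPy sketch of this classification step is shown below. It assumes the gradient magnitudes of the M polarized inputs are stacked into one array; the top-2% threshold follows the description above, while using the median variance as the split between the two layers is a simplification of this sketch rather than the paper's exact rule.

    import numpy as np

    def classify_large_gradients(grad_mags, top_percent=2.0):
        # grad_mags: array of shape (M, H, W) holding |∇I_i(x)| for each polarized input.
        g_max = grad_mags.max(axis=0)                       # max_j |∇I_j(x)|
        t = np.percentile(g_max, 100.0 - top_percent)       # top-2% threshold
        mask = g_max >= t                                    # M(x): large-gradient pixels

        g_var = grad_mags.var(axis=0)                        # variance over the M inputs
        split = np.median(g_var[mask])                       # illustrative split value
        m_r = mask & (g_var > split)                         # varying  -> reflection layer
        m_b = mask & (g_var <= split)                        # constant -> background layer
        return mask, m_r, m_b

    # Toy usage with random gradient magnitudes for M = 4 polarized inputs.
    grads = np.random.default_rng(0).random((4, 128, 128))
    mask, m_r, m_b = classify_large_gradients(grads)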
d. Optimization Formulation
Reflection separation is formulated as an energy minimization problem. The energy function is derived from Bayes' rule for a maximum a posteriori (MAP) estimate, together with the soft constraints from the reflection guide map:

    {α_i, R, B} = argmax  Σ_i L(I_i | α_i, R, B) + λ_α Σ_i L(α_i) + λ_R L(R) + λ_B L(B)   (15)

where

    L(I_i | α_i, R, B) = − Σ_x ‖I_i(x) − (α_i(x)R(x) + B(x))‖²   (16)

    L(α_i) = − Σ_x ‖α_i(x) − α̃_i(x)‖² − γ_1 Σ_x ‖∇α_i(x)‖²   (17)

    L(R) = − Σ_{x∉M_R} ‖∇R(x) − ∇R̃(x)‖² − γ_2 Σ_x ‖∇R(x)‖²   (18)

    L(B) = − Σ_{x∉M_B} ‖∇B(x) − ∇B̃(x)‖² − γ_2 Σ_x ‖∇B(x)‖²   (19)

L(I_i | α_i, R, B) is the data term, and L(α_i), L(R) and L(B) are the terms that express the soft constraints and the regularization for the unknowns α_i(x), R(x) and B(x), respectively; α̃_i, ∇R̃ and ∇B̃ denote the estimates provided by the reflection guide map. λ_α, λ_R and λ_B are weights for L(α_i), L(R) and L(B), while γ_1 and γ_2 balance the soft constraints against the regularization for α_i(x), R(x) and B(x).

3. RESULTS AND DISCUSSION
For each reconstructed layer, the root mean square error (RMSE) is calculated, which quantifies the difference between an estimated image and the ground-truth image. Table 1 summarizes the RMSEs for the example illustrated in Fig. 8, where the RMSEs are computed with respect to the ground-truth layers.

Fig. 8. Reflection separation results for a synthetic example with spatially varying mattes: (a) MAP, (b) intensity differentiation, (c) sparsity prior, (d) ICA, (e) input images, (f) ground truth (from left to right).

Table 1. RMSE comparison of the four methods.

    Method                          RMSE (background layer)   RMSE (reflection layer)
    (a) MAP                                  6.11                      9.48
    (b) Intensity differentiation           12.21                     19.67
    (c) Sparsity prior                      12.02                     13.43
    (d) ICA                                 76.65                     69.85

The MAP method achieved the lowest RMSE of the four methods; that is, it generated the separation results closest to the ground-truth layers.

4. CONCLUSION
In this paper, a brief literature survey of reflection separation methods is presented. The RMSE performance of the MAP, intensity differentiation, sparsity prior and ICA techniques is analysed; their background-layer RMSEs are 6.11, 12.21, 12.02 and 76.65, respectively, so the MAP estimate provides better performance than the other three methods. In future work, the contrast of the image could be enhanced to further improve the RMSE: contrast enhancement recovers hidden details of the image and can therefore produce superior reflection separation results.

ACKNOWLEDGEMENT
I would like to express my gratitude to all those who made it possible for me to complete this paper. I owe a sincere prayer to the LORD ALMIGHTY for his kind blessings and full support in doing this work, without which it would not have been possible.

REFERENCES
[1] Shinji Umeyama and Guy Godin, "Separation of Diffuse and Specular Components of Surface Reflection by Use of Polarization and Statistical Analysis of Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 26, No. 5, May 2004.
[2] Robby T. Tan, Ko Nishino, and Katsushi Ikeuchi, "Separating Reflection Components Based on Chromaticity and Noise Analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 26, No. 10, October 2004.
[3] Robby T. Tan and Katsushi Ikeuchi, "Separating Reflection Components of Textured Surfaces Using a Single Image," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 2, February 2005.
[4] Anat Levin and Yair Weiss, "User Assisted Separation of Reflections from a Single Image Using a Sparsity Prior," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 29, No. 9, September 2007.
[5] Efrat Be'ery and Arie Yeredor, "Blind Separation of Superimposed Shifted Images Using Parameterized Joint Diagonalization," IEEE Transactions on Image Processing, Vol. 17, No. 3, March 2008.
[6] Naejin Kong, Yu-Wing Tai, and Sung Yong Shin, "High-Quality Reflection Separation Using Polarized Images," IEEE Transactions on Image Processing, Vol. 20, No. 12, December 2011.
[7] H. Fujikake, K. Takizawa, T. Aida, H. Kikuchi, T. Fujii, and M. Kawakita, "Electrically-controllable liquid crystal polarizing filter for eliminating reflected light," Opt. Rev., vol. 5, no. 2, pp. 93–98, 1998.
[8] N. Ohnishi, K. Kumaki, T. Yamamura, and T. Tanaka, "Separating real and virtual objects from their overlapping images," in Proc. ECCV, 1996, vol. 1065, pp. 636–646.
[9] A. Levin and Y. Weiss, "User assisted separation of reflections from a single image using a sparsity prior," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 9, pp. 1647–1654, Sep. 2007.
[10] B. Olshausen and D. Field, "Emergence of simple-cell receptive field properties by learning a sparse code for natural images," Nature, vol. 381, pp. 607–608, 1996.
[11] A. M. Bronstein, M. M. Bronstein, M. Zibulevsky, and Y. Y. Zeevi, "Sparse ICA for blind separation of transmitted and reflected images," Int. J. Imaging Syst. Technol., vol. 15, no. 1, pp. 84–91, 2005.
[12] K. Gai, Z. W. Shi, and C. S. Zhang, "Blind separation of superimposed images with unknown motions," in Proc. IEEE CVPR, 2009, pp. 1881–1888.
[13] Y. Y. Schechner, J. Shamir, and N. Kiryati, "Polarization-based decorrelation of transparent layers: The inclination angle of an invisible surface," in Proc. IEEE ICCV, 1999, pp. 814–819.
[14] Y. Y. Schechner, J. Shamir, and N. Kiryati, "Polarization and statistical analysis of scenes containing a semireflector," J. Opt. Soc. Amer., vol. 17, no. 2, pp. 276–284, Feb. 2000.
[15] H. Farid and E. Adelson, "Separating reflections from images by use of independent components analysis," J. Opt. Soc. Amer., vol. 16, pp. 2136–2145, 1999.
[16] A. Levin, A. Zomet, and Y. Weiss, "Separating reflections from a single image using local features," in Proc. IEEE CVPR, 2004, pp. I:306–I:313.
[17] A. Agrawal, R. Raskar, S. Nayar, and Y. Li, "Removing photography artifacts using gradient projection and flash-exposure sampling," ACM Trans. Graph., vol. 24, pp. 828–835, Jul. 2005.
[18] A. Agrawal, R. Raskar, and R. Chellappa, "Edge suppression by gradient field transformation using cross-projection tensors," in Proc. IEEE CVPR, 2006, pp. II:2301–II:2308.
[19] M. Irani, B. Rousso, and S. Peleg, "Computing occluding and transparent motions," Int. J. Comput. Vis., vol. 12, no. 1, pp. 5–16, Feb. 1994.
[20] R. Szeliski, S. Avidan, and P. Anandan, "Layer extraction from multiple images containing reflections and transparency," in Proc. IEEE CVPR, 2000.
[21] K. Gai, Z. Shi, and C. Zhang, "Blindly separating mixtures of multiple layers with spatial shifts," in Proc. IEEE CVPR, 2008, pp. 1–8.
[22] Y. Y. Schechner, N. Kiryati, and J. Shamir, "Blind recovery of transparent and semireflected scenes," in Proc. IEEE CVPR, 2000, pp. I:38–I:43.
[23] Y. Tsin, S. Kang, and R. Szeliski, "Stereo matching with reflections and translucency," in Proc. IEEE CVPR, 2003, pp. 702–709.