
Digital Art for Object Removal in Photographs Using Image Inpainting Technique

Ishita N. Theba

Assistant Professor

Computer Engineering Department

A.D. Patel Institute of Technology, Anand, Gujarat.

thebaishita@yahoo.com

ABSTRACT: In this digital era, a variety of techniques have evolved for digital image and video processing, and industries including photography, gaming, animation, and movie production make extensive use of them. This paper presents a digital technique to remove objects from a photograph and then reconstruct the image in such a way that the removal is unnoticeable to humans in the output image. The region of the image from which an object is removed is reconstructed by filling it using the spatial information available in its surroundings. This technique can also be applied to many applications, including scratch removal, region filling, red-eye correction, compression, and overlaid text removal from a digital image.

Keywords

Digital image, object removal and reconstruction, overlaid text removal, exemplar based image inpainting.

1. INTRODUCTION

Digital art is artistic work that makes use of digital technology as a significant part of the creative or presentation process. Traditionally, this work was carried out manually by artists. Image inpainting is a process to modify or reconstruct a digital image, and it is a significant technique in the field of image processing. The main objective of image inpainting is to reconstruct the damaged or missing portion of an image.

Inpainting techniques can be applied to object removal by first removing the object from the input image and then filling that area with the information available in its background.

For the object removal application, the original image and a mask image are given as input. The mask image marks the region from which the object is to be removed. The image is then reconstructed by filling in that region, and an output image is obtained. This process is shown in figure-1.
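As a concrete illustration of these inputs (a minimal sketch, not taken from the paper; all array names and values here are hypothetical), the original image can be represented as a numeric array and the mask as a boolean array whose true entries mark the object to be removed:

```python
import numpy as np

# Hypothetical 5x5 grayscale "photograph" and a binary mask marking
# the object to remove (True = target region, False = source region).
image = np.full((5, 5), 100, dtype=float)
mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True  # object occupies the 3x3 centre block

# The source region is everything the mask does not cover.
source_pixels = image[~mask]
print(mask.sum())           # 9 target pixels to be filled
print(source_pixels.size)   # 16 known pixels
```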

Original Image    Mask Image    Output Image

Fig-1: Process depicting object removal using image inpainting.

The technique of digital image inpainting for object removal is also used to produce stunning visual effects in an image. Researchers have proposed several techniques for image inpainting, based on two main approaches:

(i) Diffusion based image inpainting

(ii) Exemplar based image inpainting

1.1 Diffusion Based Image Inpainting

In diffusion based image inpainting, information from the source (surrounding) area is diffused into the target region. This results in image smoothing: blurring becomes noticeable in the resultant image if a large area needs to be inpainted. Moreover, texture is not reproduced correctly by diffusion based techniques.
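The diffusion idea can be sketched as a toy (an illustrative averaging scheme, not any of the cited algorithms): each unknown pixel is repeatedly replaced by the mean of its four neighbours, which fills the hole smoothly but also produces exactly the blurring described above.

```python
import numpy as np

def diffuse_inpaint(image, mask, iters=200):
    """Toy diffusion inpainting: iteratively average 4-neighbours into the
    masked (unknown) region; known pixels are never modified."""
    out = image.copy()
    out[mask] = 0.0
    for _ in range(iters):
        # 4-neighbour average via shifted copies (np.roll wraps at the edges,
        # which is acceptable here because the hole is interior).
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = avg[mask]  # only unknown pixels are updated
    return out

img = np.full((7, 7), 50.0)
img[:, 4:] = 150.0           # a sharp step edge between two flat regions
hole = np.zeros((7, 7), dtype=bool)
hole[2:5, 2:5] = True        # hole straddling the edge
filled = diffuse_inpaint(img, hole)
```

After filling, the pixels inside the hole take values strictly between 50 and 150: the sharp edge is not reproduced, illustrating why diffusion blurs texture and structure over large regions.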

Various algorithms have been proposed based on the diffusion approach. In the Partial Differential Equation (PDE) based algorithm [1], information is propagated in the direction of minimal change using isophote lines, and the geometric structure is preserved well. This algorithm is suitable when a small region needs to be inpainted. Image inpainting has also been carried out using the Navier-Stokes equations [2], exploiting the remarkable relationship between the steady-state solution of the stream function in fluid flow and the (grayscale) image intensity in image processing. That algorithm works well in terms of solution quality and computational efficiency, but its stability remained an issue. Another model, the Total Variation (TV) inpainting model [3], uses the Euler-Lagrange equation and anisotropic diffusion based on the strength of the isophote; it is used when a reasonably small region is to be inpainted, for example in noise removal applications. Its drawback is that it is unable to recover curved structures in a partially degraded image. To handle curved structures in a better way, an extended TV model called the CDD model [4] was proposed, which incorporates the curvature information of the isophote. A fast marching method was proposed in [5]; it is faster and simpler than other PDE based algorithms and is used for applications including image segmentation and restoration.

1.2 Exemplar Based Image Inpainting

Exemplar-based techniques [6], [7], [8] are capable of effectively reconstructing damaged textured areas in a digital image. These techniques work by selecting a patch from the source region and then filling in the target region pixel by pixel from the selected patch. It is an iterative process that continues until the whole target region is completely filled. The advantage of these techniques is that textured areas are reconstructed properly. To preserve linear structures while reconstructing textured areas, an algorithm was proposed by Criminisi et al. [9]. This algorithm produces good results by giving priority to the linear structures and then reconstructing the textured areas.

This technique was extended in [10] by incorporating Bezier curves to construct the missing edge information. Once the structure information is determined, the broken contours are connected in a curve fitting process and the damaged region is inpainted using exemplar based image inpainting. Using this technique, curved structures are synthesized effectively.

2. ALGORITHM DESCRIPTION

The exemplar based image inpainting method efficiently reconstructs an image by preserving both structure and texture information during reconstruction. It can remove a large object from an image and fill the region with background information in such a way that the removal is not detectable by humans. In the algorithm proposed in [9], the user selects the region of the image from which the object is to be removed. This region is considered the target region and is denoted by Ω. The source region contains the known information and is denoted by Φ; it is the entire image I minus the target region Ω (Φ = I - Ω). The boundary, or contour, of the target region is denoted by δΩ; it is formed by the pixels of the target region that have at least one neighboring pixel in the source region. A patch is constructed for each pixel on the boundary by taking that boundary pixel as the center pixel. The priority of each of these patches is calculated as the product of a data term D(p) and a confidence term C(p). The confidence term C(p) measures the proportion of already-filled (reliable) pixels in a patch. The data term D(p) encourages the synthesis of linear structures, so that they are propagated first into the target region.
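A minimal sketch of extracting the fill front δΩ from a boolean mask of Ω (the helper name and the toy arrays are assumptions for illustration):

```python
import numpy as np

def fill_front(target):
    """Return coordinates of the fill front: pixels of the target region
    that have at least one 4-neighbour in the source region."""
    h, w = target.shape
    front = []
    for y in range(h):
        for x in range(w):
            if not target[y, x]:
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not target[ny, nx]:
                    front.append((y, x))
                    break
    return front

target = np.zeros((7, 7), dtype=bool)
target[2:5, 2:5] = True            # 3x3 target region
boundary = fill_front(target)
print(len(boundary))               # 8: the outer ring of the 3x3 block
```

The centre pixel of the 3x3 block is not on the front, since all of its neighbours are still unknown; it joins the front only after the surrounding ring is filled.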

D(p) = |∇I⊥p · np| / α

Fig-2: Input Image I

The size of the template window Ψ must be specified in the algorithm; the default window size is 9 x 9 pixels. If the image contains a texture element larger than this default, the window size should be set slightly larger.

The filling order in the target region is determined by patch priority. The algorithm iterates the following three steps until all pixels in the target region are filled.

2.1 Computing Patch Priorities on the Boundary

Filling in the target region is carried out using a best-first filling strategy. In this strategy, the priority value of each patch on the boundary is calculated based on two criteria: (i) whether the patch lies on the continuation of strong edges, and (ii) how many of its pixels are already filled with known information. For a pixel p, where p ∈ δΩ, the priority P(p) of the patch Ψp is calculated as the product of the confidence term C(p) and the data term D(p).

P(p) = C(p) x D(p) (1)

The confidence term C(p) and the data term D(p) are calculated as follows:

C(p) = ( Σ q∈Ψp∩(I−Ω) C(q) ) / |Ψp|    (2)

D(p) = |∇I⊥p · np| / α    (3)

where |Ψp| is the area of the patch Ψp, α is a normalization factor (e.g. α = 255 for a typical grayscale image), and np is a unit vector orthogonal to the front δΩ at the point p. The patch priority P(p) is calculated for every patch, with a distinct patch for each pixel p on the boundary of the target region. During initialization, the confidence term is set to zero for pixels belonging to the target region and to one for pixels belonging to the source region. The confidence term C(p) is a measure of the amount of reliable information surrounding the pixel p [9]. The data term is a function of the strength of the isophotes hitting the front of the boundary at each iteration [9]. This term encourages linear structures to be preserved first, after which the texture information is synthesized.
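The priority computation of equations (1)-(3) can be sketched as follows (a simplified illustration, not the paper's MATLAB code: the image is grayscale, the gradient comes from finite differences, and the front normal at p is supplied by hand rather than estimated from δΩ):

```python
import numpy as np

PATCH = 9  # default 9 x 9 template window

def confidence(conf, known, p, half=PATCH // 2):
    """C(p) (eq. 2): sum of confidences of already-filled pixels in the
    patch centred at p, divided by the patch area |Psi_p|."""
    y, x = p
    c = conf[y - half:y + half + 1, x - half:x + half + 1]
    k = known[y - half:y + half + 1, x - half:x + half + 1]
    return c[k].sum() / c.size

def data_term(grad_y, grad_x, normal, p, alpha=255.0):
    """D(p) (eq. 3): |isophote at p . n_p| / alpha, where the isophote
    direction is the image gradient rotated by 90 degrees."""
    y, x = p
    isophote = np.array([-grad_x[y, x], grad_y[y, x]])  # (dy, dx) ordering
    return abs(isophote @ normal) / alpha

def priority(conf, known, grad_y, grad_x, normal, p):
    """P(p) = C(p) * D(p) (eq. 1)."""
    return confidence(conf, known, p) * data_term(grad_y, grad_x, normal, p)

# Toy setup: a horizontal intensity ramp with a square target region.
img = np.tile(np.arange(15.0), (15, 1))   # intensity increases left to right
grad_y, grad_x = np.gradient(img)         # finite-difference gradient
known = np.ones((15, 15), dtype=bool)
known[5:10, 5:10] = False                 # target region
conf = known.astype(float)                # 1 in source, 0 in target
p = (5, 7)                                # a pixel on the top edge of the front
normal = np.array([-1.0, 0.0])            # assumed unit normal, pointing up
P = priority(conf, known, grad_y, grad_x, normal, p)
```

For this ramp the isophotes are vertical lines; at the top edge of the hole they cross the front perpendicularly, so D(p) takes its maximal value 1/255 for unit gradient, and P(p) is that value scaled by the fraction of known pixels in the patch.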

2.2 Filling in the Texture and Structure Information

Once the highest priority patch Ψp is obtained, it is filled with information from the source region Φ. If a diffusion process were used to propagate pixel values from the source region, blurring would occur in the reconstructed image; instead, image texture is propagated by direct sampling of the source region [9]. The best matching patch Ψq, containing the information most similar to the patch Ψp, is identified in the source region using the following equation:

Ψq = arg min Ψq∈Φ d(Ψp, Ψq)    (4)

where the distance d(Ψp, Ψq) between two generic patches is defined as the sum of squared differences of the already filled pixels in the two patches [9]. Once the source patch Ψq is found, the value of each pixel in the still-unfilled part of Ψp is copied from the corresponding pixel of Ψq. In this manner, structure and texture information is propagated from the source region to the target region one patch at a time.
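A sketch of the search of equation (4) (an exhaustive scan over a toy grayscale array; a real implementation would restrict the scan to a search window, and the function and array names here are assumptions):

```python
import numpy as np

def best_match(image, known, p, half=1):
    """Eq. 4: scan the image for the fully-known (source) patch minimising the
    sum of squared differences with the target patch centred at p, where the
    SSD is computed only over the target patch's already-filled pixels."""
    ty, tx = p
    tp = image[ty - half:ty + half + 1, tx - half:tx + half + 1]
    tk = known[ty - half:ty + half + 1, tx - half:tx + half + 1]
    best, best_d = None, np.inf
    h, w = image.shape
    for y in range(half, h - half):
        for x in range(half, w - half):
            sk = known[y - half:y + half + 1, x - half:x + half + 1]
            if not sk.all():                  # candidate must lie entirely in Phi
                continue
            sp = image[y - half:y + half + 1, x - half:x + half + 1]
            d = ((tp[tk] - sp[tk]) ** 2).sum()   # SSD over known pixels only
            if d < best_d:
                best, best_d = (y, x), d
    return best, best_d

# Toy example: dark and bright stripes, one unknown pixel on the bright side.
img = np.zeros((6, 6))
img[:, 3:] = 9.0
known = np.ones((6, 6), dtype=bool)
known[2, 4] = False                           # target pixel to be filled
best, best_d = best_match(img, known, (2, 4))
```

The best match is found in the bright stripe, with SSD 0 over the known ring of the target patch, so the unknown pixel would be filled with the bright value.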

2.3 Modification of Confidence Values

After the patch Ψp is filled with pixel values, the confidence term is updated as follows:

C(q) = C(p)  for all q ∈ Ψp ∩ Ω    (5)

This simple update measures the relative confidence of patches on the fill front. The boundary list is also updated with these pixel values. Confidence values decay as filling proceeds, indicating that we are less certain about the color values of pixels near the center of the target region.
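The update of equation (5) can be sketched as follows (array names are assumptions, and C(p) of the patch centre is assumed to have been computed before filling):

```python
import numpy as np

def update_confidence(conf, known, p, c_p, half=1):
    """Eq. 5: after filling the patch centred at p, every newly filled pixel
    inherits the patch centre's confidence C(p) and is then marked as known."""
    y, x = p
    patch_c = conf[y - half:y + half + 1, x - half:x + half + 1]   # view
    patch_k = known[y - half:y + half + 1, x - half:x + half + 1]  # view
    newly_filled = ~patch_k
    patch_c[newly_filled] = c_p      # decayed confidence for inpainted pixels
    patch_k[newly_filled] = True     # these pixels now count as source pixels

known = np.ones((5, 5), dtype=bool)
known[1:4, 1:4] = False              # 3x3 target region
conf = known.astype(float)           # 1 in source, 0 in target
update_confidence(conf, known, (2, 1), c_p=0.6)
```

After the call, the pixels of the filled 3x3 patch carry confidence 0.6 (less than the source value 1.0, showing the decay), while target pixels outside the patch still have confidence 0.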

Fig-3: Propagation of structure using exemplar-based algorithm

3. IMPLEMENTATION DETAILS

The user provides the original image and the mask image as input to the computation. The mask image specifies the target region that is to be filled with information from the surrounding area. The boundary consists of the pixels that have a neighboring pixel in the target region, and a vector containing the list of boundary points is formed.

Different patches are formed, each taking a boundary pixel as its center pixel. These patches contain some portion of the source region and some portion of the target region. The highest priority patch is found, i.e., the one with most of its pixels filled with source-region information. The best matching patch is then searched for in the source region using the sum of squared differences (SSD) of color values as the similarity measure. Information from the source patch is copied into the highest priority target patch pixel by pixel: the parts of the target patch that lie in the target region are filled using information from the source patch.

Once the target patch is filled, the boundary list is updated: boundary points that fall inside the filled patch are removed, and pixels on the boundary of the target patch are added to the list if they are not yet filled. This process is carried out iteratively, and the boundary list is updated as filling proceeds. The whole target region is filled when the boundary list is empty, and the inpainted image is obtained as output.

The algorithm can be summarized as follows:

1. Initialize the confidence values: zero for all pixels belonging to the inpainting domain (target region), one for all other pixels.

2. Until all pixels in the target region are filled, repeat steps (i) to (v):

(i) Search for the boundary of the inpainting domain; if the boundary set is empty, exit.

(ii) Form the patches on the boundary and compute the priority of each to find the highest priority target patch, denoted P.

(iii) Find the best matching patch to P within the search window.

(iv) Copy to P only those pixels of the best matching patch whose corresponding pixels in P have confidence value 0.

(v) Update the confidence values of the pixels of patch P.
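The summarized steps can be tied together in a compact toy sketch (a simplification, not the paper's code: the priority here uses the confidence term only, omitting the data term D(p), with 3x3 patches and an exhaustive source scan; the paper's implementation is in MATLAB):

```python
import numpy as np

HALF = 1  # 3x3 patches for this toy sketch

def inpaint(image, target):
    """Toy version of the summarized algorithm: confidence-only priority."""
    img = image.astype(float)
    known = ~target                    # step 1: confidence 1 in source, 0 in target
    conf = known.astype(float)
    h, w = img.shape
    while not known.all():
        # Steps (i)-(ii): highest-priority patch centre on the fill front.
        best_p, best_c = None, -1.0
        for y in range(HALF, h - HALF):
            for x in range(HALF, w - HALF):
                if known[y, x]:
                    continue
                k = known[y - HALF:y + HALF + 1, x - HALF:x + HALF + 1]
                if not k.any():        # interior pixel, not yet on the front
                    continue
                c = conf[y - HALF:y + HALF + 1, x - HALF:x + HALF + 1][k].sum() / k.size
                if c > best_c:
                    best_p, best_c = (y, x), c
        py, px = best_p
        tp = img[py - HALF:py + HALF + 1, px - HALF:px + HALF + 1]
        tk = known[py - HALF:py + HALF + 1, px - HALF:px + HALF + 1].copy()
        # Step (iii): best fully-known source patch by SSD over known pixels.
        best_q, best_d = None, np.inf
        for y in range(HALF, h - HALF):
            for x in range(HALF, w - HALF):
                sk = known[y - HALF:y + HALF + 1, x - HALF:x + HALF + 1]
                if not sk.all():
                    continue
                sp = img[y - HALF:y + HALF + 1, x - HALF:x + HALF + 1]
                d = ((tp[tk] - sp[tk]) ** 2).sum()
                if d < best_d:
                    best_q, best_d = sp.copy(), d
        # Steps (iv)-(v): copy only unfilled pixels, update confidence and mask.
        unfilled = ~tk
        tp[unfilled] = best_q[unfilled]
        conf[py - HALF:py + HALF + 1, px - HALF:px + HALF + 1][unfilled] = best_c
        known[py - HALF:py + HALF + 1, px - HALF:px + HALF + 1] = True
    return img

flat = np.full((9, 9), 10.0)
hole = np.zeros((9, 9), dtype=bool)
hole[3:6, 3:6] = True
flat[hole] = 0.0                       # damaged values inside the hole
result = inpaint(flat, hole)
```

On this trivially flat image, every source patch matches the front exactly, so the hole is filled back to the constant value; on real images the same loop copies whole texture patches instead.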

4. EXPERIMENTAL RESULTS

The exemplar-based image inpainting technique is implemented using MATLAB 7.5.0. The efficiency of the algorithm is tested on a variety of input images, and the results obtained are presented below.

(i) The original image is a non-textured image, and the object (a red colored pin) is removed. The image is reconstructed and the result is shown in the output image.

Original Image    Inpainted Image

Fig-4: Result depicting object (red colored pin) is removed from the non-textured input image and an output image is produced.

(ii) The original image is a textured image, and a person is removed from it. The image is reconstructed by filling in texture from the surroundings. It is noticeable that the linear structure (the base of the mountain area) is given priority and preserved well while reconstructing the texture (the water). The result obtained is good and the texture is synthesized properly in that area.

Original Image    Inpainted Image

Fig-5: Result depicting object (a person) is removed from the textured input image and an output image is produced.

(iii) The original image contains overlaid text. The text is removed in the output image and the result is depicted below.

Original Image    Inpainted Image

Fig-6: Result depicting overlaid text is removed from the input image and an output image is produced.

(iv) The original image contains an object (to be removed) too close to another object. The object (a chalk) is removed and the image is reconstructed. Some artifacts are produced in the output image and the result quality is not good.

Original Image    Inpainted Image

Fig-7: Result depicting object (chalk) is removed from the input image and an output image is produced.

(v) The original image contains an object that casts its own shadow. After reconstructing the image, blurring is noticeable in that area because the target region is filled with surrounding information containing the shadow. The output quality is degraded.

Original Image    Inpainted Image

Fig-8: Result depicting object (charger) having a shadow is removed from the input image and an output image is produced.

Moreover, there are situations where a stamped date in a photograph needs to be removed and the area reconstructed in such a way that the removal becomes unnoticeable to humans. In such cases too, the exemplar based inpainting algorithm works efficiently.

A summarized table is shown below describing the result quality obtained when a variety of input images are provided.

Table-1: Summarized table depicting quality of result obtained when a variety of input images are taken.

Sr. No. | Characteristic of input image | Remarks on quality of output image
1 | Non-textured image | After reconstructing the image, the output quality is good, as shown in fig-4.
2 | Textured image | After reconstruction, texture is properly synthesized, as shown in fig-5.
3 | Original textured image contains overlaid text | The overlaid text is removed and texture is properly synthesized. The overall quality of the output image is good, as shown in fig-6.
4 | Original image contains objects too close to each other, and one object is to be removed | After reconstruction, some artifacts are produced. The output image quality is not good, as shown in fig-7.
5 | Original image contains an object having a shadow | After inpainting, blurring is noticed in the area from which the object was removed. The output image quality is not good, as shown in fig-8.

5. CONCLUSION

This paper presents an exemplar based image inpainting technique for object removal from an original image. The experimental results show that the algorithm works efficiently for the object removal application on both textured and non-textured input images. Some cases are presented where the algorithm could not produce acceptable results. Future work includes modifying the algorithm so that it produces accurate results for a wider variety of input images.

REFERENCES

[1] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester, "Image inpainting," in Proceedings of SIGGRAPH, 2000, pp. 417-424.

[2] M. Bertalmio, A. L. Bertozzi, and G. Sapiro, "Navier-Stokes, fluid dynamics, and image and video inpainting," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Kauai, HI, Dec. 2001, vol. I, pp. I-355-I-362.

[3] T. Chan and J. Shen, "Local inpainting models and TV inpainting," SIAM Journal on Applied Mathematics, vol. 62, 2001, pp. 1019-1043.

[4] T. Chan and J. Shen, "Non-texture inpainting by curvature-driven diffusions," Journal of Visual Communication and Image Representation, vol. 4, 2001, pp. 436-449.

[5] A. Telea, "An image inpainting technique based on the fast marching method," Journal of Graphics Tools, vol. 9, no. 1, ACM, 2004.

[6] M. Ashikhmin, "Synthesizing natural textures," in Proc. ACM Symposium on Interactive 3D Graphics, Research Triangle Park, NC, March 2001, pp. 217-226.

[7] A. Efros and T. Leung, "Texture synthesis by non-parametric sampling," in Proc. Int. Conf. Computer Vision, Kerkyra, Greece, September 1999, pp. 1033-1038.

[8] A. Efros and W. T. Freeman, "Image quilting for texture synthesis and transfer," in Proc. ACM Conf. Computer Graphics (SIGGRAPH), August 2001, pp. 341-346.

[9] A. Criminisi, P. Pérez, and K. Toyama, "Region filling and object removal by exemplar-based image inpainting," IEEE Transactions on Image Processing, vol. 13, no. 9, Sep. 2004.

[10] Jason C. Hung, Chun-Hong Hwang, Yi-Chun Liao, Nick C. Tang, and Ta-Jen Chen, "Exemplar-based image inpainting based on structure construction," Journal of Software, vol. 3, no. 8, November 2008.
