International Journal of Engineering Trends and Technology (IJETT) – Volume 29 Number 7- November 2015
Conversion of a 2-D Image to 3-D Image and Processing the
Image Based on Coded Structured Light
T. Gnana Prakash¹, G. Ananth Rao²
¹Assistant Professor, CSE Dept., VNR VJIET, Hyderabad, India
²Associate Professor, CSE Dept., SGIET, Markapur, A.P., India
Abstract— Structured light imaging systems have been used effectively for precise measurement of 3-D surfaces in computer vision. Their applications are mostly restricted to scanning stationary objects, since tens of images must be captured to reconstruct a single 3-D scene. This work presents an idea for real-time acquisition of 3-D surface data by a specially coded imaging system. To achieve 3-D measurement of a dynamic scene, the data acquisition must be accomplished with only a single image. A principle of uniquely color-encoded pattern projection is proposed to design a color matrix that improves the reconstruction efficiency. The matrix is produced by a special code sequence and a number of state transitions. A color projector is controlled by a computer to generate the desired color patterns in the scene. The unique indexing of the light codes is crucial for color projection, since each light grid point must be uniquely identified by incorporating its local neighbourhood. Hence 3-D reconstruction can be performed with only local analysis of the single image. A scheme is presented to describe such an image processing method for fast 3-D data acquisition. Experimental results are provided to analyse the efficiency of the proposed methods.
Key Words— 2D, 3D, 2D-3D conversion, Color-coded structure, Gray scale image, Slide show
I. INTRODUCTION
Computer vision is a very important means of obtaining the 3-D model of an object. During the last 30 years numerous methods for 3-D sensing have been explored. Structured light has progressed from single light-spot projection to complex coded patterns, and consequently the 3-D scanning speed has increased from several hours per image to dozens of images per second. In the early 1980s the first feasible structured light systems originated when binary coding or Gray coding methods were employed.
The pattern resolution increases exponentially through the coarse-to-fine light projections and the stripe gap tends to zero, yet the stripe locations remain easily distinguishable because only a small set of primitives is used. Therefore, the position of a pixel can now be encoded precisely. This method is still the most widely used in structured light systems because of its easy implementation.
The main disadvantage is that such methods cannot be applied to moving surfaces, since multiple patterns must be projected. A technique based on the combination of Gray code and phase shifting is often used for better resolution, with the disadvantage that a larger number of projection patterns (i.e., images) is required. When the aim is to project only one light pattern before capturing a scene image, color stripes are conceived to substitute for multiple black/white projections.
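For illustration, the temporal binary/Gray-coded stripe approach mentioned above can be sketched in a few lines. The Python snippet below is only a minimal sketch of that classical idea, not code from the proposed system; the image size and number of patterns are arbitrary assumptions.

```python
import numpy as np

def gray_code_stripe_patterns(width=640, height=480, num_bits=8):
    """Generate Gray-coded stripe patterns for temporal structured light.

    num_bits patterns encode 2**num_bits stripes; the on/off sequence
    observed at a pixel over time decodes its stripe index.
    """
    stripes = np.arange(width) * (2 ** num_bits) // width   # stripe index per column
    gray = stripes ^ (stripes >> 1)                         # binary-reflected Gray code
    patterns = []
    for bit in range(num_bits - 1, -1, -1):                 # coarse-to-fine order
        row = ((gray >> bit) & 1).astype(np.uint8) * 255
        patterns.append(np.tile(row, (height, 1)))
    return patterns

patterns = gray_code_stripe_patterns()
print(len(patterns), patterns[0].shape)   # 8 patterns of 480 x 640
```

Because the stripes are Gray-coded, adjacent stripes differ in exactly one pattern, which limits decoding errors at stripe boundaries.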
II. 2D TO 3D
Structured light imaging systems have been used effectively for accurate measurement of 3-D surfaces in computer vision, though their applications are mainly limited to scanning stationary objects. This paper presents an idea for real-time acquisition of 3-D surface data by a specially coded imaging system. To achieve 3-D measurement of a dynamic scene, the data acquisition must be accomplished with only a single image. A principle of exclusively color-encoded pattern projection is proposed to design a color matrix that improves the reconstruction efficiency. The matrix is produced by a special code sequence and a set of state transitions.
A computer controls a color projector to generate the desired color patterns in the scene. The unique indexing of the light codes is crucial here for color projection, since each light grid point must be uniquely identified by incorporating its local neighbourhood, so that 3-D reconstruction can be performed with only local analysis of a single image. A scheme is presented to describe such an image processing method for fast 3-D data acquisition. Experimental performance results are provided to analyse the efficiency of the proposed methods.
III. COLOR-CODED STRUCTURED LIGHT SYSTEM
The setup of the structured light system, shown in Fig. 1, consists of a CCD camera and a digital projector, similar to a traditional stereo vision system but with the second camera substituted by a light source that projects a known pattern of light onto the scene while the camera captures the illuminated scene. The required 3-D information can be obtained by analysing the deformation of the imaged pattern with respect to the projected one. Here, the correspondences between the projected pattern and the imaged one can be
solved directly by codifying the projected pattern, so that each projected light point carries some information. When the point is imaged on the image plane, this information can be used to determine its coordinates on the projected pattern [1].
According to the perspective transformation principle, the image coordinates and the assigned code word of a spatial point correspond to its world coordinates. A mapping relation must be established between an image point in the image coordinate system and the spatial point in the world coordinate system. Given the coordinates of a world point, its corresponding image coordinates, and the system calibration parameters, the 3-D information of the surface points can be easily computed.
Effectively, this guarantees that the measurement system has a limited computational cost, since it only needs to analyse a small part of the scene and identify the coordinates by local image processing. Therefore, the acquisition efficiency is greatly improved.
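To make the geometry above concrete, the following sketch intersects the back-projected camera ray of a pixel with the plane of the projector stripe that illuminated it, which is the standard triangulation step in camera-projector structured light. All numbers (intrinsic matrix, stripe-plane parameters, pixel position) are hypothetical placeholders, not calibration values from this system.

```python
import numpy as np

def triangulate_point(cam_K, cam_pixel, plane_normal, plane_d):
    """Intersect the back-projected camera ray of `cam_pixel` with a
    projector stripe plane n.X + d = 0 (expressed in camera coordinates).

    Returns the 3-D surface point, assuming the pixel was lit by that
    stripe (established by decoding the coded pattern).
    """
    u, v = cam_pixel
    ray = np.linalg.inv(cam_K) @ np.array([u, v, 1.0])   # ray direction through the pixel
    t = -plane_d / (plane_normal @ ray)                   # ray-plane intersection parameter
    return t * ray

# Hypothetical calibration: an intrinsic matrix and one stripe plane.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
n = np.array([0.9, 0.0, -0.4])   # placeholder stripe-plane normal
d = 120.0                        # placeholder plane offset
print(triangulate_point(K, (350, 260), n, d))
```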
IV. SEED WORD & FLOOD SEARCHING
Fig. 1: Sensor structure of the color-coded imaging system.
A method is developed for designing grid patterns that meet the practical requirement of unique indexing, so that uncertain occlusions and discontinuities in the scene can be resolved. Let P be a set of color primitives {1, 2, ..., p}, where the numbers represent different colors (e.g., 1 = white, 2 = blue, etc.). These color primitives are assigned to an M*N matrix, denoted M, to form the encoded pattern that is projected onto the scene [1]. A word of M is defined by the color value at a location in M together with the color values of its 4-adjacent neighbors.
If p(i, j) is the assigned color primitive at row i and column j of M, then the word defining this location is the sequence formed by p(i, j) and the colors of its 4-adjacent neighbors, i.e.,
w(i, j) = { p(i, j), p(i-1, j), p(i, j+1), p(i+1, j), p(i, j-1) }.
If a lookup table is maintained for all of the word values in M, then each word defines a unique location in M. An M*N matrix therefore has (M-1)*(N-1) such words, which make up a set W. The color primitives of P are to be assigned to the matrix so that no two words in the matrix are identical and, furthermore, every element has a color different from its adjacent neighbors in its word.
In this way, each defined location is uniquely indexed, and correspondence is therefore not a problem. That is, if the pattern is projected onto a scene and the word value for an imaged point (u, v) is determined (by determining the colors of that imaged point and its 4-adjacent neighbors), then the corresponding position (i, j) in M of this imaged point is uniquely defined. In addition to having each word of M be unique, the color code assignments are also optimized so that the matrix M is as large as possible [1].
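The paper generates the pattern from a special code sequence and state transitions; since those details are not reproduced here, the sketch below simply enforces the two stated constraints by randomized search and builds the word lookup table used for decoding. The matrix size, the seven-color set, and the ordering of neighbors inside a word are illustrative assumptions.

```python
import random

def build_coded_matrix(rows, cols, num_colors=7, seed=0, max_tries=200000):
    """Randomly fill a rows x cols matrix with color primitives 1..num_colors
    so that (a) every cell differs from its 4-adjacent neighbors and
    (b) every interior word (center plus 4 neighbors) is unique.

    Brute-force sketch of the stated constraints, not the code-sequence /
    state-transition generator used in the paper.
    """
    rng = random.Random(seed)
    for _ in range(max_tries):
        M = [[0] * cols for _ in range(rows)]
        for i in range(rows):
            for j in range(cols):
                banned = {M[i - 1][j] if i else 0, M[i][j - 1] if j else 0}
                M[i][j] = rng.choice([c for c in range(1, num_colors + 1) if c not in banned])
        # Build the word lookup table and check uniqueness of all interior words.
        table, ok = {}, True
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                word = (M[i][j], M[i - 1][j], M[i][j + 1], M[i + 1][j], M[i][j - 1])
                if word in table:
                    ok = False
                    break
                table[word] = (i, j)
            if not ok:
                break
        if ok:
            return M, table
    raise RuntimeError("no valid matrix found; increase colors or tries")

M, lookup = build_coded_matrix(8, 12)
# Decoding: the word observed around an imaged grid point indexes its (i, j) in M.
some_word = next(iter(lookup))
print(some_word, "->", lookup[some_word])
```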
For computing the 3-D mesh, we can choose to do it either from the original image data with formula (20) in [1] or simply by interpolating the 3-D coordinates of the known grid points. The CCD camera (PULNIX TMC-9700) has a 1-inch sensor and a 16-mm lens. A 32-bit PCI frame grabber for machine vision (PC2-Vision, Coreco Imaging Co., Ltd.) is used to capture live images at 640 x 480 size.
The main computer is a common PC, with a 2.1-GHz CPU and 512-MB RAM, for image processing and 3-D computation. In the experiments, a 44 * 212 encoded pattern generated from a seven-color set is used to illuminate the scene. The grid size is 25 * 25 pixels. The figure below illustrates an image captured by the camera, in which there are about 30*37 = 1110 grid points. A seed word is identified randomly in the image.
Fig. 2: Image captured under the uniquely encoded light pattern.
The above figure shows the image captured from the scene illuminated by a uniquely encoded light pattern. A random position is generated to find a seed word for the flood search. Net amendment is then performed to deal with unfilled holes and abnormal leaves. In this example, a total of
three seeds were generated automatically, one by one, to obtain the final mesh because of surface discontinuities.
Fig. 5: Cases of mesh amendment for leaves
(deletion).
Fig. 3: After 3-D reconstruction.
Grid points are then detected by a flood-search algorithm. The work is repeated until no large area can yield more points, and the whole net is merged from the results. The amended mesh after detecting isolated holes and abnormal leaves is also illustrated in Fig. 2. Finally, the 3-D mesh is reconstructed after performing the 3-D computation; a typical example is illustrated in Fig. 3.
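The flood-search criteria are not spelled out in this summary, so the snippet below stands in with a generic 4-connected flood fill over a mask of candidate grid points; restarting it from new seeds on unvisited regions mirrors how several seeds were needed across surface discontinuities.

```python
from collections import deque

def flood_search(mask, seed):
    """Collect the 4-connected region of detected grid points around `seed`.

    `mask` is a 2-D list of booleans marking candidate grid-point locations;
    this is a generic flood fill standing in for the paper's flood-search step.
    """
    rows, cols = len(mask), len(mask[0])
    visited = set()
    queue = deque([seed])
    while queue:
        i, j = queue.popleft()
        if (i, j) in visited or not (0 <= i < rows and 0 <= j < cols) or not mask[i][j]:
            continue
        visited.add((i, j))
        queue.extend([(i - 1, j), (i + 1, j), (i, j + 1), (i, j - 1)])
    return visited

# Toy mask with two disconnected regions; a second seed would be needed for the
# other region, mirroring how surface discontinuities are handled.
mask = [[True, True, False, False],
        [True, False, False, True],
        [False, False, True, True]]
print(sorted(flood_search(mask, (0, 0))))   # [(0, 0), (0, 1), (1, 0)]
```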
V. MESH AMENDMENT AND INTERPOLATION
The mesh amendment and grid interpolation procedures are developed in this paper for optimization of the 3-D results. The projection of the coded pattern should result in a regular mesh. However, due to the complexity of the scene and uncertainty in image processing, the constructed grid matrix can have some faults (namely holes and leaves).
After all possible code words have been identified from the image, it is easy to compute the 3-D world coordinates of these points, since the coordinates on both the image (xc, yc) and the projector (xp, yp) are known. This yields a rough 3-D map of the scene. In order to improve the resolution, we may perform an interpolation algorithm on this map. Depending on the application requirements, the interpolation may be carried out only on the segment between two adjacent grid points or inside the square area formed by four regular grid points.
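As a minimal sketch of this refinement step, the following functions interpolate 3-D points linearly along the segment between two adjacent grid points and bilinearly inside the square cell formed by four grid points. The sample coordinates are made up; the paper's exact interpolation scheme may differ.

```python
import numpy as np

def interpolate_segment(p0, p1, num=5):
    """Linearly interpolate 3-D points along the segment between two
    adjacent grid points p0 and p1 (each an (x, y, z) array)."""
    t = np.linspace(0.0, 1.0, num)[:, None]
    return (1.0 - t) * p0 + t * p1

def interpolate_cell(p00, p01, p10, p11, u, v):
    """Bilinear interpolation inside the square cell formed by four grid
    points, with (u, v) in [0, 1] x [0, 1]."""
    top = (1.0 - u) * p00 + u * p01
    bottom = (1.0 - u) * p10 + u * p11
    return (1.0 - v) * top + v * bottom

# Made-up neighbouring grid points from a rough 3-D map.
p00, p01 = np.array([0.0, 0.0, 50.0]), np.array([5.0, 0.0, 52.0])
p10, p11 = np.array([0.0, 5.0, 51.0]), np.array([5.0, 5.0, 54.0])
print(interpolate_segment(p00, p01, num=3))
print(interpolate_cell(p00, p01, p10, p11, 0.5, 0.5))   # cell-centre estimate
```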
To correct these faults, this research develops a Mesh Amendment Procedure [1] to find and amend them. For some cases, it can decide directly whether "insertion" or "deletion" is necessary to amend the net (as illustrated in Figs. 4 and 5). Under a few other conditions, such an operation has to be determined according to the actual image content, with a likelihood measurement (Fig. 6).
Fig. 4: Cases of mesh amendment for holes (insertion).
Fig. 6: Decision based on content likelihood measurement.
VI. RESULTS
Fig. 7: Original image that is to be transformed.
Fig. 8: A colored image transformed to a gray scale image.
Fig. 9: Converting to the colored image.
VII. TEST CASES
System testing consists of the following steps: program(s) testing, string testing, system testing, and user acceptance testing.
Test Case 1: MODULE: Load Image
FILENAME: LOAD.CS
Table 1: Test case for loading image.
Test Case | Input | Obtained Output | Actual Output | Description
Valid Image | File name | Success | Success | Test Passed! Image displayed.
Invalid Image | File name | Failed | Failed | Test Passed! No preview available. Try Again.
Test result: The module 'Load Image' is tested and the module is successfully implemented.
Test Case 2: MODULE: Brightness & Contrast
FILENAME: brightness.cs
Table 2: Test case for Brightness and Contrast module.
Test Case | Input | Obtained Output | Actual Output | Description
Brightness & Contrast | Source Image, value | Success | Success | Test Passed. Image displayed with the new set value.
Brightness & Contrast | Source Image, value | Failed | Failed | Test Passed. Invalid image format. Try Again.
Test result: The module 'Brightness and Contrast' is tested and the module is successfully implemented.
Test Case 3: MODULE: 2D to 3D
FILENAME: From2.cs
Table 3: Test case for 2D to 3D module.
Test Case | Input | Obtained Output | Actual Output | Description
Convert | Source Image, Scale | Success | Success | Test Passed. Image scaled to 3D.
Convert | Source Image, Scale | Failed | Failed | Test Passed. Image distorted, not suitable for conversion. Try Again.
Test result: The module '2D to 3D' is tested and the module is successfully implemented.
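The Load Image, Brightness & Contrast, and 2D to 3D modules above are implemented in C# (LOAD.CS, brightness.cs, From2.cs), which is not reproduced in the paper. As a rough stand-in, the Python sketch below shows a standard luminance-weighted gray scale conversion (as in Fig. 8) and a simple gain/offset brightness-and-contrast adjustment; the weights and parameter values are conventional choices, not the authors' exact implementation.

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an RGB uint8 image to gray scale with standard luminance weights."""
    weights = np.array([0.299, 0.587, 0.114])      # conventional Rec. 601 weights
    return np.clip(rgb.astype(np.float32) @ weights, 0, 255).astype(np.uint8)

def adjust_brightness_contrast(image, brightness=0.0, contrast=1.0):
    """Apply a simple gain/offset adjustment: out = contrast * in + brightness."""
    out = image.astype(np.float32) * contrast + brightness
    return np.clip(out, 0, 255).astype(np.uint8)

# Synthetic 3-channel gradient as a stand-in for the loaded source image.
rgb = np.dstack([np.tile(np.arange(256, dtype=np.uint8), (64, 1))] * 3)
gray = to_grayscale(rgb)
adjusted = adjust_brightness_contrast(gray, brightness=40, contrast=1.2)
print(gray.mean(), adjusted.mean())
```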
VIII. DISCUSSIONS AND CONCLUSIONS
The application converts images that have edges from 2D to 3D. Although these images are converted to gray scale, they can be colored using values passed to RGB. The application provides insight into how time can be saved in generating a 3D image compared with the conventional approach. Also, only one source image is required, unlike the present system, in which source images must be taken from different directions to generate 3D. These 3D images can be used as copyright (watermarked) images [1]. This provides security and minimizes misuse of copyright images. Multimedia software can now incorporate this concept to generate 3D images.
REFERENCES
[1] S. Y. Chen, Y. F. Li, and J. Zhang, "Vision processing for realtime 3-D data acquisition based on coded structured light," IEEE Trans. Image Process., vol. 17, no. 2, pp. 167–176, Feb. 2008.
[2] M. Ribo and M. Brandner, "State of the art on vision-based structured light systems for 3D measurements," in Proc. IEEE Int. Workshop on Robotic Sensors: Robotic and Sensor Environments, Ottawa, ON, Canada, Sep. 2005, p. 2.
[3] J. Salvi, J. Pagès, and J. Batlle, "Pattern codification strategies in structured light systems," Pattern Recognit., vol. 37, no. 4, pp. 827–849, Apr. 2004.
[4] D. Desjardins and P. Payeur, "Dense stereo range sensing with marching pseudo-random patterns," in Proc. 4th Canad. Conf. Computer and Robot Vision, May 2007, pp. 216–226.
[5] F. Blais, "Review of 20 years of range sensor development," J. Electron. Imag., vol. 13, no. 1, pp. 231–240, 2004.
[6] S. Osawa, "3-D shape measurement by self-referenced pattern projection method," Measurement, vol. 26, pp. 157–166, 1999.
[7] C. S. Chen, Y. P. Hung, C. C. Chiang, and J. L. Wu, "Range data acquisition using color structured lighting and stereo vision," Image Vis. Comput., vol. 15, pp. 445–456, 1997.
[8] L. Zhang, B. Curless, and S. M. Seitz, "Rapid shape acquisition using color structured light and multi-pass dynamic programming," in Proc. IEEE 3D Data Processing Visualization and Transmission, Padova, Italy, Jun. 2002, pp. 24–36.
[9] Y. F. Li and S. Y. Chen, "Automatic recalibration of an active structured light vision system," IEEE Trans. Robot. Autom., vol. 19, no. 2, pp. 259–268, Apr. 2003.
[10] T. Lu and J. Zhang, "Three dimensional imaging system," U.S. Patent 6 252 623, Jun. 26, 2001.
[11] S. Inokuchi, K. Sato, and F. Matsuda, "Range-imaging system for 3-D object recognition," in Proc. 7th Int. Conf. Pattern Recognition, Montreal, QC, Canada, 1984, pp. 806–808.
BIBLIOGRAPHY OF THE AUTHORS
¹Gnana Prakash Thuraka obtained his B.Tech and M.Tech degrees from JNT University, Hyderabad, in 2006 and 2010 respectively. He has more than 8 years of teaching experience and is presently working as Assistant Professor in the CSE department, VNR VJIET, Hyderabad. His research interests include image processing, computer vision, data mining, big data, and service oriented architecture.
²Anantha Rao Gottimukkala received the B.Tech (CSE) degree from JNT University in 2007 and the M.Tech (SE) degree from JNTUK Kakinada in 2009. He has more than 7 years of teaching experience and is presently working as Associate Professor at Dr. Samuel George Institute of Engineering & Technology, Markapur, India. His research interests include image processing, cloud computing, and computer networks. He has attended various national and international workshops and conferences.