
Journal of Information, Control and Management Systems, Vol. 7, (2009), No. 2
3D SCANNER DATA PROCESSING
Branislav SOBOTA, Maroš ROVŇÁK, Csaba SZABÓ
Technical University of Kosice, Faculty of Electrical Engineering and Informatics,
Department of Computers and Informatics, Slovak Republic
e-mail: branislav.sobota@tuke.sk
Abstract
Digitization of real objects into 3D models is a rapidly expanding field with an
ever-increasing range of applications. This paper provides a brief summary of
current 3D scanning technologies, deals with some aspects of the data processing
required to obtain a usable 3D model and presents various 3D visualization
methods, with a focus on stereoscopy. Such processing involves large volumes of
graphical data and can therefore also be carried out on parallel computer systems
where appropriate.
Keywords: 3D scanning, 3D visualization, stereoscopy
1 INTRODUCTION
The past few years have seen dramatic decreases in the cost of graphics hardware.
Performance is increasing at such a fast pace that a large class of applications can
no longer utilize the full computational potential of the graphics processor. This
concerns both directions: output as well as input. The input problem is particularly
pronounced when real objects have to be turned into models. Several methods exist for
acquiring 3D data; one of them is scanning physical objects with a 3D scanner. The raw
output of a 3D scanner is usually a point cloud, so methods are needed to transform
these points into a usable model (typically a surface model based on triangles). This
process can also be executed in a parallel environment, either for the triangulation,
for the visualization, or for both [1][2]. These trends, coupled with increasing
Internet bandwidth, are making the use of complex 3D models accessible to a much
larger audience. There is potential to expand the use of 3D models beyond the
well-established game and movie markets to new applications in various fields such as
industrial solutions, medicine, architecture, e-commerce, education, biometry, etc.
(Fig. 1; all figures in this paper originate from DCI FEEI TU Košice). By the term
3D model we refer to a numerical description of an object that can be used to render
images of the object from arbitrary viewpoints and under arbitrary lighting
conditions [3], e.g. for 3D user interfaces of information systems [4][5][6].
Figure 1 Example of 3D face scanning
2 3D MODEL ACQUISITION PROCESS
The 3D model acquisition process [3][7][8] (Fig. 2) consists of two main
stages:
– 3D scanning and
– Data processing.
In general, the result of a 3D scan is a set of points in space with corresponding
3D coordinates, called a point cloud [9] (Fig. 2c). To capture the whole object, a
series of scans from various angles has to be made. There are several types of 3D
scanners, which differ in the technology used for obtaining a point cloud. They can
be divided into two main categories: contact and non-contact scanners.
Figure 2 3D model acquisition process: (a) original object, (b) scanning process,
(c) scanned object (point cloud), (d) model without texture, (e) visualized final textured
object
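
For illustration, a point cloud of this kind can be treated simply as an N x 3 array of coordinates. The following minimal Python sketch (assuming NumPy and a plain-text XYZ export with one "x y z" triple per line, which is a common but scanner-dependent format; the file name is hypothetical) loads a single scan and reports its extent.

import numpy as np

def load_xyz_point_cloud(path):
    # One "x y z" triple per line; real scanner exports may differ.
    return np.loadtxt(path, usecols=(0, 1, 2))   # shape (N, 3)

cloud = load_xyz_point_cloud("scan_front.xyz")   # hypothetical file name
print(len(cloud), "points")
print("bounding box:", cloud.min(axis=0), cloud.max(axis=0))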
Contact scanners require a physical contact with the object being scanned.
Although they are usually very precise, they are also much slower (order of 10³ Hz)
than non-contact scanners (order of 10⁵ Hz). A typical example of a contact 3D scanner
is a coordinate measuring machine (CMM).
Non-contact scanners use radiation to acquire the required information about objects.
They come in two basic types: passive and active. The main advantage of passive scanners
is that they are cheap, as they do not require specialized hardware to operate. To scan
objects, they use only the radiation already present in the surroundings (usually visible
light). In contrast, active scanners are equipped with their own radiation emitter
(usually emitting laser light). While the latter are considerably more expensive, they
are also much more accurate and able to scan over much greater distances (up to a few
kilometres).
Figure 3 3D scanner
The object presented in this paper (Figs. 4-8) has been scanned by an active
non-contact 3D laser scanner (Fig. 3), which uses the structured-light scanning method.
In this method, a pattern of light (vertical stripes in this case) is projected onto the
object (Fig. 2b; 160 000 triangles, scan time 35 min). This allows capturing the
positions of a large number of points at once, and therefore the scanning process is
very fast.
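
As a rough illustration of the structured-light principle (not the exact processing performed by the scanner in Fig. 3), the sketch below decodes a sequence of Gray-coded vertical stripe images into a per-pixel projector column index; NumPy is assumed and the fixed threshold is a deliberate simplification.

import numpy as np

def decode_gray_code_stripes(images, threshold=128):
    # images: list of grayscale frames (H, W), most significant stripe pattern first.
    # Returns an (H, W) integer array of projector column indices.
    # Simplified: real systems also project inverse patterns and mask shadowed pixels.
    bits = [(img >= threshold).astype(np.uint32) for img in images]
    binary = bits[0]            # Gray-to-binary: b0 = g0, b_i = b_{i-1} XOR g_i
    code = binary.copy()
    for g in bits[1:]:
        binary = binary ^ g
        code = (code << 1) | binary
    return code

Combining the decoded column index with the known camera-projector geometry then yields one 3D point per illuminated pixel, which is how a large number of points is captured at once.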
The next stage of the model acquisition process is data processing. It consists of
several parts. First, the point cloud has to be meshed (Fig. 4), i.e. the points have to be
connected into a collection of triangles (called faces) [10].
Figure 4 The meshing process: (a) point cloud, (b) meshed scan, (c) meshed scan
with texture
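
A minimal meshing sketch under a simplifying assumption: a single structured-light scan is roughly a height field with respect to the scanner, so a 2D Delaunay triangulation of the (x, y) projection of the points (here via SciPy) already yields a triangle mesh. This is only one possible approach and not necessarily the one used to produce Fig. 4.

import numpy as np
from scipy.spatial import Delaunay

def mesh_single_scan(points):
    # points: (N, 3) array from one scan, assumed to be roughly a height field in z.
    # Returns (vertices, faces); each row of faces holds three vertex indices.
    tri = Delaunay(points[:, :2])      # triangulate the x-y projection
    return points, tri.simplices

# usage with the hypothetical cloud loaded earlier:
# vertices, faces = mesh_single_scan(cloud)
# print(faces.shape[0], "triangles")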
The next step is to align the scans taken from various angles so that they form the
whole object surface. The aligned scans then have to be merged into one continuous
mesh, so that no overlapping parts occur. The merging process also involves filling
any “holes” (unscanned parts) in the model. Additionally, there is an optional step to
simplify the mesh (Fig. 5; 478 000 triangles, scan time 60 min), which consists of
reducing the number of triangles in order to save the memory needed to visualize the
final 3D model.
Figure 5 The simplifying process: (a) overlapping aligned scans, (b) merged scans,
(c) final simplified model
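
One common way to perform such a simplification is vertex clustering: vertices are snapped to a uniform grid, duplicates are merged and collapsed triangles are discarded. The sketch below (NumPy only) illustrates this idea; it is just one of several decimation methods and is not claimed to be the one used for Fig. 5.

import numpy as np

def simplify_by_vertex_clustering(vertices, faces, cell_size):
    # vertices: (N, 3) float array, faces: (M, 3) integer array of vertex indices.
    cells = np.floor(vertices / cell_size).astype(np.int64)       # grid cell per vertex
    _, keep_idx, inverse = np.unique(cells, axis=0,
                                     return_index=True, return_inverse=True)
    inverse = inverse.ravel()                                     # map old -> new index
    new_vertices = vertices[keep_idx]                             # one vertex per cell
    new_faces = inverse[faces]                                    # remap triangle indices
    # drop triangles whose vertices were merged into fewer than three cells
    distinct = (new_faces[:, 0] != new_faces[:, 1]) & \
               (new_faces[:, 1] != new_faces[:, 2]) & \
               (new_faces[:, 0] != new_faces[:, 2])
    return new_vertices, new_faces[distinct]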
3 VISUALIZATION OF 3D MODELS
There are several methods of 3D visualization [4]. The most common ones are
stereoscopy, autostereoscopy, volumetric visualization and holography. This paper
deals only with stereoscopy [11], since it offers a fairly impressive 3D effect at low
cost and with a relatively easy implementation.
3.1 Stereoscopy implementations
Stereoscopy is based on the way the human brain perceives surrounding objects.
When a person looks at an object, each eye sees it from a slightly different angle.
The brain processes this information and thus enables stereoscopic vision. In
stereoscopic visualization, the human eyes are replaced by two cameras “looking” at an
object from different angles. A stereoscopic image (or video) is a composition of the
left and right images captured by the two cameras. The image is then presented to the
viewer in such a manner that each eye sees only the image captured by one of the
cameras. In this way the human brain interprets a 2D image as a 3D scene. There are two
common ways of implementing this method.
The first one is known as anaglyph. It has become quite a popular stereoscopic
method, since it only requires very cheap red-cyan glasses. The anaglyph is a
composition of the red channel from the left image and the green and blue channels
(together forming a “cyan channel”) from the right image. When looking at the anaglyph
through red-cyan glasses, the left eye sees only the red part of the image and the
right eye the rest.
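
The channel composition described above fits in a few lines of Python; the sketch below assumes NumPy and Pillow and two already captured, equally sized left and right RGB images (the file names are hypothetical).

import numpy as np
from PIL import Image

def make_anaglyph(left_path, right_path):
    # Red channel from the left image, green and blue channels from the right image.
    left = np.asarray(Image.open(left_path).convert("RGB"))
    right = np.asarray(Image.open(right_path).convert("RGB"))
    anaglyph = right.copy()                 # start with the right view (green + blue)
    anaglyph[:, :, 0] = left[:, :, 0]       # overwrite the red channel with the left view
    return Image.fromarray(anaglyph)

# make_anaglyph("left.png", "right.png").show()   # hypothetical file names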
The second implementation of stereoscopy is based on light polarization. It requires
two projectors able to project polarized light and special polarized glasses. Through
the glasses, the left eye sees only the scene from one projector and the right eye
only the scene from the other.
3.2 Stereoscopic image quality
There are several factors affecting image quality which have to be considered when
implementing stereoscopic visualization in order to achieve high-quality images.
The most important one is the stereo base (separation), i.e. the distance between the
two cameras capturing the images. The wider the stereo base, the farther the visualized
object appears from the projection plane (Fig. 6). Therefore, the stereo base has to be
adjusted to the viewing distance. Too large a value could result in uncomfortable
viewing, while too small a value could lead to an almost unnoticeable 3D effect. For
example, when an image is to be viewed on a regular PC monitor (viewing distance around
50 cm), the stereo base should be narrower than when an image is to be projected on a
large screen (viewing distance of more than 2.5 m).
Figure 6 Varying stereo base. (a) Stereo base is zero. (b) Narrow stereo base.
(c) Wide stereo base.
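
The influence of the stereo base can be quantified with the standard parallel-camera relation (a textbook formula, not one given in this paper): for a focal length f expressed in pixels, stereo base b and object depth Z, the horizontal disparity between the two views is approximately d = f·b / Z. The small sketch below only illustrates that doubling the base doubles the disparity of every scene point, which is what moves the object farther from the projection plane.

def pixel_disparity(focal_length_px, stereo_base, depth):
    # Horizontal disparity (in pixels) of a point at the given depth,
    # assuming parallel cameras (standard stereo geometry).
    return focal_length_px * stereo_base / depth

# illustrative values only (hypothetical camera with f = 1000 px):
print(pixel_disparity(1000, stereo_base=0.065, depth=1.0))   # 65.0 px
print(pixel_disparity(1000, stereo_base=0.130, depth=1.0))   # 130.0 px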
Another factor is the need for reference objects. The problem is that the 3D effect
is not so apparent when there is only one object in the image, because the human brain
has no other objects to compare distances with. For example, Fig. 7 shows an object
on a black background. It is much more difficult to see the 3D effect in this picture
than in one with some additional objects (Fig. 6).
Figure 7 An anaglyph with no reference objects
Finally, there is also a quality-affecting factor concerning image (video)
compression and resolution, since compression artifacts can significantly degrade the
visual quality of the stereoscopic 3D effect (Fig. 8). It is therefore recommended to
use high resolutions and high-quality compression formats for stereoscopic
visualizations. For images, Portable Network Graphics (PNG) appears to be a very
suitable format, since it is lossless and produces files that do not take up too much
disk space. For stereoscopic videos, a suitable solution would be to use either
uncompressed video or one of the modern high-quality codecs, such as x264, with high
bitrate settings.
Figure 8 Impact of low quality compression on anaglyph
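
Following this recommendation, storing a stereoscopic frame losslessly takes a single Pillow call; the sketch below assumes a hypothetical source file and simply re-saves it as PNG.

from PIL import Image

# Hypothetical file names: convert a rendered stereoscopic frame to lossless PNG.
frame = Image.open("stereo_frame.bmp")           # any lossless source format
frame.save("stereo_frame.png", optimize=True)    # PNG keeps every pixel intact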
4 CONCLUSION
In this paper, the processes of 3D model acquisition and visualization are
described. The acquisition begins with the scanning of an object with a 3D scanner.
The output of the scanning is a point cloud, which is the input to the data
processing stage. The resulting 3D model can be further used in a 3D visualization process.
Stereoscopy has been chosen as a suitable visualization method because of its simple
and inexpensive implementation. Two methods of its implementation were presented
in this paper; both are used as part of the visualization framework of the virtual
reality system developed at DCI FEEI TU Košice.
In order to obtain stereoscopic images with a high-quality 3D visual effect, the
following factors have to be taken into account. The value of the stereo base should be
adjusted to the intended viewing distance, images should contain some additional
reference objects apart from the presented one, and, finally, either uncompressed
images (videos) or high-quality codecs (preferably lossless) should be used, since
compression artifacts can significantly degrade the resulting image quality.
Acknowledgement
This contribution is the result of the project implementation: Centre of Information and
Communication Technologies for Knowledge Systems (project number: 26220120020)
supported by the Research & Development Operational Programme funded by the
ERDF and also is supported by VEGA grant project No. 1/0646/09: “Tasks solution
for large graphical data processing in the environment of parallel, distributed and
network computer systems”.
REFERENCES
[1] SOBOTA, B., STRAKA, M.: A conception of parallel graphic architecture for large
graphical data volume visualization in grid architecture environment. Grid Computing
for Complex Problem, Institute of Informatics, Slovak Academy of Sciences, Bratislava,
29.11.-1.12.2006, pp. 36-43, ISBN 978-80-969202-6-6
[2] SOBOTA, B., STRAKA, M., PERHÁČ, J.: An Operation for Subtraction of Convex
Polyhedra for Large Graphical Data Volume Editing. Proceedings of the 9th International
Scientific Conference on Engineering of Modern Electric System, EMES 2007, Oradea,
Romania, May 24-26, 2007, University of Oradea, Faculty of Electrical Engineering and
Information Technology, pp. 107-110, ISSN 1223-2106
[3] BERNARDINI, F., RUSHMEIER, H.: The 3D Model Acquisition Pipeline. Computer
Graphics Forum, Vol. 21, No. 2, 2002, pp. 149-172
[4] SOBOTA, B., SZABÓ, Cs., PERHÁČ, J., ÁDÁM, N.: 3D Visualization for City
Information System. Proceedings of the International Conference on Applied Electrical
Engineering and Informatics AEI2008, Athens, Greece, 8.-11.9.2008, FEI TU Košice,
pp. 9-13, ISBN 978-80-553-0066-5
[5] PORUBÄN, J., VÁCLAVÍK, P.: Separating User Interface and Domain Logic. Analele
Universitatii din Oradea, Proc. 8th International Conference on Engineering of Modern
Electric Systems, Oradea, Romania, May 24-26, 2007, University of Oradea, pp. 90-95,
ISSN 1223-2106
[6] SOBOTA, B., SZABÓ, Cs., PERHÁČ, J., MYŠKOVÁ, H.: Three-dimensional interfaces of
geographical information systems. In: ICETA 2008: 6th International Conference on
Emerging eLearning Technologies and Applications: Information and communications
technologies in learning: Conference proceedings, September 11-13, 2008, Stará Lesná,
The High Tatras, Slovakia. Košice: Elfa, 2008, pp. 175-180, ISBN 978-80-8086-089-9
[7] RUSINKIEWICZ, S., HALL-HOLT, O., LEVOY, M.: Real-Time 3D Model Acquisition. ACM
Transactions on Graphics (Proc. SIGGRAPH), 21(3), July 2002, pp. 438-446
[8] PULLI, K., ABI-RACHED, H., DUCHAMP, T., SHAPIRO, L.G., STUETZLE, W.: Acquisition
and Visualization of Colored 3D Objects. In: Proceedings of the 14th International
Conference on Pattern Recognition (ICPR), Vol. 1, August 16-20, 1998, IEEE Computer
Society, Washington, DC, p. 11
[9] KATZ, S., TAL, A., BASRI, R.: Direct Visibility of Point Sets. In ACM
SIGGRAPH 2007 Papers (San Diego, CA, August 05 - 09, 2007). SIGGRAPH
'07. ACM, New York, NY, 24.
[10] SOBOTA, B., STRAKA, M., VAVREK, M.: The preparation of contribution of 3D model
input using triangulation. MOSMIC 2007 - Modeling and Simulation in Management,
Informatics and Control, Žilina, 15.-16.10.2007, Fakulta manažmentu, riadenia a
informatiky, Žilinská univerzita v Žiline, 2007, pp. 21-26, ISBN 978-80-8070-807-8
[11] SOBOTA, B., PERHÁČ, J., STRAKA, M., SZABÓ, Cs.: Aplikácie paralelných,
distribuovaných a sieťových počítačových systémov na riešenie výpočtových procesov
v oblasti spracovania rozsiahlych grafických údajov (Applications of parallel,
distributed and network computer systems to solving computational processes in the
area of large graphical data processing). Elfa, Košice, 2009, 180 p.,
ISBN 978-80-8086-103-2