PHOTOGRAMMETRY AND COMPUTER GRAPHICS FOR VISUAL IMPACT ANALYSIS IN ARCHITECTURE

Peter Durisch
Informatik im Ingenieurwesen, ETH-Hoenggerberg
CH-8093 Zurich, Switzerland
e-mail: durisch@pfl.ethz.ch

ISPRS Commission V
ABSTRACT:
The paper presents a method to produce realistic computer-generated color images which aid the judgement of the aesthetic properties of planned buildings. The planned buildings are embedded into the existing environment by photomontage. The planar universe of conventional photomontaging is extended to three dimensions. During an interactive preprocessing step, a three-dimensional description of the existing environment is created: geometrical data, atmospheric parameters and illumination parameters are retrieved from digital site photographs. The final image, combining the planned building and the environment represented by the site photographs, is rendered by an extended ray-tracing algorithm. The algorithm operates in three-dimensional object space and handles the interaction between the building and its environment with respect to hiding, shadowing and interreflection. For the simulation of illumination effects, material parameters are retrieved from the photographs at rendering time.
KEY WORDS: Image Synthesis, Ray-tracing, Image Analysis, Feature Measurement, Visual Impact Analysis.
1. INTRODUCTION

Among the traditional methods which support the delicate job of judging the visual impact of planned buildings on the landscape or cityscape are: images (plans, perspective drawings, photomontages) and three-dimensional hand-crafted models. Though there is a justification for all of these, observation reveals that they are either unable to show the subtle effects which make up the overall appearance of a building, inflexible for case studies, or expensive due to the invested manual and artistic work. Thus, finding new methods to provide accurate sources with which to judge the aesthetic properties of planned buildings seems to be a worthy goal.

A fundamental requirement is that a building needs to be integrated into the existing environment (landscape, cityscape) (Leonhardt, 1984). The paper presents a method to produce realistic computer-generated color images which aid the judgement of the aesthetic properties of planned buildings. The planned buildings are embedded into the existing environment by photomontage. Photomontage is the combination of artificial objects (the planned building) with a particular environment represented by a number of digital site photographs. During an interactive preprocessing step, a three-dimensional description of the environment is retrieved from the photographs. In a second step, the photographs, the information retrieved from the photographs and the CAD model of the building are used to generate the final images. These show the planned building embedded in the existing environment from the perspectives of the site photographs. See also Durisch (Durisch, 1992).

2. PREVIOUS WORK

Computer-generated objects overlaid on background photographs are shown by Uno & Matsuka (Uno, 1979), Feibush et al. (Feibush, 1980), (BD&C, 1990), Takagi et al. (Takagi, 1990) and Doi et al. (Doi, 1991). Maver et al. (Maver, 1985) calculate the camera's position and orientation in three-dimensional object space. The wireframe model of the planned building is perspectively mapped onto the background photograph and an artist completes the drawing within the frame. Background objects hidden by the building are manually eliminated from the photograph. Then the corrected photograph is transparently overlaid onto the image of the building. Nakamae et al. (Nakamae, 1986) calculate the camera's position and orientation in object space and retrieve atmospheric parameters and illumination parameters from the photograph. The image of the building is generated by considering the retrieved information and overlaid onto the background photograph. Finally, it is covered by the manually identified foreground. The approximation of the background scene by faces and polyhedra in object space allows shadows to be generated and various atmospheric conditions to be simulated. Object space coordinates are obtained from topographical maps. Terzopoulos & Witkin (Terzopoulos, 1988) show an animation with deformable solid objects copied into a real background scene which was approximated by planar faces in object space.

Kaneda et al. (Kaneda, 1989) generate a three-dimensional terrain model from cartographical data. The surface texture of the model is derived from aerial photographs. Artificial objects are merged with the terrain by a method of Nakamae et al. (Nakamae, 1989) for the combination of a foreground and a background based on known depth information. Though the three-dimensional terrain model allows any observer position, the approach is of limited use whenever details of the environment, especially those visible in horizontal views, are required. Thirion (Thirion, 1992) generates images for the test of image processing algorithms. A three-dimensional model of an existing building is retrieved photogrammetrically from aerial photographs. The surface texture of the model is derived from the same photographs. The three-dimensional model allows any observer position if the geometry of the scene is known with sufficient precision and if photographs from suitable perspectives are available. Changes in illumination are simulated by removing and adding shadows. For this purpose, illumination parameters and material parameters are retrieved from the given photographs.
3. MODELS

Models serve to simplify the description of some aspects of the physical world. This section introduces important radiometric quantities and models for the representation of light sources, materials, geometrical objects and the camera as well as models for the simulation of atmospheric effects and illumination effects.

3.1 Radiometric Quantities

Fig. 1 Definition of radiometric quantities.

Irradiance $E = d\Phi / dF$ [W m$^{-2}$] at a point on a surface element is the radiant power $d\Phi$ [W] incident on the surface element, divided by the area $dF$ [m$^2$] of the surface element. Radiance $L = d^2\Phi / (\cos\theta \, dF \, d\omega)$ [W m$^{-2}$ sr$^{-1}$] at a point on a surface element and in a given direction is the radiant power $d\Phi$ incident on the surface element or leaving the surface element in a cone containing the given direction, divided by the solid angle $d\omega$ [sr] of that cone and the projected area $\cos\theta \, dF = (\hat{n} \cdot \hat{d}) \, dF$ of the surface element. $\hat{n}$ is the unit surface normal (Fig. 1). The corresponding spectral distribution functions are given by $E_\lambda = dE / d\lambda$ [W m$^{-3}$] and $L_\lambda = dL / d\lambda$ [W m$^{-3}$ sr$^{-1}$]. $\bar{E}_\lambda$ and $\bar{L}_\lambda$ denote relative spectral distribution functions (spectral distribution functions in an arbitrary unit) (Judd, 1975). From the above definitions:

$$E = \int_\Omega L \, \cos\theta \, d\omega = \int_\Omega L \, (\hat{n} \cdot \hat{d}) \, d\omega \qquad (1)$$

3.2 The Atmospheric Model

Due to the scattering and absorption of light within the atmosphere, an object which moves away from the observer changes its color depending on the distance from the observer, the current weather conditions and the pollution of the air. The atmospheric effect is simulated by the atmospheric model (Schachter, 1983):

$$L_\lambda = L_{o\lambda} \, \tau_{a\lambda}^{d/d_a} + L_{\infty\lambda} \, \bigl(1 - \tau_{a\lambda}^{d/d_a}\bigr) \qquad (2)$$

$L_{o\lambda}$ represents the true object color. This is the light which leaves the surface of an object towards the observer. $L_\lambda$ represents the apparent object color. This is the light which reaches the observer at a distance $d$ from the object. $L_{\infty\lambda}$ represents the horizon color, which is $L_\lambda$ for $d \to \infty$. $\tau_{a\lambda}$ is the spectral transmittance of the atmosphere. This is the fraction of light energy which is neither absorbed nor scattered away from the straight direction of light propagation within the reference distance $d_a$. The atmosphere described by (2) is homogeneous and isotropic. Note that the term color is used for simplicity though $L_{o\lambda}$, $L_\lambda$ and $L_{\infty\lambda}$ are color stimuli in terms of spectral radiance.

3.3 The Light Source Model

Kaneda et al. (Kaneda, 1991) model natural daylight as being composed of direct sunlight and diffuse, non-uniformly incident skylight. The contribution of the skylight is determined by integrating over the visible parts of the sky dome, which is subdivided into band sources. Wavelength dependency as well as absorption and scattering in a non-homogeneous atmosphere are taken into account. The discretization of distributed light sources is described by Verbeck & Greenberg (Verbeck, 1984).

Natural daylight is assumed to be composed of direct sunlight, diffuse ambient light and diffuse skylight.

The direct sunlight (Fig. 2) is defined by the direction $\hat{d}_s$ to the light source (the sun), the solid angle $\Omega_s \ll 2\pi$ and the radiance $L_{s\lambda} = k_s \bar{L}_\lambda$ of the incident light. $k_s$ is a weighting factor. By (1), the irradiance $E_{s\lambda}$ of the sunlight incident at a point $p$ on a surface $F$ is

$$E_{s\lambda} = k_s \, \bar{L}_\lambda \, (\hat{n} \cdot \hat{d}_s) \, \Omega_s \qquad (3)$$

The diffuse ambient light (Fig. 2) is a uniform illumination term. It is defined by the radiance $L_{u\lambda} = k_u \bar{L}_\lambda$ of the incident light. $k_u$ is a weighting factor. By (1), the irradiance $E_{u\lambda}$ of the ambient light incident at a point $p$ on a surface $F$ is

$$E_{u\lambda} = k_u \, \bar{L}_\lambda \, \pi \qquad (4)$$

The diffuse skylight (Fig. 3) is sunlight scattered by the atmosphere. It is emitted by the sky hemisphere above the horizon. Because of its large radius, all objects are approximately in the center of the sky hemisphere. The skylight is defined by the direction $\hat{d}_h$ to the zenith, which is the highest point of the sky hemisphere, the parameter $m_h > 0$ (see below) and the radiance $L_{h\lambda} = k_h \bar{L}_\lambda$ of the incident light. $k_h$ is a weighting factor. By (1), the irradiance $E_{h\lambda}$ of the skylight incident at a point $p$ on a surface $F$ is

$$E_{h\lambda} = k_h \, \bar{L}_\lambda \, \sum_i (\hat{n} \cdot \hat{d}_i) \, \delta_i \, \Delta\omega \qquad (5)$$

$\Omega \le 2\pi$ represents the part of the sky hemisphere with $\hat{n} \cdot \hat{d} > 0$ and $\hat{d}_h \cdot \hat{d} > 0$ which is visible from $p$. The solid angle $2\pi$ is subdivided into $4m_h^2$ sky facets ($m_h$ facets in polar direction x $4m_h$ facets in azimuth direction) of equal size $\Delta\omega = 2\pi / (4m_h^2)$, and the integral is approximated by a sum. $\hat{d}_i$ is the direction to the center of the $i$-th sky facet. The summation is done for all facets the centers of which are visible from $p$: $\delta_i = 1$ if the center of the $i$-th sky facet is visible from $p$, and $\delta_i = 0$ else.

The illumination geometry is defined by $\hat{d}_s$, $\Omega_s$, $\hat{d}_h$ and $m_h$.

Fig. 2 Sunlight (left) and ambient light (right).

Fig. 3 Skylight.
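To make the facet discretization concrete, the following Python sketch evaluates the three irradiance terms (3)-(5) for a single wavelength sample. The facet-center layout (equal-area polar bands, zenith along +z) and the visibility callback `visible(p, d)` are illustrative assumptions; the paper does not specify these details.

```python
import math
import numpy as np

def sun_irradiance(k_s, L_bar, n, d_s, omega_s):
    """Eq. (3): E_s = k_s * L_bar * (n . d_s) * Omega_s
    (clamped at zero here for points facing away from the sun)."""
    return k_s * L_bar * max(float(np.dot(n, d_s)), 0.0) * omega_s

def ambient_irradiance(k_u, L_bar):
    """Eq. (4): uniform incident radiance integrated over the hemisphere."""
    return k_u * L_bar * math.pi

def sky_irradiance(k_h, L_bar, n, p, m_h, visible):
    """Eq. (5): sum over 4*m_h^2 sky facets of equal solid angle
    d_omega = 2*pi / (4*m_h^2); delta_i = 1 where the facet center
    is visible from p. The zenith is assumed along +z."""
    d_omega = 2.0 * math.pi / (4.0 * m_h * m_h)
    total = 0.0
    for j in range(m_h):                        # m_h facets in polar direction
        cos_t = 1.0 - (j + 0.5) / m_h           # equal-area polar bands
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        for i in range(4 * m_h):                # 4*m_h facets in azimuth direction
            phi = (i + 0.5) * 2.0 * math.pi / (4 * m_h)
            d_i = np.array([sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t])
            if np.dot(n, d_i) > 0.0 and visible(p, d_i):
                total += float(np.dot(n, d_i)) * d_omega
    return k_h * L_bar * total
```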
3.4 The Material Model and the Illumination Model

The illumination model expresses the radiance $L_\lambda$ of the light which leaves a point on the interface between two materials in a given direction. $L_\lambda$ depends on the roughness of the material interface, on the illumination and on the materials on both sides of the material interface. The full illumination model (a slight modification of the model by Hall & Greenberg (Hall, 1983) is used) will not be given in detail here. Instead, for an opaque, diffusely reflecting surface illuminated by natural daylight (section 3.3), the radiance $L_\lambda$ of the reflected light is uniformly distributed in all directions and the illumination model reduces to

$$L_\lambda = k_d \, \varrho_{d\lambda} \, (E_{s\lambda} + E_{u\lambda} + E_{h\lambda}) \qquad (6)$$

$k_d$ is the weight of the diffusely reflected component and $\varrho_{d\lambda}$ [sr$^{-1}$] is the diffuse reflectance (directional hemispherical reflectance) of the material. $E_{s\lambda}$, $E_{u\lambda}$ and $E_{h\lambda}$ are the irradiances due to the incident sunlight, ambient light and skylight according to (3), (4) and (5).

3.5 The Geometrical Object Model

A geometrical object is either a planar convex or concave polygon, a polyhedron or a sphere. Each geometrical object has a material attribute which is a reference into the material library (Fig. 4).

3.6 The Camera Model

World coordinates define locations in three-dimensional object space, image coordinates define locations in the two-dimensional image plane of a camera and bitmap coordinates define locations in the two-dimensional pixel raster of a digital image.

The camera model represents a calibrated camera in three-dimensional object space. The interior orientation of the camera consists of the calibrated focal length, the image coordinates of a number of fiducial marks and the center and amount of the radial image distortion due to the non-ideal optics of the camera. The fiducial marks are not affected by the radial image distortion. Therefore, they define the image coordinate system in the image plane and allow a linear mapping to be derived between the homogeneous image coordinates and the homogeneous bitmap coordinates of a digitized image. The exterior orientation of the camera consists of the three world coordinates of the camera position and the three angles of the camera orientation in object space.

4. INFORMATION FROM DIGITAL COLOR IMAGES

The aim of this work is to produce realistic computer-generated color images which aid the judgement of the aesthetic properties of planned buildings. These artificial objects are embedded into the existing environment by photomontage. The natural environment is represented by a number of digital site photographs. During an interactive preprocessing step, which is supported by the program PREPARE (Fig. 4), a three-dimensional description of the natural environment is created by retrieving geometrical and non-geometrical information from the site photographs.

In this section, a number of inversions are formulated. The term inversion stands for solving an equation or a system of equations given by one of the models (section 3) for one or more unknown model parameters.

The models (section 3) are formulated in terms of the wavelength. Consequently, RGB triplets retrieved from input images first have to be converted to equivalent spectral distribution functions. A simple and efficient conversion method is detailed in appendix A.

4.1 Inversion of the Camera Model

Each of the digital color images which represent the natural environment is geometrically corrected to eliminate the effect of the radial image distortion. The geometrically corrected images are called the input images (Fig. 4).

For each input image, the six unknowns of the exterior orientation are determined separately from the known world coordinates and the manually identified image coordinates of at least three control points. More control points help to refine the result. This is the resection in space, or first inversion of the camera model. Each control point adds two equations to a system of non-linear equations which is iteratively solved, starting from a user-specified estimation of the solution. The result of the resection in space is converted to a 4x4 matrix which expresses the linear mapping from homogeneous world coordinates to homogeneous image coordinates. Since this mapping will be performed very often during rendering (section 6), its linearity is important for efficiency. The non-linearity introduced by the radial image distortion was previously eliminated by the geometrical correction.
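For illustration, a minimal sketch of how such a 4x4 matrix can be built and applied. The row layout (pinhole collinearity with rotation R, camera position X0 and calibrated focal length c) is a hypothetical stand-in, since the paper does not spell out its matrix.

```python
import numpy as np

def projection_matrix(R, X0, c):
    """Build a 4x4 matrix mapping homogeneous world coordinates to
    homogeneous image coordinates (pinhole collinearity equations).
    R: 3x3 camera rotation, X0: camera position, c: focal length.
    This row layout is a hypothetical stand-in for the paper's matrix."""
    P = np.eye(4)
    P[0, :3] = c * R[0]              # numerator of the image x-coordinate
    P[1, :3] = c * R[1]              # numerator of the image y-coordinate
    P[3, :3] = R[2]                  # common perspective divisor
    P[0, 3] = -c * R[0] @ X0
    P[1, 3] = -c * R[1] @ X0
    P[3, 3] = -R[2] @ X0
    P[2] = [0.0, 0.0, 0.0, 1.0]      # unused row; keeps the matrix 4x4
    return P

def world_to_image(P, X):
    """One matrix-vector product and one perspective division per point;
    this is the operation performed for every ray hit during rendering."""
    xh = P @ np.append(X, 1.0)
    return xh[:2] / xh[3]

# A camera at the origin looking along +z with a 50 mm focal length
# maps the world point (1, 0, 10) to image coordinates (0.005, 0).
P = projection_matrix(np.eye(3), np.zeros(3), 0.05)
print(world_to_image(P, np.array([1.0, 0.0, 10.0])))
```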
The geometry of relevant parts of the natural environment is approximated by planar polygons in object space. These define the scene geometry. Relevant parts of the natural environment are those interacting with the planned building with respect to hiding, shadowing and interreflection.

The world coordinates of a polygon vertex are determined from its manually identified image coordinates in two or more input images with different exterior orientation. This is the intersection in space, or second inversion of the camera model. The geometrical properties of a polygon are defined by its coplanar vertices, while the non-geometrical properties of a polygon are given by an image attribute and a material attribute. The image attribute references one of the input images. The material attribute is a reference into the material library (Fig. 4). The referenced material description defines all material properties except for the diffuse reflectance $\varrho_{d\lambda}$, which is retrieved at rendering time from one of the input images (sections 4.3 and 6). The default input image for retrieving $\varrho_{d\lambda}$ is indicated by the image attribute.

The illumination geometry (section 3.3) is determined as follows: The solid angle $\Omega_s$ of incident sunlight is calculated from the distance to the sun and the radius of the sun. The direction $\hat{d}_s$ to the sun is determined by a point on an object and the corresponding point on the object's shadow by intersection in space. The direction $\hat{d}_h$ to the zenith is determined similarly, e.g. by two points on the same vertical edge of a house. Since the world coordinate system may be selected arbitrarily, the zenith direction is not known a priori. $m_h > 0$ is set arbitrarily.

Resection and intersection in space are not detailed here. These are basic procedures in photogrammetry (Wolf, 1983) (Rüger, 1987).

4.2 Inversion of the Atmospheric Model

The first inversion of the atmospheric model involves solving the equation of the atmospheric model (2) for the spectral transmittance $\tau_{a\lambda}$ of the atmosphere.

Given are the horizon color $L_{\infty\lambda}$ and the reference distance $d_a = 1$. $L_{\infty\lambda}$ is derived from a suitable RGB triplet which is manually selected in one of the input images (appendix A). $P_i$ ($i = 1..n$, $n \ge 2$) are known locations in object space. They are situated on the surfaces of objects of the same opaque, diffusely reflecting material. The $P_i$ are manually identified on previously determined polygons in object space (section 4.1). The known apparent object color $L_{i\lambda}$ is the radiance of the light reaching the camera associated with one of the input images at the known distance $d_i$ from $P_i$. $L_{i\lambda}$ is derived from the RGB triplet of the reconstructed image function at $P_i'$ (appendix A). $P_i'$ are the image coordinates of $P_i$ in the selected input image.

A simplifying assumption is that the radiance $L_{o\lambda}$ of the light leaving the objects towards the observer is the same at all locations $P_i$. The radiance of the diffusely reflected light is uniformly distributed in all directions, but it depends on the illumination of the surface and the orientation of the surface relative to the light sources.

By inserting into (2), each sample contributes one equation

$$L_{i\lambda} = L_{o\lambda} \, \tau_{a\lambda}^{d_i} + L_{\infty\lambda} \, \bigl(1 - \tau_{a\lambda}^{d_i}\bigr) \qquad (7)$$

to a system of $n$ non-linear equations which is redundant for $n > 2$. For each wavelength within the visible spectrum, the system of equations (7) is iteratively solved for the unknowns, which are the true object color $L_{o\lambda}$ and the spectral transmittance $\tau_{a\lambda}$ of the atmosphere. $L_{o\lambda}$ is of no interest since it expresses no general property of the scene. The iteration is started with $\tau_{a\lambda} = 1$ and $L_{o\lambda} = L_{j\lambda}$, such that $d_j = \min(d_i)$.

Nakamae et al. (Nakamae, 1986) use an equivalent method based on the same atmospheric model (2). Wavelength dependency is not considered and world coordinates are obtained from topographical maps.

The second inversion of the atmospheric model involves solving the equation of the atmospheric model (2) for the true color $L_{o\lambda}$ of an object. This inversion will also be used during rendering (section 6).

Given are the horizon color $L_{\infty\lambda}$, the spectral transmittance $\tau_{a\lambda}$ of the atmosphere and the reference distance $d_a = 1$. $L_{\infty\lambda}$ and $\tau_{a\lambda}$ are known from the first inversion of the atmospheric model (see above). $p$ is a known location in object space. The known apparent object color $L_\lambda$ is the radiance of the light reaching the camera associated with one of the input images at the known distance $d$ from $p$. $L_\lambda$ is derived from the RGB triplet of the reconstructed image function at $p'$ (appendix A). $p'$ are the image coordinates of $p$ in the selected input image.

Inserting into (2) and solving for $L_{o\lambda}$ yields

$$L_{o\lambda} = (L_\lambda - L_{\infty\lambda}) \, \tau_{a\lambda}^{-d} + L_{\infty\lambda} \qquad (8)$$
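A numerical sketch of both inversions for one wavelength sample. The paper solves the non-linear system (7) iteratively from the stated starting values; this sketch substitutes a simple scan over candidate tau values with a closed-form least-squares estimate of the nuisance unknown L_o, which is an assumed solver, not the paper's.

```python
import numpy as np

def apparent_color(L_o, tau, L_inf, d):
    """Atmospheric model (2) with reference distance d_a = 1."""
    t = tau ** d
    return L_o * t + L_inf * (1.0 - t)

def fit_transmittance(L_i, d_i, L_inf, grid=2000):
    """First inversion: recover tau from n >= 2 samples L_i of the same
    material seen at distances d_i. For each candidate tau the nuisance
    unknown L_o has a closed-form least-squares solution."""
    L_i, d_i = np.asarray(L_i, float), np.asarray(d_i, float)
    best_err, best_tau = np.inf, 1.0
    for tau in np.linspace(1e-3, 1.0, grid):
        t = tau ** d_i
        L_o = L_inf + t @ (L_i - L_inf) / (t @ t)
        err = np.sum((L_o * t + L_inf * (1.0 - t) - L_i) ** 2)
        if err < best_err:
            best_err, best_tau = err, tau
    return best_tau

def true_color(L, tau, L_inf, d):
    """Second inversion, eq. (8): eliminate the effect of the atmosphere."""
    return (L - L_inf) * tau ** (-d) + L_inf

# Synthetic check: samples generated with tau = 0.999 are recovered.
d = np.array([50.0, 200.0, 800.0])
L = apparent_color(0.30, 0.999, 0.45, d)
tau_hat = fit_transmittance(L, d, 0.45)
print(tau_hat, true_color(L[1], tau_hat, 0.45, d[1]))
```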
4.3 Inversion of the Illumination Model

The first inversion of the illumination model involves solving the equation of the illumination model (6) for the weights $k_s$, $k_u$ and $k_h$ of the three illumination components which make up natural daylight (section 3.3).

Given are the scene geometry (section 4.1) and the illumination geometry $\hat{d}_s$, $\Omega_s$, $\hat{d}_h$ and $m_h$ (sections 3.3 and 4.1). $\bar{L}_\lambda$ is the known relative spectral distribution function of the daylight components. $P_i$ ($i = 1..n$, $n \ge 3$) are known locations in object space. They are situated on the surfaces of objects of the same opaque, diffusely reflecting material with material parameters $k_d = 1$ and $\varrho_{d\lambda}$, where $\varrho_{d\lambda}$ is known up to a constant factor only. Each $P_i$ receives a different amount of illumination by the natural daylight. The $P_i$ are manually identified on previously determined polygons in object space (section 4.1). The known apparent object color $L_{i\lambda}$ is the radiance of the light reaching the camera associated with one of the input images at the known distance $d_i$ from $P_i$. $L_{i\lambda}$ is derived from the RGB triplet of the reconstructed image function at $P_i'$ (appendix A). $P_i'$ are the image coordinates of $P_i$ in the selected input image. The true object color $L_{oi\lambda}$ is determined from $L_{i\lambda}$ by the second inversion of the atmospheric model (the effect of the atmosphere is eliminated) (section 4.2).

By inserting into (6), each sample contributes one equation

$$L_{oi\lambda} = \varrho_{d\lambda} \, (k_s \, E_{si\lambda} + k_u \, E_{ui\lambda} + k_h \, E_{hi\lambda}) \qquad (9)$$

to a system of $n$ linear equations which is redundant for $n > 3$. $E_{si\lambda}$, $E_{ui\lambda}$ and $E_{hi\lambda}$ are the irradiances due to the incident sunlight, ambient light and skylight. Since the scene geometry and the illumination geometry are known, they can be calculated according to (3), (4) and (5). For each wavelength within the visible spectrum, the system of equations (9) is solved for the unknowns, which are the weights $k_s$, $k_u$ and $k_h$ of the daylight components. Finally, the weights are averaged within the visible spectrum. Since $\varrho_{d\lambda}$ is known up to a constant factor only, $k_s$, $k_u$ and $k_h$ are the relative weights of the daylight components.

Nakamae et al. (Nakamae, 1986) and Thirion (Thirion, 1992) determine the ratio of illumination by direct sunlight and ambient light. Skylight, wavelength dependency and the effect of the atmosphere are not considered.

The second inversion of the illumination model involves solving the equation of the illumination model (6) for the diffuse reflectance $\varrho_{d\lambda}$ of a material. This inversion will be used during rendering (section 6).

Given are the scene geometry (section 4.1) and the illumination geometry $\hat{d}_s$, $\Omega_s$, $\hat{d}_h$ and $m_h$ (sections 3.3 and 4.1). The radiances $k_s \bar{L}_\lambda$, $k_u \bar{L}_\lambda$ and $k_h \bar{L}_\lambda$ of the daylight components are known from the first inversion of the illumination model (see above). $p$ is a known location in object space. It is situated on the surface of an object of an opaque, diffusely reflecting material with material parameter $k_d > 0$. The known apparent object color $L_\lambda$ is the radiance of the light reaching the camera associated with one of the input images at the known distance $d$ from $p$. $L_\lambda$ is derived from the RGB triplet of the reconstructed image function at $p'$ (appendix A). $p'$ are the image coordinates of $p$ in the selected input image. The true object color $L_{o\lambda}$ is determined from $L_\lambda$ by the second inversion of the atmospheric model (the effect of the atmosphere is eliminated) (section 4.2).

Inserting into (6) and solving for $\varrho_{d\lambda}$ yields

$$\varrho_{d\lambda} = \frac{L_{o\lambda}}{k_d \, (E_{s\lambda} + E_{u\lambda} + E_{h\lambda})} \qquad (10)$$

$E_{s\lambda}$, $E_{u\lambda}$ and $E_{h\lambda}$ are the irradiances due to the incident sunlight, ambient light and skylight. Since the scene geometry and the illumination geometry are known, they can be calculated according to (3), (4) and (5). Since the relative weights $k_s$, $k_u$ and $k_h$ were used, $\varrho_{d\lambda}$ is also known up to a constant factor only.

Thirion (Thirion, 1992) determines the reflectance of a material without considering skylight, wavelength dependency and the effect of the atmosphere.

Fig. 4 System overview (the programs PREPARE, CONVERT, RENDER and VIEW with the input images, the material library and the environment descriptions).
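Per wavelength, the first inversion is a small redundant linear system. A sketch under the stated assumptions ($k_d = 1$, $\varrho_{d\lambda}$ known up to a factor); the convention that the per-sample irradiances are evaluated by (3)-(5) with unit weighting factors, so that (9) is linear in $k_s$, $k_u$, $k_h$, is an interpretation, and the sample values are illustrative.

```python
import numpy as np

def fit_daylight_weights(L_o, E_unit, rho_d=1.0):
    """Solve the redundant linear system (9) for the relative weights
    k_s, k_u, k_h from n >= 3 true object colors L_o (atmosphere already
    removed by the second inversion of the atmospheric model).
    E_unit: (n, 3) irradiances per sample, computed by (3)-(5) with the
    weighting factors set to 1 (assumed convention). In the paper this
    is done per wavelength and the weights are then averaged."""
    A = rho_d * np.asarray(E_unit)          # n x 3 coefficient matrix
    b = np.asarray(L_o)                     # n right-hand sides
    k, *_ = np.linalg.lstsq(A, b, rcond=None)
    return k                                # [k_s, k_u, k_h], relative weights

# Illustrative samples receiving different mixes of sun, ambient and sky.
E = [[0.9, 3.14, 1.8],    # sunlit sample
     [0.0, 3.14, 1.8],    # sample in shadow, full sky
     [0.0, 3.14, 0.4]]    # sample under an overhang, little sky
L = [4000.0, 320.0, 95.0]
print(fit_daylight_weights(L, E))
```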
5. ENVIRONMENT DESCRIPTIONS

The natural environment description (Fig. 4) contains geometrical and non-geometrical data about the natural environment of the planned building: atmospheric parameters ($L_{\infty\lambda}$, $\tau_{a\lambda}$, $d_a$), illumination parameters ($\hat{d}_s$, $\Omega_s$, $k_s$, $k_u$, $m_h$, $\hat{d}_h$, $k_h$, $\bar{L}_\lambda$) and polygons (vertices, material attribute, image attribute). Most of this information is retrieved from the input images during the interactive preprocessing step (section 4). Additionally, the natural environment description contains the interior and exterior orientation and the file name of each of the input images.
The artificial environment description (Fig. 4) contains the CAD model of the planned building with additional artificial light sources, if required. The program CONVERT (Fig. 4) converts data from the widely used DXF format (Autodesk, 1988) to the required data format.
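Read as data models, the two environment descriptions might be sketched as follows; the field names and types are illustrative assumptions, not the programs' actual file format.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class NaturalPolygon:
    vertices: np.ndarray          # coplanar vertex world coordinates, shape (n, 3)
    material: str                 # material attribute: reference into the material library
    image: int                    # image attribute: default input image for rho_d lookups

@dataclass
class NaturalEnvironment:
    L_inf: np.ndarray             # horizon color per wavelength sample
    tau_a: np.ndarray             # spectral transmittance, with d_a = 1
    d_s: np.ndarray               # direction to the sun
    omega_s: float                # solid angle of incident sunlight [sr]
    k_s: float                    # sunlight weight
    k_u: float                    # ambient light weight
    d_h: np.ndarray               # direction to the zenith
    m_h: int                      # sky hemisphere discretization parameter
    k_h: float                    # skylight weight
    L_bar: np.ndarray             # relative spectral distribution of the daylight
    polygons: list[NaturalPolygon] = field(default_factory=list)
    images: list[str] = field(default_factory=list)  # file names; orientations kept alongside

@dataclass
class ArtificialEnvironment:
    objects: list = field(default_factory=list)      # CAD model converted from DXF by CONVERT
    lights: list = field(default_factory=list)       # additional artificial light sources
```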
6. RENDERING

During the final rendering step (program RENDER) (Fig. 4), the input images and the natural and artificial environment descriptions (section 5) are used to generate the output images. These show the planned building embedded in the existing environment from the perspectives of the input images. The rendering algorithm is an extension of conventional ray-tracing. Ray-tracing was introduced by Whitted (Whitted, 1980). It is assumed that the reader is familiar with the principle of recursive ray-tracing in object space.
6.1 Setup

One of the input images is selected as the current input image. The current camera is the camera associated with the current input image. It represents the observer in object space and remains unchanged during the generation of the current output image.
6.2 Independent Pixel Processing

A natural object or natural light source is part of the natural environment description (section 5). Each facet of the discretized sky hemisphere (section 3.3) is treated as an individual natural light source. An artificial object or artificial light source is part of the artificial environment description (section 5). A primary ray originates at the center of projection of the current camera. A secondary ray is generated after reflection or refraction at a material interface.

Several cases are identified:

Case A. A primary ray does not hit any object: The intersection point $p$ of the ray and the image plane of the current camera is mapped from world coordinates to image coordinates $p'$ of the current input image. The RGB triplet of the reconstructed image function at $p'$ is converted to a spectral distribution function $L_\lambda$ (appendix A) and returned.

Case B. A secondary ray does not hit any object: The horizon color $L_{\infty\lambda}$ (sections 3.2 and 4.2) is returned if the ray direction is above the horizon.

Case C. Any ray hits an artificial object: The illumination model (6) is evaluated at the intersection point $p$ of the ray and the artificial object under consideration of the illuminating natural and artificial light sources. Subsequently, the atmospheric model (2) is evaluated based on the known distance $d$ along the ray and the result $L_\lambda$ is returned. Depending on the object material, a reflected and/or a refracted ray may be recursively traced. All material properties come from the member of the material library (Fig. 4) which is referenced through the object's material attribute.

Case D. Any ray hits a natural object: The intersection point $p$ of the ray and the natural object is mapped from world coordinates to image coordinates $p'$ of one of the input images. This is the current input image if the ray is a primary ray; otherwise it is the input image referenced through the polygon's image attribute (section 4.1). The RGB triplet of the reconstructed image function at $p'$ is converted to a spectral distribution function $L_\lambda$ (appendix A) which represents the apparent color of the natural object. The true color $L_{o\lambda}$ of the natural object is determined by the second inversion of the atmospheric model (section 4.2) based on the known distance $d_o$ between $p$ and the camera associated with the selected input image (the effect of the atmosphere is eliminated). The diffuse reflectance $\varrho_{d\lambda}$ of the object material is determined from $L_{o\lambda}$ by the second inversion of the illumination model (section 4.3). All other material properties, including $k_d$, come from the member of the material library (Fig. 4) which is referenced through the polygon's material attribute.

Three subcases of case D are identified:

Case D1. The natural object is illuminated by an artificial light source: The illumination model (6) is evaluated at $p$ under consideration of the illuminating artificial light source and the result is added to the true object color $L_{o\lambda}$. Subsequently, the atmospheric model (2) is evaluated based on the known distance $d$ along the ray and the result $L_\lambda$ is returned.

Case D2. The natural object is illuminated by a natural light source: The illumination model is not evaluated since the illumination effect at $p$ is already represented by the true object color $L_{o\lambda}$. The atmospheric model (2) is evaluated based on the known distance $d$ along the ray and the result $L_\lambda$ is returned.

Case D3. The natural object is not illuminated by a natural light source due to an artificial object only: The irradiance $E_\lambda$ due to the natural light source is calculated at $p$ as if there were no artificial objects in the scene. The illumination model (6) is evaluated at $p$ with a negative irradiance $-E_\lambda$ and the result is added to the true object color $L_{o\lambda}$. Subsequently, the atmospheric model (2) is evaluated based on the known distance $d$ along the ray and the result $L_\lambda$ is returned.

Depending on the object material, a reflected and/or a refracted ray may be recursively traced for each of the three subcases D1 - D3. A sketch of the case D bookkeeping is given at the end of this section.

6.3 Change of Perspective

Restarting at the setup step (section 6.1) with another current input image and the corresponding current camera allows a different view of the scene to be generated without any changes in the natural or artificial environment description.

6.4 Output and Viewing

The program VIEW (Fig. 4) displays the finished part of an image whose computation is still in progress. This allows the calculation of unpromising images to be interrupted. VIEW may run on a local workstation while RENDER runs concurrently on a remote host. Image data is transmitted in ASCII format through a UNIX pipe.
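The bookkeeping of case D and its subcases can be condensed into a single function for one surface point, as sketched below. All inputs are single-wavelength toy values and the scene logic is reduced to flags; this helper-free form illustrates the negative-irradiance extension, not the actual RENDER implementation.

```python
def shade_natural_point(L_apparent, d_image, d_ray, tau, L_inf,
                        rho_d, k_d, E_sun, E_art, sun_blocked_by_artificial):
    """Sketch of case D for one point on a natural object:
    - true color by the second inversion of the atmospheric model, eq. (8)
    - case D1: add illumination by an artificial light source, eq. (6)
    - case D3: subtract natural illumination blocked only by an artificial
      object, by passing a negative irradiance to eq. (6)
    - finally re-apply the atmospheric model (2) along the current ray."""
    L_o = (L_apparent - L_inf) * tau ** (-d_image) + L_inf   # eq. (8)
    if E_art > 0.0:                                          # case D1
        L_o += k_d * rho_d * E_art
    if sun_blocked_by_artificial:                            # case D3
        L_o += k_d * rho_d * (-E_sun)                        # negative irradiance
    # case D2 needs no action: natural illumination is already in L_o
    t = tau ** d_ray
    return L_o * t + L_inf * (1.0 - t)                       # eq. (2)

# Toy numbers: a lawn point seen at distance 120 in the input image and 80
# along the current ray; a new building blocks the sun and a lamp adds light.
print(shade_natural_point(L_apparent=0.31, d_image=120.0, d_ray=80.0,
                          tau=0.9995, L_inf=0.45, rho_d=0.2, k_d=1.0,
                          E_sun=1.2, E_art=0.15,
                          sun_blocked_by_artificial=True))
```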
7. EXAMPLES

Three examples will be presented in this section. Additional information about the examples is given in Tab. 1.

Example 1. Gallery: This is the test scene on the campus of ETH-Hoenggerberg, Zurich. Fig. 5, top left and bottom left, shows the natural environment on a sunny day. The gallery (supports, roof, floor), the horizontal lawn plane and the main buildings in the background were approximated by planar polygons during preprocessing (section 4.1). The horizon color $L_{\infty\lambda}$ was set equal to the sky color. Samples on the lawn plane were used to determine the spectral transmittance $\tau_{a\lambda}$ of the atmosphere by the first inversion of the atmospheric model (section 4.2). Samples of various incident illumination were selected on the concrete walls of the background building on the left and used to determine the relative weights $k_s$, $k_u$, $k_h$ of the daylight components by the first inversion of the illumination model (section 4.3). Fig. 5, top right and bottom right, shows some primitive solids embedded in the natural environment. Note the shadow of the natural support cast onto the artificial torus, the geometrically correct interpenetration of the cone with the gallery and the shadow of the torus cast onto the ground. Both perspectives were rendered with no change in the natural or artificial environment description, simply by selecting different current input images with the corresponding current cameras.

Fig. 5 Example 1. Gallery.
Example 2. Bridge: The example shows the planned wooden bridge at Bueren an der Aare, Kanton Bern, which is intended to replace the historical bridge recently destroyed by a fire. Fig. 6, top left, shows the natural environment on a sunny day. The house on the left (wall, balconies, roof) and the horizontal water plane were approximated by planar polygons during preprocessing (section 4.1). The horizon color $L_{\infty\lambda}$ was set equal to the sky color. Samples on the water plane were used to determine the spectral transmittance $\tau_{a\lambda}$ of the atmosphere by the first inversion of the atmospheric model (section 4.2). Samples of various incident illumination were selected on the house and used to determine the relative weights $k_s$, $k_u$, $k_h$ of the daylight components by the first inversion of the illumination model (section 4.3). The supports will be used for the new bridge. Fig. 6, top right, bottom left and bottom right, shows the new bridge from two different perspectives. Note the shadow and the specular reflection on the water surface. In Fig. 6, top right and bottom right, the unnatural regularity of the reflection is destroyed by a stochastic perturbation added to the local normal vector of the water surface. The perturbation was generated based on the method of Haruyama & Barsky (Haruyama, 1984), which was extended to three dimensions and used as a solid texture. Blinn (Blinn, 1978) introduced surface normal perturbation. Solid textures were introduced by Peachey (Peachey, 1985) and Perlin (Perlin, 1985).

Fig. 6 Example 2. Bridge.
Example 3. Courtyard: The example shows a project for the redesign of a courtyard in the city of Zurich. Fig. 7, top left, shows the natural environment on a winter day with overcast sky. The illumination is completely diffuse and consists of ambient light and skylight only. Parts of the houses and the horizontal ground plane were approximated by planar polygons during preprocessing (section 4.1). The atmospheric effect has not been taken into account. Samples of various incident illumination were selected on the white house in the corner and used to determine the relative weights $k_s$, $k_u$, $k_h$ of the daylight components by the first inversion of the illumination model (section 4.3). Fig. 7, top right, bottom left and bottom right, shows the planned building from various perspectives. Note the specular reflection of natural and artificial objects in the windows. There is a diffuse shadow below the arcades. The somewhat noisy appearance is mainly due to a coarse discretization of the sky hemisphere by 4 sky facets.

Fig. 7 Example 3. Courtyard.

All example images were rendered with a resolution of 512 x 512 pixels using an antialiasing strategy based on adaptive stochastic sampling (Cook, 1984) (Cook, 1986). Based on the statistical variance of the returned color values, 4, 8, 12 or 16 rays per pixel were used, resulting in an average per image of between 4.0 and 4.5 rays per pixel. The relative spectral distribution function $\bar{D}_{65\lambda}$ of the CIE standard illuminant D65 (Judd, 1975) was used for the relative spectral distribution function $\bar{L}_\lambda$ of the three daylight components. For the calculation of the relative weights $k_s$, $k_u$, $k_h$ of the daylight components by the first inversion of the illumination model (section 4.3), $\varrho_{d\lambda} = 1$ was assumed for the material at all sample points. This seems to be a reasonable choice since the material was always gray or white and $\varrho_{d\lambda}$ has to be known up to a constant factor only. Subsequently, $k_s$, $k_u$ and $k_h$ were used as calculated. For the simulation, spectral curves were reduced to a total of 9 spectral samples within the visible spectrum. After the simulation, spectral distribution functions were reconstructed from the spectral samples and converted to the RGB color space of a specific output device (Hall, 1989). Image rendering was performed on a CONVEX-C2 computer with poorly vectorizing code.
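A minimal sketch of the variance-driven sampling schedule described above, assuming a `shade(x, y)` callback that casts a primary ray through the image point (x, y); the variance threshold is an arbitrary placeholder.

```python
import random
import statistics

def sample_pixel(shade, px, py, threshold=0.01):
    """Adaptive stochastic sampling in the spirit of (Cook, 1984; Cook, 1986):
    start with 4 jittered rays per pixel and add batches of 4 (up to 16)
    while the variance of the returned color values exceeds a threshold."""
    values = []
    for batch in range(4):                    # 4, 8, 12 or 16 rays per pixel
        for _ in range(4):
            x = px + random.random()          # jittered sub-pixel position
            y = py + random.random()
            values.append(shade(x, y))
        if statistics.pvariance(values) <= threshold:
            break
    return sum(values) / len(values)
```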
8. SUMMARY

The paper presents a method to produce realistic computer-generated color images which aid the judgement of the aesthetic properties of planned buildings. The planned buildings are embedded into the existing environment by photomontage. During an interactive preprocessing step, a three-dimensional description of the existing environment is retrieved
from the site photographs (input images). In a second step,
the input images, the information retrieved from the input
images (the natural environment description) and the CAD
model of the planned building (the artificial environment description) are used to generate the final images. These show
the artificial object (the planned building) embedded in the
existing environment from the perspectives of the input images.
Consistency with the models is maintained during preprocessing (section 4) and rendering (section 6). The models
(section 3) are formulated in terms of the wavelength. Therefore, RGB triplets retrieved from input images are converted
to equivalent spectral distribution functions (appendix A).
Extensions to previous methods have been presented for retrieving atmospheric, illumination and material parameters
complexity of the scene geometry which can be handled by
the interactive preprocessor. However, the preprocessing
step has to be done only once for a given scene. Once the
preprocessing step is completed, case studies can be investigated flexibly by simply changing the artificial environment
description and rendering a series of new images.
Conventional ray-tracing is strictly additive, whereas the presented approach extends ray-tracing to an additive and subtractive process: sometimes illumination has to be added, sometimes it has to be removed. The extension is straightforward and implied by the nature of the problem for which it was designed. No changes in the illumination model (6) are required. The key is to pass a negative irradiance to the illumination model in case D3.
The presented approach allows an arbitrary number of artificial objects and artificial light sources to be added to an existing scene. The rendering algorithm is based on ray-tracing, which works entirely in three-dimensional object space and handles the interaction between the artificial object and the existing environment with respect to hiding, shadowing and interreflection.

The system consists of the programs PREPARE, CONVERT, RENDER and VIEW (Fig. 4). Rendering parameters allow speed to be traded off against image quality. At an early stage of the design process, quick looks can be generated, e.g. at a low resolution or by using simpler models. Tabulated physical data is used directly in an extendable library. This frees the user from inventing some fundamental material and light source properties, though some of the coefficients still have to be selected empirically.

The examples show some of the subtle effects which make up the overall appearance of a building: the three-dimensional shapes and their proportions, the surface textures and colors which are determined by the selected materials, the proportions between light and shadow, the integration of the object into its environment and the effects which are determined by daytime, season, current weather conditions and by the selection of the observer position with respect to foreground and background (Leonhardt, 1984).

Tab. 1 Example data.

                 example 1   example 2   example 3
input images     3           8           9
nat. polygons    17          38          16
Ω_s [sr]         0.0004      0.0004      -
k_s              4111.65     4639.82     -
k_u              0.14        0.13        0.53
m_h              2           2           1
k_h              0.72        0.51        0.65
CPU [min]        201..239    225..292    153..221
9. ACKNOWLEDGEMENTS

I would like to thank Prof. Dr. A. Gruen and Z. Parsic, Institute of Geodesy and Photogrammetry at the ETH Zurich, who enabled me to use the institute's photographic equipment, Dr. U. Walder for the construction plans of the bridge in example 2 (project by Walder & Marchand AG, Guemligen, Moor & Hauser AG, Bern) and Dr. G. Scheibler, the architect of the courtyard project in example 3 (project by Architekturwerkstatt Ruetschistrasse, Zurich). Special thanks to Z. Despot for the measurement of control points and to L. Hurni and H. Stoll, Institute of Cartography at the ETH Zurich, for the reproduction of the images.
10. REFERENCES

Autodesk Inc., 1988. AutoCAD 10 Reference Manual. Appendix C: Drawing Interchange and File Formats.

BD&C, 1990. Computer Graphics Facilitate Approvals. Building Design & Construction, 31 (7): 16.

Blinn, J. F., 1978. Simulation of Wrinkled Surfaces. Proceedings SIGGRAPH '78. Computer Graphics, 12 (3): 286-292.

Cook, R. L., T. Porter and L. Carpenter, 1984. Distributed Ray Tracing. Proceedings SIGGRAPH '84. Computer Graphics, 18 (3): 137-145.

Cook, R. L., 1986. Stochastic Sampling in Computer Graphics. ACM Transactions on Graphics, 5 (1): 51-72.

Doi, A., M. Aono, N. Urano and K. Sugimoto, 1991. Data visualization using a general-purpose renderer. IBM Journal of Research and Development, 35 (1/2): 45-58.

Durisch, P., 1992. Ein Programm für die Visualisierung projektierter Bauwerke mittels Fotomontage. Ph.D. Dissertation No. 9599, ETH Zurich. vdf, Zurich.

Feibush, E. A., M. Levoy and R. L. Cook, 1980. Synthetic Texturing Using Digital Filters. Proceedings SIGGRAPH '80. Computer Graphics, 14 (3): 294-301.

Glassner, A. S., 1989. How to Derive a Spectrum from an RGB Triplet. IEEE Computer Graphics & Applications, 9 (4): 95-99.

Hall, R. A. and D. P. Greenberg, 1983. A Testbed for Realistic Image Synthesis. IEEE Computer Graphics & Applications, 3 (8): 10-20.

Hall, R. A., 1989. Illumination and Color in Computer Generated Imagery. Springer, New York.

Haruyama, S. and B. A. Barsky, 1984. Using Stochastic Modeling for Texture Generation. IEEE Computer Graphics & Applications, 4 (3): 7-19.

Judd, D. B. and G. Wyszecki, 1975. Color in Business, Science and Industry. Third Edition. John Wiley & Sons, New York.

Kaneda, K., F. Kato, E. Nakamae, T. Nishita, H. Tanaka and T. Noguchi, 1989. Three Dimensional Terrain Modeling and Display for Environmental Assessment. Proceedings SIGGRAPH '89. Computer Graphics, 23 (3): 207-214.

Kaneda, K., T. Okamoto, E. Nakamae and T. Nishita, 1991. Photorealistic image synthesis for outdoor scenery under various atmospheric conditions. The Visual Computer, 7: 247-258.

Leonhardt, F., 1984. Brücken. Ästhetik und Gestaltung. 2. Auflage. Deutsche Verlags-Anstalt, Stuttgart.

Maver, T. W., C. Purdie and D. Stearn, 1985. Visual Impact Analysis - Modelling and Viewing the Natural and Built Environment. Computers & Graphics, 9 (2): 117-124.

Nakamae, E., K. Harada, T. Ishizaki and T. Nishita, 1986. A Montage Method: The Overlaying of the Computer Generated Images onto a Background Photograph. Proceedings SIGGRAPH '86. Computer Graphics, 20 (4): 207-214.

Nakamae, E., T. Ishizaki, T. Nishita and S. Takita, 1989. Compositing 3D Images with Antialiasing and Various Shading Effects. IEEE Computer Graphics & Applications, 9 (2): 21-29.

Peachey, D. R., 1985. Solid Texturing of Complex Surfaces. Proceedings SIGGRAPH '85. Computer Graphics, 19 (3): 279-286.

Perlin, K., 1985. An Image Synthesizer. Proceedings SIGGRAPH '85. Computer Graphics, 19 (3): 287-296.

Rüger, W., J. Pietschner and K. Regensburger, 1987. Photogrammetrie. 5. Auflage. VEB Verlag für Bauwesen, Berlin.

Schachter, B. J., 1983. Computer Image Generation. John Wiley & Sons, New York.

Takagi, A., H. Takaoka, T. Oshima and Y. Ogata, 1990. Accurate Rendering Technique Based on Colorimetric Conception. Proceedings SIGGRAPH '90. Computer Graphics, 24 (4): 263-272.

Terzopoulos, D. and A. Witkin, 1988. Physically Based Models with Rigid and Deformable Components. IEEE Computer Graphics & Applications, 8 (6): 41-51.

Thirion, J.-P., 1992. Realistic 3D Simulation of Shapes and Shadows for Image Processing. CVGIP: Graphical Models and Image Processing, 54 (1): 82-90.

Uno, S. and H. Matsuka, 1979. A General Purpose Graphic System for Computer Aided Design. Proceedings SIGGRAPH '79. Computer Graphics, 13 (2): 25-32.

Verbeck, C. P. and D. P. Greenberg, 1984. A Comprehensive Light Source Description for Computer Graphics. IEEE Computer Graphics & Applications, 4 (7): 66-75.

Whitted, T., 1980. An Improved Illumination Model for Shaded Display. Communications of the ACM, 23 (6): 343-349.

Wolf, P. R., 1983. Elements of Photogrammetry. Second Edition. McGraw-Hill, New York.

Wyszecki, G. and W. S. Stiles, 1982. Color Science: Concepts and Methods, Quantitative Data and Formulas. Second Edition. John Wiley & Sons, New York.
A. SPECTRAL DISTRIBUTIONS FROM RGB TRIPLETS

The perception of color is determined by the spectral distribution function of the electromagnetic energy entering the eye and by the properties of the human visual system. The color sensation produced by a given color stimulus $L_\lambda$ can be quantified by a triplet of numbers $X$, $Y$, $Z$:

$$X = k \int_{380}^{780} L_\lambda \, \bar{x}_\lambda \, d\lambda; \quad Y = k \int_{380}^{780} L_\lambda \, \bar{y}_\lambda \, d\lambda; \quad Z = k \int_{380}^{780} L_\lambda \, \bar{z}_\lambda \, d\lambda \qquad (11)$$

$X, Y, Z \ge 0$ are the tristimulus values in the CIE 1931 standard colorimetric system (Judd, 1975) (Wyszecki, 1982). $L_\lambda$ is the color stimulus in terms of spectral radiance. The CIE color-matching functions $\bar{x}_\lambda$, $\bar{y}_\lambda$, $\bar{z}_\lambda$ represent the CIE standard colorimetric observer. $k$ is a normalizing factor.

The chromaticity $[x \; y \; z]^T$ of a color is given by

$$x = \frac{X}{X+Y+Z}; \quad y = \frac{Y}{X+Y+Z}; \quad z = \frac{Z}{X+Y+Z} \qquad (12)$$

with $x + y + z = 1$. Equation (12) maps the tristimulus values $F = [X \; Y \; Z]^T$ onto the $(X + Y + Z = 1)$-plane in CIE XYZ color space. The two-dimensional $(x, y)$-plot is the CIE chromaticity diagram (Fig. 8). The chromaticities of all physically realizable colors are contained within the area bounded by the spectrum locus and the straight purple line. The spectrum locus is given by the chromaticities of the pure spectral colors.

Fig. 8 CIE chromaticity diagram.

Two color stimuli are metameric with respect to a given observer $\bar{x}_\lambda$, $\bar{y}_\lambda$, $\bar{z}_\lambda$ if their respective tristimulus values are the same but their spectral distribution functions are different. By (11), metameric color stimuli are mapped to the same tristimulus values. The inverse problem of finding a color stimulus with given tristimulus values therefore has (infinitely) many metameric solutions (Fig. 9).

The RGB color space is defined by the chromaticities $\bar{r}$, $\bar{g}$, $\bar{b}$ of the RGB primaries and the chromaticity $\bar{w}$ of the RGB white point. The chromaticities of all colors which are realizable in the given RGB color space are contained within the triangle $\bar{r}$, $\bar{g}$, $\bar{b}$ (Fig. 8). The conversion from CIE XYZ color space to RGB color space is given by the 3x3 matrix $M$, which is derived from $\bar{r}$, $\bar{g}$, $\bar{b}$ and $\bar{w}$ (Judd, 1975) (Hall, 1989):

$$[R \; G \; B]^T = M \, [X \; Y \; Z]^T \qquad (13)$$

It is assumed that $\bar{r}$, $\bar{g}$, $\bar{b}$ and $\bar{w}$ are known. In the examples (section 7), the definition values of the standard NTSC monitor (Hall, 1989) were used.

The conversion from RGB color space to $L_\lambda$ is decomposed into a conversion from RGB color space to CIE XYZ color space and a conversion from CIE XYZ color space to $L_\lambda$ (Fig. 9).

Fig. 9 Color conversions.

The problem of finding color stimuli $L_\lambda$ for all RGB triplets in the given RGB color space is equivalent to the problem of finding color stimuli $L_\lambda$ for all tristimulus values $F$ the chromaticities of which are contained within the triangle $\bar{r}$, $\bar{g}$, $\bar{b}$.

Given are three basis functions $L_{1\lambda}, L_{2\lambda}, L_{3\lambda} \ge 0$ with tristimulus values $F_1$, $F_2$, $F_3$ and chromaticities $f_1 = \bar{b}$, $f_2 = \bar{g}$, $f_3 = \bar{r}$. A color stimulus $L_\lambda \ge 0$ with given tristimulus values $F = [X \; Y \; Z]^T$ is expressed as a linear combination

$$L_\lambda = c_1 \, L_{1\lambda} + c_2 \, L_{2\lambda} + c_3 \, L_{3\lambda} \qquad (14)$$

of the basis functions with the weights $c_1$, $c_2$, $c_3$. Inserting (14) into (11) yields

$$F = [F_1 \; F_2 \; F_3] \, [c_1 \; c_2 \; c_3]^T \qquad (15)$$

premultiplying (15) by $M$ yields

$$M F = [M F_1 \; M F_2 \; M F_3] \, [c_1 \; c_2 \; c_3]^T = W \, [c_1 \; c_2 \; c_3]^T \qquad (16)$$

and solving (16) for $c_1$, $c_2$, $c_3$ yields

$$[c_1 \; c_2 \; c_3]^T = W^{-1} \, M F \qquad (17)$$

$L_{1\lambda}$, $L_{2\lambda}$, $L_{3\lambda}$ and $W^{-1}$ have to be determined only once for a given RGB color space. Subsequently, finding a color stimulus $L_\lambda$ with given RGB color values consists of calculating $c_1$, $c_2$, $c_3$ by (17) and linearly combining $L_{1\lambda}$, $L_{2\lambda}$, $L_{3\lambda}$ by (14). This can be performed efficiently.

The problem which remains is how to generate basis functions $L_{i\lambda} \ge 0$ ($i = 1..3$) with given tristimulus values $F_i$. A simple method is explained below:

The visible spectrum from $\lambda = 380$ nm to $\lambda = 780$ nm is subdivided into three domains: $\lambda_1..\lambda_2$, $\lambda_2..\lambda_3$ and $\lambda_3..\lambda_4$, such that $380 \text{ nm} = \lambda_1 < \lambda_2 < \lambda_3 < \lambda_4 = 780 \text{ nm}$. The basis function $L_{i\lambda}$ ($i = 1..3$) is expressed as a linear combination

$$L_{i\lambda} = c_{1i} \, L_{1i\lambda} + c_{2i} \, L_{2i\lambda} + c_{3i} \, L_{3i\lambda} \qquad (18)$$

of piecewise constant functions $L_{ki\lambda}$ ($k = 1..3$) with

$$L_{ki\lambda} = 1 \text{ for } \lambda_k \le \lambda < \lambda_{k+1}, \quad L_{ki\lambda} = 0 \text{ otherwise} \qquad (19)$$

Inserting (18) into (11), premultiplying by $M$ and solving for $c_{1i}$, $c_{2i}$, $c_{3i}$ (see above) yields the basis function $L_{i\lambda}$. For a constant pair $\lambda_2$ and $\lambda_3$, some of the weights $c_{1i}$, $c_{2i}$, $c_{3i}$ can be negative. This can result in a piecewise negative color stimulus $L_\lambda$ which is physically not meaningful. Therefore, the weights $c_{1i}$, $c_{2i}$, $c_{3i}$ are calculated for each pair $\lambda_2$ and $\lambda_3$ within the visible spectrum and the best solution is selected:

One or more metameric solutions with $c_{1i}, c_{2i}, c_{3i} \ge 0$ exist: The candidate with $\max(c_{1i}, c_{2i}, c_{3i}) = \min$ is selected for $L_{i\lambda}$. This condition tries to prevent the energy of $L_{i\lambda}$ from being concentrated in a narrow wavelength band. (The linear combination of three line spectra would result in a color stimulus $L_\lambda$ which is zero almost everywhere.)

No solution $c_{1i}, c_{2i}, c_{3i} \ge 0$ exists (this can happen for chromaticities near the spectrum locus): A different set of candidates is considered for $L_{i\lambda}$. They are piecewise constant with either value 0 or value $s_i \ge 0$ and with exactly two transitions between 0 and $s_i$, one at $\lambda_2$ and the other at $\lambda_3$. $F_i' = [X_i' \; Y_i' \; Z_i']^T$ are the tristimulus values of the candidate. For each candidate, $s_i$ is calculated such that $Y_i' = Y_i$, and the candidate with $\lVert F_i' - F_i \rVert_2 = \min$ is selected for $L_{i\lambda}$.

The generated color stimuli are non-negative and piecewise constant with 6 transitions within the visible spectrum. They are generated by a linear combination of three piecewise constant basis functions, each with 2 transitions within the visible spectrum. The basis functions depend on the given RGB color space.

Glassner (Glassner, 1989) generates smooth but not always non-negative color stimuli by using trigonometric basis functions which do not depend on the given RGB color space.
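For concreteness, a sketch of the conversion at rendering time: since $M$ maps XYZ to RGB, the product $M F$ in (17) is just the given RGB triplet, so one 3x3 matrix product and one linear combination suffice. The `W_inv` matrix and basis spectra below are random placeholders; in practice they would be precomputed by (18)-(19) for the chosen RGB space.

```python
import numpy as np

def rgb_to_spectrum(rgb, W_inv, basis):
    """Eq. (17) followed by eq. (14). Because M maps XYZ to RGB, the
    product M F in (17) equals the given RGB triplet, so the conversion
    reduces to a 3x3 product plus a linear combination of three
    precomputed, non-negative basis spectra."""
    c = W_inv @ np.asarray(rgb)            # weights c1, c2, c3 by (17)
    return c @ basis                       # (3,) @ (3, n_lambda) -> spectrum samples

# Illustrative use with 9 spectral samples, as in section 7; W_inv and the
# basis spectra here are placeholders, not the paper's actual values.
basis = np.maximum(0.0, np.random.default_rng(0).normal(0.5, 0.2, (3, 9)))
W_inv = np.eye(3)
print(rgb_to_spectrum([0.8, 0.6, 0.4], W_inv, basis))
```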