International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVIII, Part 5
Commission V Symposium, Newcastle upon Tyne, UK. 2010
3-D LEAST SQUARES TRACKING IN TIME-RESOLVED TOMOGRAPHIC
RECONSTRUCTION OF DENSE FLOW MARKER FIELDS
Patrick Westfeld and Hans-Gerd Maas
Institute of Photogrammetry and Remote Sensing
Technische Universität Dresden, Helmholtzstr. 10, D-01062 Dresden, Germany
patrick.westfeld@tu-dresden.de, www.tu-dresden.de/ipf/photo/
Commission V, WG V/1
KEY WORDS: PIV Processing, Tomographic Reconstruction, Tracking, Flow Measurement Techniques
ABSTRACT:
Flow measurement techniques determine velocity vector fields in liquid or gas flows. In fluid mechanics, many methods are based on
seeding particles to visualize the flow, which is imaged by an adequate camera system. The tomo-PIV (tomographic particle image
velocimetry) technique presented in this paper generates time-resolved volumetric reconstructions of a particle constellation from a
limited number of synchronized camera views. The advantage of tomo-PIV in contrast to the established 3-D PTV (particle tracking
velocimetry) technique is its insensitivity to high seeding densities. While 3-D PTV comes with the necessity to detect, identify and
match individual particles for establishing multi-image and multi-temporal correspondences, tomo-PIV facilitates volume-based
tracking schemes applied to voxel cuboids filled with particles. The paper presents improved photogrammetric techniques for the
determination of 3-D flow velocity fields. This includes a multi-camera system configuration and calibration, approaches for a full
tomographic reconstruction in gas and liquid, and 3-D least squares tracking for volume-based tracking in object space.
1. INTRODUCTION
Elsinga et al. (2005) have proposed an approach to 3-D PIV,
which is based on a tomographic reconstruction of the
observation volume and subsequent 3-D cross correlation in
time-resolved voxel data. Tomographic PIV generates a
tomographic reconstruction of a particle constellation from a
limited number of camera views, for instance by applying
Herman and Lent’s (1976) MART algorithm (multiplicative
algebraic reconstruction technique). 3-D velocity field
information can be obtained from time-resolved voxel data by
dividing the data into cuboids of a pre-defined size and tracking
these cuboids. Herein, 3-D cross correlation is a straightforward
enhancement when advancing from 2-D PIV to 3-D PIV.
Disadvantages of both MART and 3-D cross correlation can be seen in the computational effort, which causes rather long processing times.
Putze & Maas (2008) and Maas et al. (2009) have already
introduced more efficient approaches on volumetric
reconstruction and particle tracking. This article summarizes the
efforts of our working group and presents results which prove
the suitability of photogrammetric techniques in voxel data
sequences analysis.
Figure 1. Vortex ring at one epoch.
2. SENSOR AND DATA
The tomographic reconstruction methods and the cuboid
tracking have been implemented and tested in two different
experimental setups. First, a vortex ring in a water tank is
illuminated by a 3-D laser beam device. A rotating mirror
generates parallel light sheet planes with a thickness of 10 mm.
This volume of about (10×10×1) cm³ is recorded by a system of four synchronized high-speed cameras
(1024×1024 pixel, 1000 fps) equipped with telecentric lenses (Fig. 1 and 2). Neutrally buoyant seeding
particles are injected into the center of a vortex generator. See (Kitzhofer et al., 2009) for detailed
specifications of this experimental setup.
Figure 2. Experimental setup with telecentric lenses (Kitzhofer
et al., 2009).
Using telecentric lenses results in a parallel projection for the
observed volume. A consideration of multi-media geometries is
not necessary for this special case, but the collinearity equations have to be adapted to a parallel projection in space:

x' = pp_x - K \, (r_{11}(X - X_0) + r_{21}(Y - Y_0) + r_{31}(Z - Z_0)) + \Delta x'
y' = pp_y - K \, (r_{12}(X - X_0) + r_{22}(Y - Y_0) + r_{32}(Z - Z_0)) + \Delta y'      (1)

where
cc:                    Focal length
x', y':                Image point
pp_x, pp_y:            Principal point
\Delta x', \Delta y':  Correction functions
X_0, Y_0, Z_0:         Projection center
X, Y, Z:               Object point
r_{r,c}:               Elements of a rotation matrix R
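To make the camera models concrete, the following minimal numpy sketch evaluates the parallel projection of Eq. 1 and, for comparison, the central perspective collinearity equations given later as Eq. 2; the rotation angles, scale K, focal length cc and principal point values are purely illustrative and not calibration results from the experiments described here.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix R from three rotation angles (radians)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def project_parallel(X, X0, R, K, pp, dxy=(0.0, 0.0)):
    """Eq. 1: parallel projection of an object point X (telecentric lens)."""
    d = R.T @ (np.asarray(X) - np.asarray(X0))   # d[0] = r11*dX + r21*dY + r31*dZ, etc.
    return np.array([pp[0] - K * d[0] + dxy[0],
                     pp[1] - K * d[1] + dxy[1]])

def project_central(X, X0, R, cc, pp, dxy=(0.0, 0.0)):
    """Eq. 2: central perspective collinearity equations."""
    d = R.T @ (np.asarray(X) - np.asarray(X0))
    return np.array([pp[0] - cc * d[0] / d[2] + dxy[0],
                     pp[1] - cc * d[1] / d[2] + dxy[1]])

# Example call with made-up orientation values:
R = rotation_matrix(0.01, -0.02, 0.5)
print(project_parallel([10.0, 20.0, 30.0], [0.0, 0.0, -500.0], R, K=0.08, pp=(512, 512)))
print(project_central([10.0, 20.0, 30.0], [0.0, 0.0, -500.0], R, cc=35.0, pp=(500, 500)))
```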
The four-camera system was calibrated by taking image
sequences of a target, which was moved through the
observation volume by a 3-D translation stage. From these
reference positions and their respective image coordinates, the
orientation parameters of each camera were determined in a
parallel projection telecentric optics camera model (Eq. 1). The
transition of the optical paths from the camera through the plain
glass interface into the water could be neglected due to the fact
that the cameras were equipped with telecentric lenses
warranting a parallel projection rather than central perspective
projection.
In a second experimental configuration, different particle constellations in a water basin are illuminated
by a fiber optic light source and are captured by a four-camera recording system (1000×1000 pixel; Fig. 3).
Herein, the cameras are equipped with central perspective lenses and are able to capture images with an
interval of a few microseconds between two images in a triggered double exposure mode.

Figure 3. Experimental setup with central perspective lenses.
If used in one medium only, the well-known collinearity equations describe the image ray paths:

x' = pp_x - cc \cdot \frac{r_{11}(X - X_0) + r_{21}(Y - Y_0) + r_{31}(Z - Z_0)}{r_{13}(X - X_0) + r_{23}(Y - Y_0) + r_{33}(Z - Z_0)} + \Delta x'
y' = pp_y - cc \cdot \frac{r_{12}(X - X_0) + r_{22}(Y - Y_0) + r_{32}(Z - Z_0)}{r_{13}(X - X_0) + r_{23}(Y - Y_0) + r_{33}(Z - Z_0)} + \Delta y'      (2)

This collinearity of the image point, the projection center and the object point is not given if particles
are observed in liquids by central perspective lenses. To consider the multimedia geometry, two kinds of ray
tracing approaches are implemented: a forward ray tracing (FRT) and a backward ray tracing (BRT).

For a spatial intersection, the complete image ray path has to be reconstructed (Fig. 4): (i) Intersect the
image ray with the first interface Σ1 and calculate the piercing point P1. (ii) Calculate the angle of
incidence α1 and the angle of refraction α2 according to Snell's law, using the refractive indices of the
media air and glass. (iii) Calculate the refracted direction vector from the incoming direction vector, the
surface normal vector of P1 in Σ1 and the relative refractive index. (iv) Repeat (i)-(iii) for the second
refraction at the following interface Σ2. (v) Finally, the coordinates of an object point X can be calculated
by intersecting the reconstructed ray with a desired depth layer or with two (or more) other ray paths. For a
spatial resection (BRT), the coordinates of the piercing points P1 and P2 have to be calculated sequentially
in an iterative manner by solving a non-linear system of condition equations. These conditions are: (i) The
piercing point P1,2 is located on the plane Σ1,2. (ii) Snell's law has to be fulfilled. (iii) The normal
vector of P1,2 in Σ1,2 as well as the incident and the refracted ray are coplanar. A detailed mathematical
description of FRT and BRT including all necessary equations is given in (Mulsow, 2010).
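A minimal sketch of the forward ray tracing steps (i)-(iv) for plane-parallel interfaces might look as follows; it uses the vector form of Snell's law, and the interface positions, normal vector and refractive indices are assumed example values rather than calibrated parameters from Mulsow's (2010) bundle approach.

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n (vector form of
    Snell's law); n1, n2 are the refractive indices before/after the interface."""
    d, n = d / np.linalg.norm(d), n / np.linalg.norm(n)
    cos_i = -np.dot(n, d)
    if cos_i < 0:                       # make the normal oppose the incoming ray
        n, cos_i = -n, -cos_i
    r = n1 / n2
    k = 1.0 - r**2 * (1.0 - cos_i**2)
    if k < 0:
        raise ValueError("total internal reflection")
    return r * d + (r * cos_i - np.sqrt(k)) * n

def intersect_plane(p, d, p0, n):
    """Piercing point of the ray p + t*d with the plane through p0 with normal n."""
    t = np.dot(p0 - p, n) / np.dot(d, n)
    return p + t * d

# Example: image ray from the projection center through the two interfaces.
n_air, n_glass, n_water = 1.000, 1.517, 1.333                     # assumed indices
normal = np.array([0.0, 0.0, 1.0])                                # parallel interfaces
sigma1, sigma2 = np.array([0, 0, 100.0]), np.array([0, 0, 108.0]) # example planes [mm]

p = np.array([0.0, 0.0, 0.0])            # projection center X0 (example)
d = np.array([0.05, 0.02, 1.0])          # image ray direction (example)

P1 = intersect_plane(p, d, sigma1, normal)                        # step (i)
d1 = refract(d, normal, n_air, n_glass)                           # steps (ii)-(iii)
P2 = intersect_plane(P1, d1, sigma2, normal)                      # step (iv)
d2 = refract(d1, normal, n_glass, n_water)
X = intersect_plane(P2, d2, np.array([0, 0, 150.0]), normal)      # step (v): depth layer
```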
Figure 4. Multimedia geometry.

Mulsow's (2010) flexible multimedia bundle approach is used to determine the interior and exterior camera
parameters as well as the media and interface parameters. The latter, namely the refractive indices and the
normal vectors of the planes, are necessary to reconstruct the image ray paths through the different media
gas, glass and liquid as described above.

3. VOLUMETRIC RECONSTRUCTION

3.1 Principle

Tomographic particle image velocimetry (Tomo-PIV, Elsinga et al., 2005) generates a tomographic
reconstruction of a 3-D particle constellation from typically four camera views and has the potential to
overcome the spatial resolution limit of 3-D PTV (particle tracking velocimetry), which is set by ambiguities
occurring at high seeding densities. A reconstruction can be performed by a MART (multiplicative algebraic
reconstruction technique; Herman & Lent, 1976):

1. Project every voxel into the first image space, using the model equations 1 or 2, respectively. The voxel
is assigned the gray value GV obtained by interpolating the gray value gv1 at the corresponding pixel
position (Fig. 5, left).
2. Project every voxel into the second image space. Multiply
the existing voxel gray value GV with the gray value gv2 of
the corresponding pixel. (Fig. 5, right)
3. Proceed with all other camera views j.
Finally, the voxel space will contain multiplicatively accumulated image intensity information of the
instantaneous particle constellation. It is obvious that only voxels at valid particle positions will show
high values. Repeating the MART for each epoch results in a time-resolved 3-D voxel space representation of
the object space. Disadvantages of a pixel-wise reconstruction and the final realization by the MART can be
seen in the computational effort.
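A compact sketch of the multiplicative accumulation over all camera views described in steps 1-3 is given below; the camera images are assumed to be given, and the hypothetical helper `project_to_image` stands for the mapping of a voxel centre to image coordinates via Eq. 1 or 2.

```python
import numpy as np

def reconstruct_voxel_space(images, project_to_image, grid_shape, voxel_centres):
    """Multiplicative accumulation of image intensities in voxel space
    (simplified single-pass scheme following steps 1-3 above).

    images           : list of 2-D gray value arrays, one per camera view j
    project_to_image : project_to_image(j, X) -> (x, y) image coordinates
    voxel_centres    : array of shape (N, 3) with N = prod(grid_shape) voxel centres
    """
    voxels = np.ones(len(voxel_centres))
    for j, img in enumerate(images):
        for k, X in enumerate(voxel_centres):
            x, y = project_to_image(j, X)
            x0, y0 = int(np.floor(x)), int(np.floor(y))
            if not (0 <= x0 < img.shape[1] - 1 and 0 <= y0 < img.shape[0] - 1):
                voxels[k] = 0.0
                continue
            # bilinear interpolation of the gray value at (x, y)
            fx, fy = x - x0, y - y0
            gv = (img[y0, x0] * (1 - fx) * (1 - fy) + img[y0, x0 + 1] * fx * (1 - fy)
                  + img[y0 + 1, x0] * (1 - fx) * fy + img[y0 + 1, x0 + 1] * fx * fy)
            voxels[k] *= gv   # multiplicative accumulation; only voxels that are
                              # bright in every view keep a high value
    return voxels.reshape(grid_shape)
```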
Figure 5. Principle of a tomographic reconstruction (Putze & Maas, 2008).

3.2 Improvements

Improvements can be seen in the radiometric as well as the geometric reconstruction process and will be
presented and discussed in the following sub-sections.

Algebraic Reconstruction Technique

To fill up the voxel space with gray value information, Herman and Lent's MART can be replaced by a MinART
(minstore algebraic reconstruction technique; Maas et al., 2009), applying a minimum operator rather than a
multiplication to the gray values of all camera pixels in each voxel. A 3-D particle constellation can then
easily be obtained by a thresholding in voxel space.

Volumetric Reconstruction in Gas

For a volumetric reconstruction in gas, the process is based on a multiple projective transformation of each
camera view into each depth layer of a voxel representation of the object space. Compared to pixel-wise
line-of-sight based implementations, this approach saves plenty of computation time. After the initialization
of the voxel space with an adequate resolution, the calculation of the corresponding image coordinates of
each voxel of each depth layer i for each view j is performed either by using homogeneous coordinates as
suggested by Putze & Maas (2008) or, more straightforwardly, by utilizing the projective transformation
directly:

x' = \frac{a_0 + a_1 X_i + a_2 Y_i}{1 + c_1 X_i + c_2 Y_i}, \qquad
y' = \frac{b_0 + b_1 X_i + b_2 Y_i}{1 + c_1 X_i + c_2 Y_i}      (3)

In homogeneous coordinates, it is sufficient to go through the transformation for the corner voxels of a
depth layer i only. All other image coordinates can be obtained by a bilinear interpolation. See (Putze &
Maas, 2008) for a detailed mathematical description. In the second case, the projective transformation model
can be solved directly by taking the four corner voxels of each depth layer i. All other image coordinates
can be obtained by inserting the corresponding set of parameters into Eq. 3.

A layer-wise reconstruction can be performed with orthographic projections, too. In comparison to the use of
a projective transformation, the determination of correspondences between voxels and pixels for each camera
view is even more straightforward. Again, only the corner voxels of a depth layer have to be transformed in a
discrete way by using the corresponding model equation 1. Due to the orthographic projection, all other image
coordinates can be obtained by a bilinear interpolation.
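As an illustration of the layer-wise rectification, the following sketch estimates the eight parameters of Eq. 3 from the four corner voxels of a depth layer (a standard DLT-style linear solution, assumed here as one possible realization) and then maps all remaining voxels of the layer into the image.

```python
import numpy as np

def solve_projective(XY, xy):
    """Estimate a0..c2 of Eq. 3 from four (or more) voxel/pixel correspondences.
    XY : (N, 2) layer coordinates (X_i, Y_i),  xy : (N, 2) image coordinates."""
    rows, rhs = [], []
    for (X, Y), (x, y) in zip(XY, xy):
        rows.append([1, X, Y, 0, 0, 0, -x * X, -x * Y]); rhs.append(x)
        rows.append([0, 0, 0, 1, X, Y, -y * X, -y * Y]); rhs.append(y)
    p, *_ = np.linalg.lstsq(np.array(rows, float), np.array(rhs, float), rcond=None)
    return p            # a0, a1, a2, b0, b1, b2, c1, c2

def apply_projective(p, XY):
    """Map all remaining voxels of the layer into the image (Eq. 3)."""
    a0, a1, a2, b0, b1, b2, c1, c2 = p
    X, Y = XY[:, 0], XY[:, 1]
    w = 1.0 + c1 * X + c2 * Y
    return np.stack([(a0 + a1 * X + a2 * Y) / w, (b0 + b1 * X + b2 * Y) / w], axis=1)
```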
Volumetric Reconstruction in Liquid

For a volumetric reconstruction in liquid, the layer-wise rectification approaches are not feasible anymore.
To avoid a throwback to a voxel-wise reconstruction from object to image space, the image rays originating in
each pixel of each camera are intersected with each layer of the voxel space, taking into account the fact
that each ray is refracted twice, at the air-glass and glass-water interfaces (Sec. 2). Though this solution
is performed pixel-wise, it is quite fast. A huge improvement of the computation time can be achieved by
thresholding the images before the voxel space transformation, performing the ray tracing only for pixels
above the threshold. Obviously, the use of MinART is not feasible here because not every pixel will be
projected into the voxel space. As an alternative, the MART can be used.

Further, a volume-wise transformation of each image content into the voxel space can be performed by
determining the parameters of a polynomial in the three variates X, Y and Z (Eq. 4). Consequently, the use of
the improved MinART is feasible again to map the image content into the actual depth layer.

x' = \sum_{\alpha, \beta, \gamma} a_{\alpha\beta\gamma} \, X^{\alpha} Y^{\beta} Z^{\gamma}      (4)

To solve the transformation model of Eq. 4, the image coordinates of a sufficient number of control points
are calculated directly using the collinearity equations 2 plus multimedia correction. The distribution of
those control points can be seen in analogy to the distribution of ground control points of a set of aerial
photographs for a block adjustment, namely a dense pattern along the edges of the voxel space and some single
points in the center of each depth layer. The corresponding image coordinates of all other voxels can then be
efficiently calculated by solving Eq. 4 using the set of parameters determined before.
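A minimal numpy sketch of the volume-wise polynomial mapping of Eq. 4 is given below; the image coordinates of the control points (collinearity equations plus multimedia correction) are assumed to be precomputed, and the second-order polynomial degree is an assumption chosen for illustration.

```python
import numpy as np
from itertools import product

def poly_terms(XYZ, degree=2):
    """Monomials X^a * Y^b * Z^c with a + b + c <= degree for each point."""
    X, Y, Z = XYZ[:, 0], XYZ[:, 1], XYZ[:, 2]
    cols = [X**a * Y**b * Z**c
            for a, b, c in product(range(degree + 1), repeat=3) if a + b + c <= degree]
    return np.stack(cols, axis=1)

def fit_poly_mapping(control_XYZ, control_xy, degree=2):
    """Least squares estimate of the coefficients of Eq. 4 (one set per image
    coordinate) from control points distributed in the voxel space."""
    A = poly_terms(control_XYZ, degree)
    coeff_x, *_ = np.linalg.lstsq(A, control_xy[:, 0], rcond=None)
    coeff_y, *_ = np.linalg.lstsq(A, control_xy[:, 1], rcond=None)
    return coeff_x, coeff_y

def apply_poly_mapping(coeffs, XYZ, degree=2):
    """Image coordinates of arbitrary voxels from the fitted polynomial."""
    A = poly_terms(XYZ, degree)
    return np.stack([A @ coeffs[0], A @ coeffs[1]], axis=1)
```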
3.3 Reconstruction Results
Figure 6 shows the reconstruction results at one epoch of a vortex ring in a water tank, illuminated by one
of the 10 thickened laser light sheets (Sec. 2). The volumetric representation is 278×1112×944 voxels. Each
voxel corresponds to (90 µm)³.
Figure 6. Voxel space of the volumetric reconstruction of a vortex ring at one epoch.

All voxels with gray values less than 15 were eliminated for a better visual representation. Further, the
right part of Fig. 6 is shown at a reduced resolution of 10:1. The section on the left is at full resolution
1:1.

The numeric results of the different geometric reconstruction methods do not differ, which means that the
accuracy of the image ray path reconstruction only depends on the accuracy of the calibration routine.
4. VOXEL SPACE TRACKING
4.1 3-D Least Squares Tracking
Eulerian 3-D velocity field information can be obtained by volume-based tracking techniques applied to
time-resolved voxel space representations. Here, 3-D least squares tracking (3-D LST) forms a rather
interesting alternative to conventional 3-D cross correlation. 3-D LST is a volumetric tracking technique
which is adaptive to cuboid deformation and rotation. It minimizes the sum of the squares of voxel value
differences by determining the coefficients of a 3-D affine transformation between cuboids at consecutive
time steps. In addition to the three displacement vector components, the 12 parameters of the 3-D affine
transformation in 3-D LST contain scale, rotation and shear information. This allows for a higher precision
and reliability in case of velocity gradients in the interrogation volume. Moreover, these parameters enable
the determination of a shear tensor for each interrogation cube. When applied to liquid flow data, an
incompressibility constraint is introduced to force the volume of a cuboid to remain constant during the
iterative transformation. The result of 3-D LST applied to sequences of tomographically reconstructed voxel
structures is a dense 3-D velocity vector field with additional shear tensor information. A more detailed
description of the functional model of the 3-D LST can be found in (Maas et al., 1994).
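The core of the 3-D LST functional model is the 12-parameter affine transformation between cuboids of consecutive epochs; the following sketch shows this transformation and the volume ratio used by the incompressibility constraint (the full least squares adjustment over the voxel gray values follows Maas et al. (1994) and is not reproduced here).

```python
import numpy as np

def affine_3d(XYZ, params):
    """Apply the 12-parameter 3-D affine transformation to cuboid coordinates.
    params = (a0, a1, a2, a3, b0, b1, b2, b3, c0, c1, c2, c3):
        x' = a0 + a1*x + a2*y + a3*z   (analogous for y' and z')"""
    a0, a1, a2, a3, b0, b1, b2, b3, c0, c1, c2, c3 = params
    A = np.array([[a1, a2, a3],
                  [b1, b2, b3],
                  [c1, c2, c3]])
    t = np.array([a0, b0, c0])
    return XYZ @ A.T + t

def volume_ratio(params):
    """Determinant of the linear part; the incompressibility constraint in
    3-D LST forces this ratio towards 1 for liquid flows."""
    a0, a1, a2, a3, b0, b1, b2, b3, c0, c1, c2, c3 = params
    return np.linalg.det(np.array([[a1, a2, a3], [b1, b2, b3], [c1, c2, c3]]))
```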
4.2 Tracking Results

A regular grid of 25³ voxel cuboids was defined in the volumetric reconstruction gained from Sec. 3 to apply
the 3-D LST. For each cuboid, the 12 parameters of the 3-D affine transformation were determined. Parameters
which turned out insignificant in the significance test were excluded from the transformation. A volume
constraint was applied to consider the incompressibility of the liquid. Outliers in the results were removed
in an outlier detection procedure based on the following criteria:

- Affine transformation parameter standard deviation: The results of cuboids with standard deviations
  exceeding a preset threshold were deleted.
- Convergence behavior: Cuboids with a diverging or oscillating solution were rejected.
- Vector length: Translation vectors exceeding a preset threshold were eliminated.
- Neighborhood correlation: The differences of the translation vector components between neighboring cuboids
  were analyzed. Vectors with deviations from their neighborhood exceeding a preset limit were eliminated.

The 3-D LST steering parameters were set on the basis of a-priori knowledge of the flow and empirically on
the basis of a series of program runs. The parameters controlling the outlier elimination process were set
automatically following 3-sigma rules. Optionally, gaps in the vector field can be closed by neighborhood
based interpolation.

Figure 8 shows a color-coded visualization of selected layers in the 3-D LST results. Figure 9 shows the
translation vector lengths of one half of the vortex ring in a frontal view. As one can see, some velocity
vectors in the center of the vortex were eliminated as potential outliers. This has to be attributed to the
finite cuboid size and the fact that the 3-D affine transformation parameters can only recover linear cuboid
deformations. The results might be improved by some parameter fine tuning or by a higher seeding density
allowing for smaller cuboids.

Figure 8. Cross sections of color-coded velocity in voxel space.

Figure 9. Color-coded vector lengths of one half of the vortex ring (frontal view, X = const. = 13.14 mm).
In the experiment described here, the standard deviation of unit weight produced by the least squares
adjustment process, averaged over all accepted cuboids, was 2.5 gray values. Tab. 1
shows the average standard deviations of the 12 affine transformation parameters. As one can see, the internal precision of
the cuboid translation parameters is in the order of 1/100 of a
voxel. However, one has to consider that these internal precision figures are only realistic if the assumed functional and
stochastic model is correct (3-D affine transformation and least
squares adjustment assuming Gaussian error distribution). Further verification tests have to be performed to get a better estimate of the real accuracy potential of the method.
Parameter    σi [vx]    Sig. [%]
a0           0.0132     100
b0           0.0105     100
c0           0.009      100
a1           2.4e-3     1.95
b2           1.8e-3     2.68
c3           1.6e-3     3.75
a2           2.4e-3     4.42
a3           2.3e-3     10.47
b1           1.7e-3     6.40
b3           1.8e-3     10.77
c1           1.6e-3     6.07
c2           1.6e-3     12.61

Table 1. Average standard deviations of transformation parameters and percentage of significant parameters
in accepted trajectories.
Furthermore, Tab. 1 gives an overview of the percentage of significant 3-D affine transformation parameters
over all accepted cuboids. As the cuboid translation parameters (a0, b0, c0) were not excluded as a rule in
the significance tests, they all show 100% here. The scale parameters (a1, b2, c3), constrained by the
incompressibility condition, were only significant in relatively few cuboids, while the rotation and shear
parameters (a2, a3, b1, b3, c1, c2) were significant especially in the center of the vortex (Fig. 10). In
total, about 20% of the cuboids showed at least one significant non-translation parameter, proving the
adequateness of the 3-D LST approach. Further, the gained 3-D LST non-translation parameters can be used to
estimate the 3-D deformation tensor as well as the 3-D rotational tensor directly (Kitzhofer et al., 2010).

Figure 10. Velocity vector display with vectors belonging to cuboids with at least one significant
non-translation 3-D affine transformation parameter coded in green.

5. SUMMARY AND OUTLOOK

The suggested approaches for volumetric reconstructions under different experimental setups and 3-D least
squares tracking turned out to be efficient and accurate volumetric PIV techniques.

The sequential projective transformation for volumetric reconstructions in gas, or in liquids if telecentric
lenses are used, has the advantage of being fast and friendly to graphics card implementation. For the more
common case of a tomographic voxel space reconstruction in liquid using images captured with central
perspective lenses, a fast pixel-wise technique and a polynomial approach were proposed. 3-D LST as a cuboid
tracking technique has the great advantage of inherently determining the 12 affine transformation parameters
of each cuboid. These 12 parameters allow an adaptation to linear deformations of cuboids, thus improving the
precision and reliability of the cuboid translation parameters. Moreover, they form a basis for the
determination of a shear tensor for each tracked cuboid. The fact that about 20% of the tracked cuboids in a
vortex ring experiment showed at least one significant non-translation parameter proves the relevance of
determining not only translation parameters in cuboid tracking.
Future work will concentrate on a reliable implementation of
the polynomial approach to reconstruct the whole voxel space
using one set of higher order polynomial parameters only.
Beyond that, the linear transformation model in 3-D LST can be
extended by introducing higher order polynomials. The
resolution of the velocity field may also be improved by
identifying individual particles in voxel space and tracking
those particles, using the results of the volume-based tracking as
a good approximation.
ACKNOWLEDGMENTS
The authors would like to thank their cooperation partners Prof.
Dr.-Ing. habil Christoph Brücker and Dipl.-Ing. Jens Kitzhofer
(Institute of Mechanics and Fluid Dynamics, TU Freiberg,
Germany) and Dr.-Ing. Oliver Pust and Thomas I. Nonn
(Dantec Dynamics A/S, Skovlunde, Denmark) for providing the
vortex ring data set as well as for helpful, supportive and
enjoyable suggestions and discussions.
REFERENCES
Elsinga, G. E., Scarano, F., Wieneke, B. & van Oudheusden, B.
W., 2005. Tomographic particle image velocimetry. In 6th
International Symposium on Particle Image Velocimetry
(PIV05).
Herman, G. T. & Lent, A., 1976. Iterative reconstruction
algorithms. Computers in Biology and Medicine, 6:273-294.
Kitzhofer, J., Brücker, Ch. & Pust, O., 2009. Tomo PTV using 3D Scanning Illumination and Telecentric
Imaging. Proceedings of the 8th International Symposium on Particle Image Velocimetry, 25-28 August,
Melbourne, Victoria, Australia.
Kitzhofer, J., Westfeld, P., Pust, O., Maas, H.-G. & Brücker, Ch., 2010. Direct Estimation of 3D Deformation
and Rotation Rate Tensor from Gray Value Weighted Voxel Space via Least Squares Matching. Accepted for
publication in the Proceedings of the 15th International Symposium on Applications of Laser Techniques to
Fluid Mechanics, 5-8 July 2010, Lisbon.
Maas, H.-G., Stefanidis, A. & Grün, A., 1994. From pixels to
voxels - tracking volume elements in sequences of 3-d digital
images. In International Archives of Photogrammetry and
Remote Sensing, Volume 30, 3/2.
Maas, H.-G., Westfeld, P., Putze, T., Bøtkjær, N., Kitzhofer, J.,
Brücker, C., 2009. Photogrammetric techniques in multi-camera
tomographic PIV. In: Soria, J., Atkinson, C. (eds.), Proceedings
of the 8th International Symposium on Particle Image
Velocimetry, 25-28 August, Melbourne, Victoria, Australia, pp.
599-602.
Mulsow, C., 2010. A Flexible Multi-media Bundle Approach.
Accepted for publication in the Proceedings of the Commission
V, WG V/1 Symposium 2010, 21 – 24 June 2010, Newcastle.
Putze, T. & Maas, H.-G, 2008. 3D determination of very dense
particle velocity fields by tomographic reconstruction from four
camera views and voxel space tracking. In: Jiang J. Chen J. and
H.-G.
Maas,
editors,
International
Archives
of
Photogrammetry, Remote Sensing and Spatial Information
Sciences, Volume XXXVII, Part B5, pages 33-38. ISPRS.