Study of the Accuracy of Different Stereoscopic Reconstruction Algorithms.

N. Pérenne, J.M. Foucaut and J. Savatier
Laboratoire de Mécanique de Lille, UMR CNRS 8107, Bv Paul Langevin, 59655
Villeneuve d'Ascq, France; Jean-marc.foucaut@ec-lille.fr.
Abstract
The stereoscopic technique applied to Particle Image Velocimetry (PIV) makes it
possible to get two-dimensional fields of three-component velocity vectors.
Several methods for implementing Stereoscopic Particle Image Velocimetry
(SPIV) can be found in the literature; the purpose of the present contribution is to
assess quantitatively the measurement accuracy of three such SPIV algorithms:
vector warping, image mapping and the Soloff method. The study, limited to the
measurement of uniform displacement fields, supports the fact that the precision
of all methods considered is better than 2% of the measured quantity (according to
various standard deviation criteria). Hence the computational cost of image
mapping is not justified; furthermore the Soloff method appears to be more
reliable than algorithms based on geometrical vector reconstruction.
1 Introduction
In recent years, the stereoscopic technique has been successfully applied to
Particle Image Velocimetry (PIV), making it possible to obtain two-dimensional (2D) fields of
three-component (3C) vectors using two cameras viewing the flow at different
angles; see for instance the review by Prasad (2000). Stereoscopic PIV, henceforth referred to as SPIV, can be implemented according to several rather different methods. The current trend favors the use of (i) cameras arranged in the
Scheimpflug angular displacement configuration and (ii) calibration-based reconstruction algorithms. The latter include “vector warping”, “image mapping” and
the Soloff method: each of these three well-known algorithms (described briefly in
Section 3) will be considered in this contribution.
The purpose of this work is to assess quantitatively the measurement accuracy
that can be reached with SPIV. It is thus directly comparable to such papers as
Lawson and Wu (1997), where the r.m.s. error is about 4% when the angle between the optical axes of the cameras is 90° (their Fig. 6). This work, however,
can be considered as an extension of the above-mentioned study since additional
reconstruction algorithms will be evaluated: indeed, the programs used here were
written in a way that allows one to switch easily from one reconstruction scheme to another.
This study is restricted to the measurement of uniform displacement fields (the
translation of either a ‘particle’ panel or a particle block, see Section 2). Gradients
in the displacement field (such as those produced by rotation of the particle block/panel)
may of course introduce additional errors during the 2D2C PIV processing, but also
at the reconstruction stage if the calibration is not perfect, as mentioned by Willert (1997).
This issue, however, is not addressed here; the reader can refer to Coudert and
Schon (2001) for a clear discussion of this problem, its diagnostic and its remedy
(which applies to all reconstruction algorithms in the same manner).
The last characteristic of the present set of tests is that the particle displacements
were imposed with very good accuracy (±1 µm). As will be seen, this allows
a better assessment of the accuracy of the SPIV algorithms.
The plan for the paper is to first describe the experimental set-up used for
studying the various SPIV algorithms; two configurations were actually used
(Section 2). The reconstruction algorithms under study are presented briefly in
Section 3, together with a few details about the software implementation. Section
4 presents the results obtained when measuring various uniform displacement
fields, and the corresponding error analysis. The conclusions regarding the study
are given in Section 5.
2 Experimental set-up
The stereoscopic configuration used here is an angular-displacement arrangement with
Scheimpflug conditions (this is usually more accurate than translation systems, as
recalled by Westerweel and Van Oord, 1999). Actually two angular-displacement
systems were set up: one with the light sheet between the cameras (case A, Fig.
1a), the other with both cameras on the same side of the object plane (case B, Fig.
1b). It is not expected that the accuracy of SPIV will depend on whether the object
plane lies between the cameras or not; the real difference between cases A and B
is the nature of the object being imaged: particle block in Case A, ‘particle’ panel
in case B. The object in case A is three-dimensional and consists of a 110x50x30
mm³ acrylic block; 10 µm melamine particles were mixed into the resin before
hardening. The particle block was then set in an octagonal glass container filled
with an index-matching oil (Eucalyptol): this made it possible (i) to minimize optical distortions (the calibration model is not supposed to account for any distortion) and,
more importantly, (ii) to ensure that the optical paths were approximately the same
during the calibration procedure (i.e., without the block) and during the particle
image acquisition (3 cm thick acrylic block). In case B the object is just a particle
pattern that was printed on a sheet of paper and carefully taped onto
the backside of the calibration plate. The light was provided in case A by a laser
sheet of ~1mm thickness (pulsed Nd:YAG source) and by continuous diffuse
white light illumination in case B.
The reference frame is shown in Fig. 1a and 1b: the x and z axes are
parallel and perpendicular to the object plane, respectively; the y axis (not shown)
is such that (x, y, z) is right-handed. In the images, y appears as upward, whereas x is leftward- or rightward-oriented depending on the camera considered.
Fig. 1. Schematic diagram of the experimental set-up: a) case A and b) case B.
The origin of the coordinate system lies in the object plane, its (xy) position
being arbitrarily chosen in the field of view common to both cameras. The approximate positions of the optical centers of the camera lenses are listed in Table 1;
the accuracy of these values cannot be considered as very high (say, ±0.5 cm).
They are required in the last step of the “vector warping” and “image mapping”
algorithms (see Section 3). One can check in Table 1 that the cameras were more
or less symmetrically positioned (relative to the field of view), the angle between
optical axes and object plane being close to 45°.
Table 1. Position of the camera lenses in cases A and B (values given as case A / case B).

            X (cm)       Z (cm)
camera #0   +22 / +33    -21 / +36
camera #1   +22 / -32    +20 / +36
The images were taken using f = 50 mm lenses operated at f# = 8 (case A) or f#
= 4 (case B). Typical 64x64 windows are shown in Fig. 2: one can see that the
objects produce small diffraction-limited images in case A (Fig. 2, left) and relatively large geometric images in case B (Fig. 2, right). Furthermore the image distribution is relatively sparse in case A and much denser in case B. The two image
data sets thus provide very different conditions for the conventional 2D2C part of
the SPIV analysis. Note that the typical magnification is also different in the two set-ups: on average 1 pixel corresponds to ∆xpix = 47 µm and ∆xpix = 78 µm in the x direction for cases A and B, respectively. The conclusions to be drawn from this
study therefore exhibit some degree of generality.
The various displacements were applied using a two-axis translation stage
with an accuracy of ±1 µm. The displacements considered
here have components in the X and Z directions only; they will be denoted as (∆X,
∆Z). In contrast, the measured displacement field has three components (u, v,
w), but v should of course be close to 0. From now on (unless otherwise specified),
the values of ∆X, ∆Z, u, v and w will be given after normalization by ∆xpix. The
range of displacements considered in this study covers most PIV applications:
∆X and ∆Z have been independently varied from 0 to more than 10 ‘pixel-equivalents’ in case A (with a step approximately equal to 2 units) while in case B
the range was wider: -10<∆X<10, -20<∆Z<20 (with a step of 5 units in x and 10
units in z).
Fig. 2. A typical 64x64 window from case A (left) and B (right).
A number of statistics, which will be used for the quantitative
inter-comparison of the various algorithms, are now defined. For any given (∆X, ∆Z), one gets a
nearly uniform 2D3C (u,v,w) field of typically 40x20 vectors (in this study), all of
which are different realizations of the same measurement, i.e. the measurement of
(∆X, ∆Z). One can then define (i) the field-average (U, V, W) of (u, v, w) for a
given (∆X, ∆Z), and (ii) the root-mean-square (r.m.s.) values (u’, v’, w’) of (u-U,
v-V, w-W), which will be called afterwards the field-r.m.s. Field-average and
field-r.m.s. can be plotted as functions of ∆X and ∆Z, which will be done in Section 4. In order to give a more compact presentation of the results, however, other
statistics have been defined: they are intended to provide a measure of the typical accuracy over the full (∆X, ∆Z) range. Firstly, one can define a “typical bias” E1 which
is the average, over the (∆X, ∆Z) range, of the modulus of the difference between
field-average of the measurements and theoretical value of the displacement:
$$E_1 = \frac{1}{N}\sum_{n=1}^{N}\sqrt{(U_n - \Delta X_n)^2 + V_n^2 + (W_n - \Delta Z_n)^2} \qquad (1)$$
where N is the number of (∆Xn, ∆Zn) combinations considered. Note that this
measure is non-dimensional if the summation is performed on quantities normalized by ∆xpix, which is the case here; thus E1 is actually given in pixel-equivalents.
Now one can also define:

$$E_2 = \frac{1}{N}\sum_{n=1}^{N}\frac{\sqrt{(U_n - \Delta X_n)^2 + V_n^2 + (W_n - \Delta Z_n)^2}}{\sqrt{\Delta X_n^2 + \Delta Z_n^2}} \qquad (2)$$
which is another overall measure of the bias, but this time relative to the amplitude of each displacement; E2 will be given in % and its value is of course the
same if the summation is performed on normalized (pixel-equivalents) or dimensional (microns) quantities. Finally, the formulas used for E1 and E2 can be applied
to the field-r.m.s. values (instead of the components of the bias), which provides
other measures of the measurement error: E3, given in pixel-equivalents and E4,
given in % of the displacement (Eq. 3 and 4).
$$E_3 = \frac{1}{N}\sum_{n=1}^{N}\sqrt{(u'_n)^2 + (v'_n)^2 + (w'_n)^2} \qquad (3)$$

$$E_4 = \frac{1}{N}\sum_{n=1}^{N}\frac{\sqrt{(u'_n)^2 + (v'_n)^2 + (w'_n)^2}}{\sqrt{\Delta X_n^2 + \Delta Z_n^2}} \qquad (4)$$
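To make these definitions concrete, the short sketch below computes E1 to E4 from a set of measured fields. It is only an illustration of Eqs. (1)-(4): the function name, argument layout and units convention are assumptions made for this example, not part of the original processing chain, and the (0, 0) displacement is assumed to be excluded so that the relative errors are defined.

import numpy as np

def error_statistics(fields, imposed, dx_pix):
    # fields  : list of arrays of shape (ny, nx, 3) holding the measured (u, v, w),
    #           in the same physical units as 'imposed' (e.g. microns)
    # imposed : list of (dX, dZ) tuples, the imposed displacements (dY is zero)
    # dx_pix  : "object size" of one pixel, used for the pixel-equivalent normalization
    e1 = e2 = e3 = e4 = 0.0
    for field, (dX, dZ) in zip(fields, imposed):
        U, V, W = field[..., 0].mean(), field[..., 1].mean(), field[..., 2].mean()
        up, vp, wp = field[..., 0].std(), field[..., 1].std(), field[..., 2].std()
        bias = np.sqrt((U - dX)**2 + V**2 + (W - dZ)**2)   # modulus of the bias, Eq. (1)
        rms = np.sqrt(up**2 + vp**2 + wp**2)               # field-r.m.s. modulus, Eq. (3)
        amp = np.sqrt(dX**2 + dZ**2)                       # amplitude of the imposed displacement (assumed non-zero)
        e1 += bias / dx_pix          # pixel-equivalents
        e2 += 100.0 * bias / amp     # percent of the displacement, Eq. (2)
        e3 += rms / dx_pix
        e4 += 100.0 * rms / amp      # Eq. (4)
    n = len(fields)
    return e1 / n, e2 / n, e3 / n, e4 / n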
3 Stereo Algorithms
This section provides a short summary of the SPIV algorithms considered here.
Further details and references can be found in the review by Prasad (2000).
3.1 Empirical back-projection
The first thing one needs when processing SPIV measurements is an accurate
evaluation of the functional relationship between (x,y), the object-plane coordinates in real world units, and (i,j), the column and row indexes in the images obtained from either camera #0 or #1. Actually the functions x(i,j) and y(i,j) could be
specified geometrically but this procedure is not reliable (Westerweel and Van
Oord, 1999). Hence these functions are usually determined empirically using images of a grid consisting of equally spaced marks. A least-squares fit can then be
applied to the coefficients of an analytical formula that is supposed to describe the
relationship between the known object-plane coordinates (x,y) of the crosses and
the corresponding measured image-plane coordinates (i,j). This procedure is now
standard (e.g. Prasad, 2000) and the only variations regard the analytical form of
x(i,j) and y(i,j); here two types of formula will be fitted: second-order polynomial
and ratio of second-order polynomials (the latter being inspired by the form of the
geometrical formulas).
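As an illustration of this fitting step, the following sketch determines a second-order polynomial back-projection by least squares from the calibration marks; the function names and the explicit choice of six monomials are assumptions made for the example only.

import numpy as np

def second_order_basis(i, j):
    # the six monomials of a second-order polynomial in the image coordinates (i, j)
    return np.column_stack([np.ones_like(i), i, j, i * i, i * j, j * j])

def fit_back_projection(i, j, x, y):
    # i, j : image-plane coordinates of the calibration marks (1D arrays)
    # x, y : corresponding object-plane coordinates in physical units (1D arrays)
    A = second_order_basis(i, j)
    cx, *_ = np.linalg.lstsq(A, x, rcond=None)   # coefficients of x(i, j)
    cy, *_ = np.linalg.lstsq(A, y, rcond=None)   # coefficients of y(i, j)
    return cx, cy

def back_project(cx, cy, i, j):
    # evaluate the fitted mapping x(i, j), y(i, j) at arbitrary image positions
    A = second_order_basis(i, j)
    return A @ cx, A @ cy

A ratio of second-order polynomials would be fitted along the same lines, except that the problem is no longer linear in the coefficients (or must be linearized, e.g. by multiplying through by the denominator).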
Thanks to x(i,j) and y(i,j), one is able to convert the (∆i,∆j) field of pixel displacements, provided by standard (2D2C) PIV in the image plane of each camera,
to a (Uk,Vk) field of displacements in real world units, where the subscript k refers
to the camera considered (k = 0 or 1). The next step will be to combine these two
fields into one 2D3C field (U,V,W), as discussed in Section 3.2. However this
procedure, called here “vector warping”, is not the only way to compute the two
(Uk,Vk) fields: one may also use the functions x(i,j) and y(i,j) to map the particle
images onto physical space and then apply the 2D2C PIV processing to such
back-projected particle images. Processing back-projected particle images with
2D2C PIV provides a displacement field which is directly expressed in physical
units. This technique, called here “image mapping”, introduces an image interpolation step because the (x(i,j), y(i,j)) transformation distorts the pixels of the original image: the back-projected pixel data then need to be interpolated onto a regular array before the 2D2C processing. Three image interpolators will be
considered in this report: a simple bilinear scheme, the Whittaker formula with a
7x7 kernel (e.g. Scarano and Riethmuller, 2000) and the surface interpolator described in Ursenbacher (2000).
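For reference, a minimal sketch of the simple bilinear scheme mentioned above is given here; it only shows how a back-projected image would be resampled at non-integer pixel positions, assuming the inverse mapping i(x,y), j(x,y) onto the original image has already been fitted (all names are hypothetical).

import numpy as np

def bilinear_sample(img, i, j):
    # img  : original grey-level image, indexed as img[row, column] = img[j, i]
    # i, j : non-integer column and row positions where the image is resampled
    i0 = np.clip(np.floor(i).astype(int), 0, img.shape[1] - 2)
    j0 = np.clip(np.floor(j).astype(int), 0, img.shape[0] - 2)
    di, dj = i - i0, j - j0   # fractional parts (valid for in-range positions)
    return ((1 - di) * (1 - dj) * img[j0, i0]
            + di * (1 - dj) * img[j0, i0 + 1]
            + (1 - di) * dj * img[j0 + 1, i0]
            + di * dj * img[j0 + 1, i0 + 1])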
3.2 Geometrical vector reconstruction
Whether it is applied to pixel displacements (as in vector warping) or particle images (as in image mapping) the back-projection generates two 2D2C fields:
(Uk,Vk) where k = 0 or 1 is the camera number. Uk and Vk are object-plane projections of the real 3C displacement vector (U,V,W), as shown schematically by Fig.
3. Given the position of the optical centers of the camera lenses, geometrical formulas can then be used to retrieve the 3C vector (see Willert, 1997):
$$U = \frac{U_0\tan\alpha_1 - U_1\tan\alpha_0}{\tan\alpha_1 - \tan\alpha_0}, \qquad
V = \frac{1}{2}\left[V_0 + V_1 + (U_1 - U_0)\,\frac{\tan\beta_0 + \tan\beta_1}{\tan\alpha_0 - \tan\alpha_1}\right], \qquad
W = \frac{U_1 - U_0}{\tan\alpha_0 - \tan\alpha_1} \qquad (5a)$$

with

$$\tan\alpha_0 = \frac{X_0 - x}{Z_0}, \quad \tan\alpha_1 = \frac{X_1 - x}{Z_1}, \quad
\tan\beta_0 = \frac{Y_0 - y}{Z_0}, \quad \tan\beta_1 = \frac{Y_1 - y}{Z_1} \qquad (5b)$$
where (Xk,Yk,Zk) are the coordinates of the optical center of camera #k and
(x,y,z) are the coordinates of the point where the velocity is being reconstructed.
Note that (Xk,Yk,Zk) do have a sign, so that the reconstruction formula actually applies to all geometrical configurations (cameras on the same side of the object
plane or not). Strictly speaking, Eq. (5) is obtained after approximating a ~ b in
Fig.3, which is not a problem since (X0+x) >> U0.
Fig. 3. Schematic diagram for the geometrical reconstruction.
3.3 Soloff’s method
Soloff et al. (1997) introduced a mathematical formalism that allows back-projection
and reconstruction to be achieved in a single step; furthermore this operation
does not involve any geometrical formula such as Eq. 5, hence it is not necessary to
measure (more or less accurately) the position of the optical centers. The drawback is that more calibration images are needed: after taking images of the calibration grid when it is positioned within the (xy) measurement plane, as in Section
3.1, one must also take images of the calibration plate after it has been moved by a
known ∆z. Additional z positions of the (xy) calibration grid may also be provided
with the intent of getting more accurate functions i(x,y,z) and j(x,y,z); note that the
projection functions now connect coordinates in the object volume (x,y,z) to coordinates in the image plane (i,j), which is actually the kind of projection implemented by an optical system with a finite depth of field. Let us write
$$i_0 = f_0(x, y, z), \qquad i_1 = f_1(x, y, z)$$
$$j_0 = g_0(x, y, z), \qquad j_1 = g_1(x, y, z) \qquad (6a)$$
where (i0,j0) (resp. (i1,j1)) are pixel coordinates in the image plane of camera #0
(resp. camera #1). Then to first order:
$$\begin{pmatrix} \Delta i_0 \\ \Delta j_0 \\ \Delta i_1 \\ \Delta j_1 \end{pmatrix} =
\begin{pmatrix}
\partial f_0/\partial x & \partial f_0/\partial y & \partial f_0/\partial z \\
\partial g_0/\partial x & \partial g_0/\partial y & \partial g_0/\partial z \\
\partial f_1/\partial x & \partial f_1/\partial y & \partial f_1/\partial z \\
\partial g_1/\partial x & \partial g_1/\partial y & \partial g_1/\partial z
\end{pmatrix}
\begin{pmatrix} u \\ v \\ w \end{pmatrix} \qquad (6b)$$
where (∆i0, ∆j0) (resp. (∆i1, ∆j1)) are pixel displacements measured in the image
plane of camera #0 (resp. camera #1). Eq. (6b) is overdetermined and provides a
robust estimate of (U,V,W) through a least-squares procedure.
The user is free to choose the shape of the functions used in Eq. (6a); here a
polynomial with cubic dependence in x and y, but quadratic dependence in z, has
been chosen, in accordance with Soloff et al. (1997):
$$t(x, y, z) = a_0 + a_1 x + a_3 y + a_4 z + a_5 x^2 + a_6 xy + a_7 y^2 + a_8 xz + a_9 yz
+ a_{10} x^3 + a_{11} x^2 y + a_{12} xy^2 + a_{13} y^3 + a_{14} x^2 z + a_{15} xyz + a_{16} y^2 z
+ a_{17} z^2 + a_{18} xz^2 + a_{19} yz^2 \qquad (7)$$
where t is either i0, i1, j0 or j1. Once the 20 coefficients of the 4 polynomials have
been least-squares fitted to the (hopefully large) set of (x,y,z,i,j) data points provided by the calibration images, the polynomials can be differentiated analytically and
the matrix in Eq. (6b) easily completed for each point where (U,V,W) should be
computed. Actually a reduced version of Eq. 7 will also be considered, using a
polynomial of degree 1 in z (i.e. without the last 3 terms in Eq. 7); the number of
calibration planes has also been varied: results will be presented with calibrations
in 3 or 5 planes.
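As an illustration of the reconstruction step, the sketch below solves the overdetermined system (6b) by least squares; for simplicity the derivatives of the four fitted projection functions are estimated here by central differences rather than written out from Eq. (7), and all names are hypothetical.

import numpy as np

def soloff_reconstruct(funcs, point, pixel_shifts, eps=1e-3):
    # funcs        : the four fitted projection functions (f0, g0, f1, g1),
    #                each mapping (x, y, z) to one pixel coordinate
    # point        : (x, y, z) object coordinates of the reconstruction point
    # pixel_shifts : measured (di0, dj0, di1, dj1)
    # eps          : step used for the central-difference derivatives
    x, y, z = point
    jac = np.empty((4, 3))
    for row, f in enumerate(funcs):
        jac[row] = [(f(x + eps, y, z) - f(x - eps, y, z)) / (2 * eps),
                    (f(x, y + eps, z) - f(x, y - eps, z)) / (2 * eps),
                    (f(x, y, z + eps) - f(x, y, z - eps)) / (2 * eps)]
    uvw, *_ = np.linalg.lstsq(jac, np.asarray(pixel_shifts, dtype=float), rcond=None)
    return uvw   # least-squares estimate of (u, v, w) in object-space units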
4 Results
The SPIV system was run using several configurations which are listed in Table 2
(but of course the very same set of images was used in all cases). Configurations
#1 to #6 used the Soloff method but the z order of the calibration polynomials and
the number of calibration planes were varied. Runs #7 and #8 relied on vector
warping with a ratio of polynomials for #7 and a simple polynomial for #8 (see
Section 3.1). Finally, configurations #9 to #11 were all based on image mapping
but used different image interpolation schemes. Let us mention here that all 2D2C
PIV analyses were performed using 2 passes with 64x64 windows and integer
shift, the typical overlap being 50%. Smaller interrogation windows certainly increase the number of independent vectors used to compute the statistics, but this does not change the field-averages (whereas the field-r.m.s. values are nearly doubled due to increased 2D2C noise). Continuous-shift iterations were considered in
case A (which exhibits a relatively strong peak-locking) but they did not change
significantly the statistics relevant to this paper (the particle images in case A were
most probably too small to allow for peak-locking removal).
Fig. 4 provides a first view on the experimental results; one can see that (i) the
field-averages are close to the expected values and (ii) the difference between
field-averages and imposed displacements does not seem to follow a systematic
trend, as would be the case if the calibration were biased. The second point is further demonstrated by Fig. 5, where the same data are plotted in a different way,
together with the field-r.m.s. values. The first column of plots shows that the field-averages are indeed distributed around the expected values and that the discrepancies do not exhibit a tendency to increase when, say, ∆Z increases. The behavior
of the field-r.m.s. is different (second column of plots in Fig. 5); there is a consistent variation with regard to ∆X and ∆Z: while the field-r.m.s. of the U and W
components increases with ∆Z, the field-r.m.s. of V happens to be rather dependent on ∆X. In any case, one must remember that all forms of error have a typical
magnitude of 0.1 pixel, which is also the kind of accuracy that one can expect
from standard 2D2C PIV applied to real images. Thus it seems that the accuracy
in configuration #1 is limited by the PIV itself and not by the stereoscopic reconstruction.
Table 2. The various stereo reconstruction methods considered in this study.

Configuration #   Stereo algorithm   Parameters
1                 Soloff             5 planes separated by 0.5 mm, polynomial of order 2 in z
2                 Soloff             5 planes separated by 0.5 mm, polynomial of order 1 in z
3                 Soloff             3 planes separated by 0.5 mm, polynomial of order 1 in z
4                 Soloff             5 planes separated by 0.25 mm, polynomial of order 2 in z
5                 Soloff             5 planes separated by 0.25 mm, polynomial of order 1 in z
6                 Soloff             3 planes separated by 0.25 mm, polynomial of order 1 in z
7                 warping            ratio of second-order polynomials
8                 warping            second-order polynomial
9                 mapping            ratio of second-order polynomials, bilinear image interpolator
10                mapping            ratio of second-order polynomials, Whittaker image interpolator
11                mapping            ratio of second-order polynomials, surface image interpolator
The statistical values E1, E2, E3 and E4 obtained with configuration #1 can be
found in the first line of Table 3 for cases A and B. Note that E1 is the same in
both cases but E2 is smaller in case B due to the larger range of displacements. E3
is also smaller in case B than in case A, most probably due to larger 2D2C (peak-locked) errors in case A (and the same, together with the larger displacement
range, applies to E4).
Overall, cases A and B exhibit typical errors amounting to 1-2% of the measured quantity, even when using other error definitions such as the one used by
Lawson and Wu (1997); these authors actually find slightly larger values, but in our opinion this is mainly due to the lower resolution of their translation stage (10 µm).
Fig. 4. Field-averaged measurements in case A using configuration #1. Imposed displacements (∆X, ∆Z) are indicated by circles and measured values (U, W) by asterisks. All values are non-dimensionalized by the typical “object size” of a pixel (see text).
Table 3. Overall performance of the various stereo configurations (values given as case A / case B). Definitions of E1, E2, E3 and E4: see text. Some configurations were not studied in case B (-).

Config.   Typical bias E1 (pix-equiv.)   Typical bias E2 (%)   Typical r.m.s. E3 (pix-equiv.)   Typical r.m.s. E4 (%)
#1        0.10 / 0.09                    1.6 / 0.7             0.11 / 0.06                      1.5 / 0.5
#2        0.10 / 0.09                    1.6 / 0.7             0.11 / 0.06                      1.5 / 0.6
#3        0.10 / 0.13                    1.6 / 0.9             0.11 / 0.08                      1.5 / 0.6
#4        0.11 / -                       1.6 / -               0.11 / -                         1.5 / -
#5        0.11 / -                       1.6 / -               0.11 / -                         1.5 / -
#6        0.32 / -                       4.0 / -               0.11 / -                         1.5 / -
#7        0.11 / 0.21                    1.7 / 1.3             0.19 / 0.26                      2.4 / 1.7
#8        0.11 / 0.20                    1.7 / 1.3             0.19 / 0.26                      2.4 / 1.7
#9        0.10 / 0.21                    1.4 / 1.4             0.14 / 0.12                      1.8 / 0.8
#10       0.10 / 0.21                    1.4 / 1.4             0.14 / 0.11                      1.8 / 0.7
#11       0.10 / 0.21                    1.4 / 1.4             0.14 / 0.12                      1.8 / 0.8
As far as case A is concerned, nothing much happens from configuration #1 to
configuration #5; configuration #6 however, induces a large increase of bias errors
(E1, E2) while the r.m.s. errors (E3, E4) remain unchanged (Table 3). Plotting the
(∆X, ∆Z) dependency of the field-average and field-r.m.s., it appears that all displacement components are underestimated. This underestimation increases as ∆Z gets
larger.
Note that the calibration planes are always symmetrically distributed around z =
0 and that 0.25mm/∆xpix is close to 5 in case A. Thus in configuration #6 (3 planes
separated by 0.25 mm) the highest calibration plane is only halfway to the largest
∆Z measured. Run #6 is the only instance where this happens: in configurations #1
to #5 there is always a calibration plane lying beyond the largest ∆Z to be measured; in that case the results do not depend on whether the polynomial is first or second order in z, or whether
there are 3 or 5 calibration planes. Although not all configurations were
studied in case B, the same phenomenon begins to occur in configuration #3 because 0.5 mm/∆xpix is less than 7 in case B.
The two vector warping procedures (#7 and #8) do not differ significantly (Table 3); thus it appears that a second-order polynomial is accurate enough for reconstruction purposes with real images, even if the projection from object to image plane is better described by a ratio of polynomials. In case B the vector
warping is characterized by a much larger typical bias (E1,E2) compared to the
Soloff algorithm. This is not observed in case A. This might be due to a less accurate geometrical positioning of the optical centers in case B (although the same
care was taken); this hypothesis is supported by the fact that the image mapping
methods (#9 to #11) exhibit the same behavior, as far as E1 and E2 are concerned.
This issue is investigated in Table 4, where the X0 and X1 geometrical parameters
are varied around the values of Table 1, which were measured (more or less accurately) on the optical set-up; note that the optical center of a commercial photo
lens consisting of several optical groups is not always well known. Actually the
X0 and X1 parameters have not been varied independently: the sensitivity study
was arbitrarily restricted to |X0| + |X1| = 65 cm. One can see in Table 4 that there
is a significant variability in the typical bias (whether one considers its E1 or E2
measures) while the typical r.m.s. remains unaffected (not surprisingly) at least
when keeping 2 significant digits. The issue was not pursued further because
strictly speaking, one would have to change independently X0, X1, Z0, and Z1 in order to find the best geometrical parameters, and such a procedure is obviously not
possible in the course of a real experiment.
Table 4. Sensitivity of the statistical errors to the geometrical parameters in configuration #7.

X0 (cm)   X1 (cm)   E1 (pix-equiv.)   E2 (%)   E3 (pix-equiv.)   E4 (%)
+34       -31       0.44              2.8      0.26              1.7
+33.5     -31.5     0.30              1.9      0.26              1.7
+33       -32       0.21              1.3      0.26              1.7
+32.75    -32.25    0.20              1.3      0.26              1.7
+32.5     -32.5     0.23              1.5      0.26              1.7
+32       -33       0.35              2.3      0.26              1.7
Returning to Table 3, one can notice that in both cases A and B the vector
warping technique has larger measures of typical r.m.s. (E3,E4) than both the
Soloff and image mapping methods: this is not due to geometrical input inaccuracies, as demonstrated by Table 4; furthermore image mapping also necessitates
geometrical input and does not exhibit as strong an increase in typical r.m.s. as
vector warping. The (∆X,∆Z) dependency of the field-average and -r.m.s. given by
configuration #7 is plotted in Fig. 6; the spatial evolutions are the same as in Fig. 5
but the V and W field-r.m.s. are noticeably stronger. This fact, confirmed by the
measurements of case B, is difficult to interpret; however let us recall again that
all typical errors remain relatively small in terms of pixel-equivalents.
Finally, the last 3 lines of Table 3 list the error statistics for image mapping
computations (SPIV configurations #9 to #11): it appears that all image interpolators have the same performance, except perhaps for the Whittaker formula (configuration #10) which is marginally better (but it is also about 3 times more time-consuming than the other two interpolators). This observation is surprising but in
our opinion it might be partly due to the nature of the images that were generated
during this study: in case A the images would be too small (no interpolator can
generate the information that was lost by under-sampling) and in case B they
would be too large (any interpolator can do a good job when the signal is nicely
over-sampled). On the other hand, Table 3 also indicates that image mapping does
not provide significant improvements compared to the other SPIV methods, at
least when particle image diameter is not optimal. Hence as far as back-projection
is concerned, the quest for the best image interpolator seems to be superfluous.
5 Conclusions
This contribution reports an inter-comparison of the measurement accuracies
achieved by applying several SPIV algorithms to a set of real images. The latter
were acquired from an optical set-up based on the angular-displacement arrangement with Scheimpflug mounts. Only uniform displacement fields are considered
here. The principal findings of the present study are as follows:
(a) All methods (vector warping, image mapping, Soloff) perform basically
well since the error is smaller than 2% of the measured quantity; the accuracy thus
appears to be limited by the precision of the standard (2D2C) PIV analysis.
(b) However the vector warping and image mapping methods can easily be biased (by a few percent) if the geometrical input (positions of the optical centers
of the cameras) is not carefully measured for the experimental set-up. The Soloff
method appears to be better on this account since it does not necessitate such
geometrical input; one just has to ensure that the z range covered by the various
calibration grids (z being the direction perpendicular to the light sheet) is larger
than the largest W component measured (where W is the particle displacement
along z): i.e. the z range of calibration grids should be of the order of the light
sheet thickness.
(c) Vector warping generates slightly more noise in the final 2D3C fields (unexplained).
(d) The accuracy improvement (if any) brought by image mapping is certainly
far from being proportional to the computational load it induces. However, as
documented by Coudert and Schon (2001), one must recall that the mapping of at
least a few images is necessary to check and correct for a possible tilt and offset
between calibration and measurement planes.
Acknowledgements
This work has been performed under the EUROPIV2 project. EUROPIV2 (A
joint program to improve PIV performance for industry and research) is a collaboration between LML UMR CNRS 8107, DASSAULT AVIATION, DASA, ITAP,
CIRA, DLR, ISL, NLR, ONERA and the universities of Delft, Madrid, Oldenburg, Rome, Rouen (CORIA URA CNRS 230), St Etienne (TSI URA CNRS 842)
and Zaragoza.
The project is managed by LML UMR CNRS 8107 and is funded by the European Union within the 5th Framework Programme (Contract No. G4RD-CT-2000-00190).
References
Coudert, S. and Schon, J.P. (2001) Back-projection algorithm with misalignment corrections for 2D3C stereoscopic PIV. Meas. Sci. Technol., 12, pp. 1371-1381.
Lawson, N.J. and Wu, J. (1997) Three-dimensional particle image velocimetry: experimental error analysis of a digital angular stereoscopic system. Meas. Sci. Technol., 8,
pp. 1455-1464.
Prasad, A.K. (2000) Stereoscopic particle image velocimetry. Exp. in Fluids, 29, pp. 103-116.
Scarano, F. and Riethmuller, M.L. (2000) Advances in iterative multi-grid PIV image processing. Exp. in Fluids, [Suppl.], pp. S51-S60.
Soloff, S.M., Adrian, R.J. and Liu, Z.C. (1997) Distortion compensation for generalized
stereoscopic particle image velocimetry. Meas. Sci. Technol., 8, pp. 1441-1454.
Ursenbacher, T. (2000) Traitement de vélocimétrie par images digitales de particules par
une technique robuste de distortion d’images. Ph.D. Thesis, Ecole Polytechnique de
Lausanne, Switzerland.
Westerweel, J. and Van Oord, J. (1999) Stereoscopic PIV measurements in a turbulent boundary layer. In: Particle Image Velocimetry: Progress Towards Industrial Application, eds M. Stanislas, J. Kompenhans and J. Westerweel, pp. 459-478. Kluwer, Dordrecht.
Willert, C. (1997) Stereoscopic digital particle image velocimetry for applications in wind
tunnel flows. Meas. Sci. Technol., 8, pp. 1465-1479.
Fig. 5. Evolution of field-averaged and field-r.m.s. values of the measured displacement for configuration #1 in case A. In the left-hand column, the difference between (i) the field average of the measurements and (ii) the imposed value of the displacement is plotted as a function of the imposed displacement (∆X, ∆Z). In the right-hand column the field-r.m.s. value of the measurements is plotted as a function of ∆X and ∆Z. First, second and third rows correspond to the u, v and w components, respectively. All quantities are divided by the “object size” of a pixel (see text). The positive (negative) contours are indicated by solid (dashed) lines. Contour step is 0.05 for all the plots on the left-hand side and 0.01 for the others.
Fig. 6. Evolution of field-averaged and field-r.m.s. values of the measured displacement for configuration #7 in case A. Same caption as Fig. 5.