Digital Speckle Pattern Interferometry (DSPI) as a holographic velocimetry technique.

J. Lobera, N. Andrés and M. P. Arroyo
Dpto. Física Aplicada. Facultad de Ciencias. Universidad de Zaragoza
C/ Pedro Cerbuna, 12, 50009 – Zaragoza. SPAIN
e-mail: jlobera@unizar.es
Tel. +34 976 762441 Fax. +34 976 761233
Abstract
Digital speckle pattern interferometry (DSPI) can be studied as digital image plane holography (DIPH), but an off-axis setup has to be used in order to reconstruct the object phase and intensity. The three velocity components in a fluid plane could thus be measured, since the phase maps detect the out-of-plane velocity component while the intensity data can be analysed with standard PIV methods to detect the two in-plane components. Several fluid planes can be simultaneously recorded but independently reconstructed using an angular multiplexing setup. Some preliminary results from a convective flow with a He-Ne laser illustrate these features.
Introduction
Digital speckle pattern interferometry (DSPI) is a technique well known in solid mechanics, but its use as a velocimetry technique was first reported quite recently [1]. DSPI shares with digital PIV the optical setup for illuminating a fluid plane and the video recording with CCD cameras. However, in DSPI the light scattered by the fluid plane is recorded simultaneously with a reference beam, and the stored image is known as a specklegram. When the reference beam is smooth, the specklegram can be viewed as a hologram of a diffuse object, the illuminated fluid plane being the diffuse object and the CCD sensor the hologram plane. Since in DSPI the image of the fluid plane is formed on the CCD sensor, the digital specklegrams are in fact digital image plane holograms.
DSPI, in its basic setup, is similar to in-line digital image plane holography (DIPH). The introduction of spatial phase shifting (SPS) techniques [2] in the DSPI setup allows both the phase and the intensity of the recorded object wave to be determined from each SPS-specklegram. The object wave intensity is a particle image field from which in-plane velocity fields can be obtained with a standard PIV analysis. The object wave phase gives the out-of-plane velocity. In both cases, the velocity is obtained by comparing two object waves recorded at different instants.
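As an illustration, once two complex object waves have been reconstructed from specklegrams recorded at two instants (for example as numpy arrays), both quantities follow directly from them; the function below is a minimal sketch under that assumption, not the processing code used in this work.

    import numpy as np

    def velocity_inputs(o1, o2):
        """o1, o2: complex object waves reconstructed from two SPS-specklegrams."""
        dphi = np.angle(o2 * np.conj(o1))      # wrapped phase difference -> out-of-plane component
        I1, I2 = np.abs(o1)**2, np.abs(o2)**2  # particle image fields -> standard PIV correlation
        return dphi, I1, I2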
The SPS-DSPI setup requires an appropriate angle to be introduced between the reference and the object beams and can thus be viewed as an off-axis DIPH setup. When a divergent reference beam originating from the same plane as the lens aperture is used, the SPS-specklegrams behave as lensless Fourier transform holograms of the aperture [3]. This is the SPS-DSPI setup we will analyse in this paper. After describing the SPS-DSPI recording setup, we will present a theoretical study of SPS-DSPI in which the analysis of the recorded images from the two points of view (as SPS-specklegrams or as digital holograms) will be compared. It must be emphasized that the output of this analysis is the amplitude and the phase of the object. We will then show how DIPH can be used to simultaneously record but independently reconstruct several fluid planes. This will demonstrate the full potential of DIPH as a holographic velocimetry technique.
2. SPS-DSPI recording setup
The SPS-DSPI setup used in this work is shown in figure 1. The object wave is obtained by illuminating the fluid with a sheet-like beam and focusing the light scattered by the small particles inside the fluid onto a CCD detector with a convergent lens. The reference beam is obtained by diverting a small amount of the main laser beam and guiding it through an optical fibre, whose end is at the same distance from the CCD sensor as the lens aperture. The object and reference beams are brought together by means of a non-polarizing cube beam splitter.
Figure 2 shows a typical SPS-specklegram where the spatial modulation is apparent. It
also shows the random changes in intensity due to the random position and brightness
of the particle images.
Fig. 1. SPS-DSPI recording setup: a) optical layout; b) detail of the reference and object beam arrangement.
Fig. 2. 128x128 pixel region of a typical specklegram with SPS modulation.
3. Theoretical analysis
3.1 SPS-specklegram intensity
The CCD sensor records the interference between the object wave o(x,y) and the
reference wave r(x,y) at the sensor plane (x,y), which is located at z = 0. These waves can be written as complex functions such that

o(x,y) = A_o(x,y) exp[iφ_o(x,y)]
r(x,y) = A_r(x,y) exp[iφ_r(x,y)]    (1)

where A is the amplitude and φ the phase. The specklegram intensity can be written in the usual way as

I(x,y) = A_o² + A_r² + 2 A_o A_r cos(φ_o − φ_r)    (2)
For a smooth reference beam A_r is approximately constant while φ_r changes continuously over the sensor plane. However, A_o and φ_o are random because they depend on the particle positions and sizes, which are random parameters. For a divergent reference beam with its focus at (x_r, y_r, z_r), φ_r = k √[(x − x_r)² + (y − y_r)² + z_r²], with k = 2π/λ. In the Fresnel region (x_r, y_r << z_r), φ_r can be written as

φ_r(x,y) = k [ z_r + (x_r² + y_r²)/(2z_r) ] − k (x_r x + y_r y)/z_r + k (x² + y²)/(2z_r)    (3)
The second term in φ_r is the one responsible for the SPS modulation. For this modulation to be seen, φ_o has to be constant over at least three pixels. This means that the lens aperture has to be small enough for the particle images to be at least 3 pixels in diameter. For the sensor to resolve the modulation frequency, the phase change must be at most 2π over 3 pixels. For a pixel pitch of 6.7 µm and a wavelength of 633 nm, this gives x_r = 3.1 mm when z_r = 100 mm. For larger x_r/z_r the modulation will be undersampled by the sensor and may not be seen. For smaller x_r/z_r the modulation will not be seen either, because there will be less than one fringe across each particle image.
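The numerical example above can be checked with a few lines; this is only a sketch of the sampling argument, using the values quoted in the text.

    wavelength = 633e-9   # He-Ne wavelength (m)
    pixel = 6.7e-6        # pixel pitch (m)
    z_r = 100e-3          # distance from reference source / aperture to the sensor (m)

    # one carrier fringe every 3 pixels: k * x_r * (3 * pixel) / z_r = 2 * pi
    x_r = wavelength * z_r / (3 * pixel)
    print(f"maximum x_r = {x_r * 1e3:.1f} mm")   # about 3.1 mm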
The specklegram intensity can also be written, as is usual in holography, as

I(x,y) = |r(x,y)|² + |o(x,y)|² + r*(x,y) o(x,y) + r(x,y) o*(x,y)    (4)

A divergent reference beam, in the Fresnel region, can be written as

r(x,y) = [A_r exp(jkz_r)/(jλz_r)] exp{ j [k/(2z_r)] [(x − x_r)² + (y − y_r)²] }    (5)
For convenience, o(x,y) can be written as a function of the object wave at the aperture plane, o_A(x_A, y_A), such that

o(x,y) = [exp(jkz_A)/(jλz_A)] ∫∫ o_A(x_A, y_A) exp{ j [k/(2z_A)] [(x − x_A)² + (y − y_A)²] } dx_A dy_A    (6)

where z_A is the distance between the aperture plane and the CCD sensor. o_A(x_A, y_A) can be written as

o_A(x_A, y_A) = P(x_A, y_A) exp[ −j (k/(2f)) (x_A² + y_A²) ] Σ_n F_n(x_A, y_A)    (7)

where P(x_A, y_A) is the pupil function of the aperture, which is centred at (x_A = 0, y_A = 0), and F_n is the divergent wave emanating from the n-th particle in the illuminated fluid plane. The quadratic phase term is introduced by the convergent lens of focal length f.
3.2 SPS-DSPI analysis
Although different SPS algorithms can be used to obtain A_o and φ_o, the Fourier transform method (FTM) [4,5] results in less noisy phase maps and intensity fields [6]. This method relies on the different frequency spectra of the four terms of Eq. 4. In the first step the FTM calculates the Fourier transform ℑ of the specklegram, which can be written as

ℑ{I(x,y)} = Ĩ(f_x, f_y) = A_r² δ(0,0) + A_o² S̃(f_x, f_y) + ℑ{r*o} + ℑ{ro*}    (8)

The first term gives a very bright spot in the centre. The second term is the speckle spectrum, which is also centred at (f_x = 0, f_y = 0). The third term, defined as

ℑ{r*o} = Ĩ₃(f_x, f_y) = ∫∫ r*(x,y) o(x,y) exp[−i2π(f_x x + f_y y)] dx dy    (9)

can be expressed as
Ĩ₃(f_x, f_y) = [A_r exp(jk(z_A − z_r))/(λ² z_r z_A)] exp[ −j (k/(2z_r)) (x_r² + y_r²) ] ∫∫ dx_A dy_A o_A(x_A, y_A) exp[ j (k/(2z_A)) (x_A² + y_A²) ]
      × ∫∫ dx dy exp[ j (k/2) (1/z_A − 1/z_r) (x² + y²) ] exp{ −i2π [ (f_x + x_A/(λz_A) − x_r/(λz_r)) x + (f_y + y_A/(λz_A) − y_r/(λz_r)) y ] }    (10)
For z_A = z_r, Eq. 10 becomes

Ĩ₃(f_x, f_y) = [A_r/(λ² z_r²)] exp[ −j (k/(2z_r)) (x_r² + y_r²) ] o_A(x₁, y₁) exp[ j (k/(2z_r)) (x₁² + y₁²) ]    (11)
where x₁ = x_r − λz_r f_x and y₁ = y_r − λz_r f_y. In the same way, the fourth term of Eq. 8 can be written for z_A = z_r as

Ĩ₄(f_x, f_y) = [A_r/(λ² z_r²)] exp[ j (k/(2z_r)) (x_r² + y_r²) ] o_A*(x₂, y₂) exp[ −j (k/(2z_r)) (x₂² + y₂²) ]    (12)
with x₂ = x_r + λz_r f_x and y₂ = y_r + λz_r f_y. Figure 3 shows the amplitude of Ĩ(f_x, f_y), where the separation of the different terms is evident. According to Eqs. 11 and 12, Ĩ₃(f_x, f_y) is the virtual image of the object wave in the aperture plane, centred at f_x = x_r/(λz_r), f_y = y_r/(λz_r), while Ĩ₄(f_x, f_y) is the real image, centred at f_x = −x_r/(λz_r), f_y = −y_r/(λz_r). Figure 3 also shows that the lens aperture has a heptagonal shape. For z_A ≠ z_r, Ĩ₃ and Ĩ₄ will show unfocused images of the lens aperture (Fig. 3b).
The second step of the FTM is to remove all the information outside the aperture image corresponding to Ĩ₃ and to calculate its inverse Fourier transform. The complex wave r*o is thus recovered. The FTM also works for z_A ≠ z_r, as long as the aperture images do not overlap.
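A minimal numerical version of these two steps could look as follows (a sketch only; the carrier position and the mask radius are illustrative and must be matched to the actual x_r, y_r and z_r of the setup).

    import numpy as np

    def ftm_recover(I, carrier_px, radius_px):
        """Recover the complex wave r*o from one SPS-specklegram I by Fourier filtering."""
        S = np.fft.fftshift(np.fft.fft2(I))                 # step 1: Fourier transform
        ny, nx = I.shape
        rows, cols = np.ogrid[:ny, :nx]
        cy, cx = ny // 2 + carrier_px[0], nx // 2 + carrier_px[1]
        mask = (rows - cy)**2 + (cols - cx)**2 <= radius_px**2
        ro = np.fft.ifft2(np.fft.ifftshift(S * mask))       # step 2: keep only the +1 order, invert
        return np.abs(ro), np.angle(ro)                     # A_o (times A_r) and phi_o (plus phi_r)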
3.3 DIPH analysis
The analysis of the SPS-specklegrams as digital holograms involves the numerical reconstruction of the complex wave at any plane (x', y') using the Fresnel-Kirchhoff diffraction integral [7,8]. For planes far enough from the hologram, the reconstructed wavefield u(x', y') at a plane located at a distance z' can be calculated in the Fresnel approximation as

u(x', y') = [exp(jkz')/(jλz')] ∫∫ c(x,y) I(x,y) exp{ j [k/(2z')] [(x' − x)² + (y' − y)²] } dx dy    (13)
where c(x,y) is the reconstructing wavefront at the hologram plane. In general, the plane (x', y') is taken as the best focused object plane. For a DIPH hologram it is not necessary to propagate the reconstructed wave, because the object is focused in the hologram plane. However, both the virtual and the real images of the object, plus the dc term, overlap in the hologram plane. Some algorithms can be used to filter out the dc component [9], but to filter out one of the images it is necessary to find a plane where the different contributions to the global wavefield are well separated [10,11]. When c(x,y) is taken as a divergent wavefront emanating from the point source (x_c, y_c, z_c), Eq. 13 can be expressed as
u(x', y') = [−A_c exp(jk(z_c + z'))/(λ² z_c z')] exp[ jk (x_c² + y_c²)/(2z_c) ] exp[ jk (x'² + y'²)/(2z') ] ∫∫ I(x,y) exp[ j (k/2) (1/z' + 1/z_c) (x² + y²) ]
      × exp{ −jk [ (x'/z' + x_c/z_c) x + (y'/z' + y_c/z_c) y ] } dx dy    (14)

When z' = −z_c,

u(x', y') = [A_c/(λ² z_c²)] exp[ jk (x_c² + y_c²)/(2z_c) ] exp[ −jk (x'² + y'²)/(2z_c) ] Ĩ( (x_c − x')/(λz_c), (y_c − y')/(λz_c) )    (15)
For z_A = z_r, Ĩ gives well separated terms, as already seen in the previous section. The contribution of Ĩ₃ to u(x', y') can be written as

u₃(x', y') = [A_c A_r/(λ⁴ z_c² z_r²)] exp[ jk (x_c² + y_c²)/(2z_c) ] exp[ −jk (x'² + y'²)/(2z_c) ] exp[ −jk (x_r² + y_r²)/(2z_r) ] o_A(x₁, y₁) exp[ jk (x₁² + y₁²)/(2z_r) ]    (16)

where x₁ = x_r − z_r (x_c − x')/z_c and y₁ = y_r − z_r (y_c − y')/z_c. The contribution of Ĩ₄ to u(x', y') can be written as

u₄(x', y') = [A_c A_r/(λ⁴ z_c² z_r²)] exp[ jk (x_c² + y_c²)/(2z_c) ] exp[ −jk (x'² + y'²)/(2z_c) ] exp[ jk (x_r² + y_r²)/(2z_r) ] o_A*(x₂, y₂) exp[ −jk (x₂² + y₂²)/(2z_r) ]    (17)

where x₂ = x_r + z_r (x_c − x')/z_c and y₂ = y_r + z_r (y_c − y')/z_c. Thus, the virtual image of the lens aperture is now centred at x₁' = x_c − x_r z_c/z_r, y₁' = y_c − y_r z_c/z_r, while the real image is centred at x₂' = x_c + x_r z_c/z_r, y₂' = y_c + y_r z_c/z_r.
For (x_c, y_c, z_c) = (x_r, y_r, z_r), i.e. c(x,y) = r(x,y), the virtual image of the aperture will be in the centre of the image, while the real image will be at a distance (2x_r, 2y_r), as figure 4 shows. For (x_c, y_c, z_c) = (−x_r, −y_r, −z_r), i.e. c(x,y) = r*(x,y), the real image of the aperture will be in the centre while the virtual image will be at (−2x_r, −2y_r). The second step of this DIPH analysis is to select the information in the centred aperture image and to propagate the wave back to the sensor plane. For c(x,y) = r(x,y) we will obtain o(x,y), which directly gives the A_o and φ_o values.
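Up to constant amplitude and phase factors, Eq. 15 with c(x,y) = r(x,y) and z' = −z_c = −z_r is a Fourier transform of the chirp-multiplied hologram, so the amplitude pattern of figure 4 can be reproduced with a short sketch of this kind (assuming x_r, y_r, z_r are known; all constant factors and the output-plane scaling are ignored).

    import numpy as np

    def diph_reconstruct(I, wavelength, pixel, x_r, y_r, z_r):
        """Amplitude of u(x', y') at z' = -z_r for c = r, cf. Eq. 15 and Fig. 4."""
        k = 2 * np.pi / wavelength
        ny, nx = I.shape
        x = (np.arange(nx) - nx // 2) * pixel
        y = (np.arange(ny) - ny // 2) * pixel
        X, Y = np.meshgrid(x, y)
        c = np.exp(1j * k / (2 * z_r) * ((X - x_r)**2 + (Y - y_r)**2))  # divergent reference chirp
        u = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(c * I)))
        return np.abs(u)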
Fig. 3. Amplitude of the FT of an SPS-specklegram, Ĩ(f_x, f_y), for a) z_A = z_r; b) z_A = 0.9 z_r.
Fig. 4. Amplitude of the reconstructed wavefield u(x', y') for z_c = z_r, z' = −z_c and z_A = z_r.
3.4 SPS-DSPI vs. DIPH analysis
SPS-DSPI with FTM analysis is a particular case of holographic reconstruction. It works perfectly for z_A = z_r and when the image of the fluid plane is on the CCD sensor. When z_A ≠ z_r, the FT plane will show blurred aperture images, but as long as the real and virtual aperture images are separated the SPS-DSPI analysis will still work. Thus the condition z_A = z_r is not very critical; matching both distances to within 10% is sufficient. There is also some freedom in the position of the image plane. Its separation from the CCD sensor is more critical for the intensity field than for the phase field, since its effect on the intensity field is to produce defocused particle images. If we limit the defocusing so that particle images are at most twice the size of perfectly focused ones, the image plane can be up to about 1 mm from the CCD sensor for f# = 16 and M = 0.3.
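One plausible way to arrive at a number of that order (not necessarily the criterion used by the authors) is to combine the diffraction-limited particle image diameter with the geometric defocus blur in quadrature:

    import numpy as np

    wavelength, f_number, M = 633e-9, 16, 0.3
    d_diff = 2.44 * wavelength * f_number * (1 + M)   # diffraction-limited image diameter
    # blur b that doubles the image size: sqrt(d_diff**2 + b**2) = 2*d_diff  ->  b = sqrt(3)*d_diff
    b = np.sqrt(3) * d_diff
    dz = b * f_number * (1 + M)                       # image-plane offset producing that blur
    print(f"d_diff = {d_diff*1e6:.0f} um, allowed image-plane offset ~ {dz*1e3:.1f} mm")  # ~1 mm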
The SPS-DSPI analysis allows r*o to be obtained. The mean intensity and the linear part of the phase φ_r only need to be known more accurately if propagation to other planes, in order to focus the particle images, is to be performed. The intensity, however, needs to be corrected for in any case. For a reference beam fed through a fibre, the intensity can be taken as constant if the numerical aperture of the fibre is large enough. For a reference beam sent directly, without any fibre, the reference beam intensity will show all the spatial inhomogeneities of the laser beam, and these will need to be corrected for. In any case, at points where A_r is too low (dark spots) the information on A_o will be too noisy; the same will happen where A_r is bright enough to saturate the CCD sensor.
The DIPH analysis allows o(x,y) to be calculated directly, but it requires knowledge of r(x,y). Thus the problem just mentioned for SPS-DSPI is common to DIPH. However, the DIPH analysis gives more information on how to improve the o(x,y) calculation. First of all, Eq. 15 shows that the reconstruction wave c(x,y) fixes where the FT of the hologram will be located. We also know that the aperture image will not be focused if z_A ≠ z_r (Fig. 5a). However, DIPH tells us in which plane the virtual (Fig. 5b) or the real (Fig. 5c) image is focused.
By substituting I(x,y) by r*o in Eq. 14 we obtain

u₃(x', y') = [−A_r A_c exp(jk(z' + z_A + z_c − z_r))/(λ⁴ z_r z_c z_A z')] exp{ jk [ (x_c² + y_c²)/(2z_c) + (x'² + y'²)/(2z') − (x_r² + y_r²)/(2z_r) ] } ∫∫ dx_A dy_A o_A(x_A, y_A) exp[ jk (x_A² + y_A²)/(2z_A) ]
      × ∫∫ dx dy exp[ j (k/2) (1/z' + 1/z_A + 1/z_c − 1/z_r) (x² + y²) ] exp{ −jk [ (x'/z' + x_A/z_A + x_c/z_c − x_r/z_r) x + (y'/z' + y_A/z_A + y_c/z_c − y_r/z_r) y ] }    (18)
u₃(x', y') will give a focused aperture image when

1/z' + 1/z_c − (1/z_r − 1/z_A) = 0    (19)

Eq. 19 corresponds to a lens with 1/f = 1/z_r − 1/z_A giving an image of z_c into z'. Thus, leaving z_A ≠ z_r is equivalent to introducing a lens in the hologram plane, whose effect is a shift in the position of the aperture image. In the same way, u₄(x', y') will give a focused aperture image where

1/z' + 1/z_c − (1/z_A − 1/z_r) = 0    (20)

thus corresponding to a lens with 1/f = 1/z_A − 1/z_r. As a consequence, the virtual (Fig. 5b) and the real (Fig. 5c) images of the aperture will be focused at different distances.
Thus, by using a DIPH analysis we can calculate u(x', y') in the plane where the virtual image is focused and then, after selecting only the focused aperture, propagate back to the plane where the particle images are focused. The DIPH analysis therefore gives much more flexibility than the FTM.
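As a small worked example (with illustrative distances), the two focus planes follow directly from Eqs. 19 and 20:

    def focus_planes(z_c, z_r, z_A):
        """z' planes where the virtual (Eq. 19) and real (Eq. 20) aperture images focus."""
        z_virtual = 1.0 / (1.0 / z_r - 1.0 / z_A - 1.0 / z_c)  # from Eq. 19
        z_real = 1.0 / (1.0 / z_A - 1.0 / z_r - 1.0 / z_c)     # from Eq. 20
        return z_virtual, z_real

    # the case of Fig. 5 (z_c = z_r, z_A = 0.9 z_r): virtual at -0.9 z_c, real at about -1.1 z_c
    print(focus_planes(z_c=0.1, z_r=0.1, z_A=0.09))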
Fig. 5. Amplitude of the reconstructed wavefield u(x', y') for z_c = z_r, z_A = 0.9 z_r, and a) z' = −z_c; b) z' = −0.9 z_c; c) z' = −1.1 z_c.
4. 3D holographic recording with DIPH
Since DIPH allows focused particle image fields to be obtained even when the fluid plane is not focused on the CCD sensor, DIPH can be extended to the recording of 3D regions. Although a whole volume could be illuminated, a much better use of the laser energy is obtained with multiple light-sheet illumination; much bigger volumes can then be recorded, with appropriate sampling, by properly selecting the light-sheet geometry.
From the DIPH analysis it can also be deduced that the position of the lens aperture in the FT plane depends on the position of the reference beam. Thus, multiple aperture images can be reconstructed by using multiple reference beams [12]. Furthermore, each reference beam can be made to interfere with only one of the fluid planes by adequate control of the optical path lengths or, in some cases, of the polarization state of the beams. In this way multiple holograms, each corresponding to one fluid plane, will be multiplexed in the same recording. Each plane will then be reconstructed from its own aperture image, independently of the other planes, as sketched below.
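Numerically, the demultiplexing is just the aperture-selection step of section 3 repeated once per reference beam, each time around the aperture position fixed by that reference; the sketch below assumes the carrier positions are known and well separated.

    import numpy as np

    def demultiplex(I, carriers_px, radius_px):
        """Return one complex wave (r_n)* o_n per reference beam from a single multiplexed hologram."""
        S = np.fft.fftshift(np.fft.fft2(I))
        ny, nx = I.shape
        rows, cols = np.ogrid[:ny, :nx]
        waves = []
        for dy, dx in carriers_px:                            # (row, col) carrier offset per reference
            cy, cx = ny // 2 + dy, nx // 2 + dx
            mask = (rows - cy)**2 + (cols - cx)**2 <= radius_px**2
            waves.append(np.fft.ifft2(np.fft.ifftshift(S * mask)))
        return waves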
Figure 6a shows the optical arrangement for a doubly multiplexed DIPH recording. The same z_r distance is used for the two reference beams, but with different (x_r, y_r). Figure 6b presents the amplitude of the reconstructed wavefield at z' = −z_r, using the coordinates of the FTM for better visualization. It shows that there are two multiplexed holograms, but it is not possible to deduce whether each hologram has recorded one or both planes. In order to determine this, a DIPH recording with one of the light sheets blocked has to be taken. If the coherence control is appropriate, only two aperture images will be seen and we will also be able to deduce which aperture corresponds to each plane. If the coherence control is not appropriate, all four images will still be seen.
In the second step of the DIPH analysis, the complex wave at z' = −z_r is propagated independently for each aperture image up to the z' value where the particle images are best focused. Widely separated fluid planes can be recorded in this way.
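The refocusing propagation itself can be carried out, for instance, with an angular-spectrum propagator applied to each demultiplexed wave at the sensor plane; this is a generic sketch, not the implementation used by the authors, and dz is the trial refocusing distance.

    import numpy as np

    def propagate(u, dz, wavelength, pixel):
        """Angular-spectrum propagation of a sampled complex field u over a distance dz."""
        ny, nx = u.shape
        FX, FY = np.meshgrid(np.fft.fftfreq(nx, pixel), np.fft.fftfreq(ny, pixel))
        arg = 1.0 - (wavelength * FX)**2 - (wavelength * FY)**2
        H = np.exp(1j * 2 * np.pi / wavelength * dz * np.sqrt(np.maximum(arg, 0.0)))
        return np.fft.ifft2(np.fft.fft2(u) * H)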
Figure 7 shows the results for four fluid planes with different degrees of defocusing. The measurements were carried out in a Rayleigh-Bénard convective flow [13]. The planes z = 12 mm and z = 1 mm were recorded simultaneously, while the other two planes were recorded later. The image positions range from 2.31 mm in front of to 1.47 mm behind the CCD sensor. For the z = 1 mm plane, the best focused particle images are reconstructed at 1-2 mm from the CCD sensor. The reconstruction on the CCD sensor is good enough for the other three planes.
Fig. 6. 3D DIPH: a) optical setup for a multiplexed recording of two fluid planes; b) amplitude of the reconstructed wavefield u(x', y') for z' = −z_c = −z_A = −z_r.
Fig. 7. DIPH results from the recording of two fluid planes, shown for z = 1 mm, 6 mm, 12 mm and 19 mm. Upper row: wrapped phase difference; lower row: I_o1. The focused plane corresponds to z = 12 mm.
5. Conclusions
A theoretical analysis of SPS-DSPI as a holographic velocimetry technique has been presented. The DIPH analysis has been shown to give more information than the SPS-DSPI analysis. The feasibility of DIPH as a quasi-3D technique has also been shown.
ACKNOWLEDGEMENTS:
This research was supported by a Spanish Research Agency Grant (DPI2000-1578-C02-02) and by the
EUROPIV 2 project.
References:
1. Andrés, N. and Arroyo, M. P. (1999) Digital speckle-pattern interferometry as a full-field velocimetry technique. Opt. Lett. 24, 575-577.
2. Fricke-Begemann, T. and Burke, J. (2001) Speckle interferometry: three-dimensional deformation field measurement with a single interferogram. Appl. Opt. 40, 5011-5022.
3. Kreis, T. M. (2002) Frequency analysis of digital holography. Opt. Eng. 41, 771-778.
4. Takeda, M., Ina, H. and Kobayashi, S. (1982) Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry. J. Opt. Soc. Am. 72, 156-160.
5. Saldner, H. O., Molin, N. E. and Stetson, K. A. (1996) Fourier-transform evaluation of phase data in spatially phase-biased TV holograms. Appl. Opt. 35, 332-336.
6. Lobera, J., Andrés, N. and Arroyo, M. P. (2003) From ESPI to Digital Image Plane Holography (DIPH): requirements, possibilities and limitations for velocity measurements in a 3-D region. Proc. EUROPIV 2 Workshop on Particle Image Velocimetry (Zaragoza).
7. Goodman, J. W. (1968) Introduction to Fourier Optics. New York: McGraw-Hill.
8. Schnars, U. and Jüptner, W. P. O. (2002) Digital recording and numerical reconstruction of holograms. Meas. Sci. Technol. 13, R85-R101.
9. Kreis, T. M. and Jüptner, W. P. O. (1997) Suppression of the dc term in digital holography. Opt. Eng. 36, 2357-2360.
10. Cuche, E., Marquet, P. and Depeursinge, C. (2000) Spatial filtering for zero-order and twin-image elimination in digital off-axis holography. Appl. Opt. 39, 4070-4075.
11. Pedrini, G., Fröning, P., Fessler, H. and Tiziani, H. J. (1998) In-line digital holographic interferometry. Appl. Opt. 37, 6262-6269.
12. Schedin, S., Pedrini, G., Tiziani, H. J. and Mendoza Santoyo, F. (1999) Simultaneous three-dimensional dynamic deformation measurements with pulsed digital holography. Appl. Opt. 38, 7056-7062.
13. Arroyo, M. P. and Savirón, J. M. (1992) Rayleigh-Bénard convection in a small box: spatial features and thermal dependence of the velocity field. J. Fluid Mech. 235, 325-348.