SIMULTANEOUS PHASE MEASUREMENT INTERFEROMETRY
FOR LASER INTERACTION IN AIR
ASIAH BINTI YAHAYA
A thesis submitted in fulfilment of the
requirements for the award of the degree of
Doctor of Philosophy
Faculty of Science
Universiti Teknologi Malaysia
FEBRUARY 2006
To my mother
CHE SUM SANAPI
My husband
MANAN MUNHAMAD JAIB
My children
KAMILAH, KAMIL MOHSEIN, NUR ATIKAH,
HAFIZ ARIF and KHAIRUL AIMAN
ACKNOWLEDGEMENT
The author wishes to express her sincere gratitude to Prof. Madya Dr
Yusof Munajat from the Department of Physics, Universiti Teknologi Malaysia
for his dedication in the supervision of this work. His assistance, encouragement,
advice and motivation managed to pull the author through some tough times.
Sincere thanks should also go to Prof Dr Ramli Abu Hassan for keeping up with
the progress of the project and for reminding the author that 'nothing is
impossible should we set our mind to it'.
Special thanks go to En Subre from the Electronic Laboratory, for his
assistance and skill in seeing the electronic parts of the system through to
success. En Rashid Isnin, En Rashdan and Allahyarham En Mohd Nyan are not
forgotten for the initial construction of parts of the system. May Allah bless
Allahyarham En Jumaat Anuar for managing to repair the Nd:YAG laser without
which this work could not proceed.
Endless thanks go to Jabatan Perkhidmatan Awam (JPA) and Universiti
Teknologi Malaysia for the scholarship and the study leave provided, which gave
the author this golden opportunity.
Finally, the author wishes to express her greatest appreciation to her
husband and her five children for their understanding, support and encouragement
that enabled completion of this research.
ABSTRACT
The problem encountered when evaluating the phase profile of laser-interacted
images with the direct phase mapping method, using only one interferogram, was phase
ambiguity, caused by the existence of extra fringes in the interacted region of the
interferogram. The very sensitive Phase Measurement Interferometry (PMI) also suffers
from environmental factors such as vibration and air turbulence. The new system
developed to reduce phase ambiguity was a three-output interferometer, designed to
capture three interferograms simultaneously. The fast photography incorporated in the
system eliminated the problems of vibration and air turbulence. The three
interferograms were initially arranged to have a phase difference of 90° with one
another, a requirement for quadrature imaging. Since the interferograms were captured
simultaneously, they carried different phase information of the same event. The
acoustic wave generated by laser interaction caused the fringes to deviate according to
the change in its phase. From the three intensities, appropriate phase-shifting
algorithms were selected to produce a single final phase-change profile of the
interaction event. The result revealed a significant reduction in phase ambiguity. The
changes in phase were associated with the changes in refractive index, density and
pressure. The values of pressure change were compared with those obtained from
conventional fringe analysis. Measurements made at a time delay of 3.6 µs indicated a
26 % difference. As the delay increased, this difference decreased, and at around
5.0 µs both techniques produced agreeable results. The nonlinear profiles of the
maximum pressure change with time using the two techniques were presented. Despite
the high complexity of the experimental setup, the system fulfilled the objectives of its
development.
ABSTRAK
Phase measurement of laser interaction by the direct phase mapping method with a
single interferogram is often plagued by ambiguity caused by the extra fringes that
appear. This very sensitive interferometric phase measurement is also burdened by
environmental factors such as vibration and air turbulence. The system built to reduce
the phase ambiguity problem is a three-output interferometer that records three images
simultaneously. The high-speed photography system used to record the simultaneous
images overcomes the problems of air turbulence and vibration. The three
interferograms are arranged to differ in phase by 90° from one another, as required for
quadrature imaging. Since the three interferograms are recorded simultaneously, each
carries different phase information of the event. Phase-shifting algorithms matched to
the system produce a single phase-change profile of the laser interaction. The results
represent a significant contribution to reducing the phase ambiguity problem of
laser-interaction interferograms. The corresponding profiles of the change in refractive
index, density and pressure can also be constructed. These changes were compared
with those obtained by the earlier method, namely fringe analysis. Calculations at a
time delay of 3.6 µs recorded a 26 % difference. As the delay increased, the percentage
difference decreased, and at around 5.0 µs the two techniques reached agreement.
Despite the many challenges faced at every stage of development, the results prove that
all the objectives put forward for this project were fulfilled.
TABLE OF CONTENTS
CHAPTER    TITLE

1    INTRODUCTION
     1.1   Introduction
     1.2   Objectives of Study
     1.3   Scope of Study
     1.4   Thesis Layout

2    LITERATURE REVIEW
     2.1   Introduction
     2.2   The Principle of Interferometry and Interferometric Testing
     2.3   Generation of Acoustic Waves by Laser
     2.4   Phase Association with Refractive Index and Pressure
     2.5   Abel Inversion Technique
     2.6   Techniques for Phase Measurement
           2.6.1   Fringe Analysis
           2.6.2   Phase Mapping Techniques
                   2.6.2.1   Fourier Transform Method
                   2.6.2.2   Carrier Frequency Method
                   2.6.2.3   Phase Shifting Interferometry
                   2.6.2.4   Phase Shifting Algorithms
                   2.6.2.5   Phase Unwrapping
                   2.6.2.6   Error in Phase Unwrapping
                   2.6.2.7   General Error Sources and Measuring Limitations in PSI
     2.7   Phase Measuring Interferometry versus Fringe Analysis
     2.8   Simultaneous Phase Measurement Interferometry

3    METHODOLOGY
     3.1   Introduction
     3.2   The Laser
           3.2.1   The Nd:YAG Laser
                   3.2.1.1   The Focusing System for Nd:YAG Laser
           3.2.2   The Nitro-dye Laser
                   3.2.2.1   The Magnification and the Collimation of Dye Laser Beam
     3.3   The Interferometer
     3.4   Alignment of the Interferometry System
     3.5   Localization of the Fringes
     3.6   Magnification and Focusing of the Image
     3.7   Quadrature Imaging
     3.8   High-speed Photography System
           3.8.1   The CCD Camera
           3.8.2   The Frame Grabber
     3.9   Synchronizing and Triggering
     3.10  Image Production
     3.11  Photography Techniques
     3.12  Phase Retrieval

4    IMAGE PRODUCTION AND IMAGE PROCESSING
     4.1   Introduction
     4.2   The Photographic Images
     4.3   Image Synchronization
     4.4   Fourier Filtering
     4.5   The Intensity
     4.6   The 90° Phase Difference
     4.7   The Effects of the Number of Fringes and their Shapes
     4.8   Postprocessing Fringe Patterns
     4.9   Summary

5    SINGLE-INTERFEROGRAM PHASE INTERFEROMETRY
     5.1   Introduction
     5.2   Fringe Analysis Technique
     5.3   FFT Phase Mapping Technique
     5.4   Problems of Single Interferometry Phase Mapping
     5.5   Summary

6    SIMULTANEOUS PHASE MEASUREMENT INTERFEROMETRY
     6.1   Introduction
     6.2   Simultaneous Phase Measurement Interferometry
     6.3   Refractive Index, Density and Pressure Profile of Image
     6.4   Pressure of Acoustic Waves from Laser Interaction
     6.5   Image Representation
     6.6   Comparison with Fringe Analysis
     6.7   The Advantages of the Simultaneous Phase Measurement
           6.7.1   Phase Ambiguity Reduction
           6.7.2   Visual Observation
           6.7.3   Intensity Independency
           6.7.4   Fringe Shapes and Sizes
           6.7.5   User-friendly System
     6.8   The Disadvantages of the System
     6.9   Discussion: Error Contributors
     6.10  Summary

7    CONCLUSION AND RECOMMENDATIONS
     7.1   General Conclusion
     7.2   Recommendations for Future Work

REFERENCES
Appendices A-P
LIST OF TABLES

TABLE NO.    TITLE

Table 4.1    Some combinations for the 90° phase difference
Table 6.1    Distribution of maximum pressure change
LIST OF FIGURES

FIGURE NO.   TITLE

2.1    Cross section of the spherically symmetrical refractive index distribution.
2.2    The zone and chordal divisions.
2.3    Fringe deviation measurements.
3.1    The general layout of the system.
3.2    Nd:YAG laser in Gaussian mode and the amplitude distribution in the transverse direction.
3.3    The beam waist w along the propagation axis.
3.4    Focusing system for Nd:YAG laser.
3.5    Magnification of dye laser beam.
3.6    The modified Mach Zehnder interferometer with three outputs.
3.7    Fringe localization.
3.8    A U-shaped aluminium plate as reference frame for the interference pattern.
3.9    Master-slave configuration.
3.10   Arrangement for controlling the width and delay of the three frame grabbers.
3.11   The optical detector used for laser delay measurement.
3.12   The time chart for image capture.
3.13   Shadowgraphy arrangement.
3.14   Schlieren arrangement.
4.1    The development of acoustic wave propagation using (a) the Schlieren and (b) shadowgraphy techniques.
4.2    Stages of development of waves by interferometric method.
4.3    Plot of the radius of wave with time.
4.4    Synchronization of center of interaction.
4.5    Cut-off frequency in Fourier filtering.
4.6    The unfiltered and the filtered intensity signal.
4.7    Intensity distributions of the three undisturbed images.
4.8    The filtered intensity of the undisturbed images.
4.9    The sequence of the 90°-90° phase difference.
4.10   The wrapped phase.
4.11   The unwrapped phase wavefronts.
4.12   The fluctuation of the 90°-90° phase difference.
5.1    (a) The image at 3.6 µs. (b) The corresponding fringe shift.
5.2    Profile of pressure change of the event.
5.3    (a) The interferogram at t = 3.6 µs. (b) The phase change profile by FFT method.
5.4    Profile of the corresponding pressure change.
5.5    (a) Interferogram at 3.2 µs. (b) The associated phase change exhibiting ambiguity.
5.6    The extra fringe in the interferogram.
6.1    The images of laser interaction from the three CCD cameras at 3.6 µs delay.
6.2    Intensity distribution of the three images at y = 15.
6.3    The unfiltered and the filtered signals for the three images.
6.4    (a) The wrapped phase spectrum. (b) The unwrapped phase wavefront and its deviation from its reference.
6.5    The phase change with the first algorithm.
6.6    The phase change with the second algorithm.
6.7    Change in the refractive index due to interaction.
6.8    Change in density due to laser interaction.
6.9    Profile of pressure change of the event.
6.10   Distribution of maximum pressure change.
6.11   (a) 3-D image of phase change with first algorithm. (b) 3-D image of phase change with second algorithm.
6.12   (a) Cross-section of the image. (b) Another view of the cross section.
6.13   A quarter section of the event.
6.14   Profile of phase change at different locations across the image.
6.15   Maximum pressure change profiles using the two methods.
6.16   Field of view at three different locations.
6.17   Images at t = 3.8 µs.
6.18   Phase change profiles individually analyzed.
6.19   Phase change profile with simultaneous analysis.
6.20   Images at t = 3.4 µs.
6.21   Phase change profiles of images when analyzed individually.
6.22   Phase change profile simultaneously analyzed.
6.23   Phase change profiles of the three images individually analyzed.
6.24   Phase change profile simultaneously analyzed.
6.25   Simultaneous phase analysis from high-intensity images.
6.26   Phase change from low-intensity images.
LIST OF ABBREVIATIONS

ξ        -  spatial frequency coordinate
η        -  high-frequency noise
2D, 3D   -  two- and three-dimensional
α        -  phase step
atm      -  atmospheric pressure
B        -  bulk modulus
c        -  velocity of light
CCD      -  charge-coupled device
CCIR     -  Comité Consultatif International des Radiocommunications
cR       -  Rayleigh wave velocity
∆F       -  fringe shift
∆f       -  fractional fringe shift
∆φ       -  phase change
∆L       -  optical path difference
∆n       -  change in refractive index
∆P       -  change in pressure
∆ρ       -  change in density
E        -  electric field amplitude
f        -  frequency
FFT      -  fast Fourier transform
γ        -  coherence modulation
HD       -  horizontal drive synchronization
He-Ne    -  helium-neon
I        -  intensity
ISA      -  Industry Standard Architecture
λ        -  wavelength
LASER    -  light amplification by stimulated emission of radiation
MHz      -  megahertz
µm       -  micrometer
MOSFET   -  metal-oxide-semiconductor field-effect transistor
µs, ns   -  microsecond, nanosecond
MW       -  megawatt
n        -  refractive index
Nd:YAG   -  neodymium-doped yttrium aluminium garnet
PAL      -  Phase Alternating Line
PMMA     -  polymethyl methacrylate
PMI      -  Phase Measurement Interferometry
PSI      -  Phase Shifting Interferometry
ρ        -  density of medium
TTL      -  transistor-transistor logic
VD       -  vertical drive synchronization
w        -  width distribution of laser beam
LIST OF APPENDICES

APPENDIX   TITLE

A    Laser Energy Produced at Laser Head
B    The Trigger and Synchronize Unit Incorporating the Nd:YAG and Nitro-dye Laser
C    Power Supply for Trigger Unit
D    Formula Derivation for Simultaneous Phase Measurement
E    Acoustic Wave Propagation
F    Fringe Analysis
G    Simultaneous Phase Measurement
H    3D Representation of the Phase Change
I    The Cross-section of the Phase Image
J    Distribution of the Maximum Pressure Change by Fringe Analysis and Simultaneous Method
CHAPTER 1
INTRODUCTION
1.1    Introduction
Optical measurements play a much more important role today than ever before.
The demands on measurement accuracy have increased, driven by high-stakes scientific
and technological applications. One example of the immeasurable importance of
measurements and their critical nature is the Hubble Space Telescope. The
imperfections in the primary mirror, arising from defective measurements of the
mirror's surface contours, were discovered only after the telescope was launched.
However, the imperfections causing the blurred vision were finally, and spectacularly,
corrected in orbit (Rastogi, 1997).
Laser interferometry provides the non-contact, non-destructive precision
measurements necessary for industrial purposes. The interaction of laser radiation with
matter and their applications have been studied extensively ranging from the higher
power laser applications in laser fusion, laser processing, laser chemistry, laser
annealing, non-linear optics, medicines, laser monitoring of the atmosphere to the low
power laser applications in optical fiber communication and spectroscopy.
As measurement precision increases, laser interferometry is gaining acceptance
in applications as exotic as gravitational-wave detection and as mundane, but equally
important, as the inspection of automotive engine components (Lerner, 1999). Other
applications of interferometry include Fourier-transform infrared spectroscopy; imaging
of 3-D surface profiles; laser wavelength determination; and the manufacture of optics,
gigabit hard-disk drives, fuel-delivery systems in diesel engines, Pentium computer
processors and contact lenses (Peach, 1997).
In studying the acoustic waves due to laser interaction, the measurements of the
phase change can be made based on the fringe shift of the interferograms and also on
the change in the intensity level or the gray scale of the fringes. The propagation of the
waves will change the density and therefore the refractive index of the medium. This
changes the optical path lengths, which result in the shifting of the fringes in the
interference pattern. Using the Abel inversion technique, the change in refractive index
of the medium can be related to the change in pressure of the resulting wave.
In this work, an interferometry system for phase measurement will be developed
to study the changes in pressure of the acoustic waves produced by laser interaction.
The system is designed to overcome the problem of phase ambiguity due to the extra
fringes associated with laser interactions. As phase measurement interferometry is
very sensitive and very precise, environmental effects on it must also be controlled.
Thus, the system is also designed to eliminate the problems of air turbulence and
vibration. Error contamination is unavoidable in the production of the images, but
such errors are far less of a nuisance if they are of the same nature and come from the
same sources, since this simplifies the noise-filtering process. Phase calculations
benefit greatly from images of this type.
1.2    Objectives of the Study
There are some common drawbacks and limitations to the use of interferometry
for phase measurements. The spherical nature of the acoustic waves produced by laser
interaction, viewed from a slight tilt, can sometimes produce extra fringes in the
interferogram. In analysis, this leads to phase ambiguities. Environmental factors,
such as vibrations and air turbulence, have tremendous effects on phase calculations
owing to the very sensitive nature of the interferometry system. Various
time-dependent noises also affect this type of phase measurement. Previously, phase
measurement using interferometric methods could be a long and tedious process
involving large amounts of data; however, modern computer software and programming
can overcome this problem.
The objectives of this research are:

1. To develop a direct phase measurement system that will be able to measure the
   phase profile of laser interactions.
2. To overcome the problem of phase ambiguity due to the effects of extra fringes
   in the area of acoustic wave disturbance.
3. To improve the system by eliminating the factors of air turbulence and
   vibration.
4. To evaluate the pressure profiles of the waves produced.

1.3    Scope of Study
The scope of study includes the development of a system that consists of a
three-output interferometer, a fast photography unit, a synchronize-and-trigger unit and
an image-processing unit. The interferometer was a Mach Zehnder interferometer,
modified to suit the requirement of simultaneous image capture. The fast photography
unit made use of the 1 ns illumination from the pulsed Nitro-dye laser. The trigger and
synchronize unit is an electronic system that connects, controls and synchronizes the
whole operation. The image-processing unit includes the writing of computer programs
to obtain the phase change for this system. The phase change will be determined from
the intensity distribution of the interferograms.

The phases of the three simultaneously captured interferograms differed by 90°
from one another. This allows the wave to be assessed using three different sets of
phase information, with the intention of minimizing the ambiguity problem. The
algebraic combination of their intensities will provide the associated phase change due
to laser interaction. The algorithms for phase measurement in this work are based on
phase-shifting algorithms.
There are two methods of phase analysis, namely fringe analysis and phase
mapping. This work emphasizes the phase mapping method, based on three
interferograms that are captured simultaneously. However, comparisons will be made
with the conventional fringe analysis.
The assumption made in this study is the spherically symmetric nature of the
acoustic waves produced by laser interaction. With this assumption and the Abel
inversion technique, the phase change can be converted to the change in refractive
index and density, and finally to the change in pressure of the associated sample.

Visual phase representations such as 3-D images will be produced to enable
thorough observation of the changes taking place at any location of the interferogram.
A computer program will be developed for this purpose.
1.4    Thesis Layout
Chapter 2 describes the literature survey of the work done by previous
researchers in the same discipline. It reveals the correlations between fringe deviation
and phase change, which are then related to changes in the refractive index, density and
pressure of the acoustic wave produced by laser interaction. Various methods and
algorithms were designed and implemented by previous researchers to suit the various
needs in interferometry. Tremendous effort has been put into overcoming the errors
that accompany such systems. However, no one particular method or algorithm can
eliminate most of the errors associated with interferometric measurements; usually, a
system or a technique is developed to overcome certain problems only.
The system designed and built for this work is described in Chapter 3. The
interferometer system, with its three outputs designed to be at 90° out of phase from one
another, was a modified Mach Zehnder interferometer. A fast photography unit
attached to the system was used to capture the images of fast events (1 ns) such as laser
interaction. This was also used to eliminate environmental factors such as vibrations and
air turbulence. The trigger and synchronize electronic system acted as the control for
the start of the event and the delay between laser interaction and its image capture.
Chapter 4 describes the preliminary work done with the system and the
preparations made before it was ready to take measurements for phase analysis.
Firstly, the system was arranged so that the intensities of the three images were about
the same. Secondly, the three outputs of the interferometer had to be at a phase
difference of 90° between the images. This was obtained by rotating the analyzers in
front of the detectors until the right combinations producing the required phase
difference were found. Finally, the magnification factor of the image had to be
recorded in order to obtain the correct dimensions of the event.
Single-interferogram phase analysis is described in Chapter 5. The methods
used here were fringe analysis and phase mapping using Fourier transform analysis.
These methods were known to be capable of producing reliable results. In this work,
fringe analysis was capable of producing the required phase profile, but the work
involved was eye-straining, long and tedious. The phase mapping method applied to
laser-interacted interferograms, though easier, could sometimes result in phase
ambiguity. This phase ambiguity is shown in this chapter.
The phase measurement method involving three simultaneously captured images
is presented in Chapter 6. It shows how the change in phase of a laser-interacted
interferogram can be calculated using two different phase-shifting algorithms. The
associated changes in density, refractive index and pressure profiles of laser interaction
in air were evaluated. Pressure profiles from both the simultaneous and the
fringe-analysis techniques were produced for comparison. Visual representations in the
form of 3-D images of the events were produced to enhance the quantitative results.
The author also presents the advantages of the simultaneous image analysis over
single-interferogram analysis in overcoming the ambiguity problem of images produced
by laser interaction. Some error factors that could affect these measurements with the
present system were also mentioned.
Despite the physical limitations and challenges faced with the present system, it
was concluded that the objectives of this project were fulfilled. This conclusion is
presented in Chapter 7. However, the work must go on, and the author states a few
ideas for improving the accuracy of the present system. Recommendations on
expanding and diversifying the present scope are also given.
CHAPTER 2
LITERATURE REVIEW
2.1    Introduction

The advance of the frontiers of knowledge in recent years has created a need for
measurements in environments of rapidly growing complexity, which in turn requires
the development of equally high-performance procedures. Along with this, optical and
digital image-processing techniques have greatly improved. The use of optical
metrology methods for scientific and industrial measurements has notably expanded
over the last decade.
Interferometry is an old and very powerful technique to measure the deviation
between two wavefields with a sensitivity of a fraction of the wavelength of the
illumination source. Traditionally, interferograms have been analyzed by noting the
straightness of the fringes or by identifying the fringe centers and assigning a constant
surface height along each fringe. Adjacent fringes represent a height change of a half
wave. Finding the fringe centers for fringe analysis has been the inherent limit to the
precision of the technique and has also restricted the amount of data processing that can
be done to the results.
Then, with the advent of solid-state charge-coupled devices (CCDs) and
powerful computers, analytical methods and their algorithms became the predominant
means of determining the phase of interference. Computer subtraction of the
interferometer noise also allows the removal of any geometrical distortion in the optics.
This makes the phase measuring technique more accurate than fringe-pattern
interferometry. The intensity of the interference pattern encodes the phase of the
wavefront. The point-by-point calculation recovers the phase, and thus the analysis
depends on neither the fringe centers nor the straightness of the fringes. Any type of
fringe pattern can be analyzed, which is a more practical situation. Even a pattern with
no fringes (one very broad fringe covering the entire field of view), or with a
complicated series of closed fringes, is analyzed correctly.
In research and in industrial applications, automatic fringe analysis is increasingly
important. Solid-state detector arrays and image memory boards together with
microprocessors and computers are used to extract information from the interferograms
and high-resolution graphic boards find important applications in optical metrology. In
this way much more information can be extracted from the interferograms, leading to
higher resolution and accuracy.
2.2    The Principle of Interferometry and Interferometric Testing
There are many different types of interferometers, but all share one basic
principle of operation. If light from a source travels along two paths of slightly
different lengths, then when the beams recombine, the difference in their path lengths
creates an interference pattern of alternating light and dark fringes. Constructive
interference occurs when the difference in path length is an exact number of
wavelengths, say one, two or three waves. Alternatively, if the difference in path
length is 3/2, 5/2, 7/2 … waves, there is destructive interference. The distance between
the bands represents the displacement of the two wavefronts relative to one another.
The spacing and the shape of the fringes are determined by three factors: the distance
traveled by the light, the alignment and shape of the disturbance (object in its path),
and the wavelength of the light source.
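As a worked illustration of this principle, the intensity of two interfering beams can be written as I = I1 + I2 + 2√(I1·I2) cos ∆φ, where ∆φ is the phase difference corresponding to the path-length difference. The sketch below (the function name is illustrative, not from the thesis) evaluates the whole-wave and half-wave cases:

```python
import numpy as np

def two_beam_intensity(i1, i2, delta_phi):
    """Intensity of two interfering beams with phase difference delta_phi.

    Bright fringes occur when delta_phi is a multiple of 2*pi (whole-wave
    path differences); dark fringes at odd multiples of pi (3/2, 5/2, ...
    wave path differences).
    """
    return i1 + i2 + 2.0 * np.sqrt(i1 * i2) * np.cos(delta_phi)

# Equal-intensity beams: constructive interference gives 4*I0, destructive gives 0.
i0 = 1.0
print(two_beam_intensity(i0, i0, 0.0))    # whole-wave difference -> 4.0
print(two_beam_intensity(i0, i0, np.pi))  # half-wave difference  -> 0.0
```

Note that with equal beam intensities the contrast is perfect; unequal intensities reduce the fringe visibility, which is why the experimental images are balanced before analysis.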
The interference pattern, or interferogram, produced in interferometry carries
an enormous amount of information. Two techniques are generally used to process the
interferogram to obtain the change in its phase after it undergoes a certain interaction.
One approach is the fringe analysis technique, whereby the deviation of each fringe
from its initial location is noted and calculated. These deviations are then related to the
phase change that took place. Another approach is to take a series of interferograms
while the phase difference between the interfering waves changes. The wavefront
phase distribution of each interferogram is encoded in the irradiance variation, and the
phase difference between the beams can be obtained by analyzing the point-by-point
irradiance of three or more interferograms as the phase is varied. This method of
obtaining phase information from interferograms is known as phase-shifting
interferometry.
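As a minimal sketch of how three interferograms separated by 90° can yield the phase at each point, consider one common three-step algorithm (the thesis selects its own algorithms, which may differ; the function name is illustrative):

```python
import numpy as np

def three_step_phase(i0, i1, i2):
    """Wrapped phase from three interferograms shifted by 90 degrees.

    Assumes I_k = I_bias + I_mod * cos(phi + k*pi/2) for k = 0, 1, 2, so
        I0 - I2        = 2 * I_mod * cos(phi)
        I0 + I2 - 2*I1 = 2 * I_mod * sin(phi)
    and phi = atan2(I0 + I2 - 2*I1, I0 - I2), wrapped to (-pi, pi].
    Works point by point on full image arrays as well as on scalars.
    """
    return np.arctan2(i0 + i2 - 2.0 * i1, i0 - i2)

# Synthetic check: recover a known phase from three quadrature frames.
phi_true = 0.7
bias, mod = 2.0, 1.0
frames = [bias + mod * np.cos(phi_true + k * np.pi / 2) for k in range(3)]
print(three_step_phase(*frames))  # ~0.7
```

Because the bias and modulation terms cancel in the two differences, the recovered phase is insensitive to the overall intensity level, which is one motivation for capturing the three frames simultaneously.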
2.3    Generation of Acoustic Waves by Laser
Scientists have studied the generation of acoustic waves in gases by laser
breakdown since as early as the 1960s. In principle there are five important interaction
mechanisms that can be responsible for the generation of acoustic waves: dielectric
breakdown, vaporization or material ablation, thermoelastic process, electrostriction and
the radiation pressure. Their contribution depends on the parameters of the incident
laser beam as well as on the optical thermal parameters of the medium (Sigrist, 1986).
In this work, the phenomenon of laser interaction with matter involves the
excitation of acoustic waves by laser impact. Dielectric breakdown in air only occurs
at laser intensities of approximately 10¹⁰ W cm⁻². This can be achieved quite easily by
focusing the beam of a pulsed laser using lens combinations. The plasma
initially at supersonic speed in the medium before attenuating to the acoustic wave
speed. This is the most efficient process of converting optical energy to acoustical
energy. The dielectric breakdown dominates the interaction at high laser intensities,
especially in transparent media where sound generation due to ordinary absorption does
not occur (Sigrist, 1986).
Generation of acoustic waves is actually the result of the changing density of the
medium. This in turn changes the refractive index and the optical path length and, as a
result, the phase of the optical wave also changes. In liquids, the generation of acoustic
waves is followed by the formation of a cavitation bubble. The bubble expands and
contracts, resulting in the formation of a second acoustic wave. This process is
repeated until all the energy is used. The acoustic waves tend to propagate spherically
outwards from the center of disturbance, or source. However, propagation of the waves
near a solid boundary is much more complex (Yusof Munajat, 1997).
2.4    Phase Association with Refractive Index and Pressure
The acoustic waves passing through a medium change its density and give rise
to a change in its refractive index. This is seen as shifts from the initial fringe
distribution or, more appropriately in this case, as the appearance of a spherically
shaped disturbance at the center of the image. The spherical disturbance propagates
outwards from its center. Assuming the spherically symmetric nature of the acoustic
wave as it propagates outwards, the refractive index distribution is also assumed to be
spherically symmetric. The Abel inversion technique uses this assumption to model the
refractive index profile, which then allows the pressure profile of an acoustic wave to
be calculated (Yusof Munajat, 1997).
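The thesis does not specify its discretization, but the idea behind Abel inversion can be sketched in its simple "onion-peeling" form: the symmetric field is divided into concentric shells, each line-of-sight measurement is a sum of chord lengths times the shell values, and the resulting triangular system is solved from the outermost chord inward. All names and numbers below are illustrative:

```python
import numpy as np

def onion_peel(projections, h):
    """Recover a radially symmetric profile f(r) from line-of-sight sums
    P(y_i) = sum_j L_ij * f_j (onion-peeling form of Abel inversion).

    projections[i] is the integral along the chord at y = i*h; shells are
    annuli of width h. Solving from the outermost shell inward gives f.
    """
    n = len(projections)
    r = np.arange(n + 1) * h                    # shell boundaries
    f = np.zeros(n)
    for i in range(n - 1, -1, -1):              # outermost chord first
        y = i * h
        # chord length through each shell j >= i (zero-clamped for safety)
        lengths = 2.0 * (np.sqrt(np.maximum(r[i + 1:] ** 2 - y ** 2, 0.0))
                         - np.sqrt(np.maximum(r[i:-1] ** 2 - y ** 2, 0.0)))
        # subtract contributions of outer shells already solved
        outer = np.dot(lengths[1:], f[i + 1:])
        f[i] = (projections[i] - outer) / lengths[0]
    return f

# Synthetic check: build projections from a known profile, then invert.
h, n = 0.1, 20
f_true = np.exp(-np.arange(n) * h)              # assumed radial profile
r = np.arange(n + 1) * h
proj = np.array([sum(2.0 * (np.sqrt(max(r[j + 1] ** 2 - (i * h) ** 2, 0.0))
                            - np.sqrt(max(r[j] ** 2 - (i * h) ** 2, 0.0))) * f_true[j]
                     for j in range(i, n)) for i in range(n)])
print(np.allclose(onion_peel(proj, h), f_true))  # True
```

In the experiment, f(r) would be the local refractive index change in each shell and the projections come from the measured phase map along chords through the spherical disturbance.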
When there is no disturbance in the test arm of the interferometer, the
interference fringes obtained show straight and uniformly spaced fringes. However, any
disturbance introduced in the test arm will change the optical path length of the
incoming light and cause these fringes to deviate. This deviation is proportional to the
change in the phase of the medium, which can later be used to calculate the change in
the density, refractive index and pressure of the medium.
The relation between the optical path length, ∆L, and the optical phase
difference, ∆φ in the Mach Zehnder interferometer is given by
∆φ ( x, y ) =
2π∆L( x, y )
λ
(2.1)
11
The fringe shift is given as
∆F ( x, y ) =
∆φ
F ( x, y )
2π
(2.2)
where F(x,y) is the undisturbed fringe separation.
The difference in the optical path length between the test arm and the reference
arm of the Mach-Zehnder interferometer is given as:

∆L(x, y) = ∫_{s1}^{s2} {n(x, y, z) − n∞} dz    (2.3)

where s1 and s2 are the surfaces bounding the sample and n∞ is the constant refractive
index in the reference arm of the interferometer. The relationship between the fringe
shift, ∆F, and the refractive index, n, can be written as:

∆F(x, y) = (F(x, y)/λ) ∫_{s1}^{s2} {n(x, y, z) − n∞} dz    (2.4)
If the sample has a uniform thickness L with a refractive index of n(x,y) which
does not vary in the z direction, the relationship can be further simplified to:
∆n(x, y) = λ∆f(x, y)/L    (2.5)

where ∆n(x,y) is the difference between n(x,y) and n∞ and

∆f(x, y) = ∆F(x, y)/F(x, y)    (2.6)
Thus, the refractive index is proportional to the fractional fringe shift ∆f(x,y).
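Equations (2.5) and (2.6) can be sketched numerically as follows. The function name and the numerical values (a half-fringe shift at a He-Ne wavelength over a 10 mm path) are illustrative assumptions, not data from this work:

```python
# Sketch of Eqs (2.5)-(2.6): refractive-index change from a fringe shift.

def refractive_index_change(delta_F, F, wavelength, L):
    """Return delta_n = wavelength * (delta_F / F) / L for a sample of thickness L."""
    delta_f = delta_F / F              # fractional fringe shift, Eq. (2.6)
    return wavelength * delta_f / L    # Eq. (2.5)

# Example: a half-fringe shift (delta_F/F = 0.5) at 632.8 nm over a 10 mm path
dn = refractive_index_change(delta_F=0.5, F=1.0, wavelength=632.8e-9, L=10e-3)
print(dn)  # ≈ 3.164e-05
```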
The theoretical relationship between the refractive index n of a medium and its
density ρ is described by the so-called Clausius-Mossotti equation (Yusof Munajat, 1997):

(n² − 1)/(n² + 2) = K′ρ    (2.7)
The constant K’ is dependent on the molecular properties of the material and the
frequency of the incident radiation. In liquids and gases where the refractivity, n-1 is
small, the relationship between the refractive index and its density ρ can be simplified
further to give the well-known Gladstone-Dale relation:

n − 1 = Kρ    (2.8)
where K = 3K′/2. The change in the refractive index, ∆n, can also be expressed in terms of
the change in density, ∆ρ:

∆n = K∆ρ    (2.9)
The above relationship is accurate for pressures up to approximately 100 bars
(Partington, 1953). Usually, it is more convenient to express the changes in the density
of a sample as changes in its pressure. Since the pressure usually generated in the
laboratory is less than 100 bars, a constant of proportionality between the two variables,
pressure P and density ρ, is assumed. Thus, a modified Gladstone-Dale relationship
becomes:

∆n = C∆P    (2.10)

where the constant C = K/c², its unit is bar⁻¹, and c is the speed of sound.
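Equations (2.8)–(2.10) can be sketched as follows. The values of K and c are rough textbook figures for air, used only for illustration, and the sketch works in SI units (Pa) rather than the bar⁻¹ quoted above:

```python
# Sketch of Eqs (2.8)-(2.10): the Gladstone-Dale constant links refractive
# index, density and pressure. K and c below are approximate values for air.

K = 2.26e-4        # Gladstone-Dale constant for air, m^3/kg (approximate)
c = 343.0          # speed of sound in air, m/s (approximate)

def dn_from_drho(drho):
    """Refractive-index change from a density change, Eq. (2.9)."""
    return K * drho

def dn_from_dP(dP):
    """Refractive-index change from a pressure change, Eq. (2.10), SI units."""
    C = K / c**2   # proportionality constant, here in Pa^-1
    return C * dP

# A 1 kPa acoustic overpressure gives a refractive-index change of order 1e-6
print(dn_from_dP(1e3))
```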
In solids, this relationship should also include the bulk modulus B of the
material, defined by B = −V dP/dV. Another related expression is the change in
density, dρ/ρ = −dV/V.
These make the relationship between the refractive index and the pressure in a solid

∆n/∆P = (ρ/B)(∆n/∆ρ)    (2.11)

Rewriting the relationship between refractive index and pressure in a solid, we have
∆P = ∆n/C, where C = (ρ/B)(∆n/∆ρ). The constant C can be calculated from the
experimental value of ρ dn/dρ given by Waxler et al (1979).

2.5 Abel Inversion Technique
The Abel inversion technique is a mathematical approach that relates the changes in
the optical path lengths to the changes in the refractive index of the medium; in this
case the changes are caused by laser interaction. It is based on the assumption that the
acoustic waves resulting from laser interaction propagate outwards from the center of
interaction in a spherically symmetrical manner. Therefore, the profiles of the refractive
indices, ∆n(r), are also assumed to be spherically symmetric.
Figure 2.1 shows a two-dimensional section through a disturbance with
coordinates (y, z) and dimensions (δy, δz). The total change in the optical path length
along the line A′A is the sum over the series of small elements along which the light
passes. As δy and δz approach zero, the total change in optical path length, ∆L, can be
written as
∆L(x, y) = ∫_{−(R² − y²)^{1/2}}^{+(R² − y²)^{1/2}} ∆n(r) dz    (2.12)
Figure 2.1 Cross section of the spherically symmetrical refractive index distribution
The change in the optical path length between the test and the reference arm of the
interferometer is caused by the change in the refractive index. According to Abel,
the relation can be expressed as

∆L(x, y) = 2 ∫_y^R [∆n(r) r / (r² − y²)^{1/2}] dr    (2.13)
The refractive index can be obtained from the inversion of the above Abel
equation, giving

∆n(r) = (1/π) ∫_r^R [∆L′(y) / (y² − r²)^{1/2}] dy    (2.14)
where ∆L′ is the first derivative of the optical path length, taken with respect to y.
With this technique the spherical region is split into m equal concentric zones,
labeled from j = 1 to j = m (Figure 2.2). It is then assumed that the change in the
refractive index, ∆nj, in each zone is constant. The spherical region is also divided into
equally spaced chordal regions labeled from i = 1 to i = n. The fringe shift, ∆Fi, is
assumed to be constant within each of the horizontal regions. Summation of the
differences in the optical path length within each chordal region over the small areas aij
gives the required values of ∆nj.
Figure 2.2 The zone and chordal divisions
In the outermost shell (i = m and j = m), only the change in the refractive index of the
last shell, ∆nm, contributes to the change of the optical path length which produces the
fringe shift ∆fm. According to Equation (2.5), the change in refractive index can be
written as

∆nm = λ∆fm/Lm,m    (2.15)

where Lm,m is the total length of the outermost chordal element. The physical
dimensions Lij of all the elements within the spherical disturbance are related to those aij
of the geometrical construction in Figure 2.2 by

Lij/aij = 2R/m    (2.16)
The change in the refractive index ∆nm in Equation (2.15) can be expressed in terms of
aij as

∆nm = λ′∆fm/am,m    (2.17)

where λ′ is a normalised wavelength defined by λ′ = λm/2R.
In the second chordal region, the difference in the optical path length, λ∆fm−1,
comes from the shells j = m and j = m − 1. Thus the change in refractive index ∆nm−1 can
be calculated using the value obtained for the first chordal region, giving:

λ∆fm−1 = ∆nm Lm−1,m + ∆nm−1 Lm−1,m−1    (2.18)

Since it is assumed that the change in refractive index within each zone ∆nj is
constant, its relationship with the fringe shift can be obtained from

∆nm−1 = (λ′∆fm−1 − ∆nm am−1,m)/am−1,m−1    (2.19)
Similarly, if this method is applied to all the chordal regions, the general equation for the
change in the refractive index of each chordal region will be

∆ni = (1/aii)(λ′∆fi − Σ_{j=i+1}^{m} aij ∆nj)    (2.20)

where the coefficients aij are the fractional chordal lengths and can be calculated from
the geometrical construction shown in Figure 2.2 using the Pythagorean relationship

aij = [j² − (i − 1)²]^{1/2} − [(j − 1)² − (i − 1)²]^{1/2}    (2.21)
The division of the field and the approximation of the geometrical path lengths through
the field may introduce errors into the calculations, especially for the smaller values of i.
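The onion-peeling scheme of Equations (2.17)–(2.21) can be sketched numerically as follows. The zone count, radius, wavelength and test profile are illustrative assumptions, not data from this work; the round trip through a forward projection checks the bookkeeping of the scheme:

```python
import math

# Numerical sketch of the onion-peeling Abel inversion, Eqs (2.17)-(2.21).

def chord_coeff(i, j):
    """Fractional chordal length a_ij from the Pythagorean relation, Eq. (2.21)."""
    return math.sqrt(j**2 - (i - 1)**2) - math.sqrt((j - 1)**2 - (i - 1)**2)

def abel_invert(delta_f, wavelength, R):
    """Recover zone values delta_n_j from chordal fringe shifts delta_f_i.

    delta_f[i-1] is the fringe shift of chordal region i (i = 1 innermost).
    Works inwards from the outermost shell, as in Eqs (2.17)-(2.20).
    """
    m = len(delta_f)
    lam_norm = wavelength * m / (2.0 * R)     # normalised wavelength lambda'
    dn = [0.0] * (m + 1)                      # 1-based storage for delta_n_j
    for i in range(m, 0, -1):
        outer = sum(chord_coeff(i, j) * dn[j] for j in range(i + 1, m + 1))
        dn[i] = (lam_norm * delta_f[i - 1] - outer) / chord_coeff(i, i)
    return dn[1:]

def forward(dn, wavelength, R):
    """Forward projection: fringe shifts produced by a known zone profile."""
    m = len(dn)
    lam_norm = wavelength * m / (2.0 * R)
    return [sum(chord_coeff(i, j) * dn[j - 1] for j in range(i, m + 1)) / lam_norm
            for i in range(1, m + 1)]

true_dn = [4e-6, 3e-6, 2e-6, 1e-6]            # synthetic test profile
shifts = forward(true_dn, 632.8e-9, 5e-3)
print(abel_invert(shifts, 632.8e-9, 5e-3))    # recovers true_dn
```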
2.6 Techniques for Phase Measurement
The primary interest in this work is to recover the phase, φ(x,y), of the fringes, as
it carries valuable information about the sample under test. During the
last 15 years, several techniques (analytical and digital) for the reconstruction of phase
from fringe patterns were developed.

The interferograms may be analyzed in a number of different ways. The two
most common methods are fringe analysis and the phase-mapping technique. There
are fundamental differences between these two methods, arising from the conditions
preferred for the part under test and for the interferometer. As a result, differences should
be expected in the calculated values.
However, with modern metrological methods, the phase, the absolute shape as
well as the deformation can be measured using digital image processing. There are various
techniques for digital image processing including the fringe tracking or the skeleton
method, the Fourier transform method, the carrier frequency method or spatial
heterodyning and the phase sampling or phase shifting method. All the methods have
significant advantages and disadvantages, so the design of the system depends mainly
on the parameter to be measured and on overcoming the major problems associated with
that parameter.
The fringe tracking or skeleton method is based on the assumption that the local
extrema of the measured intensity distribution correspond to the maxima and the
minima of a 2π-periodic function of the intensity (Osten and Juptner, 1997). The
automatic identification of these intensity extrema and the tracking of the fringes is
perhaps the most obvious approach to fringe pattern analysis, since this method
reproduces the manual fringe counting process. It can be time consuming, and
the results sometimes suffer from ambiguities resulting from the loss of
directional information in the fringe formation process.
In phase mapping incorporating the Fourier transform method, the digitized
intensity distribution is Fourier transformed, leading to a distribution in the
spatial frequency domain. After filtering, the frequency distribution is transformed by
the inverse Fourier transformation to produce a complex-valued function, and the phase
can be calculated from its arctan function (Osten and Juptner, 1997).
2.6.1 Fringe Analysis
This is the conventional way of analyzing the fringe shift of an interferogram in
order to obtain its phase. Only one interferogram is required in this technique. Initially
the fringes produced by the illumination laser source are straight and uniformly spaced.
Any disturbance, such as breakdown from laser interaction in the test arm of the
interferometer, would cause a difference in the optical path lengths relative to the reference
beam, thereby producing a distorted interference pattern. For laser-generated
breakdown in air, the interference pattern produced is spherical in nature. This is due to
the acoustic waves produced, which propagate spherically outwards from the emission
center (Yusof Munajat, 1997).
Based on the Abel model for the spherically symmetric nature of the wave
and the associated changes of the refractive index with its phase, the fringe analysis
technique uses the deviation of the fringes from their reference locations to determine
the related phase change.
In the normal course of events, the fringe shifts are measured either by eye or by
computer programs, which are able to follow the path of the fringes and thus determine
their deviation from linearity. The first technique tends to be difficult, inaccurate and
time-consuming, whereas the second method is often unreliable for realistic
interferograms where noise is present, with the result that the phase map often needs
touching up by hand afterward.
The displacement of a dark fringe ∆F1 is calculated from the central line COD,
which is the location of the original dark fringe before interaction (Figure 2.3). Data
collection can become a difficult task when the fringes are not straight. In some cases a
slight rotation of the image might be necessary to accommodate this. The data will then
be fed into the computer for phase analysis.
Figure 2.3 Fringe deviation measurements
The relation between the fringe displacement, ∆F, and the phase shift, ∆φ, is
given by

∆φ = 2π∆F(x, y)/F(x, y) = 2πf(x, y)    (2.22)

where F is the fringe separation and f is the fractional fringe shift.
With this technique, determining each of the fringe centers alone can introduce
errors. This can be complicated by poor fringe contrast, variation in the fringe
visibility, and image noise due to laser speckle and dust in the optical system.
Sometimes, a low-contrast fringe pattern from other surfaces may be superimposed
upon the fringes of interest.
With automated fringe center identification, the fringe centers can be located
either by thresholding the image to determine the fringe edges (a fringe center lies
between two edges) or by sensing intensity minima. Once the centers of a particular scan
line are found, they must be matched to the centers of the previous scan to maintain
continuity of the fringes. To do this, each newly located center should be matched with
the closest center of the previous scan. This method works as long as the fringes do not
change direction too abruptly in the interval between scan lines (Malacara, 1992).

Sometimes noise can cause an extra fringe center to be found between two
centers that have already been matched to adjacent fringes of the previous scan. These
extra centers should be rejected. It is also easier to determine the fringe centers
of smaller fringes.
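The nearest-center matching between scan lines described above can be sketched as follows. The function name, the rejection tolerance and the scan data are illustrative assumptions, not part of any published algorithm:

```python
# Sketch of scan-line fringe-centre matching: each newly located centre is
# paired with the closest centre of the previous scan line; candidates with no
# nearby predecessor are treated as noise and rejected.

def match_centres(prev, new, max_jump=5.0):
    """Pair each centre in `new` with the closest centre in `prev`.

    Returns a list of (new_centre, matched_prev_centre_or_None); centres whose
    nearest previous neighbour is farther than max_jump are flagged with None.
    """
    pairs = []
    for c in new:
        nearest = min(prev, key=lambda p: abs(p - c)) if prev else None
        if nearest is not None and abs(nearest - c) <= max_jump:
            pairs.append((c, nearest))
        else:
            pairs.append((c, None))   # candidate extra centre -> reject later
    return pairs

prev_scan = [10.0, 42.0, 75.0]
new_scan = [11.5, 41.0, 58.0, 76.2]   # 58.0 plays the role of a noise centre
print(match_centres(prev_scan, new_scan))
```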
Another possible way to locate fringe centers of complex interferograms is to
trace each fringe by tracking the intensity minima of the fringe image. However, this
method is not fully automated because it requires an operator to indicate each fringe to
be tracked.

More errors can occur when the background fringes are not straight.
Furthermore, these techniques produce quite large uncertainties when the fringe
visibility is poor or the fringe shifts are small. On the other hand, fringe analysis works
in almost every case, and it requires neither additional equipment such as phase shifting
devices nor additional manipulations in the reference fields.
2.6.2 Phase Mapping Techniques
This technique utilizes the intensity values of the interferogram produced. The
intensity at each point in the interferogram varies as a sinusoidal function of the
introduced phase shift with a temporal offset given by the unknown wavefront phase.
The following intensity model is the basis for phase analysis (Rastogi, 1997):

I(x, y, t) = I0(x, y)·{1 + V(x, y) cos[φ(x, y) + ϕ(x, y, t)]}·Rs(x, y) + RE(x, y, t)
           = a(x, y, t) + b(x, y) cos[φ(x, y) + ϕ(x, y, t)]    (2.23)

where I0(x, y) is the background intensity, V(x, y) denotes the visibility of the fringes,
Rs(x, y) is the multiplicative speckle noise, and RE(x, y, t) summarizes the influence of the
electronic noise components on the observed intensity distribution. In the simplified
version, the variables a(x, y, t) and b(x, y) represent the additive disturbances (background
intensity, electronic noise) and the multiplicative disturbances (visibility, speckle noise),
respectively. The term ϕ(x, y, t) is an additionally introduced reference phase that
distinguishes the different phase measuring techniques.
2.6.2.1 Fourier Transform Method
The Fourier transform method is based on fitting a linear combination of
harmonic spatial functions to the measured intensity distribution I(x, y) (Osten and
Juptner, 1997). This method also requires only one interferogram for its phase analysis.
The digitized intensity distribution is Fourier transformed, leading to a symmetrical
distribution in the spatial frequency domain. After an unsymmetrical filtering, which
also removes the region around zero frequency, the distribution is transformed by the
inverse Fourier transformation, resulting in a complex-valued image.
Neglecting the time dependency and avoiding a reference phase, Equation (2.23)
of the recorded intensity distribution is transformed to

I(x, y) = a(x, y) + c(x, y) + c*(x, y)    (2.24)

where c(x, y) = ½ b(x, y)·exp[iφ(x, y)]. The symbol * denotes the complex conjugate.
A two-dimensional Fourier transformation of Equation (2.24) gives
I(u, v) = A(u, v) + C(u, v) + C*(u, v)    (2.25)

with (u, v) being the spatial frequencies and A, C, and C* the complex Fourier amplitudes.
Since I(x, y) is a real-valued function, I(u, v) is a Hermitian distribution in the spatial
frequency domain:

I(u, v) = I*(−u, −v)    (2.26)

The real part of I(u, v) is even and the imaginary part is odd. Consequently, the
amplitude spectrum |I(u, v)| is symmetric with respect to the direct current (dc) term
I(0, 0). The term A(u, v) represents this zero peak and the low frequency components
that originate from the background modulation I0(x, y). C(u, v) and C*(u, v) carry the
same information, as is evident from Equation (2.26). Using a selected bandpass filter,
the unwanted background a(x, y) can be eliminated together with the mode C(u, v) or
C*(u, v). If, for instance, only mode C(u, v) is preserved, the amplitude spectrum is no
longer Hermitian and the inverse Fourier transform returns a complex-valued c(x, y).
The phase φ(x, y) can then be calculated from

φ(x, y) = arctan{Im c(x, y) / Re c(x, y)}    (2.27)
Taking into account the signs of the numerator and the denominator, the principal
value of the arctan function, having a continuous period of 2π, is reconstructed. As a
result a mod-2π wrapped phase profile, the so-called saw-tooth map, is obtained. This
stage in phase analysis is called the wrapping process, which results in phase values in
radians. At this point it is impossible to tell the true values of the phase change
involved.

In order to obtain a meaningful value of the phase, this wrapped phase will need
to be unwrapped to produce a continuous phase change. Unwrapping is the process of
recovering the correct phase values which were lost during the wrapping process.
Suitable computer software is needed for this purpose, and a certain computer
programming skill is required to obtain a continuous, meaningful phase profile for any
particular location in the interferogram.
Takeda et al. (1982) described a Fourier transform method that analyzed one-dimensional
slices of an interferogram. Macy (1983) extended this method to two
dimensions, compared its accuracy to the sinusoidal fitting method, and reported an
accuracy of about λ/50. This method was further refined and analyzed by Womack
(1984), and by Roddier and Roddier (1987), who were able to map the complex fringe
visibility in several types of interferograms. In 1985, Nugent also extended Takeda's
work (1982) to eliminate significant errors introduced by the digitization of the
interferogram and by the non-linearities in the recording film. This he did by using a
minimization algorithm.
From the computational point of view, a variety of versions of the method exist, all
sharing the ability to eliminate the background and the contrast terms by trigonometric
operations on the acquired images (Facchini and Zanetta, 1995).
2.6.2.2 Carrier-Frequency Method
The carrier-frequency method is another method for phase measurement making
use of the Fourier transform technique. A certain amount of tilt between the reference
and the test wavefronts produces fringes of frequency f0, which, in this context, will be
treated as a spatial carrier frequency. For simplicity, assume that the tilt is directed
along one axis. The recorded intensity distribution is given by

I(x, y) = a(x, y) + b(x, y) cos[δ(x, y) + 2πf0x]
        = a(x, y) + c(x, y) exp(2πif0x) + c*(x, y) exp(−2πif0x)    (2.28)

with c(x, y) as defined in Equation (2.24).
This method can be classified as a spatial phase shifting technique. An FFT algorithm
can be used to separate the phase from its reference phase. The Fourier transform of the
resulting intensity distribution gives

I(u, y) = A(u, y) + C(u − f0, y) + C*(u + f0, y)    (2.29)

Since the spatial variations of a(x, y), b(x, y) and δ(x, y) are slow compared to the
spatial carrier frequency f0, the Fourier spectra A, C, and C* are well separated by the
carrier frequency f0. C and C* are placed symmetrically to the dc term and centered
around u = f0 and u = −f0. Only one of the two sidelobes is necessary to calculate the
phase. By means of digital filtering, the sidelobe C(u − f0, y) is filtered out and translated
by f0 to the origin of the frequency axis in order to remove the carrier frequency. C* and
A(u, y) are eliminated by the bandpass filtering, and C(u, y) is obtained. By
applying the inverse FFT, c(x, y) is obtained, and the phase can be calculated using Equation
(2.27). Since the phase is wrapped into the range from −π to π, it has to be corrected by
using a phase unwrapping algorithm (Osten and Juptner, 1997).
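A minimal one-dimensional sketch of the carrier-frequency procedure just described; the line length, carrier frequency and test wavefront are illustrative assumptions, and a plain DFT stands in for the FFT for clarity:

```python
import cmath
import math

# 1-D sketch of the carrier-frequency method, Eqs (2.28)-(2.29): fringes with a
# spatial carrier f0 are transformed, the sidelobe at +f0 is selected and
# shifted to the origin, and the phase follows from Eq. (2.27).

N, f0 = 128, 16                      # samples per line; carrier in cycles/line

def dft(x):
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

wavefront = [0.8 * math.sin(2 * math.pi * n / N) for n in range(N)]  # test phase
I = [1.0 + 0.5 * math.cos(2 * math.pi * f0 * n / N + wavefront[n])
     for n in range(N)]              # Eq. (2.28) along one axis

X = dft(I)
# Keep the sidelobe around +f0 and translate it by f0 to the origin; the dc
# term A and the conjugate sidelobe C* fall outside the band and are zeroed.
C = [X[(k + f0) % N] if abs((k + N // 2) % N - N // 2) < f0 // 2 else 0.0
     for k in range(N)]
c = idft(C)                          # complex-valued c(x), as in Eq. (2.27)
recovered = [cmath.phase(v) for v in c]
print(max(abs(r - w) for r, w in zip(recovered, wavefront)))  # small residual
```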
2.6.2.3 Phase Shifting Interferometry (PSI)
The concept behind phase shifting interferometry is that a time-varying phase
shift is introduced between the reference wavefront and the test or sample wavefront in
the interferometer. The phase is made to vary in some known manner such as by
changing it in discrete steps (stepping) or changing it linearly with time (ramping). A
time-varying signal is then produced at each measurement point in the interferogram
and the relative phase between the two wavefronts at that location is encoded in these
signals.
A powerful advance came in the mid-1970s, when phase-shifting interferometry
(PSI) was developed, supported by more robust computers and sophisticated
algorithms. The techniques used for detection and measurement of phase can be divided
into two categories: electronic and analytical. To determine phase electronically,
hardware such as zero-crossing detectors, phase-lock loops and up-down counters
(Wyant and Shagam, 1978) is used to monitor the interferogram intensity as the phase
is modulated. In 1985, Cheng and Wyant devised some practical methods to calibrate
the phase shifter in PSI. They used a piezoelectric transducer (PZT) as the phase
shifter, which has a nonlinearity of < 1%.
In digital phase shifting interferometry (Hariharan et al., 1987), the phase
difference between the two interfering beams is varied in a known manner, and
measurements are made of the intensity distribution across the interferogram
corresponding to at least three different phase shifts. A PZT was again the phase shifter
used here. If the values of those phase shifts are known, it is possible to calculate the
original phase difference of the interfering beams.
Referring to Equation (2.23), if ϕ is shifted, for instance temporally in n steps of
ϕ0, then the intensity values In(x, y) are measured for each point in the fringe pattern:

In(x, y) = a(x, y) + b(x, y) cos[φ(x, y) + ϕn]    (2.30)

with ϕn = (n − 1)ϕ0, n = 1, …, m, m ≥ 3, and, for example, ϕ0 = 2π/m.
If the reference phase is equidistantly distributed over one or a number of
periods, the basic equation for the phase sampling is

φ(x, y) = arctan{[Σ_{n=1}^{m} In(x, y)·sin ϕn] / [Σ_{n=1}^{m} In(x, y)·cos ϕn]}    (2.31)

Generally, only three intensity measurements are required to calculate the three
unknown components in the intensity equation: a(x, y), b(x, y) and φ(x, y). However,
with m > 3, a better accuracy can be ensured using a least squares fitting technique.
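The m-step formula of Equation (2.31) can be sketched as follows. Note that, for the cosine model of Equation (2.30), the sine-weighted sum evaluates to −(m/2)·b·sin φ, so this sketch applies a minus sign inside atan2 to return φ directly; the test values are illustrative assumptions:

```python
import math

# Sketch of the phase-sampling formula, Eq. (2.31), for equidistant reference
# phases phi_n = (n - 1) * 2*pi/m distributed over a full period.

def phase_from_steps(intensities):
    """Recover phi from I_n = a + b*cos(phi + phi_n), m = len(intensities) >= 3."""
    m = len(intensities)
    num = sum(I * math.sin(2 * math.pi * k / m) for k, I in enumerate(intensities))
    den = sum(I * math.cos(2 * math.pi * k / m) for k, I in enumerate(intensities))
    # For this cosine model the sine sum equals -(m/2)*b*sin(phi), hence the
    # minus sign; atan2 also resolves the quadrant, cf. Eq. (2.27).
    return math.atan2(-num, den)

phi_true = 1.1
frames = [2.0 + 0.7 * math.cos(phi_true + k * 2 * math.pi / 4) for k in range(4)]
print(phase_from_steps(frames))  # ≈ 1.1
```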
Kinnstaetter et al. (1988) stated that the accuracy of phase shifting
interferometers is impaired by mechanical drifts and vibrations, intensity variations,
non-linearities of the photoelectric detection device and, most seriously, by inaccuracies
of the reference phase shifter. They detected and diagnosed these systematic error
sources with the help of a Lissajous display technique. The phase shifter inaccuracies
were eliminated by an iterative process of the self-calibrating algorithm they developed,
which relies solely on the interference pattern and its Fourier sums.
2.6.2.4 Phase-shifting Algorithms
Several phase-shifting algorithms for the determination of the phase of the
wavefront have been published (Schwider, 1983; Wyant, 1985; Hariharan, 1987). Phase
stepping and integrating buckets seem to be the most common methods. These
require the analysis of many interferograms as the reference phase is varied. There are
two basic methods in phase-stepping interferometry: temporal methods, by which the
interferograms are recorded one after the other, and spatial phase measurement
methods, by which the interferograms are recorded simultaneously but separated in
space (phase).
The only difference between stepping the phase and integrating the phase is the
reduction in the modulation of the interference fringes after detection. If the phase
shifts were stepped and not integrated, the sinc functions would have a value of one.
Therefore, phase stepping is a simplification of the integrating bucket technique. Since
this technique relies upon the modulation of the intensities as the phase is shifted, the
phase shift per exposure should be between 0 and π (Osten and Juptner, 1997).
The general intensity equation can also be written in the form

I(x, y) = I0(x, y){1 + γ(x, y) cos[φ(x, y)]} + η    (2.32)

where I0(x, y) is the background intensity, γ(x, y) is the modulation of the interference
fringes, φ is the wavefront phase, and η is a noise factor which can easily be filtered out
by the FFT method.
The starting value of the reference phase is often chosen to produce a simpler
mathematical expression for the measured wavefront phase. In practice, there is no
need to know the absolute reference phase; what is important for the algorithms is the
phase shift between measurements. For example, by defining the starting position of
the reference mirror to be the first required phase value, the rest would follow from
there.
As mentioned earlier, the minimum number of interferograms needed to solve
for the phase shift is three. Several researchers came up with their algorithms for three
interferograms analysis, each trying to overcome certain errors and to improve the
accuracy of the others with their own techniques.
Wyant et al. (1984) and Bhushan (1985) suggested a three-step algorithm with a
phase step of 90° and a phase offset of 45°. The phase offset was introduced to simplify
the equations and for computational convenience.
The values of the phase shift chosen are π/4, 3π/4 and 5π/4:

I1(x, y) = I0{1 + γ cos[φ(x, y) + π/4]}
         = I0{1 + γ[cos φ(x, y) − sin φ(x, y)]/√2}

I2(x, y) = I0{1 + γ cos[φ(x, y) + 3π/4]}
         = I0{1 + γ[−cos φ(x, y) − sin φ(x, y)]/√2}

I3(x, y) = I0{1 + γ cos[φ(x, y) + 5π/4]}
         = I0{1 + γ[−cos φ(x, y) + sin φ(x, y)]/√2}    (2.33)

The resulting phase change will be

φ(x, y) = arctan[(I3 − I2)/(I1 − I2)]    (2.34)
Creath (1988) also developed a three-frame technique with an equal phase step
of size ϕ0. The three equations produced are:

I1(x, y) = I0(1 + γ cos[φ(x, y) − ϕ0])
I2(x, y) = I0(1 + γ cos[φ(x, y)])
I3(x, y) = I0(1 + γ cos[φ(x, y) + ϕ0])    (2.35)

Solving these equations using trigonometric identities gives the wavefront phase at
each location as

φ(x, y) = arctan{[(1 − cos ϕ0)/sin ϕ0]·[(I1 − I3)/(2I2 − I1 − I3)]}    (2.36)

The two phase-step sizes that are commonly used with the three-step algorithms
are 90° and 120°. When ϕ0 = π/2, the phase equation becomes

φ(x, y) = tan⁻¹[(I1 − I3)/(2I2 − I1 − I3)]    (2.37)
When ϕ0 = 2π/3, the equation becomes

φ(x, y) = tan⁻¹[√3(I1 − I3)/(2I2 − I1 − I3)]    (2.38)
Many other algorithms were produced, such as the four-step and the five-step
algorithms. Juptner et al. (1983) presented a technique which was independent of the
amount of phase shift. The solution was based on Equation (2.23) for the intensity
distribution with an unknown amount of additional phase shift, ϕ0. In this case at least
four interferograms are needed to solve the equation system for the four unknown
quantities. The additional phase shift ϕ0(x, y) is calculated as a function of the point
P(x, y). This allows a control over the phase shifter and over the reliability of the
evaluation, which might be disturbed by noise. The main variables of interest are ϕ0(x, y)
and φ(x, y):

ϕ0(x, y) = arccos{(I1 − I2 + I3 − I4)/(2[I2 − I3])}    (2.39)

φ(x, y) = arctan{[I1 − 2I2 + I3 + (I1 − I3) cos ϕ0 + 2(I2 − I1) cos² ϕ0] /
          ((1 − cos² ϕ0)^{1/2} [I1 − I3 + 2(I2 − I1) cos ϕ0])}    (2.40)
In 1997, Yusof Munajat devised a method of phase measurement by comparing
two interferograms which were arranged to be 90° out of phase with each other,
before and after sample interaction with the laser. The filtered intensities of the
interferograms produced before laser interaction were given as

I1 = γI0 sin φ
I2 = γI0 cos φ    (2.41)
and he obtained the phase before laser interaction:

φ = tan⁻¹(I1/I2)    (2.42)
After laser interaction with samples, the filtered intensity equations carrying the phase
change became

I3 = γI0 sin(φ + ∆φ)
I4 = γI0 cos(φ + ∆φ)    (2.43)
giving

φ + ∆φ = tan⁻¹(I3/I4)    (2.44)
Thus, subtracting the before equation from the after equation gave him the required
phase change:

∆φ = tan⁻¹(I3/I4) − tan⁻¹(I1/I2)    (2.45)
Another solution given by Yusof Munajat (1997) was the simultaneous analysis
of the four interferograms using trigonometric relationships, which resulted in

∆φ = tan⁻¹[(I2I3 − I1I4)/(I1I3 + I2I4)]    (2.46)
Hariharan et al. (1987) published a five-frame technique that uses π/2 phase shifts
(m = 5; ϕ = −π, −π/2, 0, π/2, π) to minimize phase shifter errors:

φ = arctan[2(I2 − I4)/(2I3 − I5 − I1)]    (2.47)
There are many more algorithms produced by researchers, each tackling certain
problems and preserving certain values. In most cases, however, phase shifting means each
image is captured individually, one after the other, with its phase shifted by some
intended value. This allows different time-dependent factors to be embedded in the
images produced, thereby introducing errors between frames.
2.6.2.5 Phase Unwrapping
In phase mapping methods, the initial phase values obtained from the intensity
distributions of the fringe patterns are wrapped into values ranging from −π to π. The
reconstruction of the continuous phase distribution is called the phase unwrapping
process. According to Robinson (1993), phase unwrapping is the process by which the
absolute value of the phase angle of a continuous function that extends over a range of
more than 2π (relative to a predefined starting point) is recovered. This absolute value
is lost when the phase term is wrapped upon itself with a repeat distance of 2π due to
the fundamental sinusoidal nature of the wave functions used in the measurement of
physical properties.
Each pixel in the wrapped phase map is considered to be a vertex in a graph of
confidence. The problem is to construct a path for unwrapping, which maximizes
confidence. Each pixel has four neighbours, and a corresponding edge in the graph for
each pixel neighbour. Phase unwrapping is normally carried out by successive
comparisons of neighboring pixels. Each pixel has only a small phase difference from
its neighbor, except where wrapping has occurred, where there can be a jump of about
2π.
In this sense, phase unwrapping is the consequence of the fringe counting
problem in fringe pattern processing. When a large discontinuity occurs in the
reconstruction, a 2π or multiples of 2π is added to the adjoining data to remove the
discontinuity. So, the key to phase unwrapping is the reliable detection of the 2π phase
jumps.
The unwrapping process according to Kreis (1986) is:

n1 = 0

For r = 2, 3, …, 256:

nr = nr−1        if |φr − φr−1| < π
nr = nr−1 + 1    if φr − φr−1 ≤ −π
nr = nr−1 − 1    if φr − φr−1 ≥ π    (2.48)

φc,r = φr + 2πnr,   r = 1, 2, …, 256

The mapping of φc,r gives a continuous phase profile that is no longer limited to the
−π to π range.
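A minimal sketch of the one-dimensional unwrapping rule of Equation (2.48); the wrapped test ramp is an illustrative assumption:

```python
import math

# Sketch of Eq. (2.48): whenever the phase jumps by about 2*pi between
# neighbouring samples, an integer multiple of 2*pi is added so that the
# profile becomes continuous.

def unwrap_1d(phi):
    """Return the continuous phase phi_c[r] = phi[r] + 2*pi*n[r]."""
    n = 0
    out = [phi[0]]
    for prev, cur in zip(phi, phi[1:]):
        d = cur - prev
        if d <= -math.pi:
            n += 1           # downward 2*pi jump: step the counter up
        elif d >= math.pi:
            n -= 1           # upward 2*pi jump: step the counter down
        out.append(cur + 2 * math.pi * n)
    return out

# A linear phase ramp wrapped into (-pi, pi] is recovered exactly
true = [0.05 * r for r in range(200)]
wrapped = [math.atan2(math.sin(t), math.cos(t)) for t in true]
unwrapped = unwrap_1d(wrapped)
print(max(abs(u - t) for u, t in zip(unwrapped, true)))  # ≈ 0
```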
Ghiglia et al. (1987) provided a scheme for a two-dimensional unwrapping
algorithm with a sequence consisting of three operations: differencing, thresholding and
integrating. Provided that neighboring phase samples satisfy the relations

−π ≤ ∆iφ(i, j) < π,  with ∆iφ(i, j) = φ(i, j) − φ(i − 1, j)
−π ≤ ∆jφ(i, j) < π,  with ∆jφ(i, j) = φ(i, j) − φ(i, j − 1)    (2.49)

over the two-dimensional array, both ways of unwrapping, along columns or along lines, or
any other combination, yield identical results. Thus, the process of unwrapping is path
independent. Otherwise, inconsistent values exist in the wrapped phase field.
The basic assumption for the validity of the scheme is that the phase between
any two adjacent pixels does not change by more than π. This limitation in the
measurement range results from the fact that sampled imaging systems with a limited
resolving power are used.
Yusof Munajat (1997) unwrapped the phase distribution from his 256 × 256 pixel
interferograms as follows:
m1 = 0

For r = 2, 3, …, 256:

mr = mr−1        if |∆φr − ∆φr−1| < π/2
mr = mr−1 + 1    if ∆φr − ∆φr−1 ≤ −π/2
mr = mr−1 − 1    if ∆φr − ∆φr−1 ≥ π/2    (2.50)

∆φc,r = ∆φr + πmr,   r = 1, 2, …, 256
In two- or three-dimensional phase mapping, the unwrapping process requires
some modification:

m1,1 = 0

m1,s = m1,s−1       if |φ1,s − φ1,s−1| < π/2
m1,s = m1,s−1 + 1   if φ1,s − φ1,s−1 ≤ −π/2        (2.51)
m1,s = m1,s−1 − 1   if φ1,s − φ1,s−1 ≥ π/2
for s = 2, 3, …, 256

mr,s = mr−1,s       if |φr,s − φr−1,s| < π/2
mr,s = mr−1,s + 1   if φr,s − φr−1,s ≤ −π/2
mr,s = mr−1,s − 1   if φr,s − φr−1,s ≥ π/2
for r = 2, 3, …, 256

∆φcr,s = φr,s + πmr,s,    r = 1, 2, …, 256;  s = 1, 2, …, 256
This gives the phase profile along the x-axis for the value of y at the selected value of s.
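The row-then-column idea can be sketched with NumPy's standard unwrap, which uses the conventional π jump threshold rather than the π/2 gradient test used above (an illustrative sketch that assumes a consistent, noise-free wrapped phase field):

```python
import numpy as np

def unwrap_2d(phi):
    """Row-then-column 2-D unwrapping sketch: unwrap the first row,
    then unwrap down each column starting from that corrected row.
    Uses NumPy's standard pi jump threshold."""
    out = phi.copy()
    out[0, :] = np.unwrap(out[0, :])   # first pass: along s, across row 1
    return np.unwrap(out, axis=0)      # second pass: along r, down each column
```

For a consistent (path-independent) wrapped field, any such row/column order gives the same result, as noted for the Ghiglia et al. (1987) scheme.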
Yusof Munajat (1997) also suggested that, to improve the quality of the
constructed image, the unwrapping process should involve the two neighboring pixels
to the left and right of the chosen pixel, as well as the neighboring corner pixels, for a
particular cross-section of the phase map. The common algorithms presently used
compare the local gradients around the pixel being unwrapped as a criterion to
determine the search path. However, the actual implementation of the search and the
results of the unwrapping process can vary considerably from one algorithm to another.
2.6.2.6 Error in Phase Unwrapping
Phase unwrapping is the most demanding step in the recovery of the phase
change of the sample, and noise and errors are almost unavoidable. The error
sources that most frequently arise in a fringe pattern are:

a) Noise: electronic noise (produced during the acquisition of the image) and
speckle (due to the reflection of a coherent light beam from rough surfaces).

b) Low-modulation points, due to areas of low visibility. These appear as
fluctuations in the phase modulo 2π, which may introduce errors in the phase
unwrapping process.

c) Abrupt phase changes due to object discontinuities.

d) Violation of the sampling theorem. The fringe pattern must be sampled
correctly to recover all the information from the phase modulo 2π; at least
three sampling points per fringe are needed for phase sampling interferometry.

The need for noise-immune algorithms has thus led to the exploration of a large
number of unwrapping strategies.
Various noise-immune algorithms have been proposed to cope with the
inconsistent points. Typical examples are the region-oriented method of
Gierloff (1987), the cut-line method of Huntley (1989), the spanning-tree method
of Judge et al. (1992), the pixel-ordering technique of Ettemeyer et al. (1989), the local
phase information masking of Bone (1991), the line detection method of Andra et al.
(1991) and Lin et al. (1994), and the distributed processing method using cellular
automata by Ghiglia et al. (1987) or neural networks by Takeda et al. (1993) and
Kreis et al. (1995). In 1995, Cusack, Huntley and Goldrein devised an algorithm for
unwrapping noisy phase maps based on the identification of discontinuity sources that
mark the start or end of a 2π phase discontinuity. Branch cuts between sources act as
barriers to unwrapping, resulting in a unique phase map that is independent of the
unwrapping route.
In 1991, Bone presented an unwrapping procedure using local phase
information masking to provide better consistency in phase unwrapping. Strobel
(1996) presented a phasor image-processing concept for filtering, visualization,
masking and unwrapping of interferometric phase maps.

In 1996, Ettl and Creath linked unwrapping performance to the gradient at the first
failure of the algorithm. The gradient at first failure, plotted versus the
signal-to-noise ratio, was used as an indicator of which algorithm to use in a given
situation, without the need for user intervention during measurement and calculation.
Charret and Hunter (1996) presented a robust method of phase unwrapping
designed for use on noisy phase images produced by a four-step phase stepping
algorithm using a speckle interferometer. They found that the spatial resolution of their
algorithm was equal to the plane-fitting domain size, which in turn depends on the
level of noise in the image and the performance of the filtering process on the raw data.
In the same year, 1996, Servin, Malacara and Cuevas produced a technique for
unwrapping subsampled phase maps obtained from a standard phase-shifting method.
The technique estimated the wrapped local curvature of the subsampled phase map,
which was then low-pass filtered with a free-boundary low-pass filter to reduce phase
noise. Finally, the estimated local curvature of the wavefront was integrated by a
least-squares technique to obtain the required continuous wavefront.
Harraez et al. (1996) also presented a new approach to the construction of a
simple and fast algorithm for two-dimensional unwrapping with considerable
potential. They were interested in recovering only those pixels that are error free, not
the whole image, considering it better to mask one valid point, and hence lose it, than
to unwrap one point containing erroneous information and so create errors in the final
result.
Even though numerous phase unwrapping algorithms have been introduced in
recent years, most have been shown to handle only certain error sources and to succeed
only on certain phase maps. No single algorithm can do everything well.
Some algorithms process the whole phase map at once (path-independent or global
algorithms), whereas others process the phase map pixel by pixel (path-dependent or
local algorithms).
In this work, the signal was filtered of low- and high-frequency noise before the
wrapping process was carried out. The unwrapping procedure for the noise-free
wrapped phase included line-by-line scanning through the image, detecting the
pixels where phase jumps occur and noting the direction of each jump. Only then
could the integration of the phase be done, by adding or subtracting 2π at these
pixels.
There is currently no standard quantity with which to compare the reliability of an
algorithm and its dependence on the different gradients in the phase map. Various
strategies have been proposed to avoid unwrapping errors in phase maps, but so far
there is no general approach that avoids all types of errors without user intervention,
especially when objects of complex shape undergo discontinuous deformation. No
single algorithm can do everything well, and the more general the algorithm, the longer
it takes to calculate an unwrapped phase map.
2.6.2.7 General Error Sources and Measuring Limitation in PSI
There are numerous sources of errors that affect the accuracy of phase
measurement as determined by the basic PSI algorithms. Some of the PSI algorithms
are more sensitive to a particular error source than others while some errors are
fundamental and affect the accuracy of all the algorithms.
Error sources generally fall into three categories:
1 Those associated with the data acquisition process
2 Environmental effects such as vibration and air turbulence
3 Those associated with defects in optical and mechanical design and fabrication
The data acquisition process includes errors in the phase-shifting process,
nonlinearities in the detection system, the amplitude and frequency stability of the
source, and quantization errors arising in the analog-to-digital conversion process.
These will be mentioned again in Chapter 6.
Environmental effects such as vibrations and air turbulence are taken care of
in the system designed for this project by the high-speed imaging system. Other errors
could also come from the interferometer optics itself; for example, rays from an
imperfect wavefront do not retrace themselves even when reflected from a perfectly
spherical or flat surface. When the rays do not retrace themselves, they shear.

Another precaution necessary to obtain high-accuracy, high-precision
measurements is that the whole system must be situated in clean, dust-free
surroundings. Dust particles on the optical components traversed by the light would
surely affect the shape of the fringes.
2.7 Phase Measuring Interferometry versus Fringe Analysis
Phase measuring interferometry is a dynamic process that calculates the
interference phase at every point in the interference pattern. These interference phases
are then connected together to form a map of the event. Phase measuring
interferometry is considered the more accurate technique for several reasons: higher-density
sampling of the interference pattern, uniform sampling of the pattern, better phase
resolution (< 0.001 fringe sensitivity), and the fact that measurements may be made at
the null alignment condition, which minimizes optical errors due to imaging distortion
and ray-mapping (shear) errors.
Fringe analysis relies on comparing the shape of the interference fringes to an
ideal set of fringes (usually a set of straight, parallel, equally spaced fringes). The shape
of the fringes is found by locating the centers of the dark fringes.

In contrast, fringe analysis samples data only along the dark fringes, which is
both non-uniform and low in density. The risk is that localized features (bumps and
holes) in the interference pattern are missed, and the phase resolution is only
> 0.001 fringe. In fringe analysis, the user must also make a trade-off between
interferometer alignment (few fringes) and sampling of the part (many fringes).
Understanding the differences between these two measurements makes it possible to
choose the technique suitable for the present system.
2.8 Simultaneous Phase Measurement Interferometry
With the knowledge and understanding gathered of the problems that can be
encountered in phase measurement interferometry, the author decided on a method to
remedy some of the problems associated with the single-interferogram phase mapping
method. The idea is to implement the phase mapping technique, using Fourier analysis
with a phase-shifting algorithm, on a system of three images. These images are preset
to be shifted in phase by 90° before the images of the laser interaction are captured
simultaneously.
A three-output interferometer for simultaneous imaging appeared to be the
solution to the ambiguity problem and to the numerous time-dependent factors often
associated with interferometric analysis. In this project, the Mach Zehnder
interferometer will be modified to produce three parallel outputs, each 90° out of
phase with the next. These phase values are required for quadrature imaging and will
be set as one of the predetermined parameters of this system. Once these initial phase
differences are set, they should be maintained throughout the measurement.
With the controlled-and-synchronized mechanism designed in this work, the three
images of the laser interaction will be captured simultaneously with a single laser
pulse. The phase-modulated intensity distribution patterns are spatially out of phase
with one another by 90°, as previously arranged. This is what distinguishes this system
from other available phase-shifting methods, in which the phase shifts are obtained
using a phase shifter and the images are produced one after another.
The system to be designed in this project will contain no moving parts, and
measurements can be made without any user intervention; this reduces the effects of
vibrations and air turbulence. Coupled with high-speed photography, this system
should be able to eliminate most of the time-dependent factors associated with phase
measurement interferometry.
These intensity distributions will be Fourier transformed to the spatial frequency
domain for noise filtering. The admissible spatial frequencies of these harmonic
functions are defined by the applied cut-off frequencies. A high spatial carrier
frequency introduced in the system will be separated by this FFT filtering. This is
possible because the phase function varies slowly compared with the variation
introduced by the carrier.
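A one-line sketch of this carrier separation, assuming a 256-pixel image line and illustrative cut-off indices (the actual cut-offs depend on the carrier frequency introduced in the system):

```python
import numpy as np

def fft_bandpass(row, k_lo, k_hi):
    """Keep only the spatial-frequency band k_lo..k_hi of one
    interferogram line, isolating the carrier lobe from the
    low-frequency background and high-frequency noise."""
    spec = np.fft.fft(row)
    k = np.fft.fftfreq(len(row)) * len(row)   # integer frequency indices
    mask = (k >= k_lo) & (k <= k_hi)          # one-sided band around the carrier
    return np.fft.ifft(spec * mask)           # complex filtered signal
```

Because only one side-lobe is retained, the phase modulation can then be read off the complex result after removing the known carrier term.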
The filtered intensity distributions of the three interferograms will be used to
extract the phase using the appropriate algorithms (Equations (2.34) and (2.37)). The
resulting phase change of the interferogram will be mapped out to produce a total phase
profile for the whole image. This change in phase is then related to the changes in
the refractive index, density and pressure of the medium involved.
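Equations (2.34) and (2.37) are not reproduced here; for three frames in quadrature, Ik = A + B cos[φ + (k − 1)·90°], a standard three-step estimate consistent with the quadrature arrangement described is φ = arctan[(I1 + I3 − 2I2)/(I1 − I3)]. A sketch:

```python
import numpy as np

def phase_from_three(i1, i2, i3):
    """Standard three-step phase estimate for frames shifted by 90 deg,
    I_k = A + B*cos(phi + (k - 1)*pi/2); returns phi wrapped to (-pi, pi].
    I1 + I3 - 2*I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi)."""
    return np.arctan2(i1 + i3 - 2.0 * i2, i1 - i3)
```

Using arctan2 rather than a plain arctan keeps the correct quadrant, so the background term A and modulation B cancel without needing to be known.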
CHAPTER 3
METHODOLOGY
3.1 Introduction
The general layout of the interferometry system is shown in Figure 3.1.
Figure 3.1 The general layout of the system (Nitro-dye and Nd:YAG lasers, pin-hole, beamsplitters BS1–BS4, mirrors M1 and M2, sample and reference arms, CCD cameras CCD1–CCD3, trigger-and-synchronize unit, PC and frame grabber).
The system consisted of the interferometry unit, the video imaging unit and the
trigger-and-synchronize unit that linked these units to the data collection and
processing unit. The whole system was set up on an optical table standing in
containers filled with sand to absorb floor vibrations, ensuring the successful capture
of the events to be studied.
The interferometer was a modified Mach Zehnder interferometer consisting of
four beamsplitters (BS), two mirrors (M), polarizers, analyzers and spatial density
filters. The video imaging unit consisted of three CCD cameras placed at the three
interferometer outputs and three frame grabbers slotted into a computer. The trigger-and-synchronize
unit served as a controller for the start of the activities and for the delay
between the two laser pulses (Nd:YAG and Nitro-dye); it also formed the link
connecting image production and image capture.
The dye laser light was the one undergoing interference and producing the fringe
patterns. The Nd:YAG laser was used to create the disturbance in the medium that
made the fringes deviate from their initial locations. This disturbance initially caused
the formation of shock waves, which attenuated to acoustic waves soon after. It was the
propagation of the acoustic wave that changed the refractive index and the
density of the medium; this in turn changed the optical path length of the light,
resulting in the shifting of the fringes of the interference pattern. A continuous
helium-neon laser was also used in the early stage of system development to assist
the initial alignment of the whole set-up.
The three images, at a certain time delay between interaction and capture, were
simultaneously captured with a single laser pulse. Simultaneous image capture
means that the same event is represented three times, each time with different phase
information; through one algorithm, they produce one phase profile of
the event. Three is the minimum number of images needed to produce the phase change
while reducing the ambiguity problem faced in single-interferogram analysis.
The images were captured by DT3155 frame grabbers using DT Acquire
software and stored on the computer hard disk. These images could be printed out or
retrieved for analysis at any convenient time. Mathcad 7 and Global Lab Image were
the software packages used in this phase analysis.
To ensure that turbulence and vibration did not affect the phase measurement, a
high-speed imaging system using the dye laser was used. This freezes any kind of
activity to within 1 ns, reducing these environmental effects and other time-dependent
noise.
Photography techniques such as Schlieren and shadowgraphy could also be
implemented with this system to provide visual images of the propagation of waves and
the dynamics of cavitation. These two techniques produced images mainly for
qualitative support of the interferometric phase analysis; for quantitative analysis, the
images had to be photographed by the interferometric techniques.
At a certain time delay, the phase information of a laser-interaction event
supplied by the three interferograms was extracted with a single phase algorithm. The
phase changes were calculated and 3-D images of these changes were produced with the
aid of computer programming. The images produced by these programs could be
viewed from any location and angle for thorough inspection of the changes that took
place in the medium. Thus, the system built in this work provided both the required
quantitative values and the qualitative observations.
3.2 The Laser
Three lasers were used in this work, namely the Nitro-dye, the Nd:YAG and the
Helium-Neon laser. The Nitro-dye was for the production of the interference fringes
and for the fast photography function. The Nd:YAG was the laser producing the
disturbance, by laser breakdown, which distorted the initially straight and parallel
fringes produced by the dye laser. The helium-neon was the continuous laser used in
the initial stage of laser alignment of the interferometer.
3.2.1 The Nd:YAG Laser
This is a four-level laser system used to generate the acoustic waves for this
work. The host medium is yttrium aluminium garnet (Y3Al5O12), with the rare-earth
ion neodymium (Nd3+) present as an impurity providing the energy levels for both
the laser transitions and the pumping. Population inversion was created by pumping
the Nd3+ ions with intense flashes of the xenon flashlamp. Q-switching provided
short, intense bursts of radiation (6 ns, 250 mJ). The pulse energy output depended on
the voltage supplied to the flashlamp. The light produced was in the near-infrared
region, with a wavelength of 1064 nm.
Diffraction causes the light waves to spread transversely as they propagate and it
is therefore impossible to have a perfectly collimated beam. In most laser applications
it is necessary to focus, modify or shape the laser beam using lenses and other optical
elements.
Table 3.1 (Appendix A) was produced by Yusof Munajat (1997) for the
Nd:YAG laser used in this system. It shows the energy of a single laser pulse with
and without the focusing system, and also the percentage of the energy that passed
through the focusing system, measured with a Melles Griot power meter. Comparing
the energy with and without the focusing system clearly indicates the massive
energy loss that occurred after the focusing system was inserted in the beam path: only
2.84% of the energy passed through the focusing system when 850 V was supplied to
the flashlamp. The loss is thought to be due to absorption by the focusing system
and the surroundings. More importantly, with tight focusing the transmitted energy
was still sufficient to exceed the threshold value for breakdown in air.
The laser could be operated either by internal or external triggering. To trigger
it externally, a 15 V supply with a pulse width of 60 µs was required. A pulse from the
trigger unit was connected to the external trigger connector of the laser via a 50-Ohm
coaxial cable. Laser light was emitted 290 µs after the initial trigger pulse. This was
determined by an optical method using a photodiode, which was linked to an
oscilloscope.
The laser head was positioned vertically over the optical table, so that the incident
beam was normal to the surface of the sample and to the incoming dye laser light.
The average diameter of the laser beam was about 6.5 mm. Because of the massive loss
of energy to absorption in the focusing system, this beam had to be carefully
focused to obtain the energy needed for breakdown.
3.2.1.1 The Focusing System for Nd:YAG laser
The oscillation of the electric field component of the Nd:YAG light displays a
Gaussian distribution (Figure 3.2). Propagation of a Gaussian beam through an optical
system can be treated almost as simply as geometric optics: the transverse
intensity distribution remains Gaussian at every point in the system; only the radius of
the Gaussian and the radius of curvature of the wavefront change.

All Gaussian beams have a position along their axis at which the wavefront
becomes plane and the spot size goes through a minimum. This is called the beam waist
(Figure 3.3). The irradiance profile, I(r), of the Gaussian beam (Melles Griot 1995/96
catalogue) is given by

I(r) = I0 exp(−2r²/w²) = (2P/πw²) exp(−2r²/w²)        (3.1)

and has the same form at all cross-sections of the beam, where w is the width of the
distribution, known as the spot radius at that location, and P is the total power in the
beam.

Figure 3.2 Nd:YAG laser in Gaussian mode and the amplitude distribution in the transverse direction, E = E0 exp(−r²/w²).

Figure 3.3 The beam waist w(z) along the propagation axis z, with minimum spot radius w0 at z = 0.

When 85% of the power passing through the lens falls within a certain radius
(r = w), the optical beam is categorized as focused. The total power within a radius r is

Pr/P = 1 − exp(−2r²/w²)        (3.2)
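Equation (3.2) can be checked directly; at r = w the enclosed fraction is 1 − e⁻² ≈ 86.5%, consistent with the ≈85% focusing criterion quoted above (an illustrative sketch):

```python
import math

def enclosed_power_fraction(r, w):
    """Fraction of total Gaussian-beam power within radius r,
    P_r / P = 1 - exp(-2 r^2 / w^2)  (Equation 3.2)."""
    return 1.0 - math.exp(-2.0 * r * r / (w * w))
```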
Due to the large energy losses, the focusing system had to produce the smallest
possible focal spot so that the energy density exceeded the threshold for breakdown.
The focusing unit (Figure 3.4) consisted of a plano-concave lens (f = −25 mm) and a
focusing (objective) lens (f = 28 mm, f/2.8). A large cone angle was required for tight
focusing to ensure the occurrence of a single breakdown.
Figure 3.4 Focusing system for the Nd:YAG laser (plano-concave lens expanding the beam into the focusing lens, with breakdown at the focus).
The plano-concave lens expanded the beam to fill the aperture of the
focusing lens, which then focused it down to a theoretical spot diameter of
approximately 38 µm (with a cone angle in air of 21°). Although only around 3% of the
laser energy passed through both lenses, optical breakdown was still possible through
the tight focusing of the arrangement.
3.2.2 The Nitro-dye Laser
Dye lasers are becoming increasingly important in spectroscopy, holography and
biomedical applications because of their tunability and coherence. Another recent
important application of dye lasers is isotope separation.
In this work, the Nitro-dye laser was the one that produced the interference
fringes. It was also used as a photoflash in the capture of the interaction
events that took place in the samples. The Photonic LN102C nitro-dye laser used in this
project was a combination of a nitrogen and a dye laser. The nitrogen laser, operating at
357 nm, was used as an optical pump for the dye. The dye, green Coumarine 500,
emitted its visible radiation at 514 nm. This 1 ns pulsed laser was sufficient to
illuminate and record the Nd:YAG interaction event up to 100 µs. The efficiency was
about 20%, with an average measured transient power of 30 kW at that wavelength.
Like the Nd:YAG laser, the dye laser could be triggered internally or
externally. To trigger it externally, a 5 V pulse of 6 µs width was required. The laser
light, as determined optically, was emitted 292 µs after the initial trigger pulse. The
changes in the refractive index and density in the interaction area caused the deflection
of the dye laser beam, which could be detected by the CCD camera.
3.2.2.1 The Magnification and Collimation of the Dye laser Beam
The diameter of the dye laser beam was initially too small (∼2.5 mm) for
illumination purposes. It was therefore necessary to magnify and collimate the beam
to obtain a reasonably uniform intensity distribution over the whole field of the event.
This was achieved using a system comprising a microscope objective lens (f = 7.2
mm) to bring the beam to a focus, followed by a camera zoom lens of adjustable focal
length (f = 70–210 mm) to collimate the beam (Figure 3.5).
Figure 3.5 Magnification of the dye laser beam (objective lens of focal length f1 with a pin-hole at its focus, and zoom lens of focal length f2 producing the collimated beam; lens separations d1 and d2).
Ideally, two lenses may be used together to produce a laser beam expander
(Figure 3.5). The image formed by the objective lens should be at the focus of the zoom
lens system to produce a magnified and collimated beam. The Gaussian nature of the
laser beam, however, produces an expanded beam with a larger waist
radius w. Thus, the focal length of the zoom lens could be adjusted to produce the
required magnification of the dye laser beam. Optical filtering was carried out at this
stage using a 20 µm pinhole. The diameter of the final collimated beam produced for
this work was 25 mm, sufficient to illuminate the region of laser interaction.
The magnification of the collimated beam can be calculated from the similar
triangles made by the two lenses (Figure 3.5):
m = size of image produced / size of source = 25 mm / 2.5 mm = 10

but m = f2/f1, so

10 = f2 / 7.2 mm  ⇒  f2 = 72 mm

To produce this magnification, the focal length of the zoom lens combination was
adjusted to 72 mm.
The expanded beam could sometimes exhibit a diffraction pattern (speckle noise)
due to dust on the surface of the microscope objective. This could be eliminated by a
pin-hole placed at the focal point, acting as a spatial filter that blocked the
diffracted light. If the pinhole chosen has a diameter slightly greater than the central
maximum of the image (the Airy disc), the loss of light is negligible. Care should be
taken when cutting off the beam with a very small aperture: the source distribution may
no longer be Gaussian, and the far-field intensity distribution may develop zeros and
other non-Gaussian features. However, if the aperture is at least three or four times w in
diameter, these effects are negligible.
3.3 The Interferometer
The interferometer used was based on the principle of the Mach Zehnder. This
was chosen because it is more versatile than the Michelson, with its two beams wide
apart and each beam traversing its path only once before interference. Another
important advantage over other interferometers is its flexibility in fringe localization.
The two outputs of the original Mach Zehnder interferometer are 180° out of phase, but
in this project it was modified so that they differ in phase by 90°. The other
modification was the insertion of another beamsplitter in each output arm to produce
four outputs instead of the original two. However, only three of these outputs were used
in the measurements.
The optical components involved in the construction of the interferometer were:
a pair of plane, fully reflecting mirrors (M1 and M2), semi-reflecting beamsplitters (BS)
with a transmission-reflection ratio of 50:50 for 514 nm light, quarter-wave plates,
polarizers and analyzers. These were arranged in the form of a rectangle or
parallelogram, with each component free to rotate around its vertical and horizontal
axes for easy alignment and adjustment of the interferometer. The set-up for this
amplitude-splitting interferometer is shown in Figure 3.6.
Figure 3.6 The modified Mach Zehnder interferometer with three outputs (dye laser input, polarizer P, beamsplitters BS1–BS4, quarter-wave plates λ/4, mirrors M1 and M2, sample and compensator arms, Nd:YAG beam, analyzers A1–A3 and cameras CCD1–CCD3).
The dye laser light was initially passed through a beam expander and collimator
to produce and maintain a beam large enough to cover the interaction event. This
beam then passed through a polarizer, P, aligned at 45°, making the light linearly
polarized with that orientation. Polarizer P was also used to avoid feedback of light
reflected back from the interferometer into the laser, since changes in the amplitude or
phase of the reflected light can cause changes in the output, or even the frequency, of
the laser.
A beamsplitter, BS1, placed at 45° to the oncoming light, split it into two equal
parts that traversed the two arms of the interferometer. One beam followed its path
through the sample, while the other passed through a compensator. Mirrors M1 and M2,
also aligned at 45° to the beams and placed at the two opposite corners of the rectangle,
deflected the two beams to meet at the second beamsplitter (BS2) and produce the
interference patterns.
The quarter-wave plates introduced in each arm of the interferometer served as
retarders, introducing a relative phase shift of π/2 (path difference of λ/4) between the
orthogonal o- and e-components of the wave. This meant there was a phase difference
of 90° between the wave components along the fast and slow axes. As each quarter-wave
plate was oriented at 45°, the two wave components had the same amplitude,
which was important for the production of fringes with good contrast and visibility.
Propagation of wave components of equal magnitude with a 90° phase shift converts
linearly polarized light to circularly polarized light and vice versa; this was important
for quadrature imaging.
Beamsplitters BS3 and BS4 were then introduced in each output arm of the
initial interferometer to split each output beam into two parts, giving four outputs
instead of the original two. This, together with making the outputs differ in phase by
90° instead of the original 180°, constitutes the modification made to the Mach Zehnder
interferometer. However, only three of the four outputs were utilized in the analysis.
Analyzers A1, A2 and A3 were placed in front of the three cameras, CCD1,
CCD2 and CCD3 respectively. The polarizer and analyzer combinations were used to
provide the required phase separation between the three images captured simultaneously
by the CCD cameras.
For interference to occur, the optical path difference between the two interfering
beams must be within the coherence length of the light used. Superposition of two
waves of the same frequency or wavelength travelling in approximately the same
direction results in an irradiance that is not distributed uniformly in space:

I = I1 + I2 + 2√(I1I2) cos ∆φ        (3.3)
where I1 and I2 are the irradiances of the individual waves and ∆φ is the phase
difference between them. At some points of the superposed irradiance pattern the
intensity reached maxima (constructive interference), while at other points it attained
minima (destructive interference). There was no loss of energy during interference:
the energy missing at the dark points appeared at the bright points.
The visibility, V, of the fringes is defined as

V = (Imax − Imin)/(Imax + Imin)        (3.4)
To obtain fringes of good visibility, the amplitudes of the two interfering waves must
be nearly equal:

I1 = I2 = I0  ⇒  I = 4I0 cos²(∆φ/2)  ⇒  Imin = 0, Imax = 4I0        (3.5)
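Equations (3.3)–(3.5) can be verified numerically (an illustrative sketch; symbols as defined above):

```python
import math

def fringe_intensity(i1, i2, dphi):
    """Two-beam interference irradiance, Equation (3.3):
    I = I1 + I2 + 2*sqrt(I1*I2)*cos(dphi)."""
    return i1 + i2 + 2.0 * math.sqrt(i1 * i2) * math.cos(dphi)

def visibility(imax, imin):
    """Fringe visibility, Equation (3.4)."""
    return (imax - imin) / (imax + imin)
```

With equal beams (I1 = I2 = I0) the maxima reach 4I0 and the minima fall to zero, giving unit visibility; unequal beams reduce V below one.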
If the mirrors and beamsplitters positioned at each corner of the rectangle
were inclined at exactly 45° to the incoming beam, the two paths taken by the
interfering beams would be in phase and would produce no fringes at the outputs. Only
by tilting the final beamsplitter (BS2) slightly could a series of straight fringes be
observed; the number of fringes appearing in the image depends on the angle of tilt of
this beamsplitter. The flatness of the beamsplitters and mirrors used in this work was
λ/10, which should give good fringe patterns. The fringes could be made horizontal or
vertical by adjusting the knobs on beamsplitter BS2.
These fringes would be used as the background or reference for the interaction
events later in this work. For this analysis, the straight and parallel fringes were
arranged to be vertical. Another important criterion was that all three images should
have the same magnification factor, so that the fringes in the three images would be
the same size.
3.4 Alignment of the Interferometer System
The initial alignment was made using a continuous He-Ne laser placed
alongside the dye laser so that its beam was coaxial with the dye laser beam. A mirror
placed at 45° in its path was used to deflect the beam. The height and tilt of every
optical component inserted in the path of the laser light in the interferometer were
adjusted one by one, so that the image after each insertion fell onto a reference
point marked on a wall some distance away. The components were then marked and
locked at their proper locations to secure their positions and heights, ensuring that the
pulsed dye laser would later take exactly the same path.
A magnified and collimated dye laser beam was required in this work. This was
achieved by passing the beam through a beam expander consisting of two lenses: an
objective lens and zoom lens combination. These were adjusted until the right beam
size was obtained throughout the system. Final alignment was then made using this
collimated beam, tracing the path already made by the He-Ne laser. Fine adjustments
were still needed to ensure that the collimated beam passed through the principal axis
of every optical component along the path.
The optical components inserted in the light paths leading to the three outputs of
the interferometer must also be identical. This was the most tedious and yet the most
essential part in the setting up of the interferometric system, to ensure the achievement
of the best possible interferograms.
The Nd:YAG laser was mounted vertically, directly over the location of the
sample. This was necessary to produce a direct interaction with the sample. The
diameter of the collimated dye laser beam was 25 mm. This was sufficient to cover the
interaction region of the sample with the Nd:YAG laser required in this work.
3.5 Localization of the Fringes
Localization was the necessary tool for the interpretation of the interferograms.
Interferograms produced should be of good contrast, visibility and focus. This was one
of the aims of this work: to obtain the best manageable interferograms possible with
the present system.
An extended source of light could be considered as an array of independent
point sources, each producing a separate interference pattern. If the path differences at a
point P were not the same for all these point sources, these elementary fringe patterns
would not coincide and there would be a reduction in the visibility of the fringes. Since
the reduction in visibility depended on the position of P (Figure 3.7), there must be, in
general, a position of the plane of observation for which the visibility of the fringes
would be a maximum (where all the fringe patterns coincide).
Figure 3.7 Fringe localization (light source, beamsplitters BS1 and BS2, mirrors M1 and M2, observation point P).
The region where the virtual rays meet became the virtual region of fringe
localization. This region of fringe localization could be moved anywhere between
mirror M1 and beamsplitter BS2 by simply rotating BS2 by a small amount. In this way,
one can actually place the fringes at infinity or before or behind BS2.
This flexibility in fringe localization gave the Mach Zehnder interferometer one
very important advantage over many other interferometers. For this work, it was
necessary to photograph both the interference fringes and the test sample
simultaneously so that both were in focus. Therefore, it was a necessity that the fringes
be located (or localized) in the region where the test object was located. Generally
speaking, localized images are images that are formed on the plane of interference
which is also the plane where the sample is located.
Any alteration of the fringe spacing must be compensated for; otherwise the
relationship between the fringe spacings can get rather complicated. The fringe spacing
can be controlled and adjusted by slightly rotating the mirror, M1 and BS2 and the tilt of
the fringes can be adjusted by a slight turn of that beamsplitter knob. To obtain good
interference patterns, the flatness of the beam splitters and mirrors used in the
construction of the interferometer must be at least λ/10. Dust-free optical component
surfaces and surroundings were also required.
3.6 Magnification and Focussing of the image
In order to obtain good interference fringes, the optical path in the arms of the
interferometer must be identical in every aspect. Other factors to be considered were a
dust-free surrounding and the cleanliness of every optical component used in the
system.
The initial magnification was done by adjusting the distance between the CCD
camera and the zoom lens on each arm of the interferometer. The bigger the distance
between them, the bigger the image that was produced. The constraint on the
magnification factor in this set-up was the lack of space on the arm rail and the optical
table. Nevertheless, a sufficient magnification of the interference image was obtained.
Focussing was done on a U-shaped object placed at the location where interaction was
to take place, by adjusting the focus of the zoom lens. The same position, magnification,
degree of focus, contrast and intensity level should be maintained for all three images
throughout the measurements.
To get the reference image, a U-shaped thin aluminium plate (Figure 3.8) with a
fine wire vertically strapped at the center (acting like cross wire in a microscope) was
placed at the interaction point. The image produced was then compared with the true
dimension of the object (U-plate) to obtain the magnification factor. The tip of the wire
also represented the center of the three images and also the center of the interaction
event (the same pixel location on each of the three images). This was the critical factor
in determining the accuracy of the measurement technique.
The magnification factor M is the ratio of the size of the image to the size of the
object. In this work the magnification factor used was 10:

M = (size of image)/(size of object) = 10

Figure 3.8 A U-shaped aluminium plate as reference frame for the interference pattern (pointer at the centre; scale 1 : 1.5 mm).
Once the satisfactory (focussed and localized) image with the stated
magnification factor was obtained, it was calibrated and saved in the computer software
used for image analyses. This would be the calibrated reference for images of events
produced later in this work. Therefore, actual dimensions of the interaction activities at
any time delay could be easily determined.
3.7 Quadrature Imaging
Quadrature imaging developed by Hogenboom et al. (1998) at Northeastern
University (Boston, MA) provided information in a single measurement that can be
used to construct a three-dimensional (3D) image. Because the method was scalable, its
application could range from microscopy to Doppler laser radar (Carts-Powell, 1997).
Theoretically, the insertion of a quarter wave plate in each arm of the
interferometer would satisfy the 90° phase-difference requirement. But to ensure this
was really accomplished, the beam was made plane polarized by a polarizer, P. The
quarter wave plates were positioned at 45° to the incoming light to ensure equal
amplitude division of the wave about the fast and slow axes. Equal amplitudes, but out
of phase by 90°, make the tip of the field vector trace out a helix as the wave
propagates. This describes circularly polarized light, which was the requirement for
quadrature imaging.
In this work, the system was arranged for quadrature imaging. The reference
beam was circularly polarized using a quarter wave plate. Quadrature imaging was
achieved mainly by mixing the signal with an in-phase reference and a quadrature
reference that is 90° out of phase with the cosine wave. The resulting output contained
the real and imaginary components (that is the amplitude and the phase) of the complex
signal.
The combined beam passed through an analyzer in front of the CCD camera.
Rotating the polarizer from vertical to horizontal altered the interference pattern,
providing both image intensity and phase information. The variations of orientation of
the analyzers A1, A2 and A3 inserted at the three outputs of the interferometer were
mapped out to find the best common orientation for the three outputs that would
maintain the 90° phase difference between them.
The intensity of the light passing through the analyzer should remain unchanged.
If it varied somewhat, it meant that the light was elliptically polarized and that the
quarter wave plate was not actually operating at the required wavelength (there is an
unequal amplitude division of the light along its fast and slow axis).
Yusof Munajat (1997), in his work, gave 67.5° as the best orientation for his two
analyzers to provide the required 90° phase difference between the two outputs. It was
of utmost importance to make sure that the beams in the three arms traversed equal
paths in all aspects to be certain of the outputs produced.
Quadrature imaging allowed the complete field to be retrieved. This concept
had been used by researchers for Doppler lidar, allowing them to determine in which
direction an object is moving, approaching or receding, in a single measurement, instead
of merely its speed.
3.8 High-speed Photography System
The high speed photography system utilized in this work comprised three
monochrome CCD cameras linked to three DT 3155 frame grabbers installed in the
computer. The images were captured and stored in the computer hard disk using DT
Acquire software. A series of images was produced with different time delays between
the events. This provided a detailed profile of the advancing acoustic wave in
the medium. Detailed transient images captured by the system were stored permanently
in a computer hard disk for further analysis. For the system to function, components
such as CCD cameras and frame grabbers must all conform to CCIR/PAL standard
system.
The fast photography system with the nitro-dye laser devised in this work was able
to eliminate the environmental problems of air turbulence and vibration often
associated with phase interferometry. Photography techniques, namely schlieren,
shadowgraphy and interferometry, were developed to capture the images of the
event.
3.8.1 The CCD Camera
In this work, three monochrome CCD cameras model SONY XC-75CE with
752 x 582 active pixel arrays were used in the detection system. The scanning process
for each 40 ms frame consisted of 625 interlaced lines of data recorded from
odd-numbered and even-numbered lines. The vertical period of each field was 20
ms. The video output of the CCD camera was used as a signal for both the trigger unit
and the frame grabber.
The CCD cameras used in this work gave only analog signals, so the trigger and
synchronize unit was also designed to modify these signals into several trigger
pulses with the appropriate delays and widths. Since the framing rate of the cameras
was 50 Hz, it was not possible to take multiple pictures of a single event. The system was
arranged to capture and retain a single 20 ms field of information from the CCD camera
at a given time using the frame grabber.
3.8.2 The Frame Grabber
The monochrome DT 3155 frame grabber used was suitable for scientific and
industrial image processing where data accuracy is critical. The frame grabber
conformed to the CCIR/PAL (50 Hz) system. The acquisition modes provided were:
interlaced (start on the next even, next odd or next field), single frame or continuous
operation. The speed of the frame grabber was 1/25 s per frame (50 fields per second).
The data were formatted according to the 8-bit monochrome format. Operating as a
bus-master on the PCI bus, the DT3155 could transfer images continuously, in real
time, to system memory for processing or display. Taking advantage of the PCI bus
high speed, from 10 to 12 MB/s typical up to 132 MB/s maximum, the DT3155 could
transfer an unlimited number of consecutive frames, in real time, across the bus to the
host memory.
The DT 3155 could also accept external triggering, which meant that image
acquisition could be synchronized with an event external to the computer. It provided
eight programmable TTL-level digital outputs for controlling or actuating external
devices. As the system resources were not involved in transferring data with the
DT3155's bus-master design, the computer CPU was free to perform high-speed image
processing on the acquired data. The software used to run the application was DT
Acquire, which enabled the capture, display and saving of the image data.
3.9 Synchronizing and Triggering
To capture the image of an extremely fast event such as laser interaction would
require a very accurate and consistent synchronization. The dye laser, the Nd:YAG
laser and the frame grabber had to be precisely synchronized in order to capture the
frame of the event.
In designing the triple channel video based interferometry system using three
CCD cameras, it was necessary to accurately synchronise them. This was done by
making one of the CCD cameras a master and the other two the slaves. An internal
jumper was set in one of the cameras so as to obtain the output of the horizontal sync,
HD, and the vertical sync, VD (Figure 3.9). In the other two cameras, these lines were
set as default inputs for the incoming signals.
Figure 3.9 Master and slave configuration of CCD cameras (the HD and VD sync outputs of the master CCD1 feed the slaves CCD2 and CCD3, the trigger unit and the frame grabbers).
The circuitry for the trigger and synchronize unit (APPENDIX B) of this work
was based on the circuit built by Yusof Munajat (1997) in the Laser Research Laboratory
(UTM), but with some modifications made to suit the present system. The LM1881
sync separator chip introduced in the circuit converted the analog signal from the CCD
camera into square waveforms or pulses.
To trigger the Nd:YAG laser, the pulse output from the pair of 74121s provided a
delay of 20 µs with a pulse width of 60 µs; this was then passed through an ICL7667
inverter and a MOSFET, which provided a 15 V outgoing pulse to the laser trigger
input. The MOSFET was required in this case to ensure that the signal had enough
current to drive the laser input. By Q-switched operation, a giant pulse of laser output
was obtained 290 µs after the initial trigger point.
The dye laser was triggered in the same manner as the Nd:YAG laser. A
variable delay of a few µs with a pulse width fixed at 6 µs was the signal that was sent
through the ICL7667 inverter and MOSFET to provide the necessary 5V outgoing laser
pulse. The dye laser light emerged 292 µs after the initial trigger pulse.
The frame grabbers were synchronized by a negative going 5V TTL pulse with a
width of 250 µs. The delay time was about 250 µs from the incoming video signal of
the CCD camera. This pulse instructed the frame grabber to accept the next complete
field of data reaching it. Three pairs of 74121 monostable multivibrators were used
to provide signals for the three frame grabbers in the system (Figure 3.10). The first
chip of each pair was used to generate a negative going pulse with a length that could be
adjusted to the required delay via an external resistor–capacitor pair. The second chip
was triggered at the positive-going edge of the resulting pulse from the first chip and
could therefore be arranged to give the correct width of the output pulse via the same
method.
Figure 3.10 Arrangement for controlling the width and delay of the three frame grabbers.
As the dye laser light emerged 292 µs after the initial trigger pulse while the
Nd:YAG laser light emerged after 290 µs, it was necessary to delay the Nd:YAG laser
pulse with respect to the initial video signal in order to capture any event. A 20 µs delay
was assigned to the Nd:YAG pulse for this purpose. This ensured that the Nd:YAG
pulse arrived at the interaction region on the time chart earlier than the dye pulse, so
that the interaction event at the given delay would be illuminated in time for capture
inside the interaction region indicated on the time chart.
The delays between interaction and capture were set by fixing one laser pulse
(Nd:YAG) at a certain location on the time chart while varying the other (nitro-dye) to
provide the required delay. The point to remember was that the Nd:YAG signal
must arrive earlier than the dye signal. The trigger and synchronize unit could
control the variable delay up to 1 ms between the firing of the two lasers. The delay
between the two laser output pulses was detected by an optical method using a
photodiode and measured using an oscilloscope.
An ideal photodiode can be considered as a current source in parallel with a
semiconductor diode. Each incoming photon generates a unit of electron charge,
which contributes to the photocurrent. Thus the current source represents the
light-generated drift current, while the diode represents the behavior of the junction in
the absence of incident light.
The photodiode used in this part of the work was to determine the optical time
delay between the dye and the Nd:YAG laser bursts. It was connected to an
oscilloscope for observation of the signal peaks of the two light sources. The optical
detector was chosen for this part of the measurement because it provided a direct
observation of the signal peaks. The time interval in the microsecond range was read
directly from the oscilloscope.
Figure 3.11 The optical detector used for laser delay measurement (a BPX65 photodiode with a 9 V bias and a 50 Ω load, output Vout).
The time chart for the sequence of activities between the firing of the lasers and
the capturing of the images is presented in Figure 3.12. An appropriate delay for the
appearance of the Nd:YAG pulse was necessary to enable the successful capture of the
interaction events. The variable delays between the two laser pulses needed in the
analysis were controlled by the electronic circuitry for the two lasers incorporated in
the trigger and synchronize unit. The images of laser interaction activities in the
interaction region indicated on the time chart in Figure 3.12 were captured by the CCD
cameras.
Figure 3.12 The time chart for image capture, showing the trigger point, the CCD signal, the dye laser (6 µs trigger width, light emerging at 292 µs), the Nd:YAG laser (20 µs delay, 60 µs trigger width, light emerging at 290 µs), the adjustable delay, the 250 µs pulses for frame grabbers FG1 to FG3, and the interaction region.
3.10 Image Production
Generation of acoustic waves was caused by the changing density of the
medium. This in turn changed the refractive index or the optical path length resulting in
the changes in the phase of the optical waves. Assuming that acoustic wave
propagation was spherically symmetric about the emission centre, then the refractive
index profile would also be spherically symmetric, as indicated in the interferogram
analysis. This was the basis used in the Abel inversion technique for phase measurement.
For this work, the signal from the CCD camera was initially used as the input
signal for the trigger unit. This signal was used as the starting point for arranging the
time delay between the two lasers and initiating the frame grabbers to capture the
images. All these had to be synchronized in order to capture the images of the required
activities at the correct time delay. So the initial signal was used as a trigger point for all
the activities involved.
The flip-flop in the trigger unit circuit (APPENDIX B) could be triggered either
by a single pulse from the manual switch and the remote-control switch or by the
continuous pulse from the 555 timer integrated circuit. In the former case, a single
pulse from the manual switch was passed through a 74121 monostable multivibrator
which was arranged to give a constant 60 ms, 5 V pulse at the flip-flop output. In the
latter case, a train of 60 ms, 5 V pulses was formed with a repetition rate from 1 to
50 Hz, governed by the variable bias resistance on the timer inputs. The
signal from the flip-flop was then used by the other part of the trigger unit to trigger
parts of the system such as the frame grabber, the Nd:YAG laser and the dye laser.
The scanning system of the CCD camera consisted of 40 ms frames, each with
625 interlaced lines from two consecutive fields of data; the odd and the even numbered
lines. Each field had a vertical period of 20 ms. Three frame grabber cards slotted
in the computer were capable of capturing three images simultaneously. Each device
was capable of handling a picture format of 768 x 576 pixels of 256 gray level. The
images from the CCD cameras were captured and synchronized by an external trigger
output on the card.
Capturing the three images with a single shot of the laser using the DT Acquire
software required first the selection of the three devices, then the selection of the
frame type working under the external triggering mode. The mode of operation
appropriate for this work was the single acquired frame. Once the three devices were
opened, the time-out value was set accordingly, so that enough time was given to ensure
the capture of the image when the system was triggered.
The 768 x 576 pixel digital images of the events produced were stored in BMP
format on the computer hard disk and could be printed out by a video printer or a high
resolution laser printer. For convenient analysis, a 256 x 256 pixel image size was
selected. The high resolution HP LaserJet printer at 600 x 600 dpi with 2 MB memory
could give a clear photo-enhanced image and was easy to manage. It could also
maintain an image intensity of 256 gray levels.
Digital filtering using Fourier transform with MATHCAD programming was
then used to filter out the unwanted noise from the signals. Phase unwrapping was
necessary to produce the required continuous phase distribution.
3.11 Photography Techniques
There are three techniques for obtaining images of the interaction events in this
work, namely shadowgraphy, Schlieren and interferometry. These three were chosen
because they complement each other in the confirmation of the final results. The
shadowgraphy and Schlieren techniques could easily be incorporated in the Mach-Zehnder
system simply by blocking one of the two arms of the interferometer.
Focused shadowgraphy simply means that the field of interest is focused on the
camera. This should not lead to any shadow information, but in practice the lens
aperture provides a 'stop' leading to some shadowing (Figure 3.13).
Figure 3.13 Shadowgraphy arrangement (light source, lens 1, object, lens 2, CCD).
In this work, this was accomplished by blocking one arm of the interferometer
with a piece of hard paper. The variation of the light intensity across the image is
proportional to ∂²n/∂y² for a two-dimensional object, where n is the refractive index
and y is the distance across the field of view of the system.
Theoretically, it is possible to calculate the changes in pressure from the shadow
images, but this is not easy and generally does not give satisfactory results. So in this
work the images produced by this technique are treated only as visual images showing
the sequence of the real event, which can also be used to support the analysis.
The Schlieren technique is similar to shadowgraphy in that both use only one arm of
the interferometer (Figure 3.14). The difference is the presence of a knife-edge
placed at the focus of the lens to remove the undeflected zeroth-order light beam and
therefore all the higher orders at the bottom as well. This ensures that the intensity of
the light is sensitive to any small change in the refractive index, thus increasing the
contrast of the image. The light intensity is proportional to ∂n/∂y for a two-dimensional
object across the field of view.
Figure 3.14 Schlieren arrangement (light source, lens 1, object, lens 2, knife edge, CCD).
In most cases this technique produced a much better quality image than
shadowgraphy. In this work, the images are also treated for qualitative value only, to
provide visual images of the real event at different time frames and for confirmation
of the data analysis.
Quantitative analysis was only conducted for the images obtained by the
interferometry method, which is the main concern of this project. The deviation of the
fringes from straight lines is proportional to the phase shifts of the probe light and is
indicated by the variation of the intensity level of the fringes. The 256 gray values,
representing the intensity levels, are digitally processed to reveal the actual values
of the phase shifts.
3.12 Phase Retrieval
Phase measurement interferometry is a very sensitive, accurate and precise
measurement technique. That is why so many algorithms have been produced, each
trying to overcome certain errors as much as possible. The algorithms used in this
work were the 3-step algorithms introduced for phase shifting methods.
However, unlike phase shifting, the three interferograms in this work were
captured simultaneously, but with a phase difference of 90° preset between them. The
three simultaneously-captured images of the same event, differing in their phase
information, provided the needed three intensity equations, the minimum
requirement to extract the unknown phase. Above all, the simultaneous image capture
is a means to reduce phase ambiguity in phase measurements of laser-interacted events.
The general intensity equation (Equation 2.23) had three unknowns, namely the
background intensity I0, the contrast factor γ and the phase φ. Three intensity
equations were needed to fulfill the requirement of the three-step algorithms. The
unknown phase of the laser-interacted images, initially separated in phase by 90°,
can be extracted from Equation (2.34), which is

φ(x, y) = arctan[(I3 − I2)/(I1 − I2)]

and also from Equation (2.37), which is

φ(x, y) = tan⁻¹[(I1 − I3)/(2I2 − I1 − I3)]

Both of these phase shifting algorithms were chosen because they fulfil the 90°
phase-difference requirement used in this analysis. These single-formula algorithms,
coupled with a suitable computer program, can reduce the usually lengthy
phase-processing time.
The simultaneous image capture with 90° phase difference using three CCD
cameras, which is the main interest of this project, also benefited from the FFT method
for filtering of the high and low frequency noise. The extraction of the phase from the
three noise-free images according to the chosen algorithms would result in a wrapped
phase spectrum. At this stage, the phase was not yet ready to be displayed or evaluated
due to its discontinuities caused by the arctangent's limited range of angles, -π/2 to π/2.
To reveal the continuous, real phase values, an unwrapping process would be
required. For a physically correct unwrapping, it was necessary to distinguish between
the true mod 2π discontinuities and those caused by noise, which should be corrected,
and those caused by the object (for example gaps, boundaries, shadows and continuous
deformations such as cracks), which should be evaluated. Phase mapping could then
follow, in either two-dimensional or three-dimensional mapping, which would
provide the total picture of the event.
By knowing the changes in phase that occur in the event, its association with the
changes in the density, refractive index and pressure of the sample could be worked out.
These parameters were of great importance to those in the manufacturing sectors.
Therefore, it was necessary to get these values as accurately as possible. This meant that
the required predetermined conditions before data collection, such as proper
synchronization of the three images, the same intensity requirement and the 90°
phase separation between images, should be met with the greatest accuracy possible.
This work provided both quantitative and qualitative analysis. Besides the
numerical figures representing the changes taking place, these changes could also be
viewed directly from their 2-D and 3-D image representation of the events.
CHAPTER 4
IMAGE PRODUCTION AND IMAGE PREPROCESSING
4.1 Introduction
The wave propagation could be observed using the three photographic
techniques, namely shadowgraphy, Schlieren and interferometry. The first two
methods were mainly used to visualize the events. For quantitative analysis of
the phase changes taking place, the images must be captured by the interferometry
method. This was the emphasis of the project; to analyze phase changes due to laser
interaction interferometrically.
Before analysis could be carried out, the images produced interferometrically
must undergo the preprocessing stage. This was where the images were being prepared
with the initial conditions and parameters prior to the phase measurements. These
included localized images, simultaneously captured images, the same intensity
condition for the three images and the 90° phase difference between the images.
Three images were captured simultaneously with a single pulse of laser. The
main objective of simultaneous image capture was to reduce the ambiguity problems
often faced by laser-interacted images when analyzed by the single-interferogram phase
mapping method. The high-speed photography incorporated in the system would
eliminate the errors due to air turbulence and vibrations.
Finally, the images obtained were cut to sizes large enough to cover the
areas of interaction and convenient for computer analysis. Special attention was
needed when cutting so that the center of interaction would lie on the same pixel
location in all three images.
4.2 The Photographic Images
Dielectric breakdown is the most important process for converting laser (optical)
energy into acoustic energy. Generation of acoustic waves in air in this project was due
to dielectric breakdown, which occurred at a laser power density of approximately
10¹⁰ W cm⁻² at the focus of a lens. The plasma produced from dielectric breakdown resulted
in the formation of shock waves. The shock waves occurred spontaneously with the
production of plasma and propagated initially at supersonic speed in the medium,
making assessment of the wave rather difficult.
The incident laser light and the breakdown region were visible on the image as a
small luminous area at the center of the shock wave. The shock wave propagated
initially at a higher velocity than the velocity of sound before attenuating to the velocity
of sound where the wave was now classified as acoustic wave. The attenuation to a
more symmetrical acoustic wave enabled calculation of the phase to be made.
Figure 4.1 shows the development and propagation of the shock wave after
laser interaction in air, taken using the Schlieren and shadowgraphy techniques.
In the Schlieren technique, the undeflected beam was cut in order to increase the
contrast of the light; only the deflected light can pass through to the image screen. In
air, the Schlieren technique seemed to produce a clearer outline of the wave. This is
because the intensity of light on the image plane is proportional to the gradient ∂n/∂y
of the refractive index n at a distance y across the sample.
Figure 4.1 The development of acoustic wave propagation by (a) the Schlieren technique and (b) the shadowgraphy technique, at delays of (i) 0.4 µs, (ii) 1.0 µs, (iii) 2.0 µs and (iv) 5.0 µs (scale 1 : 0.6 mm).
Figure 4.2 Stages of development of the shock waves in air by the interferometric method, at delays of (a) 220 ns, (b) 400 ns, (c) 1.0 µs, (d) 2.0 µs, (e) 3.0 µs and (f) 4.0 µs (scale 1 : 0.12 mm).
The images of laser interaction were clearly visible with the Schlieren technique.
However, neither of these techniques can provide quantitative information relating to
the changes in the density or the refractive index of the sample.
Various stages of development of the waves caused by laser interaction in air at
atmospheric pressure and temperature of 24°C were revealed using the three different
photographic techniques mentioned (Figure 4.1 and Figure 4.2). The immediate
response to laser interaction was the formation of shock waves as indicated by the
slightly unsymmetrical waveforms in the nanosecond region. Soon after (within a few
µs), these waves attenuated to more symmetrical waveforms, which propagated away
from the point of interaction (source), changing the pressure along the way. Figure 4.2
shows the stages of wave development soon after undergoing laser interaction using the
interferometric method. Here the incident interference fringes and the deviation of
those fringes in the breakdown region are clearly visible.
By plotting the expansion of the diameter of the wave with time or the
advancement of the wavefront of the wave against the time taken, the gradient of the
plot would provide the average speed of propagation of these waves at any particular
instant. The plot of the wave expansion with time is shown in Figure 4.3. (APPENDIX
E).
Figure 4.3 Plot of radius of wave with time (radius of wave in mm against time
after interaction in µs).
The system developed was capable of capturing images earlier than 2 µs, but the
unsymmetrical nature of the waveforms produced at those times would not allow accurate
expansion measurements. Once the wave acquired a more spherical shape, measurement
was possible. The instantaneous speed can be calculated from the gradient of the graph
at certain times in the range measured. The graph takes a linear form after a certain
time duration. The accepted value for the speed of acoustic waves in air at 20°C is 342 m s⁻¹ (Kaye
and Laby, 1972). Thus, the system developed could also be used to determine the speed
of the waves produced.
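The speed determination above reduces to a least-squares slope of the radius-time plot. A minimal sketch of that step follows; the (time, radius) pairs are illustrative stand-ins, not the Appendix E measurements:

```python
# Least-squares slope of radius vs. time gives the average wave speed.
# These sample points are hypothetical, chosen only to illustrate the method.
times_us = [2.0, 3.0, 4.0, 5.0, 6.0]       # time after interaction (microseconds)
radii_mm = [1.95, 2.30, 2.64, 2.98, 3.32]  # wave radius (mm)

def fit_speed_ms(t_us, r_mm):
    """Slope of the best-fit line in mm/us, converted to m/s (1 mm/us = 1000 m/s)."""
    n = len(t_us)
    mt, mr = sum(t_us) / n, sum(r_mm) / n
    slope = (sum((t - mt) * (r - mr) for t, r in zip(t_us, r_mm))
             / sum((t - mt) ** 2 for t in t_us))
    return slope * 1e3

speed = fit_speed_ms(times_us, radii_mm)   # ~342 m/s for these sample points
```

Once the plot turns linear, the slope over that region gives the propagation speed directly, which can then be compared with the 342 m s⁻¹ reference value for sound in air.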
4.3 Image Synchronization
The system was specially developed for simultaneous imaging. In this case,
three images were to be captured simultaneously. Data could only be collected after the
three images were synchronized or locked together at the same location at any one time
frame. This was accomplished by making one of the CCD cameras the master and the
other two the slaves (Figure 3.9). The circuitry for the trigger and synchronization unit
is shown in Appendix B. This unit functioned as a connector, controller and
synchronizer for the other units involved in the running of the whole system. Figure
3.12 shows the timing chart for this system, indicating the setting for synchronization of
the units involved and the region where image capture occurred.
The pointer on the U-shaped frame (Figure 3.8) was used as the initial reference
for the position of the center of activity for the three images. Fine adjustments were
made so that each interaction event had its center lying on that same spot, that
is, on the tip of the pointer in the U-frame. The U-frame itself served as the reference
factor for the magnification of the images produced. This is important because in three-image
PSI, the images must be identical in terms of magnification, pixel location and
intensity. The only difference they should have is their phase values.
As the images produced were rather large (768 x 576 pixels), they were cut into
smaller sizes before processing. The images were cut into suitable sizes, enough to
cover the interaction areas and at the same time convenient for computer analyses. In
this case, a 256 x 256 pixel size was chosen.
Figure 4.4 Synchronization of the center of interaction: each of the three images is
cut so that its corners lie at (0,0) and (256, 256) and the center C lies at (128, 128).
Even though the images had all been aligned previously using the pointer in the
U-shaped frame, the alignment could still be further improved at pixel level when
cutting down the images. This was accomplished by locating the center of the
interaction event on each image and working outwards from there to get the required
pixel-size image. The cutting-down process had to be done carefully so that the center
of interaction, C, would lie on the same pixel location (coordinates) in all three
images (Figure 4.4). In this way, the final images to be processed would be aligned and
matched pixel to pixel for greater accuracy.
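The pixel-level alignment described above can be sketched as follows; the frame size matches the cameras used, but the per-image centre coordinates are assumed values for illustration:

```python
def crop_about_center(image, center, size=256):
    """Cut a size x size window from a full frame (a list of rows) so that
    the centre of interaction C lands at pixel (size // 2, size // 2)."""
    cy, cx = center
    half = size // 2
    return [row[cx - half:cx + half] for row in image[cy - half:cy + half]]

# Three simultaneously captured 576 x 768 frames (blank here), each with its
# own measured centre of interaction; the coordinates are hypothetical.
frames = [[[0] * 768 for _ in range(576)] for _ in range(3)]
centers = [(300, 400), (302, 398), (299, 401)]
cropped = [crop_about_center(f, c) for f, c in zip(frames, centers)]
# Every cut-out is 256 x 256 with C at the same pixel (128, 128), so the
# three images can be compared pixel to pixel.
```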
4.4 Fourier Filtering
Fringe patterns obtained by coherently illuminated rough surfaces are
contaminated with a special kind of noise called speckle. Speckle, which is usually
modeled as signal-dependent noise, plays a role in the image formation process and also
functions as the carrier of information to be measured. Thus, this makes speckle
unavoidable in optical metrology. One way to overcome this problem was to process the
speckled fringe patterns. Several methods were proposed but since speckles are noise as
well as carrier of information, there is no ideal approach that operates effectively in all
cases.
The high-speed imaging using dye laser actually was able to freeze the event
within 1 ns, thereby reducing vibration and turbulence factors a great deal. Another noise
contributor is high-frequency electronic noise. With simultaneous image capture,
the noise between frames would also be reduced. This would make the choice of the
noise filtering technique somewhat easier as the three images could benefit from the
same filtering technique.
Spatial filtering and digital filtering were made available to reduce the noise in
the signal. In any case, too much filtering could cause signal loss and too little would
still leave a noisy signal requiring further filtering.
In this work, Fourier transform filtering was used to filter the low frequency
background noise and the high frequency digital noise. The measured intensity
distribution was Fourier transformed to a linear combination of harmonic spatial
functions. In this domain, identification of the signal was made by removing the low-frequency
background and the high-frequency noise. Careful identification must be
made so as to remove as much of the noise as possible and leave the signal intact. The idea
was to obtain signals as clean as possible for phase analysis.
Figure 4.5 is an example of the signal identification in this domain produced in
this work. The markers, 2 and 6, indicated the minimum and the maximum value of the
frequency to be removed from the interferogram. Thus, this allowed only those
between the indicated values of 2 and 6 to remain as filtered signal. Inversing this
transform would bring back a much cleaner filtered signal to be used in phase analysis
as shown in Figure 4.6.
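A minimal sketch of this band-pass step on a one-dimensional intensity profile, using NumPy's FFT. The cut-off bins 2 and 6 mirror the markers of Figure 4.5, and the synthetic signal is an assumption standing in for a real camera row:

```python
import numpy as np

def fourier_bandpass(signal, f_lo, f_hi):
    """Zero every spatial-frequency bin outside [f_lo, f_hi] and invert.
    Dropping bin 0 removes the dc background, which is why the filtered
    profile in Figure 4.6 sits lower than the raw one."""
    spectrum = np.fft.rfft(signal)
    mask = np.zeros(spectrum.shape)
    mask[f_lo:f_hi + 1] = 1.0
    return np.fft.irfft(spectrum * mask, n=len(signal))

# Synthetic row: dc background + 4-cycle fringe carrier + high-frequency ripple.
x = np.arange(256)
raw = (120.0
       + 80.0 * np.cos(2 * np.pi * 4 * x / 256)    # wanted fringe signal
       + 10.0 * np.cos(2 * np.pi * 60 * x / 256))  # unwanted electronic noise
clean = fourier_bandpass(raw, 2, 6)                # keeps only the carrier
```

The same mask, applied identically to all three interferograms, gives each the benefit of one common filtering regime.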
Figure 4.5 Cut-off frequencies in Fourier filtering (power density against spatial
frequency; the markers at 2 and 6 indicate the cut-off values).
Figure 4.6 The unfiltered and the filtered intensity signal (intensity against pixel
location across the interferogram).
Figure 4.6 shows the intensity profiles of an image before and after undergoing
Fourier filtering process. Notice the smoother filtered intensity profile (solid line) after
undergoing FFT filtering as compared with the initial profile of the unfiltered intensity.
The filtered profile appeared to be shifted downwards due to the removal of its dc
component.
All the three interferograms simultaneously captured would have to undergo this
same regime of Fourier filtering process to produce the filtered intensity signals. These
filtered intensity signals would be used initially in the determination of the 90° phase
difference and later, in the algorithms for phase measurements.
4.5 The Intensity
It was necessary to obtain the same intensity for all the three images. The object
should be sharply imaged onto the detecting array in the CCD camera in order to avoid
wrong phase data collection due to diffraction. This called for the localized-image
condition.
Initially, the quality of the circularly polarized light could be checked using a
polarizer. The intensity of light passing through the polarizer should remain unchanged
as the polarizer was rotated. The stability of the intensity distribution during phase
measurement was essential. This also meant that the noise contributions in all the three
frames should be about the same. A real-time environment with interferometric
stability had to be guaranteed for successful phase measurement.
Electronic noise is time-dependent; its influence on the intensity distribution can
usually be diminished over a sequence of frames. In this work, the intensity levels of the
three images were taken simultaneously, not in sequence, so that each of the
interferograms would have about the same noise contribution. This made the filtering
process easier. The intensity levels of the images obtained for this project, as analyzed using
Mathcad software, are shown in Figure 4.7. From this intensity distribution spectrum, it
Figure 4.7 Intensity distributions of the three undisturbed images (intensity against
pixel location across the interferogram).
Figure 4.8 The filtered intensity of the undisturbed images (filtered intensity against
pixel location across the interferogram).
showed a significant noise contamination. This must be filtered off to provide better
accuracy in the phase measurement. Digital filtering using FFT was implemented to
separate the signal from the noise.
The Fourier-filtered intensity of the three undisturbed images is shown in Figure
4.8. It was quite clear that even after undergoing the same noise filtering procedure, the
intensity of the three images was not as identical as expected. There was still room for
improvement, but this was the best the present system could provide.
With simultaneous image capture it was hoped that much of the time-dependent
noise would be eliminated. However, as shown, other factors such as systematic
errors would still be present, affecting the quality of the intensity produced. As
the number of optical components in the system increased in order to obtain three
simultaneous images, the number of error carriers would also increase.
Generally, no matter how careful and precise the measurements were made, the
familiar error sources such as electronic and speckle noise were unavoidable in
automated evaluations of the fringe patterns. So, it could be expected that during laser
interaction, this noise existence would persist and remain in the calculation of the phase
change.
This filtered intensity served as the background intensity for the images before
undergoing laser interaction. This would be compared with the intensity of the images
after undergoing laser interaction to obtain the phase change. Naturally, the uneven
background intensity would affect the final phase calculation.
Generally, in phase shifting interferometry (without simultaneous image
capture), these identical intensity conditions could not be fulfilled because of the
time-dependent character of electronic noise and speckle displacements. Each image,
captured separately one after the other with its phase regularly shifted, would also suffer
from the slightly different energy burst of the laser.
4.6 The 90° Phase Difference
The dye laser light was initially linearly polarized. It was made to pass through
a polarizer, P1, aligned at 45° in its path. The beamsplitter, BS1, at the first
corner of the Mach-Zehnder arrangement split the light into two equal amplitudes. The
component that passed through the beamsplitter remained unchanged, whereas the
reflected component had its phase changed by 180°. A quarter-wave plate was
introduced in each arm of the interferometer to turn the plane-polarized light into
circularly polarized light. To make that possible, the quarter-wave plate must be
oriented at 45° so as to produce equal amplitude division of the waves.
The circularly polarized light in each arm of the interferometer was made to
differ by 90° by appropriate orientation of the analyzers A1, A2 and A3. Thus, these
analyzers acted as phase separators for the three images before they were
simultaneously captured. Having to work with three variables, A1, A2 and A3, on a very
sensitive interferometer can be quite a task. A large amount of data was collected and
analyzed to obtain just the right combination that would allow the 90° phase difference
between the frames.
Initially, a visual image of a point in the reference frame was sufficient to assist
in the rough identification of the required phase difference. This was accomplished by
placing a reference frame, (a U-shaped aluminium sheet with a fine-wire pointer in the
middle, shown in Figure 3.8) at the would-be interaction (sample) location in the
interferometer. A certain number of fringes would be contained in the small area of the
reference frame. The pointer in the middle of the U-shaped frame acted as the marker
that determined the amount of shift in the fringes of the three images. In this way, the
90°-90° shifts between the three images and also the sequence of appearance were much
easier to visualize.
Variations of the angles of the three analyzers were made, taking turns as to
which should be set as the constants and the variables. The angular combinations that
would visually produce the 90° phase shifts were noted. Based on this judgment, the
images were processed to find the actual values of the phase differences or to make the
necessary adjustments so as to produce the required phase differences.
Figure 4.9 shows the visual images with the 90°-90° phase difference between the
three simultaneously captured images before undergoing the Nd:YAG laser interaction.
Referring to the pointer: in the first image, the pointer is situated in the middle
of a light fringe. In the second image, the fringe appeared to move to the right edge
of a dark fringe, meaning the fringe has moved by λ/4, or 90°, to the right. The third
image has its pointer in the middle of the dark fringe, meaning the fringe has moved
another λ/4, or 90°, to the right from the second image. So, visually, the 90°-90°
separation could be seen with the naked eye, but for a numerical representation of the
separation, a computer program was developed.
The images were processed using Mathcad software to determine the suitable
combinations of the variables for the 90°-90° phase difference. Changing the number of
fringes in the frame also meant finding new combinations for the 90°-90° phase
difference of the interferogram. Then, there was also the need to determine the optimum
value of the number of fringes per frame that would provide the most sensitive and valid
measurements for the chosen algorithm. The algorithm chosen in this work would
perform better with fewer fringes, as there would be more pixels assigned to a
fringe. The number of fringes per frame chosen for this work was 7 to 15.
This part of the analysis was the most tedious and time consuming because it
involved a number of variables. By fixing the number of fringes per frame and also the
orientations of two of the analyzers, the author varied the orientation of the third
analyzer. A large amount of data was collected, sorted and processed to find the right
combination that would produce the required phase difference. However, we were not
able to formulate any algorithm for these variables involved that would produce the
required 90°- 90° phase difference.
Figure 4.9 The sequence of the 90°-90° phase difference of three images captured
simultaneously: (a) image 1, (b) image 2, (c) image 3. Scale 1: 0.125 cm.
Table 4.1 shows some of the combinations of the orientations of analyzers A1, A2
and A3 that would produce the 90°-90° phase difference between the interferograms.
The sequence of the image appearances obtained in this combination was:
image 1→ image 2→ image 3.
φ1, φ2 and φ3 were the wavefront phases of images 1, 2 and 3. ∆φ1 and ∆φ2 were the
differences of their respective phases:

∆φ1 = φ1 − φ2
∆φ2 = φ2 − φ3
Table 4.1 Some combinations for 90° phase-difference

A1 ±0.5°    A2 ±0.5°    A3 ±0.5°    ∆φ1 ±1°    ∆φ2 ±1°
 5.0        20.0        15.0        89.0       93.0
30.0        20.0        15.0        87.0       83.0
15.0        25.0        20.0        90.0       89.0
15.0        30.0        20.0        94.0       82.0
This is the critical parameter for the presently chosen algorithm to determine the
phase shift during interaction. Therefore, it should contain the smallest error possible.
However, the presence of different degrees of speckle distribution in the three
interferograms seemed to dominate the evaluations of the phase difference. Hence, the
computed principal phase values were corrupted to such an extent that locally
inconsistent regions were produced. These could be detected earlier from the saw-tooth
wrapped phase (Figure 4.10).
A smooth wrapped phase would be obtained if the signals were free from any
kind of noise. But as shown in Figure 4.10, it was not quite smooth. Some deviation,
indicated by the slight displacement, meant that some noise was still present. Here the
phase was wrapped between -π and π.
Figure 4.10 The wrapped phase (wrapped phase in rad against pixel location).
Figure 4.11 The unwrapped phase wavefronts (unwrapped phase in rad against pixel
location).
The unwrapped phase wavefronts of the three images, separated by a phase
difference of 90°, are shown in Figure 4.11. This again revealed the
inconsistencies observed in the wrapped phase earlier.
The aim of this part of the measurement was to obtain the smallest fluctuations
possible for the phase difference between the two sets of images. However, as
indicated by computer analysis of the phase differences in Figure 4.12, some errors
were still unavoidable. It shows the fluctuations of the separations of the three images
involved in this analysis.
∆φd is the phase difference between image 1 and image 2, whereas ∆φd2 is the
phase difference between image 2 and image 3, before undergoing laser interaction.
These values should be maintained at 90° throughout.

∆φd = φc − φ2c
∆φd2 = φ2c − φ3c
Figure 4.12 The fluctuations of the 90°-90° phase difference (phase shift in rad
against pixel location, with π/2 marked; ∆φd is the phase shift between images 1 and 2,
∆φd2 between images 2 and 3).
Computer analysis revealed the mean values of the phase difference obtained at
this location of the undisturbed fringe patterns for the two sets were 90.3° and 88.8°
with standard deviations of ±7.0°. These fluctuations of the 90° phase difference were
quite large, but that was unavoidable in automated calculations. Since phase
measurement was known to be a very sensitive and precise measurement, these initial
phase shift errors would certainly have significant effects on the final phase values.
In 1985, Cheng and Wyant illustrated a typical result where the phase difference
between fringes was 88° instead of the intended 90°. They introduced some practical
methods to calibrate the phase shifter in phase-shifting interferometry. The phase shifter
used was a piezoelectric transducer (PZT).
Usually phase shifter error seemed to be the major error contributor at this stage
of the phase shifting interferometry method. Tremendous efforts were made to
overcome this problem. One method for calibrating the phase shift was to use a
separate interferometer to monitor the position of the reference mirror. The detected
intensity was then used to control the phase shift controller. Another solution was to use
the measured phase positions with the generalized least-squares algorithm. This resulted
in an algorithm that adapts to the actual phase shifts for which the data were collected
(Malacara, 1992). However, the present system did not involve the use of a phase
shifter.
The error from incorrect phase shifts between data frames was again studied
by Wyant (1998). He also stressed that the errors were due to many sources, such as
incorrect phase shifter calibration, vibrations and air turbulence. For example, the phase
shift should be (nπ/2) but the actual phase shift was (nπ/2 + nξ). For a three-step π/2
method, Wyant (1998) produced a plot of the phase shift error due to a 5% phase shift
calibration error. The quantities that were sent to the phase error module were the
numerator and the denominator of the arctangent. The other quantities required were
the number of steps (three in this case), the value of the step (π/2) and the percent
calibration error.
With the present system, error contributors such as vibrations, turbulence and
other time-dependent noise were eliminated by the high-speed image capture using the dye
laser. A phase shifter was not used in simultaneous phase measurement, so the phase was not
shifted or changed in any way as the images were captured. Inter-frame errors were
also reduced by simultaneous image capture. The phase difference between the frames
was already set before images were captured. Thus, the errors involved in this
measurement system could come mainly from other coherent noise contributions.
4.7 The Effect of the Number of Fringes and Their Shapes
The technique and the algorithms used in this project would perform just as well
on any shape of the fringes even though the emphasis was on straight parallel and
circular fringes. This is because the measurements were made on the intensity level or
the gray scale recorded by each pixel of the CCD camera from the interferogram. Thus,
the shapes and sizes of the fringes did not matter. However, with this technique,
the bigger the fringe, the better the resolution, as there would be more pixels to
represent it. In other words, fewer fringes per frame is better with this
technique.
In this particular analysis, the number of fringes per frame used was 8.
However, the image size chosen for processing was a 256 x 256 pixels portion,
containing about three fringes. This meant that approximately 85 pixels were assigned
to a fringe. The chosen size was sufficient to cover the area of the interaction events up
to about 5 µs delay, sensitive enough to produce the required phase change and also
economical on computer memory.
4.8 Postprocessing Fringe Patterns
Once the predetermined requirements were met, the interferograms were ready
to be processed. Using the assigned algorithms, a single phase-value was extracted
from the three images at any given location. However, all phase measuring techniques
deliver the phase in mod 2π due to the sinusoidal nature of the intensity distributions.
This gives the saw-tooth appearance of the phase change in the undisturbed image,
just like that shown in Figure 4.10. This is called the wrapped phase stage of phase
measurement, which therefore cannot reveal the actual phase change taking place.
The basic assumption for the validity of the phase measurement or phase
unwrapping in particular is that the phase between any two adjacent pixels does not
change by more than π. This limitation in the measurement range results from the fact
that sampled imaging systems with a limited resolving power are used. There must be at
least two pixels per fringe, a condition that limits the maximum spatial frequency of the
fringes to half of the sampling frequency (Nyquist frequency) of the sensor recording
the interferogram. Fringe frequencies above the Nyquist frequency are aliased to a
lower spatial frequency. In such cases, the unwrapping algorithm is unable to
reconstruct the modified data. If the fringe frequency is higher than the Nyquist
frequency, the unwrapping algorithm fails (Rastogi, 1997).
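The adjacent-pixel condition above translates directly into the standard one-dimensional unwrapping rule: scan the wrapped profile and add or subtract 2π whenever neighbouring samples jump by more than π. A minimal sketch of that rule (not the Kreis or Yusof Munajat implementation referred to below):

```python
import math

def unwrap(phase):
    """Reconstruct a continuous phase from values wrapped into (-pi, pi].
    Valid only while the true phase changes by less than pi between
    adjacent pixels (the sampling condition discussed above)."""
    out = [phase[0]]
    offset = 0.0
    for prev, cur in zip(phase, phase[1:]):
        step = cur - prev
        if step > math.pi:        # downward 2*pi wrap detected
            offset -= 2 * math.pi
        elif step < -math.pi:     # upward 2*pi wrap detected
            offset += 2 * math.pi
        out.append(cur + offset)
    return out

# A linear phase ramp, wrapped and then recovered.
true = [0.1 * i for i in range(100)]
wrapped = [math.atan2(math.sin(p), math.cos(p)) for p in true]
recovered = unwrap(wrapped)   # matches the original ramp
```

If the fringe frequency exceeds the Nyquist limit, the wrapped steps themselves exceed π and this reconstruction fails, which is exactly the failure mode described above.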
The digital processing of fringe patterns is a fast-growing field in interferometry.
The quantitative evaluation of fringe pattern to extract the physical quantities to be
measured is the ultimate aim. Although some of the processing steps could be
performed optically, increasing computing power favors the use of digital image
processing in the quantitative evaluation and analysis of the fringes.
Postprocessing, or unwrapping the fringe patterns to reconstruct the continuous
phase distribution, according to Kreis (1986) and Yusof Munajat (1997), was given in
section 2.6.2.5.
4.9 Summary
From the discussion in this chapter, it can be concluded that there were certain
initial conditions that must be prepared prior to phase measurements. First of all,
the three images must be localized images. The intensity of the three images
simultaneously captured should be maintained at about the same level. The phase
separations between the three images were established as closely as possible to the
required 90° shifts. The number of fringes per frame should also be considered.
Another factor that is also crucial here is the location of the center of the
interaction event. As the images were cut into 256 x 256 pixel portions before
processing, it must be certain that the center of the interaction event would lie on the
same pixel location in all three images.
All these requirements were met with many challenges and obstacles. However,
within the scope of this work, the parameters obtained were sufficient to enable the
measurement of the phase change due to laser interaction to be made. The system was
designed to be user-friendly and no user intervention was necessary throughout the
measurement.
CHAPTER 5
SINGLE-INTERFEROGRAM PHASE INTERFEROMETRY
5.1 Introduction
The deviation of the fringes from the initial null condition is all that is needed to
establish the associated phase change. Previous works have shown the ability of single
interferometry methods to successfully produce the required phase changes. Two
methods were implemented here, namely fringe analysis and the FFT phase-mapping method.
Fringe analysis relies on comparing the shape of the disturbed interference
fringes with an ideal set of undisturbed fringes. The ideal set of undisturbed fringes
means a set of straight, parallel and equally spaced fringes. The deviation of a
disturbed fringe (usually a dark fringe is chosen) is found by first locating its center.
By measuring the deviations of the center of this dark fringe from the center of the
reference fringe or the reference axis, the fringe shifts were recorded at several different
intervals assigned by the chordal divisions along an axis. This method can be
time consuming and also suffers from nontrivial problems.
In phase measurement with the FFT method, the intensity of the fringes was
recorded. The digitized intensity distribution was Fourier transformed, giving a frequency
distribution in the spatial frequency domain. Digital filtering of the low-frequency background noise
and the high-frequency electronic noise was carried out in this domain. Once filtered,
this frequency distribution was transformed by the inverse Fourier transformation,
resulting in a complex-valued image. The phase can then be calculated from the arctan
of this complex-valued function.
This chapter reveals the single-interferogram phase interferometry methods
carried out to obtain the phase change resulting from laser interactions. It also provides
some of the advantages, disadvantages and problems associated with both methods.
5.2 Fringe Analysis Technique
In the normal course of events, the fringe shifts are measured either by eye or by
computer programs that are able to follow the path of the fringes and thus
determine their deviation from linearity. The first technique tends to be difficult,
inaccurate and time consuming, whereas the second is often unreliable for realistic
interferograms where noise is present, with the result that the phase map often needs
touching up by hand afterwards. Furthermore, both techniques produce quite
large uncertainties when the fringe visibility is poor or the fringe shifts are small.
The principle of this analysis was described in Chapter 2, section 2.8.1. Due to
the spherically symmetrical nature of the wave, measurements were made only on a half
section of the image, as the rest would be mirror images of that region. This would cut
short data collecting time and at the same time also reduce the amount of data processed
and analyzed. With the use of computers, the previously tedious and lengthy process
could be made easier.
The image shown in Figure 5.1(a) was taken at a time delay of 3.6 µs. It was
initially divided into four quarters and the measurements of fringe deviations were made
from the selected reference. The best reference would be a line running through the
center of a dark fringe from the undisturbed region of the image, which runs through the
center of the image. Fringe deviations are measured from this reference at certain
intervals (chordal divisions). Global Lab software was used in the determination of the
fringe deviations. Chordal divisions were made in the chosen half section. The calibration
factor was previously determined from the magnification factor of the image.

Figure 5.1 (a) The image at 3.6 µs; scale 1: 0.1 mm. (b) The corresponding fringe
shift (fringe shift in mm against radial data array).
From the calibration factor set for this arrangement, the radius of this wave was
found to be 3.245 mm, which was an equivalent of 93 pixels. The chosen section was
divided into 31 chordal zones and thus each zone was represented by three pixels.
Increasing the number of chordal zones would smoothen the fringe shift profile thereby
increasing the accuracy. The data collected using Global Lab software were fed into a
computer program (in Mathcad 7) for the fringe shift (Figure 5.1b) and pressure
change calculations (Figure 5.2).
If the dark fringe that was usually used to measure fringe deviation fell on the
reference axis made through the center of the image, then fringe deviations could be
measured directly. Otherwise, certain compensations should be made to the fringe
deviation measurements. Sampling is usually made along a dark fringe because of its
high visibility. That is the reason why this measurement technique works best with
images with good visibility.
The fringe-shift profile produced of half of the cross-section of the wave taken
through its center was as shown in Figure 5.1b. The other half would be the mirror
image of this profile, which altogether, provided the total fringe shift profile of the
image across the interferogram through the center of the wave. Through proportionality
(Equation 2.5), the phase change resulting from the fringe shift in Figure 5.1(b) would
have about the same profile. The associated changes in the refractive index, density and
pressure that took place could be related from these phase measurements.
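Equation 2.5 is not reproduced in this chapter; assuming the standard proportionality that a shift of one full fringe spacing corresponds to a 2π phase change, the conversion is a one-liner (the numbers below are illustrative, not thesis data):

```python
import math

def phase_change(fringe_shift_mm, fringe_spacing_mm):
    """Standard fringe-shift-to-phase proportionality: a shift of one full
    fringe spacing corresponds to a phase change of 2*pi radians."""
    return 2 * math.pi * fringe_shift_mm / fringe_spacing_mm

# Example: a 0.25 mm shift with 0.5 mm fringe spacing is half a fringe,
# i.e. a phase change of pi radians.
dphi = phase_change(0.25, 0.5)
```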
The corresponding pressure-change profile of the interaction event is shown in
Figure 5.2. The profile produced is not the smooth profile expected of such changes. This
could be due to the difficulty in determining the fringe centers as the shape and
size of the fringes changed around the interaction point. Dust particles on the optical
components along the light path might also contribute to the deviation of the fringes.
However, the maximum value of the pressure change obtained from this interferogram
using the fringe analysis technique was 0.332 atm, which occurred at a wave radius of
2.738 mm. This value corresponded to 3.7 mJ, the energy of the laser produced with the
focusing system when 850 V was supplied to the flashlamp at a room temperature of 25°C.
Figure 5.2 Profile of pressure change of the event (pressure change in atm against
radius of acoustic wave in mm).
The main problem associated with this measurement technique, as
already mentioned, was fringe center determination, especially when the fringe
contrast is poor. Fringe centers could be determined by naked-eye visualization or by
digital determination of the maximum or minimum gray level of the intensity of a fringe.
Problems also arose when the reference fringes in the undisturbed region were not straight
and parallel. The spherical nature of the acoustic waves produced images whose
fringes changed their shapes and sizes rather drastically. This made fringe center
identification an eye-straining process.
However, the advantage of this method is its reliability. The human eye is a
better detector of changes in the fringes. In fact, this technique is suitable for
abrupt changes in the phase values, such as in liquids and at boundary conditions. By
increasing the number of chordal divisions, the accuracy of phase measurement would
increase. Smaller size fringes would make determination of the fringe centers easier.
Apart from all the difficulties mentioned, this technique remains a reliable technique for
phase measurement.
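As a rough illustration of the digital approach mentioned above, the sketch below locates dark-fringe centers as local minima of an intensity profile. The synthetic cosine fringe pattern and the neighbourhood size are assumptions for the example, not parameters from this work.

```python
import numpy as np

def fringe_centers(intensity, window=5):
    """Return pixel indices of local intensity minima (dark-fringe centers).

    A point counts as a center when it is the minimum of its neighbourhood
    and darker than the mean level; `window` is the neighbourhood half-width.
    """
    centers = []
    for i in range(window, len(intensity) - window):
        neighbourhood = intensity[i - window:i + window + 1]
        if intensity[i] == neighbourhood.min() and intensity[i] < intensity.mean():
            centers.append(i)
    # merge plateaus: keep only the first index of consecutive runs
    return [c for j, c in enumerate(centers) if j == 0 or c - centers[j - 1] > 1]

# synthetic straight, parallel fringes: cosine intensity across 256 pixels
x = np.arange(256)
intensity = 128 + 100 * np.cos(2 * np.pi * x / 32)   # a dark fringe every 32 px
print(fringe_centers(intensity))
```

For real interferograms with speckle and poor contrast, the profile would first need smoothing, which is exactly where the difficulties described above arise.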
Kalal and Nugent (1988) produced a technique for Abel inversion using fast
Fourier transforms to study the properties of self-generated magnetic fields in
laser-induced plasma. They found that the technique was faster, potentially very
accurate, and capable of handling large data sets.
5.3 FFT Phase Mapping Technique
This technique of phase measurement depended on the intensity levels (gray
levels) recorded at each pixel of the interferogram. It managed to overcome the
problem of identifying fringe centers faced in fringe analysis. In fact, this
technique worked better with fringes of low spatial frequency, that is, fewer fringes
per frame or simply bigger fringes. More pixels were then assigned to each fringe,
providing better accuracy.
Another advantage of this method over fringe analysis was its independence of
the reference axes. Measurement could commence from any location on the
interferogram, because the technique was independent of the shape of the fringes; the
starting point for data collection was therefore unimportant. What mattered was the
variation of the intensity levels across the interferogram.
The digitized intensity distribution was Fourier transformed into the
spatial-frequency domain. Filtering was done at this stage to remove the
low-frequency background, the high-frequency noise and other disturbances such as
saturated intensity, by adjusting the spatial-frequency cut-off. This enhanced the
quality of the interference pattern.
The inverse Fourier transform gave a complex-valued function c(x, y) of the
image. On the basis of this, the phase δ(x, y) could be calculated from the
arctangent of the complex function:

δ(x, y) = arctan[Im c(x, y) / Re c(x, y)]        (5.1)
The signs of the imaginary part Im c(x, y) and the real part Re c(x, y) must be
examined separately in order to obtain the phase over the range −π to π. Taking into
account the signs of the numerator and the denominator, the principal value of the
arctan function, continuous over a period of 2π, is reconstructed. As a result, a
mod-2π wrapped phase profile was obtained, and phase unwrapping was necessary to
obtain a continuous phase map.
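The whole sequence described above (transform, band-pass filter around the carrier, inverse transform, sign-aware arctangent, unwrapping) can be sketched for a one-dimensional fringe profile as follows. The synthetic interferogram, carrier frequency and filter band are assumptions chosen for illustration, not the parameters used in this work.

```python
import numpy as np

def fft_phase_map(intensity, carrier, band=0.5):
    """Recover the phase of a 1-D fringe profile by FFT phase mapping.

    intensity : fringe intensity I(x) = a + b*cos(2*pi*carrier*x + phi(x))
    carrier   : spatial carrier frequency (cycles per pixel)
    band      : half-width of the pass band around the carrier, as a
                fraction of the carrier frequency
    """
    n = len(intensity)
    spectrum = np.fft.fft(intensity)
    freqs = np.fft.fftfreq(n)
    # keep only the positive-frequency sideband near the carrier; this
    # removes the dc background and the conjugate sideband
    mask = (freqs > carrier * (1 - band)) & (freqs < carrier * (1 + band))
    c = np.fft.ifft(spectrum * mask)           # complex-valued function c(x)
    wrapped = np.arctan2(c.imag, c.real)       # Eq. (5.1), sign-aware arctan
    unwrapped = np.unwrap(wrapped)
    # subtract the linear carrier term, leaving only the phase disturbance
    return unwrapped - 2 * np.pi * carrier * np.arange(n)

# synthetic interferogram: carrier fringes plus a smooth Gaussian phase bump
x = np.arange(512)
phi = 2.0 * np.exp(-((x - 256) / 40.0) ** 2)   # "disturbance" phase, peak 2 rad
I = 120 + 100 * np.cos(2 * np.pi * x / 16 + phi)
phase = fft_phase_map(I, carrier=1 / 16)
phase -= phase[:50].mean()                     # reference the undisturbed region
print(round(float(phase.max()), 2))            # close to the 2 rad bump
```

Because `arctan2` examines the signs of the imaginary and real parts separately, it returns the principal value over the full −π to π range, exactly as required by Equation (5.1).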
Another interferogram (Figure 5.3(a)), also taken 3.6 µs after laser
interaction, was selected for phase calculation using this technique, since the one
selected for fringe analysis resulted in phase ambiguity. This appeared to happen to
a large percentage of the images captured. However, the quality of the image selected
was not quite the same as that of the image used in fringe analysis, possibly because
of slight fluctuations of the laser pulse energy and the different speckle
distribution in each time frame.
Figure 5.3(b) shows the phase shift calculated through the center of the
activity (y = 128). The profile of the phase shift across the wave did not show the
smooth, symmetrical pattern previously assumed, and the maximum phase change differed
between the left and right sides of the profile. This could be expected from the size
and shape of the fringes through the center of the interferogram.

The size of the fringes after laser interaction was no longer uniform; some
were bigger and others smaller than the background fringes. Bigger fringes meant
better resolution, as they contained more pixels than smaller ones. The smoother
slope in the profile therefore indicated the lower resolution due to the smaller
fringes in the left half of the interferogram (Figure 5.3(a)).
Judging from the profile of the phase change through the center of the activity
(Figure 5.3(b)), error contaminations could have contributed to the unsymmetrical
profile obtained. The error could be due to the presence of dust or contaminations of
the optical surfaces in the path of the beam. This is a common problem faced with
single interferometry phase mapping analysis of laser interaction.

Figure 5.3 (a) Interferogram at t = 3.6 µs delay (scale 1: 0.1 mm). (b) The phase-change profile (rad, against pixel location) by the FFT phase mapping method.

Therefore, it would be
expected that the pressure profile resulting from this interferogram would also deviate
from the spherically symmetrical assumption made earlier.
The corresponding pressure-change profile due to the phase shift was worked
out, and the result is shown in Figure 5.4. Since the phase-shift profile (Figure
5.3(b)) was not symmetrical as predicted, the pressure-change profile shown in Figure
5.4 could not give an accurate picture of the change across the radius of the wave.
Figure 5.4 Profile of the corresponding pressure change (atm, against radius of acoustic wave in mm).
However, the maximum pressure change calculated from this particular portion
of the interferogram was 0.278 atm, occurring at a wave radius of 2.568 mm. As
expected, these values differed from those obtained with the fringe analysis method:
the maximum pressure change was lower by approximately 16.3%, while the location of
the maximum differed by approximately 6.2%. The difference between the two methods
was expected, because in phase mapping by Fourier analysis, digital filtering had
removed the unwanted noise prior to phase analysis; in the process, some part of the
signal was probably removed as well. No noise filtering was included in fringe
analysis. Another factor that could contribute to this difference was the small
fluctuation of the laser energy from pulse to pulse.
To demonstrate the phase ambiguity problem of single-interferogram analysis,
another interferogram was analyzed using this technique. Figure 5.5(a) is the image
of the event taken 3.2 µs after laser interaction. The phase shift was calculated at
the same location (y = 128) as for Figure 5.3. However, as seen in Figure 5.5(b), the
phase change failed to show the symmetrical behaviour previously assumed for air. The
right-hand side of the phase-shift profile should rise to form a phase well and
finally end with zero phase shift in the undisturbed region of the image, as in
Figure 5.3(b). Because of an additional fringe at the center of the image, the
profile obtained gave an incorrect phase interpretation. This is the phase ambiguity
problem associated with laser-interacted interferograms which this project sets out
to solve.
The existence of extra fringes can sometimes be observed directly in the
resulting interferogram of laser interactions. Figure 5.6 shows an image with three
dark fringes in the undisturbed background, whereas through the center of the
interferogram four dark fringes are present. One extra fringe has therefore been
added at this location, and a phase-shift calculation made across it using the phase
mapping method cannot be justified; this image would certainly produce ambiguity when
analyzed.

The majority of the images produced in this work exhibited this ambiguity when
analyzed with this technique. The images to be analyzed this way therefore had to be
chosen carefully to exclude those containing extra fringes.
Figure 5.5 (a) Interferogram at 3.2 µs delay (scale 1: 0.1 mm). (b) The associated phase change (rad, against pixel location) exhibiting ambiguity.
Figure 5.6 The extra fringe in the interferogram (scale 1: 0.1 mm).
5.4 Problems of Single Interferometry Phase Mapping
Phase ambiguity due to extra fringes in laser-interacted interferograms
appeared to be the main cause of failure to obtain a continuous phase distribution
with this technique. Even when the image was preprocessed before analysis, the
additional fringe would still occur because of the nature of the acoustic wave
produced by laser interaction.
An approach to remedy the situation is to take a series of interferograms
while the phase between the two beams changes. The phase change can be obtained by
analyzing the point-by-point irradiance of three or more interferograms as the phase
difference is varied. This method of obtaining phase information from interferograms
is known as phase-shifting interferometry (PSI).
Normally, phase shifting involved capturing a series of images, each shifted
in phase from the previous one by a certain amount, and each captured in a different
time frame. The high measurement accuracy of phase-shifting interferometry could
therefore suffer from different time-dependent noise in each interferogram captured.
Several PSI techniques and algorithms have been proposed to improve the accuracy of
phase measurements.
In 1985, Nugent extended a technique originally proposed by Takeda et al.
(1982) to eliminate the significant errors introduced by the digitization of the
interferogram and by the non-linearity of the recording film.
Talamonti et al. (1996) developed a numerical model to emulate the
capabilities of a system performing non-contact absolute distance measurements. The
model incorporated methods to minimize signal-processing and digital-sampling errors,
and evaluated the accuracy limitations imposed by spectral peak isolation using
Hanning, Blackman and Gaussian windows in the FFT technique. They found that the
precision was limited by the non-linearity of the laser scan.
5.5 Summary
The maximum pressure change associated with laser interaction at a time delay
of 3.6 µs, as measured by the two single-interferometry methods, differed by a
significant amount. Digital filtering in the FFT phase measurement was thought to be
the major factor affecting this value. A better comparison would have been to carry
out the two techniques on the same interferogram, which unfortunately was not
available because of ambiguity.

From the results obtained, it was clear that single-interferometry phase
measurement using the two techniques described, though simple, could at times run
into problems. With the FFT phase mapping technique one could sometimes be faced with
extra fringes, leading to ambiguity in the unwrapped phase, which then could not
represent the actual phase changes taking place.
With fringe analysis, difficulties could arise when the reference fringes were
not straight and parallel, making measurement of fringe shifts a difficult task. The
optimal fringe size is that which makes determination of the fringe centers easiest.
It is usually harder to locate the fringe centers of larger fringes; smaller fringes
make center determination easier, but sensitivity and resolution may then be lost.
Locating the fringe centers can also be complicated by poor contrast, variations in
fringe visibility, and image noise due to laser speckle and dust in the optical
system. Images of laser interaction in this work suffered from these problems because
of the changing shapes and sizes of the resulting fringes. In addition, to increase
the accuracy of the phase measurement, more data would need to be collected and
processed.
These two techniques do, however, have the advantage of simple algorithms and
much reduced processing times. But to obtain accurate phase profiles, the images for
phase analysis must be of excellent quality, meaning the signal must be noise-free
and the experiment conducted in a clean environment. Even then, the ambiguity problem
in laser-interacted interferograms is still unavoidable. Thus, for accuracy, many
single images sometimes had to be processed to find those that would yield the
expected phase change. As new images were processed, new and different time-dependent
factors had to be considered and attended to individually, again yielding different
phase profiles.
In this project, in order to fulfill the objective of reducing the problem of
phase ambiguity, we decided on another technique, with its algorithms still based on
the FFT phase mapping method. The technique should also be capable of eliminating the
problems faced in fringe analysis, such as fringe-center identification and
irregular-shaped reference fringes. Coupled with high-speed photography, the system
developed would also eliminate the problems of vibration and turbulence.
CHAPTER 6
SIMULTANEOUS PHASE MEASUREMENT INTERFEROMETRY
6.1 Introduction
The quantity of primary interest is the phase change ∆φ(x,y) in the fringes,
which carries the information required for material characterization. Only in the
last 15 years have several techniques for the automatic and precise reconstruction of
phase from fringe patterns been developed. A very common method of obtaining the
phase change is phase shifting, which involves obtaining images of different phase
values one after the other. This involves a different time frame for each image
capture, so time-dependent noise contributions are unavoidable; it also means that
manual intervention is needed each time a new image is captured.
Phase measurement interferometry is the most sensitive and precise form of
measurement. Manual intervention and the different time frames would surely introduce
a different level of noise in each image produced. Ideally, phase-shifting
interferometry requires the images used in the algorithms to be of the same
intensity; however, the intensity of the images can also be time-dependent.
Phase measurement interferometry by FFT-based phase mapping seems to be the
most accepted technique for analyzing fringe patterns. Only one interferogram is
required for the analysis: the phase at each pixel is calculated by digital image
processing based on the intensity variation of the interferogram. However, problems
can arise from images with extraneous fringes, a phenomenon that is quite common in
images produced by laser interaction. The existence of these extra fringes distorts
the profile of the phase changes taking place during laser interaction, resulting in
phase discontinuity. Single-interferogram analysis of images with extraneous fringes
can therefore end in phase ambiguity, preventing any further analysis of the
interferogram.
In this work, we modified the Mach-Zehnder interferometer to produce three
outputs, which were made to differ in phase by 90° from one another. Three CCD
cameras placed at the three interferometer outputs were connected to three frame
grabbers in a computer. Trigger and synchronization electronics controlled the start
of the activity, the delay between the dye and Nd:YAG laser firings, and the arming
of the frame grabbers to capture the oncoming images. Three images were thus captured
simultaneously with a single laser pulse. This simultaneous image-capture technique
and the three-image 90° phase-difference algorithm were proposed to reduce, if not
eliminate, the ambiguity problem usually found in single-interferometry phase
mapping.
6.2 Simultaneous Phase Measurement Interferometry
This method was chosen to overcome the problem of extraneous fringes in
laser-interacted images, which leads to phase ambiguity and prevents further analysis
of the changes associated with fringe shifts in the interferogram. Since phase
mapping on a single interferogram could encounter this problem, we decided to apply
the same technique to a three-interferogram model using the appropriate
phase-shifting algorithms. A computer program in Mathcad 7 (APPENDIX G) was written
for this analysis.
Before any disturbance from the Nd:YAG pulse was introduced, the fringes
produced were straight and parallel, and the intensities of the three images I1, I2
and I3 should be the same. The three images were prearranged to be shifted by 90°
with respect to one another (Equation (2.25)); this was achieved by appropriate
orientation of the analyzers in the arms of the interferometer.
The three images of the interaction event, taken simultaneously by the three
CCD cameras with a single laser shot, are shown in Figure 6.1. The advancing acoustic
wave at this particular instant, 3.6 µs after laser interaction, is indicated by the
spherical waveform. The deviation from straight and parallel fringes resulted from
the decrease in the refractive index of the interaction region relative to the outer
undisturbed region. The decrease in refractive index also implies a change in the
density of the medium in that region, which can in turn be related to a decrease in
its pressure. As a result, a region of advancing high pressure surrounded a region of
low pressure, seen as a spherical waveform.
The number of fringes per frame in the captured images was 8, but the region
chosen for analysis (the cut portion) contained only about three fringes, a phase
equivalent of 6π. The acoustic wave patterns in Figure 6.1 were captured 3.6 µs after
interaction with the Nd:YAG laser.
The intensity level could be read from anywhere in the array. The unfiltered
intensity distribution in the undisturbed regions of the three images is shown in
Figure 6.2, at the location y = 15 on the interferogram. This location was chosen
because it was well away from the advancing acoustic wave and thus free of any phase
effects; it could also be used to recheck the 90°-90° phase separations before phase
measurements were made at the interaction region.
The intensity values were given in grayscale. As seen in the figure, the
maximum intensity level of the images was around 200 (on a grayscale range of 1-256).
However, the phase change depended on the variation in the intensity level of the
images, not on the absolute intensity.
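One simple numerical way to recheck the 90° separations at an undisturbed row, sketched below under the assumption of clean sinusoidal background fringes, is to compare the arguments of the Fourier coefficients at the carrier frequency. The 32-pixel carrier period and the profile length are assumptions for the example.

```python
import numpy as np

def relative_phase_deg(i_a, i_b, carrier_bin):
    """Phase of profile i_b relative to i_a, in degrees, read from the
    argument of the FFT coefficient at the carrier-frequency bin."""
    pa = np.angle(np.fft.fft(i_a)[carrier_bin])
    pb = np.angle(np.fft.fft(i_b)[carrier_bin])
    # wrap the difference into (-180, 180] degrees
    return np.degrees((pb - pa + np.pi) % (2 * np.pi) - np.pi)

# three synthetic background profiles, 90 degrees apart, 256 px, period 32 px
x = np.arange(256)
carrier_bin = 256 // 32                      # 8 fringe cycles across the frame
i1 = 128 + 80 * np.cos(2 * np.pi * x / 32)
i2 = 128 + 80 * np.cos(2 * np.pi * x / 32 + np.pi / 2)
i3 = 128 + 80 * np.cos(2 * np.pi * x / 32 + np.pi)
print(relative_phase_deg(i1, i2, carrier_bin), relative_phase_deg(i2, i3, carrier_bin))
```

On real data the same check would be applied to an undisturbed row such as y = 15, where the fringes are still straight and parallel.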
Figure 6.1 The images of laser interaction (Image 1, Image 2 and Image 3) taken simultaneously using three CCD cameras at 3.6 µs delay (scale 1: 0.1 mm).
Figure 6.2 Intensity distribution (grayscale level against pixel location) of the three images I1, I2 and I3 at y = 15.
This intensity distribution was then Fourier transformed into the
spatial-frequency domain, where the slow spatial variation of the signal parameters
is separated from the spatial carrier frequency. Appropriate frequency cut-off
filtering was carried out to remove the low-frequency background as well as the
higher-frequency noise, as discussed in section 4.4. Inverse Fourier transformation
then produced the required filtered signal. Figure 6.3 shows the unfiltered and
filtered signals for each image used in the analysis: the solid line represents the
unfiltered signal and the dotted line the signal filtered with the same frequency
cut-off range. These measurements were made at y = 128, the center of the interaction
event. The reduction of the intensity level was due to the filtering of the
low-frequency (dc) component.
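The frequency cut-off filtering step can be sketched as a band-pass mask in the Fourier domain. The cut-off values below are assumptions chosen for illustration, not the cut-offs used in this work.

```python
import numpy as np

def bandpass_filter(signal, low_cut, high_cut):
    """Remove the dc/low-frequency background and high-frequency noise by
    zeroing Fourier bins outside [low_cut, high_cut] (cycles per frame)."""
    spectrum = np.fft.fft(signal)
    # frequency of each bin in cycles per frame (absolute value covers
    # the negative bins, so the mask is symmetric and the output stays real)
    freqs = np.abs(np.fft.fftfreq(len(signal), d=1.0 / len(signal)))
    mask = (freqs >= low_cut) & (freqs <= high_cut)
    return np.real(np.fft.ifft(spectrum * mask))

# fringe signal with a dc background and high-frequency noise
x = np.arange(256)
clean = 100 * np.cos(2 * np.pi * 8 * x / 256)           # 8 cycles per frame
signal = 128 + clean + 10 * np.cos(2 * np.pi * 60 * x / 256)
filtered = bandpass_filter(signal, low_cut=4, high_cut=20)
```

Removing bin zero (the dc component) is what produces the drop in mean intensity level noted above for the filtered signals of Figure 6.3.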
The filtered intensities from the images containing the disturbance could be
checked again, at the undisturbed region, to confirm their 90° phase separation from
one another. This also allowed the sequence of the images to be reconfirmed, that is,
identifying which of the three is I1, I2 and I3, in the order required by the
algorithm. Both of these checks had in fact already been carried out prior to phase
measurement of the interaction events.
Figure 6.3 The unfiltered (solid line) and filtered (dotted line) signals, plotted as intensity level against pixel location, for the three images.
Therefore, if the previous conditions were maintained, the system was indeed
stable and ready for use. Only then were these signals ready to undergo the
mathematical calculations for phase measurement.
The phase change ∆φ(x, y) that occurred after laser interaction was analyzed,
using this arrangement, with two three-step algorithms with 90° phase shifts. The
first phase-shifting algorithm, devised by Wyant (1984), gives the resulting
wavefront phase as

φ(x, y) = tan⁻¹[(I3 − I2)/(I1 − I2)]        (6.1)
To obtain the phase change ∆φ(x, y), the phase obtained at the undisturbed
region, referred to as the reference location, is subtracted from that obtained at
the measured (interacted) location (Equation (6.2)). Since these two sets of values
come from the same set of images, the noise factors are about the same and are thus
similarly eliminated. The reference location was chosen well away from the boundary
of the wave to ensure that it was unaffected by the laser interaction.
∆φ(x, y) = tan⁻¹[(I3 − I2)/(I1 − I2)]_m − tan⁻¹[(I3 − I2)/(I1 − I2)]_ref        (6.2)
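A minimal sketch of the first algorithm, applied with `arctan2` so that the signs of numerator and denominator are handled over the full −π to π range, is given below. The synthetic 90°-stepped intensities and the 0.8 rad test phase change are assumptions for the example; any constant phase offset of the algorithm cancels in the subtraction of Equation (6.2).

```python
import numpy as np

def wyant_phase(i1, i2, i3):
    # Eq. (6.1): wrapped wavefront phase from three 90-degree-stepped frames
    return np.arctan2(i3 - i2, i1 - i2)

def phase_change(frames_m, frames_ref):
    # Eq. (6.2): phase at the measured location minus phase at the reference
    return wyant_phase(*frames_m) - wyant_phase(*frames_ref)

def frames(phi, a=128.0, b=80.0):
    # three intensities 90 degrees apart, as set by the analyzers (assumed
    # bias a and modulation b are illustrative grayscale values)
    return [a + b * np.cos(phi + k * np.pi / 2) for k in range(3)]

delta = 0.8                                   # true phase change, rad
change = phase_change(frames(0.3 + delta), frames(0.3))
print(round(float(change), 3))                # recovers the 0.8 rad change
```

In the actual analysis I1, I2 and I3 are whole image arrays, so the same two lines yield the phase change at every pixel at once.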
Figure 6.4(a) shows the wrapped phase produced by the arctan function of the
algorithm. As seen, it displays a discontinuous phase distribution of the event and
therefore cannot, by itself, provide a meaningful representation of the phase changes
produced during interaction.
Figure 6.4 (a) The wrapped phase spectrum (rad, against pixel location across the image). (b) The unwrapped phase wavefront (disturbed profile) and its deviation from the reference.
Figure 6.4(b) shows the unwrapped phase wavefront of the interaction region,
compared with the unwrapped phase wavefront of the undisturbed region of the same
image (the reference). The phase of the two wavefronts is clearly represented after
the unwrapping process, and the change in phase in the interaction region is
indicated by the deviation from the reference profile.
The downward tilt of the phase wavefronts in the figure is due to the tilt of the
beamsplitter, BS2 in the interferometer, in order to produce the fringes. Turning BS2 in
the opposite direction would tilt the phase wavefronts in the upward direction. The size
of the fringes is governed by the tilt angles of this beamsplitter.
Using the first algorithm, Equation (6.2), the phase change ∆φ(x, y) through
the center of the interaction region is shown in Figure 6.5 (APPENDIX G). This is as
expected of laser interaction in air: the abrupt change in pressure created a region
of low pressure at the center of interaction with a corresponding region of high
pressure surrounding it.
Figure 6.5 The phase change (rad, against pixel location) with the first algorithm, showing a high-pressure region around a low-pressure well.
The profile does not display the very smooth pattern it would if the signal
were pure (noise-free). Ideally, the undisturbed background region should show no
change in phase values; however, this is not the case at the beginning of the profile
in Figure 6.5. This might be due to some noise that passed through the filtering
process.
The other three-step algorithm with 90° phase shifts was introduced by
Gallagher and Herriott (1972) and Creath (1988), and was modified to give the
wavefront phase as

φ(x, y) = tan⁻¹[(I1 − I3)/(2I2 − I1 − I3)]        (6.3)
To obtain the phase change, just as with the earlier algorithm, the phase
wavefront of the uninterrupted region is subtracted from that of the interacted
region:

∆φ(x, y) = tan⁻¹[(I1 − I3)/(2I2 − I1 − I3)]_m − tan⁻¹[(I1 − I3)/(2I2 − I1 − I3)]_ref        (6.4)
Using the second three-step algorithm, also with a 90° phase step, as given by
Equation (6.4), produced the result shown in Figure 6.6. The two phase-change
profiles (Figures 6.5 and 6.6) clearly show a close resemblance. The only noticeable
difference is the starting point of the profile, which could arise from the different
predetermined conditions.

This showed that images separated by 90° phase shifts should produce the same
profile regardless of the algorithm used, which in turn confirmed the validity of
phase measurements made with the 90° phase-separated images using the system designed
for this project.
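The agreement between the two algorithms on ideal 90°-separated data can be checked numerically. In the sketch below, each algorithm is applied to a "measured" and a "reference" set of synthetic intensities (assumed for the example); each algorithm carries a different constant arctan offset, but both recover the same phase change once the reference is subtracted.

```python
import numpy as np

def algo1(i1, i2, i3):
    # Eq. (6.1), the Wyant three-step form
    return np.arctan2(i3 - i2, i1 - i2)

def algo2(i1, i2, i3):
    # Eq. (6.3), the Gallagher-Herriott / Creath three-step form
    return np.arctan2(i1 - i3, 2 * i2 - i1 - i3)

def frames(phi, a=128.0, b=80.0):
    # three intensities 90 degrees apart (illustrative bias and modulation)
    return [a + b * np.cos(phi + k * np.pi / 2) for k in range(3)]

# phase change between a "measured" and a "reference" pixel, each algorithm
delta = 0.8
d1 = algo1(*frames(0.3 + delta)) - algo1(*frames(0.3))
d2 = algo2(*frames(0.3 + delta)) - algo2(*frames(0.3))
print(round(float(d1), 6), round(float(d2), 6))   # both recover 0.8 rad
```

The differing constant offsets of the two formulas account for the different starting points of the profiles noted above; they disappear in the referenced phase change.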
Figure 6.6 The phase change (rad, against pixel location) with the second algorithm.
Judging from the phase profiles of the two algorithms, we chose the first
algorithm for further analysis, as it seemed to fit better with the system designed
for this work.
6.3 Refractive Index, Density and Pressure Profile of Image
Section 2.4 explained the relation between fringe shifts and phase shifts, and
how phase shifts are related to the changes that occur in the refractive index,
density and pressure of the sample. The theoretical relationship between refractive
index and density was initially described by Equation (2.7), and was further
developed and simplified into Equation (2.8). The change in refractive index ∆n can
be expressed as a change in pressure according to Equation (2.9). Since the pressures
usually generated in the laboratory are less than 100 bars, a constant of
proportionality between the two variables P and ρ is assumed.
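Under that proportionality assumption, converting a measured phase change into refractive-index, density and pressure changes is a chain of linear scalings. The sketch below uses the Gladstone-Dale form ∆n = K∆ρ; the wavelength, path length, Gladstone-Dale constant and input phase change are illustrative assumptions, not the calibration values of this work.

```python
import math

# assumed constants for illustration only
WAVELENGTH = 632.8e-9    # probe wavelength (m), He-Ne value assumed
PATH = 10e-3             # interaction path length through the medium (m)
K_GD = 2.26e-4           # Gladstone-Dale constant for air (m^3/kg), approximate
RHO0 = 1.184             # air density at 25 C (kg/m^3)
P0 = 101325.0            # atmospheric pressure (Pa)

def phase_to_pressure(dphi):
    """Chain a phase change (rad) to delta-n, delta-rho and delta-P, assuming
    delta-n = dphi * lambda / (2*pi*L) and a linear P-rho proportionality."""
    dn = dphi * WAVELENGTH / (2 * math.pi * PATH)
    drho = dn / K_GD                  # Gladstone-Dale: dn = K * drho
    dp = P0 * drho / RHO0             # pressure change proportional to density
    return dn, drho, dp / 101325.0    # pressure change expressed in atm

dn, drho, dp_atm = phase_to_pressure(6.9)   # an assumed ~6.9 rad phase change
print(dn, drho, dp_atm)
```

The ordering of the chain mirrors the analysis of this section: phase change first, then refractive index, then density, then pressure.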
From Figure 6.5, the region selected for further analysis ran from pixel 26 to
pixel 130, representing a wave radius of 3.256 mm for the acoustic wave produced at a
time delay of 3.6 µs; the center of the activity was at pixel 128. Figure 6.7 shows
the profile of the change in the refractive index of the medium after interaction
with the Nd:YAG laser. This represents half of the full profile taken through the
center of the interaction event (y = 128); by the symmetry assumption, the other half
is its mirror image. Thus, if the refractive index of the medium before interaction
is known, the absolute refractive index at any location in the interferogram after
interaction can be computed.
The graph, obtained from computer analysis, shows a maximum change in
refractive index after laser interaction of 0.070 × 10⁻³, occurring at a radius of
2.526 mm. The increase in the refractive index of the medium was due to the impact of
the abrupt pressure change of the acoustic wave. This value is subject to the energy
of the laser output, which in this case was 3.7 mJ.
Figure 6.7 Change in the refractive index (∆n × 10⁻³, against radius of wave in mm) due to interaction.
The profile of the change in density is similar to that of the change in
refractive index (Figure 6.7) because of their proportionality. Figure 6.8 shows the
corresponding profile of the density change for this event: a maximum change in
density of 0.211 kg m⁻³ occurred at a radius of 2.526 mm. The absolute density would
depend on the initial density of the medium before laser interaction.
Figure 6.8 Change in density (kg m⁻³, against radius of acoustic wave in mm) due to laser interaction.

Figure 6.9 The profile of pressure change (atm, against radius of wave in mm) of the event; the maximum of 0.244 atm occurs at a radius of 2.526 mm.
Figure 6.9 shows that the changes occurring in the refractive index are
proportional to the changes occurring in the pressure, as displayed by the equations.
From the graph, the maximum pressure change in this portion of the image was found to
be 0.244 atm, occurring at a wave radius of 2.526 mm, the same location as the
maximum changes in refractive index and density. This increased pressure was due to
the advance of the acoustic wave through the medium. The measurement was made at a
room temperature of 25°C and an atmospheric pressure of 1 atm.
The value obtained with this method was lower than that obtained using fringe
analysis by 26%. Initially this was thought to be because the signals were filtered
in this analysis, whereas the image used in fringe analysis was unfiltered.
6.4 Pressure of Acoustic Waves from Laser Interaction
The conversion of the optical energy provided by the laser light into the
acoustical energy of the resulting waves was represented by the peak pressure values
of the advancing waves at various time intervals. The optical energy of the laser
pulse influenced the dielectric characteristics of the medium in such a way as to
allow this energy conversion to occur. Initially the energy was large enough to form
shock waves, which quickly subsided into acoustic waves such as those recorded on the
interferograms. The energy of the laser pulse depended on the voltage used for the
flashlamp; in this analysis, 850 V was supplied to the flashlamp to cause breakdown
in air.
The peak pressure at a given time and location followed the Rankine-Hugoniot
relation:

(7/6)[(µ/c)² − 1] = P/P0        (6.5)

where
µ = velocity of the shock wave in the medium
c = velocity of the gas medium at rest ahead of the shock wave
P = pressure build-up due to the shock wave
P0 = atmospheric pressure ahead of the shock wave
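Equation (6.5) can be applied directly: given the shock velocity µ and the ambient sound speed c, the pressure build-up follows from the Mach number µ/c. The sketch below assumes c = 346 m/s (air near 25°C) and illustrative shock velocities.

```python
def overpressure_atm(mu, c=346.0, p0_atm=1.0):
    """Pressure build-up P (in atm) from Eq. (6.5):
    P/P0 = (7/6) * ((mu/c)**2 - 1), where mu is the shock velocity and c
    the sound speed in the undisturbed gas (the factor 7/6 corresponds to
    gamma = 1.4 for air)."""
    mach = mu / c
    return p0_atm * (7.0 / 6.0) * (mach ** 2 - 1.0)

# as the wave slows toward the sound speed, the overpressure decays to zero
for mu in (420.0, 380.0, 350.0):
    print(mu, round(overpressure_atm(mu), 3))
```

The decay of the overpressure as µ approaches c is the behaviour recorded in Table 6.1 and Figure 6.10.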
Equation (6.5) indicates that the pressure change decays soon after
propagation starts. Table 6.1 shows the relation between the time after interaction,
the maximum pressure change at that time, and the location at which this maximum
occurs. The maximum pressure change decreases with increasing size of the wave, or
equivalently with increasing time after interaction. This is to be expected because
of the decreasing energy as the wave moves outward from the center of interaction.
Figure 6.10 shows the distribution of the maximum pressure change in relation to its
radius as the wave propagates.
Figure 6.10 shows the distribution of the maximum pressure change over a
period of less than 4.0 µs after laser interaction using simultaneous image analysis.
The distribution takes the form of a Rankine-Hugoniot curve, indicating that the peak
pressure change decays soon after propagation starts.
The system developed for this project was capable of capturing images much
earlier than 2.0 µs, but at those times it was difficult to determine the maximum
pressure change because of the shape of the waveform (shock-wave stage). The
distribution also revealed some points straying from the constructed line. This can
be explained by fluctuations in the effective energy of the laser pulse at different
time intervals; the conversion to acoustical energy, as represented by the pressure,
would change accordingly.
Table 6.1: Distribution of maximum pressure change

Time (µs)    Maximum pressure change (atm)    Radius at max pressure change (mm)
2.0          0.601                            1.316
2.2          0.426                            1.618
3.2          0.286                            2.045
3.6          0.244                            2.510
3.8          0.214                            2.750
4.0          0.184                            3.168
4.8          0.157                            3.650
Figure 6.10 Distribution of maximum pressure change (atm) against radius (mm).
6.5 Image Representation
Knowledge of quadrature imaging was incorporated into this image-processing
work to provide direct visual information on the overall phase profile of the event.
Lighter gray levels represented higher values of the phase change, while darker ones
represented the corresponding lower values.
The information of the phase change could be visualized in two dimensions or in
three dimensions. The three-dimensional phase profiling provided greater detail of the
phase variation over the whole area of the event. Figure 6.11a shows the event taken 3.6
µs after laser interaction using the first algorithm and Figure 6.11b shows the same
event processed using the second algorithm (APPENDIX H).
Both algorithms produced pictures resembling the splash caused by dropping a stone into a pool of water. A region of maximum phase change was created around a region of minimum phase change, giving the activity a well-like structure. This was expected of laser interaction activity in air. A similar profile was also expected from both algorithms because both made use of the 90° phase separations between the images.
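The thesis's phase measurement equations (Equations (6.2) and (6.3)) are not reproduced in this section. The sketch below therefore uses one common three-step variant, assuming intensity frames shifted by 0°, 90° and 180°, purely to illustrate how a wrapped phase can be extracted from three simultaneous intensities:

```python
import math

def three_step_phase(i1, i2, i3):
    """Wrapped phase from three frames with presumed 0/90/180 degree shifts.

    With I_k = I0 * (1 + g*cos(phi + k*pi/2)):
        I1 - I3        = 2*I0*g*cos(phi)
        I1 + I3 - 2*I2 = 2*I0*g*sin(phi)
    so phi = atan2(I1 + I3 - 2*I2, I1 - I3), wrapped to (-pi, pi].
    """
    return math.atan2(i1 + i3 - 2.0 * i2, i1 - i3)

# Synthetic check: generate the three intensities from a known phase
# and recover it.  I0, g and phi are invented for the demonstration.
I0, g, phi = 100.0, 0.8, 0.7
frames = [I0 * (1 + g * math.cos(phi + k * math.pi / 2)) for k in range(3)]
recovered = three_step_phase(*frames)
print(recovered)   # close to 0.7
```

Note that the mean intensity I0 and modulation g cancel in the arctangent ratio, which is why the method works pixel by pixel without calibration of the absolute brightness.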
As seen in both images, the same kind of error (appearing as misplaced slices of the region) was detected. This could be due to dust particles or optical surface irregularities on any of the many optical components used in the system, and is something to bear in mind in the determination of the phase values. Such errors could have contributed to failures in the unwrapping process, when it failed to detect the actual rise and fall of the true phase of the activity, and this would surely affect the final results.
With Mathcad programming, cross-sections of the 3-D image could also be obtained at any required location. Figure 6.12(a) shows the cross-section of the activity that took place in this event. The phase change could easily be viewed from any angle because the image could be rotated and tilted from 0° to 360°, enabling a more detailed observation of the activity at this instant. Figure 6.12(b) shows the same cross-section viewed from a different angle (rotation and tilt).
The image of the event could also be cut into sections or sliced layer by layer for
thorough observation of the effect of interaction at different locations. The image could
be cut into halves (Figure 6.12(a) and Figure 6.12(b)) or quarters (Figure 6.13) or any
other portions as desired. Thus, this would enable thorough investigation on the
changes that took place in the sample.
Quantitatively, the numerical values of the phase change could be obtained from their graphical representations. The values are given in radians, which can be converted into degrees if desired. Figure 6.14 shows the profile of the phase changes taken at three different pixel locations of the interferogram, namely at y = 85, 102 and 128. This provided a detailed slice-by-slice profile of the phase change for very thorough analysis.
It is an achievement to be able to assess the product of laser activity so thoroughly with this technique, where single-interferogram methods sometimes failed. Despite all the possible error contamination and the difficulty in aligning the optical components, this technique managed to extract the required phase change due to laser interaction from the interferograms. Moreover, the phase was extracted without the lengthy mathematical manipulations previously associated with interferometric analysis: the single formula provided by the phase measurement algorithm, implemented in Mathcad, cut the analysis time a great deal.
The phase shifts obtained in the process were then related to the changes in the
other associated parameters such as its refractive index, density and pressure of the
irradiated samples as shown in the previous section.
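The chain from phase shift to refractive index, density and pressure can be sketched as below. The wavelength, path length, temperature and Gladstone-Dale constant are illustrative assumptions for air, not the thesis's actual parameters:

```python
import math

# Hypothetical values -- assumptions for illustration only.
LAM = 532e-9        # laser wavelength (m), assumed frequency-doubled Nd:YAG
L   = 5e-3          # optical path length through the disturbance (m), assumed
K   = 2.26e-4       # Gladstone-Dale constant for air (m^3/kg)
R_S = 287.0         # specific gas constant of air (J/(kg K))
T   = 293.0         # ambient temperature (K), assumed

def phase_to_pressure(dphi):
    """Chain a phase change (rad) to index, density and pressure changes."""
    dn   = dphi * LAM / (2.0 * math.pi * L)   # refractive index change
    drho = dn / K                              # Gladstone-Dale: n - 1 = K*rho
    dp   = drho * R_S * T                      # ideal gas at constant T (Pa)
    return dn, drho, dp

dn, drho, dp = phase_to_pressure(4.0)          # e.g. a 4 rad phase change
print(dn, drho, dp / 101325.0)                 # dp also expressed in atm
```

With these assumed numbers, a 4 rad phase change maps to a pressure change of a few tenths of an atmosphere, the same order as the values in Table 6.1.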
Figure 6.11 3-D images of the phase change (rad) against pixel location using (a) the first algorithm (Equation (6.2)) and (b) the second algorithm (Equation (6.3)).
Figure 6.12 (a) Cross-section of the image (phase change (rad) against pixel location). (b) Another view of the cross-section.
Figure 6.13 A quarter-section of the event.
Figure 6.14 Profile of the phase change (rad) at different locations across the image.
6.6 Comparison with Fringe Analysis
The maximum pressure change profiles obtained from the two techniques are presented in Figure 6.15. Their differences at smaller time delays were noticeably larger than those after a 4.0 µs delay. With fringe analysis, it was easier to find the fringe centers of the smaller fringes at smaller time delays, giving better accuracy in the calculated phase. The opposite held for simultaneous analysis, which required bigger fringes for better resolution and accuracy. This explains the relatively more gradual drop in pressure change with simultaneous analysis compared with fringe analysis.
Figure 6.15 Maximum pressure change profiles using the two methods (simultaneous analysis and fringe analysis; max pressure change (atm) against time (µs)).
The profiles produced show the rapid decrease in the pressure change from the shock wave region toward the acoustic wave region. Images could in fact be captured much earlier than 2.0 µs, the starting point for calculation in the graph shown in Figure 6.15. However, as the waves at that stage were unsymmetrical, calculations were made only after the wave took up a more spherical shape.
At the beginning, fringe analysis seemed more reliable, since it was much easier to determine the fringe centers of smaller fringes. As the waves propagated, the fringes in the interaction region also expanded (Figure 4.2). This meant that more pixels now represented each fringe, and thus simultaneous analysis gained accuracy over fringe analysis. By around 5 µs delay the two methods seemed to agree, as indicated by about the same values of the pressure change produced. Unfortunately, after that time the disturbances produced were larger than the 256 x 256 pixel window chosen for this analysis, and further analysis could not be carried out. However, knowing that the sensitivity and accuracy of the simultaneous phase mapping method rely on the number of pixels representing a fringe, we can be confident that this method would have the advantage over fringe analysis as the waves propagated further.
The average values for both methods were presented in Figure 6.15 with 5%
error bars. The slight deviation from the expected smooth profile could be due to the
fluctuation of the laser energy burst producing the interactions.
Another factor that could also be associated with the difference in the profile
was thought to be due to the unfiltered nature of the images being analyzed using fringe
analysis technique. With simultaneous phase mapping, the images were digitally
filtered to remove the unwanted noise, before analyses were made. During this process,
part of the signal could have been removed.
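The trade-off between noise removal and signal loss can be illustrated with a simple smoothing filter. The moving average below is a stand-in for the FFT-based filtering actually used in this work, and the signal parameters are invented for the demonstration:

```python
import math
import random

def smooth(signal, window=9):
    """Centered moving average: a simple stand-in for the low-pass
    digital filtering step (the thesis used FFT-based filtering)."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

random.seed(0)
clean = [math.sin(0.05 * i) for i in range(200)]       # slowly varying signal
noisy = [c + random.gauss(0, 0.2) for c in clean]      # plus measurement noise
filtered = smooth(noisy)

# Filtering reduces the noise energy, but any real feature sharper than
# the window is attenuated too -- the signal loss noted in the text.
err_noisy = sum((a - b) ** 2 for a, b in zip(noisy, clean))
err_filtered = sum((a - b) ** 2 for a, b in zip(filtered, clean))
print(err_noisy, err_filtered)
```

The filtered error is much smaller here because the synthetic signal varies slowly; a narrow shock front would be partly smoothed away by the same filter.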
With this understanding, it did not seem fair to compare the values of the changes in phase or pressure with time obtained by the two techniques, as each approached the situation differently. However, both profiles indicated a rapid reduction in the maximum pressure change soon after laser interaction, which finally settled to a more gradual change.
6.7 The Advantages of the Simultaneous Phase Measurement
6.7.1 Phase Ambiguity Reduction
A very distinct advantage of this system is its ability to reduce phase ambiguity and thus provide the true picture of the event, even though some of the interferograms involved in the calculation exhibited ambiguity when analyzed separately. A good analogy is depicted in Figure 6.16.
Figure 6.16 Fields of view at three different locations (observers 1, 2 and 3 viewing objects A, B and C).
The same scenery (or object) viewed from three different angles will not produce the same picture. Observer 1 will not be able to see anything beyond the top of A. Observer 3 will see the tops of B and A and distinguish their relative heights, but will not notice C. Observer 2 will be able to see A, B and C, but will not be able to relate their relative heights. This is what happens when one tries to view an object from one perspective only: the true picture of the object cannot be projected. It is therefore always better to view the object from many different locations, because that is the only way to project an accurate picture of the scene or object.
In this work, three CCD cameras were used to capture images of the same event
from three different angles. With the analogy given above, this should be sufficient to
provide the true picture of the event occurring. Actually this is the minimum
requirement to obtain the correct picture of the event according to the phase shifting
algorithm. Increasing the number of images from different angles would definitely
increase the sensitivity and accuracy of the measurements.
The interferograms shown in Figure 6.17 are images of laser interaction taken at a time delay of 3.8 µs. As the intensity levels were scanned from left to right through the center of the interaction event, the phase change was calculated and mapped at each location. When analyzed separately using phase mapping with Fourier analysis, two of the phase change profiles (from image 1 and image 3 of Figure 6.18) appeared to resemble the expected profile. But due to the poor quality of the images, the profiles produced were not smooth, indicating error contamination. The second profile, ∆φ2, clearly exhibited phase ambiguity and therefore failed to represent the phase change associated with laser interaction.
Figure 6.17 Images at t = 3.8 µs: (a) Image 1, (b) Image 2, (c) Image 3.
Figure 6.18 Phase change profiles individually analyzed (phase change (rad) against pixel location across the image): (a) Image 1, (b) Image 2, (c) Image 3.
With simultaneous phase analysis, the phase change produced for the event is as shown in Figure 6.19. Even though the final product is not a smooth profile, owing to errors picked up in the measurement, it indicates the profile reasonably expected of this kind of interaction. This shows that when one of the three images exhibits ambiguity under single-interferogram analysis, the simultaneous algorithm can still extract the phase change of the event.
Figure 6.19 Phase change profile with simultaneous analysis (phase change (rad) against pixel location across the image).
A second set of interferograms, taken at a time delay of 3.4 µs, is shown in Figure 6.20. From visual observation, there was a noticeable change in intensity level and contrast over the previous set. This could be due to fluctuation in the laser energy burst at this instant.
Figure 6.20 Images at t = 3.4 µs: (a) Image 1, (b) Image 2, (c) Image 3.
Figure 6.21 Phase change profiles of the images when analyzed individually (phase change (rad) against pixel location): (a) Image 1, (b) Image 2, (c) Image 3.
When these images were analyzed separately using phase mapping with the Fourier method, the phase changes produced are shown in Figure 6.21(a, b, c). From this set, only the phase profile of image 2 resembled the expected phase change profile of laser interaction; the other two displayed phase ambiguities.
However, when analyzed simultaneously with the present algorithm, the phase change produced is as shown in Figure 6.22. Again, this proved the ability of simultaneous analysis to extract the phase even when two of the interferograms exhibited ambiguity. The final phase clearly indicated the separation between the disturbed and undisturbed regions of the interferogram, although the maximum phase changes on the left and right halves of the image were not quite equal, owing to error contamination.
Figure 6.22 Phase change profile simultaneously analyzed (phase change (rad) against pixel location across the image).
The third set of images was taken at a time delay of 3.6 µs (Figure 6.1). When analyzed separately, they produced the phase change profiles shown in Figure 6.23. Unfortunately, all three images failed to produce the expected phase change profile of laser interaction in air, like the one shown in Figure 6.5. In other words, because of the existence of extra fringes, all three images failed to represent the true phase changes taking place when analyzed separately.
However, the algorithm used with this system is a combination of all three intensities, matched point by point across the three simultaneously produced images. This meant that the three images combined to remedy the point-to-point ambiguity and put the final product in the right perspective.
Figure 6.24 shows the profile of the phase change produced by the same set of images (at t = 3.6 µs) analyzed using the simultaneous-image algorithm of Equation (6.2). The final phase change revealed a much smoother profile of laser interaction in air. A clear separation between the interaction region and the undisturbed region of the interferogram was indicated, and the maximum phase changes on the left and right halves of the profile were quite similar. This was the kind of profile expected from laser interaction in air. In a way, this showed that even when all three phase profiles exhibited ambiguity when individually analyzed, simultaneous phase measurement could still recover the true phase change.
As intended, this system indeed managed to reduce, if not eliminate, the problem of phase ambiguity due to extra or missing fringes commonly faced in single-interferogram analysis of laser interaction.
From the profiles that exhibited phase ambiguity, it was generally noted that the first halves actually indicated reasonable values of the phase change that took place. It was in the second half of the profile that the unwrapping procedure failed to make the right connection, which led to phase ambiguity. As the images were scanned from left to right, the extra fringes were usually found at the center of the images; thus, from the center to the right half of the interferogram, the unwrapping process would encounter these extra fringes, creating ambiguity in the process.
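The unwrapping step described here can be sketched in one dimension. The jump threshold of π is the standard convention; the data are synthetic:

```python
import math

def unwrap(phases):
    """1-D phase unwrapping: whenever consecutive samples jump by more
    than pi, assume a 2*pi wrap occurred and correct for it.  An extra
    (or missing) fringe injects a spurious jump that this procedure
    cannot distinguish from a real wrap -- the ambiguity in the text."""
    out = [phases[0]]
    offset = 0.0
    for prev, cur in zip(phases, phases[1:]):
        d = cur - prev
        if d > math.pi:
            offset -= 2.0 * math.pi
        elif d < -math.pi:
            offset += 2.0 * math.pi
        out.append(cur + offset)
    return out

# A steadily rising true phase, observed wrapped into (-pi, pi].
true = [0.1 * i for i in range(100)]
wrapped = [math.atan2(math.sin(p), math.cos(p)) for p in true]
recovered = unwrap(wrapped)
print(max(abs(a - b) for a, b in zip(recovered, true)))   # ~0
```

The procedure is exact as long as the true phase changes by less than π between samples, which is why larger fringes (more pixels per fringe) make unwrapping more reliable.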
Figure 6.23 Phase change profiles of the three images individually analyzed (phase change (rad) against pixel location): (a) Image 1, (b) Image 2, (c) Image 3 (∆φ3).
Figure 6.24 Phase change profile simultaneously analyzed (phase change (rad) against pixel location across the image).
6.7.2 Visual Observations
A computer program written in Mathcad (APPENDIX H) provided visual image representations of the phase changes that took place. Most of us are more sensitive to visual observations of the changes than to the mathematical figures representing them. These 3-D images provided views of the changes in phase from any orientation required by the user. With computer programming, the images could be sliced and cut into several portions at any location, as required for thorough investigation.
6.7.3 Intensity Independence
It was also found that the phase change could still be obtained even when the recorded intensity levels were very low. The maximum intensity scale used in this work was 256 gray levels. The phase measurements shown in the analysis were taken at a maximum intensity level of about 200; however, phase measurement was still possible when the level was about 100. Below this level, the visibility is so low that normal vision might not distinguish the fringes. Figure 6.25(c) and Figure 6.26(c) show the profiles of the phase changes calculated at the different intensity levels.
Figure 6.25(a) shows a set of simultaneously captured images taken at a time
delay of 3.4 µs. The images shown have reasonably good visibility with their maximum
intensity level of about 200 grayscale (Figure 6.25(b)). These values were measured
across the image through the center of interaction activity. Using the simultaneous
intensity analysis for phase ambiguity reduction, the phase change profile obtained is as
shown in Figure 6.25(c). This shows the profile expected from laser interaction in air.
Figure 6.26(a) shows a set of low-visibility images with a maximum intensity level of about 100 gray levels (Figure 6.26(b)), taken at a time delay of 2.8 µs. Figure 6.26(c) shows the phase change calculated in the same manner as before. Comparing the two profiles, Figure 6.26(c) even indicates a higher maximum phase change than Figure 6.25(c), as the wave was captured earlier, at t = 2.8 µs compared with 3.4 µs.
These findings proved that phase measurement does not rely on the absolute intensity levels but on the differences, or changes, in the intensity levels of the fringes.
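This intensity independence can be demonstrated with the same assumed three-step variant (0°, 90°, 180° shifts, not necessarily the thesis's own algorithm): scaling all three frames by a common factor cancels in the arctangent ratio.

```python
import math

def three_step_phase(i1, i2, i3):
    # Assumed three-step variant for frames shifted by 0, 90 and 180 degrees.
    return math.atan2(i1 + i3 - 2.0 * i2, i1 - i3)

# Invented mean level, modulation and true phase for the demonstration.
I0, g, phi = 200.0, 0.9, 1.2
frames = [I0 * (1 + g * math.cos(phi + k * math.pi / 2)) for k in range(3)]
dim    = [0.5 * f for f in frames]       # same event at half the exposure

bright_phase = three_step_phase(*frames)
dim_phase    = three_step_phase(*dim)
print(bright_phase, dim_phase)           # the common factor cancels
```

Only the ratio of intensity differences enters the calculation, so halving the exposure leaves the recovered phase unchanged, matching the observation that measurement still worked at around 100 gray levels.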
6.7.4 Fringe Shapes and Sizes
Another advantage of the system designed in this project is its ability to produce the phase change regardless of the shapes and sizes of the fringes before or after laser interaction. This is because phase measurement relies only on the equivalent intensity levels accumulated on the detector pixels. As the size of the fringes increases, more pixels represent each fringe, increasing the accuracy of the measurement.
Figure 6.25 Simultaneous phase analysis from high-intensity images: (a) images 1, 2 and 3; (b) intensity against pixel location for the three images; (c) phase change against pixel location.
Figure 6.26 Phase change from low-intensity images: (a) images 1, 2 and 3; (b) intensity level (grayscale) against pixel location; (c) phase change (rad) against pixel location.
Since, with this method, there was no need to calculate how far a fringe had deviated from the reference, the identification of fringe centers at every data location was omitted, and the analysis time cut short a great deal.
6.7.5 User-friendly System
The other advantage is the user-friendly way the system was designed, minimizing the tasks of the user. The system itself contained no moving parts, and no user intervention was required during measurement. Apart from that, the single final algorithm, together with simple computer programming, made it possible to reduce the usually lengthy analysis time.
6.8 The Disadvantage of the System
The system developed in this project was rather complex, consisting of many optical components along the paths taken by the laser light. A clean, dust-free atmosphere is hard to achieve in a common laboratory, so the sensitive interferometry system recorded these impurities in the interferograms. As the number of optical components increases, the quality of the images produced also suffers. Special care should be taken during digital filtering so as not to remove part of the signal, thereby reducing the accuracy of the measurement.
The images produced in this work were not of excellent quality, but they were
sufficient to reduce ambiguity and produce the expected phase change profile of laser
interaction.
6.9 Discussion: Error Contributors
The phase profiles obtained were proof that the system developed was capable of doing what it was intended for. However, the profiles produced were not as smooth as expected, owing to the presence of errors, and the 3-D image representations also revealed some discontinuities in the phase change produced. Thus, the system still requires extra effort to reduce the amount of error at every possible stage of image production and processing.
It was not easy to produce images with exactly the same intensity, even when they were simultaneously captured with a single laser shot. The sharpness and contrast of the images differed slightly from one another, and the distribution of speckle also differed from one image to another. These differences could be due to dust along the different paths leading to the three CCD cameras; other factors could be the quality of the instrumentation and imperfect alignment. All these factors would contribute errors during unwrapping.
It has been shown (Bruning, 1978; Koliopoulos, 1981) that source intensity fluctuations cause the standard deviation of the measured wavefront phase to go as

σφ = 1 / (s√n)

where n is the number of phase steps and s is the signal-to-noise ratio. In an ideal situation the noise limit is set by photon shot noise (Wyant, 1975). If p is the number of detected photons, the standard deviation of the measured wavefront phase goes as

σφ = 1 / √p
The phase shifts between images prior to phase measurement were also not exactly 90°. Even though their mean phase difference was 90°, their standard deviation of ±7° was large. Improving this critical value would surely benefit the accuracy of the phase measurements.
Greivenkamp and Bruning (1992) mentioned that the three-step algorithm, requiring only the minimum amount of data, was simple to use, but also very sensitive to errors in the phase shifts between frames. In the present work the phase was not shifted as in previous phase-shifting methods, but prearranged to differ before the frames were simultaneously captured. The errors between frames should therefore be minimal.
According to Wyant (1998), an incorrect phase shift between data frames can also result from incorrect phase shifter calibration: the phase shift should be nπ/2 but the actual shift is nπ/2 + nξ. He produced a phase error plot module using the measured intensities of the three images, the size of the phase step (π/2) and the chosen calibration error percentage, and found that the error was essentially sinusoidal with a frequency equal to twice that of the interference fringes.
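Wyant's calibration-error behavior can be reproduced in a small simulation. The three-step formula and the 5% miscalibration below are assumptions chosen for illustration:

```python
import math

def recover(i1, i2, i3):
    # Assumes exact 0/90/180 degree steps between the three frames.
    return math.atan2(i1 + i3 - 2.0 * i2, i1 - i3)

def phase_error(phi, eps):
    """Recovered-phase error when the real step is (pi/2)*(1 + eps)."""
    step = (math.pi / 2.0) * (1.0 + eps)
    frames = [100.0 * (1.0 + 0.8 * math.cos(phi + k * step))
              for k in range(3)]
    err = recover(*frames) - phi
    return math.atan2(math.sin(err), math.cos(err))   # wrap to (-pi, pi]

# Scan one fringe: with a 5% step miscalibration the error oscillates
# with phi (Wyant found it roughly sinusoidal at twice the fringe
# frequency); with a perfectly calibrated step the error vanishes.
errors = [phase_error(2 * math.pi * k / 360.0, 0.05) for k in range(360)]
print(max(abs(e) for e in errors))
```

For the simultaneous system, where the 90° separations are fixed optically rather than stepped, the analogue of ξ is the static error in the arranged phase offsets (the ±7° spread mentioned above).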
The conversion of analog intensity signals to digital signals can also cause an error, known as the quantization error. Since conversion is accomplished with an analog-to-digital converter, the accuracy of the conversion depends upon the number of bits in the digital word transferred to the computer. In this work, the converter digitized the analog input signal into an 8-bit word, giving 2^8 = 256 discrete quantization levels. Koliopoulos (1981) discussed the effects of quantization errors for a three-step algorithm.
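The size of the quantization error for a three-step calculation can be estimated with a small simulation. The intensity scale and modulation depth are invented for the sketch, and the three-step formula is the assumed 0°/90°/180° variant rather than the thesis's own equation:

```python
import math

def recover(i1, i2, i3):
    # Assumed three-step variant for 0/90/180 degree frame shifts.
    return math.atan2(i1 + i3 - 2.0 * i2, i1 - i3)

def quantize(v, bits=8, full_scale=255.0):
    """Clamp and round to one of 2**bits levels, like an 8-bit ADC."""
    levels = (1 << bits) - 1
    v = max(0.0, min(full_scale, v))
    return round(v / full_scale * levels) * full_scale / levels

errors = []
for k in range(500):
    phi = 2.0 * math.pi * k / 500.0
    frames = [100.0 * (1.0 + 0.8 * math.cos(phi + j * math.pi / 2.0))
              for j in range(3)]
    err = recover(*[quantize(f) for f in frames]) - phi
    errors.append(math.atan2(math.sin(err), math.cos(err)))  # wrap

rms = math.sqrt(sum(e * e for e in errors) / len(errors))
print(rms)
```

The resulting rms phase error is small but non-zero, a floor set purely by the 8-bit digitization of otherwise perfect fringes.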
Another error source is stray reflections. A common problem in interferometers using a laser source is extraneous interference fringes due to stray reflections. The easiest way to think about their effect is that the stray reflection adds to the test beam to give a new beam of some amplitude and phase; the difference between this resulting phase and the phase of the test beam is the phase error. Often the stray light changes when the test beam is blocked. In well-designed laser-based interferometers the stray light is minimal. Probably the best way of reducing or eliminating the error due to stray light is to use a short-coherence light source.
Other forms of coherent noise, such as dust and scratches on optical surfaces and inhomogeneities and imperfections in the optical elements and coatings, can also contribute. Scrupulous cleaning of the optics helps to reduce the scattered light, improve contrast and reduce artifacts.
The overall optical alignment of the interferometer also affects the accuracy of the measurement. Rays from an imperfect spherical wavefront do not retrace themselves even when reflected from a perfectly spherical or flat surface; when the rays do not retrace themselves, they shear. The measurement error introduced by wavefront shear grows with the wavefront slope errors in the interferometer.
6.10 Summary
In this work, three simultaneously captured images of laser-interacted events were sufficient to reduce the problem of phase ambiguity often associated with laser interaction. Other algorithms incorporate more than three images for increased accuracy, but capturing that many images simultaneously could lead to other problems.
Phase changes due to laser interaction in air were extracted based on the intensity levels of the images. Phase profiles of the interaction events were produced at several locations in the interferograms, and 3-D images were produced to provide clearer pictures of the events. These images could be rotated and tilted through any angle from 0° to 360° for thorough observation, and could also be cut into several portions or sliced at any location for detailed investigation.
Based on the phase changes produced, one can always proceed to work out the corresponding changes in the refractive index, density and pressure of the irradiated samples. The maximum pressure change at various time intervals was displayed, revealing the fast energy dissipation of the wave.
The values of the maximum pressure change obtained using this method were found to be much lower than those obtained using the traditional fringe analysis method in the earlier stage of wave production. This is thought to be due to the different initial requirements of the two methods. However, the difference seemed to diminish as the wave propagated away from the center of interaction.
Thus, the system developed made it possible to study the pressure build-up pattern in optical samples leading to the critical points for damage. The technology can also be used in the analysis of precision surfaces and positioning in precision manufacturing.
Finally, the advantages for using the present system for phase measurements
were mentioned together with some error factors that could be associated with this
system.
CHAPTER 7
CONCLUSION AND RECOMMENDATIONS
7.1 General Conclusion
A new simultaneous-image interferometry system for phase measurement of images resulting from laser breakdown in air was developed in this project. The Mach-Zehnder interferometer was modified to produce three outputs, as required by the algorithms. These outputs were arranged to differ in phase by 90° from one another for quadrature imaging prior to phase measurement. The system involved no moving parts: all the components were locked into position once the system was set, so no user intervention was necessary during measurements.
The system developed managed to fulfill all the objectives of this project. The three images used in the algorithms, captured simultaneously with a single laser pulse, managed to reduce phase ambiguity in laser-interacted images. Coupled with high-speed photography, the system was able to eliminate air turbulence, vibrations and other time-dependent noise. Furthermore, simultaneous image capture reduced phase errors between frames and made noise filtering somewhat easier. Usually, the multiple images used in phase-shifting algorithms are captured individually in different time frames, so they carry different time-dependent noise.
Photography techniques such as shadowgraphy and Schlieren could be carried out with this system, but their images served only for qualitative observation; only the images captured interferometrically were used in the quantitative phase measurements. The instantaneous speed of the wave at certain time intervals after laser interaction could be calculated from the graph of the distance propagated by the wave against time. The shock wave stage, which lasted only momentarily (nanoseconds), could not provide very accurate high-speed propagation measurements, owing to the unsymmetrical nature of the waveform. This very brief period was followed by a small but more spherical waveform, which continued to expand outwards, approaching the constant acoustic wave speed in air.
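Using the radius-time data of Table 6.1, the instantaneous speed can be sketched by finite differences (the thesis read speeds off the distance-time graph). The scatter in the differences reflects the shot-to-shot energy fluctuations noted in Chapter 6:

```python
# Radius-vs-time data of Table 6.1; mm/µs equals km/s,
# so multiply by 1000 for m/s.
t = [2.0, 2.2, 3.2, 3.6, 3.8, 4.0, 4.8]                 # µs
r = [1.316, 1.618, 2.045, 2.510, 2.750, 3.168, 3.650]   # mm

# Forward finite differences between consecutive measurements.
speeds = [(r2 - r1) / (t2 - t1) * 1000.0
          for (t1, r1), (t2, r2) in zip(zip(t, r), zip(t[1:], r[1:]))]
print(speeds)
```

The first interval is clearly supersonic (well above the ~343 m/s acoustic speed), and the last is much slower, although the intermediate differences scatter because each radius came from a separate laser shot.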
Initially, phase measurement was achieved using only one interferogram. Two methods were introduced: fringe analysis, and phase mapping with the Fourier transform method. Fringe analysis was able to produce the needed phase change, but the process was long and tedious. Phase mapping with the Fourier transform was easy and fast, but with laser-interacted images this method was often plagued by phase ambiguity; the probability that captured images exhibited ambiguities was, in this work, very high. Thus, another algorithm for the phase mapping method was needed to reduce this ambiguity problem.
The solution was a combination of three intensity values from three simultaneously captured images in a single algorithm, as expressed in Equations (6.2) and (6.4). Three images is the minimum requirement for any intensity-based measurement, allowing phase analysis to be made with the minimum amount of data. An initial phase separation of 90° between the images was required by the quadrature technique incorporated in this work. A simple Mathcad program used in the analysis produced an almost instant result.
The images obtained simultaneously at 3.6 µs after laser interaction were selected for analysis using the present algorithm. The algorithm indeed managed to produce the expected phase change due to laser interaction in air, even though each of the three images exhibited ambiguities when individually analyzed. From this phase profile a corresponding pressure profile was worked out, and the maximum change in pressure was found to be 0.244 atm, occurring at a wave radius of 2.526 mm. This corresponded to a maximum change in density of 0.211 kg m-3 and a maximum change in refractive index of 0.070 x 10-3, both at the same wave radius.
A profile of the maximum pressure change with time was also produced using the conventional fringe analysis technique, for comparison with that obtained from simultaneous analysis, the emphasis of this work. The profiles indicated a significant difference between the two methods at smaller time delays. The difference, however, became smaller with time and was about the same by 5.0 µs (Figure 6.15). As the shock propagated outwards (after 5 µs), the two methods seemed to agree more closely.
The reason for this lies in the nature of the two methods of analysis. Fringe analysis performed better with the smaller fringes found at smaller time delays, whereas simultaneous phase mapping performed better with the larger fringes formed as the wave propagated. Another factor was thought to be the loss of useful signal during the FFT filtering process in the simultaneous phase measurement. Nevertheless, the pressure profile produced by this analysis indicated a rapid drop in pressure as the wave propagated.
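The FFT filtering step mentioned above can be illustrated with a crude sketch: suppressing the zero-frequency (background) term of an interferogram by notching the centre of its 2-D spectrum. This is a minimal stand-in only; the thesis used a more selective band-pass window, and an over-wide notch is exactly how useful signal can be lost.

```python
import numpy as np

def remove_dc_fft(img):
    """Suppress the zero-frequency background of an interferogram by zeroing
    a small block around the centre of its shifted 2-D spectrum."""
    spec = np.fft.fftshift(np.fft.fft2(img))
    cy, cx = np.array(spec.shape) // 2
    spec[cy - 1:cy + 2, cx - 1:cx + 2] = 0.0   # crude 3x3 notch over the DC peak
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec)))

# Demo: a uniform background plus an 8-cycle fringe pattern
x = np.arange(64)
fringes = np.tile(1.0 + 0.5 * np.cos(2 * np.pi * 8 * x / 64), (64, 1))
filtered = remove_dc_fft(fringes)
print(abs(filtered.mean()))   # background removed: mean ≈ 0
```

Because the fringe carrier sits well away from the notch, the oscillatory part survives unchanged while the background is removed.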
Phase profiles at different locations, as well as 3-D image representations of the phase change, were presented. Visual image representations could also be made at any angle and location; images could be cut into small portions or sliced at any location to aid visual inspection of the activity. The related changes in the density, refractive index and pressure profiles at any location could then be computed.
In conclusion, despite the high complexity of the experimental set-up, which required perfect alignment of all the optical components involved as well as the lining up of the pixels of the three images, this system has managed to achieve all the objectives set in its design. In fact, it has proven to have certain advantages over some other techniques.
However, as has been repeatedly noted, no single algorithm or method is able to overcome all the problems relating to phase interferometry, and the system developed in this project is no exception.
7.2    Recommendations for Future Work
The system developed and the technique used in this project performed successfully in the way they were meant to work. However, there is still room for improvement in the quality of the images produced. The fact that phase measurement is so sensitive to its surroundings means that the experiments should be conducted in a clean, dust-free laboratory; this condition was not quite fulfilled in this work, as the laboratory also housed several undergraduate projects. The optical instruments for each arm of the interferometer should also be of similar specifications, model and year of manufacture, since manufacturers tend to update their specifications from production to production. This is to ensure that light encounters exactly similar pathways before arriving at the detectors.
The system developed in this project is actually capable of producing four images simultaneously if another detector is placed at the free end of the remaining light path of the interferometer. If the phase difference of 90° is maintained as in this system, then the wavefront phase of the four-step algorithm, with 90° phase separation between the images, could take the form:
φ = arctan [ (I2 − I4) / (I3 − I1) ]                    (7.1)
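A four-step version of the phase recovery, in the spirit of Equation (7.1), can be sketched as below. This is an illustrative implementation only: the frames are assumed to be offset by 0°, 90°, 180° and 270°, and the numerator and denominator are both negated (the same ratio as Equation (7.1)) so that `arctan2` places the result in the correct quadrant.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Four-step phase recovery for frames offset by 0, 90, 180, 270 deg.
    Same ratio as Equation (7.1); signs arranged for arctan2 quadrants."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic check with a known phase value
phi_true = -2.1
offsets = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
frames = [1.0 + 0.5 * np.cos(phi_true + d) for d in offsets]
print(four_step_phase(*frames))   # ≈ -2.1
```

Note that, unlike the three-step case, the four-step formula cancels the background term automatically, since each intensity appears only in a difference of two frames 180° apart.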
The procedure to find the phase change would be the same as in this project; the only difference is one additional image to be captured and processed before entering it into the new algorithm (Equation (7.1)). This may provide a better result because it involves more images than before. The difficulty will probably be in getting the fourth image in sequence, at 90° to the other three images. Furthermore, as more optical components are introduced into the system, more errors would probably also be introduced. There are also four-step algorithms, independent of the amount of phase shift given to the images, that one could try.
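One well-known example of such a shift-independent four-step method is the Carré algorithm, sketched below under the assumption of four frames separated by an equal but unknown phase step (offsets −3a/2, −a/2, +a/2, +3a/2 about the mean phase, with 0 < a < π); the simplified sign handling here is an assumption of this sketch, not a full implementation.

```python
import numpy as np

def carre_phase(i1, i2, i3, i4):
    """Carre-type phase recovery for four frames with an equal but unknown
    phase step. Sign handling assumes a step a with 0 < a < pi."""
    num = (3.0 * (i2 - i3) - (i1 - i4)) * ((i2 - i3) + (i1 - i4))
    den = (i2 + i3) - (i1 + i4)
    return np.arctan2(np.sign(i2 - i3) * np.sqrt(np.abs(num)), den)

# Demo with an arbitrary, uncalibrated step of 1.3 rad (not 90 deg)
phi_true, a = 1.0, 1.3
frames = [2.0 + 0.7 * np.cos(phi_true + d)
          for d in (-1.5 * a, -0.5 * a, 0.5 * a, 1.5 * a)]
print(carre_phase(*frames))   # ≈ 1.0
```

The attraction for this system is that the phase shifter (here, the beam-splitter geometry) would not need to be calibrated exactly to 90°.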
As for expansion of the system, one could attempt to modify it to study the thermo-optic coefficients of transparent materials. The fringe shifts associated with changes in the refractive index, density and pressure can also be associated with a change in the temperature of the sample.
The basic principle of this interferometric technique is that a change in the temperature T of a slab of transparent material of refractive index n and thickness l results in a temperature-dependent phase change given by:

dφ/dT = k l [ dn/dT + α(n − 1) ]                        (7.2)

where k = 2π/λ is the wave vector and α = (1/l)(dl/dT) is the material's thermal coefficient of linear expansion. The temperature-dependent optical phase difference is given by:

φ(T) = [n(T) − 1] k l(T)                                (7.3)
Conventional minimum-angle-of-deviation techniques and interferometric methods have also been extensively used for determining dn/dT of polymeric materials such as Plexiglas and polycarbonate, in the thermal range where their behavior can be related to a phase transition.
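As a worked illustration of Equation (7.2), the sketch below evaluates the phase change per kelvin for a hypothetical slab; the numerical parameters (a fused-silica-like sample probed at 632.8 nm) are illustrative values, not measurements from the thesis.

```python
import numpy as np

# Hypothetical slab parameters -- illustrative, not from the thesis
lam = 632.8e-9      # wavelength (m)
l = 5.0e-3          # slab thickness (m)
n = 1.457           # refractive index
dn_dT = 1.0e-5      # thermo-optic coefficient (1/K)
alpha = 5.5e-7      # linear expansion coefficient (1/K)

k = 2.0 * np.pi / lam                          # wave vector
dphi_dT = k * l * (dn_dT + alpha * (n - 1))    # Equation (7.2)
print(dphi_dT)      # phase change per kelvin (rad/K)
```

For these values the phase changes by roughly half a radian per kelvin, i.e. a fraction of a fringe per degree, which is comfortably within the resolution of the present phase-mapping system.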
APPENDIX A
Laser Energy Produced At Laser Head
Table 3.1  The measured laser energy (mJ, single pulse) at a distance of 30 cm from the laser head, using a Melles Griot power meter.

Voltage supplied to the flash lamp (V)    650    700    750    800    850    900    950
Without focusing system (mJ)               39     59     81    107    130    156    177
With focusing system (mJ)                 1.1    1.7    2.3    3.0    3.7    4.4    5.0
% energy passed through                  2.82   2.88   2.83   2.80   2.84   2.82   2.82
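The transmission row of the table above can be checked directly: the sketch below recomputes the percentage of energy passed through the focusing system (small last-digit differences from the tabulated values are rounding effects).

```python
import numpy as np

# Appendix A data: laser energy without and with the focusing system (mJ)
without = np.array([39, 59, 81, 107, 130, 156, 177], dtype=float)
with_f = np.array([1.1, 1.7, 2.3, 3.0, 3.7, 4.4, 5.0])

pct = 100.0 * with_f / without   # percentage of energy passed through
print(np.round(pct, 2))          # ≈ 2.8% at every flash-lamp voltage
```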
APPENDIX B
The Trigger and Synchronize Unit, incorporating the Nd:YAG and nitro-dye connection:
(a) the trigger and synchronize unit; (b) the Nd:YAG unit; (c) the dye unit.
[Circuit diagrams (a)-(c) not reproduced.]
APPENDIX C
The 5 V and 15 V Power Supplies Used in the Trigger Unit

[Schematic not reproduced. In outline: the 240 V mains feeds two step-down transformers through a 1 A fuse and neon indicator. A 12 V secondary is rectified and smoothed by a 4,700 µF, 25 V capacitor into a 7815 regulator for the +15 V rail; a 6 V secondary is rectified and smoothed by a 5,000 µF, 15 V capacitor into a 7805 regulator for the +5 V rail. Each rail is decoupled with 0.22 µF and 0.47 µF capacitors.]
APPENDIX D
Formula Derivation for Simultaneous Phase Measurement

The three intensity equations are:

I1(x, y) = I0 {1 + γ cos[φ(x, y) + π/4]}
I2(x, y) = I0 {1 + γ cos[φ(x, y) + 3π/4]}
I3(x, y) = I0 {1 + γ cos[φ(x, y) + 5π/4]}

Applying the trigonometric relation cos(α ± β) = cos α cos β ∓ sin α sin β:

cos(φ + π/4)  = (cos φ − sin φ)/√2
cos(φ + 3π/4) = (−cos φ − sin φ)/√2
cos(φ + 5π/4) = (−cos φ + sin φ)/√2

After undergoing FFT filtering, the intensity equations become:

I1 = γ I0 (cos φ − sin φ)/√2
I2 = γ I0 (−cos φ − sin φ)/√2
I3 = γ I0 (−cos φ + sin φ)/√2

so that

I1 − I2 = (γ I0/√2)(cos φ − sin φ + cos φ + sin φ) = √2 γ I0 cos φ
I3 − I2 = (γ I0/√2)(−cos φ + sin φ + cos φ + sin φ) = √2 γ I0 sin φ

Hence

tan φ = sin φ / cos φ = (I3 − I2)/(I1 − I2)

and thus

φ = tan⁻¹ [ (I3 − I2)/(I1 − I2) ]
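The intermediate identities of this derivation can be spot-checked numerically. The sketch below builds the three filtered intensities for random phase values and verifies that I1 − I2 = √2 γI0 cos φ, that I3 − I2 = √2 γI0 sin φ, and that the arctangent combination returns the original phase:

```python
import numpy as np

rng = np.random.default_rng(0)
phi = rng.uniform(-np.pi, np.pi, 1000)   # random test phases
g = 0.6                                  # gamma * I0, arbitrary

i1 = g * np.cos(phi + np.pi / 4)
i2 = g * np.cos(phi + 3 * np.pi / 4)
i3 = g * np.cos(phi + 5 * np.pi / 4)

assert np.allclose(i1 - i2, np.sqrt(2) * g * np.cos(phi))
assert np.allclose(i3 - i2, np.sqrt(2) * g * np.sin(phi))
assert np.allclose(np.arctan2(i3 - i2, i1 - i2), phi)
print("Appendix D identities verified")
```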
APPENDIX E
Acoustic Wave Propagation
Radius of the acoustic wave at various times after interaction:

time (µs)     2.0    2.2    2.6    3.0    3.4    3.6    3.8    4.2    4.6    4.8    5.2
radius (mm)  2.205  2.345  2.560  2.825  3.125  3.265  3.390  3.530  3.705  3.725  3.810

[Graph: radius of the wave (mm) against time after interaction (µs).]
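A rough estimate of the mean propagation speed can be drawn from the tabulated data above by a straight-line fit of radius against time. Since the true speed decays over the interval, this is only an average, offered here as an illustrative sketch rather than an analysis performed in the thesis:

```python
import numpy as np

# Appendix E data: time after interaction (us) and wave radius (mm)
t = np.array([2.0, 2.2, 2.6, 3.0, 3.4, 3.6, 3.8, 4.2, 4.6, 4.8, 5.2])
r = np.array([2.205, 2.345, 2.560, 2.825, 3.125, 3.265, 3.390,
              3.530, 3.705, 3.725, 3.810])

slope, intercept = np.polyfit(t, r, 1)   # least-squares line r = slope*t + c
print(slope * 1000.0)                    # mm/us -> m/s; on the order of 500 m/s
```

A mean speed somewhat above the ambient sound speed (≈343 m/s) is consistent with a decaying shock relaxing toward an acoustic wave.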
APPENDIX J
Distribution of the Maximum Pressure Change by
Fringe Analysis and Simultaneous Method.
time (µs)      2.0    2.6    3.0    3.2    3.6    3.8    4.0    4.8
max Ps (atm)  0.601  0.454  0.326  0.286  0.244  0.214  0.184  0.157
max Pf (atm)  0.856  0.633  0.502  0.404  0.330  0.260  0.223  0.167

Ps = maximum pressure change by simultaneous analysis
Pf = maximum pressure change by fringe analysis

[Graph: maximum pressure change (atm) against time (µs), comparing the simultaneous and fringe analyses.]
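The convergence of the two methods discussed in Section 7.1 can be quantified from the data above. The sketch below computes the relative discrepancy between the fringe-analysis and simultaneous-analysis pressure maxima at each time delay:

```python
import numpy as np

# Appendix J data: time (us), max pressure change by each method (atm)
t = np.array([2.0, 2.6, 3.0, 3.2, 3.6, 3.8, 4.0, 4.8])
ps = np.array([0.601, 0.454, 0.326, 0.286, 0.244, 0.214, 0.184, 0.157])
pf = np.array([0.856, 0.633, 0.502, 0.404, 0.330, 0.260, 0.223, 0.167])

rel = 100.0 * (pf - ps) / pf   # relative discrepancy in percent
print(np.round(rel, 1))        # ~30% early on, falling to ~6% by 4.8 us
```

The shrinking discrepancy at larger delays is consistent with the observation that the two methods come into closer agreement as the shock propagates outwards.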