A guide to InSAR for Geophysics

Contents

1. Introduction to Synthetic Aperture Radar
2. The idea of 'phase'
3. Basic concepts for InSAR over a spherical earth
4. Basic concepts for measuring topography
5. Basic concepts for measuring surface deformation
6. How to make an interferogram - initial steps
7. Baselines and Orbits
8. Phase unwrapping
9. Phase gradients
10. Quantifying displacement
11. Atmospheric Effects
12. Limitations, advantages and resolution
13. Applications to neotectonics
14. Obtaining SAR data
Emma Hill, Spring 2001, Geol 695
1. Introduction to Synthetic Aperture Radar
Conventional radar (which itself is an acronym for 'RAdio Detection And
Ranging') remote sensing works by illuminating the ground with electromagnetic
waves of microwave frequency, and using the amplitude of the signal and the
time it takes between transmitting the signal and receiving back the echoes to
deduce the distance from the sensor to the image pixel on the ground. These
distances are used, along with orbital information, to produce a 2D image of
the ground, which looks similar to an aerial photograph.
Figure 1 The electromagnetic spectrum (from the Atlantis Scientific Webpage).
There are various frequencies of electromagnetic energy in the microwave
wavelength range (see Figure 1) that can be used, which correspond to the
three commonly used bands L, C and X (see Table 1). There are tradeoffs
between using these different frequencies. C band, for example, loses
coherence more easily than L band, but is also four times more accurate than L
band (which wouldn't be accurate enough to detect small fault displacements)
(Massonnet, 1995).
Band    Frequency (GHz)    Wavelength (cm)
L       1-2                30-15
C       4-8                7.5-3.75
X       8-12               3.75-2.5

Table 1 Radar band characteristics.
Radar is an 'all weather' system, since the electromagnetic waves can
penetrate cloud. Since the energy comes from the system itself (it is an
'active' remote sensing system), rather than the sun (which is where 'passive'
systems get their illumination from), it can also be used at night.
The larger the radar antenna, the better the resolution will be. The
'synthetic' part of Synthetic Aperture Radar (SAR) therefore comes from the
use of the movement of the satellite (and more complicated processing
techniques involving the Doppler history of the radar echoes) to create a
'synthetically' larger antenna and hence improved resolution. Typical pixel
spacing in space-based SAR is 20-100 m within a 100 km wide swath (see Figure
2 for notation).
Figure 2 Radar geometry and notation.
References
Burgmann, Roland, Paul A. Rosen and Eric J. Fielding (2000)
Synthetic Aperture Radar Interferometry to Measure Earth's Surface
Topography and its Deformation
Annu. Rev. Earth Planet. Sci., Vol.28, pp.169-209
Madsen and Zebker (1998)
Imaging Radar Interferometry; in Principles and Applications of Imaging Radar,
Manual of Remote Sensing; American Society of Photogrammetry and Remote
Sensing, Chapter 5, pp.270-358
Massonnet, D. (1995)
Application of Remote Sensing Data in Earthquake Monitoring.
Adv. Space Research, Vol.15, No.11, pp.1137-1144
Atlantis Scientific's Webpage:
http://www.atlsci.com/
2. The idea of 'phase'
The Theory
A processed SAR signal is made up of two things - an amplitude and a phase,
which are represented by a complex number. The phase is not used in
traditional SAR studies, as it is influenced by so many things that it appears as
random over a SAR image and the things that influence it are very difficult to
quantify. In fact, the phase information was usually completely destroyed by
speckle removal techniques (used to make the radar image look more like an
aerial photo), which average the amplitudes of neighboring pixels.
When the radar sensor sends out a pulse of electromagnetic energy, there will
be an integer number of complete wavelengths that travel to and from the
target, then a final, incomplete wavelength received at the sensor, which is
what is termed the 'phase'. Phase is measured as an angle, in radians or
degrees; 2π radians of phase make up one phase cycle. The total phase is
therefore directly proportional to the range from the satellite to a pixel on
the ground, although only the final, fractional cycle is actually measured.
The most important influences on phase are:

Useful:
- Topography
- Geometric displacement of the targets within each pixel (i.e. surface
  deformation).

Not useful (to us, anyway?!):
- Earth curvature
- The reflection of the signal by scatterers in the pixel (rocks, plants,
  etc).
- Change in phase delay caused by differences in the position of these
  scatterers within each pixel.
- Orbit error
- Atmospheric (ionospheric and tropospheric) delay
- Phase noise (from the radar system).
This is quite an intimidating list, and we shall see that the focus of anyone
trying to use InSAR for research in their field will be to isolate just one of
these features, reducing and eliminating the other aspects of phase through
mathematical manipulation and calibration. Some of the effects, such as earth
curvature, are much easier to deal with than effects such as tropospheric
delay, but the closer we get to measuring extremely small effects on phase
(for example interseismic strain), to levels smaller than a single fringe, the
harder it will become to accurately remove the effects we don't want.
If two images were taken of exactly the same target from exactly the same
satellite position, and the ground had not changed, then the phase for each
would be the same. This implies that if two images are taken of exactly the
same target from slightly different positions and the difference between the
phases for each pixel is calculated, the unwanted effects on phase can be
removed, leaving only the useful quantities. This combining of images gives us
'Interferometric Synthetic Aperture Radar', otherwise known by the acronym
InSAR.
The Math
Representing Phase as a Complex Number
The radar wave can be viewed as shown in Figure 3. From this we can see that
the wave can be defined as A cos(φ), where A is the amplitude and φ is the
phase in radians. We get this just from simple trig - remember that you get
a similar curve if you plot a graph of y = cos(x).

Figure 3 The electromagnetic wave as a function of amplitude and phase.

If we translate these parameters onto real (I) and imaginary (Q) axes, we get
Figure 4. In this view, φ goes round and round the circle with increasing
numbers of waves, increasing by 2π each time it completes the loop. If the
wave is incomplete, the circle will not be completed, and all the preceding
full loops of 2π plus the partial value of φ make up our phase. We can derive
the following equations for amplitude by looking at the geometry in the
circle:

A = √(I² + Q²)

cos(φ) = I/A

We can also, most importantly, get an equation for phase, φ:

φ = tan⁻¹(Q/I)

Figure 4 The electromagnetic wave viewed in complex format.
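These two relations are easy to check numerically. Below is a minimal sketch in Python with NumPy (the function and variable names are mine); note that it uses arctan2 rather than a bare tan⁻¹(Q/I), so that the quadrant of φ is recovered correctly even when I is negative or zero:

```python
import numpy as np

def amp_phase(i, q):
    """Amplitude and phase of a complex SAR sample from its real (I) and
    imaginary (Q) parts.  arctan2 is used instead of arctan(Q/I) so the
    quadrant of phi is recovered correctly."""
    a = np.hypot(i, q)        # A = sqrt(I**2 + Q**2)
    phi = np.arctan2(q, i)    # phi = tan^-1(Q/I), quadrant-aware
    return a, phi
```

For example, a sample with I = 3 and Q = 4 has amplitude 5, and A cos(φ) and A sin(φ) recover the original I and Q.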
Relating Phase to Range
The one-way distance, in wavelengths, between a radar antenna and a point on
the surface is ρ/λ, where ρ is the range and λ is the wavelength of the
system. Multiplying by 2π radians per cycle gives the total phase over this
distance: φ = 2πρ/λ. For two-way (round-trip) travel this is φ = 4πρ/λ.

We can also translate these equations to find phase difference. The
difference in range between two radar antennas to a point on the surface is
δρ, so the difference in round-trip distance, in wavelengths, is 2δρ/λ.
Multiplied by 2π, this gives the phase difference as

φ₁ − φ₂ = Δφ = 4πδρ/λ.
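As a quick numerical sketch of these formulae (names and the ERS-like C-band wavelength of ~5.66 cm are my own example choices, not from the text), including the wrapping that makes only the fractional cycle observable:

```python
import numpy as np

# An assumed, ERS-like C-band wavelength in meters (~5.66 cm)
WAVELENGTH = 0.0566

def total_phase(rho, wavelength, two_way=True):
    """Total phase (radians) over range rho: 2*pi*rho/lambda one-way,
    4*pi*rho/lambda for round-trip travel."""
    k = 4.0 * np.pi if two_way else 2.0 * np.pi
    return k * rho / wavelength

def wrapped(phi):
    """The measurable part of the phase, wrapped to (-pi, pi]."""
    return np.angle(np.exp(1j * phi))
```

One full fringe of two-way phase (2π) therefore corresponds to a range change of λ/2, about 2.8 cm at this wavelength.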
References
Gabriel, Andrew K., Richard M.Goldstein and Howard A.Zebker (1989)
Mapping Small Elevation Changes Over Large Areas: Differential Radar
Interferometry
Journal of Geophysical Research, Vol.94, No.B7, pp.9183-9191
Rosen, Paul A., Scott Hensley, Ian R. Joughin, Fuk Li, Soren N. Madsen, Ernesto
Rodriguez and R.M.Goldstein (1999)
Synthetic Aperture Radar Interferometry
Proceedings of the IEEE, Unpublished manuscript
3. Basic concepts for InSAR over a spherical earth
The Theory
The natural variation of the line-of-sight (LOS) vector across a scene, caused
by the shape of the earth, introduces a gradient of phase across the scene
(see Figure 5). The effects of a spherical earth must therefore be removed by
'flattening' the interferogram. We do this by calculating what the gradient of
phase change would be assuming that the earth is a sphere with no topography
and no surface deformation, and subtracting this from the interferogram. The
math behind the removal of phase caused by the shape of
the earth is a good introduction to the concepts of interferometry.
Figure 5 Fringes over a spherical earth (exaggerated for effect).
The Math
What follows are the calculations for obtaining δρ_e, the range difference
caused by the curvature of the earth. They follow the steps of, for example,
Zebker et al (1994), Price and Sandwell (1998) and Price (2001, Chapter 1),
but I have expanded them in tedious detail.
Figure 6 Diagram of geometry for InSAR over a spherical earth. A1 and A2 represent the two sensor
positions. ρ is the range from A1, and ρ + δρ is the range from A2. B is the baseline length, with
components parallel and perpendicular to the look direction indicated by B∥ and B⊥. θ is the look
angle (the angle between the radar ray and the vertical) and α is the angle between the baseline and
the horizontal.
For a case with two satellite positions and no topography, the phase difference
Δφ will only have a contribution from the difference in range over the scene,
δρ_e, caused by a spherical earth:

Δφ = (4π/λ) δρ_e.

Remember the Cosine Rule:

a² = b² + c² − 2bc cos A

... and set the Cosine Rule up for the geometry in Figure 6:

(ρ + δρ_e)² = ρ² + B² − 2ρB cos(θ + (90° − α)).
Through a collection of trigonometric manipulations (which you could skip
reading), we can simplify the right hand side of this equation.

The trig law

cos(A + B) = cos A cos B − sin A sin B

means that we can write

cos(θ + (90° − α)) = cos θ cos(90° − α) − sin θ sin(90° − α).   (**)

This can be simplified further by using the trig laws

cos(A − B) = cos A cos B + sin A sin B
sin(A − B) = sin A cos B − cos A sin B

to write

cos(90° − α) = cos 90° cos α + sin 90° sin α = sin α
sin(90° − α) = sin 90° cos α − cos 90° sin α = cos α.

This means that we can write Equation ** as

cos θ sin α − sin θ cos α = −sin(θ − α).

Substituted into the original equation, this gives

(ρ + δρ_e)² = ρ² + B² + 2ρB sin(θ − α).

Expanding the left hand side gives:

ρ² + δρ_e² + 2ρδρ_e = ρ² + B² + 2ρB sin(θ − α).

Now we start making approximations to simplify things further. δρ_e is very
small compared to ρ, so δρ_e² is going to be so tiny that we can ignore it. We
can also subtract ρ² from both sides:

2ρδρ_e = B² + 2ρB sin(θ − α).

Then, by dividing each side by 2ρ, we get:

δρ_e = B²/(2ρ) + B sin(θ − α).

(Zebker et al (1994) swap this around to get δρ_e = B sin(θ − α) − B²/(2ρ) - I
have not been able to reproduce the sign on the B² term, which may reflect a
different sign convention for δρ_e.)
2
Now we make another assumption - for spaceborne geometries,  is going to be
very large compared to B (approximately 800km compared to less than several
Emma Hill, Spring 2001, Geol 695
15
hundred meters), so we can pretty much pretend that the paths from the two
antennas are parallel. This idea was introduced by Zebker and Goldstein
(1986), and is now termed the 'Parallel Ray Approximation'. This means that
B2, in the scale of things, can also be ignored, which gives:
e   B sin(    )
Now look closely at the diagram again. We can see that Bll, the component of
the baseline vector that is parallel to the look direction of the reference
satellite, can be described as BII  B cos(  (90 o   )) , which we know from
before can be simplified to BII  B sin(    ) . For completeness, we can also see
that B  B cos(   ) . Combining this range-baseline relationship and the
equation for phase difference (  
terms of our observables:   
4

4

), gives us an equation for range in
.B II .
Since δρ_e ≈ B∥, we can draw the geometry as shown in Figure 7, as many
authors do (for example Zebker and Goldstein, 1986). Note that they move the
angle θ' to where 90° − θ was before - I guess this is just for convenience.
You can see straight away from this that δρ_e = B cos(θ' + α). You can also
see that the height of the satellite, H, is H = ρ sin θ'. Combining both these
equations with the (one-way) equation for phase difference, Δφ = (2π/λ)δρ,
gives an expression for H in terms of Δφ and ρ:

H = ρ sin( cos⁻¹( λΔφ / (2πB) ) − α )
Figure 7 Alternative notation for geometry over a spherical earth.
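The flattening correction itself is a one-line formula; a minimal numerical sketch of Δφ = (4π/λ) B sin(θ − α) is below (the function name and the ERS-like example values in the usage note are mine, not from the text):

```python
import numpy as np

def flat_earth_phase(look_angle, baseline, alpha, wavelength):
    """Two-way 'flat earth' phase for one pixel via the parallel-ray
    approximation: delta_rho_e ~ B_parallel = B * sin(theta - alpha),
    so dphi = (4*pi/lambda) * B * sin(theta - alpha).
    Angles are in radians, lengths in meters."""
    b_parallel = baseline * np.sin(look_angle - alpha)
    return 4.0 * np.pi * b_parallel / wavelength
```

Because the look angle θ sweeps across the swath (very roughly 19°-26° for an ERS-like geometry), this term alone lays many fringes across an otherwise featureless scene, which is exactly what flattening removes. Note that when θ = α the baseline is perpendicular to the look direction and the flat-earth phase vanishes.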
References
Burgmann, Roland, Paul A. Rosen and Eric J. Fielding (2000)
Synthetic Aperture Radar Interferometry to Measure Earth's Surface
Topography and its Deformation
Annu. Rev. Earth Planet. Sci., Vol.28, pp.169-209
Madsen and Zebker (1998)
Imaging Radar Interferometry; in Principles and Applications of Imaging Radar,
Manual of Remote Sensing; American Society of Photogrammetry and Remote
Sensing, Chapter 5, pp.270-358
Price, Evelyn J. and David T.Sandwell (1998)
Small-scale deformations associated with the 1992 Landers, California,
earthquake mapped by synthetic aperture radar interferometry phase
gradients.
Journal of Geophysical Research, Vol.103, No.B11, pp.27001-27016
Price, Chapter 1 (unpublished, 2001)
http://topex.ucsd.edu/insar/
Rosen, Paul A., Scott Hensley, Ian R. Joughin, Fuk Li, Soren N. Madsen, Ernesto
Rodriguez and R.M.Goldstein (1999)
Synthetic Aperture Radar Interferometry
Proceedings of the IEEE, Unpublished manuscript
Zebker and Goldstein (1986)
Topographic Mapping From Interferometric Synthetic Aperture Radar
Observations; JGR, Vol.91, No.B5, pp.4993-4999
Zebker, Howard A., Paul A. Rosen, Richard M. Goldstein, Andrew Gabriel and
Charles L. Werner (1994)
On the derivation of coseismic displacement fields using differential radar
interferometry: The Landers earthquake
Journal of Geophysical Research, Vol.99, No.B10, pp.19617-19634
4. Basic concepts for measuring topography
The Theory
If the range from two different sensor positions to a single point on the
surface of the earth is known, along with the distance between these sensor
positions, then it is a relatively simple geometric problem to calculate the
position of the point on the surface. If two SAR images are taken from
slightly different positions and the backscatter phase does not change
between them, then the measured phase difference will be proportional to the
difference in range from the sensor positions to each pixel. Simultaneous
measurement of range, azimuth angle and elevation angle can therefore provide
3D coordinates for each pixel. Slant range measurements of topography can
be converted to ground range to make DEMs in map projection.
The Math
Figure 8 Diagram of geometry for InSAR over a spherical earth with topography. Notation is the same
as for Figure 6 except that θ_o is the look angle from the reference satellite to a spherical earth with no
topography and θ_t is the distortion to this angle caused by the presence of topography.
Having been through the derivation of the equations for InSAR geometry on
the spherical earth with no topography in Chapter 3, it is now relatively easy to
convert these equations into a calculation for topography on a spherical earth.
The phase is now dependent on contributions from both the spherical earth
(δρ_e) and topography (δρ_t):

Δφ = (4π/λ)(δρ_e + δρ_t).

We must now, therefore, write the equation δρ_e ≈ B sin(θ_o − α) from Chapter
3 as

δρ_e + δρ_t = B sin((θ_o + θ_t) − α).

Expanding the equation (using the trig laws from Chapter 3) gives

δρ_e + δρ_t = B[sin(θ_o + θ_t) cos α − cos(θ_o + θ_t) sin α],

where

sin(θ_o + θ_t) = sin θ_o cos θ_t + cos θ_o sin θ_t

and

cos(θ_o + θ_t) = cos θ_o cos θ_t − sin θ_o sin θ_t.

We can again make an approximation here, due to the scale of spaceborne
geometries, as θ_t is going to be very small. For very small angles (if they
are measured in radians), sin A ≈ A and cos A ≈ 1 − A²/2, so

sin(θ_o + θ_t) ≈ sin θ_o + θ_t cos θ_o − (θ_t²/2) sin θ_o and

cos(θ_o + θ_t) ≈ cos θ_o − θ_t sin θ_o − (θ_t²/2) cos θ_o.

Putting these into the original equation, and keeping only the terms up to
first order in θ_t (the θ_t² terms are negligibly small), gives

δρ_e + δρ_t ≈ B(sin θ_o cos α − cos θ_o sin α) + θ_t B(cos θ_o cos α + sin θ_o sin α)

= B sin(θ_o − α) + θ_t B cos(θ_o − α).
Remembering that B∥ = B sin(θ_o − α) and B⊥ = B cos(θ_o − α), we can therefore
write:

δρ_e + δρ_t = B∥ + θ_t B⊥.

We can now remove δρ_e from these equations using the fact that δρ_e ≈ B∥,
leaving only a relationship between range and topography:

δρ_t = θ_t B cos(θ_o − α), or δρ_t = θ_t B⊥.

We will actually measure Δφ_t, the topographic part of the phase difference
between the two images. We know that Δφ_t is a function of this range
difference, δρ_t. Combining the equations for range and phase difference
(Δφ_t = (4π/λ)δρ_t) we get:

Δφ_t = (4π/λ) θ_t B cos(θ_o − α).

We can also use trigonometry to calculate the vertical height of the
antennas: H = ρ cos(θ_o + θ_t). Solving the phase equation above for θ_t,
i.e. θ_t = λΔφ_t / (4πB cos(θ_o − α)), and substituting it into the equation
for H gives H in terms of our observables:

H = ρ cos( θ_o + λΔφ_t / (4πB cos(θ_o − α)) ).

Combining this equation for H with along-track distance, x, and slant range
measurements will give a topographic map in the coordinate system x, ρ and H.
[This next bit is from Zebker and Goldstein (1986), and I'm not quite sure
what y is relative to the diagram - presumably the horizontal, cross-track
distance.] If we want to transform the coordinate system from slant range to
ground range (i.e. x, y and h coordinates, where h is ground elevation), we
can calculate true ground range from the antennas to each pixel using
y = √(ρ² − h²). The image must also be rectified to fit a square grid.
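The phase-to-height chain above (topographic phase → look-angle perturbation θ_t → antenna height H) can be sketched in a few lines. This is a minimal illustration under the small-angle derivation above; the function names and example values are mine:

```python
import numpy as np

def look_angle_perturbation(dphi_t, baseline, theta0, alpha, wavelength):
    """Invert dphi_t = (4*pi/lambda) * theta_t * B * cos(theta0 - alpha)
    for the small look-angle perturbation theta_t due to topography.
    Angles in radians, lengths in meters."""
    b_perp = baseline * np.cos(theta0 - alpha)
    return dphi_t * wavelength / (4.0 * np.pi * b_perp)

def antenna_height(rho, theta0, theta_t):
    """Vertical distance from the antenna down to the imaged point,
    H = rho * cos(theta0 + theta_t)."""
    return rho * np.cos(theta0 + theta_t)
```

A simple sanity check is the round trip: generate a phase from an assumed θ_t with the forward formula, then confirm the inversion recovers it.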
SAR Systems for measuring topography
There are two different system configurations for measuring topography:
Dual Antenna Systems
In dual antenna systems, two antennas are mounted on the same platform, so
there is a fixed baseline between them and the geometry of the system is well
constrained. The Shuttle Radar Topography Mission (SRTM) is the best
example of a 'dual antenna system' - see Chapter 14 for more details.
It is worth noting that if the same transmitter is used for both antennas, the
phase equation for one-way propagation differences should be used. If each
antenna transmits and receives, use the two-way propagation phase equation.
Single Antenna Systems
Topographic mapping using a single antenna is usually called 'repeat-track' or
'dual-pass' interferometry. This is how most space-based systems operate.
As the system only has one antenna, the 'slave' image must be obtained in a
second orbit following that of the 'master' image. This means that the orbit
parameters for the two image takes must be well constrained, so that the
second satellite pass is in a similar orbital position to the first and so
that the baseline between the two satellite positions can be calculated. For
repeat-track systems you should use the two-way propagation formulae for
phase.
The ERS Tandem mission is a good example of a repeat-track system. It was
designed to measure topography, so ERS-2 followed ERS-1 in the same orbit,
with a one-day separation.
References
Burgmann, Roland, Paul A. Rosen and Eric J. Fielding (2000)
Synthetic Aperture Radar Interferometry to Measure Earth's Surface
Topography and its Deformation
Annu. Rev. Earth Planet. Sci., Vol.28, pp.169-209
Madsen and Zebker (1998)
Imaging Radar Interferometry; in Principles and Applications of Imaging Radar,
Manual of Remote Sensing; American Society of Photogrammetry and Remote
Sensing, Chapter 5, pp.270-358
Price, Evelyn J. and David T.Sandwell (1998)
Small-scale deformations associated with the 1992 Landers, California,
earthquake mapped by synthetic aperture radar interferometry phase
gradients.
Journal of Geophysical Research, Vol.103, No.B11, pp.27001-27016
Rosen, Paul A., Scott Hensley, Ian R. Joughin, Fuk Li, Soren N. Madsen, Ernesto
Rodriguez and R.M.Goldstein (1999)
Synthetic Aperture Radar Interferometry
Proceedings of the IEEE, Unpublished manuscript
Toutin, Thierry and Laurence Gray (2000)
State-of-the-art of elevation extraction from satellite SAR data.
ISPRS Journal of Photogrammetry and Remote Sensing, Vol.55, pp.13-33
Zebker and Goldstein (1986)
Topographic Mapping From Interferometric Synthetic Aperture Radar
Observations; JGR, Vol.91, No.B5, pp.4993-4999
Zebker, Howard A., Paul A. Rosen, Richard M. Goldstein, Andrew Gabriel and
Charles L. Werner (1994)
On the derivation of coseismic displacement fields using differential radar
interferometry: The Landers earthquake
Journal of Geophysical Research, Vol.99, No.B10, pp.19617-19634
SRTM Website:
http://www.jpl.nasa.gov/srtm
5. Basic concepts for measuring surface deformation
The Theory
If two images are taken from exactly the same sensor position of exactly the
same target, at different times, and the returned phases are different, then
these phase differences relate to a change in the range from the satellite to
the target, indicating a change in position of the target. This means that by
differencing the phase of images taken before and after the ground has moved,
changes towards or away from the satellite can be measured as millimeter-level
line of sight (LOS) (i.e. along the path of the radar signal) displacements.
InSAR can ONLY measure LOS displacements. It can only measure a change in the
surface along the look direction of the satellite, not a full, 3D displacement
vector. Additional information can be obtained by using images from both
ascending and descending orbits, but this comes at the cost of more
complicated processing and is at the mercy of data availability.
If the orbits of two satellite passes were repeated perfectly, the phase would
only contain a measure of the deformation, but since the two images are
unlikely to have been taken from exactly the same position, the phase
difference between the two will include both a measure of the deformation and
a measure of topography. In order to quantify only surface displacement,
therefore, the effects of topography must be removed.
The Math
Figure 9 Geometry for InSAR over a spherical earth, with topography, that has undergone deformation
(δρ_d). Notation is the same as for Figure 8, except that θ_d is the change in look angle caused by the
surface deformation.
If deformation has taken place, the phase can be written as

Δφ = (4π/λ)(δρ_e + δρ_t + δρ_d),

where δρ_e represents the range change due to a spherical earth, δρ_t
represents range change due to topography and δρ_d represents range change due
to surface deformation.

We already know how to remove δρ_e (see Chapter 3), which leaves δρ_t to be
removed in order to isolate δρ_d.
Removing Topographic Fringes
Independent DEMs
To use this method, a DEM must be available from external sources. The DEM
is used to create a synthetic topographic fringe pattern, which is then
subtracted from the interferogram to leave only the fringes caused by surface
deformation (Massonnet and Feigl, 1995).
Three-pass, differential InSAR
This method is otherwise termed 'double differencing', or, particularly when
more than 3 images are used, the 'N-pass' method (see Figure 10). A DEM is
created from two SAR images, with at least one unrelated to the pair of
interest. This DEM is then subtracted from the interferogram thought to
show surface displacement (Gabriel et al, 1989; Zebker et al, 1994).
Figure 10 Geometry for the 'N-Pass' method of measuring surface deformation.
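Whichever way the topographic fringe pattern is obtained (from an independent DEM or from a second image pair), the removal step itself can be sketched in a couple of lines. A minimal illustration (names mine; in practice the synthetic topographic phase is simulated from the DEM and the baseline geometry):

```python
import numpy as np

def remove_topographic_phase(igram, topo_phase):
    """Subtract a synthetic topographic fringe pattern from a complex
    interferogram.  Doing the subtraction as a complex multiply by
    exp(-i * topo_phase) keeps the result properly wrapped; what
    remains is the deformation signal."""
    return igram * np.exp(-1j * topo_phase)
```

If an interferogram contains a topographic phase plus a uniform deformation phase, applying this function leaves only the deformation phase at every pixel.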
References
Burgmann, Roland, Paul A. Rosen and Eric J. Fielding (2000)
Synthetic Aperture Radar Interferometry to Measure Earth's Surface
Topography and its Deformation
Annu. Rev. Earth Planet. Sci., Vol.28, pp.169-209
Gabriel, Andrew K., Richard M.Goldstein and Howard A.Zebker (1989)
Mapping Small Elevation Changes Over Large Areas: Differential Radar
Interferometry
Journal of Geophysical Research, Vol.94, No.B7, pp.9183-9191
Madsen and Zebker (1998)
Imaging Radar Interferometry; in Principles and Applications of Imaging Radar,
Manual of Remote Sensing; American Society of Photogrammetry and Remote
Sensing, Chapter 5, pp.270-358
Massonnet, D., and K.L.Feigl (1995)
Discrimination of geophysical phenomena in satellite radar interferograms.
Geophysical Research Letters, Vol.22, pp.1537-1540
Massonnet, D. and K.L.Feigl (1998)
Radar interferometry and its application to changes in the earth's surface
Reviews of Geophysics , Vol.36, pp.441-500
Zebker, Howard A., Paul A. Rosen, Richard M. Goldstein, Andrew Gabriel and
Charles L. Werner (1994)
On the derivation of coseismic displacement fields using differential radar
interferometry: The Landers earthquake
Journal of Geophysical Research, Vol.99, No.B10, pp.19617-19634
6. How to make an interferogram - initial steps
Data Processing
Raw data consists of radar echoes collected from the surface. These must
first be processed so that each pixel contains amplitude and phase information.
Processing algorithms are based on the signal characteristics of the sensor and
the satellite orbit. Data that contains amplitude and phase information as an
array of complex numbers is called Single Look Complex (SLC). As you can see
in Figure 11, complex data looks pretty noisy, and not at all like the
amplitude-based radar images we're used to seeing, which have had speckle
removed.
Figure 11 An ERS-1 scene over Prudhoe Bay, Alaska, in complex format (from the ASF website).
Coregistration
Common pixels between the two images must be mapped, in order that the two
images may be overlain exactly. Differences in geometry, and differences
caused by inconsistencies in satellite design, must be accounted for; i.e. it may
be necessary to warp or stretch one image to fit well over the other.
Differences in satellite velocity can cause differences between the two
images, giving a systematic along-track distortion. Although the baseline is
ideally aligned parallel to the flight track, satellite tracks will usually be
divergent, which also introduces a 'linear shear'.
If the pixels are not properly aligned at a sub-pixel level, the random part of
the phase caused by scatterers in the image will not cancel out between
images. The precision of this alignment must be better than about 100 μs
(<1 m) along-track and 5 ns (<1 m) in range for a 5x8 m ERS pixel (Burgmann et
al, 2000).

The wavelength scale is of the order of tens of centimeters and the spatial
resolution tens of meters, so phase changes that are significant for
interferometry are insignificant to the registration of the images.
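A common way to get the initial, integer-pixel alignment between the two amplitude images is cross-correlation. Below is a hedged sketch (FFT-based circular correlation; names mine, and real processors refine this estimate to the sub-pixel precision quoted above):

```python
import numpy as np

def integer_offset(master, slave):
    """Estimate the integer (row, col) shift of `slave` relative to
    `master` by locating the peak of their FFT-based (circular)
    cross-correlation of zero-mean amplitude images."""
    m = master - master.mean()
    s = slave - slave.mean()
    xcorr = np.fft.ifft2(np.fft.fft2(s) * np.conj(np.fft.fft2(m))).real
    r, c = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # wrap offsets into the range [-n/2, n/2)
    if r > master.shape[0] // 2:
        r -= master.shape[0]
    if c > master.shape[1] // 2:
        c -= master.shape[1]
    return int(r), int(c)
```

Applied to an image and a circularly shifted copy of itself, the function recovers the shift exactly; with real data the correlation is done patch-by-patch so that warps and shears can be fitted as well as a rigid offset.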
Interferogram creation
To create the interferogram itself, the complex pixels of the 'master' image
must be multiplied by the complex conjugates of the same pixels of the 'slave'.
This will result in an image in which each cycle of color, or 'fringe',
represents a phase change of 2π radians.
A word on complex numbers:

If the amplitude and phase of a pixel in the master image are written
z₁ = a + bi = A₁(cos φ₁ + i sin φ₁), and similarly z₂ = A₂(cos φ₂ + i sin φ₂)
for the same pixel of the slave, then the complex conjugate of the slave pixel
is z₂* = A₂(cos φ₂ − i sin φ₂). a and b are the real and imaginary parts,
respectively. Multiplying these together (and remembering that a property of
the imaginary unit i is that i² = −1) gives

z₁z₂* = A₁A₂[cos(φ₁ − φ₂) + i sin(φ₁ − φ₂)],

a complex number whose amplitude is the product of the two amplitudes and
whose phase is the difference in phase between the two images. (Note that
multiplying a pixel by its own conjugate, zz* = (a + bi)(a − bi) = a² + b²,
leaves only the squared amplitude - the phase information survives only
because the two images differ.)

The phases for each image may not be the true phases corresponding directly
to range, since phase errors can result from processing, but as long as the
same processing is performed for each image, the phase errors should be
identical, so when the images are differenced they should be eliminated
(Gabriel et al, 1989).
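The cross-multiplication itself is a one-liner in NumPy (array names are mine):

```python
import numpy as np

def interferogram(master, slave):
    """Cross-multiply coregistered single-look-complex images: the
    product master * conj(slave) has phase phi1 - phi2 and amplitude
    A1 * A2 at every pixel."""
    return master * np.conj(slave)
```

For a single pixel pair with amplitudes 2 and 3 and phases 1.2 and 0.5 radians, the product has amplitude 6 and phase 0.7, exactly the phase difference.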
References
Gabriel, Andrew K., Richard M.Goldstein and Howard A.Zebker (1989)
Mapping Small Elevation Changes Over Large Areas: Differential Radar
Interferometry
Journal of Geophysical Research, Vol.94, No.B7, pp.9183-9191
The Alaska SAR Facility:
http://www.asf.alaska.edu/
JPL's ROI-PAC processing software:
http://www.seismo.berkeley.edu/~dschmidt/ROI_PAC/
Atlantis Scientific's Webpage:
http://www.atlsci.com
7. Baselines and Orbits
The previous sections have shown the importance of having an accurate
knowledge of satellite position and of the length of the baseline between the
satellite positions for each image. We saw in Chapter 3, for example, that any
ambiguities in baseline calculation will be translated through the flattening
operation into a distortion in the interferogram. Errors in the baseline will
lead to a nearly constant gradient of apparent deformation across a scene,
which can account for up to 6 cm of apparent relative ground displacement
(Klinger et al, 2000). Adjustment of the baseline length to minimize this
distortion, usually carried out by trying to minimize deformation fringes in
areas with known zero displacement, is therefore a part of the InSAR
processing technique.
The radar satellites currently in use were not designed specifically for InSAR,
so their orbits are not as well constrained as we would like. It is now possible
to design satellites with orbit accuracies of a few cm (this is a science in itself
- see David Sandwell's notes on orbits (2001)), but for now we must make the
best of what we have. ERS-1 and -2 have the best orbit control so far.
Optimum baseline length is a function of radar wavelength and the desired
result. If the aim is to measure surface deformation, the baseline should be as
short as possible, to eliminate the effects of topography. If the aim is to
measure topography, the baseline should not be too short, as this will reduce
the sensitivity to surface height, but also not too long, as the speckle effects
for the images will be too different and the interferogram will suffer
incoherence and decorrelation. If the terrain is mountainous, smaller
baselines are optimal; if the terrain is subdued, larger baselines are
necessary to measure topography.
The dependence on sensor type and wavelength comes from the fact that both
observations should be within the effective beam width of a reradiating
resolution element (Gabriel et al, 1989). The use of RADARSAT's fine-beam
mode (see Chapter 14) allows for baselines of approximately 1 km, which gives
greater sensitivity for measuring topography (although it is very difficult to
use for measuring deformation). Optimal baselines for measuring topography
with ERS-1 and -2 usually fall between 300 and 500 m.
The importance of which direction the antennas are separated is also a
function of the purpose of making the interferogram. If the antennas are
separated parallel to the line of flight (in 'azimuth', with no 'cross track'
separation) then surface deformations are more easily measured as
topographic effects are minimized. Processing must remove the effects of
pitch, yaw and roll of the platform. Antennas separated perpendicular to the
line of flight (in 'range') will measure topography, after the component of the
baseline that is not perpendicular to the line of flight is estimated and
geometric rectification applied (Gabriel et al, 1989).
The orbit information that comes in the header files of the image data is not
usually accurate enough for the purposes of InSAR. Precise orbits can be
computed by locating ground control points on the image, or, in the case of
ERS, downloaded from Delft University (see Chapter 15). A good illustration of
how important it is to use precise orbits, when available, is in a paper about
creating an interferogram over the 1999 Hector Mine earthquake, by Sandwell
et al (2000):
"The first interferogram, formed 19 hours after the download, had an
artificial cross-track slope of 20 fringes (560mm) caused by errors in the
predicted orbit. The interferogram was re-computed 5 days later using the
more accurate 'fast-delivery' orbit and no slope corrections were needed."
Sun et al (2000) discuss the merits of using large numbers of ground control
points (GCPs) to improve baseline estimates by calculating the precise 3D
position of the satellite. Since the baselines are rarely parallel, it is often
necessary to compute a number of baselines at several points along the flight
lines. They don't mention how they obtain these GCPs, but I guess they
identify bright reflectors in the image and then survey these using either
traditional surveying techniques or GPS.
An interesting proposal by the people at UCSD is to place radar reflectors at
GPS stations (or place them nearby and tie them precisely to the GPS stations
using ground survey methods). It should then be possible to precisely locate
these SAR pixels to precisions of less than 1mm. I'm not sure if anyone else
has tried this yet (CHECK).
Computing baselines, from Price and Sandwell (1998):
If s(t1) is the position of the satellite at time t1 and s(t2) is its position
at the time of closest approach (where s1 = (x1, y1, z1) and s2 = (x2, y2, z2)),
then the baseline length is B = |s(t2) - s(t1)|.
The baseline elevation angle, α, is α = tan⁻¹(Bv/BH), where Bv and BH are the
local vertical and horizontal components of the baseline: Bv = (s2 - s1)·s1/|s1|
and BH = ±√(B² - Bv²) (the sign is positive in the radar look direction).
Figure 12 Baseline geometry.
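These formulas can be evaluated numerically; the sketch below uses invented satellite position vectors (round illustrative numbers, not real orbit data) and NumPy.

```python
import numpy as np

# Hypothetical satellite positions (meters, earth-centered frame) at the
# two acquisition times - illustrative values only, not real ERS orbits.
s1 = np.array([7.0e6, 1.0e5, 5.0e4])
s2 = s1 + np.array([0.0, 200.0, 150.0])

B = np.linalg.norm(s2 - s1)                      # baseline length |s(t2) - s(t1)|
Bv = np.dot(s2 - s1, s1) / np.linalg.norm(s1)    # local vertical component
BH = np.sqrt(B**2 - Bv**2)                       # horizontal component (sign set by look direction)
alpha = np.arctan2(Bv, BH)                       # baseline elevation angle
```

For this made-up geometry the baseline is 250m long and nearly horizontal, since the separation is almost perpendicular to the position vector s1.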
Satellites can usually view a scene from different directions (which is
useful for measuring deformation in more dimensions than just the LOS).
Ascending orbits fly south to north; descending orbits fly north to south.
References
Klinger, Yann, Remi Michel and Jean-Philippe Avouac (2000)
Co-seismic deformation during the Mw 7.3 Aqaba earthquake (1995) from ERS-SAR interferometry
Geophysical Research Letters, Vol.27, No.22, pp.3651-3654
Price, Evelyn J. and David T.Sandwell (1998)
Small-scale deformations associated with the 1992 Landers, California,
earthquake mapped by synthetic aperture radar interferometry phase
gradients.
Journal of Geophysical Research, Vol.103, No.B11, pp.27001-27016
Sandwell's notes on orbits:
http://topex.ucsd.edu/insar/orbits.pdf
Scharoo, R and P.N.A.M. Visser (1998)
Precise orbit determination and gravity field improvement for the ERS
satellites
Journal of Geophysical Research, Vol.103, pp.8113-8127
Guoqing Sun, K.Jon Ranson, Jack Bufton and Michael Roth (2000)
Requirement of Ground Tie Points for InSAR DEM Generation
Photogrammetric Engineering and Remote Sensing, Vol.66, No.1, pp.81-85
InSAR/GPS Integration:
http://topex.ucsd.edu/SAR/proposals/sar_gps.html
8. Phase Unwrapping
Although the final fraction of a wave that was received at the satellite is
known, the integer number of complete wavelengths that came before this is
not. Referring to Figure 4, this means that we do not know how many times the
phase looped round the circle before it was finally measured at the sensor as a
fraction of 2π. There is therefore an integer number, which must be calculated
and then multiplied by 2π (people call this 'knowing the phase modulo 2π'), in
order to obtain the total distance between the sensor and each pixel. This is
very similar to ambiguity resolution for very precise, carrier-wave GPS
surveying. In other words, a phase measurement of x radians cannot be
distinguished from a phase measurement of x+2πn radians, where n is any
integer (Zebker and Goldstein, 1986), so targets at different heights can still
appear at the same phase.
As an example, a phase change of 2π is equal to 11.8cm of displacement along
the LOS for JERS-1 data (Kimura and Yamaguchi, 2000), so for 20cm of
displacement the phase change is 3.4π. Because the phase can only be measured
modulo 2π, though, only 1.4π will be recorded by the satellite.
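The wrap-around is easy to reproduce; this short sketch uses the JERS-1 numbers above to compute the total phase and the wrapped phase that the sensor actually records.

```python
import math

half_wavelength = 0.118   # JERS-1: one 2*pi cycle per 11.8 cm of LOS displacement
displacement = 0.20       # LOS displacement in meters

total_phase = 2 * math.pi * displacement / half_wavelength   # about 3.4*pi
wrapped_phase = math.fmod(total_phase, 2 * math.pi)          # about 1.4*pi - all the sensor sees
```

The integer cycle (here 2π) is exactly the ambiguity that phase unwrapping must restore.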
For DEMs of areas with very slight relief, or differential interferograms of
areas that have undergone very little deformation, it is not always necessary to
unwrap. It may also be easy in some cases to interpret the image without the
hassle of unwrapping. If an image of surface displacement is to be unwrapped,
it is usually easier to unwrap the image containing the deformation phase,
rather than the images containing topographic phase, as the deformation phase
is likely (although not always, in the case of large earthquakes) to be a
smaller change than that for topography.
Although there have been many attempts made at developing an automated
unwrapping technique, this step still requires user input and a certain level of
artistic license. Whilst fringes may be visible to the naked eye, computer
algorithms often find them harder to distinguish, particularly if the noise level
is high. This is something to bear in mind when deciding whether to unwrap or
leave wrapped, as unwrapping can lead to loss of signal over areas that had
fringes that were visually interpretable.
The general method for phase unwrapping is to choose a starting location,
where there is little noise and the phases are bright and clear, then unwrap its
neighbors in expanding contours by adding to them the multiple of 2π which
minimizes the phase change between adjacent points. This still leaves a global
2πm ambiguity, where m is an undetermined integer constant. This
ambiguity can only be removed using ground control points (Zebker and
Goldstein, 1986, Gabriel et al, 1989).
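In one dimension this nearest-neighbor integration reduces to a few lines; the sketch below is a minimal illustration of the idea (not a production algorithm), adding to each sample the multiple of 2π that minimizes the jump from its unwrapped predecessor.

```python
import math

def unwrap_1d(wrapped):
    """Unwrap a 1-D sequence of phases (radians) by choosing, for each
    sample, the 2*pi multiple that minimizes the jump from its neighbor.
    The result is still only known up to a global 2*pi*m constant."""
    out = [wrapped[0]]
    for p in wrapped[1:]:
        k = round((out[-1] - p) / (2 * math.pi))  # integer cycle count
        out.append(p + 2 * math.pi * k)
    return out
```

It recovers a smooth ramp exactly as long as true neighbor-to-neighbor changes stay below π; noise or sharp jumps larger than that defeat it, which is exactly the failure mode discussed next.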
Problems arise when the algorithm encounters a sharp jump in phase, or when
the phase is very noisy. This is where the user's understanding of the image
comes into play, as they must delimit phase breaks, over which the algorithm
will not integrate. For example, in interferograms over an earthquake rupture,
incoherence is likely to be a serious problem close to the fault, causing
difficulties in phase unwrapping. A previous geological map of the fault can be
useful for defining a discontinuity along the fault, over which the phase may
not be directly unwrapped.
Common phase unwrapping techniques
Iterative Disk Masking
A number of seeds are placed over the image and each seed acts as a new
starting point for phase unwrapping, as phase differences are integrated in
expanding contours. These seeds are placed in areas with good coherence and
where phases have little noise. This is the method that the Atlantis Scientific
EarthView InSAR software uses.
The Goldstein, Hartle and Nearest Neighbor Methods
All these techniques are fairly similar. They calculate the gradient from one
point to the next and integrate this to form a smoother path (Goldstein et al, 1988).
Phase along a closed path should return to the starting value.
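This closed-path property gives a practical consistency check (the residue idea that the residue-cut algorithms of Zebker and Lu (1997) build on): sum the wrapped phase differences around the smallest closed loop and test whether they return to zero. A minimal sketch, with invented pixel phases:

```python
import math

def wrap(d):
    """Wrap a phase difference into (-pi, pi]."""
    return math.atan2(math.sin(d), math.cos(d))

def residue(p00, p01, p11, p10):
    """Integer 'charge' of a 2x2 loop of wrapped phases: the sum of wrapped
    differences around the loop, divided by 2*pi. Zero means path-independent
    unwrapping is safe here; +1 or -1 flags an inconsistency."""
    total = (wrap(p01 - p00) + wrap(p11 - p01) +
             wrap(p10 - p11) + wrap(p00 - p10))
    return round(total / (2 * math.pi))
```

Nonzero loops mark the points that the user-drawn phase breaks must isolate before integration.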
Least Squares
This usually minimizes the function that describes the difference between
wrapped phases of adjacent rows and columns. It is more effective than some
other methods as it can work for corrupt data and poor geometry, but it has
the disadvantage that it will not just calculate integer numbers of cycles
(Zebker and Lu, 1997).
References
Gabriel, Andrew K., Richard M.Goldstein and Howard A.Zebker (1989)
Mapping Small Elevation Changes Over Large Areas: Differential Radar
Interferometry
Journal of Geophysical Research, Vol.94, No.B7, pp.9183-9191
Goldstein, R.M., H.A.Zebker and C.L.Werner (1988)
Satellite radar interferometry: Two-dimensional phase unwrapping
Radio Science, Vol.23, pp.713-720
Kimura, Hiroshi and Yasushi Yamaguchi (2000)
Detection of Landslide Areas Using Satellite Radar Interferometry.
Photogrammetric Engineering and Remote Sensing, Vol.66, No.3, pp.337-344
Madsen and Zebker (1998)
Imaging Radar Interferometry; in Principles and Applications of Imaging Radar,
Manual of Remote Sensing; American Society of Photogrammetry and Remote
Sensing, Chapter 5, pp.270-358
Zebker, H.A. and R.M.Goldstein (1986)
Topographic Mapping From Interferometric Synthetic Aperture Radar
Observations; JGR, Vol.91, No.B5, pp.4993-4999
Zebker, Howard A. and Yanping Lu (1997)
Phase Unwrapping Algorithms for Radar Interferometry: Residue-Cut, Least
Squares and Synthesis Algorithms
Submitted to JOSA-A (on Howard Zebker's webpage)
9. Phase Gradients
The use of phase gradients can eliminate the need for phase unwrapping.
Like phase differences, phase gradient is a function of topography and surface
deformation. Phase gradient caused by topography can be eliminated from
interferograms using the same technique as that for phase difference. Phase
gradient is computed from the real and imaginary parts of the interferogram
(hence it is unique and there are no 2π ambiguities). The phase gradient can
also be scaled by any real number (unlike phase difference, which can only be
scaled by an integer number).
Large scale deformations don't show up that well on phase gradient images -
they appear as more of a regional shift than local displacements. The big
advantage of using a phase gradient approach, however, is that it is very
sensitive to small displacements, that otherwise may not be identified.
"Small scale deformations are associated with secondary fractures,
preexisting faults, dry lake beds, and mountainous regions; they provide insight
into the formation of such geomorphic features and help define the role of
these features in fault interactions" (Price and Sandwell, 1998).
Price and Sandwell (1998) use phase gradients to examine short-wavelength
features of the 1992 Landers Earthquake, which reveal previously unresolved
strain patterns.
The Math
The phase gradient (note the symbol - don't confuse this with phase
difference,  - the  indicates that it is a gradient operator), can be
expressed as  ( x) 
RI  IR
, where R and I are the real and imaginary
R2  I 2
parts of the complex SAR signal (from Price and Sandwell, 1998).
more on this - how did they get this expression?
see Price and Sandwell, p27006
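The expression can be checked numerically by building a synthetic signal with a known smooth phase (the values below are invented for illustration) and comparing the R-and-I formula against the known gradient.

```python
import numpy as np

x = np.linspace(0.0, 10.0, 2000)
phi = 0.3 * x**2                      # known smooth phase: its gradient is 0.6*x
R, I = np.cos(phi), np.sin(phi)       # real and imaginary parts of the signal

dR = np.gradient(R, x)
dI = np.gradient(I, x)

# Phase gradient straight from R and I - no unwrapping, no 2*pi ambiguity,
# even though phi itself exceeds 2*pi many times over this interval.
grad_phi = (R * dI - I * dR) / (R**2 + I**2)
```

Note that grad_phi tracks 0.6x smoothly even where the wrapped phase of (R, I) jumps, which is the whole point of the technique.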
References
(CHECK!) Peltzer, G., K.W.Hudnut and K.L.Feigl (1994)
Analysis of coseismic surface displacement gradients using radar
interferometry: new insights into the Landers earthquake
Journal of Geophysical Research, Vol.99, No.B11, pp.21971-21981
Price, Evelyn J. and David T.Sandwell (1998)
Small-scale deformations associated with the 1992 Landers, California,
earthquake mapped by synthetic aperture radar interferometry phase
gradients.
Journal of Geophysical Research, Vol.103, No.B11, pp.27001-27016
10. Quantifying displacement
Reading the interferogram
The basic information that you need to know to interpret an interferogram is
the wavelength of the sensor, as this will tell you what change in range to the
satellite each fringe (a whole cycle of color) represents. Each fringe
represents one half of the wavelength, which is equal to a 2π increment of
phase. For example, this number is 28mm for ERS satellites, which have a full
wavelength of 56mm.
To calculate the total amount of displacement at a point in an unwrapped
interferogram, it is usual to estimate where the fringe of zero displacement is,
which is far from the earthquake, and then count back from there.
Alternatively, you can take a point with a known displacement (from a survey
point or GPS station, for example), translate this displacement into LOS
displacement, and count from there. It is important to look at the key and see
which order of fringes represents movement towards the satellite. The
opposite order will represent movement away from it (like in Figure 13).
Figure 13 Interpreting interferogram fringes.
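Counting fringes then converts directly to LOS range change; a small helper, assuming the ERS 56mm wavelength quoted above:

```python
# ERS: one fringe (a full color cycle, 2*pi of phase) = half the 56 mm wavelength.
WAVELENGTH_MM = 56.0
MM_PER_FRINGE = WAVELENGTH_MM / 2.0   # 28 mm of LOS range change per fringe

def los_change_mm(n_fringes):
    """LOS range change for a fringe count measured from the zero-displacement
    fringe; the sign (toward or away from the satellite) comes from the key."""
    return n_fringes * MM_PER_FRINGE
```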
Some computer algorithms have been created to automatically create a grid
over the area and calculate height or displacement at each point (for example
Beauducel et al, 2000). This allows for calculation of standard deviations.
InSAR only measures 1D changes along the LOS. If you only have orbits flown
in one direction, therefore, only 1D displacements can be resolved. The
addition of data from other, non-parallel, orbits will allow for resolution of the
displacement vectors in their 3D entirety. In practice, we can only resolve a
maximum of two components of displacement from satellite measurements,
from ascending and descending orbits, and this depends on whether data from
both directions is available. Problems can occur where displacement is close to
perpendicular to the LOS.
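The LOS projection itself is just a dot product with the satellite's unit look vector; the sketch below uses an invented, roughly ERS-like look vector purely for illustration.

```python
import numpy as np

# Hypothetical unit look vector (east, north, up components), ground to satellite.
look = np.array([0.38, -0.07, 0.92])
look /= np.linalg.norm(look)

def to_los(d_enu):
    """Project a 3-D (east, north, up) ground displacement onto the LOS.
    Any motion perpendicular to 'look' is invisible to a single geometry."""
    return float(np.dot(d_enu, look))
```

A displacement perpendicular to the look vector maps to zero, which is why a second, non-parallel orbit geometry is needed to recover more components.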
Inverse Methods
Displacements taken from differential interferograms can be used in
conjunction with inverse mathematics (simulated annealing seems to be a
particularly popular method, since it explores the whole model space and does
not allow for local minima) to estimate fault parameters such as location,
length, strike, dip and width, along with variable slip distributions along the
length of the rupture. The derived fault parameters and equations for
displacement in a layered elastic space (such as those of Okada, 1985) can be
used to form synthetic interferograms (by calculating ground displacements
over a grid and converting these to LOS displacements), which when
differenced with true interferograms can outline InSAR noise and deficiencies
in the slip distribution, along with artifacts that cannot be explained by simple
models of deformation (e.g. Delouis et al, 2000).
Studies such as that by Hurst et al (2000) have also investigated comparisons
between SAR interferograms and synthetic interferograms derived from the
inversion of GPS data. These also have the potential for outlining atmospheric
noise, problems in phase unwrapping and features in InSAR images that could
not have been identified with GPS alone.
References
Beauducel, Francois, Pierre Briole and Jean-Luc Froger (2000)
Volcano-wide fringes in ERS synthetic aperture radar interferograms of Etna
(1992-1998): Deformation or tropospheric effect?
Journal of Geophysical Research, Vol.105, No.B7, pp.16391-16402
Delouis B., P.Lundgren, J.Salichon and D.Giardini (2000)
Joint inversion of InSAR and teleseismic data for the slip history of the 1999
Izmit (Turkey) earthquake
Geophysical Research Letters, Vol.27, No.20, pp.3389-3392
Hurst, Kenneth J., Donald F.Argus, Andrea Donnellan, Michael B.Heflin, David
C.Jefferson, Gregory A.Lyzenga, Jay W.Parker, Mark Smith, Frank H.Webb
and James F.Zumberge (2000)
The coseismic geodetic signature of the 1999 Hector Mine Earthquake
Geophysical Research Letters, Vol.27, No.17, pp.2733-2736
Okada, Yoshimitsu (1985)
Surface Deformation due to Shear and Tensile Faults in a Half-space
BSSA, Vol.75, No.4, pp.1135-1154
11. Atmospheric Effects
The Theory
Since InSAR is based on converting precise time delays and phase shifts into
range distances, a propagation speed must be assumed. This is normally taken
to be constant, as if the wave was passing through a homogeneous medium.
ERS satellites, as an example, orbit at approximately 790km. This means that
the electromagnetic wave must pass through the ionosphere, stratosphere and
troposphere, and it must do this twice. The signal will therefore travel
through several changes in the index of refraction (as a result of changes in
pressure, temperature and water content) before returning to the sensor. The
refractive index of the atmosphere is higher than that of free space, which
lowers the velocity of the radar wave, increases the propagation time and
therefore contaminates the distance measurement.
Atmospheric effects are spatially variable and can produce from 80 to 290m
of topographic error for baselines from 400m down to 100m (Zebker et al, 1997)
and up to 10cm error in displacement in repeat pass differential
interferograms for humidity variations of 20% (Lu et al, 2000). The greatest
problem with atmospheric noise is that it can contaminate the deformation
signal, leaving interpretation open to debate. Larger baselines have smaller
propagation effects (why?) and, sadly, smaller baselines, which we need to get
the most accurate measurements of deformation, are affected the most.
The troposphere is the worst offender, due to its unstable nature and variable
humidity. Ionospheric effects can also, however, lead to signals in
interferograms, and are important in the auroral zone of polar regions.
Areas of high humidity, particularly at lower elevations and higher
temperatures, will suffer the most from atmospheric effects, whereas high,
dry areas will suffer the least. The atmosphere is generally more stable at
night.
Longer wavelengths are affected less than shorter wavelengths. Would it be
possible to remove ionospheric effects if you have two wavelengths, like you
can for dual-frequency GPS?
Recognizing atmospheric effects
Short wavelength artifacts are the most easily confused with tectonic
deformations. They typically have length scales of 5-10km, causing as much as
10cm excess two-way range (Price and Sandwell, 1998). This means that
atmospheric effects can account for up to 3 fringes in an ERS interferogram
(each fringe usually represents 2.8cm displacement).
Long wavelength artifacts are most likely to cause a planar phase gradient over
the interferogram, which looks similar to the effect of miscomputing the
baseline length. In fact, one of the easiest ways to reduce the effect of long
wavelength atmospheric artifacts is to adjust the baseline length until fringes
are not seen in areas with known minimal displacement.
Removing atmospheric effects
If there are multiple scenes over the area (and you have the money to obtain
them) then stacking is an effective technique for removing atmospheric
effects. It basically just averages all the interferograms to create one image
with a noise level reduced by a factor of √N, where N is the number of
interferograms used. It
doesn't actually eliminate atmospheric features, but it will reduce their
amplitude. The reduction of noise not only helps reduce atmospheric effects,
but also aids in phase unwrapping.
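The √N reduction is easy to demonstrate with simulated interferograms; in the sketch below each "interferogram" is a common signal plus independent noise (all values invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(42)

signal = 1.0      # common deformation phase (radians)
sigma = 0.5       # per-interferogram noise level
N = 16            # number of interferograms stacked

# Each row is one simulated interferogram: signal + independent noise.
igrams = signal + sigma * rng.standard_normal((N, 20000))
stack = igrams.mean(axis=0)   # noise std drops to roughly sigma / sqrt(N)
```

With N = 16 the noise standard deviation falls by a factor of 4, while atmospheric features common to one acquisition are diluted rather than removed.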
Alternatively, an independent source such as GPS (see below) or meteorological
measurements of pressure and humidity can be used to calibrate the image, by
using measurements of atmospheric delay at a few points to build a model over
the whole area.
If the position of a continuously monitoring GPS receiver is well constrained,
then any delays in the signal from the GPS satellite to the receiver can be
calculated. These delays can then be used to calculate atmospheric delay. It
is, in fact, usual practice to estimate tropospheric delay when processing highly
accurate GPS, with the intention of removing it from the signal. Inversion of
the atmospheric delay at GPS points, to solve for atmospheric delay over the
entire region, can then aid in the correction of interferograms (Williams et al,
1998), although the fact that the troposphere is so variable over small
distances can cause problems.
Sandwell and Sichoix (2000) suggest a method of using a low resolution DEM
(up to 1km spacing) to constrain the long-wavelength phase errors. A similar
technique to this would be to compare GPS heights with heights obtained
through InSAR as a means of calibration.
As a last resort it is possible to use external information such as geological
measurements, to remove atmospheric effects by reducing the number of
fringes in areas that should show little deformation. This method is to be
avoided if possible, as it introduces scope for purposely fitting the results to
an alternative dataset.
The Math
The following is taken mainly from Zebker et al (1997):
The complex amplitude of a unit-intensity plane wave at position x in a medium
is E = e^(j(kx - ωt)), where n(x) is the variable refractive index, λ is the
wavelength and k is the wavenumber, k = 2πn(x)/λ.
If we differentiate this, we get a relation between incremental path length dx
and incremental signal phase dφ in the form dφ = (2πn(x)/λ)dx. Integrating
along the propagation path then gives φ = ∫(2πn(x)/λ)dx.
If the wave is propagating through a vacuum, then n(x) = 1, so we get the
familiar equation φ = (2π/λ)x. In the previous chapters, therefore, we only
considered the phase to be dependent on wavelength λ and range x.
For the earth's atmosphere n(x) is not a constant (but always real and just a
little greater than 1), which makes things more complicated by introducing an
additional phase shift. We model n(x) for the earth's atmosphere as
1 + 10⁻⁶N(x), where N(x) is the refractivity. The factor of 10⁻⁶ shows how
small the departure from n(x) = 1 really is.
We can therefore write the previous equation as
φ = (2π/λ)x + (2π/λ)·10⁻⁶N(x)·x, or φ = (2π/λ)x + (2π/λ)Δx, where
Δx = (Δx)dry + (Δx)wet; (Δx)dry represents hydrostatic delay and (Δx)wet
represents delay due to water vapor.
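To get a feel for the size of the atmospheric term, the sketch below evaluates the extra path for a crude tropospheric model - an effective 8km path with a mean refractivity of 300, both invented round figures:

```python
import math

wavelength = 0.056          # ERS C-band wavelength (m)
path = 8.0e3                # assumed effective one-way path through the troposphere (m)
N_mean = 300.0              # assumed mean refractivity along that path

extra_path = 1.0e-6 * N_mean * path                     # ~2.4 m of equivalent extra path
extra_phase = (2 * math.pi / wavelength) * extra_path   # the added one-way phase shift
```

A couple of meters of one-way delay sounds enormous, but the nearly constant bulk of it cancels between the two passes; only the spatial and temporal variations of a few centimeters survive into the interferogram.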
References
Beauducel, Francois, Pierre Briole and Jean-Luc Froger (2000)
Volcano-wide fringes in ERS synthetic aperture radar interferograms of Etna
(1992-1998): Deformation or tropospheric effect?
Journal of Geophysical Research, Vol.105, No.B7, pp.16391-16402
Goldstein, R.M. (1995)
Atmospheric Limitations to repeat-track radar interferometry
Geophysical Research Letters, Vol.22, pp.2517-2520
Hanssen, Ramon F., Tammy M.Weckwerth, Howard A.Zebker and Roland Klees
(1999)
High-resolution Water Vapor Mapping from Interferometric Radar
Measurements.
Science, Vol.283, pp.1297-1299
Lu, Zhong, Dorte Mann, Jeffrey T.Freymueller and David J.Meyer (2000)
Synthetic aperture radar interferometry of Okmok volcano, Alaska: Radar
observations
Journal of Geophysical Research, Vol.105, No.B5, pp.10791-10806
Price, Evelyn J. and David T.Sandwell (1998)
Small-scale deformations associated with the 1992 Landers, California,
earthquake mapped by synthetic aperture radar interferometry phase
gradients.
Journal of Geophysical Research, Vol.103, No.B11, pp.27001-27016
Sandwell, David T and Lydie Sichoix (in press, 2000)
Topographic Phase Recovery from Stacked ERS Interferometry and a Low
Resolution DEM
Submitted to Journal of Geophysical Research
Tarayre, J. and D. Massonnet (1996)
Atmospheric propagation heterogeneities revealed by ERS-1 interferometry
Geophysical Research Letters, Vol.23, pp.989-992
Williams, Simon, Yehuda Bock and Peng Fang (1998)
Integrated satellite interferometry: Tropospheric noise, GPS estimates and
implications for Interferometric synthetic radar products.
Journal of Geophysical Research, Vol.103, No.B11, pp.27051-27067
Zebker, Howard A., Paul A. Rosen and Scott Hensley (1997)
Atmospheric effects in interferometric synthetic aperture radar surface
deformation and topographic maps.
Journal of Geophysical Research, Vol.102, pp.7547-7563
12. Limitations, Advantages and Resolution
Resolution
Figure 14 shows the factors that limit the spatial dimensions of detectable
signals. These include the sensor parameters of pixel size, signal noise and
swath width (see Table 2, p. 66, for these parameters for various sensors), the
upper and lower limits of the amount of deformation and atmospheric noise
effects. Deformation signals which are spatially smaller than a pixel or larger
than a scene cannot be detected by InSAR alone. These limits on spatial
resolution of deformation can be changed by processing techniques such as
stacking and filtering.
Figure 14 Spatial limitations of measuring deformation for the ERS satellites. Taken from Price (2001),
Chapter 1.
Phase noise can prevent measurements of deformation signals that are smaller
than a few mm. Massonnet (1995) reports measurements of the Landers
earthquake with a typical precision of 2 to 10mm over a 35,000km² area. To
measure deformation to a greater precision than this, we must better
understand all the effects on phase, particularly the atmosphere.
Limitations
Spatial Decorrelation
Larger baselines mean better resolution of topography and decreased levels of
atmospheric noise, but they also mean that the reflection from scatterers in
each pixel can change. Even if we were measuring deformation and had a way
to remove atmospheric effects, it is still not always possible to get short
baselines, since orbit control of the SAR satellites is not very precise. This
prevents successful coregistration and phase unwrapping of images, leaving
areas of gray where phase cannot be interpreted.
Shadowing and layover are also problems, with gray areas caused by blocking of the surface
from the view of the satellite by relief. This is particularly a problem in
mountainous areas, and for systems with steep look angles.
Temporal Decorrelation
If there is a long time between data takes, the interferogram can suffer
decorrelation as the scatterers in each pixel change over time. This is
particularly a problem in vegetated areas, and areas of heavy agriculture,
where the scatterers change every time a field is ploughed or a new crop
grown.
The problem is not so bad in the desert, and interferograms spanning as much
as 7 years have been made over very dry regions, although even changes in the
surface caused by sand blowing can cause decorrelation.
Advantages
- Don't need line of sight between stations.
- All-weather.
- Can be done at night.
- High spatial resolution (approx 20m pixel spacing) compared to other geodetic techniques.
- No need for instrumentation on the ground.
- Particularly sensitive to vertical displacements, unlike GPS.
- Worldwide coverage.
References
Burgmann, Roland, Paul A. Rosen and Eric J. Fielding (2000)
Synthetic Aperture Radar Interferometry to Measure Earth's Surface
Topography and its Deformation
Annu. Rev. Earth Planet. Sci., Vol.28, pp.169-209
Massonnet, D. (1995)
Application of Remote Sensing Data in Earthquake Monitoring.
Adv. Space Research, Vol.15, No.11, pp.1137-1144
Price, Chapter 1 (unpublished, 2001)
http://topex.ucsd.edu/insar/
13. Applications to neotectonics
The first study to introduce the idea of double differencing to measure
surface deformation was published by Gabriel et al (1989). The study uses
Seasat data over Imperial Valley, California, to measure ground swelling caused
by water-absorbing clays. Massonnet et al (1993) then used space-based InSAR,
with ERS data, to dramatic effect over the 1992 Landers earthquake (see
Figure 15), and Goldstein et al (1993) demonstrated the utility of InSAR for
measuring glacial ice flow in Antarctica. Since these studies, there have been
increasing numbers of studies using InSAR to monitor neotectonics, with
increasingly imaginative uses for the technique. What follows is just a few
examples of interesting studies.
Near real-time studies
Sandwell et al (2000) have illustrated the utility of InSAR for near real-time
mapping of the 1999 Hector Mine earthquake. The ability to produce
interferograms within days after an event will be a huge help to geologists,
since the interferograms will illuminate areas of rupture for mapping, and
ensure that smaller ruptures are discovered and mapped before the surface
traces have been destroyed.
Near real-time InSAR can only be achieved with efficient ground stations for
SAR data download, reliable processing software and precise, real-time orbits.
The timing also depends on the timing of an overflight of a SAR satellite after
the event. Sandwell et al, 2000, produced an interferogram 20 hours after the
first orbit of the area after the earthquake, and this was 4 days after the
earthquake.
Figure 15 Interferogram of deformation caused by the 1992 Landers Earthquake. One fringe
represents 2.8cm of motion. Scale bar: 10km. Image was taken from
http://wwwee.Stanford.edu/~zebker/stanfordreport/landersigramcolor.gif.
Studies of dynamic processes
Delouis et al (2000) use a combined inversion of InSAR and teleseismic data to
study the distribution of slip with respect to both space and time, at the time
of an earthquake. The use of both data sets limits tradeoffs between rupture
timing and slip location, in that InSAR provides a detailed slip distribution upon
which to base teleseismic measurements. Previous studies of this nature,
without geodetic constraints on slip, could not determine if changes in rupture
velocity were real, or a function of the tradeoff. The study showed that for
stretches of the fault with high coseismic displacements, rupture velocities
were high, whereas stretches of the fault with low coseismic displacements
saw lower rupture velocities. The slower patches correspond with geological
barriers, i.e. stronger patches of ground.
Postseismic Deformation
"Postseismic deformation includes aftershocks, afterslip on and surrounding
the coseismic rupture, transient slip on nearby faults, and viscous relaxation of
the mid- to lower- crust and upper mantle" (Burgmann et al, 2000).
By characterizing post-seismic deformation after an earthquake using geodetic
measurements, it is possible to quantify ductility of the upper mantle and thus
characterize the strength of the lithosphere. Previous studies have been
hampered by the problems of separating broadscale, deep relaxation and
localized crustal afterslip, which can give similar deformation patterns. A
combination of horizontal measurements from GPS and vertical measurements
from InSAR can help to avoid this problem. Pollitz et al (2000) use such a
combination to characterize mantle viscosity after the 1992 Landers
Earthquake.
References
Burgmann, R., E. Fielding and J. Sukhatme (1998)
Slip along the Hayward fault, California, estimated from space-based SAR
interferometry
Geophysical Research Letters, Vol.24, pp.37-40
Delouis B., P.Lundgren, J.Salichon and D.Giardini (2000)
Joint inversion of InSAR and teleseismic data for the slip history of the 1999
Izmit (Turkey) earthquake
Geophysical Research Letters, Vol.27, No.20, pp.3389-3392
Gabriel, A.K., R.M.Goldstein and H.A.Zebker (1989)
Mapping small elevation changes over large areas; differential radar
interferometry.
Journal of Geophysical Research, Vol.94, No.7, pp.9183-9191
Goldstein, R.M., H.Engelhardt, B.Kamb and R.M.Frolich (1993)
Satellite radar interferometry for monitoring ice sheet motion: Application to
an Antarctic ice stream
Science, 262, pp.1525-1530
Klinger, Yann, Remi Michel and Jean-Philippe Avouac (2000)
Co-seismic deformation during the Mw 7.3 Aqaba earthquake (1995) from ERS-SAR interferometry
Geophysical Research Letters, Vol.27, No.22, pp.3651-3654
Massonnet, D. (1995)
Application of Remote Sensing Data in Earthquake Monitoring.
Adv. Space Research, Vol.15, No.11, pp.1137-1144
Massonnet, D., M.Rossi, C.Carmona, F.Adragna, G.Peltzer, K.Feigl and T.Rabaute
(1993)
The displacement field of the Landers earthquake mapped by radar
interferometry
Nature, Vol.364, pp.138-142
Massonnet, D., K.Feigl, M.Rossi and F.Adragna (1994)
Radar interferometric mapping of deformation in the year after the Landers
earthquake
Nature, Vol.369, pp.227-230
Peltzer, G., K.W.Hudnut and K.L.Feigl (1994)
Analysis of coseismic surface displacement gradients using radar
interferometry: new insights into the Landers earthquake
Journal of Geophysical Research, Vol.99, No.B11, pp.21971-21981
Peltzer, G., and P.Rosen (1995)
Surface displacement of the 17 May 1993 Eureka Valley, California earthquake
observed by SAR interferometry
Science, Vol.286, pp.1333-1336
Pollitz, F.F., G.Peltzer and R.Burgmann (2000)
Mobility of continental mantle: Evidence from postseismic geodetic
measurements following the 1992 Landers earthquake
Journal of Geophysical Research, Vol.105, No.B4, pp.8035-8054
Price, E.J. and D.T.Sandwell (1998)
Small-scale deformations associated with the 1992 Landers, California,
earthquake mapped by synthetic aperture radar interferometry phase
gradients.
Journal of Geophysical Research, Vol.103, No.B11, pp.27001-27016
Sandwell, D.T., L.Sichoix, D.Agnew, Y.Bock and J.-B.Minster (2000)
Near real-time radar interferometry of the Mw 7.1 Hector Mine Earthquake
Geophysical Research Letters, in press.
Zebker, H.A., P.A.Rosen, R.M.Goldstein, A.Gabriel and C.L.Werner (1994)
On the derivation of coseismic displacement fields using differential radar
interferometry: The Landers earthquake
Journal of Geophysical Research, Vol.99, No.B10, pp.19617-19634
14. Obtaining SAR data
A good amount of SAR data is available (see Table 2 for common SAR sensors),
but none of the existing SAR sensors was designed with InSAR in mind, so it
can sometimes be difficult to find the scenes you want with the desired
temporal and spatial baselines. The ERS satellites currently have the best
orbit control (and the Delft University precise orbits can be downloaded for
free). ESA also flew the 'Tandem Mission' from May 1995 to June 1999, in
which ERS-1 and ERS-2 followed the same flight path with a one-day
separation, making it easier to produce topographic interferograms. With the
advent of SRTM this may be less of a consideration.
Satellite  Agency/country            Launch  Band/Freq (GHz)            Altitude  Repeat Period  Incidence  Swath Width  Resolution
                                     Year                               (km)      (days)         Angle      (km)         (m)
Seasat     NASA/USA                  1978    L (1.3)                    800       3              23°        100          25
ERS-1      ESA                       1991    C (5.3)                    785       3, 35, 168     23°        100          25
JERS-1     NASDA/Japan               1992    L (1.2)                    565       44             35°        75           30
SIR-C      NASA/USA,                 1994    X (9.7), C (5.2), L (1.3)  225       variable       15-55°     15-90        10-200
           DASA/Germany, ASI/Italy
ERS-2      ESA                       1995    C (5.3)                    785       35             23°        100          25
Radarsat   Canada                    1995    C (5.3)                    792       24             20-50°     50-500       28
Table 2 SAR sensor characteristics, taken from Price (2001), Chapter 1
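The band frequencies in Table 2 determine the radar wavelength, which sets the sensitivity of the phase measurement (one fringe corresponds to half a wavelength of range change). The conversion is simply λ = c/f; a quick sketch using frequencies from the table (the helper function is my own, for illustration):

```python
# Convert radar carrier frequency (GHz) to wavelength (cm): lambda = c / f.
C_LIGHT = 2.998e8  # speed of light, m/s

def wavelength_cm(freq_ghz):
    """Wavelength in centimetres for a carrier frequency in GHz."""
    return C_LIGHT / (freq_ghz * 1e9) * 100.0

# Carrier frequencies taken from Table 2.
sensors = {"Seasat": 1.3, "ERS-1": 5.3, "JERS-1": 1.2, "Radarsat": 5.3}

for name, f in sensors.items():
    print(f"{name}: {wavelength_cm(f):.1f} cm")  # e.g. ERS-1: 5.7 cm
```

This reproduces the L/C/X wavelength ranges of Table 1 in the introduction, e.g. C band at 5.3 GHz gives the familiar ~5.7 cm ERS wavelength.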
SRTM
The Shuttle Radar Topography Mission (SRTM) was launched on February 11th,
2000, and lasted for 11 days. The aim of the mission was to map over 80% of
the earth's land surface using a dual-frequency (C- and X-band) fixed-antenna
SAR system, creating a DEM of the world with 30 x 30 m spatial sampling,
<16 m absolute vertical height accuracy, <10 m relative vertical height
accuracy and <20 m horizontal accuracy. The mission was a success, but
processing the data will take about two years, so it will probably become
available in 2002. See the SRTM website for more details; they have been
teasing us with sample images prior to the data release, such as in Figure 16.
Figure 16 SRTM perspective view, with a LandSat overlay, of the Caliente Range and Cuyama Valley
in California (from the SRTM website).
ERS data
To decide which Track and Frame numbers you need to order:
1. Go to the European Space Agency Earthnet On-Line Interactive (EOLI)
Query site - http://odisseo.esrin.esa.it/eoli/eoli.html
2. Download Swing if you don't have it; you need this plug-in to run the EOLI
applet, and the EOLI page should prompt you to download it. Install Swing in
the C:\ directory and restart your computer after installation.
3. Reload the EOLI webpage. This can take a while as it starts the applet and
loads a map.
4. Choose 'ERS/SAR' collection and 'Interferometry' query mode.
5. Navigate and zoom in to the area you want on the map.
6. Enter the start and end dates for your search.
7. Click on the 'Set Area' button and draw a box around the area you are
interested in on the map (you can also enter coordinates or track and frame
numbers here instead).
8. Click on 'Submit Query.'
9. Write down the track, frame and orbit numbers for the scenes you want.
(NB: check whether you still need to run candidate scene pairs through the
ESA baseline checker at this stage; see the ESA Baseline page listed in the
references.)
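Once you have the track, frame and orbit numbers written down (step 9), a first cut at pair selection is to group scenes on the same track/frame and keep those within an acceptable temporal baseline. A minimal sketch — the scene records below are hypothetical, not EOLI's actual output format:

```python
from itertools import combinations
from datetime import date

# Hypothetical scene records noted down from an EOLI query (step 9).
scenes = [
    {"track": 442, "frame": 2871, "orbit": 10001, "date": date(1995, 6, 1)},
    {"track": 442, "frame": 2871, "orbit": 15012, "date": date(1996, 5, 28)},
    {"track": 399, "frame": 2871, "orbit": 11500, "date": date(1995, 9, 10)},
]

def candidate_pairs(scenes, max_days):
    """Pairs of scenes on the same track/frame whose temporal
    baseline is at most max_days (spatial baseline still needs
    to be checked separately against the orbits)."""
    pairs = []
    for a, b in combinations(scenes, 2):
        same_geometry = a["track"] == b["track"] and a["frame"] == b["frame"]
        dt = abs((a["date"] - b["date"]).days)
        if same_geometry and dt <= max_days:
            pairs.append((a["orbit"], b["orbit"], dt))
    return pairs

print(candidate_pairs(scenes, max_days=400))
```

Only the first two scenes pair up here: they share track 442 / frame 2871 and are 362 days apart; the third scene is on a different track.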
The data must be ordered in SLC (Single Look Complex) format, or as raw data
if you have a SAR processor available. Processing the raw data to SLC
yourself allows greater control over phase and geometry, and may make it
easier to read the data into the InSAR processor if the SAR and InSAR
software come from the same developers.
WinSAR
UNR is a member of the WinSAR consortium, a group of Western US universities
and research groups formed to distribute ERS SAR data. If their archive holds
the data you need (a list of Tracks/Scenes is at http://topex.ucsd.edu/winsar,
shown in Figure 17 of this paper), you can ftp the scenes for free from
http://www.winsar.scec.org. The scenes come in RAW format, so you need a SAR
processor to convert the radar echoes to Single Look Complex format before
input to InSAR software. You need a password to do this, which must be
obtained through the university representative (at present John Bell).
Figure 17 ERS scenes currently available through WinSAR for the Western USA (maps taken from the
SDSU website).
Orbits
ERS precise orbits can be downloaded from the Delft University website.
Precise orbits are available for ERS-1 and -2 satellites 4-6 months after a data
take. Fast delivery orbits, which are not as accurate, are available
approximately 1 week after the data take and preliminary precise orbits are
available for ERS-2 approximately 1 month after the orbit has been flown.
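The latencies above can be collected into a small helper that says which Delft orbit products should exist for a given scene age; the product names and latency figures come from the paragraph above, while the function itself is just an illustrative sketch:

```python
def available_orbit_products(days_since_take, satellite="ERS-2"):
    """Which Delft orbit products should exist a given number of days
    after a data take (latencies from the text above)."""
    products = []
    if days_since_take >= 7:              # fast delivery: ~1 week
        products.append("fast delivery")
    if satellite == "ERS-2" and days_since_take >= 30:
        products.append("preliminary precise")  # ~1 month, ERS-2 only
    if days_since_take >= 120:            # precise: 4-6 months
        products.append("precise")
    return products

print(available_orbit_products(45))
# -> ['fast delivery', 'preliminary precise']
```

For fresh data you would typically start with fast delivery orbits and reprocess once the precise orbits appear.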
1. First check the arclist, which can be found on the Delft website, to
decide which 'ODR' files you need. The ODR files contain the orbital position
of the ERS satellites as a function of time.
For ERS-1 orbits go to
http://www.deos.tudelft.nl/ers/precorbits/orbits/ers_1dgm.shtml
For ERS-2 orbits go to
http://www.deos.tudelft.nl/ers/precorbits/orbits/ers_2dgm.shtml
NB:
- 'Arc' stands for 'orbit generation run'.
- There are often two possible orbits for a single ERS image. The
recommended 'begin' of the precise part of the arc is in the middle, which
is always over the Antarctic, so choose the orbit that best matches this
(check this point).
- It is useful to write down the ancillary data for your selected orbits,
especially the residuals.
2. ftp (in BINARY mode) the precise orbits from:
falcon.grdl.noaa.gov/pub/delft/ODR.ERS-1/dgm-e04
falcon.grdl.noaa.gov/pub/delft/ODR.ERS-2/dgm-e04
(Fast delivery orbits are stored in ~/dgm-e04.fd and preliminary precise
orbits in ~/dgm-e04.prelim.)
You will need to download both the arclist and the ODR files into the same
directory. The dgm-e04 directory is so named because the orbits are based on
the DGM-E04 gravity field.
If you don't know how to ftp:
1. Go to the C: prompt.
2. Change to the local directory you want to download orbits to by typing cd .. to move back a
directory and cd dir to change to a new directory.
3. Type ftp falcon.grdl.noaa.gov
4. For the username, type anonymous.
5. Type your email address as the password.
6. Type cd pub/delft to change to the Delft directory.
7. Type cd ODR.ERS-1/dgm-e04 or cd ODR.ERS-2/dgm-e04 to change to the right
directory for ERS-1 or ERS-2 orbits.
8. Type bin to make sure that you're going to get the ODR files in binary format.
9. Type get arclist, to ftp the arclist to your local directory.
10. Type get ODR.*** (where *** is the number of the ODR file that you need) (NB. To see a
list of files in this directory type ls).
11. Type bye to exit ftp.
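The remote directory layout in the steps above can be captured in a few lines, which makes it harder to mistype the path; the function name here is my own, and only the layout comes from the text:

```python
def odr_remote_path(satellite, odr_number):
    """Remote path on falcon.grdl.noaa.gov for a Delft precise-orbit
    ODR file (directory layout from the ftp steps above)."""
    if satellite not in ("ERS-1", "ERS-2"):
        raise ValueError("expected 'ERS-1' or 'ERS-2'")
    return f"pub/delft/ODR.{satellite}/dgm-e04/ODR.{odr_number}"

print(odr_remote_path("ERS-1", "123"))
# With Python's standard ftplib, the download itself would be roughly:
#   ftp = ftplib.FTP("falcon.grdl.noaa.gov")
#   ftp.login("anonymous", your_email)          # steps 4-5
#   ftp.retrbinary("RETR " + path, out.write)   # binary mode, as in step 8
```

The ftplib calls in the comment are sketched from the manual steps, not tested against the server.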
Section on finding other types of SAR data....??
References
Price, Chapter 1 (unpublished, 2001)
http://topex.ucsd.edu/insar/
Scharroo, R. and P.N.A.M.Visser (1998)
Precise orbit determination and gravity field improvement for the ERS
satellites
Journal of Geophysical Research, Vol.103, No.C4, pp.8113-8127
Toutin, Thierry and Laurence Gray (2000)
State-of-the-art of elevation extraction from satellite SAR data.
ISPRS Journal of Photogrammetry and Remote Sensing, Vol.55, pp.13-33
ESA EOLI Website:
http://odisseo.esrin.esa.it/eoli/eoli.html
Delft University Website:
http://www.deos.tudelft.nl/ers/precorbs/orbits
WinSAR websites:
http://topex.ucsd.edu/winsar
http://www.winsar.scec.org
Sources of SAR data:
http://www.atlsci.com/library/sar_sources.html
JPL Radar Page:
http://www.jpl.nasa.gov/radar/
SDSU Website:
http://www.ssi.sdsu.edu/class/geo647/geol600/insar/winsar.htm
ESA Baseline page:
http://odisseo.esrin.esa.it/baseline/baseline.html
SRTM website:
http://www.jpl.nasa.gov/srtm