Lab 4: Topographic Analysis

Astro3310 – Planetary Image Processing
Lab 4: Topographic Analysis
Due:
October 8, 2015
Reading:
Jankowski, D. G. and Squyres, S. W. (1991) Sources of error in planetary
photoclinometry, J. Geophys. Res. 96, 20907-20922.
Reflectance from Planetary Surfaces (Included in Appendix to this Lab)
Resources:
If you have any questions, Prof. Hayes’ office is 412 Space Sciences.
Credit:
A portion of this lab was produced by Prof. Oded Aharonson for Caltech course Ge151
(Planetary Surfaces), modified by Teaching Assistants Kevin Lewis and Alex Hayes in
2007/2008 and now Prof. Alex Hayes in 2015 (Note: Kevin Lewis is now an Assistant
Professor at Georgia Tech and looking for graduate students).
Instructions:
Download the data package from:
www.astro.cornell.edu/academics/courses/astro3310/LAB4_Data_Package_FA15.tar.gz
Work through the lab instructions below. Comments written in bold text are questions
that need to be addressed in your lab writeup. A consolidated list of the questions
can be found in the lab writeup, which can be downloaded from:
Purpose:
Over the course of this lab you will learn:
- How to find, retrieve, and process raw imaging data from MOC (and understand the process in general).
- How to obtain topographic data using photoclinometry on processed images.
- How to look at single-track altimetry data from the MOLA instrument.
- How to use stereo topographic data from MOC.
- The benefits and limitations of photoclinometry, altimetry, and stereo topography.
- Deducing some simple things about geomorphic features on Mars.
I. Image Processing:
We're going to start with pre-selected images to demonstrate basic image processing and some of
the software packages that are used to do it. The filename of the image is m0806185.imq. This is
an image taken in the Arabia Terra region of Mars in October 1999 by the Mars Orbiter
Camera (MOC), part of the Mars Global Surveyor (MGS) mission, which officially ended in
January 2007 (R.I.P.).
The image has been prepared for release by the Planetary Data System (PDS;
http://pds.nasa.gov/). These people are responsible for providing a consistent format for all
current and future planetary missions (they're slowly converting the older ones as well), known,
unsurprisingly, as PDS format. PDS images typically have either an .img or an .imq suffix (.imq is
simply a compressed version of .img). They have a text header with information specific to that
image followed by the binary image data itself.
After downloading the data package, move into the LAB 4 directory generated when you
extracted the archive.
Now we're ready to look at some images. From the DATA sub-directory, take a look at the
image file either in a text editor or by typing 'less m0806185.imq' at a Mac terminal, Windows
Cygwin terminal (if you have one), or Linux/Unix prompt (type 'q' to exit). There are other
files there but ignore them for now. The information contained in the header usually relates to
things which won't change such as the time the image was taken, the gain state of the camera or
the number of lines and samples in the image. Other information such as the position of the
spacecraft relative to the planet, the longitude and latitude of the image or the size of the pixels
in meters is subject to change as better solutions for the orbit of the spacecraft are developed.
This other information is not part of the image file as it would be a huge effort to revise the
whole dataset each time these new and improved numbers are derived.
MOC images are identified by an 8-digit name; the first 3 characters denote the mission phase
m08 in this case means the ninth mapping phase. Each mapping phase lasts roughly one calendar
month. Other prefixes exist such as ab1, sp2, fha and cal which stand for aerobraking phase 1,
science phasing 2, full high gain antenna and calibration phase respectively. These other phases
took place early in the mission and were pretty short. The remaining 5 characters represent the
orbit number (first 3 characters) and the image number (last 2 characters) within that orbit. So in
the case of our image m0806185, it is the 85th image taken on the 61st orbit in mapping phase 8.
MOC is a line scan camera i.e. it has one row of detectors that sweeps across the surface of the
planet as the spacecraft moves in its orbit. The resolution along this line (known as cross-track
resolution) is set by the height of the orbit i.e. if the spacecraft is far away from the planet each
pixel sees more of the surface. The mapping orbit was planned to be ~400 km high; however,
there is ~30 km of relief on the planet (from the summit of Olympus Mons to the bottom of the
Hellas impact basin), so the cross-track resolution can vary from place to place on the planet (by
almost 10%). The down-track resolution is determined by the groundspeed of the spacecraft (~3
km s-1) combined with the length of time the camera exposes each line. Ideally, you want the
spacecraft to move only a fraction of a pixel (to prevent smearing) in the down-track direction
during an exposure; however, the exposure must be long enough to collect sufficient light for a
high signal to noise ratio. The camera sensitivity, cross-track resolution and mapping orbit
parameters have been designed so that the pixels have roughly the same resolution in the cross
and down-track directions. Changes in distance to the surface and in surface reflectivity mean that
the pixels are not exactly square, so all MOC images have some non-unity aspect ratio.
Sometimes this can be quite severe and must be removed before the image makes any sense. In
general it is always nicer to correct the aspect ratio when looking at images (there is something
fundamentally disturbing about elliptical craters).
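The down-track timing requirement is simple arithmetic; this sketch uses the ~3 km/s groundspeed quoted above and an assumed ~3 m cross-track pixel size after summing (not a value stated in the lab):

```python
# Down-track smear check: the camera must read out one line in roughly the
# time the ground track advances one pixel.
groundspeed = 3000.0   # m/s, the ~3 km/s MGS groundspeed quoted above
pixel_size = 3.0       # m, an assumed cross-track pixel size after summing

line_time = pixel_size / groundspeed   # seconds available per line
print(line_time * 1e3, "ms per line")
```

Any exposure much longer than this, and detail in the down-track direction smears out.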
MOC images are always taken at the full resolution of the camera (~1.4 meters/pixel for the
narrow angle) but, to cope with the voluminous amounts of data, the spacecraft computer
intentionally degrades the resolution by summing and averaging pixels in both the cross and
down-track directions. This summing is not necessarily the same in each direction so in addition
to the inherent aspect ratio there is sometimes an induced aspect ratio due to this differential
summing. Note that the HiRISE camera onboard the Mars Reconnaissance Orbiter (MRO),
which is currently taking data at Mars, acquires images at ~0.3 meters/pixel!
ISIS (Integrated Software for Imagers and Spectrometers) http://isis.astrogeology.usgs.gov/
ISIS is a software package developed by the United States Geological Survey (USGS) for use
with spacecraft data. ISIS is really a collection of stand-alone programs, which perform
operations on datasets such as map-projecting of an image onto a reference ellipsoid. In this case
we are going to use some of the programs in the ISIS package to calibrate our image.
ISIS has its own environment, known as tae, from which you can call these programs. It's also
possible to call them directly from the terminal as stand-alone programs. If you use a Mac or
Linux computer, you can install ISIS yourself for free. There are a number of great tutorials and
help files available on the ISIS website if you do (not required for the lab).
ISIS works with its own image format called 'cubes' with a .cub extension. An ISIS cube
contains header information like the PDS format does. The first step will be to convert this image
into the ISIS format. This is known as level 0 processing, i.e. it isn't really processing at all but
just data translation. To save time and save you from having to install ISIS on your computer, we
have run all of the processing for you. However, the script used to run all of the necessary ISIS
programs to do this conversion is in the lab DATA sub-directory and, if you so desire, you can
look at it. The filename is prepare_mocmola.sh. To be fair, this script calls Perl scripts that are
part of the now-outdated ISIS 2 library (ISIS is now up to version 3 and working on version 4).
However, it would be relatively straightforward to convert the various functions to their ISIS 3
(and soon 4) equivalents.
OK, now we can take a look at this image. ISIS has created a file called m0806185.lev0.cub; this
is the raw data in ISIS 2 format. There are also updated ISIS 3 format files and PNG-formatted
8-bit images for each ISIS 2 data file. You can either view the ISIS 3 cube file
(m0806185.lev0.isis3.cub) in Matlab, or the PNG file (m0806185.lev0.png) in your favorite
image viewer (like Photoshop, which is now free to Cornell students!). To view the file in
Matlab you can use our familiar
friend read_isis.m, which is stored in the lab SUBROUTINES sub-directory. From your Matlab
window, move into the REPORT sub-directory for the lab (if you are not already there) using the
“cd” command or the graphical file toolbar. Once you are in the directory, load the matlab
programs relevant to the lab by typing “addpath(genpath(‘../.’))”. This will load all of the files in
the matlab subdirectory into Matlab’s memory. To load the image, type “img0 =
read_isis(‘m0806185.lev0.isis3.cub’);”. Now you can display the image using the “imagesc”
command by typing “figure; imagesc(img0’); colormap gray; axis equal;” or using more
advanced image display and stretching routines that we went over during the Matlab Image
Tutorial. If you want to open the image in an image browser, simply pick your favorite image
browser and open the PNG file.
Now that we have the image up, let's discuss it. Well, it's pretty ugly! It's streaky, looks like it has
an aspect ratio problem, and looks pretty uninspiring in general. However, you can turn this into a
thing of beauty and scientific worth in a few easy steps by putting the image through a
calibration pipeline (remember LAB 3?). First, however, take a moment to look over the image,
zooming in on the interesting stuff.
Next we will look at the level 1 cube for this image. Level 1 means that the image has been
radiometrically corrected, i.e. the previous DN values have been converted into meaningful I/F
values (see the appendix to this lab for the explanation of what I/F means). Open up the file
m0806185.lev1.isis3.cub (or m0806185.lev1.png) and take a look. It should be looking a whole
lot better now; the streakiness was due to different pixels on the line array having different
quantum efficiencies (i.e., the bright ones were very efficient at recording light and the darker
ones were not). The calibration process has taken account of that and 'flattened' all the columns.
You’ll also notice that the pixel values are now between 0 and 1, as opposed to 8-bit values
between 0 and 255.
This is still not the ideal situation, however, as the image is still distorted because of its aspect
ratio and because we have no idea what the scale is or which way is north. What we really
need to do to answer all of these questions is map-project the image into some reference
coordinate system. Level 2 images do just this. Open up m0806185.lev2.cub. The image has now
been projected into a 'sinusoidal' projection. This converts lat/lon (in degrees) to x/y coordinates
(in meters). The conversion is calculated roughly as y = (lat/360)*2*pi*R_Mars,
x = (lon/360)*2*pi*R_Mars*cos(lat). As
we will discuss in lecture, there are many other projections that all have their own advantages
and disadvantages, but they are beyond the scope of this lab.
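The conversion above can be written out directly; this is a minimal sketch (the Mars radius value here is an assumption, and the exact radius ISIS uses may differ):

```python
import math

R_MARS = 3396.2e3   # m, nominal equatorial radius of Mars (assumed value)

def sinusoidal(lat_deg, lon_deg):
    """Sinusoidal projection: lat/lon in degrees -> x/y in meters."""
    lat = math.radians(lat_deg)   # same as (lat/360)*2*pi
    lon = math.radians(lon_deg)
    y = R_MARS * lat
    x = R_MARS * lon * math.cos(lat)
    return x, y

# One degree of longitude at the equator spans ~59 km on Mars; at 60 N
# the cos(lat) factor shrinks it to half that.
print(sinusoidal(0.0, 1.0))
print(sinusoidal(60.0, 1.0))
```

The cos(lat) factor is what keeps east-west distances true as the meridians converge toward the poles.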
In the lab data directory, there is a MOC Wide-Angle image (M0806186.jpg), which corresponds
to the Narrow Angle image we've been looking at. Wide Angle images are helpful for
understanding the context of the very small area within a Narrow Angle image. This image can
be viewed either in matlab or photoshop. To load the image in matlab, type
“wac=imread(‘M0806186.jpg’);” and use the same commands as before. Alternatively, simply
open the image up using your favorite
image browser. Describe the various forms and processes you see in the Narrow Angle
image, using both it and the context MOC image. Try and figure out what you're looking
at. We recognize that this is not a geology class, but photointerpretation is relevant to image
processing so try and describe the context provided by the images. Label a few of the main
features and save a figure in your REPORT sub-directory.
Searching for your own images
Staying with the case of MOC images, several websites offer various search mechanisms. Each
one has its advantages and disadvantages; a quick summary of the main three follows.
http://ida.wr.usgs.gov
This site is run by the USGS. Its main benefit is that if you know the image name then you can
find and download it very fast as everything is arranged by orbit number.
http://www.msss.com
This is the website of the company that actually built the MOC camera. They are also interested
in the science it returns and have constructed a graphical database for all the publicly released
images. It's difficult to find a specific image in here, but this site is great for browsing regions of
the planet if you don't know exactly what you're looking for.
http://pds-imaging.jpl.nasa.gov/search/
This site is provided by the PDS. It's an extension of a search engine from the previous major
Mars mission (Viking). This allows searching graphically by zooming up on a map of the planet.
However, the thing that makes this site so very useful is the ability to search the MOC dataset
using forms. You can specify any number of image parameters (including latitude and longitude
ranges) and get a list of all the images that match your search. Preview thumbnails are
available, and each image can be viewed online and downloaded in any number of formats.
II. Altimetry and Topographic Analysis:
This section of the lab is going to make extensive use of Matlab. We've tried to write all the
more complex software in advance so that we can focus on the image processing and not turn the
lab into an exercise in computer programming. Note, however, that this software is not
bulletproof, and if it's given funky data you'll get funky answers. Also, the MOLA data is much
more difficult to extract from its archived PDS form into useful numbers than the MOC data
you used in the first part, but again, we'll try to make this as painless as possible.
I. Photoclinometry
This section is aimed at being an introduction to the concept of photoclinometry, or, if you're
into computer vision, shape-from-shading (deriving topography from surface shading). If you've
read the lab appendix by now, you should know how local slopes as well as albedo can affect the
brightness of a particular patch of ground (if you haven't read it yet, now would be a good time).
Photoclinometry is difficult to do in any quantitative way but this exercise will illustrate some of
the qualitative concepts associated with it.
If you haven’t already, you’ll need to add the path of the matlab routines for this lab by typing:
“addpath(genpath(‘../.’))”
from the lab REPORT sub-directory in the Matlab window.
There is a Matlab program called 'pclin.m' in the SUBROUTINES sub-directory. This is a basic
photoclinometry program which will allow you to vary some input parameters and see their
effects. Please look through 'pclin.m' and make sure you understand what it is doing.
This program uses the I/F values in the image m0806185.lev1.cub. Some of the program lines are
reproduced below:
res = 2.95;        % Size of a pixel in meters
sunaz = 320.65;    % Solar azimuth
inc_ang = 48.18;   % Solar incidence angle
These lines set up the variables specific to this image. The solar azimuth is important, as only
tracks that cut across the image parallel to the illumination direction can be used for
photoclinometry.
Explain briefly why this is the case.
b = b-shadow;
b is the variable that stores the I/F values along the sun's line of sight. Here we remove some
estimate of the I/F value which is due solely to the atmosphere (called shadow brightness,
because this is the only way shadows can be illuminated).
z = acos(b.*pi.*cos(inc_ang*(pi/180.0))./albedo);
z = z - (inc_ang*(pi/180.0));
z = res.*tan(z);
for i=2:length(z)
    z(i) = z(i-1)-z(i);
end
These few lines are the guts of the program. The first calculates the incidence angle for each
pixel based on its I/F value. The second then removes the incidence angle that a flat surface
would have, leaving just the local slopes for each pixel. The third line converts the slope of each
pixel to the change in height of that pixel. The for loop adds up all these height changes to find
the actual height of each pixel.
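A rough Python translation of these lines may help make the steps concrete (the b values here are made-up I/F numbers for illustration; the real program reads them from the cube):

```python
import math

res = 2.95       # pixel size in meters
inc_ang = 48.18  # solar incidence angle, degrees
albedo = 0.25    # assumed surface albedo
shadow = 0.02    # assumed atmospheric ("shadow") I/F contribution

# Hypothetical I/F values along the sun's line of sight
b = [0.055, 0.060, 0.058, 0.050, 0.045]

inc = math.radians(inc_ang)
b = [v - shadow for v in b]                       # remove the atmospheric term
# Incidence angle implied by each pixel's I/F value
theta = [math.acos(v * math.pi * math.cos(inc) / albedo) for v in b]
# Subtract the flat-surface incidence angle, leaving the local slope
slope = [t - inc for t in theta]
# Convert each slope to a per-pixel height change, then integrate
dz = [res * math.tan(s) for s in slope]
z = [dz[0]]
for i in range(1, len(dz)):
    z.append(z[-1] - dz[i])
print(z)
```

Note how the assumed albedo enters the acos() argument directly: pick it wrong and the derived slopes (and hence the integrated heights) are systematically wrong.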
Notice, for example, that we have to assume some value for the albedo, and that the atmosphere
can play an important role due to its ability to scatter light.
You can start the program by typing 'pclin('m0806185.lev1.cub')' at the matlab prompt. The
standard case will come up i.e. the program selects an interesting part of the image, guesses an
albedo and assumes no atmospheric effects. You can force it to use some particular albedo by
calling it like 'pclin('m0806185.lev1.cub',0.25)' for an albedo of 25%. You can
use 'pclin('m0806185.lev1.cub',.25, 3000)'to select a starting line for the section of image you
want to look at (line 3000, in this case). You can also make some correction for the atmosphere
by assuming some amount of I/F is due to scattering, e.g. 'pclin('m0806185.lev1.cub',.25, 3000,
.02)' will assume that 0.02 needs to be subtracted from each I/F value as a first order atmospheric
correction. You can choose each of these three variables, although they must be in the order:
“albedo, starting line, shadow brightness”. (If you want to specify the starting line but not the
albedo, for instance, a nonsensical value like -1 will cause it to choose the default albedo
value. 'pclin('m0806185.lev1.cub',-1, -1, -1)' will cause the program to use the defaults for all
three.) Note that the elevation here is in arbitrary units. It's easy to get relative elevations with
photoclinometry, but this technique (or more specifically, our rudimentary implementation of it)
isn't reliable enough to actually measure heights in meters. So in discussing these heights, don't
assume some unit, it's just an arbitrary scale.
Experiment with different albedos, letting the program guess first and then trying higher and
lower values. Try this for a number of areas and try to figure out what goes wrong when the
albedo is far away from the real answer. Explain it as best you can in words. Do the same for
the shadow illumination parameter.
Export your Matlab plots and insert them into your lab writeup.
II. Laser Altimetry
The Mars Orbiter Laser Altimeter (MOLA) operated on the Mars Global Surveyor (MGS)
spacecraft. It measured topography by pulsing a laser at the planet's surface and recording the
length of time it took for that pulse to be reflected back. It was a nadir-pointing instrument and
so could only see what was directly below it. The laser fired 10 shots a second while the
spacecraft had a groundspeed of ~3 km s-1, so each shot is separated from the next by about 300
meters. The laser illuminated a patch on the surface about 120 meters across. As the spacecraft orbited
the planet a track of elevation measurements was built up of the terrain that MGS had passed
over. So, MOC images and topographic data were taken at the same time, which is quite useful
for interpreting landforms. The MOLA laser was shut off in June 2001, after which the
instrument operated only in a passive radiometry mode.
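The ~300 m shot spacing quoted above follows directly from the fire rate and groundspeed:

```python
# MOLA along-track shot spacing from the fire rate and groundspeed above
fire_rate = 10.0       # shots per second
groundspeed = 3000.0   # m/s (~3 km/s)

spacing = groundspeed / fire_rate
print(spacing, "m between shots")   # -> 300.0 m between shots
```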
We'll look at one of the single tracks in particular that crosses over the image from part 1. Most
MOC images have accompanying MOLA data through mission phase E07 or so. However, this
data is only along the track of the spacecraft (the long axis of the image), and only along a small
strip of the image. The whole volume of MOLA tracks has been gridded into one planet-wide
map, but this map has slightly lower resolution in the low latitudes, and is not quite as reliable on
small scales as the individual laser shots.
We have all the MOLA data on disks. The data structure of the MOLA files is long and
complicated, so to spare you the details and let you get straight to the data, we have a small
program that will automatically go and get the MOLA track for any MOC image you ask for.
This program returns 4 arrays, one of the longitude of each shot, one for the latitude, one for the
topographic height, and one for the time each shot occurred. All you need to give it is the name of
the MOC image you want data for. In the shell script you ran earlier, this was done by calling the
program 'mocmola.pl' on line 5. We have written a program to get all of this data into
Matlab. In MATLAB type:
“[time,y,x,z,d]=mocmolaplot('m0806185');”
Some plots will pop up, and some arrays will be spit out. In MATLAB you can find out
information about arrays by typing 'whos ARRAY-NAME'; if you leave off the array name and just
type 'whos' you'll get information on all the variables MATLAB has stored in memory. Try
things like 'y(1:10)', which will show you the first 10 elements in y (the projected y-coordinate, in
meters, of the MOLA shots). Try 'max(x)' or 'min(time)' to see maximum and minimum values of
some of the arrays. Note that the time is in awkward units of seconds after JD2000 (Julian date
of 00:00 1/1/2000). Computers have no problem with numbers like this but humans generally do
(at least I do).
The plot on the left hand side of the figure simply plots the MOC image we have been looking at,
now projected, with the MOLA points on top. Note that the actual footprint of the laser beam is
significantly larger than the points shown here, but I think this makes for easier viewing. But if a
spot falls right on the edge of a cliff, the results may look a little funny. The plot on the right
shows the elevation of the MOLA data vs. latitude. You can now see the actual elevations of
this image. Now plot the elevation on your own. For example, you can use the command:
“figure; plot(d,z);”
This will plot the topography recorded against the distance from the first point. Remember each
point represents about 300 m along the ground, or 0.1 seconds. Notice that in this dataset we are
dealing with absolute elevations, as opposed to the unscaled, relative elevations of the
photoclinometry section. This is very useful! So, what is the total relief along this
track? Because we now have absolute elevations, we can compare different parts of the image
to each other. Are all of the high points in this image at the same elevation? What are the
maximum slopes seen in this track, and are they consistent with loose sediment (as opposed
to bedrock)? (The track isn't along the direction of maximum dip, so this will only be a lower
limit on the actual maximum slopes. Too bad we don't have topographic data for the whole
image...) Export your Matlab plots into your lab writeup when you're done.
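Once you have the along-track distance and elevation arrays, relief and point-to-point slopes take only a few lines. This sketch uses a made-up five-shot track standing in for the real d and z arrays:

```python
import math

# Hypothetical along-track distance (m) and elevation (m) arrays, standing in
# for the d and z returned by mocmolaplot; shots are ~300 m apart.
d = [0.0, 300.0, 600.0, 900.0, 1200.0]
z = [-1500.0, -1520.0, -1480.0, -1400.0, -1410.0]

relief = max(z) - min(z)   # total relief along the track, in meters

# Slope of each shot-to-shot segment, in degrees
slopes = [math.degrees(math.atan((z[i + 1] - z[i]) / (d[i + 1] - d[i])))
          for i in range(len(z) - 1)]
max_slope = max(abs(s) for s in slopes)

print(relief, "m relief; steepest segment", round(max_slope, 1), "deg")
```

Remember these are slopes along the track only, so they are lower limits on the true maximum dip.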
III. Stereo Imagery
The third topographic technique we will employ is stereo photogrammetry (see lecture notes). It
just so happens that the image we have been looking at has been imaged twice by the MOC
team. After MOLA was shut off MGS was oriented, for the most part, at 17 degrees off-nadir to
save fuel. Images in this later phase of operation can be combined with earlier mapping phase
images to extract topographic information, as in a 3-D picture. By measuring the parallax
between various features, and with precise knowledge of the camera position, good topographic
data can be obtained at near the resolution of the original MOC images (typically within a factor of ~3).
Advantages to this method are the higher resolution, and the ability to extract absolute
elevations. However, the extent of MOC stereo coverage is rather limited, and it does not have
quite the vertical precision of the Laser Altimeter data. We will take a look at this data, and you
can see the advantages and disadvantages for yourself.
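As a rough sketch of the stereo geometry (idealized: one nadir image, one at the 17-degree off-nadir roll mentioned above, and a hypothetical parallax measurement), a feature's relative height follows from h = parallax / tan(e):

```python
import math

off_nadir = 17.0   # degrees, roll angle of the second (off-nadir) image
parallax = 30.0    # m, hypothetical measured offset of a feature between images

# Idealized geometry: one nadir image plus one at angle e off-nadir; a point
# at height h above the reference surface is displaced by h*tan(e) in the
# off-nadir image, so h = parallax / tan(e).
height = parallax / math.tan(math.radians(off_nadir))
print(round(height, 1), "m")
```

The real photogrammetric solution also accounts for camera position and pointing, but this shows why a larger stereo angle gives better vertical sensitivity.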
To save you time, we've made a Digital Elevation Model (DEM) beforehand: dm270_1626.grd
In matlab, type 'stereoplot'. This will bring up two images. The first one is our friend M0806185.
Again, this is the projected version of the image. The second plot is of the stereo DEM, with the
image behind it. This is really different from the other two analysis methods! We have 2-D
coverage of the image...sort of. Notice that there are lots of areas where the automatic correlator
couldn't find matches between the images. This is usually because the images are too featureless
in these areas. But you can see that where there are slope streaks on the image, the tracker is able
to find matches between the images, because of their high contrast. (On the other hand, if you
look closely, there are places where new slope streaks have formed in the time between the two
images, and automatic correlation fails!) Also, you can see there is some noise to the data, much
more than the MOLA data. We're going to use a function called proftool3, so you can take
profiles of this plot in any direction you want. It can be used in two ways. Close the second
(color) plot window, or just make the black and white image the active plot. Type:
“proftool3(z)”
Use the mouse to select the profile in the image window (right click when you're done to exit).
This function lets you click on the image for reference, but extract the corresponding topography
(from the elevation matrix, z). Follow the directions that come up in the matlab workspace. A
plot should pop up showing the elevation profile. Alternatively, you can type:
“[xx,yy,zz,dd]=proftool3(z);”
This usage lets you save the x, y, z, and distance values of the profile in the four output
arguments, in case you want to plot them again later. You have two tasks, using this tool: 1)
Determine whether the layers exposed along the edge of this cliff are roughly horizontal or
not. 2) Determine the angle of the slope streaks which can be seen throughout the image. Is
this what you would expect for a fine dust layer on the surface? Again, export your plots to
support your conclusions for these questions. Finally, type 'h=reshape(z,1,1751*501);' and
“figure; hist(h,20);” to view a histogram of elevations for this DEM. What are the maximum
and minimum elevations?
Phew! Congratulations, you're done.
III. Appendix: Reflectance from Planetary Surfaces
Reflected visible and near infrared light from planetary surfaces is arguably the most important
remote sensing window available to us. Not only are most planetary images taken in the visible
(where the sun provides a convenient abundant illumination source) but reflected light in the
near-infrared provides important spectral information capable of distinguishing many minerals.
This tutorial will cover a few basic things to make sure everybody is starting from the same
level.
When dealing with reflection the first thing to consider is the spectrum of the source (the sun for
planetary work). The solar luminosity is roughly 4x10^26 W, emitted isotropically over
4 pi steradians. The solar constant for each planet is the solar flux at that planet's position (for
Earth it's around 1360 W m^-2). The solar flux dies off as an inverse square law; Jupiter at ~5 AU
receives about 1/25th (~4%) of the Earth's solar flux. The shape of the solar spectrum follows
that of a ~6000 K blackbody. It peaks in the visible region (not surprisingly, since our eyes have
evolved to take advantage of that) at about half a micron. To convert the Planck function to
specific flux (flux per hertz) you have only to multiply it by the solid angle subtended by the sun.
The solar constant is then just the integral of the specific flux over all frequencies.
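The inverse-square falloff can be checked numerically; this sketch uses the round luminosity figure from the text, so the Earth value comes out near (not exactly at) the measured solar constant:

```python
import math

L_SUN = 4e26     # W, solar luminosity (the round number used in the text)
AU = 1.496e11    # m

def solar_constant(dist_au):
    """Solar flux (W/m^2) at a given distance: L / (4 pi r^2)."""
    r = dist_au * AU
    return L_SUN / (4.0 * math.pi * r * r)

print(round(solar_constant(1.0)))   # Earth: ~1400 W/m^2 with these round inputs
print(round(solar_constant(5.0)))   # Jupiter at ~5 AU: 1/25th of Earth's flux
```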
Most surface images of planetary bodies are taken with a broadband filter which covers most of
the visible range. Several things affect the brightness received by each pixel in the CCD array or
each element in a vidicon image (believe it or not for some bodies this is still the best we have).
The flux falling on the surface is the primary factor, and it is the same for all pixels unless the
image covers such a large fraction of the planet that the solar incidence angle varies considerably
from one part of the image to another. The albedo of the surface making up that particular pixel
is a measure of what fraction of the incident light is reflected; there are many types of albedo, and
we'll discuss some of them later. Albedo varies from pixel to pixel, e.g. sand is darker than dust
(for reasons we'll also discuss later). Local slopes can also affect the incidence angle and so
affect the brightness of that pixel, e.g. the sunlit side of a hill is brighter than the shadowed
side. Before we go any further, let's define our observational geometry to prevent any confusion.
The above figure is taken from Hapke, 'Theory of Reflectance and Emittance
Spectroscopy' (Cambridge University Press, 1993). The incidence angle is denoted by i and is the
angle the incident solar radiation makes with the local vertical. The emission angle is denoted
by e and is the angle made by the emitted radiation toward the detector with the local vertical.
Nadir-pointing spacecraft observe only what's directly below them, so the emission angle is
usually very small. The phase angle is denoted by g and is the angle between the incident and
emitted rays. For nadir observations (where e is zero) the phase angle is equivalent to the
incidence angle. When g is zero the sun is directly behind the observer; this can lead to a surge in
brightness known as the opposition effect, which we'll talk about later.
For a spherical planet the solar incidence angle is given by the following formula, where L
is the latitude, H is the hour angle, and D is the solar declination:
cos(i) = sin(L)·sin(D) + cos(L)·cos(D)·cos(H)
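A sketch of this formula in code (degrees in, degrees out; the function name is ours):

```python
import math

def solar_incidence(lat_deg, dec_deg, hour_deg):
    """Solar incidence angle i (degrees) from latitude L, solar declination D,
    and hour angle H, via cos(i) = sin(L)sin(D) + cos(L)cos(D)cos(H)."""
    L, D, H = (math.radians(v) for v in (lat_deg, dec_deg, hour_deg))
    cos_i = math.sin(L) * math.sin(D) + math.cos(L) * math.cos(D) * math.cos(H)
    cos_i = max(-1.0, min(1.0, cos_i))   # guard against rounding past +/-1
    return math.degrees(math.acos(cos_i))

# At the subsolar point (L = D, H = 0) the sun is overhead: i = 0.
print(round(solar_incidence(10.0, 10.0, 0.0), 2))
# At the equator at equinox (L = D = 0), i simply equals the hour angle.
print(round(solar_incidence(0.0, 0.0, 48.18), 2))
```

The second case recovers the 48.18-degree incidence angle used in the photoclinometry exercise.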
Taking into account the position of the sun, planet and spacecraft it is possible to derive
an I/F value for each pixel. This is the fraction of light which hit the surface contained within the
pixel that was reflected. It depends both on the albedo of the surface at that point and on the local
slopes, which may have affected the incidence angle. That's fine for bodies like the Moon,
Mercury, and the Galilean satellites, but not for bodies like the Earth or Mars, which have
atmospheres. The atmosphere can scatter light out of the incoming solar radiation that reaches
the surface, providing diffuse rather than collimated illumination, and it can also scatter radiation
out of the outgoing beam. In short, the presence of an atmosphere is bad news when
trying to interpret I/F values. We'll restrict ourselves to how the surface affects the I/F values, but
keep in mind that any atmosphere is playing a major role.
ALBEDOS
Before describing the different kinds of albedo it is necessary to describe the concept of
a Lambert Surface. A Lambert surface is one which appears equally bright from whatever angle
you view it (any value of e). The Moon is a very good approximation of a Lambert surface; it
appears equally bright at its edges and at its center, making it appear almost like a two-
dimensional disk rather than a sphere. The flux from 1 m^2 of a Lambert surface is proportional to
cos(e) due to the geometrical effect of foreshortening; however, the same angular size includes
more surface area at grazing angles, and so these effects cancel out, leaving the brightness
independent of e. There are many kinds of albedo, all of which were defined for use with the
Moon and then generalised to cover other bodies. Some of the more common ones you may
come across are:
Normal Albedo: Defined as the ratio of the brightness of a surface element observed at g=0 to
the brightness of a Lambert surface at the same position but illuminated and observed
perpendicularly, i.e. i=e=g=0.
Geometrical (Physical) Albedo: The weighted average of the normal albedo over the
illuminated area of the body. Defined as the ratio of the brightness of a body at g=0 to the
brightness of a perfect Lambert disk (not sphere) of the same size and distance as the body,
illuminated and observed perpendicularly, i.e. i=e=g=0.
Bond (Spherical) Albedo: Defined as the total fraction of incident irradiance scattered by a body
in all directions. This quantity will be important when considering the total amount of energy
absorbed by a surface for things like thermal balance etc...
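To make the Lambert behaviour behind these definitions concrete, here is a minimal Python
sketch (the function name and numbers are our own illustration, not from the lab): for a
Lambert surface, I/F depends only on the incidence angle, I/F = A cos(i).

```python
import math

def lambert_i_over_f(albedo, incidence_deg):
    """I/F for a Lambert surface: it depends only on the incidence
    angle i, not on the emission angle e (the defining Lambert
    property)."""
    return albedo * math.cos(math.radians(incidence_deg))

# Illuminated and viewed perpendicularly (i = e = g = 0), the surface
# returns exactly its normal albedo:
print(lambert_i_over_f(0.30, 0.0))   # 0.3
# At grazing incidence the same surface looks darker:
print(lambert_i_over_f(0.30, 60.0))  # ≈ 0.15
```

Note that e never appears in the function: changing your viewing angle changes nothing, which
is exactly the "equally bright from whatever angle" property described above.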
SLOPES
We've already mentioned that local slope can serve to increase or decrease the apparent
brightness of a patch of ground. If we assume an albedo for each pixel, then we can figure out
the solar incidence angle for each pixel from its I/F value. If we subtract the incidence angle
expected for a flat piece of ground at that time, latitude, and season, we are left with the
slope of each pixel, but only in one direction! This procedure is known as photoclinometry. It
works best when you can remove the albedo easily, e.g. when the surface is covered with some
uniform-albedo material like frost or dust. It can only tell you what the slopes toward and
away from the Sun are; if you want to reconstruct the entire topographic surface then you need
at least two observations illuminated from different directions. Again, the atmosphere wreaks
havoc with this sort of technique and makes it difficult to get quantitative results; however,
if you can constrain the answer with other methods, such as a laser altimeter in the case of
Mars, then you can generate very high resolution topographic maps of small areas.
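The one-dimensional procedure described above can be sketched in a few lines of Python. This
is a toy illustration under strong assumptions (a Lambert surface, a single known albedo, no
atmosphere), and the array values and names are hypothetical:

```python
import numpy as np

def photoclinometry_profile(i_over_f, albedo, i_flat_deg, pixel_size):
    """Recover a relative height profile along the sun line from an
    I/F scan, assuming a Lambert surface (I/F = albedo * cos(i)) with
    uniform, known albedo and no atmospheric scattering."""
    # Invert the Lambert law to get the local incidence angle per pixel.
    i_local = np.degrees(np.arccos(np.clip(i_over_f / albedo, -1.0, 1.0)))
    # The slope toward/away from the Sun is the departure from the
    # incidence angle a flat surface would have at this time, latitude,
    # and season.
    slope_deg = i_local - i_flat_deg
    # Integrate the slopes along the profile to build relative heights.
    heights = np.cumsum(np.tan(np.radians(slope_deg))) * pixel_size
    return slope_deg, heights

# Hypothetical 5-pixel scan: the brighter pixel (smaller i) is tilted
# toward the Sun, the darker one away from it.
i_over_f = np.array([0.150, 0.165, 0.150, 0.135, 0.150])
slopes, z = photoclinometry_profile(i_over_f, albedo=0.30,
                                    i_flat_deg=60.0,
                                    pixel_size=3.0)  # metres per pixel
```

Notice that only the component of slope along the Sun direction is recovered, and that an
error in the assumed albedo biases every slope, which is why a laser-altimetry tie point is so
valuable.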
REFLECTION PHENOMENA
Photometric and Phase Functions: Surfaces behave differently when viewed under different
geometries. The phase function gives the brightness of the surface viewed under some arbitrary
phase angle divided by the brightness expected if the surface were viewed at zero phase. The
photometric function is the ratio of surface brightness at fixed e but varying i and g to its value
at g=0. For a Lambert surface the photometric function is given by cos(e).
Grainsize Effects: Scattering from a particulate medium in the visible and near infrared tends
to be dominated by multiply scattered light, i.e. photons that have been scattered within the
surface more than once. Photons pass through grains, getting absorbed along the way, and are
scattered by imperfections within the grains and by grain surfaces. The more surface area
there is, the more scattering there is; the bigger the grains (more volume), the more
absorption there is. The surface-area-to-volume ratio of the grains therefore affects the
amount of light scattered by the surface. That ratio scales inversely with the grain size, so
for the same material, surfaces with larger grain sizes will in general be darker. The most
familiar example of this is probably sand on the beach: dry sand is bright, but when sand gets
wet it clumps into larger effective grains and turns darker. In reflection spectra, larger
grains produce broader, deeper absorption bands.
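The surface-area-to-volume argument is easy to check for an idealised spherical grain, where
the ratio works out to 3/r (the grain sizes below are illustrative, not from the lab):

```python
import math

def sphere_sa_to_volume(radius):
    """Surface-area-to-volume ratio of a spherical grain; algebraically
    this reduces to 3 / r, so the ratio falls as the grain grows."""
    area = 4.0 * math.pi * radius ** 2
    volume = (4.0 / 3.0) * math.pi * radius ** 3
    return area / volume

# A fine dust grain (10 um radius) vs. a coarse sand grain (1 mm radius):
print(sphere_sa_to_volume(10e-6))  # ≈ 3.0e5 per metre
print(sphere_sa_to_volume(1e-3))   # ≈ 3.0e3 per metre
```

The hundredfold drop in relative surface area for the larger grain means proportionally fewer
scattering interfaces and a longer absorbing path per scattering event, hence the darker
appearance.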
Opposition Effect: When observed at zero phase, surfaces show a sharp increase in brightness
known as the opposition effect (also known as the opposition surge, heiligenschein, hot spot,
and bright shadow). The pictures below show the opposition surge in the case of the Moon; look
at the edges of the astronaut's shadow (which is at zero phase angle). Can you see the extra
brightness compared to the surrounding regolith? The plot shows the same effect in a more
quantitative way.
The physical explanation offered for the opposition effect is that of shadow hiding. If the
illumination source is directly behind you, then you cannot see any of the shadows cast by
surface grains (since they're all hiding behind the grains themselves), whereas if you were
looking toward the illumination source then you could see all the shadows cast by the surface,
making it appear darker. Coherent backscatter is another mechanism offered as an explanation,
and it is likely that both play some role, although recent work on Clementine data suggests
that shadow hiding is the dominant mechanism, at least in the case of the Moon.
Reflection from particulate media is an area of research in its own right, one largely
pioneered by Bruce Hapke. This is only meant as a taste of what is really out there. Anyone
wishing to go deeper into the (gruelling) mathematical modeling of reflection can come to
either Jamie or Alex for references and/or help.