Remote Sensing

Mountain Environments – Geography 006d
Application of Remote Sensing to the Study of
Mountain Environments
Fall 2005
Background
Spatial digital technologies – such as satellite and aircraft remote sensing,
geographic information systems, global positioning systems, data visualizations, and
spatial analyses – are a central part of physical geography and important to the
study of mountain environments. To characterize the nature of biophysical
landscapes and assess their scale-pattern-process interrelationships, mountain
geographers are applying conventional approaches, such as map and air photo
interpretation, as well as newly emergent approaches, such as digital satellite
remote sensing, image animation, and computer simulations, to study the
environment. Empowered by our growing understanding of biophysical and social
processes and systems, spatial digital technologies are being used to examine the
landscape (a) at specified space-time scales that extend from fine-grained to
coarse-grained resolutions, (b) from local, to regional, to global extents, and (c) for
historical, contemporary, and future time periods.
Spatial patterns of biophysical variables (e.g., vegetation units or snow
patches) may be mapped for a discrete time period through field measurements
and/or the use of single date air photos; for “snap-shots” in time through an
assembled satellite image time-series; or continuously in time through an
assortment of electronic devices such as quantum sensors linked to data loggers or
meteorological stations transmitting data to orbiting satellites for downloading to
ground receiving stations. Geographic Information Systems (GIS) are often used to
integrate data characterizing a host of space-time elements of biophysical
variables and system parameters so that process relationships can be examined and
complex issues assessed. All in all, spatial digital technologies are shaping physical
geography in a number of fundamental ways: they enrich our knowledge and
understanding of how our environment is organized and how it functions in a host
of ecological and geographic settings.
Goals
The primary goal of this lab is to describe important elements of a selected
group of spatial digital technologies, particularly remote sensing and geographic
information systems that offer mountain geographers immense potential as well as
proven capabilities for studying the biophysical and social landscapes. We will
consider how remote sensing is applied to the study of mountains; how data derived
from this technology are interpreted; and how the spatial digital technologies are
collectively integrated into a data collection, storage, analysis, and display system
that offers new insights to mountain geographers about our dynamic earth.
Remote Sensing Basics
Remote Sensing is a surveillance and mapping science that is concerned with
the observation and/or measurement of objects and features without having the
measuring device in direct contact with the entity of interest. Film and digital
sensors borne in aircraft and satellites are the most common types of remote
sensing devices. They are engineered to be sensitive to different parts of the
electromagnetic spectrum (EMS) (spectral resolution); map different sized objects
and features through their spatial resolution; and assess landscape characteristics
using a quantitative range of response intensities (radiometric resolution), generally
extending from 0 (low intensity of reflectance) to 255 (high intensity of
reflectance), which is analogous to qualitatively evaluating the color red by
describing it on a range extending from dull (low intensity reflectance) to bright
(high intensity reflectance). Remote sensing systems are also capable of rendering
views across time (temporal resolution), as a consequence of their long operational
histories and orbital specifications that periodically return the satellite over the
same geographic location for change imaging.
The EMS is a continuum of energy that ranges from the very short
wavelengths such as x-rays to the very long wavelengths such as radio waves.
Energy travels in a sinusoidal, wave-like pattern much like the pattern generated on
a pond when a stone is dropped. In remote sensing, optical sensors are most
commonly applied to landscape mapping. Optical sensors typically operate in the
visible, near-infrared, and middle-infrared spectral regions of the EMS, because of
their capacity to discern important biophysical characteristics of the landscape
including special properties of vegetation. For instance, the visible wavelengths of
the EMS are best for discerning plant pigmentation or vegetation color; near-infrared wavelengths for discerning the chlorophyll content or structure of the
leaf; middle-infrared wavelengths for discerning the moisture content of the leaf;
and the thermal-infrared wavelengths for discerning temperature characteristics
of the leaf. In film products, information collected about the landscape is generally
amalgamated into a single image by compositing the film layers, but in digital data
sets, separate images or “channels” are retained for each spectral region so that
the user can combine information about the landscape as he/she sees fit.
Therefore, remote sensing is a mapping science that takes into account how energy
and matter are interrelated at distinct spectral regions, whether collected using
our 35-mm camera, our digital camera, or our video camera, or captured on film or
by digital sensors placed in trucks, boats, planes, or satellites.
Remote sensing is used to map landscape objects (e.g., a tree or forest),
features (e.g., leaf-on or leaf-off), and conditions (e.g., low or high biomass (or
plant material) or greenness). Because of the vantage point of Earth observation
afforded by aircraft and satellites and the capability of sensors to characterize
components of our landscape, we can map landscape patterns for nearly any part of
the globe. Sensors providing fine grained views through high spatial resolution
sensors are often used to discern relatively small geographic areas, but in great
detail, whereas sensors providing coarse grained views through low spatial
resolution sensors are often used to discern relatively large geographic areas, but
in reduced detail. The extent of the sensor view is also of importance.
Imagine a camera pointed at the ground and placed in a rocket that is about
to be launched towards outer space. Before launch, the camera or digital sensor
records elements of our landscape in a very restricted area beneath the rocket,
called the instantaneous field of view (IFOV), but functionally, it is the camera’s
recording area or what the camera “sees.” The camera or digital sensor may record
the presence of flowers, grasses, and limbs of trees, or if the camera or digital
sensor has a higher spatial resolution, it might even be capable of picking up
individual grains of sand. But upon launch, the camera or digital sensor expands the
areal extent that it “sees” and therefore cars, buildings, and forest patches
become clearly visible. As the rocket travels higher yet, the camera or digital
sensor records landscape objects, conditions, and features for an ever-increasing
area by mapping at some predetermined resolution such as at a 30 x 30 meter cell,
a 100 x 100 meter cell, or a 1 x 1 kilometer cell. Once the satellite attains its orbit
or the aircraft flies at a constant or near-constant altitude, the remote sensing
system begins to function at a regular, preset spatial resolution. The term “pixel”
denotes the picture element of a digital
remote sensing system, meaning for example that each “cell” of information
collected about our landscape is contained within a regular matrix of say 30 x 30
meter units or pixels, appearing like tiles of the floor or ceiling in their spatial
arrangement. Each cell in the matrix has a number associated with it that indicates
something about the character of the objects, features, and conditions of the
landscape located and subsequently evaluated within that cell. A host of spatial
resolutions and a variety of sensors are available for characterizing our landscape.
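As a sketch of this idea, consider a tiny, hypothetical single-channel image held as a matrix of digital numbers; the patch size and values below are invented for illustration, not taken from any real scene:

```python
import numpy as np

# A hypothetical 4 x 4 patch of one image channel. Each cell is a single
# 30 x 30 meter pixel, and each value is a digital number (DN) on the
# 0-255 radiometric scale (0 = low reflectance, 255 = high reflectance).
patch = np.array([
    [12, 15, 180, 185],   # dark water at left, bright sand at right
    [10, 14, 178, 190],
    [11, 90, 175, 182],
    [13, 95, 170, 184],
], dtype=np.uint8)

# The pixels tile the ground like floor tiles: a regular matrix in which
# every cell covers 30 x 30 = 900 square meters.
pixel_area_m2 = 30 * 30
print(patch.shape, pixel_area_m2)   # (4, 4) 900
```

Each number in the matrix says something about the character of the landscape inside that cell, exactly as described above.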
Landsat TM is a sensor on-board a family of satellites that operates in the
visible, near-infrared, middle-infrared, and thermal-infrared parts of the
electromagnetic spectrum. The spatial resolution of the Landsat TM sensor is
30 x 30 meters (120 x 120 meters for the thermal channel).
Remote Sensing Image Interpretation
To start with, let’s consider some basic issues involving the use of remotely
sensed satellite data -- how the data are collected, and how best to view the data
on a computer display. Landsat satellite data were collected for places around the
globe beginning in July 1972. During those early years, the Multispectral Scanner
(MSS) and the Return Beam Vidicon (RBV) gathered information about the
landscape from an altitude of approximately 570 miles above our planet. MSS
quickly became the sensor of choice. The RBV performed much like a TV camera; its
data proved less valuable for scientific inquiry, because of limitations in its spatial
and spectral resolutions. So the MSS became the biophysical remote sensing
“workhorse” of physical geographers, ecologists, and other natural scientists
interested in studying the landscape. The MSS and Landsat catalogued thousands
of views of important ecological settings around the world and critical
environmental issues that included, for instance, deserts that were
experiencing sand dune migration, rivers that were experiencing channel migration,
mountains being affected by disturbance regimes, and human settlements that were
in the process of relocation through population migration brought about by natural
hazards and/or armed conflict. Because the satellite was in a sun-synchronous
Earth orbit, the Landsat MSS system provided incredibly useful views
of our dynamic Earth where landscape patterns and changes in these mapped
patterns were studied. Before changes could be assessed, however, baseline
mapping was achieved for numerous areas in the USA and around the world.
Mapping, for instance, the meander scars of the Mississippi River, the extent of
urban places, deforestation in the Amazon Basin, and areas of flood inundation
afforded scientists and policy-makers a broad synoptic view seldom seen and even
more seldom integrated into the study of physical geography.
With time, newer and more powerful sensors have been developed, and as a
consequence more information about our planet can now be distilled. For the
Landsat satellite, the Thematic Mapper (TM) sensor came on-line in July 1982. A
number of significant improvements were realized over the MSS. First, TM sensed
the Earth at a higher spatial resolution: 30 x 30 meters versus the 79 x 79 meters
of MSS. Second, TM operated in seven spectral regions (spectral resolution)
representing the visible, near-infrared, middle-infrared, and thermal-infrared
regions of the electromagnetic spectrum, whereas MSS sensed in only four spectral
regions, excluding the middle- and thermal-infrared regions. Third, the repeat
coverage of the Landsat orbit, which carries the vehicle over nearly all of the
Earth (except the poles), improved from 18 days for the MSS sensor to 16 days
for the TM sensor (temporal resolution). And
fourth, the intensity of the spectral responses captured from the sensors on-board
these orbiting satellites increased as well, from 128 intensity levels for MSS to
256 intensity levels for TM, meaning that the radiometric resolution of reflectance
intensities was extended for greater precision in characterizing elements of the
landscape. For Landsat TM, a spectral response value of 0 indicates the lowest
reflectance, whereas a spectral response value of 255 indicates the highest
reflectance. An example might be a dark, deep water body that reflects at the
lower end of the radiometric range versus a sandy beach that reflects at the upper
end of the radiometric range. These four areas of TM sensor and Landsat vehicle
improvements are considered the four remote sensing resolutions; they are used to
select the most appropriate sensor and satellite system to meet the goals of the
mapping mission. Mission goals might be to map phenomena having a high temporal
requirement -- that is, phenomena that change very quickly with time, like a forest
fire in California -- or to map phenomena having a high spatial requirement, like icebergs
floating in the North Atlantic shipping lanes.
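The MSS-versus-TM improvements just described can be collected into a small lookup table; the values are taken from the text, and the dictionary layout is simply one convenient way to hold them:

```python
# Four remote sensing resolutions for the two Landsat sensors discussed
# above. "gsd_m" is the ground sample distance (pixel size) in meters,
# "channels" the spectral resolution, "repeat_days" the temporal
# resolution, and "dn_levels" the radiometric resolution.
sensors = {
    "MSS": {"gsd_m": 79, "channels": 4, "repeat_days": 18, "dn_levels": 128},
    "TM":  {"gsd_m": 30, "channels": 7, "repeat_days": 16, "dn_levels": 256},
}

for name, s in sensors.items():
    print(f"{name}: {s['gsd_m']} m pixels, {s['channels']} channels, "
          f"{s['repeat_days']}-day repeat, {s['dn_levels']} DN levels")
```

A table like this is the kind of checklist used to match a sensor to a mission goal: a rapidly changing phenomenon argues for a short repeat cycle, a small target for a small ground sample distance.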
An easy way to conceptualize how a satellite and its sensors operate is to
think of yourself walking across the landscape, traversing space much like a
satellite does when it orbits the Earth. As you look down upon the ground that lies
beneath you, your vision is constrained by, among other things, your height above
the ground and the spatial resolution of your sensors, that is, your eyes. You can
certainly see the blades of grass, stems of trees and the associated branches and
limbs, as well as sidewalks and so on, but some things are just too small to see, like
grains of sand and other tiny features. This ability to see some features but miss
other features because of their size and the resolving power of your eyes is
analogous to the spatial resolution of sensors – those “artificial eyes” placed on-board satellites and aircraft that can “see” much, but can’t “see” everything, just
like us. As you continue to walk and look at the landscape below you, you might be
wearing sunglasses that either darken your views or alter the color scheme of light
that your eyes are receiving. The lens of the sunglasses might yield a landscape
biased towards the reds, greens, or yellows depending upon the type of lens in your
sunglasses. Your sunglasses may also be capable of polarizing and/or filtering
light in ways that make you more selective or discriminating about the direction and
type of wavelengths of energy that are allowed to pass through the lens of your
sunglasses to reach your eyes. This is similar in concept to the spectral resolutions
of sensors borne in satellites that are capable of “seeing” in only certain parts of
the electromagnetic spectrum. Finally, you might be walking across a very highly
reflecting concrete road or driveway. Even with your sunglasses on, you can
distinguish the visual differences between crossing the bright concrete versus
walking across a freshly mowed lawn. This difference in the degree or intensity of
reflected light is similar to the radiometric resolution of digital sensors that are
placed in satellites (or airplanes). Instead of using a qualitative scale to reference
the degree of reflected light coming off a surface, such as dull or bright, the
digital sensor uses a numeric scale to represent a low to high reflectance, such as a
0-255 scale with 0 being low reflectance and 255 being high reflectance.
So you can see that a digital sensor in a satellite operates much like our eyes
in many ways. Our eyes and the satellite sensors are involved in information
collection about our landscape. While we might construct mental images of what our
eyes see, the satellite can output information it collects to a computer for an
assortment of visualizations involving the generation of maps, tables, animations,
and more. For viewing satellite images, it is important to realize that a
computer graphics card controls how images are presented by using three color guns
(red, green, and blue) and a computer display screen. Often, the graphics card in
the computer differentiates the intensity of the red, green, and blue tones on a
0-255 scale, similar to the way that Landsat Thematic Mapper differentiates the
intensity of reflected light coming off the landscape. On the computer screen, it is
customary to have available to the operator a color palette of approximately 16.7
million unique color possibilities to differentiate earth features. This color palette
is derived by multiplying 256 shades (i.e., 0-255) of red, times 256 shades of green
(i.e., 0-255), times 256 shades (i.e., 0-255) of blue. The result is a color-mapping
scheme or color model that has more than enough capacity to differentiate and map
the compositional diversity of our landscape as viewed from satellites.
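The size of that palette follows directly from the three 8-bit color guns:

```python
# Each color gun renders 256 shades (DNs 0-255); the palette is every
# possible red-green-blue combination.
shades_per_gun = 256
palette_size = shades_per_gun ** 3
print(palette_size)   # 16777216 -- roughly 16.7 million unique colors
```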
One more thing: recall that the Landsat TM sensor operates in seven parts, or
channels, of the spectrum, simultaneously gathered for different properties or
attributes of the same landscape at the time of imaging. Three channels are
collected in the blue-visible, green-visible, and red-visible wavelengths; one channel
in the near-infrared wavelengths; two channels in the middle-infrared wavelengths;
and one channel in the thermal-infrared wavelengths. As a spatial analyst and
physical geographer it is up to you to view the satellite channels in a way that gives
you the best representation of the region under study and the ecological problem
under consideration. Because Landsat TM has seven spectral channels of
information (we are using only six here; it is common to omit the thermal-infrared
channel if temperature profiles are less important to your study) and a computer
has three color guns for display purposes, you can see that you’ll need to select
which channel you wish to display and then assign it to a specific color gun.
Channel   Spectral Region    Landscape Characteristics
1         Visible-blue       Penetration of water bodies, differentiation of soil & water
2         Visible-green      Healthy vegetation
3         Visible-red        Healthy vegetation, mapping soil and geologic boundaries
4         Near-infrared      Chlorophyll content/biomass; separates snow, clouds, ice
5         Middle-infrared    Plant moisture content, important in the study of drought
6         Middle-infrared    Rock structure and formations
Table 1. Spectral regions of Landsat TM (excluding the thermal-infrared) and the
associated biophysical characteristics.
From the information in Table 1, you can assign Landsat TM channels to
computer color guns, for example, based upon the landscape components you wish to
observe or map. Historically, Landsat TM channels 3 (red-visible wavelengths), 4
(near-infrared wavelengths), and 5 (middle-infrared wavelengths) have been found
to be most useful for general-purpose mapping, because they represent special
sensitivities to plant pigmentation, chlorophyll content of the leaf, and moisture
content of the leaf, respectively. Now that the channels have been identified, you’ll
need to determine which color gun you would like each of the channels to be
assigned. This decision is also important, because you’ll be able to generate an on-screen image that has very different color ranges and renditions which highlight
different landscape properties or attributes.
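As a sketch of channel-to-gun assignment, suppose each TM channel arrives as a two-dimensional array of digital numbers; the tiny arrays below are invented stand-ins for real 30-meter imagery:

```python
import numpy as np

# Hypothetical 2 x 2 samples of three Landsat TM channels.
tm3 = np.array([[ 40,  60], [ 50,  55]], dtype=np.uint8)  # visible-red
tm4 = np.array([[200, 180], [190, 210]], dtype=np.uint8)  # near-infrared
tm5 = np.array([[120, 110], [130, 100]], dtype=np.uint8)  # middle-infrared

# Assign channel 4 to the red gun, 5 to green, and 3 to blue (one common
# choice); a display expects a rows x cols x 3 array in red-green-blue order.
composite_453 = np.dstack([tm4, tm5, tm3])

# Swapping the assignment changes the rendition, not the underlying data.
composite_543 = np.dstack([tm5, tm4, tm3])
print(composite_453.shape)   # (2, 2, 3)
```

Reordering the stack is exactly the experiment described above: the same channels, sent to different guns, yield color renditions that emphasize different landscape properties.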
Objectives
Log on to the system using your UNC ONYEN and password. Once connected,
double click on the ERDAS ViewFinder software located on the desktop. The
software is for image processing and viewing of remotely sensed imagery. A number
of images from the USA and around the world have been loaded. Images include views
of US cities such as San Francisco, San Diego, Chicago, Washington, DC; remote
places such as Singapore, Sweden, Canada, Russia, and Mexico; and certain
landscape features such as the Pyramids of Egypt, fires in Georgia, and more.
Using the open file pull-down, look over the set of images that have been
selected. Choose one image and double click on it – it will display in three image windows.
The large window is the primary work space and the two smaller windows show
enlargements of smaller subsets of the primary image. You’ll see a cross-hair on the
primary window for defining the portion of the image to consider in more detail.
You’ll also note the scroll bars on the side of the primary image box. Look for the
pull-down software functions associated with File, Edit, View, Tools, Image,
Window, Help. There are also image icons for some of the more important
functions. The Image functions perform various types of image enhancements, while
the View functions provide for zoom, roam, and general navigation. Explore the
software functionality using an image of your choice.
The objective of this lab is to give you some experience in viewing and
interpreting remotely sensed imagery. See the specific questions below.
Questions
First, let’s begin by looking at the Ikonos satellite data of a southern portion
of Glacier National Park and of other sites too. The Ikonos data have a spatial
resolution of 1 meter (panchromatic sensor) and 4 meters (multispectral sensors).
The images are for a number of snow-avalanche paths that were assessed in
October 2002. Experiment with assigning the three channels (say Ikonos channels
1, 2, 3) to the three color guns (i.e., red, green, blue) in a variety of fashions. For example,
assign the Ikonos multispectral channels in the following sequence and see what
happens to the generated image – try channels 3, 2, 1 (which means channel 3 to
the red color gun, channel 2 to the green color gun, and channel 1 to the blue
color gun). Now try Ikonos channels 1, 3, 2 to red-green-blue. Find a color scheme that you
like best! You have now created a satellite image color composite by merging
multiple Ikonos channels into a single image, using the three color guns of the computer.
Simple, but at the same time powerful in its ability to characterize the landscape.
Some satellite color composites are presented in this chapter. (1) Tell me how you
accomplished the building of a color composite image and describe, in general, what
each composite seems to suggest.
Now to interpret the features themselves. (2) What can you say about the
pattern of the snow-avalanche paths? (3) What image factors helped you in your
interpretation of pattern? (4) What can be said about snow-avalanches as natural
hazards, and what features might be at risk?
Second, select two images from the bank of images provided. (5) View the
imagery, interpret what each image indicates about the geographic place being
represented, and explain what image characteristics you used in your
interpretation. (6) Apply some of the image enhancements and describe what they
do to both of your selected images.
Third, (7) tell me about the four remote sensing resolutions that help
determine a sensor’s applicability for landscape studies. (8) Also, discuss how
digital images are displayed on the computer, how image channels acquired by
multispectral sensors can be assigned to computer color guns, and what the value
is in changing color assignments.