Uploaded by Aira Alicabo

REMOTE-SENSING video-notes

REMOTE SENSING: FINALS NOTES
Water Remote Sensing applications:
• Water Management
• Natural Hazards
• Weather Forecasting
• Earth System Science
INFRARED GEOLOGY
Case Study: Tracing fluid pathways in a 3.2 Ga
volcano-sedimentary sequence with hyperspectral
remote sensing
Study area: Archean Pilbara Craton, Australia
Can we find indications of hydrated minerals with
remote sensing? Yes. Real-world manifestations are
spinifex vegetation and iron coatings.
Muscovite – a hydrous (water-bearing) mineral; its presence indicates rocks that contain hydrated minerals
Hyperspectral Remote Sensing – measuring reflected
sunlight at many different wavelengths
Introduction to Digital Image Processing Techniques
Digital Image Processing – tasks performed on
digital data using image-processing algorithms
1. Preprocessing
2. Image enhancement
3. Image transformation
4. Image classification
5. Data merging & GIS integration
Methodology for Digital Image Processing
Ground swath is not normal to the ground track but is
slightly skewed, producing cross-scan geometric
distortion
Platform velocity – if the speed of the platform
changes, the ground track covered by successive mirror
scans changes, producing along-track scale distortion.
Earth rotation – the earth rotates as the sensor scans
the terrain. This results in a shift of the ground swath
being scanned, causing along-scan distortion.
Altitude – if the platform departs from its normal
altitude, changes in scale occur
Attitude – one of the sensor system axes is usually
maintained normal to the earth's surface; departures
from this attitude introduce geometric distortions
Image enhancement – improving the image quality
to a better and more understandable level for image
interpretation
Radiometric Correction – Preprocessing of Image
A. Raw form of images contains flaws and deficiencies.
2 types of errors:
Internal error – due to the sensor itself
External error – due to perturbations of the platform
and scene characteristics
B. Radiometric corrections and Geometric corrections
- operations aim to correct distorted and degraded
image data
C. Radiometric errors are due to: variations in scene
illumination, viewing geometry, atmospheric
conditions, and sensor noise
D. Variations in illumination geometry between
images can be corrected by modelling the geometric
relationship and distance between area of the earth’s
surface image, the sun, and the sensor.
E. Atmospheric degradation can be corrected by
physical modelling, histogram-based methods, and
regression methods
Geometric Corrections – due to:
• The perspective of the sensor optics
• The motion of the scanning system
• The motion of the platform
• Platform altitude and velocity
• Terrain relief
• Curvature and rotation of the earth
Kinds of errors:
Scan skew – due to the forward motion of the
platform during the time for each mirror sweep.
Purpose: for easier interpretation of images, remove
distortion for better visualization, and extract
maximum data
Methods of image enhancing:
Point operations aka radiometric enhancement –
changes the value of each pixel independent of all
other pixels
Local operations aka spatial enhancement – change
the value of individual pixels in the context of the
values of the neighboring pixels
2 types:
Image reduction – reducing the size of the original
image by deleting rows and columns
Image magnification – also referred to as zooming;
improves the scale or matches the scale of another image
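A minimal numpy sketch of both operations (the array names are mine, and a toy 4x4 array stands in for an image):

```python
import numpy as np

img = np.arange(16).reshape(4, 4)  # toy 4x4 "image"

# Image reduction: keep every 2nd row and column (i.e., delete the rest)
reduced = img[::2, ::2]            # 2x2 result

# Image magnification (zooming): replicate each pixel 2x along both axes
magnified = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)  # 8x8 result
```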
Contrast Enhancement – adjusting the brightness
intensity of the image.
Broad histogram, significant contrast; narrow
histogram, less contrast
Linear contrast enhancement – a DN in the low range of
the original histogram is assigned to extreme black
and a value at the high end is assigned to extreme white
Maximum-Minimum stretch – the original maximum and
minimum values are assigned to a newly specified range
that utilizes the full range of available brightness
values of the display unit. Important spectral differences
can be detected by stretching the minimum and maximum values.
Saturation stretch – aka percentage linear contrast
stretch. This method uses minimum and maximum cutoffs
defined by a specified percentage of the pixels
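The min-max and percentage (saturation) stretches can be sketched in numpy (the function names and toy values are mine):

```python
import numpy as np

def minmax_stretch(band, out_max=255):
    """Min-max stretch: map the band's original min/max to the
    display's full brightness range."""
    lo, hi = band.min(), band.max()
    return (band - lo) / (hi - lo) * out_max

def percent_stretch(band, pct=2, out_max=255):
    """Saturation (percentage linear) stretch: cut off at the pct and
    100-pct percentiles so outliers don't compress the histogram."""
    lo, hi = np.percentile(band, [pct, 100 - pct])
    return np.clip((band - lo) / (hi - lo), 0, 1) * out_max

band = np.array([10., 12, 14, 16, 200])  # one bright outlier
stretched = minmax_stretch(band)         # outlier dominates the range
saturated = percent_stretch(band)        # outlier saturates to white
```

With the min-max stretch, the single outlier at 200 squeezes all other pixels into the dark end; the saturation stretch clips it so the rest of the histogram can spread out.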
Image transformation
1. Generates new images from two or more
sources which highlight particular features or
properties.
2. Image arithmetic operations
3. Principal Component Transformation (PCT)
4. Tasseled Cap Transformation (TCT)
5. Color space transformation, Fourier
Transformation
6. Image Fusion
Image arithmetic operations
• Image Addition – use of multiple images as a means of reducing the overall noise
• Image subtraction – used to identify changes that have occurred between images collected on different dates.
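A toy numpy sketch of both arithmetic operations (all arrays and values are made up for illustration):

```python
import numpy as np

# Image addition: averaging several co-registered images of the same
# scene reduces random noise (roughly by sqrt(n) for uncorrelated noise).
rng = np.random.default_rng(0)
scene = np.full((100, 100), 50.0)
noisy = [scene + rng.normal(0, 5, scene.shape) for _ in range(16)]
averaged = np.mean(noisy, axis=0)        # visibly less noisy than any input

# Image subtraction: a difference image between two dates is nonzero
# only where the surface changed.
date1 = np.array([[10., 10.], [10., 10.]])
date2 = np.array([[10., 40.], [10., 10.]])
diff = date2 - date1
```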
Principal Component Transformation (PCT) –
reduces the number of bands by transforming
correlated bands into a smaller set of
uncorrelated components.
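A rough sketch of the idea behind PCT, using a numpy eigen-decomposition on a toy two-band stack (the function name is mine):

```python
import numpy as np

def principal_components(bands):
    """bands: shape (n_bands, n_pixels). Returns the pixels projected
    onto the principal components plus each component's variance."""
    centered = bands - bands.mean(axis=1, keepdims=True)
    cov = np.cov(centered)                  # band-to-band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    order = eigvals.argsort()[::-1]         # PC1 = largest variance first
    return eigvecs[:, order].T @ centered, eigvals[order]

# Two strongly correlated toy bands: PC1 captures nearly all the variance,
# so the stack can be reduced to one band with little information loss.
b1 = np.array([1., 2, 3, 4, 5])
b2 = 2 * b1 + np.array([0.1, -0.1, 0.05, -0.05, 0.0])
pcs, variances = principal_components(np.vstack([b1, b2]))
```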
Tasseled Cap Transformation – transforms the original
DN values into three bands (brightness, greenness, wetness)
Radiometric Enhancement – the highest and lowest
values are obtained; values are evenly spread across
all bands.
Histogram Equalization – a contrast stretch that
redistributes pixel values
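A minimal sketch of histogram equalization for an 8-bit band (the function name and toy values are mine):

```python
import numpy as np

def equalize(band, levels=256):
    """Histogram equalization: remap DNs through the cumulative
    distribution so pixel values spread over the available range."""
    hist, _ = np.histogram(band, bins=levels, range=(0, levels))
    cdf = hist.cumsum() / band.size                  # cumulative fraction
    lut = np.round(cdf * (levels - 1)).astype(int)   # lookup table
    return lut[band]

# A low-contrast band bunched between 100 and 103 gets spread out.
band = np.array([100, 100, 101, 102, 103, 103, 103, 103])
equalized = equalize(band)
```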
Standard Deviation – blur the color
Piecewise contrast – increases or decreases the
contrast and brightness of the image for a given
range of values
Haze Reduction – reduce the cloud cover from the
image
Noise reduction - reduce unwanted pixels and
unwanted reflectance values
Statistical Filters – show information based on pixel
values, and also improve pixel values that fall outside
a user-selected statistical range.
IMAGE ANALYSIS AND INTERPRETATION
Image analysis - extraction of meaningful
information from images using image processing
methods
Image analysis techniques
Histograms – indicate how often a specific value
appears in the image
Segmentation – groups together pixels that are
homogeneous in terms of pixel values
Image interpretation – Identifying objects and
understanding the image content
Image interpretation technique
Semantic segmentation – assigns each pixel some kind
of semantic meaning, which results in segments; each
segment is homogeneous in terms of semantic meaning
Each pixel is assigned to one pre-defined class and
the pixels of the same class are grouped together
into one semantic segment
Object detection – detection of single objects;
estimation of a bounding box, mostly parallel to
image borders
Instance segmentation – semantic segmentation
+ object detection; detected objects have tight
object boundaries
• Intensification of agriculture
• Selective logging
• Desertification
Land cover conversion vs Land cover modification
• The smaller the number of classes or the "coarser" the class definition, the lower the amount of land cover conversion
• Detection of land cover modification requires a continuous, spatio-temporal description of surface properties
Mixed pixels – mixture of spectral
characteristics; can be used to describe land
cover modification; increasing distance to object
decreases spatial resolution and increases the
amount of mixed pixels.
Change detection
1. Ensuring the relative and absolute geometric
comparability of data
• Datasets must be precisely located with respect to each other
• Effects of sensor distortions must be corrected
Image categorization – classify what the images
show; assign a label to images
Land use and Land cover
Land use – defined by activities and
anthropogenic influence
Land cover – (Bio)physical coverage of the earth’s
surface
Estimation of LULC
• Estimation of LULC classes – discrete representation (classification)
• Estimation of biophysical parameters – continuous representation (regression)
• Fluent transition between both tasks; see for example: Climate Change Initiative (CCI) LC by the European Space Agency
Change detection – over time, interpreted
satellite images help us to detect change
Land cover conversion – entire land cover class is
replaced by another class
Land cover modification – gradual change of the
nature of land cover class
2. Ensuring the relative and absolute spectral
comparability of the data.
• For all procedures that work directly with the spectral values of the data, these are necessary:
• Sensor calibration – sensitivity of the sensors, etc.
• Atmospheric correction – process of removing the effects of the atmosphere on the reflectance values of the image
• Topographic correction – determines whether a piece of terrain is sunlit or shaded
3. Ensure the comparability of geometric and
spectral resolution
• Due to limited data availability, matching resolutions is often not possible, so comparisons of multi-sensor data are often unavoidable
• The comparison of data from different imaging systems (e.g. multispectral vs SAR) is a particular challenge
4. Ensuring that comparable phenological
stages are available for comparison
• E.g., the phenology of agricultural crops might change
• Uncovered soil (e.g. dry vs wet phases)
• Errors lead to pseudo-change in the change analysis
Nomenclature
Reference data – ground truth, labeled data
• Training data – for classifier training
• Testing data – for classifier evaluation
Classification task
How can you detect change?
• Binary detection of change (not concerned with what has changed, but solely whether there is change or none)
• Exact description of the change
• Detailed quantification of the modification, for example, the amount of forest cover loss.
CLASSIFICATION
Linear Classification – we have inputs (features) and
outputs (class labels); classes are separated by a
linear decision boundary.
Classification Framework – additional data or
information is provided by experts
Supervised Classification – classification with
supervision. Some info are given for the process like
providing some pixels for land use and land cover
class.
Feature extraction
• Intensities – color; e.g., forest appears green while deforested areas appear less green
• Texture, etc.
Classification
• Learning step – a model where the decision boundary is learned
• Testing step – you take all the unlabeled pixels and assign them to a class
Afterwards comes evaluation and post-processing
How to obtain a classifier?
A computer program is said to learn from
experience E with respect to some class of
tasks T and performance measure P if its
performance at tasks in T, as measured by P,
improves with experience E. (Mitchell, 1997)
(Experience E) Input-output pairs – training
data, labeled data, reference data, ground
truth
(Some class of task T) – interpretation task.
You can do semantic segmentation or object
detection
(Performance measure P) – you need to know whether
your model is good; also used to train and
evaluate it
Two simple classifiers
• Nearest neighbor classifier – test data is assigned to a class based on proximity to training data
• Assumes the training data represents the distribution of the whole data
• Problematic with erroneous and noisy data
• Problematic for features and classes with different variances, because the same distance metric is used for all feature dimensions
• Scale all features to the same range so that all dimensions have the same importance
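A minimal numpy sketch of the 1-nearest-neighbor classifier described above (the function name and toy features are mine):

```python
import numpy as np

def nn_classify(train_X, train_y, test_X):
    """1-nearest-neighbor: each test sample takes the label of the
    closest training sample (Euclidean distance in feature space)."""
    dists = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
    return train_y[dists.argmin(axis=1)]

# Toy 2-band features, already scaled to the same 0-1 range:
# class 0 = "water" (dark), class 1 = "vegetation" (bright).
train_X = np.array([[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]])
train_y = np.array([0, 0, 1, 1])
test_X = np.array([[0.15, 0.15], [0.85, 0.85]])
labels = nn_classify(train_X, train_y, test_X)   # → [0 1]
```

Note the features are pre-scaled to the same range, exactly because the same distance metric is used for all feature dimensions.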
Decision tree – test data is classified
based on decision rules derived from
training data; decision rules can be
learned or defined manually
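A sketch of manually defined decision rules, as the notes describe; the vegetation-index (NDVI) feature, thresholds, and class names are illustrative, not from the lecture:

```python
# A hand-defined decision "tree" with two rules: rules are written
# by hand instead of learned from training data.
def classify_pixel(nir, red):
    ndvi = (nir - red) / (nir + red)   # illustrative vegetation-index feature
    if ndvi > 0.3:
        return "vegetation"
    if nir < 0.1:
        return "water"
    return "bare soil"
```

For example, a bright-infrared pixel like `classify_pixel(nir=0.6, red=0.2)` falls in the "vegetation" branch, while a dark pixel falls through to "water".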
Generative vs. Discriminative classifiers
Generative classifiers – model the data and derive the
decision boundary from it, the modelled distributions
can be used to generate new data.
Discriminative classifiers – we do not model the data
but directly determine the decision boundary
LIDAR – active optical remote sensing
How does it work?
• System emits a laser pulse
• Return signal is detected
• Return time is recorded
• Distance is calculated
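The four steps above boil down to one formula; a tiny sketch (the values are illustrative):

```python
# Lidar ranging: distance = (speed of light x round-trip time) / 2,
# halved because the pulse travels out and back.
C = 299_792_458.0  # speed of light, m/s

def lidar_distance(return_time_s):
    return C * return_time_s / 2

range_m = lidar_distance(6.67e-6)  # a ~6.67 microsecond return is roughly 1 km away
```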
Distribution of the return of photons
Different collecting platforms
 Satellite – global; low resolution
 Airborne – local; intermediate resolution
 Ground-based - plot level; high resolution
Satellites –
ICESat I – Ice, Cloud, and land Elevation Satellite
- operated from 2003-2009
- main use: ice sheet elevation research
ICESat II – Photon counting lidar satellites
Scheduled launch for 2017:
- ice-sheet elevation change
- terrain height of earth
- vegetation canopy height
GEDI – Global Ecosystem Dynamics Investigation
- full-waveform lidar
- scheduled for launch in 2018
- biomass and ecosystem health measurement
Ground-based Lidar – stationary (ECHIDNA / DWEL)
Hand-held scanner is called Zebedee
Lidar data applications
 Vegetation structure
 Hydrology
 Ice measurements
 Archaeology
 Land altimetry
 City planning
 Caves
When to use lidar data?
 Research aim
 Data needs
 Scale: global/local
 Budget
 Data availability
 Time frame
Radar
Types of radar
• Radar altimeter – airborne/satellite; sends a signal and measures how much of the waves come back
• Imaging radar
- Real Aperture (RAR/SLAR)
- Synthetic Aperture (SAR) (most common these days)
Radar swath – width of the area the satellite is
measuring
Similarities with passive microwave remote sensing,
but radar:
• Is much more sensitive to surface roughness (the smoother the surface, the lower the backscatter)
• Can produce signals from more or less dense vegetation (depends on wavelength and canopy form and structure)
• Has more random noise
• Can achieve higher spatial resolution
• Has distortion and shading issues
• Can be used to precisely measure topography and height changes (backscatter can be analyzed to estimate wave height and hence wind speed)
Applications of Radar
 All-weather sensor: flood mapping
(backscatter also depends on soil wetness)
InSAR- Interferometric Synthetic Aperture
Radar
Two SAR images of the same area are
acquired at different times. If the surface
moves between the two acquisitions, a
phase shift is recorded. An interferogram
maps this phase shift spatially.
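A small sketch of the phase-to-displacement relation behind an interferogram (the function name is mine):

```python
import math

# One full 2*pi fringe corresponds to half a wavelength of
# line-of-sight motion, because the signal travels the extra
# distance twice (out and back).
def los_displacement(phase_shift_rad, wavelength_m):
    return phase_shift_rad * wavelength_m / (4 * math.pi)

# C-band SAR (~5.6 cm wavelength): one full fringe ~ 2.8 cm of motion
motion_m = los_displacement(2 * math.pi, 0.056)
```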
SRTM – NASA's Shuttle Radar
Topography Mission
11-day mission in 2000
High resolution DEM
-30m for USA and Australia
-90m for the rest of the world
Radar Altimetry – nadir-looking radar sensors; used
primarily for monitoring land, ice, and sea surface
height
Rainfall Radar – mostly C-band Doppler radar