AOS LECTURE
CLOUD DETECTION WITH OPERATIONAL IMAGERS
OUTLINE:
What is a Cloud Mask
Review of Cloud Mask Radiative Transfer
Properties of the Operational Imagers (AVHRR and GOES)
Contrast Tests
Spatial Tests
Temporal Tests
Spectral Tests
R2/R1, Split window, near – far window, 4 micron ref.
Discriminating Cloud Types
Who Am I?
I work for the Office of Research and Applications at NOAA
http://orbit-net.nesdis.noaa.gov
My group develops algorithms (including cloud masks), radiative transfer
techniques and calibrates the data from the NOAA Operational Sensors
My responsibilities include:
•AVHRR cloud properties (including cloud mask)
•GOES surface and some cloud properties (including cloud mask)
•VIIRS (the next AVHRR) cloud algorithms
What is a cloud mask
More than 50% of the globe is cloudy.
Almost all algorithms need pixels that are either clear or cloudy – few can
handle cloud contamination.
For some parameters – SST or vegetation indices – cloud masking can be
the number one source of error.
Cloud masking is very much “in the eye of the beholder” and one mask rarely
pleases everyone.
Cloud masking requires knowledge of many features of the atmosphere,
the surface and cloud radiative properties.
Cloud masking is probably the most difficult thing to do right in imager
remote sensing.
Example Cloud Mask from AVHRR
Cloud masks have to balance coverage and quality of clear data
[Figure: 0.63 micron reflectance image and the corresponding cloud mask with clear 11 micron temperatures]
Ways to Write a Cloud Mask
1. Automatic classifiers (e.g., neural networks)
• Develop a training data set in which you “manually” label clear and cloudy pixels
• Once trained, the classifier can be applied to cloud mask new images
• Reduces the need to physically identify cloud tests
2. Sequential decision trees (a minimal sketch of this structure follows below)
• This is the most common approach and is used in ISCCP, MODIS and CLAVR
• Design a series of single tests that are applied in sequence
• Results of one test can be used to affect subsequent tests
• Traditionally the thresholds used in the tests were static
• Now, radiative transfer modeling on the fly is becoming a feasible option for setting thresholds.
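To make the decision-tree structure concrete, here is a minimal sketch in Python. The thresholds and the clear-sky inputs are illustrative placeholders, not the actual ISCCP, MODIS or CLAVR logic.

```python
import numpy as np

def decision_tree_mask(t11, r063, clear_t11, clear_r063):
    """Toy sequential decision tree.  Each test can only push a pixel
    toward 'cloudy'; all thresholds are illustrative, not operational."""
    cloudy = np.zeros(t11.shape, dtype=bool)

    # Test 1: thermal contrast -- pixel much colder than the expected clear value
    cloudy |= (clear_t11 - t11) > 10.0            # K, illustrative

    # Test 2: visible contrast -- pixel much brighter than the expected clear value
    cloudy |= (r063 - clear_r063) > 0.15          # reflectance, illustrative

    # Test 3: spatial test, applied only to pixels the earlier tests left clear,
    # i.e. the result of one test affects how later tests are used
    gy, gx = np.gradient(t11)                     # rough local variability (2-D input)
    cloudy |= (~cloudy) & ((np.abs(gy) + np.abs(gx)) > 3.0)   # K per pixel, illustrative

    return cloudy
```

In an operational tree the thresholds would be set per surface type, or computed on the fly from a clear-sky radiative transfer calculation.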
The Radiative Transfer Behind Cloud Masking
To know what a cloud is, you need to know what a cloud isn’t.
You need to be able to discriminate the effects of cloud from those
of the atmosphere and the surface.
Three driving effects determine what the satellite sees:
• the variation in the source – solar reflection or thermal emission
• variation in the atmosphere – scattering, transmission and emission
• the surface – its emissivity and its temperature relative to the clouds
On top of these sits the variation in cloud signatures – reflectance and emissivity.
• You have to understand all three to be able to uniquely detect cloud.
• Some regions have surface or atmospheric conditions (e.g., inversions)
that complicate cloud detection.
In case you need to review your emission spectra…
[Figure: reflection of sunlight and emission from the Earth]
UV – NEAR IR Atmospheric Transmission Spectra (0.5 – 3 microns)
AVIRIS Image - Porto Nacional, Brazil
20-Aug-1995
224 Spectral Bands: 0.4 – 2.5 µm
Pixel: 20 m x 20 m; Scene: 10 km x 10 km
Near-IR – FAR IR Atmospheric Transmission Spectra (2 – 15 microns)
GOES Sounder data – Shows how Infrared Observations vary spectrally.
The surface also presents a variation in reflectance/emissivity
Note the chlorophyll behavior from 0.6 to 0.8 microns
In the infrared, most surfaces are nearly blackbodies except
deserts (especially at 4 microns)
Clouds offer another spectral variation that has to be accounted for …
Extinction efficiency is related to optical depth and is therefore
related to the reflectance and emissivity of a cloud.
As the extinction efficiency gets smaller, the emissivity decreases (10–12 microns)
Operational Imagers:
In this lecture we are dealing with cloud masking applied to the
operational imagers.
Operational – they are used for real-time products and if they blow up,
they are replaced with an identical version.
Imagers were originally designed for taking images of weather patterns. They have:
• high spatial resolution
• few channels with wide spectral responses
• often poor calibration (but this can be fixed)
Meteorological imagers include the GOES imager and AVHRR (we’ll discuss those) but also:
• MODIS – 36 channels, 1 km spatial resolution
• ABI – the next GOES imager – 12 channels
• SEVIRI – Europe's geostationary imager – 12 channels
• Landsat – 5 channels – 28.5 meter resolution – limited coverage
The Operational Imagers
1. The AVHRR – A Polar Orbiting Imager
• Designed in the 1970s for non-quantitative cloud imagery
• Can be calibrated well enough for other remote sensing applications
• 5 channels with relatively broad response functions:
CH1 – 0.63 um
CH2 – 0.86 um
CH3 – 3.75 um
CH4 – 11 um
CH5 – 12 um
This is what AVHRR looks like…
[Figure: AVHRR images in the 0.6 um, 0.86 um, 3.75 um, 11 um and 12 um channels; at night there is no reflectance in the solar channels]
2. The GOES Imager
• 5 channels like the AVHRR
• Replaces the 0.86 micron channel with a “water vapor channel” at 6.7 microns
• 4 km resolution at the equator
• New GOES imagers replace the 12 micron channel with a 13.3 um CO2 channel
Differences in Data from Polar and Geostationary Orbiters
[Figure: example AVHRR and GOES images]
Geostationary platforms facilitate the use of temporal tests.
What is the difference between an imager and a sounder?
Sounders have more channels – tens (operational) or hundreds (research) –
but their spatial resolution is much worse.
Cloud Mask Tests Types
All cloud masks use four types of tests to look for cloud:
• Contrast tests – look for contrast between a pixel and what we think a
clear pixel should look like (i.e., clouds are colder and brighter, so
anything really bright or really cold must be a cloud).
• Spectral tests – use knowledge of radiative transfer to identify channel
behaviors that can only mean a pixel is cloudy.
• Temporal tests – look for rapid changes in a pixel that are greater than
those possible for a clear pixel (if a pixel drops by 20 K in 1 hr, it is
probably cloudy).
• Spatial tests – clouds are texturally different from the clear sky. At 1–4 km
resolution, the surface is usually more spatially uniform in temperature
and reflectance than clouds. At resolutions below 100 m, the
surface can have more texture than clouds!
NO ONE TEST TYPE IS VALID ALL THE TIME, BUT SPATIAL AND
TEMPORAL COME CLOSE
Examples of Using Contrast in Cloud Masking
Actually, all tests are contrast tests, but contrast tests usually mean:
• the temperature (11 micron) compared to clear sky
• the visible reflectance compared to clear sky
• Some regions prevent the use of both types of contrast test – e.g., cold snow.
• Being able to stare at the same pixel from the same location gives
GOES an advantage in using contrast tests for cloud masking. The next
images show a GOES visible image compared to a visible composite made
by storing the darkest pixel over 28 days.
• Contrast tests are never enough for an accurate cloud mask.
AVHRR Contrast Test Example – Comparison of SST versus Climatology
You can’t stare at the same spot from a polar orbiter, but sea surface
temperature does not vary much, which makes it a good basis for a contrast
test from a polar orbiter.
[Figure: SST from climatology, SST from AVHRR, and the resulting cloud test]
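A minimal sketch of that idea, assuming the AVHRR SST retrieval and a climatological SST are already on the same grid; the 2 K threshold is illustrative, not the operational CLAVR value.

```python
import numpy as np

def sst_contrast_test(sst_avhrr, sst_climatology, threshold_k=2.0):
    """Flag pixels whose retrieved SST is well below climatology.
    Clouds can only make the scene look colder, so a large negative
    departure from the climatological SST is a cloud signal.
    threshold_k is an illustrative value."""
    departure = sst_climatology - sst_avhrr      # positive where the pixel is too cold
    return departure > threshold_k               # True = probably cloud contaminated
```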
Contrast Test contd.
Where there is no reliable contrast, these tests will fail!
(Example: at the poles, where there is no contrast – thermal or visible)
[Figure: polar scene in which the cloud is warmer than the ice]
Spatial Test Basis.
Assumption: Clear regions are more uniform than cloudy regions.
A very robust test, but regions where there is a lot of surface non-uniformity
(e.g., mountains, sea ice) can cause problems.
The “Golden Arches”
Plot temperature versus the local temperature standard deviation. If there is
only one cloud layer, you should get an arch like this: partially cloudy pixels
have high variability, with a cloud foot at one end and a clear foot at the other.
[Figure: the “golden arches” scatter plot]
Spatial Uniformity Example
The CLAVR cloud mask uses the max – min over a 2x2 pixel array as the
estimate of the local variability (a minimal sketch follows below).
Very good at picking out partial or small-scale cloudiness missed by other tests.
The threshold needs to be a function of surface type.
[Figure: 11 micron temperature, the variability in the 11 micron temperature, and the clear land pixels]
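Here is a minimal sketch of that 2x2 max-minus-min test; the 0.5 K threshold is illustrative and, as noted above, it really needs to vary with surface type.

```python
import numpy as np

def spatial_uniformity_test(t11, threshold_k=0.5):
    """Estimate local variability as max - min of the 11 micron temperature
    over each 2x2 pixel neighbourhood, and flag non-uniform pixels as cloudy.
    threshold_k is illustrative; operationally it depends on surface type."""
    # Stack each pixel with its right, lower and lower-right neighbours
    blocks = np.stack([t11[:-1, :-1], t11[:-1, 1:],
                       t11[1:, :-1],  t11[1:, 1:]])
    local_range = blocks.max(axis=0) - blocks.min(axis=0)

    cloudy = np.zeros(t11.shape, dtype=bool)
    cloudy[:-1, :-1] = local_range > threshold_k   # result assigned to the 2x2 origin pixel
    return cloudy
```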
Temporal Tests
•Geostationary satellites offer high temporal resolution
•Look for pixels that get brighter or colder from one image to the next
Not a cloud mask – just a sequence of ch1 images from GOES
Temporal Tests with Polar Orbiters
Polar orbiters only see the same spot at the same time every few days, so temporal
tests are hard to implement (e.g., CLAVR and MODIS do not use them).
But at the poles, imagers like the AVHRR can use temporal tests like GOES does.
Here is an example of an animation of MODIS 11 micron temperature used to
derive winds
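A minimal sketch of a geostationary temporal test, based on the “drops by 20 K in 1 hr” rule of thumb mentioned earlier (the rate threshold is illustrative):

```python
import numpy as np

def temporal_test(t11_now, t11_previous, hours_apart=1.0, max_cooling_k_per_hr=20.0):
    """Flag pixels that cooled faster than a clear surface plausibly can.
    The clear surface warms and cools slowly, so a sudden large drop in the
    11 micron temperature between consecutive geostationary images means a
    cloud has moved in.  The rate threshold is illustrative."""
    cooling_rate = (t11_previous - t11_now) / hours_apart   # K per hour, positive = cooling
    return cooling_rate > max_cooling_k_per_hr              # True = probably cloudy
```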
Spectral Tests
A spectral test looks at some channel or channel combination that
definitively says “this is cloud.”
With only 5-6 channels, all AVHRR or GOES cloud masks tend to use the
same spectral tests.
• Reflectance ratio of 0.86 to 0.63 micron reflectance (AVHRR only)
• Split window (11 – 12 micron) temperature difference
• Near – far window (4 – 12 micron) temperature difference
• Derived 4 micron reflectance/emissivity
• Water vapor – window temperature difference (GOES only)
• CO2 channel tests (GOES only)
Even for MODIS with its 36 channels, these tests form the main body of the
MODIS cloud mask.
To illustrate these tests, we are going to show their application in three regions:
#1 Caribbean
#2 Sahara / Jungle
#3 Nighttime Arctic
Reflectance Ratio Test Basis
Based on our knowledge of reflectance spectra, we can predict:
R2/R1 = 1.0 for cloud (if you can’t see the surface underneath)
R2/R1 > 1.0 for vegetation (look at the pinewoods spectrum)
R2/R1 << 1.0 for water
R2/R1 about 1 for desert
Glint is a big limiting factor for this test over oceans. Also, smoke or dust
can look like cloud in R2/R1.
[Figure: reflectance spectra with the R1 (0.63 um) and R2 (0.86 um) bands marked]
Reflectance Ratio Test for Scene #1
[Figure: R2 (0.86 micron) image and the R2/R1 ratio – R2/R1 < 1.0 over the ocean, 0.9–1.0 for cloud, and > 1.0 over the jungle]
Cloud test: R2/R1 > 0.7 over water; R2/R1 < 1.1 over land (a sketch follows below).
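A sketch applying the scene #1 thresholds quoted above; the land/water handling is illustrative, and (as the next slides show) the test is not valid over desert, glint, smoke or dust.

```python
import numpy as np

def reflectance_ratio_test(r1_063, r2_086, is_water):
    """R2/R1 is near 1 for cloud, << 1 for open water and > 1 for vegetation.
    Thresholds follow the scene #1 example (R2/R1 > 0.7 flags cloud over
    water, R2/R1 < 1.1 flags cloud over vegetated land); they are only
    illustrative and fail over desert, sun glint, smoke and dust."""
    ratio = r2_086 / np.maximum(r1_063, 1e-6)    # guard against division by zero
    cloudy_over_water = is_water & (ratio > 0.7)
    cloudy_over_land = (~is_water) & (ratio < 1.1)
    return cloudy_over_water | cloudy_over_land
```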
Reflectance Ratio Test Over Scene #2 – Saharan Desert / Tropical Africa
Over the jungle, R2/R1 detects cloud; over the desert, R2/R1 is near 1 and
offers no contrast with clouds, so the test does not work over deserts.
[Figure: R2/R1 over the Sahara and tropical Africa]
Reflectance Ratio Test over the Poles
• No sunlight for extended periods of time
• When present, solar zenith angles are high anyway
• Snow does not offer much contrast with clouds in R2/R1
Other issues with this test:
1. Thresholds should be a function of angle
2. Knowledge of vegetation really helps in defining the tests
Split Window Spectral Test Basis
T11 ≈ ε11 Tc + τ11 Ts
T12 ≈ ε12 Tc + τ12 Ts
Tc = cloud temperature
Ts = surface temperature
ε = cloud emissivity (1 = blackbody)
τ = cloud transmission (1 = transparent)
For thick clouds, τ = 0, so:
T11 – T12 ≈ (ε11 – ε12) Tc
For thin, cold (high) clouds, the surface term dominates:
T11 – T12 ≈ (τ11 – τ12) Ts
Split Window Spectral Test Basis
For thick clouds, ε11 < ε12, so T11 – T12 < 0
For cirrus, ε11 < ε12 as well, but the surface term dominates (τ11 > τ12), so T11 – T12 > 0
[Figure: water and ice curves]
The potential signal is bigger in water cloud, but ice clouds are cold and have the bigger net effect.
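A sketch of a split-window cirrus test built on the relations above. The clear-sky limit here is a crude constant; an operational mask would derive it from the 11 micron temperature, the viewing angle and an estimate of the water vapor (or from on-the-fly radiative transfer).

```python
import numpy as np

def split_window_test(t11, t12, clear_sky_limit_k=2.5):
    """Flag probable thin cirrus where T11 - T12 is more positive than
    clear-sky water vapor alone can explain.  Water vapor also raises
    T11 - T12, so the limit must sit above the clear-sky value; the
    constant used here is illustrative."""
    btd = t11 - t12                       # split window brightness temperature difference
    return btd > clear_sky_limit_k        # True = probably thin cirrus
```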
Split Window Results for Scene 1
• Thick cloud – small T11 – T12
• Thin cirrus – big T11 – T12
• Clear ocean – moderate T11 – T12 (water vapor effect)
• Hot land – tricky
Split Window Applied to the Desert Scene #2
Spectral variation in desert emissivity can be tricky
[Figure: surface features appear in the T11 – T12 difference over the desert]
Split Window Applied to the Arctic Scene #3
There is little water vapor at the poles, so there is no problem separating water vapor from cloud.
Inversions can cause T11 – T12 to behave in the opposite sense.
The magnitude of the signal is much smaller than in the tropics.
[Figure: low cloud and cirrus in the Arctic scene]
Far – Near IR window Spectral Test
During the night, the same qualitative behavior as the split window, but more sensitive:
T4 – T12 < 0 for thick cloud
T4 – T12 > 0 for cirrus
During the day, solar energy impacts the 4 micron observations:
T4 – T12 >> 0 for any cold cloud
At 4 microns the relation between radiance and temperature is very nonlinear
(the thermal signal from cold scenes is tiny), so any additional signal from
the sun or from the surface causes a big jump in T4.
Over oceans during the day, glint is a problem.
Far – Near IR Results for Scene 1
• Cold cloud: T4 – T12 > 20 K
• Clear ocean: T4 – T12 < 8 K
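A daytime sketch using the scene #1 numbers above (cold cloud above 20 K, clear ocean below 8 K); the 12 K cut sits between the two and is illustrative, and glint regions would have to be excluded.

```python
import numpy as np

def day_near_far_window_test(t4, t12, threshold_k=12.0):
    """Daytime 4 micron minus 12 micron test.  Reflected sunlight plus the
    steep Planck behavior at 4 microns drives T4 - T12 strongly positive
    over cloud, while clear ocean stays small.  The threshold is an
    illustrative value between the scene #1 cloud and clear numbers."""
    return (t4 - t12) > threshold_k       # True = probably cloudy (daytime, no glint)
```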
Far – Near IR Results for Scene 2 (Sahara)
[Figure: surface features show up in T4 – T12 over the desert]
Far – Near IR Results for Scene 3 (Arctic)
Possibly the best test in the Arctic at night, as the data are not too noisy.
[Figure: low cloud and cirrus in the Arctic T4 – T12 image]
4 micron Reflectance Image
You can estimate the thermal contribution at 4 microns, subtract it off, and treat
what remains like a 4 micron reflectance channel (see the sketch below).
Very good test over the ocean.
Almost useless over deserts
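One common way to do this, as a sketch under two assumptions: the thermal part is approximated by the Planck radiance at the 11 micron temperature, and the solar irradiance is an assumed round number (use the sensor's published band constants in practice).

```python
import numpy as np

# Planck constants for radiance in W m^-2 sr^-1 um^-1, wavelength in um
C1 = 1.191042e8     # 2*h*c^2
C2 = 1.4387752e4    # h*c/k_B, in um K

def planck_radiance(wavelength_um, temp_k):
    """Blackbody spectral radiance."""
    return C1 / (wavelength_um**5 * (np.exp(C2 / (wavelength_um * temp_k)) - 1.0))

def derived_4um_reflectance(rad4_obs, t11, mu0, solar_irr_4um=10.0):
    """Estimate the reflected part of the 3.75 um signal: subtract the thermal
    emission (approximated by the Planck radiance at the 11 um temperature)
    and normalize by the available solar radiance.  solar_irr_4um is an
    assumed round number, not the published AVHRR band value."""
    thermal = planck_radiance(3.75, t11)              # estimated emitted component
    solar = mu0 * solar_irr_4um / np.pi               # reflected radiance available
    return (rad4_obs - thermal) / np.maximum(solar - thermal, 1e-6)
```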
Other Spectral Tests
Water vapor channel – you can’t see the surface or low-level clouds. Useful
in the Arctic (for MODIS) for detecting inversions: if you see an inversion,
you aren’t looking at a cloud.
CO2 channel – same idea as the water vapor channel; able to detect high cloud only.
Since high cloud is usually cold and/or bright, these tests do not add that
much.
What’s New in Cloud Masking…
The 1.38 micron channel – a reflectance channel sitting in a water vapor
absorption band, so the surface and low clouds are hidden and only high clouds show up.
How does the Automated AVHRR cloud mask perform on these scenes?
[Figure: the resulting clear-sky 11 micron temperatures – cloud contamination would show up as cold pixels; the few cold temperatures are noise]
Cloud Typing From AVHRR / GOES
In addition to detecting cloud, we also want to determine what kind of
cloud it is.
Using the same knowledge of cloud spectra we used for cloud detection,
we can tell if a cloud is:
• Made of ice particles or water droplets (i.e., phase)
•Optically thick or optically thin
•Something optically thin over a warmer cloud
Cloud Type Results from AVHRR for that scene over the Caribbean.
Cloud types: water, supercooled water, thick ice, cirrus, overlapped cirrus
How do we determine phase?
The spectral variation of the imaginary component of the index of refraction,
and its difference between ice and water, is the basis of phase detection.
The higher the value, the greater the absorption.
[Figure: imaginary index of refraction for ice and water near 4 um, 8.5 um, 11 um and 12 um]
Based on the imaginary index of refraction, ice should absorb more (scatter less)
at 4 um than water clouds, and that is indeed the case.
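A crude sketch of a phase split built on those two facts (cold at 11 microns, dark at 4 microns for ice); every threshold here is an illustrative placeholder, not the operational AVHRR typing logic.

```python
import numpy as np

def simple_phase_test(t11, refl_4um, ice_temp_k=263.0,
                      ice_refl_max=0.05, water_refl_min=0.10):
    """Two-channel phase split: ice absorbs more (reflects less) at 4 microns
    than water, and ice clouds are generally colder at 11 microns.
    All thresholds are illustrative placeholders."""
    phase = np.full(t11.shape, 'unknown', dtype=object)
    phase[(t11 < ice_temp_k) & (refl_4um < ice_refl_max)] = 'ice'
    phase[(t11 >= ice_temp_k) & (refl_4um > water_refl_min)] = 'water'
    return phase
```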
How do we detect cirrus over lower cloud?
One way is to look for pixels with a high T11 – T12 and a high 0.65 um reflectance.
Plane-parallel theory says this can’t happen for a single layer, so this is one way
to flag multilayer cloud (a sketch follows below).
Another way (Baum) is to look for pixels that fall between the clusters in
0.65 um and 1.6 um reflectance. 1.6 um behaves like 3.75 um – ice absorbs more
than water.
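A sketch of the first approach (thresholds illustrative): a single plane-parallel layer should not show both a large split-window difference (a thin ice cloud signature) and a high visible reflectance (an optically thick cloud signature), so pixels with both get flagged as possible cirrus over a lower cloud.

```python
import numpy as np

def multilayer_flag(t11, t12, refl_065, btd_min_k=2.5, refl_min=0.4):
    """Flag possible cirrus-over-lower-cloud pixels: a large T11 - T12 implies
    a thin ice layer, while a high 0.65 um reflectance implies an optically
    thick cloud -- plane-parallel theory says one layer cannot do both.
    Both thresholds are illustrative."""
    return ((t11 - t12) > btd_min_k) & (refl_065 > refl_min)
```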