Spatial Filter Performance on Point-target Detection in
Various Clutter Conditions
Using Visible Images
By
Susan Hwang
Submitted to the Department of Electrical Engineering and Computer Science
in Partial Fulfillment of the Requirements for the Degree of
Master of Engineering in Electrical Engineering and Computer Science
at the Massachusetts Institute of Technology
May 2007
© Massachusetts Institute of Technology 2007. All rights reserved.
The author hereby grants to M.I.T. permission to reproduce and to distribute publicly paper and electronic copies of this thesis document in whole or in part in any medium now known or hereafter created.
Author
Department of Electrical Engineering and Computer Science
May 28, 2007

Certified by
Frederick K. Knight
Lincoln Laboratory Senior Staff
Thesis Supervisor

Certified by
George Verghese
Professor of Electrical Engineering
Thesis Supervisor

Accepted by
Arthur C. Smith
Professor of Electrical Engineering
Chairman, Department Committee on Graduate Theses
This report is based on studies performed at Lincoln Laboratory, a center of research
operated by the Massachusetts Institute of Technology. This work was sponsored by the
Secretary of the Air Force/Rapid Capabilities Office under Air Force contract FA8721-05-C-0002. Opinions, interpretations, conclusions, and recommendations are those of the
authors and not necessarily endorsed by the United States Government.
Spatial Filter Performance on Point-Target Detection in
Various Clutter Conditions Using Visible Images
by
Susan Hwang
Submitted to the Department of Electrical Engineering and Computer Science
on May 28, 2007, in partial fulfillment of the
requirements for the degree of
Master of Engineering in Electrical Engineering and Computer Science
Abstract
For a search-and-track system, detection of point targets in clutter is a challenge because spatial noise in an image can be much greater than temporal noise. Suppression of clutter uses a spatial filter matched to the target size. The goal of filtering is to reduce the spatial noise to the temporal noise limit. In this thesis, the detection performances of the Laplacian, Median, Robinson, and Mexican Hat spatial filters were compared to determine the best filter and to reveal trends in the dataset. The sky images were collected on the Lincoln Laboratory roof in Lexington, Massachusetts with a visible imager (1024x1024 pixels, 170 and 15 µrad resolution) over three months, seven times a day, fifty frames each time. Artificial targets with a range of intensities near the temporal noise limit were embedded throughout the entirety of the images to be filtered. After filtering, the performance of the filters was calculated using the Neyman-Pearson detection method, implemented in MATLAB. The Laplacian filter was found to be the best performing filter over the entire dataset, with the other three filters performing almost as well, averaging only 5 percent to 9 percent worse than the leading filter. Trends in the dataset show that performance also depends on the time of day (e.g. morning, midday, after sunset), the spatial standard deviation, the temporal standard deviation, and the resolution of the images (1024x1024, 512x512, 256x256). The conclusions of this thesis give a comparison of spatial filters and a deeper understanding of the dependence of filter performance on a range of variables, which can later be used to improve a detection scheme for point detection in search-and-track systems.
Thesis Supervisor: Fred Knight
Title: Lincoln Laboratory Senior Staff
Thesis Supervisor: George Verghese
Title: Professor of Electrical Engineering
Contents
1 Introduction
  1.1 Introduction
    1.1.1 Background
  1.2 Goals of Thesis
  1.3 Outline of Thesis

2 Data Collection
  2.1 Rooftop Setup
  2.2 Description of Dataset
    2.2.1 Example Day: Description of Histogram Method

3 Data Processing
  3.1 Overall Description of the Processing Steps
    3.1.1 Method of Insertion
  3.2 Target Insertion Methods
    3.2.1 Comparison of Two Methods
    3.2.2 NEI Analysis
  3.3 Application of Spatial Filters
    3.3.1 Filter choices and descriptions of filters
    3.3.2 Examples of images after filtering
  3.4 Description of Analysis Tools
    3.4.1 Receiver Operating Characteristics (ROC)
    3.4.2 C-index and PD versus SNR
    3.4.3 Multi-image or Multi-filter Barplots

4 Results and Analysis
  4.1 Best Filter
  4.2 Trends in Performance
    4.2.1 Month of Year
    4.2.2 Time of Day
    4.2.3 Weather
    4.2.4 Temperature
    4.2.5 Brightness (Mean of Scene)
    4.2.6 Severity of Clutter (Spatial Standard Deviation)
    4.2.7 Temporal Standard Deviation
  4.3 Resolution Effects
    4.3.1 Resolution Comparisons
    4.3.2 Visible versus Infrared Images
  4.4 Performance on Real Target

5 Conclusion
  5.1 Summary
  5.2 Future Work

A Additional Figures
  A.1 November Images
  A.2 February Images
  A.3 March Images
List of Figures
1-1 Typical Search and Track Detection Scheme
1-2 Comparison of Target in Blue-sky and in Clouds
1-3 Division of Chapters
2-1 Data Collection Setup
2-2 Calendar of Recording Dates
2-3 Example day: Images and Histograms
3-1 Summary of Processing Steps
3-2 Target Insertion and Processing
3-3 Target Intensity versus Local Mean
3-4 Comparison of Local Mean Method and Global Mean Method
3-5 NEI Analysis: Temporal Standard Deviation vs. Spatial Standard Deviation
3-6 Comparison of the Performance for the 4 Filters
3-7 Ranges used in Histogram Calculations
3-8 Analysis Steps
3-9 Example of Threshold and ROC
3-10 Example ROC, C-index, PD vs. SNR
4-1 Comparing Filters With Different Views
4-2 Overall Performance of All Filters on All Images
4-3 Month of Year Comparison For Each Month
4-4 Time of Day Comparison For Each Month
4-5 Weather Comparisons
4-6 Temperature Comparisons
4-7 Brightness (Mean of Scene) Comparisons
4-8 Severity of Clutter (Spatial Standard Deviation) Comparisons
4-9 Temporal Standard Deviation Comparisons
4-10 Comparing Filters With Different Resolutions
4-11 Comparing Filters With Different Resolutions - σ-spatial
4-12 Resolution in Visible versus IR image - Raw and Filtered
4-13 Laplacian Filter Performance on Real Target
List of Tables
2.1 Average Sunrise and Sunset Times
2.2 Weather Condition: Number of Images Collected
3.1 Statistics for Filtered Images in Figure 3-6
Chapter 1
Introduction
1.1 Introduction
Consider a search and track system, either using an imager or an infrared focal plane
array. A block diagram of a typical detection scheme is shown in Figure 1-1.
[Block diagram: input stream → single-frame method → multiple-frame method, with possible registration.]
Figure 1-1: Typical Search and Track Detection Scheme
All the steps in the detection scheme are necessary in an accurate search and track system. Up front, there are important areas in the input stream that must be addressed, including 1) maximizing the target signal and 2) minimizing the background. These two areas depend partly on the quality of the imaging sensor and are limited by the sensor's resolution, sensitivity, and noise, and by the intrinsic properties of the wavelengths of interest, which cannot be changed unless there are improvements in technology. However, improvements can be made through the use of spatial filters and threshold techniques, the topic of this thesis. For visible and infrared sensors, insufficient reduction of the spatial noise in images where a weak target is masked by high clutter noise (often due to poor weather) limits the detection range and can reduce the sensor's effectiveness. Spatial filters, applied to the raw images, can minimize the background and clutter noise and maximize the target signal in the scene.
Test of a weak target: Target = local mean + 10 × σtemporal

Background:             Blue Sky    Clouds     Clouds
Processing:             None        None       Laplacian filter
SNR:                    8.5         1.7        6.1
False alarms/image*:    0           94,000     0.001

*Number of false alarms in a 1024x1024 image assuming the measured temporal noise and no spatial noise (≈ blue sky).

Figure 1-2: Comparison of Target in Blue-sky and in Clouds.
Although temporal noise (standard deviation σtemporal) ultimately limits the detection of a point target, spatial noise in the scene can degrade sensitivity severely. As measured by the signal-to-noise ratio (SNR), applying a high-pass spatial filter can overcome this degradation. As an example, Figure 1-2 compares the performance of target detection in blue-sky and in cloud clutter conditions, measured by SNR and the number of false alarms, and also shows the improvement in performance after applying a spatial filter. The image was taken on November 20th, 2006 at 12 PM by a 1024 by 1024, 12-bit visible camera. The inserted target intensity is a relatively weak signal calculated from the equation:

Target = local mean + 10 × σtemporal    (1.1)
The σtemporal is calculated by taking the temporal standard deviation of fifty frames of an after-sunset image (relatively uniform and low spatial noise) and then averaging across the entire image to obtain a final σtemporal value. The local mean is calculated by taking the average of the target's eight neighboring pixels at the target's location. The after-sunset SNR (as-SNR) is set to 10.
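A minimal MATLAB sketch of this calculation is shown below, assuming the fifty after-sunset frames are stacked in a 1024x1024x50 array named frames and that the target location (r, c) is given; the variable names are illustrative and not taken from the original processing scripts.

% Temporal noise estimate: per-pixel standard deviation over the 50 frames,
% averaged over the entire image.
sigma_temporal = mean(mean(std(double(frames), 0, 3)));

% Local mean at the target location (r, c): average of its 8 neighboring pixels.
img   = double(frames(:, :, 1));        % first frame used as the raw image
patch = img(r-1:r+1, c-1:c+1);          % 3x3 neighborhood around the target
local_mean = (sum(patch(:)) - patch(2, 2)) / 8;

% Weak-target intensity from Equation 1.1 (as-SNR = 10).
target_value = local_mean + 10 * sigma_temporal;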
In the first case of Figure 1-2, the calculated target value of 337 for this location replaced a pixel in the blue-sky portion of the image. To compare the blue-sky condition to the best-case scenario, the after-sunset SNR of 10, the SNR of the blue-sky target is calculated by:

SNR = (target value − mean(bg grid)) / std(bg grid)    (1.2)

The target value is calculated from Equation 3.1 and the bg grid is the 5 by 5 grid surrounding the target. The SNR for the blue-sky target is 8.5. Essentially, the SNR of the target in the images is measured against the perfect score of SNR = 10 from the after-sunset image. The value of SNR = 8.5 for the target in blue sky shows that there is a 1.5 loss of signal relative to the surrounding clutter. This is partially due to the fact that the spatial noise (2.36) of the blue-sky clutter is 1.18 times greater than the after-sunset temporal noise. However, this SNR value is still high compared to the target in the clouds. In the second case, the target is inserted into the cloudy portion of the image, resulting in a low SNR of 1.7. Compared to the SNR of the target in blue sky, the SNR has decreased by a factor of 5. The background spatial noise of the cloud case is 5.882 times greater than the blue-sky case and 5 times greater than the after-sunset case. After applying a Laplacian filter on the entire image, the SNR improves to 6.1, a factor of 3.6. This means that the spatial filter is successful in recovering the signal by suppressing the clutter and enhancing the target signal.
In addition to using SNR as a way to evaluate the detectability of the target, another important metric is the estimated number of false alarms per image. A false alarm is defined as a detection of a target when no target is present. It is not enough to look at how easily detectable the target is from the background clutter; one must also ask whether the number of false alarms increases as a result of trying to detect a low-SNR target. In order to calculate the approximate number of false alarms, the background distribution needs to be estimated. The background noise can be modeled as a Gaussian distribution. For an estimated background signal power ranging from 0.2 kW/m² at 5 PM to 0.7 kW/m² in full sun, the mean number of photoelectrons received by the sensor is large enough that the Poisson distribution of photon arrivals becomes a Gaussian distribution. Therefore, as the SNR of the target gets lower, the target approaches the Gaussian distribution of the background noise, generating more false alarms. As discussed in Section 1.1, the noise will be a Gaussian distribution with mean equal to the local mean and standard deviation σtemporal. The SNR is the ratio (target − local mean)/σtemporal. The number of false alarms at the target level = (number of pixels) × (probability of exceeding the SNR). The probability is the integral of the Gaussian tail from SNR to infinity; with x = signal/σtemporal, the Gaussian integrand is e^(−x²/2), which is the same as e^(−signal²/(2σtemporal²)):

P(x > SNR) = (1/√(2π)) ∫_SNR^∞ e^(−x²/2) dx    (1.3)
Referring back to Figure 1-2, although the SNR of the target in the cloudy portion of the image only decreases by a factor of 5 from the blue-sky case, the number of false alarms increases from 0 to 94,000. The application of the Laplacian filter increased the SNR by only a factor of 3.6 but decreased the false alarms by a factor of 94,000,000, to 0.001. Not only does the spatial filter increase the relative intensity of the target, it also suppresses the clutter.
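As a rough check of these false-alarm counts, the Gaussian tail probability of Equation 1.3 can be evaluated numerically. The sketch below assumes only the SNR values quoted above and uses MATLAB's complementary error function; it is not part of the original processing chain.

% Expected false alarms per image: number of pixels times the Gaussian tail probability.
n_pixels = 1024^2;
snr      = [8.5, 1.7, 6.1];              % blue sky, clouds, clouds + Laplacian filter
p_tail   = 0.5 * erfc(snr / sqrt(2));    % one-sided tail P(x > SNR) from Equation 1.3
n_fa     = n_pixels * p_tail;
% Counting excursions of either sign (2 * p_tail, both tails) gives values close to the
% 0, 94,000, and 0.001 false alarms per image quoted in Figure 1-2.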
1.1.1 Background
In order to improve the performance of search and track systems, there has been much research into better-performing detection systems in the area of spatial filters. Today, there is a collection of different types of spatial filters to choose from. Linear spatial filters like the Laplacian filter or the linear matched filter are standard filters used in detection and have been used since the beginning of search and track systems. However, the nonstationarity of infrared clutter backgrounds results in degraded performance for linear filters. In order to overcome this deficiency, non-linear techniques for point target detection have been proposed as far back as 1959 by David Robinson¹. Few studies of non-linear filters exist in the published literature, because they lack a general theory comparable to the rich Fourier theory for linear filters, until a study done by Barnett². Barnett performs a quantitative analysis of the statistical properties of a non-linear filter that he calls "Median Subtraction Filtering (MSF)," which is called simply the Median filter in this thesis. The performance of the Median filter is also compared to an adaptive linear filter. Results show that the Median filter is better than the adaptive filter in mild clutter scenes but has a large Probability of False Alarm, a problem typical in detection schemes.
In Barnett's study, only 2 types of filters are compared. Sevigny³ and Reiss⁴ have both independently tested a variety of spatial filters on a variety of infrared images. Sevigny did a comparative study of spatial filters proposed for detecting stationary or slow-moving positive-contrast point targets limited to 1 pixel, similar to the targets studied in this thesis. The Laplacian, Mexican Hat, Wiener, and "submedian" filters (the last also known as the Median filter), and the double-gated filter were applied to infrared images and compared. The Mexican Hat filter worked the best, while the Laplacian, Wiener, and submedian filters exhibited the same behavior, and the nonlinear filter did not do better than the linear filter. The performance of the submedian filter also decreases as the size of the kernel increases. Reiss' study uses standard experimental techniques and filters to test filter performance on images taken in a variety of clutter conditions. The algorithms were tested on 276 images from the Lincoln Laboratory Infrared Measurement Sensor (IRMS) database, which were taken at high resolution in dual bands at various sites, atmospheric conditions, and times of day. The target signature was modeled and then carefully inserted into the dataset. A total of 15 linear and nonlinear filtering algorithms were tested, varying in filter size and training-data size. Filter comparisons show that many of the filters have similar performance and all are clutter limited. The overall best performing filters were the Robinson filter with a guard band and the 3x3 matched filter, both with clustering algorithms. The results remain the same regardless of time of day, terrain, or cloud conditions.
Recently, morphology filters have been introduced to detect point targets. Morphology filters are shape operators like dilation and erosion filters. They are applied to the image one after another, often repeatedly, to capture the background statistics of the image and suppress clutter⁵. Morphology filters have been studied by Barnett, Ballard, and Lee, who compared the morphology filters to adaptive linear filters and MSF filters⁶. Their studies show that the morphology filters never outperformed the MSF filters. However, another study done by Tom, Peli, Leung, and Bondaryk that also compared the morphology filters to Median filters showed that the performance of the morphology filters was equal to or better than the Median filters in 2 out of the 3 cases⁷. Morphology filters are not studied in this thesis because of their computational intensity, but they should be considered in future work.
Other related research on the detection of targets in infrared clutter studies performance trends on large datasets and resolution effects. Klick, Blumenau, and Theriault have done a study that tested a 5x5 Laplacian filter with a guard band on a wide dataset taken from different locations in various clutter conditions, and also at reduced resolution. The results show that the clutter metric (minimum detectable irradiance) is independent of season, detector spatial resolution, and waveband, but is strongly dependent on clutter type and, to a lesser extent, the time of day.
As shown in this section, there has been much research related to point-target detection in various natural and man-made clutter for infrared sensors, but there has not been much work for the visible sensor. This is because in the visible field, work has primarily been done for machine vision, for applications like the inspection of pre-manufactured objects, medical imaging, or automated guided vehicles, where the targets have a larger spatial scale or where edge detection or segmentation is needed. Advantages of using visible imagers for detection include their ability to operate in stealth situations, due to the passive nature of the sensor, and their compact size. In this thesis, I propose to study the performance of 4 spatial filters, 2 linear and 2 nonlinear, on point targets embedded in visible images. The method of analysis is modeled after previous studies done in this area, like Reiss and Sevigny: collecting images, calculating target intensity, inserting targets, filtering, and thresholding to generate Receiver Operating Characteristics (ROC) for comparison and analysis. Following Klick, Blumenau, and Theriault, analysis will also be done on spatial filter performance on reduced-resolution images and on other qualitative and quantitative variables. The goals are described in detail in the next section.
1.2 Goals of Thesis
The goal of this thesis is a comprehensive evaluation of the performance of selected spatial filters on a wide range of visible, high-resolution images of the sky collected on top of the Lincoln Laboratory S-building, and on lower-resolution versions of those images. The entire evaluation includes collecting visible data, embedding targets, filtering the images, and analyzing the results. After applying the appropriate processing and analysis steps to the visible images, it will be possible to answer these questions:
1. Which spatial filter is the best to use overall (on all images), i.e., which yields the highest detection performance?
2. What is the dependence of detection performance on the variables of interest: time of day, type of weather, day of the week, month of the year, spatial standard deviation, temporal standard deviation, brightness of the image, or temperature?
3. What are the effects of lower resolution on detection performance when the resolution is reduced to 1/2 and 1/4 of full resolution in each dimension?
4. What is the relationship between infrared and visible imagery? In particular, IR image statistics show fluctuations at high significance ("in the tail" of the distribution). Do these occur in the visible data?
5. What is the performance of the best filter on real targets?
Figure 1-3: Division of Chapters
1.3 Outline of Thesis
The thesis is divided into five chapters that follow the processing and analysis steps in a sequential manner.
Chapter 2 focuses on the initial data collection steps, including detailed descriptions of the camera properties and the logistical aspects of the image collection, like how many times a day images are recorded, for how long, and at what data rate. This chapter also includes a description of the data and of the variety of qualitative and quantitative variables the dataset includes, like different temperatures, weather conditions, and statistical properties of the images.
Chapter 3 includes the description of the data processing steps and the data analysis that justified the steps that were chosen. These sections go through an overall description of the processing chain, how target intensities are calculated and inserted, and how the images are filtered and with which spatial filters. There is also a description of the analysis tools, the metrics used to compare the spatial filters, and their significance.
Chapter 4 shows all the results of the processing and the answers to the questions posed in Chapter 1.
And finally, Chapter 5 summarizes the findings, concludes the thesis, and poses questions for further research.
Chapter 2
Data Collection
2.1 Rooftop Setup
The rooftop setup includes a Pulnix TM-1402CL visible camera that records at 30 frames per second at 12-bit resolution, with a weather-tight housing and multiple zoom settings, shown in Figure 2-1. The camera is located on the east end of the Lincoln Laboratory S-building roof in Lexington, Massachusetts, facing north into the sky, away from ground clutter. The data pipeline runs from the camera over Camera Link to optical fiber (which is routed from the roof to the Vision Lab, S3-248), is converted back to Camera Link, and is then sent directly to the Linux computer. An EDT framegrabber card (EDT PCI DV C-Link card) in the Linux computer is the interface to the Camera Link. The EDT company provides a C/C++ library used to create programs that grab images, save images, and control the shutter speed. The images are saved directly to the computer's hard drive.
The camera is set to two different fields of view (FOV), 10 degrees and 0.9 degrees, at a dimension of 1024 by 1024, giving angular resolutions of 170 and 15.3 µrad. The camera was set to automatically take 50 consecutive frames, 7 times a day, 5 days a week (Monday through Friday). The camera was on from November 2006 to March 2007, excluding December and January. For November, the times of day were 7am, 8am, 10am, 12pm, 2pm, 4pm, and 5pm, and for February to April, the times of day were 6am, 8am, 10am, 12pm, 2pm, 4pm, and 6pm.
Figure 2-1: Data Collection Setup

The collection times
of day were adjusted according to the different sunrise and sunset times of the three months. Table 2.1 shows the exact sunrise and sunset times at the beginning and end of the data collection for each month. In November, data collection began after sunrise and ended after sunset. For February and March, data collection began before sunrise; it ended after sunset for February but for only part of March.
Month                          Beg. Sunrise   Beg. Sunset   End Sunrise   End Sunset
November                       6:26 AM        4:31 PM       6:52 AM       4:13 PM
February                       6:46 AM        5:13 PM       6:25 AM       5:30 PM
March (no daylight savings)    6:21 AM        5:34 PM       6:31 AM       6:08 PM

Table 2.1: Average Sunrise and Sunset Times
2.2 Description of Dataset
In total, there are 385 images that span 55 days, 3 months and a wide range of
weather conditions like clear, rainy, cloudy and many more. The exact breakdown of
the dataset is shown in Table 2.2. A dataset that contains such a variety of images
will yield varied detection performances as well.
Condition           November   February   March
clear               37         37         64
fog                 4          1          0
mist                2          0          0
light drizzle       4          0          0
light rain          12         0          9
rain                3          0          3
light snow          0          8          2
scattered clouds    3          8          15
partly cloudy       2          10         14
mostly cloudy       10         11         16
overcast            36         14         23

Table 2.2: Weather Condition: Number of Images Collected
Figure 2-2: Calendar of Recording Dates.

2.2.1 Example Day: Description of Histogram Method
Figure 2-3 shows an example of a typical day, November 20th, from the dataset (7am not shown). The images are from the first frame of the 50-frame set. Since each image has a different mean and standard deviation, in order to see the texture in the images, each is displayed using its own scale over the range [mean − 20σtemporal, mean + 20σtemporal]. The weather labels were taken from Weather Underground (www.wunderground.com), a popular online weather-reporting site.
The histograms show a more in-depth statistical view of the images, plotting pixel value on the x-axis and number of counts on the y-axis. The shapes of the histograms and the spread of pixel values in the image illustrate different factors that affect detection performance. As shown in Figure 1-2, targets in blue sky are more easily detectable than targets in cloud clutter. This can be predicted by looking at the histograms of the images. The range of the histogram indicates how bright, and how far from the background (in the histogram view), the target pixel value needs to be so that it is not mistaken for background. Looking at it another way, the narrower the background, the harder it is to mistake a background pixel for a target pixel. Looking at 5pm, the histogram shows a peak that goes off the scale, with pixels centered at around 30 digital counts. This means that the image is relatively bland, with a small spatial standard deviation, similar to a blue-sky condition, and can be expected to have high detection performance. A cloudier scene has a more variable histogram shape with a wider range, as in the case of 10am and 12pm. In order for a target to stand out in that type of histogram, the target intensity will have to be higher than in the case of the 5pm histogram.
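A small MATLAB sketch of this per-image display scaling and histogram, assuming a raw frame in the matrix img and the previously estimated sigma_temporal (the variable names are illustrative):

% Display each image on its own scale, mean +/- 20 temporal-noise standard deviations,
% so that the texture is visible regardless of overall brightness.
img = double(img);
m   = mean(img(:));
imagesc(img, [m - 20*sigma_temporal, m + 20*sigma_temporal]);
colormap(gray); axis image;

% Histogram of pixel values: digital counts on the x-axis, number of pixels on the y-axis.
figure;
histogram(img(:), 200);
xlabel('Pixel value'); ylabel('Counts');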
Figure 2-3: Example day: Images and Histograms. [Images from 8 AM through 5 PM, each shown with its weather label (clear, overcast, mostly cloudy, after sunset/dark) and its pixel-value histogram.]
Chapter 3
Data Processing
3.1 Overall Description of the Processing Steps
In general, the processing of the raw images includes embedding a target into the
image and filtering the image. The detailed processing steps include:
1. Choose raw image
2. Choose a location to insert the target
3. Choose an SNR to insert the target at and compute the target intensity
4. Insert the target into the raw image
5. Filter with a selected spatial filter
6. Calculate statistics, threshold, and record into a MATLAB structure
7. Repeat steps 2-6 for all pixel locations throughout the image (1024²)
8. Repeat steps 2-7 for the rest of the SNRs from 1 to 20
9. Repeat steps 2-8 for the rest of the filters
10. Repeat steps 2-9 for all the images
All steps are implemented using MATLAB scripts that run automatically through the steps once a dataset of images is chosen. Due to the computational intensity of the processing steps (i.e., computing the target intensity per pixel per SNR, and multiple filtering steps), steps 1-9 typically take 3-4 hours to complete for one image.
The global after-sunset signal-to-noise ratio (as-SNR) is varied in order to obtain a range of performances for targets of different brightnesses. The targets are also inserted throughout the image, instead of just one per image, in order to get a statistically significant analysis of detection performance. Figure 3-1 shows a graphical representation of the processing steps.

Figure 3-1: Summary of Processing Steps
3.1.1 Method of Insertion
Instead of inserting one target into the raw image, filtering, and repeating this process 1024² times, the processing is reduced to filtering 4 images, each including a quarter of all the targets. The targets are split so that neighboring targets do not affect the 3x3 filter calculations. Figure 3-2 shows how the targets are allocated to each embedded image. After filtering, the target values are collected at each of the pixel locations, which generates the filtered target distribution. A raw image without any targets is filtered and generates the filtered background distribution.
Figure 3-2: Target Insertion and Processing
After normalization, both distributions are plotted as a histogram, a method that
will be used later on.
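A minimal MATLAB sketch of this insertion scheme, assuming a raw image img, a matrix target_value of per-pixel target intensities, and a linear 3x3 kernel h for illustration (the names are illustrative, not from the original scripts):

% Split the targets into 4 embedded images so that inserted targets are 2 pixels apart
% and never fall inside each other's 3x3 filter neighborhood.
img = double(img);
target_hist = [];
for rowPhase = 0:1
    for colPhase = 0:1
        embedded = img;
        rows = (2 + rowPhase):2:(size(img, 1) - 1);     % stay off the image border
        cols = (2 + colPhase):2:(size(img, 2) - 1);
        embedded(rows, cols) = target_value(rows, cols);
        filtered = conv2(embedded, h, 'same');          % apply the 3x3 spatial filter
        vals = filtered(rows, cols);                    % filtered values at the target pixels
        target_hist = [target_hist; vals(:)];           % accumulate the target distribution
    end
end
background = conv2(img, h, 'same');                     % filtered image with no targets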
3.2 Target Insertion Methods
Many methods were considered for calculating the target intensities to insert into the raw images. The brute-force method would be to insert global constants (chosen by looking at the intensities of real targets) as target intensities. This means that for all the images, regardless of mean or noise, the target would be inserted at a range of values from 250-400. Using this method, it would be possible to compare detection performance between the filters for each image but impossible to compare a filter's performance across different images, due to the different mean intensities of the individual images. For example, a target inserted at 300 in an image after sunset will be more easily detectable than in a clear-weather image during the middle of the day, when the sky is brighter. It is necessary to tie the target intensity to statistics unique to each image. The next section discusses two different methods of calculating target intensity by tying it to the mean of the image.
3.2.1 Comparison of Two Methods
The two methods considered for calculating the target intensity are derived from a standard method used to embed targets in infrared images. According to that method, the infrared target intensity is tied to the minimum detectable irradiance (MDI)⁴. Because of the difficulty of modeling the intrinsic signal from a real target, the limiting target level is set based on system properties and is essentially a small number (SNR) times the detector noise (noise-equivalent irradiance, NEI): MDI = SNR × NEI. For the visible target, it is necessary, as seen at the beginning of this section, to use the mean in the calculations.
The current lighting condition can be estimated by calculating the mean of the image, making the equation to use:

Target Intensity = mean + SNR × NEI    (3.1)

The limiting target level is set by the SNR, which is varied from 1 to 20 to obtain a range of target intensities. The chosen range was based on the range used by Barnett², which was 6 to 18. The NEI can usually be obtained when the exact model of the sensor is available, but it can also be estimated by taking the temporal standard deviation of the 50 frames of an after-sunset or blue-sky image (σtemporal), which is close to the system noise level. The exact choice of the final NEI will be discussed in the next section.
The final decision is between using a global or a local mean to calculate the target intensity. The global mean uses the entire image to calculate the mean, whereas the local mean uses a local 3x3 grid around the target. The global method gives one target value per SNR for the entire image, suggesting that the target's brightness does not change throughout the entire image and is independent of the local surroundings. One can imagine a situation where a plane is in front of the clouds and in the sun, so that as the plane flies through the scene, its brightness remains unchanged. The justification for using a global mean as a good measure is that the sun, at different times of the day, will contribute a varying
amount of visible sunlight to be reflected off of objects, both cloud and target. This general mean trend should have the same effect on all objects in the same time period; for example, 12 PM will have a brighter global mean than 4 PM.

Figure 3-3: Target Intensity versus Local Mean. [Three panels track a real target's intensity and its local background over frames: 10-25-06 3 PM (cloudy day), 10-25-06 2 PM (negative contrast), and 10-25-06 3 PM (clear day, very small target).]
The local mean method involves recalculating the target intensity for every pixel in the image for every given SNR. The argument for the local method is that the best estimate of the local environment (including sunlight, shadows, and reflections) is the immediate surroundings. Figure 3-3 shows the tracking, over 200 frames, of a real target's intensity and its local background through different scenes in the sky. Since the target often spanned more than 1 pixel, the brightest target pixel was chosen as the target intensity for this comparison. The results show that the target intensity (shown as the red curve) approximately follows the local mean (shown as the blue curve). One area of concern is frame 20 of the clear day, where the target intensity is at a value of 410, far larger than the intensity in other frames. That frame is an outlier because the majority of the small target was centered in one pixel, making the target intensity much brighter than in the other situations. Since there is a large amount of variation within a given image, taking the local mean adjusts the target intensity to match the local scene. The additional small offset (SNR × NEI) then tests the limit of detection within the scene. The situation that the local method is trying to capture is a plane flying under a cloud for part of the scene and out of the cloud, going from a lower intensity to a higher one.
After using both methods of calculating target intensity, the decision was made to use the local mean method. Figure 3-4 shows the results of using the two different methods. The target was inserted throughout the entire image, according to the intensities calculated with each method, and filtered with a Laplacian filter. The histograms of the resulting filtered images for each of the methods show that using different means to calculate the target intensity changes the shape of the filtered target quite a bit. The background curve (blue in the figure) has the same shape in both cases, which is to be expected since the background for both methods remains the same. The interesting part of the histograms is the shape of the target histograms.
Figure 3-4: Comparison of Local Mean Method and Global Mean Method. [Example image 11/20/06 10 AM, SNR = 11, σtemporal = 2; histograms of target and background after applying the Laplacian filter, for Target = global mean + SNR × σtemporal and Target = local mean + SNR × σtemporal.]

The target histogram from the
global mean method shows a full range of -171 to 124, a standard deviation of 53.67, and an average of 22, whereas the local mean method has a range of -23 to 32 and a standard deviation of only 1.9 but the same average of 22. For this particular image, the calculated global target intensity is 339.3, the mean of the raw image background is 317.3, and the global standard deviation is 53.8.
Looking at the local mean method, the target histogram has a very sharp shape, with 81.75 percent of pixels falling within 1 standard deviation of the mean and a small range of 55. This is the expected output of a Laplacian filter (similar to a high-pass filter). The spatial filters used in target detection should work to move the target away from the background and to reduce the background noise (lower its standard deviation). Although both methods are valid and show different ways of evaluating the performance of the filters, the local mean method places the target above the clutter, allowing a positive contrast, whereas in the global mean method the variation in the image is so large that the additive target signal is small compared to the surrounding clutter, or even in negative contrast, making it harder to measure performance. Because of this, and because the target intensity follows the local mean in the 3 images tested in Figure 3-3, the local mean method was chosen.
One important thing to note is that this method does not model cases like an after-sunset situation, when a plane's light can be brighter than a target in bright daylight. Instead of modeling these factors and inserting a brighter target in an after-sunset situation to account for this possibility, the target is inserted with a wide range of target SNRs from 1 to 20 to include all possible intensity cases, regardless of the type of image. Whether SNR = 1 or SNR = 20 is physically possible is not dealt with in this thesis; for example, SNR = 20 in a foggy scene may not be possible due to the attenuation of the target signal. The target insertion method is merely a way of comparing filter performance for different types of images.
3.2.2 NEI Analysis
The NEI is an estimate of the sensor noise, and can be obtained by calculating either the spatial standard deviation or the temporal standard deviation. For a spatially flat (constant) input signal and identical pixels, the spatial variation over many pixels should be identical to the temporal variation of one pixel. This is a statement about independent trials drawn from a Gaussian distribution, and Section 1.1 showed that the number of photoelectrons received is high enough that the electron arrivals, modeled as a Poisson distribution, become Gaussian for a large number of photoelectrons. In order to test the validity of this statement, both spatial noise and temporal noise were calculated for two relatively uniform images, blue-sky and after-sunset. Figure 3-5 shows the results of the calculations. For the temporal standard deviation, five pixels were chosen at locations (512,512), (256,256), (768,256), (768,768), and (256,768), and the temporal standard deviation was calculated for each pixel over 50 frames. The results in Figure 3-5 show that the temporal standard deviation at all 5 locations is relatively low, with a mean of 4.5 and a standard deviation of 0.2. The locations for calculating the spatial standard deviation were chosen using the pixels used for the temporal standard deviation as the centers of 7x7 grids, using 49 pixels each. The spatial standard deviation for the blue-sky image has a mean of 4.7 and a standard deviation of 0.5, both slightly higher than the temporal noise case.
For the after-sunset image, the temporal noise mean is 2 with a standard deviation of 0.066, and the spatial noise mean is 1.93 with a standard deviation of 0.17. The fact that the mean of the spatial noise measures lower than the mean of the temporal noise shows that we have reached the minimum sensor noise, where the temporal and spatial standard deviations are approximately equal. The small difference in the means of the temporal and spatial noise indicates that the statement assumed at the beginning of the section gives a close estimate, within < 1 standard deviation of the mean, and the temporal and spatial noise can almost be used interchangeably. This value of NEI is chosen to scale the target intensity. The idea is to choose the lowest value so that other, noisier and more cluttered images can be compared against it. Therefore, for all the targets to be inserted, the NEI = 2. Since all of the NEIs of the targets are set to 2, the SNR used in the target intensity calculations is not a true SNR but is called an after-sunset SNR (as-SNR). Another alternative is to use the σtemporal of each image instead of a universal value of 2.
                              Blue-Sky                    After-sunset
Location (pixel)              Temporal    Spatial         Temporal    Spatial
Grid 1 (512, 512)             4.4643      5.5643          1.9289      1.7229
Grid 2 (256, 256)             4.2427      4.7478          2.0103      2.0379
Grid 3 (768, 256)             4.8413      4.0572          2.0353      1.7093
Grid 4 (768, 768)             4.5022      4.7314          2.1043      2.0207
Grid 5 (256, 768)             4.6386      4.6210          1.9729      2.0798
Entire Image                  -           38.5229         -           2.0031

Figure 3-5: NEI Analysis: Temporal Standard Deviation vs. Spatial Standard Deviation

This would normalize the
images so that the effect of σtemporal is eliminated. However, it is interesting to see the role that σtemporal has on the performance of the spatial filters, so the as-SNR is used for the target calculation. In Chapter 4, the SNR is adjusted to give the true SNR for comparison.
3.3 Application of Spatial Filters

3.3.1 Filter choices and descriptions of filters
In terms of spatial filters, there are many choices, ranging from linear to non-linear filters, filters with or without guard bands, and filters with different kernel sizes. The four filters chosen are the Laplacian, Median, Robinson, and Mexican Hat, each with a 3x3 kernel. These filters were chosen because they are computationally cheap and have proven successful in the past, as seen in Reiss's analysis of spatial filter performance on infrared images. The size and shape of the kernel are usually determined by the noise characteristics, the expected intensity distribution, and the size of the target. The 3x3 kernel was chosen because of the small spatial scale of the point target.
Laplacian Filter. This is a two-dimensional Laplacian second-derivative operator (high-pass filter) that is typically used as an edge detector and is often used for baseline comparison purposes⁴. The idea is that the second derivative of a signal is zero where the derivative magnitude is at a maximum. Applying the Laplacian filter to the images will extract the areas with the largest changes. A Laplacian will generally give a bigger response to a point than to a line, which is a favorable feature for detecting point targets (Russ, 2002). However, at the same time, the Laplacian will highlight single-pixel noise. Usually, an application of a Laplacian filter yields many points declared as edge points.
A typical Laplacian filter would look like:

 0  -1   0
-1   4  -1
 0  -1   0
Robinson Filter. The Robinson filter is a nonlinear, nonparametric filter and is known as an edge-removal filter in a general image processing context. The advantage of the Robinson filter is its ability to remove threshold crossings, which decreases the detection threshold. The Robinson filter was also chosen because of its great performance on infrared images⁴.
The Robinson filter returns one of these values:

If X > max(NN): X − max(NN)
If X < min(NN): X − min(NN)
If min(NN) ≤ X ≤ max(NN): 0

where X is the pixel that will be replaced and the Nearest Neighbors (NN) are the pixels surrounding X.
Median Filter. The Median filter is a nonlinear process that is useful for detecting edges while reducing random noise (salt-and-pepper noise). The "traditional" median filter takes the median of the pixels within the kernel window and uses the median value as the output. In our case, this filter would blur our point target. If the difference between the target pixel and the median is used instead, then the contrast is highlighted.
The Median filter returns:
X − med(NN)

Figure 3-6: Comparison of the Performance for the 4 Filters. [Targets inserted using SNR = 10; histograms of the filtered images for the Laplacian, Median, Robinson, and Mexican Hat filters.]
Mexican Hat Filter. The Mexican Hat filter, so called because of the shape of the filter, is both a smoother and an edge detector in one. It can eliminate noise and emphasize the point target. A typical Mexican Hat kernel has a positive center surrounded by negative weights.
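The sketch below shows one way the four 3x3 filters could be implemented in MATLAB on an image img. The Laplacian kernel is the standard 4-neighbor second-derivative kernel shown above; the Mexican Hat kernel is an illustrative center-positive, surround-negative choice, not necessarily the exact coefficients used in this thesis; the Robinson and Median (median-subtraction) filters follow the definitions given above.

% Linear filters: convolution with a 3x3 kernel.
img = double(img);
lap_kernel = [0 -1 0; -1 4 -1; 0 -1 0];          % Laplacian (high-pass)
mh_kernel  = [-1 -2 -1; -2 12 -2; -1 -2 -1];     % assumed Mexican Hat approximation
laplacian   = conv2(img, lap_kernel, 'same');
mexican_hat = conv2(img, mh_kernel,  'same');

% Nonlinear filters: computed from the 8 nearest neighbors of each pixel.
[nr, nc] = size(img);
padded = img([1 1:end end], [1 1:end end]);      % replicate-pad by one pixel on each side
median_out   = zeros(nr, nc);
robinson_out = zeros(nr, nc);
for r = 1:nr
    for c = 1:nc
        block = padded(r:r+2, c:c+2);
        x  = block(2, 2);                        % the pixel being replaced
        nn = block([1:4 6:9]);                   % its 8 nearest neighbors
        median_out(r, c) = x - median(nn);       % median-subtraction (MSF) output
        if x > max(nn)
            robinson_out(r, c) = x - max(nn);
        elseif x < min(nn)
            robinson_out(r, c) = x - min(nn);
        else
            robinson_out(r, c) = 0;              % X lies between min(NN) and max(NN)
        end
    end
end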
3.3.2 Examples of images after filtering
Figure 3-6 shows the performance of the four different filters in histogram form. Again, the filtered pixel value is on the x-axis and the number of pixels is on the y-axis. In order to evaluate the performance of the filters, it is necessary to look at the shapes of the target and background distributions, specifically how far the target is from the background. The ideal case would be if both target and background have narrow distributions and the target values are sufficiently far from the background so that the target and background do not overlap.
Table 3.1 captures the statistics of the distributions, including the range, mean, and standard deviation of the background (blue in the histograms) and target (red in the histograms) distributions, using the ranges illustrated in Figure 3-7. It also includes the percentage of the total range that is overlapping and the percentage of the pixels in each distribution that are overlapping.
There are multiple views from which to analyze the statistics in Table 3.1. Looking first at the ranges, it looks as if the Robinson filter is the best filter, since it has the smallest ranges for both the target and the background. However, the Robinson filter also has the lowest target mean, yielding the second highest percentage of overlapping target and background pixels, which does not help determine the "best" filter. The percent overlap of the total pixels for both the target and background gives a sense of how many target pixels will go undetected or how many background pixels will be mistaken for target pixels. The Laplacian has the lowest percent overlap of the total range, meaning that only 24.49 percent of the entire range contains overlapping pixels. It is also important to look at how many pixels are overlapping in this region. For the Laplacian, the overlapping range includes 80 percent of the target pixels and 1.7 percent of the background pixels. The other filters have range overlaps up to 43.75 percent, with up to 85 percent of target pixels and 82 percent of background pixels overlapping, which means the majority of the pixels are overlapping. By this view, the Laplacian is the best filter. However, it is difficult to use the histogram as a metric for comparing detection performance, since the distribution statistics only give a crude assessment of performance. It is necessary to focus on the area of overlap to determine each filter's exact balance between the probability of detection, which is affected by the number of target pixels in the overlap region, and the probability of false alarm, which is affected by the number of background pixels in the overlap region. The standard way of capturing the true performance of a filter is to use the thresholding method to create a Receiver Operating Characteristic (ROC). The ROC can be generated by applying the thresholding method to the probability density curves of the target and background, which can be generated from normalized histograms.
Figure 3-7: Ranges used in Histogram Calculations. [Filtered pixel-value axis showing the total target range, the total background range, and the overlap region.]

The ROC will be discussed in Section 3.4 along with other analysis tools.
Filter            Range          Mean     Std Dev   Percent overlap   Percent pixels
                                                    of total range    overlapping
Lp - target       [9 to 29]      19.94    1.90      24.49             80.00
Lp - background   [-20 to 21]    -0.13    4.03                        1.70
Md - target       [7 to 28]      19.28    1.49      35.42             99.96
Md - background   [-20 to 24]    0.015    3.81                        4.91
Rb - target       [0 to 20]      13.82    2.50      42.86             73.24
Rb - background   [-15 to 15]    0.01     1.19                        92.26
Mh - target       [-4 to 41]     19.95    4.28      43.75             85.70
Mh - background   [-23 to 24]    -0.05    4.76                        82.54

Table 3.1: Statistics for Filtered Images in Figure 3-6
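The overlap statistics in Table 3.1 can be computed from the filtered target and background pixel values (vectors target_vals and bg_vals). The variable names and the exact overlap definitions below are assumptions, although these definitions do reproduce the percent-overlap-of-range values in the table.

% Ranges of the two filtered distributions and the region where they overlap.
t_lo = min(target_vals);  t_hi = max(target_vals);
b_lo = min(bg_vals);      b_hi = max(bg_vals);
ov_lo = max(t_lo, b_lo);  ov_hi = min(t_hi, b_hi);

% Percent of the total (union) range covered by the overlap region.
total_range    = max(t_hi, b_hi) - min(t_lo, b_lo);
pct_range_ovlp = 100 * max(ov_hi - ov_lo, 0) / total_range;

% Percent of each distribution's pixels falling inside the overlap region.
pct_target_ovlp = 100 * mean(target_vals >= ov_lo & target_vals <= ov_hi);
pct_bg_ovlp     = 100 * mean(bg_vals     >= ov_lo & bg_vals     <= ov_hi);

% Range, mean, and standard deviation, as reported in the table.
stats = [t_lo t_hi mean(target_vals) std(target_vals);
         b_lo b_hi mean(bg_vals)     std(bg_vals)];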
3.4 Description of Analysis Tools
There are a few analysis tools used to compare the performance of different filters
over a variety of images. Figure 3-8 shows the steps taken to generate the plots.
Each step will be explained in detail in the following subsections. Each plot shown
in Figure 3-8 shows a different perspective for looking at the results.
Figure 3-8: Analysis Steps. [Flow: choose image and SNR → probability density functions → threshold by varying PFA → ROC → PD vs. SNR (at a chosen PFA), C-index, and minimum-SNR barplot.]

Histogram. The histogram, as described in Section 3.3.1, is particularly
useful for a general visual assessment of the filter performance, by looking at the shapes of the distributions and the relative distance between the target and the background. The histogram shows the performance of one filter, at one SNR, for one image.
ROC. The Receiver Operating Characteristic (ROC) captures the relationship between the Probability of Detection (PD) and the Probability of False Alarm (PFA) (defined in the following section) by plotting PD versus PFA. The ROC reduces the comparison of different filters on one image, or of one filter on many different images, to a comparison of curves.
C-index and SNR vs. PD. The C-index assigns each ROC a numerical value, making it easier to compare performance over a whole range of PFAs, whereas an SNR vs. PD plot chooses a PFA and looks at how the SNR affects the PD. Both plots are able to compare multiple filters or images.
Barplot. The barplot is an intuitive way of looking at different trends in the results, by determining the minimum SNR needed in order to detect at a chosen PD and PFA. This tool will be used to compare the aggregate performance of the filters over the entire or a partial dataset, and the performance over variables like time of day, weather, and clutter severity.
3.4.1 Receiver Operating Characteristics (ROC)
Probability Density Function. The following subsection is referenced from Oppenheim and Verghese². The PDF is represented by the function fX(x) and captures the range of possible events and their associated probabilities of occurrence, so that the integral of fX from −∞ to ∞ is 1; summing all the probable events gives a 100 percent probability of occurrence. A specific probability value can be calculated with the formula:

P(a < X ≤ b) = ∫_a^b fX(x) dx

where P(a < X ≤ b) is the integral of fX taken over an interval, giving the probability of the group of events that result in X falling between a and b. P(X = a) can also be calculated by evaluating the integral at X = a.
In the case of the background and target, their respective functions are the conditional probability density functions f_R|H(r|H0) and f_R|H(r|H1), where r is a measurement of a random variable R, H1 represents the presence of a target, and H0 represents the absence of a target. In order to calculate the PD and PFA, a detection scheme must be defined. In our case, a simple threshold method is used, better known as Neyman-Pearson detection. The Neyman-Pearson detection scheme is based on maximizing the PD while keeping the PFA constant. This is achieved by setting a threshold: if a measurement is above the set threshold, it is declared a target (H1). Using this scheme, for a threshold η, the PD and PFA are defined by:

PD = P(r > η | H1) = ∫_η^∞ f_R|H(r|H1) dr
PFA = P(r > η | H0) = ∫_η^∞ f_R|H(r|H0) dr

In order to hold the PFA constant at c, a threshold η is chosen such that ∫_η^∞ f_R|H(r|H0) dr = c. Therefore, for each PFA there is an associated PD. If the threshold is set at a low value, there will be a high PD but also a high PFA; as the threshold is moved to higher values, the PD and PFA both decrease, at different rates.
Figure 3-9: Example of Threshold and ROC. [Conditional PDFs of background and target for Data 1, Data 2, and Data 3 with thresholds, and the corresponding ROC curves of PD versus PFA.]

This can be
captured in the ROC. The ROC is generated by moving the threshold from right to left across f_R|H(r|H0) and f_R|H(r|H1), each threshold generating a PFA and a PD, which are plotted as PFA versus PD. Both the PFA and the PD range from 0 to 1.
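A minimal MATLAB sketch of this threshold sweep, assuming the filtered target and background pixel values are in vectors target_vals and bg_vals (illustrative names; the original scripts are not reproduced here):

% Sweep the threshold from the highest filtered value down to the lowest,
% computing a (PFA, PD) pair at each threshold to trace out the ROC.
lo = min([target_vals(:); bg_vals(:)]);
hi = max([target_vals(:); bg_vals(:)]);
thresholds = linspace(hi, lo, 500);
pd  = zeros(size(thresholds));
pfa = zeros(size(thresholds));
for k = 1:numel(thresholds)
    pd(k)  = mean(target_vals > thresholds(k));   % fraction of target pixels detected
    pfa(k) = mean(bg_vals     > thresholds(k));   % fraction of background pixels declared targets
end
semilogx(pfa, pd);                                % ROC with the PFA axis on a log scale
xlabel('PFA'); ylabel('PD');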
Figure 3-9 shows an example of different ROCs generated by different target and background conditional PDFs. The conditional PDFs of the background and target can be generated by choosing an image, choosing an SNR to calculate the target intensity, filtering the embedded image, and plotting the histograms of both the background and the target of the filtered image, each normalized so that the distribution sums to 1. Data 1, Data 2, and Data 3 show the results of inserting the target at 3 different SNRs, as shown below:

Data 1: target = local mean + SNR_1 × NEI
Data 2: target = local mean + SNR_2 × NEI
Data 3: target = local mean

where SNR_1 > SNR_2 and SNR_3 = 0.
Data 3, Data 2, and Data 1 show increasingly better performance, Data 3 being
the worst and Data 1 being the best. Data 3 shows as a the red line splitting the
ROC plot in half is an example of 50-50 detection performance. This means that the
Figure 3-10: Example ROC, C-index, PD vs. SNR
This means that the performance is no better than flipping a coin every time to decide whether a target is present. It is often seen as the worst-case scenario, where the target and the background distributions lie on top of each other and it is impossible to distinguish between the two. Data 2 shows the best-case scenario, where the target and the background do not overlap, so that a threshold can be chosen between the two distributions such that the full PD is achieved immediately, at a PFA of 0. That is often not the case, however; usually the ROC lies in between Data 2 and Data 3. In our case we are dealing with PFA values of around 10^-4 to 10^-6, since anything higher would be unacceptable in many applications. The ROC display range for the plots in Chapter 4 will therefore be PFA from 10^-4 to 10^-6.

3.4.2 C-index and PD versus SNR
The ROC is good for comparing performances only when the curves are sufficiently different to be distinguished visually. In this thesis, however, the differences between the images and spatial filters compared are so subtle that it is not immediately obvious which one performs best. In such cases the C-index is helpful: it assigns a single number to each ROC so that performance can be more easily measured and compared [11]. The C-index is calculated by integrating under the ROC, so the larger the C-index, the better the performance; a C-index of 1 would be the best possible performance.
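Computed from the (PFA, PD) pairs of a threshold sweep such as the one sketched in Section 3.4.1, the C-index reduces to a trapezoidal integration of the ROC; the variable names below are assumptions carried over from that sketch.

% Minimal sketch: C-index = area under the ROC (1.0 = perfect, 0.5 = coin flip).
[PFA_s, order] = sort(PFA);          % integration needs PFA in ascending order
Cindex = trapz(PFA_s, PD(order));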
The PD versus SNR plot is useful for comparing performance when the PFA is set to a constant value. It makes it possible to see the PD that can be achieved as the SNR increases. It is essentially computed by taking a vertical slice of the ROC at multiple SNRs, or it can be obtained directly from the PDFs (at different SNRs) by choosing a PFA and calculating the PD. Figure 3-10 shows an example of C-index and PD versus SNR plots generated from ROCs. The blue curve is generated directly from the ROC in Figure 3-10, and the other curves are generated from ROCs that are not displayed. The trends in the PD versus SNR and C-index curves are similar, since PD versus SNR captures the PD trend for a chosen PFA, whereas the C-index captures the PD trend over a range of PFAs.
3.4.3 Multi-image or Multi-filter Barplots
The multi-image or multi-filter barplots compare the performance for multiple images and filters in an intuitive way that relates to the target intensity. The barplot estimates the minimum SNR that the target needs before it can be detected at a maximum PFA of a and a minimum PD of b. By comparing all the different filters in this manner, it is possible to determine which filter can detect the lowest target intensity. In addition, the difference in filter performance can be compared with the intrinsic spread of the results for any one filter, showing qualitatively whether the filter-to-filter differences are overpowered by the variation in the data. The lower the SNR, the better the performance; the x-axis is reversed to illustrate this point.
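A sketch of how one barplot entry could be estimated is shown below, using the PFA = 10^-5 and PD = 0.5 conditions applied in Chapter 4; the SNR grid and PD values are hypothetical placeholders.

% Minimal sketch: estimate the minimum target SNR at which PD >= 0.5 for a
% fixed PFA by interpolating the measured PD-versus-SNR curve.
snr_grid  = 2:2:14;                                  % SNRs of the embedded targets
pd_at_pfa = [0.02 0.11 0.31 0.62 0.85 0.96 0.99];    % hypothetical PD at PFA = 1e-5
minSNR = interp1(pd_at_pfa, snr_grid, 0.5);          % SNR where PD crosses 0.5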
Chapter 4
Results and Analysis
In this chapter, all the results of the data processing are shown and discussed. Section 4.1 will show a comparison between the four filters and determine that the Laplacian is the overall best filter. Section 4.2 will compare performance for different
qualitative and quantitative variables using the Laplacian filter from Section 4.1 to
unveil trends in the data. Section 4.3 will discuss the impact of reduced resolution on
the images and its relevance to infrared imagery. Section 4.4 will show an example of
the performance of the Laplacian on a real target.
4.1 Best Filter
In order to compare the performances of the filters, the analysis tools introduced in Section 3.4 are used. Figure 4-1 shows the comparison of the filters using multiple views. The first two columns of plots show the histograms after filtering of the background (blue) and of a target (red) inserted at SNR = 8, and the Receiver Operating Characteristics (ROC) of targets inserted at SNR = 2, 4, 6, 8. The last column of Figure 4-1 shows all four filters compared in plots of PD versus SNR and of C-index versus SNR. The four histograms were described in depth earlier, where the conclusion was made that although the Laplacian filter seemed to be the best filter, it was difficult to capture the real detection performance without taking the analysis further. The ROC, as described in Section 3.4, fully describes the Probability of Detection (PD) and Probability of False Alarm (PFA) that is achievable for each target SNR and filter.
Figure 4-1: Comparing Filters With Different Views
In order to find the best performing filter using the ROC view, the same SNR curve must be compared for each filter. In this case the cyan curve is SNR = 8 and can be compared at different levels of PFA. At PFA = 10^-5, the corresponding PDs for the Laplacian, Median, Robinson and Mexican Hat are 0.109, 0.011, 0.078 and 0.13, and the Mexican Hat is the winner. However, at PFA = 10^-4 the corresponding PDs for the Laplacian, Median, Robinson and Mexican Hat are 0.625, 0.32, 0.411 and 0.307, and the Laplacian is the clear winner, almost doubling the performance of the Mexican Hat. It is apparent that the relative performances of the filters change depending on the value of PFA: the Mexican Hat performs better at very low PFA, but the Laplacian works better at higher PFA. Looking at the different SNR curves plotted on the ROC, not only are the performance rankings of the filters different for different PFA, but the rankings also change depending on the SNR of the target. The plots in the third column help compare the SNR trends.
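The PD values quoted above are read off each filter's SNR = 8 ROC at the chosen PFA; a minimal sketch of that lookup is shown below, with PFA_lap and PD_lap standing in (as assumed names) for one filter's ROC samples.

% Minimal sketch: interpolate a ROC to obtain PD at specific PFA values
% (points outside the sampled PFA range would come back as NaN).
pfa_query    = [1e-5 1e-4];
[PFA_s, idx] = sort(PFA_lap);            % interp1 needs sorted, unique samples
PD_s         = PD_lap(idx);
[PFA_u, iu]  = unique(PFA_s);
pd_at_query  = interp1(PFA_u, PD_s(iu), pfa_query);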
The C-index allows comparison over multiple PFA values and a range of SNR by integrating the ROC for each SNR, whereas the PD versus SNR view shown in Figure 4-1 was constructed by choosing PFA = 10^-5 to compare PD over a range of SNR. As discussed in Section 3.4, PD versus SNR is essentially a vertical slice of the ROC, whereas the C-index is the aggregate. Both views tell similar stories. In both, the Mexican Hat performs better than the Laplacian until SNR = 6.64, while the Median and Robinson follow the performance of the Laplacian but at a lower level throughout the full range of SNR. The conclusion can be made that at target SNR = 8 and for PFA from 0 to 1.17 x 10^-5, and for PFA = 10^-5 and target SNR from 0 to 6.64, the Mexican Hat is the best filter to use on similar images, whereas the Laplacian filter is the best one to use over the remaining ranges. The calculations shown in Figure 4-1 were done for all images collected, and all of the images showed similar trends.
More examples are included in Appendix A. To compare the performance of all the images and all the filters in one plot instead of 350 separate figures, the barplots were used. Figure 4-2 shows the target SNR necessary to achieve a PFA = 10^-5 and a PD = 0.5 for the four filters over all 350 images in the dataset.
Figure 4-2: Overall Performance of All Filters on All Images (after-sunset SNR and adjusted SNR panels; dataset: Nov, Feb, Mar; 50 days; 350 images)
A PFA of 10^-5 is a typical value, and the minimum PD was chosen to be 0.5 because that is the minimum value at which a detection system should operate. The performances of the four filters are shown in four different plots (referred to as plots 1 through 4, counting from the top of the figure) that highlight different statistics of the results. Plots 1 and 2 show the performance using the after-sunset SNR (as-SNR), which assumes NEI = 2 digital numbers (DN), whereas plots 3 and 4 show the SNR adjusted using each image's own temporal standard deviation instead of the after-sunset NEI (the detailed definitions of the two SNRs were discussed in Section 3.2.2). In addition, plots 1 and 3 show the target SNR for each image plotted individually, color-coded by the time of day the image was taken, and as a range, with the maximum and minimum performance marking the edges of the blue bar. For all four plots, the average performance is represented by a red circle.
The Laplacian can be established as the best performing filter for the PFA = 10^-5 and PD = 0.5 case over the entire dataset. Comparing the average, minimum and maximum SNR for both SNR views, the Laplacian has the lowest average, minimum and maximum SNR, with respective values of 8.22, 3.74 and 11.65 for as-SNR and 4.29, 1.42 and 5.52 for adjusted SNR; the Robinson filter comes in a close second with values of 8.66, 4.03 and 12.3 for as-SNR and 4.52, 1.51 and 5.69 for adjusted SNR. In this case, whether the SNR is adjusted or not does not change the relative performances of the filters, because the SNR is scaled in the same way for each of them. The result of adjusting the as-SNR is a shift to lower SNR and a decrease in the range of performances. The range decreases because "noisier" images are reduced by a larger temporal standard deviation whereas more uniform images are reduced by a smaller one, while the average performance decreases because of the scaling. In Section 4.2, adjusting the SNR will have a more significant effect.
Figure 4-3: Month of Year Comparison For Each Month
4.2 Trends in Performance
It is of interest to study the trends in the dataset in order to gain a deeper understanding of the dependence of the filter performance on a range of variables. Since
the Laplacian filter had the best performance over the entire dataset, the following
section analyzes the Laplacian filter's performance trends over the following variables:
1. Month of Year
2. Time of Day
3. Weather
4. Temperature
5. Brightness (Mean of Scene)
6. Severity of Clutter (Spatial Standard Deviation)
7. Temporal Standard Deviation
4.2.1 Month of Year
Figure 4-3 shows the performance of the Laplacian filter on images taken during three different months. February has performances clustered around SNR = 5 and around SNR = 9. This is because in the month of February sunrise is at approximately 6:46 AM and sunset at 5:13 PM, so the images taken at 6 AM, 7 AM, 5 PM and 6 PM are dark night images; their lighting conditions are relatively uniform, the targets in them are more easily detectable, and they cluster around SNR = 5, while the day images cluster around SNR = 9. The other months show similar behavior in both the as-SNR and the adjusted SNR, and no clear monthly trend appears to exist. In general this seems reasonable: performance is largely affected by the time of day the images were taken, with the times of sunrise and sunset differing somewhat between November, February and March.

4.2.2 Time of Day

Figure 4-4 shows the performance of the Laplacian filter for the different times of day in each month. Again, the as-SNR performance is largely driven by the amount of available light at the different times of day, with similar, roughly parabolic trends over the course of the day for all three months.
Figure 4-4: Time of Day Comparison For Each Month
The blue shaded portions of the as-SNR plots indicate the images that were either taken
before the sunrise or after the sunset. The range of times for sunrise and sunset for
the three months is shown in Table 2.1. The images in the shaded portions all have low SNR and a smaller range of values. This is because when there is no sunlight the images are relatively uniform and targets are more easily detectable. Images taken close to sunrise or sunset that still have some sunlight have, on average, better performance than ones taken during the day, but a wider range of values. Images taken right before sunset and right after sunrise include both day images and dark images, so the performances span a large range of values.
The images taken during the day have similar performances, with SNR averages around 9.3. When the SNR is adjusted, the performances for all the months are centered around SNR = 5. The differences in the ranges (blue bars) reflect the different amounts of spatial clutter in the images, along with a number of outliers in the case of 6 PM for February and March. There is no apparent trend in the variation of the ranges throughout the day.
4.2.3 Weather
Figure 4-5 shows the performance of the Laplacian filter for different types of weather. The images were tagged using weather information for Lexington, Massachusetts from Weather Underground (www.wunderground.com), an internet weather service started by researchers from the University of Michigan. Results show that the clear and overcast weather conditions have the widest ranges, of 4.08 and 3.83, and that light rain has the best average performance in the as-SNR view with SNR = 4.0, though not by much. This result is not intuitive, since clear days, with the least amount of cloud clutter, should yield the best performance. Instead, clear-weather performance is average and has a wider range of performances compared to the other conditions. However, the large ranges for clear and overcast can be explained by the large number of images that were taken during clear and overcast weather. The weather conditions with smaller ranges, such as fog, mist and light drizzle, have only 5, 2 and 4 images respectively. Therefore, the differences in the ranges are not a trend due to weather conditions, but rather a matter of statistics.
Figure 4-5: Weather Comparisons
Furthermore, most of the lower-SNR images were taken in the early morning or at night, as indicated by the large number of blue and green circles at low SNR. The adjusted SNR view shows that the performances for all types of weather are centered around 8.3 with a standard deviation of 0.58, and that the widths of the ranges are again dependent on the number of images with the same weather tag, further reinforcing the conclusion that the high-pass spatial filter effectively eliminates the degradation due to spatial clutter.
4.2.4 Temperature
Figure 4-6 shows the effect of temperature on performance. The plots show that performance does not depend on temperature, because all performances are within 0.44 of the mean of 6.92.
Figure 4-6: Temperature Comparisons
Figure 4-7: Brightness (Mean of Scene) Comparisons
Figure 4-8: Severity of Clutter (Spatial Standard Deviation) Comparisons
4.2.5 Brightness (Mean of Scene)
The level of light captured in the scene affects performance in a linear fashion. If performance were modeled with a line, it would be SNR = 0.0124 x GlobalMean. As a statistical property of the camera, the larger the number of electrons received, the larger the temporal noise, and hence the worse the performance in the as-SNR view. Once the SNR is adjusted, the performances are roughly equivalent. Again, the size of the ranges is dependent on the number of images in each category.
4.2.6 Severity of Clutter (Spatial Standard Deviation)
The spatial standard deviation (σ_spatial), calculated by taking the standard deviation of the entire image, is used as the measure of the severity of clutter. Figure 4-8 shows the performance plotted against the severity of clutter over a σ_spatial range of 1 to 90. The three images displayed next to the plots are example images corresponding to their spatial standard deviations. The plots show that as σ_spatial increases, the target SNR increases, but not in a linear manner. The rate of increase decreases as σ_spatial increases, and the SNR eventually reaches an average limit of 8.63. The limit is due to the sensitivity limit (resolution) of the visible imager. Section 4.3 will show the result of having a lower resolution.
Figure 4-9: Temporal Standard Deviation Comparisons

4.2.7 Temporal Standard Deviation
The temporal standard deviation (σ_temporal) plot shows a strong dependence of performance on temporal standard deviation. The as-SNR view shows that as σ_temporal increases, the performance worsens. However, after adjusting the SNR with each image's temporal standard deviation, the performance instead improves.
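For reference, the sketch below shows how the two noise measures used in this section can be computed from one 50-frame collection, and how the as-SNR is rescaled into the adjusted SNR. The variable names and the exact form of the rescaling are assumptions based on the descriptions in Sections 3.2.2 and 4.2.6.

% Minimal sketch: temporal and spatial standard deviations for one collection.
stack_d        = double(stack);                % stack: 1024x1024x50 cube of frames
sigma_t_map    = std(stack_d, 0, 3);           % per-pixel temporal std over 50 frames
sigma_temporal = mean(sigma_t_map(:));         % single temporal-noise figure
frame          = stack_d(:, :, 1);
sigma_spatial  = std(frame(:));                % clutter severity (Section 4.2.6)
NEI_as         = 2;                            % after-sunset NEI in DN
snr_adj = snr_as * NEI_as / sigma_temporal;    % assumed form of the adjustment (snr_as = as-SNR)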
4.3 Resolution Effects
The resolution of the images affects detection performance because clutter has an intrinsic spatial scale: the higher the resolution, the smoother the transition across a hard edge in the image. There is a trade-off between target size and resolution, which enters into the design of the sensor technology and field of view. For a given target intensity, with the target occupying one pixel in the image, the target significance (SNR) goes down as pixels are added together. This is because the temporal noise increases (being the rms sum of the Poisson noise in the signal and the readout noise). However, with decreased spatial resolution, the effectiveness of the spatial filter (used on the summed pixels) also decreases. The study here quantifies this loss in effectiveness by plotting the mean SNR versus the spatial standard deviation (in units of σ_temporal) of the original image (see Figure 4-11).
Other effects of decreased spatial resolution are important. As resolution is reduced, the fraction of a summed pixel occupied by a target is smaller, hence more background signal is added into the target pixel. Therefore, in designing a sensor for detection, it is ideal to match the pixel size to the expected target size, unless a larger field of view is needed. A potential sensor design process will involve the competing needs for better sensitivity (percentage of target in a pixel) versus increased field of view. The following plots show the effect of resolution on performance.
Figure 4-10 and Figure 4-11 use images of three different resolutions: 1024x1024, 512x512 and 256x256. The 512x512 and 256x256 images were generated by binning pixels of the original image together. For the 512x512 images, a 2x2 grid of 4 pixels was binned to make one pixel, and for the 256x256 images, a 4x4 grid of 16 pixels was binned to make one pixel. The targets for the lower resolutions were inserted using the same equation, Equation 3.1, but with different NEIs: the NEIs for the images are 2, 4 and 8 for the full resolution, 2x2 binning and 4x4 binning respectively. This is because reducing the resolution by a factor of 4 (2x2) or 16 (4x4) means that the NEI (temporal standard deviation) increases by a factor of 2 or 4. A sketch of the binning is shown below.
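The binning itself is straightforward; the sketch below sums bin x bin blocks of pixels, with img (an assumed name) standing for one full-resolution frame, and scales the NEI by the factors of 2 and 4 stated above.

% Minimal sketch: reduce resolution by summing bin x bin blocks of pixels.
bin     = 2;                                  % 2 -> 512x512, 4 -> 256x256
img_d   = double(img);                        % one 1024x1024 frame
sums    = conv2(img_d, ones(bin), 'valid');   % sums over every bin x bin window
binned  = sums(1:bin:end, 1:bin:end);         % keep only non-overlapping blocks
NEI_bin = 2 * bin;                            % NEI = 2, 4, 8 DN for full, 2x2, 4x4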
4.3.1 Resolution Comparisons
Figure 4-10 shows the performance of the four filters on images of varying resolution in November. In the first plot, the first four lines show the performance of the four filters on the 256x256 images, the next four lines show the performances for the 512x512 images, and the last four the performances at full resolution. Results show that the after-sunset images for all three resolutions have approximately the same performance, around SNR = 4.1. The day and morning images are affected by the decrease in resolution. Results also show that the filter with the best performance differs for the different resolutions: the Laplacian performs best for the images at full resolution, and the Robinson filter for the lower resolutions. In the adjusted SNR plots, the opposite effect is shown: as resolution is reduced, performance improves. This trend is a result of how the target intensity is calculated for the different resolutions. Using Equation 3.1 for all three resolutions means that the local mean is increased by a factor of 4 or 16, which means that the target signal is 4 or 16 times higher (effectively because it covers a pixel that is 4 or 16 times bigger). This means that the relative signal-to-noise is much better.
Figure 4-10: Comparing Filters With Different Resolutions
Hence, the smallest target that can be detected is smaller by a factor of √4 = 2 or √16 = 4, which is consistent with what is displayed on the bar chart. In effect, this barplot compares targets that are different sizes (i.e. they cover larger pixels). Thus, the relative performance for any single amount of spatial clutter noise improves with decreasing resolution. Comparing over a range of spatial noise, however, the trend with decreasing resolution shows an increased effect of the spatial noise (Figure 4-11).
Figure 4-11 shows the effect of the spatial standard deviation on images of different resolution. Since the pixels were binned, the lower resolution images have a larger mean, causing a larger standard deviation in more cluttered images. Therefore, the spatial standard deviation scales for the three resolutions are different as well: the full resolution images are plotted with σ_spatial increments of 10, the half resolution with increments of 40, and the quarter resolution with increments of 160. This difference in the σ_spatial range is explained by the binning process used to reduce the resolution. The total σ_spatial range for each image increases as the number of pixels that are binned increases: the range for the 2x2 binning increases by a factor of 4, and for the 4x4 binning by a factor of 16, relative to the range of the full resolution images.
Figure 4-11: Comparing Filters With Different Resolutions - Comparing σ_spatial
Results show that as resolution decreases, the rate at which performance degrades with increasing σ_spatial also increases. The performance for all three resolutions starts with the after-sunset images at an average value of SNR = 4, but beyond that initial point the trend is dominated by the middle-of-the-day images, with the full resolution increasing by approximately 15 σ_spatial/SNR, the half resolution by 35 σ_spatial/SNR, and the quarter resolution by 80 σ_spatial/SNR. The trend seen in the as-SNR view is also present in the adjusted SNR view, but at a smaller scale; the slope increases as resolution decreases. In the full resolution view, adjusting for the temporal noise almost completely removes the trend seen in the as-SNR view. This means that the spatial filter does such a good job of eliminating clutter effects that any degradation in performance is dominated by temporal noise, a factor determined by the sensor. In the lower resolution adjusted SNR plots this is not the case, as indicated by the presence of a slope: as spatial noise increases, the spatial filter can no longer eliminate all clutter effects, and the degradation becomes a combination of temporal and spatial noise.
Figure 4-12: Resolution in Visible versus IR image - Raw and Filtered
4.3.2 Visible versus Infrared Images
Figure 4-12 shows the histograms of the images as the resolution is reduced, and compares these results to an IR image. The visible image used was chosen from the rooftop collection, and its resolution was reduced by binning. The infrared image has a field of view of 20 degrees by 2.5 degrees, a resolution of 0.3 mrad by 0.3 mrad, and units of µW/cm². Looking at the filtered images (displayed with 2 percent of the outliers eliminated), as the resolution decreases there is increased structure in the filtered visible images. This result supports the conclusion that the spatial filters do a good job of suppressing clutter in high resolution images. As the resolution of the images decreases, the spatial filter can no longer suppress the clutter and structure begins to appear in the filtered images. If the resolution were reduced further, to 128x128 or 64x64, the filtered image would have more of the structure seen in the filtered infrared image.
Figure 4-13: Laplacian Filter Performance on Real Target
In order to compare the histograms, it must be noted that both the y and x axes are different. Since each image has a different number of pixels, the y-axis range differs, and since the pixels were binned to generate the lower resolution images, the mean pixel value (x-axis) range differs as well. However, as resolution decreases, the standard deviation of the distribution increases, creating more of a tail, of the sort commonly seen in infrared images, that could potentially interfere with target detection.
4.4 Performance on Real Target
The performance of the Laplacian filter was also tested on one visible image containing a target with a spatial extent of more than one pixel, shown in Figure 4-13. Before filtering, the SNR was 11.7 and the target pixel had a value of 1445, with 1408 background pixels, or 0.58 percent, at or above the value of 1445. After filtering, the SNR became 8.7, but the target was moved far from the background distribution, with a value of 81 and no pixels with higher values.
The reason for the decreased SNR after filtering is that the spatial filter is not matched to a target that spans more than one pixel. It is also possible that some of the residual target pixels interfered with the SNR calculation. However, as the histogram view shows, the spatial filter is still able to suppress the background and move the target away from the background distribution.
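A sketch of this check is given below; img is the 512x512 frame with the real target at pixel (r, c), and the 3x3 kernel form is an assumption, since the thesis defines its Laplacian in Chapter 3.

% Minimal sketch: count competing background pixels at or above the target
% value, before and after Laplacian filtering.
lap        = [0 -1 0; -1 4 -1; 0 -1 0];       % assumed kernel form
raw        = double(img);
filt       = conv2(raw, lap, 'same');
above_raw  = nnz(raw  >= raw(r, c))  - 1;     % competing pixels in the raw image
above_filt = nnz(filt >= filt(r, c)) - 1;     % competing pixels after filtering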
Chapter 5
Conclusion
5.1 Summary
In this thesis, a large dataset of visible imagery in various cloud clutter conditions was
taken using a rooftop camera over a span of three months. Although temporal noise
(standard deviation σ_temporal) ultimately limits the detection of a point target, spatial noise in the scene can degrade sensitivity severely. As measured by the signal-to-noise ratio (SNR), applying a high-pass spatial filter can overcome this degradation.
Therefore a comparison analysis was done on the effectiveness of four different spatial filters on the collected dataset using embedded targets. Results show that the
Laplacian filter was found to be the best performing filter over the entire dataset with
the Median, Robinson and Mexican Hat performing almost as well, only averaging 5
percent to 9 percent behind (or 0.4 to 0.7 σ_temporal). Using the after-sunset SNR (σ_temporal set to 2) as a measure of the limiting camera sensitivity, trends show that the Laplacian filter performance on point targets is
dependent on the time of day, which is mainly affected by the varying amount of
available light contributing to the background clutter during different times of the
day. Performance was the best before sunrise and after sunset when the clutter structures are reduced due to poor lighting. Laplacian performance is also dependent on
the brightness of the scene (mean of scene), the severity of clutter (spatial standard deviation), and the temporal standard deviation. As expected, no correlation of sensitivity with day of week appears. For the assumed conditions of PFA = 10^-5 and PD = 0.5, the limiting target intensity (SNR) increases as each one of these variables increases.
However, after adjusting the as-SNR with each image's σ_temporal, results show that
the Laplacian filter performance is no longer dependent on the variables. This result
shows that the Laplacian filter is able to suppress and eliminate the clutter structures
in the raw image. Any degradation of the performance is due solely to the temporal
noise of the images, caused by sensor properties and the intrinsic signal fluctuations.
In order to study whether coarser resolution increases spatial clutter as compared
to full resolution, tests at reduced resolution were performed on the same images.
The comparison of images with varying degrees of resolution reduction shows that
resolution has a strong effect on detection performance using spatial filters. First,
although the Laplacian filter performed best on the full resolution images, the Robinson and Median filters performed just as well on lower resolution images. Thus,
the preference for the Laplacian is not strong, and all four spatial filters perform
similarly. Second, the spatial filters became leaky at the quarter (4x4 binning) resolution, as indicated qualitatively by the appearance of structure in the filtered images and quantitatively by a dependence of performance on spatial clutter (Figure 4-11). With sufficient resolution, spatial noise (clutter) is eliminated and detection depends on the usual temporal noise, SNR ∝ 1/σ_temporal; but when resolution is reduced,
performance depends on the combination of both temporal noise and spatial noise.
Third, as resolution is reduced in visible images, they begin to resemble typical filtered infrared images and start to form the clutter tails commonly seen in infrared
image histograms.
The conclusions of this thesis give a comparison of spatial filters and a deeper understanding of the dependence of filter performance on a range of variables, which can later be used to improve a detection scheme for point-target detection in search-and-track systems. There are many possible extensions to this thesis, and they are discussed
in the next section.
5.2 Future Work
There are many ways the thesis can be extended:
1. Increase bank of filters to include temporal filters, filters with guard bands,
morphology filters, etc.
2. Insert targets that straddle multiple pixels or have negative contrast.
3. Study the effects of higher or lower resolution images (not artificially generated by binning) on filter performance.
4. Perform more analysis of filter performance on real targets.
5. Collect images from an airborne visible imager.
6. Collect images using an infrared sensor in parallel to visible sensor.
These topics are all of great interest in the target detection community and are
part of on-going research.
Appendix A
Additional Figures
(Appendix figures: histograms of the collected images, one panel per collection date and time, spanning November 2006, February 2007, and March 2007; each panel plots the number of pixels against pixel value in counts.)
Bibliography
1. D. Z. Robinson, "Methods of background description and their utility," Proc. of the IRE, vol. 47, no. 9, pp. 1554-1561, September 1959.
2. J. T. Barnett, "Statistical analysis of median subtraction filtering with application to point-target detection in infrared backgrounds," Proc. Infrared Systems and Components III, R. L. Caswell, ed., SPIE, 1050:10-18, 1989.
3. L. Sevigny, "Characterization of spatial filters for point-target detection in IR imagery," DREV report, Defence Research Establishment Valcartier, November 1986.
4. D. Reiss, "Spatial signal processing for infrared detection," SPIE, 2235:38-51, 1994.
5. S. Hary, J. McKeeman, D. V. Cleave, "Evaluation of infrared missile warning algorithm suites," Proc. IEEE 1993 National Aerospace and Electronics Conference, vol. 2, pp. 1060-1066, May 1993.
6. J. Barnett, B. D. Ballard, and C. Lee, "Nonlinear morphological processors for point-target detection versus an adaptive linear spatial filter: a performance comparison," SPIE, 1954:1123, 1993.
7. V. T. Tom, T. Peli, M. Leung, J. E. Bondaryk, "Morphology-based algorithm for point target detection in infrared backgrounds," Proc. SPIE Signal and Data Processing of Small Targets, SPIE, 1954:2-11, 1993.
8. D. Klick, P. Blumenau, J. Theriault, "Detection of targets in infrared clutter," Proc. of SPIE, SPIE, 4370:120-133, 2001.
9. J. C. Russ, "The Image Processing Handbook," CRC Press, 2002.
10. A. Oppenheim, G. Verghese, "Introduction to Communication Control and Signal Processing," Massachusetts Institute of Technology, 2006.
11. J. A. Hanley, B. J. McNeil, "The meaning and use of the area under a receiver operating characteristic (ROC) curve," Radiology, 143(1):29-36, 1982.