Abstract
This report presents a tutorial description of MIT Lincoln Laboratory's Infrared Airborne Radar (IRAR) sensor suite. IRAR, which was flown aboard a Gulfstream G-1 aircraft, was used to collect multispectral ground-images using the following sensors: active (laser radar) subsystems operating at long-wave infrared and near-visible infrared wavelengths; passive optical subsystems operating in the long-wave infrared band; and a millimeter-wave radar. The principles of operation for these sensor subsystems as well as their hardware implementations are described. Theoretical statistical models, with relevant parameter values, are presented for each of the IRAR sensors. The overall objective is to provide sufficient information to guide the development of image-processing algorithms applicable to IRAR data.
Acknowledgements
This report could never have been prepared without the assistance of several members of the Center for Imaging Science. Technical details of the IRAR hardware implementation were clarified and confirmed by Ms. Vivian Titus and Dr. Robert Hull;
Dr. Hull also provided a detailed review of the draft manuscript. Dr. Thomas Green,
Jr. provided invaluable suggestions for the structure and scope of the report and, with
Prof. Jeffrey Shapiro, provided helpful discussions of the radar data models. I am also thankful for Prof. Shapiro's suggestions for improving many of the explanations in the text.
Contents

1 General System Description
2 Principles of Sensor Operation
  2.1 Laser Radar Principles of Operation
    2.1.1 Signal Transmission
    2.1.2 Propagation Effects
    2.1.3 Target Interaction
    2.1.4 Signal Reception
  2.2 Millimeter-Wave Radar Principles
  2.3 Passive Infrared Imaging Principles
3 Hardware Descriptions and Parameter Values
  3.1 Forward-Looking Optical Sensor Hardware
  3.2 Forward-Looking Millimeter-Wave Radar Hardware
  3.3 Down-Looking Sensor Suite Hardware
4 Data Statistics Models
  4.1 Passive Radiometer Data Models
  4.2 Forward-Looking Millimeter-Wave Radar Data Models
  4.3 Laser Radar Data Models
    4.3.1 Coherent-Detection Laser-Radar Data Models
    4.3.2 Direct-Detection Laser-Radar Data Models
A Radar Cross Section
B Resolution, Precision, and Accuracy
List of Figures

1 IRAR sensor fields-of-regard
2 Four stages in the formation of a radar echo
3 Coherent optical heterodyne detection
4 Coherent laser radar receiver block diagram
5 Direct-detection laser radar receiver block diagram
6 MMW radar transmit waveform
7 MMW radar receiver mixer
8 Down-looking sensor suite block diagram
9 Photodetection noise sources, physical model
10 Photodetection noise sources, NEP model
11 Phase-estimation error pdf for direct-detection laser radars
List of Tables

1 Forward-Looking Optical Sensor Parameters: LP, LP+D Sets
2 Forward-Looking Optical Sensor Parameters: SP Set
3 Forward-Looking Millimeter-Wave Radar Parameters
4 Down-Looking Sensor Parameters: DL1 Set
5 Down-Looking Sensor Parameters: DL2 Near-Visible Set
6 Down-Looking Sensor Parameters: DL2 Long-Wave Set
1 General System Description

This report describes the remote sensors used to collect image data as part of the Infrared Airborne Radar (IRAR) Program at MIT Lincoln Laboratory. The report's purpose is to provide sufficient information to guide the development of image-processing algorithms applicable to IRAR data. Toward that end the report divides roughly into three parts: a description of the basic principles of operation of the sensors; a description of the hardware implementation details of the sensors; and a description of statistical models for the data collected by the sensors. The hardware description is based heavily on Reference [1], and the data models are based largely on the work of Shapiro, Hannon, and Green [2, 3, 4, 5].
All of the data sets were collected from experimental ground-imaging sensors aboard a Gulfstream G-1 aircraft. The sensors can be divided into two independent suites according to the area scanned as the aircraft moves along its flight trajectory on a data collection run: the first is a forward-looking suite, with the sensor field-of-regard pointing generally ahead of and somewhat below the direction of flight; the second is a down-looking suite, with sensor field-of-regard pointing generally at the ground directly below the aircraft [see Figure 1].
Figure 1: IRAR sensor fields-of-regard (forward-looking and down-looking observation regions relative to the Gulfstream G-1)
The forward-looking suite comprises an optical system housed in a unit suspended below the aircraft fuselage and a millimeter-wave radar unit housed in the nosecone of the aircraft. The forward-looking suite can be operated in several modes which combine left-to-right scanning (azimuth variation) with vertical scanning (elevation variation) to generate images. The millimeter-wave radar unit is aimed so that its field of view coincides with that of the optical system, and it is slaved to the optical system
so that they scan together. The optical system comprises active (laser radar) and passive imaging subsystems, with the active long-wave infrared laser radar system, operating at 10.6 μm, capable of collecting range, intensity, and Doppler (relative velocity) information. The passive system operates in the 8-to-12-μm wavelength range. The active and passive subsystems share optics and are thus pixel-registered; that is, for each pixel of active data there is a corresponding pixel of passive data simultaneously recorded.
The down-looking suite comprises active and passive imaging systems. The active system has two channels in the later data sets, one corresponding to near-visible infrared (0.85 μm), and one corresponding to long-wave infrared (10.6 μm). The passive system comprises a single channel in the long-wave infrared (LWIR, 8-12 μm) band.
The down-looking suite is scanned in azimuth for cross-track variation, and the flight motion of the aircraft provides the along-track variation required for imaging. All four optical sensors in the down-looking suite are pixel-registered.
2 Principles of Sensor Operation

In this section we provide a tutorial description of the principles of operation for each of the three types of imaging systems included in the two sensor suites onboard the aircraft: laser radar, millimeter-wave radar, and passive infrared. In the following section more detail is provided about the implementations of the three imaging systems used in the IRAR program, and specific parameter values for the sensors employed are given. In subsequent sections detailed models of the data provided by each imaging system are described.
2.1 Laser Radar Principles of Operation

In basic principle all radars are similar [6, 7, 8]. In the idealized case, a transmitter unit emits a directed beam of electromagnetic radiation in a controlled direction. A collection of objects in the path of the beam interacts with the radiation in a manner characteristic of the objects, generating a reflected signal. The reflected signal propagates back to the receiver, which may be co-located with the transmitter, and is detected and processed in the hope of obtaining information about the objects illuminated. The direction of the transmitter beam may then be changed and the process repeated to build up a two-dimensional image of the composite field-of-view, with each measurement corresponding to a pixel of the resulting image. The objects may comprise one or more target objects-objects in which the radar operator is interested-and additional objects, referred to as clutter, which only serve to obscure the picture. Anything other than targets and clutter which results in a received signal is termed background. Note that what constitutes a target and what constitutes background or clutter is somewhat arbitrary and application-dependent-hence the difficulties associated with target recognition and detection. In general a radar's performance is quantified in terms of resolution, accuracy, and precision; these parameters are briefly explained and their distinctions described in Appendix B.
Figure 2: Four stages in the formation of a radar echo (transmission: modulated master oscillator, beam-forming and beam-steering; propagation: diffraction and atmospheric effects such as extinction, turbulence, refraction, and multipath; target interaction: reflection and Doppler shift; reception: coherent or direct detection, scanning and beam-shaping, post-detection processing, and data recording)
We can break down the process by which the signal returns are formed into four phases, as depicted in Figure 2. Signal generation occurs at the transmitter and dictates the temporal, spatial, and directional characteristics of the original signal; signal propagation includes the effects of propagation of the signal from the transmitter to the target and from the target to the receiver; target interaction includes the sometimes-complicated effects resulting in generation of a reflected signal, some part of which propagates to the receiver; and signal reception includes the detection and processing of the reflected signal (after propagation to the receiver) up to the point at which received data is recorded. Both the forward-looking laser radar and the
MMW radar are monostatic-that is, the transmitter and receiver share an antenna
(or optics), so that the transmitter and receiver are co-located from the perspective of the target. The down-looking laser radars are bistatic, with transmit apertures distinct from (though in close proximity to) the receiver apertures. In the rest of this section we consider laser radar in particular, though much of the discussion applies to conventional radar as well.
2.1.1 Signal Transmission
In the signal transmission phase we include the events controlling the time and frequency dependencies of the original signal, its initial spatial profile, and its directional characteristics, including any patterns of directional variation with time. The transmitter thus includes a master oscillator acting as a radiation source, along with any
modulation, beam-forming, and beam-pointing hardware necessary.
The beam-forming optics of the transmitter determines the spot size of the resulting signal beam at the target. For maximum resolution, it is generally desirable to use diffraction-limited optics [9]-that is, optics for which the focusing error resulting from imperfections in the lensing system is smaller than the finite focal spot size required by diffraction from the finite aperture size, given by 2.44λf/d, where λ is the transmitter wavelength, f is the focal length of the lens, and d is the lens diameter. The resulting resolution is specified by the beam-spread angle of the transmitter beam in the far field, as determined by Fraunhofer diffraction. Assuming diffraction-limited optics, the beam-spread full-angle is given by 2.44λ/d for a uniformly illuminated circular transmitter aperture of diameter d and transmitter wavelength λ. For monostatic radars, the beam-forming optics are equally important in determining the spatial qualities of the received signal.
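As a concrete illustration of these diffraction relations, the short sketch below evaluates the 2.44λ/d full-angle beam spread and the corresponding spot size. The 10.6-μm wavelength and 13-cm aperture match the forward-looking sensor parameters given in Section 3; the 5-km range is simply an assumed value.

```python
# Minimal sketch of the diffraction relations quoted above; the 5-km range is
# illustrative, while the wavelength and aperture follow the forward-looking sensor.
wavelength = 10.6e-6   # transmitter wavelength, m
d = 0.13               # circular aperture diameter, m
L = 5.0e3              # nominal range to target, m

beam_spread_full_angle = 2.44 * wavelength / d        # far-field full angle, rad
spot_diameter_at_range = beam_spread_full_angle * L   # approximate spot size, m

# ~199 urad, comparable to the 0.2-mrad per-pixel field of view quoted in Section 3
print(f"full-angle beam spread: {beam_spread_full_angle * 1e6:.1f} urad")
print(f"spot diameter at {L / 1e3:.0f} km: {spot_diameter_at_range:.2f} m")
```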
Scanning optics at the radar transceiver control the directional beam pattern traced out as a function of time. In particular, the forward-looking laser radar achieves two-dimensional imaging of return data by raster-scanning a region of interest in azimuth (parallel to the horizon) and elevation (vertically), while the down-looking laser radar images by scanning in azimuth and allowing aircraft motion to provide the second dimension of variation. When range data are collected and assembled into a 2-dimensional image, the result is 3-D imaging, with the transmitter scanning sweeps providing the two cross-range dimensions and the range information providing the third dimension. The resulting image can be displayed as a surface plot. A given laser transmitter may be capable of scanning in several different imaging modes, including a wide field-of-view mode for general reconnaissance and a narrow field-of-view mode for obtaining detailed information about a target. Note that, in reality, the scanning process is continuous, not discrete as indicated in the ideal scenario of the introduction. For a bistatic radar, this means that the radar receiver's instantaneous field-of-view is not precisely aligned with the spot illuminated by the corresponding signal beam pulse at the time of arrival of the reflected signal due to scanning motion during time-of-flight. This effect is called "lag angle." For a bistatic radar, if the transmitter and receiver can be independently pointed, it is possible to accommodate the lag angle with appropriate synchronization of the receiver and transmitter scanning systems. In many practical cases (including the IRAR down-looking radars), lag angle is only partially compensated for by scanning synchronization, the remainder being accommodated by an enlarged receiver instantaneous field-of-view. Though it must be considered carefully in design of the scanning properties of the radar, we assume any residual effects of the lag angle to be completely negligible in what follows.
The laser-radar master oscillator comprises a laser source and any additional hardware required for temporal modulation of the transmitted signal. The modulation format of the transmit signal is generally chosen to optimize in some sense the amount of information carried by the return regarding the target characteristic being measured; thus different transmit waveforms are used to measure different target characteristics.
For example, when detailed relative range information is sought for targets at a well-defined range, an amplitude-modulated continuous-wave (AMCW) transmitter signal can be used. By measuring the phase of the amplitude modulation of the returned signal relative to that of the transmitter, the time-of-flight of the return signal can be determined modulo the period T of the modulation of the transmitted waveform, and the time-of-flight measurement can be converted to a relative range estimate via L = cΔt/2, where c is the propagation speed of the laser signal and Δt is the measured time-of-flight modulo the waveform period. The ambiguity interval cT/2 might be relatively small (say a few meters) for the relative range measurement, but in the absence of noise and for a point target, the precision is limited only by the ability of the receiver to measure phase differences at the frequency of the amplitude modulation.
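The following sketch illustrates the AMCW conversion from measured modulation phase to relative range, together with the ambiguity interval cT/2. The 10-MHz modulation frequency and the measured phase are assumed values chosen only for illustration.

```python
import math

# Sketch of AMCW relative ranging: the measured modulation phase gives the
# time-of-flight modulo the modulation period T, hence range modulo c*T/2.
# The 10-MHz modulation frequency here is illustrative, not an IRAR value.
c = 3.0e8                  # propagation speed, m/s
f_mod = 10.0e6             # amplitude-modulation frequency, Hz
T = 1.0 / f_mod            # modulation period, s

phase = 1.3                          # measured modulation phase difference, rad (example)
dt = (phase / (2 * math.pi)) * T     # time-of-flight modulo T
relative_range = c * dt / 2          # range modulo the ambiguity interval
ambiguity_interval = c * T / 2       # c*T/2

print(f"ambiguity interval: {ambiguity_interval:.1f} m")   # 15.0 m for 10 MHz
print(f"relative range:     {relative_range:.2f} m")
```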
Alternatively, a pulsed waveform can be used for determination of range, with one pulse transmitted per pixel. The rate at which an image can be constructed depends on the pulse repetition frequency (PRF), with higher PRF's allowing faster image formation. However, if the PRF is sufficiently low that the time T between pulses is longer than 2AL/c, where AL is the a priori range uncertainty, a pulsed transmitter waveform allows absolute ranging-that is, the range ambiguity, given by cT/2, is larger than the a priori range uncertainty. The minimum distinguishable range difference for a rectangular pulse transmitter waveform is determined by the distance the beam travels along the propagation path (i.e., in the so-called along-range direction) during one pulse width. If each pixel of the target occupies a single range resolution cell, peak-detection can be used at the receiver to estimate the distance to target. If the target is spread over several range resolution cells for each pixel, the pulse returns will be spread over the time interval corresponding to the range extent of the target, and the shape of the received waveform contains information regarding the target and may be important for target identification-particular cases of this include identification of aerosols or atmospheric pollutants from backscattered returns and recognition of hard targets at ranges at which target features are not spatially resolved.
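Evaluating these pulsed-ranging relations with the Long-Pulse parameters quoted later in Table 1 (200-nsec pulses at a 20-kHz PRF) gives the following illustrative figures; absolute ranging is possible whenever the a priori range uncertainty is smaller than the computed ambiguity.

```python
# Pulsed-ranging relations from the paragraph above, evaluated with the
# Long-Pulse parameters of Table 1 (200-ns pulse, 20-kHz PRF) as inputs.
c = 3.0e8          # propagation speed, m/s
t_p = 200e-9       # pulse duration, s
prf = 20e3         # pulse repetition frequency, Hz
T = 1.0 / prf      # interpulse period, s

range_resolution = c * t_p / 2      # minimum distinguishable range difference
range_ambiguity = c * T / 2         # unambiguous range, c*T/2

print(f"range resolution: {range_resolution:.0f} m")        # 30 m
print(f"range ambiguity:  {range_ambiguity / 1e3:.1f} km")  # 7.5 km
```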
For target velocity measurement, a relatively long-duration, quasi-continuous waveform is required. The frequency difference between the transmitted and returned signals is then measured to determine the Doppler shift imposed by the target on the reflected signal. The resolution of a single measurement is limited by the uncertainty principle to at best the reciprocal duration of the measurement. Thus any particular resolution requirement puts a minimum limit on both the transmitted pulse duration and the dwell time of the radar on each pixel of the target. There is, as a result, a tradeoff between the time required to frame an image and the resolution of the resulting Doppler image.
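The sketch below evaluates this tradeoff for an assumed 25-μsec dwell at 10.6 μm (the quasi-cw tail duration of the Doppler waveform described in Section 3); the resulting velocity resolution is roughly consistent with the 0.5-0.6 mi/hr figures quoted in Table 1.

```python
# Uncertainty-principle limit on Doppler (velocity) resolution for a measurement
# of duration t_dwell, evaluated for an assumed 25-us quasi-cw dwell at 10.6 um.
wavelength = 10.6e-6      # m
t_dwell = 25e-6           # measurement (dwell) duration, s

freq_resolution = 1.0 / t_dwell                           # ~ reciprocal duration, Hz
velocity_resolution = wavelength * freq_resolution / 2    # m/s, from nu_D = 2 v / lambda

print(f"frequency resolution: {freq_resolution / 1e3:.0f} kHz")    # 40 kHz
print(f"velocity resolution:  {velocity_resolution:.2f} m/s "
      f"({velocity_resolution * 2.23694:.2f} mi/hr)")              # ~0.47 mi/hr
```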
2.1.2 Propagation Effects
Propagation through the atmosphere from transmitter to target and from target to receiver can be described by three mechanisms [10, 11]: diffraction, extinction, and turbulence-induced effects. Diffraction dictates the minimum beam spread possible in free space for a given aperture size and wavelength; it is thus determined entirely by properties of the transmitter and, for the return path, the target, rather than by any property of the propagation path itself. The aperture size of the transmitter determines the transmitter-to-target diffraction-induced beam spread, whereas the spatial extent and the reflection characteristics of the target determine the target-toreceiver diffraction-induced beam spread. In the absence of turbulence-induced beam spread, the transmitter-to-target diffraction angle determines the angular resolution of the laser radar system.
Extinction represents the loss of transmitted beam power both by absorption by atmospheric gases and by scattering by particles suspended in the atmosphere; extinction causes an exponential decay in the power of the transmitted beam with distance along the propagation path. Extinction rates are strongly dependent on the wavelength of the optical signal; at the near-visible and long-wave IR wavelengths of interest here, fog, haze, precipitation, and high relative humidity can increase extinction significantly above its clear-weather value. In this case the operable mechanisms are absorption by water vapor and carbon dioxide and scattering by water particles
[12]. For this reason prevailing weather conditions are noted in the data set indices.
Propagation through cloud cover can also increase extinction rates dramatically, but at the altitudes commonly used for the collection of IRAR data, this is not a concern.
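For a sense of scale, the sketch below converts a nominal 0.5-dB/km extinction coefficient (the typical clear-weather value quoted in the parameter tables) into one-way and round-trip power factors over an assumed 5-km path.

```python
import math

# Exponential extinction along the path: P(L) = P(0) * exp(-alpha * L).
# The 0.5-dB/km figure is the "typical" value from the parameter tables;
# the 5-km one-way path length is illustrative.
alpha_db_per_km = 0.5
alpha = alpha_db_per_km * math.log(10) / 10 / 1.0e3   # convert dB/km to 1/m
L = 5.0e3                                             # one-way path length, m

one_way = math.exp(-alpha * L)        # one-way power transmission
two_way = math.exp(-2 * alpha * L)    # round-trip factor appearing in the CNR

print(f"one-way transmission: {one_way:.3f}")   # ~0.56
print(f"round-trip factor:    {two_way:.3f}")   # ~0.32
```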
Turbulence-induced effects arise from the presence of random spatio-temporal variations of the refractive index of air due to local temperature variations on the order of 1 K. The relevant effects of propagation through turbulence include [10, 13] transmitter beam spread, target-return beam spread, target-plane scintillation, receiver coherence loss for coherent-detection radars, and angle-of-arrival spread and receiver-plane scintillation for direct-detection radars. We will describe each briefly.
Transmitter beam spread [10] is largely induced by random phase modulation of the initial spatial phase fronts of the transmitted beam near the transmitter. When the transmitter aperture diameter d exceeds the turbulence coherence length ρ₀ (d > ρ₀), a collimated beam will experience noticeable beam spread due to turbulence. However, for d ≪ ρ₀, diffraction is the primary determinant of beam spread, and turbulence-induced beam spread can be ignored. Target-return beam spread [11] is analogous to transmitter beam spread but occurs for propagation from target to receiver. Target-return beam spread is only non-negligible for those glint returns which would otherwise have very narrow free-space beam spread.
Target-plane scintillation [10] manifests itself as a random variation in the illuminating beam intensity in the target plane. It results essentially from coherent interference of
the transmitted beam with itself due to induced spatial phase front modulation. For weak turbulence, the coherence length of the resulting log-amplitude fluctuations is √(λL) at range L.
As a result of spatial phase-front modulation of the received signal across the receiver aperture, the coherent-detection receiver experiences a reduced mixing efficiency termed receiver coherence loss. Receiver coherence loss is negligible for d ≪ ρ₀, but when the receiver aperture size exceeds the turbulence coherence length, the received field's random phase fluctuations across the aperture create a spatial-mode mismatch between the received and local-oscillator fields, and the mixing efficiency can be disastrously affected [13]. For the direct-detection receiver this same turbulence-induced phase-front modulation results in angle-of-arrival spread and receiver-plane scintillation [14]. The angle-of-arrival spread adversely affects received signal power by spreading some of the received signal power outside the receiver field-of-view, but this can be compensated for by widening the field-of-view of the receiver at the cost of increasing the background noise level. Receiver-plane scintillation, which results from self-interference of the turbulence-corrupted received signal phase fronts, can be reduced by increasing the receiver aperture size so that the receiver effectively sums several independent instances of the intensity random variable, thus reducing the effect of scintillation on the receiver output signal.
2.1.3 Target Interaction
It is the interaction between target and incident signal beam which impresses upon the return signal any information it might contain about the target. Unfortunately the interaction between incident signal and target can be quite complicated, and a large body of literature has been dedicated to its description. We refer the reader to a general characterization [15], and we present here a description of more limited validity.
For a monostatic radar emitting a quasi-monochromatic, linearly polarized signal beam and observing a hard target at true range L, we describe the reflected beam as the product of a complex target reflection coefficient T(ρ̄) with the incident field in the target plane E_t(ρ̄, t) [15]:

E_r(ρ̄, t) = T(ρ̄) E_t(ρ̄, t).    (1)

Here ρ̄ is a vector representing the x, y-coordinates in the target plane, E_r(ρ̄, t) is the complex envelope of the reflected field in the target plane, E_t(ρ̄, t) is the complex envelope of the incident field in the target plane, and T(ρ̄) is the reflection coefficient for the effective plane of interaction represented by the target plane. The shape of the surface of the target is represented by the phase, and the reflectivity by the amplitude, of T(ρ̄). Inherent in this description is the assumption of bi-paraxial propagation, whereby propagation both from transmitter to target and from target back to receiver can be described by paraxial diffraction.
There are two limiting cases to be considered for T [10]: the pure specular reflection case and the pure diffuse reflection case. Specular reflection is described by a complex target reflection coefficient which varies smoothly in amplitude with ρ̄ and which varies slowly in phase on a scale comparable to the wavelength of the incident signal.
Physically, a specular target has a polished surface (smooth on the scale of A) which may exhibit curvature with a minimum radius larger than the incident beam width.
Specular targets give rise to reflections which are strongly directional, and under favorable alignment conditions result in strong return signals at the radar receiver, referred to as glint.
Diffuse targets are represented by complex target reflection coefficients with rapidly varying phase. Physically the surface of a diffuse target is quite rough on the scale of the transmitter wavelength A, and the resulting reflected field is therefore essentially spatially incoherent. The temporal coherence of the illuminating laser source, however, results in constructive and destructive interference of the field at the receiver, an effect termed speckle. Actual targets generally lie somewhere between the extremes of pure glint and pure speckle targets in the interaction with the incident field.
For a spatially unresolved pure glint target at ρ̄ = 0̄, we take [10]

T(ρ̄) = (λ²σ/4π)^(1/2) e^(iθ) δ(ρ̄),    (2)

where σ is the radar cross section and θ is a random variable representing phase and is uniformly distributed on [0, 2π). θ represents our lack of knowledge of the absolute range of the target on a scale of λ.
For a pure speckle target [10], T(ρ̄) = T_s(ρ̄), where T_s(ρ̄) is a stationary, zero-mean random function with second moments

⟨T_s(ρ̄₁) T_s(ρ̄₂)⟩ = 0,    (3)

⟨T_s*(ρ̄₁) T_s(ρ̄₂)⟩ = λ²ρ δ(ρ̄₁ − ρ̄₂),    (4)

where ρ is the target's diffuse reflectivity. The delta function on the right-hand side of equation (4) implies that the reflected signal is completely nondirectional-that is, the diffuse target reflects equally (on average) in every direction. For the sake of calculation, T_s(ρ̄) will be taken to be a circulo-complex Gaussian process. Note that in taking the diffuse reflectivity to be a random function, we are effectively defining the minute surface details of any given target to be noise. To the extent that several different examples of a particular model of tank, for example, would have uncorrelated surface roughness detail on the order of λ, this is a reasonable choice. As a result, however, even a perfect measurement of T_s(ρ̄), with no additional noise introduced, would be considered a noisy measurement of ρ. In effect the surface roughness of the real physical target becomes a noise source which obscures the abstract ideal target.
Realistic targets are neither pure glint nor pure speckle, but can be represented by a sum of glint and speckle terms.
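A simple Monte Carlo illustration of the two limiting cases is sketched below; the field scales are arbitrary, and the point is only that a glint return has a stable intensity while a fully developed speckle return has intensity fluctuations as large as its mean.

```python
import numpy as np

# Illustrative Monte Carlo of the two limiting target models: a pure glint
# return has fixed amplitude and uniformly random phase, while a pure speckle
# return (the sum of many rough-surface contributions) is circulo-complex
# Gaussian, so its intensity is exponentially distributed.  Scales are arbitrary.
rng = np.random.default_rng(0)
n = 100_000

# glint: fixed amplitude, uniformly random absolute phase
glint = 1.0 * np.exp(1j * rng.uniform(0, 2 * np.pi, n))

# speckle: zero-mean circular complex Gaussian field with unit mean intensity
speckle = rng.normal(0, np.sqrt(0.5), n) + 1j * rng.normal(0, np.sqrt(0.5), n)

for name, field in [("glint", glint), ("speckle", speckle)]:
    intensity = np.abs(field) ** 2
    print(f"{name:8s} mean intensity = {intensity.mean():.3f}, "
          f"std = {intensity.std():.3f}")
# glint: std ~ 0 (noise-free intensity); speckle: std ~ mean, i.e. the intensity
# fluctuations alone limit the image SNR to about 1 (cf. Section 2.1.4).
```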
As a final note about target interaction with the radar beam, remember that the multiplicative model of equation (1) is only valid for targets at fixed range from the transmitter. In the case of the IRAR, the transmitter is always in motion, so we must consider the effect of general relative motion between the transmitter and target. In the inertial frame of the transmitter, we can consider all motion to be due to relative target motion, and on reflection from the target, the signal will also experience a
Doppler frequency shift given by [6]

ν_D = −2 v̄ · î / λ,    (5)

where ν_D is the Doppler frequency shift, in Hertz, experienced by the reflected signal, v̄ is the relative velocity of the target in the inertial frame of the transmitter, î is a unit vector pointing along the line-of-sight from transmitter to target, and λ is the wavelength of the laser radar signal. Note that the Doppler frequency shift is positive for an object moving toward the transmitter-that is, the received frequency is higher than the transmitted frequency. The Doppler shift is the interaction process that makes possible remote velocity estimation, and the Doppler shift due to the aircraft's motion over slow-moving surface targets must be considered in the design of the receiver if coherent detection is employed.
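As a numerical illustration of equation (5), the sketch below evaluates the Doppler shift for an assumed 100-m/s closing velocity along the line of sight at the 10.6-μm wavelength.

```python
import numpy as np

# Doppler shift from equation (5), nu_D = -2 (v . i_hat) / lambda, for an
# illustrative 100-m/s closing velocity along the line of sight at 10.6 um.
wavelength = 10.6e-6                     # m
i_hat = np.array([1.0, 0.0, 0.0])        # unit vector, transmitter -> target
v = np.array([-100.0, 0.0, 0.0])         # target velocity in the transmitter frame
                                         # (negative x: moving toward the transmitter)

nu_d = -2.0 * np.dot(v, i_hat) / wavelength
print(f"Doppler shift: {nu_d / 1e6:.1f} MHz")   # ~ +18.9 MHz (positive: closing)
```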
2.1.4 Signal Reception
The receiver used in the laser-radar system plays a critical role in determining the sensitivity and capabilities of the system. For optical radars, there is a choice of whether to employ direct detection techniques (in which the received signal is detected with a square-law device which responds to incident intensity) or coherent detection techniques (in which the received field is beat against a local oscillator field of nearly the same frequency, and the output signal is proportional to the received field strength) [16]. Coherent detection utilizes the proportionality of the beat term to the local oscillator field strength to provide essentially noiseless pre-detection gain in the ideal case, so that thermal and dark current noises inherent to the photodetector and pre-amplifier are dwarfed by the quantum noise inherent in the signal itself. Thus coherent detection techniques provide superior sensitivity to direct detection under ideal conditions when signal strength is limited. In addition, detection of Doppler frequency shifts associated with interesting target velocities requires coherent techniques. Unfortunately, coherent detection, while directly comparable to the techniques used for decades in radio receivers, is much more difficult to implement than direct detection at optical frequencies. Coherent detection places strict requirements on the spectral purity of the source (i.e., temporal coherence between the LO and the received signal are required) and requires that the received signal and the local oscillator have spatial phase fronts which are nearly perfectly aligned over the active area of the detector (i.e., spatial modes of the LO and received signal fields must be matched). Direct detection therefore has advantages over coherent
detection when either source temporal coherence or the spatial phase characteristics of the received signal cannot be strictly controlled, or when complexity or cost are important design issues.
Coherent Detection. Coherent detection is employed in the forward-looking laser radar system; in particular, an optical heterodyne receiver is used, as depicted in Figure 3. The optical receiver comprises a local oscillator laser, offset in frequency from the master oscillator by a difference frequency ν_offset; mixing optics; and a photodetector array. A block diagram of the optical receiver and post-detection processor appears in Figure 4. The mixing optics combine the received target return signal with the local oscillator on the active surface of the photodetector array. For optimal detection, the spatial mode of the local oscillator should be perfectly matched to that of the received signal; the degree to which this is achieved is reflected in the heterodyne mixing efficiency. The output of the photodetectors includes a signal at the beat frequency

ν_IF = ν_offset + ν_Doppler,

where ν_Doppler is the Doppler shift associated with the forward motion of the aircraft relative to stationary surface targets. For a pulse-imager transmitting a rectangular pulse of duration t_p, the complex envelope of the IF photodetector output signal, in a convenient normalization, is given by [2]

r(t) = y(t) + n(t),

where n(t) is a zero-mean circulo-complex white Gaussian noise process of spectral density hν_o/η, representing the local-oscillator shot noise, and y(t) is the received signal component, given by [10]

y(t) = √P_T ∫ dρ̄ T(ρ̄) ξ_t²(ρ̄),    2L/c ≤ t ≤ t_p + 2L/c,
y(t) = 0    otherwise,

where P_T is the peak transmitted power, ξ_t(ρ̄) is the normalized complex field pattern of the transmitted beam in the target plane, and 2L/c is the propagation delay, and we have assumed that the transmitted field pattern in the target plane is identical to the normalized local oscillator complex field pattern backpropagated to the target plane.
Figure 3: Coherent optical heterodyne detection

Figure 4: Coherent laser radar receiver block diagram

It can be shown that the resulting carrier-to-noise ratio (that is, the ratio of average target-return power after IF filtering to average shot-noise power after IF filtering) is given by [2]

CNR = η P_T G_T σ A_R e^(−2αL) / [h ν_o B (4πL²)²]    (6)

for a pure glint target and by

CNR = η P_T ρ A_R e^(−2αL) / (h ν_o B π L²)    (7)
for a pure speckle target, where P_T represents the peak transmitted power, ν_o is the center frequency of the transmitted radar signal, B is the bandwidth of the IF filtering stage (B = 1/t_p in this example), G_T is the transmitter antenna gain, σ is the radar cross-section for the glint target, A_R is the area of the receiver aperture, η represents the combined heterodyne-mixing and optics efficiencies, α is the atmospheric extinction coefficient, and ρ is the diffuse reflectivity for the speckle target. Note that we have neglected any turbulence-induced atmospheric effects under the assumption that the radar aperture is much smaller than the turbulence coherence length under most conditions of interest.
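A rough numerical sketch of equations (6) and (7) is given below. The aperture, wavelength, quantum efficiency, and pulse duration follow Table 1; the peak power, range, radar cross section, diffuse reflectivity, and the antenna-gain expression G_T = 4πA_R/λ² are illustrative assumptions rather than reported IRAR values.

```python
import math

# Numerical sketch of the carrier-to-noise ratios of equations (6) and (7).
# Aperture, wavelength, efficiency, and pulse parameters follow Table 1; the
# remaining values below are illustrative assumptions.
h_nu = 1.87e-20            # photon energy at 10.6 um, J
eta = 0.25                 # combined mixing/optics efficiency (Table 1 quantum efficiency)
P_T = 500.0                # peak transmitted power, W (2 W average / 0.004 duty cycle)
t_p = 200e-9               # pulse duration, s
B = 1.0 / t_p              # IF bandwidth, Hz (B = 1/t_p)
lam = 10.6e-6              # wavelength, m
D = 0.13                   # aperture diameter, m
A_R = math.pi * (D / 2) ** 2            # receiver aperture area, m^2
G_T = 4 * math.pi * A_R / lam ** 2      # assumed transmitter antenna gain
L = 5.0e3                  # range, m (assumed)
alpha = 0.5 * math.log(10) / 10 / 1e3   # 0.5 dB/km expressed as 1/m
sigma = 1.0                # glint radar cross section, m^2 (assumed)
rho = 0.1                  # diffuse reflectivity (assumed)

atten = math.exp(-2 * alpha * L)
cnr_glint = eta * P_T * G_T * sigma * A_R * atten / (h_nu * B * (4 * math.pi * L ** 2) ** 2)
cnr_speckle = eta * P_T * rho * A_R * atten / (h_nu * B * math.pi * L ** 2)

print(f"glint CNR:   {10 * math.log10(cnr_glint):.1f} dB")
print(f"speckle CNR: {10 * math.log10(cnr_speckle):.1f} dB")
```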
Ideally the IF output signal from the photodetector is matched-filtered and envelope-detected; the resulting signal at the output of a square-law envelope detector is given by |r|², where r denotes the matched-filter output sample

r = (1/t_p) ∫ from 2L/c to t_p + 2L/c of r(t) dt,

if the envelope detector's output is sampled at the exact time corresponding to the propagation delay associated with the target's true range. An intensity image constructed from these values would then show a signal-to-noise ratio given by

SNR = [⟨|r|²⟩ − hν_o/(η t_p)]² / var(|r|²),    (8)

where the term subtracted in the numerator represents a constant component of ⟨|r|²⟩ which does not originate from the signal component.
The resulting SNR differs in construction from the CNR in that it takes into consideration both the action of the envelope detector and the fact that the signal component y of r may contribute to the noise by means of the randomness of the target reflectivity T. However, it is the image SNR which is of concern to a viewer of the final image, which is, of course, the raison d'être for the system. The intensity-image SNR can be shown to be given by [10]

SNR = (CNR/2) / [1 + (2 CNR)^(−1) + CNR/(2 SNR_sat)],    (9)

where SNR_sat is the saturation value of the image SNR and is given by

SNR_sat = ∞    (10)

for a pure glint target and by

SNR_sat = 1    (11)

for the speckle target case. Thus in the glint target case, the image SNR goes to one-half the CNR value for large CNR, and in the speckle case the image SNR saturates at unity.
The CNR differs from the signal-to-noise ratio in that it ignores target-dependent noise in the reflected image due to the noisy nature of the reflection process. In
contrast, the image SNR also includes noise introduced by variations in the reflection process which we have chosen to model as random, as mentioned in the target interaction section. Thus while the CNR relates the technical quality of the receiver output, it tells us little about the quality of the final image; this is the value of the image SNR. The CNR will figure prominently in the data statistics models in a later section.
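The saturation behavior of equation (9) is easy to tabulate; the short sketch below evaluates the image SNR for both limiting targets over a range of CNR values.

```python
# Image SNR versus CNR from equation (9), for the two limiting targets
# (SNR_sat -> infinity for glint, SNR_sat = 1 for speckle).
def image_snr(cnr, snr_sat=float("inf")):
    return (cnr / 2) / (1 + 1 / (2 * cnr) + cnr / (2 * snr_sat))

for cnr in [0.1, 1.0, 10.0, 100.0]:
    print(f"CNR = {cnr:6.1f}:  glint SNR = {image_snr(cnr):7.3f}, "
          f"speckle SNR = {image_snr(cnr, snr_sat=1.0):6.3f}")
# For large CNR the glint-target SNR approaches CNR/2 and the speckle-target
# SNR saturates at 1, as stated in the text.
```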
Following the coherent pulse-tone optical receiver is any post-processing electronics which might be required. In the case of range and intensity measurement for hard targets, the filtered IF output is envelope-detected, and a peak detector is commonly used to select the maximum value of the optical receiver's filtered IF output that occurs within a preset range-uncertainty interval. The magnitude of this peak signal value is then digitized and recorded as the intensity value for the given pixel, and the range corresponding to that peak value is digitized and recorded as the range value for that pixel. The process is then repeated for the next pixel. In the case of target velocity measurement, the pre-processing must separate out the different frequency components of the optical receiver's output in order to obtain an estimate of the
Doppler shift associated with the target. There are several possible techniques for performing this sorting by frequency, including a discrete filter bank and calculation of the signal's Fourier transform. Some form of peak detection may then be performed on the frequency-sorted return signal to obtain a peak intensity value and velocity value for the pixel. These values are then digitized and recorded, and the process repeated for the next pixel.
Direct Detection. Direct detection is employed for both of the down-looking laser radars. In this case the optical receiver comprises an optical interference-type filter, which reduces the background radiation incident on the detectors, followed by semiconductor photodiode detectors. A block diagram of the optical receiver and the post-detection processor used on the down-looking laser radars is depicted in Figure 5. The most critical component of the direct-detection receiver is the photodetector.
Both the active and passive sensors in the IRAR program use semiconductor devices as photodetectors. We provide here a very cursory description of the operation of semiconductor photodiodes; for a more complete discussion the reader is directed to the references [16, 17, 18]. The photodiode detectors we are concerned with have an intrinsic active region sandwiched between p-doped and n-doped regions, and hence are referred to as p-i-n detectors. The detector is operated in a reverse-biased mode so as to deplete the intrinsic region of carriers. Photons incident on the detector's active region are then absorbed with probability η, where η is the detector's quantum efficiency. The absorption of a photon moves an electron from the valence to the conduction band of the photodiode, creating a hole-electron carrier pair in the depleted region. The reverse-biased field in the depleted region then causes the hole to drift toward the p-region and the electron toward the n-region of the photodiode.
Figure 5: Direct-detection laser radar receiver block diagram

This carrier drift, which ultimately results in recombination of each element of the carrier pair at the edges of the depleted region, results in a spike of photocurrent through the photodiode which is of duration limited by the carrier transit time across the depletion region. The total amount of charge carried by the current spike is q, the electrostatic charge of the electron, in the case of a standard p-i-n photodiode.
Note that a macroscopic average current associated with illumination of the detector by a constant optical intensity therefore has associated with it a noise level due to the discrete nature of the carrier creation process. This noise is called shot noise and is quantified by its (single-sided) spectral density, which can be multiplied by the bandwidth of the system to obtain the variance of the noise current:
S_ii(f) = 2qI;    (12)

⟨i_shot²⟩ = 2qIB,    (13)

where q is the charge of a single carrier, I is the mean photocurrent, i_shot is the shot-noise current, and B is the bandwidth of the receiver. We have assumed in Equation
(12) that the transit time of the carriers is sufficiently short relative to the impulse response of the receiver electronics that the current pulses can be modelled as impulses.
The times at which the discrete charge pulses occur constitute a Poisson process for constant illumination intensity. In the limiting case of an ideal, noiseless detector, shot noise determines the limiting performance of the detector. Its fundamental nature is made apparent by noting that it is ultimately a manifestation of the quantum nature of light.
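A short numerical example of equations (12) and (13), with an assumed 1-μA mean photocurrent and 10-MHz receiver bandwidth:

```python
import math

# Shot-noise variance from equations (12)-(13): sigma_i^2 = 2 q I B.
# The 1-uA photocurrent and 10-MHz bandwidth are illustrative values.
q = 1.602e-19          # electron charge, C
I = 1.0e-6             # mean photocurrent, A
B = 10.0e6             # receiver bandwidth, Hz

spectral_density = 2 * q * I            # A^2/Hz, single-sided
noise_variance = spectral_density * B   # A^2
noise_rms = math.sqrt(noise_variance)   # A

print(f"shot-noise spectral density: {spectral_density:.2e} A^2/Hz")
print(f"rms shot-noise current:      {noise_rms * 1e9:.2f} nA")
```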
In the case of an avalanche photodiode (APD), the reverse-biasing electric field is maintained at a sufficiently high strength that the original carriers are multiplied by impact ionization with the lattice, and each of the secondaries so produced is similarly multiplied, until all the resulting carriers drift outside the high-field depleted region.
As a result, for the APD, the total charge carried by the current spike associated with any particular photon absorption event is multiplied by a random factor as compared with the standard p-i-n photodiode. This multiplication constitutes the internal gain of the APD.
In a real photodiode, there are other sources of noise to be considered as well. The
Gaussian thermal noise associated with the load resistor and amplification electronics is one of the more important sources. For many detectors, the shot noise associated with the dark current of the detector (i.e., the current which flows through the photodiode even in the absence of illumination) must be considered. In addition, background radiation, including stray light originating both in the field of view and from the receiver enclosure itself, can be a significant source of shot noise, particularly for detection of thermal radiation, where controlling stray radiation becomes difficult.
Finally, for APDs, the random nature of the multiplication process introduces more noise into the detection process and must be accounted for as well. The strength of each of these noise sources is quantified by its spectral density, which is well-modelled as frequency-independent for both thermal and shot noise-that is, the noises are white, and each can be quantified by a single number, the spectral density. The
excess noise due to randomness of the photodetector gain is represented by a factor
F > 1 which multiplies the spectral density of the shot noise. Further details are provided in the Data Statistics Models section.
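As a rough sketch of how these contributions combine, the example below adds multiplied shot noise and load-resistor thermal noise for an avalanche photodiode using standard textbook expressions. All of the parameter values, and the particular form of the combination, are illustrative assumptions; the report's own detailed treatment appears in the Data Statistics Models section.

```python
import math

# Rough sketch of combining APD-multiplied shot noise with thermal noise,
# using standard textbook expressions.  All values are illustrative assumptions.
q = 1.602e-19    # electron charge, C
kB = 1.381e-23   # Boltzmann's constant, J/K
G = 100.0        # mean avalanche gain
F = 4.0          # excess-noise factor (F > 1)
I_sig = 10e-9    # primary signal photocurrent, A
I_dark = 1e-9    # primary dark current, A
I_bg = 2e-9      # primary background-induced current, A
T = 300.0        # temperature, K
R_L = 50.0e3     # load resistance, ohm
B = 1.0e6        # bandwidth, Hz

shot = 2 * q * (I_sig + I_dark + I_bg) * G ** 2 * F * B   # multiplied shot noise, A^2
thermal = 4 * kB * T * B / R_L                            # load/amplifier thermal noise, A^2

snr = (G * I_sig) ** 2 / (shot + thermal)
print(f"shot-noise variance:    {shot:.2e} A^2")
print(f"thermal-noise variance: {thermal:.2e} A^2")
print(f"electrical SNR:         {10 * math.log10(snr):.1f} dB")
```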
In the case of direct-detection AMCW optical receivers, post-detection processing begins with narrowband filtering and separation into in-phase and quadrature components using a lock-in amplifier synchronized to the transmitter's amplitude modulation signal. The two components are further processed to obtain relative range and intensity estimates, which are then sampled and recorded.
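The sketch below illustrates the same in-phase/quadrature idea in discrete form: the detected waveform is mixed with cosine and sine references at the modulation frequency and averaged, and the recovered phase is converted to relative range as in Section 2.1.1. The IRAR hardware performs this processing with analog lock-in electronics; the modulation frequency, phase, and noise level used here are assumed values.

```python
import numpy as np

# Digital sketch of lock-in style I/Q demodulation of an AMCW return.
# The modulation frequency, phase, and noise level are illustrative only.
rng = np.random.default_rng(1)
f_mod = 10.0e6                         # AM frequency, Hz (assumed)
fs = 200.0e6                           # sample rate, Hz
t = np.arange(0, 20e-6, 1 / fs)        # one pixel's dwell
true_phase = 0.8                       # modulation phase delay, rad

signal = 1.0 + 0.5 * np.cos(2 * np.pi * f_mod * t - true_phase)   # detected intensity
signal += rng.normal(0, 0.05, t.size)                             # additive noise

i_comp = np.mean(signal * np.cos(2 * np.pi * f_mod * t))   # in-phase component
q_comp = np.mean(signal * np.sin(2 * np.pi * f_mod * t))   # quadrature component

est_phase = np.arctan2(q_comp, i_comp)         # recovered modulation phase
est_amplitude = 2 * np.hypot(i_comp, q_comp)   # recovered AM amplitude
c = 3.0e8
relative_range = c * est_phase / (4 * np.pi * f_mod)   # range modulo c/(2 f_mod)

print(f"estimated phase: {est_phase:.3f} rad, amplitude: {est_amplitude:.3f}")
print(f"relative range:  {relative_range:.2f} m")
```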
2.2 Millimeter-Wave Radar Principles

Many of the principles of operation described in the previous discussion of laser radar apply to millimeter-wave radar as well. The process can again be divided into transmission, propagation, interaction, and reception phases, but with some differences in the important details of each. The differences are largely a result of the fact that the radio-frequency MMW radar operates in the 94-GHz band with a wavelength of about 3 mm, nearly a thousand times longer than the wavelengths of the laser radars. The MMW transmitter consists of a modulated Gunn diode acting as a microwave source producing a frequency-modulated continuous-wave (FMCW) waveform as shown in Figure 6. The instantaneous frequency of the waveform is ramped linearly in time from 85.35 to 85.65 GHz over a period of 447 μsec, then drops to 85.35 GHz and repeats, yielding a sawtooth instantaneous-frequency-vs.-time plot for the transmit waveform. The radar waveform is radiated from a parabolic antenna in the nosecone of the aircraft, producing a beam whose angular spread is determined by the antenna's effective aperture via diffraction.
Figure 6: MMW radar transmit waveform, instantaneous frequency vs. time
Propagation through the atmosphere for the radar beam is less problematic than for optical-frequency waves. At the relatively long wavelengths of microwave radar, atmospheric turbulence does not significantly affect propagation. At microwave frequencies the important effects of atmospheric propagation include diffraction, attenuation, multipath, ground-wave effects, and refraction [7, 6, 19]. For the special case of the MMW air-to-ground geometry, only diffraction and extinction are relevant.
Diffraction is governed by the transmitter aperture and has already been discussed.
Extinction must be taken into account in the MMW radar equations just as in the optical case, though it should also be noted that precipitation can increase the extinction rate dramatically.
The same general target-interaction considerations mentioned in the discussion of laser radar also apply to MMW radar, with targets classified as pure speckle, pure glint, or somewhere between. In the case of the man-made targets typically of interest, target surfaces appear relatively smooth at the wavelength of the MMW radar.
Combined with the extremely fine range resolution of the MMW radar, this means that most targets of interest will closely resemble the pure glint target model [20], the most notable exception being background terrain in the absence of man-made targets [21]. For the purposes of modelling in this document, we shall assume the
MMW radar target is a pure glint target, though results for speckle targets can be constructed by analogy with the optical radar case in a straightforward manner.
The MMW radar echo is collected for the receiver by the same antenna that emits the transmit waveform, the radar being monostatic. Note that time-switching cannot be used to provide a high degree of isolation between the transmitter and receiver as it is in a pulsed radar, since the MMW radar is CW. Fortunately, due to the propagation delay associated with the radar echo, the signal being transmitted at the time of arrival of the radar echo is shifted in frequency from the echo by an amount proportional to the range delay and the frequency sweep rate. The frequency difference between the transmitted signal and the received echo thus constitutes an estimate of target range, with L = cΔν/2ν̇, where L is the range estimate, Δν is the frequency difference, ν̇ is the frequency sweep rate of the transmitter, and c is the propagation speed of the radar signal. The receiver extracts the difference frequency by feeding the received signal into the RF port of a balanced mixer, with some fraction of the transmit signal being fed into the LO port, as in Figure 7. Since the transmit signal is used as the local oscillator, this arrangement constitutes a homodyne receiver
[20]. The output of the homodyne receiver is a signal proportional to the received echo field strength at the difference frequency. Antenna-induced coupling of the strong transmit signal into the receiver can then be neutralized by filtering out zero-frequency components in the baseband signal coming from the mixer, and the range uncertainty window can similarly be implemented as a bandpass filter. Spectrum analysis of the resulting signal sorts the received signal strength according to range.
The resulting signal is sampled at intervals corresponding to the range sampling interval; these samples are stored as the received range data for each pixel. Unlike the laser radar case, there is no peak detection. Note that the receiver must have
bandwidth sufficiently wide to accommodate the transmitted sweep range.
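The sketch below evaluates the range-to-beat-frequency relation using the sweep parameters quoted above (a 0.3-GHz sweep in 447 μsec); the 3-km target range is an assumed value.

```python
# FMCW ranging relation from the text: the beat frequency between the echo and
# the instantaneous transmit signal is proportional to range.  Sweep parameters
# follow the values quoted above; the 3-km range is illustrative.
c = 3.0e8
sweep_bw = 85.65e9 - 85.35e9         # swept bandwidth, Hz
sweep_time = 447e-6                  # sweep duration, s
sweep_rate = sweep_bw / sweep_time   # Hz/s

L = 3.0e3                                         # target range, m (assumed)
beat = 2 * L * sweep_rate / c                     # beat frequency for that range, Hz
range_from_beat = c * beat / (2 * sweep_rate)     # inverting L = c * dnu / (2 * rate)

print(f"sweep rate:      {sweep_rate / 1e12:.3f} THz/s")
print(f"beat at {L / 1e3:.0f} km:    {beat / 1e6:.2f} MHz")
print(f"recovered range: {range_from_beat:.1f} m")
```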
Figure 7: MMW radar receiver mixer
2.3 Passive Infrared Imaging Principles

In contrast with the active nature of radar imagers, passive sensors construct images from radiation which is in some sense intrinsic to the targeted scene, rather than measuring the target's response to a probe beam. The radiation detected this way is either reflected ambient radiation (e.g., sunlight, moonlight, or man-made sources incorporated in or near the target scene) or thermal radiation emanating from the target itself. For reflected radiation it is reflection contrast between the target and background that provides useful information about the target, whereas for thermal radiation it is temperature variation between background and target that makes detection possible. For reflected ambient light the spectrum depends on both the spectrum of the source and the reflection properties of the target. An approximate idea of the thermal spectrum radiated by an object at temperature T can be had by considering the spectral radiance of a blackbody surface, defined as power per unit area per unit frequency per unit solid angle for a blackbody source, as given by Planck's Radiation Law [22, 16]:

(2hν³/c²) cosθ / [exp(hν/kT) − 1],    (14)

where h is Planck's constant, ν is the frequency of radiation, θ is the observation angle (normal = 0), k is Boltzmann's constant, and T is the absolute temperature of the source.
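As a numerical illustration of equation (14), the sketch below evaluates the spectral radiance near 10 μm for two nearby temperatures, showing the small thermal contrast on which passive LWIR detection relies.

```python
import math

# Spectral radiance from Planck's law, equation (14), at normal incidence
# (theta = 0), evaluated near 10 um for two nearby temperatures.
h = 6.626e-34     # Planck's constant, J s
c = 3.0e8         # speed of light, m/s
k = 1.381e-23     # Boltzmann's constant, J/K

def spectral_radiance(nu, T, theta=0.0):
    """Blackbody power per unit area, frequency, and solid angle, W/(m^2 Hz sr)."""
    return (2 * h * nu ** 3 / c ** 2) * math.cos(theta) / (math.exp(h * nu / (k * T)) - 1)

nu = c / 10e-6    # frequency corresponding to a 10-um wavelength, Hz
for T in (300.0, 310.0):
    print(f"T = {T:.0f} K: L_nu = {spectral_radiance(nu, T):.3e} W m^-2 Hz^-1 sr^-1")
```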
In this section we will discuss basic passive imaging, scanning and detection principles and their impact on spatial resolution and sensitivity. A detailed discussion of sensitivity appears in the Data Statistics Models section which appears later in this document.
The passive sensor itself comprises three systems: the scanning system, the imaging system, and the detection system. The scanning system generally is made up of an assembly of servo-controlled mirrors. Unlike in the active case, where the scanning optics must accommodate both steering of the probe beam and capture of the return signals, in the passive case the scanning system must simply re-direct the imaging system's instantaneous field-of-view to the appropriate point in space. The primary effect of the scanning system on the passive data is determined by the dwell time-that is, the time during which the imaging system is directed at the area corresponding to a particular pixel of the target. This time is important because it determines the maximum time over which the detected receiver signal can be integrated in the passive receiver, and thus ultimately affects sensitivity. Longer dwell times are desirable from a sensitivity standpoint, since the received signal strength for passive signals is likely to be quite low. However, the passive receiver's resolution is somewhat degraded for long dwell times by pixel drag-the effect of the moving point of view during integration resulting from the scanning process. For sufficiently short dwell times and slow scanning rates, however, this can be neglected. For a given dwell time, optimal sensitivity dictates that the integration time for the passive detector be as close to the full dwell interval as is possible.
The imaging optics must be designed for consistency with the resolution of the active system if the resulting data are to be pixel-registered. The passive system on the forward-looking suite shares scanning mirrors and telescopic optics with the forward-looking active system; in the down-looking sensor suite, the scanning optics for the various sensors are physically separate but synchronized.
The detection system comprises a photodetector or photodetector array which converts the received light into a photocurrent. Processing electronics integrate the photocurrent for a period ti which is upper bounded by the dwell time of the scanning system on the particular pixel. At the end of the integration time, the accumulated charge value is digitized and stored as the received intensity data for the pixel, and the integration process begins anew for the next pixel. For a photodetector array, several pixels are processed similarly but in parallel. The statistics of the received data depend on both the photodetector and the received optical signal; the basic operating principles of the photodetector were described in the previous section on laser radar principles.
An important consideration for the passive optical receiver is the effect of the photodetector on the bandwidth of the receiver. The duration of the spike associated with a single photon's absorption is determined by the transit time across the depleted region, which is roughly determined by the width of the depletion region. However, the junction capacitance of the photodetector is inversely proportional to the width of the depletion region, and, together with the load resistance R, the junction capacitance Cj determines the time constant RCj of the detection circuit. For typical photodiodes, the transit time across the depletion region is much less than the RCj time constant, which is therefore the limiting factor for frequency response. For the
passive sensor, the requirements on detection bandwidth are relatively mild, requiring only that the detection bandwidth be much greater than the pixel sampling frequency so that sequential pixels are uncorrelated.
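The sketch below illustrates the RC-limited bandwidth estimate f_3dB = 1/(2πRC_j); the load resistance and junction capacitance used are assumed values, not IRAR hardware figures.

```python
import math

# RC-limited frequency response of the photodiode/load combination described
# above: f_3dB = 1 / (2 pi R C_j).  The 50-kohm load and 5-pF junction
# capacitance are illustrative values.
R = 50.0e3       # load resistance, ohm
C_j = 5.0e-12    # junction capacitance, F

tau = R * C_j
f_3db = 1.0 / (2 * math.pi * tau)
print(f"RC time constant: {tau * 1e9:.0f} ns")       # 250 ns
print(f"3-dB bandwidth:   {f_3db / 1e3:.0f} kHz")    # ~640 kHz
```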
3 Hardware Descriptions and Parameter Values

In this section the implementation details for each sensor system are described.
The descriptions are organized according to the physical layout of the systems, so that the forward-looking optical sensors are discussed first, then the forward-looking millimeter-wave radar system, and finally the down-looking sensor suite. Parameter set identifiers are assigned to each set of hardware parameters in the accompanying tables so that the data sets can be easily associated with the sensor parameters used to generate them.
3.1 Forward-Looking Optical Sensor Hardware

The forward-looking optical sensor suite constituted the original sensor for the IRAR program. As a result of its long service life, there are several versions of the hardware, reflecting upgrades. The basic hardware versions are identified as Long-Pulse (with
6-meter precision), Long-Pulse + Doppler, and Short-Pulse (with 1-meter precision).
The parameter values for the forward-looking optical sensor suite are summarized in
Tables 1 and 2 [1].
The forward-looking optical sensor suite comprises a long-wave infrared (LWIR) laser radar system operating at 10.6-μm wavelength together with a passive long-wave infrared radiometer in the 8-12-μm band. These two units share optics to allow pixel-by-pixel image registration, but have independent 12-element linear HgCdTe detector arrays, with the arrays oriented vertically to permit simultaneous reception of 12 pixels, stacked in elevation. The shared optics includes a 13-cm afocal Ritchey-Chrétien telescope with a 0.2-mrad-full-angle instantaneous field-of-view per pixel.
Scanning is accomplished by a combination of a single scanning mirror and dual rastering galvanometer mirrors. This allows two operating modes for the system: linescan mode and framing mode. In linescan mode, the scanning mirror scans in azimuth at a rate of 2.5 scans per second, with the detector array providing an image 12 pixels in elevation by 3840 pixels in azimuth. The composite image field-of-view in linescan mode is operator-selectable within the 21.09-degree field-of-regard, but for most of the data records the composite image field-of-view is 10 degrees in azimuth by 2.4 mrad in elevation, though in a few early data sets the field-of-view is 20 degrees azimuth by 2.4 mrad elevation. Thus for the 10-degree linescans, the pixel sampling interval is 50 μrad, and the scene is oversampled in azimuth by 4:1. For a very small number of data sets the system was modified to allow either 2.5 or 5 linescans per
second. In framing mode, the scanning mirror holds a fixed direction (compensating for platform motions) while the galvanometer mirrors are rastered at 10 scans per second to form an image field-of-view of 25 mrad in azimuth by 12 mrad elevation
(125 x 60 pixels). Framing mode thus provides more rapid imaging of a smaller area than linescan mode.
Originally the active portion of the forward-looking optical sensor suite produced both range and intensity data for each pixel; later Doppler-sensing capability was added, bringing the channel count to three per pixel. The source for the laser radar subsystem is a compact CO₂ laser emitting at 10.6 μm. In the original version of the forward-looking laser-radar system, the laser produced 200-nsec pulses at a pulse repetition frequency (PRF) of 20 kHz. The range precision of this earliest version of the system was 6 meters; this system constituted the Long-Pulse (FL-LP) transmitter. Both the range value and the intensity value are digitized and stored as 8-bit words for this system. The addition of Doppler-sensing capability required that the transmission waveform be changed from a pure pulse; in this configuration a pulse of duration 200 nsec is followed by a cw tail of duration 25 μsec. The repetition rate is reduced to 10 kHz in Doppler mode, so that there is a period of 75 μsec between the end of the cw portion and the beginning of the next pulse when the laser does not emit. This period is used both to recharge the laser cavity and to allow the passive channel measurement to take place. This system is the Long-Pulse + Doppler (FL-LP+D) transmitter system. Eventually the laser radar system was modified to obtain 1-meter precision and improved resolution. To accomplish this, the low-pressure CO₂ laser was replaced with a high-pressure CO₂ laser to allow shorter pulses. Under this configuration the laser radar system is capable under ideal conditions of producing pulses of duration 25 nsec, but in typical field measurements the pulse duration is 40 nsec. The PRF of this system is 20 kHz, and range results are recorded with 1-meter precision; this is the Short-Pulse (FL-SP) transmitter. The size of the recorded range word was increased from 8 to 11 bits to allow a range-uncertainty window size similar to that of the original system. The Short-Pulse system is not capable of Doppler measurements.
The forward-looking laser radar system receiver employs optical heterodyne detection and is similar for both Long-Pulse and Short-Pulse systems. The Long-Pulse +
Doppler system additionally includes a Doppler pre-processing unit immediately after the receiver. The receiver comprises a local-oscillator CO₂ laser which shares a gas reservoir with the transmitter laser. The range-sampling interval is 6 meters for the
Long-Pulse system and 1 meter for the Short-Pulse system, and peak detection is utilized to determine both the range and intensity values-that is, the received signal is examined over the preset range uncertainty window, and the range value reported is that corresponding to the peak value of the return signal in that window. The intensity value reported is also the peak value of the return.
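The following toy sketch illustrates the peak-detection rule just described; the pulse shape, noise level, and 6-m bin spacing are illustrative assumptions, not recorded system behavior.

import numpy as np

def peak_detect(return_signal, range_axis_m):
    """Report range and intensity as the location and value of the largest
    return within the preset range uncertainty window."""
    k = int(np.argmax(return_signal))
    return range_axis_m[k], return_signal[k]

# toy example: 256 range bins spaced 6 m apart (Long-Pulse sampling)
rng = np.arange(256) * 6.0
sig = np.exp(-0.5 * ((rng - 600.0) / 15.0) ** 2) + 0.05 * np.random.rand(256)
print(peak_detect(sig, rng))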
The Doppler pre-processor unit comprises a bank of surface-acoustic-wave (SAW) devices constituting a real-time, 12-channel spectrum analyzer.
FL-LP and FL-LP+D Parameter Sets

Channels (FL-LP and FL-LP+D): Active Range and Intensity; Passive Intensity; Boresight Video
Channels (FL-LP+D only): Doppler Velocity; Doppler Intensity

Parameter                                              Value
Imaged field-of-view
  Linescan mode (el x az)                              2.4 mrad x 20 deg or 2.4 mrad x 10 deg
  Framing mode (el x az)                               12 mrad x 25 mrad
Image array size
  Linescan mode (el x az)                              12 x 3840 pixels
  Framing mode (el x az)                               60 x 125 pixels
Receiver aperture dimension, D_R                       13 cm
Receiver field-of-view, θ_R                            200 µrad
Detector quantum efficiency, η                         0.25
Noise-equivalent temperature (passive sensor), NEΔT    0.1 K
Passive detector integration time, t_i                 25 µsec*
Atmospheric extinction coefficient, α                  0.5 dB/km (typical)
Transmit wavelength, λ                                 10.6 µm
Average transmitter power, P_S                         2 W
Number of detectors                                    12
Photon energy, hν                                      1.87 x 10^-20 J
Pulse Repetition Frequency (no Doppler), PRF           20 kHz
Pulse duration, t_p                                    200 nsec
Intermediate frequency, ν_IF                           13.5 MHz
IF filter bandwidth, B                                 20 MHz*
Range resolution                                       30 m
Range precision                                        6.1 m
Number of range bins                                   256
Recorded data length
  range word                                           8 bits
  intensity word                                       8 bits

FL-LP+D Doppler channel only:
Doppler resolution                                     0.6 mi/hr
Doppler precision                                      0.5 mi/hr
Number of Doppler bins                                 256
Pulse Repetition Frequency, PRF                        10 kHz
Recorded data length
  velocity word                                        8 bits
  intensity word                                       8 bits

Table 1: Forward-Looking Optical Sensor Suite Parameters: Long-Pulse and Long-Pulse + Doppler Parameter Sets. * designates estimated parameters.
FL-SP Parameter Set

Channels (FL-SP): Active Range and Intensity; Passive Intensity; Boresight Video

Parameter                                              Value
Imaged field-of-view
  Linescan mode (el x az)                              2.4 mrad x 20 deg or 2.4 mrad x 10 deg
  Framing mode (el x az)                               12 mrad x 25 mrad
Image array size
  Linescan mode (el x az)                              12 x 3840 pixels
  Framing mode (el x az)                               60 x 125 pixels
Receiver aperture dimension, D_R                       13 cm
Receiver field-of-view, θ_R                            200 µrad
Detector quantum efficiency, η                         0.25
Noise-equivalent temperature (passive sensor), NEΔT    0.1 K
Passive detector integration time, t_i                 25 µsec*
Atmospheric extinction coefficient, α                  0.5 dB/km (typical)
Average transmitter power, P_S                         5 W
Number of detectors                                    12
Photon energy, hν                                      1.87 x 10^-20 J
Pulse Repetition Frequency, PRF                        20 kHz
Pulse duration, t_p                                    40 nsec
Intermediate frequency, ν_IF                           70 MHz
IF filter bandwidth, B                                 80 MHz*
Range resolution                                       6 m
Range precision                                        1 m
Number of range bins                                   2048
Recorded data length
  range word                                           11 bits
  intensity word                                       8 bits

Table 2: Forward-Looking Optical Sensor Suite Parameters: Short-Pulse Parameter Set. * designates estimated parameters.
The Fourier transform of the quasi-continuous return signal is peak-detected to obtain velocity and peak-amplitude estimates. The pre-processor has a sampling interval of 0.5 miles per hour, a recorded Doppler word size of 8 bits, and an effective resolution of 0.6 miles per hour.
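For scale, the sketch below converts line-of-sight speeds to the two-way Doppler shifts (f_d = 2v/λ) that the spectrum analyzer must resolve at the 10.6-µm carrier; the speeds chosen are arbitrary examples, not values from the data.

wavelength = 10.6e-6   # m
mph = 0.44704          # m/s per mile/hour
for v_mph in (0.5, 0.6, 60.0):
    f_d = 2 * v_mph * mph / wavelength   # two-way Doppler shift (Hz)
    print(f"{v_mph:5.1f} mi/hr -> Doppler shift {f_d / 1e3:8.1f} kHz")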
The forward-looking passive LWIR radiometer uses the same receiver optics as the laser radar system and has an identical linear detector array; thus it produces resolution identical to that of the laser system. The sensitivity in terms of noise equivalent temperature differential is 0.1 K.
Along with the active and passive IR channels, a boresighted video camera provides a greyscale visual-band image of the scene being sensed. One video frame is recorded for every linescan.
As a more recent addition to the IRAR system, the MMW radar unit has only one hardware configuration. The transmitter operates at a center frequency of 85.5 GHz, for a radiation wavelength of 3.51 mm. The linear frequency sweep of the transmitted waveform covers 300 MHz in a chirp interval of 447 µsec, yielding a frequency-to-range conversion factor of 4403 Hz/m. The sweep repetition frequency is 1600 Hz [1].
MMW Parameter Set

Parameter                                              Value
Antenna diameter, D_R                                  30.5 cm
Full beamwidth                                         11.5 mrad
Center frequency, f_c                                  85.5 GHz
Chirp bandwidth, Δf                                    300 MHz
Chirp interval                                         447 µsec
Frequency/range scale factor                           4403 Hz/m
Pulse repetition frequency, PRF                        1.6 kHz
Peak transmit power, P_T                               20 mW
Receiver bandwidth (per bin), B                        2 kHz
Range bins per profile                                 512
Dynamic range                                          8 bits
Range resolution                                       0.52 m
Range precision                                        0.25 m
Image array size (linescan, 10-deg azimuth scan)       1 profile (el) x 15 profiles (az)

Table 3: Forward-Looking Millimeter-Wave Radar Parameters
The 30.5-cm-diameter antenna provides a maximal antenna gain of 46 dB at 50% efficiency and fixes the full width of the radar beam at 11.5 mrad, producing an elliptical spot on the ground of 11.5-m cross-range diameter by 132-m diameter along the flight path at 1-km slant range with a 5-degree depression angle. The illuminated ground spot comprises a large number of illuminated swaths, each contributing to the echo in a single range bin and each covering about 5.9 m² near the center of the spot. The maximum useful range exceeds 2 km in clear weather. The antenna can be slaved to the pointing mirror of the forward-looking optical system in the 2.5-Hz linescan mode, so that the MMW radar and the forward-looking optical system address the same target region. In linescan mode with a 10-degree azimuth field-of-view, the MMW data are stored as 15 range profiles, each containing an 8-bit intensity value for each of 512 range bins. The oversampling present in the stored data can be exploited to enhance the imagery.
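The spot dimensions quoted above follow from simple beam geometry; a minimal sketch, using the slant range and depression angle stated in the text:

import math

beamwidth = 11.5e-3     # full beamwidth (rad)
slant_range = 1000.0    # m
depression_deg = 5.0

cross_range = beamwidth * slant_range
along_track = cross_range / math.sin(math.radians(depression_deg))
print(f"cross-range spot ~ {cross_range:.1f} m, along-track spot ~ {along_track:.0f} m")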
The principal limit on the resolution of the system is imposed by the linearity of the sweep-generation circuit. The MMW unit has a maximal deviation from linearity of 0.01%, sufficient to support 1.7-foot (0.52-m) resolution. This means that the ultra-high-range-resolution MMW unit is capable of resolving the individual scattering centers of targets which conventional radars cannot distinguish.
For an operating slant range of 2 km, the useful range data are centered at approximately 9 MHz at the output of the balanced mixer. To extract meaningful range data while reducing the required recording rate, the MMW unit employs a range uncertainty window of 375 feet (115 m) and a tracking loop which positions the center of a video bandwidth converter about the ground clutter return. The result is a baseband signal of approximately 1-MHz bandwidth corresponding to the 375-ft range window. The 2-kHz-wide frequency band associated with each of the 512 range bins is then effectively filtered out and the corresponding output intensity digitized and recorded, along with headers providing the information necessary for recovery of absolute range.
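A short sketch of the chirp bookkeeping implied by these numbers, using only the Table 3 values; the 2-km slant range is the example used in the text.

hz_per_m = 4403.0   # frequency-to-range conversion factor (Hz/m)
bin_bw = 2e3        # per-bin receiver bandwidth (Hz)
n_bins = 512

print(f"beat frequency at 2 km slant range ~ {2000.0 * hz_per_m / 1e6:.1f} MHz")
print(f"range spanned per 2-kHz bin ~ {bin_bw / hz_per_m:.2f} m")
print(f"baseband span of {n_bins} bins ~ {n_bins * bin_bw / 1e3:.0f} kHz")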
The parameter values for the MMW radar unit, including system sensitivity parameters to be discussed in the Data Statistics Models section, are summarized in Table 3.
The down-looking sensor suite in its most general configuration comprises two laser radars, one operating at 0.8 µm and one at 10.6 µm, plus a passive radiometer operating in the LWIR band. A block diagram of the system appears in Figure 8. The sensors are arranged in a bistatic configuration, with each of the two laser radar transmitters utilizing independent exit optics and sharing a common scanning system. All three receivers share a collection mirror and scanning optics. The two scanning systems are synchronized, and all sensors are designed for comparable spatial resolution,
so that the images produced by the three sensors are pixel-registered. The transmit and receive scanners are synchronized such that the receiver instantaneous field-of-view is offset sufficiently from the transmitter field-of-view to partially compensate for the lag angle associated with propagation delay.
Figure 8: Down-looking sensor suite block diagram. The GaAs laser radar, the CO2 laser radar, and the 8.5-11.5 µm passive radiometer share synchronized scanning optics to produce registered data.
The down-looking system supports a single scanning mode: two-dimensional images are constructed by scanning in azimuth (perpendicular to the flight path) from -45 degrees to +45 degrees and by allowing the motion of the aircraft to produce the along-path variation between azimuthal scans. The scan rate is controlled so as to guarantee contiguous ground coverage with minimal overlap of adjacent pixels. Note that this requires that the scan rate vary with the ratio of aircraft speed to altitude; since the scan rate affects pixel dwell time, sensitivity will be dependent on this ratio as well.
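The sketch below is a hedged geometric illustration of that scan-rate requirement, assuming the along-track pixel pitch equals the nadir transmit footprint (altitude times the 1-mrad divergence of Tables 5 and 6) and the 70-m/sec standard velocity; the actual controller law is not documented here.

def scan_rate_hz(v_ground_mps, altitude_m, beam_divergence_rad=1e-3):
    """Scan rate needed so successive azimuth scans abut on the ground:
    one scan per along-track footprint (~ altitude * divergence)."""
    footprint = altitude_m * beam_divergence_rad
    return v_ground_mps / footprint

for h in (250.0, 500.0, 1000.0):
    print(f"H = {h:6.0f} m : ~{scan_rate_hz(70.0, h):6.1f} scans/sec")

As the text notes, the required rate scales with the speed-to-altitude ratio, and the pixel dwell time scales inversely with it.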
The two active systems share a common architecture, but with some variation in parameter values. Two different hardware versions of the NVIR system were employed, differing essentially only in transmitter power; the earlier version, an unmodified Perkin-Elmer Imaging Laser Radar (identified as DL1), utilized a cryogenically cooled 0.5-watt GaAs laser emitting at 0.85 microns, while the later version (DL2) incorporates a room-temperature 2-watt GaAs laser at 0.85 microns. The LWIR system included in the later version of the down-looking suite employs a 10-watt CO2 laser emitting at 10.6 microns.
DL1 Parameter Set

Channels: 0.85-µm Active: Relative Range, Intensity

Parameter                                      Value
Receiver aperture dimension, D_R               12.7 cm
Receiver focal length, f                       25.4 cm
Receiver optical efficiency, ε                 0.5
Optical filter bandwidth, Δλ                   6.5 x 10^-3 µm
Receiver field-of-view, θ_R                    1 mrad
Detector quantum efficiency, η                 0.8
APD intrinsic gain, G                          150
APD excess noise factor, F                     3.5
Ambient incident scene power, P_scene
  Day                                          3.63 µW
  Night                                        0.018 µW
Equivalent thermal noise power, P_thermal      1.20 nW
Equivalent dark current noise power, P_dark    0.046 nW
Atmospheric extinction coefficient, α          1 dB/km
Average transmitter power, P_S                 0.5 W
Modulation depth, m                            1
Modulation frequency, f_0                      15 MHz
Range ambiguity interval                       10 m
Pixel dwell time, t_d                          1.25 µsec
Photon energy, hν                              2.34 x 10^-19 J
Typical target reflectivity, ρ                 0.25
Recorded data length
  range word                                   8 bits
  intensity word                               8 bits

Table 4: Down-Looking Sensor Suite Parameters: Perkin-Elmer ILR system
DL2 Parameter Set: Near-Visible Band

Channels: 0.85-µm Active: Relative Range, Intensity

Parameter                                      Value
Receiver aperture dimension, D_R               12.7 cm
Receiver focal length, f                       25.4 cm
Receiver optical efficiency, ε                 0.5
Optical filter bandwidth, Δλ                   6.5 x 10^-3 µm
Receiver field-of-view, θ_R                    5 mrad
Detector quantum efficiency, η                 0.8
APD intrinsic gain, G                          150
APD excess noise factor, F                     3.5
Ambient incident scene power, P_scene
  Day                                          3.63 µW
  Night                                        0.018 µW
Equivalent thermal noise power, P_thermal      1.20 nW
Equivalent dark current noise power, P_dark    0.046 nW
Atmospheric extinction coefficient, α          1 dB/km
Average transmitter power, P_S                 2 W
Transmit beam divergence, θ_T                  1 mrad
Modulation depth, m                            1
Modulation frequency, f_0                      15 MHz
Range ambiguity interval                       10 m
Pixel dwell time, t_d                          0.318 x 10^-6 x (H)
Aircraft air-to-ground altitude, H             varies with data set
Aircraft standard velocity, v                  70 m/sec
Photon energy, hν                              2.34 x 10^-19 J
Typical target reflectivity, ρ                 0.25
Recorded data length
  range word                                   8 bits
  intensity word                               8 bits

Table 5: Down-Looking Sensor Suite Parameters: MAPS Near-Visible system
DL2 Parameter Set: Long-Wave IR Band

Channels: 10.6-µm Active: Relative Range, Intensity; LWIR Passive: Intensity

Parameter                                      Value
Receiver aperture dimension, D_R               12.7 cm
Receiver focal length, f                       25.4 cm
Receiver optical efficiency, ε                 0.5
Optical filter bandwidth
  active system, Δλ_a                          3 µm
  passive system, Δλ_p                         3 µm
Receiver field-of-view, θ_R                    5 mrad
Detector quantum efficiency, η                 0.25
Noise equivalent power
  active system, (NEP)_a                       1.2 x 10^-12 W/√Hz
  passive system, (NEP)_p                      9.9 x 10^-13 W/√Hz
Atmospheric extinction coefficient, α          0.5 dB/km
Average transmitter power, P_S                 10 W
Transmit beam divergence, θ_T                  1 mrad
Modulation depth, m                            0.5
Modulation frequency, f_0                      15 MHz
Range ambiguity interval                       10 m
Pixel dwell time, t_d                          0.318 x 10^-6 x (H)
Aircraft air-to-ground altitude, H             varies with data set
Aircraft standard velocity, v                  70 m/sec
Photon energy, hν                              1.87 x 10^-20 J
Typical target reflectivity, ρ                 0.04
Recorded data length
  range word                                   8 bits
  intensity word                               8 bits

Table 6: Down-Looking Sensor Suite Parameters: MAPS Long-Wave system
In all cases the laser is amplitude-modulated at 15 MHz. The optical power at the output of the transmitter is thus given by
P_T(t) = P_S\,\bigl[1 + m\cos(2\pi f_0 t)\bigr],   (15)

where P_S is the time-averaged transmitter power, m is the modulation depth, and f_0 is the RF modulation frequency.
In the case of the NVIR system, the modulation depth is 1; in the case of the LWIR system the modulation depth is 0.5. The transmitter exit optics provide a transmit beam divergence of 1 mrad full-angle for both the LWIR and NVIR systems. At an altitude of 500 meters, this translates to spot sizes on the ground with 0.5-meter diameters.
The receivers for the down-looking radars have slightly larger fields-of-view than the corresponding transmitters to account for those effects of lag angle not compensated by the scanning offset. Thus the spatial resolution of the radars is set by the transmitter beam divergence angles. The received optical signals are filtered using interference-type optical filters to reduce the level of background radiation striking the photodetector surfaces. The detectors for both systems are single-element photodiodes, so that only one pixel is processed at a time. The NVIR system employs a silicon APD as the photodetector, whereas the LWIR system employs a HgCdTe photovoltaic detector. In each case the output of the photodiode is split into in-phase and quadrature components I_in and I_quad and integrated over the pixel dwell time by a lock-in amplifier. Post-processors use the two outputs of the lock-in amplifier to compute the relative range estimate
\hat R = \frac{c}{4\pi f_0}\arctan\!\left(\frac{I_{quad}}{I_{in}}\right)   (16)

and the intensity estimate

\hat I = K\sqrt{I_{in}^2 + I_{quad}^2},   (17)

and the results are then digitized and recorded as 8-bit words for each of the two estimates. In most of the data sets, only one-fourth (22.5°) of the data from each scan line are recorded due to limits in the recording equipment, though in later data sets the full 90-degree azimuthal field-of-regard is recorded.
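A minimal sketch of the post-processor arithmetic in Equations (16) and (17), taking K = 1, the 15-MHz modulation frequency, and reporting the relative range modulo the 10-m ambiguity interval; the lock-in outputs used in the example are arbitrary.

import math

C = 3.0e8              # propagation speed (m/s)
F0 = 15e6              # AMCW modulation frequency (Hz)
Z_AMB = C / (2 * F0)   # 10-m range ambiguity interval

def estimates(i_in, i_quad, k=1.0):
    rel_range = (C / (4 * math.pi * F0)) * math.atan2(i_quad, i_in)  # Eq. (16), arctan over (-pi, pi]
    intensity = k * math.hypot(i_in, i_quad)                          # Eq. (17)
    return rel_range % Z_AMB, intensity

print(estimates(0.6, 0.8))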
The passive radiometer is almost identical to the receiver portion of the laser radar at the corresponding wavelength, up to the photodetector output. The LWIR passive radiometer employs a 3-micron-passband optical filter between the imaging optics and the single-element HgCdTe photovoltaic detector. The photodiode's output current is then integrated over the pixel dwell interval and the output digitized and recorded.
The parameter values for the down-looking sensor suite are summarized in Tables 4,
5, and 6 [1].
In this section we provide statistical models for the data recorded by each of the on-board sensor systems. We consider each of the three sensor types independently: passive radiometer, millimeter-wave radar, and laser radar, in order of increasing complexity of the data models required. Because the data models are in some cases dependent on specific implementation details, where necessary the different implementations of a given sensor type are discussed separately within the appropriate section.
For the passive detection case we are primarily interested in a figure of merit to indicate the relative quality of the resulting image data. In particular, we define the signal-to-noise ratio as the ratio of the (electrical) power contained in the mean signal component originating in the target scene to the mean (electrical) noise power present, with both quantities measured at the output of the passive receiver just before digitization and recording. It is additionally possible to generate more detailed models of the noise present by considering which noise processes dominate for a given sensor.
In this section we will first present a very general model of optical photodetection; we will then specialize the model to the LWIR passive radiometer case. The general model will be needed for the description of the direct detection laser radar receivers.
The signal and noise components of a general optical photodetection receiver can be modelled as in Figure 9. The noise mechanisms include Gaussian thermal noise attributable to the amplifier electronics and to the load resistor used to terminate the photodiode, shot noise due to the incident signal and background power, shot noise due to the dark current, and excess noise due to the randomness of the internal gain (if any), represented by the excess noise factor F multiplying the shot noise terms (F = 1 for the LWIR photodetectors, which have no gain). Note that the bandwidth of the filter at the output of the photodetector, which represents the effect of averaging over the pixel dwell interval t_d, varies with the speed-to-altitude ratio of the aircraft in the case of the down-looking sensors. In all cases, however, measurements of distinct pixels are modelled as statistically independent.
A more conceptually convenient model than that of Figure 9 instead references all noise mechanisms to the input of a noiseless photodetector with quantum efficiency η and noiseless gain G, as depicted in Figure 10. The combined noise sources constitute the noise equivalent power (NEP) [16]. Note, however, that the cost of achieving this conceptual simplicity is an expression for the thermal equivalent power P_thermal which can be misleading in some respects. Consider, for example, a detector using load resistance R and followed by a perfectly noiseless amplifier. The thermal noise would be represented in the physical model of Figure 9 as a current noise source of
spectral density 2kT/R.

Figure 9: Photodetection noise sources, physical model.

In the NEP scheme of Figure 10, the thermal noise for the same system would be represented by a thermal equivalent optical power of

P_{thermal} = \frac{2 h\nu k T}{\eta\, q^2 G^2 F R}

incident on an ideal noiseless optical receiver. Some caution should be used in interpreting this expression; the functional dependencies it exhibits describe the behavior of the term relative to the other noise terms contributing to the
NEP. Thus, while increasing the photodetector excess noise F will not affect the absolute thermal noise power, it does reduce the relative contribution of the thermal noise to the NEP; hence the thermal equivalent power decreases with increasing F, all other parameters being held fixed, but the overall NEP increases.
It is useful to note that the photodetector converts incident optical power to electrical current, so that the signal portion of the current is proportional to incident optical power. This means that the electrical signal power present in the photocurrent is proportional to the square of the incident optical power. The noise mechanisms, on the other hand, are generally either electrical processes or conversion processes, so that they result in noise currents with broadband power spectra which can be modelled as white. The noise power present in the output of the receiver (i.e. electrical power) is therefore proportional to the electrical bandwidth of the receiver. As a result, the
NEP, representing the equivalent optical input noise power which would result in the same noise power at the output, carries units of Watts/√Hz, the final electrical-noise power spectrum being proportional to (NEP)².
Figure 10: Photodetection noise sources, NEP model.

For the LWIR radiometer the important noise sources are signal, background, and dark-current shot noise, together with thermal noise due to the load resistor and amplifier electronics. Since the detector has no intrinsic gain, there is no excess noise (F = 1).
Moreover, in most circumstances the shot-noise contribution will be dwarfed by the thermal noise contribution, so that the noise may be adequately modelled as Gaussian at the output. In any case, the NEP can thus be expressed as
\mathrm{NEP} = \sqrt{\frac{2 h\nu F}{\eta}\bigl(P_{back} + P_{dark} + P_{thermal} + P_{scene}\bigr)}   (18)
A number of additional issues arise due to the thermal origins of the detected radiation
[16]. Unlike in the visible spectrum, where all of the power emanating from the scene can be considered to be signal power, in the typical LWIR case most of the viewed scene is very near the same temperature, and thus only variation in emission strength from the level corresponding to the background temperature provides any useful information. Thus the power emanating from the observed scene and collected from a particular pixel can be broken into two components: a nominal power Pscene constituting the average power over the scene, and a signal power APscene representing the deviation of the power collected in a particular pixel from the nominal value due to temperature contrast with the background level. Accordingly, in the LWIR case we have for the signal-to-noise ratio
\mathrm{SNR} = \left(\frac{\Delta P_{scene}}{\mathrm{NEP}/\sqrt{2 t_i}}\right)^{2}   (19)

in terms of NEP. For imaging in the LWIR band, since signal variations are due to target temperature variation, it is traditional to render sensitivity in terms of temperature differential instead of radiative power. For a given temperature variation of a pixel in the scene from nominal, the variation in scene power is given by \Delta P_{scene} = (\partial P_{scene}/\partial T)\,\Delta T, where

\frac{\partial P_{scene}}{\partial T} = P_{scene}\,\frac{h\nu}{kT^2}\,\frac{1}{1 - \exp(-h\nu/kT)} \approx P_{scene}\,\frac{h\nu}{kT^2} \quad \text{for } h\nu \gg kT.   (20)
The SNR can thus be expressed in terms of the temperature differential of the target:
\mathrm{SNR} = \left(\frac{\Delta T}{\mathrm{NEP}\big/\!\left(\sqrt{2 t_i}\;\partial P_{scene}/\partial T\right)}\right)^{2} = \left(\frac{\Delta T}{\mathrm{NE}\Delta T}\right)^{2}   (21)

where NEΔT is the temperature variation which produces unity SNR at the radiometer output.
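A rough numerical sketch of the NEΔT relation above. The passive-channel NEP is the Table 6 value, but the integration time, in-band scene power, and band-center wavelength are assumptions, so the result is indicative only.

import math

K_B = 1.380649e-23   # Boltzmann constant (J/K)
H_PLANCK = 6.62607015e-34

def ne_delta_t(nep_w_rt_hz, t_int_s, p_scene_w, wavelength_m, temp_k=290.0):
    nu = 3.0e8 / wavelength_m
    dpdt = p_scene_w * H_PLANCK * nu / (K_B * temp_k ** 2)        # Eq. (20), hnu >> kT limit
    return nep_w_rt_hz / (math.sqrt(2 * t_int_s) * dpdt)          # temperature giving unity SNR

# assumed: 9.9e-13 W/sqrt(Hz) passive NEP (Table 6), 25-us integration,
# 1 uW of in-band scene power, 10-um band center
print(f"NEdT ~ {ne_delta_t(9.9e-13, 25e-6, 1e-6, 10e-6):.4f} K")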
In contrast with the optical case, at radio frequencies the photon quantum energy is sufficiently small that quantum noise is completely negligible. Instead the thermal background radiation received by the antenna, together with the thermal noise added by the electronic amplification stages, constitutes the only important noise source [20].
The thermal noise captured by the antenna is Gaussian and white, with single-sided spectral density kTa, where k is Boltzmann's constant and Ta is an effective temperature, which in general depends upon where the antenna is aimed. In the case of the
MMW radar, we assume that the earth's surface always fills the receiver field-of-view, and hence we take Ta = 290 K. For convenience, the inherent receiver amplification noise is referred to the receiver input, and, since it is statistically independent of the antenna noise, its spectral density can be added to that of the antenna noise. Thus the receiver's inherent thermal noise contribution can be represented as an additional noise temperature, though more typically we can represent it as a receiver system noise figure, F, which multiplies the antenna temperature. The total noise spectral density referred to the input is thus given by kTaFs, single-sided.
The signal power received by the radar can be computed directly with sufficient information about the target's radar cross section. As shown in Appendix A, the received power at the receiver antenna is given by

P_{ant} = \frac{P_T\, G_T\, G_R\, \sigma_t\, \lambda^2}{(4\pi)^3 L^4}\,\delta_{atm}   (22)

where P_T is the transmitted power, G_T is the transmitter antenna's gain in the target direction, G_R is the receiver antenna's gain in the target direction, σ_t is the target's radar cross section, λ is the wavelength of the radar carrier, and δ_atm is the round-trip loss factor due to atmospheric extinction, given by δ_atm = e^{-2αL} for range L and extinction coefficient α. In clear weather, α can be expected to be about 0.5 dB/km at 85.5 GHz.
Between the antenna and the receiver pre-amplifier input, the signal may be attenuated by a number of mechanisms which are agglomerated into the system loss factor L_s, so that P_rec = P_ant/L_s. The system loss factor accounts for transmission-line losses and other inefficiencies. The signal-to-noise ratio can thus be computed as

\mathrm{SNR} = \frac{P_{rec}}{k T_a F_s B} = \frac{P_T\, G_T\, G_R\, \sigma_t\, \lambda^2\,\delta_{atm}}{(4\pi)^3 L^4\, k T_a F_s B\, L_s}   (23)

where B is the bandwidth of the receiver's post-detection filters. Using the parameter values from Table 3, and assuming a range of 2 km, we find that δ_atm = -2 dB, and the minimum detectable target radar cross section (that is, the cross section which leads to unity SNR at a given range) at 2 km is σ_min = -14 dBsm ≈ 400 cm². For an object of σ_t = 25 m², the SNR is 26 dB.
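The sketch below evaluates Equation (23) with the Table 3 values; the system noise figure F_s and loss L_s do not appear in the extracted table, so representative values are assumed here, chosen so that the minimum detectable cross section lands near the quoted -14 dBsm.

import math

# Table 3 values
PT = 20e-3            # peak transmit power (W)
G_DB = 46.0           # antenna gain (dB)
LAM = 3.51e-3         # wavelength (m)
B = 2e3               # per-bin receiver bandwidth (Hz)
TA = 290.0            # antenna temperature (K)
ALPHA_DB_KM = 0.5     # extinction (dB/km)

# assumed sensitivity parameters (not listed in the extracted table)
FS_DB, LS_DB = 8.0, 8.0

def snr_db(sigma_m2, range_m):
    g = 10 ** (G_DB / 10)
    d_atm = 10 ** (-2 * ALPHA_DB_KM * (range_m / 1e3) / 10)
    fs_ls = 10 ** ((FS_DB + LS_DB) / 10)
    snr = (PT * g * g * sigma_m2 * LAM ** 2 * d_atm) / (
        (4 * math.pi) ** 3 * range_m ** 4 * 1.380649e-23 * TA * fs_ls * B)
    return 10 * math.log10(snr)

print(f"SNR at 2 km for a 25-m^2 target:  {snr_db(25.0, 2000.0):.1f} dB")
print(f"SNR at 2 km for a 400-cm^2 target: {snr_db(0.04, 2000.0):.1f} dB")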
From the perspective of modelling the data statistics, there are two distinct types of laser radar included in the IRAR. The forward-looking unit is a pulsed imager using coherent detection with a 10.6-µm-wavelength carrier, but in some versions it also incorporates a separate time-multiplexed Doppler-imaging channel in combination with the pulsed imager. The down-looking unit comprises two AMCW direct-detection systems, one operating at a wavelength of 10.6 µm and the other at 0.85 µm; the models for these two systems differ effectively only in parameter values.
We consider models for each of the two systems separately.
4.3.1 Coherent-Detection Laser-Radar Data Models
We assume in this section a monostatic heterodyne-detection, peak-detecting, pulsed-imaging laser radar with local-oscillator shot-noise-limited performance. The pulses are assumed to be Gaussian, transform-limited, and sufficiently short that for each pixel the target occupies a single range resolution cell. For the coherent laser radar system modelled this way, the complex envelope of the output of the matched filter can be represented as [2]

l(z) = \sqrt{E_r v}\; e^{j\chi} \exp\!\left[-4(z - L)^2/z_{res}^2\right] + n(z)   (24)

where E_r represents the average received energy; v is, for speckle targets, an exponential random variable with mean 1, and for glint targets, deterministically equal to 1; χ is a uniformly distributed random variable on [0, 2π) and is statistically independent of v; L is the true range to the target; z_{res} = cT/\sqrt{2} is the range resolution of the transmitted pulse (measured full width to e^{-2}); and n(z) is a zero-mean, circulo-complex Gaussian noise process with covariance \langle n(z)\, n^*(0)\rangle = (h\nu/\eta)\exp(-4 z^2/z_{res}^2) representing the local-oscillator shot-noise contribution. We have ignored any effects of
atmospheric turbulence in this model, a reasonable choice under most operating conditions for the aperture size and wavelength of the forward-looking laser-radar unit.
We assume in what follows that the target is a pure speckle target, and thus that the corresponding system CNR is given by Equation 6.
For convenience in the analysis we assume that the receiver utilizes a square-law envelope detector (i.e., envelope detection accomplished by generating |l(z)|²), though the coherent receivers used in the IRAR program were implemented with linear envelope detection (i.e., envelope detection accomplished by generating |l(z)|). Square-law envelope detection results in range and peak intensity estimates, whereas linear envelope detection results in estimates for range and peak amplitude; but the statistics for the two methods are simply related, since the intensity and amplitude are uniquely related by I = A² and the resulting range estimates are identical.
Coarse-Grained Ranging Approximation. Suppose that the range uncertainty interval Z is divided into M range resolution bins (each of width z_{res}). The mth bin is centered at z_m, 1 ≤ m ≤ M, and we assume that L = z_{m*} for some value m = m*. Then the discrete collection of random variables {l(z_m): 1 ≤ m ≤ M} is a collection of zero-mean, circulo-complex Gaussian random variables which are approximately statistically independent, and [2]

\langle |l(z_m)|^2 \rangle = \begin{cases} h\nu/\eta, & \text{for } m \ne m^* \\ E_r + h\nu/\eta, & \text{for } m = m^* \end{cases}   (25)

is the expected value of the envelope-detector output for the mth bin. We can define two detection scenarios:
1. The correct bin has the largest envelope-detected intensity value among all M bins. This case represents a correct range estimate, \hat L = L.
2. The exponential random variable v takes a sufficiently small value, and the
Gaussian noise term n(z) for some incorrect bin n takes a sufficiently large value, that an incorrect bin is selected as the estimated range bin. This represents an anomalous range estimate, called an "anomaly."
Under this model the probability of anomaly can be shown to be [23, 24]

\Pr(A) = 1 - \Pr\!\left[\,|l(z_{m^*})|^2 > \max_{m \ne m^*} |l(z_m)|^2\,\right]
       = 1 - \int_0^\infty dv\; \frac{e^{-v/(1+\mathrm{CNR})}}{1+\mathrm{CNR}}\,\bigl(1 - e^{-v}\bigr)^{M-1}
       = \sum_{m=1}^{M-1} \binom{M-1}{m} \frac{(-1)^{m+1}}{1 + m(1+\mathrm{CNR})}   (26)

and for CNR ≫ 1 the approximation \Pr(A) \approx [\ln(M) + 0.577]/\mathrm{CNR} holds.
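A numerical sketch of the anomaly probability: the integral form of Equation (26) evaluated by brute force and compared against the high-CNR approximation, with M = 256 range bins assumed (matching Table 1).

import numpy as np

def pr_anomaly(cnr, m_bins, n_grid=400000):
    """Evaluate Pr(A) from the integral form above by a simple Riemann sum."""
    v_max = 50.0 * (1.0 + cnr)
    v = np.linspace(1e-9, v_max, n_grid)
    integrand = np.exp(-v / (1.0 + cnr)) / (1.0 + cnr) * (1.0 - np.exp(-v)) ** (m_bins - 1)
    return 1.0 - float(np.sum(integrand) * (v[1] - v[0]))

M = 256
for cnr_db in (15.0, 20.0, 25.0):
    cnr = 10.0 ** (cnr_db / 10.0)
    print(f"CNR = {cnr_db:4.1f} dB: Pr(A) ~ {pr_anomaly(cnr, M):.3e}, "
          f"approx (ln M + 0.577)/CNR = {(np.log(M) + 0.577) / cnr:.3e}")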
As a result of the nonzero probability of anomalous range estimates, the maximum-likelihood range estimate \hat L for a single-pulse measurement does not come close to attaining the Cramér-Rao bound on optimal estimation error, given by [2]

\delta L \ge \frac{z_{res}\sqrt{1+\mathrm{CNR}}}{4\,\mathrm{CNR}} \approx \frac{z_{res}}{4\sqrt{\mathrm{CNR}}} \quad \text{for CNR} \gg 1.   (27)

However, conditioned on the fact that an anomaly did not occur for a particular measurement, the range-estimation error of the peak-detection procedure described here should approach the Cramér-Rao optimum lower bound for high CNR values.
This suggests that we can obtain an approximate joint pdf for range estimates and peak intensity values by iterated construction. We end up with [3, 2]

p_{\hat I, \hat z}(I, z_m) = (1+\mathrm{CNR})^{-1} e^{-I/(1+\mathrm{CNR})}\bigl(1 - e^{-I}\bigr)^{M-1} u(I)\,\delta_{m m^*}
  + e^{-I}\bigl(1 - e^{-I/(1+\mathrm{CNR})}\bigr)\bigl(1 - e^{-I}\bigr)^{M-2} u(I)\,\bigl(1 - \delta_{m m^*}\bigr),   (28)

p_{\hat I}(I) = (1+\mathrm{CNR})^{-1} e^{-I/(1+\mathrm{CNR})}\bigl(1 - e^{-I}\bigr)^{M-1} u(I)
  + (M-1)\, e^{-I}\bigl(1 - e^{-I/(1+\mathrm{CNR})}\bigr)\bigl(1 - e^{-I}\bigr)^{M-2} u(I),   (29)

p_{\hat z}(z_m) = [1 - \Pr(A)]\,\delta_{m m^*} + \frac{\Pr(A)}{M-1}\,\bigl(1 - \delta_{m m^*}\bigr),   (30)

where u(·) is the unit step function and δ_{ij} is the Kronecker delta function.
Since a coherent receiver utilizing linear envelope detection breaks the sample space into exactly the same events, the intensity-range pdfs of Equations 28-30 can be easily converted to amplitude-range pdfs by substituting A² for I and then noting that p_{\hat A, \hat z}(A, z_m) = 2A\, p_{\hat I, \hat z}(A^2, z_m) and p_{\hat A}(A) = 2A\, p_{\hat I}(A^2).
Fine-Range Resolving Model. The previous model is appropriate for a coarse-ranging radar where the target can be expected to occupy a single range resolution bin per pixel. For fine-ranging radar situations where the target is range-resolved and will vary significantly in range from pixel to pixel, a more detailed model reflecting the deviation of the non-anomalous range values from the exact correct values is needed.
For this case, any digitization of the range estimate is neglected, so that we may use a continuous range-estimation variable to model the range output of the laser radar, and we model the reported value as a Gaussian random variable. For range estimates \hat R and true ranges R^* within the range uncertainty interval, the single-pixel conditional probability density for a range estimate \hat R given the true range R^* is given by [4, 5]

p_{\hat R | R^*}(\hat R | R^*) = \frac{1 - \Pr(A)}{\sqrt{2\pi(\delta R)^2}}\exp\!\left[-\frac{(\hat R - R^*)^2}{2(\delta R)^2}\right] + \frac{\Pr(A)}{Z},   (31)

where \delta R \approx z_{res}/(4\sqrt{\mathrm{CNR}}) is the range accuracy dictated by the Cramér-Rao lower bound for high values of the CNR, and Pr(A) can be obtained from Equation (26) with M = Z/z_{res} for M ≫ 1 and CNR ≫ 1.
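A minimal sketch of the mixture density in Equation (31); the window size, true range, accuracy, and anomaly probability used here are arbitrary illustrative values.

import numpy as np

def range_pdf(r_hat, r_true, delta_r, pr_anom, window_z):
    """Eq. (31): non-anomalous Gaussian component plus a uniform anomaly floor."""
    gauss = np.exp(-0.5 * ((r_hat - r_true) / delta_r) ** 2) / (delta_r * np.sqrt(2 * np.pi))
    return (1.0 - pr_anom) * gauss + pr_anom / window_z

r = np.linspace(0.0, 100.0, 1001)   # hypothetical 100-m uncertainty window
pdf = range_pdf(r, r_true=42.0, delta_r=0.5, pr_anom=0.02, window_z=100.0)
print(f"density integrates to ~ {float(np.sum(pdf) * (r[1] - r[0])):.3f}")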
Doppler Data Statistics Models. For the purposes of modelling the statistics of the Doppler velocity and intensity data, we treat the forward-looking laser radar as a 2-dimensional Doppler imager which transmits transform-limited Gaussian pulses with normalized field strength of the form

s(t) = \left(\frac{8}{\pi T^2}\right)^{1/4} e^{-4 t^2/T^2}.
The Doppler receiver comprises a coherent heterodyne optical receiver whose IF output is Fourier-transformed, linear envelope-detected, and finally peak detected to generate the Doppler velocity and Doppler intensity estimates.
It can then be shown [2] that the complex envelope of the output of the windowed
Fourier-transform stage can be written in a form exactly analogous to that of Equation
(24), but with the substitutions
z \rightarrow v_d,   (32)
L \rightarrow V_z,   (33)
z_{res} \rightarrow v_{res},   (34)
n(z) \rightarrow n(v_d),   (35)

where v_d is the dummy velocity coordinate; V_z is the true along-range velocity coordinate of the target, where V_z lies within the velocity uncertainty interval; v_{res} = \sqrt{2}\,\lambda/\pi T is the velocity resolution of the Doppler imager; and n(v_d) is a zero-mean, circulo-complex Gaussian noise process with a covariance given by \langle n(v_d)\, n^*(0)\rangle = (h\nu/\eta)\exp(-4 v_d^2/v_{res}^2) representing the local-oscillator shot-noise contribution.
The Doppler channel data thus constitute duals of the range and amplitude data considered in both the coarse-grained and fine-grained range models of the previous subsections. The previous results for range and amplitude data therefore hold analogously for the Doppler data with the variable substitutions (32)-(35) in place.
4.3.2 Direct-Detection Laser-Radar Data Models
For the direct detection laser radar systems of the down-looking suite, there is no local oscillator mixing gain to overcome the thermal noise inherent in the detector
electronics, unlike in the case of coherent laser radar. In addition, the APD used in the GaAs system introduces an additional noise mechanism via the randomness of the internal gain process. The nature of the post-processing required to obtain relative range and intensity estimates makes obtaining data models somewhat more cumbersome than with some of the other sensors described in this document. We begin by modelling the statistics for the output of the two-channel lock-in amplifier.
The total average received power at the photodetector is given by
P_R = P_{radar}(t) + P_{scene},   (36)

where P_{scene} is the portion of broadband radiation power emanating from the scene which passes through the receiver's optical filter, and P_{radar}(t) is the average received portion of the time-varying transmitted radar signal, given by

P_{radar}(t) = \frac{P_S\,\rho\, A_R\cos\theta\;\varepsilon\; e^{-2\alpha L}}{\pi L^2}\,\bigl[1 + m\cos(2\pi f_0 t - \phi)\bigr],   (37)

where P_S is the average transmitter power, ρ is the target's diffuse reflectivity, A_R is the receiver aperture area, θ is the azimuth angle from aircraft to target (nadir = 0), ε is the optical efficiency of the receiver optics, α is the atmospheric extinction coefficient, L is the true range to target, m is the modulation depth, f_0 is the RF modulation frequency, and φ = 4π f_0 L/c is the RF phase shift resulting from round-trip propagation delay. Here pixel drag is ignored, lag-angle compensation is assumed perfect, and the targets are modelled as speckle targets.
The mean detected photocurrent is given by

\bar i(t) = \frac{\eta q G}{h\nu}\bigl(P_{radar}(t) + P_{scene}\bigr)   (38)

where q is the electron charge and G is the intrinsic gain of the photodetector. The noise mechanisms which must be considered in this case include signal, background, and dark-current shot noise, load-resistor and amplifier thermal noise, and, for the NVIR case, excess noise from the silicon APD. The resulting NEP is given by

\mathrm{NEP} = \sqrt{\frac{2 h\nu F}{\eta}\bigl(P_{dark} + P_{thermal} + P_{scene} + P_{RAD}\bigr)}   (39)

where P_{RAD} is the time-averaged value of the received radar signal power P_{radar}(t):

P_{RAD} = \frac{P_S\,\rho\, A_R\cos\theta\;\varepsilon\; e^{-2\alpha L}}{\pi L^2}.
Recall that the lock-in amplifier splits the detector photocurrent into in-phase and quadrature components (with respect to the transmitter signal) and integrates each over the pixel dwell time. That is,
I_{in} = \frac{1}{t_d}\int_{t_0}^{t_0+t_d} i(t)\cos(2\pi f_0 t)\, dt,
I_{quad} = \frac{1}{t_d}\int_{t_0}^{t_0+t_d} i(t)\sin(2\pi f_0 t)\, dt.
We can define a signal-to-noise ratio at the output of the photodetector based on the signal and noise components of the photocurrent i(t) which lie in the effective passband of the lock-in amplifier. So doing gives
\mathrm{SNR} = \frac{(m P_{RAD}/2)^2}{(\mathrm{NEP})^2/2 t_d} = \frac{\bar I_{in}^2 + \bar I_{quad}^2}{2\sigma^2}   (40)

where in the last expression on the right we have related the SNR to the output variables of the lock-in amplifier (σ² denoting the variance of the noise component of I_in or I_quad), and where we have assumed that the background radiation P_scene is completely rejected by the lock-in amplifier.
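A rough numerical walk-through of Equations (37), (39), and (40) for the DL2 near-visible channel. The Table 5 daytime entries are used where available; the nadir range and pixel dwell time are assumptions, so the resulting SNR is indicative only.

import math

H_NU = 2.34e-19                 # photon energy at 0.85 um (J), Table 5
ETA, F_EXCESS = 0.8, 3.5        # quantum efficiency, APD excess noise factor
P_S, RHO, EPS = 2.0, 0.25, 0.5  # transmitter power, reflectivity, optics efficiency
A_R = math.pi * (0.127 / 2) ** 2   # 12.7-cm receiver aperture area (m^2)
ALPHA_DB_KM = 1.0
P_SCENE, P_THERM, P_DARK = 3.63e-6, 1.20e-9, 0.046e-9   # daytime values, Table 5

L = 500.0      # assumed nadir range (m)
T_D = 2e-6     # assumed pixel dwell time (s)
M_DEPTH = 1.0

p_rad = P_S * RHO * A_R * EPS * 10 ** (-2 * ALPHA_DB_KM * (L / 1e3) / 10) / (math.pi * L ** 2)
nep = math.sqrt(2 * H_NU * F_EXCESS * (P_DARK + P_THERM + P_SCENE + p_rad) / ETA)
snr = (M_DEPTH * p_rad / 2) ** 2 / (nep ** 2 / (2 * T_D))
print(f"P_RAD ~ {p_rad * 1e9:.1f} nW, NEP ~ {nep:.2e} W/sqrt(Hz), SNR ~ {10 * math.log10(snr):.1f} dB")

Under these assumptions the daytime scene power dominates the noise budget; substituting the Table 5 night-time scene power raises the SNR considerably.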
Intensity Estimate Statistics. The intensity estimate is constructed from the lock-in amplifier outputs via
\hat I = K\sqrt{I_{in}^2 + I_{quad}^2}   (41)

where K is an arbitrary fixed proportionality constant. In the case of high SNR, with high probability \sqrt{\Delta I_{in}^2 + \Delta I_{quad}^2} \ll \sqrt{\bar I_{in}^2 + \bar I_{quad}^2}, where \Delta I_{in} = I_{in} - \bar I_{in} and similarly for \Delta I_{quad}. By linearization we find that

\hat I \approx K\sqrt{\bar I_{in}^2 + \bar I_{quad}^2}\left[1 + \frac{\bar I_{in}\,\Delta I_{in} + \bar I_{quad}\,\Delta I_{quad}}{\bar I_{in}^2 + \bar I_{quad}^2} + \frac{\Delta I_{in}^2 + \Delta I_{quad}^2}{2\bigl(\bar I_{in}^2 + \bar I_{quad}^2\bigr)}\right].   (42)

Thus the mean value of the intensity estimate is given approximately by

\langle \hat I\rangle \approx K\sqrt{\bar I_{in}^2 + \bar I_{quad}^2}\left(1 + \frac{1}{2\,\mathrm{SNR}}\right)   (43)

with high probability in the case of high SNR, so that the expected value of the intensity estimate is proportional to the received intensity in the limit SNR → ∞.
Linearization similarly allows us to show that the variance of the intensity estimate is given approximately by
\langle \Delta\hat I^2\rangle \approx \frac{\bar I^2}{2\,\mathrm{SNR}},   (44)

where \bar I = K\sqrt{\bar I_{in}^2 + \bar I_{quad}^2}, in the case of high SNR, where we have made the additional assumption that the variances of the quadrature and in-phase components of the photocurrent are approximately equal (hence that the noise is phase-insensitive).
If we assume that the photocurrent noise can be adequately modelled as Gaussian
(as holds rigorously when thermal noise dominates the shot-noise terms in the NEP), it can be shown [25] that the intensity estimate \hat I is Rician distributed with mean value equal to the true intensity value. The resulting pdf for the intensity estimate is given by

p_{\hat I}(\hat I) = \begin{cases} \dfrac{\hat I}{\sigma^2}\exp\!\left(-\dfrac{\hat I^2 + I^{*2}}{2\sigma^2}\right) I_0\!\left(\dfrac{\hat I\, I^*}{\sigma^2}\right), & \text{for } \hat I \ge 0 \\ 0, & \text{otherwise} \end{cases}   (45)

where I^* is the true value of the intensity, σ is the standard deviation of the Gaussian noise component of I_in (or I_quad), and I_0 is the modified Bessel function of the first kind, order zero:

I_0(x) = \frac{1}{2\pi}\int_0^{2\pi} e^{x\cos\theta}\, d\theta.   (46)
Relative Range Estimate. The relative range estimate is constructed from the lock-in amplifier outputs via
\hat R = \frac{c}{2 f_0}\,\frac{\arctan(I_{quad}/I_{in})}{2\pi},   (47)

where f_0 is the modulation frequency of the AMCW signal, c is the propagation speed of the laser signal, and arctan(·) ranges over the interval (−π, π].
For SNR >> 1, again with probability approaching 1 the linearized approach will be valid. Paralleling the approach used to approximate the intensity estimate, it can be shown that the mean value of the relative range estimate is given by
\langle \hat R\rangle \approx R^* = L^* \bmod z_{amb},   (48)

where R^* is the true value of the relative range, L^* is the true value of the absolute range, and z_{amb} = c/(2 f_0) is the range ambiguity resulting from the 2π ambiguity in the measured phase shift of the carrier. Assuming again that the noise is phase-insensitive, so that the variances of the in-phase and quadrature components of the photocurrent are identical, we can show after some algebra that the standard deviation of the range estimate is given by

\delta R \approx \frac{z_{amb}}{2\pi}\,\frac{1}{\sqrt{2\,\mathrm{SNR}}}.   (49)
For cases where the noise can be modelled as Gaussian, the range estimate can be shown [25] to obey the probability density function
p_{\hat R}(\hat R) = \begin{cases} \mathcal{F}(\hat R - R^*), & \text{for } R^* - c/4f_0 < \hat R \le R^* + c/4f_0 \\ 0, & \text{otherwise,} \end{cases}   (50)
where the function \mathcal{F} is defined according to

\mathcal{F}(\Delta r) = \frac{2 f_0}{c}\left[e^{-\mathrm{SNR}} + 2\sqrt{\pi\,\mathrm{SNR}}\,\cos\!\left(\frac{4\pi f_0 \Delta r}{c}\right)\Phi\!\left(\sqrt{2\,\mathrm{SNR}}\cos\!\left(\frac{4\pi f_0 \Delta r}{c}\right)\right)\exp\!\left(-\mathrm{SNR}\,\sin^2\!\frac{4\pi f_0 \Delta r}{c}\right)\right]   (51)

where SNR is the signal-to-noise ratio as given in Equation (40), and Φ(·) is defined by

\Phi(X) = \int_{-\infty}^{X} \frac{1}{\sqrt{2\pi}}\, e^{-y^2/2}\, dy.   (52)
A plot of the pdf for the phase-estimation error, \Delta\phi = (4\pi f_0/c)\,\Delta r, is shown in Figure 11 for SNR values of 1, 10, and 100.
Figure 11: Phase-estimation error pdf for direct-detection laser radars. Solid line, SNR = 100; short-dash line, SNR = 10; long-dash line, SNR = 1.
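A Monte-Carlo sketch of the phase-error statistics summarized in Figure 11, assuming Gaussian quadrature noise and the SNR convention used above, so the small-error standard deviation should approach 1/√(2 SNR) only at high SNR.

import numpy as np

rng = np.random.default_rng(0)

def phase_error_samples(snr, n=200000):
    """Simulate lock-in outputs with unit-variance Gaussian quadrature noise and
    return the phase-estimation error (true phase taken as zero)."""
    amp = np.sqrt(2.0 * snr)          # signal amplitude giving the stated SNR
    i_in = amp + rng.standard_normal(n)
    i_quad = rng.standard_normal(n)
    return np.arctan2(i_quad, i_in)

for snr in (1.0, 10.0, 100.0):
    err = phase_error_samples(snr)
    print(f"SNR = {snr:5.0f}: phase-error std ~ {err.std():.3f} rad "
          f"(1/sqrt(2*SNR) = {1 / np.sqrt(2 * snr):.3f})")

At SNR = 1 the simulated spread is noticeably wider than the small-error prediction, consistent with the heavy-tailed curves in Figure 11.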
Moreover, in the case of Gaussian noise we can specify the joint probability density function for the relative-range and intensity estimates [25]:
p_{\hat I, \hat R}(\hat I, \hat R) = \begin{cases} \dfrac{2 f_0\,\hat I}{c\,\sigma^2}\exp\!\left\{-\dfrac{1}{2\sigma^2}\left[\bigl(\hat I\sin(4\pi f_0(\hat R - R^*)/c)\bigr)^2 + \bigl(\hat I\cos(4\pi f_0(\hat R - R^*)/c) - I^*\bigr)^2\right]\right\}, & \text{for } \hat I > 0 \text{ and } -c/4f_0 < \hat R - R^* \le c/4f_0 \\ 0, & \text{otherwise,} \end{cases}   (53)

where σ is the standard deviation of the Gaussian noise component of I_in (or I_quad).
Note that all the results for range estimates assume that the true value of the relative range is not near the edge of the allowed range-reporting interval. That is, if the arctangent is taken to have range (−π, π], the true phase difference must be sufficiently far from ±π that the probability of wrap-around error can be neglected.
APPENDICES
In this appendix we give very brief explanations of the radar range equation and of radar cross section. For further details the reader is directed to the references [20].
The radar range equation gives an expression for the power scattered back to the antenna by a target of radar cross section σ_t at range L illuminated by a monostatic radar. The power density in the target interaction plane is then given by

I_{inc} = \frac{P_T\, G_T}{4\pi L^2}\,\delta_{atm}   (54)

where P_T is the transmitted power, G_T is the transmitter antenna's gain in the target direction, P_T/4\pi L^2 is the irradiance (W/m²) that would be projected to the target from an isotropic antenna in free space, and δ_atm is the round-trip loss factor due to atmospheric extinction, given by δ_atm = e^{-2αL} for range L and extinction coefficient α.
The power at the receiver antenna is then given by
P_{ant} = I_{inc}\,\sigma_t\,\frac{1}{4\pi L^2}\,\frac{G_R\,\lambda^2}{4\pi}   (55)

where σ_t is the radar cross section of the target, 1/4\pi L^2 is the propagation attenuation from target to receiver for an isotropically scattering target, and G_R λ²/4π is the receiver antenna's effective area. For a monostatic radar, G_R = G_T ≡ G, so that the received power at the antenna is given by

P_{ant} = \frac{P_T\, G^2\, \lambda^2\, \sigma_t\,\delta_{atm}}{(4\pi)^3 L^4}.   (56)
The radar cross section σ_t is defined in terms of an isotropic scatterer, which scatters incident power uniformly into 4π steradians. Specifically, the radar cross section is the cross-sectional area required of an isotropic scatterer at the specified range to produce the same scattered power density at the receive antenna as the actual target, when atmospheric effects are neglected. In other words,

\sigma_t = 4\pi L^2\,\frac{I_{echo}}{I_{inc}}   (57)

where I_{echo} is the echo power density (W/m²) at the radar receiver and I_{inc} is the incident power density at the target.
The three performance parameters of resolution, precision, and accuracy, while often used interchangeably in everyday language, have distinct meanings as applied to a radar system [26, 27]. Resolution reflects the capability of the radar to distinguish between two nearby targets. Specifically, range resolution is the smallest range difference which could be distinguished reliably between two hypothetical targets occupying a single cross-range pixel. For a pulsed radar using rectangular pulses, the range resolution of the radar is one-half the pulse duration times the speed of light. Resolution is a deterministic quantity determined by the system design and is computed in the absence of noise.
Precision is a statistical quantity representing a measure of the consistency of the radar's range reporting. That is, for a target at fixed range L, precision might be indicated as the rms deviation of the reported range from the mean value (not necessarily equal to L) of the reported range. The deviation of the mean value of the reported range from L is termed the bias of the reported range estimate. Note that an otherwise ideal radar which digitizes the reported range for recording would have a precision limited by the digitization (the numerical precision), but in a real radar the precision may be larger (i.e., worse) than the numerical precision.
Accuracy is a measure of the absolute correctness of the reported range. That is, the accuracy indicates the amount by which the reported range deviates from the true range L. Accuracy has both deterministic and random components and can be related to the precision and to the bias if common measures are used. Assuming all
parameters are indicated using rms values, for example, accuracy is equal to precision only when the reported range is an unbiased estimate of the true range (i.e., there is no systematic error), and in general
(\text{accuracy})^2 = (\text{precision})^2 + (\text{bias})^2.   (58)
[1] A. B. Gschwendtner, "Airborne high resolution multi-sensor system," in Pro-
ceedings of the CIE 1991 International Conference on Radar (Beijing, China),
359-364, 1991.
[2] J. H. Shapiro, R. W. Reinhold, and D. Park, Proceedings of the SPIE 663, 38-56
(1986).
[3] S. M. Hannon and J. H. Shapiro, Proceedings of the SPIE 1222, 2-23 (1990).
[4] T. J. Green, Jr. and J. H. Shapiro, Optical Engineering 31, 2343-2354 (1992).
[5] T. J. Green, Jr. and J. H. Shapiro, Optical Engineering 33, 865-874 (1994).
[6] M. I. Skolnik, ed., Radar Handbook. New York: McGraw-Hill, 1970.
[7] M. I. Skolnik, Introduction to Radar Systems. New York: McGraw-Hill, 2nd ed., 1980.
[8] E. Brookner, ed., Aspects of Modern Radar. Boston: Artech House, 1988.
[9] J. W. Goodman, Introduction to Fourier Optics. McGraw-Hill Physical and
Quantum Electronics Series, New York: McGraw-Hill Book Company, 1968.
[10] J. H. Shapiro, B. A. Capron, and R. C. Harney, Applied Optics 20, 3292-3313
(1981).
[11] D. M. Papurt, J. H. Shapiro, and R. C. Harney, Proceedings of the SPIE 300,
86-99 (1981).
[12] RCA Corporation, RCA Electro-Optics Handbook. Harrison, NJ: RCA Commercial Engineering, 2d ed., 1974.
[13] N. E. Zirkind and J. H. Shapiro, Proceedings of the SPIE 999 (1988).
[14] J. H. Shapiro, "Imaging and optical communication through atmospheric turbulence," in Laser Beam Propagation in the Atmosphere (J. W. Strohbehn, ed.), vol. 25 of Topics in Applied Physics, chapter 6, 171-222, Berlin: Springer-Verlag,
1978.
[15] J. H. Shapiro, Applied Optics 21, 3398-3407 (1982).
[16] R. H. Kingston, Detection of Optical and Infrared Radiation. No. 10 in Springer
Series in Optical Sciences, Berlin: Springer-Verlag, 1978.
[17] S. M. Sze, Physics of Semiconductor Devices. New York: John Wiley & Sons,
1981.
[18] A. Yariv, Optical Electronics. Holt, Rinehart and Winston, 3rd ed., 1985.
[19] N. Levanon, Radar Principles. New York: John Wiley & Sons, 1988.
[20] D. R. Wehner, High-Resolution Radar. Boston: Artech House, 2nd ed., 1995.
[21] N. C. Currie, R. D. Hayes, and R. N. Trebits, Millimeter-Wave Radar Clutter.
Boston: Artech House, 1992.
[22] R. M. Eisberg, Fundamentals of Modern Physics. New York: John Wiley & Sons,
1961.
[23] H. L. Van Trees, Detection, Estimation, and Modulation Theory Part II. New
York: John Wiley & Sons, 1971.
[24] H. L. Van Trees, Detection, Estimation, and Modulation Theory Part I. New
York: John Wiley & Sons, 1968.
[25] J. W. Goodman, Statistical Optics. New York: John Wiley & Sons, 1985.
[26] J. H. Shapiro, Proceedings of the SPIE 415, 142-146 (1983).
[27] G. J. A. Bird, Radar Precision and Resolution. New York: John Wiley & Sons,
1974.