CHAPTER 14

Signal Detection

14.1 SIGNAL DETECTION AS HYPOTHESIS TESTING

In Chapter 13 we considered hypothesis testing in the context of random variables. The detector resulting in the minimum probability of error corresponds to the MAP test as developed in Section 13.2.1, or equivalently the likelihood ratio test in Section 13.2.3. In this chapter we extend those results to a class of detection problems that are central in radar, sonar and communications, involving measurements of signals over time.

The generic signal detection problem that we consider corresponds to receiving a signal r(t) over a noisy channel; r(t) either contains a known deterministic pulse s(t) or it does not contain the pulse. Thus our two hypotheses are

$$H_1: \; r(t) = s(t) + w(t)$$
$$H_0: \; r(t) = w(t)\,, \tag{14.1}$$

where w(t) is a wide-sense stationary random process. One example of a scenario in which this problem arises is binary communication using pulse amplitude modulation. In that context the presence or absence of the pulse s(t) represents the transmission of a "one" or a "zero". As another example, radar and sonar systems are based on transmitting a pulse and detecting the presence or absence of an echo.

In our treatment in this chapter we first consider the case in which the noise is white, and carry out the formulation and analysis in discrete time, which avoids some of the subtler issues associated with continuous-time white noise. We also initially treat the case in which the noise is Gaussian. In Section 14.3.4 we extend the discussion to discrete-time Gaussian colored noise. In Section 14.3.2 we discuss the implications when the noise is not Gaussian, and in Section 14.3.3 we discuss how the results generalize to the continuous-time case.
14.2 OPTIMAL DETECTION IN WHITE GAUSSIAN NOISE

In the signal detection task outlined above, our hypothesis test is no longer based on the measurement of a single (scalar) random variable R, but instead involves a collection of L (scalar) random variables R1, R2, ..., RL. Specifically, we receive the (finite-length) DT signal r[n], n = 1, 2, ..., L, regarded as the realization of a random process; more simply, the signal r[n] is modeled as the values taken by a set of random variables R[n]. Let H0 denote the hypothesis that the random waveform is only white Gaussian noise, i.e.

$$H_0: \; R[n] = W[n] \tag{14.2}$$

where the W[n] for n = 1, 2, ..., L are independent, zero-mean, Gaussian random variables with variance σ². Similarly, let H1 denote the hypothesis that the waveform R[n] is the sum of white Gaussian noise W[n] and a known, deterministic signal s[n], i.e.

$$H_1: \; R[n] = s[n] + W[n] \tag{14.3}$$

where the W[n] are again distributed as above. Our task is to decide in favor of H0 or H1 on the basis of the measurements r[n].

©Alan V. Oppenheim and George C. Verghese, 2010

The nature and derivation of the solutions to such decision problems are similar to those in Chapter 13, except that we now use posterior probabilities conditioned on the entire collection of measurements, i.e. P(Hi | r[1], r[2], ..., r[L]) rather than P(Hi | r). Similarly, we use compound (or joint) PDFs such as f(r[1], r[2], ..., r[L] | Hi) instead of f(r | Hi). The associated decision regions Di are now regions in an L-dimensional space, rather than segments of the real line.

For detection with minimum probability of error, we again use the MAP rule, or equivalently compare the values of

$$f(r[1], r[2], \dots, r[L] \mid H_i)\, P(H_i) \tag{14.4}$$

for i = 0, 1, and decide in favor of whichever hypothesis yields the maximum value of this expression, i.e. the form of equation (13.7) for the case of multiple measurements is

$$f(r[1], \dots, r[L] \mid H_1)\, P(H_1) \;\underset{H_0}{\overset{H_1}{\gtrless}}\; f(r[1], \dots, r[L] \mid H_0)\, P(H_0) \tag{14.5}$$

which can also easily be put into the form of equation (13.18) corresponding to the likelihood ratio test. With W[n] white and Gaussian, the conditional densities in (14.5) are easy to evaluate, and take the form

$$f(r[1], \dots, r[L] \mid H_0) = \prod_{n=1}^{L} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{-\frac{r^2[n]}{2\sigma^2}\right\} = \frac{1}{(2\pi\sigma^2)^{L/2}} \exp\left\{-\sum_{n=1}^{L}\frac{r^2[n]}{2\sigma^2}\right\} \tag{14.6}$$

and

$$f(r[1], \dots, r[L] \mid H_1) = \prod_{n=1}^{L} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{-\frac{(r[n]-s[n])^2}{2\sigma^2}\right\} = \frac{1}{(2\pi\sigma^2)^{L/2}} \exp\left\{-\sum_{n=1}^{L}\frac{(r[n]-s[n])^2}{2\sigma^2}\right\} \tag{14.7}$$

The inequality in equation (14.5) (or any inequality in general) will of course still hold if a nonlinear, strictly increasing function is applied to both sides. Because of the form of equations (14.6) and (14.7), it is particularly convenient to apply the natural logarithm to both sides of the inequality (14.5). The resulting inequality, in the case of (14.6) and (14.7), is

$$g = \sum_{n=1}^{L} r[n]s[n] \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \sigma^2 \ln\frac{P(H_0)}{P(H_1)} + \frac{1}{2}\sum_{n=1}^{L} s^2[n] \tag{14.8}$$

The sum on the left-hand side of (14.8) is referred to as the deterministic correlation between r[n] and s[n], which we denote by g. The second sum on the right-hand side is the energy in the deterministic signal s[n], which we denote by E. For convenience we denote the threshold represented by the entire right-hand side of (14.8) by γ, i.e. equation (14.8) becomes

$$g \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \gamma \tag{14.9a}$$

where

$$\gamma = \sigma^2 \ln\frac{P(H_0)}{P(H_1)} + \frac{E}{2}\,. \tag{14.9b}$$

If the Neyman-Pearson formulation is used, then the optimal decision rule is still of the form of equation (14.8), except that the right-hand side of the inequality is determined by the specified bound on P_FA.

If hypothesis H0 is true, i.e. if the signal s[n] is absent, then r[n] on the left-hand side of equation (14.8) will be Gaussian white noise only, i.e. g will be the random variable

$$G = \sum_{n=1}^{L} W[n]s[n]\,. \tag{14.10}$$

Since W[n] at each value of n is Gaussian with zero mean and variance σ², and since a weighted linear combination of Gaussian random variables is also Gaussian, the random variable G is Gaussian with mean zero and variance σ² Σ_{n=1}^{L} s²[n] = σ²E.

When the signal is actually present, i.e. when H1 holds, g is the realization of a Gaussian random variable with mean E, and still with variance σ²E, or standard deviation σ√E. The optimal test in (14.8) is therefore described by Figure 14.1, which is of course similar to that in Figure 13.5.

FIGURE 14.1 Optimal test for two hypotheses with equal variances and different means.

Using the facts summarized in this figure, and given a detection threshold γ on the correlation (e.g. with γ picked equal to the right side of (14.8), or in some other way), we can compute P_FA, P_D, P_e, and other probabilities of interest.

Figure 14.1 makes evident that the performance of the detection strategy is determined entirely by the ratio E/(σ√E), or equivalently by the signal-to-noise ratio E/σ², i.e. the ratio of the signal energy E to the noise variance σ².

14.2.1 Matched Filtering

Since the correlation sum in (14.8) constitutes a linear operation on the measured signal, we can compute the sum through the use of an LTI filter whose output is sampled at an appropriate time to form the correlation sum g. Specifically, with h[n] as the impulse response and r[n] as the input, the output will be the convolution sum

$$g[n] = \sum_{k=-\infty}^{\infty} r[k]\, h[n-k]\,. \tag{14.11}$$

For r[n] = 0 except for 1 ≤ n ≤ L, and with h[n] chosen as s[−n], the filter output at n = 0 is Σ_{k=1}^{L} r[k]s[k] = g, as required.
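The correlation detector of (14.8)–(14.9) and its matched-filter implementation via (14.11) can be sketched numerically as follows. The pulse values, noise level, and priors below are illustrative assumptions, not values from the text; with 0-indexed arrays, the sample of the full convolution corresponding to time n = 0 is entry L − 1.

```python
import numpy as np

rng = np.random.default_rng(0)

s = np.array([1.0, 2.0, 2.0, 1.0])   # known pulse s[n], n = 1..L (illustrative)
sigma = 0.5                          # noise standard deviation (illustrative)
P0, P1 = 0.5, 0.5                    # priors P(H0), P(H1) (illustrative)
L = len(s)

E = np.sum(s**2)                              # pulse energy E
gamma = sigma**2 * np.log(P0 / P1) + E / 2    # threshold gamma, Eq. (14.9b)

def correlate(r):
    """Deterministic correlation g = sum_n r[n]s[n], Eq. (14.8)."""
    return float(np.dot(r, s))

def matched_filter_output(r):
    """Same g, computed by filtering with h[n] = s[-n] and sampling at n = 0.

    With r[n] supported on n = 1..L and h[n] on n = -L..-1, the full
    convolution starts at time index 1 - L, so time n = 0 is array entry L - 1.
    """
    y = np.convolve(r, s[::-1])
    return float(y[L - 1])

def detect(r):
    """MAP decision, Eq. (14.9a): 'H1' if g > gamma, else 'H0'."""
    return "H1" if correlate(r) > gamma else "H0"

r_h1 = s + sigma * rng.standard_normal(L)     # one draw under H1
r_h0 = sigma * rng.standard_normal(L)         # one draw under H0
print(detect(r_h1), detect(r_h0))
```

Since g is Gaussian with mean 0 or E and variance σ²E under the two hypotheses, performance here depends only on E/σ², as the analysis that follows makes precise.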
In other words, we choose the filter impulse response to be a time-reversed version of the target signal for n = 1, 2, ..., L, with h[n] = 0 elsewhere. This filter is said to be the matched filter for the target signal. The structure of the optimum detector for a finite-length signal in white Gaussian noise is therefore as shown in Figure 14.2.

FIGURE 14.2 Optimum detector: the received signal is passed through the matched filter h[k] = s[−k], sampled at time zero to form g = Σ r[k]s[k], and compared with the threshold γ to decide between 'H1' and 'H0'.

14.2.2 Signal Classification

We can easily extend the previous two-hypothesis problem to the multiple-hypothesis case, where Hi, i = 0, 1, ..., M−1, denotes the hypothesis that the signal R[n], n = 1, 2, ..., L, is a noise-corrupted version of the ith deterministic signal si[n], selected from a possible set of M deterministic signals:

$$H_i: \; R[n] = s_i[n] + W[n] \tag{14.12}$$

with the W[n] denoting independent, zero-mean, Gaussian random variables with variance σ². This scenario arises, for example, in radar signature analysis. Different aircraft reflect a radar pulse differently, typically with a distinct signature that can be used to identify not only the presence of an aircraft but also its type. In this case, each of the signals si[n], and correspondingly each hypothesis Hi, would correspond to the presence of a particular type of aircraft. Thus our task is to decide in favor of one of the hypotheses, given a set of measurements r[n] of R[n]. For minimum error probability, the required test involves comparison of the quantities

$$\sum_{n=1}^{L} r[n]s_i[n] - \frac{E_i}{2} + \sigma^2 \ln P(H_i) \tag{14.13}$$

where Ei denotes the energy of the ith signal. The largest of the expressions in (14.13), for i = 0, 1, ..., M−1, determines which hypothesis is selected.
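The comparison in (14.13) can be sketched as follows; the candidate signals, noise level, and priors are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.4                                  # noise standard deviation (illustrative)

# M = 3 candidate deterministic signals s_i[n] (illustrative)
S = np.array([[1.0,  1.0, 1.0,  1.0],
              [1.0, -1.0, 1.0, -1.0],
              [2.0,  0.0, 0.0,  0.0]])
priors = np.array([0.5, 0.3, 0.2])           # P(H_i) (illustrative)

def classify(r):
    """Return the i maximizing sum_n r[n]s_i[n] - E_i/2 + sigma^2 ln P(H_i), Eq. (14.13)."""
    E = np.sum(S**2, axis=1)                 # signal energies E_i
    scores = S @ r - E / 2 + sigma**2 * np.log(priors)
    return int(np.argmax(scores))

true_i = 1
r = S[true_i] + sigma * rng.standard_normal(S.shape[1])   # noisy version of s_1[n]
print(classify(r))
```

When the energies and priors are all equal, the last two terms are common to every score and the rule reduces to picking the largest correlation, which is the simplification noted next in the text.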
If the signals have equal energies and equal prior probabilities, then the above comparison reduces to deciding in favor of the signal with the highest deterministic correlation

$$\sum_{n=1}^{L} r[n]s_i[n]\,. \tag{14.14}$$

14.3 A GENERAL DETECTOR STRUCTURE

The matched filter developed in Section 14.2 extends to the case where we have an infinite number of measurements rather than just L measurements. As we will see in Section 14.3.4, it also extends to the case of colored noise. We shall, for simplicity, treat these extensions by assuming the general detector structure shown in Figure 14.3, and determine an optimum choice of processor and of detection threshold for each scenario.

FIGURE 14.3 A general detector structure: the received random process r[n] is applied to a processor whose output g[n] is sampled at n = 0 to form a random variable, which is compared with a threshold to produce the decision 'H1' or 'H0'.

We are assuming that the transmitter and receiver are synchronized, so that we test g[n] at a known (fixed) time, which we choose here as n = 0. The choice of 0 as the sampling instant is for convenience; any other instant may be picked, with a corresponding time shift in the operation of the processor. Although the processor could in general be nonlinear, we shall assume that the processing will be done with an LTI filter. Thus the system to be considered is shown in Figure 14.4; a corresponding system can be considered for continuous time.

FIGURE 14.4 Detector structure of Figure 14.3 with the processor as an LTI system with impulse response h[n].

It can be shown formally, but is also intuitively reasonable, that scaling h[n] by a constant gain will not affect the overall performance of the detector if the threshold is correspondingly adjusted, since a constant overall gain scales the signal and noise identically. For convenience, we normalize the gain of the LTI system so as to have

$$\sum_{n=-\infty}^{+\infty} h^2[n] = 1\,. \tag{14.15}$$

If r[n] is a Gaussian random process, then so is g[n], because it is obtained by linear processing of r[n], and therefore G is a Gaussian random variable in this case.

14.3.1 Pulse Detection in White Noise

To suggest the approach, we consider a very simple choice of LTI processor, namely h[n] = δ[n], so

$$H_1: \; G = g[0] = s[0] + w[0]$$
$$H_0: \; G = g[0] = w[0]\,. \tag{14.16}$$

Also for convenience we assume that s[0] is positive. Thus, under each hypothesis, g[0] is Gaussian:

$$H_1: \; f_{G|H}(g \mid H_1) = N(s[0], \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[-\frac{(g - s[0])^2}{2\sigma^2}\right]$$
$$H_0: \; f_{G|H}(g \mid H_0) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[-\frac{g^2}{2\sigma^2}\right]\,. \tag{14.17}$$

FIGURE 14.5 PDFs for the two hypotheses in Eq. (14.16).

This is just the binary hypothesis testing problem on the random variable G treated in Section 13.2, and correspondingly the MAP rule for detection with minimum probability of error is given by

$$P(H_1 \mid G = g) \;\underset{H_0}{\overset{H_1}{\gtrless}}\; P(H_0 \mid G = g)\,,$$

or, equivalently, the likelihood ratio test:

$$\frac{f_{G|H}(g \mid H_1)}{f_{G|H}(g \mid H_0)} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \frac{P(H_0)}{P(H_1)} = \eta\,. \tag{14.18}$$

Evaluating equation (14.18) using equation (14.17) leads to the relationship

$$\exp\left\{-\frac{(g - s[0])^2}{2\sigma^2} + \frac{g^2}{2\sigma^2}\right\} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \frac{P(H_0)}{P(H_1)} \tag{14.19}$$

and equivalently

$$\exp\left[\frac{g\,s[0]}{\sigma^2} - \frac{s^2[0]}{2\sigma^2}\right] \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \frac{P(H_0)}{P(H_1)} \tag{14.20}$$

or, taking the natural logarithm of both sides of the likelihood ratio test as we did in Section 14.2, equation (14.20) is replaced by

$$g \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \frac{s[0]}{2} + \frac{\sigma^2}{s[0]} \ln\frac{P(H_0)}{P(H_1)}\,. \tag{14.21}$$

We may not know the a priori probabilities P(H0) and P(H1), or for other reasons may want to modify the threshold, while still using a threshold test on the likelihood ratio, i.e. a threshold test of the form

$$g \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \lambda\,. \tag{14.22}$$

Sweeping the threshold λ over all possible values leads to the receiver operating characteristic, as discussed in Section 13.2.5.

We next consider the more general case in which h[n] is not the identity system. Then, under the two hypotheses we have

$$H_1: \; g[n] = s[n] * h[n] + w[n] * h[n]$$
$$H_0: \; g[n] = w[n] * h[n]\,. \tag{14.23}$$

The term w[n] ∗ h[n] still represents noise, but it is no longer white, i.e. its spectral shape is changed by the filter h[n]. Denoting w[n] ∗ h[n] by v[n], the autocorrelation function of v[n] is

$$R_{vv}[m] = R_{ww}[m] * R_{hh}[m] \tag{14.24}$$

and in particular the mean of v[n] is zero and its variance is

$$\operatorname{var}\{v[n]\} = \sigma^2 \sum_{n=-\infty}^{\infty} h^2[n]\,. \tag{14.25}$$

Because of the normalization in equation (14.15), the variance of v[n] is the same as that of the white noise, i.e. var{v[n]} = σ². Furthermore, since w[n] is Gaussian, so is v[n]. Consequently the value g[0] is again a Gaussian random variable with variance σ². The mean of g[0] under the two hypotheses is now

$$H_1: \; E\{g[0]\} = \sum_{n=-\infty}^{\infty} h[n]s[-n] \;\triangleq\; \mu$$
$$H_0: \; E\{g[0]\} = 0\,. \tag{14.26}$$

Therefore equation (14.17) is replaced by

$$H_1: \; f_{G|H}(g \mid H_1) = N(\mu, \sigma^2)$$
$$H_0: \; f_{G|H}(g \mid H_0) = N(0, \sigma^2)\,. \tag{14.27}$$

The probability density functions representing the two hypotheses are shown in Figure 14.6, on which we have also indicated the threshold λ, above which we would declare H1 to be true and below which we would declare H0 to be true. Also indicated by the shaded areas are the areas under the PDFs that correspond to P_FA and P_D.

FIGURE 14.6 Indication of the areas representing P_FA and P_D.

The value of P_FA is fixed by the shape of f_{G|H}(g[0] | H0) and the value of the threshold λ. Since f_{G|H}(g[0] | H0) is not dependent on h[n], the choice of h[n] will not affect P_FA.
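With G Gaussian under both hypotheses (variance σ², mean 0 under H0 and μ under H1), the two shaded areas in Figure 14.6 are Gaussian tail integrals. A sketch using the standard normal tail function Q(x); the numerical values of μ, σ, and the threshold are illustrative assumptions:

```python
import math

def Q(x):
    """Standard normal tail probability Q(x) = P(Z > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

mu = 3.0           # E{g[0] | H1} from Eq. (14.26) (illustrative value)
sigma = 1.0        # with the normalization (14.15), g[0] has variance sigma^2
threshold = 1.5    # detection threshold (illustrative)

P_FA = Q(threshold / sigma)          # area of f(g[0]|H0) above the threshold
P_D = Q((threshold - mu) / sigma)    # area of f(g[0]|H1) above the threshold

# Raising mu (which the matched filter maximizes) increases P_D while
# leaving P_FA unchanged.
print(P_FA, P_D)
```

Sweeping the threshold and plotting P_D against P_FA traces out the receiver operating characteristic of Section 13.2.5.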
The variance of f_{G|H}(g[0] | H1) is also not influenced by the choice of h[n], but its mean μ is. In particular, as we see from Figure 14.6, the value of P_D is given by

$$P_D = \int_{\lambda}^{\infty} f_{G|H}(g[0] \mid H_1)\, dg \tag{14.28}$$

which increases as μ increases. Consequently, to minimize P(error), or alternatively to maximize P_D for a given P_FA, we want to maximize the value of μ. To determine the choice of h[n] that maximizes μ, we use the Schwarz inequality:

$$\left| \sum_n h[n]s[-n] \right|^2 \;\le\; \sum_n h^2[n] \, \sum_n s^2[-n] \tag{14.29}$$

with equality if and only if h[n] = c s[−n] for some constant c. Since we have normalized the energy in h[n], the optimum filter is h[n] = (1/√E) s[−n], which is again the matched filter. (This is as expected, since the optimum detector for a known finite-length pulse in white Gaussian noise has already been shown in Section 14.2.1 to have the form we assumed here, with the impulse response of the LTI filter being matched to the signal.) The filter output g[n] due to the pulse is then (1/√E) R_ss[n], and the output due to the noise is the colored noise v[n] with variance σ². Since g[0] is a random variable with mean (1/√E) Σ_{n=−∞}^{∞} s²[n] = √E and variance σ², only the energy in the pulse, and not its specific shape, affects the performance of the detector.

14.3.2 Maximizing SNR

If w[n] is white but not Gaussian, then g[0] is not Gaussian. However, g[0] is still distributed the same under each hypothesis, except that its mean under H0 is 0 while the mean under H1 is μ, as given in equation (14.26). The matched filter in this case still maximizes the output signal-to-noise ratio (SNR) in the specified structure (namely, LTI filtering followed by sampling), where the SNR is defined as E{g[0] | H1}²/σ². The square root of the SNR is the relative separation between the means of the two distributions, measured in standard deviations. In some intuitive sense, therefore, maximizing the SNR tries to separate the two distributions as well as possible. However, this does not in general necessarily correspond to minimizing the probability of error.

14.3.3 Continuous-Time Matched Filters

All of the matched filter results developed in this section carry over in a direct way to continuous time. In Figure 14.7 we show the continuous-time counterpart to Figure 14.4.

FIGURE 14.7 Continuous-time matched filtering.

As before, we normalize the gain of h(t) so that

$$\int_{-\infty}^{\infty} h^2(t)\, dt = 1\,. \tag{14.30}$$

With r(t) a Gaussian random process, g(t) is also Gaussian, and G is a Gaussian random variable. Under the two hypotheses the PDF of G is then given by

$$H_1: \; f_{G|H}(g \mid H_1) = N(\mu, \sigma_G^2)$$
$$H_0: \; f_{G|H}(g \mid H_0) = N(0, \sigma_G^2)\,, \tag{14.31}$$

where

$$\sigma_G^2 = N_0 \int_{-\infty}^{\infty} h^2(t)\, dt = N_0 \tag{14.32}$$

and

$$\mu = \int_{-\infty}^{\infty} h(t)s(-t)\, dt\,. \tag{14.33}$$

Consequently, as in the discrete-time case, the probability of error is minimized by choosing h(t) to separate the two PDFs in equation (14.31) as much as possible. With the continuous-time version of the Cauchy-Schwarz inequality applied to equation (14.33), we then conclude that the optimum choice for h(t) is proportional to s(−t), i.e. again the matched filter.

EXAMPLE 14.1 PAM with Matched Filter

Figure 14.8(a) shows an example of a typical noise-free binary PAM signal as represented by Eq. (13.1). The pulse p(t) is a rectangular pulse of length 50 sec. The binary sequence a[n] over the time interval shown is indicated above the waveform. In the absence of noise, the optimal threshold detector of the form of Figure 14.4 would simply test, at integer multiples of T, whether the received signal is positive or zero. Clearly the probability of error in this noise-free case would be zero.

FIGURE 14.8 Binary detection with on/off signaling: (a) transmitted signal; (b) received signal with wideband Gaussian noise added; (c) matched filter output.

In Figure 14.8(b) we show the same PAM signal but with wideband Gaussian noise added. If h(t) is the identity system and the threshold of the detector is chosen according to Eq. (14.18) with P(H0) = P(H1), i.e. using the likelihood ratio test but without the matched filter, the decoded binary sequence is 0100111111011, which has 6 bit errors. Figure 14.8(c) shows the output of the matched filter, i.e. with h(t) = s(−t). The detector threshold is again chosen based on the likelihood ratio test. The resulting decoded binary sequence is 1010011111000, which has 2 bit errors.

In Figure 14.9 we show the corresponding results when antipodal rather than on/off signaling is used. Figure 14.9(a) depicts the transmitted waveform with the same binary sequence as was used in Figure 14.8, and Figure 14.9(b) the received signal including additive noise. If h(t) = δ(t) and P(H0) = P(H1), then the choice of threshold for the likelihood ratio test is zero. The decoded binary sequence is 0001001011001, resulting in 4 bit errors. With h(t) chosen as the matched filter, the signal before the threshold detector is that shown in Figure 14.9(c).

FIGURE 14.9 Binary detection with antipodal signaling: (a) transmitted signal; (b) received signal; (c) matched filter output.
The resulting decoded binary sequence is 1010011011001, with no bit errors.

In Table 14.1 we summarize the results for this specific example, based on a simulation with a binary sequence of length 10⁴.

                        On/Off Signaling    Antipodal Signaling
No matched filter       0.4808              0.4620
With matched filter     0.3752              0.2457

TABLE 14.1 Bit error rate for a PAM signal, illustrating the effect of the matched filter for two different signaling schemes.

14.3.4 Pulse Detection in Colored Noise

In Sections 14.2 and 14.3 the optimal detector was developed under the assumption that the noise is white. When the noise is colored, i.e. when its spectral density is not flat, the results are easily modified. We again assume a detector of the form of Figure 14.4. The two hypotheses are now

$$H_1: \; r[n] = s[n] + v[n]$$
$$H_0: \; r[n] = v[n]\,, \tag{14.34}$$

where v[n] is again a zero-mean Gaussian process but, in general, not white. The autocorrelation function of v[n] is denoted by R_vv[m] and the power spectral density by S_vv(e^{jΩ}). The basic approach is to transform the problem to the one dealt with in the previous section by first processing r[n] with a whitening filter, as was discussed in Section 10.2.3; this is always possible as long as S_vv(e^{jΩ}) is strictly positive, i.e. it is not zero at any value of Ω. This first stage of filtering is depicted in Figure 14.10.

FIGURE 14.10 First stage of filtering: the whitening filter h_w[n] maps r[n] to r_w[n].

The impulse response h_w[n] is chosen so that its output due to the input noise v[n] is white with variance σ², and of course it will also be Gaussian. With this pre-processing, the signal r_w[n] now has the form assumed in Section 14.3.1, with the white noise w[n] corresponding to v[n] ∗ h_w[n] and the pulse s[n] replaced by p[n] = s[n] ∗ h_w[n].
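As a numerical check of this whitening step, take the first-order colored noise R_vv[m] = (1/2)^|m| (the same model used in Example 14.2 below) and the first-difference whitener h_w[n] = δ[n] − (1/2)δ[n−1]: convolving R_vv with the deterministic autocorrelation of h_w, the same relation as Eq. (14.24) applied to the whitening filter, should leave an impulse at lag zero.

```python
import numpy as np

a = 0.5                               # colored-noise parameter: Rvv[m] = a^|m|
lags = np.arange(-20, 21)
Rvv = a ** np.abs(lags)               # truncated autocorrelation of v[n]

hw = np.array([1.0, -a])              # whitening filter h_w[n] = delta[n] - a*delta[n-1]
Rhh = np.convolve(hw, hw[::-1])       # deterministic autocorrelation of h_w: [-a, 1+a^2, -a]

# Output-noise autocorrelation: Rww[m] = (Rvv * Rhh)[m], cf. Eq. (14.24)
Rww = np.convolve(Rvv, Rhh)
center = len(Rww) // 2                # index corresponding to lag m = 0

# At interior lags: Rww[0] = 1 - a^2 = 3/4 and Rww[m] = 0 for m != 0,
# so the filtered noise is white with variance sigma^2 = 3/4.
print(Rww[center], Rww[center + 1], Rww[center - 3])
```

Only the lags near the truncation boundary deviate from the ideal impulse, since the infinite-length autocorrelation has been cut off at |m| = 20.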
The detector structure now takes the form shown in Figure 14.11, where h[n] is again the matched filter, but in this case matched to the pulse p[n], i.e. h_m[n] is proportional to p[−n].

FIGURE 14.11 Detector structure with colored noise: the whitening filter h_w[n] is followed by the LTI filter h[n]; the output g[n] is sampled at n = 0 and g[0] is compared with the threshold λ.

Assuming that h_w[n] is invertible (i.e. its Z-transform has no zeros on the unit circle), there is no loss of generality in having first applied a whitening filter. To see this concretely, denote the combined LTI filter from r[n] to g[n] by h_c[n], and suppose that if whitening had not first been applied, the optimum choice for the filter from r[n] to g[n] would be h_opt[n]. We have

$$h_c[n] = h_w[n] * h_m[n] \tag{14.35}$$

where h_m[n] denotes the matched filter after whitening. If the performance with h_opt[n] were better than with h_c[n], this would imply that choosing h_m[n] as h_opt[n] ∗ h_w^{inv}[n] would lead to better performance on the whitened signal. However, as seen in Section 14.3, h_m[n] = p[−n] is the optimum choice after whitening, and consequently we conclude that

$$h_m[n] = p[-n] = h_{opt}[n] * h_w^{inv}[n] \tag{14.36}$$

or equivalently

$$h_{opt}[n] = h_w[n] * p[-n]\,. \tag{14.37}$$

In the following example we illustrate the determination of the optimum detector in the case of colored noise.

EXAMPLE 14.2 Pulse Detection in Colored Noise

Consider a pulse s[n] in colored noise v[n], with

$$s[n] = \delta[n] \tag{14.38}$$

and

$$R_{vv}[m] = \left(\tfrac{1}{2}\right)^{|m|}, \;\text{ so } \sigma_v^2 = 1; \quad \text{then } \; S_{vv}(z) = \frac{3/4}{\left(1 - \tfrac{1}{2}z^{-1}\right)\left(1 - \tfrac{1}{2}z\right)}\,. \tag{14.39}$$

The noise component w[n] of the desired output of the whitening filter has autocorrelation function R_ww[m] = σ²δ[m], and consequently we require that

$$S_{vv}(z)\, H_w(z)\, H_w(1/z) = \sigma^2\,.$$

Thus

$$H_w(z)\, H_w(1/z) = \frac{\sigma^2}{S_{vv}(z)} = \frac{4}{3}\,\sigma^2 \left(1 - \tfrac{1}{2}z^{-1}\right)\left(1 - \tfrac{1}{2}z\right)\,. \tag{14.40}$$

We can of course choose σ arbitrarily (since it will only impact the overall gain).
Choosing σ² = 3/4, either

$$H_w(z) = 1 - \tfrac{1}{2}z^{-1}, \quad \text{or} \quad H_w(z) = 1 - \tfrac{1}{2}z\,. \tag{14.41}$$

Note that the second of these choices is non-causal. There are also other possible choices, since we can cascade either choice with an all-pass H_ap(z) such that H_ap(z)H_ap(1/z) = 1.

With the first choice for H_w(z) from above, we have

$$H_w(z) = 1 - \tfrac{1}{2}z^{-1}, \qquad h_w[n] = \delta[n] - \tfrac{1}{2}\delta[n-1], \qquad \sigma^2 = 3/4,$$
$$p[n] = s[n] - \tfrac{1}{2}s[n-1], \qquad h[n] = A\,p[-n] \;\text{ for any convenient choice of } A\,. \tag{14.42}$$

In our discussion in Section 14.3 of the detection of a pulse in white noise, we observed that the energy in the pulse affects the performance of the detector, but not the specific pulse shape. This was a consequence of the fact that the filter is chosen to maximize the quantity (1/√E) R_ss[0], where s[n] is the pulse to be detected. For the case of a pulse in colored noise, we correspondingly want to maximize the energy E_p in p[n], where

$$p[n] = h_w[n] * s[n]\,. \tag{14.43}$$

Expressed in the frequency domain,

$$P(e^{j\Omega}) = H_w(e^{j\Omega})\, S(e^{j\Omega}) \tag{14.44}$$

and from Parseval's relation

$$E_p = \frac{1}{2\pi} \int_{-\pi}^{\pi} |H_w(e^{j\Omega})|^2\, |S(e^{j\Omega})|^2 \, d\Omega \tag{14.45a}$$

$$\;\;= \frac{\sigma^2}{2\pi} \int_{-\pi}^{\pi} \frac{|S(e^{j\Omega})|^2}{S_{vv}(e^{j\Omega})} \, d\Omega\,. \tag{14.45b}$$

Based only on Eq. (14.45b), E_p can be maximized by placing all of the energy of the transmitted signal s[n] at the frequency at which S_vv(e^{jΩ}) is minimum. However, in many situations the transmitted signal is constrained in other ways, such as peak amplitude and/or time duration. The task then is to choose s[n] to maximize the integral in Eq. (14.45b) under these constraints. There is generally no closed-form solution to this optimization problem, but roughly speaking a good solution will distribute the signal energy so that it is more concentrated where the power S_vv(e^{jΩ}) of the colored noise is less.
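The quantities in Example 14.2 can be assembled numerically. This sketch uses the causal choice of H_w(z) from (14.42) and gain A = 1; the comments note which time indices the array entries represent.

```python
import numpy as np

# Example 14.2 with the causal whitener, Eq. (14.42):
s = np.array([1.0])                  # s[n] = delta[n]                      (time n = 0)
hw = np.array([1.0, -0.5])           # h_w[n] = delta[n] - (1/2)delta[n-1]  (n = 0, 1)

p = np.convolve(s, hw)               # p[n] = s[n] * h_w[n] = delta[n] - (1/2)delta[n-1]
Ep = float(np.sum(p**2))             # energy of the whitened pulse: 1 + 1/4 = 5/4

hm = p[::-1]                         # matched filter h_m[n] = p[-n]        (n = -1, 0)

# Overall filter from r[n] to g[n], Eq. (14.37): h_opt[n] = h_w[n] * p[-n]
h_opt = np.convolve(hw, hm)          # entries correspond to times n = -1, 0, 1
print(p, Ep, h_opt)
```

The overall filter is non-causal (it has a tap at n = −1), which is typical of matched filters; in practice a delay would be introduced and the output sampled at the correspondingly shifted instant.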
MIT OpenCourseWare
http://ocw.mit.edu

6.011 Introduction to Communication, Control, and Signal Processing
Spring 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.