2.1 (a) E 2T P0 (b) E 10T1 2T2 P0 (c) E 1 P 2 (d) E P A2 2 (e) E 3 P 8 (f) E T P0 (g) 1 4a P0 E (h) A2T 3 E 4 P0 (i) E 1 P lim T T 3 T t 3 33 T 3 3 3 1/3 2/3t dt lim T T 3 T 3 lim 1/3 T T 0 2.2 (a) 8 (b) 0 (c) 4 2 (d) 1 2.3 (a) X ( s) 5 s 1 (b) x(t ) 5t eln(5 1 X ( s) s ln(5) (c) 5 s 1 (d) 5 s 1 6 ) e t ln(5) 2.4 (a) 2 X (s) (b) 2 4 4 2 4s 4s s3 s 2 s s3 2 x(t ) t 2 e 3t u (t ) t 2 4t 4 e 3t u (t ) 2 X ( s) (c) x(t ) t 2 u (t ) t 2 4t 4 u (t ) 2 s 3 3 4 s 3 2 4 26 20 s 4 s 2 3 s3 s 3 x(t ) cos 0t u (t ) 4 cos cos 0t sin sin 0t u (t ) 4 4 1 1 cos 0t sin 0t u (t ) 2 2 0 s 1 1 1 s 0 X (s) 2 2 2 2 2 2 2 s 0 2 s 0 2 s 0 (d) x(t ) e 3t cos 0t u (t ) 3 e 3t cos cos 0t sin sin 0t u (t ) 3 3 1 3 sin 0t u (t ) e 3t cos 0t 2 2 X ( s) 0 1 3 1 s 3 30 s3 2 2 2 2 2 s 3 0 2 s 3 0 2 s 32 02 2.5 (a) 1 1 j j X (s) 2 2 s2 j s2 j 1 1 x(t ) j e (2 j ) t j e (2 j ) t u (t ) 2 2 1 1 e 2t e jt je jt e jt je jt u (t ) 2 2 e 2t cos(t ) 2sin(t ) u (t ) 5e 2t cos t 63.4 u (t ) (b) X ( s) 4 4 4 s 2 s 1 s 12 x(t ) 4e 2t 4e t 4te t u (t ) 4 e2t t 1 e t u (t ) (c) X ( s) 4 4 4 2 s 2 s 2 s 1 x(t ) 4e2t 4te2t 4e t u (t ) 4 e t t 1 e2t u (t ) 2.6 (a) (b) 3 2 1 s 3 s 2 s 1 x(t ) 3e 3t 2e 2t e t u (t ) X ( s) X (s) 229 54 216 108 135 13 2 3 2 s 3 s 3 s 3 s 2 s 2 s 1 x(t ) 229 135t 26t 2 e 3t 216 108t e 2t 13e t u (t ) (c) X (s) 32 31 30 21 20 1 2 3 2 s 3 s 3 s 3 s 2 s 2 s 1 x(t ) 32 31t 30t 2 e 3t 21 20t e 2t e t u (t ) 2.7 (a) y(t ) a1 y(t ) a2 y (t ) b1 x(t ) b0 x(t ) initial conditions (b) The poles are at s a1 a12 4a0 (i) For real and distinct poles, we require a12 4a0 . In this case, the poles are s a1 a12 4a0 . (ii) For real and repeated poles, we require a12 4a0 . In this case, the poles are s a1 , s a1 . (iii) For complex conjugate poles, we require a12 4a0 . In this case, the poles are s a1 j 4a0 a12 (c) (i) n a0 a1 2 a0 In general, the poles are at s n n 2 1 (ii) For real and distinct poles, we require 1 . In this case, the poles are s n n 2 1 (iii) For real and repeated poles, we require 1 . In this case, the poles are s n , s n . (iv) For complex conjugate poles, we require 0 1 . In this case, the poles are s n jn 1 2 (v) This form is preferred because the nature of the poles is determined by a single parameter. (d) (i) b0 H (s) h(t ) (ii) H ( s) b0 s 2 2n s n2 2n 1 n e 2 1 n e 2 1 b0 ent 2n b0 n 1 2 s n 2 1 b0 2n b0 2 2 1 t 2 1 t e e 2n 2 1 s n 2 1 u (t ) n 2 1 t u (t ) n 2 1 t e n t sinh n 2 1 t u (t ) b0 s n 2 h(t ) b0te nt u (t ) (iii) b0 H (s) h(t ) (e) b0 j 2n 1 2 s n j 1 2 b0 j 2n e n t e jn 2 1 b0 e n t n 1 2 j 2n 1 2 s n j 1 2 1 2 t e n t e jn 1 2 t u (t ) sin n 1 2 t u (t ) An overdamped system does not display any oscillations in its impulse response, whereas the underdamped system does display oscillations. Hence, the term damped refers to oscillations: if oscillations are present, the oscillations have not been damped; if there are no oscillations, the oscillations have been damped. 
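The three impulse-response forms derived in 2.7(d) can be checked numerically. The MATLAB sketch below (not part of the original solution) simply evaluates the closed-form expressions for an overdamped, a critically damped, and an underdamped case; the values $\omega_n = 1$, $b_0 = 1$, and $\zeta \in \{2, 1, 0.2\}$ are arbitrary choices made only for illustration.

% Sketch: impulse responses of H(s) = b0/(s^2 + 2*zeta*wn*s + wn^2)
% for the three pole configurations of Exercise 2.7(d).
% wn, b0, and the zeta values are illustrative choices, not from the text.
wn = 1; b0 = 1;
t  = 0:0.01:20;
zetas = [2 1 0.2];               % overdamped, critically damped, underdamped
h = zeros(length(zetas), length(t));
for k = 1:length(zetas)
    z = zetas(k);
    if z > 1                     % real, distinct poles: damped sinh
        wd = wn*sqrt(z^2 - 1);
        h(k,:) = (b0/wd) * exp(-z*wn*t) .* sinh(wd*t);
    elseif z == 1                % real, repeated poles
        h(k,:) = b0 * t .* exp(-wn*t);
    else                         % complex-conjugate poles: damped sine
        wd = wn*sqrt(1 - z^2);
        h(k,:) = (b0/wd) * exp(-z*wn*t) .* sin(wd*t);
    end
end
plot(t, h); grid on;
xlabel('t'); ylabel('h(t)');
legend('\zeta = 2', '\zeta = 1', '\zeta = 0.2');

Only the underdamped case shows oscillations, which is the point made in 2.7(e).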
2.8 (a) $Y(s) = \dfrac{6}{s+6}\cdot\dfrac{5}{s} = \dfrac{5}{s} - \dfrac{5}{s+6}$, so $y(t) = 5\left(1 - e^{-6t}\right)u(t)$.
(b) [Plot: the input $x(t)$ and the output $y(t)$ versus $t$ for $0 \le t \le 1$.]
(c) $V(s) = \dfrac{6s}{s+6}\cdot\dfrac{5}{s} = \dfrac{30}{s+6}$, so $v(t) = 30e^{-6t}u(t)$.
(d) [Plot: $v(t)$ versus $t$ for $0 \le t \le 1$.]
2.9 (a) $Y(s) = \dfrac{6}{s+6}\cdot\dfrac{5}{s^2} = \dfrac{5}{s^2} - \dfrac{5/6}{s} + \dfrac{5/6}{s+6}$, so $y(t) = \left[5t - \dfrac{5}{6}\left(1 - e^{-6t}\right)\right]u(t)$.
(b) [Plot: the input $x(t)$ and the output $y(t)$ versus $t$ for $0 \le t \le 1$.]
(c) $V(s) = \dfrac{6s}{s+6}\cdot\dfrac{5}{s^2} = \dfrac{30}{s(s+6)} = \dfrac{5}{s} - \dfrac{5}{s+6}$, so $v(t) = 5\left(1 - e^{-6t}\right)u(t)$.
(d) [Plot: $v(t)$ versus $t$ for $0 \le t \le 1$.]
2.10 (a) $Y(s) = \dfrac{6}{s^2+5s+6}\cdot\dfrac{5}{s} = \dfrac{5}{s} - \dfrac{15}{s+2} + \dfrac{10}{s+3}$, so $y(t) = 5\left(1 - 3e^{-2t} + 2e^{-3t}\right)u(t)$.
(b) [Plot: the input $x(t)$ and the output $y(t)$ versus $t$ for $0 \le t \le 5$.]
(c) $V(s) = \dfrac{6s}{s^2+5s+6}\cdot\dfrac{5}{s} = \dfrac{30}{s+2} - \dfrac{30}{s+3}$, so $v(t) = 30\left(e^{-2t} - e^{-3t}\right)u(t)$.
(d) [Plot: $v(t)$ versus $t$ for $0 \le t \le 5$.]
2.11 (a) $Y(s) = \dfrac{6}{s^2+5s+6}\cdot\dfrac{5}{s^2} = \dfrac{5}{s^2} - \dfrac{25/6}{s} + \dfrac{15/2}{s+2} - \dfrac{10/3}{s+3}$, so $y(t) = \left[5t - \dfrac{25}{6} + \dfrac{15}{2}e^{-2t} - \dfrac{10}{3}e^{-3t}\right]u(t)$.
(b) [Plot: the input $x(t)$ and the output $y(t)$ versus $t$ for $0 \le t \le 5$.]
(c) $V(s) = \dfrac{6s}{s^2+5s+6}\cdot\dfrac{5}{s^2} = \dfrac{5}{s} - \dfrac{15}{s+2} + \dfrac{10}{s+3}$, so $v(t) = 5\left(1 - 3e^{-2t} + 2e^{-3t}\right)u(t)$.
(d) [Plot: $v(t)$ versus $t$ for $0 \le t \le 5$.]
2.12 (a) $Y(s) = \dfrac{125}{s^2+10s+125}\cdot\dfrac{5}{s} = \dfrac{5}{s} + \dfrac{\frac{-10+j5}{4}}{s+5-j10} + \dfrac{\frac{-10-j5}{4}}{s+5+j10}$, so
$y(t) = \left[5 + \dfrac{-10+j5}{4}e^{(-5+j10)t} + \dfrac{-10-j5}{4}e^{(-5-j10)t}\right]u(t) = 5\left[1 - e^{-5t}\left(\cos 10t + \tfrac{1}{2}\sin 10t\right)\right]u(t) = 5\left[1 - \tfrac{\sqrt{5}}{2}e^{-5t}\cos\left(10t - 26.6^\circ\right)\right]u(t)$.
(b) [Plot: the input $x(t)$ and the output $y(t)$ versus $t$ for $0 \le t \le 2$.]
(c) $V(s) = \dfrac{125s}{s^2+10s+125}\cdot\dfrac{5}{s} = \dfrac{125/(j4)}{s+5-j10} - \dfrac{125/(j4)}{s+5+j10}$, so $v(t) = \dfrac{125}{j4}e^{-5t}\left(e^{j10t} - e^{-j10t}\right)u(t) = \dfrac{125}{2}e^{-5t}\sin(10t)\,u(t)$.
(d) [Plot: $v(t)$ versus $t$ for $0 \le t \le 2$.]
2.13 (a) $Y(s) = \dfrac{125}{s^2+10s+125}\cdot\dfrac{5}{s^2} = \dfrac{5}{s^2} - \dfrac{2/5}{s} + \dfrac{\frac{4+j3}{20}}{s+5-j10} + \dfrac{\frac{4-j3}{20}}{s+5+j10}$, so
$y(t) = \left[5t - \dfrac{2}{5} + e^{-5t}\left(\dfrac{2}{5}\cos 10t - \dfrac{3}{10}\sin 10t\right)\right]u(t) = \left[5t - \dfrac{2}{5} + \dfrac{1}{2}e^{-5t}\cos\left(10t + 36.9^\circ\right)\right]u(t)$.
(b) [Plot: the input $x(t)$ and the output $y(t)$ versus $t$ for $0 \le t \le 2$.]
(c) $V(s) = \dfrac{125s}{s^2+10s+125}\cdot\dfrac{5}{s^2} = \dfrac{5}{s} + \dfrac{\frac{-10+j5}{4}}{s+5-j10} + \dfrac{\frac{-10-j5}{4}}{s+5+j10}$, so $v(t) = 5\left[1 - e^{-5t}\left(\cos 10t + \tfrac{1}{2}\sin 10t\right)\right]u(t) = 5\left[1 - \tfrac{\sqrt{5}}{2}e^{-5t}\cos\left(10t - 26.6^\circ\right)\right]u(t)$.
(d) [Plot: $v(t)$ versus $t$ for $0 \le t \le 2$.]
2.14 The transfer function is $H(s) = \dfrac{Y(s)}{X(s)} = \dfrac{k}{s+k}$. This system has one pole at $s = -k$. The system is stable if the pole is in the left-half plane. The pole is in the left-half plane when $k > 0$.
2.15 The system transfer function is $H(s) = \dfrac{Y(s)}{X(s)} = \dfrac{1}{s^2 + as + 1}$. The poles are $s = \dfrac{-a \pm \sqrt{a^2 - 4}}{2}$.
(a) The system is stable when the poles are in the left-half plane. The poles are in the left-half plane for $a > 0$.
(b) The system impulse response exhibits oscillations when the poles have a non-zero imaginary part. The poles have a non-zero imaginary part when $a^2 - 4 < 0$, which implies $-2 < a < 2$. (Note that the system response is oscillatory and stable only for $0 < a < 2$.) These results are summarized by the plot below. This plot, known as a "root-locus" plot, plots the location of the poles as a function of a.
[Plot: trajectories of the poles $p_1$ and $p_2$ in the s-plane as $a$ is swept; for $-2 < a < 2$ the poles are complex conjugates lying on a circle of radius 1, and for $|a| > 2$ they lie on the real axis, moving toward $0$ and $\mp\infty$ as $a \to \pm\infty$.]
2.16 The system transfer function is $H(s) = \dfrac{Y(s)}{X(s)} = \dfrac{a}{s^2 + as + a}$. The poles are $s = \dfrac{-a \pm \sqrt{a^2 - 4a}}{2}$.
(a) The system is stable when the poles are in the left-half plane. The poles are in the left-half plane for $a > 0$.
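The pole trajectories described in 2.15 and 2.16 can be traced numerically in the same spirit as the MATLAB script used in the solution to 2.77. The sketch below (an illustration, not part of the original solution) sweeps $a$ and plots the poles of $s^2 + as + a$; the sweep range is an arbitrary choice.

% Sketch: locus of the poles of H(s) = a/(s^2 + a*s + a) as a is swept.
% The sweep range is an illustrative choice.
a  = linspace(-1, 8, 2000);
d  = sqrt(a.^2 - 4*a);           % complex for 0 < a < 4
p1 = (-a + d)/2;
p2 = (-a - d)/2;
plot(real(p1), imag(p1), 'b.', real(p2), imag(p2), 'r.'); grid on;
xlabel('Real(s)'); ylabel('Imag(s)');
legend('p_1', 'p_2');
% For 0 < a < 4 the poles are complex conjugates (oscillatory response);
% they sit in the left-half plane (stable) whenever a > 0.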
(b) The system impulse response exhibits oscillations when the poles have a non-zero imaginary part. The poles have a non-zero imaginary part when a 4 0 and a 0 , which implies 0 a 4 . (Note that the system response is oscillatory and stable for these values of a.) These results are summarized by the plot below. This plot, known as a “root-locus” plot, plots the location of the poles as a function of a. p2 1 0.8 0.8 0.6 0.6 0.4 0.4 0.2 0.2 0 a -> infinity a=0 a -> -infinity -0.2 Imag(s) Imag(s) p1 1 a -> -infinity 0 -0.4 -0.6 -0.6 -0.8 -0.8 -5 0 Real(s) 5 10 a -> infinity -0.2 -0.4 -1 -10 a=0 -1 -10 -5 0 Real(s) 5 10 2.17 x t A cos 0t (a) X j (b) Xf A j 0t A j 0t A j j0t A j j0t e e e e e e 2 2 2 2 A j A e 2 0 e j 2 0 2 2 j A e 0 A e j 0 A j A e f f 0 e j f f 0 2 2 2.18 Xf 0 e at j 2 ft e dt e at e j 2 ft dt 0 1 j 2 f a 2a 2 a 4 2 f 2 X 0 e at jt e 1 j 2 f a dt e at e jt dt 0 1 j a 2a 2 a 2 1 j a 2.19 The easiest way to compute the Fourier transform is to recognize that x t , plotted below T T can be produced by convolving the function plotted below with itself. 1 T T 2 T 2 Fourier transform T sin fT fT Thus, the Fourier transform may be computed as follows: Xf T sin fT fT T sin fT fT T sin 2 fT 2 f 2T 2 Similarly, T T sin sin 2 T 2 T X j T T T 2 2 T sin 2 2 2 T 2 2.20 The easiest way to compute the Fourier transform is to recognize that x t is produced by the convolution of the two functions shown below. A T2 T1 T2 T1 2 T2 T1 2 T2 T1 2 Fourier transform T2 T1 T2 T1 2 Fourier transform sin f T2 T1 T2 T1 f T2 T1 sin f T2 T1 f T2 T1 Multiplying the two Fourier transforms produces X f A T2 T1 sin f T2 T1 sin f T2 T1 f T2 T1 f T2 T1 Similarly T T T T sin 2 1 sin 2 1 2 2 X j A T2 T1 T T T T 2 1 2 1 2 2 2.21 (a) X f 1 e j 2 f (b) Let e j . Then X f 1 e j 2 f X f X f X f 1 e j 2 f e j 2 f 2 2 1 2 cos 2 f 2 (c) Xf 2 1 1 2 1 2 f 0 2.22 j 1 j (a) Y j 3e j 5 (b) Y j e j 3 5 (c) (d) j 5 1 j 5 The symmetry properties of X j show that x t is real. Hence, Y j j 1 j Y j 2 1 j 2.23 (a) j x t j 5e j 2 1000t 3e 6 e j 2 60t 16 3e j 6 e j 2 60t j 5e j 2 1000t j 2 60t 6 j 2 60 t 6 16 3 e e j 5 e j 2 1000t e j 2 1000t 16 6 cos 2 60t 10sin j 2 1000t 6 (b) 1 1 1 Yes, x t is periodic. The period is LCM , . 
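The energy calculations in 2.26 and 2.27(a) can be sanity-checked with Parseval's theorem: for $x(t) = e^{-at}u(t)$ the time-domain energy is $1/(2a)$, and integrating $|X(f)|^2 = 1/(a^2 + (2\pi f)^2)$ over frequency must give the same value. The MATLAB sketch below (an illustration only) performs the check by trapezoidal integration; the value $a = 2$ and the integration limits are arbitrary choices.

% Sketch: numerical check of Parseval's theorem for x(t) = exp(-a*t)*u(t).
% a and the frequency grid are illustrative choices.
a  = 2;
Et = 1/(2*a);                            % closed-form time-domain energy
f  = linspace(-500, 500, 200000);        % wide enough that the tails are negligible
X2 = 1 ./ (a^2 + (2*pi*f).^2);           % |X(f)|^2 for this x(t)
Ef = trapz(f, X2);                       % frequency-domain energy
fprintf('time-domain energy = %.6f\n', Et);
fprintf('freq-domain energy = %.6f\n', Ef);

The two printed values agree to within the truncation error of the finite integration limits.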
60 1000 10 2.24 Using the identity 1 e 2t u (t ) and the property e j 2 ft0 X f x t t0 for the last 2 j 2 f term, we have 1 j 2 2 t 1 j 2 2 t e e e 2( t 2)u t 2 2 2 cos 2 2t e 2( t 2)u t 2 x t 2.25 (a) Xf B3 (b) Xf B3 (c) 1 a j 2 f 2 a 2 2 f 2 a 2 2a a 2 2 f a 2 X f e f B3 1 Xf 2 2a a 2 2 B3 2 1 2 2 a 2 1 2 ln(2) 2 X f e 2 f 2 2 2 e 2 B3 1 2 1 a 2 2 B3 2 1 2a 2 2.26 Using the transform pair 2T set T sin 2 fT 2 fT 1 T t T , 0 otherwise 1 and apply Parseval’s theorem: 2 sin 2 f f 2 df 1 2 1 1 2 2 dt 1 2.27 (a) E Xf x 2 (t )dt e 2 at dt 0 1 a j 2 f 2 0 0 2 at 2 at e dt e dt x 2 (t )dt Xf 2a a 2 2 f B90 2 E x 2 (t )dt Xf e Xf e t 2 2 df Now, using e 0.45 u2 2 e B90 2 f 2 2 2 2 B90 2 tan 1 a a 2 du df 1 4 1 df 4 e u2 2 e 2 f df e 2 f df 2 du e u2 2 4 B90 1 4 1 du 2 2 0 2 1 and Q z 2 2 0 2 B90 0 0 2 f a 2 2 B90 2 2 0 2 dt 1 0 2 f 2 a 4 B90 X f e 2 f B90 e 4a 2 f 2 0.9 2 e 2 f df 1 a 2 2 0.9 4a 2 2 a 0 a 2 2 f (c) 2 a 0.9 tan 2 a E a 2 2 f 2 B90 tan 1 a a 0.9 df 2 2 2 2a 0 a 2 f (b) 1 Xf B90 B90 1 2a e u2 2 e 2 f 2 df B90 du the two intergrals are z 2 1 2 2 2 1 2 4 B90 e u2 2 du 1 Q 2 4 B90 Putting this together produces, we have that B90 is determined from 0.45 2 0.5 Q 4 B90 2.28 (a) A2 A3 cos 2 0t cos3 0t 2 3 2 3 2 A A A A3 A cos t cos 2 t cos 30t 0 0 4 4 4 12 y t A cos 0t (b) Yf 6 A 576 −3000 (c) A3 4 f 1000 f 1000 2 A 2 A f 4 A2 A3 f 2000 f 2000 f 3000 f 3000 8 24 A4 64 −2000 A3 A 4 4 2 A3 A 4 4 4 A 16 −1000 0 1000 2 A4 64 2000 A6 576 3000 A6 A6 A4 A4 1 9 A 4 A6 576 576 64 64 THD 100 100 2 2 9 16 A2 8 A4 A6 A3 A3 A A 4 4 4 4 A plot of the THD is shown below. Note that the THD increases as the amplitude of the sinusoid increases. 2 10 0 THD (%) 10 -2 10 -4 10 0 1 2 A 3 4 2.29 (a) A cos 0t A cos 1t 4 17 A2 A2 A2 A2 A2 x2 t cos 20t cos 21t cos 1 0 t cos 1 0 t 32 32 32 4 4 A3 A3 99 A3 27 A3 x3 t cos 0t cos 1t cos 30t cos 31t 256 32 256 4 3 A3 3 A3 cos 1 20 t cos 1 20 t 64 64 3 A3 3 A3 cos 21 0 t cos 21 0 t 16 16 17 A2 A 99 A3 3 A3 A2 A2 cos t A cos t cos 2 t cos 21t y t 0 1 0 64 4 768 32 64 4 x t A3 A3 A2 A2 cos 30t cos 31t cos 1 0 t cos 1 0 t 768 12 8 8 A3 A3 cos 1 20 t cos 1 20 t 64 64 A3 A3 cos 21 0 t cos 21 0 t 16 16 (b) Yf A 99 A3 17 A2 f f 60 f 60 64 8 1536 A 3 A3 f 7000 f 7000 2 64 A2 A2 f 120 f 120 f 14000 f 14000 128 8 A3 A3 f 180 f 180 f 21000 f 21000 1538 24 A2 A2 f 6740 f 6740 f 7060 f 7060 16 16 A3 A3 f 6880 f 6880 f 7120 f 7120 128 128 A3 A3 f 13940 f 13940 f 14060 f 14060 32 32 2 A3 24 2 21000 0 60 60 14000 14060 13940 2 7000 7060 6880 120 6740 7120 180 2 2 120 180 2 7000 6880 7060 6740 7120 2 2 13940 2 2 2 2 2 2 A2 A3 A3 A2 A3 A2 2 2 2 128 1538 128 16 32 8 2 2 A 99 A2 A 3 A2 8 1536 2 64 2 A plot of the ID is shown below. Note that the ID is a function of A. 
0 10 -2 10 -4 10 -6 10 0 1 2 3 A 4 5 2 21000 14060 A2 A3 A3 A2 A2 A3 A3 A2 A3 2 2 2 2 2 2 2 2 2 128 1538 128 16 16 128 32 8 32 ID 2 2 2 2 A 99 A A 3A 2 2 8 1536 2 64 2 A3 24 14000 ID (c) 2 A 99 A3 A 99 A3 2 2 8 1536 8 1536 A 3 A 2 2 17 A 2 64 2 2 2 A3 A2 2 64 A2 2 A3 2 A2 A2 2 2 2 2 2 A2 A3 2 8 A3 2 A3 8 A3 A 128 128 128 128 2 2 16 3 3 A A 16 32 32 32 32 2 3 2 A3 1538 1538 A 128 128 2 f 2.30 x(t ) ce k ck 1 T0 T0 2 0 k j 2 k t T0 A k sin 2 k 2 k j t A 2 j 2 T0 0 Ae dt e k 2 A k 2 j k k Xf k k k ck f T0 5 T0 Xf A A A 3 A 5 A 7 7 T0 k , odd k 0 A 2 k , even c f T Xf k 0 3 T0 A 3 1 T0 0 1 T0 3 T0 A 5 5 T0 A 7 7 T0 f 2.31 x(t ) ce k 1 ck T0 Xf k T0 2 Ae 2 k t T0 j 2 k t T0 0 T0 1 dt T0 T0 2 k 0 sin 2 k k t j j k 2 T0 2 0 Ae dt A 1 e k 2A k 2 j k k 0 k , even k , odd k 0 c f T k Xf j k k k ck f T0 Xf 7 T0 5 T0 2A 2A 3 2A 5 2A 7 2A 3 T0 2A 3 1 T0 0 1 T0 3 T0 2A 5 5 T0 2A 7 7 T0 f 2.32 x(t ) ce k ck j k T1 1 T0 Ae 2 k t T0 2 k t j T0 0 Xf j k T1 e T0 k 0 c f T k Xf k T1 sin T 2 T0 dt A 1 k T1 T0 2 T0 k k k ck f T0 A T1 T0 Xf T A 1 T0 fT1 sin 2 fT1 2 8 T0 7 T0 6 T0 5 T0 4 T0 3 T0 2 T0 1 T0 0 1 T0 2 T0 3 T0 4 T0 5 T0 6 T0 7 T0 8 T0 f 2.33 x(t ) ce k ck 1 T0 Xf T0 2 0 k 2 k t T0 A 2 2 k 2 k T 2 A j T0 t 1 0 2 A j T0 t A k dt 2 2 1 1 0 te dt 2 A t e T0 T0 T0 T0 k 2A 2 2 2 k k k 2A 49 2 7 T0 k , even k , odd k 0 k ck f T0 A 2 k 0 c f T k Xf j 2A 25 2 5 T0 Xf 2A 2A 2 2 2A 9 2 3 T0 2A 9 2 1 T0 0 1 T0 3 T0 2A 25 2 5 T0 2A 49 2 7 T0 f 2.34 x(t ) ce k k 1 ck T0 T0 4 T0 4 j 2 k t T0 4A te T0 j 2 k t T0 1 dt T0 3T0 4 T0 4 4A j 2 A t e T0 2 k t T0 dt j 4A k sin 2 2 k 2 0 k 0 0 k , even k 4A j 2 2 sin k , odd k 2 Xf k 0 c f T k Xf k k k ck f T0 Xf 4A 49 2 7 T0 4A 25 2 5 T0 4A 4A 2 2 4A 9 2 3 T0 4A 9 2 1 T0 0 1 T0 3 T0 4A 25 2 5 T0 4A 49 2 7 T0 f 2.35 x(t ) ce k k 1 ck T0 Xf 2 k t T0 T0 A 0 T0 te j 2 k t T0 A 2 dt j A 2 k k 0 k 0 k 0 c f T k Xf j k k k ck f T0 A 2 A 2 A 14 7 T0 A 12 6 T0 5 T0 4 T0 3 T0 A 2 A 4 A 6 A 8 A 10 Xf 2 T0 A 4 1 T0 0 1 T0 2 T0 A 6 3 T0 A 8 4 T0 A 10 5 T0 A 12 6 T0 A 14 7 T0 f 2.36 x(t ) ce k k ck 1 T0 j 2 k t T0 T0 2 t e 0 A sin T 0 A j A 4 0 2A k2 1 Xf k 1 k , odd, k 1 k , even k 0 c f T k Xf A k k 2 1 1 1 k 1 dt A j k 1 4 k 0 2 k j t T0 k k k ck f T0 2A A 4 Xf A 4 2A 3 2A 35 6 T0 2A 3 2A 15 4 T0 2A 15 2 T0 1 T0 0 1 T0 2 T0 4 T0 2A 35 6 T0 f 2.37 x(t ) ce k k 2 ck T0 Xf 4 k t T0 T0 2 0 A sin T0 j t e 4 k t T0 dt 2A 4k 2 1 2k 0 cf T k Xf j k k 2k ck f T0 2A Xf 2A 3 2A 35 6 T0 2A 3 2A 15 4 T0 2A 15 2 T0 0 2 T0 4 T0 2A 35 6 T0 f 2.38 p t t nT is periodic with period T. Hence it can be represented by a Fourier series n p t ce k 1 ck T T 2 k j 2 k t T where the Fourier series coefficients are t nT e j T n 2 1 j Thus, p t e T k 2 k t T 1 dt T 2 T k t e T 2 j 2 k t T dt 1 T 2 k t T Now using the transform pair e P j T 2 2 k . T j 2 k t T 2 k 2 we have T 2.39 The key to solving this problem is to understand what H f does to cos 2 f1t . The Fourier 1 1 f f1 f f1 . When this is the input, the Fourier 2 2 j j transform of the output of H f is f f1 f f1 whose inverse Fourier 2 2 j j transform is e j 2 f1t e j 2 f1t sin 2 f1t . 
Armed with this relationship, we may now 2 2 compute y t : transform of cos 2 f1t is y t cos 2 f 0t cos 2 f1t cos 2 f 2t sin 2 f 0t sin 2 f1t sin 2 f 2t 1 1 1 1 cos 2 f 0 f1 t cos 2 f 0 f1 t cos 2 f 0 f 2 t cos 2 f 0 f 2 t 2 2 2 2 1 1 1 1 cos 2 f 0 f1 t cos 2 f 0 f1 t cos 2 f 0 f 2 t cos 2 f 0 f 2 t 2 2 2 2 cos 2 f 0 f1 t cos 2 f 0 f 2 t Yf 1 2 1 2 1 2 1 2 1162 1161 −1161 −1162 f (kHz) 2.40 The Fourier transform of the upper input to the adder is 1165 1160 1155 −1155 −1160 −1165 f (kHz) The Fourier transform of the output of H f is j 0 −5 5 f (kHz) j Now, if z t Z f are a Fourier transform pair, then 1 1 Z f f0 Z f f0 j2 j2 Applying this relationship to the output of H f produces the Fourier transform of the lower z t sin 2 f 0t input to the adder: 1165 1160 1155 −1155 −1160 −1165 f (kHz) The result is Yf 1165 1160 −1160 −1165 f (kHz) 2.41 The key to solving this problem is to understand what H f does to cos 2 f1t . The Fourier 1 1 f f1 f f1 . When this is the input, the Fourier 2 2 j j transform of the output of H f is f f1 f f1 whose inverse Fourier 2 2 j j transform is e j 2 f1t e j 2 f1t sin 2 f1t . Armed with this relationship, we may now 2 2 compute y t : transform of cos 2 f1t is y t cos 2 f 0 t cos 2 f1t cos 2 f 2 t sin 2 f 0 t sin 2 f1t sin 2 f 2 t 1 1 1 1 cos 2 f 0 f1 t cos 2 f 0 f1 t cos 2 f 0 f 2 t cos 2 f 0 f 2 t 2 2 2 2 1 1 1 1 cos 2 f 0 f1 t cos 2 f 0 f1 t cos 2 f 0 f 2 t cos 2 f 0 f 2 t 2 2 2 2 cos 2 f 0 f1 t cos 2 f 0 f 2 t Yf 1 2 1 2 1 2 1 2 1159 1158 −1158 −1159 f (kHz) 2.42 The Fourier transform of the upper input to the adder is 1165 1160 1155 −1155 −1160 −1165 f (kHz) The Fourier transform of the output of H f is j 0 −5 5 f (kHz) j Now, if z t Z f are a Fourier transform pair, then 1 1 Z f f0 Z f f0 j2 j2 Applying this relationship to the output of H f produces the Fourier transform of the lower z t sin 2 f 0t input to the adder: 1165 1160 1155 −1155 −1160 −1165 f (kHz) The result is Yf 1160 1155 −1155 −1160 f (kHz) 2.43 Xf (a) 3 3 3 2 3 2 −1000 1000 2000 3000 1000 2000 3000 f −3000 −2000 Hf f −3000 −2000 −1000 Yf 1 1 1 1 −2000 −1000 1000 2000 f −3000 3000 Y f 2 cos 2 1000t 2 cos 2 2000t Xf (b) 2 f −3000 −2000 −1000 1000 2000 3000 1000 2000 3000 Hf f −3000 −2000 −1000 Yf 2 2 3 2 3 f −3000 −2000 −1000 1000 2000 3000 2.44 Xf (a) 1 1 1 2 1 2 −440 440 f −3000 −2000 2000 3000 Hf 2 1 f −3000 −2000 −1000 1000 2000 3000 Yf 1 1 1 1 −2000 −440 440 2000 f −3000 3000 y t 2 cos 2 440t 2 cos 2 2000t Xf (b) 1 4 1 2 1 2 1 4 f −3000 −880 −440 440 880 3000 Hf 2 1 f −3000 −2000 −1000 1000 2000 3000 Yf 1 2 1 1 1 2 f −3000 −880 −440 440 880 y t 2 cos 2 440t cos 2 880t 3000 Xf (c) 1 1 1 2 1 2 −600 600 f −2600 2600 Hf 2 1 f −3000 −2000 −1000 1000 2000 3000 Yf 1 1 1 1 −2600 −600 600 2600 f y t 2 cos 2 600t 2 cos 2 2600t (d) Xf A 2 A 8 A 8 −60 60 A 2 f −7000 7000 Hf 2 1 f −3000 −2000 −1000 1000 2000 3000 Yf A 4 A 4 −60 60 f −3000 y t A cos 2 60t 2 3000 Xf (e) 3 4 3 4 3 4 3 4 −3600 −1800 1800 3600 f Hf 2 1 f −3000 −2000 −1000 1000 2000 3000 Yf 3 4 3 4 −1800 1800 f y t 3 cos 2 1800t 2 2.45 H f xt y t cos2 70t cos2 70t 1 4 1 4 1 2 f −141 −140 −139 −1 0 1 139 140 141 Hf 4 f −1 0 1 2 f −1 0 1 Yf 1 1 f −71 −70 −69 69 70 71 2.46 H f xt G f y t cos2 3930t cos2 5930t 1 2 1 2 1 2 1 2 f −11930 −120 −70 −20 20 70 120 11930 Hf 2 2 f −100 −70 −40 40 70 100 1 1 f −100 −70 −40 1 2 1 2 40 70 100 1 2 1 2 f −4030 −4000 −3970 −3860 2 3860 G f 3970 4000 4030 2 f −4050 −4000 −3950 1 3950 Yf 4000 4050 1 f −4030 −4000 −3970 3970 4000 4030 2.47 H f xt G f y t cos2 3930t cos2 5930t 1 2 1 2 1 2 1 2 f −11930 −120 −70 
−20 20 70 120 11930 Hf 2 2 f −120 −70 −20 20 70 1 120 1 f −120 1 2 1 2 −70 −20 20 70 120 1 2 1 2 f −4050 −4000 −3950 2 −3860 3860 G f 3950 4000 4050 2 f −4050 −4000 −3950 1 3950 Yf 4000 4050 1 f −4050 −4000 −3950 3950 4000 4050 2.48 Yf (a) 5 4 71 70 69 69 0 70 71 f (MHz) 71 f (MHz) Yf (b) 5 4 71 (c) 70 69 69 0 70 r t cos 2 f 0 t I t cos 2 f1t Q t sin 2 f1t cos 2 f 0 t I t 2 y t cos 2 f1 f 0 t Q t I t 2 I t 2 sin 2 f1 f 0 t cos 2 f1 f 0 t 2 In part (a), f 0 930 so that y t In part (b), f 0 1070 so that y t I t 2 I t cos 2 f1 f 0 t Q t Q t 2 cos 2 70t cos 2 70t 2 sin 2 f1 f 0 t sin 2 f1 f 0 t Q t 2 Q t sin 2 70t sin 2 70t 2 2 The difference is the sign of the “quadrature term” (the term involving sin 2 ft ). 2.49 (a) Xf 9 0.1 (b) 0.1 f Yf 1 0.1 0.1 f 2.50 (a) Fourier transform of input to H(f) H(f) f Fourier transform of input to G(f) G(f) f sum of “house” and “half-ellipse” Hf f2 455 f1 G f f1 455 f2 0 f1 450 460 f 2 1850 f 5 5 f (b) Fourier transform of input to H(f) f Note the overlapping spectra here. This is bad because there is not way to use a filter to isolate the desired spectrum from the combination. It is not possible to design an H(f) and G(f) to produce the desired output. The signal centered at 250 Hz must be removed before the mixer preceding the filter H(f). The typical solution is to add an additional filter as follows: This filter needs to remove the spectral content that interferes with the desired signal. In tunable systems (i.e., systems that need to isolate different channels), this BPF may need to be tunable. r t image rejection BPF H f G f cos2f 0t cos2f1t f 0 705 kHz f1 455 kHz xt 2.51 (a) Fourier transform of input to H(f) H(f) f Fourier transform of input to G(f) G(f) f sum of “house” and “half-ellipse” Hf f2 455 f1 G f f1 455 f2 0 f1 450 460 f 2 1850 f 5 5 f (b) Fourier transform of input to H(f) f Note the overlapping spectra here. This is bad because there is not way to use a filter to isolate the desired spectrum from the combination. It is not possible to design an H(f) and G(f) to produce the desired output. The signal centered at 250 Hz must be removed before the mixer preceding the filter H(f). The typical solution is to add an additional filter as follows: This filter needs to remove the spectral content that interferes with the desired signal. In tunable systems (i.e., systems that need to isolate different channels), this BPF may need to be tunable. r t image rejection BPF Hf G f cos2f 0t cos2f1t f 0 705 kHz f1 455 kHz xt 2.52 (a) Xf 0 15 19 23 38 (b) l t r t LPF 1 x t BPF 1 2r t l t r t cos 4 f 0t ×2 LPF 1 LPF 2 1 −15 2l t LPF 2 BPF 2 f 53 15 2 f −15 f 15 BPF 1 1 f −53 −38 23 −23 38 53 BPF 2 1 f −21 −17 17 (c) l(t) + r(t) is available at the output of LPF 1. 21 2.53 (a) Using x t e j 2 f0t X f f 0 , we have 1 1 x t cos 2 f 0 t x t e j 2 f0t e j 2 f0t 2 2 1 1 x t e j 2 f 0 t x t e j 2 f 0 t 2 2 1 1 X f f0 X f f0 2 2 (b) Ac 2 Ac 2 f fc B (c) (d) fc fc B fc B fc fc B fc B The minimum theoretical approach would be to space the carriers 2B Hz apart. But, this would require ideal low-pass filters. Real filters require a transition band thus necessitating a larger spacing. In commercial broadcast AM, for example, the channel assignments are every 2B = 10 kHz, but broadcast stations are not assigned to adjacent channels in any given geographical area. Instead, the closest spacing is every other channel slot. 
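The frequency-translation property used in 2.53(a) — multiplying $x(t)$ by $\cos(2\pi f_c t)$ places copies of $X(f)$ at $\pm f_c$, each scaled by $1/2$ — is easy to visualize with sampled data and the FFT. The MATLAB sketch below is only an illustration; the sample rate, tone frequencies, and carrier frequency are arbitrary choices and are not taken from any of these exercises.

% Sketch: DSB-SC spectrum -- copies of X(f) appear at +/- fc, scaled by 1/2.
% fs, the tone frequencies, and fc are illustrative choices only.
fs = 8000; N = 8192;
t  = (0:N-1)/fs;
fc = 2000;
x  = cos(2*pi*100*t) + cos(2*pi*250*t);  % simple two-tone "message"
s  = x .* cos(2*pi*fc*t);                % modulated signal
f  = (-N/2:N/2-1)*(fs/N);                % frequency axis for the centered FFT
X  = fftshift(abs(fft(x)))/N;
S  = fftshift(abs(fft(s)))/N;
subplot(2,1,1); plot(f, X); grid on; ylabel('|X(f)|');
subplot(2,1,2); plot(f, S); grid on; ylabel('|S(f)|'); xlabel('f (Hz)');
% The spectral lines of s(t) sit at fc +/- 100 and fc +/- 250 with half the
% baseband amplitude, as predicted by the translation property.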
2.54 Mf (a) Am 2 Am 2 fm (b) f fm s t Am cos 2 f m t Ac cos 2 f c t Am Ac A A cos 2 f c f m t m c cos 2 f c f m t 2 2 Am Ac f f c f m f f c f m f f c f m f f c f m S f 4 S f (c) Ac Am 4 Ac Am 4 Ac Am 4 Ac Am 4 f fc f m fc fc fm fc f m fc fc fm 2.55 (a) i. R f Ac 2 Ac 2 f fc B fc fc B fc B fc fc B Xf Ac2 2 Ac2 4 2 f c B ii. 2 f c 2 f c B B Ac2 4 B 2 fc B 2 fc f 2 fc B The filter must satisfy the conditions illustrated below. Hf 2 Ac2 B (b) i. ii. B A2 A2 x t m t cos 4 f c t 2 2 The filter removes the double frequency term and scales the baseband term by Thus the filter output is y t m t (c) i. f Ac2 Ac2 x t m t cos m t cos 4 f c t 2 2 y t m t cos 2 . A2 ii. The phase offset scales the output by the constant cos . This does not cause any distortion, especially when is small. As , the result is disastrous. 2 There are only two solutions: either must be known (or estimated from the received signal) or the modulation format should be altered to allow non-coherent detection. 2.56 (a) Ac m t cos 2 f c t Ac Am cos 2 f m t cos 2 f c t envelope Ac Am cos 2 f m t (b) envelope output t (c) The envelope detector does not preserve sign information. The solution is to reformat the signal so that the envelope is never negative. 2.57 (a) Ac 2 AAc 2 S f AAc 2 Ac 2 f fc B (b) fc fc B fc B fc fc B Write the modulated signal as s t A m t Ac cos 2 f c t AAc cos 2 f c t Ac m t cos 2 f c t The first term is a replica of the unmodulated carrier. Because this replica is part of the transmitted signal, the term “transmitted carrier” is used. (c) The envelope detector output is Ac A m t . Assuming A m t 0 for all t, the output may be written as Ac A Ac m t . The term Ac A needs to be subtracted from the envelope detector output to produce an appropriately scaled version of the desired signal. (In other words, the envelope detector needs to be AC-coupled.) The condition that guarantees correct output is A min m t 0 . 2.58 (a) A Am (b) A f Ac A f (c) S f Ac Am AA f fm c m f fm 2 2 1 1 A f fc A f fc 2 2 AA AA AA c f fc c m f fc f m c m f fc f m 2 2 2 Ac A AA AA f fc c m f fc f m c m f fc f m 2 2 2 (d) S f Ac A 2 Ac Am 4 Ac Am 4 Ac A 2 Ac Am 4 Ac Am 4 f fc f m fc fc fm fc f m fc fc fm 2.59 s t Ac Am cos 2 f m t cos 2 f c t (a) Ac Am AA cos 2 f c f m t c m cos 2 f c f m t 2 2 2 2 2 2 A A A A s 2 t c m c m cos 4 f c f m t 8 8 2 2 A A A 2 A2 c m cos 4 f c t c m cos 4 f m t 4 4 2 2 2 2 A A A A c m c m cos 4 f c f m t 8 8 1 Pm Tm Tm Ac2 Am2 0 s t dt 4 2 Ptot Pm power ratio (b) Pm 1 Ptot s t Ac A cos 2 f c t Ac Am cos 2 f m t cos 2 f c t Ac A cos 2 f c t s2 t Ac Am AA cos 2 f c f m t c m cos 2 f c f m t 2 2 Ac2 A2 Ac2 A2 cos 4 f c t 2 2 A 2 A2 A 2 A2 A 2 A2 A 2 A2 c m c m cos 4 f c f m t c m c m cos 4 f c f m t 8 8 8 8 2 2 A AA A AA c m cos 2 2 f c f m t c m cos 4 f m t 2 2 2 2 A AA A AA c m cos 2 2 f c f m t c m cos 4 f m t 2 2 2 2 2 2 A A A A c m cos 4 f c t c m cos 4 f m t 4 4 Pm Ac Am 4 1 Ptot Tm Tm (from part (a)) 2 s t dt 0 Ac2 A2 Ac2 Am2 2 4 Ac2 Am2 P 1 power ratio m 2 2 4 2 2 2 Ptot Ac A A A c m 2 A 1 4 2 Am A Am power ratio 1 3 2.60 (a) 1 s(t) 0.5 0 -0.5 -1 0 0.5 1 1.5 2 t 2.5 3 3.5 4 0 0.5 1 1.5 2 t 2.5 3 3.5 4 (b) 1 s(t) 0.5 0 -0.5 -1 2.61 (a) t (t ) 2f Am cos 2 f m d 0 Am f , then (t ) sin 2 f m t and s (t ) Ac cos 2 f c t sin 2 f m t . fm Let (b) Am f sin 2 f m t fm Using the identity e jX cos( X ) j sin( X ) , we see that cos( X ) Re e jX . Thus, Ac cos 2 f c t (t ) Ac Re e j 2 f c t ( t ) A Ree c j ( t ) e j 2 fct . Substitute (t ) sin 2 f m t to obtain the desired result. 
(c) 1 ck T0 T0 s(t )e j 2 kf m t 0 1 dt T0 T0 e j sin 2 f m t j 2 kf m t e dt 0 Using the substitution x 2 f m t , the integral becomes ck 1 2 2 e j sin x kx dx J k 0 Thus s (t ) Ac J cos 2 f k k c kf m t (d) Ac J k f fc kf m f fc kf m 2 k The bandwidth is infinite. (e) The power at the carrier frequency is determined by the k 0 in the answer to part (d). The k 0 term is Ac J 0 f f c f f c 2 The condition of zero power at the carrier frequency occurs when J 0 0 . Thus the S f curious and interesting situation occurs at the values of corresponding to the zeros of J 0 · . The first five zeros are 2.4048, 5.5201, 8.6537,11.7915,14.9309 . 2.62 (a) S f S f 2 Ac 2 Ac2 4 P J f f k k J f f k 2 k S f df 2 2 c A 4 Ac2 4 c kf m f f c kf m kf m f f c kf m c k J k2 f f c kf m f f c kf m df 2 c A J 2 2 J k 2 k 2 k k (b) The table of the Bessel functions is | k=0 k=1 k=2 k=3 k=4 k=5 k=6 k=7 k=8 ------+-------------------------------------------------------------------------------B = 1 | 0.58553 0.19364 0.01320 0.00038 0.00001 0.00000 0.00000 0.00000 0.00000 ------+-------------------------------------------------------------------------------B = 2 | 0.05013 0.33261 0.12449 0.01663 0.00116 0.00005 0.00000 0.00000 0.00000 ------+-------------------------------------------------------------------------------B = 5 | 0.03154 0.10731 0.00217 0.13310 0.15306 0.06819 0.01717 0.00285 0.00034 ------+-------------------------------------------------------------------------------- The table of the power ratio R for different values of K is K R J 02 2 J k2 k 1 | K=0 K=1 K=2 K=3 K=4 K=5 K=6 K=7 K=8 ------+-------------------------------------------------------------------------------B = 1 | 0.58553 0.97282 0.99922 0.99999 1.00000 1.00000 1.00000 1.00000 1.00000 ------+-------------------------------------------------------------------------------B = 2 | 0.05013 0.71535 0.96433 0.99759 0.99990 1.00000 1.00000 1.00000 1.00000 ------+-------------------------------------------------------------------------------B = 5 | 0.03154 0.24616 0.25049 0.51670 0.82282 0.95921 0.99356 0.99926 0.99993 ------+-------------------------------------------------------------------------------- The bold numbers indicate the smallest value of K which captures 98% of the power. The values are K 98 2 for 1 K 98 3 for K 98 6 for 2 5 from which we deduce that K 98 1 . Thus we have BW 2 K 98 f m 2 1 f m 2.63 (a) d Ac cos 2 f c t (t ) Ac 2 f c (t ) sin 2 f c t (t ) dt Ac 2 f c 2fm(t ) sin 2 f c t (t ) (b) The envelope detector output is Ac 2 f c 2fm(t ) . To produce the desired output, we need f c fm(t ) for all t. (c) Ac 2 f c fAm envelope detector output Ac 2 f c Ac 2 f c fAm t Ac 2fAm y t t Ac 2fAm 2.64 (a) (b) (c) H FM ( s) sk p F ( s) s k0 k p F ( s ) V ( s) H FM ( s)( s) s( s ) v(t ) sk p F ( s) s F ( s) s k0 k p F ( s ) The filter is a differentiator. k0 1 should be avoided. d (t ) which is the desired result. 
dt s k p 1 k0 2.65 (a) 1 1 1 h( n) u ( n) 2 2 2 (b) 1 1 1 h( n) u ( n) 3 2 2 (c) 1 h( n) 2 (d) h(n) (n 1) 2 (n 2) 3 (n 3) 4 (n 4) n n n 1 1 u (n 1) (n) 1 2 n n 1 u (n 1) n2 u (n 2) n 1 u (n 1) 2.66 (a) (b) 1 h( n) 2 2 H z n 1 n 1 u (n 1) u (n 1) 2 3 j 4 z 1 1 1 1 j z 1 4 3 1 1 h( n) 3 j 4 j 4 3 1 5 j 1 j e 4 12 3 3 j 4 z 1 1 1 1 j z 1 4 3 n 1 1 1 u (n 1) 3 j 4 j 4 3 3 j 4 5e 1 5 j 1 j e 4 12 3 j 2 3 j 4 5e j 2 n 1 u (n 1) j 5e j j 5e j 3 tan 1 4 5 h(n) 10 12 (c) n 1 sin n u (n 1) 3 1 3 1 1 j z 1 j z 4 4 H z 1 1 1 1 1 j z 1 1 j z 1 3 3 4 4 3 1 1 h( n) 1 j j 4 4 3 n 1 3 1 1 u (n 1) 1 j j 4 4 3 1 5 j 1 j e 3 12 4 3 5 j 5 1 j e 2 j e j 4 4 4 1 5 j 1 j e 3 12 4 3 5 j 5 1 j e 2 j e j 4 4 4 4 tan 1 3 5 5 h( n) 2 12 n 1 sin n u (n 1) n 1 u (n 1) (d) H z 2 z 2 2 2 z 1 1 1 z 1 4 1 1 1 z 2 n 1 n 1 1 1 h n 4 n 1 2 u n 1 2 4 2.67 (a) Y ( z ) z 1Y ( z ) z 2Y ( z ) z 1 (b) z 1 Y ( z) 1 z 1 z 2 (c) Y ( z) z z z z 1 1 5 1 5 z z 2 2 2 5 5 5 5 10 10 1 5 1 5 z z 2 2 5 5 1 5 5 1 z z 10 10 1 5 1 1 5 1 z z 1 1 2 2 5 5 1 5 n 1 5 5 1 5 n 1 y ( n) u (n 1) 10 2 10 2 n n 1 1 5 1 5 u (n 1) 5 2 2 2.68 (a) 1 −3 −2 −1 (b) 5 X z z 0 n n 0 (c) 1 2 3 4 5 6 7 3 4 5 6 7 1 z 6 1 z 1 1 −3 −2 −1 0 1 2 −1 (d) G z 1 z 6 (e) G z X z z 1 X z 1 z 1 X z 1 z 1 This answer is the same as that from part (d). (f) 1 1 z 6 G z 1 z 1 1 z 1 This answer is the same as that from part (b). X z 1 z 6 1 z 6 1 1 z 2.69 (a) H z 1 1 1 z 1 3 (b) Imaginary Part 1 0.5 0 -0.5 -1 -1 (c) -0.5 0 0.5 Real Part 1 n 1 h n u n 3 y n h n x n h n n n 1 n 2 h n h n 1 h n 2 0 1 4 3 13 1 n 2 9 3 n0 n0 n 1 n2 2.70 (a) H z 1 1 1 z 1 4 (b) Imaginary Part 1 0.5 0 -0.5 -1 -1 (c) -0.5 0 0.5 Real Part 1 n 1 h n u n 4 y n h n x n h n n n 1 n 2 h n h n 1 h n 2 0 1 3 4 13 1 n 2 16 4 n0 n0 n 1 n2 2.71 (a) IIR (b) FIR (c) H z H1 z H 2 z (e) (f) 1 1 1 1 z z 2 6 7 3 z 3 1 z 3 6 IIR y ( n) 11 1 7 y (n 1) y (n 2) y (n 3) 6 x(n 1) 8 x(n 2) x(n 3) 6 6 3 2 2 1 2 2 1 z 1 1 z 1 z 3 6 3 6 H z 1 1 1 z 1 1 12 z 1 1 z 3 1 0.5 Imaginary Part (d) 6 z 1 8 z 2 0 -0.5 -1 -1 -0.5 0 Real Part 0.5 1 (g) H z (h) y ( n) h( n) x ( n) z 1 1 z 1 2 z 1 3 z 1 1 1 1 z 1 1 z 1 2 3 n 1 n 1 1 1 h n 1 2 3 u (n 1) 2 3 h(n) (n) (n 1) (n 2) h(n) h(n 1) h(n 2) 0 6 9 n4 n4 3 7 1 13 1 4 2 9 3 n0 n 1 n2 n3 12 10 y(n) 8 6 4 2 0 0 5 10 n 15 20 2.72 (a) H z 1 z 8 1 H z has 8 zeros equally spaced on a circle of radius 8 and 8 poles at the origin as shown below. Imaginary Part 1 0.5 8 0 -0.5 -1 -1 -0.5 0 0.5 Real Part 1 The ROC is the entire z-plane except for z = 0. (b) G z 1 H z 1 1 z 8 1 8 G z has 8 poles equally spaced on a circle of radius and 8 zeros at the origin as 1 8 shown below. The ROC for a causal stable system is z . Imaginary Part 1 0.5 8 0 -0.5 -1 -1 (c) n g n 0 -0.5 0 0.5 Real Part n a multiple of 8 otherwise 1 2.73 (a) (b) y n a1 y n 1 a2 y n 2 b0 x n b1 x n 1 b2 x n 2 The poles are z i. a2 a1 1 1 4 2 2 a1 a22 a a . The poles are z 1 1 1 4 22 4 2 a1 ab a a1b0 a a 2 2 b0 1 0 2 b0 1 4 22 a1 a1 2 a1 1 2 z a2 1 4 2 a1 H z b0 a a 1 1 1 1 4 22 z 1 2 a1 For real, distinct poles we require a2 h n b0 n ii. . ab a a1b0 a a 2 2 b0 1 0 2 b0 1 4 22 a1 a1 2 a1 1 2 z a2 1 4 2 a1 a a 1 1 1 1 4 22 z 1 2 a1 a1b0 ab a a a 2 2 b0 1 0 2 b0 1 4 22 a1 a1 2 a1 2 a 1 4 22 a1 a1 1 1 4 a2 2 a12 n 1 a1b0 ab a a a 2 2 b0 1 0 2 b0 1 4 22 2 a1 a1 a1 2 a 1 4 22 a1 a1 1 1 4 a2 2 a12 n 1 For real, repeated poles, we require a12 4a2 . The poles are z a1 a1 , 2 2 u n 1 u n 1 1 b0 a1 z 1 b0 a12 z 2 4 b0 2b0 H z b0 2 a1 1 1 2 z n a1 1 a1 1 z z a1 1 2 2 b0 z 2 2 2 a1 1 a1 1 1 1 z z 2 2 a a a h n b0 n 2b0 n 1 u n b0 1 n 1 1 2 2 2 iii. 
n 1 u n 1 For complex conjugate poles, we require a12 4a2 0 . Let 2 a12 4a2 , then the a poles are z 1 j . 2 2 a1b0 a12 b0 a2 b0 1 a1b0 a12 b0 a2 b0 1 j j z z 2 2 2 2 H z b0 1 a1 a 1 j z 1 1 j z 1 2 2 2 2 n 1 a1b0 a12 b0 a2 b0 a1 j j u n 1 h n b0 n 2 2 2 2 n 1 a1b0 a12 b0 a2 b0 a1 j j u n 1 2 2 2 2 (c) i. The poles are z a1 1 j a12 4a2 re j from which we obtain 2 2 r a2 tan 1 1 4 ii. a2 a12 B z 1 2r cos z 1 r 2 z 2 r iii. r cos 1 times the coefficient of z 1 . 2 The distance from the origin is the square root of the coefficient of z 2 . The location of the pole intersects the real axis at iv. H z b0 b0 jre j 2 1 z 2sin jre j 2 1 z 2sin b0 1 re j z 1 1 re j z 1 rn h n b0 n b0 sin n 1 u n 1 sin (d) The case of real, repeated poles corresponds to 0 . In this case, r a2 Also note that lim sin n 1 0 h n b0 n b0 sin n 1 . Now, the answer to part (c)-iv becomes rn sin n 1 u n 1 sin b0 n b0 n 1 r n u n 1 b0 n b0 n 1 2 r n u n 1 b0 n 2b0 r n u n b0 r n 1 r n 1u n 1 Using r a1 gives the answer from part (c)-iv. 2 a12 a1 . 4 2 2.74 (a) F ( z ) 0.5 0.5 z 1 0.5 z 1 1 z 1 z 1 Y ( z) X ( z) 1 0.5 z 1 1 0.5 z 1 1 z 1 1 z 1 1 0.5 z 1 y (n) 1 (0.5) n 1 u (n 1) 2 u(n) y(n) 1 0 -1 0 2 4 6 8 10 n (b) F ( z ) 1 Y ( z ) z 1 X ( z ) z 1 1 z 1 y (n) u (n 1) 2 u(n) y(n) 1 0 -1 0 2 4 6 n 8 10 (c) F ( z ) 1.5 1.5 z 1 1.5 z 1 1 z 1 0.5 z 1 Y ( z) X ( z) 1 0.5 z 1 1 0.5 z 1 1 z 1 1 z 1 1 0.5 z 1 y (n) 1 0.5(0.5) n 1 u (n 1) 2 u(n) y(n) 1 0 -1 0 2 4 6 8 10 n (d) Unlike the case with continuous-time systems, a first-order system can show an oscillatory transient reponse. 2.75 (a) 1 1 1 z 1 1 2 V z 2 1 1 1 z 1 1 z 1 1 1 z 1 2 2 n 11 v n u n 22 2 u(n) y(n) 1.5 1 0.5 0 -0.5 -1 0 2 4 6 8 10 n (b) V z 1 1 z 1 1 1 1 z 1 1 1 1 z 1 v n n 2 u(n) y(n) 1.5 1 0.5 0 -0.5 -1 0 2 4 6 n 8 10 (c) 3 3 1 z 1 1 2 V z 2 1 1 1 z 3 1 z 1 1 1 z 1 2 2 v n n 3 1 u n 2 2 2 u(n) y(n) 1.5 1 0.5 0 -0.5 -1 0 2 4 6 8 n (d) Even though this is a first-order system, it can still exhibit oscillations. 10 2.76 (a) 1 1 z 4 H z 1 1 1 z 1 z 2 4 8 1 (b) 1 Imaginary Part 0.5 0 -0.5 -1 -1 (c) -0.5 0 Real Part 0.5 1 7 9 7 1 7 9 7 1 6 1 j j z z z 308 308 308 308 11 Y z 1 1 z 1 1 1 7 1 7 1 1 j 1 j z z 8 8 8 8 n 1 7 9 7 1 7 j j 308 8 8 308 n 1 7 1 7 9 7 j y n (n) j u n 1 308 8 308 8 6 11 2.77 The closed loop transfer function is 1 0.25 z 1 Y ( z) X ( z ) 1 0.75 K z 1 0.25 Kz 2 z z 0.25 z 0.75 K z 0.125K 2 This system has two poles at p1 , p2 0.75 K 0.75 K 2 K 2 The goal is to find the values of K for which both poles are inside the unit circle. That is, find the values of K for which | p1 | 1 and | p2 | 1 . The following Matlab script plots the poles for 0 K 3: %% ex2_77 K = 0:0.001:3; p1 = 0.5*(K-0.75 + sqrt((0.75-K).^2-K)); p2 = 0.5*(K-0.75 - sqrt((0.75-K).^2-K)); plot(real(p1),imag(p1),'b-',real(p2),imag(p2),'r.-',... real(p1(1)),imag(p1(1)),'bo',... real(p1(end)),imag(p1(end)),'bs',... real(p2(1)),imag(p2(1)),'ro',... real(p2(end)),imag(p2(end)),'rs','LineWidth',2); legend('p_1','p_2'); grid on; %% plot unit circle for reference hold on; plot(exp(j*2*pi*[0:0.01:1]),'k:'); hold off; This script produces the following plot 1 p1 p2 0.5 0 -0.5 -1 -1 -0.5 0 0.5 1 1.5 2 From this plot, we see that p2 is always inside the unit circle and that p1 is outside the unit circle when K is too large. Using the data vector p1 produced by the script, we see that p1 is outside the unit circle for K 2.334 . 
Repeating the same script for K 0 produces the following plot 1 p1 0.5 p2 0 -0.5 -1 -3 -2.5 -2 -1.5 -1 -0.5 0 0.5 1 Form this plot, we see that p1 is always inside the unit circle and that p2 is outside the unit circle when K gets too negative. Using the data vector p2 produced by the script we see that p2 is outside the unit circle for K 0.2 Putting this all together, the system is stable for 0.2 K 2.334 2.78 (a) H z (b) 1 1 z 2 1 1 1 1 2 z z 2 4 There is one zero at z 0 and two poles at z 1 3 j 4 4 Imaginary Part 1 0.5 0 -0.5 -1 -1 (c) -0.5 0 0.5 Real Part 1 2 1 1 j 3 1 1 j 3 1 z z z 3 12 12 Y z 1 1 1 3 1 3 1 1 z 1 j 1 j z z 4 4 4 4 n 1 n 1 1 j 3 1 1 j 3 1 3 3 2 y n j j u n 1 4 4 3 12 4 12 4 n 1 1 1 2 cos n 1 u n 1 3 3 2 3 tan 3 tan 1 1 2.79 H z Y z X z Kz 1 1 K 1 z 1 The poles are at z 1 K K 2 z 2 1 K 2 2K 2 . The position of the poles in the z-plane (as a function of K) is illustrated below. pole 1 1 imag. part 0.5 K 0 K - -0.5 -1 -1.5 -1 -0.5 0 0.5 real part 1 1.5 2 pole 2 1 imag. part 0.5 0 K K - -0.5 -1 -2 -1.5 -1 -0.5 real part 0 0.5 1 Several observations are in order The poles are real and distinct for K 2 3 and K 2 3 The poles are real and repeated for K 2 3 and K 2 3 The poles are complex conjugates for 2 3 K 2 3 . The pole at z The pole at z 1 K 1 K 2 2K 2 1 K 1 K 2 The system is stable for 0 K 2 2 2K is inside the unit circle for 0 K 2 and K 4 . is inside the unit circle for K 2 . 2.80 (a) 2 z 1 H z 3 2 1 z 1 3 (b) 2 Imaginary Part 1 0 -1 -2 -2 -1 0 Real Part 1 2 (c) 2 e j H e j 3 2 1 e j 3 (d) 2 2 4 2 j 2 j e j e j e e 1 j 3 3 9 3 3 H e 1 2 2 2 2 4 1 e j 1 e j 1 e j e j 3 3 3 3 9 The system is an all pass filter! 2 2.81 (a) (b) (c) X e j e j X e j e j e j 4 1 e j 1 1 j n j x n e 8 e 4 2 2 n e j 1 j e 2 e 4 e j j 4 2 8 j 8 4 j n j e e e 8 e 4 2 2 2 X e j 1 e j 2 8 e j j 1 2 j cos cos e 4 2 8 1 1 cos e j e j 2 4 8 (d) 9 sin 2 X e j sin 2 (e) X e j 1 2 l 1 2 l (f) X e j n j j 4 e e 8 2 2 l 5 5 3 2 l 3 2 l 7 7 l 2 l j 2 l j 3 3 n 2.82 (a) 3 n sin 3 8 x n 8 3 n 8 (b) 1 sin n 2 x n 1 n 2 (c) x n 10 n n 1 2 n 2 3 n 3 2.83 The DTFT of the sequence x n is X e j j 2 0.5 Y e j j 1 2 1 2 j 1 2 Y e j Y e j 1 2 0.5 1 2 0.5 j2 j2 1 0.5 Y e j j j 1 2 (d) 0.5 j 1 2 1 0.5 j j2 1 0.5 j2 (c) j 1 2 0.5 (b) 0.5 (a) j j 2 1 0.5 (e) j 1 2 j Y e j 1 2 j2 1 1 2 0.5 Y e j j2 j 1 2 0.5 (f) j 1 0.5 0.5 2.84 (a) Y e j 1 2 0.5 0.5 0.5 (b) Y e j 1 0.5 (c) Y e j 1 2 1 2 0.5 0.5 Y e j (d) 1 0.5 0.5 Y e j (e) 1 2 0.5 0.5 Y e j (f) 1 0.5 0.5 2.85 From the input/output relation we have 12Y e j 7e jY e j e j 2 Y e j 12 X e j 5e j X e j from which we obtain 12 5e X e 12 7e e Y e j j j j j 2 From the block diagram we have H e H e X e Y e j j and we are given H1 e j j 1 j 2 1 3 . 
1 j 3 e j 1 e 3 Putting this all together gives, 3 12 5e j H1 e j H 2 e j H 2 e j j j 2 j 3e 12 7e e j 12 5e 3 8 H 2 e j j j 2 j 12 7e e 3e 4 e j n 1 h2 n 2 u n 4 2 1 1 e j 4 2.86 1 1 1 z 1 3 1 1 z 1 4 X z 1 1 z 1 2 Y z 2 3 1 1 1 z 1 z z 2 H z 1 3 4 1 1 X z 1 1 1 1 1 z 1 1 z 1 1 z 1 z 3 4 3 4 1 Y z 21 h n n 33 n 1 31 u n 1 44 n 1 u n 1 The frequency response is H e j H e j 2 1 1 e j 2 1 j 1 j 1 e 1 e 3 4 5 cos 4 97 91 1 cos cos 2 72 72 6 2.87 (a) 1 a a 1 H e j 2 a re j 1 j 1 j 1 e e 2 a a a 1 a e j ae j a 2 1 1 j e a r 1 2 cos 2 r r 1 2 1 r 2r cos 1 1 1 z a H z 1 az 1 1 (b) a 1 a H e j 2 a re j 1 1 j e a r 1 r 2 2r cos 1 a e j ae j a 1 1 j 1 j 1 1 2 1 e e 2 1 2 cos a r a r a H z 2 1 a z 1 1 1 z 1 a (c) 1 a a 1 H e j 2 | a | 2 a re j 1 j 1 j 1 e e 2 a a a 1 a e j ae j a 2 1 1 j e a r 1 r 2 2r cos 1 r 2 2r cos 1 1 1 z a H z a 1 az 1 1 (d) 1 a a H e j 2 H z a re j | a |2 a e j ae j 1 1 a e j ae j a 2 1 1 j e a r 1 r 2 2r cos 1 r 2 2r cos 1 a z 1 1 az 1 Both the systems of parts (c) and (d) are all-pass systems. However, only the system in part (d) has a z-transform in the standard form. 2.88 (a) 8 which means the frequency response 9 has a large amplitude in the vicinity of . Hence this is a high-pass filter as shown. The pole-zero plot (below) shows a pole at z 100 2 80 |H(ej |2 Imaginary Part 1 0 60 40 -1 20 -2 -2 0 Real Part 1 2 0 -0.5 0 0.5 normalized frequency (cycles/sample) 8 8 and a zero at z which means 9 9 the frequency response has a large amplitude in the vicinity of 0 and a small amplitude at . Hence this is a low-pass filter as shown. The pole-zero plot (below) shows two poles at z 4 2.5 x 10 2 2 |H(ej |2 1 Imaginary Part (b) -1 2 0 1.5 1 -1 0.5 -2 -2 -1 0 Real Part 1 2 0 -0.5 0 0.5 normalized frequency (cycles/sample) 8 The pole-zero plot (below) shows complex-conjugate poles at z j and a zero at the 9 origin, which means the frequency response has a large amplitude in the vicinity of 2 . Hence this is a band-pass filter as shown. 25 2 20 1 |H(ej |2 Imaginary Part (c) 2 0 15 10 -1 5 -2 -2 -1 0 Real Part 1 2 0 -0.5 0 0.5 normalized frequency (cycles/sample) 2.89 (a) 2 x(n) 1.5 1 0.5 0 -4 (b) -2 0 2 n 4 6 8 X e 6 4 cos 2 cos 2 4 cos 3 X e j 2 e j e j 3 j 2 20 |X(ej |2 15 10 5 0 -0.5 (c) -0.4 -0.3 -0.2 -0.1 0 0.1 0.2 frequency (cycles/sample) 2 x(n) 1.5 1 0.5 0 -4 (d) -2 0 2 n X [0] 2 1 1 4 X [1] 2 e j /2 e j 3 /2 2 X [2] 2 e j 3 j 3 0 X [3] 2 e j 3 /2 e j 9 /2 2 4 6 8 0.3 0.4 0.5 (e) X e 2 e X e 2 e X e 2e X e0 2 1 1 4 j /2 j j 3 /2 j /2 j e j 3 /2 2 3 j 3 0 j 3 / 2 e j 9 / 2 2 These numbers are the same as those obtained in part (d). (f) 2 x(n) 1.5 1 0.5 0 -10 -5 0 5 n (g) X [0] 2 1 1 4 X [1] 2 e j /4 e j 3 /4 2 j 2 X [2] 2 e j /2 e j 3 /2 2 X [3] 2 e j 3 /4 e j 9 /4 2 j 2 X [4] 2 e j 3 j 3 0 X [5] 2 e j 5 /4 e j15 /4 2 j 2 X [6] 2 e j 3 /2 e j 9 /2 2 X [7] 2 e j 7 /4 e j 21 /4 2 j 2 10 15 (h) X e 2 e X e 2e X e 2e X e 2e X e 2e X e 2e X e 2e X e j0 2 1 1 4 j /4 j /4 e j 3 /4 2 j 2 j 2 /4 j /2 e j 3 /2 2 j 3 /4 j 3 /4 j 4 /4 j j 5 /4 j 5 /4 e j15 /4 2 j 2 j 6 /4 j 3 /2 e j 9 /2 2 j 7 /4 j 7 /4 e j 21 /4 2 j 2 e j 9 /4 2 j 2 3 j 3 0 These numbers are the same as those obtained in part (g). (i) The values obtained in part (d) are a subset of those obtained in part (g). This is because the DFT length of part (g) is divisible by the length of the DFT length of part (d). 2.90 Start with the sequence x1 n x1 n 2 1 1 0 1 2 3 4 5 6 7 8 9 n 2 Draw the length-4 periodic extension of x1 n – call it x1,4 n – and delay it by 3. 
This produces the sequence x1,4 n 3 2 1 1 0 1 2 3 4 5 6 7 8 9 n 2 Note that the samples at n 0,1, 2,3 are the same as those of x2 n . Now the DTFT of x1,4 n 3 is e X 1,4 [k ] e 3 j k 2 j3 2 4 X 1 [k ] and e j 3 k 2 X 1 [k ] X 2 [k ] Thus, X 2 [k ] e j 3 k 2 X 1[k ] X 1 [k ] times the DTFT of x1 n . Thus we have 2.91 (a) decimal numbers 3-bit binary equivalent 3-bit binary equivalent index of input 0 000 000 1 001 100 2 010 010 3 011 110 4 100 001 5 101 101 6 110 011 7 111 111 (b) The 3-binary equivalent of the indexes of the inputs are reversed (bit reversed) versions of the indexes associated with the natural (temporal) order. 2.92 (a) X d e j 100 (b) 9 8 10 10 8 10 9 10 The negative frequency component of X c f shows up as the positive frequency component of X d e j , and the positive frequency component of X c f shows up as the negative frequency component of X d e j . 2.93 (a) X d e j 102 (b) 51 51 51 51 42 51 42 51 The negative and positive frequency components of X c f are preserved on the negative and positive frequency components of X d e j , but there is aliasing. 2.94 (a) X d e j 52 (b) 9 13 2 4 13 4 13 2 9 13 The negative frequency component of X c f shows up as the positive frequency component of X d e j , and the positive frequency component of X c f shows up as the negative frequency component of X d e j . 2.95 1 0 n 7 Write x n w n cos n where w n . 4 0 otherwise Then 1 W e j 2 4 4 7 1 sin 4 j 2 e 2 4 4 sin 2 sin 4 7 sin 4 7 4 j 2 4 1 4 j 2 4 1 e e 2 2 1 1 sin sin 4 4 2 2 X e j 5 4 |X(ej )| (a) 3 2 1 0 -0.5 -0.375 -0.25 -0.125 0 0.125 0.25 0.375 0.5 /2 (cycles/sample) (b) 5 |X(ej )| 4 3 2 1 0 -0.5 -0.375 -0.25 -0.125 0 0.125 0.25 0.375 0.5 /2 (cycles/sample) (c) 5 |X(ej )| 4 3 2 1 0 -0.5 -0.375 -0.25 -0.125 0 0.125 0.25 0.375 0.5 /2 (cycles/sample) 2.96 (a) 4 3.5 |X(ej k)| 3 2.5 2 1.5 1 0.5 0 -0.5 -0.375 -0.25 -0.125 0 0.125 0.25 0.375 0.5 /2 (cycles/sample) (b) 4 3.5 |Y(ej k)| 3 2.5 2 1.5 1 0.5 0 -0.5 -0.375 -0.25 -0.125 0 0.125 0.25 0.375 0.5 /2 (cycles/sample) (c) The DTFT of x n is sin 4 7 sin 4 7 j 4 2 4 1 4 j 2 4 1 X e j e e 2 2 1 1 sin sin 4 4 2 2 which has zero-crossings at l 1 for l 0 . The length-8 DFT samples X e j at k which corresponds to the zero-crossings. On the other hand, the DTFT of y n is 4 sin 4 7 sin 4 7 3 j 2 3 1 3 j 2 3 1 Y e j e e 2 2 1 1 sin sin 3 3 2 2 3l 4 for l 0 . The length-8 DFT samples X e j at which has zero-crossings at 12 k which does not correspond to any of the zero-crossings. 4 2.97 X e j 80, 000 3.3 4 3.3 4 2.98 (a) S d e j 96000 5 1 12 2 5 12 1 2 normalized frequency (cycles/sample) (b) S d e j 88200 1 2 200 441 200 441 1 2 normalized frequency (cycles/sample) (c) The DTFT in part (b) is wider than the DTFT in part (a). This is due to the fact that the sample rate used in part (b) is closer to the minimum sample rate defined in the sampling theorem than the sample rate used in part (a). 2.99 S d e j 500 400 1 4 3 2 2 10 10 10 2 10 3 10 4 10 1 2 normalized frequency (cycles/sample) 2.100 (a) S d e j 280 224 1 2 24 56 14 56 4 56 4 56 24 56 14 56 1 2 normalized frequency (cycles/sample) (b) S d e j 200 160 1 2 1 4 1 4 1 2 normalized frequency (cycles/sample) (c) There are two key differences between the spectra in parts (a) and (b).The first difference is the bandwidth: the spectrum in part (a) is the spectrum of an oversampled signal whereas the spectrum in part (b) is the spectrum of a critically sampled signal (that is, it is sampled at the minimum sample rate defined by the sampling theorem). 
The second difference is the fact that the two spectra are “flipped” relative to each other. In part (a) the positive frequency component of the continuous-time signal shows up on the positive frequency axis in the DTFT domain. In part (b) the negative frequency component of the continuous-time signals shows up on the positive frequency axis in the DTFT domain. 2.101 (a) Y f 12 10 1 f 1 Y e j 120 100 1 2 1 10 1 10 normalized frequency (cycles/sample) 1 2 (b) Z p f 120 100 20 10 f 1 1 10 1 10 1 2 20 Z e j 120 100 1 2 1 10 normalized frequency (cycles/sample) (c) Zp f 120 100 20 10 f 1 positive- and negative-frequency components of the continuous-time signal overlap when sampled 1 10 20 Z e j 120 100 1 2 1 10 1 10 1 2 normalized frequency (cycles/sample) The fundamental problem with this approach is that the positive- and negative-frequency components of the z(t) overlap when sampled. This was not a problem when using the complexvalued version of the signal. This problem illustrates one of the advantages of signal-processing using complex-valued signals: one does not have to worry about negative- and positivefrequency components aliasing on top of each other. The disadvantage is complexity: a complexvalued signal is a “two-dimensional” signal. Consequently, addition and multiplication require more resources. 2.102 The DTFT of the sampled sequence is S e j 500 400 0.8 0.6 0.4 0.2 0.2 0.4 0.6 0.8 Neglecting scaling constants, the sampled sequence may be written as (see Exericse 2.48) s nT I nT cos 0.6 n Q nT sin 0.6 n . To produce I nT and Q nT from s nT , we need 0 0.6 . As it is written the system produces 1 I nT 2 1 y nT Q nT 2 x nT Hence the system must be modified either by changing the sign on y nT or by changing the sign on sin 0 n -- that is, use sin 0 n in place of sin 0 n on the lower mixer. 2.103 Hf (a) H j 1 −5 1 f 5 10 (b) H e j H e j 1 −5T (c) f 10 1 / 2 cycles/sample 5T 10 T 10 T rads/sample Construct the diagram of the impulse sampled waveform shown below Xp f Xf 1 Xf T 1 T 20 5 0 Hf 5 1 20 T 1 Xf T 20 1 T From this diagram, we see that the minimum sampling rate is determined from relation 1 1 20 5 from which we obtain 25 . T T 2.104 (a) H c j e jT (b) In the interval we have Hd e j j T T e 0 (c) hd (n) (d) W 1 2 W e W T sin n W T W j n e d T n W T T sin n T hd (n) T n T (e) W , j T T W W otherwise T 1 T 2 1 sin n 2 hd (n) 1 n 2 3.1 (a) X 2 e j 5 4 5 2 5 2 5 4 5 (b) X 4 e j 3.75 2.5 4 5 2 5 2 5 4 5 3.2 (a) X 2 e j 5 3 4 3 5 5 3 5 4 5 (b) X 4 e j 2.5 1.5 4 5 2 5 2 5 4 5 3.3 (a) X 2 e j 5 3 4 3 5 5 (b) 3 5 4 5 X 4 e j 2.5 1.5 4 5 2 5 2 5 4 5 3.4 (a) X 2 e j 5 3 4 3 5 5 3 5 4 5 3 5 4 5 (b) X 2 e j 5 3 4 3 5 5 (c) The spectra are reversed about 4 5 3.5 (a) X 4 e j 2.5 1.5 4 5 2 5 2 5 4 5 (b) X 4 e j 2.5 1.5 (c) 4 5 2 5 The spectra are reversed about 2 5 2 5 4 5 3.6 (a) X 2 e j 5 3 3 5 4 5 (b) X 4 e j 2.5 1.5 4 5 2 5 3.7 (a) X 2 e j 5 3 4 3 5 5 (b) X 4 e j 2.5 1.5 2 5 4 5 3.8 (a) X 2 e j 5 3 3 5 (b) 4 5 X 2 e j 5 3 4 3 5 5 (c) The signal in part (a) is centered at 4 whereas the signal in part (b) is centered at 5 4 . The two are almost frequency-domain reversed versions of each other. 5 3.9 (a) X 4 e j 2.5 1.5 4 5 2 5 (b) X 4 e j 2.5 1.5 (c) 2 5 The signal in part (a) is centered at 4 5 2 whereas the signal in part (b) is centered at 5 2 . The two are almost frequency-domain reversed versions of each other. 
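The frequency reversal observed in 3.4–3.9 can be reproduced numerically: a complex exponential at $\omega_0 = 4\pi/5$, downsampled by 2, lands at $8\pi/5 - 2\pi = -2\pi/5$. The MATLAB sketch below (an illustration only; the sequence length is an arbitrary choice) estimates both spectra with the FFT.

% Sketch: a complex exponential at w0 = 4*pi/5 aliases to -2*pi/5 after
% downsampling by 2.  N is an illustrative choice.
N  = 1000;
n  = 0:N-1;
w0 = 4*pi/5;
x  = exp(1j*w0*n);
y  = x(1:2:end);                         % downsample by 2
w  = (-N/2:N/2-1)*(2*pi/N);              % frequency axis in rad/sample
Xm = fftshift(abs(fft(x, N)))/N;
Ym = fftshift(abs(fft(y, N)))/length(y);
subplot(2,1,1); plot(w/pi, Xm); grid on; ylabel('|X(e^{j\omega})|');
subplot(2,1,2); plot(w/pi, Ym); grid on; ylabel('|Y(e^{j\omega})|');
xlabel('\omega/\pi');
% The single spectral line moves from +0.8*pi to -0.4*pi, i.e., a
% positive-frequency component lands on the negative-frequency axis.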
5 3.10 (a) H e j 10 X e j 2 5 5 5 2 5 2 2 X 2 e j 5 4 5 4 5 (b) H e j 10 X e j 10 3 2 5 5 5 3 2 5 3 X 3 e j 10 3 10 9 3 5 3 5 (c) H e j 10 7.5 X e j 2 5 5 5 4 2 5 4 X 4 e j 2.5 1.875 4 5 4 5 (d) H e j 10 X e j 2 5 5 5 2 5 X 5 e j 2 3.11 (a) X 2 e j 8 4 5 4 5 (b) X 4 e j 4 2 5 2 5 3.12 (a) X 2 e j 5 3 4 5 4 5 (b) X 4 e j 2.5 1.5 2 5 2 5 3.13 (a) X 2 e j 8 4 5 4 5 (b) X 2 e j 5 3 (c) 4 5 4 5 The spectrum in part (a) exhibits aliasing due to overlap by the positive and negative frequency components of the signal. The spectrum in part (b) does not show this aliasing because it does not have a negative-frequency component. 3.14 (a) X 4 e j 4 2 5 2 5 (b) X 4 e j 2.5 1.5 (c) 2 5 2 5 The spectrum in part (a) exhibits aliasing due to overlap by the positive and negative frequency components of the signal. The spectrum in part (b) does not show this aliasing because it does not have a negative-frequency component. 3.15 (a) X 4 e j 2.5 1.5 (b) 2 5 2 5 The spectrum of the signal after multiplication by the complex exponential is 8 6 10 10 After downsampling by 4, the spectrum is X 4 e j 2.5 1.5 (c) 2 5 2 5 The answers to (a) and (b) are identical. This means these two systems perform equivalent operations. In general, resampling a bandpass signal produces frequency translations for free. 3.16 The system illustrated in the block diagram translates the spectrum of x(n) directly to baseband prior to downsampling. The DTFT of y m is Y e j 10 N 6 N N 10 N 10 Clearly, N 10 for distortion-free resampling. The interpretation of (3.9) given in Section 3.2.2 starts with making N shifted copies of the 2 spectrum, each centered at k for k 0,1,, N 1 . For the downsample-by-N operation alone N to produce the desired result, we must have 2 2 k 2 N 5 for some combination of k and N. This condition reduces to k 1 1. N 5 The two distortion-free possibilities are k 4, N 5 and k 8, N 10 . Downsampling x n by 5 and 10 produces to two spectra below: X 5 e j 2 6 5 2 2 X 10 e j 1 3 5 3.17 (a) Label the points in the system as follows: x n y n s n H e j e j z m N n 2 The DTFT of x n is X e j 8 6 10 10 From this, it is seen that the filter requirements are H e j 1 10 10 In this ideal case Y e j X e j so that the lowest possible sample rate for z m corresponds to the maximum downsample factor N. The maximum downsample factor is N 10 . The DTFT of the output of the downsample block is Y10 e j Z e j 1 3 5 (b) Label the points in the system as follows: x1 m s n 2 y1 m x2 m y2 m H1 e j H 2 e j z k N e j1m After downsampling by 2, the spectrum is X 1 e j 5 3 4 5 4 5 From this, it is seen the first filter H1 e j must be an ideal high-pass filter: H 1 e j 1 4 5 4 5 Now, the spectrum should be translated to baseband. This requires 1 . (Note that e j1n e j n 1, 1,1, 1, ). The resulting spectrum is X 2 e j 5 3 5 5 Because of the filtering performed by H1 e j , the second filter, H 2 e j , is not needed and may be removed from the block diagram. The lowest possible sample rate for z m corresponds to the maximum downsample factor N. The maximum downsample factor at this point is N 5 . The DTFT of the output of the downsample block is Z e j 1 3 5 (c) Label the points in the system as follows: x1 m s n 4 y1 m x2 m y2 m H1 e j H 2 e j e j1m After downsampling by 4, the spectrum is N z k X 1 e j 2.5 1.5 2 5 2 5 From this, it is seen the first filter H1 e j must be an ideal low-pass filter: H 1 e j 1 2 5 2 5 Note that no frequency translation is needed here. Hence the multiplier involving the complex exponential is not needed. 
Consequently the second filter H 2 e j , is not needed and may be removed from the block diagram. The lowest possible sample rate for z m corresponds to the maximum downsample factor N. The maximum downsample factor at this point is N 2.5 . (Fractional downsample factors are covered in Chapter 9. It is performed using a polyphase partition of H1 ( z ) .) The DTFT of the output of the downsample block is Z e j 1 3 5 3.18 (a) X 2 e j 4 5 5 5 4 5 (b) X 4 e j 9 10 6 5 4 10 10 10 10 10 4 10 5 10 6 10 9 10 3.19 (a) X 2 e j 4 5 5 5 4 5 (b) X 4 e j 9 10 6 10 4 10 10 10 4 10 6 10 9 10 3.20 (a) X 2 e j 7 10 3 10 3 10 7 10 (b) X 4 e j 17 20 13 20 7 20 3 20 3 20 7 20 13 20 17 20 3.21 (a) X 2 e j 4 5 5 5 4 5 (b) X 2 e j (c) 7 10 3 10 3 10 7 10 Both have two spectral copies in the first Nyquist zone. The copies are located at slightly different frequencies, as expected. But the copies closest to baseband are frequencyreversed versions. 3.22 (a) X 4 e j 9 10 6 10 4 10 10 10 4 10 6 10 9 10 (b) X 4 e j (c) 17 20 13 20 7 20 3 20 3 20 7 20 13 20 17 20 Both have four spectral copies in the first Nyquist zone. The copies are located at slightly different frequencies, as expected. But the copies closest to baseband are identical (other than the center frequency). 3.23 (a) X 2 e j 4 5 5 5 4 5 (b) X 4 e j 9 10 6 10 4 10 10 10 4 10 6 10 9 10 3.24 (a) X 2 e j 7 10 3 10 3 10 7 10 (b) X 4 e j 17 20 13 20 7 20 3 20 3 20 7 20 13 20 17 20 3.25 (a) X 2 e j 4 5 5 5 4 5 (b) X 2 e j (c) 7 10 3 10 3 10 7 10 The two are identical except for the center frequencies of the spectral copies. 3.26 (a) X 4 e j 9 10 6 10 4 10 10 10 4 10 6 10 9 10 (b) X 4 e j (c) 17 20 13 20 7 20 3 20 3 20 7 20 13 20 The two are identical except for the center frequencies of the spectral copies. 17 20 3.27 (a) The result of the upsample-by-10 operation is illustrated by the DTFT below. X 10 e j 4 5 3 5 2 5 5 25 25 5 2 5 3 5 4 5 The goal of the low-pass filter is to remove all the spectral copies except the one at baseband. The generic filter requirements are H e j 10 10 But, because we know more about the spectral properties of the signal, we can generate more specific filter requirements: This filter is defined by the following intervals on the frequency axis (these intervals are usually called frequency bands): H e j passband: 0, 25 4 transition band: , 25 25 4 stop band: , 25 4 25 25 25 4 25 The DTFT of the filter output is shown below. DTFT at H output 4 5 3 5 2 5 5 25 25 5 2 5 After frequency translation, the DTFT of the output y m is 3 5 4 5 Y e j (b) 4 5 3 5 2 5 5 25 25 5 2 5 3 5 4 5 A system such as the one illustrated can produce the desired signal. This fact follows from the observation that after upsampling, X 10 e j has a spectral copy centered at the 3 . There simply is no need to eliminate the spectral copy at 0 , 5 keep the copy at 0 , then consume resources to translate the copy at 0 up to the spot on the frequency axis where there originally was a spectral copy. desired frequency 0 The filter requirements are frequency-translated versions of the low-pass filter H e j . The generic requirements are G e j 5 10 3 5 7 10 Note that G e j is a complex-valued filter. The more specific requirements are G e j 11 25 3 5 14 25 19 25 16 25 Note that after filtering X 10 e j (shown above) with G e j , the desired spectrum results. 3.28 H e j 5 5 Y e j 2 2 3.29 H e j 5 5 Y e j 2 3 3 3 2 3 3.30 (a) The highest frequency is 2 20000 5 . Thus the DTFT of x nT is 48000 6 X e j 5 6 (b) 5 6 44100 147 times the input rate. Hence U 147 and D 160 . 
48000 160 The filter specifications are determined by the bandwidth of the signal to be resampled and the requirement to eliminate spectral energy aliased by the upsample/downsample operation. The thought process is illustrated below. The output rate is X e j spectral copy resulting from the upsample-by147 operation transition band X e j spectral copy resulting from the multiplication by the discretetime impulse train associated with the downsample-by-160 operation. transition band 5 6 147 2 147 7 6 147 5 2 6 147 160 241 35280 The transition band required by the upsample operation is 2 5 5 7 . 147 6 147 6 147 6 147 The transition band required by the downsample operation is 241 5 41 35280 6 147 35280 The transition band requirement for the downsample operation is more strict and hence is the one that must be used. From this requirement, we get 41 F . 70560 Now the passband and stopband ripples are 0.1 20 log 1 p p 100.1/20 1 0.0116 96 20 log s s 1096/20 1.5849 105 The filter length estimates based on the Kaiser and Harris formulae are Kaiser Estimate: 6407 Harris Estimate: 7510 clock rate = 7056 kHz (c) The starting point is to factor the sample rate conversion as follows 147 3·7·7 160 2·2·2·2·2·5 and perform the resampling operation in two steps. There are a number of options here. The thing to keep in mind is that the first resampling ratio must be greater than 5/6. (Otherwise the resampling filter will distort the signal.) Proceeding as outlined in part (b), except doing it twice, the following are some representative examples: U1 = 21 D1 = 20 Kaiser Estimate: 469 Harris Estimate: 550 clock rate = 1008 kHz U2 = 7 D2 = 8 Kaiser Estimate: 320 Harris Estimate: 375 clock rate = 352.8 kHz U1 = 7 D1 = 4 Kaiser Estimate: 156 Harris Estimate: 183 clock rate = 336 kHz U2 = 21 D2 = 40 Kaiser Estimate: 1602 Harris Estimate: 1877 clock rate = 1764 kHz U1 = 7 D1 = 8 Kaiser Estimate: 625 Harris Estimate: 733 clock rate = 336 kHz U2 = 21 D2 = 20 Kaiser Estimate: 1642 Harris Estimate: 1924 clock rate = 882 kHz U1 = 3 D1 = 2 Kaiser Estimate: 67 Harris Estimate: 79 clock rate = 144 kHz U2 = 49 D2 = 80 Kaiser Estimate: 3204 Harris Estimate: 3755 clock rate = 3528 kHz U1 = 49 D1 = 40 Kaiser Estimate: 1095 Harris Estimate: 1283 clock rate = 2352 kHz U2 = 3 D2 = 4 Kaiser Estimate: 160 Harris Estimate: 188 clock rate = 176.4 kHz (d) The starting point here is to factor the downsample rate: 160 2·2·2·2·2·5 In this part, the downsample operation can be performed in three steps where D1 D2 D3 160 . As an example, suppose D1 40, D2 2, and D3 2 . The first filter has a transition band as determined below. Note that as a result of the downsample-by-40 operation, the signal 100 now has a bandwidth of as shown. 441 X e j X e j transition band ↓ 40 5 2 6 147 7 147 6 147 100 441 The transition band for the second filter, and the corresponding spectrum after downsampling by D2 2 is shown below. X e j X e j transition band 100 441 341 441 ↓2 200 441 The transition band for the third filter, and the corresponding spectrum after downsampling by D3 2 is shown below. 
X e j X e j transition band 200 441 241 441 ↓2 400 441 The lengths of the three filters and their clock rates is summarized as follows: U = 147 D1 = 40 D2 = 2 D3 = 2 Kaiser Estimate for filter 1: 3284 Harris Estimate for filter 1: 3849 clock rate for filter 1 = 7056 kHz Kaiser Estimate for filter 2: 14 Harris Estimate for filter 2: 16 clock rate for filter 2 = 176.4 kHz Kaiser Estimate for filter 3: 80 Harris Estimate for filter 3: 94 clock rate for filter 3 = 88.2 kHz Some other options are summarized as follows: (f) U = 147 D1 = 2 D2 = 2 D3 = 40 Kaiser Estimate for filter 1: 3284 Harris Estimate for filter 1: 3849 clock rate for filter 1 = 7056 kHz Kaiser Estimate for filter 2: 8 Harris Estimate for filter 2: 9 clock rate for filter 2 = 3528 kHz Kaiser Estimate for filter 3: 1602 Harris Estimate for filter 3: 1877 clock rate for filter 3 = 1764 kHz U = 147 D1 = 20 D2 = 4 D3 = 2 Kaiser Estimate for filter 1: 3284 Harris Estimate for filter 1: 3849 clock rate for filter 1 = 7056 kHz Kaiser Estimate for filter 2: 27 Harris Estimate for filter 2: 32 clock rate for filter 2 = 352.8 kHz Kaiser Estimate for filter 3: 80 Harris Estimate for filter 3: 94 clock rate for filter 3 = 88.2 kHz U = 147 D1 = 20 D2 = 2 D3 = 4 Kaiser Estimate for filter 1: 3284 Harris Estimate for filter 1: 3849 clock rate for filter 1 = 7056 kHz Kaiser Estimate for filter 2: 10 Harris Estimate for filter 2: 11 clock rate for filter 2 = 352.8 kHz Kaiser Estimate for filter 3: 160 Harris Estimate for filter 3: 188 clock rate for filter 3 = 176.4 kHz U = 147 D1 = 10 D2 = 4 D3 = 4 Kaiser Estimate for filter 1: 3284 Harris Estimate for filter 1: 3849 clock rate for filter 1 = 7056 kHz Kaiser Estimate for filter 2: 19 Harris Estimate for filter 2: 23 clock rate for filter 2 = 705.6 kHz Kaiser Estimate for filter 3: 160 Harris Estimate for filter 3: 188 clock rate for filter 3 = 176.4 kHz A polyphase partition of the filter designed in part (b) may be used. The polyphase filterbank consists of 147 subfilters. (Note: using the estimates from the first problem, each subfilter requires approximately 100 coefficients.) The filterbank presents an upsampled and filtered version of the input signal in parallel as shown in the block diagram below. The output commutator strides through the available output samples by 160 (the downsample rate). As only one filter in the filter bank has be active to produce each output sample, this approach is an efficient method for resamping the signal. H0(z) H1(z) stride = 160 H146(z) 3.31 (a) hc t A1e s1t A2e s2t AN e sN t u t (b) hd n Thc nT s1Tn AT A2Te s2Tn AN Te sN Tn u n 1 e AT e s1T 1 (c) (d) Hd z n AT 1 1 e s1T z A2T e s2T 1 n AN T e sN T A2T 1 e s2T z 1 n u n AN T 1 e s N T z 1 The poles are e s1T , e s2T , , e s N T Thus a pole at s p maps to a pole at z e pT in the discrete-time system generated using the impulse invariance technique. 3.32 (a) The transform pair xc (t ) X c j implies xc t T e jT X c j . This sugguests the block diagram blow: H c j e jT xc t xc t T The interpretation is that a continuous-time LTI system with transfer function H c j e jT produces a delay of T . (b) Applying (2.86) produces H d e (c) hd (n) 1 2 W e j T T e jn d W T sin n W T W T n W T (d) W T sin n T hd (n) T n T (e) T 1 T 2 1 sin n 2 hd (n) 1 n 2 j 1 2 j TT e 0 | | W W | | T j n TT W j n W e T e T jn T 1 3.33 (a) (b) (c) hc (t ) ae at u (t ) hd (n) Thc (nT ) aT 1 e aT e j a . The technique outlined in Section 2.6.2 evaluates H c j at . 
a j T This clearly produces a result that is different from that obtained in (b). H c j 3.34 (a) 1 a 1 1 1 Sc ( s ) H c s s sa s s sa at sc (t ) 1 e u (t ) (b) sd (n) sc (nT ) 1 e aTn u (n) 1 e aT (c) n u(n) 1 1e S d e j H d e j j e jn e aT n 0 n 0 n e jn e jn n e jn n 0 Hd e (d) j n 0 1 1 j 1 e 1 e j 1 e j 1 e 1 e j j 1 e j 1 e j a . The technique outlined in Section 2.6.2 evaluates H c j at . a j T This clearly produces a result that is different from that obtained in (c). H c j 3.35 (a) (b) (c) 1 z 1 a aT 1 Hd z where 1 1 2 1 z 2 z 1 a 1 1 1 T 1 z 0 n n 1 1 1 hd (n) u (n) u (n 1) 1 1 1 1 1 2 1 n 2 1 1 n0 n0 n0 a . The technique outlined in Section 2.6.2 evaluates H c j at . a j T This clearly produces a result that is different from that obtained in (b). H c j 3.36 1 hd (n) 2 WcT WcT j j e jn d T 2 T WcT e jn d WcT WcT The integral e jn d may be evaluated using integration by parts: WcT WcT e j n WcT d e jn jn WcT WcT WcT WcT 1 jn 1 j n e d e jn e 2 jn jn W T jn c W jWcTn jWcTn 1 e e 2 e jWcTn e jWcTn jn n 2W cos WcTn The desired result is hd (n) WcT jn j2 sin WcTn n2 j times the integral. Putting this together gives 2 T W cos WcTn 1 sin WcTn T n T n2 3.37 The length-3 differentiator is 1 T hd (0) 0 hd (1) hd (1) 1 T Applying the window gives 0.5 T hd ,FIR (0) 0 hd ,FIR (1) hd ,FIR (1) 0.5 T The DTFT is H d ,FIR e j 0.5 j 0.5 j j e e sin T T T For small , sin so that H d ,FIR j . T 3.38 From Table 3.3.3 we have H I e j T e j 2 j 2 sin 2 . The series expansions for the exponential and the sine are e j 2 2 3 1 1 1 j j j 2 2! 2 3! 2 3 5 1 1 sin 2 2 3! 2 5! 2 For small , e j 2 1 1 so that H I T 1 and sin 2 2 j2 j 2 T 3.39 From Table 3.3.3 we have H I e j cos T 2 . j2 sin 2 The series expansions for the cosine and the sine are 2 4 1 1 cos 1 2! 2 4! 4 2 3 5 1 1 sin 2 2 3! 2 5! 2 T 1 1 For small , cos 1 and sin so that H I j2 j 2 2 2 2 T 3.40 (a) Use the identity cos( A B) cos( A) cos( B) sin( A) sin( B) with A c n and B nT . (b) Suppose S e j looks like this S e j 2W c W c c W c c W c W The input to the upper filter is s nT cos c n cos nT cos c n sin nT sin c n cos c n cos nT cos 2 c n sin nT sin c n cos c n 1 1 cos 2 c n 2 2 1 sin 2 c n 2 1 1 1 cos nT cos nT cos 2 c n sin nT sin 2 c n 2 2 2 double frequency term double frequency term If the filter H e j is a low-pass filter with a pass-band gain of 2 as follows: H e j W W then the double-frequency terms are eliminated and x nT cos nT . The input to the lower filter is s nT sin c n cos nT cos c n sin nT sin c n sin c n cos nT cos c n sin c n sin nT sin 2 c n 1 sin 2 c n 2 1 1 cos 2 c n 2 2 1 1 1 cos nT sin 2c n sin nT sin nT cos 2 c n 2 2 2 double frequency term double frequency term Using the same low-pass filter described above, the double frequency terms are eliminated so that y nT sin nT . (c) (d) y nT y nT sin nT tan nT . Thus we have nT tan 1 . x nT x nT cos nT nT d y nT dt x nT y 2 nT 1 2 x nT x nT y ' nT x ' nT y nT x 2 nT x nT y ' nT x ' nT y nT 2 y nT x 2 nT y 2 nT 1 2 x nT (e) x nT D z x ' nT zL x ' nT y nT z y nT scale to a point on the unit circle ' nT L D z y ' nT x nT y ' nT Note: the length of the derivative filter is assumed to be 2L+1. The delay by L samples is required to align the input samples with the outputs of the derivative filters. (f) 0.2 0.004 . Figure 3.3.18 shows that a length-3 (L = 1) 50 derivative filter is more than adequate. 
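The small-angle argument in Problems 3.37-3.39 is easy to confirm numerically. The sketch below is not part of the original solution; it assumes NumPy, evaluates the DTFT of the windowed length-3 differentiator using the sign convention that gives H(e^jw) = j sin(w)/T as stated in the solution, and compares it with the ideal response jw/T near w = 0.

```python
import numpy as np

T = 1.0                                    # sample period (normalized)
# Windowed length-3 differentiator from Problem 3.37:
# h(-1) = +0.5/T, h(0) = 0, h(+1) = -0.5/T.
h = np.array([0.5, 0.0, -0.5]) / T
n = np.array([-1, 0, 1])

# H(e^{jw}) = sum_n h(n) e^{-jwn}, evaluated on a small-frequency grid.
w = np.linspace(-np.pi / 4, np.pi / 4, 9)
H = (h[None, :] * np.exp(-1j * np.outer(w, n))).sum(axis=1)

print(np.max(np.abs(H - 1j * np.sin(w) / T)))   # ~1e-16: H = j sin(w)/T exactly
print(np.max(np.abs(H - 1j * w / T)))           # small: H ~ jw/T for small w
```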
The bandwidth of x nT is 4.1 (a) Q A (b) Q A (c) 1 Q A (d) 1 2Q A 4.2 1 x 0 1 1 t 2 x e dt x e dt t2 x e t 2 dt 0 substitute u x 1 2 x u e du 2 0 x e 0 t 2 dt 1 x e 0 t 2 dt 4.3 x 2 1 1 et dt e u2 du x substitute u x thus 1 x 1 e dt t 2 1 x e dt t 2 e x u2 1 du 2 x e dt t 2 e x t 2 dt 4.4 1 t 2 /2 1 u2 1 2 u 2 1 x Q x e dt e du e du erfc 2 x/ 2 2 2 2 x x/ 2 substitute u t / 2 4.5 (a) 1 2Q 1 1 2 0.1587 0.6827 (b) 1 2Q 2 1 2 0.0228 0.9545 (c) 1 2Q 3 1 2 0.0013 0.9973 (d) 1 2Q 4 1 2 3.1533 105 0.9999 4.6 (a) 1 2Q 1 1 2 0.1587 0.6827 (b) 1 2Q 2 1 2 0.0228 0.9545 (c) 1 2Q 3 1 2 0.0013 0.9973 (d) 1 2Q 4 1 2 3.1533 105 0.9999 4.7 6 1 2Q 1 2Q 2 1 2 0.0228 0.9545 3 4.8 Let be a random variable with pdf f X ( x) and let Z u ( X ) X 2 . The pdf of Z may be 1 1 fX z fX z . expressed in terms of f X ( x) using the relationship f Z ( z ) 2 z 2 z For this problem, X ~ N (0, 2 ) so that the pdf of Z1 X 2 is f Z1 ( z ) 1 1 2 z 2 2 e z 2 2 1 1 2 z 2 2 e z 2 2 1 2 2 z 1/2 e z 2 2 Similarly, the pdf of Z 2 Y 2 is 1 f Z2 ( z ) 1/2 z 2 2 z e 2 2 Because X and Y are independent, Z1 and Z2 are independent. Hence the pdf of Z Z1 Z 2 is f Z ( z ) f Z1 ( z ) f Z2 ( z ) f Z1 ( z x) f Z2 ( x)dx z 2 2 0 2 1 e 2 zx z 2 e 2 2 2 zx 1 z dx ( z x) x 0 z 2 2 e 2 2 z 1 2 2 e 2 2 z0 1 2 2 x 1 2 2 e dx x 4.9 Let be a random variable with pdf f X ( x) and let Z u ( X ) X 2 . The pdf of Z may be 1 1 fX z fX z . expressed in terms of f X ( x) using the relationship f Z ( z ) 2 z 2 z For this problem, X ~ N ( X , 2 ) so that the pdf of Z1 X 2 is f Z1 ( z ) 1 1 2 z 2 1 2 1 2 2 2 z e e z X 2 2 2 z X2 2 2 e 1 1 2 z 2 X z 2 e z X2 e 2 z X 2 2 2 Similarly, if Y ~ N ( Y , 2 ) , the pdf of Z 2 Y 2 is f Z2 ( z ) 1 1 2 z 2 2 1 1 2 2 2 z e e z Y 2 2 2 2 2 z Y2 1 1 2 z 2 2 e z Y 2 2 2 z Y 2 z Y2 e e Because X and Y are independent, Z1 and Z2 are independent. Hence the pdf of Z Z1 Z 2 is f Z ( z ) f Z1 ( z ) f Z2 ( z ) f Z1 ( z x) f Z 2 ( x)dx 1 0 e z X2 Y2 2 2 8 e z X2 2 2 z 0 Y2 2 8 2 e z x X2 e 2 e 2 2 2 z x z z X2 Y2 2 2 8 2 2 X z x 2 1 x( z x) X e e zx X 2 z x Y 2 x Y2 2 x x 1 Y 2 e 2 Y 2 e dx e 2 2 x 2 x e X z x Y x 2 e X z x Y x 2 e X z x Y x 2 dx z X2 Y2 z X2 Y2 z X2 Y2 z X2 Y2 cos( ) cos( ) cos( ) cos( ) 1 1 1 1 2 2 2 2 e d e d e d e d 0 0 0 0 z 2 2 X Y 4I0 2 e z X2 Y2 2 2 2 2 Using the substation (4.38) gives the desired answer. z 2 2 X Y I0 2 z 0 4.10 Let Y be a random variable with pdf fY ( y ) and let Z u (Y ) Y . The pdf of Z may be expressed in terms of fY ( y ) using the relationship f Z ( z ) 2 zfY z 2 . In this case, Y is a central chi-square random variable whose pdf is (4.36): y 1 2 2 fY ( y ) e 2 2 Substituting produces z2 z2 2 z 2 2 z 2 fZ ( z) e 2 e 2 2 2 4.11 Let Y be a random variable with pdf fY ( y ) and let Z u (Y ) Y . The pdf of Z may be expressed in terms of fY ( y ) using the relationship f Z ( z ) 2 zfY z 2 . 
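Before the same square-root transformation is applied to the non-central case below, the result of Problem 4.10 (Z, the square root of the central chi-square variable of (4.36), is Rayleigh) can be checked by simulation. A minimal sketch, assuming NumPy; not part of the original solution.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
N = 200_000

# Z = sqrt(X^2 + Y^2) with X, Y independent N(0, sigma^2); Problem 4.10 says
# Z is Rayleigh with pdf (z / sigma^2) exp(-z^2 / (2 sigma^2)).
x = rng.normal(0.0, sigma, N)
y = rng.normal(0.0, sigma, N)
z = np.sqrt(x**2 + y**2)

# Compare sample moments against the Rayleigh moments
# E[Z] = sigma sqrt(pi/2) and E[Z^2] = 2 sigma^2.
print(z.mean(), sigma * np.sqrt(np.pi / 2))   # both ~2.507
print((z**2).mean(), 2 * sigma**2)            # both ~8.0
```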
In this case, Y is a non-central chi-square random variable whose pdf is (4.37): y s2 1 2 2 s fY ( y ) e I0 y 2 2 2 Substituting produces 2z fZ ( z) e 2 2 z 2 s2 2 2 s z I0 z 2 2 2 e z 2 s2 2 2 s I0 z 2 4.12 This transformation is defined by two functions R u1 ( X , Y ) X 2 Y 2 Y u2 ( X , Y ) tan 1 X and their inverses X 1 v1 ( R, ) R cos X 2 v2 ( R, ) R sin The joint pdf f R , (r , ) may be expressed in terms of the joint pdf f X ,Y ( x, y ) using the relationship f R , (r , ) J f X ,Y v1 r , , v2 r , where J is the Jacobian matrix v1 r , r J v2 r , r v1 r , cos v2 r , sin r r sin r cos Starting with 1 f X ,Y ( x, y ) e 2 2 and substituting produces r f R , ( r , ) e 2 2 r e 2 2 r e 2 2 r cos X 2 r sin Y 2 2 2 r 2 cos2 2 r X cos X2 r 2 sin 2 2 r Y sin Y2 2 2 r 2 X2 Y2 2 r X cos 2 r Y sin 2 2 x X 2 y Y 2 2 2 4.13 Because the covariance matrix M is diagonal, X1 and X2 are uncorrelated and therefore independent random variables. To see that this is so, observe that the joint pdf can be factored: x fX 1 x2 1 1 2 5 0 1/2 e 5 0 x 1 1 x1 1 x2 20 5 x 1 2 2 2 0 5 1 e 2 5 1 1 x1 1 x2 2 5 2 0 0 x 1 1 1 x2 2 5 1 ( x1 1)2 ( x2 2)2 5 5 1 2 e 2 5 ( x 1)2 ( x 2)2 1 2 1 1 e 25 e 25 2 5 2 5 Thus we see that X1 ~ N(1,5) and X2 ~ N(2,5). 3 1 3 2 (a) P X 1 3 and X 2 3 P X 1 3 P X 2 3 Q Q 0.0607 5 5 11 2 2 1 (b) P X 1 1 and X 2 2 P X 1 1 P X 2 2 Q 1 Q 5 5 4 4.14 P( X 1 X 2 ) P( X 1 X 2 0) Let Z X 1 X 2 . Then the desired probability is P ( Z 0) . Z is a Gaussian random variable because it was obtained through a linear operation on the T Gaussian random vector X X 1 X 2 . The pdf of X is (4.44) with 2 10 0 μ and M 5 0 10 The random variable Z is related X through Z = AX where A 1 1 . The mean and variance of Z are Z Aμ 3 10 0 1 Z2 AMAT 1 1 20 0 10 1 Thus, 03 P( Z 0) Q 0.2512 20 4.15 P( X 1 X 2 ) P( X 1 X 2 0) Let Z X 1 X 2 . Then the desired probability is P ( Z 0) . Z is a Gaussian random variable because it was obtained through a linear operation on the T Gaussian random vector X X 1 X 2 . The pdf of X is (4.44) with 2 10 2 μ and M 5 2 10 The random variable Z is related X through Z = AX where A 1 1 . The mean and variance of Z are Z Aμ 3 10 2 1 Z2 AMAT 1 1 16 2 10 1 Thus, 03 P( Z 0) Q 0.2266 4 4.16 (a) 0.25 f X 5 2 1 1 1 2 3/2 0.5 0.1 0.5 1 1/2 e 1 0.5 0.1 0.25 0 1 0.250 5 4 2 20.5 2 0.5 5 4 2 0.1 0.5 1 2 2 0.0403 0.5 0.1 0.5 1 (b) Let Z X 1 X 2 X 3 . Then the desired probability is P ( Z 5) . Z is a Gaussian random variable because it was obtained through a linear operation on the Gaussian random vector T X X 1 X 2 X 3 . The pdf of X is (4.44) with 0 1 0.5 0.1 μ 4 and M 0.5 2 0.5 2 0.1 0.5 1 The random variable Z is related X through Z = AX where A 1 1 1 . The mean and variance of Z are Z Aμ 6 1 0.5 0.1 1 AMA 1 1 1 0.5 2 0.5 1 6.1 0.1 0.5 1 1 56 Thus, P( Z 5) Q 0.6572 . 6.1 2 Z T 4.17 Because the covariance matrix M is diagonal, X1, X2, and X3 are uncorrelated and therefore independent random variables. To see that this is so, observe that the joint pdf can be factored: x1 f X x2 x 3 1 1 2 3/2 1 0 0 0 1 0 1/2 e 1 0 0 x1 1 1 x1 1 x2 1 x3 1 0 1 0 x2 1 2 0 0 1 x3 1 0 0 1 1 2 ( x1 1)2 ( x2 1)2 ( x3 1)2 e 2 1 1 ( x1 1)2 2 1 ( x2 1)2 2 1 ( x3 1)2 2 e e e 2 2 2 Thus we see that X1 ~ N(1,1), X2~N(1,1), and X3 ~ N(1,1). (a) 0.25 f X 5 2 1 2 e (0.25 1)2 2 1 2 e (5 1)2 2 1 2 e (2 1)2 2 9.7517 10 6 (b) P( X 1 2 and X 2 2 and X 3 2) P( X 1 2) P( X 2 2) P( X 3 2) 2 1 2 1 2 1 Q Q Q 1 1 1 0.004 4.18 (a) The vector Y consists of 2 jointly Gaussian random variables because Y was obtained from X via a linear operation. 
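The linear-transformation argument used in Problems 4.14-4.17 (and applied again to the vector Y in Problem 4.18 below) can be verified numerically. The sketch below is not part of the original solutions and assumes NumPy and SciPy; it evaluates P(X1 > X2) for the mean vector and covariance matrix of Problem 4.14 both from the Q-function formula and by direct simulation.

```python
import numpy as np
from scipy.special import erfc

def qfunc(x):
    # Q(x) = (1/2) erfc(x / sqrt(2)), as derived in Problem 4.4.
    return 0.5 * erfc(x / np.sqrt(2))

mu = np.array([2.0, 5.0])                    # mean vector from Problem 4.14
M = np.array([[10.0, 0.0], [0.0, 10.0]])     # covariance matrix from Problem 4.14

# Z = X1 - X2 = A X with A = [1, -1]; Z is Gaussian with mean A mu = -3
# and variance A M A^T = 20, so P(X1 > X2) = P(Z > 0) = Q(3 / sqrt(20)).
A = np.array([1.0, -1.0])
mu_z = A @ mu
var_z = A @ M @ A
print(qfunc((0.0 - mu_z) / np.sqrt(var_z)))  # ~0.2512

# Cross-check by simulation.
rng = np.random.default_rng(1)
X = rng.multivariate_normal(mu, M, size=200_000)
print(np.mean(X[:, 0] > X[:, 1]))            # ~0.25
```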
Consequently, the pdf of Y is f Y (y ) 1 T y μ Y M Y1 y μ Y 1 2 e 2 | M y |1/2 and all that remains is to compute the mean vector and the covariance matrix: 2 1 5 1 2 4 3 24 μ Y E Y E AX AE X 1 2 4 6 1 2 18 2 1 M Y E YYT E AX RA T 1 0.3 5 1 2 4 3 0.4 2 4 6 1 2 0.2 0.1 AE XX A 0.3 1 0.3 0.4 0.2 T 0.4 0.3 1 0.3 0.4 0.2 0.4 0.3 1 0.3 AMAT T 0.1 5 0.2 1 0.4 2 0.3 4 1 3 2 4 99.4 94.1 6 94.1 112.2 1 2 Thus, the pdf of Y is T 1 24 0.0488 0.0410 24 y 0.0410 0.0433 18 y 1 18 2 f Y (y ) e 2 47.9361 0.5 The value of the pdf at the point Y is 0.5 T 1 0.5 24 0.0488 0.0410 0.5 24 0.0410 0.0433 0.5 18 0.5 1 2 0.5 18 fY e 0.5 2 47.9361 1.2678 104 (b) P(Y1 Y2 ) P(Y1 Y2 0) Let Z Y1 Y2 , then the desired probability is P ( Z 0) . The random variable Z is a Gaussian random variable because it was obtained from Y via a linear transformation. That is Z Y1 Y2 looks like Z BY where B 1 1 . Consequently, the pdf of Z is f Z ( z ) 1 2 Z2 e z Z 2 2 Z2 , where the mean and variance of Z are 24 Z E Z E BY BE{Y} Bμ Y 1 1 6 18 Z2 E ZZ T E BY BY T BE{YY }B T T 99.4 94.1 1 1 1 94.1 112.2 1 23.4 Now, the desired probability is 0 Z P( Z 0) Q Z 6 Q 0.8926 23.4 BM Y BT 4.19 xT x 2 1 The pdf of the vector X is f X (x) e 2 2 2 The vector Y consists of 2 jointly-Gaussian random variables because Y is obtained from X via a linear operation. Consequently, the pdf of Y is f Y (y ) 1 T y μ Y M Y1 y μ Y 1 2 e 2 | M y |1/2 and all that remains is to compute the mean vector and covariance matrix: 0 0 μ Y E Y E RX RE X R 0 0 M Y E YYT E RX RX T RE XX R T T R 2 IRT 2 RRT cos 2 ( ) sin 2 ( ) cos( ) sin( ) cos( )sin( ) 2 cos 2 ( ) sin 2 ( ) cos( ) sin( ) cos( ) sin( ) 2 I Because the pdf of Y is the form of two uncorrelated Gaussian random variables with zero mean and common variance 2 . 4.20 The pdf of the vector X is f X (x) 1 xT x 2 2 N e N The vector Y consists of N jointly-Gaussian random variables because Y is obtained from X via a linear operation. Consequently, the pdf of Y is 2 f Y (y ) N /2 1 2 N /2 | M y |1/2 e 1 y μY T M Y1 y μ Y 2 and all that remains is to compute the mean vector and covariance matrix: 0 0 0 0 μ Y E Y E UX UE X U 0 0 M Y E YYT E UX UX T UE XX U T T U 2IUT 2 UUT 2I Because the pdf of Y is the form of N uncorrelated Gaussian random variables with zero mean and common variance 2 . 4.21 C XX (n, k ) E X (n) (n) X (k ) (k ) E X (n) X (k ) (k ) X (n) (n) X (k ) (n) (k ) E X (n) X (k ) (k ) E X (n) (n) E X (k ) (n) (k ) RXX (n, k ) (k ) (n) (n) (k ) (n) (k ) RXX (n, k ) (n) (k ) 4.22 From the covariance matrix, we obtain the following: RXX (0) 1 RXX (1) RXX (1) 0.35 RXX (2) RXX (2) 0.13 RXX (3) RXX (3) 0.04 R S X e j k XX ( k ) e j k 0.04e j 3 0.13e j 2 0.35e j 1 0.35e j 0.13e j 2 0.04e j 3 1 0.7 cos 0.26 cos 2 0.08cos 3 4.23 The autocorrelation function is obtained by computing the inverse DTFT of S X e j : 1 1 1 1 1 1 RXX (k ) (k 3) (k 2) (k 1) (k ) (k 1) (k 2) (k 3) 8 4 2 2 4 8 1 1 2 1 8 RXX (k ) 1 2 1 4 -4 -3 -2 1 4 -1 0 1 2 1 8 3 4 k (a) The vector X consists of four jointly Gaussian random variables. The form of the pdf is given by (4.44). Consequently, all the remains to be found are the mean vector and the covariance matrix M. 
X 1 0 X 0 μ E X E 2 X 3 0 X 4 0 X 1 X 1 X 1 X 1 X 2 X1 X 3 X 1 X 4 X 2 X 2 X1 X 2 X 2 X 2 X 3 X 2 X 4 T M E XX E X 1 X 2 X 3 X 4 E X X X X X X X X 3 2 3 3 3 4 X 3 3 1 X 4 X 4 X 1 X 4 X 2 X 4 X 3 X 4 X 4 1 1 1 1 2 4 8 RXX (0) RXX (1) RXX (2) RXX (3) 1 1 1 1 R (1) R (0) R (1) R (2) 2 2 4 XX XX XX XX RXX (2) RXX (1) RXX (0) RXX (1) 1 1 1 1 2 RXX (1) RXX (0) 4 2 RXX (3) RXX (2) 1 1 1 1 8 4 2 Note that M 0.4219 M 1/2 0.6495 and that M 1 4 3 2 3 0 0 2 3 4 3 2 3 0 0 2 3 4 3 2 3 0 0 . 2 3 4 3 (b) Let Z X 1 X 2 X 3 X 4 . Then the desired probability is P ( Z 1) . The random variable Z is related to the random vector X through the transformation X1 X Z 1 1 1 1 2 X3 X4 which we write as Z = AX (where A = [1 1 1 1].) Z is a Gaussian random variable because it is the result of a linear operation on a sequence of jointly Gaussian random variables. The mean and variance of Z are Z E Z AE X 0 Z2 E ZZ T AMAT 1 1 1 1 1 1 2 1 4 1 8 33 4 1 2 1 1 2 1 4 1 4 1 2 1 1 2 1 8 1 1 4 1 1 1 2 1 1 Hence 1 Z 1 P( Z 1) Q Q 0.3639 33 / 4 Z 4.24 (a) M E XXT X1 X E 2 X1 X 3 X 4 X 1 X 1 X X E 2 1 X 3 X1 X 4 X 1 X1 X 2 X2X2 X3 X2 X4X2 X1 X 3 X2 X3 X3X3 X4 X3 X2 X3 X 4 X1 X 4 X 2 X 4 X3 X 4 X 4 X 4 1 RXX (0) RXX (1) RXX (2) RXX (3) 1 R (1) R (0) R (1) R (2) XX XX XX 2 XX RXX (2) RXX (1) RXX (0) RXX (1) 1 RXX (1) RXX (0) 4 RXX (3) RXX (2) 1 8 1 2 1 1 2 1 4 1 4 1 2 1 1 2 1 8 1 4 1 2 1 (b) 1 0.5 0 fX 0.25 1 1 2 2 1 0.5 0.25 0.125 0.5 1 0.5 0.25 0.25 0.5 1 0.5 0.125 0.25 0.5 1 1/2 e 0.5 0.25 0.125 0.5 1 0.5 1 0.5 0.25 0 1 0.5 0 0.25 1 0.25 0.5 1 2 0.5 0.25 1 1 0.125 0.25 0.5 0.0136 (c) Let Z X 1 X 2 X 3 X 4 . Then the desired probability is P ( Z 1) . The random variable Z is related to the random vector X through the transformation X1 X Z 1 1 1 1 2 X3 X4 which we write as Z = AX (where A = [1 1 1 1].) Z is a Gaussian random variable because it is the result of a linear operation on a sequence of jointly Gaussian random variables. The mean and variance of Z are Z E Z AE X 0 Z2 E ZZ T AMAT 1 1 1 1 1 1 2 1 4 1 8 1.75 1 2 1 1 2 1 4 1 4 1 2 1 1 2 1 8 1 1 4 1 1 1 2 1 1 Hence 1 Z 1 P( Z 1) Q Q 0.2248 1.75 Z (d) SX e j m 1 m 1 e jm 2 m m 1 1 e jm e jm 2 2 m m 0 1 1 1 1 1 1 e j 1 e j 2 2 3 4 5 cos 4 4.25 (a) 1 1 1 H e j e j e j 3 3 3 1 1 2 cos 3 2 1 H e j 3 4 cos 2 cos 2 9 SY e j 2 H e j 2 2 3 4 cos 2 cos 2 9 2 2 2 2 2 2 2 (k 2) (k 1) (k ) (k 1) (k 2) 9 9 3 9 9 The autocorrelation function is plotted below. (b) RYY (k ) 2 9 -3 -2 2 2 9 -1 3 2 9 0 2 2 9 2 9 1 2 RYY (k ) 3 k (c) The averages are correlated one with another and the span of the correlation is determined by the length of the filter. This makes sense, as the averages computed 1 or 2 samples before the current sample are based, in part, on the same data. 
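The lag values listed in Problem 4.25 follow from R_YY(k) = sigma^2 times the deterministic autocorrelation of the length-3 averaging filter. A minimal numerical check, assuming NumPy and not part of the original solutions, computes the theoretical lags and confirms them with an estimate from a long filtered white-noise sequence.

```python
import numpy as np

sigma2 = 1.0
h = np.ones(3) / 3.0                      # length-3 averaging filter

# Theoretical output autocorrelation R_YY(k) = sigma^2 * (h(k) conv h(-k)):
# sigma^2 * [1/9, 2/9, 1/3, 2/9, 1/9] at lags k = -2, ..., +2.
R_theory = sigma2 * np.convolve(h, h[::-1])
print(R_theory)

# Estimate the same lags from a long white-noise realization passed through h.
rng = np.random.default_rng(2)
x = rng.normal(0.0, np.sqrt(sigma2), 500_000)
y = np.convolve(x, h, mode="valid")
R_hat = [np.mean(y[2:-2] * np.roll(y, k)[2:-2]) for k in range(-2, 3)]
print(np.round(R_hat, 4))                 # close to R_theory
```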
4.26 (a) 1 W W H e j otherwise 0 2 1 W W H e j otherwise 0 SY e j 2 0 W W otherwise (b) RYY (k ) 2 sin Wk k 4.27 (a) H e j (n 1)a n e jn H e j n0 2 SY e j 1 · 1 1 ae j 2 1 1 ae 1 ae j 2 j 2 1 1 a 4a(a 1) cos() 2a 2 cos(2) 4 2 2 1 a 4 4a(a 2 1) cos() 2a 2 cos(2) (b) RYY (k ) 2 h(k ) h( k ) For k 0 RYY (k ) 2 (m 1)a m (m k 1)a m k m 0 2a 4 a2 1 2 (2 k ) (1 k ) a k 2 3 2 2 2 (1 a ) 1 a (1 a ) For k 0 RYY (k ) 2 (m 1)a m (m k 1)a m k m k 1 2a 2 2a 4 1 2k 2k a k (k 1) a k a 2 k k (2 k ) 2 2 2 2 3 2 1 a (1 a ) (1 a ) 1 a ak 2 2 a 1 2k 2k k a a k ( 2 ) (1 ) 2 2 1 a2 (1 a ) 4.28 (a) H e j H e j SY e j 1 sin N1 2 e j n n N1 sin 2 1 sin 2 N1 2 2 sin 2 2 N1 1 sin 2 N1 2 2 sin 2 2 2 2 N1 1 | n | 2 N1 | n | (b) RYY (n) h(n) h(n) 0 2 N1 | n | 2 4.29 (a) H e 1 S e H e j e jn0 j j 2 2 Y (b) RYY (k ) 2 (k ) 4.30 hf k T hf 1 kT Thus, hf 2 3 hf 1 hf 1 hf kT 2 kT 6 kT hf 1 kT e kT 1 Substituting gives hf SV ( f ) e hf kT hf hf kT hf hf 1 1 1 kT kT 5.1 E= T2 ∫ [r (t ) − rˆ(t )] 2 T1 2 K −1 ⎡ ⎤ dt = ∫ ⎢r (t ) − ∑ xl φl (t )⎥ dt l =0 ⎦ T1 ⎣ T2 2 K −1 ∂ ⎡ ⎤ E = −2 ∫ ⎢r (t ) − ∑ xl φ l (t )⎥φ k (t )dt ∂x k l =0 ⎦ T1 ⎣ T T2 K −1 T2 T1 l =0 T1 = −2 ∫ r (t )φ k (t )dt + 2∑ xl ∫ φ l (t )φ k (t )dt δl −k T2 = −2 ∫ r (t )φ k (t )dt + 2 x k T1 2 ∂ E = 0 ⇒ x k = ∫ r (t )φ k (t )dt ∂x k T1 T 5.2 E= T2 ∫ [r (t ) − rˆ(t )] 2 T1 2 K −1 ⎡ ⎤ dt = ∫ ⎢r (t ) − ∑ xl φl (t )⎥ dt l =0 ⎦ T1 ⎣ T2 The optimum xk satisfy 2 K −1 ∂ ⎡ ⎤ 0= E = −2 ∫ ⎢r (t ) − ∑ xl φl (t )⎥φ k (t )dt ∂x k l =0 ⎦ T1 ⎣ T e (t ) T2 0 = −2 ∫ e(t )φ k (t )dt ⇒ φ k (t ) is orthogonal to e(t ) T1 5.3 The set is orthogonal (but not orthonormal) means l=k l≠k ⎧E ∫ φ (t )φ (t )dt = ⎨⎩ 0 T2 k l k T1 The optimum coefficients are derived as follows: E= T2 ∫ [r (t ) − rˆ(t )] 2 T1 2 K −1 ⎡ ⎤ dt = ∫ ⎢r (t ) − ∑ xl φl (t )⎥ dt l =0 ⎦ T1 ⎣ T2 2 K −1 ∂ ⎡ ⎤ ( ) E = −2 ∫ ⎢r t − ∑ xl φl (t )⎥φ k (t )dt ∂x k l =0 ⎦ T1 ⎣ T T2 K −1 T2 T1 l =0 T1 = −2 ∫ r (t )φ k (t )dt + 2∑ xl ∫ φl (t )φ k (t )dt Ek when l = k T2 = −2 ∫ r (t )φ k (t )dt + 2 x k E k T1 T2 ∂ 1 E = 0 ⇒ xk = ∂x k Ek T2 ∫ r (t )φ (t )dt = k T1 ∫ r (t )φ (t )dt k T1 T2 ∫ φ (t )dt 2 k T1 5.4 The set is neither orthogonal nor normalized. E= T2 ∫ [r (t ) − rˆ(t )] 2 T1 2 K −1 ⎡ ⎤ dt = ∫ ⎢r (t ) − ∑ xl φl (t )⎥ dt l =0 ⎦ T1 ⎣ T2 2 K −1 ∂ ⎡ ⎤ E = −2 ∫ ⎢r (t ) − ∑ xl φl (t )⎥φ k (t )dt ∂x k l =0 ⎦ T1 ⎣ T T2 K −1 T2 T1 l =0 T1 = −2 ∫ r (t )φ k (t )dt + 2∑ xl ∫ φl (t )φ k (t )dt 2 K −1 ∂ E = 0 ⇒ ∫ r (t )φ k (t )dt = ∑ xl ∫ φl (t )φ k (t )dt ∂x k l =0 T1 T1 T2 T k = 0,1,… , K − 1 This defines a set of K equations in the K unknowns xk (for k = 0,1, …, K − 1 ). To see this, Let T2 T2 T1 T1 Pk = ∫ r (t )φ k (t )dt and Rk ,l = ∫ φl (t )φ k (t )dt for k = 0,1, …, K − 1 . With these definitions, the system of equations is K −1 ∂ E = 0 ⇒ Pk = ∑ x k Rl ,k for k = 0,1, … , K − 1 . ∂x k l =0 As is customary with systems of equations, this system of equations may also be expressed in matrix form as follows: ⎡ P0 ⎤ ⎡ R0,0 ⎢ P ⎥ ⎢ R ⎢ 1 ⎥ = ⎢ 1,0 ⎢ ⎥ ⎢ ⎢ ⎥ ⎢ ⎣ PK −1 ⎦ ⎣ RK −1, 0 P R0, K −1 ⎤ ⎡ x0 ⎤ R1, K −1 ⎥⎥ ⎢ x1 ⎥ ⎢ ⎥ ⎥⎢ ⎥ ⎥⎢ ⎥ RK −1, K −1 ⎦ ⎣ x K −1 ⎦ R0,1 R1,1 RK −1,1 R x The matrix equation is P = Rx. The solution for the optimum x’s is x = R −1 P . Note that if the set of basis functions is orthogonal, then R is the diagonal matrix ⎡ E0 ⎢ R=⎢ ⎢ ⎢ ⎣ E1 ⎤ ⎥ ⎥. ⎥ ⎥ E K −1 ⎦ In this case, the solution for the optimum x’s is trival because R −1 ⎡1 ⎢E ⎢ 0 ⎢ =⎢ ⎢ ⎢ ⎢ ⎢⎣ 1 E1 ⎤ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ 1 ⎥ E K −1 ⎥⎦ T2 where E k = ∫ φ k2 (t )dt for k = 0,1, …, K − 1 . The solution for xk is T1 1 Pk Ek which is the solution to Exercise 5.3. 
Furthermore, when the set of basis functions is orthonormal, Ek = 1 for k = 0,1, …, K − 1 . As a consequence, R = I, so that R −1 = I . The solution for xk is xk = x k = Pk which is the solution to Exercise 5.1. T2 5.5 Let E k = ∫ φ k2 (t )dt for k = 0,1, …, K − 1 and note that T1 ⎧E ∫ φ (t )φ (t )dt = ⎨⎩ 0 T2 k l T1 k l=k . l≠k Now the energy in s (t ) is K −1 ⎡ K −1 ⎤ E s = ∫ s (t )dt = ∫ ⎢∑ a k φ k (t )∑ a l φ l (t )⎥ dt l =0 ⎦ T1 T1 ⎣ k = 0 T2 T2 2 K −1 K −1 T2 k =0 l =0 T1 = ∑∑ a k al ∫ φ k (t )φl (t )dt K −1 = ∑ a k2 E k k =0 The interpretation is that Es is a scaled squared Euclidean distance from the origin of the signal space. Alternatively, the expression can be interpreted as the squared Euclidean distance in a Cartesian coordinate system whose axes do not have the same length. 5.6 +A 1 t 1 −A −A +A t 5.7 +A 1 t (0,+ A) −A +A 1 (+ A,0) t 5.8 +2A ½ +2A 1 t t −2A −2A (− A,+ A) (− A,− A) +2A −2A ½ 1 ½ 1 (+ A,+ A) (+ A,− A) +2A ½ t 1 t −2A 5.9 +A 1 t −A (0,+ A) +A 1 t -A (− A,0) (+ A,0) (0,− A) +A t −A 1 1 t 5.10 ½ − 3 −1 A 2 − 3 +1 A 2 1 t 3 +1 A 2 + 3 −1 A 2 t ½ ⎛ A 3 A⎞ ⎜− ,+ ⎟⎟ ⎜ 2 2⎠ ⎝ ⎛ A 3 A⎞ ⎜+ ,+ ⎟⎟ ⎜ 2 2⎠ ⎝ (0,0) +2A (0,− A) t −2A + ½ 1 +A t −A 1 1 −4A t −2A −4A −4A 1 t −2A ½ 1 −2A −2A −4A ½ ½ t (− A,+ A) (− 3 A,+ A) t (− A,+ A) 1 1 (− 3 A,+ A) ½ +4A +2A ½ (+ A,+ A) (+ 3 A,+ A) (+ A,+ A) (+ 3 A,+ A) ½ 1 1 t +4A +2A +2A +2A t +4A +4A ½ ½ 1 1 t t 5.11 5.12 1 x 0 = ∫ r (t )φ 0 (t )dt = 0 1 2 1 x1 = ∫ r (t )φ1 (t )dt = 0 0 1 rˆ(t ) = x 0φ 0 (t ) + x1φ1 (t ) = φ 0 (t ) 2 r (t ) r̂ (t ) +1 +½ t ½ 1 5.13 1 x 0 = ∫ r (t )φ 0 (t )dt = 0 1 2 1 x1 = ∫ r (t )φ1 (t )dt = 0 0 1 rˆ(t ) = x 0φ 0 (t ) + x1φ1 (t ) = φ 0 (t ) 2 r (t ) +1 r̂ (t ) +½ t ½ 1 5.14 1 x 0 = ∫ r (t )φ 0 (t )dt =0 0 1 x1 = ∫ r (t )φ1 (t )dt = − 0 1 2 1 rˆ(t ) = x 0φ 0 (t ) + x1φ1 (t ) = − φ1 (t ) 2 r (t ) +1 +½ −½ −1 r̂ (t ) ½ t 1 5.15 1 x 0 = ∫ r (t )φ 0 (t )dt =0 0 1 x1 = ∫ r (t )φ1 (t )dt = 0 1 2 1 rˆ(t ) = x0φ 0 (t ) + x1φ1 (t ) = φ1 (t ) 2 r (t ) +1 +½ −½ −1 1 ½ t r̂ (t ) 5.16 (a) 3φ0 (t ) + φ1 (t ) → +4 +2 t ½ 1 (b) 1 x 0 = ∫ r (t )φ 0 (t )dt =3 0 1 x1 = ∫ r (t )φ1 (t )dt =1 0 rˆ(t ) = x 0φ 0 (t ) + x1φ1 (t ) = 3φ 0 (t ) + φ1 (t ) r̂ (t ) r (t ) +4 +2 t ½ 1 (c) The best approximation for r (t ) in Span{φ 0 (t ), φ1 (t )} is 3φ 0 (t ) + φ1 (t ) . 5.17 ½ +A 2 1 t ½ −A 2 −A +A 1 t 5.18 +A 2 t ½ 1 (0,+ A) +A 2 (+ A,0) ½ 1 t 5.19 +A 2 −A 2 +A 2 ½ 1 t t (− A,+ A) (− A,− A) 1 −A 2 ½ 1 ½ 1 (+ A,+ A) (+ A,− A) +A 2 t ½ −A 2 t 5.20 +A 2 t −A 2 ½ 1 (0,+ A) +A 2 −A 2 +A 2 ½ 1 t (− A,0) (+ A,0) (0,− A) +A 2 ½ 1 t −A 2 t −A 2 ½ 1 5.21 3 A 2 1 + A 2 + + − 1 A 2 ½ t 1 3 A 2 ⎛ A 3 A⎞ ⎜+ ,+ ⎟⎟ ⎜ 2 2⎠ ⎝ ⎛ A 3 A⎞ ⎜− ,+ ⎟⎟ ⎜ 2 2⎠ ⎝ (0,0) +2A t −2A ½ t ½ 1 (0,− A) ½ −A 2 1 t 1 − A3 2 −A 2 − A3 2 −A 2 A 2 ½ ½ 1 1 t t −A 2 −A 2 A 2 ½ t (− A,+ A) (− 3 A,+ A) t (− A,+ A) 1 1 (− 3 A,+ A) ½ ½ −A 2 A 2 ½ (+ A,+ A) (+ 3 A,+ A) (+ A,+ A) (+ 3 A,+ A) A 2 A3 2 1 1 t t −A 2 A 2 A3 2 A 2 A3 2 ½ ½ 1 1 t t 5.22 5.23 (a) φ 0(t) 1 0 -1 0 0.1 0.2 0.3 0.4 0.5 t 0.6 0.7 0.8 0.9 1 0 0.1 0.2 0.3 0.4 0.5 t 0.6 0.7 0.8 0.9 1 φ 1(t) 1 0 -1 dimension K = 2 (b) 2 1.5 s1 1 φ1 0.5 0 s2 s0 -0.5 -1 s3 -1.5 -2 -2 -1.5 -1 -0.5 0 φ0 0.5 1 1.5 2 5.24 (a) φ 0(t) 1 0 -1 0 0.1 0.2 0.3 0.4 0.5 t 0.6 0.7 0.8 0.9 1 0 0.1 0.2 0.3 0.4 0.5 t 0.6 0.7 0.8 0.9 1 φ 1(t) 1 0 -1 dimension: K =2 (b) 2 1.5 s2 1 φ1 0.5 0 s1 s3 -0.5 -1 s0 -1.5 -2 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 φ0 The basis functions here are the same as those for Exercise 5.23, but the constellation is a reordered version of the constellation in Exercise 5.23. 
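Problems 5.12 through 5.28 all rest on the same computation: project a waveform onto an orthonormal basis to obtain its signal-space coordinates x_k = integral of r(t) phi_k(t) dt. The discrete-time sketch below is not part of the original solutions; it assumes NumPy and uses a hypothetical pair of rectangular basis functions (the actual waveforms in these problems are defined by figures not reproduced here), with the projected waveform chosen to mirror the numbers in Problem 5.16.

```python
import numpy as np

# Hypothetical stand-ins for an orthonormal basis on 0 <= t < 1
# (the problems define their own basis functions graphically).
Ns = 100                                  # samples per interval
dt = 1.0 / Ns
t = np.arange(Ns) * dt
phi0 = np.ones(Ns)                        # constant over [0, 1)
phi1 = np.where(t < 0.5, 1.0, -1.0)       # +1 on [0, 0.5), -1 on [0.5, 1)

# Check orthonormality: integral of phi_k(t) phi_l(t) dt (Riemann sum).
G = np.array([[np.sum(a * b) * dt for b in (phi0, phi1)] for a in (phi0, phi1)])
print(G)                                  # approximately the identity matrix

# Project a waveform onto the basis: x_k = integral of r(t) phi_k(t) dt.
r = 3.0 * phi0 + 1.0 * phi1               # compare Problem 5.16
x = np.array([np.sum(r * phi0) * dt, np.sum(r * phi1) * dt])
print(x)                                  # [3. 1.]
```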
5.25 (a) 1.5 1 φ 0(t) 0.5 0 -0.5 -1 -1.5 0 0.1 0.2 0.3 0.4 0.5 t 0.6 0.7 0.8 0.9 dimension: K = 1 (b) 8 6 4 2 0 s3 s2 s1 s0 -2 -4 -6 -8 -8 -6 -4 -2 0 φ0 2 4 6 8 1 5.26 (a) φ 0(t) 1 0 -1 0 0.1 0.2 0.3 0.4 0.5 t 0.6 0.7 0.8 0.9 1 0 0.1 0.2 0.3 0.4 0.5 t 0.6 0.7 0.8 0.9 1 φ 1(t) 1 0 -1 dimension: K = 2 (b) s2 2 1.5 s3 1 φ1 0.5 s0 0 s1 -0.5 -1 -1.5 -2 -2 -1.5 -1 -0.5 0 φ0 0.5 1 1.5 2 5.27 (a) φ 0(t) 1 0 -1 0 0.5 1 1.5 t 2 2.5 3 0 0.5 1 1.5 t 2 2.5 3 0 0.5 1 1.5 t 2 2.5 3 φ 1(t) 1 0 -1 φ 2(t) 1 0 -1 dimension K = 3 (b) 5 s3 φ2 s2 0 s0 s1 -5 5 5 0 φ1 0 -5 -5 φ0 5.28 (a) φ 0(t) 1 0 -1 0 0.5 1 1.5 t 2 2.5 3 0 0.5 1 1.5 t 2 2.5 3 φ 1(t) 1 0 -1 dimension: K = 2 (b) 2.5 2 1.5 s1 s2 1 φ1 0.5 s0 0 -0.5 -1 -1.5 -2 -2.5 -2 -1 0 φ0 1 2 5.29 The average energy may be expressed as E avg = M −1 2 2 A M ∑ (2m + 1) m=− 2 = M 2 2A M M −1 2 2 ∑ (2m + 1) m =0 We will need the following identities: n ∑i = n i =0 n ∑i = i =0 n ∑i 2 i =0 = n(n + 1) 2 n(n + 1)(2n + 1) 6 Using these identities, we have M −1 2 ∑ (2m + 1) m =0 2 M −1 2 M −1 2 M −1 2 m =0 m =0 m =0 = 4 ∑ m2 + 4∑ m + ∑1 ⎛M ⎞⎛ M ⎞ ⎛M ⎞⎛ M ⎜ − 1⎟⎜ ⎟(M − 2 + 1) ⎜ − 1⎟⎜ 2 2 ⎠⎝ 2 ⎠ ⎠⎝ 2 = 4⎝ +4⎝ 6 2 3 M M −1 = 2 3 Putting this all together we have E avg = 2 A2 M M 3 − 1 M 3 − 1 2 A = M 2 3 3 ⎞ ⎟ ⎠+M 2 2 . 5.30 (a) A = 0.05. LUT 0 1 bits -0.05 +0.05 ↑N p(nT) ADC (b) -3 2 x 10 1.5 1 s(t) 0.5 0 -0.5 -1 -1.5 -2 0 1 2 3 4 t/Ts 5 6 7 8 (c) x(t) r(t) DAC N samples/bit x(kT) p(−nT) decision n = kN (d) k 0 1 2 3 ----------------------------------x(k) +0.00 -0.01 +0.33 +0.07 ----------------------------------a(k) +0.05 -0.05 +0.05 +0.05 ----------------------------------bits 1 0 1 1 s(t) 5.31 (a) A = 0.03. LUT 00 01 10 11 bits -0.09 -0.03 +0.09 +0.03 ↑N p(nT) ADC (b) -3 2.5 x 10 2 1.5 s(t) 1 0.5 0 -0.5 -1 0 0.5 1 1.5 2 t/Ts 2.5 3 3.5 4 (c) r(t) DAC N samples/symbol p(−nT) decision n = kN (d) k 0 1 2 3 ----------------------------------x(k) +0.07 +0.06 -0.03 -0.10 ----------------------------------a(k) +0.09 +0.03 -0.03 -0.09 ----------------------------------bits 10 11 01 00 s(t) 5.32 (a) A = 0.125 000 001 010 011 100 101 110 111 bits LUT -0.875 -0.625 -0.125 -0.375 +0.875 +0.625 +0.125 +0.375 ↑N p(nT) ADC (b) 0.02 0.018 0.016 0.014 s(t) 0.012 0.01 0.008 0.006 0.004 0.002 0 0.5 1 1.5 t/Ts 2 2.5 3 (c) r(t) DAC N samples/symbol p(−nT) decision n = kN (d) k 0 1 2 3 --------------------------------------x(k) +0.601 -0.101 +0.355 -0.777 --------------------------------------a(k) +0.625 -0.125 +0.375 -0.875 --------------------------------------bits 101 010 111 000 s(t) 5.33 2 1 0 -1 -2 -0.5 -0.4 -0.3 -0.2 -0.1 0 t/Ts 0.1 0.2 0.3 0.4 0.5 -0.4 -0.3 -0.2 -0.1 0 t/Ts 0.1 0.2 0.3 0.4 0.5 -0.4 -0.3 -0.2 -0.1 0 t/Ts 0.1 0.2 0.3 0.4 0.5 2 1 0 -1 -2 -0.5 2 1 0 -1 -2 -0.5 (d) All eye diagrams exhibit no ISI at the optimum sampling instant. The peak overshoot increases as the excess bandwidth decreases. The width of the eye opening (and hence, the immunity to timing offset) decreases as the excess bandwidth decreases. 5.34 4 2 0 -2 -4 -0.5 -0.4 -0.3 -0.2 -0.1 0 t/Ts 0.1 0.2 0.3 0.4 0.5 -0.4 -0.3 -0.2 -0.1 0 t/Ts 0.1 0.2 0.3 0.4 0.5 -0.4 -0.3 -0.2 -0.1 0 t/Ts 0.1 0.2 0.3 0.4 0.5 5 0 -5 -0.5 5 0 -5 -0.5 (d) All eye diagrams exhibit no ISI at the optimum sampling instant. The peak overshoot increases as the excess bandwidth decreases. The width of the eye opening (and hence, the immunity to timing offset) decreases as the excess bandwidth decreases. 
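The eye-diagram behavior described in Problems 5.33-5.34 can be reproduced directly. The sketch below is not part of the original solutions; it assumes NumPy and Matplotlib and uses the full raised-cosine pulse (the transmit square-root raised-cosine cascaded with its matched filter) as the composite response, written out explicitly rather than taken from a library.

```python
import numpy as np
import matplotlib.pyplot as plt

def raised_cosine(t, T, alpha):
    # r(t) = sinc(t/T) cos(pi*alpha*t/T) / (1 - (2*alpha*t/T)^2), with the
    # removable singularity at t = +/- T/(2*alpha) filled in by its limit.
    x = t / T
    denom = 1.0 - (2.0 * alpha * x) ** 2
    safe = np.abs(denom) > 1e-8
    out = np.empty_like(x)
    out[safe] = np.sinc(x[safe]) * np.cos(np.pi * alpha * x[safe]) / denom[safe]
    out[~safe] = (np.pi / 4.0) * np.sinc(1.0 / (2.0 * alpha))
    return out

N, span, alpha = 16, 6, 0.5                   # samples/symbol, truncation, excess BW
t = np.arange(-span * N, span * N + 1) / N
p = raised_cosine(t, 1.0, alpha)

rng = np.random.default_rng(3)
a = rng.choice([-1.0, 1.0], 400)              # binary PAM symbols
x = np.zeros(len(a) * N)
x[::N] = a
s = np.convolve(x, p)

# Fold the waveform into one-symbol traces; the symbol instant sits at t/Ts = 0.
start = 2 * span * N - N // 2                 # skip the filter start-up transient
segs = s[start:start + 300 * N].reshape(-1, N)
plt.plot((np.arange(N) - N // 2) / N, segs.T, color="0.6")
plt.xlabel("t/Ts")
plt.title("Eye diagram, 50% excess bandwidth")
plt.show()
```

Rerunning with alpha set to 1.0 and to 0.25 reproduces the trend noted in part (d): the eye stays open at t/Ts = 0, while the overshoot grows and the eye opening narrows as the excess bandwidth shrinks.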
5.35 magnitude (dB) part (a) 0 signal filter -20 -40 -5 0 frequency (cycles/symbol) 5 2 0 -2 -1 -0.5 0 t/Ts 0.5 1 magnitude (dB) part (b) 0 signal filter -20 -40 -5 0 frequency (cycles/symbol) 5 2 0 -2 -1 -0.5 0 t/Ts 0.5 1 magnitude (dB) part (c) 0 signal filter -20 -40 -5 0 frequency (cycles/symbol) 5 2 0 -2 -1 -0.5 0 t/Ts 0.5 1 (d) As the bandwidth of the channel decreases, signal distortion increases. The signal distortion manifests itself in the eye diagram as a narrowing of the eye opening and closure of the eye due to ISI. 5.36 magnitude (dB) part (a) 0 signal filter -20 -40 -5 0 frequency (cycles/symbol) 5 1 0 -1 -1 -0.5 0 t/Ts 0.5 1 magnitude (dB) part (b) 0 signal filter -20 -40 -5 0 frequency (cycles/symbol) 5 2 0 -2 -1 -0.5 0 t/Ts 0.5 1 magnitude (dB) part (c) 0 signal filter -20 -40 -5 0 frequency (cycles/symbol) 5 2 0 -2 -1 -0.5 0 t/Ts 0.5 1 (d) As the order of the filter increases, an increasing amount of energy is removed from the signal. This increases distortion seen as eye narrowing and eye closure due to ISI. 5.37 (a) ω 0τ = (b) π 2 π 2ω 0 ⇒ τ = π , then the output of the phase shifter is 2ω0 cos(ω x (t − τ )) = cos(ω x t − ω xτ ) = cos(ω x t − θ ) If the LO is operating at ω x rad/s with τ = Using ω x = ω 0 ± ∆ω , the phase shift may be expressed as θ = ω xτ = Now, 1 = π ω 0 ± ∆ω π ⎛ ∆ω ⎞ ⎟ = ⎜⎜1 ± ω 0 ⎟⎠ 2 ω0 2⎝ π 180 rad. So we want π 2 − π 180 ≤θ ≤ π 2 + π 180 which implies π⎛ π⎛ 1 ⎞ 1 ⎞ ⎜ 1 − ⎟ ≤ θ ≤ ⎜1 + ⎟ 2 ⎝ 90 ⎠ 2 ⎝ 90 ⎠ Case 1: θ = ∆ω ⎞ ⎟ ⎜⎜1 + 2⎝ ω 0 ⎟⎠ π⎛ ω ω 1 ⎞ π ⎛ ∆ω ⎞ π ⎛ 1 ⎟⎟ ≤ ⎜1 + ⎞⎟ ⇒ − 0 ≤ ∆ω ≤ 0 ⎜1 − ⎟ ≤ ⎜⎜1 + ω 0 ⎠ 2 ⎝ 90 ⎠ 2 ⎝ 90 ⎠ 2 ⎝ 90 90 π⎛ Case 2: θ = ∆ω ⎞ ⎟ ⎜⎜1 − 2⎝ ω 0 ⎟⎠ π⎛ ω ω 1 ⎞ π ⎛ ∆ω ⎞ π ⎛ 1 ⎞ ⎟⎟ ≤ ⎜1 + ⎟ ⇒ − 0 ≤ ∆ω ≤ 0 ⎜1 − ⎟ ≤ ⎜⎜1 − ω 0 ⎠ 2 ⎝ 90 ⎠ 2 ⎝ 90 ⎠ 2 ⎝ 90 90 π⎛ 5.38 (a) ω 0τ = π 2 ⇒ τ = π 2ω 0 π⎞ ⎛ The output of the phase shifter is sin ⎜ ω 0 t − ⎟ = − cos(ω 0 t ) 2⎠ ⎝ (b) The system outputs are − cos(ω0t ) sin (ω0t ) but − cos(ω 0 t ) = cos(ω 0 t + π ) and sin (ω 0 t ) = − sin (ω 0 t + π ) . So the system output may be expressed as cos(ω0t + θ ) − sin (ω0t + θ ) with θ = π. This system does indeed have the desired relationship. 5.39 (a) θ = 3π 2 (b) ω 0τ = 3π 2 ⇒ τ = 3π 2ω 0 (c) The delay is 3 times the delay from Exercise 5.37. Furthermore, the tunable range over which the phase error does not exceed 1° is less than that for Exercise 5.37. 
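Problems 5.40 through 5.47, which follow, all use the same discrete-time I/Q modulator: look-up tables map bits to I and Q amplitudes, each rail is upsampled by N and pulse shaped, and the shaped rails multiply sqrt(2) cos(Omega0 n) and -sqrt(2) sin(Omega0 n). The QPSK sketch below is a preview of that structure and is not part of the original solutions; it assumes NumPy and substitutes a rectangular pulse for whatever band-limited pulse shape the problems specify.

```python
import numpy as np

# QPSK look-up table: a bit pair selects an (I, Q) amplitude pair (cf. Problem 5.40).
LUT = {(0, 0): (-1.0, -1.0), (0, 1): (-1.0, +1.0),
       (1, 0): (+1.0, -1.0), (1, 1): (+1.0, +1.0)}

N = 8                                     # samples per symbol
Omega0 = np.pi / 4                        # discrete-time carrier, rad/sample
p = np.ones(N)                            # rectangular pulse stand-in (assumption)

bits = [0, 1, 1, 0, 1, 1, 0, 1]
pairs = [LUT[(bits[2 * k], bits[2 * k + 1])] for k in range(len(bits) // 2)]
a0 = np.array([c[0] for c in pairs])      # I-channel symbols
a1 = np.array([c[1] for c in pairs])      # Q-channel symbols

def shape(symbols):
    # Upsample by N and apply the pulse-shaping filter.
    up = np.zeros(len(symbols) * N)
    up[::N] = symbols
    return np.convolve(up, p)[:len(up)]

I, Q = shape(a0), shape(a1)
n = np.arange(len(I))
s = I * np.sqrt(2) * np.cos(Omega0 * n) - Q * np.sqrt(2) * np.sin(Omega0 * n)
print(np.round(s[:N], 3))                 # first symbol interval of s(nT)
```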
5.40 (a) A = √2 ILUT 00 01 10 11 I(t) -1 -1 +1 +1 ↑N p(nT) 2 cos(Ω 0 n ) bits s(t) ADC − 2 sin (Ω 0 n ) QLUT 00 01 10 11 -1 +1 -1 +1 ↑N p(nT) Q(t) (b) I(t) 0.05 0 -0.05 0 0.5 1 1.5 2 t/Ts 2.5 3 3.5 4 0 0.5 1 1.5 2 t/Ts 2.5 3 3.5 4 0 0.5 1 1.5 2 t/Ts 2.5 3 3.5 4 Q(t) 0.05 0 -0.05 s(t) 0.05 0 -0.05 (c) x(t) x(kT) p(−nT) n = kN 2 cos(Ω 0 n ) r(t) decision DAC N samples/symbol − 2 sin (Ω 0 n ) y(t) y(kT) p(−nT) n = kN (d) 1.5 1 0.5 0 -0.5 -1 -1.5 -1.5 -1 -0.5 0 0.5 1 1.5 k 0 1 2 3 ----------------------------------x(k) -1.01 +1.02 +1.11 -0.03 y(k) +1.07 -0.99 +1.00 +0.07 ----------------------------------a0(k) -1.00 +1.00 +1.00 -1.00 a1(k) +1.00 -1.00 +1.00 +1.00 ----------------------------------bits 01 10 11 01 5.41 (a) A = 0.05 = 0.2236 LUT 00 01 10 11 bits -0.6708 -0.2236 +0.2708 +0.2236 ↑N p(nT) s(t) ADC 2 cos(Ω 0 n ) (b) 0.02 I(t) 0.01 0 -0.01 -0.02 0 0.5 1 1.5 2 t/Ts 2.5 3 3.5 4 0 0.5 1 1.5 2 t/Ts 2.5 3 3.5 4 0.04 s(t) 0.02 0 -0.02 -0.04 (c) x(t) r(t) DAC N samples/symbol 2 cos(Ω n ) 0 x(kT) p(−nT) decision n = kN (d) 1 0.8 0.6 0.4 0.2 0 -0.2 -0.4 -0.6 -0.8 -1 -3 -2 -1 0 1 2 k 0 1 2 3 ----------------------------------x(k) -0.03 +0.55 +1.53 -2.10 ----------------------------------a(k) -0.22 +0.67 +0.67 -0.67 ----------------------------------bits 01 10 10 00 5.42 (a) A = 2 ILUT 00 01 10 11 I(t) 0 -√3 +√3 0 ↑N p(nT) 2 cos(Ω 0 n ) bits s(t) ADC − 2 sin (Ω 0 n ) QLUT 00 01 10 11 0 +1 +1 −2 ↑N p(nT) Q(t) (b) I(t) 0.05 0 -0.05 0 0.5 1 1.5 2 t/Ts 2.5 3 3.5 4 0 0.5 1 1.5 2 t/Ts 2.5 3 3.5 4 0 0.5 1 1.5 2 t/Ts 2.5 3 3.5 4 Q(t) 0.05 0 -0.05 s(t) 0.1 0 -0.1 (c) x(t) x(kT) p(−nT) n = kN 2 cos(Ω 0 n ) r(t) decision DAC N samples/symbol − 2 sin (Ω 0 n ) y(t) y(kT) p(−nT) n = kN (d) 2.5 2 1.5 1 0.5 0 -0.5 -1 -1.5 -2 -2.5 -2 -1 0 1 2 k 0 1 2 3 ----------------------------------x(k) -0.03 -0.01 +0.01 +0.03 y(k) +1.97 +1.97 +1.97 +1.97 ----------------------------------a0(k) -1.73 +0.00 +0.00 +1.73 a1(k) +1.00 +0.00 +0.00 +1.00 ----------------------------------bits 01 00 00 10 5.43 (a) A = 2 ILUT 000 001 010 011 100 101 110 111 +√2 +1 −1 0 +1 0 -√2 −1 I(t) ↑N p(nT) 2 cos(Ω 0 n ) bits s(t) ADC QLUT 000 001 010 011 100 101 110 111 − 2 sin (Ω 0 n ) 0 +1 +1 +√2 −1 -√2 0 −1 ↑N p(nT) Q(t) (b) I(t) 0.05 0 -0.05 0 0.5 1 1.5 2 t/Ts 2.5 3 3.5 4 0 0.5 1 1.5 2 t/Ts 2.5 3 3.5 4 0 0.5 1 1.5 2 t/Ts 2.5 3 3.5 4 Q(t) 0.05 0 -0.05 s(t) 0.05 0 -0.05 (c) x(t) x(kT) p(−nT) n = kN 2 cos(Ω 0 n ) r(t) decision DAC N samples/symbol − 2 sin (Ω 0 n ) y(t) y(kT) p(−nT) n = kN (d) 2 1.5 1 0.5 0 -0.5 -1 -1.5 -2 -2 -1 0 1 2 k 0 1 2 3 ----------------------------------x(k) -1.30 +1.51 -0.09 +0.65 y(k) +1.64 -1.49 -0.91 +0.07 ----------------------------------a0(k) -1.00 +1.00 +0.00 +1.41 a1(k) +1.00 -1.00 -1.41 +0.00 ----------------------------------bits 010 100 101 000 5.44 (a) A = 2 ILUT 000 001 010 011 100 101 110 111 +2√2 0 +2 +2√2 −2 -2√2 0 -2√2 I(t) ↑N p(nT) 2 cos(Ω 0 n ) bits s(t) ADC QLUT 000 001 010 011 100 101 110 111 − 2 sin (Ω 0 n ) +2√2 +2 0 −2√2 0 +2√2 −2 −2√2 ↑N p(nT) Q(t) (b) I(t) 0.1 0 -0.1 0 0.5 1 1.5 2 t/Ts 2.5 3 3.5 4 0 0.5 1 1.5 2 t/Ts 2.5 3 3.5 4 0 0.5 1 1.5 2 t/Ts 2.5 3 3.5 4 Q(t) 0.1 0 -0.1 s(t) 0.2 0 -0.2 (c) x(t) x(kT) p(−nT) n = kN 2 cos(Ω 0 n ) r(t) decision DAC N samples/symbol − 2 sin (Ω 0 n ) y(t) y(kT) p(−nT) n = kN (d) 8 6 4 2 0 -2 -4 -6 -8 -5 0 5 k 0 1 2 3 ----------------------------------x(k) -2.00 +1.00 +7.20 -1.64 y(k) -2.01 +1.64 +0.00 +1.25 ----------------------------------a0(k) -2.83 +0.00 +2.00 -2.00 a1(k) -2.83 +2.00 +0.00 +0.00 ----------------------------------bits 111 001 010 100 5.45 
(a) A = 2 ILUT 000 001 010 011 100 101 110 111 +2 -1 +1 +2 -1 -2 +1 −2 I(t) ↑N p(nT) 2 cos(Ω 0 n ) bits s(t) ADC QLUT 000 001 010 011 100 101 110 111 − 2 sin (Ω 0 n ) +2 +1 +1 -2 −1 +2 -1 −2 ↑N p(nT) Q(t) (b) I(t) 0.05 0 -0.05 0 0.5 1 1.5 2 t/Ts 2.5 3 3.5 4 0 0.5 1 1.5 2 t/Ts 2.5 3 3.5 4 0 0.5 1 1.5 2 t/Ts 2.5 3 3.5 4 Q(t) 0.05 0 -0.05 s(t) 0.1 0 -0.1 (c) x(t) x(kT) p(−nT) n = kN 2 cos(Ω 0 n ) r(t) decision DAC − 2 sin (Ω 0 n ) N samples/symbol y(t) y(kT) p(−nT) n = kN (d) 3 2 1 0 -1 -2 -3 -3 -2 -1 0 1 2 3 k 0 1 2 3 ----------------------------------x(k) -1.32 +1.51 +0.71 -0.05 y(k) -1.32 -1.51 +1.32 +0.05 ----------------------------------a0(k) -1.00 +2.00 +1.00 -1.00 a1(k) -1.00 -2.00 +1.00 +1.00 ----------------------------------bits 100 011 010 001 5.46 (a) A = 0.3 ILUT 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111 -0.9 -0.9 -0.9 -0.9 -0.3 -0.3 -0.3 −0.3 +0.9 +0.9 +0.9 +0.9 +0.3 +0.3 +0.3 +0.3 I(t) ↑N p(nT) 2 cos(Ω 0 n ) bits ADC QLUT 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111 +0.9 +0.3 -0.9 -0.3 +0.9 +0.3 -0.9 −0.3 +0.9 +0.3 -0.9 -0.3 +0.9 +0.3 -0.9 -0.3 − 2 sin (Ω 0 n ) ↑N p(nT) Q(t) s(t) (b) I(t) 0.02 0 -0.02 0 0.5 1 1.5 2 t/Ts 2.5 3 3.5 4 0 0.5 1 1.5 2 t/Ts 2.5 3 3.5 4 0 0.5 1 1.5 2 t/Ts 2.5 3 3.5 4 Q(t) 0.02 0 -0.02 s(t) 0.05 0 -0.05 (c) x(t) x(kT) p(−nT) n = kN 2 cos(Ω 0 n ) r(t) decision DAC N samples/symbol − 2 sin (Ω 0 n ) y(t) y(kT) p(−nT) n = kN (d) 1 0.8 0.6 0.4 0.2 0 -0.2 -0.4 -0.6 -0.8 -1 -1 -0.5 0 0.5 1 k 0 1 2 3 ----------------------------------x(k) -0.03 +0.51 -0.09 +0.65 y(k) +0.64 -0.49 -0.91 +0.07 ----------------------------------a0(k) -0.30 +0.30 -0.30 +0.90 a1(k) +0.90 -0.30 -0.90 +0.30 ----------------------------------bits 0100 1111 0110 1001 5.47 (a) A = 0.3 ILUT 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111 0 -0.9 0 +0.9 -0.3 +0.3 -1.5 -0.9 +0.9 +1.5 -0.3 +0.3 0 -0.9 +0.9 0 I(t) ↑N p(nT) 2 cos(Ω 0 n ) bits ADC QLUT 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111 +1.5 +0.9 +0.9 +0.9 +0.3 +0.3 0 0 0 0 -0.3 -0.3 -0.9 -0.9 -0.9 -1.5 − 2 sin (Ω 0 n ) ↑N p(nT) Q(t) s(t) (b) I(t) 0.05 0 -0.05 0 0.5 1 1.5 2 t/Ts 2.5 3 3.5 4 0 0.5 1 1.5 2 t/Ts 2.5 3 3.5 4 0 0.5 1 1.5 2 t/Ts 2.5 3 3.5 4 Q(t) 0.05 0 -0.05 s(t) 0.1 0 -0.1 (c) x(t) x(kT) p(−nT) n = kN 2 cos(Ω 0 n ) r(t) decision DAC N samples/symbol − 2 sin (Ω 0 n ) y(t) y(kT) p(−nT) n = kN (d) 2 1.5 1 0.5 0 -0.5 -1 -1.5 -2 -2 -1 0 1 2 k 0 1 2 3 ----------------------------------x(k) +0.90 -1.95 +1.90 +0.05 y(k) +0.54 +0.01 -1.91 -0.94 ----------------------------------a0(k) +0.90 -1.50 +0.90 +0.00 a1(k) +0.90 +0.00 -0.90 -0.90 ----------------------------------bits 0011 0110 1110 1100 5.48 5.49 5.50 5.51 5.52 5.53 E avg E avg 4r12 + 12r22 r12 + 3r22 = = 16 4 2 2 2 4r + 12r2 + 16r3 r 2 + 3r22 + 4r32 = 1 = 1 32 8 5.54 r2 = 2.7236 r1 ∆θ = 0 maximum d min = 0.1462 3 2 1 0 -1 -2 -3 -3 -2 -1 0 1 2 3 5.55 r2 = 2.8242 r1 r3 = 4.2143 r1 ∆θ 1 = 0 ∆θ 2 = 0 maximum d min = 0.073658 5 4 3 2 1 0 -1 -2 -3 -4 -5 -5 0 5 5.56 E avg E avg ( ) ( ) ( ( ) ( ) ) ( ) 4 2 A 2 + 4 9 A 2 + 4 25 A 2 + 4 18 A 2 27 2 = = A 16 2 4 2 A2 + 4 9 A2 11 = = A2 8 2 5.57 r1 = 2 A θ1 = π 4 θ2 = 0 π r3 = 3 2 A θ 3 = r2 = 3 A r1 = 5 A 4 θ2 = 0 r1 = 2 A θ1 = r2 = 3 A π 4 θ2 = 0 5.58 5.59 5.60 5.61 (a) 1.5 (1,1) 1 (1,1) 0.5 0 -0.5 -1 (-1,-1) -1.5 -1.5 -1 (1,-1) -0.5 0 0.5 1 1.5 (b) k 0 1 2 3 -------------------------------------------x(kTs) +1.0 -1.0 +1.0 +1.0 y((k+0.5)Ts) -1.0 -1.0 +1.0 +1.0 
-------------------------------------------a0(k) +1.0 -1.0 +1.0 +1.0 a1(k) -1.0 -1.0 +1.0 +1.0 -------------------------------------------bits 10 00 11 11 5.62 For each case, the bit rate is Rb = 1 N M k where k is the number of data bits per subcarrier. TM (a) QPSK (k = 2) Rb = 100000 symbols subcarriers bits × 64 ×2 = 12.8 Mbits/s sec symbol subcarrier (b) 16-QAM (k = 4) Rb = 100000 symbols subcarriers bits × 64 ×4 = 25.6 Mbits/s sec symbol subcarrier (c) 64-QAM with rate-2/3 code: k = 6 × Rb = 100000 2 =4 3 symbols subcarriers bits × 64 ×4 = 25.6 Mbits/s sec symbol subcarrier 5.63 Rb = NM 1 NM k TM Rb 20 × 10 6 = = = 25 1 200000 × 4 k TM 5.64 Assign a point from a constellation with a small number of points (e.g., BPSK, QPSK) to those subcarriers that are attenuated a lot by the channel. Assign a point from a constellation with a large number of points (e.g., 16-QAM, 64-QAM) to those subcarriers that are not attenuated a lot by the channel. The selection criterion would be the maximum allowed bit error rate. 10.1 (a) I (t ) 2 cos(ω 0 t ) − Q(t ) 2 (1 + ε )sin (ω 0 t + φ ) = I (t ) 2 cos(ω 0 t ) − Q(t ) 2 (1 + ε )[sin (ω 0 t ) cos(φ ) + cos(ω 0 t )sin (φ )] = I (t ) 2 cos(ω 0 t ) − Q(t ) 2 (1 + ε ) cos(φ )sin (ω 0 t ) − Q(t ) 2 (1 + ε )sin (φ ) cos(ω 0 t ) = [I (t ) − Q(t )(1 + ε )sin (φ )] 2 cos(ω 0 t ) − Q(t )(1 + ε ) cos(φ ) 2 sin (ω 0 t ) ~ ~ I (t ) Q (t ) Putting the relationships in matrix form gives the desired result. (b) 0.5 0.4 0.3 0.2 Q(nT) 0.1 0 -0.1 -0.2 -0.3 -0.4 -0.5 -0.5 0 I(nT) 0.5 (c) 0.5 0.4 0.3 0.2 Q(nT) 0.1 0 -0.1 -0.2 -0.3 -0.4 -0.5 -0.5 0 I(nT) 0.5 (d) The phase trajectory is skewed, the same way a circle is skewed to create an ellipse. (e) ε = 0 φ = 20 deg 0.5 0.5 0.4 0.4 0.3 0.3 0.2 0.2 0.1 0.1 Q(nT) Q(nT) ε = 0.1 φ = 0 deg 0 0 -0.1 -0.1 -0.2 -0.2 -0.3 -0.3 -0.4 -0.4 -0.5 -0.5 -0.5 0 I(nT) 0.5 -0.5 0 I(nT) 0.5 The two plots above illustrate the effects: The amplitude imbalance stretches the axis. The phase imbalance rotates the Q-axis but not the I-axis creating the pronounced skewing effect observed in part (c). 10.2 (a) I (t ) 2 cos(ω 0 t ) − Q(t ) 2 (1 + ε )sin (ω 0 t + φ ) = I (t ) 2 cos(ω 0 t ) − Q(t ) 2 (1 + ε )[sin (ω 0 t ) cos(φ ) + cos(ω 0 t )sin (φ )] = I (t ) 2 cos(ω 0 t ) − Q(t ) 2 (1 + ε ) cos(φ )sin (ω 0 t ) − Q(t ) 2 (1 + ε )sin (φ ) cos(ω 0 t ) = [I (t ) − Q(t )(1 + ε )sin (φ )] 2 cos(ω 0 t ) − Q(t )(1 + ε ) cos(φ ) 2 sin (ω 0 t ) ~ ~ I (t ) Q (t ) Putting the relationships in matrix form gives the desired result. (b) 2 1.5 1 Q(nT) 0.5 0 -0.5 -1 -1.5 -2 -2 -1 0 I(nT) 1 2 (c) 2 1.5 1 Q(nT) 0.5 0 -0.5 -1 -1.5 -2 -2 -1 0 I(nT) 1 2 (d) The phase trajectory is skewed, the same way a circle is skewed to create an ellipse. (e) ε = 0 φ = 20 deg 2 1.5 1.5 1 1 0.5 0.5 Q(nT) Q(nT) ε = 0.1 φ = 0 deg 2 0 0 -0.5 -0.5 -1 -1 -1.5 -1.5 -2 -2 -1 0 I(nT) 1 2 -2 -2 -1 0 I(nT) 1 2 The two plots above illustrate the effects: The amplitude imbalance stretches the axis. The phase imbalance rotates the Q-axis but not the I-axis creating the pronounced skewing effect observed in part (c). 10.3 (a) I (t ) 2 cos(ω 0 t ) − Q(t ) 2 (1 + ε )sin (ω 0 t + φ ) = I (t ) 2 cos(ω 0 t ) − Q(t ) 2 (1 + ε )[sin (ω 0 t ) cos(φ ) + cos(ω 0 t )sin (φ )] = I (t ) 2 cos(ω 0 t ) − Q(t ) 2 (1 + ε ) cos(φ )sin (ω 0 t ) − Q(t ) 2 (1 + ε )sin (φ ) cos(ω 0 t ) = [I (t ) − Q(t )(1 + ε )sin (φ )] 2 cos(ω 0 t ) − Q(t )(1 + ε ) cos(φ ) 2 sin (ω 0 t ) ~ ~ I (t ) ~ ~ = I (t ) 2 cos(ω 0 t ) − Q (t ) 2 sin (ω 0 t ) Q (t ) ~ ~ The outputs of the matched filters are based on I (t ) and Q (t ) . 
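Before the matched-filter analysis of Problems 10.3-10.4 is completed below, the imbalance matrix derived in Problems 10.1-10.2 can be exercised numerically. A minimal sketch, assuming NumPy and not part of the original solutions, applies the matrix to points on the unit circle and to a QPSK constellation, reproducing the stretching and skewing described in parts (d) and (e) above.

```python
import numpy as np

def imbalance(eps, phi):
    # [I~; Q~] = [[1, -(1+eps) sin(phi)], [0, (1+eps) cos(phi)]] [I; Q],
    # the matrix form found in Problems 10.1(a) and 10.2(a).
    return np.array([[1.0, -(1.0 + eps) * np.sin(phi)],
                     [0.0, (1.0 + eps) * np.cos(phi)]])

K = imbalance(eps=0.1, phi=np.deg2rad(20.0))

# Points on the unit circle (the ideal phase trajectory) map to a skewed ellipse.
theta = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
circle = np.vstack([np.cos(theta), np.sin(theta)])
print(np.round(K @ circle, 3))

# A QPSK constellation is distorted the same way.
qpsk = np.array([[1.0, 1.0, -1.0, -1.0], [1.0, -1.0, 1.0, -1.0]])
print(np.round(K @ qpsk, 3))
```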
Thus we have ~ ~ x (t ) = I (t ) ∗ p(− t ) = [I (t ) − Q(t )(1 + ε )sin (φ )] ∗ p(− t ) = I (t ) ∗ p(− t ) − (1 + ε )sin (φ )Q(t ) ∗ p(− t ) = x(t ) − (1 + ε )sin (φ ) y (t ) ~ ~ y (t ) = Q (t ) ∗ p(− t ) = (1 + ε ) cos(φ )Q(t ) ∗ p(− t ) = (1 + ε ) cos(φ ) y (t ) Organizing these expressions into a matrix equation produces the desired result. (b) Using x(t ) = ∑ a0 (m )rp (t − mT ) and y (t ) = ∑ a1 (m )rp (t − mT ) as substitutions in the m m equations from part (a), we have ~ x (t ) = ∑ [a 0 (m ) − (1 + ε )sin (φ )a1 (m )]rp (t − mT ) m ~ y (t ) = ∑ (1 + ε ) cos(φ )a1 (m )rp (t − mT ) m from which we obtain ~ x (kT ) = ∑ [a 0 (m ) − (1 + ε )sin (φ )a1 (m )]rp (kT − mT ) = a0 (k ) − (1 + ε )sin (φ )a1 (k ) m ~ y (kT ) = ∑ (1 + ε ) cos(φ )a1 (m )rp (kT − mT ) = (1 + ε ) cos(φ )a1 (k ) m Organizing these expressions into a matrix produces x (kT )⎤ ⎡1 − (1 + ε )sin (φ )⎤ ⎡a 0 (k )⎤ ⎡~ ⎥ ⎢~ ⎥=⎢ ⎥⎢ ⎣ y (kT )⎦ ⎣0 (1 + ε ) cos(φ ) ⎦ ⎣ a1 (k )⎦ (c) 1 y(kT) 0.5 0 -0.5 -1 -1 -0.5 0 x(kT) 0.5 1 -1 -0.5 0 x(kT) 0.5 1 (d) 1 y(kT) 0.5 0 -0.5 -1 (e) The phase trajectory is skewed, the same way a circle is skewed to create an ellipse. (f) ε = 0.1 φ = 0 deg 1 y(kT) 0.5 0 -0.5 -1 -1 -0.5 0 x(kT) 0.5 1 0.5 1 ε = 0 φ = 20 deg 1 y(kT) 0.5 0 -0.5 -1 -1 -0.5 0 x(kT) The two plots above illustrate the effects: The amplitude imbalance stretches the axis. The phase imbalance rotates the Q-axis but not the I-axis creating the pronounced skewing effect observed in part (d). 10.4 (c) 4 3 2 y(kT) 1 0 -1 -2 -3 -4 -4 -2 0 x(kT) 2 4 -4 -2 0 x(kT) 2 4 (d) 4 3 2 y(kT) 1 0 -1 -2 -3 -4 (e) The phase trajectory is skewed, the same way a circle is skewed to create an ellipse. (f) ε = 0 φ = 20 deg 4 4 3 3 2 2 1 1 y(kT) y(kT) ε = 0.1 φ = 0 deg 0 0 -1 -1 -2 -2 -3 -3 -4 -4 -4 -2 0 x(kT) 2 4 -4 -2 0 x(kT) 2 4 The two plots above illustrate the effects: The amplitude imbalance stretches the axis. The phase imbalance rotates the Q-axis but not the I-axis creating the pronounced skewing effect observed in part (d). 10.5 ⎡ ⎛m⎞ ⎛ m ⎞⎤ ~ s (nT ) = ∑ ⎢a0 ⎜ ⎟ + ja1 ⎜ ⎟⎥δ (n − Nm ) ∗ [ p c (nT ) + jp s (nT )] ⎝ N ⎠⎦ ⎝N⎠ m ⎣ ⎡ ⎛m⎞ ⎤ ⎛m⎞ = ∑ ⎢a0 ⎜ ⎟δ (n − Nm ) ∗ p c (nT ) − a1 ⎜ ⎟δ (n − Nm ) ∗ p s (nT )⎥ ⎝N⎠ ⎝N⎠ m ⎣ ⎦ ⎡ ⎛m⎞ ⎤ ⎛m⎞ + j ∑ ⎢a 0 ⎜ ⎟δ (n − Nm ) ∗ p s (nT ) − a1 ⎜ ⎟δ (n − Nm ) ∗ p c (nT )⎥ ⎝N⎠ ⎝N⎠ m ⎣ ⎦ Retaining the real part produces the desired result. 10.6 Start with the diagram in Figure 10.1.2 (d) and consider first the top branch: a0 (k ) pc (nT ) ↑N Let Pc ( z ) be the z-transform of the filter coefficients p c (nT ) and let ( ) ( ) ( ) ( ) Pc ( z ) = Pc ,0 z N + z −1 Pc ,1 z N + z −2 Pc , 2 z N + " + z − ( N −1) Pc , N −1 z N be the polyphase partition of Pc ( z ) . A direct application of the discussion in Section 3.2 surrounding Figure 3.2.9 produces the equivalent system Pc ,0 ( z ) a0 (k ) Pc ,1 ( z ) Pc , 2 ( z ) # Pc, N −1 ( z ) The same process is applied to the branch involving a1 (k ) and p s (nT ) . Putting it all together produces the desired block diagram. 
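The polyphase equivalence invoked in Problems 10.5-10.6 (filtering the upsampled-by-N symbol stream with P(z) is identical to running the symbols through the N polyphase subfilters at the symbol rate and interleaving the outputs) can be checked directly. A minimal sketch with an arbitrary test filter, assuming NumPy; not part of the original solutions.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 4                                    # upsample factor
p = rng.normal(size=24)                  # test pulse-shaping filter (length a multiple of N)
a = rng.normal(size=50)                  # symbol-rate input a0(k)

# Direct form: upsample by N, then filter with p(nT).
x = np.zeros(len(a) * N)
x[::N] = a
direct = np.convolve(x, p)[:len(x)]

# Polyphase form: subfilter r holds p(r), p(N + r), p(2N + r), ...;
# each subfilter runs at the symbol rate and the outputs are interleaved.
poly = np.zeros(len(x))
for r in range(N):
    sub = p[r::N]
    poly[r::N] = np.convolve(a, sub)[:len(a)]

print(np.max(np.abs(direct - poly)))     # ~1e-16: the two forms agree
```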
10.7 Consider first the processing involving subfilter r: a0 (k ) + ja1 (k ) Pr ( z ) e jΩ 0 r Because the coefficients p (nT ) are purely real, the filtering process has the equivalent form Pr ( z ) a0 (k ) Pr ( z ) a1 (k ) b0 (k ) b1 (k ) e jΩ 0 r j The desired output is { } Re [b0 (k ) + jb1 (k )]e jΩ0 r = Re{[b0 (k ) + jb1 (k )][cos(Ω 0 r ) + j sin (Ω 0 r )]} = b0 (k ) cos(Ω 0 r ) − b1 (k )sin (Ω 0 r ) which is produced by the following block diagram a0 (k ) Pr (z ) cos(Ω 0 r ) + − a1 (k ) Pr (z ) sin (Ω 0 r ) Applying the same procedure to all subfilters generates the desired result. 10.8 We need to show that Pc ,r ( z ) = Pr ( z ) cos(rΩ 0 ) and Ps ,r (z ) = Pr ( z )sin (rΩ 0 ) . From Equation (10.14) we have Gr ( z ) = 2 p(r )e jΩ0 r z − r + 2 p( N + r )e jΩ 0 ( N + r ) z − ( N + r ) + 2 p(2 N + r )e jΩ0 (2 N + r ) z − (2 N + r ) + " Because Ω 0 = 2π × integer , e jΩ0 ( N + r ) = e jΩ0 so that N Gr ( z ) = 2 p (r )e jΩ0 r z − r + 2 p( N + r )e jΩ 0 r z − ( N + r ) + 2 p (2 N + r )e jΩ0 r z − (2 N + r ) + " [ ] = 2e jΩ0 r p (r )z − r + p ( N + r )z −( N + r ) + p (2 N + r )z −(2 N + r ) + " = 2e jΩ0 r Pr ( z ) which implies that g r (k ) = 2 p r (k )e jΩ 0 r . Now, because pr(k) is purely real, we have { } (k ) = Im{g (k )} = Im{ 2 p (k )e } = p c ,r (k ) = Re{g r (k )} = Re 2 p r (k )e jΩ0 r = 2 p r (k ) cos(Ω 0 r ) p s ,r jΩ 0 r r which proves the result. r 2 p r (k )sin (Ω 0 r ) 10.9 [ ] r (nT ) ∗ g (nT ) = I r (nT ) 2 cos(Ω 0 n ) − Qr (nT ) 2 sin (Ω 0 n ) ∗ g (nT ) { } ) ⎤⎥ ∗ g (nT ) + ([I (nT ) + jQ (nT )]e = 2 Re [I r (nT ) + jQr (nT )]e jΩ0 n ∗ g (nT ) jΩ 0 n ∗ ⎡ [I (nT ) + jQ (nT )]e jΩ0 n r r r r = 2⎢ 2 ⎢⎣ ⎥⎦ 1 [I r (nT ) + jQr (nT )]e jΩ0n ∗ g (nT ) + 1 [I r (nT ) − jQr (nT )]e − jΩ0n ∗ g (nT ) = 2 2 = [I r (nT ) + jQr (nT )]e jΩ0 n ∗ p(− nT )e jΩ0 n + [I r (nT ) − jQr (nT )]e − jΩ0 n ∗ p (− nT )e jΩ0 n first term = ∑ [I r (kT ) + jQr (kT )]e jΩ0 k p ((n + k )T )e jΩ0 (n − k ) k = e jΩ0 n ∑ [I r (kT ) + jQr (kT )] p((n + k )T ) k ⎡ ⎤ ⎢ ⎥ = e jΩ0 n ⎢∑ I r (kT ) p ((n + k )T ) + j ∑ Qr (kT ) p ((n + k )T )⎥ k k ⎥ ⎢ x ( nT ) y ( nT ) ⎣⎢ ⎦⎥ = e jΩ0 n [x(nT ) + jy (nT )] second term = ∑ [I r (kT ) − jQr (kT )]e − jΩ 0 k p((n + k )T )e jΩ0 (n − k ) k = e jΩ0 n ∑ [I r (kT ) + jQr (kT )]e − j 2 Ω0 k p((n + k )T ) k =0 The second term is zero because Ir(nT), Qr(nT), p(nT) are low-pass sequences. 10.10 [ ] r (nT ) ∗ g (nT ) = I r (nT ) 2 cos(Ω 0 n ) − Qr (nT ) 2 sin (Ω 0 n ) ∗ g (nT ) { } ) ⎤⎥ ∗ g (nT ) + ([I (nT ) + jQ (nT )]e = 2 Re [I r (nT ) + jQr (nT )]e jΩ0 n ∗ g (nT ) jΩ 0 n ∗ ⎡ [I (nT ) + jQ (nT )]e jΩ0 n r r r r = 2⎢ 2 ⎢⎣ ⎥⎦ 1 [I r (nT ) + jQr (nT )]e jΩ0n ∗ g (nT ) + 1 [I r (nT ) − jQr (nT )]e − jΩ0n ∗ g (nT ) = 2 2 jΩ 0 n jΩ 0 n = [I r (nT ) + jQr (nT )]e ∗ p(− nT )e + [I r (nT ) − jQr (nT )]e − jΩ 0 n ∗ p(− nT )e − jΩ 0 n first term = ∑ [I r (kT ) + jQr (kT )]e jΩ0 k p((n + k )T )e − jΩ 0 (n − k ) k = e − jΩ0 n ∑ [I r (kT ) + jQr (kT )]e j 2 Ω0 k p((n + k )T ) k =0 second term = ∑ [I r (kT ) − jQr (kT )]e − jΩ 0 k p((n + k )T )e − jΩ0 (n − k ) k = e − jΩ0 n ∑ [I r (kT ) − jQr (kT )] p ((n + k )T ) k ⎡ ⎤ ⎢ ⎥ = e − jΩ0 n ⎢∑ I r (kT ) p ((n + k )T ) − j ∑ Qr (kT ) p ((n + k )T )⎥ k k ⎢ ⎥ ⎢⎣ ⎥⎦ x ( nT ) y ( nT ) = e − jΩ0 n [x(nT ) − jy (nT )] The first term is zero because Ir(nT), Qr(nT), p(nT) are low-pass sequences. Note that the answer r (nT ) ∗ g (nT ) = e − jΩ0 n [x(nT ) − jy (nT )] is the complex-conjugate of the result of Exercise 10.9. 
10.11 (a) H ( jω ) = = 1 1 1 + + jωC R jω L ωRL ωRL[ωL − j (ω 2 RLC − R )] = 2 ωL + j (ω 2 RLC − R ) ω 2 L2 + (ω 2 RLC − R ) = ω 2 RL2 − jωRL(ω 2 RLC − R ) ω 2 L2 + (ω 2 RLC − R ) The imaginary part is zero when ωRL(ω 2 RLC − R ) = 0 . Thus we have ω 02 RLC − R = 0 ⇒ ω 02 = 2 1 LC (b) H ( jω ) = 2 ω 2 R 2 L2 ω 2 L2 + (ω 2 RLC − R ) 2 from which we have H ( jω 0 ) 2 1 2 2 R L LC = = R2 2 1 2 ⎛ 1 ⎞ L +⎜ RLC − R ⎟ LC ⎝ LC ⎠ Now the 3-dB frequency ω3 satisfies ω 32 R 2 L2 ω 32 L2 + (ω 32 RLC − R ) 2 R2 = 2 ( ⇒ 2ω 32 L2 = ω 32 L2 + ω 32 RLC − R ( ) ) 2 ⇒ R 2 L2 C 2ω 34 − 2 R 2 LC + L2 ω 32 + R 2 = 0 (2R ) 2 LC + L2 − 4 R 4 L2 C 2 ⇒ω = 2 R 2 L2 C 2 1 1 1 1 ⇒ ω 32 = + ± + 2 3 2 2 4 4 LC 2 R C 4R C R LC 2 3 2 R 2 LC + L2 ± (this is a quadratic equation in ω 32 ) 2 1 ⎡ 1 1 ⎤ + 2 ⎢ 2 2 LC ⎥⎦ R C ⎣ 4R C ⇒ ω 32 = 1 1 + ± LC 2 R 2 C 2 ⇒ ω 32 = 1 1 1 + ± 2 2 LC 2 R C RC 2 1 1 + 2 2 LC 4R C ⇒ ω 32 = 1 1 ± 2 2 RC 4R C X2 1 1 1 1 + + + 2 2 2 2 LC LC 4R C 4R C Y2 2 XY ⎡ 1 1 1 ⎤ ⇒ω = ⎢ ± + ⎥ 2 2 LC ⎦ 4R C ⎣ 2 RC 2 2 3 ⎡ 1 1 1 ⎤ ⇒ ω3 = ± ⎢ ± + ⎥ 2 2 LC ⎦ 4R C ⎣ 2 RC There are four solutions. Retaining only the positive roots produces the desired result (c) W3−dB = ω 2 − ω1 = ⎡ 1 1 1 1 1 1 ⎤ + + − − + + ⎢ ⎥ 2 RC 4 R 2 C 2 LC ⎣ 2 RC 4 R 2 C 2 LC ⎦ 1 1 1 1 1 1 + + + − + 2 2 2 2 LC 2 RC LC 2 RC 4R C 4R C 1 = RC = (d) maximum C: ω 0 = 2π × 540 × 10 3 C max = minimum C: ω 0 = 2π × 1600 × 10 3 C min = 1 (2π × 540 × 10 ) (0.25 × 10 ) 3 2 −3 1 = 0.3475nF (2π × 1600 × 10 ) (0.25 × 10 ) 3 2 −3 = 0.0396nF (e) 30 3-dB bandwidth (kHz) 25 20 15 10 5 0 400 600 800 1000 1200 carrier frequency (kHz) 1400 1600 The relationship between the tuned frequency (the carrier frequency) and the 3-dB bandwidth follows the equation (see parts (a) and (c)) W3−dB = L 2 ω0 R This shows that the 3-dB bandwidth increases as the square of the center frequency. The component values selected for this problem produce the desired 3-dB bandwidth at 1000 kHz (approximately the center of the band). For carrier frequencies below 1000 kHz, the 3-dB bandwidth is too narrow; for carrier frequencies above 1000 kHz, the 3-dB bandwidth is too high. 10.12 For high side mixing, f LO = f c + f 0 . The image frequency on the positive frequency axis is the frequency resulting from the difference. That is, the image frequency satisfies f i − f LO = f 0 Substituting, we have f i = f 0 + f LO = f 0 + ( f c + f 0 ) = f c + 2 f 0 10.13 (a) Construct the diagram below where, for low-side mixing, f i = f c − 2 f IF . image rejection BPF BW f fi fi + fc B 2 Let BW be the bandwidth of the image rejection filter. Then, from the diagram, we have BW B⎞ B B ⎛ = f c − ⎜ f c − 2 f IF + ⎟ = f c − f c + 2 f IF − = 2 f IF − 2 2⎠ 2 2 ⎝ ⇒ BW = 4 f IF − B (b) Construct the diagram below where, for high-side mixing, f i = f c + 2 f IF . image rejection BPF BW f fc fi B 2 Let BW be the bandwidth of the image rejection filter. Then, from the diagram, we have fi − BW ⎛ B⎞ B B = ⎜ f c + 2 f IF − ⎟ − f c = f c − f c + 2 f IF − = 2 f IF − 2 2⎠ 2 2 ⎝ ⇒ BW = 4 f IF − B (c) Both expressions are the same. This issue is not the deciding factor in selecting between highside mixing and low-side mixing. 10.14 (a) For low-side mixing, the image frequency is below the carrier frequency. Consequently the image rejection filter needs to be tunable if, when selecting the highest channel, the image frequency is in the band. When selecting the highest channel ( f c = 1600 kHz), the image frequency is f i = f c − 2 f IF = 1600 − 2 × 455 = 690 kHz which is within the AM band. 
Thus, YES: the image rejection filter must be tunable. (b) For high-side mixing, the image frequency is above the carrier frequency. Consequently the image rejection filter needs to be tunable if, when selecting the lowest channel, the image frequency is in the band. When selecting the lowest channel ( f c = 540 kHz), the image frequency is f i = f c + 2 f IF = 540 + 2 × 455 = 1450 kHz which is within the AM band. Thus, YES: the image rejection filter must be tunable. 10.15 (a) For low-side mixing, the image frequency is below the carrier frequency. Consequently the image rejection filter needs to be tunable if, when selecting the highest channel, the image frequency is in the band. When selecting the highest channel ( f c = 107.9 MHz), the image frequency is f i = f c − 2 f IF = 107.9 − 2 × 10.7 = 86.5 MHz which is not within the FM band. Thus, NO: the image rejection filter does not need to be tunable. (b) For high-side mixing, the image frequency is above the carrier frequency. Consequently the image rejection filter needs to be tunable if, when selecting the lowest channel, the image frequency is in the band. When selecting the lowest channel ( f c = 88.1 MHz), the image frequency is f i = f c + 2 f IF = 88.1 + 2 × 10.7 = 109.5 MHz which is not within the FM band. Thus, NO: the image rejection filter does not need to be tunable. 10.16 (a) For low-side mixing, the image frequency is below the carrier frequency. Consequently the image rejection filter needs to be tunable if, when selecting the highest channel, the image frequency is in the band. When selecting the highest channel ( f c = 848.97 MHz), the image frequency is f i = f c − 2 f IF = 848.97 − 2 × 10.7 = 827.57 MHz which is within the AMPS band. Thus, YES: the image rejection filter must be tunable. (b) For high-side mixing, the image frequency is above the carrier frequency. Consequently the image rejection filter needs to be tunable if, when selecting the lowest channel, the image frequency is in the band. When selecting the lowest channel ( f c = 824.04 MHz), the image frequency is f i = f c + 2 f IF = 824.04 + 2 × 10.7 = 845.44 MHz which is within the AMPS band. Thus, YES: the image rejection filter must be tunable. 10.17 (a) For low-side mixing, the image frequency is below the carrier frequency. Consequently the image rejection filter needs to be tunable if, when selecting the highest channel, the image frequency is in the band. When selecting the highest channel ( f c = 848.97 MHz), the image frequency is f i = f c − 2 f IF = 848.97 − 2 × 70 = 708.97 MHz which is not within the AMPS band. Thus, NO: the image rejection filter does not need to be tunable. (b) For high-side mixing, the image frequency is above the carrier frequency. Consequently the image rejection filter needs to be tunable if, when selecting the lowest channel, the image frequency is in the band. When selecting the lowest channel ( f c = 824.04 MHz), the image frequency is f i = f c + 2 f IF = 824.04 + 2 × 70 = 964.04 MHz which is not within the AMPS band. Thus, NO: the image rejection filter does not need to be tunable. 10.18 (a) For low-side mixing, the image frequency is below the carrier frequency. Consequently the image rejection filter needs to be tunable if, when selecting the highest channel, the image frequency is in the band. When selecting the highest channel ( f c = 893.97 MHz), the image frequency is f i = f c − 2 f IF = 893.97 − 2 × 10.7 = 872.57 MHz which is within the AMPS band. Thus, YES: the image rejection filter must be tunable. 
(b) For high-side mixing, the image frequency is above the carrier frequency. Consequently the image rejection filter needs to be tunable if, when selecting the lowest channel, the image frequency is in the band. When selecting the lowest channel ( f c = 869.04 MHz), the image frequency is f i = f c + 2 f IF = 869.04 + 2 × 10.7 = 890.44 MHz which is within the AMPS band. Thus, YES: the image rejection filter must be tunable. 10.19 (a) For low-side mixing, the image frequency is below the carrier frequency. Consequently the image rejection filter needs to be tunable if, when selecting the highest channel, the image frequency is in the band. When selecting the highest channel ( f c = 893.97 MHz), the image frequency is f i = f c − 2 f IF = 893.97 − 2 × 70 = 753.97 MHz which is not within the AMPS band. Thus, NO: the image rejection filter does not need to be tunable. (b) For high-side mixing, the image frequency is above the carrier frequency. Consequently the image rejection filter needs to be tunable if, when selecting the lowest channel, the image frequency is in the band. When selecting the lowest channel ( f c = 869.04 MHz), the image frequency is f i = f c + 2 f IF = 869.04 + 2 × 70 = 1009.04 MHz which is not within the AMPS band. Thus, NO: the image rejection filter does not need to be tunable. 10.20 (a) For low-side mixing, the image frequency is below the carrier frequency. Consequently the image rejection filter needs to be tunable if, when selecting the highest channel, the image frequency is in the band. When selecting the highest channel ( f c = 914.8 MHz), the image frequency is f i = f c − 2 f IF = 914.8 − 2 × 10.7 = 893.4 MHz which is within the GSM band. Thus, YES: the image rejection filter must be tunable. (b) For high-side mixing, the image frequency is above the carrier frequency. Consequently the image rejection filter needs to be tunable if, when selecting the lowest channel, the image frequency is in the band. When selecting the lowest channel ( f c = 890.2 MHz), the image frequency is f i = f c + 2 f IF = 890.2 + 2 × 10.7 = 911.6 MHz which is within the GSM band. Thus, YES: the image rejection filter must be tunable. 10.21 (a) For low-side mixing, the image frequency is below the carrier frequency. Consequently the image rejection filter needs to be tunable if, when selecting the highest channel, the image frequency is in the band. When selecting the highest channel ( f c = 914.8 MHz), the image frequency is f i = f c − 2 f IF = 914.8 − 2 × 70 = 774.8 MHz which is not within the GSM band. Thus, NO: the image rejection filter does not need to be tunable. (b) For high-side mixing, the image frequency is above the carrier frequency. Consequently the image rejection filter needs to be tunable if, when selecting the lowest channel, the image frequency is in the band. When selecting the lowest channel ( f c = 890.2 MHz), the image frequency is f i = f c + 2 f IF = 890.2 + 2 × 70 = 1030.2 MHz which is not within the GSM band. Thus, NO: the image rejection filter does not need to be tunable. 10.22 (a) For low-side mixing, the image frequency is below the carrier frequency. Consequently the image rejection filter needs to be tunable if, when selecting the highest channel, the image frequency is in the band. When selecting the highest channel ( f c = 959.8 MHz), the image frequency is f i = f c − 2 f IF = 959.8 − 2 × 10.7 = 838.4 MHz which is within the GSM band. Thus, YES: the image rejection filter must be tunable. 
(b) For high-side mixing, the image frequency is above the carrier frequency. Consequently the image rejection filter needs to be tunable if, when selecting the lowest channel, the image frequency is in the band. When selecting the lowest channel (fc = 935.2 MHz), the image frequency is fi = fc + 2fIF = 935.2 + 2 × 10.7 = 956.6 MHz, which is within the GSM band. Thus, YES: the image rejection filter must be tunable.

10.23 (a) For low-side mixing, the image frequency is below the carrier frequency. Consequently the image rejection filter needs to be tunable if, when selecting the highest channel, the image frequency is in the band. When selecting the highest channel (fc = 959.8 MHz), the image frequency is fi = fc − 2fIF = 959.8 − 2 × 70 = 819.8 MHz, which is not within the GSM band. Thus, NO: the image rejection filter does not need to be tunable.

(b) For high-side mixing, the image frequency is above the carrier frequency. Consequently the image rejection filter needs to be tunable if, when selecting the lowest channel, the image frequency is in the band. When selecting the lowest channel (fc = 935.2 MHz), the image frequency is fi = fc + 2fIF = 935.2 + 2 × 70 = 1075.2 MHz, which is not within the GSM band. Thus, NO: the image rejection filter does not need to be tunable.

10.24 (a) For low-side mixing, the image frequency is below the carrier frequency. Consequently the image rejection filter needs to be tunable if, when selecting the highest channel, the image frequency is in the band. When selecting the highest channel (fc = 136.975 MHz), the image frequency is fi = fc − 2fIF = 136.975 − 2 × 10.7 = 115.575 MHz, which is not within the ATC band. Thus, NO: the image rejection filter does not need to be tunable.

(b) For high-side mixing, the image frequency is above the carrier frequency. Consequently the image rejection filter needs to be tunable if, when selecting the lowest channel, the image frequency is in the band. When selecting the lowest channel (fc = 118 MHz), the image frequency is fi = fc + 2fIF = 118 + 2 × 10.7 = 139.4 MHz, which is not within the ATC band. Thus, NO: the image rejection filter does not need to be tunable.

10.25 (a) For low-side mixing, the image frequency is below the carrier frequency. Consequently the image rejection filter needs to be tunable if, when selecting the highest channel, the image frequency is in the band. When selecting the highest channel (fc = 136.975 MHz), the image frequency is fi = fc − 2fIF = 136.975 − 2 × 21.4 = 94.175 MHz, which is not within the ATC band. Thus, NO: the image rejection filter does not need to be tunable.

(b) For high-side mixing, the image frequency is above the carrier frequency. Consequently the image rejection filter needs to be tunable if, when selecting the lowest channel, the image frequency is in the band. When selecting the lowest channel (fc = 118 MHz), the image frequency is fi = fc + 2fIF = 118 + 2 × 21.4 = 160.8 MHz, which is not within the ATC band. Thus, NO: the image rejection filter does not need to be tunable.
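The same worst-case check is repeated throughout Exercises 10.14–10.25 with different bands and IF choices, so it is worth mechanizing. A minimal Python sketch (the band edges and IF values are the ones used in the solutions above; the function name and structure are illustrative, not from the text):

```python
# Worst-case image check used in Exercises 10.14-10.25.
# Low-side mixing: the worst case is the highest channel, whose image is at fc - 2*fIF.
# High-side mixing: the worst case is the lowest channel, whose image is at fc + 2*fIF.
def image_filter_must_be_tunable(f_low, f_high, f_if, mixing):
    fc = f_high if mixing == "low-side" else f_low
    f_image = fc - 2 * f_if if mixing == "low-side" else fc + 2 * f_if
    return f_image, (f_low <= f_image <= f_high)

# Example: FM broadcast band, 88.1-107.9 MHz, with a 10.7 MHz IF (Exercise 10.15).
for mixing in ("low-side", "high-side"):
    f_image, tunable = image_filter_must_be_tunable(88.1, 107.9, 10.7, mixing)
    print(f"{mixing:9s}: image at {f_image:.3f} MHz -> filter must be tunable: {tunable}")
```

Swapping in the AM, AMPS, GSM, or ATC band edges and the corresponding IF reproduces the YES/NO answers above.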
10.26 (a) c f − fc fc d f c > f1 − ( f c + f1 ) − ( f c − f1 ) f c − f1 f c + f1 f e − ( f c − f1 ) f f c − f1 f f 2 < f c − f1 f − ( f c − f1 − f 2 ) f c − f1 − f 2 − ( f c − f1 + f 2 ) f c − f1 + f 2 g f − ( f c − f1 − f 2 ) f c − f1 − f 2 (b) From the diagrams in part (a) and the assumed relationships, we have f IF = f c − f 1 − f 2 (c) To prevent spectral overlap at f = 0, we require B 2 B f c − f1 − f 2 > 2 f c − f1 > (d) BPF 1 f − fc B f c − 2 f1 − 2 fc f c + 2 f1 + BPF 2 − ( f c − f1 ) f f c − f1 f c − f1 − 2 f 2 − B 2 f c − f1 + 2 f 2 + B 2 BPF 3 B f − ( f c − f1 − f 2 ) f c − f1 − f 2 B 2 10.27 There are many correct answers to this question. To illustrate the approach, assume that all frequency translations use low-side mixing. With these assumptions, the first and second IF frequencies are first IF = fc – f1 second IF = fc – f1 – f2 Using these assumptions and the constraint on the center frequency to bandwidth ratios, we have 12000 ≤ 10 4 × (12000 − f1 ) + 6 (1) 12000 − f1 ≤ 10 4 × (12000 − f 1 − f 2 ) + 6 (2) In addition, we require the second IF to be 70 MHz. This creates a third condition: 12000 − f 1 − f 2 = 70 (3) f1 ≤ 11701.5 (4) 39 f 1 + 40 f 2 ≤ 468060 (5) Solving (1) for f1 gives Condition (2) may be re-expressed as Solving (3) for f2 and substituting into condition (5) produces f1 ≥ 9140 (6) 9140 ≤ f 1 ≤ 11701.5 (7) Putting (4) and (6) together gives The design procedure is to select f1 that satisfies condition (7). Given the choice for f1, f2 is given by f 2 = 11970 − f1 (8) As an example, choose f1 = 10000 MHz. This forces f2 = 1930 MHz. The bandwidths (and center frequency to bandwidth ratios) of the three bandpass filters are BW 1 = 4 × (12000 − 10000 ) + 6 = 8006 12000 Q1 = = 1.5 8006 BW 2 = 4 × (2000 − 1930 ) + 6 = 286 2000 Q2 = =7 286 BW 3 = 6 70 Q3 = = 11.7 6 The block diagram below illustrates the complete design. 2000 BPF 70 fixed BPF1 10000 fixed BPF2 1930 LO1 LO2 tuning control 12000 7997 1857 2000 2143 67 70 73 tuning control 16003 12000 from antenna to demod 10.28 (a) r (t ) 2 cos(ω c t ) = 2 I r (t ) cos 2 (ω c t ) − 2Qr (t )sin (ω c t ) cos(ω c t ) = I r (t ) + I r (t ) cos(2ω c t ) − Qr (t )sin (2ω c t ) (b) The low-pass filter output is I r (t ) . It is impossible to recover both I r (t ) and Qr (t ) from the LPF output. 10.29 (a) r (t ) 2 cos(ω c t ) = 2 I r (t ) cos(ω c t + θ ) cos(ω c t ) − 2Qr (t )sin (ω c t + θ ) cos(ω c t ) = I r (t ) cos(θ ) + I r (t ) cos(2ω c t + θ ) − Qr (t )sin (θ ) − Qr (t )sin (2ω c t + θ ) (b) The low-pass filter output is I r (t ) cos(θ ) − Qr (t )sin (θ ) . It is impossible to recover both I r (t ) and Qr (t ) from the LPF output. 10.30 (a) Channel k requires M complex-valued multiplications (b) The number of complex-valued multiplications required to produce K channels in parallel is M×K. (c) The condition is M ( p1 + p 2 + " + pν − ν ) < MK which requires p1 + p 2 + " + pν − ν < K 10.31 (a) 4 Mbits/s QPSK → 2 Msymbols/s: the sample rate must be a multiple of 2. The bandwidth of 4 Mbit/s QPSK with the SRRC pulse shape with 50% excess bandwidth is 1 + α Rb 1.5 4 2× × = 2× × = 3 MHz 2 2 2 2 Thus the sample rate must be greater than 6 Msamples/s From Table 10.1.1, the lowest sample rate that satisfies both requirements is 8 Msamples/s. (b) From Table 10.1.1, the sample rates that meet both requirements are 280 Msamples/s, 56 Msamples/s, 40 Msamples/s, and 8 Msamples/s. 
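The arithmetic in Exercise 10.31 can be checked in a few lines. A minimal sketch, assuming an SRRC occupied bandwidth of (1 + α) × (symbol rate) and the candidate rates quoted from Table 10.1.1 in part (b); the variable names are illustrative:

```python
# Exercise 10.31: pick the lowest candidate sample rate that is (i) a multiple of the
# 2 Msymbols/s symbol rate and (ii) more than twice the occupied bandwidth.
bit_rate = 4e6                          # 4 Mbit/s
bits_per_symbol = 2                     # QPSK
alpha = 0.5                             # 50% excess bandwidth
candidates = [280e6, 56e6, 40e6, 8e6]   # sample rates quoted from Table 10.1.1

symbol_rate = bit_rate / bits_per_symbol        # 2 Msymbols/s
bandwidth = (1 + alpha) * symbol_rate           # 3 MHz occupied bandwidth
ok = sorted(fs for fs in candidates
            if fs % symbol_rate == 0 and fs > 2 * bandwidth)
print(f"bandwidth = {bandwidth/1e6:g} MHz; admissible rates (Msamples/s) = "
      f"{[f/1e6 for f in ok]}")                 # lowest admissible rate: 8 Msamples/s
```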
10.32 (a) Maximum | Maximum Sample Rate Bandwidth | Sample Rate Bandwidth (Msamples/s) (MHz) | (Msamples/s) (MHz) --------------------------------------------------------180 90.0000 | 4 16/41 2.1951 60 30.0000 | 4 8/43 2.0930 36 18.0000 | 4 2.0000 25 5/7 12.8571 | 3 39/47 1.9149 20 10.0000 | 3 33/49 1.8367 16 4/11 8.1818 | 3 9/17 1.7647 13 11/13 6.9231 | 3 21/53 1.6981 12 6.0000 | 3 3/11 1.6364 10 10/17 5.2941 | 3 3/19 1.5789 9 9/19 4.7368 | 3 3/59 1.5254 8 4/7 4.2857 | 2 58/61 1.4754 7 19/23 3.9130 | 2 6/7 1.4286 7 1/5 3.6000 | 2 10/13 1.3846 6 2/3 3.3333 | 2 46/67 1.3433 6 6/29 3.1034 | 2 14/23 1.3043 5 25/31 2.9032 | 2 38/71 1.2676 5 5/11 2.7273 | 2 34/73 1.2329 5 1/7 2.5714 | 2 2/5 1.2000 4 32/37 2.4324 | 2 26/77 1.1688 (b) The lowest sample rate that can accommodate 8 MHz of bandwidth is 16 4/11 Msamples/s. 10.33 (a) Maximum | Maximum Sample Rate Bandwidth | Sample Rate Bandwidth (ksamples/s) (kHz) | (ksamples/s) (kHz) --------------------------------------------------------------42800 21400.0000 | 701 23/36 350.8194 14266 2/3 7133.3333 | 679 23/63 339.6825 8560 4280.0000 | 658 6/13 329.2308 6114 2/7 3057.1429 | 638 25/31 319.4032 4755 5/9 2377.7778 | 620 11/38 310.1447 3890 10/11 1945.4545 | 602 58/71 301.4085 3292 4/13 1646.1538 | 586 22/73 293.1507 2853 1/3 1426.6667 | 570 2/3 285.3333 2517 11/17 1258.8235 | 555 27/32 277.9219 2252 12/19 1126.3158 | 541 61/79 270.8861 2038 2/21 1019.0476 | 528 15/38 264.1974 1860 20/23 930.4348 | 515 55/83 257.8313 1712 856.0000 | 503 9/17 251.7647 1585 5/27 792.5926 | 491 83/87 245.9770 1475 25/29 737.9310 | 480 80/89 240.4494 1380 20/31 690.3226 | 470 30/91 235.1648 1296 32/33 648.4848 | 460 20/93 230.1075 1222 6/7 611.4286 | 450 10/19 225.2632 1156 28/37 578.3784 | 441 14/59 220.6186 1097 17/39 548.7179 | 432 11/34 216.1618 1043 37/41 521.9512 | 423 77/101 211.8812 995 15/43 497.6744 | 415 55/103 207.7670 951 1/9 475.5556 | 407 13/21 203.8095 910 30/47 455.3191 | 400 200.0000 873 23/49 436.7347 | 392 35/53 196.3302 839 11/51 419.6078 | 385 24/41 192.7927 807 29/53 403.7736 | 378 51/67 189.3806 778 2/11 389.0909 | 372 4/23 186.0870 750 50/57 375.4386 | 365 95/117 182.9060 (b) The lowest sample rate that can accommodate 200 kHz of bandwidth is 400 ksamples/s. 10.34 (a) Maximum | Maximum Sample Rate Bandwidth | Sample Rate Bandwidth (ksamples/s) (kHz) | (ksamples/s) (kHz) --------------------------------------------------------------1820 910.0000 | 29 51/61 14.9180 606 2/3 303.3333 | 28 8/9 14.4444 364 182.0000 | 28 14.0000 260 130.0000 | 27 11/67 13.5821 202 2/9 101.1111 | 26 26/69 13.1884 165 5/11 82.7273 | 25 45/71 12.8169 140 70.0000 | 24 68/73 12.4658 121 1/3 60.6667 | 24 4/15 12.1333 107 1/17 53.5294 | 23 7/11 11.8182 95 15/19 47.8947 | 23 3/79 11.5190 86 2/3 43.3333 | 22 38/81 11.2346 79 3/23 39.5652 | 21 77/83 10.9639 72 4/5 36.4000 | 21 7/17 10.7059 67 11/27 33.7037 | 20 80/87 10.4598 62 22/29 31.3793 | 20 40/89 10.2247 58 22/31 29.3548 | 20 10.0000 55 5/33 27.5758 | 19 53/93 9.7849 52 26.0000 | 19 3/19 9.5789 49 7/37 24.5946 | 18 74/97 9.3814 46 2/3 23.3333 | 18 38/99 9.1919 44 16/41 22.1951 | 18 2/101 9.0099 42 14/43 21.1628 | 17 69/103 8.8350 40 4/9 20.2222 | 17 1/3 8.6667 38 34/47 19.3617 | 17 1/107 8.5047 37 1/7 18.5714 | 16 76/109 8.3486 35 35/51 17.8431 | 16 44/111 8.1982 34 18/53 17.1698 | 16 12/113 8.0531 33 1/11 16.5455 | 15 19/23 7.9130 31 53/57 15.9649 | 15 5/9 7.7778 (b) The lowest sample rate that can accommodate 10 kHz of bandwidth is 20 ksamples/s. 
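The rates tabulated in Exercises 10.33 and 10.34 (and in 10.35 below) are consistent with the IF-sampling choice fs = 4·fIF/(2n + 1), which centers the IF at fs/4 and leaves a maximum usable bandwidth of fs/2. Under that assumption (an inference from the tables, not stated in the text), a short sketch regenerates a table and answers part (b); the numbers shown are for the 455 kHz IF of Exercise 10.34 and the helper name is illustrative:

```python
from fractions import Fraction

# Candidate IF-sampling rates fs = 4*fIF/(2n+1); each supports a bandwidth of up to fs/2.
def sample_rate_table(f_if, n_max):
    rows = []
    for n in range(n_max + 1):
        fs = Fraction(4 * f_if, 2 * n + 1)   # exact rational sample rate
        rows.append((fs, fs / 2))            # (sample rate, maximum bandwidth)
    return rows

f_if = 455                                   # kHz (Exercise 10.34); all values in kHz
rows = sample_rate_table(f_if, 60)
for fs, bw in rows[:5]:                      # first few rows of the table
    print(f"fs = {float(fs):10.4f} ksamples/s   max BW = {float(bw):9.4f} kHz")

required_bw = 10                             # kHz
lowest = min(fs for fs, bw in rows if bw >= required_bw)
print(f"lowest rate covering {required_bw} kHz: {float(lowest):g} ksamples/s")   # 20
```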
10.35 Maximum | Maximum Sample Rate Bandwidth | Sample Rate Bandwidth (ksamples/s) (kHz) | (ksamples/s) (kHz) --------------------------------------------------------------85600 42800.0000 | 1403 5/18 701.6389 28533 1/3 14266.6667 | 1358 19/26 679.3654 17120 8560.0000 | 1316 12/13 658.4615 12228 4/7 6114.2857 | 1277 11/18 638.8056 9511 1/9 4755.5556 | 1240 11/19 620.2895 7781 9/11 3890.9091 | 1205 19/30 602.8167 6584 8/13 3292.3077 | 1172 44/73 586.3014 5706 2/3 2853.3333 | 1141 1/3 570.6667 5035 5/17 2517.6471 | 1111 11/16 555.8438 4505 5/19 2252.6316 | 1083 43/79 541.7722 4076 4/21 2038.0952 | 1056 15/19 528.3947 3721 17/23 1860.8696 | 1031 13/40 515.6625 3424 1712.0000 | 1007 1/17 503.5294 3170 10/27 1585.1852 | 983 79/87 491.9540 2951 21/29 1475.8621 | 961 71/89 480.8989 2761 9/31 1380.6452 | 940 29/44 470.3295 2593 16/17 1296.9706 | 920 40/93 460.2151 2445 5/7 1222.8571 | 901 1/19 450.5263 2313 19/37 1156.7568 | 882 9/19 441.2368 2194 34/39 1097.4359 | 864 11/17 432.3235 2087 33/41 1043.9024 | 847 32/61 423.7623 1990 30/43 995.3488 | 831 3/44 415.5341 1902 2/9 951.1111 | 815 5/21 407.6190 1821 5/18 910.6389 | 800 400.0000 1746 15/16 873.4688 | 785 9/28 392.6607 1678 22/51 839.2157 | 771 6/35 385.5857 1615 3/32 807.5469 | 757 12/23 378.7609 1556 4/11 778.1818 | 744 8/23 372.1739 1501 43/57 750.8772 | 731 73/117 365.8120 A.1 NRZ RZ 0.5 0.5 rp(τ) 1 rp(τ) 1 0 0 -0.5 -0.5 -1 -0.5 0 /T τ s 0.5 1 -1 -0.5 MAN 0 /T τ s 0.5 1 0.5 1 HS 0.5 0.5 rp(τ) 1 rp(τ) 1 0 0 -0.5 -0.5 -1 -0.5 0 0.5 1 -1 -0.5 τ/Ts 0 τ/Ts For all pulse shapes, the autocorrelation function is zero for τ > Ts . Hence the pulse shape autocorrelation function is zero for all non-zero multiples of Ts . A.2 (a) E = Ts / 2 ∫ 0 2 ⎛ t 4 A2 2 + t dt 4 A 2 ⎜⎜1 − 2 ∫ Ts ⎝ Ts T2 / 2 T 2 ⎞ A 2Ts ⎟⎟ dt = 3 ⎠ Ts 2 Ts / 2 τ +Ts / 2 ⎛ 2A 2A (t − τ )dt + ∫ 2 A⎜⎜1 − t rp (τ ) = ∫ t Ts ⎝ Ts τ Ts Ts / 2 ⇒ 3 Ts A= (b) For 0 ≤ τ ≤ = T 2 A2 3 2 A2 2 1 2 τ − τ + A Ts Ts 3 Ts2 ⎛τ = 6⎜⎜ ⎝ Ts For s ⎞ 2A ⎛ ⎟⎟ (t − τ )dt + ∫ 2 A⎜⎜1 − t ⎠ Ts ⎝ Ts τ +Ts / 2 3 2 ⎞ ⎛τ ⎞ ⎟⎟ − 6⎜⎜ ⎟⎟ + 1 ⎠ ⎝ Ts ⎠ Ts ≤ τ ≤ Ts 2 ⎛ t rp (τ ) = ∫ 2 A⎜⎜1 − ⎝ Ts τ 2 A2 =− 2 τ3 + 3Ts Ts ⎛τ = −2⎜⎜ ⎝ Ts 3 ⎞ ⎛ t −τ ⎞ ⎟dt ⎟⎟2 A⎜⎜1 − Ts ⎟⎠ ⎠ ⎝ 2 A2 2 2 τ + −2 A 2τ + A 2Ts Ts 3 2 ⎞ ⎛τ ⎞ ⎛τ ⎞ ⎟⎟ + 6⎜⎜ ⎟⎟ − 6⎜⎜ ⎟⎟ + 2 ⎠ ⎝ Ts ⎠ ⎝ Ts ⎠ Using the property rp (− τ ) = rp (τ ) we have ⎧0 3 2 ⎪ ⎪2⎛⎜ τ ⎞⎟ + 6⎛⎜ τ ⎞⎟ + 6⎛⎜ τ ⎞⎟ + 2 ⎜T ⎟ ⎜T ⎟ ⎪ ⎜⎝ T3 ⎟⎠ ⎝ 3⎠ ⎝ 3⎠ ⎪ 3 2 ⎪− 6⎛⎜ τ ⎞⎟ − 6⎛⎜ τ ⎞⎟ + 1 ⎜T ⎟ ⎪⎪ ⎜ T ⎟ ⎝ 3⎠ rp (τ ) = ⎨ ⎝ 33⎠ 2 ⎛τ ⎞ ⎪ ⎛⎜ τ ⎞⎟ ⎪6⎜ T ⎟ − 6⎜⎜ T ⎟⎟ + 1 ⎪ ⎝ 3⎠ 3 ⎝ 3⎠ 2 ⎪ ⎛τ ⎞ ⎛τ ⎞ ⎛τ ⎞ ⎪− 2⎜⎜ ⎟⎟ + 6⎜⎜ ⎟⎟ − 6⎜⎜ ⎟⎟ + 2 ⎝ T3 ⎠ ⎝ T3 ⎠ ⎪ ⎝ T3 ⎠ ⎪⎩0 τ ≤ −Ts − Ts ≤ τ ≤ − − Ts ≤τ ≤ 0 2 0 ≤τ ≤ Ts 2 Ts ≤ τ ≤ Ts 2 Ts ≤ τ Ts 2 ⎞ ⎛ t −τ ⎟⎟2 A⎜⎜1 − Ts ⎠ ⎝ ⎞ ⎟⎟dt ⎠ (c) P( f ) = Ts / 2 ∫ 0 =− = P( f ) 2 2 ⎛ 2 A − j 2πft t ⎞ te dt + ∫ 2 A⎜⎜1 − ⎟⎟e − j 2πft dt Ts ⎝ Ts ⎠ Ts / 2 T [ A 1 − e jπfTs 2 2π f Ts 2 ] 2 2A ⎛ πfT ⎞ sin 2 ⎜ s ⎟e jπfTs 2 π f Ts ⎝ 2 ⎠ 2 4 A2 ⎛ πfT ⎞ = 4 4 2 sin 4 ⎜ s ⎟ π f Ts ⎝ 2 ⎠ ⎛ πfT ⎞ sin 4 ⎜ s ⎟ AT ⎝ 2 ⎠ = 4 4 ⎛ πfTs ⎞ ⎟ ⎜ ⎝ 2 ⎠ ⎛ πfT ⎞ sin 4 ⎜ s ⎟ 3T ⎝ 2 ⎠ = s 4 ⎛ πfTs ⎞ 4 ⎟ ⎜ ⎝ 2 ⎠ 2 2 s (d) Babs = ∞ B90 = 0.85 Ts B99 = 1.30 Ts B−60 dB = 20 Ts (I used the closest null in the plot below for B-60dB) 0 -10 -20 |P(f)|2 (dB) -30 -40 -50 -60 -70 -80 -90 0 5 10 15 fTs (cycles/symbol) 20 25 A.3 2 ⎡ ⎛ 2πt ⎞ ⎤ 3 A 2Ts 2 ⎜ ⎟ = E A 1 cos dt = − (a) ∫0 ⎢ ⎜ T ⎟⎥ 2 ⎝ s ⎠⎦ ⎣ Ts ⇒ A= 2 3Ts (b) For 0 ≤ τ ≤ Ts ⎡ ⎛ 2πt ⎞⎤ ⎡ ⎛ 2π (t − τ ) ⎞⎤ ⎟⎟⎥ ⎢1 − cos⎜⎜ ⎟⎟⎥ dt rp (τ ) = ∫ A2 ⎢1 − cos⎜⎜ ⎝ Ts ⎠⎦ ⎣ ⎝ Ts ⎠⎦ τ ⎣ Ts = A2 (Ts − τ ) + ⎛ A2 (Ts − τ )cos⎜⎜ 2πτ 2 ⎝ Ts ⎞ 3 A2Ts ⎛ 2πτ ⎞ ⎟⎟ + ⎟⎟ sin ⎜⎜ 4π 
⎠ ⎝ Ts ⎠ ⎛ τ ⎞ τ ⎞ 1 2⎛ τ ⎞ 1⎛ τ ⎞ ⎛ = ⎜⎜1 − ⎟⎟ + ⎜⎜1 − ⎟⎟ cos⎜⎜ 2π ⎟⎟ + sin ⎜⎜ 2π ⎟⎟ 3 ⎝ Ts ⎠ 3 ⎝ Ts ⎠ ⎝ Ts ⎠ 2π ⎝ Ts ⎠ Using the property rp (− τ ) = rp (τ ) we have ⎧0 ⎪2 ⎛ ⎛ τ ⎞ τ ⎞ 1⎛ τ ⎞ ⎛ τ ⎞ 1 ⎪ ⎜⎜1 + ⎟⎟ + ⎜⎜1 + ⎟⎟ cos⎜⎜ 2π ⎟⎟ − sin ⎜⎜ 2π ⎟⎟ Ts ⎠ 3 ⎝ Ts ⎠ ⎝ Ts ⎠ 2π ⎪3 ⎝ Ts ⎠ rp (τ ) = ⎨ ⎝ ⎪ 2 ⎛⎜1 − τ ⎞⎟ + 1 ⎛⎜1 − τ ⎞⎟ cos⎛⎜ 2π τ ⎞⎟ + 1 sin ⎛⎜ 2π τ ⎞⎟ ⎜ T ⎟ ⎪ 3 ⎜⎝ Ts ⎟⎠ 3 ⎜⎝ Ts ⎟⎠ ⎜⎝ Ts ⎟⎠ 2π s ⎠ ⎝ ⎪0 ⎩ (c) ⎡ ⎛ 2πt ⎞⎤ − j 2πft ⎟⎟⎥ e P( f ) = ∫ A⎢1 − cos⎜⎜ dt T ⎝ s ⎠⎦ 0 ⎣ 1 jA 1 − e − j 2πfTs = 2 2π f ( fTs ) − 1 Ts )[ ( =− P( f ) = 2 = e − jπfTs sin (πfTs ) π f ( fTs )2 − 1 A ( ) 2 sin 2 (πfTs ) AT 3 ( fT )2 − 1 2 (πfTs )2 s 2 ( 2 s ) 2 sin 2 (πfTs ) Ts 3 ( fT )2 − 1 2 (πfTs )2 s ( ) ] τ < Ts − Ts ≤ τ ≤ 0 0 ≤ τ ≤ Ts Ts < τ Babs = ∞ B90 = 0.95 Ts B99 = 1.41 Ts B−60 dB = 7 Ts (I used the closest null in the plot below for B-60dB) 0 -10 -20 -30 |P(f)|2 (dB) (d) -40 -50 -60 -70 -80 -90 0 2 4 6 fTs (cycles/symbol) 8 10 A.4 ∞ Let p (t ) ↔ P( f ) be a Fourier transform pair. Also note that ∑ δ (t − kTs ) ↔ k = −∞ (a) G( f ) = Rp ( f ) ∗ 1 Ts ∞ ∑ m = −∞ ⎛ δ ⎜⎜ f − ⎝ m⎞ 1 ⎟= Ts ⎟⎠ Ts (b) G( f ) = ∞ ∫ g (t )e − j 2πft dt −∞ ∞ = = ∞ ⎤ ⎡ ( ) r t δ (t − kTs )⎥ e − j 2πft dt ∫− ∞ ⎢⎣ p k∑ = −∞ ⎦ ∞ ∞ ∑ ∫ r (t )e k = −∞ − ∞ = p ∞ ∑ r (kT )e k = −∞ p s − j 2πft δ (t − kTs )dt − j 2πfTs k ⎛ ∞ m⎞ ⎟⎟ s ⎠ ∑ R ⎜⎜ f − T m = −∞ p ⎝ 1 Ts ∞ ⎛ m = −∞ ⎝ m⎞ ⎟⎟ s ⎠ ∑ δ ⎜⎜ f − T A.5 Rp ( f ) = ∞ r (τ )e ∫ τ − j 2πfτ p = −∞ dτ ⎡ ∞ ⎤ − j 2πfτ dτ ⎢ ∫ p(t ) p (t − τ )dt ⎥ e ∫ τ = −∞ ⎣t = −∞ ⎦ ∞ ⎡ ∞ ⎤ = ∫ p(t )⎢ ∫ p (t − τ )e − j 2πfτ dτ ⎥ dt t = −∞ ⎣τ = −∞ ⎦ ∞ ⎡ ∞ ⎤ = ∫ p(t )⎢ ∫ p( x )e − j 2πf (t − x )dx ⎥ dt t = −∞ ⎣ x = −∞ ⎦ ∞ ⎡ ∞ ⎤ = ∫ p(t )e − j 2πft dt ⎢ ∫ p( x )e j 2πfx dx ⎥ t = −∞ ⎣ x = −∞ ⎦ ∞ = ∞ = ∫ p(t )e − j 2πft t = −∞ = P( f )P∗ ( f ) = P( f ) 2 ⎡ ∞ ⎤ dt ⎢ ∫ p( x )e − j 2πfx dx ⎥ ⎣ x = −∞ ⎦ ∗ A.6 p (t ) = ⎛ π (1 − α ) ⎞ 4α ⎛ π (1 + α ) ⎞ sin ⎜⎜ t ⎟⎟ + t cos ⎜⎜ t ⎟⎟ 1 ⎝ Ts ⎠ Ts ⎝ Ts ⎠= 2 Ts π t ⎡ ⎛ 4α ⎞ 2 ⎤ ⎟ t ⎥ ⎢1 − ⎜ T s ⎢ ⎜⎝ T s ⎟⎠ ⎥ ⎦ ⎣ N (t ) T s D (t ) 1 The derivatives of the numerator and denominators are ⎛ π (1 + α ) ⎞ 4α ⎛ π (1 + α ) ⎞ 4αt π (1 + α ) ⎛ π (1 + α ) ⎞ − cos⎜⎜ t ⎟⎟ + t ⎟⎟ − t ⎟⎟ cos⎜⎜ sin ⎜⎜ Ts Ts ⎝ Ts ⎠ Ts ⎝ Ts ⎠ Ts ⎝ Ts ⎠ 2 2 π ⎡ ⎛ 4α ⎞ 2 ⎤ πt ⎛ 4α ⎞ ⎟⎟ t ⎥ − 2 ⎜⎜ ⎟ t D′(t ) = ⎢1 − ⎜⎜ Ts ⎢ ⎝ Ts ⎠ ⎥ Ts ⎝ Ts ⎟⎠ ⎣ ⎦ N ′(t ) = π (1 − α ) (a) lim p(t ) = t →0 N (t ) N ′(t ) 1 1 1 N ′(0) 1 ⎡ 4α ⎤ = = = lim lim 1−α + ⎢ t → 0 t → 0 D(t ) D′(t ) π ⎥⎦ Ts Ts Ts D′(0) Ts ⎣ (b) lim p(t ) = 1 Ts = 1 Ts = 1 Ts T t→ s 4α = ⎛T ⎞ N ′⎜ s ⎟ N (t ) N ′(t ) 1 1 ⎝ 4α ⎠ lim lim = = Ts D (t ) Ts D′(t ) Ts t → Ts D′⎛ Ts ⎞ t→ 4α 4α ⎜ ⎟ ⎝ 4α ⎠ π (1 − α ) ⎛ π 1 − α ⎞ 4α ⎛π 1+ α ⎞ π 1+α ⎛π 1+ α ⎞ cos⎜ cos⎜ sin ⎜ ⎟ ⎟− ⎟+ Ts ⎝4 α ⎠ ⎝4 α ⎠ 4 α ⎝ 4 α ⎠ Ts 2 2 2 π ⎡ ⎛ 4α ⎞ ⎛ Ts ⎞ ⎤ 2π ⎛ Ts ⎞⎛ 4α ⎞ ⎛ Ts ⎞ ⎟ ⎜ ⎟ ⎜ ⎢1 − ⎜ ⎟ ⎥− ⎜ ⎟⎜ ⎟ Ts ⎢ ⎜⎝ Ts ⎟⎠ ⎝ 4α ⎠ ⎥ Ts ⎝ 4α ⎠⎜⎝ Ts ⎟⎠ ⎝ 4α ⎠ ⎣ ⎦ 2πα ⎡⎛ 2⎞ ⎛ π ⎞ ⎛ 2 ⎞ ⎛ π ⎞⎤ − ⎜1 + ⎟ sin ⎜ ⎟ + ⎜1 − ⎟ cos⎜ ⎟ ⎢ 2Ts ⎣⎝ π ⎠ ⎝ 4α ⎠ ⎝ π ⎠ ⎝ 4α ⎠⎥⎦ 2π − Ts α ⎡⎛ 2⎞ ⎛ π ⎞ ⎛ 2 ⎞ ⎛ π ⎞⎤ ⎟ ⎜1 + ⎟ sin ⎜ ⎟ + ⎜1 − ⎟ cos⎜ ⎢ 2Ts ⎣⎝ π ⎠ ⎝ 4α ⎠ ⎝ π ⎠ ⎝ 4α ⎠⎥⎦ In a similar way it can be shown that the limit as t → − T2 is the same. 4α A.7 Let f3 be the 3-dB bandwidth. Then 2 Ts2 = Rp ( f3 ) 2 T T ⇒ s = Rp ( f3 ) = s 2 2 ⎡ ⎛ πf 3Ts π (1 − α ) ⎞⎤ − ⎟⎥ ⎢1 + cos⎜ 2α ⎠⎦ ⎝ α ⎣ ⎛ πf T π (1 − α ) ⎞ ⇒ 2 = 1 + cos⎜ 3 s − ⎟ 2α ⎠ ⎝ α πf T π (1 − α ) ⇒ cos −1 2 − 1 = 3 s − 2α α (1 − α ) = f T α ⇒ cos −1 2 − 1 + 3 s 2 π ⇒ 0.5 − 0.368α = f 3Ts ( ) ( ) A.8 Let f3 be the 3-dB bandwidth. Then 2 Ts2 = Rp ( f3 ) = Rp ( f3 ) ⇒ 2 f 3Ts = 1 by construction. 
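The 90% and 99% bandwidths quoted in parts (d) of A.2 and A.3 can be cross-checked numerically. A minimal Python sketch for the A.2 pulse, assuming the unit-energy normalization A = √(3/Ts) from part (a) so that the total energy is 1 by Parseval; substituting the |P(f)|² expression from A.3(c) checks that exercise the same way:

```python
import math

# Fraction of the pulse energy inside |f| <= B for the A.2 triangular pulse, using
# |P(f)|^2 = (3*Ts/4) * [sin(pi*f*Ts/2) / (pi*f*Ts/2)]**4 from part (c).
Ts = 1.0

def psd(f):
    x = math.pi * f * Ts / 2
    s = 1.0 if x == 0 else math.sin(x) / x
    return 0.75 * Ts * s ** 4

df = 1e-4 / Ts
total = 1.0                         # unit-energy pulse, so the spectrum integrates to 1
acc, f, b90, b99 = 0.0, 0.0, None, None
while b99 is None:
    acc += 2 * psd(f + df / 2) * df           # midpoint rule, counting +f and -f together
    f += df
    if b90 is None and acc >= 0.90 * total:
        b90 = f
    if b99 is None and acc >= 0.99 * total:
        b99 = f
print(f"B90 about {b90 * Ts:.2f}/Ts, B99 about {b99 * Ts:.2f}/Ts")  # near 0.85/Ts and 1.30/Ts
```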
2 B.1 e − jω 0 t + e jω 0 t cos(− ω 0 t ) = 2 B.2 e jω0t + e − jω0t − cos(ω 0 t ) = − 2 B.3 e − jω0t − e jω0t sin (− ω 0 t ) = j2 B.4 e jω 0 t − e − jω 0 t e − jω 0 t − e jω 0 t − sin (ω 0 t ) = − = j2 j2 B.5 e − jω0t = cos(ω 0 t ) − j sin (ω 0 t ) B.6 e j (ω0t +θ ) + e − j (ω0t +θ ) cos(ω 0 t + θ ) = 2 jθ e e − jθ − jω 0 t = e jω0t + e 2 2 B.7 e j (ω0t +θ ) − e − j (ω0t +θ ) sin (ω 0 t + θ ) = j2 = e jθ jω 0 t e − jθ − jω 0 t e − e j2 j2 B.8 e j (ω0t +θ ) = cos(ω 0 t + θ ) + j sin (ω 0 t + θ ) B.9 Given e jω0t ↔ 2πδ (ω − ω 0 ) and x(t ) ↔ X ( jω ) ⇒ x ∗ (t ) ↔ X ∗ (− jω ) , we have ( e − jω 0 t = e jω 0 t ) ∗ ↔ 2πδ (− ω − ω 0 ) = 2πδ (ω + ω 0 ) if we assume the Dirac delta function is symmetric about 0. B.10 cos(ω 0 t ) = 1 jω0t 1 − jω0t e + e 2 2 1 1 ↔ 2πδ (ω − ω 0 ) + 2πδ (ω + ω 0 ) 2 2 = πδ (ω − ω 0 ) + πδ (ω + ω 0 ) B.11 sin (ω 0 t ) = 1 jω 0 t 1 − jω 0 t e e − j2 j2 1 1 ↔ 2πδ (ω − ω 0 ) − 2πδ (ω + ω 0 ) j2 j2 = π j δ (ω − ω 0 ) − π j δ (ω + ω 0 ) B.12 cos(ω 0 t + θ ) = 1 j (ω0t +θ ) 1 − j (ω0t +θ ) e + e 2 2 jθ e e − jθ − jω 0 t jω 0 t = e + e 2 2 e jθ e − jθ ↔ 2πδ (ω − ω 0 ) + 2πδ (ω + ω 0 ) 2 2 = πe jθ δ (ω − ω 0 ) + πe − jθ δ (ω + ω 0 ) B.13 sin (ω 0 t + θ ) = 1 j (ω0t +θ ) 1 − j (ω0t +θ ) e − e j2 j2 = e jθ jω 0 t e − jθ − jω 0 t e − e j2 j2 ↔ e jθ e − jθ 2πδ (ω − ω 0 ) − 2πδ (ω + ω 0 ) 2 2 = π j e jθ δ (ω − ω 0 ) − π j e − jθ δ (ω + ω 0 ) B.14 Given e jω0t ↔ 2πδ (ω − ω 0 ) and x(t ) ↔ X ( jω ) ⇒ x ∗ (t ) ↔ X ∗ (− jω ) , we have ( e − j (ω0t +θ ) = e − jθ e jω0t ) ∗ ↔ e − jθ 2πδ (− ω − ω 0 ) = e − jθ 2πδ (ω + ω 0 ) B.15 x(t ) ↔ X ( jω ) ⇒ x(t )e − jω0t ↔ X ( j (ω + ω 0 )) B.16 I (t ) cos(ω 0 t ) − Q(t )sin (ω 0 t ) = A(t ) cos(ω 0 t + θ (t )) where A(t ) = I 2 (t ) + Q 2 (t ) ⎛ Q(t ) ⎞ ⎟⎟ ⎝ I (t ) ⎠ θ (t ) = tan −1 ⎜⎜ B.17 (a) A straightforward application of the frequency shift property in Table 2.4.3 gives ~ ~ u~ (t ) ↔ U ( jω ) ⇒ u~ (t )e jω0t ↔ U ( j (ω − ω 0 )) . (b) A straight forward application of the conjugation property in Table 2.4.3 gives ~ u~ (t )e jω0t ↔ U ( j (ω − ω 0 )) ∗ ~ u~(t )e jω0t ↔ U ∗ ( j (− ω − ω 0 )) ~ u~ ∗ (t )e − jω0t ↔ U ∗ (− j (− ω + ω )) [ ] 0 (c) Let F.T.{x(t )} be the Fourier transform of x(t ) . Then { { }} 1 ⎧1 ⎫ F.T. 
Re u~ (t )e jω0t = F.T.⎨ u~ (t )e jω0t + u~ ∗ (t )e − jω0t ⎬ 2 ⎩2 ⎭ 1~ 1 ~ = U ( j (ω − ω 0 )) + U ∗ (− j (ω + ω 0 )) 2 2 B.18 (a) r (t ) = 1 2 [ ] I r (t ) e jω0t + e − jω0t + j [ ] [ 1 2 [ Qr (t ) e jω0t − e − jω0t r (t ) 2e − jω0t = I r (t ) 1 + e − j 2ω0t + jQr (t ) 1 − e − j 2ω0t = I r (t ) + jQr (t ) + [I r (t ) − jQr (t )]e (b) r (t ) = 1 2 ([I (t ) + jQ (t )]e r r jω 0 t ] − j 2ω 0 t + [I r (t ) − jQr (t )]e − jω0t r (t ) 2e − jω0t = [I r (t ) + jQr (t )] + [I r (t ) − jQr (t )]e − j 2ω0t ) ] B.19 MLTED e(k ) = Re{[a 0 (k ) − ja1 (k )][x (k ) + jy (k )]} = Re{a 0 (k )x (k ) + a1 (k ) y (k ) + j[a0 (k ) y (k ) − a1 (k )x (k )]} = a 0 (k )x (k ) + a1 (k ) y (k ) ELTED e(k ) = Re{[a 0 (k ) − ja1 (k )][x(k + 1 / 2 ) + jy (k + 1 / 2 ) − x(k − 1 / 2 ) − jy (k − 1 / 2 )]} ⎫ ⎧a 0 (k )x(k + 1 / 2 ) − a 0 (k )x(k − 1 / 2 ) + a1 (k ) y (k + 1 / 2 ) − a1 (k ) y (k − 1 / 2 ) = Re⎨ ⎬ ⎩+ j[a 0 (k ) y (k + 1 / 2 ) − a 0 (k ) y (k − 1 / 2 ) − a1 (k ) y (k + 1 / 2 ) + a1 (k ) y (k − 1 / 2 )]⎭ = a 0 (k )[x(k + 1 / 2 ) − x(k − 1 / 2 )] + a1 (k )[ y (k + 1 / 2 ) − y (k − 1 / 2 )] ZCTED e(k ) = Re{[x(k ) + jy (k )][a 0 (k − 1) − ja1 (k − 1) − a 0 (k ) + ja1 (k )]} ⎫ ⎧ x(k )a 0 (k − 1) − x(k )a 0 (k ) + y (k )a1 (k − 1) − y (k )a1 (k ) = Re ⎨ ⎬ ⎩+ j[ y (k )a 0 (k − 1) − y (k )a 0 (k ) − x(k )a1 (k − 1) + x(k )a1 (k )]⎭ = x(k )[a 0 (k − 1) − a 0 (k )] + y (k )[a1 (k − 1) − a1 (k )] GTED e(k ) = Re{[x(k − 1 / 2) + jy (k − 1 / 2)][x(k − 1) − jy (k − 1) − x(k ) + jy (k )]} ⎧ x(k − 1 / 2)x(k − 1) − x(k − 1 / 2 )x(k ) + y (k − 1 / 2) y (k − 1) − y (k − 1 / 2 ) y (k ) ⎫ = Re⎨ ⎬ ⎩+ j[ y (k − 1 / 2)x(k − 1) − y (k − 1 / 2)x(k ) − x(k − 1 / 2) y (k − 1) + x(k − 1 / 2) y (k )]⎭ = x(k − 1 / 2)[x(k − 1) − x(k )] + y (k − 1 / 2 )[ y (k − 1) − y (k )] MMTED e(k ) = Re{[a 0 (k − 1) − ja1 (k − 1)][x(k ) + jy (k )] − [a 0 (k ) − ja1 (k )][x(k − 1) + jy (k − 1)]} ⎫ ⎧a 0 (k − 1)x(k ) + a1 (k − 1) y (k ) − a 0 (k )x(k − 1) − a1 (k ) y (k − 1) = Re ⎨ ⎬ ⎩+ j[a 0 (k − 1) y (k ) − a1 (k − 1)x(k ) − a 0 (k ) y (k − 1) + a1 (k )x(k − 1)]⎭ = [a 0 (k − 1)x(k ) − a 0 (k )x(k − 1)] + [a1 (k − 1) y (k ) − a1 (k ) y (k − 1)] B.20 { } e(k ) = Im a~ ∗ (k )~ x ′(k ) = Im{[a 0 (k ) − ja1 (k )][x ′(k ) + jy ′(k )]} = Im{a 0 (k )x ′(k ) + a1 (k ) y ′(k ) + j[a 0 (k ) y ′(k ) − a1 (k )x ′(k )]} = a 0 (k ) y ′(k ) − a1 (k )x ′(k ) B.21 ~ x′ = ~ x e − jθ x ′ + jy ′ = ( x + jy )e − jθ = ( x + jy )(cos θ − j sin θ ) = x cos θ + y sin θ + j (− x sin θ + y cos θ ) x′ y′ Organizing the above into a matrix equation produces the desired result. B.22 (a) The upper output: [ ] r (t ) 2 cos(ω 0 t ) = I r (t ) 2 cos(ω 0 t ) − Qr (t ) 2 sin (ω 0 t ) 2 cos(ω 0 t ) = I r (t ) + I r (t ) cos(2ω 0 t ) − Qr (t )sin (2ω 0 t ) The output of the low-pass filter is I r (t ) . The lower output: [ ] r (t ) 2 sin (ω 0 t ) = I r (t ) 2 cos(ω 0 t ) − Qr (t ) 2 sin (ω 0 t ) 2 sin (ω 0 t ) = I r (t )sin (2ω 0 t ) − Qr (t ) + Qr (t ) cos(2ω 0 t ) The output of the low-pass filter is − Qr (t ) . (b) r (t ) LPF I r (t ) − jQr (t ) e jω 0 t The output of this system is the complex conjugate of the system that uses − 2 sin (ω 0 t ) . 
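The real-arithmetic expansions in B.20 and B.21 are easy to spot-check numerically against their complex-valued forms. A minimal Python sketch (the variable names are illustrative):

```python
import cmath
import math
import random

# Check that the expanded real-arithmetic forms match the complex forms sample by sample.
random.seed(0)
for _ in range(5):
    a0, a1, x, y, theta = (random.uniform(-1.0, 1.0) for _ in range(5))

    # B.20: e = Im{conj(a)*x'} with a = a0 + j*a1 and x' = x + j*y, versus a0*y - a1*x.
    e_complex = ((a0 - 1j * a1) * (x + 1j * y)).imag
    assert abs(e_complex - (a0 * y - a1 * x)) < 1e-12

    # B.21: (x + j*y)*exp(-j*theta) versus the real rotation it expands into.
    z = (x + 1j * y) * cmath.exp(-1j * theta)
    x_rot = x * math.cos(theta) + y * math.sin(theta)
    y_rot = -x * math.sin(theta) + y * math.cos(theta)
    assert abs(z.real - x_rot) < 1e-12 and abs(z.imag - y_rot) < 1e-12

print("complex and real-arithmetic forms agree")
```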
(d) − ω0 ω ω0 LPF 0 2ω0 ω ω 0 C.1 H a (s ) = k0 k p k s + k0 k p k ˆ (s ) = H (s )Θ(s ) = and Θ a ∆θ s k 0 k p k∆θ k0 k p k s + k0 k p k (a) Θ(s ) = ˆ (s ) = Θ s (s + k 0 k p k ) [ θˆ(t ) = ∆θ 1 − e (b) Θ(s ) = ˆ (s ) = Θ − k 0 k p kt = ∆θ ∆θ − s s + k0 k p k ]u(t ) ∆ω s2 k 0 k p k∆ω s (s + k 0 k p k ) 2 =− ∆ω k0 k p k s + ⎡ ∆ω k0k p k ∆ω − s 2 s + k0k p k 1 1 − k k kt ⎤ +t + e 0 p ⎥u (t ) k0k p k ⎢⎣ k 0 k p k ⎥⎦ ⎡ ∆ω − k k kt ⎤ = ⎢∆ωt − 1 − e 0 p ⎥u (t ) k0k p k ⎣⎢ ⎦⎥ θˆ(t ) = ∆ω ⎢− ( ) Θ(s ) C.2 ω n2 ω n2 ˆ H a (s ) = 2 and Θ(s ) = H a (s )Θ(s ) = 2 Θ (s ) s + 2ζω n s + ω n2 s + 2ζω n s + ω n2 The roots of s 2 + 2ζω n s + ω n2 are − ζω n ± ω n ζ 2 − 1 . These roots are real and distinct for ζ > 1 , real and repeated for ζ = 1 , and complex conjugates when 0 < ζ < 1 . (When ζ = 0 , the roots are purely imaginary. But we are not interested in this solution for this problem.) (a) Θ(s ) = ∆θ s Case 1, ζ > 1 : ˆ (s ) = Θ ∆θ ∆θωn2 ωn2 = s 2 + 2ζω n s + ωn2 s s s + ζω n − ωn ζ 2 − 1 s + ζω n + ωn ζ 2 − 1 )( ( ) ∆θ 2 ⎡ ∆θ ⎡ ζ ⎤ ζ ⎤ 1+ 2 ⎥ 1− 2 ⎥ ⎢ ⎢ 2 ⎣ ζ − 1⎦ ∆θ ⎣ ζ − 1⎦ − = − 2 s s + ζω n − ωn ζ − 1 s + ζω n + ωn ζ 2 − 1 ⎡ 1 ⎛⎡ ζ ⎤ − ⎛⎜ ζω θˆ(t ) = ∆θ ⎢1 − ⎜⎜ ⎢1 + 2 ⎥ e ⎝ 2 ζ −1 n − ω n ζ 2 −1 ⎞⎟ t ⎠ ⎡ ζ ⎤ − ⎛⎜ ζω n +ω n + ⎢1 − 2 ⎥ e ⎝ ⎣ ζ − 1⎦ ζ 2 −1 ⎞⎟ t ⎠ ⎢⎣ ⎦ ⎝⎣ ⎛ ⎡ ⎤⎞ ζ = ∆θ ⎜⎜1 − e −ζω n t ⎢cosh ωn ζ 2 − 1 t + 2 sinh ωn ζ 2 − 1 t ⎥ ⎟⎟u (t ) ζ −1 ⎣ ⎦⎠ ⎝ )) (( Case 2, ζ = 1 : ˆ (s ) = Θ ω n2 ∆θω n ∆θ ∆θ ∆θ = − − 2 2 s (s + ω n ) s + ω n (s + ω n ) s [ ] θˆ(t ) = ∆θ 1 − ω n te −ω t − e −ω t u (t ) n n (( )) ⎞⎤ ⎟⎥u (t ) ⎟ ⎠⎥⎦ Case 3, 0 < ζ < 1 : ˆ (s ) = Θ ω n2 ∆θω n2 ∆θ = s 2 + 2ζω n s + ω n2 s s s + ζω n − jω n 1 − ζ 2 s + ζω n + jω n 1 − ζ 2 )( ( ) ζ ⎤ ζ ⎤ ∆θ ⎡ ∆θ ⎡ ⎢1 + j ⎥ ⎢1 − j ⎥ 2 2 2 2 ⎢ ⎥⎦ ⎢ ⎥ ζ ζ − 1 1 − ∆θ ⎣ ⎣ ⎦ − = − s s + ζω n − jω n 1 − ζ 2 s + ζω n + jω n 1 − ζ 2 ⎡ ⎛ 1 θˆ(t ) = ∆θ ⎢1 − ⎜1 − j ⎞ −⎛⎜ ζω n − jωn ⎟e ⎝ 2 ⎟ 1−ζ ⎠ ζ 1−ζ 2 ⎞⎟ t ⎠ ζ 1⎛ − ⎜1 + j 2⎜ 1−ζ 2 ⎝ ⎢ 2 ⎜⎝ ⎣ ⎡ ⎛ ζ sin ω n ζ 2 − 1t = ∆θ ⎢1 − e −ζω nt ⎜ cos ω n ζ 2 − 1t + 2 ⎜ ⎢ 1−ζ ⎝ ⎣ ) ( ( ⎞ −⎛⎜ ζω n + jωn ⎟e ⎝ ⎟ ⎠ 1−ζ 2 ⎞⎟ t ⎠ ⎤ ⎥u (t ) ⎥ ⎦ ) ⎞⎤ ⎟⎥u (t ) ⎟⎥ ⎠⎦ ∆ω s2 Case 1, ζ > 1 : (b) Θ(s ) = ˆ (s ) = Θ ∆ωω n2 ω n2 ∆ω = s 2 + 2ζω n s + ω n2 s 2 s 2 s + ζω n − ω n ζ 2 − 1 s + ζω n + ω n ζ 2 − 1 2∆ωζ = )( ( ∆ω − s2 ωn s ⎡ 2ζ ⎢ ⎣ ωn θˆ(t ) = ∆ω ⎢t − + ) ∆ω ⎡ 2ζ 2 − 1 ⎤ 2ζ 2 − 1 ⎤ ∆ω ⎡ ⎢2ζ − ⎥ ⎢2ζ + ⎥ 2 2 2ω n ⎢ 2ω n ⎢ ⎥⎦ ⎥ − − 1 1 ζ ζ ⎣ ⎣ ⎦+ + s + ζω n − ω n ζ 2 − 1 s + ζω n + ω n ζ 2 − 1 1⎡ 2ζ 2 − 1 ⎤ −⎛⎜⎝ ζω n −ω n ⎢2ζ + ⎥e 2 2⎢ ⎥⎦ − 1 ζ ⎣ ⎡ 2ζ 1 −ζω nt ⎛⎜ 2ζ 2 − 1 ωn = ∆ω ⎢t − + e e ⎜ ζ 2 −1 ⎢ ω n 2ω n ⎝ ⎣ ( ζ 2 −1 ⎞⎟ t ζ 2 −1t ⎠ + 2ζ 2 − 1 ⎤ −⎛⎜⎝ ζω n +ωn 1⎡ ⎢2ζ − ⎥e 2 2⎢ ⎥⎦ − 1 ζ ⎣ + 2ζe ω n ) ζ 2 −1t − 2ζ 2 − 1 ζ 2 −1 ( e −ω n ζ 2 −1t ⎡ 2ζ 1 −ζω nt ⎛⎜ 2ζ 2 − 1 = ∆ω ⎢t − + e 2ζ cosh ω n ζ 2 − 1t + sinh ω n ζ 2 − 1t 2 ⎜ ω ω ⎢ ζ −1 n n ⎝ ⎣ ζ 2 −1 ⎞⎟ t ⎠ + 2ζe −ω n )⎞⎟⎟⎥⎥u(t ) ⎤ ⎠⎦ ⎤ ⎥u (t ) ⎥ ⎦ ζ 2 −1t ⎞⎤ ⎟⎥u (t ) ⎟⎥ ⎠⎦ Case 2, ζ = 1 : 2∆ω 2∆ω ω ωn ∆ω ∆ω ˆ (s ) = ∆ω ω Θ = 2 − n + + 2 2 2 s s (s + ω n ) s (s + ω n ) s + ω n 2 n ⎡ 2 ⎣ ωn θˆ(t ) = ∆ω ⎢t − e −ωnt + te −ωnt + ⎤ e −ωnt ⎥u (t ) ωn ⎦ 2 Case 3, 0 < ζ < 1 : ˆ (s ) = Θ = ∆ωω n2 ω n2 ∆ω = s 2 + 2ζω n s + ω n2 s 2 s 2 s + ζω n − jω n 1 − ζ 2 s + ζω n + jω n 1 − ζ 2 ( ∆ω − s2 2ζ ∆ω ωn s ⎡ 2ζ ⎢ ⎣ ωn θˆ(t ) = ∆ω ⎢t − + + ( ) 2ζ 2 − 1 ⎤ ∆ω ⎡ ⎢2ζ − j ⎥ 2 2ω n ⎢ ⎥⎦ 1 − ζ ⎣ s + ζω n − jω n 1 − ζ ( )( 2 + s + ζω n + jω n 1 − ζ ) ( ) ) 2ζ 2 − 1 ⎤ ∆ω ⎡ ⎢2ζ + j ⎥ 2 2ω n ⎢ ⎥⎦ 1 − ζ ⎣ 2ζ 2 − 1 ⎤ −⎛⎜⎝ ζω n − jω n 1⎡ ⎢2ζ − j ⎥e 2 2⎢ ⎥⎦ 1 − ζ ⎣ ⎡ 2ζ e −ζω n t = ∆ω ⎢t − + ωn ⎢ ωn ⎣ ( ) 1−ζ 2 ⎞⎟ t ⎠ ( + 2 ( ) 2ζ 2 − 1 ⎤ −⎛⎜⎝ ζω n + jω n 1⎡ ⎢2ζ − j ⎥e 2 2⎢ ⎥⎦ 1 − ζ ⎣ ) ( ) 2 
⎛ ⎞⎤ ⎜ 2ζ cos ω 1 − ζ 2 t + 2ζ − 1 sin ω 1 − ζ 2 t ⎟⎥u (t ) n n ⎜ ⎟⎥ 1− ζ 2 ⎝ ⎠⎦ 1−ζ 2 ⎞⎟ t ⎠ ⎤ ⎥u (t ) ⎥ ⎦ C.3 2ζω n s + ω n2 2ζω n s + ω n2 ˆ H a (s ) = 2 Θ (s ) and Θ(s ) = H a (s )Θ(s ) = 2 s + 2ζω n s + ω n2 s + 2ζω n s + ω n2 The roots of s 2 + 2ζω n s + ω n2 are − ζω n ± ω n ζ 2 − 1 . These roots are real and distinct for ζ > 1 , real and repeated for ζ = 1 , and complex conjugates when 0 < ζ < 1 . (When ζ = 0 , the roots are purely imaginary. But we are not interested in this solution for this problem.) ∆θ s Case 1, ζ > 1 : (a) Θ(s ) = ˆ (s ) = Θ ( ) ∆θ 2ζω n s + ω n2 2ζω n s + ω n2 ∆θ = s 2 + 2ζω n s + ω n2 s s + ζω n − ω n ζ 2 − 1 s + ζω n + ω n ζ 2 − 1 s ( )( ) (ζ − ζ − 1) ∆θ (ζ + ζ − 1) ∆θ 1 − (ζ − ζ − 1 ) 1 − (ζ + ζ − 1 ) ∆θ = + + 2 2 2 2 s + ζω n − ω n ζ 2 − 1 ( ) 2 2 s + ζω n + ω n ζ 2 − 1 ⎡ ζ − ζ 2 − 1 2 −⎛⎜ ζω −ω n n θˆ(t ) = ⎢ e ⎝ 2 ⎢1 − ζ − ζ 2 − 1 ⎣ ( 2 2 s ⎤ ( ζ + ζ − 1) + e + 1⎥ ∆θu (t ) ⎥ 1 − (ζ + ζ − 1 ) ⎦ ζ − 1) 1 − (ζ − ζ − 1 ) and the second term by and ζ − 1) 1 − (ζ − ζ − 1 ) ζ 2 −1 ⎞⎟ t 2 2 −⎛⎜ ζω n +ω n ζ 2 −1 ⎞⎟ t ⎠ ⎝ ⎠ ) 2 2 ( Multiplying the first term by 1 − (ζ + 1− ζ + 2 2 2 2 2 2 2 2 simplifying produces ⎡⎛ ⎡ 1 ⎤ ω ζ ˆ ⎥e n θ (t ) = ⎢⎜ ⎢− + ⎢⎜ ⎢ 2 2 ζ 2 − 1 ⎥ ⎦ ⎣⎝ ⎣ ⎡⎛ 1 = ⎢⎜ − e ωn ⎢⎜⎝ 2 ⎣ ζ 2 −1t Now using cosh (x ) = ⎡ ⎛ 1 − e −ω n 2 ⎡ 1 ⎤ −ω ζ + ⎢− − ⎥e n 2 2 ⎢⎣ 2 ζ − 1 ⎥⎦ ζ 2 −1t ζ 2 −1t + ⎢ ⎣ ⎜ ⎝ 2 ζ 2 −1 e ωn ζ 2 −1t − ⎤ ⎞ ⎟e −ζω nt + 1⎥ ∆θu (t ) ⎟ ⎥ ⎠ ⎦ ζ 2 ζ 2 −1 e −ω n e x + e−x e x − e−x and sinh (x ) = we have 2 2 ( ) θˆ(t ) = ⎢1 − e −ζω t ⎜ cosh ω n ζ 2 − 1t − n ζ ζ 2 −1t ζ ζ 2 −1 ( sinh ω n ζ 2 ⎞ − 1t )⎟⎥ ∆θu (t ) ⎟⎥ ⎤ ⎠⎦ ζ 2 −1t ⎞ −ζω t ⎤ ⎟e n + 1⎥ ∆θu (t ) ⎟ ⎥ ⎠ ⎦ Case 2, ζ = 1 : ˆ (s ) = Θ = 2ω n s + ω n2 ∆θ ∆θ (2ω n s + ω n2 ) = s 2 + 2ω n s + ω n2 s (s + ω n ) 2 s ω n ∆θ ∆θ ∆θ − + 2 (s + ω n ) s + ω n s [ ] = [1 − (1 − ω t )e ]∆θu (t ) θˆ(t ) = ω n te −ω t − e −ω t + 1 ∆θu (t ) n n −ω n t n Case 3, 0 < ζ < 1 : ˆ (s ) = Θ 2ζω n s + ω n2 ∆θ ∆θ (2ζω n s + ω n2 ) = s 2 + 2ζω n s + ω n2 s s + ζω n − jω n 1 − ζ 2 s + ζω n + jω n 1 − ζ 2 s )( ( ) (ζ − j 1 − ζ ) ∆θ (ζ + j 1 − ζ ) ∆θ 1 − (ζ + j 1 − ζ ) 1 − (ζ − j 1 − ζ ) ∆θ + + = 2 2 2 2 2 2 ) ⎡ ζ − j 1 − ζ 2 2 −⎛⎜ ζω − jω n n ˆ θ (t ) = ⎢ e ⎝ 2 ⎢1 − ζ − j 1 − ζ 2 ⎣ ( 2 s + ζω n + jω n ζ 2 − 1 s + ζω n − jω n ζ 2 − 1 ( 2 ) ζ 2 −1 ⎞⎟ t ⎠ s (ζ + j 1 − ζ ) e + 1 − (ζ + j 1 − ζ ) 2 2 2 ( Multiplying the first term by 1 − (ζ + j 1− ζ + j 1−ζ 2 1−ζ 2 ) ) 2 2 −⎛⎜ ζω n + jω n ζ 2 −1 ⎞⎟ t ⎠ ⎝ 2 ⎤ + 1⎥ ∆θu (t ) ⎥ ⎦ ( and the second term by 1 − (ζ − j 1− ζ − j 1−ζ 2 1−ζ 2 ) ) 2 2 and simplifying produces ⎡⎛ ⎡ 1 ⎢⎜ ⎢ 2 ⎣⎝ ⎣ θˆ(t ) = ⎢⎜ ⎢− − j ⎡⎛ 1 = ⎢⎜ − e jωn ⎢⎜⎝ 2 ⎣ ⎤ jω ⎥e n 2 2 1 − ζ ⎥⎦ ζ Now using cos( x ) = ⎡ ⎛ 1 − e − jω n 2 ζ 2 −1t ζ 2 −1t ζ 2 −1t ⎢ ⎣ ⎜ ⎝ −j ζ 2 ζ 2 −1 e jω n ζ 2 −1t + j ζ 2 −1t ( ) 2 ζ 2 −1 ( ) ⎞⎤ sin ω n ζ 2 − 1t ⎟⎥ ∆θu (t ) ⎟⎥ ζ 2 −1 ⎠⎦ ζ ⎤ ⎞ ⎟e −ζω nt + 1⎥ ∆θu (t ) ⎟ ⎥ ⎠ ⎦ ζ e jx − e − jx e jx + e − jx and sin ( x ) = we have 2 j2 θˆ(t ) = ⎢1 − e −ζω t ⎜ cos ω n ζ 2 − 1t − n ⎡ 1 ⎤ − jω ζ + ⎢− + j ⎥e n 2 ⎢⎣ 2 2 1 − ζ ⎥⎦ e − jω n ζ 2 −1t ⎞ −ζω t ⎤ ⎟e n + 1⎥ ∆θu (t ) ⎟ ⎥ ⎠ ⎦ ∆ω s2 Case 1, ζ > 1 : (b) Θ(s ) = ˆ (s ) = Θ ( ) ∆ω 2ζω n s + ω n2 2ζω n s + ω n2 ∆ω = s 2 + 2ζω n s + ω n2 s 2 s + ζω n − ω n ζ 2 − 1 s + ζω n + ω n ζ 2 − 1 s 2 ( ∆ω ∆ω 1 ωn 2 ζ − 1 s + ζω n − ω n ζ − 1 2 ⎡ ∆ω θˆ(t ) = ⎢ 1 ⎢⎣ ω n 2 ζ 2 − 1 e + + s + ζω n + ω n ζ − 1 2 − ⎛⎜ ζω n +ω n ζ 2 −1 ⎞⎟ t ⎠ ⎝ − ) 1 ωn 2 ζ 2 − 1 2 =− )( ∆ω 1 ωn 2 ζ 2 − 1 e ∆ω s2 −⎛⎜ ζω n −ω n ζ 2 −1 ⎞⎟ t ⎠ ⎝ ⎤ + ∆ωt ⎥u (t ) ⎥⎦ ) ( ⎡ ⎤ ∆ω 1 = ⎢∆ωt − sinh ω n ζ 2 − 1t e −ζω nt ⎥u (t ) ωn ζ 2 − 1 ⎢⎣ ⎥⎦ Case 2, ζ = 1 : ˆ (s ) = Θ 2ω n s + ω 
n2 ∆ω ∆ω (2ω n s + ω n2 ) = s 2 + 2ω n s + ω n2 s 2 (s + ω n )2 s 2 =− [ ∆ω (s + ω n ) 2 + ∆ω s2 ] θˆ(t ) = − ∆ωte −ω t + ∆ωt u (t ) n [ ] = ∆ωt 1 − e −ωnt u (t ) Case 3, 0 < ζ < 1 : ˆ (s ) = Θ ( ) ∆ω 2ζω n s + ω n2 2ζω n s + ω n2 ∆ω = s 2 + 2ζω n s + ω n2 s 2 s + ζω n − jω n 1 − ζ 2 s + ζω n + jω n 1 − ζ 2 s 2 ( ∆ω =− ∆ω 1 ωn j2 1 − ζ 2 s + ζω n − jω n 1 − ζ 2 ⎡ ∆ω θˆ(t ) = ⎢− 1 ⎢⎣ ω n j 2 1 − ζ 2 e + s + ζω n + jω n 1 − ζ 2 + ) 1 ωn j2 1 − ζ 2 −⎛⎜ ζω n − jω n 1−ζ 2 ⎞⎟ t ⎠ ⎝ ( )( ∆ω + 1 ωn j2 1 − ζ 2 ) ⎡ ⎤ ∆ω 1 = ⎢∆ωt − sin ω n 1 − ζ 2 t e −ζω nt ⎥u (t ) ωn 1 − ζ 2 ⎢⎣ ⎥⎦ ∆ω s2 e −⎛⎜ ζω n − jω n 1−ζ 2 ⎞⎟ t ⎠ ⎝ ⎤ + ∆ωt ⎥u (t ) ⎥⎦ C.4 Kω n2 Kω n2 ˆ Θ (s ) H a (s ) = 2 and Θ(s ) = H a (s )Θ(s ) = 2 s + 2ζω n s + ω n2 s + 2ζω n s + ω n2 2ζ k 2 where K = 1 + − 2 ωn ωn The roots of s 2 + 2ζω n s + ω n2 are − ζω n ± ω n ζ 2 − 1 . These roots are real and distinct for ζ > 1 , real and repeated for ζ = 1 , and complex conjugates when 0 < ζ < 1 . (When ζ = 0 , the roots are purely imaginary. But we are not interested in this solution for this problem.) (a) Θ(s ) = ∆θ s Case 1, ζ > 1 : ˆ (s ) = Θ Kω n2 K∆θω n2 ∆θ = s 2 + 2ζω n s + ω n2 s s s + ζω n − ω n ζ 2 − 1 s + ζω n + ω n ζ 2 − 1 )( ( ) ⎤ K∆θ ⎡ ζ K∆θ ⎡ ζ ⎤ ⎢1 + ⎥ ⎢1 − ⎥ 2 ⎢ 2 ⎢ ζ 2 − 1 ⎥⎦ ζ 2 − 1 ⎥⎦ K∆θ ⎣ ⎣ = − − s s + ζω n − ω n ζ 2 − 1 s + ζω n + ω n ζ 2 − 1 ⎡ ⎛⎡ ⎢ ⎣ 2 ⎜⎢ ⎝⎣ 1 θˆ(t ) = K∆θ ⎢1 − ⎜ ⎢1 + ⎤ −⎛⎜ ζω n −ωn ⎥e ⎝ 2 ζ − 1 ⎥⎦ ζ ζ 2 −1 ⎞⎟ t ⎠ ⎡ ζ ⎤ −⎛⎜⎝ ζωn +ωn + ⎢1 − ⎥e ⎢⎣ ζ 2 − 1 ⎥⎦ ) ( ( ) ζ 2 −1 ⎞⎟ t ⎛ ⎡ ⎤⎞ ζ = K∆θ ⎜1 − e −ζω nt ⎢cosh ω n 1 − ζ 2 t + sinh ω n 1 − ζ 2 t ⎥ ⎟u (t ) ⎜ ⎢⎣ ⎥⎦ ⎟⎠ ζ 2 −1 ⎝ Case 2, ζ = 1 : ˆ (s ) = Θ Kω n2 (s + ω n ) [ 2 K∆θω n ∆θ K∆θ K∆θ = − − 2 s s (s + ω n ) s + ω n ] θˆ(t ) = K∆θ 1 − ω n te −ω t − e −ω t u (t ) n n ⎠ ⎞⎤ ⎟⎥u (t ) ⎟⎥ ⎠⎦ Case 3, 0 < ζ < 1 : ˆ (s ) = Θ Kω n2 K∆θω n2 ∆θ = s 2 + 2ζω n s + ω n2 s s s + ζω n − jω n 1 − ζ 2 s + ζω n + jω n 1 − ζ 2 )( ( ) ζ ⎤ ζ ⎤ K∆θ ⎡ K∆θ ⎡ ⎢1 − j ⎥ ⎢1 + j ⎥ 2 2 2 2 ⎢ ⎥ ⎢ ⎥⎦ ζ ζ 1 1 − − K∆θ ⎣ ⎦ − ⎣ = − s s + ζω n − jω n 1 − ζ 2 s + ζω n + jω n 1 − ζ 2 ⎡ ⎛ 1 θˆ(t ) = K∆θ ⎢1 − ⎜1 − j ⎞ −⎛⎜ ζω n − jωn ⎟e ⎝ 2 ⎟ 1−ζ ⎠ ζ 1−ζ 2 ⎞⎟ t ⎠ ζ 1⎛ − ⎜1 + j 2⎜ 1−ζ 2 ⎝ ⎢ 2 ⎜⎝ ⎣ ⎡ ⎛ ζ sin ω n ζ 2 − 1t = K∆θ ⎢1 − e −ζω nt ⎜ cos ω n ζ 2 − 1t + 2 ⎜ ⎢ 1− ζ ⎝ ⎣ ) ( (b) Θ(s ) = ( ⎞ −⎛⎜ ζω n + jωn ⎟e ⎝ ⎟ ⎠ 1−ζ 2 ⎞⎟ t ⎠ ⎤ ⎥u (t ) ⎥ ⎦ ) ⎞⎤ ⎟⎥u (t ) ⎟⎥ ⎠⎦ ∆ω s2 Case 1, ζ > 1 : ˆ (s ) = Θ Kω n2 K∆ωω n2 ∆ω = s 2 + 2ζω n s + ω n2 s 2 s 2 s + ζω n − ω n ζ 2 − 1 s + ζω n + ω n ζ 2 − 1 2 K∆ωζ = )( ( K∆ω − s2 ωn s ) 2ζ 2 − 1 ⎤ K∆ω ⎡ 2ζ 2 − 1 ⎤ K∆ω ⎡ ⎥ ⎢2ζ + ⎥ ⎢2ζ − 2 2 2ω n ⎢ 2ω n ⎢ ⎥ ⎥⎦ ζ 1 ζ 1 − − ⎦+ ⎣ ⎣ + s + ζω n − ω n ζ 2 − 1 s + ζω n + ω n ζ 2 − 1 ⎡ 2ζ 1 ⎡ 2ζ 2 − 1 ⎤ −⎛⎜⎝ ζω n −ωn ζ 2 −1 ⎞⎟⎠ t 1 ⎡ 2ζ 2 − 1 ⎤ −⎛⎜⎝ ζω n +ωn ζ 2 −1 ⎞⎟⎠ t ⎤ ˆ ⎥u (t ) ⎢ θ (t ) = K∆ω t − + ⎢2ζ − + ⎢2ζ + ⎥e ⎥e 2 2 2 ⎥ ⎢ ω n 2 ⎢⎣ ⎢⎣ ζ − 1 ⎥⎦ ζ − 1 ⎥⎦ ⎦ ⎣ ⎡ 2ζ ⎞⎤ 2 2 1 −ζω nt ⎛⎜ 2ζ 2 − 1 ωn ζ 2 −1t 2ζ 2 − 1 −ωn ζ 2 −1t e e e = K∆ω ⎢t − + + 2ζe ωn ζ −1t − + 2ζe −ωn ζ −1t ⎟⎥u (t ) ⎜ ζ 2 −1 ⎟⎥ ⎢ ω n 2ω n ζ 2 −1 ⎝ ⎠⎦ ⎣ ⎡ 2ζ ⎞⎤ 1 −ζω nt ⎛⎜ 2ζ 2 − 1 e 2ζ cosh ω n ζ 2 − 1t + sinh ω n ζ 2 − 1t ⎟⎥u (t ) = K∆ω ⎢t − + ⎜ ⎟⎥ ⎢ ωn ωn ζ 2 −1 ⎝ ⎠⎦ ⎣ ( ) ( ) Case 2, ζ = 1 : 2 K∆ω ωn K∆ω ˆ (s ) = ∆ω Kω Θ = 2 − 2 2 s s (s + ω n ) 2 n ⎡ 2 ⎣ ωn θˆ(t ) = K∆ω ⎢t − + s e −ωnt + te −ωnt + 2 K∆ω K∆ω (s + ω n ) 2 + ωn s + ωn ⎤ e −ωnt ⎥u (t ) ωn ⎦ 2 Case 3, 0 < ζ < 1 : ˆ (s ) = Θ = Kω n2 K∆ωω n2 ∆ω = s 2 + 2ζω n s + ω n2 s 2 s 2 s + ζω n − jω n 1 − ζ 2 s + ζω n + jω n 1 − ζ K∆ω − s2 2ζ ( K∆ω ωn s ⎡ 2ζ ⎢ ⎣ ωn θˆ(t ) = K∆ω ⎢t − + + ( ) K∆ω ⎡ 2ζ 2 − 1 ⎤ ⎢2ζ − j ⎥ 2 2ω n ⎢ ⎥⎦ − 1 ζ ⎣ s + ζω n − jω n 1 − ζ ( )( 2 + ) 1⎡ 2ζ 2 − 1 ⎤ −⎛⎜⎝ ζω n − jωn ⎢2ζ 
− j ⎥e 2 2⎢ ⎥⎦ − 1 ζ ⎣ ⎡ 2ζ e −ζω nt = K∆ω ⎢t − + ωn ⎢ ωn ⎣ ( ) ) ) K∆ω ⎡ 2ζ 2 − 1 ⎤ ⎢2ζ + j ⎥ 2 2ω n ⎢ ⎥⎦ − 1 ζ ⎣ s + ζω n + jω n 1 − ζ 1−ζ 2 ⎞⎟ t ⎠ ( ( 2 + 2 ( ) 1⎡ 2ζ 2 − 1 ⎤ −⎛⎜⎝ ζω n + jω n ⎢2ζ − j ⎥e 2 2⎢ ⎥⎦ − 1 ζ ⎣ ) ( ) 2 ⎛ ⎞⎤ ⎜ 2ζ cos ω 1 − ζ 2 t + 2ζ − 1 sin ω 1 − ζ 2 t ⎟⎥u (t ) n n ⎜ ⎟⎥ 1−ζ 2 ⎝ ⎠⎦ 1−ζ 2 ⎞⎟ t ⎠ ⎤ ⎥u (t ) ⎥ ⎦ k p k 0 F (s ) In general, H (s ) = C.5 (a) s + k p k 0 F (s ) F (s ) = k H (s ) = H ( jω ) = H ( jω ) = 2 (k k0 k ) 2 p ω 2 + (k p k 0 k )2 = k p k0 k s + k p k0 k k p k0 k jω + k p k 0 k (k k0 k ) 2 p ω 2 + (k p k 0 k )2 k p k0 k 1 ⇒ ω 3 = k p k 0 k ⇒ B3 = 2 2π B3 k p k 0 k 4 2 = × = <1 Bn k p k0 k π 2π So, B3 < Bn (b) F (s ) = k s+k H (s ) = ω n2 s 2 + 2ζω n s + ω n2 H ( jω ) = H ( jω ) = 2 (ω ω n4 2 n −ω ) + (2ζω ω ) 2 2 2 = n B3 = B3 ω n = 1 − 2ζ 2 + Bn 2π So, B3 < Bn ω n2 ω n2 − ω 2 + j 2ζω nω (ω ω n4 2 n −ω2 ) + (2ζω ω ) 2 2 n 1 ⇒ ω 3 = ω n 1 − 2ζ 2 + 2 (1 − 2ζ ) 2 2 +1 ωn 2 1 − 2ζ 2 + (1 − 2ζ 2 ) + 1 2π (1 − 2ζ ) 2 2 +1 × 8ζ ωn = 8ζ 1 − 2ζ 2 + 2π (1 − 2ζ ) 2 2 + 1 < 1 for all ζ > 0 (c) F (s ) = k 1 + k2 s 2ζω n s + ω n2 H (s ) = 2 s + 2ζω n s + ω n2 H ( jω ) = H ( jω ) = 2 (ω ω n4 + (2ζω nω )2 2 n −ω ) + (2ζω ω ) 2 2 2 = n j 2ζω nω + ω n2 ω n2 − ω 2 + j 2ζω n ω (ω ω n4 + (2ζω nω )2 2 n −ω2 ) + (2ζω ω ) 2 2 n 1 ⇒ ω 3 = ω n 1 + 2ζ 2 + 2 (1 + 2ζ ) 2 2 ωn 2 1 + 2ζ 2 + (1 + 2ζ 2 ) + 1 2π B3 ω n 2 = 1 + 2ζ 2 + (1 + 2ζ 2 ) + 1 × Bn 2π +1 B3 = 2 ⎛ ω n ⎜⎜ ζ + ⎝ = 2 1 1 + 2ζ + π (1 + 2ζ ) 1 ζ+ 4ζ 2 2 +1 1 4ζ ⎞ ⎟⎟ ⎠ < 1 for all ζ > 0 (d) F (s ) = H (s ) = H ( jω ) = H ( jω ) = 2 (ω C2 2 n −ω ) + (2ζω ω ) 2 2 2 = n k1 + s k2 + s ⎛ k where C = ω n2 + ω n ⎜⎜ 2ζ − 2 ωn ⎝ C s + 2ζω n s + ω 2 2 n ⎞ ⎟⎟ ⎠ C ω − ω + j 2ζω nω 2 n (ω 2 C2 2 n ) + (2ζω ω ) ⇒ ω = ω (2ζ −ω2 C2 2 2 2 n 2 n 3 2 ) ( ) ( − 1 + ω n4 2ζ 2 − 1 − ω n4 − 2 2 ( ) ( ) ( ) ( ) ( ) ( ) 2 1 ω n2 2ζ 2 − 1 + ω n4 2ζ 2 − 1 − ω n4 − 2 2π B3 2 1 ω n2 2ζ 2 − 1 + ω n4 2ζ 2 − 1 − ω n4 − 2 × = Bn 2π B3 = 8ζ = 2πω n ) 8ζ 2 ⎡ ⎛ k2 ⎞ ⎤ ω n ⎢1 + ⎜⎜ 2ζ − ⎟⎟ ⎥ ωn ⎠ ⎥ ⎢⎣ ⎝ ⎦ ω n2 (2ζ 2 − 1) + ω n4 (2ζ 2 − 1) − (ω n4 − 2 ) 2 ⎛ k ⎞ 1 + ⎜⎜ 2ζ − 2 ⎟⎟ ωn ⎠ ⎝ 2 This is a mess. It appears to be less than 1 for all values of ζ, but it also depends on k2 and ωn. C.6 H a (s ) = 2ζω n s + ω n2 k where ζ = 1 2 2 2 s + 2ζω n s + ω n k0k p k2 and ω n = k 0 k p k 2 If k1 = 0, then ζ = 0 and we have H a (s ) = which has two poles at s = ± jω n . This is an oscillator, which we do not want! 
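The bandwidth ratios derived in C.5 are easy to evaluate numerically. A minimal Python sketch, assuming the closed-form B3/Bn expressions obtained in parts (a) and (c) above (variable names are illustrative):

```python
import math

# C.5(a): first-order loop, B3/Bn = 2/pi.
print(f"first-order loop: B3/Bn = {2 / math.pi:.3f}")

# C.5(c): second-order loop with the proportional-plus-integrator loop filter,
#   B3/Bn = sqrt(1 + 2*z**2 + sqrt((1 + 2*z**2)**2 + 1)) / (pi*(z + 1/(4*z))),  z = zeta.
for zeta in (0.5, 1 / math.sqrt(2), 1.0, 2.0, 5.0):
    a = 1 + 2 * zeta ** 2
    ratio = math.sqrt(a + math.sqrt(a ** 2 + 1)) / (math.pi * (zeta + 1 / (4 * zeta)))
    print(f"second-order loop, zeta = {zeta:5.3f}: B3/Bn = {ratio:.3f}")
# The ratio stays below 1 in every case, i.e. B3 < Bn, as claimed in the solution.
```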
ω n2 s 2 + ω n2 C.7 k0 k p k (a) H a (s ) = s + k0 k p k = τ s +τ τT 2 τT τT 2 + z −1 τT 1+ 1+ ⎛ 2 1 − z −1 ⎞ τ 2 2 ⎟= = (b) H a ⎜⎜ −1 ⎟ −1 τ T T 1 2 1 + − z z ⎝ ⎠ 1− +τ 2 z −1 T 1 + z −1 1− τT 1+ 2 (c) H d ( z ) = 1− (d) K p K 0 Kz −1 1 − (1 − K p K 0 K )z −1 τT 2 = 1− K K K p 0 τT 1+ 2 (e) Bn = k0 k p k 4 ⇒ K p K0 K = ⇒ k 0 k p k = 4 Bn τT = τT 1+ 2 k 0 k p kT 1 1 + k 0 k p kT 2 ⇒ K0 K p K = 4 Bn T 1 + 2 Bn T C.8 ω n2 (a) H a (s ) = 2 s + 2ζω n s + ω n2 (b) ⎛ 2 1 − z −1 ⎞ ω n2 ⎟= H a ⎜⎜ −1 ⎟ 2 ⎛ 2 1 − z −1 ⎞ ⎝ T 1 + z ⎠ ⎛ 2 1 − z −1 ⎞ ⎟ + ω n2 ⎟ ⎜ ⎜⎜ 2 ζω + n⎜ −1 ⎟ −1 ⎟ ⎝T 1+ z ⎠ ⎝T 1+ z ⎠ = z −2 where θ n = ω nT 2 K p K0 (c) H d ( z ) = z −2 + (d) K = θ n2 ( z − 2 + z −1 + 1) 2 1 − 2ζθ n + θ n θ n2 − 1 1 + 2ζθ n + θ n2 −1 +2 + z 1 − 2ζθ n + θ n2 1 − 2ζθ n + θ n2 z −1 K K p K0 − K −1 K z −1 + 1 K 1 − 2ζθ n + θ n2 1 + 2ζθ n + θ n2 ωn ωT 1 we have Bn T = n = 8ζ 8ζ 4ζ ⎛ ω nT ⎞ θ n ⎜ ⎟= ⎝ 2 ⎠ 4ζ Making this substitution into the answer from part (d) gives (e) Using Bn = K= ⇒ θ n = 4ζ (BnT ) 1 − 8ζ 2 (BnT ) + 16ζ 2 (BnT ) 2 1 + 8ζ 2 (BnT ) + 16ζ 2 (BnT ) 2 C.9 ⎡ (a) H a (s ) = k 1 ⎛ ⎜⎜ 2ζ − 2 ωn ⎣ ωn ⎝ s 2 + 2ζω n s + ω n2 ω n2 ⎢1 + ⎞⎤ ⎟⎟⎥ ⎠⎦ = ω n2 C s 2 + 2ζω n s + ω n2 (b) ⎛ 2 1 − z −1 ⎞ ω n2 C ⎟ = H a ⎜⎜ −1 ⎟ 2 ⎝ T 1 + z ⎠ ⎛ 2 1 − z −1 ⎞ ⎛ 2 1 − z −1 ⎞ ⎜⎜ ⎟ ⎟ + ω n2 + 2ζω n ⎜⎜ −1 ⎟ −1 ⎟ ⎝T 1+ z ⎠ ⎝T 1+ z ⎠ θ n2 C (z −2 + z −1 + 1) 1 − 2ζθ n + θ n2 = θ n2 − 1 1 + 2ζθ n + θ n2 −1 + z −2 + 2 z 1 − 2ζθ n + θ n2 1 − 2ζθ n + θ n2 K p K0 (c) H d ( z ) = z −2 + K 2 − K p K 0 K1 K p K0 − K2 −1 K 2 − K p K 0 K1 ( z −1 1 − K 1 z −1 z −1 + ) 1 K 2 − K p K 0 K1 (d) K p K0 − K2 −1 K 2 − K p K 0 K1 = ( ) 2 θ n2 − 1 1 − 2ζθ n + θ n2 1 − 2ζθ n + θ n2 K 2 − K p K 0 K1 = 1 + 2ζθ n + θ n2 K1 = 1 − ⇒ 4θ n2 1 K p K 0 1 + 2ζθ n + θ n2 1 − 2ζθ n − 3θ n2 K2 = K p K0 + 1 + 2ζθ n + θ n2 Digital Communications: A Discrete-Time Approach M. Rice Errata Foreword Page xiii, first paragraph, “bare witness” should be “bear witness” Page xxi, last paragraph, “You know who you.” should be “You know who you are.” Chapter 1 Page 3, second new paragraph, “Pittsburg” should be “Pittsburgh” Page 9, The end of the second lines reads, “… signal sideband AM.” This should be “… single sideband AM.” Page 10, the second line of Section 1.2, “information baring” should be “information bearing” Page 12, Equation (1.1) should read Page 14, Second new paragraph, “The same power/bandwidth exists with digitally modulated carriers.” should read, “The same power/bandwidth trade off exists with digitally modulated carriers.” Chapter 2 Page 27, The sentence after Equation (2.14) should read “An energy signal is a signal with finite nonzero energy whereas a power signal is a signal with finite nonzero power.” Page 28, Equation (2.19) should read Page 28, Equation (2.20) should read Page 33, Equation (2.30) should read Page 36, Equation (2.36) should read Page 37: Equation (2.37) should read Page 40, the sixth row in Table 2.4.4 should be Page 41, second line of text, “complex plain” should be “complex plane” Page 50, Equations (2.57) and (2.58) should read Page 56, Equation (2.72) should be Page 65, the third paragraph of Section 2.6.2. 
The two occurrences of should be Page 75, Exercise 2.24, the equation for should be Page 76, Exercise 2.27, the line after the equation should read “where the integral on the left is the power contained in the interval Page 83, Exercise 2.46, the plot for should be Page 83, Exercise 2.47, the plot for should be Page 84, the second figure of Exercise 2.48 should be Page 85: The first figure of Exercise 2.50 should be …” Page 86: The figure at the top of the page (Exercise 2.50) should be Page 86: The first Figure in Exercise 2.51 should be Page 87, The figure at the top of the page (Exercise 2.51) should be Page 95, The figure of Exercise 2.63 should be Page 97, Exercise 2.67. The Fibonacci sequence is 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 377, 610, 987, … Page 98, Exercise 2.67. The initial conditions should be . Page 98, Exercise 2.69 should begin “Consider an LTI system with input …” Page 99, Exercise 2.70 should begin “Consider an LTI system with input …” Page 99, the figure of Exercise 2.71 should be Page 102, the figure of Exercise 2.77 should be Page 103, Exercise 2.81 (c) should read Page 103, Exercise 2.81 (f) should read Page 103, Exercise 2.82 (a) should read Page 106, Exercise 2.90. The plot for should be and the statement after the figures should read Show that the length-4 DFT of both series have the same magnitude. (Hint: do the analysis in the time domain.) Chapter 3 Page 125, Forth line after equation (3.16): should be Page 125, Fifth line after equation (3.16): should be Page 132, Equation (3.23) should read Page 133, Equation (3.24) should read Page 138, second line “… Figure 3.3.8 requires three multipliers,…” should read “… Figure 3.3.8 requires four multipliers,…” Page 154, Equation (3.66) should read, Page 165, the figure for Exercise 3.15 (b) should be Page 174, Exercise 3.33 (b) should read Sample the impulse response at instants impulse response . and scale by to produce a discrete-time Page 175, Exercise 3.37, The last sentence should read “Show that for small approximately . , the DTFT is Chapter 4 Page 195, Equation (4.45) should be Page 198, Equation (4.52): µ should be boldface so that Equation (4.52) reads as follows: Page 198, Equation (4.54): µ should be boldface so that Equation (4.54) reads as follows: Page 198, Equation (4.55): AMA should be boldface so that Equation (4.55) reads as follows: Page 200, the last equation on the page. should be . Page 201, figure 4.4.1 should be Page 201, the first line of the equation for should be Page 201, Example 4.4.1: In the properties of the Gaussian sequence, Page 202, Figure 4.4.2 should be should be . Page 211, Exercise 4.19: µ should be boldface. Page 211, Exercise 4.20: µ should be boldface. Page 212, Exercise 4.23: part (b) should read (b) Compute the probability that . Page 212, Exercise 4.24: The autocorrelation function should be Page 212, Exercise 4.24: part (c) should read (c) Compute the probability that . Page 212, Exercise 4.26: the impulse response should be written as follows: Page 213, Exercise 4.28: the impulse response should be written as follows: Chapter 5 Page 225, Equation (5.17) should read Page 229, First line after equation (5.29): 2 should be . Page 234, Four lines below equation (5.40), ADC should be DAC. 
Page 239, Equation (5.53) should be Page 239, Equation (5.54) should be Page 239, Equation (5.56) should be Page 241, Equation (5.62) should be Page 249, Figure 5.3.11 should be Page 253, Figure 5.3.13 should be Page 254, The table at the end of Example 5.3.9 should be k 0 +1.00 −0.95 +1 −1 1 +1.02 +1.02 +1 +1 2 −0.97 +0.98 −1 +1 3 −0.97 −1.00 −1 −1 Page 270, Figure 5.5.2 (b) should be Page 273, the sentence just above Equation (5.107) should read, “Using Bayes’ rule, …” Page 286, The constellation plot in Exercise 5.17 should be Page 286, The constellation plot of Exercise 5.18 should be Page 286, The constellation of Exercise 5.19 should be Page 286, The constellation of Exercise 5.20 should be Page 287, The constellation of Exercise 5.21 should be Page 291, The constellation of Exercise 5.30 should be Page 291, The constellation of Exercise 5.31 should be Page 291, The constellation of Exercise 5.32 should be Page 292, Exercise 5.33 should read Consider a binary PAM system using the SRRC pulse shape. (a) Produce an eye diagram corresponding to 200 randomly generated symbols for a pulse shape with 100% excess bandwidth and . Use a sampling rate equivalent to 16 samples/symbol. (b) Repeat part (a) for a pulse shape with 50% excess bandwidth and . (c) Repeat part (a) for a pulse shape with 25% excess bandwidth and . (d) Compare and contrast the eye diagrams from parts (a) – (c). Page 292, Exercise 5.34 should read Consider a 4-ary PAM system using the SRRC pulse shape. (a) Produce an eye diagram corresponding to 200 randomly generated symbols for a pulse shape with 100% excess bandwidth and . Use a sampling rate equivalent to 16 samples/symbol. (b) Repeat part (a) for a pulse shape with 50% excess bandwidth and (c) Repeat part (a) for a pulse shape with 25% excess bandwidth and . . (d) Compare and contrast the eye diagrams from parts (a) – (c). Page 300, The figure for Exercise 5.46 should be Page 301, The first line after the figure should read “… the average energy is 1.215 J …” Chapter 6 Page 306, Equation (6.2) should be Page 306, Equation (6.3) should be Page 313, The legend of Figure 6.1.3 is incorrect. Figure 6.1.3 should be as follows: Page 314, Equation (6.30) should be Page 314, Equation (6.31) should be Page 325, the sentence just above Equation (6.87) should read, “The union bound for the conditional probabilities are” Page 343, Footnote 5. The last line should read “… utility of these added features.” Page 349, Exercise 6.10 should read as follows: Use the union bound to upper bound the probability of bit error for the 4+12+16 APSK constellation shown in Figure 5.3.5 using , , , , , and . Page 357-358, Exercise 6.38. The paragraph (on page 357) that begins “The other kind of repeater is the regenerative repeater” should be part (c). Part (d) is Suppose a 1 Mbit/s QPSK link has dB W/Hz and dB W/Hz. Which type of repeater provides the lowest composite bit-error rate: the bent-pipe repeater or the regenerative repeater? Chapter 7 Page 424, Exercise 7.7: (7.45) should be (7.44). Chapter 8 Page 445, Equation (8.20) should be as follows: Page 488, In the caption of Figure 8.4.26, “Figure 8.4.26” should be “Figure 8.4.25”. Page 471, The lower diagram of Figure 8.4.16 should be Page 507, Equation (8.130) should be Page 507, The exponential term of the first line of equation (8.131) should be Pages 517-518, Exercises 8.12-8.15 and 8.18 should be deleted. The remaining exercises should be renumbered in the obvious way. Chapter 9 Page 560, the first line under Figure 9.3.3. 
Equation (9.60) should be (9.59). Page 560, the first new paragraph the text “… a filter’s transfer frequency response …” should be “ … a filter’s transfer function …” Page 562: Equation (9.61) should be Page 579: Figure 9.4.1 should be Page 580: Table 9.4.1 should be k 0 1 2 3 4 5 6 7 (degrees) 45.00 26.57 14.04 7.13 3.58 1.79 0.90 0.45 (degrees) 45.00 71.57 57.53 50.40 46.83 48.62 49.51 49.96 (degrees) +50.00 +5.00 -21.57 -7.53 -0.40 +3.17 +1.38 +0.49 +1 +1 -1 -1 -1 +1 +1 +1 +1.00 +1.00 +0.50 +0.88 +1.05 +1.13 +1.09 +1.07 +0.00 +1.00 +1.50 +1.38 +1.27 +1.20 +1.24 +1.25 Page 581: Figure 9.4.2 should be 90 k ! 80 θ 70 angles (degrees) δn−1 θn n=0 60 50 40 30 20 10 0 0 1 2 3 4 iteration index (k) 5 6 7 Page 582: The first line of equation (9.107) should be Page 587: The first line of equation (9.117) should be Page 596: The first word of the second sentence of the third new paragraph on the page (10 lines from the bottom of the page) should be CoRDiC. Page 602: The first equation of Exercise 9.25 should be Page 602: The first equation of Exercise 9.26 should be Page 603: The equation of Exercise 9.26 (c) should be Chapter 10 Page 660: Figure 10.2.18 should be Page 661: line 15, the parenthetical statement should read (ratio of RF power out to DC power in) Page 662: The matrix equation for Exercise 10.1 (a) should be Page 663: The matrix equation for Exercise 10.3 (a) should be Page 664: The second line of Exercise 10.6 should reference Figure 10.1.2 (d). Page 665: Exercise 10.8 should read The QAM modulator of Exercise 10.6 was derived from the QAM modulator of Figure 10.1.2 (d) by using polyphase partitions of and . The QAM modulator of Exercise 10.7 was derived from the QAM modulator of Figure 10.1.4 by using a polyphase partition of . Show that the filterbanks in these two modulators are exactly identical. Page 668: The second line of Exercise 10.20 should begin “consists of 124 channels …” Page 669: The second line of Exercise 10.22 should begin “consists of 124 channels…” Page 669, Exercise 10.26: the last sentence above the figure should read Assume and . Page 669, Exercise 10.26 (b) should read Derive an expression for in terms of , , and . Page 670, Exercise 10.28 (a) should read Express the product of and the LO in terms of baseband signals and double frequency signals (centered at rads/s). Appendix A Page 677, Equation (A.9) should read Page 679, Equation (A.11) should read Page 684, Figure A.2.4 should be Appendix B Page 715, The equation below the first line of Exercise B.18 should be Page 715, The equation below the first line of Exercise B.18 (b) should be Bibliography Page 752. Ref. 23 should be T. Cover and J. Thomas, Elements of Information Theory, John Wiley & Sons, New York, 1991. Page 758: Ref.182 should be C. Dick and f. harris, “On the structure, performance, and applications of recursive all-pass filters with adjustable and linear group delay,” Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Orlando, FL, May 14-17, 2002, pp. 1517-1520.