1. COMMUNICATION SYSTEM ELEMENTS

Figure 1.1 shows a commonly used model for a single-link communication system. Although it suggests a system for communication between two remotely located points, this block diagram is also applicable to remote sensing systems, such as radar or sonar, in which the system input and output may be located at the same site. Regardless of the particular application and configuration, all information transmission systems invariably involve three major subsystems: a transmitter, the channel, and a receiver.

Input Transducer: The wide variety of possible sources of information results in many different forms for messages. Regardless of their exact form, however, messages may be categorized as analog or digital. The former may be modeled as functions of a continuous-time variable (e.g., pressure, temperature, speech, music), whereas the latter consist of discrete symbols (e.g., written text). Almost invariably, the message produced by a source must be converted by a transducer to a form suitable for the particular type of communication system employed. For example, in electrical communications, speech waves are converted by a microphone to voltage variations. Such a converted message is referred to as the message signal.

Transmitter: The purpose of the transmitter is to couple the message to the channel. Although it is not uncommon to find the input transducer directly coupled to the transmission medium, as for example in some intercom systems, it is often necessary to modulate a carrier wave with the signal from the input transducer. Modulation is the systematic variation of some attribute of the carrier, such as amplitude, phase, or frequency, in accordance with a function of the message signal. There are several reasons for using a carrier and modulating it. Important ones are
(1) For ease of radiation.
(2) To reduce noise and interference.
(3) For channel assignment.
(4) For multiplexing, or transmission of several messages over a single channel.
(5) To overcome equipment limitations.

Channel: The channel can have many different forms; the most familiar, perhaps, is the channel that exists between the transmitting antenna of a commercial radio station and the receiving antenna of a radio. In this channel, the transmitted signal propagates through the atmosphere, or free space, to the receiving antenna. However, it is not uncommon to find the transmitter hard-wired to the receiver, as in most local telephone systems. This channel is vastly different from the radio example. However, all channels have one thing in common: the signal undergoes degradation from transmitter to receiver. Although this degradation may occur at any point of the communication system block diagram, it is customarily associated with the channel alone. This degradation often results from noise and other undesired signals or interference, but may also include other distortion effects as well, such as fading signal levels, multiple transmission paths, and filtering.
Receiver: The receiver's function is to extract the desired message from the received signal at the channel output and to convert it to a form suitable for the output transducer. Although amplification may be one of the first operations performed by the receiver, especially in radio communications, where the received signal may be extremely weak, the main function of the receiver is to demodulate the received signal. Often it is desired that the receiver output be a scaled, possibly delayed, version of the message signal at the modulator input, although in some cases a more general function of the input message is desired. However, as a result of the presence of noise and distortion, this operation is less than ideal.

Output Transducer: The output transducer completes the communication system. This device converts the electric signal at its input into the form desired by the system user. Perhaps the most common output transducer is a loudspeaker. However, there are many other examples, such as tape recorders, personal computers, meters, and cathode-ray tubes.

[Fig. 1.1: block diagram of a single-link communication system, from input message through input transducer, transmitter, channel, receiver, and output transducer to the output message.]

2. CLASSIFICATION OF SIGNALS

A signal is a function representing a physical quantity. Mathematically, a signal is represented as a function of an independent variable t, where t usually represents time. Thus a signal is denoted by x(t).

2.1 Continuous-Time and Discrete-Time Signals
A signal x(t) is a continuous-time signal if t is a continuous variable. If t is a discrete variable, that is, if x(t) is defined only at discrete times, then x(t) is a discrete-time signal. Since a discrete-time signal is defined at discrete times, it is often identified as a sequence of numbers, denoted by {x(n)} or x[n], where n is an integer.

2.2 Analog and Digital Signals
If a continuous-time signal x(t) can take on any value in the continuous interval (a, b), where a may be −∞ and b may be +∞, then x(t) is called an analog signal. If a discrete-time signal can take on only a finite number of distinct values, then we call it a digital signal. The discrete-time signal x[n] is often formed by sampling a continuous-time signal x(t) such that x[n] = x(nTs), where Ts is the sampling interval.

2.3 Real and Complex Signals
A signal x(t) is a real signal if its value is a real number and a complex signal if its value is a complex number.

2.4 Deterministic and Random Signals
Deterministic signals are those signals whose values are completely specified for any given time. Random signals are those signals that take random values at any given time, and these must be characterized statistically.

2.5 Energy and Power Signals
The normalized energy content E of a signal x(t) is defined as

    E = ∫_{-∞}^{∞} |x(t)|^2 dt                                            (2.1)

The normalized average power P of a signal x(t) is defined as

    P = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} |x(t)|^2 dt                        (2.2)

• If 0 < E < ∞, that is, if E is finite (so P = 0), then x(t) is referred to as an energy signal.
• If E = ∞ but 0 < P < ∞, that is, P is finite, then x(t) is referred to as a power signal.
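As a quick numerical companion to Eqs. (2.1) and (2.2) (an addition, not part of the original notes), the sketch below approximates E and P on a truncated time grid for two assumed test signals: the rectangular pulse A[u(t+a) − u(t−a)] from the homework below and an arbitrary sinusoid. It assumes NumPy; the amplitudes and the window length are illustrative choices only.

```python
import numpy as np

# Quick numerical illustration of the energy/power definitions in Eqs. (2.1)-(2.2).
# A rectangular pulse is an energy signal; a sinusoid is a power signal.
dt = 1e-4
T_win = 200.0                                   # truncation window (stand-in for T -> infinity)
t = np.arange(-T_win / 2, T_win / 2, dt)

pulse = np.where(np.abs(t) < 1.0, 1.0, 0.0)     # A[u(t+a) - u(t-a)] with A = 1, a = 1
sine = 3.0 * np.sin(2 * np.pi * t)              # A sin(2*pi*t) with A = 3

for name, x, E_theory, P_theory in [("pulse", pulse, 2.0, 0.0),
                                    ("sine", sine, np.inf, 4.5)]:
    E = np.sum(np.abs(x) ** 2) * dt             # Eq. (2.1) over the window
    P = E / T_win                               # Eq. (2.2) with T = T_win
    print(f"{name:5s}: E ≈ {E:10.3f} (theory {E_theory}),  P ≈ {P:7.4f} (theory {P_theory})")
```

As the window grows, E of the pulse stays at 2 while its P falls toward zero, whereas E of the sinusoid grows without bound while its P settles at A^2/2.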
Example 2.1: Determine whether the signal x(t) = e^{-a|t|}, a > 0, is an energy signal, a power signal, or neither.

Sol.
    x(t) = e^{-a|t|} = { e^{-at},  t > 0
                       { e^{at},   t < 0

    E = ∫_{-∞}^{∞} [x(t)]^2 dt = ∫_{-∞}^{∞} e^{-2a|t|} dt = 2 ∫_0^{∞} e^{-2at} dt = 1/a < ∞

Thus, x(t) is an energy signal.

[Figure: x(t) = e^{-a|t|}, rising as e^{at} for t < 0 and decaying as e^{-at} for t > 0.]

H.W: Repeat Example 2.1 with x(t) = A[u(t+a) − u(t−a)], a > 0, and with x(t) = t u(t).

2.6 Periodic and Non-periodic Signals
A signal x(t) is periodic if there is a positive number T0 such that

    x(t + nT0) = x(t)    for all t and any integer n                      (2.3)

The smallest positive number T0 is called the period, and the reciprocal of the period is called the fundamental frequency f0:

    f0 = 1/T0    hertz (Hz)                                               (2.4)

Any signal for which there is no value of T0 satisfying Eq. (2.3) is said to be non-periodic or aperiodic. A periodic signal is a power signal if its energy content per period is finite; the average power of such a signal then need only be calculated over one period.

Example 2.2: Let x1(t) and x2(t) be periodic signals with periods T1 and T2, respectively. Under what conditions is the sum x(t) = x1(t) + x2(t) periodic, and what is the period of x(t) if it is periodic?

Sol. From Eq. (2.3),
    x1(t) = x1(t + mT1)    m an integer
    x2(t) = x2(t + nT2)    n an integer
If, therefore, T1 and T2 are such that
    mT1 = nT2 = T
then
    x(t + T) = x1(t + T) + x2(t + T) = x1(t) + x2(t) = x(t)
that is, x(t) is periodic. Thus, the condition for x(t) to be periodic is
    T1/T2 = n/m = rational number
The smallest common period is the least common multiple of T1 and T2. If the ratio T1/T2 is an irrational number, then the signals x1(t) and x2(t) do not have a common period and x(t) cannot be periodic.

To check x(t) = cos(t/3) + sin(t/4) for periodicity: cos(t/3) is periodic with period T1 = 6π, and sin(t/4) is periodic with period T2 = 8π. Since T1/T2 = 6π/8π = 3/4 is a rational number, x(t) is periodic with period T = 4T1 = 3T2 = 24π.

H.W: Is the following signal periodic? If so, find its period.
    x(t) = cos(t) + 2 sin(√2 t)

2.7 Singularity Functions
An important subclass of non-periodic signals in communication theory is the singularity functions (or, as they are sometimes called, the generalized functions).

2.7.1 Unit Step Function
The unit step function u(t) is defined as

    u(t) = { 1,  t > 0
           { 0,  t < 0                                                    (2.5)

Note that it is discontinuous at t = 0 and that the value at t = 0 is undefined.

[Fig. 2.1: the unit step function u(t).]

2.7.2 Unit Impulse Function
The unit impulse function δ(t), also known as the Dirac delta function, is not an ordinary function and is defined in terms of the following process:

    ∫_{-∞}^{∞} φ(t) δ(t) dt = φ(0)                                        (2.6)

where φ(t) is any test function continuous at t = 0. Some additional properties of δ(t) are

    ∫_{-∞}^{∞} φ(t) δ(t − t0) dt = φ(t0)                                  (2.7)
    δ(at) = (1/|a|) δ(t)                                                  (2.8)
    δ(−t) = δ(t)                                                          (2.9)
    x(t) δ(t) = x(0) δ(t)                                                 (2.10)
    x(t) δ(t − t0) = x(t0) δ(t − t0)                                      (2.11)

An alternative definition of δ(t) is provided by the following two conditions:

    ∫_{t1}^{t2} δ(t − t0) dt = 1        t1 < t0 < t2                      (2.12)
    δ(t − t0) = 0                       t ≠ t0                            (2.13)

Conditions (2.12) and (2.13) correspond to the intuitive notion of a unit impulse as the limit of a suitably chosen conventional function having unit area in an infinitesimally small width. For convenience, δ(t) is shown schematically in Fig. 2.2.

[Fig. 2.2: graphical symbol for the unit impulse δ(t).]
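This limiting view of δ(t) can be checked numerically. The hedged sketch below (not from the notes) replaces δ(t − t0) with a unit-area pulse of width ε and shows the sifting property of Eq. (2.7) emerging as ε shrinks. NumPy is assumed, and the test function is the one that appears in Example 2.3 below.

```python
import numpy as np

# The unit impulse as the limit of a narrow unit-area pulse (Eqs. (2.12)-(2.13)):
# delta_eps(t) has width eps and height 1/eps.  As eps shrinks, the sifting
# property of Eq. (2.7), integral of phi(t) delta(t - t0) dt = phi(t0), emerges.
def phi(t):
    return t ** 2 + np.cos(np.pi * t)        # test function, as in Example 2.3 below

t0 = 1.0                                     # phi(1) = 1 + cos(pi) = 0
dt = 1e-6
t = np.arange(0.0, 2.0, dt)

for eps in [0.5, 0.1, 0.01]:
    delta_eps = np.where(np.abs(t - t0) < eps / 2, 1.0 / eps, 0.0)
    val = np.sum(phi(t) * delta_eps) * dt    # approximates the sifting integral
    print(f"eps = {eps:5.2f}:  integral ≈ {val:+.6f}   (exact sifted value phi(1) = {phi(t0):.1f})")
```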
If g(t) is a generalized function, its derivative g'(t) is defined by the following relation:

    ∫_{-∞}^{∞} g'(t) φ(t) dt = − ∫_{-∞}^{∞} g(t) φ'(t) dt                 (2.14)

By using Eq. (2.14), the derivative of u(t) can be shown to be δ(t); that is,

    δ(t) = u'(t) = du(t)/dt                                               (2.15)

Example 2.3: Evaluate the integral ∫_{-∞}^{∞} (t^2 + cos πt) δ(t − 1) dt.

Sol. ∫_{-∞}^{∞} (t^2 + cos πt) δ(t − 1) dt = (t^2 + cos πt)|_{t=1} = 1 + cos π = 1 − 1 = 0

H.W: Evaluate the following integrals:
(a) ∫_{-∞}^{∞} e^{-t} δ(2t − 2) dt
(b) ∫_{-∞}^{∞} e^{-2t} δ'(t) dt

3. COMPLEX EXPONENTIAL FOURIER SERIES

Let x(t) be a periodic signal with period T0. Then we define the complex exponential Fourier series of x(t) as

    x(t) = Σ_{n=-∞}^{∞} c_n e^{jnω0 t}                                    (3.1)

where ω0 = 2π/T0 = 2πf0, which is called the fundamental angular frequency. The coefficients c_n are called the Fourier coefficients, and they are given by

    c_n = (1/T0) ∫_{t0}^{t0+T0} x(t) e^{-jnω0 t} dt                       (3.2)

3.1 Frequency Spectra
If the periodic signal x(t) is real, then

    c_n = |c_n| e^{jθ_n}        c_{-n} = c_n* = |c_n| e^{-jθ_n}           (3.3)

where the asterisk (*) indicates the complex conjugate. Note that

    |c_{-n}| = |c_n|            θ_{-n} = −θ_n                             (3.4)

A plot of |c_n| versus the angular frequency ω = 2πf is called the amplitude spectrum of the periodic signal x(t). A plot of θ_n versus ω is called the phase spectrum of x(t). These are referred to as the frequency spectra of x(t). Since the index n assumes only integer values, the frequency spectra of a periodic signal exist only at the discrete frequencies nω0. They are therefore referred to as discrete frequency spectra or line spectra. From Eq. (3.4) we see that the amplitude spectrum is an even function of ω and the phase spectrum is an odd function of ω.

Example 3.1: Find and sketch the magnitude spectra for the periodic square pulse train x(t) shown in the accompanying figure (amplitude A, pulse width d, period T; figure not reproduced) for (a) d = T/4 and (b) d = T/8.

Sol. With the pulse centred in the period, Eq. (3.2) gives

    c_n = (Ad/T) · sin(nω0 d/2)/(nω0 d/2)

(a) d = T/4: nω0 d/2 = nπd/T = nπ/4. The magnitude spectrum for this case is a line spectrum at multiples of ω0 under a sin x/x envelope whose nulls fall at every fourth harmonic (figure not reproduced).
(b) d = T/8: nω0 d/2 = nπd/T = nπ/8. The magnitude spectrum for this case has the same form with the envelope stretched in frequency, so the nulls fall at every eighth harmonic (figure not reproduced).

H.W: If x1(t) and x2(t) are periodic signals with period T and their complex Fourier series expressions are

    x1(t) = Σ_{n=-∞}^{∞} a_n e^{jnω0 t}        x2(t) = Σ_{n=-∞}^{∞} b_n e^{jnω0 t}        ω0 = 2π/T

show that the signal x(t) = x1(t) x2(t) is periodic with the same period T and can be expressed as

    x(t) = Σ_{n=-∞}^{∞} c_n e^{jnω0 t}

where c_n is given by

    c_n = Σ_{k=-∞}^{∞} a_k b_{n−k}

3.2 Power Content of a Periodic Signal
The power content of a periodic signal x(t) with period T0 is defined as the mean-square value over a period:

    P = (1/T0) ∫_{-T0/2}^{T0/2} |x(t)|^2 dt                               (3.5)

3.3 Parseval's Theorem for the Fourier Series
Parseval's theorem for the Fourier series states that if x(t) is a periodic signal with period T0, then

    (1/T0) ∫_{-T0/2}^{T0/2} |x(t)|^2 dt = Σ_{n=-∞}^{∞} |c_n|^2            (3.6)

H.W: Verify Eq. (3.6).
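The following sketch (an addition, not part of the notes) discretizes Eq. (3.2) to compute the Fourier coefficients of the square pulse train of Example 3.1 and checks Parseval's relation, Eq. (3.6). NumPy is assumed; the pulse amplitude A = 1 is an assumption, since the figure for Example 3.1 is not reproduced.

```python
import numpy as np

# Fourier coefficients of the square pulse train of Example 3.1 (assumed amplitude
# A = 1, width d = T/4), computed by discretizing Eq. (3.2), plus a check of
# Parseval's theorem, Eq. (3.6).
A, T, d = 1.0, 1.0, 0.25
dt = 1e-4
t = np.arange(-T / 2, T / 2, dt)
x = np.where(np.abs(t) < d / 2, A, 0.0)          # one period, pulse centred at t = 0
w0 = 2 * np.pi / T

N = 50
n = np.arange(-N, N + 1)
cn = np.array([np.sum(x * np.exp(-1j * k * w0 * t)) * dt / T for k in n])   # Eq. (3.2)

# closed form: c_n = (A d / T) * sin(n w0 d / 2) / (n w0 d / 2)
arg = n * w0 * d / 2
cn_theory = (A * d / T) * np.where(n == 0, 1.0, np.sin(arg) / np.where(arg == 0, 1.0, arg))

print("max |c_n - closed form| =", np.max(np.abs(cn - cn_theory)))
print("power from the signal   :", np.sum(np.abs(x) ** 2) * dt / T)         # Eq. (3.5)
print("power from sum |c_n|^2  :", np.sum(np.abs(cn) ** 2))                 # Eq. (3.6), truncated at |n| <= 50
```

The two power estimates agree to within the truncation of the sum at |n| ≤ 50 and the discretization of the integral.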
4. FOURIER TRANSFORM

To generalize the Fourier series representation of Eq. (3.1) to a representation valid for non-periodic signals in the frequency domain, we introduce the Fourier transform. Let x(t) be a non-periodic signal. Then the Fourier transform of x(t), symbolized by ℱ, is defined by

    X(ω) = ℱ[x(t)] = ∫_{-∞}^{∞} x(t) e^{-jωt} dt                          (4.1)

The inverse Fourier transform of X(ω), symbolized by ℱ^{-1}, is defined by

    x(t) = ℱ^{-1}[X(ω)] = (1/2π) ∫_{-∞}^{∞} X(ω) e^{jωt} dω               (4.2)

Equations (4.1) and (4.2) are often called the Fourier transform pair. Writing X(ω) in terms of amplitude and phase as

    X(ω) = |X(ω)| e^{jθ(ω)}                                               (4.3)

we can show, for real x(t), that

    X(−ω) = X*(ω) = |X(ω)| e^{-jθ(ω)}                                     (4.4)

or

    |X(−ω)| = |X(ω)|        θ(−ω) = −θ(ω)                                 (4.5)

Thus, just as for the complex Fourier series, the amplitude spectrum of x(t), denoted by |X(ω)|, is an even function of ω, and the phase spectrum θ(ω) is an odd function of ω. These are referred to as the Fourier spectra of x(t). Equation (4.4) is the necessary and sufficient condition for x(t) to be real.

4.1 Properties of the Fourier Transform
We use the notation x(t) ↔ X(ω) to denote a Fourier transform pair.

4.1.1 Linearity (Superposition)
    a1 x1(t) + a2 x2(t) ↔ a1 X1(ω) + a2 X2(ω)                             (4.6)
where a1 and a2 are any constants.

4.1.2 Time Shifting
    x(t − t0) ↔ X(ω) e^{-jωt0}                                            (4.7)

4.1.3 Frequency Shifting
    x(t) e^{jω0 t} ↔ X(ω − ω0)                                            (4.8)

4.1.4 Scaling
    x(at) ↔ (1/|a|) X(ω/a)                                                (4.9)

4.1.5 Time Reversal
    x(−t) ↔ X(−ω)                                                         (4.10)

4.1.6 Duality
    X(t) ↔ 2π x(−ω)                                                       (4.11)

4.1.7 Differentiation
• Time differentiation:
    x'(t) = dx(t)/dt ↔ jω X(ω)                                            (4.12)
• Frequency differentiation:
    (−jt) x(t) ↔ X'(ω) = dX(ω)/dω                                         (4.13)

Example 4.1: Find the Fourier transform of the rectangular pulse signal x(t) defined by

    x(t) = p_a(t) = { 1,  |t| < a
                    { 0,  |t| > a

Sol.
    X(ω) = ∫_{-∞}^{∞} x(t) e^{-jωt} dt = ∫_{-a}^{a} e^{-jωt} dt = (2/ω) sin(ωa) = 2a · sin(ωa)/(ωa)

H.W: Verify Eqs. (4.6) to (4.13).

4.2 Fourier Transforms of Some Useful Signals
[A table of common Fourier transform pairs appears here in the original notes; it is not reproduced.]

Example 4.2: Find the Fourier transform of x(t) = sin(at)/(πt).

Sol. From Example 4.1 we have ℱ[p_a(t)] = (2/ω) sin(ωa), and from the duality property

    ℱ[(2/t) sin(ta)] = 2π p_a(−ω)

Thus,

    ℱ[sin(at)/(πt)] = (1/2π) ℱ[(2/t) sin(at)] = p_a(−ω) = p_a(ω)

where

    p_a(ω) = { 1,  |ω| < a
             { 0,  |ω| > a

[Figure: X(ω) is a rectangle of unit height extending from ω = −a to ω = a.]

5. CONVOLUTION

The convolution of two signals x1(t) and x2(t), denoted by x1(t) ∗ x2(t), is a new signal x(t) defined by

    x(t) = x1(t) ∗ x2(t) = ∫_{-∞}^{∞} x1(τ) x2(t − τ) dτ                  (5.1)

5.1 Properties of Convolution

    x1(t) ∗ x2(t) = x2(t) ∗ x1(t)                              (commutative property)
    [x1(t) ∗ x2(t)] ∗ x3(t) = x1(t) ∗ [x2(t) ∗ x3(t)]          (associative property)
    x1(t) ∗ [x2(t) + x3(t)] = x1(t) ∗ x2(t) + x1(t) ∗ x3(t)    (distributive property)   (5.2)

Convolution with δ functions:

    x(t) ∗ δ(t − t0) = x(t − t0)                                          (5.3)

5.2 Convolution Theorem
• Time convolution: let x1(t) ↔ X1(ω) and x2(t) ↔ X2(ω). Then
    x1(t) ∗ x2(t) ↔ X1(ω) X2(ω)                                           (5.4)
• Frequency convolution:
    x1(t) x2(t) ↔ (1/2π) X1(ω) ∗ X2(ω)                                    (5.5)

Example 5.1: Verify Eqs. (5.4) and (5.5).

Sol. Let X(ω) = X1(ω) X2(ω). Then, using Eq. (4.2),

    x(t) = ℱ^{-1}[X(ω)]
         = (1/2π) ∫_{-∞}^{∞} X1(ω) X2(ω) e^{jωt} dω
         = (1/2π) ∫_{-∞}^{∞} [∫_{-∞}^{∞} x1(τ) e^{-jωτ} dτ] X2(ω) e^{jωt} dω
         = ∫_{-∞}^{∞} x1(τ) [(1/2π) ∫_{-∞}^{∞} X2(ω) e^{jω(t−τ)} dω] dτ
         = ∫_{-∞}^{∞} x1(τ) x2(t − τ) dτ = x1(t) ∗ x2(t)

The proof of Eq. (5.5) follows in the same way.
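As an illustration of the time-convolution theorem (an added sketch, not from the notes), the code below convolves e^{-4t}u(t) with e^{-2t}u(t) directly and via multiplication of FFT-based spectra, and compares both with the closed form derived in Example 5.2 that follows. NumPy is assumed; the values α = 4 and β = 2 match those used in Fig. 5.1.

```python
import numpy as np

# Numerical illustration of the time-convolution theorem, Eq. (5.4): convolving two
# signals on a grid and multiplying their (FFT-based) spectra give the same result.
dt = 1e-3
t = np.arange(0.0, 10.0, dt)
x1 = np.exp(-4 * t)                 # e^{-4t} u(t)   (alpha = 4, beta = 2, as in Fig. 5.1)
x2 = np.exp(-2 * t)                 # e^{-2t} u(t)

# Direct convolution, Eq. (5.1), approximated by a Riemann sum.
direct = np.convolve(x1, x2)[: len(t)] * dt

# Frequency-domain route: multiply the spectra, then invert.  Zero-padding to at
# least 2N-1 samples avoids circular-convolution wrap-around.
n_fft = 2 * len(t)
X1 = np.fft.fft(x1, n_fft)
X2 = np.fft.fft(x2, n_fft)
via_fft = np.real(np.fft.ifft(X1 * X2))[: len(t)] * dt

analytic = (np.exp(-2 * t) - np.exp(-4 * t)) / 2     # closed form from Example 5.2 below
print("max |direct - via_fft | =", np.max(np.abs(direct - via_fft)))
print("max |direct - analytic| =", np.max(np.abs(direct - analytic)))
```

The first difference is at machine-precision level; the second reflects only the Riemann-sum discretization of the integral.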
Example 5.2: Find the convolution of the two signals

    x1(t) = e^{-αt} u(t)  and  x2(t) = e^{-βt} u(t),    α > β > 0

Sol. The steps involved in the convolution are illustrated in Fig. (5.1) for α = 4 and β = 2. Mathematically, we can form the integrand by direct substitution:

    x(t) = x1(t) ∗ x2(t) = ∫_{-∞}^{∞} e^{-ατ} u(τ) e^{-β(t−τ)} u(t − τ) dτ

But

    u(τ) u(t − τ) = { 0,  τ < 0
                    { 1,  0 < τ < t
                    { 0,  τ > t

thus

    x(t) = { 0,                                                              t < 0
           { e^{-βt} ∫_0^t e^{-(α−β)τ} dτ = [1/(α−β)] (e^{-βt} − e^{-αt}),   t ≥ 0

This result for x(t) is also shown in Fig. (5.1).

[Fig. (5.1): x1(τ), the folded and shifted x2(t − τ), their product, and the resulting x(t).]

Example 5.3: Find the convolution of the two signals shown in Fig. (5.2a) and Fig. (5.2b).

Sol. Here, f(t) has a simpler mathematical description than that of g(t), so it is preferable to invert (fold) f(t). Hence we shall determine g(t) ∗ f(t) rather than f(t) ∗ g(t). Thus

    c(t) = g(t) ∗ f(t) = ∫_{-∞}^{∞} g(τ) f(t − τ) dτ

Fig. (5.2c) shows g(τ) and f(−τ), whereas Fig. (5.2d) shows g(τ) and f(t − τ), which is f(−τ) shifted by t. Because the edges of f(−τ) are at τ = −1 and 1, the edges of f(t − τ) are at −1 + t and 1 + t. The two functions overlap over the interval (0, 1 + t), so that

    c(t) = ∫_0^{1+t} (1/3) τ dτ = (1/6)(t + 1)^2        −1 ≤ t ≤ 1          (5.6)

This situation, depicted in Fig. (5.2d), is valid only for −1 ≤ t ≤ 1. For 1 < t < 2, the situation is as illustrated in Fig. (5.2e). The two functions overlap only over the range −1 + t to 1 + t; therefore

    c(t) = ∫_{-1+t}^{1+t} (1/3) τ dτ = (2/3) t          1 ≤ t ≤ 2           (5.7)

At the transition point t = 1, both Eq. (5.6) and Eq. (5.7) yield the value 2/3, i.e., c(1) = 2/3. For 2 ≤ t < 4 the situation is as shown in Fig. (5.2f): g(τ) and f(t − τ) overlap over the interval from −1 + t to 3, so that

    c(t) = ∫_{-1+t}^{3} (1/3) τ dτ = −(1/6)(t^2 − 2t − 8)    2 ≤ t < 4      (5.8)

Again, both Eqs. (5.7) and (5.8) apply at the transition point t = 2. For t ≥ 4, f(t − τ) does not overlap g(τ), as depicted in Fig. (5.2g); consequently

    c(t) = 0        t ≥ 4                                                   (5.9)

For t < −1 there is no overlap between the two functions (see Fig. (5.2h)), so that

    c(t) = 0        t < −1                                                  (5.10)

Fig. (5.2i) shows c(t) plotted according to Eqs. (5.6) through (5.10).

[Fig. (5.2): the signals f(t) and g(t), the folded and shifted f(t − τ) for the various ranges of t, and the resulting c(t).]

H.W: Find the convolution of the two signals f(t) and g(t) depicted in Fig. (5.3). Write a MATLAB program to accomplish this task.

[Fig. (5.3): the signals f(t) and g(t) for the homework problem.]

6. CORRELATION

6.1 Correlation of Energy Signals
Let x1(t) and x2(t) be real-valued energy signals. Then the cross-correlation function R12(τ) of x1(t) and x2(t) is defined by

    R12(τ) = ∫_{-∞}^{∞} x1(t) x2(t − τ) dt                                  (6.1)

The autocorrelation function of x1(t) is defined as

    R11(τ) = ∫_{-∞}^{∞} x1(t) x1(t − τ) dt                                  (6.2)

Properties of correlation functions:

    R12(τ) = R21(−τ)
    R11(τ) = R11(−τ)
    R11(0) = ∫_{-∞}^{∞} [x1(t)]^2 dt = E

where E is the normalized energy content of x1(t).

Example 6.1: Find and sketch the autocorrelation function of x1(t) = e^{-at} u(t), a > 0.

Sol.
    R11(τ) = ∫_{-∞}^{∞} x1(t) x1(t − τ) dt
           = ∫_{-∞}^{∞} e^{-at} u(t) e^{-a(t−τ)} u(t − τ) dt
           = e^{aτ} ∫_{-∞}^{∞} e^{-2at} u(t) u(t − τ) dt

For τ > 0,

    u(t) u(t − τ) = { 1,  t > τ
                    { 0,  t < τ

Thus, R11(τ) = e^{aτ} ∫_τ^{∞} e^{-2at} dt = (1/2a) e^{-aτ}. Since R11(τ) is an even function of τ, we conclude that

    R11(τ) = (1/2a) e^{-a|τ|},    a > 0

[Figure: R11(τ), a two-sided decaying exponential with peak value 1/(2a) at τ = 0.]
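A hedged numerical check of Example 6.1 (an addition, not part of the original notes): discretizing Eq. (6.2) with np.correlate should reproduce R11(τ) = e^{-a|τ|}/(2a). NumPy is assumed and a = 3 is an arbitrary choice.

```python
import numpy as np

# Numerical check of Example 6.1: the autocorrelation of x1(t) = e^{-at} u(t)
# should equal e^{-a|tau|} / (2a).  Here a = 3 (arbitrary).
a = 3.0
dt = 1e-3
t = np.arange(0.0, 10.0, dt)
x1 = np.exp(-a * t)                            # e^{-at} u(t) sampled on t >= 0

# R11(tau) = integral of x1(t) x1(t - tau) dt, Eq. (6.2), via np.correlate.
R = np.correlate(x1, x1, mode="full") * dt     # lags from -(N-1)dt to +(N-1)dt
lags = np.arange(-(len(t) - 1), len(t)) * dt

R_theory = np.exp(-a * np.abs(lags)) / (2 * a)
print("R11(0) ≈", R[len(t) - 1], "  (theory 1/(2a) =", 1 / (2 * a), ")")
print("max error over all lags ≈", np.max(np.abs(R - R_theory)))
```

The even symmetry of the numerical result also illustrates the property R11(τ) = R11(−τ) listed above.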
6.2 Energy Spectral Density
Let R11(τ) be the autocorrelation function of x1(t). Then

    S11(ω) = ℱ[R11(τ)] = ∫_{-∞}^{∞} R11(τ) e^{-jωτ} dτ                      (6.3)

is called the energy spectral density of x1(t). Taking the inverse Fourier transform of Eq. (6.3), we have

    R11(τ) = ℱ^{-1}[S11(ω)] = (1/2π) ∫_{-∞}^{∞} S11(ω) e^{jωτ} dω           (6.4)

If x1(t) is real, then we have

    S11(ω) = ℱ[R11(τ)] = |X1(ω)|^2                                          (6.5)

and

    R11(τ) = (1/2π) ∫_{-∞}^{∞} |X1(ω)|^2 e^{jωτ} dω                         (6.6)

Setting τ = 0, we have

    R11(0) = (1/2π) ∫_{-∞}^{∞} |X1(ω)|^2 dω                                 (6.7)

Thus, from Eq. (6.2),

    E = ∫_{-∞}^{∞} |x1(t)|^2 dt = (1/2π) ∫_{-∞}^{∞} |X1(ω)|^2 dω            (6.8)

This is the reason why S11(ω) = |X1(ω)|^2 is called the energy spectral density of x1(t). Equation (6.8) is also known as Parseval's theorem for the Fourier transform.

Example 6.2: Find the energy of the signal f(t) = e^{-at} u(t). Determine the frequency W (rad/s) such that the energy contributed by the spectral components of all frequencies below W is 95% of the signal energy Ef.

Sol. We have

    Ef = ∫_{-∞}^{∞} f^2(t) dt = ∫_0^{∞} e^{-2at} dt = 1/(2a)

We can verify this result by Parseval's theorem. For this signal

    F(ω) = 1/(jω + a)

and

    Ef = (1/π) ∫_0^{∞} |F(ω)|^2 dω = (1/π) ∫_0^{∞} dω/(ω^2 + a^2) = (1/πa) tan^{-1}(ω/a)|_0^{∞} = 1/(2a)

The band ω = 0 to ω = W contains 95% of the signal energy, that is, 0.95/(2a). Therefore,

    0.95/(2a) = (1/π) ∫_0^{W} dω/(ω^2 + a^2) = (1/πa) tan^{-1}(ω/a)|_0^{W} = (1/πa) tan^{-1}(W/a)

or

    0.95π/2 = tan^{-1}(W/a)   ⟹   W = 12.706a rad/s

This result indicates that the spectral components of f(t) in the band from 0 (dc) to 12.706a rad/s (2.02a Hz) contribute 95% of the total signal energy; the remaining spectral components (from 12.706a rad/s to ∞) contribute only 5% of the signal energy.

6.3 Correlation of Power Signals
The time-average autocorrelation function R̄11(τ) of a real-valued power signal x1(t) is defined as

    R̄11(τ) = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} x1(t) x1(t − τ) dt              (6.9)

Note that

    R̄11(0) = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} [x1(t)]^2 dt = P1               (6.10)

If x1(t) is periodic with period T0, then

    R̄11(τ) = (1/T0) ∫_{-T0/2}^{T0/2} x1(t) x1(t − τ) dt                     (6.11)

6.4 Power Spectral Density
The power spectral density of x1(t), denoted S̄11(ω), is defined as

    S̄11(ω) = ℱ[R̄11(τ)] = ∫_{-∞}^{∞} R̄11(τ) e^{-jωτ} dτ                     (6.12)

Then

    R̄11(τ) = ℱ^{-1}[S̄11(ω)] = (1/2π) ∫_{-∞}^{∞} S̄11(ω) e^{jωτ} dω          (6.13)

Setting τ = 0, we get

    R̄11(0) = (1/2π) ∫_{-∞}^{∞} S̄11(ω) dω                                   (6.14)

Thus, from Eq. (6.10),

    P1 = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} [x1(t)]^2 dt = (1/2π) ∫_{-∞}^{∞} S̄11(ω) dω    (6.15)

This is the reason why S̄11(ω) is called the power spectral density of x1(t).
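Returning to Example 6.2, the short sketch below (an addition, not from the notes) integrates |F(ω)|^2 numerically and confirms that the band |ω| < 12.706a carries about 95% of the energy of e^{-at}u(t). NumPy is assumed and a = 1 is chosen, since the result scales with a.

```python
import numpy as np

# Numerical check of Example 6.2: the band |w| < 12.706a should carry 95% of the
# energy of f(t) = e^{-at} u(t).  Here a = 1.
a = 1.0
W = 12.706 * a

w = np.linspace(0.0, 2000.0 * a, 2_000_001)          # frequency grid (rad/s), wide enough
dw = w[1] - w[0]
F2 = 1.0 / (w ** 2 + a ** 2)                         # |F(w)|^2 for F(w) = 1/(jw + a)

E_total = np.sum(F2) * dw / np.pi                    # (1/pi) * integral over 0..inf, i.e. 1/(2a)
E_below_W = np.sum(F2[w <= W]) * dw / np.pi
print("total energy ≈", E_total, "  (theory 1/(2a) =", 1 / (2 * a), ")")
print("fraction below W ≈", E_below_W / E_total)     # should be close to 0.95
```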
Example 6.3: Verify Eq. (6.15) for the sine wave signal x1(t) = A sin(ω1 t + φ), where ω1 = 2π/T1.

Sol. From Eq. (6.11),

    R̄11(τ) = (1/T1) ∫_{-T1/2}^{T1/2} x1(t) x1(t − τ) dt
            = (A^2/T1) ∫_{-T1/2}^{T1/2} sin(ω1 t + φ) sin[ω1(t − τ) + φ] dt
            = (A^2/2T1) ∫_{-T1/2}^{T1/2} [cos(ω1 τ) − cos(2ω1 t + 2φ − ω1 τ)] dt
            = (A^2/2T1) cos(ω1 τ) ∫_{-T1/2}^{T1/2} dt = (A^2/2) cos(ω1 τ)

and

    R̄11(0) = A^2/2

From Eq. (6.12),

    S̄11(ω) = ℱ[R̄11(τ)] = ℱ[(A^2/2) cos(ω1 τ)] = (πA^2/2) δ(ω − ω1) + (πA^2/2) δ(ω + ω1)

Then

    (1/2π) ∫_{-∞}^{∞} S̄11(ω) dω = (1/2π) ∫_{-∞}^{∞} [(πA^2/2) δ(ω − ω1) + (πA^2/2) δ(ω + ω1)] dω
                                 = (A^2/4) ∫_{-∞}^{∞} [δ(ω − ω1) + δ(ω + ω1)] dω
                                 = (A^2/4)(1 + 1) = A^2/2 = P1

which verifies Eq. (6.15).

7. SYSTEM REPRESENTATION AND CLASSIFICATION

7.1 System Representation
A system is a mathematical model of a physical process that relates the input signal (source or excitation signal) to the output signal (response signal). Let x(t) and y(t) be the input and output signals, respectively, of a system. Then the system is viewed as a mapping of x(t) into y(t). Symbolically, this is expressed as

    y(t) = T[x(t)]                                                          (7.1)

where T is the operator that produces the output y(t) from the input x(t), as illustrated in Fig. 7.1.

[Fig. 7.1: block diagram of a system, input x(t) applied to the operator T to produce output y(t).]

7.2 System Classification

7.2.1 Continuous-Time and Discrete-Time Systems
If the input and output signals x(t) and y(t) are continuous-time signals, then the system is called a continuous-time system. If the input and output signals are discrete-time signals or sequences, then the system is called a discrete-time system.

7.2.2 Linear Systems
If the operator T in Eq. (7.1) satisfies the following two conditions, then T is called a linear operator and the system represented by T is called a linear system:
• Additivity:
    T[x1(t) + x2(t)] = T[x1(t)] + T[x2(t)] = y1(t) + y2(t)                  (7.2)
  for all input signals x1(t) and x2(t).
• Homogeneity:
    T[a x(t)] = a T[x(t)] = a y(t)                                          (7.3)
  for all input signals x(t) and scalars a.
Any system that does not satisfy Eq. (7.2) and/or Eq. (7.3) is classified as a nonlinear system.

Example 7.1: For each of the systems defined in the accompanying figure (not reproduced here), determine whether the system is linear. [In the worked solution, system (b) fails the additivity condition of Eq. (7.2); thus the system represented by (b) is not linear. The system also does not satisfy the homogeneity condition.]

7.2.3 Time-Invariant Systems
If the system satisfies the following condition, then the system is called a time-invariant or fixed system:

    T[x(t − t0)] = y(t − t0)                                                (7.4)

where t0 is any real constant. Equation (7.4) indicates that a delayed input gives a delayed output. A system which does not satisfy Eq. (7.4) is called a time-varying system.

7.2.4 Linear Time-Invariant (LTI) Systems
If the system is linear and time-invariant, then the system is called a linear time-invariant (LTI) system.

8. IMPULSE RESPONSE AND FREQUENCY RESPONSE

8.1 Impulse Response
The impulse response h(t) of an LTI system is defined to be the response of the system when the input is δ(t); that is,

    h(t) = T[δ(t)]                                                          (8.1)

The function h(t) is arbitrary, and it need not be zero for t < 0. If

    h(t) = 0    for t < 0                                                   (8.2)

then the system is called causal.

8.2 Response to an Arbitrary Input
The response y(t) of an LTI system to an arbitrary input x(t) can be expressed as the convolution of x(t) with the impulse response h(t) of the system; that is,

    y(t) = x(t) ∗ h(t) = ∫_{-∞}^{∞} x(τ) h(t − τ) dτ                        (8.3)

Since convolution is commutative, we can also express the output as

    y(t) = h(t) ∗ x(t) = ∫_{-∞}^{∞} h(τ) x(t − τ) dτ                        (8.4)
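To make Eq. (8.4) concrete, here is a small added sketch (not from the notes) that convolves an assumed first-order lowpass impulse response h(t) = (1/RC)e^{-t/RC}u(t) with a rectangular input pulse and compares the result with the familiar charging/discharging curve. NumPy is assumed; the system and the values RC = 0.5 and pulse width 2 are illustrative choices.

```python
import numpy as np

# Sketch of Eq. (8.4): the response of an LTI system is the convolution of the input
# with the impulse response.  Assumed example: a first-order lowpass system
# h(t) = (1/RC) e^{-t/RC} u(t) driven by a rectangular pulse.
dt = 1e-3
t = np.arange(0.0, 10.0, dt)
RC = 0.5
h = (1 / RC) * np.exp(-t / RC)                      # impulse response (causal)
x = np.where(t < 2.0, 1.0, 0.0)                     # input: unit pulse of width 2

y = np.convolve(x, h)[: len(t)] * dt                # y(t) = x(t) * h(t), Eq. (8.4)

# During the pulse the output follows the charging curve 1 - e^{-t/RC}; afterwards it decays.
y_theory = np.where(t < 2.0,
                    1 - np.exp(-t / RC),
                    (1 - np.exp(-2.0 / RC)) * np.exp(-(t - 2.0) / RC))
print("max |y - theory| ≈", np.max(np.abs(y - y_theory)))
```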
8.3 Response of Causal Systems
From Eqs. (8.2) and (8.3) or (8.4), the response y(t) of a causal LTI system is given by

    y(t) = ∫_{-∞}^{t} x(τ) h(t − τ) dτ = ∫_0^{∞} x(t − τ) h(τ) dτ           (8.5)

A signal x(t) is called causal if it has zero values for t < 0. Thus, if the input x(t) is also causal, then

    y(t) = ∫_0^{t} x(τ) h(t − τ) dτ = ∫_0^{t} x(t − τ) h(τ) dτ              (8.6)

8.4 Frequency Response
Applying the time convolution theorem of the Fourier transform to Eq. (8.3), we obtain

    Y(ω) = X(ω) H(ω)                                                        (8.7)

H(ω) is referred to as the frequency response (or transfer function) of the system. Thus

    H(ω) = ℱ[h(t)] = Y(ω)/X(ω)                                              (8.8)

[Block diagram: an input x(t) with spectrum X(ω) applied to an LTI system with impulse response h(t) and frequency response H(ω) produces the output y(t) = x(t) ∗ h(t), with Y(ω) = X(ω)H(ω); an input δ(t) (spectrum 1) produces h(t) (spectrum H(ω)).]

8.5 Distortionless Transmission
For distortionless transmission through a system, we require that the exact input signal shape be reproduced at the output. Therefore, if x(t) is the input signal, the required output is

    y(t) = K x(t − td)                                                      (8.9)

where td is the time delay and K is a gain constant. Taking the Fourier transform of both sides of Eq. (8.9), we get

    Y(ω) = K e^{-jωtd} X(ω)                                                 (8.10)

For distortionless transmission the system must therefore have

    H(ω) = K e^{-jωtd}                                                      (8.11)

That is, the amplitude of H(ω) must be constant over the entire frequency range, and the phase of H(ω) must be linear in frequency.

9. RELATIONSHIP BETWEEN INPUT AND OUTPUT SPECTRAL DENSITIES

Consider an LTI system with frequency response H(ω), input x(t), and output y(t). If x(t) and y(t) are energy signals, then by Eq. (6.5) their energy spectral densities are Sxx(ω) = |X(ω)|^2 and Syy(ω) = |Y(ω)|^2, respectively. Since Y(ω) = H(ω)X(ω), it follows that

    Syy(ω) = |H(ω)|^2 Sxx(ω)                                                (9.1)

A similar relationship holds for power signals and power spectral densities; that is,

    S̄yy(ω) = |H(ω)|^2 S̄xx(ω)                                               (9.2)

Example 9.1: Consider a system with H(ω) = 1/(1 + jω) and input x(t) = e^{-2t} u(t). Find the energy spectral density of the output.

Sol.
    x(t) = e^{-2t} u(t) ↔ X(ω) = 1/(jω + 2)
    Sxx(ω) = |X(ω)|^2 = 1/(ω^2 + 4)

The energy spectral density of the output is

    Syy(ω) = |H(ω)|^2 |X(ω)|^2 = 1/[(ω^2 + 1)(ω^2 + 4)]
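Finally, a hedged numerical check of Eq. (9.1) for Example 9.1 (an addition, not part of the notes): X(ω) is computed by direct numerical integration of Eq. (4.1), and |H(ω)|^2 |X(ω)|^2 is compared with the closed form 1/[(ω^2 + 1)(ω^2 + 4)]. NumPy is assumed; the frequency grid is an arbitrary choice.

```python
import numpy as np

# Numerical check of Eq. (9.1) for Example 9.1: with H(w) = 1/(1 + jw) and
# x(t) = e^{-2t} u(t), the output energy spectral density should be
# S_yy(w) = 1 / ((w^2 + 1)(w^2 + 4)).
dt = 1e-4
t = np.arange(0.0, 40.0, dt)
x = np.exp(-2 * t)

w = np.linspace(0.0, 10.0, 11)                       # a few test frequencies (rad/s)
# X(w) = integral of x(t) e^{-jwt} dt, Eq. (4.1), by direct numerical integration
X = np.array([np.sum(x * np.exp(-1j * wk * t)) * dt for wk in w])
H = 1.0 / (1.0 + 1j * w)

S_yy_num = np.abs(H) ** 2 * np.abs(X) ** 2           # |H(w)|^2 |X(w)|^2, Eqs. (9.1) and (6.5)
S_yy_theory = 1.0 / ((w ** 2 + 1.0) * (w ** 2 + 4.0))
print("max |numeric - closed form| ≈", np.max(np.abs(S_yy_num - S_yy_theory)))
```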