Summer 2008                         Signals & Systems                         S.F. Hsieh

I. Introduction to Signals and Systems

1  Goal of Signals & Systems

To develop mathematical models and techniques for continuous-/discrete-time signal and system analysis and synthesis (design).

• Signals (functions) are variables of time/space that carry information. Examples: f(t): speech (voltage/current picked up from a microphone, a computer file xxx.wav), a stock market index, an ECG (ElectroCardioGram); f(x, y): an image xxx.gif or xxx.jpg; f(t, x, y): video xxx.mpg.

• Systems (mappings) process input signals to produce output signals. They can be hardware (DSP processors, circuits, an automobile, an economic system) or software (programs).

2  Classifications of signals

1. Digital/analog: whether or not the amplitude is discrete (quantized).

2. Continuous-time/discrete-time: (MATLAB commands: plot, stem)
   • Continuous-time signal: x(t), −∞ < t < ∞, where the time variable t is continuous.
   • Discrete-time signal: x[n] ≡ x(nT), n = 0, ±1, ±2, . . ., where T is the sampling period. Note that x[3½] is NOT defined. Many discrete-time systems are computer programs.
   [Figure: a continuous-time signal x(t) drawn with plot, and a discrete-time signal x[n] drawn with stem.]

3. Deterministic/random:
   • Deterministic signals:
     – periodic (Fourier series): x(t) = x(t + T), ∀t; the minimum such T is the fundamental period.
     – aperiodic (Fourier transform)
   • Random signals (such as noise): described by the probability density function, mean, autocorrelation, power spectral density.

4. Energy/power signals:
   • Size of a signal x(t):
     – energy:
           E ≡ ∫_{−∞}^{+∞} |x(t)|² dt
       If x(t) is the voltage applied across a dummy load of a one-Ω resistor, then x²(t) is the power dissipation (watts). Integrating the power over time gives the total energy of the signal; that is, E indicates the energy that can be extracted from the signal.
       ∗ Continuous-time: total energy over the time interval t₁ ≤ t ≤ t₂ is ∫_{t₁}^{t₂} |x(t)|² dt.
       ∗ Discrete-time: total energy over the time interval n₁ ≤ n ≤ n₂ is Σ_{n=n₁}^{n₂} |x[n]|².
     – average power:
           P ≡ lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt
       ∗ If x(t) is periodic with period T, its average power becomes
           P = (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt
       ∗ The rms (root-mean-square) value of x(t) is √P.
   • Definitions:
     – x(t) is an energy signal if it has finite energy Ex < ∞ (its power is then Px = 0).
     – x(t) is a power signal if it has finite, nonzero power Px < ∞ (its energy is then Ex = ∞).
   • (Lathi Ex 1.2, p 72)
     – Power of a single real sinusoid x(t) = C cos(ω₀t + θ) is Px = C²/2, where C is a positive amplitude.
     – Power of a single complex exponential y(t) = D e^{jω₀t} is Py = |D|², where D is a complex number comprising amplitude and phase.
     – Power of a sum of orthogonal complex exponentials z(t) = Σ_i D_i e^{jω_i t} with distinct frequencies ω_i equals the sum of the individual powers: Pz = Σ_i P_i = Σ_i |D_i|².
     – (Q) Suppose z(t) = x(t) + y(t). Under what condition will the power of z(t) equal the sum of those of x(t) and y(t), i.e., Pz = Px + Py?
   • Examples:
     (a) x(t) = e^{−3t} u(t) is an energy signal with energy 1/6.
     (b) x(t) = A e^{j2παt} is a power signal with power A².
     (c) A rectangular pulse A⊓(t/τ) of amplitude A and duration τ has energy A²τ.
     (d) A⊓(t/τ) · cos(ω₀t + θ) has energy A²τ/2.
     (e) A rectangular pulse train Σ_{n=−∞}^{∞} A⊓((t − nT₀)/τ), with T₀ > τ, is a power signal with P = A²τ/T₀.
     (f) A ramp signal f(t) = t and an everlasting exponential g(t) = e^{−at} are neither energy nor power signals.
     (g) Is x(t) = 10 a power or an energy signal?
     (h) What is the energy of the sinc function sinc(x) = sin(πx)/(πx)? (Parseval’s theorem in Chapter 7’s Fourier transform solves it easily.)
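[MATLAB] A quick numerical check of example (a) and of the sinusoid power formula above. The time step, the truncation at t = 10, and the particular amplitude/frequency/phase are illustrative choices, not part of the definitions.

    % energy of x(t) = e^{-3t}u(t): should be close to 1/6
    dt = 1e-4;  t = 0:dt:10;          % e^{-3t}u(t) has essentially died out by t = 10
    x  = exp(-3*t);
    E  = sum(abs(x).^2)*dt            % Riemann-sum approximation of the energy integral

    % average power of C*cos(w0*t + theta) over one period: should be close to C^2/2
    T  = 2;  t = 0:dt:T;              % w0 = pi gives period T = 2*pi/w0 = 2
    C  = 3;  y = C*cos(pi*t + 0.4);
    P  = sum(abs(y).^2)*dt/T          % close to C^2/2 = 4.5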
3  Basic operations on signals

1. Amplitude:
   (a) amplitude-scaling: x(t) =⇒ a x(t), where a > 0,
       • a > 1: amplify (gain > 1)
       • 0 < a < 1: attenuate (gain < 1)
   (b) negation: x(t) =⇒ −x(t), change of polarity
   (c) level-shift: x(t) =⇒ x(t) + c
       • c > 0: shift up by c
       • c < 0: shift down by c
   (d) Note that the sum of two lines is itself a line: x₃(t) = x₁(t) + x₂(t).
       [Figure: two lines x₁(t), x₂(t) and their sum x₃(t).]

   [Ex] Given x(t), plot y(t) = −2x(t) + 3. (“multiply/divide first, then add/subtract”)
   [Figure: x(t), −2x(t), and y(t) = −2x(t) + 3.]

2. Change of the time variable: assume x(t) is a pulse of height 1 supported on −1 ≤ t ≤ 1.

   (a) time-scaling: x(t) =⇒ x(αt), where α > 0
       • α > 1: time-compression, e.g., x(2t) is supported on −½ ≤ t ≤ ½.
       • α < 1: time-expansion, e.g., x(t/2) is supported on −2 ≤ t ≤ 2.
       This may seem to hurt your instinct. For justification, let y(t) = x(2t), choose a few t’s, and plot y(t): y(0) = x(0) (still centered around the origin), y(½) = x(2·½) = x(1), and y(−½) = x(−2·½) = x(−1).
   (b) time-reversal (inversion): x(t) =⇒ x(−t). The right-hand side is mirrored about the origin, and vice versa.
   (c) time-shift: x(t) =⇒ x(t + β)
       • β > 0: left-shifted (advanced)
       • β < 0: right-shifted (delayed)
       Again, this may seem unbelievable. A mindful reader will not hesitate to play the same game: let y(t) = x(t − 3). Then y(3) = x(3 − 3) = x(0), y(0) = x(0 − 3) = x(−3), etc. From which, she/he convinces herself/himself.
   (d) Combined operation: x(t) =⇒ x(αt + β)
       [Method 1]:
       • Let y(t) = x(t + β), i.e., advance/delay x(t) by β to obtain y(t).
       • Let z(t) = y(αt), i.e., compress/expand y(t) by α.
       As a check, z(t) = y(αt) = x(αt + β).
       The above procedure can change its order. BUT be careful!
       [Method 2]:
       • Let v(t) = x(αt), i.e., compress/expand x(t) by α.
       • Let w(t) = v(t + β/α), INSTEAD OF w′(t) = v(t + β).
       As a check, w(t) = v(t + β/α) = x(α[t + β/α]) = x(αt + β) = z(t), while w′(t) = v(t + β) = x(α[t + β]) = x(αt + αβ) ≠ z(t)!
       Note:
       • If you have difficulty in combining w(t) and v(t), you can use more dummy variables to avoid confusion:
             v(s) = x(αs),   w(t) = v(t + β/α)
         so that
             w(t) = v(s)|_{s = t + β/α} = x(α[t + β/α]) = x(αt + β)
       • [Method 1] is preferred due to its algebraic simplicity.

   [Ex] Given x(t), plot y(t) = x(2t − 3) and z(t) = x(−2t + 3). (“subtract/add first, then divide/multiply”, a rule exactly opposite to the one used in the amplitude operations.) It is strongly recommended to check y(t)/z(t) at some t’s where x(t) changes abruptly (and at t = 0).
   • Given x(t). [Figure: x(t).]
   • Right-shift by 3: x(t − 3). [Figure: x(t − 3).]
   • Compress by 2: x(2t − 3). [Figure: x(2t − 3).]
   • (Check)
         let 2t − 3 = 0:   t = 3/2,  indeed y(3/2) = x(0)
         let 2t − 3 = −1:  t = 1,    indeed y(1) = x(−1)
         let 2t − 3 = 2:   t = 5/2,  indeed y(5/2) = x(2)
   Similarly,
   • Given x(t). [Figure: x(t).]
   • Left-shift by 3: x(t + 3). [Figure: x(t + 3).]
   • Compress by 2 and time-reverse (in either order): x(−2t + 3). [Figure: x(−2t + 3).]
   • (Check)
         let −2t + 3 = 0:   t = 3/2,  indeed z(3/2) = x(0)
         let −2t + 3 = −1:  t = 2,    indeed z(2) = x(−1)
         let −2t + 3 = 2:   t = 1/2,  indeed z(1/2) = x(2)
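[MATLAB] A small numerical check of the combined operation y(t) = x(2t − 3) and of [Method 2]. The triangular pulse used for x(t) here is only an illustrative stand-in for the sketches above.

    x = @(t) (1 - abs(t)).*(abs(t) <= 1);   % illustrative x(t): triangle on -1 <= t <= 1
    t = -1:0.01:4;

    y_direct  = x(2*t - 3);                 % direct substitution
    v = @(t) x(2*t);                        % Method 2: compress first ...
    y_method2 = v(t - 3/2);                 % ... then right-shift by |beta|/alpha = 3/2, NOT by 3

    max(abs(y_direct - y_method2))          % 0: the two constructions agree
    plot(t, y_direct)                       % nonzero only for 1 <= t <= 2, peaking at t = 3/2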
4  Signal characteristics

1. Even/Odd:
   • Even function (signal): x(t) = x(−t), ∀t; symmetric about the vertical axis.
   • Odd function (signal): x(t) = −x(−t), ∀t; anti-symmetric about the origin.
   • (Fact) Every signal x(t) can be decomposed as a sum of an even and an odd signal: x(t) = xe(t) + xo(t), where
         xe(t) ≡ Ev{x(t)} ≡ [x(t) + x(−t)]/2   and   xo(t) ≡ Od{x(t)} ≡ [x(t) − x(−t)]/2.
     [Figure: a signal x(t) and its even part xe(t) and odd part xo(t).]

2. Periodic signals: x(t) = x(t + nT), ∀t and any integer n; T is its period. The sum of several continuous-time periodic signals may not be periodic, depending on the relationship among their fundamental periods.

3. Causal signal: x(t) = 0, ∀t < 0; anticausal signal: x(t) = 0, ∀t > 0.

5  Basic Signal Models

1. Complex exponential signals:
       x(t) = C e^{st} ≡ A e^{jφ} e^{(σ+jω)t}        (s ≡ σ + jω in the Laplace transform)
       x[n] = C zⁿ ≡ A e^{jφ} (r e^{jω})ⁿ            (z ≡ r e^{jω} in the Z transform)
   (a) ω = 0 and φ = 0: x(t) = A e^{σt} is a real (non-oscillating) exponential,
       i.   σ > 0 or r > 1: exponentially growing,
       ii.  σ < 0 or r < 1: exponentially decaying,
       iii. σ = 0 or r = 1: constant.
   (b) ω ≠ 0:
       i.   σ = 0 or r = 1: x(t) = A e^{jφ} e^{jωt} = A cos(ωt + φ) + jA sin(ωt + φ), or x[n] = A e^{jφ} e^{jωn}, is a non-damping cisoid (complex sinusoid); we will see this again in Fourier transforms.
       ii.  σ > 0 or r > 1: x(t) = A e^{σt} e^{j(ωt+φ)} is an exponentially increasing (unstable) cisoid.
       iii. σ < 0 or r < 1: x(t) = A e^{σt} e^{j(ωt+φ)} is an exponentially damped (decreasing) cisoid.

   [Euler’s formula]
       e^{jωt} ≡ cos ωt + j sin ωt   and   e^{−jωt} ≡ cos ωt − j sin ωt
       cos ωt = (e^{jωt} + e^{−jωt})/2   and   sin ωt = (e^{jωt} − e^{−jωt})/(2j)

   [Figure: two real exponentially weighted sinusoids. Top: damping factor 0.5, frequency 2.7π; bottom: smaller damping factor −0.3, larger frequency 4.2π. MATLAB:
       t=-5:0.01:5; x=exp(0.5*t).*cos(2.7*pi*t); subplot(2,1,1); plot(t,x)
       y=exp(-0.3*t).*cos(4.2*pi*t); subplot(2,1,2); plot(t,y) ]

   • In the left half of the s-plane (σ < 0), e^{st} → 0 as t → ∞; the magnitude |σ| controls the envelope (damping factor): as |σ| increases, the envelope decays faster.
   • Along the jω-axis, ω controls the angular frequency of the rotating phasor.

2. Rectangular pulse ⊓(t/τ): height 1 for −τ/2 ≤ t ≤ τ/2, and 0 elsewhere.

3. Triangular pulse Λ(t/τ): height 1 at t = 0, decreasing linearly to 0 at t = ±τ.

4. Unit step function:
       u(t) = ∫_{−∞}^{t} δ(λ) dλ = { 1, t ≥ 0;  0, t < 0 }

5. Unit ramp function:
       r(t) = ∫_{−∞}^{t} u(λ) dλ = t u(t)

   [Ex] t u(t) − 2(t − 1) u(t − 1) + (t − 2) u(t − 2) is a triangular pulse that rises from 0 at t = 0 to 1 at t = 1 and falls back to 0 at t = 2.

6  Impulse function

6.1  Unit impulse (Kronecker delta function) in discrete time

       δ[n] ≡ { 1, n = 0;  0, n ≠ 0 }

1. Unit step function:
       u[n] ≡ { 1, n ≥ 0;  0, n < 0 }
   • difference equation: δ[n] = u[n] − u[n − 1]   (similar to differentiation in CT)
   • running sum: u[n] = Σ_{m=−∞}^{n} δ[m]         (similar to integration in CT)

2. Sampling property:
       x[n] δ[n] = x[0] δ[n]
       x[n] δ[n − n₀] = x[n₀] δ[n − n₀]

3. Sifting property:
       x[n] = Σ_k x[k] δ[n − k]   (the convolution of x[n] and δ[n])
            = · · · + x[−2] δ[n + 2] + x[−1] δ[n + 1] + x[0] δ[n] + x[1] δ[n − 1] + · · ·
   Any sequence can be expressed as a sum of scaled/shifted impulses.

6.2  Unit impulse function (Dirac delta function) δ(t) in continuous time

       δ(t) = lim_{ε→0} (1/ε) ⊓(t/ε) = (d/dt) u(t),
a pulse with unit area and zero width, located at t = 0. [Figure: a rectangular pulse of width ε and height 1/ε (area = 1); as ε → 0 it approaches δ(t), drawn as an arrow at t = 0.]
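[MATLAB] A numerical look at the narrow-pulse definition above: as ε shrinks, (1/ε)⊓(t/ε) keeps unit area, and its integral against a smooth signal approaches the value of that signal at t = 0. The grid spacing and the test signal are illustrative choices.

    dt = 1e-5;  t = -1:dt:1;
    x  = cos(2*t) + 0.5*t.^2;                    % any smooth test signal, x(0) = 1
    for ep = [0.5 0.1 0.01]
        d_ep = (abs(t) <= ep/2)/ep;              % (1/eps)*rect(t/eps): height 1/eps, width eps
        area = sum(d_ep)*dt;                     % stays close to 1 for every eps
        sift = sum(x.*d_ep)*dt;                  % approaches x(0) = 1 as eps -> 0
        fprintf('eps=%5.2f  area=%.4f  integral of x*delta_eps=%.4f\n', ep, area, sift);
    end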
Properties of δ(t):

1. Sifting:
       ∫_{−∞}^{∞} x(t) δ(t − t₀) dt = x(t₀)
   [Figure: x(t) and the impulse δ(t − t₀); their product is x(t₀) δ(t − t₀).]
   Since x(t) δ(t − t₀) ≡ x(t₀) δ(t − t₀),
       ∫ x(t) δ(t − t₀) dt = ∫ x(t₀) δ(t − t₀) dt = x(t₀) ∫ δ(t − t₀) dt = x(t₀).
   Example: ∫_{−∞}^{∞} e^{−2t} δ(t − 3) dt = e^{−6}.

2. Convolution of x(t) with δ(t − t₀) is x(t − t₀), which is useful in proving the sampling theorem and modulation/demodulation.
   [Figure: x(t) ∗ δ(t − t₀) = x(t − t₀).]
   Pf. x(t) ∗ δ(t − t₀) = ∫ x(τ) δ(t − t₀ − τ) dτ = ∫ x(τ) δ(τ − (t − t₀)) dτ = x(t − t₀).
   Question: What is x(t) ∗ [δ(t − t₀) + δ(t + t₀)] ∗ [δ(t − t₀) + δ(t + t₀)]? (demodulation)

3. Other properties:
   • x(t) δ(t) = x(0) δ(t).
   • x(t) δ(t − t₀) = x(t₀) δ(t − t₀).
   • δ(t) = (d/dt) u(t).
   • ∫_{−∞}^{t} δ(τ) dτ = u(t).
   • δ(at) = (1/|a|) δ(t).
   • δ(−t) = δ(t).
   • ∫_{−∞}^{∞} δ(τ) δ(t − τ) dτ = δ(t).
   • ∫_{0}^{4} 3 δ(t − 2) dt = 3.
   • ∫_{2}^{4} δ(t − 6) dt = 0.

7  Systems

Systems process an input x(t) to produce an output y(t): cause → system → effect.

1. Physical models: e.g., an electric circuit.

2. Block diagrams:
   • continuous-time: using integrators, adders, and gains (and multipliers)
   • discrete-time: using delays, adders, and gains (and multipliers)

3. Mathematical system equations:
   • integro-differential equation: y(t) = dx(t)/dt + ∫_{−∞}^{t} [4x(τ) − y(τ)] dτ
   • difference equation: y[n] = a y[n − 1] + b x[n]

4. Interconnection of systems: (a) series (cascade), (b) parallel, (c) feedback.
   [Figure: block diagrams of the series, parallel, and feedback interconnections of subsystems S1, S2, . . .]
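[MATLAB] A small sketch of the difference equation above, y[n] = a y[n − 1] + b x[n], realized once with an explicit loop (one delay, two gains, one adder) and once with MATLAB's filter. The coefficient values, the test input, and the zero initial condition are illustrative choices.

    a = 0.5;  b = 2;
    x = [1 zeros(1, 9)];              % a unit impulse as test input
    N = length(x);

    y = zeros(1, N);                  % loop form of the block diagram
    y(1) = b*x(1);                    % assumes y[-1] = 0 (zero initial condition)
    for n = 2:N
        y(n) = a*y(n-1) + b*x(n);
    end

    y2 = filter(b, [1 -a], x);        % same system via filter(): y[n] - a*y[n-1] = b*x[n]
    max(abs(y - y2))                  % 0: the two realizations agree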
8  Classification of Systems

1. Continuous-time or discrete-time,

2. Analog or digital,

3. Memoryless: the output of a memoryless system at time t₀ depends only on its input at the same time instant t₀. Memory is associated with storage of energy in physical systems and with storage registers in digital computers. Delay, capacitor, and inductor elements are not allowed in a memoryless system. Thus a resistive voltage divider is a memoryless system, while an RC lowpass circuit has memory.
   [Ex] y[n] = x²[n] + x[n] and y(t) = 10x(t) are memoryless. A voltage divider, vo(t) = R₂/(R₁ + R₂) · vi(t), is also memoryless.
   [Ex] y(t) = ∫_{−∞}^{t} x(τ) dτ, y[n] = x[n − 2], and y[n] = (x[n − 1] + x[n] + x[n + 1])/3 have memory. An electric example is a current source i(t) connected to a capacitor; the voltage across the capacitor is v(t) = (1/C) ∫_{−∞}^{t} i(τ) dτ.

4. Invertible: A system is invertible if the cascade of the system with its inverse system yields an output equal to the input of the first system:
       x[n] → system → y[n] → inverse system → w[n] ≡ x[n]
   Examples:
   (a) System: y(t) = 2x(3t); its inverse system: w(t) = ½ y(t/3).
   (b) A running-sum system: y[n] = Σ_{k=−∞}^{n} x[k]; its inverse system satisfies a difference equation: w[n] = y[n] − y[n − 1].
   Inverse processing is important in equalization, lossless coding, etc.

5. Causal (nonanticipative) and noncausal: a causal system’s output y(t₀) can depend only on the present and past inputs {x(τ), τ ≤ t₀}, i.e., the system cannot anticipate future inputs.
   [Figure: an input x(t) starting at t = 0; a causal output y₁(t) that starts no earlier than the input, and a noncausal output y₂(t) that starts before the input does.]
   Later, we will show that, for a causal LTI system, the impulse response satisfies h(t) = 0, ∀t < 0.
   [Ex] y[n] = x[n] − x[n + 1] and y(t) = x(t + 1) are noncausal, because the output y(2) at the present time t = 2 depends on the future input x(3) at t = 3.
   • A memoryless system is also causal.
   • All physical systems with time as the independent variable are causal. There are, however, practical systems that are not causal:
     – physical systems for which time is not the independent variable, e.g., when the independent variables are (x, y) as in an image; for instance, y[n] = (x[n − 1] + x[n] + x[n + 1])/3 takes the average of three neighboring data points;
     – processing of signals that is not in real time, e.g., when the signal has been recorded or generated in a computer.

6. Stable: A system is stable in the sense of bounded-input, bounded-output (BIBO) if the output y(t) is bounded for every bounded input x(t): if |x(t)| ≤ B₁, ∀t, then there exists B₂ < ∞ such that |y(t)| ≤ B₂, ∀t. For a stable linear time-invariant system, the impulse response must be absolutely integrable: ∫_{−∞}^{∞} |h(t)| dt < ∞ (to be shown later).

7. LINEAR: a linear combination of inputs leads to the same linear combination of the corresponding outputs (superposition):
       if x₁(t) → y₁(t) and x₂(t) → y₂(t), then a x₁(t) + b x₂(t) → a y₁(t) + b y₂(t).
   Suppose y₁(t) = H[x₁(t)] and y₂(t) = H[x₂(t)]. If the output due to the input a₁x₁(t) + a₂x₂(t) equals a₁y₁(t) + a₂y₂(t), the system H is linear; i.e., check whether
       H[a₁x₁(t) + a₂x₂(t)] =? a₁y₁(t) + a₂y₂(t).
   • Linearity does not allow terms such as x²(t) or |x(t)|.

8. TIME-INVARIANT (TI): a delayed input leads to a correspondingly delayed output:
       if x(t) → y(t), then x(t − τ) → y(t − τ), for all x(t) and τ.
   We need to verify whether
       H[x(t − τ)] =? {H[x(t)]}|_{t→t−τ} = y(t − τ), ∀τ.
   (a) A system is time-invariant if its system parameters are fixed over time. An RC lowpass filter with constant resistance R and capacitance C is time-invariant:
           RC (d/dt) y(t) + y(t) = x(t).
       [Figure: an RC lowpass circuit with input x(t) across the source and output y(t) across the capacitor.]
       If the resistance R(t) changes with time, the system becomes time-varying,
           R(t) C (d/dt) y(t) + y(t) = x(t),
       because its system parameters are not constant over time.
   (b) Time-invariance does not allow terms such as x(2t), x(−t), or t x(t).
   (c) Ex. A compressor y[n] = x[Mn] is not TI because y[n − n₀] = x[M(n − n₀)] ≠ x[Mn − n₀]. Be careful to distinguish x[M(n − n₀)] from x[Mn − n₀].

9  LTI systems

1. Many man-made and naturally occurring systems can be modeled as linear time-invariant (LTI) systems. Ex. resistors (R), inductors (L), capacitors (C).

2. A fixed-coefficient linear differential equation (made up from RLC components) is LTI. (Continuous-time)

3. A fixed-coefficient linear difference equation (delays, gains, adders) is LTI. (Discrete-time)

4. (Ex) Is the following system linear time-invariant?
       y(t) = x(t) g(t),
   where x(t) and y(t) denote the input and output, respectively.
   • Linear: yes. Let y₁(t) ≡ x₁(t)g(t) and y₂(t) ≡ x₂(t)g(t). Suppose x(t) = a₁x₁(t) + a₂x₂(t); then
         H[x(t)] = x(t)g(t) = a₁x₁(t)g(t) + a₂x₂(t)g(t) = a₁y₁(t) + a₂y₂(t).
   • Time-varying. Let y(t) = x(t)g(t) and x_d(t) = x(t − τ); then
         H[x_d(t)] = x(t − τ)g(t) ≠ y(t − τ) = x(t − τ)g(t − τ).
   • If g(t) = cos ω_c t, we call it linear modulation.

9.1  Examples

1. (d/dt) y(t) + 10 y(t) = x(t) is a causal LTI system. y[n] = 0.9 y[n − 1] + x[n] is causal LTI, too.

2. In general,
       dⁿy(t)/dtⁿ + a_{n−1} d^{n−1}y(t)/dt^{n−1} + · · · + a₀ y(t) = b_m dᵐx(t)/dtᵐ + · · · + b₁ dx(t)/dt + b₀ x(t)
   is a linear system.

3. y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ is an LTI system; it reads as “y(t) is the convolution of x(t) and h(t).”
   [Pf] The proof of linearity is omitted. For time-invariance, let x̂(t) = x(t − λ); then the response to x̂(t) is
       ŷ(t) = ∫_τ x̂(τ) h(t − τ) dτ
            = ∫_τ x(τ − λ) h(t − τ) dτ        (let s ≡ τ − λ; sorry for so many dummy variables)
            = ∫_s x(s) h(t − λ − s) ds
            = ∫_τ x(τ) h(t − λ − τ) dτ = y(t − λ),   from the definition of y(t).

4. (a) y(t) = ∫_{−∞}^{∞} x(t − τ) h(τ) dτ is LTI, too. (commutative law for convolution)
   (b) y[n] = Σ_{m=−∞}^{∞} x[m] h[n − m] = Σ_{m=−∞}^{∞} x[n − m] h[m] is LTI. (discrete-time convolution)
   (c) y(t) = ∫_{−∞}^{t} x(τ) h(t − τ) dτ is a causal LTI system; y[n] = Σ_{m=−∞}^{n} x[m] h[n − m] is a causal LTI system.
   (d) y(t) = ∫_{−∞}^{t} x(t − τ) h(τ) dτ is a noncausal, linear, time-varying system.
       [Pf] Say h(t) = 1; then y(t) = ∫_{−∞}^{t} x(t − τ) dτ, and we can see that y(0) = ∫_{−∞}^{0} x(−τ) dτ = ∫_{0}^{∞} x(u) du, which means that y(0) is the integral of the future input from x(0+) up to x(∞). Noncausal!

5. (a) (d/dt) y(t) + t y(t) = x(t) is a linear system.
       If (d/dt) y₁(t) + t y₁(t) = x₁(t) and (d/dt) y₂(t) + t y₂(t) = x₂(t), then
           α₁ (d/dt) y₁(t) + α₂ (d/dt) y₂(t) + t[α₁ y₁(t) + α₂ y₂(t)] = α₁ x₁(t) + α₂ x₂(t)
           (d/dt)[α₁ y₁(t) + α₂ y₂(t)] + t[α₁ y₁(t) + α₂ y₂(t)] = α₁ x₁(t) + α₂ x₂(t)
       Thus the response to the input α₁ x₁(t) + α₂ x₂(t) is α₁ y₁(t) + α₂ y₂(t).
   (b) (d/dt) y(t) + 10 y(t) + 5 = x(t) is a nonlinear system.
   (c) y(t) = x(t²) is linear, noncausal, and time-varying. y(2) depends on x(4); thus noncausal. Let x̂(t) = x(t − τ); then ŷ(t) = x̂(t²) = x(t² − τ) ≠ y(t − τ) = x((t − τ)²); thus time-varying.
   (d) y(t) = sin[x(t)] is time-invariant.
   (e) y(t) = x(2t) is NOT time-invariant.
       [Pf] Suppose x(t) =⇒ y(t) = x(2t). Let x̂(t) = x(t − τ); then the output due to x̂(t) is
           ŷ(t) = x̂(2t) = x̂(s)|_{s=2t} = x(s − τ)|_{s=2t} = x(2t − τ).
       Compare with the delayed output y(t − τ):
           y(t − τ) = y(s)|_{s=t−τ} = x(2s)|_{s=t−τ} = x(2(t − τ)) = x(2t − 2τ).
       Since ŷ(t) ≠ y(t − τ), the system is NOT TI.
   (f) n y[n] + a₁ y[n − 1] = x[n] is linear and time-varying (because of the coefficient n multiplying y[n]).
   (g) y[n] + 0.8 y[n − 1] + 5 = x[n] is a nonlinear system (because of the constant 5).
   (h) y[n] = x[n²] is linear, noncausal, and time-varying.
   (i) y[n] = sin(x[n]) is time-invariant.
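[MATLAB] A small discrete-time check that the convolution systems of Examples 3 and 4(b) are linear and time-invariant, using MATLAB's conv. The particular h and input sequences are illustrative choices.

    h  = [1 0.5 0.25];                     % impulse response
    x1 = [1 2 0 -1];  x2 = [0 1 1 1];

    % Linearity: the response to 2*x1 + 3*x2 equals 2*y1 + 3*y2
    y1 = conv(x1, h);  y2 = conv(x2, h);
    lhs = conv(2*x1 + 3*x2, h);
    max(abs(lhs - (2*y1 + 3*y2)))          % 0

    % Time-invariance: delaying the input by 2 samples delays the output by 2 samples
    xd = [0 0 x1];                         % x1[n-2]
    yd = conv(xd, h);
    max(abs(yd - [0 0 y1]))                % 0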
9.2  Questions

1. If some system x(t) → y(t) is linear, and another system y(t) → z(t) is linear too, is the cascaded system x(t) → z(t) linear?

2. If some system x(t) → y(t) is linear, and another system x(t) → z(t) is linear too, is the parallel system x(t) → w(t) = y(t) + z(t) linear?

3. Redo the above two questions when the systems are time-invariant.

4. Is the cascade connection of two nonlinear systems necessarily nonlinear?

5. (Oppenheim et al.) Consider three systems with the following input/output relationships:
       system 1:  y[n] = { x[n/2], n even;  0, n odd }
       system 2:  y[n] = x[n] + (1/2) x[n − 1] + (1/4) x[n − 2]
       system 3:  y[n] = x[2n]
   Suppose these systems are cascaded in series. Find the input/output relationship of the overall cascaded system. Is it linear? Is it time-invariant? (A MATLAB sketch for experimenting with this cascade is given below.)

6. Can you generalize the definitions of linearity and time-invariance for a system processing 3-D signals with 2 independent variables, s(t₁, t₂), where s = [s₁, s₂, s₃]?
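[MATLAB] A sketch for experimenting with the cascade in Question 5 (referenced above). The test input, its finite length, and the zero samples assumed outside it are illustrative choices, so interpret edge effects with care; try several inputs and shifted inputs to guess the overall relationship before proving it.

    x = [1 2 3 4];                 % test input x[0..3]
    N = length(x);

    % System 1 (expander): y1[n] = x[n/2] for even n, 0 for odd n
    y1 = zeros(1, 2*N);
    y1(1:2:end) = x;               % MATLAB index 1 corresponds to n = 0

    % System 2 (FIR filter): y2[n] = y1[n] + (1/2) y1[n-1] + (1/4) y1[n-2]
    y2 = filter([1 1/2 1/4], 1, y1);

    % System 3 (compressor): y3[n] = y2[2n]
    y3 = y2(1:2:end);

    disp(y3)                       % compare with x to guess the overall input/output relationship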