Ministry of Higher Education
High Institute of Engineering & Technology, Kafrelsheikh
Dept. of Electrical Engineering
Electrical Communications, EEE 2205, 2nd Year
Prof. Sami A. El-Dolil

Contents

Preface
Chapter 1  Introduction
Chapter 2  Communication Systems
  2.1 Communication System Elements
  2.2 Communication System Categories
  2.3 Overview of Current Communication Systems
  2.4 Communication Channel Properties
    2.4-1 Wired Communication Channel
    2.4-2 Wireless Communication Channel
    2.4-3 Communication Channel Impairments
Chapter 3  Analogue Communication
  3.1 Amplitude Modulation, AM
    3.1-1 Full AM (DSBWC) Signal
    3.1-2 Double-Sideband Suppressed Carrier
    3.1-3 Single-Sideband Suppressed Carrier
  3.2 Angle Modulation
    3.2-1 Frequency Modulation
    3.2-2 Phase Modulation
  3.3 Analogue Pulse Modulation
    3.3-1 Pulse Amplitude Modulation, PAM
    3.3-2 Pulse Width Modulation, PWM
    3.3-3 Pulse Position Modulation, PPM
  3.4 Digital Pulse Modulation
    3.4-1 Pulse Code Modulation, PCM
    3.4-2 Differential Pulse Code Modulation, DPCM
    3.4-3 Delta Modulation, DM
    3.4-4 Adaptive Delta Modulation, ADM
    3.4-5 Nonlinear Quantization (Companded) PCM
Chapter 4  Noise Analysis
  4.1 Additive White Gaussian Noise Channel
  4.2 Thermal Noise
  4.3 Active Device Noise
    4.3-1 Noise in Transistor Amplifiers
  4.4 Signal-to-Noise Ratio, SNR
  4.5 Noise Figure
  4.6 Receiver Sensitivity
  4.7 Equivalent Noise Temperature
  4.8 Performance of Analogue Communication Systems in the Presence of Noise
    4.8-1 SNR Characteristics of AM Demodulators
    4.8-2 SNR Characteristics of DSB-SC Demodulators
    4.8-3 SNR Characteristics of SSB-SC Demodulators
    4.8-4 SNR Characteristics of FM Signals
Chapter 5  Multiplexing Techniques
  5.1 Analogue Multiplexing Technique
  5.2 Digital Multiplexing Technique

Chapter 1
Introduction

Telecommunication is defined as the transmission of signals over distances greater than the hearing and visual range, as in radio or television.
It is one of the fastest-growing business sectors of modern information technology and deals with conveying information by means of electrical signals; hence another term we often hear is electrical communication. Both terms are concerned with the transport and delivery of information, or the carrying of intelligence. Transmission may be defined as the electrical transfer of a signal, message, or other form of intelligence from one location to another. Traditionally, transmission has been one of the two major disciplines of telecommunication; switching is the other principal specialty. Switching establishes a connection from user X to some distant user Y. Simplistically, we can say that transmission is responsible for the transport of the signal from user X to user Y. In the old days of telephony these disciplines were separate, with a strong demarcation between one and the other. Not so today; the demarcation line is fast disappearing. For example, under normal circumstances in the public switched telecommunications network (PSTN), a switch provides network timing that is vital for digital transmission. What we have been dealing with so far is baseband transmission. This is the transmission of a raw electrical signal, similar to the alternating current derived from the mouthpiece of a telephone handset. Baseband transmission can have severe distance limitations: the signal can be transmitted only so far before being corrupted one way or another. For example, a voice signal transmitted from a standard telephone set over a fairly heavy copper wire pair may reach a distant earpiece some 30 km away before losing all intelligibility, because the signal strength becomes so low that it is inaudible. To overcome this distance limitation, we may turn to carrier or radio transmission. Both transmission types involve the generation and conditioning of a radio signal.
Carrier transmission usually (though not always) implies the use of a conductive medium such as a wire pair, coaxial cable, or fiber-optic cable to carry a radio- or light-derived signal. Radio transmission always implies radiation of the signal in the form of an electromagnetic wave. We listen to the radio or watch television; these are received and displayed or heard as a result of the reception of radio signals. Figure 1.1 illustrates the basic concepts of a transmission system.

Figure 1.1 Basic concepts of a transmission system

At the transmitting side of a telecommunication link a radio carrier is generated. The carrier is characterized as a high-frequency signal; by itself this radio-frequency signal carries no useful information for the user. Useful information may include voice, data, or images. In industrialized nations, the telephone is accepted as a way of life. It is connected to the PSTN for local, national, and international voice communications. These same telephone connections may also carry data and image information (e.g., television).

Chapter 2
Communication Systems

The concepts and theory of communication systems are needed in almost all electrical engineering fields and in many other engineering and scientific disciplines as well. This chapter introduces a brief description and representation of signals and systems and their classifications. We begin the journey into the exciting field of communications by studying the basic building blocks of a communication system. We will study the various types of communication and how the electrical signal is impaired as it travels through the transmission medium. With the advances in digital electronics, digital communication systems are slowly replacing analog systems. The differences between analog communication and digital communication will be discussed. A system is defined as any algorithm or device that takes a signal as input and produces a signal at its output.
So, systems may be defined by rules relating input signals to output signals.

2.1 Communication System Elements

A communication system is, simply, any system in which information is transmitted from one physical location, A, to a second physical location, B, as shown in figure 2.1. The communication system incorporates many special intermediate signal-processing steps. A very simple communication system is made up of three parts, as shown in figure 2.1. First, at the transmitting side, point A, there is a source that generates the data and a transducer that converts the data into an electrical signal. The transmitter's function is to perform some processing on the information to make it suitable for transmission over the channel. Next is the channel, which is the transmission medium that the information travels through in going from point A to point B. An example of a channel is copper wire (wired communication system) or the atmosphere (wireless communication system).

Figure 2.1 Basic communication system.

Finally, there is the receiver, the end part of the communication system that sits at point B. Its function is to perform the inverse of the transmitter functions to restore the original baseband information that the transmitter sends over the channel. A transducer again is used to convert the electrical signal into data, which is given to the destination (sink). For example, if two people want to talk to each other using this system, the transducer at the transmitter is the microphone that converts the sound waves into equivalent electrical signals. At the receiving end, the speaker is the transducer that converts the electrical signal into acoustic waves. Similarly, if video is to be transmitted, the transducers required are a video camera at the transmitting side and a monitor at the receiving side.
Hence, depending on the type of communication, the distance to be covered, etc., a communication system will consist of a number of elements, each element carrying out a specific function. Some important elements are:

Modulator: uses a high-frequency carrier signal to carry the baseband signal from the transmitter side to the receiver side, at which the baseband signal is restored again using a demodulator.

Multiplexer: combines the signals from different sources for transmission on the channel. At the receiving end, a de-multiplexer is used to separate the signals.

Multiple access: when two or more users share the same channel, each user has to transmit his signal only in a specific frequency band, at a specified time, or using a different code.

Error detection and correction: if the channel is noisy, the received data will have errors. Detection, and possible correction, of the errors has to be done at the receiving end. This is done through a mechanism called channel coding.

Source coding: if the channel has a lower bandwidth than the input signal bandwidth, the input signal has to be processed to reduce its bandwidth so that it can be accommodated on the channel.

Switching: if a large number of users have to be provided with communication facilities, as in a telephone network, the users are to be connected based on the numbers dialed. This is done through a mechanism called switching.

Signaling: in a telephone network, when you dial a particular telephone number, you are telling the network whom you want to call. This is called signaling information. The telephone switch (or exchange) will process the signaling information to carry out the necessary operations for connecting the calling party to the called party.

2.2 Communication System Categories

Communication systems may be classified based on the users' requirements; they can be of different types.
2.2-1 According to the direction of transmission, communication systems may be classified, as shown in figure 2.2, into:

■ Simplex communication system
Simplex communication is possible in one direction only. There is one sender and one (or many) receiver(s); no reply channel is provided, and the sender and receiver cannot change roles. Radio and television broadcasting are simplex communication systems.

■ Half-duplex communication system
Half-duplex communication is possible in both directions, but only one at a time, because a single frequency is used for the two directions. Intercom and fax systems use this approach. Both stations at A and B are transceivers. The handset at each location can be switched between transmitting and receiving modes.

Figure 2.2 Communication systems: (a) simplex, (b) half-duplex, (c) full-duplex.

The person who wants to talk presses a talk button on his handset to start talking, and the other person's handset will be in receiving mode. When the sender finishes, he terminates with an "over" message; the other person can then press the talk button and start talking. These types of systems require limited channel bandwidth, so they are low-cost systems.

■ Full-duplex communication system
In a full-duplex communication system, the two parties (the caller and the called) can communicate simultaneously, as in a telephone system (PSTN). This is because two frequencies are used, one for each direction. Both stations at A and B are transceivers.
2.2-2 Based on the number of communicating transceivers, any communication system can be one of the types given in figure 2.3:

Figure 2.3 Different communication systems: (a) point-to-point, (b) point-to-multipoint, (c) multipoint-to-multipoint.

■ Point-to-point communication
In this type, communication takes place between two end points. For instance, in the case of voice communication using telephones, there is one calling party and one called party; hence the communication is point-to-point. It is a bi-directional communication system.

■ Point-to-multipoint communication
In this type of communication, there is one sender and multiple recipients. For example, in voice conferencing, one person will be talking but many others can listen. The message from the sender has to be multicast to many others. It is a simplex communication system.

Broadcasting: in a radio or TV broadcasting system, there is a central location from which information is sent to many recipients, as in the case of audio or video broadcasting. In a broadcasting system, the listeners are passive, and there is no reverse communication path.

■ Multipoint-to-multipoint communication
In this type of communication, there are multiple senders and multiple recipients. The Internet is a global network of computers and has become the most popular multipoint-to-multipoint communication system.

2.2-3 Depending on the type of the processed signal, analog (continuous) or digital (discrete), communication systems can be broadly divided into analog communication systems and digital communication systems.

■ Analog communication system
This is the communication system used to transfer an analog information signal from the transmitter to the receiver, as shown in figure 3.1.
The electrical signal output from a transducer such as a microphone or a video camera, to be transmitted over the communication medium, is an analog signal, that is, a signal whose amplitude varies continuously with time. Transmitting an analog signal to the receiving end results in analog transmission. However, at the receiving end, reproducing the analog signal is very difficult due to transmission impairments. It has to be ensured that the signal does not get distorted at all by transmission impairments, which is very difficult, as analog communication systems are badly affected by noise.

Analog signal: it occurs in a continuous fashion over an interval of time or space. At some point, digital processing may take place; today this is almost always necessary, perhaps because the application requires superior noise immunity. A continuous signal can be represented as a function of a time variable t; thus x(t) is the value of signal x at time t. An example of an analog signal is the speech signal, where the amplitude (strength) changes with time, represented by a Sine (Cosine) waveform. A complex-valued analog signal is represented as

x(t) = xr(t) + j xj(t),

where xr(t) = Cos(t) is the real part of x(t) and xj(t) = Sin(t) is the imaginary part of x(t). Both Sin(t) and Cos(t) are differentiable:

d/dt Sin(t) = Cos(t)  and  d/dt Cos(t) = −Sin(t)

Vm(t) = Em Cos(2π fm t)   (sinusoidal analog signal)

From this it follows that both have derivatives of all orders and have Taylor series expansions about the origin:

Sin(t) = t − t³/3! + t⁵/5! − t⁷/7! + …   (2.1-a)
Cos(t) = 1 − t²/2! + t⁴/4! − t⁶/6! + …   (2.1-b)

Sine and Cosine signals are periodic with period 2π: Sin(t + 2π) = Sin(t) and Cos(t + 2π) = Cos(t) for all t.
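The Taylor expansions (2.1-a) and (2.1-b) can be checked numerically; the following is a minimal sketch (the function names and the choice of ten terms are ours, for illustration only):

```python
import math

def sin_taylor(t, n_terms=10):
    """Partial sum of series (2.1-a): t - t^3/3! + t^5/5! - ..."""
    return sum((-1)**k * t**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n_terms))

def cos_taylor(t, n_terms=10):
    """Partial sum of series (2.1-b): 1 - t^2/2! + t^4/4! - ..."""
    return sum((-1)**k * t**(2*k) / math.factorial(2*k)
               for k in range(n_terms))

t = 1.2
print(sin_taylor(t), math.sin(t))   # the two values agree closely
print(cos_taylor(t), math.cos(t))
```

Near the origin, even a few terms reproduce the library Sin and Cos values to high accuracy, which is exactly what "derivatives of all orders" buys us.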
■ Digital communication system
Figure 2.4 shows the block diagram of the digital communication system, where the analog signal is converted into digital format using one of the analog-to-digital conversion techniques, as shown in figure 2.5, and then sent through the medium. The output of a computer is a digital signal.

Figure 2.4 Digital communication system
Figure 2.5 A/D and D/A conversion

Digital signal: at the transmitting side, digital signals come from sampling an analog signal. In the sampling process, a train of square pulses with frequency fs is used as a clock to drive a switch (transistor) working as a sampler, which samples the analog baseband signal of frequency fm (with fs ≥ 2fm) into samples (PAM). Figure 2.4 illustrates the typical functional elements of a digital communication system. The information source generates particular symbols at a particular rate. The source encoder translates these symbols into sequences of 0's and 1's. The inverse process takes place at the receiver side. Digital transmission is much more advantageous than analog transmission because digital systems are comparatively immune to noise and give better performance than analog systems in noisy conditions. Due to advances in digital electronics, digital systems have become cheaper as well.

The advantages of digital systems are:
- More reliable transmission, because only discrimination between ones and zeros is required.
- Less costly implementation, because of the advances in digital logic chips.
- Ease of multiplexing various types of signals (voice, video, etc.).
- Ease of developing secure communication systems.

The disadvantages of digital systems are:
- Increased bandwidth.
- Need for time synchronization.
- Incompatibilities with analog facilities.

Though a large number of analog communication systems are still in use, the old analog systems are being replaced by digital systems.
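The sampling step described above (a clock at fs picking values off a baseband tone at fm, with fs ≥ 2fm) can be sketched as follows; the 1 kHz tone and 8 kHz clock are illustrative choices of ours, not values from the text:

```python
import math

def sample(fm, fs, duration, amplitude=1.0):
    """Ideal sampling of a baseband sinusoid Em*Cos(2*pi*fm*t) at rate fs (PAM)."""
    n = int(duration * fs)                      # number of clock ticks in the window
    return [amplitude * math.cos(2 * math.pi * fm * k / fs) for k in range(n)]

fm = 1000.0   # 1 kHz baseband tone (illustrative)
fs = 8000.0   # 8 kHz sampling clock; satisfies fs >= 2*fm
samples = sample(fm, fs, duration=0.001)        # 1 ms window -> 8 samples
print(len(samples))
```

Each element of `samples` is one PAM amplitude; a quantizer and encoder would then turn these into the 0's and 1's of a PCM stream.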
All the newly developed communication systems are digital systems. Only in broadcasting applications is analog communication still used extensively.

2.2-4 According to its characteristics, a system may be linear or nonlinear.

■ Linear systems
A linear system is a system whose output varies in proportion to the variation of its input. Linear system properties:

i- When the input signal to the system is made larger (amplified) or smaller (attenuated) in amplitude, the output signal the system produces is proportionally larger (or smaller). This is the scaling property of linearity: the output is a signal that is amplified or attenuated by the same amount. Let y = H(x), and let A be a scalar (real or complex number). The system H is linear if it obeys the scaling property:

H(Ax) = A H(x)   (2.2)

ii- Furthermore, if two signals are added together before input, then the result is just the sum of the outputs that would occur if each input component were passed through the system independently. This is the superposition property. Let y = H(x). The system H is linear if it obeys the superposition property:

H(x + y) = H(x) + H(y)   (2.3)

So, linear systems can be defined as those systems for which the superposition principle holds.

■ Non-linear systems
All communication receivers, transmitters, and transmission media contain some degree of non-linearity, which can cause a change in the frequencies of the input signals and/or a change in the network gain. For these reasons the network non-linearity needs to be clearly explained and considered during the design phase. When an input information signal is transmitted through a non-linear transmission medium or communication system, the output signal is subject to non-linear distortion, which is characterized by the following:

1- The output signal is no longer directly proportional to the input signal.
2- The output signal contains frequency components not present in the input signal.
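Properties (2.2) and (2.3) are easy to test numerically. The sketch below uses two toy systems of our own choosing, a pure gain (linear) and a square-law device (non-linear), and checks superposition on short signal vectors:

```python
def H_linear(x):
    """A simple linear system: a gain of 3 (obeys (2.2) and (2.3))."""
    return [3.0 * v for v in x]

def H_nonlinear(x):
    """A square-law system y = x^2: violates superposition."""
    return [v * v for v in x]

x1 = [1.0, 2.0]
x2 = [0.5, -1.0]
summed = [a + b for a, b in zip(x1, x2)]

# Superposition holds for the linear system...
lhs = H_linear(summed)
rhs = [a + b for a, b in zip(H_linear(x1), H_linear(x2))]
print(lhs == rhs)      # True

# ...but fails for the square-law system.
lhs_n = H_nonlinear(summed)
rhs_n = [a + b for a, b in zip(H_nonlinear(x1), H_nonlinear(x2))]
print(lhs_n == rhs_n)  # False
```

The failing case is precisely the b·x² term that generates the new frequency components derived next.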
To demonstrate how new frequency components are created, let us take the signal

x(t) = E Cos ωt   (2.4)

as input to a non-linear system with output characteristics described by the relation

y(t) = a x(t) + b x²(t) + …

so,

y(t) = a E Cos ωt + b E² Cos² ωt + …
     = a E Cos ωt + (b E²/2)[1 + Cos 2ωt] + …
     = (b E²/2) + a E Cos ωt + (b E²/2) Cos 2ωt + …   (2.5)

The first term is a dc term, not an input term. The third term has a frequency double the input signal frequency, and is not an input term. The second term is the input term.

2.3 Overview of Current Communication Systems

Telecommunication means communication over large distances. An overall telecommunications network (PSTN) consists of local networks interconnected by one or more long-distance networks. The concept is illustrated in figure 2.6-a. The PSTN is open to public correspondence. It is usually regulated by a government authority or may be a government monopoly, although there is a notable trend toward privatization. End-users, as the term tells us, provide the inputs to the network and are recipients of network outputs. End-users usually connect to nodes, where lines and trunks meet; a node usually carries out a switching function. In the case of a local area network (LAN), a network interface unit is used, through which one or more end-users may be connected.

Figure 2.6-a The PSTN consists of local networks interconnected by a long-distance network.

Navigation systems, for example, pass signals between a transmitter and a receiver in order to determine the location of a vehicle, or to guide and control its movement. Signaling systems for tracked vehicles, such as trains, are also simple communication systems.

Mobile communication systems: most wireless communication systems are designed to cover as much area as possible. These systems typically operate at maximum power and with the tallest antennas.
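The expansion in (2.5) can be verified by computing the spectrum of a sampled cosine passed through y = a·x + b·x². The sketch below uses a small hand-rolled DFT and arbitrary illustrative values for N, E, a, and b; with E = 1, a = 2, b = 0.5, equation (2.5) predicts a dc term bE²/2 = 0.25, a fundamental aE = 2, and a second harmonic bE²/2 = 0.25:

```python
import cmath
import math

N = 64                       # samples per analysis window (one input cycle at bin 1)
E, a, b = 1.0, 2.0, 0.5
x = [E * math.cos(2 * math.pi * k / N) for k in range(N)]
y = [a * v + b * v * v for v in x]          # square-law non-linearity

def dft_mag(sig, m):
    """Magnitude of DFT bin m (m cycles per window), normalized by 1/N."""
    n = len(sig)
    return abs(sum(s * cmath.exp(-2j * math.pi * m * k / n)
                   for k, s in enumerate(sig))) / n

print(dft_mag(y, 0))   # dc term: b*E^2/2 = 0.25
print(dft_mag(y, 1))   # fundamental: a*E/2 = 1.0 (one-sided bin holds half the amplitude)
print(dft_mag(y, 2))   # second harmonic: b*E^2/4 = 0.125
print(dft_mag(y, 3))   # ~0: a square-law term creates no third harmonic
```

The dc and double-frequency bins are exactly the "new" components of (2.5); nothing appears at 3ω because the x² term alone cannot produce it.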
The cellular system takes the opposite approach. It uses the available channels more efficiently by employing low-power transmitters, allowing frequency reuse at much smaller distances, as shown in figure 2.6-b. Maximizing the number of times each channel may be reused in a given geographic area is the key to an efficient cellular system design.

Figure 2.6-b Cellular communication concept

Cellular systems are designed to operate with groups of low-power radios spread out over the geographical service area. Each group of radios serves mobile stations located near it. The area served by each group of radios is called a cell. Each cell has an appropriate number of low-power radios to communicate within the cell itself. The power transmitted by the cell is chosen to be large enough to communicate with mobile stations located near the edge of the cell. The radius of each cell may be chosen to be large in a start-up system with relatively few subscribers. As the traffic grows, the cell radius is reduced, and new cells and channels are added to the system. In reality, cell coverage is an irregularly shaped circle; the exact coverage of the cell depends on the terrain and many other factors. For design purposes, and as a first-order approximation, we assume that the coverage areas are regular polygons. For example, for omnidirectional antennas with constant signal power, each cell site's coverage area would be circular; to achieve full coverage without dead spots, a series of regular polygons is required for the cell sites. After the cellular pattern is drawn on a map of the coverage area, the signal-to-interference ratio (SIR) is calculated for various directions to complete the system design.

Bluetooth is a wireless system initially conceived by Ericsson and named after the Danish king Harald Blatand (Bluetooth). Its chips transmit over a short range (10 meters), consume very little power, and are inexpensive, making them suitable for connecting devices that are close to each other.
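As a first-order SIR estimate of the kind mentioned above, a common textbook model (an assumption of ours, not derived in this text) considers six equidistant co-channel interferers in a hexagonal layout, a path-loss exponent n, and a co-channel reuse ratio D/R = √(3N) for cluster size N, giving SIR ≈ (D/R)ⁿ / 6:

```python
import math

def sir_db(cluster_size, path_loss_exp=4.0):
    """First-order downlink SIR (dB) for hexagonal reuse, six co-channel
    interferers at the reuse distance, path-loss exponent n (assumed model)."""
    q = math.sqrt(3.0 * cluster_size)       # co-channel reuse ratio D/R
    sir = (q ** path_loss_exp) / 6.0
    return 10.0 * math.log10(sir)

for n_cells in (3, 4, 7, 12):
    print(n_cells, round(sir_db(n_cells), 1))
```

Under this model the classic 7-cell cluster yields roughly 18.7 dB, which is why N = 7 is so common in early analog cellular designs; real designs still check SIR direction by direction on the actual map, as the text notes.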
These devices instantly establish direct contact with each other as soon as they come within Bluetooth range. Bluetooth devices can then automatically form "personal" (ad hoc) networks, and other Bluetooth devices can join a network as soon as they move within range. Bluetooth chips use the unlicensed (free) ISM band at 2.4 GHz and operate in noisy frequency environments, because they avoid interference by hopping to a new frequency after transmission or receipt of each packet. The Bluetooth protocol provides built-in support for encryption, frequency hopping, and automatic output power adaptation to control the range (all of which make eavesdropping difficult), as well as authentication to prevent communication with unauthorized devices.

Bluetooth faces strong competition from the wireless LAN standard called Wi-Fi. Because Bluetooth chips are expected to be inexpensive and to have low power requirements, Bluetooth is more suited to small devices such as PDAs, cell phones, and digital cameras. Wi-Fi, on the other hand, because of its range and bandwidth, is more suited to larger devices such as laptops. Bluetooth's disadvantages are its small bandwidth (721 kb/s) and range (10 meters). Wi-Fi's advantages are its high bandwidth (11 Mb/s) and range (50 meters), with the disadvantages of higher power consumption and cost.

Satellite communication systems, whether geostationary Earth orbit (GEO) or low Earth orbit (LEO), provide an effective platform to relay radio signals between points on the ground. The users who employ these signals enjoy a broad spectrum of telecommunication services on the ground, at sea, and in the air. In recent years, such systems have become practical to the point where a typical household can have its own direct-to-home (DTH) satellite dish alongside the more established broadcasting media, including over-the-air TV and cable TV.
GEO and non-GEO satellites will continue to offer unique benefits for other services such as mobile communications and multimedia, and in emergency situations where terrestrial lines and portable radios are not available or are ineffective for a variety of reasons. A commercial satellite communication system is composed of the space segment and the ground segment, illustrated in figure 2.6-c. In this simple representation of a GEO system, the space segment includes the satellites operating in orbit and a tracking, telemetry, and command (TT&C) facility that controls and manages their operation. This part of the system represents a substantial investment, and the satellite operating lifetime is typically on the order of 12 years, based on fuel availability. A satellite is capable of performing as a microwave repeater for Earth stations that are located within its coverage area, which is determined by the altitude of the satellite and the design of its antenna system. The arrangement of three basic orbit configurations is shown in figure 2.6-d. A GEO satellite can cover nearly one-third of the Earth's surface, with the exception of the polar regions; this includes more than 99% of the world's population and essentially all of its economic activity.

Figure 2.6-c Satellite network

The LEO and medium Earth orbit (MEO) approaches require more satellites to achieve this level of coverage. Because the satellites move in relation to the surface of the Earth, a full complement of satellites (called a constellation) must be operating to provide continuous service. The tradeoff is that GEO satellites, being more distant, incur a longer path length to Earth stations, whereas LEO systems promise short paths not unlike those of terrestrial systems. The path length introduces a propagation delay, since radio signals travel at the speed of light.
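The delay difference between orbits is easy to quantify as t = d/c. In the sketch below the GEO altitude (about 35,786 km) is the standard geostationary figure, while the LEO and MEO path lengths are illustrative round numbers of ours, not values from the text:

```python
C = 299_792_458.0   # speed of light in vacuum, m/s

def one_way_delay_ms(path_km):
    """One-way propagation delay in milliseconds for a straight path in km."""
    return path_km * 1000.0 / C * 1000.0

for name, d_km in (("LEO", 1_000), ("MEO", 10_000), ("GEO", 35_786)):
    print(f"{name}: {one_way_delay_ms(d_km):.1f} ms")
```

A GEO hop therefore costs on the order of 120 ms each way (roughly 250 ms up and down), which is the "increased delay" that can degrade interactive quality or throughput, as noted next.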
Depending on the nature of the service, the increased delay of MEO and GEO orbits may impose some degradation on quality or throughput.

Figure 2.6-d Satellite orbits

2.4 Communication Channel and its Properties

As illustrated in figure 2.1, to transport the information signals from the transmitter to the receiver, a transmission medium (channel) is required. The choice between different transmission media is dictated by: (a) the desired operating frequency band, (b) the amount of power to be transferred, and (c) the amount of transmission losses that can be tolerated. Each medium has a certain characteristic impedance value, current-carrying capacity, and physical shape, is designed to meet a particular requirement, and must overcome different channel impairments such as signal attenuation, noise, crosstalk, Doppler shift, path loss, and fading. Communication systems use a variety of transmission media, mainly classified into two categories, guided (wired) and unguided (wireless), carrying an electromagnetic signal that travels in space at the speed of light, 3 × 10⁸ m/s.

Guided transmission channels transport the information via a physical medium. They can be characterized by RLC circuit models, and the input-output signal transfer characteristics can be modeled by a transfer function. Cable manufacturers often provide impedance characteristics of the transmission-line models for their cables. Most of the losses in a properly constructed and used transmission line are due to conductor resistance and, at higher frequencies, dielectric conductance; both of these losses increase with frequency. Some transmission lines also radiate energy. This is particularly true of lines that are improperly connected; for instance, a coaxial line that is used in a balanced circuit will radiate energy from its shield.
When parallel lines are used at very high frequencies, such that the distance between the wires is a substantial portion of the wavelength, they too can radiate. With unguided transmission media, we consider models for free-space radio channels that are linear.

2.4-1 Guided transmission media (wired communication channels)

Four types of guided transmission media will be discussed in this part: wire pairs, coaxial lines, waveguides, and optical fibres.

a- Wire pair
A wire pair consists of two wires of copper (or aluminum). It is the oldest and most common transmission medium. Its main disadvantages are high attenuation and sensitivity to interference, crosstalk, delay distortion, and noise. A basic impairment of the wire pair is loss (attenuation). Loss can be defined as the dissipation of signal strength (power) as a signal travels along a wire pair, usually expressed in decibels (dB). Power is expressed in watts, but the use of milliwatts may be more practical. If we denote the loss by L_dB, then

L_dB = 10 log10 (P1 / P2)   (2.6)

where P1 is the power of the signal at the wire pair input, and P2 is the power level of the signal at the distant end of the wire pair.

Example 1. Suppose a 10-mW (milliwatt), 1000-Hz signal is launched into a wire pair. At the distant end of the wire pair the signal is measured at 0.2 mW. What is the loss in decibels on the line for this signal?

L_dB = 10 log10 (10 / 0.2) = 10 log10 (50) ≈ 17 dB

Also, attenuation in wire cable increases with frequency approximately according to the formula

A_dB = k √f   (2.7)

where A_dB is the attenuation in decibels, f is the frequency, and k is a constant specific to each cable. This formula gives us the approximate attenuation at other frequencies if the attenuation at one frequency is known. For example, if we measure that the attenuation of a certain cable is 6 dB at 250 kHz, then at the four-times-higher frequency of 1 MHz it is approximately 12 dB.
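Equations (2.6) and (2.7) and the two worked examples above can be sketched in a few lines (function names are ours):

```python
import math

def loss_db(p_in, p_out):
    """Loss per equation (2.6): L_dB = 10*log10(P1/P2), powers in the same units."""
    return 10.0 * math.log10(p_in / p_out)

def attenuation_at(f_target, f_ref, a_ref_db):
    """Scale a known attenuation with sqrt(f), per equation (2.7): A_dB = k*sqrt(f)."""
    return a_ref_db * math.sqrt(f_target / f_ref)

# Example 1: 10 mW in, 0.2 mW out -> about 17 dB.
print(round(loss_db(10.0, 0.2), 2))      # 16.99

# 6 dB at 250 kHz -> 12 dB at 1 MHz (frequency x4, sqrt(4) = 2).
print(attenuation_at(1e6, 250e3, 6.0))   # 12.0
```

Note that `attenuation_at` never needs the cable constant k explicitly; taking the ratio of two applications of (2.7) cancels it.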
The speed of signal propagation in a copper cable is approximately 200,000 km/s. Wire pairs are classified into two types:

a-1 Open (parallel)-wire lines
The oldest and simplest form of a two-wire parallel line uses bare conductors suspended at pole tops, as shown in figure 2.7-a. The wires must not touch each other, and are generally spaced from 2 to 6 inches apart; otherwise a short circuit occurs in the line and communication is interrupted. This separation is achieved by using insulating spacers, as shown in figure 2.7-b, or as a ribbon (twin lead), illustrated in figure 2.7-c, where uniform spacing is assured by embedding the two wires in a low-loss dielectric, usually polyethylene. The first type of this transmission line is most often used for power lines, rural telephone lines, and telegraph lines, while the second type is commonly used to connect a television receiving antenna to a home TV set. An advantage of this transmission line is its simple construction. The principal disadvantages are the high radiation losses and electrical noise pickup due to the lack of shielding, so these lines are not used at microwave frequencies (UHF, SHF, and EHF bands). Microwaves have many wireless applications, with large bandwidth and small wavelengths compared to other radio waves, but they are easily attenuated by objects found in their path. Radiation losses are produced by the changing fields created by the changing current in each conductor. New open-wire lines are rarely installed today, but they are still in use in rural areas as subscriber lines or analog carrier systems with a small number of speech channels.

Figure 2.7-a Open (parallel)-wire lines
Figure 2.7-b Parallel two-wire line.
Figure 2.7-c Two-wire ribbon type line.

Crosstalk may be heard on most of our telephone lines. It appears as another, "foreign" conversation, having nothing to do with our telephone call, electrically induced into our line.
One basic cause of crosstalk is other wire pairs sharing the same cable as our line, due to insufficient shielding, an excessively large disparity between signal levels in adjacent circuits, unbalanced lines, or overloaded analogue carrier transmission systems or interfaces. It is a statistical quantity because the number of sources and coupling paths is usually too large to treat individually. Crosstalk falls into two categories:
1. unintelligible crosstalk (babble)
2. intelligible crosstalk
The latter is the most disturbing because it removes any impression of privacy. It can be caused by a single disturbing channel with enough coupling to spill into adjacent circuits. Unintelligible crosstalk is usually caused by a large number of disturbing channels, none of which is of sufficient magnitude to be understood, or by extraneous modulation products in carrier transmission systems. Crosstalk can be further categorized as near-end and far-end. Near-end crosstalk is crosstalk interference at the near end of a circuit with respect to the listener; far-end crosstalk is crosstalk interference at the far end, as shown in figure 2.8. To mitigate this impairment, physical twists are placed on each wire pair in the cable; generally there are 2 to 12 twists per foot of wire pair. So we get the term twisted pair, shown in figure 2.9.

Figure 2.8 Crosstalk distortion

Another impairment causes a form of delay distortion on the line, which is cumulative and varies directly with the length of the line as well as with the construction of the wire itself. It has little effect on voice transmission, but can place definite restrictions on the data rate for digital/data transmission on the pair. The impairment is due to the capacitance between one wire and the other of the pair, between each wire and ground, and between each wire and the shield, if a shield is employed.
The insulated conductors of our wire pair have, to a greater or lesser degree, this property of capacitance. The capacitance of two parallel open wires, or of a pair of cable conductors of any considerable length, is appreciable in practice.

a-2 Twisted Pair
The twisted-pair transmission line is shown in figure 2.9. As the name implies, it consists of two insulated wires, typically 0.4 to 0.6 mm thick (or about 1 mm thick if insulation is included), twisted together to form a flexible pair without the use of spacers.

Figure 2.9 Twisted pair wire.

These two wires are twisted together to reduce external electrical interference and interference from one pair to another in the same cable. The twisted pair is symmetrical, and the difference in voltage (or electromagnetic wave) between the two wires contains the transmitted signal. Twisted pair is easy to install, requires little space, and does not cost a lot. Many hundreds of pairs may be put together to form a cable. When this is done it is usual to use different pitches of twist in order to limit electromagnetic coupling between them, and hence cross-talk. It is not used for transmitting high frequencies because of the high dielectric losses that occur in the rubber insulation; when the line is wet, the losses increase greatly. The conductor material is copper, and the primary constants of the twisted pair (series resistance, shunt capacitance, series inductance and shunt conductance, all per unit length) change with frequency. The bandwidth of the twisted pair can be extended to a higher frequency by inductive loading of the line. Lumped inductances are connected in series with the line at specified distances. The best results are obtained when the interval is kept short and the value of the lumped inductance is kept low, thus minimizing the discontinuities introduced by loading.
The effect of loading a 3.7-km pair with 900 Ω terminations is shown in figure 2.10.

Figure 2.10 Loading effect on a twisted pair cable

The usable BW of a twisted wire pair varies with the type of wire pair used and its length. Ordinary wire pair used in the PSTN subscriber access plant can support 2 MHz over about 1 mile of length. The IEEE defines BW as "the range of frequencies within which performance, with respect to some characteristic, falls within specific limits." One such limit is the amplitude of a signal within the band. Here it is commonly defined at the points where the response is 3 dB below the reference value. This 3-dB power BW definition is illustrated graphically in figure 2.11. Twisted pairs are used in the telecommunications networks in subscriber lines, in 2-Mbps digital transmissions with distances up to 2 km between repeaters, in DSLs up to several Mbps, and in short-haul data transmissions up to 100 Mbps in LANs.

Figure 2.11 Concept of the 3-dB power bandwidth.

A repeater (regenerator) takes a corrupted and distorted digital signal, detects/demodulates it, and then re-modulates it. By this process a brand new, nearly perfect digital signal is regenerated.

b- Coaxial Cable
The coaxial cable is an entirely different configuration of two conductors, used to advantage where high and very high radio frequencies are involved. The conducting pair consists of a cylindrical tube with a single stiff copper wire conductor making up the core, as shown in figure 2.12. In practice the center conductor is held in place accurately by a surrounding insulating material that may take the form of a solid core along the axis of the wire, or of a spirally wrapped string.

Figure 2.12 Co-axial Cable

Coaxial lines are made with an inner conductor that consists of flexible wire insulated from the outer conductor by a solid, continuous insulating material.
This insulator is encased by a cylindrical outer conductor made of metal braid, which gives the line flexibility. The outer conductor is covered in a protective plastic sheath. The construction of the coaxial cable ensures that, at normal operating frequencies, the electromagnetic field generated by the current flowing in it is confined to the dielectric and does not extend outside the outer conductor (normally grounded), as shown in figure 2.13. It gives a good combination of large BW and excellent noise and cross-talk immunity. The nominal impedance is 75 ohms, and special cable is available with a 50-ohm impedance.

Figure 2.13 Limited radiation in a perfectly shielded coaxial line.

Radiation is therefore severely limited, resulting in a perfectly shielded coaxial line. It is unaffected by seawater, gasoline, oil, and most other liquids that may be found aboard ship. The primary constants of the coaxial cable are much better behaved than those of the twisted pair. The inductance, L, capacitance, C, and conductance, G, per unit length are, in general, independent of frequency. The resistance, R, per unit length is a function of frequency due to the skin effect; it varies as a function of √f. From about 1953 to 1986, coaxial cable was widely deployed for long-distance, multichannel transmission. Its frequency response rolled off sharply: the loss increased drastically as frequency was increased. For example, for 0.375-inch coaxial cable, the loss at 100 kHz was about 1 dB and the loss at 10 MHz was about 12 dB. Thus, equalization was required to level out the frequency response. Coaxial cables are commonly used to connect RF components working at frequencies below 3 GHz; above that the losses are too excessive. For example, the attenuation might be 3 dB per 100 m at 100 MHz, but 10 dB/100 m at 1 GHz, and 50 dB/100 m at 10 GHz.
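Since R grows as √f, a first-order estimate scales a known loss by √(f/f_ref). Applied to the 0.375-inch figures above it predicts about 10 dB at 10 MHz, a little below the quoted 12 dB, because dielectric loss adds a further frequency-dependent term that the skin-effect model alone ignores. A hedged sketch:

```python
import math

def coax_loss_db(f_hz: float, f_ref_hz: float, loss_ref_db: float) -> float:
    """First-order coax loss estimate: skin-effect resistance scales as sqrt(f)."""
    return loss_ref_db * math.sqrt(f_hz / f_ref_hz)

# 0.375-inch coax: ~1 dB at 100 kHz, extrapolated to 10 MHz
print(coax_loss_db(10e6, 100e3, 1.0))  # 10.0 (text quotes ~12 dB measured)
```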
Their power rating is typically of the order of one kilowatt at 100 MHz but only 200 W at 2 GHz, limited primarily by the heating of the coaxial conductors and of the dielectric between the conductors. However, special short-length coaxial cables do exist that operate in the 40-GHz range. A number of coaxial cables are usually combined together with twisted pairs to form a multi-pair cable. The coaxial cable has a much larger BW than the twisted pair. However, it still requires repeaters, and frequency equalizers for analog lines and phase equalization for digital signal transmission. Coaxial cables are still widely used in LANs (the original 10-Mbps Ethernet), and as radio-frequency (RF) transmission lines in antenna systems for broadcasting, connecting a radio or a subscriber's TV set to its antenna. Coaxial cable is also extensively employed in cable TV distribution networks, in long-distance telephone trunks, in local area networks, in high-capacity analog and digital transmission systems in telecommunications networks, and even in older-generation submarine systems. It can support a maximum data rate of 500 Mbps for a distance of about 500 meters. Repeaters are required every 1 to 10 kilometers. Specially constructed coaxial cables with repeaters of very high reliability are used for submarine cable systems. Because of the very high cost of these cables, they are used to transmit messages in both directions by assigning separate frequency bands to each direction. In spite of the development of satellite communication channels, submarine cables are still viable for trans-Atlantic and trans-Pacific traffic. Because of the propagation delay involved in the signal travelling to the satellite and back, most trans-Atlantic telephone conversations use the satellite link in one direction only; cable is used in the opposite direction.
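The one-way delay through a geostationary satellite explains why cable is preferred in at least one direction. Assuming the standard geostationary altitude of about 35,786 km (a figure not given in the text), the up-and-down path alone takes roughly a quarter of a second:

```python
C = 3e8                 # speed of light, m/s
GEO_ALT_M = 35786e3     # geostationary altitude above the equator (assumed)

# One-way trip for the signal: ground -> satellite -> ground
one_way_delay_s = 2 * GEO_ALT_M / C
print(round(one_way_delay_s * 1000))  # 239 (ms), before any processing delay
```

A full satellite round trip in a conversation would double this, which is why the echo and pauses become objectionable when both directions go via satellite.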
With the advent of fiber-optic cable, with its much greater BW and comparatively flat frequency response, the use of coaxial cable on long-distance circuits fell out of favor.

c- Waveguides
The term waveguide can be applied to all types of transmission lines in the sense that they are all used to guide energy from one point to another. Transmission line losses increase rapidly with frequency, so at frequencies of several gigahertz or more the losses in conventional transmission lines are such as to make long cable runs impractical. The mechanical structure of the waveguide disqualifies it from being used for long-haul transmission. Luckily, there is a different form of transmission line that is impractical and not useful at low frequencies but very useful in the microwave region. This type of line, called a waveguide, may be classified according to its cross section (rectangular, cylindrical or elliptical), as shown in figure 2.14, or according to the material used in its construction (metallic or dielectric).

Figure 2.14 Waveguides: (a) rectangular, (b) cylindrical, (c) elliptical.

Dielectric waveguides are seldom used because the dielectric losses for all known dielectric materials are too great to transfer the electric and magnetic fields efficiently. A metallic waveguide generally consists of a hollow, air-filled tube made of conducting material. It may be viewed as a coaxial cable with the central conductor removed; the outer conductor guides the propagation of the electromagnetic wave. A waveguide internally guides microwave signals to transfer large amounts of microwave power at frequencies above about 3 GHz over very short links, for example to link an antenna to its transmitter or receiver. It is not normally used below 3 GHz because its cross-sectional dimensions must be comparable to a wavelength at the operating frequency.
For example, at 5 GHz the transmitted power might be one megawatt and the attenuation only 4 dB/100 m. Waveguides are used mainly as feed lines to antennas in terrestrial microwave relay systems, and for frequencies above 18 GHz they are superior to all other media in terms of loss, noise and power handling. The method by which a waveguide transmits energy down its length differs from the conventional methods. Electromagnetic waves are launched into one end using a wire loop or probe and propagate down the guide. Ideally the waves fill the entire space within the guide, and can be visualized as rays that travel down the waveguide by repeated total reflections, bouncing off the highly polished, silver- or gold-plated inside surfaces of its brass or aluminum walls along their length to the destination, as shown in figure 2.14; ideally there is no loss. In practice some absorption takes place, leading to losses that are small compared with cable losses at these frequencies. These systems present a very superior capacity when compared to the systems mentioned before, reaching up to 200,000 voice channels. They require a very precise and costly technology. Rectangular waveguides are used most frequently, with a cross-section ratio of a:b = 2:1. The wider dimension must be about one-half the wavelength of the wave which the guide will transmit. Irregularities on the walls, such as holes, lack of a perfect match at joints, bends, twists and imperfect impedance matching at the terminations, can cause reflections and spurious modes to be generated, all of which result in signal loss. The cross section must remain uniform around a bend. If the waveguide is dented, or if solder is permitted to run inside the joints, the attenuation of the line is greatly increased. Dents and obstructions in the waveguide also reduce its breakdown voltage, thus limiting the waveguide's power-handling capability because of possible arc-over.
Great care must be exercised during installation; one or two carelessly made joints can seriously inhibit the advantage of using the waveguide. Cylindrical guides are useful when rotating antennas are used, as in radar, because of their symmetry. Elliptical waveguides can be semi-flexible and are used extensively for connecting equipment at the base of a tower to antennas on the tower. Waveguides are often easier to fabricate than coaxial lines, with much less attenuation, but they are more expensive and difficult to install. The waves are completely contained within the conducting walls, so there is no loss due to radiation from the guide. Waveguides, then, solve the problem of high transmission-line losses for microwave signals. As figure 2.15 shows, as the angle a ray makes with the wall of the guide becomes larger, the distance the ray must travel to reach the far end of the guide becomes greater.

Figure 2.15 Waveguide propagation modes

The actual speed at which a signal travels along the guide is called the group velocity, and it is somewhat slower than that of electromagnetic energy traveling through free space at the speed of light. The group velocity in a rectangular waveguide is given by the equation:

Vg = c √(1 − (fc / f)²)    (2.8)

where fc is the cutoff frequency, given by:

fc = c / 2a    (2.9)

where c is the speed of light, equal to 3 x 10^8 m/sec, and a is the long dimension of the rectangular guide. Waveguides operate essentially as high-pass filters, with a low-frequency cut-off for a given waveguide cross section below which waves will not propagate. At frequencies below the gigahertz range, waveguides are too large to be practical for most applications. The greater distance traveled causes the effective velocity down the guide to be reduced. The angle the wave makes with the wall of the guide varies with frequency.
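Equations (2.8) and (2.9) can be sketched as follows; the 7.2-cm guide width used here is an illustrative assumption, not a value from the text:

```python
import math

C = 3e8  # speed of light, m/s

def cutoff_hz(a_m: float) -> float:
    """Dominant-mode cutoff of a rectangular guide (Eq. 2.9): fc = c / 2a."""
    return C / (2 * a_m)

def group_velocity(f_hz: float, a_m: float) -> float:
    """Group velocity (Eq. 2.8): Vg = c * sqrt(1 - (fc/f)^2)."""
    fc = cutoff_hz(a_m)
    if f_hz <= fc:
        raise ValueError("below cutoff: the guide does not propagate")
    return C * math.sqrt(1 - (fc / f_hz) ** 2)

# A guide with long dimension a = 7.2 cm (assumed):
print(round(cutoff_hz(0.072) / 1e9, 2))          # 2.08 (GHz)
print(round(group_velocity(5e9, 0.072) / C, 3))  # 0.909 (fraction of c)
```

Note how Vg approaches c well above cutoff and falls toward zero as f approaches fc, which matches the high-pass behaviour described above.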
At frequencies near the cutoff value, the wave moves back and forth across the guide more often while traveling a given distance down the guide than it does at higher frequencies. Figure 2.16 gives a qualitative idea of the effect.

Figure 2.16 Microwave propagation inside a waveguide: (a) medium frequency, (b) high frequency, (c) low frequency, (d) very low frequency.

If a signal has components of different frequencies, the higher-frequency components will travel faster than the lower-frequency components. This can be a real problem for pulsed and other wideband signals. For instance, the upper sideband of an FM or AM signal travels faster than the lower sideband.

d- Optical Fiber Cables
Optical fiber cable is the most modern favoured transmission medium. Optical communication refers to the transmission of information signals over carrier waves that oscillate at optical frequencies; their fields oscillate at frequencies much higher than radio waves or microwaves. It operates at optical and infrared frequencies, as indicated on the abbreviated chart of the electromagnetic spectrum in figure 2.17.

Figure 2.17 Electromagnetic spectrum

An optical fiber has a central core (with a diameter of 8 to 60 µm) of very pure glass surrounded by an outer layer of less dense glass, a peripheral transparent cladding, which is a dielectric material with a diameter of 125 µm, surrounded in turn by protective packaging. The core may be of uniform refractive index or it may be graded. At present the most efficient core material is silica (SiO2). The outer glass sheathing, referred to as cladding, has a lower refractive index, and the signal is therefore confined to the core. These techniques have resulted in optical fibers that have very small attenuation. Various types of protective covering may be put on the fiber, and several fibers may be put together to form a cable.
The practical propagation of light through an optical fiber may best be explained as follows: when light passes from the core (of higher refractive index, n1) into the cladding (of lower refractive index, n2), the refracted ray is bent away from the normal. As the angle of incidence becomes more oblique, the refracted ray is bent more, until finally the refracted energy emerges at an angle of 90° with respect to the normal and just grazes the surface. Beyond this critical angle, a light ray is reflected at the surface between the two glass materials back into the core, and it propagates in the core from end to end. The principle of optical cable transmission is presented in figure 2.18.

Figure 2.18 Structure of optical fiber

Fiber-optic links are used as the major medium for long-distance transmission in all developed countries, for analog applications, particularly for video/TV, and as a PCM highway. Their applications cover very wideband terrestrial links, including undersea applications, and also cable television "super trunks." High-capacity coaxial cable systems are gradually being replaced by fiber systems. In the world of fiber optics, the technology was developed by physicists, who are more accustomed to wavelength than to frequency to denote the position of a light emission in the electromagnetic spectrum, using the relation:

λ = c / f

In fact, the whole usable RF spectrum can be accommodated on just one such strand. Such a strand is about the diameter of a human hair. It can carry one serial bit stream at a 10-Gbps transmission rate or, by wave division multiplexing (WDM) methods, an aggregate of 100 Gbps or more. The maximum length of a fiber-optic link ranges from 32 km to several hundred km before requiring a repeater. This length can be extended by the use of repeaters, where each repeater can impart a 20- to 40-dB gain. A major advantage fiber-optic cable has when compared with coaxial cable is that no equalization is necessary.
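The λ = c/f relation is easy to apply in both directions. For instance, the widely used 1550-nm fiber transmission window corresponds to roughly 193.5 THz (the specific wavelength is our example, not one given in the text):

```python
C = 3e8  # speed of light, m/s

def wavelength_nm(f_hz: float) -> float:
    """lambda = c / f, returned in nanometres."""
    return C / f_hz * 1e9

def frequency_thz(lambda_nm: float) -> float:
    """f = c / lambda, returned in THz."""
    return C / (lambda_nm * 1e-9) / 1e12

print(round(frequency_thz(1550), 1))   # 193.5 (THz)
print(round(wavelength_nm(193.5e12)))  # 1550  (nm)
```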
Also, repeater separation is on the order of 10-100 times that of coaxial cable for equal length and transmission bandwidths. So, an optical cable requires a smaller number of repeaters than a coaxial cable of the same length would. Other advantages are:

■ Wide bandwidth: Optical fibers have a very wide BW, measured in terahertz (THz), and are able to carry very high data rates, up to 50 Gbps.
■ Low cost: The cost of the fiber has decreased to the level of a twisted-pair cable; however, the coating and shielding of the cable increase the cost by a factor of two or more.
■ Electromagnetic immunity, and no external interference: Optical fibers have extremely high immunity to external electrical interference, and electromagnetic disturbances have no influence on the light signal inside the fiber.
■ Light weight and small size: Fiber material weighs little, and the fiber diameter is only of the order of a hundred micrometers instead of a millimeter or more for copper wire. Quartz, used in glass fibers, is one of the most common materials on Earth.
■ Low attenuation, and no crosstalk: Optical fiber attenuation is very low, typically 0.2 dB/km, and it is independent of the data rate. The maximum loss that a link can withstand and still operate satisfactorily is a function of the type of fiber, the wavelength of the light signal, the bit rate and error rate, the signal type (e.g., TV video), the power output of the light source (transmitter), and the sensitivity of the light detector (receiver). The optical signal is confined entirely inside the fiber. The transmitted power is of the order of milliwatts in most cases.

Optical fibers' disadvantages are:
■ They are more difficult to install than copper cables. Installation and maintenance, for example repair of a broken fiber, require special and costly equipment and well-trained personnel.
■ Radiation of light from a broken fiber may cause damage to the human eye.
The safety standards set by the IEC restrict the allowable maximum optical power that can be used, and they also specify whether equipment has to be able to switch off the transmitter in the case of a fiber fault. Visible light, of BW 3.2 x 10^14 Hz, has a shorter wavelength (700 - 400 nm) than the light used in optical systems, as shown in figure 2.17.
■ It is more difficult to locate faults on the fiber than it is on a metallic conductor.
■ Electric power, needed for example to drive electronic components at repeaters, requires a conductor medium, as it cannot be sent over the fiber.

2.4-2 Un-guided (radio, wireless channel) transmission medium
Radio transmission is based on radiated emission. It does not require the physical medium used with cable transmission, so radio systems are quick to install and have lower costs, since no digging in the ground is required. Radio waves are a form of electromagnetic radiation similar to light, except for a lower frequency and longer wavelength. They display similar properties: a beam may be reflected, refracted (i.e., slightly bent) and diffracted (slightly swayed around obstacles). The propagation characteristics of electromagnetic waves are highly dependent on the frequency, as shown in table 2.1. These characteristics are the result of changes in the radio wave velocity as a function of altitude and boundary conditions. The wave velocity is dependent on air temperature, air density, and levels of air ionization (free electrons). The ionization is caused by ultraviolet radiation from the sun, as well as cosmic rays.

Table 2.1 Frequency Bands

3-30 kHz, VLF (Long Waves): Ground wave; low attenuation day and night; high atmospheric noise level. Typical uses: long-range navigation; submarine comm.
30-300 kHz, LF: Similar to VLF, slightly less reliable; absorption in daytime. Typical uses: long-range navigation and marine comm.; radio beacons.
300-3000 kHz, MF (Medium Waves): Ground wave and night sky wave; attenuation low at night and high in the day; atmospheric noise. Typical uses: maritime radio; direction finding; AM broadcasting.
3-30 MHz, HF (Short Waves): Ionospheric reflection varies with time of day, season, and frequency; low atmospheric noise at 30 MHz. Typical uses: amateur radio; international broadcasting; military comm.; long-distance aircraft and ship comm.; telephony, telegraphy, facsimile.
30-300 MHz, VHF (Ultra Short Waves): Nearly line-of-sight (LOS) propagation, with scattering because of temperature inversions; cosmic noise. Typical uses: VHF TV; FM; two-way radio; AM aircraft comm. and navigational aids; medical diagnosis.
0.3-3 GHz, UHF (letter bands: L 1.0-2.0 GHz, S 2.0-4.0 GHz): LOS propagation; cosmic noise. Typical uses: UHF TV; cellular telephone; navigational aids; radar; microwave links; personal comm. systems.
3-30 GHz, SHF (letter bands: C 4.0-8.0, X 8.0-12.0, Ku 12.0-18.0, K 18.0-27.0, Ka 27.0-40.0, R 26.5-40.0 GHz): LOS propagation; rainfall attenuation above 10 GHz; atmospheric attenuation because of oxygen and water vapor; high water vapor absorption at 22.2 GHz. Typical uses: satellite comm.; radar; microwave links.
30-300 GHz, EHF / mm wave (letter bands: Q 33.0-50.0, V 40.0-75.0, W 75.0-110.0, mm 110.0-300.0 GHz): Same, with high water vapor absorption at 183 GHz and oxygen absorption at 60 and 119 GHz. Typical uses: radar; satellite; microwave experimental.
10^3-10^7 GHz (infrared, visible light, and ultraviolet): LOS propagation. Typical uses: optical comm.

Consequently, the amount of ionization is a function of the time of day, the season of the year, and the sun's activity (sunspots). One important factor that restricts the use of radio transmission is the shortage of frequency bands. The most suitable frequencies are already occupied, and there are many systems with a growing demand for wider frequency bands.
Examples of systems using radio waves are public cellular systems, professional mobile radio systems, cordless telephones, broadcast radio and TV, satellite communications, and WLANs. The required BW of radio frequencies is regulated by the ITU-R at the global level and, for example, by ETSI at the European level and the FCC in the United States, for a particular service/application. To implement a radio system, permission from a national telecommunications authority is required. Through bit-packing techniques, the information-carrying capacity of a unit of BW can be considerably greater than 1 bit per Hz; on line-of-sight microwave systems, 5, 6, 7, and 8 bits per hertz of BW are fairly common. The essential elements of any radio system are (1) a transmitter for generating and modulating a "high-frequency" carrier wave with an information baseband, (2) a transmitting antenna that will radiate the maximum amount of signal energy of the modulated carrier signal (as an electromagnetic signal) in the desired direction, (3) a receiving antenna that will intercept the maximum amount of the radiated energy after its transmission through space, and (4) a receiver to select the desired carrier wave, amplify the signal, and demodulate it, i.e., separate the audio signal from the carrier. There are many different designs of radio systems. These differences depend upon the types of signals to be transmitted and the type of modulation (AM, FM, PM, or a hybrid). The information transport capacity of a radio link depends on many factors. The first factor is the application. The following is a brief list of applications with some relevant RF bandwidths:
■ Line-of-sight microwave, depending on the frequency band: 2, 5, 10, 20, 30, 40, 60 MHz.
■ SCADA (system control and data acquisition): up to 12 kHz in the 900-MHz band.
■ Satellite communications, geostationary satellites: 500-MHz or 2.5-GHz BWs broken down into 36- and 72-MHz segments.
■ Cellular radio: 25-MHz bandwidth in the 800/900-MHz band. The 25-MHz band is split into two 12.5-MHz segments for two competitive providers.
■ Personal communication services (PCS): 200-MHz band just below 2.0 GHz, broken down into various segments such as licensed and unlicensed users.
■ Cellular/PCS by satellite (e.g., Iridium, Globalstar): 10.5-MHz BW in the 1600-MHz band.
■ Local multipoint distribution system (LMDS) in the 28/38-GHz bands: 1.2-GHz bandwidth for CATV, Internet, data, and telephony services.

Radio Wave Propagation
When radio waves are transmitted from a point, they spread and propagate as a spherical wave front that travels in a direction perpendicular to the wave front, as shown in figure 2.19.

Figure 2.19 Radio wave propagation

Four particular modes of radio wave propagation are shown in figure 2.20. A radio transmission system is normally designed to take advantage of one of these modes. The four modes are: line-of-sight propagation; surface wave (diffracted) propagation; tropospheric scatter (reflected and refracted) propagation; and sky wave (refracted) propagation.

Figure 2.20 (a) Different modes of radio wave propagation; (b) Earth's atmosphere layers.

■ Line-of-Sight Propagation
A line-of-sight (LoS) radio system relies on the fact that radio waves normally travel only in a straight line from the transmitter antenna to the receiver antenna. This requires that the two antennas be in line of sight. This is the dominant mode of propagation for frequencies above about 30 MHz. The direct wave is the major mode of propagation for terrestrial microwave relay systems, used for long-distance telephone and television signals (2, 4, 6, 11, 18, and 30 GHz), and for satellite transmission systems (4, 6, 8, 12, 14, 17 - 21, and 27 - 31 GHz). The range of a LoS system is limited by the effect of the earth's curvature, as figure 2.20 shows.
Line-of-sight systems are therefore restricted to short-haul applications (maximum 15-20 km range), and can reach beyond the horizon (up to 70 km) only when the terminals are installed on tall masts. For two antennas separated by distance d, with transmitting antenna height ht and receiving antenna height hr, LoS is achieved when:

d = √(2R) ( √ht + √hr )    (2.10)

where R is the Earth's radius. Most fixed wireless access radio systems operate in the frequency bands above 1 GHz and are LoS or near-LoS systems (systems designed for bands at the lower end of the range, e.g., 2.5 GHz).

■ Earth-Reflected Wave
Part of the propagated direct wave is reflected off the surface of the Earth and may arrive at the antenna with a different phase from the direct wave. Depending on the magnitude and phase of the reflected wave, it can cause signal fluctuation and sometimes even complete cancellation of the direct wave. This phenomenon has the most noticeable effect on terrestrial microwave relay systems, where automatic gain control, diversity protection, and adaptive equalization may be used to counteract it. Atmospheric conditions vary with changes in height, geographical location, and even with changes in time (day, night, season, year). A knowledge of the composition of the Earth's atmosphere is extremely important for understanding wave propagation. Radio systems can also be used beyond the horizon by utilizing one of the other three radio propagation effects shown in figure 2.20 (surface-wave, tropospheric-scatter or sky-wave propagation).

■ Surface (ground, diffracted) wave propagation
Surface wave propagation, illustrated in figure 2.20, is the dominant mode of propagation for frequencies below 2 MHz. It propagates by diffraction, using the ground as a waveguide. When a surface wave meets an object with dimensions not exceeding its wavelength, the wave tends to curve or bend around the object. The amount of bending (diffraction) is related to the radio wavelength.
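The line-of-sight range formula (2.10) above can be checked numerically. This sketch uses the mean Earth radius; practical terrestrial designs often use an effective 4/3 Earth radius to account for atmospheric refraction, which would extend the computed range:

```python
import math

R_EARTH_M = 6.371e6  # mean Earth radius, metres

def los_distance_m(ht_m: float, hr_m: float) -> float:
    """Geometric radio horizon (Eq. 2.10): d = sqrt(2R) * (sqrt(ht) + sqrt(hr))."""
    return math.sqrt(2 * R_EARTH_M) * (math.sqrt(ht_m) + math.sqrt(hr_m))

# Hypothetical 50-m and 25-m masts:
print(round(los_distance_m(50, 25) / 1000, 1))  # 43.1 (km)
```

The result is consistent with the text's observation that tall masts are needed to push the range toward 70 km.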
The longer the wavelength, the greater is the effect of diffraction. The smaller the object, the more pronounced the diffractive action will be. That is, diffraction of the wave causes it to propagate along the surface of the earth. This is the propagation mode used in AM broadcasting, where the local coverage follows the earth's contour and the signal propagates over the visual horizon. High frequencies, with their short wavelengths, are not normally diffracted but are absorbed by the Earth at points relatively close to the transmitting site. Therefore, as the frequency of a surface wave is increased, the more rapidly the surface wave will be absorbed, or attenuated, by the Earth. Because of this loss by attenuation, the surface wave is impractical for long-distance transmissions at frequencies above 2 megahertz. On the other hand, when the frequency of a surface wave is low enough to have a very long wavelength, the Earth appears to be very small, and diffraction is sufficient for propagation well beyond the horizon. The lowest usable radio frequency depends on the length of the antenna used. For efficient radiation, the antenna needs to be longer than λ/10. For example, for signaling with a carrier frequency of fc = 10 kHz, the antenna length would have to be:

Lant = c / (10 fc) = (3 x 10^8) / (10 x 10 x 10^3) = 3000 meters,

which is not a practical length.

■ Tropospheric Scatter (reflected and refracted) propagation
The troposphere is the portion of the Earth's atmosphere that extends from the surface of the Earth to a height of about 6 km at the poles or 18 km at the equator. Radio wave transmission in this layer is by tropospheric scatter propagation.
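The λ/10 antenna-length rule of thumb above can be sketched directly, reproducing the 10-kHz example and contrasting it with a VHF carrier:

```python
C = 3e8  # speed of light, m/s

def min_antenna_length_m(fc_hz: float) -> float:
    """Efficient radiation needs an antenna longer than lambda/10 = c / (10 * fc)."""
    return C / (10 * fc_hz)

print(min_antenna_length_m(10e3))   # 3000.0 (10 kHz: 3 km, impractical)
print(min_antenna_length_m(100e6))  # 0.3    (100 MHz: 30 cm, easy)
```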
Ultra-high-frequency radio signals at 1, 2, and 5 GHz, such as those used in terrestrial microwave relay systems, are the frequencies best suited to this mode of transmission; they may be reflected from the troposphere and have the same effect at the receiver as the Earth-reflected wave. Virtually all weather phenomena take place in the troposphere. The temperature in this region decreases rapidly with altitude, clouds form, and there may be much turbulence because of variations in temperature, density, and pressure. These conditions have a great effect on the propagation of radio waves.
■ Sky wave (refracted) propagation
In the sky-wave propagation mode, the atmosphere behaves like a massive waveguide that is bounded below by the Earth's surface and above by a layer of the ionosphere, which surrounds the Earth at an elevation of approximately 80-700 km. The ionosphere is a layer of ionized air caused by constant bombardment by ultraviolet, α, β, and γ radiation from the Sun as well as cosmic rays. It consists of several layers with different densities, so radio waves propagate more quickly in some of the layers than in others, and the layers have different reflective, refractive, and absorptive effects on radio waves. Fading, both long and short term, is due to cancellation and/or reinforcement of the different parts of the signal arriving at the receiver by diverse routes. Diversity protection techniques, such as the use of two or more carrier frequencies, are employed against it. With the correct choice of transmission frequency and angle of incidence it is possible to establish communication between two points on the Earth's surface where line-of-sight does not exist. This is the basis of short-wave (3-30 MHz) transmission.
49 Chapter 2 Communication Systems So, Sky waves are radio waves travel by repeated reflections (multiple hop) between these two layers to reach some distant location on the Earth because of refraction (deflection) from the ionosphere. It is often called the ionospheric wave. This form of propagation is relatively unaffected by the Earth's surface and can propagate signals over great distances around the Earth and provide communication over distances up to many hundred kilometers. This is the most important region of the atmosphere for long distance point-to-point communications. As the electrical signal passes through the medium, the signal gets attenuated. The attenuated signal may not be able to drive the transducer at the receiving end at all if the distance between the sender and the receiver is large. We can, to some extent, overcome this problem by using amplifiers in both transmitter and receiving side. The amplifier will ensure that the electrical signals are of sufficient strength to drive the transducer. But we still have a problem. The transmission medium introduces noise and as a result, the signal gets distorted. The noise cannot be eliminated at all. So, in the above case, we amplify the signal, but at the same time, we also amplify the noise that is added to the actual signal containing the information. Amplification alone does not solve the problem, particularly when the system has to cover large distances. The objective of designing a communication system is for the electrical signal at the transmitting end to be reproduced at the receiving end with minimal distortion. To achieve this, different techniques are used, depending on issues such as type of data, type of communication medium, distance to be covered, and so forth. 50 Chapter 2 Communication Systems The signal generated at the transmitting side called the base band signal, is processed and transmitted only when it is allowed. 
The signal is sent on to the transmission medium through a transmitter. At the receiving end, the receiver amplifies the signal and performs the operations necessary to present the base band signal to the user.
Doppler Shift
The Doppler shift is defined as the change of the received frequency from the transmitted frequency due to the mobility of either the transmitter or the receiver. The amount of frequency change depends mainly on the speed and direction of motion. When the transmitter and the receiver move toward each other, the pitch increases, meaning the received frequency is higher than the transmitted one, and vice versa.
Chapter 3 Analogue Communication
Modulation is the process of imposing the source information Vm(t) (the base band, or modulating, signal) on a higher frequency signal, the carrier Vc(t), to produce a band pass signal S(t) (the modulated signal). Demodulation is the recovery of that information from the carrier at the distant end, near the destination user. A communication system in which the information signal undergoes this modulation process before being placed into the transmission medium is referred to as a modulated communication system. It is the long-distance communication system in which the base band signal is frequency translated (without distorting its content) to a frequency band centred at a frequency fc that is higher than its own frequencies. In radio transmission, modulation is needed to transfer the message spectrum into the high radio frequencies that propagate over radio channels. Typical examples where analogue information is transmitted in this fashion are;
■ Music — broadcast radio
■ Voice — citizen band radio, amateur radio, cellular radio
■ Video — broadcast television
Signals are typically characterized in both the time and frequency domain, and that is the approach that will be taken in this chapter.
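As a numeric illustration of the Doppler shift defined above, the received-frequency offset is (v/c)·fc·cos θ, where θ is the angle between the direction of motion and the direction of arrival. The function name and the sample values below are illustrative, not from the text.

```python
import math

def doppler_shift_hz(fc_hz, speed_mps, angle_deg=0.0):
    """Doppler offset f_d = (v / c) * fc * cos(theta).

    angle_deg = 0 means moving straight toward the other terminal,
    so the received frequency increases; 180 means moving away."""
    c = 3.0e8  # free-space speed of light, m/s
    return (speed_mps / c) * fc_hz * math.cos(math.radians(angle_deg))

# A 900 MHz carrier received in a vehicle moving at 30 m/s (108 km/h):
fd_toward = doppler_shift_hz(900e6, 30.0)          # +90 Hz: frequency rises
fd_away = doppler_shift_hz(900e6, 30.0, 180.0)     # -90 Hz: frequency falls
```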
There are many benefits of modulation, such as;
1- Modulation increases the efficiency of the transmission medium by accommodating a number of signals on the same medium at the same time using multiplexing techniques.
2- Modulation achieves a practical antenna length. For efficient radiation, the antenna needs to be longer than λ/10. For example, for a carrier frequency of fc = 10 kHz, the antenna length would be;
Lant = c / (10 fc) = (3 × 10^8) / (10 × 10^4) = 3000 m,
which is not a practical length, but when modulating a carrier of frequency 100 MHz, the antenna length will be;
Lant = c / (10 fc) = (3 × 10^8) / (10 × 10^8) = 30 cm,
which is a practical length.
3- Modulation allows us to select a frequency that is high enough to be efficiently radiated by an antenna in radio systems.
4- Another important function of modulation is that it allows us to transmit at a frequency that is best suited to the transmission medium, since the behaviour of every practical transmission medium is frequency dependent.
Analogue Modulation Techniques
The primary purpose of analogue (CW) modulation in a communication system is to generate a modulated signal suited to the characteristics of a transmission channel. Analogue communication involves transferring an analogue waveform containing information between two users using an analogue high frequency carrier wave. Consider the information message base band signal given by;
Vm(t) = Em Cos ωmt, (3.1)
while the carrier wave is represented by;
Vc(t) = Ec Cos ωct. (3.2)
It is required that Ec ≥ Em, and ωc >> ωm.
The continuous high frequency carrier wave is defined by three characteristics: amplitude, frequency, and phase. We insert the base band message into the carrier wave by altering any of these three factors of the carrier wave according to the message to be transmitted.
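The antenna-length arithmetic in benefit 2 above can be checked directly; the function name is illustrative.

```python
def antenna_length_m(fc_hz, c=3.0e8):
    """Minimum practical antenna length lambda/10 = c / (10 * fc)."""
    return c / (10.0 * fc_hz)

l_baseband = antenna_length_m(10e3)    # 10 kHz carrier: 3000 m, impractical
l_modulated = antenna_length_m(100e6)  # 100 MHz carrier: 0.3 m = 30 cm, practical
```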
This alteration is detected in the demodulator of the receiver and the original message is reproduced. Prior to 1960, all transmission systems were analogue. Today, in the PSTN, all telecommunication systems are digital, except for the preponderance of subscriber access lines (the subscriber loops). There are three generic forms of analogue modulation:
1. Amplitude modulation, (AM)
2. Frequency modulation, (FM)
3. Phase modulation, (PM).
In item 1 (amplitude modulation), a carrier is varied in amplitude in accordance with the information base band signal. In item 2 (frequency modulation), a carrier is varied in frequency in accordance with the base band signal. In item 3 (phase modulation), a carrier is varied in its phase in accordance with the information base band signal. Analogue transmission implies continuity. Typical analogue transmissions are the signals we hear on AM and FM radio and what we see (and hear) on television. In fact, television is rather unique: the video itself uses amplitude modulation, the sound subcarrier uses frequency modulation, and the colour subcarrier employs phase modulation. All are in analogue formats.
3.1 Amplitude Modulation, AM
The original carrier wave has a constant peak value (amplitude) and a much higher frequency than the modulating signal, the message. In AM, the peak value of the carrier varies in accordance with the instantaneous value of the modulating signal, and the outline wave shape, or envelope, of the modulated carrier wave follows the shape of the original modulating signal, as shown in figure 3.2. Figure 3.1 illustrates the analogue communication system, showing the modulation and demodulation process: the message and the carrier wave enter the modulator, the modulated carrier crosses the transmission channel, and the demodulator delivers the detected message.
Figure 3.1 Analogue communication system.
3.1-1 Full AM (DSBWC) Signal
The pass band modulated signal is given by;
S(t) = VAM(t) = [Ec + Em Cos ωmt] Cos ωct = Ec [1 + m Cos ωmt] Cos ωct.
So,
VAM(t) = Ec [1 + m Cos ωmt] Cos ωct, (3.3-a)
which is the general equation of the AM signal, with m defined as the modulation index (also called factor, depth, or coefficient);
m = Em / Ec, with 0 ≤ m ≤ 1, as Em ≤ Ec.
Rearranging equation (3.3-a), we get;
VAM(t) = Ec Cos ωct + (m Ec / 2) Cos (ωc + ωm)t + (m Ec / 2) Cos (ωc - ωm)t. (3.3-b)
Figure 3.2 illustrates the modulation process in the time and frequency domains.
Figure 3.2 Amplitude modulation and its spectrum
We can see that when a sinusoidal carrier with amplitude Ec and frequency fc Hz is amplitude modulated by a sinusoidal modulating signal with amplitude Em and frequency fm Hz, the modulated wave is known as a double side band with carrier, DSBWC, signal. It contains the following three components, with the amplitudes and frequencies shown in figure 3.2:
• The carrier signal with amplitude Ec and frequency fc Hz,
• The upper side band (USB) with amplitude m Ec / 2 and frequency equal to the sum of the carrier and modulating signal frequencies, (fc + fm) Hz,
• The lower side band (LSB) with amplitude m Ec / 2 and frequency equal to the difference of the carrier and modulating signal frequencies, (fc - fm) Hz.
These sum and difference frequencies are new frequencies produced by the AM process, and they are called side band frequencies.
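A minimal sketch of equation (3.3-a): build a tone-modulated AM wave and confirm that its envelope swings between Ec(1 + m) and Ec(1 - m). The numeric values are illustrative, not from the text.

```python
import numpy as np

Ec, m = 1.0, 0.5
fc, fm = 10_000.0, 100.0                     # carrier and message frequencies, Hz
t = np.arange(0.0, 0.02, 1.0 / (50 * fc))    # 20 ms, 50 samples per carrier cycle

envelope = Ec * (1.0 + m * np.cos(2 * np.pi * fm * t))
v_am = envelope * np.cos(2 * np.pi * fc * t)  # Eq. (3.3-a)

peak = envelope.max()     # Ec * (1 + m) = 1.5
trough = envelope.min()   # Ec * (1 - m) = 0.5
```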
■ Bandwidth of the AM signal
The bandwidth of the full amplitude modulated signal is;
WAM = (fc + fm) - (fc - fm) = 2 fm Hz. (3.4)
■ Power of the AM signal
The power of the full amplitude modulated signal can be calculated as the sum of the powers of the resulting components (into a 1-ohm load);
PAM = Pcarrier + PUSB + PLSB, (3.5-a)
Pcarrier = Pc = Ec^2 / 2 Watt, (3.5-b)
PUSB = (m Ec / 2)^2 / 2 = m^2 Ec^2 / 8 Watt, (3.5-c)
PLSB = (m Ec / 2)^2 / 2 = m^2 Ec^2 / 8 Watt, (3.5-d)
so PUSB = PLSB. Substituting in equation (3.5-a);
PAM = (Ec^2 / 2) + (m^2 Ec^2 / 4) = (Ec^2 / 2) [1 + m^2 / 2],
so,
PAM = Pc [1 + m^2 / 2]. (3.5-e)
From the above equation we find that the carrier power does not depend on the value of the modulation index, while the side band power increases as the modulation index increases. The practical value of the modulation index ranges from 0.3 to 0.8. Also, depending on the manner in which the instantaneous amplitude Em of the modulating signal varies, the value of m changes and the waveform of the modulated signal changes with it. The following terminology is applicable.
• m = 0, when Em = 0, means no carrier modulation occurs and the result is the original carrier signal.
• m = 1, when Em = Ec, means 100% carrier modulation; it is the limiting case, and any further increase causes distortion when using envelope detection of the modulated signal, as shown in figure 3.3.
Figure 3.3 100% modulation
• m > 1, when Em > Ec, means carrier over modulation; it causes phase reversal of the carrier and requires synchronous demodulation for recovering the modulating signal. The modulated waveform will be as shown in figure 3.4.
Figure 3.4 Over modulation
• m < 1, when Em < Ec, means carrier under modulation; it allows envelope detection of the modulating signal but does not make efficient use of the carrier power.
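The power budget of equation (3.5-e) can be checked numerically. The block below also computes the modulation index from the peak and trough of an observed envelope, m = (Emax - Emin)/(Emax + Emin), a standard oscilloscope measurement; the function names and values are illustrative.

```python
def am_power(Ec, m, R=1.0):
    """Carrier, total-sideband, and total power of full AM, per Eq. (3.5-e)."""
    Pc = Ec**2 / (2.0 * R)
    Psb = Pc * m**2 / 2.0            # P_USB + P_LSB = (m^2 / 2) * Pc
    return Pc, Psb, Pc + Psb

def mod_index(Emax, Emin):
    """m = (Emax - Emin) / (Emax + Emin) from an envelope trace."""
    return (Emax - Emin) / (Emax + Emin)

Pc, Psb, Pam = am_power(Ec=10.0, m=1.0)    # 50 W, 25 W, 75 W
sideband_fraction = Psb / Pam              # even at 100% modulation, only 1/3
m = mod_index(Emax=15.0, Emin=5.0)         # 0.5
```

The `sideband_fraction` result is the classic argument for suppressed-carrier systems: at best, one third of the transmitted AM power carries information.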
The modulated waveform will be as shown in figure 3.5.
Figure 3.5 Under modulation
Practically, the modulation index m can be calculated from the waveform shown in figure 3.6 as follows;
Figure 3.6 Modulation index calculation
Emax = Ec + Em, (a)
Emin = Ec - Em. (b)
Adding the two equations;
Emax + Emin = 2 Ec. (c)
Subtracting equation (b) from equation (a);
Emax - Emin = 2 Em. (d)
Dividing equation (d) by equation (c);
m = Em / Ec = (Emax - Emin) / (Emax + Emin). (e)
For a composite base band signal such as;
Vm(t) = E1 Sin ω1t + E2 Sin ω2t + E3 Sin ω3t + ……, (3.6-a)
the amplitude modulated signal will be;
VAM(t) = (Ec + E1 Sin ω1t + E2 Sin ω2t + E3 Sin ω3t + ....) Sin ωct, (3.6-b)
VAM(t) = Ec (1 + m1 Sin ω1t + m2 Sin ω2t + m3 Sin ω3t + ...) Sin ωct. (3.6-c)
From this equation we get;
mn = En / Ec. (f)
Expanding equation (3.6-c), we obtain;
VAM(t) = Ec Sin ωct + (m1 Ec / 2) [Cos (ωc - ω1)t - Cos (ωc + ω1)t]
+ (m2 Ec / 2) [Cos (ωc - ω2)t - Cos (ωc + ω2)t] + ……
+ (mn Ec / 2) [Cos (ωc - ωn)t - Cos (ωc + ωn)t], (3.6-d)
where the equivalent modulation index is;
meq = √(m1^2 + m2^2 + m3^2 + …). (g)
■ AM modulators
The principle of generating the AM signal is described in figure 3.7.
Figure 3.7 AM generation
There are many electronic circuits that can be used as an AM modulator to generate the AM signal, such as;
a- Common Emitter Transistor
The common emitter transistor or common source JFET can be used as an AM modulator: the carrier signal is applied to the base (or gate) and the base band signal is injected into the emitter (or source), while the AM signal is taken from the collector (or drain), as shown in figure 3.8.
Figure 3.8 Common emitter transistor as a modulator
A PN junction diode can also be used, in what is known as a square law modulator.
b- Square-law Modulator
Consider the circuit shown in figure 3.9.
Figure 3.9 Using the nonlinearity of the diode to obtain amplitude modulation; the circuit is tuned to ωc with sufficient bandwidth to select the carrier and the side bands only.
Let Vm(t) = Em Cos ωmt, Vc(t) = Ec Cos ωct, and assume the following:
1- Operation is at the resonant frequency of the circuit, hence;
ω^2 = 1 / (LC). (3.7)
2- The diode current is given by;
i = a1 V + a2 V^2 + ……. (3.8-a)
Since the two voltage sources are connected in series;
V(t) = Vm(t) + Vc(t) = Em Cos ωmt + Ec Cos ωct. (3.9)
Substituting equation (3.9) into equation (3.8-a) gives;
i = a1 [Vm(t) + Vc(t)] + a2 [Vm(t) + Vc(t)]^2
= a1 Vm(t) + a1 Vc(t) + a2 Vm^2(t) + a2 Vc^2(t) + 2 a2 Vm(t) Vc(t) + …
= a1 Em Cos ωmt + a1 Ec Cos ωct + a2 Em^2 Cos^2 ωmt + a2 Ec^2 Cos^2 ωct + 2 a2 Em Ec Cos ωmt Cos ωct + …. (3.8-b)
As Cos^2 φ = (1/2) [1 + Cos 2φ], substituting into equation (3.8-b) gives;
i = a1 Em Cos ωmt + a1 Ec Cos ωct + (a2 Em^2 / 2) [1 + Cos 2ωmt] + (a2 Ec^2 / 2) [1 + Cos 2ωct] + 2 a2 Em Ec Cos ωmt Cos ωct + …. (3.8-c)
The LC tank circuit is tuned to ωc and acts as a band pass filter; since ωc >> ωm, the components at dc, ωm, 2ωm, and 2ωc will be filtered out, leaving;
i = a1 Ec Cos ωct + 2 a2 Em Ec Cos ωmt Cos ωct. (3.8-d)
When the load RL is reflected into the primary, with Em = 1, the voltage across the primary is then given by;
Vp = a1 RL n^2 Ec [1 + (2 a2 / a1) Cos ωmt] Cos ωct (3.10)
= V'AM(t) = E'c [1 + km Cos ωmt] Cos ωct.
This is an amplitude modulated voltage wave with amplitude E'c = a1 RL n^2 Ec and index of modulation km = 2 a2 / a1.
■ Demodulation of the Full AM signal
The complete AM signal can be demodulated using many electronic circuits, such as the envelope detector or the coherent detector.
a- Envelope detection
Figure 3.10 illustrates the circuit diagram which has been commonly used with success as a practical receiver for recovering the base band signal from the AMWC signal given by equation (3.3).
Such a circuit is called an envelope detector, as it follows the envelope of the amplitude modulated carrier, which is similar to the modulating base band signal and is given by;
Ec [1 + m Cos ωmt]. (3.11)
It consists of a diode in series with a parallel R1C1 circuit working as a low pass filter, and a series dc blocking capacitor C2, terminated by a load RL (a high impedance headphone).
Figure 3.10 Envelope detector circuit
The operation of the envelope detector is described using the circuit given in figure 3.11. In order to separate the required signal from all the other signals captured by the antenna, we use a band pass filter centred on the carrier frequency, with sufficient bandwidth to accommodate the upper and lower side bands but with a sufficiently high Q factor that all other carriers and their side bands are attenuated to a level where they will not cause interference. This is most easily achieved by using an LC tuned circuit whose resonant frequency is that of the carrier.
Figure 3.11 The envelope detector circuit.
The input signal to the circuit is most appropriately represented by an ideal current source connected to the primary of the transformer. This ideal current source represents all the currents induced in the antenna by all the radio stations broadcasting signals in free space. The signal is coupled to the parallel-tuned LC circuit, which selectively enhances the amplitude of the signal whose carrier frequency is the same as the resonant frequency of the LC circuit. Consider that the received full AM signal at the input of the envelope detector is only the enhanced modulated AM signal shown in figure 3.12, given by equation (3.3);
VAM(t) = Ec [1 + m Cos ωmt] Cos ωct.
Because the diode conducts only when the anode has a positive potential relative to the cathode, only the positive half of the signal appears across the output resistor R1.
The diode "half wave" rectifies the AM wave, and the R1C1 time constant "follows" the envelope with a slight ripple, as shown in figure 3.13.
Figure 3.12 i/p signal to the envelope detector.
Figure 3.13 o/p signal of the envelope detector.
Note that (1) a smaller time constant increases the ripple, while a longer time constant helps reduce the ripple; however, it also increases the likelihood that the output voltage will not follow the envelope when the voltage is falling, causing "diagonal clipping". (2) In practice, the carrier frequency is much higher than the modulating frequency, hence the ripple is much smaller than shown.
b- Coherent detection
We have seen that the base band signal is recovered from the received modulated carrier signal by demodulating it and then passing the result through a LPF. Figure 3.14 shows the coherent (synchronous) demodulator. It gets its name from the fact that the local oscillator is synchronized in both frequency and phase with the carrier used for modulation at the transmitter.
Figure 3.14 Coherent detection
The balanced detector works as a multiplier of its two inputs: the received amplitude modulated signal, VAM(t) = Ec (1 + m Cos ωmt) Cos ωct, and the coherent carrier signal, Vc(t) = Ec Cos ωct. The output of the demodulator is given by;
Vo(t) = Ec [1 + m Cos ωmt] Cos ωct × Ec Cos ωct
= (Ec^2 / 2) [1 + m Cos ωmt] [1 + Cos 2ωct]
= (Ec^2 / 2) + (m Ec^2 / 2) Cos ωmt + (Ec^2 / 2) Cos 2ωct + (m Ec^2 / 4) Cos (2ωc + ωm)t + (m Ec^2 / 4) Cos (2ωc - ωm)t.
The dc term and the high frequency terms around 2ωc are removed from the demodulator output by the dc block and the LPF, to obtain the required base band signal;
Vm(t) = (m Ec^2 / 2) Cos ωmt. (3.12)
So, we can use either the envelope detector or the coherent detector to restore the base band signal from the Full AM signal.
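The coherent-detector product can be verified numerically: multiplying a tone-modulated AM wave by a synchronized carrier leaves a baseband component of amplitude m·Ec²/2, as in equation (3.12), plus components near dc and 2ωc that the LPF removes. The sample values are illustrative.

```python
import numpy as np

Ec, m = 2.0, 0.8
fc, fm = 20_000.0, 100.0
fs = 2_000_000.0                           # sample rate well above 2*fc
t = np.arange(0.0, 0.01, 1.0 / fs)         # exactly one message period

v_am = Ec * (1.0 + m * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)
v_o = v_am * (Ec * np.cos(2 * np.pi * fc * t))   # balanced-detector product

# Project v_o onto cos(wm*t) over a whole period to measure the amplitude
# of its baseband component; Eq. (3.12) predicts m * Ec^2 / 2.
amp = 2.0 * np.mean(v_o * np.cos(2 * np.pi * fm * t))
expected = m * Ec**2 / 2.0                 # 1.6
```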
3.1-2 Double-Side Band Suppressed Carrier, DSB-SC
Figure 3.15 illustrates the modulation process of the DSB-SC signal: the base band signal and the carrier signal enter a balanced modulator, and the DSB-SC signal is taken from its output.
Figure 3.15 Double-Side Band Suppressed Carrier modulation
The balanced modulator, which consists of two balanced transistors or a bridge of four diodes, is used as a multiplier of its two inputs: the base band signal, Vm(t) = Em Cos ωmt, and the carrier signal, Vc(t) = Ec Cos ωct. The output of the balanced modulator is given by;
Vo(t) = VDSB-SC(t) = Em Cos ωmt × Ec Cos ωct,
VDSB-SC(t) = (Em Ec / 2) Cos (ωc + ωm)t + (Em Ec / 2) Cos (ωc - ωm)t. (3.13)
Figure 3.16 illustrates the DSB-SC modulated signal in the time and frequency domains.
Figure 3.16 (a) DSB-SC modulated signal in the time domain
Figure 3.16 (b) DSB-SC modulated signal in the frequency domain
■ Bandwidth of the DSB-SC signal
BWDSB-SC = (fc + fm) - (fc - fm) = 2 fm Hz. (3.14)
■ Power of the DSB-SC signal
It is calculated as the sum of the powers of the resulting components;
PDSB-SC = PUSB + PLSB = (Em Ec / 2)^2 / 2 + (Em Ec / 2)^2 / 2 = Em^2 Ec^2 / 4. (3.15)
■ Demodulation of the DSB-SC signal
It is required to check the possibility of demodulating the DSB-SC signal to restore the base band signal using either the envelope detector or the coherent detector.
a- Envelope demodulation
The envelope detector cannot be used to restore the base band signal from the DSB-SC signal. With the carrier suppressed, the envelope of VDSB-SC(t) follows |Vm(t)| rather than Vm(t), since the carrier phase reverses at every zero crossing of the message; in addition, the diode requires a minimum of about 0.3 or 0.7 volt to conduct, while the minimum of the DSB-SC envelope is zero. The recovered signal would therefore be distorted (negative peak clipping distortion).
b- Coherent demodulation
The balanced demodulator shown in figure 3.17 is used as a multiplier of its two inputs: the received DSB-SC modulated signal, VDSB-SC(t) = (Em Ec / 2) Cos (ωc + ωm)t + (Em Ec / 2) Cos (ωc - ωm)t, and the coherent carrier signal, Vc(t) = Ec Cos ωct.
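Before carrying out the demodulation step, the DSB-SC bandwidth and power results of equations (3.14)-(3.15) can be checked numerically; the function name and values are illustrative.

```python
def dsb_sc_figures(Em, Ec, fm, R=1.0):
    """Bandwidth and total power of a tone-modulated DSB-SC signal,
    per Eqs. (3.14)-(3.15); each sideband has amplitude Em*Ec/2."""
    bw_hz = 2.0 * fm
    p_one_sideband = (Em * Ec / 2.0) ** 2 / (2.0 * R)
    return bw_hz, 2.0 * p_one_sideband     # total = Em^2 * Ec^2 / 4

bw, p = dsb_sc_figures(Em=2.0, Ec=10.0, fm=4_000.0)   # 8 kHz, 100 W
```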
The output of the balanced demodulator (figure 3.17, followed by a LPF) is given by;
Figure 3.17 DSB-SC coherent demodulation
Vo(t) = VDSB-SC(t) × Vc(t)
= [(Em Ec / 2) Cos (ωc + ωm)t + (Em Ec / 2) Cos (ωc - ωm)t] × Ec Cos ωct
= (Em Ec^2 / 2) Cos ωmt + (Em Ec^2 / 4) Cos (2ωc + ωm)t + (Em Ec^2 / 4) Cos (2ωc - ωm)t. (3.16)
The high frequency terms around 2ωc are removed by the LPF to obtain the required base band signal;
Vm(t) = (Em Ec^2 / 2) Cos ωmt. (3.17)
3.1-3 Single-Side Band Suppressed Carrier, SSB-SC
Figure 3.18 illustrates the modulation process of the SSB-SC signal: a balanced modulator produces the DSB-SC signal, and a filter removes one of the two side bands.
Figure 3.18 Single-Side Band Suppressed Carrier modulation
The balanced modulator is followed by a filter (low pass or high pass about the carrier), to remove one of the two resulting side bands, as shown in figure 3.18, and obtain the SSB-SC modulated signal. With the base band signal Vm(t) = Em Cos ωmt and the carrier signal Vc(t) = Ec Cos ωct, the output of the balanced modulator is;
V'o(t) = VDSB-SC(t) = Em Cos ωmt × Ec Cos ωct
= (Em Ec / 2) Cos (ωc + ωm)t + (Em Ec / 2) Cos (ωc - ωm)t.
The filter output will be;
VSSB-SC(t) = (Em Ec / 2) Cos (ωc ± ωm)t. (3.18)
Figure 3.19 illustrates the SSB-SC modulated signal in the time and frequency domains.
Figure 3.19 (a) SSB-SC modulated signal in the time domain
Figure 3.19 (b) SSB-SC modulated signal in the frequency domain
■ Bandwidth of the SSB-SC signal
BWSSB-SC = (fc + fm) - fc = fc - (fc - fm) = fm Hz. (3.19)
■ Power of the SSB-SC signal
It is calculated as the power of the single transmitted component;
PSSB-SC = PUSB = PLSB = (Em Ec / 2)^2 / 2 = Em^2 Ec^2 / 8. (3.20)
■ Demodulation of the SSB-SC signal
It is required to check the possibility of demodulating the SSB-SC signal to restore the base band signal using either the envelope detector or the coherent detector.
a- Envelope demodulation
The envelope detector cannot be used to restore the base band signal from the SSB-SC signal: for a single-tone message the SSB-SC wave is a constant-amplitude sinusoid at (ωc ± ωm), so its envelope carries no replica of the message, and the diode in any case requires a minimum of about 0.3 or 0.7 volt to conduct. The obtained signal would be completely distorted.
b- Coherent demodulation
The balanced demodulator is used as a multiplier of its two inputs: the received SSB-SC modulated signal, VSSB-SC(t) = (Em Ec / 2) Cos (ωc + ωm)t, and the coherent carrier signal, Vc(t) = Ec Cos ωct. The output of the demodulator shown in figure 3.20 is given by;
Figure 3.20 Demodulation of SSB-SC signal
Vo(t) = VSSB-SC(t) × Vc(t)
= [(Em Ec / 2) Cos (ωc + ωm)t] × Ec Cos ωct
= (Em Ec^2 / 4) Cos ωmt + (Em Ec^2 / 4) Cos (2ωc + ωm)t.
The second term is removed from the demodulator output by the filter, as it is a high frequency component, to obtain the required base band signal;
Vm(t) = (Em Ec^2 / 4) Cos ωmt. (3.21)
3.2 Angle Modulation
This modulation involves varying the angle of the carrier wave according to the base band signal, while the amplitude of the carrier wave is maintained constant. An important feature of angle modulation is that it can provide better discrimination against noise, interference, crosstalk, etc., than amplitude modulation, at the expense of increased transmission bandwidth; such a trade-off is not possible with amplitude modulation. Angle modulation can assume one of two interrelated forms:
■ Frequency Modulation
■ Phase Modulation
3.2-1 Frequency Modulation, FM
In this modulation, the instantaneous carrier frequency is varied about a fixed value in accordance with the modulating signal amplitude, while the carrier amplitude is kept constant. The number of times per second that the instantaneous frequency swings about the average (carrier) frequency, i.e., the rate, is controlled by the frequency of the modulating signal.
The amount by which the frequency departs from the average value is controlled by the amplitude of the modulating signal. This variation is referred to as the frequency deviation of the frequency-modulated wave. We can now establish two clear-cut rules for frequency deviation rate and amplitude in frequency modulation:
Δω(t) ∝ Vm(t), i.e., Δω(t) = kf Vm(t), (3.22)
where kf is known as the frequency modulation sensitivity in rad/sec/volt. This change in carrier frequency bears the information of the base band signal. Assuming the information base band signal is Vm(t) = Em Cos ωmt, while the carrier signal is Vc(t) = Ec Cos ωct, after modulation the instantaneous carrier frequency will be;
ωi = ωc + kf Em Cos ωmt.
The frequency modulated carrier signal will be;
VFM(t) = Ec Cos θi(t), (3.23)
where θi(t) is the instantaneous phase of the modulated carrier signal, obtained from ωi as;
θi = ∫ ωi dt = ∫ (ωc + kf Em Cos ωmt) dt = ωc t + (kf Em / ωm) Sin ωmt.
So,
VFM(t) = Ec Cos [ωc t + (Δω / ωm) Sin ωmt] = Ec Cos [ωc t + βf Sin ωmt], (3.24-a)
which is the general equation of the FM signal, where Δω = kf Em is the carrier frequency deviation due to modulation, and βf = Δω / ωm = Δf / fm is the frequency modulation index. As Cos (A + B) = Cos A Cos B - Sin A Sin B;
VFM(t) = Ec Cos ωc t Cos [βf Sin ωmt] - Ec Sin ωc t Sin [βf Sin ωmt]. (3.24-b)
The FM wave has two classes, narrow band FM (NBFM) and wide band FM (WBFM), depending on the value of the modulation index βf.
a- For NBFM, βf << 1
This is the case of the narrow band frequency modulated signal.
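The deviation and modulation-index definitions above can be sketched numerically, together with the approximate transmission bandwidth 2(Δf + fm) (Carson's rule, which matches the bandwidth expression given later for WBFM). The broadcast-FM style numbers are illustrative, not from the text.

```python
def fm_index(delta_f_hz, fm_hz):
    """beta_f = delta_f / fm, the FM modulation index."""
    return delta_f_hz / fm_hz

def carson_bw_hz(delta_f_hz, fm_hz):
    """Approximate FM bandwidth 2*(beta_f + 1)*fm = 2*(delta_f + fm)."""
    return 2.0 * (delta_f_hz + fm_hz)

# 75 kHz peak deviation with a 15 kHz top audio frequency:
beta = fm_index(75e3, 15e3)        # 5.0 -> clearly wide band FM
bw = carson_bw_hz(75e3, 15e3)      # 180 kHz channel occupancy
```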
Under this condition, we use the approximations;
Cos [βf Sin ωmt] ≈ 1, Sin [βf Sin ωmt] ≈ βf Sin ωmt. (3.25-a)
Then the narrow band frequency modulated signal is;
VNBFM(t) = Ec Cos ωc t - Ec βf Sin ωmt Sin ωc t. (3.25-b)
As Sin A Sin B = (1/2) [Cos (A - B) - Cos (A + B)];
VNBFM(t) = Ec Cos ωc t + (βf Ec / 2) Cos (ωc + ωm)t - (βf Ec / 2) Cos (ωc - ωm)t. (3.25-c)
The 1st term is the carrier signal, the 2nd term is the upper side band, and the 3rd term is the lower side band. In this case the NBFM signal contains the same frequency components as the AM signal, but with an inverted LSB. The spectrum of the NBFM signal is as shown in figure 3.21.
Figure 3.21 (a) FM signal in the time domain
Figure 3.21 (b) NBFM signal in the frequency domain
■ The bandwidth of the NBFM signal is given by;
BWNBFM = (ωc + ωm) - (ωc - ωm) = 2 ωm. (3.26)
■ The power of the NBFM signal is given by;
PNBFM = Ec^2 / 2 + (βf Ec / 2)^2 / 2 + (βf Ec / 2)^2 / 2 = (Ec^2 / 2) [1 + βf^2 / 2]. (3.27)
b- For the wide band FM signal, WBFM, βf ≥ 1
Under this condition, we cannot use the approximations used in the NBFM case. A Bessel function expansion is used to find the terms corresponding to both Cos [βf Sin ωmt] and Sin [βf Sin ωmt], and so obtain the WBFM signal components, where;
Cos [βf Sin ωmt] = J0(βf) + 2 Σ (n even) Jn(βf) Cos n ωmt, (3.28-a)
Sin [βf Sin ωmt] = 2 Σ (n odd) Jn(βf) Sin n ωmt, (3.28-b)
and Jn(βf) is known as the Bessel function of argument βf and order n. Jn(βf) is plotted against βf for various values of n in figure 3.22, and the values of Jn(βf) from table 3.1 are used to calculate the amplitudes of the side frequencies of the FM signal.
Figure 3.22 Bessel function plot
Table 3.1 Bessel function amplitudes
Substituting (3.28-a) and (3.28-b) into (3.24-b) and using the product-to-sum identities gives the wide band FM signal;
VWBFM(t) = Ec J0(βf) Cos ωc t
+ Ec J1(βf) [Cos (ωc + ωm)t - Cos (ωc - ωm)t]
+ Ec J2(βf) [Cos (ωc + 2ωm)t + Cos (ωc - 2ωm)t]
+ Ec J3(βf) [Cos (ωc + 3ωm)t - Cos (ωc - 3ωm)t]
+ Ec J4(βf) [Cos (ωc + 4ωm)t + Cos (ωc - 4ωm)t] + …… (3.28-c)
The 1st term is the carrier signal with amplitude Ec J0(βf); the 2nd term is the 1st pair of side frequencies, with amplitude Ec J1(βf); the 3rd term is the 2nd pair, with amplitude Ec J2(βf); and so on for an infinite number of terms. It is clear that the lower components of the odd-order pairs have a 180° phase shift. The spectrum of the WBFM signal is as shown in figure 3.23.
Figure 3.23 Spectrum of the WBFM signal.
The bandwidth of the wide band FM signal is given by;
BWWBFM = 2 (βf + 1) ωm = 2 (Δω + ωm) rad/sec, (3.29)
where (βf + 1) is the number of significant pairs of side frequencies and ωm is the separation between the side frequencies.
The power of the WBFM signal is calculated by summing the powers of its components;
PWBFM = (Ec^2 / 2) J0^2(βf) + 2 Σ (n ≥ 1) (Ec^2 / 2) Jn^2(βf) = (Ec^2 / 2) [J0^2(βf) + 2 Σ Jn^2(βf)] = Ec^2 / 2. (3.30)
FM techniques are widely used for broadcast applications as they are much less susceptible to noise and interference. By using high carrier frequencies, a large base band bandwidth can be used, enabling quality stereo sound broadcasting.
- The NBFM technique provides high fidelity reception in the presence of noise or interference. It is ideally suited to mobile radio communication applications.
- The WBFM technique is ideal for broadcast applications due to its low susceptibility to noise and interference effects. It is commonly used in FM radio and for the audio portion of TV broadcasting.
Frequency Modulators
A simple method of generating an FM signal is to start with any LC oscillator. The frequency of oscillation is determined by the values of C and L. If a variable capacitance ∆C is connected in parallel with C and the capacitance variation is proportional to the modulating signal, then an FM signal will be obtained. Consider the oscillator circuit shown in figure 3.24.
The frequency of oscillation is given by;
f = 1 / (2π √(LC)). (3.31-a)
When a variable capacitance ∆C (a specially fabricated varactor diode, i.e., a voltage-controlled capacitor) is connected in parallel with C, the frequency of oscillation becomes;
f = 1 / (2π √(L (C + ∆C))). (3.31-b)
The operation of such a circuit is as follows: when the base band signal is applied to the input, the varactor capacitance varies with it as;
∆C = k Vm(t), (3.31-c)
where k is a constant that depends on the PN material and the depletion region width.
Figure 3.24 Frequency modulator, where ∆C = k Vm(t)
The varactor is so designed that the change in capacitance is linear with the change in the applied voltage. This is a special design characteristic of the varactor diode. The varactor must not be forward biased, because it cannot tolerate much current flow; proper circuit design prevents the application of forward bias.
Demodulation of the FM signal
The intelligence to be recovered from the FM signal is not in amplitude variations; it is in the variation of the instantaneous frequency of the carrier. The primary function of FM signal demodulation is to produce a base band signal whose amplitude varies according to the instantaneous frequency of the received FM signal. This can be achieved by converting the FM signal to an AM signal and extracting the base band signal using an envelope detector. One way to achieve the conversion is to use a differentiating circuit. The FM signal may be written;
VFM(t) = Ec Cos [ωc t + kf ∫ Vm(t) dt]. (3.32)
Differentiating this waveform gives;
Vd(t) = - Ec [ωc + kf Vm(t)] Sin [ωc t + kf ∫ Vm(t) dt]. (3.33)
This is an FM signal with an envelope whose magnitude is proportional to the amplitude of the base band modulating signal Vm(t). The base band modulating signal may then be recovered using an envelope detector that ignores the frequency variations of the carrier. Traditionally, a tuned circuit may be used to achieve the differentiation, as shown in figure 3.25-a, followed by the envelope detector shown in figure 3.25-b to restore the base band signal.
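The varactor-tuned oscillator of equations (3.31-a) and (3.31-b) can be sketched as follows; the tank component values are illustrative, not from the text.

```python
import math

def lc_freq_hz(L_h, C_f):
    """Oscillation frequency f = 1 / (2*pi*sqrt(L*C)), per Eq. (3.31-a)."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_h * C_f))

L, C = 1e-6, 100e-12           # 1 uH, 100 pF tank
f_rest = lc_freq_hz(L, C)      # ~15.92 MHz unmodulated carrier
# A positive message voltage adds varactor capacitance dC = k*Vm (3.31-c),
# lowering the instantaneous frequency; a negative voltage raises it.
f_low = lc_freq_hz(L, C + 1e-12)
f_high = lc_freq_hz(L, C - 1e-12)
```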
The tuned circuit is chosen so that the instantaneous frequency of the carrier falls on the sloping part of its response, either above or below its centre frequency. Several types of FM detectors are in use, but the most common are: (1) the slope detector, and (2) the Foster-Seeley discriminator.

■ Slope Detector
Figure 3.25-a shows a tank circuit used as a frequency-to-amplitude converter, with components selected so that its resonant frequency, at point 4 in figure 3.25-c, is higher than the frequency of the FM carrier signal at point 2. The entire frequency deviation of the FM signal falls on the lower slope of the tuned-circuit (band-pass) response curve, between points 1 and 3; hence the name slope detector. When the FM signal is applied to the tank circuit, as shown in figure 3.25-a, the amplitude of its o/p signal varies as its frequency swings between −Δf (point 1) and +Δf (point 3). Frequency variations will still be present in this waveform, but it will also develop amplitude variations, because the response of the tank circuit varies with the i/p frequency. This signal is then applied to a diode detector, as shown in figure 3.25-b, and the detected waveform is the o/p base band signal.

Figure 3.25 Slope detector circuit and characteristics

This circuit has the major disadvantage that any amplitude variations in the rf waveform will pass through the tank circuit and be detected. This disadvantage can be eliminated by placing a limiter circuit before the tank input. The circuit is otherwise the same as an AM detector with the tank tuned to a higher or lower frequency than the received carrier. Also, the linearity of this single tuned-circuit curve is limited to a relatively small frequency range, so the resulting base band signal may be distorted. Introducing a second tuned circuit with a slightly different resonance frequency can extend the linear range of the detector response.
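The FM-to-AM conversion on the slope can be illustrated with the normalized response of a single tuned circuit, 1/√(1 + (2Q(f − fr)/fr)²). The resonance frequency, carrier offset, and Q below are assumed, illustrative values, not taken from the text:

```python
from math import sqrt

def tank_response(f, fr, Q):
    """Normalized amplitude response of a single tuned circuit near resonance."""
    return 1.0 / sqrt(1.0 + (2.0 * Q * (f - fr) / fr) ** 2)

# Assumed example: resonance at 10.8 MHz, FM carrier at 10.7 MHz, Q = 50
fr, fc, Q = 10.8e6, 10.7e6, 50
lo  = tank_response(fc - 25e3, fr, Q)   # carrier swung -25 kHz (point 1)
mid = tank_response(fc,        fr, Q)   # carrier at rest    (point 2)
hi  = tank_response(fc + 25e3, fr, Q)   # carrier swung +25 kHz (point 3)
# On the lower slope the output amplitude tracks the instantaneous frequency:
print(lo < mid < hi)   # True
```

The monotonic amplitude change over the deviation range is what the diode detector then rectifies into the base band signal.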
A circuit known as the Foster-Seeley discriminator is used for this purpose.

Foster-Seeley Discriminator
Figure 3.26-a shows a typical Foster-Seeley discriminator. It has two tuned tank circuits, L1C1 and L2C2, both with a high Q factor and tuned to resonate at the same centre (carrier) frequency fc of the received FM signal. Its advantage is that it has a larger linear range of operation than the slope detector. It uses a double-tuned rf transformer T1 to convert frequency variations in the received FM signal to amplitude variations. The output voltage varies in both amplitude and polarity as the i/p signal varies in frequency. The discriminator response curve, known as the S-curve, is shown in figure 3.26-b.

Figure 3.26 Foster-Seeley discriminator and its characteristics (b: S-curve)

The collector tank circuit L1C1 of the preceding limiter/amplifier stage limits the amplitude of the signal. This limiting keeps interfering noise low by removing excessive amplitude variations from the signal. The inductance L1 is mutually coupled to the centre-tapped inductor L2 of the secondary tank circuit L2C2. The centre tap is connected by a coupling capacitor to the collector of the limiter transistor. A radio-frequency choke L3 (RFC, a high-valued inductance) connects the centre tap to the "neutral" node of two identical branches, each consisting of a diode in series with a parallel resistance-capacitance combination (CR1–R3–C3 and CR2–R4–C4). L3, connected across these branches, provides the dc return path for the diode rectifiers and makes the circuit symmetrical. R3 and R4 are not always necessary, but are usually used when the reverse-biased resistances of the two diodes differ. The load resistors R3 and R4 are bypassed by C3 and C4 to remove rf ripple, and an output coupling capacitor delivers the recovered signal.
Discriminator Operation
The operation can be explained using the vector diagram shown in figure 3.26-c, which shows the phase relationships between the voltages and currents in the circuit. Before proceeding to the analysis of the discriminator circuit, it is convenient, for simplicity, to make three assumptions:
(1) The impedance of the coupling capacitance C8 is small enough to be considered a short-circuit at the frequency of operation.
(2) The impedance of the RFC L3 is an open-circuit at the high (carrier) frequencies near fc, but a short-circuit at the low (audio) frequencies.
(3) The neutral node of the envelope detectors can be considered grounded, since the secondary circuit, including the envelope detectors, is symmetric.
The frequency fi of the received FM signal varies about the resonance frequency fr according to the value of Δf (the frequency deviation). This produces variable voltages applied to the envelope detector circuits: the amplitude of the voltage appearing across the i/p of each envelope detector varies in proportion to Δf.

Figure 3.26-c Phasor (vector) diagram of Foster-Seeley discriminator operation

At resonance
Consider the case when the i/p (received) signal frequency equals the carrier frequency, fi = fc (Δf = zero), which is also the centre frequency of the resonant tank circuit. The i/p signal applied to the primary tank circuit is shown as vector ep = V1. Since the coupling capacitor has negligible reactance at the i/p frequency, the rf choke is effectively in parallel with the primary tank circuit, and the i/p voltage ep also appears across it. With voltage ep applied to the primary of T1, a voltage is induced in the secondary, which causes current to flow in the secondary tank circuit. When the i/p frequency equals the centre frequency, the tank is at resonance and acts resistively. Current and voltage are in phase in a resistive circuit, as shown by is and ep.
The current flowing in the tank causes voltage drops across each half of the balanced secondary winding of transformer T1. These voltage drops are of equal amplitude and opposite polarity with respect to the centre tap of the winding. Because the winding is inductive, the voltage across it is 90° out of phase with the current through it. Because of the centre-tap arrangement, the voltages at each end of the secondary winding of T1 are 180° out of phase with each other, and are shown as e1 and e2 on the vector diagram. The two diodes conduct on opposite half cycles of the input waveform. The voltage applied to the anode of CR1 is the vector sum of ep and e1, shown as e3 on the diagram. Likewise, the voltage applied to the anode of CR2 is the vector sum of ep and e2, shown as e4 on the diagram. At resonance, equal anode voltages on the two diodes produce equal currents and, with equal load resistors, equal and opposite voltages are developed across R3 and R4. The o/p is taken across R3 and R4 in series; since e3 and e4 are equal, as shown by vectors of the same length, the total o/p is zero at resonance, because the two load voltages are equal and of opposite polarity.

Above resonance
A phase shift occurs when an i/p frequency higher than the centre frequency, fi = fc + Δf, is applied to the discriminator circuit, and the current and voltage phase relationships change. When a series-tuned circuit operates at a frequency above resonance, the inductive reactance of the coil increases and the capacitive reactance of the capacitor decreases: above resonance the tank circuit acts like an inductor. The secondary current then lags the primary tank voltage ep. Notice that the secondary voltages e1 and e2 are still 90° out of phase with the current is that produces them. The change to a lagging secondary current rotates the vectors in a clockwise direction. This causes e1 to become more nearly in phase with ep, while e2 is shifted further out of phase with ep.
The vector sum of ep and e2 is then less than that of ep and e1. Thus, above the centre frequency, diode CR1 conducts more heavily than diode CR2. Because of this heavier conduction, the voltage developed across R3 is greater than the voltage developed across R4, and the output voltage is positive.

Below resonance
When the i/p frequency is lower than the centre frequency, the current and voltage phase relationships change again. When the tuned circuit is operated at a frequency below resonance, the capacitive reactance increases and the inductive reactance decreases: below resonance the tank acts like a capacitor, and the secondary current leads the primary tank voltage ep. This change to a leading secondary current rotates the vectors in a counter-clockwise direction. From the vector diagram you can see that e2 is brought nearer in phase with ep, while e1 is shifted further out of phase with ep. The vector sum of ep and e2 is now larger than that of ep and e1. Diode CR2 conducts more than diode CR1 below the centre frequency. The voltage drop across R4 is larger than that across R3, and the o/p across both is negative.

3.2-2 Phase Modulation, PM
The base band signal is applied to a phase modulator, where the instantaneous carrier phase is varied in accordance with the modulating signal amplitude, as shown in figure 3.27. The carrier amplitude is kept constant. The amount of carrier phase modulation is proportional to the amplitude of the modulating signal. Since frequency is determined by the time period per cycle, such a phase shift in the carrier causes its frequency to change. The frequency change is vital in FM, but in PM it is merely incidental: the amount of frequency change has nothing to do with the resultant modulated wave shape in PM. At this point the comparison of FM to PM may seem a little hazy, but it will clear up as we progress.

Figure 3.27 Phase modulation.
θ(t) α Vm(t), i.e. θ(t) = kp Vm(t)  (3.34)

where kp is known as the phase-modulation sensitivity in rad/volt. This change in carrier phase by ±θ bears the information of the base band signal. A phase-modulated signal has high immunity against noise, interference, crosstalk, etc.

Assume the information base band signal is Vm(t) = Em Cos ωmt, while the carrier signal is Vc(t) = Ec Cos ωct. After modulation, the instantaneous carrier phase will be:

θi = ωct ± kp Vm(t)  (3.35)

The phase-modulated carrier signal will be:

VPM(t) = Ec Cos θi  (3.36)

VPM(t) = Ec Cos[ωct + kp Em Cos ωmt] = Ec Cos[ωct + βp Cos ωmt]  (3.37)

which is the general equation of the PM signal, where βp = kp Em is the phase-modulation index. As Cos(A + B) = Cos A Cos B − Sin A Sin B,

VPM(t) = Ec Cos ωct Cos[βp Cos ωmt] − Ec Sin ωct Sin[βp Cos ωmt]  (3.38)

This has the same form as the FM signal, with a different modulation index and with Sin ωmt replaced by Cos ωmt. The PM wave has two bands of frequencies: Narrow band, giving the narrow band PM (NBPM) wave, and Wide band, giving the wide band PM (WBPM) wave. Which case applies depends on the value of the modulation index βp.

a- Narrow band PM signal, NBPM, βp << 1
This is the case of the narrow band phase-modulated signal. Under this condition, we use the approximations:

Cos[βp Cos ωmt] ≈ 1,  Sin[βp Cos ωmt] ≈ βp Cos ωmt  (3.39-a)

Then the narrow band phase-modulated signal is:

VNBPM(t) = Ec Cos ωct − βp Ec Sin ωct Cos ωmt  (3.39-b)

As Sin A Cos B = ½[Sin(A + B) + Sin(A − B)],

VNBPM(t) = Ec Cos ωct − (βp Ec/2) Sin(ωc + ωm)t − (βp Ec/2) Sin(ωc − ωm)t  (3.39-c)

The 1st term is the carrier signal, the 2nd term is the upper side band, and the 3rd term is the lower side band. In this case the NBPM signal contains the same frequency components as the AM signal, but with the USB and LSB inverted (phase-shifted). The spectrum of the NBPM signal is as shown in figure 3.28.
Figure 3.28 Phase-modulated signal in the time domain (a) and the NBPM signal in the frequency domain (b)

The bandwidth of the NBPM signal is given by:

B_NBPM = (ωc + ωm) − (ωc − ωm) = 2ωm  (3.40)

The power of the NBPM signal is given by:

P_NBPM = Ec²/2 + βp²Ec²/8 + βp²Ec²/8 = (Ec²/2)[1 + βp²/2]  (3.41)

b- Wide band PM signal, WBPM, βp ≥ 1
This is the case of the wide band phase-modulated signal. Under this condition we cannot use the approximations used in the NBPM case. A Bessel-function expansion is used to find the terms corresponding to Cos[βp Cos ωmt] and Sin[βp Cos ωmt], and hence the WBPM signal components:

Cos[βp Cos ωmt] = J0(βp) + 2 Σn (−1)^n J2n(βp) Cos 2nωmt
Sin[βp Cos ωmt] = 2 Σn (−1)^n J2n+1(βp) Cos(2n + 1)ωmt  (3.42-a)

Jn(βp) is the Bessel function of argument βp and order n. Jn(βp) is plotted against βp for various values of n, and tabulated values of Jn(βp) are used to calculate the amplitudes of the side frequencies of the PM signal, as given for the WBFM signal discussed before. The wide band PM signal is given by:

VWBPM(t) = Ec J0(βp) Cos ωct
− Ec J1(βp)[Sin(ωc + ωm)t + Sin(ωc − ωm)t]
− Ec J2(βp)[Cos(ωc + 2ωm)t + Cos(ωc − 2ωm)t]
+ Ec J3(βp)[Sin(ωc + 3ωm)t + Sin(ωc − 3ωm)t]
+ Ec J4(βp)[Cos(ωc + 4ωm)t + Cos(ωc − 4ωm)t] − …  (3.42-b)

The 1st term is the carrier signal with amplitude Ec J0(βp), the 2nd term is the 1st pair of side frequencies with amplitude Ec J1(βp), the 3rd term is the 2nd pair with amplitude Ec J2(βp), and so on for an infinite number of terms. Note that the odd-order side-frequency pairs appear as Sin terms, i.e. with a 90° phase shift. The spectrum of the WBPM signal is as shown in figure 3.29.
Figure 3.29 Spectrum of the WBPM signal

The bandwidth of the wide band PM signal is given by:

B_WBPM = 2(n + 1)ωm = 2(βp + 1)ωm rad/sec  (3.43)

where (n + 1) is the number of pairs of side frequencies and ωm is the separation between the side frequencies. The power of the WBPM signal is calculated by summing the powers of its components:

P_WBPM = (Ec²/2)[J0²(βp) + 2 Σn Jn²(βp)] = Ec²/2  (3.44)

It is possible to use a phase modulator to produce an FM signal by integrating the base band signal before applying it to the phase modulator, as shown in figure 3.30-a. Also, it is possible to use a frequency modulator to produce a PM signal by differentiating the base band signal before applying it to the frequency modulator, as shown in figure 3.30-b.

Figure 3.30-a Integrator followed by a phase modulator
Figure 3.30-b Differentiator followed by a frequency modulator

As Vm(t) = Em Cos ωmt, ∫ Vm(τ) dτ = (Em/ωm) Sin ωmt. Using a phase modulator:

VPM(t) = Ec Cos[ωct + kp (Em/ωm) Sin ωmt] = Ec Cos[ωct + βf Sin ωmt] = VFM(t)  (3.45)

where βf = kp Em/ωm.

Similarly, as Vm(t) = Em Sin ωmt, dVm(t)/dt = ωm Em Cos ωmt. Using a frequency modulator:

VFM(t) = Ec Cos[ωct + kf ∫ ωm Em Cos ωmτ dτ] = Ec Cos[ωct + kf Em Sin ωmt] = VPM(t)  (3.46)

where βp = kf Em.

Analogue phase modulation does not appear to have any practical applications. When the message signal is digital, however, PM has distinct advantages such as improved immunity to noise. PM can transmit digital data very efficiently, since each phase change can encode multiple bits, and it has high spectral efficiency, transmitting more digital data per unit bandwidth than other modulation techniques. It is used for satellite and terrestrial data communication applications.
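Equations (3.43) and (3.44), and their FM counterparts (3.29) and (3.30), can be checked numerically. The sketch below evaluates Jn(β) from its power series; the modulation index and modulating frequency are assumed, illustrative values:

```python
from math import factorial

def bessel_jn(n, x, terms=30):
    """Bessel function of the first kind, J_n(x), from its power series."""
    return sum((-1) ** k / (factorial(k) * factorial(k + n)) * (x / 2) ** (2 * k + n)
               for k in range(terms))

beta_p, fm = 5.0, 15e3          # assumed modulation index and modulating frequency
n_pairs = int(beta_p) + 1       # significant side-frequency pairs, the (n + 1) of eq. (3.43)
bandwidth = 2 * n_pairs * fm    # eq. (3.43), expressed in Hz rather than rad/sec

amps = [bessel_jn(n, beta_p) for n in range(9)]   # carrier + 8 side-frequency pairs
power_fraction = amps[0] ** 2 + 2 * sum(a ** 2 for a in amps[1:])
print(bandwidth)                 # 180000.0
print(round(power_fraction, 2))  # 1.0 -- eq. (3.44): total power stays Ec^2/2
```

The side-frequency powers sum back to the unmodulated carrier power (normalized here to 1): angle modulation redistributes power across the spectrum without changing its total.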
Phase Modulator
One circuit that can cause carrier phase variation (shifting) is shown in figure 3.31-a for a sine wave.

Figure 3.31-a Phase modulator for a sine wave.

The capacitor in series with the resistor forms a phase-shift circuit. With a constant rf carrier frequency applied at the i/p, the o/p across the resistor would be 45° out of phase with the i/p if XC = R. Now, let us vary the resistance and observe how the o/p is affected, as in figure 3.31-b. As the resistance rises above 10 times XC, the phase difference between i/p and o/p approaches 0°; for all practical purposes, the circuit is then resistive. As the resistance is decreased to 1/10 the value of XC, the phase difference approaches 90°; the circuit is then almost completely capacitive.

Figure 3.31-b Variable-resistance phase modulator

3.3 Analogue Pulse Modulation
The objective of pulse modulation is to transfer narrow band analogue information, such as a phone call, over a wideband low-pass channel as a two-level signal. It is a system in which continuous waveforms are sampled at regular intervals, and information regarding the signal is transmitted only at the sampling times. It involves modulating a carrier that is a train of pulses. Pulse modulation is not, in itself, a digital but an analogue technique: although the carrier consists of discrete pulses, the modulating wave is continuous. At the receiving end, the original waveforms may be reconstituted from the information regarding the samples, provided the samples are taken frequently enough. Pulse modulation may be subdivided broadly into two categories, analogue and digital. In the former, pulse analogue modulation, the base band information signal modulates (varies) one of the pulse-carrier parameters, while in the latter a code indicating the sample amplitude to the nearest predetermined level is sent.
With analogue pulse modulation, the amplitude of a base band information signal S(t) may modulate (vary): the pulse amplitude, to produce Pulse Amplitude Modulation (PAM); the pulse width (duration), to give Pulse Width Modulation (PWM); or the time delay between pulses in a sequence, to give Pulse Position Modulation (PPM). PAM also represents an intermediate stage in the generation of digitally modulated signals: the samples of the PAM signal may be quantized and coded to produce Pulse Code Modulation (PCM), Differential Pulse Code Modulation (DPCM), Delta Modulation (DM), or Adaptive Delta Modulation (ADM). Pulse modulation may also be an end in itself: these schemes can be used in their own right in analogue communication systems, allowing, for example, many separate information-carrying signals to share a single physical channel by interleaving the individual signal pulses, a technique called Time Division Multiplexing (TDM).

3.3-1 Pulse Amplitude Modulation, PAM
The first step in digitizing an analogue waveform is to establish a set of discrete times at which the input signal waveform is sampled, at a constant sampling frequency fS. If the samples occur often enough, the original waveform can be completely recovered from the sample sequence using a low-pass filter. These basic concepts are illustrated in figure 3.32, with sample height α S(t). Notice that the sampling process is equivalent to amplitude modulation of a constant-amplitude pulse train. The message information is encoded in the amplitude of a series of signal pulses. Hence the technique represented in figure 3.32 is usually referred to as PAM, because the successive output intervals can be described as a sequence of pulses with amplitudes derived from the input waveform.

a- Sampling
Sampling is the conversion of a continuous (analogue) signal to a discrete (not continuous in time) signal, the first step in converting it to a digital one. A sample refers to a value at a point in time.
The sampling interval TS is the interval at which sampling is performed: the value of the continuous signal is measured every TS seconds. A sampler is a subsystem that extracts samples from a continuous signal. A theoretical ideal sampler (a transistor working as a switch), as shown in figure 3.32, multiplies the continuous signal S(t) to be sampled by a pulse train P(t). The product of the two signals, obtained when S(t) forms the input to a gating circuit, is known as natural-sampling PAM, where the pulse tops follow the variations of the signal being sampled. If the pulse train consists of narrow pulses, the output of the multiplier is a sampled version of the original waveform that depends on the sample values of the input.

b- Nyquist Sampling Rate (Criterion)
A classical result in sampling systems was established in 1933 by Harry Nyquist, who derived the minimum sampling frequency required to extract all information in a continuous, time-varying waveform.

Figure 3.32 Sampling process

This result, the Nyquist criterion, is defined by the relation:

fS ≥ fN  (3.47-a)

where fS = sampling frequency = 1/TS in samples per second, and fN = Nyquist frequency.

The sampling theorem considers a band-limited signal having no spectral components above fm Hz (the maximum frequency of the signal to be sampled; the Fourier transform of the function of time is zero for f > fm). The Fourier transform S(f) of S(t) therefore cuts off at fm, as shown in figure 3.33.

Figure 3.33 Representation of S(f)

If the values of the sampled function are known for t = nTS (for all integer values of n), then the function is known exactly for all values of t, and can be determined uniquely from values sampled at uniform intervals of TS ≤ 1/(2fm) sec. The samples must be "close enough" to each other to capture all of the information: the restriction is that the spacing TS between samples be less than 1/(2fm).
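The sampling theorem just stated can be sketched numerically: a 3-Hz sinusoid sampled at 20 samples/s (well above 2fm = 6 Hz) can be rebuilt at any instant by ideal low-pass (sinc) interpolation of its samples. The frequencies and record length below are assumed, illustrative values:

```python
from math import cos, pi, sin

fm, fs = 3.0, 20.0        # assumed: 3 Hz base band, sampled well above 2*fm = 6 Hz
Ts = 1.0 / fs

def sinc(x):
    return 1.0 if x == 0 else sin(pi * x) / (pi * x)

def reconstruct(samples, t):
    """Ideal low-pass reconstruction: sinc-weighted sum of the samples."""
    return sum(s * sinc((t - n * Ts) / Ts) for n, s in enumerate(samples))

samples = [cos(2 * pi * fm * n * Ts) for n in range(400)]   # 20 s of samples
# Compare the reconstruction with the original signal near the middle of the record
# (away from the truncation error at the edges of the finite sample set):
errs = [abs(reconstruct(samples, 10.0 + i * 0.01) - cos(2 * pi * fm * (10.0 + i * 0.01)))
        for i in range(50)]
print(max(errs) < 0.05)   # True: the wave is recovered from its samples
```

A practical system replaces the ideal sinc interpolator with a realizable low-pass filter, which is why some oversampling margin is needed, as discussed below.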
The upper limit of TS, 1/(2fm), is known as the Nyquist sampling interval, and the corresponding frequency (half the sampling rate) is called the Nyquist frequency fN of the sampling system. Expressing the upper limit on TS in a more meaningful way, through the sampling frequency (pulse rate), the restriction becomes:

fS ≥ 2 fm (= fN)  (3.47-b)

Thus the minimum sampling frequency must be at least twice the highest frequency of the signal being sampled. For example, a voice signal with a maximum frequency of 4 kHz must be sampled at least 8000 times per second (which is the sampling rate used by nearly all telephony systems) to comply with the conditions of the sampling theorem. Note that the higher the frequency fm, the faster the function varies, and the closer together the sample points must be in order to permit reconstruction of the function.

Since the multiplying pulse train is assumed to be periodic, it can be expanded in a Fourier series. The P(t) shown in figure 3.32 is an even function, so using the trigonometric series:

SS(t) = S(t) P(t) = S(t)[ao + Σn an Cos nωst] = ao S(t) + Σn an S(t) Cos nωst  (3.48)

The goal is to isolate the first term in the final expression of (3.48), which is proportional to the original signal S(t). Each of the terms in the summation of equation (3.48) is of the form of S(t) multiplied by a sinusoid. When a time signal is multiplied by a sinusoid, the result is a shift of all frequencies of the signal by an amount equal to the frequency of the sinusoid. The frequency content of each term in equation (3.48) is therefore centred around the frequency of the multiplying sinusoid (the carrier frequency). The Fourier transform of the sampled wave SS(t) is found by periodically shifting and repeating the Fourier transform of the original signal. It is sketched in figure 3.34.
Figure 3.34 Fourier transform of the naturally sampled wave

The shape centred at the origin is the transform of aoS(t), and the shifted versions represent the transforms of the various harmonic terms; these do not overlap provided that fs > 2 fm. The terms can then be separated from each other using linear filters, and the original signal can be recovered from the sampled waveform SS(t) using a low-pass filter with a cutoff frequency of fm to recover the aoS(t) term.

c- Errors in Sampling
The sampling theorem indicates that S(t) can be perfectly recovered from its samples when the Nyquist criterion is met. Error results if the sampling rate is not high enough (sampling occurs at too slow a rate; the base band is under-sampled): it is then impossible to recover the original base band exactly, even with ideal rectangular low-pass filtering. The original signal has frequency components above one-half of the sampling rate, so the lower side band of the sampling frequency fS overlaps (appears within) the base band and is taken to be part of it. The base band is irretrievably corrupted, and the details of the original analogue waveform are lost. As shown in figure 3.35-c and figure 3.36 for a base band signal sampled at only 1.6 fm (0.8 fN), this error is known as aliasing. It is a serious problem, avoided by keeping to the Nyquist criterion.

Figure 3.35 The Nyquist criterion

The spectrum in figure 3.36 indicates that the lower side frequency of fS is at 0.6 fm (i.e., 1.6 fm − fm). The waveform indicates how samples of fm also fit fs − fm.
The shorter-wavelength sinusoid is the original fm itself, and the samples are positioned at sampling intervals TS related to the base band period Tm by:

fs = 1/TS = 1.6 fm = 1.6/Tm, therefore TS = Tm/1.6 = (5/8) Tm

Sampling at exactly the Nyquist frequency makes the two bands just touch, as shown in figure 3.35-b, so that whilst, theoretically, an ideal filter could recover the base band, such a filter does not, of course, exist.

Figure 3.36 Waveform and spectra showing aliasing: (a) spectrum of a signal sampled below fN (at fs = 0.8 fN); (b) waveform showing how samples of fm also fit fs − fm (TS = (5/8)Tm, T(s−m) = (5/3)Tm)

In reality, to handle this problem (minimize or avoid aliasing error) as gracefully as possible, one has to sample sufficiently above fN to allow a gap for filtering of at least 0.2 fN, as shown in figure 3.35-a. Also, most input analogue signals are filtered (band-limited) with an anti-aliasing filter (usually a low-pass filter with a cutoff frequency near the Nyquist frequency fs/2) immediately before the sampling circuit. Whilst this filter may remove high-frequency energy from the information signal, the resulting distortion is generally less than that introduced if the same energy were aliased to incorrect frequencies by the sampling process.

Analysis of aliasing is most easily performed in the frequency domain, but figure 3.37 illustrates a simple example of aliasing in the time domain. A sinusoid at a frequency of 3 Hz is shown. Suppose we sample this sinusoid at four samples per second. The sampling theorem tells us that the minimum sampling rate for unique recovery is six samples per second, so four samples per second is not fast enough. The samples at the slower rate are indicated in the figure. But alas, these are the same samples that would result from a sinusoid at 1 Hz, as shown by the dashed curve. The 3-Hz signal is thus disguising itself (aliasing) as a 1-Hz signal.
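The 3-Hz/1-Hz example of figure 3.37 is easy to confirm numerically: sampled at 4 samples per second, the two sinusoids produce identical sample sequences.

```python
from math import cos, pi

fs = 4.0   # samples per second -- below the 6-sample/s minimum for a 3 Hz tone

s3 = [cos(2 * pi * 3.0 * n / fs) for n in range(12)]   # sampled 3 Hz sinusoid
s1 = [cos(2 * pi * 1.0 * n / fs) for n in range(12)]   # sampled 1 Hz sinusoid

print(all(abs(a - b) < 1e-9 for a, b in zip(s3, s1)))  # True: indistinguishable
```

No processing after the sampler can tell the two apart, which is why the band-limiting must happen before sampling.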
Figure 3.37 Example of aliasing

The filtered signal will then be, approximately, that of the original signal S(t), but with the frequency components above fs/2 folded back into the frequency band of interest: they actually appear below fs/2. The spectral components originally representing high frequencies now appear under the alias of lower frequencies. Thus, in figure 3.37, the sampling represents a high-frequency component (the 3-Hz signal) folded back to fall at the lower-frequency sinusoid (1 Hz). This is known as fold-over distortion. In essence, fold-over distortion produces frequency components in the desired frequency band that did not exist in the original waveform. The sampling frequency may be chosen as fs ≥ 2.2 fm to allow for the transition, or roll-off, into the filter stop band.

Figure 3.38 shows a PAM technique, where an analogue waveform is sampled at a constant sampling frequency fS at the transmitting side, producing a sequence of pulses with amplitudes derived from the input waveform. The base band signal is reconstructed using a low-pass filter.

Figure 3.38 Pulse amplitude modulation

Assume the base band analogue signal is level-shifted so that no part of it is negative, making all samples positive:

S(t) = Em(1 + Cos ωmt)  (3.49)

and that the sampling function has a constant-amplitude spectrum with pulse duration τ and sampling interval TS such that τ << TS, so:

P(t) = (τ/TS)(1 + 2 Cos ωst + 2 Cos 2ωst + 2 Cos 3ωst + …)  (3.50)

The sampled function will be:

VPAM(t) = Em(1 + Cos ωmt)(τ/TS)(1 + 2 Cos ωst + 2 Cos 2ωst + 2 Cos 3ωst + …)
= (Em τ/TS)(1 + Cos ωmt + 2 Cos ωst + 2 Cos ωst Cos ωmt + 2 Cos 2ωst + 2 Cos 2ωst Cos ωmt + …)
= (Em τ/TS)[1 + Cos ωmt + 2 Cos ωst + Cos(ωs − ωm)t + Cos(ωs + ωm)t + 2 Cos 2ωst + Cos(2ωs − ωm)t + Cos(2ωs + ωm)t + …]
(3.51)

which is a dc level, plus the original base band, plus a set of 100% amplitude-modulated components at the sampling frequency and its harmonics.

Figure 3.39 demonstrates the aliasing that occurs in speech if a 5.5-kHz signal is sampled at an 8-kHz rate.

Figure 3.39 Aliasing of a 5.5-kHz signal into a 2.5-kHz signal

Notice that the sample values are identical to those obtained from a 2.5-kHz input signal. Thus, after the sampled signal passes through the 4-kHz output filter, a 2.5-kHz signal arises that did not come from the source. This example illustrates that the input must be band-limited, before sampling, to remove frequency terms greater than ½ fS, even if these frequency terms are ignored (i.e., are inaudible) at the destination. Thus a complete PAM system, shown in figure 3.40, must include a band-limiting filter before sampling to ensure that no spurious or source-related signals get folded back into the desired signal bandwidth. The input filter of a voice codec may also be designed to cut off very low frequencies to remove 60-Hz hum from power lines.

Figure 3.40 shows the signal being recovered by a sample-and-hold circuit that produces a staircase approximation to the sampled waveform. With the staircase approximation, the power level of the signal coming out of the reconstructive filter is nearly the same as the level of the sampled input signal. The band-limiting and reconstructive filters shown in figure 3.40 are implied to have ideal characteristics.

Figure 3.40 End-to-end PAM system

Since ideal filters are physically unrealizable, a practical implementation must consider the effects of non-ideal implementations. Filters with realizable attenuation slopes at the band edge can be used if the input signal is slightly over-sampled.
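The folded (alias) frequency can be computed for any input tone: it is the distance from the tone to the nearest integer multiple of the sampling frequency. A minimal sketch:

```python
def alias_frequency(f, fs):
    """Apparent (folded) frequency of a tone at f Hz after sampling at fs Hz."""
    return abs(f - fs * round(f / fs))

print(alias_frequency(3.0, 4.0))   # 1.0 -- the 3 Hz tone of figure 3.37
```

The same function predicts where any out-of-band speech component lands inside the base band after sampling, which is what the band-limiting filter is there to prevent.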
As indicated in figure 3.32, when the sampling frequency fS is somewhat greater than twice the bandwidth, the spectral bands are sufficiently separated from each other that filters with gradual roll-off characteristics can be used. As an example, sampled voice systems typically use band-limiting filters with a 3-dB cutoff around 3.4 kHz and a sampling rate of 8 kHz. Thus the sampled signal is sufficiently attenuated at the overlap frequency of 4 kHz to adequately reduce the energy level of the fold-over spectrum. By interleaving the samples from multiple sources, PAM systems can be used to share a transmission facility in a time-division-multiplex manner. As previously mentioned, PAM systems are not generally useful over long distances, owing to the vulnerability of the individual pulses to noise, distortion, inter-symbol interference, and crosstalk. Instead, for long-distance transmission the PAM samples are converted into a digital format, thereby allowing the use of regenerative repeaters to remove transmission imperfections before errors result. Pulse amplitude modulation on its own is now rarely used, having been largely superseded by pulse code modulation and, more recently, by pulse position modulation.

3.3-2 Pulse Width Modulation, PWM
PWM of a signal or power source involves the modulation of its duty cycle, either to convey information over a communications channel or to control the amount of power sent to a load. Figure 3.41 shows an unmodulated pulse train, a representative information signal S(t), and the resulting pulse-width-modulated (PWM) waveform, where sample width α S(t): the width of each pulse varies in accordance with the instantaneous sample height of S(t). That is, τ α ES, where τ is the pulse width whose duty cycle is modulated, resulting in variation of the average value of the waveform.
Figure 3.41 Pulse Width Modulation

The larger the sample value, the wider is the corresponding pulse. The transmitted signal pulses are all of the same height. Since the pulse width is not constant, the power of the waveform is also not constant. Thus, as the amplitude of the signal increases, the power transmitted also increases. Finding the Fourier transform of the PWM waveform is a complex computational task, as PWM is a nonlinear form of modulation, while PAM is a linear form of modulation. An example of PWM: the supply voltage modulated as a series of pulses results in a sine-like flux density waveform in the magnetic circuit of an electromagnetic actuator. The smoothness of the resultant waveform can be controlled by the width and number of modulated impulses (per given cycle). Consider a single-sinusoid base band with;

τ = τo (1 + Cos ωmt) (3.52)

VPWM = (τ / TS) [1 + 2 Cos ωst + 2 Cos 2ωst + 2 Cos 3ωst + ...]

VPWM = (τo / TS)(1 + Cos ωmt)[1 + 2 Cos ωst + 2 Cos 2ωst + 2 Cos 3ωst + ...]

VPWM = (τo / TS)[1 + Cos ωmt + 2 Cos ωst + 2 Cos ωmt Cos ωst + 2 Cos 2ωst + 2 Cos ωmt Cos 2ωst + ...]

VPWM = (τo / TS)[1 + Cos ωmt + Cos (ωs − ωm)t + 2 Cos ωst + Cos (ωs + ωm)t + Cos (2ωs − ωm)t + 2 Cos 2ωst + Cos (2ωs + ωm)t + ...] (3.53)

This resulting spectrum is the same as the PAM spectrum in figure 3.32, but only because the simplified version of S(t) is used for τ ≪ TS. It contains a dc component and still has the original base band modulating signal as a separate term that is easy to recover, plus phase-modulated carriers at each harmonic of the pulse repetition frequency. The amplitudes of the harmonic groups are restricted by a (sin x)/x envelope (sinc function) and extend to infinity. Figure 3.42 illustrates an example of the generation of the PWM waveform. PAM and PWM are related to each other, and it is possible to construct systems that convert from one to another.
We can use a sawtooth generator to convert between time and amplitude. The sawtooth waveform we employ is shown in figure 3.43, and the conversion process is illustrated in figure 3.42. Figure 3.42-a shows a block diagram of the generator, and figure 3.42-b shows typical waveforms. The information signal S(t), which is normalized to lie between 0 and 1, is put through a sample-and-hold circuit to yield S1(t). The sawtooth is shifted down by one unit in order to form S2(t). The sum of S1(t) and S2(t) is S3(t). The times for which S3(t) is positive represent intervals whose widths are proportional to the original signal sample values. The shifted sawtooth is put into a comparator with output 1 for positive input and 0 for negative input. This results in S4(t), the PWM waveform. The range of pulse widths can be adjusted by scaling the original function of time. Since the heights of the pulses in PWM are constant but the widths depend on S(t), the power of the PWM waveform varies with the amplitude of S(t). This reduces the efficiency of the communication system, since the pulse amplitudes must be chosen to ensure that the maximum power does not exceed that permitted by the system. PWM is disturbed by noise less than PAM is; it uses a larger bandwidth to achieve this better SNR.

Figure 3.42 PWM generation

Figure 3.43 Sawtooth waveform for PWM-to-PAM conversion

PWM Applications

PWM is particularly attractive in analogue remote-control applications because the reconstructed analogue control signal can easily be obtained by integrating the transmitted PWM signal to obtain a PAM signal, which is then passed through a LPF. Some simple circuits are used to generate PWM signals, including circuits built around 555 timer ICs.

Power delivery

PWM can be used to reduce the total amount of power delivered to a load without the losses normally incurred when a power source is limited by resistive means.
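The sample-and-hold plus sawtooth-comparator chain described above can be sketched in a few lines of code (the function name and step counts are illustrative assumptions, not from the text):

```python
def pwm_generate(samples, steps_per_period=100):
    """Generate PWM by sawtooth comparison, as in figure 3.42.

    Each held sample value in [0, 1] is compared against a sawtooth that
    ramps from 0 to 1 over one sampling period; the comparator outputs 1
    while the held sample exceeds the ramp, so the pulse width is
    proportional to the sample value (tau proportional to S(t)).
    """
    pwm = []
    for s in samples:                    # sample-and-hold: s is constant over the period
        for k in range(steps_per_period):
            ramp = k / steps_per_period  # sawtooth, 0 -> 1
            pwm.append(1 if s > ramp else 0)
    return pwm

# A sample of 0.25 yields a pulse occupying 25 % of its period.
wave = pwm_generate([0.25], steps_per_period=100)
assert sum(wave) == 25
```

Scaling the sawtooth period, as the text notes, adjusts the range of pulse widths.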
This is because the average power delivered is proportional to the modulation duty cycle, which may be considered a disadvantage in some situations. High-frequency PWM power-control systems are easily realized with semiconductor switches. The power dissipated by the switch is the product of the current through it and the voltage across it at any given time; since an ideal switch carries current with no voltage drop when on, and blocks voltage with no current when off, (ideally) no power is dissipated by the switch. Realistically, semiconductor switches such as MOSFETs or BJTs are non-ideal switches, but high-efficiency controllers can still be built. PWM is also often used to control the supply of electrical power to another device, as in speed control of electric motors, volume control of Class-D audio amplifiers, brightness control of light sources, and many other power-electronics applications. For example, light dimmers for home use employ a specific type of PWM control. Home-use light dimmers typically include electronic circuitry which suppresses current flow during defined portions of each cycle of the AC line voltage.

Voltage regulation (switched-mode power supply)

PWM is also used in efficient voltage regulators. By switching voltage to the load with the appropriate duty cycle, the output will approximate a voltage at the desired level. The switching noise is usually filtered with an inductor and a capacitor. One method measures the output voltage: when it is lower than the desired voltage, it turns ON the switch; when it is above the desired voltage, it turns OFF the switch.

Audio effects and amplification

PWM is sometimes used in sound synthesis, in particular subtractive synthesis, as it gives a distinctive sound effect. (In fact, PWM is equivalent to the difference of two sawtooth waves.) The ratio between the high and low levels is typically modulated with a low-frequency oscillator. A new class of audio amplifiers based on the PWM principle is becoming popular.
Called "Class-D amplifiers", these amplifiers produce a PWM equivalent of the analogue input signal, which is fed to the loudspeaker via a suitable filter network to block the carrier and recover the original audio. These amplifiers are characterized by very good efficiency figures (≥ 90 %) and compact size and light weight for large power outputs.

3.3-3 Pulse Position Modulation, PPM

In this method the pulses are all of the same height and width but occur delayed by a small time from the exact sampling repetition instant. This delay time (td) is proportional to the sample height (ES), i.e., the sample delay ∝ S(t), as shown in figure 3.44. That is;

td = tdo . ES (3.54)

Or, for the usual single-sinusoid base band;

td = tdo (1 + Cos ωmt) (3.55)

To analyse such a signal, take the simplified version of S(t) and modify both t and TS by writing;

t ≡ t + td = t + tdo (1 + Cos ωmt) (3.56)

and TS ≡ TSo + δT

TS = TSo + (d td / dt) δt = TSo − ωm tdo Sin ωmt . TSo

TS = TSo (1 − ωm tdo Sin ωmt) (3.57)

Thus from figure 3.44, which illustrates the information signal S(t) and its PPM waveform, it is clear that the larger the sample value, the more the corresponding pulse deviates from its un-modulated position.

Figure 3.44 Pulse position modulation

Thus;

VPPM = (τ / TSo)(1 − ωm tdo Sin ωmt) × {1 + 2 Cos [2π (t + tdo (1 + Cos ωmt)) / (TSo (1 − ωm tdo Sin ωmt))] + ....} (3.58)

Further simplification can be made if the signals are assumed to be sent in TDM. Then;

τ ≪ TSo and tdo ≪ Tm

This makes the factor ωm tdo small, so (writing 2π/TSo = ωS) VPPM becomes;

VPPM = (τ / TSo)(1 − ωm tdo Sin ωmt) × {1 + 2 Cos ωS [(t + tdo) + tdo Cos ωmt] + ……} (3.59)

The terms in ωS and its harmonics will produce NBFM-type spectra (as β = ωS tdo is small) which should not alias into the base band region. This means a further simplification to;

VPPM = (τ / TS)(1 − ωm tdo Sin ωmt + …….)
(3.60)

The term at ωm could be recovered by filtering and then converted back to the base band itself by integrating. However it is analysed, a PPM signal does not contain the base band as a separate term which could be filtered off, so it is usual to convert it back to PWM or PAM on reception. The chief advantage of PPM is that it is less susceptible to noise corruption than PWM or PAM, as noise has a smaller effect on the time position of a pulse edge, which is what carries the modulation. Since PAM relies on changes in pulse amplitude, it requires a larger SNR than PPM or PWM. This is essentially because a given amount of additive noise can change the amplitude of a pulse by a greater fraction than the position of its edges. PPM possesses the noise advantage of PWM without the problem of a transmitted power that varies with the amplitude of the signal. A PPM waveform can be derived from a PWM waveform using a shaping circuit. The relationship between the two is that, while the position of the pulse varies in PPM, it is the location of the leading (or trailing) edge of the pulse that varies in PWM. Suppose, for example, that we detect each trailing edge in a PWM waveform (we differentiate and look for large negative pulses). If we now place a constant-width pulse at each of these points, the result is PPM. This is shown in figure 3.45.

Figure 3.45 Conversion from PWM to PPM.

All pulse-modulated signals have wider bandwidth than the original information signal, since their spectrum is determined largely by the pulse shape and duration. Clearly, both PWM and PPM are more complex than PAM. The justification for choosing one of these more complex systems is that it provides greater noise immunity than does PAM. Because of the noise effect on PAM, it is not generally used for a complete system, but it is largely employed as a basic process in other pulse systems such as PWM and PPM.
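The trailing-edge conversion just described can be sketched discretely (a minimal illustration with hypothetical names, standing in for the differentiator-and-shaper circuit of figure 3.45):

```python
def pwm_to_ppm(pwm, pulse_width=2):
    """Convert PWM to PPM: detect each trailing (1 -> 0) edge and emit a
    constant-width pulse starting there, so pulse *position* now carries
    the information that pulse *width* carried before."""
    ppm = [0] * len(pwm)
    for k in range(1, len(pwm)):
        if pwm[k - 1] == 1 and pwm[k] == 0:          # trailing edge found
            for j in range(k, min(k + pulse_width, len(ppm))):
                ppm[j] = 1
    return ppm

# Two PWM pulses of widths 3 and 5: the wider pulse produces a later PPM
# pulse, and every PPM pulse has the same (constant) width.
pwm = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0]
ppm = pwm_to_ppm(pwm)
assert ppm.index(1) == 3 and ppm[13] == 1 and sum(ppm) == 4
```

Because every output pulse has fixed height and width, the transmitted power no longer depends on the signal amplitude, which is exactly the advantage claimed in the text.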
PWM and PPM share with frequency modulation the ability to trade bandwidth for improved noise performance. Also, PWM can still work if synchronization between the transmitter and the receiver fails, whereas PPM cannot. In PAM, additive noise directly affects the reconstructed sample value. The disruption is less severe in PWM and PPM, where the additive noise must affect the zero-crossings in order to cause an error. Along with their greater complexity, PWM and PPM have other negative properties. In multiplexed (TDM) systems, one must be sure that adjacent sample pulses do not overlap. If pulses are free to shift around or to get wider, as they are in PWM and PPM, one cannot simply insert other pulses in the spaces and be confident that no interaction will occur. Sufficient spacing must be maintained to allow for the largest possible sample value. This decreases the number of channels that can be multiplexed. Reception of PWM or PPM can be viewed as a two-step process: the received waveform is first converted to a PAM signal, and then a PAM receiver is used. The conversion of PWM to PAM is accomplished using an integrator. For PWM, we simply start the integrator at the sample point and integrate the received pulse. The output of the integrator is sampled prior to the next signal sampling point, and the samples form a PAM waveform as shown in figure 3.46. There are two observations you should make regarding this figure: first, we are using the form of PWM that sets the left edge of each pulse at the sampling point; second, the resulting PAM waveform is delayed by one sampling period. Conversion of PPM to PAM is also illustrated in figure 3.46. Here, we start the integrator at each sampling point and set it to integrate a constant. The integrator stops when the pulse arrives.
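The integrate-and-sample recovery of PAM from PWM can be illustrated in discrete time (a sketch with illustrative names, not the circuit of figure 3.46):

```python
def pwm_to_pam(pwm, steps_per_period):
    """Recover PAM from PWM by integrating over each sampling period:
    the integral of a unit-height pulse equals its width, which is the
    original (normalized) sample value."""
    pam = []
    for start in range(0, len(pwm), steps_per_period):
        period = pwm[start:start + steps_per_period]
        pam.append(sum(period) / steps_per_period)   # integrate, then normalize
    return pam

# Pulses occupying 30 % and 70 % of their periods decode to 0.3 and 0.7.
pwm = [1] * 30 + [0] * 70 + [1] * 70 + [0] * 30
assert pwm_to_pam(pwm, 100) == [0.3, 0.7]
```

Sampling the integrator output just before the next sampling instant, as the text describes, is what introduces the one-period delay in the recovered PAM waveform.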
Since the PPM pulse is placed at the trailing edge of the corresponding PWM pulse, there is no essential difference between PWM and PPM reception.

Figure 3.46 Conversion of PWM and PPM to PAM

3.4 Digital Pulse Modulation

Digital information has many advantages over analogue information. Its main advantage is that it tends to be far more resistant to transmission and interpretation errors. This accounts, for example, for the clarity of digitally encoded telephone connections. However, digital communication has its own unique pitfalls, and there are multitudes of different and incompatible ways in which it can be sent. The technical features of digital communications networks, in order of their relative importance for general telephony, are;

1. Ease of multiplexing,
2. Ease of signalling,
3. Use of modern technology,
4. Integration of transmission and switching,
5. Signal regeneration,
6. Accommodation of other services,
7. Operability at low signal-to-noise/interference ratios,
8. Ease of encryption,
9. Error detection and correction.

In particular applications, however, certain considerations may be more or less significant. Because of all these advantages, and because recent advances in wideband communication channels and solid-state electronics have allowed engineers to fully realize them, digital communications has grown quickly and is edging out analogue communication, owing to the vast demand to transmit computer data and the ability of digital communications to do so. Beside the above-mentioned advantages of digital communications, it has some disadvantages. The basic technical disadvantages of digital implementations are:

1. Increased bandwidth,
2. Need for time synchronization,
3. Topologically restricted multiplexing,
4. Need for conference/extension bridges,
5. Incompatibilities with analogue facilities.
The digital pulse modulations are an extension of PAM: after quantization, each sample is encoded as a binary number indicating its height, the binary digits forming a digital code word. The quantization process of the PAM signal degrades the quality of the information signal, in that each analogue sample is adjusted in amplitude to coincide with the nearest quantizing level, as shown in figure 3.47; the resulting signal is no longer analogue but discrete. When the sampled values, after quantizing, are encoded as shown in figure 3.48, the resulting signal is no longer merely discrete but digital.

Figure 3.47 Quantization of a PAM signal (Δ = q)

Figure 3.48 Sampling, quantization and encoding process

3.4-1 Pulse Code Modulation, PCM

PCM is the standard method used in the telephone network to change an analogue signal into a digital one for transmission through the digital telecommunication network. It has some significant advantages over other base band modulation types, as PCM signals have greater noise immunity than PAM signals. The three most common techniques used to enhance the basic PCM scheme are DPCM, DM, and ADM. It is customary to choose the number of code values as a power of 2, i.e., 2^n where n is an integer, so that each sample is represented as a binary number using n binary digits (bits). The number of quantizing levels M, each level represented by n bits, is;

M = 2^n (3.61)

The steps of the quantization procedure are:

Step 1. If not given, decide how many bits will be used for quantization, for example n bits per sample.

Step 2. Divide the interval between the minimum and maximum signal values into 2^n − 1 small intervals; each step will then be q = Vfs / (2^n − 1) volts, assuming all steps are of equal size.
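The two-step procedure above can be sketched as a small routine (the helper name and the round-to-nearest-level choice are illustrative assumptions):

```python
def quantize(v, v_min, v_max, n_bits):
    """Uniform quantizer following the steps above: step size
    q = Vfs / (2^n - 1); returns the code word (integer) and the
    reconstructed quantized level."""
    levels = 2 ** n_bits
    q = (v_max - v_min) / (levels - 1)   # Step 2: step size
    code = round((v - v_min) / q)        # index of the nearest quantization level
    return code, v_min + code * q

# 3 bits over 0..7 V gives a 1-V step: 5.2 V encodes as code word 5
# (binary 101) and is reconstructed as 5.0 V.
code, level = quantize(5.2, 0.0, 7.0, 3)
assert (code, level) == (5, 5.0)
```

The difference between the input (5.2 V) and the reconstructed level (5.0 V) is exactly the quantization error εq discussed below.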
As an example, for Vfs = 7 volts and eight = 2^3 allowed quantization levels, the pulse amplitudes could be represented by the binary numbers from zero (000) to seven (111), using n = 3 bits to represent the binary value of each sample; for n = 4 bits we have 2^4 = 16 values of sample magnitude, represented by the 4-bit binary codes (0000) through (1111). Figure 3.49 shows a comparison between the waveforms of the different pulse modulations.

Figure 3.49 A comparison between PAM, PWM, PPM, and PCM

Figure 3.50 depicts a typical quantization process, in which all sample values falling in a particular quantization interval are represented by a single discrete value located at the center of that interval; each sample is thus rounded off so that only discrete values are sent. In this manner the quantization process introduces a certain amount of error εq, or distortion, into the signal samples. This error, known as quantization noise, is minimized by establishing a large number of small quantization intervals.

Figure 3.50 Quantization error

Of course, as the number of quantization intervals increases, so must the number of bits increase, to uniquely identify the quantization intervals. Also, some signal levels are more sensitive to noise than others: the lower signal levels, for example, are affected by noise more heavily than the higher amplitudes. Again, we can manipulate the quantization in such a way that finer resolution (step size) is used to represent lower signal amplitudes, so that they are easier to detect at the receiving side. With PCM, the frequency range of the voice signal that will be transmitted is chosen to be 300 to 3,400 Hz, and the analogue speech signal is assumed to have a bandwidth of B = fm = 4 kHz, leaving enough guard band for filtering.
The sampling theorem defines the minimum sampling rate of this signal to be 2B = 8,000 samples per second, to preserve the information content of the signal. Samples are taken at intervals (inter-sample time, or sampling period) of Ts = 1 / 2B = 125 µs; each sample is then quantized into one of 256 levels and encoded into a digital eight-bit word (seven bits represent the sample information, plus 1 parity bit). The overall data rate of one PCM speech signal therefore becomes;

Rb = fb = (fs samples/sec) × (n bits/sample) = 8,000 samples/sec × 8 bits/sample = 64 kbps.

This same data rate is available for data transmission through each speech channel in the network. The result is the conversion of an analogue (speech) signal into a string of binary numbers. This binary data can be transmitted digitally using many of the base band or pass band modulation schemes. Because the signaling is binary, the baud rate equals the bit rate. The theoretical minimum absolute bandwidth of the PCM signal is;

Bmin = BNyquist = ½ × baud rate = 32 kHz

and this is realizable if the PCM waveform consists of [(sin x) / x] pulse shapes. If rectangular pulse shaping is used, the absolute bandwidth is infinite and the first-null bandwidth is;

BNull = Rb = 1 / Tb = 64 kHz.

Broadcast-quality color television signals have an analogue base band bandwidth of somewhat less than 5 MHz; for conventional PCM encoding of these video signals, a sampling rate of fs = 10 Msamples/sec and a 9-bit-per-sample coding scheme are used. Thus, the resulting transmission rate is 90 Mb/s. Most television pictures have a large degree of correlation, which can be exploited to reduce the transmission rate, so digital broadcast-quality color television signals require only 20 to 45 Mb/s transmission rates.
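The telephony numbers worked out above can be verified with simple arithmetic:

```python
# Standard telephony PCM parameters, following the text's derivation.
bandwidth_hz = 4000                 # band-limited speech, B = 4 kHz
fs = 2 * bandwidth_hz               # Nyquist rate: 8,000 samples/s
ts_us = 1e6 / fs                    # sampling period in microseconds
bits_per_sample = 8                 # 256 quantization levels per sample
rb = fs * bits_per_sample           # PCM bit rate

assert ts_us == 125.0               # Ts = 125 microseconds
assert rb == 64_000                 # 64 kbps per voice channel
assert rb / 2 == 32_000             # minimum (Nyquist) bandwidth in Hz
```

The same 64-kbps channel rate is what each speech slot carries when these streams are later time-division multiplexed.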
This binary data can be transmitted digitally using many of the base band or pass band modulation schemes. Figure 3.51 shows that a PAM signal can be transmitted and received by itself, or after conversion to PCM by adding an analogue-to-digital (A/D) converter at the transmitter side and a digital-to-analogue (D/A) converter at the receiver side.

Figure 3.51 Relationship between PAM and PCM signals.

a- Signal-to-Quantization noise Ratio (SNqR)

Quantization noise (error) is the difference between the actual value of an analogue sample and the assigned digital value due to quantization. For example, if the analogue value of a sample is in the range 4/16 - 5/16, the assigned value is 9/32. For a sample whose value is 0.26 volts, the assigned value is 9/32 = 0.281. Thus, there is an error of 0.281 − 0.26 = 0.021 volts due to quantization. The signal-to-quantizing-noise ratio of a quantized signal can be determined as;

SNqR = E[S²(t)] / E[(y(t) − S(t))²] = S / Nq = v² / εq² (3.62)

where,
E[.] : expectation or averaging
S(t) : quantizer input signal (sample)
y(t) : quantizer output (decoded) signal

In determining the expected value of the quantization noise, it is convenient to make the following assumptions;

1- Linear quantization (all quantization intervals have equal lengths, q = Δ); the quantization noise is then uniformly distributed and independent of the sample values.
2- Zero-mean signal (symmetrical pdf about 0 V).
3- Uniform signal pdf, such that all signal samples are equally likely to fall anywhere within a quantization interval, implying a uniform probability density of amplitude 1/q.

Figure 3.52 shows that the quantization noise is essentially random, and its value can be calculated; this leads to the concept of a Signal-to-Quantization noise Ratio, (SNqR).

4- The error y(t) − S(t) is limited in amplitude to q/2.
The maximum quantization error is half the step size, q/2. For n-bit PCM there are 2^n − 1 steps; for a full-scale range of 1 volt the step size is 1/(2^n − 1) volts, and the maximum quantization error is half of that.

5- A sample value is equally likely to fall anywhere within a quantization interval, implying a uniform probability density of amplitude 1/q.
6- Signal amplitudes are assumed to be confined to the maximum range of the coder. If a sample value exceeds the range of the highest quantization interval, overload distortion (also called peak limiting) occurs.

Figure 3.52 Quantization error interpreted as noise

Example: For an 8-bit PCM system to be used to transmit an analogue signal with a range between 0 and 6 volts and a bandwidth of 4 kHz, we have the following parameters.
Voltage level range: 0 to 6 volts gives step size q = 6/(2^8 − 1) = 6/255 volts.
Maximum quantization error = ½ × 6/255 = 6/510 volts.
Maximum signal size for mid-step resolution = 6 − 6/510 = 6 × 509/510 volts.
Minimum sampling rate = 8000 samples/sec.
Bits per sample = 8 (given).
Minimum bit rate = 8 × 8000 = 64 kbps.

The above example is a straightforward description of PCM. Actual systems are much more complex, owing to certain features of the voice signal, such as the information being carried more in one part of the signal spectrum than in the other. The quantization levels, of probability 1/M each, are represented by the pdf p(v), where M is the (even) number of quantization levels, assumed to be large, and the signal varies from sample to sample. The distance between adjacent quantizing levels is q volts, with the pdf of the allowed levels given by:

P(v) = (1/M) Σ δ(v − iq/2), summed over i = −M, ..., M (3.63)

where i takes on odd values only. The mean square signal power after quantization is:

v² = ∫ v² P(v) dv = (2/M)[(q/2)² + (3q/2)² + ....]
= (q²/2M)[1² + 3² + 5² + ...... + (M − 1)²] = (q²/2M) × M(M − 1)(M + 1)/6

i.e.;

v² = (M² − 1) q²/12 volt² (3.64)

Denoting the quantization error (the difference between the un-quantized and quantized signals) as εq, as shown in figure 3.52, with maximum value q/2 = Δ/2, it follows from assumption 3 that the probability density function (pdf) of the quantizing error εq is uniform over the range [−q/2, q/2], as shown in figure 3.53, with;

P(εq) = 1/q for −q/2 ≤ εq < q/2, and 0 elsewhere (3.65)

Figure 3.53 PDF of quantizing error

Denoting the quantizing error of the k-th sample by εq[k], we have;

E[εq[k]] = ∫ from −q/2 to q/2 of εq P(εq) dεq = 0 (3.66)

So the quantizing error has zero mean. The variance (mean square quantization error, i.e., noise power) of εq[k] is;

εq² = E[εq²[k]] = ∫ from −q/2 to q/2 of εq² P(εq) dεq = (1/q)[εq³/3] evaluated from −q/2 to q/2 = q²/12 V² (3.67)

Using equation (3.64), then,

SNqR = v² / (q²/12) = M² − 1 (3.68)

SNqR|dB = 10 log10 [v² / (q²/12)] = 10.8 + 20 log10 (v/q) (3.69)

For a sine-wave input, the SNqR produced by uniform quantization is;

SNqR|dB = 10 log10 [(Em²/2) / (q²/12)] = 7.78 + 20 log10 (Em/q) (3.70)

where Em is the peak amplitude of the sine wave. The average SNqR given by equation (3.68) is therefore:

SNqR = v² / (q²/12) = M² − 1 (3.71)

For large SNqR the approximation SNqR ≈ M² is used. Expressed in dB, SNqR = 20 log10 M. Since the peak signal level is Mq/2, the peak SNqR is:

(SNqR)peak = (Mq/2)² / (q²/12) = 3M² (3.72)

Expressed in dB:

(SNqR)peak = 10 log10 (3M²) = 10 log10 3 + 20 log10 M = 4.8 + 20 log10 2^n = 4.8 + 6n dB (3.72)

Example: A sine wave with a 1-V maximum amplitude is to be digitized with a minimum SNqR of 30 dB. How many uniformly spaced quantization intervals are needed, and how many bits are needed to encode each sample?
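The sine-wave result of equation (3.70), which with q = 2Em/2^n gives SNqR = 1.76 + 6.02 n dB for a full-range sine, can be checked by simulation: uniformly quantize a full-scale sine wave and measure the ratio of signal power to quantization-error power. This is a sketch; the mid-riser quantizer and the 0.5-dB tolerance are assumptions:

```python
import math

def sine_snqr_db(n_bits, num_samples=100_000):
    """Measure SNqR by quantizing one full period of a full-scale sine
    with a uniform mid-riser quantizer of step q = 2/2^n over [-1, 1]."""
    q = 2.0 / (2 ** n_bits)                     # step size
    sig_pow = err_pow = 0.0
    for k in range(num_samples):
        s = math.sin(2 * math.pi * k / num_samples)
        y = (math.floor(s / q) + 0.5) * q       # nearest mid-riser level
        sig_pow += s * s
        err_pow += (y - s) ** 2
    return 10 * math.log10(sig_pow / err_pow)

# The measured ratio should land close to the 1.76 + 6.02 n dB prediction
# (each extra bit buys about 6 dB of SNqR).
for n in (6, 8, 10):
    assert abs(sine_snqr_db(n) - (1.76 + 6.02 * n)) < 0.5
```

The simulation confirms the key engineering rule visible in the dB formulas above: every additional bit of quantization improves the SNqR by roughly 6 dB.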
Solution: Using equation (3.70), the maximum size of a quantization interval is determined as;

q = (1) × 10^[−(30 − 7.78)/20] = 0.078 V

Thus Em/q = 1/0.078 ≈ 13 quantizing intervals are needed for each polarity, for a total of 26 intervals in all. The number of bits required to encode each sample is determined as;

n = log2 (26) = 4.7 → 5 bits per sample

Note that minimum digitized voice quality requires a signal-to-noise ratio in excess of 26 dB. For a uniform PCM system to achieve an SNqR of 26 dB, equation (3.70) indicates that qmax = 0.123 Em. For equal positive and negative signal excursions (encoding from −Em to Em), this result indicates that just over 16 quantization intervals, or 4 bits per sample, are required. In addition to providing adequate quality for small signals, a telephone system must be capable of transmitting a large range of signal amplitudes, referred to as dynamic range. Dynamic range (DR) is usually expressed in decibels as the ratio of the maximum-amplitude signal to the minimum-amplitude signal:

DR = 10 log10 (Pmax / Pmin) = 20 log10 (Vmax / Vmin) (3.71)

A typical minimum dynamic range is 30 dB. Thus, signal values as large as 31 times Em must be encoded without exceeding the range of quantization intervals. Assuming equally spaced quantization intervals for uniform coding, the total number of intervals is determined as 496, which requires 9-bit code words. (This result is derived with the assumption of minimum performance requirements.) Higher performance objectives (less quantization noise and greater dynamic range) require as many as 13 bits per sample for uniform PCM systems. This coding performance was established when it was likely that multiple conversions would occur in an end-to-end connection. Now that the possibility of multiple A/D and D/A conversions has been eliminated, end-to-end voice quality is much better than it was in the analogue network.
The performance of an n-bit uniform PCM system is determined by observing that;

q = 2 Em-max / 2^n (3.72)

where Em-max is the maximum (non-overloaded) amplitude. Substituting equation (3.72) into equation (3.70) produces the PCM performance equation for uniform coding;

SNqR = 1.76 + 6n + 20 log10 (Em / Em-max) (3.73)

The first two terms of equation (3.73) give the SNqR when encoding a full-range sine wave. The last term indicates the loss in SNqR when encoding a lower-level signal. These relationships are presented in figure 3.54, which shows the SNqR of a uniform PCM system as a function of the number of bits per sample and the magnitude of an input sine wave.

Figure 3.54 SNqR for uniform PCM coding

High-quality PCM encoders produce quantization noise that is evenly distributed across voice frequencies and independent of the encoded waveforms. Thus, the signal-to-quantization-noise ratio defined in equation (3.73) is a good measure of PCM performance.

Example: What is the minimum bit rate that a uniform PCM encoder must provide to encode a high-fidelity audio signal with a dynamic range of 40 dB? Assume the fidelity requirements dictate passage of a 20-kHz bandwidth with a minimum signal-to-noise ratio of 50 dB. For simplicity, assume sinusoidal input signals.

Solution: To prevent foldover distortion, the sampling rate must be at least 40 kHz. Assuming an excess sampling factor comparable to that used in D-type channel banks (4000/3400), we choose a sampling rate of 48 kHz as a compromise for a practical band-limiting filter. Observing that a full-amplitude signal must be encoded with an SNqR of 40 + 50 = 90 dB, we can use equation (3.73) to determine the number of bits n required to encode each sample:

n = (90 − 1.76) / 6 = 14.7 → 15 bits

Thus, the required bit rate is;

(15 bits/sample) × (48,000 samples/sec) = 720 kbit/sec.
For a given number of quantization levels M, the number of binary digits required for each PCM code word is n = log2 M. The PCM peak signal-to-quantization-noise ratio, (SNqR)peak, is therefore;

(SNqR)peak = 3M² = 3 (2^n)² (3.74)

If the ratio of peak to mean signal power, v²peak / v², is denoted by α, then the average signal-to-quantization-noise ratio is:

SNqR = 3 (2^2n)(1/α) (3.75)

Expressed in dB this becomes;

SNqR = 4.8 + 6n − αdB (3.76)

For a sinusoidal signal α = 2 (3 dB); for speech α = 10 dB. The SNqR for an n-bit PCM voice system can therefore be estimated using the rule of thumb 6(n − 1) dB.

Example: A digital communications system is to carry a single voice signal using linearly quantized PCM. What PCM bit rate will be required if an ideal anti-aliasing filter with a cut-off frequency of 3.4 kHz is used at the transmitter and the signal-to-quantization-noise ratio is to be kept above 50 dB?

SNqR = 4.8 + 6n − αdB

For voice signals α = 10 dB, i.e.:

n = (50 + 10 − 4.8) / 6 = 9.2

∴ 10 bits/sample are therefore required. The sampling rate, allowing a practical excess over Nyquist's rule, is;

fs = 2.2 × 3.4 kHz = 7.48 kHz (k samples/sec.)

The PCM bit rate (or, more strictly, binary baud rate) is therefore:

Rb = fs × n = 7.48 × 10³ × 10 bits/sec. = 74.8 kbit/s.

Example: A TV signal with a bandwidth of 4.2 MHz is transmitted using binary PCM. The number of quantization levels is 512. Calculate; a- Code word length, b- Transmission bandwidth, c- Final bit rate, d- SNqR.

Solution:
a- Code word length: M = 2^n, so n = log2 M = log10 512 / log10 2 = 2.71 / 0.3 = 9 bits; the code word length is 9 bits.
b- Transmission bandwidth: BT = Rb / 2 = 2 × fm × n / 2 = fm × n = 4.2 × 10⁶ × 9 = 37.8 MHz.
c- Final bit rate: Rb = 2 × fm × n = 2 × 4.2 × 10⁶ × 9 = 75.6 Mbps.
d- SNqR = 4.8 + 6n = 4.8 + 6 × 9 = 58.8 dB (assuming full dynamic range).
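The arithmetic of the TV example can be re-checked directly:

```python
import math

# TV PCM example: 4.2-MHz video bandwidth, 512 quantization levels.
fm = 4.2e6                     # signal bandwidth (Hz)
M = 512                        # quantization levels
n = int(math.log2(M))          # a- code word length (bits)
rb = 2 * fm * n                # c- bit rate at Nyquist sampling
bt = rb / 2                    # b- minimum transmission bandwidth
snqr_db = 4.8 + 6 * n          # d- peak SNqR at full dynamic range

assert n == 9
assert rb == 75.6e6            # 75.6 Mbps
assert bt == 37.8e6            # 37.8 MHz
assert abs(snqr_db - 58.8) < 1e-9
```

Note how the 9-bit word multiplies the Nyquist sampling rate (8.4 Msamples/s) to give the 75.6-Mbps rate, and how the minimum bandwidth is half the bit rate.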
Example: The information in an analogue signal voltage waveform is to be transmitted over a PCM system with an accuracy of ±0.1 % (full scale). The analogue voltage waveform has a bandwidth of 100 Hz and an amplitude range of −10 to +10 volts. Determine; a- The minimum sampling rate, b- The number of bits in each PCM word, c- The minimum bit rate required in the PCM signal, d- The minimum absolute channel bandwidth required for the transmission of the PCM signal.

Solution:
a- For ±0.1 % accuracy relative to the 20-V full-scale range, the maximum quantization error must be εmax = q/2 = 0.001 × 20 = 0.02 V, so the step size should be q = 0.04 V. The minimum sampling rate is Rs ≥ 2 fm = 2 × 100 = 200 samples/sec.
b- As q = 2Em/M, the number of quantization levels is M = 2Em/q = 20/0.04 = 500. With M ≤ 2^n, the number of bits in each PCM word is n = log2 500 = log10 500 / log10 2 = 8.97 → 9 bits.
c- The minimum bit rate required in the PCM signal is Rb = n Rs = 9 × 200 = 1800 bits/sec.
d- The minimum absolute channel bandwidth required for transmission of the PCM signal is BT ≥ ½ Rb = ½ × 1800 = 900 Hz.

b- SNqR for Decoded PCM

When all PCM code words are received and decoded without error, the SNR of the decoded signal is essentially equal to the signal-to-quantization-noise ratio, SNqR, as given in equations (3.70)-(3.76). The presence of channel and/or receiver noise causes one or more symbols in a given code word to be changed sufficiently in amplitude to be interpreted in error. For binary PCM this means a digit 1 being interpreted as a 0, or a digit 0 being interpreted as a 1. The effect of this error on the SNR of the decoded signal depends on which symbol is detected in error. An error in the least significant bit of a binary PCM word introduces an error in the decoded signal of one quantization level.
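As a numeric check of the PCM design procedure, under the reading that "±0.1 % (full scale)" means 0.1 % of the 20-V range (integer millivolts are used so all arithmetic is exact):

```python
import math

v_range_mv = 20_000                 # -10 V .. +10 V range, in millivolts
eps_max_mv = v_range_mv // 1000     # 0.1 % of full scale = 20 mV max error
q_mv = 2 * eps_max_mv               # step size: max error is q/2, so q = 40 mV
M = v_range_mv // q_mv              # quantization levels needed
n = math.ceil(math.log2(M))         # bits per PCM word
fs = 2 * 100                        # Nyquist sampling rate for a 100-Hz signal
rb = n * fs                         # minimum PCM bit rate
bt_min = rb // 2                    # minimum absolute channel bandwidth

assert (M, n, rb, bt_min) == (500, 9, 1800, 900)
```

Treating the tolerance instead as an absolute ±0.001 V would inflate M and the bit count considerably, which is why pinning down what the percentage refers to is the critical first step of the design.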
The most significant bit, in contrast, would introduce an error of many quantization levels. Figure 3.55 shows the complete block diagram of a PCM system.

Figure 3.55 PCM system

To calculate the SNR performance of a PCM system in the presence of noise, we first assume that the probability of more than one error occurring in a single n-bit PCM code word is negligible. We also assume that all bits in the code word have the same probability, Pe, of being detected in error. Using subscripts 1, 2, ..., n to denote the significance of the PCM code word bits (1 corresponding to the least significant, n to the most significant), the possible errors in the decoded signal are:

ε1 = q
ε2 = 2q
ε3 = 4q
.........                                        (3.77)
εn = 2^(n−1) q

The mean square decoding error, ε̄²de, is the mean square of the possible errors multiplied by the probability of an error occurring in a code word, i.e.:

ε̄²de = n Pe (1/n) [q² + (2q)² + ... + (2^(n−1) q)²]
      = Pe q² [4⁰ + 4¹ + 4² + ... + 4^(n−1)]     (3.78)

The square bracket is the sum of a geometric progression of the form:

Sn = a + a r + a r² + ... + a r^(n−1) = a (rⁿ − 1) / (r − 1)    (3.79)

where a = 1 and r = 4. Thus:

ε̄²de = Pe q² (4ⁿ − 1) / 3   (volt²)              (3.80)

Since the error (noise) which results from incorrectly detected bits is statistically independent of the noise which results from the quantization process, we can add them together on a power basis, i.e.:

SNTR = v̄² / (σq² + ε̄²de)                        (3.81)

where v̄² is the received signal power. Using equation (3.64) for v̄² and equation (3.80) for ε̄²de, and remembering that the number of quantization levels is M = 2ⁿ, we have:

SNTR = (M² − 1) / (1 + 4 (M² − 1) Pe)            (3.82)

This equation allows us to calculate the average SNTR of the decoded PCM signal, including both the quantization noise and the decoding noise which occurs due to corruption of individual PCM bits by channel or receiver noise.
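Equation (3.82) is easy to evaluate directly. The following sketch wraps it in a small helper (the function name is this sketch's own) and returns the result in dB; with Pe = 0 it collapses to the SNqR ceiling 10 log10(M² − 1).

```python
import math

def pcm_sntr_db(n_bits, pe):
    """Total SNR of decoded PCM, eq. (3.82): quantization + decoding noise."""
    m2 = (2 ** n_bits) ** 2                        # M^2 with M = 2^n levels
    sntr = (m2 - 1) / (1 + 4 * (m2 - 1) * pe)
    return 10 * math.log10(sntr)

# With an error-free channel (Pe = 0) the ratio reduces to the SNqR limit.
print(round(pcm_sntr_db(8, 0.0), 1))    # 48.2 dB for 8-bit PCM
print(round(pcm_sntr_db(8, 1e-4), 1))   # noticeably degraded by bit errors
```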
The SNTR of equation (3.82), expressed as a linear ratio (not in dB), is sketched for various values of n = log2 M in figure 3.56.

Figure 3.56 Input / output SNR for PCM

The noise immunity advantage of PCM illustrated by this figure is clear. The x-axis is the SNR of the received PCM signal; the y-axis is the SNR of the reconstructed (decoded) information signal. If the SNR of the received PCM signal is very large, the total noise is dominated by the quantization process and the output SNR is limited to SNqR. In practice, however, PCM systems are operated at lower input SNR values, near the knee or threshold of the curves in figure 3.56. The output SNR is then significantly greater than the input SNR. At very low input SNR, when the noise is of comparable amplitude to the PCM pulses, the interpretation of code words starts to become unreliable. Since even a single error in a PCM code word can change its numerical value by a large amount, the output SNR in this region (i.e. below threshold) decreases very rapidly.

Example: Find the overall SNRout for the reconstructed analogue voice signal when n = 10 bits, if receiver noise induces an error rate, on average, of one in every 10⁶ bits.

Solution: Using equations (3.82) and (3.76):

SNRout = SNqR / (1 + 4 SNqR Pe)

SNqR = 4.8 + 6n − α(dB) = 4.8 + (6 × 10) − 10 = 54.8 dB (or 3.020 × 10⁵ as a ratio)

SNRout = 3.020 × 10⁵ / (1 + 4 × (3.020 × 10⁵) × (1 × 10⁻⁶)) = 1.368 × 10⁵ = 51.4 dB

The SNR available with PCM systems increases with the square of the number of quantization levels, while the baud rate, and equivalently the bandwidth, increases only with the logarithm of the number of quantization levels. Thus bandwidth can be exchanged for SNR. Close to threshold, PCM is superior to all analogue forms of pulse modulation at low SNR. However, all practical PCM systems have a performance which is an order of magnitude below their theoretical optimum.
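The worked example above can be checked numerically in a few lines; this sketch simply retraces the same arithmetic (convert the dB figure to a ratio, apply the SNRout expression, convert back to dB).

```python
import math

# Worked example: n = 10 bits, Pe = 1e-6, speech (alpha = 10 dB).
snqr_db = 4.8 + 6 * 10 - 10                 # eq. (3.76): 54.8 dB
snqr = 10 ** (snqr_db / 10)                 # linear ratio, about 3.02e5
pe = 1e-6
snr_out = snqr / (1 + 4 * snqr * pe)        # decoded-signal SNR
print(round(10 * math.log10(snr_out), 1))   # 51.4 (dB)
```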
As PCM signals carry no information in their pulse amplitude, they can be regenerated using non-linear processing at each repeater in a long-haul system. Such digital regenerative repeaters allow accumulated noise to be removed and essentially noiseless signals to be retransmitted to the next repeater in each section of the link. The probability of error does, however, accumulate from hop to hop. We see a three-way trade-off among energy, bandwidth, and processing complexity: an increase in any one means the other two can be reduced, more or less, for the same performance. A simple PCM digitizer, for example, can be replaced by a more complex digitizer which puts out fewer bits and so consumes less transmission bandwidth. Transmission errors may be driven down by more complex coding instead of more transmission energy. With modern coded modulation, coding complexity can even be exchanged for bandwidth. Of energy, bandwidth, and processing, the cheapest today is processor complexity.

Sheet 1

1- a- Calculate SNqR for a 10-bit PCM system. b- Calculate the number of uniform quantization levels needed to convert an audio signal in the range 300 Hz to 3000 Hz to a PCM signal with SNqR = 30 dB.

2- For a PCM system, determine the following: a- the maximum sampling period required to reconstruct an analogue signal of 4 V amplitude and 3.5 kHz frequency without distortion, b- the signal to quantization noise ratio due to a 4-bit encoder, c- the maximum bit duration, d- the resulting waveforms when sampling the signal at a 6 kHz rate, e- the average total SNTR of the recovered signal with Pe = 10⁻⁴.

3- A PCM voice signal has a bit rate of 64 kbps. Find the value of the quantizing step for the baseband signal V(t) = 4 sin(6.4 × 10³ π t).
Assume the channel is noisy, with a bit error rate of 10⁻⁴: a- What is the average signal-to-noise ratio at the receiver output? b- What is the maximum sampling period required to reconstruct the baseband signal without distortion? c- What is the signal to quantization noise ratio when a 6-bit encoder is used? d- Show the result of sampling the baseband signal at a 5 kHz rate in the time and frequency domains.

4- A sine wave with 0.5 V amplitude and 3.2 kHz frequency is to be digitized with an SNqR of 35 dB. Calculate the bit rate of the digital signal, and show the effect of using a sampling frequency of 0.8 fN (fN being the Nyquist rate).

3.4-2 Differential Pulse Code Modulation, DPCM

In conventional PCM there are often successive samples between whose amplitudes there is little difference. This necessitates transmitting several nearly identical PCM codes, which is redundant. Differential Pulse Code Modulation (DPCM) is designed specifically to take advantage of these sample-to-sample redundancies in a typical speech waveform. With DPCM, the difference in amplitude between two successive samples is transmitted rather than the actual sample. The band-limiting filter in the encoder and the smoothing filter in the decoder are basically identical to those used in conventional PCM systems, but the DPCM structure shown in figure 3.57 is more complicated.

Figure 3.57 Functional block diagram of DPCM

The band-limiting filter limits the analogue input signal frequency to one-half the sampling rate. A conceptual means of generating the difference samples for a DPCM coder is to store the previous input sample directly in a sample-and-hold circuit and use an analogue subtractor to measure the change. The change in the signal is then quantized and encoded for transmission. In practice, however, the previous value is not stored directly; instead it is reconstructed by a feedback loop that integrates the encoded sample differences.
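The feedback loop just described can be sketched in a few lines. This is a minimal illustration, not a production codec: the step size, the 4-bit difference quantizer, and the function names are all this sketch's own assumptions. Note that the encoder builds its estimate exactly the way the decoder does, which is what keeps quantization errors from accumulating.

```python
def dpcm_encode(samples, step=0.1, n_bits=4):
    """Minimal DPCM loop: quantize the difference between the input and a
    feedback estimate built by integrating previously encoded differences."""
    top = 2 ** (n_bits - 1) - 1               # quantizer code range
    estimate, codes = 0.0, []
    for x in samples:
        diff = x - estimate                   # difference signal
        code = max(-top - 1, min(top, round(diff / step)))
        codes.append(code)
        estimate += code * step               # feedback integrator (as in decoder)
    return codes

def dpcm_decode(codes, step=0.1):
    """Decoder: the same integration the encoder used in its feedback path."""
    estimate, out = 0.0, []
    for code in codes:
        estimate += code * step
        out.append(estimate)
    return out
```

For a slowly varying input the reconstruction error stays within half a step, since each encoded difference corrects any drift left by the previous one.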
In essence, the feedback signal is an estimate of the input signal obtained by integrating the encoded sample differences. Thus the feedback signal is obtained in the same manner used to reconstruct the waveform in the decoder.

The advantage of the feedback implementation is that quantization errors do not accumulate indefinitely. If the feedback signal drifts from the input signal as a result of an accumulation of quantization errors, the next encoding of the difference signal automatically compensates for the drift. In a system without feedback, the output produced by a decoder at the other end of the connection might accumulate quantization errors without bound. As in PCM systems, the analogue-to-digital conversion process can be uniform or companded. Some DPCM systems also use adaptive techniques to adjust the quantization step size in accordance with the average power level of the signal.

Example: Speech digitization techniques are sometimes measured for quality by use of an 800 Hz sine wave as a representative test signal. Assuming a uniform PCM system is available to encode the sine wave across a given dynamic range, determine how many bits per sample can be saved by using a uniform DPCM system.

Solution: A basic answer can be obtained by determining how much smaller the dynamic range of the difference signal is in comparison with the dynamic range of the signal amplitude. Assume the maximum amplitude of the sine wave is A, so that:

x(t) = A sin(2π · 800 t)

The maximum amplitude of the difference signal can be obtained by differentiating and multiplying by the time interval between samples (1/8000 s at the standard 8 kHz voice sampling rate):

dx/dt = A (2π)(800) cos(2π · 800 t)

|Δx|max = A (2π)(800) (1/8000) = 0.628 A

The saving in bits per sample is therefore:

Δn = log2 (1 / 0.628) = 0.67 bits

This example demonstrates that a DPCM system can use 2/3 bit per sample less than a PCM system of the same quality.
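The bit saving computed above follows directly from the ratio of the peak difference amplitude to the peak signal amplitude. A small sketch of that calculation (the helper name is this sketch's own):

```python
import math

def dpcm_bit_saving(f_tone, f_sample):
    """Bits/sample saved by coding the first difference of a sine test tone
    instead of the tone itself (example above: 800 Hz sampled at 8 kHz)."""
    ratio = 2 * math.pi * f_tone / f_sample   # peak difference / peak amplitude
    return math.log2(1 / ratio)               # can be negative if ratio > 1

print(round(dpcm_bit_saving(800, 8000), 2))   # 0.67
```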
Typically, DPCM systems provide a full 1-bit reduction in code word size. The larger saving is achieved because, on average, speech waveforms have a lower slope than an 800 Hz tone. Figure 3.58 shows a simplified block diagram of a DPCM transmitter.

Figure 3.58 A simplified block diagram of a DPCM transmitter

The analogue input signal is band-limited to one-half the sample rate, then compared with the preceding accumulated signal level in the differentiator. The output of the differentiator is the difference between the two signals. The difference is PCM encoded and transmitted. The ADC operates the same as in a conventional PCM system, except that it typically uses fewer bits per sample. Figure 3.59 shows a simplified block diagram of a DPCM receiver. Each received sample is converted back to analogue, stored, and then summed with the next sample received. In the receiver shown in figure 3.59 the integration is performed on analogue signals, although it could also be performed digitally.

Figure 3.59 A simplified block diagram of a DPCM receiver

3.4-3 Delta Modulation, DM

Delta modulation (DM) is another digitization technique that specifically exploits the sample-to-sample redundancy in a speech waveform. In fact, DM can be considered a special case of DPCM that uses only 1 bit per sample of the difference signal to achieve digital transmission of analogue signals. The single bit specifies merely the polarity of the difference sample, and thereby indicates whether the signal has increased (the sample is larger than the previous one) or decreased (the sample is smaller than the previous one) since the last sample. An approximation to the input waveform is constructed in the feedback path by stepping up one quantization level when the difference is positive ("one") and stepping down one quantization level when the difference is negative ("zero").
If the current sample value is equal to the previous one, the first such occurrence is coded opposite to the previous bit, and later occurrences then alternate between '0' and '1'. In this way the input signal is encoded as a sequence of 'ups' and 'downs' in a manner resembling a staircase. Figure 3.60 shows a DM approximation of a typical waveform; it does not need k bits per sample as in PCM, since one bit per sample suffices.

Figure 3.60 Waveform encoding by delta modulation

Notice that the feedback signal continues to step in one direction until it crosses the input, at which time the feedback step reverses direction until the input is crossed again. Thus, when tracking the input signal, the DM output 'bounces' back and forth across the input waveform, allowing the input to be accurately reconstructed by a smoothing filter. The main attraction of DM is its simplicity. Since each encoded sample carries a relatively small amount of information (1 bit), DM systems require a higher sampling rate than PCM or multi-bit DPCM systems for comparable speech quality. From another viewpoint, 'oversampling' is needed to achieve better prediction from one sample to the next. Consequently, the bandwidth saved by sending one bit per sample is given back through the higher sampling rate needed to make the quality equal to that of PCM.

a- Delta Modulation Transmitter

Figure 3.61 shows a block diagram of a delta modulation transmitter. The analogue input is sampled and converted to a PAM signal, which is compared with the output of the DAC. The output of the DAC is a voltage equal to the regenerated magnitude of the previous sample, which was stored in the up-down counter as a binary number.

Figure 3.61 Delta modulation transmitter

The up-down counter is incremented or decremented depending on whether the previous sample is larger or smaller than the current sample. The up-down counter is clocked at a rate equal to the sample rate.
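The staircase behaviour just described can be simulated directly. This is a minimal sketch of the 1-bit encoder loop (comparator plus up-down accumulator); the step size, sampling rate, and test tone below are illustrative choices, not values from the text.

```python
import math

def delta_modulate(samples, step):
    """1-bit DM encoder: emit 1/0 for up/down steps of a staircase that
    tracks the input, mimicking the comparator + up-down counter + DAC."""
    approx, bits, staircase = 0.0, [], []
    for x in samples:
        bit = 1 if x >= approx else 0
        approx += step if bit else -step     # feedback staircase update
        bits.append(bit)
        staircase.append(approx)
    return bits, staircase

# Track one cycle of a slow sinusoid, heavily oversampled (no slope overload:
# amp * 2*pi*f = 3142 V/s is well below step * fs = 6400 V/s).
fs, f, amp, step = 64_000, 1_000, 0.5, 0.1
x = [amp * math.sin(2 * math.pi * f * t / fs) for t in range(64)]
bits, stair = delta_modulate(x, step)
```

Because the slope condition holds, the staircase 'bounces' around the input and never lags it by more than about two steps.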
Therefore, the up-down counter is updated after each comparison. Figure 3.62 shows the ideal operation of a delta modulation encoder. Initially, the up-down counter is zeroed and the DAC outputs 0 V. The first sample is taken, converted to a PAM signal, and compared with zero volts. The output of the comparator is a logic 1 condition (+V), indicating that the current sample is larger in amplitude than the previous sample. On the next clock pulse the up-down counter is incremented to a count of 1.

Figure 3.62 Ideal operation of a delta modulation encoder

The DAC now outputs a voltage equal to the magnitude of the minimum step size (resolution). The steps change value at a rate equal to the clock frequency (sample rate). Consequently, with the input signal shown, the up-down counter follows the analogue input signal up until the output of the DAC exceeds the analogue sample; the up-down counter then begins counting down until the output of the DAC drops below the sample amplitude. In the idealized situation shown in figure 3.62, the DAC output follows the input signal. Each time the up-down counter is incremented, a logic 1 is transmitted, and each time the up-down counter is decremented, a logic 0 is transmitted.

b- Delta Modulation Receiver

Figure 3.63 shows the block diagram of a delta modulation receiver. As can be seen, the receiver is almost identical to the transmitter except for the comparator. As the logic 1s and 0s are received, the up-down counter is incremented or decremented accordingly. Consequently, the output of the DAC in the decoder is identical to the output of the DAC in the transmitter.

Figure 3.63 Delta modulation receiver

With delta modulation each sample requires the transmission of only one bit; therefore the bit rates associated with delta modulation are lower than those of conventional PCM systems.
However, there are two problems associated with delta modulation that do not occur with conventional PCM: slope overload and granular noise.

■ Slope overload

Slope overload distortion occurs during large, fast signal transitions, when the slope (rate of change) of the analogue input signal exceeds the maximum rate of change that can be generated by the feedback loop. Figure 3.64 shows what happens when the analogue input signal changes at a faster rate than the DAC can maintain. Since the maximum rate of change in the feedback loop is simply the step size times the sampling rate, a slope overload condition occurs if:

|dx(t)/dt| > q fs    (3.83)

The relatively high sampling rate of a delta modulator (increasing the clock frequency) produces a wider separation of the sampled spectra, and hence foldover (aliasing) distortion is prevented with less stringent roll-off requirements for the input filter. It also reduces the probability of slope overload occurring. Another way to prevent slope overload is to increase the magnitude of the minimum step size.

Figure 3.64 Slope overload distortion

Slope overload is not a limitation of just the DM system, but an inherent problem with any system, such as DPCM in general, that encodes the difference in a signal from one sample to the next. A difference system encodes the slope of the input with a finite number of bits and hence a finite range. If the slope exceeds that range, slope overload occurs. In contrast, a conventional PCM system is not limited by the rate of change of the input, only by the maximum encodable amplitude. Notice that a differential system can encode signals of arbitrarily large amplitude, as long as the large amplitudes are attained gradually.

■ Granular noise

Figure 3.65 contrasts the original and reconstructed signals associated with granular noise in a delta modulation system.
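For a sinusoidal input A sin(2π f t), the slope condition (3.83) gives a closed-form limit A ≤ q fs / (2π f) on the amplitude that can be tracked. A short sketch (helper name and the example numbers are illustrative):

```python
import math

def max_dm_amplitude(step, f_sample, f_tone):
    """Largest sinusoid amplitude a DM coder tracks without slope overload.

    From eq. (3.83): the peak slope A * 2*pi*f_tone must not exceed
    step * f_sample.
    """
    return step * f_sample / (2 * math.pi * f_tone)

# e.g. 250 mV steps, 30 k samples/s, 2 kHz tone
print(round(max_dm_amplitude(0.25, 30_000, 2_000), 3))   # 0.597 V
```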
It can be seen that when the original analogue input signal has a relatively constant amplitude, the reconstructed signal has variations that were not present in the original signal. Granular noise in delta modulation is analogous to quantization noise in conventional PCM.

Figure 3.65 Granular noise

Granular noise can be reduced by decreasing the step size. Granular noise is more prevalent in analogue signals that have gradual slopes and whose amplitudes vary slowly by small amounts; slope overload is more prevalent in analogue signals that have steep slopes or whose amplitudes vary rapidly. Therefore, to reduce granular noise a small step size is needed, while to reduce the possibility of slope overload a large step size is required. Obviously a compromise is necessary: the design of a DM (or DPCM) system involves a trade-off between the two types of distortion, and the optimum DM step size is the one that minimizes the total of the granular and slope overload noise.

The perceptual effects of slope overload on the quality of a speech signal are significantly different from those produced by granular noise. As indicated in figure 3.66, the slope overload noise reaches its peaks just before the encoded signal reaches its peaks. Hence slope overload noise has strong components identical in frequency, and approximately in phase, with a major component of the input.

Figure 3.66 Slope overload and granular noise of a DM system

In fact, overload noise is much less objectionable to a listener than random or granular noise at an equivalent power level. Hence, from the point of view of perceived speech quality, the optimum mix of granular and slope overload noise is difficult to determine. Many versions of DM for voice encoding have focused on ways of implementing adaptive delta modulation (ADM) to improve the performance at a given bit rate.
The intense interest at that time was related to DM's simplicity, its good tolerance of channel errors, and its relatively low-cost implementation. The cost factor is no longer relevant, because even relatively complicated coding algorithms now have insignificant costs compared with most system costs. ADM is still used in some old PBXs, in some military secure voice radio systems, and as a means of encoding the residual error signal of some predictive coders.

Sheet 2

1- A delta modulator system is designed to operate at five times the Nyquist rate for a signal with a 3 kHz bandwidth. Determine the maximum amplitude of a 2 kHz input sinusoid for which the delta modulator does not exhibit slope overload, when the quantization step size is 250 mV.

2- Consider the signal V(t) = 0.1 sin(2π × 10³ t), and step sizes q1 = 4 mV and q2 = 60 mV. Does slope overload distortion occur? If so, in which case?

3- Find the signal amplitude for minimum quantization error in a delta modulation system if the step size is 1 V with a repetition period of 1 ms, and the information signal operates at 100 Hz.

4- A DM system is designed to operate at 3 times the Nyquist rate for a signal with a 3 kHz bandwidth. The quantization step size is 250 mV. Determine: a- the maximum amplitude of a 1 kHz input sinusoid for which the delta modulator does not show slope overload, b- the post-filtered SNqR for the signal of part (a).

5- A DM system is tested with a 10 kHz sinusoidal signal of 1 V peak-to-peak at the input. It is sampled at 10 times the Nyquist rate. Determine: a- the step size required to prevent slope overload, b- the corresponding SNqR.

3.4-4 Adaptive Delta Modulation, ADM

Adaptive delta modulation is a delta modulation system in which the step size of the DAC is automatically varied depending on the amplitude characteristics of the analogue input signal. Figure 3.67 shows how an adaptive delta modulator works.
When the output of the transmitter is a string of consecutive 1s or 0s, this indicates that the slope of the DAC output is less than the slope of the analogue signal in either the positive or the negative direction. Essentially, the DAC has lost track of exactly where the analogue samples are, and the possibility of slope overload occurring is high.

Figure 3.67 Adaptive delta modulation

With an adaptive delta modulator, after a predetermined number of consecutive 1s or 0s the step size is automatically increased. After the next sample, if the DAC output amplitude is still below the sample amplitude, the step size is increased even further, until eventually the DAC catches up with the analogue signal. When an alternating sequence of 1s and 0s occurs, this indicates that the possibility of granular noise is high. Consequently, the DAC automatically reverts to its minimum step size, thus reducing the magnitude of the noise error. A common algorithm for an adaptive delta modulator is: when three consecutive 1s or 0s occur, the step size of the DAC is increased or decreased by a factor of 1.5. Various other algorithms may be used for adaptive delta modulators, depending on particular system requirements.

The differential systems described in the previous sections (DPCM, DM, ADM) operate at lower data rates than PCM systems because they encode a difference signal that has lower average power than the raw input signal. The ratio of the input signal power to the power of the difference signal is referred to as the prediction gain. Simple DPCM systems provide about 5 dB of prediction gain.

3.4-5 Nonlinear quantization (Companded) PCM

The expressions derived for SNqR assumed that the information signal has a uniform pdf, i.e. that all quantization levels are used equally. As indicated in equation (3.70) and figure 3.68, the SNqR increases with the signal amplitude Em.
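The "three consecutive identical bits, grow the step by 1.5×" rule above can be sketched as a small encoder loop. This is an illustrative interpretation of that rule, not a standard ADM algorithm: the minimum step, the reset-on-alternation policy, and the function name are all this sketch's own assumptions.

```python
def adm_encode(samples, step_min=0.05, grow=1.5):
    """ADM sketch: after three consecutive identical bits, grow the step
    by 1.5x (overload suspected); otherwise fall back to the minimum step
    (granular region, alternating bits)."""
    approx, step, bits, run, last = 0.0, step_min, [], 0, None
    for x in samples:
        bit = 1 if x >= approx else 0
        run = run + 1 if bit == last else 1   # length of the current bit run
        last = bit
        if run >= 3:
            step *= grow                      # DAC is lagging: speed up
        else:
            step = step_min                   # tracking: smallest step
        approx += step if bit else -step
        bits.append(bit)
    return bits
```

On a sudden level jump the run of identical bits makes the step grow geometrically, so the staircase catches up far sooner than a fixed-step DM would.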
For example, a 26 dB SNqR for small signals combined with a 30 dB dynamic range produces a 56 dB SNqR for a maximum-amplitude signal. In this manner a uniform PCM system provides unneeded quality for large signals; moreover, the large signals are the least likely to occur. For these reasons the code space in a uniform PCM system is very inefficiently utilized. For most signals the uniform-pdf assumption is not valid. If the pdf of the information signal is not uniform, but is constant with time, then it is intuitively obvious that, to optimize the average SNqR, the quantization levels used most should introduce the least quantization noise. One way to arrange this is to adopt non-linear quantization or, equivalently, companding. Non-linear quantization is illustrated in figure 3.69-a.

Figure 3.69 Quantization characteristics

If the information signal pdf has small amplitude for a large fraction of the time and large amplitude for a small fraction of the time (as is usually the case), then the step between adjacent quantization levels is made small for low levels and larger for higher levels. (If the quantization intervals are made directly proportional to the sample value, the SNqR will be constant for all signal levels.) When the quantization intervals are not uniform (non-linear quantizing), a non-linear relationship exists between the code words and the sample values they represent.

Consider a PCM system with n = 4 bits; this gives M = 2⁴ = 16 levels. The step size is q = 2Em/M = 2/16 = 1/8 for Em = 1. The quantization error is Єq = q/2 = 1/16 of the peak value. If the peak voltage is 16 V, the maximum Єq is 1 V. For small sample amplitudes of 2 or 3 V this error is quite high, roughly 30-50 %, but for large amplitudes near 15 or 16 V it is comparatively small (about 6 %).
Consider the two scales shown in figure 3.70-a.

Figure 3.70-a Linear and non-linear quantizing scales

Comparing sample 1 with the linear scale, the sample is increased by 0.25 V, from 1.75 V to 2 V. The percentage quantization distortion added to the sample is determined as:

distortion = |actual − quantized| / (actual + quantized) × 100 = 6.7 %

Notice that the same result is obtained when comparing sample 1 with the non-linear scale. Comparing sample 2 with the linear scale, it is increased by 0.25 V, from 4.75 V to 5 V, which gives a percentage quantization noise of 2.56 %. Comparing sample 2 with the non-linear scale, it is decreased in amplitude by 0.75 V, from 4.75 V to 4 V as shown, which gives a percentage quantization noise of 8.57 %.

The above shows that the difference in the percentage quantization noise added between the two samples, when compared with the linear scale, is 6.7 − 2.56 = 4.14 %, while the difference when compared with the non-linear scale is 8.57 − 6.7 = 1.87 %. The non-linear scale thus produces a smaller spread in the amount of distortion added (1.87 %, against 4.14 % for the linear scale). In other words, the non-linear scale tends to add approximately the same percentage quantization noise to both samples, irrespective of sample amplitude. In practice this is achieved more accurately because the non-linear scale has more quantization steps for small amplitudes than for large amplitudes, which reduces the percentage quantization noise for small-amplitude samples but increases it for large-amplitude samples.
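The percentage figures above can be verified directly using the text's distortion formula. A short sketch reproducing the two-sample comparison:

```python
# Percentage-distortion formula as used in the text:
# |actual - quantized| / (actual + quantized) * 100
def pct_distortion(actual, quantized):
    return abs(actual - quantized) / (actual + quantized) * 100

linear = [pct_distortion(1.75, 2.0), pct_distortion(4.75, 5.0)]
nonlinear = [pct_distortion(1.75, 2.0), pct_distortion(4.75, 4.0)]
print([round(v, 2) for v in linear])      # [6.67, 2.56]
print([round(v, 2) for v in nonlinear])   # [6.67, 8.57]
```

The spread between samples is 4.11 percentage points on the linear scale and 1.9 on the non-linear one, matching the comparison in the text.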
The linear scale, by contrast, produces a very small percentage distortion for large-amplitude samples but a large percentage distortion for small-amplitude samples. The non-linear scale is preferred because the quantization noise it adds tends to be more independent of the sample amplitude than the quantization noise added by the linear scale.

Companding (compressing-expanding) achieves the same result by compressing the information signal with a non-linear amplitude characteristic prior to linear quantization, and expanding the reconstructed information signal with the inverse characteristic. This system uses a compressor amplifier at the transmitter, with greater gain for low-level than for high-level signals; the compressor reduces the quantizing error for small signals. The effect of compression on the signal can be reversed by using an expander at the receiver, with a gain characteristic that is the inverse of that at the transmitter. Companding is also useful for maintaining, as nearly as possible, a constant SNqR for all signal levels, as shown in figure 3.71. Since quantization noise power is proportional to q², the RMS quantization noise voltage is proportional to q. If SNqR is to be constant for all signal levels, q must clearly be proportional to the signal level, i.e. Em/q must be constant.

Figure 3.71 Effect of companding

One way to understand the companding process is to think of first compressing the dynamic range of the analogue signal, prior to transmission, by compressor circuitry that amplifies low levels more than high levels. After this we may use linear quantization, and the signal values after compression and linear quantizing will in effect be non-uniformly quantized. In the decoder of the receiver we use linear decoding to reproduce the compressed sample values, then low-pass filter the sample sequence to reproduce the compressed analogue signal.
We then expand this analogue signal by amplifying low levels less than high levels, cancelling the distortion that was produced by the compressor in the encoder. After linear decoding in the receiver the noise level is the same at any sample level; in expansion a low-level signal is reduced to its original value and the quantizing noise is attenuated with it. This makes the noise level lower at low signal levels than at high signal levels and improves the SNqR at low signal levels. This improvement of the average SNqR at low analogue signal levels is essential because noise is most disturbing at low signal levels, whereas quantizing noise does not disturb the listener very much when the signal level is high.

It is clear that coding performance is improved by using non-uniform quantization intervals: the SNqR is maintained as nearly constant as possible for all signal levels. Companding is thus a means of improving the dynamic range of a communication system. With PCM, companding may be accomplished using analogue or digital techniques. Early PCM systems used analogue companding, whereas more modern systems use digital companding.

■ Analog Companding

Historically, analogue compression was implemented using specially designed diodes inserted in the analogue signal path in a PCM transmitter, prior to the sample-and-hold circuit. Analogue expansion was likewise implemented with diodes placed just after the low-pass filter in the PCM receiver. Figure 3.72 shows the complete basic process of an analogue non-uniform quantizing (companding) system. In the transmitter the dynamic range of the analogue signal is compressed, sampled, and then converted to a linear PCM code. In the receiver the PCM code is converted to a PAM signal, filtered, and then expanded back to its original dynamic range. Different signal distributions require different companding characteristics.
Figure 3.72 PCM system with complete analog non-uniform quantization

For instance, voice-quality telephone signals require a relatively constant SNqR performance over a wide dynamic range, which means that the distortion must be proportional to signal amplitude for all input signal levels. This calls for a logarithmic compression characteristic, which would require an infinite dynamic range and an infinite number of PCM codes and is, of course, impossible to achieve. However, two nonlinear coding schemes for analogue companding are currently in use that closely approximate a logarithmic function; they are often called log-PCM codes. They have been standardized internationally for speech by the International Telecommunication Union (ITU), and are known as the A-law, used in Europe and countries following the European standard, and the μ-law, used in North America and Japan. Some key points about these coding schemes:

1- Companding curves are based on the statistics of human voice, and many good solutions can be found.

2- The two schemes provide much the same quality, but they are not compatible.

3- A conversion device, a transcoder, is needed between countries using different standards.

■ Digital Companding

Digital companding involves digital compression in the transmitter: the analogue input signal is first sampled and converted to a linear PCM code, and the linear code is then digitally compressed. In the receiver the compressed PCM code is expanded prior to PCM decoding (conversion back to analogue). Figure 3.73 shows the block diagram of a digitally companded PCM system.

Figure 3.73 Digitally companded PCM system

The most recent digitally compressed PCM systems use a 12-bit linear PCM code and an 8-bit compressed PCM code.
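The continuous μ-law characteristic, y = sgn(x) · ln(1 + μ|x|) / ln(1 + μ) with μ = 255, can be sketched together with its inverse expander. This is only the analytic compression curve on normalized samples, not a full G.711 codec (which also quantizes the compressed value to 8 bits); the function names are this sketch's own.

```python
import math

MU = 255.0   # mu-law constant used in North America and Japan

def mu_compress(x):
    """Compress a sample normalized to [-1, 1]: low levels get more gain."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_expand(y):
    """Inverse characteristic, applied at the receiver."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

for x in (0.01, 0.1, 1.0):
    y = mu_compress(x)
    print(round(y, 3), round(mu_expand(y), 3))  # small inputs boosted most
```

Note how a 0.01 input is mapped to roughly 0.23 of full scale before quantization, which is exactly the favoring of small amplitudes described above.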
Chapter Four Noise Analysis
Communication system performance evaluation is an important measure that needs to apply to a wide range of system inputs. In analog communication, the output is required to be as close to the input waveform as possible. The most common measure of such closeness is the signal-to-noise (S/N) power ratio, since the human ear is sensitive to this quantity, which must exceed a certain threshold. In digital communication, the normal measure of performance is the rate at which bit errors occur, described by the probability of error, Pe. The channel characteristic plays an important role in studying, choosing, and designing modulation schemes. Modulation schemes are chosen or designed according to the channel characteristic in order to optimize their performance. Assume a channel noise model as shown in figure 4.1. In this model, the noise n(t) is added to the transmitted signal S(t), and the sum is low-pass filtered. The low-pass filter is normally part of the receiver. We include it in the model because, without it, the noise has significant components outside the frequency band of interest.
Figure 4.1 Additive noise in base band channel
The received signal consists of the transmitted signal, modified by the various distortions imposed by the transmission system, plus additional unwanted, random signals (noise) that are inserted somewhere between transmission and reception. It is the noise that is the major limiting factor degrading communication system performance. Understanding the properties of random noise allows one to control, by system design, its effect on receiver performance. Random noise is described in terms of its statistical properties: its amplitude cannot be predicted exactly at any instant, but it can be expressed in terms of a probability density function. Noise consists of both nonrandom (periodic) components and random components.
Nonrandom noise includes power-supply noise and noise due to the unwanted cross-coupling of large signals, such as that from a local oscillator. Note that the oscillator signal is considered to be noise if it occurs at a point in the system where it is not desired. Noise of human origin is usually the dominant factor in receiver noise. Most of this type of noise is deterministic and can (at least theoretically) be eliminated through proper circuit design, layout, and shielding. Random noise, by its varying nature, cannot be eliminated. It places a lower theoretical limit, much like the uncertainty principle in physics, on the receiver noise level. For system design and evaluation it is sufficient to describe the noise in terms of its mean-square or root-mean-square values. The mean-square power is normally frequency-dependent and is usually expressed as a power spectral density function (unit of power per hertz). The total noise power P is;
P = ∫_{f1}^{f2} P(f) df (4.1)
Random noise can be subdivided into External Noise, such as atmospheric and interstellar noise over which the receiver designer has no control, and noise generated by sources external to the receiver that can be eliminated by removing its sources; and Internal Noise, occurring inside the receiver. The most common form of random noise originating inside the receiver is thermal noise, also known as component noise.
4.1 Additive White Gaussian Noise Channel
The additive white Gaussian noise (AWGN) channel is a universal channel model for analyzing modulation schemes. In this model, the channel does nothing but add white Gaussian noise to the signal passing through it. This implies that the channel's amplitude-frequency response is flat (thus with unlimited or infinite bandwidth) and its phase-frequency response is linear for all frequencies, so that modulated signals pass through it without any amplitude loss or phase distortion of frequency components.
Using figure 4.1, the received signal is simply given by;
r(t) = S(t) + n(t) (4.2)
where n(t) is the additive white Gaussian noise. The whiteness of n(t) implies that it is a stationary random process with a flat power spectral density (PSD) for all frequencies. It is convenient to assume its two-sided PSD as;
N(f) = No/2, −∞ < f < ∞ (4.3)
This implies that a white process has infinite power. The noise samples are independent since the process is Gaussian, such that at any time instant the amplitude of n(t) obeys a Gaussian probability density function given by;
P(η) = (1/√(2πσ²)) exp(−η²/2σ²) (4.4)
where η is used to represent the values of the random process n(t) and σ² is the variance of the random process. It is interesting to note that σ² = ∞ for the AWGN process, since σ² is the power of the noise, which is infinite due to its "whiteness". Band-limiting the noise causes σ² = No/2. Then the probability density function (PDF) of n can be written as;
P(n) = (1/√(πNo)) exp(−n²/No) (4.5)
Strictly speaking, the AWGN channel does not exist, since no channel can have an infinite bandwidth. However, when the signal bandwidth is smaller than the channel bandwidth, many practical channels are approximately AWGN channels. For example, line-of-sight (LoS) radio channels, including fixed terrestrial microwave links and fixed satellite links, are approximately AWGN channels when the weather is good. Wideband coaxial cables are also approximately AWGN channels, since there is no interference other than the Gaussian noise. Here, all modulation schemes are studied for the AWGN channel. The reason for doing this is two-fold. First, since some channels are approximately AWGN channels, the results can be used directly. Second, additive Gaussian noise is ever present, regardless of whether other channel impairments such as limited bandwidth, fading, multi-path, and other interferences exist or not. Thus the AWGN channel is the best channel that one can get.
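The channel model of equation (4.2) can be simulated directly; the following sketch uses my own illustrative values (No = 2, a constant transmitted signal, and a fixed seed), none of which come from the text, and simply checks that the band-limited noise samples behave as a zero-mean Gaussian with variance No/2:

```python
import random
import statistics

# Sketch of the AWGN channel of equation (4.2): r(t) = S(t) + n(t),
# with band-limited noise of variance sigma^2 = No/2 (illustrative values).
No = 2.0
sigma = (No / 2) ** 0.5          # here sigma = 1

rng = random.Random(1)           # fixed seed for a repeatable sketch
s = [1.0] * 100_000                           # a constant "transmitted" signal
r = [x + rng.gauss(0.0, sigma) for x in s]    # received signal with AWGN added

noise = [ri - si for ri, si in zip(r, s)]
print(round(statistics.mean(noise), 2))    # sample mean of n(t), ≈ 0
print(round(statistics.pstdev(noise), 2))  # sample std, ≈ sigma = 1
```

With enough samples the empirical mean and standard deviation approach the zero mean and σ = √(No/2) of equation (4.4).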
The performance of a modulation scheme evaluated in this channel is an upper bound on its performance. When other channel impairments exist, the system performance will degrade. The extent of degradation may vary for different modulation schemes. The performance in AWGN can serve as a standard in evaluating the degradation, and also in evaluating the effectiveness of impairment-combating techniques. When the channel bandwidth is smaller than the signal bandwidth, the channel is band-limited. Severe bandwidth limitation causes inter-symbol interference (ISI); that is, digital pulses extend beyond their transmission duration (symbol period Ts) and interfere with the next symbol, or even with several following symbols. ISI causes an increase in the bit error probability (Pe), or bit error rate (BER) as it is commonly called. When increasing the channel bandwidth is impossible or not cost-efficient, channel equalization techniques are used to combat ISI. Throughout the years, numerous equalization techniques have been invented and used, and new equalization techniques continue to appear.
4.2 Thermal Noise
Thermal noise is an inherent source of random noise: thermal agitation causes random motion of the conduction electrons inside components. This produces a minute current with energy uniformly distributed over the frequency spectrum, and it is therefore often called white noise. As shown in figure 4.2, the r.m.s thermal noise voltage generated in an impedance Z(f) in a frequency interval W is given by;
Vn² = 4 k T R(f) W (4.6)
where;
W : operating frequency bandwidth, in Hz
R(f) : resistive component of the impedance Z(f), in ohm
T : absolute temperature, in K
k : Boltzmann's constant, 1.38 × 10⁻²³ J/K
Since the real part of the impedance R(f) will in general be a function of frequency, the thermal noise voltage will also be frequency-dependent.
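Equation (4.6) is easy to evaluate numerically; the sketch below uses an illustrative 50-ohm resistor in a 3-kHz bandwidth at the 290 K room-temperature reference (my own example values, not from the text):

```python
import math

# Sketch of equation (4.6): r.m.s thermal noise voltage Vn = sqrt(4*k*T*R*W)
# for a purely resistive (frequency-flat) impedance.
k = 1.38e-23      # Boltzmann's constant, J/K

def thermal_noise_vrms(r_ohm, bandwidth_hz, temp_k=290.0):
    """r.m.s thermal noise voltage of a resistor R over bandwidth W."""
    return math.sqrt(4.0 * k * temp_k * r_ohm * bandwidth_hz)

v = thermal_noise_vrms(50.0, 3e3)
print(round(v * 1e9, 1))   # ≈ 49 nV for 50 ohm in a 3-kHz band
```

Note that quadrupling the resistance only doubles the noise voltage, since the voltage grows with the square root of R.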
Figure 4.2 A resistor together with the mean-square thermal noise voltage
If a resistor is connected to a frequency-dependent network as shown in figure 4.3, then the total noise at the output will be given by;
Vno² = 4 k T ∫_0^∞ R G(f) df (4.7)
where G(f) is the magnitude squared of the frequency-dependent transfer function between the input and the output voltages, or;
G(f) = |Vo(f)|² / |Vin(f)|² (4.8)
Figure 4.3 A resistor connected to a linear network with a frequency-dependent transfer function G(f)
Since in this case R is not a function of frequency, equation 4.7 is equivalent to;
Vno² = 4 k T R ∫_0^∞ G(f) df (4.9)
The integral of the magnitude squared of the transfer function (normalized for unity gain) is referred to as the Noise Bandwidth Bn of the system. The noise bandwidth differs from the system's 3-dB bandwidth in that it is the area under the curve G(f). A system can have a narrow 3-dB bandwidth and yet have a large noise bandwidth. If this resistance is connected to a matched load RL, the noise power delivered to it is given by;
Pn = (Vn/2)² / RL = k T W (4.10)
The available noise power is thus proportional both to bandwidth and to temperature, but is independent of the resistance value. Any system or circuit operating at a temperature above absolute zero will inherently display thermal noise. The noise power density delivered into a circuit at a temperature of 290 K (i.e., 17 °C) is 4 × 10⁻²¹ W/Hz of bandwidth, i.e., −174 dBm/Hz. Thermal noise, often called resistance noise, white noise, or Johnson noise, is completely random and of a Gaussian nature.
Connected Resistors. Equation 4.6 states that the noise voltage is the r.m.s value of a randomly varying signal.
If two resistors are connected in series, as shown in figure 4.4-a, it is the noise voltages squared, not the noise voltages, which are added;
Vn² = Vn1² + Vn2² = 4kT (R1 + R2) W (4.11)
Figure 4.4-a Two series resistors equivalent noise voltage
Similarly, for two resistors connected in parallel as shown in figure 4.4-b, with;
Vn1² = 4kTW R1, Vn2² = 4kTW R2
Figure 4.4-b Two parallel resistors equivalent noise voltage
the equivalent noise voltage is;
Vn² = 4kTW R1 R2 / (R1 + R2) (4.12)
The noise sources described here all refer to r.m.s quantities, and a noise source has no polarity associated with it. In order to keep the notation simple, the noise sources are expressed in terms of the square of the voltage or current, and the values referred to are always the mean-square values.
Current Source Representation. Equation 4.6 states that thermal noise can be represented by a voltage source in series with a noiseless resistor. Norton's theorem shows that the voltage noise source illustrated in figure 4.5-a can also be represented by a current generator in parallel with a noiseless resistor, as shown in figure 4.5-b.
Figure 4.5 Equivalent noise source a- Voltage noise source b- Current generator with a noiseless resistor
Excess resistor noise. The thermal-noise power density generated in resistors does not vary with frequency, but many resistors also generate a frequency-dependent noise referred to as excess noise. The excess noise power has a 1/f spectrum; the excess noise voltage is inversely proportional to the square root of the frequency. Noise that exhibits a 1/f power spectral characteristic is often referred to as pink noise. The amount of excess noise generated in a resistor depends upon the resistor's composition. Carbon resistors generate the largest amount of excess noise, whereas the amount generated in wire-wound resistors is usually negligible.
However, the inductance inherent in wire-wound resistors restricts them to low-frequency applications. Metal film resistors are usually the best choice for high-frequency communications circuits, where low noise and constant resistance are required.
4.3 Active Device Noise
Besides the thermal noise of resistors, the other sources of random noise of importance in network design are the active devices: integrated circuits, diodes, and transistors. The two main types of device noise are 1/f, or flicker, noise and shot noise. Flicker noise is a low-frequency phenomenon in which the noise power density follows a 1/f^α curve, where the value of α is close to unity. An electric current composed of discrete charge carriers flows through an active device. The discrete charge-carrier fluctuations are present in the current crossing a barrier where the charge carriers pass independently of one another. Examples of such barriers are the semiconductor p-n junction, in which the passage takes place by diffusion, and the cathode of a vacuum tube, where electron emission occurs as a result of thermal motion. The current fluctuations represent a noise component referred to as shot noise, which can be represented by an appropriate current source in parallel with the dynamic resistance of the barrier across which the noise originates. The spectral density of this shot noise is given by;
ino² = k q Io A²/Hz (4.13)
where q is the charge on an electron, Io is the direct current, and k is a constant that varies from device to device and also depends on how the junction is biased. In a junction transistor k is equal to 2. Figure 4.6 illustrates the shot-noise equivalent circuit for a forward-biased p-n junction.
Figure 4.6 Shot noise equivalent circuit
Shot noise, like thermal noise, has a uniform power spectral density, and the total noise current squared is proportional to the bandwidth.
That is,
In² = ino² W (4.14)
The current source represented in figure 4.6 has no direction associated with it, since it is a mean-square value. If the additional 1/f noise is included, the total mean-square current noise density can be given as;
in²(f) = ino² (1 + fL/f) A²/Hz (4.15)
where fL is the frequency at which the shot-noise current is equal to the 1/f-noise current. fL varies from device to device and is usually determined empirically. Figure 4.7 shows the power density of the total noise current as a function of frequency. At frequencies below fL the noise power density falls with increasing frequency at a rate of 3 dB per octave, while at frequencies much higher than fL the noise power is equal to the shot noise and is independent of frequency.
Figure 4.7 Spectral density function of the total noise current with 1/f noise
If the noise current is connected to a frequency-dependent network, the mean-square current at the output will be;
Io² = ∫_0^∞ Ai(f) in² df (4.16)
where Ai(f) is the magnitude squared of the current transfer function between input and output.
4.3-1 Noise in Transistor Amplifiers
The previous discussion has shown that any amplifier must generate noise, which consists of the thermal noise generated in the resistors plus the shot and 1/f noise generated in the active devices. An equivalent circuit of a transistor amplifier, which identifies the shot-noise sources, is shown in figure 4.8.
Figure 4.8 Transistor amplifier with noise sources
inos represents the shot-noise current density due to the bias current at the output of the device, and inis is the shot-noise current density due to the input bias current. The other noise source is due to the load resistor RL. If the transistor output impedance is much larger than RL, the output noise voltage due to inos will be;
Vnos² = inos² RL²
It is convenient to refer all of the noise sources to the input.
The amplifier voltage gain is approximately;
Vo/Vi = gm RL
so the output noise current source can be replaced by an equivalent input noise voltage source;
V'nos² = inos² RL² / (gm² RL²) = inos² / gm² (4.17)
The noise source V'nos² can be interpreted as due to thermal noise of the trans-conductance gm. Likewise, the thermal noise of the load can also be represented as a noise V'no² in series with the input, where;
V'no² = 4 k T RL / (gm² RL²) (4.18)
Normally V'no² << V'nos² and can be neglected. The amplifier, with the noise sources referred to the input, can be represented as shown in figure 4.9. The amplifier is considered noiseless, and the noise is represented by the noise voltage and current sources. The model as represented has been simplified by assuming that the voltage gain and amplifier trans-admittance are independent of frequency.
Figure 4.9 Transistor amplifier with noise equivalent circuit
In this case the total mean-square noise voltage will be proportional to frequency, but in the more general case frequency-dependent transfer functions must be used, and the total mean-square noise voltage can only be obtained by integrating the instantaneous values over the frequency region of interest. The model also does not include any thermal noise present in the amplifier. Transistor noise can originate from one of two sources;
4.3-1.1 BJT Noise
The principal noise sources in a bipolar transistor are the two shot-noise sources and the thermal noise created in the base spreading resistor r'b;
Vn² = 4 k T r'b V²/Hz (4.19)
inis² = 2 q IB A²/Hz (4.20)
inos² = 2 q IC A²/Hz (4.21)
The noise current source inis² is connected between the base and emitter junctions, and the other current source is connected between the collector and emitter junctions.
At high frequencies (above fβ of the transistor) the noise currents increase with increasing frequency, and the complete expression is;
inc² = in² (1 + hfe² f²/fT²) (4.22)
4.3-1.2 FET Noise
The FET noise sources (excluding excess noise) are given by the following expressions;
Vn² = 2.8 k T / gm V²/Hz (4.23)
and,
inis² = 2 q Ig A²/Hz (4.24)
where gm is the mutual conductance and Ig is the gate leakage current. The noise sources of MOSFETs and JFETs are the same, but Ig is negligible for MOSFETs. The shot noise increases with frequency at very high frequencies; the total noise current is;
ins² = 2 q Ig + (2.8 k T / gm) ω² Cgs'² A²/Hz (4.25)
where Cgs' is approximately two-thirds of the transistor gate-to-source capacitance. The transistor amplifier noise model will subsequently be used for the design of low-noise amplifiers, but first one of the parameters most often used to characterize the "noisiness" of a system, the Noise Figure, will be described.
4.4 Signal to Noise Ratio, SNR
In transmission engineering, the "noisiness" of a signal is usually specified in terms of the signal-to-noise ratio, SNR (S/N), defined as the ratio of signal power to noise power, which is in general a function of frequency. It is perhaps more frequently used than any other criterion when designing a telecommunication system. The additive white noise has a power of No watts/Hz. The noise power at the output of the receiver filter is then NoW watts, and;
SNR = S / (No W) (4.26)
SNR expresses in dB the amount by which a signal level exceeds the noise within a specified bandwidth. The output SNR of a baseband analog receiver depends on the input SNR, the filtering characteristic of the receiver, and the noise added by the electronics in the receiver. The SNR at the input to the receiver depends on the characteristics of the channel and of the noise that intrudes during transmission.
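Equation (4.26) can be sketched numerically; the signal power, noise density, and bandwidth below are illustrative values of my own choosing, not from the text:

```python
import math

# Sketch of equation (4.26): SNR = S / (No * W), expressed in dB.
def snr_db(signal_watt, no_watt_per_hz, bandwidth_hz):
    """SNR in dB for signal power S over noise density No in bandwidth W."""
    return 10.0 * math.log10(signal_watt / (no_watt_per_hz * bandwidth_hz))

# A 1-mW signal with No = 1e-9 W/Hz in a 100-kHz band: noise power is 1e-4 W.
print(round(snr_db(1e-3, 1e-9, 1e5), 1))   # 10.0 dB
```

Doubling the receiver bandwidth admits twice the noise power and therefore costs 3 dB of SNR, which is why the filter bandwidth W appears explicitly in the formula.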
The performance of the baseband analog receiver also depends on nonlinearities in the electronic devices. This is expressed in measures such as dynamic range and harmonic distortion. Dynamic range usually refers to the ratio (in dB) of the strongest to the weakest signal that a receiver can process without noise or distortion exceeding acceptable limits. Although this sounds like a simple concept, its application to practical transmission is quite complex. For example, the behavior of a receiver when a single sinusoid forms the input may be quite different from that when the input is a complex sum of many signal components. Harmonic distortion is normally measured by setting the receiver input to be a single sinusoid. Nonlinearities in the receiver change this sinusoid into a periodic function with harmonics. The ratio of the power of the harmonics to the power of the fundamental is a measure of harmonic distortion. Each signal to be transmitted requires a minimum S/N to satisfy the customer or to make the receiving instrument function within certain specified criteria. We might require the following S/N for the corresponding end instruments: Voice 30 dB and Video 45 dB, based on customer satisfaction; Data 15 dB, based on a specified error rate. In figure 4.10, a 1000-Hz signal has an S/N of 10 dB: the level of the noise is 5 dBm and that of the signal 15 dBm. Thus;
(S/N)dB = signal level in dBm − noise level in dBm (4.27)
Figure 4.10 Signal-to-Noise ratio
4.5 Noise Figure
The common problem in communication is dealing with weak signals. A signal which is stronger than the background noise can be detected by the receiver. The challenge is to detect a signal so weak that it is buried in noise. In such a case, using an amplifier will not help the situation, because the amplifier will amplify both the input signal and the input noise.
In fact, because the amplifier itself has electronic components, like the filter or the entire receiver, it will contribute additional noise of its own, known as the added noise Na, which appears at the receiver output as;
No = G Ni + Na (4.28-a)
So = G Si (4.28-b)
where;
Si, So : input and output signal power, respectively
Ni, No : input and output noise power, respectively
G : amplifier power gain.
Due to this added noise, the S/N at the output of the amplifier is less than the S/N at the input. Thus the amplifier does not improve the S/N; on the contrary, it degrades it. The amount of degradation is called the noise figure, F. It is a standard quantitative measure (figure of merit) of how much noise is added by the receiver. It is defined as the ratio of the signal-to-noise ratio at the input port to that at the output port, i.e.;
F = (signal-to-noise ratio at input) / (signal-to-noise ratio at output)
F = (Si/Ni) / (So/No) = No / (G Ni) (4.29)
The output noise power will be;
No = G Ni F (4.30)
Equating equations 4.28-a and 4.30, then;
G Ni + Na = G Ni F (4.31-a)
Na = G Ni (F − 1) (4.31-b)
F = 1 + (Na/G)/Ni = 1 + N'a/Ni (4.32)
where N'a is the added noise seen at the input. The output noise power given by equation 4.28-a may be written as;
No = G (Ni + N'a) (4.31-c)
Also, equation 4.31-a may be written as;
Ni F = Ni + N'a (4.31-d)
which is the same result given by equation 4.32;
N'a = Ni (F − 1) (4.33)
Note that in an ideal receiver no noise is added, so the receiver has a unity noise figure. A receiver will usually improve the signal-to-noise ratio through filtering of the input noise. The noise figure is often expressed in decibels, NF, defined as;
NF = 10 log10 F (4.34)
It equals 0 dB for an ideal noiseless network.
Example 4.1; Calculate the variation in noise temperature as the noise figure varies from 1 to 1.6, assuming the reference temperature is 290 K. Comment.
Solution; With F = 1 (ideal system), then;
Teqi = 290 (1 − 1) = 0, no added noise.
With F = 1.6 (actual system), then;
Teqa = 290 (1.6 − 1) = 174 K.
Thus the change of 174 K in the equivalent noise temperature is much greater than the corresponding change of 0.6 in the noise figure, so the Teq measure is preferred.
Variation of Noise Figure with Frequency
As frequency increases from a low frequency, the noise figure decays exponentially from a high value until an optimum constant value is reached. This response is due to the effect of the input noise. As the frequency increases further, the noise figure increases exponentially. This response is due to the decrease in amplifier gain at high frequency. The noise figure was once considered sufficient for characterizing a receiver's performance, but today large-amplitude, unwanted signals are often present at the receiver input, and the noise figure is not adequate to completely describe a receiver's performance. For this reason, the term "dynamic range" is introduced to describe receiver performance completely.
4.6 Receiver Sensitivity
Most systems create noise, which limits their ability to process weak signals. So, one of the most important, critical factors to be considered in evaluating the performance of a communications system is its ability to detect low-amplitude signals that are adjacent in frequency to large-amplitude, unwanted signals. It determines the minimum signal level that can be detected by the receiver, which is considered a measure of the receiver sensitivity. The available input-signal level Si for a given output signal-to-noise ratio (S/N)o is referred to as the system sensitivity, or noise floor. The input voltage level corresponding to Si is called the minimum detectable signal level. Although the signal-to-noise ratio will depend on the system frequency response, we will assume for simplicity that the frequency response can be represented by the ideal characteristic shown in figure 4.11.
Although this frequency response can never be realized in an actual receiver, it is closely approximated in many communication systems, especially those which include a narrow band-pass filter.
Figure 4.11 Frequency response of ideal LPF
With the ideal frequency characteristic, equation 4.7 for the total available noise power from a resistor can be written as;
Vn²/(4R) = kTW (4.35)
The input signal power is given by;
Si = F (kTW) (S/N)o (4.36)
where (S/N)o is the output signal-to-noise ratio, No now being the total noise power at the output. This input power level under the matching condition shown in figure 4.12 (source resistance RS = load resistance RL) may be expressed as;
Si = Vs² / (4 RS) (4.37)
Sensitivity is always specified for a given signal-to-noise ratio. Although the required output signal-to-noise ratio may not be the same as that used in the sensitivity specification, sensitivity does provide an objective measure for comparing receiver performance.
Figure 4.12 Maximum power transfer circuit
Noise Figure of Cascaded Networks
If the noise figure and power gain of the individual networks are known, the noise figure of cascaded networks is readily determined.
Consider an N-stage receiving system with noise figures F1, F2, …, FN and power gains G1, G2, …, GN; the overall noise figure is thus;
FT = F1 + (F2 − 1)/G1 + (F3 − 1)/(G1 G2) + … + (FN − 1)/(G1 G2 … GN−1) (4.38)
So, equation 4.29 may be rewritten for the cascade as;
FT = (Si/Ni) / (So/No) = No / (Ni GT)
and the total output noise power will be;
NoT = Ni FT GT (4.39)
For simplicity, consider a two-stage receiving system. The output noise power from the first stage will be;
No1 = Ni2 = Ni1 G1 + Na1
and the output noise power from the second stage will be;
No2 = No = Ni2 G2 + Na2 = (Ni1 G1 + Na1) G2 + Na2 = Ni1 G1 G2 + Na1 G2 + Na2
With GT = G1 G2, then;
No = (Ni + Na1/G1 + Na2/(G1 G2)) GT
No = NoT = (Ni + N'aT) GT (4.40)
Comparing equation 4.40 with equation 4.33, we find;
Ni FT = Ni + N'aT
FT = (Ni + N'aT)/Ni = 1 + N'aT/Ni (4.41)
N'aT = Ni (FT − 1)
F = (available input noise power + added noise) / (available input noise power) (4.42)
where N'aT is the total noise power added by the receiving system, measured at the input of the system, i.e.;
N'aT = Na1/G1 + Na2/(G1 G2) + Na3/(G1 G2 G3) + … (4.43)
The available input noise power Ni is equal to kTW, and the noise added by network 1, seen at its input, is;
F1 Ni − Ni = N'a1 = kTW (F1 − 1) (4.44)
Likewise, the noise added by network 2, seen at its input, is;
N'a2 = kTW (F2 − 1) (4.45)
and the noise added in network 2, referred to the input, is;
N"a2 = N'a2/G1 = kTW (F2 − 1)/G1 (4.46)
FT = F1 + (F2 − 1)/G1 (4.47)
Equation 4.47 states that if the power gain of the first stage is large, the overall noise figure will be essentially that of the first stage. In other cases, the noise figure of the second stage, and even of succeeding stages, will be an important factor in the overall noise figure.
Example 4.2; A two-stage receiver system: the first stage has a noise figure of 2 dB and a gain of 12 dB; the second stage has a noise figure of 6 dB and a power gain of 10 dB. What is the overall noise figure?
Solution; The noise figures must first be converted from dB to ratio values:
F1 = 1.59, F2 = 4
The corresponding gain values are;
G1 = 15.9, G2 = 10
The overall noise figure is;
FT = 1.59 + (4 − 1)/15.9 = 1.779
and the noise figure NF of the two-stage system is;
NF (dB) = 10 log10 1.779 = 2.5 dB
Example 4.3; If G1 and G2 of the above example are independent of frequency, what will be the total output noise power of the cascaded system in a 3-kHz bandwidth? The operating temperature is 290 K.
Solution; Since Ni + N'a = FT kTW, with;
kTW = 1.38 × 10⁻²³ × 290 × 3 × 10³ = 1.2 × 10⁻¹⁷ Watt
Ni + N'a = 1.779 kTW = 2.14 × 10⁻¹⁷ Watt
and the output noise power will be;
NoT = G1 G2 (Ni + N'a) = 159 × 2.14 × 10⁻¹⁷ = 340 × 10⁻¹⁷ Watt
or, NoT = Ni FT GT, with;
Ni = kTW = 1.2 × 10⁻¹⁷ Watt
GT = G1 G2 = 15.9 × 10 = 159
NoT = 1.2 × 10⁻¹⁷ × 1.779 × 159 = 340 × 10⁻¹⁷ Watt
Example 4.4; What minimum input signal will give an output SNR of 0 dB in a system that has an input impedance equal to 50 ohm, a noise figure of 8 dB, and a bandwidth of 2.1 kHz?
Solution; For a 0-dB output signal-to-noise ratio and a 290 K operating temperature, equation (4.42) can be written as;
10 log Si = NF − 144 + 10 log W
where Si is in milliwatts and W is in kilohertz. For a bandwidth of 2.1 kHz,
Si = −133 dBm (133 dB below the 1-mW level)
Si is the available input power and is related to the input signal voltage by equation (4.37). Thus;
Si = Vs² / (4 Rs) = 5.02 × 10⁻¹⁷ Watt
Since Rs = 50 Ω,
Vs = 0.10 μVolt
That is, for these specifications the noise floor for an output signal-to-noise ratio of 1 is 0.10 μVolt.
Example 4.5; What is the minimum detectable signal, or noise floor, of the system in the previous example for an output signal-to-noise ratio of 10 dB?
Solution; In this case the relation of Example 4.4 becomes;
10 log Si = NF − 134 + 10 log W = −123 dBm
Si = 5 × 10⁻¹⁶ Watt
and the minimum detectable signal is;
Vs = 0.32 μVolt
Example 4.6; Consider a receiver with 50 ohm input impedance, 3-kHz bandwidth, and a 4-dB noise figure. The noise floor of this receiver for an output signal-to-noise ratio of 10 dB is calculated, using equations (4.37) and (4.42), as;
Si = −125 dBm = 3 × 10⁻¹⁶ W
Vs = 0.245 μV
An input signal of 0.245 μV will produce a 10-dB output signal-to-noise ratio. Now consider the performance of this receiver when it is connected to an antenna with a noise figure of 20 dB, which corresponds to a noise figure ratio of 100. Hence the antenna noise is seen to be;
N'ant = 99 × thermal noise = 99 kTW
The total input noise is the antenna noise plus the source noise, i.e., 100 kTW. Since the receiver noise figure is 2.5 (NF = 4 dB), the input signal required for a 10-dB output signal-to-noise ratio is;
Si = (100 + 2.5 − 1) kTW (S/N)o = 10 × 101.5 × 1.2 × 10⁻¹⁷ = 1.22 × 10⁻¹⁴ Watt
Thus the minimum detectable signal for a 10-dB output signal-to-noise ratio is;
Vs = 1.56 μVolt
This is much larger than the 0.245 μVolt required if there were no antenna noise.
Example 4.7; What would be the minimum detectable signal level in the previous example if a receiver with a noise figure of 10 dB were substituted?
Solution; Since the receiver noise figure is now 10, the same relation gives;
Si = (100 + 10 − 1) kTW (S/N)o = 1.31 × 10⁻¹⁴ Watt
and the minimum detectable signal is;
Vs = 1.62 μVolt
A 6-dB reduction in the receiver noise figure results in only a 0.3-dB improvement in the output signal-to-noise ratio, because the noise added by the receiver is much less than the antenna noise.
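The cascade formula (4.38) and the noise-floor relation used in Examples 4.2 through 4.6 can be sketched together as follows; the function and variable names are my own, and the 290 K reference temperature (kT giving −174 dBm/Hz) follows the text:

```python
import math

# Sketch reproducing Example 4.2 (cascade noise figure, equation 4.38) and
# Example 4.6 (receiver noise floor). Function names are illustrative.
def db_to_ratio(db):
    return 10 ** (db / 10.0)

def cascade_noise_figure(stages):
    """stages: list of (noise_figure_dB, gain_dB); returns overall F as a ratio."""
    f_total = 0.0
    gain_product = 1.0
    for i, (nf_db, g_db) in enumerate(stages):
        f = db_to_ratio(nf_db)
        # Each stage's excess noise (F - 1) is divided by the gain preceding it.
        f_total += f if i == 0 else (f - 1.0) / gain_product
        gain_product *= db_to_ratio(g_db)
    return f_total

def noise_floor_dbm(nf_db, bandwidth_hz, snr_out_db):
    """Si(dBm) = NF - 174 + 10 log10(W) + SNRo, at the 290 K reference."""
    return nf_db - 174.0 + 10.0 * math.log10(bandwidth_hz) + snr_out_db

def min_detectable_voltage(si_dbm, rs_ohm):
    """Matched-source signal voltage from Si = Vs^2 / (4 Rs), equation (4.37)."""
    si_watt = 1e-3 * 10 ** (si_dbm / 10.0)
    return math.sqrt(4.0 * rs_ohm * si_watt)

# Example 4.2: NF1 = 2 dB, G1 = 12 dB; NF2 = 6 dB, G2 = 10 dB.
F = cascade_noise_figure([(2.0, 12.0), (6.0, 10.0)])
print(round(10 * math.log10(F), 1))      # ≈ 2.5 dB overall noise figure

# Example 4.6: 50-ohm input, 3-kHz bandwidth, 4-dB NF, 10-dB output SNR.
si = noise_floor_dbm(4.0, 3e3, 10.0)
print(round(si, 1))                                       # ≈ -125.2 dBm
print(round(min_detectable_voltage(si, 50.0) * 1e6, 3))   # ≈ 0.245 microvolt
```

Working from exact dB conversions rather than the rounded ratio values gives an overall F of about 1.773 for Example 4.2, which rounds to the same 2.5 dB obtained in the text.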
The above examples illustrate that if the input noise is large, very little is gained by reducing the system noise figure below some acceptable level.
4.7 Equivalent Noise Temperature
The noise figure will normally lie between 1 and 10. For situations in which an expanded scale is needed, the system noise figure is usually expressed in terms of a noise temperature. The noise figure is given by;
F = 1 + N'a/Ni = 1 + N'a/(kTW) (4.48)
where T is the reference noise temperature. For a system with a number of noise-generating devices within it, we often refer to the system noise temperature Te in K. This is the temperature of a single noise source that would produce the same total noise power at the output: the added noise can be interpreted as the available noise from a resistor whose temperature is Te. That is;
F = 1 + Te/T (4.49)
or;
Te = (F − 1) T (4.50)
Te is known as the system equivalent noise temperature. The required signal-to-noise ratio at the receiver output will depend on the function of the receiver and on whether or not additional signal processing (such as correlation detection) is performed. An output signal-to-noise ratio between 0 and 10 dB is adequate for normal listening. The receiver noise figure is a measure of how much noise is added by the system. A low noise figure is often desirable, but there are situations in which this is of little importance. This is particularly true when the input noise is much greater than the noise added by the system; the numerical examples above illustrate this point. Noise comparisons of two receivers must be made with care, since the network with the lowest noise figure does not necessarily yield the highest output signal-to-noise ratio. The following section proves this important point.
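Equation (4.50) is the relation already used in Example 4.1; a minimal sketch, with the function name my own and the 290 K reference from the text:

```python
# Sketch of equation (4.50): Te = (F - 1) * T, reproducing Example 4.1.
T0 = 290.0  # reference temperature, K

def noise_temperature(f):
    """Equivalent noise temperature Te for a (linear, not dB) noise figure F."""
    return (f - 1.0) * T0

print(noise_temperature(1.0))           # ideal system: 0.0 K, no added noise
print(round(noise_temperature(1.6), 1)) # actual system: 174.0 K
```

The expanded scale is visible here: noise figures between 1 and 1.6 map onto temperatures spread between 0 and 174 K.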
4.8 Performance of Analogue Communication Systems in the Presence of Noise

Signals transmitted by any communication system via a channel are always received accompanied by a certain amount of noise added on the channel, as shown in figure 4.13.

Figure 4.13 Received signal accompanied with noise

This noise cannot be eliminated entirely from the system. The noise characteristic of a modulation system is evaluated by a parameter known as the figure of merit γ, defined as the ratio of SNRo to SNRi of a receiver. A modulation system with higher γ has better noise performance, and the adverse effect of noise is less.

4.8-1 SNR Characteristics of AM Demodulators

Assume the received AM signal with additive noise is;

VAMni(t) = Ec [1 + m Cos ωmt] Cos ωct + Vn(t)    (4.51)

where Vn(t) is the noise accompanying the AM signal;

Vn(t) = VnI(t) Cos ωct + VnQ(t) Sin ωct    (4.52)

VnI(t): in-phase noise component
VnQ(t): quadrature noise component

There are three methods commonly used for specifying SNRi at the AM detector input, where ηW denotes the noise power in the message bandwidth W (η is the noise power spectral density).

1- Carrier power-to-noise power, CNRi;

C/N = Ec² / (2ηW)    (4.53)

2- Sideband power-to-noise power, SNRisb;

Ssb/N = m²Ec² / (4ηW)    (4.54)

3- AM power-to-noise power, SNRiAM;

PAM/N = Ec² (1 + m²/2) / (2ηW)    (4.55)

4.8-1.1 Envelope Detection

The resultant input to the detector is given by the vector sum of the amplitude modulated waveform and the noise components, as indicated in figure 4.14.
Figure 4.14 Vector sum of AM signal with noise

It is clear that for large SNRi the demodulator input is;

VAMn(t) ≈ {Ec [1 + m Cos ωmt] + VnI(t)} Cos ωct    (4.56)

With an ideal AM envelope detector, the output will be;

VAMno(t) = a Ec [1 + m Cos ωmt] + a VnI(t)    (4.57)

So = a²m²Ec²/2 ,  No = a² V̄nI²

SNRo = m²Ec² / (2V̄nI²) = m²Ec² / (2ηW)    (4.58)

where V̄nI² is the mean-square value of VnI(t). The figure of merit in this case is;

γC = SNRo / CNRi = m²    (4.59)

It has its maximum value when m = 1;

SNRo = CNRi    (4.60)

For the comparison we find that;

γsb = SNRo / SNRisb = 2    (4.61)

SNRo is 3 dB greater than SNRisb. The realistic comparison is made with SNRiAM;

γAM = SNRo / SNRiAM = 2m² / (2 + m²)    (4.62)

With m = 1;

SNRo = (2/3) SNRiAM    (4.63)

SNRo (dB) = −1.76 dB + SNRiAM (dB)    (4.64)

There is an SNR degradation by an amount that increases as the modulation index decreases. This is because, at m = 1, 2/3 of PAM is contained in the carrier, which does not contribute at all to the signal power at the detector output. The situation improves when the carrier is suppressed; as equation 4.76 shows, this gives a factor-of-3 improvement. For poor SNRi conditions, the values of SNRo given by equations 4.59, 4.61, and 4.63 are not valid, as the performance of the envelope detector deteriorates rapidly. The envelope detector can therefore be employed only in good SNRi conditions, SNRi > 10 dB.

4.8-1.2 Coherent Detection

Using figure 4.15, with the received signal corrupted by noise given by equation 4.51.
The detector output is;

VAMn(t) = {Ec [1 + m Cos ωmt] Cos ωct}[E Cos ωct] + E VnI(t) Cos² ωct + E VnQ(t) Cos ωct Sin ωct    (4.65)

Figure 4.15 Coherent detector of a modulated signal accompanied by noise

Using an LPF to remove the high-frequency components and dc components of the demodulator output, the filtered output will be;

Von = (1/2) m E Ec Cos ωmt + (E/2) VnI(t)

So = m²E²Ec²/8 ,  No = E²ηW/4

SNRo at the detector filter output is;

SNRo = m²Ec² / (2ηW)

Then;

γAM = SNRo / SNRiAM = 2m² / (2 + m²)

which is the same as that for the envelope detector, but without the precondition that SNRi should be large. Coherent detection therefore maintains its SNR performance for all values of SNRi and is superior to the envelope detector in poor SNRi conditions.

4.8-2 SNR Characteristics of DSB-SC Demodulators

Figure 4.15 shows a DSB-SC receiver model, where the received signal corrupted by noise is;

VDSB-SCn(t) = [A Cos (ωc + ωm)t + A Cos (ωc − ωm)t] + Vn(t)    (4.66)

SiDSB = A² ,  Ni = ηW ,  SNRi = A² / (ηW)    (4.67)

After demodulation by the carrier signal, E Cos ωct, the demodulator output is given by;

Von = VDSB-SCn(t) × [E Cos ωct]

Using an LPF to remove the high-frequency components and dc components of the demodulator output, the filtered output will be;

Von(t) = A E Cos ωmt + (E/2) VnI(t)

So = A²E²/2 ,  No = E²ηW/4

SNRo at the detector filter output is;

SNRo = 2A² / (ηW)    (4.68)

then;

γDSB = SNRo / SNRi = 2    (4.69)

SNRo = 2 SNRi    (4.70)

This is the same result as given by equation 4.61 using a coherent detector for the AM signal. There is a 3-dB improvement at the detector output due to the arithmetic addition of power in the sidebands, which are translated down to baseband and added coherently. This doubles the message signal power within the bandwidth. Noise power, on the other hand, is not doubled, since the quadrature component is rejected.
4.8-3 SNR Characteristics of SSB-SC Demodulators

Using figure 4.15, with the received SSB-SC signal given by;

VSSB-SCn(t) = A Cos (ωc + ωm)t + Vn(t)    (4.71)

SiSSB = A²/2 ,  Ni = ηW ,  SNRi = A² / (2ηW)    (4.72)

After demodulation by the carrier signal, E Cos ωct, the demodulator output is given by;

Von = VSSB-SCn(t) × [E Cos ωct]

Using an LPF to remove the high-frequency components and dc components of the demodulator output, the filtered output will be;

Von = (AE/2) Cos ωmt + (E/2) VnI(t)

So = A²E²/8 ,  No = E²ηW/4

SNRo at the detector filter output is;

SNRo = A² / (2ηW)    (4.73)

Then;

γSSB = SNRo / SNRi = 1    (4.74)

SNRo = SNRi    (4.75)

As a comparison between the different AM situations:

* The noise performance of AM is inferior to that of DSB and SSB.
* Using equations 4.63 and 4.69, then;

(SNRo / SNRi)AM, m=1 / (SNRo / SNRi)sb = 1/3    (4.76)

with SNRisb = SNRiAM, m=1;

SNRosb = 3 SNRoAM, m=1

* Using equations 4.70 and 4.75, then;

(SNRo / SNRi)DSB / (SNRo / SNRi)SSB = 2    (4.77)

* Using equations 4.67, 4.72, and 4.77, then;

SNRiDSB = 2 SNRiSSB    (4.78)

SNRoDSB = 4 SNRoSSB    (4.79)

For the same input power to both DSB and SSB demodulators, the noise power at the input of the DSB demodulator is higher than that at the input of the SSB demodulator by 3 dB, since DSB operates at twice the transmission bandwidth. Thus, for the same signal power, SNRi for DSB transmission is 3 dB lower than that of SSB. The DSB system is also more susceptible to distortion due to selective fading.

* For AM demodulation, γAM, Synch. = γAM, Env.; moreover, synchronous detection is both complex and costly, hence it is seldom used for AM detection.
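The comparisons in equations (4.62)-(4.79) can be verified with a few lines. A minimal sketch, assuming the same sideband amplitude A, noise density η, and message bandwidth W for both systems (helper names are ours, not the text's):

```python
import math

def gamma_am(m):
    """Figure of merit of AM detection, Eq. (4.62)."""
    return 2 * m ** 2 / (2 + m ** 2)

def snr_dsb(a, eta, w):
    """DSB-SC input and output SNR, Eqs. (4.67) and (4.70)."""
    snr_i = a ** 2 / (eta * w)
    return snr_i, 2 * snr_i

def snr_ssb(a, eta, w):
    """SSB-SC input and output SNR, Eqs. (4.72) and (4.75)."""
    snr_i = a ** 2 / (2 * eta * w)
    return snr_i, snr_i

# Same (illustrative) parameters for both systems:
dsb_i, dsb_o = snr_dsb(1.0, 1e-6, 4e3)
ssb_i, ssb_o = snr_ssb(1.0, 1e-6, 4e3)
am_loss_db = 10 * math.log10(gamma_am(1.0))   # about -1.76 dB, Eq. (4.64)
```

For any common parameters, dsb_i/ssb_i = 2 (Eq. 4.78), dsb_o/ssb_o = 4 (Eq. 4.79), and the DSB figure of merit is twice that of SSB (Eq. 4.77).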
4.8-4 SNR Characteristics of FM Signals

Assuming the received FM signal is;

VFM(t) = Ec Cos (ωct + βf Sin ωmt)    (4.80-a)

The output of the discriminator is;

Vo(t) = K Δf Cos ωmt    (4.80-b)

The average signal power is;

So = K²Δf² / 2    (4.80-c)

To evaluate the effect of random noise, we observe that each noise frequency component fn will beat with the carrier wave to produce amplitude modulation and angle modulation, as illustrated in figure 4.16.

Figure 4.16 Vector sum of FM signal and noise

If the noise component has a peak voltage Vn, where Ec >> Vn, then;

Vn(t) = [Ec + Vn Cos ωnt] + j Vn Sin ωnt    (4.81-a)

Vn(t) = Ec [(1 + (Vn/Ec) Cos ωnt) + j (Vn/Ec) Sin ωnt]    (4.81-b)

or

Vn(t) ≈ Ec Cos (ωct + θ)    (4.81-c)

where;

θ = tan⁻¹ [ (Vn/Ec) Sin ωnt / (1 + (Vn/Ec) Cos ωnt) ] ≈ (Vn/Ec) Sin ωnt    (4.81-d)

The output noise voltage from the discriminator, Vd(t), will be proportional to the frequency modulation produced by the noise signal Vn(t), which is related to the phase modulation it produces by;

Vd(t) = K (1/2π) dθ/dt = K (Vn/Ec) fn Cos ωnt    (4.82)

The average output noise power δNo in a bandwidth δf is then given by;

δNo = K² Vn² fn² / (2Ec²)    (4.83-a)

Since Vn²/2 = Po δf, and Pc = Ec²/2 is the average carrier power, hence;

δNo = K² Po δf fn² / (2Pc)    (4.83-b)

The total output noise power No in an IF bandwidth ±B Hz
then becomes;

No = ∫₋B^B [K² Po / (2Pc)] fn² dfn    (4.84-a)

No = K² Po B³ / (3Pc)    (4.84-b)

Hence;

(So / No)FM = (K²Δf²/2) / (K² Po B³ / 3Pc)    (4.85-a)

(So / No)FM = 3Δf² Pc / (2 Po B³)    (4.85-b)

By considering;

Si / Ni = Pc / (2 Po B)    (4.86)

γFM = (SNRo)FM / (SNRi)FM = [3Δf² Pc / (2 Po B³)] / [Pc / (2 Po B)] = 3Δf² / B² = 3βf²    (4.87)

To compare this result with that of AM from an envelope detector, with m = 1 and equal carrier powers;

(SNRo)FM / (SNRo)AM = [3Δf² Pc / (2 Po B³)] / [Pc / (2 Po B)] = 3Δf² / B² = 3βf²    (4.88)

In particular, if Δf = 75 kHz and B = 15 kHz, we have βf = 5, and so the signal-to-noise improvement due to FM is 3 × 25 = 75, or about 19 dB. This figure can be increased further, by about 4 dB, by the use of pre-emphasis at the transmitter and de-emphasis at the receiver, giving an overall SNR improvement compared to AM of 23 dB.

Chapter 5 Multiplexing Techniques

The installation of communication systems is very costly, and the costliest element is the transmission medium. Quite clearly, great economies would result if a single transmission medium could carry several signals combined together. Multiplexing is the technique used to combine a number of signals and send them over the same medium, to make the best use of the transmission medium and ensure that its bandwidth is utilized to its full capacity. It thus becomes economically feasible to operate the available bandwidth of an optical fiber, coaxial cable, or radio system as a single high-capacity system shared by multiple users. In order for the signals to be received independently, they must be sufficiently separated in some sense. This quality of separateness is usually called orthogonality. Orthogonal signals can be received independently of each other, whilst non-orthogonal signals cannot. There are many ways in which orthogonality between signals can be provided.
Two approaches to multiplexing are analog, or Frequency-Division Multiplexing (FDM), and digital, or Time-Division Multiplexing (TDM). The actual equipment that performs the multiplexing is called a channel bank. Analog or A-type channel banks perform analog multiplexing; digital or D-type channel banks perform digital multiplexing. In FDM, the frequency band of the system is divided into several narrowband channels, one for each user, all of the time. Multiplexing is possible if the bandwidth of the channel is higher than the bandwidth of the individual data sources, so that the channel bandwidth is well utilized. In TDM, the transmission time of the system is divided into several narrow time-slot channels, one for each user, each of which uses the total system bandwidth. Multiplexing is possible if the capacity of the channel is higher than the data rates of the individual data sources, so that the channel capacity is well utilized. Despite the need to convert the voice signals to a digital format at one end of a T1 line and back to analog at the other, the combined conversion and multiplexing cost of a digital TDM terminal was lower than the cost of a comparable analog FDM terminal.

5.1 Analog Multiplexing

The traditional way of providing orthogonality in analogue telephony and audio/video broadcast applications is to transmit different information signals using different carrier frequencies. Such combined signals are disjoint (non-overlapping) in frequency and can be received separately using filters. In the FDM technique, signals from a number of different sources are translated into different frequency bands at the transmitting side, by using them to modulate carrier signals of different and appropriate frequencies so that they do not interfere with each other, and are then sent over the same transmission medium.
FDM was the original multiplexing technique for analogue communications and is now experiencing a resurgence in fiber-optic systems, in which different wavelengths are used for simultaneous transmission of many information signals. In FDM telephony, telephone baseband signals of 300 Hz - 3.4 kHz bandwidth are stacked in frequency at 4-kHz spacing, with small frequency guard bands between them to allow signal separation using practical filters. Figure 5.1 shows an example of a communication system in which the signals from three data sources are combined (multiplexed) together and sent through a single transmission medium. At the receiving end, the signals are separated (de-multiplexed) using a bank of filters.

Figure 5.1: Multiplexing and De-multiplexing.

Figure 5.2 shows how an FDM signal can be generated.

Figure 5.2 Generation of an FDM signal.

5.1-1 FDM Hierarchy

Actual FDM is accomplished as a multilevel process. The FDM hierarchy multiplexes 12 voice signals, each of 4 kHz, together to create a Basic group as the fundamental building block. Five groups are multiplexed together to create a Super group. Ten Super groups multiplexed together give a Master group. Six Master groups multiplexed together give a Jumbo group. Three Jumbo groups multiplexed together give a Jumbo group multiplex.

5.1-1.1 Formation of a Basic Group

In the trunk or toll system, 12 channels form a Basic group. The Basic group is formed by SSB-SC amplitude modulation of 12 sub-carriers at 64, 68, 72, . . . , 108 kHz. This modulation technique is used to combat the corrupting influence of noise on the information content of the transmission, as we put as much as possible, if not all, of the available power into one of the sidebands. An added advantage of this scheme is that the required bandwidth is reduced to one-half of its original value.
Clearly, this would allow twice as many messages to be sent on the same channel as before. The price to be paid for this advantage is that, to demodulate an SSB signal, it is necessary to reinstate the carrier at the receiver. The reinstated carrier has to be in synchronism with the original carrier, otherwise demodulation yields an intolerably distorted signal. Providing a synchronized local oscillator requires complex equipment at the transmitter as well as at the receiver. In SSB radio, an attenuated form of the carrier is transmitted with the signal and is used to synchronize a local oscillator in the receiver. In the telephone system, a centrally generated pilot signal is distributed to all offices for demodulation purposes. In some cases, a local oscillator without synchronization is used; if the frequency error is small (approximately ±5 Hz), successful demodulation can still be achieved. The required carriers are generated from a 4-kHz crystal-controlled oscillator and multiplied by the appropriate factor. The upper sidebands are removed and the lower sidebands are added together to form the Basic group. Figure 5.3-a shows a block diagram for channel 1. Figure 5.3-b shows a schematic representation of the spectrum of the Basic group, which covers the frequency range from 60 kHz to 108 kHz. For a small-capacity trunk, the Basic group may be transmitted without further processing. The transmission channel can be a twisted pair or coaxial cable.

Figure 5.3 Formation of Basic group with its Spectrum

5.1-1.2 Formation of a Super Group

For higher-capacity channels, five Basic groups are combined to form a Super group. Figure 5.4-a shows the block diagram for Super group 1. Note that, to make the filtering problem easier, the carrier frequency is chosen to be 420 kHz. Figure 5.4-b shows the frequency spectrum of the Super group, which occupies the frequency range from 312 kHz to 552 kHz.
Figure 5.4 Formation of the Super group and its Spectrum

Table 5.1 shows the carrier frequencies and bandwidths for each Super group. For a 60-channel trunk, the signal can be transmitted in this form. Again, a twisted pair with coil loading or amplification, or a coaxial cable, may be the medium of transmission. By organizing five Basic groups of 12 channels into a Super group, it is clear that the sub-carrier frequencies, the balanced modulators, and the band-pass filters can all be duplicated five times over.

Table 5.1

5.1-1.3 Formation of a Master Group

To create a 600-channel trunk, 10 Super groups are combined to form a Master group. The frequency spectrum of the Master group, shown in figure 5.5, occupies the frequency range from 564 kHz to 3,084 kHz and contains a total of 600 voice channels. Note that there are gaps of 8 kHz between the Super group spectra; these gaps are designed to make the filtering problem easier. The carrier frequencies and bandwidths of the 10 Super groups are given in Table 5.2. The Master group can be transmitted over coaxial cable, or it can be used to modulate a 4-GHz carrier for terrestrial microwave transmission, or even sent over a satellite link.

Table 5.2

Figure 5.5 Formation of the Master group and its Spectrum.

Six Master groups multiplexed together give a Jumbo group occupying the frequency range from 564 kHz to 17,548 kHz and containing 3,600 channels. Three Jumbo groups multiplexed together give a Jumbo group multiplex containing 10,800 channels. Analog multiplexing and continual improvements in the technology enabled costly transmission systems to be shared to carry thousands of telephone signals, thereby making long-distance telephone service affordable to us all. But analog multiplexing suffered from noise, distortion, and other impairments, and was costly to maintain, as it requires a huge number of modulators/demodulators, oscillators, and filters.
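The stacking arithmetic of the FDM hierarchy described above can be checked in a few lines. A sketch using the figures given in the text (the helper name is ours):

```python
def basic_group_channel_band(n):
    """Channel n (1..12) modulates a sub-carrier at 60 + 4n kHz; keeping only
    the lower sideband places it in the band (56 + 4n, 60 + 4n) kHz."""
    fc = 60 + 4 * n            # 64, 68, ..., 108 kHz
    return fc - 4, fc

# The 12 lower sidebands tile the Basic group from 60 to 108 kHz:
basic_span_khz = (basic_group_channel_band(1)[0], basic_group_channel_band(12)[1])

# Channel counts at each level of the hierarchy:
supergroup = 5 * 12        # 60 channels (312-552 kHz, i.e. five 48-kHz groups)
master = 10 * supergroup   # 600 channels
jumbo = 6 * master         # 3,600 channels
jumbo_mux = 3 * jumbo      # 10,800 channels
```

Note that the Super group width, 552 − 312 = 240 kHz, is exactly five Basic-group widths of 48 kHz, confirming the stacking.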
Analog multiplexing on coaxial cable was used in the old Bell System from 1946 for long-distance telephone transmission. Two such coaxials make a two-way pair, with each coaxial carrying transmission in one direction. As the signal travels along the coaxial, it becomes weaker and weaker and must therefore be amplified before it becomes too weak. Amplification is a one-way affair, and thus a coaxial can carry signals in only one direction. A number of coaxials are placed together to form a coaxial cable for use in a transmission system. The multiplexing system used with coaxial cable is called L-carrier; various generations of the technology are indicated by a number suffixed after the L. The key factor in the L-carrier system is the distance between the amplifiers, called repeaters, that amplify and retransmit the signal to the next section of the cable. The age of analog multiplexing is now over; digital multiplexing proved superior in nearly all respects and is now quite widespread. In the late 1980s, AT&T replaced nearly all the analog multiplexing in use on its long-distance network with digital multiplexing systems.

5.2 Digital Multiplexing

In FDM, voice signals were "stacked" in the frequency spectrum so that many such signals could be transmitted over the same channel without interference. Each channel is connected to the transmission medium for the whole time, but each channel is allocated a different frequency band. In digital multiplexing, the time of channel use is divided among all users. Each voice signal is assigned the use of the complete channel bandwidth (all the channels use exactly the same frequency band) during one of a set of non-overlapping time slots on a periodic basis, using a technique known as TDM. Normally, all time slots of a TDM system are of equal length, and each sub-channel is assigned a time slot with a common repetition period called a frame interval.
Use of this multiplexing technique is possible if the capacity of the channel is higher than the data rates of the individual data sources, so that the channel capacity is well utilized. At the transmitter side the multiplexer collects the data from each source, and the combined bit stream is sent over a single medium. Framing information is needed for the switching circuit at the receiver side, which separates the data corresponding to the individual sources (time slots) in the de-multiplexer, as shown in figure 5.6. When the de-multiplexer detects the frame synchronization word, it knows that this is the start of a new frame and that the next time slot contains the information of user channel 1. Many PAM or PCM signals can be time multiplexed using an electronic switch that is operated by gated pulses; the switch is closed for the duration of the pulse. A TDM system with two input PAM signals is shown in figure 5.7. The samplers or commutators are shown here as switches which are driven in synchronism.

Figure 5.6 Digital Multiplexing and de-multiplexing

Figure 5.7 TDM Principle

Digital TDM forms the basis of the telephone hierarchy for transmitting multiple simultaneous telephone calls over high-speed data links of 2 to 140 Mbit/s.

5.2-1 TDM Hierarchy

PCM-coded speech is transmitted as 8-bit samples 8,000 times a second, which makes up a 64-kbps data rate. These eight-bit words from different users are interleaved into a frame at a higher data rate. In a manner similar to the FDM hierarchy, there are two recommended standards:

• North American, AT&T Standard
• European, CEPT Standard

American Telephone and Telegraph (AT&T) established a digital TDM hierarchy that has become the standard for North America and Japan. It uses the μ-law for quantizing, and the system was designed for channel-associated signaling.
All higher levels are implemented as a combination of some number of lower-level signals. The designation of the higher-level digital multiplexers reflects the respective input and output levels. For example, an M12 multiplexer combines four DS1 signals to form a single DS2 signal. (Because T2 transmission systems have become obsolete, the M12 function exists only in a functional sense within M13 multiplexers, which multiplex 28 DS1 signals into one DS3 signal.) A similar digital hierarchy has also been established by ITU-T as an international standard. In 1959 the European countries formed a non-political organization, CEPT, the European Conference of Postal and Telecommunications Administrations. This committee recommended standards for compatibility of the telecommunications systems employed throughout Europe. Both the European and American PCM frames are repeated at the PCM sampling rate, that is, 8,000 times a second.

5.2-1.1 AT&T System

The Bell T1 PCM carrier system uses eight-bit PCM in 24-voice-channel banks, as shown in figure 5.8. The number of bits generated for one scan of the channels (one frame) is 24 × 8 = 192. One bit (a frame alignment bit, the S-bit) is required for frame synchronization, so the total number of bits per frame is 193. For a band-limited 4-kHz analog signal sampled at 8 kHz, the system produces a gross line bit rate of 193 × 8,000 = 1.544 Mbit/s. The 24 voice channels require 1.536 Mbps and an additional 8 kbps is needed for synchronization purposes, thereby giving 1.544 Mbps as the overall bit rate. The minimum bandwidth required to transmit the signal is approximately 1.5 MHz. Starting with DS0, which is a 64-kbps digital version of a voice signal, as the fundamental building block, the DS1 or T1 level frame has 24 PCM channels. It is the most common frame structure in telecommunications networks used in North American standard areas.

Figure 5.8 The 193-bit, 125 μsec DS-1 frame.
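The DS-1 frame arithmetic above can be reproduced directly (constant names are ours):

```python
# DS-1 (T1) frame arithmetic from the text.
CHANNELS = 24
BITS_PER_SAMPLE = 8
FRAMING_BITS = 1          # the S-bit
SAMPLE_RATE = 8000        # frames per second (8-kHz sampling)

frame_bits = CHANNELS * BITS_PER_SAMPLE + FRAMING_BITS   # 193 bits per frame
line_rate = frame_bits * SAMPLE_RATE                     # 1,544,000 bit/s
voice_payload = CHANNELS * BITS_PER_SAMPLE * SAMPLE_RATE # 1,536,000 bit/s
framing_overhead = FRAMING_BITS * SAMPLE_RATE            # 8,000 bit/s
```

The 8-kbps framing overhead is what separates the 1.536-Mbps voice payload from the 1.544-Mbps gross line rate.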
The use of the eight-bit code means that the voice signals are quantized at 256 (2⁸) levels. Some of the less significant bits may be robbed and used for signaling purposes such as dialing and detection of hook-switch ON/OFF. The quantization error resulting from this is considered tolerable, although several compression schemes are used to minimize its effect. The 1.544-Mbps frame structure is shown in figure 5.9. A multi-frame is constructed from 12 consecutive frames, and their 12 S-bits make up the 6-bit frame and 6-bit multi-frame synchronization words. In T1, the least significant bit of each channel in every sixth frame is used for signaling. As a consequence, only seven bits in each time slot are transparently carried through the network, and the basic user data rate is 56 kbps instead of the 64 kbps of the European systems.

Figure 5.9 The 1.544 Mbps PCM frame

For frame synchronization and for de-multiplexing of signaling information, frames make up a multi-frame structure with two alternative lengths: a Super frame (SF) containing 12 frames or an Extended super frame (ESF) containing 24 frames. The framing bits of the ESF, one in each frame, carry frame synchronization information, a CRC code, and a data channel for network management messages. In transatlantic connections, E1 frames are adapted to the T1 frame structure and trans-coding between μ-law and A-law PCM is carried out; each time slot in E1 is transmitted further in one time slot of T1. Table 5.3 and figure 5.10 list the various multiplex levels, their bit rates, and the transmission media used for each. Notice that the bit rate of a high-level multiplex signal is slightly higher than the combined rates of the lower-level inputs; the excess bits are included for certain control and synchronization functions.
With DS0, the 64-kbps digital voice signal, as the fundamental building block, the DS1 or T1 level frame has 24 PCM voice channels digitally multiplexed together and is sometimes called a digital group, or digroup for short.

Table 5.3 Digital TDM signals of North America and Japan

Figure 5.10 Various multiplex levels

As shown in table 5.3, four DS-1 signals time multiplexed together give a DS-2 signal containing 96 voice channels and requiring a data rate of 6.312 Mbps. Seven DS-2 signals time multiplexed together give a DS-3 signal containing 672 channels and requiring 44.736 Mbps. Six DS-3 signals time multiplexed together give a DS-4 signal containing 4,032 channels and requiring a data rate of 274.176 Mbps. The extra bits in the higher-capacity digital signals are used for timing and synchronization information to assist in the separation and de-multiplexing of the individual channels. The timing and clocking information is contained within the digital bit stream, so the stream is self-clocking, and the levels run nearly, but not exactly, in synchronism. Such near-synchronous timing schemes form a plesiochronous digital hierarchy (PDH). When the signal is sent over a twisted-pair telephone wire, it suffers considerable degradation from noise, bandwidth limitation, and phase delay. It is therefore necessary to place repeaters and equalization circuits at intervals of approximately 6,000 ft to restore the pulses; 6,000 feet (approximately 1,800 m) of non-loaded twisted-pair wire has a 3-dB bandwidth of approximately 4 kHz.

Signaling Information

There are two types of information carried by a DS-1 frame: the user information and the signaling information. Examples of user information are digital voice and data. The signaling information is used by the network to delineate and control the user information. In the case of the DS-1 system, there may be as many as 24 users sharing each frame, so there is a need for routing and control of this information.
This is done by robbed-bit signaling in voice communication and common channel signaling in data communications. In robbed-bit signaling, one of every 48 bits in a voice channel is stolen by the network to be used for signaling. Recall that each voice channel has 8 bits per frame; stealing one bit per channel in every sixth frame therefore leaves 47 out of every 48 bits for actual user information. The stolen bit is the least significant bit of a channel in every sixth frame. Thus, each user channel contains a signaling channel with a bit rate 1/48 times the bit rate of the voice channel. Consequently, the 8-bit PCM does not get a true bit rate of 64 kbps; instead, it has a bit rate of (47/48) × 64 kbps, and the signaling bit rate per voice channel is (1/48) × 64 kbps. In summary, when DS-1 is used to carry voice-only traffic, the 24 channels are used for 8-bit PCM, and signaling information is added to each voice channel by stealing one bit per channel in every sixth frame.

In the common channel signaling mechanism, a separate channel is taken from every frame to carry the signaling information about a group or groups of channels. In this case, twenty-three channels are used for data, and the 24th channel is dedicated to carrying the signaling information for all the remaining 23 channels. The main use of this channel is fast recovery from a framing error. In the 23 channels, the user information is carried using 7 bits per channel, giving a data rate of 7 × 8,000 = 56 kbps. The 8th bit of each data channel is used to define a signaling channel for each data channel. This is done when the DS-1 frame is used to carry data from data terminals. Another type of signaling is in-channel signaling, in which the user and signaling information are carried over the same channel; it is implemented by allocating one bit of every data channel for this purpose. When DS-1 is used to carry combined voice and data information, all 24 channels are used.
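The robbed-bit and common-channel rates above reduce to a little arithmetic. A sketch (variable names are ours):

```python
SAMPLE_RATE = 8000                               # frames per second
voice_channel_bps = 8 * SAMPLE_RATE              # 64 kbps per channel
robbed_signaling_bps = voice_channel_bps / 48    # 1 bit of every 48 -> ~1.333 kbps
usable_voice_bps = voice_channel_bps * 47 / 48   # (47/48) x 64 kbps -> ~62.67 kbps
transparent_data_bps = 7 * SAMPLE_RATE           # 56 kbps when the LSB is not trusted
```

The 56-kbps figure is why early North American data services ran at 56 rather than 64 kbps.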
Multiple DS-1 frames can be multiplexed to obtain higher-data-rate systems, such as DS-2, combining 4 DS-1 frames, and DS-3, combining 28 DS-1 frames.

5.2-1.2 The 30/32-Channel CEPT PCM System

The 30/32-channel PCM system uses the frame and multi-frame structure shown in figure 5.11. The TDM frame depicts the number of bits in each channel: of the 32 slots, 30 slots are used to carry voice and two slots (slot 0 and slot 16) are used to carry synchronization and signaling information. The resulting 2.048-Mbps rate is used in areas that follow the European standards. Starting with E0, which is a 64-kbps digital version of a voice signal, as the fundamental building block, the E1 first-level frame has 30 PCM channels. There is a reserved time slot for CAS information in the 2-Mbps frame structure.

Signaling Information

In the E-1 carrier system, there are 32 channels, each with a 64-kbps data rate, providing a total bit rate of 2.048 Mbps. The term T-1 is used to describe the raw bit rate of 1.544 Mbps using the DS-1 format. There is no such differentiation of terms in E-1: the term E-1 conveys the meaning of the signal format as well as the raw bit rate. Multiplexing is performed at the transmitter side, and the TDM frame has the following channel allocation:

• Time slot 0: frame alignment word (FAW), frame service word (FSW).
• Time slots 1 to 15: digitized speech for channels 1 to 15.
• Time slot 16: multi-frame alignment word (MFAW), multi-frame service word (MFSW), signaling information.
• Time slots 17 to 31: digitized speech for channels 16 to 30.

Figure 5.11 30/32-Channel CEPT PCM System with 2,048-kbps frame structure

Common channel signaling can be adapted for this system; if it is used, the system has 31 data channels and a single synchronization channel, or frame synchronization time slot. Each time slot contains an eight-bit sample value and each channel produces data at the rate of 64 kbps.
These voice channels or data channels are synchronously multiplexed: the multiplexer takes the 8 bits of each channel and produces a gross line bit stream at the rate of 2.048 Mbps (64 kbps × 32). This data stream is known as the 2-Mbps frame, often called E1 (the first level in the European hierarchy). It uses the A-law in the quantization process. The frame is repeated 8,000 times a second, which is the same as the PCM sampling rate. For error-free operation, the tributaries (the 64-kbps data streams of the users) have to be synchronized with the clock signal of the 2-Mbps multiplexer. The data rate of 2,048 kbps for the multiplexer is allowed to vary by 50 parts per million (ppm), and as a consequence each user of the network has to take timing from the multiplexer in the network and generate data at exactly the data rate of the multiplexer divided by 32. At the receiving end, the de-multiplexer separates the data corresponding to each channel. As shown in Table 5.4, this hierarchy is similar to the North American standard but involves different numbers of voice circuits at all levels.

Table 5.4 Digital TDM signals of Europe

Figure 5.12 indicates the higher-order multiplexing stages for the European standard illustrated in table 5.4.

Figure 5.12 Higher-order multiplexing stages

A- PCM Frame and Multi-frame Structure

A.1- The Frame Structure

The frame consists of 32 time slots (TS), as shown in figure 5.11. Each TS consists of 8 bits. In these time slots, bit 1 is used to indicate the polarity of the sample and bits 2 to 8 indicate the amplitude of the sample.

A.2- The Multi-frame Structure

In order for signaling information (dial pulses) for all 30 channels to be transmitted, the multi-frame consists of 16 frames numbered F0 to F15. This structure is shown in figure 5.11. Signaling for two channels is transmitted in time slot 16 of each frame.
B- Time Durations
CCITT recommends that 8 kHz be used as the sampling frequency for 4-kHz voice signals.
B.1- Frame duration
For fs = 8 kHz, the frame duration is determined as:
Frame duration = 1 / fs = 1 / 8,000 = 125 µs
B.2- Time slot duration
The time slot duration is determined as follows:
Time slot duration = frame duration / time slots per frame = 125 × 10⁻⁶ / 32 = 3.906 µs
B.3- Bit duration
The bit duration is determined as follows:
Bit duration = time slot duration / bits per time slot = 3.906 × 10⁻⁶ / 8 = 488 ns
B.4- Multi-frame duration
The multi-frame duration is determined as follows:
Multi-frame duration = frame duration × frames per multi-frame = 125 × 10⁻⁶ × 16 = 2 ms
C- Gross line bit rate
The gross line bit rate is determined as:
Gross line bit rate = 1 / bit duration = 1 / (488 × 10⁻⁹) = 2.048 Mbit/s
This is the most common frame structure in telecommunications networks, as the primary-rate 2,048-kbps frame used in the European standard areas. This basic data stream carries speech channels and ISDN-B channels through the network, and it is called E-1. This primary-rate frame is built up in digital local exchanges, which multiplex 30 speech or data channels, each at a bit rate of 64 kbps, into the 2,048-kbps data rate. ITU-T defines this frame structure in Recommendation G.704. TDM is normally associated only with digital transmission links; the backbone digital links of the PSTN (T-carrier, digital microwave, and fiber optics) use a synchronous variety of TDM. Although analog TDM transmission can be implemented by interleaving samples from each signal, the individual samples are usually too sensitive to all varieties of transmission impairments. Analog TDM techniques were used in some PBXs before digital electronics became so inexpensive that the digitization penalty disappeared.
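The timing arithmetic in B.1 through C follows directly from the 8-kHz sampling rate and the 32-slot, 8-bit frame layout, and can be checked numerically:

```python
# The frame-timing arithmetic of sections B and C, checked numerically.
# All values follow from the 8-kHz sampling rate and the 32 x 8-bit frame.

FS = 8_000              # sampling frequency, Hz
SLOTS = 32              # time slots per frame
BITS = 8                # bits per time slot
MULTIFRAME = 16         # frames per multi-frame

frame_s = 1 / FS                      # frame duration: 125 us
slot_s = frame_s / SLOTS              # time slot duration: 3.906 us
bit_s = slot_s / BITS                 # bit duration: ~488 ns
multiframe_s = frame_s * MULTIFRAME   # multi-frame duration: 2 ms
gross_bps = 1 / bit_s                 # gross line bit rate: 2.048 Mbit/s

print(f"frame      = {frame_s * 1e6:.3f} us")
print(f"time slot  = {slot_s * 1e6:.3f} us")
print(f"bit        = {bit_s * 1e9:.1f} ns")
print(f"multiframe = {multiframe_s * 1e3:.0f} ms")
print(f"gross rate = {gross_bps / 1e6:.3f} Mbit/s")
```

Note that the gross rate can equivalently be computed as fs × slots × bits = 8,000 × 32 × 8 = 2,048,000 bit/s, which confirms the 2.048-Mbps figure.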
We discriminate between two types of TDM according to the way time for channel use is allocated. The form of TDM shown in figure 5.6 is sometimes referred to as synchronous TDM, to specifically imply that each sub-channel is assigned a fixed amount of transmission capacity determined by the time slot duration and the repetition rate. In contrast, another form of TDM is referred to as statistical, or asynchronous, TDM. With this second form of multiplexing, sub-channel rates are allowed to vary according to the individual needs of the sources.

The frame alignment word is needed to inform the de-multiplexer where the words of the channels are located in the received 2-Mbps data stream. The frame synchronization time slot (TS0) includes frame alignment information, and it has two different contents that alternate in subsequent frames. The de-multiplexer looks for this time slot in the received data stream; when it is found, it locks onto it and starts picking up bytes from the time slots for each receiving user. Each user receives 8 bits in every 125-µs period, which makes 64 kbps. A fixed alignment word alone is not reliable enough for frame synchronization, because a user's data in one channel may imitate the synchronization word, and the de-multiplexer might lock onto that user's time slot instead of TS0. Because the alignment word alternates, the de-multiplexer is able to detect the situation where one channel constantly transmits a word equal to the frame alignment word (FAW). To make frame alignment even more reliable, the cyclic redundancy check 4 (CRC-4) procedure was added in the mid-1980s. C-bits are allocated to carry a four-bit error-check code that is calculated over all bits of a few frames. The receiver performs the same error-check calculation over all bits of those frames, so it is able to detect false frame alignment even if the frame alignment word, including the alternating bit two, is simulated by one user.
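The CRC-4 check can be sketched as a bitwise division by the generator polynomial x⁴ + x + 1 that G.704 specifies for this procedure. This is the core idea only; the real procedure also defines sub-multiframe boundaries and bit ordering, which are omitted here.

```python
# A minimal bitwise CRC-4 sketch using the generator polynomial x^4 + x + 1
# of the E1 CRC-4 procedure (ITU-T G.704). Sub-multiframe structure and
# exact bit ordering of the real procedure are deliberately omitted.

def crc4(bits: list) -> int:
    """Compute a 4-bit CRC over a bit list with polynomial x^4 + x + 1."""
    reg = 0
    for b in bits + [0, 0, 0, 0]:   # append 4 zero bits for the remainder
        reg = (reg << 1) | b
        if reg & 0x10:              # if bit 4 is set, subtract the polynomial
            reg ^= 0x13             # x^4 + x + 1 -> binary 10011
    return reg & 0x0F

# Transmitter computes the check over a block of frame bits...
block = [1, 0, 1, 1, 0, 0, 1, 0] * 4
check = crc4(block)

# ...and the receiver verifies: appending the check bits gives remainder 0.
check_bits = [(check >> i) & 1 for i in (3, 2, 1, 0)]
print(crc4(block + check_bits) == 0)   # True for an error-free block
```

If any bit of the block is corrupted in transit, the recomputed remainder is (with high probability for a 4-bit code) nonzero, which is what lets the receiver detect false frame alignment even when a user imitates the FAW.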
Each receiver of the 2,048-kbps data stream detects errors in order to monitor the quality of the received signal. Error monitoring is mainly based on the detection of errors in the frame alignment word: the receiver compares the received word in every other TS0 with the error-free frame alignment word. In addition to the frame alignment word, the CRC-4 code is used to detect low error rates. Errors in the frame alignment word alone do not give reliable results when the error rate is very low; it may take a long time before an error is detected in TS0, although many errors may have occurred in other time slots of the frame. The C-bit is set to 1 if CRC is not used.

TS0 in every other frame also contains a far-end alarm information bit, A. This bit is used (set to 1) to tell the transmitting multiplexer that there is a severe problem in the transmission connection and that reception is not successful at the other end of the system. This may be caused by a high error rate, loss of frame alignment, or loss of signal. With the help of the far-end alarm, consequent actions can take place, such as rerouting user channels to another operational system. D-bits can be used for transmission of network management information; at international borders they are usually set to 1.

Multi-frame Structure of the Signaling Time Slot
Time slot 16 (TS16) is defined for channel-associated signaling, carrying separate signaling information for all user channels of the frame. TS16 is a transparent 64-kbps data channel like any other time slot in the frame. Thirty channels share the signaling capacity of TS16, so a frame structure is needed to allocate the bits of this time slot to each of the 30 speech channels.
The location of the signaling data for each speech channel is given to the signaling de-multiplexer with the help of the multi-frame structure, which contains a multi-frame alignment word for multi-frame synchronization. The data rate available for each speech channel's signaling is 2 kbps. For CCS, the signaling information of all users is carried in data packets, and any time slot can be used for this. Each packet carries its signaling information together with an identification of the call to which it relates. CCS packets can in some cases, for example in the short message service of GSM, also carry user data.

5.2-2 Synchronous TDM
In synchronous TDM, the sampling rates of the various channels are identical, and all transmissions from multiplexed users occur at specified time instants: each user is allowed to transmit for a time beginning at a given instant and ending at another. If the user does not have data to transmit at the beginning of the specified interval, the channel remains unused. Synchronous TDM is performed by defining the channel as having a certain data rate. A multiplex cycle, or frame, is then defined as consisting of a certain number of bits to be repeated. This multiplex cycle is further divided into time slots such that the sum of the slots equals the multiplex frame.

Example 5.1: Consider 8-bit PCM voice transmission. Let a digital transmission channel have a data rate of 768 kbps. This channel has 12 times the data rate required by a single voice source. Suppose that a TDM cycle consists of 12 × 8 = 96 bits, and let the cycle be divided into 12 slots of 8 bits each. Then 12 voice sources, each using 64-kbps 8-bit PCM, can be multiplexed on this line. Note that in 8-bit PCM, 8 bits represent a single sample; therefore, samples from the 12 sources are multiplexed and interleaved on a single channel.
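The arithmetic and the interleaving of Example 5.1 can be sketched as follows; the sample values are arbitrary toy data used only to show the slot ordering.

```python
# A numeric sketch of Example 5.1: twelve 64-kbps voice sources interleaved
# onto one 768-kbps line, one 8-bit sample per source per 96-bit TDM cycle.

LINE_RATE = 768_000      # channel data rate, bps
SOURCE_RATE = 64_000     # data rate of one voice source, bps
SAMPLE_BITS = 8          # bits per PCM sample

slots = LINE_RATE // SOURCE_RATE      # 12 slots per multiplex cycle
cycle_bits = slots * SAMPLE_BITS      # 96 bits per cycle
print(slots, cycle_bits)              # 12 96

# Toy data: source s produces samples s*10, s*10+1, s*10+2 ...
sources = [[s * 10 + n for n in range(3)] for s in range(slots)]

# Interleave: cycle n carries sample n of every source, in slot order.
cycles = [[sources[s][n] for s in range(slots)] for n in range(3)]
print(cycles[0])   # first cycle: sample 0 of each of the 12 sources
```

Each cycle carries exactly one sample from each source, so after 8,000 cycles per second every source has delivered its 64 kbps, filling the 768-kbps line exactly.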
In synchronous TDM, the time slots are allocated to traffic sources without any regard to whether these time slots will be used continuously or not. It is well known that a voice source is active only about 40% of the time; that means that for about 60% of the time there is no data from each voice source, and its slots remain unused.

5.2-3 Asynchronous (Statistical) TDM
In this method of TDM, the signals may have different sampling rates, and specific slots are not allocated to individual users. Instead, all data sources store their data in buffers, and the multiplexer sends the buffered data out at a fixed rate. The multiplexer visits the data buffers one by one: if a buffer contains data to be transmitted, the data is transmitted; otherwise, the multiplexer goes on to the next data buffer. Therefore, channel bandwidth is not wasted as long as some buffer has data to send.

The difference between synchronous and statistical TDM is shown in figure 5.12, where data from N users is multiplexed on a single transmission line. In synchronous TDM (figure 5.12-a), a fixed frame length is defined and each of the N data sources is allocated a specified part of the frame. The figure shows that during one particular TDM cycle only three users have data to send: the frame carries data from these three users and remains largely empty. Shown in white is the portion of the channel frame in which no one transmits any data, even if some of the sources have data to transmit.

Figure 5.12-a Synchronous multiplexing. Even if the frame is partially filled, a source has to wait for the next frame to send more than one slot of traffic.

Figure 5.12-b Statistical multiplexing. If the frame is partially filled, a source does not have to wait for the next frame to send more than one slot of traffic.

In contrast, figure 5.12-b shows that there is no fixed multiplex cycle in statistical TDM.
The multiplexer visits each source in turn, as in synchronous TDM; however, if a source does not have any data to send, the multiplexer does not stay there for the maximum allocated amount of time. Instead, it moves on to the next source of data. In this way, more data is transmitted in the same frame duration. In synchronous TDM, even if an attached source has no data to send, the channel still reserves the allocated slot for that source; in statistical TDM, if a source has nothing to send, the multiplexer goes on to the next source.

In synchronous TDM, each source can transmit data only in its designated time slot or slots. Consequently, if the multiplexed data is heading towards a switch, the switch knows which slots are used by each source, and the data can be switched according to this information. The situation is different for statistical TDM: any slot could be carrying data from any of the sources. In fact, there is no particular need to divide the frame into slots at all, but the information about the source of each slot has to be included with the data. So the user data blocks for statistical TDM carry a header that tells the switch the source or destination (or both) of the data. The frame for statistical TDM does not have to be of a fixed length; depending on factors such as traffic from other sources, a source can transmit a variable number of bits each time. In this way, the frame for statistical TDM requires pretty much all the functions of synchronous communication via frames, namely flag, address, and other frame control information.

5.2-4 Statistical Versus Synchronous TDM
Slot allocation in synchronous TDM is just like allocating a channel in a circuit-switched network. It does not require addressing and framing information; therefore, all traffic sources use their designated time slots as if the slots were separate channels.
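The contrast between the two disciplines can be sketched with a toy round-robin multiplexer: the synchronous version emits one slot per source per cycle whether or not the source has data, while the statistical version skips empty buffers and must tag each emitted slot with its source address.

```python
# Toy round-robin sketch of the two disciplines described above. Buffer
# contents are arbitrary example values; four sources, two of them idle.

from collections import deque

def synchronous_cycle(bufs):
    """One synchronous TDM frame: one slot per source, None if it is idle."""
    return [b.popleft() if b else None for b in bufs]

def statistical_cycle(bufs):
    """One statistical TDM frame: (source_id, data) pairs, idle sources skipped."""
    return [(i, b.popleft()) for i, b in enumerate(bufs) if b]

print(synchronous_cycle([deque([10, 11]), deque(), deque([30]), deque()]))
# [10, None, 30, None]  -> two of four slots carry nothing
print(statistical_cycle([deque([10, 11]), deque(), deque([30]), deque()]))
# [(0, 10), (2, 30)]    -> only occupied slots sent, each with an address
```

Note the trade-off visible even in this sketch: the statistical frame is shorter, but every slot must carry a source identifier, which is exactly the header overhead discussed below.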
When the traffic load is high enough to keep the TDM channel busy most of the time, synchronous TDM is more efficient than statistical TDM, as it does not carry any frame headers. Therefore, statistical TDM is not always better than synchronous TDM, or vice versa; in fact, both have strengths and weaknesses. The main strengths of statistical TDM lie in the following facts:
1. It does not require a strict definition of the beginning and ending of a slot or frame. This property makes its implementation simple and resembles the packet-switching mechanism.
2. It utilizes the channel capacity more efficiently, especially when individual sources have bursty traffic. Bursty traffic is characterized by repeated patterns of sudden data generation followed by long pauses.
3. It is better suited to traffic sources with varying requirements of channel capacity. Sources requiring higher capacity can send longer frames, and more often, than slower sources. Even though synchronous TDM can provide this capability by assigning multiple time slots, the smallest unit of data in synchronous TDM is the slot itself, and there is always a possibility that a slot remains only partially filled, thus wasting channel capacity.
In addition to the above benefits of statistical TDM, there are some drawbacks associated with it as well. Some of these are:
1. The data has to be stored in buffers, which requires the additional cost of memory.
2. Buffer storage introduces delay distortion, which deteriorates the quality of real-time data.
3. There is additional overhead attached to each frame to identify its boundaries, addressee/addresser, and other information about the data. At higher loads, this header cuts down the efficiency of the line.
Synchronous TDM has the following benefits over statistical TDM:
1. Once slot allocation has occurred, there is a fixed relation between a data source and its slots.
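The efficiency trade-off can be put into rough numbers. The header and payload sizes below are assumed for illustration only, and the 40% voice activity factor is the figure quoted earlier in this section.

```python
# Back-of-the-envelope sketch of the efficiency trade-off. The 2-byte header
# and 48-byte payload are assumed example values, not standard figures; the
# 40% activity factor is the voice figure quoted earlier in the text.

HEADER = 2       # bytes of address/control per statistical TDM block (assumed)
PAYLOAD = 48     # bytes of user data per block (assumed)
ACTIVITY = 0.4   # fraction of time a voice source actually has data

# Statistical TDM: every transmitted block pays for its header,
# but no line capacity is spent on idle sources.
stat_efficiency = PAYLOAD / (PAYLOAD + HEADER)

# Synchronous TDM at this activity level: no header, but idle
# sources leave their reserved slots empty.
sync_efficiency = ACTIVITY

print(f"statistical TDM efficiency: {stat_efficiency:.0%}")
print(f"synchronous TDM efficiency at 40% activity: {sync_efficiency:.0%}")
```

Under these assumed numbers, statistical TDM wins for bursty voice-like sources; as the activity factor approaches 100%, synchronous TDM's header-free frames become the more efficient choice, which is the point made at the start of this section.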
This is analogous to circuit allocation in a circuit-switched network.
2. There is no extra delay distortion, which makes it ideal for voice-like communication.
There are tradeoffs in terms of some disadvantages, such as:
1. Under low traffic volume from all or some data sources, link utilization is lower than with statistical TDM.
2. The multiplexer and traffic sources all have to be synchronized. Usually this is done by having a central clock provide synchronization to the main multiplexer, with the clocks of each source synchronized to the main clock. The main clock runs at the line capacity and is sometimes called the master clock. This is the real cost of synchronous TDM, and it offsets the memory cost of statistical TDM. Local clocks of individual sources generate data locally; data is multiplexed according to a predetermined order. The multiplex cycle is divided into frames and slots, and one or more slots are assigned to each data source.

Preface
The course objectives are to provide undergraduate students with:
■ Good knowledge of different communication systems, to acquire a logical progression in thinking about electrical communication.
■ An exposition of the theory required to build communication systems, with engagement in the engineering design of their components.
■ The band-pass representation for carrier-modulated signals.
■ The ability to analyze, and gain insight into, the spectral efficiency of transmission, reconstruction, and the complexity of system implementation, with various options for transmitting analog and digital signals, such as multiplexing.
This book contains 5 chapters organized as follows. After the introduction given in Chapter 1 as an overview of communication concepts, the communication system elements and types are given in Chapter 2. Current communication systems are also reviewed there.
Chapter 2 ends with an illustration of communication channel properties and impairments. Chapter 3 introduces the concept of analog communication systems. It deals with band-pass signal communications using different techniques of amplitude modulation/demodulation, with and without carrier suppression. The presentation focuses on the envelope detector and coherent demodulators, and examines the complexity associated with demodulation. Angle modulation/demodulation is also introduced in this chapter, as are analogue pulse modulation and digital pulse modulation. Chapter 4 gives a detailed analysis of noise and the metrics used for evaluating communication systems. Chapter 5 contains a discussion of multiplexing techniques, analogue and digital, with their hierarchies.