8.2 Multiplexers

The economic benefits of multiplexing are readily appreciated by most network managers. By optimizing the network through multiplexing, the number of local voice-grade leased lines can be reduced, while higher-capacity digital facilities are used to their full advantage and at less cost. This greatly reduces the cost of local access, which, despite dramatic reductions in the cost of long-haul transmission facilities, consumes the largest percentage of most corporate communication budgets.

The economies of T-carrier transmission are such that only five to eight voice-grade private lines justify the jump to T1, which has enough capacity to transport 24 channels at up to 64 Kbps each. The actual break-even point, of course, depends on the local exchange carrier and interexchange carrier used, as well as the length of the T1 line. Additional savings may accrue through the use of fractional T1 (FT1) services, whereby bandwidth may be ordered in increments of 64, 128, 256, 384, 512, and 768 Kbps to meet the needs of specific applications.

Compression algorithms such as ADPCM can be used to reduce the bandwidth needs of voice to 32 Kbps or lower, rather than allowing it to pass at 64 Kbps under PCM. This allows more voice conversations to be handled on a private network without any increase in the cost of bandwidth.

With an integrated access device (IAD), the channels of one or more T1 local-access lines can be assigned to handle voice conversations over the PSTN, while the rest of the bandwidth can be used for a high-speed Internet access connection that can be shared by multiple users. The IAD can even bundle together the bandwidth of two or more T1 lines for Internet access or for high-speed LAN connections between two corporate locations. Although the IAD includes the multiplexing capability, it provides more flexibility in how bandwidth is partitioned to support specific applications.

8.2.1 Time Division Multiplexers

High-speed multiplexers are generally divided into two major product types: TDMs and STDMs. Although they perform essentially the same function, dividing the available bandwidth into time slots or channels, the two products approach this function in different ways, which makes one more suitable than the other for particular applications.

In sequential fashion, the TDM continually scans or polls all input channels, giving each a chance to access the data link. Each channel has its own “time slot” into which it can place data. If no data is offered, that portion of the data link bandwidth goes unused and scanning continues to the next input channel. A “frame slot” is inserted by the TDM at the beginning and end of each group of time slots. The frame slot is a “bit” in bit-interleaving multiplexers and a “byte” in byte-interleaving multiplexers. The data collected by one complete scan of all the input channels is called a “frame.” Each channel, then, occupies a unique time slot within a particular frame, with each frame separated by a frame slot. Empty time slots are filled with bits or bytes as necessary to ensure the uniformity of frames. This “stuffing” helps maintain transmission synchronization. The frame slots act as the primary synchronizing mechanism, enabling the remote multiplexer to pass received data to the proper destination at the proper time.

The framed data are transmitted over the digital facility to another multiplexer at the receiving end, where the data are “demultiplexed” so that the time slots containing particular bits or bytes of information from each data link can be directed to the appropriate output channel.
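To make the scanning, stuffing, and framing behavior concrete, the following is a minimal byte-interleaved sketch in Python. The frame marker, fill value, and function names are illustrative assumptions rather than features of any particular product, and for brevity only a single frame marker is shown at the start of each frame.

```python
# Minimal byte-interleaved TDM sketch (illustrative only; the frame flag and
# fill byte are hypothetical values, not taken from any real multiplexer).

FRAME_FLAG = b"\x7e"   # framing marker that delimits each frame
FILL_BYTE  = b"\x00"   # stuffing for idle channels so every frame is uniform

def build_frame(channels):
    """Scan every input channel once and build one fixed-length frame.

    `channels` is a list of per-channel byte queues (bytearray). Each channel
    owns exactly one slot per frame whether or not it has data to send.
    """
    frame = bytearray()
    for queue in channels:
        if queue:                      # channel offered data: take one byte
            frame.append(queue.pop(0))
        else:                          # idle channel: its slot is stuffed
            frame += FILL_BYTE
    return FRAME_FLAG + bytes(frame)   # frame marker + one slot per channel

def demultiplex(frame, num_channels):
    """At the far end, slot position alone identifies the output channel."""
    payload = frame[len(FRAME_FLAG):]
    return {ch: payload[ch:ch + 1] for ch in range(num_channels)}

# Example: four input channels, two of them idle during this scan.
channels = [bytearray(b"A"), bytearray(), bytearray(b"C"), bytearray()]
frame = build_frame(channels)
print(frame)                      # b'~A\x00C\x00'
print(demultiplex(frame, 4))      # {0: b'A', 1: b'\x00', 2: b'C', 3: b'\x00'}
```

Note how the idle channels still consume slots in the frame; this is exactly the unused bandwidth that statistical multiplexing, described next, reclaims.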
8.2.2 Statistical Time Division Multiplexing

Like TDMs, STDMs (“stat muxes”) scan the input channels for data awaiting transmission, and the data are loaded into time slots within a frame. If no data are present on a particular channel, however, the available bandwidth is assigned to a channel that is ready to transmit data. This technique is particularly useful when transmitting asynchronous data. In most asynchronous, interactive data applications, the probability that all channels will be operating at full capacity simultaneously is small. Consequently, the sum of the active channel data rates will typically not exceed the composite link rate.

STDMs can handle peak traffic conditions better than TDMs. With a TDM, the aggregate peak input speed cannot exceed the composite link speed. With an STDM, on the other hand, the aggregate peak input speed can exceed the composite link speed because the STDM uses a buffer that holds data during peak traffic conditions until bandwidth becomes available. This wait period, known as latency, is measured in milliseconds (ms). Buffering is only one component of latency; congestion on the network may be another source.

To prevent data loss when the buffer fills up, the STDM channel regulates the output of the attached transmitting device. This is accomplished through the use of various flow control protocols, such as XON/XOFF, DTR/DSR, CTS/RTS, and ENQ/ACK. With XON/XOFF, for example, the XOFF code stops data flow, while the XON code permits data flow. This buffering capability also enables high-speed terminals to communicate with low-speed terminals, since the data are buffered from the high-speed terminal for low-speed transmission to the remote terminal.

The ability of the STDM to accommodate variable-length frames also contributes to bandwidth efficiency. This means that more than one byte of data can be accepted from a given terminal before the STDM moves on to the next active terminal. Other advantages of STDM include the capability to prioritize channel scanning and perform retransmissions when errors are detected. This processing, however, adds delay that contributes to overall latency.

STDMs offer some very sophisticated capabilities, such as channel prioritization, down-line loading, alternative routing, port contention, and virtual connections. With channel prioritization, the network administrator can designate one or more channels that will always be allowed to pass data before channels that have a lower priority. Down-line loading allows the host site to set the channel options at the remote site, eliminating the need for manual intervention by a network administrator. Alternative routing is the capability of some high-end STDMs to invoke stored configurations of the network to bypass failed facilities or to reconfigure the network automatically by time of day, adjusting bandwidth to handle different traffic patterns. Alternative routing may be a simple redirection of a few channels or a complete reconfiguration that involves every node in the network. Port contention allows a larger number of users to contend for the services of a smaller number of multiplexer channels. Virtual connections are momentary (packet-by-packet) connections between devices on a switched or permanent basis.
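The buffering and flow-control behavior described above can be summarized in a short sketch. The buffer depth, thresholds, and class and method names below are assumptions chosen for illustration; only the XON/XOFF codes (ASCII DC1/DC3) are standard.

```python
# Simplified STDM behavior (illustrative sketch; buffer size, threshold, and
# the per-slot record format are assumptions, not a real product's design).

XON, XOFF = 0x11, 0x13           # ASCII DC1/DC3 flow-control codes
BUFFER_LIMIT = 64                # assumed shared buffer depth, in slots

class StatMux:
    def __init__(self, num_channels):
        self.inputs = [list() for _ in range(num_channels)]
        self.buffer = []          # (channel_id, byte) pairs awaiting bandwidth
        self.flow_state = [XON] * num_channels

    def offer(self, channel, data):
        """A terminal offers data; honor flow control before accepting it."""
        if self.flow_state[channel] == XOFF:
            return False          # sender is being held off
        self.inputs[channel].extend(data)
        return True

    def scan(self, slots_per_frame):
        """Scan the inputs: only active channels get slots, and the channel id
        travels with each byte so the far end can demultiplex it."""
        for ch, queue in enumerate(self.inputs):
            while queue and len(self.buffer) < BUFFER_LIMIT:
                self.buffer.append((ch, queue.pop(0)))
            # throttle senders whenever the shared buffer is full
            self.flow_state[ch] = XOFF if len(self.buffer) >= BUFFER_LIMIT else XON
        frame, self.buffer = self.buffer[:slots_per_frame], self.buffer[slots_per_frame:]
        return frame              # variable-length frame: idle channels absent

mux = StatMux(num_channels=4)
mux.offer(0, b"hello")
mux.offer(2, b"ok")
print(mux.scan(slots_per_frame=8))   # [(0, 104), (0, 101), ... (2, 111), (2, 107)]
```

Unlike the TDM sketch earlier, idle channels contribute nothing to the frame, which is the source of the bandwidth efficiency discussed next.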
The primary advantage of STDM over TDM is efficiency of bandwidth (see Figure 8.1). Despite this advantage, however, STDMs have some drawbacks that might make them unsuitable for certain applications. Using STDMs for voice in other than simple point-to-point situations would result in unacceptably low quality, as well as intolerable time delays, because the digitized voice signals do not arrive at their destinations in regular, predictable time slots as they do with TDM. This causes varying amounts of phase delay, which results in unacceptable voice inflection. Mixing voice with data is a function that TDMs perform best.

Figure 8.1: A comparison of TDM and STDM.

8.2.3 Points of Differentiation

The choice of an STDM or TDM depends on the application. In electronic brokerage transactions, for example, the processing delay of an STDM could inhibit the timely flow of information. Yet the benefits of built-in error checking and retransmission could off-load the brokerage’s host computers and actually hasten the transaction-processing task. TDMs are totally transparent to the data that each channel is supplying in that they do not analyze the data to determine which protocol is being used. TDMs do not implement priority scanning or channel-based error checking. In the brokerage example, the communication network could transmit at T1 or higher rates, but error checking would be the responsibility of the host computer. The volume of trading that could be supported would therefore depend on both the transaction-processing capacity of the host and the error-checking burden placed on it. Whether it is better to use faster TDMs and have the host involved in retransmissions to correct errors, or to use slower STDMs and let the host focus on transaction processing, is an application-driven trade-off that must be considered during the planning process.

Typically, TDMs are deployed on network backbones to ensure the free flow of data between major nodes. STDMs are typically used at locations where many users require only sporadic access to the network, for instance, when clustered terminals query a remote host in point-of-sale applications. In this case, the STDM makes the most efficient use of available bandwidth to “feed” the T1 backbone or provide a connection to a packet-switching network. If STDM is selected, careful attention must be given to proper design; offered traffic must fall within certain statistical limits to ensure an acceptable level of performance for the applications (a simple sizing sketch appears after Figure 8.2).

When faced with the need to multiplex, the choice need not be limited to TDM only or STDM only. The mixing and matching of devices provides many varied configuration possibilities that can enhance network efficiency. Not only can TDMs network with other TDMs, they can interface with other types of hardware such as channel banks, PBXs, STDMs, front-end processors, and host computers (see Figure 8.2). An STDM, for example, can be used to concentrate data from a cluster of low-speed asynchronous terminals into a single input channel of a T1 multiplexer, which transmits the input of that 56-Kbps channel along with the other channels to a remote T1 multiplexer, where the channels may be routed to their separate destinations. Some of those channels (e.g., voice, low-speed data) may be routed to an appropriately equipped digital PBX that acts as a demultiplexer.

Figure 8.2: The interconnectivity potential of a TDM-based T1 multiplexer. (Note: The CSU at each end of the T1 link may be integrated into each multiplexer as a module.)
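To illustrate why sporadic, clustered traffic suits an STDM feeding a single 56-Kbps channel, here is the back-of-the-envelope sizing sketch referred to above. The terminal count, port speed, and activity factor are assumed values chosen for illustration, not figures from the text.

```python
# Back-of-the-envelope STDM sizing (assumed values for illustration only).

terminals        = 16        # clustered async terminals at a branch site
line_speed_bps   = 9_600     # each terminal's port speed
activity_factor  = 0.15      # assumed fraction of time a terminal is sending
composite_bps    = 56_000    # the single T1-multiplexer channel being fed

aggregate_peak = terminals * line_speed_bps                 # 153,600 bps
expected_load  = aggregate_peak * activity_factor           #  23,040 bps

print(f"peak if all talk at once : {aggregate_peak:>7,} bps")
print(f"statistically expected   : {expected_load:>9,.0f} bps")
print(f"fits 56-Kbps channel?    : {expected_load < composite_bps}")
# A plain TDM would need slots for the full 153.6 Kbps; the STDM only has to
# carry the expected load, buffering the occasional peak.
```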
Multiplexers are also differentiated on the basis of architecture (bit versus byte), modularity, data and voice capabilities, synchronization characteristics, network management features, and interoperability with public network standards.

Modularity. This feature allows small and mid-size companies to take advantage of the technology with a minimum up-front cost. As networking requirements grow and become more complex, channel capacity can be added incrementally. With an STDM, for example, a company can start with a low-end product, which typically supports 2 to 8 input channels over a single point-to-point link. Later, the company can add modules to the empty card slots in the chassis to support up to 32 input channels. High-end STDMs handle even more input channels and provide multiple high-speed links on the output side. They also incorporate sophisticated switching and control capabilities. Feature packages are available to support built-in modems with automatic dial backup and per-channel password protection. The same package allows network administrators to set different channel speeds and protocols at each end of the circuit. For asynchronous traffic, STDMs strip out redundant bits to increase the density of the data stream, thereby permitting greater throughput over cheaper, lower-speed lines (a brief sketch of this savings appears at the end of this discussion of modularity).

Modularity also applies to the features and capabilities of TDMs, allowing the addition of specific capabilities to the T1 multiplexer through plug-in boards or cartridges. These include boards that provide X.25 bridges, LAN gateways, V.35 interfaces for wideband modems, and the ISDN PRI. Aside from cost savings, integrating modems with multiplexers simplifies maintenance and eliminates potential cabling and interfacing problems. Compression devices may be integrated into TDMs via plug-in cards to increase the number of voice channels that can be carried. Some TDMs allow users to pick the type of input channels they want through software selection: asynchronous channels to support terminals, printers, and modems; synchronous channels to interface with multiplexers and other high-speed devices; and isochronous channels for delay-sensitive applications that require a constant, clocked flow of data.

T1 multiplexers often transmit over private-network leased lines; for point-to-point compatibility, D4 framing suffices. If interconnectivity with the public network is desired, however, the high-capacity TDM multiplexer must be DS0-compatible [1] to provide access to carrier equipment such as a DCS, which allows 56/64-Kbps channels to be directed to a location on the PSTN. Such multiplexers typically use extended superframe (ESF), a carrier-provided T-carrier format containing management bits that are used for nonintrusive circuit testing and diagnostics without taking the circuit out of service.

While subrate multiplexers are available as standalone products, some T1 multiplexers offer this capability as a value-added option. This allows users to feed lower-speed digital data service (DDS) circuits (2.4, 4.8, and 9.6 Kbps) to a CO multiplexer via a single 56-Kbps or T1 link so that, for example, five 9.6-Kbps data channels from five different locations can be consolidated over a single high-speed link terminating at an interexchange carrier’s PoP. At the PoP, the multiplexer distributes the data onto several subrate DDS lines to the appropriate locations. Because the telephone company CO or interexchange carrier PoP provides its own multiplexer, or an appropriately equipped DCS, the user needs only a single-ended subrate multiplexing capability.
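As mentioned under Modularity, STDMs strip out the redundant bits that frame each asynchronous character before sending data over the composite link. The quick calculation below is a sketch assuming the common 10-bit asynchronous character format (one start bit, eight data bits, one stop bit) and a 9.6-Kbps terminal; the exact savings depends on the character format in use.

```python
# Estimated throughput gain from stripping async start/stop bits
# (assumes the common 1 start + 8 data + 1 stop character format).

bits_per_char_on_the_wire = 10     # start + 8 data + stop, as sent by a terminal
useful_bits_per_char      = 8      # what the STDM actually forwards
line_rate_bps             = 9_600  # assumed async terminal speed

chars_per_second  = line_rate_bps / bits_per_char_on_the_wire      # 960 chars/s
stripped_rate_bps = chars_per_second * useful_bits_per_char        # 7,680 bps

savings = 1 - stripped_rate_bps / line_rate_bps
print(f"forwarded rate: {stripped_rate_bps:,.0f} bps "
      f"({savings:.0%} of the line rate recovered)")   # 7,680 bps (20% recovered)
```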
Bit versus byte architectures. As noted, TDMs may package data in either bit or byte formats. Companies interested in sending voice traffic through the public network require a byte-oriented T1 multiplexer that adheres to the 8-bit byte structure (i.e., the 64-Kbps DS0 format). Users interested primarily in moving data through their private networks usually turn to the bit-oriented T1 multiplexer because it can handle a variety of data rates and types with more efficiency and less processing delay than byte-oriented multiplexers (see Figure 8.3). Data-intensive applications like computer-aided design (CAD) benefit from the lower delays and higher data density offered by bit-oriented T1 multiplexers.

Figure 8.3: Comparison of processing delay with bit- and byte-oriented multiplexers. With the byte-oriented multiplexer, there is a slight processing delay while the bits are assembled into bytes.

Byte-oriented multiplexers provide the connectivity needed to take advantage of current and future carrier services that require access to the public network through equipment such as a DCS. Bit-oriented multiplexers cannot do this because their data format is proprietary. The byte structure is also required if the user wants to take advantage of ESF. While it may be desirable to choose a byte-oriented multiplexer to take advantage of carrier service offerings, byte-oriented multiplexers are not without drawbacks. For instance, byte-oriented multiplexers waste bandwidth, consuming as much as 5% of the T1 bandwidth for overhead functions. This percentage may not appear to be very significant until such losses are translated into dollars. A T1 circuit might cost $4,000 a month, including local-loop charges at both ends. If overhead results in a 5% loss of available bandwidth, the user is paying about $200 a month for bandwidth that cannot be used, which translates into a $2,400 loss per year on that circuit. These losses are multiplied on a multinode T1 network and compounded even further by increases in distance.

Suppose several 56-Kbps circuits are on a network and the bandwidth is derived with a byte-oriented multiplexer to yield five 9.6-Kbps data channels. This subrate scheme immediately leaves 14% of the bandwidth, or 8 Kbps, unusable. Add to that the 5% of bandwidth that the multiplexer itself consumes, and the five 9.6-Kbps subchannels are no longer achievable because roughly 19% of the bandwidth on a 56-Kbps circuit is lost to overhead. With multiple long-haul 56-Kbps links going through multiple byte-oriented multiplexers, the dollar losses over a year can be quite large.

A bit-oriented multiplexer, on the other hand, does not present such problems because it uses all of the available bandwidth for production data. When voice compression techniques like ADPCM are added to double the number of voice channels on the circuit, the full 48 channels are derived, instead of only 44 under the byte orientation. Some vendors of ADPCM equipment accommodate both bit- and byte-interleaving multiplexers, preferring to let users configure the device for compatibility with either format.
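The overhead percentages above translate into dollars with a little arithmetic. The sketch below simply reruns the figures already quoted in the text: the illustrative $4,000-per-month T1 price, the roughly 5% framing overhead, and the 56-Kbps subrate example.

```python
# Cost of byte-oriented framing overhead, using the figures quoted above.

t1_monthly_cost   = 4_000          # illustrative monthly T1 price, loops included
framing_overhead  = 0.05           # ~5% of T1 bandwidth consumed by overhead

wasted_per_month = t1_monthly_cost * framing_overhead      # $200
wasted_per_year  = wasted_per_month * 12                   # $2,400
print(f"${wasted_per_month:,.0f}/month, ${wasted_per_year:,.0f}/year per T1")

# The 56-Kbps subrate case: five 9.6-Kbps channels leave 8 Kbps (~14%) idle;
# add the ~5% multiplexer overhead and roughly 19% of the circuit is lost.
subrate_idle   = (56_000 - 5 * 9_600) / 56_000              # ~0.14
total_overhead = subrate_idle + framing_overhead            # ~0.19
print(f"56-Kbps circuit overhead: {total_overhead:.0%}")    # ~19%
```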
[1] DS0 refers to digital signal level 0, a channel that offers 56/64 Kbps of bandwidth. A T1 facility is capable of supporting 24 of these channels, providing 1.536 Mbps of payload (1.544 Mbps including 8 Kbps of framing overhead); this aggregate signal is known as digital signal level 1, or DS1.
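The DS1 rate cited in the footnote follows directly from the channel arithmetic:

```python
# DS1 line rate from its DS0 channels plus framing.
payload = 24 * 64_000          # 24 DS0 channels -> 1,536,000 bps (1.536 Mbps)
ds1     = payload + 8_000      # add 8-Kbps framing -> 1,544,000 bps (1.544 Mbps)
print(payload, ds1)
```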