Hi students, you can download the CS-68 assignments from the links below if some answers in this solution are missing:
Link-1: https://docs.google.com/document/pub?id=1Er2PIp9SJeTD29E8lsqO4Maq645Fl5GmrMuA2wLg3hM
Link-2: http://studentmasti2.blogspot.in/2012/02/cs-68-solved-assignment.html

SOLUTION =>

Q.1) a) A computer network, often simply referred to as a network, is a collection of hardware components and computers interconnected by communication channels that allow sharing of resources and information. When at least one process in one device is able to send data to or receive data from at least one process residing in a remote device, the two devices are said to be in a network. Networks may be classified according to a wide variety of characteristics such as the medium used to transport the data, the communications protocol used, scale, topology, and organizational scope. Communications protocols define the rules and data formats for exchanging information in a computer network, and provide the basis for network programming. Well-known communications protocols are Ethernet, a hardware and link layer standard that is ubiquitous in local area networks, and the Internet Protocol Suite, which defines a set of protocols for internetworking, i.e. for data communication between multiple networks, as well as host-to-host data transfer and application-specific data transmission formats. Computer networking is sometimes considered a sub-discipline of electrical engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of these disciplines. The equipment to be networked consists of those components that you want to be able to communicate with each other, or hardware that you want to share. A network is defined as a group of interconnected computers, but it is really more than that: the connected computers communicate with each other and share resources on the network.
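As a small illustration of the definition above (a process on one device exchanging data with a process on another), here is a hedged sketch using a connected socket pair. A real network would use TCP sockets between two machines; `socketpair` just keeps the example local and self-contained.

```python
import socket

# Two connected endpoints standing in for processes on two networked
# devices (socketpair keeps the sketch local and self-contained).
proc_a, proc_b = socket.socketpair()

proc_a.sendall(b"hello from process A")  # one process sends...
data = proc_b.recv(1024)                 # ...the other receives
print(data.decode())                     # hello from process A

proc_a.close()
proc_b.close()
```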
Common resources include printers, disk storage, databases and access to other networks such as the internet. The diagram below illustrates some typical components.

Hardware that connects the computers and peripherals
Something is needed to carry the electronic signals between your computers and whatever they connect to. You can use a wired or wireless connection. Network cable that looks like thick telephone wire can be used to carry the data packets to and from your computer. These packets can also be transmitted like radio signals from a wireless transmitter to a wireless receiver. The following devices connect the members of the network:
1. network hub
2. network switch
3. broadband modem
4. computer network router
5. router/switch
6. NIC card

1. Network Hub - The hub or network hub connects computers and devices and sends messages and data from any one device to all the others. If the desktop computer wants to send data to the laptop and it sends a message to the laptop through the hub, the message will get sent by the hub to all the computers and devices on the network. They need to do work to figure out that the message is not for them. The message also uses up bandwidth (room) on the network wires or wireless radio waves and limits how much communication can go on. Hubs are not used often these days.
2. Network Switch - The switch connects the computer network components, but it is smart about it. It knows the address of each item, so when the desktop computer wants to talk to the laptop, it sends the message only to the laptop and nothing else. In order to have a small home network that just connects the local equipment, all that is really needed is a switch and network cable, or the switch can transmit wireless information that is received by wireless receivers that each of the network devices have.
3. Computer Network Router - A router is a device that connects two networks.
If you happen to have two LANs (local area networks) in your home or office and want to connect them, the router is the device that you would need. For more information about routers see "What does a router do?". The network that most home networks connect to is the world's biggest WAN (wide area network), the Internet.
4. Router/Switch - In order to let all the computers on the local network communicate with each other and the Internet, most routers sold today include a switch. The router part connects your network to the Internet. The switch part lets the computers talk to each other and to the internet.
5. Broadband Modem - Almost everyone wants to connect to the internet. A broadband modem is used to take a high speed Internet connection provided by an ISP (Internet Service Provider) and convert the data into a form that your local network can use. The high speed connection can be DSL (Digital Subscriber Line) from a phone company or cable from a cable television provider. In order to be reached on the Internet, your computer needs a unique address on the internet. Your ISP will provide this to you as part of your Internet connection package. This address will generally not be fixed, which means that they may change your address from time to time. For the vast majority of users, this makes no difference. If you have only one computer and want to connect to the Internet, you strictly speaking don't need a router. You can plug the network cable from the modem directly into the network connection of your computer. However, you are much better off connecting the modem to a router. The IP address your ISP provides will be assigned to the router. The router will assign a hidden (non-routable) address to each of the computers on the network. This is strong protection against hackers, since they scan IP addresses for computers that are open to being attacked. The router is not a general purpose computer and will not be visible to them.
6.
NIC card - In order to use your phone service you need to have a phone. Similarly, to be able to talk on the network a computer or printer needs a NIC (network interface card) card (sort of redundant :-)), otherwise known as a network adapter. They come in two varieties, wired or wireless. Most modern desktop computers come with a wired NIC, and laptops come with both a wired and a wireless NIC. If your computer doesn't have a built-in NIC card, you can get a USB based adapter that you can plug into the USB port of your computer. This is portable and can be moved from computer to computer. You can get a wireless LAN USB adapter or a wired one.

Q.1.b) General Comparison with TCP/IP: In the TCP/IP model of the Internet, protocols are deliberately not as rigidly designed into strict layers as in the OSI model. RFC 3439 contains a section entitled "Layering considered harmful." However, TCP/IP does recognize four broad layers of functionality which are derived from the operating scope of their contained protocols, namely the scope of the software application, the end-to-end transport connection, the internetworking range, and lastly the scope of the direct links to other nodes on the local network. Even though the concept is different from that in OSI, these layers are nevertheless often compared with the OSI layering scheme in the following way: the Internet Application Layer includes the OSI Application Layer, Presentation Layer, and most of the Session Layer. Its end-to-end Transport Layer includes the graceful close function of the OSI Session Layer as well as the OSI Transport Layer. The internetworking layer (Internet Layer) is a subset of the OSI Network Layer, while the Link Layer includes the OSI Data Link and Physical Layers, as well as parts of OSI's Network Layer. These comparisons are based on the original seven-layer protocol model as defined in ISO 7498, rather than refinements in such things as the internal organization of the Network Layer document.
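The layer correspondence described above can be written down as a small lookup table. The mapping follows the comparison in the text; the exact wording of the entries is ours, and the TCP/IP layer names follow RFC 1122.

```python
# Rough OSI-to-TCP/IP layer correspondence (TCP/IP layer names per RFC 1122).
TCPIP_TO_OSI = {
    "Application": ["Application", "Presentation", "Session (most of it)"],
    "Transport":   ["Transport", "Session (graceful close)"],
    "Internet":    ["Network (subset)"],
    "Link":        ["Data Link", "Physical", "Network (parts)"],
}

for tcpip_layer, osi_layers in TCPIP_TO_OSI.items():
    print(f"{tcpip_layer:<11} <- {', '.join(osi_layers)}")
```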
The presumably strict consumer/producer layering of OSI as it is usually described does not present contradictions in TCP/IP, as it is permissible that protocol usage does not follow the hierarchy implied in a layered model. Such examples exist in some routing protocols (e.g., OSPF), or in the description of tunneling protocols, which provide a link layer for an application, although the tunnel host protocol may well be a transport or even an application layer protocol in its own right. The TCP/IP design generally favors decisions based on simplicity, efficiency and ease of implementation.

Comparison between TCP/IP and OSI
This chapter gives a brief comparison between OSI and TCP/IP protocols, with a special focus on the similarities and on how the protocols from both worlds map to each other. The adoption of TCP/IP does not conflict with the OSI standards because the two protocol stacks were developed concurrently. In some ways, TCP/IP contributed to OSI, and vice versa. Several important differences do exist, though, which arise from the basic requirements of TCP/IP, which are:
- A common set of applications
- Dynamic routing
- Connectionless protocols at the networking level
- Universal connectivity
- Packet-switching
The main differences between the OSI architecture and that of TCP/IP relate to the layers above the transport layer (layer 4) and those at the network layer (layer 3). OSI has both a session layer and a presentation layer, whereas TCP/IP combines both into an application layer. The requirement for a connectionless protocol also led TCP/IP to combine OSI's physical layer and data link layer into a network access level.

Physical Layer
The physical layer may be either Ethernet, SDH-DCC, or some timeslot of a PDH signal. Both OSI protocols and TCP/IP protocols build on the same physical layer standards, thus there is no difference between OSI and TCP/IP in this aspect.
Data Link Layer
The purpose of the data link layer is to provide error-free data transmission even on noisy links. This is achieved by framing of data and retransmission of every frame until it is acknowledged from the far end, using flow control mechanisms. Error detection is done by means of error detection codes. The data link layer in the OSI world makes use of the Q.921 LapD protocol, which must support an information field length of at least 512 octets according to G.784. LapD is based on HDLC framing. In the internet world there is no real data link layer protocol, but the subnet protocol, which has many similarities. The subnet protocol consists of the IMP-IMP protocol, which aims to provide a reliable connection between neighboring IMPs. For Ethernet based networks, e.g. LANs (Local Area Networks), the data link protocol LLC (Logical Link Control) is used equally in OSI and TCP/IP networks.

Network Layer
The network layer provides routing capabilities between source and destination system. OSI uses the CLNS (Connection Less Network Service) protocols ES-IS for communication of an end system with an intermediate system and IS-IS for communication between intermediate systems. TCP divides messages into datagrams of up to 64 KB length. Each datagram consists of a header and a text part. Besides some other information, the header contains the source and the destination address of the datagram. IP routes these datagrams through the network, using e.g. the protocol OSPF (Open Shortest Path First) or RIP (Routing Information Protocol) for path calculation purposes. The service provided by IP is not reliable. Datagrams may be received in the wrong order or they may even get lost in the network.

Transport Layer
The transport layer provides a reliable end-to-end connection between source and destination system on top of the network layer. It forms an integral part of the whole OSI layering principle and of the internet protocol.
The OSI transport layer protocol (TP4) and the internet transport protocol (TCP) have many similarities but also some remarkable differences. Both protocols are built to provide a reliable connection-oriented end-to-end transport service on top of an unreliable network service. The network service may lose packets, store them, deliver them in the wrong order or even duplicate packets. Both protocols have to be able to deal with the most severe problems, e.g. a subnetwork that stores valid packets and sends them at a later date. TP4 and TCP have a connect, a transfer and a disconnect phase. The principles of doing this are also quite similar. One difference between TP4 and TCP worth mentioning is that TP4 uses nine different TPDU (Transport Protocol Data Unit) types whereas TCP knows only one. This makes TCP simpler, but every TCP header has to have all possible fields, and therefore the TCP header is at least 20 bytes long whereas the TP4 header takes at least 5 bytes. Another difference is the way both protocols react in case of a call collision. TP4 opens two bidirectional connections between the TSAPs, whereas TCP opens just one connection. TP4 uses a different flow control mechanism for its messages; it also provides means for quality of service measurement.

Internet Protocol Suite: The Internet Protocol Suite, also known as TCP/IP, is the set of communications protocols used for the Internet and other similar networks. It is named after two of the most important protocols in it: the Transmission Control Protocol (TCP) and the Internet Protocol (IP), which were the first two networking protocols defined in this standard. IP networking represents a synthesis of several developments that began to evolve in the 1960s and 1970s, namely the Internet and LANs (Local Area Networks), which emerged in the mid- to late 1980s, together with the advent of the World Wide Web in the early 1990s. The Internet Protocol Suite, like many protocol suites, may be viewed as a set of layers.
Each layer solves a set of problems involving the transmission of data, and provides a well-defined service to the upper layer protocols based on using services from some lower layers. Upper layers are logically closer to the user and deal with more abstract data, relying on lower layer protocols to translate data into forms that can eventually be physically transmitted. The TCP/IP model consists of four layers (RFC 1122). From lowest to highest, these are the Link Layer, the Internet Layer, the Transport Layer, and the Application Layer.

Q.1.c) Transmission modes
A given transmission on a communications channel between two machines can occur in several different ways. The transmission is characterised by:
- the direction of the exchanges
- the transmission mode: the number of bits sent simultaneously
- synchronisation between the transmitter and receiver

Simplex, half-duplex and full-duplex connections
There are three different transmission modes characterised according to the direction of the exchanges:
A simplex connection is a connection in which the data flows in only one direction, from the transmitter to the receiver. This type of connection is useful if the data do not need to flow in both directions (for example, from your computer to the printer or from the mouse to your computer).
A half-duplex connection (sometimes called an alternating connection or semi-duplex) is a connection in which the data flows in one direction or the other, but not both at the same time. With this type of connection, each end of the connection transmits in turn. This type of connection makes it possible to have bidirectional communications using the full capacity of the line.
A full-duplex connection is a connection in which the data flow in both directions simultaneously.
Each end of the line can thus transmit and receive at the same time, which means that the bandwidth is divided in two for each direction of data transmission if the same transmission medium is used for both directions of transmission.

Q.1.d) ans = book no. 1, page no. 21.

Question 2: Differentiate the following with appropriate examples:
(i) Local and Remote Bridges
(ii) Constant Bit Rate and Variable Bit Rate
(iii) Time Division Multiplexing and Frequency Division Multiplexing
(iv) Datagram and Virtual Circuit
(v) Analog and Digital Signal
Ans: A computer network, often simply referred to as a network, is a collection of computers and devices interconnected by communications channels that facilitate communications among users and allow users to share resources. Networks may be classified according to a wide variety of characteristics. A computer network allows sharing of resources and information among interconnected devices.

(i) Local and Remote Bridges
ans: Local bridges: Directly connect local area networks (LANs). A local bridge joins two LAN segments at the same site, forwarding frames between them at the data link layer so that the two segments behave as a single LAN. The local bridge connection function (herein referred to as local bridge) can connect a Virtual HUB operating on the VPN Server or VPN Bridge and the physical network adapter connected to that server computer on a layer 2 connection, thereby joining two segments which originally operated as separate Ethernet segments into one.
Local bridging enables a computer connected to a Virtual HUB and a computer connected to a physical LAN to communicate freely on an Ethernet level, connected, in theory, to the same Ethernet segment, regardless of whether each of them is physically linked to a separate network. Using a local bridge makes it possible to easily construct a remote-access VPN and a site-to-site VPN. The local bridge is a function often used by PacketiX VPN to make VPN connections. Local bridging is used to connect a virtual network and a physical network on the Ethernet level. This section explains local bridge concepts, methods for setting them up and precautions.
Remote bridges: Can be used to create a wide area network (WAN) link between LANs. Remote bridges, where the connecting link is slower than the end networks, have largely been replaced by routers. A common use of bridges is to connect two (or more) distant LANs. For example, a company might have plants in several cities, each with its own LAN. Ideally, all the LANs should be interconnected, so the complete system acts like one large LAN. This goal can be achieved by putting a bridge on each LAN and connecting the bridges pairwise with point-to-point lines. Various protocols can be used on the point-to-point lines. One possibility is to choose some standard point-to-point data link protocol such as PPP, putting complete MAC frames in the payload field. This strategy works best if all the LANs are identical, and the only problem is getting frames to the correct LAN. Another option is to strip off the MAC header and trailer at the source bridge; a new header and trailer can then be generated at the destination bridge.
A disadvantage of this approach is that the checksum that arrives at the destination host is not the one computed by the source host, so errors caused by bad bits in a bridge's memory may not be detected.

(ii) Constant Bit Rate and Variable Bit Rate
ans: Constant bitrate (CBR) is a term used in telecommunications, relating to the quality of service. Compare with variable bitrate. When referring to codecs, constant bit rate encoding means that the rate at which a codec's output data should be consumed is constant. CBR is useful for streaming multimedia content on limited capacity channels, since it is the maximum bit rate that matters, not the average, so CBR would be used to take advantage of all of the capacity. CBR would not be the optimal choice for storage, as it would not allocate enough data for complex sections (resulting in degraded quality) while wasting data on simple sections. The problem of not allocating enough data for complex sections could be solved by choosing a high bitrate (e.g., 256 kbit/s or 320 kbit/s) to ensure that there will be enough bits for the entire encoding process, though the size of the file at the end would be proportionally larger. Most coding schemes, such as Huffman coding or run-length encoding, produce variable-length codes, making perfect CBR difficult to achieve. This is partly solved by varying the quantization (quality), and fully solved by the use of padding. (However, CBR is implied in a simple scheme like reducing all 16-bit audio samples to 8 bits.)
Variable bitrate (VBR) is a term used in telecommunications and computing that relates to the bitrate used in sound or video encoding. As opposed to constant bitrate (CBR), VBR files vary the amount of output data per time segment. VBR allows a higher bitrate (and therefore more storage space) to be allocated to the more complex segments of media files, while less space is allocated to less complex segments.
The average of these rates can be calculated to produce an average bitrate for the file. The advantage of VBR is that it produces a better quality-to-space ratio compared to a CBR file of the same data. The bits available are used more flexibly to encode the sound or video data more accurately, with fewer bits used in less demanding passages and more bits used in difficult-to-encode passages. This VBR encoding method allows the user to specify a bitrate range: a minimum and/or maximum allowed bitrate. Some encoders extend this method with an average bitrate. The minimum and maximum allowed bitrate set bounds within which the bitrate may vary. The disadvantage of this method is that the average bitrate (and hence the file size) will not be known ahead of time. The bitrate range is also used in some fixed quality encoding methods, but usually without permission to change a particular bitrate.

(iii) Time Division Multiplexing and Frequency Division Multiplexing
ans: Time-division multiplexing (TDM) is a type of digital or (rarely) analog multiplexing in which two or more signals or bit streams are transferred apparently simultaneously as sub-channels in one communication channel, but are physically taking turns on the channel. The time domain is divided into several recurrent timeslots of fixed length, one for each sub-channel. A sample, byte or data block of sub-channel 1 is transmitted during timeslot 1, sub-channel 2 during timeslot 2, etc. One TDM frame consists of one timeslot per sub-channel plus a synchronization channel and sometimes an error correction channel before the synchronization. After the last sub-channel, error correction, and synchronization, the cycle starts all over again with a new frame, starting with the second sample, byte or data block from sub-channel 1, etc. TDM is used for circuit mode communication with a fixed number of channels and constant bandwidth per channel.
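The frame-building cycle described above (one fixed timeslot per sub-channel, repeated in a fixed order) can be sketched in a few lines; synchronization and error-correction slots are omitted for brevity, and the helper name `tdm_frames` is ours.

```python
def tdm_frames(subchannels):
    """Interleave equal-length sub-channel byte streams into TDM frames,
    one sample per sub-channel per frame, in a fixed recurring order."""
    return [bytes(samples) for samples in zip(*subchannels)]

# Three sub-channels, four samples each -> four frames of three timeslots.
print(tdm_frames([b"AAAA", b"BBBB", b"CCCC"]))
# [b'ABC', b'ABC', b'ABC', b'ABC']
```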
Bandwidth reservation distinguishes time-division multiplexing from statistical multiplexing such as packet mode communication (also known as statistical time-domain multiplexing, see below), i.e. the time-slots are recurrent in a fixed order and pre-allocated to the channels, rather than scheduled on a packet-by-packet basis. Statistical time-domain multiplexing resembles, but should not be considered the same as, time-division multiplexing.
Frequency-division multiplexing (FDM) is a form of signal multiplexing which involves assigning non-overlapping frequency ranges to different signals or to each "user" of a medium. FDM can also be used to combine signals before final modulation onto a carrier wave. In this case the carrier signals are referred to as subcarriers: an example is stereo FM transmission, where a 38 kHz subcarrier is used to separate the left-right difference signal from the central left-right sum channel, prior to the frequency modulation of the composite signal. A television channel is divided into subcarrier frequencies for video, color, and audio. DSL uses different frequencies for voice and for upstream and downstream data transmission on the same conductors, which is also an example of frequency duplex. Where frequency-division multiplexing is used to allow multiple users to share a physical communications channel, it is called frequency-division multiple access (FDMA). FDMA is the traditional way of separating radio signals from different transmitters. In the 1860s and 70s, several inventors attempted FDM under the names of acoustic telegraphy and harmonic telegraphy. Practical FDM was only achieved in the electronic age.
Meanwhile, their efforts led to an elementary understanding of electroacoustic technology, resulting in the invention of the telephone. For long distance telephone connections, 20th century telephone companies used L-carrier and similar co-axial cable systems carrying thousands of voice circuits multiplexed in multiple stages by channel banks. For shorter distances, cheaper balanced pair cables were used for various systems including Bell System K- and N-carrier. Those cables didn't allow such large bandwidths, so only 12 voice channels (double sideband) and later 24 (single sideband) were multiplexed onto four wires, one pair for each direction, with repeaters every several miles, approximately 10 km. See 12-channel carrier system. By the end of the 20th century, FDM voice circuits had become rare. Modern telephone systems employ digital transmission, in which time-division multiplexing (TDM) is used instead of FDM. Since the late 20th century, Digital Subscriber Lines have used a discrete multitone (DMT) system to divide their spectrum into frequency channels. The concept corresponding to frequency-division multiplexing in the optical domain is known as wavelength-division multiplexing.

(iv) Datagram and Virtual Circuit
ans: Virtual circuit packet switching sets up a single path along which all packets in the message will travel. The facilities along that path are not dedicated to the circuit and may be used by other packets as well as those traveling through the virtual circuit. (In circuit switching the path is dedicated, so bursty traffic wastes capacity in a circuit-switched network but not in a virtual circuit packet-switched network.) The setup of the circuit is one source of overhead that is not present in datagram packet switching. Datagram packet switching sends each packet along the path that is optimal at the time the packet is sent. When a packet traverses the network, each intermediate station will need to determine the next hop.
This should be equally efficient in both virtual circuit packet switching and datagram packet switching: virtual circuit packet switching must find the entry for the flow in the routing table, while datagram packet switching must find the entry for the destination in the routing table. For datagram packet switching in a real network, the path of each packet is determined independently. Each packet may travel by a different path. Each different path will have a different total transmission delay (the number of hops in the path may be different, and the delay across each hop may change for different routes). Therefore, it is possible for the packets to arrive at the destination in a different order from the order in which they were sent. In contrast, for virtual circuit packet switching, all packets follow the same path, through the same virtual circuit. Therefore, for virtual circuit packet switching the packets will arrive in the order they are sent. In a congested network it is possible that virtual circuit packet switching will experience additional queuing delays at busy intermediate stations, while datagram packet switching may be able to avoid that congestion and delay by choosing other paths for the packets. Packet size and packet transmission time (the time to insert the packet into the network) are important in both types of packet switching. The packet headers used in both approaches will be the same size, given that the protocols used in both networks are the same. (Source routing, where all intermediate stations are listed in the header, is an exception to this rule.) There is an optimal packet size where the tradeoff between extra overhead due to packet headers and shorter packets balances. Smaller packets lead to more equitable sharing of facilities between processes. Each process will have a shorter wait for its 'turn' to transmit. A smaller packet is also less likely to contain an error and need to be discarded or retransmitted.
A smaller packet also means there is less data lost or retransmitted when an error does occur. These improvements in efficiency must be balanced against the added overhead: smaller packets mean more packets to transmit the same amount of data. Each packet has a header, so more packets mean more capacity used by the overhead of transmitting packet headers. The delay caused by waiting for packets to arrive at intermediate stations has two sources: the transmission (or equivalent reception) time and any time the packets sit in a local queue waiting to be transmitted. The first source of delay is the same for virtual circuit packet switching and datagram packet switching, given that the packets for each method are the same size. The latter depends on present conditions in the network and under different conditions may favor either method.

(v) Analog and Digital Signal
Ans: Analog signals are continuous, whereas digital signals are discrete. Analog signals are continuously varying, whereas digital signals are based on 0's and 1's (or, as often said, on's and off's). As an analogy, consider a light switch that is either on or off (digital) and a dimmer switch (analog) that allows you to vary the light in different degrees of brightness. As another analogy, consider a clock in which the second hand smoothly circles the clock face (analog) versus another clock in which the second hand jumps as each second passes (digital). Digital computers work with a series of 0's and 1's to represent letters, symbols, and numbers. In addition, numbers are represented by using the binary code (where only 0's and 1's are used).
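The binary encoding just described can be checked with a couple of lines of Python, using the standard `format` built-in:

```python
# Numbers 1..8 in binary, matching the table in this answer.
print([format(n, "b") for n in range(1, 9)])
# ['1', '10', '11', '100', '101', '110', '111', '1000']

# The ASCII letter 'A' as the 8 bits that actually travel over the wire.
print(format(ord("A"), "08b"))  # 01000001
```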
Number   Binary equivalent
1        1
2        10
3        11
4        100
5        101
6        110
7        111
8        1000
and so on. So each number (that we are accustomed to, such as 5) is represented by 0's and 1's. Morse code uses dits (or dots) and dashes. Digital signals are similar to Morse code: the signal is either a dit or a dash for Morse code, and it is either a 0 or a 1 for digital. A series of these dits and dashes might represent SOS to a navy radio man, and a series of 0's and 1's might represent the question mark to a computer. When an e-mail is sent that says "Hello Joe", Hello Joe doesn't mysteriously appear on Joe's computer. What is sent through the phone line is a series of 0's and 1's, and Joe's computer "interprets" these into the words Hello Joe. If you type the letter A into your computer, it converts this A into 01000001. This 01000001 goes to Joe's computer and his computer interprets it as A. Each 0 or 1 is a "bit" and a series of eight 0's and 1's is a byte. Well, that is about as simple as it gets and about as simple as I can state it.
Analog Signal:
1. Analog signals are continuous.
2. An analog signal is continuously variable.
3. The primary disadvantage of an analog signal is noise.
4. Sound waves are a continuous wave and as such are analog in the real world.
5. An analog signal requires less bandwidth capacity than a digital signal.
Digital Signal:
1. A digital signal is discrete.
2. Digital signals are based on 0's and 1's.
3. Noise is much easier to filter out of a digital signal.
4. Most computers, such as the PC, work using digital signals.
5. A digital signal requires greater bandwidth capacity than an analog signal.
An analog signal is continuously variable.
It differs from a digital signal in that small fluctuations in the signal are meaningful. That is the key, whereas a digital signal represents only two values (0 and 1, or off and on). The primary disadvantage of an analog signal is noise. As an analog signal is processed (copied, sampled, amplified, etc.), the noise is hard to discriminate from the actual signal. Noise is much easier to filter out of a digital signal, because anything other than the pure 'high' or 'low' signal is considered noise. In an analog signal the voltage may assume any numeric value within some continuous range, changing smoothly with time. In a digital signal the voltage may assume only two discrete values, jumping between one and the other with time. These are square pulses. A pictorial diagram would be far easier to understand than this verbal description, but that is the best we can do here. Q.2.b) In the previous protocols, data frames were transmitted in one direction only. In most practical situations, there is a need for transmitting data in both directions. One way of achieving full-duplex data transmission is to have two separate communication channels and use each one for simplex data traffic (in different directions). If this is done, we have two separate physical circuits, each with a ‘‘forward’’ channel (for data) and a ‘‘reverse’’ channel (for acknowledgements). In both cases the bandwidth of the reverse channel is almost entirely wasted. In effect, the user is paying for two circuits but using only the capacity of one. A better idea is to use the same circuit for data in both directions. After all, in protocols 2 and 3 it was already being used to transmit frames both ways, and the reverse channel has the same capacity as the forward channel. In this model the data frames from A to B are intermixed with the acknowledgement frames from B to A.
By looking at the kind field in the header of an incoming frame, the receiver can tell whether the frame is data or acknowledgement. Although interleaving data and control frames on the same circuit is an improvement over having two separate physical circuits, yet another improvement is possible. When a data frame arrives, instead of immediately sending a separate control frame, the receiver restrains itself and waits until the network layer passes it the next packet. The acknowledgement is attached to the outgoing data frame (using the ack field in the frame header). In effect, the acknowledgement gets a free ride on the next outgoing data frame. The technique of temporarily delaying outgoing acknowledgements so that they can be hooked onto the next outgoing data frame is known as piggybacking. The principal advantage of using piggybacking over having distinct acknowledgement frames is a better use of the available channel bandwidth. The ack field in the frame header costs only a few bits, whereas a separate frame would need a header, the acknowledgement, and a checksum. In addition, fewer frames sent means fewer ‘‘frame arrival’’ interrupts, and perhaps fewer buffers in the receiver, depending on how the receiver’s software is organized. In the next protocol to be examined, the piggyback field costs only 1 bit in the frame header. It rarely costs more than a few bits. However, piggybacking introduces a complication not present with separate acknowledgements. How long should the data link layer wait for a packet onto which to piggyback the acknowledgement? If the data link layer waits longer than the sender’s timeout period, the frame will be retransmitted, defeating the whole purpose of having acknowledgements. 
If the data link layer were an oracle and could foretell the future, it would know when the next network layer packet was going to come in and could decide either to wait for it or send a separate acknowledgement immediately, depending on how long the projected wait was going to be. Of course, the data link layer cannot foretell the future, so it must resort to some ad hoc scheme, such as waiting a fixed number of milliseconds. If a new packet arrives quickly, the acknowledgement is piggybacked onto it; otherwise, if no new packet has arrived by the end of this time period, the data link layer just sends a separate acknowledgement frame. The next three protocols are bidirectional protocols that belong to a class called sliding window protocols. The three differ among themselves in terms of efficiency, complexity, and buffer requirements, as discussed later. In these, as in all sliding window protocols, each outbound frame contains a sequence number, ranging from 0 up to some maximum. The maximum is usually 2^n − 1, so the sequence number fits exactly in an n-bit field. The stop-and-wait sliding window protocol uses n = 1, restricting the sequence numbers to 0 and 1, but more sophisticated versions can use an arbitrary n. The essence of all sliding window protocols is that at any instant of time, the sender maintains a set of sequence numbers corresponding to frames it is permitted to send. These frames are said to fall within the sending window. Similarly, the receiver also maintains a receiving window corresponding to the set of frames it is permitted to accept. The sender’s window and the receiver’s window need not have the same lower and upper limits or even have the same size. In some protocols they are fixed in size, but in others they can grow or shrink over the course of time as frames are sent and received.
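The sender-side bookkeeping just described can be sketched in a few lines. This is not the textbook's protocol code; the list-based window, the function names, and the 3-bit sequence number are all illustrative, and timers and buffers are omitted:

```python
# Minimal sender-window bookkeeping for a sliding window protocol.
# MAX_SEQ = 2^n - 1 for an n-bit sequence number field (here n = 3).
MAX_SEQ = 7

def inc(seq):
    """Advance a sequence number circularly through 0..MAX_SEQ."""
    return (seq + 1) % (MAX_SEQ + 1)

window = []      # sequence numbers sent but not yet acknowledged
next_seq = 0     # upper edge of the window: next frame to send

def send_frame():
    global next_seq
    window.append(next_seq)   # advance the upper edge by one
    next_seq = inc(next_seq)

def ack_received(ack):
    # Advance the lower edge past the acknowledged frame.
    while window and window[0] != inc(ack):
        window.pop(0)

send_frame(); send_frame()    # frames 0 and 1 are now outstanding
ack_received(0)               # frame 0 acknowledged
print(window)                 # [1]
```

Note how the window itself is the list of frames that must be kept buffered for possible retransmission, exactly as the text requires.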
Although these protocols give the data link layer more freedom about the order in which it may send and receive frames, we have definitely not dropped the requirement that the protocol must deliver packets to the destination network layer in the same order they were passed to the data link layer on the sending machine. Nor have we changed the requirement that the physical communication channel is ‘‘wire-like,’’ that is, it must deliver all frames in the order sent. The sequence numbers within the sender’s window represent frames that have been sent, or can be sent, but are as yet unacknowledged. Whenever a new packet arrives from the network layer, it is given the next highest sequence number, and the upper edge of the window is advanced by one. When an acknowledgement comes in, the lower edge is advanced by one. In this way the window continuously maintains a list of unacknowledged frames. Figure 3-13 shows an example. Since frames currently within the sender’s window may ultimately be lost or damaged in transit, the sender must keep all these frames in its memory for possible retransmission. Thus, if the maximum window size is n, the sender needs n buffers to hold the unacknowledged frames. If the window ever grows to its maximum size, the sending data link layer must forcibly shut off the network layer until another buffer becomes free. The receiving data link layer’s window corresponds to the frames it may accept. Any frame falling outside the window is discarded without comment. When a frame whose sequence number is equal to the lower edge of the window is received, it is passed to the network layer, an acknowledgement is generated, and the window is rotated by one. Unlike the sender’s window, the receiver’s window always remains at its initial size. Note that a window size of 1 means that the data link layer only accepts frames in order, but for larger windows this is not so. The network layer, in contrast, is always fed data in the proper order, regardless of the data link layer’s window size.
Figure 3-13. A sliding window of size 1, with a 3-bit sequence number. (a) Initially. (b) After the first frame has been sent. (c) After the first frame has been received. (d) After the first acknowledgement has been received.
Figure 3-13 shows an example with a maximum window size of 1. Initially, no frames are outstanding, so the lower and upper edges of the sender’s window are equal, but as time goes on, the situation progresses as shown.
Q.3.b) The RPC tools make it appear to users as though a client directly calls a procedure located in a remote server program. The client and server each have their own address spaces; that is, each has its own memory resource allocated to data used by the procedure. The following figure illustrates the RPC architecture. As the illustration shows, the client application calls a local stub procedure instead of the actual code implementing the procedure. Stubs are compiled and linked with the client application. Instead of containing the actual code that implements the remote procedure, the client stub code:
1. Retrieves the required parameters from the client address space.
2. Translates the parameters as needed into a standard NDR format for transmission over the network.
3. Calls functions in the RPC client run-time library to send the request and its parameters to the server.
The server performs the following steps to call the remote procedure.
1. The server RPC run-time library functions accept the request and call the server stub procedure.
2. The server stub retrieves the parameters from the network buffer and converts them from the network transmission format to the format the server needs.
3. The server stub calls the actual procedure on the server.
The remote procedure then runs, possibly generating output parameters and a return value. When the remote procedure is complete, a similar sequence of steps returns the data to the client.
1.
The remote procedure returns its data to the server stub.
2. The server stub converts output parameters to the format required for transmission over the network and returns them to the RPC run-time library functions.
3. The server RPC run-time library functions transmit the data on the network to the client computer.
The client completes the process by accepting the data over the network and returning it to the calling function.
1. The client RPC run-time library receives the remote-procedure return values and returns them to the client stub.
2. The client stub converts the data from its NDR format to the format used by the client computer. The stub writes data into the client memory and returns the result to the calling program on the client.
3. The calling procedure continues as if the procedure had been called on the same computer.
The run-time libraries are provided in two parts: an import library, which is linked with the application, and the RPC run-time library, which is implemented as a dynamic-link library (DLL). The server application contains calls to the server run-time library functions, which register the server's interface and allow the server to accept remote procedure calls. The server application also contains the application-specific remote procedures that are called by the client applications.
Q.3.c) ans= book no. 2, page no. 40.
Q.4.a) ans= book no. 2, page no. 39.
Q.4.b) The use of Asynchronous Transfer Mode (ATM) technology and services creates the need for an adaptation layer in order to support information transfer protocols which are not based on ATM. This adaptation layer defines how to segment and reassemble higher-layer packets into ATM cells, and how to handle various transmission aspects in the ATM layer. Examples of services that need adaptation are Gigabit Ethernet, IP, Frame Relay, SONET/SDH, UMTS/Wireless, etc.
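The client-stub/server-stub round trip described above can be sketched in miniature. In this illustration the "network" is just an in-process byte buffer and JSON stands in for NDR; the procedure and function names are invented for the example:

```python
import json

# Server side: the actual remote procedure.
def add(a, b):
    return a + b

# Server stub: unmarshal the request, call the real procedure,
# then marshal the result for transmission back to the client.
def server_stub(request_bytes):
    call = json.loads(request_bytes.decode())
    result = add(*call["params"])
    return json.dumps({"result": result}).encode()

# Client stub: looks like a local call, but actually marshals the
# parameters, "sends" them, and unmarshals the reply.
def add_remote(a, b):
    request = json.dumps({"proc": "add", "params": [a, b]}).encode()
    reply = server_stub(request)     # stands in for the network hop
    return json.loads(reply.decode())["result"]

print(add_remote(2, 3))  # 5
```

The point of the sketch is the illusion: the caller writes `add_remote(2, 3)` exactly as if the procedure were local, while all the marshalling and transport happen inside the stubs.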
The main services provided by AAL (ATM Adaptation Layer) are:
1. Segmentation and reassembly
2. Handling of transmission errors
3. Handling of lost and misinserted cell conditions
4. Timing and flow control
The following ATM Adaptation Layer protocols (AALs) have been defined by the ITU-T.[1] These AALs are meant to meet a variety of needs. The classification is based on whether a timing relationship must be maintained between source and destination, whether the application requires a constant bit rate, and whether the transfer is connection-oriented or connectionless.
AAL Type 1 supports constant bit rate (CBR), synchronous, connection-oriented traffic. Examples include T1 (DS1), E1, and n × 64 kbit/s emulation.
AAL Type 2 supports time-dependent variable bit rate (VBR-RT), connection-oriented, synchronous traffic. Examples include voice over ATM. AAL 2 is also widely used in wireless applications due to its capability of multiplexing voice packets from different users on a single ATM connection.
AAL Type 3/4 supports VBR, connection-oriented, asynchronous data traffic (e.g. X.25 data) or connectionless packet data (e.g. SMDS traffic), with an additional 4-byte header in the information payload of the cell. Examples include Frame Relay and X.25.
AAL Type 5 is similar to AAL 3/4, with a simplified information header scheme. This AAL assumes that the data is sequential from the end user and uses the Payload Type Indicator (PTI) bit to indicate the last cell in a transmission. Examples of services that use AAL 5 are classic IP over ATM, Ethernet over ATM, SMDS, and LAN Emulation (LANE).
AAL 5 is a widely used ATM adaptation layer protocol. This protocol was intended to provide a streamlined transport facility for higher-layer protocols that are connection oriented. AAL 5 was introduced to:
1. reduce protocol processing overhead,
2. reduce transmission overhead,
3. ensure adaptability to existing transport protocols.
AAL 5 was designed to accommodate the same variable bit rate, connection-oriented asynchronous traffic or connectionless packet data supported by AAL 3/4, but without the segment tracking and error correction requirements.
Q.4.c) ans= book no. 2, page no. 28.
Q.5.a) ans= book no. 2, page no. 34.
Q.5.c) ISDN Devices
In the context of ISDN standards, STANDARD DEVICES refers not to actual hardware, but to standard collections of functions that can usually be performed by individual hardware units. The ISDN Standard Devices are:
1. Terminal Equipment (TE)
2. Terminal Adapter (TA)
3. Network Termination 1 (NT1)
4. Network Termination 2 (NT2)
5. Exchange Termination (ET)
Terminal Equipment (TE): A TE is any piece of communicating equipment that complies with the ISDN standards. Examples include digital telephones, ISDN data terminals, Group IV fax machines, and ISDN-equipped computers. In most cases, a TE should be able to provide full Basic Rate Access (2B+D), although some TEs may use only 1B+D or even only a D channel.
Terminal Adapter (TA): A TA is a special interface-conversion device that allows communicating devices that don't conform to ISDN standards to communicate over the ISDN. The most common TAs provide Basic Rate Access and have one RJ-type modular jack for voice and one RS-232 or V.35 connector for data (with each port able to connect to either of the available B channels). Some TAs have a separate data connector for the D channel.
Network Termination (NT1 and NT2): The NT devices, NT1 and NT2, form the physical and logical boundary between the customer's premises and the carrier's network. NT1 performs the physical interface conversion between the dissimilar customer and network sides of the interface. NT2 performs the logical interface functions of switching and local-device control (local signalling). In most cases, a single device, such as a PBX or digital multiplexer, performs both physical and logical interface functions.
In ISDN terms, such a device is called NT12 ("NT-one-two") or simply NT.
Exchange Termination (ET): The ET forms the physical and logical boundary between the digital local loop and the carrier's switching office. It performs the same functions at the end office that the NT performs at the customer's premises. In addition, the ET:
1. Separates the B channels, placing them on the proper interoffice trunks to their ultimate destinations
2. Terminates the signalling path of the customer's D channel, converting any necessary end-to-end signalling from the ISDN D-channel signalling protocol to the carrier's switch-to-switch trunk signalling protocol
(ii) Q.5.b) Describe the bandwidth limitations of B-channels and D-channel.
(iii) Q.3.a) Differentiate between Packet switching and Cell switching.
(ii) The ISDN access available is the Primary Rate Interface (PRI), which is carried over an E1 (2048 kbit/s) in most parts of the world. An E1 is 30 'B' channels of 64 kbit/s, one 'D' channel of 64 kbit/s, and a timing and alarm channel of 64 kbit/s. In North America, PRI service is delivered on one or more T1 carriers (often referred to as 23B+D) of 1544 kbit/s (24 channels). A PRI has 23 'B' channels and 1 'D' channel for signalling (Japan uses a circuit called a J1, which is similar to a PRI). Interchangeably but incorrectly, a PRI is referred to as a T1 because it uses the T1 carrier format. A true T1, commonly called an 'Analog T1' to avoid confusion, uses 24 channels of 64 kbit/s with in-band signaling: each channel uses 56 kbit/s for data and voice and 8 kbit/s for signaling and messaging. PRI uses out-of-band signaling, which provides the 23 B channels with a clear 64 kbit/s for voice and data and one 64 kbit/s 'D' channel for signaling and messaging. In North America, Non-Facility Associated Signalling allows two or more PRIs to be controlled by a single D channel, and is sometimes called "23B+D + n*24B". D-channel backup allows for a second D channel in case the primary fails.
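The channel arithmetic above can be checked directly, using the figures given in the text (64 kbit/s per channel):

```python
CH = 64  # kbit/s per B or D channel

# E1 PRI: 30 B channels + 1 D channel + 1 timing/alarm channel
e1 = 30 * CH + CH + CH
print(e1)   # 2048 kbit/s

# North American T1 PRI (23B+D): 23 B channels + 1 D channel,
# plus 8 kbit/s of framing overhead to reach the T1 line rate
t1 = 23 * CH + CH + 8
print(t1)   # 1544 kbit/s
```

The 8 kbit/s added for the T1 case is the framing overhead that brings 24 × 64 = 1536 kbit/s up to the 1544 kbit/s carrier rate.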
NFAS is commonly used on a T3. PRI-ISDN is popular throughout the world, especially for connection of PSTN circuits to PBXs. PSTN is the acronym for Public Switched Telephone Network and is sometimes referred to as POTS or Plain Old Telephone Service. PSTN actually refers to the telephone network, i.e. PBX to PBX, while POTS refers to the connection to a user, as in a residential or commercial line. While the North American PSTN can use PRI or Analog T1 format from PBX to PBX, POTS or BRI can be delivered to a business or residence. The North American PSTN can actually connect from PBX to PBX via Analog T1, T3, PRI, OC3, etc. Even though many network professionals use the term "ISDN" to refer to the lower-bandwidth BRI circuit, in North America by far the majority of ISDN services are in fact PRI circuits serving PBXs.
A data communications network with at least one PoP maintains a local cache database associated with each AAA service at the PoP on the data communications network. Each local database contains a group identification, such as a domain identification corresponding to a group of users or an FQDN specifying a group of one individual, a maximum number of B-Channels to provide the group of users at the PoP, and a dynamic B-Channel session count corresponding to active B-Channel connections currently provided to the group of users at the PoP. Actions are taken when the group attempts to exceed the maximum number of B-Channels by more than a predetermined number. The actions may include assessing extra charges, denying access, and sending warning messages to appropriate recipients. The local database may be synchronized by publishing B-Channel connection and disconnection events to all subscribing local databases. For proxy authentication users, the authentication information is published to the local caches of each AAA service at the PoP upon the first log-in of the user, so as to avoid the need to proxy each successive connection authentication to a remote AAA service.
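A toy version of the per-group B-Channel session count just described might look as follows; the group name, the limit, and the one-channel grace margin are all invented for the sketch:

```python
# Local cache mapping a group (e.g. a domain) to its B-Channel limit
# and the count of currently active B-Channel connections at the PoP.
cache = {"example.com": {"max": 2, "active": 0, "grace": 1}}

def connect(group):
    entry = cache[group]
    if entry["active"] >= entry["max"] + entry["grace"]:
        return "denied"     # exceeded the limit by more than the margin
    entry["active"] += 1
    if entry["active"] > entry["max"]:
        return "warned"     # over the limit: warn / assess extra charges
    return "connected"

def disconnect(group):
    # A disconnection event decrements the session count.
    cache[group]["active"] -= 1

print(connect("example.com"))  # connected
print(connect("example.com"))  # connected
print(connect("example.com"))  # warned
print(connect("example.com"))  # denied
```

In a real deployment the connect/disconnect events would be published to all subscribing local databases to keep the counts synchronized, as the text notes.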
(iii) Packet switching is a digital networking communications method that groups all transmitted data - regardless of content, type, or structure - into suitably sized blocks, called packets. Packet switching features delivery of variable-bit-rate data streams (sequences of packets) over a shared network. When traversing network adapters, switches, routers and other network nodes, packets are buffered and queued, resulting in variable delay and throughput depending on the traffic load in the network. Packet switching contrasts with another principal networking paradigm, circuit switching, a method which sets up a limited number of dedicated connections of constant bit rate and constant delay between nodes for exclusive use during the communication session. In the case of traffic fees, for example in cellular communication, circuit switching is characterized by a fee per time unit of connection time, even when no data is transferred, while packet switching is characterized by a fee per unit of information.
Cell switching is similar to packet switching, except that the switching does not necessarily occur on packet boundaries. This is ideal for an integrated environment and is found within cell-based networks, such as ATM. Cell switching can handle both digital voice and data signals. Cell switching works similarly to packet switching. The differences between the two are the following:
* All information - data, voice, video - is transported from the origin node to the end node in small, constant-size packets of 53 octets, called cells (in traditional packet switching the packet size is variable).
* Only lightweight protocols are used, in order to allow the nodes to switch quickly. As a drawback, the protocols are less efficient.
* Signaling is completely separated from the information flow, in contrast to packet switching, in which information and signaling are mixed.
* Arbitrary-bit-rate traffic flows can be integrated in the same network.
Q.5.d) ans= book no. 2, page no. 40.
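To make the fixed-cell idea concrete, here is a sketch that chops a variable-length payload into 48-octet chunks, each carried in a 53-octet cell behind a 5-octet header. The header bytes here are placeholders, not a real ATM header:

```python
CELL_SIZE = 53                           # octets per ATM cell
HEADER_SIZE = 5                          # octets of header per cell
PAYLOAD_SIZE = CELL_SIZE - HEADER_SIZE   # 48 octets of payload per cell

def to_cells(data):
    """Segment a variable-length payload into fixed-size 53-octet cells."""
    cells = []
    for i in range(0, len(data), PAYLOAD_SIZE):
        chunk = data[i:i + PAYLOAD_SIZE]
        chunk = chunk.ljust(PAYLOAD_SIZE, b"\x00")    # pad the final cell
        cells.append(b"\x00" * HEADER_SIZE + chunk)   # placeholder header
    return cells

cells = to_cells(b"x" * 100)      # 100 octets -> 3 cells (48 + 48 + 4 padded)
print(len(cells), len(cells[0]))  # 3 53
```

Because every cell is exactly the same size, a switch can forward cells with very simple, fast logic, which is the point of cell switching.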