Computer Network


GEC Group of Colleges

Dept. of Computer Sc. & Engg. and IT

Teaching Notes

CS-604

Computer Networks

Prepared by

Ms. Shubhi Goyal


Computer Networking

Unit –I

Computer Network: Definitions, goals, components, Architecture, Classifications & Types.

Layered Architecture: Protocol hierarchy, Design Issues, Interfaces and Services, Connection Oriented & Connectionless Services, Service primitives, Design issues & its functionality. ISO-OSI Reference Model: Principle, Model, Descriptions of various layers and its comparison with TCP/IP. Network standardization. Queueing Models: Little's Theorem, Queueing Systems: M/M/1, M/M/m, M/M/∞, M/M/m/m, M/G/1.

Unit-II

Data Link Layer: Need, Services Provided, Framing, Flow Control, Error Control. Data Link Layer Protocols: Elementary & Sliding Window protocols: 1-bit, Go-Back-N, Selective Repeat, Hybrid ARQ. Bit-oriented protocols: SDLC, HDLC, BISYNC, LAP and LAPB. Protocol verification: Finite State Machine Models & Petri net models.

Unit-III

MAC Sublayer: MAC Addressing, Binary Exponential Back-off (BEB) Algorithm, Distributed Random Access Schemes/Contention Schemes: for Data Services (ALOHA and Slotted-ALOHA), for Local-Area Networks (CSMA, CSMA/CD, CSMA/CA). Collision-Free Protocols: Basic Bit Map, BRAP, Binary Count Down, MLMA. Limited Contention Protocols: Adaptive Tree Walk, URN Protocol. High-Speed LANs: Fast Ethernet, Gigabit Ethernet, FDDI, Performance Measuring Metrics. IEEE 802 series standards & their variants.

Unit-IV

Network Layer: Need, Services Provided, Design issues, Routing algorithms: Least-Cost Routing algorithm, Dijkstra's algorithm, Bellman-Ford algorithm, Hierarchical Routing, Broadcast Routing, Multicast Routing. Congestion Control Algorithms: General Principles of Congestion Control, Prevention Policies, Congestion Control in Virtual-Circuit Subnets, Congestion Control in Datagram Subnets. IP protocol, IP Addresses, Comparative study of IPv4 & IPv6, Mobile IP.

Unit-V

Transport Layer: Design Issues, UDP: Header Format, Per-Segment Checksum, Carrying Unicast/Multicast Real-Time Traffic. TCP: Connection Management, Reliability of Data Transfers, TCP Flow Control, TCP Congestion Control, TCP Header Format, TCP Timer Management. Session Layer: Authentication, Authorisation, Session layer protocols (PAP, SCP, H.245). Presentation Layer: Data conversion, Character code translation, Compression, Encryption and Decryption, Presentation layer protocols (LPP, Telnet, X.25 Packet Assembler/Disassembler). Application Layer: WWW and HTTP, FTP, SSH, Email (SMTP, MIME, IMAP), DNS, Network Management (SNMP).


INDEX

S.NO NAME OF TOPIC

1. Introduction to Computer Networks: Definitions, goals, components

2. Architecture, Classifications & Types

3. Layered Architecture: Protocol hierarchy, Design Issues

4. Interfaces and Services

5. Connection Oriented & Connectionless Services

6. Service primitives

7. ISO-OSI Reference Model: Principle, Model, Descriptions of various layers

8. Comparison with TCP/IP

9. Network standardization

10. Queueing Models: Little's Theorem, Queueing Systems: M/M/1, M/M/m, M/M/∞, M/M/m/m, M/G/1


LECTURE NOTES-COMPUTER NETWORK(CS-604)

SUBMITTED BY: SHUBHI GOYAL

UNIT-1

INTRODUCTION TO COMPUTER NETWORKS

A computer network or data network is a telecommunications network which allows computers to exchange data. In computer networks, networked computing devices pass data to each other along data connections (network links). Data is transferred in the form of packets. The connections between nodes are established using either cable media or wireless media. The best-known computer network is the Internet.

Network computer devices that originate, route and terminate the data are called network nodes. Nodes can include hosts such as personal computers, phones, and servers, as well as networking hardware. Two such devices are said to be networked together when one device is able to exchange information with the other device, whether or not they have a direct connection to each other.

Computer networks differ in the transmission media used to carry their signals, the communications protocols to organize network traffic, the network's size, topology and organizational intent. In most cases, communications protocols are layered on (i.e. work using) other more specific or more general communications protocols, except for the physical layer that directly deals with the transmission media.

Computer networks support applications such as access to the World Wide Web, shared use of application and storage servers, printers, and fax machines, and use of email and instant messaging applications.

GOALS

Resource and load sharing

• Programs do not need to run on a single machine

Reduced cost

• Several machines can share printers, tape drives, etc.

High reliability

• If a machine goes down, another can take over

Mail and communication

COMPONENTS

Computer networks share common devices, functions, and features including servers, clients, transmission media, shared data, shared printers and other hardware and software resources, the network interface card (NIC), the local operating system (LOS), and the network operating system (NOS).

Servers - Servers are computers that hold shared files, programs, and the network operating system. Servers provide access to network resources to all the users of the network. There are many different kinds of servers, and one server can provide several functions. For example, there are file servers, print servers, mail servers, communication servers, database servers, fax servers and web servers, to name a few.

Clients - Clients are computers that access and use the network and shared network resources. Client computers are basically the customers (users) of the network, as they request and receive services from the servers.

Transmission Media - Transmission media are the facilities used to interconnect computers in a network, such as twisted-pair wire, coaxial cable, and optical fiber cable. Transmission media are sometimes called channels, links or lines.

Shared data - Shared data are data that file servers provide to clients such as data files, printer access programs and e-mail.

Shared printers and other peripherals - Shared printers and peripherals are hardware resources provided to the users of the network by servers. Resources provided include data files, printers, software, or any other items used by clients on the network.

Network Interface Card - Each computer in a network has a special expansion card called a network interface card (NIC). The NIC prepares (formats) and sends data, receives data, and controls data flow between the computer and the network.

On the transmit side, the NIC passes frames of data on to the physical layer, which transmits the data to the physical link. On the receiver's side, the NIC processes bits received from the physical layer and processes the message based on its contents.

Local Operating System - A local operating system allows personal computers to access files, print to a local printer, and use one or more disk and CD drives located on the computer. Examples are MS-DOS, Unix, Linux, Windows 2000, Windows 98, Windows XP etc.


Network Operating System - The network operating system is a program that runs on computers and servers, and allows the computers to communicate over the network.

Hub - A hub is a device that splits a network connection into multiple computers. It is like a distribution center. When a computer requests information from a network or a specific computer, it sends the request to the hub through a cable. The hub receives the request and transmits it to the entire network. Each computer in the network must then figure out whether the broadcast data is for it or not.

Switch - A switch is a telecommunication device grouped as one of the computer network components. A switch is like a hub but built with advanced features. It uses the physical device address in each incoming message to deliver the message to the right destination or port.

Unlike a hub, a switch doesn't broadcast the received message to the entire network; before sending it, the switch checks to which system or port the message should be sent. In other words, a switch connects the source and destination directly, which increases the speed of the network. Both switch and hub have common features: multiple RJ-45 ports, a power supply and connection lights.
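To make the contrast concrete, the sketch below models a hub that repeats every frame and a learning switch that remembers which port each device address was seen on. The class and method names are invented for illustration; this is not a real networking API.

```python
# Minimal sketch of hub vs. switch forwarding (hypothetical names).

class Hub:
    def __init__(self, num_ports):
        self.num_ports = num_ports

    def forward(self, in_port, frame):
        # A hub repeats the frame out of every port except the one it came in on.
        return [p for p in range(self.num_ports) if p != in_port]

class LearningSwitch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # device (MAC) address -> port

    def forward(self, in_port, frame):
        self.mac_table[frame["src"]] = in_port      # learn where the sender lives
        if frame["dst"] in self.mac_table:          # known destination: one port only
            return [self.mac_table[frame["dst"]]]
        # Unknown destination: flood like a hub until the address is learned.
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(4)
print(sw.forward(0, {"src": "AA", "dst": "BB"}))   # floods: [1, 2, 3]
print(sw.forward(1, {"src": "BB", "dst": "AA"}))   # learned: [0]
```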


TYPES OF COMPUTER NETWORK

Depending upon the geographical area covered by a network, it is classified as:

–Local Area Network (LAN)

–Metropolitan Area Network (MAN)

–Wide Area Network (WAN)

–Personal Area Network (PAN)

LAN

•A LAN is a network that is used for communicating among computer devices, usually within an office building or home.

•LANs enable the sharing of resources such as files or hardware devices that may be needed by multiple users.

•Is limited in size, typically spanning a few hundred meters, and no more than a mile.

•Is fast, with speeds from 10 Mbps to 10 Gbps.

•Requires little wiring, typically a single cable connecting to each device.

•Has lower cost compared to MANs or WANs.

•LANs can be either wired or wireless. Twisted pair, coax or fibre optic cable can be used in wired LANs.

•Every LAN uses a protocol: a set of rules that governs how packets are configured and transmitted.

A local area network, or LAN, consists of a computer network at a single site, typically an individual office building. A LAN is very useful for sharing resources, such as data storage and printers. LANs can be built with relatively inexpensive hardware, such as hubs, network adapters and Ethernet cables.

The smallest LAN may only use two computers, while larger LANs can accommodate thousands of computers. A LAN typically relies mostly on wired connections for increased speed and security, but wireless connections can also be part of a LAN. High speed and relatively low cost are the defining characteristics of LANs.

LANs are typically used for single sites where people need to share resources among themselves but not with the rest of the outside world. Think of an office building where everybody should be able to access files on a central server or be able to print a document to one or more central printers. Those tasks should be easy for everybody working in the same office, but you would not want somebody just walking outside to be able to send a document to the printer from their cell phone! If a local area network, or LAN, is entirely wireless, it is referred to as a wireless local area network, or WLAN.


MAN

•A metropolitan area network (MAN) is a large computer network that usually spans a city or a large campus.

•A MAN is optimized for a larger geographical area than a LAN, ranging from several blocks of buildings to entire cities.

•A MAN might be owned and operated by a single organization, but it usually will be used by many individuals and organizations.

•A MAN often acts as a high-speed network to allow sharing of regional resources.

•A MAN typically covers an area of between 5 and 50 km in diameter.

•Examples of MANs: a telephone company network that provides high-speed DSL to customers, and a cable TV network.

A metropolitan area network, or MAN, consists of a computer network across an entire city, college campus or small region. A MAN is larger than a LAN, which is typically limited to a single building or site. Depending on the configuration, this type of network can cover an area from several miles to tens of miles. A MAN is often used to connect several LANs together to form a bigger network. When this type of network is specifically designed for a college campus, it is sometimes referred to as a campus area network, or CAN.


WAN

A WAN covers a large geographic area such as a country, a continent or even the whole world.

•A WAN is two or more LANs connected together. The LANs can be many miles apart.

•To cover great distances, WANs may transmit data over leased high-speed phone lines or wireless links such as satellites. Multiple LANs can be connected together using devices such as bridges, routers, or gateways, which enable them to share data.

•The world's most popular WAN is the Internet.

A wide area network, or WAN, occupies a very large area, such as an entire country or the entire world. A WAN can contain multiple smaller networks, such as LANs or MANs. The Internet is the best-known example of a public WAN.


PAN

One of the benefits of networks like PAN and LAN is that they can be kept entirely private by restricting some communications to the connections within the network.

This means that those communications never go over the Internet.

For example, using a LAN, an employee is able to establish a fast and secure connection to a company database without encryption since none of the communications between the employee's computer and the database on the server leave the LAN. But what happens if the same employee wants to use the database from a remote location? What you need is a private network.

One approach to a private network is to build an enterprise private network, or EPN. An EPN is a computer network that is entirely controlled by one organization, and it is used to connect multiple locations. Historically, telecommunications companies, like AT&T, operated their own network, separate from the public Internet. EPNs are still fairly common in certain sectors where security is of the highest concern. For example, a number of health facilities may establish their own network between multiple sites to have full control over the confidentiality of patient records.


LAYERED ARCHITECTURE

Network architectures define the standards and techniques for designing and building communication systems for computers and other devices. In the past, vendors developed their own architectures and required that other vendors conform to this architecture if they wanted to develop compatible hardware and software. There are proprietary network architectures such as IBM's SNA (Systems Network Architecture), and there are open architectures like the OSI (Open Systems Interconnection) model defined by the International Organization for Standardization. The previous strategy, where the computer network is designed with the hardware as the main concern and software as an afterthought, no longer works. Network software is now highly structured. To reduce the design complexity, most networks are organized as a series of layers or levels, each one built upon the one below it.

The basic idea of a layered architecture is to divide the design into small pieces. Each layer adds to the services provided by the lower layers in such a manner that the highest layer is provided a full set of services to manage communications and run the applications. The benefits of the layered models are modularity and clear interfaces, i.e. open architecture and compatibility between the different providers' components.

A basic principle is to ensure independence of layers by defining services provided by each layer to the next higher layer without defining how the services are to be performed. This permits changes in a layer without affecting other layers.

Prior to the use of layered protocol architectures, simple changes such as adding one terminal type to the list of those supported by an architecture often required changes to essentially all communications software at a site. The number of layers, functions and contents of each layer differ from network to network. However in all networks, the purpose of each layer is to offer certain services to higher layers, shielding those layers from the details of how the services are actually implemented.

The basic elements of a layered model are services, protocols and interfaces.

A service is a set of actions that a layer offers to another (higher) layer. A protocol is a set of rules that a layer uses to exchange information with a peer entity. These rules concern both the contents and the order of the messages used. Between the layers, service interfaces are defined. The messages from one layer to another are sent through those interfaces. In an n-layer architecture, layer n on one machine carries on a conversation with layer n on another machine. The rules and conventions used in this conversation are collectively known as the layer-n protocol. Basically, a protocol is an agreement between the communicating parties on how communication is to proceed. Violating the protocol will make communication more difficult, if not impossible.

A five-layer architecture is shown in Fig. 1.2.1; the entities comprising the corresponding layers on different machines are called peers. In other words, it is the peers that communicate using protocols. In reality, no data is transferred from layer n on one machine to layer n of another machine. Instead, each layer passes data and control information to the layer immediately below it, until the lowest layer is reached. Below layer 1 is the physical medium through which actual communication occurs. The peer process abstraction is crucial to all network design. Using it, the unmanageable task of designing the complete network can be broken into several smaller, manageable design problems, namely the design of individual layers.

Between each pair of adjacent layers there is an interface. The interface defines which primitive operations and services the lower layer offers to the upper layer adjacent to it.


When a network designer decides how many layers to include in the network and what each layer should do, one of the main considerations is defining clean interfaces between adjacent layers. Doing so, in turn, requires that each layer perform a well-defined set of functions. In addition to minimizing the amount of information passed between layers, a clean-cut interface also makes it simpler to replace the implementation of one layer with a completely different implementation, because all that is required of the new implementation is that it offer the same set of services to its upstairs neighbour as the old implementation did (that is, what a layer provides and how to use that service matters more than knowing exactly how it is implemented).

A set of layers and protocols is known as a network architecture. The specification of an architecture must contain enough information to allow an implementer to write the program or build the hardware for each layer so that it will correctly obey the appropriate protocol. Neither the details of the implementation nor the specification of the interfaces is part of the network architecture, because these are hidden away inside the machines and not visible from outside. It is not even necessary that the interfaces on all machines in a network be the same, provided that each machine can correctly use all the protocols. A list of the protocols used by a certain system, one protocol per layer, is called a protocol stack.

PROTOCOL HIERARCHIES

To reduce their design complexity, most networks are organized as a stack of layers or levels, each one built upon the one below it. The number of layers, the name of each layer, the contents of each layer, and the function of each layer differ from network to network. The purpose of each layer is to offer certain services to the higher layers while shielding those layers from the details of how the services are actually implemented. In a sense, each layer is a kind of virtual machine, offering certain services to the layer above it. This concept is actually a familiar one and is used throughout computer science, where it is variously known as information hiding, abstract data types, data encapsulation, and object-oriented programming. The fundamental idea is that a particular piece of software (or hardware) provides a service to its users but keeps the details of its internal state and algorithms hidden from them. When layer n on one machine carries on a conversation with layer n on another machine, the rules and conventions used in this conversation are collectively known as the layer n protocol. Basically, a protocol is an agreement between the communicating parties on how communication is to proceed. As an analogy, when a woman is introduced to a man, she may choose to stick out her hand. He, in turn, may decide to either shake it or kiss it, depending, for example, on whether she is an American lawyer at a business meeting or a European princess at a formal ball. Violating the protocol will make communication more difficult, if not completely impossible.


The peers may be software processes, hardware devices, or even human beings. In other words, it is the peers that communicate by using the protocol to talk to each other.

DESIGN ISSUES OF LAYERED PROTOCOL

Some of the key design issues that occur in computer networks will come up in layer after layer. Below, we will briefly mention the more important ones.

Reliability is the design issue of making a network that operates correctly even though it is made up of a collection of components that are themselves unreliable.

Think about the bits of a packet traveling through the network. There is a chance that some of these bits will be received damaged (inverted) due to fluke electrical noise, random wireless signals, hardware flaws, software bugs and so on. How is it possible that we find and fix these errors? One mechanism for finding errors in received information uses codes for error detection. Information that is incorrectly received can then be retransmitted until it is received correctly. More powerful codes allow for error correction, where the correct message is recovered from the possibly incorrect bits that were originally received. Both of these mechanisms work by adding redundant information. They are used at low layers, to protect packets sent over individual links, and at high layers, to check that the right contents were received.
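As a toy illustration of such an error-detecting code, the sketch below appends a single even-parity bit to a block of data bits. Real links use stronger codes such as CRCs; the parity bit is only meant to show the redundancy principle.

```python
# Even parity: add one redundant bit so the total number of 1s is even.

def add_parity(bits):
    return bits + [sum(bits) % 2]

def check_parity(codeword):
    # True means no error detected. An odd number of flipped bits makes the
    # count of 1s odd and is caught; an even number of flips escapes detection.
    return sum(codeword) % 2 == 0

word = add_parity([1, 0, 1, 1])   # -> [1, 0, 1, 1, 1]
print(check_parity(word))         # True: received correctly
word[2] ^= 1                      # simulate a single-bit channel error
print(check_parity(word))         # False: error detected, so retransmit
```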

Another reliability issue is finding a working path through a network. Often there are multiple paths between a source and destination, and in a large network, there may be some links or routers that are broken. Suppose that the network is down in Germany. Packets sent from London to Rome via Germany will not get through, but we could instead send packets from London to Rome via Paris. The network should automatically make this decision. This topic is called routing.

A second design issue concerns the evolution of the network. Over time, networks grow larger and new designs emerge that need to be connected to the existing network. We have recently seen the key structuring mechanism used to support change by dividing the overall problem and hiding implementation details: protocol layering. There are many other strategies as well. Since there are many computers on the network, every layer needs a mechanism for identifying the senders and receivers that are involved in a particular message. This mechanism is called addressing or naming, in the low and high layers, respectively. An aspect of growth is that different network technologies often have different limitations. For example, not all communication channels preserve the order of messages sent on them, leading to solutions that number messages.


Another example is differences in the maximum size of a message that the networks can transmit. This leads to mechanisms for disassembling, transmitting, and then reassembling messages. This overall topic is called internetworking. When networks get large, new problems arise. Cities can have traffic jams, a shortage of telephone numbers, and it is easy to get lost. Not many people have these problems in their own neighborhood, but citywide they may be a big issue. Designs that continue to work well when the network gets large are said to be scalable.

A third design issue is resource allocation. Networks provide a service to hosts from their underlying resources, such as the capacity of transmission lines. To do this well, they need mechanisms that divide their resources so that one host does not interfere with another too much. Many designs share network bandwidth dynamically, according to the short-term needs of hosts, rather than by giving each host a fixed fraction of the bandwidth that it may or may not use. This design is called statistical multiplexing, meaning sharing based on the statistics of demand. It can be applied at low layers for a single link, or at high layers for a network or even applications that use the network.

An allocation problem that occurs at every level is how to keep a fast sender from swamping a slow receiver with data. Feedback from the receiver to the sender is often used. This subject is called flow control. Sometimes the problem is that the network is oversubscribed because too many computers want to send too much traffic, and the network cannot deliver it all. This overloading of the network is called congestion.


INTERFACES AND SERVICES

The function of each layer in a computer networking architecture is to provide services to the layer above it. The active elements in each layer are often called entities. An entity can be a software entity (such as a process) or a hardware entity (such as an intelligent I/O chip). Entities in the same layer on different machines are called peer entities.

The entities in layer n implement a service used by layer n+1. In this case layer n is called the service provider and layer n+1 is called the service user. Layer n may use the services of layer n-1 in order to provide its service. It may offer several classes of service: for example, fast, expensive communication and slow, cheap communication. Services are available at SAPs (Service Access Points). The layer n SAPs are the places where layer n+1 can access the services offered. Each SAP has an address that uniquely identifies it.

In order for two layers to exchange information, there has to be an agreed-upon set of rules about the interface. At a typical interface, the layer n+1 entity passes an IDU (Interface Data Unit) to the layer n entity through the SAP. The IDU consists of an SDU (Service Data Unit) and some control information. The SDU is the information passed across the network to the peer entity and then up to layer n+1. The control information is needed to help the lower layer do its job (e.g., the number of bytes in the SDU) but is not part of the data itself.

In order to transfer the SDU, the layer n entity may have to fragment it into several pieces, each of which is given a header and sent as a separate PDU (Protocol Data Unit) such as a packet. The PDU headers are used by the peer entities to carry out their peer protocol. They identify which PDUs contain data and which contain control information, provide sequence numbers and counts, and so on.
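The fragmentation step can be sketched as follows. The PDU header used here, just a sequence number and a last-fragment flag, is invented for illustration and does not correspond to any particular protocol.

```python
# Sketch: fragment an SDU into PDUs and reassemble it (hypothetical header).

MAX_PAYLOAD = 4  # bytes of SDU carried per PDU, deliberately tiny

def fragment(sdu: bytes):
    chunks = [sdu[i:i + MAX_PAYLOAD] for i in range(0, len(sdu), MAX_PAYLOAD)]
    # Each PDU = (sequence number, last-fragment flag, payload).
    return [(seq, seq == len(chunks) - 1, chunk)
            for seq, chunk in enumerate(chunks)]

def reassemble(pdus):
    # The peer entity uses the sequence numbers in the PDU headers
    # to put the pieces back in order.
    return b"".join(chunk for _, _, chunk in sorted(pdus))

pdus = fragment(b"SERVICE DATA UNIT")
print(reassemble(pdus))   # b'SERVICE DATA UNIT'
```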


CONNECTION ORIENTED AND CONNECTIONLESS SERVICES

Connection-oriented communication is a network communication mode in telecommunications and computer networking, where a communication session or a semi-permanent connection is established before any useful data can be transferred, and where a stream of data is delivered in the same order as it was sent. The alternative to connection-oriented transmission is connectionless communication, for example the datagram mode communication used by the IP and UDP protocols, where data may be delivered out of order, since different packets are routed independently, and may be delivered over different paths.

Connection-oriented communication may be a circuit switched connection, or a packet-mode virtual circuit connection.

In the latter case, it may use either a transport layer virtual circuit protocol such as the TCP protocol, allowing data to be delivered in order although the lower layer switching is connectionless, or it may be a data link layer or network layer switching mode, where all data packets belonging to the same traffic stream are delivered over the same path, and traffic flows are identified by some connection identifier rather than by complete routing information, allowing fast hardware based switching.

Connection-oriented protocol services are often, but not always, reliable network services that provide acknowledgment after successful delivery, and automatic repeat request functions in case of missing data or detected bit-errors. ATM, Frame Relay and MPLS are examples of connection-oriented, unreliable protocols.

Connectionless communication is a data transmission method used in packet switching networks by which each data unit is individually addressed and routed based on information carried in each unit, rather than in the setup information of a prearranged, fixed data channel as in connection-oriented communication.

Connectionless communication is often referred to as CL-mode communication.

Under connectionless communication between two network end points, a message can be sent from one end point to another without prior arrangement. The device at one end of the communication transmits data addressed to the other, without first ensuring that the recipient is available and ready to receive the data.

Some protocols allow for error correction by requested retransmission. Internet Protocol (IP) and User Datagram Protocol (UDP) are connectionless protocols. A packet transmitted in a connectionless mode is frequently called a datagram. In connectionless mode, the end points have no protocol-defined way to remember where they are in a "conversation" of message exchanges. In connection-oriented communication, the communicating peers must first establish a logical or physical data channel or connection in a dialog preceding the exchange of user data.

Connectionless communication has the advantage over connection-oriented communication in that it has low overhead. It also allows for multicast and broadcast operations, in which the same data is transmitted to several recipients in a single transmission. In connectionless transmissions the service provider usually cannot guarantee that there will be no loss, error insertion, misdelivery, duplication, or out-of-sequence delivery of the packet. However, the effect of errors may be reduced by implementing error correction within an application protocol.


SERVICE PRIMITIVES

Each protocol which communicates in a layered architecture (e.g. based on the OSI Reference Model) communicates in a peer-to-peer manner with its remote protocol entity. Communication between adjacent protocol layers (i.e. within the same communications node) is managed by calling functions, called primitives, between the layers. There are various types of actions that may be performed by primitives. Examples of primitives include: Connect, Data, Flow Control, and Disconnect.

Primitives for communications between peer protocol entities

Each primitive specifies the action to be performed or advises the result of a previously requested action. A primitive may also carry the parameters needed to perform its functions. One parameter is the packet to be sent/received to the layer above/below (or, more accurately, a pointer to data structures containing a packet, often called a "buffer").

There are four basic types of primitive used for communicating data:

1. Request: A primitive sent by layer (N + 1) to layer N to request a service. It invokes the service and passes any required parameters.

2. Indication: A primitive returned to layer (N + 1) from layer N to advise of activation of a requested service or of an action initiated by the layer N service.

3. Response: A primitive provided by layer (N + 1) in reply to an indication primitive. It may acknowledge or complete an action previously invoked by an indication primitive.

4. Confirm: A primitive returned to the requesting (N + 1)st layer by the Nth layer to acknowledge or complete an action previously invoked by a request primitive.
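The sketch below shows how the four primitive types chain together for a hypothetical CONNECT service. The class structure is our own illustration, not the interface of a real protocol stack.

```python
# Sketch of the four primitive types for a CONNECT service.

class ServiceProvider:
    """Layer N, joining two layer N+1 users on different machines."""
    def __init__(self, callee):
        self.callee = callee  # the remote layer N+1 user

    def connect_request(self, caller_name):
        # 1. Request: layer N+1 on machine A invokes the service.
        accepted = self.callee.connect_indication(caller_name)
        # 3. Response: the callee's answer travels back through layer N.
        # 4. Confirm: layer N completes the original request.
        return accepted

class User:
    """A layer N+1 entity."""
    def connect_indication(self, caller_name):
        # 2. Indication: layer N advises this user of the incoming request.
        print(f"incoming connection from {caller_name}")
        return True  # this value becomes the response, then the confirm

provider = ServiceProvider(User())
print(provider.connect_request("host-A"))   # True: connection confirmed
```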


ISO-OSI MODEL

The Open Systems Interconnection model (OSI) is a conceptual model that characterizes and standardizes the internal functions of a communication system by partitioning it into abstraction layers. The model is a product of the Open Systems Interconnection project at the International Organization for Standardization (ISO), maintained by the identification ISO/IEC 7498-1.

The model groups communication functions into seven logical layers. A layer serves the layer above it and is served by the layer below it. For example, a layer that provides error-free communications across a network provides the path needed by applications above it, while it calls the next lower layer to send and receive packets that make up the contents of that path. Two instances at one layer are connected by a horizontal connection on that layer.

Description of OSI layers

The recommendation X.200 describes seven layers, labeled 1 to 7. Layer 1 is the lowest layer in this model.

OSI Model

Host layers:

7. Application (data unit: Data). Function: high-level APIs, including resource sharing, remote file access, directory services and virtual terminals. Examples: HTTP, FTP, SMTP.

6. Presentation (data unit: Data). Function: translation of data between a networking service and an application, including character encoding, data compression and encryption/decryption. Examples: ASCII, EBCDIC, JPEG.

5. Session (data unit: Data). Function: managing communication sessions, i.e. continuous exchange of information in the form of multiple back-and-forth transmissions between two nodes. Examples: RPC, PAP.

4. Transport (data unit: Segment). Function: reliable transmission of data segments between points on a network, including segmentation, acknowledgement and multiplexing. Examples: TCP, UDP.

Media layers:

3. Network (data unit: Packet/Datagram). Function: structuring and managing a multi-node network, including addressing, routing and traffic control. Examples: IPv4, IPv6, IPsec, AppleTalk.

2. Data link (data unit: Bit/Frame). Function: reliable transmission of data frames between two nodes connected by a physical layer. Examples: PPP, IEEE 802.2, L2TP.

1. Physical (data unit: Bit). Function: transmission and reception of raw bit streams over a physical medium. Examples: DSL, USB.

At each level N, two entities at the communicating devices (layer N peers) exchange protocol data units (PDUs) by means of a layer N protocol. Each PDU contains a payload, called the service data unit (SDU), along with protocol-related headers and/or footers.

Data processing by two communicating OSI-compatible devices is done as follows:

1. The data to be transmitted is composed at the topmost layer of the transmitting device (layer N) into a protocol data unit (PDU).

2. The PDU is passed to layer N-1, where it is known as the service data unit (SDU).

3. At layer N-1 the SDU is concatenated with a header, a footer, or both, producing a layer N-1 PDU. It is then passed to layer N-2.

4. The process continues until reaching the lowermost level, from which the data is transmitted to the receiving device.

5. At the receiving device the data is passed from the lowest to the highest layer as a series of SDUs, while being successively stripped of each layer's header and/or footer, until reaching the topmost layer, where the last of the data is consumed.
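The wrapping and stripping in steps 1 to 5 can be sketched in a few lines. The layer names and header strings below are placeholders, not real protocol headers.

```python
# Sketch of OSI-style encapsulation and decapsulation with placeholder headers.

LAYERS = ["transport", "network", "data-link"]  # topmost to lowermost

def encapsulate(data: str) -> str:
    # Each layer treats what it receives as an SDU and prepends its own
    # header, producing that layer's PDU (step 3 above).
    for layer in LAYERS:
        data = f"[{layer}-hdr]{data}"
    return data

def decapsulate(frame: str) -> str:
    # The receiver strips the headers in reverse order (step 5 above).
    for layer in reversed(LAYERS):
        prefix = f"[{layer}-hdr]"
        assert frame.startswith(prefix), "malformed frame"
        frame = frame[len(prefix):]
    return frame

wire = encapsulate("user data")
print(wire)                # [data-link-hdr][network-hdr][transport-hdr]user data
print(decapsulate(wire))   # user data
```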

Layer 1: physical layer

The physical layer has the following major functions:

• It defines the electrical and physical specifications of the data connection. It defines the relationship between a device and a physical transmission medium (e.g., a copper or fiber optical cable). This includes the layout of pins, voltages, line impedance, cable specifications, signal timing, hubs, repeaters, network adapters, host bus adapters (HBAs, used in storage area networks) and more.

• It defines the protocol to establish and terminate a connection between two directly connected nodes over a communications medium.

• It may define the protocol for flow control.

• It defines the transmission mode, i.e. simplex, half duplex, full duplex.

• It defines topology.

• It defines a protocol for the provision of a (not necessarily reliable) connection between two directly connected nodes, and the modulation or conversion between the representation of digital data in user equipment and the corresponding signals transmitted over the physical communications channel. This channel can involve physical cabling (such as copper and optical fiber) or a wireless radio link.

The physical layer of Parallel SCSI operates in this layer, as do the physical layers of Ethernet and other local-area networks, such as Token Ring, FDDI, ITU-T G.hn, and IEEE 802.11 (Wi-Fi), as well as personal area networks such as Bluetooth and IEEE 802.15.4.

Layer 2: data link layer

The data link layer provides node-to-node data transfer -- a reliable link between two directly connected nodes, by detecting and possibly correcting errors that may occur in the physical layer. The data link layer is divided into two sublayers:


• Media Access Control (MAC) layer - responsible for controlling how devices in a network gain access to a medium and permission to transmit data.

• Logical Link Control (LLC) layer - controls error checking and packet synchronization.

The Point-to-Point Protocol (PPP) is an example of a data link layer in the TCP/IP protocol stack.

The ITU-T G.hn standard, which provides high-speed local area networking over existing wires (power lines, phone lines and coaxial cables), includes a complete data link layer that provides both error correction and flow control by means of a selective-repeat sliding-window protocol.

Layer 3: network layer

The network layer provides the functional and procedural means of transferring variable-length data sequences (called datagrams) from one node to another connected to the same network.

It translates logical network addresses into physical machine addresses. A network is a medium to which many nodes can be connected, on which every node has an address and which permits nodes connected to it to transfer messages to other nodes connected to it by merely providing the content of a message and the address of the destination node and letting the network find the way to deliver ("route") the message to the destination node. In addition to message routing, the network may (or may not) implement message delivery by splitting the message into several fragments, delivering each fragment by a separate route and reassembling the fragments, reporting delivery errors, etc.

Datagram delivery at the network layer is not guaranteed to be reliable.

A number of layer-management protocols, a function defined in the management annex, ISO 7498/4, belong to the network layer. These include routing protocols, multicast group management, network-layer information and error, and network-layer address assignment. It is the function of the payload that makes these belong to the network layer, not the protocol that carries them.

Layer 4: transport layer

The transport layer provides the functional and procedural means of transferring variable-length data sequences from a source to a destination host via one or more networks, while maintaining the quality of service functions.

An example of a transport-layer protocol in the standard Internet stack is Transmission Control Protocol (TCP), usually built on top of the Internet Protocol (IP).

The transport layer controls the reliability of a given link through flow control, segmentation/desegmentation, and error control. Some protocols are state- and connection-oriented. This means that the transport layer can keep track of the segments and retransmit those that fail. The transport layer also provides the acknowledgement of successful data transmission and sends the next data if no errors occurred. The transport layer creates packets out of the message received from the application layer. Packetizing is the process of dividing a long message into smaller messages.

OSI defines five classes of connection-mode transport protocols, ranging from class 0 (which is also known as TP0 and provides the fewest features) to class 4 (TP4, designed for less reliable networks, similar to the Internet). Class 0 contains no error recovery, and was designed for use on network layers that provide error-free connections. Class 4 is closest to TCP, although TCP contains functions, such as the graceful close, which OSI assigns to the session layer. Also, all OSI TP connection-mode protocol classes provide expedited data and preservation of record boundaries.

An easy way to visualize the transport layer is to compare it with a post office, which deals with the dispatch and classification of mail and parcels sent. Do remember, however, that a post office manages the outer envelope of mail. Higher layers may have the equivalent of double envelopes, such as cryptographic presentation services that can be read by the addressee only. Roughly speaking, tunneling protocols operate at the transport layer, such as carrying non-IP protocols such as IBM's SNA or Novell's IPX over an IP network, or end-to-end encryption with IPsec. While Generic Routing Encapsulation (GRE) might seem to be a network-layer protocol, if the encapsulation of the payload takes place only at the endpoints, GRE becomes closer to a transport protocol that uses IP headers but contains complete frames or packets to deliver to an endpoint. L2TP carries PPP frames inside transport packets.

Although not developed under the OSI Reference Model and not strictly conforming to the OSI definition of the transport layer, the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) of the Internet Protocol Suite are commonly categorized as layer-4 protocols within OSI.

Layer 5: session layer

The session layer controls the dialogues (connections) between computers. It establishes, manages and terminates the connections between the local and remote application. It provides for full-duplex, half-duplex, or simplex operation, and establishes checkpointing, adjournment, termination, and restart procedures. The OSI model made this layer responsible for graceful close of sessions, which is a property of the Transmission Control Protocol, and also for session checkpointing and recovery, which is not usually used in the Internet Protocol Suite. The session layer is commonly implemented explicitly in application environments that use remote procedure calls.

Layer 6: presentation layer

The presentation layer establishes context between application-layer entities, in which the application-layer entities may use different syntax and semantics if the presentation service provides a mapping between them. If a mapping is available, presentation service data units are encapsulated into session protocol data units, and passed down the protocol stack.

This layer provides independence from data representation (e.g., encryption) by translating between application and network formats. The presentation layer transforms data into the form that the application accepts. This layer formats and encrypts data to be sent across a network. It is sometimes called the syntax layer.

The original presentation structure used the Basic Encoding Rules of Abstract Syntax Notation One (ASN.1), with capabilities such as converting an EBCDIC-coded text file to an ASCII-coded file, or serialization of objects and other data structures from and to XML.

Layer 7: application layer

The application layer is the OSI layer closest to the end user, which means both the OSI application layer and the user interact directly with the software application. This layer interacts with software applications that implement a communicating component. Such application programs fall outside the scope of the OSI model.

Application-layer functions typically include identifying communication partners, determining resource availability, and synchronizing communication. When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit.

When determining resource availability, the application layer must decide whether sufficient network resources for the requested communication exist.


Comparison with TCP/IP model

In the TCP/IP model of the Internet, protocols are deliberately not as rigidly designed into strict layers as in the OSI model. RFC 3439 contains a section entitled "Layering considered harmful". However, TCP/IP does recognize four broad layers of functionality which are derived from the operating scope of their contained protocols: the scope of the software application; the end-to-end transport connection; the internetworking range; and the scope of the direct links to other nodes on the local network.

Even though the concept is different from the OSI model, these layers are nevertheless often compared with the OSI layering scheme in the following way:

• The Internet application layer includes the OSI application layer, presentation layer, and most of the session layer.

• Its end-to-end transport layer includes the graceful close function of the OSI session layer as well as the OSI transport layer.

• The internetworking layer (Internet layer) is a subset of the OSI network layer.

• The link layer includes the OSI data link and physical layers, as well as parts of OSI's network layer.

These comparisons are based on the original seven-layer protocol model as defined in ISO 7498, rather than refinements in such things as the internal organization of the network layer document.

The presumably strict peer layering of the OSI model as it is usually described does not present contradictions in TCP/IP, as it is permissible that protocol usage does not follow the hierarchy implied in a layered model.

Such examples exist in some routing protocols (e.g., OSPF), or in the description of tunneling protocols, which provide a link layer for an application, although the tunnel host protocol might well be a transport or even an application-layer protocol in its own right.


NETWORK STANDARDIZATION

Many network vendors and suppliers exist, each with its own ideas of how things should be done. Without coordination, there would be complete chaos, and users would get nothing done.

The only way out is to agree on some network standards. Not only do good standards allow different computers to communicate, but they also increase the market for products adhering to the standards. A larger market leads to mass production, economies of scale in manufacturing, better implementations, and other benefits that decrease price and further increase acceptance. In this section we will take a quick look at the important, but little-known, world of international standardization.

But let us first discuss what belongs in a standard. A reasonable person might assume that a standard tells you how a protocol should work so that you can do a good job of implementing it. That person would be wrong. Standards define what is needed for interoperability: no more, no less. That lets the larger market emerge and also lets companies compete on the basis of how good their products are. For example, the 802.11 standard defines many transmission rates but does not say when a sender should use which rate, which is a key factor in good performance. That is up to whoever makes the product. Often getting to interoperability this way is difficult, since there are many implementation choices and standards usually define many options.

For 802.11, there were so many problems that, in a strategy that has become common practice, a trade group called the WiFi Alliance was started to work on interoperability within the 802.11 standard. Similarly, a protocol standard defines the protocol over the wire but not the service interface inside the box, except to help explain the protocol. Real service interfaces are often proprietary.

For example, the way TCP interfaces to IP within a computer does not matter for talking to a remote host. It only matters that the remote host speaks TCP/IP. In fact, TCP and IP are commonly implemented together without any distinct interface. That said, good service interfaces, like good APIs, are valuable for getting protocols used, and the best ones (such as Berkeley sockets) can become very popular.

Standards fall into two categories: de facto and de jure. De facto (Latin for "from the fact") standards are those that have just happened, without any formal plan. HTTP, the protocol on which the Web runs, started life as a de facto standard. It was part of early WWW browsers developed by Tim Berners-Lee at CERN, and its use took off with the growth of the Web. Bluetooth is another example. It was originally developed by Ericsson but now everyone is using it. De jure (Latin for "by law") standards, in contrast, are adopted through the rules of some formal standardization body. International standardization authorities are generally divided into two classes: those established by treaty among national governments, and those comprising voluntary, nontreaty organizations.

In practice, the relationships between standards, companies, and standardization bodies are complicated. De facto standards often evolve into de jure standards, especially if they are successful.

This happened in the case of HTTP, which was quickly picked up by IETF.

Standards bodies often ratify each other's standards, in what looks like patting one another on the back, to increase the market for a technology. These days, many ad hoc business alliances that are formed around particular technologies also play a significant role in developing and refining network standards.

For example, 3GPP (Third Generation Partnership Project) is a collaboration between telecommunications associations that drives the UMTS 3G mobile phone standards.


LITTLE’S THEOREM

In queueing theory, a discipline within the mathematical theory of probability, Little's result, theorem, lemma, law or formula is a theorem by John Little which states:

The long-term average number of customers in a stable system L is equal to the long-term average effective arrival rate, λ, multiplied by the (Palm-)average time a customer spends in the system, W; or expressed algebraically: L = λW.

Although it looks intuitively reasonable, it is quite a remarkable result, as the relationship is "not influenced by the arrival process distribution, the service distribution, the service order, or practically anything else."

The result applies to any system, and particularly, it applies to systems within systems. So in a bank, the customer line might be one subsystem, and each of the tellers another subsystem, and Little's result could be applied to each one, as well as the whole thing.

The only requirements are that the system is stable and non-preemptive; this rules out transition states such as initial startup or shutdown.

In some cases it is possible to mathematically relate not only the average number in the system to the average wait but relate the entire probability distribution (and moments) of the number in the system to the wait.


Finding Response Time

Imagine an application that had no easy way to measure response time. If you can find the mean number in the system and the throughput, you can use Little's Law to find the average response time like so:

MeanResponseTime = MeanNumberInSystem / MeanThroughput

For example: A queue depth meter shows an average of nine jobs waiting to be serviced. Add one for the job being serviced, so there is an average of ten jobs in the system. Another meter shows a mean throughput of 50 per second. You can calculate the mean response time as: 0.2 seconds = 10 / 50 per second. When exploring Little's Law and learning to trust it, be aware of the common mistakes of using arrivals (work arriving) when throughput (work completed) is called for, and of not keeping the units of your measurements the same.

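The calculation above is easy to package as a helper; a minimal sketch:

```python
# Little's Law, L = lambda * W, rearranged to find mean response time W.

def mean_response_time(mean_number_in_system: float,
                       mean_throughput: float) -> float:
    # Units must match: if throughput is jobs per second, the result is seconds.
    return mean_number_in_system / mean_throughput

# The example from the text: 9 queued + 1 in service = 10 jobs in the system,
# with a throughput of 50 jobs per second.
print(mean_response_time(10, 50))   # 0.2 seconds
```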

Customers In The Store

Imagine a small store with a single counter and an area for browsing, where only one person can be at the counter at a time, and no one leaves without buying something. So the system is roughly:

Entrance → Browsing → Counter → Exit


In a stable system, the rate at which people enter the store is the rate at which they arrive at the store (called the arrival rate), and the rate at which they exit as well (called the exit rate). By contrast, an arrival rate exceeding the exit rate would represent an unstable system, where the number of waiting customers in the store will gradually increase towards infinity.

Little's Law tells us that the average number of customers in the store, L, is the effective arrival rate, λ, times the average time that a customer spends in the store, W, or simply:

L = λW

Assume customers arrive at the rate of 10 per hour and stay an average of 0.5 hour. This means we should find the average number of customers in the store at any time to be 5.

Now suppose the store is considering doing more advertising to raise the arrival rate to 20 per hour. The store must either be prepared to host an average of 10 occupants or must reduce the time each customer spends in the store to 0.25 hour. The store might achieve the latter by ringing up the bill faster or by adding more counters.

We can apply Little's Law to systems within the store. For example, the counter and its queue. Assume we notice that there are on average 2 customers in the queue and at the counter. We know the arrival rate is 10 per hour, so customers must be spending 0.2 hours on average checking out.

We can even apply Little's Law to the counter itself. The average number of people at the counter would be in the range (0, 1) since no more than one person can be at the counter at a time. In that case, the average number of people at the counter is also known as the utilisation of the counter.

However, because a store in reality generally has a limited amount of space, it cannot become unstable. Even if the arrival rate is much greater than the exit rate, the store will eventually start to overflow, and thus any new arriving customers will simply be rejected (and forced to go somewhere else or try again later) until there is once again free space available in the store. This is also the difference between the arrival rate and the effective arrival rate, where the arrival rate roughly corresponds to the rate at which customers arrive at the store, whereas the effective arrival rate corresponds to the rate at which customers enter the store. However, in a system with infinite size and no loss, the two are equal.


QUEUEING SYSTEM

Analytic congestion models

A congestion system is a system in which there is a demand for resources, and when the resources become unavailable, those requesting the resources wait for them to become available. The level of congestion in such systems is usually measured by the waiting line, or queue, of resource requests (waiting-line or queuing models).

Elements of a queuing system

Arrival process

Distribution of time between arrivals of successive customers; the inter-arrival time is generally exponentially distributed or constant.

Exponentially distributed: If an arrival just occurred, or if we have been waiting 20 minutes for an arrival, the probability of an arrival in the next 5 minutes remains the same. It can be shown that the only continuous random variable that is memoryless is the exponential distribution. It can also be shown that if the time between arrivals is exponentially distributed with mean 1/λ, then the number of arrivals during a time interval t is Poisson distributed with mean λt.

Other characteristics of an arrival process include whether the customer population is finite or infinite and whether the mean time between arrivals is constant or changing over time. If the population of customers is finite, the rate of customer arrivals decreases as the number of customers in the system increases.

Service process

The service process is characterized by the distribution of the time to service an arrival and the number of servers. Models of congestion systems often assume that the service time is exponentially distributed, thereby facilitating the development of analytical models of the congestion system. For the system to be stable, the rate at which customers can be served must be greater than the rate at which customers arrive.

Traffic intensity (ρ) = rate at which customers arrive / rate at which customers can be served
= mean service time / (mean time between arrivals × number of servers)
= arrival rate / (service rate × number of servers)


Queuing discipline

The queuing discipline describes the order in which arrivals are serviced.

Common queue disciplines include FIFO, shortest service time first, and random selection for the next service. The queuing discipline also includes characteristics of the system such as maximum queue length (when the queue reaches this maximum, arrivals turn away, or balk) and customer reneging (customers waiting in line become impatient and leave the system before being served).

Classification of queuing system models

A/B/s/K/E, where A: specifies the arrival process

B: specifies the service process

s: specifies the number of servers

K: maximum number of customers allowed into the system

E: queue discipline

Symbols

M: exponentially-distributed service or arrival times

D: constant service or arrival times

Measures of performance of a congestion system:

Ls: Expected number of customers in the system

Lq: Expected number of customers in the queue

Ws: Expected time a customer is in the system, including the time for service

Wq: Expected time a customer waits for service

M/M/1 queuing model

Queuing models are most easily developed when the arrival times and service times are exponentially distributed. Although the assumption of exponentially distributed arrival and service times may seem unrealistic, this group of models has wide application and can also serve as a useful first pass in the analysis of more complex congestion systems.

Relationship between queue length Lq and traffic intensity ρ:

As the traffic intensity ρ approaches 1.0, there is a very rapid increase in the measures of congestion Lq (expected number of customers in the queue) and Wq (expected time a customer waits for service).
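The standard closed-form M/M/1 results make this blow-up easy to see. A minimal sketch (Python; the arrival and service rates are illustrative values, and the formulas are the usual single-server results ρ = λ/μ, Ls = ρ/(1−ρ), Lq = ρ²/(1−ρ), Ws = 1/(μ−λ), Wq = ρ/(μ−λ)):

def mm1_metrics(arrival_rate, service_rate):
    # Traffic intensity; the system is stable only if rho < 1.
    rho = arrival_rate / service_rate
    if rho >= 1.0:
        raise ValueError("unstable: arrival rate must be below service rate")
    Ls = rho / (1.0 - rho)                    # expected number in system
    Lq = rho * rho / (1.0 - rho)              # expected number in queue
    Ws = 1.0 / (service_rate - arrival_rate)  # expected time in system
    Wq = rho / (service_rate - arrival_rate)  # expected wait before service
    return rho, Ls, Lq, Ws, Wq

# Congestion grows explosively as rho approaches 1 (here mu = 10 per hour):
for lam in (5.0, 8.0, 9.0, 9.9):
    rho, Ls, Lq, Ws, Wq = mm1_metrics(lam, 10.0)
    print(f"rho={rho:.2f}  Lq={Lq:7.2f}  Wq={Wq:7.3f}")

Note that Little's Theorem ties these measures together: Ls = λ·Ws and Lq = λ·Wq.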


Non-Markovian queuing models

These models apply when the times between customer arrivals or the service times are not exponentially distributed.

An integral part of simulation modeling is the identification of any analytic models that may serve as a simple, initial model. The analytical model can then be used to give the analyst insight into, and estimates of, the more complex system's behavior.


INDEX

S.NO  NAME OF TOPIC                                                PAGE NO.
1     Data Link Layer: Need                                        35-36
2     Services Provided                                            37-40
3     Framing                                                      41-46
4     Flow Control                                                 47
5     Error control                                                48-49
6     Elementary Data Link Layer Protocol                          50
7     Sliding Window protocol: 1-bit, Go-Back-N, Selective Repeat  51-58
8     Hybrid ARQ                                                   59
9     Bit oriented protocols: SDLC, HDLC, BISYNC, LAP and LAPB     60-66
10    Protocol verification: Finite State Machine Models &
      Petri net models                                             67


UNIT 2 (DATA LINK LAYER)

INTRODUCTION

In the seven-layer OSI model of computer networking, the data link layer is layer 2; in the TCP/IP reference model, it is part of the link layer. The data link layer is the protocol layer that transfers data between adjacent network nodes in a wide area network or between nodes on the same local area network segment.

The data link layer provides the functional and procedural means to transfer data between network entities and might provide the means to detect and possibly correct errors that may occur in the physical layer.

The data link layer is concerned with local delivery of frames between devices on the same LAN. Data-link frames, as these protocol data units are called, do not cross the boundaries of a local network.

Inter-network routing and global addressing are higher layer functions, allowing data-link protocols to focus on local delivery, addressing, and media arbitration. In this way, the data link layer is analogous to a neighborhood traffic cop; it endeavors to arbitrate between parties contending for access to a medium, without concern for their ultimate destination.

When devices attempt to use a medium simultaneously, frame collisions occur.

Data-link protocols specify how devices detect and recover from such collisions, and may provide mechanisms to reduce or prevent them.

Delivery of frames by layer 2 devices is effected through the use of unambiguous hardware addresses.

A frame's header contains source and destination addresses that indicate which device originated the frame and which device is expected to receive and process it.

In contrast to the hierarchical and routable addresses of the network layer, layer-2 addresses are flat, meaning that no part of the address can be used to identify the logical or physical group to which the address belongs.

The data link thus provides data transfer across the physical link.

That transfer can be reliable or unreliable; many data-link protocols do not have acknowledgments of successful frame reception and acceptance, and some data-link protocols might not even have any form of checksum to check for transmission errors.

In those cases, higher-level protocols must provide flow control, error checking, acknowledgments, and retransmission.


SERVICES PROVIDED

1. SERVICES PROVIDED TO THE NETWORK LAYER

The function of the data link layer is to provide services to the network layer.

The principal service is transferring data from the network layer on the source machine to the network layer on the destination machine. On the source machine is an entity, call it a process, in the network layer that hands some bits to the data link layer for transmission to the destination. The job of the data link layer is to transmit the bits to the destination machine so they can be handed over to the network layer there. The actual transmission follows the physical path, but it is easier to think in terms of two data link layer processes communicating using a data link protocol.

The data link layer can be designed to offer various services. The actual services that are offered vary from protocol to protocol. Three reasonable possibilities that we will consider in turn are:

1. Unacknowledged connectionless service.

2. Acknowledged connectionless service.

3. Acknowledged connection-oriented service.

Unacknowledged connectionless service consists of having the source machine send independent frames to the destination machine without having the destination machine acknowledge them.


Ethernet is a good example of a data link layer that provides this class of service.

No logical connection is established beforehand or released afterward. If a frame is lost due to noise on the line, no attempt is made to detect the loss or recover from it in the data link layer. This class of service is appropriate when the error rate is very low, so recovery is left to higher layers. It is also appropriate for real-time traffic, such as voice, in which late data are worse than bad data. The next step up in terms of reliability is acknowledged connectionless service.

When this service is offered, there are still no logical connections used, but each frame sent is individually acknowledged. In this way, the sender knows whether a frame has arrived correctly or been lost. If it has not arrived within a specified time interval, it can be sent again.

This service is useful over unreliable channels, such as wireless systems. 802.11 (WiFi) is a good example of this class of service. It is perhaps worth emphasizing that providing acknowledgements in the data link layer is just an optimization, never a requirement. The network layer can always send a packet and wait for it to be acknowledged by its peer on the remote machine. If the acknowledgement is not forthcoming before the timer expires, the sender can just send the entire message again.

The trouble with this strategy is that it can be inefficient. Links usually have a strict maximum frame length imposed by the hardware, and known propagation delays. The network layer does not know these parameters. It might send a large packet that is broken up into, say, 10 frames, of which 2 are lost on average. It would then take a very long time for the packet to get through.

Instead, if individual frames are acknowledged and retransmitted, then errors can be corrected more directly and more quickly.

On reliable channels, such as fiber, the overhead of a heavyweight data link protocol may be unnecessary, but on (inherently unreliable) wireless channels it is well worth the cost. Getting back to our services, the most sophisticated service the data link layer can provide to the network layer is connection-oriented service.

With this service, the source and destination machines establish a connection before any data are transferred. Each frame sent over the connection is numbered, and the data link layer guarantees that each frame sent is indeed received.

Furthermore, it guarantees that each frame is received exactly once and that all frames are received in the right order. Connection-oriented service thus provides the network layer processes with the equivalent of a reliable bit stream. It is appropriate over long, unreliable links such as a satellite channel or a long-distance telephone circuit. If acknowledged connectionless service were used, it is conceivable that lost acknowledgements could cause a frame to be sent and received several times, wasting bandwidth.

When connection-oriented service is used, transfers go through three distinct phases. In the first phase the connection is established by having both sides initialize variables and counters needed to keep track of which frames have been received and which ones have not. In the second phase, one or more frames are actually transmitted. In the third and final phase, the connection is released, freeing up the variables, buffers, and other resources used to maintain the connection.

2. Frame synchronization -

In telecommunication, frame synchronization or framing is the process by which, while receiving a stream of framed data, incoming frame alignment signals (i.e., distinctive bit sequences or sync words) are identified (that is, distinguished from data bits), permitting the data bits within the frame to be extracted for decoding or retransmission.

3. Error control (automatic repeat request, ARQ), in addition to the ARQ provided by some transport-layer protocols, the forward error correction (FEC) techniques provided on the physical layer, and the error detection and packet canceling provided at all layers, including the network layer.

Data-link-layer error control (i.e. retransmission of erroneous packets) is provided in wireless networks and V.42 telephone network modems, but not in LAN protocols such as Ethernet, since bit errors are so uncommon in short wires. In that case, only error detection and canceling of erroneous packets are provided.


4. Flow control, in addition to the flow control provided on the transport layer. Data-link-layer error control is not used in LAN protocols such as Ethernet, but it is used in modems and wireless networks.


FRAMING

To provide service to the network layer, the data link layer must use the service provided to it by the physical layer. What the physical layer does is accept a raw bit stream and attempt to deliver it to the destination. If the channel is noisy, as it is for most wireless and some wired links, the physical layer will add some redundancy to its signals to reduce the bit error rate to a tolerable level.

However, the bit stream received by the data link layer is not guaranteed to be error free. Some bits may have different values, and the number of bits received may be less than, equal to, or more than the number of bits transmitted. It is up to the data link layer to detect and, if necessary, correct errors. The usual approach is for the data link layer to break up the bit stream into discrete frames, compute a short token called a checksum for each frame, and include the checksum in the frame when it is transmitted.

When a frame arrives at the destination, the checksum is recomputed. If the newly computed checksum is different from the one contained in the frame, the data link layer knows that an error has occurred and takes steps to deal with it (e.g., discarding the bad frame and possibly also sending back an error report).
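As a concrete illustration of this compute-and-compare step, here is a minimal sketch in Python using a CRC-32 as the per-frame checksum. The frame layout (a 4-byte CRC appended as a trailer) is an assumption for illustration, not any particular protocol's format:

import zlib

def add_checksum(payload: bytes) -> bytes:
    # Sender: append a 4-byte CRC-32 trailer to the frame payload.
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def checksum_ok(frame: bytes) -> bool:
    # Receiver: recompute the CRC and compare it with the trailer.
    payload, trailer = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == trailer

frame = add_checksum(b"hello")
assert checksum_ok(frame)                          # a clean frame passes
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]   # flip a single bit
assert not checksum_ok(corrupted)                  # the error is detected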

Breaking up the bit stream into frames is more difficult than it at first appears. A good design must make it easy for a receiver to find the start of new frames while using little of the channel bandwidth. We will look at four methods:

1. Byte count.

2. Flag bytes with byte stuffing.

3. Flag bits with bit stuffing.

4. Physical layer coding violations.

The first framing method uses a field in the header to specify the number of bytes in the frame. When the data link layer at the destination sees the byte count, it knows how many bytes follow and hence where the end of the frame is; picture, for example, four small frames of sizes 5, 5, 8, and 8 bytes, respectively.

The trouble with this algorithm is that the count can be garbled by a transmission error. For example, if the byte count of 5 in the second frame becomes a 7 due to a single bit flip, the destination will get out of synchronization.


It will then be unable to locate the correct start of the next frame. Even if the checksum is incorrect, so the destination knows that the frame is bad, it still has no way of telling where the next frame starts.

Sending a frame back to the source asking for a retransmission does not help either, since the destination does not know how many bytes to skip over to get to the start of the retransmission. For this reason, the byte count method is rarely used by itself.
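A small sketch makes the failure mode concrete. The parser below (Python; the convention that the count byte includes itself follows the 5/5/8/8 example above) recovers the frames correctly, until a single corrupted count desynchronizes everything that follows:

def split_frames(stream: bytes):
    # Each frame begins with a 1-byte count that includes the count
    # byte itself, as in the 5/5/8/8 example above.
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]
        if count == 0:
            break                        # a zero count means corruption
        frames.append(stream[i + 1:i + count])
        i += count
    return frames

stream = bytes([5, 1, 2, 3, 4,           # frame 1: count byte + 4 data bytes
                5, 6, 7, 8, 9])          # frame 2: count byte + 4 data bytes
print(split_frames(stream))              # [b'\x01\x02\x03\x04', b'\x06\x07\x08\t']

bad = bytes([7]) + stream[1:]            # a bit flip turned the first 5 into 7
print(split_frames(bad))                 # every later boundary is now wrong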

The second framing method gets around the problem of resynchronization after an error by having each frame start and end with special bytes.

Often the same byte, called a flag byte, is used as both the starting and ending delimiter.

Two consecutive flag bytes indicate the end of one frame and the start of the next.

Thus, if the receiver ever loses synchronization, it can just search for two flag bytes to find the end of the current frame and the start of the next frame. However, there is still a problem we have to solve.

It may happen that the flag byte occurs in the data, especially when binary data such as photographs or songs are being transmitted. This situation would interfere with the framing.

One way to solve this problem is to have the sender’s data link layer insert a special escape byte (ESC) just before each ‘‘accidental’’ flag byte in the data.


Thus, a framing flag byte can be distinguished from one in the data by the absence or presence of an escape byte before it. The data link layer on the receiving end removes the escape bytes before giving the data to the network layer. This technique is called byte stuffing.

Of course, the next question is: what happens if an escape byte occurs in the middle of the data? The answer is that it, too, is stuffed with an escape byte. At the receiver, the first escape byte is removed, leaving the data byte that follows it (which might be another escape byte or the flag byte).

In all cases, the byte sequence delivered after destuffing is exactly the same as the original byte sequence. We can still search for a frame boundary by looking for two flag bytes in a row, without bothering to undo escapes. This is a slight simplification of the scheme used in PPP (Point-to-Point Protocol), which is used to carry packets over communications links.
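A minimal sketch of byte stuffing and destuffing follows (Python). The FLAG and ESC values are the ones PPP happens to use, but this implements only the simplified scheme described above; real PPP additionally transforms the escaped byte, which is omitted here:

FLAG, ESC = 0x7E, 0x7D    # delimiter values borrowed from PPP for illustration

def byte_stuff(payload: bytes) -> bytes:
    # Sender: precede each accidental FLAG or ESC in the data with ESC,
    # then delimit the whole frame with FLAG bytes.
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)
        out.append(b)
    out.append(FLAG)
    return bytes(out)

def byte_destuff(frame: bytes) -> bytes:
    # Receiver: strip the delimiters, then drop each ESC and keep
    # the byte that follows it.
    body, out, i = frame[1:-1], bytearray(), 0
    while i < len(body):
        if body[i] == ESC:
            i += 1                       # skip the escape, keep the next byte
        out.append(body[i])
        i += 1
    return bytes(out)

data = bytes([0x01, FLAG, ESC, 0x02])
assert byte_destuff(byte_stuff(data)) == data   # destuffing restores the data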

The third method of delimiting the bit stream gets around a disadvantage of byte stuffing, namely that it is tied to the use of 8-bit bytes. Framing can also be done at the bit level, so frames can contain an arbitrary number of bits made up of units of any size.

It was developed for the once very popular HDLC (High-level Data Link Control) protocol. Each frame begins and ends with a special bit pattern, 01111110 or 0x7E in hexadecimal. This pattern is a flag byte. Whenever the sender’s data link layer encounters five consecutive 1s in the data, it automatically stuffs a 0 bit into the outgoing bit stream. This bit stuffing is analogous to byte stuffing, in which an escape byte is stuffed into the outgoing character stream before a flag byte in the data.

It also ensures a minimum density of transitions that helps the physical layer maintain synchronization. USB (Universal Serial Bus) uses bit stuffing for this reason. When the receiver sees five consecutive incoming 1 bits, followed by a 0 bit, it automatically destuffs (i.e., deletes) the 0 bit. Just as byte stuffing is completely transparent to the network layer in both computers, so is bit stuffing.

If the user data contain the flag pattern 01111110, this flag is transmitted as 011111010 but stored in the receiver’s memory as 01111110.

With bit stuffing, the boundary between two frames can be unambiguously recognized by the flag pattern. Thus, if the receiver loses track of where it is, all it has to do is scan the input for flag sequences, since they can only occur at frame boundaries and never within the data.
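The stuffing rule itself is mechanical, as a minimal sketch shows (Python, with bits represented as a '0'/'1' string for readability); the asserts reproduce the 01111110 example from the text:

def bit_stuff(bits: str) -> str:
    # Sender: insert a 0 after every run of five consecutive 1s.
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")              # the stuffed bit
            run = 0
    return "".join(out)

def bit_destuff(bits: str) -> str:
    # Receiver: delete the 0 that follows every run of five 1s.
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == "1" else 0
        if run == 5:
            i += 1                       # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)

assert bit_stuff("01111110") == "011111010"    # the flag pattern from the text
assert bit_destuff("011111010") == "01111110"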

With both bit and byte stuffing, a side effect is that the length of a frame now depends on the contents of the data it carries.

For instance, if there are no flag bytes in the data, 100 bytes might be carried in a frame of roughly 100 bytes. If, however, the data consists solely of flag bytes, each flag byte will be escaped and the frame will become roughly 200 bytes long. With bit stuffing, the increase would be roughly 12.5%, as 1 bit is added to every byte.

The last method of framing is to use a shortcut from the physical layer: the encoding of bits as signals often includes redundancy to help the receiver, and this redundancy means that some signals will not occur in regular data.

For example, in the 4B/5B line code 4 data bits are mapped to 5 signal bits to ensure sufficient bit transitions. This means that 16 out of the 32 signal possibilities are not used. We can use some reserved signals to indicate the start and end of frames.

In effect, we are using ‘‘coding violations’’ to delimit frames. The beauty of this scheme is that, because they are reserved signals, it is easy to find the start and end of frames and there is no need to stuff the data. Many data link protocols use a combination of these methods for safety. A common pattern used for Ethernet and 802.11 is to have a frame begin with a well-defined pattern called a preamble. This pattern might be quite long (72 bits is typical for 802.11) to allow the receiver to prepare for an incoming packet.

The preamble is then followed by a length (i.e., count) field in the header that is used to locate the end of the frame.

Error Control

Having solved the problem of marking the start and end of each frame, we come to the next problem: how to make sure all frames are eventually delivered to the network layer at the destination and in the proper order.

Assume for the moment that the receiver can tell whether a frame that it receives contains correct or faulty information. For unacknowledged connectionless service it might be fine if the sender just kept outputting frames without regard to whether they were arriving properly. But for reliable, connection-oriented service it would not be fine at all.

The usual way to ensure reliable delivery is to provide the sender with some feedback about what is happening at the other end of the line. Typically, the protocol calls for the receiver to send back special control frames bearing positive or negative acknowledgements about the incoming frames. If the sender receives a positive acknowledgement about a frame, it knows the frame has arrived safely.

On the other hand, a negative acknowledgement means that something has gone wrong and the frame must be transmitted again. An additional complication comes from the possibility that hardware troubles may cause a frame to vanish completely (e.g., in a noise burst). In this case, the receiver will not react at all, since it has no reason to react.

Similarly, if the acknowledgement frame is lost, the sender will not know how to proceed. It should be clear that a protocol in which the sender transmits a frame and then waits for an acknowledgement, positive or negative, will hang forever if a frame is ever lost due to, for example, malfunctioning hardware or a faulty communication channel.

This possibility is dealt with by introducing timers into the data link layer. When the sender transmits a frame, it generally also starts a timer. The timer is set to expire after an interval long enough for the frame to reach the destination, be processed there, and have the acknowledgement propagate back to the sender.

Normally, the frame will be correctly received and the acknowledgement will get back before the timer runs out, in which case the timer will be canceled. However, if either the frame or the acknowledgement is lost, the timer will go off, alerting the sender to a potential problem. The obvious solution is to just transmit the frame again. However, when frames may be transmitted multiple times there is a danger that the receiver will accept the same frame two or more times and pass it to the network layer more than once.

To prevent this from happening, it is generally necessary to assign sequence numbers to outgoing frames, so that the receiver can distinguish retransmissions from originals. The whole issue of managing the timers and sequence numbers so as to ensure that each frame is ultimately passed to the network layer at the destination exactly once, no more and no less, is an important part of the duties of the data link layer (and higher layers).
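The interplay of timers, retransmissions, and sequence numbers can be sketched in a few lines. The toy simulation below (Python) abstracts the timer into a simple retry loop and uses an invented loss model; it is only meant to show why a single alternating sequence number lets the receiver discard duplicates:

import random

def send_stop_and_wait(packets, loss_rate=0.3, seed=1):
    # Toy stop-and-wait sender and receiver over a lossy channel.
    # A lost frame or a lost ACK plays the role of a timer expiry
    # and triggers retransmission of the same frame.
    rng = random.Random(seed)
    delivered, seq, expected = [], 0, 0
    for data in packets:
        while True:
            if rng.random() > loss_rate:          # frame survived the channel
                if seq == expected:               # new frame: deliver it once
                    delivered.append(data)
                    expected ^= 1
                # a duplicate is re-ACKed but not delivered again
                if rng.random() > loss_rate:      # ACK survived the channel
                    break                         # ACK in time, no timeout
            # otherwise the "timer goes off" and we resend the same frame
        seq ^= 1
    return delivered

assert send_stop_and_wait(["a", "b", "c"]) == ["a", "b", "c"]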


Flow Control

Another important design issue that occurs in the data link layer (and higher layers as well) is what to do with a sender that systematically wants to transmit frames faster than the receiver can accept them. This situation can occur when the sender is running on a fast, powerful computer and the receiver is running on a slow, low-end machine.

A common situation is when a smart phone requests a Web page from a far more powerful server, which then turns on the fire hose and blasts the data at the poor helpless phone until it is completely swamped. Even if the transmission is error free, the receiver may be unable to handle the frames as fast as they arrive and will lose some.

Clearly, something has to be done to prevent this situation.

Two approaches are commonly used. In the first one, feedback-based flow control, the receiver sends back information to the sender giving it permission to send more data, or at least telling the sender how the receiver is doing. In the second one, rate-based flow control, the protocol has a built-in mechanism that limits the rate at which senders may transmit data, without using feedback from the receiver. Feedback-based schemes are seen at both the link layer and higher layers.

The latter is more common these days, in which case the link layer hardware is designed to run fast enough that it does not cause loss. For example, hardware implementations of the link layer as NICs (Network Interface Cards) are sometimes said to run at ‘‘wire speed,’’ meaning that they can handle frames as fast as they can arrive on the link. Any overruns are then not a link problem, so they are handled by higher layers. Various feedback-based flow control schemes are known, but most of them use the same basic principle. The protocol contains well-defined rules about when a sender may transmit the next frame.

These rules often prohibit frames from being sent until the receiver has granted permission, either implicitly or explicitly. For example, when a connection is set up the receiver might say: ‘‘You may send me n frames now, but after they have been sent, do not send any more until I have told you to continue.’’


ERROR CONTROL

o Common methods used for error detection include parity codes, checksums, and CRC codes.

o Two types of error correction are used commonly: ARQ (Automatic Repeat reQuest) and FEC (Forward Error Correction). We shall only cover ARQ in this course. FEC codes are sent in the forward direction along with the user data; the receiver is able to correct simple errors that may occur in the data stream using the FEC code.

o ARQ: The fundamental idea is to use acknowledgments (ACKs) sent from the receiver to the sender for data frames sent from the sender to the receiver. See the book for figures. We introduce concepts used in ARQ here:

Timers: the sender keeps a timer; if an ACK is not received before the timer expires, it resends the frame. How large should the timer value be?

Need for sequence numbers for the frames: If a frame is properly received, but the ACK is lost, then the sender will time-out and retransmit. This will result in the receiver receiving duplicate frames. Hence frames need sequence numbers to allow the receiver to detect duplicates.

Need for sequence numbers in the ACKs: If frame 0 is sent, but the timer times out before the ACK is received, the sender will resend frame 0. If the ACK for the first frame 0 arrives soon after the sender retransmitted frame 0, the sender will assume that the ACK was for the retransmitted frame 0 and hence send frame 1. Now if frame 1 gets lost, but the receiver sends another ACK for the retransmitted frame 0, the sender may mistakenly think this ACK is for frame 1. If ACKs said what frames they were acknowledging (using their sequence numbers), this problem would not arise.

Checkpointing: periodically sending ENQ (enquiries) to see what frames were received.

o Three modes:

Stop-and-wait: send a frame, wait for an ACK.

 Inefficient because utilization is low when the delay-BW product is large. The delay-BW product is the bit rate times the delay that elapses before any action can take place. Consider a 1.5 Mbps link. If a 1000-bit frame is sent and an ACK awaited before the next frame can be sent, and the round-trip delay (including propagation delay) is 40 ms, then the delay-BW product is 1.5 Mbps times 40 ms = 60 Kbits. Instead of sending this much, only 1 Kbit is sent in the same time period, which makes stop-and-wait inefficient.

Go-back-N: the sender maintains a window of Ws frames. After frame 0 is sent, the transmitter may send Ws - 1 further frames, so Ws is the maximum number of frames that can be outstanding (i.e., unacknowledged). A pipelining effect occurs, improving the efficiency of the protocol. Upon receiving a NAK (negative ACK), or experiencing a time-out, the sender retransmits. The sender keeps track of S_recent, which lies between S_last and S_last + Ws - 1, where S_last is the last transmitted and yet unacknowledged frame and S_recent is the last one sent. The receiver maintains a variable R_next, which is the sequence number of the next expected frame; typically, this is the number that the receiver sends in an ACK. Go-back-N is an example of a sliding-window protocol. The window size at the sender should be less than 2^m if m bits are used for the sequence number. If Ws = 2^m, a problem could arise: suppose m = 2 bits, the window size is 4, and frames 0, 1, 2, 3 are sent and received properly, but all the ACKs are lost. The sender then goes back N and retransmits frame 0. When the receiver receives this frame it is a duplicate, but the receiver does not know this; it thinks it is the next set!

Negative ACK: If the receiver knows for sure that it didn't get a packet that it should have, it will send a NAK.

Piggybacking ACKs: ACKs are piggybacked on frames sent in the reverse direction if data flow is bidirectional. This makes the transfer more efficient. Thus the receivers at both ends maintain R_next, while the senders maintain S_last and S_last + Ws - 1.

Cumulative vs. selective ACKs: In cumulative ACKs, instead of sending an ACK for every packet, a cumulative ACK acknowledges multiple packets. For example, the ACK-every-other-segment strategy is used in TCP implementations: every other packet is acknowledged, instead of every packet. Selective ACKs are used to indicate the specific sequence numbers of packets that have been received.

Selective ARQ: If error rates are high, then Go-back-N becomes inefficient. Selective ARQ is a scheme in which only the errored frames are retransmitted. This is clearly more efficient, but it comes at the cost of more complex receivers. The receiver now maintains a window W_r, the maximum number of frames that the receiver is willing to receive; out-of-sequence frames are stored here. Timers are maintained on each frame. When a timer expires, only the potentially errored frame is retransmitted. NAKs can be used if sequenced delivery is guaranteed. Piggybacking can also be used.


ELEMENTARY DATA LINK PROTOCOLS

To introduce the subject of protocols, we will begin by looking at three protocols of increasing complexity. For interested readers, a simulator for these and subsequent protocols is available via the Web. Before we look at the protocols, it is useful to make explicit some of the assumptions underlying the model of communication.

To start with, we assume that the physical layer, data link layer, and network layer are independent processes that communicate by passing messages back and forth.

In a common implementation, the physical layer process and some of the data link layer process run on dedicated hardware called a NIC (Network Interface Card). The rest of the link layer process and the network layer process run on the main CPU as part of the operating system, with the software for the link layer process often taking the form of a device driver.

However, other implementations are also possible (e.g., all three processes offloaded to dedicated hardware called a network accelerator, or all three processes running on the main CPU of a software-defined radio).

Actually, the preferred implementation changes from decade to decade with technology trade-offs. In any event, treating the three layers as separate processes makes the discussion conceptually cleaner and also serves to emphasize the independence of the layers.


SLIDING WINDOW PROTOCOL

In the previous protocols, data frames were transmitted in one direction only. In most practical situations, there is a need to transmit data in both directions. One way of achieving full-duplex data transmission is to run two instances of one of the previous protocols, each using a separate link for simplex data traffic (in different directions).

Each link then comprises a ‘‘forward’’ channel (for data) and a ‘‘reverse’’ channel (for acknowledgements). In both cases the capacity of the reverse channel is almost entirely wasted. A better idea is to use the same link for data in both directions. After all, in protocols 2 and 3 it was already being used to transmit frames both ways, and the reverse channel normally has the same capacity as the forward channel. In this model the data frames from A to B are intermixed with the acknowledgement frames from A to B. By looking at the kind field in the header of an incoming frame, the receiver can tell whether the frame is data or an acknowledgement.

Although interleaving data and control frames on the same link is a big improvement over having two separate physical links, yet another improvement is possible. When a data frame arrives, instead of immediately sending a separate control frame, the receiver restrains itself and waits until the network layer passes it the next packet. The acknowledgement is attached to the outgoing data frame (using the ack field in the frame header).

In effect, the acknowledgement gets a free ride on the next outgoing data frame.

The technique of temporarily delaying outgoing acknowledgements so that they can be hooked onto the next outgoing data frame is known as piggybacking. The principal advantage of using piggybacking over having distinct acknowledgement frames is a better use of the available channel bandwidth.

The ack field in the frame header costs only a few bits, whereas a separate frame would need a header, the acknowledgement, and a checksum. In addition, fewer frames sent generally means a lighter processing load at the receiver. In the next protocol to be examined, the piggyback field costs only 1 bit in the frame header. It rarely costs more than a few bits.

However, piggybacking introduces a complication not present with separate acknowledgements. How long should the data link layer wait for a packet onto which to piggyback the acknowledgement? If it waits longer than the sender's timeout period, the frame will be retransmitted, defeating the whole purpose of having acknowledgements.

The sender must also buffer transmitted but still unacknowledged frames in case they need to be retransmitted; if the sender's window ever grows to its maximum size, the sending data link layer must forcibly shut off the network layer until another buffer becomes free.

The receiving data link layer’s window corresponds to the frames it may accept.

Any frame falling within the window is put in the receiver’s buffer. When a frame whose sequence number is equal to the lower edge of the window is received, it is passed to the network layer and the window is rotated by one. Any frame falling outside the window is discarded.

In all of these cases, a subsequent acknowledgement is generated so that the sender can find out how to proceed. Note that a window size of 1 means that the data link layer only accepts frames in order, but for larger windows this is not so. The network layer, in contrast, is always fed data in the proper order, regardless of the data link layer’s window size.

ONE BIT SLIDING WINDOW PROTOCOL

Before tackling the general case, let us examine a sliding window protocol with a window size of 1. Such a protocol uses stop-and-wait, since the sender transmits a frame and waits for its acknowledgement before sending the next one. If the sender's timeout interval is too short, it may time out repeatedly, sending a series of identical frames, all with seq = 0 and ack = 1. When the first valid frame arrives at computer B, it will be accepted and frame_expected will be set to 1. All the subsequent frames received will be rejected because B is now expecting frames with sequence number 1, not 0.

Furthermore, since all the duplicates will have ack = 1 and B is still waiting for an acknowledgement of 0, B will not go and fetch a new packet from its network layer.

After every rejected duplicate comes in, B will send A a frame containing seq = 0 and ack = 0. Eventually, one of these will arrive correctly at A, causing A to begin sending the next packet. No combination of lost frames or premature timeouts can cause the protocol to deliver duplicate packets to either network layer, to skip a packet, or to deadlock.

The protocol is correct. However, to show how subtle protocol interactions can be, we note that a peculiar situation arises if both sides simultaneously send an initial packet. In the figure given in the source, part (a) shows the normal operation of the protocol and part (b) illustrates the peculiarity. If B waits for A’s first frame before sending one of its own, the sequence is as shown in (a), and every frame is accepted. However, if A and B simultaneously initiate communication, their first frames cross, and the data link layers then get into situation (b). In (a) each frame arrival brings a new packet for the network layer; there are no duplicates. In (b) half of the frames contain duplicates, even though there are no transmission errors.

Similar situations can occur as a result of premature timeouts, even when one side clearly starts first. In fact, if multiple premature timeouts occur, frames may be sent three or more times, wasting valuable bandwidth.


GO BACK N

Until now we have made the tacit assumption that the transmission time required for a frame to arrive at the receiver plus the transmission time for the acknowledgement to come back is negligible. Sometimes this assumption is clearly false.

In these situations the long round-trip time can have important implications for the efficiency of the bandwidth utilization. As an example, consider a 50-kbps satellite channel with a 500-msec round-trip propagation delay. Let us imagine trying to use protocol 4 to send 1000-bit frames via the satellite. At t = 0 the sender starts sending the first frame. At t = 20 msec the frame has been completely sent. Not until t = 270 msec has the frame fully arrived at the receiver, and not until t = 520 msec has the acknowledgement arrived back at the sender, under the best of circumstances (no waiting in the receiver and a short acknowledgement frame).

This means that the sender was blocked 500/520 or 96% of the time. In other words, only 4% of the available bandwidth was used. Clearly, the combination of a long transit time, high bandwidth, and short frame length is disastrous in terms of efficiency. The problem described here can be viewed as a consequence of the rule requiring a sender to wait for an acknowledgement before sending another frame.

If we relax that restriction, much better efficiency can be achieved. Basically, the solution lies in allowing the sender to transmit up to w frames before blocking, instead of just 1.


With a large enough choice of w, the sender will be able to transmit frames continuously, since acknowledgements for earlier frames will arrive before the window fills up and blocks the sender.
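The satellite example above can be turned into a small utilization calculator. A sketch (Python), using the usual idealized sliding-window formula U = min(1, w·Tf / (Tf + RTT)):

def utilization(window, frame_bits, rate_bps, rtt_s):
    # Fraction of time the sender is busy with a window of size w,
    # ignoring errors, processing time and ACK transmission time.
    tf = frame_bits / rate_bps               # frame transmission time
    return min(1.0, window * tf / (tf + rtt_s))

# The example above: 50 kbps, 1000-bit frames, 500 ms round trip.
for w in (1, 7, 26):
    print(w, round(utilization(w, 1000, 50_000, 0.5), 3))
# w=1 gives about 0.038 (the ~4% from the text); w=26 fills the pipe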

One option, called go-back-n, is for the receiver simply to discard all subsequent frames, sending no acknowledgements for the discarded frames. This strategy corresponds to a receive window of size 1.

In other words, the data link layer refuses to accept any frame except the next one it must give to the network layer.

If the sender’s window fills up before the timer runs out, the pipeline will begin to empty. Eventually, the sender will time out and retransmit all unacknowledged frames in order, starting with the damaged or lost one. This approach can waste a lot of bandwidth if the error rate is high.

In the figure given in the source, we see go-back-n for the case in which the receiver’s window is large. Frames 0 and 1 are correctly received and acknowledged. Frame 2, however, is damaged or lost. The sender, unaware of this problem, continues to send frames until the timer for frame 2 expires. Then it backs up to frame 2 and starts over with it, sending 2, 3, 4, etc. all over again.

The other general strategy for handling errors when frames are pipelined is called selective repeat.

When it is used, a bad frame that is received is discarded, and any good frames received after it are accepted and buffered. When the sender times out, only the oldest unacknowledged frame is retransmitted. If that frame arrives correctly, the receiver can deliver to the network layer, in sequence, all the frames it has buffered.

Selective repeat corresponds to a receiver window larger than 1. This approach can require large amounts of data link layer memory if the window is large.Selective repeat is often combined with having the receiver send a negative acknowledgement (NAK) when it detects an error, for example,when it receives a checksum error or a frame out of sequence. NAKs stimulate retransmission before the corresponding timer expires and thus improve performance.

In the figure, frames 0 and 1 are again correctly received and acknowledged and frame 2 is lost. When frame 3 arrives at the receiver, the data link layer there notices that it has missed a frame, so it sends back a NAK for 2 but buffers 3. When frames 4 and 5 arrive, they, too, are buffered by the data link layer instead of being passed to the network layer. Eventually, the NAK 2 gets back to the sender, which immediately resends frame 2.

When that arrives, the data link layer now has 2, 3, 4, and 5 and can pass all of them to the network layer in the correct order. It can also acknowledge all frames up to and including 5, as shown in the figure. If the NAK should get lost, eventually the sender will time out for frame 2 and send it (and only it) of its own accord, but that may be quite a while later.

SELECTIVE REPEAT REQUEST

It may be used as a protocol for the delivery and acknowledgement of message units, or it may be used as a protocol for the delivery of subdivided message subunits.

When used as the protocol for the delivery of messages, the sending process continues to send a number of frames specified by a window size even after a frame loss. Unlike Go-Back-N ARQ, the receiving process will continue to accept and acknowledge frames sent after an initial error; this is the general case of the sliding window protocol, with both transmit and receive window sizes greater than 1. The receiving process keeps track of the sequence number of the earliest frame it has not received, and sends that number with every acknowledgement (ACK) it sends.

If a frame from the sender does not reach the receiver, the sender continues to send subsequent frames until it has emptied its window. The receiver continues to fill its receiving window with the subsequent frames, replying each time with an ACK containing the sequence number of the earliest missing frame. Once the sender has sent all the frames in its window, it re-sends the frame number given by the ACKs, and then continues where it left off.

The size of the sending and receiving windows must be equal, and at most half the maximum sequence number (assuming that sequence numbers are numbered from 0 to n−1), to avoid miscommunication in all cases of packets being dropped. To understand this, consider the case when all ACKs are destroyed: if the receiving window is larger than half the maximum sequence number, some, possibly even all, of the packets that are resent after timeouts are duplicates that are not recognized as such. The sender moves its window for every packet that is acknowledged.

When used as the protocol for the delivery of subdivided messages, it works somewhat differently. In non-continuous channels where messages may be variable in length, standard ARQ or Hybrid ARQ protocols may treat the message as a single unit. Alternatively, selective retransmission may be employed in conjunction with the basic ARQ mechanism, where the message is first subdivided into sub-blocks (typically of fixed length) in a process called packet segmentation.

The original variable length message is thus represented as a concatenation of a variable number of sub-blocks.

While in standard ARQ the message as a whole is either acknowledged (ACKed) or negatively acknowledged (NAKed), in ARQ with selective transmission the NAKed response would additionally carry a bit flag indicating the identity of each sub-block successfully received. In ARQ with selective retransmission of subdivided messages, each retransmission diminishes in length, needing to contain only the sub-blocks that have not yet been received.

In most channel models with variable length messages, the probability of error-free reception decreases with increasing message length. In other words, it is easier to receive a short message than a longer one. Therefore, standard ARQ techniques involving variable length messages have increased difficulty delivering longer messages, as each repeat is the full length. Selective retransmission applied to variable length messages eliminates this difficulty, as successfully delivered sub-blocks are retained after each transmission, and the number of outstanding sub-blocks in following transmissions diminishes.


HYBRID ARQ

Hybrid automatic repeat request (hybrid ARQ or HARQ) is a combination of high-rate forward error-correcting coding and ARQ error control. In standard ARQ, redundant bits are added to the data to be transmitted using an error-detecting (ED) code such as a cyclic redundancy check (CRC). Receivers detecting a corrupted message will request a new message from the sender.

In Hybrid ARQ, the original data is encoded with a forward error correction (FEC) code, and the parity bits are either immediately sent along with the message or only transmitted upon request when a receiver detects an erroneous message.

The ED code may be omitted when a code is used that can perform both forward error correction (FEC) and error detection, such as a Reed-Solomon code. The FEC code is chosen to correct an expected subset of all errors that may occur, while the ARQ method is used as a fall-back to correct errors that are uncorrectable using only the redundancy sent in the initial transmission.

As a result, hybrid ARQ performs better than ordinary ARQ in poor signal conditions, but in its simplest form this comes at the expense of significantly lower throughput in good signal conditions. There is typically a signal quality cross-over point below which simple hybrid ARQ is better, and above which basic ARQ is better.


BIT ORIENTED PROTOCOLS

Synchronous Data Link Control (SDLC)

Although a subset of HDLC, SDLC was developed by IBM before HDLC, and was the first link layer protocol based on synchronous, bit-oriented operation. IBM defined SDLC for managing synchronous, serially transmitted bits over a data link, and these links can be full- or half-duplex, switched or unswitched, point-to-point, point-to-multipoint or even looped. SDLC is designed for carrying SNA traffic.

In SDLC, a link station is a logical connection between adjacent nodes. Only one Primary Link Station is allowed on an SDLC line. A device can be set up as a Primary or a Secondary link station. A device configured as a Primary link station can communicate with both PU 2.0 nodes and PU 2.1 nodes (APPN) and controls the secondary devices.

If the device is set up as a secondary link station, then it acts as a PU 2.0 device and can communicate with Front End Processors (FEPs), but it only communicates with the primary device when the primary allows it, i.e. the primary sets up and tears down the connections and controls the secondaries.

In APPN configurations the device can support negotiable link stations where XID frames are exchanged to decide which station is to be secondary and which is to be primary.

A primary station issues commands, controls the link and initiates error recovery. A device set up as a secondary station can communicate with a FEP, exist with other secondary devices on an SDLC link and exist as a secondary PU 2.0 device.

SDLC supports line speeds up to 64Kb/s, e.g. V.24 (RS-232) at 19.2Kb/s, V.35 (up to 64Kb/s) and X.21.

The following diagram shows the frame format for SDLC, which is almost identical to that of HDLC.


Flag - begins and ends the frame (and delimits the span covered by error checking) with 0x7E, which is 01111110 in binary.

Address - This is only the secondary address since all communication occurs via the single primary device. The address can be an individual, group or broadcast address.

Control - this identifies the frame's function and can be one of the following:

o Information (I) - contains the Send Sequence Number, which is the number of the next frame to be sent, and the Receive Sequence Number, which is the number of the next frame expected to be received. There is also a Poll/Final bit (P/F) which assists in error checking.

o Supervisory (S) - this can report on status, ask for and stop transmission, and acknowledge I frames.

o Unnumbered (U) - this does not have sequence numbers (hence 'unnumbered'); it can be used to start up secondaries and can sometimes have an Information field.

 Data - can contain a Path Information Unit (PIU) or an Exchange Identification (XID).

Frame Check Sequence (FCS) - this check is carried out on both the sending and the receiving of the frame.

In a poll, the address field identifies the station being polled; in a response, the address field identifies the transmitting station. This field is therefore effectively the secondary station's address.

High-Level Data Link Control (HDLC)

HDLC is the protocol which is now considered an umbrella under which many Wide Area protocols sit. HDLC was standardized by the ISO in 1979, and within HDLC there are three types of stations defined:

 Primary Station - this completely controls all data link operations, issuing commands to secondary stations, and has the ability to hold separate sessions with different stations.


 Secondary Station - this can only send responses to one primary station. Secondary stations only talk to each other via a Primary station.

 Combined Station - this can transmit and receive commands and responses from one other station.

Configuring a channel for use by a station can be done in one of three ways:

 Unbalanced - this configuration allows one primary station to talk to a number of secondary stations over half-duplex, full-duplex, switched, unswitched, point-to-point or multipoint paths.

 Symmetrical - where commands and responses are multiplexed over one physical channel when two stations with primary and secondary parts have a point-to-point link joining them.

 Balanced - where two combined stations communicate over a point-to-point link which can be full/half-duplex or switched/unswitched.

When transferring data, stations are in one of three modes:

 Normal Response Mode (NRM) where the secondary station needs permission from the primary station before it can transmit data. Mainly used on multi-point lines.

 Asynchronous Response Mode (ARM) where the secondary station can send data without receiving permission from the primary station. This is hardly ever used.

 Asynchronous Balanced Mode (ABM) where either station can initiate transmission without permission from the other. This is the most common mode used on point-to-point links.


The following diagram details the HDLC frame format. The HDLC frame begins and ends with the flag byte 0x7E, which is 01111110 in binary.

There are three types of HDLC frame, defined by the control field:

 Information Frames are used for the data transfer between stations. The send sequence, or next send N(S), and the receive sequence, or next receive N(R), hold the frame sequence numbers. The Poll/Final bit is called Poll when used by the primary station to obtain a response from a secondary station, and Final when used by the secondary station to indicate a response or the end of transmission.

 Supervisory Frames are used to acknowledge frames, request retransmissions or ask for suspension of transmission. The Supervisory code denotes the type of supervisory frame being sent.

 Unnumbered Frames are used for link initialisation or link disconnection. The Unnumbered bits indicate the type of Unnumbered frame being used.
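Since the low-order bits of the control field determine the frame type (0 for I, 01 for S, 11 for U in the common LSB-first description of the basic modulo-8 format), a classifier is only a few lines. This sketch (Python) is illustrative; bit-numbering conventions vary between references:

def parse_control(ctrl: int):
    # Classify a basic (modulo-8) HDLC control octet and extract
    # the sequence fields described above.
    pf = (ctrl >> 4) & 1                     # Poll/Final bit
    if ctrl & 0x01 == 0:                     # last bit 0: Information frame
        return ("I", {"ns": (ctrl >> 1) & 7, "nr": (ctrl >> 5) & 7, "pf": pf})
    if ctrl & 0x03 == 0x01:                  # last bits 01: Supervisory frame
        return ("S", {"code": (ctrl >> 2) & 3, "nr": (ctrl >> 5) & 7, "pf": pf})
    return ("U", {"pf": pf})                 # last bits 11: Unnumbered frame

kind, fields = parse_control(0b101_0_001_0)  # I-frame: N(S)=1, P/F=0, N(R)=5
print(kind, fields)                          # ('I', {'ns': 1, 'nr': 5, 'pf': 0})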


BISYNC

Binary synchronous communications, or BISYNC, is a character (byte)-oriented form of communication developed by IBM in the 1960s. It was originally designed for batch transmissions between the IBM S/360 mainframe family and IBM 2780 and 3780 terminals. It supports online and RJE (remote job entry) terminals in the CICS/VSE (Customer Information Control System/Virtual Storage Extended) environment.

BISYNC establishes rules for transmitting binary-coded data between a terminal and a host computer's BISYNC port. While BISYNC is a half-duplex protocol, it will synchronize in both directions on a full-duplex channel. BISYNC supports both point-to-point (over leased or dial-up lines) and multipoint transmissions. Each message must be acknowledged, adding to its overhead.

BISYNC is character oriented, meaning that groups of bits (bytes) are the main elements of transmission, rather than a stream of bits.

The BISYNC frame is pictured next. It starts with two sync characters that the receiver and transmitter use for synchronizing. This is followed by a start-of-header (SOH) command, and then the header. Following this are the start-of-text (STX) command and the text. Finally, an end-of-text (ETX) command and a cyclic redundancy check (CRC) end the frame. The CRC provides error detection.

Most of the bisynchronous protocols, of which there are many, provide only half-duplex transmission and require an acknowledgment for every block of transmitted data. Some do provide full-duplex transmission and bit-oriented operation.

BISYNC has largely been replaced by the more powerful SDLC (Synchronous Data Link Control).

LAP

The LAP protocols are part of a group of data link layer protocols for framing and transmitting data across point-to-point links. LAP originates from IBM SDLC (Synchronous Data Link Control), which IBM submitted to the ISO for standardization. The ISO developed HDLC (High-level Data Link Control) from the protocol. Later, the CCITT (now referred to as the ITU) modified HDLC for use in its X.25 packet-switching network standard. It called the protocol LAP (Link Access Procedure), but later updated it and called it LAPB (LAP Balanced).


This section discusses LAPB and several derivatives of LAPB. You can also refer to "SDLC (Synchronous Data Link Control)" and "HDLC (High-level Data Link Control)" for additional information.

LAPB transmissions typically take place over physical point-to-point links. It is a full-duplex protocol, meaning that each station can send and receive commands and responses over separate channels to improve throughput.

The protocol is bit oriented, meaning that the data is monitored bit by bit. Bit-oriented information in the LAPB frame defines the structure for delivering data and command/response messages between communicating systems. The frame format for LAPB is similar to the frame format of HDLC.

LAPB

Link Access Procedure, Balanced (LAPB) implements the data link layer as defined in the X.25 protocol suite. LAPB is a bit-oriented protocol derived from HDLC that ensures that frames are error free and in the right sequence.

LAPB is specified in ITU-T Recommendation X.25 and ISO/IEC 7776. It can be used as a Data Link Layer protocol implementing the connection-mode data link service in the OSI Reference Model as defined by ITU-T Recommendation X.222.

LAPB is used to manage communication and packet framing between data terminal equipment (DTE) and the data circuit-terminating equipment (DCE) devices in the X.25 protocol stack.

LAPB is essentially HDLC in Asynchronous Balanced Mode (ABM). LAPB sessions can be established by either the DTE or the DCE. The station initiating the call is determined to be the primary, and the responding station is the secondary.

LAPB FRAME FORMAT

LAPB frames contain a header, encapsulated data, and a trailer. The diagram below shows the format of the LAPB frame and its relationship to the Packet Layer Protocol (PLP) packet and the X.21bis frame.


Protocol verification

Petri Nets :

The history of Petri Nets goes back to the work of Carl Adam Petri during his Ph.D. thesis in Germany in 1962. A Petri Net is a graphical and mathematical tool to verify systems and protocols. Many researchers have used Petri Nets to analyze and verify systems in different areas of science such as artificial intelligence, parallel processing systems, control systems, and numerical analysis. Petri Nets in their graphical form resemble flowcharts and network diagrams, while in their mathematical form they resemble algebra and logic.

Although Petri Nets have existed for many decades, they have recently been used to verify cryptographic and security protocols. Among the best-known research centres that have used these Nets are Cambridge, Aarhus, and Queen's Universities.

Finite State Machines (FSM):

Finite state machines mainly consist of a set of transition rules. In the traditional finite state machine model, the environment of the machine consists of two finite and disjoint sets of signals: input signals and output signals. Each signal has an arbitrary but finite range of possible values.
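In code, such a machine is just a table of transition rules. The toy sketch below (Python) models the states of a stop-and-wait sender from Unit 2; the state and signal names are invented for illustration:

# A finite state machine as a transition table: (state, input) -> next state.
TRANSITIONS = {
    ("wait_packet_0", "packet_from_network"): "wait_ack_0",
    ("wait_ack_0", "ack"): "wait_packet_1",
    ("wait_ack_0", "timeout"): "wait_ack_0",     # retransmit frame 0
    ("wait_packet_1", "packet_from_network"): "wait_ack_1",
    ("wait_ack_1", "ack"): "wait_packet_0",
    ("wait_ack_1", "timeout"): "wait_ack_1",     # retransmit frame 1
}

def run(events, state="wait_packet_0"):
    # Feed a sequence of input signals through the machine; an event
    # with no matching rule would indicate a protocol error.
    for e in events:
        state = TRANSITIONS[(state, e)]
    return state

print(run(["packet_from_network", "timeout", "ack"]))  # wait_packet_1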

