ECE/CS 356: Computer Network Architecture
Homework 1 – Solution

Note: If you showed no work but got the answer correct, I gave half credit. If you gave an incorrect answer and showed no work, you got no credit. If you showed work but gave an incorrect answer, you were graded accordingly.

Problem 1 (40 points: a) 15 points | b) 25 points)

Grading: If you gave an incorrect answer but showed work, you received -5, unless the error was particularly minor, in which case you received -2.

Assuming the following scenario:

a) Write down the expression for the end-to-end delay between end systems A and B (assume a very fast processor and no other network traffic).

Solution: The first item to identify is the formula for nodal delay (slide 43 in the Chapter 1 slide deck, or Section 1.4 of the 6th edition of the book):

d_nodal = d_proc + d_queue + d_trans + d_prop

The 'd' stands for "delay", and the terms are:

d_nodal = delay at a single node
d_proc = delay from processing the packet of data
d_queue = delay from queuing
d_trans = delay from transmitting the packet onto the link
d_prop = delay from propagation on the link (the time it takes the packet to traverse the link)

From the problem statement we know that d_proc = 0 (assume a very fast processor) and d_queue = 0 (assume no other network traffic). So we are left to figure out d_trans and d_prop.

Let's start with d_trans. From the diagram we see that each packet is L bits and the transmission rate is R, so d_trans = L/R sec. Similarly, the "distance between elements" (the length of the link between two routers) is D meters and the communication speed (the speed at which data travels) is v meters/sec, so d_prop = D/v sec. Thus d_nodal = (L/R + D/v) sec.

Observe that in order to get from end system A to end system B, a single packet must make 4 hops. We do not count a nodal delay at the last node because that node simply needs to receive the data. Thus, the end-to-end delay for a single packet is 4(L/R + D/v).

For multiple packets, the answer is not simply to multiply the value above by 3 (if that were the case, "high-speed" internet wouldn't exist). Instead, we need to consider that the packets can be pipelined; that is, after A sends packet 1, it immediately sends packet 2 and does not wait for packet 1 to reach B. The diagram below (attached) illustrates the situation. In the diagram each packet is a different color (1 is green, 2 is blue, 3 is red), and there are 6 states which depict the system over time as you go down. Please excuse my lack of drawing ability, and note that a packet is considered to be at a node when it lies on the link leading to that node (so in the last state all packets have reached B). This was done to make the diagram more readable; it may help to view the packets as jumping from node to node rather than from link to link. Also, as we move from one state to the next, a total of d_nodal seconds elapse.

Starting from the initial state, with no packets yet in the system (not depicted in the diagram; it would be the state before the very top state), we observe that a total of 6 d_nodal seconds elapse. So for three packets the total delay is 6(L/R + D/v).
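To make the arithmetic concrete, here is a minimal Python sketch of the pipelined, store-and-forward delay described above. The function name and the example values of L, R, D, and v are my own illustrative assumptions, not part of the problem statement; only the 4-hop path and the 3-packet count come from the problem.

```python
# Minimal sketch of the pipelined end-to-end delay (illustrative values only;
# the function name and the numbers below are assumptions, not from the assignment).

def end_to_end_delay(num_packets, num_hops, L, R, D, v):
    """Pipelined store-and-forward delay with d_proc = d_queue = 0.

    Each hop costs d_nodal = L/R + D/v; the first packet pays for every hop,
    and each additional packet adds one more d_nodal once the pipeline is full.
    """
    d_nodal = L / R + D / v                      # transmission + propagation per hop
    return (num_hops + num_packets - 1) * d_nodal

# Problem 1a setup: 4 hops between A and B.
L = 1500 * 8    # packet size in bits (example: a 1500-byte packet)
R = 10e6        # transmission rate in bits/sec (example: 10 Mbps)
D = 1000        # link length in meters (example)
v = 2e8         # propagation speed in meters/sec (example)

print(end_to_end_delay(1, 4, L, R, D, v))   # 4 * (L/R + D/v), one packet
print(end_to_end_delay(3, 4, L, R, D, v))   # 6 * (L/R + D/v), three pipelined packets
```

With these example numbers d_nodal ≈ 1.205 ms, so one packet takes about 4.8 ms and three pipelined packets about 7.2 ms, matching the 4 d_nodal and 6 d_nodal counts derived above.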
b) Generalize the previous expression for M packets and N routers, assuming that the transmission rate, distance, and communication speed are the same for each section of the network.

Solution: Here we are asked to generalize the solution for M packets and N routers. Let's arrive at the conceptual solution first, after which we can construct the formulaic solution. A single packet requires (N+1) d_nodal time. Make sure to distinguish between a router, an end point, and a node. A node is simply a machine; it can be a router or an end point. A router links at least two other nodes and cannot be an end point. An end point is a node designated as either the source or the destination of the packets.

For pipelining multiple packets, we can think of the total time as the sum of the time it takes to send the first packet to the destination, (N+1) d_nodal, and the time it takes to process the remaining packets, (M-1) d_nodal. It is M-1 because we have already sent the first packet. This is captured in step 4 of the diagram above (remember that a packet lying on the link leading into a node has arrived at that node). Thus the generalized formula is:

(M - 1 + N + 1) d_nodal = (M + N) d_nodal

Problem 2 (10 points: a) 8 points | b) 2 points)

Grading: Similar to Problem 1. If you did not show work you received half credit, even if you got the problem correct.

We need to transfer 700 Tbits of data (700×10^12 bits) between two points. We have two transmission systems available:
i) A new dedicated and direct link between the two end systems.
ii) A nice car to carry a 700 Tbit hard disk between the two points, which are separated by 100 miles. Our car can run at a steady speed of 50 miles per hour, so it requires two hours to make the trip.

Questions:

1) What is the transmission rate of our car?

Solution: The transmission rate is simply the amount of data divided by the time it takes to transmit that data. However, it is important to note that there are two metrics for converting between different units of bits: decimal and binary. We accepted both, so we have:

• 700 Tbits / 2 hr = 350 Tbits / 3600 sec ≈ 97,222 Mbits/sec (using the decimal metric)
• 700 Tbits / 2 hr = 350 Tbits / 3600 sec ≈ 101,945 Mbits/sec (using the binary metric)

If you are interested in the differences and common confusions between these two metrics, please visit the International System of Units website.

2) Under what transmission rate is the network link more efficient than the car? Express the transmission rate in Mbps. (Do not consider gas cost or other factors; consider only throughput.)

Solution: The purpose of this question was to make you rethink traditional methods of data transfer. The network link would only be more efficient if its transmission rate were greater than 97,222 Mbits/sec (decimal metric) or 101,945 Mbits/sec (binary metric), which is orders of magnitude faster than traditional internet speeds.
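As a quick check of the figures used in both parts of Problem 2, here is a small Python sketch of the conversion under both prefix conventions. The variable names are my own; the input values come straight from the problem statement.

```python
# Quick check of the car's "transmission rate" under both prefix conventions.
# Variable names are illustrative; the inputs come from the problem statement.

data_bits_decimal = 700e12         # 700 Tbits, decimal metric (10^12 bits per Tbit)
data_bits_binary = 700 * 2**40     # 700 Tbits, binary metric (2^40 bits per Tbit)
trip_seconds = 2 * 3600            # 100 miles at 50 mph = 2 hours

rate_decimal = data_bits_decimal / trip_seconds / 1e6     # decimal Mbits/sec (10^6 bits)
rate_binary = data_bits_binary / trip_seconds / 2**20     # binary Mbits/sec (2^20 bits)

print(round(rate_decimal))   # ~97222 Mbits/sec
print(round(rate_binary))    # ~101945 Mbits/sec
```

Any link rate above these figures makes the dedicated link the faster option, which is the threshold used in part 2.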
Problem 3 (25 points)

Grading: Full credit unless you said something completely wrong.

What advantages does a circuit-switched network offer over a packet-switched network?

Solution: Some of the advantages of circuit-switched networks over packet-switched networks:

• They offer dedicated and guaranteed links for their users.
• There is no data/packet loss.
• They do not require network control mechanisms (e.g., TCP's rate and flow control mechanisms) in order to guarantee in-order packet delivery.

The main disadvantage of circuit-switched networks is that they do not allow for efficient use of resources. If a user is not sending data, then his/her link is under-utilized, and total network throughput cannot be optimized the way it can be in a packet-switched network.

Problem 4 (25 points: 5 points each)

Grading: If you did not explain why a statement was True or False but the answer was correct, you were deducted 2 points each. If your justification was incorrect, you received no credit.

True or False, with brief justification:

a) 5 Mbps is equal to 300 Mb per minute.

A. True and False: You could have answered either way and received credit. For True, simply note that 300 Mb/minute = 300 Mb/60 sec = 5 Mb/sec. For False, think about what 300 Mb/minute actually means. It is intuitive to read it as 5 Mb being sent every second, but it could be that in the first 59 seconds no data is sent and then in the very last second 300 Mb are sent. That would average 5 Mb/sec, but it does not mean that 5 Mb were actually sent during every second of that minute.

b) Client-server architecture is always a more efficient and better option than P2P architecture.

B. False: There are many possible justifications. Basically, the choice of architecture depends on the application, as there are pros and cons to each one. P2P architectures are much more scalable, while client-server architectures offer centralized control. Any justification along those lines was accepted.

c) UDP guarantees data transfer (no packet loss).

C. False: UDP does not guarantee data transfer. The choice to use UDP comes at the cost of possible packet loss and out-of-order delivery, but your data rate will not be throttled by UDP as it will be by TCP. TCP, on the other hand, guarantees data transfer and even in-order delivery.

d) HTTP is a Transport layer protocol.

D. False: HTTP is an application layer protocol. See slide 59 in the Chapter 1 slides.

e) TCP guarantees data transfer (no packet loss).

E. True: TCP does guarantee data transfer through rather intricate mechanisms such as rate and flow control. We will be covering these mechanisms in more detail soon.