EE 382C Spring 2011 Homework 2 solutions

8.1
a). Minimize for latency: To optimize for latency, our primary concern is reducing the
number of hops (since serialization latency is constant). The greedy routing algorithm
chooses the minimum number of hops for every packet, and therefore minimizes hop count
overall.
Sidenote: This answer can change depending on the traffic pattern and injection rate. For
traffic patterns under which the greedy routing algorithm offers a lower maximum
throughput (such as tornado, which is the worst case, as explained on page 161), the latency
spent in buffers waiting for output channels may become significant and may make a more
load-balanced routing algorithm the better choice.
b). Under uniform random traffic, the greedy routing algorithm balances the load because it
does not attempt to re-balance the already uniformly distributed load. To see this, start from
any source and compute the load that source places on the network channels. For
instance, source 3 loads its adjacent channels by ½, the next channels by ¼, and so on.
Summing over all sources, every channel is loaded evenly, to one unit of load.
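This can be checked numerically. Below is a minimal Python sketch; the ring size k = 8 and the assumption that traffic to the diametrically opposite node is split evenly between the two directions are illustrative choices, not part of the problem statement.

```python
# Minimal sketch: per-channel load on a k-node ring under uniform random
# traffic with greedy (shortest-direction) routing. Illustrative assumption:
# traffic to the diametrically opposite node is split evenly between the
# two directions.
k = 8  # ring size (illustrative choice)

pos = [0.0] * k  # pos[c]: load on the channel from node c to node c+1
neg = [0.0] * k  # neg[c]: load on the channel from node c to node c-1

for src in range(k):
    for dst in range(k):
        if dst == src:
            continue                      # self-traffic uses no channels
        d_pos = (dst - src) % k           # hop count in the positive direction
        d_neg = (src - dst) % k           # hop count in the negative direction
        p = 1.0 / k                       # probability of this destination
        # fraction of this flow sent in the positive direction under greedy routing
        w_pos = 1.0 if d_pos < d_neg else (0.5 if d_pos == d_neg else 0.0)
        for hop in range(d_pos):
            pos[(src + hop) % k] += w_pos * p
        for hop in range(d_neg):
            neg[(src - hop) % k] += (1.0 - w_pos) * p

print(pos)  # every entry is 1.0 for k = 8: the load is already balanced
print(neg)  # same in the negative direction
```

For k = 8 every channel ends up carrying exactly one unit of load, matching the claim above.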
Random oblivious routing algorithms will attempt to randomize the already-randomized
traffic, and will therefore make packets take long routes unnecessarily, increasing the
channel load. Adaptive routing algorithms will perform no better than greedy, and they may
still make sub-optimal decisions by sending packets along the long route unnecessarily,
reducing the maximum throughput. For instance, with the adaptive routing algorithm
described on page 161, source node 1 sending to node 2 may choose the long route of 7 hops
because, at that particular moment, it had transmitted more packets clockwise than
counter-clockwise. This problem is intensified because this adaptive routing algorithm
disregards the hop count.
c). Considering a variety of permutation traffic patterns means that we should design
without prior knowledge of the traffic pattern. This hints that an adaptive routing algorithm
will probably be the best, since it will be able to adapt to different patterns. An adaptive
routing algorithm which takes into account the load (buffer occupancy) of the next hop as
well as the number of hops for each of the two routes will probably be a good choice
because it will divert traffic to less busy routes if there is severe enough contention in the
shortest route. If the number of hops is not taken into account, the routing algorithm will
tend to load the other channels (those on the long path) more heavily. Designing the details of an
adaptive routing algorithm is beyond the scope of this question.
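For concreteness, here is a hypothetical Python sketch of one such decision rule; the cost function, the weight alpha, and the buffer-occupancy inputs are illustrative assumptions only, not something specified by the question.

```python
# Hypothetical sketch of an adaptive routing decision on a ring: choose the
# direction whose estimated cost (hop count plus a congestion penalty based
# on the occupancy of the corresponding output buffer) is lower. The weight
# `alpha` and the occupancy inputs are illustrative assumptions only.
def choose_direction(src, dst, k, occupancy_pos, occupancy_neg, alpha=2.0):
    """Return +1 to route in the positive direction, -1 for the negative one."""
    hops_pos = (dst - src) % k
    hops_neg = (src - dst) % k
    cost_pos = hops_pos + alpha * occupancy_pos   # occupancy measured in packets
    cost_neg = hops_neg + alpha * occupancy_neg
    return +1 if cost_pos <= cost_neg else -1

# Example: on an 8-node ring, node 1 sending to node 2 keeps the 1-hop route
# unless the positive-direction buffer is much more congested than the other.
print(choose_direction(1, 2, 8, occupancy_pos=2, occupancy_neg=0))   # +1
print(choose_direction(1, 2, 8, occupancy_pos=9, occupancy_neg=0))   # -1
```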
8.9
Since the torus is symmetric, it is sufficient to look at a single dimension, i.e., a ring with k
nodes. We first determine the channel load that each node contributes. Due to symmetry,
the load on all channels in a given direction is identical, and we can compute it simply by
adding up the total load a single node contributes on all channels in that direction.
For uniform random traffic, every node sends 1/k units of load to each node in the network,
including itself. Sending to itself does not contribute any channel load. Sending to the (k/2)-1
nodes that are closer in positive direction contributes 1, 2, …, (k/2)-1 units of traffic in that
direction, respectively, each with probability 1/k. Likewise, sending to the (k/2)-1 nodes
which are closer in negative direction contributes 1, 2, …, (k/2)-1 units in that direction with
probability 1/k each. Finally, sending to the node at the opposite end of the ring contributes
k/2 units of load in positive direction with probability 1/k, whereas it would contribute k/2
units of load in either direction with probability 1/(2k) in the load-balanced case. Summing
up, we can compute the load on each channel in positive and negative direction for uniform
random traffic as
$$\gamma_{UR,pos} = \frac{1}{k}\cdot 0 + \frac{1}{k}\sum_{i=1}^{k/2-1} i + \frac{1}{k}\cdot\frac{k}{2} = \frac{1}{k}\cdot\frac{1}{2}\cdot\frac{k}{2}\left(\frac{k}{2}-1\right) + \frac{1}{2} = \frac{k}{8} + \frac{1}{4} = \frac{k+2}{8}$$

and

$$\gamma_{UR,neg} = \frac{1}{k}\cdot 0 + \frac{1}{k}\sum_{i=1}^{k/2-1} i = \frac{1}{k}\cdot\frac{1}{2}\cdot\frac{k}{2}\left(\frac{k}{2}-1\right) = \frac{k}{8} - \frac{1}{4} = \frac{k-2}{8},$$
respectively. For the load-balanced case, on the other hand, the channel load in either
direction is
$$\gamma_{bal} = \frac{1}{k}\cdot 0 + \frac{1}{k}\sum_{i=1}^{k/2-1} i + \frac{1}{2k}\cdot\frac{k}{2} = \frac{k}{8} - \frac{1}{4} + \frac{1}{4} = \frac{k}{8}.$$
Since throughput is inversely proportional to channel load, the throughput for the uniform
random case relative to capacity (i.e., relative to the throughput for the load-balanced case)
is given by
$$\Theta_{UR} = \frac{\gamma_{bal}}{\gamma_{UR,pos}}\,\Theta_{ideal} = \frac{k/8}{(k+2)/8}\,\Theta_{ideal} = \frac{k}{k+2}\,\Theta_{ideal}.$$
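As a quick sanity check, the closed forms above can be evaluated by direct summation for a concrete ring size; the short Python snippet below uses k = 8 as an illustrative choice.

```python
# Quick numeric check of the closed-form channel loads derived above,
# for an illustrative ring size.
k = 8
s = sum(range(1, k // 2))                     # sum of 1 .. k/2 - 1

gamma_ur_pos = s / k + (k / 2) / k            # expected (k + 2) / 8
gamma_ur_neg = s / k                          # expected (k - 2) / 8
gamma_bal    = s / k + (k / 2) / (2 * k)      # expected k / 8

print(gamma_ur_pos, (k + 2) / 8)              # 1.25 1.25
print(gamma_ur_neg, (k - 2) / 8)              # 0.75 0.75
print(gamma_bal, k / 8)                       # 1.0 1.0
print(gamma_bal / gamma_ur_pos, k / (k + 2))  # 0.8 0.8  (throughput relative to capacity)
```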
In the worst case, each node sends to the node at the opposite end of the dimension, and
thus contributes k/2 units of load in positive direction with probability 1. Thus, the channel
loads for this case are
$$\gamma_{WC,pos} = 1\cdot\frac{k}{2} = \frac{k}{2} \qquad\text{and}\qquad \gamma_{WC,neg} = 0.$$
Comparing again to the channel load for uniform random traffic with load-balanced routing,
throughput isnow
$$\Theta_{WC} = \frac{\gamma_{bal}}{\gamma_{WC,pos}}\,\Theta_{ideal} = \frac{k/8}{k/2}\,\Theta_{ideal} = \frac{1}{4}\,\Theta_{ideal}.$$
To improve load balance for the half-way case while keeping the routing decision
deterministic, one simple approach is to have nodes determine which direction to send such
traffic in based on their own location; i.e., all even nodes send half-way traffic in positive
direction, while all odd nodes send it in negative direction. For uniform random traffic, when
averaging over all nodes, this balances the half-way traffic across both directions; as the
remaining traffic is balanced to begin with, throughput in this case is the same as that for the
randomly balanced approach, and thus equals capacity.
For the case where each node sends to the node at the opposite end of the dimension, load
is also balanced across both directions. Consequently, channel load is cut in half, and
throughput is doubled compared to the case where all half-way traffic is sent in one
direction, achieving half of capacity. However, with the load now balanced across both
directions, this traffic pattern actually no longer represents the worst case for k>2. Instead,
we can restore load imbalance by having each node send its traffic (k/2)-1 hops in positive
direction (this is the “Tornado” traffic pattern). This yields channel loads of
$$\gamma_{WC,pos} = 1\cdot\left(\frac{k}{2}-1\right) = \frac{k-2}{2} \qquad\text{and}\qquad \gamma_{WC,neg} = 0,$$

and thus

$$\Theta_{WC} = \frac{\gamma_{bal}}{\gamma_{WC,pos}}\,\Theta_{ideal} = \frac{k/8}{(k-2)/2}\,\Theta_{ideal} = \frac{k}{4(k-2)}\,\Theta_{ideal}.$$
9.5
We have 1024 nodes choosing at random an intermediate node from the same pool of 1024
nodes. We can assume that each selection is random and independent. Therefore, the
number of source nodes that select a particular node as intermediate can be calculated
using the bins and balls model (binomial distribution). The probability (p) that a source node
will choose a particular intermediate node is 1/1024. The number of trials (n) is 1024.
Therefore, the expected value of this random variable is the expected number of source
nodes that choose any particular intermediate node and is
$$E = np = 1024 \cdot \frac{1}{1024} = 1.$$

With a high probability of $1 - \frac{1}{1024}$, all bins have at most $\frac{3\ln(1024)}{\ln(\ln(1024))} \approx 10.7405$ balls. This
is, with high probability, the maximum number of source nodes that choose the same intermediate
node. You can read more on binomial distributions and the proof of this maximum-load
bound at: http://pages.cs.wisc.edu/~shuchi/courses/787-F09/scribe-notes/lec7.pdf
Note that generating a comparable result using simulation experiments is also acceptable.
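For reference, a small Monte Carlo sketch in Python (the trial count is an arbitrary choice) compares the bound against simulated maximum bin loads.

```python
import math
import random
from collections import Counter

n = 1024
bound = 3 * math.log(n) / math.log(math.log(n))   # ~ 10.74

# Monte Carlo: each of the n source nodes picks an intermediate node uniformly
# at random; record the largest number of sources landing on the same node.
trials = 1000   # arbitrary number of repetitions, for illustration only
max_loads = []
for _ in range(trials):
    counts = Counter(random.randrange(n) for _ in range(n))
    max_loads.append(max(counts.values()))

print(f"high-probability bound: {bound:.4f}")
print(f"simulated max load: mean {sum(max_loads) / trials:.2f}, worst {max(max_loads)}")
```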
10.4
It is possible for adaptive routing to have zero dropping probability when routing upstream
in a Clos network, because it can avoid conflicts. For instance, adaptive routing can make
packets use different paths if there are other packets contending for the same path.
However, this is not true for oblivious routing. Since oblivious routing does not regard the
current network load and what other packets are requesting, it can easily route two packets
to the same output at the same time, causing a drop. Oblivious routing may balance load on
average, but it is prone to transient imbalance. When routing downstream, both routing
algorithms will have the same dropping probability because there is only one path to the
destination. Therefore, adaptive routing will have a lower dropping probability.
Assuming nearest-common-ancestor routing, dropping probability increases as the locality
of traffic decreases. That is because less local routes require packets to traverse more
channels to reach a switch from which they can route to their destination. Therefore, the
probability of contention along the path that results in packets being dropped increases.
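To illustrate the upstream argument, the following toy Python simulation counts packets dropped at the input-to-middle-stage links; the switch sizes, injection probability, and the specific oblivious/adaptive policies are simplifying assumptions, not taken from the text.

```python
import random

# Toy model of upstream (input stage -> middle stage) routing in a 3-stage
# Clos network. Illustrative assumptions: r input switches with n ports each,
# m middle switches, one packet per cycle per upstream link, and each input
# port injects a packet with probability p. Downstream contention is ignored,
# since only the middle-stage choice differs between the two algorithms.
n, m, r = 4, 4, 4          # ports per input switch, middle switches, input switches
p = 0.7                    # injection probability per input port
trials = 10_000

def upstream_drop_rate(adaptive: bool) -> float:
    dropped = offered = 0
    for _ in range(trials):
        for _ in range(r):                            # each input switch independently
            pkts = sum(random.random() < p for _ in range(n))
            offered += pkts
            if adaptive:
                # Adaptive: the switch spreads its packets over distinct middle
                # switches, so no upstream link carries more than one packet.
                dropped += max(0, pkts - m)
            else:
                # Oblivious: each packet picks a middle switch at random; packets
                # colliding on the same upstream link are dropped.
                links = [0] * m
                for _ in range(pkts):
                    links[random.randrange(m)] += 1
                dropped += sum(c - 1 for c in links if c > 1)
    return dropped / offered if offered else 0.0

print("oblivious upstream drop rate:", upstream_drop_rate(adaptive=False))  # > 0
print("adaptive  upstream drop rate:", upstream_drop_rate(adaptive=True))   # 0 when n <= m
```

With n ≤ m the adaptive policy never drops a packet upstream, while the oblivious policy shows a nonzero drop rate caused purely by transient collisions, matching the argument above.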