CMPT 408/765 – Computer Communication Networks
Instructor: Funda Ergun
Note Set #3 (September 21 and 23, 2009) taken by: Sergey Zhuravlev
Last Time: We were talking about routing on a hypercube. More particularly we were discussing
permutation routing where every node sends exactly one packet and receives exactly one packet.
Worst Case Performance: We are interested in calculating the worst case performance for a routing
algorithm. We note that under a bit-fixing algorithm the path length between two nodes equals their
Hamming distance (the number of bits in which the source and destination addresses differ) and that the
maximum Hamming distance in a d-dimensional hypercube is d. Worst case performance arises not from
the path length but rather from packets having to wait for one another. Packets can come together at
the same node but cannot use the same link at the same time. A FIFO (first in first out) queue is used to
determine which packets get to use a contended link.
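The path-length claim above is easy to check directly: a bit-fixing route takes one hop per differing bit, so its length is the Hamming distance of the source and destination addresses, at most d. A minimal sketch (the dimension d = 4 and the sample addresses are illustrative choices, not from the notes):

```python
def hamming_distance(s: int, t: int) -> int:
    """Number of hops a bit-fixing route takes from node s to node t:
    one hop per bit position in which the two addresses differ."""
    return bin(s ^ t).count("1")

d = 4            # dimension of the hypercube (illustrative choice)
N = 2 ** d       # number of nodes

# The longest bit-fixing route flips every bit, so the worst case is d.
assert max(hamming_distance(s, t) for s in range(N) for t in range(N)) == d

print(hamming_distance(0b0110, 0b1010))  # addresses differ in two bits -> 2
```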
Deterministic Algorithms vs. Random Algorithms: A deterministic algorithm will always produce the
same output given the same input. A randomized algorithm uses random bits and returns the correct
or desired answer with probability at least (1 - δ).
Question: Which is more powerful, a deterministic or a randomized algorithm?
Answer: A randomized algorithm is at least as powerful as a deterministic one. A deterministic
algorithm can be thought of as a restricted randomized algorithm: a randomized algorithm that “flips
zero coins” is deterministic. Thus every deterministic algorithm can be viewed as a randomized one, but
the reverse is not necessarily true.
Theorem for Worst Case Performance: Given any deterministic routing algorithm, there exists a
permutation that requires Ω((N/d)^(1/2)) steps to complete, where N is the number of nodes and
d = log(N) is the maximum degree.
What this theorem means is that for any deterministic oblivious routing algorithm one can always
construct a specific input that forces this worst-case running time. It is bad to have an algorithm that
“fails” on a specific input: if, for some reason, that input occurred often, then the algorithm’s
performance would be very poor. We want an algorithm that does not have this property.
Goal: Come up with an algorithm that will perform well on all permutations.
Valiant’s Algorithm: Suppose each node j wants to send a packet to destination t(j). We perform
the following two phases:
Phase 1: Node j picks a node m(j) uniformly at random and the packet is sent from j to m(j).
Phase 2: Node m(j) waits until time = 7d and then sends the packet to its true destination t(j).
Currently such algorithms are referred to as Two-Phase Routing. We also note that the routing algorithm
employed in both phases is the deterministic left-to-right bit-fixing algorithm.
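The two phases can be sketched in Python as follows. This only generates the paths; the per-link FIFO queues and the wait-until-time-7d barrier before phase 2 are not modeled, and all function names are my own:

```python
import random

def bit_fix_path(src: int, dst: int, d: int) -> list:
    """Left-to-right bit fixing: scan the address from the leftmost
    (most significant) bit down, flipping each bit where the current
    node differs from the destination. Returns the nodes visited."""
    path, cur = [src], src
    for i in range(d - 1, -1, -1):        # leftmost bit first
        if (cur ^ dst) >> i & 1:
            cur ^= 1 << i                 # fix bit i
            path.append(cur)
    return path

def valiant_route(dest: list, d: int, rng=random) -> list:
    """Two-phase routing: node j sends its packet to a uniformly random
    intermediate m(j) (phase 1), which then forwards it to the true
    destination dest[j] (phase 2). Returns the (phase-1 path,
    phase-2 path) pair for every packet."""
    N = 2 ** d
    m = [rng.randrange(N) for _ in range(N)]   # random intermediates
    return [(bit_fix_path(j, m[j], d), bit_fix_path(m[j], dest[j], d))
            for j in range(N)]
```

Note that both phases reuse the same deterministic bit-fixing routine; only the choice of m(j) is random.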
Theorem for Valiant’s Algorithm: With probability greater than (1 – 1/N), for ALL INPUTS all packets will
be delivered in time less than or equal to 14d steps. (We refer to the (1 – 1/N) as high probability since
it grows with the number of nodes).
Example: Suppose in some structure the worst permutation is one where each node sends to its
sequential neighbor. It is possible that the two phase routing algorithm will stumble onto this worst case
when the intermediate nodes are chosen, as is shown below.
Start j:            0  1  2  3  4  5  6  7
Intermediate m(j):  1  2  3  4  5  6  7  0
Destination t(j):   2  3  6  7  1  0  5  4
However, it is far more likely that given a bad permutation the injection of the intermediate nodes will
make things better, as is shown below.
Start j:            0  1  2  3  4  5  6  7
Intermediate m(j):  2  3  6  7  1  0  5  4
Destination t(j):   1  2  3  4  5  6  7  0
Next we explore how packets can delay other packets.
Observation: The only time a packet waits to cross an edge is when packets come together at a node.
Observation: If the paths of packets i and j come together (share an edge) and then diverge, they will
never join again.
Simplest Proof: Assume not; then packets i and j share an edge, their paths diverge at some node u,
and the paths later share an edge again.
Contradiction: When the paths diverge at u, the two packets flip different bits: say packet i flips bit a
and packet j flips bit b, with a to the left of b. Since bits are fixed left to right, packet j flipping bit b at u
means that bit a of u already matches packet j’s destination, so packet j never changes bit a again;
likewise packet i, having just fixed bit a, never changes it again. From this point on the two packets’
current addresses differ in bit a, so they can never occupy the same node again, let alone share
another edge.
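This lemma can also be verified exhaustively on a small cube. The sketch below (the names and the choice d = 3 are mine) checks that for every pair of bit-fixing routes, the edges they share form one consecutive stretch of each route, i.e. once the routes separate they never share an edge again:

```python
from itertools import product

def bit_fix_path(src, dst, d):
    """Left-to-right bit fixing, returning the list of nodes visited."""
    path, cur = [src], src
    for i in range(d - 1, -1, -1):
        if (cur ^ dst) >> i & 1:
            cur ^= 1 << i
            path.append(cur)
    return path

def edges(path):
    """Directed edges traversed by a route."""
    return list(zip(path, path[1:]))

def one_stretch(flags):
    """True iff the True entries of flags are consecutive."""
    s = "".join("1" if f else "0" for f in flags)
    return "0" not in s.strip("0")

d = 3
N = 2 ** d
for s1, t1, s2, t2 in product(range(N), repeat=4):
    e1 = edges(bit_fix_path(s1, t1, d))
    e2 = edges(bit_fix_path(s2, t2, d))
    shared = set(e1) & set(e2)
    # Shared edges must be consecutive on both routes: no rejoining.
    assert one_stretch([e in shared for e in e1])
    assert one_stretch([e in shared for e in e2])
print("checked all", N ** 4, "route pairs on the", d, "-cube")
```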
More properties of the directed hypercube:
Question: Suppose paths P1 and P2 intersect. Is it true that packet p1, which travels on P1, will wait for
packet p2, which travels on P2, or conversely that p2 will wait for p1?
Answer: No. The intersection of paths (sharing of edges) tells us nothing about when the packets
actually use those edges, so p1 and p2 may pass through them at different times, never waiting for each other.
Question: Can packet i wait for packet j multiple times?
Answer: Yes. Consider the following scenario. In time step 1 packet i waits for packet j at node A. Packet j
moves to node B. During time step 2 packet i moves to node B, packet j moves to node C and packet k
also arrives at node C. During time step 3 packet i arrives at node C, packet k travels to node D while
packet j waits at node C for packet k. Finally, in time step 4 packet j travels to D while packet i once again
waits for it.
Question: If packet pi waits for packet pj at time t1 is it possible that pj will wait for pi at some later time
t2 > t1?
Answer: No. We already showed that once paths diverge they do not rejoin so we only need to consider
one intersecting portion of the path. If packet pi waits for packet pj then packet pi will always be behind
packet pj in any future FIFO queue and hence pj can never wait for pi.
Question: Can bit fixing have loops?
Answer: No. A loop requires visiting the same node twice. Proof by contradiction: suppose node X is
visited twice. When the packet is at node X its current address is the address of X. When the packet
leaves X it flips some bit i. Since the bit-fixing algorithm never flips the same bit twice, every address
the packet holds from then on differs from the address of X in bit i, and hence the packet can never
visit X again.
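The no-loop property is also easy to confirm by brute force; a minimal sketch (d = 4 is an arbitrary small choice):

```python
def bit_fix_path(src, dst, d):
    """Left-to-right bit fixing, returning the list of nodes visited."""
    path, cur = [src], src
    for i in range(d - 1, -1, -1):
        if (cur ^ dst) >> i & 1:
            cur ^= 1 << i
            path.append(cur)
    return path

d = 4
N = 2 ** d
for s in range(N):
    for t in range(N):
        p = bit_fix_path(s, t, d)
        # A repeated node on the path would mean the route loops.
        assert len(p) == len(set(p))
print("all", N * N, "bit-fixing routes on the", d, "-cube are loop-free")
```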
We now explore network topologies other than the hypercube.
Star Network: The star network has one node at the center which is connected to all other nodes. As a
result it has a single point of failure (the center node).
Tree Network: The tree network is a minimally connected graph. It also has a single point of failure (the
root). Routing on the tree network is very easy since there is a unique path between every pair of nodes.
The bandwidth needs are different at every level of the tree. For example, with uniformly random
destinations, half of the messages from each leaf must be routed across the root to the other subtree.
Question: Assume only leaf nodes will send messages to each other and assume all edges have infinite
bandwidth. Calculate the traffic on an edge at level i. (This is for a complete binary tree).
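One way to attack this exercise is by brute force. The assumptions in the sketch below are mine, not the notes’: the tree is a complete binary tree of depth D with N = 2^D leaves, “an edge at level i” joins a node at depth i − 1 to its child at depth i, and every ordered pair of distinct leaves exchanges one message. Nodes use heap indexing (root = 1), so each edge can be identified by its child endpoint:

```python
from collections import Counter

D = 4                    # depth of the tree (illustrative choice)
N = 2 ** D               # number of leaves

def edges_to_root(leaf):
    """Edges on the path from a leaf to the root, each named by its
    child endpoint (heap indexing: node v has parent v // 2)."""
    out = set()
    v = leaf
    while v > 1:
        out.add(v)
        v //= 2
    return out

leaves = range(2 ** D, 2 ** (D + 1))
traffic = Counter()
for u in leaves:
    for w in leaves:
        if u != w:
            # The u-w path is the symmetric difference of the two root
            # paths (edges above the common ancestor cancel out).
            for e in edges_to_root(u) ^ edges_to_root(w):
                traffic[e] += 1

# An edge at level i separates N / 2**i leaves from the rest, so it
# should carry 2 * (N / 2**i) * (N - N / 2**i) ordered messages.
for child in range(2, 2 ** (D + 1)):
    i = child.bit_length() - 1        # level of the edge above `child`
    below = N >> i
    assert traffic[child] == 2 * below * (N - below)
```

Under these assumptions the count is largest at the level-1 edges next to the root, consistent with the remark that half of each leaf’s messages must cross the root.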