Efficient Streaming for Delay-tolerant Multimedia Applications
Saraswathi Krithivasan
Advisor: Prof. Sridhar Iyer
Problem Definition
Given:
• A multicast tree with source S serving clients Ci
• Resources such as transcoders, layer encoders, and streaming servers
• Each Ci specifies two requirements:
  • a minimum acceptable rate αmin, and
  • δi, the time it is willing to wait, once the connection is established, for the start of playout of the content
• Content has playout duration T and base encoding rate Γ
Assumptions:
The topology is known with link bandwidths remaining
constant over a given prediction interval P.
Problem Definition
• Leverage the delay tolerance δi specified by each Ci to provide (for each Ci):
  • continuous playout at a rate ri such that the ri across all Ci are maximized, and
  • each ri greater than or equal to αmin.
Problem space
Our context
Key assumptions
• The network topology is known and static
• a subscription-based network
• different streaming sessions may have different clients
participating
• Buffers are available at all network nodes
• unlimited buffers for the analysis of the cases considered
• Buffer management issues including the case of memory
constrained client devices are addressed
• Client requirements are known.
• The minimum rate requirement αi of Ci is used mainly as an
admission control parameter
• When αmin is not defined by the client, a minimum rate (say 128
kbps) is assumed to be required.
• End-to-end delays are negligible.
Parameters defining dimensions of the problem:
• Bandwidth: Static / Varying
• Service types: Scheduled / On-demand
• Resource used: Transcoder/streamer or Layered encoder/streamer
Contributions: Scheduled streaming, Static bandwidth case
Contributions: Scheduled streaming, Varying bandwidth case
Contributions: On-demand streaming, Static bandwidth case
Node architectures
• Scheduled streaming
• Source Node architecture
• Relay Node architecture
• Client Node architecture
• On-demand streaming
• Source Node architecture (with data transfer capabilities)
• Streaming Relay Node architecture (with streaming capabilities)
Buffer: Resource management issues
Buffer requirements
• At a source/relay node
  • Bmax(ni) = Σp (bip − bjp) × P, where the sum runs over prediction intervals p and P is the prediction interval duration
• At a client node
• When the last link to the client supports the optimal delivered rate at the client, buffers can be moved up to the upstream relay node.
• When the last link is the weakest link in the client's path, adequate buffers must be available at the client to support playout at the optimal rate.
• The size of the buffer and the duration over which it is maintained depend on the δ value of the client.
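A minimal sketch of the relay-node buffer bound above, assuming bip and bjp are the per-interval rates on the node's incoming and outgoing links and that only positive backlog accumulates at the node; the function and variable names are illustrative, not thesis code:

```python
# Sketch: peak buffer needed at a relay node, following the bound
# Bmax(ni) = sum_p (b_in_p - b_out_p) * P, accumulating only positive backlog.
def relay_buffer_requirement(b_in, b_out, P):
    """b_in, b_out: per-prediction-interval rates (kbps) into/out of the node.
    P: prediction interval duration (seconds). Returns peak backlog in kbits."""
    backlog, peak = 0.0, 0.0
    for bi, bo in zip(b_in, b_out):
        backlog = max(0.0, backlog + (bi - bo) * P)  # data parked at the node
        peak = max(peak, backlog)
    return peak

# Example: incoming link faster than outgoing link for the first two intervals.
print(relay_buffer_requirement([384, 384, 256], [256, 256, 384], P=20))  # 5120.0
```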
Effect of δ on buffer requirement
When there is no shared link constraint:
• increasing the δj value of a client Cj improves its delivered rate
• the δj value for which the client receives the stream at Γ is denoted by đj
• beyond đj there is no improvement in delivered rate; the stream is buffered at the client till playout
• when δj takes values between đj and (đj + T), the mechanism moves from streaming to partial downloading, to complete downloading at (đj + T)
• when δj takes values greater than (đj + T), it is equivalent to a complete download played back at a convenient time; the entire content is stored at the client for a time (δj − (đj + T))
• this leads to the interesting notion of "residual delay tolerance" đR = (δj − đj), as in the sketch below
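The regime boundaries described above can be summarized in a short sketch, assuming đj (the smallest δ for which the client already receives the stream at Γ) is known; a minimal illustration, not the thesis mechanism:

```python
# Sketch of the delivery regimes: streaming, partial download, complete download,
# and complete download plus storage, driven by delta relative to d_hat (= đj) and T.
def delivery_mode(delta, d_hat, T):
    residual = delta - d_hat                  # "residual delay tolerance" đR
    if delta <= d_hat:
        return "streaming", max(residual, 0)
    if delta < d_hat + T:
        return "partial download", residual   # part of the content arrives before playout
    if delta == d_hat + T:
        return "complete download", residual
    return "complete download + storage", residual  # stored for delta - (d_hat + T)

print(delivery_mode(delta=40, d_hat=30, T=60))   # ('partial download', 10)
print(delivery_mode(delta=100, d_hat=30, T=60))  # ('complete download + storage', 70)
```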
To summarize…
• Bandwidth: Static / Varying
• Service types: Scheduled / On-demand
• Resource used: Transcoder/streamer or Layered encoder/streamer
Other interesting research problems…
• In the VoD scenario, the following problems pose interesting challenges:
  • handling new requests during a scheduled streaming session
  • handling multiple servers with independent or partially overlapping contents
• The on-demand streaming, varying bandwidth case needs to be studied in depth
• In-depth analysis of running delay-tolerant applications on wireless devices
• Exploring possibilities of extending declarative networking frameworks to facilitate the implementation of our proposed algorithms
• Analyzing the problem from a business perspective
Future work:
A planning tool for CSPs
Selected publications
• S. Krithivasan, S. Iyer, Strategies for Efficient Streaming in Delay-tolerant Multimedia Applications, IEEE-ISM 2006, San Diego, USA, December 2006.
• A. Th. Rath, S. Krithivasan, S. Iyer, HSM: A Hybrid Streaming Mechanism for Delay-tolerant Multimedia Applications, MoMM 2006, Yogyakarta, Indonesia, December 2006.
• S. Krithivasan, S. Iyer, Enhancing QoS for Delay-tolerant Multimedia Applications: Resource Utilization and Scheduling from a Service Provider's Perspective, Infocom 2006, Barcelona, Spain, April 2006.
• S. Krithivasan, S. Iyer, Maximizing Revenue through Resource Provisioning and Scheduling in Delay-tolerant Multimedia Applications: A Service Provider's Perspective, Infocom 2006, Barcelona, Spain, April 2006.
• S. Krithivasan, S. Iyer, Enhancing Quality of Service by Exploiting Delay Tolerance in Multimedia Applications, ACM Multimedia 2005, Singapore, November 2005.
• S. Krithivasan, S. Iyer, To Beam or to Stream: Satellite-based vs. Streaming-based Infrastructure for Distance Education, EdMedia, World Conference on Educational Multimedia, Hypermedia & Telecommunications 2004, Lugano, Switzerland, June 2004.
• S. Krithivasan, S. Iyer, Mechanisms for Effective and Efficient Dissemination of Multimedia, Technical Report, Kanwal Rekhi School of Information Technology (KReSIT), IIT Bombay, September 2004.
Thank you 
Extra slides
Overview of HSM
• Components
• Selection of Streaming Point (SP) - the
relay node at which the streaming server
is placed.
• Delivered stream rate calculation
• Cache management at SP
[Figure: HSM vs. PSM: server, streaming point (SP) with cached contents, transcoding, and client]
Back
Thumb rules for placement of streaming
servers
• Find the bandwidth of the weakest link in the distribution network, bmin
• When rj <= bmin:
  • the weakest link occurs in the access network
  • choose the node having the most outgoing links in the distribution network in the path of Cj
• When rj > bmin:
  • bmin may be the weakest link in clients' paths
  • choose the node at the downstream end of the bmin link, so future requests for the same content can be served without incurring delays
(A sketch of this rule follows below.)
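As a rough sketch of the thumb rules above (the function and parameter names, including node_after_bmin, are hypothetical helpers introduced only for illustration):

```python
# Sketch of the streaming-server placement thumb rules.
def choose_streaming_point(path, out_degree, b_min, r_j, node_after_bmin):
    """path: relay nodes on C_j's path through the distribution network;
    out_degree: outgoing-link count per relay node; b_min: weakest
    distribution-network link bandwidth; r_j: rate required by C_j."""
    if r_j <= b_min:
        # Weakest link lies in the access network: choose the relay on C_j's
        # path with the most outgoing links in the distribution network.
        return max(path, key=lambda n: out_degree[n])
    # Otherwise b_min may be the weakest link on clients' paths: place the
    # server just downstream of the b_min link so that future requests for the
    # same content do not have to cross that link again.
    return node_after_bmin

# e.g. a chain S -> R1 -> R2 where R1 has 3 outgoing links and R2 has 1:
print(choose_streaming_point(["R1", "R2"], {"R1": 3, "R2": 1},
                             b_min=256, r_j=192, node_after_bmin="R2"))  # R1
```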
Back
Algorithm: find_max_clients
• The algorithm involves the following two steps:
  1. Choosing an appropriate streaming point
  2. Finding the number of clients serviced and the delivered rates at the clients
Back
Illustration of impact of δ
• Γ = 512 kbps; T = 60 min; transcoder at R1
[Figure: example topology with source S, relays R1 and R2, and clients C1 and C2; link bandwidths between 128 and 512 kbps; access links of 192 kbps and 128 kbps]
• Case 1: δ1 = 0, δ2 = 0; Case 2: δ1 = 30, δ2 = 0; Case 3: δ1 = 30, δ2 = 0
Back
Illustration of impact of shared link
• Γ = 512 kbps; T = 60 min; transcoder at R1
[Figure: example topology with source S, relays R1 and R2, and clients C1 and C2 whose paths share a link; link bandwidths between 128 and 480 kbps]
• δ1 = 15; δ2 = 60
• C2's delivered rate is compromised from 512 to 480 kbps due to the shared link constraint
Back
Problem 1(a): Finding delivered rates
• Optimization approach
• Insights from analysis:
  • Theorem 4.1: rmax,i = bi (1 + δp/T)
  • Theorem 4.2: γmax,k = bw (1 + δk/T) if bw (1 + δk/T) < Γ; γmax,k = Γ otherwise
• Algorithm find_opt_rates_I
  • two-pass iterative algorithm
  • proven optimal
  • complexity: O(DC), where D is the number of levels and C is the number of clients
• For a given transcoder placement option, finds:
  • stream rates through links
  • delivered rates at clients
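A one-line realization of the Theorem 4.2 bound above (variable names are illustrative); the example reproduces the 153.6 kbps figure from the "using minimum bandwidth" illustration later in the deck:

```python
# Sketch of the delivered-rate bound of Theorem 4.2.
def max_deliverable_rate(b_w, delta_k, T, gamma):
    """b_w: weakest-link bandwidth on C_k's path (kbps); delta_k: delay
    tolerance (s); T: playout duration (s); gamma: base encoding rate (kbps)."""
    return min(b_w * (1 + delta_k / T), gamma)

# 128 kbps weakest link, 20 s of delay tolerance, 100 s playout -> 153.6 kbps.
print(max_deliverable_rate(128, 20, 100, 512))  # 153.6
```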
Back
Algorithm: find_opt_rates_I
• Two iterations:
• Bottom to top – to find maximum stream rate
that can flow through each link
• Top to bottom – to find the actual stream rate
flowing through each link
find_opt_rates_I finds the optimal delivered rates at clients for any given transcoder placement option
Algorithm: find_opt_rates_I
Pass 1
function find opt rates I (transinfo);
Initialize stream rate [numlinks]
% stream rate is a vector of dimension numlinks %
Initialize optim rates [numclients]
% optim rates is the output vector of dimension numclients %
% start pass 1 %
for each link lj % Starting from the leaf nodes of the tree %
if link is directly attached to a client
max stream rate = find max deliverable rate (Ci)
% Calculated using Theorem 4.2 %
else
max rate thru link = link bandwidth × (1 + δj/T);
if there is a transcoder at node ni
% at the terminating end of the link %
max stream rate = max(stream rates flowing through the outgoing links from ni);
else
max stream rate = min(stream rates flowing through the outgoing links from ni);
end
end
stream rate [lj] = min(max rate thru link, max stream rate);
% Apply Theorem 4.1 to assign appropriate rate to link %
end
Next
Algorithm: find_opt_rates_I
Pass 2
% start pass 2%
for each client C
for link L in C’s path starting from S
if L is the first link in path
max rate = stream rate [L]
% Traverse path from source to client and assign stream
rate of first link to max rate %
else
if stream rate [L] is greater than max rate
actual rate [L] = max rate;
else
actual rate [L] = stream rate [L];
max rate = stream rate [L];
end
end
optim rates [C] = actual rate [last link in C’s path]
end
Back
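A compact Python sketch of the two-pass idea above, assuming "anywhere" transcoding (a transcoder available at every relay) and a per-link effective delay tolerance; the data-structure names (children, bw, delta, client_of_leaf) are illustrative and this is a simplification of the thesis algorithm, not its code:

```python
# Pass 1 (bottom-up): the maximum rate each link could carry, capped by
# Theorems 4.1/4.2. Pass 2 (top-down): the rate can only shrink along a path;
# the delivered rate at a client is the rate on its last link.
def find_opt_rates_sketch(children, bw, delta, T, gamma, client_of_leaf):
    """children: link -> list of downstream links (leaf links map to []);
    bw: link -> bandwidth (kbps); delta: link -> effective delay tolerance (s);
    client_of_leaf: leaf link -> client id."""
    stream, delivered = {}, {}

    def pass1(link):
        cap = min(bw[link] * (1 + delta[link] / T), gamma)
        if children[link]:   # transcoder at the node: carry the largest child need
            cap = min(cap, max(pass1(ch) for ch in children[link]))
        stream[link] = cap
        return cap

    def pass2(link, upstream):
        actual = min(stream[link], upstream)
        if children[link]:
            for ch in children[link]:
                pass2(ch, actual)
        else:
            delivered[client_of_leaf[link]] = actual

    roots = [l for l in children if all(l not in c for c in children.values())]
    for root in roots:
        pass1(root)
        pass2(root, gamma)
    return delivered
```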
Redundancy rule 1
[Figure: a string of nodes n1 … nm with link bandwidths of 512, 384, 384, and 256 kbps]
If transcoders are needed in a string n1, n2, . . . nm, it is
sufficient to place a transcoder at n1.
Redundancy rule 2
For each link li
if li is serving equi-deliverable rate clients,
assign equi-deliverable rate to li
Else
assign maximum rate supported by link
without loss
• If l1 supports 512 kbps, the transcoder at R2 is redundant.
• Even if l1 supports 512 kbps, assigning 512 kbps to l1 would introduce a redundant transcoder at R2.
Algorithm: find_opt_rates_NR
Back
Optimality claims
We have the following two claims with respect to the
optimal placement of transcoders:
1. Optimality of O: O is the minimal set required for providing the best delivered rates to clients (the redundancy rules identify all instances of redundant transcoders).
2. Optimality of find_opt_rates_NR: the set of transcoders enabled by find_opt_rates_NR when the AT placement option is given as input is the optimal set, and hence is the optimal placement option for providing the best rates to clients (the algorithm applies all the redundancy rules).
Back
Properties of O-set
• O provides the best delivered
rates to clients
• removing any transcoder from O
results in reduction of the
delivered rate for at least one
client.
• placing transcoders at nodes not
in O in addition to O, does not
improve the delivered rate at any
client.
• removing one or more transcoders
from O and placing an equal
number at any node not in O does
not improve the delivered rate
for any client. i.e., even if O is not
unique, any other combination of
nodes would not provide higher
delivered rates at the clients.
Back
Algorithm: find_opt_rates_NR
Pass 2
for each client C
for each link L in C’s path
if link L serves equi-deliverable rates clients
% Apply redundancy rule due to Lemma 5.3 %
if link L is first link in client’s path
stream rate [L] = delivered rate at client;
else
stream rate [L] =
min(stream rate of previous link, delivered rate at client)
end
else % Apply redundancy rule Corollary 5.2 %
max deli rate = stream rate [L1]
% Traverse path from source to client and assign stream rate of first link to max deli rate %
if stream rate [L] is greater than max deli rate
actual rate [L] = max deli rate;
else
actual rate [L] = stream rate [L];
max deli rate = stream rate [L];
end
end
end
optim rates [C] = actual rate [last link in C’s path]
end
Back
Example: optimal set O = {4, 5, 6}; q = 2; optimal combination {2, 6}; node 2 is the super node. In Case (ii), the dependent set is {4, 5, 6} and the useful set is {2, 4, 5, 6}.
Back
• Optimization-based approach
More definitions
• Super node: A common ancestor (other than S)
to two or more nodes in the optimal set that
does not have a transcoder.
• Eligible set: This set contains all the relay
nodes in the network.
• Useful set: The set of nodes that includes the
optimal set and the super nodes. This set is
denoted by U.
Next
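A small sketch of building the useful set U from the definitions above, under one plausible reading: a super node is any non-source ancestor of two or more optimal-set nodes that carries no transcoder. The function and example topology are illustrative; the example is shaped to match the slide's O = {4, 5, 6}, U = {2, 4, 5, 6}:

```python
# Useful set U = optimal set O plus its super nodes.
def useful_set(parent, optimal_set, source):
    """parent: node -> parent node (the source maps to None)."""
    cover = {}                       # node -> how many optimal-set nodes it covers
    for node in optimal_set:
        p = parent.get(node)
        while p is not None and p != source:
            cover[p] = cover.get(p, 0) + 1
            p = parent.get(p)
    supers = {n for n, c in cover.items() if c >= 2 and n not in optimal_set}
    return set(optimal_set) | supers

print(useful_set({2: 1, 4: 2, 5: 2, 6: 2, 1: None}, {4, 5, 6}, source=1))
# {2, 4, 5, 6}
```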
• Optimal algorithm considering
• combinations from the eligible
set
• combinations from the useful set
Back
• Greedy algorithms
•
Max-gain algorithm
• Given a limited number of transcoders, where can
each of these be inserted (among the useful nodes)
in turn such that the gain from each inserted
transcoder is maximized?
•
Min-loss algorithm
• Given transcoders at all the useful nodes, which of
these transcoders can be removed (to use just the
given number of transcoders) such that the loss is
minimized?
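A sketch of the Max-gain idea above: with q transcoders available, add them one at a time at whichever useful-set node most improves the total delivered rate. Here evaluate(placement) is a placeholder assumed to return {client: delivered rate} for a given set of transcoder nodes (e.g. by running find_opt_rates_I); this is not the thesis code:

```python
def max_gain_placement(useful_nodes, q, evaluate):
    placed, remaining = set(), set(useful_nodes)
    best_rates = evaluate(placed)                  # delivered rates with no transcoder
    for _ in range(q):
        if not remaining:
            break
        def gain(node):
            rates = evaluate(placed | {node})
            return sum(rates.values()) - sum(best_rates.values())
        candidate = max(remaining, key=gain)
        if gain(candidate) <= 0:                   # no remaining node improves any client
            break
        placed.add(candidate)
        remaining.remove(candidate)
        best_rates = evaluate(placed)              # baseline for the next round
    return placed, best_rates
```

Min-loss mirrors this: start with transcoders at every useful node and repeatedly remove the one whose removal loses the least.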
• Performance evaluation
Comparison of greedy algorithms with
optimal-U algorithm
Greedy algorithms perform close to the optimal algorithm; the number of affected clients varies.
Thumb rules:
• For very small or very large values of q, the opt_U algorithm can be used
• When q < |O|/2, findplacement-Max-gain is preferred
• Else findplacement-Min-loss is preferred
Next
Comparison of greedy algorithms when q = |O|/4 and q = 3|O|/4
[Figure: two panels, number of transcoders placed q = |O|/4 and q = 3|O|/4]
Back
Algorithm: find_opt_placement
With transcoder only at S, run find opt rates I to
find the base deli rates at clients
if num trans == 1
% when only one transcoder is available %
for each node in useful set
place a transcoder and find the delivered rates at clients;
find the node that maximizes the improvement in delivered rate
compared to the base deli rates
end
return trans placement, deli rates
else
% when multiple transcoders are available %
if option considered == 1
max num combns = factorial(E) / (factorial(n) × factorial(E − n));
else
max num combns = factorial(U) / (factorial(n) × factorial(U − n));
end
for count = 1: max num combns
for each combination of nodes
find delivered rates at the clients
for each client find the improvement compared to the
base delivered rates
choose combination that provides maximum improvement
new transinfo = chosen placement of transcoders
end
trans placement = new transinfo;
deli rates = delivered rates with new transinfo; num combns = combinations considered
end
Back
Optimization approach for
varying bandwidth case
Minimize Σk=1..m Σp∈P (Γ − rlk)², where Γ is the base encoding rate and rlk is the rate flowing through the last link ll in client k's path, over each prediction interval in P.
Constraints:
• Rate constraint:
  rij <= Γ, ∀i, i = 1,2,…,n, ∀j, j = 1,2,…,P.
• Layering constraint:
  ri−1,j >= rij, ∀i, i = 1,2,…,n, ∀j, j = 1,2,…,P.
• Delay tolerance constraint:
1. Constraints for loss-free transmission over a link :
• For a given link at the end of its active period, buffer should be
empty.
• At any prediction interval over the active period of a link, the
available data capacity should be greater than or equal to the
required data capacity.
2. Constraint for loss-free transmission to a client:
• For a given client, for each prediction interval over the playout
duration, for each link in its path, the available data capacity should
be greater than or equal to the required data capacity.
Back
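The loss-free conditions above compare, per prediction interval, the cumulative available data capacity (Aij) against the cumulative required data capacity (Dij). A minimal sketch of that check, with illustrative names; the example uses Case 1 of the "average bandwidth" illustration, where a constant 345.6 kbps stream already starves in the first interval:

```python
def loss_free(link_bw, stream_rates, P):
    """link_bw, stream_rates: per-prediction-interval bandwidths and stream
    rates (kbps); P: interval duration (s). True if no interval starves."""
    available = required = 0.0
    for bw, rate in zip(link_bw, stream_rates):
        available += bw * P          # A_ij: what the link can carry so far
        required += rate * P         # D_ij: what the stream needs so far
        if available < required:
            return False
    return True

# Case 1 bandwidths on link 2 vs. a 345.6 kbps stream: 128 < 345.6 in PI 1,
# so delivery at that constant rate is not loss-free.
print(loss_free([128, 256, 448, 512, 256, 128], [345.6] * 5 + [0], P=20))  # False
```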
Total data sent from S over T = sent data capacity = S15 = (ra1 + ra2 + … + ra5) × P
Total data available over the active period of l1 = available data capacity = A16 = (b11 + b12 + … + b16) × P
Total data available over the active period of l2 = available data capacity = A27 = (b21 + b22 + … + b27) × P
Total data required over the active periods of l1 and l2 = required data capacity = D15 = D25 = S15 = (ra1 + ra2 + … + ra5) × P
Back
Example to illustrate various solution
approaches
Number of nodes N = 7; number of links L = 6; number of clients C = 3;
Playout duration T = 100 seconds; prediction interval duration P = 20 seconds;
Delay tolerance δ1 of C1 = 20 seconds; δ2 of C2 = 40 seconds; δ3 of C3 = 20 seconds;
Number of prediction intervals = (T + max(δi)) / P = 140/20 = 7

Bandwidths (kbps) over prediction intervals:
Link     PI1   PI2   PI3   PI4   PI5   PI6   PI7
Link1    384   384   384   256   448   512   512
Link2    128   256   448   512   256   128   384
Link3    256   128   128   192   128   256   128
Link4    384   384   320   192   320   128   128
Link5    128   192   128   384   128   128   192
Link6    384   256   384   128   192   256   192

[Figure: example topology with 7 nodes and 6 links serving clients C1, C2, and C3]
Back
Illustration of using minimum
bandwidth
• Bandwidths available on the links are:
• Link 1: 256 kbps;
• Link 2 - Link 6: 128 kbps.
• Using algorithm find_opt_rates_I we find the
delivered rates at the clients:
• C1 : 153.6 kbps; C2 : 153.6 kbps; C3 : 153.6 kbps;
Links are under-utilized over most prediction intervals
Back
Illustration of using average bandwidth
• Consider three instances of predicted bandwidths on link 2.
• Predicted bandwidths on link 2 :
• Case 1: 128, 256, 448, 512, 256, 128 over prediction intervals 1,
2, 3, 4, 5, and 6 respectively
• Case 2: 512, 128, 256, 448, 256, 128 over prediction intervals
1, 2, 3, 4, 5, and 6 respectively.
• Case 3: 128, 256, 128, 256, 448, 512 over prediction intervals
1, 2, 3, 4, 5, and 6 respectively.
• The average rate is the same in all three cases, 288 kbps.
• Using Theorem 4.2, the stream rate that flows through the
link is computed as: (288 + (288 * 20/100)) = 345.6 kbps.
Illustration of using average bandwidth
[Table: for each of the three cases of predicted bandwidths on link 2, per prediction interval: the stream rate (345.6 kbps in the sending intervals), the predicted bandwidth, data sent, data buffered, and the cumulative available (Aij) vs. required (Dij) data capacities]
Loss-free delivery is not guaranteed when the stream rate is computed from the average bandwidth.
Back
Illustration of I-by-I approach
I-by-I algorithm
The δ values of the clients are applied equally across the prediction intervals spanning the playout duration:
Initialize AT option for transcoder placement;
for each PI
read the predicted bandwidths into matrix M,
representing the topology;
setup the data structures;
for each client
delta applied =
delta of client / number of PIs in playout duration
end
playout duration = prediction interval duration;
[stream rates, opt deli rates] =
find_opt_ rates (transinfo);
delivered rates(:, PI) = opt deli rates;
end
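A minimal Python rendering of the I-by-I idea in the pseudocode above: each prediction interval is solved in isolation, with every client's delay tolerance spread evenly across the prediction intervals of the playout duration. find_opt_rates is a placeholder for the static-bandwidth algorithm and the structure names are illustrative:

```python
def i_by_i(pred_bw_per_pi, client_deltas, P, num_pi, find_opt_rates):
    """pred_bw_per_pi[pi]: predicted link bandwidths for that interval;
    client_deltas: client -> delay tolerance (s); P: interval duration (s)."""
    per_pi_delta = {c: d / num_pi for c, d in client_deltas.items()}
    delivered = []
    for pi in range(num_pi):
        # each interval is treated as its own session of duration P
        delivered.append(find_opt_rates(pred_bw_per_pi[pi], per_pi_delta, P))
    return delivered

# e.g. delta_1 = 20 s, delta_2 = 40 s, delta_3 = 20 s over 5 intervals gives
# per-interval tolerances of 4 s, 8 s and 4 s, as in the example that follows.
```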
• delta applied for C1 = (20/5) = 4 seconds
• delta applied for C2 = (40/5) = 8 seconds
• delta applied for C3 = (20/5) = 4 seconds
Delivered rates (kbps):
PI       1        2        3        4        5       Average
C1     133.12   266.14   322.8    199.68   266.24    237.6
C2     133.12   133.12   133.12   199.68   133.12    146.4
C3     133.12   133.12   133.12   133.12   133.12    133.1
Disadvantages of I-by-I approach
• Consideration of each prediction interval in
isolation
• When the weakest link changes over the prediction
intervals
• does not leverage the available bandwidths
• Lack of consideration of the predicted
bandwidths over all intervals spanning the active
period of a link.
• When the weakest link remains constant over the
prediction intervals
• may over-estimate delivered rates and hence may
not provide loss-free playout
Back
L-by-L approach
• Considers each link in isolation and finds loss-free
rates supported by the link over each PI
• For a given link at the end of its active period, buffer
should be empty.
• Algorithm find_base _stream_rate
Example
• Algorithm find_link_stream_rates
Example
• At any prediction interval over the active period of a link, the available data capacity should be greater than or equal to the required data capacity, i.e., Aij >= Dij (the loss-free condition stated earlier).
• Considers links in the path of each client, to find
the loss-free delivered rates at client over each PI.
• Algorithm find_delivered_rates
Example
Back
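A simplified re-implementation of the per-link L-by-L computation described above: start from the average sustainable rate, and whenever the cumulative required data would exceed the cumulative available data, lower the rate for the intervals up to that break point and restart from the next interval. This is a sketch (delay tolerance expressed in whole prediction intervals), not the thesis code; the example reproduces Link 1 of the worked illustration (469.3, 469.3, 469.3, 448 and 512 kbps after a one-interval head start):

```python
def link_stream_rates(bw, delta, P=20):
    """bw: predicted per-interval bandwidths over the link's active period (kbps);
    delta: head start in whole prediction intervals. Returns one loss-free rate
    per interval (0 during the head start)."""
    n = len(bw)
    rates = [0.0] * n
    seg = delta                                    # first consumption interval of this segment
    carry = sum(bw[:delta]) * P                    # data buffered during the head start
    while seg < n:
        rate = (carry + sum(bw[seg:]) * P) / ((n - seg) * P)   # average rate for the rest
        avail = carry
        for j in range(seg, n):
            avail += bw[j] * P
            if (j - seg + 1) * rate * P > avail + 1e-9:        # would starve at interval j
                rate = avail / ((j - seg + 1) * P)             # shrink rate to the break point
                break
        rates[seg:j + 1] = [rate] * (j - seg + 1)
        carry = avail - (j - seg + 1) * rate * P               # leftover data (usually 0)
        seg = j + 1
    return rates

print(link_stream_rates([384, 384, 384, 256, 448, 512], delta=1))
# [0.0, 469.33..., 469.33..., 469.33..., 448.0, 512.0]
```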
Comparison of L-by-L algorithm with Optimal algorithm
• L-by-L algorithm performs close to the optimal
algorithm
• Even for small topologies with 10 nodes, the optimization did not converge in 7 out of 22 runs (about 32%), after taking about 1200 times the time taken by the L-by-L algorithm.
Effect of number of prediction intervals: L-by-L algorithm
• CPU time increases, as more iterations are required by the algorithms find_base_stream_rate and find_link_stream_rates
• Average delivered rates are almost the same when the number of PIs is increased without changing the bandwidth values
[Plot: log10(CPU time) in msec. across topologies, for n, 2×n, and 4×n prediction intervals]
Effect of fluctuating bandwidths: L-by-L algorithm
• Average delivered rates are almost the same when bandwidths fluctuate (while the bandwidth over the original PI remains constant); epsilon value = 32.
• When bandwidth fluctuates such that higher bandwidth is available on the link in the initial part of the PIs, the algorithm converges faster than when lower bandwidth is available in the initial part (both cases have the same available bandwidth over two consecutive intervals).
Algorithm: find_adjusted_stream _rates
• Start with advance estimate of predicted
bandwidths
• Stream rates calculated using L-by-L algorithm
• At the beginning of each prediction interval, get
refined estimate for bandwidth available on link
• Using the refined estimate, if the bandwidth
available at a prediction interval (after sending
any data in the buffer) is:
• higher than the original available bandwidth using the
advance estimate,
• increase the stream rate by the difference ( upper bound is
base encoding rate)
• else
• decrease the stream rate by the difference.
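A minimal sketch of the adjustment rule above, ignoring the buffered-data bookkeeping of the full algorithm: shift each planned stream rate by the difference between the revised and advance bandwidth estimates, capped at the base encoding rate. Names are illustrative:

```python
def adjust_stream_rates(planned_rates, advance_bw, revised_bw, gamma):
    """planned_rates: per-interval rates from the L-by-L computation (kbps);
    advance_bw / revised_bw: advance and refined bandwidth estimates (kbps);
    gamma: base encoding rate (kbps)."""
    adjusted = []
    for rate, adv, rev in zip(planned_rates, advance_bw, revised_bw):
        # increase (or decrease) by the difference, never exceeding gamma
        adjusted.append(min(gamma, max(0.0, rate + (rev - adv))))
    return adjusted
```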
Example
[Table: for each prediction interval (1–6), the advance and revised bandwidth estimates, the available bandwidth after sending buffered data, the difference in bandwidth, the adjusted rate, the data sent, and the data buffered; the stream rates planned by the L-by-L algorithm are 469.3, 469.3, 469.3, 448, 512, and 0 kbps]
Back
Algorithm: find_base_stream_rate
function find base stream rate (link id, start point, num pred intervals, delta);
under utilized = 1;
while under utilized == 1
available data = 0;
data to be sent = 0;
counter = 0;
start rate = find rate (link id, start point, num pred intervals, delta);
stream rate = start rate;
for PI = 1: TRANS INT
counter = counter + 1;
curr pred rate = pred table(link id, PI);
available data = available data + curr pred rate;
data to be sent = data to be sent + stream rate;
if available data is less than or equal to data to be sent
if counter == TRANS INT % when no under-utilized link %
under utilized = 0;
end
else
% when under-utilization of the link is detected %
pred table(link id, PI) = start rate;
break;
end
end
end
Back
Algorithm: find_link_stream_rates
function find link stream rates (linkid, start point, num PI, delta);
last PI = 0; st rates = base st rates (linkid, 1);
while last PI == 0
avail data vol = 0; req data vol = 0; counter = 0; break point = 0;
while break point == 0
For PI = 1: num PI
counter = counter + 1;
link bandwidth = pred table[counter];
avail data vol = avail data vol + link bandwidth;
if counter is greater than delta % after collecting data for delta %
req data vol = req data vol + current rate
if req data vol is greater than avail data vol
% when the rate can not be sustained %
current rate = find rate(....);
for interval = (start point + delta) to counter
st rates[interval] = current rate
end % find the rate for the intervals up to break-point %
if the last PI is reached
break point = 1; last PI = 1;
else
start from the next PI with,
num delta = 0; curr rate = find rate(....); break point = 1;
end
else
if last PI is reached
assign curr rate to stream rates for all intervals;
end
break point = 1; last PI = 1;
end
end
end
end
end
Back
Algorithm: find_delivered_rates
for each client C
for each prediction interval P
% traverse path from source to client %
for each link L in its path
if L is the first link in the client's path
min rate = stream rates(L, P);
else
min rate = min(min rate, stream rates(L, P))
end
end
delivered rates [C, P] = min rate
end
end
Back
Illustration of find_base_stream_rate
• Consider Link 4.
• The data available over 6 prediction intervals is consumed over 5 intervals; the start rate computed is 345.6 kbps.
• For the first interval, Aij = 384 kbps and Sij = 345.6 kbps; the link is under-utilized.
• The predicted bandwidth in PI 1 is reduced to 345.6 kbps and the start rate is recomputed.
• Repeat till the start rate is equal to the reduced predicted bandwidth in that interval.
• Repeat the same check for every interval in the active period of the link.
The base stream rate computed for Link 4 is 320 kbps.
Back
Illustration of find_link_stream_rates
Consider Link 1:
PI   Predicted b/w   Available data capacity   Required data capacity   Effective stream rate
1        384                 384                        0                       -
2        384                 768                      473.6                   469.3
3        384                1152                      947.2                   469.3
4        256                1408                     1420.8                   469.3
5        448                1856 (448)                (480)                    448
6        512                2368                        -                      512

2368 / 5 = 473.6; 1408 / 3 = 469.3; (448 + 512) / 2 = 480 cannot be sustained in PI 5.
Back
Illustration of find_delivered_rates
Stream rates (kbps) over prediction intervals:
Link     PI1     PI2     PI3     PI4     PI5
Link1   469.3   469.3   469.3   448     512
Link2   345.6   345.6   345.6   345.6   345.6
Link3   208     208     208     208     208
Link4   320     320     320     320     320
Link5   256     256     256     256     256
Link6   284     284     284     284     284

Delivered rates (kbps):
PI      1     2     3     4     5    Average
C1     320   320   320   320   320    320
C2     208   208   208   208   208    208
C3     208   208   208   208   208    208
Back
findplacement-Max-gain
Find the minimum set of transcoders U needed to provide the best delivered rates to clients:
• Start with no transcoders and find the delivered rates at clients.
• Place a transcoder at each node in U, one at a time, and find the delivered rates at clients for each of these additions.
• Choose the option that maximizes the gain.
• Remove the node selected for placement from the set U.
• With the chosen placement, repeat the process for the next transcoder, placing it at each node in U.
• Note that for each round of selection, the delivered rates with the chosen placement in the previous round would be used for comparison to find the additional gain.
Findplacement-Min-loss
Find the minimum set of transcoders U needed to provide the best delivered rates to clients:
• Start with transcoders at all nodes in U and find the delivered rates at clients.
• Remove transcoders one at a time and find the delivered rates at clients for each of these removals.
• Choose the transcoder that minimizes the loss from the removal of the transcoder at the chosen location.
• Remove the selected node from U.
• With the chosen placement, repeat the process for the next transcoder, removing it from each node in U.
• Note that for each round of selection, the delivered rates with the chosen placement in the previous round would be used to find the cumulative minimal loss.
Solution Approach: Optimization-based
Design variables:
• Stream rates xm flowing through links lm
Objective function:
• Maximize playout rates at the clients:
  Minimize Σi (Γ − xi)², where Γ is the base encoding rate and xi is the rate flowing through li
Optimization: Constraints
• Rate constraint:
  xm <= Γ, where Γ is the base encoding rate, the best possible rate that can be delivered at any client
• Delay tolerance constraint:
  Li <= δi, where Li = Lb + Lt
  Lb is the latency incurred in the path from S to Ci due to buffering, Lb = ((ri − bw)/bw) × T
  Lt is the latency due to transcoding
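A small check of the delay-tolerance constraint above, using the buffering-latency formula Lb = ((ri − bw)/bw) × T from the slide; the function name and the zero default for the transcoding latency Lt are illustrative assumptions:

```python
def satisfies_delay_tolerance(r_i, b_w, T, delta_i, L_t=0.0):
    """r_i: delivered rate (kbps); b_w: weakest-link bandwidth (kbps);
    T: playout duration; delta_i: client delay tolerance (same time unit as T)."""
    L_b = max(0.0, (r_i - b_w) / b_w * T)   # no buffering delay if the link keeps up
    return L_b + L_t <= delta_i

# A 192 kbps stream over a 128 kbps weakest link for T = 60 min needs
# ((192 - 128) / 128) * 60 = 30 min of delay tolerance.
print(satisfies_delay_tolerance(192, 128, 60, delta_i=30))  # True
```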
Optimization: Constraints…
Transcoder constraint:
• Source transcoding
• for every client Ci the rates flowing through links in
its path from source are equal
• Anywhere transcoding
• for every Ci, the rates flowing through links in its
path are non-increasing
Back
Optimization-based approach
Design variables:
• Stream rates xm flowing through links lm
Objective function:
• Maximize playout rates at the clients:
  Minimize Σi (Γ − xi)², where Γ is the base encoding rate and xi is the rate flowing through li
Optimization: Constraints
• Rate constraint:
  xm <= Γ, where Γ is the base encoding rate, the best possible rate that can be delivered at any client
• Delay tolerance constraint:
  Li <= δi, where Li = Lb + Lt
  Lb is the latency incurred in the path from S to Ci due to buffering, Lb = ((ri − bw)/bw) × T
  Lt is the latency due to transcoding
• Number of transcoders constraint:
  |{k : node k has an outgoing link with stream rate smaller than the rate flowing into it}| <= q,
  where q is the number of transcoders to be placed.
Complexity of optimization approach:
• Given n transcoders, where n < |O|. E is the
eligible set.
• Number of possible unique placements is given by:
  ECn = E! / (n! (E − n)!)
• The worst case complexity is exponential if
each option is considered in turn.
Back
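To make the growth above concrete, a few values of the binomial count (using Python's math.comb; the chosen E and n values are arbitrary illustrations):

```python
from math import comb

# Number of unique ways to place n transcoders among an eligible set of size E.
for E in (10, 20, 40):
    print(E, comb(E, 5))   # 252, 15504, 658008 for n = 5
```

This combinatorial growth is why considering each placement option in turn is exponential in the worst case.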
Algorithm: find_max_clients
function find max client (client arrivals)
num serviced client = 0;
for each arriving client
if this is the first client in that subtree
place streaming server
% returns the streaming server node and the
rate delivered at the first client %
deli rates = maximum deliverable rate at client
% calculated using Theorem 4.2 %
num serviced client = num serviced client + 1;
elseif the links are busy
if content available in cache
start streaming from the streaming server
num serviced client = num serviced client + 1;
update TTL
deli rates = maximum deliverable rate at client
% considering path of client from the
streaming point %
elseif constraints can be satisfied when the links
become free
deli rates = maximum deliverable rate at client
% considering the remaining delta of client,
when the link becomes free %
num serviced client = num serviced client + 1;
else
reject request
end
end
end
Back
Proof sketch
• We use induction.
• Considering the base case, let l1 be a link connecting S to node
n1. Let b1 be the bandwidth available on l1, where b1>>α. Ŧ, the
time to transfer file of size Z from S to n1 is given by:
Ŧ = Z/ b1, as trivially, min(b1) = b1
• We consider consecutive links l1, l2, …, lm connecting S to node nm, having bandwidths b1, b2, …, bm which are greater than α. Let min(b1, b2, …, bm) = bmin.
• If (2) holds for these consecutive links, we prove that it also holds for links l1, l2, …, lm+1 to node nm+1, having bandwidths b1, b2, …, bm+1.
Back
Proof sketch
• By Theorem 1, if the weakest link in the path of Cj
occurs in the core distribution network bmin has to be
< rj when δj >0. Hence the weakest link in Cj’s path
occurs in the access network. □
Back
Proof sketch
• Let bw be the weakest link in the path of Cj.
• Suppose bw< bmin. This implies that the weakest link occurs in
the access network as bmin is the weakest link in the core
distribution network. By Theorem 1, rk depends on bw and δk.
For large values of δk, rk can be >= bmin. In this case, even
though bmin is not the weakest link in Ck’s path, rk > bmin.
• Now we consider the case when bw= bmin. In this case, by
Theorem 1, rj > bmin when δj >0. Note that here the weakest
link occurs in the core distribution network.
• Hence if rj > bmin, the weakest link in Cj’s path may occur in
the core distribution network, when δj >0 □
Back
• Enhancing QoS for Delay-tolerant Multimedia Applications: Resource
utilization and Scheduling from a Service Provider’s Perspective,
Short-paper: Doctoral symposium -- Presented at Infocom 2006,
Barcelona, Spain, April 2006.
• Maximizing Revenue through Resource Provisioning and Scheduling
in Delay-Tolerant Multimedia Applications:
A Service Provider’s Perspective
Short-poster paper -- Presented at Infocom 2006, Barcelona, Spain,
April 2006.
• Network Models for Distance Education Dissemination
-- Full paper, co-authored with Sridhar Iyer, Presented at International
Conference on Distance Education (ICDE) 2005, IGNOU, Delhi,
December 2005.
• Achieving Stable Transmission on VSAT Networks -Case Study:
Distance Education Program (DEP), IIT Bombay,
-- Full paper, co-authored with Sameer Sahasrabuddhe and Malati Baru, Presented at ICDE 2005, IGNOU, Delhi, December 2005.
Source node architecture
Back
Relay node architecture
Back
Client node architecture
Back
Source node architecture
(for on-demand streaming)
Back
Relay node with streaming server
(for on-demand streaming)
Back
Transcoding vs. Layering
• Either of the resources (transcoders or layer encoders) can be used to adapt the encoding rate
• When bandwidth is static:
  • a fixed number of transcoders can be deployed to deliver optimal rates to clients
  • the same results can be obtained with layered encoding; however, the number of layers required may be too high for the scheme to be feasible
• When bandwidth is varying:
  • for transcoding to be effective, every node needs to be transcoding-capable
  • with little additional functionality at the relay nodes, source-adaptive layering can be implemented efficiently