>> Yunnan Wu: Hello and good afternoon, everyone. It's my great pleasure to introduce
Yao Zhao to you. Yao received his bachelor's and master's degrees from Tsinghua
University in computer science. And he will receive his Ph.D. degree in computer
science from Northwestern University next --
>> Yao Zhao: Yeah.
>> Yunnan Wu: -- I guess. We have been very fortunate to have Yao with us for the past
few months doing an internship.
So today he will tell us about his internship project. So without further ado, let's
welcome Yao.
>> Yao Zhao: Okay. Yeah. Thank you, Yunnan, for the introduction.
I will talk about my internship work: utility-maximization-based P2P multiparty video
conferencing. This is joint work with my mentor, Yunnan Wu, and with Minghua Chen, Jin
Li, Baochun Li, Sanjeev, and Phil. We also got a lot of help from others in the
CCS group.
Okay. So I think a lot of people actually want multiparty videoconferencing. Often,
when we discuss a problem remotely, we wish we had such an application at hand.
Multiparty videoconferencing is quite different from current live streaming
or on-demand systems. First, it has a high rate: on average, each participant's video
has a rate of about 400 kbps, so if you have a conference of four people, the aggregate
rate is actually very high.
It also has a very strict requirement on delay. It usually requires the delay of a
video packet to be less than 200 milliseconds so that people have a good
experience.
And also, the usual scale of a videoconference is just three, four, or maybe five
people. It's not like P2P streaming, where you have hundreds of people
watching the same video.
So usually people come up with a server-based solution, which is simple, and the
server actually has a lot of advantages for supporting good quality of service.
Examples include Microsoft Office Communicator and Cisco's WebEx.
But the problem with the server-based solution is that the server itself is a
bottleneck in the whole system. It's not scalable: you cannot really host a large
number of sessions, because of the bandwidth and computation limits on the server.
So that's why people also want peer-to-peer-based solutions, in which the
peers -- the participants of the conference -- can help transfer the video data.
iVisit is one product that I found online. It claims to be the first P2P-based
videoconferencing system, but unfortunately I don't know the details because it's a
commercial product.
And [inaudible], and we have Mutualcast and [inaudible] for P2P systems -- that's
a Sigmetrics '08 paper [inaudible]. They don't mainly focus on videoconferencing, but
they are similar P2P systems and can potentially be used for
videoconferencing. Um-hmm.
>>: So where would you put Skype?
>> Yao Zhao: Skype itself doesn't support the multiparty videoconferencing.
>>: Yeah, it does. Yes.
>> Yao Zhao: Yeah?
>>: I think [inaudible].
>> Yao Zhao: Okay. Yeah. Then probably I should put Skype here as well. Because
when I searched online, I found people generally say it doesn't support it, and there were
some plug-ins that duplicate the video streams so that they can provide multiparty
conferencing.
Okay. So mainly, in this P2P-based multiparty videoconferencing, the strict
requirement on delay makes the problem very challenging. You basically
have to make sure that your video rate is well controlled so that everybody can gather
the video packets in real time.
You cannot do something like live streaming or on-demand streaming, where
people usually watch video that is a minute or 30 seconds behind
real time.
And this problem is especially challenging when network resources are limited, because
then you have to carefully utilize the upload bandwidths
of those peers.
Okay. In this talk I will first present the problem formulation and then describe our
rate control algorithm. Then I will quickly discuss how we deliver the video
content with our rate control algorithm. And then I will show the evaluation results,
and finally some discussion and conclusions.
>>: So how big, typically, is a videoconference?
>> Yao Zhao: Usually it's about four participants.
>>: Yeah. I mean how big [inaudible]. Because that will have huge impact on your
design, right?
>> Yao Zhao: Yes.
>>: [inaudible] like typically how big can it be [inaudible] can they
go to 100? Or typically on the order of tens?
>> Yao Zhao: I think typically fewer than ten people. And some commercial
products usually set an upper limit on the number of people participating in
the same conference. For example, iVisit allows at most six -- eight people. Sorry.
>>: So a small scale.
>> Yao Zhao: Yes.
>>: And do we have any sense of churn? Do people leave?
>> Yao Zhao: Yes.
>>: Join and leave all the time in videoconferences?
>> Yao Zhao: Yes.
>>: [inaudible] because if, for example, you don't have churn [inaudible].
>> Yao Zhao: [inaudible] still doesn't work because of the upload bandwidth limitation.
>>: It's a ring, I upload [inaudible].
>> Yao Zhao: Right. But maybe some participant doesn't have, you know, a 400 kbps
upload rate. Maybe you're on a mobile phone and want to attend the
conference.
>>: [inaudible].
>> Yao Zhao: And also, you know, if you have five people and you build such a ring, the
delay is pretty large.
>>: Right, right. I'm saying [inaudible] there's no churn -- I can do a lot of fancier
things [inaudible].
>> Yao Zhao: Yeah. Definitely there is churn. And there may even be
other TCP connections that make the bandwidths of the links change quickly.
>>: But presumably the churn is much lower --
>>: Yeah, much lower.
>> Yao Zhao: Right.
>>: [inaudible] too, it would be very impolite to leave [inaudible].
>> Yao Zhao: Yes. So we assume the churn rate is not very high. It will be quite stable,
on the order of minutes.
>>: [inaudible] huge impact on [inaudible].
>> Yao Zhao: Um-hmm. Yeah. That's a good question.
So, for example, assume that we have such a topology: say the source and A, these
two participants, are in Redmond, and B and C are in Silicon Valley. And they
are connected through routers. And probably the green link here is the
bottleneck.
And then if we have some peer-to-peer links sending
the video data -- for example, here the source sends something to A, and the source sends
something to B -- then we actually want to know, for such
peer-to-peer link rates, what is the maximum flow that A, B, and C can
receive. Because that determines the video rate that they receive.
And it's well known that the maximum flow problem is equivalent to the
min-cut problem -- that's the max-flow min-cut theorem.
For a cut here, you divide the nodes into two parts: one part
contains the source S, and the other set contains the destination, say B here. Then
you sum the capacities of all the links that go from the first set to
the second set, and you get the cut capacity.
And then if you compute the minimum over all such cuts, that determines
the maximum flow the destination node can receive. So this is actually a pretty general
problem.
>>: Min-cut for one node --
>> Yao Zhao: For --
>>: -- [inaudible] min of the min.
>> Yao Zhao: The min of the min-cuts. So that's the final video rate for this system.
>>: [inaudible].
>> Yao Zhao: Even if you don't talk, people may still want to view your
image. Right.
And actually in this case we make it simpler: just one source and a
couple of nodes that want to watch its video. And if a node does not subscribe to your
video -- for example, here, if C doesn't want to watch the video of A -- then
A doesn't have to send to C, and you remove C from this channel.
>>: But the other problem there is a problem, right? [inaudible] mobile device.
>> Yao Zhao: Yeah.
>>: Does that mean you're going to use [inaudible], or are you going to say everyone can
only watch [inaudible] which can be supported on any cell phone?
>> Yao Zhao: Well, that's actually a tough question we are also thinking about. Do
we really want to make sure that everybody receives the same video rate,
or, because of the min-cut constraints, should someone receive a higher rate and someone a
lower one?
Right now, for simplicity, we just assume that everybody watches the
same video content. But if layered coding can be applied in
this system, then we can let them receive different rates.
In practice, though, I think layered coding is not widely used yet.
Okay. So anyway, the min-cut is what we care about -- the minimum of the min-cuts over
every peer is what we care about in this system.
So the goal is to make that min-cut as large as possible under
the network limitations. And actually we don't know the
underlying network topology, and we don't know the available bandwidth on the links.
So we have to figure out what rate to send on these overlay links so that
we achieve the best min-cut, or at least a min-cut large enough
to support a good video rate, say 400 kbps.
And the second question is: even once you have allocated the link rates so
that you achieve a good min-cut, you still need to decide what content to send along
those P2P links.
Generally we can use tree-packing algorithms to send the packets over trees, or we can
use network coding to achieve the min-cut. These are the two typical approaches to
achieving the min-cut in a network with a given rate allocation.
Okay. So one intuitive question: we already have TCP. And TCP actually
does not consider the topology of the network -- it doesn't know the topology -- yet TCP
will converge and finally achieve fairness and the maximum rate.
So maybe we can just use TCP on these links to discover
the maximum rate that we can send over the peer-to-peer links.
But actually we did some simulation: for the previous topology with
four nodes, we have 12 TCP connections, one between every ordered pair of nodes. And then we
let them run to see how they converge.
TCP does converge, but its rate fluctuates extremely. So in that case
you really don't want to use TCP, because your video rate is not stable. You will
see a suddenly high rate and then a suddenly low rate. That's not good for the user
experience.
And TFRC actually, you know, is --
>>: [inaudible] you really shouldn't say TCP, because -- you should say TCP Reno or
TCP [inaudible] --
>> Yao Zhao: Yeah, actually I used TCP Reno. Yeah. Thanks for --
>>: [inaudible] different kinds of protocols, and TCP [inaudible]. And now for that
experiment, did you use Linux or Windows? I presume Windows.
>> Yao Zhao: I actually used the TCP Reno implementation in ns-2.
>>: Oh, ns-2.
>> Yao Zhao: Yeah.
>>: [inaudible] a newer version [inaudible] Linux [inaudible].
>> Yao Zhao: Right. Even if you use the -- I don't think that will change
behavior like this convergence, because when you see a loss you halve your
rate --
>>: No, no [inaudible] the newer versions will be different.
>> Yao Zhao: Okay.
>>: [inaudible] yes, you are doing a congestion control protocol, but at the same time
[inaudible] start to be a lot smoother [inaudible].
>> Yao Zhao: Yeah, that's Reno, um-hmm.
>>: [inaudible].
>> Yao Zhao: Okay.
>>: Yeah, that's our point.
>> Yao Zhao: Okay. Thanks. Yeah. We will look at the newer TCP versions. But so
far my guess is they will still fluctuate and not be very stable.
And I also used the TFRC implementation in ns-2, and I checked how stable
TFRC is.
It turns out that TFRC itself tries to make the rate more
stable -- that's the purpose of TFRC.
And we can see that it's more stable than TCP, but still, when it sees a loss, it
will still halve the rate.
So it still fluctuates at times, and, you know --
>>: [inaudible] TFRC [inaudible] I believe averages over eight loss intervals.
>> Yao Zhao: Um-hmm.
>>: There's no way to capture [inaudible].
>> Yao Zhao: They may not do so, but, I mean, it mimics the behavior of TCP.
>>: Right. But on average this moves up substantially. I think an interval is like eight I
believe.
>> Yao Zhao: Yes.
>>: I think here [inaudible] the time is a second [inaudible] I think is pretty small.
>> Yao Zhao: Pretty small. Well, it depends. If it's inter-island, then it's about 100
milliseconds. If it's intra-island, then it's just 10 milliseconds.
>>: I wouldn't be surprised if that happened. It depends on what's [inaudible]
simulation.
>> Yao Zhao: It's just [inaudible] implementation on ns-2.
>>: What's the source?
>> Yao Zhao: I don't have a separate source. Here I created a TFRC pair between every
pair of nodes.
>>: Right. What's the source to send the traffic?
>> Yao Zhao: Every node is source.
>>: [inaudible].
>> Yao Zhao: For the source, I just assume it's FTP, and it tries to send as much as
possible.
>>: But then that's against the argument. If you're using [inaudible].
>>: It's mostly [inaudible].
>>: [inaudible].
>> Yao Zhao: Well, you --
>>: Well, presumably, if you're using TFRC, the source is sending at whatever rate
[inaudible].
>>: Right. But the loss is driven by [inaudible].
>>: So do you have any cross traffic [inaudible].
>> Yao Zhao: I don't have other cross traffic. But I assume the target
video rate is so high, like 1 Mbps, that the network cannot really support it. You
will inevitably be driven to loss.
>>: Yeah, but then that's [inaudible]. That's a very different thing.
>>: I think --
>> Yao Zhao: Yeah. If the network can really support all the rates, then
probably it's easier, right? So we are targeting the harder problem: you
really want a very high rate but you cannot have it. You have to do rate control and
congestion control so that you achieve a good rate.
>>: I think that's the problem [inaudible].
>> Yao Zhao: Well, yeah, you have to do both. The source always wants more. I want to
[inaudible].
>>: [inaudible] you do have fluctuation are you coding [inaudible].
>> Yao Zhao: Um-hmm.
>>: [inaudible] but that's a different story [inaudible].
>> Yunnan Wu: I would suggest we defer the question [inaudible]. Just imagine, I
mean, let's say we have a [inaudible] conference, which is actually in the
product right now. That will require a lot of bandwidth, more than what the network can
support. And I guess Yao will show us how we can find the good bit rate to use
[inaudible].
>> Yao Zhao: Yeah. Thanks, [inaudible], for making it clearer.
So basically we formulate our problem as a utility maximization problem. Generally,
when the network topology and the network capacity are unknown, then, similar
to TCP, how do you discover these constraints and maximize what you want?
In TCP, you want to maximize the rates of the TCP connections subject to certain
fairness.
So this is the typical utility maximization formulation for this
[inaudible]. Generally you have a utility function, which is a function of this R.
R is the vector of the rates of the end-to-end links.
And the utility function usually is convex. We make it convex because then the
optimization problem can converge from any point.
>>: [inaudible].
>> Yao Zhao: Oh, yeah, concave. Sorry. Yeah, it's concave.
Then any local optimum
is also the global optimum.
And AR less than C specifies the network constraints. The elements of
A are binary: they specify whether an end-to-end link
contains a given physical link or not. And C is the capacity vector of the
physical links.
Okay. Then, because these constraints are unknown, how do people solve this problem?
Generally, one solution is called the primal solution. Instead of using the utility
function alone, it adds a penalty term to the utility function.
So don't be scared by this large, complicated part here. Let's look at the
derivative of the whole function.
Basically, it's the derivative of each utility function minus this
part. This part looks very complex, but it's just the loss rate in the system.
So basically you can use loss as a signal to control the system. If you observe loss,
it means you are sending too much and you have to slow down.
If you do not observe loss, then the
derivative of the utility function is positive, which makes you increase
your rate.
So that's the control system. And round by round, we make
the new rate equal to the old rate plus some step-size function times the derivative. So
that's how you use the feedback to control your system.
And there are theoretical results showing that if you do this kind of control, the
system is guaranteed to converge, and to converge to the global optimum.
Okay. So let me use TCP as an example to show how this works. For
TCP Reno, people have done the reverse engineering, translating its window-based
congestion control into a rate-based one, and figured out that the utility function of TCP
Reno is just this function. The S here is the packet size, and T is the round-trip time.
And then the derivative is this function. This here is the step-size
function, and this is the derivative. We can see that for TCP Reno, when you have a
higher rate, this R_i increases, so this part is smaller -- which means that at
higher rates you are more sensitive to loss.
And the step-size function is proportional to R_i squared, which means
that once you have a loss you get a large decrease in your rate.
And these two multiplied together give a constant, so when you have no loss
you get a constant increase in your rate, which mimics AIMD.
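For reference, one standard reverse-engineered form of the Reno utility, following Low's duality model of TCP (the exact constants here are an assumption; what the argument above relies on is only the structure, a derivative falling off as 1/r^2 and a step size growing as r^2):

```latex
% Reverse-engineered TCP Reno utility (packet size S, round-trip time T):
U_i(r_i) = \frac{\sqrt{2}\,S}{T}\arctan\!\Big(\frac{r_i T}{\sqrt{2}\,S}\Big),
\qquad
U_i'(r_i) = \frac{1}{1 + r_i^2 T^2 / (2S^2)} \approx \frac{2S^2}{T^2 r_i^2}.

% With step size \kappa(r_i) = r_i^2 T^2 / (2S^2), the update
%   r_i \leftarrow r_i + \kappa(r_i)\,\big(U_i'(r_i) - L_i\big)
% gives a constant additive increase \kappa(r_i)\,U_i'(r_i) \approx 1 when
% there is no loss, and a multiplicative decrease \kappa(r_i)\,L_i \propto
% r_i^2 L_i when loss L_i is observed -- the AIMD structure described above.
```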
Okay. Then let's see how we formulate our problem in this utility maximization
framework.
What we care about is the minimum min-cut from the source to the peers in
this system. So the rho here we define as the minimum min-cut. And then
the utility function becomes a utility function of that min-cut. And we still
have the network limitations.
Besides this, we add another term to the utility function. Because if we
assume we have a maximum video rate that we want to achieve, say 100
[inaudible] bps, then once you get enough min-cut, you don't want to push for more.
So this is the difference between ours and TCP. TCP will try to grab as many
resources as it can. Here, if we achieve enough rate, we stop there.
So that's why we add a penalty here. By controlling this penalty, we can make the utility
function be maximized once you achieve a certain rate.
>>: [inaudible] clarification, can you go back, please. So here R could mean --
>> Yao Zhao: The minimum of the min-cuts.
>>: [inaudible] so which means all the users receive the same rate.
>> Yao Zhao: Yeah. That's a [inaudible].
>>: [inaudible] they only differ because they haven't [inaudible] functions, both same
rate, different users, will have different utility functions.
>> Yao Zhao: Yeah. Potentially you can have different utility function, but actually we
use the same utility function.
>>: No, they don't, right, because they use [inaudible] all the same rate -- for example, 400
kilobits per second -- and different users will have different utility functions [inaudible]
40 kilobits, I think that's fantastic [inaudible] that's horrible [inaudible].
>> Yao Zhao: Yeah. Potentially you can make the utility function to be different. But in
our current implementation, we make the utility function to be the same for every node.
>>: [inaudible].
>>: [inaudible] from the same domain, which is [inaudible] single rate.
>>: [inaudible].
>>: Right. I just wanted to clarify. That's number one. Number two is since I think
[inaudible] emphasize in the very beginning [inaudible] latency.
>> Yao Zhao: Um-hmm.
>>: So here we're -- because for this [inaudible] try to maximize the rate, and potentially
it can go over [inaudible]. For example, if we happen to have [inaudible] which is
fantastic, probably can even demolish that [inaudible] I can solve here very nice, right?
>>: [inaudible] cause the delay [inaudible].
>> Yao Zhao: We will talk about how we --
>>: [inaudible].
>> Yao Zhao: -- control that, to prefer the links with low propagation delay. We have
something to add to this utility function, which I will show in the next
couple of slides.
But you are right that it's not very easy to directly minimize the
delay in this framework. We have some ways to prefer the fast links, but
that's not the same as minimizing the total delay.
>>: Okay. That's okay. And another thing which is not clear to me yet is how do you encode?
This is essentially -- the demand is not simultaneous. I presume you mean a conference
[inaudible] one guy is talking, or you would have to send the video of everyone to
everyone. So I don't know how you're encoding this one. Because [inaudible] so what is
the [inaudible] there's only one [inaudible] so therefore the [inaudible] matrix is [inaudible]
to everyone, or it's from one guy -- and so therefore I only have [inaudible] and they
are discrete. So what you're saying is because you're assuming it's convex, therefore you
do a convex [inaudible] how do you encode this?
>>: So far I think he's [inaudible] is talking, everybody else is watching.
>>: Ah. Okay.
>>: Only one guy.
>>: So only one guy [inaudible].
>>: So that will generalize to [inaudible].
[multiple people speaking at once].
>> Yao Zhao: I will talk about that. But that part is not -- still in the early stage yet.
>>: But that one [inaudible] I just want to make sure I don't miss it, because that part is a
convex combination. Yes, I [inaudible] matrix is the point, right? Is matrix [inaudible]
space. I only have [inaudible] users. Therefore [inaudible].
>>: Okay. Go ahead [inaudible].
>> Yao Zhao: Okay.
>>: [inaudible] comment on the users [inaudible] you can think of this [inaudible]
conferencing interface. There are probably two [inaudible]. One is, let's say we have a
multiparty conference and each of us [inaudible] interface; then potentially
what we will have is another [inaudible] which combines all the sources together. I think
you can get some experiment basically on that [inaudible].
Another case [inaudible] you let the active speaker -- so every one of us sees the active
speaker. But we will channel the active speaker through one server, or let's say [inaudible].
The reason is this: usually in multiparty conferencing, switching the topology is difficult to
achieve. So you want the topology to be stable [inaudible] to be stable. So one possibility
is you always send --
>>: [inaudible] topology across all the sources [inaudible].
>>: All the peers. So let's say you only want to show the video of the current speaker.
Now, you want to channel this through one source, and then it will distribute to the
rest of the peers. That way at least the distribution chain is more stable.
>>: Yeah, but that's [inaudible] efficient. Because now everyone [inaudible] a source, the
central delivery [inaudible]. And so therefore we all send some -- we all interfere with
the optimization because you are using [inaudible] capacities.
>>: So you will use the capacity of the node, which is uploading.
>>: Right. Uploading to [inaudible] distribution [inaudible]. Go ahead.
>> Yao Zhao: Then in our primal problem we also use loss as a signal.
So, similarly, we can compute the derivative of this utility function,
choose a step size, and adjust the link rates. And the same theory
guarantees that the system will finally converge to the
optimal point, where you achieve the maximum of the
utility in this system.
The utility function is chosen by us. Right now we choose the utility
function to be the logistic function. The property of the logistic function is that when
rho, the min-cut, is much smaller than the video rate -- we actually use rho divided by
V here, where V is the video rate -- the derivative
of the utility function is large, and that gives you a large incentive to increase
your rate.
And when the min-cut is more than the video rate, the derivative is much smaller --
close to zero -- so you don't have an incentive to increase your rate. That's how
we chose the utility function.
Then let's talk about the other part of this derivative. I talked about the utility
function; now let's see the link cost, the penalty on a link. Essentially we choose this
cost as a function of r_ij: w_ij times r_ij raised to the power of one plus epsilon. This
function is convex, so since we subtract it, the objective stays concave.
And how do we prefer the links with short propagation delay? We can control this w_ij
for different links. Basically you can measure the propagation delay and then
assign different weights based on it.
For a link with low propagation delay, because the weight is smaller, the
penalty on that link is less. To achieve the same min-cut, there can actually
be many solutions for the link rate allocation.
But because of this weighting, you will finally prefer the links with
low delay and shift traffic from
the links with high delay to the links with low delay.
And the second enhancement to the utility function is that we also add the
queuing delay into this derivative. So basically we also subtract a parameter times
the queuing delay.
This potentially lets us use the queuing delay as a signal, so that we can slow
down before we actually see loss.
And for our control we chose a constant step size. So basically we take this derivative,
which comes from the utility function, and multiply it by a constant step size. That way
each adjustment is not very large, which potentially
makes the peer-to-peer link rates stable.
Compare this to TCP, where the step size is proportional to the rate raised to the
power of 2, which causes a large rate change when loss happens.
>>: So I don't understand the implication of adding queuing delay.
>> Yao Zhao: So, about adding queuing delay: we can take this as one part, right? When
you achieve a certain rate -- say rho is close to the video rate -- this part is very
small. And for the queuing delay: if your rate goes over the link capacity,
then you'll see queuing delay. That makes this part [inaudible],
and then you know you should reduce the rate.
Basically, queuing delay happens faster than loss, so it's a
quicker signal.
>>: [inaudible] primal, for example, when your queuing delay is larger, you're already
incurring your penalty or your violation of a constraint, right?
>> Yao Zhao: Yes.
>>: So I don't know -- I forgot [inaudible] I think you're missing a part.
>> Yao Zhao: [inaudible].
>>: Can you go back to -- right here. Right. So that's -- how come that disappeared, the
last term?
>> Yao Zhao: The last term, here?
>>: Yes.
>> Yao Zhao: That's the delay here.
>>: Oh, that's the L.
>> Yao Zhao: L, yeah. That's the loss there. So we use both the loss and the queuing
delay. Actually, before you see loss you will see queuing delay first,
so you also react to the queuing delay,
before you really see loss.
>>: Yes. But even, for example -- but conceptually, L [inaudible], they are
instances [inaudible] of dual variables, right?
>> Yao Zhao: Um-hmm.
>>: Dual variables, sort of, yes. Conceptually, when you converge, if you don't violate
the constraints, they will be zero.
>> Yao Zhao: Yes.
>>: Here [inaudible] variables start to play a role, to say go back, go back [inaudible].
>> Yao Zhao: Yeah.
>>: [inaudible].
>> Yao Zhao: Yeah. L is --
>>: I mean, L was -- okay, don't reach that, I'm going to push you back.
>> Yao Zhao: Yeah.
>>: [inaudible] what you are doing is you're also adding a queuing delay -- I presume
that's Q also, just Q_ij, I believe?
>> Yao Zhao: Yes.
>>: And I think here the argument is like the queuing delay is also reflected on the
[inaudible] constraints [inaudible]. So, I mean, the original term is basically [inaudible].
>>: [inaudible] so why -- do we have any sense how come that would help you in terms
of adding queuing delay?
[multiple people speaking at once].
>> Yao Zhao: If we don't add queuing delay there, the system will
always be driven to loss if the network capacity cannot really support your
target video rate.
>>: Yeah, but --
>> Yao Zhao: But with queuing delay, in a lot of cases you
can avoid the loss.
>>: [inaudible] for example [inaudible] that's why people do virtual queuing. [inaudible]
you use a virtual target, and [inaudible] before you see a loss you can already drive it
down.
>>: So here is this -- I mean, I think the argument is that -- let's say we are using the
virtual [inaudible] capacity as you suggested. Now the queuing delay is a very
[inaudible] the loss [inaudible] on the routers. Depending on whether [inaudible] or using
[inaudible], packet loss can appear at different instants. I mean, [inaudible] packet loss
sooner or later. Rather, you get packet loss before you reach the full capacity.
Now, here is this -- I mean, the primal problem or the original problem has the integral
[inaudible] which basically says, okay, if I exceed my capacity, the penalty
[inaudible] starts to kick in.
[multiple people speaking at once].
>>: And there -- normally I would use the packet loss [inaudible], but packet loss kicks
in too late, and we are using the property that usually, in the router case, before packet
loss happens you see queuing delay, which happens earlier [inaudible].
>> Yao Zhao: [inaudible] consideration for the low-delay requirement. If you
always drive to loss, then the packets already observe a large queuing delay.
If you fluctuate around this critical point -- you drive to
loss, then reduce a little bit, then drive to loss, reduce a little bit -- your packets will
always see a constant, large queuing delay.
But since you already have queuing delay there, maybe before
you reach loss you can use that signal to slow your rate down; then, on average, the
packets will see a low queuing delay.
>>: [inaudible] suggestion, so, I mean, you're talking about the -- I mean, think about the [inaudible] capacity [inaudible]. So currently, I mean, C_L is the actual capacity [inaudible]. Now imagine basically reshaping that function. I mean, that's basically just a manmade function, right? You just reshape the function so that before it reaches capacity, some penalties kick in. And these penalties basically kick in in stages, early -- let's say, basically starting when the total rate on the link reaches some capacity [inaudible] they kick in.
In a sense we are using delay and packet loss as a combined signal to basically approach that final function. So we are doing that. I mean, before you reach full capacity, delay will give you some [inaudible].
>>: So how much loss do we have in terms of optimality [inaudible]?
>> Yao Zhao: For some topologies, you know, you will completely see no loss.
>>: No, I mean in terms of [inaudible] because you're using primal solution [inaudible].
Even [inaudible] of RIJ prime, you go to zero.
>> Yao Zhao: Uh-huh.
>>: That's optimal combination [inaudible].
>> Yao Zhao: Yeah.
>>: And now you are adding more things to it. So eventually your system converges, goes back to the loss condition [inaudible] minus the queuing stuff. So here you [inaudible] converge with the RIJ [inaudible] equal to zero [inaudible] away from optimal. Because clearly it wouldn't be [inaudible].
>> Yao Zhao: Well, yes. But, you know, if your rate is smaller than the network capacity, the queuing delay is very small. It's just that, because of some burstiness, you may see a little bit of queuing delay. But if you average it out, then it's pretty small.
So unless your average rate is over the network capacity, you will not see high queuing delay; otherwise it's just caused by, you know, the burstiness of the packets.
>>: [inaudible] will not reach the full capacity allowed on the link.
>>: Yeah, I don't fully understand [inaudible] away your optimal solution.
>>: [inaudible] still be at zero.
>>: [inaudible] queuing delay [inaudible] queuing delay has some [inaudible].
>> Yao Zhao: Yeah. So --
>>: [inaudible].
>> Yao Zhao: Given that the rate is less than the network capacity, you will observe only a small amount of queuing delay. And if you average it out over a certain period, I think it is small. And we did not observe this problem.
>>: [inaudible] some kind of [inaudible]. I mean, if we truly knew the capacity of [inaudible] we could calculate [inaudible]. But if you're actually sending those rates down those routers, because we are so close to the capacity, well, I mean, we occasionally see packet loss, and suddenly we're seeing a lot of delay [inaudible]. And that's undesirable [inaudible]. So we are pushing it away from that, so that we observe delay [inaudible] packet loss on the session [inaudible]. That can be an interpretation of why we include these two terms.
>>: [inaudible] so one way, we can do the following [inaudible] queue is equal to [inaudible]. The queuing delay is certainly equal to [inaudible] 1 over C minus RIJ -- no, summed over L. So the queuing delay is equal to the sum of 1 over (C minus R) -- basically for all links L along this IJ path. RIJ is from I to J, or big I, J?
>> Yao Zhao: Is from peer I to peer J.
>>: So essentially you're adding all this queuing -- this term into it. If you do a reverse
[inaudible].
>> Yao Zhao: Yes. [inaudible].
>>: Because that's an easy and adequate exception if you go back, okay, that's
[inaudible].
>> Yao Zhao: I think that's a very good comment on how to, kind of, estimate the effect of the queuing delay.
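The estimate being discussed -- the queuing delay on a path as a sum of 1/(C minus R) over its links -- can be sketched as follows. The capacities and rates below are hypothetical, and the 1/(C - R) form is the standard queuing-theory approximation the question alludes to, not something measured in the system:

```python
def path_queuing_delay(links):
    # Sum 1 / (C_l - R_l) over the links on a path: the simple
    # queuing-theory estimate discussed above. links is a list of
    # (capacity, aggregate_rate) pairs; units are packets/second.
    total = 0.0
    for capacity, rate in links:
        if rate >= capacity:
            return float("inf")  # overloaded: the queue grows without bound
        total += 1.0 / (capacity - rate)
    return total

# Hypothetical two-link path, 500 pkt/s capacity, carrying 400 and 450 pkt/s.
print(path_queuing_delay([(500.0, 400.0), (500.0, 450.0)]))  # about 0.03 s
```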
Okay. Any other question?
Okay. Then, for the previous part: you know, the utility function is defined by us, and we also define the penalty of the link, while the loss and the queuing delay are, you know, measured.
So we need to actually figure out what the minimum min-cut is, and what the derivative of this min-cut is with respect to the link rates.
This derivative is actually, you know, just one or zero, meaning this link either is or is not a critical link for the min-cut.
To figure this out -- because we consider small-scale videoconferencing -- we actually use link states, so everybody can, you know, report its RIJ to everybody else, and then everyone knows the link rates. Then you can easily compute the min-cut and also check whether a link is a critical link or not.
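Since every peer learns all the reported link rates RIJ, each one can compute the min-cut and the critical-link indicator (the 0-or-1 derivative just described) locally. A minimal sketch using Edmonds-Karp max-flow with the reported rates as capacities; the three-node example rates are hypothetical:

```python
from collections import deque

def max_flow(n, cap, s, t):
    # Edmonds-Karp: repeatedly augment along BFS paths in the residual graph.
    # cap maps directed edges (i, j) to the reported link rate R_ij.
    res = dict(cap)
    flow = 0.0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in range(n):
                if v not in parent and res.get((u, v), 0.0) > 1e-9:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow, res
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[(u, w)] for u, w in path)
        for u, w in path:
            res[(u, w)] -= aug
            res[(w, u)] = res.get((w, u), 0.0) + aug
        flow += aug

def min_cut_and_critical(n, cap, s, t):
    # A link is critical (derivative of the min-cut w.r.t. its rate is 1)
    # iff it crosses the min cut: its tail is residual-reachable from s
    # after max-flow and its head is not.
    value, res = max_flow(n, cap, s, t)
    reach, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if v not in reach and res.get((u, v), 0.0) > 1e-9:
                reach.add(v)
                q.append(v)
    critical = {e for e in cap if e[0] in reach and e[1] not in reach}
    return value, critical

# Hypothetical 3-node system: source 0 and peers 1, 2 (rates in kbps).
rates = {(0, 1): 400.0, (0, 2): 100.0, (1, 2): 300.0}
print(min_cut_and_critical(3, rates, 0, 2))  # min-cut 400.0; (0,2), (1,2) critical
```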
And, you know, we can piggyback these link rates on the video packets. And the update rate is [inaudible]; usually we set one update per 250 milliseconds. So it's not a very, you know, large load if the network is small -- if the scale is small, like ten peers.
Okay. And also, to make our convergence fast, we choose something similar to TCP: slow start. When a new connection just starts, if you do not see loss at the very beginning, then periodically you double your link rates.
And once you see loss, or you have achieved the min-cut -- that is, the min-cut is larger than your video rate -- then you stop the slow start. So this essentially helps us converge fast.
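The slow start rule just described can be sketched as a small loop; the `probe` callback standing in for the real loss and min-cut measurements is a hypothetical placeholder:

```python
def slow_start(initial_rate, video_rate, probe):
    # Double the link rate each update period until either a loss is
    # observed or the achieved min-cut already exceeds the target video
    # rate, mirroring the TCP-like slow start described above.
    # probe(rate) -> (loss_seen, min_cut) abstracts the measurement.
    rate = initial_rate
    while True:
        loss_seen, min_cut = probe(rate)
        if loss_seen or min_cut >= video_rate:
            return rate
        rate *= 2.0

# Hypothetical link with 500 kbps capacity and a 400 kbps target:
# the rate doubles 25 -> 50 -> 100 -> 200 -> 400, then stops.
result = slow_start(25.0, 400.0, lambda r: (r > 500.0, min(r, 500.0)))
print(result)  # 400.0
```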
And also let's consider node dynamics. In these systems there are still node joins and node leaves. For a join, considering the user experience, we try to minimize the impact on the existing peers in the system. The new node can probably wait five or ten seconds to get its video rate, but we don't want the existing peers to kind of experience bad video when someone joins. So that's our goal.
To do so we define a so-called dynamic period, which starts when a node joins. In this period we have some special processing in the rate control algorithm.
So basically, the source and the existing peers run the, you know, previous rate control algorithm without considering the min-cut to the new peer. Because at the very beginning the min-cut to the new peer is probably zero, if you considered it -- since the min-cut from the source to the existing nodes is much larger than zero -- then you would think that none of the links is critical compared to the minimum min-cut of the whole system, and everybody would reduce their rate.
>>: [inaudible] understand this discussion. When a new node joins, you're saying that initially its min-cut --
>> Yao Zhao: Its min-cut initially is zero, right -- from the source to the new node. Because no peer is sending traffic to you yet when you just join the network.
>>: Okay. So it's the min-cut over the traffic [inaudible].
>> Yao Zhao: Yeah. Essentially no peer sends the new node any traffic yet, so the min-cut from the source to the new node is zero. And if we consider the minimum min-cut of the system, then it will be zero.
And in this case, you know, you would think that all the existing links are not critical. Because the min-cut is zero, the critical links are those links to the new node. So that would, you know, make the existing peers reduce their link rates while trying to increase the uploading rate to the new node. That's what we want to avoid during the dynamic period.
And the second point is for the new node: because its min-cut is much smaller than the [inaudible] -- much smaller than the video rate -- it has a large incentive to increase its rate. But then it will aggressively grab traffic from other peers, so it kind of affects the other nodes.
So what we do is add some weight to the utility function when a new node joins, during this dynamic period, so that, you know, it is not so aggressive. For example, you have your utility function, then you --
>>: [inaudible] deal with some kind of bad convergence issues, I guess, right?
>> Yao Zhao: It deals with the [inaudible] states when a new node joins.
>>: Yeah. But, I mean, in principle, when a new node joins, you can just, like, re-solve this problem?
>> Yao Zhao: Yes.
>>: And set up a new R and --
>> Yao Zhao: Right. But the [inaudible] --
>>: -- the other guys won't be disturbed? It's some kind of convergence problem you're dealing with, right? It's not --
>> Yao Zhao: We actually want to --
>>: [inaudible] solution is going to be --
>> Yao Zhao: -- control the path of the convergence. We don't want the existing peers to receive a bad video rate during this convergence. Because when everyone wants to upload to the new peer, your uploads to the other existing peers may decrease.
>>: [inaudible] is correct [inaudible]. I think this basically [inaudible] problem is solved, the convergence issue, when a new node joins. Or this is related to how we currently formulate it [inaudible] -- the current formulation is to minimize -- sorry, maximize the min-cut of the peers. When a new node joins, if we assume its initial rate is zero, then the whole -- basically the function of the whole min-cut becomes zero [inaudible] convergence issue [inaudible].
>>: Yeah. But that's talking about, like, an initial rate. You just solve the problem -- let's say you can solve the problem in some [inaudible]. Solving the problem means choosing an R, an RIJ.
>>: So you need to know the [inaudible] that new node you wanted to solve.
>>: Of course I think -- I mean, one possibility [inaudible] the new node is going to be able to receive [inaudible]. And I think that might be a correction to this solution. [inaudible] the other peers just assume this new node can receive R [inaudible] technically based on that. Now, the new peer needs to ramp up [inaudible]. So that may be our interpretation of how -- why we are putting that [inaudible].
>> Yao Zhao: Okay. [inaudible] without notification, then, you know, all the receivers receiving traffic from the leaving peer are affected, and then, you know, we have to readjust the rates in the system.
So to foster convergence to the new state, we use slow start here.
Okay. Then let's see how we deliver the content. Basically we have two approaches: one is network coding, the other is a tree-based algorithm. For network coding I think, you know, everybody here knows a lot, so I do not need to introduce it.
But basically the disadvantage of network coding here is, first, it has packet header overhead; the second is that it will generate some delay. So first, we cannot use a generation-based approach, because, you know, otherwise you have the large delay of the whole generation.
And second, even if you use earliest decoding, there is still some decoding delay there.
So in our approach, we try to, you know, do the earliest decoding as much as we can. The coding strategy is that we do not have generations here, and conceptually we mix all the packets together.
But because we have feedback, we avoid mixing in packets that the receiver has already decoded, so that we can save coding overhead. Conceptually, though, we can mix all the packets.
And then in this case, you know, you can decode the packets as long as your min-cut is larger than the video rate. Right. Because there are more received packets than original packets, your matrix will have full rank, and then you can decode them all.
But it introduces decoding delay in different respects. The first is that, because there are delay differences on the different paths from the source to the receiver, if some packet comes late -- comes very slowly -- it may make other packets that went through the quick links unable to be decoded.
And the second is that loss may cause decoding delay. Say you first get packet one; second you get packet one plus packet two; third you get packet one plus two plus three. Then if everything's good, you can decode every packet.
But if packet one plus two is lost, then you cannot decode the rest. Even if you have some redundancy -- say 10 percent redundancy -- then probably only when you receive the tenth packet do you gather a redundancy packet that lets you decode. So this loss can kind of cause a large decoding delay here.
So in our observations, we easily see the matrix grow to like five to ten rows because of loss or the delay differences.
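The loss example above can be made concrete with a small GF(2) rank check, where each received coded packet is written as a bitmask of the original packets mixed into it; the bitmask representation is illustrative, not the actual packet format:

```python
def rank_gf2(received):
    # Gaussian elimination over GF(2) via an XOR basis.
    # Rank equal to the number of originals means everything decodes.
    basis = []
    for vec in received:
        for b in basis:
            vec = min(vec, vec ^ b)  # reduce against the basis
        if vec:
            basis.append(vec)
    return len(basis)

# p1 = 0b001, p1+p2 = 0b011, p1+p2+p3 = 0b111
print(rank_gf2([0b001, 0b011, 0b111]))  # 3: all three packets decode
# If p1+p2 is lost, rank is only 2: p1 decodes immediately, but p2 and
# p3 must wait for a redundancy packet -- the extra decoding delay.
print(rank_gf2([0b001, 0b111]))  # 2
```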
>>: Delay difference is the same as -- in the tree?
>> Yao Zhao: Yes. But in a tree, if you go through a slow path, you just affect your own packet only. With network coding, you may affect other packets, because other packets are waiting for you so that they can be decoded. Assume you send packet one on one path, packet one plus two on another path, and packet one plus two plus three on a third.
>>: [inaudible] in a tree. I mean, if you send -- let's say you're coding a frame and you're sending different packets on different trees, then you still have to wait for the worst-case delay to be [inaudible].
>>: I guess he's concerned [inaudible] coding [inaudible] cause -- the delay spread
problem will cause further delays in decoding.
>> Yao Zhao: So in the [inaudible] case, if one packet comes late, like 200 milliseconds, then you'll probably just drop it. That's your cost. But in the network coding case, if this packet comes after 200 milliseconds, it may cause some previous packets to be undecodable, and, you know, you drop all those packets because they exceed the delay requirement.
So in this case, you know, it will cause different results, different...
>>: So you're putting it into the same category as the loss?
>> Yao Zhao: Yeah. If your packet is over 200 milliseconds, you cannot, you know, use this packet. It's like a loss. And it will cause, you know, some other packets to be dropped as well.
And the other approach we use is, you know, tree packing. There, you know, we calculate the min-cut, then we decide the video rate, which is the min-cut divided by R, and then we can, you know, use Edmonds' algorithm to pack the trees and send out the packets, with the packing trees encoded in the packets.
And the peers just forward those tree packets. Then, because we pack the video rate to be smaller than the min-cut, they have redundant capacity, and they use network coding on that redundant capacity for robustness to the loss, or some jitter.
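The bookkeeping for this hybrid scheme might look like the following sketch; the 1.1 redundancy factor matches the target mentioned later in the talk, but the function itself is an illustrative guess, not the actual implementation:

```python
def allocate_rates(min_cut, target_video_rate, redundancy=1.1):
    # Split the achievable min-cut between tree-packed video packets
    # and network-coded redundancy, per the hybrid scheme above.
    # Video rate is capped so min_cut >= redundancy * video rate.
    video = min(target_video_rate, min_cut / redundancy)
    fec = min_cut - video  # leftover capacity carries coded repair packets
    return video, fec

# Hypothetical numbers: 495 kbps min-cut, 400 kbps target video rate.
print(allocate_rates(495.0, 400.0))  # (400.0, 95.0)
```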
So here's a simple comparison. Network coding is kind of a clean solution, and for the multicast case it is actually optimal. But for the [inaudible] it has some decoding delay.
And in the experiments, I will show there is some loss [inaudible] delay difference.
And tree packing has low delay, but it's not optimal for the multicast case. And there is some adaptation delay for repacking the trees.
Okay. Now the evaluation. Basically I used ns-2, and we studied two topologies, but in the simulation results I only show the one with the island model. We have scenarios with or without slow start, with or without propagation delay; the delay difference between tree packing and network coding; and how the rates adapt when nodes join and then leave.
The topology is the kind of standard topology we always use, the dumbbell. The bottleneck link is from R1 to R2, which is 500kbps. And the roundtrip delay -- no, the one-way inter-island delay is about 50 milliseconds, and the intra-island delay is about 10 milliseconds.
So from source to B, a packet will have at least like 50 milliseconds delay here.
Here is the convergence without slow start. The left figure is the rate of peer A, which is in the same island as the source. Generally, the red line means the total received rate and the green line means the innovative packet rate, which is the video rate.
Basically they take about seven seconds to converge to the final state, and then they are stable there.
>>: Why does the time start at 10?
>> Yao Zhao: The time starts at time 10, yeah. So the video starts at time 10. It always starts at time 10 in the following evaluation results.
Then with slow start, we can see that it actually achieves a good video rate very quickly, in just one or two seconds. Because with slow start it actually first grabs more than enough rate, and then decreases, because the utility function finds that the penalty is too large once you have already achieved a good rate.
Also, actually, in this simulation I did not use propagation delay yet. Because when we use propagation delay, it will shift the traffic to the short links and make the convergence different.
This is the cumulative distribution function of the delays using the tree packing algorithm. We can see that for most packets, you know, the delay is just 100 milliseconds for peers B and C, which are not in the same island as the source.
But for peer A, which is in the same island, a lot of packets -- 30 percent of them -- see low delay, because these packets come from the source directly. The other packets come from B and C, back from the other island, which makes for a large delay.
And this is the, you know, comparison between network coding and tree packing. Basically they are quite similar, but network coding on average uses 10 milliseconds more because of the decoding delay.
>>: So basically that [inaudible] because when you actually [inaudible].
>> Yao Zhao: Yes.
>>: If you're still [inaudible] not look like that.
>> Yao Zhao: Right. So a certain number of packets are still uncoded packets, even when we use network coding.
>>: So basically the left part is the [inaudible] packet sent from the source is always
good.
>> Yao Zhao: Right. Sure.
>>: What's the difference between the maximum delay between the two?
>> Yao Zhao: The maximum delay -- actually for the network coding part, the maximum delay is, you know, larger than with tree packing. Because sometimes, you know, the matrix piles up because it cannot be decoded.
>>: So in the network coding case, is it all pure network coding? Are you doing --
>> Yao Zhao: Our strategy is that the packets from the source are always uncoded. Then when a peer receives a packet, it will send out one copy of the uncoded packet. And the other --
>>: Wait. Which scheme are you talking about?
>> Yao Zhao: The network coding-based approach. When we use the network coding approach, to minimize the decoding delay we send a certain number of uncoded packets in the system -- as long as we are sure that such a packet will not be, kind of, forwarded uncoded by others as well. For example, the source can always send out the uncoded packet, because that packet is guaranteed to be innovative.
>>: And then --
>> Yao Zhao: And then when a peer receives this uncoded packet, it may, you know, send it out a couple of times. The first time, it sends the original packet; the other times, it mixes this packet with other packets.
So basically two copies of every video packet are sent uncoded in this system: the source sends one copy, and one other peer sends out one copy.
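The forwarding rule just described -- the first copy of a packet goes out uncoded, later copies are mixed -- can be sketched like this; XOR of index masks stands in for the real random linear mixing, and all names are hypothetical:

```python
def forward_copies(pkt, buffer, n_copies, sent_plain):
    # Forwarding rule sketched above: the first copy a peer sends of a
    # fresh source packet goes out uncoded (systematic); later copies
    # are mixed with buffered packets. pkt and buffer entries are
    # bitmasks of original packet indices; XOR models the mixing.
    out = []
    for _ in range(n_copies):
        if pkt not in sent_plain:
            sent_plain.add(pkt)
            out.append(pkt)          # first copy: uncoded
        else:
            mixed = pkt
            for other in buffer:     # later copies: mix with held packets
                mixed ^= other
            out.append(mixed)
    return out

sent = set()
# p3 = 0b100 forwarded three times, buffer holds p1 = 0b001 and p2 = 0b010.
print(forward_copies(0b100, [0b001, 0b010], 3, sent))  # [4, 7, 7]
```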
>> Yunnan Wu: My take is [inaudible] sending a copy of the unencoded packet to reduce the average decoding delay [inaudible] it helps to reduce those [inaudible]. The reason is that those packets do not need to be decoded; you do not need to receive other mixed packets to be able to see the content in those packets. So those [inaudible].
I think Yao's rule is more like saying: I send one copy of every packet out in uncoded fashion. You can think of it as systematic network coding in that case. You send one times the rate as all uncoded packets [inaudible].
Now, for video playback, in most cases your jitter buffer, you know, needs to target the high-percentile delay case. Right? I mean, for you to accommodate, let's say, 99 percent of the [inaudible] arrivals and control your jitter [inaudible].
So in those cases, I think whether you send uncoded packets or coded packets from the source [inaudible].
Did this answer your question? I'm also [inaudible].
>>: I mean, well -- maybe. It just seems like kind of a hack that would have been achieved anyway. I mean, the first packet that is any mixture of the newest source packet would be uncoded anyway. So to have a special rule about sending such a packet uncoded seems --
>> Yao Zhao: Well, I think it helps when, like, the latest packet from the source has a long delay to reach the peer. For example, if for some reason the new packet only reaches the peer at around 200 milliseconds, and it is a coded packet, and you are awaiting some other packet, then this packet is probably useless to you because it's above the delay --
>>: Okay. Let me ask another question. So you had mentioned before this slide that you were doing some hybrid thing where the trees make up the bulk of the traffic, and then the network coding adds sort of the FEC part of it.
>> Yao Zhao: Yeah.
>>: So when you're comparing these two things, which of these is [inaudible] system? I
mean, both of them use network coding and packing trees. So I'm kind of confused. Is
this a pure network coding system? Is this a pure packing tree system? Or is it a
combination? I mean, what are we looking at?
>> Yao Zhao: The tree packing one is basically kind of a hybrid. But the tree packets, you know, those uncoded packets, can probably already deliver all the data; the function of the network coding is just to make it robust to the jitter or to the loss.
>>: That's this case over here.
>> Yao Zhao: Yeah, for the right case.
>>: Okay. And then what is this?
>> Yao Zhao: For the left case, you know, it basically uses network coding. We don't have any trees there, but we try to send uncoded packets as much as possible so that we can reduce delay.
>>: Okay. So both of them have some element of both.
>> Yao Zhao: Yes.
>>: Okay.
>> Yao Zhao: Sure.
>>: Yao, [inaudible] do you have a graph showing only using network coding? So basically, I mean, all the packets sent out by the source [inaudible]? I assume you must have run an experiment like that.
>> Yao Zhao: Yeah, I did. The older experiments did not send any uncoded packets when using the network coding approach.
>>: [inaudible] comments what a curve would look like if you don't have [inaudible].
>> Yao Zhao: I don't have that graph right now. But then, you know, for this path [inaudible] you may not see a good shape like this here, because the packets from the source to peer A, you know, have to wait for the packets that go from the source to the next island and back --
>>: Why?
>> Yao Zhao: -- because -- look at our topology. Sorry. You know, some packets come from the source directly. Others go from the source to B and then back to A. So if all packets are coded, then sometimes A receives packet one plus --
>>: [inaudible] delay mean, then? That the graph -- your graph --
>> Yao Zhao: [inaudible] end-to-end.
>>: Time -- it's in the time a packet is generated in the source and the time you decode it.
>>: Are you looking at the link delay or the total delay?
>> Yao Zhao: Just the total delay of the packet. So basically, you know, you would not see this shape here; probably all of them would be over here. Because, you know, you have to wait for the packet to go to the other island and come back before you can decode it.
>>: One thing I observe is this: I mean, [inaudible] the current maximization is rate maximization? Delay currently is basically picked as a penalty term. So I think in this [inaudible] topology, we actually use a crosslink --
>>: Yeah. I just think, you know, it's probably a little unfair, or not the right thing to be looking at here -- the end-to-end delay of every packet. Because you have to push through a certain amount of rate to decode this video, and some of that rate will be transmitted over short paths and some has to be transmitted over longer paths, because you can't fit all that rate over the short paths.
So to count the short paths as if, oh, they're faster doesn't make any sense, because you can only decode, let's say, a video frame when you have all of the bits to decode that frame. So it's a little bit unfair [inaudible]. I understand what you're measuring; whether it really means anything [inaudible] a different thing.
>> Yao Zhao: Then, you know, as we said, we add the propagation delay into the penalty of the link [inaudible]. In that case, this shows the rate from the source to peer A. At the very beginning, you know, they converge to one state, and then, because this link has a lower propagation delay, it kind of shifts the traffic from the other links -- those cross-island links -- to these quick links. And this can reduce the delay at which peer A receives and decodes the packets.
And in that case, you know, the shape of the cumulative distribution function for peer A kind of changes. Basically this part increases from 40 or 45 percent to 70 percent. You know, if we ran longer, it would actually go close to one, because peer A would receive all the traffic from the source directly. So this is how the propagation delay term helps. And it reduces the average delay in this case to 60 milliseconds.
And then let's see how it works when nodes join and leave. Here we assume, you know, we have these events: at time ten, peer B joins -- so at that time you have only the source and peer B. Then at time 32 peer A joins, then peer C joins, then peer B leaves, and then peer A leaves. So we make it, you know, dynamic in terms of nodes.
This is the rate of peer B, you know, which is in a different island from the source. Because B is the first peer to join the network -- join the channel -- with slow start, after a couple of seconds B grabs the video rate of 400kbps.
And then at time 32 peer A joins. Because, you know, peer A is in the same island as the source, peer A actually grabs its traffic from the source directly and does not affect peer B. Because, you know, from S to A there is very large bandwidth, it doesn't affect peer B much.
And then when peer C joins, it has a large effect on peer B. Because C is in the same island as B, and the bottleneck link is the inter-island link, when C tries to grab traffic, it grabs from the source and from peer A. Then it uses more resources on the bottleneck link; then we see large queuing delay, and then we see loss, and then, you know, the min-cut of the system becomes lower.
And then, you know, we can see -- these red lines are the receive rate, and the green line is the innovative packet rate. So the rate kind of drops -- has a dip there. So that's how a peer joining affects the rate.
>>: [inaudible].
>> Yao Zhao: Yeah, it recovers. And the effect is not very large. You know, you have a little bit of a rate drop, but it doesn't affect much.
>>: So when it recovers it's getting stuff from B.
>> Yao Zhao: Yeah, it will get it from B. Because the bottleneck link is, you know, there: whenever the source sends to C, this traffic should be sent to B as well.
>>: So because there's so much jitter on this green line and it's hard for me to see, I
mean, are there any losses anywhere?
>> Yao Zhao: There is loss. I didn't --
>>: I mean, if you averaged it over time, would it always be, like, constant [inaudible] 400?
>> Yao Zhao: 400, yeah. 400 is our target video rate here.
>>: Yeah. I mean, so has it ever dipped below?
>> Yao Zhao: I'm sorry?
>>: Has it ever dipped below 400?
>> Yao Zhao: Yes. Here actually it's just 350. Actually [inaudible] 350. So it has dips
there.
>>: I mean, after you average it all [inaudible].
>> Yao Zhao: If you average out, it's kind of here, about 350 probably in this.
>>: I think [inaudible] question is actually [inaudible] videoconferencing experience is
going to be interrupted [inaudible].
>>: Yeah. During that little period.
>> Yao Zhao: Right.
>>: Okay.
>>: [inaudible] S to B is 500, the minimum cut.
>> Yao Zhao: The minimum --
>>: Why don't we converge to 500?
>>: [inaudible] 500. In the last generation I think the rates converge to something like 450.
>>: Yeah. But why not converge to 500? Because from your figure, it seems that from S to B it would only be -- over there it's 500.
>> Yao Zhao: Because we -- you know, as we said, we just want to converge to a certain rate, not grab all the possible rate. If our target video rate is 400kbps, then we converge to, say, 1.1 times this 400. So we just grab 440 [inaudible].
>>: My interpretation is that --
>>: I think you want to try to maximize [inaudible] have a fixed target.
>> Yao Zhao: We have a fixed target.
>>: So you want [inaudible].
>> Yao Zhao: Greater -- like 1.1 of the target. That's what we want.
>>: [inaudible] so that's part -- maybe that's the one [inaudible] is to maximize the function and your upper bound [inaudible] to be less, or you could 1.1 some [inaudible] --
>>: You're designing your utility function so that you're optimizing [inaudible].
>> Yao Zhao: So that's actually the --
[multiple people speaking at once].
>>: 1.1 times the video rate.
>>: What's the video rate?
>>: Saying you fix [inaudible] some kind of encoding.
>> Yao Zhao: Some rate that you think is fast enough, like 500 meg. So you decide this parameter.
>>: Correct me if I'm wrong, I think we are stable also because at a rate around 450 the delay starts to kick in. [inaudible] delay terms, I mean, because we use the network delay term in the convergence, right? That delay term will kick in at some point. Is that why we're picking basically something like 450?
>> Yao Zhao: Not really. In this case it may not. Because I can, you know, change the utility function a little bit to try to grab 1.2 of the video rate -- that's 480k. Then the rate will increase to 480 instead of 440.
>>: So you are saying [inaudible] 450?
>> Yao Zhao: If your target rate is 400. But if your target is 3 meg, then you will achieve 2 meg, because that's the maximum you can achieve.
So if your target [inaudible] is too high, then the loss and the queuing delay will kick in. If it is not, then the link penalty will kick in.
Okay. Then let's look at peer A. At time 30 it joins. Because we do not have slow start here, it converges a little more slowly: it takes five seconds to join the network. And then when peer C joins, it is also affected a little. But because A, you know, gets most of its traffic from the source directly, the effect is not very large -- not serious.
And then at time 70, peer B actually leaves the network. Because a lot of the traffic A grabs actually goes from the source to B and then back to A, there's a dip there: when peer B leaves, the min-cut of the system drops, and all the uploading rate from B, you know, suddenly turns out to be zero.
So in this case, the slow start tries to reach the new convergence state, and after like two or three seconds, you know, the rate goes back.
So the slow start actually helps us to, you know, quickly converge back.
>>: Yao, a question. Is this basically, I mean, simulation with propagation or without
propagation?
>> Yao Zhao: Without.
>>: Without?
>> Yao Zhao: Yeah. With [inaudible] propagation delay, peer A would probably still observe this one, because the minimum min-cut of the whole system drops. Because C is still there.
>>: I assume that -- I mean, with propagation [inaudible] the effect here would be smaller [inaudible] basically you would gradually shift the rate [inaudible] from S to A and use less of the min-cut.
>> Yao Zhao: Yeah. And this is the rate of peer C: at 50 seconds it joins the network
and takes about five seconds to converge. When peer B leaves, it [inaudible] slow start
to converge back. And when peer A leaves, it uses slow start again to quickly grab the
video rate.
>>: How do you know when to use slow start?
>> Yao Zhao: So basically, if the source detects that someone has left the network, then
suddenly it knows that all the upload rate from that peer is gone. Then it asks everybody
to redo slow start, to try to quickly converge back. So basically the source will send out
a signal saying, let's redo slow start to converge quickly.
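That restart signal can be sketched as follows. This is a minimal illustration only; all names here (`SlowStartController`, `RESTART_SLOW_START`) are hypothetical, since the talk does not specify the actual message format or protocol.

```python
# Hypothetical sketch of the source-side restart signal described above.
# When a peer departs, its upload capacity vanishes at once, so the
# min-cut of the remaining overlay drops; the source then asks every
# remaining peer to redo slow start.

RESTART_SLOW_START = "restart_slow_start"   # made-up message name

class SlowStartController:
    """Tracks active peers at the source and broadcasts a slow-start
    restart signal whenever a departure is detected."""

    def __init__(self, peers):
        self.peers = set(peers)
        self.outbox = []          # (recipient, message) pairs the source would send

    def on_peer_left(self, peer):
        if peer in self.peers:
            self.peers.discard(peer)
            for p in sorted(self.peers):
                self.outbox.append((p, RESTART_SLOW_START))

ctrl = SlowStartController(["A", "B", "C"])
ctrl.on_peer_left("B")
print(ctrl.outbox)   # the remaining peers A and C each receive the signal
```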
So this shows the scatter of the delays of every packet. Basically for peer A, at the very
beginning some packets see large delay because the system is still in the convergence
state. Then after some time, at 50 seconds when peer C joins, the delay it observes
increases a little bit.
And we can see there are two lines. This line is the packets coming from the source
directly, and these lines are the packets that go from the source to B or C and then back
to A.
And after C joins, this delay is larger, I guess because of the bottleneck link.
And for peer B, the join at 30 seconds doesn't affect it, because A just gathers traffic
from the source directly. But when C joins, a few packets actually see large delay.
Okay. Then let's look at some other issues we may have. The first one is the TCP
friendliness issue. Basically, if we have two different utility maximization algorithms
working in the same system, what will they converge to? Actually, they will converge to
this state.
This is the derivative of the first utility function; the derivatives will be equal, and equal
to L. If we just use loss, then L is the loss rate. So basically they all see the same loss
rate and set their derivatives equal to the loss. That's the convergence point and the
maximization point of the system.
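That equilibrium condition can be illustrated numerically: on a shared link, each session's rate satisfies U'(x) = L, and the common price L rises until total demand fills the link. The utility functions below (log x and -1/x) are illustrative stand-ins, not the ones used in the talk.

```python
# Numeric sketch of the convergence condition above: two sessions share a
# link of capacity C; each pushes rate until its utility derivative equals
# the common congestion price L (e.g. the loss rate).  Utilities here are
# stand-ins: U1(x) = log(x) and U2(x) = -1/x.

C = 3.0                      # shared link capacity (say, Mbps)

def demand1(L):              # U1'(x) = 1/x   = L  =>  x = 1/L
    return 1.0 / L

def demand2(L):              # U2'(x) = 1/x^2 = L  =>  x = L^(-1/2)
    return L ** -0.5

# Bisect (geometrically) on the price L until demand exactly fills the link.
lo, hi = 1e-6, 1e6
for _ in range(200):
    L = (lo * hi) ** 0.5
    if demand1(L) + demand2(L) > C:
        lo = L               # price too low: total demand exceeds capacity
    else:
        hi = L

x1, x2 = demand1(L), demand2(L)
print(round(x1, 3), round(x2, 3), round(L, 4))
```

At the fixed point both derivatives equal L and the rates sum to the capacity, which is exactly the "derivatives equal, and equal to L" condition above.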
But because our utility function is quite different from TCP's utility function, the
convergence point is hard to characterize. And also, how do we define friendliness?
Suppose we have [inaudible]; then it's equivalent to having N squared TCP links,
because between every pair of peers you actually have a peer-to-peer link.
And also, our target is the min-cut, not a single link. So this link may be important for
the min-cut, and we really want to increase it, but for some other links getting less rate is
fine because it doesn't affect the min-cut.
So, so far I'm not sure what kind of friendliness we want to achieve toward TCP. But
let's see what we have right now.
Basically I use a simple topology, just one source and one receiver. I run one session of
our utility maximization based approach, and against it five TCP sessions, as in this
graph. The link capacity is 2 meg.
When it converges in this part, the TCPs are still working, and from time 100 the TCP
connections are gone. Before that, when the TCPs are competing with our approach, we
receive about 480kbps and each TCP connection receives 300kbps.
In another experiment I used ten TCP connections. Then our session basically drops
from 480 to 420kbps, and the TCP connections roughly halve their rate, to 150kbps on
average.
So this graph shows that with the current parameter setup and this utility function, we're
somewhat more aggressive than TCP. Because when our rate, the min-cut, is less than
the video rate, we have a large incentive to request more rate.
>>: [inaudible] 400 kilobytes per second [inaudible] 2 meg.
>> Yao Zhao: In that case, I guess, if we set our target rate to 500, then because we
achieve less than 400 (we cannot achieve more than 400), our incentive, you know, the
derivative of the utility function, is pretty large.
>>: [inaudible].
>> Yao Zhao: Yes.
>>: [inaudible].
>> Yao Zhao: Yes. In that case I guess we would dominate TCP. Maybe we get 300 and
all the other TCPs share those 100.
>>: Is that because TCP is using [inaudible]?
>> Yao Zhao: So basically it's because of TCP's utility function: if you look at it, it's
something over X squared, where X is the rate. So when the rate is higher, TCP's
derivative is smaller.
>>: [inaudible] curve is more aggressive than [inaudible].
>> Yao Zhao: Yeah. Our curve is more aggressive. If we look at the derivatives of the
utility functions, TCP's drops quickly and ours drops slowly. That makes the
convergence point favor our session.
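The "shape of the derivative" argument can be made concrete. Neither utility is given exactly in the talk, so the functions below are assumptions: TCP's derivative is taken as proportional to 1/x^2 (consistent with "something over X squared" above), and the video utility's derivative is modeled as staying near-constant until about 1.2x the 400kbps target (the 480k figure mentioned earlier), via a sigmoid.

```python
import math

# Assumed derivative shapes, not the talk's exact utilities:
#  - TCP: U'(x) proportional to 1/x^2, so marginal value collapses as rate grows
#  - video: U'(x) stays near 1 until ~1.2x the target rate, then drops (sigmoid)

def d_tcp(x):
    return 1.0 / x ** 2

def d_video(x, target=0.48, steep=30.0):
    return 1.0 / (1.0 + math.exp(steep * (x - target)))

lo_rate, hi_rate = 0.1, 0.4          # Mbps, both below the video target rate

tcp_falloff = d_tcp(lo_rate) / d_tcp(hi_rate)        # drops by (0.4/0.1)^2 = 16x
video_falloff = d_video(lo_rate) / d_video(hi_rate)  # barely drops at all
print(round(tcp_falloff, 1), round(video_falloff, 3))
```

Because the video session's marginal utility stays high everywhere below its target rate while TCP's decays quadratically, the equal-derivative convergence point gives the video session the larger share, which matches the measured 480 vs. 300kbps split.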
>>: That's related to the shape of the utility curve [inaudible].
>> Yao Zhao: Right. Okay. Another problem is how we handle multiple sources here. In
the previous experiments I always used just a single source. One approach is to treat the
different sources as different sessions; then they compete with each other, you basically
compete with yourself, using the network signals like the queuing delay and the loss.
This one is simple; we do not change anything, we can just run the program. And I
actually did some experiments using this path: I made every node a source, so if we
have three peers, then we have three sources and three video channels competing with
each other.
And the results show that, using the uplink bandwidth limitation (bottleneck) model, it
seems to be fine; they still converge to good states.
And I think the theory itself also tells us that, because they use the same utility function,
they are fair to each other.
Another approach is to combine the sessions from different sources. A simple way is to
introduce a virtual source node that sends packets to the real sources; you actually add
some virtual links and a virtual node, and then you can do network coding.
The network coding part is the same: just mix every packet from the virtual source.
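The virtual-source construction can be sketched as a graph transformation: add one node with effectively infinite-capacity edges to every real source, and the multi-source problem reduces to a single-source one whose min-cut is an ordinary max-flow. The topology and capacities below are made up for illustration; the max-flow routine is a plain Edmonds-Karp, not the talk's algorithm.

```python
from collections import deque

INF = 10 ** 9   # stands in for "infinite" capacity on the virtual links

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a dict-of-dicts capacity graph."""
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:           # BFS for an augmenting path
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                        # recover the path, find bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)
        for u, v in path:                      # push flow, add residual edges
            cap[u][v] -= push
            cap.setdefault(v, {})[u] = cap.get(v, {}).get(u, 0) + push
        flow += push

# Two real sources S1, S2 feeding a receiver R; V is the added virtual source.
cap = {
    "V":  {"S1": INF, "S2": INF},   # virtual links, never the bottleneck
    "S1": {"R": 1},
    "S2": {"R": 2},
}
mc = max_flow(cap, "V", "R")
print(mc)   # → 3: the min-cut of the combined single-source session
```

Because the virtual links have unbounded capacity, the min-cut from V is determined entirely by the real links, which is why the single-source rate-control and coding machinery carries over unchanged.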
And for the tree packing, it's a little bit trickier: you need a central node to tell the real
sources how to pack the trees.
Conceptually each source could pack trees separately using the link states, but because
of synchronization problems the real sources may not see the same state, and their
packed trees may collide with each other. That's why we need some central algorithm,
so the different sources can pack the right trees and fully use the link rates.
So to conclude, we designed a utility maximization based approach to control the rates
in a peer-to-peer system. It can discover the network capacity automatically, it has fast
convergence, and it is robust to dynamics.
But we have a lot of other issues to study, like TCP friendliness. The multisource part is
still at an early stage. And the [inaudible] evaluations are on ns-2; we really need a
system built and tested on real networks. And also, what happens if we have a server to
help? In that case the problem changes: the broadcast problem may become a multicast
problem, and that affects the design.
And another issue is how to combine video and audio. Audio has a much lower rate, but
it has higher sensitivity to delay, so we also need to consider this.
Yeah. Okay. Yeah. Thank you. Open to any questions.
[applause]