>> Rich Draves: All right. Well, I'm pleased to introduce Chen Liu. She's visiting us
from the University of Alberta. She's in town this week to attend the Grace Hopper
Conference down in Portland. But while she's here, we're taking advantage of the
opportunity to have her tell us about her research, particularly relating to resource
management and wireless networking.
So Chen.
>> Chen Liu: Thank you for the introduction. It's my pleasure to present my research at
Microsoft. And today I'm going to talk about the framework I developed to achieve
effective resource management for multi-hop wireless networks. By effective resource
management, I mean three criteria: first, how fair the resource allocation is; second,
how efficiently the allocated resources are being utilized; and third, how low the control
overhead is.
This is joint work with my supervisors Dr. Janelle Harms and Dr. Mike MacGregor. In
this talk, I will first motivate this research and summarize our contributions. After that, I
will present this proposed framework into four parts. First, the first mechanism aims to
find the optimal fair share of resources for wireless networks via globalized local
optimization. Please ignore the name now and I will explain that later.
The second mechanism aims to improve resource
utilization efficiency via adaptive multi-variable control. The third and fourth
mechanisms handle the challenges in multi-hop scenarios, specifically the
co-existence of interpath and intrapath interference as well as correlated congestion and
collisions.
At the end I will conclude my talk by pointing out our future directions. The motivation
for this research is very straightforward. On one hand, users always want more
resources and better service, and the variety of applications keeps growing with more
and more new apps. So it is challenging to satisfy user demands in terms of quantity,
quality and diversity. On the other hand, wireless resources are limited and contested
among multiple users. On top of that, wireless transmissions are lossy, subject to
dynamic interference scenarios and dynamic network conditions.
Especially when traffic is forwarded in a multi-hop manner, network performance
degrades further. So there's a big gap between the user demands and the actual
network delivery capability, and an effective resource management mechanism helps to
reduce this gap. It also makes the network more tractable, so other advanced
mechanisms can be built on top of it.
>>: So what kind of application platforms are you concerned with?
>> Chen Liu: Right now it's the generic model. And I aim to apply it to specific networks
or systems. That's the goal I will talk about in the future work.
>>: Depending on --
>> Chen Liu: So now the scenario is ad hoc, multi-hop wireless network.
>>: The applications that you're talking about...
>> Chen Liu: Applications -- so you mean what kind of applications are running in the system?
>>: You're worried about some user demands --
>> Chen Liu: At this point it's also generalized right now.
>>: Examples, so we could --
>> Chen Liu: So one example is QoS. That's the ultimate goal for
my work: you have multimedia, you also have interactive traffic, and you also have
data transmission like file transfers or Web browsing. So you have different
requirements, right? So that can be one scenario to consider.
Did I answer your question?
>>: So you're assuming that there are some servers serving some content, and the
users have a platform that can consume this content, but this network is kind of a
bottleneck to deliver this content to the end users?
>> Chen Liu: Yes, that's the case. Okay. So this is the motivation. The major
contributions of our research can be summarized into four aspects. First, we offer a
novel method to achieve fair allocation despite the diversity of user requirements and
the impact of wireless interference and dynamic network conditions.
And the second contribution is that our method efficiently utilizes resources despite the
co-existence of conflicts, in terms of collisions, and waste, in terms of unnecessary
idling.
Okay. Anyway, the third contribution is that our method effectively controls network
behaviors despite complicated interference scenarios. These interference
scenarios include single-hop ones, like hidden and exposed terminals, which we'll talk
about later on, and also the interference scenarios specific to multi-hop
networks. And the last contribution is that our method minimizes the control overhead,
in terms of both the number of messages passed around for control and
the computational complexity. Now I will start talking about the framework. The first
mechanism is the globalized local optimization. It aims to find the optimal fair share of
resources for wireless networks, and the reason it's called globalized local optimization
is because we achieve performance competitive with global optimization by only using
local information, without message passing. So before diving into the technical --
>>: Can I ask another background question? So is your work focused on Wi-Fi
networks, ad hoc Wi-Fi networks?
>> Chen Liu: Well, that is the application scenario we use now for the demonstration.
But, again, this is a generic model and I aim to apply it to different networks, for
example, mesh, or different systems. That's the ultimate goal.
>>: And we're talking about a single radio node.
>> Chen Liu: Right now, single radio.
Okay. So before diving into the technical details, I want to elaborate a bit on the
design philosophy. Here I want to discuss two issues. First, how is fairness
influenced by wireless constraints, in terms of dynamic interference scenarios and
network conditions?
As we know, the conventional fairness criteria were developed for wired networks,
where we do not need to worry about interference and the network conditions are
relatively stable. But when we use them in wireless networks, the consequence is a gap
between the amount of resources being allocated and the amount of resources
actually used for successful transmission. Why? Two reasons. First,
transmissions are lossy in wireless networks. Second, the scheduling algorithm may
make false detections and wrong decisions -- CSMA, for example. So what can we do
about this gap?
From the utilization point of view, we should make this gap as small as possible. But
the problem is that the effort we put into minimization also consumes resources, so
there's a trade-off between the price you pay and the gain you get from this gap
minimization. And if you see this trade-off from another perspective --
>>: I'm sorry, just a background question. So in the specifics, you're thinking of this as a
multi-commodity flow problem, where each node has demands to send to multiple other
nodes and you just need to route them in sort of a fair fashion. Is that the formulation?
>> Chen Liu: Okay. So --
>>: Or an abstract formulation?
>> Chen Liu: Okay. So a user here can be abstracted as a flow between a source and
destination pair. And flows may start from the same source node or different source nodes.
Currently I'm at the MAC layer, so routing is definitely an issue; I'll talk about it later on,
but it's not currently in this one.
>>: Out of the path or in the path?
>> Chen Liu: Right now the path is fixed. That's for the current stage. Okay. So, okay, I
was talking about -- okay. So from the efficient utilization point of view we should
minimize this gap. But we also have this overhead problem. That's what I just talked
about.
So if you think of this gap from the fair allocation point of view, it actually characterizes
the impact of wireless -- the current interference situation.
So instead of pushing to make the gap as small as possible and incurring high overhead,
you can adjust the fair share to reflect this trade-off.
So this is the first issue. And the second issue is which fairness criteria we should
support. User requirements are diverse and systems differ, so it's
impossible for a single fairness criterion to cover all the possibilities. So we
decided to support different types of fairness criteria. How? I will talk about that in a
moment.
Another related issue is who has the authority to define the fairness criteria. Before
answering that question, let's see what fairness is. It actually defines a bunch of rules
that prioritize a certain group of users and discriminate against another group of users
to a certain level.
So it's subjective and judgmental. So it's a question whether protocol designers should
define these criteria. In my opinion, it should be the people who have more knowledge
about the applications running in the system and the user profiles. They are the right
candidates to define the fairness criteria, and the protocol designers should support
whatever decision is made above.
So based upon this principle we separate the fairness definition from its fulfillment.
Again, what do I mean by that? Specifically, we leave the task of fairness criteria
definition to, for example, network operators or managers. Our mechanism simplifies
the way they derive the fair share by allowing them to assume stable network conditions
and static interference scenarios.
The idealized fair share they derive is plugged into our mechanism, and we
explicitly consider the impact of the actual interference
scenarios and readjust the fair share.
So these are the intuitions for this first mechanism. Based on these intuitions we
formulate this globalized local optimization, and we call it geo local for short.
And I will explain this formulation in terms of its components. The first component is the
consumption utility. It characterizes the user behavior, or how aggressively the user
consumes resources; and we choose the log function not for the purpose of
proportional fairness but just for the sake of simplicity --
let me finish this sentence -- because we aim to support different types of fairness.
Go ahead.
>>: If you can start by saying what the X --
>> Chen Liu: Pardon me?
>>: Maybe you could describe what the Xs represent?
>> Chen Liu: At a high level, X is the resource or bandwidth that you consume for
the transmission. Here it is the sending rate. Well, it can be the sending rate or
goodput, and you'll see it in the example I will show later on.
>>: And that I is --
>> Chen Liu: For each user, yes.
>>: Each user?
>> Chen Liu: The first two mechanisms I'll talk about are for single hop, and the
third and the fourth mechanisms are for multi-hop. In the single-hop scenario, one user
is like a conversation between a source and destination pair.
Okay. So that's I.
>>: So the Xs are given? The Xs are the demand the user is putting on the network.
>> Chen Liu: No the X is the actual.
>>: So network is granting X?
>> Chen Liu: Yes, it's granting -- you are changing that. Okay.
>>: I'm sorry --
>> Chen Liu: I'm talking about that right now, okay? So the second component is called
the consumption cost. It has two roles. First, it drives the user consumption behavior to
converge to a certain level, so that you don't always shoot for the maximum amount.
And the second role is more interesting. This XF.
It is the -- I call it the configurable fairness interface. It is where the idealized fair share
derived by whoever is in charge of this role, as we discussed before, is plugged in. And
if you ignore these scalars -- ignore this W, which I will talk about right away -- if you see
the formulation after ignoring these parts, the control policy will guarantee that XI will
always converge to XF, because it's just a simple formula.
But remember we are in wireless networks. This idealized fair share is impossible to
achieve, because of the reasons I talked about: the lossiness, and the imperfect way the
scheduling algorithm works. So there's always a gap.
So we have to explicitly consider the impact of collisions. Okay. Go ahead.
>>: What [inaudible] it's true, right? Because of UI and WI, once you have the max minus
1 minus KPIX.
>> Chen Liu: If you ignore K and this WXI term, then just take the derivative. So X
will -- XI will equal XF, right?
>>: So XI is X max, right?
>> Chen Liu: No, XI goes to XF. You just set the derivative equal to
0. The derivative of UI is 1 over XI, and the derivative of CI, which is this one,
is 1 over XF. When they're equal, XI equals XF, right?
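As a sketch of the argument just made -- assuming the utility is the log function and the cost term is linear with slope 1/XF, with the scalars and the W term ignored as in the talk -- the fixed point falls out of one derivative:

```latex
\frac{d}{dx_i}\Bigl(\underbrace{\log x_i}_{U_i(x_i)} \;-\; \underbrace{x_i/x_F}_{C_i(x_i)}\Bigr)
  \;=\; \frac{1}{x_i} - \frac{1}{x_F} \;=\; 0
  \quad\Longrightarrow\quad x_i = x_F .
```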
Thanks. So as I said, this XF is not realistic to achieve in a wireless environment
because of collisions and unnecessary idling, which is the amount of bandwidth that
should be utilized but is not.
Okay. So this is the formulation. And we demonstrate whether this geo local
optimization finds the optimal fair share in a simple network. It's Aloha. Why Aloha?
Because for this stage the control is simple.
>>: Go back to this one. So you're assuming some -- specifying the desired fair share
for each i. So should it be XF of I, or the same XF for everybody?
>> Chen Liu: Depending on the application scenario, right?
>>: But in your case it's the same for --
>> Chen Liu: No, it depends. You will see in this example that I'm going to talk about
right away -- it's a toy example. Based on the interference scenario, the
idealized fair share for each link is derived this way. It's not the same for each one.
In this demonstration I assume proportional fairness is the criterion
being used to derive this idealized fair share. And I will talk about that very soon.
>>: Proportional depends on the demands. You can't get the fair share without knowing
the individual demands.
>> Chen Liu: Right now we assume saturated traffic. More realistic
scenarios will be considered later on.
Also, we have to go back. I was here. Why Aloha? Because it's simple and can
demonstrate the purpose of this mechanism. Control effectiveness is not the
concern for the first mechanism; that's the focus of the second mechanism, and we'll
talk about that very soon.
So here I only have one control variable: the transmission probability. And this XI
is PI multiplied by the total capacity. For the control policy I will skip the
mathematical derivation; it's just based on the Lagrangian transformation and gradient
search. And you get this -- I've omitted the details. This is the control policy we
derived. If you're interested in the details, I refer you to the paper I published at LCN
this year. It has all the mathematical details.
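To make the shape of such a control policy concrete, here is a hedged toy sketch -- not the LCN paper's actual policy, which also models collisions and idling. A single Aloha transmission probability p is driven by gradient search on a log-utility-minus-cost objective, so that x = p·C converges to the idealized fair share x_f. All constants here are illustrative assumptions.

```python
C = 1.0      # normalized channel capacity (assumption)
x_f = 0.3    # idealized fair share, plugged in from outside (assumption)
eta = 0.05   # gradient step size (assumption)

p = 0.05     # transmission probability: the single control variable
for _ in range(2000):
    # objective: log(p*C) - (p*C)/x_f ; its derivative w.r.t. p is:
    grad = 1.0 / p - C / x_f
    p += eta * p * p * grad      # scaled gradient step, for stability
    p = min(max(p, 1e-4), 1.0)   # keep p a valid probability

# at the fixed point, x = p*C has converged to x_f
x = p * C
```

The point of the sketch is only the control structure: a local gradient update that needs no message exchange to find its fixed point.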
So we evaluate this first mechanism in two setups, which I will cover over
the next slides. The first one is a toy example with just three links: link one and link
three interfere with link two, and they do not interfere with each other.
This is just for easy illustration. After that I will talk about larger, randomly
deployed networks. These are the simulation results.
So we compare our method with the global optimization algorithm developed by
Dr. Mung Chiang from Princeton University. If you're interested, I'll show you the paper
after the talk.
And in this idealized optimization you assume that
collisions and unnecessary idling do not happen because you have some perfect
scheduling. That's how the idealized fair share is derived.
So for this toy example, the second row shows our performance, and it's quite
close to the global optimization performance. But you see a big gap between the
idealized fair share and our performance. That's because Aloha is very naive in
terms of the way it avoids collisions and handles these conflicts.
>>: What kind of simulator was this in?
>> Chen Liu: ns-2.
>>: ns-2.
>> Chen Liu: So we do the same experiment. Okay, go ahead.
>>: I'm sorry.
>> Chen Liu: No problem.
>>: Can you explain what about -- I mean the global optimization is doing something that
requires global knowledge, right?
>> Chen Liu: It maximizes the total utility for all the users. And it has the -- well, it
decomposes this global optimization problem into local ones. But each user has to
collect the current consumption information from all the competitors -- that means
whoever is interfering with you and how much they consume -- and you compute
some kind of shadow price and readjust -- in this example, the
transmission probability -- to maximize this total utility.
So that's the high-level description.
>>: The formulation that you presented, then, is making up for the lack of global
knowledge.
>> Chen Liu: So we only use local knowledge, because you know your local
collisions and you can infer the unnecessary idling. But the global
aspect is that you have the idealized fair share as the initial goal to reach, and you
readjust based on what the current interference scenario is.
>>: Are you saying that once you figure out the XFs, the global knowledge, then you
don't need any further global knowledge? Is that your proposal?
>> Chen Liu: Yeah, that's the goal, for the first mechanism. That's why we don't need
the message exchange for figuring out the actual fair share.
So we did the same experiment in a larger random network. We have 10, 20, or
30 random source-destination pairs. Here we vary the level of
interference in terms of the number of links -- sorry, this should not say paths. It shows
that our mechanism achieves similar or sometimes slightly better
packet loss rate, end-to-end delay, throughput and fairness. GO here is the
global optimization performance; the white bar is our performance, with similar fairness
performance.
We also vary the traffic load, and it shows similar results. So this is
the first mechanism. It aims to find the optimal fair share by only using local information,
without message passing.
And the second mechanism is called adaptive multi variable control. Okay. Go ahead.
>>: Is the global not optimal? It seems like your scheme is improving on the --
>> Chen Liu: Because of the way the global optimization works: the impact of
interference is not explicitly modeled. It's just based on best effort that you figure out
the fair share, but you're not making efforts to minimize or handle these collisions. So
that is the way they formulate it, and also the gap you see.
This is a problem for both our method and the global optimization, because the
control effect hasn't been taken into account yet. That's the second mechanism:
adaptive multi-variable control aims to minimize this gap.
So this is what I'm going to talk about. Okay. Here we explicitly consider different
interference scenarios. In time, interference signals can come from earlier
transmissions, and they can also come from simultaneous transmissions.
It is also possible the interference comes from transmissions later on, in the
future.
And in space, when two transmissions are not aware of each other's existence and
they interfere with each other, collisions happen. This is the typical hidden terminal
problem. In the opposite case, two transmissions are aware of each other's
existence and actually do not interfere, but they do not transmit. It's a waste of
opportunity. That's the exposed terminal problem. Our method differentiates these
scenarios: we select effective factors related to these scenarios and tune them toward
the direction we want.
This is the high-level idea. So these are the -- okay. The first factor is called the
idleness probability. There are two reasons I do not tune the physical carrier sensing
range. The first is the way we handle exposed terminals: I set
the carrier sensing range small, to be the same as the transmission range. The reason
is that within the transmission range, whatever information you receive, you are able to
decode it, so you will know where it is from and where it is going. So it's easy to know
whether exposed terminals will happen or not.
So this is our method to handle exposed terminals: limit the problem to within the
transmission range and then reason about it, deduce it.
In terms of hidden terminals, we use this idleness probability, which is the probability
that you do not transmit even though physical carrier sensing tells you the current
medium is idle -- because we have a limited carrier sensing range, the result might be a
false detection.
So we use that to adapt to the current hidden terminal problems. This is the first
factor, selected for handling collisions caused by earlier transmissions and
hidden and exposed terminals.
And the second -- before talking about the second and third variables, I want to review a
little bit how CSMA handles collisions. It divides collision handling into two phases:
collision avoidance and collision resolution. Before starting a transmission, you
randomly select a slot to try to avoid collisions caused by simultaneous transmissions.
Once a collision happens, you enter the resolution phase by doubling the contention
window to avoid repeated collisions. And your retry attempts stop when a certain
limit is reached.
And it associates a single contention window with both phases. The problem is that it's
possible that a transmission doesn't have many competitors around, so you could use a
small contention window to control the delay, to not incur a very long delay.
But the problem is that if a collision happens, this small contention window cannot grow
fast enough to avoid repeated collisions.
If you consider collision resolution, you want a bigger contention window,
but at the same time, if you have fewer competitors, that may cause extra delay.
So, because collision avoidance and resolution are two different phases, why don't we
just have two different variables? That's why I introduce the avoidance window to
handle collisions caused by simultaneous transmissions, and the resolution window for
collisions caused by future transmissions.
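A hedged sketch of that two-window idea -- the class name and all the numbers are my illustrative assumptions, not the talk's derived control policy. The first attempt draws from a small avoidance window, while retries after a collision draw from a separately sized, exponentially growing resolution window.

```python
import random

class TwoWindowBackoff:
    def __init__(self, cw_avoid=8, cw_resolve=16, cw_max=1024, retry_limit=7):
        self.cw_avoid = cw_avoid      # avoidance window: first attempt, bounds delay
        self.cw_resolve = cw_resolve  # resolution window: reacts to real collisions
        self.cw_max = cw_max
        self.retry_limit = retry_limit

    def backoff_slots(self, retries):
        """Random backoff slot given the number of collisions so far."""
        if retries == 0:
            cw = self.cw_avoid        # few competitors assumed: keep delay low
        else:                         # exponential growth from the resolution window
            cw = min(self.cw_resolve << (retries - 1), self.cw_max)
        return random.randrange(cw)

    def give_up(self, retries):
        return retries > self.retry_limit
```

Unlike standard CSMA/CA's single contention window, the two windows can be tuned independently, so a small avoidance window no longer forces slow collision resolution.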
So these three parameters are used to model the amount of bandwidth used for
transmission, which is XI. This model considers the impact of each
parameter as well as their interactions. In the same way, we model the amount of
bandwidth consumed by collisions as a multi-variable function of
these three selected factors.
And the derivation is based on iterative least-squares fitting. Okay. Here it's just very
brief, and please ignore the details. You combine this
model with the geo local optimization framework I talked about previously, and using
the Lagrangian transformation and gradient search, I get three control policies, for the
idleness probability, the avoidance window and the resolution window.
Okay. Now we do the same experiments, comparing our mechanism with CSMA as
the performance baseline. We also compare it with the single-variable
control we used previously, to show the gain of having multi-variable control. So that's
the control effectiveness.
We also compare our method with SPSA, simultaneous perturbation stochastic
approximation. Why do I choose it? Because this method estimates the gradient for
each factor by simultaneously perturbing all the factors.
It does not require a model; it's just an estimation-based approach. So you save the
modeling cost, but it has other issues, which we'll show in the
results soon.
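For reference, the SPSA baseline can be sketched as follows. This is a generic textbook form, not necessarily the exact variant compared in the talk, and the quadratic objective at the end is a stand-in for measured network performance.

```python
import random

def spsa_minimize(f, theta, a=0.1, c=0.1, iters=500):
    """Spall-style SPSA: two measurements of f per step, regardless of dimension."""
    theta = list(theta)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602                # standard decaying gain sequences
        ck = c / k ** 0.101
        delta = [random.choice((-1, 1)) for _ in theta]  # simultaneous perturbation
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        diff = f(plus) - f(minus)
        # every coordinate's gradient is estimated from the same two measurements
        theta = [t - ak * diff / (2 * ck * d) for t, d in zip(theta, delta)]
    return theta

# illustrative stand-in objective with known minimum at (1, -2)
opt = spsa_minimize(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2, [0.0, 0.0])
```

The appeal is that no model of the objective is needed; the cost, as the talk's results suggest, is slow convergence compared with a model-based policy.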
Again, the first is the toy example for easy illustration. The second row is our
performance, and you see the gap is smaller than before, compared with the two
single-variable controls.
Okay. And this SP is the performance of that estimation-based approach,
SPSA. In terms of convergence it's very slow; it hasn't even
converged for this example. That's why you see this performance.
Also, the CSMA/CA-ET -- so you use CSMA and account for exposed terminals by
setting the physical carrier sensing range small, so the nodes are more aggressive in
terms of transmission.
So the transmissions that suffer less interference become more aggressive, and the
ones suffering more interference are starved.
And when you account for hidden terminals, you are more conservative, and the
gap is bigger. Okay. Same experiment for the larger random network, with the
same setup. Our performance is the first bar on the left.
It achieves better packet loss rate, throughput, end-to-end delay and fairness index with
different offered loads and different interference levels.
>>: Like it's the aggregate across the whole network.
>> Chen Liu: Yes, that's the aggregate throughput.
>>: In this setting, how many nodes did you have?
>> Chen Liu: It depends. When we varied the interference level, we had 10 pairs, 20
pairs, 30 pairs -- that means 60 nodes at most.
>>: And the placement is random.
>> Chen Liu: Random.
>>: And what you showed is some kind of average --
>> Chen Liu: Yes, that's the average of the measurements.
>>: Okay.
>> Chen Liu: So far I've talked about fair allocation and efficient utilization for
single-hop networks. From the third mechanism on, we explicitly handle the
challenges of multi-hop networks. Just a note: this is work in progress. I have
preliminary results, but because it's not complete I don't show it here; I will describe
what I have verbally, a little bit.
So the third mechanism is called hyper CSMA scheduling. It's designed for the
co-existence of interpath and intrapath interference.
The way I see this problem can be illustrated by an analogy. It might not be
appropriate, but I think it serves the goal for now. I see the co-existence of inter- and
intrapath interference as similar to a group or a country that's under attack from both
internal violence and external invasion.
In this situation, if you try to fight both, you have more enemies, right?
But if you form an alliance inside your country, or whatever the organization is, you
consolidate your strength to fight the external invasion, the external attacks, more
effectively.
So based on this intuition, our method to handle intrapath interference is to coordinate
the transmissions within the same path to minimize collisions by scheduling. I
will talk about that in more detail later on.
Please ignore the figure for now; I will come to that soon. For the
competition from different paths, our method is to let them compete, and our method
will guide this competition to converge to the fair share.
So this is the high-level idea. In terms of scheduling, our method is very, very
simple. It's purely local: we determine the schedule for each node within a path to
transmit based just on hop count.
So the first node gets slot one; the second, two; then three. And every node that is a
fixed number of hops away within a path repeats the schedule. The underlying
assumption is that there is no interference coming from nodes that are two hops away
from a receiver. But that's not always true.
You will argue -- I'm sure you will think of scenarios where, although you have hop
counts, hop count is not distance; it does not reflect the actual interference level
that these nodes are receiving.
But we can do something about that. You can adjust the transmit power to make sure
the virtual distance between these two nodes is far enough that they do not
interfere with each other.
Then we can keep this local mechanism to do the per-path scheduling. So
it's very simple. And fourth, guiding the competition between different paths to
converge to the fair share is exactly the goal of the
multi-variable control and the geo local optimization we just talked about, so we use
them for this purpose.
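The per-path schedule described above can be sketched as follows. I read "one, two, three, then repeat" as a slot period of 3, so that concurrent transmitters on a path are three hops apart and any interferer is at least two hops from a receiver -- but the period is my reading, not a stated parameter of the talk.

```python
def slot_for_hop(hop_index, period=3):
    """Slot assigned to the node at position hop_index (0-based) on its path.
    Purely local: needs only the node's own hop count."""
    return hop_index % period

def may_transmit(hop_index, current_slot, period=3):
    """A node transmits only in its assigned slot."""
    return slot_for_hop(hop_index, period) == current_slot

# On a 7-node toy path, slot 0 belongs to hops 0, 3 and 6: transmitters are
# three hops apart, hence two hops from each other's receivers.
concurrent = [h for h in range(7) if may_transmit(h, 0)]
```

If hop count understates the real interference between slot-sharing nodes, the talk's remedy of adjusting transmit power restores the assumption rather than changing the schedule.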
I don't show the results here because they're not complete yet, but from the data I've
collected, it reduces collisions quite significantly.
The fourth mechanism is called hop-based congestion control. It handles the correlation
between congestion and collisions. What do I mean by correlated congestion and
collision?
In wireless networks, when collisions happen, bandwidth is wasted: your effective
bandwidth for delivery has already shrunk, and the effort you put into resolving the
collision consumes even more bandwidth.
So consequently the effective bandwidth for delivery is even smaller. And this
contributes to the growth of the queue, or backlog, and makes the congestion even
worse.
From the other perspective, when congestion happens, every node tries to push
its backlogged data out as much as possible. So interference becomes more frequent
and lasts longer,
which contributes to more collisions. That's what I mean by correlated congestion
and collision. The point here is that congestion in wireless networks is not only because
of traffic overload; it's also because of interference, and you have to consider both
aspects. But existing methods -- I categorize them into TCP-like and
back-pressure-like approaches -- do not make this differentiation.
For example, TCP-like congestion control blames everything on the offered load. At
the source node, based on whatever end-to-end metric you use to
indicate congestion, when it happens you adjust
the offered load at the source node to react to it.
But the congestion might be caused by interference, so your adjustment of the
offered load might be too conservative, causing resource underutilization. For
back-pressure-like congestion control, the basic idea is that whoever has the larger
queue gets more opportunities to transmit, with the goal of maximizing the
aggregate network throughput.
For wired networks, yes, that works. But in wireless networks, this backlog
might be caused by collisions, so if you give those nodes higher priority to transmit, you
will further aggravate the interference scenario and the result will be even worse.
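The back-pressure failure mode just described can be made concrete with a minimal sketch (the queue numbers are invented for illustration): the textbook rule serves the largest queue differential, with no way to tell whether that backlog came from offered load or from collisions.

```python
def backpressure_pick(links):
    """links: list of (name, q_src, q_dst) queue lengths.
    Returns the link with the largest positive backlog differential,
    or None if no differential is positive."""
    best, best_w = None, 0
    for name, q_src, q_dst in links:
        w = q_src - q_dst          # queue differential = scheduling weight
        if w > best_w:
            best, best_w = name, w
    return best

links = [("a->b", 10, 4), ("b->c", 9, 1), ("c->d", 3, 2)]
winner = backpressure_pick(links)  # "b->c", differential 8
# ...even if b's queue grew because of collisions, in which case giving
# b->c priority aggravates the very interference that built the queue.
```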
So neither mechanism works for wireless networks. Our method takes into account both
the traffic overload and the impact of collisions on congestion. So we
introduce the offered load at the source node into the model: our
congestion model considers the traffic load as well as the parameters related to
interference.
This is at the source node, and the whole derivation is the same as before, just with
one more variable. But the non-source, intermediate nodes don't
have an offered load; they have the amount of traffic being received.
That's what this XI-receive represents, and why I differentiate them I will explain on the
next slides. Here is the derivation: plugging this new model into the geo local
optimization framework, with the same method, Lagrangian transformation and gradient
search, we get these new control policies. But these are intermediate control policies; I
will show the ultimate control policy on the next slide.
And I will tell you why. Okay. For source nodes you have the -- you have the authority
to adjust the offered load. Because that's where it started. We have the offered load
control policy. But at the intermediate node you receive traffic. And our assumption is
that we try to push as much received traffic, as much out instead of cut it.
So intermediate nodes do not have the authority to change the amount of traffic being
received. But they have the ability to compute the amount of traffic they cannot handle,
based on a local estimate of the current interference scenario, and they can pass that
information back to the previous hop so it can adjust for them. Right? Okay. That is why,
in the ultimate control policy, the part in the red frame is the amount of traffic that
your next hop wants you to change. That information is passed to the previous hop so it
can adjust the traffic load and all the parameters related to interference.
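The per-hop feedback loop just described can be sketched as follows. This is a hedged illustration under assumed names and an assumed collision-discount capacity model; the actual control policy comes from the Lagrangian derivation on the slides.

```python
# Sketch of the per-hop feedback described above: an intermediate node
# estimates how much received traffic it cannot handle under current
# interference conditions and reports that excess to its previous hop,
# which reduces its rate accordingly. All names and the capacity model
# are illustrative assumptions, not the derived control policy.

def excess_traffic(x_received, capacity, collision_rate):
    """Traffic the node cannot serve: received load minus the effective
    service rate after discounting collision losses (local estimate)."""
    effective_rate = capacity * (1.0 - collision_rate)
    return max(0.0, x_received - effective_rate)

def adjust_upstream(offered_load, reported_excess):
    """Previous hop reduces its forwarding rate by the reported excess."""
    return max(0.0, offered_load - reported_excess)

# An intermediate node receives 10 units/s and nominally serves 12, but
# loses 30% of transmissions to collisions, so some traffic is excess.
excess = excess_traffic(10.0, 12.0, 0.3)
new_load = adjust_upstream(10.0, excess)
```

The only control message in this sketch is the single `excess` value sent one hop upstream, matching the minimal message passing described in the conclusion.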
So this is the source admission control. For the intermediate congestion control, we
cannot do anything about the received traffic, but we pass what we want to the previous
hop, and we can do something for the next hop by adjusting the parameters related to
interference.
I cannot show results for this part because it is still work in progress; sorry about
that. So in conclusion, our framework offers a new method to support different types of
fairness criteria. It readjusts the fair share for wireless networks by explicitly
considering the interference scenarios.
And we provide an effective way to control network behavior by differentiating different
types of interference scenarios and by considering the co-existence of waste and
conflicts, that is, unnecessary idling and collisions.
We also handle the challenges in multi-hop networks explicitly, including inter-path and
intra-path interference as well as correlated congestion and collisions.
Lastly, we minimize the control overhead by reducing the number of control messages that
must be passed. For single-hop scenarios we have zero message passing. For multi-hop
scenarios, only two successive hops need to exchange information for the congestion
control I just mentioned: the amount of traffic you cannot handle is passed to your
upstream node. That is the only message passing we need.
So in the future, my goal is to apply this method, to customize this framework, to
support QoS -- quality of service -- or quality of experience.
That is the first step of the future work. The second is to incorporate routing, because
routing distributes the traffic load, and if the routing layer distributes load in a
discriminatory or very naive way, you already have an unfair traffic distribution.
And even if the MAC layer is effective, you are still influenced by the upper layers, so
there is some cross-layer optimization in the loop. Ultimately I want to make this
framework work for mobile traffic. So that is the future work.
Thank you very much. Questions, please?
[applause]
>>: So have you thought about trying to implement some of these ideas in a testbed?
>> Chen Liu: That would be nice. But I am now at the end of my Ph.D., and I don't think
I will have time to do that before my thesis. But if I continue to work on this after my
thesis, that is definitely a direction to go.
>>: If I heard this correctly, you're assuming all the links have the same data rate?
>> Chen Liu: Well, for this example, for the current experiment, it assumes that. But
because later on I want to support different types of traffic, that should not be the
assumption.
>>: So how hard would it be to change your algorithms to incorporate different data rates?
>> Chen Liu: There are two ways I can go. One concerns how to derive the idealized fair
share: when you have different types of traffic, what should the idealized fair share be?
You can use the demands as the initial fair share you impose, and then our method
considers the demands and the fairness as well as the other constraints related to the
specific system features. For wireless networks, I talked about collisions and
unnecessary idling, so you consider those aspects to readjust the fair share to the
appropriate level. That is one way to figure out the fair share.
And the second way is to do something with the utility function, because that is how the
user behaves. For example, for quality of experience, one of my friends was working in
this area, and the information he gave me is that, for example, for Web page loading,
the user behavior is sometimes exponential.
So by incorporating different utility functions, I would need to readjust how I impose
the cost for all the constraints we have in terms of transmission features. These are
the challenges, and also the possible ways I can go, to apply this to more realistic
scenarios.
>>: So it looks like you added some complexity in terms of computing this [inaudible]
compared to a simple TCP-like model. Can you say something about how easy or how
computationally expensive it is?
>> Chen Liu: That's a very good question. Currently our approach is still naive: you
periodically adjust this multi-variable model, so the overhead of the periodic updates
is high. But depending on the application scenario, and on how fast your network
conditions change, you don't necessarily have to apply it periodically. It can be done
once at the beginning for a certain phase, and then applied again when the network
conditions change.
>>: The periodic updates, do they require some kind of oracle that knows everything?
>> Chen Liu: It's just based on --
>>: Each node has to periodically --
>> Chen Liu: You just need local information about this X_i and the collisions -- the
transmissions and the collisions -- and then you estimate the impact of each parameter
on the performance. So it is purely local; you do not consider your neighbors'
performance.
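The purely local measurement described in this answer can be sketched as a simple counter-based estimate; the function name and counters are illustrative assumptions, not the speaker's actual estimator.

```python
# Sketch of a purely local estimate: each node periodically computes its
# collision rate from its own transmission counters, with no information
# from neighbors, and feeds that into the multi-variable model update.

def local_collision_rate(transmissions, collisions):
    """Fraction of this node's own transmissions that ended in collision."""
    if transmissions == 0:
        return 0.0
    return collisions / transmissions

rate = local_collision_rate(transmissions=200, collisions=30)
```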
>>: How do you adapt? It seems like when you get a new flow, even if the demands are
stable, your control takes a while to converge to the right rate.
>> Chen Liu: I think that depends on how long the newly added traffic will exist. If it
is very short-lived traffic, what you said might be true. But if it is relatively
long-lived, the control still converges. That is a very good question, though. Actually,
the current framework does not consider dynamic traffic, that is, how it will work when
new traffic arrives or existing traffic leaves.
So that is definitely something I am going to add in future work.
>>: So currently it considers all [inaudible].
>> Chen Liu: Yeah, right now.
>>: And a lot of data [inaudible], but if you look at real-life samples, most of the
traffic is [inaudible] Web browsing or [inaudible] traffic. Is your framework
still -- can you adapt this framework --
>> Chen Liu: That's the challenge I'm going to tackle. Currently the assumption is not
that realistic in terms of long-lived traffic. The framework can handle certain types of
network traffic, but for Web-browsing-type applications I need to add different
mechanisms. That is definitely what I am going to tackle next, if I want to provide
quality of service or quality of experience.
>>: [inaudible].
>> Chen Liu: That's the goal. That's the goal.
>>: So a follow-up question. Your simulation is already somewhat complicated; there is
some complexity there. And then to account for dynamics and short-lived traffic, I
assume it is going to get more complicated. Do you have any insights as to how you're
going to handle, like, the actual --
>> Chen Liu: So the argument here is about the complexity of my model, right? You think
the way I derive the model is already complex, so when I have more dynamic scenarios or
other constraints to consider, it might get more complicated. But the thing is, not all
the factors you select have a significant impact all the time. What I observe is that in
a given scenario, certain factors stand out. This observation helps me reduce the number
of factors I need to use when certain scenarios occur. So it is not necessary to include
all the factors all the time, and that helps reduce the complexity.
>> Rich Draves: Okay. Thank you again.
>> Chen Liu: Thank you very much.
[applause]