>> Lisa: Thank you for having me here. Today I'm going to talk about some algorithmic issues in scheduling for multi-carrier wireless environments. This is joint work with Matthew Andrews, also from Bell Labs. Before getting to multi-carrier scheduling, let me first talk about what single-carrier scheduling is. In a single-carrier environment we have one carrier, n users and a time-slotted system. The job of the scheduler is to decide, at each time slot, which user to serve. Here is a schematic picture of the single-carrier system. At each time step the arrivals are described by an arrival vector: A_i(t) is the amount of data arriving for user i at time t. There is a queue vector: Q_i(t) describes the amount of data waiting in the queue for user i. And there is a service rate vector, which says that if user i is chosen by the scheduler, then an amount of data equal to r_i(t) is drained from user i's queue. Because this is a wireless system, the service rate vector is both user-dependent and time-varying, since the users are mobile in most cases and channel conditions also change. This single-carrier scheduling environment is meant to model systems such as the CDMA EV-DO technology. In EV-DO, time is slotted so that each second is partitioned into about 600 slots, and in each slot a scheduling decision is made. So here is a base station, and here is one mobile, one of many. Each mobile measures the downlink signal-to-noise strength, calculates the rate at which it can receive data from the base station, and sends this information to the base station. From all the information collected from the different users, the scheduler is able to make a decision. In particular, for EV-DO there is a discrete set of rates at which the base station can send data to the mobiles.

Now, moving on to a multi-carrier scheduling system. Instead of having one single carrier, we now have C carriers; we still have n users and a time-slotted system. What happens here is that the scheduler has to decide, for each carrier at each time slot, which user it is going to serve. The constraint, which comes from how the wireless system is set up, is that each carrier can only serve one user per time slot. But it is possible that multiple carriers choose the same user to serve during a slot. Yes? Looking at the schematic picture again, the arrival vector and queue vector are the same, except that for the service, instead of a service rate vector, we now have a service rate matrix. The entry r_ic(t) is the amount of data that user i would receive from carrier c at time t if the scheduler decides that this service relationship happens. Typically, one can imagine this number is user-dependent, carrier-dependent and time-varying. The motivation for the multi-carrier scheduling model is OFDM-based wireless systems, orthogonal frequency-division multiplexing. Examples of such systems are WiMAX, UMB, which is the next-generation EV-DO, or LTE, the next-generation UMTS.

Okay. So for both single-carrier and multi-carrier scheduling systems there are two different philosophies. One can do slot-by-slot scheduling, or one can do template-based scheduling.
In slot-by-slot scheduling, a decision is made every single time slot. The advantage is that it adapts quickly to changing traffic or changing channel conditions, and therefore to the changing service rate vector or matrix. The disadvantage is a fairly large overhead, both in communication and in computation, in order to make each decision. The counterpart is called template-based scheduling. In template-based scheduling we make up a schedule for a frame of T slots, where T can be a number like, say, 50. This template can then be repeated over time: you compute a schedule for multiple time slots once, and you repeat it over time. Obviously there is less overhead, both in communication and in the algorithmic computation. What it is good for is when the traffic pattern is fairly steady and the channel conditions are also fairly steady. If things don't change much, then overall you are better off with template scheduling.

>> Question: (Inaudible) overhead, are you implying that if I were only making one decision every T slots, then the users only need to report the channel condition every T slots?
>> Lisa: It could even be less. Suppose I compute a frame of T slots and I use it five times. Then only at the end of 5T do I recompute.
>> Question: Oh (inaudible).
>> Lisa: Yes. Yes. Okay. So there has been lots of work on single-carrier scheduling; the multi-carrier study is relatively new. What we want to do here is see how we can transfer our knowledge and results from single-carrier scheduling to the multi-carrier situation. First we'll look at slot-by-slot scheduling; some of this work was presented at MobiCom last year. A large part of this work focuses on one difficulty of multi-carrier scheduling, which is how to prevent over-allocation of resources, and that is the objective of the work. So let me explain.

Let's first look at a very popular algorithm for single-carrier scheduling. Remember, this is the schematic picture. One straightforward, very popular and good algorithm is called Max Weight. Max Weight is a very simple algorithm that works as follows: at each time step t, the scheduler chooses the user i that maximizes the product of the queue and the service rate, Q_i(t) times r_i(t). The motivation is that if a user has a fairly long queue, then it is probably good to serve it, and if the user has a very good service rate, then opportunistically it is a good time to serve it as well. Alternatively, one can modify this objective function a little bit. If, say, the rate is fairly large and is larger than the queue, then the benefit you actually get is the minimum of the queue and the rate. So alternatively one can choose the user that maximizes the product of the queue and the minimum of the queue and the rate. In this way you avoid resource wastage. Now, even though this algorithm is extremely simple, it has a very desirable property called stability. Stability basically means that the amount of data present in the system is bounded over time, by a quantity independent of time. So if the total data follows this blue line, then the algorithm is stable; but if it keeps growing, like this one, then the algorithm is unstable.
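To make the Max Weight rule concrete, here is a minimal sketch in Python. The function name max_weight_user, the list arguments and the avoid_waste flag are illustrative choices made here, not names from the talk or from any particular system.

```python
def max_weight_user(queues, rates, avoid_waste=True):
    """Pick the user to serve in the current slot under the Max Weight rule.

    queues[i] -- amount of data waiting for user i (Q_i(t))
    rates[i]  -- service rate user i would get this slot (r_i(t))
    avoid_waste -- if True, weight by min(Q_i, r_i) instead of r_i,
                   since serving beyond the queue length is wasted.
    """
    def weight(i):
        benefit = min(queues[i], rates[i]) if avoid_waste else rates[i]
        return queues[i] * benefit
    return max(range(len(queues)), key=weight)

# Example: user 1 has a huge rate but a short queue, so the waste-aware
# rule prefers user 0 even though plain Max Weight picks user 1.
print(max_weight_user([10.0, 2.0], [3.0, 50.0], avoid_waste=False))  # -> 1
print(max_weight_user([10.0, 2.0], [3.0, 50.0], avoid_waste=True))   # -> 0
```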
Many groups of people, starting from the early '90s, have shown that the Max Weight algorithm is in fact stable whenever possible. Overall, the idea is that a potential function, the sum of the queues squared, has a negative drift whenever the queues get really big. Using this method of analysis, one can find an absolute bound on the queue size. Okay. So naturally one would wonder what the right way is to generalize this Max Weight algorithm from single-carrier --

>> Question: (Inaudible).
>> Lisa: Throughput talk tomorrow.
>> Question: That it can stabilize any -- (inaudible).
>> Lisa: Not any. Not any --
>> Question: (Inaudible) it can be stabilized.
>> Lisa: Okay.
>> Question: Some --
>> Lisa: If, say, the arrival vector has a nice stochastic property, say it is (inaudible) or something, yes, then it can stabilize any such input vector. It has also been studied whether Max Weight is even stable if the arrivals are adversarial, and I think there is some progress on that front as well.
>> Question: (Inaudible) Max Weight algorithm (inaudible).
>> Lisa: That I'm not sure.
>> Question: Okay.
>> Lisa: I'm not sure. There are other algorithms, like proportional fair, which under different assumptions have a good utility property, maximizing the sum of the logs, for example.

Yes. So for Max Weight, let's look at a first-cut transformation from single-carrier to multi-carrier. The simplest thing to do is to treat each carrier independently: each carrier c independently chooses the user i that maximizes the product of the queue and the rate, Q_i(t) times r_ic(t). Let's use R_i(t) to denote the sum of the rates assigned to user i across all the carriers that chose i. One can show fairly straightforwardly that this algorithm in fact maximizes the sum over users of Q_i(t) times R_i(t), and the same stability analysis follows through as well. So one might think that is the end of the story, except for the following: might, say, every carrier in fact choose the same user, if that user happens to have a very good channel or something? Then the total rate would be more than the queue size, and there would be wastage. So maybe a more sensible objective is to maximize the sum over users of the queue times the minimum of the total rate and the queue size. As we saw in the single-carrier situation, that kind of objective can be achieved very simply there. However, just by introducing this very simple twist, we can show that this objective becomes NP-hard to optimize. Worse yet, there is a small constant such that no approximation algorithm can get closer to optimal than that constant, assuming certain (inaudible).
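Written out, the two objectives just contrasted read roughly as follows. This is a reconstruction from the verbal description, with R_i(t) denoting the total rate assigned to user i by the carriers that chose it.

```latex
% Total rate assigned to user i in slot t, summed over the carriers that chose i:
R_i(t) = \sum_{c \,:\, c \text{ serves } i} r_{ic}(t)

% First-cut objective (per-carrier Max Weight maximizes this, but may over-serve a user):
\max \sum_i Q_i(t) \, R_i(t)

% Waste-aware objective (the version that becomes NP-hard to optimize):
\max \sum_i Q_i(t) \, \min\bigl( Q_i(t), \, R_i(t) \bigr)
```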
So let me try to give you an indication of where this hardness comes from. We can make a connection to this (inaudible) called the generalized assignment problem, GAP. The problem is as follows: you have a bunch of bins and a bunch of items, and you want to put items into bins. Each bin i has a size B_i. Each item has a size that depends both on the item and the bin -- that is where the "generalized" part comes from -- so item c placed in bin i has a size f_ic. Once you manage to put an item into a bin without exceeding the total bin size, you get a profit p_ic. We can relate the bins to the users in our problem and the items to the carriers. Also, the bin size corresponds to the queue, and the profit is related to the (inaudible). Making this connection, we can see that with just this simple twist, all of a sudden our problem can no longer be optimized exactly.

So then what do we do? Fortunately, the objective function of this problem falls into a special case called maximizing a submodular function over a matroid, and if an optimization problem fits into this framework, a good approximation algorithm has been known for decades. Let me explain what this green line says. I need to explain what a submodular function is and what a matroid is; once I've explained that, I'll tell you what the good approximation algorithm is. So what is a matroid? A matroid is defined over a ground set, call it Omega, together with a collection I of subsets of the ground set. It is a matroid if it satisfies the following two conditions. First, if A is an element of I and B is a subset of A, then B is also an element of I; the collection is closed downward. Second, if both A and B are elements of I and A is bigger in size than B, then we can find an element that is in A but not in B so that we can augment B with this element, and the augmented set is also an element of I. In that case we say I is a matroid. A simple example: suppose we have a graph, and the ground set is the edges of the graph. A matroid would be the collection of forests over these edges. You can see that if you have a forest A, and B is a subset of A, then B has to be a forest as well and is therefore in the matroid; and you can check the second condition as well. Okay. So what is a submodular function? A submodular function is defined on sets: if for any sets A and B, the value of the function on the union of A and B plus its value on the intersection of A and B is at most its value on A plus its value on B, and this relationship holds for all subsets A and B, then the function is submodular. Okay, enough definitions. I guess it is not so important that you know exactly what these are; you only need to know that there is a matroid and there is a submodular function.

Now, suppose you happen to be in this nice framework of trying to maximize a submodular function over a matroid. In the '70s, Fisher, Nemhauser and Wolsey showed that there is a very simple greedy algorithm that does a lot of the work. You build a greedy solution as follows. You begin with an empty solution, and you repeat the following process until you cannot do it anymore. At every step you try to choose an element that is not already in your solution and augment with it. If you augment your current solution with one more element, you see some progress in your objective function, measured by the difference between the augmented solution and the original solution. You try this over all possible elements, and you choose to augment your current solution with the element that maximizes this gain. So at every step you make the best local progress. That is your greedy algorithm.
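Here is a minimal sketch of that greedy, written against two assumed oracles: f evaluates the objective on a set and is_independent tests matroid membership. Both names are illustrative; this is a sketch of a generic Fisher-Nemhauser-Wolsey style greedy, not code from the talk.

```python
def greedy_matroid_max(ground_set, f, is_independent):
    """Greedily build an independent set, at each step adding the element
    with the largest marginal gain f(S + {e}) - f(S).  For a (monotone)
    submodular f and a matroid independence oracle, this greedy solution
    is within a factor 1/2 of the optimum.
    """
    solution = set()
    while True:
        best_elem, best_gain = None, 0.0
        base_value = f(solution)
        for e in ground_set - solution:
            candidate = solution | {e}
            if not is_independent(candidate):
                continue
            gain = f(candidate) - base_value
            if gain > best_gain:
                best_elem, best_gain = e, gain
        if best_elem is None:   # no allowed element improves the objective
            return solution
        solution.add(best_elem)
```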
So it is a very simple, typical greedy algorithm. However, even though it is simple, these people have shown that this algorithm gives you a half approximation to the absolute optimum, as long as the problem fits the framework of maximizing a submodular function f over a matroid. Okay? So now let's see how this wonderful result applies to our setting. What we are trying to do is decide how to assign the carriers in the current time slot to the users. Let's try the following greedy algorithm. At each time step t we process the carriers one by one. Each carrier serves a user i that maximizes the product of the queue and the benefit. The benefit is how much of the queue this carrier can reduce: the maximum amount it can reduce is the rate, but it is possible that the previous carriers have already eaten up a lot of the queue. So we keep a running sum, S_i(t), of the rates the carriers so far have assigned to user i; this is the service user i has received so far. The queue minus that is the maximum amount the current carrier could serve this user. Taking the minimum of these two quantities, carrier c serves the user that maximizes the product of the queue and this minimum. Okay. So apparently this is a greedy algorithm, because every time it does the local best. All we need to check is whether the setup fits the framework of maximizing a submodular function over a matroid; if we know that, then this greedy algorithm, even though it may not get you the absolute optimum, gets you at least halfway.

So there are three things I need to verify. First I want to verify that the setup is in fact a matroid, and I define the matroid as follows. The ground set is the collection of user-carrier pairs. A subset A of the ground set is in the matroid as long as each carrier appears at most once in A. Now, what does that mean? If each carrier appears at most once in the subset, that means it is in fact a valid solution: for any multi-carrier scheduling policy, all I care about is that each carrier serves at most one user per time step, but each user may receive service from multiple carriers. So this is saying that all the valid schedules over this ground set actually form a matroid. I look at all the possible valid schedules, and the function f simply tries to choose one element of the matroid, a set A, such that the objective function is maximized. One can verify this is a matroid, and that this objective function is in fact submodular, as long as I restrict my solution to be a valid schedule. And Algorithm 1, as we can see quite apparently, is a greedy algorithm. So we can apply this nice hammer from these three people, and we know that Algorithm 1 is always within a factor of half of the absolute optimal solution.
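A minimal sketch of this carrier-by-carrier greedy (Algorithm 1) for a single time slot might look as follows. The names queues, rates and schedule_slot are illustrative, with rates[c][i] playing the role of r_ic(t).

```python
def schedule_slot(queues, rates):
    """Greedy multi-carrier assignment for one time slot.

    queues[i]   -- Q_i(t), data waiting for user i
    rates[c][i] -- r_ic(t), data carrier c could deliver to user i this slot
    Returns assignment[c] = user chosen for carrier c.
    Each carrier picks the user maximizing Q_i * min(r_ic, Q_i - S_i),
    where S_i is the service already promised to i by earlier carriers.
    """
    n_users = len(queues)
    served = [0.0] * n_users            # S_i: running sum of rates assigned to i
    assignment = []
    for carrier_rates in rates:         # process carriers one by one
        def weight(i):
            benefit = min(carrier_rates[i], max(queues[i] - served[i], 0.0))
            return queues[i] * benefit
        user = max(range(n_users), key=weight)
        assignment.append(user)
        served[user] += carrier_rates[user]
    return assignment

# Two carriers, two users: the second carrier avoids re-serving user 0
# once its queue is already covered by the first carrier.
print(schedule_slot([5.0, 4.0], [[5.0, 1.0], [5.0, 2.0]]))  # -> [0, 1]
```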
Then there are other ways to get a half approximation; here is another example. Instead of processing carriers one by one in a greedy manner, one can process users one by one in a greedy manner: for each user i we find a subset of carriers to assign to it so that the objective function is maximized. However, the trick is that every time we move to the next user and locally try to find the subset of carriers that is best for that user, it is possible that a later user wants a carrier that has already been used. If that happens, we need to decide which user gains more by keeping that carrier, so at each step one needs to make that decision as well. The analysis takes the same route, although one needs to define a different matroid. For this matroid, the ground set associates each user with sets of carriers that can serve the user, and the matroid is defined so that each user appears at most once. Again we can show the objective is a submodular function and the algorithm is greedy, and therefore we get a half approximation. There are lots of details there that we are not getting into.

To finish the first half of the talk I just want to show you some simulation results. The trace is, I think, a simulated data trace for a multi-carrier system. We have two objective functions. One is an easy objective function that we know exactly how to solve optimally with a simple algorithm, but it may produce resource wastage. And we have a more sensible objective function, which takes the minimum of the total rate received and the queue length; however, we know this objective function is hard to optimize exactly, but we have a half approximation. So are we better off using an exact algorithm for the not-so-good objective function, or an approximate algorithm for the better objective function? In these two plots, the x axis is time units and the y axis is the total queue length. We can see that if we use the exact algorithm for the not-so-good objective function, the total queue is much larger than if we use the approximate algorithm on the more sensible objective function, where the queue is much smaller.

Okay. So now I want to move on to the second part of the talk; let me just wrap up the first part. In the first part of the talk we did slot-by-slot scheduling in a multi-carrier system. We almost directly generalized the Max Weight algorithm, which is known to be very good for single-carrier systems, to the multi-carrier system. There is a small twist, with the objective of trying to avoid resource wastage. This simple twist turns an easy problem into an NP-hard problem, but fortunately it fits into the nice framework of maximizing a submodular function over a matroid, so we use a greedy algorithm which is known to work well for all problems in this category.

So now, moving on to the multi-carrier template scheduling algorithm. Let me remind you: in template scheduling, instead of making scheduling decisions every time slot, what we create is a template. The template simply covers a frame of T time slots, and this template can be repeated over time, as the small sketch below illustrates. Here the color coding of each slot indicates the user. There are three users here: the light green one, the dark green one and the blue one. So this is saying that on the top carrier, the first time slot serves the light green user, et cetera. As I also mentioned before, template scheduling is desirable when we think the traffic is quite steady and the channel conditions do not change.
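As a tiny sketch of what repeating the template over time means -- purely illustrative names, assuming template[c] holds carrier c's slot-to-user assignment for one frame of T slots:

```python
def users_served_at(template, t):
    """template[c] is a length-T list giving the user served by carrier c
    in each slot of the frame; the frame repeats, so global slot t maps
    to position t mod T."""
    T = len(template[0])
    return [carrier_slots[t % T] for carrier_slots in template]

# Two carriers, a frame of 4 slots, three users (0, 1, 2).
template = [[0, 1, 0, 2],
            [1, 2, 1, 0]]
print(users_served_at(template, 6))   # slot 6 -> frame position 2 -> [0, 1]
```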
So we can use the same schedule again and again to reduce both the communication overhead and the computation overhead. Because we assume the traffic is steady and the channel conditions, and therefore the service rates, are relatively steady, we are going to drop the dependency on t for both the arrival vector and the service rate matrix. Okay. So one can view the template scheduling problem as having two parts. The first part is to decide how many slots of each carrier c are allocated to user i during the whole frame; we use f_ic to denote the number of carrier-c slots assigned to user i. In the previous picture, the top carrier would have two slots assigned to the light green user, two slots to the dark green one and zero to the blue one; the second carrier would have two to the dark green one, one to the light green one and one to the blue one, et cetera. So one part is to decide how many. One way to decide how many is to formulate it as an optimization, a math program. If A_i is the steady arrival rate for user i, then I can try to maximize the sum over users of the product of A_i and the minimum of A_i and this quantity, the sum over c of f_ic times r_ic. Now, this quantity is somewhat interesting. As I said, f_ic is the number of slots and r_ic is the rate, so if I sum these products over the carriers, this is in fact the amount of service user i would receive during a frame. Therefore, using the same philosophy as in the slot-by-slot scheduling, there is never any point in assigning user i more service than its arrivals, so it only makes sense to take the minimum of these two; and of course the total number of slots on each carrier adds up to at most T. If we formulate this "how many" problem this way, it is in fact very similar to the slot-by-slot scheduling, so we can reuse what we did for slot by slot to decide the issue of how many. So what I'm --

>> Question: (Inaudible) so with the maximization you are -- (inaudible) user?
>> Lisa: Right. I guess --
>> Question: (Inaudible)
>> Lisa: Yes, I think that's a summation -- yes, a sum over i of A_i times the minimum, yes. Otherwise it would not make sense.

So once we have found how many, we need to know which ones. For example, we know it is two light green slots, but there are still many different ways of placing them. Overall, one would say this looks like a better template than that one. One reason is that in this template the service to the light green user is quite spread out, the dark green user's service is fairly spread out as well, and so is the blue user's. Whereas in this other template, the light green user gets all its service at the beginning, so it is more bunched up, and so is the dark green user's. In a way we want to spread out the allocations to each user i so the delays are kept low. Here, in this case, each user essentially has to wait a long time before it is served; once its burst is over, it has to wait a long time. Okay. So there is a way of formulating this notion of spreading things out; in fact, this delay formula comes from network calculus. Let me try to explain what this formula is for. We have notation introduced before: f_ic is the number of carrier-c tokens assigned to user i, and r_ic is the service rate that user i receives from carrier c. Now, this new quantity, N_i(t), is the amount of service received by user i by time t; we just calculate the accumulated sum. Yes?
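One plausible way to write the delay formula being described, using N_i(t) for the cumulative service to user i and the frame-average rate as the normalizer; this is a reconstruction from the verbal description, not a formula quoted from the slides.

```latex
% Average per-slot service promised to user i over a frame of T slots:
\rho_i = \frac{1}{T} \sum_{c} r_{ic} \, f_{ic}

% Delay of user i: the largest amount by which its cumulative service
% falls behind the idealized steady stream within a frame:
D_i = \max_{0 \le s \le t \le T} \left[ (t - s) - \frac{N_i(t) - N_i(s)}{\rho_i} \right]
```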
So now let's see --

>> Question: (Inaudible) of the total amount of the rate, user i is --
>> Lisa: Yes, the units are not consistent. The rate, one would say, is something like bits per second, but let's just say it is bits. Yes. Okay. So let's look at this formula, and then I'll give you an example to see if it makes sense. Look at an interval between s and t. N_i(t) minus N_i(s) is the amount of service that the user receives between s and t. Yes? Now we want to normalize it. What is in the denominator is a summation over all the carriers of r_ic times f_ic. As we have seen, this quantity is the total amount of service user i gets over all carriers -- the first carrier gives r_i1 times f_i1, and so on over the carriers -- so this sum is the total amount of service user i gets during a frame. The frame consists of T steps, so the average rate user i gets over a frame is this sum divided by T. The numerator is the service it actually gets, and we divide it by this idealized rate. If I then look at the difference between the length of the interval, which is t minus s, and this ratio, it tells me how much I fall behind.

Let's look at a special case. Suppose s and t happen to span an entire frame, so the difference between t and s is big T. During big T, N_i(T) minus N_i(0) is in fact the whole service I received during the frame, which I know equals this total. If I divide it by the denominator, this term actually gives me big T, because when s and t are the beginning and end of a frame, this numerator and that sum are the same. So this term gives me big T, and the first term is also big T. This is saying that, by definition, the service always catches up if you look at an entire frame; what I'm interested in is how much I fall behind within a frame. Of course I can be ahead in terms of the service I receive, but there are bound to be times when I'm behind. So I'm trying to get the service as spread out as possible so that I fall behind the least amount in comparison to the idealized steady stream. Okay. So this is the definition of the delay of user i. Similarly we can define the delay of user i with respect to the service it gets from carrier c; it is a very similar definition. And we can show that D_i is at most the maximum of D_ic over all carriers c. This suggests that maybe the way to do it is to minimize the (inaudible) delay with respect to each carrier c, and with this relationship we then have a certain bound on user i's delay across all carriers. Yes?

So this sends us back to what we can do in the single-carrier situation: how do we create a single-carrier template so that the tokens are spread out as much as possible? I've written this problem with respect to the single-carrier situation so it's easier to think about. I drop the connection to carrier c, so f_i tokens go to user i within a frame of T slots. My template now has only one carrier in it, and each user i requests a certain number of tokens in this template. So I know my f_i; I computed it before. I only want to know how to place the tokens so they are spread out evenly. Naturally one would ask: is it possible to spread out my tokens so they are exactly T over f_i apart? The answer is no. Suppose we have a template of six slots and I have three users.
One asks for one slot, the second asks for two slots and the third asks for three slots. Then there is no way to spread out the tokens so that they are exactly that far apart. Here's why. If I want to satisfy the third user, which asks for three slots, I have to place its tokens like that. Once I've placed these three tokens, there is no way to place the other user's two tokens so that they are spread out exactly evenly. So we cannot do it exactly. I'll show you three very simple algorithms; although they do not spread the tokens out exactly evenly, they do a pretty good job.

The first algorithm is based on bipartite matching, so naturally I need to create a bipartite graph. On the left-hand side of the bipartite graph I put all the tokens for each user. For example, the green user asks for three tokens, so I put down three tokens; the red user asks for five tokens, so I put down five. On the right-hand side is my template. I restrict where each token can connect to, just to make sure the tokens are sufficiently spread out. Because the green user asks for three tokens, I say its first token can only go to the first third of the template, the second one only to the second third, and the third one only to the last third. The red user has five tokens, so its first token can go to the first fifth, the second one to the second fifth, et cetera. I make these the edges of the bipartite graph, and now what I'm trying to solve is a matching problem: I want each token to be connected to one slot on the template so that any two different tokens get (inaudible) two different slots. It just so happens that with such a construction of the graph, this matching can indeed be found, using Hall's (inaudible). Now, if you believe that we have found a matching between the tokens and the slots, we know that the assigned slots are in pretty good places. The worst that can happen is that, for example, this green token gets matched to the very first slot of its third, but the next one gets matched at the bottom of its third, so that instead of being exactly T over f_i apart, they are two times T over f_i apart; it cannot be any worse than that. So if I use this bipartite-matching type of algorithm, I essentially get a factor of two, plus a small constant.

Now, there is another very simple algorithm, called the golden ratio algorithm (inaudible). It works in the following way. We have a unit (inaudible) interval, and first we are going to put things down in a continuous manner, as follows. Suppose I have f_1 tokens for user one. First I put down a token at phi, where phi is the golden ratio, the square root of five minus one, divided by two; I put it here, at roughly 0.6. The second token I put at the position 2 phi, except that the interval only has unit length, so I wrap it around, and modulo one it lands at about 0.2. Now suppose I'm done with f_1, so I continue with the second user, which asks for f_2 tokens. For those I start off where the previous user left off: I move to 3 phi, which is about 1.8, so modulo one I'm here; then I move to 4 phi, which is about 2.4, 2.5, so I'm here. I just repeat this process until I'm done with all the users. Now, this places all the tokens on a continuous interval; the key is that it defines an ordering in which the tokens are placed. If we map this ordering onto the discrete template, so that the ordering is preserved, we actually get a template design.
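A minimal sketch of this golden ratio placement and the mapping to discrete slots: tokens are dropped at consecutive multiples of phi modulo one, and the resulting ordering is mapped onto the T slots. The function name golden_ratio_template and the assumption that the token counts sum to exactly T are illustrative choices.

```python
import math

def golden_ratio_template(token_counts, T):
    """Place sum(token_counts) tokens into a template of T slots.

    token_counts[i] is f_i, the number of tokens user i requested.
    Tokens are first laid out on the unit interval at k*phi mod 1,
    continuing the sequence from one user to the next; the induced
    ordering of positions is then mapped to slots 0..T-1, preserving
    that order.  Assumes sum(token_counts) == T.
    Returns template[slot] = user index.
    """
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    placements = []                      # (continuous position, user)
    k = 1
    for user, f_i in enumerate(token_counts):
        for _ in range(f_i):
            placements.append(((k * phi) % 1.0, user))
            k += 1
    placements.sort()                    # slot j gets the j-th token in this order
    return [user for _, user in placements]

# The six-slot, three-user example from above (1, 2 and 3 tokens):
print(golden_ratio_template([1, 2, 3], 6))   # e.g. [2, 1, 2, 0, 2, 1]
```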
And these people have shown that with this golden ratio algorithm you are not far off: T over f_i is the absolute ideal, and what you get is a factor of two plus epsilon.

>> Question: (Inaudible).
>> Lisa: No, you won't.
>> Question: (Inaudible).
>> Lisa: No, you won't, because it is an irrational number, so you won't.
>> Question: But in reality (inaudible).
>> Lisa: Yes. Yes. Well -- yes, I suppose you need good precision.
>> Question: Well, I guess the chance is very --
>> Lisa: Yes, you see, the thing is, this is not how you place the template. All you need to know is the order in which things happen, so you can always do that.

Now, the third algorithm is even simpler; it is a randomized algorithm. Again you have a unit interval. For user i -- say user one, which has f_1 tokens -- you just throw f_1 tokens into this interval uniformly at random, do the same thing for the second user, et cetera, and then you are done. With some probabilistic analysis, one can show that the leading term of the delay is in fact the idealized T over f_i, except that there is an additive term which depends on the square root of the number of users and the log of T, the template length. Okay. So these are three very simple algorithms that work well for --

>> Question: Is this the worst case --
>> Lisa: Yes, it's --
>> Question: It is the worst --
>> Lisa: Well, it is expected, or maybe it is with high probability, because this is assuming --
>> Question: High probability (inaudible).
>> Lisa: The delay defined with this formula.
>> Question: Oh, I see.
>> Lisa: So what you are trying to (inaudible) is how close together --
>> Question: So then (inaudible) the first two algorithms --
>> Lisa: So this one -- the first two algorithms are deterministic.
>> Question: Right --
>> Lisa: The first one.
>> Question: -- I would expect the delay to reduce as the number of users (inaudible)
>> Lisa: Ahh.
>> Question: No, here
>> Lisa: No.
>> Question: No (inaudible).
>> Lisa: Yes (inaudible).
>> Question: (Inaudible).
>> Lisa: So the three algorithms are described in terms of a single carrier. But because we have a relationship between the single-carrier delay bound and the multi-carrier delay bound -- essentially the multi-carrier delay bound is the worst-case single-carrier delay bound over all carriers -- we can translate these three bounds from single-carrier to multi-carrier, like that. But this is entirely solving a multi-carrier problem using a single-carrier solution, so one would wonder whether there is any way to improve the situation because you have multiple carriers. There are two things one can do. Remember, the golden ratio algorithm and the matching algorithm are in fact deterministic algorithms. Right? So one thing we noticed is that we can introduce some randomization: once you have, say, the matching template, if for each carrier you introduce a random cyclic shift -- you just kind of rotate it, for each carrier independently -- you can reduce the delay. It reduces the delay in the sense that the term that takes the maximum over all carriers of T divided by f_ic becomes this term, T divided by the sum of the rate slots.
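Minimal sketches of the two randomized ideas just mentioned: throwing each user's tokens onto the unit interval uniformly at random, and applying an independent random cyclic shift to an already-computed per-carrier template. Function names are illustrative.

```python
import random

def random_template(token_counts, T):
    """Third algorithm: each user i throws f_i tokens uniformly at random
    onto the unit interval; the induced ordering is mapped to the T slots
    (assumes sum(token_counts) == T)."""
    placements = [(random.random(), user)
                  for user, f_i in enumerate(token_counts)
                  for _ in range(f_i)]
    placements.sort()
    return [user for _, user in placements]

def random_cyclic_shift(template):
    """Apply an independent random cyclic shift to one carrier's template,
    so identical per-carrier templates do not serve a user in bursts."""
    shift = random.randrange(len(template))
    return template[shift:] + template[:shift]

# One base template per carrier, each shifted independently.
base = [0, 1, 0, 1, 0, 1]
print([random_cyclic_shift(base) for _ in range(3)])
```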
>> Question: (Inaudible).
>> Lisa: Okay. So --
>> Question: (Inaudible) you do this independently.
>> Lisa: So for each carrier -- let me give you an extreme case. Suppose the first carrier, the second carrier and the third carrier compute exactly the same slot arrangement, say light green, dark green, light green, dark green, and so on. What I do is pick a random number: say instead of starting at slot one, I start at slot two, chosen at random. This would spread things out even more. And the other thing -- I guess I should have said this before -- is that T divided by f_ic is kind of the ideal situation for each carrier. The matching has a factor of two on the delay, and the golden ratio has two plus epsilon. Now, the random one has the smallest leading term, but you do have an additive term, and the additive term can be big: it depends on the square root of the number of users. So effectively you may want to reduce the number of users. One thing is, if you have many low-rate users, you may be able to group them into a smaller number of users, call them super users, with larger rates. This mitigates the square-root term a little bit.

To finish off, I just want to show you two sets of plots. The first set illustrates the usefulness of randomization. I said we can add a random cyclic shift to the two deterministic algorithms, the (inaudible) one. The top curve is the delay when we don't have the random shift; the bottom is the same experiment except that we add a random shift. You can see that with the shift, the delay is reduced for both algorithms. The second set of plots tries to validate the bounds we stated for the three algorithms; the leading terms in fact seem to be consistent with what we observed in the experiments. Golden ratio has a leading term of two plus epsilon, bipartite matching has two, and randomized has one, with a relatively large additive term. These two experiments -- one is for when the number of carriers is relatively small, and one is with a larger number of carriers. So we see fairly consistent performance in practice with the bounds that we --

>> Question: (Inaudible) instead of the random shift --
>> Lisa: Yes.
>> Question: -- you just continue the same golden ratio. So you finish the first carrier --
>> Lisa: M'hmm.
>> Question: -- you have an ending position.
>> Lisa: Right.
>> Question: What if you use the ending position --
>> Lisa: As the starting position --
>> Question: (Inaudible)
>> Lisa: Yes, I don't know. Wait. Oh, you're saying -- I see.
>> Question: That's sort of the logic that was used in the single-carrier case. Right?
>> Lisa: Right.
>> Question: You finish one user.
>> Lisa: Yes. Then you just continue. But then what's the relative position of the --
>> Question: Oh, so you just use that -- you just use that as the starting position.
>> Lisa: I see.
>> Question: So it is not a random shift issue.
>> Lisa: Right.
>> Question: (Inaudible). You just carry that over to the second channel. You do exactly the same thing (inaudible).
>> Lisa: M'hmm (inaudible). I would imagine it would come out the same.
>> Question: Is there any reason --
>> Lisa: I would imagine it comes out the same as the non-randomized --
>> Question: (Inaudible).
>> Lisa: Yeah. Yeah.
>> Question: (Inaudible) cyclic shift.
>> Lisa: Yeah.
Well -- I don't know, but I don't think it gives you the benefit. It is probably fairly similar to the non- -- the original one.

>> Question: What you want to do is, across the different channels you want to shake loose clusters of --
>> Lisa: Yes, you want it to be independent. Yes, interesting thought. You want it to be spread out. I could check it a little bit more --
>> Question: In that case, then maybe what we should do -- maybe one thing to do is, you remember the last position when you finished user one.
>> Lisa: M'hmm.
>> Question: And then you use that as the starting position --
>> Lisa: For the first --
>> Question: -- for the first user on the second carrier.
>> Lisa: M'hmm.
>> Question: I don't know.
>> Lisa: Yes, it could be.
>> Question: It is all very --
>> Lisa: Yes, it could be. I didn't think of it that way, yes. One could think -- maybe something interesting would happen, yes. (Inaudible).
>> Question: Because that sort of spatially spreads out more individual --
>> Lisa: But still, I mean, why would the leading term be any smaller than that (inaudible)?
>> Question: One could take (inaudible).
>> Lisa: Yes. Yes. I'll think about it. So I guess to finish off, I just want to recap what happened in this talk. Essentially we want to see how we can reuse the research results people have obtained over the years for single-carrier in the multi-carrier setting. For slot-by-slot scheduling we essentially have a very simple generalization of the Max Weight algorithm, with a simple twist to make sure resources are not wasted, and we get a half approximation. In fact one can do better than half -- one minus one over (inaudible) is better than half -- but that is a more involved algorithm, unlike this simple greedy, and not as easy to implement. So the open question is whether there is a simple algorithm that achieves a ratio better than one half. And then there are also other issues. In reality there are constraints that are not so easy to model. For example, if you have multiple carriers serving the same user, it may be desirable that these carriers be consecutive carriers. So how do you account for that in your algorithm, et cetera. There is still work to be done on this topic. Okay. So that concludes the talk. [APPLAUSE]