>> Yuval Peres: We are delighted to welcome Matt Weinberg from MIT who will tell us about
algorithms for strategic agents.
>> Matthew Weinberg: Thanks. Thanks a lot for having me and for coming to the talk. I don't know how many of you have seen this picture before, but it was shown to me in my first undergraduate algorithms class. This is a Soviet railway map from 1955, and apparently the CIA asked early computer scientists to find the min cut in this graph, in case they ever wanted to disrupt the Russian supply flow.
Now if we fast-forward to 2014, this is a graph of what autonomous systems in the internet
look like. Algorithms are run on this graph every second to help route internet traffic. One
difference between 60 years ago and today is that you'll notice that the graph on the left has
fewer than 100 nodes and the graph on the right has 42,000 nodes. Obviously, our algorithms
need to be better and faster, but that's actually not what I'm going to talk about. Instead, what
I want to point out is that the graph on the left, all of the nodes are controlled by one central
authority. The nodes have no interest and in some sense they just obey whatever the central
authority says. Whereas, in the graph on the right there are 42,000 autonomous systems that
are making their own decisions and they have their own interests and they behave in their best
interests. They are not just going to do what someone tells them to. Let me first give you an
example to convince you that this is a real thing. This is not just some crazy model. In 2008
there was a faulty BGP table update that caused a worldwide YouTube outage. If you don't
know, BGP is the protocol that is used to route all of our internet traffic. For several hours two
thirds of the world was unable to watch cute videos of cats. What went wrong was there was
offensive material on YouTube that prompted Pakistan to want to censor it and so the Pakistani
ISPs updated their BGP tables to map YouTube to nowhere. The problem is that this was
inadvertently broadcast to the rest of the world as well. The point, and I think this is a nice quote that summarizes what happened, is that nobody ran any viruses or worms or malicious code. This is just the way the internet works right now. No one was trying to take down YouTube; it just happened because the ISPs were acting in their own interest instead of following the intended role of BGP. Now I'm going to put this in a little more general
context. Normally, when we think of algorithm design, we think of being given some input that
you know and asked to produce some output. What happens in between is what we call the
algorithm. The model I'm trying to get at is called algorithmic mechanism design where we
have this extra step where you don't know the input; you're not given it but you have to get it
reported to you by agents who are strategic and have their own interests. At the end the
agents experience some kind of payoff based on the output that you choose, so they actually
care about the algorithm that you run and what you choose to do. For this talk I'm going to call
this a mechanism instead of an algorithm to denote the difference. In a seminal paper by Nisan
and Ronen they introduced this topic to the computer science community and among other
things they posed this general question, just how much more difficult are optimization
problems on strategic input versus honest input. How much harder is mechanism design than
algorithm design? The dream is we would love to have some kind of a blackbox reduction that
says as long as you can solve algorithm design problems you can also solve mechanism design
problems and just throw them into this wrapper. What I mean by that is what you really want
to do is have some mechanism that works on strategic input: it takes as input reports from m different agents and then produces some good output. Instead, what you have is just some algorithm that works on honest input, so it takes
the input that is known and is correct in some sense and then produces an output. Even more,
we'd like to say that you just have blackbox access to it, which means that you can't actually
look inside, but you can just probe it with inputs and see what it outputs. So what we would
love to be the case is that you can design this mechanism on the left with just black box access
to an algorithm.
>>: Is it a single output, as opposed to what you want to do and what interacts with the agents' utility? Are you allowed to have a private output versus rewarding agents?
>> Matthew Weinberg: For this talk I'm going to say that there is one output and you have a
goal in mind, and that goal would be the same whether it was an algorithm problem or a mechanism
problem. For instance, maybe your goal is to make the agents as happy as possible and it's not
about doing something different for the different agents, but you want to find the outcome
that makes all the agents as happy as possible, something like that. Did that answer?
>>: So you don't have an output that is just sort of predefined. Are your own goals predefined
independent of what the agents want?
>> Matthew Weinberg: It could be or it could depend on what the agents want. As an
example, let's say that I'm a government and I have some contract that I want to award and
everyone here is a business and I just want to give the contract to the business that can benefit
the most from it. What would be private information to all of you is how much you would
benefit from it as a business and based on how much you value it would affect who I want to
give the contract to. Does that answer it? Okay. So let me say, why
would we want a blackbox reduction? One reason is that much more is known about
algorithms than mechanisms so one hope is that maybe we can reduce some unsolved
problems in mechanism design to problems in algorithm design that are already solved. A
second important thing is that the algorithms community is much larger than the algorithmic
game theory community so some kind of reduction like this would allow lots more people to
contribute to mechanism design problems and they wouldn't have to necessarily learn game
theory to do it.
>>: [indiscernible] [laughter]
>> Matthew Weinberg: I guess that's arguable, but I'll say it's a plus. The third is that hopefully
this would provide a deeper understanding of mechanism design because really we would like
to understand what makes incentives so difficult to deal with both computationally and
otherwise. What I'm going to show you through this talk is that with the right qualifications,
some kind of reduction like this does exist. I'm going to set up the problem and then after it I'll
describe like a series of papers that culminates in this result. The set up, there's going to be
some central designer and there's going to be some agents and there's going to be some
possible outcomes that the designer is allowed to choose. The agents are going to have a value
for each possible outcome that kind of denotes how happy they are with that outcome and I'm
going to say that this information is stored in their type, so for the rest of the talk just
remember that T stands for type. And T is going to be a function that takes as input an
outcome and outputs that agent’s value, so how happy they are. Also for the whole talk we are
going to be in a Bayesian setting which is normal for economic applications, so each agent’s
type is going to be sampled independently from a distribution D and every agent and the
designer knows this distribution. Also for the rest of the talk, just remember that D stands for
distribution. Now the designer's goal is to decide some outcome and this outcome should be
feasible, so I'll give you some concrete examples later, but think of it as if I only have one item
to give out, I should not be giving seven people the same item. If I only have one, I should be
giving it to one person. Also, in addition to choosing an outcome, he's allowed to charge
prices to the agents and he has some objective function in mind that he wants to optimize and
this objective function is like I was saying could depend on the types of the agents and it can
also depend on the prices that he charges and it's also going to depend on the outcome that he
chooses. Before I give you examples, let me tell you the strategic aspect of the problem. The
designer first is going to design his mechanism, and a mechanism you should think of as just a function: it takes as input a profile of types, so a report from every agent. It's going to
output an outcome and it's going to output a payment for each agent. After he designs his
mechanism he's going to ask the agents to report their types and then he's going to choose
whatever allocation he promised and charge whatever payments he promised.
>>: So what is a profile?
>> Matthew Weinberg: A profile is a type for every agent, so if there are five agents there will
be a type for each of bidders one, two, three, four and five. Sorry. Each agent is going to decide what
type they want to say and they're going to do this based on the mechanism that they're playing
based on their beliefs on what everyone else is going to do and based on this they're going to
report some type. And if they were honest they would report their true type, but they're not.
They're strategic, so instead of telling the truth they're going to report whatever type happens
to maximize their own utility. If they think that they can gain based on the algorithm I'm
running, if they think they can benefit by lying, then they are going to lie. The designer's goal is
to design a mechanism that first it should encourage the agents to tell the truth and the specific
notion of truthfulness that we're going to use is called Bayesian incentive compatibility, and
what that means is that if everyone else is telling the truth, it is also in my best interest to tell
the truth. It's not as strong as dominant strategy, which means that it's best for me to tell the truth no matter what, but it is an equilibrium for everyone to tell the truth.
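In symbols, a standard way to write Bayesian incentive compatibility, consistent with the talk's notation (mechanism $A$ with payments $p_i$, types $t_i \sim D_i$, and utility equal to value minus price):

$$\mathbb{E}_{t_{-i} \sim D_{-i}}\big[\, t_i(A(t_i, t_{-i})) - p_i(t_i, t_{-i}) \,\big] \;\ge\; \mathbb{E}_{t_{-i} \sim D_{-i}}\big[\, t_i(A(t'_i, t_{-i})) - p_i(t'_i, t_{-i}) \,\big] \qquad \forall\, i,\ t_i,\ t'_i.$$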
Second, conditioned on everyone being truthful, it should optimize his objective function subject to this. Maybe
it's not possible to do as well as the best algorithm, but your target is: of all truthful mechanisms, do as well as the best one. That's the benchmark.
>>: You mean you optimize the objective function if you're in this particular equilibrium?
>> Matthew Weinberg: Yes, that's correct.
>>: But there might be other equilibriums?
>> Matthew Weinberg: There may be other equilibria; that's correct, yes. Now I'm going to
give you some examples to help ground this a little bit. One example, let's say you are this nice
guy and you have a bunch of gifts that you want to give to a bunch of kids. You shouldn't be
giving the same gift to more than one child, but it's okay for the same child to receive multiple
gifts and so you would say that an outcome is feasible if it respects this. It's feasible if it doesn't
give the same item out more than once. In this case because you are just trying to make the
kids as happy as possible, you want to maximize welfare so as a function that would look like
summing over all agents their value for the outcome that gets chosen, so remember Ti is the
function that represents their value. A second example is maybe you sell houses, so you also
should not be giving the same house to multiple agents. Also, you should not be giving the
same agent multiple homes in this case. In this case you would say that an allocation is feasible
if it's a matching of homes to agents and because you are a salesman you want to maximize
your revenue, so in notation what that would look like is you sum over all agents the payment
that they make to you. As a last example, maybe you have to schedule jobs and in this case
each job should be assigned to exactly one machine but the same machine can process multiple
jobs and you would say that an allocation or I'll call it a schedule, is feasible if it respects this. In
this case you want to minimize the makespan which is the processing time of the slowest
machine, so the last machine to finish all the jobs that it's processing and so, again, in notation
you would be trying to minimize the maximum over all machines of their processing time.
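In the talk's notation, with x the chosen outcome, the three example objectives are:

$$\text{welfare} = \sum_i t_i(x), \qquad \text{revenue} = \sum_i p_i, \qquad \text{makespan} = \max_j \sum_{\text{jobs } i \text{ on machine } j} p_{ij},$$

where $p_i$ is the price agent $i$ pays and, for scheduling, $p_{ij}$ is job $i$'s processing time on machine $j$ (the notation used later in the talk).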
>>: In the housing example, does an agent not only tell you a type, but do they then also have an accept or reject for the solution? If you are going to charge them a payment and give them a good?
>> Matthew Weinberg: So the idea is that if the agent is truthful then the price that you give
them for the house should always be less than what they are willing to pay for it, so they should
have no interest.
>>: [indiscernible] or…
>> Matthew Weinberg: Say that again.
>>: So you build that into [indiscernible]
>> Matthew Weinberg: Yes. That's true. There's a condition, so part of truthfulness is a
condition called individual rationality which I didn't explicitly state, which means that it should
be in every agent’s interest to participate in the mechanism. What would happen here is that if
you wound up giving them a price that was above what they wanted to pay, then it would not
be rational for them to participate because they would rather just sit at home and not take the
house. That's good; I should have said that, but that is a condition that is imposed, but
normally we think of that as being imposed by the truthfulness property rather than the
feasibility constraints. Yes. That should hold for all of the examples, not just the housing. Now
I want to tell you what's already known about this and what we do know is that if your goal is to
optimize welfare truthfully, no matter what kinds of types the agents have you can do that as
long as you know how to optimize welfare algorithmically. For revenue, I know there's some
notation in this line that I'm going to define shortly, what we know for revenue is that for very
simple bidder types if you want to optimize revenue truthfully, you just need to be able to
optimize virtual welfare algorithmically. I'll say what these terms are now. For types of bidders
what it means for a bidder to be additive is if it makes sense to say there are items, so like the
gifts or the houses or the jobs and the bidder has a value for each item and their value for a
subset of items just sums their value for each item in the set. There's really no interaction
between values for different items. Single dimensional is a special case of additive where as far
as the bidders are concerned all of the items are exactly the same. Think of additive as maybe you have two items, one of them a TV and the other a car; then your value for the car and the TV together is just the sum of your two values. Single dimensional would be like you have 10
copies of the exact same TV for sale. Lastly, when I say maximize virtual welfare, for this talk
how you should interpret that is I'm just saying modify the input and then maximize welfare. If
you want to think of virtual welfare as welfare, that's fine, but just understand that the input is
getting modified a little bit first. That's what we know for welfare and revenue. However, for revenue beyond single dimensional settings, so for additive, we don't know anything, and what we'd really like is to be able to let the bidder types be arbitrary; there we really don't know anything. Also, as soon as you go beyond welfare and revenue, we really don't know
anything. In fact, the only thing we do know is an impossibility result that says that it's not
possible to reduce truthfully optimizing any objective to algorithmically optimizing that same
objective. In particular, no reduction like this can possibly exist for minimizing makespan.
What I'm going to show you in this talk is that moving beyond single dimensional bidders for
additive bidders and actually for arbitrary kinds of bidders, as long as you can optimize virtual
welfare algorithmically, then you can optimize revenue truthfully. For arbitrary objectives, as
long as you can optimize O plus virtual welfare, so that same objective plus virtual welfare, then
you can optimize that objective truthfully. So this sidesteps the impossibility result because we
changed the objective.
>>: Is welfare referring to that example you gave where you are just giving stuff out?
>> Matthew Weinberg: Welfare just refers to the objective of trying to make people as happy
as possible, so it's not just necessarily that you have stuff and you want to give it away. It could
be like I'm trying to build a bridge and I want to put it in the location that makes the most sense
for everyone. But welfare just refers to the objective of my goal is to make everyone here
happy.
>>: So the condition is that you can solve it algorithmically if there is no problem with agents telling the truth?
>> Matthew Weinberg: Yes, that's right. What that means is that if it was possible for me to
find the best location for the bridge if I knew everyone's value, then it's also possible for me to
design a mechanism that, you know, accommodates the fact that you guys have your own interests and may choose to lie, and that will still wind up putting the bridge in the best location possible. Is that clear?
>>: I'm trying to see from the examples how it could be that that second problem was not
algorithmically easy.
>> Matthew Weinberg: One example would be, let me think real quick, let's say that instead of just giving away gifts I was trying to run a spectrum auction. So the FCC, you know, has this large radio band that they want to give away, and there are these very complicated constraints on which spectrum can be allocated in which locations, so in some sense it's a little bit like an independent set problem: you have to make sure that wherever you allocate the spectrum it's not going to conflict -- does that make sense? If I allocate part of the spectrum here, I also can't allocate it anywhere too nearby, and independent set problems are hard to solve. So there are problems where…
>>: Is this problem [indiscernible] graph theory and constraints?
>> Matthew Weinberg: That's right, yes.
>>: But you can do the same thing in those similar examples?
>> Matthew Weinberg: Yes, that's right.
>>: When you say welfare is okay in all of these references, you are not assuming revenue neutrality, so the auctioneer may have to pump in some of his own…
>> Matthew Weinberg: I don't believe in any of the examples that I cited that the auctioneer
will ever lose money. It's possible that he may not gain money. Sorry, if it's the case that he has to pay money to build a bridge, it's definitely possible that what everyone pays may not cover the cost of the bridge. That's right. Yes. I guess I would consider that a
different direction than this, but those are definitely important problems.
>>: One more question. I'm trying to understand how the revenue problem is different from the welfare problem. For revenue you get to charge anything up to what the welfare is to that person, right? So how is it different?
>> Matthew Weinberg: The reason it's different is because let's say that my mechanism is
going to charge you whatever you say, then, let's just say it's just you and me interacting. I tell
you the mechanism is that if you tell me that the item is worth $10 to you, I'm going to charge
you $10. Then you definitely are not going to tell me your actual value; you're going to tell me
something much lower.
>>: Well, if the agents were just going to tell you the truth, you could actually charge them [indiscernible]
>> Matthew Weinberg: That's correct. It would be exactly the same as welfare.
>>: [indiscernible] mechanism [indiscernible] you might not be able to charge.
>> Matthew Weinberg: Yes, that's right. And in fact you have to charge them something less
because you want it to be truthful. Okay. Good. Okay. The main result is this blackbox
reduction, so it's a poly-time reduction from the mechanism design problem for any objective
that you want to optimize to the algorithm design problem for that same objective plus virtual
welfare. What that means is that in this picture, whenever the right-hand side, this algorithm
design problem is tractable then so is the mechanism design problem that's on the left. Two
facts that I wanted to emphasize: first, this reduction is approximation preserving, so if you have a hard problem on the right, like independent set, that you can't solve exactly but can at least approximately solve, then if you can approximately solve the algorithmic problem you can approximately solve the mechanism design problem with the same ratio. Also, there are no constraints on the types that the agents have to have in order for
the reduction to work, so however the types get input to the mechanism design problem, they
get passed in the exact same way to the right-hand side for the algorithm problem. Okay? So one
way…
>>: [indiscernible]
>> Matthew Weinberg: Let me say this: for the most part it can be anything. If it's an objective that interacts in a weird way with randomness, then it has to be concave if you're maximizing it or convex if you're minimizing it. But if it behaves linearly with randomness, the way expected makespan does, where you compute the makespan of this outcome, you compute the makespan of that outcome, and you average them, then it can be anything. Did that, yeah.
>>: [indiscernible] reduction virtual welfare, is that something that you sort of decide to add in
to tell the existing algorithm?
>> Matthew Weinberg: That's right. It's definitely possible that maybe adding virtual welfare
makes it a much harder problem. That's definitely possible that that might happen and actually
we'll see an example later where that's the case.
>>: Also, it looks from this as if you run the existing algorithm as though you have known agent types [indiscernible] so how do you then go back?
>> Matthew Weinberg: In this slide I'm not pretending that you know the actual agent types; rather, as an algorithmic problem, imagine that you did know the agent types. That's the problem you have to be able to solve.
>>: [indiscernible]
>> Matthew Weinberg: That's right. Yes. Think of it that you have a machine that solves this
problem on the right. You can probe it several times and that will let you solve the problem on
the left. Yes. That's right. That's the right way to think of it.
>>: Is one of the [indiscernible] in the polynomial the number [indiscernible]
>> Matthew Weinberg: Yes, good. It is the case that the runtime is polynomial in the number
of total agent types, so it's different from the number of type profiles; that would be very
bad. Say there are ten agents and the number of type profiles would be exponential in ten, so
that would be really bad. In some cases even just the number of agent types is really not good,
but in some cases that's also the best you can hope for. That's a good question. At the bottom, one way to think of this result is that transitioning from honest to strategic input
is no harder than adding virtual welfare to the objective. So maybe adding virtual welfare to
the objective, maybe that does make the problem a lot harder, but dealing with strategic
agents is no harder than that. I'm going to tell you a little bit about the tools and techniques
that get us here. For now I'm going to say the objective is revenue and the agents are going to
be additive. To remind you what that means, there are n items and agents have values for each
item and their value for a set of items just sums their value for every item in the set. This is just
going to make talking about it a lot cleaner. The first thing we have to look at is how should you
describe the mechanism. One observation you can make is that a mechanism is just a function,
so it takes as input type profiles and it outputs a price for all agents and it outputs a distribution
over outcomes. One way to describe a function is a really explicit description where you just
list for every possible input what is the output. We will call this like a laundry list. With this
description you can think of the mechanism as a really high-dimensional vector that just lists, for every possible input and every possible output, what's happening.
>>: [indiscernible] you are allowed to have a random mixture?
>> Matthew Weinberg: Yes. By that I mean the mechanism is allowed to be randomized. So
the same thing as if I'm trying to build a bridge then my mechanism could decide with
probability of a half it builds a bridge here and with probability of half it builds a bridge there.
Maybe that lets me do something that I couldn't do with a deterministic mechanism. Now that
we decided we're going to think of mechanisms as vectors, one thing you might ask is can you
write a program that would just optimize over the space of truthful mechanisms.
>>: [indiscernible]
>> Matthew Weinberg: Say that again?
>>: [indiscernible] price deterministic?
>> Matthew Weinberg: The price, so for now let's say it could be deterministic; it could also be
a distribution. As far as the agents are concerned, so I didn't explicitly say this, but I assume
that the agents are risk neutral, so as far as the price is concerned they don't actually care.
Let's also say that as far as you're concerned, you also are only concerned with the expectation,
so if you want to you can use randomness. So the answer is sure, you can definitely write a
program that will optimize over the space of truthful mechanisms, but really what this is going
to look like is it's going to look really silly. It's going to look like you're writing a program to
optimize over all algorithms and then you're going to throw in a constraint to make sure that
the algorithm is truthful. What this looks like is you're going to have variables like I said to
explicitly describe this function. For all possible inputs and all possible outputs, what's the
probability that you choose that output on this input? And for all possible inputs for all agents,
what's the price you pay on that input? Then you have to constrain that the mechanism is
truthful and I'm not going to write them but you can write that with linear constraints, and you
have to guarantee that the mechanism is feasible. Those are also linear constraints, and so by
feasible for this I just mean that on every output, sorry for every input, we have a variable
denoting the probability that an outcome is chosen. These probabilities should sum to one in
order for this to be a real distribution, so that's all that's going on there. Now to optimize you
just want to optimize your expected revenue and I'm also not going to write this but this is also
a linear function. I just told you here's a linear program that solves the problem you want and
so what's the problem? The problem is that this is enormous and there are way too many
variables. What we need is a smaller description. The description we're going to use is
something called reduced forms. The idea is that we really don't need all of this information.
Let's just try and keep what's important. What we're going to do is we're going to make a
variable for every agent, for every item, for every type: what's the probability that they get that item when they report this type? This probability is over the randomness in the mechanism, if there is any, and over the randomness of the other agents choosing their types. And P_i of t is just going to be the expected price they pay when they report this type.
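In symbols (my notation for what was just described; expectations are over the mechanism's coins and the other agents' types drawn from D):

$$\pi^i_j(t) = \Pr\big[\text{agent } i \text{ gets item } j \mid i \text{ reports } t\big], \qquad P_i(t) = \mathbb{E}\big[\text{price agent } i \text{ pays} \mid i \text{ reports } t\big].$$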
Now, you can also think of this as a poly-dimensional vector, so the description is much smaller, and
you want to say that a vector is feasible if it corresponds to an actual mechanism. It doesn't
have to be a truthful mechanism, but it should be an actual mechanism. As a picture of what's going on: on the left we have the space of all mechanisms, and on the right we have the space of all vectors that at least have the right number of coordinates. Every mechanism induces a reduced form, so the set of feasible reduced forms is the image of this map, and that's a subset of the vectors that have the right number of coordinates. The first ingredient in the
solution is going to be to write a linear program using this compact description instead of the
laundry list one. The variables you're going to have are what I just said from the reduced form.
You still need to guarantee that the mechanism is truthful, and again, I'm not going to write this
but you can do this with a small number of linear constraints. And now the interesting problem
is that it's kind of hard to guarantee that a description is feasible. You can't do that with few
linear constraints and the new challenge is how can you tell if a reduced form is feasible. You
are still maximizing expected revenue, so that part is easy. So, the first ingredient: we wrote a succinct linear program using the reduced forms, and all we need to do now is figure out how
to tell if a reduced form is feasible. That's going to be the next step. To help you understand
this I'm going to give an example. The purpose of this example is one, just to give you an idea
of what it means to say a reduced form is feasible, and two, to convince you that this is an
interesting non-trivial problem. To do that I'm going to do a really simple example. It's just
going to be one item and two bidders and each bidder is going to have three types.
Furthermore, each bidder’s type is going to be chosen uniformly at random. Here's my reduced
form. It says that when bidder one says that his type is A, he should always get the item. If he says his type is B, he should get it with probability one half. If he says his type is C, he should never get it. Let's try to figure out whether this is feasible. The first thing we can look at is that definitely
whenever A shows up he has to get the item, so something that we have to do is A has to beat
D, E, and F whenever they show up. We can do this and now we know that pi of A is equal to
one, so that's good. Also, now that D is losing to A, D has to get the item whenever else he
shows up, so that means D has to beat B and C. But if we do this then pi of D is going to be
equal to two thirds, so that's fine. Another thing we can do is C and F don't want the item at all,
so whenever B and F show up, we might as well give the item to B and whenever C and E show
up, we might as well give the item to E and that gets both of their pi’s up to one third. And now
the last decision we have to make is: what should we do when B and E show up? In order to get pi of B all the way up to one half, B would need to win those meetings with probability 1/2, and in order to get pi of E all the way up to 5/9, E would have to win them with probability 2/3. At this point you can say, well, 1/2 + 2/3 is larger than one, so it is clearly not possible to do this; this reduced form is infeasible.
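Hedged aside: there is a clean way to mechanize exactly this kind of reasoning. For a single item with independent bidders, Border's theorem characterizes feasible reduced forms: for every choice of a subset of each bidder's types, the expected probability mass awarded to those types must be at most the probability that some bidder's type lands in its chosen subset. Below is a minimal brute-force check in that spirit; the numbers for bidder two (pi(D) = 2/3, pi(E) = 5/9, pi(F) = 1/3) are the targets inferred from the worked example above.

```python
# Sketch: feasibility check for a single-item reduced form via Border's condition.
# For every subset S_i of each bidder's types, feasibility requires
#   sum_i sum_{t in S_i} Pr[t] * pi_i(t)  <=  Pr[some bidder's type lies in its S_i].
from itertools import chain, combinations, product

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def border_feasible(bidders):
    """bidders: one dict per bidder mapping type -> (probability, pi)."""
    for S in product(*(list(subsets(b)) for b in bidders)):
        lhs = sum(bidders[i][t][0] * bidders[i][t][1]
                  for i, Si in enumerate(S) for t in Si)
        none_in = 1.0
        for i, Si in enumerate(S):
            none_in *= 1.0 - sum(bidders[i][t][0] for t in Si)
        if lhs > (1.0 - none_in) + 1e-9:
            return False, S          # a violated Border constraint
    return True, None

bidder1 = {'A': (1/3, 1.0), 'B': (1/3, 1/2), 'C': (1/3, 0.0)}
bidder2 = {'D': (1/3, 2/3), 'E': (1/3, 5/9), 'F': (1/3, 1/3)}
print(border_feasible([bidder1, bidder2]))
# -> (False, (('A', 'B'), ('D', 'E'))): exactly the conflict found by hand above.
```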
So what just happened? We worked out a simple example, and that was already a lot of work. I want to point out that if pi of A had not been equal to
one, we wouldn't have had a starting point to go through that kind of reasoning. What if there
were more than two bidders? What if there is more than one item? And what if there are
interesting constraints like maybe a matching on who can get what at the same time?
Hopefully, I convinced you that this is somewhat of an interesting problem and what we need is
a consistent approach to solve it. The second ingredient is going to be what's called the
equivalence of separation and optimization. To tell you what this means I'm going to give you a
definition. A separation oracle for a convex region takes as input some point x and it's going to
output either yes if x is inside your convex region, or it's going to output a violated hyperplane.
How should you think of a violated hyperplane? There is some direction w where x goes further in direction w than any point inside the polytope, and therefore it's definitely not in the polytope, because it goes too far in direction w. There's this famous theorem by Khachiyan in
1979 where he showed that if you can get a separation oracle for a convex region, then you can
optimize linear functions over that same region and the algorithm we use is called the ellipsoid
algorithm. What this means in our context is we just need a separation oracle for the space of
feasible reduced forms and then we can solve our linear program. Now what's less known is
that the other direction is also true. In '81, Grötschel, Lovász and Schrijver, and independently
Karp and Papadimitriou showed that if you can optimize linear functions over a convex region,
then you can get a separation oracle for that region too. So a good question to ask… Yes?
>>: Do the rules of what is feasible which could be computationally very complex, require that
it still be a convex problem?
>> Matthew Weinberg: Yes. It's actually always going to be convex, and the reason you can
think of that is if it's feasible for me to run one mechanism and it's feasible for me to run
another mechanism, then it's feasible for me to run this one with probability of half and this
one with probability of half, so it will also be feasible. Does that make sense? So it's feasible
for me to choose this one.
>>: [indiscernible] what I meant. Like you could give some, you know, give each person a
house with only one house or something like that, so I'm not sure how complicated these
[indiscernible] will be, but those still have to define a convex set.
>> Matthew Weinberg: Those actually don't have to define a convex set, so what I'm saying is if
I look at the probability vector that you get each different house, if it's possible for me to do
that and it's possible for me to give you a different probability vector, then it's also possible for
me to give you this one with probability of half and this one with probability of half.
>>: So once you project down on this much smaller [indiscernible] you get a convex set?
>> Matthew Weinberg: Yes. You immediately get convexity, yes. That's right. That's right. A
good question to ask is why is this second theorem I showed you, why is this ever useful? Who
uses separation oracles for anything other than optimizing linear functions? The answer is that
the most common usage is if you want to optimize a linear function over the intersection of two
convex regions. In this case just being able to optimize a linear function over one doesn't
immediately tell you anything useful about optimizing that same function over the intersection of the two regions unless you use this reduction. Actually, this application is what gave the first poly-time algorithm for submodular minimization, and there are other uses of this; one of them will come up later in the talk too.
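As a toy illustration of the optimization-to-separation direction (my own sketch, not the Grötschel-Lovász-Schrijver construction): with only a linear optimization oracle for a polytope P, you can approximately project a point x onto P using Frank-Wolfe, which itself only ever calls the optimizer, and if the projection stays bounded away from x, the direction from the projection to x is a violated hyperplane.

```python
# Sketch: a separation oracle built from a linear optimization oracle.
# lin_opt(w) returns argmax_{y in P} <w, y>. We project x onto P with
# Frank-Wolfe; if the projection y stays away from x, then w = x - y
# satisfies <w, x> > max_{y in P} <w, y>, i.e., a violated hyperplane.
import numpy as np

def separate(x, lin_opt, iters=500, tol=1e-3):
    y = lin_opt(np.zeros_like(x))                 # start from any point of P
    for _ in range(iters):
        s = lin_opt(x - y)                        # Frank-Wolfe step toward x
        d = s - y
        dd = float(d @ d)
        if dd == 0.0:
            break
        gamma = min(1.0, max(0.0, float((x - y) @ d) / dd))  # exact line search
        if gamma == 0.0:
            break                                 # y is the projection of x onto P
        y = y + gamma * d
    w = x - y
    return None if np.linalg.norm(w) <= tol else w  # None means "x is in P"

# Example: P = triangle with vertices (0,0), (1,0), (0,1); lin_opt by brute force.
V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
lin_opt = lambda w: V[np.argmax(V @ w)]
print(separate(np.array([0.2, 0.3]), lin_opt))    # None: inside the triangle
print(separate(np.array([1.0, 1.0]), lin_opt))    # ~[0.5, 0.5]: a violated direction
```

The real equivalence is stronger and fully polynomial, but this captures the intuition that an optimizer can be probed to certify infeasibility.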
The second ingredient, which was not our contribution, is the equivalence of separation and optimization, which says that if you have an
optimization algorithm for this region, then you can get a separation oracle. So all we have to
do is be able to optimize over this region. I'm actually going to skip the details on this, but I'll
tell you that one thing that we did show is that if you can optimize virtual welfare as an
algorithmic problem, then you can optimize linear functions over this region. I said I'm going to
skip the details, but I'll say that one way to interpret this is that it's a stronger version of the
equivalence of separation and optimization, but specifically for this mechanism design
application. It's not like a general tool. Now when you combine these three ingredients
together, this says that if you have a poly-time algorithm that can optimize virtual welfare, you
can go back through ingredient three, then two and then one and that will give you a poly-time
algorithm to find the optimal reduced form. So something is still missing and that's, I told you
that there's an algorithm to find the optimal reduced form, but a reduced form is not actually
an auction, or it's not a mechanism, because I threw away a lot of information. What I need is I
need an actual mechanism that can take as input a profile of types and output what to do. It
can't just be the case that everyone shows up and says their type and then I say this is the
reduced form of what I'm trying to do. I actually need to do something. This is another use of
separation oracles, which says that if I give you a separation oracle for a convex region, and a
point inside that region, then I can decompose that point into a convex combination of corners,
of extreme points of that region. The reason that that's useful is because we're able to show
the following, that any reduced form that is a corner of the feasible region has a really simple
form and can be implemented as a mechanism really easily. It's just by maximizing virtual
welfare on every profile. What that means is you should think of it as: every corner modifies the input in a different way, but then it just maximizes welfare at the end of the day. So the
corners are really simple mechanisms. What that means is to implement any feasible reduced
form, first use the separation oracle to write it as a convex combination of corners and then you
implement that corner. The corner just modifies the input and then runs an algorithm that
maximizes welfare.
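In symbols, what the decomposition gives you (same content as above, just written out): any feasible reduced form $\pi$ can be expressed as

$$\pi = \sum_k \lambda_k\, \pi^{(k)}, \qquad \lambda_k \ge 0, \quad \sum_k \lambda_k = 1,$$

where each $\pi^{(k)}$ is a corner of the feasible region, i.e., the reduced form of some virtual welfare maximizer. To run the mechanism, sample corner $k$ with probability $\lambda_k$ and run that virtual welfare maximizer on the reported profile.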
>>: The corner, this convex side, is it actually a polytope?
>> Matthew Weinberg: In most examples that you would think of it is a polytope. If it's not a
polytope and it's just a convex region, then by corner I mean extreme point that's not, what's
the word I'm looking for?
>>: [indiscernible].
>> Matthew Weinberg: Yes. That's right. That's correct. That's the right way to put it.
>>: What do you mean by modifying the input? Do you mean the different corners of different
[indiscernible]
>> Matthew Weinberg: Yes. That's what I mean. Okay. Now, this is the last ingredient. With all
four of these together, this means that you can find and implement the mechanism that
optimizes revenue and poly-time as long as you have black box access to an optimal algorithm
for virtual welfare. What I just proved was the theorem I just read, which was for revenue, for additive bidders, and for exact optimization algorithms. What I promised you at the beginning was the following more general theorem: it works for any objective you want, it preserves approximation, and the agents can have any types you want, not just additive. So the
proof uses the same four ingredients plus a little bit extra that I'll go through now. The missing
ingredients, I need to tell you how to deal with approximation. I need to tell you how to let the
agents’ types not be additive and I need to tell you how to change the objective from revenue
to something else, and I'll give a one slide overview on each now. So for approximation, recall
this theorem that I told you earlier, which is that if you can optimize linear functions then you
can get a separation oracle. A reasonable question to ask is what happens if you use exactly
the same approach but just start with an approximation algorithm instead? The output of this, we call it, a weird separation oracle.
reason we call it that is because it is an algorithm that sometimes says yes and it sometimes
outputs hyperplanes, but it's not really a separation oracle, and the reason is that it can have this really weird behavior where it's possible for it to accept the point x and accept the
point y but reject every convex combination of x and y. Whereas, if the region was actually
convex, not only should it accept some of these points, it should accept all of these points.
That's why we call it weird. But an informal statement of the theorem that we show is: if you can approximately optimize linear functions over a convex region, then you get some meaningful notion of an approximate separation oracle for that same region. The exact
definition of what I mean by a meaningful notion is a little technical, so I didn't write it. But an
overview of what happens: we show that the same algorithm that was used in the original reduction works directly, and the hard part is that we have to understand what happens if you run the ellipsoid algorithm over a non-convex region.
>>: [indiscernible]
>> Matthew Weinberg: The idea is if you have a real separation oracle then the set of points
that it accepts are going to be a convex region and so this weird separation oracle doesn't have
that property, so it's going to accept some points and those points are going to form a set and
the set may not be convex and it may not even be connected, so it can be a really weird set.
>>: [indiscernible] for each query you give [indiscernible]
>> Matthew Weinberg: Yes, for each query it gives a consistent answer, but it's in some sense
not consistent with any single convex region. That's right. So it could be the case that in one direction it says you can go very far, and perturbing that direction just a little bit, it says you can barely go anywhere; it's possible that that will happen. So that's how you deal with approximation: you need to use the stronger version of the reduction. For arbitrary types, let me first ask: what goes wrong as you move beyond additive bidders?
>>: In this approximation you would still just aim for the worst one, right? For example, when you said for different [indiscernible] it gives [indiscernible] approximation [indiscernible]
>> Matthew Weinberg: That's right. Yes, the worst one. But you are guaranteed that in every
direction it does it at least half as well as the optimal and that's what you are targeting, yes.
>>: [indiscernible] some other [indiscernible]
>> Matthew Weinberg: That's right. So I don't know how to exploit that property. That's right.
So even if it did well in every direction except for one, I wouldn't know how to exploit it. Let me ask now: what's the problem as you try to move beyond additive bidders? The issue is
that this reduced form that I was using is completely useless, and here's why. Say there's just one bidder and there are two items. You should think of the items as a left shoe and a right shoe.
Your value for both shoes together is a dollar, but your value for just one shoe or getting
nothing is 0, so that's an example of a bidder that's not additive. In one auction maybe I give you both shoes with probability one half or nothing with probability one half. In the second auction you just get a single shoe uniformly at random. These two auctions have the same reduced form (each gives you each shoe with probability one half), but your value for them is really different: the first is worth fifty cents to you in expectation, while the second is worth nothing. So that's the problem, and how
do we deal with it? We use a different description of the auction that's really implicit. We'll use something you can think of as a swap value: we're just going to explicitly record what your expected value is if your real type is T but you choose to say T prime instead. Then we store this for all pairs of types. I am not actually storing any information about the auction other than, directly, how happy it makes certain types to report certain other types.
That's how you deal with arbitrary types and now I'll tell you how to deal with arbitrary
objectives. What you want to ask is what makes revenue special for this approach and the
answer is that expected revenue is a linear function of the reduced form or the implicit form.
Whereas, most other objectives aren't. For instance, welfare is but makespan isn't. So the
issue is that if I just give you the implicit form of a mechanism, there is no way you can possibly
hope to compute the makespan. The solution is we're just going to add another variable that
stores this value directly, okay? So we'll take the implicit form and we'll add one more variable
and that's just going to store the expected value of same makespan. This definitely makes
running the LP easier and the issue is that it may make determining feasibility a lot harder,
because now I have to determine is it feasible for me to make, you know, you this happy when
you say this other type and for me to do so in a way that gets makespan exactly 5 or something.
Now this is a much harder problem to deal with. Just to recap, this new implicit form is going to have these three components. The first is the swap value, which says: if your real type is T, what's your value when you say T prime? It's also going to store the expected price that everyone pays, and it's also going to store the expected value of the objective when you use this mechanism. Okay?
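Written out (my notation for the three components just listed; expectations are over the other agents' types and the mechanism's randomness):

$$I_i(t, t') = \mathbb{E}\big[\, t(\text{outcome when } i \text{ reports } t')\,\big], \qquad P_i(t') = \mathbb{E}\big[\text{price } i \text{ pays when reporting } t'\big], \qquad O = \mathbb{E}\big[\text{objective value}\big].$$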
Now let me go back to these four ingredients and tell you how they change when you want to make these generalizations. Instead of using the reduced form
to handle arbitrary types and objectives, we're just going to use this implicit form that I told
you, so all we're doing is changing the variables and as far as the LP is concerned nothing
interesting happens. To handle approximation, we have to use this stronger approximation
preserving version of the equivalence of separation and optimization, but once we do that,
that's the only change we need to make. So for this third ingredient, optimizing virtual welfare
no longer lets us optimize over this space of implicit forms because we added this component
for O, but it turns out that adding this extra variable just causes the problem that you need to
solve to change, so if you want to optimize over this space of implicit forms, you just need to be
able to optimize O plus virtual welfare. That's how it changes and that's where this problem of
O plus virtual welfare comes from. For the last ingredient, because we no longer have an actual
separation oracle, we can't use the decomposition algorithm that I told you about before, so
instead we have to use a property of the approximation preserving reduction, so this
decomposition will just come immediately out of using the technique. Now I've shown this
more general version of the theorem which says that you can find and implement an
approximately optimal mechanism for any objective for arbitrary agent types in poly time as
long as you have black box access to an approximately optimal algorithm for that same
objective plus virtual welfare. Okay? So now let me tell you about some quick applications.
The first application is, you know, not surprisingly for revenue. In 1981 in Myerson’s seminal
paper he showed the following structural result about auctions: the revenue optimal auction in single dimensional settings is a virtual welfare maximizer. A major open question since then is what a revenue optimal auction looks like beyond single dimensional settings. Even for additive we don't know anything, and definitely beyond additive we really don't know anything. Okay? So what I showed you in this talk is that no matter what the agent types look like, the revenue optimal auction is a distribution over virtual welfare maximizers, so it's not quite as simple as Myerson's, but it's not super far from that. It's definitely more than what we knew before. A note is that Myerson also showed the other
direction which is that any approximately optimal mechanism necessarily approximately
maximizes virtual welfare as an algorithm. We can show something like this too in our setting
which I'm going to say now. What we show is that for revenue, this reduction actually holds
both ways. What I mean by that is if I gave you an algorithm that could find the optimal
truthful mechanism, or even an approximately optimal truthful mechanism, you could turn that into an
algorithm that can approximately maximize virtual welfare algorithmically. So the proof of this
I'm not going to talk about at all but it has nothing to do with the geometric techniques that
you saw in the rest of the talk. An encouraging corollary of this is that in this setting that means
that maximizing virtual welfare is in some sense the right way to maximize revenue because
you can't avoid doing it. If you have an algorithm that can maximize revenue, you would also
have an algorithm that maximizes virtual welfare. An evil corollary is that it's NP-hard to truthfully approximate revenue within any polynomial factor, even for a single monotone submodular bidder. If you don't know what those terms
mean, monotone just means that as he gets more items he gets happier and submodular
means that he gets diminishing marginal returns from adding more and more items. The
second application I'll talk about is makespan. If you remember there was this guy who was
trying to schedule jobs and now he's a little bit excited because he just has to design algorithms
for makespan plus virtual welfare. Let me give you a specific problem. You want to truthfully
minimize makespan on unrelated machines and so what that problem sounds like is there are n
different jobs and m machines, and processing job i on machine j takes time p_ij, and this is
private knowledge to the machines. You don't know how long it takes to process any job on
any machine, but the machines do. I want to point out this is the original problem that was
studied in the seminal paper of Nisan and Ronen. In this talk I told you that this reduces to an
algorithm design problem for minimizing makespan plus virtual welfare. One way that you can
interpret this problem is that processing job i on machine j takes some processing time and it
also costs some monetary value, so think of it as you have to pay the machine to process a job
and it's also going to take them some time. Your goal is to find a schedule that minimizes the
time to process the last job, so that is a makespan component, plus the total monetary cost of
processing all jobs and that corresponds to the virtual welfare component. Think of it as you
only care about the time that it takes to finish the last job, but you also care about all of the money you spend.
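In symbols (my notation; $x_{ij} \in \{0,1\}$ assigns job $i$ to machine $j$, $p_{ij}$ is the processing time, and $c_{ij}$ is the per-assignment monetary cost playing the role of virtual welfare):

$$\min_{x} \; \max_j \sum_i x_{ij}\, p_{ij} \;+\; \sum_{i,j} x_{ij}\, c_{ij} \qquad \text{subject to} \quad \sum_j x_{ij} = 1 \;\; \forall i.$$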
Now, what kind of sucks is that this problem is NP-hard to approximate within any finite factor, not just within a constant factor. It's NP-hard to determine if the
answer is positive or negative. I just spent a half hour telling you about this really general reduction, and now I've given you an example where we reduce a problem we don't know how to solve to a problem that is impossible to solve. You should ask whether this is the right approach. The answer is we can still accommodate this, but we need to improve
the reduction a little bit. I'm going to give you one more definition and this is I promise going to
be the last definition of the talk. We will say that an algorithm is an alpha-beta approximation if
the output satisfies the following inequality, and how you should interpret this is that if beta were equal to one, this would be a normal alpha approximation. But by letting beta be something less than one, you are allowed to cheat a little bit: you get to discount the objective by some factor before you compare to alpha times the optimum.
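The inequality itself isn't in the transcript; a reconstruction consistent with this description, for a minimization objective of the form objective plus virtual welfare, would be (my formalization):

$$\beta \cdot O(A(z)) + \mathrm{VW}(A(z)) \;\le\; \alpha \cdot \min_{y\ \text{feasible}} \big( O(y) + \mathrm{VW}(y) \big),$$

so that $\beta = 1$ recovers a normal $\alpha$-approximation, and $\beta < 1$ discounts the objective term before comparing against $\alpha$ times the optimum.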
What we're able to show is that if you have a poly-time alpha-beta approximation algorithm for optimizing your
objective plus virtual welfare, then you can turn that into a poly time alpha over beta
approximate mechanism for the objective. That means having beta not equal to one, just
causes you to have to sacrifice another factor of beta on your approximation ratio. I'm also not
going to give any details on the proof, but I'll say that it's by basically extending this
equivalence of separation and optimization to accommodate some geometric notion of this
kind of approximation. What's really cool is that for makespan this problem was actually studied 20 years ago: Shmoys and Tardos gave what in this language is a poly-time (1, 1/2)-approximation algorithm for minimizing makespan plus virtual welfare. What that means is that when you plug that back into our reduction, you get an actual 2-approximation for
truthful makespan minimization on unrelated machines in this Bayesian setting. I want to point
out first that this matches the guarantee of the best-known algorithm: even on honest input, with no strategic agents, we don't know how to do better than a 2-approximation. And also, if you think this is a really important problem, this is the first constant-factor approximation in a general Bayesian setting. In conclusion, we studied this general
question motivated by Nisan and Ronen which said how much harder is it to solve problems on
strategic input than on honest input and we studied it through this specific open question
which is when are there blackbox reductions from mechanism to algorithm design. What I
showed you in this talk is that for all objectives in a Bayesian setting, there is a blackbox
reduction from mechanism to algorithm design as long as you perturb the objective. One way
to think of that, like I said before, is that transitioning from honest to strategic input
computationally is no harder than adding virtual welfare to your objective. Something we did is
we extended this equivalence of separation and optimization framework to accommodate both
traditional approximation and alpha-beta approximations. I also showed you that this
reduction, or I didn't show you but I told you that this reduction is tight for revenue. What that
means is that maximizing revenue in this setting is exactly as hard as optimizing virtual welfare.
And something else, if you care about structure: I showed you that the optimal mechanism has a manageable form, which means the way you implement it is you randomly sample a
corner and then you run the corresponding virtual objective maximizer where modifying the
input is going to depend on which corner. That's all I'm going to say about that and I'm just
going to really briefly talk about two other things that I've done. Changing gears, prophet
inequalities are kind of a fundamental problem in online algorithms and optimal stopping. It
turns out that this actually has strong applications in mechanism design. I'll define the problem.
So the off-line input to this problem is a list of distributions and what happens online is that
random variables are sampled one at a time from each distribution. They're revealed to you
and as soon as it's revealed you have to immediately decide do you accept or reject. If you
accept, then you get a reward equal to the random variable you just saw and the game stops. If
you reject, you throw it away forever and then the game keeps going. Your goal is to maximize
your expected reward. If you were a prophet who knew the future so this is where the term
prophet inequality comes from, you would always get the maximum element and the question
that you want to ask is if you are a gambler who didn't, how well can you do. There's a seminal
result that says in this setting there is a strategy for the gambler that guarantees him half of the
expected reward of the prophet. This is the best that you can possibly hope to do.
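A quick simulation of that classic single-choice guarantee (my own sketch; one standard strategy achieving the factor of two is to accept the first value that is at least half the prophet's expected reward; the distributions below are hypothetical):

```python
# Simulation: single-choice prophet inequality with the threshold E[max]/2 rule.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 200_000
scales = np.array([0.5, 1.0, 1.5, 2.0, 2.5])        # exponential value distributions
samples = rng.exponential(scales, size=(trials, n))

prophet = samples.max(axis=1).mean()                  # E[max]: the prophet's reward
threshold = prophet / 2.0
accept = samples >= threshold                         # online: take first value over threshold
first = accept.argmax(axis=1)                         # index of first acceptance
took_any = accept.any(axis=1)
gambler = np.where(took_any, samples[np.arange(trials), first], 0.0).mean()

print(f"prophet: {prophet:.3f}, gambler: {gambler:.3f}, ratio: {gambler/prophet:.3f}")
# The ratio should come out at least ~0.5, matching the guarantee.
```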
Recently, people have been trying to study what happens if the gambler has multiple choices. I'm going
to change the off-line input of the problem to be also a list of distributions and there are going
to be some feasibility constraints that say what elements you can simultaneously accept. The
elements are still revealed one at a time and you still have to decide online whether to accept
or reject. The difference is that when you accept an element, the game doesn't immediately stop. It
just gets added to your accepted set of elements and the catch is that at all times your accepted
set has to be feasible. What you lose by accepting something is that it'll prohibit you from
accepting some other things in the future. Your goal is still to maximize your expected reward
and it's still the case that if you are a prophet you would always choose the max weight feasible
set, and so what I showed with Bobby Kleinberg is that if the feasibility constraints form a matroid,
then there exists a strategy for the gambler also guaranteeing him half the expected reward of
the prophet. This is the best possible because it's the best possible even for the single choice
problem. Also, recently with Pablo Azar we showed that there exist asymptotically optimal strategies for the gambler that only require a single sample from the distributions, so he doesn't need to actually know the distributions outright. One sample is enough. This is for
several special cases of matroids, but not actually for matroids in general.
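For the single-choice version, one natural single-sample rule studied in this line of work is especially clean: draw one sample from each distribution up front and use the largest of those samples as a threshold. A minimal sketch of that idea, again on made-up distributions; the appeal is that the gambler needs no knowledge of the distributions beyond those samples:

```python
import random

def single_sample_gambler(dists, trials=20_000):
    """Single-choice game where the gambler never learns the distributions:
    he sees one sample from each, then thresholds at the samples' maximum."""
    total = 0.0
    for _ in range(trials):
        threshold = max(d() for d in dists)  # one offline sample per distribution
        values = [d() for d in dists]        # the actual online sequence
        total += next((v for v in values if v >= threshold), 0.0)
    return total / trials

# Made-up instance: three uniform rewards with different ranges.
dists = [lambda s=s: random.uniform(0, s) for s in (1.0, 2.0, 5.0)]
print(f"single-sample gambler: {single_sample_gambler(dists):.3f}")
```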
>>: [indiscernible] evaluate the independent sets [indiscernible]
>> Matthew Weinberg: I mean the independent sets form a matroid, yes, that's right. The last
thing I'm going to talk about is auctions for everyday people and so the motivation for this is
that the mechanisms that I just finished describing to you I would consider them manageable
but I would not call them simple. What I mean by that is that the FCC could run this mechanism
as a spectrum auction. They are definitely capable of doing that, but what I also mean is that I
could not run that mechanism to sell my Pokémon cards. There is some bad news with respect
to this problem, which is that even in ridiculously simple settings the unique optimal
mechanism can be very complicated. In particular, Hart and Reny showed that even if there's
just one buyer and two items and his value for each item is drawn iid from a distribution of
support 3, then the unique optimal mechanism is already really complicated. The good news is
that this doesn't rule out simple and approximately optimal mechanisms, but we do have to
start kind of small in trying to approach this.
>>: [indiscernible]
>> Matthew Weinberg: That's a good question. For this specific example I would say one
measure of complicatedness is whether it uses a lot of randomness, and another, admittedly fuzzier, measure is just that it looks weird. I'll just tell you what the optimal mechanism is for this setting. It offers the buyer the following four options. He can either pay a lot of money and get either item deterministically, so with probability one, or he can pay to get both of them at the same time. Or he can pay less money and buy a lottery ticket for one item that gives him the item with probability one half. So he can buy a lottery ticket, walk over to another counter, hand them the lottery ticket, and they will flip a coin; maybe they'll give him the item and maybe they'll give him nothing. He can buy one of these lottery tickets for either item. I would say that's something that to me seems complicated.
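To make that concrete, here is a toy version of such a menu in code, together with how an additive buyer would choose from it. The prices and probabilities below are invented for illustration; they are not the actual numbers from Hart and Reny's construction.

```python
# A toy menu in the spirit of that example: two deterministic offers and
# two lottery tickets. Prices here are made up, not the optimal ones.
menu = [
    {"price": 10.0, "probs": (1.0, 0.0)},  # item 1 deterministically
    {"price": 10.0, "probs": (0.0, 1.0)},  # item 2 deterministically
    {"price": 4.0,  "probs": (0.5, 0.0)},  # lottery ticket for item 1
    {"price": 4.0,  "probs": (0.0, 0.5)},  # lottery ticket for item 2
]

def buyer_choice(values, menu):
    """An additive buyer picks the option maximizing expected utility,
    or walks away if every option has non-positive expected utility."""
    def utility(opt):
        expected_value = sum(p * v for p, v in zip(opt["probs"], values))
        return expected_value - opt["price"]
    best = max(menu, key=utility)
    return best if utility(best) > 0 else None

print(buyer_choice((9.0, 2.0), menu))  # this buyer prefers a lottery ticket
```

The point is just that the optimal menu already mixes deterministic sales with lotteries, which is what makes it feel complicated. Okay. The model is there's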
just going to be one additive buyer, one seller and n items. The buyer’s value for each item is
going to be sampled independently. Here are two simple ways you can sell the items. One is
you just sell them all separately. You just put one price on each item. You walk in, you buy
whatever you want and then you leave; probably any time you've ever bought something, it was like that. Another simple thing you can do is put them all
together into one big bundle and put a price on it and say you can buy everything or you can
buy nothing. So there's bad news even for this, which is that there do exist instances where selling everything separately does really badly compared to the optimal, and there exist instances where selling everything together does really badly compared to the optimal. What we showed with Babaioff, Immorlica and Brendan Lucier is that on every instance one of the two does well: one of them is always within a constant factor of the optimal, even though each of them individually has instances where it does terribly. So there is a
fuzzy corollary of this. This means that if you sold anything on eBay recently, you probably did
it in a way that is provably approximately optimal.
>>: How does the seller decide which of these to apply? There's a clear option where the seller would tell the buyer: well, you could either buy the bundle or buy them separately.
>> Matthew Weinberg: No. Here the seller decides beforehand which one he thinks will make him more money and then he uses that one, and it's actually very easy for the seller to tell which one will make him more money.
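As a rough illustration of that comparison, here is a Monte Carlo sketch that, for a made-up instance, estimates the best revenue from posting one price per item versus one price on the grand bundle, searching a crude price grid; the seller would simply run whichever comes out higher. The distributions and grid below are invented for the example:

```python
import random

def revenue_separate(dist, price, trials=5_000):
    """Revenue from one posted price on a single item: price * Pr[value >= price]."""
    return price * sum(dist() >= price for _ in range(trials)) / trials

def revenue_bundle(dists, price, trials=5_000):
    """Revenue from one posted price on the grand bundle (additive buyer)."""
    return price * sum(sum(d() for d in dists) >= price for _ in range(trials)) / trials

# Made-up instance: ten items with iid uniform [0, 1] values.
dists = [lambda: random.uniform(0, 1) for _ in range(10)]
grid = [i / 10 for i in range(1, 101)]  # crude grid of candidate prices

best_separate = sum(max(revenue_separate(d, p) for p in grid) for d in dists)
best_bundle = max(revenue_bundle(dists, p) for p in grid)
print(f"separately: {best_separate:.2f}, bundled: {best_bundle:.2f}")
```

On this instance bundling should come out ahead, since the sum of ten iid values concentrates around its mean; on other instances, as noted above, selling separately can be far better.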
>>: What about the idea of giving the buyer the option? Is that [indiscernible]
>> Matthew Weinberg: I guess I would say that by this result, kind of trivially, that would also have to be good, because you can always set the price of one of them to be infinite, or set it to be something so high that you would never actually choose it, or if you do choose it then I'm making a ton of money, so it can only help. Yeah, maybe that would be a better way to state it: you can sell the options of buying everything separately, or, maybe for a much higher price, buying everything together. Or maybe…
>>: [indiscernible] lower [indiscernible]
>> Matthew Weinberg: Yeah. I mean, unless you don't want him to choose that option
because really you want to sell everything separately, but yes, that's true. Okay. And so that's
everything I have. Thanks a lot for listening. [applause]
>> Yuval Peres: Any other questions?
>>: I see that initially you just compare to the result from Myerson, [indiscernible] but that was deterministic [indiscernible]
>> Matthew Weinberg: Actually, it's not comparing to Myerson; it's comparing to whatever the best really complicated thing you could possibly do is. You were talking about the last result, this one?
>>: No, no. I mean at the top. I was trying to see, for example, in Myerson's setting it is not clear, like, the way that you [indiscernible] the prices on top.
>> Matthew Weinberg: Yes, that's true. So in the main result I'm actually not comparing to the optimal deterministic mechanism. I'm comparing to the actual optimum, the best thing you could do with any complicated truthful mechanism. That's what we're comparing to. It could use randomness. It could, you know, do something really crazy.
>>: But can you see some places where randomness is not necessary [indiscernible]
>> Matthew Weinberg: Not for any new settings, no. One thing that we did try is to say, now that we know all of this, can we re-derive Myerson's result, and the answer is yes: you can give a geometric proof of it instead of using calculus. It's not clear that that's really simpler, but you can. I think that's a great question, but that's not something that we've been able to look at successfully yet.
>>: So you explained your benchmark in the beginning as comparing to the optimal truthful mechanism. Now, in some special settings requiring truthfulness doesn't hurt, but in some it does. So maybe the natural benchmark is [indiscernible] not necessarily about truthfulness; it just wants to do the best. I know that something like [indiscernible] won't motivate people to tell the truth, but it will still be better off.
>> Matthew Weinberg: Yeah, so that's right. You are saying: what if there is a higher benchmark, which is the best you could do with an algorithm? Let me answer that first and then maybe there's a second thing you were trying to get at. The first is that, yes, that is a better benchmark, and we do know that in some settings, like for welfare, you can achieve that benchmark. As far as I know there are actually not a lot of settings where we know that you can't, for Bayesian incentive compatibility. If you want dominant strategy truthfulness, then there are some settings where we know that you can't do as well with the truthful mechanism as with the
algorithm. One useful thing, I think, is that we are working to actually code up these algorithms and implement them, so with that you could run some experiments and try to get some intuition: for makespan, say, should the gap be a constant factor, should there be any gap at all, or should it be enormous? We haven't finished coding up the implementation, so we haven't done that yet, and honestly I don't have any intuition about it. Maybe another interpretation of your question would be: what if
you just run something that's not truthful and see what happens, how much revenue will you
get? There is this thing called the revelation principle that says that if the bidders are behaving strategically, then you could make just as much revenue by behaving strategically for them via a truthful mechanism. It's unclear that bidders do actually behave
strategically if you give them more options, but I think it would be nice to have some kind of
robustness argument that says that it's in everyone's interest to tell the truth so they should tell
the truth and you make this much money when they tell the truth. Whereas, if you just kind of
let them do whatever they want, then you have no guarantee as to what they are going to do
and it's much harder to get any kind of guarantee.
>>: Something I understood only vaguely: when you have solved the problem in the low-dimensional space and you are trying to find [indiscernible]. In the first instance, where everything was additive, I gather you mapped the problem to finding a separating [indiscernible], and because these extreme points have nice properties, it works. But I didn't see: is it the same when you go to the non-additive case and you replace these representations by just sort of expected values of what happens?
>> Matthew Weinberg: Yes. Right. First, let me tell you: for revenue, when you go to this more general space with this swap value description instead of the other description, then it's still true that the extreme points have this nice property, which is that they are still virtual welfare maximizers, but what virtual welfare means gets more complicated as you let the types become
more complicated. For instance, when you start with additive bidders, the virtual welfare
maximization problem is still going to be on additive bidders. If you start with something like
submodular bidders, so let's say that all of your bidders have submodular valuation functions, then the virtual welfare problem might ask you to maximize welfare when they have the difference of two submodular functions, which could get much more complicated than just a submodular function. Let me say, as a characterization result, it's still true that all of the
extreme points are virtual welfare maximizers. As you go to this more complicated description,
then what it means to be able to maximize virtual welfare gets more complicated and harder.
There's a second thing which is, so I told you if you don't have a real separation oracle, so if you
just have this fuzzy approximate version, you can't run this algorithm to even find the corners
to begin with. Let me say it like this. Whenever you use this approximate equivalence, whenever you say that a point is feasible, you can actually explicitly find a convex combination of points that gives you that one, where each such point can be found just by running your algorithm in a certain direction. What that means is that if I have an approximation algorithm and I want to implement some mechanism, I can do it by randomly choosing a virtual transformation and then running whatever algorithm I used to find it. If I have an approximation algorithm, it will be that same approximation algorithm. I don't know if that was too…
>>: I'm not sure I…
>> Matthew Weinberg: Okay.
>>: [indiscernible]
>> Yuval Peres: Let's thank the speaker again. [applause]