>> Yuval Peres: Welcome, everyone. It's my pleasure to introduce Shaddin
Dughmi. He is a Ph.D. student at Stanford advised by Tim Roughgarden. He
works in the area of algorithmic game theory, in particular mechanism design,
and he's going to tell us about the power of randomized mechanisms.
>> Shaddin Dughmi: Thanks, Yuval.
Can everybody hear me fine? All right.
So I'm going to start with an introduction. So I'm motivated by the following
question: When can we allocate scarce resources efficiently to get a socially
desirable outcome? Can we do it in the presence of both selfish behavior and
computational limitations? So there's a computational constraint, I have
polynomial time to calculate what I'm going to do, how I'm going to allocate
resources, and there's an economic constraint.
I have selfish behavior that controls my input.
So resource allocation problems that involve selfish agents are everywhere, and
as systems grow larger also computational issues become more and more
important. So there's going to be a challenge to reconcile the two.
Okay. So let me start by introducing mechanism design with a canonical example.
So many of you have seen this before. Let's say you have an item to sell. Let's
say I want to sell my car. Let's say there's a bunch of people that are interested in buying my car.
So a commonly employed solution is called the Vickrey Auction. It's the following
three-step process. So in the first step I ask players for bids, which is a dollar
amount that's a proxy for how much they value my car. Right?
And then I give the item to the highest bidder. So this lady bid 4,000 so she gets
the car. And then I charge this lady $3,000 because the next highest bid is
3,000. That's like a commonly employed solution. It's called the Vickrey Auction
or the second-price auction.
So what happens in such a thing? In order for me to predict what's going to
happen in such an auction I need a way to model players' utilities. So I'm going
to assume that each player has a valuation v sub i for my item, which is basically a dollar amount that represents how much they value the car, for example.
And a player's utility from the auction is the value for the car minus the price they
pay if they win or it's also zero if they lose because they pay nothing and they get
nothing. Right?
So I'm going to assume a player is rational and that he chooses the bid that
maximizes his utility.
So he has a valuation, and he has the utility, and the bid is going to not necessarily be equal to the valuation, but it's going to be chosen strategically.
So here's a fact about the Vickrey Auction. It's truthful in the sense that for every player, bidding their true value is the best thing they can do. That's how they maximize their utility.
Equivalently, we say truth-telling is a dominant strategy. It's basically -- it's always in their best interest to tell the truth no matter what everybody else is doing.
That's a property of the Vickrey Auction that I'm not going to prove.
Given this property, I can predict what's going to happen in the Vickrey Auction.
A player who's rational is going to tell the truth, so the auction is going to get the true values as the bids, and it's going to give the item to the player who values it most and charge this player the next highest valuation, right?
Because it's going to get the correct inputs.
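The three steps can be sketched in a few lines; this is a minimal illustration, with the bidder names and dollar amounts taken from the example (the names are hypothetical).

```python
def vickrey_auction(bids):
    """Second-price auction: highest bidder wins, pays the second-highest bid."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    # With a single bidder there is no second-highest bid, so the price is 0.
    price = bids[ranked[1]] if len(ranked) > 1 else 0
    return winner, price

# The example from the talk: the 4,000 bidder wins and pays 3,000.
print(vickrey_auction({"alice": 4000, "bob": 3000, "carol": 1000}))
```

This also makes the polynomial-time claim below concrete: it is just a maximum (and second maximum) of n numbers.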
Here's another fact about the Vickrey Auction. It maximizes social welfare. What
does that mean? Social welfare is basically the total value derived by the players
from the outcome.
To see this, notice that the player who values the car the most gets the car,
right? So in a sense the most value is created. In other words, social welfare is
maximized.
And this is what mechanism design tries to do. More generally, it tries to
basically figure out how to compute desirable outcomes from preference of
selfish players. It tries to extract information and do the computation
simultaneously.
And the Vickrey Auction is an example of a mechanism which is a three-step
process, a bidding step, an allocation step where you compute the outcome, who
gets what, and a payment step where you charge people payments to kind of
incentivize proper behavior.
And notice that a player's utility is basically determined by what happened in the
second step and in the third step. So they're going to choose what they do in the
first step to optimize what's going to happen there.
Questions?
Here's a third fact. The Vickrey Auction is obviously something you can write
code to do in polynomial time, right? Because it's basically just computing the
maximum of n numbers, right?
Now, things become more interesting when computing the outcome involves
solving a more complicated problem. Maybe there's more than one item. Maybe
there's interdependencies between them. And that basically makes the problem
harder and harder and maybe becomes NP hard, right?
So that's kind of where algorithmic mechanism design comes in. It's basically
when you want to compute desirable allocations of resources where the preferences of the players for the allocations are private and the players are selfish.
And, moreover, you want to do it in polynomial time where the problem is maybe
not tractable. Right?
So there's more complicated resource allocation problems where the
computational challenge becomes more important.
So let's look at some examples of these. We'll look at three examples that are
going to be illustrative, and I'm going to tell you how we solve them all during the
course of this talk. Some I'm going to explain in more detail than others.
So here's a problem. I'm going to call it the knapsack public projects problem.
It's basically a strategic variant of the knapsack problem that we all know and
love.
You have a bunch of people who live in a town, and there's a bunch of public projects the town is considering building. So a bus station, a fire station and a
bridge, right? And there's a finite resource, let's say cement, right? There's a
hundred tons of it, and every project requires a certain amount of cement.
And now every player has a value which is in dollars of how much utility they
derive from every project if it were built. So maybe this guy would be -- would
derive $200 worth of value if the bridge was built, for example. And if I sum up
players' values for a particular project, I get the total value to society of building
that project.
For example, in this case the bridge is worth $500 to society, right? I can do this
for everything and I get a dollar value for every public project that tells me how
much society values it.
Now, my goal is to basically maximize the total welfare to society. I want to pick
a subset of these projects to build given that I only have 100 tons of cement and
given that these are the players' values. Right?
Let's forget for now that the players' values are private. I'm just defining the
problem and then we can worry about extracting these values.
So I want to maximize the social welfare by building a subset of these projects.
And notice that this is basically just the knapsack problem. As the number of projects grows large, this becomes NP hard, because I have a resource constraint and I have values on the projects, and I want to basically build a subset that fits within the resource constraint of 100 tons of cement.
Questions?
All right. So it's NP hard, but it has an FPTAS, or more simply, there's a 1 minus epsilon approximation that runs in polynomial time for every epsilon, and the running time is also polynomial in 1 over epsilon, as we all know.
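Ignoring incentives, the underlying optimization is the standard 0/1 knapsack, solvable exactly by the classic pseudo-polynomial dynamic program (the FPTAS is obtained from such a DP by rounding the values). A sketch, with illustrative costs and values:

```python
def max_welfare(projects, capacity):
    """0/1 knapsack DP. projects: list of (cement_cost, total_value) pairs."""
    best = [0] * (capacity + 1)  # best[c] = max welfare using at most c tons
    for cost, value in projects:
        # Iterate capacity downward so each project is built at most once.
        for c in range(capacity, cost - 1, -1):
            best[c] = max(best[c], best[c - cost] + value)
    return best[capacity]

# Bridge (50 tons, $500), bus station (40, $300), fire station (60, $400),
# with 100 tons of cement: bridge + bus station give welfare 800.
print(max_welfare([(50, 500), (40, 300), (60, 400)], 100))
```

The running time is O(n times capacity), which is pseudo-polynomial; scaling the values down yields the 1 minus epsilon guarantee.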
Okay. So here's another example. It's called combinatorial auctions. It's one of
the most studied problems in mechanism design, in algorithmic mechanism
design.
So you have a bunch of people, and there's a bunch of items up for sale. A
volleyball net, a volleyball, a Ping-Pong ball and a Ping-Pong paddle. So every
player, also known as bidder, has a valuation function which is a function that
maps subsets of the items to the real numbers. It basically tells you how much
they value every bundle.
Why am I defining this to be a set function? Well, maybe there's dependency
between the items. Maybe if I'm this player on the left, maybe my value for the
volleyball net and the volleyball together is greater than the sum of my values for each alone because they're complementary. I can only play volleyball if I have
both. Right?
So that's why I defined their valuations using these set functions.
And my goal is basically partition these items, partition these items between the
players so as to maximize their welfare. And what does that mean? It's the sum
over all the players of their value for what they get. Make sense?
Questions?
Okay. So this problem is very hard and hard to approximate if we don't assume anything about the structure of these valuation functions. So I'm going to make -- for the purpose of this talk, I'm going to make assumptions about these valuations. The most common one is that they're submodular.
What this means roughly is that the players have diminishing marginal returns
over what they get. Essentially my value for an additional item is a decreasing
function of what I already have. Right? Which is natural.
For example, my value for the volleyball and the Ping-Pong paddle is going to be
less than the sum of my values for each alone because I don't have enough time
to play both. Right?
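For tiny examples, the diminishing-returns condition can be checked by brute force; here is a sketch (exponential in the number of items, for illustration only, with the volleyball valuation from earlier as a made-up example of a non-submodular function):

```python
from itertools import combinations

def is_submodular(v, items):
    """Check v(S+x) - v(S) >= v(T+x) - v(T) for all S subset of T, x not in T."""
    subsets = [frozenset(c) for r in range(len(items) + 1)
               for c in combinations(items, r)]
    for S in subsets:
        for T in subsets:
            if S <= T:
                for x in items:
                    if x not in T:
                        # Marginal value must not grow as the base set grows.
                        if v(S | {x}) - v(S) < v(T | {x}) - v(T) - 1e-9:
                            return False
    return True

# The complementary volleyball valuation (net and ball worth more together)
# violates diminishing returns, so this prints False.
complements = {frozenset(): 0, frozenset({"net"}): 1,
               frozenset({"ball"}): 1, frozenset({"net", "ball"}): 5}
print(is_submodular(lambda s: complements[frozenset(s)], {"net", "ball"}))
```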
Okay. So now even if I assume the players' values are submodular, the problem
is still NP hard, but now I get a 1 minus 1 over e approximation algorithm
because this is essentially, if you've seen it before, this is maximizing a
submodular function subject to a [inaudible] constraint. It's not obvious to see
that, but trust me.
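For intuition about the algorithmic side, here is a much simpler procedure than the 1 minus 1 over e algorithm: the item-by-item greedy, which is known to give a 1/2-approximation for submodular bidders. It is a non-truthful sketch; valuations are passed as set functions, an encoding I'm assuming for illustration.

```python
def greedy_allocation(valuations, items):
    """Give each item to the bidder with the largest marginal value for it.

    A 1/2-approximation for submodular valuations -- not the 1 - 1/e
    algorithm from the talk, and not truthful on its own."""
    bundles = [set() for _ in valuations]
    for item in items:
        # Marginal value of `item` for each bidder given their current bundle.
        gains = [v(frozenset(b | {item})) - v(frozenset(b))
                 for v, b in zip(valuations, bundles)]
        bundles[gains.index(max(gains))].add(item)
    return bundles
```

Converting algorithms like this one into truthful mechanisms without losing the approximation ratio is exactly the difficulty the talk is about.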
Questions?
All right. Okay. Let's look at even a third example. So this is called
combinatorial public projects. It shares -- it has similarities to each of the problems we talked about before.
So you have a bunch of players and, again, a bunch of projects you want to
build. Each player now has a set function that tells you how much they value if
these projects were built. So maybe my value for the train station and the bus
station together is less than the sum of my value for each alone because I either
take the bus or the train to work.
And my goal is to pick at most k of the projects, where k is some fixed number -- say half of them or whatever -- in order to maximize social welfare, because maybe I only have k bulldozers and I can only build k of them. Right?
And now in this case social welfare, it's the sum over all the players of their value
of the set of projects that's built. And, again, this is highly inapproximable so I'm
going to assume their values are submodular and this models a lot of interesting
cases.
For example, if you forget about the public projects, the problem of designing
which overlay nodes in a network are the best to build for people who are trying
to route things is also kind of an instance of the same idea. So this models a
whole bunch of things.
So it's still NP hard if I assume submodularity, but it also has a 1 minus 1
over e approximation. So I just told you, like, three problems that kind of have
similarities, but they all share a lot of common elements, right?
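The 1 minus 1 over e guarantee for picking at most k projects comes from the classical greedy for monotone submodular maximization under a cardinality constraint; a sketch, again ignoring incentives, with valuations encoded as set functions (an assumed encoding):

```python
def greedy_public_projects(valuations, projects, k):
    """Repeatedly add the project with the largest marginal welfare gain.

    For monotone submodular valuations this greedy is a 1 - 1/e
    approximation; it is not truthful, which is the gap discussed here."""
    chosen = set()
    for _ in range(k):
        def welfare(s):
            return sum(v(frozenset(s)) for v in valuations)
        gains = {p: welfare(chosen | {p}) - welfare(chosen)
                 for p in projects if p not in chosen}
        if not gains:
            break
        best = max(gains, key=gains.get)
        if gains[best] <= 0:  # no project adds welfare; stop early
            break
        chosen.add(best)
    return chosen
```

For example, with two players whose values for a train station and a bus station are additive, the greedy picks whichever single project has the larger total value when k is 1.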
In a sense these are all special examples of a class of welfare maximization
problems. And these problems are everywhere. Whenever you want to allocate
resources and different agents have a stake in the outcome, you get such a
problem.
And it can be stated abstractly as a bunch of players, a bunch of solutions, and a
map for each player for how much they like every solution, and you want to
basically find the solution that maximizes welfare. So you can state it very
abstractly. And this is a large class of problems.
So for all these problems the economic work is basically done, right? There's a
solution that basically gives us all the economic properties we need. It's called
the Vickrey-Clarke-Groves Mechanism. Many of you have seen it before, I
assume. And it's basically the following three-step process: You basically ask
players for how much they value every possible outcome, then you find a solution
that basically maximizes social welfare according to what they tell you. So in the
first step they may not tell you the truth, but you're going to optimize using
whatever they tell you, right?
And then you're going to charge every player his externality, which is basically
according to their purported valuations, you're going to charge every player the
increase in happiness of others if he drops out. So you're going to basically force
him to internalize how much he hurts everybody else.
I'm not going to say this more rigorously, but essentially it's basically you're
charging him for the amount of harm he causes others by being in this problem.
Right?
>>: What is omega?
>> Shaddin Dughmi: Sorry. Omega is the set of solutions of the welfare
maximization problem. For example, in combinatorial auctions it's the set of
partitions of the items among the players. So I'm thinking of it as an abstract set
of possible allocations of resources.
Other questions?
Right. So I'm going to charge the player his externality, and it's not too hard to
see that this mechanism is a generalization of the single item auction we looked
at before. And here's a fact about it: It's truthful and it maximizes social welfare.
Why is it truthful? First of all, notice that if it is truthful then it definitely maximizes
social welfare because in the second step it's explicitly using the reports to
maximize welfare according to these reports. So if they're the truth, it's finding
the optimal solution. Right?
Why is it truthful? Well, here's the intuition. Paying the externality internalizes
the welfare of others. So now the result is that my utility and the welfare are
basically bound together. The happier everybody else is, the more happy I am,
because the payments are basically rewarding me when other people are happy
and hurting me when other people are sad.
So if I'm rational I want to optimize my utility, and my utility is bound to the social
welfare so I'm going to bid whatever maximizes social welfare. And since this
mechanism maximizes on my behalf, I'm going to tell it the truth. Right?
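Over the abstract outcome set Omega, the three steps can be sketched directly; here reports are dictionaries mapping outcomes to values, a hypothetical encoding chosen for illustration.

```python
def vcg(valuations, outcomes):
    """valuations[i][o] = player i's reported value for outcome o in Omega."""
    players = range(len(valuations))

    def welfare(outcome, group):
        return sum(valuations[i][outcome] for i in group)

    # Step 2: pick the outcome maximizing reported social welfare.
    best = max(outcomes, key=lambda o: welfare(o, players))

    # Step 3: charge each player his externality on the others --
    # how much happier the rest would be if he dropped out.
    payments = []
    for i in players:
        others = [j for j in players if j != i]
        best_without_i = max(welfare(o, others) for o in outcomes)
        payments.append(best_without_i - welfare(best, others))
    return best, payments
```

On a two-outcome encoding of the single-item example, this reproduces the Vickrey auction: the high bidder wins and pays the second-highest value, and everyone else pays nothing.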
Questions?
Okay. If you're a computer scientist like me, you have a problem with this in that
the second step is usually asking you to solve an NP hard problem like, for
example, in the cases we just talked about. So we can't implement VCG in
polynomial time when the problem is NP hard, when the underlying optimization
problem is NP hard.
Okay. Well, computer scientists have an answer to these kind of problems. We
design approximation algorithms. Basically they're algorithms that compute near
optimal solutions, and we measure their quality using the approximation ratio in the worst-case sense, which is basically the percentage of the optimal welfare on the worst-case input, right? For example, in combinatorial auctions it's 1 minus 1 over e for the best approximation algorithms.
Now, for many problems we have a rich theory of approximation algorithms. We know for a lot of cases what the best approximation you can get is, and we have matching lower bounds that say something like: if P does not equal NP then you can't do better than this.
But there's a problem: most known approximation algorithms can't be converted to truthful mechanisms no matter how smart you are about designing how you pay people. And there's fundamental reasons for that. And
I'm going to touch on why that is in the rest of the talk.
Okay. So now that we have an economic problem and a computational problem,
we want to ask can we combine the two sets of ideas from computer science and
economics to get the best of both worlds. So we have that VCG satisfies the
economic constraint but not the computational constraint, and vice-versa for
approximation algorithms. And now we want a mechanism that's both truthful
and runs in polynomial time and gives us a good solution to whatever problem
we have.
And, again, we're going to measure the quality of a mechanism using its
approximation ratio, and I'm going to ask what's the best approximation ratio I
can get if I want an algorithm that's both truthful and polynomial time.
This was first suggested by Nisan and Ronen as a research agenda, and it's grown a lot
since then.
Questions?
So now this poses a philosophical question, which is: is polynomial time mechanism design any harder than non-truthful algorithm design? Can I always
get -- if I get, like, a 50 percent approximation using an approximation algorithm,
can I always make it truthful and still get a 50 percent approximation in
polynomial time? And that's a philosophical question.
And the best answer you can possibly hope for is a black box reduction that
takes any alpha approximation algorithm, say, for example, combinatorial
auctions or whatever the problem is, and turns it into an alpha approximation
algorithm that runs in polynomial time and is truthful. That's the best that you
could possibly hope for.
So in this talk, like I said, I'm going to focus on truthful mechanisms. I'm going to
try to justify that on this slide. I could do other things. There's other ways to
design mechanisms. There's mechanisms that are good in Nash equilibrium,
there's mechanisms that are good in other solution concepts like Bayes-Nash.
Why I am going to focus on truthfulness? Why restrict myself to truthfulness?
Well, there's advantages. First of all, truthfulness is a very nice solution concept.
It's very worst case, in a sense. It frees the players from reasoning about others.
For example, if I have a truthful mechanism for combinatorial auctions and it gets
a good solution -- according to my measure of worst case approximation ratio -- then that guarantee holds under a very minimal assumption about how
people behave. The assumption is that people are rational. If they're rational,
they're going to tell the truth, so basically I'm done. I don't have to assume
anything about how they're going to reason about what everybody else knows
and how they're going to react to the actions of others. Basically I don't have to worry about what else is going on in the system. I can guarantee to them that
if you tell the truth, your utility is maximized. So it's a very worst case kind of
solution concept.
But there's disadvantages to focusing on truthful mechanisms. Maybe this
assumption is too strong and restrictive. Maybe it's too strong to require that it's
always in the best interest of everybody to tell the truth no matter what anybody
else is doing, right? Maybe I can't get good positive results using this kind of
constraint.
But I have a good answer to that second bullet. Well, it's a non-issue, because in this talk I'm going to talk to you about positive results, how we can get this very desirable yet a priori perhaps too restrictive constraint on what we're going to design.
Okay. So now I'm going to talk about my contributions at a high level. So first let
me set things in context. Consider the state of algorithmic mechanism design circa 2007 to 2009. There are huge gaps between the best polynomial time
approximation algorithms and the best truthful polynomial time approximation
mechanisms.
Here are some examples. There's more. In combinatorial auctions with
submodular bidders, the one we talked about before, the best approximation
algorithm is 1 minus 1 over e, i.e., 63 percent, and the best truthful one was
logarithmic in the number of items. In combinatorial public projects the problem
where you're trying to pick the best k projects to build, it was 1 minus 1 over e versus the square root of the number of items. Even worse, right? And there's many more.
And it was thought that these difficulties, the fact that you can't -- we can't seem
to get truthful mechanisms that are as good as the best approximation
algorithms, these difficulties were thought to be fundamental. And often that was
backed up by theorems. I'm going to elaborate on that soon.
My contribution is to realize that these impossibility results and the place we were
stuck was -- there was a loophole in all these kind of impossibility results, and I
designed mechanisms that closed that gap for these problems that I presented,
and more. And more conceptually, I designed general techniques that you can
use to design approximation mechanisms that get the optimal approximation
ratio.
So here's some highlights of what we were able to do. For example, first, there's
a general result that I got with Tim Roughgarden. It's a black box reduction for a
large class of problems with an FPTAS. So I look at a large class of problems
that models a lot of interesting welfare maximization problems and I say no
matter what the problem is, given black box access to a fully polynomial
approximation scheme, I can make it truthful.
An example of that was the knapsack public projects problem, where we have a cement resource constraint. And a fully polynomial time approximation
scheme, if you're not familiar with the term or if you don't remember it, here's the
definition. I'm not going to read it out.
It's basically the best kind of approximation algorithm you can hope for for an NP
hard problem. Right? So essentially this is kind of a black box result that
converts very, very nice approximation algorithms to truthful mechanisms. We
would like to do it for arbitrary approximation ratios, but we're not there yet.
That's the first step.
Okay. So combinatorial auctions with submodular valuations, it's essentially the
paradigmatic problem in algorithmic mechanism design. It's motivated much of
the research. And we improved the approximation ratio from logarithmic to 1
minus 1 over e for a large subclass of submodular valuations. I'm going to tell you what the subclass is later. But the nice thing is it includes basically all the submodular functions we usually see in this context. It doesn't include things that are kind of not smooth enough. But it's a step, right?
And, again, for combinatorial public projects we get something similar. And it's also one of the paradigmatic problems in the field. We improve the square root of m to 1 minus 1 over e for most submodular valuations. And this I did recently.
Oh, and this one was with Tim Roughgarden and [inaudible] and this is by
myself.
>>: [inaudible] this is one number is bigger than 1, one number is smaller than 1 --
>> Shaddin Dughmi: Oh, I'm sorry. Are you talking about here?
>>: Here and the next one.
>> Shaddin Dughmi: Yeah. So these are all bigger, right? So I'm thinking -- I see. So 1 over that, say, like, to the negative 1. I'm sorry. So I'm using the common notation for approximation algorithms where you use either bigger than 1 or smaller than 1 depending on context. Yeah, my bad. I should have put an inverse -- a negative 1 exponent -- here. You're right. I have to remember to fix that.
All right. So these are some highlights. There's more results, but this is kind of
just a bunch of highlights. So these are big gaps, and we closed them. And now
since we made so much progress in a short period of time it's good to ask why
were we stuck for so long. And there is one idea that basically underlies this progress, and it's the realization that we were stuck because people were focusing on the -- people were focusing on deterministic mechanisms. They weren't using randomization. They weren't using randomized mechanisms, they weren't using mechanisms that use approximation algorithms to compute the allocation, right?
Question?
>>: [inaudible].
>> Shaddin Dughmi: I'm sorry?
>>: You said two things. They weren't using randomization and they weren't
using approximation algorithms?
>> Shaddin Dughmi: No, no, I just said they weren't using randomization.
Correct. That's it.
And there's a good reason for that. A priori, it's not clear why randomization
could help you combine truthfulness and polynomial time. So this is a restricted
computational model, and unlike other restricted computational models, like, for
example, online algorithms, it's not obvious intuitively why randomization could
help you. Why is it that randomization allows you to get good incentives and
polynomial time whereas without randomization you can't? It's not intuitively
clear, right?
And as I said, deterministic mechanisms had strong limitations that were backed
up by these theorems, and often when people get impossibility results for
deterministic mechanisms some people are thinking, okay, I don't see why
randomizations could help. Maybe this is fundamental, right? So that's
essentially why we were stuck.
So here's an example of such an impossibility result. There's many more.
[inaudible] and Singer considered the combinatorial public projects problem, the one where you want to pick k projects to satisfy people, and they showed that even
though this problem has a 1 minus 1 over e approximation algorithm, there's no
deterministic truthful polynomial time mechanism that gets better than a square
root of the number of items approximation. So essentially there were strong
impossibility results for deterministic mechanisms.
And a priori it's not clear that if you allow randomization you would get the 1 minus 1 over e. Right? So that's basically my contribution.
So here's one conceptual contribution. We identified a class of randomized
algorithms that gives us truthful mechanisms that are provably better than the
best deterministic ones and also developed techniques for using it effectively.
And this kind of renewed the hope that we may be able to get general positive
results now. Maybe we can get for large classes of problems or many interesting
problems, that we can always do the best approximation ratio via truthful
mechanisms.
So I'm going to basically split up my contributions into three main bullets. We
showed formally for the first time that randomized mechanisms are better than
deterministic ones. We looked at a problem called multi-unit auctions -- this was with Dobzinski -- and we showed that there's a truthful FPTAS. I'm not going to define
multi-unit auctions. It's basically a variant of combinatorial auctions where the
items are identical.
And then we proved the lower bound, that deterministic mechanisms can't do
better than half. And this was kind of the first separation between deterministic
mechanisms and randomized mechanisms. So this was the first evidence that
we should head down the road of designing randomized mechanisms.
Then the second result I'm going to highlight is joint work with Tim Roughgarden.
We showed that -- we showed the first black box reduction from algorithm design
to dominant strategies truthful mechanism design. We basically looked at a large
class of welfare maximization problems and showed that we can always convert an FPTAS for such a problem to a truthful randomized mechanism that is also an FPTAS where, remember, an FPTAS is an approximation algorithm that is parameterized by an epsilon and gets a 1 minus epsilon approximation, right?
And the class we looked at is called packing decision problems. The knapsack public projects problem is one example, but there's many more, and they model a lot of the
interesting welfare maximization problems we usually see.
So this result I'm not going to highlight in this talk. I'm going to highlight the third
result which is more recent. Hence, I like to talk about it more.
But there's one aspect of this result that I definitely do want to mention because I
think many people would find it interesting.
The way we prove this is via a connection to smoothed complexity -- smoothed complexity, smoothed analysis of algorithms. So if you haven't seen smoothed complexity or if you haven't seen it in a while, here's an informal definition. I'm
not going to get into it too much. An optimization problem has polynomial smoothed complexity if, even though it may be NP hard, I can still find you an algorithm that solves the problem exactly -- maybe it runs in exponential time for some inputs, but it runs in polynomial time in expectation over any kind of small distribution over inputs.
So in a sense a problem has polynomial smoothed complexity if it's hard in the worst case but easy in the average case, where I'm using average in a very, very limited sense. I say if I have an instance and I perturb it a little bit, it now becomes easy.
So a problem has polynomial smoothed complexity if in a sense the hard instances are measure zero. Right?
So this was a concept first introduced by Spielman and Teng. It's become very popular.
Questions?
Right. So if a problem has polynomial smoothed complexity, we show that -- I'm
sorry, yes?
Okay. So here's how we prove our black box result. The outline of our proof is
two points. There's an old result that's not due to us, by [inaudible] and Teng, and they show that if a problem has an FPTAS, then it has polynomial smoothed complexity in a rigorous sense that I didn't define very carefully.
What we do is the second part. We show that if a problem in the class we
consider has polynomial smoothed complexity, then it has a truthful FPTAS, a truthful randomized FPTAS. Right?
>>: [inaudible].
>> Shaddin Dughmi: So there's -- here's what it is. There's an algorithm, an exact algorithm, for the problem such that if you take any input and perturb it, the algorithm runs in polynomial time in expectation over the perturbed input. It's exact on every instance, but it runs in expected polynomial time over any small distribution.
Any questions?
So they go from FPTAS to polynomial smoothed complexity. What we do is we go back. We say if it has polynomial smoothed complexity then it has a truthful FPTAS.
That's all I'm going to say about this result. But I'd love to talk about it offline if
anybody wants to see it. There's a lot of nice geometry in it.
Okay. The result I'm going to talk about is going to be this third set of results.
We introduced a new technique based on convex optimization for designing truthful randomized mechanisms. And we called this technique convex rounding, and it's basically a way of rounding linear programs and other mathematical relaxations that gives you truthful mechanisms.
So essentially the idea is if you have a linear program and you have a way of
rounding the linear program, it may not -- you may not be able to build a truthful
mechanism on top of that. But I say if you can make your rounding algorithm
convex in a sense, now you can, and now you can try to think about whether you
can make rounding algorithms with this nice property.
And using these ideas, we made progress in the most studied problems in
algorithmic mechanism design. In particular, combinatorial auctions and
combinatorial public projects. And this is the technique I'm going to focus on in
this talk, in the technical portion of this talk.
So here's a theorem we get using this technique. We show that there's a 1 minus 1 over e truthful mechanism for combinatorial auctions for most submodular valuations, and also for combinatorial public projects, which is the last problem I talked about, we get a 1 minus 1 over e approximation for a large subclass of submodular valuations, improving the best known from square root of the number of items.
Okay. So now I'm going to start the technical portion of the talk. There's going to
be two segments of the technical portion of the talk. First I'm going to tell you
what randomized mechanisms we're going to design, what are these randomized
algorithms that we're going to design, what do they look like, and then I'm going
to tell you how to use convex rounding to design them and get good results.
Okay. So when I say I want to use randomization, what do I really mean? I
mean that the process that computes the allocation and the payments basically
depends on internal random coins that are being flipped by the algorithm. And
I'm going to say such a mechanism is truthful if when a player doesn't know how
I'm going to flip my coins he maximizes his expected utility by telling the truth.
Maybe after the coin is flipped, he regrets what he did, but in expectation, if he
doesn't know how I'm going to flip the coin, he's going to tell the truth.
It's often called -- these mechanisms are often called truthful-in-expectation or
truthful for risk-neutral if you're an economist. But we're just going to call them
truthful loosely. Right. So let's remember VCG, right? VCG is the mechanism
that works from an economic perspective, but sometimes it doesn't run in
polynomial time.
So remember it's this three-step process. You ask for valuations, you find an
optimal solution and then you charge externality.
Here's the idea. I would like to plug in an approximation algorithm here.
Sometimes I can't do that because what I plug in here, it doesn't preserve
truthfulness of the whole thing. Right?
But there's some special approximation algorithms that I can plug in here and the
whole thing remains truthful. Right?
And these are the algorithms I'm going to use. And I'm going to tell you what
they look like in a second. These algorithms -- these algorithms are called
maximal in distributional range algorithms. It's a class of approximation
algorithms. It's approximation algorithms with a special structure.
Not all approximation algorithms are in this class. In fact, unless you're working
in mechanism design, why would you design such an approximation algorithm?
They tend to be harder to obtain than other approximation algorithms.
Okay. So remember we have a set of -- a problem is defined by a feasible set.
Let's say my feasible set lies in some euclidean space. Let's say my feasible set
is these black points, right?
And I want to also think about distributions over feasible solutions. So when I
draw a grey point, I really mean it's a distribution over black points. For example,
this gray point right there is -- if I draw it like this, I'm going to think of it as being
the distribution that outputs this black point with 50 percent and this black point
with 50 percent.
Obviously when I draw them like this it's not clear -- it's not unique what the
distribution I mean is. But I'm just going to do it as a caricature to just get the
intuition across.
So once I -- so I have this feasible set and I have distributions over it. A
distributional range is basically a subset of these distributions over feasible
solutions. Right?
You can also think of it as a set of lotteries over the feasible solutions. An
algorithm is maximal in distributional range if it fixes a distributional range, it fixes
a set of distributions. And when I say "fixes," I mean it commits to it before it
even sees the player valuations. Before it sees the objective function it says I'm
only ever going to output a solution in this set. Right? And then once it gets the
objective function, i.e., the player values, it finds the optimal solution in its range.
So it's a restricted approximation algorithm. It's an approximation algorithm that
commits up front to how it's going to lose -- how it's going to lose optimality. It's
going to say I'm going to commit up front to only outputting this restricted set of
solutions, and once it gets the objective function it's going to find the best solution
in its restricted range. Make sense?
So here's an example of how you would design such a distributional range. So
you have [inaudible] of combinatorial auctions.
You have a bunch of items. You want to split them up between players.
Let's think of a distribution over allocations. I'm going to call a distribution a
product lottery if it's a distribution that I get by the following association: I
associate with every item j and player i a probability that i gets j, and then I
assign each item independently according to these probabilities. Right?
Every set of fractions x sub ij for players i and items j defines a different product
distribution over allocations, right? So every time you give me a set of fractions
x sub ij and I give each item to a player with that probability, I get a distribution
that I'm going to call a product lottery. Right?
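Sampling one allocation from a product lottery is straightforward; here's a minimal sketch (the function name and the sequential sweep over players are my own illustration, assuming the marginals for each item sum to at most 1):

```python
import random

def sample_product_lottery(x, items, players, rng=random.random):
    """Sample one allocation from the product lottery with marginals
    x[i][j] = probability that player i gets item j (assumes the
    marginals for each item sum to at most 1; names are illustrative)."""
    allocation = {i: set() for i in players}
    for j in items:
        r = rng()
        cum = 0.0
        for i in players:
            cum += x[i][j]
            if r < cum:  # player i wins item j
                allocation[i].add(j)
                break
        # if r >= cum, item j stays unallocated
    return allocation
```

Each item is assigned independently, which is exactly what makes the distribution a product lottery.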
So this set of product lotteries is a set of distributions, so it is a distributional
range, but it's not a very useful one because the set of deterministic partitions is
also a product lottery if I set these x sub ij's to 1s and 0s, right?
So essentially it's the same problem. It's not any easier. The optimal solution is
always a product lottery. So finding the best product lottery is NP hard if the
problem is NP hard, right?
And a surprising fact which I'm going to show you in this talk is that if I commit up
front that no player gets an item with probability more than 1 minus 1 over e,
suddenly I put the problem in P. If I tell you to find me the best distribution over
solutions where you assign every item independently but nobody gets anything
with more than probability 1 minus 1 over e, that problem is polynomial time
solvable, and it's related to the original problem. In a sense, solving this smaller
problem exactly gives you 1 minus 1 over e approximation to the original
problem.
And so now if I have an algorithm that basically commits to a range up front and
then finds the optimal solution, I could plug it into VCG to get a truthful
mechanism. Why? Well, it's essentially just -- I define a different optimization
problem that I'm solving optimally. I told you to commit up front to a subset of
the feasible solutions and to solve exactly over those, so it's basically simply
VCG on a smaller problem that I solve optimally.
What's the upshot? I reduce truthful mechanism design to designing
approximation algorithms of this maximal-in-distributional-range variety, and now if I
can do this, I don't need to worry about incentives or payments or game theory
anymore. It's basically just a restricted computational model where you have to
design approximation algorithms with certain structure. Once I do that everything
else can be generically taken care of. Make sense?
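To make the reduction concrete, here is a toy sketch of a maximal-in-distributional-range mechanism with VCG-style externality payments over a small explicit range (all names are hypothetical, and the brute-force loop over the range is only for intuition; a real range is exponentially large and needs the structure described in this talk):

```python
def midr_mechanism(range_of_lotteries, expected_values):
    """Brute-force MIDR mechanism with VCG externality payments.
    `expected_values[i][d]` is player i's (reported) expected value for
    lottery d in the committed range. Illustrative toy only."""
    n = len(expected_values)

    def best(excluded=None):
        # Welfare-maximizing lottery in the range, optionally ignoring one player.
        def welfare(d):
            return sum(expected_values[i][d] for i in range(n) if i != excluded)
        return max(range_of_lotteries, key=welfare)

    d_star = best()
    payments = []
    for i in range(n):
        others_at_dstar = sum(expected_values[k][d_star] for k in range(n) if k != i)
        d_minus_i = best(excluded=i)
        others_opt = sum(expected_values[k][d_minus_i] for k in range(n) if k != i)
        payments.append(others_opt - others_at_dstar)  # externality imposed on others
    return d_star, payments
```

Because the range is fixed before seeing the values and the mechanism optimizes exactly over it, this is just VCG on a smaller problem, hence truthful in expectation.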
All right. So now you know what the class -- what the algorithms I'm going to
design look like. They're going to fix a set of distributions over feasible solutions.
It's not going to be everything. And then once you see the objective function,
you're going to find the best solution in the range you committed to up front.
Okay. So now I'm going to look at combinatorial auctions and I'm going to tell you
what this convex rounding technique is and how we can use it to get algorithms
of this sort.
Remember combinatorial auctions: I have these items, and I want to split them up
between the players. This is all something we've seen before. Like I said, I'm
going to assume the player valuations are submodular. This is the formal
definition. It's just diminishing marginal returns.
Remember that without truthfulness there's a 1 minus 1 over e approximation
algorithm. That algorithm is due to Vondrák. If you haven't seen it before,
there's a much simpler algorithm that's a
half approximation and it's just a greedy algorithm where you basically go
through the items one by one and you give the item to the player who has the
most additional benefit from getting it. And using the fact that there's diminishing
marginal returns, a charging argument of the same sort we usually see with
greedy algorithms shows that this is a half approximation. Right?
And the 1 minus 1 over e approximation is basically just a fractional version of
that. It's a lot more complicated. It's a very nice, brilliant algorithm, but it's
basically a fractional version of this greedy descent.
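The greedy half-approximation just described can be sketched in a few lines (my own illustration, with coverage-style valuations for the example; this is the simple greedy, not Vondrák's fractional algorithm):

```python
def greedy_allocation(items, valuations):
    """Greedy half-approximation for submodular welfare: go through the
    items one by one and give each item to the player with the largest
    marginal gain. Diminishing marginal returns plus a charging argument
    gives the 1/2 guarantee."""
    bundles = [set() for _ in valuations]
    for j in items:
        gains = [v(b | {j}) - v(b) for v, b in zip(valuations, bundles)]
        winner = max(range(len(valuations)), key=lambda i: gains[i])
        bundles[winner].add(j)
    return bundles

def coverage(covers):
    """Build a coverage valuation from an item -> covered-set map."""
    return lambda S: len(set().union(*(covers[j] for j in S))) if S else 0
```

For instance, with two players whose items cover overlapping parts of a ground set, each item goes to whoever currently gains the most from it.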
Okay. Now, with truthfulness, like I said, the best known is logarithmic, due to
Dobzinski and Nisan. And now it's been the challenge problem in algorithmic
mechanism design for a while to find a constant factor approximation that is
truthful.
And here's the theorem that we get. We don't quite answer it for all submodular
valuations, but we come very, very close. We show that for a large subset of
submodular valuations there's a 1 minus 1 over e approximation for combinatorial
auctions. And this is the best we can get in polynomial time. It was the first
optimal result for combinatorial auctions with restricted valuations.
The set of valuations we prove this for is not all submodular functions, like I said.
It's the set I'm going to call Matroid Rank Sums. This may ring a bell, but if it
doesn't, don't worry about it. Because proving it for this class is difficult -- we
do prove it -- in this talk I'm going to prove it for a smaller, simpler, and more
intuitive class of valuations called coverage functions.
But why is this interesting? Why is it interesting that I show you this? Well, I
claim it's very interesting because coverage valuations are basically the canonical
example of submodular functions, and they inherit all the algorithmic hardness
that is true of submodular functions, so I'm not cheating. This is a subclass of
submodular functions, we have all the same hardness results, and, moreover,
unless you're really, really into submodularity, if you've ever seen an
example of a submodular function, it was probably in this class.
So let me define what valuations -- what coverage valuations look like. Let's say
I'm this guy and there's two items in play, a volleyball and a Ping-Pong paddle.
My happiness is basically related to kind of a measure space of happiness. So I
get one unit of happiness if I get to play in the sun, I get one unit of happiness if I
get exercise and I get one unit of happiness if I get to socialize and play sports
with people at work. Right?
I actually only have two hobbies, volleyball and Ping-Pong, so it's actually very
realistic.
So with each one of the items I associate a subset of my happiness space. For
example, if I get the volleyball I get to play in the sun because I like to play beach
volleyball, and I also get some exercise. If I get the Ping-Pong paddle I get some
exercise and I get to play with my friends at work because there's a Ping-Pong
table in the Gates Building at Stanford.
But if I get both of them I actually don't get twice the amount of happiness from
exercise. I only have a fixed amount of time to exercise, so I only get one unit of
happiness from exercise whether I get one of these items or both of them.
Right? So that's, in a sense, why it's a coverage function, right?
And this is kind of caricature of an example, but you can think of if you're a
company and you're bidding on, for example -- if you're a telecommunications
company and you're bidding on radio spectrum licenses, you can think of the
number of people you get to reach through their advertising. Whereas, you don't
care if you reach them on two different frequencies, all I care about is the fact
that you reach them or not. So that's what a coverage function is. Make sense?
Questions?
It's going to be very important that you understand what these are.
>>: [inaudible].
>> Shaddin Dughmi: Yes. I think I just did via caricature, but here's what it is. A
coverage function on a set of items is basically a function that's defined by an
abstract measure space, and every item maps to a subset of a measure space
and your value for a set of items is the measure of the union of the measure
spaces associated with these items.
>>: [inaudible] either I get something and I get value 1 and if I cover it a
thousand times I still just get value 1?
>> Shaddin Dughmi: Exactly. That's exactly right.
More questions?
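The definition just given fits in a few lines of code, using the volleyball/Ping-Pong example as the measure space (finite, uniform measure; the names come from the example above):

```python
def coverage_value(bundle, covers):
    """Coverage function: each item maps to a subset of an abstract
    (here finite, uniform-measure) happiness space; the value of a
    bundle is the measure (size) of the union of those subsets."""
    covered = set()
    for item in bundle:
        covered |= covers[item]
    return len(covered)

# The volleyball / Ping-Pong example from the talk:
covers = {
    'volleyball': {'sun', 'exercise'},
    'pingpong': {'exercise', 'work'},
}
```

Note that getting both items gives value 3, not 4: the shared 'exercise' unit is only counted once, which is exactly the diminishing-returns behavior.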
Right. So how am I going to design a maximal distributional range algorithm for
this problem? I could tell you how I do it specifically for combinatorial auctions,
but it's actually going to be simpler and more instructive to zoom out for a second
and look at approximation algorithms more abstractly, and this will give you
intuition as to why this is a general technique that we can use for other problems.
So for many problems in approximation algorithms the best algorithms we get are
based on relaxing the problem into some kind of mathematical relaxation like a
linear program, a semi-definite program or what have you, solving this
mathematical relaxation and then rounding the fractional solution, right?
And like I said, convex rounding is a class of rounding algorithms such that,
when you can design such a rounding algorithm, the algorithm that uses this
rounding can always be made truthful.
So let's look at what algorithms based on relaxation and rounding look like. Let's say
you have an optimization problem where you have a bunch of solutions and you
want to maximize objective v over these solutions. Let's say your objective is
linear. Right?
So what does this mean? I want to find the black point that goes as far as
possible in the direction of v, right? We see this a lot whenever we can encode
solutions of a problem as points in euclidean space, right?
Maybe this problem is NP hard. Maybe it's hard for me to find the black dot that's
furthest along in the direction of v. But maybe this is easier. So the common
thing -- relaxation, what it does is you define basically a nice convex set that
surrounds your feasible solutions, usually a linear program or a polytope, and
then you basically find the optimal solution of this linear program. So you'd say
what's the best point in this convex set according to the direction v. Well, it's
maybe this one, right?
And then once you solve this linear program you round the solution to an integer
solution. So if I got this -- if I got this solution as my optimal solution to the
integer program -- to the linear program, maybe I round it to this integer solution.
If I got this one, maybe I round it here. If I got this solution, maybe my rounding
algorithm is randomized, so maybe I output this solution with 50 percent and this
solution with 50 percent. Right? So this is what algorithms based on relaxation
and rounding usually look like.
Questions? I assume many of you have seen this kind of thing before.
So these algorithms generally can't be made truthful no matter how smart you
are about defining the payments. Well, then you can say well, okay, why not?
Because solving the linear program is clearly, in a sense, maximal in range or
optimal over some problem, right? So why can't we just plug it into the
Vickrey-Clarke-Groves mechanism and get the right thing?
And that's a good point. And you're right. But the problem is the rounding step.
The rounding step is not structured enough -- it's so unstructured that it's
impossible to define payments that make the whole thing truthful. And this is
why. Imagine that you solved the linear program optimally and then you round
the solution that you get. Maybe after you round it, the rounding loses a lot,
right? Maybe it loses the whole approximation ratio. You end up with a solution
that's, say, 1 minus 1 over e worse than the solution of the linear program,
whereas you would have rather gotten a sub optimal solution to the linear
program and then rounded that and then maybe the rounding there is not so bad.
So in a sense, if I'm a player who even is already on board, if I'm already on
board and I want to help you maximize social welfare, I would still want to lie to
lead you to an optimal solution to the linear program so that the -- to exploit the
fact that the rounding algorithm is actually not so bad there. Make sense? So --
>>: [inaudible].
>> Shaddin Dughmi: A suboptimal solution to the linear program which is
actually -- does not lose too much in rounding. I'm sorry, did I say optimal?
Okay. My bad.
All right. So in a sense, rounding changes the quality of a solution in an
unstructured way. And this unstructured change in the quality of the solution is
usually impossible to compensate for using payments. And this can be proved.
But I'm not going to get into it.
Okay. So here's a simple solution I'm going to propose. I'm going to propose
that you try and solve an optimization problem that a priori looks unsolvable. I'm
going to ask you to suspend disbelief.
Imagine if -- so usually we solve the LP and then we round the solution we get.
I'm going to instead incorporate the rounding into the objective function. So
remember r is a map from the fractional solutions, this polytope p, to the actual
solutions, to, for example, the partitions of the item.
And this map is randomized, right? So let's say I ask you to find me the fractional
point x that has the best rounded outcome. I want to find the point x such that v
transposed times the rounding of x is maximized. Right?
I'm going to put an expectation here because this function is random, right?
So in a sense I'm incorporating the rounding into the objective function, and I'm
asking you to find the fractional solution with the best rounded image. Make
sense?
Here's the insight. It's a very simple insight, but it's going to be very powerful.
Finding the optimal solution of this optimization problem and then just sampling
the rounding algorithm there is clearly going to give you a maximal in
distributional range algorithm. Why? You're simply optimizing over all distributions over
solutions that could possibly arise by rounding a fractional solution by definition.
Make sense?
Questions?
>>: When you say you optimize [inaudible] --
>> Shaddin Dughmi: So if xr is the optimal of this optimization problem --
>>: [inaudible] rounding function of x, that is also within your control --
>> Shaddin Dughmi: Right. I'm saying -- let's say I give you an r, a certain r, for
which you can solve this optimization problem. So, I'm sorry, I should have said
I'm fixing an r in this slide.
More questions?
So, yes, if I could solve this optimization problem, here's a maximal distributional
range algorithm, solve this optimization problem, then round the solution that you
get. By definition, the solution that I find is the thing that's going to give me the
best rounded result, so this algorithm is going to be maximum over its range,
right?
Okay. So how do we use this idea? Let's remember combinatorial auctions.
Here's a fractional relaxation of combinatorial auctions. For every player i and
item j I'm going to make a variable x sub ij that tells me whether player i gets item
j, right?
For every item, I don't want to give it to more than one person, right? That's the
first constraint. And everything is positive, for obvious reasons. And, for
example, here's part of a feasible solution of this fractional relaxation. Let's say
this volleyball, half of it goes to this guy, a quarter goes to this lady and a quarter
goes to the other guy, right? And if you have fractions for every one of these
items that sum to 1, then you get a solution to this kind of fractional relaxation,
right?
How would you round such a thing? If I give you a point in this polytope, how
would you round it? The obvious thing to do is to assign every item
independently according to these probabilities. For example, for the volleyball,
the obvious thing to do is to give it to this guy with probability a half, this lady
with probability a quarter and this guy with probability a quarter, and do it by
sweeping, in a sense, so that you only give it to one person.
Yes?
>>: What's the objective function?
>> Shaddin Dughmi: I'm intentionally suppressing that. I'm intentionally
suppressing it, and it's going to be the expected value of the rounding. Right?
I'm going to define the objective function in reference to the rounding. Right?
So I could round such a fractional point using the obvious way. In other words,
I'm going to output the product lottery with these marginal probabilities. That's a
rounding function.
But like we said before, finding the best result of this rounding function must be
NP hard. Why? Because if these are all 1s or 0s, then I could always get every
deterministic partition of the items by setting these to 1s or 0s. So if I ask you to
find the best possible thing I could get by rounding, it's no easier than the original
problem.
So I'm going to need to do something smarter, right? Okay, that's actually what I
should have said on this slide.
Right. So according to that, I want to find the best product lottery. Like I said, we
have them set to 1s or 0s so it's NP hard. I should have said this on this slide. I
apologize.
All right. So clearly this rounding algorithm is not what we want. And in fact, this
difficulty of finding the best rounded outcome is general. We see it everywhere
in approximation algorithms if we try to do this, because all natural rounding
algorithms usually round integer solutions to themselves. In a sense, the integer
points of the polytopes are usually fixed points of the rounding map, right? If I
give you a fractional spanning tree, you round it a certain way, but if the fractional
spanning tree is actually integer, you're not going to give me another one, you're
going to give me the same one if you're running a normal approximation
algorithm.
So we need something that breaks this property, that breaks the property that
integer points are fixed points. So we're going to design better behaved rounding
algorithms that break this property and give us a tractable optimization problem.
Okay. So here's the class of rounding algorithms that I'm going to use. I'm going
to call a rounding algorithm r, a convex rounding algorithm if the objective
function of this optimization problem is concave in the variables of a fractional
solution. In other words, the expected value you get by rounding a fractional
solution is a concave function of the variables of the solution. Right?
If I give you a rounding algorithm r that's convex, this whole thing is a convex
optimization problem that I can solve with the ellipsoid method. So now the
problem -- if I can give you a convex rounding algorithm for combinatorial
auctions, finding the best result of the rounding over all fractional solutions is a
convex optimization problem that I can solve in polynomial time.
No reason to believe -- a priori, there's no reason to believe that interesting
rounding algorithms like this exist because, well, they must look unnatural. They
must round integer solutions to different integer solutions, which is kind of
strange.
But they do exist, and here it is for combinatorial auctions. I'm going to call this
Poisson Rounding. Usually, if you're just thinking about the obvious rounding
algorithm, you'd give the volleyball to this player with probability x12 and so on
and so forth.
But instead of giving every item to the player with the fraction of the item that he
has, instead of giving item j to player i with probability x sub ij, give it to him with
probability 1 minus e to the minus x sub ij. Seems weird. Why would you ever
do this? This is less than or equal to x sub ij, right?
Why would you do this? There's really no intuition better than seeing the proof.
>>: So would these numbers add up to less than 1 so there's some probability
that --
>> Shaddin Dughmi: Exactly. With some probability I will throw an item in the
trash. And that's essentially -- you can think of it as a way of punishing people in
such a way to align the punishment with the report so that they want to tell the
truth. That's exactly right.
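Here is a sketch of Poisson Rounding as just described (illustrative code, not from the paper): each item goes to player i with probability 1 - e^(-x_ij) instead of x_ij, and with the leftover probability the item is thrown away.

```python
import math, random

def poisson_round(x, items, players, rng=random.random):
    """Poisson Rounding sketch: allocate item j to player i with
    probability 1 - exp(-x[i][j]); since these sum to less than the
    marginals x[i][j], each item is discarded with positive probability."""
    allocation = {i: set() for i in players}
    for j in items:
        r = rng()
        cum = 0.0
        for i in players:
            cum += 1.0 - math.exp(-x[i][j])
            if r < cum:
                allocation[i].add(j)
                break
        # with probability 1 - sum_i (1 - exp(-x[i][j])), item j is trashed
    return allocation
```

Since 1 - e^(-t) <= t, the per-item winning probabilities still sum to at most 1, so the same sweep works.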
So why is Poisson Rounding convex? Why does it give me a concave objective
function? Like I said, I'm going to prove it for coverage functions.
Here's the proof. Take a fractional solution where x sub ij is the fraction of
item j given to player i. Poisson Rounding gives j to i with probability 1 minus
e to the minus x sub ij.
Now I'm going to set up some notation. For player i, let s sub i be the set of
items he gets, right? And I want to show you that the expected value of the
social welfare is concave in the variables, right? So this is a random variable
that is the result of the rounding. And the rounding is defined according to
these x sub ij's.
Make sense?
By linearity of expectation and the fact that a sum of concave functions is
concave, all I need to show you is that for one player, his expected value for
what he gets is a concave function of the fractional solution. Right? So I'm
going to break it up player by player.
So let's remember this player, where there's a volleyball and a Ping-Pong
paddle in play. Let's say the fractional solution gives him x1 fraction of the
volleyball and x2 fraction of the Ping-Pong paddle, and then we have the
following coverage pattern, right? Like we said before, if I'm this
player, what's my value?
My value is the probability that I get to play in the sun plus the probability that I
get to exercise plus the probability that I get to play at work, where you get these
two, if I get the volleyball and I get these two if I get the Ping-Pong paddle and I
get the three if I get both, right?
So it's sufficient -- by linearity of expectation, it's sufficient to show that each
probability here is concave. So let's look at them one by one. Let's look at the
easy ones first.
What's the probability that I get to play in the sun? Well, it's the probability that I
get the volleyball because the volleyball is the only thing that allows me to play in
the sun, and that's one minus e to the minus x1, which is a concave function of
x1.
Similarly, what's the probability that I get to play at work? It's the probability that I
get the Ping-Pong paddle, which is 1 minus e to the minus x2. Again, concave.
The interesting case is this one, which is covered by two things. Right? What's
the probability that I get exercise? Well, I get exercise whether I get the
volleyball or the Ping-Pong paddle, which is one minus the probability that I get
neither. What's the probability that I don't get the volleyball. It's e to the minus
x1. What's the probability that I don't get the Ping-Pong paddle? It's e to the
minus x2. So it's 1 minus e to the minus x1 times e to the minus x2, which you
can write as 1 minus e to the minus (x1 plus x2). And if you remember any
convex analysis, this is a concave function.
And I claim that this is a proof by example and this is going to be general no
matter what the coverage pattern is.
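As a quick numeric sanity check of this concavity claim for the two-item example, one can spot-check midpoint concavity of the expected value at random points (purely illustrative):

```python
import math, random

def expected_value(x1, x2):
    """Expected coverage value of the volleyball/Ping-Pong example under
    Poisson Rounding: Pr[sun] + Pr[work] + Pr[exercise]."""
    return ((1 - math.exp(-x1))          # sun: need the volleyball
            + (1 - math.exp(-x2))        # work: need the Ping-Pong paddle
            + (1 - math.exp(-(x1 + x2))))  # exercise: need at least one

def midpoint_concave(f, a, b, tol=1e-12):
    """Spot check: a concave f satisfies f((a+b)/2) >= (f(a)+f(b))/2."""
    mid = tuple((u + v) / 2 for u, v in zip(a, b))
    return f(*mid) >= (f(*a) + f(*b)) / 2 - tol
```

Each term is of the form 1 minus e to the minus an affine function of the variables, which is concave, so the sum is concave; the check below just confirms it numerically.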
>>: [inaudible].
>> Shaddin Dughmi: Yes. It's going to be 1 minus e to the minus x1 times e to
the minus x2 times e to the minus x3.
Make sense?
And if you don't remember the composition rules for convex functions, this is
the composition of an affine function with a concave function.
>>: When you have more complex combinatorial structures, you always just give
[inaudible].
>> Shaddin Dughmi: Exactly. You get 1 minus e to the minus the sum of the
variables for the items that cover that part of the measure space, right? The
covered-up part of the measure space, right?
Right. I claim this is general.
>>: You said you would give us proof that it works for things that are slightly
more complicated than coverage functions or --
>> Shaddin Dughmi: Oh, no, I didn't say I was going to give that. I said we do
prove it there. That's a slightly more general class that includes Matrid Rank
functions and things like that.
>>: But isn't the composition of the proof identical to this one?
>> Shaddin Dughmi: No, it's actually -- we have to do a lot more work. We use
matroid contraction and things like that.
Okay. So now here's a second lemma. I claim that if you round this way, you're
not going to lose too much in the approximation ratio. You're only going to lose
a factor of [inaudible] 1 minus 1 over e.
The proof of this lemma is not so interesting, so I'm just going to give you
some very rough intuition. Because we're maximizing over the set of outputs of
the rounding algorithm, it suffices to show that there's always a fractional solution
that I can round to get something within 1 minus 1 over e.
And in fact it's not too hard to see that if I round the integer solution
corresponding to the optimal solution, it's not going to get much worse. So
there's always something that's not much worse than your range. So if you're
finding the best output in your range, you're fine.
Why is it that rounding any integer point doesn't make it too much worse? Well,
it's by diminishing marginal returns. If you have submodular valuations, it
implies that you're losing each item with probability at most 1 over e, right?
And by diminishing marginal returns, the aggregate loss is going to be at most
1 over e.
And, yeah, that's essentially the intuition, but it's not very interesting.
Okay. So using these two lemmas we get the theorem, we get that there's a 1
minus 1 over e approximate randomized mechanism for combinatorial auctions.
Yeah, this is some stuff I said about it before.
And, also, we can use this technique to solve the third problem in my examples,
combinatorial public projects, and we get a similar result.
All right. So that's it. To summarize, positive progress in algorithmic mechanism
design was lacking. We didn't have good deterministic mechanisms and it was
not clear how to use randomness, so my contribution is developing techniques
that allow us to use randomness in a way that reconciles truthfulness and
computation. And we get a bunch of positive results, one that's general for
problems with an FPTAS, and some general techniques as well, like convex rounding.
And future directions are how far do these ideas go? Do we get general positive
results? Remember, the Holy Grail is we can always -- we want to always
convert an approximation algorithm to a truthful mechanism without losing in the
approximation. Can we do this for nice, large classes of problems? That's an
interesting question. That's, I think, the most interesting question for future work
here.
And philosophically, that really asks the question: can we get truthfulness
without loss in polynomial-time computation? Are computation and incentive
compatibility at odds with each other, or are they compatible? That's, I think,
a very interesting philosophical question.
And that's all I have. Thank you for listening.
[applause]
Yes?
>>: So [inaudible] if all the xij are very small, then the approximation ratio gets
better?
>> Shaddin Dughmi: Yes.
>>: About the randomization. So in your [inaudible] this will be a barrier because
people might be worried about volatility there, they might not be as [inaudible].
>> Shaddin Dughmi: Uh-huh.
>>: What happens if players engage in your randomized auction tried to hedge
by placing the -- so you have this auction all set up, every person has a lottery,
and then say they then try to take their lottery and go to maybe an auctioneer or
someone who is [inaudible] and pays that lottery by a deterministic payment.
>> Shaddin Dughmi: So you're really asking: do these ideas work when there's
no risk neutrality? And the answer is no. And that's a very interesting
direction to see -- does risk averseness or risk-seeking behavior make this
problem a lot harder? And I don't know.
>>: [inaudible] suppose there is some risk-neutral player around, you could say
that the auctioneer has big resources and he's risk-neutral [inaudible] and then
can the player -- and the other players are, say, risk-averse.
>> Shaddin Dughmi: The players are risk-averse and the auctioneer is
risk-neutral. Right.
>>: Can the players then go to the auctioneer and try to hedge their bets so they
replace their lotteries by their expectations?
>> Shaddin Dughmi: I'm not sure -- what do you mean by replace their lotteries
by their -- by definition, they can't. By definition of the problem. Are you allowing
kind of side transfers or something like that?
>>: Yes, allowing side transfers.
>> Shaddin Dughmi: That I don't know. I'd have to think about that. According
to the model where I just design the mechanism and this is what the auctioneer is
going to do, they would have no incentive to change. If you expand the model, I
don't know. Whether, for example, they might want to hedge between each
other, that's also an interesting question.
>>: I'm just trying to understand this distinction you have between the
deterministic and the [inaudible]. If the players are allowed to hedge via side
payment with the auctioneer, then although what they get back is random in the
sense that they don't know if they'll get the volleyball or --
>> Shaddin Dughmi: So what is the utility model of the -- so you're going to
assume that the auctioneer has a utility, which is what? So when you say hedge
with the auctioneer, I assume you have a concept of whether the auctioneer
would like to participate in a side payment as well, right?
>>: Right.
>> Shaddin Dughmi: So if you assume that the auctioneer's utility is his
payment, then he's trying to maximize revenue, then these things fall apart
because it's not a revenue maximizing thing. So the short answer is no, using
these ideas.
Other questions? All right.
[applause]