>> Nikhil Devanu Rangarajan: Hi. Welcome everyone. It's my pleasure to have Jason
Hartline speak today. Jason is no stranger to the area. He graduated from the University of
Washington under the guidance of Anna Karlin. And he was also a researcher at the
Silicon Valley Lab. And currently he's a professor at Northwestern University. He's
going to talk about approximation in mechanism design.
>> Jason Hartline: Thanks, Nikhil. Good. So this is going to be a survey talk. I'm going
to talk mostly about three papers. I think that the time right now is a very nice time for
mechanism design in computer science, because there's been a lot of things coming
together. And the way I think about things now is totally different from how I thought
about it two years ago, and I think that's a good thing.
And so these are some of the papers that help me think about things differently. And the
last one isn't even mine. I wish it was. It's a great paper.
It's one of Tim's and his student's papers. So anyway I'm going to talk about sort of how
I think about mechanism design now and draw results mostly from these three papers
but also there will be some other results I'll reference.
So mechanism design is in general asking the following question: How can a social
planner or optimizer achieve a desired objective when the participants of the system that
the planner is trying to design for have private preferences, and these participants are
selfish and may manipulate their preferences to try to get a better outcome for themselves.
And I'm a theoretician. So I do mechanism design theory, which tries to do the following
three things: I would like the theory of mechanism design to be descriptive, meaning
you take your theory and you say, oh, what does the theory predict. Then you look at
what happens in real life and you hope that similar things happen in real life. And you
say my theory predicted what happens in real life.
You'd like it to be prescriptive in the sense that maybe someone wants you to design a
mechanism for some situation that no one's designed a mechanism for before, and you
want to be able to take that theory and use that theory to figure out what to do in this
new setting we've never talked about before.
And the last thing is something I think is especially important. You want it to be conclusive.
You want to be able to sort of pinpoint salient features of good mechanisms or salient
features of the model that you're designing for that are really important to really get right.
So you want to do these three things. The informal thesis I have in this talk is that
approximate optimality often can do these three things very nicely. And it even does
them very nicely in cases where exact optimality fails to do some of these things.
So this talk, I think, really motivates why I care about approximation and what kinds of
approximations I care about. Because I care about approximations of these three
things.
Okay. So I'm going to start with an example problem, which is going to be both
motivational in terms of techniques, and I'm going to use it later in
the talk to actually prove theorems about auctions.
Okay. So a gambler is playing the following game. We have the
following optimization problem: a sequence of N games. Okay. And each game has a
prize. And the prize for game i is drawn from some distribution F_i.
And the gambler knows these distributions exactly. And the gambler gets to play these
games as follows: On the first day he plays the first game. He realizes the first prize. He
draws it from a distribution. And he has to choose whether he wants to keep this prize
and quit playing the game, or discard the prize and go on to the next day.
So on day i, he draws prize i from its distribution. And he gets to choose to keep that
prize and go home or give up the prize and continue on to the next day. So it's a
question of when should this gambler stop?
Okay. Good. So how should our gambler play? And in this talk I'm going to sort of pose
a problem, talk about what's optimal, and then show how approximation says some
perhaps nicer things about what we should be thinking about doing. So here's an
optimal strategy. I'm going to pick a threshold T_i for each stage i, and in stage i, if the
prize is above this threshold, I stop.
And I'm going to solve for these thresholds with backwards induction. Right? Because
on the last day my threshold should be 0. I take any prize I get.
>>: What's your criteria for optimality?
>> Jason Hartline: The gambler is trying to stop with a prize that's good. So he wants to
maximize the expected prize. He's risk neutral,
so it's the expected value of the prize he gets that he's trying to
maximize.
So on the last day, having not stopped so far, I draw a prize from the distribution. It's got some
value. I take it. Right? Clearly. Good. So on the second to last day I can calculate
the expected value I would have had on the last day, and I should take this prize only if
its value is above the expected value on the last day. Right?
So then on the third to last day, well, I know what the expected prize I would get from
the second-to-last day on is, and so my thresholds are increasing in each day as we go
backwards in time. And I have thresholds. It's solved by backwards induction. Good.
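To make the backward induction concrete, here is a minimal sketch in Python. The distribution interface (a .sample() method) and the Monte Carlo estimate of the continuation value are my own illustrative assumptions, not from the talk.

```python
# Sketch: optimal thresholds by backward induction (illustrative).
# Assumes each distribution object exposes a .sample() method; the
# continuation value E[max(prize, t)] is estimated by Monte Carlo.
def expected_max_with(dist, t, samples=100_000):
    return sum(max(dist.sample(), t) for _ in range(samples)) / samples

def optimal_thresholds(dists):
    n = len(dists)
    thresholds = [0.0] * n  # on the last day, take any prize
    # The threshold on day i is the expected value of continuing,
    # i.e., the value of playing days i+1..n optimally.
    for i in range(n - 2, -1, -1):
        thresholds[i] = expected_max_with(dists[i + 1], thresholds[i + 1])
    return thresholds
```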
So let's discuss -- this is optimal. Let's discuss this optimal strategy.
It's complicated. I have N different mostly unrelated thresholds. They're going to be
decreasing as time goes on.
I have a hard time making general conclusions about what these thresholds are besides
that there are these thresholds and they decrease as time goes on.
There's a question?
>>: Is backward induction the same as dynamic programming?
>> Jason Hartline: Yes. It's not very robust in the following sense. Let's suppose that I
swap the order of two of the games; then everything changes. I have to redo my entire
calculation from the latest prize to the beginning. Right?
Or what if I changed the distribution strictly above one threshold or strictly below
a threshold? Things change. Okay. And it's not very general in the sense that I don't
learn too much about changes to this game like the things that follow a similar kind of
high level idea. They're prizes, trying to get a good prize, et cetera. We don't learn too
much. So optimal strategies give great performance but they don't do too much in terms
of insight, understanding, generality and robustness and all these things we might like.
So I'm going to turn to nonoptimal strategies. I'm going to consider simple
approximations this gambler could try. And the simplest
thing you can think of is let's have a single threshold strategy. I fixed in advance a
number T. And I'm going to play each game in order, and at the first game whose value is
above T, I take that prize and quit. That's my strategy. This is obviously not optimal:
even if on the very last day the prize is below T, I don't take it. That's the strategy I'm
talking about. Okay. So here's the theorem about such a strategy.
The theorem is that if I choose T carefully, such that the probability that even on the
last day I still don't take a prize, and I get no prize at all, is exactly one-half, then the
expected value the gambler gets from playing this threshold strategy is at least the
expected maximum value over 2.
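As a sketch of how one might compute this T from the known distributions: the product of the CDFs at T is the probability that no prize exceeds T, so we can bisect for the point where it equals one-half. The callable-CDF interface here is a hypothetical assumption.

```python
# Sketch: find T with Pr[no prize is above T] = prod_i F_i(T) = 1/2,
# by bisection. Assumes each F in cdfs is a callable, continuous CDF
# (a hypothetical interface, for illustration only).
def never_stop_probability(cdfs, t):
    p = 1.0
    for F in cdfs:
        p *= F(t)
    return p

def single_threshold(cdfs, lo=0.0, hi=1e9, iters=100):
    for _ in range(iters):
        mid = (lo + hi) / 2
        if never_stop_probability(cdfs, mid) < 0.5:
            lo = mid  # stopping too often: raise the threshold
        else:
            hi = mid
    return (lo + hi) / 2
```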
>>: Here you are assuming the F_i can be different?
>> Jason Hartline: In the problem setting, these F_i may be different for different
games.
>>: That's true for the theorem also.
>> Jason Hartline: That's true for the theorem also. It's the same setting. Okay. So I
want to note that actually this is an upper bound on the optimal strategy. Right? Clearly
you'd love to get this, but you can't, because you don't have foresight. This is called
the prophet inequality, because the gambler is doing nearly as well as a prophet: a
prophet who knows the future would just take the best prize, right? We don't know the
future. So we're stuck playing some strategy. So this is saying that the gambler, with
his simple strategy, is within a factor of 2 of the prophet.
So let's discuss this solution. It's simple. I need one number. Great. I claim it's
pretty conclusive. It says clearly you're trading off something: you're trading off the
probability that you never stop and get nothing against the probability that you stop when
the prize is not too good, right, when there's a better prize coming later.
You trade these probabilities off at exactly one-half. Okay. It's very
robust. What if I change the order of prizes? That doesn't change the threshold I used
and therefore it doesn't change the guarantee at all. The maximum is still the same. I
still get a factor of 2. So the bound I get, if I make changes to my game, doesn't change
at all.
Okay. So I could change the order, or I could change, suddenly change the distribution
above the threshold I pick or below the threshold I pick, nothing would change.
Okay. And it's actually going to be a very general theorem that I'm going to use later to
apply to some problems in mechanism design. The reason why it's so general is
because as you see the proof of this theorem, it's not going to matter -- it's going to be
invariant to what I call the tie-breaking rule. And the tie-breaking rule is the following.
Suppose there are multiple prizes above the threshold. Which one does the gambler
get? Well, he gets the first one that he comes to, right? That's our tie-breaking rule. If
there are multiple prizes above the threshold he gets the first one. So lexicographic is
the gambler's tie-breaking rule. We'll apply the same theorem in cases where we do
tie-breaking in a different way. And since the theorem doesn't need the tie-breaking rule
we're just going to plug in the theorem as a black box. Okay? Good. So I'm now going
to prove this theorem. It's going to be the most strenuous proof I'll do in the talk and it's
actually very, very easy. So let's let q_i be the probability that prize i is below the
threshold. So the probability that the gambler never stops,
we'll call that x, is just the product of the q_i: the probability that every single prize is
below the threshold and we never stop.
Okay. So my strategy is to get an upper bound on the expected maximum prize, get a
lower bound on the prize the gambler gets, and then plug in x equals one-half, which is
how I said the threshold was chosen, and that will prove the theorem. So that's what I'm going to do.
Okay. So to start off, the expected max is less than or equal to T plus the expected
maximum positive amount of a prize above the threshold. Okay. So let's look at two
cases. Let's assume that the best prize is below the threshold. This is clearly an upper
bound on the maximum prize in that case, when the best prize is below the threshold.
And now let's suppose the highest prize is above the threshold; then I get T plus the
difference between the best prize and T. That's what this is.
So this holds with equality in the second case and is an upper bound in
the first case; therefore it's an upper bound in general.
Okay. Expectations of maximums -- I'll replace the expected max with the expected sum,
because the sum of the expectations is always at least the expected max. Good. Okay. Now I'm
ready to go for my lower bound.
Okay. So what is the gambler getting? Well, he stops with probability
1 minus x, the probability that there's some prize above the threshold. When he stops, he
gets at least T, because that prize's value is above T. So he gets T in that case.
But he's also going to get the additional value above the threshold of whatever that prize
happens to be. But that's kind of a pain to figure out. So I'm not going to
keep track of all of the extra value that he gets. I'm going to only count the
expected value that he gets above T when there's exactly one prize above T. So if
there's more than one prize above the threshold, I'm going to assume he just gets the
threshold.
Okay. And if there's exactly one, then he gets that prize. Okay. So this is where I say
it's sort of invariant to the tie-breaking rule. Because here the tie-breaking rule wouldn't
matter: if there's only one thing above the threshold it doesn't matter how I break ties,
and if there are multiple things I get at least T. Good. So let's write out what I mean by
the expected prize if there's only one prize above T. I sum, over all prizes i, the
expected positive part of that prize above T, conditioned on all the other prizes being
less than T, times the probability that all the other prizes are less than T.
So now let's simplify the second part of this expression. This is, of course, just the
product of all q_j for j not equal to i. But that's going to be bigger than x, because x,
additionally, has q_i multiplied into it. And q_i is less than or equal to 1, right? So I'm
going to factor this x out of the whole thing.
And now let's look at this expectation. I have the expectation of some random variable
and I'm conditioning on something that's based on other random variables, not the
random variable I'm talking about, so I can drop the conditioning.
So I have this equation. And now I want you to compare this equation to that equation
and note what happens by plugging in x equals one-half.
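For reference, here is the whole chain written out; this is my reconstruction of the slides' math from the spoken proof, with v_i the prize in game i, x the product of the q_i, and (z)^+ denoting max(z, 0).

```latex
\mathbf{E}\Big[\max_i v_i\Big]
  \;\le\; T + \mathbf{E}\Big[\max_i (v_i - T)^+\Big]
  \;\le\; T + \sum_i \mathbf{E}\big[(v_i - T)^+\big]

\text{Gambler}
  \;\ge\; (1 - x)\,T
      + \sum_i \mathbf{E}\big[(v_i - T)^+ \,\big|\, v_j < T \;\forall j \neq i\big]
              \,\Pr\big[v_j < T \;\forall j \neq i\big]
  \;\ge\; (1 - x)\,T + x \sum_i \mathbf{E}\big[(v_i - T)^+\big]

\text{With } x = \tfrac12:\qquad
\text{Gambler} \;\ge\; \tfrac12\Big(T + \sum_i \mathbf{E}\big[(v_i - T)^+\big]\Big)
             \;\ge\; \tfrac12\,\mathbf{E}\Big[\max_i v_i\Big]
```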
Okay. We're good. Any questions? That's the hardest proof we're going to do. Good. I
give this talk to economists, and for economists I need to explain why I care
about 2-approximations. For this audience I probably don't have to explain it, but I
think it's useful to remember, because if you have to talk to economists you have to tell
them why we care. Okay? And remember one of my goals: my goal was conclusive
understanding of what's important for mechanism design. Okay. And constant
approximations help me separate the salient features from the sort of
unimportant details.
If I want true optimal, I have to pay attention to everything. Right? So can I do okay not
paying attention to everything? And the things that I have to pay attention to to get a
good approximation are the important things, and the things I don't have to pay attention
to aren't the important things.
And so this lets you determine what in the model is important, or what in the kinds of
mechanisms you're considering is important for doing a good job. Okay. So in other
words, I can ask: is X important for mechanism design? If there's a mechanism without X
that is a constant approximation -- if I can get a good mechanism without taking X into
account -- then X is not important.
If I can't get a good mechanism without taking into account X, then X is very important; I
had better think about it. I'll give two examples that come from mechanism design here.
So one is competition between agents. Usually you think of mechanism design as
running auctions, agents competing for stuff, whatever. Is competition important?
My answer is no. I can get constant approximations without competition. We're going to
see that later. Are transfers important? Let's suppose we want to maximize social
welfare: I have an item, I want to give it to one of, say, n players. If I can't charge
players money, then I have to just give it to a random player, and if there's only one
high-value player and everyone else is low, then I'm only getting a linear approximation.
Without transfers, I'm in big trouble. Transfers are very important for mechanism design.
Competition is not so important. Yes, sir?
>>: So a lot of times you can look at some of the algorithms you design for mechanism
design, the transfers come from a dual to an LP. And so in that sense, I mean, this
is sort of an oblivious algorithm in the sense that it doesn't actually depend on the
orderings of these different distributions. Is the reason you're able to avoid transfers that
it's not using sort of the dual and you don't have to create compensation between two
different players?
>> Jason Hartline: I'll tell you the answer. I'm sorry, the competition, you said -- why don't
I care about competition? I do care about transfers.
>>: From the transfer standpoint, in the sense you can take something like min cost sort
of flows and sort of the way that you create the dual is you create this other sort of
payment scheme which then makes every edge sort of fair, right? And there are ways to
interpret this in mechanism design for some useful ends, right? So here the sort of
optimal solution for this stopping problem, you know, one could write it based on an LP
and then sort of the dual would come -- you might be able to interpret it as being a
transfer. Is there any sense in which, because you're not solving this problem optimally,
because you're not worrying about the dual, that's why you're able to get around transfers?
>> Jason Hartline: I'm not getting around transfers.
>>: It's just the long range -
>> Jason Hartline: I said without transfers, I'm in big trouble. I said without transfers I
can't get better than an N approximation. Transfers are very important for mechanism
design. Competition, I said, wasn't important.
>>: Okay.
>> Jason Hartline: I'll tell you why -
>>: In your example of no competition, if people have different valuations, how do you get
a constant factor? You have to figure out what player i's valuation is. So how do you get
a constant factor, unless you assume that everybody has the same value distribution?
>> Jason Hartline: No. We'll see. We'll see. This is a conclusion of the talk:
competition is not so important. And I want to compare that to transfers, where
transfers are obviously important. It shouldn't be obvious right now why competition is not
important, so let's see. Let's see. Yes?
>>: It seems implicit that the constant factor is not too large, that you don't do a billion-to-1.
>> Jason Hartline: That's important if I want to actually tell someone how to design a
mechanism. And I just said I want to understand what's important. And I argue that
things that -- I argue that constant approximation gives me a good lens on what's
important and what's not important, even if that constant is a billion. Now, I don't know
of any settings where you get a constant approximation in mechanism design and it's a
billion. I know of no such settings, the constants I know of are between 2 and 10 for tight
analyses.
And so I don't know of anything that's not in that small range, but you might argue that
10 is too big. And I agree. 10 is too big. The point, though, is that the mechanism from
theory that gets the constant approximation isn't the end of the story.
The point of that mechanism was to identify what was important, and those are the things
you pay attention to first. And then if you're designing a mechanism in a real scenario you
add some sort of maybe ad hoc improvement over it to do better in the real-life
scenario you're actually dealing with.
But I want to know what's important first. And that's what I hope theory does. And then
there's some applied theory which is going to tell me how to actually use this theory in
real life to get the rest of the way there to a good solution.
Okay. Good. So here's an overview of the things I want to talk about today. This is going
to be a talk by example. I'm going to show you a
number of examples of constant approximations telling you lots of interesting things
about mechanism design. And they're going to be in fairly different scenarios.
Okay? So the first scenario I want to talk about is single-dimensional mechanism
design. Okay. Single-dimensional mechanism design is like you have one item to
sell and each player has a single, one-dimensional value for that one item. Right, it's
single dimensional.
The next thing I want to talk about is multi-dimensional mechanism design. There might be
multiple items for sale, a different value for each item, and those different values are of
course a multi-dimensional object which specifies your preferences.
The last thing which we may get to if we have time is prior independent settings. So the
first two take the standard economics approach of assuming the designer knows the
distribution from which the agents' preferences are drawn, and the last approach does the
more computer-science-y thing, saying: what if we don't know anything? Can we design
one mechanism that's going to be good in many scenarios?
Okay. And what I think is really nice and important about the work that computer
scientists are doing in mechanism design is this: for the first problem, economists think
they've solved it, and their solutions are fantastic and give really good intuition, and we're
going to hopefully add to that with approximation.
But there aren't good solutions for these two problems coming from economics. So
we're able to say something where there really isn't a nice story in existence at all.
Okay. I'm going to focus here on profit maximization, meaning I think of the mechanism
designer as a seller and he wants to make the most money. And I want to point out that
that is an example of an objective. And everything that I talk about can be thought about
for other objectives as well. And it's important to talk about an objective that's not social
welfare, though, because social welfare has some special coincidences that make it a
little bit too easy. So this is the more general objective to talk about.
>>: Just one comment, compared to what Alexander said at the beginning about the first
model question. You mentioned a solution that changes in time; often the expected
maximum is not a good objective. The expectation
is not a good objective [inaudible] because some of the distributions have big tails: there
are some very unlikely events that might contribute a lot to the
expectation. But if you want to devise a strategy, that might not be the
objective; you might have a completely [inaudible] utility function instead. So
thanks.
>> Jason Hartline: And I'm going to take nonlinear utility functions and punt on them,
because they present a lot of problems. In mechanism design you
can often trade off risk between players, and if you start with utility functions that
have some sort of risk attitude, that are not risk neutral, then trading off the risk between
players allows you to gain more revenue, say, and that is a little bit troublesome. So it
really changes the question. And I don't want to change the question. So I'm not
going to deal with that. Okay. So all I want to say is if you have non-linear utility, then it
completely changes the kind of questions you're asking and you're in another area of
work.
Okay. I want to talk about single-dimensional Bayesian mechanism design. And so the
example problem I have is the following: a single item for sale. There are N buyers. The
buyers' values are drawn from a product distribution: player i's value is drawn from
distribution F_i, and these are independent draws.
And my question is come up with an auction that has the highest expected revenue.
That's my question. So as I'm going to do in this talk I'm going to talk about what's
optimal and talk about approximation to that and why I like the approximations better.
Good. To talk about what's optimal here I could spend an hour, but I'm going to spend
two minutes instead. There's really fantastic theory due to Roger Myerson.
His paper is brilliant and worth reading by everybody. But I'm going to do it in
two minutes.
So we start with the following statement. I want to know what I can get in
equilibrium, what can be the equilibrium of a mechanism.
An allocation rule can arise in what's called a Bayes-Nash equilibrium if and only if the
allocation rule in equilibrium is monotone. Meaning, take an agent and compare what
happens to them in two cases: the case where they have a low value and the one where
they have a high value. They must be more likely to win when they have the high value in
the equilibrium. Otherwise the high-value agent could pretend to have the low value,
following the strategy of the low value, and that's not an equilibrium then. Right? So the
equilibrium must have this monotonicity property.
Let's talk about revenue, because our objective is going to be profit. So I want to define
the following revenue curve. Think about taking a single agent i. Agent i's value is
drawn from a distribution. I want to think of the revenue I get as a function of the
probability with which I sell to them. Now, it's much easier to think about the revenue I
get as a function of the price I offer them. If I offer them a price of $10, then they buy
with the probability that their value is above $10. So I get 10 times the probability their
value is above 10, which is 10 times 1 minus capital F of 10, where F is the distribution
function.
So I'm actually going to write that in terms of the probability they buy instead of the price:
q is the probability they buy, so F inverse of 1 minus q is the price I would have had to
offer them so that they buy with probability q.
Okay. So this gives me revenue as a function of q, the probability I sell to the guy. I
illustrated one such revenue curve: q goes from 0 to 1, from probability 0 to probability
1 that I sell to this guy. That's a revenue curve.
Okay. I want to define the virtual value of a player to be this formula -- ignore the formula.
It is the derivative of the revenue curve. Okay. And if you've had Economics 101 you
know that to maximize revenue you should look at the marginal revenue, the derivative of
the revenue. Right? You set marginal revenues equal and that's going to maximize revenue.
If you haven't, that's okay. That's what you do. Okay. So this virtual value is the
marginal revenue, and to maximize revenue you want to maximize the virtual value.
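The formula on the slide isn't shown in the transcript, but it is presumably the standard Myerson virtual value; with F_i the value distribution, f_i its density, and q the sale probability:

```latex
R_i(q) = q \cdot F_i^{-1}(1 - q),
\qquad
\phi_i(v) \;=\; v - \frac{1 - F_i(v)}{f_i(v)}
          \;=\; \frac{d}{dq}\, R_i(q)\,\Big|_{\,q = 1 - F_i(v)}
```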
So I'm going to define the virtual surplus to be the virtual value of the person who wins
the auction. And the theorem then is that expected virtual surplus is equal to expected
revenue. Okay. So now, knowing that virtual surplus is equal to revenue, I need to
somehow maximize virtual surplus subject to monotonicity. Right? To be implementable
in equilibrium it had better be monotone, and my objective, revenue, is equal to the
virtual surplus.
So the way to think about this is the following. Actually, let me add one more point.
Okay. I'll call the distribution F_i regular if and only if the revenue curve is concave,
which is the same thing as the virtual value being monotone, because the virtual values
are the derivative of the revenue curve, and concavity of the revenue curve implies
monotonicity of the virtual values.
So let's assume we have regular distributions. And I want to maximize revenue subject
to monotonicity. Here's how I'll do it: forget monotonicity. Just maximize virtual
surplus. And let's hope that it's monotone. If it happens to be monotone by luck, then
we're done.
Well, if the distribution is regular, then the virtual values are monotone. That means an
agent with a higher value has a higher virtual value. So if I maximize virtual surplus,
meaning give the item to the agent with the highest positive virtual value, that's going to
be a monotone rule: if an agent was winning with some value, and they increase their
value, they get a higher virtual value, so they still win. So it's monotone.
So the theorem is this: for regular distributions the optimal auction sells to the bidder
with the highest positive virtual value. That's true because expected revenue is
expected virtual surplus and this rule is monotone.
Which is what I needed for it to be possible in equilibrium. Okay. And there's a nice
corollary of this statement, which is what happens in the IID case, where the bidders are
drawn from the same distribution. Then the virtual value functions are the same for
every bidder, right? So the bidder with the highest virtual value is also the bidder with
the highest value, and the condition that the virtual value has to be at least 0 means
their value has to be at least the inverse virtual value of 0.
Right? So for IID regular distributions the optimal auction sells to the highest bidder
subject to the constraint that the bid is at least the inverse virtual value of 0. And this
auction is otherwise known as the Vickrey auction with a reserve price of the inverse
virtual value of 0. Meaning you have a reserve price at that value and you sell to the
bidder with the highest value above that. That's it; that's my five-minute review of
optimal auction design, and for the purposes of this talk you don't have to understand
all of this. That's fine.
What you should understand is that revenue is given by optimizing these virtual value
functions. Basically there's a mapping from values to virtual values and you optimize
that and then you're optimizing revenue.
That's what you should understand. Okay. So what are optimal auctions? Let's review
from the last slide. For IID regular distributions, the optimal auction is Vickrey with a
reserve price, a natural, simple auction. An eBay auction is a Vickrey auction with a
reserve price: you can post a minimum, and the person who bids the most above that
wins. That's essentially the Vickrey auction with a reserve price. So these are very
natural. They happen all the time.
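A minimal sketch of this auction in Python; the reserve, which for the IID regular case would be the inverse virtual value phi^{-1}(0) precomputed from the known distribution, is taken as an input, and the interface is illustrative.

```python
# Sketch: Vickrey (second-price) auction with a reserve price. For IID
# regular distributions, reserve = phi^{-1}(0) makes this Myerson's
# optimal auction. Interface is illustrative, not from the talk.
def vickrey_with_reserve(bids, reserve):
    eligible = sorted((b for b in bids if b >= reserve), reverse=True)
    if not eligible:
        return None, 0.0  # no one meets the reserve; item goes unsold
    winner = max(range(len(bids)), key=lambda i: bids[i])
    # Winner pays the second-highest eligible bid, or the reserve if no
    # other bidder met it.
    payment = eligible[1] if len(eligible) > 1 else reserve
    return winner, payment
```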
In general, it's sell to the bidder with the highest positive virtual value. Okay, what does
that mean? It's complicated, and very dependent on the distribution. So this IID regular
case seems kind of special. And the question is: are the settings people run auctions in
always going to be IID and regular?
Probably not. They're probably more general. In this general case, we know what the
optimal auction is, but no one really runs it; they instead run the auction that works in the
IID regular case, which is not optimal in the general case.
So the state is that basically we know what's optimal, and the optimal thing is too
complicated. And so people run the thing that's not optimal, even in cases where they
should be running some more complicated thing.
Okay. That's the state of things. But that motivates a question which I
want to answer now, which is: well, we're running these reserve price mechanisms even
in settings where they're not optimal. So are they close to optimal?
Okay. So what's the answer to that question? Here's the theorem that answers it.
The theorem is that the Vickrey auction with a constant virtual price, chosen so that
the probability that the item remains unsold is one-half, is a 2-approximation.
And that theorem statement should look kind of familiar. And the proof of it is the
following: I'm going to apply the prophet inequality to virtual values and use a particular
tie-breaking rule: tie-break by value. So what do I mean?
I want to maximize the virtual value of the winner because maximizing the virtual value
of the winner is equal to maximizing my profit. I'm going to impose a reserve price in
virtual value space: a constant threshold in virtual value space, which is a constant
virtual price.
Okay. Such that the probability that no one wants to buy anything is
exactly one-half. And then what is the Vickrey auction with a bunch of reserve prices? If
you make your reserve price then you can win; otherwise you don't win. If multiple
bidders make their reserve prices, then we take the highest value and that person wins.
So subject to making your reserve price, we break ties by the highest value. So the
person with the highest value wins. So let's be more specific here.
In the prophet inequality, we had prizes, we had a threshold, we had an expected
maximum we wanted to approximate, and we had the prize from the threshold-T strategy.
When I'm talking about the Vickrey auction with reserve prices, what do I have? I'm
going to analyze everything in virtual value space.
So I want to maximize -- I have virtual values for each of the coordinates, and I'm going
to post a single virtual price. I'd like to approximate the optimal revenue which is the
maximum positive virtual value.
But instead I get the Vickrey revenue with the reserve price. Okay. So -
>>: For the inequality, don't you need to know the number of bidders now, to [inaudible]
set the probability at half?
>> Jason Hartline: Absolutely. And in the original gambler's game I knew how many
days I was playing.
>>: And the distributions.
>> Jason Hartline: And I knew the distributions.
>>: So you have some number of people coming even if it's a simple setting.
>> Jason Hartline: That's not my theorem. My theorem is you know the distributions.
>>: But it's not a satisfactory theorem, such as [inaudible], not always, because
you need to know the number of people.
>> Jason Hartline: You also have to know the exact distributions of the people.
>>: Yes. Let's say I even know the people have the same distribution; then [inaudible]
will give me a constant factor, whereas if I need to apply this theorem I need to know the
exact number of people.
>>: Can't you do something like -- so suppose all the distributions were the same and
you just didn't know the number of people. Could you get a constant factor in that case,
just by using like a doubling sort of trick?
>>: Then the optimal will be a single price anyway, if you apply this
theorem.
>>: It's already optimum.
>> Jason Hartline: I'll have some discussion of this. And I'm not going to discuss this
point. I'm not too concerned by this issue. So let's discuss and complain about this
solution. Okay. It gives me a 2-approximation.
You do have to know the distributions completely, and that's, I believe, a more serious
condition than knowing the number of bidders. You have to know their exact
distributions, right? That's a more serious complaint. A constant virtual price actually
gives me different actual prices, right? Because if the distributions are different, when I
invert the constant virtual price I get a different actual price for each bidder. Now I'm
talking about running an auction with a different price for each player.
That seems already kind of not very good. Okay. It is simple, though, because if you
have the distributions, coming up with these virtual prices is fairly simple. And it is
robust in the following sense: if you had collusion, it wouldn't change anything.
Right? Because an arbitrary tie-breaking rule works. And what is collusion? Collusion
is I might lower my bid so you win when we're competing against each other. But
actually I don't care if there are multiple people above the threshold; I don't care which
one wins. I'm happy if he just pays his threshold.
So the result I get, since it doesn't care about tie-breaking, doesn't care about things like
collusion and such. Good. So forget even running the Vickrey auction. Just post these
prices and let people come in whatever order they want and take the item if they want it,
first-come, first-served. That also satisfies the basic theorem, because you're just
breaking ties in a different way.
>>: Each one has its own price.
>> Jason Hartline: Each one has its own price. That's fine. I complained about it
already. Okay?
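Here is a minimal sketch of the mechanism as a posted pricing; the per-bidder virtual value inverses and the virtual price t, chosen so the prophet-inequality condition holds, are assumed to be precomputed, and all names are illustrative.

```python
# Sketch: one actual price per bidder from a common virtual price t, then
# first-come, first-served. phi_invs[i] inverts bidder i's virtual value
# function; t is chosen so Pr[no bidder clears their price] = 1/2.
# Everything here is an illustrative interface, not from the talk.
def posted_prices(phi_invs, t):
    return [phi_inv(t) for phi_inv in phi_invs]

def run_posted_pricing(values, prices):
    # Bidders arrive in an arbitrary (even adversarial) order; the first
    # bidder whose value meets their price buys at that price.
    for i, (v, p) in enumerate(zip(values, prices)):
        if v >= p:
            return i, p
    return None, 0.0  # item goes unsold
```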
Good. So let's talk about competition. Competition doesn't matter. Right? The
fact that other people are there -- they're not driving the price up. I can put people in any
order, and the first person who wants to buy in this worst possible order buys.
Right? And the theorem still says I've got a 2-approximation. I have a 2-approximation
without any competition.
>>: The first person who buys, he just pays the reserve price?
>> Jason Hartline: Yes, that's the point. The point is I'm happy just getting the reserve
price from any of the bidders who are above the reserve price.
>>: But you have already included the competition when you satisfied the inequality
at one-half.
>> Jason Hartline: Yes, but -- but I'm not actually looking at the bidders to satisfy that
equality. I use my knowledge of the distribution instead. In other words, with my
knowledge of the distribution, I can sort of -
>>: What about competition then? What does the competition mean?
>> Jason Hartline: What does competition mean? If I have two bidders in an auction,
they compete with each other. This isn't competition: if I have some other bidder in mind
who might be there or might not be there, and I set your price higher, that's not
competition; that's me thinking about them and setting a higher price for you.
So my point -- the obvious point here is that, yes, you can simulate competition by
raising prices. And that is what's happening here. You're simulating competition. So
there actually isn't any real competition happening.
>>: But if you are competing with another person, to satisfy that [inaudible] you will
increase everybody's price. The inequality will force you to raise
prices.
>> Jason Hartline: It depends on whether the bidders are actually there or whether the
designer just thinks they're there. This says the designer has some distributions for a
bidder showing up and he sets prices. The bidders that actually show up are not
competing with each other.
The bound is guaranteed regardless of whether they compete with each other or not.
They're not driving the price up at all.
>>: But if those distributions show up -- if those distributions don't show up,
do you still have the guarantee?
>> Jason Hartline: I'm saying if -
>>: You need the bidders with that distribution coming to you.
>> Jason Hartline: I do need this -- this is true if the bidders' values are drawn from the
distributions. However, if those bidders choose not to compete with each other, it's still
fine; I still get the bound.
But let's address some of these complaints. Okay. One big problem was these prices I
came up with were different for each bidder. That was annoying.
They also depended on the distributions. That's unavoidable. So here's a second
question you might ask, which is: can you get a good approximation with an anonymous
reserve price? This is the mechanism that worked well in the IID regular case.
In the IID regular case, an anonymous reserve price is great. In the non-IID regular case,
for instance, it's not optimal. I just gave you, on a previous slide, a 2-approximation
for it, but I used a different reserve price for each bidder. That was a pain. What about
an anonymous reserve price?
So here's a theorem: you can get a 4-approximation. The proof of this theorem is via a
more complicated usage of prophet inequalities. But it's prophet inequalities
nonetheless.
I'm not going to give you the proof. It's just a prophet inequality kind of proof. Good.
The bound isn't tight. I believe the answer is 2. I've never been able to prove it. I'm
kind of embarrassed about it, so don't tell anyone.
I believe this theorem justifies the wide prevalence of anonymous reserve prices.
Suppose anonymous reserve prices were often bad, like really bad. Then people
would be using other auctions.
Right? But this shows that actually no matter what, using a single reserve price is as
good as some much more complicated thing.
Okay?
>>: How bad can it be in the non-regular case?
>> Jason Hartline: Logarithmic. Okay. So you can generalize these results beyond single
item auctions. And I have one statement that I want to make about that, which is that if
you have nonidentical distributions, possibly irregular, then oftentimes posted-pricing
mechanisms, where I come up with a different price for each player and let them arrive in
any order they want, give a good approximation to the optimal
revenue I could have gotten from having them all there at once in one mechanism
running some complicated Myerson-like procedure with virtual values.
So this says you don't have to have everyone at once. You don't have to have them
competing with each other. You can just post prices, find prices that are good enough
with strange feasibility constraints, et cetera.
And the general proof of this kind of approach is: we know what the optimal mechanism
is. It maximizes virtual surplus, i.e., the derivative of the revenue curves,
and then you show that reserve-pricing mechanisms are virtual surplus approximators.
I think the biggest challenge area from this line of work is addressing non-downward-
closed settings, settings where you may have to serve people with negative virtual values.
This theorem only applies in downward-closed settings, and a downward-closed setting
is one where, if I have some feasible set of winners, I can just reject some of them and
that's fine.
So any subset of a feasible set is feasible. And that means that I would never sell to a
bidder with negative virtual value, because I always at least post a reserve price of the
inverse virtual value of 0. So I never sell to a bidder with negative virtual value. But in
non-downward-closed settings, it might be worthwhile to sell to someone with negative
virtual value so you can also sell to someone with high positive virtual value.
These kinds of settings, we don't know how to analyze at all.
Okay. So that's the single-dimensional setting. I want to now move
on and talk about the multi-dimensional setting. And the example I want you to think
about is this simple multi-dimensional pricing problem. I have a single unit-demand
customer, meaning there are a bunch of items available and they want to buy at most
one item. So it's like you want to buy a house: there are a bunch of houses available and
you want one of them.
Okay. Your value for each house may be different, and I'm going to draw the values at
random from a known distribution. Okay. I would like to put prices on these houses to
maximize the revenue I get as a seller, when a buyer draws values from the distribution
and takes his favorite house, meaning the house whose value minus price is the highest.
So what's optimal?
>>: There's one seller who has all these N houses?
>> Jason Hartline: There's this one seller with all these N houses and one buyer who
wants a house, whose values are drawn from a distribution. Good. How do I optimize
this? Well, I can take my distribution, my feasibility constraints and my incentive
constraints, throw them into my favorite solver, and push the solve button, and some
answer comes out.
In some cases this is polynomial time. But I still don't find it a very useful, conclusive
solution, even if it is polynomial time, because I haven't understood the problem at all.
So there's little conceptual insight. In some cases it's polynomial time; in most cases it's not.
So that's problematic. So I want to approximate it. The first thing you have to deal
with when trying to do approximation: I want to approximate something, and I don't
understand what that optimal thing is. So we need to get an upper bound that we do
understand. So I want to compare two problems.
One is this unit demand pricing problem which I just mentioned. And the other is the
auction problem we talked about for the first 30 minutes of this talk.
Okay. And I want to ask you: if the distributions are the same, which seller would you
rather be? Would you rather be the seller running the optimal auction -- where we
know what it is; it's Myerson's, maximize virtual surplus -- or would you rather be the
person who has to post prices on these houses, where the
single buyer takes the house maximizing value minus price?
It's the same informational structure, so I can ask this question. It's an apples-to-apples
comparison. And the theorem is that I'd actually rather be the auctioneer. What's the
intuition for this? The intuition is that actually having these bidders compete with
each other drives the price up, and that should help my revenue. You can actually take
any pricing here and design an auction around that pricing that uses the competition to
get more money.
And so that then allows me to get the following theorem: if I post prices on these houses
with a constant virtual price, such that the probability that the person doesn't buy any of
my houses is exactly one-half, then I get a 2-approximation to this, which is an upper
bound on that. Okay. How does this work? Well, it's a prophet inequality, and we break
ties with the agent's utility function: they're choosing the house to maximize v_i minus
p_i, so we're breaking ties that way.
And it's the same proof as before. Okay. So that was a case where I have just one agent,
and I want to generalize this to more agents, where there's actually contention
for the houses, et cetera. And that's actually easy to do. So in many unit-demand
settings, I can come up with what I call sequential posted pricings, which are the
following: I come up with prices for each agent and each house, say.
And then I allow the agents to arrive in any order and take whichever house they want,
while supplies last. And worst case over the order, this gives me a good approximation.
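A sketch of a sequential posted pricing, with an illustrative price matrix prices[i][j] for agent i and house j; how such prices are computed is the subject of the reduction sketched next.

```python
# Sketch: sequential posted pricing for unit-demand agents. Agents arrive
# in an arbitrary order; each takes their favorite remaining house
# (maximizing value minus price) if that utility is positive, while
# supplies last. Names and the price matrix are illustrative.
def sequential_posted_pricing(order, values, prices):
    sold, allocation = set(), {}
    for i in order:
        best, best_u = None, 0.0
        for j, p in enumerate(prices[i]):
            if j not in sold and values[i][j] - p > best_u:
                best, best_u = j, values[i][j] - p
        if best is not None:
            sold.add(best)
            allocation[i] = (best, prices[i][best])
    return allocation
```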
I sort of want to give you the high-level approach for proving these kinds of theorems.
The first step is to make an analogy to a single-dimensional setting, kind of like we did on
the last slide. I am going to take each multi-dimensional agent and turn them into a
bunch of single-dimensional agents that then compete against each other. Okay. I then
want to say the single dimensional auction problem is going to have more revenue than
the multi-dimensional pricing problem. Why? Because competition increases revenue
in the single dimensional auction problem. And that's a theorem. Then I want to say
that if I'm posting prices, that actually multi-dimensional price posting gives me more
revenue than single-dimensional posted pricing. Why is that? Well, when I'm posting
prices, I'm not using competition. So I shouldn't think that there's really a big advantage
to being in the single-dimensional setting over the multi-dimensional setting: I'm not
using any competition in posting prices. So that's one thing.
In my definition of sequential posted pricing, I said there's an adversarial order. And
there are fewer orders in the multi-dimensional problem than in the single-dimensional
problem: in the single-dimensional problem I could order all agent-item pairs, while in the
multi-dimensional problem I can only order the agents, which means all the items come
together for each agent. So there are fewer orders to worst-case over here.
And then I have to instantiate this reduction by showing that I can come up with a single
dimensional pricing that is a constant approximation to the single dimensional auction,
and this is an easy question to answer because this is in a single dimensional setting
and we understand single-dimensional settings completely. We know what the optimum
is: it maximizes virtual surplus. We need to show the pricings are virtual surplus approximators.
And that's -- we've already by example in the previous part of this talk shown how to do
that in some settings, so I'm not going to do it anymore. That's the high level sketch of
how you show these kinds of results.
Okay. So some discussion. The mechanisms you get out of this are very robust in the
sense that they don't depend on agent ordering. Posted pricing doesn't depend on
collusion. They don't depend on a lot. It's conclusive in the sense that it shows you very
strongly that competition is not that important for approximation.
I was able to take a setting with competition and turn it into a setting where I just post
prices and people come in whatever worst-case order they want, so no one is
competing with anyone else in those settings.
And this close connection between unit demand preferences and single dimensional
preferences means that somehow unit demand incentives are very similar kinds of
incentives to the single dimensional incentives.
And so that's, I think, a nice intuition. Posted pricings are very widely prevalent. If you
look at eBay, they're migrating their auction platform to posted pricings with the Buy It
Now feature. Furthermore, pretty much everything everyone buys is sold with posted
pricing. I think it's nice: it corroborates the practice of selling stuff, where people
actually mostly use posted prices. And the fact that this is not too bad is a relief, because
if it were bad to use posted pricing, people would need a lot more dynamic mechanisms.
You'd have to go to the grocery store and bid for your lettuce. That would be annoying.
It's nice these posted pricings are good approximations. I have one comment which is
that the role of randomization here is very important. In all the theorems I discussed in
the past I was assuming deterministic mechanisms and comparing a deterministic
posted pricing to deterministic optimal pricing, for instance. Randomization makes the
problem very interesting and here are two papers which discuss that in more detail
which I'm not going to talk about.
I think the biggest open question in the area of multi-dimensional mechanism design is
getting beyond this unit-demand assumption. When an agent is unit demand -- I have an
agent who wants one of many houses -- then unit-demand incentives are similar to
single-dimensional incentives, and I have this nice upper bound from the single-
dimensional representative problem.
Beyond unit demand, I don't know of anything good. I don't know of a good upper
bound, I don't know of a way of understanding incentives. I think that's the most
interesting and important open question. Good. So part three? When did I start? 3:30.
So I've gone for an hour and a half. So why don't we stop and not do part three.
[applause]
>> Nikhil Devanu Rangarajan: Are there any more questions?
>> Jason Hartline: Yes.
>>: You mentioned posted pricing being prevalent, maybe for small purchases, but have
you bought a car or house recently?
>> Jason Hartline: Sure. And so you're saying those are negotiations?
>>: Lots of them.
>> Jason Hartline: And the question is to what extent do these negotiations approximate
an auction versus to what extent do they approximate a posted pricing, right? And, for
instance, buying a car: if you're buying a car from a dealer, he's not
putting your bid against someone else's bid, which would be an
auction. He has some price in mind, which you get to negotiate toward, and he either
will sell above it or not. Right? Houses are a little bit different. With houses you usually
negotiate, and sometimes there are other bidders. In the current market, actually, there
probably aren't that many other bidders. So it's basically a posted pricing: they have a
price in their mind beforehand that they're not going to go below. Maybe you're a good
negotiator. Essentially it's a posted price is what I'm saying. Okay?
>> Nikhil Devanu Rangarajan: Thank you.
[applause].