Document 17864788

>> Yuval Peres: Good afternoon. As a student at the Hebrew University in Jerusalem I often heard about the prophet inequalities, which were very popular among the statisticians there. It's great to see them occur now in this topic, so Balu Sivan will tell us about revenue maximization and prophet inequalities.
>> Balasubramanian Sivan: Thanks Yuval. This is a couple of joint works and a bunch of further
work that has been going on in the past six years on this multi-parameter revenue maximization
and I acknowledge these as I talk about them. So what is the setting? It's a very simple setting.
You have n agents and m items, and each agent i has a value vij for each item j. The valuation is of course private knowledge: only the agent knows his value for an item. These valuations are drawn from distributions which are public; everybody knows the distributions. A central assumption of this talk is that these values are all independent. For this talk I'm going to focus on unit demand agents. Unit demand agents are those interested in buying at most one item, which means that even if you give them a whole bundle S, their valuation is just the most valued item in that set S. Your goal is to maximize revenue by doing some matching between agents and items: you should not give any agent more than one item, and no item should be given to more than one agent.
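As a tiny illustration of the unit demand valuation just described (a sketch with made-up numbers, not from the slides): the value of a bundle is the value of the single best item in it.

```python
# Sketch of a unit-demand valuation: an agent's value for a bundle S
# is the value of his single favorite item in S (made-up numbers).
def unit_demand_value(values, bundle):
    """values[j] is the agent's value for item j; bundle is a set of item indices."""
    return max((values[j] for j in bundle), default=0.0)

# Giving the agent items {0, 2} is worth only his favorite of the two.
print(unit_demand_value([3.0, 7.0, 2.0], {0, 2}))  # 3.0
```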
>>: What is a setting you can think of where unit demand is [indiscernible]
>> Balasubramanian Sivan: So for example, when you go shopping for a hotel, for example, you
don't want more than one hotel. So this is a multi-parameter revenue maximization problem: multi-parameter because each agent has more than one private parameter. He has a value for each item, so he holds m private parameters. So formally, we are studying a multi-item auction problem: you have n unit demand agents, m items and a matrix F of distributions, and your goal is to design the seller-optimal auction.
What is known about this? It's very well understood how to design seller-optimal auctions in single parameter settings. For example, if you set m equal to one, so there is just one item, you know what to do to get the revenue-optimal auction: what Myerson's work basically shows is that you can reduce revenue maximization to welfare maximization. You transform values into what are called virtual values, a distribution-dependent transformation of values; after that, on virtual values, you just run the welfare-maximizing (VCG) auction, and that maximizes revenue in single parameter settings. But this completely fails to extend to multi-parameter settings. In particular, you get some nice insights there from Myerson, for example that the optimal mechanism is deterministic; such things fail to extend to multi-parameter settings. Even reasoning about multi-parameter settings is hard. The obvious question is whether we can get some kind of understanding for multi-parameter settings as well.
>>: [indiscernible]
>> Balasubramanian Sivan: There is one seller, but independence does not mean the distributions are identical. They could be different for different items and everything; it's just independence.
>>: So what do you mean by parameter? Parameter would be…
>> Balasubramanian Sivan: A parameter is a private value. Each agent i has values vi1, vi2, up to vim, so he holds more than one private parameter. If you set m equal to one, he holds only one private parameter, and in that case you know how to maximize revenue. It would be nice if we could get some sort of understanding of what optimal mechanisms look like in multi-parameter settings. What Vincent and Manelli show is that you can't do this: even for independent distributions, the class of optimal mechanisms basically includes almost all mechanisms, so you can't get any nice characterization of optimal mechanisms. One of the takeaways of this talk is that you can get some nice guidelines about what near-optimal mechanisms look like if you are willing to relax optimality and look at approximately optimal mechanisms. What we show is that very simple posted price mechanisms are approximately optimal in multi-parameter settings. These are not like traditional auctions, which ask people to bid and then decide whom to allocate to and what they pay; instead you just put prices on items, and the agents come and choose whether to take them or not. It's a take-it-or-leave-it offer. Posted prices are good in single parameter settings, and they extend to multi-parameter settings. They are also widely used, of course; they are really just the list prices you see in most of these settings.
>>: What do you mean by [indiscernible]
>> Balasubramanian Sivan: Single-dimensional is the same as single-parameter, and multi-dimensional is multi-parameter. This is the outline of the talk. The main tool is basically understanding these multi-parameter preferences. These are complex objects, and that's why it was difficult to figure out how much revenue you can optimally extract in these settings, so we are going to reduce this multi-parameter problem to a single parameter problem. This is the first step. Then the goal is to build a good posted price mechanism for single parameter settings, and that's where I'll make the connection to prophet inequalities. And then I'll show that these mechanisms can also be used in the multi-parameter setting. The assumption throughout the talk, as I already said, is that F is a product distribution. For simplicity, for the first part of the talk I'm going to focus on single agent settings, which means the agent index is dropped: think of values v1 through vm, all mutually independent. I will briefly talk about correlated values towards the end of the talk. So this is a single agent setting.
There is one agent and m items, and we want to construct a revenue-optimal or near-optimal mechanism in this setting. How do you understand these multi-parameter preferences? We'll use the approach developed by Chawla, Jason Hartline and Bobby Kleinberg in 2007. What they say is: you don't understand this setting, but you can construct a related single parameter setting which we do understand well. Here is the related single parameter setting. When you have one agent and four items, break this agent into four representatives. Each representative is interested in buying exactly one item, so this multi-parameter agent is broken into four single parameter agents, and each representative hides only one private value. And you now keep the same feasibility constraint as in the original setting. In the original setting the feasibility constraint was that he was a unit demand agent: he can get at most one item. The feasibility constraint just says which subsets of agent-item pairs can be served. You maintain the same feasibility constraint here: at most one item can be allocated. So we are basically in a single item auction problem: one agent and m items has been transformed into m agents and one item. So this is the related single parameter setting. So what about this setting? The claim is that the optimal revenue in this new single parameter setting is at least as much as the optimal revenue in the old multi-parameter setting. The intuition is basically that in the single parameter setting the representatives compete with each other; they act independently. Whereas in the original setting, the agent made a joint decision over all the items: he picked the item that maximizes his utility. Here these representatives are not going to say, that guy has higher utility, so I'll leave that item for him. So you can use the competition to extract more revenue; that's the intuition, and it can be formalized. I should caution you that this upper bound holds only for deterministic mechanisms. It fails for randomized mechanisms, although it holds up to a constant factor; that's something I'll get to at the end of the talk. If you take deterministic multi-parameter mechanisms, then their revenues are upper bounded by the optimal revenue of this related single parameter setting. So that's what they say. So what?
You have this single parameter setting; you know how to construct the optimal mechanism in the single parameter setting. But what do you do with it? You need something you can apply back in the multi-parameter setting, and you don't know how to use this mechanism there. What is the main challenge? It is basically to eliminate competition. In the single parameter setting there is competition; in the original setting there is none: a single agent simply picks the item in his own best interest. And where there is competition, there is room for collusion. Agents can say: the seller is using competition to extract revenue from us, so let's collude and bring down the revenue. So if you can design a mechanism which is robust to collusion, whose revenue doesn't go down due to collusion, then you can use the same mechanism in multi-parameter settings. Roughly, that is the kind of mechanism we are looking for. Traditional mechanisms, where you submit bids and then the seller does something with them, are subject to collusion; the second-price auction et cetera are subject to collusion. So that's what we need: a collusion-resistant single parameter mechanism, which can then be applied to multi-parameter settings. The candidate we pick is the posted price mechanism in single parameter settings. Think of a one item auction. You have n agents.
Instead of asking for bids or anything, just say: I have a price pi for agent i. The agents come in some order. When agent i arrives, he faces the price pi, and if the item has not already been sold he can choose to buy it at that price. If his value is higher than pi and he's rational, then he'll buy it. That's it. Okay. We are going to count the revenue as the revenue you get in the worst possible ordering. Say we have agent one and agent two with prices four and three, and both are willing to buy. If agent one comes first, then of course he'll pick the item and you get a revenue of four; if agent two gets it, you get a revenue of three. I am going to count the revenue from the worst ordering, and that's precisely why this is collusion robust. What does collusion mean here? Agent two can tell agent one: look, I'm getting a utility of three and you get a utility of one, so if I go first I'll pick the item and split it half-and-half, so you get 1.5 and I get 1.5; but if you go first you only get a utility of one, so we are both better off if I go first. That is the collusion here. So if people collude, what does it amount to? Basically it amounts to people coming in a different order.
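A minimal sketch of this worst-ordering accounting (the prices four and three follow the example above; the values are assumed high enough that both agents want to buy):

```python
from itertools import permutations

def posted_price_revenue(values, prices, order):
    # Single-item posted pricing: the first agent in `order` whose value
    # meets his personal price buys the item; everyone after gets nothing.
    for i in order:
        if values[i] >= prices[i]:
            return prices[i]
    return 0.0

def worst_case_revenue(values, prices):
    # Collusion only reorders the agents, so we count the worst ordering.
    return min(posted_price_revenue(values, prices, order)
               for order in permutations(range(len(values))))

# Agent one: value 5 at price 4 (utility 1); agent two: value 6 at price 3
# (utility 3). The worst ordering is agent two going first.
values, prices = [5.0, 6.0], [4.0, 3.0]
print(worst_case_revenue(values, prices))  # 3.0
```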
Agent two coming ahead of agent one is just a different ordering, and we are counting the revenue from the worst possible ordering; basically that's why it's collusion robust. So if we can design a posted price mechanism that approximates the optimal revenue even in the worst ordering, we can use it in the original multi-parameter setting. The extension to the multi-parameter setting you can sort of see now: if you put these very same prices in the original setting, the agent will pick the item a that maximizes va minus pa, and this is at least as good as collecting the worst possible ordering's pa in the single parameter setting. So what I just said is: if you can get an alpha-approximate single parameter posted price mechanism, then basically you get an alpha-approximate multi-parameter posted price mechanism. So the question is: do posted price mechanisms perform well in single parameter settings? So to do this I'll give a
quick overview of the characterization of truthful mechanisms in single parameter settings. I'm going to talk about Bayesian incentive compatibility, which is essentially the same as ordinary truthfulness: it means you are truthful in expectation over other people's valuations. It doesn't matter much here; you can just drop the expectations and get dominant-strategy incentive compatibility. What are the requirements for a mechanism to be truthful? The characterization says that the expected allocation function should be monotone, which means that if the probability of me getting the item, over other people's valuations, is something now and I increase my value, then the probability of me getting the item should not fall; it is no smaller. And the payment should satisfy a particular formula. It looks weird, but there's a nice interpretation if you draw the allocation curve: the area to the left of the curve is the payment. Whatever mechanism you are developing, if it is truthful, then these two conditions have to be satisfied, because this is a characterization. This payment identity is nice, but it doesn't by itself guide me in how to design the optimal mechanism. The revenue is basically the sum of the payments, and you want to maximize revenue; that's precisely what Myerson did. He expanded the expectation of the payment and rewrote it as the expectation of a clean function of the value, called the virtual value: the expected payment equals the expected virtual value, so the expected revenue is the expected virtual surplus. This directly tells you how to maximize revenue: since the expected payment is equal to the expected virtual value, you should give the item to the agent with the highest virtual value, setting xi equal to one for that agent. This is Myerson's mechanism.
>>: [indiscernible] three-minute chase here, so this is the three-minute [indiscernible] three-minute marathon.
>> Balasubramanian Sivan: Yeah. So what this basically says is: you take the values, apply the virtual valuation function to get virtual values for the agents, then run the Vickrey auction on the virtual values. The only thing is that these virtual values could sometimes be negative. If everybody is negative, you don't serve anybody, whereas the Vickrey auction always serves somebody. Okay.
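For concreteness, here is the virtual value transformation for a standard textbook distribution (Uniform[0,1] is my example, not the slides'):

```python
def virtual_value_uniform01(v):
    # Myerson's virtual value: phi(v) = v - (1 - F(v)) / f(v).
    # For v ~ Uniform[0,1], F(v) = v and f(v) = 1, so phi(v) = 2v - 1.
    return v - (1.0 - v)

# Virtual values can be negative; agents with phi(v) < 0 are never served,
# which is why this auction, unlike Vickrey, sometimes sells to nobody.
print(virtual_value_uniform01(0.25), virtual_value_uniform01(0.75))  # -0.5 0.5
```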
>>: There is an assumption that the virtual values are [indiscernible]
>> Balasubramanian Sivan: Yes, [indiscernible], but yeah. Okay. So that's enough for me to talk about the performance of posted price mechanisms under arbitrary orderings. What I wanted was that agents can come in any order, and I want the worst case of revenue over orderings: I fix the prices beforehand, agents come in whatever order they like, and I want to say that this revenue approximates the optimal revenue. There is a characterization saying that the revenue, the payment, is basically virtual value, so I can think of agents coming to me as virtual values coming to me: when an agent comes and I decide to sell to this agent, that is basically taking that virtual value. So you have these random variables, the virtual values, arriving to you in an arbitrary sequence, and your goal is to pick a near-maximum one. You don't control the sequence, but you can control who gets picked. That's exactly the setting in prophet inequalities. In prophet inequalities you have n random variables, drawn independently from distributions Fi. The prophet knows all the realizations, so his reward is always the max of all these random variables, and he gets this in expectation. If you are a gambler, you don't know what is going to happen the next day; you have to pick or drop something today. If you drop it you can't come back to it; if you pick it, you stop. Then: can the gambler compete with the prophet's reward? That's the prophet inequality question. Samuel-Cahn had this beautiful result in '84 where she said you just set a single threshold, not a different threshold each day, just one threshold, which is the median of the distribution of the max over the Xi, and then pick the first random variable that exceeds the threshold. If you do that, then your expected reward is at least half of the expected max value, which is what the prophet gets. So you get half of the prophet's reward with this single threshold. This directly translates to our setting because of the
interpretation through virtual values. So what exactly is happening: you have these virtual values coming to you, and I'm going to put a single threshold on the virtual values and pick the first virtual value to exceed the threshold. What does it mean to put a threshold on virtual values? It basically means that you put a threshold on values by inverting the virtual value function. But different people have different virtual value functions, so you invert differently for different people, which gives different prices for different people: this is the threshold on values. Obviously, this translates to an agent picking or dropping: an agent looks at the price, and if his value is above the price he picks the item; otherwise he does not. If you use the median of the max over i of the virtual values, then you get half of the optimal revenue. That's the simple connection to prophet inequalities, and these mechanisms, since they do well in single parameter settings, by the earlier argument also do well in multi-parameter settings. So this is the factor-half result. This is joint work with [indiscernible] and David Malec. What is crucial here, and I'm saying this because I'll come back to it later: the prophet threshold, the threshold used, is non-adaptive.
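A quick Monte Carlo check of this single-threshold rule (i.i.d. Uniform[0,1] values are my assumption; for n such variables the median of the max solves t^n = 1/2):

```python
import random

def simulate(n=5, trials=20000, seed=0):
    rng = random.Random(seed)
    threshold = 0.5 ** (1.0 / n)   # median of max(X_1, ..., X_n)
    gambler = prophet = 0.0
    for _ in range(trials):
        xs = [rng.random() for _ in range(n)]
        prophet += max(xs)                                       # sees everything
        gambler += next((x for x in xs if x >= threshold), 0.0)  # one fixed threshold
    return gambler / trials, prophet / trials

g, p = simulate()
print(g / p)  # comfortably above the guaranteed 1/2
```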
If you keep adapting the threshold for each item based on what happened before, then that is unacceptable here. Why? For single parameter settings it is okay, because what it essentially means is that I change the price for agent two based on how agent one reacted. But for multi-parameter settings, where there is one agent and m items, it's not acceptable, because that's like saying: I will not tell you what the price for item two is; here is item one, pick it at this price or drop it if you want, and later, depending on how you react, I'll tell you the price for item two. That's completely unreasonable, because you can't reason about what the agent will do when he is completely in the dark about future prices. So you want non-adaptive prices. You can get non-adaptive prices for a single agent and k items, but for more general [indiscernible] feasibility constraints you won't have these non-adaptive thresholds. Okay. So the extension to multi-agent settings is basically very similar to the single agent setting. There we broke the single agent into copies; here you break each agent into copies and maintain the same feasibility constraints: if some subset of agents was feasible, the same subset of agents is feasible here. The same argument basically goes through, that more competition implies more revenue in the single parameter setting, although you need to be slightly careful; it's not as straightforward as the single agent setting. Again, this fails for randomized mechanisms, which I'll come to later. The other thing I said was that you can use the single parameter mechanism in the multi-parameter setting, and that happens here also, except that instead of putting prices for one agent, you need a price for each agent: a price pij offered to agent i for item j. You have these n times m prices, and how do I count revenue? Agents can come in arbitrary order; when agent i comes, his menu is given to him, he picks his favorite item at these prices, and then the next agent comes. If agent one comes first, with values 5 and 10 for the two items at these prices, he would obviously pick item two, which gets him a utility of 10 minus 6, that is 4, while the first item gets him only 2. So he'll pick item two, which automatically means that only item 1 is available for agent two.
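The arrival dynamic just described can be sketched as follows. Agent one's numbers follow the slide; agent two's values and prices are hypothetical, filled in only to make the ordering effect visible:

```python
def run_order(values, prices, order):
    # Each arriving agent takes the still-available item maximizing
    # v_ij - p_ij, provided that utility is positive.
    sold, revenue = set(), 0.0
    for i in order:
        options = [(values[i][j] - prices[i][j], j)
                   for j in range(len(values[i])) if j not in sold]
        if options:
            u, j = max(options)
            if u > 0:
                sold.add(j)
                revenue += prices[i][j]
    return revenue

# Agent 1 values the items at (5, 10) with prices (3, 6); agent 2's
# numbers (values (0, 9), prices (2, 8)) are made up for illustration.
values, prices = [[5, 10], [0, 9]], [[3, 6], [2, 8]]
print(run_order(values, prices, [0, 1]))  # 6.0: agent 2 finds nothing he wants
print(run_order(values, prices, [1, 0]))  # 11.0: revenue depends on the order,
                                          # and we count the worst one (6.0)
```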
Okay. And if agent two comes first, you get the worst case and much less revenue; you get only four. But we count only that, the revenue of the worst possible ordering. Again, you can analyze via the single parameter setting; the only thing is that the feasibility constraints in the single parameter setting have now become more complicated. Originally it was just a one item auction; now you can have partition matroids and more general things. So what do we get for these general single parameter feasibility constraints? For k-unit auctions we get a factor of two. There's been a lot of work after ours. This factor 2 is basically coming from prophet inequalities: for a single item it was two, and for k items it would also be two. And [indiscernible] showed, in a very nice paper, that you can get arbitrarily close to one for k-item auctions as k goes to infinity. For the matching problem that we showed on the first slide you can get a single parameter factor of 5.83 for the constraints, and for general matroids we got order of log of the rank of the matroid. This was later improved by Kleinberg and Weinberg, because they proved an improved prophet inequality for matroid settings.
Although the point to note is that those are adaptive prices. Then you can ask how adaptive prices can work, when I just said adaptivity is a problem. The beauty of it is that there are multiple agents: the price for agent two can depend on what happened with agent one, but within an agent the prices are not adaptive; the whole menu is presented to him at once, so it does not break incentive compatibility et cetera. It is still open whether you can give a prophet inequality for matroids that is not adaptive: you may put multiple thresholds, but you set the thresholds once and for all. And you can't do well with posted price mechanisms if you go beyond matroid settings; of that we are sure. So that's the story basically for deterministic mechanisms. You have this…
>>: [indiscernible] what's the problem?
>> Balasubramanian Sivan: The problem is basically that the gambler goes along seeing random variables, and the set of all random variables he picks should be an independent set in a matroid. The single item setting is a 1-uniform matroid; k items is a k-uniform matroid, and this result says you get arbitrarily close to one in a k-uniform matroid. Now the natural question is, you know…
>>: [indiscernible] where different prophet inequality [indiscernible]
>> Balasubramanian Sivan: It is the same analysis. That's all.
>>: The same as taking a single price for the value?
>> Balasubramanian Sivan: Yes.
>>: And you are comparing against?
>> Balasubramanian Sivan: You are comparing against the prophet, who knows all of these things and will pick the top…
>>: [indiscernible]
>> Balasubramanian Sivan: Yes, yes, exactly. So the other thing is: here the gambler's reward gets arbitrarily close to the prophet's as the approximation factor goes to one. Can we get the same thing here as the rank increases? In general, obviously you can't, because I can create a big partition matroid which is a union of many uniform matroids, each of which might have small rank. But can you put some nice restrictions on matroids and then say that for these matroids the reward you get gets arbitrarily close to the prophet's reward as the rank increases? That's another open question from my charts.
>>: [indiscernible]
>> Balasubramanian Sivan: Reward, rewards, yes; the approximation factor gets arbitrarily close to one. Okay. For the rest of this talk I'm going to talk about what else you can do, where else you can go. Can you generalize to randomized mechanisms, and does this argument break? What happens if you go beyond independence? And what happens if you go beyond unit demand settings? All of these things were essential for the proof: unit demand settings, independence, and deterministic mechanisms. Does randomization help? That's the first question. In single parameter settings it doesn't; it's a very nice fact that the optimal mechanism is already deterministic. But in multi-parameter settings it does.
>>: [indiscernible] for some distributions [indiscernible]
>> Balasubramanian Sivan: No, there's always a way to do matroids without randomizing.
>>: [indiscernible]
>> Balasubramanian Sivan: Yes, you make it asymmetric, correct. That is the way the optimal single parameter mechanism stays deterministic. What does a randomized mechanism look like in a multi-parameter setting? Take deterministic mechanisms first: when you have just one agent, and focus on that one agent, a deterministic mechanism is simply a price menu. You put a price P1 for item 1 and a price P2 for item 2, and this splits the whole space of valuations into regions: a region where item 1 is bought, a region where item 2 is bought. What do randomized mechanisms look like? You can do two things. One is to pick a price menu randomly, but that doesn't help, because if that did well then one of the price menus would do well. What does help is a price menu with randomized allocations, so-called lotteries. Instead of saying you can pick item 1 at price P1 or item 2 at price P2, you can also put things of this form in the menu: if you pay the price p, I'll give you item 1 with probability q1 and item 2 with probability q2. You would be willing to risk it, just like you do on Priceline: for cheap flights you do not know which flight they are going to book, or which hotel; you pay the price, and after that they tell you which hotel you are going to and what price you got. The assumption, which we definitely need, is that the agents are risk neutral, which means I calculate the agent's expected utility when he buys this lottery as q1·v1 + q2·v2 − p. So what have we done by adding this lottery? We have basically created a new market segment: a few of the guys who were buying item 1 or item 2 earlier will now move to this lottery, alongside those who still buy item 1 and those who still buy item 2, so you get a finer segmentation of the market. One would expect that if you can do this sophisticated market segmentation you will get better revenue, and that's exactly
what's happening: it happens even in very simple settings. Let's look at one agent and two items, with values i.i.d. uniform between five and six. This is from Thanassoulis. In a deterministic mechanism the optimal thing to do is to post a price, and it turns out to be the same price for both items. If you post these two prices and also add a lottery at just four cents below the price of the two items, then a new segment comes up: the people who were buying the items earlier but are nearly indifferent between them will switch to the lottery, which is a loss for you because the lottery sells at a lesser price; these guys were paying p* and now pay strictly less than p*. But the gain is a new segment which was not able to buy anything before and is now able to buy the lottery, because it's priced somewhat lower. It turns out that the gain outweighs the loss, and you get a factor 1.1 improvement. So this begs the question: what happens in general, right?
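Here is a small Monte Carlo sketch of this gain/loss trade-off. The deterministic price 5.097 is my numerical approximation of the optimal symmetric price, and the four-cent discount follows the talk; this is illustrative, not the paper's exact calculation:

```python
import random

def revenues(trials=200_000, seed=1):
    rng = random.Random(seed)
    p = 5.097       # approximately optimal symmetric item price (assumed)
    eps = 0.04      # the 50/50 lottery is priced "four cents" lower
    det = lot = 0.0
    for _ in range(trials):
        v1, v2 = rng.uniform(5, 6), rng.uniform(5, 6)
        hi, avg = max(v1, v2), (v1 + v2) / 2
        if hi >= p:                        # deterministic menu: favorite item at p
            det += p
        u_item, u_lot = hi - p, avg - (p - eps)
        if max(u_item, u_lot) >= 0:        # menu with the added 50/50 lottery
            lot += p if u_item >= u_lot else p - eps
    return det / trials, lot / trials

d, l = revenues()
print(d, l)  # the menu with the lottery earns strictly more
```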
Can I keep adding lotteries like this and keep segmenting the space? You can add infinitely many lotteries and segment the space so finely that each one gives an incremental improvement. But how much can randomization increase revenue? The first attempt would be to go back to what we did for deterministic mechanisms. There I said: break this agent into representatives; he was a multi-parameter agent, these guys are single parameter, competition is increased and therefore revenue is increased. That's what we argued for the deterministic setting, and the same thing should roughly go through here also: the competition has increased here also, so you should get more revenue. That's what we thought for a long time, and none of the examples we had disproved it. So we were kind of shocked when we found an example where, in spite of the competition, the optimal single parameter revenue is strictly smaller than the optimal randomized mechanism's revenue in the multi-parameter setting. Again, this happens
for a very simple setting: one agent and two i.i.d. items. The distribution is the so-called equal revenue distribution, which means the probability that the value exceeds x is exactly 1/x, truncated at 10. What is the optimal single parameter mechanism's revenue? The single parameter setting is a one item auction: there are two representatives with these values and one item. And I'll even relax the constraint and say that you can sell two items instead.
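The "revenue one from any price" property of the equal revenue distribution is easy to check (a sketch of the claim, with the truncation handled as the talk describes):

```python
def sell_prob(p, cap=10.0):
    # Equal revenue distribution: P(v >= x) = 1/x for 1 <= x <= cap,
    # with the leftover mass 1/cap sitting at v = cap.
    if p <= 1.0:
        return 1.0
    return 1.0 / p if p <= cap else 0.0

# Expected revenue p * P(v >= p) equals 1 for every price in [1, cap]:
for p in (1.0, 2.5, 7.0, 10.0):
    print(p, p * sell_prob(p))  # 1.0 (up to float rounding)
```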
You can sell to both agents. Even so, you can get a revenue of at most one from each agent, because for any price p that you set, he buys with probability 1/p, and one plus one is two. So at most two is what you can get in the single parameter setting. But here is a lottery menu in the multi-parameter setting, with three lotteries in the menu. The first lottery is what most agents prefer, because it has a small price of 2 1/2. The second and third lotteries are meant for people who are crazy about one of the items. So this is how the space looks now: most people buy the first lottery; this is region one. There are some people with a very large value for item 1, so they go to region two and pay a price of 3n/8, where n could be huge, but it's a small set that does this; similarly for item two. And this lottery menu gets a revenue of 2.277, which is more than 2, so there is a factor 1.13 gap between the optimal single parameter mechanism and the optimal randomized one. Thankfully, this gap doesn't grow any further: we are able to show that the optimal single parameter revenue is at least half of the optimal randomized mechanism's revenue. This is joint work with [indiscernible] and David Malec. Here is
the proof. The optimal randomized mechanism, I don't know what it looks like; it could be some menu, it could even have infinitely many lotteries in it. So take that menu; I'm going to call it LMP, MP for multi-parameter. It has several lotteries, and a general lottery in it is (q1, q2, p): you get item 1 with probability q1 and item 2 with probability q2, at price p. This generalizes to any number of items, but I'm going to focus on just two. Given this menu, I'm going to construct a mechanism in the single parameter setting that has good revenue, comparable to this one. That's what I'm going to do. Remember that in the single parameter setting we have broken this one agent into two representatives, so I need to say what each representative gets, and here's what I'm going to do. For each lottery here I'm going to create one lottery and put it in the set LSP1, and another lottery and put it in the set LSP2, meant for the two representatives. The lottery for representative one says: I'll give you your favorite item with probability q1; each representative is interested in only one item, and item two is not of interest to him, so I'll slash the price from p to p − q2·v2. This can be done only after people reveal their values. And similarly for representative two I
do this. And what's special about this lottery menu, what's special is that if this agent desires
that this lottery was the best lottery for him, L, then the corresponding single parameter agents
bought by these two lotteries, they got the utilities preserved. The utility here was q1v1 plus
q2v2 minus p. It's the same utility here. It's q1v1 minus this price p-q2v2 is the same. So what
[indiscernible] chooses. So that much is clear. So what I want to show is that this mechanism
revenue is good. So what is the revenue of M_SP? Consider any values v1, v2, and let's focus on the
case that v1 is at least v2. The revenue is, of course, the revenue you get from L_SP1 plus the
revenue you get from L_SP2, which is at least the revenue you get from the first menu alone.
What is that? Exactly the price: the representative buys a lottery, so you get revenue
p - q2·v2. This is less than p. In the original setting you got a revenue of p; here you
got p - q2·v2, and who knows, this could be much smaller, so I don't know exactly
how it compares to p. But the idea is that the shortfall q2·v2 is at most v2, because q2 is at
most one in a unit demand setting. And v2 is precisely the revenue of the second price auction in the
single parameter setting: v1 is greater than v2, so the second price auction gets you v2. So what we fall short of is
bounded by the second price auction's revenue. So what I showed is that for any lottery menu
you give me, its revenue is upper bounded by the revenue of the single parameter mechanism we
constructed plus the revenue of the second price auction, and that sum is at most twice the optimal
single parameter revenue. So that's it, and from here you can repeat the whole story: there we said the
optimal deterministic mechanism in the multi-parameter setting gives a 2 approximation.
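As an aside, the lottery-splitting step above is easy to sanity-check numerically. The following is a minimal sketch (my own illustration, with hypothetical names, not code from any paper) verifying that each representative's utility is preserved and that the per-lottery revenue shortfall is at most v2, the second price benchmark:

```python
import random

def split_lottery(q1, q2, p, v1, v2):
    """Split a multi-parameter lottery (q1, q2, p) into one lottery per
    single-parameter representative, slashing the price by the part of
    the bundle the representative cannot receive (illustrative names)."""
    lot1 = (q1, p - q2 * v2)  # representative 1: item 1 w.p. q1
    lot2 = (q2, p - q1 * v1)  # representative 2: item 2 w.p. q2
    return lot1, lot2

rng = random.Random(0)
for _ in range(1000):
    q1, q2 = rng.random(), rng.random()
    v1 = rng.uniform(0, 10)
    v2 = rng.uniform(0, v1)                # focus on the case v1 >= v2
    p = rng.uniform(0, q1 * v1 + q2 * v2)  # so the buyer's utility is nonnegative
    (a1, p1), (a2, p2) = split_lottery(q1, q2, p, v1, v2)
    # Utility preservation: q1*v1 + q2*v2 - p == q1*v1 - (p - q2*v2).
    assert abs((q1 * v1 + q2 * v2 - p) - (a1 * v1 - p1)) < 1e-9
    # Revenue shortfall of representative 1 alone: p - p1 = q2*v2 <= v2,
    # which is the second price auction's revenue when v1 >= v2.
    assert p - p1 <= v2 + 1e-9
```

The two asserts are exactly the two claims in the argument: utilities match term by term, and the loss is bounded by the second price revenue.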
Here it becomes a 4 approximation because you lose an extra factor of 2. Okay. So the same posted
price mechanism can approximate the optimal randomized mechanism to a factor of 4. So
that is all I'll say about randomized mechanisms. The next thing I want to talk about is what
happens if you go beyond independence. Independence seems particularly unrealistic in unit
demand settings: unit demand means the items are basically substitutes, as in the hotels
example, and it's very unrealistic that you have completely independent values for
different hotels. On the other hand, if you allow arbitrary correlations,
[indiscernible], Bobby Kleinberg and Matt Weinberg showed that you can get an unbounded gap
between randomized and deterministic mechanisms. What happens is that they construct
these [indiscernible] distributions for which you can construct a lottery menu with one lottery for every point
in the support of the distribution; you make the support huge, and then you can keep on
increasing the gap. The funny thing is this happens even for two items, and
two items is the simplest setting where it can: with one item randomization can't help you. So arbitrary
correlations are off the table, and the question is what a good model of correlation for substitutes is. This is
something I am interested in, and here is one model that we came up with in this work, what we call
common base value correlation. These items are all substitutes; the three hotels all provide
the same core functionality, accommodation. So let's say that people have
some common value for accommodation, and beyond that each hotel has its own specialty: maybe one
is close to a beach, this one has a good swimming pool, maybe Wi-Fi is good at the
Holiday Inn, and you have additional values for these things. What we say is that
there is a common base value v0 for the accommodation itself, and then on top there is a vi for each
specialty. The v0 and the vi's are all drawn independently, but the item values v0 + vi are
correlated through the shared v0. This is common base value correlation. This guy is a business traveler and he
cares about Wi-Fi. This one is on vacation and maybe he cares more about the beach. With
this kind of mild correlation through substitutes you can prove that
the optimal randomized mechanism's revenue is within a constant factor: the same
posted price mechanism will give you a factor 8 approximation. So what other forms of
correlation can you tackle? That's an open question. So I leave the beyond-independence
question there. The next question is beyond unit
demand. Unit demand is very crucial here. For example, I said the q2·v2 term was the loss. In
general the loss is a sum of qj·vj terms, and since the q's sum to at most one, bounding it this way
gives you a factor of k instead of the factor 2 I described, so you can go somewhat beyond unit demand with the same
approach at the cost of a factor k. So what can you say about other valuations? We don't even know how to handle additive valuations. Okay?
Again, the same story for additive valuations: [indiscernible] showed that the gap between
randomized and deterministic is unbounded with correlation, and they showed that if the distributions are
independent you can get an order log² n approximation. This was later improved by
[indiscernible] and Andy [indiscernible], who give an order log n
approximation, improving the order log² n, but a constant factor is still open. If you
get a constant factor, then the story becomes exactly the same as the unit demand story.
The same goes for the randomized-deterministic gap under correlation and the constant factor under
independence; this is open. So that's the story for additive valuations, and beyond additive
nobody knows what's going to happen, so all of these are interesting open questions.
I'm sure there's going to be intense activity here in the next few years. Then what about
combinatorial feasibility constraints? The matching constraint we used was simple, an
intersection of partition matroids. What happens if you go to a general
matroid? As I already mentioned, Bobby Kleinberg and Matt Weinberg
developed prophet inequalities for matroid feasibility constraints achieving the same factor 2, but
their thresholds are adaptive. So what are the open questions? There are
several open questions, but I'm going to focus on things related to the talk. One is
the unit demand setting. The only approach we know is to go through this reduction to the single
parameter setting and work off of that, and if you go through that you can't do anything better than
2 here or anything better than 4 there. That's the limit of the idea; you can't extract more than
that. So can you do better than 2? In fact even a PTAS is open, and anything slightly better,
even 1.9, is interesting to me because it's going to require a new idea. Same thing for randomized mechanisms.
For additive valuations a constant factor approximation is open, as I said, and a PTAS is
also open. The gap between randomized and deterministic is open too, because a PTAS could
go through a randomized mechanism. For matroid prophet inequalities, the question I already mentioned
is achieving them with non-adaptive thresholds. The other thing is prophet inequalities for correlated
distributions. One might think that correlated distributions leak more information,
and therefore prophet inequalities should do well, but there are
examples where they perform as badly as possible, so arbitrary correlations you can't tackle. So
for what kinds of correlations can you give prophet inequalities with constant factor
approximations? These are the questions directly related to the talk. That's it. Yeah. Thanks
for listening. [applause]
>> Yuval Peres: Any questions or comments?
>>: [indiscernible] single parameter setting, and it also handles cases where distributions are not
regular or virtual valuations are not increasing. Is that also true in your…
>> Balasubramanian Sivan: Yes, because I can look at the ironed virtual values. Those are my
random variables now; just apply the prophet inequality to them. That's it. The only thing
you have to be careful about is that the ironed virtual value
function has flat regions. I take the threshold and I'm going to invert it, but how do you
invert it where there is a flat region? You end up using either one end of the region or the other,
so you can never change the probability of allocation within an ironed region. As long as you
respect that, then you are good. That's all there is to it.
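To make the inversion point concrete, here is a tiny sketch (my own illustration, with hypothetical function names) for a regular distribution, Uniform[0,1], where the virtual value function is strictly increasing and the threshold inversion is unambiguous. For an ironed distribution the flat regions make the inverse set-valued, which is exactly why the allocation probability must stay constant across each ironed region.

```python
def virtual_value_uniform(v):
    # For v ~ Uniform[0,1]: F(v) = v, f(v) = 1, so
    # phi(v) = v - (1 - F(v)) / f(v) = 2v - 1.
    return 2 * v - 1

def invert_threshold(t):
    # Invert phi at threshold t: the smallest v with phi(v) >= t.
    # Uniform[0,1] is regular (phi strictly increasing), so no ironing is
    # needed; with ironing, a flat region makes this inverse ambiguous.
    return (t + 1) / 2

assert virtual_value_uniform(0.5) == 0.0          # Myerson reserve price is 1/2
assert abs(invert_threshold(virtual_value_uniform(0.8)) - 0.8) < 1e-9  # round trip
```

The round-trip assert is what fails conceptually in an ironed region: many values of v map to the same virtual value, so "the" inverse of a threshold there is not a single point.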
>>: What are the prophet inequalities?
>> Balasubramanian Sivan: A prophet inequality is basically this: you have n random
variables revealed one at a time. A prophet can see all of these random variables ahead of time and picks the
biggest one. You, as a gambler, are competing with this prophet. You don't see the future, but still,
you know, you are saying you will do at least half as well as the prophet, and that's the prophet
inequality. The original result, due to [indiscernible], said that the best stopping
rule gives you a half approximation, but the best stopping rule could be complicated.
What Samuel-Cahn showed is that simple threshold stopping rules are already good: they
give you the same half approximation, and they are very simple rules.
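Samuel-Cahn's rule is simple enough to simulate. The following is an illustrative sketch (my own, not from the talk): set the threshold at the median of the maximum, stop at the first variable exceeding it, and compare to the prophet's expected maximum.

```python
import random
import statistics

def simulate(n=5, trials=20000, seed=1):
    """Estimate gambler-vs-prophet ratio for n iid Exp(1) variables,
    using Samuel-Cahn's median-of-the-maximum threshold rule."""
    rng = random.Random(seed)
    samples = [[rng.expovariate(1.0) for _ in range(n)] for _ in range(trials)]
    # Threshold: the median of the maximum, estimated by Monte Carlo.
    t = statistics.median(max(s) for s in samples)
    prophet = gambler = 0.0
    for s in samples:
        prophet += max(s)          # the prophet always takes the maximum
        for x in s:                # the gambler stops at the first value >= t
            if x >= t:
                gambler += x
                break
    return gambler / prophet

ratio = simulate()
assert ratio >= 0.5  # the guarantee: at least half the prophet's value
```

For iid exponentials the empirical ratio comes out well above the worst-case guarantee of one half; the worst-case examples showing 1/2 is tight use very skewed two-point distributions.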
>>: So there is a prophet inequality with a factor like 1 - 1/√k, something like that.
Under which conditions is that?
>> Balasubramanian Sivan: That is a k-choice prophet inequality, which means the gambler
can choose k of the random variables, any k he wants. In the original setting, once you
choose you stop, right, you don't get anything from the next steps. Here you stop only after the k-th
choice.
>>: And you just preset the thresholds?
>> Balasubramanian Sivan: Yes.
>>: And you get that factor of the maximum overall, against the [indiscernible]
>> Balasubramanian Sivan: Yes. If you knew everything ahead of time you would pick the top
k. Yes.
>> Yuval Peres: Any more questions? No? Thank Balu again. [applause]