>> Nikhil Devanur Rangarajan: So it's a pleasure to have Nicole Immorlica here. She's been
visiting us all week. And Nicole is well known to many at Microsoft from the time she spent here
as a post-doc a few years ago. Since then she's spent a year at CWI Amsterdam, and she's now
an assistant professor at Northwestern. So she's going to tell us about how to
price information cascades.
>> Nicole Immorlica: Thanks. This is joint work with Sham Kakade and Ilan Lobel, done partly
while we were all visiting Microsoft Research in the New England Lab.
So I first got interested in this problem from the following example, which is plagiarized
unabashedly from the textbook of Jon Kleinberg and David Easley. So in that example they
say, okay, let's suppose that we have two possible urns, and each urn has three balls in it.
Either the urn has two yellow balls and one blue ball, or it has two blue balls and one yellow
ball.
And there's only one urn, and it's equally likely to be either of these two urns. We'll call the one
with more yellow balls a yellow urn; the one with more blue balls, a blue urn.
And we want to play the following game, which I actually played in a lecture I was teaching at
Northwestern. So the players or the students, they come in one by one into some room, and
they're going to see a random ball from the urn. Okay. So they're doing this sequentially.
Person walks into the room. They see a random ball from the urn. And they also know the
history of other people's guesses. So this is not the signal that the other people had -- not
necessarily the same as the ball that the other person drew -- but what each person guessed as
to the color of the urn.
So given their own private signal, this ball that they draw from the urn, and the history of other
people's guesses, they have to make a public guess as to what they think the color of the urn is.
And in my undergraduate course, if the students guessed correctly, I gave them
one point towards their final grade; otherwise they lost the opportunity for a point. They didn't like
that very much.
So let's try to think about what might happen here. The goal of a player, or if you're a student
trying to earn points, your goal is to maximize the probability you get the point, given the
information you have.
And so let's think about the first person.
>>: Is there transferability between players?
>> Nicole Immorlica: Grades are not transferrable, I think. And in this game, no.
>>: The question --
>>: So a later player will benefit if the earlier player will give --
>> Nicole Immorlica: You might want to pay earlier, people, to say their true signal, for example.
And then you can break -- so there's all sorts of work that goes into these information cascades.
I'll talk a little bit about it, one thing that you could try and look at is paying people to do certain
things.
>>: But I'll ask everyone --
>> Nicole Immorlica: Everyone is in it for themselves. So this person sees a blue ball, and is
going to use Bayes' rule to calculate the probability that the urn is blue given that they drew a
blue ball.
And this boy also calculates the probability that the urn is yellow given that he drew a blue ball,
and he's going to notice that the urn is more likely to be blue than yellow. And so he's going to
guess blue. A similar argument goes through if he drew a yellow ball. So he guesses that the urn
is the same color as the ball. Not very interesting. This hourglass is annoying me tremendously.
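The posterior calculations the first few players do can be checked with exact fractions. This is a sketch, assuming only what's in the example: each urn has a two-to-one majority of its own color and the two urns are equally likely a priori:

```python
from fractions import Fraction

def posterior_blue(n_blue, n_yellow):
    """P(urn is blue | n_blue blue and n_yellow yellow informative signals).

    Blue urn: P(blue ball) = 2/3. Yellow urn: P(blue ball) = 1/3.
    Each urn has prior probability 1/2.
    """
    like_blue = Fraction(2, 3) ** n_blue * Fraction(1, 3) ** n_yellow
    like_yellow = Fraction(1, 3) ** n_blue * Fraction(2, 3) ** n_yellow
    return like_blue / (like_blue + like_yellow)

print(posterior_blue(1, 0))   # 2/3: one blue draw makes the blue urn more likely
print(posterior_blue(1, 1))   # 1/2: opposite signals cancel exactly (the tie)
print(posterior_blue(2, 1))   # 2/3: two inferred blues outweigh one's own yellow
```

The last line is exactly the third person's situation: two inferred blue signals plus her own yellow draw still favor the blue urn, so her own signal can't change her guess.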
So the second person, what's she going to do? Well, this is important. So she knows that the
first guess was blue. That doesn't mean that she knows that the first ball is blue a priori. She just
knows that the first guess was blue. She knows the first person is a Bayesian rational,
utility-maximizing agent and that therefore, since he had no history, his guess mimicked the color
of the ball that he drew.
So she knows that the first person's guess was informative. From that she can conclude that the
first person's draw was actually blue.
>>: So in your class, students could use the rationality assumptions about other students?
>> Nicole Immorlica: It's a lot of fun to play games with undergraduates. They're great little
guinea pigs.
So she looks at her ball, and she calculates that if she drew a blue ball, it's more likely to be a
blue urn; and if she drew a yellow ball, you can do the calculations with these numbers, it's
equally likely to be a yellow or blue urn. We can say she breaks ties in her favor. She guesses
that the urn is the same color as the ball.
Okay. Now, the third person comes in the room and sees that the first two guesses were blue.
And using the same sort of reasoning that I discussed about the second person, she's going to
infer that the first two people's guesses were informative and therefore the first two draws were
blue. So I'm considering a history in which the first draw was blue, the guess was blue, the
second draw was blue, the guess was blue. The third person knows that the first two draws were
blue.
And now it doesn't matter what color ball she draws. To compute the probability that the urn is
blue, she essentially sees three independent draws from the urn, at least two of which are blue.
So it's much more likely to be a blue urn. Therefore, no matter what
she sees, she's going to guess that the urn is blue.
And thereafter, when the nth person enters the room, they know there were N minus 1 blue
guesses. Of course, they know that of these N minus 1 blue guesses only the first two had any
informational content at all.
So this person is facing the exact same decision process that the first person was facing. So
after two people guess that the urn is blue, every person thereafter is going to guess that the urn
is blue no matter what he or she sees. So the first two guesses are blue, everybody thinks that
the world is blue.
>>: The assumption is that you [indiscernible] blue.
>> Nicole Immorlica: It's not so sensitive. If you wanted to like flip a coin to break the tie, then
you just need a couple more blue guesses at the beginning. So the point is that this is a constant
probability: even though we have infinitely many samples from this urn, we still have a constant
probability that the world is wrong.
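That constant can be checked with a quick Monte Carlo. This is a sketch under the talk's assumptions: ties are broken toward one's own signal, so guesses stay informative until the inferred signals differ by two, at which point a cascade locks in:

```python
import random

def simulate_cascade(n_people=100, p_match=2/3, seed=None):
    """Simulate the two-urn guessing game; the true urn is blue.

    Each person's ball matches the urn color with probability p_match.
    While the inferred signal counts differ by less than two, each person's
    guess follows their own ball (ties broken toward the private signal).
    Once the counts differ by two, everyone copies the majority forever.
    Returns the color the population settles on.
    """
    rng = random.Random(seed)
    blues = yellows = 0   # informative signals everyone has inferred so far
    for _ in range(n_people):
        if abs(blues - yellows) >= 2:
            return 'blue' if blues > yellows else 'yellow'  # cascade
        ball = 'blue' if rng.random() < p_match else 'yellow'
        if ball == 'blue':
            blues += 1
        else:
            yellows += 1
    return 'blue' if blues >= yellows else 'yellow'

# Fraction of runs that cascade on the wrong color:
wrong = sum(simulate_cascade(seed=i) == 'yellow' for i in range(20_000)) / 20_000
print(wrong)  # ≈ 0.2, independent of n_people
```

The 1/5 comes out analytically too: each pair of informative draws either both match the urn (probability 4/9, correct cascade), both mismatch (1/9, wrong cascade), or cancel and restart (4/9), so the wrong-cascade probability is (1/9)/(5/9) = 1/5.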
>>: That actually happened [indiscernible].
>> Nicole Immorlica: Yeah, we had some information -- so, yeah, I had, I think, maybe five
students per round -- you can't do this for all 60 students. It takes too long. I did three different
rounds of five students, and in two of them they got the minority color: the last person guessed
the minority color.
Okay. So this is an information cascade. I find this setup very fascinating. There's a lot of
questions that I'm pursuing in these directions. For example, Yuval mentioned transferrable
utilities. So if I could pay people off. Alternatively, as the government, if I could incentivize
people to be somewhat altruistic, paying you as a fraction of the population that guesses
correctly, can we design systems in which a majority of the people actually guess correctly.
>>: What's incentivizing, you're not actually --
>> Nicole Immorlica: Okay. But I can say I'll pay you 50 cents or 50 percent if the population
does the right thing. Then you have some incentive to say your true signal.
We can also look at -- another thing I was looking at a little bit with Sephi, when I was visiting the
research lab last year, was you can't see everybody behind you but only your friends. So you're
given a social network. In what order should people draw balls from the urn in order to maximize
the probability that everybody's correct in a fully rational reasoning setting.
Okay. So anyway, this is information cascades. There's a lot of literature on this. There's a lot of
examples of information cascades, even in the real world. So one of the standard examples
when people discuss information cascades is choosing a restaurant in a strange town. And
this is my mom's strategy. She walks into a strange town. She looks at all the restaurants and
finds the one that's the most full and she insists that we eat there, even if there's a two-hour wait.
Which irritates me endlessly. But it turns out that's not just like, you know, a peer pressure thing.
She actually -- there's a model under which she's acting rationally.
And so that's one example of an information cascade explaining why one restaurant is way more
popular than the others, amongst some tourists in a strange town.
>>: [indiscernible].
>> Nicole Immorlica: Staring up at the sky. So they actually did this in the street where some
sociologists had some people stand, stare up at the sky, and see how many people stopped to
look up.
>>: I think the rational -- you saw the queue for that thing. [laughter].
>> Nicole Immorlica: So these phenomena are entertaining. There are books about funny examples.
What more precisely do I mean by an information cascade? These occur when people make
decisions sequentially. Two essential ingredients are that people make decisions sequentially
based on some private information, and then they take some public action, and it's these public
actions that are observed by the population in general. And then we say that an information cascade
occurs when people start to abandon their own information, the private signals that they're getting
in favor of inferences based on other people's actions, exactly like we saw in the example at the
beginning of the talk.
So one of the directions that I was considering was: can we use information cascades to price a
product, as a company, in order to get astronomically larger revenue? Can we leverage this
phenomenon to trick people into paying a lot for something that's not really worth very much?
I was at Microsoft. [laughter] So the setting is that we have a product here which has some value
unknown to the people in the world and to the seller of the product as well. We don't have any
information about how valuable this product is. But there's some value -- and we have a
distribution about the value of this product. And it's a common value setting. So this product has
the same value to every user.
So they're trying -- the users are essentially trying to learn this value and figure out how much
they should be paying for this product and the mechanism designer is also trying to learn the
value.
And now, similar to the guessing game from before, we have a sales game, where
the buyers are going to enter the store sequentially one by one, and when they enter the store,
they fiddle around with the product a little bit and they get some sense about how cool it is. And
you can think about this as being like a thumbs up or a thumbs down.
So this is some coarse signal of the value of the product. It's correlated with the value of the
product, but it's coarser than the domain for the value. So it's not completely informative.
And they also are going to see the price of the product. Now products have a price tag with them.
They get a signal. They see the price. And they also see the sales history. They see what
happened in the past, what prices people were offered for the product and whether or not they
bought.
Then finally the buyer's going to use Bayesian reasoning to decide whether or not to buy the
product, and in this case that means, well, they're going to look at the expected value of the
product minus the price. This is what their utility would be in expectation, conditioned on the
information set that they have, which is their private signal and the sales history of the product.
So to summarize, the sales game is that we have sequential actions. People see their signal of
the quality, the product's current price, and the previous actions, and they decide whether or not
to buy.
So this seems like the perfect setting for an information cascade, and we want to see if we could
get an information cascade here. In particular, we're trying to design a pricing mechanism that
gives sequential prices to the buyers that's going to maximize the long-term revenue of the
company.
And sort of a knee-jerk reaction for people in the field would be like, okay, let's try a fixed price,
the Myerson price seems to be magical terminology in this field. What's the Myerson price? You
would look at the distribution for the value of the product, and based on that distribution you can
see for any particular price what's the expected revenue of that price. Pick the one that's the
maximum and charge that to every user that enters the store.
So the problem with this or with any fixed price is that since the signal space is coarse -- you get
a thumbs up or a thumbs down about the quality of the product -- it could be that a few
people don't buy the product even though it's worthwhile, and then thereafter you're going to get
an information cascade as in the two-urn model in which nobody buys even though it's
worthwhile, and you're going to get low long-term sales revenue.
By the way, I keep saying like the first two people don't buy, but, of course, that's not really
relevant. What's relevant is the differential between the people that bought and the people that
didn't buy. So maybe you can think of this, maybe the audience likes this better, the random
walk. As soon as the random walk hits some boundary conditions it just flies out.
So there's more settings in which an information cascade happens. It doesn't have to be the first
two people that get the minority signals.
>>: But don't most items that we buy have a fixed price, or does it change all the time?
Promotions?
>>: Coupons you get in the mail.
>> Nicole Immorlica: Yeah, and also this is a common value setting, which -- so in particular I'm
interested in objects that have at least a common value component, which might not be what
you're used to thinking of.
So we get low long-term sales revenue with a fixed price. So what? Maybe you're going to get
low long-term sales revenue no matter what. What should we be looking for here? What can we
compare ourselves to? And the first lemma that we have is that in expectation the maximum
revenue we can get from any particular agent is going to be at most the expected value of the
product given the history that's happened so far. And, I mean, this is actually really intuitive,
because this would be the maximum revenue that we could get, ignoring the private signal that
the agent is going to draw. And the private signal, this information asymmetry can only help the
person who has the extra information, being the buyer in this case. So the mechanism can't get
more than expected value given the history, because if we ignore signals, that's the best we could
do, and the signals only help the agents.
And then, of course, an immediate corollary of this is that the maximum long-term revenue we
could hope to get is upper bounded by the expected value of the product, which just comes from
taking the expectation over all possible histories. So it means that what we want to do as the
mechanism designer is find a sequence of prices that induce the agents to learn the true value of
the product in the long term.
We're not going to be able to leverage information cascades to fool people to think that it's better
quality than it really is. Precisely because if we did that, there would be many
histories over which we would lose, because they would think it's worse quality than it really is.
>>: When you're saying that, are you saying on average we can?
>> Nicole Immorlica: Yeah.
>>: If I'm a company, I can put out 100 different products and for -- it's possible that in one of
them I succeed in this, right? That I get both.
>> Nicole Immorlica: It's possible you get lucky in the history and you succeed.
>>: And from there on that product should, often people are paying more than --
>> Nicole Immorlica: If you take the expectation over the histories, an expectation before you
start -- if you take the expectation over those coin flips --
>>: This product and another product.
>>: [indiscernible].
>>: What?
>>: [indiscernible].
>> Nicole Immorlica: So your solution is to get rid of the product that has the bad history and just
keep trying until you get a product that has a good history?
>>: Version 3, right?
>>: Right.
>>: Version 3 or Cadillac. Different things.
>> Nicole Immorlica: Yeah.
>>: [indiscernible] you have to try something which are treated, the first two failed.
>> Nicole Immorlica: [indiscernible] you said?
>>: [indiscernible] [laughter] I didn't want to make --
>> Nicole Immorlica: I don't think you could even sustain a revenue above the expected true
value, because in some sense the agents are --
>>: That was my question. I didn't have the full picture. If I'm lucky with the first 100 --
>> Nicole Immorlica: Yeah. In that history you'll get higher than the lemma value.
>>: Solving the initial set of lemma, visualize the product.
>> Nicole Immorlica: Yeah, that is in some sense that.
>>: So a company might consider a product where the agent's actually true value of failure.
>> Nicole Immorlica: Well, it's going to --
>>: So many different products. Success is one where you earn twice --
>> Nicole Immorlica: If you're going to sell one product, a priori you want to come up with a
pricing scheme for this product, the maximum revenue you can get is the expected value of the
product. And then you do that by having agents learn the true value.
>>: I think the difference is this is if you can control an item when you have the sequence of
customers. Now if you're allowed to choose how long, then you presumably could get the -- if it's
one of the cases where you've got a positive cascade you could get another one.
>>: [indiscernible].
>> Nicole Immorlica: Okay. So the question then becomes how do we get agents to learn the
true value of the product without losing too much revenue. And what you can see is that there's a
trade-off between the prices and the information gain. And that's what I want to explain through
this example here.
So let's imagine that we have this product whose value distribution is the following: Either with
probability one-half it's worth two-thirds of a dollar, and with probability one-half it's worth only
one-third of a dollar. And, again, this is a common value for every agent.
So let's look at the first buyer. And suppose that the price tag of the product is one-third of a
dollar. Now, the first buyer might get a low signal, a thumbs down, and say, oh, it looks like it's
only worth a third of a dollar. The price is also a third, so the agent will buy. Alternatively, he
might get a high signal, a thumbs up, and say, oh, it looks like it's worth two-thirds of a dollar,
and it's only priced at one-third of a dollar. So again he's going to buy. In either case the
revenue is one-third when we price at one-third.
Alternatively, we could have priced at two-thirds, and now with probability one-half, the signal of
the agent is low. And he doesn't buy. Whereas with probability one-half the signal is high and he
buys, and so our expected revenue is again one-third of a dollar.
But there's a difference between these two prices. What's the difference?
>>: [indiscernible].
>> Nicole Immorlica: Sorry.
>>: History.
>> Nicole Immorlica: The history. Exactly.
>>: Or the future.
>> Nicole Immorlica: Or the future, depending on whether you're looking forward or back. So
with the price of one-third of a dollar, the second buyer knows nothing about the first buyer's
signal, because no matter what the signal was, the guy bought. Whereas with the price of
two-thirds of a dollar we still get the same revenue and the second buyer gets to know the first
buyer's signal, whether it was high or low, from which she can make inferences about the value
of the product.
So the mechanism's price determines the informativeness of the signal. Now let's think about the
second buyer. Let's suppose that the first buyer bought, so she inferred that the first buyer had a
high signal.
Now we can consider pricing the product at half a dollar, and if she gets a high signal and she
knows the first buyer also had a high signal, then she's going to think that it's likely to be worth
more than half a dollar. And so she's going to buy it.
On the other hand, if she got a low signal, and she infers that the first buyer got a high signal --
you can do the calculation, I didn't do it here -- she's still going to think that in expectation it's
worth half a dollar. So she'll buy it. And the revenue is half a dollar.
If we price at two-thirds of a dollar, with probability one-half, she's going to buy it. And with
probability one-half she gets the low signal and she doesn't buy it. So the revenue is one-third of
a dollar. So what happened here was that the price that was informative, being the higher price,
had a lower revenue than the price that was not. So to maximize revenue we should have
charged the lower price, but then everybody would have bought and we would have gotten no
information gain. Whereas by charging the higher price we lose revenue and we gain information.
So here's the key difference: at the price of one-half the revenue is one-half, which is higher than
the revenue at the price of two-thirds, which is just one-third. But with a price of one-half
everybody's buying, so it doesn't differentiate the signals of the agents based on the history.
Whereas with a price of two-thirds, the third buyer is going to be able to infer that the first
two signals were high and buy.
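The arithmetic in this two-buyer example can be checked directly. A sketch using the buy probabilities as stated in the talk, with high and low signals each seen with probability one-half:

```python
from fractions import Fraction

F = Fraction

# First buyer: expected value of the product is 1/2.
rev1_at_third = F(1, 3) * 1             # price 1/3: both signal types buy
rev1_at_two_thirds = F(2, 3) * F(1, 2)  # price 2/3: only the high type buys
assert rev1_at_third == rev1_at_two_thirds == F(1, 3)
# Same revenue either way, but only the 2/3 price reveals the first signal.

# Second buyer, after inferring the first buyer's signal was high:
rev2_exploit = F(1, 2) * 1            # price 1/2: both types buy, no information
rev2_explore = F(2, 3) * F(1, 2)      # price 2/3: only the high type buys
assert rev2_exploit == F(1, 2)
assert rev2_explore == F(1, 3)
assert rev2_exploit > rev2_explore    # here revenue and information pull apart
```

For the first buyer the informative price costs nothing; for the second buyer it costs one-sixth of a dollar of expected revenue, which is the trade-off the rest of the talk is about.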
So this shows that there's a trade-off between gaining information and gaining revenue. Okay.
So in general we can think about the following picture. We have the sliding scale about the price
we can choose to charge as the mechanism designer. And for low prices, any signal type is
going to buy. So there's absolutely no information gain.
For a very high price, no signal type is going to buy, so again there's no information gain. But
there's this nice intermediate region where the prices are informative.
>>: Stupid question, but is the mechanism designer allowed to determine the price according to
the history? Can he look at it.
>> Nicole Immorlica: Yeah, I am using it here. So every day I post a new price and everybody
knows the history. The history is public to the designer and all the agents.
>>: The information, what's the goal? Is the goal to inform everybody that the product is
valuable, or to gain information about it?
>> Nicole Immorlica: You want to learn the true value of the product. So essentially we're setting
up a Martingale here.
>>: Wouldn't you rather convince people that it was valuable so you would price your product
high and then hope that people will buy it or keep it there.
>> Nicole Immorlica: Yeah so we're showing you can't get better than the expected value
through such a strategy in revenue.
>>: Restricted to one corner.
>>: With the mindset thing it's like the history. You have the history.
>> Nicole Immorlica: That's going to be exactly the -- so the Myerson price in this example was
the one-half. So you don't gain information after a while if you keep using the Myerson price.
>>: To make the most money.
>> Nicole Immorlica: No, because the product might really be worth two-thirds, and you can
eventually learn that if you let the prices differentiate the signals.
>>: It's kind of [indiscernible] in reverse. So one thing [indiscernible] showed is something
like that: [indiscernible] people will not buy a Jaguar if you sell it too cheap.
>> Nicole Immorlica: Will not what.
>>: For some things, having a high price is --
>> Nicole Immorlica: Yeah, people have an estimate of the quality based on the price that you
offer it at.
>>: Or they want to say to others that they can afford this high price.
>>: For us, though, it's a major signal [indiscernible].
>> Nicole Immorlica: Yeah, higher price -- like if you go to the higher priced restaurant you look
good, because people know you're rich.
>>: But if you just price higher without making the product exclusive, it's not a good signal. I
mean, if tomorrow I start selling a car at $70,000, up from 17, nobody will buy it.
>>: Bring out the new brand.
>>: To actually spend that kind of money.
>>: Sorry for the interruption.
>> Nicole Immorlica: No, it's nice small audience. We can have informal discussion.
>>: Deadline.
>> Nicole Immorlica: I have a deadline -- you have a deadline [laughter].
>> Nicole Immorlica: It's okay. The talk is only 45 minutes long. Okay. So there's a trade-off
between gaining information and gaining revenue. The Myerson price, even if you change it
every day, is not going to necessarily do the right thing in terms of information.
So back to this picture. We can also plot the revenue curve as a function of the prices. So this is
showing you that the price that maximizes the revenue in a given round might not be informative.
In general, the yellow curve and the blue curve are different.
And by information curve, I don't have a precise mathematical meaning here. It's just some
intuitive picture. Okay. So in the two-signal case that we were discussing earlier, I can
actually -- why doesn't that show up? Okay. I can actually look at the two special prices on this
picture: this price here, which maximizes the yellow curve, and that price there, which
maximizes the blue curve. And these are the explore and exploit prices, and we have a
characterization theorem that says any optimal mechanism is going to charge informative prices
infinitely often.
So it's going to give agents enough samples on which to learn. Informative prices are the ones
that actually differentiate the signals. And then it's going to charge the revenue maximizing
prices, the sort of instantaneous Myerson prices, on a sufficiently dense set of agents in order to
exploit the revenue.
So any optimal mechanism has to look like this. And the question then becomes in what way do
you want to trade off the information gain versus the revenue gain.
>>: You have to separate these.
>> Nicole Immorlica: In general, yes, because as I showed in that picture, the revenue-maximizing
price might not be an informative price at all. And, of course, you need to charge revenue-gaining
prices. So
intuitively, you need to do this in order to learn. And we showed that any optimal mechanism
learns. And you need to do this in order to exploit the agents and get that revenue that you could
get. That's the picture I was looking for.
So in the two-signal case, we can talk about the exploit price and the explore price.
And we can consider a mechanism in which the explore price is going to be the expected value
given the history. So then precisely only high signals buy in the two-signal case with this price
tag. And the exploit price is the thing that gets you the most revenue. And the explore/exploit
mechanism will explore on an infinite but vanishingly small fraction of the rounds and exploit on
all other rounds.
And what we show is that the mechanism has a long-term average revenue equal to the value.
Not just the expectation of the value -- it's actually equal to the value, which is a slightly stronger
statement. And the reason that this is true is that the beliefs are going to converge to the correct
value, by Martingale properties and the fact that we're exploring infinitely often. And we exploit
most of the time, so once we finally get there we're getting the value.
Furthermore, this is optimal in terms of maximizing revenue. And that was by the theorem that
said we can't get more than the expected value. This mechanism is getting the value, which is at
least expected value.
And finally we can find the optimal explore/exploit schedule which minimizes the regret over all
mechanisms. So if you might stop earlier in the sequence, you want to not have too much regret
with respect to the optimal mechanism up until that time, in a multi-armed bandit sort of setting.
And in order to minimize the regret over all mechanisms you just need to do the optimal trade-off
between explore and exploit. That turns out to be T to the two-thirds.
>>: What's T to the two-thirds?
>> Nicole Immorlica: If you have T rounds, you should explore -- I mean, if you want to
have T people enter the store, you should explore on roughly T to the two-thirds of the rounds.
And that turns out to give the optimal regret in this setting. So if you don't know any multi-armed
bandit stuff that probably doesn't make sense, but we have a proof of this. I can show you later if
you're interested.
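A schedule with that shape is easy to write down. This is a sketch of a standard bandit-style explore/exploit schedule, not necessarily the paper's exact construction: it places the n-th exploration near round n^(3/2), so every prefix of length t contains about t^(2/3) explorations:

```python
import math

def explore_schedule(T):
    """Rounds (1-indexed) on which to charge the informative explore price.

    The n-th exploration is placed near round n**1.5, so any prefix of
    length t contains roughly t**(2/3) explore rounds; all remaining
    rounds charge the instantaneous revenue-maximizing (exploit) price.
    """
    rounds = []
    k = 1
    while k <= T:
        rounds.append(k)
        n = len(rounds)
        # next exploration: keep the count of explorations up to round t
        # on the t**(2/3) curve
        k = max(k + 1, math.ceil((n + 1) ** 1.5))
    return rounds

sched = explore_schedule(1000)
# about 1000 ** (2/3) == 100 explore rounds out of 1000
```

With a horizon of 1000 buyers this explores on roughly 100 rounds, early ones dense and later ones increasingly sparse, which matches the "explore infinitely often but a vanishing fraction of the time" requirement from the characterization.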
So there are a lot of issues that you might have with this model. You guys mentioned some,
which was that you might have restarts that we don't allow. And other issues are that the prices
that we're assuming are public. And, in fact, that seems a little suspect. In a real
setting you could perhaps have a sense of what the sales of this product are like and how
successful this product is, but knowing the entire pricing trajectory of the product might be a
more suspicious thing.
We think we can accommodate private prices and also get rid of the common value assumption,
so long as you have correlated values they don't have to be completely common. So that's the
varying types.
And let me just skip that. We can also look at the issue of partial observation. So you don't
observe the entire history of the product, but you only know whether your friends bought or
not.
And so then you need to start thinking about the graph structure in the information cascade
setting. So that's all I have. And take any questions.
[applause].
>> Nikhil Devanur Rangarajan: Any more questions.
>>: I'd like to ask you a question. Maybe even in the original balls-in-the-bag setting. If, for
example, one percent of people haven't understood the rules and they just report the color that
they see, and 99 percent are rational, and everybody knows this.
>> Nicole Immorlica: And everybody knows which type you are.
>>: Well, yeah, I mean -- the one percent. Yeah. So, yeah, everyone knows. There everyone
knows this figure. Is it the same or is it.
>>: We're trying to see how robust is this.
>> Nicole Immorlica: It's not robust at all. This would break it. It's obvious that it would break it if
I know who is --
>>: Interesting there might be a transition, a critical point.
>>: You don't know the individual people, you just know what fraction is rational.
>>: You don't know who is who.
>> Nicole Immorlica: Yeah, if you don't know who is who, then --
>>: If you know which fraction they are, if you suddenly see two-thirds of a percent yellow, you
realize that's the reason for that.
>>: Except if all the rational people had been saying blue. So if everyone -- if everyone's like
this, it's not precisely clear.
>>: Previous people would be using these considerations as well.
>>: Yeah, but at some point you cross the phase and then --
>> Nicole Immorlica: One thing is you can always infer who is irrational. I mean, the irrational
people are the people that didn't understand the rules are going to be doing something that
doesn't make sense based on the history, right? Saying yellow when it should be blue. So --
>>: But, I mean, the point is [indiscernible], the one percent just report the ball that they draw,
simple as that, and 99 percent are fully rational, and this is what's happening.
>>: So you see if you're saying people can infer, then you'll stop -- you won't get this stream of --
>>: This was a reasonable suggestion. It's the same until some point and then suddenly people
can see.
>>: It's a [indiscernible].
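The baseline model being debated here -- everyone fully rational, no irrational fraction -- can be sketched in a short simulation. This is an illustrative reconstruction, not code from the talk: for the two-urns-of-three-balls setup, Bayesian updating reduces to a counting rule (follow the majority of public guesses plus your own signal, breaking ties with your own signal), and a cascade starts once the public guesses lead by two.

```python
import random

def run_cascade(n_agents=100, p_correct=2/3, seed=None):
    """Simulate the urn guessing game with fully rational agents.

    The urn is 'Y' or 'B' with equal probability; each private signal
    matches the true urn with probability p_correct (2/3 here). Each
    agent guesses with the majority of (public guesses so far + own
    signal), breaking ties with the own signal. Once the public
    guesses lead by 2, the cascade is absorbing: later guesses ignore
    the private signal and carry no information.
    """
    rng = random.Random(seed)
    truth = rng.choice(['Y', 'B'])
    other = 'B' if truth == 'Y' else 'Y'
    lead = 0          # (# of 'Y' guesses) - (# of 'B' guesses) so far
    guesses = []
    for _ in range(n_agents):
        signal = truth if rng.random() < p_correct else other
        total = lead + (1 if signal == 'Y' else -1)
        if total > 0:
            guess = 'Y'
        elif total < 0:
            guess = 'B'
        else:
            guess = signal    # indifferent: follow your own signal
        guesses.append(guess)
        lead += 1 if guess == 'Y' else -1
    return truth, guesses

# How often does the population cascade onto the wrong color?
# With p = 2/3 and this tie-breaking rule, the wrong-cascade
# probability works out to (1/3)^2 / ((2/3)^2 + (1/3)^2) = 1/5.
trials = 2000
wrong = sum(1 for t in range(trials)
            if run_cascade(seed=t)[1][-1] != run_cascade(seed=t)[0])
print(f"fraction of runs ending on the wrong color: {wrong / trials:.3f}")
```

This makes the robustness question concrete: in this all-rational version the cascade locks in quickly, which is exactly what a known one-percent naive fraction would disrupt.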
>>: So, sorry. I guess maybe I missed part. Did you talk about the transferable-utility setting --
what's the optimal thing to pay?
>> Nicole Immorlica: I don't actually know. So all of these are infinite-population models, but
we're looking at this question in a finite-population model where you want to pay for some fraction
that gets it correct. So at the end of time -- there's some end time -- everybody
gets a rebate based on the fraction of the population that guesses correctly. This is one variant
I thought of.
>>: Even if it's potentially infinite, just in your version, the fourth person will want to pay the third
person something so that they reveal --
>> Nicole Immorlica: You'll have to ask that question. I don't know what goes on in that setting
either.
>>: And there hasn't been -- you don't know of any work on optimizing that.
>> Nicole Immorlica: I don't know about the transferable-utility setting. The question about the
fraction of the population, I've seen some similar things. I don't know a lot about information
cascades. I just basically read this chapter in that book, and then Ilan did a thesis on it. So that's
why I was talking to him about this problem.
>>: Do you assume the prices, without the decisions, bear no information for the future, the future
buyers? So somehow if you get used to the idea that this object should cost this much, even if
you believe that it should be worth twice as much, it would be hard to sell it. So even if you
believe --
>> Nicole Immorlica: You mean like I won't buy it even if I think it's worth the price, because I'm
upset that you charged me more than my friend?
>>: Because we expect that's the situation. You'd have to repackage the Lexus and charge 50K,
even if you believe it's worth more.
>> Nicole Immorlica: I don't know how to incorporate this in a rational utility model.
>>: You may just --
>> Nicole Immorlica: For one thing, you only walk into the store once here, right? So if you pass
it up because --
>>: But if you knew that, okay, this is the exploration phase, everything is done, then you'd come
back tomorrow.
>> Nicole Immorlica: Yeah, you only get one chance here. If you start to allow people to try and
time their arrivals or be strategic about that, then of course you're going to get people wanting to
come later when the value is more certain.
>>: So the conclusion that the seller wants the buyers to learn the true value -- that's a
consequence of this one-product assumption. So is there a version where it can restart with a new
product, which is --
>>: No, it's a consequence of the common value. It's the same pie for everybody.
>>: You don't need to think about restarting it; you can rephrase it: instead of stopping, it's
determined by the --
>>: Right. But you should then incorporate some cost of scrapping the product. Right.
>>: I think that's really one of the real differences between this model and what we're trying to
capture here. Products get started and get scrapped all the time when they're not successful. And
maybe it's because they're bad, or maybe it's because the first few coin flips didn't come out right.
>> Nicole Immorlica: Yeah, I mean, I just had a thought about the earlier conversation we were
having, which is that if you allow me to transfer utility and you have an infinite population, then I
can just pay like -- I want an infinite subsequence to reveal their values. But that infinite
subsequence can be very sparse. So certainly I'll have enough money from other people to pay off --
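The sparse-subsequence idea can be illustrated with a small simulation. This is a hypothetical sketch, not a model from the talk: assume every k-th agent is paid enough to report their raw signal truthfully, while everyone else is free to herd. A majority vote over just the revealed signals identifies the urn with high probability, even though the paid agents are an arbitrarily small 1/k fraction of the population.

```python
import random

def majority_of_revealers(n_agents=20000, k=100, p_correct=2/3, seed=0):
    """Hypothetical sketch: every k-th agent is paid to reveal their raw
    private signal; all other agents' guesses are ignored here. Each
    revealed signal matches the true urn independently with probability
    p_correct, so the majority vote over the revealed signals is correct
    with probability approaching one as their number grows."""
    rng = random.Random(seed)
    truth = rng.choice(['Y', 'B'])
    other = 'B' if truth == 'Y' else 'Y'
    revealed = [truth if rng.random() < p_correct else other
                for i in range(n_agents) if i % k == 0]
    vote = 'Y' if revealed.count('Y') > revealed.count('B') else 'B'
    return truth, vote, len(revealed)

# With 20000 agents and k = 100, each run sees 200 revealed signals;
# a Binomial(200, 2/3) majority is almost never wrong.
runs = 200
correct = 0
for s in range(runs):
    truth, vote, n_revealed = majority_of_revealers(seed=s)
    correct += (truth == vote)
print(f"majority vote matched the urn in {correct}/{runs} runs "
      f"(each using {n_revealed} revealed signals)")
```

The sparsity knob is k: paying only every 101st person, as suggested next, still yields an infinite revealed subsequence and hence eventual learning.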
>>: So the next thing you could do is pay every 101st person, or --
>> Nicole Immorlica: Or do something approaching one if you like.
>>: The question is more one of rates, as in [indiscernible] -- how fast you approach them. If
you're the 100th person, how you'll want to divide your payment among the previous ones
depends on how they divided their premiums.
>> Nikhil Devanur Rangarajan: All right. Let's thank her again.
[applause]