>> David Wilson: It is a pleasure to have Wlodzimierz Bryc presenting the last talk of the day. Wlodzimierz is a professor at the University of Cincinnati, now visiting UW as part of his sabbatical, and he's going to be talking about martingales for the rest of the time.
>> Wlodzimierz Bryc: Thank you very much for the invitation, and especially to Chris for making this visit possible. It is a pleasure to be here, with beautiful weather I didn't expect.
So let me point out first the invisible things on these slides. The lower part of the slide is difficult to see. My talk is based on several works, several papers with [inaudible]; some are in progress, some have already appeared, some are on the arXiv.
And I apologize for the overly long title, so let's just not try to read it. Let me begin with a picture that comes to mind of a person who visits [inaudible] zone and is not used to living in it.
So what this is is a rendering of a simulation of a Markov process. I used 2,000 points for the simulation, and here you see 180 or 200 points or so.
It is clear from those pictures that something is happening around T equal 1, maybe; I don't know whether that's clear or not. The first part of the talk tries to convince you that T equal 1 is where things happen. The second part tries to see how we could detect these kinds of things if this were not a simulation. Since it is a simulation, I know, of course, when things happen; but if you were to observe a thing like this, what kind of information could you get from observations?
So it is fairly obvious, when you look at the beginning of the trajectory of this process, that the process is actually following straight lines. A line, and then -- what do I use? Maybe I will be old-fashioned and use the mechanical pointer.
So the process follows a line, then jumps up to the next line and follows it, jumps up -- it's sort of obvious that's what it does at the beginning. It's also obvious what it does at the end: it follows a line, jumps down, follows a line, jumps down, follows a line, jumps down. It's easy to see on this part, but not so easy on that part, because this is just 200 points; of course we're missing lots of those jumps.
So let's add more lines. This is much less convincing; you have to take it on authority that these are the right lines. Some of the points are on those lines; some of the lines go, I think, without any points. Let's add more lines on the right.
Let's add a few more lines. What I'm trying to say is that, since I know how the process was simulated, T equal 1 is a special point: that's where the lines change, sort of.
So, to be slightly more convincing, here are all 2,000 points. Fewer lines are missed, but I think some lines, like this line here, still seem not to touch anything. So you cannot really get this kind of picture by observing the process every millisecond; you can't really see that T equals 1 is special.
So let's look at this picture and ask: can we see things from some simple statistics of the process? Can we see that T equal 1 is special from, say, the mean? Well, of course we cannot; this process has mean 0. How about the covariance? I made this process up, so I know it has covariance minimum of S and T. So again, T equal 1 doesn't show up much there.
Well, there is something in which T equal 1 is a little bit special: this process is actually time invertible. That is, X_T has the same distribution as T times X at 1 over T. That's the same scaling as the Wiener process has, the same time inversion.
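In symbols, the statistics just described are

$$\mathbb{E}[X_t] = 0, \qquad \operatorname{Cov}(X_s, X_t) = \min(s,t), \qquad (X_t)_{t>0} \stackrel{d}{=} (t\,X_{1/t})_{t>0},$$

and the only mild hint of T equal 1 is that it is the fixed point of the time inversion t -> 1/t.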
So the moments are not helpful. I had a friend in physics whose advice I took to heart. When he was studying physics he said: you're studying those distributions, like the normal and whatever -- this is stupid, he said. The only thing we can measure in physics is averages, so you should study averages, nothing else.
Okay. So let's take this to heart and study conditional averages; let's be at least a little ambitious. This process is a martingale as well: the expected value of X_T given the past sigma-field is X_S, and it doesn't matter whether T is bigger than 1 or smaller than 1, or whether S is bigger or smaller than 1. We cannot see anything out of this either.
If you want to study higher conditional moments -- for example, the conditional variance: the conditional variance of this process given the past is a polynomial in X_S. If you were to study higher conditional moments of polynomials of X_T, they would be polynomials too; in fact, you can produce polynomial martingales for this process.
So nothing special here. Well, let's continue the game with conditioning. Let's look at two-sided filtrations, where we condition with respect to past and future.
So we have F_{S,T}, the sigma-field generated by the past of the process up to time S and the future after time T; let's condition on it and see what happens.
So if we condition X_T on past and future, then what we get is essentially the linear interpolation between two points on the graph, X_S and X_U. This is what people sometimes call a harness condition. So this process is a harness, and T equal 1 doesn't show up much, or at all, here.
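Written out, the harness condition from the slide is: for s < t < u,

$$\mathbb{E}\big[X_t \,\big|\, \mathcal{F}_{s,u}\big] = \frac{u-t}{u-s}\,X_s + \frac{t-s}{u-s}\,X_u,$$

the linear interpolation between the points (s, X_s) and (u, X_u).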
Well, let's look into the conditional variance, the next higher moment, sort of. One can compute the conditional variance for this process, and it comes out as a somewhat strange expression. There is a constant that I will often be suppressing in formulas, and then there is a linear function of the increments of the process, X_U minus X_S over U minus S. The other term looks mysterious, but these are actually increments of the time inverse of the process: you can rewrite this as increments of that process. So time invertibility is hidden here.
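The "mysterious" term can be checked directly to be an increment of the time-inverted process: writing Y_t = t X_{1/t},

$$\frac{Y_{1/s} - Y_{1/u}}{1/s - 1/u} = \frac{X_s/s - X_u/u}{(u-s)/(su)} = \frac{u X_s - s X_u}{u - s},$$

so the conditional variance is built from the two kinds of increments, (X_u - X_s)/(u - s) and (u X_s - s X_u)/(u - s).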
So we still don't see that T equal 1 is anything special. But among people who play with conditional moments of processes -- there aren't many, but there was [inaudible] and a couple of others -- we have sort of developed the heuristic that if you know the first two conditional moments of a process, including the two-sided conditional moments, then you actually know the process.
So even though none of those things show that T equal 1 is special, T equal 1 should somehow show up in the formulas here on the blackboard. That's basically what the next part of the talk is trying to do: show you how T equal 1 is hidden in those formulas.
So what happens is that you can sort of decipher that T equal 1 is special if you look into, should I say, bridges of the process: the process conditioned on X_A and X_B at fixed moments of time. Then look at the conditional moments there.
So, conditionally on X_A and X_B, the mean of the process stops being 0, of course; it is a linear function of T, with alpha and beta random, depending on X_A and X_B. The conditional covariance of the process given X_A and X_B is a linear function of S and T, and it vanishes at the end points.
>>: Do you need to write X1 there? Or XA?
>> Wlodzimierz Bryc: No, I meant to write X_A. Thank you. And later on I will mean to write X_1, yes. I didn't even notice that. Yeah, thank you.
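For the Wiener case, for instance, the bridge moments take the familiar explicit form (a sketch of the structure on the slide; the general case shares the linear mean and the vanishing at the endpoints):

$$\mathbb{E}[X_t \mid X_a, X_b] = \frac{b-t}{b-a}\,X_a + \frac{t-a}{b-a}\,X_b, \qquad \operatorname{Cov}(X_s, X_t \mid X_a, X_b) = \frac{(s-a)(b-t)}{b-a}, \quad a \le s \le t \le b.$$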
So, time invertibility: this first kind of structure is lost. Time invertibility is usually lost unless you happen to choose the conditioning times and values correctly.
On the other hand, the two-sided conditional moments are not sensitive at all to X_A and X_B, because X_A and X_B are already in the sigma-field. So the conditional mean is the same linear function of X_S and X_U, and the conditional variance is still the same quadratic polynomial that we had previously.
So all of this looks sort of the same. What I would like to say is that if we convert this situation to standard form, then the two-sided conditional variances of bridges based on X_A and X_1 -- the blue part -- will be similar to those of bridges based on X_1 and X_B, and different from those of generic bridges based on X_A and X_B.
Mostly I will be doing X_0 and X_1; X_0 is 0, so I don't have to worry about that conditioning. The next thing I should explain is what I mean by standard form.
Since the process is a harness, we have sort of developed the terminology of calling it a quadratic harness; and if you want, we can call it a quadratic harness in standard form if the first two moments are as on the previous slide -- that is, the mean is 0 and the covariance is that of the Wiener process -- and the process is a harness: the expected value of Z_T given past and future is a linear function of Z_S and Z_U, whose coefficients you compute from the covariance.
One other thing I perhaps want to mention: if the process is a harness and the covariance is as here, then the process is also a martingale. You can let U pass to infinity and show that its contribution disappears. So a harness that happens to have the covariance in point 1 is also a martingale, and also a reverse martingale.
So number two is a soft assumption: linearity of regression, with coefficients computed from the covariance. Number three looks very scary, but let me say I view it as a soft assumption too. The soft assumption says that the variance of Z_T given past and future is a quadratic function of Z_S and Z_U.
One can prove that, under some technical assumptions, this quadratic function cannot be arbitrary; it has to have an explicit form, which I don't want to write out, as a quadratic expression in increments of the process: a linear term and a quadratic term in Z_U minus Z_S over U minus S, a linear coefficient and a quadratic term for the increments of the time inverse of the process, and then a cross term, the cross product of the two sorts of increments, with respect to past and future.
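Up to notation, and with the normalizing factor the speaker keeps suppressing, the asserted form is the following (the pairing of parameters with terms is my reading of the slide): writing x = (Z_u - Z_s)/(u - s) and y = (u Z_s - s Z_u)/(u - s) for s < t < u,

$$\operatorname{Var}\big[Z_t \,\big|\, \mathcal{F}_{s,u}\big] = F(s,t,u)\,\big(1 + \theta x + \eta y + \tau x^2 + \sigma y^2 + \rho\, x y\big),$$

with the five parameters eta, theta, sigma, tau, rho, and a factor F(s,t,u) proportional to (u-t)(t-s).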
So basically --
>>: [inaudible] for the Wiener process?
>> Wlodzimierz Bryc: The right-hand side is the value that you get for the Wiener process: you get zeros everywhere, and the conditional variance is constant. The right-hand side for the Poisson process: you get this coefficient and nothing else. The right-hand side for the process I showed earlier is theta equal 1: theta equal 1, and the rest is 0. Whether such processes exist -- that's a separate question, yes.
So, okay, basically what happens is: if I look at the bridge of the process I had previously, which was a quadratic harness in standard form, then once I change to the bridge it loses some of the properties. But what it loses is the covariance: the covariance is wrong, and the covariance can easily be corrected. Here is an example of how we correct for the covariance. The simplest example to think of is the Brownian bridge. The Brownian bridge is a Gaussian process with a covariance of the product form that all the bridges of these processes have, S times 1 minus T. And other than that, of course, it is a harness.
The conditional expectation of X_T given X_S, X_U is a linear function of X_S and X_U with the right coefficients -- I forgot to say that the harness condition is compatible with more covariances than just minimum of S and T.
And the conditional variance of X_T given past and future -- I wrote X_S, X_U because of the Markov property -- is constant, so in particular quadratic. So this is sort of a quadratic harness, except not in standard form, because the covariance is sort of messed up.
But then everybody knows how to correct for this messing up -- everybody, at least, who is as old as me and had to study from the 1968 edition of Billingsley's Convergence of Probability Measures. On page 68 there is an exercise that says: take a Brownian bridge and do this transformation. The transformation is not that complicated; it is really a Möbius transformation of time, multiplied by the denominator of that Möbius transformation. This is exactly what you need to correct the covariance from the form S times 1 minus T to the covariance minimum of S and T.
Of course, this means that you convert the Brownian bridge into a Wiener process. And you can go backwards: you can say, okay, now I can compute this one in terms of that one. In other words -- I think I called one of them W -- the Brownian bridge Y can be written as 1 minus T times the Wiener process applied to the analogous transformation: a Möbius transformation of time, multiplied by a linear function of T.
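The covariance bookkeeping behind this exercise can be verified mechanically; here is a minimal sketch using sympy (the variable names are mine):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)

# Brownian bridge covariance on [0, 1]: Cov(B_s, B_t) = s*(1 - t) for s <= t.
# The exercise's transformation: W_t = (1 + t) * B(t/(1 + t)).
u, v = s / (1 + s), t / (1 + t)          # Mobius-transformed times; u <= v iff s <= t
cov_W = (1 + s) * (1 + t) * u * (1 - v)  # (1+s)(1+t) * Cov(B_u, B_v)
print(sp.simplify(cov_W))                # -> s, i.e. min(s, t): the Wiener covariance

# Going backwards, B_t = (1 - t) * W(t/(1 - t)); for s <= t < 1 the inner times
# are ordered the same way, so Cov(B_s, B_t) = (1-s)(1-t) * s/(1-s).
cov_B = (1 - s) * (1 - t) * (s / (1 - s))
print(sp.simplify(cov_B))                # -> s*(1 - t): the bridge covariance
```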
So let's look at the process we were talking about. Let's look at the blue part, that is, X_T conditional on X_0, which is 0, and on X_1. Then this process can be written, by transforming to the standard form and going backwards, as 1 minus T times a Poisson process: it's exactly the same transformation we saw for the Brownian bridge. So it is 1 minus T times N at T over 1 minus T, where N, of course, is a Poisson process with parameter lambda. Everything should be conditional on X_1 -- oh, I forgot to say this is centered, so you subtract the mean first. So N is a centered Poisson process with parameter lambda, but lambda is now a random variable and depends on X_1, in a not very complicated way: lambda happens to be 1 plus X_1. And furthermore, we actually know the distribution of lambda: it happens to be exponential.
And then if you look at the black part of the trajectory -- X_T for T bigger than 1, conditional on X_1, and X at infinity is 0, really; you should look at it this way -- that is another Poisson process. The transformation here is the time inverse of that transformation: it is T times X at 1 over T, applied to that same thing. And instead of the Poisson process we had previously, we have an independent Poisson process -- an independent Poisson process, but with the same parameter lambda. With the same capital Lambda.
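Here is a sketch of how one might reproduce the opening picture from this description. The shift Lambda = 1 + X_1 with Lambda exponential is taken from the talk, but the exact centering and time changes are my reconstruction and should be treated as assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

def poisson_counts(rate, times):
    """N(times) for a Poisson process of the given rate; `times` sorted ascending."""
    n = rng.poisson(rate * times[-1])                # number of jumps on [0, horizon]
    jumps = np.sort(rng.uniform(0.0, times[-1], n))  # jump times are uniform given n
    return np.searchsorted(jumps, times, side="right")

Lam = rng.exponential(1.0)   # shared random intensity; Lambda = 1 + X_1 ~ Exp(1)

# Left of t = 1 (blue part): X_t = (1 - t) * (N(t/(1-t)) - t/(1-t)),
# with N a Poisson process of rate Lambda.
t = np.linspace(0.005, 0.995, 1000)
s = t / (1.0 - t)
X_left = (1.0 - t) * (poisson_counts(Lam, s) - s)

# Right of t = 1 (black part): the time inverse t * X~(1/t) with an independent
# copy N' sharing the same Lambda; it works out to X_t = (t-1) * N'(1/(t-1)) - 1.
u = np.linspace(1.005, 5.0, 1000)
r = 1.0 / (u - 1.0)                                  # decreasing in u
X_right = (u - 1.0) * poisson_counts(Lam, r[::-1])[::-1] - 1.0

# Both halves follow families of straight lines and meet near X_1 = Lambda - 1,
# which is what makes t = 1 visible in the picture.
```

With this Lambda one can check that the unconditional moments come out as quoted earlier: mean 0 and covariance min(s, t).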
So capital Lambda is shared between the two processes. What this really shows is that I can take two Poisson processes with a random parameter Lambda and, if I choose the distribution correctly -- this was the title of the talk, putting together pairs of Poisson processes into a martingale. So what I did here is I sort of showed -- well, not convincingly -- how one can put two Poisson processes with the same random parameter capital Lambda, which happens to be exponential, together into a quadratic harness, and also into a martingale. The question of putting Lévy processes into martingales in this kind of fashion in general is, of course, something that I'm not even trying to answer.
The question of putting together Lévy processes into a quadratic harness is slightly easier to answer, because there are very few quadratic harnesses among Lévy processes.
Well, okay. So let's look back at the question in slightly more generality. Suppose we had a quadratic harness Z_T which has covariance minimum of S and T and mean 0, satisfies the harness condition, and also has this quadratic conditional variance with all of those one, two, three, four, five parameters floating there.
And what we want to ask is: when else does the situation we saw previously happen? That is, when can we find a moment of time capital T such that the left-hand side of the process -- this blue bridge, for T smaller than capital T, conditioned on the value of Z at capital T -- becomes, in standard form, a simple enough quadratic harness? What do I mean by simple enough: it has some parameter theta, some parameter tau, and only the increments of the process.
If Y were a Lévy process, then the conditional variance of Y_T given F_{S,U} would have to be a function of the increments of the process -- perhaps not a quadratic function, but for a quadratic harness it must be a quadratic function. I left a rho there because we can actually allow this cross term, and the theorem is still the same.
So then there are essentially only three situations in which this happens. One situation is sort of boring: all the parameters are 0. Well, if all the parameters are 0, then Z_T is actually a Wiener process -- one can show that -- and then, of course, you can choose any capital T and all of this happens. But that's a boring case.
The other possibility is that the linear part is there but the quadratic part isn't: there are no quadratic terms. And then -- I think we assume that eta times theta is positive; in my example eta and theta were one. This is the example we saw: the pair of Poisson processes that one can glue together. I'm not saying that's obvious at this point, but the next slide will make it more obvious.
What one can do is compute: if you know the parameters of the original process, you can actually say what the parameters of this bridge are. That is, you can give the conditional bridge in standard form -- this is complicated in the sense of being an ugly computation, so it's good to have good notation -- and one can compute those numbers in terms of the original numbers and of the conditioning value Z at capital T. Therefore we can sort of read off which processes correspond to the case we were doing in our example.
And the third possibility is that you have all the parameters -- eta, theta, sigma, tau and rho -- but they're not arbitrary. Rho is actually at the upper end of its admissible range, that is, two times the square root of sigma tau, and eta and theta are tied to sigma and tau by an equation. And then the moment of time that we are interested in is the square root of tau over sigma. And then we can recognize what the process itself is, which is the next slide.
So the next slide is the theorem of Wesołowski [phonetic] which says what to expect from the process: in each case there are only that many parameters. The third one, which I allowed for the process, turns out always to come out 0 -- and that's why I could allow it; it doesn't matter.
So this is the slide that is supposed to answer what kind of processes you can get as bridges of quadratic harnesses when there is this special moment of time capital T. The only ones you can get are the ones with theta and tau there; and then we can actually show that such a process has all moments, show that it has orthogonal martingale polynomials, and identify the actual process, which has to be a Markov process that we know.
So one of the possibilities is that Y is the Wiener process. If we happen to have no tau but a linear term, then Y has to be a Poisson process. If we happen to have the full quadratic form, then everything depends on whether it is, sort of, a positive definite form or a negative definite form.
So if theta squared is bigger than 4 tau, then this is a negative binomial process. I say "type" because what I mean is that I apply a transformation: I am requiring Y to have mean 0 and covariance minimum of S and T, so you have to subtract the mean of the process and scale by something related to lambda to get there.
And then, if this happens to be a complete square -- theta squared equal to 4 tau -- then Y_T is a gamma-type process: you have a gamma process. And the third possibility is an unfortunately lesser known process called the hyperbolic secant process.
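As I read the classification, the cases line up with the discriminant theta^2 - 4 tau of the quadratic part, in the spirit of the Meixner-type classifications:

$$\begin{aligned} \tau = 0,\ \theta = 0 &: \text{Wiener process},\\ \tau = 0,\ \theta \neq 0 &: \text{Poisson type},\\ \theta^2 > 4\tau,\ \tau \neq 0 &: \text{negative binomial type},\\ \theta^2 = 4\tau \neq 0 &: \text{gamma type},\\ \theta^2 < 4\tau &: \text{hyperbolic secant type}. \end{aligned}$$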
So basically what I'm trying to say with this slide is that if you are interested in putting together pairs of processes, like I did with the Poisson processes in that one example, then the only interesting options for such processes are really the Poisson process, the negative binomial process, the gamma process, and the hyperbolic secant process. They are the only processes for which it is worth trying to randomize parameters and put the pair together into a quadratic harness.
So one would like to put them into a quadratic harness, but the trouble is that, while we sort of know the distribution of the left bridge and we sort of know the distribution of the right-hand bridge, what we are missing is the distribution of Z_1. I said Z_1 happens to be exponential in my particular construction.
A somewhat simpler version of the question is to say: let's not be too ambitious; let's not try to put things into a quadratic harness, let's just try to put everything into a martingale. So, a pair of two independent Poisson processes with parameter capital Lambda: if capital Lambda has the exponential distribution, then we can actually put them together into a martingale. Are there any other distributions of capital Lambda that we could use?
Or we can take a pair of negative binomial processes -- Lévy processes; I'm just writing the distribution of Y_T, which is this ugly expression -- and there is a parameter pi which is random. Can we find a pi that will make this a martingale? A quadratic harness, perhaps, but at least a martingale.
For the gamma process the natural randomization: the gamma has two parameters, and the scale parameter is what you want to randomize. You can just as well say we take two independent gamma processes, with X_1 exponentially distributed, and ask: can we make a martingale out of those by changing time correctly and by choosing the distribution of the multiplier W correctly?
And then the hyperbolic secant process. Luckily, the slide is hidden and most of you cannot see it. That's good for you, because the density is ugly: it has a gamma function in it, it has E to the alpha X in it, and it has a parameter alpha in the interval minus pi to pi. That's the parameter we actually want to randomize.
Well, you sort of saw the answer for the Poisson process already, at least a little bit. So let me first say -- we have these five or so theorems that are all identical, and one of them says: suppose that the process is conditionally negative binomial; another supposes it is conditionally hyperbolic secant; suppose this, suppose that.
So suppose that we have Y_T -- and later Y_T tilde -- which happens to be a conditionally negative binomial process.
Well, the first thing that sort of surprised us is that no matter what pi is, the process is actually Markov. I mean, for the audience here that is probably obvious, but for us it was a surprise: this process is Markov. Then, if we assume that pi has enough moments -- that 1 over pi is square integrable -- then we can sort of work out one of those transformations we were talking about.
So, in other words, I can transform the process Y into a new process Z_T by this affine function -- a Möbius transformation of time, not that complicated, not that different from the ones we saw previously -- times 1 minus T, and then subtract the mean.
And that will make it into a quadratic harness for all the T's that are available here, that is, all T in the interval from 0 to 1. You can compute the parameters, which you don't care about. And the value of the constant C also depends on the moments -- the variance and the mean -- of the randomization.
So basically what this says is that every random variable pi will work to produce a harness. Will it make a martingale? Previously I said that harnesses -- not even quadratic harnesses, just harnesses -- on 0 to infinity are martingales, because you can pass with U, the right-hand side conditioning, to infinity.
Harnesses on 0 to 1 don't have to be martingales. Typically this Z_T over T, or something like it, is a reverse martingale; but it is very rarely a martingale.
So here is a theorem that says when that is a martingale -- the thing that we constructed, that I described a moment ago. Suppose I take a random variable pi which has a first moment, suppose that Y is this pi-conditioned negative binomial process, and suppose that Z_T is given by this kind of transformation -- except I am not even assuming the second moment of pi; I am saying there are some coefficients that make this a martingale. So suppose Z_T of this form is a martingale. Well, this is equivalent to pi having a very explicit density: a beta density, essentially P to some power times 1 minus P to some power times a normalization, where A and B depend on what you used in the transformation. And this works out backwards correctly, so that this random variable then has the right variance and all of those things match together.
So I don't know how well known this theorem is. Originally we proved it for all five processes, and we were fairly happy, because each time we proved it we sort of had a proof; they were similar, but not exactly the same. And then we eventually ended up thinking: wait a minute, we have a new property of the Poisson process, because we have a similar statement about the Poisson process as well. And then we thought: is it realistic to expect a new theorem about the Poisson process? So then -- we should have used Bing, but we used Google -- and we found a paper of Nick Goodkin [phonetic] from 2007 that was essentially about the martingale property of randomized Poisson processes, or something like this, or maybe it was about Pólya processes as randomized ones. But it was definitely exactly this kind of result, for the Poisson process.
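The classical fact behind the Poisson case can be checked numerically: for a mixed Poisson process N with rate Lambda ~ Gamma(shape a, rate b), the posterior mean of Lambda given F_s is (a + N_s)/(b + s), and consequently M_t = (N_t + a)/(t + b) is a martingale. A minimal Monte Carlo sketch of this check (not the talk's exact bridge construction; the parameter values are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 2.0, 3.0            # Gamma(shape=a, rate=b) mixing law -- illustrative values
s_time, t_time = 1.0, 4.0  # verify E[M_t | F_s] = M_s by conditioning on N_s

n_paths = 400_000
Lam = rng.gamma(a, 1.0 / b, n_paths)              # random intensity for each path
N_s = rng.poisson(Lam * s_time)                   # N at time s
N_t = N_s + rng.poisson(Lam * (t_time - s_time))  # independent increment given Lambda

M_t = (N_t + a) / (t_time + b)

# Compare E[M_t | N_s = k] with M_s = (k + a)/(s + b) for a few values of k.
for k in range(5):
    sel = N_s == k
    print(k, round(M_t[sel].mean(), 4), round((k + a) / (s_time + b), 4))
```

Conversely, the martingale property of (N_t + a)/(t + b) pins down the gamma mixing law, which is the conjugate-prior mechanism that the Diaconis-Ylvisaker paper mentioned next makes general.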
But then, once we found this, we sort of said: well, why don't we search more? And once we searched more, we found the paper of Diaconis-Ylvisaker on conjugate priors for exponential families. We would never have looked for that paper without a Google search, because it looks like pure statistics.
It appeared in a statistics journal, and it's fairly old, too. It has exactly the mathematics that leads to this -- in fact more general mathematics, because it covers conjugate priors for all exponential families. But surely it implies this result. Actually, it doesn't imply the Poisson case, for some technical reasons about the support; for the Poisson case one has to go back to the paper of Johnson from 1957, which does the same thing but only for the Poisson case.
So once we know this, what we do next is -- I'm still continuing this example of the negative binomial process, just so as not to repeat the Poisson every time. Suppose now that I take Y and Y -- I should have used a tilde like previously, but I forgot -- to be pi-conditionally independent negative binomial processes, and let's assume that pi has a mean and a variance.
And then we define this process: this is the previous definition of the process I used for the martingale; this is the time inverse of that process, with Y prime; and this is what I need in order to assign the value at T equal 1. So Z_1 is 1 over pi, standardized: 1 over pi minus its mean, normalized to variance 1.
The covariance of Z -- the variance of Z_T -- is supposed to be T. Then, by construction, this process is time invertible, because I used formulas here and there that are time inverses of each other. This is actually a time invertible process.
And then I can sort of prove a great theorem that says that the following conditions are equivalent. One: Z_T is a quadratic harness on 0 to infinity -- that's what I really want to get. Well, if it is a quadratic harness, then part of the definition is that it is a harness, so one implies two, QED.
Two to three: if it is a harness, then -- as we said, a harness with minimum-of-S-and-T covariance is a martingale -- so QED; although I should have said it's a martingale on 0 to 1. Three to four: if it's a martingale on 0 to 1, then the previous theorem, the last fact I showed you, Diaconis-Ylvisaker, says that pi has this very specific beta distribution. And the only thing that remains to be proven is that four implies one, which amazingly works on exactly the same identities that you use to get from three to four.
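Schematically, the chain of implications is

$$\text{(1) quadratic harness on } (0,\infty) \;\Rightarrow\; \text{(2) harness} \;\Rightarrow\; \text{(3) martingale on } (0,1) \;\Rightarrow\; \text{(4) } \pi \sim \text{Beta} \;\Rightarrow\; \text{(1)},$$

so all four conditions are equivalent, with the last implication the only hard step.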
So the same kind of results hold for pairs of Poisson, negative binomial, gamma, and hyperbolic secant processes. We actually worked them out; the first couple of them we worked out as exercises, because we knew the answer from a paper we wrote in 2006.
So the martingale condition on 0 to 1 determines the law of the randomization; the correct law then allows us to continue the process to T larger than or equal to 1; and the correct law gives us a quadratic harness. This happens for the Poisson, negative binomial, gamma, and hyperbolic secant processes. And all of the laws of the randomizations -- with a small disclaimer about the Poisson -- can be deduced from the Diaconis-Ylvisaker paper, so we didn't have to work as hard as we did.
In particular -- this is a summary of those results, saying when you can actually glue a pair of conditionally independent Poisson processes together into a martingale and also a quadratic harness: if and only if the randomization Lambda has the gamma density written here. And we knew this answer, as I said, from the paper we wrote a long time ago; it is not that difficult to guess that the gamma density goes well with the Poisson -- you can sort of do various formulas.
A pair of pi-conditioned negative binomial processes is a martingale on 0 to infinity, or a quadratic harness, if and only if pi has this beta density. This one I perhaps wouldn't have been able to guess, but [inaudible] was able to guess it, based on various analogies between those two formulas. And this kind of theorem -- that you get a quadratic harness with this density -- is actually a master's thesis; a student wrote up the complete proof. Unfortunately it is available only to people who read Polish, because it is in Polish.
So this is sort of a student-type assignment. Then: a pair of gamma processes multiplied by a random variable W makes a martingale on 0 to infinity if and only if 1 over W has a gamma density -- W has this density, which is the density of 1 over a gamma variable. And this result is also easy, because you can get gamma processes as limits of negative binomial processes and read off the answer -- read off what randomization you need to do.
So we did those three cases solely to learn how to recover the density from the martingale conditions, since we knew the answers; we had no idea for the hyperbolic secant process.
So this is the answer for the hyperbolic secant process. The hyperbolic secant process has this parameter alpha between minus pi and pi, and a pair of two conditionally independent hyperbolic secant processes is a martingale -- and also a quadratic harness -- if and only if the density of the randomization is given by this. I don't know how to describe it; I think we don't even know the normalization constant. I don't think we computed it.
So this law defines Z_1 -- this law defines the law of Z_1 for the composition of the quadratic harness from the Lévy bridges that I was talking about in the previous theorems.
I have two minutes for conclusions. So one thing I want to say: after going through all of those exercises, I would be able to recognize the existence of the special time capital T if you gave me a process that has linear two-sided regressions, quadratic conditional second moments, and the product covariance of this form. If you gave me all the parameters, then I would be able to tell you whether there exists a capital T like this or not, essentially by computing various algebraic things with the parameters.
And if that happened to be the case, then I would be able to tell you whether it is a pair of Poisson, negative binomial, gamma, or hyperbolic secant processes.
The example I was showing in the simulations was the Poisson case. For the Poisson and negative binomial processes, if you really were able to observe trajectories, you would see that they follow lines and jump up, and then, in the time reversal of the process, they follow lines and jump down. For the other two, gamma and hyperbolic secant, even if you saw the entire trajectories you wouldn't be able to tell that there is anything special about T equal 1: there are infinitely many jumps before T equal 1 and infinitely many jumps after 1, in all of those cases.
And I guess I squeezed the talk into 40 minutes. So here are the two authors, and here is the person who was hiding in the lower part of the slide. [applause]
>> David Wilson: Questions?
>>: I just want to make sure I really understand the motivation of this project, because it seems to me that the first question is: why would you want to construct the process from two processes?
>> Wlodzimierz Bryc: Well, I mean --
>>: I'm trying to guess, because there was this joke about the earthquake or something. Are you saying this is supposed to be a model for some phenomenon where there is a distinguished time?
>> Wlodzimierz Bryc: I would like it to be, but of course not for earthquakes, because they're spatial and so on. And there is this -- I forgot -- Omori's law, which fits those processes; but everybody uses the modified Omori law, where you put a power law instead of 1-over-T-type things. The intensity of jumps for this process on the interval from 0 to this capital T is of the form 1 over T. So, I don't know -- does that make it a good model for earthquakes? Probably not, because they're spatial as well.
So I would like it to be a model for something, but our original motivation was that you can write down this quadratic harness formula and say: let's consider a process with the following conditional moments. Do such processes exist? So we were largely in the business of saying: can we construct a Markov process that would be a quadratic harness with the following parameters? It's part of a sort of bigger strategy of finding enough examples of processes.
>>: So let me follow up with a much simpler question; you might already have answered it. There recently appeared a paper of mine [inaudible], and we just said in that paper that there are some simple examples -- deep processes and their bridges. So is there some kind of theory or characterization of all [inaudible] for interesting harnesses?
>> Wlodzimierz Bryc: Harnesses -- I think there are too many of them. Harnesses that also have second-order moments: that already excludes several Lévy processes; there are only five of them, so there is better hope for that. And for those there is almost a theory, meaning that modulo technical assumptions they all have orthogonal polynomials and [inaudible] polynomials; some of those are actually outside of those [inaudible] schemes, and they are not limits of those either.
So in a sense we are playing a game: here is the set of parameters that you can put into this quadratic form. We know what happens here; we know what happens there. Now here is a boundary where we don't know what happens. And there are still unknown regions: we don't know what happens in some regions between this boundary and the other boundary, which you get from Askey-Wilson.
>> David Wilson: Any more questions? Let's thank the speaker. [applause]