Additive Lévy Processes

Introduction

Let X^1, …, X^N denote N independent Lévy processes on R^d. We can construct an N-parameter stochastic process 𝔛, indexed by R^N_+, as follows:

    𝔛_t := X^1_{t_1} + · · · + X^N_{t_N}   for every t := (t_1, …, t_N) ∈ R^N_+.

We might also write

    𝔛 := X^1 ⊕ · · · ⊕ X^N.

And in this way, it follows that if 𝔛^1, …, 𝔛^N are independent additive Lévy processes, then 𝔛^1 ⊕ · · · ⊕ 𝔛^N is an additive Lévy process as well, the notation being more or less obvious.

It is not hard to convince yourself that if Ψ_1, …, Ψ_N denote the respective Lévy exponents of X^1, …, X^N, then

    E e^{iξ·𝔛_t} = e^{−t·Ψ(ξ)}   for all t ∈ R^N_+ and ξ ∈ R^d,

where

    Ψ(ξ) := (Ψ_1(ξ), …, Ψ_N(ξ)),

and that Ψ determines uniquely the finite-dimensional distributions of 𝔛.

Definition 1. The N-parameter stochastic process 𝔛 is called the additive Lévy process corresponding to X^1, …, X^N. The function Ψ is called the Lévy exponent of 𝔛. □

I mention a simple example of an additive Lévy process next. Exercise 2 shows us how to create other types of additive Lévy processes from independent Lévy processes.

Example 2. Let X^1, …, X^N denote N independent d-dimensional Brownian motions. Then the N-parameter Gaussian process 𝔛 is called additive Brownian motion. More generally, if X^1, …, X^N are independent isotropic stable processes with the same index α ∈ (0, 2], then 𝔛 is an additive stable process with index α. Note that Ψ(ξ) ∝ ‖ξ‖^α (1, …, 1). □

Let us define

    (𝒫_t f)(x) := E f(x + 𝔛_t)   and   (ℛ_1 f)(x) := ∫_{R^N_+} e^{−∑_{j=1}^N t_j} (𝒫_t f)(x) dt.

[We could just as easily define ℛ_λ for λ > 0, or even λ ∈ R^N_+, but there is no pressing need for doing this here.] You should check the following; it states that there are natural, and easy-to-understand, analogues of semigroups and resolvents in the present N-parameter setting.

Lemma 3. If P^1, …, P^N denote the respective semigroups of X^1, …, X^N, then 𝒫_t = P^{π(1)}_{t_{π(1)}} · · · P^{π(N)}_{t_{π(N)}} for every permutation (π(1), …, π(N)) of (1, …, N). And if R^1_1, …, R^N_1 respectively denote the 1-resolvents of X^1, …, X^N, then ℛ_1 = R^{π(1)}_1 · · · R^{π(N)}_1.

And, not surprisingly, we also have potential measures:

Definition 4. The 1-potential measure 𝒰_1 of 𝔛 is defined as

    𝒰_1(A) := E ∫_{R^N_+} e^{−∑_{j=1}^N t_j} 1_A(𝔛_t) dt   for all A ∈ ℬ(R^d). □

Lemma 5. If U^1_1, …, U^N_1 denote the respective 1-potential measures of X^1, …, X^N, then 𝒰_1 = U^1_1 ∗ · · · ∗ U^N_1.

An addition theorem

Theorem 6 (Khoshnevisan and Xiao, 2009; Khoshnevisan et al., 2003; Yang, 2007). Choose and fix a Borel set G ⊆ R^d. Then, E|𝔛(R^N_+) ⊖ G| > 0 if and only if there exists a Borel probability measure ρ on G such that

    ∫_{R^d} ∏_{j=1}^N Re [ 1/(1 + Ψ_j(ξ)) ] |ρ̂(ξ)|² dξ < ∞.   (1)

[Here A ⊖ B := {a − b : a ∈ A, b ∈ B}, and |· · ·| denotes Lebesgue measure.]

I will prove the sufficiency of (1); that is the easier half of Theorem 6. You can find the details of the [much] more difficult half in Khoshnevisan and Xiao (2009).

Define, for all probability densities f : R^d → R and x ∈ R^d,

    (Jf)(x) := ∫_{R^N_+} e^{−∑_{j=1}^N t_j} f(x + 𝔛_t) dt.

Then a careful computation, using Theorem 5 (page 86), reveals the following multiparameter analogue of Lemma 6 (page 88):

Lemma 7. For all measurable probability densities f : R^d → R_+,

    E ∫_{R^d} (Jf)(x) dx = 1,   and
    E ∫_{R^d} |(Jf)(x)|² dx = (2π)^{−d} ∫_{R^d} ∏_{j=1}^N Re [ 1/(1 + Ψ_j(ξ)) ] |f̂(ξ)|² dξ.
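The second identity is the interesting one; Problem 3 asks you to derive both. To see at least where the normalization in the first identity comes from, here is a minimal sketch: by the Fubini–Tonelli theorem and the translation invariance of Lebesgue measure,

    E ∫_{R^d} (Jf)(x) dx = ∫_{R^N_+} e^{−∑_{j=1}^N t_j} E [ ∫_{R^d} f(x + 𝔛_t) dx ] dt = ∫_{R^N_+} e^{−∑_{j=1}^N t_j} dt = 1,

since ∫_{R^d} f(x + y) dx = 1 for every y ∈ R^d, and ∫_0^∞ e^{−t} dt = 1 for each of the N coordinates.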
Proof of half of Theorem 6. If (Jf)(x) > 0 for some probability density f, then certainly x + 𝔛_t has hit the support of f at some time t. That is,

    P { x ∈ supp(f) ⊖ 𝔛(R^N_+) } ≥ P { (Jf)(x) > 0 }.

In particular,

    E | supp(f) ⊖ 𝔛(R^N_+) | ≥ ∫_{R^d} P { (Jf)(x) > 0 } dx.

Lemma 7 and the Paley–Zygmund inequality (page 89) together imply that

    E | supp(f) ⊖ 𝔛(R^N_+) | ≥ [ (2π)^{−d} ∫_{R^d} ∏_{j=1}^N Re ( 1/(1 + Ψ_j(ξ)) ) |f̂(ξ)|² dξ ]^{−1},

where 1/∞ := 0. Now we approximate G by the support of a probability density of the form f := ρ ∗ φ_ε, where ρ ∈ M_1(G) and φ_ε is a bounded probability density with support in B(0, ε). Since |f̂(ξ)| ≤ |ρ̂(ξ)|, the preceding shows that if there exists a probability measure ρ on G that satisfies (1), then E|𝔛(R^N_+) ⊖ G| = E|G ⊖ 𝔛(R^N_+)| > 0. □

Definition 8. We say that 𝔛 is absolutely continuous if 𝒰_1(A) = ∫_A υ(x) dx for some measurable υ. The function υ is called the 1-potential density of 𝔛. □

It is not hard to see that υ can always be chosen to be a probability density. Moreover, if U^1_1, …, U^N_1 have 1-potential densities u^1, …, u^N respectively, then υ = u^1 ∗ · · · ∗ u^N. The following is proved similarly to Theorem 10 (page 82).

Theorem 9. Suppose 𝔛 is absolutely continuous with a 1-potential density υ such that υ(0) > 0. Then, for all G ∈ ℬ(R^d), P{𝔛(R^N_+) ∩ G ≠ ∅} > 0 if and only if there exists a probability measure ρ on G that satisfies (1).

Example 10. Let 𝔛 be an additive stable process on R^d with N parameters and index α ∈ (0, 2]. Then,

    P { 𝔛(R^N_+) ∩ G ≠ ∅ } > 0   iff   ∫_{R^d} |ρ̂(ξ)|² / (1 + ‖ξ‖^{Nα}) dξ < ∞ for some ρ ∈ M_1(G).

When α = 2, this says something about additive Brownian motion. □

A connection to Hausdorff dimension

Definition 11. The Hausdorff dimension dim_H G of a Borel set G ⊆ R^d is defined as

    dim_H G := inf { s ∈ (0, d) : there is no ρ ∈ M_1(G) such that ∫_{R^d} |ρ̂(ξ)|² / (1 + ‖ξ‖^{d−s}) dξ < ∞ }.

[The preceding is well defined, provided that we set inf ∅ := d.] □

[This is not the usual definition, but rather the consequence of a famous theorem of classical potential theory, "Frostman's theorem."] In particular, Example 10 tells us the following.

Proposition 12. If 𝔖 is an M-parameter additive stable process with index α ∈ (0, 2], then for all G ∈ ℬ(R^d):

    (1) if dim_H G > d − Mα, then P{𝔖(R^M_+) ∩ G ≠ ∅} > 0; whereas
    (2) if dim_H G < d − Mα, then P{𝔖(R^M_+) ∩ G ≠ ∅} = 0.

Now let 𝔛 be an N-parameter additive Lévy process on R^d with Lévy exponent Ψ := (Ψ_1, …, Ψ_N), independent of 𝔖. Then 𝔜 := 𝔛 ⊕ 𝔖 is an (N + M)-parameter additive Lévy process on R^d. It follows from Theorem 9 that

    P { 0 ∈ cl 𝔜(R^{N+M}_+) } > 0   iff   ∫_{R^d} ∏_{j=1}^N Re ( 1/(1 + Ψ_j(ξ)) ) dξ / (1 + ‖ξ‖^{Mα}) < ∞.

But 0 is in the closure of the range of 𝔜 if and only if the closures of the ranges of 𝔛 and 𝔖 intersect! Therefore, Proposition 12 implies the following:

Theorem 13 (Khoshnevisan et al., 2003; Yang, 2007). With probability one,

    dim_H cl 𝔛(R^N_+) = sup { s ∈ (0, d) : ∫_{R^d} ∏_{j=1}^N Re ( 1/(1 + Ψ_j(ξ)) ) dξ / ‖ξ‖^{d−s} < ∞ },

where sup ∅ := 0.

[Why can we replace (1 + ‖ξ‖^{d−s})^{−1} by ‖ξ‖^{−d+s}?]

It is not hard to see that if C is at most countable, then dim_H(G ∪ C) = dim_H G for all G ∈ ℬ(R^d). Therefore, one can use the fact that the X^j's are cadlag [hence have denumerably-many jumps] to prove that the closure sign in the preceding theorem can be removed.

An application to subordinators

Let us now apply Theorem 13 to the case where X_t := T_t is a subordinator with Laplace exponent Φ [and Lévy exponent Ψ, still].

Theorem 14 (Horowitz, 1968). With probability one,

    dim_H T(R_+) = sup { s ∈ (0, 1) : ∫_0^∞ [ 1/(1 + Φ(λ)) ] dλ / λ^{1−s} < ∞ },

where sup ∅ := 0.
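For example, suppose that T is a stable subordinator with index α ∈ (0, 1), so that Φ(λ) ∝ λ^α. For s ∈ (0, 1), the integrand in Theorem 14 behaves like λ^{s−1} near zero [integrable for every s > 0] and like λ^{s−1−α} near infinity [integrable if and only if s < α]. Therefore,

    ∫_0^∞ [ 1/(1 + Φ(λ)) ] dλ / λ^{1−s} < ∞   iff   s < α,

and Theorem 14 tells us that dim_H T(R_+) = α a.s. This is the computation that we will use, with α = 1/2, in the proof of Proposition 16 below.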
The following is a convenient method by which we can "transform" a Lévy exponent into a Laplace exponent.

Proposition 15. For every λ > 0,

    1/(1 + Φ(λ)) = (1/(πλ)) ∫_{−∞}^{∞} Re ( 1/(1 + Ψ(z)) ) dz / (1 + (z/λ)²).

Proof. Define {C_λ}_{λ>0} to be an independent linear Cauchy process, normalized so that E exp(izC_λ) = exp(−λ|z|) for z ∈ R and λ > 0. By independence,

    e^{−tΦ(λ)} = E e^{−λT_t} = E e^{iC_λ T_t} = E e^{−tΨ(C_λ)}.

But the probability density of C_λ is g(z) = (πλ)^{−1} (1 + (z/λ)²)^{−1}. Therefore,

    e^{−tΦ(λ)} = (1/(πλ)) ∫_{−∞}^{∞} e^{−tΨ(z)} dz / (1 + (z/λ)²).

Multiply both sides by exp(−t) and integrate [dt] to finish. □

Proof of Theorem 14. Here is how we can apply Proposition 15. First, note that if 0 < θ < 2 and z ∈ R, then

    ∫_0^∞ [ 1/(1 + (z/λ)²) ] dλ / λ^{1+θ} ∝ |z|^{−θ}.

Therefore, if we multiply the equation in Proposition 15 by λ^{−θ} and integrate [dλ], then we obtain

    ∫_0^∞ [ 1/(1 + Φ(λ)) ] dλ / λ^{θ} ∝ ∫_{−∞}^{∞} Re ( 1/(1 + Ψ(z)) ) dz / |z|^{θ}.

This and Theorem 13 together prove the theorem. □

Let us conclude this section by applying Horowitz's theorem to the set of "increase times" of linear Brownian motion.

Proposition 16 (Lévy, XXX). If B denotes standard Brownian motion, then

    dim_H { t ≥ 0 : B_t = sup_{s∈[0,t]} B_s } = 1/2   a.s.

Proof. Define

    T_s := inf { t > 0 : B_t = s }   for all s > 0.

Then T is a stable subordinator with index α = 1/2; see Exercise 2 [page 51]. And Horowitz's theorem [with Φ(λ) ∝ √λ] tells us that the Hausdorff dimension of the range of T is a.s. 1/2. On the other hand, it is a real-variable fact that the closure of T(R_+) is precisely

    { t ≥ 0 : B_t = sup_{s∈[0,t]} B_s }.

(Check!) Therefore, the proposition follows. □

There is a theorem of Lévy which implies that {t ≥ 0 : B_t = sup_{[0,t]} B} has the same "law" as the zero set {t ≥ 0 : B_t = 0} of Brownian motion. Therefore, the preceding implies that the Hausdorff dimension of the zero set of B is almost surely 1/2. Rather than study this particular problem in greater depth, we study the zero set of a more general Lévy process in the next lecture.

Problems for Chapter 14

1. Let {P_t}_{t∈R} denote the two-sided semigroup of a two-sided Lévy process.
    (1) Is it true that P_{t+s} = P_t P_s for all s, t ∈ R? [In other words, is {P_t}_{t∈R} a semigroup of linear operators?]
    (2) Define linear operators P̄_t := P_t P_{−t} for all t ∈ R. Prove that {P̄_t}_{t∈R} is a semigroup of linear operators.

2. Let X and Y denote two independent Lévy processes on R^d with respective exponents Ψ_1 and Ψ_2.
    (1) Verify that 𝔄_{s,t} := X_s − Y_t defines a 2-parameter additive Lévy process on R^d; compute its Lévy exponent.
    (2) Verify that 𝔄_{s,t} := (X_s, Y_t) defines a 2-parameter additive Lévy process on (R^d)²; compute its Lévy exponent.

3. Derive Lemma 7.

4. Let X denote an isotropic stable process on R^d with index α ∈ (0, 2]. Compute dim_H(X(R_+)). Indicate the changes to your formula if X(R_+) is replaced by 𝔛(R^N_+), where 𝔛 is an additive stable process on R^d with index α ∈ (0, 2] and N parameters. Or, more generally still, if 𝔛 := X^1 ⊕ · · · ⊕ X^N, where the X^j's are independent symmetric stable processes on the line with respective indices α_1, …, α_N ∈ (0, 2].

5. Compute the Hausdorff dimension of the range of Y_t := (t, B_t), where B denotes d-dimensional Brownian motion. The range of Y is called the "graph" of Brownian motion. Indicate the changes made to your formula if we replace "Brownian motion" by "isotropic stable process on R^d with index α ∈ (0, 2]."
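A quick way to build intuition for the objects in Problems 4 and 5 is to simulate them. The following is a minimal sketch in Python; it assumes only that numpy is available, and the grid size, time step, and function names are arbitrary choices made here for illustration. It produces a two-parameter additive Brownian motion 𝔛_{(s,t)} = B^1_s + B^2_t with values in R², evaluated on a discrete grid.

    import numpy as np

    def brownian_path(n_steps, dim, dt, rng):
        # A discretized Brownian path: cumulative sums of independent
        # centered Gaussian increments with variance dt, started at 0.
        increments = rng.normal(scale=np.sqrt(dt), size=(n_steps, dim))
        return np.vstack([np.zeros((1, dim)), np.cumsum(increments, axis=0)])

    def additive_brownian_motion(n_steps=200, dim=2, dt=1.0 / 200, seed=0):
        # X_{(s,t)} := B^1_s + B^2_t for two independent Brownian paths,
        # evaluated on the grid {(i*dt, j*dt) : 0 <= i, j <= n_steps}.
        rng = np.random.default_rng(seed)
        b1 = brownian_path(n_steps, dim, dt, rng)  # indexed by s
        b2 = brownian_path(n_steps, dim, dt, rng)  # indexed by t
        # Broadcasting: entry [i, j, :] equals b1[i] + b2[j].
        return b1[:, None, :] + b2[None, :, :]

    X = additive_brownian_motion()
    print(X.shape)  # (201, 201, 2): grid points of the range, viewed in R^2

Plotting the rows of X.reshape(-1, 2) gives a picture of [a discrete approximation to] 𝔛([0, 1]²), the range of 𝔛 over the square [0, 1]²; compare with your answer to Problem 4.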