Hindawi Publishing Corporation
Journal of Probability and Statistics
Volume 2013, Article ID 208950, 8 pages
http://dx.doi.org/10.1155/2013/208950

Research Article
Note on Qualitative Robustness of Multivariate Sample Mean and Median

Evgueni Gordienko, Andrey Novikov, and J. Ruiz de Chávez

Department of Mathematics, Autonomous Metropolitan University, Iztapalapa, San Rafael Atlixco 186, Col. Vicentina, C.P. 09340, Mexico City, DF, Mexico

Correspondence should be addressed to J. Ruiz de Chávez; jrch@xanum.uam.mx

Received 4 October 2012; Revised 11 December 2012; Accepted 12 December 2012

Academic Editor: Shein-Chung Chow

Copyright © 2013 Evgueni Gordienko et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

It is known that the robustness properties of estimators depend on the choice of a metric in the space of distributions. We introduce a version of Hampel's qualitative robustness that takes into account the $\sqrt{n}$-asymptotic normality of estimators in $\mathbb{R}^d$, and we examine such robustness of two standard location estimators in $\mathbb{R}^d$. For this purpose, we use a certain combination of the Kantorovich and Zolotarev metrics rather than the usual Prokhorov-type metric. This choice of metric is explained by the intention to expose a (theoretical) situation in which the robustness properties of the sample mean and the $L_1$-sample median are the reverse of the usual ones. Using the mentioned probability metrics, we show the qualitative robustness of the multivariate sample mean and prove an inequality which provides a quantitative measure of robustness. On the other hand, we show that the $L_1$-sample median cannot be "qualitatively robust" with respect to the same distance between distributions.

1. Introduction

The following definition of Hampel (originally given for the one-dimensional case) of qualitative robustness [1, 2] deals with $\pi$-balls in the space of distributions rather than with the standard "contamination neighborhoods" (for the latter see, e.g., [3, 4]). The sequence $T_n$, $n \ge 1$, of estimators is qualitatively robust at the distribution $\mathcal{L}$ if for every $\varepsilon > 0$ there exists $\delta > 0$ such that $\pi(\mathcal{L}, \tilde{\mathcal{L}}) < \delta$ entails
$$\sup_{n \ge 1} \pi\big(T_n(X_1, \dots, X_n),\, T_n(\tilde{X}_1, \dots, \tilde{X}_n)\big) < \varepsilon. \tag{1}$$
Here, and throughout, $\pi$ denotes the Prokhorov metric, and $X_1, X_2, \dots$ and $\tilde{X}_1, \tilde{X}_2, \dots$ are i.i.d. random vectors distributed, respectively, as $\mathcal{L}$ and $\tilde{\mathcal{L}}$. For a metric $\rho$ on the space of distributions and random vectors $X$, $Y$, we will write $\rho(X, Y)$ (as in (1)), having in mind the $\rho$-distance between the distributions of $X$ and $Y$.

By all means, the use of the Prokhorov metric is only an option. For instance, in [5] other probability metrics were used in the definition of qualitative robustness. (See also [6–9] for the use of different probability metrics or pseudometrics related to the estimation of robustness.)

As noted in [1, 2], in $\mathbb{R}^1$ sample means are not qualitatively robust at any $\mathcal{L}$, while sample medians are qualitatively robust at any $\mathcal{L}$ having a unique median. (See also [10] for the lack of qualitative robustness of sample means in certain Banach spaces.) Moreover, in [11] it was shown that for symmetric distributions the median is, in a certain sense, the "most robust" estimator of a center of location when using the pseudometric corresponding to neighborhoods of contamination type. (See more in this respect in [8].)
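A minimal numerical illustration of this classical contrast (ours, not part of the original argument; it assumes Python with NumPy): a small point-mass contamination displaces the sample mean by roughly $0.05 \cdot 50 = 2.5$, while the sample median hardly moves.

```python
# Illustrative sketch: sample mean versus sample median under a small
# "gross error" contamination, in the spirit of the results cited above.
import numpy as np

rng = np.random.default_rng(0)
n, reps, eps, outlier = 200, 2000, 0.05, 50.0

mean_shift = np.empty(reps)
median_shift = np.empty(reps)
for k in range(reps):
    x = rng.standard_normal(n)              # sample from L = N(0, 1)
    mask = rng.random(n) < eps              # L~ = 0.95 N(0, 1) + 0.05 delta_{50}
    x_tilde = np.where(mask, outlier, x)
    mean_shift[k] = x_tilde.mean() - x.mean()
    median_shift[k] = np.median(x_tilde) - np.median(x)

print("average mean shift:  ", mean_shift.mean())    # ~ eps * outlier = 2.5
print("average median shift:", median_shift.mean())  # close to 0
```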
At the same time, it is known from the literature that under different circumstances, in particular, when using distinct probability metrics (as, e.g., in (1) or in other definitions), the robustness properties of estimators can change considerably (see, for instance, the discussion in [12]).

The first aim of the present paper is to consider a modified version of Hampel's definition of qualitative robustness that takes into account the $\sqrt{n}$-asymptotic normality of the sequence of estimators. The formal definition is given in the next section, but, basically, we replace (1) with the following condition:
$$\sup_{n \ge 1} \sqrt{n}\,\mu\big(T_n(X_1, \dots, X_n),\, T_n(\tilde{X}_1, \dots, \tilde{X}_n) + \gamma\big) \le \varepsilon, \tag{2}$$
where $\gamma$ is some constant and $\mu$ is a probability metric (different from the Prokhorov metric in our case).

The second goal of the paper is to present an example of two probability metrics $\mu$ and $\nu$ (on the space of distributions in $\mathbb{R}^d$) for which the following holds:

(i) when $T_n = \bar{X}_n$, $n \ge 1$, are multivariate sample means, the left-hand side of inequality (2) (with some $\gamma$) is bounded by $\mathrm{const} \cdot \nu(\mathcal{L}, \tilde{\mathcal{L}})$;

(ii) when $T_n = m_n$, $n \ge 1$, are sample medians, we give an example of symmetric smooth distributions $\mathcal{L}$ and $\tilde{\mathcal{L}}_\alpha$, $\alpha > 0$, in $\mathbb{R}$ such that $\nu(\mathcal{L}, \tilde{\mathcal{L}}_\alpha) \to 0$ as $\alpha \to 0$, while there is a positive constant $b$ such that the left-hand side of inequality (2) (with $\gamma = 0$) is greater than $b$ for all sufficiently small $\alpha > 0$. Therefore, sample medians are not qualitatively robust (in our sense) with respect to these metrics.

The metrics $\mu$ and $\nu$ are the following (the complete definitions are given in Section 2): $\mu$ is the Kantorovich metric (see, e.g., [13]), and $\nu$ is a certain combination of $\mu$ and the Zolotarev metric of order 2 (see [14]).

We should stress that this choice is not determined by any advantages for statistical applications. Moreover, closeness of distributions in the Kantorovich metric implies closeness in the Prokhorov metric, but it also reduces the probability of "large-valued outliers" (though not the rounding errors). Therefore, our selection of metrics is not quite consistent with the standard approach to qualitative robustness, where the Prokhorov metric (or its invariant versions) is used. Nevertheless, our choice allows us to unveil possible unusual robustness properties of sample means and medians and to give a certain quantitative assessment of the robustness of the multivariate sample mean (with respect to the considered metrics!). The obtained "robustness inequality" does not work in the "gross error model," but (jointly with inequalities (23)) it could be useful for a quantitative assessment of the robustness of sample means under perturbations of data of "rounding" type.
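To make the scaling in (2) concrete, here is a small simulation (ours; it assumes SciPy, takes $\mu$ to be the Kantorovich, i.e., Wasserstein-1, distance on $\mathbb{R}$, and sets $\gamma = EX - E\tilde{X}$). With $\mathcal{L} = N(0, 1)$ and $\tilde{\mathcal{L}} = N(0.3, 4)$, both shifted laws of the sample means are exactly normal, so $\sqrt{n}\,\mu$ stabilizes at $W_1(N(0,1), N(0,4)) = \sqrt{2/\pi} \approx 0.798$ for every $n$:

```python
# Illustrative sketch: Monte Carlo estimate of the quantity in (2) for sample
# means, with mu = Wasserstein-1 and gamma = EX - EX~.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
reps = 20000
gam = 0.0 - 0.3                        # gamma = EX - EX~

for n in (4, 16, 64, 256):
    xbar = rng.normal(0.0, 1.0, (reps, n)).mean(axis=1)    # means under L
    xtbar = rng.normal(0.3, 2.0, (reps, n)).mean(axis=1)   # means under L~
    lhs = np.sqrt(n) * wasserstein_distance(xbar, xtbar + gam)
    print(n, lhs)   # ~ sqrt(2/pi) ~ 0.798, uniformly in n
```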
2. Basic Definitions

Let $(H, |\cdot|)$ be a separable Hilbert space with the norm $|\cdot|$ generated by an inner product $\langle\cdot,\cdot\rangle$, and let $\mathcal{B}(H)$ be the Borel $\sigma$-algebra of subsets of $H$. Let also $X; X_1, \dots, X_n, \dots$ and $\tilde{X}; \tilde{X}_1, \dots, \tilde{X}_n, \dots$ be two sequences of i.i.d. random vectors in $H$, with their respective distributions denoted by $\mathcal{L}$ and $\tilde{\mathcal{L}}$. Under the assumption
$$E|X| < \infty, \qquad E|\tilde{X}| < \infty, \tag{3}$$
the means $EX$ and $E\tilde{X}$ are defined as the corresponding Bochner integrals.

The definition of the median of $X$ is less standard (consult, e.g., [15] for different definitions in $H = \mathbb{R}^d$). We will use the definition of the median given in a general setting in [16] (sometimes called the $L_1$-median):
$$m := \arg\min_{x \in H} E\big(|X - x| - |x|\big), \tag{4}$$
$$\tilde{m} := \arg\min_{x \in H} E\big(|\tilde{X} - x| - |x|\big). \tag{5}$$
In [16] it was shown that $m$ exists, and it is unique unless $\mathcal{L}$ is concentrated on a one-dimensional subspace of $H$. In the latter case the set of minimizers in (4) is $\{t x_0 : t \in [a, b]\}$ for some $x_0 \in H$, and one can set $m := ((a + b)/2)\,x_0$.

In what follows we denote ($n = 1, 2, \dots$):
$$S_n := X_1 + \dots + X_n, \quad \tilde{S}_n := \tilde{X}_1 + \dots + \tilde{X}_n, \quad \bar{X}_n := \frac{S_n}{n}, \quad \bar{\tilde{X}}_n := \frac{\tilde{S}_n}{n} \quad \text{(sample means)}. \tag{6}$$
On the other hand, let ($n = 1, 2, \dots$)
$$m_n = m_n(X_1, \dots, X_n), \qquad \tilde{m}_n = \tilde{m}_n(\tilde{X}_1, \dots, \tilde{X}_n) \tag{7}$$
be sample medians defined by (4) and (5), replacing $\mathcal{L}$ and $\tilde{\mathcal{L}}$ by the corresponding empirical distributions $\mathcal{L}_n$ and $\tilde{\mathcal{L}}_n$ (obtained from $X_1, \dots, X_n$ and $\tilde{X}_1, \dots, \tilde{X}_n$, resp.). Robustness (in terms of $\varepsilon$-contamination neighborhoods) and asymptotic normality of $m_n$, $n \ge 1$, were proved, for instance, in [17] (see also [18]).

The qualitative robustness properties of the sample means $\{\bar{X}_n\}$ and the sample medians $\{m_n\}$ when the Prokhorov metric is used were discussed in the Introduction. Let us first see what happens with the qualitative robustness of $\{\bar{X}_n\}$ and $\{m_n\}$ if in the above definition we replace $\pi$ with the Kantorovich metric:
$$\mu(X, \tilde{X}) \equiv \mu(\mathcal{L}, \tilde{\mathcal{L}}) := \sup_{f \in \mathrm{Lip}} \big|Ef(X) - Ef(\tilde{X})\big|, \tag{8}$$
where
$$\mathrm{Lip} := \{f : H \to \mathbb{R} : f \text{ is bounded and } |f(x) - f(y)| \le |x - y|,\ x, y \in H\}. \tag{9}$$
Under condition (3), $\mu(X, \tilde{X}) < \infty$, and it is well known (see, e.g., [13]) that $\mu(X_n, X) \to 0$ if and only if $\pi(X_n, X) \to 0$ and $E|X_n| \to E|X|$. For $H = \mathbb{R}$,
$$\mu(X, \tilde{X}) = \int_{-\infty}^{\infty} \big|F_X(x) - F_{\tilde{X}}(x)\big|\,dx. \tag{10}$$
Now, applying the regularity properties of $\mu$ (see, e.g., [13]),
$$\mu\left(\frac{S_n}{n}, \frac{\tilde{S}_n}{n}\right) \le \frac{1}{n} \sum_{i=1}^{n} \mu(X_i, \tilde{X}_i) = \mu(\mathcal{L}, \tilde{\mathcal{L}}), \tag{11}$$
we see that the sequence of sample means $\{\bar{X}_n,\ n \ge 1\}$ is "qualitatively robust" when $\mu$ is used instead of the Prokhorov metric.

It seems straightforward (using an approach similar to the one given in [10, 19] and the results of [16]) to show that the sequence of sample medians $\{m_n,\ n \ge 1\}$ is also "qualitatively robust" with respect to $\mu$. However, this is beyond the scope of this paper.
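For an empirical distribution on $\mathbb{R}^d$, the $L_1$-median in (4) has no closed form, but it can be approximated by Weiszfeld-type fixed-point iterations. A minimal sketch (ours, assuming NumPy; the tie handling at data points is deliberately naive):

```python
# Illustrative sketch: empirical L1-median (the minimizer in (4) for the
# empirical distribution) via Weiszfeld iteration.
import numpy as np

def l1_median(points, iters=200, tol=1e-9):
    """points: (n, d) array; approximates argmin_x sum_i |points[i] - x|."""
    x = points.mean(axis=0)                     # start from the sample mean
    for _ in range(iters):
        dist = np.linalg.norm(points - x, axis=1)
        dist = np.maximum(dist, 1e-12)          # avoid division by zero
        w = 1.0 / dist
        x_new = (points * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

rng = np.random.default_rng(2)
sample = rng.standard_normal((500, 3))
print(l1_median(sample))                        # close to the true median (0, 0, 0)
```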
3. $\nu$-$\mu$-$\sqrt{n}$-Robustness and Main Results

From now on we suppose that $H = \mathbb{R}^d$ with the Euclidean norm $|\cdot|$. The results presented in this section are an extension to the multidimensional case of the similar findings for $H = \mathbb{R}$ published in the hardly accessible proceedings [20]. Moreover, we improve the results of [20] even in the one-dimensional case.

In order to simplify the calculations in the proof of Theorem 5 below, in the definitions of the Kantorovich and Zolotarev metrics (see below) we will use the norm $\|x\| := \sum_{i=1}^{d} |x_i|$ in the space $\mathbb{R}^d$. Thus, in what follows the Kantorovich metric $\mu$ is defined by relationships (8), (9), where in (9) the norm $\|\cdot\|$ is used in place of the norm $|\cdot|$.

Let $\nu$ be some fixed simple probability metric on the set of all probability distributions on $(\mathbb{R}^d, \mathcal{B}(\mathbb{R}^d))$. We will consider a sequence $T_n$, $n \ge 1$, of estimators of some parameter $\theta \in \mathbb{R}^d$ of the distribution $\mathcal{L}$ ($\tilde{\theta}$ of the distribution $\tilde{\mathcal{L}}$, resp.).

Definition 1. We say that a sequence $T_n$, $n \ge 1$, is $(\nu-\mu-\sqrt{n})$-robust at $\mathcal{L}$ if there is some fixed vector $\gamma = \gamma(\mathcal{L}, \tilde{\mathcal{L}}) \in \mathbb{R}^d$ such that for every $\varepsilon > 0$ there exists $\delta > 0$ such that $\nu(\mathcal{L}, \tilde{\mathcal{L}}) \le \delta$ entails
$$\sup_{n \ge 1} \sqrt{n}\,\mu\big(T_n(X_1, \dots, X_n),\, T_n(\tilde{X}_1, \dots, \tilde{X}_n) + \gamma\big) \le \varepsilon. \tag{12}$$

Remark 2. Taking into account that $\mu(cX + a, cY + a) = c\,\mu(X, Y)$, $c \ge 0$, $a \in \mathbb{R}^d$, we see that (12) can be related to the $\sqrt{n}$-asymptotic normality of the estimators $T_n$, $n \ge 1$. The "scaling parameter" $\gamma$ in (12) is necessary to ensure the equality of the means of the corresponding limit normal distributions (when they exist). Only in the case $EX = E\tilde{X}$ is $\gamma = 0$ in (12).

Zolotarev's probability metric of order 2, $\zeta_2$, is defined as follows (see, e.g., [14, 21]):
$$\zeta_2(X, \tilde{X}) \equiv \zeta_2(\mathcal{L}, \tilde{\mathcal{L}}) := \sup_{f \in \mathcal{D}_2} \big|Ef(X) - Ef(\tilde{X})\big|, \tag{13}$$
where
$$\mathcal{D}_2 := \{f : \mathbb{R}^d \to \mathbb{R} : \|Df(x) - Df(y)\| \le \|x - y\|,\ x, y \in \mathbb{R}^d\}, \tag{14}$$
and $Df = (D_1 f, \dots, D_d f)$ is the gradient of $f$.

Remark 3. The distance $\zeta_2(X, \tilde{X})$ can take infinite values. In particular, from (13), (14) we see that $\zeta_2(X, \tilde{X}) = \infty$ if $EX \ne E\tilde{X}$. On the other hand, if the second moments exist, that is, $EX = E\tilde{X}$ and $E|X|^2 < \infty$, $E|\tilde{X}|^2 < \infty$, then $\zeta_2(X, \tilde{X}) < \infty$ (see, e.g., [14]). The function $f(x) := (1/2)|x|^2$ belongs to $\mathcal{D}_2$ in (14). Therefore, by (13),
$$\big|E|X|^2 - E|\tilde{X}|^2\big| \le 2\,\zeta_2(X, \tilde{X}). \tag{15}$$
Thus, if $E|X|^2 < \infty$ and $\zeta_2(X, \tilde{X}) < \infty$, then $E|\tilde{X}|^2 < \infty$. Consequently, if $EX = E\tilde{X}$ and $E|X|^2 < \infty$, then $\zeta_2(X, \tilde{X}) < \infty$ if and only if $E|\tilde{X}|^2 < \infty$.

We now define the metric $\nu$ to work with:
$$\nu(X, \tilde{X}) := \max\Big\{2\,\mu(X, \tilde{X}),\ \inf_{a \in \mathbb{R}^d} \zeta_2(X, \tilde{X} + a)\Big\}. \tag{16}$$

3.1. $(\nu-\mu-\sqrt{n})$-Robustness of Sample Means. To prove the inequality in Theorem 5 below, we need to impose the following restrictions on the distribution $\mathcal{L}$.

Assumption 4. (i) $E|X|^{d+2} < \infty$, and the covariance matrix $M$ of $X$ is positive definite. The distribution $\mathcal{L}$ of $X$ has a density $p$ such that, for some $s \ge 1$:
(ii) the density $p_s$ of $S_s := X_1 + \dots + X_s$ is bounded and differentiable;
(iii) the gradient $Dp_s$ is bounded and $|Dp_s|$ belongs to $L_1(\mathbb{R}^d)$;
(iv) for some $\alpha > 0$,
$$\int_{|x| > \alpha n} |Dp_s|\,dx = o\big(n^{-1/2}\big), \quad \text{as } n \to \infty. \tag{17}$$

Note that, in view of Remark 3, under condition (i) we have $\nu(X, \tilde{X}) < \infty$ if and only if
$$E|\tilde{X}|^2 < \infty \tag{18}$$
(see (15)).

Theorem 5. Under Assumption 4 and supposing (18), it holds that
$$\sup_{n \ge 1} \sqrt{n}\,\mu\big(\bar{X}_n, \bar{\tilde{X}}_n + \gamma\big) \le K\,\nu(\mathcal{L}, \tilde{\mathcal{L}}), \tag{19}$$
where $\gamma = EX - E\tilde{X}$,
$$K = \max\big\{(10s - 1)^{1/2},\ 5.4\,dC\big\}, \tag{20}$$
$$C = \sup_{n \ge s} \max_{1 \le j \le d} \|D_j q_n\|_{L_1(\mathbb{R}^d)} < \infty, \tag{21}$$
and $q_n$ is the density of $(1/\sqrt{n})(X_1 + \dots + X_n)$, $n \ge s$.

Remark 6. (i) The constant $K$ in (20), (21) is entirely determined by the distribution $\mathcal{L}$ of $X$. For various particular densities of $X$, the constant $C$ in (21) can be bounded by means of computer calculations. For this one can use the fact (true under wide conditions) that the sequence $D_j q_n$, $n \ge s$, converges in $L_1$-norm to the corresponding partial derivative of the limit normal density with covariance matrix $M$ (and zero mean, since $\|D_j q_n\|_{L_1(\mathbb{R}^d)}$ is invariant under translations).

For example, let $d = 2$ and $X = (X', X'')$, where $X'$ and $X''$ are independent random variables; $X'$ has the gamma density with $\alpha = 2.1$ and arbitrary $\lambda' > 0$, while $X''$ has the gamma density with $\alpha = 3$ and arbitrary $\lambda''$. Simple computer calculations show that in (21) $C < \max\{0.7065\,\lambda', 0.5414\,\lambda''\}$, and since we can take $s = 1$, we obtain in (20) that
$$K < \max\big\{3,\ 10.8 \max\{0.7065\,\lambda', 0.5414\,\lambda''\}\big\}. \tag{22}$$
For instance, $K < 7.6302$ for $\lambda' = \lambda'' = 1$. (For these values of $\lambda'$, $\lambda''$ we can take $s = 3$ in (20) and obtain $K < 6.3829$.)
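The "computer calculations" just mentioned can be sketched as follows (our reconstruction, assuming $\lambda' = \lambda'' = 1$ and SciPy). For independent coordinates, $\|D_j q_n\|_{L_1(\mathbb{R}^2)}$ reduces to the one-dimensional quantity $\|q_n'\|_{L_1(\mathbb{R})}$ for the corresponding coordinate; for a smooth unimodal density $q$ vanishing at $\pm\infty$ one has $\|q'\|_{L_1} = 2\max_x q(x)$; and the sum of $n$ i.i.d. $\mathrm{Gamma}(a, 1)$ variables is $\mathrm{Gamma}(na, 1)$, while rescaling by $\sqrt{n}$ multiplies the $L_1$-norm of the derivative by $\sqrt{n}$:

```python
# Illustrative sketch: bounding the constant C of (21) for the gamma example
# of Remark 6(i), with rates lambda' = lambda'' = 1.
import numpy as np
from scipy.stats import gamma

def Dq_L1(shape_a, n):
    """||(density of S_n / sqrt(n))'||_{L1} for S_n = sum of n Gamma(shape_a, 1)."""
    a = shape_a * n
    mode = a - 1.0                     # mode of Gamma(a, 1) when a > 1
    return 2.0 * np.sqrt(n) * gamma.pdf(mode, a)

for a in (2.1, 3.0):
    vals = [Dq_L1(a, n) for n in range(1, 200)]
    print(f"shape {a}: sup_n = {max(vals):.4f} at n = {int(np.argmax(vals)) + 1}")
# shape 2.1: sup_n = 0.7065 at n = 1; shape 3.0: sup_n = 0.5414 at n = 1,
# matching the constants in (22).
```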
(ii) Since under the above assumptions $\nu(\mathcal{L}, \tilde{\mathcal{L}}) < \infty$ entails (18), inequality (19) ensures the $(\nu-\mu-\sqrt{n})$-robustness of the sequence of sample means $\bar{X}_n$, $n \ge 1$.

(iii) For $d = 1$, an example is given in [20] showing that in general the sequence of sample means $\bar{X}_n$, $n \ge 1$, is not $(\mu-\mu-\sqrt{n})$-robust (even if (18) holds and $EX = E\tilde{X}$). It is also almost evident that the sample means $\bar{X}_n$, $n \ge 1$, are not $(\nu-\mu-\sqrt{n})$-robust if, for example, $\nu$ is the total variation metric $\sigma$ (or if $\nu = \max(\mu, \sigma)$).

The appearance of Zolotarev's metric on the right-hand side of (19) is related to the closeness of the corresponding limit normal distributions.

Corollary 7. Suppose for a moment that $\theta = \tilde{\theta}$, where $\theta := EX$ and $\tilde{\theta} := E\tilde{X}$, and that one evaluates the quality of the estimators by means of absolute errors: $\delta_n := E\|\bar{X}_n - \theta\|$, $\tilde{\delta}_n := E\|\bar{\tilde{X}}_n - \theta\|$. Then from (19) it follows that
$$\big|\delta_n - \tilde{\delta}_n\big| \le n^{-1/2}\,\mathrm{const}\;\nu(\mathcal{L}, \tilde{\mathcal{L}}), \quad n \ge 1. \tag{23}$$
(The simple proof is similar to the one given in [20].)

3.2. About the $(\nu-\mu-\sqrt{n})$-Robustness of Sample Medians. Let again $\nu$ be the metric defined in (16). We show that the sequence of sample medians $m_n$, $n \ge 1$, in general is not $(\nu-\mu-\sqrt{n})$-robust, even when $X$ and $\tilde{X}$ have strictly positive, bounded, smooth densities symmetric with respect to the origin and the sequences of sample medians $m_n$, $n \ge 1$, and $\tilde{m}_n$, $n \ge 1$, are $\sqrt{n}$-asymptotically normal. We consider a modified version of the corresponding example from [20].

Example 8. Let $H = \mathbb{R}$, $X \sim N(0, 1)$, and for $\varepsilon \in (0, 1)$ let $\tilde{X}_\varepsilon$ be a random variable with the density
$$p_{\tilde{X}_\varepsilon}(x) := \frac{1}{c(\varepsilon)} \left(\varepsilon + \frac{x^2}{\varepsilon^3 + x^2}\right) e^{-x^2/2}, \quad x \in \mathbb{R}, \tag{24}$$
where $c(\varepsilon)$ is a normalizing constant. By the symmetry of the density, we get
$$m = \tilde{m} = EX = E\tilde{X}_\varepsilon = 0, \tag{25}$$
and it is also clear that $p_X, p_{\tilde{X}_\varepsilon} \in C^\infty(\mathbb{R})$ and
$$\sup_{\varepsilon \in (0,1),\ x \in \mathbb{R}} p_{\tilde{X}_\varepsilon}(x) < \infty. \tag{26}$$

First of all, let us show that for this example the left-hand side of (12) is infinite for any $\gamma \ne 0$. It is well known (see, e.g., [16]) that $m_n \to m = 0$ and $\tilde{m}_n \to \tilde{m} = 0$ as $n \to \infty$ with probability 1. Also, from the results of [16] we can obtain that $|m_n| \le (2/n) \sum_{i=1}^{n} |X_i|$. Using this inequality it is easy to show that the sequence $|m_n|$, $n \ge 1$, is uniformly integrable. Therefore, $E|m_n| \to 0$ (and also $E|\tilde{m}_n| \to 0$) as $n \to \infty$, and for this reason $\mu(m_n, \tilde{m}_n + \gamma) \to \mu(0, \gamma) = |\gamma|$, so that the supremum in (12) is indeed infinite.

Let now $\gamma = 0$ in (12). By the well-known asymptotic normality (see [22, page 307]),
$$\pi(\sqrt{n}\,\tilde{m}_n, \tau_\varepsilon) \to 0, \qquad \pi(\sqrt{n}\,m_n, \tau) \to 0, \tag{27}$$
where
$$\tau \sim N\left(0, \frac{1}{2p_X(0)}\right) = N\left(0, \sqrt{\frac{\pi}{2}}\right), \qquad \tau_\varepsilon \sim N\left(0, \frac{1}{2p_{\tilde{X}_\varepsilon}(0)}\right) = N\left(0, \frac{c(\varepsilon)}{2\varepsilon}\right). \tag{28}$$
From (27) and (28) it follows that there is $b > 0$ such that for all small enough $\varepsilon$ and all large enough $n$ we get $\pi(\sqrt{n}\,m_n, \sqrt{n}\,\tilde{m}_n) \ge b$. Consequently, for all such $\varepsilon$ and $n$,
$$\sqrt{n}\,\mu(m_n, \tilde{m}_n) = \mu(\sqrt{n}\,m_n, \sqrt{n}\,\tilde{m}_n) \ge b^2 > 0, \tag{29}$$
since $\mu \ge \pi^2$ (see [13, page 86]).

We have obtained that the left-hand side of (12) is positive for all small enough $\varepsilon > 0$. On the other hand, $\nu(X, \tilde{X}_\varepsilon) \to 0$ as $\varepsilon \to 0$. This follows from the inequality
$$\zeta_2(X, Y) \le \frac{1}{2} \int_{-\infty}^{\infty} x^2\,\big|p_X(x) - p_Y(x)\big|\,dx \tag{30}$$
(valid when $EX = EY$; see [21, page 376]) and from (10).
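Example 8 can also be explored numerically (our sketch, assuming SciPy; the constant $c(\varepsilon)/(2\varepsilon)$ is the one appearing in (28) as reconstructed above). Samples from (24) are drawn by rejection from $N(0, 1)$, using that the bracket in (24) is at most $1 + \varepsilon$. The agreement with (28) is rough, and ever larger $n$ is needed as $\varepsilon$ decreases, which is precisely the non-uniformity in $n$ that the example exploits:

```python
# Illustrative sketch: Monte Carlo spread of sqrt(n) * median under the
# density (24), compared with the value c(eps) / (2 eps) suggested by (28).
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(3)

def sample_tilde(eps, size):
    out = np.empty(size)
    filled = 0
    while filled < size:
        z = rng.standard_normal(2 * size)
        u = rng.random(2 * size)
        keep = z[u * (1 + eps) < eps + z**2 / (eps**3 + z**2)]  # rejection step
        take = keep[: size - filled]
        out[filled: filled + take.size] = take
        filled += take.size
    return out

n, reps = 20001, 400
print("N(0,1) reference:", np.sqrt(np.pi / 2))
for eps in (0.5, 0.3, 0.2):
    c_eps = quad(lambda x: (eps + x**2 / (eps**3 + x**2)) * np.exp(-x**2 / 2),
                 -np.inf, np.inf)[0]
    meds = np.array([np.median(sample_tilde(eps, n)) for _ in range(reps)])
    print(eps, "simulated:", np.sqrt(n) * meds.std(), "from (28):", c_eps / (2 * eps))
```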
Remark 9. The densities in (24) represent the following, somewhat strange, type of "contamination." Since $\max_x p_X(x) = p_X(0)$, sample points from $p_X$ tend to concentrate around the origin. But $p_{\tilde{X}_\varepsilon}(0) \to 0$ as $\varepsilon \to 0$, and therefore sample points from $p_{\tilde{X}_\varepsilon}$ are frequently, to some extent, separated from 0.

A natural question is how to choose the metric $\tilde{\nu}$ so as to ensure the $(\tilde{\nu}-\mu-\sqrt{n})$-robustness of the sequence of sample medians $m_n$, $n \ge 1$. Our conjecture is (for $d = 1$, e.g.) to try $\tilde{\nu}(X, \tilde{X}) := \max\{\mu(X, \tilde{X}),\ \operatorname{ess\,sup}_{x \in \mathbb{R}} |p_X(x) - p_{\tilde{X}}(x)|\}$ (supposing the existence of densities). If $m = \tilde{m}$, then under certain conditions closeness in $\tilde{\nu}$ guarantees the closeness of the normal densities which are limiting for $\{\sqrt{n}\,m_n,\ n \ge 1\}$ and $\{\sqrt{n}\,\tilde{m}_n,\ n \ge 1\}$, respectively. To attempt proving the $(\tilde{\nu}-\mu-\sqrt{n})$-robustness of $m_n$, $n \ge 1$ (as in (12)), one can show Hampel's qualitative robustness of $m_n$, $n \ge 1$, with respect to the metric $\pi$ and then use the property $\sqrt{n}\,\mu(m_n, \tilde{m}_n) = \mu(\sqrt{n}\,m_n, \sqrt{n}\,\tilde{m}_n)$. An unclear point of this plan is finding conditions under which $E|\sqrt{n}(m_n - m)| \to 0$ as $n \to \infty$.

Example 10. Let us give another (very simple) example of a sequence of estimators which is $(\nu-\mu-\sqrt{n})$-robust with $\nu := \mu$ (on the class of distributions described below). For $H = \mathbb{R}$ we consider the class $\mathcal{K}$ of all random variables $X$ with bounded support $\mathrm{Supp}(X) = [0, \theta_X]$, having densities $p_X$ such that $p_X(x) \ge \beta > 0$, $x \in [0, \theta_X]$. We suppose that $\theta_X \le \theta^* < \infty$ for all $X \in \mathcal{K}$ and that $\beta$ is the same for all $X \in \mathcal{K}$. Assume that the parameter $\theta = \theta_X$ is unknown and that the sequence of estimators $\theta_n := \max_{1 \le i \le n} X_i$ is used to estimate it.

Denoting $\theta = \theta_X$, $\tilde{\theta} = \theta_{\tilde{X}}$ and choosing $\gamma = \theta - \tilde{\theta}$ in (12), we get
$$\mu(\theta_n, \tilde{\theta}_n + \gamma) = \mu\Big(\theta - \max_{1 \le i \le n} X_i,\ \tilde{\theta} - \max_{1 \le i \le n} \tilde{X}_i\Big) \le E\Big|\Big(\theta - \max_{1 \le i \le n} X_i\Big) - \Big(\tilde{\theta} - \max_{1 \le i \le n} \tilde{X}_i\Big)\Big|, \tag{31}$$
because the metric $\mu$ is minimal for the compound metric $E|X - Y|$ (see, e.g., [13]). By elementary calculations we bound the right-hand side of (31) by
$$E\Big(\theta - \max_{1 \le i \le n} X_i\Big) + E\Big(\tilde{\theta} - \max_{1 \le i \le n} \tilde{X}_i\Big) \le \frac{2}{\beta(n + 1)}. \tag{32}$$
Now, for each $n$ fixed, by induction we obtain (see (10))
$$\mu\Big(\max_{1 \le i \le n} X_i,\ \max_{1 \le i \le n} \tilde{X}_i\Big) = \int_0^{\infty} \big|F_X^n(x) - F_{\tilde{X}}^n(x)\big|\,dx \le n \int_0^{\infty} \big|F_X(x) - F_{\tilde{X}}(x)\big|\,dx = n\,\mu(X, \tilde{X}). \tag{33}$$
On the other hand, let, for example, $\theta \le \tilde{\theta}$. Then
$$\mu(X, \tilde{X}) = \int_0^{\theta} \big|F_X(x) - F_{\tilde{X}}(x)\big|\,dx + \int_{\theta}^{\tilde{\theta}} \big(1 - F_{\tilde{X}}(x)\big)\,dx. \tag{34}$$
But for $x \in [\theta, \tilde{\theta}]$,
$$1 - F_{\tilde{X}}(x) = \int_x^{\tilde{\theta}} p_{\tilde{X}}(y)\,dy \ge \beta(\tilde{\theta} - x). \tag{35}$$
From this relationship it follows that
$$\tilde{\theta} \to \theta \quad \text{as } \mu(X, \tilde{X}) \to 0. \tag{36}$$
Finally, applying (32), we can select $N$ in such a way that
$$\sup_{n \ge N} \sqrt{n}\,\mu(\theta_n, \tilde{\theta}_n + \gamma) \le \frac{\varepsilon}{2}. \tag{37}$$
For $n < N$ we can use the inequality $\mu(X, Y + a) \le \mu(X, Y) + |a|$ together with (33) and (36). Thus, choosing a small enough $\delta > 0$, we can ensure inequality (12) for all $X, \tilde{X} \in \mathcal{K}$ with $\nu(X, \tilde{X}) \equiv \mu(X, \tilde{X}) \le \delta$. In this way we have proved the $(\mu-\mu-\sqrt{n})$-robustness of $\theta_n$ on the set $\mathcal{K}$.
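The bound (32) is easy to check numerically (our sketch, assuming NumPy). For the uniform law on $[0, \theta]$ we have $\beta = 1/\theta$ and $E(\theta - \max_{1 \le i \le n} X_i) = \theta/(n + 1)$, so each summand in (32) attains the bound $1/(\beta(n + 1))$ exactly:

```python
# Illustrative sketch: checking a summand of (32) in Example 10 for U[0, theta].
import numpy as np

rng = np.random.default_rng(4)
theta, reps = 2.0, 10000
beta = 1.0 / theta                    # the uniform density equals 1/theta = beta
for n in (10, 100, 1000):
    gap = theta - rng.uniform(0.0, theta, (reps, n)).max(axis=1)
    print(n, "E(theta - max):", gap.mean(), " 1/(beta(n+1)):", 1 / (beta * (n + 1)))
```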
Appendix

A. The Proofs

To start with the proof of Theorem 5 of the previous section, first of all we note that in (19)
$$\mu\big(\bar{X}_n, \bar{\tilde{X}}_n + \gamma\big) = \mu\left(\frac{S_n}{n}, \frac{\tilde{S}_n}{n} + EX - E\tilde{X}\right) = \mu\left(\frac{S_n - nEX}{n}, \frac{\tilde{S}_n - nE\tilde{X}}{n}\right), \tag{A.1}$$
because $\mu(X + a, Y + a) = \mu(X, Y)$, $a \in \mathbb{R}^d$. Also it is easy to see that $\zeta_2(X + a, Y + a) = \zeta_2(X, Y)$. On the other hand, by the definition of $\nu$ in (16),
$$\inf_{a \in \mathbb{R}^d} \zeta_2(X, \tilde{X} + a) = \zeta_2(X, \tilde{X} + EX - E\tilde{X}) = \zeta_2(X - EX, \tilde{X} - E\tilde{X}) \tag{A.2}$$
(see Remark 3). Also
$$\mu(X - EX, \tilde{X} - E\tilde{X}) = \mu(X, \tilde{X} + EX - E\tilde{X}) \le \mu(X, \tilde{X}) + E\|X - \tilde{X}\| \le 2\,\mu(X, \tilde{X}), \tag{A.3}$$
because the metric $\mu$ is minimal for the metric $E\|\cdot\|$.

The above arguments show that to establish inequality (19) it suffices to prove the version of (19) with $\gamma = 0$, $EX = E\tilde{X} = 0$, and the metric $\nu' := \max\{\mu, \zeta_2\}$ on the right-hand side of (19).

The proof is based on the following two lemmas. Let $f : \mathbb{R}^d \to \mathbb{R}$ be a differentiable function. We will write $Df := (D_1 f, \dots, D_d f)$, where $D_i f(x) := (\partial/\partial x_i) f(x)$, $i = 1, 2, \dots, d$.

Lemma A.1. Let $X$, $Y$, and $Z$ be random vectors in $\mathbb{R}^d$ such that
(a) $Z$ is independent of $X$ and $Y$;
(b) $E|X|^2 < \infty$, $E|Y|^2 < \infty$; $EX = EY$;
(c) $Z$ has a bounded differentiable density $p_Z$ (with respect to the Lebesgue measure) such that $D_i p_Z \in L_1(\mathbb{R}^d)$, $i = 1, 2, \dots, d$.
Then
$$\mu(X + Z, Y + Z) \le d \max_{1 \le i \le d} \|D_i p_Z\|_{L_1(\mathbb{R}^d)}\,\zeta_2(X, Y). \tag{A.4}$$

Proof. From the definition (8), (9) of the metric $\mu$ (with the norm $\|\cdot\|$ instead of $|\cdot|$!) it follows (the proof is simple) that in (8) the class of functions Lip (given in (9)) can be replaced by the class of all bounded differentiable functions $f : \mathbb{R}^d \to \mathbb{R}$ such that
$$\sup_{x \in \mathbb{R}^d} \left|\frac{\partial f(x)}{\partial x_i}\right| \le 1, \quad i = 1, 2, \dots, d. \tag{A.5}$$
Let us fix any such function $f$ and arbitrary $i, j \in \{1, 2, \dots, d\}$. Then (by the Fubini theorem)
$$Ef(X + Z) - Ef(Y + Z) = \int_{\mathbb{R}^d} f(x)\,dx \int_{\mathbb{R}^d} p_Z(x - t)\,[dF_X(t) - dF_Y(t)] = \int_{\mathbb{R}^d} [dF_X(t) - dF_Y(t)] \int_{\mathbb{R}^d} f(x)\,p_Z(x - t)\,dx. \tag{A.6}$$
Let
$$h(t) := \int_{\mathbb{R}^d} f(x)\,p_Z(x - t)\,dx = \int_{\mathbb{R}^d} f(y + t)\,p_Z(y)\,dy, \quad t \in \mathbb{R}^d. \tag{A.7}$$
Because of the boundedness of $\partial f/\partial t_j$ and the integrability of $D_i p_Z$, we can differentiate in (A.7) under the integral sign (see [23, Appendix A]). Thus,
$$\frac{\partial h}{\partial t_j}(t) = \int_{\mathbb{R}^d} \frac{\partial f}{\partial t_j}(y + t)\,p_Z(y)\,dy, \tag{A.8}$$
$$\frac{\partial^2 h}{\partial t_i \partial t_j}(t) = -\int_{\mathbb{R}^d} \frac{\partial f}{\partial x_j}(x)\,D_i p_Z(x - t)\,dx. \tag{A.9}$$
In view of (A.5) we obtain
$$\left|\frac{\partial^2 h}{\partial t_i \partial t_j}(t)\right| \le \int_{\mathbb{R}^d} |D_i p_Z(x - t)|\,dx = \|D_i p_Z\|_{L_1(\mathbb{R}^d)}.$$
For every $x, y \in \mathbb{R}^d$ we have
$$\|Dh(x) - Dh(y)\| = \sum_{j=1}^{d} |D_j h(x) - D_j h(y)|, \tag{A.10}$$
$$|D_j h(x) - D_j h(y)| \le \sum_{i=1}^{d} \sup_{z \in \mathbb{R}^d} \left|\frac{\partial^2 h}{\partial z_i \partial z_j}(z)\right| \, |x_i - y_i|. \tag{A.11}$$
Comparing (A.6)–(A.11) and taking into account the definition of $\zeta_2$ in (13), (14), we obtain inequality (A.4).

Lemma A.2. Under the assumptions of the previous section, the constant $C$ in (21) is finite.
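Lemma A.1 can be checked numerically in dimension $d = 1$, where the norms $\|\cdot\|$ and $|\cdot|$ coincide (our sketch, assuming SciPy; $\zeta_2(X, Y)$ is replaced by its upper bound (30), so the verified inequality is weaker than (A.4) itself). Here $X \sim N(0, 1)$ and $Y$ is Laplace with variance 1, so $EX = EY$ as required, and $Z \sim N(0, 1)$:

```python
# Illustrative sketch: a grid-based check of (A.4) for d = 1, with zeta_2
# replaced by the bound (30) and mu computed from (10).
import numpy as np
from scipy.stats import norm, laplace

x = np.linspace(-20.0, 20.0, 8001)
dx = x[1] - x[0]
pX = norm.pdf(x)
pY = laplace.pdf(x, scale=1.0 / np.sqrt(2.0))   # mean 0, variance 1
pZ = norm.pdf(x)

zeta2_bound = 0.5 * np.sum(x**2 * np.abs(pX - pY)) * dx    # inequality (30)
DpZ_L1 = np.sum(np.abs(np.gradient(pZ, dx))) * dx          # ||p_Z'||_{L1} = 2 phi(0)

conv = lambda p: np.convolve(p, pZ, mode="same") * dx      # densities of X+Z, Y+Z
cdf_diff = np.cumsum(conv(pX) - conv(pY)) * dx
W1 = np.sum(np.abs(cdf_diff)) * dx                         # mu(X+Z, Y+Z) via (10)

print(f"mu(X+Z, Y+Z) ~ {W1:.4f} <= {DpZ_L1 * zeta2_bound:.4f}   ((A.4), d = 1)")
```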
Remark A.3. A similar assertion was proved in [24] for the $L_1$-norm of the second derivatives of the densities $q_n$ (under slightly different conditions). For this reason we give only a sketch of the proof of Lemma A.2, indicating only the differences in comparison with the proof of Lemma 4.1 in [24] (where the omitted details can be seen).

Proof. As before, let $q_n$ ($n \ge s$) be the density of $S_n/\sqrt{n}$, and $f_n$ its characteristic function. Let also $p_n$ denote the density of $S_n$. There is $r \ge s$ such that
$$|t|\,f_n(t) \in L_1(\mathbb{R}^d), \quad \text{for } n \ge r. \tag{A.12}$$
Indeed, writing $x = (x', x_j)$, $t = (t', t_j)$ and denoting by $\hat{p}_s$ the characteristic function corresponding to $p_s$, we have
$$\hat{p}_s(t) = \int_{\mathbb{R}^{d-1}} e^{i\langle t', x'\rangle}\,dx' \int_{\mathbb{R}} p_s(\dots, x_j, \dots)\,e^{it_j x_j}\,dx_j, \tag{A.13}$$
and since
$$v_j(x_j) := \frac{\partial}{\partial x_j}\,p_s(\dots, x_j, \dots) \in L_1(\mathbb{R}) \tag{A.14}$$
(fixing the other variables), we can integrate by parts in the inner integral in (A.13):
$$\int_{\mathbb{R}} p_s(\dots, x_j, \dots)\,e^{it_j x_j}\,dx_j = -\frac{1}{it_j} \int_{\mathbb{R}} v_j(x_j)\,e^{it_j x_j}\,dx_j,$$
and (A.12) follows for large enough $n$.

In view of (A.12), for $n \ge r$ we can write down the inverse Fourier transform
$$q_n(x) = \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} f_n(t)\,e^{-i\langle t, x\rangle}\,dt \tag{A.15}$$
and differentiate under the integral sign in (A.15). The condition $f_n \in L_1(\mathbb{R}^d)$ ($n \ge r$) and Assumption 4(i) ensure the hypotheses of [25, Theorem 19.2, Ch. 4]. By this theorem,
$$q_n(x) = \sum_{k=0}^{d} n^{-k/2} P_k(x)\,\varphi_{0,M}(x) + o\left(\frac{1}{n^{d/2}}\right), \quad x \in \mathbb{R}^d \ (\text{as } n \to \infty), \tag{A.16}$$
where $P_k$, $k = 0, \dots, d$, are certain polynomials and $\varphi_{0,M}$ is the normal density with zero mean and covariance matrix $M$ (see Assumption (i)).

Let $\psi_n$ denote the Fourier transform of $D_i h_n$, where
$$h_n(x) := q_n(x) - \sum_{k=0}^{d} n^{-k/2} P_k(x)\,\varphi_{0,M}(x). \tag{A.17}$$
Using (A.15), (A.16), and [25, Lemma 7.2, Ch. 2], we can obtain that for each fixed $i$
$$\psi_n(t) = t_i \left[f_n(t) - \sum_{k=0}^{d} n^{-k/2} \hat{P}_k(t) \exp\left(-\frac{1}{2}\langle t, Mt\rangle\right)\right], \tag{A.18}$$
with certain polynomials $\hat{P}_k$. By arguments similar to those given in [24], it follows from (A.18) that there exist constants $\beta > 0$, $c < \infty$ such that
$$\int_{|t| \le \beta\sqrt{n}} |\psi_n(t)|\,dt \le \frac{c}{n^{d/2}} \quad (n \ge r). \tag{A.19}$$
On the other hand, to prove that
$$\int_{|t| > \beta\sqrt{n}} |\psi_n(t)|\,dt = o\left(\frac{1}{n^{d/2}}\right), \quad \text{as } n \to \infty, \tag{A.20}$$
it is sufficient to show that
$$\int_{|t| > \beta\sqrt{n}} |t_i|\,|f_n(t)|\,dt = O\left(\frac{1}{n^{d/2}}\right), \quad \text{as } n \to \infty. \tag{A.21}$$
But the last equality follows from (A.12) and the fact that
$$\sup_{|t| > \beta\sqrt{n}} \left|f_1\left(\frac{t}{\sqrt{n}}\right)\right| \le \theta < 1. \tag{A.22}$$
Expressing $D_i h_n$ in terms of $\psi_n$ (as an inverse Fourier transform) and using (A.16)–(A.20), we can establish that there exist a constant $c$ and polynomials $\tilde{P}_k(x)$, $k = 0, \dots, d$, such that for each $i \in \{1, \dots, d\}$,
$$\sup_{x \in \mathbb{R}^d} \left|D_i q_n(x) - \sum_{k=0}^{d} n^{-k/2} \tilde{P}_k(x)\,\varphi_{0,M}(x)\right| \le \frac{c}{n^{d/2}}, \quad n = r, r + 1, \dots \tag{A.23}$$
The next step is to find, for each $i \in \{1, \dots, d\}$, an upper bound for $\|D_i q_n\|_{L_1(\mathbb{R}^d)}$ which does not depend on $n \ge r$. For $n > s$, $q_n(x) = n^{d/2} p_n(\sqrt{n}\,x)$ and $p_n(x) = \int_{\mathbb{R}^d} p_s(x - t)\,p_{n-s}(t)\,dt$. Thus, using Assumptions (ii), (iii), and the corresponding theorems in [23, Appendix A], we get
$$D_i p_n(x) = \int_{\mathbb{R}^d} D_i p_s(x - t)\,p_{n-s}(t)\,dt. \tag{A.24}$$
We have
$$\|D_i q_n\|_{L_1(\mathbb{R}^d)} \le \int_{|x| \le 2\sqrt{n}\,\alpha} |D_i q_n|\,dx + \int_{|x| > 2\sqrt{n}\,\alpha} |D_i q_n|\,dx, \tag{A.25}$$
where $\alpha$ is the constant from Assumption (iv). The first summands on the right-hand side of (A.25) are uniformly bounded in $n \ge r$ due to (A.23). To bound the second terms in (A.25) we write (see (A.24))
$$\int_{|x| > 2\sqrt{n}\,\alpha} |D_i q_n|\,dx = n^{1/2} \int_{|z| > 2\alpha n} |D_i p_n(z)|\,dz \le n^{1/2} \int_{|z| > 2\alpha n} dz \int_{\mathbb{R}^d} |D_i p_s(z - t)|\,p_{n-s}(t)\,dt =: n^{1/2} I_n. \tag{A.26}$$
Now
$$I_n = \int_{|z| > 2\alpha n} dz \int_{|t| \le \alpha n} |D_i p_s(z - t)|\,p_{n-s}(t)\,dt + \int_{|z| > 2\alpha n} dz \int_{|t| > \alpha n} |D_i p_s(z - t)|\,p_{n-s}(t)\,dt. \tag{A.27}$$
From $|z| > 2\alpha n$ and $|t| \le \alpha n$ it follows that $|z - t| > \alpha n$. Thus, the first term on the right-hand side of (A.27) is less than $c'/n^{1/2}$ due to Assumption (iv). Applying the Fubini theorem and using the integrability of $D_i p_s$, we see that the second summand in (A.27) is bounded by $\mathrm{const} \cdot P(|\sum_{i=1}^{n-s} X_i| > \alpha n)$, which is $O(1/n)$ by the Chebyshev and Rosenthal inequalities.

Exploiting Lemmas A.1 and A.2, the rest of the proof of Theorem 5 in Section 3 is carried out exactly as the proof in the one-dimensional case given in [20]. The "ideality" properties of the metric $\zeta_2$,
$$\zeta_2\left(c\sum_{i=1}^{n} X_i,\ c\sum_{i=1}^{n} \tilde{X}_i\right) \le c^2 \sum_{i=1}^{n} \zeta_2(X_i, \tilde{X}_i), \tag{A.28}$$
used there hold true for random vectors (see, e.g., [14]). The general version of inequality (3.15) in [20], which relates the Kantorovich and the total variation metrics, is proved in [13, page 89]. Note that the proof presented in [20] uses the so-called convolution approach (see, e.g., [26]) and induction arguments. This method has been widely used to estimate rates of convergence in multidimensional central limit theorems (see, e.g., [13, 14, 21, 26]).
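To close, a Monte Carlo sanity check of inequality (19) for $d = 1$ (our sketch, assuming SciPy). We take $\mathcal{L} = \mathrm{Gamma}(2.1, 1)$ and $\tilde{\mathcal{L}} = \mathrm{Gamma}(2.3, 1)$, $s = 1$, so $K = \max\{3, 5.4\,C\}$ with $C \le 0.7065$ from Remark 6(i); $\mu$ is computed from (10) on a grid, and the $\zeta_2$ part of $\nu$ (after centering, see (A.2)) is replaced by the upper bound (30), which can only enlarge the right-hand side:

```python
# Illustrative sketch: checking (19) numerically in dimension 1.
import numpy as np
from scipy.stats import gamma, wasserstein_distance

rng = np.random.default_rng(5)
a1, a2 = 2.1, 2.3
gam = a1 - a2                                   # gamma = EX - EX~ = -0.2

x = np.linspace(0.0, 60.0, 120001)
dx = x[1] - x[0]
mu = np.sum(np.abs(gamma.cdf(x, a1) - gamma.cdf(x, a2))) * dx            # (10)
xc = np.linspace(-30.0, 30.0, 120001)                                    # same spacing
zeta2_bound = 0.5 * np.sum(xc**2 * np.abs(gamma.pdf(xc + a1, a1)
                                          - gamma.pdf(xc + a2, a2))) * dx  # (30)
K = max(3.0, 5.4 * 0.7065)
rhs = K * max(2.0 * mu, zeta2_bound)            # K * nu(L, L~), via (16)

reps = 20000
for n in (1, 4, 16, 64):
    xb = gamma.rvs(a1, size=(reps, n), random_state=rng).mean(axis=1)
    xtb = gamma.rvs(a2, size=(reps, n), random_state=rng).mean(axis=1)
    lhs = np.sqrt(n) * wasserstein_distance(xb, xtb + gam)
    print(n, "lhs ~", round(lhs, 4), "  rhs =", round(rhs, 4))   # lhs stays below rhs
```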
Acknowledgments

The authors thank the National System of Investigators (SNI) of CONACYT, Mexico, for the partial support of this work. The authors are grateful to the anonymous referee for valuable suggestions on the improvement of an earlier version of this paper.

References

[1] F. R. Hampel, “A general qualitative definition of robustness,” Annals of Mathematical Statistics, vol. 42, pp. 1887–1896, 1971.
[2] F. R. Hampel, E. M. Ronchetti, P. J. Rousseeuw, and W. A. Stahel, Robust Statistics: The Approach Based on Influence Functions, Wiley, New York, NY, USA, 1986.
[3] P. J. Huber, Robust Statistics, Wiley, New York, NY, USA, 1981.
[4] J. Jurečková and P. K. Sen, Robust Statistical Procedures: Asymptotics and Interrelations, Wiley, New York, NY, USA, 1996.
[5] X. He and G. Wang, “Qualitative robustness of S*-estimators of multivariate location and dispersion,” Statistica Neerlandica, vol. 51, no. 3, pp. 257–268, 1997.
[6] L. Davies, “An efficient Fréchet differentiable high breakdown multivariate location and dispersion estimator,” Journal of Multivariate Analysis, vol. 40, no. 2, pp. 311–327, 1992.
[7] D. L. Donoho and R. C. Liu, “The ‘automatic’ robustness of minimum distance functionals,” Annals of Statistics, vol. 16, pp. 552–586, 1988.
[8] X. He and D. G. Simpson, “Lower bounds for contamination bias: globally minimax versus locally linear estimation,” The Annals of Statistics, vol. 21, pp. 314–337, 1993.
[9] A. Y. Kharin and P. A. Shlyk, “Robust multivariate Bayesian forecasting under functional distortions in the chi-square metric,” Journal of Statistical Planning and Inference, vol. 139, no. 11, pp. 3842–3846, 2009.
[10] A. Cuevas, “Qualitative robustness in abstract inference,” Journal of Statistical Planning and Inference, vol. 18, no. 3, pp. 277–289, 1988.
[11] P. J. Huber, “Robust estimation of a location parameter,” Annals of Mathematical Statistics, vol. 35, pp. 73–101, 1964.
[12] R. Zielinski, “Robustness of sample mean and sample median under restrictions on outliers,” Zastosowania Matematyki / Applicationes Mathematicae, vol. 19, pp. 239–240, 1987.
[13] S. T. Rachev and L. Rüschendorf, Mass Transportation Problems, Vol. II: Applications, Springer, New York, NY, USA, 1998.
[14] V. Zolotarev, “Probability metrics,” Theory of Probability and Its Applications, vol. 28, pp. 264–287, 1983.
[15] C. G. Small, “A survey of multidimensional medians,” International Statistical Review, vol. 58, pp. 263–277, 1990.
[16] J. H. B. Kemperman, “The median of a finite measure on a Banach space,” in Statistical Data Analysis Based on the L1-Norm and Related Methods, Y. Dodge, Ed., pp. 217–230, Elsevier Science Publishers, North-Holland, 1987.
[17] D. Gervini, “Robust functional estimation using the median and spherical principal components,” Biometrika, vol. 95, no. 3, pp. 587–600, 2008.
[18] P. Chaudhuri, “Multivariate location estimation using extension of R-estimates through U-statistics type approach,” Annals of Statistics, vol. 20, pp. 897–916, 1992.
[19] I. Mizera, “Qualitative robustness and weak continuity: the extreme unction?” in Nonparametrics and Robustness in Modern Statistical Inference and Time Series Analysis: A Festschrift in Honor of Professor Jana Jurečková, J. Antoch, M. Hušková, and P. K. Sen, Eds., vol. 7 of Institute of Mathematical Statistics Collections, pp. 169–181, Institute of Mathematical Statistics, Beachwood, Ohio, USA, 2010.
[20] E. Gordienko and A. Novikov, “Probability metrics and robustness: is the sample median more robust than the sample mean?” in Proceedings of the Joint Session of the 7th Prague Symposium on Asymptotic Statistics and the 15th Prague Conference on Information Theory, Statistical Decision Functions and Random Processes, pp. 374–386, Charles University in Prague, Prague, Czech Republic, 2006.
[21] S. T. Rachev, Probability Metrics and the Stability of Stochastic Models, Wiley, Chichester, UK, 2009.
[22] A. W. van der Vaart, Asymptotic Statistics, Cambridge University Press, Cambridge, UK, 1998.
[23] R. M. Dudley, Uniform Central Limit Theorems, Cambridge University Press, Cambridge, UK, 1999.
[24] E. Gordienko, “Comparing the distributions of sums of independent random vectors,” Kybernetika, vol. 41, no. 4, pp. 519–529, 2005.
[25] R. N. Bhattacharya and R. Ranga Rao, Normal Approximation and Asymptotic Expansions, Wiley, New York, NY, USA, 1976.
[26] V. V. Senatov, “Uniform estimates of the rate of convergence in the multidimensional central limit theorem,” Theory of Probability and Its Applications, vol. 25, pp. 745–759, 1980.