THE WEIBULL DISTRIBUTION AND BENFORD'S LAW

VICTORIA CUFF, ALLISON LEWIS, AND STEVEN J. MILLER

Abstract. Benford's law states that many data sets have a bias towards lower leading digits, with a first digit of 1 about 30.1% of the time and a 9 only about 4.6% of the time. There are numerous applications, ranging from designing efficient computers to detecting tax, voter and image fraud. An important open problem is to determine which common probability distributions are close to Benford's law. We show that the Weibull distribution, for many values of its parameters, is close to Benford's law, and derive an explicit formula measuring the deviations. As the Weibull distribution arises in many problems, especially in survival analysis, our results provide additional arguments for the prevalence of Benford behavior.

1. Introduction

For any positive number $x$ and base $B$, we can represent $x$ in scientific notation as $x = S_B(x) \cdot B^{k(x)}$, where $S_B(x) \in [1, B)$ is called the significand¹ of $x$ and the integer $k(x)$ represents the exponent. Benford's Law of Leading Digits proposes a distribution for the significands which holds for many data sets, and states that the proportion of values beginning with digit $d$ is approximately
\[
\mathrm{Prob}(\text{first digit is } d \text{ base } B) \;=\; \log_B\left(\frac{d+1}{d}\right); \tag{1.1}
\]
more generally, the proportion with significand at most $s$ base $B$ is
\[
\mathrm{Prob}(1 \le S_B \le s) \;=\; \log_B s. \tag{1.2}
\]
In particular, in base 10 the probability that the first digit is a 1 is about 30.1% (and not the 11% one would expect if each digit from 1 to 9 were equally likely). This leading digit irregularity was first discovered by Simon Newcomb in 1881 [Ne], who noticed that the earlier pages in books of logarithm tables were more worn than later pages. In 1938 Frank Benford [Ben] observed the same digit bias in a variety of data sets. Benford studied the distribution of the first digits of 20 sets of data with over 20,000 total observations, including river lengths, populations, and mathematical sequences.
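As a quick illustration of (1.1) and (1.2), the nine leading-digit probabilities can be computed directly; the sketch below is ours (the function name is an assumption, not from the paper):

```python
import math

def benford_prob(d, base=10):
    """Benford probability that the leading digit is d, equation (1.1)."""
    return math.log((d + 1) / d, base)

probs = {d: benford_prob(d) for d in range(1, 10)}

# The nine probabilities form a full distribution, heavily biased toward 1:
# about 30.1% for digit 1 versus about 4.6% for digit 9.
assert abs(sum(probs.values()) - 1.0) < 1e-9
```

Note that the probabilities telescope: summing $\log_B((d+1)/d)$ over $d = 1, \ldots, B-1$ gives $\log_B B = 1$, which is why the assertion holds exactly up to rounding.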
For a full history and description of the law, see [Hi1, Rai]. There are numerous applications of Benford's law. Two of the more famous are detecting tax and voter fraud [Me, Nig1, Nig2], but there are also applications in many other fields, ranging from round-off errors in computer science [Knu] to detecting image fraud and compression in engineering [AHMP-GQ].

It is an interesting question to determine which probability distributions lead to Benford behavior. Such knowledge gives us a deeper understanding of which natural data sets should follow Benford's law (other approaches to understanding the universality of the law range from scale invariance [Hi2] to products of random variables [MiNi1]). Leemis, Schmeiser, and Evans [LSE] ran numerical simulations on a variety of parametric survival distributions to examine conformity to Benford's Law. Among these distributions was the Weibull distribution, whose density is
\[
f(x; \alpha, \beta, \gamma) \;=\;
\begin{cases}
\dfrac{\gamma}{\alpha}\left(\dfrac{x-\beta}{\alpha}\right)^{\gamma-1} \exp\left(-\left(\dfrac{x-\beta}{\alpha}\right)^{\gamma}\right) & \text{if } x \ge \beta \\[2mm]
0 & \text{otherwise,}
\end{cases} \tag{1.3}
\]
where $\alpha, \gamma > 0$. Note that $\alpha$ adjusts the scale of the data, $\beta$ translates the input, and only $\gamma$ affects the shape of the distribution. Special cases of the Weibull include the exponential distribution ($\gamma = 1$) and the Rayleigh distribution ($\gamma = 2$).

Key words and phrases: Benford's Law, Weibull Distribution, Poisson Summation. The first and second named authors were supported by NSF Grant DMS0850577 and Williams College; the third named author was supported by NSF Grant DMS0970067.

¹The significand is sometimes called the mantissa; however, such usage is discouraged by the IEEE and others, as mantissa is used for the fractional part of the logarithm, a quantity which is also important in studying Benford's law.
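The density (1.3) and its special cases are easy to check numerically. The sketch below (our own naming, with $\beta = 0$ by default) verifies that $\gamma = 1$ recovers the exponential density and that the Rayleigh case integrates to total mass 1:

```python
import math

def weibull_pdf(x, alpha, gamma_, beta=0.0):
    """Weibull density from equation (1.3)."""
    if x < beta:
        return 0.0
    t = (x - beta) / alpha
    return (gamma_ / alpha) * t ** (gamma_ - 1.0) * math.exp(-(t ** gamma_))

# gamma_ = 1 is the exponential density (1/alpha) * exp(-x/alpha).
assert abs(weibull_pdf(1.3, 2.0, 1.0) - math.exp(-1.3 / 2.0) / 2.0) < 1e-12

# Left Riemann sum of the Rayleigh case (gamma_ = 2) over [0, 50]:
# the density carries essentially all its mass there, so the sum is near 1.
h = 1e-3
mass = sum(weibull_pdf(k * h, 2.0, 2.0) * h for k in range(int(50 / h)))
assert abs(mass - 1.0) < 1e-3
```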
The most common use of the Weibull is in survival analysis, where a random variable $X$ (modeled by the Weibull) represents the "time-to-failure", resulting in a distribution where the failure rate is modeled relative to a power of time. The Weibull distribution arises in problems in such diverse fields as food science, engineering, medical data, politics, pollution and sabermetrics [An, Ca, CB, Cr, Fr, MABF, Mik, Mi, TKD, We], to name just a few.

Our analysis generalizes the work of Miller and Nigrini [MiNi2], where the exponential case was studied in detail (see also [DL] for another approach to analyzing exponential random variables). The main ingredients come from Fourier analysis, in particular applying Poisson summation to the derivative of the cumulative distribution function of the logarithms modulo 1, $F_B$. One of the most common ways to prove a system is Benford is to show that its logarithms modulo 1 are equidistributed; we quickly sketch the proof of this equivalence (see [Dia, MiNi2, MT-B] for details). If $y_n = \log_B x_n \bmod 1$ (thus $y_n$ is the fractional part of the logarithm of $x_n$), then the significands of $B^{y_n}$ and $x_n = B^{\log_B x_n}$ are equal, as these two numbers differ by a factor of $B^k$ for some integer $k$. If now $\{y_n\}$ is equidistributed modulo 1, then by definition for any $[a, b] \subset [0, 1]$ we have $\lim_{N \to \infty} \#\{n \le N : y_n \in [a, b]\}/N = b - a$. Taking $[a, b] = [0, \log_B s]$ implies that as $N \to \infty$ the probability that $y_n \in [0, \log_B s]$ tends to $\log_B s$, which by exponentiating implies that the probability that the significand of $x_n$ is in $[1, s]$ tends to $\log_B s$. Thus Benford's Law is equivalent to $F_B(z) = z$, implying that our random variable is Benford if $F_B'(z) = 1$. Therefore, a natural way to investigate deviations from Benford behavior is to measure the deviation of $F_B'(z)$ from 1, which corresponds to the uniform distribution.
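To make the discussion concrete, the first-digit probabilities of an exponential random variable (the Weibull with $\gamma = 1$, $\alpha = 1$) can be computed exactly by summing the mass in every decade and compared with (1.1); this is a sketch under our own naming, not code from the paper:

```python
import math

def exp_first_digit_prob(d, base=10):
    """P(leading digit = d) for X ~ Exp(1): sum over decades n of
    P(d * base^n <= X < (d+1) * base^n) = e^{-d b^n} - e^{-(d+1) b^n}."""
    return sum(math.exp(-d * base ** n) - math.exp(-(d + 1) * base ** n)
               for n in range(-30, 15))

benford = [math.log10((d + 1) / d) for d in range(1, 10)]
exact = [exp_first_digit_prob(d) for d in range(1, 10)]

assert abs(sum(exact) - 1.0) < 1e-9                       # decades exhaust the mass
assert max(abs(e - b) for e, b in zip(exact, benford)) < 0.04
```

For Exp(1) the digit-1 probability works out to about 0.330 versus Benford's 0.301: close, but not exact, consistent with the behavior reported in [DL, LSE, MiNi2].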
We use the following notation for the various error terms: let $\mathcal{E}(x)$ denote an error of at most $x$ in absolute value; thus $f(z) = g(z) + \mathcal{E}(x)$ means $|f(z) - g(z)| \le x$. Our main result is the following.

Theorem 1.1. Let $X_{\alpha,0,\gamma}$ be a random variable whose density is a Weibull with parameters $\beta = 0$ and $\alpha, \gamma > 0$ arbitrary. For $z \in [0, 1]$, let $F_B(z)$ be the cumulative distribution function of $\log_B X_{\alpha,0,\gamma} \bmod 1$; thus $F_B(z) := \mathrm{Prob}(\log_B X_{\alpha,0,\gamma} \bmod 1 \in [0, z))$. Then

(1) The density of $\log_B X_{\alpha,0,\gamma} \bmod 1$, namely $F_B'(z)$, is given by
\[
F_B'(z) \;=\; 1 + 2 \sum_{m=1}^{\infty} \mathrm{Re}\left[ e^{-2\pi i m \left(z - \frac{\log \alpha}{\log B}\right)} \cdot \Gamma\left(1 + \frac{2\pi i m}{\gamma \log B}\right) \right]
\]
\[
\;=\; 1 + 2 \sum_{m=1}^{M-1} \mathrm{Re}\left[ e^{-2\pi i m \left(z - \frac{\log \alpha}{\log B}\right)} \cdot \Gamma\left(1 + \frac{2\pi i m}{\gamma \log B}\right) \right]
+ \mathcal{E}\left( \frac{2\sqrt{2}\, M (40 + \pi^2) \sqrt{\gamma \log B}}{\pi^3}\, e^{-\pi^2 M / \gamma \log B} \right). \tag{1.4}
\]
In particular, the densities of $\log_B X_{\alpha,0,\gamma} \bmod 1$ and $\log_B X_{\alpha B,0,\gamma} \bmod 1$ are equal, and thus it suffices to consider only $\alpha$ in an interval of the form $[a, aB)$ for any $a > 0$.

(2) For $M \ge \frac{\gamma \log B \cdot \log 2}{4\pi^2}$, the error from keeping only the terms with $m < M$ is
\[
|\mathcal{E}| \;\le\; \frac{2\sqrt{2}\, M (40 + \pi^2) \sqrt{\gamma \log B}}{\pi^3} \cdot e^{-\pi^2 M / \gamma \log B}. \tag{1.5}
\]

(3) In order to have an error of at most $\epsilon$ in evaluating $F_B'(z)$, it suffices to take the first $M$ terms, where
\[
M \;=\; \frac{L + \ln L + \tfrac{1}{2}}{a}, \qquad L \;=\; -\ln\left(\frac{\epsilon a}{C}\right) \;\ge\; 6, \qquad a \;=\; \frac{\pi^2}{\gamma \log B}, \tag{1.6}
\]
and
\[
C \;=\; \frac{2\sqrt{2}(40 + \pi^2)\sqrt{\gamma \log B}}{\pi^3}. \tag{1.7}
\]

The above theorem is proved in the next section. Again, as in [MiNi2], the proof involves applying Poisson summation to the derivative of the cumulative distribution function of the logarithms modulo 1, which is a natural way to measure deviations from the uniform distribution. The key idea is that if a data set satisfies Benford's Law, then the distribution of its logarithms modulo 1 will be uniform. Our series expansions are obtained by applying properties of the Gamma function.
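Theorem 1.1 can be checked numerically: $F_B'(z)$ computed directly, by summing the density of $\log_B X_{\alpha,0,\gamma} \bmod 1$ over all of its integer shifts, must agree with the truncated Gamma-function series in (1.4). The sketch below is ours (all names are assumptions); since Python's `math.gamma` is real-only, the complex Gamma function is implemented with a standard Lanczos approximation. It also illustrates the stated $\alpha \to \alpha B$ invariance:

```python
import cmath
import math

# Standard Lanczos coefficients (g = 7, 9 terms).
_L = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    """Gamma(z) for complex z via the Lanczos approximation."""
    z = complex(z)
    if z.real < 0.5:
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1 - z))
    z -= 1
    s = _L[0] + sum(_L[i] / (z + i) for i in range(1, 9))
    t = z + 7.5
    return math.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * s

def fprime_direct(z, alpha, gam, B=10):
    """F_B'(z) by summing the density of log_B X mod 1 over integer shifts:
    each term is exp(-w) * gam * log(B) * w with w = (B^(z+k)/alpha)^gam."""
    lB = math.log(B)
    total = 0.0
    for k in range(-40, 40):
        w = (B ** (z + k) / alpha) ** gam
        total += math.exp(-w) * gam * lB * w
    return total

def fprime_series(z, alpha, gam, B=10, M=12):
    """Truncated Gamma-function series from equation (1.4)."""
    la = math.log(alpha) / math.log(B)
    return 1 + 2 * sum((cmath.exp(-2j * math.pi * m * (z - la))
                        * cgamma(1 + 2j * math.pi * m / (gam * math.log(B)))).real
                       for m in range(1, M))

for alpha, gam in [(1.0, 1.0), (2.0, 1.5)]:
    for z in (0.0, 0.3, 0.75):
        assert abs(fprime_direct(z, alpha, gam) - fprime_series(z, alpha, gam)) < 1e-6
        # alpha and alpha * B give the same density (Theorem 1.1(1)).
        assert abs(fprime_series(z, alpha, gam)
                   - fprime_series(z, 10 * alpha, gam)) < 1e-9
```

The rapid decay of $|\Gamma(1 + 2\pi i m / (\gamma \log B))|$ is what makes the truncation at a dozen terms already accurate to many digits here.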
For further analysis, our series expansion for the derivative was compared to the uniform distribution through a Kolmogorov-Smirnov test; see Figure 1 for a contour plot of the discrepancy. Note the good fit observed between the two distributions when $\gamma = 1$ (representing the exponential distribution), which has already been proven to be a close fit to the Benford distribution [DL, LSE, MiNi2].

Figure 1. Kolmogorov-Smirnov test. Left: $\gamma \in [0, 15]$; Right: $\gamma \in [0, 2]$. As $\gamma$ (the shape parameter, on the x-axis) increases, the Weibull distribution is no longer a good fit compared to the uniform. Note that $\alpha$ (the scale parameter, on the y-axis) has less of an effect on the overall conformance.

The Kolmogorov-Smirnov metric gives a good comparison because it allows us to compare the distributions in terms of both parameters, $\gamma$ and $\alpha$. We also look at two other measures of closeness, the $L^1$-norm and the $L^2$-norm, both of which also test the differences between (1.4) and the uniform distribution; see Figures 2 and 3. The $L^1$-norm of $f - g$ is $\int_0^1 |f(t) - g(t)|\,dt$, which puts equal weight on all deviations, while the $L^2$-norm is given by $\int_0^1 |f(t) - g(t)|^2\,dt$, which, unlike the $L^1$-norm, puts more weight on larger differences. The combination of the Kolmogorov-Smirnov tests and the $L^1$ and $L^2$ norms shows us that the Weibull distribution almost exhibits Benford behavior when $\gamma$ is modest; as $\gamma$ increases, the Weibull no longer conforms to the expected leading digit probabilities. The scale parameter $\alpha$ does have a small effect on the conformance as well, but not nearly to the same extent as the shape parameter $\gamma$. Fortunately, in many applications the shape parameter $\gamma$ is not too large
(it is frequently less than 2 in the Weibull references cited earlier), and thus our work provides additional support for the prevalence of Benford behavior.

Figure 2. $L^1$-norm of $F_B'(z) - 1$. Left: $\gamma \in [0.5, 10]$; Right: $\gamma \in [0.5, 2]$. The closer $\gamma$ is to zero, the better the fit. As $\gamma$ increases the cumulative Weibull distribution is no longer a good fit compared to 1. The $L^1$-norm is independent of $\alpha$.

Figure 3. $L^2$-norm of $F_B'(z) - 1$. Left: $\gamma \in [0.5, 10]$; Right: $\gamma \in [0.5, 2]$. The closer $\gamma$ is to zero, the better the fit. As $\gamma$ increases the cumulative Weibull distribution is no longer a good fit compared to 1. The $L^2$-norm is independent of $\alpha$.

2. Proof of Theorem 1.1

To prove Theorem 1.1, it suffices to study the distribution of $\log_B X_{\alpha,0,\gamma} \bmod 1$ when $X_{\alpha,0,\gamma}$ has the standard Weibull distribution; see (1.3). The analysis is aided by the fact that the cumulative distribution function for a Weibull random variable has a nice closed form expression; for $X_{\alpha,0,\gamma}$ the cumulative distribution function is $\mathcal{F}_{\alpha,0,\gamma}(x) = 1 - \exp(-(x/\alpha)^{\gamma})$. Let $[a, b] \subset [0, 1]$. Then
\[
\mathrm{Prob}(\log_B X_{\alpha,0,\gamma} \bmod 1 \in [a, b]) \;=\; \sum_{k=-\infty}^{\infty} \mathrm{Prob}(\log_B X_{\alpha,0,\gamma} \in [a + k, b + k])
\;=\; \sum_{k=-\infty}^{\infty} \mathrm{Prob}(X_{\alpha,0,\gamma} \in [B^{a+k}, B^{b+k}])
\]
\[
\;=\; \sum_{k=-\infty}^{\infty} \left( \exp\left(-\left(\frac{B^{a+k}}{\alpha}\right)^{\gamma}\right) - \exp\left(-\left(\frac{B^{b+k}}{\alpha}\right)^{\gamma}\right) \right). \tag{2.1}
\]

2.1. Proof of the Series Expansion. We first prove the claimed series expansion in Theorem 1.1.

Proof of Theorem 1.1(1). It suffices to investigate (2.1) in the special case when $a = 0$ and $b = z$, since for any other interval $[a, b]$ we may determine its probability by subtracting the probability of $[0, a]$ from that of $[0, b]$.
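The wrapped sum (2.1) is easy to evaluate numerically, which gives a direct check on the Kolmogorov-Smirnov picture of Section 1: $\sup_z |F_B(z) - z|$ is small for $\gamma = 1$ and large for large $\gamma$. A minimal sketch, assuming $\alpha = 1$, $B = 10$, and our own function names:

```python
import math

def F_B(z, alpha, gam, B=10):
    """Prob(log_B X mod 1 <= z) for X ~ Weibull(alpha, 0, gam),
    via the wrapped sum (2.1) with [a, b] = [0, z]."""
    return sum(math.exp(-((B ** k / alpha) ** gam))
               - math.exp(-((B ** (k + z) / alpha) ** gam))
               for k in range(-30, 30))

def ks_distance(alpha, gam, B=10, grid=400):
    """sup_z |F_B(z) - z| on a grid; 0 would be exact Benford behavior."""
    return max(abs(F_B(j / grid, alpha, gam, B) - j / grid)
               for j in range(grid + 1))

assert abs(F_B(1.0, 1.0, 1.0) - 1.0) < 1e-12   # total mass
assert ks_distance(1.0, 1.0) < 0.05            # exponential: nearly Benford
assert ks_distance(1.0, 5.0) > 0.10            # large shape gamma: far from Benford
```

The telescoping structure of the summands makes the total-mass check exact up to rounding, and the two distance values reproduce, at a single slice of the parameter plane, the qualitative behavior shown in Figure 1.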
Thus, we study the cumulative distribution function of $\log_B X_{\alpha,0,\gamma} \bmod 1$ for $z \in [0, 1]$:
\[
F_B(z) \;:=\; \mathrm{Prob}(\log_B X_{\alpha,0,\gamma} \bmod 1 \in [0, z))
\;=\; \sum_{k=-\infty}^{\infty} \left( \exp\left(-\left(\frac{B^{k}}{\alpha}\right)^{\gamma}\right) - \exp\left(-\left(\frac{B^{z+k}}{\alpha}\right)^{\gamma}\right) \right). \tag{2.2}
\]
This series expansion is rapidly converging, and Benford behavior for $X_{\alpha,0,\gamma}$ is equivalent to the rapidly converging series in (2.2) equalling $z$ for all $z$. As stated earlier, Benford behavior is equivalent to $F_B(z) = z$ for all $z \in [0, 1]$, and thus a natural way to test this is to compare $F_B'(z)$ to 1. As in [MiNi2], studying the derivative $F_B'(z)$ is an easier way to approach this problem, because we obtain a simpler Fourier transform than the Fourier transform of $e^{-(B^{z+k}/\alpha)^{\gamma}} - e^{-(B^{k}/\alpha)^{\gamma}}$. We can then analyze the resulting Fourier transform by applying Poisson summation.

As in [MiNi2], we use the fact that the derivative of the infinite sum $F_B(z)$ is the sum of the derivatives of the individual summands. This is justified by the rapid decay of the summands, yielding
\[
F_B'(z) \;=\; \sum_{k=-\infty}^{\infty} \frac{1}{\alpha} \exp\left(-\left(\frac{B^{z+k}}{\alpha}\right)^{\gamma}\right) \left(\frac{B^{z+k}}{\alpha}\right)^{\gamma-1} B^{z+k}\, \gamma \log B
\;=\; \sum_{k=-\infty}^{\infty} \frac{1}{\alpha} \exp\left(-\left(\frac{q B^{k}}{\alpha}\right)^{\gamma}\right) \left(\frac{q B^{k}}{\alpha}\right)^{\gamma-1} q B^{k}\, \gamma \log B, \tag{2.3}
\]
where for $z \in [0, 1]$ we use the change of variables $q = B^{z}$.

We introduce
\[
H(t) \;=\; \frac{1}{\alpha} \exp\left(-\left(\frac{q B^{t}}{\alpha}\right)^{\gamma}\right) \left(\frac{q B^{t}}{\alpha}\right)^{\gamma-1} q B^{t}\, \gamma \log B,
\]
where $q \ge 1$ as $q = B^{z}$ with $z \ge 0$. Since $H(t)$ decays rapidly, we may apply Poisson summation (see [MT-B, SS]); thus
\[
\sum_{k=-\infty}^{\infty} H(k) \;=\; \sum_{k=-\infty}^{\infty} \widehat{H}(k), \tag{2.4}
\]
where $\widehat{H}$ is the Fourier transform of $H$: $\widehat{H}(u) = \int_{-\infty}^{\infty} H(t) e^{-2\pi i t u}\, dt$. Therefore
\[
F_B'(z) \;=\; \sum_{k=-\infty}^{\infty} H(k) \;=\; \sum_{k=-\infty}^{\infty} \widehat{H}(k)
\;=\; \sum_{k=-\infty}^{\infty} \int_{-\infty}^{\infty} \frac{1}{\alpha} \exp\left(-\left(\frac{q B^{t}}{\alpha}\right)^{\gamma}\right) \left(\frac{q B^{t}}{\alpha}\right)^{\gamma-1} q B^{t}\, \gamma \log B \cdot e^{-2\pi i t k}\, dt. \tag{2.5}
\]
We change variables again, setting
\[
w \;=\; \left(\frac{q B^{t}}{\alpha}\right)^{\gamma}, \qquad \text{or} \qquad t \;=\; \log_B\left(\frac{\alpha w^{1/\gamma}}{q}\right), \tag{2.6}
\]
so that
\[
dw \;=\; \frac{1}{\alpha} \left(\frac{q B^{t}}{\alpha}\right)^{\gamma-1} q B^{t}\, \gamma \log B\, dt. \tag{2.7}
\]
Then
\[
F_B'(z) \;=\; \sum_{k=-\infty}^{\infty} \int_{0}^{\infty} e^{-w} \exp\left(-2\pi i k \log_B\left(\frac{\alpha w^{1/\gamma}}{q}\right)\right) dw
\;=\; \sum_{k=-\infty}^{\infty} \int_{0}^{\infty} e^{-w} \left(\frac{\alpha w^{1/\gamma}}{q}\right)^{-2\pi i k/\log B} dw
\]
\[
\;=\; \sum_{k=-\infty}^{\infty} \left(\frac{\alpha}{q}\right)^{-2\pi i k/\log B} \int_{0}^{\infty} e^{-w}\, w^{-2\pi i k/\gamma \log B}\, dw
\;=\; \sum_{k=-\infty}^{\infty} \left(\frac{\alpha}{q}\right)^{-2\pi i k/\log B} \Gamma\left(1 - \frac{2\pi i k}{\gamma \log B}\right), \tag{2.8}
\]
where we have used the definition of the $\Gamma$-function:
\[
\Gamma(s) \;=\; \int_0^{\infty} e^{-u} u^{s-1}\, du, \qquad \mathrm{Re}(s) > 0. \tag{2.9}
\]
As $\Gamma(1) = 1$, we have
\[
F_B'(z) \;=\; 1 + \sum_{k=1}^{\infty} \left[ \left(\frac{q}{\alpha}\right)^{2\pi i k/\log B} \Gamma\left(1 - \frac{2\pi i k}{\gamma \log B}\right) + \left(\frac{q}{\alpha}\right)^{-2\pi i k/\log B} \Gamma\left(1 + \frac{2\pi i k}{\gamma \log B}\right) \right]. \tag{2.10}
\]
As in [MiNi2], the above series expansion is rapidly convergent. As $q = B^{z}$ we have
\[
\left(\frac{q}{\alpha}\right)^{2\pi i k/\log B} \;=\; \cos\left(2\pi k z - 2\pi k \frac{\log \alpha}{\log B}\right) + i \sin\left(2\pi k z - 2\pi k \frac{\log \alpha}{\log B}\right), \tag{2.11}
\]
which gives a Fourier series expansion for $F_B'(z)$ with coefficients arising from special values of the $\Gamma$-function.

Using properties of the $\Gamma$-function we are able to improve (2.10). If $y \in \mathbb{R}$ then from (2.9) we have $\Gamma(1 - iy) = \overline{\Gamma(1 + iy)}$ (where the bar denotes complex conjugation). Thus the $k$th summand in (2.10) is the sum of a number and its complex conjugate, which is simply twice the real part. We also use the following relationship:
\[
|\Gamma(1 + ix)|^2 \;=\; \frac{\pi x}{\sinh(\pi x)} \;=\; \frac{2\pi x}{e^{\pi x} - e^{-\pi x}}. \tag{2.12}
\]
Writing the summands in (2.10) as $2\,\mathrm{Re}\left[e^{-2\pi i k\left(z - \frac{\log \alpha}{\log B}\right)} \cdot \Gamma\left(1 + \frac{2\pi i k}{\gamma \log B}\right)\right]$, (2.10) becomes
\[
F_B'(z) \;=\; 1 + 2 \sum_{k=1}^{M-1} \mathrm{Re}\left[ e^{-2\pi i k\left(z - \frac{\log \alpha}{\log B}\right)} \cdot \Gamma\left(1 + \frac{2\pi i k}{\gamma \log B}\right) \right]
+ 2 \sum_{k=M}^{\infty} \mathrm{Re}\left[ e^{-2\pi i k\left(z - \frac{\log \alpha}{\log B}\right)} \cdot \Gamma\left(1 + \frac{2\pi i k}{\gamma \log B}\right) \right]. \tag{2.13}
\]
In the exponential argument above there is no change in replacing $\alpha$ with $\alpha B$, as this changes the argument by $2\pi k$. Thus it suffices to consider $\alpha \in [a, aB)$ for any $a > 0$. □

This proof demonstrates the power of using Poisson summation in Benford's law problems, as it allows us to convert a slowly convergent series expansion into a rapidly converging one.

2.2.
Bounding the Error. We first estimate the contribution to $F_B'(z)$ from the tail, say from the terms with $k \ge M$. We do not attempt to derive the sharpest bounds possible, but rather highlight the method in a general enough case to provide useful estimates.

Proof of Theorem 1.1(2). We must bound
\[
2 \sum_{k=M}^{\infty} \mathrm{Re}\left[ e^{-2\pi i k\left(z - \frac{\log \alpha}{\log B}\right)} \cdot \Gamma\left(1 + \frac{2\pi i k}{\gamma \log B}\right) \right], \tag{2.14}
\]
where $\Gamma(1 + iu) = \int_0^{\infty} e^{-x} x^{iu}\, dx = \int_0^{\infty} e^{-x} e^{iu \log x}\, dx$. Note that in our case $u = \frac{2\pi k}{\gamma \log B}$. As $u$ increases there is more oscillation and therefore more cancelation, resulting in a smaller value for the integral. Since $|e^{i\theta}| = 1$, if we take absolute values we have $|e^{-2\pi i k(z - \frac{\log \alpha}{\log B})}| = 1$, and thus we may ignore this factor in computing an upper bound. Using the standard property (2.12) of the Gamma function,
\[
|\Gamma(1 + ix)|^2 \;=\; \frac{\pi x}{\sinh(\pi x)} \;=\; \frac{2\pi x}{e^{\pi x} - e^{-\pi x}}, \qquad \text{where } x = \frac{2\pi k}{\gamma \log B}. \tag{2.15}
\]
This yields
\[
|\mathcal{E}| \;\le\; 2 \sum_{k=M}^{\infty} 1 \cdot \left( \frac{4\pi^2 k}{\gamma \log B} \cdot \frac{1}{e^{2\pi^2 k/\gamma \log B} - e^{-2\pi^2 k/\gamma \log B}} \right)^{1/2}. \tag{2.16}
\]
Let $v = e^{2\pi^2 k/\gamma \log B}$. We overestimate our error term by removing the difference of the exponentials in the denominator: simple algebra shows that $v - \frac{1}{v} \ge \frac{v}{2}$ once $v \ge \sqrt{2}$. For us this means $e^{2\pi^2 k/\gamma \log B} \ge \sqrt{2}$, allowing us to simplify the denominator whenever $k \ge \frac{\gamma \log B \cdot \log 2}{4\pi^2}$. Substituting this into (2.16), and then bounding $\sqrt{k}$ by $k$ and the sum by the corresponding integral, we find
\[
|\mathcal{E}| \;\le\; 2 \sum_{k=M}^{\infty} \left( \frac{8\pi^2 k}{\gamma \log B} \right)^{1/2} e^{-\pi^2 k/\gamma \log B}
\;=\; \frac{4\sqrt{2}\,\pi}{\sqrt{\gamma \log B}} \sum_{k=M}^{\infty} \sqrt{k}\, e^{-\pi^2 k/\gamma \log B}
\;\le\; \frac{4\sqrt{2}\,\pi}{\sqrt{\gamma \log B}} \int_{M}^{\infty} u\, e^{-\pi^2 u/\gamma \log B}\, du. \tag{2.17}
\]
After integrating by parts, we have the following (with the only restriction being $M \ge \frac{\gamma \log B \cdot \log 2}{4\pi^2}$):
\[
|\mathcal{E}| \;\le\; \frac{2\sqrt{2}\, M (40 + \pi^2) \sqrt{\gamma \log B}}{\pi^3} \cdot e^{-\pi^2 M/\gamma \log B}. \tag{2.18}
\]
Equation (2.18) provides the error for a given truncation, and is the error listed in (1.4). □

Proof of Theorem 1.1(3).
Given the estimation of the error term from above, we now ask the related question: given an $\epsilon > 0$, how large must $M$ be so that the first $M$ terms give $F_B'(z)$ accurately to within $\epsilon$ of the true value? Let
\[
C \;=\; \frac{2\sqrt{2}(40 + \pi^2)\sqrt{\gamma \log B}}{\pi^3} \qquad \text{and} \qquad a \;=\; \frac{\pi^2}{\gamma \log B}.
\]
We must choose $M$ so that $C M e^{-aM} \le \epsilon$, or equivalently
\[
C\,(aM)\, e^{-aM} \;\le\; \epsilon a. \tag{2.19}
\]
As this is a transcendental equation in $M$, we do not expect a nice closed form solution, but we can obtain a closed form expression for a bound on $M$; for any specific choices of $C$ and $a$ we can easily numerically approximate $M$. We let $u = aM$, giving
\[
u e^{-u} \;\le\; \frac{\epsilon a}{C}. \tag{2.20}
\]
With a further change of variables, we let $L = -\ln(\epsilon a / C)$ and then expand $u$ as $u = L + x$ (as the solution should be close to $L$). We find
\[
u e^{-u} \;\le\; e^{-L} \quad \Longleftrightarrow \quad \frac{(L + x)\, e^{-(L+x)}}{e^{-L}} \;\le\; 1 \quad \Longleftrightarrow \quad \frac{L + x}{e^{x}} \;\le\; 1. \tag{2.21}
\]
We try $x = \ln L + \frac{1}{2}$:
\[
\frac{L + x}{e^{x}} \;=\; \frac{L + \ln L + \frac{1}{2}}{e^{\ln L + \frac{1}{2}}} \;=\; \frac{L + \ln L + \frac{1}{2}}{L \cdot e^{1/2}} \;\le\; 1. \tag{2.22}
\]
From here, we want to determine the values of $L$ such that $\ln L \le \frac{1}{2} L$. Exponentiating, we need $L^2 \le e^{L}$. As $e^{L} \ge L^3/3!$ for $L$ positive, it suffices to choose $L$ so that $L^2 \le L^3/6$, or $L \ge 6$. For $L \ge 6$ we have
\[
L + \ln L + \frac{1}{2} \;\le\; L + \frac{1}{2} L + \frac{1}{12} L \;=\; \frac{19}{12} L \;\approx\; 1.5833\, L, \tag{2.23}
\]
while
\[
L\, e^{1/2} \;\approx\; 1.6487\, L, \tag{2.24}
\]
so (2.22) holds. Therefore we can conclude that the correct cutoff value for $M$, in order to have an error of at most $\epsilon$, is
\[
M \;=\; \frac{L + \ln L + \frac{1}{2}}{a}, \qquad L \;=\; -\ln\left(\frac{\epsilon a}{C}\right) \;\ge\; 6, \qquad a \;=\; \frac{\pi^2}{\gamma \log B}, \tag{2.25}
\]
\[
C \;=\; \frac{2\sqrt{2}(40 + \pi^2)\sqrt{\gamma \log B}}{\pi^3}. \tag{2.26}
\]
□

References

[AHMP-GQ] C. T. Abdallah, G. L. Heileman, S. J. Miller, F. Pérez-González and T. Quach, Application of Benford's Law to Images, in Theory and Applications of Benford's Law (editors Steven J. Miller, Arno Berger and Ted Hill), Princeton University Press, to appear.

[An] Anonymous, Weibull Survival Probability Distribution, last modified May 11, 2009. http://www.wepapers.com/Papers/30999/Weibull Survival Probability Distribution.

[Ben] F.
Benford, The Law of Anomalous Numbers, Proceedings of the American Philosophical Society 78 (1938), 551–572.

[Ca] K. J. Carroll, On the use and utility of the Weibull model in the analysis of survival data, Controlled Clinical Trials 24 (2003), no. 6, 682–701.

[CB] O. Corzoa and N. Brachob, Application of Weibull distribution model to describe the vacuum pulse osmotic dehydration of sardine sheets, LWT - Food Science and Technology 41 (2008), no. 6, 1108–1115.

[Cr] T. S. Creasy, A method of extracting Weibull survival model parameters from filament bundle load/strain data, Composites Science and Technology 60 (2000), no. 6, 825–832.

[DL] L. Dümbgen and C. Leuenberger, Explicit bounds for the approximation error in Benford's law, Electronic Communications in Probability 13 (2008), 99–112.

[Dia] P. Diaconis, The distribution of leading digits and uniform distribution mod 1, Ann. Probab. 5 (1977), 72–81.

[Fr] S. Fry, How political rhetoric contributes to the stability of coercive rule: A Weibull model of post-abuse government survival. Paper presented at the annual meeting of the International Studies Association, Le Centre Sheraton Hotel, Montreal, Quebec, Canada, Mar 17, 2004.

[Hi1] T. Hill, The first-digit phenomenon, American Scientist 86 (1996), 358–363.

[Hi2] T. P. Hill, A Statistical Derivation of the Significant-Digit Law, Statistical Science 10 (1995), no. 4, 354–363.

[Knu] D. Knuth, The Art of Computer Programming, Volume 2: Seminumerical Algorithms, Addison-Wesley, third edition, 1997.

[LSE] L. M. Leemis, B. W. Schmeiser and D. L. Evans, Survival Distributions Satisfying Benford's Law, The American Statistician 54 (2000), no. 3.

[MABF] B. McShane, M. Adrian, E. T. Bradlow and P. S. Fader, Count Models Based on Weibull Interarrival Times, Journal of Business and Economic Statistics 26 (2006), no. 3, 369–378.

[Me] W.
Mebane, Election Forensics: The Second-Digit Benford's Law Test and Recent American Presidential Elections, Election Fraud Conference, Salt Lake City, Utah, September 29–30, 2006. http://www.umich.edu/∼wmebane/fraud06.pdf.

[Mik] P. G. Mikolaj, Environmental Applications of the Weibull Distribution Function: Oil Pollution, Science 176 (1972), no. 4038, 1019–1021.

[Mi] S. J. Miller, A derivation of the Pythagorean Won-Loss Formula in baseball, Chance Magazine 20 (2007), no. 1, 40–48.

[MiNi1] S. J. Miller and M. Nigrini, The Modulo 1 Central Limit Theorem and Benford's Law for Products, International Journal of Algebra 2 (2008), no. 3, 119–130.

[MiNi2] S. J. Miller and M. J. Nigrini, Order Statistics and Benford's Law, International Journal of Mathematics and Mathematical Sciences (2008), 1–13.

[MT-B] S. J. Miller and R. Takloo-Bighash, An Invitation to Modern Number Theory, Princeton University Press, Princeton, NJ, 2006.

[Ne] S. Newcomb, Note on the frequency of use of the different digits in natural numbers, Amer. J. Math. 4 (1881), 39–40.

[Nig1] M. J. Nigrini, Digital Analysis and the Reduction of Auditor Litigation Risk. Pages 68–81 in Proceedings of the 1996 Deloitte & Touche / University of Kansas Symposium on Auditing Problems, ed. M. Ettredge, University of Kansas, Lawrence, KS, 1996.

[Nig2] M. J. Nigrini, The Use of Benford's Law as an Aid in Analytical Procedures, Auditing: A Journal of Practice & Theory 16 (1997), no. 2, 52–67.

[Rai] R. A. Raimi, The First Digit Problem, The American Mathematical Monthly 83 (1976), no. 7, 521–538.

[SS] E. Stein and R. Shakarchi, Fourier Analysis: An Introduction, Princeton University Press, Princeton, NJ, 2003.

[TKD] Y. Terawaki, T. Katsumi and V. Ducrocq, Development of a survival model with piecewise Weibull baselines for the analysis of length of productive life of Holstein cows in Japan, Journal of Dairy Science 89 (2006), no. 10, 4058–4065.

[We] W. Weibull, A statistical distribution function of wide applicability, J. Appl. Mech. 18 (1951), 293–297.

[Yi] C. T.
Yiannoutsos, Modeling AIDS survival after initiation of antiretroviral treatment by Weibull models with changepoints, J Int AIDS Soc. 12 (2009), no. 9, doi:10.1186/1758-2652-12-9.

[ZLYM] Y. Zhao, A. H. Lee, K. K. W. Yau, and G. J. McLachlan, Assessing the adequacy of Weibull survival models: a simulated envelope approach, Journal of Applied Statistics (2011), to appear.

E-mail address: vcuff@clemson.edu
Department of Mathematics, Clemson University, Clemson, SC 29634

E-mail address: lewisa11@up.edu
Department of Mathematics, University of Portland, Portland, OR 97203

E-mail address: Steven.J.Miller@williams.edu, Steven.Miller.MC.96@aya.yale.edu
Department of Mathematics and Statistics, Williams College, Williamstown, MA 01267