When there are correlations, the form of chi-square is

\[
\chi_N^2 = \sum_{i=1}^{N} \sum_{j=1}^{N} w_{ij}\,(x_i - \bar{x}_i)(x_j - \bar{x}_j)
\]

where the $w_{ij}$ are the elements of the inverse of the covariance matrix. Ignoring the correlations and instead using the uncorrelated form is "erroneous", hence we attach a subscript $e$:

\[
\chi_{Ne}^2 \equiv \sum_{i=1}^{N} \frac{(x_i - \bar{x}_i)^2}{\sigma_i^2}
\]

Both have the expectation value $N$, but the erroneous form has a larger variance. The variance of the correct form is $2N$, whereas the variance of the erroneous form is

\[
\operatorname{var}\!\left(\chi_{Ne}^2\right) = 2N + 4 \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \rho_{ij}^2
\]

These results are found by integrating the "erroneous" chi-squares over the joint density function of the Gaussian random variables involved. In general, for $N$ degrees of freedom, the joint density is

\[
p_N(\mathbf{x}_N) = \frac{e^{-\chi_N^2/2}}{(2\pi)^{N/2}\,|\Omega_N|^{1/2}}
\]

where $\Omega_N$ is the covariance matrix and $|\Omega_N|$ is its determinant:

\[
\Omega_N =
\begin{pmatrix}
\sigma_1^2 & \rho_{12}\sigma_1\sigma_2 & \cdots & \rho_{1N}\sigma_1\sigma_N \\
\rho_{12}\sigma_1\sigma_2 & \sigma_2^2 & \cdots & \rho_{2N}\sigma_2\sigma_N \\
\vdots & \vdots & \ddots & \vdots \\
\rho_{1N}\sigma_1\sigma_N & \rho_{2N}\sigma_2\sigma_N & \cdots & \sigma_N^2
\end{pmatrix}
\]

The requirement that $\Omega_N$ be positive definite (so that its determinant is positive) constrains the range of values available to the correlation coefficients more restrictively than merely requiring each to have absolute value less than unity. For example, for $N = 3$,

\[
|\Omega_3| = \sigma_1^2\,\sigma_2^2\,\sigma_3^2\left(1 - \rho_{12}^2 - \rho_{13}^2 - \rho_{23}^2 + 2\rho_{12}\rho_{13}\rho_{23}\right)
\]

The variance of the "erroneous" chi-square can be written more simply by using the fact that the diagonal correlations are always unity and are picked up once in the summation below, whereas the off-diagonal correlations are picked up twice:

\[
\operatorname{var}\!\left(\chi_{Ne}^2\right) = 2 \sum_{i,j=1}^{N} \rho_{ij}^2
\]

Note that this is just twice the square of the Frobenius norm of the correlation matrix. So the variance of a random variable defined as

\[
\sum_{i=1}^{N} \frac{(x_i - \bar{x}_i)^2}{\sigma_i^2},
\]

where the $x_i$ are Gaussian, is always twice the square of the Frobenius norm of the correlation matrix. If there are no correlations this variance is $2N$; otherwise it is somewhat larger, as seen above.
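As a quick numerical sketch (not part of the original text, with illustrative correlation values), the $N = 3$ determinant formula can be checked against a direct determinant computation, along with the point that $|\rho_{ij}| < 1$ alone does not guarantee a valid correlation matrix:

```python
import numpy as np

def corr3(r12, r13, r23):
    """Build a 3x3 correlation matrix from the three coefficients."""
    return np.array([[1.0, r12, r13],
                     [r12, 1.0, r23],
                     [r13, r23, 1.0]])

def det3_closed(r12, r13, r23):
    """Closed form: |Omega_3| / (sigma1^2 sigma2^2 sigma3^2)."""
    return 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23

# Illustrative coefficients (chosen here, not from the source):
r12, r13, r23 = 0.5, 0.3, 0.2
print(np.linalg.det(corr3(r12, r13, r23)))  # agrees with the closed form
print(det3_closed(r12, r13, r23))

# All three coefficients satisfy |rho| < 1, yet the determinant is
# negative, so this is not a valid correlation matrix:
print(det3_closed(0.9, 0.9, -0.9))  # negative
```

The last line shows the stronger constraint in action: each pairwise correlation is admissible on its own, but the triple is not jointly realizable.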
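The Frobenius-norm result can also be checked by Monte Carlo. The sketch below (my own example, not from the source; the correlation matrix and sigmas are illustrative) draws correlated Gaussians, forms the "erroneous" chi-square, and compares its sample mean and variance against $N$ and $2\sum_{ij}\rho_{ij}^2$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative N = 3 correlation matrix (positive definite) and sigmas:
rho = np.array([[1.0, 0.5, 0.3],
                [0.5, 1.0, 0.2],
                [0.3, 0.2, 1.0]])
sigma = np.array([1.0, 2.0, 0.5])
cov = rho * np.outer(sigma, sigma)  # covariance matrix Omega_N

n_samples = 2_000_000
x = rng.multivariate_normal(np.zeros(3), cov, size=n_samples)

# "Erroneous" chi-square: ignores correlations, uses only the sigmas.
chi2_e = np.sum((x / sigma) ** 2, axis=1)

print("sample mean:", chi2_e.mean())          # close to N = 3
print("sample variance:", chi2_e.var())       # close to 2 * ||rho||_F^2
print("2 * ||rho||_F^2 =", 2 * np.sum(rho**2))
```

With no correlations the predicted variance reduces to $2N$; with the off-diagonal terms above it is larger, matching the formula in the text.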