Winter Conference Borgafjäll
The geometry of random fields in astrophysics and brain mapping
Robert Adler, Technion
Jonathan Taylor, Stanford
Keith Worsley, McGill
Euler Characteristic in 3D:

    EC = #blobs − #tunnels (handles) + #hollows

• one blob, no tunnels, no hollows:  EC = 1 − 0 + 0 = 1
• one blob with one tunnel:          EC = 1 − 1 + 0 = 0
• one blob with three tunnels:       EC = 1 − 3 + 0 = −2
• one blob with one hollow:          EC = 1 − 0 + 1 = 2
Euler Characteristic in 3D:

    EC = #blobs − #tunnels (handles) + #hollows = 2 − 1 + 0 = 1

For a union of voxels (pictured), equivalently:

    EC = #points − #edges + #faces − #cubes = 55 − 90 + 40 − 4 = 1
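The point-edge-face-cube count is easy to compute on a voxel mask. Below is a
minimal Python sketch (names and conventions mine, not from the talk): a
lattice vertex, edge or face belongs to the complex when any voxel incident to
it is present, so each count is an OR over adjacent voxel pairs.

    import numpy as np

    def _or_pairs(a, axis):
        """OR of adjacent pairs along one axis (length shrinks by 1)."""
        lo = [slice(None)] * a.ndim
        hi = [slice(None)] * a.ndim
        lo[axis], hi[axis] = slice(0, -1), slice(1, None)
        return a[tuple(lo)] | a[tuple(hi)]

    def euler_characteristic_3d(mask):
        """EC = #points - #edges + #faces - #cubes of a union of voxels."""
        m = np.pad(np.asarray(mask, bool), 1)   # pad so boundary cells count
        cubes = m.sum()
        # faces normal to each axis; edges along each axis; then vertices
        faces = sum(_or_pairs(m, ax).sum() for ax in range(3))
        edges = sum(_or_pairs(_or_pairs(m, ax1), ax2).sum()
                    for ax1, ax2 in [(0, 1), (0, 2), (1, 2)])
        points = _or_pairs(_or_pairs(_or_pairs(m, 0), 1), 2).sum()
        return int(points - edges + faces - cubes)

    print(euler_characteristic_3d(np.ones((1, 1, 1))))   # one voxel: EC = 1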
Astrophysics

Sloan Digital Sky Survey, data release 6, Aug. '07; FWHM = 19.8335.

[Figure: observed and expected Euler characteristic (EC) of the excursion set
against the Gaussian threshold, from −5 to 5 (EC scale −2000 to 2000), with
the "meat ball", "sponge" and "bubble" topology regimes marked.]
Expected EC: isotropic field, not necessarily Gaussian

Let s ∈ S ⊂ ℝ^D.
Let T(s) be a smooth isotropic random field.
Let X_t = {s : T(s) ≥ t} be the excursion set inside S.
Then

    E(EC(S ∩ X_t)) = Σ_{d=0}^{D} μ_d(S) ρ_d(t),

where μ_d(S) is the d-th intrinsic volume (Minkowski functional) of S and
ρ_d(t) is the d-th EC density.
Isotropic Gaussian random field in 3D

Suppose T(s) = Z(s), s ∈ S ⊂ ℝ^3, is an isotropic Gaussian random field with
Z(s) ∼ N(0, 1) and λ^2 I_{3×3} = Var(∂Z/∂s). Then

    E(EC(S ∩ X_t)) = EC(S) × ∫_t^∞ (2π)^{−1/2} e^{−z^2/2} dz
                   + 2 Diameter(S) × (λ/(2π)) e^{−t^2/2}
                   + ½ Area(S) × (λ^2/(2π)^{3/2}) t e^{−t^2/2}
                   + Volume(S) × (λ^3/(2π)^2) (t^2 − 1) e^{−t^2/2}.

EC(S), Diameter(S), Area(S) and Volume(S) are the intrinsic volumes or
Minkowski functionals of S; their coefficients are the EC densities. Diameter
is averaged over rotations: e.g. for a box of size a × b × c,
2 Diameter = a + b + c, sometimes used by airlines.
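As a sketch, the 3D formula coded directly for a box-shaped search region
(function name mine; the box's intrinsic volumes are EC = 1,
2 Diameter = a + b + c, Area = 2(ab + bc + ca), Volume = abc):

    import numpy as np
    from scipy.stats import norm

    def expected_ec_box(a, b, c, lam, t):
        """E(EC(S ∩ X_t)) for isotropic Z ~ N(0,1) with Sd(dZ/ds) = lam,
        threshold t, box of size a x b x c."""
        g = np.exp(-t**2 / 2)
        return (norm.sf(t)                                             # EC(S) = 1
                + (a + b + c) * lam / (2 * np.pi) * g                  # 2 Diameter
                + (a*b + b*c + c*a) * lam**2 / (2*np.pi)**1.5 * t * g  # (1/2) Area
                + a*b*c * lam**3 / (2*np.pi)**2 * (t**2 - 1) * g)      # Volume

    print(expected_ec_box(10, 10, 10, lam=1.0, t=3.0))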
Intrinsic volume of a set with smooth boundary

Let C(s) be the (D−1) × (D−1) inside curvature matrix at s ∈ ∂S, the boundary
of S. To compute the intrinsic volumes, we need the det-traces of a square
matrix: for a d × d symmetric matrix A, let detr_j(A) denote the sum of the
determinants of all j × j principal minors of A, so that detr_d(A) = det(A),
detr_1(A) = tr(A), and we define detr_0(A) = 1. Let a_d = 2π^{d/2}/Γ(d/2) be
the (d−1)-dimensional Hausdorff (surface) measure of the unit (d−1)-sphere in
ℝ^d. For d = 0, …, D−1 the d-th intrinsic volume of S is

    μ_d(S) = (1/a_{D−d}) ∫_{∂S} detr_{D−1−d}{C(s)} ds,

and μ_D(S) = |S|, the Lebesgue measure of S. Note that μ_0(S) = EC(S) by the
Gauss-Bonnet Theorem, and μ_{D−1}(S) is half the surface area of S.
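Numerically, detr_j(A) is just the j-th elementary symmetric function of the
eigenvalues of A; a small sketch (function name mine):

    import numpy as np

    def detr(A, j):
        """detr_j(A): sum of determinants of all j x j principal minors of a
        symmetric matrix A; detr_0 = 1, detr_1 = trace, detr_d = det."""
        eig = np.linalg.eigvalsh(A)
        # prod_i (x + eig_i) = x^d + detr_1 x^{d-1} + ... + detr_d
        return np.poly(-eig)[j]

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    print(detr(A, 0), detr(A, 1), detr(A, 2))   # 1.0, 5.0 (= tr), 5.0 (= det)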
EC density of an isotropic Gaussian random field (Adler, 1981)

In general, let He_j(t) be the Hermite polynomial of degree j; then

    ρ_d(t) = λ^d (2π)^{−(d+1)/2} He_{d−1}(t) e^{−t^2/2}
           = ( −(λ/√(2π)) ∂/∂t )^d P(Z ≥ t).

Curiously, this result depends on the spatial correlation function only
through its curvature at the origin. Let h(r) = Cor(Z(s), Z(s + r)); then

    ḧ(0) = −λ^2 I_{D×D}.

[Figure: the correlation function h(r), with h(0) = 1; only the curvature at
r = 0 enters: "What happens down here doesn't matter!"]
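The Gaussian EC densities are one line of code with probabilists' Hermite
polynomials; a sketch (function name mine) using NumPy's hermite_e module:

    import numpy as np
    from numpy.polynomial.hermite_e import hermeval
    from scipy.stats import norm

    def rho(d, t, lam=1.0):
        """rho_d(t) = lam^d (2 pi)^(-(d+1)/2) He_{d-1}(t) exp(-t^2/2), d >= 1;
        rho_0(t) = P(Z >= t)."""
        if d == 0:
            return norm.sf(t)
        He = hermeval(t, [0.0] * (d - 1) + [1.0])   # He_{d-1}(t)
        return lam**d * (2*np.pi)**(-(d + 1)/2) * He * np.exp(-t**2 / 2)

    # He_0 = 1, He_1 = t, He_2 = t^2 - 1 recover the 3D formula's densities
    print([rho(d, 3.0) for d in range(4)])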
Brain Imaging

[Figure: Z = white noise * filter; white noise smoothed with an isotropic
Gaussian filter whose FWHM is marked.]

If Z(s) is continuous white noise smoothed with an isotropic Gaussian filter
of Full Width at Half Maximum FWHM, then λ = √(4 log 2)/FWHM, and

    E(EC(S ∩ X_t)) = EC(S) ∫_t^∞ (2π)^{−1/2} e^{−z^2/2} dz
                   + 2 (Diameter(S) (4 log 2)^{1/2}/FWHM) (2π)^{−1} e^{−t^2/2}
                   + ½ (Area(S) (4 log 2)/FWHM^2) (2π)^{−3/2} t e^{−t^2/2}
                   + (Volume(S) (4 log 2)^{3/2}/FWHM^3) (2π)^{−2} (t^2 − 1) e^{−t^2/2}.

Each term is Resels_d(S) × EC_d(t), d = 0, …, 3: the resels (resolution
elements) Resels_d(S) absorb the FWHM factors, and the EC densities EC_d(t)
carry the threshold.
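In resel units the only field-dependent input is the FWHM. A hypothetical
usage, assuming the expected_ec_box sketch above:

    import numpy as np
    fwhm = 10.0                               # smoothing FWHM in voxels (made up)
    lam = np.sqrt(4 * np.log(2)) / fwhm       # lambda = sqrt(4 log 2)/FWHM
    print(expected_ec_box(128, 128, 64, lam, t=4.0))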
EC densities
Proof: Morse theory:
Cover set Xt with a smooth ‘Morse’ function;
EC = #max (M) - #saddles (S) + #min (m)
M
M
S
S
M
T(s)
S
Xt
M
M
S
M
M
M
m
S
(a) EC = 4 - 4 + 1 = 1
5
0
1
2
3
4
EC
4
c
5
(b) EC = 4 - 2 + 0 = 2
6
M
M
3
M
M
2
M
b
1
a
0
S
0
d
2
t
4
6
(c) EC = 4 – 0 + 0 = 4
(d) EC = 1 – 0 + 0 = 1
Proof continued …

Now use the random field itself as the Morse function:

    EC(S ∩ X_t) = Σ_s 1{T ≥ t} 1{Ṫ = 0} sign det(−T̈) + boundary.

The beauty is that we have given the EC a point-set representation, i.e. a sum
over points. This makes it easy to move the expectation inside the summation:

    E(EC(S ∩ X_t)) = Σ_s E( 1{T ≥ t} 1{Ṫ = 0} sign det(−T̈) ) + boundary.

The summation becomes μ_D(S) = |S|, and its coefficient is the EC density:

    ρ_D(t) = E( 1{T ≥ t} det(−T̈) | Ṫ = 0 ) P(Ṫ = 0).

In the Gaussian case T = Z, so T, Ṫ, T̈ are jointly Gaussian, and after lots
of messy algebra we get the result.
Proof of the factorisation

    E(EC(S ∩ X_t)) = Σ_{d=0}^{D} μ_d(S) ρ_d(t)

1. Apply Morse theory to the case when the excursion set hits the boundary of
the search region. This gives the boundary terms for d = 0, …, D−1.

2. Kinematic Fundamental Formula (Blaschke, 1935). Suppose we have two sets
A, B ⊂ ℝ^D; A moves under rigid motion (translation, rotation) but B is
fixed. Then

    E(EC(A ∩ B)) = Σ_{d=0}^{D} μ_d(A) × μ_{D−d}(B) c_d.

Buffon's needle (1757): A = needle, B = cracks in the floor; EC = 0 or 1, so

    E(EC(A ∩ B)) = P(A ∩ B ≠ ∅) = 2/π.

Random fields: A = S, B = X_t (extended out to infinity) → the EC density
ρ_d(t).
Proof continued

    E(EC(S ∩ X_t)) = Σ_{d=0}^{D} μ_d(S) ρ_d(t)

3. Hadwiger's Theorem (1957): suppose φ(S), S ⊂ ℝ^D, is a set functional that
is invariant under translations and rotations of S, and satisfies the
additivity property

    φ(A ∪ B) = φ(A) + φ(B) − φ(A ∩ B).

All the intrinsic volumes φ(S) = μ_d(S) satisfy these properties. Hadwiger's
result is stronger: φ(S) must be a linear combination of intrinsic volumes:

    φ(S) = Σ_{d=0}^{D} μ_d(S) × c_d.

To complete the proof, the choice φ(S) = E(EC(S ∩ X_t)) is invariant under
translations and rotations because the random field is isotropic, and is
additive because the EC = μ_0 is additive, so it must be a linear combination
of intrinsic volumes.
Jonathan Taylor's Gaussian Kinematic Formula (2003) for functions of Gaussian
fields

Let s ∈ S ⊂ ℝ^D.
Let Z(s) = (Z_1(s), …, Z_n(s)) be i.i.d. smooth Gaussian random fields.
Let T(s) = f(Z(s)), e.g. χ^2, T, F statistics.
Let X_t = {s : T(s) ≥ t} be the excursion set inside S.
Let R_t = {z : f(z) ≥ t} be the rejection region of T.
Then

    E(EC(S ∩ X_t)) = Σ_{d=0}^{D} L_d(S) ρ_d(R_t).
Example: chi random field, 2 df

    T(s) = χ_2(s) = √( Z_1(s)^2 + Z_2(s)^2 ),  Z_1, Z_2 ∼ N(0, 1).

[Figure: search region S in (s_1, s_2) with excursion sets
X_t = {s : χ_2 ≥ t}, and rejection regions R_t = {z : χ_2 ≥ t} in (Z_1, Z_2),
at a range of thresholds t.]
Beautiful symmetry:

    E(EC(S ∩ X_t)) = Σ_{d=0}^{D} L_d(S) ρ_d(R_t)

Lipschitz-Killing curvature L_d(S): Steiner-Weyl Tube Formula (1930).
EC density ρ_d(R_t): Taylor Kinematic Formula (2003).
Here λ = Sd(∂Z/∂s).

• Put a tube of radius r about the search region λS and the rejection region
  R_t.

  [Figure: Tube(λS, r) about λS in the s-plane; Tube(R_t, r) about
  R_t = {χ_2 ≥ t} in the (Z_1, Z_2)-plane, i.e. the region {χ_2 ≥ t − r}.]

• Find the volume or probability, expand as a power series in r, and pull off
  the coefficients:

      |Tube(λS, r)| = Σ_{d=0}^{D} (π^{d/2}/Γ(d/2 + 1)) L_{D−d}(S) r^d

      P(Tube(R_t, r)) = Σ_{d=0}^{∞} ((2π)^{d/2}/d!) ρ_d(R_t) r^d
Lipschitz-Killing curvature L_d(S) of a triangle

λ = Sd(∂Z/∂s).

[Figure: Tube(λS, r) of radius r about the triangle λS.]

Steiner-Weyl Volume of Tubes Formula (1930):

    Area(Tube(λS, r)) = Σ_{d=0}^{D} (π^{d/2}/Γ(d/2 + 1)) L_{D−d}(S) r^d
                      = L_2(S) + 2 L_1(S) r + π L_0(S) r^2
                      = Area(λS) + Perimeter(λS) r + EC(λS) π r^2,

so

    L_0(S) = EC(λS),
    L_1(S) = ½ Perimeter(λS),
    L_2(S) = Area(λS).

Lipschitz-Killing curvatures are just "intrinsic volumes" or "Minkowski
functionals" in the (Riemannian) metric of the variance of the derivative of
the process.
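A quick Monte Carlo check of the Steiner formula, for the 3-4-5 triangle with
λ = 1 (so L_0 = 1, L_1 = 6, L_2 = 6; all names, the seed and the sample size
are my own choices):

    import numpy as np
    rng = np.random.default_rng(0)

    def dist_to_segment(p, a, b):
        ab, ap = b - a, p - a
        t = np.clip(ap @ ab / (ab @ ab), 0.0, 1.0)
        return np.linalg.norm(ap - np.outer(t, ab), axis=1)

    def in_triangle(p, a, b, c):
        s = lambda u, v: (v[0]-u[0])*(p[:, 1]-u[1]) - (v[1]-u[1])*(p[:, 0]-u[0])
        s1, s2, s3 = s(a, b), s(b, c), s(c, a)
        return ((s1 >= 0) & (s2 >= 0) & (s3 >= 0)
                | (s1 <= 0) & (s2 <= 0) & (s3 <= 0))

    a, b, c = np.array([0., 0.]), np.array([3., 0.]), np.array([0., 4.])
    r, n = 0.5, 500_000
    lo, hi = np.min([a, b, c], axis=0) - r, np.max([a, b, c], axis=0) + r
    p = rng.uniform(lo, hi, (n, 2))
    in_tube = (in_triangle(p, a, b, c)
               | (dist_to_segment(p, a, b) <= r)
               | (dist_to_segment(p, b, c) <= r)
               | (dist_to_segment(p, c, a) <= r))
    print(in_tube.mean() * np.prod(hi - lo))   # Monte Carlo tube area
    print(6 + 12*r + np.pi * r**2)             # Area + Perimeter r + EC pi r^2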
EC density ρ_d(χ_2 ≥ t) of the χ statistic

    T(s) = χ_2(s) = √( Z_1(s)^2 + Z_2(s)^2 )

[Figure: rejection region R_t and Tube(R_t, r) in (Z_1, Z_2); the tube is the
region {χ_2 ≥ t − r}.]

Taylor's Gaussian Tube Formula:

    P( (Z_1, Z_2) ∈ Tube(R_t, r) ) = P(χ_2 ≥ t − r) = e^{−(t−r)^2/2}
        = Σ_{d=0}^{∞} ((2π)^{d/2}/d!) ρ_d(χ_2 ≥ t) r^d
        = ρ_0(χ_2 ≥ t) + (2π)^{1/2} ρ_1(χ_2 ≥ t) r + (2π) ρ_2(χ_2 ≥ t) r^2/2 + ⋯

Matching coefficients of r^d:

    ρ_0(χ_2 ≥ t) = e^{−t^2/2}
    ρ_1(χ_2 ≥ t) = (2π)^{−1/2} t e^{−t^2/2}
    ρ_2(χ_2 ≥ t) = (2π)^{−1} (t^2 − 1) e^{−t^2/2}
    ⋮
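The matching can be checked symbolically by pulling the Taylor coefficients
off the tube probability; a sketch with sympy (loop range arbitrary):

    import sympy as sp

    t, r = sp.symbols('t r')
    tube_prob = sp.exp(-(t - r)**2 / 2)          # P(chi_2 >= t - r)
    for d in range(4):
        # rho_d = (2 pi)^(-d/2) * (d-th derivative in r at r = 0)
        rho_d = sp.simplify(sp.diff(tube_prob, r, d).subs(r, 0)
                            * (2*sp.pi)**sp.Rational(-d, 2))
        print(d, rho_d)
    # e^{-t^2/2},  t e^{-t^2/2}/sqrt(2 pi),  (t^2 - 1) e^{-t^2/2}/(2 pi), ...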
EC densities for some standard test statistics

Using the Morse theory method (1981, 1995):
• T, χ^2, F (1994)
• Scale space (1995, 2001)
• Hotelling's T^2 (1999)
• Correlation (1999)
• Roy's maximum root, maximum canonical correlation (2007)
• Wilks' Lambda (2007) (approximation only)

Using Taylor's Gaussian Kinematic Formula:
• T, χ^2, F are now one line …
• Likelihood ratio tests for cone alternatives (e.g. chi-bar, beta-bar) and
  nonnegative least-squares (2007)
• …
Strange things happen to the excursion set of the chi field when the df < D
and the threshold ≈ 0

E.g. a χ_2 field in ℝ^3:

    χ_2(s) = √( Z_1(s)^2 + Z_2(s)^2 ) = 0  ⟺  Z_1(s) = 0 and Z_2(s) = 0.

The zero set X_0 of a χ_2 field is the intersection of two smooth surfaces in
ℝ^3, which form strings that are closed loops. Can they be linked or knotted?
So far the EC has only been used as a descriptive tool. Its main use is to
detect sparse signal in a random field, where it is used to approximate
P-values of the maxima.

Bad design: 2 mins rest, 2 mins Mozart, 2 mins Eminem, 2 mins James Brown.

fMRI data: 120 scans; 3 scans each of hot, rest, warm, rest, …

[Figure: first scan of fMRI data, with time courses (0-300 seconds) at three
voxels: a highly significant effect (T = 6.59), no significant effect
(T = −0.74), and drift; plus the T statistic image for the hot − warm effect.
T(s) = (hot − warm effect)/Sd ∼ t_110 if no effect, ≈ N(0, 1).]
Detecting sparse signal in T(s) by thresholding

Model:

    T(s) = a(s) + Z(s),

where a(s) ≥ 0 is sparse signal and Z(s) ∼ N(0, 1) is a smooth Gaussian random
field. We can detect the signal locations A = {s ∈ S : a(s) > 0} by
X_t = {s ∈ S : T(s) ≥ t} for some high threshold t. We choose t so that if
there is no signal (a(s) = 0), then

    P( max_{s∈S} T(s) ≥ t ) = α

for some small α = 0.05, say. If the signal is sparse, this controls the
probability of finding a (false positive) signal outside A to something less
than α.

[Figure: T(s) with the sparse signal region A marked.]

Bonferroni?

    P( max_{s∈S} T(s) ≥ t ) ≤ #points × P(Z ≥ t),

obviously too conservative if the random field is smooth.
"EC heuristic" (Adler, 1981)

At high thresholds the holes disappear and the EC counts the number of
connected components; at even higher thresholds there is only one component,
with an EC of 1; above max T(s), X_t is empty, with an EC of 0. So at high
thresholds t,

    1{ max_{s∈S} T(s) ≥ t } ≈ EC(S ∩ X_t).

Taking expectations:

    P( max_{s∈S} T(s) ≥ t ) ≈ E(EC(S ∩ X_t)).
EC heuristic again

[Figure: search region S and excursion sets X_t at increasing thresholds, with
EC = #blobs − #holes = 1, 2, 7, 10, 5, 2, 1; plot of observed and expected
Euler characteristic against threshold t from 0 to 4.]

Choose the threshold so that the expected EC equals the desired error rate:

    P( max_{s∈S} χ(s) ≥ t ) ≈ E(EC) = 0.05  ⟹  t = 4.04.

The expected EC is EXACT:

    E(EC(S ∩ X_t)) = Σ_{d=0}^{D} L_d(S) ρ_d(R_t).
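In practice the threshold solves E(EC) = α numerically; a sketch for the
isotropic Gaussian case, assuming the expected_ec_box and lam-from-FWHM
sketches above (the t = 4.04 on this slide comes from the χ field's EC
densities instead):

    from scipy.optimize import brentq
    alpha = 0.05
    t_alpha = brentq(lambda t: expected_ec_box(128, 128, 64, lam, t) - alpha,
                     2.0, 6.0)
    print(t_alpha)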
Accuracy of the EC heuristic

If Z(s) ∼ N(0, 1) is an isotropic Gaussian random field with
λ^2 I_{D×D} = Var(∂Z/∂s), then

    P( max_{s∈S} Z(s) ≥ t ) ≈ E(EC(S ∩ {s : Z(s) ≥ t}))
        = Σ_{d=0}^{D} μ_d(S) λ^d (2π)^{−(d+1)/2} He_{d−1}(t) e^{−t^2/2}
        = c_0 ∫_t^∞ (2π)^{−1/2} e^{−z^2/2} dz
          + (c_1 + c_2 t + ⋯ + c_D t^{D−1}) e^{−t^2/2}.

It might be thought that the error is the next term down in the power series,
i.e. O(t^{−2} e^{−t^2/2}). However the error is exponentially smaller than
this:

    | P( max_{s∈S} Z(s) ≥ t ) − E(EC(S ∩ {s : Z(s) ≥ t})) | = O(e^{−αt^2/2}),

for some α > 1. Thus the expected EC gives all the polynomial terms in the
expansion for the P-value.
Volumes of tubes: getting the P-value of Gaussian fields directly
(Siegmund, Sun, 1989, 1993)

Approximate the Gaussian field by a Karhunen-Loève expansion in terms of basis
functions b_j(s) with independent Gaussian coefficients Z_j ∼ N(0, 1):

    Z(s) ≈ Σ_{j=1}^{m} b_j(s) Z_j = (b(s)′U) ||Z||,

where b(s) = (b_1(s), …, b_m(s))′, Z = (Z_1, …, Z_m)′, and ||Z|| ∼ χ_m is
independent of U = Z/||Z|| ∼ Uniform on the unit m-sphere γ_m. Conditional on
||Z||,

    {U : Z(s) ≥ t} = Tube( b(s), √(1 − t^2/||Z||^2) ) ⊂ γ_m.

Therefore

    P( max_{s∈S} Z(s) ≥ t | ||Z|| ) = Vol(Tube)/Vol(γ_m),

so it comes down to a problem in geometry (Hotelling, Weyl, 1939). Takemura &
Kuriki (2000) showed that the first D+1 terms are the same as E(EC).
Example: m = 3, ||Z|| = 1, t = 0.95

    Z(s) = Σ_{j=1}^{3} b_j(s) U_j,  0 ≤ s ≤ π/2,

with

    b_1(s) = √(2/3) cos(s),  b_2(s) = √(2/3) sin(s),  b_3(s) = √(1/3).

[Figure: the curve b(s) on the unit sphere in (U_1, U_2, U_3) and the tube
about it of radius √(1 − 0.95^2) ≈ 0.31; max_{s∈S} Σ_j b_j(s) U_j ≥ t exactly
when U falls in the tube.]
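A minimal Monte Carlo sketch of this picture (sample size and seed are my own
choices): draw U uniform on the unit sphere as a normalised Gaussian vector
and count how often the maximum of Z(s) = b(s)′U exceeds t, i.e. how often U
lands in the tube.

    import numpy as np
    rng = np.random.default_rng(1)

    s = np.linspace(0, np.pi/2, 200)
    B = np.stack([np.sqrt(2/3) * np.cos(s),
                  np.sqrt(2/3) * np.sin(s),
                  np.full_like(s, np.sqrt(1/3))])     # b(s), shape (3, 200)

    U = rng.standard_normal((200_000, 3))
    U /= np.linalg.norm(U, axis=1, keepdims=True)     # uniform on the sphere
    p_hat = np.mean((U @ B).max(axis=1) >= 0.95)
    print(p_hat)   # estimates Vol(Tube)/Vol(sphere) = P(max Z >= t | ||Z|| = 1)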
Mann-Whitney random field

MW = sum of ranks of n/2 random fields, n = 10.

[Figure: image of the Mann-Whitney random field (colour scale −2.5 to 2.5).]
Recall the Lipschitz-Killing curvatures of a triangle from the Steiner-Weyl
tube formula: L_0(S) = EC(λS), L_1(S) = ½ Perimeter(λS), L_2(S) = Area(λS),
intrinsic volumes in the metric λ = Sd(∂Z/∂s).
Lipschitz-Killing curvature L_d(S) of any set

Triangulate the search region S, with λ = Sd(∂Z/∂s), and multiply each edge
length by λ.

[Figure: triangulated search region; edge length × λ.]

Lipschitz-Killing curvature of triangles:

    L_0(point) = 1,  L_0(edge) = 1,  L_0(triangle) = 1
    L_1(edge) = edge length,  L_1(triangle) = ½ perimeter
    L_2(triangle) = area

Lipschitz-Killing curvature of the union S of the triangles, by
inclusion-exclusion:

    L_0(S) = Σ_points L_0(point) − Σ_edges L_0(edge) + Σ_triangles L_0(triangle)
    L_1(S) = Σ_edges L_1(edge) − Σ_triangles L_1(triangle)
    L_2(S) = Σ_triangles L_2(triangle)
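These inclusion-exclusion rules translate directly into code. A sketch (names
mine) for a 2D triangulation whose vertex coordinates have already been
scaled by λ:

    import numpy as np

    def lk_curvatures(points, triangles):
        """L_0, L_1, L_2 of a union of triangles:
        L0 = #points - #edges + #triangles,
        L1 = sum(edge lengths) - sum(half perimeters),
        L2 = sum(areas)."""
        pts, tri = np.asarray(points, float), np.asarray(triangles)
        e = np.sort(tri[:, [0, 1, 1, 2, 2, 0]].reshape(-1, 2), axis=1)
        edges = np.unique(e, axis=0)          # shared edges counted once
        elen = np.linalg.norm(pts[edges[:, 0]] - pts[edges[:, 1]], axis=1)
        a, b, c = pts[tri[:, 0]], pts[tri[:, 1]], pts[tri[:, 2]]
        perim = (np.linalg.norm(b - a, axis=1) + np.linalg.norm(c - b, axis=1)
                 + np.linalg.norm(a - c, axis=1))
        area = 0.5 * np.abs((b - a)[:, 0] * (c - a)[:, 1]
                            - (b - a)[:, 1] * (c - a)[:, 0])
        L0 = len(np.unique(tri)) - len(edges) + len(tri)
        L1 = elen.sum() - 0.5 * perim.sum()
        L2 = area.sum()
        return L0, L1, L2

    # a single triangle: L0 = 1, L1 = half its perimeter, L2 = its area
    print(lk_curvatures([(0, 0), (3, 0), (0, 4)], [(0, 1, 2)]))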
Non-isotropic data?

λ = Sd(∂Z/∂s), Z ∼ N(0, 1).

[Figure: a non-isotropic random field on (s_1, s_2), with local λ ranging over
roughly 0.06-0.14, and its triangulated search region.]

• Can we warp the data to isotropy, i.e. multiply the edge lengths by λ?
• Globally no, but locally yes, though we may need extra dimensions.
• Nash Embedding Theorem: #dimensions ≤ D + D(D+1)/2; for D = 2,
  #dimensions ≤ 5.
• Better idea: replace Euclidean distance by the variogram:

      d(s_1, s_2)^2 = Var(Z(s_1) − Z(s_2)).
Non-isotropic data

Now λ(s) = Sd(∂Z/∂s) varies with location, Z ∼ N(0, 1).

[Figure: the same non-isotropic field, with the triangulation rescaled by
edge length × λ(s).]

Multiply each edge length by the local λ(s) (equivalently, use the variogram
distance), then apply the Lipschitz-Killing curvature formulas for triangles
and their union exactly as before.
Estimating Lipschitz-Killing curvature L_d(S)

We need independent and identically distributed random fields, e.g. residuals
from a linear model: Z_1, Z_2, …, Z_n.

Replace the coordinates of the triangles in ℝ^2 by the normalised residuals

    Z/||Z|| ∈ ℝ^n,  Z = (Z_1, …, Z_n),

then apply the Lipschitz-Killing curvature formulas for triangles and their
union as before.
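A sketch of the key step (array shapes and names are my own): each spatial
point gets its normalised residual vector in ℝ^n, and edge lengths are
measured between those vectors, an empirical version of the variogram
distance. These lengths feed straight into the triangle formulas above (e.g.
the lk_curvatures sketch).

    import numpy as np

    def residual_edge_lengths(R, edges):
        """R: (n, p) array of n iid residual fields sampled at p points;
        edges: (m, 2) point indices. Returns the lengths of the edges
        between normalised residual vectors Z/||Z|| in R^n."""
        U = R / np.linalg.norm(R, axis=0, keepdims=True)
        return np.linalg.norm(U[:, edges[:, 0]] - U[:, edges[:, 1]], axis=0)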
Cortical thickness

• n = 321 normal subjects
• Y(s) = cortical thickness
• x = age, gender

[Figure: 10^5 simulations, threshold chosen so that P{max_S Z(s) ≥ t} = 0.05;
P-value of the maximum against the FWHM (full width at half maximum) of the
smoothing filter, from 0 to 10: random field theory vs Bonferroni; inset, a
Z(s) map.]
Improved Bonferroni (1977, 1983, 1997*)

*Efron, B. (1997). The length heuristic for simultaneous hypothesis tests.

• Only works in 1D: Bonferroni applied to the N events
  {Z(s) ≥ t and Z(s−1) ≤ t}, i.e. {Z(s) is an upcrossing of t}.

  [Figure: Z(s) upcrossing the level t between s−1 and s.]

• Conservative, very accurate.
• If Z(s) is stationary, with Cor(Z(s_1), Z(s_2)) = ρ(s_1 − s_2), then the
  improved Bonferroni P-value is E(#upcrossings):

      P{max_S Z(s) ≥ t} ≤ N × P{Z(s) ≥ t and Z(s−1) ≤ t}.

• We only need to evaluate a bivariate integral (see the sketch below).
• However it is hard to generalise upcrossings to higher D …
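A sketch of that bivariate integral with scipy (names mine; rho is the lag-1
correlation):

    import numpy as np
    from scipy.stats import norm, multivariate_normal

    def p_max_upcrossings(t, rho, N):
        """1D improved Bonferroni bound: N * P(Z(s-1) <= t, Z(s) >= t)
        = N * (P(Z1 <= t) - P(Z1 <= t, Z2 <= t))."""
        bvn = multivariate_normal([0, 0], [[1, rho], [rho, 1]])
        return N * (norm.cdf(t) - bvn.cdf([t, t]))

    print(p_max_upcrossings(3.0, rho=0.95, N=1000))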
Discrete local maxima

• Bonferroni applied to the N events
  {Z(s) ≥ t and Z(s) is a discrete local maximum}, i.e.
  {Z(s) ≥ t and neighbouring Z's ≤ Z(s)}.
• Conservative, very accurate.
• If Z(s) is stationary, with Cor(Z(s_1), Z(s_2)) = ρ(s_1 − s_2), then the DLM
  P-value is E(#discrete local maxima):

      P{max_S Z(s) ≥ t} ≤ N × P{Z(s) ≥ t and neighbouring Z's ≤ Z(s)}.

  [Diagram: in 2D, Z(s_{−1}) ≤ Z(s) ≥ Z(s_{+1}) and Z(s_{−2}) ≤ Z(s) ≥ Z(s_{+2}),
  the neighbours of s along each lattice axis.]

• We only need to evaluate a (2D+1)-variate integral …
Discrete local maxima: "Markovian" trick

• If ρ is "separable": for s = (x, y),

      ρ((x, y)) = ρ((x, 0)) × ρ((0, y)),

  e.g. the Gaussian spatial correlation function
  ρ((x, y)) = exp(−½(x^2 + y^2)/w^2). Then Z(s) has a "Markovian" property:
  conditional on the central Z(s), the Z's on different axes are independent:

      Z(s_{±1}) ⊥ Z(s_{±2}) | Z(s).

• So condition on Z(s) = z and find

      P{neighbouring Z's ≤ z | Z(s) = z} = ∏_d P{Z(s_{±d}) ≤ z | Z(s) = z},

  then take expectations over Z(s) = z.
• This cuts the (2D+1)-variate integral down to a bivariate integral.

The result only involves the correlation ½d between adjacent voxels along
each lattice axis d, d = 1; : : : ; D. First let the Gaussian density and uncorrected
P values be
Z 1
p
Á(z) = exp(¡z 2 =2)= 2¼; ©(z) =
Á(u)du;
z
respectively. Then de¯ne
1
¡
Q(½; z) = 1 2©(hz) +
¼
where
Z
®
exp(¡ 1 h2 z 2 = sin2 µ)dµ;
2
0
r ¡
1 ½
h=
:
1+½
³p
´
¡1
® = sin
(1 ¡ ½2 )=2 ;
Then the P-value of the maximum is bounded by
µ
¶
Z
P max Z(s) ¸ t · jS j
s2S
t
1 Y
D
Q(½d ; z) Á(z)dz;
d=1
where jS j is the number of voxels s in the search region S. For a voxel on
the boundary of the search region with just one neighbour in axis direction d,
replace Q(½; z) by 1 ¡ ©(hz), and by 1 if it has no neighbours.
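A direct sketch of the bound with scipy quadrature (names mine; interior
voxels only, ignoring the boundary corrections just described; Φ here is the
upper-tail probability, norm.sf):

    import numpy as np
    from scipy.stats import norm
    from scipy.integrate import quad

    def Q(rho, z):
        """Q(rho, z) = 1 - 2 Phi(h z)
                     + (1/pi) int_0^alpha exp(-h^2 z^2 / (2 sin^2 th)) dth."""
        h = np.sqrt((1 - rho) / (1 + rho))
        alpha = np.arcsin(np.sqrt((1 - rho**2) / 2))
        integral, _ = quad(lambda th: np.exp(-0.5 * (h*z)**2 / np.sin(th)**2),
                           0, alpha)
        return 1 - 2 * norm.sf(h * z) + integral / np.pi

    def p_max_dlm(t, rhos, n_voxels):
        """P(max Z >= t) <= |S| int_t^inf prod_d Q(rho_d, z) phi(z) dz."""
        f = lambda z: np.prod([Q(rho, z) for rho in rhos]) * norm.pdf(z)
        return n_voxels * quad(f, t, np.inf)[0]

    print(p_max_dlm(4.0, rhos=[0.9, 0.9, 0.9], n_voxels=64**3))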
[Figure: the same simulation as before, 10^5 runs, threshold chosen so that
P{max_S Z(s) ≥ t} = 0.05; P-value of the maximum against the FWHM of the
smoothing filter, from 0 to 10, now comparing Bonferroni, random field theory
and discrete local maxima; inset, a Z(s) map.]