Axial Data Analysis

Random Vector
$X$ is a random vector if $X : (\Omega, \mathcal{A}, P) \to (\mathbb{R}^m, \mathcal{B}^m)$ and for every $B \in \mathcal{B}^m$ it follows that $X^{-1}(B) \in \mathcal{A}$.

Axial Data
The spherical representation of $\mathbb{RP}^m$ for axial data is
$$[x] = \{x, -x\} \in \mathbb{RP}^m = S^m / (x \sim -x).$$
Axial data can also be written as $(\mathbb{R}^{m+1} \setminus \{0\}) / (x \sim \lambda x,\ \lambda \neq 0)$; note: this is the homogeneous-coordinate condition. For $x \in \mathbb{R}^{m+1}$, $x = (x_0, x_1, \ldots, x_{m-1}, x_m)$ and $y = (y_0, y_1, \ldots, y_{m-1}, y_m)$,
$$[y] = [x] \iff \frac{y_0}{x_0} = \frac{y_1}{x_1} = \cdots = \frac{y_{m-1}}{x_{m-1}} = \frac{y_m}{x_m}.$$

Properties of Axial Data
Sometimes the observations are not directions but axes; that is, the unit vectors $x$ and $-x$ are indistinguishable, so that it is $\pm x$ which is observed. In this context it is appropriate to consider probability density functions on $S^m$ which are antipodally symmetric (antipodal: diametrically opposite points on the sphere),
• i.e. $f(x) = f(-x)$;
• in such cases the observations can be regarded as being on the projective space $\mathbb{RP}^m$, which is obtained by identifying opposite points on the sphere $S^m$.

Random Axis
A random axis is a map into the projective space, $Y : (\Omega, \mathcal{A}, P) \to (\mathbb{RP}^m, \mathcal{B}_{\mathbb{RP}^m})$, where $\mathcal{B}_{\mathbb{RP}^m}$ is the $\sigma$-algebra generated by the open sets in $\mathbb{RP}^m$, such that for every $B \in \mathcal{B}_{\mathbb{RP}^m}$, $Y^{-1}(B) \in \mathcal{A}$.

Distance
$d(a_1, a_2) = \theta$, where $\theta$ is the acute angle between the two axes, and
$$B(a, \varepsilon) = \{ b \in \mathbb{RP}^m \mid d(a, b) < \varepsilon \}.$$
For the sphere in three dimensions, $\operatorname{vol}(\mathbb{RP}^2) = \tfrac{1}{2}\operatorname{vol}(S^2) = \tfrac{4\pi}{2} = 2\pi$.

Uniform Distribution
$X$ has a density with respect to the uniform (volume) probability measure $U$:
$$f_X(p) = \lim_{\varepsilon \to 0} \frac{Q_X(B(p, \varepsilon))}{U(B(p, \varepsilon))}, \qquad U(B(p, \varepsilon)) = \frac{\int_{\cos\varepsilon}^{1} (1 - t^2)^{(m-2)/2}\, dt}{2 \int_{0}^{1} (1 - t^2)^{(m-2)/2}\, dt}.$$

Finding the Mean
Question: what is the mean axis of an arbitrary distribution on $\mathbb{RP}^m$?
Recall that for non-axial $X \in \mathbb{R}^m$ we have $E[X] = \int x f(x)\, dx$, where $x = (x_1, x_2, \ldots, x_m)^T$.

Intrinsic Mean
In $\mathbb{RP}^m$ the Fréchet mean of $Y$ is the minimizer of
$$F(p) = E\big[d(Y, p)^2\big].$$
For axial data the distance is induced by a distance in a space of symmetric matrices.

Distance Between Axes
Embed each axis as a symmetric matrix: $j([x]) = x x^T$ for $[x] \in \mathbb{RP}^m$; note $x^T x = 1$, and $x \sim -x$ gives $j(x) = j(-x)$. Define
$$d([x], [y]) = d_0(j(x), j(y)) = d_0(x x^T, y y^T), \qquad d_0^2(A, B) = \sum_{i,j=1}^{m+1} (a_{ij} - b_{ij})^2 = \operatorname{Tr}\big[(A - B)(A - B)^T\big].$$
Then
$$d^2([x], [y]) = \operatorname{Tr}\big[(x x^T - y y^T)(x x^T - y y^T)^T\big] = \operatorname{Tr}\big[x x^T x x^T - 2\, x x^T y y^T + y y^T y y^T\big].$$
Since $x^T x = y^T y = 1$, we have $x x^T x x^T = x x^T$ and $y y^T y y^T = y y^T$, so
$$d^2([x], [y]) = \operatorname{Tr}(x x^T) - 2 \operatorname{Tr}(x x^T y y^T) + \operatorname{Tr}(y y^T).$$
Note $\operatorname{Tr}(ab) = \operatorname{Tr}(ba)$, so $\operatorname{Tr}(x x^T) = \operatorname{Tr}(x^T x) = 1$ and $\operatorname{Tr}(y y^T) = \operatorname{Tr}(y^T y) = 1$. Hence
$$d^2([x], [y]) = 2 - 2 \operatorname{Tr}(x x^T y y^T) \ge 0,$$
and $d^2([x], [y])$ is minimized when $\operatorname{Tr}(x x^T y y^T)$ is maximized.

The Minimum of Expected Squared Distance
$$F(p) = E\big[d^2(X, p)\big], \qquad \min_{p \in \mathbb{RP}^m} \int_{\mathbb{RP}^m} d^2(x, p)\, dQ_X(x).$$
Writing $p = [\mu]$ with $\mu^T \mu = 1$,
$$\min_{[\mu]} \int_{\mathbb{RP}^m} \operatorname{Tr}\big[x x^T - 2\, x x^T \mu \mu^T + \mu \mu^T\big]\, dQ_X(x),$$
so the minimization is equivalent to
$$\max_{[\mu]} \int_{\mathbb{RP}^m} \operatorname{Tr}(x x^T \mu \mu^T)\, dQ_X(x) = \max_{[\mu]} \int_{\mathbb{RP}^m} \operatorname{Tr}(\mu^T x x^T \mu)\, dQ_X(x) = \max_{[\mu]} G(\mu).$$
Therefore, writing
$$K = \int_{\mathbb{RP}^m} x x^T\, dQ_X(x) = E\big[X X^T\big],$$
note that $G(\mu) = \mu^T E[X X^T]\, \mu$ is maximized if $\mu$ is the eigenvector of $K$ corresponding to its largest eigenvalue.

Finding the Sample Mean
Let $x_1, \ldots, x_n$ be a sample of axes. The empirical distribution is
$$\hat{Q} = \frac{1}{n}\big(\delta_{x_1} + \cdots + \delta_{x_n}\big),$$
and the sample mean axis is the empirical analogue of the population mean axis: under $\hat{Q}$,
$$E\big[X X^T\big] = \sum_{i=1}^{n} x_i x_i^T\, \hat{Q}(\{x_i\}) = \frac{1}{n} \sum_{i=1}^{n} x_i x_i^T = \hat{K},$$
so the sample mean axis is the eigenvector of $\hat{K}$ corresponding to its largest eigenvalue (a numerical sketch is given at the end of these notes).

Central Limit Theorem
Let $\lambda_1 \le \cdots \le \lambda_m \le \lambda_{m+1}$ and $\mu_1, \ldots, \mu_m, \mu_{m+1}$ be the eigenvalues and eigenvectors of $K = E[X X^T]$, so that $\mu_{m+1}$ is the mean axis, and let $S$ be the $m \times m$ matrix with entries
$$S_{ab} = \frac{1}{n} \sum_{r=1}^{n} \frac{(\mu_a^T X_r)(\mu_b^T X_r)(\mu_{m+1}^T X_r)^2}{(\lambda_{m+1} - \lambda_a)(\lambda_{m+1} - \lambda_b)}, \qquad a, b = 1, \ldots, m.$$
If $\nu$ denotes the sample mean axis, then
$$n\, \nu^T (\mu_1, \mu_2, \ldots, \mu_m)\, S^{-1}\, (\mu_1, \mu_2, \ldots, \mu_m)^T \nu \ \xrightarrow{d}\ \chi^2_m \text{ distribution}.$$

Watson Distribution
• One of the simplest models for axial data is the Dimroth–Scheidegger–Watson model, which has densities
$$f(x; \mu, \kappa) = M\!\left(\tfrac{1}{2}, \tfrac{p}{2}, \kappa\right)^{-1} \exp\!\big\{\kappa\, (\mu^T x)^2\big\},$$
• where
$$M\!\left(\tfrac{1}{2}, \tfrac{p}{2}, \kappa\right) = B\!\left(\tfrac{1}{2}, \tfrac{p-1}{2}\right)^{-1} \int_0^1 e^{\kappa t}\, t^{-1/2} (1 - t)^{(p-3)/2}\, dt = \frac{2}{B\!\left(\tfrac{1}{2}, \tfrac{p-1}{2}\right)} \int_0^1 e^{\kappa t^2} (1 - t^2)^{(p-3)/2}\, dt,$$
with
$$B\!\left(\tfrac{1}{2}, \tfrac{p-1}{2}\right) = \int_0^1 t^{-1/2} (1 - t)^{(p-3)/2}\, dt.$$
Note: the density is rotationally symmetric about $\mu$.

Bingham Distribution
$$f(x; A) = {}_1F_1\!\left(\tfrac{1}{2}, \tfrac{p}{2}, A\right)^{-1} \exp\!\big(x^T A x\big), \qquad {}_1F_1\!\left(\tfrac{1}{2}, \tfrac{p}{2}, A\right) = \int_{S^{p-1}} e^{x^T A x}\, dx,$$
where the integration is with respect to the uniform distribution on $S^{p-1}$.
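
The following is a minimal numerical sketch of the extrinsic distance between axes derived above, using $d^2([x],[y]) = 2 - 2\operatorname{Tr}(x x^T y y^T) = 2\big(1 - (x^T y)^2\big)$. Python with NumPy is assumed; the helper name axial_distance is illustrative, not from any library.

```python
import numpy as np

def axial_distance(x, y):
    """Extrinsic distance between the axes [x] and [y] on RP^m.

    x and y are unit vectors in R^{m+1}.  Uses
    d([x],[y])^2 = ||xx^T - yy^T||_F^2 = 2 - 2*Tr(xx^T yy^T) = 2*(1 - (x^T y)^2),
    which is unchanged if either representative is replaced by its negative.
    """
    c = float(x @ y)
    return np.sqrt(max(2.0 * (1.0 - c * c), 0.0))

# Usage: axes 45 degrees apart, and invariance under the sign flip x ~ -x.
x = np.array([1.0, 0.0, 0.0])
y = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
print(axial_distance(x, y))                              # 1.0
assert np.isclose(axial_distance(x, y), axial_distance(-x, y))
```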
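
As a sketch of the computation in "Finding the Sample Mean", the sample mean axis is the top eigenvector of the empirical scatter matrix $\hat{K}$. The function name mean_axis and the toy data below are illustrative only.

```python
import numpy as np

def mean_axis(X):
    """Sample mean axis of axial data.

    X is an (n, m+1) array whose rows are unit-vector representatives of the
    observed axes.  Forms K_hat = (1/n) * sum_i x_i x_i^T and returns the
    eigenvector belonging to its largest eigenvalue (defined up to sign).
    """
    n = X.shape[0]
    K_hat = X.T @ X / n                       # empirical E[X X^T]
    eigvals, eigvecs = np.linalg.eigh(K_hat)  # eigenvalues in ascending order
    return eigvecs[:, -1]

# Toy usage: axes clustered around e_1, with signs flipped at random
# (the sign is unobservable for axial data).
rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 3)) * 0.2
Z[:, 0] += 1.0
Z /= np.linalg.norm(Z, axis=1, keepdims=True)
Z *= rng.choice([-1.0, 1.0], size=(200, 1))
print(mean_axis(Z))                           # close to +/- e_1
```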
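
Finally, a sketch of evaluating the Watson density, assuming the normalizer $M(\tfrac12, \tfrac p2, \kappa)$ is Kummer's confluent hypergeometric function and using SciPy's hyp1f1 for it; the function name watson_logpdf and the example values are illustrative.

```python
import numpy as np
from scipy.special import hyp1f1

def watson_logpdf(x, mu, kappa):
    """Log-density of the Dimroth-Scheidegger-Watson distribution.

    x and mu are unit vectors in R^p; with respect to the uniform
    distribution on the sphere,
        f(x; mu, kappa) = M(1/2, p/2, kappa)^{-1} * exp(kappa * (mu^T x)^2),
    where M is Kummer's confluent hypergeometric function.  Antipodal
    symmetry f(x) = f(-x) holds because x enters only through (mu^T x)^2.
    """
    p = x.shape[-1]
    log_norm = np.log(hyp1f1(0.5, p / 2.0, kappa))
    return kappa * float(x @ mu) ** 2 - log_norm

mu = np.array([0.0, 0.0, 1.0])
x = np.array([0.0, np.sin(0.3), np.cos(0.3)])
print(watson_logpdf(x, mu, 5.0), watson_logpdf(-x, mu, 5.0))  # equal values
```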