Functional Bayesian point process model for neuroimaging meta-analysis data

Silvia Montagna
Joint work with T. D. Johnson and T. E. Nichols
November 27, 2014

Functional neuroimaging: an overview

- Functional neuroimaging has become an essential tool for non-invasively studying the brain in normal and clinical populations
- Ex: compare activations between two similar but different types of experiments
- Increasing interest in inverse inference
- Work supported by the simplicity and computational efficiency of the mass-univariate approach
- Limitations: low power, low reliability, etc. Need meta-analysis!

Coordinate-based meta-analysis

- Often, only the peak activation coordinates (foci) are reported
- Adopt spatial point processes
- Goal: build a Bayesian model for coordinate-based meta-analysis data that allows inference on the most likely domain for any new experiment

Spatial point processes

A spatial point process is a stochastic process whose realisations are random occurrences of events, possibly with a value or mark attached to each occurrence. Examples:

- Locations of foci in a brain (mark: z- or t-score)
- Locations of trees in a forest (mark: type of tree, diameter, etc.)
- Occurrences of disease (mark: type of disease)

Poisson processes

A Poisson process X is defined by a non-negative, finite intensity measure M, which gives the expected number of points in any A ⊆ B:

    N(A) ~ Poisson(M(A)),  where  M(A) ≡ ∫_A µ(ν) dν < ∞

We write X ~ Poisson(B, µ), where B ⊂ R³ is a common brain template with finite volume |B| and µ is the intensity function. The density of a realisation x of X is

    π(x) = exp{|B| − M(B)} ∏_{x∈x} µ(x)

A Cox process is a Poisson process with a random intensity function µ(ν).

Assumptions and notation

- N independent observations, i.e., N independent point patterns arising from N studies, i = 1, 2, …, N
- Foci x_i = {x_ij}, j = 1, …, n_i, from study i
- Each x_i is a realisation of a Cox process X_i driven by a random intensity function µ_i
- Given that observations are independent, the sampling distribution is

    π({x_i}_{i=1}^N | {µ_i}_{i=1}^N) ∝ exp{−Σ_{i=1}^N M_i(B)} ∏_{i=1}^N ∏_{x_ij ∈ x_i} µ_i(x_ij),  M_i(B) ≡ ∫_B µ_i(ν) dν < ∞

where B ⊂ R³ is a common brain template with finite volume |B|.

Log-Gaussian Cox process (LGCP)

One model for Cox processes is the LGCP. Writing the intensity as µ(ν) = exp{z⊤(ν)β} µ_0(ν), with z denoting a set of covariates, one models

    µ_0(ν) = exp{Ψ(ν)},  Ψ(ν) ~ GP(ϕ, C(ν, ν′)),  with  C(ν, ν′) = σ² exp{−φ ||ν − ν′||}

Since E µ_0(ν) = exp{ϕ + σ²/2}, a useful specification is to set ϕ = −σ²/2 so that E µ_0(ν) = 1. There are many decent options for fitting LGCPs: INLA, MCMC, MALA, Hamiltonian MC.

Alternatives to LGCPs

- Look into a functional representation of the log intensity function
- Natural computational advantage over a GP
- Functional representations use a linear mean structure to describe the intensity's behaviour, whereas GPs use the covariance matrix

Functional Latent Factor Regression Model (LFRM)

We write the log intensity as

    log µ_i(ν) = Σ_{m=1}^p b_m(ν) θ_im = b(ν)⊤ θ_i    (1)

- b_m(ν) is the mth basis function evaluated at voxel ν
- Potential choices for b(·): B-splines, 3D Gaussian kernels
- θ_i is the vector of study-specific basis function coefficients (scores)
- Variation between intensities is reflected through variation in the score vectors

LFRM

We specify a sparse factor model for the basis coefficients:

    θ_i = Λ η_i + ζ_i,  ζ_i ~ N_p(0, Σ)    (2)

- Λ is a p × k factor loading matrix
- η_i is a k × 1 vector of latent factors
- ζ_i is a residual vector, uncorrelated with the other variables in the model, with Σ = diag(σ_1², …, σ_p²)

This structure induces a low-dimensional representation of log µ_i.
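As a rough illustration of equations (1) and (2), the sketch below draws study-specific scores θ_i = Λη_i + ζ_i and evaluates the resulting log intensity log µ_i(ν) = b(ν)⊤θ_i on a grid. All names and parameter values are hypothetical, the domain is simplified to a 1-D interval rather than the 3-D brain template B, and the Poisson-process integral M_i(B) is approximated by a Riemann sum; this is a minimal NumPy sketch, not the implementation used in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions (illustrative values, not those used in the talk)
p, k, n_studies = 20, 3, 5        # basis functions, latent factors, studies

# Gaussian kernel bases b_m on a 1-D "template" B = [0, 1]
# (the talk uses 3-D Gaussian kernels over a brain template)
centres = np.linspace(0.0, 1.0, p)
width = 0.08                      # kernel bandwidth (hypothetical choice)

def basis(nu):
    """Evaluate all p basis functions at the points nu: returns len(nu) x p."""
    nu = np.atleast_1d(nu)
    return np.exp(-0.5 * ((nu[:, None] - centres[None, :]) / width) ** 2)

# Sparse factor model for the scores, eq. (2): theta_i = Lambda eta_i + zeta_i
Lambda_ = rng.normal(scale=0.5, size=(p, k))        # p x k loading matrix
sigma2 = np.full(p, 0.1)                            # residual variances
eta = rng.normal(size=(n_studies, k))               # latent factors eta_i
zeta = rng.normal(scale=np.sqrt(sigma2), size=(n_studies, p))
theta = eta @ Lambda_.T + zeta                      # n_studies x p score vectors

# Study-specific log intensities on a grid, eq. (1): log mu_i(nu) = b(nu)' theta_i
grid = np.linspace(0.0, 1.0, 500)
log_mu = basis(grid) @ theta.T                      # 500 x n_studies
mu = np.exp(log_mu)

# Expected number of foci per study, M_i(B), via a Riemann sum over the grid
M = mu.sum(axis=0) * (grid[1] - grid[0])
print("expected foci per study M_i(B):", np.round(M, 2))
```

Sharing the loading matrix Λ across studies is what ties the intensities together: each µ_i differs only through its low-dimensional factor vector η_i and residual noise.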
LFRM

Information from study-specific covariates z_i can be incorporated through a simple linear model

    η_i = ι⊤ z_i + ∆_i,  ∆_i ~ N_k(0, I)    (3)

where ι is an r × k matrix of unknown coefficients and r denotes the dimension of z_i.

LFRM

Despite the simplicity of this linear model, the resulting model on log µ_1, …, log µ_N allows a very flexible accommodation of covariate information. Conditionally on ({b_m}_{m=1}^p, Λ, Σ, ι, {z_i}_{i=1}^N), the log intensities are independent (finite rank) Gaussian processes with covariate-dependent mean functions

    E[log µ_i(ν)] = Σ_{l=1}^k ι_l⊤ z_i φ_l(ν)

and a common covariance function

    Cov{log µ_i(ν), log µ_i(ν′)} = Σ_{l=1}^k φ_l(ν) φ_l(ν′) + Σ_{m=1}^p σ_m² b_m(ν) b_m(ν′),

where φ_l(ν) = Σ_{m=1}^p λ_lm b_m(ν).

Bayesian analysis: the MGPS prior

We adopt the multiplicative gamma process shrinkage (MGPS) prior on the loadings [1]:

    λ_jh | ψ_jh, τ_h ~ N(0, ψ_jh⁻¹ τ_h⁻¹),  ψ_jh ~ Gamma(ρ/2, ρ/2)
    δ_1 ~ Gamma(a_1, 1),  δ_l ~ Gamma(a_2, 1) for l ≥ 2,  τ_h = ∏_{l=1}^h δ_l

- τ_h is a global shrinkage parameter for the hth column; the τ_h are stochastically increasing under the restriction a_2 > 1, favouring more shrinkage overall as the column index increases
- the ψ_jh are local shrinkage parameters for the elements in the hth column, which avoid over-shrinking the non-zero loadings

MGPS prior

- The MGPS prior lets Λ be p × ∞, with priors increasingly concentrated at zero as the column index increases
- The number of factors is selected automatically with an adaptive Gibbs sampler
- The MGPS prior allows many of the loadings to be close to zero, thus inducing effective basis selection
- Bhattacharya and Dunson's approach is computationally very efficient (block updating)

Posterior computation

- Update the model parameters by sampling from their full conditional posterior distributions when these are available in closed form
- Resort to Hamiltonian MC to update the basis coefficients θ_i
- Adapt the number of factors as the sampler progresses

Emotion meta-analysis dataset

- 164 publications
- 219 studies and 1393 foci
- Five
emotions: sad, happy, anger, fear, and disgust (picture from [2])

Some estimated intensities

[Figure: estimated intensities for four studies (148, 64, 164, and 142), with expected foci counts of 1, 2, 4, and 2, respectively; the observed foci and kernel locations are overlaid on each intensity map.]

Summary

[Figure: all foci overlaid on the mean posterior intensity (left) and on the posterior standard deviation (right).]

Probit model for valence

Consider a meta-analysis data set of emotion studies, which can be split into positive and negative
studies by valence. Let

    y_i = 1 if study i is positive,  y_i = 0 if study i is negative,

with

    P(y_i = 1 | α, γ, η_i) = Φ(α + γ⊤ η_i)

The same set of latent factors impacts both the intensity function, via the basis coefficients θ_i, and the response variable, via the probability of a positive study. The model is extendable to different types of outcomes.

ROC – Emotion dataset

[Figure: ROC curve (true positive rate vs. false positive rate) for predicting study valence on the emotion dataset.]

Simulations

[Figure: simulated data for four scenarios (SIM 1–4); each panel shows positive and negative foci together with the kernel locations.]
[Figure: estimated positive and negative intensities and ROC curves for each of the four simulations.]

Summary and Discussion

- Functional representation of the log intensity function of a Cox process, with inclusion of a high-dimensional set of pre-specified basis functions
- Allows automatic shrinkage and effective removal of basis coefficients that are not needed to characterise the intensities
- Covariates are allowed to impact the latent factor scores
- Easy modifications for joint modelling of disparate data of many different types: θ_i is replaceable with concatenated coefficients within component models for different types of objects, including images, movies, text, etc.
- Extendable to a semiparametric case that allows the latent variable densities to be unknown, via nonparametric Bayes priors

References

[1] Anirban Bhattacharya and David B. Dunson. Sparse Bayesian infinite factor models.
Biometrika, 98(2):291–306, June 2011.
[2] Jian Kang, Timothy D. Johnson, and Thomas E. Nichols. A Bayesian hierarchical spatial point process model for multi-type neuroimaging meta-analysis. Annals of Applied Statistics, 8(3):1800–1824, 2014.