JOURNAL CLUB: M. Pei et al., Shanghai Key Lab of MRI, East China Normal University and Weill Cornell Medical College, "Algorithm for Fast Monoexponential Fitting Based on Auto-Regression on Linear Operations (ARLO) of Data." Aug 18, 2014, Jason Su

Motivation
• Traditional fitting methods for exponentials have pros and cons
  – Nonlinear least squares (Levenberg-Marquardt): slow, may converge to a local minimum
  – Log-linear: fast but sensitive to noise
• Can we improve upon them?
  – Surprisingly, yes!

Background: Numerical Integration
• Approximating the value of a definite integral
• Trapezoidal rule: the area under a 2-point linear interpolation of the interval
• Simpson's rule: the area under a 3-point quadratic interpolation of the interval
• Newton-Cotes formulas: the general family of equally spaced quadrature rules, of which the trapezoidal and Simpson's rules are the 2- and 3-point members

Theory
• Log-linear: linearize the signal equation with a nonlinear transformation (the log) and fit a line
• ARLO: integrate the signal equation and fit a linear approximation of the integral (Simpson's rule)
  – S(t) = S0·exp(−t/T2*)
  – S_i = ∫_{t_i}^{t_{i+2}} S(t) dt = T2*·[S(t_i) − S(t_{i+2})]
  – S_i ≈ P_i = (ΔTE/3)·[S(t_i) + 4·S(t_i + ΔTE) + S(t_i + 2ΔTE)]
• Assuming the decay curve is sampled linearly at ΔTE intervals

Theory (cont.)
• This defines an auto-regressive time series
• Find the T2* that minimizes the error between model and data, S_i − P_i (a minimal numerical sketch appears after the Discussion slides)

Methods
• Rician noise compensation
  – Data truncation: keep only points with high SNR (values > μ + 2σ_noise of the background)
  – Apply a bias correction based on a Bayesian-model table look-up, depending on the number of coils

Methods (cont.)
• Simulation to assess bias and variance
  – Fitting method vs. T2* range, number of channels, SNR
  – 10,000 trials with Rician noise
• In vivo
  – 1.5 T, 8-channel, 15 patients, 2D GRE, TR = 27.4 ms, α = 20°, TE = 1.3-23.3 ms (16 linearly sampled echoes), liver
  – 3 T, 8-channel(?), 2 volunteers, 3D GRE, α = 20°, 7/12 echoes with 6.5/4.1 ms spacing, brain
  – 1.5 T, 2D GRE, TR = 19 ms, α = 35°, TE = 2.8-16.8 ms (8 echoes), heart with iron overload
  – Manual segmentation of liver and brain structures
• Statistics
  – Linear regression, Bland-Altman, and t-tests

Results: Simulation
• LM and ARLO are effectively equivalent
• ARLO is generally equivalent to LM except at T2* = 1.5 ms
• Log-linear is sensitive to T2*, SNR, and number of channels

Results: In Vivo, Liver ROI
• Computation time per voxel
  – 8.81 ± 1.00 ms for LM
  – 0.57 ± 0.04 ms for LL
  – 0.07 ± 0.02 ms for ARLO

Results: In Vivo, Whole Liver (figures)

Results: In Vivo, Brain (figures)

Results: In Vivo, Heart (figures)

Discussion
• ARLO is more robust to noise than LL, with accuracy as good as LM, at 10x the speed of LL
  – Noise is amplified by the log transform
  – ARLO is a single-variable linear regression, O(N)
  – LL is a two-variable linear regression, O(6N)
  – LM is nonlinear least squares, O(N^3)
• ARLO provides an effective linearization of the nonlinear estimation problem
  – Does not require an initial guess; immune to convergence issues like those in LM

Discussion (cont.)
• Simpson's rule is a much better approximation than the trapezoidal rule
  – Higher-order rules gave little improvement
• Differentiation could also be used, but it is not as good as integration at low SNR and requires finer sampling
• Other applications:
  – Other exponential decay models such as diffusion, T2, off-resonance, and T2*
  – T1 recovery "from data measured at various timing parameters such as TR or TI"
• Can also be adapted to multi-exponential fitting

Discussion (cont.)
• Limitations
  – Requires at least 3 data points vs. 2 for LM and LL
  – Requires linear sampling of echo times
  – Results show a minimum measurable T2* of 1.5 ms for ARLO (probably due to a poor protocol)
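The ARLO estimator from the Theory slides reduces to a one-parameter regression on two linear operations of the data. Below is a minimal numerical sketch, not the authors' code: it uses the plain least-squares solution implied by "minimize S_i − P_i", and the echo times, T2* value, and function names are illustrative assumptions; the paper's exact estimator and its Rician-noise handling may differ from this simplification.

```python
import numpy as np

def arlo_t2star(signal, delta_te):
    """Simplified ARLO sketch for one voxel (plain least-squares form)."""
    s = np.asarray(signal, dtype=float)
    # Simpson's rule over each 3-echo window: P_i ~ integral of S(t) from t_i to t_i + 2*dTE
    p = (delta_te / 3.0) * (s[:-2] + 4.0 * s[1:-1] + s[2:])
    # For a monoexponential, the same integral equals T2* * (S(t_i) - S(t_{i+2}))
    d = s[:-2] - s[2:]
    # Single-variable regression: the T2* minimizing sum_i (T2* * d_i - p_i)^2
    denom = float(np.dot(d, d))
    return float(np.dot(p, d)) / denom if denom > 0 else 0.0

def loglinear_t2star(signal, te):
    """Two-variable log-linear fit for comparison: log S = log S0 - t/T2*."""
    y = np.log(np.clip(signal, 1e-12, None))
    slope, _ = np.polyfit(te, y, 1)
    return -1.0 / slope

# Quick check on noiseless synthetic data (hypothetical protocol values)
te = 1.3 + 1.4 * np.arange(16)          # 16 linearly sampled echoes, ms
sig = 100.0 * np.exp(-te / 20.0)        # true T2* = 20 ms
print(arlo_t2star(sig, te[1] - te[0]))  # close to 20
print(loglinear_t2star(sig, te))        # close to 20
```

With noisy data, the regression form above is where the paper's data truncation and Rician bias correction would be applied before fitting.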
Thoughts
• Nonlinear sampling
  – Generally, linear sampling is not ideal for experimental design; are there approximations that don't require it?
  – "Gaussian quadrature and Clenshaw–Curtis quadrature with unequally spaced points (clustered at the endpoints of the integration interval) are stable and much more accurate"
• For protocols varying multiple parameters, would we integrate over multiple dimensions?
  – Higher-dimensional integral approximations?
  – Simpson's rule in each dimension would require a lot of sample points

Thoughts (cont.)
• It seems important to have an operation that is equivalent to a linear combination of the acquired data
  – e.g., the integral of an exponential is a difference of exponentials
• Consider SPGR, with E1 = exp(−TR/T1) (see the numerical check after this slide):
  – ∫_{α_i}^{α_{i+2}} (1 − E1)·sin α / (1 − E1·cos α) dα = [(1 − E1)/E1]·ln(1 − E1·cos α) |_{α_i}^{α_{i+2}}
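As a sanity check on the SPGR integral above, here is a small numerical verification (my own sketch; the TR, T1, and flip-angle values are hypothetical) that the definite integral of (1 − E1)·sin α / (1 − E1·cos α) matches the closed form [(1 − E1)/E1]·ln(1 − E1·cos α) evaluated at the endpoints.

```python
import numpy as np
from scipy.integrate import quad

E1 = np.exp(-15.0 / 1000.0)   # hypothetical TR = 15 ms, T1 = 1000 ms

def spgr(a):
    """SPGR signal shape vs. flip angle (proportionality constants dropped)."""
    return (1 - E1) * np.sin(a) / (1 - E1 * np.cos(a))

a_i, a_ip2 = np.deg2rad(10.0), np.deg2rad(30.0)   # hypothetical flip-angle pair

numeric = quad(spgr, a_i, a_ip2)[0]
closed = (1 - E1) / E1 * (np.log(1 - E1 * np.cos(a_ip2)) - np.log(1 - E1 * np.cos(a_i)))
print(numeric, closed)   # the two values should agree
```

Unlike the exponential-decay case, the closed form involves a log of 1 − E1·cos α rather than a plain linear combination of the sampled signals, so whether this admits an ARLO-style single-variable regression is left open here.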