Journal Club - ARLO

JOURNAL CLUB:
M. Pei et al., Shanghai Key Lab of MRI, East China
Normal University and Weill Cornell Medical College
“Algorithm for Fast Monoexponential Fitting Based
on Auto-Regression on Linear Operations (ARLO) of
Data.”
Aug 18, 2014
Jason Su
Motivation
• Traditional fitting methods for exponentials
have pros and cons
– Nonlinear LS (Levenberg-Marquardt) – slow, may
converge to local minimum
– Log-Linear – fast but sensitive to noise
• Can we improve upon them?
– Surprisingly, yes!
Background: Numerical Integration
• Approximating the value of a
definite integral
• Trapezoidal Rule: the area
under a 2-pt linear
interpolation of the interval
• Simpson’s Rule: the area under
a 3-pt. quadratic interpolation
of the interval
• Newton-Cotes formulas: the general family of such rules, integrating an n-point polynomial interpolant on equally spaced samples (trapezoidal = 2-pt, Simpson’s = 3-pt)
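A quick numerical check of the two rules (a made-up example, not from the paper): integrating $e^{-t}$ over [0, 2], whose exact value is $1 - e^{-2}$.

```python
import numpy as np

# Integrate f(t) = exp(-t) over [0, 2]; exact value is 1 - exp(-2).
f = lambda t: np.exp(-t)
a, b = 0.0, 2.0
exact = 1.0 - np.exp(-2.0)

# Trapezoidal rule: area under a 2-point linear interpolation.
trap = (b - a) / 2.0 * (f(a) + f(b))

# Simpson's rule: area under a 3-point quadratic interpolation.
simp = (b - a) / 6.0 * (f(a) + 4.0 * f((a + b) / 2.0) + f(b))

print(f"exact     {exact:.6f}")
print(f"trapezoid {trap:.6f}  (error {abs(trap - exact):.2e})")
print(f"simpson   {simp:.6f}  (error {abs(simp - exact):.2e})")
```

Simpson’s rule already gets within about half a percent on this interval, which is why it is the approximation ARLO builds on.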
Theory
• Log-Linear: linearize the signal equation with a
nonlinear transformation to fit a line
• ARLO: integrate the signal equation to fit a linear
approximation (Simpson’s rule)
$m(t) = M_0\, e^{-t/T_2^*}$

$s_i = \int_{t_i}^{t_{i+2}} m(t)\, dt = T_2^*\,[m(t_i) - m(t_{i+2})]$

$s_i \cong \hat{s}_i = \frac{\Delta TE}{3}\,[m(t_i) + 4\,m(t_i + \Delta TE) + m(t_i + 2\Delta TE)]$
• Assuming decay curve sampled linearly at ΔTE intervals
Theory
• An auto-regressive time-series
• Find T2* to minimize the error between model and data, $s_i - \hat{s}_i$
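A minimal sketch of this idea (my illustration, not the authors’ code), writing $\delta_i = m(t_i) - m(t_{i+2})$: compute $\hat{s}_i$ with Simpson’s rule and estimate T2* as the slope of a single-variable regression of $\hat{s}_i$ on $\delta_i$. The paper’s closed-form ARLO solution also accounts for noise appearing in both terms, so this simplified regression may differ slightly from the published estimator.

```python
import numpy as np

def arlo_t2star(m, dTE):
    """Sketch of an ARLO-style T2* estimate from linearly sampled echoes m.

    s_hat_i ~= dTE/3 * (m_i + 4*m_{i+1} + m_{i+2})   (Simpson's rule)
    s_i      = T2* * (m_i - m_{i+2})                 (integral of the model)
    so T2* is the slope of s_hat_i regressed on delta_i = m_i - m_{i+2}.
    """
    m = np.asarray(m, dtype=float)
    s_hat = dTE / 3.0 * (m[:-2] + 4.0 * m[1:-1] + m[2:])
    delta = m[:-2] - m[2:]
    return np.dot(s_hat, delta) / np.dot(delta, delta)  # LS slope through origin

# Example: noiseless decay, T2* = 20ms, 16 echoes spaced 2ms apart.
dTE = 2.0
t = np.arange(16) * dTE
m = 100.0 * np.exp(-t / 20.0)
print(arlo_t2star(m, dTE))  # ~20 (tiny Simpson's-rule bias)
```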
Methods
• Rician noise compensation
– Data truncation: only keep points with high SNR
• Values > μ + 2σ_noise of the background
– Apply a bias correction based on a Bayesian-model table look-up that depends on the number of coils
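A minimal sketch of the truncation step (assuming μ and σ_noise are estimated from a signal-free background region; the Bayesian look-up correction is not reproduced here):

```python
import numpy as np

def truncate_low_snr(echoes, tes, background):
    """Drop echoes whose magnitude does not clear the estimated noise floor."""
    thresh = background.mean() + 2.0 * background.std()  # mu + 2*sigma_noise
    keep = echoes > thresh
    # For a monotone decay this keeps a leading run of echoes,
    # so the remaining echo times stay linearly spaced for ARLO.
    return echoes[keep], tes[keep]
```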
Methods
• Simulation to assess bias and variance
– Fitting method vs T2* range, # channels, SNR
– 10,000 trials with Rician noise
• In vivo
– 1.5T, 8ch, 15 patients, 2D GRE, TR=27.4ms, α=20deg, TE=1.3-23.3ms (16 linearly sampled), liver
– 3T, 8ch?, 2 volunteers, 3D GRE, α=20deg, 7/12 echoes with
6.5/4.1ms spacing, brain
– 1.5T, 2D GRE, TR=19ms, α=35deg, TE=2.8-16.8ms (8
echoes), heart with iron overload
– Manual segmentation of liver and brain structures
• Statistical
– Linear regression, Bland-Altman, and t-tests
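For reference, a generic Bland–Altman computation (the standard calculation, not the authors’ analysis script):

```python
import numpy as np

def bland_altman(a, b):
    """Return the mean difference (bias) and 95% limits of agreement."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# e.g. compare per-ROI T2* estimates from two fitting methods:
# bias, (lo, hi) = bland_altman(t2s_arlo, t2s_lm)
```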
Results: Simulation
• LM and ARLO are effectively equivalent
• ARLO is generally equivalent to LM except at T2*=1.5ms
• Log-linear is sensitive to T2*, SNR, and number of channels
Results: In Vivo, Liver ROI
• Computation time per voxel
– 8.81 ± 1.00ms for LM
– 0.57 ± 0.04ms for LL
– 0.07 ± 0.02ms for ARLO
Results: In Vivo, Whole Liver
Results: In Vivo, Brain
Results: In Vivo, Heart
Discussion
• ARLO is more robust to noise than LL, with accuracy as good as LM, at roughly 10x the speed of LL
– Noise is amplified by the log-transform
– ARLO is a single-variable linear regression, O(N)
– LL is a two-variable linear regression, O(6N)
– LM is nonlinear LS, O(N³)
• ARLO provides an effective linearization of the
nonlinear estimation problem
– Does not require an initial guess, immune to
convergence issues like in LM
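For comparison with the complexity notes above, a minimal log-linear (LL) fit, i.e. the two-variable regression on the log-signal (my illustration, not the authors’ code):

```python
import numpy as np

def loglinear_t2star(m, tes):
    """Two-variable linear fit of log m(t) = log M0 - t/T2*."""
    slope, intercept = np.polyfit(tes, np.log(m), 1)
    return -1.0 / slope, np.exp(intercept)  # (T2*, M0)

# Note: the log-transform inflates the relative noise on late, low-signal
# echoes, which is the source of LL's bias at low SNR.
```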
Discussion
• Simpson’s rule is a much better approximation than the trapezoidal rule
– Higher order gave little improvement
• Differentiation could also be used, but it is not as good as integration at low SNR and needs finer sampling
• Other applications:
– Other exponential decay models like diffusion, T2, off-resonance, and T2*
– T1 recovery “from data measured at various timing
parameters such as TR or TI”
• Can also be adapted to multi-exponential fitting
Discussion
• Limitations
– Requires at least 3 data points vs 2 for LM and LL
– Linear sampling of echo times
– Results in a minimum T2* of 1.5ms measurable by ARLO
• Probably due to poor protocol
Thoughts
• Nonlinear sampling
– Linear sampling is generally not ideal for experimental design; are there approximations that don’t require it?
– “Gaussian quadrature and Clenshaw–Curtis quadrature
with unequally spaced points (clustered at the endpoints
of the integration interval) are stable and much more
accurate”
• For protocols varying multiple parameters, we would
integrate over multiple dimensions?
– Higher-dimensional integral approximations?
– Simpson’s in each dimension would be a lot of sample
points
Thoughts
• Seems important to have an operation that is
equivalent to a linear combination of the
acquired data
– e.g. integral of exponential is difference of
exponentials
• Consider SPGR:
$\int_{\alpha_i}^{\alpha_{i+2}} \frac{(1 - E_1)\sin\alpha}{1 - E_1\cos\alpha}\, d\alpha = \left[\frac{(1 - E_1)\,\ln\lvert E_1\cos\alpha - 1\rvert}{E_1}\right]_{\alpha_i}^{\alpha_{i+2}}$
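A quick symbolic sanity check of this antiderivative (assuming 0 < E1 < 1; the log argument is written as 1 − E1 cos α, which equals |E1 cos α − 1| in that regime):

```python
import sympy as sp

alpha, E1 = sp.symbols('alpha E1', positive=True)
integrand = (1 - E1) * sp.sin(alpha) / (1 - E1 * sp.cos(alpha))
F = (1 - E1) * sp.log(1 - E1 * sp.cos(alpha)) / E1  # candidate antiderivative

# Differentiating F should reproduce the integrand exactly.
print(sp.simplify(sp.diff(F, alpha) - integrand))  # 0
```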