More MR Fingerprinting

Key Concepts
• Traditional parameter mapping has revolved around
fitting signal equations to data with tractable analytical
forms
– mcDESPOT is perhaps among the more complicated approaches,
but still uses a multi-component matrix exponential model of
the SPGR and SSFP signals
• Generate unique signal time courses for each set of
T1/T2/M0/B0 parameters
– Vary the free variables in a bSSFP sequence (TR and flip
angle) with inversions every 200 TRs
– The resulting signal can be numerically found via Bloch
simulation
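For concreteness, here is a minimal numpy sketch of such a Bloch simulation. The inversion interval of 200 TRs comes from the bullet above; the function name simulate_evolution, the rotation-about-x convention, and sampling at TE = TR/2 are my own assumptions for illustration, not details from the abstract.

```python
import numpy as np

def _relax_precess(M, t, T1, T2, M0, df):
    """Free precession about z (off-resonance df, Hz) and T1/T2 relaxation for t ms."""
    phi = 2 * np.pi * df * t / 1000.0
    E1, E2 = np.exp(-t / T1), np.exp(-t / T2)
    Rz = np.array([[np.cos(phi), -np.sin(phi), 0.0],
                   [np.sin(phi),  np.cos(phi), 0.0],
                   [0.0,          0.0,         1.0]])
    return Rz @ (np.diag([E2, E2, E1]) @ M) + np.array([0.0, 0.0, M0 * (1 - E1)])

def simulate_evolution(T1, T2, M0, df, flips, TRs, inv_every=200):
    """Bloch-simulate one MRF time course for a single (T1, T2, M0, B0) set.

    flips (radians) and TRs (ms) vary from frame to frame; an inversion is
    applied every `inv_every` TRs and the signal is sampled at TE = TR/2.
    """
    M = np.array([0.0, 0.0, M0])                        # thermal equilibrium
    signal = np.zeros(len(flips), dtype=complex)
    for n, (alpha, TR) in enumerate(zip(flips, TRs)):
        if n % inv_every == 0:                          # inversion every 200 TRs
            M[2] = -M[2]
        Rx = np.array([[1.0, 0.0,             0.0],     # excitation about x
                       [0.0, np.cos(alpha),   np.sin(alpha)],
                       [0.0, -np.sin(alpha),  np.cos(alpha)]])
        M = Rx @ M
        M = _relax_precess(M, TR / 2, T1, T2, M0, df)   # evolve to TE
        signal[n] = M[0] + 1j * M[1]                    # record transverse signal
        M = _relax_precess(M, TR / 2, T1, T2, M0, df)   # evolve to end of TR
    return signal
```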
Key Concepts
• Use a dictionary and good lookup scheme to fit the
acquired data
– The reconstruction cost of using complex signal models is
often that the fitting becomes very expensive
• In mcDESPOT, the matrix exponential equation is calculated
thousands of times for every voxel
• In MRF, the Bloch simulation would similarly have to be run many
times to find a good fit
– Instead, if the parameter space is explored beforehand, we
can store a dictionary of signal evolutions (sketched after this list)
• This front-loads all of the computation time
• Any change in the model or pulse sequence would require a
recomputation of the entire dictionary
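A minimal sketch of what such a dictionary could look like, reusing the simulate_evolution sketch above. The grid spacings, flip-angle and TR ranges, and the simple maximum-inner-product lookup are illustrative assumptions; a real dictionary would be far denser, and the lookup in the abstract is done via OMP, discussed below.

```python
import numpy as np
# assumes the simulate_evolution() sketch from above is in scope

rng = np.random.default_rng(0)
flips = np.deg2rad(rng.uniform(5.0, 60.0, 500))   # 500-frame random flip schedule
TRs = rng.uniform(10.0, 14.0, 500)                # randomly varying TRs (ms)

# coarse illustrative grid; a real dictionary is much denser
T1_grid = np.arange(100.0, 3000.0, 100.0)         # ms
T2_grid = np.arange(10.0, 300.0, 10.0)            # ms

entries, params = [], []
for T1 in T1_grid:
    for T2 in T2_grid:
        if T2 >= T1:                               # skip non-physical combinations
            continue
        d = simulate_evolution(T1, T2, M0=1.0, df=0.0, flips=flips, TRs=TRs)
        entries.append(d / np.linalg.norm(d))      # unit-norm entry (M0 only rescales it)
        params.append((T1, T2))
D = np.array(entries)                              # dictionary: one row per (T1, T2)

def match(x, D, params):
    """Return the (T1, T2) whose dictionary entry best correlates with x."""
    return params[int(np.argmax(np.abs(D.conj() @ x)))]
```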
Interpretation
• To me, MRF is the generalization of parameter
mapping from analytical equations to numerical
simulations
• This poses two new problems:
– Excitation problem: what is the best choice of signal
parameters to optimize parameter estimation?
• “optimize” is obviously a loaded word here, but we want a
sequence that is robust to system imperfections and fast
– Reconstruction problem: how do we efficiently find
the parameters from acquired data?
Excitation Problem
• This is not well addressed by the MRF abstract and
they default to a random choice of sequence variables
– Most likely sub-optimal, resulting in a long acquisition: 500
frames, 10 min per slice
– Success depends on whether the choice of TR and flip angle in
bSSFP produces enough incoherence between different
tissues (i.e. T1/T2/M0/B0 sets)
– May have to generalize even further, with complete
freedom in RF excitation and gradients to achieve
reasonable times with a random approach
• Especially if they expand the model to include diffusion or multicomponent behavior
Excitation Problem
• Could be seen as an optimization problem of the form:
find ρ[t], g_x[t], g_y[t], g_z[t]
min. ‖Σ‖_F + λT
– where T is the total time and Σ is the correlation matrix of the
signal evolutions for various tissues, e.g. WM, GM, CSF, fat,
lesion
– The Frobenius norm ‖·‖_F is the root sum of squares of all the
elements in a matrix
– In other words, find the sequence that best reduces total scan
time and correlation between the signal evolutions of different
tissues
• This is the general form, could of course constrain it to only
choose variables within a bSSFP framework (a toy evaluation of the cost is sketched below)
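As a sketch of how the cost could be evaluated in the bSSFP-constrained case (flip angles and TRs only), the block below computes the inter-tissue correlation matrix from simulated evolutions and adds a scan-time penalty. The tissue T1/T2 values, the λ value, and the choice to drop the (constant) diagonal of Σ are my assumptions; it again reuses the simulate_evolution sketch from earlier.

```python
import numpy as np
# assumes the simulate_evolution() sketch from earlier is in scope

tissues = {                      # rough placeholder (T1, T2) values in ms
    "WM":  (800.0, 70.0),
    "GM":  (1300.0, 90.0),
    "CSF": (4000.0, 2000.0),
    "fat": (300.0, 80.0),
}

def sequence_cost(flips, TRs, lam=1e-3):
    """Evaluate ‖Σ‖_F + λT for one candidate (flips, TRs) schedule."""
    E = []
    for T1, T2 in tissues.values():
        d = simulate_evolution(T1, T2, M0=1.0, df=0.0, flips=flips, TRs=TRs)
        E.append(d / np.linalg.norm(d))
    E = np.array(E)
    Sigma = np.abs(E.conj() @ E.T)       # correlations between tissue evolutions
    np.fill_diagonal(Sigma, 0.0)         # diagonal is 1 for unit-norm entries; drop it
    T_total = np.sum(TRs)                # total time for this schedule (ms)
    return np.linalg.norm(Sigma, "fro") + lam * T_total
```

A crude random search would then simply draw many candidate (flips, TRs) schedules and keep the one with the lowest cost.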
Reconstruction Problem
• Orthogonal Matching Pursuit is their dictionary
lookup method of choice
– Previously used by Doneva et al. in T1/T2 mapping
from T1 Look-Locker and T2 spin echo data
• OMP solves the following problem:
min_s ‖x − Ds‖_2
s.t. ‖s‖_0 ≤ K
– where D is the dictionary, s is a set of weights for the
over-complete basis functions in D, and K is the
sparsity
Orthogonal Matching Pursuit
• Essentially a CS reconstruction: the signal evolution is
sparse in the dictionary space (ideally it’s only one entry)
– Find a K-sparse representation that best matches the data
– May be strange to think about but T1/T2/M0/B0 maps are a way
to compress the acquired data set by using knowledge of the
signal behavior
• Matching Pursuit works by successively adding the most
correlated entries from the dictionary with each iteration
– Given a fixed dictionary, first find the one entry that has the
biggest inner product with the signal
– Then subtract the contribution due to that, and repeat the
process until the signal is satisfactorily decomposed.
• OMP is a refinement of this process that gives it additional
useful properties (a minimal implementation is sketched below)
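A minimal numpy sketch of OMP itself; the column-per-atom convention and variable names are mine, and this is a generic implementation rather than the specific one used in the MRF or Doneva et al. work.

```python
import numpy as np

def omp(x, D, K):
    """Orthogonal Matching Pursuit: K-sparse approximation of x in dictionary D.

    D has unit-norm columns (one per dictionary atom). Each iteration adds the
    atom most correlated with the current residual, then re-fits the
    coefficients of all selected atoms by least squares (the 'orthogonal' step).
    """
    residual = x.copy()
    support, coeffs = [], np.zeros(0)
    for _ in range(K):
        k = int(np.argmax(np.abs(D.conj().T @ residual)))  # most correlated atom
        if k not in support:
            support.append(k)
        # least-squares re-fit of x on all atoms selected so far
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    s = np.zeros(D.shape[1], dtype=coeffs.dtype)
    s[support] = coeffs
    return s, support
```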
Orthogonal Matching Pursuit
• Properties:
– For random linear measurements, requires O(K ln N)
samples – not sure how this applies to MRF
– For any finite dictionary of size N, converges to the
projection onto the span of D within N iterations
– After any n iterations, gives the optimal
approximation for that subset of the dictionary
– Fast convergence, within K iterations
• Applicable to any dictionary scheme
– mcDESPOT could benefit from this
Spatial Acceleration
• Random spatial encoding like in CS can also be utilized
in this framework
– Doneva et al. achieve such acceleration by taking
advantage of the robustness of OMP
• Sampling incoherently in the spatial domain produces noise-like
interference in the images, OMP can still fit well through this noise
• Of course sampling pattern must change between frames
• Can think of it like the inherent denoising in CS reconstruction (a toy demonstration follows after this list)
– Results would be improved by including a spatial constraint
on sparsity in a transform domain
• Is the wavelet domain sparse for their sorts of images? Probably.
• Parallel imaging also provides information
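A toy demonstration of that robustness: the dictionary below is just random unit-norm vectors standing in for Bloch-simulated evolutions, and the undersampling artifact is modeled as incoherent complex Gaussian interference with roughly the same energy as the signal itself. The sizes and interference level are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_frames, N_entries = 500, 2000

# stand-in dictionary: random unit-norm "evolutions" (one row per parameter set)
D = (rng.standard_normal((N_entries, N_frames))
     + 1j * rng.standard_normal((N_entries, N_frames)))
D /= np.linalg.norm(D, axis=1, keepdims=True)

true_idx = 123
x_clean = D[true_idx]

# model the undersampling artifact as noise-like interference whose total
# energy is comparable to the signal's (x_clean has unit norm)
aliasing = (rng.standard_normal(N_frames)
            + 1j * rng.standard_normal(N_frames)) / np.sqrt(2 * N_frames)
x = x_clean + aliasing

best = int(np.argmax(np.abs(D.conj() @ x)))
print(best == true_idx)   # expected True: matching tolerates incoherent interference
```

Because the interference is incoherent across frames, it correlates only weakly with any single dictionary entry, which is why the match survives; coherent (structured) aliasing would not behave this well.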
Temporal Acceleration
• Typical temporal acceleration is achieved by exploiting
smoothness in the temporal direction
– This is the case for both dynamic imaging and parameter
mapping
• This seems tricky with a random pulse sequence
– If the signal evolution appears random, then this is
inherently very hard to compress
– May be an advantage of a better solution to the Excitation
problem
– Alternatively, perhaps a random time course can achieve
good results in a shorter time than a solution that enforces
smoothness, so the net speed ends up being similar
(seems likely)