Accelerated MR Fingerprinting with Low-Rank and Generative Subspace Modeling
Abstract
Magnetic Resonance (MR) Fingerprinting is an emerging multi-parametric quantitative MR imaging technique, for which image reconstruction methods utilizing low-rank and subspace constraints have achieved state-of-the-art performance. However, this class of methods often suffers from an ill-conditioned model-fitting issue, which degrades the performance as the data acquisition lengths become short and/or the signal-to-noise ratio becomes low. To address this problem, we present a new image reconstruction method for MR Fingerprinting, integrating low-rank and subspace modeling with a deep generative prior. Specifically, the proposed method captures the strong spatiotemporal correlation of contrast-weighted time-series images in MR Fingerprinting via a low-rank factorization. Further, it utilizes an untrained convolutional generative neural network to represent the spatial subspace of the low-rank model, while estimating the temporal subspace of the model from simulated magnetization evolutions generated based on spin physics. Here the architecture of the generative neural network serves as an effective regularizer for the ill-conditioned inverse problem without additional spatial training data that are often expensive to acquire. The proposed formulation results in a non-convex optimization problem, for which we develop an algorithm based on variable splitting and alternating direction method of multipliers. We evaluate the performance of the proposed method with numerical simulations and in vivo experiments and demonstrate that the proposed method outperforms the state-of-the-art low-rank and subspace reconstruction.
Index Terms:
Generative model, low-rank model, subspace model, unsupervised learning, deep learning, quantitative MRI.
I Introduction
Magnetic Resonance (MR) Fingerprinting is an emerging quantitative MR imaging technique, which enables the simultaneous quantification of multiple MR tissue parameters (e.g., $T_1$, $T_2$, and proton density) in a single experiment [1]. MR Fingerprinting features an innovative transient-state signal excitation strategy. Together with highly-accelerated non-Cartesian acquisitions, it has enabled much faster imaging speeds than conventional quantitative MR imaging techniques. Since the invention of the technique, MR Fingerprinting has found a number of promising applications, including neuroimaging [2, 3], cardiovascular imaging [4, 5], cancer imaging [6, 7], and musculoskeletal imaging [8].
The original proof-of-principle MR Fingerprinting reconstruction employs a direct pattern matching scheme [1]. Although this method is simple and computationally efficient, it has been shown to be sub-optimal from a statistical estimation perspective [9]. Various more advanced image reconstruction methods, including model-based [10, 11, 12, 13, 9, 14, 15, 16, 17, 18, 19, 20] or learning-based methods [21, 22, 23, 24, 25, 26, 27, 28, 29], have been introduced over the past few years, which have significantly improved the accuracy and/or reduced acquisition times for MR Fingerprinting.
As a class of state-of-the-art image reconstruction methods, the low-rank reconstruction [12, 14, 17, 15, 16, 18, 19] utilizes a linear low-dimensional model to capture the strong spatiotemporal correlation of contrast-weighted time-series images in MR Fingerprinting. A subspace constraint can be further incorporated into the low-rank model by pre-estimating the temporal subspace of the model from an ensemble of magnetization evolutions simulated based on spin physics [15, 16, 19]. While the low-rank and subspace reconstruction significantly outperforms the conventional MR Fingerprinting reconstruction, its performance often degrades as the data acquisition length becomes short or the signal-to-noise ratio (SNR) becomes low, due to an ill-conditioned model-fitting issue arising from the undersampling of the temporal subspace of the low-rank model [30, 31].
In this paper, we introduce a new deep learning based method to improve low-rank and subspace reconstruction for MR Fingerprinting. Specifically, the proposed method utilizes an untrained generative convolutional neural network to represent the spatial subspace of the low-rank model, while estimating the temporal subspace directly from an ensemble of magnetization evolutions simulated based on spin physics. Here, with the inductive bias of a convolutional neural network architecture, the untrained neural network serves as an effective regularizer for the image reconstruction problem without requiring any additional training data, which are often very expensive to collect in MR Fingerprinting. To solve the resulting optimization problem, we develop an algorithm based on variable splitting [32, 33] and the alternating direction method of multipliers (ADMM) [34, 33]. We evaluate the proposed method with numerical simulations and in vivo experiments and demonstrate the improvement of the proposed method over the state-of-the-art low-rank and subspace reconstruction. A preliminary account of this work was described in our earlier conference paper [35].
A number of image reconstruction methods that use deep learning to improve low-rank and subspace reconstruction have recently been introduced [36, 37, 38, 39]. Note that the proposed method has key differences from these existing methods. First, the proposed method is an unsupervised learning method that utilizes the inductive bias of untrained neural networks as a regularizer. It is different from the supervised learning based reconstruction methods [36, 37, 38], which require a relatively large set of training data. Moreover, the proposed method is also distinct from a recent unsupervised learning based reconstruction method [39]. Specifically, the proposed method pre-determines the temporal subspace of the low-rank model, whereas the above method simultaneously estimates the spatial and temporal subspaces with a deep bi-linear model. In the context of quantitative MRI (e.g., MR Fingerprinting), the temporal subspace is often much easier to determine based on spin physics than in dynamic imaging, and a pre-determined temporal subspace results in a simplified image reconstruction problem. In addition, the proposed algorithm is rather different from the one in [39].
The rest of the paper is organized as follows. Section II describes the proposed method, including the problem formulation and solution algorithm. Section III includes representative results to illustrate the performance of the proposed method. Section IV contains the discussion, followed by the concluding remarks in Section V.
II Proposed Method
II-A Problem Formulation
MR Fingerprinting experiments involve acquiring a sequence of contrast-weighted MR images $\{I_m(r)\}_{m=1}^{M}$, in which each image can be parameterized as

$$I_m(r) = \rho(r)\,\phi_m\big(T_1(r),\, T_2(r)\big) \qquad (1)$$

for $m = 1, \dots, M$, where $\rho(r)$ denotes the proton density, $T_1(r)$ denotes the spin-lattice relaxation time, $T_2(r)$ denotes the spin-spin relaxation time, and $\phi_m(\cdot,\cdot)$ denotes the contrast-weighting function at the $m$th repetition time (TR), which is determined by the magnetization dynamics governed by the Bloch equation [40].
In this work, we consider a discrete image model which assumes that each contrast-weighted image can be completely represented by its values at $N$ spatial locations $\{r_n\}_{n=1}^{N}$. Accordingly, we can represent the contrast-weighted time-series images by the following Casorati matrix [41]:

$$X = \begin{bmatrix} I_1(r_1) & I_2(r_1) & \cdots & I_M(r_1) \\ I_1(r_2) & I_2(r_2) & \cdots & I_M(r_2) \\ \vdots & \vdots & \ddots & \vdots \\ I_1(r_N) & I_2(r_N) & \cdots & I_M(r_N) \end{bmatrix} \in \mathbb{C}^{N \times M}.$$
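As a concrete illustration, the following minimal NumPy sketch arranges an image series into a Casorati matrix of this form; the image dimensions and the random data standing in for the contrast-weighted images are hypothetical.

```python
import numpy as np

# Hypothetical dimensions: a 128x128 image series with M = 500 TRs.
nx, ny, M = 128, 128, 500
N = nx * ny                              # number of voxels

# Random stand-in for the contrast-weighted image series I_m(r).
rng = np.random.default_rng(0)
images = rng.standard_normal((nx, ny, M)) + 1j * rng.standard_normal((nx, ny, M))

# Casorati matrix X: one row per voxel, one column per TR (N x M).
X = images.reshape(N, M)
print(X.shape)                           # (16384, 500)
```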
In MR Fingerprinting, the imaging equation can be written as

$$d_{m,c} = F_m S_c\, x_m + n_{m,c} \qquad (2)$$

for $m = 1, \dots, M$ and $c = 1, \dots, N_c$. Here $x_m \in \mathbb{C}^{N}$ denotes the contrast-weighted MR image at the $m$th TR (i.e., the $m$th column of $X$), $F_m \in \mathbb{C}^{P \times N}$ denotes the undersampled Fourier-encoding matrix at the $m$th TR, $S_c \in \mathbb{C}^{N \times N}$ is a diagonal matrix whose diagonal entries contain the coil sensitivities from the $c$th coil, $d_{m,c} \in \mathbb{C}^{P}$ denotes the measured $k$-space data from the $c$th receiver coil at the $m$th TR, with $P$ the number of $k$-space samples acquired per TR, and $n_{m,c}$ denotes the measurement noise that is assumed to be complex Gaussian noise.
For notational simplicity, we can rewrite (2) in a more compact form as follows:

$$d_c = \Omega\big(F S_c X\big) + n_c, \qquad (3)$$

where $d_c = [d_{1,c}^T, \dots, d_{M,c}^T]^T$ contains all the measured $k$-space data from the $c$th coil, $F$ denotes the fully-sampled Fourier encoding matrix, $\Omega(\cdot)$ denotes the undersampling operator that acquires the $k$-space data for each contrast-weighted image and then concatenates them into the data vector $d_c$, $n_c$ contains the measurement noise from the $c$th coil, and $c = 1, \dots, N_c$.
In MR Fingerprinting experiments, $k$-space data are highly-undersampled, i.e., $P \ll N$. A significant technical challenge arises from reconstructing contrast-weighted time-series images from highly-undersampled data. To ensure high-quality reconstruction, we impose a low-rank constraint on $X$, i.e., $\mathrm{rank}(X) \leq L$, by exploiting the strong spatiotemporal correlation of contrast-weighted images in MR Fingerprinting. The low-rank constraint reduces the number of degrees of freedom in $X$ to $L(N + M - L)$ [42], which is often much less than the total number of entries of $X$. This enables image reconstruction from highly-undersampled $k$-space data.
While there are many ways of imposing a low-rank constraint [42], here we use a low-rank matrix factorization [43, 44, 30], i.e., $X = UV$, where $U \in \mathbb{C}^{N \times L}$ and $V \in \mathbb{C}^{L \times M}$. Note that the columns of $U$ and the rows of $V$ respectively span the spatial subspace and the temporal subspace of $X$. Further, we can incorporate a subspace constraint by pre-estimating $V$ from an ensemble of magnetization evolutions simulated based on spin physics [15] or some physically-acquired navigator data [45].
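For concreteness, the subspace pre-estimation step can be sketched as follows, assuming a hypothetical array `dictionary` of Bloch-simulated magnetization evolutions (one row per parameter combination, one column per TR) and a hand-picked rank; the temporal subspace is taken as the leading right singular vectors of the ensemble.

```python
import numpy as np

# Hypothetical ensemble of Bloch-simulated magnetization evolutions:
# one row per (T1, T2) combination, one column per TR (random stand-in here).
rng = np.random.default_rng(1)
dictionary = rng.standard_normal((5000, 500))

L = 5                                           # hand-picked rank
# The leading right singular vectors span the temporal subspace of the ensemble.
_, _, Vh = np.linalg.svd(dictionary, full_matrices=False)
V_hat = Vh[:L, :]                               # L x M, rows form the temporal basis

# Projection of the simulated ensemble onto the estimated temporal subspace.
projected = dictionary @ V_hat.conj().T @ V_hat
```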
While the low-rank and subspace reconstruction significantly outperforms the dictionary-matching based conventional reconstruction [1], it often suffers from an ill-conditioned model fitting issue, due to the undersampling of the temporal subspace of the model. This degrades the performance of the low-rank and subspace reconstruction [15], as the data acquisition length becomes short and/or SNR becomes low.
To address the issue, here we incorporate a deep generative prior into the low-rank and subspace reconstruction. While the effectiveness of a pre-trained deep generative prior [46] has been demonstrated in solving inverse problems, it often requires large training data sets, which are often very expensive to acquire in MR Fingerprinting. Recently, it has been demonstrated that, with the inductive bias of a deep convolutional architecture, an untrained generative neural network (e.g., the deep image prior [47] or the deep decoder [48, 49]) can serve as an effective spatial prior. Here we use an untrained generative neural network to represent the spatial subspace of the low-rank model, which serves as a regularizer for the low-rank and subspace reconstruction. Mathematically, we have $U = G_{\theta}(z)$, where $G_{\theta}(\cdot)$ is a generative neural network, $z$ is a low-dimensional latent random vector, and $\theta$ contains the trainable parameters of the neural network. Here we illustrate an example generative convolutional neural network architecture in Fig. 1, which consists of convolutional, up-sampling, ReLU, and batch normalization layers.
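As an illustration only (the channel widths, depth, and latent size below are assumptions, not the exact architecture of Fig. 1), a deep-image-prior-style generator built from these layer types can be sketched in PyTorch as follows; the network maps a fixed random latent tensor to $2L$ output channels, interpreted as the real and imaginary parts of the $L$ spatial subspace images.

```python
import torch
import torch.nn as nn

L = 5                                    # rank of the low-rank model (illustrative)

class GenerativeSubspaceNet(nn.Module):
    """Untrained generator G_theta(z): latent tensor -> 2L-channel output
    (real and imaginary parts of the L spatial subspace images)."""
    def __init__(self, latent_ch=64, out_ch=2 * L):
        super().__init__()
        def block(c_in, c_out):
            # Up-sampling, convolution, batch normalization, and ReLU, as in Fig. 1.
            return nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True),
            )
        self.net = nn.Sequential(
            block(latent_ch, 64), block(64, 64), block(64, 32),
            nn.Conv2d(32, out_ch, kernel_size=1),
        )
    def forward(self, z):
        return self.net(z)

# Fixed random latent input; 16x16 spatial size is upsampled by a factor of 8 to 128x128.
z = torch.randn(1, 64, 16, 16)
G = GenerativeSubspaceNet()
U_maps = G(z)                            # shape: (1, 2L, 128, 128)
```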
Integrating the low-rank and subspace model with the deep generative prior, the image reconstruction problem can be formulated as follows:

$$\hat{\theta} = \arg\min_{\theta}\ \sum_{c=1}^{N_c} \big\| d_c - \Omega\big(F S_c\, G_{\theta}(z)\,\hat{V}\big) \big\|_2^2, \qquad (4)$$
where $\hat{V}$ denotes the temporal subspace of the low-rank model, pre-estimated from the simulated ensemble via singular value decomposition. After solving (4), we can reconstruct the contrast-weighted time-series images as $\hat{X} = G_{\hat{\theta}}(z)\hat{V}$, from which we then estimate MR tissue parameters via voxel-wise pattern matching. Next, we describe an optimization algorithm to solve (4).
II-B Solution Algorithm
The proposed formulation in (4) results in a non-convex optimization problem. Here we describe an algorithm based on variable splitting and the ADMM method [34, 33] to solve this problem. First, we introduce a set of auxiliary variables to split the coil sensitivities from the objective function. This converts (4) into the following equivalent constrained optimization problem:

$$\min_{\theta,\, U,\, \{Z_c\}}\ \sum_{c=1}^{N_c} \big\| d_c - \Omega(F Z_c) \big\|_2^2 \quad \text{s.t.} \quad Z_c = S_c U \hat{V}, \quad U = G_{\theta}(z), \qquad (5)$$

for $c = 1, \dots, N_c$.
Second, we form the augmented Lagrangian associated with (5) as follows:

$$\begin{aligned} \mathcal{L}\big(\{Z_c\}, U, \theta, \{Y_c\}, W\big) =\ & \sum_{c=1}^{N_c} \big\| d_c - \Omega(F Z_c) \big\|_2^2 + \sum_{c=1}^{N_c} \mathrm{Re}\big\langle Y_c,\, Z_c - S_c U \hat{V} \big\rangle + \frac{\mu_1}{2}\sum_{c=1}^{N_c} \big\| Z_c - S_c U \hat{V} \big\|_F^2 \\ & + \mathrm{Re}\big\langle W,\, U - G_{\theta}(z) \big\rangle + \frac{\mu_2}{2}\big\| U - G_{\theta}(z) \big\|_F^2, \end{aligned} \qquad (6)$$

where $\{Y_c\}$ and $W$ are the Lagrangian multipliers, $\mu_1$ and $\mu_2$ are the penalty parameters, and the operator $\mathrm{Re}\{\cdot\}$ takes the real part of a complex number.
Next, we iteratively minimize (6) by solving the following three subproblems:
$$\{Z_c^{(k+1)}\} = \arg\min_{\{Z_c\}}\ \mathcal{L}\big(\{Z_c\}, U^{(k)}, \theta^{(k)}, \{Y_c^{(k)}\}, W^{(k)}\big), \qquad (7)$$

$$U^{(k+1)} = \arg\min_{U}\ \mathcal{L}\big(\{Z_c^{(k+1)}\}, U, \theta^{(k)}, \{Y_c^{(k)}\}, W^{(k)}\big), \qquad (8)$$

$$\theta^{(k+1)} = \arg\min_{\theta}\ \mathcal{L}\big(\{Z_c^{(k+1)}\}, U^{(k+1)}, \theta, \{Y_c^{(k)}\}, W^{(k)}\big), \qquad (9)$$
and updating the Lagrangian multipliers as follows:
$$Y_c^{(k+1)} = Y_c^{(k)} + \mu_1\big(Z_c^{(k+1)} - S_c U^{(k+1)} \hat{V}\big), \quad c = 1, \dots, N_c, \qquad (10)$$

$$W^{(k+1)} = W^{(k)} + \mu_2\big(U^{(k+1)} - G_{\theta^{(k+1)}}(z)\big), \qquad (11)$$
until the change of the solution is less than a pre-specified tolerance or the maximum number of iterations is reached. Next, we describe the detailed solutions to the subproblems in (7) - (9).
1) Solution to (7): Note that the subproblem in (7) can be written as

$$\min_{\{Z_c\}}\ \sum_{c=1}^{N_c} \big\| d_c - \Omega(F Z_c) \big\|_2^2 + \frac{\mu_1}{2}\sum_{c=1}^{N_c} \big\| Z_c - B_c \big\|_F^2, \qquad (12)$$

where $B_c = S_c U^{(k)} \hat{V} - Y_c^{(k)}/\mu_1$. This is a linear least-squares problem separable for each $c$, i.e.,
$$\min_{Z_c}\ \big\| d_c - \Omega(F Z_c) \big\|_2^2 + \frac{\mu_1}{2}\big\| Z_c - B_c \big\|_F^2, \qquad (13)$$

for $c = 1, \dots, N_c$. Further, it can be shown that (13) is separable for each column of $Z_c$, i.e.,

$$\min_{z_{c,m}}\ \big\| d_{m,c} - F_m z_{c,m} \big\|_2^2 + \frac{\mu_1}{2}\big\| z_{c,m} - b_{c,m} \big\|_2^2, \qquad (14)$$

for $m = 1, \dots, M$, where $z_{c,m}$ and $b_{c,m}$ are the $m$th columns of $Z_c$ and $B_c$, respectively. The solution to (14) can be written as
$$\hat{z}_{c,m} = \Big( F_m^H F_m + \frac{\mu_1}{2} I \Big)^{-1}\Big( F_m^H d_{m,c} + \frac{\mu_1}{2} b_{c,m} \Big), \qquad (15)$$

where $I$ is the $N \times N$ identity matrix. While (15) can be solved iteratively (e.g., with the conjugate gradient algorithm [50]), we can solve it in a more efficient way with a direct matrix inversion, given that the same coefficient matrix is shared by a number of linear systems of equations at different iterations of the algorithm. (In standard MR Fingerprinting acquisitions (e.g., [1, 51]), the same $k$-space trajectory repeats at different TRs; thus, the same coefficient matrix also occurs at different TRs in each iteration of the algorithm.)
Specifically, consider the inversion of the coefficient matrix of (15); by applying the matrix inversion lemma [52], we have

$$\Big( F_m^H F_m + \frac{\mu_1}{2} I \Big)^{-1} = \frac{2}{\mu_1}\Big[ I - F_m^H \Big( \frac{\mu_1}{2} I_P + F_m F_m^H \Big)^{-1} F_m \Big]. \qquad (16)$$

Note that $\big( \tfrac{\mu_1}{2} I_P + F_m F_m^H \big)^{-1}$ is a small-scale matrix that can be pre-computed and saved, given that $k$-space data are highly-undersampled for each TR (i.e., $P \ll N$). Here we only need to compute this matrix once and save it for later use at different iterations of the algorithm. Consequently, the matrix inversion in (16) only encompasses two non-uniform Fourier transforms and some small-scale matrix-vector multiplications that can be calculated very efficiently.
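To illustrate this computational structure, the following NumPy sketch applies the same Woodbury-type identity on a small dense example; a random dense matrix stands in for the non-uniform Fourier-encoding operator $F_m$ (which, in the actual reconstruction, would be applied implicitly via NUFFTs rather than stored as a matrix), the small $P \times P$ inverse is formed once, and the result matches a direct solve.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 400, 30                      # voxels vs. k-space samples per TR (P << N)
mu1 = 0.5                           # ADMM penalty parameter (illustrative)

# Dense stand-in for the undersampled Fourier-encoding matrix F_m.
F = (rng.standard_normal((P, N)) + 1j * rng.standard_normal((P, N))) / np.sqrt(N)
d = rng.standard_normal(P) + 1j * rng.standard_normal(P)   # k-space data (stand-in)
b = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # m-th column of B_c (stand-in)

alpha = mu1 / 2.0
rhs = F.conj().T @ d + alpha * b

# Direct solve of (F^H F + alpha I) z = rhs (O(N^3); for reference only).
z_direct = np.linalg.solve(F.conj().T @ F + alpha * np.eye(N), rhs)

# Matrix inversion lemma:
# (F^H F + alpha I)^{-1} = (1/alpha) [ I - F^H (alpha I_P + F F^H)^{-1} F ].
# The small P x P inverse is computed once and reused across TRs and iterations.
small_inv = np.linalg.inv(alpha * np.eye(P) + F @ F.conj().T)
z_lemma = (rhs - F.conj().T @ (small_inv @ (F @ rhs))) / alpha

print(np.allclose(z_direct, z_lemma))   # True (up to numerical precision)
```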
2) Solution to (8): The subproblem in (8) can be written as

$$\min_{U}\ \frac{\mu_1}{2}\sum_{c=1}^{N_c} \Big\| Z_c^{(k+1)} - S_c U \hat{V} + \frac{1}{\mu_1} Y_c^{(k)} \Big\|_F^2 + \frac{\mu_2}{2}\Big\| U - G_{\theta^{(k)}}(z) + \frac{1}{\mu_2} W^{(k)} \Big\|_F^2, \qquad (17)$$
which can be solved by
$$\Big( \mu_1\sum_{c=1}^{N_c} S_c^H S_c + \mu_2 I \Big) U^{(k+1)} = \sum_{c=1}^{N_c} S_c^H\Big( \mu_1 Z_c^{(k+1)} + Y_c^{(k)} \Big)\hat{V}^H + \mu_2\, G_{\theta^{(k)}}(z) - W^{(k)}, \qquad (18)$$

where we have used the fact that the rows of $\hat{V}$ are orthonormal, i.e., $\hat{V}\hat{V}^H = I$.
Noting that the coefficient matrix is diagonal, we can solve (18) very efficiently via an entry-by-entry inversion.
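A minimal NumPy sketch of this entry-by-entry solve is given below, with stand-in coil sensitivities, penalty parameters, and right-hand side; the diagonal coefficient depends only on the sum-of-squares sensitivity map and the penalty parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
N, L, Nc = 1024, 5, 8                     # voxels, rank, coils (illustrative)
mu1, mu2 = 0.5, 0.1                       # penalty parameters (illustrative)

# Stand-in coil sensitivities (the diagonal of each S_c) and right-hand side of (18).
S = rng.standard_normal((Nc, N)) + 1j * rng.standard_normal((Nc, N))
RHS = rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))

# Diagonal of the coefficient matrix: mu1 * sum_c |s_c|^2 + mu2 (one value per voxel).
diag = mu1 * np.sum(np.abs(S) ** 2, axis=0) + mu2

# Entry-by-entry inversion: divide each row of the right-hand side by its diagonal entry.
U = RHS / diag[:, None]
```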
3) Solution to (9): The subproblem in (9) can be written as

$$\min_{\theta}\ \Big\| U^{(k+1)} - G_{\theta}(z) + \frac{1}{\mu_2} W^{(k)} \Big\|_F^2. \qquad (19)$$
This is a large-scale nonlinear optimization problem similar to a neural network training problem. Here we solve it by the Adam algorithm [53].
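The following PyTorch sketch illustrates this step under simplifying assumptions: a tiny stand-in generator replaces the architecture of Fig. 1, the fixed quantity $U^{(k+1)} + W^{(k)}/\mu_2$ is supplied as a random tensor with real and imaginary parts stacked along the channel dimension, and the learning rate and iteration count are illustrative.

```python
import torch
import torch.nn as nn

L = 5
# Tiny stand-in generator (any differentiable G_theta(z) works here); the actual
# method uses the convolutional generative architecture of Fig. 1.
G = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 2 * L, 3, padding=1))
z = torch.randn(1, 16, 128, 128)                     # fixed random latent input

# Stand-in for the fixed quantity U^(k+1) + W^(k)/mu2 (real/imag stacked as channels).
U_target = torch.randn(1, 2 * L, 128, 128)

opt = torch.optim.Adam(G.parameters(), lr=1e-3)      # Adam on the network weights
for _ in range(200):                                 # illustrative iteration count
    opt.zero_grad()
    loss = torch.sum((G(z) - U_target) ** 2)         # Frobenius-norm data fit of (19)
    loss.backward()
    opt.step()
```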
For clarity, we summarize the overall algorithm in Algorithm 1. Given that (5) is a non-convex optimization problem, the performance of the proposed algorithm depends on initialization. Here we initialize $U$ using the low-rank and subspace reconstruction, which is often computationally efficient. Additionally, we initialize the Lagrangian multipliers $\{Y_c\}$ and $W$ with zeros, and initialize the latent vector $z$ with a random vector. In the next section, we will illustrate the performance of the above initialization scheme.
III Results
In this section, we show representative results from simulations and in vivo experiments to demonstrate the performance of the proposed method.
III-A Simulation Results
We conducted simulations on a numerical brain phantom to evaluate the proposed method. The numerical phantom simulates a single-channel MR Fingerprinting experiment, which has the same setup as the one in [9]. Specifically, we took the $T_1$, $T_2$, and proton density maps from the Brainweb database [54] as the ground truth. We performed Bloch simulations to generate contrast-weighted time-series images by an inversion recovery fast imaging with steady state precession (IR-FISP) sequence [51]. Here we used the same set of flip angles and repetition times as in [51] for Bloch simulations. We set the field-of-view (FOV) as mm2 and the corresponding matrix size as .
We acquired highly-undersampled $k$-space data using one spiral interleaf per TR, while a set of fully-sampled $k$-space data consists of spiral interleaves. We added complex white Gaussian noise to the measured data. Here we define the signal-to-noise ratio (SNR) as $\mathrm{SNR} = 20\log_{10}(\bar{s}/\sigma)$, where $\bar{s}$ denotes the average signal intensity within a region of the white matter in the first contrast-weighted image, and $\sigma$ denotes the standard deviation of the noise.
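A small NumPy sketch of this noise-addition and SNR convention (assuming the $20\log_{10}$ definition above, with a hypothetical white-matter mask and noise level) is given below.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-ins: first contrast-weighted image and a white-matter mask.
image = rng.standard_normal((128, 128)) + 1j * rng.standard_normal((128, 128))
wm_mask = np.zeros((128, 128), dtype=bool)
wm_mask[40:60, 40:60] = True

sigma = 0.05                                    # noise standard deviation
# Complex white Gaussian noise with standard deviation sigma.
noise = sigma * (rng.standard_normal(image.shape)
                 + 1j * rng.standard_normal(image.shape)) / np.sqrt(2)
noisy = image + noise

s_bar = np.mean(np.abs(image[wm_mask]))         # average white-matter signal intensity
snr_db = 20 * np.log10(s_bar / sigma)           # SNR in dB
```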
We performed image reconstruction using the low-rank and subspace method [15] and the proposed method. Here we manually selected the rank for both methods for optimized performance. For the proposed method, we employed the generative convolutional neural network architecture illustrated in Fig. 1. We solved the subproblem (19) using the Adam algorithm with the learning rate and the number of iterations . In addition, we manually selected the penalty parameters of the ADMM algorithm for optimized performance. We implemented both reconstruction methods on a Linux server with dual Intel Xeon Platinum R processors (each with GHz and cores), TB RAM, and dual NVIDIA A GPUs (each with GB RAM) running MATLAB R2022b and the Deep Learning Toolbox.
After image reconstruction, we estimated MR tissue parameter maps by dictionary matching. Here the same dictionary was used for both methods based on the following discretization scheme: 1) the range of possible $T_1$ values was in ms, with an incremental step of ms in the range of ms and an incremental step of ms in the range of ms; and 2) the range of possible $T_2$ values was in ms, with an incremental step of ms in the range of ms and an incremental step of ms in the range of ms.
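A minimal sketch of the voxel-wise dictionary-matching step is given below; the dictionary atoms, their $(T_1, T_2)$ values, and the reconstructed time series are random stand-ins, and each voxel is assigned the parameters of the atom with the largest normalized inner product, with the proton density obtained from the corresponding scaling.

```python
import numpy as np

rng = np.random.default_rng(5)
N, M, K = 1000, 500, 4000                   # voxels, TRs, dictionary atoms (illustrative)

# Hypothetical dictionary: K simulated evolutions with their (T1, T2) values.
D = rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))
t1_grid = rng.uniform(100, 3000, K)         # ms, stand-in values
t2_grid = rng.uniform(10, 500, K)           # ms, stand-in values
D_norm = D / np.linalg.norm(D, axis=1, keepdims=True)

# Stand-in for the reconstructed contrast-weighted time series (one row per voxel).
X_hat = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))

# Voxel-wise matching: maximize |<x_n, d_k>| over the normalized atoms.
corr = np.abs(X_hat @ D_norm.conj().T)      # N x K
best = np.argmax(corr, axis=1)
T1_map, T2_map = t1_grid[best], t2_grid[best]

# Proton density from the scaling between each voxel signal and its matched atom.
pd_map = corr[np.arange(N), best] / np.linalg.norm(D[best], axis=1)
```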
To assess the accuracy of reconstructed contrast-weighted MR images or parameter maps, we used the normalized root-mean-square error (NRMSE), defined as $\mathrm{NRMSE} = \|\hat{x} - x\|_2 / \|x\|_2$, where $x$ and $\hat{x}$ respectively denote the ground truth and the reconstruction. (The regions associated with the background, skull, scalp, and cerebrospinal fluid were not included in the NRMSE calculation, given that they are often not regions of interest for neuroimaging studies.)
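For reference, a minimal NumPy implementation of this masked NRMSE (with stand-in maps and a hypothetical evaluation mask) is given below.

```python
import numpy as np

def nrmse(reference, estimate, mask):
    """NRMSE over the voxels selected by mask (background/skull/scalp/CSF excluded)."""
    ref = reference[mask]
    return np.linalg.norm(estimate[mask] - ref) / np.linalg.norm(ref)

# Stand-in maps and evaluation mask.
rng = np.random.default_rng(6)
truth = rng.uniform(500, 1500, (128, 128))            # e.g., a T1 map in ms
recon = truth + 20 * rng.standard_normal((128, 128))
mask = np.ones((128, 128), dtype=bool)
print(nrmse(truth, recon, mask))
```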
III-A1 Algorithm convergence
The proposed formulation results in a non-convex optimization problem. While there is no theoretical guarantee for the convergence of the algorithm for solving (4), we demonstrate the convergence empirically with numerical simulations. Specifically, we simulated an MR Fingerprinting experiment with an acquisition length of and an SNR of dB, and performed image reconstruction using the proposed method. Here we initialized $U$ using the low-rank and subspace reconstruction. We initialized $\{Y_c\}$ and $W$ with zeros and $z$ with a random vector. Fig. 2 shows the NRMSE of the reconstructed time-series images using the proposed method. As can be seen, with a relatively small number of iterations, the proposed method improves the NRMSE by a factor of two, as compared to the initialization with the low-rank and subspace reconstruction.
III-A2 Impact of acquisition length
We evaluated the performance of the low-rank and subspace reconstruction and the proposed method at different acquisition lengths, ranging from to . We set the SNR to dB for this set of simulations. Fig. 3(a) shows the NRMSE of the reconstructed time-series images at different acquisition lengths using the above two methods. Fig. 3(b) shows three representative contrast-weighted MR images reconstructed by the above methods. As can be seen, the proposed method substantially outperforms the low-rank and subspace method, which illustrates the benefits of incorporating a deep generative neural network as a regularizer for the low-rank and subspace reconstruction.
Fig. 4 shows the NRMSE of the reconstructed $T_1$, $T_2$, and proton density maps from the low-rank and subspace reconstruction and the proposed method at different acquisition lengths. As can be seen, the proposed method improves over the low-rank and subspace method for the reconstruction of the $T_1$, $T_2$, and proton density maps at all acquisition lengths, and the improvement is more significant for , as the acquisition length becomes short.
Figs. 5, 6, and 7 show the reconstructed $T_1$, $T_2$, and proton density maps from the low-rank and subspace reconstruction and the proposed method at the two acquisition lengths (i.e., and ). As can be seen, the proposed method outperforms the low-rank and subspace reconstruction at both acquisition lengths. At a short acquisition length (i.e., ), it enables a factor-of-two improvement for the reconstruction of the and proton density maps. It is clear that, with a generative neural network as a regularizer, the proposed method effectively mitigates the noise amplification incurred by the ill-conditioning issue associated with the low-rank and subspace reconstruction.
III-A3 Impact of SNR
We also evaluated the performance of the proposed method at different SNR levels ranging from dB to dB at the acquisition length . Fig. 8 shows the NRMSE of the reconstructed $T_1$, $T_2$, and proton density maps. As can be seen, the proposed method outperforms the low-rank and subspace reconstruction at different noise levels, and the improvement is more significant as the SNR becomes low. Again, this illustrates the effectiveness of incorporating a deep generative neural network as a regularizer for the low-rank and subspace reconstruction.
III-B In Vivo Results
We evaluated the performance of the proposed method with in vivo MR Fingerprinting experiments. Specifically, we conducted in vivo experiments on a 3T Siemens Magnetom Prisma scanner (Siemens Medical Solutions, Erlangen, Germany) equipped with a 20-channel receiver coil. A healthy volunteer was scanned using the IR-FISP sequence [51] with approval from the Institutional Review Board. For in vivo experiments, we used the same set of acquisition parameters and spiral trajectories as in [51], and other relevant imaging parameters include FOV mm2, matrix size, and slice thickness mm.
Apart from the undersampled MR Fingerprinting experiments, we also acquired a set of fully-sampled data with the acquisition length following the same procedure as in [9], and then estimated $T_1$, $T_2$, and proton density maps as our references. Moreover, we conducted a calibration scan using a vendor-provided gradient echo sequence to estimate the coil sensitivity maps.
We performed the low-rank and subspace reconstruction and the proposed method for in vivo data. For the above two methods, we used the same rank value (i.e., ). In addition, we used the same dictionary for subspace estimation and dictionary matching as described in the simulations for both methods. For the proposed method, we used the same neural network architecture as described in the simulations, and manually selected the penalty parameters for optimized performance.
Fig. 9 shows the results from the low-rank and subspace reconstruction and the proposed method at the acquisition length . As can be seen, the proposed method outperforms the low-rank and subspace reconstruction for the $T_1$, $T_2$, and proton density maps, which is consistent with the simulation results. This further confirms the role of a deep generative neural network as a regularizer for improving the low-rank and subspace reconstruction.
IV Discussion
We have demonstrated the efficacy of the proposed method in utilizing an untrained deep generative prior to improve the low-rank and subspace reconstruction. Here it is worth making further remarks on some related aspects. First, while we have illustrated the effectiveness of the proposed method utilizing a specific generative convolutional neural network, there are other alternative neural network architectures (e.g., U-net [55, 47] or deep decoder [48]) that could also be adopted for the proposed method. Here it is worth pointing out that exploring the best neural network architecture for the proposed method is an interesting open problem that requires an in-depth future study.
It is known that the use of untrained deep generative neural networks for solving inverse problems often has an overfitting issue [47]. For the proposed method, we used an early-stopping strategy to effectively mitigate this problem. Alternatively, we can incorporate additional regularizers (e.g., total variation [56] or plug-and-play [57] regularization) into the proposed formulation to address the issue. Although it is straightforward to extend the proposed algorithm to accommodate these regularizers, the computation time of the algorithm will inevitably increase.
We have used an untrained neural network prior to improve the low-rank and subspace reconstruction. It has been demonstrated in earlier work [58] that pre-training a neural network with a few examples can improve the performance of solving inverse problems over that with an untrained neural network. Here we could adopt a similar strategy for the proposed method. For example, with the use of a few training examples, we could jointly optimize the latent vector and the weights of the neural network in the proposed formulation, which may potentially improve the performance of the proposed method. A systematic investigation of this extension will be conducted in future work.
In addition to the architecture of the neural network, the proposed method also has a number of other hyperparameters to select, including the rank $L$ and the penalty parameters $\mu_1$ and $\mu_2$ for the algorithm. While there exist theoretical methods that could aid the selection of these hyperparameters [59, 60], we can also optimize them in an empirical way, e.g., with fully-sampled reference data sets from phantom and/or in vivo experiments. The effectiveness of the latter approach has been demonstrated in this work and earlier methods [30, 31, 15].
In this work, we have demonstrated the effectiveness of the proposed method with a conventional 2D MR Fingerprinting acquisition scheme. As a general image reconstruction method, it can also be integrated with other optimized MR Fingerprinting acquisition schemes (e.g., [61, 62, 63]) for further improved performance. Moreover, the proposed method can be generalized to handle volumetric MR Fingerprinting acquisitions (e.g., [64, 65, 66]) to achieve faster imaging speeds. Last but not least, the proposed method can be extended to other MR parameter mapping or quantitative MRI applications. The preliminary results in our recent work [67] have demonstrated the utilities of such an extension.
V Conclusion
In this paper, we have presented a new learning-based image reconstruction method for MR Fingerprinting, which integrates low-rank and subspace modeling with a deep generative prior through an untrained neural network. The proposed method utilizes the architecture of a convolutional generative neural network to represent the spatial subspace of the low-rank model, which serves as an effective regularizer for the low-rank and subspace reconstruction. To solve the resulting optimization problem, we have described an algorithm based on variable splitting and the alternating direction method of multipliers. We have shown results from numerical simulations and in vivo experiments to demonstrate the improvement of the proposed method over the state-of-the-art low-rank and subspace reconstruction.
References
- [1] D. Ma, V. Gulani, N. Seiberlich, K. Liu, J. L. Sunshine, J. L. Duerk, and M. A. Griswold, “Magnetic resonance fingerprinting,” Nature, vol. 495, pp. 187–192, 2013.
- [2] C. Liao, K. Wang, X. Cao, Y. Li, D. Wu, H. Ye, Q. Ding, H. He, and J. Zhong, “Detection of lesions in mesial temporal lobe epilepsy by using MR fingerprinting,” Radiology, vol. 288, no. 3, pp. 804–812, 2018.
- [3] J. Y. Choi, B. Krishnan, S. Hu, D. Martinez, Y. Tang, X. Wang, K. Sakaie, S. Jones, H. Murakami, I. Blümcke, I. Najm, D. Ma, and Z. I. Wang, “Using magnetic resonance fingerprinting to characterize periventricular nodular heterotopias in pharmacoresistant epilepsy,” Epilepsia, vol. 63, no. 5, pp. 1225–1237, 2022.
- [4] J. I. Hamilton, Y. Jiang, Y. Chen, D. Ma, W.-C. Lo, M. A. Griswold, and N. Seiberlich, “MR fingerprinting for rapid quantification of myocardial T1, T2, and proton spin density,” Magn. Reson. Med., vol. 77, no. 4, pp. 1446–1458, 2017.
- [5] O. Jaubert, G. Cruz, A. Bustin, R. Hajhosseiny, S. Nazir, T. Schneider, P. Koken, M. Doneva, D. Rueckert, P.-G. Masci, R. M. Botnar, and C. Prieto, “T1, T2, and fat fraction cardiac MR fingerprinting: Preliminary clinical evaluation,” J. Magn. Reson. Imaging, vol. 53, no. 4, pp. 1253–1265, 2021.
- [6] C. Badve, A. Yu, S. Dastmalchian, M. Rogers, D. Ma, Y. Jiang, S. Margevicius, S. Pahwa, Z. Lu, M. Schluchter et al., “MR fingerprinting of adult brain tumors: Initial experience,” Am. J. Neuroradiol., vol. 38, no. 3, pp. 492–499, 2017.
- [7] C. M. Pirkl, L. Nunez-Gonzalez, F. Kofler, S. Endt, L. Grundl, M. Golbabaee, P. A. Gómez, M. Cencini, G. Buonincontri, R. F. Schulte, M. Smits, B. Wiestler, B. H. Menze, M. I. Menzel, and J. A. Hernandez-Tamames, “Accelerated 3D whole-brain T1, T2, and proton density mapping: Feasibility for clinical glioma MR imaging,” Neuroradiol., vol. 63, no. 11, pp. 1831–1851, 2021.
- [8] A. Sharafi, M. V. W. Zibetti, G. Chang, M. Cloos, and R. R. Regatte, “MR fingerprinting for rapid simultaneous T1, T2, and T1ρ relaxation mapping of the human articular cartilage at 3T,” Magn. Reson. Med., vol. 84, no. 5, pp. 2636–2644, 2020.
- [9] B. Zhao, K. Setsompop, H. Ye, S. F. Cauley, and L. L. Wald, “Maximum likelihood reconstruction for magnetic resonance fingerprinting,” IEEE Trans. Med. Imaging, vol. 35, no. 8, pp. 1812–1823, 2016.
- [10] M. Davies, G. Puy, P. Vandergheynst, and Y. Wiaux, “A compressed sensing framework for magnetic resonance fingerprinting,” SIAM J. Imaging Sciences, vol. 7, no. 4, pp. 2623–2656, 2014.
- [11] Z. Wang, Q. Zhang, J. Yuan, and X. Wang, “MRF denoising with compressed sensing and adaptive filtering,” in Proc. IEEE Int. Symp. Biomed. Imaging, 2014, pp. 870–873.
- [12] B. Zhao, “Model-based iterative reconstruction for magnetic resonance fingerprinting,” in Proc. IEEE Int. Conf. Image Process., 2015, pp. 3392–3396.
- [13] E. Y. Pierre, D. Ma, Y. Chen, C. Badve, and M. A. Griswold, “Multiscale reconstruction for MR fingerprinting,” Magn. Reson. Med., vol. 75, no. 6, pp. 2481–2492, 2016.
- [14] M. Doneva, T. Amthor, P. Koken, K. Sommer, and P. Börnert, “Matrix completion-based reconstruction for undersampled magnetic resonance fingerprinting data,” Magn. Reson. Imaging, vol. 41, pp. 41–52, 2017.
- [15] B. Zhao, K. Setsompop, E. Adalsteinsson, B. Gagoski, H. Ye, D. Ma, Y. Jiang, P. E. Grant, M. A. Griswold, and L. L. Wald, “Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling,” Magn. Reson. Med., vol. 79, no. 2, pp. 933–942, 2018.
- [16] J. Assländer, M. A. Cloos, F. Knoll, D. K. Sodickson, J. Hennig, and R. Lattanzi, “Low rank alternating direction method of multipliers reconstruction for MR fingerprinting,” Magn. Reson. Med., vol. 79, no. 1, pp. 83–96, 2018.
- [17] G. Mazor, L. Weizman, A. Tal, and Y. C. Eldar, “Low-rank magnetic resonance fingerprinting,” Med. Phys., vol. 45, no. 9, pp. 4066–4084, 2018.
- [18] G. Lima da Cruz, A. Bustin, O. Jaubert, T. Schneider, R. M. Botnar, and C. Prieto, “Sparsity and locally low rank regularization for MR fingerprinting,” Magn. Reson. Med., vol. 81, no. 6, pp. 3530–3543, 2019.
- [19] B. Zhao, K. Setsompop, D. Salat, and L. L. Wald, “Further development of subspace imaging to magnetic resonance fingerprinting: A low-rank tensor approach,” in Proc. IEEE Eng. Med. Bio. Conf., 2020, pp. 1662–1666.
- [20] Y. Hu, P. Li, H. Chen, L. Zou, and H. Wang, “High-quality MR fingerprinting reconstruction using structured low-rank matrix completion and subspace projection,” IEEE Trans. Med. Imaging, vol. 41, no. 5, pp. 1150–1164, 2022.
- [21] P. Virtue, S. X. Yu, and M. Lustig, “Better than real: Complex-valued neural nets for MRI fingerprinting,” in Proc. IEEE Int. Conf. Image Process., 2017, pp. 3953–3957.
- [22] O. Cohen, B. Zhu, and M. S. Rosen, “MR fingerprinting Deep RecOnstruction NEtwork (DRONE),” Magn. Reson. Med., vol. 80, no. 3, pp. 885–894, 2018.
- [23] Z. Fang, Y. Chen, M. Liu, L. Xiang, Q. Zhang, Q. Wang, W. Lin, and D. Shen, “Deep learning for fast and spatially constrained tissue quantification from highly accelerated data in magnetic resonance fingerprinting,” IEEE Trans. Med. Imaging, vol. 38, no. 10, pp. 2364–2374, 2019.
- [24] F. Balsiger, A. Jungo, O. Scheidegger, P. G. Carlier, M. Reyes, and B. Marty, “Spatially regularized parametric map reconstruction for fast magnetic resonance fingerprinting,” Med. Image Anal., vol. 64, p. 101741, 2020.
- [25] Y. Chen, Z. Fang, S.-C. Hung, W.-T. Chang, D. Shen, and W. Lin, “High-resolution 3D MR fingerprinting using parallel imaging and deep learning,” NeuroImage, vol. 206, p. 116329, 2020.
- [26] M. Golbabaee, G. Buonincontri, C. M. Pirkl, M. I. Menzel, B. H. Menze, M. Davies, and P. A. Gómez, “Compressive MRI quantification using convex spatiotemporal priors and deep encoder-decoder networks,” Med. Image Anal., vol. 69, p. 101945, 2021.
- [27] H. Lu, H. Ye, and B. Zhao, “Improved balanced steady-state free precession based MR fingerprinting with deep autoencoders,” in Proc. IEEE Eng. Med. Bio. Conf., 2022, pp. 3029–3034.
- [28] D. Chen, M. E. Davies, and M. Golbabaee, “Deep unrolling for magnetic resonance fingerprinting,” in Proc. IEEE Int. Symp. Biomed. Imaging, 2022, pp. 1–4.
- [29] F. Cheng, Y. Liu, Y. Chen, and P.-T. Yap, “High-resolution 3D magnetic resonance fingerprinting with a graph convolutional network,” IEEE Trans. Med. Imaging, vol. 42, no. 3, pp. 674–683, 2023.
- [30] B. Zhao, J. P. Haldar, A. G. Christodoulou, and Z.-P. Liang, “Image reconstruction from highly undersampled (k, t)-space data with joint partial separability and sparsity constraints,” IEEE Trans. Med. Imaging, vol. 31, no. 9, pp. 1809–1820, 2012.
- [31] B. Zhao, W. Lu, T. K. Hitchens, F. Lam, C. Ho, and Z.-P. Liang, “Accelerated MR parameter mapping with low-rank and sparsity constraints,” Magn. Reson. Med., vol. 74, no. 2, pp. 489–498, 2015.
- [32] M. V. Afonso, J. M. Bioucas-Dias, and M. A. T. Figueiredo, “Fast image recovery using variable splitting and constrained optimization,” IEEE Trans. Image Process., vol. 19, no. 9, pp. 2345–2356, 2010.
- [33] S. Ramani and J. A. Fessler, “Parallel MR image reconstruction using augmented lagrangian methods,” IEEE Trans. Med. Imaging, vol. 30, no. 3, pp. 694–706, 2011.
- [34] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends Mach. Learn., vol. 3, pp. 1–122, 2011.
- [35] H. Lu and B. Zhao, “Accelerated magnetic resonance fingerprinting with low-rank and generative subspace modeling,” in Proc. IEEE Int. Conf. Acoust. Speech Signal Process. Workshop (ICASSPW), 2023, in press.
- [36] C. M. Sandino, F. Ong, and S. S. Vasanawala, “Deep subspace learning: Enhancing speed and scalability of deep learning-based reconstruction of dynamic imaging data,” in Proc. Int. Symp. Magn. Reson. Med., 2020, p. 0599.
- [37] Z. Chen, Y. Chen, Y. Xie, D. Li, and A. G. Christodoulou, “Data-consistent non-cartesian deep subspace learning for efficient dynamic MR image reconstruction,” in Proc. IEEE Int. Symp. Biomed. Imaging, 2022, pp. 1–5.
- [38] M. Blumenthal, X. Wang, and M. Uecker, “Deep subspace learning for improved T1 mapping using single-shot inversion-recovery radial FLASH,” in Proc. Int. Symp. Magn. Reson. Med., 2022, p. 0241.
- [39] A. H. Ahmed, Q. Zou, P. Nagpal, and M. Jacob, “Dynamic imaging using deep bi-linear unsupervised representation (DEBLUR),” IEEE Trans. Med. Imaging, vol. 41, no. 10, pp. 2693–2703, 2022.
- [40] F. Bloch, “Nuclear induction,” Phys. Rev., vol. 70, pp. 460–474, 1946.
- [41] Z.-P. Liang, “Spatiotemporal imaging with partially separable functions,” in Proc. IEEE Int. Symp. Biomed. Imaging, 2007, pp. 988–991.
- [42] B. Recht, M. Fazel, and P. A. Parrilo, “Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization,” SIAM Rev., vol. 52, no. 3, pp. 471–501, 2010.
- [43] B. Zhao, J. P. Haldar, C. Brinegar, and Z.-P. Liang, “Low rank matrix recovery for real-time cardiac MRI,” in Proc. IEEE Int. Symp. Biomed. Imaging, 2010, pp. 996–999.
- [44] J. P. Haldar and Z.-P. Liang, “Spatiotemporal imaging with partially separable functions: A matrix recovery approach,” in Proc. IEEE Int. Symp. Biomed. Imaging, 2010, pp. 716–719.
- [45] H. Lu, H. Ye, and B. Zhao, “Improved low-rank and subspace reconstruction for magnetic resonance fingerprinting with self-navigating acquisitions,” in Proc. Int. Symp. Magn. Reson. Med., 2023, p. 427.
- [46] A. Bora, A. Jalal, E. Price, and A. G. Dimakis, “Compressed sensing using generative models,” in Proc. Int. Conf. Mach. Learn., 2017, pp. 537–546.
- [47] D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Deep image prior,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2018.
- [48] R. Heckel and P. Hand, “Deep decoder: Concise image representations from untrained non-convolutional networks,” in Proc. Intl. Conf. Learn. Represent., 2019.
- [49] M. Z. Darestani and R. Heckel, “Accelerated MRI with un-trained neural networks,” IEEE Trans. Comput. Imaging, vol. 7, pp. 724–733, 2021.
- [50] J. Nocedal and S. J. Wright, Numerical Optimization, 2nd ed. New York, NY: Springer, 2006.
- [51] Y. Jiang, D. Ma, N. Seiberlich, V. Gulani, and M. A. Griswold, “MR fingerprinting using fast imaging with steady state precession (FISP) with spiral readout,” Magn. Reson. Med., vol. 74, no. 6, pp. 1621–1631, 2015.
- [52] D. S. Bernstein, Matrix Mathematics: Theory, Facts, and Formulas with Application to Linear Systems Theory. Princeton, NJ: Princeton Univ. Press, 2005.
- [53] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in Proc. Int. Conf. Learn. Represent., 2015.
- [54] D. L. Collins, A. P. Zijdenbos, V. Kollokian, J. G. Sled, N. J. Kabani, C. J. Holmes, and A. C. Evans, “Design and construction of a realistic digital brain phantom,” IEEE Trans. Med. Imaging, vol. 17, no. 3, pp. 463–468, 1998.
- [55] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Interv., 2015, pp. 234–241.
- [56] J. Liu, Y. Sun, X. Xu, and U. S. Kamilov, “Image restoration using total variation regularized deep image prior,” in Proc. IEEE Int. Conf. Acoust. Speech Signal Process., 2019, pp. 7715–7719.
- [57] Z. Sun, F. Latorre, T. Sanchez, and V. Cevher, “A plug-and-play deep image prior,” in Proc. IEEE Int. Conf. Acoust. Speech Signal Process., 2021, pp. 8103–8107.
- [58] O. Leong and W. Sakla, “Low shot learning with untrained neural networks for imaging inverse problem,” in Proc. Adv. Neural Inf. Process. Syst. Workshop (NeurIPSW), 2019.
- [59] E. Ghadimi, A. Teixeira, I. Shames, and M. Johansson, “Optimal Parameter Selection for the Alternating Direction Method of Multipliers (ADMM): Quadratic Problems,” IEEE Trans. Autom. Control, vol. 60, no. 3, pp. 644–658, 2015.
- [60] R. Zhang, A. Shapiro, and Y. Xie, “Statistical rank selection for incomplete low-rank matrices,” in Proc. IEEE Int. Conf. Acoust. Speech Signal Process., 2019, pp. 2912–2916.
- [61] B. Zhao, J. P. Haldar, C. Liao, D. Ma, Y. Jiang, M. A. Griswold, K. Setsompop, and L. L. Wald, “Optimal experiment design for magnetic resonance fingerprinting: Cramér-Rao bound meets spin dynamics,” IEEE Trans. Med. Imaging, vol. 38, no. 3, pp. 844–861, 2019.
- [62] J. Assländer, R. Lattanzi, D. K. Sodickson, and M. A. Cloos, “Optimized quantification of spin relaxation times in the hybrid state,” Magn. Reson. Med., vol. 82, no. 4, pp. 1385–1397, 2019.
- [63] S. P. Jordan, S. Hu, I. Rozada, D. F. McGivney, R. Boyacioğlu, D. C. Jacob, S. Huang, M. Beverland, H. G. Katzgraber, M. Troyer, M. A. Griswold, and D. Ma, “Automated design of pulse sequences for magnetic resonance fingerprinting using physics-inspired optimization,” Proc. Natl. Acad. Sci. U.S.A., vol. 118, no. 40, p. e2020516118, 2021.
- [64] H. Ye, D. Ma, Y. Jiang, S. F. Cauley, Y. Du, L. L. Wald, M. A. Griswold, and K. Setsompop, “Accelerating magnetic resonance fingerprinting (MRF) using t-blipped simultaneous multislice (SMS) acquisition,” Magn. Reson. Med., vol. 75, no. 5, pp. 2078–2085, 2016.
- [65] B. Zhao, B. Bilgic, E. Adalsteinsson, M. A. Griswold, L. L. Wald, and K. Setsompop, “Simultaneous multislice magnetic resonance fingerprinting with low-rank and subspace modeling,” in Proc. IEEE Eng. Med. Bio. Conf., 2017, pp. 3264–3268.
- [66] C. Liao, B. Bilgic, M. K. Manhard, B. Zhao, X. Cao, J. Zhong, L. L. Wald, and K. Setsompop, “3D MR fingerprinting with accelerated stack-of-spirals and hybrid sliding-window and GRAPPA reconstruction,” NeuroImage, vol. 162, pp. 13–22, 2017.
- [67] H. Lu and B. Zhao, “Accelerated magnetic resonance parameter mapping with low-rank modeling and deep generative priors,” in Proc. IEEE Stat. Signal Process. Workshop, 2023, in press.