
Accelerated MR Fingerprinting with Low-Rank and Generative Subspace Modeling

Hengfa Lu, Student Member, IEEE, Huihui Ye, Lawrence L. Wald, and Bo Zhao, Member, IEEE

This work was supported in part by the National Institutes of Health under Grants NIH-R00-EB027181 and NIH-R01-EB017219. H. Lu is with the Department of Biomedical Engineering, University of Texas at Austin, Austin, TX 78712 USA (e-mail: [email protected]). H. Ye is with the State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, Zhejiang, China, and also with the Center for Brain Imaging Science and Technology, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, Zhejiang, China (e-mail: [email protected]). L. L. Wald is with the Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129 USA, also with the Department of Radiology, Harvard Medical School, Boston, MA 02115 USA, and also with the Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA 02139 USA (e-mail: [email protected]). B. Zhao is with the Department of Biomedical Engineering and the Oden Institute for Computational Engineering and Sciences, University of Texas at Austin, Austin, TX 78712 USA (e-mail: [email protected]).
Abstract

Magnetic Resonance (MR) Fingerprinting is an emerging multi-parametric quantitative MR imaging technique, for which image reconstruction methods utilizing low-rank and subspace constraints have achieved state-of-the-art performance. However, this class of methods often suffers from an ill-conditioned model-fitting issue, which degrades the performance as the data acquisition length becomes short and/or the signal-to-noise ratio becomes low. To address this problem, we present a new image reconstruction method for MR Fingerprinting, integrating low-rank and subspace modeling with a deep generative prior. Specifically, the proposed method captures the strong spatiotemporal correlation of contrast-weighted time-series images in MR Fingerprinting via a low-rank factorization. Further, it utilizes an untrained convolutional generative neural network to represent the spatial subspace of the low-rank model, while estimating the temporal subspace of the model from simulated magnetization evolutions generated based on spin physics. Here the architecture of the generative neural network serves as an effective regularizer for the ill-conditioned inverse problem without additional spatial training data that are often expensive to acquire. The proposed formulation results in a non-convex optimization problem, for which we develop an algorithm based on variable splitting and the alternating direction method of multipliers. We evaluate the performance of the proposed method with numerical simulations and in vivo experiments and demonstrate that the proposed method outperforms the state-of-the-art low-rank and subspace reconstruction.

Index Terms:
Generative model, low-rank model, subspace model, unsupervised learning, deep learning, quantitative MRI.

I Introduction

Magnetic Resonance (MR) Fingerprinting is an emerging quantitative MR imaging technique, which enables the simultaneous quantification of multiple MR tissue parameters (e.g., $T_{1}$, $T_{2}$, and proton density) in a single experiment [1]. MR Fingerprinting features an innovative transient-state signal excitation strategy. Together with highly-accelerated non-Cartesian acquisitions, it has enabled much faster imaging speeds than conventional quantitative MR imaging techniques. Since the invention of the technique, MR Fingerprinting has found a number of promising applications, including neuroimaging [2, 3], cardiovascular imaging [4, 5], cancer imaging [6, 7], and musculoskeletal imaging [8].

The original proof-of-principle MR Fingerprinting reconstruction employs a direct pattern matching scheme [1]. Although this method is simple and computationally efficient, it has been shown to be sub-optimal from a statistical estimation perspective [9]. Various more advanced image reconstruction methods, including model-based [10, 11, 12, 13, 9, 14, 15, 16, 17, 18, 19, 20] or learning-based methods [21, 22, 23, 24, 25, 26, 27, 28, 29], have been introduced over the past few years, which have significantly improved the accuracy and/or reduced acquisition times for MR Fingerprinting.

As a class of state-of-the-art image reconstruction methods, the low-rank reconstruction [12, 14, 17, 15, 16, 18, 19] utilizes a linear low-dimensional model to capture the strong spatiotemporal correlation of contrast-weighted time-series images in MR Fingerprinting. A subspace constraint can be further incorporated into the low-rank model by pre-estimating the temporal subspace of the model from an ensemble of magnetization evolutions simulated based on spin physics [15, 16, 19]. While the low-rank and subspace reconstruction significantly outperforms the conventional MR Fingerprinting reconstruction, its performance often degrades as the data acquisition length becomes short or the signal-to-noise ratio (SNR) becomes low, due to an ill-conditioned model-fitting issue arising from the undersampling of the temporal subspace of the low-rank model [30, 31].

In this paper, we introduce a new deep learning-based method to improve the low-rank and subspace reconstruction for MR Fingerprinting. Specifically, the proposed method utilizes an untrained generative convolutional neural network to represent the spatial subspace of the low-rank model, while estimating the temporal subspace directly from an ensemble of magnetization evolutions simulated based on spin physics. Here, with the inductive bias of a convolutional neural network architecture, the untrained neural network serves as an effective regularizer for the image reconstruction problem without requiring any additional training data, which are often very expensive to collect in MR Fingerprinting. To solve the resulting optimization problem, we develop an algorithm based on variable splitting [32, 33] and the alternating direction method of multipliers (ADMM) [34, 33]. We evaluate the proposed method with numerical simulations and in vivo experiments and demonstrate its improvement over the state-of-the-art low-rank and subspace reconstruction. A preliminary account of this work was described in our earlier conference paper [35].

A number of image reconstruction methods using deep learning to improve the low-rank and subspace reconstruction have recently been introduced [36, 37, 38, 39]. Note that the proposed method has key differences from these existing methods. First, the proposed method is an unsupervised learning method that utilizes the inductive bias of untrained neural networks as a regularizer. It is different from the supervised learning-based reconstruction methods [36, 37, 38], which require a relatively large set of training data. Moreover, the proposed method is also distinct from a recent unsupervised learning-based reconstruction method [39]. Specifically, the proposed method pre-determines the temporal subspace of the low-rank model, whereas the above method simultaneously estimates the spatial and temporal subspaces with a deep bi-linear model. In the context of quantitative MRI (e.g., MR Fingerprinting), the temporal subspace is often much easier to determine based on spin physics than in dynamic imaging, and a pre-determined temporal subspace results in a simplified image reconstruction problem. In addition, the proposed algorithm is rather different from the one in [39].

The rest of the paper is organized as follows. Section II describes the proposed method, including the problem formulation and solution algorithm. Section III includes representative results to illustrate the performance of the proposed method. Section IV contains the discussion, followed by the concluding remarks in Section V.

II Proposed Method

II-A Problem Formulation

MR Fingerprinting experiments involve acquiring a sequence of contrast-weighted MR images $\{I_{m}(\mathbf{x})\}_{m=1}^{M}$, in which each image $I_{m}(\mathbf{x})$ can be parameterized as

$I_{m}(\mathbf{x})=\rho(\mathbf{x})\,\phi_{m}\!\left(T_{1}(\mathbf{x}),T_{2}(\mathbf{x})\right),$ (1)

for $m=1,\cdots,M$, where $\rho(\mathbf{x})$ denotes the proton density, $T_{1}(\mathbf{x})$ denotes the spin-lattice relaxation time, $T_{2}(\mathbf{x})$ denotes the spin-spin relaxation time, and $\phi_{m}(\cdot)$ denotes the contrast-weighting function at the $m$th repetition time (TR), which is determined by the magnetization dynamics governed by the Bloch equation [40].

In this work, we consider a discrete image model which assumes that each contrast-weighted image $I_{m}(\mathbf{x})$ can be completely represented by its values at $N$ spatial locations $\left\{\mathbf{x}_{n}\right\}_{n=1}^{N}$. Accordingly, we can represent the contrast-weighted time-series images $\left\{I_{m}(\mathbf{x})\right\}_{m=1}^{M}$ by the following Casorati matrix [41]:

$\mathbf{C}=\left[\begin{array}{ccc}I_{1}\left(\mathbf{x}_{1}\right)&\cdots&I_{M}\left(\mathbf{x}_{1}\right)\\ \vdots&\ddots&\vdots\\ I_{1}\left(\mathbf{x}_{N}\right)&\cdots&I_{M}\left(\mathbf{x}_{N}\right)\end{array}\right]\in\mathbb{C}^{N\times M}.$
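As a toy numerical illustration (all dimensions here are hypothetical, not those of an actual acquisition), the spatiotemporal correlation that motivates the low-rank model can be sketched by building a Casorati matrix whose voxel time courses are mixtures of a few temporal basis functions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): N spatial locations, M TRs, true rank L.
N, M, L = 64, 32, 4

# Simulate spatiotemporally correlated images as a rank-L product:
# each voxel's time course is a mixture of L temporal basis functions.
U_true = rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))
V_true = rng.standard_normal((L, M)) + 1j * rng.standard_normal((L, M))

# Casorati matrix: row n holds the time course of voxel x_n,
# column m holds the vectorized m-th contrast-weighted image.
C = U_true @ V_true

s = np.linalg.svd(C, compute_uv=False)
print(np.linalg.matrix_rank(C))  # → 4 (only L significant singular values)
```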

In MR Fingerprinting, the imaging equation can be written as

$\mathbf{d}_{m,c}=\mathbf{F}_{m}\mathbf{S}_{c}\mathbf{I}_{m}+\mathbf{n}_{m,c},$ (2)

for $m=1,\cdots,M$ and $c=1,\cdots,N_{c}$. Here $\mathbf{I}_{m}\in\mathbb{C}^{N}$ denotes the contrast-weighted MR image at the $m$th TR, $\mathbf{F}_{m}\in\mathbb{C}^{P_{m}\times N}$ denotes the undersampled Fourier-encoding matrix at the $m$th TR, $\mathbf{S}_{c}\in\mathbb{C}^{N\times N}$ is a diagonal matrix whose diagonal entries contain the coil sensitivities of the $c$th coil, $\mathbf{d}_{m,c}\in\mathbb{C}^{P_{m}}$ denotes the measured $\mathbf{k}$-space data from the $c$th receiver coil at the $m$th TR, and $\mathbf{n}_{m,c}\in\mathbb{C}^{P_{m}}$ denotes the measurement noise, which is assumed to be complex Gaussian.

For notational simplicity, we can rewrite (2) in a more compact form as follows:

$\mathbf{d}_{c}=\Omega\left(\mathbf{F}\mathbf{S}_{c}\mathbf{C}\right)+\mathbf{n}_{c},$ (3)

where $\mathbf{d}_{c}\in\mathbb{C}^{P}$ contains all the measured $\mathbf{k}$-space data from the $c$th coil, $\mathbf{F}\in\mathbb{C}^{\tilde{N}\times N}$ denotes the fully-sampled Fourier encoding matrix, $\Omega(\cdot):\mathbb{C}^{\tilde{N}\times M}\rightarrow\mathbb{C}^{P}$ denotes the undersampling operator that acquires the $\mathbf{k}$-space data for each contrast-weighted image and then concatenates them into the data vector $\mathbf{d}_{c}$, $\mathbf{n}_{c}\in\mathbb{C}^{P}$ contains the measurement noise from the $c$th coil, and $P=\sum_{m=1}^{M}P_{m}$.

In MR Fingerprinting experiments, $\mathbf{k}$-space data are highly undersampled, i.e., $P\ll NM$. A significant technical challenge arises in reconstructing contrast-weighted time-series images from such highly-undersampled data. To ensure high-quality reconstruction, we impose a low-rank constraint on $\mathbf{C}$, i.e., $\mathrm{rank}(\mathbf{C})\leq L\ll\min\left\{N,M\right\}$, by exploiting the strong spatiotemporal correlation of contrast-weighted images in MR Fingerprinting. The low-rank constraint reduces the number of degrees of freedom in $\mathbf{C}$ to $L(M+N-L)$ [42], which is often much less than the total number of entries of $\mathbf{C}$. This enables image reconstruction from highly-undersampled $\mathbf{k}$-space data.

While there are many ways of imposing a low-rank constraint [42], here we use a low-rank matrix factorization [43, 44, 30], i.e., $\mathbf{C}=\mathbf{U}\mathbf{V}$, where $\mathbf{U}\in\mathbb{C}^{N\times L}$ and $\mathbf{V}\in\mathbb{C}^{L\times M}$. Note that the columns of $\mathbf{U}$ and the rows of $\mathbf{V}$ respectively span the spatial subspace and the temporal subspace of $\mathbf{C}$. Further, we can incorporate a subspace constraint by pre-estimating $\mathbf{V}$ from an ensemble of magnetization evolutions simulated based on spin physics [15] or from physically-acquired navigator data [45].
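A minimal sketch of this subspace pre-estimation step is given below. A toy dictionary of inversion-recovery exponentials stands in for full Bloch-simulated magnetization evolutions; the TR grid, $T_1$ range, and signal model here are illustrative assumptions, not the actual sequence parameters:

```python
import numpy as np

# Toy stand-in for Bloch-simulated magnetization evolutions: inversion-recovery
# curves 1 - 2*exp(-t/T1) over a grid of T1 values (a real MRF dictionary would
# come from full Bloch/EPG simulation of the acquisition sequence).
M, L = 400, 6                      # acquisition length and model rank
t = np.arange(1, M + 1) * 10.0     # hypothetical sampling times in ms
T1_grid = np.linspace(100.0, 3000.0, 500)
D = 1.0 - 2.0 * np.exp(-t[None, :] / T1_grid[:, None])   # (dictionary size, M)

# Temporal subspace: leading L right singular vectors of the dictionary.
_, svals, Vh = np.linalg.svd(D, full_matrices=False)
V_hat = Vh[:L, :]                  # rows span the temporal subspace (L x M)

# Relative energy captured by the rank-L subspace (close to 1 for this toy).
energy = np.sum(svals[:L] ** 2) / np.sum(svals ** 2)
print(energy)
```

The rows of `V_hat` are orthonormal, which is also the property used later when solving the spatial-subspace update in closed form.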


Figure 1: Architecture of the generative neural network in the proposed method, which consists of convolutional, up-sampling, ReLU, and batch normalization layers.

While the low-rank and subspace reconstruction significantly outperforms the dictionary-matching based conventional reconstruction [1], it often suffers from an ill-conditioned model-fitting issue, due to the undersampling of the temporal subspace of the model. This degrades the performance of the low-rank and subspace reconstruction [15] as the data acquisition length becomes short and/or the SNR becomes low.

To address this issue, we incorporate a deep generative prior into the low-rank and subspace reconstruction. While the effectiveness of a pre-trained deep generative prior [46] has been demonstrated for solving inverse problems, it often requires large training data sets, which are often very expensive to acquire in MR Fingerprinting. Recently, it has been demonstrated that with the inductive bias of a deep convolutional architecture, an untrained generative neural network (e.g., deep image prior [47] and deep decoder [48, 49]) can serve as an effective spatial prior. Here we use an untrained generative neural network to represent the spatial subspace of the low-rank model, which serves as a regularizer for the low-rank and subspace reconstruction. Mathematically, we have $\mathbf{U}=\mathbf{G}_{\boldsymbol{\theta}}(\mathbf{z})$, where $\mathbf{G}_{\boldsymbol{\theta}}(\mathbf{z}):\mathbb{R}^{N_{0}}\rightarrow\mathbb{R}^{N\times L}$ is a generative neural network, $\mathbf{z}\in\mathbb{R}^{N_{0}}$ is a low-dimensional latent random vector, and $\boldsymbol{\theta}$ contains the trainable parameters of the network. We illustrate an example generative convolutional neural network architecture in Fig. 1, which consists of convolutional, up-sampling, ReLU, and batch normalization layers.
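The following is a simplified, deep-decoder-style sketch of such an untrained generator in NumPy. It is only an illustration of the layer types, not the exact network of Fig. 1: here 1x1 channel-mixing convolutions, nearest-neighbor up-sampling, ReLU, and per-channel normalization (a crude stand-in for batch normalization) are used, and all sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)

def upsample2x(x):
    """Nearest-neighbor 2x spatial up-sampling of a (channels, H, W) tensor."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def channel_norm(x, eps=1e-5):
    """Per-channel normalization (simple stand-in for batch normalization)."""
    mu = x.mean(axis=(1, 2), keepdims=True)
    sd = x.std(axis=(1, 2), keepdims=True)
    return (x - mu) / (sd + eps)

def generator(z, weights):
    """Map a low-resolution latent tensor z to L spatial-subspace images."""
    x = z
    for W in weights[:-1]:
        x = np.einsum('oc,chw->ohw', W, x)   # 1x1 convolution: mix channels
        x = upsample2x(x)                     # up-sampling layer
        x = np.maximum(x, 0.0)                # ReLU
        x = channel_norm(x)                   # normalization layer
    return np.einsum('oc,chw->ohw', weights[-1], x)  # final 1x1 convolution

# Untrained network: random latent input and randomly initialized weights.
L, C = 6, 16
z = rng.standard_normal((C, 8, 8))                     # latent tensor
weights = [rng.standard_normal((C, C)) * 0.1 for _ in range(3)]
weights.append(rng.standard_normal((L, C)) * 0.1)      # output: L channels

U = generator(z, weights)      # basis images for the spatial subspace
print(U.shape)                 # → (6, 64, 64)
```

In the proposed method it is the parameters `weights` (i.e., $\boldsymbol{\theta}$) that are optimized during reconstruction, while `z` stays fixed; the architecture itself supplies the regularization.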

Incorporating the low-rank and subspace model with the deep generative prior, the image reconstruction problem can be formulated as follows:

$\left\{\hat{\mathbf{U}},\hat{\boldsymbol{\theta}}\right\}=\arg\min_{\mathbf{U},\boldsymbol{\theta}}\sum_{c=1}^{N_{c}}\left\|\mathbf{d}_{c}-\Omega\left(\mathbf{F}\mathbf{S}_{c}\mathbf{U}\hat{\mathbf{V}}\right)\right\|_{2}^{2},\quad\textrm{s.t.}\quad\mathbf{U}=\mathbf{G}_{\boldsymbol{\theta}}(\mathbf{z}),$ (4)

where $\hat{\mathbf{V}}$ denotes the temporal subspace of the low-rank model, pre-estimated by singular value decomposition. After solving (4), we reconstruct the contrast-weighted time-series images as $\hat{\mathbf{C}}=\hat{\mathbf{U}}\hat{\mathbf{V}}$, from which we then estimate the MR tissue parameters via voxel-wise pattern matching. Next, we describe an optimization algorithm for solving (4).
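The final voxel-wise pattern matching step can be sketched as follows. A hypothetical one-parameter ($T_1$-only) exponential dictionary stands in for the Bloch-simulated $(T_1, T_2)$ dictionary; the match maximizes the normalized inner product between a voxel's reconstructed time course and the dictionary atoms, and the proton density follows from the projection coefficient:

```python
import numpy as np

# Toy voxel-wise pattern matching on a one-parameter (T1-only) dictionary;
# an actual MRF dictionary is indexed by (T1, T2) pairs from Bloch simulation.
M = 200
t = np.arange(1, M + 1) * 10.0
T1_grid = np.arange(100.0, 3001.0, 10.0)
D = 1.0 - 2.0 * np.exp(-t[None, :] / T1_grid[:, None])     # dictionary atoms
D_norm = D / np.linalg.norm(D, axis=1, keepdims=True)

# A "reconstructed" voxel time course: proton density * evolution for T1 = 820 ms.
rho_true, T1_true = 1.7, 820.0
c = rho_true * (1.0 - 2.0 * np.exp(-t / T1_true))

# Match: maximize |<c, d>| over normalized atoms, then read off T1 and rho.
scores = np.abs(D_norm @ np.conj(c))
idx = int(np.argmax(scores))
T1_est = T1_grid[idx]
rho_est = (D_norm[idx] @ np.conj(c)) / np.linalg.norm(D[idx])
print(T1_est)   # → 820.0
```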

II-B Solution Algorithm

The proposed formulation in (4) results in a non-convex optimization problem. Here we describe an algorithm based on variable splitting and ADMM [34, 33] to solve it. First, we introduce a set of auxiliary variables $\left\{\mathbf{H}_{c}\right\}_{c=1}^{N_{c}}$ to split the coil sensitivities from the objective function. This converts (4) into the following equivalent constrained optimization problem:

$\min_{\mathbf{U},\left\{\mathbf{H}_{c}\right\},\boldsymbol{\theta}}\ \sum_{c=1}^{N_{c}}\left\|\mathbf{d}_{c}-\Omega\left(\mathbf{F}\mathbf{H}_{c}\right)\right\|_{2}^{2},\quad\textrm{s.t.}\quad\mathbf{H}_{c}=\mathbf{S}_{c}\mathbf{U}\hat{\mathbf{V}}\quad\textrm{and}\quad\mathbf{U}=\mathbf{G}_{\boldsymbol{\theta}}(\mathbf{z}),$ (5)

for c=1,,Ncc=1,\cdots,N_{c}.

Second, we form the augmented Lagrangian associated with (5) as follows:

$\mathcal{L}\left(\left\{\mathbf{H}_{c}\right\},\mathbf{U},\boldsymbol{\theta},\left\{\boldsymbol{\Lambda}_{c}\right\},\boldsymbol{\Gamma}\right)=\sum_{c=1}^{N_{c}}\Big\{\left\|\mathbf{d}_{c}-\Omega\left(\mathbf{F}\mathbf{H}_{c}\right)\right\|_{2}^{2}+\mathrm{Re}\left(\left\langle\boldsymbol{\Lambda}_{c},\mathbf{H}_{c}-\mathbf{S}_{c}\mathbf{U}\hat{\mathbf{V}}\right\rangle\right)+\frac{\mu_{1}}{2}\left\|\mathbf{H}_{c}-\mathbf{S}_{c}\mathbf{U}\hat{\mathbf{V}}\right\|_{F}^{2}\Big\}+\mathrm{Re}\left(\left\langle\boldsymbol{\Gamma},\mathbf{U}-\mathbf{G}_{\boldsymbol{\theta}}(\mathbf{z})\right\rangle\right)+\frac{\mu_{2}}{2}\left\|\mathbf{U}-\mathbf{G}_{\boldsymbol{\theta}}(\mathbf{z})\right\|_{F}^{2},$ (6)

where $\left\{\boldsymbol{\Lambda}_{c}\right\}_{c=1}^{N_{c}}$ and $\boldsymbol{\Gamma}$ are the Lagrange multipliers, $\mu_{1},\mu_{2}>0$ are the penalty parameters, and the operator $\mathrm{Re}(\cdot)$ takes the real part of a complex number.

Next, we iteratively minimize (6) by solving the following three subproblems:

$\mathbf{H}_{c}^{(k+1)}=\arg\min_{\left\{\mathbf{H}_{c}\right\}}\mathcal{L}\left(\left\{\mathbf{H}_{c}\right\},\mathbf{U}^{(k)},\boldsymbol{\theta}^{(k)},\{\boldsymbol{\Lambda}_{c}^{(k)}\},\boldsymbol{\Gamma}^{(k)}\right),$ (7)
$\mathbf{U}^{(k+1)}=\arg\min_{\mathbf{U}}\mathcal{L}\left(\{\mathbf{H}_{c}^{(k+1)}\},\mathbf{U},\boldsymbol{\theta}^{(k)},\{\boldsymbol{\Lambda}_{c}^{(k)}\},\boldsymbol{\Gamma}^{(k)}\right),$ (8)
$\boldsymbol{\theta}^{(k+1)}=\arg\min_{\boldsymbol{\theta}}\mathcal{L}\left(\{\mathbf{H}_{c}^{(k+1)}\},\mathbf{U}^{(k+1)},\boldsymbol{\theta},\{\boldsymbol{\Lambda}_{c}^{(k)}\},\boldsymbol{\Gamma}^{(k)}\right),$ (9)

and updating the Lagrange multipliers as follows:

$\boldsymbol{\Lambda}_{c}^{(k+1)}=\boldsymbol{\Lambda}_{c}^{(k)}+\mu_{1}\left(\mathbf{H}_{c}^{(k+1)}-\mathbf{S}_{c}\mathbf{U}^{(k+1)}\hat{\mathbf{V}}\right),$ (10)
$\boldsymbol{\Gamma}^{(k+1)}=\boldsymbol{\Gamma}^{(k)}+\mu_{2}\left(\mathbf{U}^{(k+1)}-\mathbf{G}_{\boldsymbol{\theta}^{(k+1)}}(\mathbf{z})\right),$ (11)

until the change of the solution is less than a pre-specified tolerance or the maximum number of iterations is reached. Next, we describe the detailed solutions to the subproblems in (7) - (9).

1) Solution to (7): Note that the subproblem in (7) can be written as

$\min_{\{\mathbf{H}_{c}\}}\sum_{c=1}^{N_{c}}\left\{\left\|\mathbf{d}_{c}-\Omega\left(\mathbf{F}\mathbf{H}_{c}\right)\right\|_{2}^{2}+\frac{\mu_{1}}{2}\left\|\mathbf{H}_{c}-\mathbf{Q}_{c}^{(k)}\right\|_{F}^{2}\right\},$ (12)

where $\mathbf{Q}_{c}^{(k)}=\mathbf{S}_{c}\mathbf{U}^{(k)}\hat{\mathbf{V}}-\boldsymbol{\Lambda}_{c}^{(k)}/\mu_{1}$. This is a linear least-squares problem that is separable for each $\mathbf{H}_{c}$, i.e.,

$\min_{\mathbf{H}_{c}}\left\|\mathbf{d}_{c}-\Omega\left(\mathbf{F}\mathbf{H}_{c}\right)\right\|_{2}^{2}+\frac{\mu_{1}}{2}\left\|\mathbf{H}_{c}-\mathbf{Q}_{c}^{(k)}\right\|_{F}^{2},$ (13)

for $c=1,\cdots,N_{c}$. Further, it can be shown that (13) is separable for each column of $\mathbf{H}_{c}$, i.e.,

$\min_{\mathbf{h}_{m,c}}\left\|\mathbf{d}_{m,c}-\mathbf{F}_{m}\mathbf{h}_{m,c}\right\|_{2}^{2}+\frac{\mu_{1}}{2}\left\|\mathbf{h}_{m,c}-\mathbf{q}_{m,c}^{(k)}\right\|_{2}^{2},$ (14)

for $m=1,\cdots,M$, where $\mathbf{h}_{m,c}$ and $\mathbf{q}_{m,c}^{(k)}$ are the $m$th columns of $\mathbf{H}_{c}$ and $\mathbf{Q}_{c}^{(k)}$, respectively. The solution to (14) satisfies the normal equation

$\left(\mathbf{F}_{m}^{H}\mathbf{F}_{m}+\frac{\mu_{1}}{2}\mathds{1}_{N}\right)\mathbf{h}_{m,c}=\mathbf{F}_{m}^{H}\mathbf{d}_{m,c}+\frac{\mu_{1}}{2}\mathbf{q}_{m,c}^{(k)},$ (15)

where $\mathds{1}_{N}\in\mathbb{C}^{N\times N}$ is the identity matrix. While (15) can be solved iteratively (e.g., with the conjugate gradient algorithm [50]), we can solve it more efficiently with a direct matrix inversion, given that the same coefficient matrix $(\mathbf{F}_{m}^{H}\mathbf{F}_{m}+\mu_{1}\mathds{1}_{N}/2)$ is shared by a number of linear systems of equations at different iterations of the algorithm.¹

¹In standard MR Fingerprinting acquisitions (e.g., [1, 51]), the same $\mathbf{k}$-space trajectory repeats at different TRs. Thus, the same coefficient matrix $(\mathbf{F}_{m}^{H}\mathbf{F}_{m}+\mu_{1}\mathds{1}_{N}/2)$ also occurs at different $m$ in each iteration of the algorithm.

Specifically, consider the inverse of the coefficient matrix in (15). Applying the matrix inversion lemma [52], we have

$\left(\mathbf{F}_{m}^{H}\mathbf{F}_{m}+\frac{\mu_{1}}{2}\mathds{1}_{N}\right)^{-1}=\frac{2}{\mu_{1}}\mathds{1}_{N}-\mathbf{F}_{m}^{H}\left[\left(\frac{\mu_{1}}{2}\right)^{2}\mathds{1}_{P_{m}}+\frac{\mu_{1}}{2}\mathbf{F}_{m}\mathbf{F}_{m}^{H}\right]^{-1}\mathbf{F}_{m}.$ (16)

Note that $\mathbf{F}_{m}\mathbf{F}_{m}^{H}\in\mathbb{C}^{P_{m}\times P_{m}}$ is a small-scale matrix that can be pre-computed and saved, given that the $\mathbf{k}$-space data are highly undersampled for each $\mathbf{I}_{m}$ (i.e., $P_{m}\ll\tilde{N}$). We only need to compute the matrix $\left[(\mu_{1}/2)^{2}\mathds{1}_{P_{m}}+(\mu_{1}/2)\,\mathbf{F}_{m}\mathbf{F}_{m}^{H}\right]^{-1}$ once and save it for use at later iterations of the algorithm. Consequently, applying the inverse in (16) only involves two non-uniform Fourier transforms and some small-scale matrix-vector multiplications, which can be computed very efficiently.
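A small numerical check of the identity in (16) is sketched below. The sizes are toy values, and a random row-subset of the DFT matrix stands in for the spiral Fourier-encoding operator; only a $P_m \times P_m$ matrix is ever inverted on the right-hand side:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sizes (hypothetical): N image pixels, P_m acquired k-space samples
# per TR, with P_m << N as in highly-undersampled MRF acquisitions.
N, Pm, mu1 = 64, 8, 0.5

# Stand-in for the undersampled Fourier-encoding matrix F_m: P_m random rows
# of the N-point unitary DFT matrix.
dft = np.fft.fft(np.eye(N)) / np.sqrt(N)
Fm = dft[rng.choice(N, size=Pm, replace=False), :]

# Direct inverse of the (N x N) coefficient matrix of (15).
direct = np.linalg.inv(Fm.conj().T @ Fm + (mu1 / 2) * np.eye(N))

# Matrix inversion lemma: only a small (P_m x P_m) matrix is inverted, and it
# can be precomputed once since F_m repeats across TRs and ADMM iterations.
small = np.linalg.inv((mu1 / 2) ** 2 * np.eye(Pm) + (mu1 / 2) * (Fm @ Fm.conj().T))
woodbury = (2 / mu1) * np.eye(N) - Fm.conj().T @ small @ Fm

print(np.allclose(direct, woodbury))  # → True
```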

2) Solution to (8): The subproblem in (8) can be written as

$\min_{\mathbf{U}}\ \sum_{c=1}^{N_{c}}\left\|\mathbf{S}_{c}\mathbf{U}\hat{\mathbf{V}}-\left(\mathbf{H}_{c}^{(k+1)}+\frac{1}{\mu_{1}}\boldsymbol{\Lambda}_{c}^{(k)}\right)\right\|_{F}^{2}+\frac{\mu_{2}}{\mu_{1}}\left\|\mathbf{U}-\left(\mathbf{G}_{\boldsymbol{\theta}^{(k)}}(\mathbf{z})-\frac{1}{\mu_{2}}\boldsymbol{\Gamma}^{(k)}\right)\right\|_{F}^{2},$ (17)

which can be solved by

$\left(\sum_{c=1}^{N_{c}}\mathbf{S}_{c}^{H}\mathbf{S}_{c}+\frac{\mu_{2}}{\mu_{1}}\mathds{1}_{N}\right)\mathbf{U}=\mathbf{W}^{(k+1)},$ (18)

where

$\mathbf{W}^{(k+1)}=\sum_{c=1}^{N_{c}}\mathbf{S}_{c}^{H}\left(\mathbf{H}_{c}^{(k+1)}+\frac{\boldsymbol{\Lambda}_{c}^{(k)}}{\mu_{1}}\right)\hat{\mathbf{V}}^{H}+\frac{\mu_{2}}{\mu_{1}}\mathbf{G}_{\boldsymbol{\theta}^{(k)}}(\mathbf{z})-\frac{1}{\mu_{1}}\boldsymbol{\Gamma}^{(k)}.$

Noting that the coefficient matrix $\left(\sum_{c=1}^{N_{c}}\mathbf{S}_{c}^{H}\mathbf{S}_{c}+\mu_{2}/\mu_{1}\,\mathds{1}_{N}\right)$ is diagonal, we can solve (18) very efficiently via an entry-by-entry inversion.
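The entry-by-entry solve of (18) can be sketched as follows, with toy sizes and random complex sensitivities standing in for measured coil-sensitivity maps:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy sizes (hypothetical): N voxels, L-dimensional subspace, Nc coils.
N, L, Nc, mu_ratio = 32, 4, 3, 0.1   # mu_ratio = mu2 / mu1

# Diagonal coil-sensitivity matrices S_c, stored as their diagonals.
S = rng.standard_normal((Nc, N)) + 1j * rng.standard_normal((Nc, N))
W = rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))  # RHS of (18)

# The coefficient matrix sum_c S_c^H S_c + (mu2/mu1) I is diagonal, so the
# solve reduces to dividing each row of W by one positive scalar.
diag = np.sum(np.abs(S) ** 2, axis=0) + mu_ratio        # length-N vector
U = W / diag[:, None]

# Sanity check against applying the dense diagonal matrix.
A = np.diag(diag).astype(complex)
print(np.allclose(A @ U, W))  # → True
```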

3) Solution to (9): The subproblem in (9) can be written as

$\min_{\boldsymbol{\theta}}\left\|\mathbf{G}_{\boldsymbol{\theta}}(\mathbf{z})-\left(\mathbf{U}^{(k+1)}+\frac{1}{\mu_{2}}\boldsymbol{\Gamma}^{(k)}\right)\right\|_{F}^{2}.$ (19)

This is a large-scale nonlinear optimization problem similar to a neural network training problem. Here we solve it by the Adam algorithm [53].
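For illustration, the sketch below applies the Adam update rule to a stand-in version of (19) in which the generator is linear, so the gradient is available in closed form; the actual subproblem backpropagates through the convolutional network. The learning rate of 0.01 and 300 iterations follow the settings reported in Section III-A; everything else is a hypothetical toy:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for subproblem (19): fit the parameters of a (linear, for
# illustration) generator G_theta(z) = Z @ theta to a fixed target by Adam.
Z = rng.standard_normal((40, 10))          # fixed latent/feature input
target = rng.standard_normal((40, 5))      # plays the role of U + Gamma/mu2
theta = np.zeros((10, 5))

# Adam state and hyperparameters.
lr, b1, b2, eps = 0.01, 0.9, 0.999, 1e-8
m = np.zeros_like(theta)
v = np.zeros_like(theta)

for k in range(1, 301):                    # 300 iterations
    grad = 2 * Z.T @ (Z @ theta - target)  # gradient of ||G_theta(z) - T||_F^2
    m = b1 * m + (1 - b1) * grad           # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad**2        # second-moment estimate
    m_hat = m / (1 - b1**k)                # bias corrections
    v_hat = v / (1 - b2**k)
    theta -= lr * m_hat / (np.sqrt(v_hat) + eps)

residual = np.linalg.norm(Z @ theta - target) / np.linalg.norm(target)
print(residual)
```

For this quadratic toy objective the iterate approaches the least-squares solution; in the proposed method the same update is applied to the network weights via automatic differentiation.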

For clarity, we summarize the overall procedure in Algorithm 1. Given that (5) is a non-convex optimization problem, the performance of the proposed algorithm depends on the initialization. Here we initialize $\mathbf{U}$ using the low-rank and subspace reconstruction, which is often computationally efficient. Additionally, we initialize the Lagrange multipliers $\boldsymbol{\Lambda}_{c}$ and $\boldsymbol{\Gamma}$ with zero matrices, and initialize $\boldsymbol{\theta}$ with random values. In the next section, we illustrate the performance of this initialization scheme.

Input: measured data $\{\mathbf{d}_{c}\}$, sampling operator $\Omega$, coil sensitivities $\left\{\mathbf{S}_{c}\right\}$, temporal subspace $\hat{\mathbf{V}}$, neural network $\mathbf{G}_{\boldsymbol{\theta}}(\mathbf{z})$, and penalty parameters $\mu_{1}$, $\mu_{2}$.
Initialize: $\mathbf{U}^{(0)}$, $\{\boldsymbol{\Lambda}^{(0)}_{c}\}$, $\boldsymbol{\Gamma}^{(0)}$, and $\boldsymbol{\theta}^{(0)}$.
repeat
       a. Update $\mathbf{H}_{c}^{(k+1)}$ via (15).
       b. Update $\mathbf{U}^{(k+1)}$ via (18).
       c. Update $\boldsymbol{\theta}^{(k+1)}$ via (19).
       d. Update the multipliers:
          $\boldsymbol{\Lambda}_{c}^{(k+1)}=\boldsymbol{\Lambda}_{c}^{(k)}+\mu_{1}\left(\mathbf{H}_{c}^{(k+1)}-\mathbf{S}_{c}\mathbf{U}^{(k+1)}\hat{\mathbf{V}}\right),$
          $\boldsymbol{\Gamma}^{(k+1)}=\boldsymbol{\Gamma}^{(k)}+\mu_{2}\left(\mathbf{U}^{(k+1)}-\mathbf{G}_{\boldsymbol{\theta}^{(k+1)}}(\mathbf{z})\right).$
until convergence
Output: $\hat{\mathbf{U}}$
Algorithm 1: Proposed algorithm for solving (4)

III Results

In this section, we show representative results from simulations and in vivo experiments to demonstrate the performance of the proposed method.

III-A Simulation Results

We conducted simulations on a numerical brain phantom to evaluate the proposed method. The numerical phantom simulates a single-channel MR Fingerprinting experiment with the same setup as the one in [9]. Specifically, we took the $T_{1}$, $T_{2}$, and proton density maps from the BrainWeb database [54] as the ground truth. We performed Bloch simulations to generate contrast-weighted time-series images for an inversion-recovery fast imaging with steady-state precession (IR-FISP) sequence [51], using the same set of flip angles and repetition times as in [51]. We set the field-of-view (FOV) to $300\times 300$ mm$^{2}$ and the corresponding matrix size to $256\times 256$.

We acquired highly-undersampled $\mathbf{k}$-space data using one spiral interleaf per TR; a fully-sampled data set consists of 48 spiral interleaves. We added complex white Gaussian noise to the measured data. Here we define the signal-to-noise ratio (SNR) as $\mathrm{SNR}=20\log_{10}(s/\sigma)$, where $s$ denotes the average signal intensity within a region of white matter in the first contrast-weighted image, and $\sigma$ denotes the standard deviation of the noise.
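The SNR definition above can be computed as follows; the white-matter intensities and noise level used here are hypothetical stand-ins chosen to land near the 33 dB operating point used later in this section:

```python
import numpy as np

# SNR definition used in the simulations: 20*log10(s / sigma), where s is the
# mean white-matter signal in the first contrast-weighted image (stand-in ROI).
signal_roi = np.full(100, 0.45)                  # hypothetical WM intensities
sigma = 0.01                                     # noise standard deviation
snr_db = 20 * np.log10(np.abs(signal_roi).mean() / sigma)
print(round(snr_db, 1))  # → 33.1
```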

We performed image reconstruction using the low-rank and subspace method [15] and the proposed method. Here we manually selected the rank $L=6$ for both methods for optimized performance. For the proposed method, we employed the generative convolutional neural network architecture illustrated in Fig. 1. We solved the subproblem (19) using the Adam algorithm with a learning rate of 0.01 and 300 iterations. In addition, we manually selected the penalty parameters of the ADMM algorithm for optimized performance. We implemented both reconstruction methods on a Linux server with dual Intel Xeon Platinum 8362R processors (each 2.80 GHz with 32 cores), 2.00 TB RAM, and dual NVIDIA A100 GPUs (each with 80 GB RAM), running MATLAB R2022b and the Deep Learning Toolbox.

After image reconstruction, we estimated the MR tissue parameter maps by dictionary matching. The same dictionary was used for both methods, based on the following discretization scheme: 1) the possible $T_{1}$ values covered $[100, 3000]$ ms, with an incremental step of 10 ms over $[100, 1500]$ ms and a step of 20 ms over $[1501, 3000]$ ms; and 2) the possible $T_{2}$ values covered $[20, 350]$ ms, with an incremental step of 1 ms over $[20, 200]$ ms and a step of 2 ms over $[201, 350]$ ms.
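The parameter grids for this discretization can be generated as below; the exact starting point of each coarser segment (e.g., 1510 vs. 1520 ms) is not specified in the text, so the choices here are assumptions:

```python
import numpy as np

# Dictionary grids following the discretization in Section III-A; the exact
# start of each coarser segment (e.g., 1510 vs. 1520 ms) is an assumption.
T1_grid = np.concatenate([
    np.arange(100, 1500 + 1, 10),    # 10 ms steps over [100, 1500] ms
    np.arange(1520, 3000 + 1, 20),   # 20 ms steps over (1500, 3000] ms
]).astype(float)
T2_grid = np.concatenate([
    np.arange(20, 200 + 1, 1),       # 1 ms steps over [20, 200] ms
    np.arange(202, 350 + 1, 2),      # 2 ms steps over (200, 350] ms
]).astype(float)

# Every feasible (T1, T2) pair would index one dictionary atom.
print(len(T1_grid), len(T2_grid))   # → 216 256
```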

To assess the accuracy of the reconstructed contrast-weighted MR images and parameter maps, we used the normalized root-mean-square error (NRMSE), defined as $\|\mathbf{g}-\hat{\mathbf{g}}\|_{2}/\|\mathbf{g}\|_{2}$,² where $\mathbf{g}$ and $\hat{\mathbf{g}}$ respectively denote the ground truth and the reconstruction.

²The regions associated with the background, skull, scalp, and cerebrospinal fluid were not included in the NRMSE calculation, given that they are often not regions of interest for neuroimaging studies.
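The NRMSE metric with the region-of-interest masking described in the footnote can be implemented as:

```python
import numpy as np

# NRMSE between a ground-truth map g and its reconstruction g_hat, restricted
# to a region-of-interest mask (background/skull/scalp/CSF voxels excluded).
def nrmse(g, g_hat, mask):
    g, g_hat = g[mask], g_hat[mask]
    return np.linalg.norm(g - g_hat) / np.linalg.norm(g)

rng = np.random.default_rng(4)
g = rng.uniform(0.5, 1.5, size=(8, 8))       # toy ground-truth map
mask = np.ones((8, 8), dtype=bool)           # toy ROI mask (all voxels)
assert nrmse(g, g, mask) == 0.0              # perfect reconstruction
print(nrmse(g, 1.1 * g, mask))               # → 0.1 (a 10% scaling error)
```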


Figure 2: The NRMSE of the reconstructed time-series images at different iterations of the proposed algorithm, with acquisition length $M=400$ and $\mathrm{SNR}=33$ dB.


Figure 3: (a) The NRMSE of the reconstructed MR time-series images using the low-rank and subspace reconstruction and the proposed method at different acquisition lengths. (b) Reconstructed contrast-weighted images for three representative TRs and the associated relative error maps, with acquisition length $M=400$ and $\mathrm{SNR}=33$ dB. Note that the NRMSE is labeled at the right corner of each error map.


Figure 4: The NRMSE for the reconstruction of (a) $T_{1}$, (b) $T_{2}$, and (c) proton density maps from the low-rank and subspace reconstruction and the proposed method at different acquisition lengths, with $\mathrm{SNR}=33$ dB.


Figure 5: Reconstructed $T_{1}$ maps from the low-rank and subspace reconstruction and the proposed method with two acquisition lengths, i.e., $M=300$ and $M=600$. (a) Ground-truth $T_{1}$ map. (b) Reconstructed $T_{1}$ maps and corresponding error maps with $M=300$. (c) Reconstructed $T_{1}$ maps and corresponding error maps with $M=600$. Note that the NRMSE is labeled at the right corner of each error map.

III-A1 Algorithm convergence

The proposed formulation results in a non-convex optimization problem. While there is no theoretical guarantee of convergence for the algorithm solving (4), we demonstrate its convergence empirically with numerical simulations. Specifically, we simulated an MR Fingerprinting experiment with an acquisition length $M=400$ and $\mathrm{SNR}=33$ dB, and performed image reconstruction using the proposed method. Here we initialized $\mathbf{U}$ using the low-rank and subspace reconstruction, initialized $\boldsymbol{\Lambda}_{c}$ and $\boldsymbol{\Gamma}$ with zero matrices, and initialized $\boldsymbol{\theta}$ with random values. Fig. 2 shows the NRMSE of the reconstructed time-series images using the proposed method. As can be seen, within a relatively small number of iterations, the proposed method improves the NRMSE by a factor of two compared to the initialization with the low-rank and subspace reconstruction.

III-A2 Impact of acquisition length

We evaluated the performance of the low-rank and subspace reconstruction and the proposed method at different acquisition lengths, ranging from $M=300$ to $M=700$, with $\mathrm{SNR}=33$ dB for this set of simulations. Fig. 3(a) shows the NRMSE of the reconstructed time-series images at different acquisition lengths for the two methods, and Fig. 3(b) shows three representative contrast-weighted MR images reconstructed by each. As can be seen, the proposed method substantially outperforms the low-rank and subspace method, which illustrates the benefit of incorporating a deep generative neural network as a regularizer for the low-rank and subspace reconstruction.

Fig. 4 shows the NRMSE of the T1, T2, and proton density maps reconstructed by the low-rank and subspace reconstruction and the proposed method at different acquisition lengths. As can be seen, the proposed method improves over the low-rank and subspace method for the reconstruction of the T1, T2, and proton density maps at all acquisition lengths, and the improvement for T2 becomes more significant as the acquisition length shortens.

Figs. 5, 6, and 7 show the T1, T2, and proton density maps reconstructed by the low-rank and subspace reconstruction and the proposed method at two acquisition lengths (i.e., M = 300 and M = 600). As can be seen, the proposed method outperforms the low-rank and subspace reconstruction at both acquisition lengths. At the short acquisition length (i.e., M = 300), it achieves a factor-of-two improvement in NRMSE for the T2 and proton density maps. Clearly, with a generative neural network as a regularizer, the proposed method effectively mitigates the noise amplification incurred by the ill-conditioning of the low-rank and subspace reconstruction.


Figure 6: Reconstructed T2 maps from the low-rank and subspace reconstruction and the proposed method with two acquisition lengths, i.e., M = 300 and M = 600. (a) Ground-truth T2 map. (b) Reconstructed T2 maps and corresponding error maps with M = 300. (c) Reconstructed T2 maps and corresponding error maps with M = 600. Note that the NRMSE is labeled at the right corner of each error map.


Figure 7: Reconstructed proton density maps from the low-rank and subspace reconstruction and the proposed method with two acquisition lengths, i.e., M = 300 and M = 600. (a) Ground-truth proton density map. (b) Reconstructed proton density maps and corresponding error maps with M = 300. (c) Reconstructed proton density maps and corresponding error maps with M = 600. Note that the NRMSE is labeled at the right corner of each error map.

III-A3 Impact of SNR

We also evaluated the performance of the proposed method at SNR levels ranging from 28 dB to 40 dB at the acquisition length M = 500. Fig. 8 shows the NRMSE of the reconstructed T1, T2, and proton density maps. As can be seen, the proposed method outperforms the low-rank and subspace reconstruction at all noise levels, and the improvement becomes more significant as the SNR decreases. Again, this illustrates the effectiveness of incorporating a deep generative neural network as a regularizer for the low-rank and subspace reconstruction.
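An SNR sweep of this kind requires adding complex Gaussian noise at a prescribed SNR in dB. One common recipe is sketched below; it assumes the usual definition of SNR as signal power over noise power in dB, which may differ in detail from the paper's simulation setup.

```python
import numpy as np

def add_noise_at_snr(signal, snr_db, rng=None):
    """Add circularly symmetric complex Gaussian noise so that
    10*log10(signal_power / noise_power) equals snr_db (assumed definition)."""
    rng = np.random.default_rng() if rng is None else rng
    signal = np.asarray(signal, dtype=np.complex128)
    signal_power = np.mean(np.abs(signal) ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    # Split the noise power evenly between the real and imaginary parts.
    noise = np.sqrt(noise_power / 2) * (
        rng.standard_normal(signal.shape) + 1j * rng.standard_normal(signal.shape)
    )
    return signal + noise
```

Sweeping `snr_db` from 28 to 40 over the same noiseless k-space data then isolates the effect of the noise level on each reconstruction method.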


Figure 8: The NRMSE of the reconstructed (a) T1, (b) T2, and (c) proton density maps from the low-rank and subspace reconstruction and the proposed method at different SNRs.

III-B In Vivo Results

We evaluated the performance of the proposed method with in vivo MR Fingerprinting experiments. Specifically, we conducted in vivo experiments on a 3T Siemens Magnetom Prisma scanner (Siemens Medical Solutions, Erlangen, Germany) equipped with a 20-channel receiver coil. A healthy volunteer was scanned using the IR-FISP sequence [51] with approval from the Institutional Review Board. We used the same set of acquisition parameters and spiral trajectories as in [51]; other relevant imaging parameters include FOV = 300 × 300 mm², matrix size = 256 × 256, and slice thickness = 5 mm.

In addition to the undersampled MR Fingerprinting experiments, we acquired a set of fully sampled data with the acquisition length M = 1000 following the same procedure as in [9], from which we estimated reference T1, T2, and proton density maps. Moreover, we conducted a calibration scan using a vendor-provided gradient echo sequence to estimate the coil sensitivity maps.

We applied the low-rank and subspace reconstruction and the proposed method to the in vivo data. For both methods, we used the same rank value (i.e., L = 8) and the same dictionary for subspace estimation and dictionary matching as described in the simulations. For the proposed method, we used the same neural network architecture as in the simulations and manually selected the penalty parameters for optimized performance.

Fig. 9 shows the results from the low-rank and subspace reconstruction and the proposed method at the acquisition length M = 500. As can be seen, the proposed method outperforms the low-rank and subspace reconstruction for the T1, T2, and proton density maps, which is consistent with the simulation results. This further confirms the role of a deep generative neural network as a regularizer for improving the low-rank and subspace reconstruction.


Figure 9: In vivo results from the low-rank and subspace reconstruction and the proposed method with the acquisition length M = 500. (a) T1 maps. (b) T2 maps. (c) Proton density maps. Note that the overall NRMSE is labeled at the right corner of each error map.

IV Discussion

We have demonstrated the efficacy of the proposed method in utilizing an untrained deep generative prior to improve the low-rank and subspace reconstruction. Here it is worth making further remarks on some related aspects. First, while we have illustrated the effectiveness of the proposed method with a specific generative convolutional neural network, alternative architectures (e.g., U-Net [55, 47] or the deep decoder [48]) could also be adopted. Determining the best neural network architecture for the proposed method remains an interesting open problem that warrants in-depth future study.

It is known that untrained deep generative neural networks used for solving inverse problems often suffer from overfitting [47]. For the proposed method, we used an early-stopping strategy to effectively mitigate this problem. Alternatively, additional regularizers (e.g., total variation [56] or plug-and-play [57] regularization) could be incorporated into the proposed formulation to address the issue. Although extending the proposed algorithm to accommodate these regularizers is straightforward, the computation time of the algorithm would inevitably increase.
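A minimal form of the early-stopping strategy is to cap the number of optimizer iterations and keep the iterate with the best loss, breaking out once the loss stops improving. The toy sketch below illustrates only the stopping rule, using plain gradient descent on a least-squares surrogate; in the actual method, the variable would be the generator's weights and the loss the data-consistency term, and all names and parameter values here are ours.

```python
import numpy as np

def fit_with_early_stopping(A, y, step=0.01, max_iter=5000, tol=1e-6, patience=20):
    """Gradient descent on ||A x - y||^2, stopped early once the loss has not
    improved by more than `tol` for `patience` consecutive iterations.
    Toy stand-in for early-stopped network fitting."""
    x = np.zeros(A.shape[1])
    best_loss, best_x, stall = np.inf, x.copy(), 0
    for _ in range(max_iter):
        r = A @ x - y
        loss = float(r @ r)
        if loss < best_loss - tol:
            best_loss, best_x, stall = loss, x.copy(), 0
        else:
            stall += 1
            if stall >= patience:   # early stop: no meaningful improvement
                break
        x -= step * 2 * (A.T @ r)   # gradient step on the least-squares loss
    return best_x
```

Returning the best iterate rather than the last one is what prevents the fit from drifting into the noise once the data term has been matched.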

We have used an untrained neural network as a prior to improve the low-rank and subspace reconstruction. Early work [58] demonstrated that pre-training a neural network with a few examples can improve performance on inverse problems over using an untrained network, and a similar strategy could be adopted here. For example, with a few training examples, we could jointly optimize the latent vector z and the neural network weights θ in the proposed formulation, which may further improve performance. A systematic investigation of this extension is left for future work.

In addition to the neural network architecture, the proposed method has a number of other hyperparameters to select, including the rank L and the penalty parameters μ1 and μ2 of the algorithm. While theoretical methods exist that could aid the selection of these hyperparameters [59, 60], they can also be optimized empirically, e.g., using fully sampled reference data sets from phantom and/or in vivo experiments. The effectiveness of the latter approach has been demonstrated in this work and in earlier methods [30, 31, 15].

In this work, we have demonstrated the effectiveness of the proposed method with a conventional 2D MR Fingerprinting acquisition scheme. As a general image reconstruction method, it can also be integrated with other optimized MR Fingerprinting acquisition schemes (e.g., [61, 62, 63]) for further improved performance. Moreover, the proposed method can be generalized to handle volumetric MR Fingerprinting acquisitions (e.g., [64, 65, 66]) to achieve faster imaging speeds. Last but not least, the proposed method can be extended to other MR parameter mapping or quantitative MRI applications. The preliminary results in our recent work [67] have demonstrated the utility of such an extension.

V Conclusion

In this paper, we have presented a new learning-based image reconstruction method for MR Fingerprinting, which integrates low-rank and subspace modeling with a deep generative prior through an untrained neural network. The proposed method utilizes the architecture of a convolutional generative neural network to represent the spatial subspace of the low-rank model, which serves as an effective regularizer for the low-rank and subspace reconstruction. To solve the resulting optimization problem, we have described an algorithm based on variable splitting and the alternating direction method of multipliers. We have shown results from numerical simulations and in vivo experiments to demonstrate the improvement of the proposed method over the state-of-the-art low-rank and subspace reconstruction.

References

  • [1] D. Ma, V. Gulani, N. Seiberlich, K. Liu, J. L. Sunshine, J. L. Duerk, and M. A. Griswold, “Magnetic resonance fingerprinting,” Nature, vol. 495, pp. 187–192, 2013.
  • [2] C. Liao, K. Wang, X. Cao, Y. Li, D. Wu, H. Ye, Q. Ding, H. He, and J. Zhong, “Detection of lesions in mesial temporal lobe epilepsy by using MR fingerprinting,” Radiology, vol. 288, no. 3, pp. 804–812, 2018.
  • [3] J. Y. Choi, B. Krishnan, S. Hu, D. Martinez, Y. Tang, X. Wang, K. Sakaie, S. Jones, H. Murakami, I. Blümcke, I. Najm, D. Ma, and Z. I. Wang, “Using magnetic resonance fingerprinting to characterize periventricular nodular heterotopias in pharmacoresistant epilepsy,” Epilepsia, vol. 63, no. 5, pp. 1225–1237, 2022.
  • [4] J. I. Hamilton, Y. Jiang, Y. Chen, D. Ma, W.-C. Lo, M. A. Griswold, and N. Seiberlich, “MR fingerprinting for rapid quantification of myocardial T1, T2, and proton spin density,” Magn. Reson. Med., vol. 77, no. 4, pp. 1446–1458, 2017.
  • [5] O. Jaubert, G. Cruz, A. Bustin, R. Hajhosseiny, S. Nazir, T. Schneider, P. Koken, M. Doneva, D. Rueckert, P.-G. Masci, R. M. Botnar, and C. Prieto, “T1, T2, and fat fraction cardiac MR fingerprinting: Preliminary clinical evaluation,” J. Magn. Reson. Imaging, vol. 53, no. 4, pp. 1253–1265, 2021.
  • [6] C. Badve, A. Yu, S. Dastmalchian, M. Rogers, D. Ma, Y. Jiang, S. Margevicius, S. Pahwa, Z. Lu, M. Schluchter et al., “MR fingerprinting of adult brain tumors: Initial experience,” Am. J. Neuroradiol., vol. 38, no. 3, pp. 492–499, 2017.
  • [7] C. M. Pirkl, L. Nunez-Gonzalez, F. Kofler, S. Endt, L. Grundl, M. Golbabaee, P. A. Gómez, M. Cencini, G. Buonincontri, R. F. Schulte, M. Smits, B. Wiestler, B. H. Menze, M. I. Menzel, and J. A. Hernandez-Tamames, “Accelerated 3D whole-brain T1, T2, and proton density mapping: Feasibility for clinical glioma MR imaging,” Neuroradiol., vol. 63, no. 11, pp. 1831–1851, 2021.
  • [8] A. Sharafi, M. V. W. Zibetti, G. Chang, M. Cloos, and R. R. Regatte, “MR fingerprinting for rapid simultaneous T1, T2, and T1ρ relaxation mapping of the human articular cartilage at 3T,” Magn. Reson. Med., vol. 84, no. 5, pp. 2636–2644, 2020.
  • [9] B. Zhao, K. Setsompop, H. Ye, S. F. Cauley, and L. L. Wald, “Maximum likelihood reconstruction for magnetic resonance fingerprinting,” IEEE Trans. Med. Imaging, vol. 35, no. 8, pp. 1812–1823, 2016.
  • [10] M. Davies, G. Puy, P. Vandergheynst, and Y. Wiaux, “A compressed sensing framework for magnetic resonance fingerprinting,” SIAM J. Imaging Sciences, vol. 7, no. 4, pp. 2623–2656, 2014.
  • [11] Z. Wang, Q. Zhang, J. Yuan, and X. Wang, “MRF denoising with compressed sensing and adaptive filtering,” in Proc. IEEE Int. Symp. Biomed. Imaging, 2014, pp. 870–873.
  • [12] B. Zhao, “Model-based iterative reconstruction for magnetic resonance fingerprinting,” in Proc. IEEE Int. Conf. Image Process., 2015, pp. 3392–3396.
  • [13] E. Y. Pierre, D. Ma, Y. Chen, C. Badve, and M. A. Griswold, “Multiscale reconstruction for MR fingerprinting,” Magn. Reson. Med., vol. 75, no. 6, pp. 2481–2492, 2016.
  • [14] M. Doneva, T. Amthor, P. Koken, K. Sommer, and P. Börnert, “Matrix completion-based reconstruction for undersampled magnetic resonance fingerprinting data,” Magn. Reson. Imaging, vol. 41, pp. 41–52, 2017.
  • [15] B. Zhao, K. Setsompop, E. Adalsteinsson, B. Gagoski, H. Ye, D. Ma, Y. Jiang, P. E. Grant, M. A. Griswold, and L. L. Wald, “Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling,” Magn. Reson. Med., vol. 79, no. 2, pp. 933–942, 2018.
  • [16] J. Assländer, M. A. Cloos, F. Knoll, D. K. Sodickson, J. Hennig, and R. Lattanzi, “Low rank alternating direction method of multipliers reconstruction for MR fingerprinting,” Magn. Reson. Med., vol. 79, no. 1, pp. 83–96, 2018.
  • [17] G. Mazor, L. Weizman, A. Tal, and Y. C. Eldar, “Low-rank magnetic resonance fingerprinting,” Med. Phys., vol. 45, no. 9, pp. 4066–4084, 2018.
  • [18] G. Lima da Cruz, A. Bustin, O. Jaubert, T. Schneider, R. M. Botnar, and C. Prieto, “Sparsity and locally low rank regularization for MR fingerprinting,” Magn. Reson. Med., vol. 81, no. 6, pp. 3530–3543, 2019.
  • [19] B. Zhao, K. Setsompop, D. Salat, and L. L. Wald, “Further development of subspace imaging to magnetic resonance fingerprinting: A low-rank tensor approach,” in Proc. IEEE Eng. Med. Bio. Conf., 2020, pp. 1662–1666.
  • [20] Y. Hu, P. Li, H. Chen, L. Zou, and H. Wang, “High-quality MR fingerprinting reconstruction using structured low-rank matrix completion and subspace projection,” IEEE Trans. Med. Imaging, vol. 41, no. 5, pp. 1150–1164, 2022.
  • [21] P. Virtue, S. X. Yu, and M. Lustig, “Better than real: Complex-valued neural nets for MRI fingerprinting,” in Proc. IEEE Int. Conf. Image Process., 2017, pp. 3953–3957.
  • [22] O. Cohen, B. Zhu, and M. S. Rosen, “MR fingerprinting Deep RecOnstruction NEtwork (DRONE),” Magn. Reson. Med., vol. 80, no. 3, pp. 885–894, 2018.
  • [23] Z. Fang, Y. Chen, M. Liu, L. Xiang, Q. Zhang, Q. Wang, W. Lin, and D. Shen, “Deep learning for fast and spatially constrained tissue quantification from highly accelerated data in magnetic resonance fingerprinting,” IEEE Trans. Med. Imaging, vol. 38, no. 10, pp. 2364–2374, 2019.
  • [24] F. Balsiger, A. Jungo, O. Scheidegger, P. G. Carlier, M. Reyes, and B. Marty, “Spatially regularized parametric map reconstruction for fast magnetic resonance fingerprinting,” Med. Image Anal., vol. 64, p. 101741, 2020.
  • [25] Y. Chen, Z. Fang, S.-C. Hung, W.-T. Chang, D. Shen, and W. Lin, “High-resolution 3D MR fingerprinting using parallel imaging and deep learning,” NeuroImage, vol. 206, p. 116329, 2020.
  • [26] M. Golbabaee, G. Buonincontri, C. M. Pirkl, M. I. Menzel, B. H. Menze, M. Davies, and P. A. Gómez, “Compressive MRI quantification using convex spatiotemporal priors and deep encoder-decoder networks,” Med. Image Anal., vol. 69, p. 101945, 2021.
  • [27] H. Lu, H. Ye, and B. Zhao, “Improved balanced steady-state free precession based MR fingerprinting with deep autoencoders,” in Proc. IEEE Eng. Med. Bio. Conf., 2022, pp. 3029–3034.
  • [28] D. Chen, M. E. Davies, and M. Golbabaee, “Deep unrolling for magnetic resonance fingerprinting,” in Proc. IEEE Int. Symp. Biomed. Imaging, 2022, pp. 1–4.
  • [29] F. Cheng, Y. Liu, Y. Chen, and P.-T. Yap, “High-resolution 3D magnetic resonance fingerprinting with a graph convolutional network,” IEEE Trans. Med. Imaging, vol. 42, no. 3, pp. 674–683, 2023.
  • [30] B. Zhao, J. P. Haldar, A. G. Christodoulou, and Z.-P. Liang, “Image reconstruction from highly undersampled (k, t)-space data with joint partial separability and sparsity constraints,” IEEE Trans. Med. Imaging, vol. 31, no. 9, pp. 1809–1820, 2012.
  • [31] B. Zhao, W. Lu, T. K. Hitchens, F. Lam, C. Ho, and Z.-P. Liang, “Accelerated MR parameter mapping with low-rank and sparsity constraints,” Magn. Reson. Med., vol. 74, no. 2, pp. 489–498, 2015.
  • [32] M. V. Afonso, J. M. Bioucas-Dias, and M. A. T. Figueiredo, “Fast image recovery using variable splitting and constrained optimization,” IEEE Trans. Image Process., vol. 19, no. 9, pp. 2345–2356, 2010.
  • [33] S. Ramani and J. A. Fessler, “Parallel MR image reconstruction using augmented Lagrangian methods,” IEEE Trans. Med. Imaging, vol. 30, no. 3, pp. 694–706, 2011.
  • [34] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends Mach. Learn., vol. 3, pp. 1–122, 2011.
  • [35] H. Lu and B. Zhao, “Accelerated magnetic resonance fingerprinting with low-rank and generative subspace modeling,” in Proc. IEEE Int. Conf. Acoust. Speech Signal Process. Workshop (ICASSPW), 2023, in press.
  • [36] C. M. Sandino, F. Ong, and S. S. Vasanawala, “Deep subspace learning: Enhancing speed and scalability of deep learning-based reconstruction of dynamic imaging data,” in Proc. Int. Symp. Magn. Reson. Med., 2020, p. 0599.
  • [37] Z. Chen, Y. Chen, Y. Xie, D. Li, and A. G. Christodoulou, “Data-consistent non-cartesian deep subspace learning for efficient dynamic MR image reconstruction,” in Proc. IEEE Int. Symp. Biomed. Imaging, 2022, pp. 1–5.
  • [38] M. Blumenthal, X. Wang, and M. Uecker, “Deep subspace learning for improved T1 mapping using single-shot inversion-recovery radial FLASH,” in Proc. Int. Symp. Magn. Reson. Med., 2022, p. 0241.
  • [39] A. H. Ahmed, Q. Zou, P. Nagpal, and M. Jacob, “Dynamic imaging using deep bi-linear unsupervised representation (DEBLUR),” IEEE Trans. Med. Imaging, vol. 41, no. 10, pp. 2693–2703, 2022.
  • [40] F. Bloch, “Nuclear induction,” Phys. Rev., vol. 70, pp. 460–474, 1946.
  • [41] Z.-P. Liang, “Spatiotemporal imaging with partially separable functions,” in Proc. IEEE Int. Symp. Biomed. Imaging, 2007, pp. 988–991.
  • [42] B. Recht, M. Fazel, and P. A. Parrilo, “Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization,” SIAM Rev., vol. 52, no. 3, pp. 471–501, 2010.
  • [43] B. Zhao, J. P. Haldar, C. Brinegar, and Z.-P. Liang, “Low rank matrix recovery for real-time cardiac MRI,” in Proc. IEEE Int. Symp. Biomed. Imaging, 2010, pp. 996–999.
  • [44] J. P. Haldar and Z.-P. Liang, “Spatiotemporal imaging with partially separable functions: A matrix recovery approach,” in Proc. IEEE Int. Symp. Biomed. Imaging, 2010, pp. 716–719.
  • [45] H. Lu, H. Ye, and B. Zhao, “Improved low-rank and subspace reconstruction for magnetic resonance fingerprinting with self-navigating acquisitions,” in Proc. Int. Symp. Magn. Reson. Med., 2023, p. 427.
  • [46] A. Bora, A. Jalal, E. Price, and A. G. Dimakis, “Compressed sensing using generative models,” in Proc. Int. Conf. Mach. Learn., 2017, pp. 537–546.
  • [47] D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Deep image prior,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2018.
  • [48] R. Heckel and P. Hand, “Deep decoder: Concise image representations from untrained non-convolutional networks,” in Proc. Intl. Conf. Learn. Represent., 2019.
  • [49] M. Z. Darestani and R. Heckel, “Accelerated MRI with un-trained neural networks,” IEEE Trans. Comput. Imaging, vol. 7, pp. 724–733, 2021.
  • [50] J. Nocedal and S. J. Wright, Numerical Optimization, 2nd ed.   New York, NY: Springer, 2006.
  • [51] Y. Jiang, D. Ma, N. Seiberlich, V. Gulani, and M. A. Griswold, “MR fingerprinting using fast imaging with steady state precession (FISP) with spiral readout,” Magn. Reson. Med., vol. 74, no. 6, pp. 1621–1631, 2015.
  • [52] D. S. Bernstein, Matrix Mathematics: Theory, Facts, and Formulas with Application to Linear Systems Theory.   Princeton, NJ: Princeton Univ. Press, 2005.
  • [53] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in Proc. Int. Conf. Learn. Represent., 2015.
  • [54] D. L. Collins, A. P. Zijdenbos, V. Kollokian, J. G. Sled, N. J. Kabani, C. J. Holmes, and A. C. Evans, “Design and construction of a realistic digital brain phantom,” IEEE Trans. Med. Imaging, vol. 17, no. 3, pp. 463–468, 1998.
  • [55] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Interv., 2015, pp. 234–241.
  • [56] J. Liu, Y. Sun, X. Xu, and U. S. Kamilov, “Image restoration using total variation regularized deep image prior,” in Proc. IEEE Int. Conf. Acoust. Speech Signal Process., 2019, pp. 7715–7719.
  • [57] Z. Sun, F. Latorre, T. Sanchez, and V. Cevher, “A plug-and-play deep image prior,” in Proc. IEEE Int. Conf. Acoust. Speech Signal Process., 2021, pp. 8103–8107.
  • [58] O. Leong and W. Sakla, “Low shot learning with untrained neural networks for imaging inverse problem,” in Proc. Adv. Neural Inf. Process. Syst. Workshop (NeurIPSW), 2019.
  • [59] E. Ghadimi, A. Teixeira, I. Shames, and M. Johansson, “Optimal Parameter Selection for the Alternating Direction Method of Multipliers (ADMM): Quadratic Problems,” IEEE Trans. Autom. Control, vol. 60, no. 3, pp. 644–658, 2015.
  • [60] R. Zhang, A. Shapiro, and Y. Xie, “Statistical rank selection for incomplete low-rank matrices,” in Proc. IEEE Int. Conf. Acoust. Speech Signal Process., 2019, pp. 2912–2916.
  • [61] B. Zhao, J. P. Haldar, C. Liao, D. Ma, Y. Jiang, M. A. Griswold, K. Setsompop, and L. L. Wald, “Optimal experiment design for magnetic resonance fingerprinting: Cramér-Rao bound meets spin dynamics,” IEEE Trans. Med. Imaging, vol. 38, no. 3, pp. 844–861, 2019.
  • [62] J. Assländer, R. Lattanzi, D. K. Sodickson, and M. A. Cloos, “Optimized quantification of spin relaxation times in the hybrid state,” Magn. Reson. Med., vol. 82, no. 4, pp. 1385–1397, 2019.
  • [63] S. P. Jordan, S. Hu, I. Rozada, D. F. McGivney, R. Boyacioğlu, D. C. Jacob, S. Huang, M. Beverland, H. G. Katzgraber, M. Troyer, M. A. Griswold, and D. Ma, “Automated design of pulse sequences for magnetic resonance fingerprinting using physics-inspired optimization,” Proc. Natl. Acad. Sci. U.S.A., vol. 118, no. 40, p. e2020516118, 2021.
  • [64] H. Ye, D. Ma, Y. Jiang, S. F. Cauley, Y. Du, L. L. Wald, M. A. Griswold, and K. Setsompop, “Accelerating magnetic resonance fingerprinting (MRF) using t-blipped simultaneous multislice (SMS) acquisition,” Magn. Reson. Med., vol. 75, no. 5, pp. 2078–2085, 2016.
  • [65] B. Zhao, B. Bilgic, E. Adalsteinsson, M. A. Griswold, L. L. Wald, and K. Setsompop, “Simultaneous multislice magnetic resonance fingerprinting with low-rank and subspace modeling,” in Proc. IEEE Eng. Med. Bio. Conf., 2017, pp. 3264–3268.
  • [66] C. Liao, B. Bilgic, M. K. Manhard, B. Zhao, X. Cao, J. Zhong, L. L. Wald, and K. Setsompop, “3D MR fingerprinting with accelerated stack-of-spirals and hybrid sliding-window and GRAPPA reconstruction,” NeuroImage, vol. 162, pp. 13–22, 2017.
  • [67] H. Lu and B. Zhao, “Accelerated magnetic resonance parameter mapping with low-rank modeling and deep generative priors,” in Proc. IEEE Stat. Signal Process. Workshop, 2023, in press.