
Accelerating Magnetic Resonance $\text{T}_{1\rho}$ Mapping Using Simultaneously Spatial Patch-based and Parametric Group-based Low-rank Tensors (SMART)

Yuanyuan Liu, Dong Liang, *Senior Member, IEEE*, Zhuo-Xu Cui, Yuxin Yang, Chentao Cao, Qingyong Zhu, Jing Cheng, *Student Member, IEEE*, Caiyun Shi, Haifeng Wang, *Member, IEEE*, and Yanjie Zhu

This study was supported in part by the National Key R&D Program of China under grant no. 2020YFA0712200; the National Natural Science Foundation of China under grant nos. 62201561, 12226008, 62125111, 81971611, and 81901736; the Innovation and Technology Commission of the Government of Hong Kong SAR under grant no. MRP/001/18X; the Guangdong Basic and Applied Basic Research Foundation under grant no. 2021A1515110540; the China Postdoctoral Science Foundation under grant no. 2022M723302; and the Shenzhen Science and Technology Program under grant no. RCYX20210609104444089. (Corresponding author: Yanjie Zhu.)

Yuanyuan Liu is with the Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, and the National Innovation Center for Advanced Medical Devices, Shenzhen, Guangdong, China (e-mail: [email protected]).

Zhuo-Xu Cui, Qingyong Zhu, and Dong Liang are with the Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (e-mail: {zx.cui; qy.zhu; dong.liang}@siat.ac.cn).

Yuxin Yang, Chentao Cao, Jing Cheng, Caiyun Shi, Haifeng Wang, and Yanjie Zhu are with the Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (e-mail: {yx.yang; ct.cao; jing.cheng; cy.shi; hf.wang1; yj.zhu}@siat.ac.cn).

Yuanyuan Liu and Dong Liang contributed equally to this study.
Abstract

Quantitative magnetic resonance (MR) $\text{T}_{1\rho}$ mapping is a promising approach for characterizing intrinsic tissue-dependent information. However, its long scan time significantly hinders widespread application. Recently, low-rank tensor models have been employed and have demonstrated exemplary performance in accelerating MR $\text{T}_{1\rho}$ mapping. This study proposes a novel method that uses spatial patch-based and parametric group-based low-rank tensors simultaneously (SMART) to reconstruct images from highly undersampled k-space data. The spatial patch-based low-rank tensor exploits the high local and nonlocal redundancies and similarities between the contrast images in $\text{T}_{1\rho}$ mapping. The parametric group-based low-rank tensor, which integrates the similar exponential behavior of the image signals, is jointly used to enforce multidimensional low-rankness in the reconstruction. In vivo brain datasets were used to demonstrate the validity of the proposed method. Experimental results show that the proposed method achieves 11.7-fold and 13.21-fold accelerations in two-dimensional and three-dimensional acquisitions, respectively, with more accurate reconstructed images and maps than several state-of-the-art methods. Prospective reconstruction results further demonstrate the capability of the SMART method in accelerating MR $\text{T}_{1\rho}$ imaging.

Index Terms

$\text{T}_{1\rho}$ mapping, fast imaging, low-rank tensor, patch

1 Introduction


Quantitative magnetic resonance (MR) $\text{T}_{1\rho}$ (the spin-lattice relaxation time in the rotating reference frame)[1] mapping is a promising approach for characterizing intrinsic tissue-dependent information. It has been used as a novel contrast mechanism in many biomedical applications, including the musculoskeletal system[2], brain[3, 4], liver[5, 6], intervertebral discs[7], and cardiovascular imaging[8], to assess early macromolecular changes that are not visible in conventional morphological imaging. However, MR $\text{T}_{1\rho}$ mapping requires acquiring multiple images with different spin-lock times (TSLs), and the resulting long scan time significantly hinders its widespread clinical application. Parallel imaging is the most widely used acceleration method for MR $\text{T}_{1\rho}$ mapping, but its acceleration factor is limited, usually to approximately 2[9]. Compressed sensing (CS)[10, 11, 12], which utilizes sparsifying priors, has been applied in fast MR parametric mapping to achieve higher acceleration factors. However, the scan time of MR $\text{T}_{1\rho}$ mapping must be shortened further to facilitate clinical use.

In recent years, matrix low-rank models have achieved promising results in accelerating MR parametric mapping by exploiting the anatomical correlation between contrast images globally or locally[13, 14, 15, 16]. Compared with conventional MR images, the contrast images in parametric mapping contain redundant information across the spatial domain and are highly correlated along the temporal dimension. As an extension of the matrix, a high-order tensor can capture data correlations in multiple dimensions[17] beyond the original spatial-temporal domain, and the sparse representation of multidimensional image structures can be effectively exploited using a high-order tensor[18, 19, 20, 21]. Therefore, high-order tensor models have been applied in many medical imaging applications to accelerate high-dimensional magnetic resonance imaging (MRI), such as cardiac imaging[22, 21, 23, 24, 25], simultaneous multiparameter mapping[4], and functional MRI[26]. Most of these methods construct the tensor globally by directly using the entire multidimensional image series as a single tensor. The sparsity of the tensor, which usually relies on the sparsity of the core tensor coefficients, is modeled as a regularization to enforce global low-rankness using a higher-order singular-value decomposition (SVD). However, global processing jointly treats multiple tissues of different types, which may leave residual artifacts that local processing can ameliorate[13, 21]. As with group sparsity[27, 28], a high-order tensor can also be constructed in a patch-based local manner: image patches with high correlation in a neighborhood are extracted and rearranged to form local tensors. Patch-based low-rank tensor reconstruction with this local processing approach has been demonstrated to outperform global low-rank tensor reconstruction[21, 17, 24, 25, 26].

The signal evolution in quantitative MRI exhibits a low-rank structure in the temporal dimension that can be exploited to construct a low-rank Hankel matrix[29, 30] and further shorten the scan time. Unlike spatial low-rank methods that process the entire image[13, 14, 31], the Hankel low-rank approximation treats the signal at each spatial location independently. Recently, k-space low-rank Hankel tensor completion approaches have demonstrated the suitability and potential of tensor modeling in the k-space of parallel imaging[32, 33]. The block-wise Hankel tensor reconstruction method[33] organizes multicontrast k-space datasets into a single block-wise Hankel tensor, exploiting multilinear low-rankness based on shareable information in datasets of identical geometry. Motivated by this method, tissues with similar parametric values can also be clustered in image space. The Hankel matrices of the same tissue can then be concatenated into a Hankel tensor, which exhibits significant low-rankness owing to the high correlation among tissues with similar signal evolution.

This study proposes a novel and fast MR $\text{T}_{1\rho}$ mapping method that exploits low-rankness in both the spatial and temporal directions. First, similar local patches are found and extracted from the images to form high-order spatial tensors using a block matching algorithm[34]. Second, assuming that the $\text{T}_{1\rho}$ values of the same tissue are similar, signals are classified into different tissue groups through a histogram of the $\text{T}_{1\rho}$ values. In each group, the signal evolution along the temporal direction is used to construct a Hankel matrix, and all matrices in a group are cascaded to form a parametric tensor. Both the spatial and parametric tensors are utilized to reconstruct the contrast images from highly undersampled k-space data through high-order tensor decomposition (simultaneous spatial patch-based and parametric group-based low-rank tensors, SMART). We used quantitative $\text{T}_{1\rho}$ mapping of the brain to evaluate the performance of the proposed method. Two-dimensional (2D) and three-dimensional (3D) $\text{T}_{1\rho}$-weighted images were undersampled with different acceleration factors to substantiate the suitability of SMART for improving the quality of fast $\text{T}_{1\rho}$ mapping relative to several state-of-the-art methods.

2 Material and Methods

2.1 Notation

Scalars are denoted by italic letters (e.g., $b$). Vectors are denoted by bold lowercase letters (e.g., $\mathbf{b}$). Matrices are denoted by bold capital letters (e.g., $\mathbf{B}$). Tensors of order three or higher are denoted by Euler script letters (e.g., $\mathcal{X}$). The indexed scalar $x_{ijk}$ denotes the $(i,j,k)$th element of the third-order tensor $\mathcal{X}$. Operators are denoted by capital italic letters (e.g., $C$), and $C^{T}$ denotes the adjoint operator of $C$.

The mode-$i$ matricization (unfolding) of a tensor $\mathcal{X}\in\mathbb{C}^{N_{1}\times N_{2}\times\ldots\times N_{d}}$ is defined as $\mathbf{X}_{(i)}\in\mathbb{C}^{N_{i}\times\left(N_{1}\ldots N_{i-1}N_{i+1}\ldots N_{d}\right)}$, which arranges the mode-$i$ fibers of $\mathcal{X}$ as the columns of $\mathbf{X}_{(i)}$. The opposite operation is folding, which rearranges the elements of the matrix $\mathbf{X}_{(i)}$ into the $d$th-order tensor $\mathcal{X}$. Tensor multiplication is more complex than matrix multiplication. In this study, only the tensor $i$-mode product[35] is considered, defined as the multiplication $\mathcal{X}\times_{i}\mathbf{U}$ of a tensor $\mathcal{X}$ with a matrix $\mathbf{U}\in\mathbb{C}^{J\times N_{i}}$ in mode $i$. Elementwise, $w_{n_{1}\cdots n_{i-1}\,j\,n_{i+1}\cdots n_{d}}=\sum_{n_{i}=1}^{N_{i}}x_{n_{1}\cdots n_{i-1}n_{i}n_{i+1}\cdots n_{d}}u_{jn_{i}}$, where $\mathcal{W}=\mathcal{X}\times_{i}\mathbf{U}\in\mathbb{C}^{N_{1}\times\ldots\times N_{i-1}\times J\times N_{i+1}\times\ldots\times N_{d}}$. The $i$-mode product can also be computed via the matrix multiplication $\mathbf{W}_{(i)}=\mathbf{U}\mathbf{X}_{(i)}$.
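For concreteness, the unfolding, folding, and $i$-mode product defined above can be sketched in NumPy. This is our illustration, not the authors' implementation; the function names are ours.

```python
import numpy as np

def unfold(X, mode):
    """Mode-i matricization: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of `unfold`: rebuild the tensor of the given shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def mode_product(X, U, mode):
    """i-mode product X x_i U, computed as W_(i) = U @ X_(i) followed by folding."""
    shape = list(X.shape)
    shape[mode] = U.shape[0]
    return fold(U @ unfold(X, mode), mode, tuple(shape))
```

The identity $\mathbf{W}_{(i)}=\mathbf{U}\mathbf{X}_{(i)}$ holds for any consistent unfolding convention, so `unfold(mode_product(X, U, i), i)` equals `U @ unfold(X, i)` exactly.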

2.2 Measurement Model

Let $\mathbf{X}\in\mathbb{C}^{N_{\textit{voxel}}\times N_{\textit{TSL}}}$ be the $\text{T}_{1\rho}$-weighted image series to be reconstructed, where $N_{\textit{voxel}}$ denotes the number of voxels in each $\text{T}_{1\rho}$-weighted image and $N_{\textit{TSL}}$ is the number of TSLs. $\mathbf{Y}\in\mathbb{C}^{N_{\textit{voxel}}\times N_{c}\times N_{\textit{TSL}}}$ denotes the corresponding k-space measurements, where $N_{c}$ is the number of coils. The forward model for $\text{T}_{1\rho}$ mapping is given by

$\mathbf{Y}=E\mathbf{X}+\zeta$, (1)

where $\zeta$ denotes the measurement noise and $E$ denotes the encoding operator[36, 37] given by $E=AFS$, in which $A$ is the undersampling operator that undersamples the k-space data of each $\text{T}_{1\rho}$-weighted image, $F$ denotes the Fourier transform operator, and $S$ multiplies each $\text{T}_{1\rho}$-weighted image by the coil sensitivity maps coil-by-coil. The general formulation for recovering $\mathbf{X}$ from its undersampled measurements is

$\underset{\mathbf{X}}{\arg\min}\ \frac{1}{2}\left\|E\mathbf{X}-\mathbf{Y}\right\|_{F}^{2}+\lambda R(\mathbf{X})$, (2)

where $\left\|\cdot\right\|_{F}$ denotes the Frobenius norm, $R(\mathbf{X})$ denotes a combination of regularizations, and $\lambda$ is the regularization parameter.
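As an illustration of the encoding operator $E=AFS$ and its adjoint (which any solver of (2) needs for data-consistency updates), a minimal NumPy sketch for a 2D Cartesian acquisition might look as follows. The array layout $(n_x, n_y, \cdot)$ and function names are our assumptions, not the authors' code.

```python
import numpy as np

def forward(X, smaps, mask):
    """E = A F S: coil-weight each TSL image, 2D FFT, then undersample.

    X     : (nx, ny, n_tsl)   image series
    smaps : (nx, ny, nc)      coil sensitivity maps
    mask  : (nx, ny, n_tsl)   binary sampling mask (shared across coils)
    """
    coil_imgs = X[:, :, None, :] * smaps[:, :, :, None]            # S
    ksp = np.fft.fft2(coil_imgs, axes=(0, 1), norm="ortho")        # F
    return ksp * mask[:, :, None, :]                               # A

def adjoint(Y, smaps, mask):
    """E^T: mask, inverse FFT, then coil-combine with conjugate sensitivities."""
    coil_imgs = np.fft.ifft2(Y * mask[:, :, None, :], axes=(0, 1), norm="ortho")
    return np.sum(coil_imgs * np.conj(smaps[:, :, :, None]), axis=2)
```

With the orthonormal FFT convention, `adjoint` satisfies the adjoint relation $\langle E\mathbf{X},\mathbf{Y}\rangle=\langle\mathbf{X},E^{T}\mathbf{Y}\rangle$, which is the property the CG solver in Section 2.3.3 relies on.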

In this study, the SMART reconstruction method assumes that the $\text{T}_{1\rho}$-weighted image series $\mathbf{X}$ can be expressed as a high-order low-rank representation on both a spatial patch-based tensor and a parametric group-based tensor (Fig. 1). The problem in (2) can then be formulated as a constrained optimization on the two high-order low-rank tensors.

2.3 Proposed Method

2.3.1 Spatial Tensor Construction

We use the block matching algorithm[38, 39] to extract similar anatomical patches and build a spatial patch-based tensor at each spatial location, so that $\mathbf{X}$ can be expressed as a high-order low-rank representation on a patch scale. For a given reference patch $\mathbf{B}_{i}\in\mathbb{C}^{N_{b}}$ ($N_{b}=b\times b$ in 2D imaging and $N_{b}=b\times b\times b$ in 3D imaging) centered at spatial location $i$, let $P_{i}$ denote the patch selection operator that extracts a group of patches similar to $\mathbf{B}_{i}$ from all time points and constructs a low-rank tensor $\mathcal{T}_{i}\in\mathbb{C}^{N_{b}\times N^{i}_{p}\times N_{\textit{TSL}}}$ from them, where $N^{i}_{p}$ denotes the number of similar patches. This process can be expressed as $\mathcal{T}_{i}=P_{i}(\mathbf{X})$, and the adjoint operation $P_{i}^{T}$ places the patches back in their original spatial locations in the image.

The patch selection process is as follows. The reference patch $\mathbf{B}_{i}$ is compared with every other image patch $\mathbf{B}_{\tilde{i}}$ using the normalized $l_{2}$-norm distance[34], i.e., the normalized mean-squared error

$\operatorname{dist}\left(\mathbf{B}_{i},\mathbf{B}_{\tilde{i}}\right)=\left\|\mathbf{B}_{i}-\mathbf{B}_{\tilde{i}}\right\|_{2}^{2}/\left\|\mathbf{B}_{\tilde{i}}\right\|_{2}^{2}$. (3)

A patch $\mathbf{B}_{\tilde{i}}$ is considered similar to $\mathbf{B}_{i}$ when this distance is less than a predefined threshold $\lambda_{m}$. To reduce complexity, we compare only candidates within a specified search radius $r$ and limit the number of similar patches to $N_{p,\max}$. If more image patches match $\mathbf{B}_{i}$, only the $N_{p,\max}$ patches with the highest similarity are included in the group for further processing.
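A minimal sketch of this block matching step, using the threshold $\lambda_m$, search radius $r$, and patch budget $N_{p,\max}$ described above, could look as follows (our illustrative implementation for a single 2D image; names are ours):

```python
import numpy as np

def match_patches(image, center, b, r, lam_m, np_max):
    """Select up to `np_max` patches within search radius `r` whose normalized
    l2 distance to the reference patch (Eq. 3) is below threshold `lam_m`.
    Returns top-left corners of matched patches, most similar first."""
    x0, y0 = center
    ref = image[x0:x0 + b, y0:y0 + b]
    candidates = []
    for x in range(max(0, x0 - r), min(image.shape[0] - b, x0 + r) + 1):
        for y in range(max(0, y0 - r), min(image.shape[1] - b, y0 + r) + 1):
            cand = image[x:x + b, y:y + b]
            # normalized l2 distance of Eq. (3)
            d = np.sum(np.abs(ref - cand) ** 2) / max(np.sum(np.abs(cand) ** 2), 1e-12)
            if d < lam_m:
                candidates.append((d, (x, y)))
    candidates.sort(key=lambda t: t[0])       # most similar first
    return [pos for _, pos in candidates[:np_max]]
```

The reference patch always matches itself with distance zero, so it is returned first; the stacked patches from all TSLs then form the tensor $\mathcal{T}_{i}$.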

2.3.2 Parametric Tensor Construction

Similar to $\text{T}_{2}$ mapping in previous studies[29], the $\text{T}_{1\rho}$-weighted images $I(c,t_{m})$ in $\text{T}_{1\rho}$ mapping can be expressed as follows:

$I\left(c,t_{m}\right)=\sum_{l=1}^{L}\rho_{l}(c)\exp[i\varphi(c)]\exp\left[-t_{m}/T_{1\rho,l}(c)\right]$, (4)

where $\rho_{l}(c)$ represents the proton density distribution, $T_{1\rho,l}$ is the $\text{T}_{1\rho}$ relaxation time of the $l$th tissue component, $\varphi(c)$ represents the phase distribution, $c$ indicates the spatial coordinate, $t_{m}$ is the $m$th TSL, and $L$ is the number of linearly combined exponential terms, which is tissue-dependent. For instance, in the three-pool model of white matter, the tissue is composed of a myelin water pool, a myelinated axon water pool, and a mixed water pool, yielding $L=3$.

The signals from each voxel along the temporal direction can be used to form a Hankel matrix for $\forall c\in\boldsymbol{\Omega}$ (where $\boldsymbol{\Omega}$ denotes the spatial support of the tissue):

$$\mathbf{G}[I(c)]=\begin{bmatrix}I(c,t_{1})&I(c,t_{2})&\cdots&I(c,t_{k})\\ I(c,t_{2})&I(c,t_{3})&\cdots&I(c,t_{k+1})\\ \vdots&\vdots&\ddots&\vdots\\ I(c,t_{N_{\textit{TSL}}-k+1})&I(c,t_{N_{\textit{TSL}}-k+2})&\cdots&I(c,t_{N_{\textit{TSL}}})\end{bmatrix} \quad (5)$$

with a low-rank property, i.e., $\text{rank}(\mathbf{G})=L$ with $L\ll N_{\textit{TSL}}$, where $\text{rank}(\mathbf{G})$ denotes the rank of the matrix $\mathbf{G}$; a theoretical proof can be found in [29]. The Hankel matrix is designed to be as square as possible, so $k$ is chosen as the smallest integer greater than or equal to half the number of TSLs.

The parametric tensor is constructed using a tissue clustering method as follows. First, a $\text{T}_{1\rho}$ map is estimated from the initial $\text{T}_{1\rho}$-weighted images reconstructed by zero-filling. A histogram analysis divides the $\text{T}_{1\rho}$ map into $N_{g}$ groups, each considered to contain the signals of one tissue category. According to linear predictability[29, 40], the signals from each voxel along the temporal direction form a Hankel matrix. The Hankel matrices belonging to the same group are combined into a tensor $\mathcal{Z}_{j}\in\mathbb{C}^{N^{j}_{\textit{tissue}}\times(N_{\textit{TSL}}-k+1)\times k}$ from $\mathbf{X}$. This can be expressed as $\mathcal{Z}_{j}=H_{j}(\mathbf{X})$, $j=1,2,\cdots,N_{g}$, where $N^{j}_{\textit{tissue}}$ denotes the number of pixels in group $j$ and $H_{j}$ denotes the parametric tensor construction operator.
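The Hankel construction of (5) and the histogram-based grouping can be sketched as follows. This is an illustration only; in particular, the uniform-bin histogram is our assumption about the clustering details, and all names are ours.

```python
import numpy as np

def hankel(signal, k):
    """Hankel matrix G[I(c)] of Eq. (5) from one voxel's temporal signal."""
    n = len(signal)
    return np.array([signal[i:i + k] for i in range(n - k + 1)])

def group_voxels(t1rho_map, n_groups):
    """Cluster voxels into `n_groups` tissue bins via a histogram of T1rho
    values; returns, per bin, the flat indices of the voxels in it."""
    edges = np.histogram_bin_edges(t1rho_map.ravel(), bins=n_groups)
    labels = np.clip(np.digitize(t1rho_map.ravel(), edges[1:-1]), 0, n_groups - 1)
    return [np.where(labels == g)[0] for g in range(n_groups)]

def parametric_tensor(X_flat, voxel_idx, k):
    """Stack the Hankel matrices of one tissue group into the third-order
    tensor Z_j of shape (n_tissue, n_tsl - k + 1, k)."""
    return np.stack([hankel(X_flat[v], k) for v in voxel_idx])
```

Here `X_flat` is the image series flattened to $(N_{\textit{voxel}}, N_{\textit{TSL}})$, matching the layout of $\mathbf{X}$ in Section 2.2.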

Figure 1: Flow diagram of the proposed SMART method's optimization for subproblems (P2) and (P3). Groups of similar 2D (or 3D) patches from the $\text{T}_{1\rho}$-weighted images are extracted using the block matching method to construct the spatial tensor $\mathcal{T}_{i}$; a third-order tensor is formed by stacking each vectorized similar patch along the group and TSL directions. The $\text{T}_{1\rho}$ values are estimated from the $\text{T}_{1\rho}$-weighted images using a nonlinear fitting method according to the signal relaxation model to construct the parametric tensor $\mathcal{Z}_{j}$. After histogram analysis of the $\text{T}_{1\rho}$ map, the image is divided into several tissue groups. Within each group, the vector of pixels at a fixed location in the contrast images along the TSL direction is extended to a Hankel matrix, and all the Hankel matrices of the group's signals are stacked into a third-order tensor.

2.3.3 SMART Reconstruction Model

Based on the spatial and parametric tensors, we propose a novel method that simultaneously uses spatial patch-based and parametric group-based low-rank tensors through higher-order tensor decomposition. Let $\mathcal{T}=[\mathcal{T}_{1},\mathcal{T}_{2},\ldots,\mathcal{T}_{i},\ldots]$ and $\mathcal{Z}=[\mathcal{Z}_{1},\mathcal{Z}_{2},\ldots,\mathcal{Z}_{j},\ldots]$. The $\text{T}_{1\rho}$-weighted images can be reconstructed from the undersampled k-space data by solving the following optimization problem:

$$\underset{\mathbf{X}}{\arg\min}\ \frac{1}{2}\left\|E\mathbf{X}-\mathbf{Y}\right\|_{F}^{2}+\lambda_{1}\sum_{i}\left\|\mathcal{T}_{i}\right\|_{*}+\lambda_{2}\sum_{j}\left\|\mathcal{Z}_{j}\right\|_{*}\quad\text{s.t. }\mathcal{T}_{i}=P_{i}(\mathbf{X}),\ \mathcal{Z}_{j}=H_{j}(\mathbf{X}), \quad (6)$$

where $\left\|\cdot\right\|_{*}$ is the nuclear norm, and $\lambda_{1}$ and $\lambda_{2}$ are regularization parameters. Because the operator $P_{i}$ is time-varying while the operator $H_{j}$ always extracts signals from the same locations, (6) can be transformed into the following formulation using the Lagrangian optimization scheme:

$$L^{n}\left(\mathbf{X},\mathcal{T},\mathcal{Z},\alpha_{1},\alpha_{2}\right):=\frac{1}{2}\left\|E\mathbf{X}-\mathbf{Y}\right\|_{F}^{2}+\lambda_{1}\sum_{i}\left\|\mathcal{T}_{i}\right\|_{*}+\frac{\mu_{1}}{2}\sum_{i}\left\|\mathcal{T}_{i}-P^{n}_{i}(\mathbf{X})\right\|_{F}^{2}-\sum_{i}\left\langle\alpha_{1i},\mathcal{T}_{i}-P^{n}_{i}(\mathbf{X})\right\rangle+\lambda_{2}\sum_{j}\left\|\mathcal{Z}_{j}\right\|_{*}+\frac{\mu_{2}}{2}\sum_{j}\|\mathcal{Z}_{j}-H_{j}(\mathbf{X})\|_{F}^{2}-\sum_{j}\left\langle\alpha_{2j},\mathcal{Z}_{j}-H_{j}(\mathbf{X})\right\rangle, \quad (7)$$

where $\mu_{1}$ and $\mu_{2}$ denote the penalty parameters, $n$ denotes the iteration number, and $\alpha_{1}=[\alpha_{11},\alpha_{12},\ldots,\alpha_{1i},\ldots]$ and $\alpha_{2}=[\alpha_{21},\alpha_{22},\ldots,\alpha_{2j},\ldots]$ are the Lagrange multipliers. For simplicity, (7) can be rewritten as follows:

$$L^{n}\left(\mathbf{X},\mathcal{T},\mathcal{Z},\alpha_{1},\alpha_{2}\right):=\frac{1}{2}\left\|E\mathbf{X}-\mathbf{Y}\right\|_{F}^{2}+\lambda_{1}\sum_{i}\left\|\mathcal{T}_{i}\right\|_{*}+\frac{\mu_{1}}{2}\sum_{i}\left\|\mathcal{T}_{i}-P^{n}_{i}(\mathbf{X})-\frac{\alpha_{1i}}{\mu_{1}}\right\|_{F}^{2}+\lambda_{2}\sum_{j}\left\|\mathcal{Z}_{j}\right\|_{*}+\frac{\mu_{2}}{2}\sum_{j}\left\|\mathcal{Z}_{j}-H_{j}(\mathbf{X})-\frac{\alpha_{2j}}{\mu_{2}}\right\|_{F}^{2}. \quad (8)$$

Equation (8) can be efficiently solved through operator splitting via the alternating direction method of multipliers (ADMM)[41] by decoupling the optimization problem into three subproblems:
Update on $\mathbf{X}$ (subproblem P1):

$$L^{n}_{1}\left(\mathbf{X}\right):=\underset{\mathbf{X}}{\arg\min}\ \frac{1}{2}\left\|E\mathbf{X}-\mathbf{Y}\right\|_{F}^{2}+\frac{\mu_{1}}{2}\sum_{i}\left\|\mathcal{T}_{i}-P^{n}_{i}(\mathbf{X})-\frac{\alpha_{1i}}{\mu_{1}}\right\|_{F}^{2}+\frac{\mu_{2}}{2}\sum_{j}\left\|\mathcal{Z}_{j}-H_{j}(\mathbf{X})-\frac{\alpha_{2j}}{\mu_{2}}\right\|_{F}^{2}. \quad (9)$$

Subproblem (P1) can be solved efficiently using the conjugate gradient (CG) algorithm[42].

Update on $\mathcal{T}$ (subproblem P2):

$$L^{n}_{2}\left(\mathcal{T}\right):=\underset{\mathcal{T}}{\arg\min}\ \lambda_{1}\sum_{i}\left\|\mathcal{T}_{i}\right\|_{*}+\frac{\mu_{1}}{2}\sum_{i}\left\|\mathcal{T}_{i}-P^{n}_{i}(\mathbf{X})-\frac{\alpha_{1i}}{\mu_{1}}\right\|_{F}^{2}. \quad (10)$$

Subproblem (P2) can be solved in three steps. First, higher-order singular-value decomposition (HOSVD) is applied to the tensor $\mathcal{T}_{i}$[21, 32, 43]. It decomposes $\mathcal{T}_{i}$ into a core tensor $\mathcal{G}$ and three orthonormal bases $\mathbf{U}^{(e)}$ $(e=1,2,3)$ for the three subspaces of the $e$-mode vectors[35]:

$\mathcal{T}_{i}=\mathcal{G}\times_{1}\mathbf{U}^{(1)}\times_{2}\mathbf{U}^{(2)}\times_{3}\mathbf{U}^{(3)}$, (11)

where $\mathbf{U}^{(e)}$ is a unitary matrix obtained from the SVD of $\mathbf{T}_{i(e)}$. The low-rank tensor approximation is typically performed by soft thresholding[44] the core tensor, or by truncating the core tensor and unitary matrices when the $e$-mode ranks are known[26]. Although hard thresholding only provides an approximate solution to the nuclear-norm-regularized optimization problem[43, 45, 46], we found that both solutions achieve similar performance in the proposed SMART algorithm. Because one of the comparison methods (PROST)[45] in this study uses hard thresholding in the HOSVD, we also used hard thresholding for a fair comparison.

The low-rank tensor approximation effectively acts as a high-order denoising process, where the small discarded coefficients mainly reflect contributions from noise[43] and noise-like artifacts[32, 45]. An implementation of the HOSVD is shown in Algorithm 1 of the supplementary information. Second, the denoised $\mathcal{T}_{i}$ tensors are rearranged to form the denoised patches. Finally, overlapping image patches are combined by simple averaging (Fig. 1, "Aggregation") to generate an estimated image.
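The hard-truncated HOSVD used for this low-rank approximation can be sketched as follows (our illustrative NumPy version, not Algorithm 1 of the supplement; the mode ranks are assumed to be given):

```python
import numpy as np

def unfold(X, mode):
    """Mode-i matricization of a tensor."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def hosvd_denoise(T, ranks):
    """Hard-truncated HOSVD: keep the leading `ranks[e]` left singular vectors
    of each mode unfolding, then project T onto the truncated subspaces."""
    Us = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        Us.append(U[:, :r])
    # core tensor: G = T x_1 U1^H x_2 U2^H x_3 U3^H
    G = T
    for mode, U in enumerate(Us):
        G = np.moveaxis(np.tensordot(np.conj(U).T, G, axes=(1, mode)), 0, mode)
    # expand back: approx = G x_1 U1 x_2 U2 x_3 U3
    approx = G
    for mode, U in enumerate(Us):
        approx = np.moveaxis(np.tensordot(U, approx, axes=(1, mode)), 0, mode)
    return approx
```

A tensor whose multilinear ranks do not exceed the kept ranks is reproduced exactly; otherwise the discarded coefficients (noise and noise-like artifacts, per the text) are suppressed.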

Update on $\mathcal{Z}$ (subproblem P3):

$$L^{n}_{3}\left(\mathcal{Z}\right):=\underset{\mathcal{Z}}{\arg\min}\ \lambda_{2}\sum_{j}\left\|\mathcal{Z}_{j}\right\|_{*}+\frac{\mu_{2}}{2}\sum_{j}\left\|\mathcal{Z}_{j}-H_{j}(\mathbf{X})-\frac{\alpha_{2j}}{\mu_{2}}\right\|_{F}^{2}. \quad (12)$$

Subproblem (P3) is also solved in three steps. First, a low-rank tensor approximation is performed, similar to the first step of subproblem (P2). Second, the image signals are extracted from the Hankel matrices, i.e., from each horizontal slice of $\mathcal{Z}_{j}$ along the first dimension. Finally, an estimated image is generated after repeating the above two steps for all parametric tensors.

Update on $\alpha_{1}$ and $\alpha_{2}$:

$\alpha_{1i}^{n+1}=\alpha_{1i}^{n}+\mu_{1}[P_{i}^{n+1}(\mathbf{X}^{n+1})-\mathcal{T}_{i}^{n+1}]$, (13)

$\alpha_{2j}^{n+1}=\alpha_{2j}^{n}+\mu_{2}[H_{j}(\mathbf{X}^{n+1})-\mathcal{Z}_{j}^{n+1}]$. (14)

An implementation of the SMART method is shown in Algorithm 2 of the supplementary information.

2.3.4 Space Complexity Analysis

The proposed SMART method consists of four components: solving the three subproblems (P1), (P2), and (P3), and updating $\alpha_{1}$ and $\alpha_{2}$. We analyze these components using $\mathcal{O}$ notation. Assume that the number of spatial tensors is $N_{\mathcal{T}}$ and the number of parametric tensors is $N_{\mathcal{Z}}$. Solving subproblem (P1) is dominated by the CG algorithm, with a cost of $\mathcal{O}((N_{\textit{voxel}}\times N_{\textit{TSL}})^{4/3})$[47]. Solving subproblem (P2) is dominated by the HOSVD, which applies an SVD to each mode unfolding of the spatial tensor and costs $\mathcal{O}(N_{\mathcal{T}}\times[N_{b}(N_{p}^{i}N_{\textit{TSL}})^{2}+N_{p}^{i}(N_{b}N_{\textit{TSL}})^{2}+N_{\textit{TSL}}(N_{b}N_{p}^{i})^{2}])$. Similarly, solving subproblem (P3) costs $\mathcal{O}(N_{\mathcal{Z}}\times[N_{\textit{tissue}}^{j}((N_{\textit{TSL}}-k+1)k)^{2}+(N_{\textit{TSL}}-k+1)(N_{\textit{tissue}}^{j}k)^{2}+k(N_{\textit{tissue}}^{j}(N_{\textit{TSL}}-k+1))^{2}])$. Updating $\alpha_{1}$ and $\alpha_{2}$ costs $\mathcal{O}(N_{\textit{voxel}}\times N_{\textit{TSL}})$. Therefore, the asymptotic space complexity of the SMART method is bounded by the largest of these component costs.

2.4 Parameter Selection

The performance of the SMART method is affected by the parameters of the spatial and parametric tensor construction. These include the patch size $b$, the maximum number of similar patches $N_{p,\max}$, and the normalized $l_{2}$-norm distance threshold $\lambda_{m}$ in the spatial tensor construction, as well as the number of groups $N_{g}$ in the parametric tensor construction. These parameters should be carefully tuned to obtain the best reconstruction performance.

2.5 $\text{T}_{1\rho}$ Map Estimation

$\text{T}_{1\rho}$ maps were obtained by fitting the reconstructed $\text{T}_{1\rho}$-weighted images at different TSLs pixel-by-pixel:

$M_{k}=M_{0}\exp\left(-t_{k}/T_{1\rho}\right),\quad k=1,2,\ldots,N_{\textit{TSL}}$, (15)

where $M_{0}$ denotes the baseline image intensity without a spin-lock pulse, and $M_{k}$ is the signal intensity of the $k$th TSL image. The $\text{T}_{1\rho}$ map was estimated from the reconstructed $\text{T}_{1\rho}$-weighted images using nonlinear least-squares fitting with the Levenberg–Marquardt algorithm[48].
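A minimal sketch of this pixelwise fit of (15), using `scipy.optimize.curve_fit` with `method="lm"` (the Levenberg–Marquardt solver), is given below; the initial guess is our assumption, not a detail from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, m0, t1rho):
    """Eq. (15): M_k = M0 * exp(-t_k / T1rho)."""
    return m0 * np.exp(-t / t1rho)

def fit_t1rho(tsl, signal):
    """Levenberg-Marquardt fit of the mono-exponential model for one pixel.

    tsl    : spin-lock times in ms
    signal : magnitude signal at each TSL
    Returns (M0, T1rho).
    """
    p0 = (signal[0], 40.0)  # assumed init: first-TSL intensity, mid-range T1rho
    popt, _ = curve_fit(mono_exp, tsl, signal, p0=p0, method="lm")
    return popt
```

Looping this fit over all voxels (or vectorizing it) yields the $\text{T}_{1\rho}$ map used for both the tissue clustering and the final quantitative evaluation.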

3 Experiments

3.1 Evaluation of the Tissue Clustering

Numerical phantom, real phantom, and in vivo experiments were performed to evaluate the performance of the tissue clustering method. All MR scans were performed on 3T scanners. The details of data acquisition are given in the supplementary information and Table S1.

3.1.1 Numerical phantom dataset

The numerical phantom consisted of five vials (2809 pixels each) with different $\text{T}_{1\rho}$ values representing five brain regions: putamen, frontal white matter, genu of the corpus callosum, head of the caudate nucleus, and centrum semiovale. The phantom was simulated using a bi-exponential model[49, 50] with the following formula to mimic multi-component tissue:

$M_{k}=M_{0}\left((1-\alpha)\exp(-t_{k}/T_{1\rho,\textit{short}})+\alpha\exp(-t_{k}/T_{1\rho,\textit{long}})\right),\quad k=1,2,\ldots,N_{\textit{TSL}}$, (16)

where $T_{1\rho,\textit{long}}$ and $T_{1\rho,\textit{short}}$ are the long and short $\text{T}_{1\rho}$ relaxation times, and $\alpha$ $(0\leq\alpha\leq 1)$ and $1-\alpha$ are the fractions of the two components, respectively. For the five vials, the $T_{1\rho,\textit{long}}$ values were set to 77, 78, 79, 82, and 89 ms; the $T_{1\rho,\textit{short}}$ values were set to 18, 19, 20, 21, and 22 ms; and $\alpha$ was set to 0.6, following a previous study[51]. The datasets were simulated with a matrix size of $192\times 192$ and TSLs of 1, 20, 40, 60, and 80 ms.
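The bi-exponential simulation of (16) with the listed vial parameters can be sketched as follows (illustrative only; a unit proton density $M_0=1$ is our assumption):

```python
import numpy as np

def biexp_signal(tsl, m0, t_short, t_long, alpha):
    """Eq. (16): two-component relaxation with fractions (1 - alpha, alpha)."""
    return m0 * ((1 - alpha) * np.exp(-tsl / t_short)
                 + alpha * np.exp(-tsl / t_long))

def make_phantom(tsl):
    """Five-vial numerical phantom using the (T1rho_short, T1rho_long) pairs
    listed in the text, with alpha = 0.6; returns shape (5, len(tsl))."""
    t_long = [77.0, 78.0, 79.0, 82.0, 89.0]
    t_short = [18.0, 19.0, 20.0, 21.0, 22.0]
    return np.array([biexp_signal(tsl, 1.0, ts, tl, 0.6)
                     for ts, tl in zip(t_short, t_long)])
```

Each row gives one vial's noiseless signal at the five TSLs; complex Gaussian noise can then be added at the desired SNR, as described in Section 3.1.3.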

3.1.2 Real phantom dataset

The phantom contained nine $\text{NiCl}_{2}$-doped agarose vials with $\text{T}_{1}$ and $\text{T}_{2}$ values in the in vivo range ($\text{T}_{1}$: 250–1872 ms; $\text{T}_{2}$: 42–231 ms)[52]. $\text{T}_{1\rho}$-weighted images with TSLs of 1, 20, 40, 60, and 80 ms were acquired with ten averages, yielding a high signal-to-noise ratio (SNR). The exponential model in (15) was applied to obtain the $\text{T}_{1\rho}$ map. Supplementary Fig. S1(a–c) shows a $\text{T}_{1\rho}$-weighted image of the phantom at TSL = 40 ms, the $\text{T}_{1\rho}$ map, and the estimated $\text{T}_{1\rho}$ values of the nine vials, respectively.

3.1.3 In vivo dataset

One 2D brain dataset was collected from a volunteer with the same TSLs as in the phantom experiment. The number of averages was set to one, considering the long scan time.

Complex Gaussian noise was added to the numerical phantom, real phantom, and in vivo datasets at SNRs from 25 to 60 in increments of five. The SNR was computed as the ratio of the mean value of the $\text{T}_{1\rho}$-weighted images to the standard deviation of the noise[15]. Two rank values were calculated for the images at each SNR. One was the mean rank of the Hankel matrices of all pixels in the region of interest (ROI). The other was the rank of a block Hankel matrix for a specific tissue. Let $\mathbf{G}_{i}$ denote the Hankel matrix constructed according to (5) from the pixel at location $i$ in the clustered tissue (Fig. 1). The block Hankel matrix is defined as $\mathbf{G}_{\text{Block}}=\left[\mathbf{G}_{1},\mathbf{G}_{2},\ldots,\mathbf{G}_{N_{v}}\right]$, where $N_{v}$ denotes the total number of pixels in the clustered tissue.

The rank value was calculated using the SVD with a fixed singular-value threshold, computed as the product of an empirical ratio and the largest singular value. The ratio was set to 0.03 for both groups. A Monte Carlo simulation with 1000 repetitions was performed for each SNR to reduce the noise bias in the calculated rank values.
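The thresholded-SVD rank calculation can be sketched as follows (our illustrative implementation of the 0.03-ratio rule described above):

```python
import numpy as np

def effective_rank(M, ratio=0.03):
    """Count singular values above `ratio` times the largest singular value."""
    s = np.linalg.svd(M, compute_uv=False)   # descending singular values
    return int(np.sum(s > ratio * s[0]))
```

Applied to either a single-pixel Hankel matrix $\mathbf{G}_{i}$ or the block Hankel matrix $\mathbf{G}_{\text{Block}}$, this yields the rank values averaged over the Monte Carlo repetitions.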

The k-space data were undersampled with acceleration factors (R) of 4 for the numerical and real phantom datasets and 6 for the in vivo dataset to analyze the effect of residual aliasing. Initial reconstructions of the undersampled data were obtained using the Sparse MRI method[11] and used to calculate the rank values.

3.2 In Vivo Reconstruction Experiments

The proposed method was evaluated on brain T1ρ mapping datasets collected from six volunteers on 3T scanners [20, 53]. The local institutional review board approved the experiments, and informed consent was obtained from each volunteer. The details of data acquisition are given in the supplementary information and Table S1.

For the 2D datasets, the fully sampled k-space data were retrospectively (Retro) undersampled along the ky dimension using a pseudo-random undersampling pattern [54] with acceleration factors (R) of 4 and 6. For the 3D datasets, the fully sampled k-space data were retrospectively undersampled using Poisson-disk random patterns [55] with R = 6.76 and 9.04. The sampling masks differed between TSLs. For the prospective (Pro) study, two 2D datasets were acquired from one volunteer (27 years old) with R = 4.48 and 5.76, respectively. Fully sampled data were also acquired as a reference in the prospective experiment.

SMART and four state-of-the-art methods were used to reconstruct the undersampled k-space data: the high-dimensionality undersampled patch-based reconstruction method (PROST) [45], which uses a spatial patch-based low-rank tensor; the low-rank plus sparse (L + S) method [36], which enforces spatial global low-rankness and sparsity of the image matrix; the locally low-rank (LLR) method [13], which uses a spatial patch-based low-rank matrix; and the model-driven low-rank and sparsity priors method (MORASA) [29], which enforces global low-rankness of the matrix in both the spatial and temporal directions together with sparsity of the image matrix. In addition, a modified PROST reconstruction (PROST + HM) was compared with SMART to verify the effectiveness of the parametric group-based low-rank tensor in image reconstruction; it jointly uses the spatial patch-based low-rank tensor and a parametric low-rank matrix, replacing the patch-based parameter tensor in SMART with a Hankel matrix. Experiments with high acceleration factors of 10.2 and 11.7 in the 2D scenario and 11.26 and 13.21 in the 3D scenario were also performed to verify the effectiveness of the proposed method.

Figure 2: Mean rank of all the Hankel matrices and the rank of the block Hankel matrix for each pixel in selected ROIs. (a) The T1ρ-weighted images of the numerical phantom at TSL = 80 ms, with SNRs of 25 (low), 40 (medium), and 60 (high). The vial marked with a yellow circle shows the selected ROI for the rank calculation of the numerical phantom. (b) Rank curves as a function of SNR for the fully sampled data and the accelerated data, respectively (from left to right: numerical phantom data, real phantom data, in vivo tissue 1, in vivo tissue 2).

T1ρ quantification was assessed using the normalized root mean square error (nRMSE) [49]. The quality of the reconstructed T1ρ-weighted images was assessed based on the nRMSE, high-frequency error norm (HFEN) [56, 57], structural similarity index measure (SSIM) [58], and peak signal-to-noise ratio (PSNR).
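For reference, minimal sketches of two of these metrics are given below; the exact normalizations (e.g., the peak used in PSNR) may differ from those in the cited references, and SSIM/HFEN are omitted:

```python
import numpy as np

def nrmse(recon, ref):
    """Normalized root mean square error relative to the reference's l2 norm."""
    return np.linalg.norm(recon - ref) / np.linalg.norm(ref)

def psnr(recon, ref):
    """Peak signal-to-noise ratio in dB, using the reference's peak magnitude."""
    mse = np.mean(np.abs(recon - ref) ** 2)
    return 10.0 * np.log10(np.max(np.abs(ref)) ** 2 / mse)
```

Both operate on complex-valued image arrays as-is, since the norms and magnitudes are well defined for complex inputs.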

3.3 Parameter Selection

In this subsection, several values of the parameters governing the spatial and parametric tensor construction described in the previous section were tested in 2D image reconstruction. The nRMSE values of the SMART reconstructions at R = 4 and R = 6 with different parameter settings were compared.

The representative zero-filling reconstruction of the Retro 2D dataset with R = 6, the intermediate reconstruction at the 12th iteration, the parameter histogram, the estimated T1ρ map, the corresponding tissue groups, and the ranks of the tissue groups were also compared to demonstrate the necessity of updating the tissue maps during the iterations.

All reconstructions, T1ρ estimation, and analyses were performed in MATLAB 2017b (MathWorks, Natick, MA, USA) on an HP workstation with 500 GB DDR4 RAM and 32 cores (two 16-core Intel Xeon E5-2660 2.6 GHz processors). The reconstruction parameters used in all methods are presented in supplementary information Table S2. The tensor transforms, including the folding and unfolding operators and the tensor multiplication, were implemented using the MATLAB tensor toolbox by Brett et al. from Sandia National Laboratories (http://www.tensortoolbox.org/). The HOSVD using Algorithm 1 of the supplementary information was implemented based on the transforms from this toolbox.

Figure 3: Reconstructed T1ρ-weighted images from one 2D brain dataset at TSL = 1 ms with acceleration factors (R) = 4 and 6 using the SMART, PROST+HM, PROST, L+S, LLR, and MORASA methods. The corresponding error images for the reference image reconstructed from the fully sampled data are also shown. The error images are amplified by ten for visualization. The nRMSEs are shown at the bottom of the error images.

3.4 Ablation Experiment

We conducted an ablation study to analyze the spatial and parametric tensors’ effects on the reconstruction. Two models are constructed as follows:
model 1:

\underset{\mathbf{X}}{\arg\min}\frac{1}{2}\|E\mathbf{X}-\mathbf{Y}\|_{F}^{2}+\lambda_{1}\sum_{i}\left\|\mathcal{T}_{i}\right\|_{*}\text{ s.t. }\mathcal{T}_{i}=P_{i}(\mathbf{X})   (17)

model 2:

\underset{\mathbf{X}}{\arg\min}\frac{1}{2}\|E\mathbf{X}-\mathbf{Y}\|_{F}^{2}+\lambda_{2}\sum_{j}\left\|\mathcal{Z}_{j}\right\|_{*}\text{ s.t. }\mathcal{Z}_{j}=H_{j}(\mathbf{X})   (18)

Model 1 includes only the spatial tensor, and model 2 includes only the parametric tensor. Model 1 is identical to the PROST method. Similarly, ADMM was applied to solve the optimization problem in model 2. These two models were used to reconstruct images from the retrospectively undersampled in vivo dataset with R = 4 and 6.
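The nuclear-norm subproblems that ADMM yields for both models reduce (in the matrix case) to soft-thresholding of singular values. The following is a minimal sketch of that proximal operator (singular value thresholding), not the authors' full ADMM solver, and it treats each unfolded matrix rather than the tensor nuclear norm directly:

```python
import numpy as np

def svt(M, tau):
    """Proximal operator of tau * nuclear norm: shrink each singular value by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)
```

Within each ADMM iteration, `svt` would be applied to the current estimate of 𝒯_i (model 1) or 𝒵_j (model 2), with `tau` set by the regularization weight and the ADMM penalty parameter.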

Figure 4: Reconstructed T1ρ maps estimated from one 2D brain dataset with acceleration factors (R) = 4 and 6 using the SMART, PROST+HM, PROST, L+S, LLR, and MORASA methods. The corresponding error images for the reference estimated from fully sampled data are also shown. The nRMSEs are shown at the bottom of the error images.

4 RESULTS

4.1 Tissue Clustering Evaluation Experiment

T1ρ-weighted images of the numerical phantom at TSL = 80 ms with different SNRs are shown in Fig. 2(a), representing low, medium, and high SNRs. Fig. 2(b) shows the rank curves of the numerical phantom, real phantom, and in vivo datasets. The first row of Fig. 2(b) corresponds to the fully sampled datasets, and the second row to the reconstructions of the undersampled data.

4.1.1 Numerical phantom dataset

The first column of Fig. 2(b) shows the rank curves of the numerical phantom. Since the numerical phantom was simulated using the bi-exponential relaxation model, the rank of 𝐆_Block and the mean rank of the Hankel matrices should both be 2. For the fully sampled dataset, the rank of 𝐆_Block was maintained at 2 for all SNRs, while the mean rank of the Hankel matrices was greater than 2.5. For the accelerated dataset, the rank of 𝐆_Block was still maintained at 2, while the mean rank of the Hankel matrices increased to nearly 3.

4.1.2 Real phantom dataset

The second column of Fig. 2(b) shows the rank curves of the real phantom dataset. Since each vial of the phantom was considered to be composed of a single component, the ranks of 𝐆_Block and the Hankel matrices should be 1. For the fully sampled dataset, the rank of 𝐆_Block was maintained at 1 for all SNRs, while the mean rank of the Hankel matrices was greater than 1. For the accelerated dataset, the rank of 𝐆_Block increased owing to residual artifacts, especially at low SNR (SNR = 25). The mean rank of the Hankel matrices for the accelerated dataset even exceeded 2.

4.1.3 In vivo dataset

Two different tissues (denoted tissue 1 and tissue 2) were selected to evaluate the performance of the tissue clustering method. According to the rank of 𝐆_Block, tissue 1 was selected from tissue groups considered to be composed of a single component, and tissue 2 from groups considered to be composed of multiple components. The third and fourth columns of Fig. 2(b) show the rank curves of these two tissues, respectively. For tissue 1, the rank of 𝐆_Block was maintained at 1, while the mean rank of the Hankel matrices was higher than 2 for both the fully sampled and accelerated datasets. For tissue 2, the rank of 𝐆_Block was maintained at 2 for SNR ≥ 30 and increased at the lowest SNR of 25. The mean rank of the Hankel matrices was higher than 2 for both the fully sampled and accelerated datasets.

The above results imply that the tissue clustering method is robust to noise and residual artifacts, and can improve the low-rank properties of the Hankel structure matrix.

4.2 Retrospective 2D Reconstruction

4.2.1 Low acceleration with 4-Fold and 6-Fold

Table 1: COMPARISONS OF DIFFERENT METHODS FOR THE RETROSPECTIVE 2D RECONSTRUCTIONS WITH DIFFERENT ACCELERATION FACTORS (R). THE BEST RESULTS ARE IN BOLD.
R    Metric  SMART    PROST+HM  PROST    L+S      LLR      MORASA
4×   HFEN    0.1858   0.2232    0.2359   0.2453   0.2645   0.2017
     SSIM    0.9805   0.9762    0.9761   0.9737   0.9731   0.9780
     PSNR    43.2262  41.5372   41.1570  40.4791  39.8966  42.6131
6×   HFEN    0.2224   0.2671    0.2727   0.2897   0.3170   0.2553
     SSIM    0.9761   0.9701    0.9704   0.9670   0.9641   0.9715
     PSNR    42.0636  40.0560   40.1768  39.6337  39.0524  41.0307

The T1ρ-weighted images (at TSL = 1 ms) from one volunteer reconstructed using the SMART, PROST+HM, PROST, L+S, LLR, and MORASA methods are shown in Fig. 3. Difference images between the reconstructed and reference images are displayed under the reconstructions, with the nRMSE values below them. Table 1 lists the average HFEN, SSIM, and PSNR values over all reconstructed T1ρ-weighted images at the different TSLs. The T1ρ maps estimated from the reconstructions, together with the corresponding difference images, are shown in Fig. 4.

Supplementary information Fig. S2 shows the reconstructed T1ρ-weighted images (at TSL = 1 ms) and magnified views of the ROI from another volunteer using the SMART, PROST + HM, PROST, L + S, LLR, and MORASA methods at R = 4 and 6, respectively. The SMART method preserves the image resolution and fine details better than the PROST + HM, PROST, and MORASA methods. The L + S and LLR methods show an image enhancement effect in the reconstructed images, particularly in the sulcus area indicated by the yellow arrow. Compared with the methods exploiting only spatial low-rankness (PROST, L + S, and LLR), the reconstruction can be improved by additionally using parametric low-rankness, based on the high correlations in signal evolution, in the reconstruction model.

4.2.2 Higher acceleration with 10.2-Fold and 11.7-Fold

The reconstructed T1ρ-weighted images and the corresponding error images with R = 10.2 and 11.7 are shown in Fig. 5. The nRMSEs are shown at the bottom of each error image. The T1ρ-weighted images reconstructed using the SMART method show no notable artifacts, even at an acceleration factor of up to 11.7; the related errors remain noise-like, and the nRMSEs are less than 3%. Noticeable aliasing artifacts can be observed in the images or error maps of the PROST+HM, PROST, L+S, LLR, and MORASA methods at different TSLs. The corresponding T1ρ maps are shown in supplementary information Fig. S3. Table 2 lists the average HFEN, SSIM, and PSNR values for all reconstructed T1ρ-weighted images. The proposed SMART method quantitatively achieves the best performance among the six methods, especially at R = 11.7.

Table 2: COMPARISONS OF DIFFERENT METHODS FOR THE RETROSPECTIVE 2D RECONSTRUCTIONS WITH DIFFERENT ACCELERATION FACTORS (R). THE BEST RESULTS ARE IN BOLD.
R      Metric  SMART    PROST+HM  PROST    L+S      LLR      MORASA
10.2×  HFEN    0.2558   0.2953    0.3360   0.4057   0.4369   0.3459
       SSIM    0.9728   0.9688    0.9637   0.9582   0.9542   0.9674
       PSNR    40.3712  39.4523   38.5946  36.8233  35.3039  37.8079
11.7×  HFEN    0.2817   0.3460    0.4134   0.4527   0.5148   0.3762
       SSIM    0.9701   0.9640    0.9578   0.9533   0.9472   0.9634
       PSNR    39.6822  38.1819   36.8288  35.9089  33.6896  37.2335
Figure 5: Reconstructed T1ρ-weighted images from one 2D brain dataset at TSL = 1 ms with acceleration factors (R) = 10.2 and 11.7 using the SMART, PROST + HM, PROST, L + S, LLR, and MORASA methods. The corresponding error images for the reference image reconstructed from the fully sampled data are also shown. The error images are amplified by ten for visualization. The nRMSEs are shown at the bottom of the error images.
Figure 6: Reconstructed T1ρ-weighted images and magnified views from a prospective 2D brain dataset at TSL = 40 ms with R = 4.48 and 5.76 using the SMART, PROST + HM, PROST, L + S, LLR, and MORASA methods. Aliasing artifacts can be seen in the magnified views of the L + S and LLR reconstructions; the yellow arrow indicates a blood vessel in the reference image.
Figure 7: Reconstructed T1ρ-weighted images from one slice of the 3D brain dataset at TSL = 1 ms with acceleration factors (R) = 6.76 and 9.04 using the SMART, PROST + HM, PROST, L + S, LLR, and MORASA methods. The corresponding error images for the reference estimated from fully sampled data are also shown. The error images are amplified by ten for visualization. The nRMSEs are shown at the bottom of the error images.

4.3 Prospective 2D Reconstruction

Fig. 6 shows the prospectively reconstructed T1ρ-weighted images (at TSL = 40 ms) from another volunteer, together with magnified views, using the SMART, PROST + HM, PROST, L + S, LLR, and MORASA methods. Visible artifacts can be observed in the magnified views of the L + S and LLR reconstructions at all acceleration factors. The images reconstructed using the PROST + HM and PROST methods are blurred compared with those of the SMART method. Some image details (namely, the blood vessel marked with a yellow arrow) are barely visible in the PROST + HM and MORASA reconstructions and disappear in the PROST reconstruction at R = 5.76. In contrast, these details are well preserved by the SMART method. The T1ρ maps reconstructed using the above methods are shown in supplementary information Fig. S4, and similar conclusions can be drawn from them.

4.4 3D Reconstruction

4.4.1 Low acceleration with 6.76-Fold and 9.04-Fold

The T1ρ-weighted images (at TSL = 1 ms) from one slice of the 3D volumes reconstructed using the SMART, PROST + HM, PROST, L + S, LLR, and MORASA methods at R = 6.76 and 9.04 are shown in Fig. 7. The corresponding T1ρ maps are shown in supplementary information Fig. S5.

4.4.2 Higher acceleration with 11.26-Fold and 13.21-Fold

Fig. 8 shows the reconstructed T1ρ-weighted images and the related error maps at TSL = 1 ms with R = 11.26 and 13.21 using the six methods in 3D imaging. The T1ρ maps estimated from the reconstructions are shown in supplementary information Fig. S6. At high acceleration, the low-rank tensor-based reconstruction methods (SMART, PROST, and PROST+HM) retain details better than the low-rank matrix-based methods (LLR and MORASA). The SMART method still achieves lower nRMSEs than the other methods at these higher accelerations.

4.5 Parameter Selection

Fig. 9 shows the effects of four parameters of the spatial and parametric tensor construction in 2D imaging. As shown in Fig. 9(a), the smallest nRMSE is achieved at b = 9. When the patch size decreases from 9 to 3, the nRMSE increases rapidly owing to the limited spatial modeling ability of small image patches. When the patch size exceeds 9, the reconstruction performance slowly degrades because the discrepancy among the patches increases, reducing the low-rankness of the constructed spatial tensor. The nRMSE decreases as N_p,max increases, as shown in Fig. 9(c), but the trend flattens for N_p,max ≥ 30; considering the computational complexity, we chose N_p,max = 30 in this study. Fig. 9(d) shows that λ_m needs to be larger than 0.15; otherwise, the nRMSE increases significantly. As shown in Fig. 9(b), changing N_g has little effect on the nRMSE at R = 4, whereas a large N_g improves performance at the higher acceleration factor of 6. Therefore, N_g = 60 was used in this study.

Supplementary information Fig. S7 shows the representative zero-filling reconstruction with R = 6, the intermediate reconstruction at the 12th iteration, the parameter histogram, the estimated T1ρ map, the corresponding tissue groups, and the ranks of the tissue groups. The initial T1ρ-weighted image from the zero-filling reconstruction shows strong aliasing artifacts, and the T1ρ map estimated from the zero-filling images is quite blurred. The quality of both was markedly improved by the 12th iteration. The distribution of the parameter histogram also changed over the iterations (Fig. S7(b)). In the tissue-group figure, the initial pixel intensities along the pixel-number direction are less uniform than those at the 12th iteration, and the ranks of several groups are reduced from 2 to 1.

4.6 Ablation Experiment

Fig. 10 shows the images reconstructed by models 1 and 2 with R = 4 and 6. For model 1, the aliasing artifacts due to undersampling are well removed by the spatial tensor, but the reconstructed images are blurry. In contrast, the images reconstructed by model 2 are sharper, but aliasing artifacts remain. These results support our hypothesis that the parametric group-based tensor helps improve image sharpness and preserve more image details in the reconstructions.

5 DISCUSSION

This study proposed the SMART method, which uses simultaneous spatial patch-based and parametric group-based low-rank tensors for accelerated MR T1ρ mapping. We demonstrated that the proposed method can recover high-quality images from highly undersampled data in both 2D and 3D imaging. The high local and nonlocal redundancies and similarities between the contrast images were exploited by the spatial patch-based low-rank tensor through the HOSVD. Compared with transforms that use fixed bases, such as the discrete cosine and wavelet transforms, the HOSVD bases U^(n) are learned from the multidimensional data and are thus more adaptive to the structures in the data. This adaptivity makes the HOSVD an excellent sparsifying transform for image reconstruction [43]. Similar overlapping patches are used to form the spatial low-rank tensors, as in the PROST method. The optimization subproblem for these tensors is addressed in a high-order denoising step, and the denoised image is obtained by averaging the different estimates, which may introduce blurring into the reconstructions. In the proposed SMART method, in addition to the spatial tensor, a parametric group-based low-rank tensor is used jointly in the reconstruction model, exploiting low-rankness in both the spatial and temporal directions. The parametric tensor is formed from non-overlapping groups and exhibits stronger low-rankness owing to the high correlations among tissues with similar signal evolution, which helps improve image sharpness and preserve more image details. Therefore, the SMART method can outperform existing spatial patch-based methods.
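A minimal sketch of how such data-adaptive bases arise: each factor U^(n) comes from the SVD of the corresponding mode unfolding of the tensor. This mirrors, but does not reproduce, Algorithm 1 of the supplementary information and the tensor-toolbox folding/unfolding operators; function names here are illustrative:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: mode-n fibers become the columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mul(T, U, mode):
    """Mode-n product: multiply every mode-n fiber of T by the matrix U."""
    return np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Truncated HOSVD: learn U^(n) from each unfolding's SVD, then project
    the tensor onto those bases to obtain the (smaller) core tensor."""
    Us = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
          for n, r in enumerate(ranks)]
    core = T
    for n, U in enumerate(Us):
        core = mode_mul(core, U.T, n)
    return core, Us
```

Denoising then amounts to truncating or thresholding the core before multiplying the factors back; with full ranks the decomposition is exact.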

Meanwhile, the SMART method exhibited high robustness in the prospective experiments and can be extended to other quantitative imaging methods, such as T2 mapping and T1 mapping.

Figure 8: Reconstructed T1ρ-weighted images from one slice of the 3D brain dataset at TSL = 1 ms with acceleration factors (R) = 11.26 and 13.21 using the SMART, PROST + HM, PROST, L + S, LLR, and MORASA methods. The corresponding error images for the reference estimated from fully sampled data are also shown. The error images are amplified by ten for visualization. The nRMSEs are shown at the bottom of the error images.
Figure 9: nRMSE curves for the reconstructions of SMART using different parameters related to the spatial and parametric tensors.

5.1 Parameters Selection for the SMART Method

In this study, all the reconstruction parameters included in the ADMM and CG algorithms were selected empirically. The relative change in the solution between two consecutive iterations was used to show the convergence numerically and is defined as follows:

\left\|\mathbf{X}_{iter}-\mathbf{X}_{iter-1}\right\|_{F}/\left\|\mathbf{X}_{iter-1}\right\|_{F}   (19)

where 𝐗_iter denotes the reconstructed image at the iter-th ADMM iteration. Fig. 11 shows the relative changes for 2D imaging with R = 4, 6, 10.2, and 11.7. The relative change stabilizes within a few iterations and shows no significant alteration once the iteration number reaches 15. Additionally, we varied the CG iteration number from 7 to 20 to test its effect on the reconstruction. The nRMSE variations of the reconstructed images with different CG iteration numbers were less than 1%, indicating that the CG iteration number has little effect on the reconstruction after several iterations.
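The criterion of (19) can be sketched as a stopping rule as follows; the fixed-point map `step` is a hypothetical stand-in for one ADMM iteration, not the actual SMART update:

```python
import numpy as np

def relative_change(x_cur, x_prev):
    """Relative change between consecutive iterates, as in (19)."""
    return np.linalg.norm(x_cur - x_prev) / np.linalg.norm(x_prev)

def iterate(step, x0, max_iter=15, tol=1e-3):
    """Run until the relative change stabilizes or the iteration cap is hit."""
    x = x0
    for it in range(1, max_iter + 1):
        x_new = step(x)
        if relative_change(x_new, x) < tol:
            return x_new, it
        x = x_new
    return x, max_iter
```

In practice such a cap (here 15 iterations, matching the behavior reported above) guards against slow tail convergence when the tolerance is never reached.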

Figure 10: The reconstructed images at TSL = 1 ms and 40 ms for the reconstruction models with only the spatial tensor and with only the parametric tensor at R = 4 and 6, respectively. Each error image is placed to the right of the corresponding reconstruction along with its nRMSE. The error images are amplified by ten for visualization.
Figure 11: Relative change in the SMART reconstruction between two consecutive ADMM iterations for one of the 2D brain datasets with R = 4, 6, 10.2, and 11.7.

We used several tricks to speed up the reconstruction in our experiments. For the patch extraction step, a patch offset of three was used to reduce the number of searched patches, and the step was implemented on a graphics processing unit (GPU). Its runtime was reduced from 9.5 s to 0.14 s for 2D patch extraction and from 600 min to 171 s for 3D patch extraction, with memory footprints of 411 MB and 821 MB, respectively. The T1ρ estimation for the histogram analysis of the tissue clustering was performed every three iterations to speed up the reconstruction. Additionally, parallel computing was applied in the HOSVD denoising step for each spatial tensor 𝒯 and parametric tensor 𝒵. The reconstruction times of the six methods are listed in supplementary information Table S3.

5.2 2D Patch vs. 3D Patch in the 3D Imaging Application

For 3D imaging, the images can be reconstructed slice-by-slice using the SMART method with 2D patch extraction. Compared with 2D patch extraction, 3D patch extraction can exploit the correlation between slices to improve the reconstruction; therefore, 3D patch extraction was adopted in the SMART method in this study. Fig. 12 shows one slice of the images reconstructed at R = 9.04 using 2D and 3D patch extraction, respectively, together with the reference images from the fully sampled data and magnified views. As shown in the magnified views, 3D patch extraction yields better image resolution and detail, particularly in the regions indicated by the red frames.

Figure 12: One slice of the reconstructed 3D images at TSL = 1 ms using 2D and 3D patch extraction, respectively, with R = 9.04.

5.3 Comparison with Previous Studies

Previous studies [59, 60] have utilized convolutional sparse coding (CSC) to reconstruct images from undersampled data. Thanh et al. [59] proposed a filter-based dictionary learning method using 3D CSC to recover the high-frequency information of MRI images. CSC employs a sparse representation of the entire image, computed as the sum of convolutions with dictionary filters, instead of dividing an image into overlapping patches. It thereby avoids the high computational complexity and long runtimes of patch-based approaches, as well as the artifacts caused by patch aggregation. However, the filters must be chosen carefully, and artifacts may appear in some reconstructed images when the filters are inaccurate [60].

Low-rankness has been widely used in fast MR parametric mapping. The annihilating filter-based low-rank Hankel matrix method (ALOHA) [30] exploits transform-domain sparsity and coil sensitivity smoothness, represented as a low-rank Hankel structured matrix in the weighted k-space domain. The prior information used in ALOHA differs from the image-domain redundancy utilized in our study, where the spatial and temporal redundancies are mainly represented by a summation of exponentials and a low-rank Hankel matrix at each voxel. Multi-scale low-rank modeling with different patch sizes has also been used to capture global and local information; it provides a more compact signal representation and may further improve MR reconstruction. Ong et al. [61, 62] applied multi-scale low-rank modeling to face recognition preprocessing and dynamic contrast-enhanced MRI. However, constructing multiple low-rank tensors incurs a large memory footprint and increases the computation time severalfold compared with a single patch size. Another problem is that the multi-scale low-rank decomposition is not translation invariant: shifting the input changes the resulting decomposition. This often creates blocking artifacts near the block boundaries, which can be visually jarring. Introducing overlapping partitions of the patches, so that the overall algorithm becomes translation invariant, can remove these artifacts. Considering the limits on memory and reconstruction time, the proposed SMART method uses a single patch size with overlapping patches.

5.4 Limitations of SMART

In this study, the setup of the proposed method is non-convex, and the patch selection operator is time-varying, so the convergence of the whole algorithm cannot be fully guaranteed. Nevertheless, our experimental results show that the proposed method works well despite the lack of a convergence guarantee. We found that after several iterations, the relative change in the reconstructions between two consecutive iterations and the patch selection vary only slightly; in general, the estimated patch structure remains consistent as the iterations proceed. In addition, the block matching algorithm for patch extraction is heuristic, and the current selection strategy may not yield the best clusters. In future work, patches obtained through a sliding operation over the whole dataset may be used to construct the high-order tensor. The local and nonlocal redundancies and similarities between the contrast images could then be exploited using tensor dictionary learning, jointly applying low-rank constraints on the dictionaries and sparsity constraints on the core coefficient tensors [63]. This may further improve the reconstruction performance.

The long reconstruction time of the iterative method significantly hinders the clinical application of the proposed method. There are two potential solutions. One is parallel computing via GPUs: we have already implemented the patch extraction on a GPU, and although the reconstruction time is still too long for clinical use, we plan to further accelerate the SMART method with parallel computing technology such as multi-GPU and cluster systems. The other is deep learning-based reconstruction. In recent years, the unrolling-based strategy has built a bridge from traditional iterative CS reconstruction to deep neural network design [64], and previous studies have shown that unrolling-based deep networks significantly reduce the reconstruction time and improve the reconstruction quality through learnable regularizations and parameters. However, deep learning-based reconstruction requires a considerable amount of high-quality training data, and network performance may degrade owing to motion artifacts during data acquisition; training data collection is therefore critical and will be the bottleneck of this approach. In our future work, we will explore both strategies to promote the clinical use of the proposed method.

6 CONCLUSIONS

This study explores low-rankness through high-order tensors in MR T1ρ mapping to obtain improved reconstruction results. In particular, we propose a method that simultaneously uses spatial patch-based and parametric group-based low-rank tensors to reconstruct images from highly undersampled data. The spatial patch-based low-rank tensor exploits the high local and nonlocal redundancies and similarities between the contrast images. The parametric group-based low-rank tensor, which integrates the similar exponential behavior of the image signals, is jointly used to enforce multidimensional low-rankness in the reconstruction process. Experimental results in both 2D and 3D imaging scenarios show that the proposed method improves the reconstruction both qualitatively and quantitatively.

References

  • [1] Y.-X. J. Wáng, Q. Zhang, X. Li, W. Chen, A. Ahuja, and J. Yuan, “T1ρ magnetic resonance: basic physics principles and applications in knee and intervertebral disc imaging,” Quant. Imaging Med. Surg., vol. 5, no. 6, p. 858, 2015.
  • [2] L. Wang and R. R. Regatte, “T1ρ MRI of human musculoskeletal system,” J. Magn. Reson. Imag., vol. 41, no. 3, pp. 586–600, 2015.
  • [3] Y. Zhu, Y. Liu, L. Ying, Z. Qiu, Q. Liu, S. Jia, H. Wang, X. Peng, X. Liu, H. Zheng, and D. Liang, “A 4-minute solution for submillimeter whole-brain T1ρ quantification,” Magn. Reson. Med., vol. 85, no. 6, pp. 3299–3307, 2021.
  • [4] S. Ma, N. Wang, Z. Fan, M. Kaisey, N. L. Sicotte, A. G. Christodoulou, and D. Li, “Three-dimensional whole-brain simultaneous T1, T2, and T1ρ quantification using MR multitasking: Method and initial clinical experience in tissue characterization of multiple sclerosis,” Magn. Reson. Med., vol. 85, no. 4, pp. 1938–1952, 2021.
  • [5] T. Allkemper, F. Sagmeister, V. Cicinnati, S. Beckebaum, H. Kooijman, C. Kanthak, C. Stehling, and W. Heindel, “Evaluation of fibrotic liver disease with whole-liver T1ρ MR imaging: a feasibility study at 1.5 T,” Radiology, vol. 271, no. 2, pp. 408–15, 2014.
  • [6] Q. Yang, T. Yu, S. Yun, H. Zhang, X. Chen, Z. Cheng, J. Zhong, J. Huang, T. Okuaki, Q. Chan, B. Liang, and H. Guo, “Comparison of multislice breath-hold and 3D respiratory triggered T1ρ imaging of liver in healthy volunteers and liver cirrhosis patients in 3.0 T MRI,” J. Magn. Reson. Imag., vol. 44, no. 4, pp. 906–13, 2016.
  • [7] Z. Zhou, B. Jiang, Z. Zhou, X. Pan, H. Sun, B. Huang, T. Liang, S. Ringgaard, and X. Zou, “Intervertebral disk degeneration: T1ρ MR imaging of human and animal models,” Radiology, vol. 268, no. 2, pp. 492–500, 2013.
  • [8] H. Qi, Z. Lv, J. Hu, J. Xu, R. Botnar, C. Prieto, and P. Hu, “Accelerated 3D free-breathing high-resolution myocardial T1ρ mapping at 3 Tesla,” Magn. Reson. Med., vol. 88, no. 6, pp. 2520–2531, 2022.
  • [9] P. Pandit, J. Rivoire, K. King, and X. J. Li, “Accelerated T1ρ acquisition for knee cartilage quantification using compressed sensing and data-driven parallel imaging: A feasibility study,” Magn. Reson. Med., vol. 75, no. 3, pp. 1256–1261, 2016.
  • [10] S. Bhave, S. G. Lingala, C. P. Johnson, V. A. Magnotta, and M. Jacob, “Accelerated whole-brain multi-parameter mapping using blind compressed sensing,” Magn. Reson. Med., vol. 75, no. 3, pp. 1175–1186, 2016.
  • [11] M. Lustig, D. Donoho, and J. M. Pauly, “Sparse MRI: The application of compressed sensing for rapid MR imaging,” Magn. Reson. Med., vol. 58, no. 6, pp. 1182–95, 2007.
  • [12] M. V. W. Zibetti, R. Baboli, G. Chang, R. Otazo, and R. R. Regatte, “Rapid compositional mapping of knee cartilage with compressed sensing MRI,” J. Magn. Reson. Imag., vol. 48, no. 5, pp. 1185–1198, 2018.
  • [13] T. Zhang, J. M. Pauly, and I. R. Levesque, “Accelerating parameter mapping with a locally low rank constraint,” Magn. Reson. Med., vol. 73, no. 2, pp. 655–61, 2015.
  • [14] B. Zhao, W. M. Lu, T. K. Hitchens, F. Lam, C. Ho, and Z. P. Liang, “Accelerated MR parameter mapping with low-rank and sparsity constraints,” Magn. Reson. Med., vol. 74, no. 2, pp. 489–498, 2015.
  • [15] Y. Zhu, Y. Liu, L. Ying, X. Peng, Y. J. Wang, J. Yuan, X. Liu, and D. Liang, “SCOPE: signal compensation for low-rank plus sparse matrix decomposition for fast parameter mapping,” Phys. Med. Biol., vol. 63, no. 18, p. 185009, 2018.
  • [16] M. V. W. Zibetti, A. Sharafi, R. Otazo, and R. R. Regatte, “Accelerating 3D-T1ρ\text{T}_{1\rho} mapping of cartilage using compressed sensing with different sparse and low rank models,” Magn. Reson. Med., vol. 80, no. 4, pp. 1475–1491, 2018.
  • [17] Y. Zhang, X. Mou, G. Wang, and H. Yu, “Tensor-based dictionary learning for spectral CT reconstruction,” IEEE Trans. Med. Imag., vol. 36, no. 1, pp. 142–154, 2017.
  • [18] J. He, Q. Liu, A. Christodoulou, C. Ma, F. Lam, and Z. Liang, “Accelerated high-dimensional MR imaging with sparse sampling using low-rank tensors,” IEEE Trans. Med. Imag., vol. 35, no. 9, pp. 2119–2129, 2016.
  • [19] J. Liu, P. Musialski, P. Wonka, and J. P. Ye, “Tensor completion for estimating missing values in visual data,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 1, pp. 208–220, 2013.
  • [20] Y. Liu, L. Ying, W. Chen, Z. Cui, Q. Zhu, X. Liu, H. Zheng, D. Liang, and Y. Zhu, “Accelerating the 3D T1ρ\text{T}_{1\rho} mapping of cartilage using a signal-compensated robust tensor principal component analysis model,” Quant. Imaging Med. Surg., vol. 11, no. 8, p. 3376, 2021.
  • [21] B. Yaman, S. Weingartner, N. Kargas, N. D. Sidiropoulos, and M. Akcakaya, “Low-rank tensor models for improved multi-dimensional MRI: Application to dynamic cardiac T1\text{T}_{1} mapping,” IEEE Trans. Comput. Imag, vol. 6, pp. 194–207, 2020.
  • [22] A. Christodoulou, J. Shaw, C. Nguyen, Q. Yang, Y. Xie, N. Wang, and D. Li, “Magnetic resonance multitasking for motion-resolved quantitative cardiovascular imaging,” Nature Biomed. Eng., vol. 2, no. 4, pp. 215–226, 2018.
  • [23] J. Shaw, Q. Yang, Z. Zhou, Z. Deng, C. Nguyen, D. Li, and A. G. Christodoulou, “Free-breathing, non-ECG, continuous myocardial T1\text{T}_{1} mapping with cardiovascular magnetic resonance multitasking,” Magn. Reson. Med, vol. 81, no. 4, pp. 2450–2463, 2019.
  • [24] F. Liu, D. X. Li, X. Y. Jin, W. Y. Qiu, Q. Xia, and B. Sun, “Dynamic cardiac MRI reconstruction using motion aligned locally low rank tensor (MALLRT),” Magn. Reson. Imaging, vol. 66, pp. 104–115, 2020.
  • [25] J. D. Trzasko and A. Manduca, “A unified tensor regression framework for calibrationless dynamic, multi-channel MRI reconstruction,” in Proceedings of the 21st International Society for Magnetic Resonance in Medicine (ISMRM), 2013, p. 603.
  • [26] S. Guo, J. A. Fessler, and D. C. Noll, “High-resolution oscillating steady-state fMRI using patch-tensor low-rank reconstruction,” IEEE Trans. Med. Imag., vol. 39, no. 12, pp. 4357–4368, 2020.
  • [27] S. Liu, J. Cao, H. Liu, X. Tan, and X. Zhou, “Group sparsity with orthogonal dictionary and nonconvex regularization for exact MRI reconstruction,” Inform. Sciences, vol. 451, pp. 161–179, 2018.
  • [28] M. Usman, C. Prieto, T. Schaeffter, and P. G. Batchelor, “k-t group sparse: a method for accelerating dynamic MRI,” Magn. Reson. Med, vol. 66, no. 4, pp. 1163–76, 2011.
  • [29] X. Peng, L. Ying, Y. Liu, J. Yuan, X. Liu, and D. Liang, “Accelerated exponential parameterization of T2\text{T}_{2} relaxation with model-driven low rank and sparsity priors (MORASA),” Magn. Reson. Med., vol. 76, no. 6, pp. 1865–1878, 2016.
  • [30] D. Lee, K. H. Jin, E. Y. Kim, S. H. Park, and J. C. Ye, “Acceleration of MR parameter mapping using annihilating filter-based low rank Hankel matrix (ALOHA),” Magn. Reson. Med., vol. 76, no. 6, pp. 1848–1864, 2016.
  • [31] S. Lingala, Y. Hu, E. DiBella, and M. Jacob, “Accelerated dynamic MRI exploiting sparsity and low-rank structure: k-t SLR,” IEEE Trans. Med. Imag., vol. 30, no. 5, pp. 1042–54, 2011.
  • [32] Y. Liu, Z. Yi, Y. Zhao, F. Chen, Y. Feng, H. Guo, A. T. Leong, and E. Wu, “Calibrationless parallel imaging reconstruction for multislice MR data using low-rank tensor completion,” Magn. Reson. Med., vol. 85, no. 2, pp. 897–911, 2021.
  • [33] Z. Yi, Y. Liu, Y. Zhao, L. Xiao, A. T. Leong, Y. Feng, F. Chen, and E. Wu, “Joint calibrationless reconstruction of highly undersampled multicontrast MR datasets using a low-rank Hankel tensor completion framework,” Magn. Reson. Med., vol. 85, no. 6, pp. 3256–3271, 2021.
  • [34] M. Akçakaya, T. Basha, B. Goddu, L. Goepfert, K. Kissinger, V. Tarokh, W. Manning, and R. Nezafat, “Low-dimensional-structure self-learning and thresholding: regularization beyond compressed sensing for MRI reconstruction,” Magn. Reson. Med., vol. 66, no. 3, pp. 756–767, 2011.
  • [35] T. Kolda and B. Bader, “Tensor decompositions and applications,” SIAM Rev., vol. 51, no. 3, pp. 455–500, 2009.
  • [36] R. Otazo, E. Candes, and D. K. Sodickson, “Low-rank plus sparse matrix decomposition for accelerated dynamic MRI with separation of background and dynamic components,” Magn. Reson. Med., vol. 73, no. 3, pp. 1125–36, 2015.
  • [37] K. Pruessmann, M. Weiger, P. Bornert, and P. Boesiger, “Advances in sensitivity encoding with arbitrary k-space trajectories,” Magn. Reson. Med., vol. 46, no. 4, pp. 638–51, 2001.
  • [38] A. Bustin, D. Voilliot, A. Menini, J. Felblinger, C. de Chillou, D. Burschka, L. Bonnemains, and F. Odille, “Isotropic reconstruction of MR images using 3D patch-based self-similarity learning,” IEEE Trans. Med. Imag., vol. 37, no. 8, pp. 1932–1942, 2018.
  • [39] D. Qiu, M. Bai, M. Ng, and X. Zhang, “Nonlocal robust tensor recovery with nonconvex regularization,” Inverse Problems, vol. 37, no. 3, p. 035001, 2021.
  • [40] H. Nguyen, X. Peng, M. Do, and Z. Liang, “Denoising MR spectroscopic imaging data with low-rank approximations,” IEEE Trans. Biomed. Eng., vol. 60, no. 1, pp. 78–89, 2013.
  • [41] Y. Wang, J. Yang, W. Yin, and Y. Zhang, “A new alternating minimization algorithm for total variation image reconstruction,” SIAM J. Imag. Sci, vol. 1, no. 3, pp. 248–272, 2008.
  • [42] M. R. Hestenes and E. Stiefel, “Methods of conjugate gradients for solving linear systems,” Journal of Research of the National Bureau of Standards, vol. 49, no. 6, p. 409, 1952.
  • [43] X. Zhang, J. Peng, M. Xu, W. Yang, Z. Zhang, H. Guo, W. Chen, Q. Feng, E. Wu, and Y. Feng, “Denoise diffusion-weighted images using higher-order singular value decomposition,” Neuroimage, vol. 156, pp. 128–145, 2017.
  • [44] J.-F. Cai, E. J. Candès, and Z. Shen, “A singular value thresholding algorithm for matrix completion,” SIAM J. Optimiz, vol. 20, no. 4, pp. 1956–1982, 2010.
  • [45] A. Bustin, G. Lima da Cruz, O. Jaubert, K. Lopez, R. M. Botnar, and C. Prieto, “High-dimensionality undersampled patch-based reconstruction (HD-PROST) for accelerated multi-contrast MRI,” Magn. Reson. Med., vol. 81, no. 6, pp. 3705–3719, 2019.
  • [46] X. Zhang, Z. Xu, N. Jia, W. Yang, Q. Feng, W. Chen, and Y. Feng, “Denoising of 3D magnetic resonance images by using higher-order singular value decomposition,” Med. Image Anal., vol. 19, no. 1, pp. 75–86, 2015.
  • [47] J. Shewchuk, “An introduction to the conjugate gradient method without the agonizing pain,” Carnegie Mellon University, Tech. Rep., 1994.
  • [48] J. J. Moré, “The Levenberg-Marquardt algorithm: implementation and theory,” Numerical Analysis, pp. 105–116, 1978.
  • [49] Y. Zhu, Y. Liu, L. Ying, X. Liu, H. Zheng, and D. Liang, “Bio-SCOPE: fast biexponential  T1ρ\text{ T}_{1\rho} mapping of the brain using signal-compensated low-rank plus sparse matrix decomposition,” Magn. Reson. Med., vol. 83, no. 6, pp. 2092–2106, 2020.
  • [50] M. V. W. Zibetti, A. Sharafi, R. Otazo, and R. R. Regatte, “Compressed sensing acceleration of biexponential 3D-T1ρ\text{T}_{1\rho} relaxation mapping of knee cartilage,” Magn. Reson. Med., vol. 81, no. 2, pp. 863–880, 2019.
  • [51] A. Sharafi, D. Xia, G. Chang, and R. R. Regatte, “Biexponential T1ρ\text{T}_{1\rho} relaxation mapping of human knee cartilage in vivo at 3 T,” NMR Biomed., vol. 30, no. 10, p. e3760, 2017.
  • [52] G. Captur, P. Gatehouse, K. E. Keenan, F. G. Heslinga, R. Bruehl, M. Prothmann, M. J. Graves, R. J. Eames, C. Torlasco, G. Benedetti et al., “A medical device-grade T1\text{T}_{1} and ECV phantom for global T1\text{T}_{1} mapping quality assurance—the T1\text{T}_{1} mapping and ECV standardization in cardiovascular magnetic resonance (T1MES) program,” J. Cardiovasc. Magn. Reson., vol. 18, no. 1, pp. 1–20, 2016.
  • [53] B. Mitrea, A. Krafft, R. Song, R. Loeffler, and C. Hillenbrand, “Paired self-compensated spin-lock preparation for improved T1ρ\text{T}_{1\rho} quantification,” J. Magn. Reson., vol. 268, pp. 49–57, 2016.
  • [54] A. Rich, M. Gregg, N. Jin, Y. Liu, L. Potter, O. Simonetti, and R. Ahmad, “Cartesian sampling with variable density and adjustable temporal resolution (CAVA),” Magn. Reson. Med, vol. 83, no. 6, pp. 2015–2025, 2020.
  • [55] Y. Zhou, P. Pandit, V. Pedoia, J. Rivoire, Y. Wang, D. Liang, X. Li, and L. Ying, “Accelerating  T1ρ\text{ T}_{1\rho} cartilage imaging using compressed sensing with iterative locally adapted support detection and JSENSE,” Magn. Reson. Med., vol. 75, no. 4, pp. 1617–29, 2016.
  • [56] S. Ravishankar and Y. Bresler, “MR image reconstruction from highly undersampled k-space data by dictionary learning,” IEEE Trans. Med. Imag., vol. 30, no. 5, pp. 1028–41, 2011.
  • [57] J. Cheng, S. Jia, L. Ying, Y. Y. Liu, S. S. Wang, Y. J. Zhu, Y. Li, C. Zou, X. Liu, and D. Liang, “Improved parallel image reconstruction using feature refinement,” Magn. Reson. Med., vol. 80, no. 1, pp. 211–223, 2018.
  • [58] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–12, 2004.
  • [59] T. Nguyen-Duc, T. Quan, and W.-K. Jeong, “Frequency-splitting dynamic MRI reconstruction using multi-scale 3D convolutional sparse coding and automatic parameter selection,” Med. Image Anal., vol. 53, pp. 179–196, 2019.
  • [60] P. Bao, W. Xia, K. Yang, W. Chen, M. Chen, Y. Xi, S. Niu, J. Zhou, H. Zhang, and H. Sun, “Convolutional sparse coding for compressed sensing CT reconstruction,” IEEE Trans. Med. Imag., vol. 38, no. 11, pp. 2607–2619, 2019.
  • [61] F. Ong and M. Lustig, “Beyond low rank + sparse: multiscale low rank matrix decomposition,” IEEE J. Sel. Topics Signal Process., vol. 10, no. 4, pp. 672–687, 2016.
  • [62] F. Ong, X. Zhu, J. Cheng, K. Johnson, P. Larson, S. Vasanawala, and M. Lustig, “Extreme MRI: large-scale volumetric dynamic imaging from continuous non-gated acquisitions,” Magn. Reson. Med., vol. 84, no. 4, pp. 1763–1780, 2020.
  • [63] Q. Chen, H. She, and Y. Du, “Whole brain myelin water mapping in one minute using tensor dictionary learning with low-rank plus sparse regularization,” IEEE Trans. Med. Imag., vol. 40, no. 4, pp. 1253–1266, 2021.
  • [64] D. Liang, J. Cheng, Z. Ke, and L. Ying, “Deep magnetic resonance image reconstruction: Inverse problems meet neural networks,” IEEE Signal Process. Mag., vol. 37, no. 1, pp. 141–151, 2020.
  • [65] I. Chun, Z. Huang, H. Lim, and J. Fessler, “Momentum-net: Fast and convergent iterative neural network for inverse problems,” IEEE Trans. Pattern Anal. Mach. Intell., 2020.
  • [66] W. Dong, P. Wang, W. Yin, G. Shi, F. Wu, and X. Lu, “Denoising prior driven deep neural network for image restoration,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 41, no. 10, pp. 2305–2318, 2018.

Supplementary Information

6.1 Data Acquisition Information

The 2D dataset was collected on a 3T scanner (TIM TRIO, Siemens, Erlangen, Germany) using a 12-channel head coil from three volunteers (age: 24 ± 2 years). The 3D dataset was collected on a 3T scanner (uMR 790, United Imaging Healthcare, Shanghai, China) using a 32-channel head coil from two healthy volunteers (age: 26 ± 1 years). T1ρ\text{T}_{1\rho}-weighted images were acquired using the fast spin echo (FSE) sequence for 2D imaging and the modulated flip angle technique in refocused imaging with an extended echo train (MATRIX) sequence for 3D imaging, both using a spin-lock pulse at the beginning of the sequence to generate the T1ρ\text{T}_{1\rho}-weighting [20, 53]. The main imaging parameters used in the 2D and 3D imaging applications are listed in Supplementary information Table S1.

The 2D phantom dataset was also collected on a 3T scanner (TIM TRIO, Siemens, Erlangen, Germany) using a 12-channel head coil. The imaging sequence was the same as that used in the in vivo data acquisition. To obtain an accurate T1ρ\text{T}_{1\rho} estimation of the phantom, a longer TR of 4000 ms was used. Therefore, the scan time of the phantom data acquisition was longer than that of the in vivo data acquisition.

6.2 Reconstruction Parameters

The reconstruction parameters used in the SMART method are listed in Supplementary information Table S2. In the L + S method, the singular-value thresholding and soft-thresholding algorithms were used to solve for the L and S components, respectively. The ratio for singular-value thresholding was set as 0.02, whereas that for soft-thresholding was set as [0.02, 0.025, 0.025, 0.035, 0.035] to achieve optimal performance. In the LLR method, the iteration numbers were set as 40, 50, and 60 for R = 4 and 6, respectively. The block size was set as 8×88\times 8, and the ratio of the largest singular value used as the singular value decomposition threshold was initially set as 0.03, reduced to 0.01 after 10 iterations, and finally reduced to 0.005 in the last 10 iterations. The parameters for PROST and PROST + HM were the same as those for SMART. In the MORASA method, the rank of the Casorati matrix was 2. The thresholds for the wavelet coefficients and the Hankel matrix were 0.01 and 0.03, respectively.
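The two thresholding operators used in the L + S comparison method can be sketched as follows. This is an illustrative NumPy sketch, not the released implementation: the function names and the toy Casorati matrix are our own, and the thresholds are expressed as a ratio of the largest singular value (or largest magnitude), as described above.

```python
import numpy as np

def svt(M, ratio):
    """Singular-value thresholding: soft-threshold the singular values
    of M at a fraction `ratio` of the largest singular value."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    tau = ratio * s[0]
    s_thr = np.maximum(s - tau, 0.0)
    return (U * s_thr) @ Vh

def soft_threshold(M, ratio):
    """Element-wise soft-thresholding at a fraction of the largest magnitude."""
    tau = ratio * np.max(np.abs(M))
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

# toy Casorati matrix: a rank-1 background plus one sparse spike
rng = np.random.default_rng(0)
L_true = np.outer(rng.standard_normal(64), rng.standard_normal(5))
S_true = np.zeros((64, 5))
S_true[3, 2] = 10.0
M = L_true + S_true

L_hat = svt(M, 0.02)                      # low-rank component update
S_hat = soft_threshold(M - L_hat, 0.02)   # sparse component update
```

In an actual L + S iteration these two operators alternate inside a data-consistency loop; here they are shown once on a noise-free toy matrix.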

6.3 Real Phantom

Supplementary information Fig. S1(a-c) shows the T1ρ\text{T}_{1\rho}-weighted image of the real phantom at TSL = 40 ms, the obtained T1ρ\text{T}_{1\rho} map, and the estimated T1ρ\text{T}_{1\rho} values of the nine vials, respectively.

6.4 Retrospective 2D Reconstruction

6.4.1 Low acceleration

Supplementary information Fig. S2 shows the reconstructed T1ρ\text{T}_{1\rho}-weighted images (at TSL = 1 ms) and the magnified images of the region of interest from another volunteer using the SMART, PROST + HM, PROST, L + S, LLR, and MORASA methods at R = 4 and 6, respectively. The SMART method better preserves the image resolution and finer details than the PROST + HM, PROST, and MORASA methods. The L + S and LLR methods show an image enhancement effect in the reconstructed images, particularly in the sulcus area, as indicated by the red arrows. Compared with the methods exploiting only spatial low-rankness (that is, the PROST, L + S, and LLR methods), the reconstruction can be improved by additionally using parametric low-rankness based on the high correlations in signal evolution in the reconstruction model.

6.4.2 Higher acceleration

Supplementary information Fig. S3 compares the T1ρ\text{T}_{1\rho} maps and the corresponding error maps obtained using the different methods for R = 10.2 and 11.7. SMART generates the T1ρ\text{T}_{1\rho} map closest to the reference, with the lowest nRMSE. The T1ρ\text{T}_{1\rho} maps obtained using L + S and LLR show obvious blurring artifacts, and those obtained using MORASA show aliasing artifacts (indicated by red arrows). The patch-based spatial low-rank tensor can reduce aliasing artifacts and preserve more image details. Compared with PROST + HM and PROST, SMART further improves the reconstruction performance and the T1ρ\text{T}_{1\rho} estimation accuracy by using the parametric tensor.
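The nRMSE reported in these comparisons can be computed as in the following generic sketch. The function name is ours, and normalizing by the l2 norm of the reference map is an assumption for illustration; the paper's exact definition is given in the main text.

```python
import numpy as np

def nrmse(recon, ref):
    """Normalized root-mean-square error between a reconstructed map and
    the reference map, normalized by the l2 norm of the reference."""
    recon = np.asarray(recon, dtype=float)
    ref = np.asarray(ref, dtype=float)
    return np.linalg.norm(recon - ref) / np.linalg.norm(ref)

# toy example: three T1rho values (ms) with a small reconstruction error
ref = np.array([40.0, 50.0, 60.0])
recon = np.array([41.0, 49.0, 60.0])
err = nrmse(recon, ref)   # small positive number; 0.0 for a perfect match
```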

6.5 Prospective 2D Reconstruction

Supplementary information Fig. S4 shows the corresponding T1ρ\text{T}_{1\rho} maps reconstructed using the SMART, PROST + HM, PROST, L + S, LLR, and MORASA methods and the error maps between each reconstructed T1ρ\text{T}_{1\rho} map and the reference map. The T1ρ\text{T}_{1\rho} maps reconstructed using the SMART method exhibit superior reconstruction performance compared with those of the other five methods, with the smallest nRMSEs and error maps for both acceleration factors.

6.6 Retrospective 3D Reconstruction

6.6.1 Low acceleration

The T1ρ\text{T}_{1\rho} maps reconstructed using the SMART, PROST + HM, PROST, L + S, LLR, and MORASA methods are shown in Supplementary information Fig. S5.

6.6.2 Higher acceleration

Supplementary information Fig. S6 shows the T1ρ\text{T}_{1\rho} maps and the corresponding error maps obtained using the above methods for R = 11.26 and 13.21 in 3D imaging. Conclusions similar to those drawn from Fig. S3 can also be drawn here.

6.7 Parameter Selection

To clearly demonstrate the necessity of updating the tissue maps during the iterations, the representative zero-filled reconstruction with R = 6, the intermediate results at the 12th iteration, the parameter histogram, the estimated T1ρ\text{T}_{1\rho} map, and the corresponding tissue groups and their ranks are included in Supplementary information Fig. S7. Because there are too many tissue groups to show them all, we randomly selected 10 groups and randomly chose 10 pixels from each of them. An image block of size 10×510\times 5 can then be constructed for each tissue group, where 10 is the number of chosen pixels and 5 is the number of TSLs. These image blocks are shown as examples of the tissue groups. To calculate the rank of a tissue group, we construct the low-rank matrix with all pixels in the group. The size of the matrix is Np×5N_{p}\times 5, where NpN_{p} is the number of pixels in the group and 5 is the number of TSLs. The rank of the matrix was calculated using the SVD method. The rank curves of all tissue groups are shown.
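The rank computation described above can be sketched in NumPy as follows. This is a minimal illustration: the function name, the 5% tolerance ratio for counting singular values, and the toy T1ρ value of 40 ms are our own assumptions for demonstration.

```python
import numpy as np

def group_rank(group_pixels, tol_ratio=0.05):
    """Rank of a tissue group: stack the signal evolutions of all Np pixels
    in the group into an (Np x 5) matrix (5 TSLs) and count the singular
    values above a fraction of the largest one."""
    M = np.asarray(group_pixels)                  # shape (Np, 5)
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s > tol_ratio * s[0]))

# toy group: every pixel decays mono-exponentially with the same T1rho,
# so the (Np x 5) matrix is an outer product, i.e. rank 1
tsl = np.array([1.0, 15.0, 25.0, 45.0, 65.0])     # spin-lock times (ms)
amplitudes = np.linspace(0.5, 2.0, 20)            # 20 pixels in the group
group = amplitudes[:, None] * np.exp(-tsl[None, :] / 40.0)
print(group_rank(group))   # -> 1
```

A group whose pixels follow several distinct decays would yield a correspondingly higher rank, which is what the rank curves in Fig. S7 track across iterations.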

6.8 Reconstruction Time

The reconstruction time for each method used in the 2D (R = 4) and 3D (R = 6.76) imaging applications is listed in Supplementary information Table S3.

6.9 Algorithms

The algorithms for HOSVD and the SMART method are shown in Algorithms 1 and 2, respectively.
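As a rough illustration of the HOSVD operation, the following NumPy sketch (our own simplified version, not the released implementation) computes the factor matrices from the left singular vectors of each mode unfolding and then forms the core tensor; the dense toy tensor and helper names are assumptions for demonstration.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move mode n to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    """Higher-order SVD: one factor matrix per mode (left singular vectors
    of the mode unfolding) plus the core tensor."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0]
         for n in range(T.ndim)]
    core = T
    for n, Un in enumerate(U):
        # mode-n product of the core with U_n^H
        core = np.moveaxis(
            np.tensordot(Un.conj().T, np.moveaxis(core, n, 0), axes=1), 0, n)
    return core, U

# sanity check: multiplying the core back by each U_n recovers the tensor
T = np.random.default_rng(1).standard_normal((6, 5, 4))
core, U = hosvd(T)
R = core
for n, Un in enumerate(U):
    R = np.moveaxis(np.tensordot(Un, np.moveaxis(R, n, 0), axes=1), 0, n)
print(np.allclose(R, T))  # -> True
```

In the SMART reconstruction, thresholding the core tensor (rather than keeping it whole, as here) is what enforces the multidimensional low-rankness.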

Figure S1: The T1ρ\text{T}_{1\rho}-weighted image of the real phantom at TSL = 40 ms (a), the obtained T1ρ\text{T}_{1\rho} map (b), and the estimated T1ρ\text{T}_{1\rho} values (unit: ms) of the nine vials (c).
Figure S2: Reconstructed T1ρ\text{T}_{1\rho}-weighted images from another 2D brain dataset at TSL = 1 ms with acceleration factors (R) = 4 and 6 using the SMART, PROST + HM, PROST, L + S, LLR, and MORASA methods. The magnified views of the region of interest (denoted by the red box) are shown at the bottom right of each reconstructed image. The yellow arrow indicates the sulcus area in the reference image.
Figure S3: T1ρ\text{T}_{1\rho} maps estimated from one 2D brain dataset with acceleration factors (R) = 10.2 and 11.7 using the SMART, PROST + HM, PROST, L + S, LLR, and MORASA methods. The corresponding error images with respect to the reference estimated from fully sampled data are also shown. The nRMSEs are shown at the bottom of the error images. Red arrows show the aliasing artifacts in reconstructions using the MORASA method.
Table S1: IMAGING PARAMETERS FOR THE 2D AND 3D T1ρ\text{T}_{1\rho} DATASETS USED IN THIS STUDY
Retro/Pro Scanner Coils Matrix FOV (mm²) Thickness (mm) TSL (ms) Scan time (min)
Retro 2D Siemens (phantom) 12 256×256 160×160 5 1, 20, 40, 60, 80 11.2
Retro 2D Siemens (in vivo) 12 384×384 230×230 5 1, 20, 40, 60, 80 5.4
Retro 3D UIH 32 240×216×86 240×216 2 1, 15, 25, 45, 65 49.9
Pro 2D Siemens 12 384×384 230×230 5 1, 20, 40, 60, 80 1.2 (4.48×), 0.9 (5.76×)
Figure S4: T1ρ\text{T}_{1\rho} maps estimated from prospective 2D brain dataset with R = 4.48 and 5.76 using the SMART, PROST + HM, PROST, L + S, LLR, and MORASA methods. The corresponding error images with respect to the reference estimated from fully sampled data are also shown. The quantitative nRMSE results are shown at the bottom of each error image.
Figure S5: T1ρ\text{T}_{1\rho} maps reconstructed from one slice of the 3D brain dataset with acceleration factors (R) = 6.76 and 9.04 using the SMART, PROST + HM, PROST, L + S, LLR, and MORASA methods. The corresponding error images with respect to the reference estimated from fully sampled data are also shown. The quantitative nRMSE results are shown at the bottom of each error image.
Table S2: RECONSTRUCTION PARAMETERS FOR THE 2D AND 3D DATASETS USED IN THIS STUDY
ADMM iteration λ1\lambda_{1} λ2\lambda_{2} μ1\mu_{1} μ2\mu_{2} CG iteration CG tolerance
2D 15 0.2, 0.1, 0.1
3D 15 (6.76×), 18 (9.04×) 0.15, 0.1, 0.1 0.05, 0.01, 0.01 0.01 0.01 15 10^{-7}
Table S3: RECONSTRUCTION TIME OF EACH METHOD
Methods SMART PROST+HM PROST L+S LLR MORASA
2D (R = 4) 369.87 358.25 299.66 437.94 115.46 304.55
3D (R = 6.76) 5981.11 5913.78 5898.54 1197.88 1767.33 3579.69
  • Unit of reconstruction time: second.

Figure S6: T1ρ\text{T}_{1\rho} maps reconstructed from one slice of the 3D brain dataset with acceleration factors (R) = 11.26 and 13.21 using the SMART, PROST + HM, PROST, L + S, LLR, and MORASA methods. The corresponding error images with respect to the reference estimated from fully sampled data are also shown. The nRMSEs are shown at the bottom of the error images.
Figure S7: Zero-filling reconstruction (a) and intermediate reconstruction at the 12th iteration (b) for R = 6. The initial T1ρ\text{T}_{1\rho}-weighted image of the zero-filling reconstruction shows strong aliasing artifacts, and the T1ρ\text{T}_{1\rho} map estimated from the zero-filling images is quite blurred. The quality of the T1ρ\text{T}_{1\rho}-weighted image and the T1ρ\text{T}_{1\rho} map is greatly improved at the 12th iteration. After iterating, the distribution of the parameter histogram also changes (shown in (b)). In the tissue group figure of the zero-filling reconstruction, the pixel intensities along the pixel-number direction are less uniform than those at the 12th iteration. The ranks of several groups are reduced from 2 to 1.

6.10 Convergence Analysis

The convergence of the proposed method is still an open problem [18, 39]. Although empirical evidence suggests that the reconstruction algorithm has very strong convergence behavior, a satisfying proof is still under study. In this study, we analyze the time complexity of the SMART method using the 𝒪\mathcal{O} notation and give a weak convergence result in the following theorem:

Theorem 1: Suppose Assumption (A1) in the Proof section holds, and the multipliers are bounded. Then, we have

mini[N](X,𝒯,𝒵,α1,α2)F2𝒪(1N)\min_{i\in[N]}\left\|\left(\textbf{X}^{*},\mathcal{T}^{*},\mathcal{Z}^{*},\alpha_{1}^{*},\alpha_{2}^{*}\right)\right\|_{F}^{2}\leq\mathcal{O}\left(\frac{1}{N}\right) (20)

where (X,𝒯,𝒵,α1,α2)\left(\textbf{X}^{*},\mathcal{T}^{*},\mathcal{Z}^{*},\alpha_{1}^{*},\alpha_{2}^{*}\right) denotes the subgradient of the augmented Lagrangian function Li(X,𝒯,𝒵,α1,α2)L^{i}\left(\textbf{X},\mathcal{T},\mathcal{Z},\alpha_{1},\alpha_{2}\right) and NN is the total number of iterations.

The proof of Theorem 1 is described in the next section.

6.11 Proof

In this section, we prove Theorem 1 based on the following assumption:

(A1) The sequence of paired proximal operators (Γn,Γn+1)(\Gamma^{n},\Gamma^{n+1}) is asymptotically nonexpansive with a sequence {ϵn+1}\left\{\epsilon^{n+1}\right\}, i.e.,

\|\Gamma^{n}(u)-\Gamma^{n+1}(v)\|_{F}^{2}\leq(1+\epsilon^{n+1})\|u-v\|_{F}^{2},\quad\forall u,v,n (21)

Assumption (A1) is a standard assumption for analyzing the convergence of learned iterative algorithms and can also be found in [65, 66].

Considering that the operator PiP_{i} is applied at every iteration, let Γ=[V(𝒯1),,V(𝒯i),,V(𝒵1),,V(𝒵j),]T\Gamma=[V(\mathcal{T}_{1}),\ldots,V(\mathcal{T}_{i}),\ldots,V(\mathcal{Z}_{1}),\ldots,V(\mathcal{Z}_{j}),\ldots]^{T}, α=[V(α11),,V(α1i),,V(α21),,V(α2j),]T\alpha=[V(\alpha_{11}),\ldots,V(\alpha_{1i}),\ldots,V(\alpha_{21}),\ldots,V(\alpha_{2j}),\ldots]^{T}, and Q=[P1n,,Pin,,H1,,Hj,]Q=[P^{n}_{1},\ldots,P^{n}_{i},\ldots,H_{1},\ldots,H_{j},\ldots], where VV denotes the operator that vectorizes a tensor. Then (7) can be expressed as

\begin{gathered}L^{n}\left(\textbf{X},\Gamma,\alpha\right):=\frac{1}{2}\left\|E\textbf{X}-\textbf{Y}\right\|_{F}^{2}+\lambda_{1}\sum_{i}\left\|\mathcal{T}_{i}\right\|_{*}+\lambda_{2}\sum_{j}\left\|\mathcal{Z}_{j}\right\|_{*}\\ -\left\langle\alpha,\Gamma-Q^{n}(\textbf{X})\right\rangle+\frac{\mu}{2}\left\|\Gamma-Q^{n}(\mathbf{X})\right\|^{2}_{2}\end{gathered} (22)

According to the μ2\frac{\mu}{2}-strong convexity of Ln(Xn,Γ,αn)L^{n}(\textbf{X}^{n},\Gamma,\alpha^{n}) with respect to Γ\Gamma, we have

Ln(Xn,Γn,αn)Ln(Xn,Γn+1,αn)μ2ΓnΓn+122\begin{gathered}L^{n}(\textbf{X}^{n},\Gamma^{n},\alpha^{n})-L^{n}(\textbf{X}^{n},\Gamma^{n+1},\alpha^{n})\geq\frac{\mu}{2}\|\Gamma^{n}-\Gamma^{n+1}\|_{2}^{2}\end{gathered} (23)

Since Ln(X,Γn+1,αn)L^{n}(\textbf{X},\Gamma^{n+1},\alpha^{n}) is also μ2\frac{\mu}{2}-strongly convex with respect to 𝐗\mathbf{X}, we have

Ln(𝐗n,Γn+1,αn)Ln(𝐗n+1,Γn+1,αn)μ2𝐗n+1𝐗nF2\begin{gathered}L^{n}(\mathbf{X}^{n},\Gamma^{n+1},\alpha^{n})-L^{n}(\mathbf{X}^{n+1},\Gamma^{n+1},\alpha^{n})\\ \geq\frac{\mu}{2}\|\mathbf{X}^{n+1}-\mathbf{X}^{n}\|^{2}_{F}\end{gathered} (24)

According to the update rule of the multiplier α\alpha, we have

Ln(𝐗n+1,Γn+1,αn)Ln(𝐗n+1,Γn+1,αn+1)=1μαn+1αn22\begin{gathered}L^{n}(\mathbf{X}^{n+1},\Gamma^{n+1},\alpha^{n})-L^{n}(\mathbf{X}^{n+1},\Gamma^{n+1},\alpha^{n+1})\\ =-\frac{1}{\mu}\|\alpha^{n+1}-\alpha^{n}\|_{2}^{2}\end{gathered} (25)

On the other hand, we have

Ln(𝐗n+1,Γn+1,αn+1)Ln+1(𝐗n+1,Γn+1,αn+1)\displaystyle L^{n}(\mathbf{X}^{n+1},\Gamma^{n+1},\alpha^{n+1})-L^{n+1}(\mathbf{X}^{n+1},\Gamma^{n+1},\alpha^{n+1}) (26)
αn+1,Qn+1(𝐗n+1)Qn(𝐗n+1)\displaystyle\geq-\left\langle\alpha^{n+1},Q^{n+1}(\mathbf{X}^{n+1})-Q^{n}(\mathbf{X}^{n+1})\right\rangle
+μ2Qn+1(𝐗n+1)Qn(𝐗n+1)22=0\displaystyle+\frac{\mu}{2}\|Q^{n+1}(\mathbf{X}^{n+1})-Q^{n}(\mathbf{X}^{n+1})\|_{2}^{2}=0

where the second equality comes from the nonexpansion assumption (A1) on QnQ^{n}. From the first-order optimality condition for the update of 𝐗\mathbf{X}, that is, 𝐗Ln(𝐗n+1,Γn+1,αn)=0\nabla_{\mathbf{X}}L^{n}(\mathbf{X}^{n+1},\Gamma^{n+1},\alpha^{n})=0, we have

\begin{gathered}0=E^{*}(E\mathbf{X}^{n+1}-\mathbf{Y})+(Q^{n})^{*}\alpha^{n}-\mu(Q^{n})^{*}(Q^{n}(\mathbf{X}^{n+1})-\Gamma^{n+1})\\ =E^{*}(E\mathbf{X}^{n+1}-\mathbf{Y})+(Q^{n})^{*}\alpha^{n+1}\end{gathered} (27)

where EE^{*} and QQ^{*} represent the Hermitian adjoints of the operators EE and QQ, respectively. Then we have

σ¯Qnαn+1αn2EE(𝐗n+1𝐗n)F\displaystyle\underline{\sigma}_{Q^{n}}\left\|\alpha^{n+1}-\alpha^{n}\right\|_{2}\leq\|E^{*}E(\mathbf{X}^{n+1}-\mathbf{X}^{n})\|_{F} (28)
σ¯EE𝐗n+1𝐗nF\displaystyle\leq\bar{\sigma}_{E^{*}E}\left\|\mathbf{X}^{n+1}-\mathbf{X}^{n}\right\|_{F}

where σ¯Qn\underline{\sigma}_{Q^{n}} denotes the minimum singular value of QnQ^{n}, and σ¯EE\bar{\sigma}_{E^{*}E} denotes the maximum singular value of EEE^{*}E.

Combining (23), (24), (25), (26), and (28), we have

Ln(𝐗n,Γn,αn)Ln+1(𝐗n+1,Γn+1,αn+1)\displaystyle L^{n}(\mathbf{X}^{n},\Gamma^{n},\alpha^{n})-L^{n+1}(\mathbf{X}^{n+1},\Gamma^{n+1},\alpha^{n+1}) (29)
(μ2σ¯EE2σ¯Qn2)𝐗n+1𝐗nF2+μ2Γn+1Γn22\displaystyle\geq(\frac{\mu}{2}-\frac{\bar{\sigma}^{2}_{E^{*}E}}{\underline{\sigma}^{2}_{Q^{n}}})\|\mathbf{X}^{n+1}-\mathbf{X}^{n}\|^{2}_{F}+\frac{\mu}{2}\|\Gamma^{n+1}-\Gamma^{n}\|^{2}_{2}

Summing both sides of the above inequality from 0 to NN, we have

N2[(μ2σ¯EE2σ¯Qn2)𝐗n+1𝐗nF2+μ2Γn+1Γn22]\displaystyle\frac{N}{2}\left[\left(\frac{\mu}{2}-\frac{\bar{\sigma}_{E^{*}E}^{2}}{\underline{\sigma}_{Q^{n}}^{2}}\right)\left\|\mathbf{X}^{n+1}-\mathbf{X}^{n}\right\|_{F}^{2}+\frac{\mu}{2}\left\|\Gamma^{n+1}-\Gamma^{n}\right\|_{2}^{2}\right] (30)
L0(𝐗0,Γ0,α0)LN(𝐗N,ΓN,αN)<+\displaystyle\leq L^{0}\left(\mathbf{X}^{0},\Gamma^{0},\alpha^{0}\right)-L^{N}\left(\mathbf{X}^{N},\Gamma^{N},\alpha^{N}\right)<+\infty

On the other hand, letting (𝐗n+1,Γn+1,αn+1)L(𝐗n+1,Γn+1,αn+1)\left(\mathbf{X}_{n+1}^{*},{\Gamma}_{n+1}^{*},{\alpha}_{n+1}^{*}\right)\in\partial L\left(\mathbf{X}^{n+1},{\Gamma}^{n+1},{\alpha}^{n+1}\right), we have

{Γn+1αn+1μ(Qn+1𝐗n+1Γn+1)+λpΓpn+1𝐗n+1=E(E𝐗n+1𝐘)Qn+1αn+1+μ(Qn+1)(Qn+1𝐗n+1Γn+1)αn+1=Γn+1Qn+1𝐗n+1\left\{\begin{array}[]{l}{\Gamma}_{n+1}^{*}\in{\alpha}^{n+1}-\mu\left(Q^{n+1}\mathbf{X}^{n+1}-{\Gamma}^{n+1}\right)+\lambda\sum_{p}\partial\left\|\Gamma_{p}^{n+1}\right\|_{*}\\ \mathbf{X}_{n+1}^{*}=E^{*}\left(E\mathbf{X}^{n+1}-\mathbf{Y}\right)-Q^{n+1}{\alpha}^{n+1}\\ +\mu{(Q^{n+1})}^{*}\left(Q^{n+1}\mathbf{X}^{n+1}-{\Gamma}^{n+1}\right)\\ {\alpha}_{n+1}^{*}={\Gamma}^{n+1}-Q^{n+1}\mathbf{X}^{n+1}\end{array}\right. (31)

where the index pp runs over both ii and jj. According to the iterative rule of the ADMM algorithm, we have

{0αnμ(Qn𝐗nΓn+1)+λpΓpn+10=E(E𝐗n+1𝐘)Qnαn+μ(Qn)(Qn+1𝐗n+1Γn+1)0=αn+1αn+μ(Γn+1Qn𝐗n+1)\displaystyle\left\{\begin{array}[]{l}0\in{\alpha}^{n}-\mu\left(Q^{n}\mathbf{X}^{n}-\Gamma^{n+1}\right)+\lambda\sum_{p}\partial\left\|\Gamma_{p}^{n+1}\right\|_{*}\\ 0=E^{*}\left(E\mathbf{X}^{n+1}-\mathbf{Y}\right)-Q^{n}\alpha^{n}\\ +\mu(Q^{n})^{*}\left(Q^{n+1}\mathbf{X}^{n+1}-{\Gamma}^{n+1}\right)\\ 0=\alpha_{n+1}-\alpha_{n}+\mu({\Gamma}^{n+1}-Q^{n}\mathbf{X}^{n+1})\end{array}\right. (32)

Combining (31) and (32), we have

{Γn+1=αnαn+1μ(Qn+1𝑿n+1Qn𝑿n)𝑿n+1=QnαnQn+1αn+1αn+1=1μ(αnαn+1)\left\{\begin{array}[]{l}{\Gamma}_{n+1}^{*}={\alpha}^{n}-{\alpha}^{n+1}-\mu\left(Q^{n+1}\boldsymbol{X}^{n+1}-Q^{n}\boldsymbol{X}^{n}\right)\\ \boldsymbol{X}_{n+1}^{*}=Q^{n}{\alpha}^{n}-Q^{n+1}{\alpha}^{n+1}\\ {\alpha}_{n+1}^{*}=\frac{1}{\mu}\left({\alpha}^{n}-{\alpha}^{n+1}\right)\end{array}\right. (33)

Then, we have

(Γn+1,𝑿n+1,αn+1)F2\displaystyle\left\|\left({\Gamma}_{n+1}^{*},\boldsymbol{X}_{n+1}^{*},{\alpha}_{n+1}^{*}\right)\right\|_{F}^{2} (34)
\displaystyle\leq (2+(1+ϵn)+1μ2)αnαn+122\displaystyle\left(2+(1+\epsilon^{n})+\frac{1}{\mu^{2}}\right)\left\|{\alpha}^{n}-{\alpha}^{n+1}\right\|_{2}^{2}
+2μ2(1+ϵn)𝑿n+1𝑿nF2\displaystyle+2\mu^{2}(1+\epsilon^{n})\left\|\boldsymbol{X}^{n+1}-\boldsymbol{X}^{n}\right\|_{F}^{2}
\displaystyle\leq [σ¯EE2σ¯Qn2(3+ϵn+1μ2)+2μ2(1+ϵn)]𝑿n+1𝑿nF2\displaystyle\left[\frac{\bar{\sigma}_{E^{*}E}^{2}}{\underline{\sigma}_{{Q}^{n}}^{2}}\left(3+\epsilon^{n}+\frac{1}{\mu^{2}}\right)+2\mu^{2}(1+\epsilon^{n})\right]\left\|\boldsymbol{X}^{n+1}-\boldsymbol{X}^{n}\right\|_{F}^{2}

where ${\bar{\sigma}}_{Q^{n}}$ denotes the largest singular value of $Q^{n}$. The first inequality follows from the nonexpansiveness assumption on $Q^{n}$, and the second from relation (28). By the convexity of the norm, we have

mini[N](Γi,𝑿i,αi)F2\displaystyle\min_{i\in[N]}\left\|\left({\Gamma}_{i}^{*},\boldsymbol{X}_{i}^{*},{\alpha}_{i}^{*}\right)\right\|_{F}^{2} (35)
\displaystyle\leq 1Nn=1N(Γn,Xn,αn)F2\displaystyle\frac{1}{N}\sum_{n=1}^{N}\left\|\left({\Gamma}_{n}^{*},X_{n}^{*},{\alpha}_{n}^{*}\right)\right\|_{F}^{2}
\displaystyle\leq [σ¯EE2σ¯Qn2(3+ϵn+1μ2)+2μ2(1+ϵn)]\displaystyle\left[\frac{\bar{\sigma}_{E^{*}E}^{2}}{\underline{\sigma}_{{Q}^{n}}^{2}}\left(3+\epsilon^{n}+\frac{1}{\mu^{2}}\right)+2\mu^{2}(1+\epsilon^{n})\right]
1Nn=1N𝑿n+1𝑿nF2\displaystyle\cdot\frac{1}{N}\sum_{n=1}^{N}\left\|\boldsymbol{X}^{n+1}-\boldsymbol{X}^{n}\right\|_{F}^{2}
\displaystyle\leq [σ¯EE2σ¯Qn2(3+ϵn+1μ2)+2μ2(1+ϵn)](μ2σ¯EE2μσ¯Qn2)1\displaystyle\left[\frac{\bar{\sigma}_{E^{*}E}^{2}}{\underline{\sigma}_{{Q}^{n}}^{2}}\left(3+\epsilon^{n}+\frac{1}{\mu^{2}}\right)+2\mu^{2}(1+\epsilon^{n})\right]\left(\frac{\mu}{2}-\frac{\bar{\sigma}_{E^{*}E}^{2}}{\mu\underline{\sigma}_{{Q^{n}}}^{2}}\right)^{-1}
L(𝑿0,Γ0,α0)L(𝑿N,ΓN,αN)N\displaystyle\cdot\frac{L\left(\boldsymbol{X}^{0},{\Gamma}^{0},{\alpha}^{0}\right)-L\left(\boldsymbol{X}^{N},{\Gamma}^{N},{\alpha}^{N}\right)}{N}

From the aforementioned analysis, we can choose $\mu=\mathcal{O}(\frac{\bar{\sigma}_{E^{*}E}}{\underline{\sigma}_{Q}})$, which implies that

mini[N](Γi,𝑿i,αi)F2𝒪(1N)\min_{i\in[N]}\left\|\left({\Gamma}_{i}^{*},\boldsymbol{X}_{i}^{*},{\alpha}_{i}^{*}\right)\right\|_{F}^{2}\leq\mathcal{O}\left(\frac{1}{N}\right) (36)
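The $\mathcal{O}(1/N)$ stationarity bound can be observed numerically. The following is a minimal sketch on a hypothetical toy problem (least squares with an $l_{1}$ penalty, $Q=I$, and soft-thresholding in place of the HOSVD proximal step), not the SMART operators themselves; all variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 40, 20
A = rng.standard_normal((m, n))          # stand-in for the encoding operator E
y = A @ rng.standard_normal(n)           # synthetic measurements
lam, mu = 0.1, 1.0                       # regularization and penalty parameters

# Scaled-dual ADMM for min_x 0.5*||Ax - y||^2 + lam*||z||_1  s.t.  z = x
x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)   # u = alpha / mu
M = np.linalg.inv(A.T @ A + mu * np.eye(n))          # x-update solve, precomputed
residuals = []
for _ in range(200):
    x = M @ (A.T @ y + mu * (z - u))                              # data-consistency step
    z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / mu, 0)  # soft threshold
    u = u + x - z                                                 # multiplier update
    residuals.append(np.linalg.norm(x - z))                       # primal residual ||Qx - z||

# The best residual over N iterations decays with N, mirroring the O(1/N) bound.
```

The primal residual here plays the role of $\left\|{\Gamma}^{n+1}-Q^{n}\mathbf{X}^{n+1}\right\|$ in the proof above.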
Algorithm 1 Higher-order Singular Value Decomposition (HOSVD) for SMART reconstruction
1:INPUT:
2:      Third-order data tensor $\mathcal{T}\in C^{N_{1}\times N_{2}\times N_{3}}$ with dimensions $\left(N_{1},N_{2},N_{3}\right)$
        The regularization parameters $\lambda=\left[\lambda_{1},\lambda_{2},\lambda_{3}\right]$
3:
4:ALGORITHM:
5: (1) Unfold the tensor $\mathcal{T}$ along each of its three modes:
6:      $\mathbf{T}_{(1)}$: reshapes $\mathcal{T}$ into an $N_{1}\times\left(N_{2}\times N_{3}\right)$ complex matrix.
7:      $\mathbf{T}_{(2)}$: reshapes $\mathcal{T}$ into an $N_{2}\times\left(N_{1}\times N_{3}\right)$ complex matrix.
8:      $\mathbf{T}_{(3)}$: reshapes $\mathcal{T}$ into an $N_{3}\times\left(N_{1}\times N_{2}\right)$ complex matrix.
9: (2) Compute the complex SVD of $\mathbf{T}_{(n)}$ $(n=1,2,3)$ and obtain the orthogonal matrices $\mathbf{U}_{(1)},\mathbf{U}_{(2)}$, and $\mathbf{U}_{(3)}$ spanning the $n$-mode signal subspaces.
10: (3) Compute the complex core tensor $\mathcal{G}$ given by
11:      $\mathcal{G}=\mathcal{T}\times_{1}\mathbf{U}_{(1)}^{H}\times_{2}\mathbf{U}_{(2)}^{H}\times_{3}\mathbf{U}_{(3)}^{H}$
12:which is equivalent to its unfolded forms:
13:      $\boldsymbol{G}_{(n)}=\mathbf{U}_{(n)}^{H}\mathbf{T}_{(n)}\left[\mathbf{U}_{(i)}\otimes\mathbf{U}_{(j)}\right]$, with $1\leq n\leq 3$ and $i\neq j\neq n$
14:where $\otimes$ denotes the Kronecker product.
15: (4) Apply high-order singular-value truncation (hard thresholding) to the core tensor unfoldings:
16:      $\boldsymbol{G}_{(n)}\left(\left|\boldsymbol{G}_{(n)}\right|<\lambda_{n}\right)=0$, i.e., entries whose magnitudes fall below $\lambda_{n}$ are set to zero.
17: (5) Reconstruct the filtered tensor $\mathcal{T}^{\text{denoise}}$ from its $n$-mode $(n=1,2,3)$ unfolding matrices $\mathbf{T}_{(n)}^{\text{denoise}}$, calculated as follows:
18:      $\mathbf{T}_{(n)}^{\text{denoise}}=\mathbf{U}_{(n)}\boldsymbol{G}_{(n)}\left[\mathbf{U}_{(i)}\otimes\mathbf{U}_{(j)}\right]^{H}$, with $1\leq n\leq 3$ and $i\neq j\neq n$
19:
20:OUTPUT: The denoised tensor 𝒯denoise \mathcal{T}^{\text{denoise }} is obtained by folding.
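Algorithm 1 can be sketched compactly in Python with NumPy. This is a hedged illustration only: the function names and the sequential per-mode thresholding order are assumptions, not the authors' implementation.

```python
import numpy as np

def unfold(T, mode):
    """n-mode unfolding: the chosen mode becomes the row dimension."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold for a tensor of the given shape."""
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape([M.shape[0]] + rest), 0, mode)

def mode_mult(T, M, mode):
    """n-mode product: multiply the mode-n unfolding by M on the left."""
    shape = list(T.shape)
    shape[mode] = M.shape[0]
    return fold(M @ unfold(T, mode), mode, shape)

def hosvd_denoise(T, lam):
    """HOSVD hard-thresholding denoiser for a third-order complex tensor.
    lam = [lam1, lam2, lam3] thresholds the core-tensor unfoldings."""
    # (1)-(2): complex SVD of each unfolding gives the n-mode bases U_(n)
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0] for m in range(3)]
    # (3): core tensor G = T x_1 U1^H x_2 U2^H x_3 U3^H
    G = T
    for m in range(3):
        G = mode_mult(G, U[m].conj().T, m)
    # (4): hard-threshold small core entries, mode by mode
    for m in range(3):
        Gm = unfold(G, m)
        Gm[np.abs(Gm) < lam[m]] = 0
        G = fold(Gm, m, G.shape)
    # (5): fold back with the bases to obtain the denoised tensor
    for m in range(3):
        G = mode_mult(G, U[m], m)
    return G
```

With all thresholds set to zero the transform is exact, so `hosvd_denoise(T, [0, 0, 0])` returns `T` up to floating-point error.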
Algorithm 2 Simultaneously spatial patch-based low-rank tensor and parametric group-based low-rank tensor (SMART)
1:INPUT:
2:      $\mathbf{Y}$: undersampled k-space data
        block matching parameters:
            $\lambda_{m}$: normalized $l_{2}$-norm distance threshold
            $N_{p,\max}$: maximum number of similar patches
            $b$: patch size
        reconstruction parameters:
            $ADMM_{\text{iter}}$: number of ADMM iterations
            $CG_{\text{iter}}$: number of CG iterations
            $\lambda_{1}=\left[\lambda_{1(1)},\lambda_{1(2)},\lambda_{1(3)}\right]$, $\lambda_{2}=\left[\lambda_{2(1)},\lambda_{2(2)},\lambda_{2(3)}\right]$: regularization parameters
            $E$: encoding operator
            $\alpha_{1},\alpha_{2}$: Lagrangian multipliers
            $\mu_{1},\mu_{2}$: penalty parameters
3:
4:ALGORITHM:
5: 1. Initialize 𝐗1=E(𝐘),α11=0,α21=0,μ1=0.01,μ2=0.01\mathbf{X}^{1}=E^{*}(\mathbf{Y}),\alpha_{1}^{1}=0,\alpha_{2}^{1}=0,\mu_{1}=0.01,\mu_{2}=0.01.
6:2. Estimate the initial $\mathrm{T}_{1\rho}$ map $\mathrm{T}_{1\rho}^{1}$ from $\mathbf{X}^{1}$ using (16).
7:3. for $n=1,\ldots,ADMM_{\text{iter}}$ do
9:      [1] Extract groups of similar spatial patches from $\mathbf{X}^{n}+\frac{\alpha_{1}^{n}}{\mu_{1}}$, centered at spatial location $i$, using the block matching method with normalized $l_{2}$-norm distance threshold $\lambda_{m}$, maximum similar patch number $N_{p,\max}$, and patch size $b$
10:      [2] Spatial tensor construction of 𝒯in\mathcal{T}_{i}^{n} for each group of similar patches
11:      [3] Solve the optimization problem (P2) in (10) via HOSVD-based denoising (see Algorithm 1) with $\lambda_{1}$. Output: denoised spatial tensor $\mathcal{T}_{i}^{n+1}$ and the estimated image $\widetilde{\mathbf{T}}^{n+1}$
12:      [4] Extract clusters of the same tissue from $\mathbf{X}^{n}+\frac{\alpha_{2}^{n}}{\mu_{2}}$ using histogram analysis of $\mathrm{T}_{1\rho}^{n}$
13:      [5] Parametric tensor construction of 𝒵jn\mathcal{Z}_{j}^{n} for each cluster of the same tissue
14:      [6] Solve the optimization problem (P3) in (12) via HOSVD-based denoising (see Algorithm 1) with $\lambda_{2}$. Output: denoised parametric tensor $\mathcal{Z}_{j}^{n+1}$ and the estimated image $\tilde{\mathbf{Z}}^{n+1}$
15:      [7] Solve the optimization problem (P1) in (9) using the CG algorithm. Output: reconstructed images $\mathbf{X}^{n+1}$
16:      [8] Update α1\alpha_{1} using (13): α1n+1=α1n+μ1(𝐗n+1𝐓~n+1)\alpha_{1}^{n+1}=\alpha_{1}^{n}+\mu_{1}\left(\mathbf{X}^{n+1}-\widetilde{\mathbf{T}}^{n+1}\right)
17:      [9] Update α2\alpha_{2} using (14): α2n+1=α2n+μ2(𝐗n+1𝐙~n+1)\alpha_{2}^{n+1}=\alpha_{2}^{n}+\mu_{2}\left(\mathbf{X}^{n+1}-\tilde{\mathbf{Z}}^{n+1}\right)
18:      [10] if (n mod 3 = 0 ), estimate the T1ρT_{1\rho} map T1ρn+1\mathrm{T}_{1\rho}^{n+1} from 𝐗n+1\mathbf{X}^{n+1}
19:end for
20:
21:OUTPUT: Reconstructed T1ρT_{1\rho}-weighted image series 𝐗\mathbf{X}
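The CG step in [7] solves the normal equations of the data-consistency subproblem. Assuming (9) has the standard augmented-Lagrangian form, these read $(E^{*}E+(\mu_{1}+\mu_{2})I)\mathbf{X}=E^{*}\mathbf{Y}+\mu_{1}(\widetilde{\mathbf{T}}-\alpha_{1}/\mu_{1})+\mu_{2}(\tilde{\mathbf{Z}}-\alpha_{2}/\mu_{2})$. Below is a minimal single-coil sketch, assuming a Cartesian undersampling mask so that $E$ is an FFT followed by masking; the mask, the toy data, and the scenario (both denoised estimates equal to the ground truth, zero multipliers) are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
mask = rng.random((n, n)) < 0.4          # hypothetical undersampling pattern
mu1, mu2 = 0.01, 0.01

def E(x):   # single-coil encoding: FFT followed by k-space masking
    return mask * np.fft.fft2(x, norm="ortho")

def EH(y):  # adjoint: zero-filled inverse FFT
    return np.fft.ifft2(mask * y, norm="ortho")

def A(x):   # normal-equation operator, Hermitian positive definite
    return EH(E(x)) + (mu1 + mu2) * x

# Hypothetical current estimates: both denoised images equal the ground truth
# and the multipliers are zero, so the CG solution should recover x_true.
x_true = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
y = E(x_true)
T_tilde, Z_tilde = x_true, x_true
a1 = np.zeros((n, n), complex)
a2 = np.zeros((n, n), complex)
b = EH(y) + mu1 * (T_tilde - a1 / mu1) + mu2 * (Z_tilde - a2 / mu2)

# Plain conjugate gradient on A x = b (complex inner products via vdot)
x = np.zeros((n, n), complex)
r = b - A(x)
p = r.copy()
rs = np.vdot(r, r)
for _ in range(30):
    if abs(rs) < 1e-24:          # residual already at machine precision
        break
    Ap = A(p)
    alpha = rs / np.vdot(p, Ap)
    x = x + alpha * p
    r = r - alpha * Ap
    rs_new = np.vdot(r, r)
    p = r + (rs_new / rs) * p
    rs = rs_new
```

Because $E^{*}E$ is a projection here, the operator has only two eigenvalue clusters and CG converges in a few iterations, which is why a small $CG_{\text{iter}}$ suffices in practice.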