

1 College of Physics and Information Engineering, Fuzhou University, Fuzhou, China
2 School of Data Science, Fudan University, Shanghai, China
3 School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China

Unsupervised Multi-Modality Registration Network based on Spatially Encoded Gradient Information

Wangbin Ding¹, Lei Li²·³, Liqin Huang¹*, Xiahai Zhuang²*

*X. Zhuang and L. Huang are co-senior and corresponding authors: [email protected]; [email protected]. This work was funded by the National Natural Science Foundation of China (Grant No. 61971142) and the Shanghai Municipal Science and Technology Major Project (Grant No. 2017SHZDZX01).
Abstract

Multi-modality medical images can provide relevant and complementary information about a target (organ, tumor or tissue). Registering multi-modality images to a common space can fuse this complementary information and facilitate clinical applications. Recently, neural networks have been widely investigated to boost registration methods. However, it is still challenging to develop a multi-modality registration network due to the lack of robust criteria for network training. In this work, we propose a multi-modality registration network (MMRegNet), which can perform registration between multi-modality images. Meanwhile, we present spatially encoded gradient information (SEGI) to train MMRegNet in an unsupervised manner. The proposed network was evaluated on the public dataset from MM-WHS 2017. Results show that MMRegNet can achieve promising performance for left ventricle registration tasks. Meanwhile, to demonstrate the versatility of MMRegNet, we further evaluate the method using a liver dataset from CHAOS 2019. Our source code is publicly available at https://github.com/NanYoMy/mmregnet.

Keywords:
Multi-Modality Registration · Left Ventricle Registration · Unsupervised Registration Network · Gradient Information

1 Introduction

Registration is a critical technology for establishing correspondences between medical images [9]. Registration algorithms enable tumor monitoring [17], image-guided intervention [1], and treatment planning [7]. Multi-modality images, such as CT, MR and US, capture different anatomical information. Aligning multi-modality images can help clinicians improve disease diagnosis and treatment [6]. For instance, Zhuang [21] registered multi-modality myocardium MR images to fuse complementary information for myocardial segmentation and scar quantification. Heinrich et al. [10] performed registration of intra-operative US to pre-operative MR, which can aid image-guided neurosurgery.

Over the last decades, various methods have been proposed for multi-modality image registration. The most common methods are based on statistical similarity metrics, such as mutual information (MI) [15], normalized MI (NMI) [18] and spatially encoded MI (SEMI) [22]. Registration is performed by maximizing such a similarity metric between the moved and fixed images. However, these metrics usually suffer from the loss of spatial information [16]. Other common methods are based on invariant representations. Wachinger et al. [19] presented entropy and Laplacian images, which are structural representations invariant across multi-modality images, and registration was achieved by minimizing the difference between the invariant representations. Zhuang et al. [23] proposed using the normal vector information of an intensity image for registration, which obtained performance comparable to MI and NMI. Furthermore, Heinrich et al. [9] designed a handcrafted modality-independent neighborhood descriptor to extract structural information for registration. Nevertheless, these conventional methods solve the registration problem by iterative optimization, which is not applicable in time-sensitive scenarios.

Recently, registration networks, which can efficiently achieve registration in a one-step fashion, have been widely investigated. Hu et al. [11] proposed a weakly supervised registration network for multi-modality images, utilizing anatomical labels as the criteria for network training. Similarly, Balakrishnan et al. [4] proposed a learning-based framework for image registration. The framework can be extended to multi-modality images when anatomical labels are provided during training. Furthermore, Luo et al. [14] proposed a group-wise registration network, which can jointly register multiple atlases to a target image. Nevertheless, these methods require extensive anatomical labels for network training, which prevents their application to unlabeled datasets.

More recently, several unsupervised registration networks based on image-to-image translation generative adversarial networks (GANs) [12] have been proposed. Qin et al. [16] disentangled a shape representation from multi-modality images via a GAN, so that a convenient mono-modality similarity metric could be applied to the shape representation for registration network training. Arar et al. [2] connected a registration network with a style translator. Their network can jointly perform spatial and style transformations on a moving image, and is trained by minimizing the difference between the transformed moving image and the fixed image. The basic idea of these GAN-based methods is to convert the multi-modality registration problem into a mono-modality one. Unfortunately, GAN methods are prone to geometric distortions and intensity artifacts during image translation [20], which may lead to unrealistic registration results.

In this work, we propose an end-to-end multi-modality 3D registration network (MMRegNet). The main contributions are: (1) we present spatially encoded gradient information (SEGI), which provides a similarity criterion to train the registration network in an unsupervised manner; (2) we evaluated our method on multi-modality cardiac left ventricle and liver registration tasks and obtained promising performance on both applications.

2 Method

Registration Network:

Let $I_m$ and $I_f$ be a moving and a fixed image, respectively. Here, $I_m$ and $I_f$ are acquired via different imaging protocols, and are defined in a 3D spatial domain $\Omega$. We construct MMRegNet based on a U-shape convolutional neural network [5], which takes a pair of images $(I_m, I_f)$ as input and simultaneously predicts the forward ($U$) and backward ($V$) dense displacement fields (DDFs) between them. MMRegNet is thus formulated as

$(U, V) = f_\theta(I_m, I_f),$  (1)

where $\theta$ denotes the parameters of MMRegNet. Each voxel $x \in \Omega$ of $I_m$ and $I_f$ can be transformed by $U$ and $V$ as follows,

$(I_m \circ U)(x) = I_m(x + U(x)),$  (2)
$(I_f \circ V)(x) = I_f(x + V(x)),$  (3)

where $\circ$ denotes the spatial transformation operation, and $I_m \circ U$ and $I_f \circ V$ are the moved images of $I_m$ and $I_f$, respectively.
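To make Eqs. (2) and (3) concrete, below is a minimal NumPy/SciPy sketch of warping a 3D image by a DDF with trilinear interpolation. The `warp` helper is a hypothetical stand-in for the differentiable resampling layer a TensorFlow implementation would use, not the authors' code.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, ddf):
    """Warp a 3D image by a dense displacement field of shape (3, D, H, W)."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in image.shape],
                                indexing="ij"))
    # Sample the image at the displaced locations x + U(x), as in Eqs. (2)-(3);
    # order=1 gives trilinear interpolation.
    return map_coordinates(image, grid + ddf, order=1, mode="nearest")
```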

Spatially Encoded Gradient Information:

Generally, the parameters of MMRegNet could be optimized by minimizing an intensity-based criterion, such as the mean squared intensity error between the moved and fixed images. However, such metrics are ill-posed in multi-modality scenarios, because the intensity distribution of an anatomy usually varies across different imaging modalities. Normalized gradient information (NGI) [8] has been widely explored in conventional multi-modality registration methods.

Figure 1: A visual demonstration of the SEGI.

The basic idea of NGI is based on the assumption that image structures can be defined by intensity changes. Let $G_I$ be the NGI of an intensity image $I$; each element of $G_I$ is calculated as

$G_I(x) = \frac{\nabla I(x)}{\|\nabla I(x)\|_2},$  (4)

where $x \in \Omega$, and $\nabla I$ refers to the gradient of image $I$. Ideally, MMRegNet could be trained by minimizing the difference between $G_{I_m \circ U}$ and $G_{I_f}$. However, such a criterion is sensitive to noise and artifacts in the intensity images, and training a registration network with the NGI criterion alone proved error-prone in our experiments. To overcome this, we extend NGI to SEGI. Figure 1 illustrates the idea of SEGI: we introduce a set of spatial variables $\Sigma = \{\sigma_1, \sigma_2, \cdots, \sigma_K\}$ into the standard NGI ($G_I$). For each spatial variable $\sigma_k$, the associated SEGI ($SG_I^{\sigma_k}$) is computed as

$SG_I^{\sigma_k}(x) = \sum_{p \in \Omega} \mathcal{N}(p\,|\,x, \sigma_k^2) \frac{\nabla I(p)}{\|\nabla I(p)\|_2},$  (5)

where $x \in \Omega$, and $\mathcal{N}(p\,|\,x, \sigma_k^2)$ denotes a Gaussian distribution. Notably, we accumulate the gradient information around $x$ to obtain a more robust representation of the intensity change. Finally, given the set of spatial variables $\Sigma$, the SEGI of an intensity image $I$ is defined as

$SG_I = \{SG_I^{\sigma_1}, SG_I^{\sigma_2}, \cdots, SG_I^{\sigma_K}\}.$  (6)
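The following sketch illustrates how Eqs. (4)-(6) could be computed, under the assumption that the Gaussian-weighted sum in Eq. (5) is realized as a convolution of the normalized gradient field with a Gaussian kernel. The `ngi` and `segi` helpers are hypothetical, the small `eps` is added only for numerical stability, and the default scales follow the setting $\Sigma = \{1, 1.5, 3\}$ reported in Sec. 3.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ngi(image, eps=1e-8):
    """Eq. (4): voxel-wise image gradient normalized by its Euclidean norm."""
    g = np.stack(np.gradient(image))                     # shape (3, D, H, W)
    return g / (np.linalg.norm(g, axis=0, keepdims=True) + eps)

def segi(image, sigmas=(1.0, 1.5, 3.0)):
    """Eqs. (5)-(6): Gaussian-weighted accumulation of the NGI at each scale."""
    g = ngi(image)
    # Smoothing each component with a Gaussian realizes the weighted sum over p;
    # sigma=0 on the leading axis keeps the gradient components separate.
    return [gaussian_filter(g, sigma=(0, s, s, s)) for s in sigmas]
```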

Loss Function:

We train the network by minimizing the cosine distance between the SEGI of the moved image ($SG_{I_m \circ U}$) and that of the fixed image ($SG_{I_f}$),

$\mathcal{L}_{SG} = \frac{1}{K} \sum_{k=1}^{K} \mathcal{D}(SG_{I_m \circ U}^{\sigma_k}, SG_{I_f}^{\sigma_k}),$  (7)
$\mathcal{D}(SG_{I_m \circ U}^{\sigma_k}, SG_{I_f}^{\sigma_k}) = \frac{-1}{|\Omega|} \sum_{x \in \Omega} \cos(SG_{I_m \circ U}^{\sigma_k}(x), SG_{I_f}^{\sigma_k}(x)),$  (8)

where $|\Omega|$ is the number of voxels in an image, and $\cos(\bm{A}, \bm{B})$ computes the cosine similarity between vectors $\bm{A}$ and $\bm{B}$.
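A sketch of Eqs. (7) and (8), reusing the hypothetical `segi` helper above: the loss is the negative cosine similarity between the two SEGI fields, averaged over all voxels and over the $K$ scales.

```python
import numpy as np

def sg_loss(moved, fixed, sigmas=(1.0, 1.5, 3.0), eps=1e-8):
    """Eqs. (7)-(8): mean negative cosine similarity between SEGI fields."""
    total = 0.0
    for sg_m, sg_f in zip(segi(moved, sigmas), segi(fixed, sigmas)):
        # Cosine of the angle between the two 3-vectors at every voxel x.
        cos = (sg_m * sg_f).sum(axis=0) / (
            np.linalg.norm(sg_m, axis=0) * np.linalg.norm(sg_f, axis=0) + eps)
        total += -cos.mean()      # Eq. (8): D = -(1/|Omega|) * sum_x cos(...)
    return total / len(sigmas)    # Eq. (7): average over the K scales
```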

Meanwhile, MMRegNet is designed to simultaneously predict $U$ and $V$ for each pair of $I_m$ and $I_f$. Ideally, $U$ and $V$ should be inverses of each other. Hence, we employ a cycle-consistency constraint [5] on the DDFs, such that each $I_m$ can be restored to its original form after being transformed by $U$ and $V$ in succession,

$\mathcal{L}_{CC} = \frac{1}{|\Omega|} \sum_{x \in \Omega} \|(I_m \circ U \circ V)(x) - I_m(x)\|_1.$  (9)

Finally, the total training loss of the registration network is defined as

$\mathcal{L} = \mathcal{L}_{SG} + \lambda_1 \mathcal{L}_{CC} + \lambda_2 \{\Psi(U) + \Psi(V)\},$  (10)

where $\Psi(U)$ and $\Psi(V)$ are smoothness regularization terms for the DDFs, and $\lambda_1$, $\lambda_2$ are hyper-parameters.
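Putting the pieces together, the sketch below assembles Eq. (10) from the hypothetical helpers above. The paper does not spell out the exact form of $\Psi$; a first-order (squared-gradient) smoothness penalty is assumed here, as is common for DDF regularization.

```python
import numpy as np

def cycle_loss(moving, u, v):
    """Eq. (9): mean L1 error after warping I_m by U and then by V."""
    return np.abs(warp(warp(moving, u), v) - moving).mean()

def smoothness(ddf):
    """Assumed Psi: mean squared spatial gradient of each DDF component."""
    return sum((np.stack(np.gradient(c)) ** 2).mean() for c in ddf)

def total_loss(moving, fixed, u, v, lam1=0.1, lam2=10.0):
    """Eq. (10), with the MM-WHS weights {lambda1=0.1, lambda2=10} of Sec. 3."""
    return (sg_loss(warp(moving, u), fixed)
            + lam1 * cycle_loss(moving, u, v)
            + lam2 * (smoothness(u) + smoothness(v)))
```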

3 Experiments and Results

Experimental Setups:

MMRegNet was implemented in TensorFlow and trained on an NVIDIA P100 GPU. We tested it on two public datasets, i.e., MM-WHS (www.sdspeople.fudan.edu.cn/zhuangxiahai/0/mmwhs/) [24] and CHAOS (https://chaos.grand-challenge.org/) [13].

  • MM-WHS: MM-WHS contains multi-modality (CT, MR) cardiac medical images. We utilized 20 MR and 20 CT images for the left ventricle registration task. MMRegNet was trained to register MR to CT images.

  • CHAOS: CHAOS contains multi-modality abdominal images from healthy volunteers. For each volunteer, the dataset includes T1 MR, T2 MR and CT images. We adopted 20 T1 MR, 20 T2 MR and 20 CT images for liver registration.

During the training phase, we employed the Adam optimizer to optimize the network parameters for 5000 epochs. The spatial variables $\Sigma$ were set to $\{1, 1.5, 3\}$ in practice, aiming to capture robust gradient information at multiple scales. Meanwhile, we tested $\lambda_1$ and $\lambda_2$ with four different weighting values, i.e., 0.01, 0.1, 1 and 10. Based on the results of these setups, we set $\{\lambda_1 = 0.1, \lambda_2 = 10\}$ for the MM-WHS dataset and $\{\lambda_1 = 0.1, \lambda_2 = 1\}$ for the CHAOS dataset. To evaluate the performance of MMRegNet, we computed the Dice score (DS) and average symmetric surface distance (ASD) between the corresponding labels of the moved and fixed images. All results were obtained via 4-fold cross-validation.
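For reference, the DS metric can be computed as below; this is an illustrative sketch with a hypothetical `dice` helper (ASD additionally requires extracting label surfaces and is omitted here).

```python
import numpy as np

def dice(moved_label, fixed_label):
    """DS = 2|A n B| / (|A| + |B|) for binary label volumes A and B."""
    a, b = moved_label > 0, fixed_label > 0
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```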

Results:

We compared MMRegNet with three state-of-the-art multi-modality registration methods.

  • Sy-NCC: conventional affine + deformable registration based on the symmetric image normalization method, with normalized cross-correlation (NCC) as the optimization metric [3]. We implemented it with the popular ANTs software package (https://github.com/ANTsX/ANTsPy); see the sketch after this list.

  • Sy-MI: the same as Sy-NCC, but using MI instead of NCC as the optimization metric.

  • VM-NCC: the state-of-the-art registration network [4], trained with NCC as the training criterion. We adopted the official online implementation (https://github.com/voxelmorph/voxelmorph).
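As referenced in the Sy-NCC item above, here is a hedged sketch of how the conventional baselines might be invoked via ANTsPy. The file names are placeholders, and the exact registration parameters used in the paper are not stated, so ANTsPy defaults are assumed.

```python
import ants

fixed = ants.image_read("ct.nii.gz")    # placeholder file names
moving = ants.image_read("mr.nii.gz")
# "SyNCC" runs symmetric normalization driven by cross-correlation (the Sy-NCC
# baseline); Sy-MI would instead use an MI-driven transform.
reg = ants.registration(fixed=fixed, moving=moving, type_of_transform="SyNCC")
moved = reg["warpedmovout"]             # moving image resampled to fixed space
```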

Table 1: Performance of different multi-modality registration methods on the MM-WHS dataset (MR→CT).

Method | LVC DS (%)↑ | LVC ASD (mm)↓ | Myo DS (%)↑ | Myo ASD (mm)↓
Sy-NCC [3] | 70.07±16.57 | 4.51±2.67 | 50.66±16.02 | 4.10±1.77
Sy-MI [3] | 69.16±15.25 | 4.66±2.54 | 49.00±16.21 | 4.34±2.04
VM-NCC [4] | 79.46±8.73 | 2.81±1.05 | 62.77±9.51 | 2.49±0.61
MMRegNet | 80.28±7.22 | 3.46±1.30 | 62.92±8.62 | 3.01±0.74
Table 2: Performance of different multi-modality registration methods on the CHAOS dataset (liver).

Method | T1→CT DS (%)↑ | T1→CT ASD (mm)↓ | T2→CT DS (%)↑ | T2→CT ASD (mm)↓
Sy-NCC [3] | 74.94±11.05 | 8.46±4.10 | 75.46±9.42 | 8.41±3.86
Sy-MI [3] | 73.88±10.08 | 8.84±3.70 | 75.82±7.23 | 8.32±2.73
VM-NCC [4] | 74.63±6.54 | 8.25±2.17 | 71.10±6.09 | 9.30±2.01
MMRegNet | 79.00±8.06 | 7.03±2.55 | 76.71±8.80 | 7.87±1.75
Figure 2: Visualization of different methods on the MM-WHS and CHAOS datasets. The images shown are representative cases of MMRegNet in terms of DS. The blue contours are the gold-standard labels of the fixed images, while the red contours delineate the labels of the moving or moved images. Yellow arrows indicate the advantages of MMRegNet. The last column presents the moved images produced by MMRegNet. (The reader is referred to the online version of this article.)

Table 1 shows the results on the MM-WHS dataset. Compared with the conventional methods (Sy-NCC and Sy-MI), MMRegNet achieved better performance on both the left ventricle cavity (LVC) and the left ventricle myocardium (Myo). Notably, compared to the state-of-the-art registration network, i.e., VM-NCC, MMRegNet obtained comparable results in terms of DS and ASD. This reveals that MMRegNet is applicable to multi-modality registration tasks, and that the proposed SEGI can serve as an efficient alternative to metrics such as MI and NCC for multi-modality registration.

Table 2 shows the results on the CHAOS dataset, where the registration of T1 and T2 images to CT images is reported separately. MMRegNet achieved accuracy comparable to the state-of-the-art conventional methods, i.e., Sy-MI and Sy-NCC. Meanwhile, compared to VM-NCC, MMRegNet obtained average improvements of 4.99% (T1→CT: 4.37%, T2→CT: 5.61%) in DS and 1.33 mm (T1→CT: 1.22 mm, T2→CT: 1.43 mm) in ASD. This indicates that MMRegNet can achieve promising performance on multi-modality registration tasks.

Additionally, Figure 2 visualizes four representative cases from the two datasets. On the MM-WHS dataset, both VM-NCC and MMRegNet achieved better visual results than Sy-MI and Sy-NCC, which is consistent with the quantitative results in Table 1. On the CHAOS dataset, the yellow arrows highlight regions where MMRegNet obtained more reasonable results than the other methods.

4 Conclusion

In this paper, we presented an end-to-end network for multi-modality registration. The network is applicable to both heart and liver registration tasks. Meanwhile, we proposed SEGI to obtain a robust structural representation of multi-modality images, and applied it as the loss function for unsupervised registration network training. The results show that MMRegNet can achieve promising performance compared with state-of-the-art registration methods. Future work will extend MMRegNet to other multi-modality datasets.

References

  • [1] Alam, F., Rahman, S.U., Ullah, S., Gulati, K.: Medical image registration in image guided surgery: Issues, challenges and research opportunities. Biocybernetics and Biomedical Engineering 38(1), 71–89 (2018)
  • [2] Arar, M., Ginger, Y., Danon, D., Bermano, A.H., Cohen-Or, D.: Unsupervised multi-modal image registration via geometry preserving image-to-image translation. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 13410–13419 (2020)
  • [3] Avants, B.B., Tustison, N., Song, G.: Advanced normalization tools (ANTS). Insight j 2(365), 1–35 (2009)
  • [4] Balakrishnan, G., Zhao, A., Sabuncu, M.R., Guttag, J., Dalca, A.V.: Voxelmorph: a learning framework for deformable medical image registration. IEEE transactions on medical imaging 38(8), 1788–1800 (2019)
  • [5] Ding, W., Li, L., Zhuang, X., Huang, L.: Cross-modality multi-atlas segmentation using deep neural networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 233–242. Springer (2020)
  • [6] Fu, Y., Lei, Y., Wang, T., Curran, W.J., Liu, T., Yang, X.: Deep learning in medical image registration: a review. Physics in Medicine & Biology 65(20), 20TR01 (2020)
  • [7] Giesel, F., Mehndiratta, A., Locklin, J., McAuliffe, M., White, S., Choyke, P., Knopp, M., Wood, B., Haberkorn, U., von Tengg-Kobligk, H.: Image fusion using CT, MRI and PET for treatment planning, navigation and follow up in percutaneous RFA. Experimental oncology 31(2), 106 (2009)
  • [8] Haber, E., Modersitzki, J.: Intensity gradient based registration and fusion of multi-modal images. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 726–733. Springer (2006)
  • [9] Heinrich, M.P., Jenkinson, M., Bhushan, M., Matin, T., Gleeson, F.V., Brady, M., Schnabel, J.A.: MIND: Modality independent neighbourhood descriptor for multi-modal deformable registration. Medical image analysis 16(7), 1423–1435 (2012)
  • [10] Heinrich, M.P., Jenkinson, M., Papież, B.W., Brady, M., Schnabel, J.A.: Towards realtime multimodal fusion for image-guided interventions using self-similarities. In: International conference on medical image computing and computer-assisted intervention. pp. 187–194. Springer (2013)
  • [11] Hu, Y., Modat, M., Gibson, E., Li, W., Ghavami, N., Bonmati, E., Wang, G., Bandula, S., Moore, C.M., Emberton, M., et al.: Weakly-supervised convolutional neural networks for multimodal image registration. Medical image analysis 49, 1–13 (2018)
  • [12] Huang, X., Liu, M.Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European conference on computer vision (ECCV). pp. 172–189 (2018)
  • [13] Kavur, A.E., Gezer, N.S., Barış, M., Aslan, S., Conze, P.H., Groza, V., Pham, D.D., Chatterjee, S., Ernst, P., Özkan, S., et al.: CHAOS challenge-combined (CT-MR) healthy abdominal organ segmentation. Medical Image Analysis 69, 101950 (2021)
  • [14] Luo, X., Zhuang, X.: Mvmm-regnet: A new image registration framework based on multivariate mixture model and neural network estimation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 149–159. Springer (2020)
  • [15] Maes, F., Collignon, A., Vandermeulen, D., Marchal, G., Suetens, P.: Multimodality image registration by maximization of mutual information. IEEE transactions on Medical Imaging 16(2), 187–198 (1997)
  • [16] Qin, C., Shi, B., Liao, R., Mansi, T., Rueckert, D., Kamen, A.: Unsupervised deformable registration for multi-modal images via disentangled representations. In: International Conference on Information Processing in Medical Imaging. pp. 249–261. Springer (2019)
  • [17] Seeley, E.H., Wilson, K.J., Yankeelov, T.E., Johnson, R.W., Gore, J.C., Caprioli, R.M., Matrisian, L.M., Sterling, J.A.: Co-registration of multi-modality imaging allows for comprehensive analysis of tumor-induced bone disease. Bone 61, 208–216 (2014)
  • [18] Studholme, C., Hill, D.L., Hawkes, D.J.: An overlap invariant entropy measure of 3D medical image alignment. Pattern recognition 32(1), 71–86 (1999)
  • [19] Wachinger, C., Navab, N.: Entropy and Laplacian images: Structural representations for multi-modal registration. Medical image analysis 16(1), 1–17 (2012)
  • [20] Zhang, Z., Yang, L., Zheng, Y.: Translating and segmenting multimodal medical volumes with cycle-and shape-consistency generative adversarial network. In: Proceedings of the IEEE conference on computer vision and pattern Recognition. pp. 9242–9251 (2018)
  • [21] Zhuang, X.: Multivariate mixture model for myocardial segmentation combining multi-source images. IEEE transactions on pattern analysis and machine intelligence 41(12), 2933–2946 (2018)
  • [22] Zhuang, X., Arridge, S., Hawkes, D.J., Ourselin, S.: A nonrigid registration framework using spatially encoded mutual information and free-form deformations. IEEE transactions on medical imaging 30(10), 1819–1828 (2011)
  • [23] Zhuang, X., Gu, L., Xu, J.: Medical image alignment by normal vector information. In: International Conference on Computational and Information Science. pp. 890–895. Springer (2005)
  • [24] Zhuang, X., Li, L., Payer, C., Štern, D., Urschler, M., Heinrich, M.P., Oster, J., Wang, C., Smedby, Ö., Bian, C., et al.: Evaluation of algorithms for multi-modality whole heart segmentation: an open-access grand challenge. Medical image analysis 58, 101537 (2019)