Unsupervised Multi-Modal Image Registration via Geometry Preserving Image-to-Image Translation
Abstract
Many applications, such as autonomous driving, heavily rely on multi-modal data, where spatial alignment between the modalities is required. Most multi-modal registration methods struggle to compute the spatial correspondence between the images using prevalent cross-modality similarity measures. In this work, we bypass the difficulties of developing cross-modality similarity measures by training an image-to-image translation network on the two input modalities. This learned translation allows training the registration network using simple and reliable mono-modality metrics. We perform multi-modal registration using two networks: a spatial transformation network and a translation network. We show that by encouraging our translation network to be geometry preserving, we manage to train an accurate spatial transformation network. Compared to state-of-the-art multi-modal methods, our method is unsupervised, requires no pairs of aligned modalities for training, and can be adapted to any pair of modalities. We evaluate our method quantitatively and qualitatively on a commercial dataset, showing that it handles several modalities and achieves accurate alignment.
1 Introduction

Scene acquisition using different sensors is common practice in various disciplines, from classical ones such as medical imaging and remote sensing, to emerging tasks such as autonomous driving. Multi-modal sensors allow gathering a wide range of physical properties, which in turn yields richer scene representations. For example, in radiation planning, multi-modal data (e.g., Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) scans) is used for more accurate tumor contouring, which reduces the risk of damaging healthy tissues in radiotherapy treatment [23, 27]. More often than not, multi-modal rigs naturally have different acquisition parameters between modalities, such as lens characteristics and relative sensor position. In these cases, non-rigid image registration is essential for proper execution of the aforementioned downstream tasks.
Classic multi-modal image registration techniques attempt to warp a source image to match a target one via a non-linear optimization process, seeking to maximize a predefined similarity measure [38]. Besides their computational disadvantage, which is critical for applications such as autonomous driving, effectively designing similarity measures for such optimization has proven to be quite challenging. This is true both for intensity-based measures, commonly used in medical imaging [10], and for feature-based ones, typically adapted for more detailed modalities (e.g., Near Infra-Red (NIR) and RGB) [30].
These difficulties gave rise to the recent development of deep regression models. Whether supervised or unsupervised, these models typically require lengthy training, yet they offer fast inference that usually generalizes well. Since it is extremely hard to collect ground-truth data for the registration parameters, supervised multi-modal registration methods commonly use synthesized data to train a registration network [28, 35]. This makes their robustness highly dependent on how closely the artificial data matches the distribution and appearance of real-life data. Unsupervised registration techniques, on the other hand, frequently incorporate a spatial transformer network (STN) [14] and train an end-to-end network [7, 18, 16, 34, 8].
Typically, such approaches optimize an STN by comparing the deformed image to the target one using simple similarity metrics such as the pixel-wise Mean Squared Error (MSE) [29, 31, 6]. Of course, such approaches can only be used in mono-modality settings and become irrelevant in multi-modality settings. To overcome this limitation, unsupervised multi-modal registration networks use statistics-based similarity metrics, particularly (Normalized) Mutual Information ((N)MI) [21], Normalized Cross Correlation (NCC) [5], or the Structural Similarity Index Metric (SSIM) [20, 21] (see Figure 1, faded dashed path). However, these metrics are either computationally intractable (e.g., MI) [3], and hence cannot be used in gradient-based methods, or domain-dependent (e.g., NCC), failing to generalize across modalities.
In this paper, we present an unsupervised method for multi-modal registration. We exploit the celebrated success of multi-modal image translation [13, 36, 37, 12], and simultaneously learn multi-modal translation and spatial registration. The key idea is to alleviate the shortcomings of hand-crafted similarity measures by training an image-to-image translation network on the two given modalities. This, in turn, lets us use mono-modality metrics to evaluate our registration network (see Figure 1, vivid path on the top).
The main challenge for this approach is to train the registration network $R$ and the translation network $T$ simultaneously, while encouraging $T$ to be geometry preserving. This ensures that the two networks are task-specific: $T$ performs only a photo-metric mapping, while $R$ learns the geometric transformation required for the registration task. In our work, we use the concepts of generative adversarial networks (GAN) [9, 22] to train $R$ and $T$. We show that the adversarial training is not only necessary for the translation task (as shown in previous works [13]), but is also necessary to produce smooth and accurate spatial transformations. We evaluate our method on real commercial data, and demonstrate its strength with a series of studies.
The main contributions of our work are:
- An unsupervised method for multi-modal image registration.
- A geometry preserving translation network that allows the application of mono-modality metrics in multi-modal registration.
- A training scheme that encourages a generator to be geometry preserving.
2 Related Works
To deal with the photo-metric difference between modalities, unsupervised multi-modal approaches are forced to find the correlation between the different domains and use it to guide their learning process. In [20], a vanilla CycleGAN architecture is used to regularize a deformation mapping. This is achieved by training a discriminator network to distinguish between deformed and real images. To align a pair of images, the entire network needs to be trained in a single pass. Training this network on a large dataset would encourage the deformation mapping to collapse to an identity mapping, because the discriminator is given only the real and deformed images. Furthermore, the authors use multiple cross-modality similarity metrics, including SSIM, NCC and NMI, which are limited by the compatibility of the specific modalities used. In contrast, our method learns from a large dataset and bypasses the need for cross-modality similarity metrics.
Wang et al. [34] attempt to bypass the need for domain translation by learning an Encoder-Decoder module to create modality-independent features. The features are fed to an STN to learn affine and non-rigid transformations. The authors train their network using a simple similarity measure (MSE) which maintains local similarity, but does not enforce global fidelity.
At the other extreme, [8] rely entirely on an adversarial loss function. They train a regular U-Net based STN by giving the resultant registered images to a discriminator network and using its feedback as the STN’s loss function. By relying solely on the discriminator network for guiding the training, they lose the ability to enforce local coherence between the registered and target images.
Closest to our work, [25] combines an adversarial loss with similarity measurements in an effort to register the images properly while maintaining local geometric properties. They encode the inputs into two separate embeddings, one for shape and one for content, and train a registration network on these disentangled embeddings. This method relies on learned disentanglement, which introduces inconsistencies at the local level. Our method directly enforces the similarity in the image space, which leads to a reliable local signal.



3 Overview
Our core idea is to learn the translation between the two modalities, rather than using a cross-modality metric. This novel approach is illustrated in Figure 1. The spatially transformed image is translated by a learnable network; the translated image can then be compared to the target image using a simple mono-modality metric, bypassing the need for a cross-modality metric. The advantage of using a learnable translation network is that it generalizes and adapts to any pair of given modalities.
Our registration framework consists of two components: (i) a spatial transformation network $R$ and (ii) an image-to-image translation network $T$. The two components are trained simultaneously using two training flows, as depicted in Figure 2. The spatial transformation network takes the two input images and yields a deformation field $\Phi$. The field is then applied either before the translation (Figure 2(b)) or after it (Figure 2(c)). Specifically, the field $\Phi$ is generated by a dedicated network and is used by a re-sampling layer to obtain the transformed image, giving rise to the two compositions $R \circ T$ and $T \circ R$. We elaborate on these two training schemes in Section 4.2. The key point, as we shall show, is that this two-flow training encourages $T$ to be geometry preserving, which implies that all the geometric transformation is encoded in $R$.
Once trained, only the spatial transformation network $R$ is used at test time. The network takes two images $I_a$ and $I_b$ representing the same scene, captured from slightly different viewpoints in two different modalities, $A$ and $B$, respectively, and aligns $I_a$ with $I_b$.
4 Method
Our goal is to learn a non-rigid spatial transformation which aligns two images from different domains. Let $A \subset \mathbb{R}^{H_A \times W_A \times C_A}$ and $B \subset \mathbb{R}^{H_B \times W_B \times C_B}$ be two paired image domains, where $H_d$, $W_d$, and $C_d$ are the height, width, and number of channels of domain $d$, respectively. Pairing means that for each image $I_a \in A$ there exists a unique image $I_b \in B$ representing the same scene, as acquired by the respective sensor. Note that the pairing assumption is a common and reasonable one, since more often than not registration-based applications involve taking an image of the same scene with both modality sensors (e.g., satellite images). Throughout this section, we let $I_a \in A$ and $I_b \in B$ be a pair of images such that $I_a$ needs to be aligned with $I_b$.
To achieve this alignment, we train three learnable components: (i) a registration network $R$, (ii) a translation network $T$, and (iii) a discriminator $D$. The three networks are trained using an adversarial model [9, 22], where $R$ and $T$ are jointly trained to outwit $D$. Below, we describe the design and objectives of each network.
4.1 Registration Network
Our registration network $R$ is a spatial transformation network (STN) composed of a fully-convolutional deformation field generator and a re-sampling layer. The transformation we apply is a non-linear dense deformation, allowing a non-uniform mapping between the images and hence accurate results. Next, we give an in-depth description of each component.
- Deformation Field Generator: This network takes the two input images, $I_a$ and $I_b$, and produces a deformation field $\Phi$ describing how to non-rigidly align $I_a$ to $I_b$. The field is an $H \times W$ matrix of 2-dimensional vectors, indicating the deformation of each pixel in the input image $I_a$.
- Re-sampling Layer: This layer receives the deformation field $\Phi$, produced by the generator, and applies it to a source image $I_s$. Here, the source image is not necessarily $I_a$; it could come from either domain, $A$ or $B$. Specifically, the value of the transformed image $I_s^{\Phi}$ at pixel $(x, y)$ is given by Equation 1:
$I_s^{\Phi}(x, y) = I_s\big(x + \Phi_x(x, y),\; y + \Phi_y(x, y)\big) \qquad (1)$
where $\Phi_x(x, y)$ and $\Phi_y(x, y)$ are the deformations generated by the field generator at pixel $(x, y)$, in the $x$- and $y$-directions, respectively.
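To make the re-sampling layer concrete, below is a minimal sketch built on PyTorch's grid_sample; the function name, the (dx, dy) channel convention, and the zero padding for out-of-image locations are our illustrative choices, not a description of the released implementation.

```python
import torch
import torch.nn.functional as F

def warp(image, flow):
    """Bilinearly re-sample `image` (N, C, H, W) according to a dense
    deformation field `flow` (N, 2, H, W) holding per-pixel (dx, dy)
    offsets in pixels, i.e. output[x, y] = image[x + dx, y + dy]."""
    n, _, h, w = image.shape
    device, dtype = image.device, image.dtype
    # Base pixel-coordinate grid.
    ys = torch.arange(h, device=device, dtype=dtype)
    xs = torch.arange(w, device=device, dtype=dtype)
    grid_y, grid_x = torch.meshgrid(ys, xs)           # each (H, W)
    grid = torch.stack((grid_x, grid_y), dim=-1)      # (H, W, 2) ordered (x, y)
    grid = grid.unsqueeze(0).expand(n, -1, -1, -1)    # (N, H, W, 2)
    # Displace the grid by the predicted deformation field.
    sample = grid + flow.permute(0, 2, 3, 1)
    # Normalize to [-1, 1], the coordinate convention of grid_sample.
    sx = 2.0 * sample[..., 0] / max(w - 1, 1) - 1.0
    sy = 2.0 * sample[..., 1] / max(h - 1, 1) - 1.0
    # Out-of-image locations are filled with zeros (cf. Section 5.1).
    return F.grid_sample(image, torch.stack((sx, sy), dim=-1),
                         mode='bilinear', padding_mode='zeros')
```

Because grid_sample is differentiable with respect to the sampling grid, gradients from the mono-modality loss flow back into the deformation field generator.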
To avoid overly distorting the deformed image, we restrict the generator from producing non-smooth deformations. We adopt a common regularization term used to produce smooth deformations; in particular, the regularization loss encourages neighboring pixels to have similar deformations. Formally, we seek small values of the first-order gradients of $\Phi$, hence the loss at pixel $p$ is given by:
$\ell_{smooth}(p) = \sum_{q \in \mathcal{N}(p)} w_{p,q} \, \big\| \Phi(p) - \Phi(q) \big\| \qquad (2)$
where $\mathcal{N}(p)$ is a set of neighbors of the pixel $p$, and $w_{p,q}$ is a bilateral filter [32] used to reduce over-smoothing. Let $\hat{I}_a$ be the deformed image produced by $R$ on input $I_a$; then the bilateral filter is given by:
$w_{p,q} = \exp\!\left( -\dfrac{\| \hat{I}_a(p) - \hat{I}_a(q) \|^2}{\sigma^2} \right) \qquad (3)$
There are two important notes about the bilateral filter in Equation 3. First, the bilateral filtering is computed with respect to the transformed image $\hat{I}_a$ (at each forward pass); second, the term $\hat{I}_a(p) - \hat{I}_a(q)$ is treated as a constant (at each backward pass). The latter is important to prevent the networks from altering pixel values in order to reduce the loss (e.g., changing pixels so that $\| \hat{I}_a(p) - \hat{I}_a(q) \|$ is relatively large and the weight vanishes), while the former allows better exploration of the solution space.
In our experiments, we take $\mathcal{N}(p)$ to be the immediate spatial neighborhood of $p$ and fix $\sigma$ empirically. The overall smoothness loss of the network, denoted by $\mathcal{L}_{smooth}$, is the mean value of $\ell_{smooth}(p)$ over all pixels $p$.
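A possible implementation of the bilateral-weighted smoothness term in Equations 2 and 3 is sketched below; the forward-difference 4-neighborhood and the default value of sigma are assumptions, and the detach() reflects the note above that the image-dependent weight is treated as a constant in the backward pass.

```python
import torch

def smoothness_loss(flow, warped, sigma=0.1):
    """Bilateral-weighted first-order smoothness of the deformation field.
    flow: (N, 2, H, W) deformation field; warped: (N, C, H, W) deformed image;
    sigma is an assumed kernel width."""
    loss = 0.0
    pairs = (
        (flow[:, :, :, 1:] - flow[:, :, :, :-1],      # horizontal field differences
         warped[:, :, :, 1:] - warped[:, :, :, :-1]),
        (flow[:, :, 1:, :] - flow[:, :, :-1, :],      # vertical field differences
         warped[:, :, 1:, :] - warped[:, :, :-1, :]),
    )
    for d_flow, d_img in pairs:
        # Bilateral weight from the deformed image (Eq. 3); detached so the
        # networks cannot lower the loss by altering pixel values.
        w = torch.exp(-(d_img.detach() ** 2).mean(dim=1, keepdim=True) / sigma ** 2)
        loss = loss + (w * d_flow.abs()).mean()
    return loss
```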
4.2 Geometry Preserving Translation Network
A key challenge of our work is to train the image-to-image translation network $T$ to be geometry preserving. If $T$ is geometry preserving, it performs only a photo-metric mapping, and as a consequence the registration task is carried out solely by the registration network $R$. However, during our experiments, we observed that $T$ tends to generate fake images that are spatially aligned with the ground-truth image, regardless of $R$'s accuracy.
To avoid this, we could restrict $T$ from performing any spatial alignment by reducing its capacity (number of layers). While we did observe that reducing $T$'s capacity improves our registration network's performance, it still prevents the registration network from carrying out the entire registration task (see supplementary material).
To implicitly encourage $T$ to be geometry preserving, we require that $R$ and $T$ commute, i.e., $(R \circ T)(I_a) = (T \circ R)(I_a)$. In the following, we formally define both compositions:
Translation First - $R \circ T$:
This mapping first applies the image-to-image translation on $I_a$ and then the spatial transformation on the translated image. Specifically, we first apply $T$ on $I_a$, which generates a fake sample $T(I_a)$; we then apply our spatial transformation network on $T(I_a)$ to get the final output $(R \circ T)(I_a) = R(T(I_a))$.
Register First - $T \circ R$:
In this composition, we first apply the spatial transformation on $I_a$ and obtain a deformed image $R(I_a)$. Then, we translate $R(I_a)$ to domain $B$ using our translation network: $(T \circ R)(I_a) = T(R(I_a))$.
Note that in both compositions, the deformation field used by the re-sampler is generated from the original input pair $(I_a, I_b)$; the only difference is the source image from which the deformed image is re-sampled. Throughout this section, we refer to $(R \circ T)(I_a)$ and $(T \circ R)(I_a)$ as the outputs of the two compositions.
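In code, the two flows differ only in where the translation is applied. In the following sketch, R denotes the deformation field generator (taking both images and returning the field), T the translation network, and warp the re-sampling layer sketched earlier; the names are hypothetical stand-ins.

```python
def two_flows(R, T, warp, I_a, I_b):
    """Compute both compositions with a single shared deformation field."""
    phi = R(I_a, I_b)              # field generated from the original input pair
    out_rt = warp(T(I_a), phi)     # translation first: (R o T)(I_a)
    out_tr = T(warp(I_a, phi))     # register first:    (T o R)(I_a)
    return out_rt, out_tr
```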
4.3 Training Losses
To train $R \circ T$ and $T \circ R$ to generate fake samples that are similar to those in domain $B$, we use an $L_1$ reconstruction loss:
$\mathcal{L}_{recon} = \big\| (R \circ T)(I_a) - I_b \big\|_1 + \big\| (T \circ R)(I_a) - I_b \big\|_1 \qquad (4)$
where minimizing the above encourages $(R \circ T)(I_a) \approx (T \circ R)(I_a) \approx I_b$, i.e., the two compositions commute.
We use a conditional GAN (cGAN) [22] as our adversarial loss for training $R$, $T$ and $D$. The objective of the adversarial network is to discriminate between real and fake samples, while $R$ and $T$ are jointly trained to fool the discriminator. The cGAN loss for $R$, $T$ and $D$ is formulated below:
$\mathcal{L}_{GAN} = \mathbb{E}_{I_a, I_b}\big[\log D(I_a, I_b)\big] + \mathbb{E}_{I_a}\big[\log\big(1 - D(I_a, (R \circ T)(I_a))\big)\big] + \mathbb{E}_{I_a}\big[\log\big(1 - D(I_a, (T \circ R)(I_a))\big)\big] \qquad (5)$
The total objective is given by:
$\mathcal{L}_{total} = \mathcal{L}_{GAN} + \lambda_{recon}\,\mathcal{L}_{recon} + \lambda_{smooth}\,\mathcal{L}_{smooth} \qquad (6)$
where we seek $R^{*}, T^{*} = \arg\min_{R, T} \max_{D} \mathcal{L}_{total}$. Furthermore, in our experiments, the weights $\lambda_{recon}$ and $\lambda_{smooth}$ are fixed empirically.
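A sketch of the objective minimized by $R$ and $T$ (Equations 4 to 6) is given below; the BCE-with-logits form of the adversarial term and the weight values are implementation assumptions on our part, not values reported here.

```python
import torch
import torch.nn.functional as F

def generator_loss(D, I_a, I_b, out_rt, out_tr, smooth,
                   lambda_recon=1.0, lambda_smooth=1.0):    # placeholder weights
    """Objective minimized by R and T: both compositions should reconstruct
    I_b (Eq. 4) and fool the discriminator conditioned on I_a (Eq. 5)."""
    recon = F.l1_loss(out_rt, I_b) + F.l1_loss(out_tr, I_b)
    adv = 0.0
    for fake in (out_rt, out_tr):
        logits = D(I_a, fake)
        adv = adv + F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return adv + lambda_recon * recon + lambda_smooth * smooth   # Eq. 6
```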
4.4 Implementation Details
Our code is implemented in PyTorch 1.1.0 [24] and is based on the framework and implementation of Pix2Pix [13], CycleGAN [36] and BicycleGAN [37]. The translation network $T$ is an encoder-decoder network with residual connections [1], adapted from the implementation in [15]. The registration network $R$ is U-Net based [26], with residual connections in the encoder path and the output path. In all residual connections, we use an Instance Normalization layer [33]. All networks were initialized with the Kaiming initialization method [11]. Full details about the architectures are provided in the supplementary material.
The experiments were conducted on a single GeForce RTX 2080 Ti. We use the Adam optimizer [17] on mini-batches of size 12, with fixed learning rate and momentum parameters. We train our model for 200 epochs, and activate linear learning rate decay after 100 epochs.
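The linear decay schedule can be realized with a LambdaLR scheduler, as sketched below; the two networks are stand-ins and the Adam learning rate and betas are assumed Pix2Pix-style defaults, since the exact values are not listed here.

```python
import itertools
import torch
import torch.nn as nn

# Stand-ins for the actual R and T architectures.
R = nn.Conv2d(6, 2, kernel_size=3, padding=1)
T = nn.Conv2d(3, 3, kernel_size=3, padding=1)

# Assumed Pix2Pix-style Adam settings; the exact values are not given above.
opt = torch.optim.Adam(itertools.chain(R.parameters(), T.parameters()),
                       lr=2e-4, betas=(0.5, 0.999))

# Constant learning rate for the first 100 epochs, then linear decay towards
# zero over the remaining 100 epochs (200 epochs in total).
decay = lambda epoch: 1.0 - max(0, epoch - 100) / 100.0
scheduler = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda=decay)

for epoch in range(200):
    # (one training epoch over the dataset would run here)
    scheduler.step()
```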
5 Experimental Results
In the following section, we evaluate our approach and explore the interactions between $R$, $T$ and the different loss terms we use.
All our experiments were conducted on a commercial dataset, which contains a collection of images of banana plants with different growing conditions and phenotypes. The dataset contains 6100 frames, where each frame consists of an RGB image, an IR image and a depth image. The RGB images are 24-bit color bitmaps captured by a high-resolution sensor. The IR images are 16-bit gray-scale images captured by a long-wave infrared (LWIR) sensor. Finally, the depth images were captured by an Intel RealSense depth camera. The three sensors were calibrated, and an initial registration was applied based on an affine transformation estimated via depth and controlled lab measurements. The remaining misalignment in the dataset is due to depth variation between different objects in the scene, which the initial registration fails to handle. We split the dataset into training and test samples, where test images were sampled at random with a fixed probability.
5.1 Evaluation

Registration Accuracy Metric. We manually annotated 100 random pairs of test images. In each pair we tagged 10-15 pairs of landmark points on the source and target images, chosen as notable points that are expected to match after registration (see Figure 3). Given a pair of test images $I_a$ and $I_b$ with a set of tagged point pairs $P = \{(p_a, p_b)\}$, denote by $\hat{p}_a$ the deformed source point, i.e., $p_a$ displaced by the predicted deformation. The accuracy of the registration network is simply the average Euclidean distance between the target points and the deformed source points:
$\mathrm{Acc} = \dfrac{1}{|P|} \sum_{(p_a, p_b) \in P} \big\| \hat{p}_a - p_b \big\|_2 \qquad (7)$
Furthermore, we use two types of annotations. The first type is located over salient objects in the scene (the blue points in Figure 3); this is important because, in most cases, downstream tasks are affected mainly by the alignment of the main object in the scene across both modalities. The second type is obtained by picking landmark points from all objects across the scene.
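Equation 7 amounts to a mean landmark distance; the sketch below computes it under the simplifying assumption that the deformation field can be read off directly at each annotated source landmark.

```python
import numpy as np

def registration_accuracy(points_a, points_b, flow):
    """Mean Euclidean distance (Eq. 7) between deformed source landmarks and
    target landmarks. points_a, points_b: (K, 2) arrays of (x, y) pixel
    coordinates; flow: (2, H, W) field of (dx, dy) offsets."""
    xs = points_a[:, 0].round().astype(int)
    ys = points_a[:, 1].round().astype(int)
    deformed = points_a + flow[:, ys, xs].T     # p_a + phi(p_a) for each landmark
    return float(np.linalg.norm(deformed - points_b, axis=1).mean())
```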
Table 1: Registration accuracy (average landmark distance) of our registration network $R$ trained with different similarity measures, and of a SIFT-based baseline.

Method | Salient Objects | Full Scene
---|---|---
Unregistered | 30.3 | 35.45
CycleGAN [36] + SIFT [19] | 17.9 | 34.74
$R$ + SSIM on edges | 26.12 | 28.41
$R$ + NCC on edges | 16.78 | 27.41
$R$ + NCC | 15.8 | 29.91
$R$ + $T$ (Ours) | 6.27 | 6.93
Quantitative Evaluation. Due to limited access to the source code and datasets of related works, we conduct several experiments that demonstrate the strength of our method with respect to different aspects of previous works. As the crux of our work is alleviating the need for cross-modality similarity measures, we trained our network with commonly used cross-modality measures. Table 1 reports the registration accuracy of our registration network when trained with different loss terms. Specifically, we used Normalized Cross Correlation (NCC), as it is frequently used in unsupervised multi-modal registration methods. Furthermore, we trained our network with the Structural Similarity Index Metric (SSIM) computed on edges detected by a Canny edge detector [4] in both the deformed and target images. Finally, we attempted to train our registration network by maximizing the NCC between the edges of the deformed and the target image. As can be seen from Table 1, training the registration network with these prescribed cross-modality similarity measures does not perform well. Further, using NCC produces noisy results, while using SSIM gives smooth but less accurate registration (see supplementary material).
Furthermore, we also tried using traditional descriptors such as SIFT [19] to match corresponding key points between the source and target images. We use these key points to register the source and target images by estimating the transformation parameters between them. However, these descriptors are not designed for multi-modal data, and hence they fail on our dataset.
Instead, we train a CycleGAN [36] network to translate between the two modalities at hand, without any supervision to match the ground truth. CycleGAN, like other unsupervised image-to-image translation networks, is not trained to generate images matching ground-truth samples, thus no geometric transformation is explicitly required from the translation network. Once trained, we use one of the generators in the CycleGAN, the one that maps from domain $A$ to domain $B$, to translate the input image $I_a$ into modality $B$. Assuming this generator is both geometry preserving and translates well between the modalities, features extracted from the fake sample are expected to match those of the target image. Thus, we extracted SIFT descriptors from the images generated by the CycleGAN translation network and from the target image $I_b$, matched these features, and estimated the required spatial registration. The registration accuracy of this method is significantly better than directly using SIFT [19] features on the input image $I_a$; the results are shown in Table 1. Further visual results and details demonstrating this method are provided in the supplementary material.
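For reference, this baseline can be sketched with OpenCV as follows; the ratio-test threshold and the homography motion model are our assumptions about the baseline, not a description of the code used in the paper.

```python
import cv2
import numpy as np

def sift_register(fake_b, target_b):
    """Match SIFT keypoints between the CycleGAN-translated source image and
    the real target (both 8-bit), then estimate a global transformation."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(fake_b, None)
    kp2, des2 = sift.detectAndCompute(target_b, None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    # Lowe's ratio test to keep distinctive matches only.
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```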
Qualitative Evaluation.
Figure 5 shows that our registration network successfully aligns images from different pairs of modalities and handles different alignment cases. For example, the banana leaves in the first row of Figure 5(a) are well aligned in the two modalities. Our registration network maintains this alignment and only deforms the background to fully align the images. This can be seen from the deformation field visualization [2], where little deformation is applied to the banana plant, while most of the deformation is applied to the background. Furthermore, in the last row of Figure 5(a), there is little depth variation in the scene because the banana plant is small, hence a near-uniform deformation is applied across the entire image. To help measure the alignment success, we overlay (with semi-transparency) the plant in image B on top of image A, both before and after the registration. This means that the silhouette has the same spatial location in all images (the original image B, and image A before and after registration). Lastly, we achieve similar success in the registration between RGB and IR images (see Figure 5(b)).
It is worth mentioning that in some cases, the deformation field points to regions outside the source image. In those cases, we simply sample zero values. This happens because the target image content in these regions is not available in the source image $I_a$. We provide more qualitative results in the supplementary material.
5.2 Ablation Study
Next, we present a series of ablation studies that analyze the effectiveness of different aspects of our work. First, we show that training with both compositions of $R$ and $T$ (i.e., our two training flows) indeed encourages a geometry preserving translator $T$. Additionally, we analyze the impact of the different loss terms on the registration network's accuracy. We further show that the bilateral filtering is effective and indeed improves the registration accuracy. All experiments, unless otherwise stated, were conducted without bilateral filtering.
Geometry-Preserving Translation Network.
To evaluate the impact of training $R$ and $T$ simultaneously with the two training flows proposed in Figure 2, we compare the registration accuracy of our method with that of models trained with only one of the two compositions. As can be seen in Figure 6, training with both compositions yields a substantial improvement in registration accuracy (shown in blue), compared to each training flow separately. Moreover, while one of the single-flow settings achieves the lowest reconstruction loss (shown in red), this does not necessarily indicate better registration: in that setting the translation network implicitly performs both the alignment and the translation tasks. Conversely, in the other single-flow setting (shown in green), training is unstable and at some point the spatial transformation network starts to alter pixel values, essentially taking on the role of a translation network. Since it is only geometry-aware by design, it fails to generate good samples; this is indicated by how quickly the discriminator detects that the generated samples are fake (i.e., the adversarial loss decays quickly). Visual results are provided in the supplementary material.
Table 2: Registration accuracy when restricting which networks ($R$, $T$, or both) are updated by each loss term.

$\mathcal{L}_{recon}$ updates \ $\mathcal{L}_{GAN}$ updates | $R$ | $T$ | Both
---|---|---|---
$R$ | - | 28.15 | |
$T$ | 29.02 | - | 22.03
Both | | | 11.01
Loss ablation. It has been shown in previous works [37, 13, 36] that training an image-to-image translation network with both a reconstruction and an adversarial loss yields better results. In particular, the reconstruction loss stabilizes the training process and improves the vividness of the output images, while the adversarial loss encourages the generation of samples matching the real-data distribution.
The main objective of our work is the production of a registration network; therefore, we seek to understand the impact of both losses (reconstruction and adversarial) on it. To this end, we train our model under different settings: in each setting, we fix either $R$'s or $T$'s weights with respect to one of the loss functions. The registration accuracy is presented in Table 2; please refer to the supplementary material for qualitative results. As can be seen there, training only with respect to the reconstruction loss leads to overly sharp but unrealistic images, where the deformation field creates noisy artifacts. On the other hand, training only with respect to the adversarial loss creates realistic images, but with inexact alignment. This is especially evident in Table 2, where training with respect to the reconstruction loss achieves a significant improvement in alignment, and the best accuracy is obtained when both loss terms are used to update all the networks' weights.
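One way to realize such a setting is to route each loss term's gradients to an explicit subset of parameters with torch.autograd.grad; the snippet below sketches the case where $T$ is fixed with respect to the reconstruction loss (the names and the chosen configuration are illustrative, not the paper's training code).

```python
import torch

# Assumes recon_loss and adv_loss come from one forward pass through R and T,
# and that `optimizer` covers the parameters of both networks.
params_R, params_T = list(R.parameters()), list(T.parameters())
optimizer.zero_grad()

# Reconstruction gradients are taken w.r.t. R only, so T is fixed for this term.
g_recon = torch.autograd.grad(recon_loss, params_R, retain_graph=True, allow_unused=True)
# Adversarial gradients update both networks as usual.
g_adv = torch.autograd.grad(adv_loss, params_R + params_T, allow_unused=True)

# Accumulate the per-loss gradients manually, then step the joint optimizer.
for p, g in zip(params_R, g_recon):
    if g is not None:
        p.grad = g.clone()
for p, g in zip(params_R + params_T, g_adv):
    if g is not None:
        p.grad = g.clone() if p.grad is None else p.grad + g
optimizer.step()
```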
Bilateral Filtering Effectiveness. Using bilateral filtering to weigh the smoothness loss allows us, in effect, to encourage piece-wise smoothness of the deformation map. As can be seen in Table 3, this enhances the precision of the registration. These results suggest that using segmentation maps to control the smoothness loss term could be beneficial.
Method | Test Acc. | Train Acc. |
---|---|---|
No Registration | 35.45 | 34.96 |
W/O Bilateral | 11.01 | 9.89 |
With Bilateral | 6.93 | 6.12 |
6 Summary and Conclusions
We presented an unsupervised multi-modal image registration technique based on an image-to-image translation network. Our method does not require any direct comparison between images of different modalities. Instead, we developed a geometry preserving image-to-image translation network, which allows comparing the deformed and target images using simple mono-modality metrics. The geometry preserving translation network was made possible by a novel training scheme, which alternates and combines two different flows to train the spatial transformation. We further showed that using adversarial learning, along with a mono-modality metric, we are able to produce smooth and accurate registration results even with little training data.
We believe that geometry preserving generators can be useful for many applications other than image registration. In the future, we would like to continue exploring the idea of alternately training a number of layers or operators in different flows to encourage them to be commutative, as a means to achieve certain non-trivial properties.
Acknowledgments
This research was supported by a generic R&D program of the Israel Innovation Authority, and by the Phenomics consortium.
References
- [1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.
- [2] Simon Baker, Daniel Scharstein, J. P. Lewis, Stefan Roth, Michael J. Black, and Richard Szeliski. A database and evaluation methodology for optical flow. Int. J. Comput. Vision, 92(1):1–31, Mar. 2011.
- [3] Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeshwar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and Devon Hjelm. Mutual information neural estimation. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 531–540, Stockholmsmässan, Stockholm Sweden, 10–15 Jul 2018. PMLR.
- [4] J Canny. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell., 8(6):679–698, June 1986.
- [5] Xiaohuan Cao, Jianhuan Yang, Li Wang, Zhong Xue, Qian Wang, and Dinggang Shen. Deep learning based inter-modality image registration supervised by intra-modality similarity. Machine learning in medical imaging. MLMI, 11046:55–63, 2018.
- [6] Adrian V. Dalca, Guha Balakrishnan, John V. Guttag, and Mert R. Sabuncu. Unsupervised learning for fast probabilistic diffeomorphic registration. In MICCAI, 2018.
- [7] Bob D. de Vos, Floris F. Berendsen, Max A. Viergever, Marius Staring, and Ivana Isgum. End-to-end unsupervised deformable image registration with a convolutional neural network. In M. Jorge Cardoso, Tal Arbel, Gustavo Carneiro, Tanveer F. Syeda-Mahmood, João Manuel R. S. Tavares, Mehdi Moradi, Andrew P. Bradley, Hayit Greenspan, João Paulo Papa, Anant Madabhushi, Jacinto C. Nascimento, Jaime S. Cardoso, Vasileios Belagiannis, and Zhi Lu, editors, Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support - Third International Workshop, DLMIA 2017, and 7th International Workshop, ML-CDS 2017, Held in Conjunction with MICCAI 2017, Québec City, QC, Canada, September 14, 2017, Proceedings, volume 10553 of Lecture Notes in Computer Science, pages 204–212. Springer, 2017.
- [8] Jingfan Fan, Xiaohuan Cao, Qian Wang, Pew-Thian Yap, and Dinggang Shen. Adversarial learning for mono- or multi-modal registration. Medical Image Analysis, 58:101545, 2019.
- [9] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2672–2680. Curran Associates, Inc., 2014.
- [10] Grant Haskins, Uwe Kruger, and Pingkun Yan. Deep learning in medical image registration: A survey, 2019.
- [11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), ICCV ’15, pages 1026–1034, Washington, DC, USA, 2015. IEEE Computer Society.
- [12] Xun Huang, Ming-Yu Liu, Serge Belongie, and Jan Kautz. Multimodal unsupervised image-to-image translation. In ECCV, 2018.
- [13] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. CVPR, 2017.
- [14] Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. Spatial transformer networks. In Corinna Cortes, Neil D. Lawrence, Daniel D. Lee, Masashi Sugiyama, and Roman Garnett, editors, Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 2017–2025, 2015.
- [15] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, 2016.
- [16] Boah Kim, Jieun Kim, June-Goo Lee, Dong Hwan Kim, Seong Ho Park, and Jong Chul Ye. Unsupervised deformable image registration using cycle-consistent CNN. In Dinggang Shen, Tianming Liu, Terry M. Peters, Lawrence H. Staib, Caroline Essert, Sean Zhou, Pew-Thian Yap, and Ali Khan, editors, Medical Image Computing and Computer Assisted Intervention - MICCAI 2019 - 22nd International Conference, Shenzhen, China, October 13-17, 2019, Proceedings, Part VI, volume 11769 of Lecture Notes in Computer Science, pages 166–174. Springer, 2019.
- [17] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations, 12 2014.
- [18] Matthew C. H. Lee, Ozan Oktay, Andreas Schuh, Michiel Schaap, and Ben Glocker. Image-and-spatial transformer networks for structure-guided image registration. In Dinggang Shen, Tianming Liu, Terry M. Peters, Lawrence H. Staib, Caroline Essert, Sean Zhou, Pew-Thian Yap, and Ali Khan, editors, Medical Image Computing and Computer Assisted Intervention - MICCAI 2019 - 22nd International Conference, Shenzhen, China, October 13-17, 2019, Proceedings, Part II, volume 11765 of Lecture Notes in Computer Science, pages 337–345. Springer, 2019.
- [19] David G. Lowe. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vision, 60(2):91–110, Nov. 2004.
- [20] Dwarikanath Mahapatra, Bhavna J. Antony, Suman Sedai, and Rahil Garnavi. Deformable medical image registration using generative adversarial networks. In 15th IEEE International Symposium on Biomedical Imaging, ISBI 2018, Washington, DC, USA, April 4-7, 2018, pages 1449–1453. IEEE, 2018.
- [21] Dwarikanath Mahapatra, Zongyuan Ge, Suman Sedai, and Rajib Chakravorty. Joint registration and segmentation of xray images using generative adversarial networks. In Yinghuan Shi, Heung-Il Suk, and Mingxia Liu, editors, Machine Learning in Medical Imaging, volume 11046 of Lecture Notes in Computer Science, pages 73–80. Springer, 1 2018.
- [22] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. CoRR, abs/1411.1784, 2014.
- [23] Seungjong Oh and Siyong Kim. Deformable image registration in radiation therapy. Radiation oncology journal, 35(2):101, 2017.
- [24] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. In NIPS Autodiff Workshop, 2017.
- [25] Chen Qin, Bibo Shi, Rui Liao, Tommaso Mansi, Daniel Rueckert, and Ali Kamen. Unsupervised deformable registration for multi-modal images via disentangled representations. In Albert C. S. Chung, James C. Gee, Paul A. Yushkevich, and Siqi Bao, editors, Information Processing in Medical Imaging - 26th International Conference, IPMI 2019, Hong Kong, China, June 2-7, 2019, Proceedings, volume 11492 of Lecture Notes in Computer Science, pages 249–261. Springer, 2019.
- [26] O. Ronneberger, P.Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention (MICCAI), volume 9351 of LNCS, pages 234–241. Springer, 2015. (available on arXiv:1505.04597 [cs.CV]).
- [27] Maria A Schmidt and Geoffrey S Payne. Radiotherapy planning using mri. Physics in Medicine & Biology, 60(22):R323, 2015.
- [28] N. Schneider, F. Piewak, C. Stiller, and U. Franke. Regnet: Multimodal sensor registration using deep neural networks. In 2017 IEEE Intelligent Vehicles Symposium (IV), pages 1803–1810, June 2017.
- [29] A. Sheikhjafari, Michelle Noga, Kumaradevan Punithakumar, and Nilanjan Ray. Unsupervised deformable image registration with fully connected generative neural network. 2018.
- [30] Xiaoyong Shen, Li Xu, Qi Zhang, and Jiaya Jia. Multi-modal and multi-spectral registration for natural images. In Computer Vision - ECCV 2014 - 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part IV, pages 309–324, 2014.
- [31] Chang Shu, Xi Chen, Qiwei Xie, and Hua Han. An unsupervised network for fast microscopic image registration. In John E. Tomaszewski and Metin N. Gurcan, editors, Medical Imaging 2018: Digital Pathology, volume 10581, pages 363 – 370. International Society for Optics and Photonics, SPIE, 2018.
- [32] C. Tomasi and R. Manduchi. Bilateral filtering for gray and color images. In Proceedings of the Sixth International Conference on Computer Vision, ICCV ’98, pages 839–, Washington, DC, USA, 1998. IEEE Computer Society.
- [33] Dmitry Ulyanov, Andrea Vedaldi, and Victor S. Lempitsky. Instance normalization: The missing ingredient for fast stylization. ArXiv, abs/1607.08022, 2016.
- [34] Chengjia Wang, Giorgos Papanastasiou, Agisilaos Chartsias, Grzegorz Jacenkow, Sotirios A. Tsaftaris, and Heye Zhang. FIRE: unsupervised bi-directional inter-modality registration using deep networks. CoRR, abs/1907.05062, 2019.
- [35] Armand Zampieri, Guillaume Charpiat, Nicolas Girard, and Yuliya Tarabalka. Multimodal image alignment through a multiscale chain of neural networks with application to remote sensing. In ECCV, 2018.
- [36] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networkss. In Computer Vision (ICCV), 2017 IEEE International Conference on, 2017.
- [37] Jun-Yan Zhu, Richard Zhang, Deepak Pathak, Trevor Darrell, Alexei A Efros, Oliver Wang, and Eli Shechtman. Toward multimodal image-to-image translation. In Advances in Neural Information Processing Systems, 2017.
- [38] Barbara Zitová and Jan Flusser. Image registration methods: a survey. Image and Vision Computing, 21(11):977 – 1000, 2003.