Translate the Facial Regions You Like Using Region-Wise Normalization
Abstract.
Though GAN (Generative Adversarial Network) based techniques have greatly advanced the performance of image synthesis and face translation, only a few works in the literature provide region-based style encoding and translation. In this paper, we propose a region-wise normalization framework for region-level face translation. While per-region styles are encoded using an existing approach, we build a so-called RIN (region-wise normalization) block to individually inject the styles into per-region feature maps and then fuse them for the following convolution and upsampling. Both the shape and texture of different regions can thus be translated to various target styles. A region matching loss is also proposed to significantly reduce the interference between regions during the translation process. Extensive experiments on three publicly available datasets, i.e. Morph, RaFD and CelebAMask-HQ, suggest that our approach demonstrates a large improvement over state-of-the-art methods like StarGAN, SEAN and FUNIT. Our approach has the further advantage of precise control over the regions to be translated. As a result, region-level expression changes and step-by-step make-up can be achieved. The video demo is available at https://youtu.be/ceRqsbzXAfk.

1. Introduction
With the development of Generative Adversarial Networks (GANs), the quality of generated images keeps improving. Recent unsupervised image-to-image translation algorithms are remarkably successful in transferring complex appearance changes across different image modalities (Zhu et al., 2017a; Choi et al., 2018, 2019; Liu et al., 2019), and we can now use these GANs to generate impressive images. However, current image-to-image approaches mostly perform translation on the full image, which might not fulfill the requirement of region-level translation. For example, we might sometimes only want to change the style of a certain part of a face. As shown in Figure 1, while the mouths of faces with sad and disgusted expressions are changed to happy and surprised styles, the eyes of faces with happy and contemptuous expressions are changed to surprised and angry styles. Most existing image translation approaches cannot achieve this, as they only apply the style changes to the whole image.
Recently, a few conditional Generative Adversarial Network (cGAN) based attempts, like SPADE (Spatially-Adaptive Normalization) (Park et al., 2019) and SEAN (Semantic Region-Adaptive Normalization) (Zhu et al., 2019), have been proposed to synthesize images based on semantic segmentation masks. As SPADE inserts style information only at the beginning of the network, the same style code is applied to all semantic regions. To address this issue, SEAN presented an AdaIN (Huang and Belongie, 2017) based technique to generate spatially varying normalization parameters and inject these parameters into multiple layers of the network. The styles of different semantic regions can thus be individually encoded and controlled, i.e. various style codes can be applied to different semantic regions.
However, SEAN is basically designed for image synthesis. Given a segmentation mask, spatially varying styles are used to control the synthesis of different semantic regions. While regions with different styles are synthesized, the shapes of these regions are defined by the segmentation mask and remain fixed during synthesis. Figure 2 shows two example faces synthesized by SEAN from masks with labels of hair, eyes, nose, mouth and so on. The first row shows the face synthesized with reference to the semantic mask of a male face and the style of a lady’s face. While the skin tone of the synthesized face is similar to that of the style image, the hair of the synthesized face is short, which is pre-defined by the hair region of the input mask. The expression of the face synthesized in the second row is actually very different from that of the style image. We also did not get good translation results when trying to translate the eyes and mouths of the faces shown in Figure 1 to different expressions. While region-wise style information (ST) is encoded as a style map and integrated into the blocks of SEAN modules to synthesize faces from the mask M, the feature maps are modulated by the mask and thus the shapes of the different regions of the synthesized faces are relatively fixed. To resize the shape of the hair, eyes, nose or mouth, an interactive editing tool is required to change the contour of the corresponding semantic regions in the mask.

To achieve a higher degree of freedom in translating different facial regions, we propose in this paper a region-wise normalization block, called RIN, to inject per-region styles into region-wise feature maps, which are then fused together for the following convolution and upsampling process. To reduce the interference among the translations of different regions, we have also designed a region matching loss to measure the similarity between the regions of the content image and the style-translated image.
We perform extensive experiments on three publicly available datasets, i.e. Morph (Jr. and Tesafaye, 2006), RaFD (Langner et al., 2010) and CelebAMask-HQ (Lee et al., 2019; Karras et al., 2018; Liu et al., 2015). The results are quantitatively evaluated using metrics like accuracy, FID (Fréchet Inception Distance) and LPIPS (Learned Perceptual Image Patch Similarity). Both the visual and quantitative results suggest that our approach demonstrates a large improvement over state-of-the-art methods like StarGAN, SEAN and FUNIT. The contributions of our work can be summarized as follows:
- We propose a region-based translation framework for face editing. While GAN-based approaches usually transfer the style of a face as a whole image, our framework can transfer the style of specified regions without changing other regions.
- As our RIN translates the style of content images in a region-wise manner, the introduced building block can generate images more similar to the input style images, by translating both the shape and texture of different regions.
- A region matching loss is designed to measure the similarity between translated/non-translated regions when the style of the whole style image, or of a specific region, is applied to transfer a given face. An ablation study shows that our matching loss can significantly reduce the interference between regions during the translation process.
2. Related Work
Generative Adversarial Networks. Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have been successfully applied to various image synthesis tasks, e.g. image inpainting (Demir and Ünal, 2018; Yu et al., 2018), image manipulation (Abdal et al., 2019; Bau et al., 2019; Zhu et al., 2016) and texture synthesis (Li and Wand, 2016; Slossberg et al., 2018; Frühstück et al., 2019). With continuous improvements on GAN architectures (Karras et al., 2019b; Park et al., 2019; Radford et al., 2016), loss functions (Mao et al., 2017; Arjovsky et al., 2017) and regularization (Gulrajani et al., 2017; Mescheder et al., 2018; Miyato et al., 2018), GAN training is becoming more stable and the synthesized images more realistic. For example, WGAN (Arjovsky et al., 2017) uses the Wasserstein distance to regularize the training of GANs, which helps address the instability of GAN training and balances the training of the generator and the discriminator. Recently, the human face images generated by StyleGAN V1 (Karras et al., 2019b) and V2 (Karras et al., 2019a) are of very high quality and are almost indistinguishable from photographs for untrained viewers. A traditional GAN uses noise vectors as the input and thus provides little user control. This motivates the development of conditional GANs (cGANs) (Mirza and Osindero, 2014), where users can control the synthesis by feeding the generator with conditional information. Examples include class labels (Miyato and Koyama, 2018; Mescheder et al., 2018; Brock et al., 2019), text (Reed et al., 2016; Hong et al., 2018; Xu et al., 2018) and images (Isola et al., 2017; Park et al., 2019; Liu et al., 2017; Wang et al., 2018; Zhu et al., 2017a).
Image-to-Image Translation. Image-to-image translation is an umbrella concept that covers many problems in computer vision and computer graphics. As a milestone, Isola et al. (Isola et al., 2017) first showed that conditional GANs can be used as a general solution to various image-to-image translation problems. Since then, their method has been extended by several works to scenarios including unsupervised learning (Liu et al., 2017; Zhu et al., 2017a), few-shot learning (Liu et al., 2019), high-resolution image synthesis (Wang et al., 2018), multi-modal image synthesis (Zhu et al., 2017b; Huang et al., 2018) and multi-domain image synthesis (Choi et al., 2018, 2019). Among the various image-to-image translation problems, unsupervised learning and few-shot learning are particularly useful as they can translate an image into an unseen class with only a few example images. However, they always translate the whole image and may not pay enough attention to the details of local regions.
Region based style encoding. Only a few image synthesis works in the literature are related to semantic image generation. Before the proposal of SEAN, SPADE was widely regarded as the best architecture for semantic image synthesis. However, SPADE uses only one style code to control the entire style of an image. To allow different styles for different regions in the segmentation mask, SEAN generates spatially varying normalization parameters from the input segmentation mask and style image. Per-region styles can thus be encoded and applied to different regions. Based on such per-region style encoding, we develop a region-wise normalization block, called RIN, to individually inject per-region styles into region-wise feature maps. The styles of both shape and texture can thus be translated for different regions. A region matching loss is also proposed to significantly reduce the interference between regions during the translation process.



3. Method
In this paper, we focus on region-level face translation. The proposed framework, named Region-wise Face Translation (RFT), aims to individually translate the styles of different regions in a content image. As shown in Figure 3(a), our generator network architecture consists of a content encoder, a style encoder and a decoder. As shown in the figure, the generator takes four input images (a content image $x_c$, a content mask $m_c$, a style image $x_s$ and a style mask $m_s$) and outputs a translated image $\hat{x}$. The process of generation can be represented as:

$ST = E_s(x_s, m_s), \qquad \hat{x} = F\big(E_c(x_c), ST, m_c\big) \qquad\qquad (1)$

where $E_c$ is the content encoder, $E_s$ is the style encoder, $F$ is the decoder, $ST$ is the style tensor encoded by the style encoder, $m_c$ is the content mask with $R$ regions, and $\hat{x}$ is the translated image. In the following sections, we show the details of our architecture.
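As a reading aid, the following minimal PyTorch sketch shows how Equation (1) wires the three sub-networks together. The encoders and the decoder are passed in as shape-only stand-ins, and the 19-region masks, 256x256 resolution and 512-dimensional style codes are illustrative choices rather than values taken from the paper.

```python
import torch

def translate(x_c, m_c, x_s, m_s, content_encoder, style_encoder, decoder):
    """Region-wise translation following Eq. (1): the style tensor ST is encoded
    from the style image/mask pair and injected into the decoder together with
    the content features and the content mask."""
    content_feat = content_encoder(x_c)               # E_c(x_c)
    style_tensor = style_encoder(x_s, m_s)            # ST = E_s(x_s, m_s)
    return decoder(content_feat, style_tensor, m_c)   # x_hat = F(E_c(x_c), ST, m_c)

# Toy usage with shape-only stand-ins (19 regions, 256x256 images).
x_c, x_s = torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256)
m_c, m_s = torch.zeros(1, 19, 256, 256), torch.zeros(1, 19, 256, 256)
x_hat = translate(
    x_c, m_c, x_s, m_s,
    content_encoder=lambda x: x,
    style_encoder=lambda x, m: torch.randn(x.shape[0], m.shape[1], 512),
    decoder=lambda f, st, m: f,
)
```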
3.1. Style encoder
As shown in Figure 3(a), inspired by SEAN, our style encoder employs a bottleneck structure to remove the information irrelevant to styles from the style image. The feature map extracted by the transposed-convolution layer is passed through a region-wise average pooling module to obtain the style tensor $ST$. Each vector in $ST$ corresponds to one region in the style mask. In the implementation, we first transform the style mask into a one-hot tensor in which each channel represents a region. Taking the channel representing the hair region as an example, the pixels inside the hair region are set to 1 while all others are set to 0. A set of $R$ style feature maps can then be obtained by element-wise multiplication between the feature map and the different one-hot channels. Finally, we apply global average pooling to obtain the style tensor $ST$, which consists of the style information of the $R$ regions.
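For illustration, the region-wise average pooling step could be implemented as below. Dividing by each region's own pixel count (rather than by the full H×W grid) is an assumption on our part; the tensor shapes follow the description above.

```python
import torch

def region_wise_avg_pool(feat, mask):
    """Pool a style feature map into one style vector per region.
    feat: (N, C, H, W) feature map from the style encoder bottleneck.
    mask: (N, R, H, W) one-hot style mask resized to the feature resolution.
    Returns the style tensor ST of shape (N, R, C)."""
    masked = feat.unsqueeze(1) * mask.unsqueeze(2)      # (N, R, C, H, W) per-region maps
    area = mask.sum(dim=(2, 3)).clamp(min=1.0)          # (N, R) pixels per region
    return masked.sum(dim=(3, 4)) / area.unsqueeze(-1)  # (N, R, C) region-averaged styles

# Toy usage: 19 regions, 512-channel feature map (illustrative sizes).
feat = torch.randn(2, 512, 32, 32)
mask = torch.zeros(2, 19, 32, 32)
mask[:, 0] = 1.0                                        # every pixel labelled as region 0
ST = region_wise_avg_pool(feat, mask)                   # shape: (2, 19, 512)
```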
3.2. Decoder
As shown in Figure 3(a), the decoder is composed of five RIN residual blocks (RIN ResBlks), three upsampling blocks and one convolutional layer. As shown in Figure 3(b), our proposed RIN ResBlk consists of three convolutional layers and three RIN blocks, together with ReLU activation layers. Each residual block takes three inputs: the content feature maps, the per-region style tensor and the content mask. Note that the input content mask is downsampled to the same height and width as the feature maps at the beginning of each block.
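The sketch below gives one plausible wiring of such a residual block, assuming it follows the familiar SPADE/SEAN ResBlk layout: two normalization-ReLU-convolution steps on the main path and one normalization-convolution step on a learned skip path, which accounts for the three convolutions and three RIN blocks mentioned above. The normalization is injected as a callable so the sketch does not depend on a particular RIN implementation.

```python
import torch
import torch.nn as nn

class RINResBlk(nn.Module):
    """Hypothetical RIN residual block (layout assumed, not taken from the paper)."""
    def __init__(self, in_ch, out_ch, make_rin):
        super().__init__()
        mid = min(in_ch, out_ch)
        self.norm0, self.conv0 = make_rin(in_ch), nn.Conv2d(in_ch, mid, 3, padding=1)
        self.norm1, self.conv1 = make_rin(mid), nn.Conv2d(mid, out_ch, 3, padding=1)
        self.norm_s, self.conv_s = make_rin(in_ch), nn.Conv2d(in_ch, out_ch, 1, bias=False)

    def forward(self, h, style_tensor, mask):
        # The content mask is assumed to be downsampled to the feature resolution already.
        dx = self.conv0(torch.relu(self.norm0(h, style_tensor, mask)))
        dx = self.conv1(torch.relu(self.norm1(dx, style_tensor, mask)))
        return self.conv_s(self.norm_s(h, style_tensor, mask)) + dx

# Shape check with an identity stand-in for the region-wise normalization.
blk = RINResBlk(64, 64, make_rin=lambda ch: (lambda x, st, m: x))
out = blk(torch.randn(1, 64, 32, 32), None, None)       # -> (1, 64, 32, 32)
```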

3.2.1. Region-wise normalization
Given a style tensor $ST$ encoding the $R$ region styles, the segmentation mask $m_c$ of the content image and an input feature map $h$, our RIN block tries to translate each of the $R$ regions in the content image to the corresponding style specified in $ST$, by region-wise normalization. As shown in Figure 4(c), we first multiply (element-wise) the feature map $h$ with the one-hot masks (channels) $M^{i}$ to get per-region feature maps, which are then modulated by normalization parameters learned from the style tensor $ST$. Let $h$ denote the input feature map of the current RIN block in a deep convolutional network for a batch of $N$ samples, and let $H$, $W$ and $C$ be the height, width and number of channels of the feature map; the style feature map of the $i$-th region at site $(n, c, y, x)$ can be represented as:
$\bar{F}^{i}_{n,c,y,x} = M^{i}_{y,x}\,\dfrac{h_{n,c,y,x} - \mu_{c}}{\sigma_{c}} \qquad\qquad (2)$

where $h_{n,c,y,x}$ denotes the feature map at the site before normalization, $M^{i}$ denotes the one-hot mask corresponding to the $i$-th region, and $\mu_{c}$ and $\sigma_{c}$ are the mean and standard deviation of the feature map in channel $c$:

$\mu_{c} = \dfrac{1}{NHW}\sum_{n,y,x} h_{n,c,y,x} \qquad\qquad (3)$

$\sigma_{c} = \sqrt{\dfrac{1}{NHW}\sum_{n,y,x} \big(h_{n,c,y,x} - \mu_{c}\big)^{2}} \qquad\qquad (4)$
After getting the per-region feature maps of the content image, with the same operation as AdaIN (Huang and Belongie, 2017), we perform an element-wise calculation between each per-region feature map and its corresponding regional modulation parameters $\gamma^{i}$ and $\beta^{i}$ extracted from $ST$:

$\hat{F}^{i}_{n,c,y,x} = \gamma^{i}_{c}\,\bar{F}^{i}_{n,c,y,x} + \beta^{i}_{c} \qquad\qquad (5)$

where $\hat{F}^{i}$ denotes the style feature map for the $i$-th region, and $\gamma^{i}_{c}$ and $\beta^{i}_{c}$ are the modulation parameters learned from the $i$-th channel of $ST$.
By now, the per-region feature maps have all been injected with the per-region styles encoded from the style image, using our region-wise normalization. Finally, the $R$ modulated per-region feature maps are added together to get the output feature map:

$F^{out}_{n,c,y,x} = \sum_{i=1}^{R} \hat{F}^{i}_{n,c,y,x} \qquad\qquad (6)$
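A minimal PyTorch sketch of the RIN computation in Equations (2)-(6) is given below. How the modulation parameters $\gamma^{i}$ and $\beta^{i}$ are produced from the style tensor (here, a shared pair of linear layers applied to each region's style vector) and the restriction of each modulated map to its own region before the final summation are our assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class RIN(nn.Module):
    """Region-wise normalization sketch following Eqs. (2)-(6)."""
    def __init__(self, num_channels, style_dim=512):
        super().__init__()
        # Assumed: per-region gamma/beta are predicted from each row of ST.
        self.to_gamma = nn.Linear(style_dim, num_channels)
        self.to_beta = nn.Linear(style_dim, num_channels)

    def forward(self, h, style_tensor, mask, eps=1e-5):
        # h: (N, C, H, W) feature map; style_tensor: (N, R, style_dim);
        # mask: (N, R, H, W) one-hot content mask at feature resolution.
        N, C, _, _ = h.shape
        mu = h.mean(dim=(0, 2, 3), keepdim=True)                           # Eq. (3)
        sigma = h.std(dim=(0, 2, 3), unbiased=False, keepdim=True) + eps   # Eq. (4)
        out = torch.zeros_like(h)
        for i in range(mask.shape[1]):
            m_i = mask[:, i:i + 1]                                         # (N, 1, H, W)
            f_bar = m_i * (h - mu) / sigma                                 # Eq. (2)
            gamma = self.to_gamma(style_tensor[:, i]).view(N, C, 1, 1)
            beta = self.to_beta(style_tensor[:, i]).view(N, C, 1, 1)
            f_hat = gamma * f_bar + beta                                   # Eq. (5)
            out = out + f_hat * m_i                                        # Eq. (6), restricted to region i
        return out

# Shape check: 19 regions, 64 channels (random mask used only for the check).
rin = RIN(num_channels=64)
y = rin(torch.randn(2, 64, 32, 32), torch.randn(2, 19, 512), torch.rand(2, 19, 32, 32))
```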
3.3. Discriminator
The discriminator architecture of RFT is the same as that of FUNIT (Liu et al., 2019). As our RFT aims to translate the styles of specified regions only, we propose a novel region matching loss to reduce the interference among different regions.
3.3.1. Region Matching Loss
As shown in Figure 4, we first use a content image $x_c$ and a style image $x_s$ to generate a face $\hat{x}_{full}$ that presents a similar expression to $x_s$, which can be represented as:

$ST_s = E_s(x_s, m_s), \qquad \hat{x}_{full} = F\big(E_c(x_c), ST_s, m_c\big) \qquad\qquad (7)$
where $ST_s$ is the style tensor encoded from the $R$ regions of the style image and $\hat{x}_{full}$ is the result image in which all $R$ regions have been translated to the per-region styles encoded in $ST_s$. In the second task, we only translate the style of the $i$-th region of the content image $x_c$, by replacing the $i$-th channel of its own style tensor $ST_c = E_s(x_c, m_c)$ with that of $ST_s$:

$\widetilde{ST}^{j} = \begin{cases} ST_s^{i}, & j = i \\ ST_c^{j}, & j \neq i, \end{cases} \qquad \hat{x}_{i} = F\big(E_c(x_c), \widetilde{ST}, m_c\big) \qquad\qquad (8)$

where $ST_c^{i}$ and $ST_s^{i}$ are the $i$-th channels of the style tensors $ST_c$ and $ST_s$, respectively, which encode the styles of the $i$-th regions of $x_c$ and $x_s$, and $\hat{x}_{i}$ represents the result image obtained by translating only the style of the $i$-th region of the content image $x_c$.
Given the content image $x_c$, the fully translated image $\hat{x}_{full}$ and the partially translated image $\hat{x}_{i}$, we design a region matching loss to measure the similarity between the $i$-th regions of $\hat{x}_{i}$ and $\hat{x}_{full}$, and the similarity between the other regions of $\hat{x}_{i}$ and $x_c$:

$\mathcal{L}_{RM} = \big\| M^{i} \odot (\hat{x}_{i} - \hat{x}_{full}) \big\|_{1} + \sum_{j \neq i} \big\| M^{j} \odot (\hat{x}_{i} - x_c) \big\|_{1} \qquad\qquad (9)$

where $M^{i}$ and $M^{j}$ represent the one-hot masks corresponding to the $i$-th and $j$-th regions, respectively.
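In code, Equation (9) might look as follows. Because the one-hot region masks partition the image, the sum over $j \neq i$ is equivalent to masking with $1 - M^{i}$; the mean-reduced L1 distance is an assumption about the exact reduction.

```python
import torch

def region_matching_loss(x_c, x_full, x_partial, mask, i):
    """Sketch of the region matching loss in Eq. (9).
    x_c:       content image, (N, 3, H, W)
    x_full:    image with all R regions translated, (N, 3, H, W)
    x_partial: image with only the i-th region translated, (N, 3, H, W)
    mask:      one-hot content mask, (N, R, H, W); i: index of the translated region."""
    m_i = mask[:, i:i + 1]                                        # (N, 1, H, W)
    inside = torch.abs(m_i * (x_partial - x_full)).mean()         # i-th region vs. full translation
    outside = torch.abs((1.0 - m_i) * (x_partial - x_c)).mean()   # other regions vs. content image
    return inside + outside
```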
3.4. Training
The proposed RFT was trained by solving a minimax optimization problem given by

$\min_{G}\max_{D}\ \mathcal{L}_{GAN}(G, D) + \lambda_{R}\,\mathcal{L}_{R}(G) + \lambda_{FM}\,\mathcal{L}_{FM}(G) + \lambda_{RM}\,\mathcal{L}_{RM}(G) \qquad\qquad (10)$

where $\mathcal{L}_{GAN}$, $\mathcal{L}_{R}$, $\mathcal{L}_{FM}$ and $\mathcal{L}_{RM}$ are the GAN loss, the content image reconstruction loss, the feature matching loss and the region matching loss, respectively, and the $\lambda$ terms are their weights. The GAN loss is a conditional one given by

$\mathcal{L}_{GAN}(G, D) = \mathbb{E}_{x_s}\big[\log D^{c_s}(x_s)\big] + \mathbb{E}_{x_c, x_s}\big[\log\big(1 - D^{c_s}(\hat{x})\big)\big] \qquad\qquad (11)$

where $D^{c_s}$ denotes the discriminator output for the class of the style image $x_s$.
The content reconstruction loss $\mathcal{L}_{R}$ helps G learn a translation model. Specifically, when using the same image as both the input content image and the input style image, the loss encourages G to generate an output image identical to the input:

$\mathcal{L}_{R}(G) = \mathbb{E}_{x_c}\Big[\big\| x_c - G(x_c, m_c, x_c, m_c) \big\|_{1}\Big] \qquad\qquad (12)$
The feature matching loss $\mathcal{L}_{FM}$ regularizes the training. We first construct a feature extractor, referred to as $D_{f}$, by removing the last (prediction) layer from $D$. We then use $D_{f}$ to extract features from the translation output $\hat{x}$ and the style image $x_s$ and minimize

$\mathcal{L}_{FM}(G) = \mathbb{E}_{x_c, x_s}\Big[\big\| D_{f}(\hat{x}) - D_{f}(x_s) \big\|_{1}\Big] \qquad\qquad (13)$
The GAN loss, the content reconstruction loss and the feature matching loss are the same as those of FUNIT.
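The generator-side objective of Equation (10) can be assembled as in the sketch below, assuming the individual terms have already been computed from a forward pass. The non-saturating adversarial term and the default $\lambda$ values are placeholders; FUNIT's exact adversarial formulation and the weights used for RFT are not restated here.

```python
import torch
import torch.nn.functional as F

def generator_objective(d_fake_logit, x_c, x_rec, feat_fake, feat_style, rm_loss,
                        lambda_r=0.1, lambda_fm=1.0, lambda_rm=1.0):
    """Generator-side total loss of Eq. (10) (sketch; lambda defaults are placeholders).
    d_fake_logit: conditional discriminator logits for the translated image,
    x_rec: output of G(x_c, m_c, x_c, m_c), feat_fake/feat_style: D_f features."""
    loss_gan = F.binary_cross_entropy_with_logits(        # stand-in for Eq. (11)
        d_fake_logit, torch.ones_like(d_fake_logit))
    loss_rec = torch.abs(x_c - x_rec).mean()              # Eq. (12): L1 reconstruction
    loss_fm = torch.abs(feat_fake - feat_style).mean()    # Eq. (13): feature matching on D_f
    return loss_gan + lambda_r * loss_rec + lambda_fm * loss_fm + lambda_rm * rm_loss
```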
4. Experiment
Our proposed RFT was evaluated on three challenging datasets, i.e. Morph, RaFD and CelebAMask-HQ. A wide range of quantitative metrics, including FID, accuracy and LPIPS, were evaluated for the different models; qualitatively, examples of synthesized images are shown for visual inspection.
4.1. Dataset
Morph. The Morph dataset (Jr. and Tesafaye, 2006) is a large-scale public longitudinal face dataset, collected in an indoor office environment with variations in age, pose, expression and lighting conditions. It has two subsets: Album 1 and Album 2. Album 2 contains 55,134 images of 13,000 individuals with age labels ranging from 16 to 77 years old. We divide the images into a training set with 50,020 images and a test set with 4,925 images. The images are separated into five groups with ages of 11-20, 21-30, 31-40, 41-50 and 50+.
RaFD. RaFD dataset (Langner et al., 2010) is a high-quality face database, containing a total of 67 models with 8,040 pictures displaying 8 emotional expressions, i.e., angry, fearful, disgusted, contempt, happy, surprise, sad and neutral. Each expression consists of three different gaze directions and was simultaneously photographed from different angles using five cameras. We divide the images into a training set with 4,320 images and a test set with 504 images.
CelebAMask-HQ. The CelebAMask-HQ dataset (Lee et al., 2019; Karras et al., 2018; Liu et al., 2015) contains 30,000 segmentation masks for the CelebA-HQ face image dataset. There are 19 different region categories in the CelebAMask-HQ dataset. We divide the images into a training set with 25,000 images and a test set with 5,000 images.

4.2. Metrics
For each of the three datasets above, we train the different GAN models on its training set. Note that all the baselines are trained with a batch size of 4 and a maximum of 100,000 iterations, using the same input image size. As shown in Figure 5, we design different mask settings for the different datasets and translation tasks.
In the test stage, we evaluate the performance of the different models on the corresponding test sets using the following three metrics:
Accuracy. Three classifiers (ResNet-18) (He et al., 2016), trained on the training sets of the three datasets, are used to measure the accuracy of translation. If the synthetic face of a target class is correctly classified by the classifier, we count the translation as successful.
FID. Calculated as the Fréchet inception distance (Heusel et al., 2017) between the feature distributions of generated and real images, the FID score has been shown to correlate well with human judgement of visual quality. It measures the similarity between two sets of images, and a lower FID value indicates better quality of the synthetic images. We use an ImageNet-pretrained Inception-V3 (Szegedy et al., 2016) classifier as the feature extractor. For each test image from a source domain, we translate it into a target domain using 10 style images randomly sampled from the test set of the target domain. We then compute the FID between the translated images and the training images in the target domain. We compute the FIDs for every pair of image domains and report the average score.
LPIPS. Learned perceptual image patch similarity (LPIPS) (Zhang et al., 2018) measures the diversity of the generated images using the L1 distance between features extracted from the pretrained AlexNet (Krizhevsky et al., 2017). For each test image, we translate its style with reference to 10 style images randomly sampled from the target domain. The L1 distances between each translated image and its style image are then averaged as the LPIPS of the test image. Finally, we report the average of the LPIPS values over all test images. Note that LPIPS is not available for StarGAN, as it does not require any style image for face translation.
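For reference, once Inception-V3 features have been extracted for the real and generated image sets, FID is the Fréchet distance between their Gaussian statistics, which the short NumPy/SciPy sketch below computes; accuracy and LPIPS are obtained analogously with the ResNet-18 classifiers and the AlexNet-based LPIPS network, respectively.

```python
import numpy as np
from scipy import linalg

def fid(feat_real, feat_fake):
    """Fréchet Inception Distance between two feature sets of shape (num_images, dim):
    ||mu_r - mu_f||^2 + Tr(S_r + S_f - 2 * sqrtm(S_r @ S_f))."""
    mu_r, mu_f = feat_real.mean(axis=0), feat_fake.mean(axis=0)
    cov_r = np.cov(feat_real, rowvar=False)
    cov_f = np.cov(feat_fake, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    if np.iscomplexobj(covmean):                 # drop tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))

# Toy usage with random low-dimensional "features" (Inception-V3 features are 2048-dim).
score = fid(np.random.randn(200, 64), np.random.randn(200, 64))
```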


4.3. Results on Morph
Firstly, RFT is evaluated on the Morph dataset to assess region-level age attribute translation. Figure 6 shows the results of our RFT. Note that the per-region styles are encoded using 10 style images randomly sampled from the test set of the target age groups. Figure 6 also shows the translation results for an example face of a 25-year-old man. In the first row, the hair of the young man is translated to the styles of different age groups (from long black to short white), with the face regions fixed. In the second row, the face of the young man is translated to the styles of different age groups (appearance of wrinkles), with the hair style fixed. In the third row, both hair and face are translated. One can visually observe that our RFT can well control the regions to be translated and achieves decent styles for the target regions.
Table 1. Quantitative comparison on the Morph dataset.
Method | Accuracy(%) | FID score | LPIPS |
---|---|---|---|
StarGAN | 60.88 | 27.89 | - |
SEAN | 30.25 | 48.84 | 0.2525 |
FUNIT | 39.02 | 26.14 | 0.3152 |
RFT | 69.01 | 23.34 | 0.2512 |
The accuracy, FID and LPIPS of the face images translated by our RFT are listed in Table 1, together with those of StarGAN, SEAN and FUNIT. One can observe from the table that the accuracy of RFT is as high as 69.01%, which is significantly higher than that of StarGAN, SEAN and FUNIT. Also, our method achieves the lowest FID score and LPIPS among these GAN-based models.
In addition to region-level translation, our RFT also performs well for image-level translation. Figure 11 in the Appendix shows five example faces translated to different age groups specified by the style images listed in the first row. While clear hair changes and wrinkles can be observed, the identities of the faces are well preserved.


4.4. Results on RaFD
We now test the performance of our RFT for region-level expression translation using the RaFD dataset. Figure 7 shows the translation results for an example face in RaFD, whose eyes and/or mouth are translated from neutral to different expressions such as angry, fearful, happy, sad and surprised. In the first and second rows, only the eyes and the mouth of the man are respectively translated to different expressions, with the other regions fixed. In the third row, both the eyes and the mouth are translated. One can observe from the figure that our approach can precisely translate the shape and texture of the designated facial regions to a target expression, without touching any other regions.
Table 2. Quantitative comparison on the RaFD dataset.
Method | Accuracy(%) | FID score | LPIPS |
---|---|---|---|
StarGAN | 77.28 | 32.67 | - |
SEAN | 13.10 | 29.61 | 0.2610 |
FUNIT | 12.72 | 41.67 | 0.2937 |
RFT | 88.32 | 27.88 | 0.2776 |
Table 2 shows the accuracy, FID and LPIPS of the faces generated by the different GAN models. One can observe from the table that the accuracy of RFT is as high as 88.32%, which is significantly higher than that of StarGAN and more than 75 percentage points higher than that of SEAN and FUNIT. Also, our method achieves the lowest FID of 27.88, which is much lower than that of FUNIT. Though the FID of SEAN is close to that of our RFT, the expressions translated by SEAN are not accurate, due to the fixed shapes defined in the semantic mask (see Figure 2 for an example). Figure 12 in the Appendix presents more example faces with different expressions translated by StarGAN, SEAN, FUNIT and our RFT, which clearly justify the advantages of our approach in terms of the visual quality of the generated face images.
Table 3. Quantitative comparison on the CelebAMask-HQ dataset.
Method | Accuracy(%) | FID score | LPIPS |
---|---|---|---|
StarGAN | 62.17 | 47.53 | - |
MUNIT | 81.65 | 37.07 | 0.4155 |
SEAN | 72.95 | 61.06 | 0.3465 |
FUNIT | 93.30 | 35.17 | 0.3781 |
RFT | 97.06 | 31.06 | 0.3450 |

4.5. Results on CelebAMask-HQ
We now evaluate the region-level gender translation performance of our approach using the CelebAMask-HQ dataset. Figure 8 shows the results of translating the face or hair of example faces to different genders. While the faces in the left column are translated to the styles of the opposite gender, with the hair style fixed, the hair of the faces in the right column is translated to long/short styles, with the facial styles fixed. Figure 9 further shows the results for a man and a lady when their left/right eyes, mouths, hair and full images are translated to the styles of the opposite gender, using the masks presented in Figure 5. Again, one can observe that our model can precisely translate the style of the region controlled by the mask overlaid in the bottom-right corner of each generated face, without touching other regions.
Table 3 lists the accuracy, FID and LPIPS of the different approaches. Again, our RFT achieves the highest accuracy (97.06%) and the lowest FID (31.06) and LPIPS (0.3450).
Figure 13 in the Appendix shows the translation of the left/right eyes, nose, mouth and face to the style of a beautiful lady. When the five regions are translated one by one, one can clearly see make-up effects like eye shadow and whitening of the skin, which beautify the faces.
Table 4. Ablation study on the CelebAMask-HQ dataset.
Method | Accuracy(%) | FID score | LPIPS |
---|---|---|---|
RFT (RIN→SEAN) | 96.85 | 35.01 | 0.3494 |
RFT (w/o RM loss) | 96.61 | 33.81 | 0.3421 |
RFT | 97.06 | 31.06 | 0.3450 |
4.6. Ablation studies on CelebAMask-HQ
To further demonstrate the effectiveness of our proposed RIN block and RM (region matching) loss, we perform an ablation study in this section. We replaced our RIN block with SEAN, removed the RM loss, i.e. set $\lambda_{RM} = 0$ in Equation (10), and tested the performance of RFT for gender style translation using the CelebAMask-HQ dataset. Figure 11 shows the translation results of different regions for a young man when RFT with different settings is applied. Compared with RFT using SEAN blocks, the left/right eye and nose (the 2nd, 3rd and 4th columns) translated by the original RFT present more lady-like styles, i.e. eye shadow appears around the eyes and the nose is whitened. When the RM loss is removed, there is no significant difference among the faces presented in the third row when the left/right eye and nose are translated, respectively. The long hair in the sixth column does not fit the face boundary well. Table 4 lists the accuracy, FID and LPIPS of the different settings. Compared with SEAN, our RIN block significantly reduces the FID from 35.01 to 31.06. The accuracy of our RFT is also higher than that of the variants with SEAN blocks or without the RM loss.
5. Conclusion
This paper proposed a novel region-wise face translation network, named RFT, for region-based face translation. A region-wise normalization block and a region matching loss are proposed to fuse the per-region styles of the style and content images and to reduce the interference between different regions, respectively. The proposed RFT is evaluated on three datasets and the experimental results demonstrate its effectiveness.
References
- Abdal et al. (2019) Rameen Abdal, Yipeng Qin, and Peter Wonka. 2019. Image2StyleGAN: How to Embed Images Into the StyleGAN Latent Space?. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019. IEEE, 4431–4440. https://doi.org/10.1109/ICCV.2019.00453
- Arjovsky et al. (2017) Martín Arjovsky, Soumith Chintala, and Léon Bottou. 2017. Wasserstein GAN. CoRR abs/1701.07875 (2017). arXiv:1701.07875 http://arxiv.org/abs/1701.07875
- Bau et al. (2019) David Bau, Hendrik Strobelt, William S. Peebles, Jonas Wulff, Bolei Zhou, Jun-Yan Zhu, and Antonio Torralba. 2019. Semantic photo manipulation with a generative image prior. ACM Trans. Graph. 38, 4 (2019), 59:1–59:11. https://doi.org/10.1145/3306346.3323023
- Brock et al. (2019) Andrew Brock, Jeff Donahue, and Karen Simonyan. 2019. Large Scale GAN Training for High Fidelity Natural Image Synthesis. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. https://openreview.net/forum?id=B1xsqj09Fm
- Choi et al. (2018) Yunjey Choi, Min-Je Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. 2018. StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018. IEEE Computer Society, 8789–8797. https://doi.org/10.1109/CVPR.2018.00916
- Choi et al. (2019) Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha. 2019. StarGAN v2: Diverse Image Synthesis for Multiple Domains. CoRR abs/1912.01865 (2019). arXiv:1912.01865 http://arxiv.org/abs/1912.01865
- Demir and Ünal (2018) Ugur Demir and Gözde B. Ünal. 2018. Patch-Based Image Inpainting with Generative Adversarial Networks. CoRR abs/1803.07422 (2018). arXiv:1803.07422 http://arxiv.org/abs/1803.07422
- Frühstück et al. (2019) Anna Frühstück, Ibraheem Alhashim, and Peter Wonka. 2019. TileGAN: synthesis of large-scale non-homogeneous textures. ACM Trans. Graph. 38, 4 (2019), 58:1–58:11. https://doi.org/10.1145/3306346.3322993
- Goodfellow et al. (2014) Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. 2014. Generative Adversarial Nets. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, Zoubin Ghahramani, Max Welling, Corinna Cortes, Neil D. Lawrence, and Kilian Q. Weinberger (Eds.). 2672–2680. http://papers.nips.cc/paper/5423-generative-adversarial-nets
- Gulrajani et al. (2017) Ishaan Gulrajani, Faruk Ahmed, Martín Arjovsky, Vincent Dumoulin, and Aaron C. Courville. 2017. Improved Training of Wasserstein GANs. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (Eds.). 5767–5777. http://papers.nips.cc/paper/7159-improved-training-of-wasserstein-gans
- He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep Residual Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016. IEEE Computer Society, 770–778. https://doi.org/10.1109/CVPR.2016.90
- Heusel et al. (2017) Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. 2017. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (Eds.). 6626–6637. http://papers.nips.cc/paper/7240-gans-trained-by-a-two-time-scale-update-rule-converge-to-a-local-nash-equilibrium
- Hong et al. (2018) Seunghoon Hong, Dingdong Yang, Jongwook Choi, and Honglak Lee. 2018. Inferring Semantic Layout for Hierarchical Text-to-Image Synthesis. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018. IEEE Computer Society, 7986–7994. https://doi.org/10.1109/CVPR.2018.00833
- Huang and Belongie (2017) Xun Huang and Serge J. Belongie. 2017. Arbitrary Style Transfer in Real-Time with Adaptive Instance Normalization. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017. IEEE Computer Society, 1510–1519. https://doi.org/10.1109/ICCV.2017.167
- Huang et al. (2018) Xun Huang, Ming-Yu Liu, Serge J. Belongie, and Jan Kautz. 2018. Multimodal Unsupervised Image-to-Image Translation. In Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part III (Lecture Notes in Computer Science, Vol. 11207), Vittorio Ferrari, Martial Hebert, Cristian Sminchisescu, and Yair Weiss (Eds.). Springer, 179–196. https://doi.org/10.1007/978-3-030-01219-9_11
- Isola et al. (2017) Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. 2017. Image-to-Image Translation with Conditional Adversarial Networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017. IEEE Computer Society, 5967–5976. https://doi.org/10.1109/CVPR.2017.632
- Jr. and Tesafaye (2006) Karl Ricanek Jr. and Tamirat Tesafaye. 2006. MORPH: A Longitudinal Image Database of Normal Adult Age-Progression. In Seventh IEEE International Conference on Automatic Face and Gesture Recognition (FGR 2006), 10-12 April 2006, Southampton, UK. IEEE Computer Society, 341–345. https://doi.org/10.1109/FGR.2006.78
- Karras et al. (2018) Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. 2018. Progressive Growing of GANs for Improved Quality, Stability, and Variation. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. https://openreview.net/forum?id=Hk99zCeAb
- Karras et al. (2019b) Tero Karras, Samuli Laine, and Timo Aila. 2019b. A Style-Based Generator Architecture for Generative Adversarial Networks. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019. Computer Vision Foundation / IEEE, 4401–4410. https://doi.org/10.1109/CVPR.2019.00453
- Karras et al. (2019a) Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. 2019a. Analyzing and Improving the Image Quality of StyleGAN. CoRR abs/1912.04958 (2019). arXiv:1912.04958 http://arxiv.org/abs/1912.04958
- Krizhevsky et al. (2017) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2017. ImageNet classification with deep convolutional neural networks. Commun. ACM 60, 6 (2017), 84–90. https://doi.org/10.1145/3065386
- Langner et al. (2010) Oliver Langner, Ron Dotsch, Gijsbert Bijlstra, Daniel HJ Wigboldus, Skyler T Hawk, and AD Van Knippenberg. 2010. Presentation and validation of the Radboud Faces Database. Cognition and emotion 24, 8 (2010), 1377–1388.
- Lee et al. (2019) Cheng-Han Lee, Ziwei Liu, Lingyun Wu, and Ping Luo. 2019. MaskGAN: Towards Diverse and Interactive Facial Image Manipulation. CoRR abs/1907.11922 (2019). arXiv:1907.11922 http://arxiv.org/abs/1907.11922
- Li and Wand (2016) Chuan Li and Michael Wand. 2016. Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks. In Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part III (Lecture Notes in Computer Science, Vol. 9907), Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling (Eds.). Springer, 702–716. https://doi.org/10.1007/978-3-319-46487-9_43
- Liu et al. (2017) Ming-Yu Liu, Thomas Breuel, and Jan Kautz. 2017. Unsupervised Image-to-Image Translation Networks. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (Eds.). 700–708. http://papers.nips.cc/paper/6672-unsupervised-image-to-image-translation-networks
- Liu et al. (2019) Ming-Yu Liu, Xun Huang, Arun Mallya, Tero Karras, Timo Aila, Jaakko Lehtinen, and Jan Kautz. 2019. Few-Shot Unsupervised Image-to-Image Translation. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019. IEEE, 10550–10559. https://doi.org/10.1109/ICCV.2019.01065
- Liu et al. (2015) Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. 2015. Deep Learning Face Attributes in the Wild. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015. IEEE Computer Society, 3730–3738. https://doi.org/10.1109/ICCV.2015.425
- Mao et al. (2017) Xudong Mao, Qing Li, Haoran Xie, Raymond Y. K. Lau, Zhen Wang, and Stephen Paul Smolley. 2017. Least Squares Generative Adversarial Networks. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017. IEEE Computer Society, 2813–2821. https://doi.org/10.1109/ICCV.2017.304
- Mescheder et al. (2018) Lars M. Mescheder, Andreas Geiger, and Sebastian Nowozin. 2018. Which Training Methods for GANs do actually Converge?. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018 (Proceedings of Machine Learning Research, Vol. 80), Jennifer G. Dy and Andreas Krause (Eds.). PMLR, 3478–3487. http://proceedings.mlr.press/v80/mescheder18a.html
- Mirza and Osindero (2014) Mehdi Mirza and Simon Osindero. 2014. Conditional Generative Adversarial Nets. CoRR abs/1411.1784 (2014). arXiv:1411.1784 http://arxiv.org/abs/1411.1784
- Miyato et al. (2018) Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. 2018. Spectral Normalization for Generative Adversarial Networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. https://openreview.net/forum?id=B1QRgziT-
- Miyato and Koyama (2018) Takeru Miyato and Masanori Koyama. 2018. cGANs with Projection Discriminator. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. https://openreview.net/forum?id=ByS1VpgRZ
- Park et al. (2019) Taesung Park, Ming-Yu Liu, Ting-Chun Wang, and Jun-Yan Zhu. 2019. Semantic Image Synthesis With Spatially-Adaptive Normalization. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019. Computer Vision Foundation / IEEE, 2337–2346. https://doi.org/10.1109/CVPR.2019.00244
- Radford et al. (2016) Alec Radford, Luke Metz, and Soumith Chintala. 2016. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, Yoshua Bengio and Yann LeCun (Eds.). http://arxiv.org/abs/1511.06434
- Reed et al. (2016) Scott E. Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. 2016. Generative Adversarial Text to Image Synthesis. In Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016 (JMLR Workshop and Conference Proceedings, Vol. 48), Maria-Florina Balcan and Kilian Q. Weinberger (Eds.). JMLR.org, 1060–1069. http://proceedings.mlr.press/v48/reed16.html
- Slossberg et al. (2018) Ron Slossberg, Gil Shamai, and Ron Kimmel. 2018. High Quality Facial Surface and Texture Synthesis via Generative Adversarial Networks. In Computer Vision - ECCV 2018 Workshops - Munich, Germany, September 8-14, 2018, Proceedings, Part III (Lecture Notes in Computer Science, Vol. 11131), Laura Leal-Taixé and Stefan Roth (Eds.). Springer, 498–513. https://doi.org/10.1007/978-3-030-11015-4_36
- Szegedy et al. (2016) Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2016. Rethinking the Inception Architecture for Computer Vision. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016. IEEE Computer Society, 2818–2826. https://doi.org/10.1109/CVPR.2016.308
- Wang et al. (2018) Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. 2018. High-Resolution Image Synthesis and Semantic Manipulation With Conditional GANs. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018. IEEE Computer Society, 8798–8807. https://doi.org/10.1109/CVPR.2018.00917
- Xu et al. (2018) Tao Xu, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, Xiaolei Huang, and Xiaodong He. 2018. AttnGAN: Fine-Grained Text to Image Generation With Attentional Generative Adversarial Networks. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018. IEEE Computer Society, 1316–1324. https://doi.org/10.1109/CVPR.2018.00143
- Yu et al. (2018) Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, and Thomas S. Huang. 2018. Generative Image Inpainting With Contextual Attention. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018. IEEE Computer Society, 5505–5514. https://doi.org/10.1109/CVPR.2018.00577
- Zhang et al. (2018) Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. 2018. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018. IEEE Computer Society, 586–595. https://doi.org/10.1109/CVPR.2018.00068
- Zhu et al. (2016) Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman, and Alexei A. Efros. 2016. Generative Visual Manipulation on the Natural Image Manifold. In Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part V (Lecture Notes in Computer Science, Vol. 9909), Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling (Eds.). Springer, 597–613. https://doi.org/10.1007/978-3-319-46454-1_36
- Zhu et al. (2017a) Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. 2017a. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017. IEEE Computer Society, 2242–2251. https://doi.org/10.1109/ICCV.2017.244
- Zhu et al. (2017b) Jun-Yan Zhu, Richard Zhang, Deepak Pathak, Trevor Darrell, Alexei A. Efros, Oliver Wang, and Eli Shechtman. 2017b. Toward Multimodal Image-to-Image Translation. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (Eds.). 465–476. http://papers.nips.cc/paper/6650-toward-multimodal-image-to-image-translation
- Zhu et al. (2019) Peihao Zhu, Rameen Abdal, Yipeng Qin, and Peter Wonka. 2019. SEAN: Image Synthesis with Semantic Region-Adaptive Normalization. CoRR abs/1911.12861 (2019). arXiv:1911.12861 http://arxiv.org/abs/1911.12861
Appendix A Appendix


