
Unlabeled Data Guided Semi-supervised
Histopathology Image Segmentation
978-1-7281-6215-7/20/$31.00 ©2020 IEEE.

Hongxiao Wang, Lin Yang, Hao Zheng, Yizhe Zhang (University of Notre Dame, Notre Dame, IN 46556, USA; [email protected])
Jianxu Chen (Allen Institute for Cell Science, Seattle, WA 98103, USA; [email protected])
Danny Z. Chen (University of Notre Dame, Notre Dame, IN 46556, USA; [email protected])
Abstract

Automatic histopathology image segmentation is crucial to disease analysis. Limited available labeled data hinders the generalizability of trained models under the fully supervised setting. Semi-supervised learning (SSL) based on generative methods has been proven to be effective in utilizing diverse image characteristics. However, it has not been well explored what kinds of generated images would be more useful for model training and how to use such images. In this paper, we propose a new data guided generative method for histopathology image segmentation by leveraging the unlabeled data distributions. First, we design an image generation module. Image content and style are disentangled and embedded in a clustering-friendly space to utilize their distributions. New images are synthesized by sampling and cross-combining contents and styles. Second, we devise an effective data selection policy for judiciously sampling the generated images: (1) to make the generated training set better cover the dataset, the clusters that are underrepresented in the original training set are covered more; (2) to make the training process more effective, we identify and oversample the images of “hard cases” in the data for which annotated training data may be scarce. Our method is evaluated on glands and nuclei datasets. We show that under both the inductive and transductive settings, our SSL method consistently boosts the performance of common segmentation models and attains state-of-the-art results.

Index Terms:
Image Segmentation, Semi-Supervised Learning, Image Generation

I Introduction

Deep learning methods have achieved unprecedented high performance in segmenting histopathology images [1, 2]. Still, the generalizability of such methods is hindered by limited available annotated training data, because medical experts are commonly required for annotation and there are high variations of image characteristics due to scanner effects, different staining protocols, patients, and disease states. With limited annotation, deep learning model training often covers only a limited fraction of the histopathology data space. There could exist considerable discrepancy (in appearance) between the labeled and unlabeled sets. Thus, the trained segmentation models are at risk of over-fitting and do not generalize well to unseen data.

An array of methods based on semi-supervised learning (SSL) has been proposed to make the most of limited training data and improve model generalizability. The assumption is that unlabeled images commonly come from the original data distribution and contain useful information. In practice, there is often a large amount of unlabeled data available that is free to use. Some powerful SSL methods used the feature distribution of unlabeled images to reduce the need for labeling. For example, images are projected into a low-dimensional feature space and pseudo-labels are assigned to unlabeled images based on feature clustering [3, 4]. In [5, 6], images were intentionally perturbed to explore the decision boundary for adversarial training. While such SSL methods are commonly used for classification tasks, applying them to segmentation tasks is not straightforward because it is hard to define and utilize the distribution/clusters of unlabeled images due to the high-dimensional feature space of images. Dominant SSL methods for segmentation include using auxiliary loss [7], consistency learning [8], and pseudo-labeling [9]. For example, an auxiliary loss was applied to encourage the model to produce output of plausible shapes [7]; an ensemble method [10] was proposed to train a meta-learner with generated pseudo-labels; some methods [8, 11, 12] aimed to make the models give consistent predictions under random perturbations (e.g., color jittering, rotation). For histopathology images, the aforementioned methods have a main drawback: image characteristics different from the labeled samples cannot be effectively and efficiently utilized in the training process.

To deal with diverse characteristic variations of unlabeled data, an intuitive and effective way is to adapt style-transfer based data generation methods for SSL [13, 14, 15]. Specifically, these methods generated new (image, mask) pairs by transferring styles extracted from unlabeled images to labeled images. However, there are two unexplored issues in previous work. First, neither image-based [16, 17] nor domain-based [18, 19] style transfer methods could concurrently provide a lightweight style representation and transfer styles between images within one (the same) domain, which is important for exploiting the intrinsic data diversity and distributions of one dataset. Second, the associated augmentation policies of the known data generation methods were not carefully designed. For example, the methods [20, 21] filtered the generated images with higher feature similarity to the training images; in [22, 23], complex training procedures were leveraged to search for an optimal combination of very basic image augmentation operations (e.g., rotation, flip, color jittering). These policies did not consider the characteristics of unlabeled data and the discrepancy between labeled and unlabeled data, which can provide helpful information for data generation.

In this paper, we propose a unified framework to exploit image characteristics and effectively employ them to guide data generation. We address two main challenges: (1) the image distribution is hard to observe and explicitly define; (2) an appropriate data generation policy needs to be designed for sampling from the augmented dataset to assist segmentation model training efficiently.

Figure 1: Examples of style transferred images and segmentation results of DCN [24]. Different styles affect the segmentation performance of DCN.

Suppose our dataset consists of labeled and unlabeled sets. To address the first challenge, we exploit the relation between segmentation and image characteristics. We develop an Image Generation Module to learn and disentangle image representations and define characteristics distributions. Specifically, we consider two complementary key image characteristics in histopathology images: style and content. Style variation is caused by technical issues, including variations of staining protocols and scanner effects. Although it may not affect human judgment much, it can lead to reduced deep learning segmentation model performance (e.g., see Fig. 1). Content variation is directly related to specific segmentation tasks, and is attributed to object shapes and distribution patterns related to disease types. Our module disentangles each histopathology image into style and content representations, whose distributions can be easily exploited by clustering. To make the features clustering-friendly, we embed style and content into low-dimensional spaces, for which an interpolation property is enforced to explore image similarity distances. We then generate new images by combining judiciously-selected pairs of style and content.

For the second challenge, we develop a new data generation policy to handle underrepresented and "hard case" images (which usually impair model generalizability the most). To remedy the unbalanced data distributions in the labeled set, we propose a distribution matching strategy to match the statistics of the labeled set with those of the unlabeled set across clusters of images by adding generated images to the underrepresented clusters. Moreover, hard cases are highly valuable for training networks but are often only a small fraction of the whole dataset and not sufficiently represented in the labeled data. For this, we propose a hard case covering strategy that identifies hard cases via output variations under different style changes and oversamples them by referring to the unlabeled data. To make the generated images meaningful for segmentation, we only transfer styles between images that are sampled from the same content cluster.

We conduct extensive experiments on two public histopathology datasets of nuclei [25] and glands [26], which show that our new SSL method achieves large segmentation improvement using common segmentation models through data generation. For both inductive and transductive settings, we attain state-of-the-art performance.

II Method

Fig. 2 gives an overview of our proposed SSL framework, which consists of two parts. First, we introduce our image generation module with an interpolation property enforced in latent space to exploit image characteristics distributions. Second, using the image generation module, we propose a new strategy to sample from the generated data to bridge the distribution discrepancy between the labeled and unlabeled data, and a strategy to identify and handle the hard cases for segmentation based on a devised uncertainty metric.

Figure 2: The pipeline of our proposed method.

II-A Image Generation Module

MUNIT [18] is a state-of-the-art approach for cross-domain transformation, which explicitly disentangles image representation into content and style and provides a certain degree of explainability of the extracted latent image representation. It assumes that the source domain and target domain have distinguishable style spaces. However, in our problem this is not the case (i.e., our source and target domains significantly overlap), and MUNIT cannot yield diversified images by transferring style from one image to another since both images are from the same dataset. Image-based methods can transfer styles between two arbitrary images, but their representations of style and content are both in a high-dimensional latent space (e.g., feature maps [17] or matrices [27, 16]), and thus it is hard to explore the image characteristics distributions. We will show how to utilize domain-based and image-based methods to (1) generate style-diversified images for model training and (2) make the image characteristics concisely represented so that their distributions can be effectively explored for segmentation.

We extract features with an Image Generation Module. To capture local contents, we consider image characteristics distributions on uniformly cropped image patches. Given a set of $M$ image patches $\mathcal{X}=\{x_a\}_{a=1}^{M}$ from our dataset, each patch $x_a$ can be encoded into a style vector $s_a=E^s(x_a)$ by a style encoder $E^s$ and a content vector $c_a=E^c(x_a)$ by a content encoder $E^c$. All the $s_a$'s and $c_a$'s form a style space $S=\{s_a\}_{a=1}^{M}$ and a content space $C=\{c_a\}_{a=1}^{M}$, respectively. A new image patch can be synthesized by a generator $G(c_a, s_b)$ combining any content vector $c_a$ and style vector $s_b$. The total loss consists of our style matching loss, GAN loss, and reconstruction loss [18], defined as:

L = w_1 L_{style} + w_2 L_{GAN} + w_3 L_{recon}   (1)

where the $w_i$'s are hyper-parameters controlling the weight of each term. We design the style matching loss to achieve two goals: (1) preventing training collapse (which would cause all information to be encoded into the content and the generator to ignore the style); (2) making the point distribution $\{s_a\}_{a=1}^{M}$ in the low-dimensional latent space $S$ reflect the image style distribution. The core of the style matching loss is an interpolation relation, which associates the point-wise distance in the embedded space with the similarity between image patches.
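As a rough illustration, the following PyTorch sketch shows how the content encoder $E^c$, style encoder $E^s$, and generator $G$ could be wired together and how the weighted objective of Eq. (1) is assembled. The layer widths, the 8-dimensional style code, and the additive style injection are our own simplifying assumptions, not the exact architecture used in the paper.

```python
import torch.nn as nn

class StyleEncoder(nn.Module):
    """E^s: image patch -> low-dimensional style vector (dimension is an assumption)."""
    def __init__(self, style_dim=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(64, style_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class ContentEncoder(nn.Module):
    """E^c: image patch -> spatial content code."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU())

    def forward(self, x):
        return self.conv(x)

class Generator(nn.Module):
    """G: (content code, style vector) -> image patch."""
    def __init__(self, style_dim=8):
        super().__init__()
        self.style_proj = nn.Linear(style_dim, 64)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh())

    def forward(self, c, s):
        # Inject the style by modulating the content features (an assumption;
        # a MUNIT-style AdaIN injection would likely be used in practice).
        return self.deconv(c + self.style_proj(s)[:, :, None, None])

def total_loss(l_style, l_gan, l_recon, w1=0.002, w2=1.0, w3=10.0):
    """Weighted sum of Eq. (1), using the weights reported in Sec. III-A."""
    return w1 * l_style + w2 * l_gan + w3 * l_recon
```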

To attain the first goal, when transferring a patch $x_a$ to the style $s_b = E^s(x_b)$ of a patch $x_b$, we encourage the generated patch $x_{g1} = G(E^c(x_a), s_b)$ to have the target style $s_b$, by minimizing a style similarity metric [17] between $x_{g1}$ and $x_b$:

L_s(x_{g1}, x_b) = \sum_{\ell=1}^{L} \frac{\alpha_\ell}{2 N_\ell^2} \, \| J_\ell[x_{g1}] - J_\ell[x_b] \|^2   (2)

where $J_\ell[x]$ is a Gram matrix calculated on the vectorized VGG features of layer $\ell$, $\alpha_\ell$ is the weight of layer $\ell$, and $N_\ell$ is the number of filters in layer $\ell$.
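A sketch of the Gram-matrix style distance of Eq. (2) is given below. The VGG feature extraction is abstracted away: `feats_g` and `feats_b` are assumed to be lists of feature maps (one per selected VGG layer) for the generated and target patches, and `alphas` are the layer weights $\alpha_\ell$; the function names are ours.

```python
import torch

def gram(feat):
    """Gram matrix J_l[x] of a (B, N_l, H, W) feature map, per channel pair."""
    b, n, h, w = feat.shape
    f = feat.reshape(b, n, h * w)
    return torch.bmm(f, f.transpose(1, 2))

def style_distance(feats_g, feats_b, alphas):
    """L_s: weighted squared distance between Gram matrices, as in Eq. (2)."""
    loss = 0.0
    for fg, fb, alpha in zip(feats_g, feats_b, alphas):
        n_l = fg.shape[1]  # number of filters N_l in layer l
        loss = loss + alpha / (2 * n_l ** 2) * ((gram(fg) - gram(fb)) ** 2).sum()
    return loss
```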

For the second goal, we encourage two patches with a higher similarity to attain a smaller distance in $S$. We train the model by enforcing the interpolation property: for a linear interpolation in the latent space, $s_{g2} = (1-\lambda)E^s(x_a) + \lambda E^s(x_b)$, where $\lambda \in [0, 1]$, the style of the generated patch $x_{g2} = G(E^c(x_a), s_{g2})$ transfers smoothly to $s_b$ as the distance between $s_{g2}$ and $s_b$ decreases. This property can be learned by optimizing the following style matching loss:

L_{style} = \mathbb{E}_{x_a, x_b, \lambda} \, \| (1-\lambda) L_s(x_{g2}, x_a) - \lambda L_s(x_{g2}, x_b) \|   (3)

where $\lambda$ is uniformly sampled from $[0, 1]$ during training. By the interpolation property, patches with similar styles tend to have close style vectors, while dissimilar patches tend to have style vectors far apart from one another. Hence, the distribution of style vectors in the style space is encouraged to reflect the patch style distribution of the given dataset. Only Eq. (3) is actually used in training, since Eq. (2) is the special case of Eq. (3) with $\lambda = 1$.
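The interpolation-based style matching loss of Eq. (3) could then be computed as in the sketch below, where `enc_c`, `enc_s`, and `gen` are the content encoder, style encoder, and generator, `vgg_feats` is a hypothetical VGG feature extractor, and `style_distance` is the Gram-matrix distance sketched above; all of these names are assumptions.

```python
import torch

def style_matching_loss(x_a, x_b, enc_c, enc_s, gen, vgg_feats, alphas):
    """One-sample estimate of Eq. (3) for a pair of patches (x_a, x_b)."""
    lam = torch.rand(1).item()                           # lambda ~ U[0, 1]
    s_interp = (1 - lam) * enc_s(x_a) + lam * enc_s(x_b)
    x_g2 = gen(enc_c(x_a), s_interp)                     # patch with interpolated style
    l_a = style_distance(vgg_feats(x_g2), vgg_feats(x_a), alphas)
    l_b = style_distance(vgg_feats(x_g2), vgg_feats(x_b), alphas)
    # Eq. (3): the two style distances should balance according to lambda.
    return torch.abs((1 - lam) * l_a - lam * l_b)
```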

After obtaining the extracted content set $C=\{c_a\}_{a=1}^{M}$ and style set $S=\{s_a\}_{a=1}^{M}$ from all the patches, we conduct agglomerative clustering to obtain content clusters $\{C_i\}_{i=1}^{m}$ of $C$ and style clusters $\{S_j\}_{j=1}^{n}$ of $S$, where $m$ and $n$ are the numbers of such clusters, respectively. A patch space $H$ is defined by the Cartesian product of $\{C_i\}_{i=1}^{m}$ and $\{S_j\}_{j=1}^{n}$: $H = \{C_i\}_{i=1}^{m} \times \{S_j\}_{j=1}^{n} = \{H_{ij} = \{(c, s) \mid c \in C_i, s \in S_j\}\}$. Each $H_{ij}$ corresponds to the possible patches whose content vectors belong to content cluster $C_i$ and style vectors belong to style cluster $S_j$. We explore the statistics of the patch space $H$ based on the information of each $H_{ij}$, whose numbers of labeled and unlabeled patches are denoted by $N(H_{ij}^{label})$ and $N(H_{ij}^{unlabel})$, respectively.
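The clustering and patch-space bookkeeping can be sketched as follows, using agglomerative clustering from scikit-learn. We assume `contents` and `styles` are flattened $(M, d)$ arrays of the extracted vectors and `is_labeled` marks which patches carry annotations; the default cluster numbers mirror the GlaS setting in Sec. III-A and the function name is ours.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def build_patch_space(contents, styles, is_labeled, m=5, n=8):
    """Cluster content/style vectors and count labeled/unlabeled patches per cell H_ij."""
    content_id = AgglomerativeClustering(n_clusters=m).fit_predict(contents)
    style_id = AgglomerativeClustering(n_clusters=n).fit_predict(styles)
    n_label = np.zeros((m, n), dtype=int)      # N(H_ij^label)
    n_unlabel = np.zeros((m, n), dtype=int)    # N(H_ij^unlabel)
    for i, j, lab in zip(content_id, style_id, is_labeled):
        if lab:
            n_label[i, j] += 1
        else:
            n_unlabel[i, j] += 1
    return content_id, style_id, n_label, n_unlabel
```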

The same reconstruction loss for images and the latent space as in MUNIT [18] is applied to the generated images. The content and style should be consistent after decoding and encoding. The reconstruction loss is computed as:

L_{recon} = L^x_{recon} + L^c_{recon} + L^s_{recon}   (4)

with:

L^x_{recon} = \mathbb{E}_x \| x - G(E^c(x), E^s(x)) \|_1   (5)
L^c_{recon} = \mathbb{E}_{c,s} \| c - E^c(G(c, s)) \|_1   (6)
L^s_{recon} = \mathbb{E}_{c,s} \| s - E^s(G(c, s)) \|_1   (7)

where $L^x_{recon}$, $L^c_{recon}$, and $L^s_{recon}$ are the image, content, and style reconstruction losses, respectively.
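A compact sketch of the reconstruction terms in Eqs. (4)-(7), following MUNIT's round-trip consistency, is shown below; the encoder and generator interfaces are the same assumptions as in the earlier sketch.

```python
import torch.nn.functional as F

def reconstruction_loss(x, c, s, enc_c, enc_s, gen):
    """L_recon of Eq. (4): image, content, and style round-trip L1 terms."""
    l_x = F.l1_loss(gen(enc_c(x), enc_s(x)), x)    # Eq. (5): image reconstruction
    x_fake = gen(c, s)
    l_c = F.l1_loss(enc_c(x_fake), c)              # Eq. (6): content reconstruction
    l_s = F.l1_loss(enc_s(x_fake), s)              # Eq. (7): style reconstruction
    return l_x + l_c + l_s
```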

We seek to attain appearance realism of the generated images by training the discriminator using samples generated with interpolated styles, so that the generated image distribution matches the original image distribution. The realism loss function is:

L_{GAN} = \min_{E, G} \max_{D} \mathbb{E}_{x_a, s} \big[ \log(1 - D(G(c_a, s))) + \log(D(x_a)) \big]   (8)

where $x_a$ follows the original image distribution ($x_a \sim p(x)$), $s = (1-\lambda) s_a + \lambda s_b$ with $s_a, s_b \sim p(s)$, and $c_a = E^c(x_a)$ is encoded from $x_a$.
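The adversarial term of Eq. (8) could be implemented roughly as below, assuming a discriminator `disc` that outputs probabilities; the split into separate discriminator and generator losses is the usual GAN training convention, not a detail specified in the paper.

```python
import torch

def gan_losses(x_a, s_a, s_b, enc_c, gen, disc):
    """Discriminator and generator losses for Eq. (8) with an interpolated style."""
    lam = torch.rand(1).item()
    s = (1 - lam) * s_a + lam * s_b                      # interpolated style
    x_fake = gen(enc_c(x_a), s)
    eps = 1e-8
    d_real, d_fake = disc(x_a), disc(x_fake)
    # D maximizes log D(x_a) + log(1 - D(G(c_a, s))); we minimize the negative.
    loss_d = -(torch.log(d_real + eps) + torch.log(1 - d_fake.detach() + eps)).mean()
    # E, G minimize log(1 - D(G(c_a, s))), as written in Eq. (8).
    loss_g = torch.log(1 - d_fake + eps).mean()
    return loss_d, loss_g
```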

II-B Generation Policy

Using the extracted $c_i \in C$ and $s_j \in S$, we can generate a set of patches that may potentially help segmentation models under a basic random generation setting (by uniform sampling). However, by doing so, (1) biomedically invalid patches may be generated without considering the content similarity between the source and target images, and (2) underrepresented and rare hard cases are overwhelmed by other "common" image characteristics. In this section, we first discuss a basic random generation strategy with content matching, and then further improve its effectiveness by proposing our distribution matching policy and hard case covering policy.

Random Generation with Content Matching. In common practice, one may generate data by transferring a labeled image to any other style while preserving the original segmentation label. This generates an augmented set (denoted as $S_{gen}$) from the original labeled set ($S_{label}$). However, invalid images can be generated when a target style is not biomedically valid with respect to the original content (e.g., an image of irregular-shaped cancerous glands with benign tubular texture). To handle this, we propose a content matching strategy that exchanges styles only between patches from the same content cluster, as follows:

S_{gen}(S_{label}, S_{label} \cup S_{unlabel}) = \{ G(E^c(x_a), E^s(x_b)) \mid \forall x_a \in S_{label}, \forall x_b \in S_{label} \cup S_{unlabel}, \ E^c(x_a) \text{ and } E^c(x_b) \text{ are in the same content cluster} \}   (9)
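A sketch of content-matched generation (Eq. (9)) is given below. To keep memory bounded, it enumerates (content source, style source) index pairs and leaves the actual decoding $G(E^c(x_a), E^s(x_b))$ to training time; `content_id` and `is_labeled` come from the clustering step above, and the function name is ours.

```python
def content_matched_pairs(content_id, is_labeled):
    """Index pairs (a, b) eligible for style exchange under content matching."""
    pairs = []
    for a, ca in enumerate(content_id):
        if not is_labeled[a]:
            continue                        # x_a must be labeled so its mask can be reused
        for b, cb in enumerate(content_id):
            if a != b and ca == cb:         # x_b may be labeled or unlabeled
                pairs.append((a, b))        # generate G(E^c(x_a), E^s(x_b)) on the fly
    return pairs
```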

Still, $|S_{gen}| \gg |S_{label}|$, and we need to strike a balance between the effectiveness of the augmented set and avoiding excessive perturbation of the original labeled (training) data. During training, $S_{gen}$ is used with a probability $R_a$ (and $S_{label}$ with probability $1 - R_a$).

Policy 1: Generation with Distribution Matching. To improve segmentation performance, we seek to remedy the observed statistics discrepancy between the labeled and unlabeled sets (i.e., between $H_{ij}^{label}$ and $H_{ij}^{unlabel}$). In a preliminary study, we found that the numbers of patches in different $H_{ij}$'s are highly unbalanced and show significant statistical differences between the labeled and unlabeled sets. Randomly sampling patches could bury rare image characteristics among other image characteristics, causing the segmentation models to be trained inadequately.

Thus, we propose to sample patches based on the statistics of the unlabeled data. In particular, we focus on the underrepresented clusters, whose image characteristics appear frequently in the unlabeled set but rarely in the labeled set. Each $H_{ij}$ is selected with probability $N(H_{ij}^{unlabel}) / \sum_{k,l} N(H_{kl}^{unlabel})$. Then we uniformly select samples from $S_{gen} \cap H_{ij}$ (or $S_{label} \cap H_{ij}$) with probability $R_a$ (or $1 - R_a$). In this way, the underrepresented clusters benefit the most, as they get higher chances to be included in the augmented training data.
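Policy 1 can be sketched as a two-stage draw: pick a cell $H_{ij}$ in proportion to its unlabeled count, then take a generated or an original labeled patch from that cell with probability $R_a$ or $1 - R_a$. The function names are ours; `n_unlabel` is the per-cell count matrix from the clustering sketch above.

```python
import numpy as np

def sample_cell_distribution_matching(n_unlabel, rng=None):
    """Draw a cell H_ij with probability N(H_ij^unlabel) / sum_kl N(H_kl^unlabel)."""
    rng = rng or np.random.default_rng()
    p = n_unlabel.flatten() / n_unlabel.sum()
    idx = rng.choice(n_unlabel.size, p=p)
    return np.unravel_index(idx, n_unlabel.shape)        # (i, j) of the chosen cell

def pick_pool(rng=None, r_a=0.15):
    """Within the chosen cell, take a generated patch with probability R_a."""
    rng = rng or np.random.default_rng()
    return "generated" if rng.random() < r_a else "labeled"
```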

Policy 2: Generation with Hard Case Covering. To further improve segmentation, we identify and handle the "hard cases". In our preliminary experiments, we found that segmentation performance can differ considerably across clusters. For example, the performance was not good enough on irregular-shaped or highly stained abnormal tissues for all the segmentation models that we considered.

To address this issue, we propose to identify these hard cases automatically. Inspired by bootstrapping-based methods [28, 1], we quantify the uncertainty of a segmentation model by its prediction variance when the input is transferred to different target styles. For the $n$ style clusters $\{S_j\}_{j=1}^{n}$, we select one representative target style $Rep(S_l)$ from each $S_l$, namely the style with the minimum sum of distances to all the other styles $s_k \in S_l$. For each $H_{ij}$, its uncertainty $U_{ij}$ for a segmentation model $Seg$ is calculated as:

U_{ij} = \frac{1}{N(H_{ij}^{unlabel})} \sum_{x_a \in H_{ij}^{unlabel}} \mathrm{Variance}_{l \in \{1, 2, \ldots, n\}} \Big( Seg\big(G(E^c(x_a), Rep(S_l))\big) \Big)   (10)
Selecting uncertain clusters more frequently helps the segmentation model reduce potential prediction errors. Thus, we upweight the probability of sampling uncertain clusters: each $H_{ij}$ is selected with probability $U_{ij} / \sum_{k,l} U_{kl}$. This policy has two advantages compared to [1]: (1) our method does not require training separate versions of the segmentation model; (2) our uncertainty value better assesses the segmentation quality, as our experiments show that its coefficient of determination $R^2$ is 15% higher.

Overall, these two policies are complementary to each other and can be combined into a Mixed policy: sample patches with the average probability of distribution matching and hard case covering (i.e., the probability is $N(H_{ij}^{unlabel}) / (2\sum_{k,l} N(H_{kl}^{unlabel})) + U_{ij} / (2\sum_{k,l} U_{kl})$), which comprehensively covers the unlabeled data, especially the hard cases.
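The mixed policy then simply averages the two per-cell sampling distributions, e.g.:

```python
import numpy as np

def mixed_policy_probs(n_unlabel, uncertainty):
    """Per-cell sampling probabilities of the Mixed policy (average of the two policies)."""
    p_dist = n_unlabel / n_unlabel.sum()        # distribution matching
    p_hard = uncertainty / uncertainty.sum()    # hard case covering
    return 0.5 * p_dist + 0.5 * p_hard
```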

III Experiments

III-A Datasets and Implementation Details

We use two public histopathology image datasets in our experiments: GlaS [26] of glands and MoNuSeg [25] of nuclei. We uniformly crop patches of size $384 \times 384$, with a fixed step size of 64 pixels along the width and height, from all the images. The GlaS dataset has 85 images (1697 patches) for training and 80 images (1559 patches) for testing. We use 8 style clusters and 5 content clusters to study its data distributions. The MoNuSeg dataset has 30 high-resolution images, with 16 images (1296 patches) of 4 organs for training and 14 images (1134 patches) of 7 organs for testing (note that 3 test organs are not seen in training). We use 7 style clusters and 3 content clusters. For both datasets, we treat either the training set or a subset of the training set as the labeled set, and the test set or a subset of the training set (ignoring the labels of this subset) as the unlabeled set in our experiments.
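The uniform patch extraction can be reproduced with a few lines, assuming images as NumPy arrays; the 384x384 size and 64-pixel stride follow the text above, and the function name is ours.

```python
import numpy as np

def crop_patches(image, size=384, step=64):
    """Uniformly crop size x size patches with a fixed stride along both axes."""
    h, w = image.shape[:2]
    return [image[y:y + size, x:x + size]
            for y in range(0, h - size + 1, step)
            for x in range(0, w - size + 1, step)]
```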

We use four segmentation models: DCN [24], MildNet [2], FullNet [29], and CIA-Net [30]. Here, DCN and MildNet are common segmentation models; FullNet and CIA-Net attain state-of-the-art performance on the GlaS and MoNuSeg datasets, respectively. The default value of the probability $R_a$ is 0.15. The weights $w_1$, $w_2$, and $w_3$ in the loss function of Eq. (1) are set to 0.002, 1, and 10, respectively. Basic augmentation operations (e.g., flipping, rotation) are applied in all the experiments (denoted as $\rm Aug_{basic}$).

III-B Results

Our experiments consist of three parts. In the qualitative results, we show that our image generation module can effectively generate realistic images and cluster patches based on style and content similarity. In the quantitative results, we evaluate the effectiveness of our new method in improving segmentation performance under both the inductive and transductive SSL settings. In the ablation study, we evaluate the model sensitivity to different generation algorithms and policies.

Qualitative Results. Fig. 3 gives examples of generated images for the GlaS dataset. The patches in the same row are generated from the same content reference image patch (i.e., using the same $c_a$, given by the leftmost patch). The patches in the same column are generated with the same style. The results show that our method can generate diversified, realistic-looking images.

Figure 3: Examples of style transfer results for the GlaS dataset.

Quantitative Results. In the inductive learning setting, we randomly choose 50% of the patches from the original training set as the labeled data $S_{label}$, and use the rest of the training set, ignoring its annotations, as the unlabeled data $S_{unlabel}$ for extracting style information to help improve segmentation performance. Tables I and II show the results. First, one can observe that with only 50% labeled data, our SSL method attains performance better than the results with full annotation for both networks used. Second, compared to the other known SSL methods (RA [31] and CCT [11]), our method yields the best segmentation performance with 50% labeled data. Third, our method can be applied to any segmentation network; in comparison, the SSL algorithm in CCT [11] was designed for a specific auto-encoder segmentation model structure.

In transductive learning, the test images are shown to the model as unlabeled images during training (i.e., $S_{unlabel}$ is the original test set). Transductive learning has found wide applications in biomedical imaging. For example, in high-throughput experiments, a large amount of images (with potentially different styles) needs to be accurately segmented for further analysis, and transductive learning can be very useful in such scenarios. Table II shows the segmentation results. We evaluate the segmentation models (DCN [24], MildNet [2], and CIA-Net [30]) with $\rm Aug_{basic}$ and with our proposed generation policies. First, our method effectively boosts the performance of all the models, yielding better results than the state-of-the-art performance. Second, our method works especially well on the rare hard cases (e.g., unseen organs in MoNuSeg).

TABLE I: Inductive segmentation results of the GlaS dataset.

Labeled Data | Setting  | Model              | F1-A  | F1-B  | Dice-A | Dice-B
100%         | Baseline | MildNet w/o SSL    | 0.914 | 0.844 | 0.913  | 0.836
             |          | FullNet w/o SSL    | 0.924 | 0.853 | 0.914  | 0.856
             |          | RA w/o SSL         | 0.921 | 0.855 | 0.904  | 0.858
             |          | CCT w/o SSL        | 0.888 | 0.780 | 0.887  | 0.829
50%          | Baseline | MildNet w/o SSL    | 0.909 | 0.829 | 0.904  | 0.832
             |          | FullNet w/o SSL    | 0.913 | 0.852 | 0.908  | 0.849
             |          | RA SSL             | 0.916 | 0.862 | 0.897  | 0.856
             |          | CCT SSL            | 0.847 | 0.793 | 0.845  | 0.821
             | Ours     | MildNet SSL (Ours) | 0.917 | 0.840 | 0.905  | 0.845
             |          | FullNet SSL (Ours) | 0.925 | 0.862 | 0.919  | 0.863
TABLE II: Transductive (top) and inductive (bottom) segmentation results of the MoNuSeg dataset.

Model    | Augmentation      | AJI (Seen Organ) | AJI (Unseen Organ) | F1 (Seen Organ) | F1 (Unseen Organ)
DCN      | Aug_basic         | 0.581            | 0.550              | 0.820           | 0.804
DCN      | Mix. Policy       | 0.594            | 0.579              | 0.829           | 0.806
MildNet  | Aug_basic         | 0.585            | 0.566              | 0.829           | 0.821
MildNet  | Mix. Policy       | 0.601            | 0.594              | 0.841           | 0.833
CIA-Net  | Aug_basic         | 0.613            | 0.631              | 0.824           | 0.846
CIA-Net  | Mix. Policy       | 0.632            | 0.650              | 0.857           | 0.851
CIA-Net  | Aug_basic (50%)   | 0.591            | 0.603              | 0.818           | 0.830
CIA-Net  | Mix. Policy (50%) | 0.625            | 0.615              | 0.850           | 0.831
CIA-Net  | Aug_basic (30%)   | 0.479            | 0.354              | 0.767           | 0.732
CIA-Net  | Mix. Policy (30%) | 0.507            | 0.441              | 0.767           | 0.762

Ablation Study. Table III shows the DCN model's performance on the GlaS dataset under the transductive setting. We compare the benefit of our Image Generation Module to segmentation performance against other known style transfer methods, including an image-based method (Gatys et al. [17]) and a domain-based method (MUNIT [18]). For a fair comparison, we use our random generation policy. One can see that our framework achieves the best performance.

We also evaluate the contribution of each component of our generation policy, including content matching (CM), distribution matching (DM), and hard case covering (HC). Performance degrades when only one policy is used or when CM is not applied. The three components of our policy improve segmentation performance in a complementary manner.

TABLE III: Ablation study for our Image Generation Module and generation policy.

Method               | F1-A  | F1-B  | Dice-A | Dice-B
Baseline (Aug_basic) | 0.923 | 0.822 | 0.910  | 0.826
Gatys et al. [17]    | 0.917 | 0.829 | 0.904  | 0.827
MUNIT [18]           | 0.923 | 0.818 | 0.912  | 0.827
Ours (Random Policy) | 0.924 | 0.824 | 0.918  | 0.839
Ours (CM + DM)       | 0.925 | 0.841 | 0.915  | 0.833
Ours (CM + HC)       | 0.922 | 0.832 | 0.918  | 0.836
Ours (DM + HC)       | 0.925 | 0.843 | 0.913  | 0.841
Ours (Mix. Policy)   | 0.926 | 0.850 | 0.915  | 0.848

IV Conclusions

In this paper, we proposed a new unlabeled data guided semi-supervised learning framework for histopathology image segmentation. We designed (1) a style matching loss in our image generation module for image-based style transfer and exploring data distributions with concise style representations, and (2) new policies for guiding our image generation procedure. The effectiveness of our method was demonstrated by comprehensive experiments on two datasets.

Acknowledgement. This research was supported in part by NSF Grant CCF-1617735.

References

  • [1] L. Yang, Y. Zhang, J. Chen, S. Zhang, and D. Z. Chen, “Suggestive annotation: A deep active learning framework for biomedical image segmentation,” in MICCAI.   Springer, 2017, pp. 399–407.
  • [2] S. Graham, H. Chen, J. Gamper, Q. Dou, P.-A. Heng, D. Snead, Y. W. Tsang, and N. Rajpoot, “MILD-Net: Minimal information loss dilated network for gland instance segmentation in colon histology images,” Medical Image Analysis, vol. 52, pp. 199–211, 2019.
  • [3] A. Iscen, G. Tolias, Y. Avrithis, and O. Chum, “Label propagation for deep semi-supervised learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 5070–5079.
  • [4] W. Shi, Y. Gong, C. Ding, Z. Ma, X. Tao, and N. Zheng, “Transductive semi-supervised deep learning using min-max features,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 299–315.
  • [5] W. Li, Z. Wang, Y. Yue, J. Li, W. Speier, M. Zhou, and C. Arnold, “Semi-supervised learning using adversarial training with good and bad samples,” Machine Vision and Applications, vol. 31, no. 6, pp. 1–11, 2020.
  • [6] W. He, B. Li, and D. Song, “Decision boundary analysis of adversarial examples,” in ICLR, 2018.
  • [7] Z. Huang, X. Wang, J. Wang, W. Liu, and J. Wang, “Weakly-supervised semantic segmentation network with deep seeded region growing,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7014–7023.
  • [8] G. French, T. Aila, S. Laine, M. Mackiewicz, and G. Finlayson, “Semi-supervised semantic segmentation needs strong, high-dimensional perturbations,” arXiv preprint arXiv:1906.01916, 2019.
  • [9] Y. Wei, H. Xiao, H. Shi, Z. Jie, J. Feng, and T. S. Huang, “Revisiting dilated convolution: A simple approach for weakly-and semi-supervised semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7268–7277.
  • [10] H. Zheng, Y. Zhang, L. Yang, P. Liang, Z. Zhao, C. Wang, and D. Z. Chen, “A new ensemble learning framework for 3D biomedical image segmentation,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, 2019, pp. 5909–5916.
  • [11] Y. Ouali, C. Hudelot, and M. Tami, “Semi-supervised semantic segmentation with cross-consistency training,” in The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
  • [12] D. Berthelot, N. Carlini, I. Goodfellow, N. Papernot, A. Oliver, and C. A. Raffel, “MixMatch: A holistic approach to semi-supervised learning,” in NeurIPS, 2019, pp. 5049–5059.
  • [13] A. BenTaieb and G. Hamarneh, “Adversarial stain transfer for histopathology image analysis,” IEEE Transactions on Medical Imaging, vol. 37, no. 3, pp. 792–802, 2017.
  • [14] M. T. Shaban, C. Baur, N. Navab, and S. Albarqouni, “StainGan: Stain style transfer for digital histological images,” arXiv preprint arXiv:1804.01601, 2018.
  • [15] C. Ma, Z. Ji, and M. Gao, “Neural style transfer improves 3D cardiovascular MR image segmentation on inconsistent data,” in MICCAI.   Springer, 2019, pp. 128–136.
  • [16] Y. Li, M.-Y. Liu, X. Li, M.-H. Yang, and J. Kautz, “A closed-form solution to photorealistic image stylization,” in ECCV, 2018, pp. 453–468.
  • [17] L. A. Gatys, A. S. Ecker, and M. Bethge, “Image style transfer using convolutional neural networks,” in CVPR, 2016, pp. 2414–2423.
  • [18] X. Huang, M.-Y. Liu, S. Belongie, and J. Kautz, “Multimodal unsupervised image-to-image translation,” in ECCV, 2018, pp. 172–189.
  • [19] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in ICCV, 2017.
  • [20] M. Tamura and T. Murakami, “Augmented hard example mining for generalizable person re-identification,” arXiv preprint arXiv:1910.05280, 2019.
  • [21] Y. Xue, J. Ye, R. Long, S. Antani, Z. Xue, and X. Huang, “Selective synthetic augmentation with quality assurance,” arXiv preprint arXiv:1912.03837, 2019.
  • [22] E. D. Cubuk, B. Zoph, D. Mane, V. Vasudevan, and Q. V. Le, “AutoAugment: Learning augmentation policies from data,” arXiv preprint arXiv:1805.09501, 2018.
  • [23] D. Ho, E. Liang, I. Stoica, P. Abbeel, and X. Chen, “Population based augmentation: Efficient learning of augmentation policy schedules,” arXiv preprint arXiv:1905.05393, 2019.
  • [24] H. Chen, X. J. Qi, J. Z. Cheng, and P. A. Heng, “Deep contextual networks for neuronal structure segmentation,” in AAAI, 2016.
  • [25] N. Kumar, R. Verma, S. Sharma, S. Bhargava, A. Vahadane, and A. Sethi, “A dataset and a technique for generalized nuclear segmentation for computational pathology,” IEEE Transactions on Medical Imaging, vol. 36, no. 7, pp. 1550–1560, 2017.
  • [26] K. Sirinukunwattana, J. P. Pluim, H. Chen, X. Qi, P.-A. Heng, Y. B. Guo, L. Y. Wang, B. J. Matuszewski, E. Bruni, U. Sanchez et al., “Gland segmentation in colon histology images: The GlaS challenge contest,” Medical Image Analysis, vol. 35, pp. 489–502, 2017.
  • [27] Y. Li, C. Fang, J. Yang, Z. Wang, X. Lu, and M.-H. Yang, “Universal style transfer via feature transforms,” in NIPS, 2017, pp. 386–396.
  • [28] R. W. Johnson, “An introduction to the bootstrap,” Teaching Statistics, vol. 23, no. 2, pp. 49–54, 2001.
  • [29] H. Qu, Z. Yan, G. M. Riedlinger, S. De, and D. N. Metaxas, “Improving nuclei/gland instance segmentation in histopathology images by full resolution neural network and spatial constrained loss,” in Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, 2019, pp. 378–386.
  • [30] Y. Zhou, O. F. Onder, Q. Dou, E. Tsougenis, H. Chen, and P.-A. Heng, “CIA-Net: Robust nuclei instance segmentation with contour-aware information aggregation,” in International Conference on Information Processing in Medical Imaging.   Springer, 2019, pp. 682–693.
  • [31] H. Zheng, L. Yang, J. Chen, J. Han, Y. Zhang, P. Liang, Z. Zhao, C. Wang, and D. Z. Chen, “Biomedical image segmentation via representative annotation,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, 2019, pp. 5901–5908.