
¹ Kyushu University, Fukuoka, Japan, [email protected]
² Kyoto University Hospital, Kyoto, Japan

Proportion Estimation by Masked Learning from Label Proportion

Takumi Okuo¹, Kazuya Nishimura¹, Hiroaki Ito², Kazuhiro Terada², Akihiko Yoshizawa², Ryoma Bise¹
Abstract

The PD-L1 rate, the ratio of PD-L1-positive tumor cells to all tumor cells, is an important metric for immunotherapy. This metric is recorded as diagnostic information with pathological images. In this paper, we propose a proportion estimation method that requires only a small amount of cell-level annotation together with proportion annotations, which can be easily collected. Since the PD-L1 rate is calculated only from 'tumor cells' and ignores 'non-tumor cells', we first detect tumor cells with a detection model. Then, we estimate the PD-L1 proportion by introducing a masking technique into 'learning from label proportion'. In addition, we propose a weighted focal proportion loss to address data imbalance problems. Experiments using clinical data demonstrate the effectiveness of our method, which achieved the best performance among the compared methods.

Keywords:
Learning from label proportion · Histopathology

1 Introduction

The proportional information of cancer subtypes is recorded as diagnostic information with pathological images in many diagnoses, such as programmed cell death ligand-1 (PD-L1) diagnosis [8, 14], chemotherapy, and lung cancer diagnosis [13, 15]. For example, the PD-L1 test is conducted to check whether cancer immunotherapy will be helpful for a patient. Fig. 1 shows an example image (called a core image) captured by a whole slide scanner. A core image contains more than ten thousand cells belonging to three classes: positive tumor cells, negative tumor cells, and non-tumor cells. The PD-L1 rate is calculated as the number of positive tumor cells divided by the total number of tumor cells in a tissue; note that it does not include non-tumor cells. Counting all cells in a core image is almost impossible; thus, pathologists roughly estimate the PD-L1 rate without counting in clinical practice. Therefore, there is a demand for an automatic proportion estimation method.

A simple solution is to detect positive and negative tumor cells with a deep detection model [10] and calculate the proportion from the detection results. However, this approach does not exploit the PD-L1 rates that are already recorded as diagnostic information, and it requires a certain amount of cell-level annotation for positive and negative tumor cells. A small amount of annotation can be collected, but a network trained on such insufficient data performs poorly. Large datasets with sufficient variability must be collected to address inter-tumor heterogeneity and different staining characteristics, but collecting them is time-consuming and labor-intensive.

Figure 1: Illustration of a core image in the PD-L1 test. Blue dots indicate negative tumor cells, and red dots indicate positive tumor cells. Ideally, the proportion of positive tumor cells is calculated by counting and classifying all cells; however, pathologists roughly estimate the ratio subjectively without counting or segmenting.

Another solution is to estimate the proportion directly by regression or classification. This approach can train a network using proportion information without additional annotation since PD-L1 rates are recorded in clinical settings. However, these methods have two drawbacks. First, a PD-L1 rate is a 'partial' proportion of all cells, i.e., it is calculated only from tumor cells and ignores non-tumor cells. Without information about which regions are non-tumor, this ambiguity makes it difficult to train a network to estimate the proportion. Second, such estimation has poor interpretability. When pathologists use the estimated results in clinical practice, they check the cell distributions of the three classes to understand how the rate was calculated. A class activation map (CAM)-based approach [16], which visualizes the pixels that contributed to the output, is unsuitable for this task because the class distributions over all tumor regions are required to estimate the proportion.

This paper proposes a proportion estimation method that estimates the proportion of PD-L1 tumor cells and outputs the distributions of the three classes in a core image while keeping annotation costs as low as possible. As discussed above, estimating the proportion without tumor region information is difficult. Thus, we prepare a small amount of training data to train a cell detection network that distinguishes tumor cells from non-tumor cells. This enables us to generate a tumor region mask, which is used for estimating the proportion (PD-L1 rate). We propose masked 'Learning from Label Proportions (LLP)', which estimates the proportion using the tumor cell region mask predicted by the cell detection network. In addition, we propose a weighted focal proportion loss to address the data imbalance that often occurs in medical image applications. Experiments using clinical data demonstrate the effectiveness of our method, which achieved the best performance in the comparisons.

2 PD-L1 tumor proportion estimation

Our method aims to train a network that estimates the proportion $r$ of PD-L1 tumor cells in a core image $I$. We use three types of annotation for training: cell position labels, tumor cell regions, and proportion labels. A proportion is often recorded as an interval in clinical records since pathologists roughly estimate the PD-L1 rate subjectively without counting. For example, PD-L1 rates are recorded as either '0 to 0.01', '0.01 to 0.25', '0.25 to 0.5', '0.5 to 0.75', or '0.75 to 1.00'.

Fig. 2 shows an overview of the proposed method. Our method was designed as a two-stage structure to effectively use a small amount of cell-level annotation and a large amount of tissue-level annotation (proportion). We first detect tumor cells in a core image $I$ to produce a tumor mask $M$. Then, we estimate the proportion of PD-L1-positive cells in the input image $I$ using the mask $M$.

Figure 2: Overview of the proposed method. Top: the cell detection network; bottom: the proportion estimation network.

Tumor cell detection: Fig. 2 (Top) shows the overview of the tumor cell detection network, which consists of a cell detection model $h$ and a classification model $g$. For the cell detection model $h$, we follow the heatmap-based cell detection method [9], which produces a heatmap of all cells, where the coordinates of the peaks in the map indicate the centroid positions of cells. For the classification model $g$, we propose a two-stage cell detection method inspired by bounding-box-based general object detectors [11]. Since cell shapes are similar between tumor and non-tumor cells, it is difficult to accurately identify their class from local information alone. To use the information of surrounding cells effectively, we first extract a global feature map with the feature extractor $g_f$ and then classify cells based on the extracted features, which contain the surrounding information.

We first train the cell detection network $h$ using the training data of cell positions, where $h$ produces a cell position heatmap for an input image $I$ [9]. To train $h$, the ground-truth heatmap $H$ is generated from the given cell positions. The network $h$ is trained by minimizing the MSE loss $L_d = \|H - \hat{H}\|^2$, where $\hat{H} = h(I)$ is the heatmap estimated by $h$. In the testing phase, the peak points in $\hat{H}$ are detected as the cell positions, denoted as $\{\hat{\mathbf{p}}_i\}_{i=1}^{N_d}$, where $N_d$ is the number of detected cells in $I$. Note that $\hat{H}$ contains only cell position information, not tumor class information.
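To make the detection step concrete, the following is a minimal PyTorch sketch of the MSE heatmap loss and the test-time peak extraction; the network `h`, the peak threshold, and the pooling window are illustrative assumptions rather than the authors' exact settings.

```python
import torch.nn.functional as F

def detection_loss(h, image, gt_heatmap):
    """MSE loss L_d = ||H - H_hat||^2 between the ground-truth and predicted heatmaps
    (mean reduction here; the paper's notation suggests a squared norm)."""
    pred = h(image)                        # H_hat = h(I), shape (B, 1, H, W)
    return F.mse_loss(pred, gt_heatmap)

def detect_peaks(heatmap, threshold=0.3, window=7):
    """Take local maxima of the predicted heatmap as cell centroids (test phase).
    `threshold` and `window` are assumed hyper-parameters."""
    pooled = F.max_pool2d(heatmap, window, stride=1, padding=window // 2)
    peaks = (heatmap == pooled) & (heatmap > threshold)
    return peaks.nonzero()                 # (N_d, 4) indices: (batch, channel, y, x)
```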

For each detected cell position $\hat{\mathbf{p}}_i$, the tumor classification network $g$ (consisting of $g_f$ and $g_c$) estimates its class $\hat{y}_i$. The feature extractor $g_f$ extracts a feature map $F_t$ from $I$, and then the fully connected (FC) layer $g_c$ estimates the tumor class $\hat{y}_i \in [0,1]$ for each cell position by taking the feature vector $F_t(\hat{\mathbf{p}}_i)$ at pixel $\hat{\mathbf{p}}_i$ as input, where $\hat{y}_i > 0.5$ indicates a tumor cell and otherwise a non-tumor cell. To train $g$, we use the binary cross-entropy loss between the predicted score $\hat{y}_i$ and the ground truth $y_i$, where the loss is calculated only at the detected cell positions $\{\hat{\mathbf{p}}_i\}_{i=1}^{N_d}$. The detection results are denoted as $\hat{\mathcal{P}} = \{\hat{\mathbf{p}}_i, \hat{y}_i\}_{i=1}^{N_d}$.
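A sketch of this per-cell classification step is given below, assuming `g_f` returns a spatial feature map and the detected positions are expressed in feature-map coordinates; the shapes and the sampling scheme are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def classify_detected_cells(g_f, g_c, image, positions):
    """Classify each detected cell from the global feature map F_t.
    positions: LongTensor (N_d, 2) of (y, x) coordinates in feature-map scale;
    g_c could be, e.g., a single fully connected layer mapping C -> 1."""
    feat = g_f(image)                                            # F_t, shape (1, C, H, W)
    vectors = feat[0, :, positions[:, 0], positions[:, 1]].t()   # (N_d, C) per-cell features
    return torch.sigmoid(g_c(vectors)).squeeze(-1)               # y_hat_i in [0, 1]

def classification_loss(scores, labels):
    """Binary cross-entropy computed only at the detected cell positions."""
    return F.binary_cross_entropy(scores, labels.float())
```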

We generate a tumor cell mask $M$ using the detection results $\hat{\mathcal{P}}$. In $M$, pixels around the tumor cell positions $\hat{\mathcal{P}}_c = \{\hat{\mathbf{p}}_i \mid \hat{y}_i > 0.5\}$ take the value 1 and the others take 0, where a pixel is set to 1 if its distance from a detected tumor position is less than $\alpha$. This mask is used to estimate the proportion.
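The mask construction can be sketched as follows; the radius `alpha` and the dense distance-grid implementation are assumptions for illustration.

```python
import numpy as np

def make_tumor_mask(shape, positions, scores, alpha=9):
    """Binary tumor mask M: pixels within distance alpha of any detected tumor cell
    (y_hat > 0.5) are set to 1, all others to 0.  `alpha` is an assumed radius."""
    mask = np.zeros(shape, dtype=np.float32)
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    for (py, px), score in zip(positions, scores):
        if score > 0.5:                                   # keep tumor cells only
            mask[(ys - py) ** 2 + (xs - px) ** 2 < alpha ** 2] = 1.0
    return mask
```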

PD-L1 proportion estimation: The proportion estimation network $f$ estimates the proportion of PD-L1-positive cells among tumor cells. As shown in Fig. 3, the feature extractor of $f$ takes $I$ as input and extracts a positive map $F_p$ and a negative map $F_n$. These feature maps are masked so that only the pixels on tumor regions are counted, denoted as $F_p \odot M$ and $F_n \odot M$, which indicate the positive and negative 'tumor' cell position maps, respectively. The estimated proportion $\hat{r}$ is then calculated from these masked maps. The positive score $s_p$ and negative score $s_n$, which indicate the numbers of positive and negative tumor cells in the image, are defined as the sums of the pixel values in $F_p \odot M$ and $F_n \odot M$, respectively. The PD-L1 tumor proportion is calculated as $\hat{r} = \frac{s_p}{s_p + s_n}$. The network is trained using the estimated proportion $\hat{r}$ and the ground truth $r$. The loss function is described below.
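A sketch of the masked proportion computation, assuming `f` outputs two non-negative channels (e.g., after a softplus) with channel 0 for F_p and channel 1 for F_n; the channel layout and the epsilon are assumptions.

```python
def estimate_proportion(f, image, tumor_mask, eps=1e-7):
    """Masked LLP forward pass: r_hat = s_p / (s_p + s_n), where the positive and
    negative maps are summed over the tumor regions only (eps avoids division by zero).
    tumor_mask: tensor broadcastable to (B, 2, H, W), e.g., of shape (H, W)."""
    maps = f(image)                          # (B, 2, H, W): channel 0 = F_p, channel 1 = F_n
    masked = maps * tumor_mask               # F_p ⊙ M and F_n ⊙ M (mask broadcast over channels)
    s_p = masked[:, 0].sum(dim=(1, 2))       # positive score
    s_n = masked[:, 1].sum(dim=(1, 2))       # negative score
    return s_p / (s_p + s_n + eps)           # estimated PD-L1 rate r_hat
```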

Figure 3: Left: focal proportion loss. Right: weighted focal proportion loss.

As discussed in the introduction, the PD-L1 rate is given as a proportion interval; pathologists assign either '0 to 0.01', '0.01 to 0.25', '0.25 to 0.5', '0.5 to 0.75', or '0.75 to 1.00' to a core image. In previous work, a loss designed for proportion intervals [2] outperforms cross-entropy; it mitigates overfitting by relaxing strictness, but at the expense of discriminability. In our problem setting, the intervals have different lengths; the interval '0 to 0.01' is much narrower than the others. There is also data imbalance; e.g., the number of core images belonging to '0.5 to 0.75' is much smaller than that of '0 to 0.01'. These issues make training difficult.

We thus propose a weighted focal proportion loss that can mitigate these issues. This loss is inspired by the focal loss [7], which has been widely used for imbalanced data classification. The focal loss is a dynamically scaled cross-entropy loss, where the scaling factor decays to zero as confidence in the correct class increases [7]. We introduce this idea into the proportion loss [1], which is widely used in LLP. Let $r_k$ $(k=1,2)$ denote the proportions of positive and negative cells, respectively. The weighted focal proportion loss is defined as:

$WFL = -|r-\hat{r}|^{\gamma}\left(\sum_{k} r_{k}\log\frac{r_{k}}{\hat{r}_{k}}\right)$,   (1)

where $r$ is the ground truth of the proportion (PD-L1 rate), which takes the mean of the interval, e.g., $r = 0.375$ for '0.25 to 0.5'; $\hat{r}$ is the estimated proportion; and $\gamma$ is a hyper-parameter. The term $\sum_{k} r_{k}\log\frac{r_{k}}{\hat{r}_{k}}$ is the KL divergence between the ground-truth and estimated proportions (the proportion loss), where $r_1 = r$ is the proportion of positive cells and $r_2 = 1 - r_1$ is that of negative cells. The factor $|r - \hat{r}|^{\gamma}$ makes the loss decrease as the estimate approaches the correct value.

Figure 3 (Left) shows the plot of the focal proportion loss for each proportion interval when $\gamma = 2$. In this graph, the loss for '0 to 0.01' (blue line) has low gradients, and the loss around 3% is lower than that of the neighboring interval '0.01 to 0.25'. This makes it difficult for training to distinguish the proportion from the neighboring intervals, because the interval '0 to 0.01' is much narrower than the others. Therefore, we weight the hyper-parameter $\gamma$, where a smaller $\gamma$ gives larger gradients: we use $\gamma = 0$ for '0 to 0.01' and $\gamma = 2$ for the other intervals. Figure 3 (Right) shows the weighted focal proportion loss curves. The blue curve ('0 to 0.01') has larger gradients, which makes it easier for training to identify whether the proportion lies in '0 to 0.01' or '0.01 to 0.25'.
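A possible PyTorch implementation of the weighted focal proportion loss is shown below, written so that the loss value is the non-negative KL term scaled by the focal factor; the clamping epsilon and the interval-midpoint values are assumptions, while $\gamma = 0$ for '0 to 0.01' and $\gamma = 2$ otherwise follows the text.

```python
import torch

def weighted_focal_proportion_loss(r_hat, r, is_first_interval, eps=1e-7):
    """Eq. (1): |r - r_hat|^gamma times the KL divergence between the ground-truth
    and estimated (positive, negative) proportions, with gamma = 0 for the
    '0 to 0.01' interval and gamma = 2 for all other intervals."""
    r = torch.clamp(r, eps, 1.0 - eps)
    r_hat = torch.clamp(r_hat, eps, 1.0 - eps)
    # KL over the two-class proportions (r, 1 - r) vs. (r_hat, 1 - r_hat)
    kl = r * torch.log(r / r_hat) + (1.0 - r) * torch.log((1.0 - r) / (1.0 - r_hat))
    gamma = torch.where(is_first_interval, torch.zeros_like(r), torch.full_like(r, 2.0))
    focal = torch.abs(r - r_hat) ** gamma
    return (focal * kl).mean()

# The ground-truth r is the midpoint of the recorded interval,
# e.g. 0.005, 0.13, 0.375, 0.625, 0.875 for the five intervals.
```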

3 Experiments

Implementation details: For tumor cell detection, we used the U-Net architecture [12] for the cell detector $h$ and ResNet-50 [5] for the feature extractor $g_f$. The Adam optimizer [6] was adopted with learning rates of 0.001 for $h$ and 0.0002 for $g$, and the batch sizes were 8 and 16 for $h$ and $g$, respectively. Random rotation and flipping were applied for augmentation during classification training.

For proportion estimation, we used ResNet-18 [5] pre-trained on ImageNet. The Adam optimizer [6] was adopted with a learning rate of 0.0001. The batch size was 16, and the number of epochs was 100. We used early stopping with a patience of 30. We used random rotation and horizontal and vertical flipping for augmentation. We set the hyper-parameter $\gamma$, which controls the slope of the weighted focal loss, to 0 for the '0 to 1%' interval and 2 for the other intervals.
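The training setup described above could be wired up roughly as follows; `model`, `train_loader`, `val_loader`, and `evaluate` are placeholders, and the loop reuses the `estimate_proportion` and `weighted_focal_proportion_loss` sketches from Section 2.

```python
import torch

# Sketch of the proportion-network training loop (Adam, lr 1e-4, batch size 16,
# up to 100 epochs, early stopping with patience 30).  Data loaders are assumed
# to yield (image, tumor_mask, r, is_first_interval) batches.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
best_val, patience, wait = float("inf"), 30, 0
for epoch in range(100):
    model.train()
    for image, tumor_mask, r, is_first_interval in train_loader:
        r_hat = estimate_proportion(model, image, tumor_mask)
        loss = weighted_focal_proportion_loss(r_hat, r, is_first_interval)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    val_loss = evaluate(model, val_loader)        # placeholder validation metric
    if val_loss < best_val:                       # early-stopping bookkeeping
        best_val, wait = val_loss, 0
    else:
        wait += 1
        if wait >= patience:
            break
```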

Dataset: For tumor detection, we used 58 core images from patients, each approximately 10,000 × 10,000 pixels. Pathologists manually annotated tumor and non-tumor regions, where the labeled region covers 5.71% of a core image on average. For two core images, cell positions were also annotated; the total number of annotated cells is 8,000. Note that training the network with only supervised cell detection data would require a huge amount of annotation. In contrast, we can train the network using proportion labels and only a small amount of cell-level annotation (5.71% of the tissue area). Proportion labels are available from clinical records, and no public datasets provide proportion annotations.

For proportion estimation, we used 606 core images, where a proportion interval is labeled for each core: either '0 to 0.01', '0.01 to 0.25', '0.25 to 0.5', '0.5 to 0.75', or '0.75 to 1.00'. We resized the images to 2048 × 2048 before inputting them to the network. Fig. 4 shows examples of core images. The images gradually turn brown as the proportion of PD-L1-positive cells increases. However, it is necessary to check whether the brown cells are tumors and whether their membranes are stained, so the task is not easy for color-based methods. We performed 4-fold cross-validation and evaluated the average performance metrics.

Figure 4: Example of core images in cases of (a) 0 to 1%, (b) 1 to 25%, (c) 25 to 50%, (d) 50 to 75%, (e) 75 to 100%.
Table 1: Performance of proportion estimation by comparative methods.
Method            | w/ Mask | 0-1%  | 1-25% | 25-50% | 50-75% | 75-100% | mRecall | mPrecision | mF1
------------------|---------|-------|-------|--------|--------|---------|---------|------------|------
Det               |         | 0.832 | 0.587 | 0.062  | 0.071  | 0.568   | 0.479   | 0.424      | 0.429
Class [5]         |         | 0.858 | 0.433 | 0.312  | 0.411  | 0.682   | 0.539   | 0.526      | 0.524
O-Reg [4]         |         | 0.811 | 0.510 | 0.500  | 0.411  | 0.568   | 0.560   | 0.541      | 0.532
Prop w/o mask [1] |         | 0.381 | 0.779 | 0.438  | 0.339  | 0.750   | 0.537   | 0.510      | 0.451
Ours w/o mask     |         | 0.627 | 0.529 | 0.125  | 0.232  | 0.795   | 0.462   | 0.442      | 0.427
LPI Loss [2]      | ✓       | 0.187 | 0.808 | 0.188  | 0.429  | 0.682   | 0.458   | 0.519      | 0.392
Prop              | ✓       | 0.474 | 0.760 | 0.438  | 0.464  | 0.795   | 0.586   | 0.561      | 0.516
WProp             | ✓       | 0.795 | 0.683 | 0.312  | 0.393  | 0.795   | 0.596   | 0.552      | 0.558
WProp + List      | ✓       | 0.837 | 0.692 | 0.375  | 0.339  | 0.705   | 0.590   | 0.567      | 0.565
FocalProp         | ✓       | 0.078 | 0.731 | 0.562  | 0.482  | 0.818   | 0.534   | 0.538      | 0.404
Ours              | ✓       | 0.878 | 0.663 | 0.375  | 0.393  | 0.795   | 0.621   | 0.606      | 0.603

Evaluation: To confirm the effectiveness of our proportion estimation with the mask, we compared our method with ten baseline methods: 1) Detection-based method (Det), which modifies our cell detection network to detect three classes of cells (positive tumor, negative tumor, and non-tumor); the PD-L1 proportion is calculated from the numbers of detected positive and negative tumor cells, and the proportion labels were not used to train this method. 2) Classification (Class) [5], which directly classifies the core image into five classes (proportion intervals) with the cross-entropy loss. 3) Ordinal regression (O-Reg) [4], which estimates the proportion using the ordinal regression loss from [4]. 4) Prop w/o mask [1], which estimates a continuous proportion with the proportion loss without using the mask. 5) Ours without mask (Ours w/o mask), which uses the weighted focal proportion loss without tumor cell detection. Methods 2)-5) did not use the tumor region mask $M$, i.e., their networks were trained to produce the PD-L1 rate directly from the entire image. As an ablation study, the following losses were introduced into our framework, which uses the mask $M$ and estimates the proportion from the masked maps: 6) LPI loss [2]. 7) Proportion loss (Prop) [1]. 8) Weighted proportion loss (WProp), where the proportion loss is weighted by the length of the interval. 9) WProp + List, which introduces ListNet [3] into the weighted proportion loss. 10) Focal proportion loss (FocalProp), which introduces the focal loss [7] into the proportion loss; this variant is also proposed by us. 11) Ours, which uses the weighted focal proportion loss.

Table 1 shows the mean precision, recall, and F1 score for each method. Our method achieved the best performance among the compared methods. Positive and negative tumor cells have various appearances depending on staining properties and patient variation. Det did not work well for the three-class classification because positive and negative tumor cells have similar shapes, as shown in Fig. 5; accurate detection would require a large amount of training data. Comparing Prop w/o mask and Prop, Prop performs better. Since both methods use the same loss function, this shows that the tumor mask contributes to improving performance. The accuracy for '0 to 1%' with FocalProp is much worse because this loss treats all interval sections the same even though the interval lengths differ, as discussed in Section 2. The proposed method outperformed the other mask-based methods because the weighted focal loss can handle the imbalances (interval and data) while maintaining discriminative ability.
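The per-interval scores and macro averages in Table 1 can be computed along the lines of the sketch below; the binning of an estimated proportion into the five interval classes is an assumption about the evaluation protocol.

```python
import numpy as np

def interval_class(r):
    """Map a proportion to its interval index: 0 = 0-1%, 1 = 1-25%, ..., 4 = 75-100%."""
    return np.digitize(r, [0.01, 0.25, 0.5, 0.75])

def macro_scores(r_true, r_pred, n_classes=5):
    """Per-interval recall/precision/F1 and their macro means (mRecall, mPrecision, mF1)."""
    y_true = interval_class(np.asarray(r_true))
    y_pred = interval_class(np.asarray(r_pred))
    recalls, precisions, f1s = [], [], []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        rec = tp / max(np.sum(y_true == c), 1)
        pre = tp / max(np.sum(y_pred == c), 1)
        f1s.append(2 * pre * rec / max(pre + rec, 1e-7))
        recalls.append(rec)
        precisions.append(pre)
    return np.mean(recalls), np.mean(precisions), np.mean(f1s)
```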

Figure 5: Visualization of the intermediate outputs and a CAM-based method [16]. Red indicates positive, and blue indicates negative.

Fig. 5 shows examples of the estimated intermediate positive and negative feature maps, which support the interpretability of the network; pathologists can understand how the AI classifies the cells to estimate the proportion. The mask (third column) shows the cell detection results for tumor cells (both positive and negative). The feature maps show the estimated positive (red) or negative (blue) classification, where the feature map has a lower resolution than the original image. A masked feature map is generated by combining the mask and the feature map. Regression CAM shows the activation map from the regression network, where red indicates the pixels contributing to the network output. The first and second rows are examples of successfully estimated cases, as confirmed by pathologists. In both cases, the cells are stained brown, but their classes differ: positive in the first row and negative in the second, as determined by the staining pattern even though both appear brown. Our method successfully classified such difficult cases. The third row shows a misclassified case. In fact, all of the cells are positive because their membranes are slightly stained light brown, but our method misclassified them as negative. This is a difficult case even for medical doctors. The CAM is not informative in this task because all tumor regions are used to calculate the proportion.

4 Conclusion

We proposed a proportion estimation method that estimates a partial proportion (over tumor cells only) by using a tumor mask and that addresses the imbalance issues (interval and data) with our weighted focal proportion loss. We first detect tumor cells and generate a tumor mask, and then estimate the PD-L1 proportion among the tumor cells. By applying the mask, we can produce intermediate outputs for PD-L1-positive and negative cells, and this visualization is useful for pathologists in clinical practice. In the experiments, our method outperformed the comparison methods and achieved state-of-the-art performance.

Acknowledgements: This work was supported by JSPS KAKENHI Grant Number JP23K18509, Japan.

References

  • [1] Ardehaly, E.M., Culotta, A.: Co-training for demographic classification using deep learning from label proportions. In: 2017 IEEE International Conference on Data Mining Workshops (ICDMW). pp. 1017–1024. IEEE (2017)
  • [2] Bortsova, G., Dubost, F., Ørting, S., Katramados, I., Hogeweg, L., Thomsen, L., Wille, M., de Bruijne, M.: Deep learning from label proportions for emphysema quantification. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2018: 21st International Conference, Granada, Spain, September 16-20, 2018, Proceedings, Part II 11. pp. 768–776. Springer (2018)
  • [3] Cao, Z., Qin, T., Liu, T.Y., Tsai, M.F., Li, H.: Learning to rank: from pairwise approach to listwise approach. In: Proceedings of the 24th international conference on Machine learning. pp. 129–136 (2007)
  • [4] Cheng, J., Wang, Z., Pollastri, G.: A neural network approach to ordinal regression. In: 2008 IEEE international joint conference on neural networks (IEEE world congress on computational intelligence). pp. 1279–1284. IEEE (2008)
  • [5] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 770–778 (2016)
  • [6] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
  • [7] Lin, T.Y., Goyal, P., Girshick, R., He, K., Dollár, P.: Focal loss for dense object detection. In: Proceedings of the IEEE international conference on computer vision. pp. 2980–2988 (2017)
  • [8] Liu, J., Zheng, Q., Mu, X., Zuo, Y., Xu, B., Jin, Y., Wang, Y., Tian, H., Yang, Y., Xue, Q., Huang, Z., Chen, L., Gu, B., Hou, X., Shen, L., Guo, Y., Li, Y.: Automated tumor proportion score analysis for PD-L1 (22C3) expression in lung squamous cell carcinoma. Scientific Reports 11 (2021). https://doi.org/10.1038/s41598-021-95372-1
  • [9] Nishimura, K., Wang, C., Watanabe, K., Bise, R., et al.: Weakly supervised cell instance segmentation under various conditions. Medical Image Analysis 73, 102182 (2021)
  • [10] Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: Unified, real-time object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 779–788 (2016)
  • [11] Ren, S., He, K., Girshick, R., Sun, J.: Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems 28 (2015)
  • [12] Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18. pp. 234–241. Springer (2015)
  • [13] Tokunaga, H., Iwana, B.K., Teramoto, Y., Yoshizawa, A., Bise, R.: Negative pseudo labeling using class proportion for semantic segmentation in pathology. In: Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XV 16. pp. 430–446. Springer (2020)
  • [14] Widmaier, M., Wiestler, T., Walker, J., Barker, C., Scott, M.L., Sekhavati, F., Budco, A., Schneider, K., Segerer, F.J., Steele, K., Rebelatto, M.C.: Comparison of continuous measures across diagnostic PD-L1 assays in non-small cell lung cancer using automated image analysis. Modern Pathology 33 (2020). https://doi.org/10.1038/s41379-019-0349-y
  • [15] Yoshizawa, A., Motoi, N., Riely, G.J., Sima, C.S., Gerald, W.L., Kris, M.G., Park, B.J., Rusch, V.W., Travis, W.D.: Impact of proposed iaslc/ats/ers classification of lung adenocarcinoma: prognostic subgroups and implications for further revision of staging based on analysis of 514 stage i cases. Modern pathology 24(5), 653–664 (2011)
  • [16] Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., Torralba, A.: Learning deep features for discriminative localization. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 2921–2929 (2016)