Binary Classification with Instance and Label Dependent Label Noise
Abstract
Learning with label dependent label noise has been extensively explored in both theory and practice; however, dealing with instance (i.e., feature) and label dependent label noise continues to be a challenging task. The difficulty arises from the fact that the noise rate varies for each instance, making it challenging to estimate accurately. The question of whether it is possible to learn a reliable model using only noisy samples remains unresolved. We answer this question with a theoretical analysis that provides matching upper and lower bounds. Surprisingly, our results show that, without any additional assumptions, empirical risk minimization achieves the optimal excess risk bound. Specifically, we first derive a novel excess risk bound proportional to the noise level, which holds in very general settings, by comparing the empirical risk minimizers obtained from clean samples and noisy samples. Second, we show that the minimax lower bound for the 0-1 loss is a constant proportional to the average noise rate. Our findings suggest that learning solely with noisy samples is impossible without access to clean samples or strong assumptions on the distribution of the data.
1 Introduction
Within the framework of the traditional classification problem, it is generally assumed that the training data and the test data are drawn from the same distribution. Nevertheless, several factors, including covariate shift, disturbances to the data, and changes in the application domain, can cause the distribution of the test data to differ substantially from that of the training data. A particularly important example is label noise, where the labels in a classification problem may be corrupted with some nonzero probability. Various factors can contribute to the emergence of label noise, including human error during the annotation process, instances of ambiguous classification boundaries, and inconsistencies in data collection and processing. Also, with the increasing scale of data in recent years, researchers and practitioners are increasingly turning to cost-effective data collection methods, such as data mining on social media (Injadat et al., 2016) or web scraping (Schroff et al., 2010), rather than relying on human experts. These cost-effective approaches are more susceptible to the presence of label noise, which has contributed to the increased prevalence of this issue in recent years. Consequently, a critical question within the machine learning community is to understand the effects of label noise and when one may or may not be able to develop algorithms that are robust to the noise.
In this paper, we consider a binary classification problem with a feature space $\mathcal{X}$ and a label space $\mathcal{Y} = \{-1, +1\}$. We adopt the label noise model using the label noise function $\rho_y(x)$, which quantifies the probability of flipping the label $y$ to $-y$ given the sample $x$. The simplest possible label noise model is the random classification noise (RCN) model (Angluin and Laird, 1988), in which the label noise function remains a constant, $\rho_y(x) = \rho$, for all $x \in \mathcal{X}$ and $y \in \mathcal{Y}$. The RCN model has been extensively studied in theory, and it is well established that certain loss functions, such as the 0-1 loss or squared loss, can learn an optimal classifier solely using noisy samples even when the noise rate is unknown (Manwani and Sastry, 2013; Ghosh et al., 2015).
The class-conditional noise (CCN) model is more complex than the RCN model; here the label noise function $\rho_y(x) = \rho_y$ is a constant depending only on the label $y$ (Natarajan et al., 2013). Similarly to the RCN model, numerous theoretical works and methods have demonstrated that learning with CCN is possible (Menon et al., 2015; Natarajan et al., 2013; Scott, 2015; Van Rooyen and Williamson, 2017; Reeve et al., 2019; Zhang et al., 2021). A common approach to tackle CCN is the loss correction method, which adjusts the loss function according to the estimated noise rate, ensuring the consistency of the estimator for clean samples. Some loss correction methods necessitate knowing the noise rates (Menon et al., 2015; Van Rooyen and Williamson, 2017; Patrini et al., 2017), while others estimate these rates under the assumption that anchor points exist (Scott et al., 2013; Scott, 2015; Ramaswamy et al., 2016; Liu and Tao, 2015; Reeve et al., 2019). An instance $x$ is an anchor point for the label $y$ if $P(Y = y \mid X = x) = 1$, and it is useful in the sense that it allows us to learn the noise rates accurately. Moreover, Xia et al. (2019) proposed a noise rate estimator that does not require anchor points. Recently, Liu and Guo (2020) introduced a peer loss function that allows learning with noisy samples without the need to estimate noise rates. Other approaches to handle CCN include sample selection and label correction methods, which aim to identify noisy samples and filter them out or correct them to the accurate label (Bootkrajang and Kabán, 2012; Goldberger and Ben-Reuven, 2017; Malach and Shalev-Shwartz, 2017; Han et al., 2018; Jiang et al., 2018; Huang et al., 2019; Nguyen et al., 2019; Yu et al., 2019; Wei et al., 2020; Yi and Wu, 2019; Han et al., 2020; Park and Kwon, 2021). However, it is intuitive that the label noise in real-world data should depend on both the instance and the label rather than solely on the label. For example, a blurred image is more likely to be mislabeled than a clear version of the same image. Additionally, Chen et al. (2021) provided theoretical evidence that the CCN assumption does not hold in real-world datasets by introducing a CCN hypothesis testing method. This motivates us to examine a more general label noise model, the instance and label dependent noise (ILN) model, in which the noise rate depends on both the instance and the label.
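To make the loss correction approach mentioned above concrete, the following is a minimal sketch of the unbiased-loss construction of Natarajan et al. (2013) for the CCN setting, assuming the noise rates are known; the logistic loss and all numerical values are illustrative choices rather than part of any specific method discussed here.

```python
import numpy as np

def logistic_loss(t, y):
    """Logistic loss: ell(t, y) = log(1 + exp(-y t))."""
    return np.log1p(np.exp(-y * t))

def corrected_loss(t, y, rho_pos, rho_neg):
    """Unbiased loss for class-conditional noise (Natarajan et al., 2013).

    For a noisy label drawn by flipping the true label y with rate
    rho_pos (if y = +1) or rho_neg (if y = -1), the expectation of this
    corrected loss over the noisy label equals the clean logistic loss.
    """
    rho_y = rho_pos if y == +1 else rho_neg        # rho_{y}
    rho_minus_y = rho_neg if y == +1 else rho_pos  # rho_{-y}
    return ((1 - rho_minus_y) * logistic_loss(t, y)
            - rho_y * logistic_loss(t, -y)) / (1 - rho_pos - rho_neg)

# Monte-Carlo check of unbiasedness at a fixed score t with true label +1.
rng = np.random.default_rng(0)
rho_pos, rho_neg, t, y_true = 0.3, 0.1, 0.7, +1
noisy = np.where(rng.random(100_000) < rho_pos, -y_true, y_true)
estimate = np.mean([corrected_loss(t, yy, rho_pos, rho_neg) for yy in noisy])
print(estimate, logistic_loss(t, y_true))  # the two values nearly coincide
```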
The purely instance-dependent noise (PIN) model, where the noise rate depends only on the instance and not the label, has been studied by Menon et al. (2018) as a special case of the ILN model. Menon et al. (2018) showed that the Bayes optimal classifiers for the clean and noisy distributions are identical, indicating that learning with PIN is possible. However, in the ILN model, the problem becomes more challenging, as estimating the noise rate for each instance is generally difficult. Several algorithms have been proposed to address learning under the ILN model (Cheng et al., 2020b; Xia et al., 2020; Cheng et al., 2020a; Zhu et al., 2021). Xia et al. (2020) attempted to address the ILN model by making a part-dependence assumption, which assumes that instances are composed of parts and that these parts are subject to corruption. Cheng et al. (2020b) considered the bounded instance and label dependent noise (BILN) model, where the noise rate for each label is bounded above; they learn an estimator using samples that are consistent with the Bayes optimal classifier together with samples that have been cleaned by human experts. Berthon et al. (2021) considered the confidence-scored instance-dependent model, in which a confidence score is provided along with the instance and the label. Although these methods provide heuristic ways to address ILN, there has been limited theoretical work on the ILN model, and each of these methods requires access to clean samples or relies on specific strong assumptions. Nevertheless, the question of whether learning with ILN is possible without such conditions or access to clean samples has not been adequately addressed. Furthermore, while much work has been done to modify the loss function to enable empirical risk minimization (ERM) to learn from noisy samples, how label noise degrades the performance of plain ERM is often overlooked. Perhaps the work closest to these questions and to this paper is that of Lee and Foygel Barber (2022), who studied the performance of ERM in the presence of RCN. However, their work is restricted to the RCN model with Lipschitz loss functions and linear hypothesis classes, whereas this study explores more comprehensive settings: instance and label dependent noise, as well as a broader range of hypothesis classes and loss functions.
In this paper, we provide a novel theoretical analysis of the instance and label dependent noise (ILN) model. We consider loss functions that satisfy a mild boundedness assumption and for which standard generalization guarantees hold. These conditions do not require the loss function to be Lipschitz or convex, and even the 0-1 loss complies with these requirements. Moreover, other common loss functions, such as the logistic loss or the hinge loss, meet these conditions under certain settings. We start by studying the performance difference between the clean estimator and the noisy estimator, obtained through empirical risk minimization (ERM) using clean samples and noisy samples, respectively. (Herein, we refer to a clean sample as a sample from the underlying distribution, and a noisy sample as one that has passed through the ILN model.) We evaluate the performance of each estimator with respect to the risk under the clean (true) distribution and provide an upper bound for the difference between those risks. We identify a new source of irreducible error in the upper bound, namely a constant term proportional to the amount of label noise present. Next, we show that this irreducible error term cannot be fully removed in the ILN model by developing a minimax lower bound for the 0-1 risk. This suggests that, to learn an effective predictor under the ILN model, we may need access to clean samples (Cheng et al., 2020b) or additional assumptions (Xia et al., 2020) beyond those made in the CCN model.
Organization
The rest of the paper is organized as follows. In Section 2, we introduce our problem setting for the ILN model. In Section 3, we derive an upper bound for the difference between the risk of the clean estimator and the risk of the noisy estimator. In Section 4, we show that the minimax lower bound of the 0-1 risk is a constant proportional to the noise level, even when considering the margin assumption (Mammen and Tsybakov, 1999) and the anchor point assumption, which are commonly used in the CCN model.
2 Problem setting
In this section, we present our formal problem setting and assumptions of the ILN model. Let $\mathcal{X} \subseteq \mathbb{R}^d$ be the feature space and $\mathcal{Y} = \{-1, +1\}$ be the label space. We assume that the random variables $(X, Y, \tilde{Y})$ are jointly distributed with unknown distribution $P$, where $X \in \mathcal{X}$ denotes the feature vector, $Y \in \mathcal{Y}$ is the true label, and $\tilde{Y} \in \mathcal{Y}$ is the noisy label. We use $P_X$ to denote the marginal distribution of $P$ for $X$, while $P_{XY}$ and $P_{X\tilde{Y}}$ are used to indicate the marginal distribution of $P$ for $(X, Y)$ and the marginal distribution of $P$ for $(X, \tilde{Y})$, respectively. We refer to $P_{XY}$ as a clean distribution as it is related to the true label and refer to $P_{X\tilde{Y}}$ as a noisy distribution. We use $S = \{(x_i, y_i)\}_{i=1}^{n}$ and $\tilde{S} = \{(x_i, \tilde{y}_i)\}_{i=1}^{n}$ to denote i.i.d. samples drawn from $P_{XY}$ and $P_{X\tilde{Y}}$, respectively. Furthermore, we use $P_{XY}^{n}$ and $P_{X\tilde{Y}}^{n}$ to represent the corresponding product distributions associated with the samples $S$ and $\tilde{S}$. Given the noise functions $\rho_y(x)$ for $y \in \mathcal{Y}$, a sample of $\tilde{S}$ is generated by the following procedure:
$$X \sim P_X, \qquad P(Y = +1 \mid X = x) = \eta(x), \qquad \tilde{Y} = \begin{cases} -Y & \text{with probability } \rho_{Y}(X), \\ Y & \text{with probability } 1 - \rho_{Y}(X). \end{cases}$$
Here $\eta(x) = P(Y = +1 \mid X = x)$ denotes the conditional probability of $Y = +1$ given $X = x$ (i.e., the regression function), while $\rho_y(x)$ indicates the probability of the true label $y$ being flipped for instance $x$. Both $\eta$ and $\rho_y$ for $y \in \mathcal{Y}$ are unknown. For simplicity, we use $\rho_{+}$ and $\rho_{-}$ to denote $\rho_{+1}$ and $\rho_{-1}$, respectively. As discussed in Section 1, various noise models have been considered in the literature and each of these models can be formulated using the noise functions $\rho_{+}$ and $\rho_{-}$. Precisely, the RCN model is a special case where the noise function is given by $\rho_y(x) = \rho$ for all $x \in \mathcal{X}$ and $y \in \mathcal{Y}$. The CCN model generalizes the RCN model, with constant noise levels $\rho_{+}$ and $\rho_{-}$ such that $\rho_{+}(x) = \rho_{+}$ and $\rho_{-}(x) = \rho_{-}$ for all $x \in \mathcal{X}$. For the ILN model, there are no specific constraints on the noise functions, covering the more general scenarios. In this study, we make the following mild assumption for the ILN model.
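Before stating it, note that the generation procedure above is straightforward to simulate. The sketch below draws noisy samples under the ILN model; the particular $\eta$, $\rho_{+}$, and $\rho_{-}$ are hypothetical functions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical regression and noise functions on X = [-2, 2] (illustrative only):
eta = lambda x: 1 / (1 + np.exp(-3 * x))        # P(Y = +1 | X = x)
rho_pos = lambda x: 0.3 * np.abs(np.sin(x))     # flip rate for true label +1
rho_neg = lambda x: 0.2 * np.exp(-x ** 2)       # flip rate for true label -1

def sample_iln(n):
    """Draw (x, y, y_noisy) triples following the ILN generation procedure."""
    x = rng.uniform(-2, 2, size=n)                         # X ~ P_X
    y = np.where(rng.random(n) < eta(x), +1, -1)           # Y | X = x
    flip_prob = np.where(y == +1, rho_pos(x), rho_neg(x))  # rho_y(x)
    y_noisy = np.where(rng.random(n) < flip_prob, -y, y)   # flip w.p. rho_y(x)
    return x, y, y_noisy

x, y, y_noisy = sample_iln(10_000)
print("observed flip rate:", np.mean(y != y_noisy))
```

Note that these illustrative noise functions satisfy $\rho_{+}(x) + \rho_{-}(x) \le 0.5$ for all $x$, in line with the assumption that follows.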
Assumption 2.1.
For all $x \in \mathcal{X}$, we have
$$\rho_{+}(x) + \rho_{-}(x) \le \rho < 1.$$
In this case, the constant $\rho$ represents a uniform bound on the sum of the two noise functions for all $x \in \mathcal{X}$. The objective in a binary classification problem with label noise is to determine a function $f: \mathcal{X} \to \mathbb{R}$ such that $\mathrm{sign}(f(x))$ best predicts the true label $y$ for a given input $x$, where $\mathrm{sign}(\cdot)$ represents the sign function. However, we only observe a sequence of noisy pairs $\tilde{S} = \{(x_i, \tilde{y}_i)\}_{i=1}^{n}$, and the values of the true labels are unknown. As is standard in binary classification, a fundamental performance metric of the function $f$ is the 0-1 risk
$$R(f) = \mathbb{E}_{(X, Y) \sim P_{XY}}\big[\mathbf{1}\{\mathrm{sign}(f(X)) \neq Y\}\big],$$
where $\mathbf{1}\{\cdot\}$ denotes the standard 0-1 indicator function. The function $f^*(x) = \mathrm{sign}(\eta(x) - 1/2)$ associated with the Bayes optimal classifier minimizes the 0-1 risk, and $R(f^*)$ indicates the minimum misclassification rate under the distribution $P_{XY}$. Since the regression function $\eta$ is unknown, the learner must identify a function that approximates $f^*$ based on the observed samples. Additionally, the distribution $P_{XY}$ remains unknown and therefore the 0-1 risk cannot be evaluated. As an alternative, the learner generally seeks a decision rule that (approximately) minimizes the empirical 0-1 risk
$$\hat{R}(f) = \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}\{\mathrm{sign}(f(x_i)) \neq y_i\}.$$
In the context of the noisy label setting where the true labels are unobserved, the learner may in fact instead (approximately) minimize the empirical 0-1 risk with noisy labels
$$\hat{\tilde{R}}(f) = \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}\{\mathrm{sign}(f(x_i)) \neq \tilde{y}_i\}.$$
The non-convex and discontinuous nature of the 0-1 loss makes the above minimization challenging, and the classical way to circumvent this issue is to consider an alternative surrogate convex loss function $\ell$ (Bartlett et al., 2006). The risk associated with the surrogate loss function $\ell$ is referred to as the $\ell$-risk of the function $f$, and is denoted by
$$R_\ell(f) = \mathbb{E}_{(X, Y) \sim P_{XY}}\big[\ell(f(X), Y)\big] \qquad \text{and} \qquad \tilde{R}_\ell(f) = \mathbb{E}_{(X, \tilde{Y}) \sim P_{X\tilde{Y}}}\big[\ell(f(X), \tilde{Y})\big],$$
where $R_\ell(f)$ denotes the $\ell$-risk under the clean distribution and $\tilde{R}_\ell(f)$ denotes the $\ell$-risk under the noisy distribution. Following the above notation, we define $\hat{R}_\ell(f)$ and $\hat{\tilde{R}}_\ell(f)$ as the empirical $\ell$-risks of a function $f$ with clean samples and noisy samples, respectively. Generally, we select the function from a predefined class of functions $\mathcal{F}$, which represents the hypothesis class in our modeling framework. We denote the empirical risk minimizer with clean samples as:
$$\hat{f}_{\text{clean}} \in \arg\min_{f \in \mathcal{F}} \hat{R}_\ell(f) = \arg\min_{f \in \mathcal{F}} \frac{1}{n}\sum_{i=1}^{n} \ell(f(x_i), y_i), \tag{1}$$
and for noisy samples, we have:
$$\hat{f}_{\text{noisy}} \in \arg\min_{f \in \mathcal{F}} \hat{\tilde{R}}_\ell(f) = \arg\min_{f \in \mathcal{F}} \frac{1}{n}\sum_{i=1}^{n} \ell(f(x_i), \tilde{y}_i). \tag{2}$$
For clarity, we call $\hat{f}_{\text{clean}}$ a clean estimator and $\hat{f}_{\text{noisy}}$ a noisy estimator. Note that the loss function $\ell$ remains unchanged when obtaining the noisy estimator $\hat{f}_{\text{noisy}}$, which distinguishes it from the loss correction or label correction methods discussed in Section 1. Although empirical risk minimization (ERM) is a fundamental technique in machine learning and many works have focused on modifying the loss function to learn with ERM using noisy samples, the question of how ERM degrades in the presence of label noise has been previously overlooked. Therefore, one of our interests is to study the performance difference between the noisy estimator $\hat{f}_{\text{noisy}}$ and the clean estimator $\hat{f}_{\text{clean}}$.
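As a simple illustration of this comparison, the sketch below runs the same ERM procedure on clean and on noisy labels, reusing the hypothetical sampler `sample_iln` from the sketch above, and evaluates both estimators under the clean distribution; the one-dimensional linear class and the logistic loss are illustrative assumptions, not requirements of our analysis.

```python
import numpy as np

def erm_logistic(x, y, lr=0.5, steps=500):
    """ERM over the linear class f(x) = w x with the logistic loss (1-d sketch)."""
    w = 0.0
    for _ in range(steps):
        grad = np.mean(-y * x / (1 + np.exp(y * w * x)))  # d/dw log(1 + e^{-ywx})
        w -= lr * grad
    return w

def risk01(w, x, y):
    """Empirical 0-1 risk of the classifier sign(w x)."""
    return np.mean(np.sign(w * x) != y)

x_tr, y_tr, y_tr_noisy = sample_iln(5_000)   # training samples (clean and noisy labels)
x_te, y_te, _ = sample_iln(100_000)          # large clean test set

w_clean = erm_logistic(x_tr, y_tr)           # clean estimator, cf. Equation (1)
w_noisy = erm_logistic(x_tr, y_tr_noisy)     # noisy estimator, cf. Equation (2)
print("clean risk of clean estimator:", risk01(w_clean, x_te, y_te))
print("clean risk of noisy estimator:", risk01(w_noisy, x_te, y_te))
```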
3 Risk gap between noisy estimator and clean estimator
In this section, we study the performance difference between the clean estimator $\hat{f}_{\text{clean}}$ and the noisy estimator $\hat{f}_{\text{noisy}}$, each of which is obtained through empirical risk minimization (ERM) using clean samples and noisy samples, respectively. In particular, the performance difference can be expressed as:
$$R_\ell(\hat{f}_{\text{noisy}}) - R_\ell(\hat{f}_{\text{clean}}). \tag{3}$$
It is important to emphasize that we do not apply any modification to the loss function to address the issue of noisy samples. We consider loss functions that adhere to the following mild assumption.
Assumption 3.1.
There is a constant $C > 0$ such that
$$\sup_{x \in \mathcal{X},\, f \in \mathcal{F}} \big|\ell(f(x), +1) - \ell(f(x), -1)\big| \le C. \tag{4}$$
Assumption 3.1 states that the difference between the loss evaluated at $(f(x), y)$ and at $(f(x), -y)$ is uniformly bounded over $\mathcal{F}$ and $\mathcal{X}$. This assumption is typical in the problem of binary classification. For example, the 0-1 loss satisfies (4) with $C = 1$ for any $\mathcal{F}$ and $\mathcal{X}$. In addition, other loss functions, such as the squared loss and the logistic loss, satisfy (4) for specific $\mathcal{F}$ and $\mathcal{X}$. We introduce some relevant examples in Section 3.2. Furthermore, we make a natural assumption about the generalization bound of the empirical risk minimization (ERM) procedure.
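Before stating it, a short calculation illustrates how the logistic loss fits Assumption 3.1 whenever predictions are uniformly bounded (a worked example; the bound on $|f|$ is an extra assumption). For $\ell(f(x), y) = \log(1 + e^{-y f(x)})$, we have
$$\ell(f(x), -y) - \ell(f(x), y) = \log\frac{1 + e^{y f(x)}}{1 + e^{-y f(x)}} = y f(x),$$
so (4) holds with $C = \sup_{f \in \mathcal{F},\, x \in \mathcal{X}} |f(x)|$, which is finite, for instance, for bounded-norm linear functions on a bounded domain.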
Assumption 3.2.
For any $\delta \in (0, 1)$, there exists a function $\varepsilon(n, \delta)$ with $\lim_{n \to \infty} \varepsilon(n, \delta) = 0$, such that for all distributions $P_{XY}$ over $\mathcal{X} \times \mathcal{Y}$, the bound
$$\sup_{f \in \mathcal{F}} \big|R_\ell(f) - \hat{R}_\ell(f)\big| \le \varepsilon(n, \delta)$$
holds with probability at least $1 - \delta$.
Note that the function $\varepsilon(n, \delta)$ in Assumption 3.2 may depend on $n$, $\delta$, the hypothesis class $\mathcal{F}$, and the loss function $\ell$, but not the distribution $P_{XY}$. In particular, Assumption 3.2 also implies
$$\sup_{f \in \mathcal{F}} \big|\tilde{R}_\ell(f) - \hat{\tilde{R}}_\ell(f)\big| \le \varepsilon(n, \delta)$$
with probability at least $1 - \delta$ for the noisy distribution as well. Assumption 3.2 is valid in various settings, such as a linear hypothesis class or a reproducing kernel Hilbert space (RKHS) with a bounded kernel. Also, the function $\varepsilon(n, \delta)$ is closely connected to the complexity of the hypothesis class $\mathcal{F}$, which can be quantified through measures such as the VC dimension, Rademacher complexity, and covering numbers (Wainwright, 2019). The details are discussed in Section 3.2.
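As a familiar instantiation (a standard Rademacher complexity bound, e.g., Mohri et al. (2018), recalled here for orientation rather than as part of our results): if the loss takes values in $[0, b]$, then with probability at least $1 - \delta$,
$$\sup_{f \in \mathcal{F}} \big|R_\ell(f) - \hat{R}_\ell(f)\big| \le 2\,\mathfrak{R}_n(\ell \circ \mathcal{F}) + b\sqrt{\frac{\log(2/\delta)}{2n}},$$
where $\mathfrak{R}_n(\ell \circ \mathcal{F})$ denotes the Rademacher complexity of the loss class, so Assumption 3.2 holds with $\varepsilon(n, \delta)$ taken as the right-hand side whenever $\mathfrak{R}_n(\ell \circ \mathcal{F}) \to 0$.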
3.1 Upper bound analysis
In this section, we derive an upper bound for the risk gap between the clean and the noisy estimator, as defined in Equation (3). We start by studying the relationship between the clean conditional probability $\eta(x) = P(Y = +1 \mid X = x)$ and the noisy conditional probability $\tilde{\eta}(x) = P(\tilde{Y} = +1 \mid X = x)$.
Lemma 1.
Considering the ILN model, we have
$$\tilde{\eta}(x) = \big(1 - \rho_{+}(x)\big)\,\eta(x) + \rho_{-}(x)\,\big(1 - \eta(x)\big). \tag{5}$$
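Equation (5) can be sanity-checked by simulation; the values of $\eta(x)$, $\rho_{+}(x)$, and $\rho_{-}(x)$ at a fixed instance below are arbitrary illustrative numbers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary illustrative values at a fixed instance x:
eta_x, rho_pos_x, rho_neg_x = 0.8, 0.25, 0.10

# Simulate the noisy label many times at this instance.
y = np.where(rng.random(1_000_000) < eta_x, +1, -1)
flip = np.where(y == +1, rho_pos_x, rho_neg_x)
y_noisy = np.where(rng.random(y.size) < flip, -y, y)

lhs = np.mean(y_noisy == +1)                             # empirical P(noisy Y = +1 | x)
rhs = (1 - rho_pos_x) * eta_x + rho_neg_x * (1 - eta_x)  # Equation (5)
print(lhs, rhs)  # both approximately 0.62
```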
Equation (5) makes intuitive sense within the framework of our noise model, as the noisy label $\tilde{y} = +1$ can come from either an uncorrupted true label $y = +1$ or a corrupted true label $y = -1$. To obtain an upper bound on the risk gap, we need to study the difference between the clean risk and the noisy risk. Using the relationship between $\eta$ and $\tilde{\eta}$, we derive an upper bound on the difference between the clean risk and the noisy risk uniformly over $f \in \mathcal{F}$.
Lemma 2.
Suppose Assumptions 2.1 and 3.1 hold. Then, for all $f \in \mathcal{F}$, we have
$$\big|R_\ell(f) - \tilde{R}_\ell(f)\big| \le C\,\mathbb{E}_{X \sim P_X}\big[\rho_{+}(X)\,\eta(X) + \rho_{-}(X)\,\big(1 - \eta(X)\big)\big] \le C\rho. \tag{6}$$
The upper bound in Lemma 2 is intuitive. When there is no noise in the distribution, i.e., $\rho_{+}(x) = \rho_{-}(x) = 0$ for all $x$, the upper bound is equal to 0. Additionally, as the amount of noise present in the distribution increases, the upper bound also increases proportionally to the noise rate. Based on these findings, we introduce our main theorem, which provides an upper bound on the risk gap between the clean estimator and the noisy estimator.
Theorem 1.
Suppose Assumptions 2.1, 3.1, and 3.2 hold. For any $\delta \in (0, 1)$, with probability at least $1 - \delta$, we have
$$R_\ell(\hat{f}_{\text{noisy}}) - R_\ell(\hat{f}_{\text{clean}}) \le 2C\rho + 2\,\varepsilon(n, \delta). \tag{7}$$
We can see that a constant term $2C\rho$ related to the noise level $\rho$ appears in the upper bound in Theorem 1. Similar to the result of Lemma 2, the structure of (7) appears to be reasonable. If the noise level $\rho$ is small and the sample size $n$ is large enough, the performance difference between the clean estimator and the noisy estimator becomes negligible. On the other hand, if there is a significant amount of noise in the data, the performance difference between the clean estimator and the noisy estimator grows substantially. Using the fundamental relationship between the excess risk and the generalization bound together with Theorem 1, we can derive an upper bound for the excess risk of the noisy estimator $\hat{f}_{\text{noisy}}$.
Corollary 3.1.
Suppose Assumptions 2.1, 3.1, and 3.2 hold. For any $\delta \in (0, 1)$, with probability at least $1 - 2\delta$, we have
$$R_\ell(\hat{f}_{\text{noisy}}) - \inf_{f \in \mathcal{F}} R_\ell(f) \le 2C\rho + 4\,\varepsilon(n, \delta). \tag{8}$$
Proof.
With probability at least $1 - \delta$, we have
$$R_\ell(\hat{f}_{\text{clean}}) - \inf_{f \in \mathcal{F}} R_\ell(f) \le 2\,\varepsilon(n, \delta),$$
which follows from Assumption 3.2 and the optimality of $\hat{f}_{\text{clean}}$ in (1). Then, adding the above inequality to inequality (7), we get the desired result. ∎
We refer to the constant term $2C\rho$ in the upper bound given by Corollary 3.1 as an irreducible error, which is conceptually similar to the approximation error, as it arises from approximating the clean distribution using noisy samples. Furthermore, the term $4\,\varepsilon(n, \delta)$ can be interpreted as an estimation error. However, for some special cases, the irreducible error can be zero, which occurs when the regression function is learnable only using noisy samples. As we mentioned in Section 1, Menon et al. (2018) showed that it is possible to learn the Bayes optimal classifier through minimizing the empirical risk of noisy samples when the label noise is purely instance-dependent, that is, when the noise function is identical for both labels and depends solely on the instance $x$. Also, we discussed various approaches that enable learning with CCN in Section 1; however, their theoretical understanding remains limited in the context of ILN. Therefore, a pertinent question arises: Is there an approach, apart from ERM, that can remove the irreducible error term in Corollary 3.1 while relying exclusively on noisy samples? We address this question by studying the minimax risk lower bound in Section 4.
3.2 Applications
In this section, we present examples of two specific settings where Theorem 1 can be applied. First, observe that Assumptions 3.1 and 3.2 are quite broad, and this generality enables us to apply our theoretical result to a wide range of settings. Let us consider a bounded domain $\mathcal{X} \subseteq \{x \in \mathbb{R}^d : \|x\|_2 \le B\}$. Then, Assumption 3.1 is valid for any Lipschitz loss function and any hypothesis class that is a subset of a class of $M$-Lipschitz functions, where $M > 0$ is a given constant. We provide an example for the case when we consider a linear hypothesis class.
Example 3.1.
Suppose that we have a bounded domain $\mathcal{X} \subseteq \{x \in \mathbb{R}^d : \|x\|_2 \le B\}$, a constrained linear hypothesis class $\mathcal{F} = \{x \mapsto \langle w, x \rangle : \|w\|_2 \le W\}$, and an $L$-Lipschitz margin-based loss function, i.e., $\ell(f(x), y) = \phi(y f(x))$ for an $L$-Lipschitz function $\phi$. Notice that the linear hypothesis class is a subset of the class of $W$-Lipschitz functions, with the logistic loss and the hinge loss being examples of such loss functions. Then, we have
$$\big|\ell(f(x), y) - \ell(f(x), -y)\big| = \big|\phi(y \langle w, x \rangle) - \phi(-y \langle w, x \rangle)\big| \le 2L\,|\langle w, x \rangle| \le 2LWB.$$
So Assumption 3.1 holds with constant $C = 2LWB$.
Example 3.1 represents a well-studied, classic setting in the machine learning literature. Kakade et al. (2008) studied generalization bounds for the constrained linear hypothesis class, and we review a pertinent result in the following lemma.
Lemma 3 (Corollary 4 in (Kakade et al., 2008)).
Suppose we adopt the same setting as in Example 3.1, with the loss bounded by $b$ over $\mathcal{F}$ and $\mathcal{X}$. For any $\delta \in (0, 1)$, with probability at least $1 - \delta$, we have
$$\sup_{f \in \mathcal{F}} \big|R_\ell(f) - \hat{R}_\ell(f)\big| \le \frac{2LWB}{\sqrt{n}} + b\sqrt{\frac{\log(1/\delta)}{2n}}.$$
Lemma 3 implies that $\varepsilon(n, \delta) = \frac{2LWB}{\sqrt{n}} + b\sqrt{\frac{\log(1/\delta)}{2n}}$ fulfills Assumption 3.2 in the context of Example 3.1. Thus, by applying the above result to Theorem 1, we obtain the following upper bound.
Proposition 1.
Suppose we follow the same setting as Example 3.1. For any $\delta \in (0, 1)$, with probability at least $1 - \delta$, we have
$$R_\ell(\hat{f}_{\text{noisy}}) - R_\ell(\hat{f}_{\text{clean}}) \le 4LWB\rho + \frac{4LWB}{\sqrt{n}} + 2b\sqrt{\frac{\log(1/\delta)}{2n}}. \tag{9}$$
Lee and Foygel Barber (2022) derived an upper bound on the excess risk of the noisy estimator when $\mathcal{F}$ is a linear hypothesis class without norm constraints. In fact, the upper bound in (9) can be derived by following the same proof technique as Lee and Foygel Barber (2022). However, it is important to note that our theoretical results are considerably more general: their results were based on the simpler RCN model, while ours are built upon the ILN model. Also, their findings were limited to linear settings, while our results apply to a wide range of hypothesis classes. As an example beyond the linear hypothesis class, we consider a case of non-parametric regression. Let us consider the following example.
Example 3.2.
Suppose we have a bounded kernel $k$ with $k(x, x) \le \kappa^2$ for all $x \in \mathcal{X}$, let $\mathcal{H}$ be its reproducing kernel Hilbert space (RKHS), and consider an $L$-Lipschitz margin-based loss function. We consider the hypothesis class $\mathcal{F} = \{f \in \mathcal{H} : \|f\|_{\mathcal{H}} \le \Lambda\}$, where $\|\cdot\|_{\mathcal{H}}$ is the corresponding RKHS norm. For given $f \in \mathcal{F}$ and $x \in \mathcal{X}$, we have
$$\big|\ell(f(x), y) - \ell(f(x), -y)\big| \le 2L\,|f(x)| = 2L\,\big|\langle f, k(x, \cdot) \rangle_{\mathcal{H}}\big| \le 2L\,\|f\|_{\mathcal{H}}\sqrt{k(x, x)} \le 2L\Lambda\kappa.$$
So, Assumption 3.1 holds with constant $C = 2L\Lambda\kappa$.
Also, we have the following well-known generalization bound for the RKHS setting (Mohri et al., 2018, Chapter 6.3).
Lemma 4.
Suppose we adhere to the same setting as in Example 3.2, with the loss bounded by $b$ over $\mathcal{F}$ and $\mathcal{X}$. For given $\delta \in (0, 1)$, with probability at least $1 - \delta$, we have
$$\sup_{f \in \mathcal{F}} \big|R_\ell(f) - \hat{R}_\ell(f)\big| \le \frac{2L\Lambda\kappa}{\sqrt{n}} + b\sqrt{\frac{\log(1/\delta)}{2n}}.$$
Lemma 4 shows us that we can use $\varepsilon(n, \delta) = \frac{2L\Lambda\kappa}{\sqrt{n}} + b\sqrt{\frac{\log(1/\delta)}{2n}}$. Thus, by applying the above result to Theorem 1, we get the following result.
Proposition 2.
Suppose we follow the same setting as Example 3.2. For any $\delta \in (0, 1)$, with probability at least $1 - \delta$, we have
$$R_\ell(\hat{f}_{\text{noisy}}) - R_\ell(\hat{f}_{\text{clean}}) \le 4L\Lambda\kappa\rho + \frac{4L\Lambda\kappa}{\sqrt{n}} + 2b\sqrt{\frac{\log(1/\delta)}{2n}}.$$
This example shows that our theoretical findings can be applied to a wide range of settings encountered in classification problems.
4 Lower bound analysis
In this section, we study the minimax lower bound for the 0-1 loss in binary classification within the ILN model, to see whether the irreducible error in Equation (8) can be removed. We would like to show that learning with ILN is not possible even under assumptions typically made in the CCN model. These assumptions strengthen our lower bound results, as they make the clean distribution more favorable for learning. We start by introducing two widely adopted assumptions in the context of binary classification with CCN. These two assumptions only apply to the clean distribution $P_{XY}$.
Assumption 4.1 (Margin assumption (Mammen and Tsybakov, 1999)).
For given $C_\alpha > 0$ and $\alpha \ge 0$, the clean distribution $P_{XY}$ satisfies the margin assumption with parameter $\alpha$ if the following holds for all $t > 0$:
$$P_X\Big(0 < \Big|\eta(X) - \frac{1}{2}\Big| \le t\Big) \le C_\alpha\, t^{\alpha}.$$
The margin assumption is widely used in the problem of binary classification (Audibert and Tsybakov, 2007; Mammen and Tsybakov, 1999; Pillaud-Vivien et al., 2018; Reeve et al., 2019). In binary classification, the easiest instance is one where the label is deterministic ($\eta(x) \in \{0, 1\}$), while the hardest instance is one where the conditional probability lies on the decision boundary ($\eta(x) = 1/2$). Thus, the difficulty of the problem is determined by the amount of mass placed where $\eta(x)$ is close to $1/2$. The margin assumption implies that we can control this quantity with the parameter $\alpha$, and also indicates that not all of the mass of $P_X$ is concentrated around the decision boundary.
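As an illustrative example (not from the paper): if $X \sim \mathrm{Unif}[0, 1]$ and $\eta(x) = x$, then $P_X\big(0 < |\eta(X) - 1/2| \le t\big) = \min(2t, 1)$, so Assumption 4.1 holds with $\alpha = 1$ and $C_\alpha = 2$.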
Assumption 4.2 (Anchor point assumption).
For each label $y \in \{-1, +1\}$, there exists $x_y \in \mathcal{X}$ such that $P_X(\{x_y\}) > 0$ and $P(Y = y \mid X = x_y) = 1$.
The anchor point assumption implies that for each label there exists at least one instance where the label is deterministic. The anchor points allow us to estimate the unknown noise rates in the classification problem with CCN (Scott, 2015; Reeve et al., 2019). With these two assumptions together, many theoretical works have shown that the optimal classifier can be learned only with noisy samples when the noise is instance-independent but label-dependent. However, the following theorem implies that this is not possible with instance- and label-dependent label noise.
Theorem 2.
Suppose $\hat{f}$ is a function learned using noisy samples $\tilde{S} \sim P_{X\tilde{Y}}^{n}$. For all $n \ge 1$, we have
$$\inf_{\hat{f}} \sup_{P} \; \mathbb{E}_{\tilde{S} \sim P_{X\tilde{Y}}^{n}}\big[R(\hat{f}) - R(f^*)\big] \ge \frac{1}{2(2e+1)} \cdot \frac{\rho}{2 - \rho}\, \min\Big\{1,\; C_\alpha\Big(\frac{\rho}{2(2 - \rho)}\Big)^{\alpha}\Big\}. \tag{10}$$
The infimum is taken over all possible estimators $\hat{f}$ and the supremum is taken over all possible clean distributions $P_{XY}$ that satisfy Assumptions 4.1 and 4.2, as well as noisy distributions $P_{X\tilde{Y}}$ whose noise functions satisfy Assumption 2.1 and where $P_{XY}$ and $P_{X\tilde{Y}}$ have the same marginal distribution over $\mathcal{X}$.
Theorem 2 suggests that even when provided with a sufficiently large number of noisy samples, there is no estimator capable of learning the optimal function $f^*$. This occurs because the noise rate varies for each instance, making the presence of anchor points insufficient to accurately estimate these noise rates. Interestingly, when combined with Theorem 1, Theorem 2 suggests that empirical risk minimization achieves the optimal rate of excess risk, which is proportional to the noise level $\rho$. This also implies that, without any additional assumptions, no more sophisticated method such as loss correction or label correction is necessary.
4.1 Proof of Theorem 2
The two representative techniques to lower bound the minimax risk are Le Cam's method and Fano's method (Wainwright, 2019). Both approaches reduce the estimation problem to a testing problem and subsequently derive a lower bound for the probability of error within that testing context. Here, we use a variant of Fano's method (Birgé, 2001) to derive a minimax lower bound.
Lemma 5 (Birgé (2001)).
Let $\{P_0, P_1, \ldots, P_N\}$ be a finite set of probability distributions on a measurable space $(\mathcal{Z}, \mathcal{A})$ with $N \ge 1$. Suppose $Z$ is a random variable that follows one of the given distributions, and $\mathrm{KL}(P_i, P_j)$ is the Kullback–Leibler divergence between two distributions $P_i$ and $P_j$. Then, we have
$$\inf_{\hat{T}} \max_{0 \le i \le N} P_i\big(\hat{T} \ne i\big) \ge \min\left(\frac{1}{2e+1},\; 1 - \frac{\bar{K}}{\log(N+1)}\right), \qquad \bar{K} = \frac{1}{N}\sum_{i=1}^{N} \mathrm{KL}(P_i, P_0),$$
where the infimum is taken over all possible estimators $\hat{T}$ of the index based on $Z$.
Throughout this section, we use $+$ and $-$ to simplify the labels $+1$ and $-1$, respectively. The proof of Theorem 2 is adapted to some extent from the proof by Reeve et al. (2019), which also uses Fano's method to derive a minimax lower bound for the CCN model. A key difference is that our study of the more general ILN model allows us to construct two distinct clean distributions, $P_1$ and $P_2$, while ensuring that their noisy counterparts, $\tilde{P}_1$ and $\tilde{P}_2$, are indistinguishable. In this setting, distinguishing $P_1$ from $P_2$ using clean samples becomes possible with a large enough sample size, while differentiating $\tilde{P}_1$ from $\tilde{P}_2$ remains impossible. As a result, any estimator that relies on noisy samples to determine the originating distribution of the samples would be left with no option but to make a random guess.
We choose $\mathcal{X} = \{x_0, x_{+}, x_{-}\}$ to be a finite set, and the marginal distribution over $\mathcal{X}$ to be the same for $P_1$ and $P_2$. Consequently, we only need to concentrate on $\eta_i(x) = P_i(Y = + \mid X = x)$, which represents the conditional distribution of $Y$ given $X$ for the distribution $P_i$. We denote the noisy counterpart of $\eta_i$ as $\tilde{\eta}_i$ for $i \in \{1, 2\}$. We set $\eta_i(x_{+}) = 1$ and $\eta_i(x_{-}) = 0$ for all $i \in \{1, 2\}$, ensuring that both distributions adhere to the anchor point assumption. We set $\eta_1(x_0) > \frac{1}{2}$ and $\eta_2(x_0) < \frac{1}{2}$, as we want $x_0$ to be the point that distinguishes $P_1$ and $P_2$. This is shown in Figure 1(a).
[Figure 1: (a) The clean conditional probabilities $\eta_1$ and $\eta_2$, which differ only at $x_0$. (b) The noise functions at $x_0$, under which the noisy conditional probabilities $\tilde{\eta}_1$ and $\tilde{\eta}_2$ coincide.]
We use $\rho_{y}^{(i)}$ for $i \in \{1, 2\}$ to denote the noise function of $P_i$ for the label $y$. We aim to configure the noise functions in such a way that the conditional distributions $\tilde{\eta}_1$ and $\tilde{\eta}_2$ become indistinguishable. By Lemma 1, we have
$$\tilde{\eta}_i(x) = \big(1 - \rho_{+}^{(i)}(x)\big)\,\eta_i(x) + \rho_{-}^{(i)}(x)\,\big(1 - \eta_i(x)\big), \qquad i \in \{1, 2\}. \tag{11}$$
Since $\eta_1(x_{+}) = \eta_2(x_{+}) = 1$ and $\eta_1(x_{-}) = \eta_2(x_{-}) = 0$, we choose $\rho_{y}^{(i)}(x) = 0$ for all $x \in \{x_{+}, x_{-}\}$ and $i \in \{1, 2\}$, resulting in $\tilde{\eta}_1(x) = \tilde{\eta}_2(x)$ for all $x \in \{x_{+}, x_{-}\}$. We still need to determine the noise functions for $x_0$. To do so, we select $\eta_1(x_0) = \frac{1}{2 - \rho}$ and $\eta_2(x_0) = \frac{1 - \rho}{2 - \rho}$. For the noise function, we set $\rho_{+}^{(1)}(x_0) = \frac{\rho}{2}$, $\rho_{-}^{(1)}(x_0) = 0$, $\rho_{+}^{(2)}(x_0) = 0$, and $\rho_{-}^{(2)}(x_0) = \frac{\rho}{2}$. Then, we have $\tilde{\eta}_1(x) = \tilde{\eta}_2(x)$ for all $x \in \mathcal{X}$, which is illustrated in Figure 1(b). Consequently, we have
$$\tilde{\eta}_1(x_0) = \tilde{\eta}_2(x_0) = \frac{1}{2} \qquad \text{and} \qquad \Big|\eta_i(x_0) - \frac{1}{2}\Big| = \frac{\rho}{2(2 - \rho)} \quad \text{for } i \in \{1, 2\}. \tag{12}$$
Equation (12) implies that the distributions $P_1$ and $P_2$ satisfy the margin assumption for any marginal distribution with $P_X(\{x_0\}) \le C_\alpha \big(\frac{\rho}{2(2 - \rho)}\big)^{\alpha}$. We introduce the key lemma for the proof of our theorem.
Lemma 6.
Let $\hat{f}$ be a classifier that is measurable with respect to $\tilde{S} \sim \tilde{P}_i^{n}$ (the $n$-fold product distribution of $\tilde{P}_i$). Then, we have
$$\max_{i \in \{1, 2\}} \mathbb{E}_{\tilde{S} \sim \tilde{P}_i^{n}}\big[R_i(\hat{f}) - R_i(f_i^*)\big] \ge \frac{1}{2(2e+1)} \cdot \frac{\rho}{2 - \rho}\, P_X(\{x_0\}),$$
where $R_i$ and $f_i^*$ denote the 0-1 risk and the Bayes optimal classifier under $P_i$, for $i \in \{1, 2\}$.
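The construction above can be verified numerically. The following sketch checks that the two clean distributions become identical after corruption and recovers the margin in (12); the noise level $\rho = 0.6$ is an arbitrary illustrative value.

```python
def eta_tilde(eta, rho_pos, rho_neg):
    """Noisy conditional probability via Lemma 1 / Equation (11)."""
    return (1 - rho_pos) * eta + rho_neg * (1 - eta)

rho = 0.6                                 # illustrative noise level, rho < 1

# Clean conditional probabilities at the distinguishing point x0:
eta1 = 1 / (2 - rho)                      # eta_1(x0) > 1/2
eta2 = 1 - eta1                           # eta_2(x0) < 1/2

print(eta_tilde(eta1, rho / 2, 0.0))      # P_1 flips only y = +1: gives 0.5
print(eta_tilde(eta2, 0.0, rho / 2))      # P_2 flips only y = -1: gives 0.5
print(eta1 - 0.5, rho / (2 * (2 - rho)))  # margin at x0, matching Equation (12)
```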
5 Conclusion
This paper provides a theoretical analysis of the instance and label dependent noise model. We derived an upper bound for the excess risk of the empirical risk minimizer when using noisy samples. Furthermore, we showed that the minimax risk of the 0-1 loss is lower bounded by a constant that is proportional to the noise level. These results imply that the empirical risk minimizer achieves the optimal excess risk bound in terms of the noise level, despite its simplicity compared with other noise-tolerant methods. Furthermore, the lower bound result prompts us to consider what additional assumptions are required to make learning with ILN possible, and it provides some justification for using clean samples, as learning solely with noisy samples is not possible. These theoretical findings validate numerous works on the ILN model that adopt stronger assumptions or rely on human experts to clean noisy samples.
References
- Angluin and Laird (1988) D. Angluin and P. Laird. Learning from noisy examples. Machine Learning, 2:343–370, 1988.
- Audibert and Tsybakov (2007) J.-Y. Audibert and A. B. Tsybakov. Fast learning rates for plug-in classifiers. The Annals of Statistics, 35(2):608–633, 2007.
- Bartlett et al. (2006) P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
- Berthon et al. (2021) A. Berthon, B. Han, G. Niu, T. Liu, and M. Sugiyama. Confidence scores make instance-dependent label-noise learning possible. In International conference on machine learning, pages 825–836. PMLR, 2021.
- Birgé (2001) L. Birgé. A new look at an old result: Fano's lemma. 2001.
- Bootkrajang and Kabán (2012) J. Bootkrajang and A. Kabán. Label-noise robust logistic regression and its applications. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2012, Bristol, UK, September 24-28, 2012. Proceedings, Part I 23, pages 143–158. Springer, 2012.
- Chen et al. (2021) P. Chen, J. Ye, G. Chen, J. Zhao, and P.-A. Heng. Beyond class-conditional assumption: A primary attempt to combat instance-dependent label noise. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 11442–11450, 2021.
- Cheng et al. (2020a) H. Cheng, Z. Zhu, X. Li, Y. Gong, X. Sun, and Y. Liu. Learning with instance-dependent label noise: A sample sieve approach. arXiv preprint arXiv:2010.02347, 2020a.
- Cheng et al. (2020b) J. Cheng, T. Liu, K. Ramamohanarao, and D. Tao. Learning with bounded instance and label-dependent label noise. In International Conference on Machine Learning, pages 1789–1799. PMLR, 2020b.
- Ghosh et al. (2015) A. Ghosh, N. Manwani, and P. Sastry. Making risk minimization tolerant to label noise. Neurocomputing, 160:93–107, 2015.
- Goldberger and Ben-Reuven (2017) J. Goldberger and E. Ben-Reuven. Training deep neural-networks using a noise adaptation layer. In International conference on learning representations, 2017.
- Han et al. (2018) B. Han, Q. Yao, X. Yu, G. Niu, M. Xu, W. Hu, I. Tsang, and M. Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. Advances in neural information processing systems, 31, 2018.
- Han et al. (2020) B. Han, G. Niu, X. Yu, Q. Yao, M. Xu, I. Tsang, and M. Sugiyama. Sigua: Forgetting may make learning with noisy labels more robust. In International Conference on Machine Learning, pages 4006–4016. PMLR, 2020.
- Huang et al. (2019) J. Huang, L. Qu, R. Jia, and B. Zhao. O2u-net: A simple noisy label detection approach for deep neural networks. In Proceedings of the IEEE/CVF international conference on computer vision, pages 3326–3334, 2019.
- Injadat et al. (2016) M. Injadat, F. Salo, and A. B. Nassif. Data mining techniques in social media: A survey. Neurocomputing, 214:654–670, 2016.
- Jiang et al. (2018) L. Jiang, Z. Zhou, T. Leung, L.-J. Li, and L. Fei-Fei. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In International conference on machine learning, pages 2304–2313. PMLR, 2018.
- Kakade et al. (2008) S. M. Kakade, K. Sridharan, and A. Tewari. On the complexity of linear prediction: Risk bounds, margin bounds, and regularization. Advances in neural information processing systems, 21, 2008.
- Lee and Foygel Barber (2022) Y. Lee and R. Foygel Barber. Binary classification with corrupted labels. Electronic Journal of Statistics, 16(1):1367–1392, 2022.
- Liu and Tao (2015) T. Liu and D. Tao. Classification with noisy labels by importance reweighting. IEEE Transactions on pattern analysis and machine intelligence, 38(3):447–461, 2015.
- Liu and Guo (2020) Y. Liu and H. Guo. Peer loss functions: Learning from noisy labels without knowing noise rates. In International conference on machine learning, pages 6226–6236. PMLR, 2020.
- Malach and Shalev-Shwartz (2017) E. Malach and S. Shalev-Shwartz. Decoupling "when to update" from "how to update". Advances in neural information processing systems, 30, 2017.
- Mammen and Tsybakov (1999) E. Mammen and A. B. Tsybakov. Smooth discrimination analysis. The Annals of Statistics, 27(6):1808–1829, 1999.
- Manwani and Sastry (2013) N. Manwani and P. Sastry. Noise tolerance under risk minimization. IEEE transactions on cybernetics, 43(3):1146–1151, 2013.
- Menon et al. (2015) A. Menon, B. Van Rooyen, C. S. Ong, and B. Williamson. Learning from corrupted binary labels via class-probability estimation. In International conference on machine learning, pages 125–134. PMLR, 2015.
- Menon et al. (2018) A. K. Menon, B. Van Rooyen, and N. Natarajan. Learning from binary labels with instance-dependent noise. Machine Learning, 107:1561–1595, 2018.
- Mohri et al. (2018) M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of machine learning. MIT press, 2018.
- Natarajan et al. (2013) N. Natarajan, I. S. Dhillon, P. K. Ravikumar, and A. Tewari. Learning with noisy labels. Advances in neural information processing systems, 26, 2013.
- Nguyen et al. (2019) D. T. Nguyen, C. K. Mummadi, T. P. N. Ngo, T. H. P. Nguyen, L. Beggel, and T. Brox. Self: Learning to filter noisy labels with self-ensembling. arXiv preprint arXiv:1910.01842, 2019.
- Park and Kwon (2021) S. W. Park and J. Kwon. Wasserstein distributional normalization for robust distributional certification of noisy labeled data. In International Conference on Machine Learning, pages 8381–8390. PMLR, 2021.
- Patrini et al. (2017) G. Patrini, A. Rozza, A. Krishna Menon, R. Nock, and L. Qu. Making deep neural networks robust to label noise: A loss correction approach. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1944–1952, 2017.
- Pillaud-Vivien et al. (2018) L. Pillaud-Vivien, A. Rudi, and F. Bach. Exponential convergence of testing error for stochastic gradient methods. In Conference on Learning Theory, pages 250–296. PMLR, 2018.
- Ramaswamy et al. (2016) H. Ramaswamy, C. Scott, and A. Tewari. Mixture proportion estimation via kernel embeddings of distributions. In International conference on machine learning, pages 2052–2060. PMLR, 2016.
- Reeve et al. (2019) H. Reeve et al. Classification with unknown class-conditional label noise on non-compact feature spaces. In Conference on Learning Theory, pages 2624–2651. PMLR, 2019.
- Schroff et al. (2010) F. Schroff, A. Criminisi, and A. Zisserman. Harvesting image databases from the web. IEEE transactions on pattern analysis and machine intelligence, 33(4):754–766, 2010.
- Scott (2015) C. Scott. A rate of convergence for mixture proportion estimation, with application to learning from noisy labels. In Artificial Intelligence and Statistics, pages 838–846. PMLR, 2015.
- Scott et al. (2013) C. Scott, G. Blanchard, and G. Handy. Classification with asymmetric label noise: Consistency and maximal denoising. In Conference on learning theory, pages 489–511. PMLR, 2013.
- Van Rooyen and Williamson (2017) B. Van Rooyen and R. C. Williamson. A theory of learning with corrupted labels. J. Mach. Learn. Res., 18(1):8501–8550, 2017.
- Wainwright (2019) M. J. Wainwright. High-dimensional statistics: A non-asymptotic viewpoint, volume 48. Cambridge university press, 2019.
- Wei et al. (2020) H. Wei, L. Feng, X. Chen, and B. An. Combating noisy labels by agreement: A joint training method with co-regularization. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 13726–13735, 2020.
- Xia et al. (2019) X. Xia, T. Liu, N. Wang, B. Han, C. Gong, G. Niu, and M. Sugiyama. Are anchor points really indispensable in label-noise learning? Advances in neural information processing systems, 32, 2019.
- Xia et al. (2020) X. Xia, T. Liu, B. Han, N. Wang, M. Gong, H. Liu, G. Niu, D. Tao, and M. Sugiyama. Part-dependent label noise: Towards instance-dependent label noise. Advances in Neural Information Processing Systems, 33:7597–7610, 2020.
- Yi and Wu (2019) K. Yi and J. Wu. Probabilistic end-to-end noise correction for learning with noisy labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7017–7025, 2019.
- Yu et al. (2019) X. Yu, B. Han, J. Yao, G. Niu, I. Tsang, and M. Sugiyama. How does disagreement help generalization against label corruption? In International Conference on Machine Learning, pages 7164–7173. PMLR, 2019.
- Zhang et al. (2021) Y. Zhang, G. Niu, and M. Sugiyama. Learning noise transition matrix from only noisy labels via total variation regularization. In International Conference on Machine Learning, pages 12501–12512. PMLR, 2021.
- Zhu et al. (2021) Z. Zhu, T. Liu, and Y. Liu. A second-order approach to learning with instance-dependent label noise. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10113–10123, 2021.
Appendix A Proofs
A.1 Proof for Lemma 1
Proof.
For given $x \in \mathcal{X}$, we have
$$\begin{aligned} \tilde{\eta}(x) &= P(\tilde{Y} = + \mid X = x) \\ &= P(\tilde{Y} = +, Y = + \mid X = x) + P(\tilde{Y} = +, Y = - \mid X = x) \\ &= \big(1 - \rho_{+}(x)\big)\,\eta(x) + \rho_{-}(x)\,\big(1 - \eta(x)\big). \end{aligned}$$
The third equation follows from the definition of $\eta$ and the noise functions $\rho_{+}$ and $\rho_{-}$. Similarly, we have the following equation:
$$1 - \tilde{\eta}(x) = \big(1 - \rho_{-}(x)\big)\,\big(1 - \eta(x)\big) + \rho_{+}(x)\,\eta(x). \tag{13}$$
∎
A.2 Proof for Lemma 2
Proof.
Suppose $f \in \mathcal{F}$ is given. We have
$$\begin{aligned} R_\ell(f) - \tilde{R}_\ell(f) &= \mathbb{E}_{(X, Y) \sim P_{XY}}\big[\ell(f(X), Y)\big] - \mathbb{E}_{(X, \tilde{Y}) \sim P_{X\tilde{Y}}}\big[\ell(f(X), \tilde{Y})\big] \\ &= \mathbb{E}_{X}\big[\eta(X)\,\ell(f(X), +) + (1 - \eta(X))\,\ell(f(X), -)\big] - \mathbb{E}_{X}\big[\tilde{\eta}(X)\,\ell(f(X), +) + (1 - \tilde{\eta}(X))\,\ell(f(X), -)\big]. \end{aligned}$$
The first equation follows from the definition of the clean risk $R_\ell$ and the noisy risk $\tilde{R}_\ell$. The second equation is derived from the tower property of conditional expectation.
Given that both terms in the preceding equation are integrated over $P_X$, we can combine the two terms accordingly:
$$R_\ell(f) - \tilde{R}_\ell(f) = \mathbb{E}_{X}\Big[\big(\eta(X) - \tilde{\eta}(X)\big)\,\ell(f(X), +) + \big(\tilde{\eta}(X) - \eta(X)\big)\,\ell(f(X), -)\Big].$$
Using (5) and (13) and rearranging the terms, we have
$$R_\ell(f) - \tilde{R}_\ell(f) = \mathbb{E}_{X}\Big[\big(\rho_{+}(X)\,\eta(X) - \rho_{-}(X)\,(1 - \eta(X))\big)\,\big(\ell(f(X), +) - \ell(f(X), -)\big)\Big].$$
By Assumption 3.1, the term $\ell(f(X), +) - \ell(f(X), -)$ is bounded by $C$ in absolute value, so
$$\big|R_\ell(f) - \tilde{R}_\ell(f)\big| \le C\,\mathbb{E}_{X}\Big[\big|\rho_{+}(X)\,\eta(X) - \rho_{-}(X)\,(1 - \eta(X))\big|\Big].$$
Since $C$ is a constant, it remains to bound $\mathbb{E}_{X}\big[|\rho_{+}(X)\,\eta(X) - \rho_{-}(X)\,(1 - \eta(X))|\big]$. By the triangle inequality,
$$\mathbb{E}_{X}\Big[\big|\rho_{+}(X)\,\eta(X) - \rho_{-}(X)\,(1 - \eta(X))\big|\Big] \le \mathbb{E}_{X}\big[\rho_{+}(X)\,\eta(X) + \rho_{-}(X)\,(1 - \eta(X))\big]. \tag{14}$$
Bounding $\eta(X)$ and $1 - \eta(X)$ by $1$, we further have
$$\mathbb{E}_{X}\big[\rho_{+}(X)\,\eta(X) + \rho_{-}(X)\,(1 - \eta(X))\big] \le \mathbb{E}_{X}\big[\rho_{+}(X) + \rho_{-}(X)\big] \le \rho, \tag{15}$$
where the last inequality comes from Assumption 2.1. As a result, we have
$$\big|R_\ell(f) - \tilde{R}_\ell(f)\big| \le C\,\mathbb{E}_{X}\big[\rho_{+}(X)\,\eta(X) + \rho_{-}(X)\,(1 - \eta(X))\big] \le C\rho.$$
Since $f \in \mathcal{F}$ was arbitrary, the claim follows.
∎
A.3 Proof for Theorem 1
Proof.
First of all, we have
$$\begin{aligned} R_\ell(\hat{f}_{\text{noisy}}) - R_\ell(\hat{f}_{\text{clean}}) &= \big(R_\ell(\hat{f}_{\text{noisy}}) - \tilde{R}_\ell(\hat{f}_{\text{noisy}})\big) + \big(\tilde{R}_\ell(\hat{f}_{\text{noisy}}) - \tilde{R}_\ell(\hat{f}_{\text{clean}})\big) + \big(\tilde{R}_\ell(\hat{f}_{\text{clean}}) - R_\ell(\hat{f}_{\text{clean}})\big) \\ &\le 2C\rho + \tilde{R}_\ell(\hat{f}_{\text{noisy}}) - \tilde{R}_\ell(\hat{f}_{\text{clean}}). \end{aligned}$$
The last inequality comes from Lemma 2. We decompose the second term in the above inequality to
$$\tilde{R}_\ell(\hat{f}_{\text{noisy}}) - \tilde{R}_\ell(\hat{f}_{\text{clean}}) = \big(\tilde{R}_\ell(\hat{f}_{\text{noisy}}) - \hat{\tilde{R}}_\ell(\hat{f}_{\text{noisy}})\big) + \big(\hat{\tilde{R}}_\ell(\hat{f}_{\text{noisy}}) - \hat{\tilde{R}}_\ell(\hat{f}_{\text{clean}})\big) + \big(\hat{\tilde{R}}_\ell(\hat{f}_{\text{clean}}) - \tilde{R}_\ell(\hat{f}_{\text{clean}})\big).$$
The following holds with probability at least $1 - \delta$ by Assumption 3.2:
$$\sup_{f \in \mathcal{F}} \big|\tilde{R}_\ell(f) - \hat{\tilde{R}}_\ell(f)\big| \le \varepsilon(n, \delta).$$
Thus, with probability at least $1 - \delta$, we get
$$\tilde{R}_\ell(\hat{f}_{\text{noisy}}) - \tilde{R}_\ell(\hat{f}_{\text{clean}}) \le 2\,\varepsilon(n, \delta) + \hat{\tilde{R}}_\ell(\hat{f}_{\text{noisy}}) - \hat{\tilde{R}}_\ell(\hat{f}_{\text{clean}}).$$
It remains to bound the middle term $\hat{\tilde{R}}_\ell(\hat{f}_{\text{noisy}}) - \hat{\tilde{R}}_\ell(\hat{f}_{\text{clean}})$. We have
$$\hat{\tilde{R}}_\ell(\hat{f}_{\text{noisy}}) - \hat{\tilde{R}}_\ell(\hat{f}_{\text{clean}}) \le 0,$$
which comes from the optimality of $\hat{f}_{\text{noisy}}$ as the minimizer of $\hat{\tilde{R}}_\ell$ over $\mathcal{F}$ in (2). Therefore, we obtain
$$R_\ell(\hat{f}_{\text{noisy}}) - R_\ell(\hat{f}_{\text{clean}}) \le 2C\rho + 2\,\varepsilon(n, \delta)$$
with probability at least $1 - \delta$. ∎
A.4 Proof for Lemma 6
Proof.
Suppose $\tilde{S}$ are the given noisy samples, but we do not know whether $\tilde{S}$ came from $\tilde{P}_1^{n}$ or $\tilde{P}_2^{n}$. We want to construct an estimator $\hat{T} \in \{1, 2\}$, which estimates the originating distribution of the samples, using the function $\hat{f}$ that is trained using $\tilde{S}$. We choose $\hat{T} = i$ if $\mathrm{sign}(\hat{f}(x_0)) = \mathrm{sign}(f_i^*(x_0))$. Since $\eta_1(x_0) > \frac{1}{2}$ and $\eta_2(x_0) < \frac{1}{2}$, the Bayes optimal classifier $f_i^*$ for each distribution satisfies $\mathrm{sign}(f_i^*(x_0)) = \mathrm{sign}(2\eta_i(x_0) - 1)$ for $i \in \{1, 2\}$, so that $\mathrm{sign}(f_1^*(x_0)) \ne \mathrm{sign}(f_2^*(x_0))$. Then, we have
$$\{\hat{T} \ne i\} = \big\{\mathrm{sign}(\hat{f}(x_0)) \ne \mathrm{sign}(f_i^*(x_0))\big\}$$
by definition of $\hat{T}$. Since $\tilde{P}_1^{n}$ and $\tilde{P}_2^{n}$ are the same, we have $\mathrm{KL}(\tilde{P}_1^{n}, \tilde{P}_2^{n}) = 0$. By Lemma 5, we have
$$\max_{i \in \{1, 2\}} \tilde{P}_i^{n}\big(\hat{T} \ne i\big) \ge \frac{1}{2e+1}. \tag{16}$$
Also, the following holds for any measurable $f$ and $i \in \{1, 2\}$:
$$R_i(f) - R_i(f_i^*) \ge 2\,\Big|\eta_i(x_0) - \frac{1}{2}\Big|\,P_X(\{x_0\})\,\mathbf{1}\big\{\mathrm{sign}(f(x_0)) \ne \mathrm{sign}(f_i^*(x_0))\big\} = \frac{\rho}{2 - \rho}\,P_X(\{x_0\})\,\mathbf{1}\big\{\mathrm{sign}(f(x_0)) \ne \mathrm{sign}(f_i^*(x_0))\big\},$$
where the last equality comes from (12). Inputting $\hat{f}$ for $f$ and taking expectation over $\tilde{S} \sim \tilde{P}_i^{n}$ for $i \in \{1, 2\}$, we get
$$\mathbb{E}_{\tilde{S} \sim \tilde{P}_i^{n}}\big[R_i(\hat{f}) - R_i(f_i^*)\big] \ge \frac{\rho}{2 - \rho}\,P_X(\{x_0\})\,\tilde{P}_i^{n}\big(\hat{T} \ne i\big).$$
Summing the above inequality for $i \in \{1, 2\}$ and using (16), we get
$$\max_{i \in \{1, 2\}} \mathbb{E}_{\tilde{S} \sim \tilde{P}_i^{n}}\big[R_i(\hat{f}) - R_i(f_i^*)\big] \ge \frac{1}{2} \sum_{i \in \{1, 2\}} \mathbb{E}_{\tilde{S} \sim \tilde{P}_i^{n}}\big[R_i(\hat{f}) - R_i(f_i^*)\big] \ge \frac{1}{2(2e+1)} \cdot \frac{\rho}{2 - \rho}\,P_X(\{x_0\}).$$
As we can choose $P_X(\{x_0\}) = \min\big\{1, C_\alpha\big(\frac{\rho}{2(2 - \rho)}\big)^{\alpha}\big\}$, we get the desired result. ∎