Robust estimation algorithms don’t need to know the corruption level.
Abstract
Real data are rarely pure. Hence the past half century has seen great interest in robust estimation algorithms that perform well even when part of the data is corrupt. However, the vast majority of them approach optimal accuracy only when given a tight upper bound on the fraction of corrupt data. Such bounds are not available in practice, resulting in weak guarantees and often poor performance. This brief note abstracts the complex and pervasive robustness problem into a simple geometric puzzle. It then applies the puzzle's solution to derive a universal meta technique that converts any robust estimation algorithm requiring a tight corruption-level upper bound to achieve its optimal accuracy into one achieving essentially the same accuracy without using any upper bound.
1 Motivation
Much of statistics concerns learning from samples, typically assumed to be generated by an unknown distribution. As the number of samples increases, many statistical tasks can be learned to arbitrarily small error. However, in practical applications, often not all samples are generated according to the underlying distribution. Some may be inaccurate, corrupt, or even adversarial.
Robust algorithms that learn accurately even when some samples are corrupt have therefore been studied since the early works of Tukey [Tuk60] and Huber [Hub64], and have been the subject of several books [HR09, HRRS11]. The research continues to this day, with recent efficient algorithms for robust mean and density estimation [LRV16, DKK+16, DKK+17], sparse mean estimation [BDLS17, DKK+19c], learning from batches [QV18, CLM20a, JO20b, JO20a, CLM20b, JO21], Erdős–Rényi graphs [AJK+21], and many more [BDLS17, CSV17, KKM18, SCV18, DKK+18, HL18, KSS18, DKK+19b, DKK+19a, ZJS19, PSBR20, LSLC20, CKMY20, PJL21, JLST21]; see [DK19] for a recent survey.
Essentially all these works consider the Huber model [Hub64] and its generalizations. A $1-\epsilon$ fraction of the samples are genuine, while the remaining $\epsilon$ fraction may be inaccurate, corrupt, or adversarial, even chosen after viewing the genuine samples. A significant effort within this research concerns statistical estimation, approximating an underlying distribution or parameter set. This note addresses statistical estimation under any of the corruption models above.
Most robust algorithms, both in general and specifically for estimation, either make the theoretically convenient assumption that the corruption fraction $\epsilon$ is known, or more practically utilize an input parameter $\epsilon_0$ that is assumed to upper bound $\epsilon$, and may otherwise return an arbitrary answer. Since $\epsilon$ is never known, the first approach cannot work in practice. Assuming an upper bound necessitates a large $\epsilon_0$ to ensure correctness. Yet as the following examples show, the algorithm's estimation error is an increasing function of $\epsilon_0$.
[DKK+16, DKK+17] considered robust learning of high-dimensional Gaussians with identity covariance. They showed that as long as the corrupted fraction $\epsilon$ is at most the upper bound $\epsilon_0$, with sufficiently many samples, the distribution mean can be learned to $\ell_2$ distance $O(\epsilon_0\sqrt{\log(1/\epsilon_0)})$, and the same accuracy applies for learning the distribution itself in TV distance. They also derived similar results for learning general Gaussian distributions and product distributions when provided an upper bound $\epsilon_0$. A sequence of papers [QV18, CLM20a, JO20b, JO20a, CLM20b, JO21] studied discrete and piecewise-polynomial continuous distributions where the samples arrive in batches of size $k$, and a fraction $\epsilon \le \epsilon_0$ of the batches can be arbitrary. They showed that the underlying distribution can be learned to TV distance $\tilde O(\epsilon_0/\sqrt{k})$. Similarly, [AJK+21] showed that for Erdős–Rényi graphs with $n$ nodes, where the edges connected to a fraction $\epsilon \le \epsilon_0$ of the nodes may be corrupt, the connection probability $p$ can be learned to an accuracy that grows with $\epsilon_0$ and depends on $p$ itself.
While this note addresses robust estimation, the known-upper-bound assumption is prevalent in many other robust learning applications including robust regression and robust gradient descent, see for example [KKM18, CKMY20, DKS19, PSBR20]. It would be interesting to see whether they could be similarly addressed as well.
The conflicting dependence of accuracy and validity on $\epsilon_0$ raises several natural concerns. First, the corrupt fraction $\epsilon$ is typically unknown, and upper bounding it by a fixed $\epsilon_0$ goes against the very essence of robustness. Even if one is willing to assume an upper bound $\epsilon_0$, what should it be? A large $\epsilon_0$ can drastically reduce the algorithm's accuracy, yet a small $\epsilon_0$ risks invalidating its results altogether.
Questions about the validity and choice of $\epsilon_0$ have therefore haunted this approach in both presentations and applications.
This brief note takes a bird's-eye view of optimal robust estimation. Instead of addressing each individual problem, it reformulates all of them as an elementary geometric puzzle whose exceedingly simple and elegant solution yields a universal, unified, and efficient method achieving optimal error without knowing or bounding $\epsilon$.
The next section describes the puzzle and its solution. Section 3 applies the result to remove the upper-bound requirement from all robust estimation problems, achieving the same accuracy as if the corruption level were known in advance, and demonstrates the effect on the three robust learning examples above. Section 4 considers some of the technique's limitations and possible extensions.
2 AirTag
Apple's AirTag approximates the location $\ell$ of a misplaced item. Its beta-version successor lets the user select an approximation distance $d$ that in turn determines the search space $S_d$: if $d \le d'$ then $S_d \subseteq S_{d'}$. AirTag then returns an approximate location $x_d$ that is within distance $d$ from $\ell$ if $\ell \in S_d$, and is arbitrary otherwise. The set of possible locations and the distance may form any metric space, $d$ can assume any value in a finite set $D$ of positive values, and, par for Apple's course, $S_d$, except for its growth with $d$, is completely unknown.
The best approximation distance is clearly the smallest $d$ such that $\ell \in S_d$, which we denote by $d^*$. Choosing $d > d^*$ worsens $x_d$'s accuracy, while for $d < d^*$, $x_d$ is arbitrary. However $\ell$ and $S_d$ are unknown, hence so is $d^*$. Upon obtaining the locations $x_d$ for all $d \in D$, can you approximate $\ell$ to a distance at most $c \cdot d^*$ for some small constant $c$?
We will soon describe two simple solutions, but first observe that the puzzle captures both the essence and the functionality of robust algorithms. Given an upper bound $\epsilon_0$ on the corruption level, these algorithms approximate an underlying distribution or parameter set. If the actual corruption level is below $\epsilon_0$, their output is within the specified distance of the distribution, and otherwise no guarantees are provided.
One small difference is that for estimation algorithms, the distance guarantee is not the parameter itself, but rather some known increasing function $f$ of it, specific to each problem. The addition of $f$ can be viewed as a simple reparametrization of the parameter, hence it does not change the problem. Yet it will be convenient in the applications, hence we phrase the results in this more general form.
Consider a metric space $(X, \rho)$, a finite set $D$ of positive reals, an association of a point $x_d \in X$ with every $d \in D$, and a non-decreasing function $f : D \to \mathbb{R}_{\ge 0}$. An estimator is given $x_d$ for all $d \in D$ as input, and outputs a point $\hat x \in X$. It achieves approximation factor $c$ if for every input, and every $x \in X$ and $d^* \in D$ such that $\rho(x, x_d) \le f(d)$ for all $d \ge d^*$, it satisfies $\rho(x, \hat x) \le c \cdot f(d^*)$.
The following estimator achieves approximation factor 2. Let $B_d$ be the ball of radius $f(d)$ around $x_d$. Define $\hat d$ to be the smallest number in $D$ such that $\bigcap_{d \in D,\, d \ge \hat d} B_d$ is non-empty, and let $\hat x$ be any point in this intersection.
Lemma 1.
The estimator $\hat x$ achieves approximation factor 2.
Proof.
Let $x$ and $d^*$ satisfy $\rho(x, x_d) \le f(d)$ for all $d \ge d^*$. For all $d \ge d^*$, $x \in B_d$. Hence, by definition, $\hat d \le d^*$ and $\hat x \in B_{d^*}$. By the triangle inequality, $\rho(x, \hat x) \le \rho(x, x_{d^*}) + \rho(x_{d^*}, \hat x) \le 2 f(d^*)$. ∎
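To make the construction concrete, the following is a minimal Python sketch of this estimator on the real line, where each ball $B_d$ is an interval and intersecting balls reduces to comparing endpoints. The function name and the dictionary-based interface are illustrative assumptions, not part of the formulation above.

```python
def airtag_factor2_line(points, radii):
    """Factor-2 estimator of Lemma 1 on the real line.

    points -- dict mapping each d in D to the reported location x_d
    radii  -- dict mapping each d in D to the radius f(d), non-decreasing in d
    """
    ds = sorted(points)  # candidate values of d in increasing order
    for i, d_hat in enumerate(ds):
        # d_hat = ds[i] is the current candidate for the smallest valid d.
        # Intersection of the intervals [x_d - f(d), x_d + f(d)] over d >= d_hat.
        lo = max(points[d] - radii[d] for d in ds[i:])
        hi = min(points[d] + radii[d] for d in ds[i:])
        if lo <= hi:
            return 0.5 * (lo + hi)  # any point of the intersection works
    raise ValueError("unreachable: the largest d always leaves a single ball")
```

On any input consistent with some point $x$ and threshold $d^*$, the returned point lies in the intersection for some $\hat d \le d^*$, and is therefore within $2 f(d^*)$ of $x$, as Lemma 1 guarantees.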
The next lemma shows that approximation factor 2 is best possible.
Lemma 2.
No estimator achieves approximation factor less than 2.
Proof.
Consider the reals with the absolute-difference distance, $f(d) = d$ for all $d$, $D = \{\delta, 1\}$ for a small $\delta > 0$, and $x_\delta = 1 - \delta$, $x_1 = 0$. Let an estimator achieve approximation factor $c$, and let $\hat x$ be its output on this input. For every $x$ and $d^*$ such that $|x - x_d| \le d$ for all $d \ge d^*$, it must satisfy $|x - \hat x| \le c \cdot d^*$. Hence for $x = 1$ and $d^* = \delta$, $|\hat x - 1| \le c\delta$, while for $x = -1$ and $d^* = 1$, $|\hat x + 1| \le c$. By the triangle inequality, $2 \le c(1 + \delta)$, hence by taking $\delta$ sufficiently small, $c$ can be made to exceed any number less than 2. ∎
While finding a point in the intersection of several balls may be feasible in some metric spaces, for example when $X$ is Euclidean, for a general metric space this may be computationally challenging. The next method achieves a slightly larger approximation factor of 3, but requires only pairwise distance computations among the $x_d$'s.
Define $\tilde d$ to be the smallest number in $D$ such that for all $d \in D$ with $d \ge \tilde d$, $\rho(x_{\tilde d}, x_d) \le f(\tilde d) + f(d)$, and output $x_{\tilde d}$.
Lemma 3.
The estimator $x_{\tilde d}$ achieves approximation factor 3.
Proof.
Let $x$ and $d^*$ satisfy $\rho(x, x_d) \le f(d)$ for all $d \ge d^*$. By the triangle inequality, for all $d \ge d^*$,
\[
\rho(x_{d^*}, x_d) \le \rho(x_{d^*}, x) + \rho(x, x_d) \le f(d^*) + f(d),
\]
hence by $\tilde d$'s definition, $\tilde d \le d^*$. Adding the triangle inequality, the condition defining $\tilde d$, and $f(\tilde d) \le f(d^*)$,
\[
\rho(x, x_{\tilde d}) \le \rho(x, x_{d^*}) + \rho(x_{d^*}, x_{\tilde d}) \le f(d^*) + f(d^*) + f(\tilde d) \le 3 f(d^*). \qquad ∎
\]
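For general metric spaces, here is a minimal Python sketch of this pairwise-distance estimator; again, the dictionary-based interface and the user-supplied distance function are illustrative assumptions.

```python
def airtag_factor3(points, radii, dist):
    """Factor-3 estimator of Lemma 3 for an arbitrary metric `dist`.

    Returns x_d for the smallest d such that x_d is within f(d) + f(d')
    of x_{d'} for every d' >= d in D; uses only pairwise distances.
    """
    ds = sorted(points)  # candidate values of d in increasing order
    for i, d in enumerate(ds):
        if all(dist(points[d], points[d2]) <= radii[d] + radii[d2]
               for d2 in ds[i + 1:]):
            return points[d]
    # Unreachable: the largest d satisfies the condition vacuously.
    return points[ds[-1]]
```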
3 Robustness applications
The AirTag puzzle and its solution suggest a simple universal method for removing the upper bound requirement from any robust estimation algorithm.
Recall that many existing algorithms utilize an input parameter $\epsilon_0$, and so long as it exceeds the actual fraction of corrupt data $\epsilon$, they estimate the unknown distribution or parameter vector to error $f(\epsilon_0)$, while for $\epsilon_0 < \epsilon$ the algorithm fails and its error can be arbitrarily large. If $\epsilon$ were known in advance, one could run the algorithm with input parameter $\epsilon_0 = \epsilon$, resulting in an output that achieves the best error guarantee $f(\epsilon)$. We show that even without knowing $\epsilon$, we can still find an estimate whose error is $O(f(\epsilon))$.
Let $\alpha$ denote the algorithm's breakdown point, the largest corruption fraction for which the algorithm gives a meaningful answer. For example, when more than half the data is corrupt, no parameter can be accurately estimated, hence for every algorithm, $\alpha \le 1/2$. We henceforth assume that the actual corruption level satisfies $\epsilon \le \alpha$.
The AirTag solution involves finding $x_d$, potentially for every $d$ in the set $D$ of all possible values. In corruption applications, every value $\epsilon_0 \in (0, \alpha]$ is possible, hence the approach cannot be applied directly. Instead, we select a small geometric sequence that contains a tight upper approximation of any possible $\epsilon$. For any choice of $\gamma > 0$ and $\epsilon_{\min} > 0$, let $D$ be the collection of $\alpha(1+\gamma)^{-i}$ for $i = 0$ till $\lceil \log_{1+\gamma}(\alpha/\epsilon_{\min}) \rceil$. Let $\epsilon^+$ be the closest upper bound of $\epsilon$ in $D$. Clearly $\epsilon \le \epsilon^+ \le (1+\gamma)\epsilon$ whenever $\epsilon \ge \epsilon_{\min}$, and since $f$ is non-decreasing, for all such $\epsilon$ we have $f(\epsilon^+) \le f((1+\gamma)\epsilon)$.
Running the original algorithm $|D|$ times, once for each value in $D$, and combining the outputs via Lemmas 1 and 3 achieves error at most $2 f(\epsilon^+)$ and $3 f(\epsilon^+)$, respectively. For example, selecting $\gamma = 1$, the method achieves error at most $3 f(2\epsilon)$ by running the algorithm $\lceil \log_2(\alpha/\epsilon_{\min}) \rceil + 1$ times. For typical problems $f$ is sublinear, so $f(2\epsilon) = O(f(\epsilon))$, hence the method achieves error $O(f(\epsilon))$.
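As a concrete illustration, the following Python sketch wires the pieces together for a generic base algorithm. The interface (base_estimator, error_bound, dist) and the default grid parameters are assumptions made for this example rather than part of any specific prior algorithm; the selection step is the pairwise-distance rule of Lemma 3.

```python
def estimate_without_corruption_bound(base_estimator, error_bound, dist,
                                      alpha=0.5, gamma=1.0, eps_min=1e-3):
    """Run a corruption-level-oblivious wrapper around a robust estimator.

    base_estimator(eps0) -- returns the base algorithm's estimate, guaranteed
                            to be within error_bound(eps0) of the truth
                            whenever the true corruption level eps <= eps0
    error_bound(eps0)    -- the guarantee f(eps0), non-decreasing in eps0
    dist(x, y)           -- the metric in which the guarantee is stated
    alpha, gamma, eps_min -- breakdown point and geometric-grid parameters
    """
    # Geometric grid alpha, alpha/(1+gamma), ... down to roughly eps_min.
    grid = []
    eps0 = alpha
    while eps0 >= eps_min:
        grid.append(eps0)
        eps0 /= 1.0 + gamma
    grid.sort()  # increasing order of candidate upper bounds

    estimates = {e: base_estimator(e) for e in grid}

    # Lemma 3 selection rule: smallest eps0 whose estimate is close to
    # every estimate obtained with a larger (hence safer) parameter.
    for i, e in enumerate(grid):
        if all(dist(estimates[e], estimates[e2])
               <= error_bound(e) + error_bound(e2) for e2 in grid[i + 1:]):
            return estimates[e]
    return estimates[grid[-1]]  # unreachable: the largest eps0 passes vacuously
```

For instance, base_estimator could run a robust mean estimator with upper-bound parameter eps0, dist could be the Euclidean distance, and error_bound the estimator's published guarantee $f(\epsilon_0)$.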
This approach applies to every robust estimation algorithm. It converts any robust estimation algorithm that requires a tight upper bound on $\epsilon$ to achieve its optimal guarantee into one that achieves essentially the same guarantees without knowing $\epsilon$. Consider for example the problems mentioned in the introduction. For any positive constant $\gamma$, however small, without knowledge of or an upper bound on the corruption level $\epsilon$, high-dimensional identity-covariance Gaussians can be learned to TV distance $O(\epsilon\sqrt{\log(1/\epsilon)})$, and the same holds for approximating the mean in $\ell_2$ distance. Similar results hold for the other problems considered in [DKK+16, DKK+17]. When the samples arrive in batches of size $k$, the underlying distribution can be estimated to TV distance $\tilde O(\epsilon/\sqrt{k})$.
For Erdős–Rényi graphs, recall that for any $\epsilon_0 \ge \epsilon$, the connection probability $p$ can be estimated to an accuracy $f(\epsilon_0, p)$ that depends on $p$ itself. However, since $p$ is unknown, so is this accuracy, hence we cannot apply our approach directly. To mitigate this problem, we first use the worst possible corruption level $\alpha$ to obtain a weak approximation $\tilde p$ of $p$, and then use $\tilde p$ to obtain a tight upper bound on $f(\epsilon_0, p)$ for all $\epsilon_0$, which therefore upper bounds the actual error for all $\epsilon_0$ as well.
A simple calculation shows that for all $\epsilon_0 \le \alpha$, $p$, and $\tilde p$, if $|\tilde p - p| \le f(\alpha, p)$, then $f(\epsilon_0, \tilde p)$ is within a constant factor of $f(\epsilon_0, p)$. Therefore, given such a $\tilde p$, a constant multiple of $f(\epsilon_0, \tilde p)$ upper bounds the algorithm's error for every input parameter $\epsilon_0$, and using our approach we can estimate $p$ to accuracy $O(f(\epsilon, p))$.
To obtain the weak estimate, we run the algorithm with the maximal corruption level $\alpha$ as its input parameter. Since $\epsilon \le \alpha$, this run is valid and estimates $p$ to accuracy $f(\alpha, p)$, yielding the required $\tilde p$. Using this $\tilde p$, as shown above, we can estimate $p$ to accuracy $O(f(\epsilon, p))$, up to an additive term that is negligible outside an extreme parameter regime.
4 Extensions and limitations
This note concerns robust estimation of an underlying distribution, or some of its parameters, from samples. It would be interesting to generalize this approach to other robust learning problems such as regression, gradient descent, etc. In fact, as the AirTag puzzle itself suggests, the approach is not limited to robustness problems. It applies to any algorithm whose accuracy depends monotonically on an input parameter, up to an unknown limit where the algorithm is no longer correct. It would be interesting to find other types of problems with this property.
Another natural generalization may be to multi-parameter problems. Yet even with two parameters we cannot achieve guarantees similar to the one-parameter case. Consider a metric space $(X, \rho)$, a finite set $D$, and for every $d_1, d_2 \in D$ let $f(d_1, d_2)$ be a non-negative function, non-decreasing in both $d_1$ and $d_2$. For each pair $(d_1, d_2) \in D \times D$, let $x_{d_1, d_2}$ be a point in $X$. Can we find a point $\hat x$ such that for all $x \in X$, $d_1^*$, and $d_2^*$, if $\rho(x, x_{d_1, d_2}) \le f(d_1, d_2)$ for all $d_1 \ge d_1^*$ and $d_2 \ge d_2^*$, then $\hat x$ satisfies $\rho(x, \hat x) \le c \cdot f(d_1^*, d_2^*)$ for some universal constant $c$?
Unfortunately, such a point does not necessarily exist. Consider the reals with the absolute-difference distance and $D = \{1, 2\}$. Let $f(1,1) = f(1,2) = f(2,1) = 0$, $f(2,2) = 2$, and let $x_{1,1}$ be arbitrary. Let $x_{1,2} = 1$, $x_{2,1} = -1$, and $x_{2,2} = 0$. For $(d_1^*, d_2^*) = (1, 2)$, the point $x = 1$ satisfies the above condition, and $f(d_1^*, d_2^*) = 0$. For $(d_1^*, d_2^*) = (2, 1)$, the point $x = -1$ satisfies the above condition, and $f(d_1^*, d_2^*) = 0$. Then for $\hat x$ to be within a constant times $f(d_1^*, d_2^*)$ of all corresponding $x$, it must have distance zero from both $1$ and $-1$, and therefore does not exist.
In fact, this limitation is not an artifact of our approach, but is inherent to some 2-parameter estimation problems.
Let an unknown distribution over the reals have unknown mean $\mu$ and variance $\sigma^2$, and suppose an $\epsilon$ fraction of the samples may be corrupt. Given a tight upper bound on either $\epsilon$ or $\sigma$, simple truncated-mean estimators achieve the optimal error $O(\sigma\sqrt{\epsilon})$. When a tight upper bound on $\epsilon$ is known, trim an $\epsilon$ fraction of samples from both extremes and average the rest, and when a tight upper bound on $\sigma$ is known, recursively remove the sample farthest from the mean of the remaining samples until the mean square deviation of the remaining samples is $O(\sigma^2)$.
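For concreteness, here is a minimal Python sketch of the two known-parameter estimators just described; the exact trimming amount and the constant in the variance test are illustrative choices rather than the constants required by the analysis.

```python
import numpy as np

def trimmed_mean(samples, eps):
    """Known corruption bound eps: drop an eps fraction of the samples
    from each extreme and average the rest (minimal sketch)."""
    x = np.sort(np.asarray(samples, dtype=float))
    k = min(int(np.ceil(eps * len(x))), (len(x) - 1) // 2)
    return x.mean() if k == 0 else x[k:-k].mean()

def variance_truncated_mean(samples, sigma):
    """Known standard-deviation bound sigma: repeatedly remove the sample
    farthest from the current mean until the mean squared deviation of the
    remaining samples is O(sigma**2); the factor 2 below is illustrative."""
    x = [float(v) for v in samples]
    while len(x) > 1:
        m = sum(x) / len(x)
        msd = sum((v - m) ** 2 for v in x) / len(x)
        if msd <= 2 * sigma ** 2:
            return m
        x.pop(max(range(len(x)), key=lambda i: abs(x[i] - m)))
    return x[0]
```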
We show that if both $\epsilon$ and $\sigma$ can be arbitrary, then for any choice of $\epsilon_0$ and $\sigma_0$, any algorithm incurs an error much larger than $\sigma\sqrt{\epsilon}$ for some $\epsilon$ and $\sigma$. Consider two distributions: $p$ assigns probability 1 to the point 0, while $q$ assigns probability $1-\epsilon$ to 0 and probability $\epsilon$ to 1. For underlying distribution $p$ and corruption level $\epsilon$, the adversary can make the distribution of the overall samples appear to be $q$. In this case $\sigma_p = 0$, therefore any estimate $\hat\mu$ that achieves error at most a constant times $\sigma_p\sqrt{\epsilon} = 0$ must satisfy $\hat\mu = 0$. However, for the case where the underlying distribution is $q$ and no samples are corrupt, the samples have the same distribution, the mean of $q$ is $\epsilon$, and for any such estimate $\hat\mu = 0$ the error is $\epsilon$, far exceeding the corresponding benchmark $\sigma_q\sqrt{0} = 0$.
References
- [AJK+21] Jayadev Acharya, Ayush Jain, Gautam Kamath, Ananda Theertha Suresh, and Huanyu Zhang. Robust estimation for random graphs. arXiv preprint arXiv:2111.05320, 2021.
- [BDLS17] Sivaraman Balakrishnan, Simon S. Du, Jerry Li, and Aarti Singh. Computationally efficient robust sparse estimation in high dimensions. In Proceedings of the 30th Annual Conference on Learning Theory, COLT ’17, pages 169–212, 2017.
- [CKMY20] Sitan Chen, Frederic Koehler, Ankur Moitra, and Morris Yau. Online and distribution-free robustness: Regression and contextual bandits with huber contamination. arXiv preprint arXiv:2010.04157, 2020.
- [CLM20a] Sitan Chen, Jerry Li, and Ankur Moitra. Efficiently learning structured distributions from untrusted batches. In Proceedings of the 52nd Annual ACM Symposium on the Theory of Computing, STOC ’20, pages 960–973, New York, NY, USA, 2020. ACM.
- [CLM20b] Sitan Chen, Jerry Li, and Ankur Moitra. Learning structured distributions from untrusted batches: Faster and simpler. In Advances in Neural Information Processing Systems 33, NeurIPS ’20. Curran Associates, Inc., 2020.
- [CSV17] Moses Charikar, Jacob Steinhardt, and Gregory Valiant. Learning from untrusted data. In Proceedings of the 49th Annual ACM Symposium on the Theory of Computing, STOC ’17, pages 47–60, New York, NY, USA, 2017. ACM.
- [DK19] Ilias Diakonikolas and Daniel M. Kane. Recent advances in algorithmic high-dimensional robust statistics. arXiv preprint arXiv:1911.05911, 2019.
- [DKK+16] Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Ankur Moitra, and Alistair Stewart. Robust estimators in high dimensions without the computational intractability. In Proceedings of the 57th Annual IEEE Symposium on Foundations of Computer Science, FOCS ’16, pages 655–664, Washington, DC, USA, 2016. IEEE Computer Society.
- [DKK+17] Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Ankur Moitra, and Alistair Stewart. Being robust (in high dimensions) can be practical. In Proceedings of the 34th International Conference on Machine Learning, ICML ’17, pages 999–1008. JMLR, Inc., 2017.
- [DKK+18] Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Ankur Moitra, and Alistair Stewart. Robustly learning a Gaussian: Getting optimal error, efficiently. In Proceedings of the 29th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’18, Philadelphia, PA, USA, 2018. SIAM.
- [DKK+19a] Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Ankur Moitra, and Alistair Stewart. Robust estimators in high-dimensions without the computational intractability. SIAM Journal on Computing, 48(2):742–864, 2019.
- [DKK+19b] Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Jacob Steinhardt, and Alistair Stewart. Sever: A robust meta-algorithm for stochastic optimization. In Proceedings of the 36th International Conference on Machine Learning, ICML ’19, pages 1596–1606. JMLR, Inc., 2019.
- [DKK+19c] Ilias Diakonikolas, Daniel Kane, Sushrut Karmalkar, Eric Price, and Alistair Stewart. Outlier-robust high-dimensional sparse estimation via iterative filtering. In Advances in Neural Information Processing Systems 32, NeurIPS ’19, pages 10688–10699. Curran Associates, Inc., 2019.
- [DKS19] Ilias Diakonikolas, Weihao Kong, and Alistair Stewart. Efficient algorithms and lower bounds for robust linear regression. In Proceedings of the 30th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’19, pages 2745–2754, Philadelphia, PA, USA, 2019. SIAM.
- [HL18] Samuel B. Hopkins and Jerry Li. Mixture models, robustness, and sum of squares proofs. In Proceedings of the 50th Annual ACM Symposium on the Theory of Computing, STOC ’18, pages 1021–1034, New York, NY, USA, 2018. ACM.
- [HR09] Peter J. Huber and Elvezio M. Ronchetti. Robust Statistics. Wiley, 2009.
- [HRRS11] Frank R. Hampel, Elvezio M. Ronchetti, Peter J. Rousseeuw, and Werner A. Stahel. Robust Statistics: The Approach Based on Influence Functions. Wiley, 2011.
- [Hub64] Peter J. Huber. Robust estimation of a location parameter. The Annals of Mathematical Statistics, 35(1):73–101, 1964.
- [JLST21] Arun Jambulapati, Jerry Li, Tselil Schramm, and Kevin Tian. Robust regression revisited: Acceleration and improved estimation rates. Advances in Neural Information Processing Systems, 34, 2021.
- [JO20a] Ayush Jain and Alon Orlitsky. A general method for robust learning from batches. Advances in Neural Information Processing Systems, 33, 2020.
- [JO20b] Ayush Jain and Alon Orlitsky. Optimal robust learning of discrete distributions from batches. In Proceedings of the 37th International Conference on Machine Learning, ICML ’20, pages 4651–4660. JMLR, Inc., 2020.
- [JO21] Ayush Jain and Alon Orlitsky. Robust density estimation from batches: The best things in life are (nearly) free. In International Conference on Machine Learning, pages 4698–4708. PMLR, 2021.
- [KKM18] Adam Klivans, Pravesh K. Kothari, and Raghu Meka. Efficient algorithms for outlier-robust regression. In Proceedings of the 31st Annual Conference on Learning Theory, COLT ’18, pages 1420–1430, 2018.
- [KSS18] Pravesh Kothari, Jacob Steinhardt, and David Steurer. Robust moment estimation and improved clustering via sum of squares. In Proceedings of the 50th Annual ACM Symposium on the Theory of Computing, STOC ’18, pages 1035–1046, New York, NY, USA, 2018. ACM.
- [LRV16] Kevin A. Lai, Anup B. Rao, and Santosh Vempala. Agnostic estimation of mean and covariance. In Proceedings of the 57th Annual IEEE Symposium on Foundations of Computer Science, FOCS ’16, pages 665–674, Washington, DC, USA, 2016. IEEE Computer Society.
- [LSLC20] Liu Liu, Yanyao Shen, Tianyang Li, and Constantine Caramanis. High dimensional robust sparse regression. In Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, AISTATS ’20, pages 411–421. JMLR, Inc., 2020.
- [PJL21] Ankit Pensia, Varun Jog, and Po-Ling Loh. Robust regression with covariate filtering: Heavy tails and adversarial contamination, 2021.
- [PSBR20] Adarsh Prasad, Arun Sai Suggala, Sivaraman Balakrishnan, and Pradeep Ravikumar. Robust estimation via robust gradient estimation. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 82(3):601–627, 2020.
- [QV18] Mingda Qiao and Gregory Valiant. Learning discrete distributions from untrusted batches. In Proceedings of the 9th Conference on Innovations in Theoretical Computer Science, ITCS ’18, pages 47:1–47:20, Dagstuhl, Germany, 2018. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
- [SCV18] Jacob Steinhardt, Moses Charikar, and Gregory Valiant. Resilience: A criterion for learning in the presence of arbitrary outliers. In Proceedings of the 9th Conference on Innovations in Theoretical Computer Science, ITCS ’18, pages 45:1–45:21, Dagstuhl, Germany, 2018. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
- [Tuk60] John W. Tukey. A survey of sampling from contaminated distributions. Contributions to Probability and Statistics: Essays in Honor of Harold Hotelling, pages 448–485, 1960.
- [ZJS19] Banghua Zhu, Jiantao Jiao, and Jacob Steinhardt. Generalized resilience and robust statistics. arXiv preprint arXiv:1909.08755, 2019.