An Elementary Predictor Obtaining $2\sqrt{T}+1$ Distance to Calibration
Abstract
Błasiok et al. (2023) proposed distance to calibration as a natural measure of calibration error that, unlike expected calibration error (ECE), is continuous. Recently, Qiao and Zheng (2024) (COLT 2024) gave a non-constructive argument establishing the existence of a randomized online predictor that can obtain $O(\sqrt{T})$ distance to calibration in expectation in the adversarial setting, which is known to be impossible for ECE. They leave as an open problem finding an explicit, efficient, deterministic algorithm. We resolve this problem and give an extremely simple, efficient, deterministic algorithm that obtains distance to calibration error at most $2\sqrt{T}+1$.
1 Introduction
Probabilistic predictions of binary outcomes are said to be calibrated if, informally, they are unbiased conditional on their own predictions. For predictors that are not perfectly calibrated, there are a variety of ways to measure calibration error. Perhaps the most popular measure is Expected Calibration Error (ECE), which measures the average bias of the predictions, weighted by the frequency of the predictions. ECE has a number of difficulties as a measure of calibration, not least of which is that it is discontinuous in the predictions. Motivated by this, Błasiok et al. (2023) propose a different measure: distance to calibration, which measures how far a predictor is in $\ell_1$ distance from the nearest perfectly calibrated predictor. In the online adversarial setting, it has been known since Foster and Vohra (1998) how to make predictions with ECE growing at a rate of $O(T^{2/3})$. Qiao and Valiant (2021) show that obtaining rates of $O(\sqrt{T})$ for ECE is impossible. Recently, in a COLT 2024 paper, Qiao and Zheng (2024) showed that it was possible to make sequential predictions against an adversary guaranteeing expected distance to calibration growing at a rate of $O(\sqrt{T})$. Their algorithm is the solution to a minimax problem of size doubly exponential in $T$. They leave as an open problem finding an explicit, efficient, deterministic algorithm for this problem. In this paper we resolve this problem, by giving an extremely simple such algorithm with an elementary analysis.
2 Setting
We study a sequential binary prediction setting: at every round $t \in [T]$, a forecaster makes a prediction $p_t \in [0,1]$, after which an adversary reveals an outcome $y_t \in \{0,1\}$. Given a sequence of predictions and outcomes $(p_1, y_1), \ldots, (p_T, y_T)$, we measure expected calibration error (ECE) as follows:
$$\mathrm{ECE}(p_{1:T}, y_{1:T}) = \sum_{p \in \{p_1, \ldots, p_T\}} \left| \sum_{t : p_t = p} (y_t - p) \right|,$$
where the outer sum ranges over the distinct values taken by the predictions.
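As a concrete reference point, the following short Python sketch (our illustration, not part of the original paper) computes the ECE of a finite transcript directly from this definition:

```python
from collections import defaultdict

def ece(predictions, outcomes):
    """ECE of a transcript: for each distinct predicted value p, accumulate
    the signed bias sum over rounds with p_t = p, then sum absolute values."""
    bias = defaultdict(float)
    for p, y in zip(predictions, outcomes):
        bias[p] += y - p
    return sum(abs(b) for b in bias.values())

# Example: predict 0.3 twice (outcomes 0, 1) and 0.8 twice (outcomes 1, 1):
# bias(0.3) = 0.4 and bias(0.8) = 0.4, so ECE = 0.8.
assert abs(ece([0.3, 0.3, 0.8, 0.8], [0, 1, 1, 1]) - 0.8) < 1e-9
```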
Following Qiao and Zheng (2024), we define distance to calibration to be the minimum $\ell_1$ distance between the sequence of predictions produced by a forecaster and any perfectly calibrated sequence of predictions:
$$\mathrm{CalDist}(p_{1:T}, y_{1:T}) = \min_{q_{1:T} \in \mathcal{C}(y_{1:T})} \sum_{t=1}^{T} |p_t - q_t|,$$
where $\mathcal{C}(y_{1:T})$ is the set of prediction sequences that are perfectly calibrated against outcomes $y_{1:T}$, i.e., sequences $q_{1:T}$ with $\mathrm{ECE}(q_{1:T}, y_{1:T}) = 0$. First we observe that distance to calibration is upper bounded by ECE.
Lemma 1 (Qiao and Zheng (2024)).
Fix a sequence of predictions $p_{1:T}$ and outcomes $y_{1:T}$. Then, $\mathrm{CalDist}(p_{1:T}, y_{1:T}) \le \mathrm{ECE}(p_{1:T}, y_{1:T})$.
Proof.
For any prediction $p$, define
$$\bar{y}(p) = \frac{\sum_{t : p_t = p} y_t}{|\{t : p_t = p\}|}$$
to be the average outcome conditioned on the prediction $p$. Consider the sequence $q_{1:T}$ where $q_t = \bar{y}(p_t)$. Observe that $q_{1:T}$ is perfectly calibrated. Thus, we have that
$$\mathrm{CalDist}(p_{1:T}, y_{1:T}) \le \sum_{t=1}^{T} |p_t - q_t| = \sum_{p} |\{t : p_t = p\}| \cdot |p - \bar{y}(p)| = \sum_{p} \left| \sum_{t : p_t = p} (y_t - p) \right| = \mathrm{ECE}(p_{1:T}, y_{1:T}).$$
∎
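To make the construction in this proof concrete, the following Python sketch (our illustration, reusing the `ece` helper above) builds the perfectly calibrated sequence $q_t = \bar{y}(p_t)$ and checks that its $\ell_1$ distance from the original predictions equals the ECE:

```python
from collections import defaultdict

def calibrated_rounding(predictions, outcomes):
    """Map each prediction p_t to ybar(p_t), the average outcome over rounds
    in which p_t was predicted; the result is perfectly calibrated."""
    totals, counts = defaultdict(float), defaultdict(int)
    for p, y in zip(predictions, outcomes):
        totals[p] += y
        counts[p] += 1
    return [totals[p] / counts[p] for p in predictions]

preds, ys = [0.3, 0.3, 0.8, 0.8], [0, 1, 1, 1]
q = calibrated_rounding(preds, ys)  # [0.5, 0.5, 1.0, 1.0]
l1 = sum(abs(p - qt) for p, qt in zip(preds, q))
assert abs(l1 - ece(preds, ys)) < 1e-9  # the distance achieved equals the ECE
```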
The upper bound is not tight, however. The best known sequential prediction algorithm obtains ECE bounded by $O(T^{2/3})$ (Foster and Vohra, 1998), and it is known that no algorithm can guarantee ECE of $O(\sqrt{T})$: every algorithm must suffer ECE $\Omega(T^{0.528})$ in the worst case (Qiao and Valiant, 2021; Dagan et al., 2024). Qiao and Zheng (2024) give an algorithm that is the solution to a game of size doubly exponential in $T$ that obtains expected distance to calibration $O(\sqrt{T})$. Here we give an elementary analysis of a simple, efficient, deterministic algorithm (Algorithm 1) that obtains distance to calibration $2\sqrt{T}+1$.
Theorem 1.
Algorithm 1 (Almost-One-Step-Ahead) guarantees that against any sequence of outcomes $y_{1:T}$, $\mathrm{CalDist}(p_{1:T}, y_{1:T}) \le 2\sqrt{T} + 1$.
3 Analysis of Algorithm 1
Before describing the algorithm, we introduce some notation. We will make predictions that belong to a grid. Let $P_m = \{0, \frac{1}{m}, \frac{2}{m}, \ldots, 1\}$ denote a discretization of the prediction space with discretization parameter $\frac{1}{m}$, and let $n = |P_m| = m + 1$. For a sequence of predictions $p_{1:t}$ and outcomes $y_{1:t}$, we define the bias conditional on a prediction $p \in P_m$ as:
$$b_t(p) = \sum_{s \le t : p_s = p} (y_s - p).$$
To understand our algorithm, it will be helpful to first state and analyze a hypothetical “lookahead” algorithm that we call “One-Step-Ahead”, which is closely related to the algorithm and analysis given by Gupta and Ramdas (2022) in a different model. One-Step-Ahead produces predictions $q_{1:T}$ as follows. At round $t$, before observing $y_t$, the algorithm fixes two adjacent grid predictions $q_t^0, q_t^1 \in P_m$ with $q_t^1 = q_t^0 + \frac{1}{m}$, satisfying $b_{t-1}(q_t^0) \ge 0$ and $b_{t-1}(q_t^1) \le 0$. Such a pair is guaranteed to exist, because by construction, it must be that for any history, $b_{t-1}(0) \ge 0$ and $b_{t-1}(1) \le 0$. Note that a well-known randomized algorithm obtaining diminishing ECE (and smooth calibration error) uses the same observation to carefully randomize between two such adjacent predictions (Foster, 1999; Foster and Hart, 2018). Upon observing the outcome $y_t$, the algorithm outputs prediction $q_t = q_t^{y_t}$. Naturally, we cannot implement this algorithm, as it chooses its prediction only after observing the outcome, but our analysis will rely on a key property this algorithm maintains: namely, that it always produces a sequence of predictions with ECE upper bounded by $n$, the number of elements in the discretized prediction space.
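The following Python sketch renders One-Step-Ahead under these definitions (our illustration; recall that it is not implementable online, since it peeks at $y_t$ before committing to $q_t$):

```python
def one_step_ahead(outcomes, m):
    """Lookahead forecaster on the grid {0, 1/m, ..., 1}: at each round, find
    adjacent grid points whose conditional biases straddle zero, then, after
    seeing y_t, predict the endpoint whose bias the update pushes toward zero."""
    grid = [i / m for i in range(m + 1)]
    bias = {p: 0.0 for p in grid}
    qs = []
    for y in outcomes:
        # The smallest j >= 1 with bias <= 0 always exists, since b(1) <= 0 for
        # any history; by minimality (and since b(0) >= 0 for any history), the
        # left endpoint grid[j - 1] has bias >= 0.
        j = min(k for k in range(1, m + 1) if bias[grid[k]] <= 0)
        q = grid[j] if y == 1 else grid[j - 1]
        bias[q] += y - q  # the update and the prior bias never share a sign
        qs.append(q)
    return qs
```

Each conditional bias stays in $[-1, 1]$, which is exactly the invariant established in the proof of Theorem 2 below.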
Theorem 2.
For any sequence of outcomes $y_{1:T}$, One-Step-Ahead achieves $\mathrm{ECE}(q_{1:T}, y_{1:T}) \le n$.
Proof.
We will show that for any $p \in P_m$, we have $|b_T(p)| \le 1$, after which the bound on ECE will follow: $\mathrm{ECE}(q_{1:T}, y_{1:T}) = \sum_{p \in P_m} |b_T(p)| \le |P_m| = n$. We proceed via an inductive argument. Fix a prediction $p \in P_m$. At the first round $t$ in which $p$ is output by the algorithm, we have that $|b_t(p)| = |y_t - p| \le 1$. Now suppose after round $t - 1$, we satisfy $|b_{t-1}(p)| \le 1$. If $p$ is the prediction made at round $t$, it must be that either: $y_t = 0$ and $b_{t-1}(p) \ge 0$; or $y_t = 1$ and $b_{t-1}(p) \le 0$. Thus, since $b_{t-1}(p)$ and $(y_t - p)$ either take value 0 or differ in sign, we can conclude that
$$|b_t(p)| = |b_{t-1}(p) + (y_t - p)| \le \max\{|b_{t-1}(p)|, |y_t - p|\} \le 1,$$
which proves the theorem. ∎
Algorithm 1 (Almost-One-Step-Ahead) maintains the same state as One-Step-Ahead (which it can compute at the end of round $t$, after observing the outcome $y_t$). In particular, it does not keep track of the bias of its own predictions, but rather keeps track of the bias of the predictions that One-Step-Ahead would have made. Thus it can determine the pair $(q_t^0, q_t^1)$ that One-Step-Ahead would commit to at round $t$. It cannot make the same prediction as One-Step-Ahead (as it must fix its prediction before the label is observed), so instead it deterministically predicts $p_t = q_t^0$ (or $p_t = q_t^1$; the choice can be arbitrary and does not affect the analysis). Since we have that $q_t^1 - q_t^0 = \frac{1}{m}$, it must be that for whichever choice $q_t \in \{q_t^0, q_t^1\}$ One-Step-Ahead would have made, we have $|p_t - q_t| \le \frac{1}{m}$. In other words, although Almost-One-Step-Ahead does not make the same predictions as One-Step-Ahead, it makes predictions that are within $\ell_1$ distance $\frac{T}{m}$ after $T$ rounds. The analysis then follows by the ECE bound of One-Step-Ahead, the triangle inequality, and choosing $m \approx \sqrt{T}$.
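A minimal Python sketch of Algorithm 1 as described above (our rendering; the adversary is modeled as a callback that may react to the committed prediction):

```python
def almost_one_step_ahead(T, m, get_outcome):
    """Almost-One-Step-Ahead: maintain the conditional biases of the simulated
    One-Step-Ahead sequence q_{1:T}, commit to the left endpoint of its
    straddling pair before the outcome arrives, then update the simulation
    once y_t is revealed. Returns the real predictions p_{1:T}, the simulated
    predictions q_{1:T}, and the observed outcomes y_{1:T}."""
    grid = [i / m for i in range(m + 1)]
    bias = {p: 0.0 for p in grid}  # biases of the simulated One-Step-Ahead
    ps, qs, ys = [], [], []
    for _ in range(T):
        j = min(k for k in range(1, m + 1) if bias[grid[k]] <= 0)
        p_t = grid[j - 1]                         # committed before seeing y_t
        y = get_outcome(p_t)                      # adversary may react to p_t
        q_t = grid[j] if y == 1 else grid[j - 1]  # One-Step-Ahead's choice
        bias[q_t] += y - q_t
        ps.append(p_t)
        qs.append(q_t)
        ys.append(y)
    return ps, qs, ys
```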
Proof of Theorem 1. Observe that internally, Algorithm 1 maintains the sequence $q_{1:T}$, which corresponds exactly to the predictions made by One-Step-Ahead. Thus, by Lemma 1 and Theorem 2, we have that $\mathrm{CalDist}(q_{1:T}, y_{1:T}) \le \mathrm{ECE}(q_{1:T}, y_{1:T}) \le n$. Then, we can compute the distance to calibration of the sequence $p_{1:T}$:
$$\mathrm{CalDist}(p_{1:T}, y_{1:T}) = \min_{c_{1:T} \in \mathcal{C}(y_{1:T})} \sum_{t=1}^{T} |p_t - c_t| \le \sum_{t=1}^{T} |p_t - q_t| + \min_{c_{1:T} \in \mathcal{C}(y_{1:T})} \sum_{t=1}^{T} |q_t - c_t| \le \frac{T}{m} + n = \frac{T}{m} + m + 1,$$
where in the last inequality we use the fact that $|p_t - q_t| \le \frac{1}{m}$ for all $t$ and thus $\sum_{t=1}^{T} |p_t - q_t| \le \frac{T}{m}$. The result then follows by setting $m = \sqrt{T}$ (rounding up to $m = \lceil \sqrt{T} \rceil$ when $T$ is not a perfect square). ∎
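As a sanity check (again our own illustration, reusing the sketches above), one can run Almost-One-Step-Ahead against an adaptive adversary and verify the certified bound $\sum_t |p_t - q_t| + \mathrm{ECE}(q_{1:T}, y_{1:T}) \le \frac{T}{m} + m + 1$:

```python
import math

T = 400
m = math.isqrt(T)  # m = sqrt(T) = 20 here, matching the choice in the analysis
# A simple adaptive adversary: always report the outcome far from the prediction.
ps, qs, ys = almost_one_step_ahead(T, m, lambda p: 1 if p < 0.5 else 0)

drift = sum(abs(p - q) for p, q in zip(ps, qs))  # at most T/m = 20
certified = drift + ece(qs, ys)                  # upper-bounds CalDist(p_{1:T})
assert certified <= T / m + m + 1 + 1e-9         # = 2*sqrt(T) + 1 = 41
print(f"certified distance to calibration: {certified:.2f} <= {2 * m + 1}")
```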
Acknowledgements
This work was supported in part by the Simons Collaboration on the Theory of Algorithmic Fairness, NSF grants FAI-2147212 and CCF-2217062, an AWS AI Gift for Research on Trustworthy AI, and the Hans Sigrist Prize.
References
- Błasiok et al. (2023) Jarosław Błasiok, Parikshit Gopalan, Lunjia Hu, and Preetum Nakkiran. A unifying theory of distance from calibration. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing, pages 1727–1740, 2023.
- Dagan et al. (2024) Yuval Dagan, Constantinos Daskalakis, Maxwell Fishelson, Noah Golowich, Robert Kleinberg, and Princewill Okoroafor. Improved bounds for calibration via stronger sign preservation games, 2024. URL https://arxiv.org/abs/2406.13668.
- Foster (1999) Dean P. Foster. A proof of calibration via Blackwell's approachability theorem. Games and Economic Behavior, 29(1-2):73–78, 1999.
- Foster and Hart (2018) Dean P. Foster and Sergiu Hart. Smooth calibration, leaky forecasts, finite recall, and Nash dynamics. Games and Economic Behavior, 109:271–293, 2018.
- Foster and Vohra (1998) Dean P. Foster and Rakesh V. Vohra. Asymptotic calibration. Biometrika, 85(2):379–390, 1998. ISSN 00063444. URL http://www.jstor.org/stable/2337364.
- Gupta and Ramdas (2022) Chirag Gupta and Aaditya Ramdas. Faster online calibration without randomization: interval forecasts and the power of two choices. In Po-Ling Loh and Maxim Raginsky, editors, Proceedings of Thirty Fifth Conference on Learning Theory, volume 178 of Proceedings of Machine Learning Research, pages 4283–4309. PMLR, 02–05 Jul 2022. URL https://proceedings.mlr.press/v178/gupta22b.html.
- Qiao and Valiant (2021) Mingda Qiao and Gregory Valiant. Stronger calibration lower bounds via sidestepping. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2021, pages 456–466, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450380539. doi: 10.1145/3406325.3451050. URL https://doi.org/10.1145/3406325.3451050.
- Qiao and Zheng (2024) Mingda Qiao and Letian Zheng. On the distance from calibration in sequential prediction. In Shipra Agrawal and Aaron Roth, editors, Proceedings of Thirty Seventh Conference on Learning Theory, volume 247 of Proceedings of Machine Learning Research, pages 4307–4357. PMLR, 30 Jun–03 Jul 2024. URL https://proceedings.mlr.press/v247/qiao24a.html.