General bounds on the quality of Bayesian coresets
Abstract
Bayesian coresets speed up posterior inference in the large-scale data regime by approximating the full-data log-likelihood function with a surrogate log-likelihood based on a small, weighted subset of the data. But while Bayesian coresets and methods for construction are applicable in a wide range of models, existing theoretical analyses of the posterior inferential error incurred by coreset approximations apply only in restrictive settings—i.e., exponential family models, or models with strong log-concavity and smoothness assumptions. This work presents general upper and lower bounds on the Kullback-Leibler (KL) divergence of coreset approximations that reflect the full range of applicability of Bayesian coresets. The lower bounds require only mild model assumptions typical of Bayesian asymptotic analyses, while the upper bounds require the log-likelihood functions to satisfy a generalized subexponentiality criterion that is weaker than conditions used in earlier work. The lower bounds are applied to obtain fundamental limitations on the quality of coreset approximations, and to provide a theoretical explanation for the previously observed poor empirical performance of importance sampling-based construction methods. The upper bounds are used to analyze the performance of recent subsample-optimize methods. The flexibility of the theory is demonstrated in validation experiments involving multimodal, unidentifiable, heavy-tailed Bayesian posterior distributions.
1 Introduction
Large-scale data is now commonplace in scientific and commercial applications of Bayesian statistics. But despite its prevalence, and the corresponding wealth of research dedicated to scalable Bayesian inference, there are still surprisingly few general methods that provably provide inferential results, within some reasonable tolerated error, at a significant savings in computational cost. Exact Markov chain Monte Carlo (MCMC) methods require many full passes over the data [1, Ch. 6–12, 2, Ch. 11–12], limiting their utility when even a single pass is expensive. A wide range of MCMC methods that access only a subset of data per iteration, e.g., via delayed acceptance [3, 4, 5, 6], pseudomarginal or auxiliary variable methods [7, 8, 9], and basic subsampling [10, 11, 12, 13], provide at most a minor improvement over full-data MCMC [14, 15, 16]. On the other hand, methods that incorporate carefully constructed control variates for the log-likelihood function can provide substantial gains [17, 18, 19]. However, black-box control variate constructions for large-scale data often rely on assumptions such as posterior density differentiability and unimodality that do not hold in many popular models, e.g., those with discrete variables or multimodality. See [15, 20] for a survey of scalable MCMC methods. Parametric approximations via variational inference [21] or the Laplace approximation [22, 23] can be obtained scalably using stochastic optimization methods, but existing general theoretical guarantees for these methods again typically rely on posterior normality assumptions [24, pp. 141–144, 25, 26, 27, 28, 29, 30] (see [21, 31] for a review).
Although many existing methods rely on asymptotic normality or unimodality in the large-scale data regime, the problem of handling large-scale data in Bayesian inference does not fundamentally require this structure. Instead, one can more generally exploit redundancy in the data (i.e., the existence of good approximate sufficient statistics), which can be used to draw principled conclusions about a large data set based only on a small fraction of examples. Indeed, while approximate posterior normality often does not hold in models with latent discrete or combinatorial objects, weakly identifiable or unidentifiable parameters, persisting heavy tails, multimodality, etc., such models can and regularly do exhibit significant redundancy in the data that can be exploited for faster large-scale inference. Bayesian coresets [32]—which involve replacing the full dataset during inference with a sparse weighted subset—are based on this notion of exploiting data redundancy. Empirical studies have shown the existence of high-quality coreset posterior approximations constructed from a small fraction of the data, even in models that violate posterior normality assumptions and for which standard control variate techniques work poorly [33, 34, 35, 36, 37]. However, existing theoretical support for Bayesian coresets in the literature is limited. There exist no lower bounds on Bayesian coreset approximation error, and while upper bounds do exist, they currently impose restrictive assumptions. In particular, the best available theoretical upper bounds to date apply to exponential family models [36, 38] and models with strongly log-concave and locally smooth log-densities [37].
This article presents new theoretical techniques and results regarding the quality of Bayesian coreset approximations. The main results are two general large-data asymptotic lower bounds on the KL divergence (Theorems 3.3 and 3.5), as well as a general upper bound on the KL divergence (Theorem 5.3) under the assumption that the log-likelihoods satisfy a multivariate generalization of subexponentiality (Definition 5.2). The main general results in this paper lead to various novel insights about specific Bayesian coreset construction methods. Under mild assumptions,
- common importance-weighted coreset constructions (e.g., [32]) require a coreset size proportional to the dataset size (Corollary 4.1), even with post-hoc optimal weight scaling (Corollary 4.2), and thus yield a negligible improvement over full-data inference;
- any construction algorithm requires a coreset size that grows with the dataset when the log-likelihood function is determined by the parameters locally around a point of concentration (Corollary 4.3);
- subsample-optimize coreset construction algorithms (e.g., [38, 36, 37, 39]) achieve an asymptotically bounded error with a small coreset size in a wide variety of models (Corollary 6.1).
The paper includes empirical validation of the main theoretical claims on two models that violate common assumptions made in the literature: a multimodal, unidentifiable Cauchy location model with a heavy-tailed prior, and an unidentifiable logistic regression model with a heavy-tailed prior and persisting posterior heavy tails. Experiments were performed on a computer with an Intel Core i7-8700K and 32GB of RAM. Proofs of all theoretical results may be found in Appendix A.
Notation. We use standard asymptotic growth symbols (see, e.g., [40, Sec. 3.3]) and their probabilistic variants (see, e.g., [24, Sec. 2.2]). We use the same symbol to denote a measure and its density with respect to a specified dominating measure. We also regularly suppress integration variables and differential symbols in integrals for notational brevity when these are clear from context. Finally, we use a condensed notation for the pushforward of a measure by a map.
2 Background
Define a target probability distribution $\pi$ on a space $\Theta$ in terms of a sum of potentials $f_n : \Theta \to \mathbb{R}$, $n = 1, \dots, N$, and a base distribution $\pi_0$ on $\Theta$,

$$\pi(\theta) = \frac{1}{Z} \exp\left( \sum_{n=1}^{N} f_n(\theta) \right) \pi_0(\theta), \qquad (2)$$

where the normalization constant $Z$ is not known. In the Bayesian context, this distribution corresponds to a Bayesian posterior distribution for a statistical model with prior $\pi_0$ and conditionally i.i.d. data $x_1, \dots, x_N$, where $f_n(\theta) = \log p(x_n \mid \theta)$. The goal is to compute or approximate expectations under $\pi$; but the likelihood (and its gradient) becomes expensive to evaluate when $N$ is large. To avoid this cost, Bayesian coresets [32, 33, 34, 35, 36, 37] involve replacing the target with a surrogate density

$$\pi_w(\theta) = \frac{1}{Z(w)} \exp\left( \sum_{n=1}^{N} w_n f_n(\theta) \right) \pi_0(\theta), \qquad (3)$$

where $w \in \mathbb{R}^N$, $w \geq 0$, are a set of weights and $Z(w)$ is the new normalizing constant. If $w$ has at most $M \ll N$ nonzeros, the $O(M)$ cost of evaluating the weighted potential sum (and its gradient) is a significant improvement upon the original $O(N)$ cost. In this work, the problem of coreset construction is formulated in the data-asymptotic limit $N \to \infty$; a coreset construction method should
- run in time and memory (or at most with a small leading constant),
- produce a small coreset of size $M \ll N$,
- produce a coreset with posterior forward/reverse KL divergence that remains bounded as $N \to \infty$.
These three desiderata ensure that the effort spent constructing and sampling from the coreset posterior is worthwhile: the coreset provides a meaningful reduction in computational cost compared with standard Markov chain Monte Carlo algorithms, and has a bounded approximation error.
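To make the cost comparison concrete, the following is a minimal sketch (in Python) of evaluating the full-data unnormalized log-density of Eq. 2 versus the coreset surrogate of Eq. 3; the `log_prior` and `log_lik` callables are hypothetical placeholders supplied by the user, not functions defined in this paper.

```python
import numpy as np

def full_log_joint(theta, data, log_prior, log_lik):
    # O(N): log pi_0(theta) + sum_n f_n(theta), up to the unknown constant -log Z
    return log_prior(theta) + sum(log_lik(x, theta) for x in data)

def coreset_log_joint(theta, data, w, log_prior, log_lik):
    # O(M): only the M datapoints with nonzero weight contribute
    nz = np.flatnonzero(w)
    return log_prior(theta) + sum(w[i] * log_lik(data[i], theta) for i in nz)
```

In practice the data would be stored as an array and the log-likelihood vectorized, but the point is the same: the surrogate touches only the nonzero-weight subset.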
3 Lower bounds on approximation error
This section presents lower bounds on the KL divergence of coreset approximations for general models and data generating processes. The first key steps in the analysis are to write all expectations in terms of distributions that do not depend on the coreset weights, and to remove the difficult-to-control influence of the tails of the distributions involved by restricting certain integrals to some small subset of the parameter space. Lemma 3.1, the key theoretical tool used in this section, achieves both of these goals; note that the result has no major assumptions and applies generally in any setting in which a Bayesian coreset can be used. For convenience, define
(4) |
and the decreasing, nonnegative function ,
(7) |
Lemma 3.1 (Basic KL Lower Bound).
For all measurable and coreset weights ,
(8) |
where
(9) |
Note that while the integrals in the fraction denominator in range over the whole space, a further lower bound on can be obtained by restricting their domains arbitrarily. Also, crucially, the bound in Lemma 3.1 does not depend on , which would be difficult to analyze without detailed knowledge of the tail behaviour of as a function of the coreset weights . Although the bound in Lemma 3.1 applies generally, it is most useful when is small (so that simple local approximations of and can be used), concentrates on (so that ), and and are very different when restricted to ; see the proof in Appendix A for the rough behaviour of the bound in this case. Finally, note that Lemma 3.1 remains valid if one replaces with and with for any constants that do not depend on but may depend on the data and coreset weights .
For the remainder of this section, consider the setting where is a measurable subset of for some , fix some , and assume each is differentiable in a neighbourhood of . Let
Theorems 3.3 and 3.5 characterize KL divergence lower bounds in terms of the sum of the coreset weights and the log-likelihood gradients . Intuitively for the full data set where all and , and an i.i.d. data generating process from the likelihood with parameter , the central limit theorem asserts under mild conditions that at a rate of . Theorems 3.3 and 3.5 below provide KL lower bounds when the coreset construction algorithm does not match this behavior. In particular, Theorem 3.3 provides results that are useful when occurs reasonably quickly but slower than , while Theorem 3.5 strengthens the conclusion when very slowly or not at all. The major benefit of Theorems 3.3 and 3.5 for analyzing coreset construction methods is that they reduce the problem of analyzing posterior KL divergence to the much easier problem of analyzing the 2-norm of a weighted sum of random vectors in .
Consider a sequence as , and for a fixed matrix let
(10) |
be a sequence of neighbourhoods around ; these will appear in Assumptions 3.2 and 3.4 and Theorems 3.3 and 3.5 below. Note that throughout, all asymptotics will be taken as , and various sequences (e.g., and ) are implicitly indexed by . To simplify notation, this dependence is suppressed. Assumption 3.2 makes some weak assumptions about the model and data generating process: it intuitively asserts that the potential functions are sufficiently smooth around , that slowly, and that concentrates at at a usual rate. Note that Assumption 3.2 does not assume data are generated i.i.d. and places no conditions on the coreset construction algorithm.
Assumption 3.2.
has a density with respect to the Lebesgue measure, , each and are twice differentiable in for sufficiently large , and
(11) |
Two additional assumptions related to the coreset construction algorithm—namely, that it works well enough that and at a rate faster than —lead to asymptotic lower bounds on the best possible quality of coresets produced by the algorithm, as well as lower bounds even after optimal post-hoc scaling of the weights.
Theorem 3.3.
Theorem 3.3 is restricted to the case where the coreset algorithm is performing reasonably well. Theorem 3.5 extends the bounds to the case where the algorithm is performing poorly, in the sense that it is unable to make at a rate faster than (or perhaps does not converge to 0 at all). In order to draw conclusions in this setting, we need a weak global assumption on the potential functions. A function is -smooth below at if
(15) |
Note that -smoothness below is weaker than Lipschitz smoothness and does not imply concavity; Eq. 15 restricts the growth of the function only in the negative direction, and only when the expansion is taken at . Assumption 3.4 asserts that the potential functions are smooth below.
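For intuition, one natural way to formalize such a one-sided condition (written here in illustrative notation, since Eq. 15 itself is not reproduced above and its exact form and constants may differ) is a quadratic lower bound on the first-order Taylor expansion at the expansion point:

```latex
% A function h is r-smooth below at \theta_0 if, for all \theta,
h(\theta) \;\ge\; h(\theta_0) + \nabla h(\theta_0)^\top(\theta - \theta_0)
            \;-\; \frac{r}{2}\,\|\theta - \theta_0\|^2 .
```

In contrast to Lipschitz smoothness, which bounds the deviation from the linearization in both directions, this controls only how quickly the function can fall below its linearization at the expansion point.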
Assumption 3.4.
There exist such that is -smooth below at , for each is -smooth below at , and .
Theorem 3.5 uses Assumptions 3.2 and 3.4 and additional assumptions related to the coreset construction algorithm to obtain lower bounds in a setting that relaxes the “performance” conditions in Theorem 3.3: no longer needs to converge to in probability, and can converge to 0 slowly or not at all.
Theorem 3.5.
An important final note in this section is that while Theorems 3.3 and 3.5, as stated, require choosing to be some measurable subset of and that the posterior concentrates around some point of interest , these results can be generalized to a wider class of models and spaces. In particular, Corollary 3.6 demonstrates that if is arbitrary, but the potential functions only depend on through some other function , then the conclusions of Theorems 3.3 and 3.5 still hold.
Corollary 3.6.
Suppose is an arbitrary measurable space, and the potential functions take the form for some measurable function . Then if the assumptions of Theorems 3.3 and 3.5 hold for potentials as functions on and pushforward prior on , the stated lower bounds also hold for .
4 Lower bound applications
In this section, the general theoretical results from Section 3 are applied to specific algorithms, Bayesian models, and data generating processes to explain previously observed empirical behaviour of coreset construction, as well as to place fundamental limits on the necessary size of coresets. Consider a setting where the data arise as an i.i.d. sequence drawn from some probability distribution , for , , and the following technical criteria hold (where denotes expectation under the data generating process):
- (A1) and .
- (A2) for some and .
- (A3) On a neighbourhood of , , .
- (A4) is twice differentiable in a neighbourhood of , and .
- (A5) For all such that , .
These conditions apply to a wide range of models, e.g., an unidentifiable, multimodal location model posterior with heavy tails on , where the Bayesian model is specified by
(18) |
and the data are generated from the likelihood with parameter , and an unidentifiable logistic regression posterior with heavy tails on , where the Bayesian model is specified by
(19) |
the covariates are generated via , and the observations are generated from the likelihood with parameter . See Proposition A.6 in Appendix A for the verification of (A1-5) for these two models. Example posterior log-densities for these models are displayed in Fig. 1.




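As a concrete illustration, the following sketch gives single-observation log-likelihood (potential) functions of the Cauchy-location and logistic-regression type. The exact unidentifiable parameterizations and heavy-tailed priors of Eqs. 18 and 19 are not reproduced above, so these should be read as generic stand-ins rather than as the models used in the experiments.

```python
import numpy as np

def cauchy_potential(x, theta):
    # log-likelihood of one observation under a Cauchy(theta, 1) location model
    return -np.log(np.pi) - np.log1p((x - theta) ** 2)

def logistic_potential(x, y, theta):
    # log-likelihood of one (covariate, label) pair with y in {-1, +1},
    # written with logaddexp for numerical stability
    return -np.logaddexp(0.0, -y * np.dot(x, theta))
```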
4.1 Minimum coreset size for importance-weighted coresets
A popular algorithm for coreset construction that has appeared in a wide variety of domains—e.g., Bayesian inference [32, 33, Section 4.1], frequentist inference (e.g., [41, 42, 43, 44, 45]), and optimization (see [46] for a recent survey)—involves subsampling of the data followed by an importance-weighting correction. The pseudocode is given in Algorithm 1. Note that , and so ; the coreset potential is an unbiased estimate of the exact potential. The advantage of this method is that it is straightforward and computationally efficient. If the sampling probabilities are uniform , then Algorithm 1 constructs a coreset in time and memory. Nonuniform probabilities require time, as they require a pass over all data points to compute each [32, 42] followed by sampling the coreset, e.g., via an alias table [47, 48]. However, empirical results produced by this methodology have generally been underwhelming, even with carefully chosen sampling probabilities; see, e.g., Figure 2 of [32].
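A minimal sketch of the subsample-and-reweight construction described above follows, assuming the standard unbiased variant in which index n is drawn with probability p_n and each draw contributes weight 1/(M p_n), with repeated draws accumulating; the details of Algorithm 1 may differ slightly.

```python
import numpy as np

def importance_weighted_coreset(probs, M, rng=None):
    """Draw M indices i.i.d. with probabilities `probs` and return a weight
    vector w such that sum_n w_n f_n is an unbiased estimate of sum_n f_n."""
    rng = np.random.default_rng() if rng is None else rng
    N = len(probs)
    idx = rng.choice(N, size=M, replace=True, p=probs)
    w = np.zeros(N)
    for i in idx:
        w[i] += 1.0 / (M * probs[i])
    return w
```

With uniform probabilities p_n = 1/N, each selected point simply receives weight N/M.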
Corollary 4.1 explains these poor results: Bayesian coresets constructed via Algorithm 1 must have a size proportional to the dataset size in order to maintain a bounded KL divergence in the data-asymptotic limit. In other words, such coresets do not satisfy the desiderata in Section 2. The only restriction is that there exist constants such that for all , the sampling probabilities satisfy
(20) |
The lower threshold ensures that the variance of the importance-weighted log-likelihood is not too large, while the upper threshold ensures sufficient diversity in the draws from subsampling. The condition in Eq. 20 is not a major restriction, in the sense that performance should deteriorate even further when it does not hold. The sampling probabilities may otherwise depend arbitrarily on the data and model.
Corollary 4.1.
Given (A1-6), , and , coresets produced by Algorithm 1 satisfy
(21) |
The intuition behind Corollary 4.1 is that both the true posterior and the importance-weighted coreset posterior are asymptotically approximately normal with variance as ; however, the coreset posterior mean is roughly away from the posterior mean, because the subsample is of size . The KL divergence between two Gaussians is lower-bounded by the inverse variance times the mean difference squared, yielding as in Eq. 21.
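For reference, the identity behind this intuition is the KL divergence between Gaussians with a shared covariance (the symbols below are generic and not tied to the paper's notation):

```latex
\mathrm{KL}\bigl(\mathcal{N}(\mu_1, \Sigma) \,\big\|\, \mathcal{N}(\mu_2, \Sigma)\bigr)
  \;=\; \tfrac{1}{2}\,(\mu_1 - \mu_2)^\top \Sigma^{-1} (\mu_1 - \mu_2).
```

With a posterior variance that shrinks as the dataset grows and a mean discrepancy that does not shrink commensurately, this quantity grows with the dataset size, which is the mechanism behind the bound in Eq. 21.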
Given the intuition that the coreset posterior mean is far from the posterior mean relative to their variances, it is worth asking whether one can apply a small amount of effort to “correct” the importance-weighted coreset by scaling the weights (and hence the variance) down, as shown in Algorithm 2. Unfortunately, Corollary 4.2 demonstrates that even with optimal scaling, a coreset size proportional to the dataset size is still required in order to maintain a bounded KL divergence as the dataset grows.
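The scaling step can be sketched as a one-dimensional search over a common multiplier applied to all weights; the `divergence_estimate` callback below is hypothetical, since the optimal scaling in Algorithm 2 is defined with respect to the (generally intractable) KL objective.

```python
import numpy as np

def scale_weights(w, divergence_estimate, grid=np.linspace(0.0, 1.0, 101)):
    """Return alpha * w for the alpha on `grid` minimizing an estimate of the
    divergence between the scaled coreset posterior and the true posterior."""
    alpha = min(grid, key=lambda a: divergence_estimate(a * w))
    return alpha * w
```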
Corollary 4.2.
Given (A1-6), , and , coresets produced by Algorithm 1 satisfy
(22) |
The figures in this section provide empirical confirmation of Corollaries 4.1 and 4.2 on the Cauchy location and logistic regression models in Eqs. 18 and 19. In particular, these figures show that the empirical rates of growth of the KL divergence closely match the rates predicted by Corollaries 4.1 and 4.2 for importance-weighted coresets, both with and without post-hoc scaling, for a wide range of coreset sizes. Thus, importance-weighted coreset construction methods do not satisfy the desiderata in Section 2 for a wide range of models, and alternative methods should be considered.
(Figure: empirical KL divergence growth for importance-weighted coresets, without and with post-hoc weight scaling, on the Cauchy location and logistic regression models.)
4.2 Minimum coreset size for any coreset construction
This section extends the minimum coreset size results from importance-weighted schemes to any coreset construction algorithm. In particular, Corollary 4.3 shows that under (A7)—a strengthening of (A3) and Assumption 3.4—and (A8)—which asserts that are linearly independent a.s. and satisfy a technical moment condition—at least coreset points are required to keep the KL divergence bounded as .
- (A7) Assumption 3.4 holds and there exists such that for all sufficiently large , (23)
- (A8) For all coreset sizes , there exists a such that (24)
Corollary 4.3.
For a fixed coreset size , given (A1-5,7,8),
(25) |
5 Upper bounds on approximation error
This section presents upper bounds on the KL divergence of coreset approximations. As in Section 3, the first step is to write all expectations in terms of distributions that do not depend on the coreset weights. Lemma 5.1 does so without imposing any major assumptions; the result again applies generally in any setting in which a Bayesian coreset can be used. For convenience, define
(26) |
Lemma 5.1 (Basic KL Upper Bound).
For all coreset weights ,
(27) |
where for all , , , and .
The upper bound in Lemma 5.1 is nonvacuous (i.e., finite) as long as there exists a such that the Rényi divergence [49, p. 3799] is finite. Note that as in Lemma 3.1, the bound in Lemma 5.1 remains valid if one replaces with and with for any constants that do not depend on but may depend on the coreset weights and data.
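For reference, the Rényi divergence of order $\alpha \in (0,1) \cup (1,\infty)$ between densities (written here with generic densities $p$ and $q$; see [49]) is

```latex
D_\alpha(p \,\|\, q) \;=\; \frac{1}{\alpha - 1}
  \log \int p(\theta)^{\alpha}\, q(\theta)^{1-\alpha}\, \mathrm{d}\theta ,
```

which recovers the KL divergence in the limit $\alpha \to 1$.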
More practical bounds necessitate an assumption about the behaviour of the potentials . Definition 5.2 below asserts that the multivariate moment generating function of is bounded when the vector is close to 0. This definition is a generalization of the usual definition of subexponentiality for the univariate setting (e.g., [50, Sec. 2.7]). Theorem 5.3 subsequently shows that Definition 5.2 is sufficient to obtain simple bounds on .
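For comparison, one standard univariate moment-generating-function characterization (cf. [50, Sec. 2.7]) states that a centered random variable $X$ is subexponential with parameter $K > 0$ if

```latex
\mathbb{E}\bigl[\exp(\lambda X)\bigr] \;\le\; \exp\bigl(K^2 \lambda^2\bigr)
  \quad \text{for all } |\lambda| \le \tfrac{1}{K},
```

i.e., the moment generating function exists and is controlled in a neighbourhood of the origin; Definition 5.2 imposes an analogous multivariate control on the potentials.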
Definition 5.2.
For , , and monotone function such that , the potentials are -subexponential if
(28) |
Theorem 5.3.
If the potentials are -subexponential, then
(29) |
Definition 5.2, the key assumption in Theorem 5.3, is satisfied by a wide range of models when choosing and , as demonstrated by Proposition 5.4. Because this case applies widely, let -subexponential be shorthand for -subexponentiality with .
Proposition 5.4.
If for all in a ball centered at the origin, , then there exists such that the potentials are -subexponential.
In other words, intuitively, if a coreset construction algorithm produces weights such that is small, then is also small. That being said, the generality of Definition 5.2 to allow arbitrary is still helpful in obtaining upper bounds in specific cases; see, e.g., Propositions A.1 and A.2.
6 Upper bound application: subsample-optimize coresets
A strategy to construct Bayesian coresets that has recently emerged in the literature, shown in Algorithm 3, is to first subsample the data to select data points, and subsequently optimize the weights for those selected data points [36, 38, 37]. The subsampling step serves to pick a reasonably flexible basis of log-likelihood functions for coreset approximation, and avoids the slow greedy selection routines from earlier work [33, 35, 34]. The optimization step tunes the weights for the selected basis, avoiding the poor approximations of importance-weighting methods. Indeed, Algorithm 3 creates exact coresets with high probability in Gaussian location models [36, Prop. 3.1] and finite-dimensional exponential family models [37, Thm. 4.1], and near-exact coresets with high probability in strongly log-concave models [37, Thm. 4.2] and Bayesian linear regression [38, Prop. 3].
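A minimal sketch of the subsample-then-optimize pattern follows. The cited methods differ in the optimization objective (e.g., variational KL minimization or quasi-Newton refinement); the least-squares fit over potentials evaluated at a set of reference samples used below is purely illustrative, and all names are hypothetical.

```python
import numpy as np

def subsample_optimize_coreset(F, M, probs, rng=None):
    """F: (N, S) array with F[n, s] = f_n(theta_s) for S reference samples.
    Subsample M datapoints, then fit nonnegative weights so the weighted
    subset sum of potentials approximates the full-data sum at the samples."""
    rng = np.random.default_rng() if rng is None else rng
    N, _ = F.shape
    idx = rng.choice(N, size=M, replace=False, p=probs)
    target = F.sum(axis=0)                      # full-data potential at each sample
    w, *_ = np.linalg.lstsq(F[idx].T, target, rcond=None)
    return idx, np.maximum(w, 0.0)              # crude projection onto w >= 0
```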
Corollary 6.1 generalizes these results substantially, and demonstrates that coresets of size produced by the subsample-optimize method in Algorithm 3 maintain a bounded KL divergence as . Two key assumptions are subexponentiality of the potentials and a polynomial (in ) growth of ; these conditions are not stringent and should hold for a wide range of Bayesian models and i.i.d. data generating processes. The last key assumption in Eq. 30 is that a randomly-chosen potential function , (with probabilities as in Algorithm 3) is well-aligned with the residual coreset error function. Similar alignment conditions have appeared in past results for more restrictive settings (see, e.g., in [37, Thm. 4.1]).
Corollary 6.1.
Suppose there exist and such that the potential functions are -subexponential with probability increasing to 1 as , , , and
(30) | |||
(31) |
Then Algorithm 3 produces a coreset with as .


Fig. 3 confirms that subsample-optimize coreset construction methods applied to the logistic regression and Cauchy location models in Eqs. 18 and 19 (which both violate the conditions of past upper bounds in the literature) are able to provide high-quality posterior approximations with very small coresets.
7 Conclusions
This article presented new general lower and upper bounds on the quality of Bayesian coreset approximations, as measured by the KL divergence. These results were used to draw novel conclusions regarding importance-weighted and subsample-optimize coreset methods, which align with simulation experiments on two synthetic models that violate the assumptions of past theoretical results. Avenues for future work include general bounds on the subexponentiality constant in Proposition 5.4, as well as the alignment probability in Eq. 30, in the setting of Bayesian models with i.i.d. data generating processes. A limitation of this work is that both quantities currently require case-by-case analysis.
Acknowledgments and Disclosure of Funding
The author gratefully acknowledges the support of an NSERC Discovery Grant (RGPIN-2019-03962).
References
- [1] Christian Robert and George Casella. Monte Carlo Statistical Methods. Springer, 2nd edition, 2004.
- [2] Andrew Gelman, John Carlin, Hal Stern, David Dunson, Aki Vehtari, and Donald Rubin. Bayesian data analysis. CRC Press, 3rd edition, 2013.
- [3] J. Andrés Christen and Colin Fox. Markov chain Monte Carlo using an approximation. Journal of Computational and Graphical Statistics, 14(4):795–810, 2005.
- [4] Marco Banterle, Clara Grazian, Anthony Lee, and Christian P. Robert. Accelerating Metropolis-Hastings algorithms by delayed acceptance. Foundations of Data Science, 1(2):103–128, 2019.
- [5] Richard Payne and Bani Mallick. Bayesian big data classification: a review with complements. arXiv:1411.5653, 2014.
- [6] Chris Sherlock, Andrew Golightly, and Daniel Henderson. Adaptive, delayed-acceptance MCMC for targets with expensive likelihoods. Journal of Computational and Graphical Statistics, 26(2):434–444, 2017.
- [7] Arnaud Doucet, Michael Pitt, George Deligiannidis, and Robert Kohn. Efficient implementation of Markov chain Monte Carlo when using an unbiased likelihood estimator. Biometrika, 102(2):295–313, 2015.
- [8] Dougal Maclaurin and Ryan Adams. Firefly Monte Carlo: exact MCMC with subsets of data. In Conference on Uncertainty in Artificial Intelligence, 2014.
- [9] Matias Quiroz, Minh-Ngoc Tran, Mattias Villani, Robert Kohn, and Khue-Dung Dang. The block-Poisson estimator for optimally tuned exact subsampling MCMC. Journal of Computational and Graphical Statistics, 30(4):877–888, 2021.
- [10] Max Welling and Yee Whye Teh. Bayesian learning via stochastic gradient Langevin dynamics. In International Conference on Machine Learning, 2011.
- [11] Sungjin Ahn, Anoop Korattikara, and Max Welling. Bayesian posterior sampling via stochastic gradient Fisher scoring. In International Conference on Machine Learning, 2012.
- [12] Anoop Korattikara, Yutian Chen, and Max Welling. Austerity in MCMC land: cutting the Metropolis-Hastings budget. In International Conference on Machine Learning, 2014.
- [13] Tianqi Chen, Emily Fox, and Carlos Guestrin. Stochastic gradient Hamiltonian Monte Carlo. In International Conference on Machine Learning, 2015.
- [14] James Johndrow, Natesh Pillai, and Aaron Smith. No free lunch for approximate MCMC. arXiv:2010.12514, 2020.
- [15] Rémi Bardenet, Arnaud Doucet, and Chris Holmes. On Markov chain Monte Carlo methods for tall data. Journal of Machine Learning Research, 18:1–43, 2017.
- [16] Tigran Nagapetyan, Andrew Duncan, Leonard Hasenclever, Sebastian Vollmer, Lukasz Szpruch, and Konstantinos Zygalakis. The true cost of stochastic gradient Langevin dynamics. arXiv:1706.02692, 2017.
- [17] Jack Baker, Paul Fearnhead, Emily Fox, and Christopher Nemeth. Control variates for stochastic gradient MCMC. Statistics and Computing, 29:599–615, 2019.
- [18] Christopher Nemeth and Paul Fearnhead. Stochastic gradient Markov Chain Monte Carlo. Journal of the American Statistical Association, 116(533):433–450, 2021.
- [19] Matias Quiroz, Robert Kohn, Mattias Villani, and Minh-Ngoc Tran. Speeding up MCMC by efficient data subsampling. Journal of the American Statistical Association, 114(526):831–843, 2019.
- [20] Matias Quiroz, Robert Kohn, and Khue-Dung Dang. Subsampling MCMC—an introduction for the survey statistician. Sankhya: The Indian Journal of Statistics, 80-A:S33–S69, 2018.
- [21] David Blei, Alp Kucukelbir, and Jon McAuliffe. Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518):859–877, 2017.
- [22] Zhenming Shun and Peter McCullagh. Laplace approximation of high dimensional integrals. Journal of the Royal Statistical Society: Series B, 57(4):749–760, 1995.
- [23] Peter Hall, Tung Pham, Matt Wand, and Shen S.J. Wang. Asymptotic normality and valid inference for gaussian variational approximation. The Annals of Statistics, 39(5):2502–2532, 2011.
- [24] Aad van der Vaart. Asymptotic Statistics. Cambridge University Press, 2000.
- [25] Yixin Wang and David Blei. Frequentist consistency of variational Bayes. Journal of the American Statistical Association, 114(527):1147–1161, 2018.
- [26] Badr-Eddine Chérief-Abdellatif and Pierre Alquier. Consistency of variational Bayes inference for estimation and model selection in mixtures. Electronic Journal of Statistics, 12:2995–3035, 2018.
- [27] Yun Yang, Debdeep Pati, and Anirban Bhattacharya. α-variational inference with statistical guarantees. The Annals of Statistics, 2018.
- [28] Pierre Alquier and James Ridgway. Concentration of tempered posteriors and of their variational approximations. The Annals of Statistics, 48(3):1475–1497, 2020.
- [29] Zuheng Xu and Trevor Campbell. The computational asymptotics of Gaussian variational inference and the Laplace approximation. Statistics and Computing, 32(63), 2022.
- [30] Jeffrey Miller. Asymptotic normality, concentration, and coverage of generalized posteriors. Journal of Machine Learning Research, 22:1–53, 2021.
- [31] Cheng Zhang, Judith Bütepage, Hedvig Kjellström, and Stephan Mandt. Advances in variational inference. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(8):2008–2026, 2019.
- [32] Jonathan Huggins, Trevor Campbell, and Tamara Broderick. Coresets for scalable Bayesian logistic regression. In Advances in Neural Information Processing Systems, 2016.
- [33] Trevor Campbell and Tamara Broderick. Automated scalable Bayesian inference via Hilbert coresets. Journal of Machine Learning Research, 20(15):1–38, 2019.
- [34] Trevor Campbell and Boyan Beronov. Sparse variational inference: Bayesian coresets from scratch. In Advances in Neural Information Processing Systems, 2019.
- [35] Trevor Campbell and Tamara Broderick. Bayesian coreset construction via greedy iterative geodesic ascent. In International Conference on Machine Learning, 2018.
- [36] Naitong Chen, Zuheng Xu, and Trevor Campbell. Bayesian inference via sparse Hamiltonian flows. In Advances in Neural Information Processing Systems, 2022.
- [37] Cian Naik, Judith Rousseau, and Trevor Campbell. Fast Bayesian coresets via subsampling and quasi-Newton refinement. In Advances in Neural Information Processing Systems, 2022.
- [38] Martin Jankowiak and Du Phan. Surrogate likelihoods for variational annealed importance sampling. In International Conference on Machine Learning, 2022.
- [39] Naitong Chen and Trevor Campbell. Coreset Markov chain Monte Carlo. In International Conference on Artificial Intelligence and Statistics, 2024.
- [40] Thomas Cormen, Charles Leiserson, Ronald Rivest, and Clifford Stein. Introduction to Algorithms. The MIT Press, 4th edition, 2022.
- [41] Ping Ma, Michael Mahoney, and Bin Yu. A statistical perspective on algorithmic leveraging. Journal of Machine Learning Research, 16:861–911, 2015.
- [42] HaiYing Wang, Rong Zhu, and Ping Ma. Optimal subsampling for large sample logistic regression. Journal of the American Statistical Association, 113(522):829–844, 2018.
- [43] HaiYing Wang. More efficient estimation for logistic regression with optimal subsamples. Journal of Machine Learning Research, 20:1–59, 2019.
- [44] Mingyao Ai, Jun Yu, Huiming Zhang, and HaiYing Wang. Optimal subsampling algorithms for big data regressions. Statistica Sinica, 31(2):749–772, 2021.
- [45] HaiYing Wang and Yanyuan Ma. Optimal subsampling for quantile regression in big data. Biometrika, 108(1):99–112, 2021.
- [46] Dan Feldman. Introduction to core-sets: an updated survey. arXiv:2011.09384, 2020.
- [47] Alastair Walker. New fast method for generating discrete random numbers with arbitrary frequency distributions. Electronics Letters, 10(8):127–128, 1974.
- [48] Alastair Walker. An efficient method for generating discrete random variables with general distributions. ACM Transactions on Mathematical Software, 3(3):253–256, 1977.
- [49] Tim van Erven and Peter Harremoës. Rényi divergence and Kullback-Leibler divergence. IEEE Transactions on Information Theory, 60(7):3797–3820, 2014.
- [50] Roman Vershynin. High-dimensional probability: an introduction with applications in data science. Cambridge University Press, 2020.
- [51] Igor Vajda. Note on discrimination information and variation. IEEE Transactions on Information Theory, 16(6):771–773, 1970.
- [52] David Pollard. A user's guide to measure theoretic probability. Cambridge series in statistical and probabilistic mathematics. Cambridge University Press, 2002.
- [53] Robert Keener. Theoretical statistics: topics for a core course. Springer, 2010.
- [54] Andre Bulinski. Conditional central limit theorem. Theory of Probability & its Applications, 61(4):613–631, 2017.
- [55] Lorraine Schwartz. On Bayes procedures. Z. Wahrscheinlichkeitstheorie und Verwandte Gebiete, 4:10–26, 1965.
Appendix A Proofs
Proof of Lemma 3.1.
By Vajda’s inequality [51],
(32) | ||||
(33) | ||||
(34) |
The bound is monotone increasing in ; therefore because the squared Hellinger distance satisfies the inequality [52, p. 61],
(35) |
we have that
(36) |
We substitute the value of the squared Hellinger distance to find that
(37) |
Note that , so
(38) |
The bound is monotone decreasing in , so we require an upper bound on . To obtain the required bound, we split the integral into two parts—one on the set , and the other on —and then use the Cauchy-Schwarz inequality to bound the part on . Note that by definition and are mutually dominating, so the density ratio is well-defined and measurable.
(39) | ||||
(40) | ||||
(41) | ||||
(42) |
The result follows. ∎
Proof of Lemma 5.1.
We first consider the forward KL divergence. By definition,
(43) | ||||
(44) |
Since the is positive, for ,
(45) | ||||
(46) | ||||
(47) |
by Jensen’s inequality. Next we consider the reverse KL divergence. For any ,
(48) | ||||
(49) |
By Jensen’s inequality, for ,
(50) | ||||
(51) | ||||
(52) | ||||
(53) | ||||
(54) |
This is the same bound as in the forward KL divergence case. Since the bound applies for all , we can take the infimum. ∎
Proof of Theorem 3.3.
By replacing the integrals over the whole space in the denominator of in Lemma 3.1 with integrals over the subset ,
(55) | ||||
(56) | ||||
(57) | ||||
(58) |
So to obtain the stated lower bound on the KL divergence, we require an upper bound on , and lower bounds on and . By Taylor’s theorem, Assumption 3.2, and the assumption on , for all ,
(59) |
We shift the exponential arguments in by , note that is continuous and positive around , and and apply the Taylor expansions in Eq. 59 to obtain an upper bound on the first term:
(60) |
where denotes a quantity that converges in probability to 1 as . We can transform variables to , where is the Cholesky factorization of , and subsequently complete the square:
(61) |
We can obtain lower bounds on the other two terms using a similar technique:
(62) | ||||
(63) |
It remains to analyze the three terms. We bound the integral term in Eq. 61 with the integral over the whole space:
(64) |
For the integral term in Eq. 62, note that since and , we have
(65) | |||
(66) | |||
(67) | |||
(68) | |||
(69) |
For the integral term in Eq. 63, we consider two cases: one where is large, and one where it is small. First assume ; then by a similar technique as used in the first lower bound, since ,
(70) | |||
(71) | |||
(72) | |||
(73) |
When , we transform variables to find that since ,
(74) | ||||
(75) | ||||
(76) |
Therefore regardless of the value of ,
(77) |
Therefore, combining all previous results,
(78) | ||||
(79) | ||||
(80) | ||||
(81) |
We now consider the minimum over . Since neither nor above depends on , we have that
(82) |
On the branch of the objective function, the derivative in is always positive, and hence the minimum occurs at , and so
(83) |
On the branch of the objective function,
(84) |
For and , the function is convex in with minimum at , and so
(85) |
By assumption, and , and hence the branch has the asymptotic minimum:
(86) |
∎
Proof of Theorem 3.5.
By Lemma 3.1,
(87) | ||||
(88) | ||||
(89) |
Note that in this proof is subtly different from the used in the proof of Theorem 3.3; the latter two integrals are over the whole space (directly from Lemma 3.1), rather than . We shift the exponential arguments in by . We first provide lower bounds on two of the integral terms via Assumption 3.4:
(90) |
where denotes a quantity that converges in probability to 1, , and . Transforming variables via ,
(91) | ||||
(92) | ||||
(93) | ||||
(94) |
Let . Using the same technique, with and ,
(95) | ||||
(96) | ||||
(97) | ||||
(98) | ||||
(99) |
For the upper bound on the first term, we use a local quadratic expansion around , where ,
(100) |
Because , we have eventually; we can transform variables to , where is the Cholesky factorization, and subsequently complete the square. Note that
(101) |
so
(102) |
and therefore
(103) | |||
(104) |
Suppose first that . In this case we bound the integral in Eq. 104 by integrating over the whole space:
(105) |
Combining this with the previous results yields
(106) | |||
(107) | |||
(108) | |||
(109) | |||
(110) |
Bounding the last term below by 0 and minimizing over such that yields
(111) |
Bounding and minimizing over such that yields
(112) | ||||
(113) | ||||
(114) |
where the second line follows because for and , the function is convex in with minimum at . Therefore for ,
(115) |
Next suppose . A second upper bound on Eq. 104 can be obtained by taking the maximum of the integrand over the integration region . Note that since , , , and , we have that , and so
(116) | |||
(117) | |||
(118) |
Therefore, combining this result with the previous bounds and minimizing over yields
(119) | ||||
(120) | ||||
(121) | ||||
(122) | ||||
(123) |
Combining with the earlier bound and noting that yields the final result. ∎
Proof of Corollary 3.6.
The proof follows directly from Theorems 3.3 and 3.5 by the data processing inequality applied to . ∎
Proof of Theorem 5.3.
By Lemma 5.1,
(124) | ||||
(125) |
Since are -subexponential, if
(126) |
then
(127) |
By assumption, the condition holds when ; the result follows. ∎
Proof of Proposition 5.4.
Let . By the finiteness condition, [53, Theorem 2.4] asserts that is continuous, and has derivatives of all orders that can be obtained by passing differentiation through the integral within the set . Let , and . Note that : since , if and only if -a.s.; and since is symmetric positive semidefinite, if and only if . Therefore is continuous, has derivatives of all orders, and derivatives can be passed through the integral within the set . For a vector , , , and minimum positive eigenvalue of ,
(128) |
and so is continuous, has derivatives of all orders, and derivatives can be passed through the integral within the set . By Taylor’s theorem, for any in this set, there exists a distribution with density proportional to for some on the segment from the origin to such that
(129) |
By definition of , implies that -a.s. and hence . Therefore, for ,
(130) | ||||
(131) | ||||
(132) |
where denotes outer products to form a tensor. By continuity of derivatives of all orders within the neighbourhood , the result follows by selecting a sufficiently small . ∎
Proposition A.1.
Suppose there exist , , and such that and for all coreset weights satisfying , . Then the potentials are -subexponential with .
Proof of Proposition A.1.
Let . Since and for some , ,
(133) | ||||
(134) | ||||
(135) | ||||
(136) | ||||
(137) | ||||
(138) | ||||
(139) | ||||
(140) | ||||
(141) |
where . ∎
Proposition A.2.
Suppose , is -strongly concave, and there exists , , and such that for all coreset weights satisfying , is -Lipschitz smooth, and both and are bounded. Then for any , there exists , such that the potentials are -subexponential with the same as in Proposition A.1.
Proof of Proposition A.2.
Since is -strongly concave and is -Lipschitz smooth, we can write
(142) | ||||
(143) | ||||
(144) |
So setting implies is a nonpositive function as required. Then
(145) |
For , setting and then maximizing over yields
(146) | ||||
(147) |
By the boundedness of and , maximizing over yields a value of . ∎
Lemma A.3.
Let be i.i.d. random variables in with , and define the resampled sum
(148) |
where , with strictly positive resampling probabilities that may depend on and . If there exists a such that as ,
(149) |
then
(150) |
Proof.
We can rewrite
(151) |
where . Consider where is independent of , with probability , and otherwise. So if we set , [54, Cor. 3] asserts that
(152) |
as long as for all large enough,
(153) |
and as ,
(154) |
Note that the conditional mean and variance have the form
(155) | ||||
(156) |
which implies that a.s., since are strictly nonnegative and implies is finite almost surely. Next, note that
(157) | ||||
(158) | ||||
(159) |
The above expression converges in probability to 0 by the technical assumptions in the statement of the result as well as the fact that by the law of large numbers. Once again by the technical assumptions, , so
(160) | ||||
(161) |
and hence by Slutsky’s theorem,
(162) |
Using Slutsky’s theorem again with and rearranging yields the final result. ∎
Lemma A.4.
Suppose coreset weights are generated using the importance weighted construction in Algorithm 1. Let , , and . If conditions A(1-3) and (A6) in Section 4 hold, , and , then
(163) |
and
(164) |
Proof.
First, since , , and
(165) | ||||
(166) | ||||
(167) | ||||
(168) | ||||
(169) | ||||
(170) |
where the last line follows by assumption A6. Therefore by Chebyshev’s inequality and , . Since the data are i.i.d., by conditions A1 and A2, the central limit theorem holds for the sum of such that converges in distribution to a normal, and hence . By conditions A1, A2, and A6, Lemma A.3 holds such that for any ,
(171) |
Since condition A6 asserts that , the law of large numbers, condition A1, and imply that
(172) |
Summing over a basis of vectors shows that
(173) |
This completes the first three results. Next, by condition A3, for sufficiently large such that the neighbourhood contains the ball of radius around ,
(174) | ||||
(175) |
and
(176) |
so that we have that both
(177) |
by Markov’s inequality. Finally, by the bounded variance in A2, sampling probability bounds in A6, and , the variances of and both converge to 0 as , and since both of these quantities are unbiased estimates of , Chebyshev’s inequality yields the desired convergence in probability. ∎
Lemma A.5.
Suppose are i.i.d. random vectors in . Fix , and define . If there exists such that
(178) |
where denotes a vector of all 1 entries, then as ,
(179) |
Proof.
For any , by the union bound over subsets of of size ,
(180) | ||||
(181) | ||||
(182) | ||||
(183) |
By Markov’s inequality and ,
(184) | ||||
(185) |
Setting yields
(186) |
The right-hand side converges to 0 as , yielding the stated result. ∎
Proof of Corollary 4.1 and Corollary 4.2.
Set . Then since , , and assumptions (A1-3) and (A6) hold, Lemma A.4 holds. Note that , is positive at and twice differentiable by (A4), and since . Thus the conditions of Theorem 3.3 are verified. Substitution into the right term in the minimum of Theorem 3.3 yields the stated lower bound of . For the left term in Theorem 3.3, define . Then since , , and , (A5) guarantees that . Therefore the minimum is , and we complete the proof by transferring from on the -pushforward model to on the original model using Corollary 3.6. ∎
Proof of Corollary 4.3.
Fix the guaranteed by (A8), and set . Note that , is positive at and twice differentiable by (A4), and by (A1-3) the results pertaining to and in Lemma A.4 hold; thus Assumption 3.2 holds. By (A7), Assumption 3.4 holds as well as the conditions on and in Theorem 3.5. Finally by (A8), Lemma A.5 holds such that
(187) |
and hence . Therefore all conditions of Theorem 3.5 hold. For the left term in the minimum in Theorem 3.5, define . Then since , , and , (A5) guarantees that . For the right term,
(188) |
The minimum of these two is from the right term, so
(189) |
We complete the proof by transferring from on the -pushforward model to on the original model using Corollary 3.6. ∎
Proof of Proposition A.6.
The exact same technique applies to both models, so here we will just demonstrate it for the Cauchy model. In the Cauchy model, , , , and
(190) | ||||||
(191) |
where . Property (A1) holds by routine interchange of differentiation and integration. Property (A2) holds (for any ) because and are bounded functions of and jointly. Property (A3) holds (for any neighbourhood of ) because is a bounded function of and jointly. Property (A4) holds because the pushforward of through has full support on . In order to verify assumption (A5), suppose there exists a sequence of bounded measurable functions of the data and constants such that for all , ,
(192) |
The functions are similar to the test functions of Schwartz [55]. Then defining and ,
(193) | ||||
(194) | ||||
(195) |
Using the same proof technique as in Theorem 3.3, the denominator satisfies
(196) |
By assumption, there exists such that
(197) |
and a such that
(198) | ||||
(199) | ||||
(200) |
Therefore ; and since , as required by (A5). So to complete the proof of (A5) we need to find a suitable . Fix , and set
(201) |
Under , Hoeffding’s inequality yields
(202) |
And under for , for small enough , . Therefore
(203) | ||||
(204) | ||||
(205) |
at which point we can again apply Hoeffding’s inequality, completing the result. ∎
Lemma A.7.
Fix vectors in a separable Hilbert space with inner product denoted and norm denoted . Let be drawn from with probabilities either with or without replacement (if without replacement, the probabilities are renormalized after every draw). Then for all ,
(206) |
where
(207) |
Proof.
First note that it suffices to analyze the case with replacement, since this case provides an upper bound on the case without replacement. To demonstrate this, we couple two probability spaces—one that draws with replacement, and one without replacement. First, draw an identical vector for both copies. On each subsequent iteration , the “with replacement” copy first draws whether or not it selects a vector that was previously selected by the “without replacement” copy. If it does, it draws that vector independently; if it does not, it selects the same vector as the “without replacement” copy. In any case, at each iteration , the vectors drawn by the “with replacement” copy are always a subset of the vectors drawn by the “without replacement” copy, and hence the minimum over is greater for that copy. It therefore suffices to analyze the case with replacement.
To obtain an upper bound on the probability when sampling with replacement, instead of minimizing over all jointly, suppose we use the following iterative algorithm. Set . At the first iteration, we draw and set the weight by optimizing over :
(208) |
Set , and note that . Then at each subsequent iteration , assume the previous iterate is optimized over all nonnegative weights, and hence satisfies . We draw another vector , and bound the erorr of the next iterate by optimizing over only the weight for the new vector . Then
(209) | ||||
(210) |
Therefore,
(211) | |||
(212) | |||
(213) | |||
(214) | |||
(215) | |||
(216) |
where
(217) | ||||
(218) |
So for all ,
(219) |
Using the Chernoff bound on the binomial CDF, for all ,
(220) | ||||
(221) |
Substituting yields
(222) |
∎
Proof of Corollary 6.1.
Since the potentials are subexponential, Theorem 5.3 guarantees that
(223) |
We apply Lemma A.7 with vectors (in equivalence classes specified up to an additive constant) and inner product between defined by . In the notation of Lemma A.7, by assumption, and . Substituting , we find that
(224) |
Combining this result with the KL bound above yields the final result. ∎
NeurIPS Paper Checklist
-
1.
Claims
-
Question: Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope?
-
Answer: [Yes]
-
Justification: The abstract and introduction state that the paper produces new general upper and lower bounds for Bayesian coreset approximations, that these bounds are applied to particular cases of interest, and that empirical results align with the theory. The paper does indeed contain these results, with proofs in the supplemental material, and empirical results in the figures do match the theory well.
-
Guidelines:
-
•
The answer NA means that the abstract and introduction do not include the claims made in the paper.
-
•
The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
-
•
The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
-
•
It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.
-
•
-
2.
Limitations
-
Question: Does the paper discuss the limitations of the work performed by the authors?
-
Answer: [Yes]
-
Justification: In the conclusions section of the work, two main limitations of the theory are mentioned: the need to perform case-by-case analysis of subexponentiality constants and alignment probabilities in Definition 5.2 and Corollary 6.1.
-
Guidelines:
-
•
The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.
-
•
The authors are encouraged to create a separate ”Limitations” section in their paper.
-
•
The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
-
•
The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
-
•
The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
-
•
The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
-
•
If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
-
•
While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren’t acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.
-
•
-
3.
Theory Assumptions and Proofs
-
Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?
-
Answer: [Yes]
-
Justification: Theoretical results are numbered, and proofs of all results are included in the appendix.
-
Guidelines:
-
•
The answer NA means that the paper does not include theoretical results.
-
•
All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
-
•
All assumptions should be clearly stated or referenced in the statement of any theorems.
-
•
The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
-
•
Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
-
•
Theorems and Lemmas that the proof relies upon should be properly referenced.
-
•
-
4.
Experimental Result Reproducibility
-
Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
-
Answer: [Yes]
-
Justification: All details needed to reproduce the experimental results in Figure 2 and 3 are included in the text. Algorithms used in the experiments exist in the cited literature.
-
Guidelines:
-
•
The answer NA means that the paper does not include experiments.
-
•
If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
-
•
If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
-
•
Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general. releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
-
•
While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example
-
(a)
If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
-
(b)
If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
-
(c)
If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
-
(d)
We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.
-
(a)
-
•
-
5. Open access to data and code

Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?

Answer: [No]

Justification: There are no new algorithms presented in this work; the experiments involve only existing methods for which public code is available. The code is not central to the contributions of the paper.

Guidelines:
• The answer NA means that the paper does not include experiments requiring code.
• Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
• While we encourage the release of code and data, we understand that this might not be possible, so “No” is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
• The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
• The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
• The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
• At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
• Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.
6. Experimental Setting/Details

Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?

Answer: [Yes]

Justification: All details needed to reproduce the experimental results in Figures 2 and 3 are included in the text. The algorithms used in the experiments exist in the cited literature.

Guidelines:
• The answer NA means that the paper does not include experiments.
• The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
• The full details can be provided either with the code, in the appendix, or as supplemental material.
7. Experiment Statistical Significance

Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?

Answer: [Yes]

Justification: All empirical results show the mean over a number of trials, together with error bars indicating the standard error of the mean; a brief illustrative sketch of this computation is given after the guidelines below.

Guidelines:
• The answer NA means that the paper does not include experiments.
• The authors should answer “Yes” if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
• The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
• The method for calculating the error bars should be explained (closed-form formula, call to a library function, bootstrap, etc.).
• The assumptions made should be given (e.g., normally distributed errors).
• It should be clear whether the error bar is the standard deviation or the standard error of the mean.
• It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar rather than state that they have a 96% CI if the hypothesis of normality of errors is not verified.
• For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g., negative error rates).
• If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables.
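To make the reporting convention above concrete, the following is a minimal, hypothetical Python sketch of how a trial mean and 1-sigma standard error of the mean could be computed and reported; the array name kl_divergences and its values are illustrative placeholders only, not results or code from the paper.

import numpy as np

# Hypothetical placeholder: one KL-divergence estimate per independent trial
# (illustrative values only, not results from the paper).
kl_divergences = np.array([0.42, 0.37, 0.45, 0.40, 0.39, 0.44, 0.41, 0.38])

n_trials = kl_divergences.size
mean = kl_divergences.mean()
# Sample standard deviation (ddof=1) measures spread across trials.
std_dev = kl_divergences.std(ddof=1)
# Standard error of the mean = standard deviation / sqrt(number of trials).
std_err = std_dev / np.sqrt(n_trials)

# A 1-sigma error bar around the mean, as could be reported in a plot.
print(f"mean = {mean:.3f} +/- {std_err:.3f} (standard error, n = {n_trials})")

The distinction matters because the standard error shrinks as the number of trials grows while the standard deviation does not, so whichever is plotted should be stated explicitly, per the guideline above.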
8. Experiments Compute Resources

Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?

Answer: [Yes]

Justification: These details are not central to this paper, as no new methods or algorithms are presented and no claims are made about computational performance. However, the introduction does note that the simulations were performed on a desktop computer with a Core i7 processor and 32 GB of RAM.

Guidelines:
• The answer NA means that the paper does not include experiments.
• The paper should indicate the type of compute workers (CPU or GPU, internal cluster, or cloud provider), including relevant memory and storage.
• The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
• The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn’t make it into the paper).
9. Code of Ethics

Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?

Answer: [Yes]

Justification: This paper presents a new theoretical analysis of error bounds for Bayesian coreset methods. It does not present any new methodology or data with potentially harmful consequences.

Guidelines:
• The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
• If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
• The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).
10. Broader Impacts

Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?

Answer: [N/A]

Justification: The paper provides new theory regarding existing methodology; there is no potential negative societal impact of this work.

Guidelines:
• The answer NA means that there is no societal impact of the work performed.
• If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
• Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
• The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate deepfakes faster.
• The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
• If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).
11. Safeguards

Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?

Answer: [N/A]

Justification: This does not apply; the paper does not release any data or models.

Guidelines:
• The answer NA means that the paper poses no such risks.
• Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
• Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
• We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.
12. Licenses for existing assets

Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?

Answer: [N/A]

Justification: This does not apply; the paper does not use existing assets.

Guidelines:
• The answer NA means that the paper does not use existing assets.
• The authors should cite the original paper that produced the code package or dataset.
• The authors should state which version of the asset is used and, if possible, include a URL.
• The name of the license (e.g., CC-BY 4.0) should be included for each asset.
• For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
• If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
• For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
• If this information is not available online, the authors are encouraged to reach out to the asset’s creators.
13. New Assets

Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?

Answer: [N/A]

Justification: This does not apply; the paper does not release new assets.

Guidelines:
• The answer NA means that the paper does not release new assets.
• Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
• The paper should discuss whether and how consent was obtained from people whose asset is used.
• At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.
14. Crowdsourcing and Research with Human Subjects

Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?

Answer: [N/A]

Justification: This does not apply; the paper does not involve crowdsourcing or research with human subjects.

Guidelines:
• The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
• Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
• According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.
15. Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects

Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?

Answer: [N/A]

Justification: This does not apply; the paper does not involve research with human subjects.

Guidelines:
• The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
• Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
• We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
• For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.