

[1] Research School of Finance, Actuarial Studies and Statistics, The Australian National University, Canberra, ACT, Australia

[2] Department of Statistical Sciences and Operations Research, Virginia Commonwealth University, Richmond, VA, USA

BOB: Bayesian Optimized Bootstrap for Uncertainty Quantification in Gaussian Mixture Models

Santiago Marin, Bronwyn Loong, Anton H. Westveld
Abstract

A natural way to quantify uncertainty in Gaussian mixture models (GMMs) is through Bayesian methods. That said, sampling from the joint posterior distribution of GMMs via standard Markov chain Monte Carlo (MCMC) imposes several computational challenges, which have prevented a broader full Bayesian implementation of these models. A growing body of literature has introduced the Weighted Likelihood Bootstrap and the Weighted Bayesian Bootstrap as alternatives to MCMC sampling. The core idea of these methods is to repeatedly compute maximum a posteriori (MAP) estimates on many randomly weighted posterior densities. These MAP estimates can then be treated as approximate posterior draws. Nonetheless, a central question remains unanswered: how should the random weights be selected under arbitrary sample sizes? We therefore introduce the Bayesian Optimized Bootstrap (BOB), a computational method to automatically select these random weights by minimizing, through Bayesian Optimization, a black-box and noisy version of the reverse Kullback–Leibler (KL) divergence between the Bayesian posterior and an approximate posterior obtained via random weighting. Our proposed method outperforms competing approaches in recovering the Bayesian posterior, provides better uncertainty quantification, and retains key asymptotic properties of existing methods. BOB’s performance is demonstrated through extensive simulations, along with real-world data analyses.

keywords:
Bayesian Optimization, Finite Mixture Models, Multimodal Posterior Sampling, Weighted Bayesian Bootstrap

1 Introduction

Gaussian Mixture Models (GMMs) are powerful and flexible tools, which arise naturally when the population of sampling units consists of homogeneous clusters or subgroups [52]. They can be applied to a wide range of scientific problems, including model-based clustering [13] and density estimation [20], or used as flexible semi-parametric model-based approaches for analyzing complex data [39]. It is no surprise, then, that uncertainty quantification for inferring properties of, or making predictions about, a population of interest based on mixture models is now an important area of statistical research [53]. A natural way to quantify uncertainty in mixture models is through Bayesian methods, by specifying a sampling model and a prior distribution [35].

Motivation. Despite the flexibility and wide applicability of GMMs, the large computational costs of Bayesian analysis have prevented a broader full Bayesian implementation of these models. For instance, one could sample from the posterior distribution of GMMs via standard Markov chain Monte Carlo (MCMC). However, since the log-likelihood function is non-concave, the resulting posterior might be multimodal. The multimodality of the posterior, combined with the serial nature of MCMC algorithms, results in a sampler that might get trapped in local regions of high posterior density, failing to explore the entire parameter space and leading to mixing limitations between posterior modes [8, 12]. Moreover, in the context of mixture models, MCMC algorithms scale poorly to large datasets [35] and their serial nature has prevented the adoption of recent developments in parallel computing [32, 40], further exacerbating the computational limitations of these methods. Consequently, new and refined posterior samplers are still required.

Related Work. This research joins a growing body of literature, dating back to 1994 when Newton and Raftery proposed the Weighted Likelihood Bootstrap (WLB) as an alternative to MCMC. The main idea behind WLB is to generate approximate posterior draws by independently maximizing a series of randomly weighted log-likelihood functions. WLB, however, does not naturally incorporate any prior information in the sampling procedure. With this in mind, researchers have recently been extending the WLB framework to accommodate prior information by weighting not only the likelihood but also the prior. A few examples include the Weighted Bayesian Bootstrap, or WBB [32, 34], and the Bayesian Bootstrap spike-and-slab Lasso [36]. In the context of generalized Bayesian analysis and model misspecification, we can find the loss-likelihood Bootstrap [28], the Posterior Bootstrap [12, 40], and the Deep Bayesian Bootstrap [37].

Our Contribution. In this paper, we first illustrate how to carry out a Bayesian implementation of GMMs within a WBB framework by developing an algorithm to optimize a randomly weighted and non-concave posterior distribution associated with a GMM. Secondly, we address the problem of selecting adequate random weights in order to obtain better posterior approximations. Up until now, one key question has not been addressed: how to select the random weights under arbitrary, and especially under fixed, sample sizes. A few analyses have approached this question through the lens of asymptotic theory but, to the best of our knowledge, the core of the question remains unanswered. In fact, as discussed by [33] and [37], “no general recipe (to select the random weights) yet exists.” Thus, in this work we develop the Bayesian Optimized Bootstrap (BOB), a computational approach to automatically select the random weights by minimizing a black-box and noisy version of the reverse Kullback–Leibler (KL) divergence between the Bayesian posterior and an approximate posterior obtained via random weighting. Since this divergence is expensive to evaluate and we do not have access to a closed-form expression for the divergence or its derivatives, the minimization is carried out via Bayesian Optimization, or BO (and thus the name BOB), as BO is one of the most efficient methods for optimizing a black-box objective function, requiring only a few evaluations of the objective itself [22, 21]. We show that BOB leads to a better approximation of the Bayesian posterior, allows for uncertainty quantification, and, unlike standard MCMC, is able to take advantage of recent developments in parallel computing.

Article Outline. The rest of this article is organized as follows. Section 2 revisits WBB and describes how a Bayesian implementation of GMMs can be carried out within a WBB framework. In Section 3 we formally introduce BOB and draw connections with existing Bootstrap-based posterior samplers. Simulation exercises are carried out in Section 4. In Section 5, we demonstrate the applicability of BOB on real-world data. We conclude with a discussion in Section 6.

2 Revisiting Weighted Bayesian Bootstrap

2.1 Problem Setup

For our sampling model, let $\{\mathbf{y}_{1},\dots,\mathbf{y}_{n}\}$ be a random sample from a Gaussian mixture with $K$ components. More precisely, let the density of $\mathbf{y}_{i}\in\mathbb{R}^{d}$, for $i\in\{1,\dots,n\}$, be

$$p\bigl(\mathbf{y}_{i}\,\big|\,\{\pi_{k},\boldsymbol{\mu}_{k},\boldsymbol{\Sigma}_{k}\}_{k=1}^{K}\bigr)=\sum_{k=1}^{K}\pi_{k}\,\varphi\left(\mathbf{y}_{i};\,\boldsymbol{\mu}_{k},\boldsymbol{\Sigma}_{k}\right), \qquad (1)$$

where $\pi_{k}\in[0,1]$ is the weight of component $k$ so that $\sum_{k=1}^{K}\pi_{k}=1$, $K\geq 2$ is a known integer, $\varphi(\cdot\,;\,\boldsymbol{\mu}_{k},\boldsymbol{\Sigma}_{k})$ denotes the Gaussian density with mean vector $\boldsymbol{\mu}_{k}\in\mathbb{R}^{d}$ and covariance matrix $\boldsymbol{\Sigma}_{k}\in\mathbb{M}_{+}^{d}$, and $\mathbb{M}_{+}^{d}$ denotes the set of symmetric positive semidefinite matrices of size $d\times d$. Additionally, let us define the latent indicator variables $z_{ik}=\mathbbm{1}\{i\text{-th obs. is drawn from the }k\text{-th component}\}$ so that $\mathbf{z}_{i}=(z_{i1},\dots,z_{iK})^{\prime}\sim\mathcal{M}(1;\{\pi_{k}\}_{k=1}^{K})$, where $\mathcal{M}$ denotes the multinomial distribution. Then, the joint distribution of $\mathbf{Y}=(\mathbf{y}_{1}^{\prime},\dots,\mathbf{y}_{n}^{\prime})^{\prime}\in\mathbb{R}^{n\times d}$ and $\mathbf{Z}=(\mathbf{z}_{1}^{\prime},\dots,\mathbf{z}_{n}^{\prime})^{\prime}\in\{0,1\}^{n\times K}$ would be given by

$$p\bigl(\mathbf{Y},\mathbf{Z}\,\big|\,\{\pi_{k},\boldsymbol{\mu}_{k},\boldsymbol{\Sigma}_{k}\}_{k=1}^{K}\bigr)=\prod_{i=1}^{n}\prod_{k=1}^{K}\bigl(\pi_{k}\,\varphi(\mathbf{y}_{i};\,\boldsymbol{\mu}_{k},\boldsymbol{\Sigma}_{k})\bigr)^{z_{ik}}. \qquad (2)$$

For our prior specification, let $\boldsymbol{\mu}_{k}|\boldsymbol{\Sigma}_{k}\sim\mathcal{N}(\boldsymbol{\beta}_{k},\,\boldsymbol{\Sigma}_{k}/\lambda_{k})$, $\boldsymbol{\Sigma}_{k}\sim\mathcal{IW}(\nu_{k},\,\boldsymbol{\Psi}_{k}^{-1})$, and $\boldsymbol{\pi}\sim\mathcal{D}(a_{1},\dots,a_{K})$, which corresponds to a normal-inverse-Wishart prior for $\boldsymbol{\mu}_{k}$ and $\boldsymbol{\Sigma}_{k}$ with scale matrix $\boldsymbol{\Psi}_{k}$, and a Dirichlet prior for the mixture proportions $\boldsymbol{\pi}=(\pi_{1},\dots,\pi_{K})^{\prime}$. Additionally, let $\boldsymbol{\theta}=\{\pi_{k},\boldsymbol{\mu}_{k},\boldsymbol{\Sigma}_{k}\}_{k=1}^{K}\in\boldsymbol{\Theta}$ be the collection of all model parameters so that, under the sampling distribution and priors introduced above, the joint Bayesian posterior would be, up to a proportionality constant, given by

$$p(\boldsymbol{\theta},\mathbf{Z}|\mathbf{Y})\propto p(\boldsymbol{\pi})\prod_{k=1}^{K}\left\{p(\boldsymbol{\Sigma}_{k})\,p(\boldsymbol{\mu}_{k}|\boldsymbol{\Sigma}_{k})\prod_{i=1}^{n}p(\mathbf{y}_{i}|\boldsymbol{\mu}_{k},\boldsymbol{\Sigma}_{k},z_{ik})\,p(z_{ik}|\pi_{k})\right\}. \qquad (3)$$
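To make the model and prior specification concrete, the short R sketch below evaluates the complete-data log-posterior in (3) up to an additive constant. It is only an illustration under our notation: the function name log_post_complete and the list-based representation of the parameters and prior hyper-parameters are ours, not part of any released code, and the mvtnorm package is assumed to be available.

    # Illustrative R sketch: unnormalized complete-data log-posterior (3).
    # Y: n x d data matrix; Z: n x K indicator matrix; theta: list with
    # elements pi (length K), mu (K x d matrix), Sigma (list of d x d);
    # prior: list with a, lambda, nu (length K), beta (K x d), Psi (list).
    log_post_complete <- function(Y, Z, theta, prior) {
      K <- length(theta$pi); d <- ncol(Y)
      lp <- sum((prior$a - 1) * log(theta$pi))          # Dirichlet prior on pi
      for (k in 1:K) {
        Sk <- theta$Sigma[[k]]; mk <- theta$mu[k, ]
        Sk_inv <- solve(Sk); logdetS <- determinant(Sk)$modulus
        # Inverse-Wishart prior on Sigma_k (parameterized as in (5))
        lp <- lp - ((prior$nu[k] + d) / 2 + 1) * logdetS -
          0.5 * sum(diag(prior$Psi[[k]] %*% Sk_inv))
        # Conditional Gaussian prior on mu_k
        lp <- lp - 0.5 * prior$lambda[k] *
          t(mk - prior$beta[k, ]) %*% Sk_inv %*% (mk - prior$beta[k, ])
        # Complete-data likelihood contribution of component k
        lp <- lp + sum(Z[, k] * (log(theta$pi[k]) +
          mvtnorm::dmvnorm(Y, mean = mk, sigma = Sk, log = TRUE)))
      }
      as.numeric(lp)
    }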

2.2 Posterior Sampling via Random Weighting

We now illustrate how one can approximately sample from the joint posterior in (3) via WBB. Following [32], WBB can be summarized in three simple steps:

Step 1: Start by sampling non-negative random weights $\mathbf{u}=(u_{1},\dots,u_{n})^{\prime}$ from some weight distribution $F_{u}$ and construct the re-weighted log-density of $\mathbf{Y}$ and $\mathbf{Z}$ as

$$\log p_{u}\left(\mathbf{Y},\mathbf{Z}|\boldsymbol{\theta}\right)=\sum_{i=1}^{n}u_{i}\left\{\sum_{k=1}^{K}z_{ik}\left[\log\pi_{k}+\log\varphi(\mathbf{y}_{i};\,\boldsymbol{\mu}_{k},\boldsymbol{\Sigma}_{k})\right]\right\}. \qquad (4)$$

Step 2: Sample $\tilde{\mathbf{u}}=(\tilde{u}_{\pi},\tilde{u}_{\mu_{1}},\dots,\tilde{u}_{\mu_{K}},\tilde{u}_{\Sigma_{1}},\dots,\tilde{u}_{\Sigma_{K}})^{\prime}$ from some weight distribution $F_{\tilde{u}}$ and construct the re-weighted log-prior density as

$$\log p_{\tilde{u}}(\boldsymbol{\theta})\propto\sum_{k=1}^{K}\Biggl\{\tilde{u}_{\pi}(a_{k}-1)\log\pi_{k}-\tilde{u}_{\Sigma_{k}}\left[\left(\frac{\nu_{k}+d}{2}+1\right)\log|\boldsymbol{\Sigma}_{k}|+\frac{\operatorname{tr}\left(\boldsymbol{\Psi}_{k}\boldsymbol{\Sigma}_{k}^{-1}\right)}{2}\right]-\tilde{u}_{\mu_{k}}\left[\frac{\lambda_{k}}{2}(\boldsymbol{\mu}_{k}-\boldsymbol{\beta}_{k})^{\prime}\boldsymbol{\Sigma}_{k}^{-1}(\boldsymbol{\mu}_{k}-\boldsymbol{\beta}_{k})\right]\Biggr\}. \qquad (5)$$

Step 3: By marginalizing $\mathbf{Z}$ out of equation (4), one would have that $\log p_{u}(\mathbf{Y}|\boldsymbol{\theta})=\log\bigl(\sum_{\mathbf{Z}}p_{u}(\mathbf{Y},\mathbf{Z}|\boldsymbol{\theta})\bigr)$ and the re-weighted log-posterior of $\boldsymbol{\theta}$ would be, up to an additive constant, given by $\log p_{u}(\boldsymbol{\theta}|\mathbf{Y})\propto\log p_{u}(\mathbf{Y}|\boldsymbol{\theta})+\log p_{\tilde{u}}(\boldsymbol{\theta})$. WBB proceeds to maximize this posterior density, seeking

$$\boldsymbol{\theta}^{*}=\operatorname*{arg\,max}_{\boldsymbol{\theta}\in\boldsymbol{\Theta}}\left\{\log p_{u}(\boldsymbol{\theta}|\mathbf{Y})\right\}. \qquad (6)$$

Under a traditional Bayesian framework, the randomness in $\boldsymbol{\theta}$ arises from treating it as a random variable with a prior distribution. On the other hand, as discussed in [37], $\boldsymbol{\theta}^{*}$ can be seen as an estimator where, given the data, the only source of randomness comes from the random weights. Thus, by fixing the data $\mathbf{Y}$ and repeating steps 1-3 many times, one obtains approximate posterior draws $\{\boldsymbol{\theta}^{*}_{(s)}\}_{s\in\mathbb{N}}$. This idea is attractive because optimizing can be easier and less computationally intensive than sampling from an intractable posterior. Note, additionally, that the random weights are independent across iterations. Consequently, $\boldsymbol{\theta}^{*}_{(s)}$ and $\boldsymbol{\theta}^{*}_{(s^{\prime})}$, for $s\neq s^{\prime}$, are also independent and can be sampled in parallel. Moreover, Bootstrap-based posterior samplers do not require costly tuning runs, burn-in periods, or convergence diagnostics [12, 40], making random weighting an attractive alternative to MCMC sampling.
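As a minimal illustration, one WBB draw can be sketched in R as follows; wbb_one_draw and weighted_em are placeholder names (the latter standing in for the optimizer of Section 2.3), and exponential(1) weights are used as one common choice of $F_{u}$ and $F_{\tilde{u}}$.

    # Illustrative sketch of one WBB draw (steps 1-3).
    wbb_one_draw <- function(Y, prior, weighted_em) {
      n <- nrow(Y); K <- length(prior$a)
      u <- rexp(n)                                   # step 1: likelihood weights
      u_tilde <- list(pi = rexp(1),                  # step 2: prior weights
                      mu = rexp(K), Sigma = rexp(K))
      weighted_em(Y, u, u_tilde, prior)              # step 3: maximize (6)
    }
    # Repeating this S times (independently, possibly in parallel) yields
    # approximate posterior draws theta*_(1), ..., theta*_(S).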

Nonetheless, performing the joint optimization in (6) is still a challenge on its own, especially in light of the non-concavity of $\log p_{u}(\boldsymbol{\theta}|\mathbf{Y})$. Thus, we treat the latent indicator variables, $\mathbf{Z}$, as missing data and develop a randomly weighted expectation-maximization (EM) algorithm [10] to solve (6).

2.3 Randomly Weighted Expectation-Maximization

2.3.1 Expectation Step

We start by computing the expected value of $\mathbf{Z}$, conditional on $\mathbf{Y}$, $\mathbf{u}$, and $\boldsymbol{\theta}^{(t)}$, where $\boldsymbol{\theta}^{(t)}$ is the current value of $\boldsymbol{\theta}$ within the EM algorithm. By Bayes' rule, we have that

$$\begin{split}\mathbb{E}\left[z_{ik}\,\middle|\,\boldsymbol{\theta}^{(t)},\mathbf{Y},\mathbf{u}\right]&=\frac{\exp\left\{u_{i}\left[\log\mathbb{P}\left(z_{ik}=1|\pi_{k}^{(t)}\right)+\log\varphi\left(\mathbf{y}_{i};\,\boldsymbol{\mu}_{k}^{(t)},\boldsymbol{\Sigma}_{k}^{(t)}\right)\right]\right\}}{\sum_{r=1}^{K}\exp\left\{u_{i}\left[\log\mathbb{P}\left(z_{ir}=1|\pi_{r}^{(t)}\right)+\log\varphi\left(\mathbf{y}_{i};\,\boldsymbol{\mu}_{r}^{(t)},\boldsymbol{\Sigma}_{r}^{(t)}\right)\right]\right\}}\\&=\frac{\left[\pi_{k}^{(t)}\varphi\left(\mathbf{y}_{i};\,\boldsymbol{\mu}_{k}^{(t)},\boldsymbol{\Sigma}_{k}^{(t)}\right)\right]^{u_{i}}}{\sum_{r=1}^{K}\left[\pi_{r}^{(t)}\varphi\left(\mathbf{y}_{i};\,\boldsymbol{\mu}_{r}^{(t)},\boldsymbol{\Sigma}_{r}^{(t)}\right)\right]^{u_{i}}}=q_{ik}.\end{split} \qquad (7)$$
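In R, the weighted E-step can be sketched as below, computing the responsibilities in equation (7) on the log scale for numerical stability; e_step is an illustrative name and the mvtnorm package is assumed.

    # Illustrative weighted E-step (equation (7)).
    e_step <- function(Y, u, pi, mu, Sigma) {
      K <- length(pi); n <- nrow(Y)
      a <- matrix(0, n, K)
      for (k in 1:K)
        a[, k] <- u * (log(pi[k]) +
          mvtnorm::dmvnorm(Y, mean = mu[k, ], sigma = Sigma[[k]], log = TRUE))
      q <- exp(a - apply(a, 1, max))   # subtract row-wise max before exponentiating
      q / rowSums(q)                   # n x K matrix of responsibilities q_ik
    }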

2.3.2 Maximization Step

In the maximization step, we then maximize the following surrogate objective function with respect to $\boldsymbol{\theta}\in\boldsymbol{\Theta}$:

$$\begin{split}\mathbb{E}\left[\log p_{u}(\boldsymbol{\theta},\mathbf{Z}|\mathbf{Y})\,\middle|\,\boldsymbol{\theta}^{(t)}\right]=&-\frac{1}{2}\sum_{i=1}^{n}u_{i}\left\{\sum_{k=1}^{K}q_{ik}\left[\log|\boldsymbol{\Sigma}_{k}|+(\mathbf{y}_{i}-\boldsymbol{\mu}_{k})^{\prime}\boldsymbol{\Sigma}_{k}^{-1}(\mathbf{y}_{i}-\boldsymbol{\mu}_{k})\right]\right\}\\&-\sum_{k=1}^{K}\Biggl\{\tilde{u}_{\mu_{k}}\left[\frac{\lambda_{k}}{2}(\boldsymbol{\mu}_{k}-\boldsymbol{\beta}_{k})^{\prime}\boldsymbol{\Sigma}_{k}^{-1}(\boldsymbol{\mu}_{k}-\boldsymbol{\beta}_{k})\right]+\tilde{u}_{\Sigma_{k}}\left[\left(\frac{\nu_{k}+d}{2}+1\right)\log|\boldsymbol{\Sigma}_{k}|+\frac{\operatorname{tr}\left(\boldsymbol{\Psi}_{k}\boldsymbol{\Sigma}_{k}^{-1}\right)}{2}\right]\\&\qquad-\left[\tilde{a}_{k}-1+\sum_{i=1}^{n}u_{i}q_{ik}\right]\log\pi_{k}\Biggr\},\end{split} \qquad (8)$$

where $\tilde{a}_{k}=(a_{k}-1)\tilde{u}_{\pi}+1$. As we will illustrate shortly, the objective in (8) leads to well-known maximization problems, which can be solved relatively cheaply.

Proposition 1.

Let us define $\tilde{n}_{k}=\sum_{i=1}^{n}u_{i}q_{ik}$, $\tilde{\lambda}_{k}=\tilde{u}_{\mu_{k}}\lambda_{k}$, $\tilde{\boldsymbol{\Psi}}_{k}=\tilde{u}_{\Sigma_{k}}\boldsymbol{\Psi}_{k}$, $\tilde{\nu}_{k}=\tilde{u}_{\Sigma_{k}}(\nu_{k}+d+2)-2-d$, $\tilde{\bar{\mathbf{y}}}_{k}=\tilde{n}_{k}^{-1}\sum_{i=1}^{n}(u_{i}q_{ik})\,\mathbf{y}_{i}$, and $\tilde{\mathbf{S}}_{k}=\sum_{i=1}^{n}(u_{i}q_{ik})(\mathbf{y}_{i}-\tilde{\bar{\mathbf{y}}}_{k})(\mathbf{y}_{i}-\tilde{\bar{\mathbf{y}}}_{k})^{\prime}$, for all $k\in\{1,\dots,K\}$. Additionally, let $\bar{\boldsymbol{\beta}}_{k}=\tilde{\lambda}_{k}/(\tilde{\lambda}_{k}+\tilde{n}_{k})\,\boldsymbol{\beta}_{k}+\tilde{n}_{k}/(\tilde{\lambda}_{k}+\tilde{n}_{k})\,\tilde{\bar{\mathbf{y}}}_{k}$, $\bar{\boldsymbol{\Psi}}_{k}=\tilde{\boldsymbol{\Psi}}_{k}+\tilde{\mathbf{S}}_{k}+\tilde{\lambda}_{k}\tilde{n}_{k}/(\tilde{\lambda}_{k}+\tilde{n}_{k})\,(\tilde{\bar{\mathbf{y}}}_{k}-\boldsymbol{\beta}_{k})(\tilde{\bar{\mathbf{y}}}_{k}-\boldsymbol{\beta}_{k})^{\prime}$, $\bar{\lambda}_{k}=\tilde{\lambda}_{k}+\tilde{n}_{k}$, and $\bar{\nu}_{k}=\tilde{\nu}_{k}+\tilde{n}_{k}$. Then, the update of $\boldsymbol{\mu}_{k}$ and $\boldsymbol{\Sigma}_{k}$ would be given by

$$\bigl(\boldsymbol{\mu}_{k}^{(t+1)},\boldsymbol{\Sigma}_{k}^{(t+1)}\bigr)=\operatorname*{arg\,max}_{\boldsymbol{\mu}_{k},\,\boldsymbol{\Sigma}_{k}}\bigl\{h(\boldsymbol{\mu}_{k},\boldsymbol{\Sigma}_{k})\bigr\}, \qquad (9)$$

where

$$h(\boldsymbol{\mu}_{k},\boldsymbol{\Sigma}_{k})=-\left(\frac{\bar{\nu}_{k}+d}{2}+1\right)\log|\boldsymbol{\Sigma}_{k}|-\frac{1}{2}\operatorname{tr}\left(\bar{\boldsymbol{\Psi}}_{k}\boldsymbol{\Sigma}_{k}^{-1}\right)-\frac{\bar{\lambda}_{k}}{2}(\boldsymbol{\mu}_{k}-\bar{\boldsymbol{\beta}}_{k})^{\prime}\boldsymbol{\Sigma}_{k}^{-1}(\boldsymbol{\mu}_{k}-\bar{\boldsymbol{\beta}}_{k}).$$

Details on the derivation of $h:\mathbb{R}^{d}\times\mathbb{M}_{+}^{d}\rightarrow\mathbb{R}$ are presented in the supplementary materials. To optimize $h$, we make use of a two-stage procedure in which we first update $\boldsymbol{\Sigma}_{k}$ and then update $\boldsymbol{\mu}_{k}$.

Update $\boldsymbol{\Sigma}_{k}$: Start by noting that $\boldsymbol{\Sigma}_{k}\mapsto\log\int_{\mathbb{R}^{d}}\exp(h(\boldsymbol{\mu}_{k},\boldsymbol{\Sigma}_{k}))\,d\boldsymbol{\mu}_{k}$ is a monotone transformation of $\boldsymbol{\Sigma}_{k}\mapsto h(\boldsymbol{\mu}_{k},\boldsymbol{\Sigma}_{k})$ [46, Ch. 9]. Thus, optimizing (9) with respect to $\boldsymbol{\Sigma}_{k}$ yields

$$\boldsymbol{\Sigma}_{k}^{(t+1)}=\operatorname*{arg\,max}_{\boldsymbol{\Sigma}_{k}\in\mathbb{M}_{+}^{d}}\left\{-\frac{\bar{\nu}_{k}+d+1}{2}\log|\boldsymbol{\Sigma}_{k}|-\frac{1}{2}\operatorname{tr}\left(\bar{\boldsymbol{\Psi}}_{k}\boldsymbol{\Sigma}_{k}^{-1}\right)\right\}. \qquad (10)$$

One can identify the solution to (10) as the mode of an $\mathcal{IW}(\bar{\nu}_{k},\,\bar{\boldsymbol{\Psi}}_{k}^{-1})$ distribution. That is,

$$\boldsymbol{\Sigma}_{k}^{(t+1)}=\frac{\bar{\boldsymbol{\Psi}}_{k}}{\bar{\nu}_{k}+d+1}. \qquad (11)$$

Update $\boldsymbol{\mu}_{k}$: Again, note that $\boldsymbol{\mu}_{k}\mapsto\int_{\mathbb{M}_{+}^{d}}\exp(h(\boldsymbol{\mu}_{k},\boldsymbol{\Sigma}_{k}))\,d\boldsymbol{\Sigma}_{k}$ is a monotone transformation of $\boldsymbol{\mu}_{k}\mapsto h(\boldsymbol{\mu}_{k},\boldsymbol{\Sigma}_{k})$, so optimizing (9) with respect to $\boldsymbol{\mu}_{k}$ reduces to

$$\boldsymbol{\mu}_{k}^{(t+1)}=\operatorname*{arg\,max}_{\boldsymbol{\mu}_{k}\in\mathbb{R}^{d}}\left\{\left[1+\bar{\lambda}_{k}(\boldsymbol{\mu}_{k}-\bar{\boldsymbol{\beta}}_{k})^{\prime}\bar{\boldsymbol{\Psi}}_{k}^{-1}(\boldsymbol{\mu}_{k}-\bar{\boldsymbol{\beta}}_{k})\right]^{-(\bar{\nu}_{k}+1)/2}\right\}, \qquad (12)$$

which corresponds to the mode of a $t_{(\bar{\nu}_{k}-d+1)}\bigl(\bar{\boldsymbol{\beta}}_{k},\,\bar{\boldsymbol{\Psi}}_{k}/(\bar{\lambda}_{k}(\bar{\nu}_{k}-d+1))\bigr)$ distribution, where $t_{\nu}(\mathbf{m},\boldsymbol{\Lambda})$ denotes the t-distribution with $\nu$ degrees of freedom, location vector $\mathbf{m}$, and scale matrix $\boldsymbol{\Lambda}$ [4, Ch. 2]. More precisely, the update of $\boldsymbol{\mu}_{k}$ is given by $\boldsymbol{\mu}_{k}^{(t+1)}=\bar{\boldsymbol{\beta}}_{k}$.

Update $\boldsymbol{\pi}$: Lastly, note from (8) that the update of $\boldsymbol{\pi}$ corresponds to the mode of a $\mathcal{D}(\bar{a}_{1},\dots,\bar{a}_{K})$ distribution, where $\bar{a}_{k}=\tilde{a}_{k}+\tilde{n}_{k}$. Namely, for $k\in\{1,\dots,K\}$,

$$\pi_{k}^{(t+1)}=\frac{\bar{a}_{k}-1}{\left(\sum_{r=1}^{K}\bar{a}_{r}\right)-K}. \qquad (13)$$

Our EM algorithm, then, iterates between the E and the M steps until convergence.
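For completeness, the closed-form M-step updates of Proposition 1 and equations (11)-(13) can be sketched in R as follows for a single component $k$; the function name and the list-based prior representation are illustrative, not part of any released code.

    # Illustrative M-step for component k (Proposition 1 and (11)-(13)).
    m_step_component <- function(Y, q_k, u, u_mu, u_Sigma, a_tilde_k, prior_k) {
      d  <- ncol(Y)
      w  <- u * q_k                                   # u_i * q_ik
      nk <- sum(w)                                    # n_tilde_k
      lam <- u_mu * prior_k$lambda                    # lambda_tilde_k
      Psi <- u_Sigma * prior_k$Psi                    # Psi_tilde_k
      nu  <- u_Sigma * (prior_k$nu + d + 2) - 2 - d   # nu_tilde_k
      ybar <- colSums(w * Y) / nk                     # weighted mean
      S <- crossprod(sqrt(w) * sweep(Y, 2, ybar))     # weighted scatter matrix
      beta_bar <- (lam * prior_k$beta + nk * ybar) / (lam + nk)
      Psi_bar  <- Psi + S + (lam * nk / (lam + nk)) *
        tcrossprod(ybar - prior_k$beta)
      list(mu    = beta_bar,                          # update (12): mode beta_bar
           Sigma = Psi_bar / (nu + nk + d + 1),       # update (11)
           a_bar = a_tilde_k + nk)                    # feeds the pi update (13)
    }
    # With a_bar collected over k, pi_k = (a_bar_k - 1) / (sum(a_bar) - K).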

2.3.3 Dealing with Suboptimal Modes: Tempered EM

Despite the simplicity of our EM algorithm, a well-known drawback of non-convex optimization methods is that they might converge to a local optimum (i.e., a suboptimal mode). Random restarts (RRs) have been widely used to increase parameter space exploration and escape suboptimal modes. However, this approach is computationally intensive and still does not guarantee that one will reach the global mode.

Tempering and annealing [24, 45], on the other hand, are optimization techniques that also increase parameter space exploration, at a lower computational cost. More precisely, let $\{T_{t}\}_{t\in\mathbb{N}}$ be a tempering profile, i.e., a sequence of positive numbers such that $\lim_{t\rightarrow\infty}T_{t}=1$, where $t$ indexes the iterations of the EM algorithm. A tempered EM algorithm then modifies the E step in equation (7) as $\tilde{q}_{ik,t}=q_{ik}^{1/T_{t}}/(\sum_{r=1}^{K}q_{ir}^{1/T_{t}})$, where larger values of $T_{t}$ allow the algorithm to explore more of the target posterior by flattening the distribution of $\tilde{q}_{ik,t}$; as $t\rightarrow\infty$ one recovers the original objective function, progressively attracting the solution towards the global mode [2, 26]. Thus, we let

$$T_{t}=1+a^{\tau}+b\,\frac{\sin(\tau)}{\tau},\qquad\text{where}\qquad\tau=\frac{t+cr}{r},\;\;t\in\mathbb{N}, \qquad (14)$$

with $a\in[0,1)$, $b\in\mathbb{R}$, and $c,r>0$, which corresponds to the oscillatory tempering profile with gradually decreasing amplitudes from [2]. To select a suitable combination of $a$, $b$, $c$, and $r$, we use a grid search over a range of possible values, choose the combination that yields the largest objective under an unweighted EM algorithm, and fix that choice throughout the entire sampler. As discussed in [26], the tempering hyper-parameters can remain fixed across different optimization problems with similar characteristics, as is the case for the randomly weighted EM.
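As an illustration, the tempering profile (14) and the tempered responsibilities can be coded as below; the hyper-parameter values in the example comment are arbitrary, not the ones used in our experiments.

    # Oscillatory tempering profile (14) and tempered E-step.
    temper_T <- function(t, a, b, c, r) {
      tau <- (t + c * r) / r
      1 + a^tau + b * sin(tau) / tau
    }
    temper_q <- function(q, Tt) {          # q: n x K responsibilities from (7)
      qt <- q^(1 / Tt)
      qt / rowSums(qt)
    }
    # e.g., plot(sapply(1:200, temper_T, a = 0.6, b = 0.5, c = 2, r = 20), type = "l")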

3 Introducing BOB

So far, we have developed and presented a randomly weighted EM algorithm to approximately sample from the posterior distribution of GMMs within a WBB framework, but we have not yet answered a key question: how to select the random weights. [33] showed that, under low-dimensional settings and assuming the square of Jeffreys prior, uniform Dirichlet weights for the likelihood yield approximate posterior draws that are first order correct, i.e., consistent (they concentrate in a small neighbourhood of the maximum likelihood estimator, MLE) and asymptotically normal. More recently, [34] established first order correctness and model selection consistency for a wide range of weight distributions in linear models with Lasso priors, and [36] showed that, for a number of weight distributions, the approximate posterior from the Bayesian Bootstrap spike-and-slab Lasso concentrates at the same rate as the actual Bayesian posterior. Note, however, that these results all rely on the assumption that $n\rightarrow\infty$. Considering the case of a fixed $n$ is important because, under a small to medium sample size, the prior, $p(\boldsymbol{\theta})$, has more influence on the posterior, and as the effect of $p(\boldsymbol{\theta})$ grows, the relationship between $p_{u}(\boldsymbol{\theta}|\mathbf{Y})$ and the random weights becomes less clear [37], making the choice of adequate weights a much more important and difficult task. Thus, we propose BOB as an alternative methodology for automatically selecting the random weights under arbitrary sample sizes.

Before illustrating our proposed method, though, let us present a brief summary of Variational Bayes (VB) methods, as we draw inspiration from them. VB aims to obtain an approximation, $g_{VB}(\boldsymbol{\theta}|\hat{\boldsymbol{\nu}})$, with variational parameters $\hat{\boldsymbol{\nu}}\in\boldsymbol{\Theta}$ such that $\hat{\boldsymbol{\nu}}=\operatorname*{arg\,min}_{\boldsymbol{\nu}\in\boldsymbol{\Theta}}\mathbb{E}_{g_{VB}(\boldsymbol{\theta}|\boldsymbol{\nu})}[\log g_{VB}(\boldsymbol{\theta}|\boldsymbol{\nu})-\log p(\boldsymbol{\theta}|\mathbf{Y})]$, i.e., VB aims to minimize the reverse KL divergence between the Bayesian posterior and a variational approximation [5].

With this in mind, we propose the following weighting scheme. Draw the likelihood weights as $u_{i}=w_{i}^{\alpha}/\sum_{i=1}^{n}w_{i}^{\alpha}$, with $w_{i}\sim\mathcal{E}(1)$ and $\alpha\sim\delta_{\alpha}(x_{\alpha})$, where $\delta_{\alpha}(x_{\alpha})$ denotes the Dirac measure so that $\mathbb{P}(\alpha=x_{\alpha})=1$, $x_{\alpha}\geq 1$, and $\mathcal{E}(1)$ denotes the exponential distribution with mean 1. If $x_{\alpha}=1$, the distribution of the likelihood weights is the uniform Dirichlet, while $x_{\alpha}>1$ yields an overdispersed distribution [33, 16]. For the weights associated with the prior, we set, for $k\in\{1,\dots,K\}$, $\tilde{u}_{\mu_{k}}\sim\delta_{\tilde{u}_{\mu_{k}}}(x_{\mu_{k}})$, $\tilde{u}_{\Sigma_{k}}\sim\delta_{\tilde{u}_{\Sigma_{k}}}(x_{\Sigma_{k}})$, and $\tilde{u}_{\pi}\sim\delta_{\tilde{u}_{\pi}}(x_{\pi})$, where $x_{\mu_{k}},x_{\Sigma_{k}},x_{\pi}\geq 0$. Hence, our problem reduces to finding appropriate values for $\boldsymbol{x}=(x_{\alpha},x_{\mu_{1}},\dots,x_{\mu_{K}},x_{\Sigma_{1}},\dots,x_{\Sigma_{K}},x_{\pi})^{\prime}\in\boldsymbol{\mathcal{X}}\subset\mathbb{R}^{2(K+1)}$, where $\boldsymbol{\mathcal{X}}$ is a compact search space. Inspired by VB, we propose to select the optimal $\boldsymbol{x}$ as

$$\boldsymbol{x}^{*}=\operatorname*{arg\,min}_{\boldsymbol{x}\in\boldsymbol{\mathcal{X}}}\left\{\mathcal{L}(\boldsymbol{x})\right\}, \qquad (15)$$

with

$$\begin{split}\mathcal{L}(\boldsymbol{x})&=\int_{\boldsymbol{\Theta}}g_{u}(\boldsymbol{\theta}|\boldsymbol{x},\mathbf{Y})\log\left(\frac{g_{u}(\boldsymbol{\theta}|\boldsymbol{x},\mathbf{Y})}{p(\boldsymbol{\theta}|\mathbf{Y})}\right)d\boldsymbol{\theta}\\&=\mathbb{E}_{g_{u}(\boldsymbol{\theta}|\boldsymbol{x},\mathbf{Y})}\left[\log g_{u}(\boldsymbol{\theta}|\boldsymbol{x},\mathbf{Y})-\log\left(\frac{p(\boldsymbol{\theta})\,p(\mathbf{Y}|\boldsymbol{\theta})}{p(\mathbf{Y})}\right)\right]\\&\propto\mathbb{E}_{g_{u}(\boldsymbol{\theta}|\boldsymbol{x},\mathbf{Y})}\left[\log g_{u}(\boldsymbol{\theta}|\boldsymbol{x},\mathbf{Y})\right]-\mathbb{E}_{g_{u}(\boldsymbol{\theta}|\boldsymbol{x},\mathbf{Y})}\left[\log p(\boldsymbol{\theta})+\log p(\mathbf{Y}|\boldsymbol{\theta})\right],\end{split}$$

where the last line is the negative evidence lower bound (ELBO) [5]. In other words, we propose to select $\boldsymbol{x}^{*}$ so as to minimize the reverse KL divergence between the Bayesian posterior (up to a proportionality constant) and the approximate posterior induced by random weighting, denoted by $g_{u}(\boldsymbol{\theta}|\boldsymbol{x},\mathbf{Y})$. As discussed in Section 2, given the data $\mathbf{Y}$, the only source of variation in $\boldsymbol{\theta}^{*}$ comes from the random weights, which, in our weighting scheme, depend on $\boldsymbol{x}$.
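Concretely, a candidate $\boldsymbol{x}$ is mapped to one realization of the random weights as in the sketch below; draw_weights is an illustrative name and is not part of the BOBgmms package.

    # Map a candidate x = (x_alpha, x_mu_1..K, x_Sigma_1..K, x_pi) to weights.
    draw_weights <- function(x, n, K) {
      x_alpha <- x[1]
      w <- rexp(n)                                   # w_i ~ Exponential(1)
      list(u       = w^x_alpha / sum(w^x_alpha),     # likelihood weights
           u_mu    = x[2:(K + 1)],                   # Dirac (fixed) prior weights
           u_Sigma = x[(K + 2):(2 * K + 1)],
           u_pi    = x[2 * K + 2])
    }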

Ideally, we would like to compute $\boldsymbol{x}^{*}$ as in (15), but we cannot optimize $\mathcal{L}(\boldsymbol{x})$ directly because (a) we do not know the form of the joint density $g_{u}(\boldsymbol{\theta}|\boldsymbol{x},\mathbf{Y})$, and (b) the expectation $\mathbb{E}_{g_{u}(\boldsymbol{\theta}|\boldsymbol{x},\mathbf{Y})}[\cdot]$ is analytically intractable. By Sklar's theorem [47], the approximate joint posterior density has the form

$$g_{u}(\boldsymbol{\theta}|\boldsymbol{x},\mathbf{Y})=c\bigl(G_{u,1}(\theta_{1}|\boldsymbol{x},\mathbf{Y}),\dots,G_{u,M}(\theta_{M}|\boldsymbol{x},\mathbf{Y})\bigr)\prod_{j=1}^{M}g_{u,j}(\theta_{j}|\boldsymbol{x},\mathbf{Y}),$$

where $c\bigl(G_{u,1}(\theta_{1}|\boldsymbol{x},\mathbf{Y}),\dots,G_{u,M}(\theta_{M}|\boldsymbol{x},\mathbf{Y})\bigr)$ is the copula density of $\boldsymbol{\theta}$ (i.e., $c$ represents the dependence structure in the approximate posterior), $g_{u,j}(\theta_{j}|\boldsymbol{x},\mathbf{Y})$ denotes the marginal density of the $j$-th entry of $\boldsymbol{\theta}$ with cumulative distribution function $G_{u,j}(\theta_{j}|\boldsymbol{x},\mathbf{Y})$, and $M=\dim(\boldsymbol{\theta})$. Given posterior draws $\{\boldsymbol{\theta}^{*}_{(s)}\}_{s\in\mathbb{N}}$, one could think about estimating such a copula density (see, e.g., [41] for a review of copula density estimation in the bivariate case); however, obtaining accurate copula density estimates in high-dimensional settings is remarkably difficult. Note that the number of unique parameters in our GMM is $dK+\frac{d(d+1)}{2}K+(K-1)=\mathcal{O}(d^{2})$, which grows quadratically with $d$. For instance, letting $d=15$ and $K=4$ would yield 543 unique parameters, leading to inaccurate and unstable copula density estimates. Thus, let us assume that the approximate posterior parameters are mutually independent. More precisely, let $c\bigl(G_{u,1}(\theta_{1}|\boldsymbol{x},\mathbf{Y}),\dots,G_{u,M}(\theta_{M}|\boldsymbol{x},\mathbf{Y})\bigr)\approx 1$, for all $\boldsymbol{\theta}\in\boldsymbol{\Theta}$, so that

$$\log g_{u}(\boldsymbol{\theta}|\boldsymbol{x},\mathbf{Y})\approx\sum_{j=1}^{M}\log g_{u,j}(\theta_{j}|\boldsymbol{x},\mathbf{Y}).$$

That being so, to overcome (a), we propose to obtain a batch of approximate posterior draws, denoted by $\{\boldsymbol{\theta}^{*}_{(s)}\}_{s=1}^{S_{b}}$, and compute univariate kernel density estimates (KDEs) of each marginal density $g_{u,j}$, denoted by $\hat{g}_{u,j}$, which can be evaluated efficiently as in [18, 19]. To overcome (b), we propose to use the sample mean as an unbiased estimator of the expected value. More formally, for any $\boldsymbol{x}\in\boldsymbol{\mathcal{X}}$, we obtain a batch of approximate posterior draws and estimate $\mathcal{L}(\boldsymbol{x})$ with

$$\hat{\mathcal{L}}(\boldsymbol{x})=\frac{1}{S_{b}}\sum_{s=1}^{S_{b}}\left\{\left(\sum_{j=1}^{M}\log\hat{g}_{u,j}(\theta^{*}_{j,(s)}|\boldsymbol{x},\mathbf{Y})\right)-\log p(\boldsymbol{\theta}^{*}_{(s)})-\log p(\mathbf{Y}|\boldsymbol{\theta}^{*}_{(s)})\right\}. \qquad (16)$$

Hence, we can think of $\hat{\mathcal{L}}(\boldsymbol{x})$ as a noisy and black-box approximation of $\mathcal{L}(\boldsymbol{x})$, in the sense that, given an input $\boldsymbol{x}\in\boldsymbol{\mathcal{X}}$, we can evaluate the objective, but we have access to a closed-form expression for neither the objective nor its derivatives. One could use grid-search methods to minimize $\hat{\mathcal{L}}(\boldsymbol{x})$; however, evaluating $\hat{\mathcal{L}}(\boldsymbol{x})$ is expensive because, for each $\boldsymbol{x}$, we need to sample a batch of posterior draws, compute univariate KDEs of the approximate marginals, and compare those to the Bayesian posterior. As a consequence, exhaustive grid-search approaches would be infeasible. To overcome this, we use Bayesian Optimization (BO) to minimize $\hat{\mathcal{L}}(\boldsymbol{x})$, as it is one of the most efficient approaches for optimizing a noisy black-box objective with few evaluations [22, 21].
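For illustration, a direct (if naive) R implementation of the noisy objective (16) could look as follows, using Gaussian KDEs from density() evaluated by interpolation, and user-supplied log_prior and log_lik functions of a flattened parameter vector; all names here are ours, not the BOBgmms API.

    # Illustrative evaluation of L_hat(x) from a batch of draws.
    # Theta: S_b x M matrix, one flattened approximate posterior draw per row.
    L_hat <- function(Theta, log_prior, log_lik) {
      log_g <- rowSums(sapply(1:ncol(Theta), function(j) {
        kde <- density(Theta[, j])                    # univariate Gaussian KDE
        log(approx(kde$x, kde$y, xout = Theta[, j], rule = 2)$y)
      }))
      mean(log_g - apply(Theta, 1, log_prior) - apply(Theta, 1, log_lik))
    }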

Comments on the independence assumption between approximate posterior parameters: To construct $\hat{\mathcal{L}}(\boldsymbol{x})$, we assume that the approximate posterior parameters are independent, as this is a widely implemented technique in the Bayesian literature. For instance, mean-field variational Bayes (MFVB) assumes that the variational distribution factorizes as $g_{VB}(\boldsymbol{\theta}|\boldsymbol{\nu})=\prod_{j=1}^{M}g_{VB,j}(\theta_{j}|\nu_{j})$. In such cases, the MFVB objective function is given by

$$\mathbb{E}_{g_{VB}(\boldsymbol{\theta}|\boldsymbol{\nu})}\left[\left(\sum_{j=1}^{M}\log g_{VB,j}(\theta_{j}|\nu_{j})\right)-\log p(\boldsymbol{\theta})-\log p(\mathbf{Y}|\boldsymbol{\theta})\right],$$

which closely resembles our noisy black-box objective in equation (16). In a GMM context, however, one could argue that the independence assumption between model parameters might be unreasonable, as the latent indicator variables $\mathbf{Z}$ depend on each other (i.e., if an observation $\mathbf{y}_{i}$ belongs to one cluster, it automatically cannot belong to any other cluster), as well as on the mean vectors and covariance matrices. In such cases, the MFVB family, $\prod_{j=1}^{M}g_{VB,j}(\theta_{j}|\nu_{j})$, might not contain the true posterior, as it is not able to capture such a complicated dependence structure, negatively affecting the entire analysis. Fortunately, this is not our case. Unlike MFVB, our goal is not to obtain a variational distribution that approximates the true posterior, but to obtain adequate random weights within a WBB framework. Thus, the negative effects are less severe if the implied joint approximation $\sum_{j=1}^{M}\log\hat{g}_{u,j}(\theta^{*}_{j}|\boldsymbol{x},\mathbf{Y})$ used in equation (16) does not contain the true posterior, as we do not use $\hat{\mathcal{L}}(\boldsymbol{x})$ to construct an entire variational distribution approximating $p(\boldsymbol{\theta}|\mathbf{Y})$. We obtain our approximate posterior draws through a randomly weighted EM algorithm, where the latent variables $\mathbf{Z}$ are integrated out in the E-step, incorporating those dependencies into the posterior draws. Moreover, recall from the M-step that the updates of $\boldsymbol{\mu}_{k}$, $\boldsymbol{\Sigma}_{k}$, and $\boldsymbol{\pi}$ do not depend on each other, giving room to assume that $\log\hat{g}_{u}(\boldsymbol{\theta}|\boldsymbol{x},\mathbf{Y})\approx\sum_{j=1}^{M}\log\hat{g}_{u,j}(\theta_{j}|\boldsymbol{x},\mathbf{Y})$.

3.1 Minimizing $\hat{\mathcal{L}}$ via Bayesian Optimization

We now illustrate our BO approach to minimize $\hat{\mathcal{L}}$. First, though, note that maximizing $\Upsilon(\boldsymbol{x})=-\hat{\mathcal{L}}(\boldsymbol{x})$ is equivalent to minimizing $\hat{\mathcal{L}}(\boldsymbol{x})$. Thus, we consider the problem

$$\boldsymbol{x}^{*}=\operatorname*{arg\,max}_{\boldsymbol{x}\in\boldsymbol{\mathcal{X}}}\left\{\Upsilon(\boldsymbol{x})\right\}.$$

Namely, given the black-box function $\Upsilon:\boldsymbol{\mathcal{X}}\rightarrow\mathbb{R}$, we would like to find its global maximum using repeated evaluations of $\Upsilon$. However, since each evaluation is expensive, our goal is to maximize $\Upsilon$ using as few evaluations as possible.

By combining the current evidence, i.e., the evaluations $\{(\boldsymbol{x}_{m},\Upsilon(\boldsymbol{x}_{m}))\}_{m=1}^{l}$, with some prior information over $\Upsilon$, i.e., $Q(\Upsilon)$, BO efficiently determines future evaluations. More precisely, BO uses the posterior $Q(\Upsilon\,|\,\{(\boldsymbol{x}_{m},\Upsilon(\boldsymbol{x}_{m}))\}_{m=1}^{l})$ to construct an acquisition function $\xi:\boldsymbol{\mathcal{X}}\rightarrow\mathbb{R}$ and chooses its maximizer, $\boldsymbol{x}_{l+1}=\operatorname*{arg\,max}_{\boldsymbol{x}\in\boldsymbol{\mathcal{X}}}\xi(\boldsymbol{x})$, as the next point for evaluation [23].

For our prior on $\Upsilon$, we assume a Gaussian Process ($\mathcal{GP}$). That is,

$$\Upsilon\sim\mathcal{GP}(\phi,\kappa),$$

where $\phi:\boldsymbol{\mathcal{X}}\rightarrow\mathbb{R}$ and $\kappa:\boldsymbol{\mathcal{X}}\times\boldsymbol{\mathcal{X}}\rightarrow\mathbb{R}$ are a mean and a kernel (covariance) function, respectively, so that $\Upsilon(\boldsymbol{x})\sim\mathcal{N}(\phi(\boldsymbol{x}),\kappa(\boldsymbol{x},\boldsymbol{x}))$, for all $\boldsymbol{x}\in\boldsymbol{\mathcal{X}}$. Given the evidence $\{(\boldsymbol{x}_{m},\hat{y}_{m})\}_{m=1}^{l}$, with $\boldsymbol{x}_{m}\in\boldsymbol{\mathcal{X}}$, $\hat{y}_{m}=\Upsilon(\boldsymbol{x}_{m})+\epsilon_{m}\in\mathbb{R}$, and $\epsilon_{m}\sim\mathcal{N}(0,\eta^{2})$, the resulting posterior $\Upsilon\,|\,\{(\boldsymbol{x}_{m},\hat{y}_{m})\}_{m=1}^{l}$ is also a $\mathcal{GP}$, with mean and covariance given by

$$\hat{\phi}(\boldsymbol{x})=\boldsymbol{k}^{\prime}(\mathbf{K}+\eta^{2}\mathbf{I}_{l})^{-1}\hat{\boldsymbol{y}}, \qquad (17)$$
$$\hat{\kappa}(\boldsymbol{x},\boldsymbol{\vartheta})=\kappa(\boldsymbol{x},\boldsymbol{\vartheta})-\boldsymbol{k}^{\prime}(\mathbf{K}+\eta^{2}\mathbf{I}_{l})^{-1}\boldsymbol{v}, \qquad (18)$$

where $\boldsymbol{x},\boldsymbol{\vartheta}\in\boldsymbol{\mathcal{X}}$, $\hat{\boldsymbol{y}}=(\hat{y}_{1},\dots,\hat{y}_{l})^{\prime}$, $k_{m}=\kappa(\boldsymbol{x},\boldsymbol{x}_{m})$, $v_{m}=\kappa(\boldsymbol{\vartheta},\boldsymbol{x}_{m})$, $\boldsymbol{k}=(k_{1},\dots,k_{l})^{\prime}$, $\boldsymbol{v}=(v_{1},\dots,v_{l})^{\prime}$, and $(\mathbf{K})_{m,m^{\prime}}=\kappa(\boldsymbol{x}_{m},\boldsymbol{x}_{m^{\prime}})$ [42].

As suggested by [48], we use a Matérn 5/2 kernel, given by

$$\kappa(\boldsymbol{x},\boldsymbol{\vartheta})=\zeta_{0}\left(1+\sqrt{5r^{2}(\boldsymbol{x},\boldsymbol{\vartheta})}+\frac{5}{3}r^{2}(\boldsymbol{x},\boldsymbol{\vartheta})\right)\exp\left\{-\sqrt{5r^{2}(\boldsymbol{x},\boldsymbol{\vartheta})}\right\},$$

where $r^{2}(\boldsymbol{x},\boldsymbol{\vartheta})=\sum_{r=1}^{2(K+1)}(x_{r}-\vartheta_{r})^{2}/\zeta_{r}^{2}$, $\zeta_{0}$ is a covariance amplitude, and $\zeta_{r}$, for $r\in\{1,\dots,2(K+1)\}$, are $\mathcal{GP}$ length scales. For our acquisition function, we use the Expected Improvement (EI) over the best current value [22], as it has been shown to be efficient in the number of evaluations required to find the global maximum of a wide range of black-box functions [6, 48]. More precisely, given the current best point, denoted by $\boldsymbol{x}^{+}$, we set

$$\xi(\boldsymbol{x})=\mathbb{E}\left[\max\left\{0,\,\Upsilon(\boldsymbol{x})-\Upsilon(\boldsymbol{x}^{+})\right\}\,\middle|\,\{(\boldsymbol{x}_{m},\hat{y}_{m})\}_{m=1}^{l}\right].$$
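Under the Gaussian posterior implied by (17)-(18), EI has the standard closed form, sketched below in R for reference; m and s denote the GP posterior mean and standard deviation at a candidate point, and best is the largest observed value so far.

    # Standard closed-form expected improvement for maximization.
    expected_improvement <- function(m, s, best) {
      s <- pmax(s, 1e-12)                 # guard against zero predictive sd
      z <- (m - best) / s
      (m - best) * pnorm(z) + s * dnorm(z)
    }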

That being said, note that the BO procedure is carried out just once. We then use the learned random weights throughout the entire sampling process, as described in Section 2. Additionally, obtaining the batch of approximate posterior draws, computing the univariate KDEs, and evaluating the Bayesian posterior can all be trivially implemented in parallel, reducing the cost of evaluating and minimizing $\hat{\mathcal{L}}$. Thus BOB, like WLB and WBB, embraces recent developments in parallel computing. BOB, however, lets us select the optimal $\boldsymbol{x}^{*}$, and hence adequate random weights. We summarize our proposed method in Algorithm 1.

Algorithm 1 Bayesian Optimized Bootstrap

Input:
      Data: $\mathbf{Y}$
      Total number of posterior draws: $S$
      Prior hyper-parameters: $\{a_{k},\,\lambda_{k},\,\nu_{k},\,\boldsymbol{\beta}_{k},\,\boldsymbol{\Psi}_{k}\}_{k=1}^{K}$
      Batch size: $S_{b}$
      Compact search space: $\boldsymbol{\mathcal{X}}$
      Tempering hyper-parameters: $\{a,\,b,\,c,\,r\}$
Output:
      Posterior draws: $\{\boldsymbol{\theta}^{*}_{\text{BOB},(s)}\}_{s=1}^{S}$

1: Compute $\boldsymbol{x}^{*}\leftarrow\operatorname*{arg\,min}_{\boldsymbol{x}\in\boldsymbol{\mathcal{X}}}\hat{\mathcal{L}}(\boldsymbol{x})$ via Bayesian Optimization
2: Set $(x_{\alpha}^{*},x_{\mu_{1}}^{*},\dots,x_{\mu_{K}}^{*},x_{\Sigma_{1}}^{*},\dots,x_{\Sigma_{K}}^{*},x_{\pi}^{*})^{\prime}\leftarrow\boldsymbol{x}^{*}$
3: for $s\in\{1,\dots,S\}$ do  ▷ in parallel
4:     Set $\tilde{u}_{\pi}\leftarrow x_{\pi}^{*}$
5:     Set $\tilde{u}_{\mu_{k}}\leftarrow x_{\mu_{k}}^{*}$, $\forall k\in[K]$
6:     Set $\tilde{u}_{\Sigma_{k}}\leftarrow x_{\Sigma_{k}}^{*}$, $\forall k\in[K]$
7:     Construct $u_{i}\leftarrow w_{i}^{x_{\alpha}^{*}}/\sum_{i=1}^{n}w_{i}^{x_{\alpha}^{*}}$, where $w_{i}\sim\mathcal{E}(1)$, $\forall i\in[n]$
8:     Compute $\boldsymbol{\theta}^{*}_{\text{BOB},(s)}\leftarrow\operatorname*{arg\,max}_{\boldsymbol{\theta}\in\boldsymbol{\Theta}}\log p_{u}(\boldsymbol{\theta}|\mathbf{Y})$ via tempered EM (treating $\mathbf{Z}$ as missing data)
9: end for

To construct our Bayesian optimizer, we make use of the well-established “DiceOptim” R package [44, 27]. Our implementations of BOB and WBB are available in the “BOBgmms” R package, which can be found in the supplementary materials or online at github.com/marinsantiago/BOBgmms. Source code to reproduce the results in this article can also be found in the supplementary materials or online at github.com/marinsantiago/BOBgmms-examples.
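Because the $S$ draws in Algorithm 1 are independent given $\boldsymbol{x}^{*}$, they can be generated in parallel; a minimal sketch using base R's parallel package is shown below, where bob_draw() is a hypothetical function standing in for steps 4-8 of Algorithm 1 (it is not the BOBgmms API).

    # Hypothetical parallel generation of the S BOB draws (steps 3-9).
    library(parallel)
    draws <- mclapply(seq_len(S),
                      function(s) bob_draw(Y, x_star, prior),  # steps 4-8
                      mc.cores = detectCores())
    # mclapply relies on forking; on Windows, parLapply() can be used instead.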

3.2 Asymptotic Properties

Throughout this subsection, we show that BOB retains key asymptotic properties of WLB and WBB. To that end, let $\hat{\boldsymbol{\theta}}_{\text{MLE}}\in\boldsymbol{\Theta}$ be the MLE of $\boldsymbol{\theta}$ and let $\boldsymbol{\theta}^{*}_{\text{BOB}}\in\boldsymbol{\Theta}$ be a draw from BOB. It is assumed that the ordering of $\hat{\boldsymbol{\theta}}_{\text{MLE}}$ and $\boldsymbol{\theta}^{*}_{\text{BOB}}$ is the same. Let also, for $k\in\{1,\dots,K\}$, $a_{k}=a_{k}(n)$, $\lambda_{k}=\lambda_{k}(n)$, $\boldsymbol{\Psi}_{k}=\boldsymbol{\Psi}_{k}(n)$, and $\nu_{k}=\nu_{k}(n)$, such that, as $n\rightarrow\infty$, $a_{k}(n)\rightarrow 1$, $\lambda_{k}(n)\rightarrow 0$, $\boldsymbol{\Psi}_{k}(n)\rightarrow\mathbf{0}_{d\times d}$, and $\nu_{k}(n)\rightarrow-(d+2)$. Then, note that BOB can be seen as a generalization of WLB and WBB. In fact, setting $\boldsymbol{x}^{*}=(1,0,0,\dots,0)^{\prime}\in\boldsymbol{\mathcal{X}}$ or $\boldsymbol{x}^{*}=(1,1,1,\dots,1)^{\prime}\in\boldsymbol{\mathcal{X}}$ recovers WLB or WBB (with fixed prior weights [34]), respectively. Thus, if $\{\mathbf{y}_{i}\}_{i=1}^{n}$ are random samples from a sufficiently regular model $p(\mathbf{y}|\boldsymbol{\theta}_{0})$, as described in Section 2.1, and if the BOB solution converges to the WLB or WBB solution as $n\rightarrow\infty$, then $\boldsymbol{\theta}^{*}_{\text{BOB}}$ retains key asymptotic properties such as consistency and asymptotic normality. More precisely, following Theorems 1 and 2 from [33], along with the results from [32], we have that if $\boldsymbol{x}^{*}\rightarrow(1,0,0,\dots,0)^{\prime}\in\boldsymbol{\mathcal{X}}$ or $\boldsymbol{x}^{*}\rightarrow(1,1,1,\dots,1)^{\prime}\in\boldsymbol{\mathcal{X}}$, as $n\rightarrow\infty$, then:

  1. For any scalar $\varepsilon>0$, as $n\rightarrow\infty$,

     $$\mathbb{P}_{u}\bigl(\lVert\boldsymbol{\theta}^{*}_{\text{BOB}}-\hat{\boldsymbol{\theta}}_{\text{MLE}}\rVert>\varepsilon\,\big|\,\{\mathbf{y}_{i}\}_{i=1}^{n}\bigr)\rightarrow 0,$$

     for almost every infinite sequence of data $\{\mathbf{y}_{i}\}_{i\in\mathbb{N}}$.

  2. For all measurable $\mathbf{B}\in\mathcal{B}(\boldsymbol{\Theta})$, as $n\rightarrow\infty$,

     $$\mathbb{P}_{u}\bigl(\sqrt{n}(\boldsymbol{\theta}^{*}_{\text{BOB}}-\hat{\boldsymbol{\theta}}_{\text{MLE}})\in\mathbf{B}\,\big|\,\{\mathbf{y}_{i}\}_{i=1}^{n}\bigr)\rightarrow\mathbb{P}(\mathbf{T}\in\mathbf{B}),$$

     for almost every infinite sequence of data $\{\mathbf{y}_{i}\}_{i\in\mathbb{N}}$. In this case, $\mathbf{T}\sim\mathcal{N}(\mathbf{0},\boldsymbol{\mathcal{I}}(\boldsymbol{\theta}_{0})^{-1})$, $\boldsymbol{\mathcal{I}}(\boldsymbol{\theta})$ is the Fisher information, and $\mathcal{B}(\boldsymbol{\Theta})$ denotes the Borel field on $\boldsymbol{\Theta}$.

Remark 1.

The probabilities $\mathbb{P}_{u}$ above refer to the distribution of $\boldsymbol{\theta}^{*}_{\text{BOB}}$ induced by the random weights given the sequence of data $\{\mathbf{y}_{i}\}_{i=1}^{n}$.

Remark 2.

The Bernstein–von Mises theorem states that, under regularity conditions, the actual Bayesian posterior converges to a normal distribution. More formally, as $n\rightarrow\infty$, $\{\boldsymbol{\theta}|\mathbf{Y}\}\rightarrow\mathcal{N}(\boldsymbol{\theta}_{0},[n\,\boldsymbol{\mathcal{I}}(\boldsymbol{\theta}_{0})]^{-1})$. Comparing this result with the results above, it follows that posterior credible sets can be approximated with BOB as $\mathbb{P}(\boldsymbol{\theta}^{*}_{\text{BOB}}\in\mathbf{B}\,|\,\mathbf{Y})\approx\int_{\mathbf{B}}p(\boldsymbol{\theta}|\mathbf{Y})\,d\boldsymbol{\theta}$, for all $\mathbf{B}\in\mathcal{B}(\boldsymbol{\Theta})$, and the approximation improves with growing $n$.

On the whole, BOB provides an automatic and much more informed approach to selecting the random weights, while retaining the asymptotic first order correctness of existing Bootstrap-based posterior samplers.

4 Simulations

We now evaluate the performance of different approximate posterior samplers through various simulation experiments.

4.1 Simulations Setup

To generate the simulated data, we start by sampling $\mathbf{z}_{i}$, for $i\in\{1,\dots,n\}$, from $p(\mathbf{z}_{i}|K)\propto\prod_{k=1}^{K}(\tfrac{1}{K})^{z_{ik}}$, where $K\in\{2,3,4\}$. For the dimension $d$, we consider low ($d=5$), medium ($d=10$), and high ($d=15$) dimensional problems. To generate each $\boldsymbol{\mu}_{k}$, we set, for $k\in\{1,\dots,K\}$,

$$\boldsymbol{\mu}_{k}=\left((5k-4)\,\mathbf{1}_{\lceil 0.6d\rceil}^{\prime},\;\mathbf{0}_{(d-\lceil 0.6d\rceil)}^{\prime}\right)^{\prime}\in\mathbb{R}^{d},$$

where only 60% of the entries of $\boldsymbol{\mu}_{k}$ are important for separating the clusters, while the remaining entries are set to zero. Lastly, we set each $\boldsymbol{\Sigma}_{k}$ to be the identity matrix of size $d\times d$, i.e., $\boldsymbol{\Sigma}_{k}=\mathbf{I}_{d}$. In total, we consider nine different simulation settings, which are summarized in Table 1. In all cases, we standardize the data so that each feature has mean 0 and variance 1.
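For reference, one simulated dataset under this design can be generated in R as follows; simulate_gmm is an illustrative helper, with setting 5 ($n=100$, $d=10$, $K=3$) used in the example call.

    # Generate one dataset: equal mixing weights, Sigma_k = I_d, and means as above.
    simulate_gmm <- function(n, d, K) {
      z  <- sample.int(K, n, replace = TRUE)             # cluster labels
      m  <- ceiling(0.6 * d)
      mu <- t(sapply(1:K, function(k) c(rep(5 * k - 4, m), rep(0, d - m))))
      Y  <- mu[z, ] + matrix(rnorm(n * d), n, d)
      scale(Y)                                            # standardize each feature
    }
    Y <- simulate_gmm(n = 100, d = 10, K = 3)             # e.g., setting 5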

Table 1: Simulation Settings

                        Setting
Simulation parameter    1    2    3    4    5    6    7    8    9
$n$                     50   50   50   100  100  100  150  150  150
$d$                     5    10   15   5    10   15   5    10   15
$K$                     2    2    2    3    3    3    4    4    4

The prior hyper-parameters are set as $\boldsymbol{\beta}_{k}=\mathbf{0}_{d}$, $\boldsymbol{\Psi}_{k}=\mathbf{I}_{d}$, $a_{k}=1.1$, $\lambda_{k}=\lambda$, and $\nu_{k}=\nu$, for $k\in\{1,\dots,K\}$. To select the regularization parameters, $\lambda$ and $\nu$, we use a likelihood-based cross-validation approach, as in [14]. The idea is to consider a grid of values for $\lambda$ and $\nu$, run an unweighted EM algorithm for each pair of values on a training subset of the data, and choose the pair that maximizes the log-likelihood over the validation set. We then use the same values of $\lambda$ and $\nu$ across all the posterior samplers. That being said, the goal of these experiments is not to choose the “best” regularization parameters. Instead, as pointed out by [37], the goal is to compare different posterior approximations for given values of $\lambda$ and $\nu$.

For our approximate posterior samplers, we consider BOB as well as two versions of WBB, namely WBB1 (with random prior weights) and WBB2 (with fixed prior weights). To implement WBB1 we set $\mathbf{u}\overset{iid}{\sim}\mathcal{E}(1)$ and $\tilde{\mathbf{u}}\overset{iid}{\sim}\mathcal{E}(1)$, and to implement WBB2 we set $\mathbf{u}\overset{iid}{\sim}\mathcal{E}(1)$ and $\tilde{\mathbf{u}}\overset{iid}{\sim}\delta_{\tilde{u}}(1)$. We also consider a mean-field Automatic Differentiation Variational Inference (ADVI) algorithm [25], and for our MCMC algorithm we use the No-U-Turn Sampler (NUTS) [17]. Both NUTS and ADVI are executed in Stan [7], as it is a highly optimized, off-the-shelf probabilistic programming language widely used by practitioners.

We set the lower and upper bounds of the BO search space, $\boldsymbol{\mathcal{X}}$, to $\boldsymbol{\mathcal{X}}_{\text{lower}}=(1,10^{-5},\dots,10^{-5})^{\prime}$ and $\boldsymbol{\mathcal{X}}_{\text{upper}}=(1.5,\dots,1.5)^{\prime}$, respectively, except in settings 4 and 7, where we set $\boldsymbol{\mathcal{X}}_{\text{upper}}=(1.25,\dots,1.25)^{\prime}$ as more data are available. Note that, in any case, the search space contains WBB and WLB as potential solutions. To set the initial parameter values, we consider a pool of candidate values and choose the initialization that yields the largest posterior density. Further details on our initialization strategy can be found in the supplementary materials. It is worth noting that we initialize all the algorithms (i.e., BOB, WBB, NUTS, and ADVI) at the same point. Thus, any difference in performance is due to the sampling algorithm and not to the initialization, as all methods share the same initial values.

We obtain $S=20000$ approximate posterior draws using BOB, WBB, and ADVI, while with NUTS we run the algorithm for $2S=40000$ iterations and discard the first half as burn-in. In the case of BOB, we use batches of size $S_{b}=4000$ to construct $\hat{\mathcal{L}}(\boldsymbol{x})$. We run all our simulations and data analyses on an Apple Silicon-based machine with 48 GB of memory and 16 CPU cores.

4.2 Comparison Metrics

To assess the accuracy of posterior approximations, the Kolmogorov–Smirnov (KS) and Total Variation (TV) distances between the Bayesian posterior and its approximation have traditionally been used. To ease computations, this assessment is usually based on comparing the Bayesian marginal posteriors with the approximate marginal posteriors [51, 34]. However, in the context of mixture models, these comparisons are no longer viable because of the so-called label switching problem [11]. One could use relabeling algorithms, as in [50], but the problem would remain, as the resulting ordering of the approximate marginals might differ from the ordering of the Bayesian marginals even after relabeling, making comparisons between the marginals virtually meaningless.

We, therefore, base our comparisons on the posterior predictive distribution, as the posterior predictive is invariant to any permutation of $\boldsymbol{\theta}$ (i.e., it circumvents the label switching problem). More formally, let $p(\boldsymbol{\theta}|\mathbf{Y})$ be the Bayesian posterior and let $g(\boldsymbol{\theta}|\mathbf{Y})$ be its approximation. Then, the Bayesian posterior predictive distribution would be given by $p(\mathbf{y}_{\text{new}}|\mathbf{Y})=\int_{\boldsymbol{\Theta}}p(\mathbf{y}_{\text{new}}|\boldsymbol{\theta})\,p(\boldsymbol{\theta}|\mathbf{Y})\,d\boldsymbol{\theta}$, while the approximate posterior predictive would be $g(\mathbf{y}_{\text{new}}|\mathbf{Y})=\int_{\boldsymbol{\Theta}}p(\mathbf{y}_{\text{new}}|\boldsymbol{\theta})\,g(\boldsymbol{\theta}|\mathbf{Y})\,d\boldsymbol{\theta}$. Note, additionally, that the posterior predictive distribution reflects two kinds of uncertainty: (a) sampling uncertainty about $\mathbf{y}$ conditional on $\boldsymbol{\theta}$, and (b) parametric uncertainty about $\boldsymbol{\theta}$ [29]. Following [15], under a correctly specified model and a proper prior, the Bayes risk

$$\int_{\boldsymbol{\Theta}}p(\boldsymbol{\theta})\left[\int_{\boldsymbol{\mathcal{Y}}}p(\mathbf{Y}|\boldsymbol{\theta})\,\text{D}_{\text{KL}}\bigl(p_{\text{true}}(\mathbf{y}_{\text{new}})\,\big\|\,g(\mathbf{y}_{\text{new}}|\mathbf{Y})\bigr)\,d\mathbf{Y}\right]d\boldsymbol{\theta}$$

is minimized when $g(\mathbf{y}_{\text{new}}|\mathbf{Y})=p(\mathbf{y}_{\text{new}}|\mathbf{Y})$, where $\text{D}_{\text{KL}}\bigl(p_{\text{true}}(\mathbf{y}_{\text{new}})\,\|\,g(\mathbf{y}_{\text{new}}|\mathbf{Y})\bigr)$ denotes the KL divergence between the true data generating mechanism $p_{\text{true}}(\mathbf{y}_{\text{new}})$ and the approximate posterior predictive distribution $g(\mathbf{y}_{\text{new}}|\mathbf{Y})$, making the Bayesian posterior predictive distribution a natural benchmark when one is interested in quantifying uncertainty.

Thus, we consider the TV and KS distances between $p(\mathbf{y}_{\text{new}}|\mathbf{Y})$ and $g(\mathbf{y}_{\text{new}}|\mathbf{Y})$ as comparison metrics, where smaller distances suggest a better approximation to the Bayesian posterior and better uncertainty quantification. To ease computations, we approximate the TV and KS distances between $p(\mathbf{y}_{\text{new}}|\mathbf{Y})$ and $g(\mathbf{y}_{\text{new}}|\mathbf{Y})$ as

$$\hat{\text{TV}}=\frac{1}{d}\sum_{j=1}^{d}\frac{1}{2}\sum_{y\in\mathcal{Y}}\left|\hat{P}^{\text{Bayes}}_{y_{\text{new},j}}(y)-\hat{G}^{(\cdot)}_{y_{\text{new},j}}(y)\right|, \qquad (19)$$
$$\hat{\text{KS}}=\frac{1}{d}\sum_{j=1}^{d}\sup_{y\in\mathcal{Y}}\left|\hat{P}^{\text{Bayes}}_{y_{\text{new},j}}(y)-\hat{G}^{(\cdot)}_{y_{\text{new},j}}(y)\right|, \qquad (20)$$

where $\hat{P}^{\text{Bayes}}_{y_{\text{new},j}}$ denotes the empirical CDF of the actual Bayesian posterior predictive distribution of $y_{\text{new},j}$ (i.e., the $j$-th element of $\mathbf{y}_{\text{new}}$) and $\hat{G}^{(\cdot)}_{y_{\text{new},j}}$ denotes the empirical CDF of the posterior predictive of $y_{\text{new},j}$ obtained by one of the approximations (i.e., BOB, WBB, NUTS, or ADVI). Details on how to sample from the posterior predictive distributions and from the actual Bayesian posterior (when the true latent variables, $\mathbf{Z}$, are known) can be found in the supplementary materials.
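As an illustration, given matrices of posterior predictive draws from the Bayesian posterior and from one approximation (one column per coordinate of $\mathbf{y}_{\text{new}}$), the two metrics can be approximated roughly as below; the histogram-based TV discretization is one possible choice, not necessarily the one in our supplement, and the function name is ours.

    # Approximate TV and KS distances (19)-(20), averaged over coordinates.
    tv_ks <- function(Y_bayes, Y_approx, n_bins = 100) {
      per_coord <- sapply(1:ncol(Y_bayes), function(j) {
        a <- Y_bayes[, j]; b <- Y_approx[, j]
        breaks <- seq(min(a, b), max(a, b), length.out = n_bins + 1)
        pa <- hist(a, breaks = breaks, plot = FALSE)$counts / length(a)
        pb <- hist(b, breaks = breaks, plot = FALSE)$counts / length(b)
        grid <- sort(c(a, b))
        c(TV = 0.5 * sum(abs(pa - pb)),
          KS = max(abs(ecdf(a)(grid) - ecdf(b)(grid))))
      })
      rowMeans(per_coord)   # average over the d coordinates, as in (19)-(20)
    }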

4.3 Simulation Results

Figure 1 and Table 2 present the simulation results. Figure 1 displays boxplots of the \hat{\text{TV}} and \hat{\text{KS}} distances between the Bayesian posterior predictive distribution and its approximations obtained via BOB, WBB1, and WBB2, based on ten independent runs per setting. Table 2 shows the median (and interquartile range) of the \hat{\text{TV}} and \hat{\text{KS}} distances, as well as of the elapsed (wall-clock) times, for all considered methods, based on the same ten independent runs. Figure 1 clearly shows that, across the nine settings, BOB consistently outperforms both versions of WBB in recovering the Bayesian posterior predictive distribution. Interestingly, the two versions of WBB perform very similarly, with no major differences between them.

Figure 1: Boxplots of the TV^\hat{\text{TV}} distances (in red) and KS^\hat{\text{KS}} distances (in blue) between the Bayesian posterior predictive distribution and the posterior predictives obtained via BOB, WBB1, and WBB2, based on ten independent runs per setting.
Table 2: Median TV^\hat{\text{TV}} and KS^\hat{\text{KS}} distances between the Bayesian posterior predictive distribution and its approximations obtained via BOB, WBB1, WBB2, NUTS, and ADVI, as well as median elapsed (wall-clock) times in minutes, based on ten independent runs per setting.
Setting
Metric Method 1 2 3 4 5 6 7 8 9
TV^\hat{\text{TV}} BOB 0.033 0.049 0.061 0.026 0.036 0.045 0.023 0.032 0.039
(0.003) (0.001) (0.007) (0.002) (0.000) (0.001) (0.001) (0.001) (0.001)
WBB1 0.047 0.068 0.084 0.034 0.050 0.062 0.030 0.043 0.054
(0.001) (0.001) (0.002) (0.002) (0.000) (0.002) (0.001) (0.001) (0.001)
WBB2 0.047 0.070 0.087 0.033 0.050 0.063 0.028 0.043 0.055
(0.002) (0.001) (0.002) (0.002) (0.000) (0.002) (0.000) (0.001) (0.001)
NUTS 0.006 0.006 0.056 0.039 0.070 0.057 0.072 0.058 –
(0.004) (0.049) (0.027) (0.061) (0.012) (0.027) (0.010) (0.018) –
ADVI 0.083 0.107 0.284 0.074 0.094 0.159 0.071 0.095 0.113
(0.018) (0.040) (0.077) (0.016) (0.058) (0.032) (0.015) (0.042) (0.076)
KS^\hat{\text{KS}} BOB 0.028 0.040 0.051 0.023 0.031 0.037 0.021 0.027 0.033
(0.002) (0.001) (0.005) (0.004) (0.001) (0.001) (0.001) (0.002) (0.004)
WBB1 0.039 0.055 0.067 0.028 0.041 0.051 0.026 0.035 0.043
(0.003) (0.001) (0.001) (0.002) (0.001) (0.001) (0.001) (0.003) (0.001)
WBB2 0.036 0.055 0.070 0.027 0.039 0.050 0.023 0.033 0.044
(0.003) (0.001) (0.001) (0.002) (0.002) (0.001) (0.001) (0.001) (0.001)
NUTS 0.008 0.009 0.058 0.048 0.063 0.065 0.049 0.047 –
(0.004) (0.050) (0.031) (0.043) (0.008) (0.017) (0.002) (0.008) –
ADVI 0.078 0.081 0.151 0.062 0.075 0.102 0.052 0.066 0.081
(0.013) (0.010) (0.034) (0.012) (0.015) (0.019) (0.002) (0.019) (0.040)
Elapsed BOB 1.891 2.271 2.731 2.525 2.988 3.553 3.281 3.829 4.692
(0.033) (0.086) (0.040) (0.140) (0.035) (0.050) (0.075) (0.053) (0.197)
WBB1 0.002 0.003 0.004 0.003 0.005 0.008 0.004 0.007 0.011
(0.000) (0.001) (0.001) (0.000) (0.002) (0.001) (0.001) (0.001) (0.001)
WBB2 0.002 0.003 0.004 0.003 0.004 0.008 0.004 0.007 0.011
(0.000) (0.001) (0.001) (0.000) (0.000) (0.000) (0.000) (0.001) (0.001)
NUTS 0.681 4.400 22.027 8.591 36.547 80.995 22.318 85.547 –
(0.073) (6.519) (8.028) (5.367) (3.141) (2.646) (5.641) (21.072) –
ADVI 0.033 0.081 0.157 0.078 0.180 0.369 0.131 0.323 0.686
(0.004) (0.008) (0.010) (0.004) (0.031) (0.020) (0.008) (0.068) (0.052)
\botrule
Note: Interquartile ranges are provided in parentheses. A dash (–) indicates that NUTS did not complete within the 8-hour computing budget in setting 9.

In Table 2, we can observe that both versions of WBB have the lowest running times. We can also observe that in simpler settings, the running times and accuracy of NUTS are comparable to those of BOB. However, as the simulation settings become more complex, the median running times of BOB become substantially lower than those of NUTS. For instance, in setting 5, BOB has a median running time of 2.99 minutes, while NUTS takes 36.55 minutes. The gap is even clearer in settings 6 and 8, where BOB's median running times are 3.55 and 3.83 minutes, respectively, while NUTS takes median running times of 81.00 and 85.55 minutes. In other words, BOB can be 22.8 times faster than NUTS. Moreover, in setting 9, NUTS was not able to run within a computing budget of 8 hours (480 minutes), while BOB took a median running time of 4.69 minutes, illustrating the poor scalability of MCMC algorithms to large problems.

More interestingly, in the more complex settings BOB is not only faster than NUTS, but also more accurate. For instance, in setting 7, the median \hat{\text{TV}} distance obtained via NUTS is roughly three times larger than that of BOB, indicating that BOB is, indeed, a reliable posterior sampler. It is also clear that, across all settings, ADVI produces the least accurate approximations of the Bayesian posterior predictive distribution. On the whole, even though WBB has the lowest running times, BOB consistently produces reliable approximations of the Bayesian posterior predictive distribution in a timely fashion, nicely balancing the trade-off between computational cost and accuracy.

To provide a clearer picture of what is happening, consider an illustrative example with n=50, d=10, and K=2. Figure 2 presents KDEs of the true Bayesian posterior predictive distribution, as well as of the different approximations (details on how one can sample from the Bayesian posterior predictive distribution can be found in the supplementary materials). The Bayesian posterior predictive density (i.e., the red contour plots) clearly indicates the existence of two clusters. WBB1 and WBB2 correctly capture the locations of these clusters, but they do not correctly capture the dispersion of the Bayesian posterior. In fact, both versions of WBB produce overconfident posterior predictive distributions, which is not optimal for uncertainty quantification. Moreover, NUTS and ADVI fail to identify the two clusters in the data; instead, they fit the data as one large cluster, which is not desirable either. These results are not surprising, as the reparameterization used by ADVI forces the posterior to be unimodal [31]. Similar results for NUTS and ADVI have also been reported in [12], where the authors show that these samplers produce unimodal posteriors. Overall, it is clear that BOB produces the closest approximation to the Bayesian posterior predictive distribution and the best uncertainty quantification. Additional posterior predictive density plots, showing similar results, are provided in the supplementary materials.

Table 3 presents the \hat{\text{TV}} and \hat{\text{KS}} distances between the true Bayesian posterior predictive distribution and its approximations, as well as the elapsed (wall-clock) times, for our illustrative example with n=50, d=10, and K=2. We can observe that BOB not only yields the smallest distances, but is also 2.4 times faster than NUTS.

Figure 2: True Bayesian posterior predictive density (red contours) and its approximations (blue KDEs) obtained via BOB, WBB1, WBB2, NUTS, and ADVI, when K=2K=2, d=10d=10, and n=50n=50.
Table 3: TV^\hat{\text{TV}} and KS^\hat{\text{KS}} distances between the true Bayesian posterior predictive distribution and its approximations obtained via BOB, WBB1, WBB2, NUTS, and ADVI, as well as elapsed (wall-clock) times in minutes for each method, when K=2K=2, d=10d=10, and n=50n=50.
Method
Metric BOB WBB1 WBB2 NUTS ADVI
TV^\hat{\text{TV}} 0.047 0.070 0.071 0.054 0.099
KS^\hat{\text{KS}} 0.040 0.055 0.055 0.055 0.059
Elapsed (min) 2.136 0.005 0.005 5.144 0.083
\botrule

Lastly, we investigate how the sample size affects both BOB and WBB. As discussed in section 3.2, both BOB and WBB are first-order correct for the Bayesian posterior; in other words, their approximations should improve as the sample size grows. We therefore consider an additional experiment where we fix d=15 and K=2 and consider sample sizes ranging from n=50 to n=500. Figure 3 displays the \hat{\text{TV}} and \hat{\text{KS}} distances between the Bayesian posterior predictive distribution and its approximations obtained via BOB, WBB1, and WBB2, as a function of n. As expected, the \hat{\text{TV}} and \hat{\text{KS}} distances decrease as n increases. However, BOB consistently outperforms WBB for all values of n. Asymptotically, WBB and BOB appear to converge to the same limiting distribution, which is desirable since WBB possesses appealing asymptotic properties. Altogether, BOB is a reliable method for approximate posterior sampling, which can be applied to a wide range of problems with different sample sizes.

Refer to caption
Figure 3: TV^\hat{\text{TV}} and KS^\hat{\text{KS}} distances between the Bayesian posterior predictive and its approximations obtained via BOB (golden continuous line), WBB1 (blue dashed line), and WBB2 (red dotted line), as a function of nn, with d=15d=15 and K=2K=2. The curves were produced by computing the median across ten independent runs. Error bars indicate the 25th and 75th percentiles, respectively.

5 Analysis of Real-world Data

To further demonstrate the performance and practical utility of our proposed methods, we apply them to the widely analyzed Wine [1] and Seeds [9] datasets. Both datasets are publicly available from the UC Irvine Machine Learning Repository.

5.1 Wine Data

The data consists of d=13 chemical properties for 178 Italian wines belonging to K=3 different types, namely Barbera, Barolo, and Grignolino. From the Barbera type we have 48 specimens, from Barolo we have 59 specimens, and from Grignolino we have 71 specimens. A detailed description of all the chemical properties of each wine is presented in the supplementary materials. The idea is to cluster types of wine based on these chemical features. To assess the validity of our model specification and model fitting, we randomly split the data into training and held-out sets, with n=100 observations in the training set. In all our analyses, we standardize the data so that each feature has mean 0 and variance 1. To set the prior hyperparameters, we follow the same approach as in section 4.1. We obtain S=20000 approximate posterior draws using BOB, WBB1, WBB2, and ADVI. In the case of NUTS, we run the algorithm for 40000 iterations and discard the first half as burn-in. For BOB, we use batches of size S_{b}=4000 to construct \hat{\mathcal{L}}(\boldsymbol{x}). As before, we set the lower and upper bounds of the BO search space to \boldsymbol{\mathcal{X}}_{\text{lower}}=(1,10^{-5},\dots,10^{-5})^{\prime} and \boldsymbol{\mathcal{X}}_{\text{upper}}=(1.5,\dots,1.5)^{\prime}, respectively.
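As a rough illustration of this setup, the snippet below standardizes the data, forms the training/held-out split, and builds the 2(K+1)-dimensional box constraints for the BO search space. It is a sketch under stated assumptions: the array wine is a simulated stand-in for the 178 x 13 data matrix, and the seed and variable names are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
wine = rng.normal(size=(178, 13))          # stand-in for the 178 x 13 wine matrix

# Standardize each feature to mean 0 and variance 1
wine = (wine - wine.mean(axis=0)) / wine.std(axis=0)

# Random training / held-out split with n = 100 training observations
idx = rng.permutation(wine.shape[0])
train, held_out = wine[idx[:100]], wine[idx[100:]]

# Box constraints for the 2(K + 1)-dimensional BO search space
K = 3
x_lower = np.concatenate(([1.0], np.full(2 * (K + 1) - 1, 1e-5)))   # (1, 1e-5, ..., 1e-5)'
x_upper = np.full(2 * (K + 1), 1.5)                                  # (1.5, ..., 1.5)'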

5.2 Seeds Data

Our second dataset comprises d=7 measured geometrical properties for 210 kernels belonging to K=3 different varieties of wheat, namely Kama, Rosa, and Canadian. There are 70 kernels of each variety, and the idea is to cluster varieties of wheat based on these measurements. Further details on the measured features are presented in the supplementary materials. Again, we randomly split the data into training and held-out sets, with n=110 observations in the training set. All other configurations are set as in subsection 5.1.

5.3 Results

Figures 4 and 5 present posterior predictive density plots for selected variables from the wine and seeds datasets, respectively. As before, the red contours represent the true Bayesian posterior predictive density, while the blue KDEs represent the different approximations. These contours and KDEs were obtained using only the training data. On the other hand, the scatter plots represent the held-out data, which can help us evaluate how well the different methods generalize to unseen data and how well the different approximations recover the true underlying data generating mechanism. Different symbols and colors in the scatter plots represent the actual labels of the held-out data. For instance, in Figure 4, orange squares, pink triangles and dark circles represent Barolo, Grignolino, and Barbera wines, respectively. In Figure 5, orange squares, pink triangles and dark circles represent Kama, Rosa, and Canadian wheat kernels, respectively.

In Figures 4 and 5, the Bayesian posterior predictive density clearly indicates the existence of three clusters, which align with the held-out data, suggesting that the model assumptions are reasonable. Both versions of WBB correctly capture the location of these clusters; however, they produce overconfident posterior predictive distributions and do not capture the dispersion of the Bayesian posterior. NUTS and ADVI are unable to identify the three clusters in the data. In contrast, it is clear that in both the wine and seeds datasets, BOB produces the closest approximation to the Bayesian posterior predictive distribution and the best uncertainty quantification. These results are consistent with those from section 4. Additional posterior density plots are provided in the supplementary materials, where similar patterns can be observed across different variables from the wine and seeds datasets.

Moreover, Table 4 presents elapsed (wall-clock) times in minutes, as well as the \hat{\text{TV}} and \hat{\text{KS}} distances between the Bayesian posterior predictive distribution and its approximations obtained via BOB, WBB1, WBB2, NUTS, and ADVI, for the wine and seeds datasets. BOB returns the smallest \hat{\text{TV}} and \hat{\text{KS}} distances in both datasets. Additionally, note that in the wine dataset, NUTS takes 69.40 minutes to run, whereas BOB takes only 3.16 minutes. This is a 22-fold reduction in running time, suggesting that BOB not only outperforms competing approaches in recovering the Bayesian posterior predictive distribution, but can also be substantially faster than MCMC samplers. On the whole, this illustrates BOB's practical utility in real-world problems.

Figure 4: True Bayesian posterior predictive density (red contours) and its approximations (blue KDEs) obtained via BOB, WBB1, WBB2, NUTS, and ADVI, for selected variables from the wine data. Contours and KDEs were computed using only the training data. Scatter plots depict the held-out data. Orange squares, pink triangles and dark circles represent Barolo, Grignolino, and Barbera wines, respectively.
Figure 5: True Bayesian posterior predictive density (red contours) and its approximations (blue KDEs) obtained via BOB, WBB1, WBB2, NUTS, and ADVI, for selected variables from the seeds data. Contours and KDEs were computed using only the training data. Scatter plots depict the held-out data. Orange squares, pink triangles and dark circles represent Kama, Rosa, and Canadian wheat kernels, respectively.
Table 4: TV^\hat{\text{TV}} and KS^\hat{\text{KS}} distances between the Bayesian posterior predictive distribution and its approximations obtained via BOB, WBB1, WBB2, NUTS, and ADVI, as well as elapsed (wall-clock) times in minutes, for the wine and seeds datasets.
Method
Dataset Metric BOB WBB1 WBB2 NUTS ADVI
Wine TV^\hat{\text{TV}} 0.039 0.056 0.058 0.065 0.117
KS^\hat{\text{KS}} 0.032 0.048 0.049 0.047 0.071
Elapsed (min) 3.158 0.010 0.007 69.399 0.256
Seeds TV^\hat{\text{TV}} 0.019 0.025 0.023 0.072 0.071
KS^\hat{\text{KS}} 0.020 0.027 0.023 0.079 0.079
Elapsed (min) 2.869 0.005 0.005 13.657 0.131
\botrule

6 Discussion

In this article, we have investigated alternatives to MCMC algorithms for sampling from the posterior distribution of GMMs. More precisely, we developed a randomly weighted EM algorithm and introduced BOB, a novel computational methodology for approximate posterior sampling. We built on WLB and WBB ideas, which rest on the premise that optimizing randomly weighted posterior densities can be faster and less computationally intensive than sampling from an intractable posterior. BOB, however, tackles the problem of automatically selecting the random weights under arbitrary sample sizes, which had not been addressed before. This is done by minimizing, through Bayesian Optimization, a black-box and noisy version of the reverse KL divergence between the Bayesian posterior and an approximate posterior induced by random weighting. We have demonstrated that BOB consistently outperforms competing approaches in recovering the Bayesian posterior, provides better uncertainty quantification, and retains key asymptotic properties of existing methods. Moreover, we showed that BOB is not only more accurate than the competing approaches considered here, but can also be substantially faster than MCMC algorithms, making BOB a competitive approximate posterior sampler.

Limitations: As with many other methodological developments, BOB presents both strengths and limitations, which can make it either highly suitable or impractical, depending on the specific context of its application. We identify two main limitations. (1) Somewhat ironically, the first limitation involves the use of Bayesian Optimization. Despite being one of the most efficient approaches for optimizing a black-box objective, BO does not scale well to high-dimensional search spaces [30]. Recall that our search space, \boldsymbol{\mathcal{X}}, is 2(K+1)-dimensional, and thus we should expect BOB's applicability and performance to be best for small to moderate values of K (i.e., the number of components in the mixture). Fortunately, high-dimensional black-box optimization is an active and vibrant area of research, and incorporating such developments and comparing different black-box optimizers could be an avenue for future work. (2) The second limitation concerns the sensitivity of our algorithms to the initial parameter values. Despite the incorporation of a tempering profile, both WBB and BOB remain sensitive to initialization. Poorly chosen initial values might lead the algorithm to settle at a sub-optimal solution, and no amount of tempering would help it escape such a local optimum. Fortunately, recent developments in clustering methods can help us obtain adequate starting points. Lastly, throughout this article, we have assumed that the number of clusters, K, is fixed and known. The problem of selecting K within a Bayesian paradigm has been widely discussed in the literature (see, e.g., [43], [49], [38], or [3]) and is outside the scope of this article.

On the whole, BOB joins a growing body of literature that aims to draw practitioners' attention toward posterior samplers beyond traditional MCMC algorithms [37, 32]. Thus, extending BOB (and BOB-like methodologies) from GMMs with conjugate priors to arbitrary posterior distributions presents an exciting research opportunity, which we intend to pursue in the future.

Acknowledgements

We would like to thank Michael Martin for many constructive comments and helpful suggestions. Any remaining errors are, of course, our own.

References

  • Aeberhard et al. [1994] Aeberhard, S., D. Coomans, and O. de Vel. 1994. Comparative analysis of statistical pattern recognition methods in high dimensional settings. Pattern Recognition 27(8): 1065–1077 .
  • Allassonnière and Chevallier [2021] Allassonnière, S. and J. Chevallier. 2021. A new class of stochastic em algorithms. escaping local maxima and handling intractable sampling. Computational Statistics & Data Analysis 159: 107159 .
  • Baudry et al. [2010] Baudry, J.P., A.E. Raftery, G. Celeux, K. Lo, and R. Gottardo. 2010. Combining mixture components for clustering. Journal of Computational and Graphical Statistics 19(2): 332–353 .
  • Bishop [2006] Bishop, C.M. 2006. Pattern recognition and machine learning. Springer New York, NY.
  • Blei et al. [2017] Blei, D.M., A. Kucukelbir, and J.D. McAuliffe. 2017. Variational inference: A review for statisticians. Journal of the American Statistical Association 112(518): 859–877 .
  • Bull [2011] Bull, A.D. 2011. Convergence rates of efficient global optimization algorithms. Journal of Machine Learning Research 12(88): 2879–2904 .
  • Carpenter et al. [2017] Carpenter, B., A. Gelman, M.D. Hoffman, D. Lee, B. Goodrich, M. Betancourt, M. Brubaker, J. Guo, P. Li, and A. Riddell. 2017. Stan: A probabilistic programming language. Journal of Statistical Software 76(1): 1–32 .
  • Celeux et al. [2000] Celeux, G., M. Hurn, and C.P. Robert. 2000. Computational and inferential difficulties with mixture posterior distributions. Journal of the American Statistical Association 95(451): 957–970 .
  • Charytanowicz et al. [2010] Charytanowicz, M., J. Niewczas, P. Kulczycki, P.A. Kowalski, S. Łukasik, and S. Żak. 2010. Complete gradient clustering algorithm for features analysis of x-ray images. Information Technologies in Biomedicine 2: 15–24 .
  • Dempster et al. [1977] Dempster, A.P., N.M. Laird, and D.B. Rubin. 1977. Maximum likelihood from incomplete data via the em algorithm. Journal of the Royal Statistical Society Series B: Statistical Methodology 39(1): 1–22 .
  • Diebolt and Robert [1994] Diebolt, J. and C.P. Robert. 1994. Estimation of finite mixture distributions through bayesian sampling. Journal of the Royal Statistical Society Series B: Statistical Methodology 56(2): 363–375 .
  • Fong et al. [2019] Fong, E., S. Lyddon, and C. Holmes. 2019. Scalable nonparametric sampling from multimodal posteriors with the posterior bootstrap. Proceedings of the 36th International Conference on Machine Learning 97: 1952–1962 .
  • Fraley and Raftery [2002] Fraley, C. and A.E. Raftery. 2002. Model-based clustering, discriminant analysis, and density estimation. Journal of the American Statistical Association 97(458): 611–631 .
  • Friedman et al. [2008] Friedman, J., T. Hastie, and R. Tibshirani. 2008, 12. Sparse inverse covariance estimation with the graphical lasso. Biostatistics 9(3): 432–441 .
  • Geisser [2017] Geisser, S. 2017. Predictive Inference. Chapman and Hall/CRC.
  • Gelman and Rubin [1992] Gelman, A. and D.B. Rubin. 1992. Inference from iterative simulation using multiple sequences. Statistical Science 7(4): 457–472 .
  • Hoffman and Gelman [2014] Hoffman, M.D. and A. Gelman. 2014. The no-u-turn sampler: Adaptively setting path lengths in hamiltonian monte carlo. Journal of Machine Learning Research 15(47): 1593–1623 .
  • Hofmeyr [2021] Hofmeyr, D.P. 2021. Fast exact evaluation of univariate kernel sums. IEEE Transactions on Pattern Analysis and Machine Intelligence 43(2): 447–458 .
  • Hofmeyr [2022] Hofmeyr, D.P. 2022. Fast kernel smoothing in r with applications to projection pursuit. Journal of Statistical Software 101(3): 1–33 .
  • Izenman and Sommer [1988] Izenman, A.J. and C.J. Sommer. 1988. Philatelic mixtures and multimodal densities. Journal of the American Statistical Association 83(404): 941–953 .
  • Jones [2001] Jones, D.R. 2001. A taxonomy of global optimization methods based on response surfaces. Journal of Global Optimization 21: 345–383 .
  • Jones et al. [1998] Jones, D.R., M. Schonlau, and W.J. Welch. 1998. Efficient global optimization of expensive black-box functions. Journal of Global Optimization 13: 455–492 .
  • Kandasamy et al. [2020] Kandasamy, K., K.R. Vysyaraju, W. Neiswanger, B. Paria, C.R. Collins, J. Schneider, B. Poczos, and E.P. Xing. 2020. Tuning hyperparameters without grad students: Scalable and robust bayesian optimisation with dragonfly. Journal of Machine Learning Research 21(81): 1–27 .
  • Kirkpatrick et al. [1983] Kirkpatrick, S., C.D. Gelatt Jr, and M.P. Vecchi. 1983. Optimization by simulated annealing. Science 220(4598): 671–680 .
  • Kucukelbir et al. [2017] Kucukelbir, A., D. Tran, R. Ranganath, A. Gelman, and D.M. Blei. 2017. Automatic differentiation variational inference. Journal of Machine Learning Research 18(14): 1–45 .
  • Lartigue et al. [2022] Lartigue, T., S. Durrleman, and S. Allassonnière. 2022. Deterministic approximate em algorithm; application to the riemann approximation em and the tempered em. Algorithms 15(3): 78 .
  • Le Riche and Picheny [2021] Le Riche, R. and V. Picheny. 2021. Revisiting bayesian optimization in the light of the coco benchmark. Structural and Multidisciplinary Optimization 64(5): 3063–3087 .
  • Lyddon et al. [2019] Lyddon, S.P., C. Holmes, and S. Walker. 2019. General bayesian updating and the loss-likelihood bootstrap. Biometrika 106(2): 465–478 .
  • Lynch and Western [2004] Lynch, S.M. and B. Western. 2004. Bayesian posterior predictive checks for complex models. Sociological Methods & Research 32(3): 301–335 .
  • Moriconi et al. [2020] Moriconi, R., M.P. Deisenroth, and K. Sesh Kumar. 2020. High-dimensional bayesian optimization using low-dimensional feature spaces. Machine Learning 109: 1925–1943 .
  • Morningstar et al. [2021] Morningstar, W., S. Vikram, C. Ham, A. Gallagher, and J. Dillon. 2021. Automatic differentiation variational inference with mixtures. International Conference on Artificial Intelligence and Statistics 130: 3250–3258 .
  • Newton et al. [2021] Newton, M.A., N.G. Polson, and J. Xu. 2021. Weighted bayesian bootstrap for scalable posterior distributions. Canadian Journal of Statistics 49(2): 421–437 .
  • Newton and Raftery [1994] Newton, M.A. and A.E. Raftery. 1994. Approximate bayesian inference with the weighted likelihood bootstrap. Journal of the Royal Statistical Society Series B: Statistical Methodology 56(1): 3–48 .
  • Ng and Newton [2022] Ng, T.L. and M.A. Newton. 2022. Random weighting in lasso regression. Electronic Journal of Statistics 16(1): 3430–3481 .
  • Ni et al. [2020] Ni, Y., P. Müller, M. Diesendruck, S. Williamson, Y. Zhu, and Y. Ji. 2020. Scalable bayesian nonparametric clustering and classification. Journal of Computational and Graphical Statistics 29(1): 53–65 .
  • Nie and Ročková [2023a] Nie, L. and V. Ročková. 2023a. Bayesian bootstrap spike-and-slab lasso. Journal of the American Statistical Association 118(543): 2013–2028 .
  • Nie and Ročková [2023b] Nie, L. and V. Ročková. 2023b. Deep bootstrap for bayesian inference. Philosophical Transactions of the Royal Society A 381(2247): 20220154 .
  • Nobile and Fearnside [2007] Nobile, A. and A.T. Fearnside. 2007. Bayesian finite mixtures with an unknown number of components: The allocation sampler. Statistics and Computing 17: 147–162 .
  • Omori et al. [2007] Omori, Y., S. Chib, N. Shephard, and J. Nakajima. 2007. Stochastic volatility with leverage: Fast and efficient likelihood inference. Journal of Econometrics 140(2): 425–449 .
  • Pompe [2021] Pompe, E. 2021. Introducing prior information in weighted likelihood bootstrap with applications to model misspecification. arXiv preprint arXiv:2103.14445 .
  • Provost and Zang [2024] Provost, S.B. and Y. Zang. 2024. Nonparametric copula density estimation methodologies. Mathematics 12(3): 398 .
  • Rasmussen and Williams [2005] Rasmussen, C.E. and C.K.I. Williams. 2005, 11. Gaussian Processes for Machine Learning. The MIT Press.
  • Richardson and Green [1997] Richardson, S. and P.J. Green. 1997. On bayesian analysis of mixtures with an unknown number of components (with discussion). Journal of the Royal Statistical Society Series B: Statistical Methodology 59(4): 731–792 .
  • Roustant et al. [2012] Roustant, O., D. Ginsbourger, and Y. Deville. 2012. Dicekriging, diceoptim: Two r packages for the analysis of computer experiments by kriging-based metamodeling and optimization. Journal of Statistical Software 51(1): 1–55 .
  • Sambridge [2014] Sambridge, M. 2014. A Parallel Tempering algorithm for probabilistic sampling and multimodal optimization. Geophysical Journal International 196(1): 357–374 .
  • Schilling [2017] Schilling, R.L. 2017. Measures, Integrals and Martingales. Cambridge University Press.
  • Sklar [1959] Sklar, M. 1959. Fonctions de répartition à N dimensions et leurs marges. Annales de l’ISUP 8(3): 229–231 .
  • Snoek et al. [2012] Snoek, J., H. Larochelle, and R.P. Adams. 2012. Practical bayesian optimization of machine learning algorithms. Advances in Neural Information Processing Systems 25: 2951–2959 .
  • Stephens [2000a] Stephens, M. 2000a. Bayesian analysis of mixture models with an unknown number of components—an alternative to reversible jump methods. The Annals of Statistics 28(1): 40 – 74 .
  • Stephens [2000b] Stephens, M. 2000b. Dealing with label switching in mixture models. Journal of the Royal Statistical Society Series B: Statistical Methodology 62(4): 795–809 .
  • Stringer et al. [2023] Stringer, A., P. Brown, and J. Stafford. 2023. Fast, scalable approximations to posterior distributions in extended latent gaussian models. Journal of Computational and Graphical Statistics 32(1): 84–98 .
  • Van Havre et al. [2015] Van Havre, Z., N. White, J. Rousseau, and K. Mengersen. 2015. Overfitting bayesian mixture models with an unknown number of components. PloS one 10(7): e0131739 .
  • Wade and Ghahramani [2018] Wade, S. and Z. Ghahramani. 2018. Bayesian Cluster Analysis: Point Estimation and Credible Balls (with Discussion). Bayesian Analysis 13(2): 559 – 626 .
Supplementary Materials for “BOB: Bayesian Optimized Bootstrap for Uncertainty Quantification in Gaussian Mixture Models”

Santiago Marin,  Bronwyn Loong,  Anton H. Westveld

S.1 Proof of Proposition 1

From (8), we have that the update of 𝝁k\boldsymbol{\mu}_{k} and 𝚺k\boldsymbol{\Sigma}_{k} is given by

(𝝁k(t+1),𝚺k(t+1))=argmax𝝁k,𝚺k{h(𝝁k,𝚺k)},\bigl{(}\boldsymbol{\mu}_{k}^{(t+1)},\boldsymbol{\Sigma}_{k}^{(t+1)}\bigr{)}=\operatorname*{arg\,max}_{\boldsymbol{\mu}_{k},\boldsymbol{\Sigma}_{k}}\bigl{\{}h(\boldsymbol{\mu}_{k},\boldsymbol{\Sigma}_{k})\bigr{\}},

with

h(𝝁k,𝚺k)=12i=1n(uiqik[log|𝚺k|+(𝐲i𝝁k)𝚺k1(𝐲i𝝁k)])u~Σktr(𝚿k𝚺k1)2u~μkλk2(𝝁k𝜷k)𝚺k1(𝝁k𝜷k)u~Σk(νk+d2+1)log|𝚺k|=()n~k2log|𝚺k|12tr[(n~k(𝐲¯~k𝝁k)(𝐲¯~k𝝁k)+𝐒~k)𝚺k1](ν~k+d2+1)log|𝚺k|12tr(𝚿~k𝚺k1)λ~k2tr[(𝝁k𝜷k)(𝝁k𝜷k)𝚺k1]=(ν¯k+d2+1)log|𝚺k|12tr(𝚿¯k𝚺k1)λ¯k2(𝝁k𝜷¯k)𝚺k1(𝝁k𝜷¯k),\begin{split}h(\boldsymbol{\mu}_{k},\boldsymbol{\Sigma}_{k})&=-\frac{1}{2}\sum_{i=1}^{n}\left(u_{i}q_{ik}\left[\log|\boldsymbol{\Sigma}_{k}|+\left(\mathbf{y}_{i}-\boldsymbol{\mu}_{k}\right)^{\prime}\boldsymbol{\Sigma}_{k}^{-1}\left(\mathbf{y}_{i}-\boldsymbol{\mu}_{k}\right)\right]\right)-\frac{\tilde{u}_{\Sigma_{k}}\operatorname{tr}(\boldsymbol{\Psi}_{k}\boldsymbol{\Sigma}_{k}^{-1})}{2}\\ &\quad-\frac{\tilde{u}_{\mu_{k}}\lambda_{k}}{2}(\boldsymbol{\mu}_{k}-\boldsymbol{\beta}_{k})^{\prime}\boldsymbol{\Sigma}_{k}^{-1}(\boldsymbol{\mu}_{k}-\boldsymbol{\beta}_{k})-\tilde{u}_{\Sigma_{k}}\left(\frac{\nu_{k}+d}{2}+1\right)\log|\boldsymbol{\Sigma}_{k}|\\ &\overset{(*)}{=}-\frac{\tilde{n}_{k}}{2}\log|\boldsymbol{\Sigma}_{k}|-\frac{1}{2}\operatorname{tr}\left[\left(\tilde{n}_{k}(\tilde{\bar{\mathbf{y}}}_{k}-\boldsymbol{\mu}_{k})(\tilde{\bar{\mathbf{y}}}_{k}-\boldsymbol{\mu}_{k})^{\prime}+\tilde{\mathbf{S}}_{k}\right)\boldsymbol{\Sigma}_{k}^{-1}\right]\\ &\quad-\left(\frac{\tilde{\nu}_{k}+d}{2}+1\right)\log|\boldsymbol{\Sigma}_{k}|-\frac{1}{2}\operatorname{tr}\left(\tilde{\boldsymbol{\Psi}}_{k}\boldsymbol{\Sigma}_{k}^{-1}\right)-\frac{\tilde{\lambda}_{k}}{2}\operatorname{tr}\left[(\boldsymbol{\mu}_{k}-\boldsymbol{\beta}_{k})(\boldsymbol{\mu}_{k}-\boldsymbol{\beta}_{k})^{\prime}\boldsymbol{\Sigma}_{k}^{-1}\right]\\ &=-\left(\frac{\bar{\nu}_{k}+d}{2}+1\right)\log|\boldsymbol{\Sigma}_{k}|-\frac{1}{2}\operatorname{tr}\left(\bar{\boldsymbol{\Psi}}_{k}\boldsymbol{\Sigma}_{k}^{-1}\right)-\frac{\bar{\lambda}_{k}}{2}(\boldsymbol{\mu}_{k}-\bar{\boldsymbol{\beta}}_{k})^{\prime}\boldsymbol{\Sigma}_{k}^{-1}(\boldsymbol{\mu}_{k}-\bar{\boldsymbol{\beta}}_{k}),\end{split}

where ()(*) follows from the fact that i=1n𝐛i𝐀𝐛i=tr(𝐁𝐁𝐀)\sum_{i=1}^{n}\mathbf{b}_{i}^{\prime}\mathbf{A}\mathbf{b}_{i}=\operatorname{tr}(\mathbf{B}^{\prime}\mathbf{BA}), where 𝐁n×d\mathbf{B}\in\mathbb{R}^{n\times d} denotes the matrix whose ii-th row is 𝐛i1×d\mathbf{b}_{i}^{\prime}\in\mathbb{R}^{1\times d}, and by writing (𝐲i𝝁k)(\mathbf{y}_{i}-\boldsymbol{\mu}_{k}) as (𝐲i𝐲¯~k+𝐲¯~k𝝁k)(\mathbf{y}_{i}-\tilde{\bar{\mathbf{y}}}_{k}+\tilde{\bar{\mathbf{y}}}_{k}-\boldsymbol{\mu}_{k}). ∎
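For completeness, the trace identity invoked at (*) follows from the linearity and cyclic invariance of the trace:

\sum_{i=1}^{n}\mathbf{b}_{i}^{\prime}\mathbf{A}\mathbf{b}_{i}=\sum_{i=1}^{n}\operatorname{tr}\left(\mathbf{b}_{i}^{\prime}\mathbf{A}\mathbf{b}_{i}\right)=\sum_{i=1}^{n}\operatorname{tr}\left(\mathbf{b}_{i}\mathbf{b}_{i}^{\prime}\mathbf{A}\right)=\operatorname{tr}\left(\Bigl(\sum_{i=1}^{n}\mathbf{b}_{i}\mathbf{b}_{i}^{\prime}\Bigr)\mathbf{A}\right)=\operatorname{tr}\left(\mathbf{B}^{\prime}\mathbf{B}\mathbf{A}\right).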

S.2 Sampling from the Posterior Predictive Distribution

Given (approximate or actual) posterior draws {𝜽(s)}s=1S\left\{\boldsymbol{\theta}_{(s)}\right\}_{s=1}^{S}, one can easily obtain posterior predictive draws as described in algorithm S.1.

Algorithm S.1 Posterior Predictive Sampler

Input:
      Posterior draws: \left\{\boldsymbol{\theta}_{(s)}\right\}_{s=1}^{S}
      Model: p(\mathbf{y}\,|\,\boldsymbol{\theta})
Output:
      Posterior predictive draws: \left\{\mathbf{y}_{\text{new},(s)}\right\}_{s=1}^{S}

1: for s\in\{1,\dots,S\} do \triangleright in parallel
2:      Sample \mathbf{y}_{\text{new},(s)}\sim p(\mathbf{y}\,|\,\boldsymbol{\theta}_{(s)})
3: end for
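As a concrete illustration, a minimal Python sketch of Algorithm S.1 for a K-component GMM is given below; the dictionary representation of each draw (keys 'pi', 'mu', 'Sigma') is an assumption made for readability and is not tied to our implementation.

import numpy as np

def posterior_predictive_draws(theta_draws, seed=0):
    """Algorithm S.1 for a GMM: for each posterior draw theta_(s), sample
    y_new,(s) ~ p(y | theta_(s)).  Each element of theta_draws is assumed
    to be a dict with keys 'pi' (K,), 'mu' (K, d), and 'Sigma' (K, d, d)."""
    rng = np.random.default_rng(seed)
    y_new = []
    for theta in theta_draws:
        k = rng.choice(len(theta["pi"]), p=theta["pi"])   # draw a mixture component
        y_new.append(rng.multivariate_normal(theta["mu"][k], theta["Sigma"][k]))
    return np.asarray(y_new)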

S.3 Sampling from the Actual Bayesian Posterior

Following \citeSupp{gelman2013bayesian}, under the likelihood and priors specified in section 2.1, the Bayesian posteriors for \left\{\boldsymbol{\mu}_{k}|\mathbf{Y},\mathbf{Z}\right\}, \left\{\boldsymbol{\Sigma}_{k}|\mathbf{Y},\mathbf{Z}\right\}, and \left\{\boldsymbol{\pi}|\mathbf{Y},\mathbf{Z}\right\} are given by

\displaystyle\boldsymbol{\mu}_{k}|\mathbf{Y},\mathbf{Z}\sim\textit{t}_{(\hat{\nu}_{k}-d+1)}\left(\hat{\boldsymbol{\beta}}_{k},\,\hat{\boldsymbol{\Psi}}_{k}/(\hat{\lambda}_{k}(\hat{\nu}_{k}-d+1))\right),
\displaystyle\boldsymbol{\Sigma}_{k}|\mathbf{Y},\mathbf{Z}\sim\mathcal{IW}\left(\hat{\nu}_{k},\hat{\boldsymbol{\Psi}}_{k}^{-1}\right),
\displaystyle\boldsymbol{\pi}|\mathbf{Y},\mathbf{Z}\sim\mathcal{D}\left(\hat{a}_{1},\dots,\hat{a}_{K}\right),

where \hat{\boldsymbol{\beta}}_{k}=(\hat{\beta}_{k1},\dots,\hat{\beta}_{kd})^{\prime}=\lambda_{k}/(\lambda_{k}+n_{z_{k}})\boldsymbol{\beta}_{k}+n_{z_{k}}/(\lambda_{k}+n_{z_{k}})\bar{\mathbf{y}}_{z_{k}}, \hat{\boldsymbol{\Psi}}_{k}=\left(\hat{\psi}_{kjj^{\prime}}\right)=\boldsymbol{\Psi}_{k}+\mathbf{S}_{z_{k}}+(\lambda_{k}n_{z_{k}})/(\lambda_{k}+n_{z_{k}})(\bar{\mathbf{y}}_{z_{k}}-\boldsymbol{\beta}_{k})(\bar{\mathbf{y}}_{z_{k}}-\boldsymbol{\beta}_{k})^{\prime}, \hat{\lambda}_{k}=\lambda_{k}+n_{z_{k}}, \hat{\nu}_{k}=\nu_{k}+n_{z_{k}}, and \hat{a}_{k}=a_{k}+n_{z_{k}}, with n_{z_{k}}=\sum_{i=1}^{n}z_{ik}, \bar{\mathbf{y}}_{z_{k}}=(n_{z_{k}})^{-1}\sum_{i:z_{ik}=1}\mathbf{y}_{i}, and \mathbf{S}_{z_{k}}=\sum_{i:z_{ik}=1}(\mathbf{y}_{i}-\bar{\mathbf{y}}_{z_{k}})(\mathbf{y}_{i}-\bar{\mathbf{y}}_{z_{k}})^{\prime}. When the true latent indicator variables, \mathbf{Z}, are known (e.g., under simulated data or when we know the true cluster labels), one can easily compute, evaluate, or obtain draws from the joint Bayesian posterior distribution. More precisely, Algorithm S.2 generates draws from the joint Bayesian posterior, namely \left\{\boldsymbol{\theta}_{\text{Bayes},(s)}\right\}_{s=1}^{S}. Then, with the posterior draws \left\{\boldsymbol{\theta}_{\text{Bayes},(s)}\right\}_{s=1}^{S}, we can proceed to obtain posterior predictive draws as in Algorithm S.1.

Algorithm S.2 Bayesian Posterior Sampler

Input:
      Data: \mathbf{Y}
      True latent indicator variables: \mathbf{Z}
      Total number of posterior draws: S
      Posterior hyper-parameters: \bigl\{\hat{\boldsymbol{\beta}}_{k},\hat{\boldsymbol{\Psi}}_{k},\hat{\lambda}_{k},\hat{\nu}_{k},\hat{a}_{k}\bigr\}_{k=1}^{K}
Output:
      Bayesian posterior draws: \left\{\boldsymbol{\theta}_{\text{Bayes},(s)}\right\}_{s=1}^{S}

1: for s\in\{1,\dots,S\} do \triangleright in parallel
2:      Sample \boldsymbol{\Sigma}_{k,(s)}|\mathbf{Y},\mathbf{Z}\sim\mathcal{IW}\bigl(\hat{\nu}_{k},\hat{\boldsymbol{\Psi}}_{k}^{-1}\bigr), \forall k\in[K]
3:      Sample \boldsymbol{\mu}_{k,(s)}|\mathbf{Y},\mathbf{Z},\boldsymbol{\Sigma}_{k,(s)}\sim\mathcal{N}\bigl(\hat{\boldsymbol{\beta}}_{k},\boldsymbol{\Sigma}_{k,(s)}/\hat{\lambda}_{k}\bigr), \forall k\in[K]
4:      Sample \boldsymbol{\pi}_{(s)}|\mathbf{Y},\mathbf{Z}\sim\mathcal{D}\bigl(\hat{a}_{1},\dots,\hat{a}_{K}\bigr)
5:      Set \boldsymbol{\theta}_{\text{Bayes},(s)}\leftarrow\bigl\{\pi_{k,(s)},\boldsymbol{\mu}_{k,(s)},\boldsymbol{\Sigma}_{k,(s)}\bigr\}_{k=1}^{K}
6: end for
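The sketch below pairs the hyper-parameter updates above with Algorithm S.2, using SciPy's Inverse-Wishart, Dirichlet, and multivariate normal samplers. It is a minimal illustration under stated assumptions: the prior dictionary layout and variable names are ours, and SciPy's invwishart parameterization (mean scale/(df - d - 1)) is taken to match the Inverse-Wishart above. The resulting draws can then be fed to the predictive sampler of Algorithm S.1.

import numpy as np
from scipy.stats import dirichlet, invwishart, multivariate_normal

def bayes_posterior_draws(Y, Z, prior, S, seed=0):
    """Exact joint posterior draws for a conjugate GMM when the latent
    one-hot indicators Z (n x K) are known.  The dict `prior` is assumed
    to hold beta (K x d), Psi (K x d x d), lam (K,), nu (K,), and a (K,)."""
    rng = np.random.default_rng(seed)
    n, d = Y.shape
    K = Z.shape[1]
    # Posterior hyper-parameters (beta_hat, Psi_hat, lam_hat, nu_hat, a_hat)
    beta_hat, Psi_hat = np.empty((K, d)), np.empty((K, d, d))
    lam_hat, nu_hat, a_hat = np.empty(K), np.empty(K), np.empty(K)
    for k in range(K):
        Yk = Y[Z[:, k] == 1]
        nk, ybar = Yk.shape[0], Yk.mean(axis=0)
        Sk = (Yk - ybar).T @ (Yk - ybar)
        lam_hat[k] = prior["lam"][k] + nk
        nu_hat[k] = prior["nu"][k] + nk
        a_hat[k] = prior["a"][k] + nk
        beta_hat[k] = (prior["lam"][k] * prior["beta"][k] + nk * ybar) / lam_hat[k]
        dev = (ybar - prior["beta"][k])[:, None]
        Psi_hat[k] = prior["Psi"][k] + Sk + prior["lam"][k] * nk / lam_hat[k] * (dev @ dev.T)
    # Algorithm S.2: independent draws from the joint posterior
    draws = []
    for _ in range(S):
        theta = {"pi": dirichlet.rvs(a_hat, random_state=rng)[0],
                 "mu": np.empty((K, d)), "Sigma": np.empty((K, d, d))}
        for k in range(K):
            Sigma_k = invwishart.rvs(df=nu_hat[k], scale=Psi_hat[k], random_state=rng)
            theta["Sigma"][k] = Sigma_k
            theta["mu"][k] = multivariate_normal.rvs(mean=beta_hat[k],
                                                     cov=Sigma_k / lam_hat[k],
                                                     random_state=rng)
        draws.append(theta)
    return draws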

S.4 Warm Initialization Strategy

To initialize our algorithms, we consider a pool of candidate initial values. These candidate values are obtained via (1) hard-thresholded K-means \citepSupp{HT_k_means_raymaekers2022}, where the HTK-means penalty term is selected using AIC, BIC, and a regularization path plot; (2) sparse K-means \citepSupp{witten2010framework}, where the sparse K-means shrinkage parameter is selected using a permutation approach; (3) model-based clustering using the “mclust” R package \citepSupp{mclustScrucca}, which is itself initialized by hierarchical model-based agglomerative clustering; and (4) K-means clustering \citepSupp{macqueen1967some}. We then choose as the starting point the candidate values that yield the largest posterior density. To facilitate comparisons, we initialize all algorithms at this starting point.
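To illustrate the idea of ranking candidate starting points by their posterior density, here is a simplified Python stand-in. It does not reproduce the pool above (which relies on HTK-means, sparse K-means, and mclust in R); instead it uses K-means and a fitted Gaussian mixture from scikit-learn as example candidates, and log_posterior is a user-supplied, hypothetical function returning the (unnormalized) log posterior density of a candidate (pi, mu, Sigma).

import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def warm_start(Y, K, log_posterior, seed=0):
    """Return the candidate initial value (pi, mu, Sigma) with the largest
    (unnormalized) log posterior density among a pool of clusterings.
    Degenerate (empty) clusters are not handled in this sketch."""
    d = Y.shape[1]
    candidates = []
    for clusterer in (KMeans(n_clusters=K, n_init=10, random_state=seed),
                      GaussianMixture(n_components=K, random_state=seed)):
        labels = clusterer.fit_predict(Y)
        pi = np.bincount(labels, minlength=K) / len(labels)
        mu = np.stack([Y[labels == k].mean(axis=0) for k in range(K)])
        # Ridge term keeps the candidate covariances positive definite
        Sigma = np.stack([np.cov(Y[labels == k].T) + 1e-6 * np.eye(d)
                          for k in range(K)])
        candidates.append((pi, mu, Sigma))
    return max(candidates, key=lambda c: log_posterior(*c))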

S.5 Features in Benchmark Data

Tables S.1 and S.2 present details and descriptions of the variables in the Wine and Seeds datasets, respectively.

Table S.1: Features in the Wine Data
Wine Feature Wine Feature
Abbreviation Description Abbreviation Description
alco Alcohol percentage nflav Nonflavanoid phenols
mal Malic acid proa Proanthocyanins
ash Ash col Colour intensity
akash Alkalinity of ash hue Hue
mag Magnesium ODdw OD280OD315\frac{OD_{280}}{OD_{315}} of diluted wines
phen Total phenols prol Proline
flav Flavanoids
\botrule
Table S.2: Features in the Seeds Data
Seed Feature Seed Feature
Abbreviation Description Abbreviation Description
area Area of kernel k.width Width of kernel
perimeter Perimeter of kernel asymmetry Asymmetry coefficient
compactness Compactness of kernel k.gr.len Length of kernel groove
k.len Length of kernel
\botrule

S.6 Additional Posterior Density Plots

Figure 6 presents additional posterior predictive density plots for our illustrative example with n=50, d=10, and K=2. We observe patterns similar to those in the main article: both versions of WBB fail to capture the variance of the Bayesian posterior, while NUTS and ADVI produce unimodal posterior predictive distributions, failing to identify the clusters in the data. It is clear that BOB offers the best approximation to the Bayesian posterior predictive distribution and the best uncertainty quantification.

Figures 7 and 8 present posterior density plots for additional variables from the wine and seeds datasets, respectively. As in the main document, both versions of WBB correctly capture the location of the three clusters; however, they produce overconfident posterior predictive distributions and do not capture the dispersion of the Bayesian posterior. NUTS and ADVI are unable to identify the three clusters in the data. Clearly, BOB produces the closest approximation to the Bayesian posterior predictive distribution and the best uncertainty quantification.

Figure 6: True Bayesian posterior predictive density (red contours) and its approximations (blue KDEs) obtained via BOB, WBB1, WBB2, NUTS, and ADVI, when K=2K=2, d=10d=10, and n=50n=50.
Figure 7: True Bayesian posterior predictive density (red contours) and its approximations (blue KDEs) obtained via BOB, WBB1, WBB2, NUTS, and ADVI, for additional variables from the wine data. Contours and KDEs were computed using only the training data. Scatter plots depict the held-out data. Orange squares, pink triangles and dark circles represent Barolo, Grignolino, and Barbera wines, respectively.
Figure 8: True Bayesian posterior predictive density (red contours) and its approximations (blue KDEs) obtained via BOB, WBB1, WBB2, NUTS, and ADVI, for additional variables from the seeds data. Contours and KDEs were computed using only the training data. Scatter plots depict the held-out data. Orange squares, pink triangles and dark circles represent Kama, Rosa, and Canadian wheat kernels, respectively.
\bibliographystyleSupp{sn-chicago}
\bibliographySupp{references-supp}