Bayesian model inversion using stochastic spectral embedding
Abstract
In this paper we propose a new sampling-free approach to solve Bayesian model inversion problems that is an extension of the previously proposed spectral likelihood expansions (SLE) method. Our approach, called stochastic spectral likelihood embedding (SSLE), uses the recently presented stochastic spectral embedding (SSE) method for local spectral expansion refinement to approximate the likelihood function at the core of Bayesian inversion problems.
We show that, similar to SLE, this approach results in analytical expressions for key statistics of the Bayesian posterior distribution, such as evidence, posterior moments and posterior marginals, by direct post-processing of the expansion coefficients. Because SSLE and SSE rely on the direct approximation of the likelihood function, their cost is largely independent of the computational/mathematical complexity of the forward model. We further enhance the efficiency of SSLE by introducing a likelihood-specific adaptive sample enrichment scheme.
To showcase the performance of the proposed SSLE, we solve three problems that exhibit different kinds of complexity in the likelihood function: multimodality, high posterior concentration and high nominal dimensionality. We demonstrate how SSLE significantly improves on SLE, and present it as a promising alternative to existing inversion frameworks.
Keywords: Bayesian model inversion, inverse problems, polynomial chaos expansions, spectral likelihood expansions, stochastic spectral likelihood embedding, sampling-free inversion.
1 Introduction
Computational models are an invaluable tool for decision making, scientific advances and engineering breakthroughs. They establish a connection between a set of input parameters and output quantities with wide-ranging applications. Model inversion uses available experimental observations of the output to determine the set of input parameters that maximize the predictive potential of a model. The importance of efficient and reliable model inversion frameworks can hardly be overstated, considering that they establish a direct connection between models and the real world. Without it, the most advanced model predictions might lack physical meaning and, consequently, be useless for their intended applications.
Bayesian model inversion is one way to formalize this problem (Jaynes, 2003; Gelman et al., 2014). It is based on Bayesian inference and poses the problem in a probabilistic setting by capitalizing on Bayes’ theorem. In this setting a so-called prior (i.e., before observations) probability distribution about the model parameters is updated to a so-called posterior (i.e., after observations) distribution. The posterior distribution is the probability distribution of the input parameters conditioned on the available observations, and the main outcome of the Bayesian inversion process.
In Bayesian model inversion, the connection between the model output and the observations is established through a probabilistic discrepancy model. This model, which is a function of the input parameters, leads to the so-called likelihood function. The specific form of the likelihood function depends on the problem at hand, but typically it has a global maximum for the input parameters with the model output that is closest to the available observations (w.r.t. some metric), and rapidly goes to zero with increasing distance to those parameters.
Analytical expressions for the posterior distribution can only be found in a few academic examples (e.g., conjugate priors with a linear forward model, Bishop (2006); Gelman et al. (2014)). In general model inversion problems, however, such analytical solutions are not available. Instead, it is common practice to resort to sampling methods to generate a sample distributed according to the posterior distribution. The family of Markov chain Monte Carlo (MCMC) algorithms is particularly suitable for generating such a posterior sample (Beck and Au, 2002; Robert and Casella, 2004).
While MCMC and its extensions are extensively used in model inversion, and new algorithms are continuously being developed (Haario et al., 2001; Ching and Chen, 2007; Goodman and Weare, 2010; Neal, 2011), they have a few notable shortcomings that hinder their application in many practical cases. It is well known that there are no robust convergence criteria for MCMC algorithms, and that their performance is particularly sensitive to their tuning parameters. Additionally, samples generated by MCMC algorithms are often highly correlated, thus requiring extensive heuristic post-processing and empirical rules (Gelman and Rubin, 1992; Brooks and Gelman, 1998). MCMC algorithms are also in general not well suited for sampling multimodal posterior distributions.
When considering complex engineering scenarios, the models subject to inversion are often computationally expensive. Because MCMC algorithms usually require a significant number of forward model evaluations, it has been proposed to accelerate the procedure by using surrogate models in lieu of the original models. These surrogate models are either constructed non-adaptively before sampling from the posterior distribution (Marzouk et al., 2007; Marzouk and Xiu, 2009) or adaptively during the sampling procedure (Li and Marzouk, 2014; Birolleau et al., 2014; Cui et al., 2014; Conrad et al., 2016; Conrad et al., 2018; Yan and Zhou, 2019). Adaptive techniques can be of great benefit with posterior distributions that are concentrated in a small subspace of the prior domain, as the surrogate only needs to be accurate near high density areas of the posterior distribution.
Polynomial chaos expansions (PCE) are a widely used surrogate modelling technique based on expanding the forward model onto a suitable polynomial basis (Ghanem and Spanos, 1991; Xiu and Karniadakis, 2002). In other words, it provides a spectral representation of the computational forward model. Thanks to the introduction of sparse regression (see, e.g. Blatman and Sudret (2011)), its computation has become feasible even in the presence of complex and computationally expensive engineering models. This technique has been successfully used in conjunction with MCMC to reduce the total computational costs associated with sampling from the posterior distribution (Marzouk et al., 2007; Wagner et al., 2020).
Alternative approaches to compute the posterior distribution or its statistics include the Laplace approximation at a posterior mode (Tierney and Kadane, 1986; Tierney et al., 1989b, a), approximate Bayesian computations (ABC) (Marin et al., 2012; Sisson et al., 2018), optimal transport approaches (El Moselhy and Marzouk, 2012; Parno, 2015; Marzouk et al., 2016) and embarrassingly parallel quasi-Monte Carlo sampling (Dick et al., 2017; Gantner and Peters, 2018).
Stochastic spectral embedding (SSE) is a metamodelling technique suitable for approximating functions with complex localized features recently developed in Marelli et al. (2021). In this paper we propose to extend this technique with an ad-hoc adaptive sample enrichment strategy that makes it suitable to efficiently approximate likelihood functions in Bayesian model inversion problems. This method can be seen as a generalization of the previously proposed spectral likelihood expansions (SLE) approach (Nagel and Sudret, 2016).
Due to its deep connection to SLE, we call the application of SSE to likelihood functions stochastic spectral likelihood embedding (SSLE). We show that, due to its local spectral characteristics, this approach allows us to analytically derive expressions for the posterior marginals and general posterior moments by post-processing its expansion coefficients.
The paper is organized as follows: In Section 2 we establish the basics of Bayesian inference and particularly Bayesian model inversion. We then give an introduction into spectral function decomposition with a focus on polynomial chaos expansions and their application to likelihood functions (SLE) in Section 3. In Section 4 we present the main contribution of the paper, namely the derivation of Bayesian posterior quantities of interest through SSLE and the extension of the SSE algorithm with an adaptive sampling strategy. Finally, in Section 5 we showcase the performance of our approach on three case studies of varying complexity.
2 Model inversion
The problem of model inversion occurs whenever the predictions of a model are to be brought into agreement with available observations or data. This is achieved by properly adjusting a set of input parameters of the model. The goal of inversion can be twofold: on the one hand, the inferred input parameters might be used to predict new realizations of the model output; on the other hand, the inferred input parameters might be the main interest. Model inversion is a common problem in many engineering disciplines, and in some cases it is still routinely solved manually, i.e. by simply changing the input parameters until some, often qualitative, goodness-of-fit criterion is met. More quantitative inversion approaches aim at automating this process by establishing a metric between the data and the model response (e.g., an $L^2$ distance), which is then minimized through suitable optimization algorithms.
While such approaches can often be used in practical applications, they tend not to provide measures of the uncertainties associated with the inferred model input or predictions. These uncertainties are useful in judging the accuracy of the inversion, as well as indicating non-informative measurements. In fact, the lack of uncertainty quantification in the context of model inversion can lead to erroneous results that have far-reaching consequences in subsequent applications. One approach to consider uncertainties in inverse problems is the Bayesian framework for model inversion that will be presented hereinafter.
2.1 Bayesian inference
Consider some non-observable parameters $\boldsymbol{x} \in \mathcal{D}_{\boldsymbol{X}} \subseteq \mathbb{R}^{M}$ and the observables $\boldsymbol{y}$. Furthermore, let $\mathcal{Y} = \{\boldsymbol{y}^{(1)}, \ldots, \boldsymbol{y}^{(N)}\}$ be a set of measurements, i.e., noisy observations of a set of realizations of $\boldsymbol{y}$. Statistical inference consists in drawing conclusions about $\boldsymbol{x}$ using the information from $\mathcal{Y}$ (Gelman et al., 2014). These measurements can be direct observations of the parameters ($\boldsymbol{y} = \boldsymbol{x}$) or some quantities indirectly related to $\boldsymbol{x}$ through a function or model $\boldsymbol{y} = \mathcal{M}(\boldsymbol{x})$. One way to conduct this inference is through Bayes' theorem of conditional probabilities, a process known as Bayesian inference.
Denoting by $\pi(\boldsymbol{x})$ a probability density function (PDF) of $\boldsymbol{x}$ and by $\pi(\boldsymbol{x} \mid \mathcal{Y})$ a PDF conditioned on the measurements $\mathcal{Y}$, Bayes' theorem can be written as

$\pi(\boldsymbol{x} \mid \mathcal{Y}) = \dfrac{\pi(\mathcal{Y} \mid \boldsymbol{x})\,\pi(\boldsymbol{x})}{\pi(\mathcal{Y})},$  (1)
where $\pi(\boldsymbol{x})$ is known as the prior distribution of the parameters, i.e., the distribution of $\boldsymbol{X}$ before observing the data $\mathcal{Y}$. The conditional distribution $\pi(\mathcal{Y} \mid \boldsymbol{x})$, known as likelihood, establishes a connection between the observations $\mathcal{Y}$ and a realization of the parameters $\boldsymbol{x}$. For a given realization $\boldsymbol{x}$, it returns the probability density of observing the data $\mathcal{Y}$. Under the common assumption of independence between individual observations $\boldsymbol{y}^{(i)}$, the likelihood function takes the form:

$\mathcal{L}(\boldsymbol{x}) := \pi(\mathcal{Y} \mid \boldsymbol{x}) = \prod_{i=1}^{N} \pi\!\left(\boldsymbol{y}^{(i)} \mid \boldsymbol{x}\right).$  (2)
The likelihood function is a map $\mathcal{L}: \mathcal{D}_{\boldsymbol{X}} \to \mathbb{R}^{+}$, and it attains its maximum for the parameter set with the highest probability of yielding $\mathcal{Y}$. With this, Bayes' theorem from Eq. (1) can be rewritten as:

$\pi(\boldsymbol{x} \mid \mathcal{Y}) = \dfrac{1}{Z}\,\mathcal{L}(\boldsymbol{x})\,\pi(\boldsymbol{x}),$  (3)
where $Z := \pi(\mathcal{Y}) = \int_{\mathcal{D}_{\boldsymbol{X}}} \mathcal{L}(\boldsymbol{x})\,\pi(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}$ is a normalizing constant often called evidence or marginal likelihood. On the left-hand side, $\pi(\boldsymbol{x} \mid \mathcal{Y})$ is the posterior PDF, i.e., the distribution of $\boldsymbol{X}$ after observing the data $\mathcal{Y}$. In this sense, Bayes' theorem establishes a general expression for updating the prior distribution using a likelihood function to incorporate information from the data.
2.2 Bayesian model inversion
Bayesian model inversion describes the application of the Bayesian inference framework to the problem of model inversion (Beck and Katafygiotis, 1998; Kennedy and O'Hagan, 2001; Jaynes, 2003; Tarantola, 2005). The two main ingredients needed to infer model parameters within the Bayesian framework are a prior distribution $\pi(\boldsymbol{x})$ of the model parameters and a likelihood function $\mathcal{L}(\boldsymbol{x})$. In practical applications, prior information about the model parameters is often readily available. Typical sources of such information are physical parameter constraints or expert knowledge. Additionally, prior inversion attempts can serve as guidelines to assign informative prior distributions. In cases where no prior information about the parameters is available, so-called non-informative or invariant prior distributions (Jeffreys, 1946; Harney, 2016) can also be assigned. The likelihood function serves instead as the link between the model parameters $\boldsymbol{x}$ and observations of the model output $\mathcal{M}(\boldsymbol{x})$. To connect these two quantities, it is necessary to choose a so-called discrepancy model that gives the relative probability that the model response to a realization of $\boldsymbol{x}$ describes the observations. One common assumption for this probabilistic model is that the measurements are perturbed by a Gaussian additive discrepancy term $\boldsymbol{\varepsilon} \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{\Sigma})$, with covariance matrix $\boldsymbol{\Sigma}$. For a single measurement it reads:

$\boldsymbol{y} = \mathcal{M}(\boldsymbol{x}) + \boldsymbol{\varepsilon}, \qquad \boldsymbol{\varepsilon} \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{\Sigma}).$  (4)
This discrepancy between the model output and the observables can result from measurement error or model inadequacies. By using this additive discrepancy model, the distribution of the observables conditioned on the parameters is written as:

$\pi(\boldsymbol{y} \mid \boldsymbol{x}) = \mathcal{N}\!\left(\boldsymbol{y} \mid \mathcal{M}(\boldsymbol{x}), \boldsymbol{\Sigma}\right),$  (5)
where $\mathcal{N}(\boldsymbol{y} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma})$ denotes the multivariate Gaussian PDF with mean value $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$. The likelihood function is then constructed using this probabilistic model and Eq. (2). For a given set of measurements $\mathcal{Y}$ it thus reads:

$\mathcal{L}(\boldsymbol{x}) = \prod_{i=1}^{N} \mathcal{N}\!\left(\boldsymbol{y}^{(i)} \mid \mathcal{M}(\boldsymbol{x}), \boldsymbol{\Sigma}\right).$  (6)

With the fully specified Bayesian model inversion problem, Eq. (3) directly gives the posterior distribution of the model parameters $\pi(\boldsymbol{x} \mid \mathcal{Y})$. In the setting of model inversion, the posterior distribution therefore represents the state of belief about the true data-generating model parameters, considering all available information: computational forward model, discrepancy model and measurement data (Beck and Katafygiotis, 1998; Jaynes, 2003).
Often, the ultimate goal of model inversion is to provide a set of inferred parameters with associated confidence measures/intervals. This is often achieved by computing posterior statistics (e.g., moments, mode, etc.). Propagating the posterior through secondary models is also of interest. So-called quantities of interest (QoI) can be expressed by calculating the posterior expectation of suitable functions of the parameters $h(\boldsymbol{x})$, with $h: \mathcal{D}_{\boldsymbol{X}} \to \mathbb{R}$, as in:

$\mathbb{E}\left[h(\boldsymbol{X}) \mid \mathcal{Y}\right] = \int_{\mathcal{D}_{\boldsymbol{X}}} h(\boldsymbol{x})\,\pi(\boldsymbol{x} \mid \mathcal{Y})\,\mathrm{d}\boldsymbol{x}.$  (7)

Depending on the choice of $h$, this formulation encompasses posterior moments (e.g., $h(\boldsymbol{x}) = x_i$ or $h(\boldsymbol{x}) = x_i^2$ for the first and second moments, respectively), the posterior covariance ($h(\boldsymbol{x}) = (x_i - \mathbb{E}[X_i \mid \mathcal{Y}])(x_j - \mathbb{E}[X_j \mid \mathcal{Y}])$) or expectations of secondary models ($h(\boldsymbol{x}) = \mathcal{M}^{\star}(\boldsymbol{x})$).
3 Spectral function decomposition
To pose a Bayesian inversion problem, the specification of a prior distribution and a likelihood function described in the previous section is sufficient. Its solution, however, is not available in closed form in the general case.
Spectral likelihood expansion (SLE) is a recently proposed method that aims at solving the Bayesian inversion problem by finding a polynomial chaos expansion (PCE) of the likelihood function in a basis orthogonal w.r.t. the prior distribution (Nagel and Sudret, 2016). This representation allows one to derive analytical expressions for the evidence $Z$, the posterior distribution, the posterior marginals, and many types of QoIs, including the posterior moments.
We offer here a brief introduction to regression-based, sparse PCE before introducing SLE, but refer the interested reader to more exhaustive resources on PCE (Ghanem and Spanos, 1991; Xiu and Karniadakis, 2002) and sparse PCE (Xiu, 2010; Blatman and Sudret, 2010, 2011).
Let us consider a random vector $\boldsymbol{X}$ with independent components $X_i$, $i = 1, \ldots, M$, and associated probability density functions $\pi_i(x_i)$, so that $\pi(\boldsymbol{x}) = \prod_{i=1}^{M} \pi_i(x_i)$. Assume further that $f$ is a scalar function of $\boldsymbol{X}$ which fulfills the finite variance condition ($\mathbb{E}\left[f^2(\boldsymbol{X})\right] < \infty$). Then it is possible to find a so-called truncated polynomial chaos approximation of $f$ such that

$f(\boldsymbol{X}) \approx \sum_{\boldsymbol{\alpha} \in \mathcal{A}} a_{\boldsymbol{\alpha}}\,\Psi_{\boldsymbol{\alpha}}(\boldsymbol{X}),$  (8)

where $\boldsymbol{\alpha} \in \mathbb{N}^{M}$ is an $M$-tuple (multi-index), $\{\Psi_{\boldsymbol{\alpha}}\}$ are multivariate polynomials orthonormal w.r.t. $\pi(\boldsymbol{x})$, $a_{\boldsymbol{\alpha}}$ are the corresponding expansion coefficients and $\mathcal{A} \subset \mathbb{N}^{M}$ is a finite set of multi-indices. For most parametric distributions, well-known classical orthonormal polynomials satisfy the necessary orthonormality condition w.r.t. $\pi(\boldsymbol{x})$ (Xiu and Karniadakis, 2002). For more general distributions, arbitrary orthonormal polynomials can be constructed numerically through the Stieltjes procedure (Gautschi, 2004). If, additionally, $\mathcal{A}$ is a sparse subset of $\mathbb{N}^{M}$, the truncated expansion in Eq. (8) is called a sparse PCE.
In this contribution, we restrict the discussion to independent inputs, because it is computationally challenging, albeit possible, to construct an orthogonal basis for dependent inputs (e.g., through a Gram–Schmidt orthogonalization procedure). Furthermore, Torre et al. (2019) demonstrated that, for purely predictive purposes, ignoring the input dependence can significantly improve the predictive performance of PCE, at the cost of losing basis orthonormality. The latter, however, is required to derive analytical post-processing quantities such as the moments of the posterior distribution (see Sections 3.1 and 4.1).
There exist different algorithms to produce a sparse PCE in practice, i.e. to select a sparse basis and compute the corresponding coefficients. A powerful class of methods are regression-based approaches that rely on an initial input sample $\mathcal{X} = \{\boldsymbol{x}^{(1)}, \ldots, \boldsymbol{x}^{(N)}\}$, called the experimental design, and the corresponding model evaluations $\{f(\boldsymbol{x}^{(1)}), \ldots, f(\boldsymbol{x}^{(N)})\}$ (see, e.g., Lüthen et al. (2021) for a recent survey). Additionally, it is possible to design adaptive algorithms that choose the truncation set $\mathcal{A}$ and its size adaptively (Blatman and Sudret, 2011; Jakeman et al., 2015).
To assess the accuracy of a PCE, the so-called generalization error needs to be evaluated. A robust generalization error estimator is given by the leave-one-out (LOO) cross-validation technique. This estimator is obtained by

$\varepsilon_{\mathrm{LOO}} = \frac{1}{N} \sum_{i=1}^{N} \left( f(\boldsymbol{x}^{(i)}) - \widehat{f}^{\sim i}(\boldsymbol{x}^{(i)}) \right)^{2},$  (9)

where $\widehat{f}^{\sim i}$ is constructed by leaving out the $i$-th point from the experimental design $\mathcal{X}$. For methods based on linear regression, it can be shown (Chapelle et al., 2002; Blatman and Sudret, 2010) that the LOO error is available analytically by post-processing the regression matrix, without explicitly recomputing the $N$ expansions $\widehat{f}^{\sim i}$.
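To make the regression-based construction and the analytical LOO estimator concrete, the following minimal Python sketch fits a one-dimensional PCE by ordinary least squares and evaluates Eq. (9) from the hat-matrix diagonal. It assumes a uniform input on $[-1, 1]$ with a normalized Legendre basis and a synthetic model; it is an illustration only, not the adaptive sparse solver used later in the paper.

```python
import numpy as np

def legendre_basis(x, degree):
    """Normalized Legendre polynomials on [-1, 1], orthonormal w.r.t. the uniform density."""
    psi = np.empty((len(x), degree + 1))
    for n in range(degree + 1):
        coeffs = np.zeros(n + 1); coeffs[n] = 1.0
        # numpy's Legendre polynomials are orthogonal but not normalized; rescale by sqrt(2n+1)
        psi[:, n] = np.polynomial.legendre.legval(x, coeffs) * np.sqrt(2 * n + 1)
    return psi

def fit_pce_with_loo(x, y, degree):
    """Least-squares PCE coefficients and the analytical leave-one-out error (cf. Eq. (9))."""
    A = legendre_basis(x, degree)                      # regression matrix
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)     # PCE coefficients a_alpha
    # the hat-matrix diagonal h_i gives the LOO residuals without N refits
    H = A @ np.linalg.inv(A.T @ A) @ A.T
    loo_residuals = (y - A @ coeffs) / (1.0 - np.diag(H))
    return coeffs, np.mean(loo_residuals**2)

# usage: approximate a smooth test function from a small experimental design
rng = np.random.default_rng(0)
x_ed = rng.uniform(-1.0, 1.0, 30)                      # experimental design
y_ed = np.sin(np.pi * x_ed)                            # "model" evaluations
coeffs, eps_loo = fit_pce_with_loo(x_ed, y_ed, degree=8)
print(f"LOO error estimate: {eps_loo:.2e}")
```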
3.1 Spectral likelihood expansions
The idea of SLE is to use sparse PCE to find a spectral representation of the likelihood function occurring in Bayesian model inversion problems (see Eq. (2)). We present here a brief introduction to the method and the main results of Nagel and Sudret (2016).
Likelihood functions can be seen as scalar functions of the input random vector $\boldsymbol{X}$. In this work we assume priors of the type $\pi(\boldsymbol{x}) = \prod_{i=1}^{M} \pi_i(x_i)$, i.e. with independent marginals; their spectral expansion then reads:

$\mathcal{L}(\boldsymbol{x}) \approx \widehat{\mathcal{L}}(\boldsymbol{x}) = \sum_{\boldsymbol{\alpha} \in \mathcal{A}} a_{\boldsymbol{\alpha}}\,\Psi_{\boldsymbol{\alpha}}(\boldsymbol{x}),$  (10)

where the explicit dependence on the data $\mathcal{Y}$ was dropped for notational simplicity.
Upon computing the basis and coefficients in Eq. (8), the solution to the inverse problem is converted to merely post-processing the coefficients $a_{\boldsymbol{\alpha}}$. The following expressions can be derived for the individual quantities:
- Evidence
  The evidence emerges as the coefficient of the constant polynomial:
  $Z = a_{\boldsymbol{0}}.$  (11)
- Posterior
  Upon computing the evidence $Z = a_{\boldsymbol{0}}$, the posterior can be evaluated directly through
  $\pi(\boldsymbol{x} \mid \mathcal{Y}) = \dfrac{\pi(\boldsymbol{x})}{a_{\boldsymbol{0}}} \sum_{\boldsymbol{\alpha} \in \mathcal{A}} a_{\boldsymbol{\alpha}}\,\Psi_{\boldsymbol{\alpha}}(\boldsymbol{x}).$  (12)
- Posterior marginals
  We split the random vector $\boldsymbol{X}$ into two vectors $\boldsymbol{X}_u$ with components $\{X_i\}_{i \in u}$ and $\boldsymbol{X}_v$ with components $\{X_i\}_{i \in v}$, where $u$ and $v$ are two non-empty disjoint index sets such that $u \cup v = \{1, \ldots, M\}$. Denote further by $\pi_u(\boldsymbol{x}_u)$ and $\pi_v(\boldsymbol{x}_v)$ the prior marginal density functions of $\boldsymbol{X}_u$ and $\boldsymbol{X}_v$ respectively. The posterior marginals then read:
  $\pi(\boldsymbol{x}_u \mid \mathcal{Y}) = \dfrac{\pi_u(\boldsymbol{x}_u)}{a_{\boldsymbol{0}}} \sum_{\boldsymbol{\alpha} \in \mathcal{A}_u} a_{\boldsymbol{\alpha}}\,\Psi_{\boldsymbol{\alpha}}(\boldsymbol{x}_u),$  (13)
  where $\mathcal{A}_u = \{\boldsymbol{\alpha} \in \mathcal{A} : \alpha_i = 0 \ \forall i \in v\}$. The series in the above equation constitutes a subexpansion that contains non-constant polynomials only in the directions $u$.
- Quantities of interest
  Finally, it is also possible to analytically compute posterior expectations of functions $h$ that admit a polynomial chaos expansion in the same basis, of the form $h(\boldsymbol{x}) = \sum_{\boldsymbol{\alpha} \in \mathcal{A}} b_{\boldsymbol{\alpha}}\,\Psi_{\boldsymbol{\alpha}}(\boldsymbol{x})$. Eq. (7) then reduces to the spectral product:
  $\mathbb{E}\left[h(\boldsymbol{X}) \mid \mathcal{Y}\right] = \dfrac{1}{a_{\boldsymbol{0}}} \sum_{\boldsymbol{\alpha} \in \mathcal{A}} a_{\boldsymbol{\alpha}}\,b_{\boldsymbol{\alpha}}.$  (14)
The quality of these results depends only on the approximation error introduced in Eq. (10). The latter, in turn, depends mainly on the chosen PCE truncation strategy (Blatman and Sudret, 2011; Nagel and Sudret, 2016) and the number of points used to compute the coefficients (i.e., the experimental design). It is known that likelihood functions typically have quasi-compact supports (i.e., $\mathcal{L}(\boldsymbol{x}) \approx 0$ on a majority of the prior domain $\mathcal{D}_{\boldsymbol{X}}$). Such functions require a very high polynomial degree to be approximated accurately, which in turn can lead to the need for prohibitively large experimental designs.
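As an illustration of the post-processing results of this section, the following sketch builds a crude one-dimensional SLE for a synthetic Gaussian likelihood under a standard normal prior (orthonormal Hermite basis) and reads off the evidence and posterior density from the coefficients (cf. Eqs. (11)–(12)). The likelihood, degree and sample size are illustrative choices, not those of the original references.

```python
import numpy as np
from scipy.special import eval_hermitenorm, factorial
from scipy.stats import norm

def hermite_basis(x, degree):
    """Probabilists' Hermite polynomials, normalized to be orthonormal w.r.t. N(0,1)."""
    return np.column_stack([eval_hermitenorm(n, x) / np.sqrt(factorial(n))
                            for n in range(degree + 1)])

def likelihood(x):
    # synthetic Gaussian likelihood (stands in for Eq. (6)), centred at x = 0.9
    return norm.pdf(0.9, loc=x, scale=0.5)

degree = 20
x_ed = np.random.default_rng(1).normal(size=2000)      # experimental design from the prior
A = hermite_basis(x_ed, degree)
a, *_ = np.linalg.lstsq(A, likelihood(x_ed), rcond=None)

Z = a[0]                                               # evidence, cf. Eq. (11)
def posterior(x):                                      # posterior density, cf. Eq. (12)
    return norm.pdf(x) * (hermite_basis(x, degree) @ a) / Z

print(f"evidence estimate Z ~ {Z:.4f}")                # close to the exact integral of L * prior
```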
4 Stochastic spectral embedding
Stochastic spectral embedding (SSE) is a multi-level approach to surrogate modeling originally proposed in Marelli et al. (2021). It attempts to approximate a given square-integrable function $f$ with independent inputs $\boldsymbol{X} \sim \pi(\boldsymbol{x})$ through

$f(\boldsymbol{X}) \approx f_{\mathrm{SSE}}(\boldsymbol{X}) = \sum_{k \in \mathcal{K}} \mathbb{1}_{\mathcal{D}^{k}}(\boldsymbol{X})\,\widehat{\mathcal{R}}^{k}_{\mathcal{S}}(\boldsymbol{X}),$  (15)

where $\mathcal{K}$ is a set of multi-indices with elements $k = (\ell, p)$ for which subdomains $\mathcal{D}^{\ell, p} \subseteq \mathcal{D}_{\boldsymbol{X}}$ have been created, and where $L$ is the number of levels and $P_{\ell}$ is the number of subdomains at a specific level $\ell \in \{0, \ldots, L\}$. We call $\widehat{\mathcal{R}}^{k}_{\mathcal{S}}$ a residual expansion given by

$\widehat{\mathcal{R}}^{k}_{\mathcal{S}}(\boldsymbol{x}) = \sum_{\boldsymbol{\alpha} \in \mathcal{A}^{k}} a^{k}_{\boldsymbol{\alpha}}\,\Psi^{k}_{\boldsymbol{\alpha}}(\boldsymbol{x}),$  (16)

i.e. a truncated spectral expansion of the residual in the subdomain $\mathcal{D}^{k}$. In the present paper the term $\widehat{\mathcal{R}}^{k}_{\mathcal{S}}$ denotes a polynomial chaos expansion (see Eq. (8)) constructed in the subdomain $\mathcal{D}^{k}$, but in principle it can refer to any spectral expansion (e.g., Fourier series). A schematic representation of the summation in Eq. (15) is given in Figure 2. The detailed notation and the algorithm to sequentially construct an SSE are given in the sequel.
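Conceptually, an SSE is just a collection of subdomains with attached local expansions, and Eq. (15) is evaluated by summing the indicator-masked local predictions. The following sketch shows one possible (hypothetical) data structure for hyper-rectangular subdomains; it is not the implementation of Marelli et al. (2021).

```python
import numpy as np
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ResidualExpansion:
    """One term of Eq. (15): a subdomain and its local spectral (e.g. PCE) approximation."""
    lower: np.ndarray                                   # lower corners of the subdomain
    upper: np.ndarray                                   # upper corners of the subdomain
    expansion: Callable[[np.ndarray], np.ndarray]       # local residual expansion R_hat

def evaluate_sse(terms: List[ResidualExpansion], x: np.ndarray) -> np.ndarray:
    """Evaluate the SSE surrogate at points x (n_points x dim) by summing
    indicator-masked residual expansions over all subdomains."""
    out = np.zeros(len(x))
    for t in terms:
        inside = np.all((x >= t.lower) & (x < t.upper), axis=1)   # indicator of the subdomain
        if inside.any():
            out[inside] += t.expansion(x[inside])
    return out
```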
4.1 Stochastic spectral likelihood embedding
Viewing the likelihood $\mathcal{L}$ as a function of a random variable $\boldsymbol{X}$ with independent marginals, we can directly use Eq. (15) to write down its SSLE representation

$\mathcal{L}(\boldsymbol{x}) \approx \mathcal{L}_{\mathrm{SSLE}}(\boldsymbol{x}) = \sum_{k \in \mathcal{K}} \mathbb{1}_{\mathcal{D}^{k}}(\boldsymbol{x})\,\widehat{\mathcal{R}}^{k}_{\mathcal{S}}(\boldsymbol{x}),$  (17)

where the variable $\boldsymbol{X}$ is distributed according to the prior distribution and, consequently, the local basis used to compute $\widehat{\mathcal{R}}^{k}_{\mathcal{S}}$ is orthonormal w.r.t. that distribution.
Due to the local spectral properties of the residual expansions, the SSLE representation of the likelihood function retains all of the post-processing properties of SLE (Section 3.1):
- Evidence
  The normalization constant $Z$ emerges as the sum of the constant polynomial coefficients weighted by the prior mass:
  $Z = \sum_{k \in \mathcal{K}} \mathcal{V}^{k}\,a^{k}_{\boldsymbol{0}}, \qquad \mathcal{V}^{k} := \int_{\mathcal{D}^{k}} \pi(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}.$  (18)
- Posterior
  This allows us to write the posterior as
  $\pi(\boldsymbol{x} \mid \mathcal{Y}) = \dfrac{\pi(\boldsymbol{x})}{Z} \sum_{k \in \mathcal{K}} \mathbb{1}_{\mathcal{D}^{k}}(\boldsymbol{x})\,\widehat{\mathcal{R}}^{k}_{\mathcal{S}}(\boldsymbol{x}).$  (19)
- Posterior marginal
  Utilizing again the disjoint sets $u$ and $v$ from Eq. (13), it is also possible to analytically derive posterior marginal PDFs as
  $\pi(\boldsymbol{x}_u \mid \mathcal{Y}) = \dfrac{\pi_u(\boldsymbol{x}_u)}{Z} \sum_{k \in \mathcal{K}} \mathbb{1}_{\mathcal{D}^{k}_u}(\boldsymbol{x}_u)\,\mathcal{V}^{k}_{v}\,\widehat{\mathcal{R}}^{k}_{u}(\boldsymbol{x}_u),$  (20)
  where
  $\widehat{\mathcal{R}}^{k}_{u}(\boldsymbol{x}_u) = \sum_{\boldsymbol{\alpha} \in \mathcal{A}^{k}_u} a^{k}_{\boldsymbol{\alpha}}\,\Psi^{k}_{\boldsymbol{\alpha}}(\boldsymbol{x}_u), \qquad \mathcal{A}^{k}_u = \{\boldsymbol{\alpha} \in \mathcal{A}^{k} : \alpha_i = 0 \ \forall i \in v\},$  (21)
  is a subexpansion of $\widehat{\mathcal{R}}^{k}_{\mathcal{S}}$ that contains only non-constant polynomials in the directions $u$, $\mathcal{D}^{k} = \mathcal{D}^{k}_u \times \mathcal{D}^{k}_v$, and $\mathcal{V}^{k}_{v}$ is the prior mass of $\mathcal{D}^{k}_v$. Note that, as we assumed that the prior distribution has independent components, the constants $\mathcal{V}^{k}_{u}$ and $\mathcal{V}^{k}_{v}$ are obtained as products of univariate integrals which are available analytically from the prior marginal cumulative distribution functions (CDFs).
- Quantities of interest
  Expected values of functions $h: \mathcal{D}_{\boldsymbol{X}} \to \mathbb{R}$ under the posterior can be approximated by:
  $\mathbb{E}\left[h(\boldsymbol{X}) \mid \mathcal{Y}\right] \approx \dfrac{1}{Z} \sum_{k \in \mathcal{K}} \mathcal{V}^{k} \sum_{\boldsymbol{\alpha} \in \mathcal{A}^{k}} a^{k}_{\boldsymbol{\alpha}}\,b^{k}_{\boldsymbol{\alpha}},$  (22)
  where $b^{k}_{\boldsymbol{\alpha}}$ are the coefficients of the PCE of $h$ in the bases $\{\Psi^{k}_{\boldsymbol{\alpha}}\}$. This can also be used for computing posterior moments like mean, variance or covariance.
These expressions can be seen as a generalization of the ones for SLE detailed in Section 3.1. For a single-level global expansion (i.e., $\mathcal{K} = \{(0, 1)\}$ with $\mathcal{D}^{0,1} = \mathcal{D}_{\boldsymbol{X}}$ and consequently $\mathcal{V}^{0,1} = 1$), they are identical.
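The following minimal sketch illustrates Eqs. (18)–(19) for a one-dimensional standard normal prior: the prior mass of each terminal subdomain is obtained from the prior CDF, the evidence is the prior-mass-weighted sum of the constant coefficients, and the posterior density is the prior times the SSLE likelihood approximation divided by $Z$. The dictionary-based storage and the toy constant expansions are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def prior_mass(lo, hi):
    """V^k: prior probability mass of the subdomain, from the prior CDF."""
    return norm.cdf(hi) - norm.cdf(lo)

def evidence(subdomains):
    """Cf. Eq. (18): Z = sum over subdomains of prior mass times constant coefficient."""
    return sum(prior_mass(d["lo"], d["hi"]) * d["a0"] for d in subdomains)

def posterior_pdf(x, subdomains):
    """Cf. Eq. (19): prior times the SSLE likelihood approximation, normalized by Z."""
    Z = evidence(subdomains)
    lik = np.zeros_like(x, dtype=float)
    for d in subdomains:
        inside = (x >= d["lo"]) & (x < d["hi"])
        lik[inside] += d["expansion"](x[inside])
    return norm.pdf(x) * lik / Z

# toy example: two terminal subdomains with constant local expansions
subdomains = [
    {"lo": -np.inf, "hi": 0.0, "a0": 0.1, "expansion": lambda x: np.full_like(x, 0.1)},
    {"lo": 0.0, "hi": np.inf, "a0": 0.5, "expansion": lambda x: np.full_like(x, 0.5)},
]
print(evidence(subdomains))   # 0.5*0.1 + 0.5*0.5 = 0.3
```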
4.2 Modifications to the original algorithm
The original algorithm for computing an SSE was presented in Marelli et al. (2021). It recursively partitions the input domain and constructs truncated expansions of the residual. We reproduce it below for reference, but replace the model $f$ with the likelihood function $\mathcal{L}$. We further simplify the algorithm by choosing a partitioning strategy with $N_{\mathrm{S}} = 2$ subdomains per split.
1. Initialization:
   (a) $\ell = 0$, $p = 1$, $\mathcal{D}^{0,1} = \mathcal{D}_{\boldsymbol{X}}$
   (b) Initialize the residual with the likelihood, $\mathcal{R}^{0,1} = \mathcal{L}$
   (c) Initialize the set of expansions, $\mathcal{K} = \emptyset$
2. For each subdomain $\mathcal{D}^{\ell, p}$, $k = (\ell, p)$:
   (a) Calculate the truncated expansion $\widehat{\mathcal{R}}^{\ell, p}_{\mathcal{S}}$ of the residual in the current subdomain
   (b) Update the residual in the current subdomain
   (c) Split the current subdomain in $N_{\mathrm{S}} = 2$ subdomains based on a partitioning strategy
   (d) If the termination criterion is not met, go back to 2a for the newly created subdomains, otherwise terminate the algorithm
3. Termination
   (a) Return the full sequence of $\mathcal{D}^{\ell, p}$ and $\widehat{\mathcal{R}}^{\ell, p}_{\mathcal{S}}$ needed to compute Eq. (15).
In practice, the residual expansions are computed using a fixed experimental design $\mathcal{X}$ sampled from the prior and the corresponding likelihood evaluations. The algorithm then only requires the specification of a partitioning strategy and a termination criterion, as detailed in Marelli et al. (2021).
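A compact, runnable one-dimensional toy of this recursive construction is sketched below (uniform prior on [0, 1], degree-0 local expansions given by the local residual mean, dyadic splits, fixed experimental design). It only illustrates the control flow of the steps above; the sparse-PCE machinery and the actual partitioning and termination criteria are deliberately simplified.

```python
import numpy as np

def build_sse_1d(x_ed, y_ed, max_level=4):
    """Non-adaptive SSE toy: recursively expand the residual and split the domain."""
    residual = y_ed.astype(float).copy()          # residual initialized with f (here: L)
    queue, terms = [(0, (0.0, 1.0))], []
    while queue:
        level, (lo, hi) = queue.pop(0)
        idx = (x_ed >= lo) & (x_ed < hi)
        if not idx.any():
            continue
        coeff = residual[idx].mean()              # step 2(a): degree-0 residual expansion
        residual[idx] -= coeff                    # step 2(b): update the residual
        terms.append((lo, hi, coeff))
        if level < max_level:                     # step 2(d): simple depth-based termination
            mid = 0.5 * (lo + hi)                 # step 2(c): split into two subdomains
            queue += [(level + 1, (lo, mid)), (level + 1, (mid, hi))]
    return terms

def evaluate(terms, x):
    """Cf. Eq. (15): sum of indicator-masked local expansions."""
    out = np.zeros_like(x, dtype=float)
    for lo, hi, coeff in terms:
        out[(x >= lo) & (x < hi)] += coeff
    return out

x_ed = np.random.default_rng(0).uniform(size=200)
y_ed = np.exp(-((x_ed - 0.3) / 0.05) ** 2)        # a peaked, likelihood-like function
terms = build_sse_1d(x_ed, y_ed)
print(np.abs(y_ed - evaluate(terms, x_ed)).max()) # remaining residual on the design
```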
Likelihood functions are typically characterized by a localized behaviour: close to the data-generating parameters they peak, while quickly decaying to zero in the remainder of the prior domain. This means that in a majority of the domain the likelihood evaluations are non-informative. Directly applying the original algorithm is then expected to waste many likelihood evaluations.
We therefore modify the original algorithm by adding an adaptive sampling scheme (Section 4.2.1) that includes the termination criterion and introducing an improved partitioning strategy (Section 4.2.2) that is especially suitable for finding compact support features. The rationale for these modifications is presented next.
4.2.1 Adaptive sampling scheme
The proposed algorithm has two parameters: the experimental design size $N_{\mathrm{ref}}$ used for the construction of each residual expansion, and the final experimental design size corresponding to the available computational budget. At the initialization of the algorithm, $N_{\mathrm{ref}}$ points are sampled as a first experimental design. At every further iteration, additional points are then sampled from the prior distribution. These samples are generated in a space-filling way (e.g. through Latin hypercube sampling) in the newly created subdomains, so that exactly $N_{\mathrm{ref}}$ points are always available for constructing the residual expansions. The algorithm is terminated once the computational budget has been exhausted.
Adding new sample points requires the evaluation of the likelihood function. Because multiple points are added at every iteration of the algorithm, this step can easily be performed simultaneously on multiple computational nodes.
At every step, the proposed algorithm chooses a single refinement domain from the set of unsplit, i.e. terminal, domains, creates two new subdomains by splitting the refinement domain and constructs residual expansions after enriching the experimental design. The selection of this refinement domain is based on the error estimator $E^{k}$ defined by

$E^{k} = \begin{cases} \mathcal{V}^{k}\,\varepsilon^{k}_{\mathrm{LOO}}, & \text{if a residual expansion was constructed in } \mathcal{D}^{k}, \\ \mathcal{V}^{k}\,\varepsilon^{k^{-}}_{\mathrm{LOO}}, & \text{otherwise, where } k^{-} \text{ denotes the parent domain of } \mathcal{D}^{k}. \end{cases}$  (23)

This estimator incorporates the subdomain size, through the prior mass $\mathcal{V}^{k}$, and the approximation accuracy, through the leave-one-out estimator $\varepsilon^{k}_{\mathrm{LOO}}$ from Eq. (9). The case distinction is necessary to assign an error estimator also to domains that have too few points to construct a residual expansion, in which case the error estimator of the previous level is reused.
The algorithm sequentially splits and refines subdomains with large approximation errors. Because likelihood functions typically have the highest complexity close to their peak, these regions tend to have larger approximation errors and are therefore predominantly picked for refinement. The proposed way of adaptive sampling then ends up placing more points near the likelihood peak, thereby reducing the number of non-informative likelihood evaluations.
The choice of a constant $N_{\mathrm{ref}}$ is simple and could in principle be replaced by a more elaborate strategy (e.g., based on the approximation error of the current subdomain relative to the total approximation error). A benefit of this enrichment criterion is that all residual expansions are computed with experimental designs of the same size. Upon choosing the domain with the maximum approximation error among the terminal domains, the error estimators then have a more comparable estimation accuracy.
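The refinement-domain selection can be summarized in a few lines: compute Eq. (23) for every terminal domain, reusing the parent's LOO error where no expansion could be built, and pick the domain with the largest value. The dictionary fields and numbers below are illustrative.

```python
# sketch of the refinement-domain selection based on Eq. (23): prior mass times LOO error,
# with the parent's LOO error reused when a domain had too few points to expand
def error_estimator(domain):
    loo = domain["loo"] if domain["loo"] is not None else domain["parent_loo"]
    return domain["prior_mass"] * loo

def pick_refinement_domain(terminal_domains):
    return max(terminal_domains, key=error_estimator)

terminal_domains = [
    {"name": "D_1", "prior_mass": 0.50, "loo": 1e-3, "parent_loo": None},
    {"name": "D_2", "prior_mass": 0.25, "loo": 4e-2, "parent_loo": None},
    {"name": "D_3", "prior_mass": 0.25, "loo": None, "parent_loo": 2e-3},  # too few points
]
print(pick_refinement_domain(terminal_domains)["name"])   # D_2: largest V * eps_LOO
```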
4.2.2 Partitioning strategy
The partitioning strategy determines how a selected refinement domain is split. As described in Marelli et al. (2021), it is easy to define the splitting in the uniformly distributed quantile space and map the resulting split domains to the (possibly unbounded) real space through an appropriate isoprobabilistic transform (e.g., the Rosenblatt transform (Rosenblatt, 1952)).
Similar to the original SSE algorithm presented in Marelli et al. (2021), we split the refinement domain in half w.r.t. its prior mass. The original algorithm chooses the splitting direction based on the partial variance in the refinement domain. This approach is well suited for generic function approximation problems. For the approximation of likelihood functions, however, we propose a partitioning strategy that is more apt for dealing with their compact support.
We propose to pick the split direction along which a split yields a maximum difference in the residual empirical variance between the two candidate subdomains created by the split. This can easily be visualized with the two-dimensional example shown in panel (a) of the splitting illustration below. Assume this subdomain was selected as the refinement domain. To decide along which dimension to split, we construct, for every direction $i$, the candidate subdomain pair $\{\mathcal{D}_1^{i}, \mathcal{D}_2^{i}\}$ and estimate the corresponding variance difference

$\Delta\mathcal{V}^{i} = \left| \operatorname{Var}\!\left[\mathcal{R} \mid \mathcal{D}_1^{i}\right] - \operatorname{Var}\!\left[\mathcal{R} \mid \mathcal{D}_2^{i}\right] \right|.$  (24)

In this expression, $\mathcal{X}_1^{i}$ and $\mathcal{X}_2^{i}$ denote subsets of the experimental design inside the subdomains $\mathcal{D}_1^{i}$ and $\mathcal{D}_2^{i}$ respectively. The occurring variances can easily be estimated with the empirical variance of the residuals in the respective candidate subdomains.
After computing the residual variance differences, the split is carried out along the dimension

$i^{\star} = \arg\max_{i = 1, \ldots, M} \Delta\mathcal{V}^{i},$  (25)

i.e., so as to keep the subdomains $\mathcal{D}_1^{i^{\star}}$ and $\mathcal{D}_2^{i^{\star}}$ that introduce the largest difference in variance. For the present example, the resulting split is shown in panel (d) of the illustration below.
[Figure: illustration of the partitioning strategy on a two-dimensional refinement domain, panels (a)–(d).]
The choice of this partitioning strategy can be justified heuristically by the goal of approximating compact-support functions. Assuming that the likelihood function has compact support, this criterion will avoid cutting through its support and will instead identify a split direction that results in one subdomain with large variance (expected to contain the likelihood support) and one subdomain with small variance. In subsequent steps, the algorithm will proceed by cutting away low-variance subdomains until the likelihood support is isolated.
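The following sketch implements the split-direction selection of Eqs. (24)–(25) with empirical variances; for simplicity it splits at the midpoint of the input-space bounds rather than at the prior-mass median in the quantile space. The toy residual is chosen so that the compact feature is active only along the first dimension.

```python
import numpy as np

def select_split_direction(x_ed, residual, lower, upper):
    """Return the dimension maximizing the residual variance difference (cf. Eqs. (24)-(25))."""
    deltas = []
    for i in range(x_ed.shape[1]):
        mid = 0.5 * (lower[i] + upper[i])
        left = x_ed[:, i] < mid
        var_left = np.var(residual[left]) if left.sum() > 1 else 0.0
        var_right = np.var(residual[~left]) if (~left).sum() > 1 else 0.0
        deltas.append(abs(var_left - var_right))      # Delta V_i, cf. Eq. (24)
    return int(np.argmax(deltas))                     # cf. Eq. (25)

# toy example: a compact, likelihood-like bump only active along dimension 0
rng = np.random.default_rng(0)
x = rng.uniform(size=(500, 2))
res = np.exp(-((x[:, 0] - 0.8) / 0.05) ** 2)          # residual concentrated near x_0 = 0.8
print(select_split_direction(x, res, np.zeros(2), np.ones(2)))   # expected: 0
```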
4.2.3 The adaptive SSLE algorithm
The algorithm is presented here with its two parameters: $N_{\mathrm{ref}}$, the minimum experimental design size needed to expand a residual, and the final experimental design size (the total computational budget). The sample $\mathcal{X}^{k}$ refers to $\mathcal{X} \cap \mathcal{D}^{k}$, i.e. the subset of $\mathcal{X}$ inside $\mathcal{D}^{k}$. Further, the multi-index set $\mathcal{T}$ at each step of the algorithm gathers all indices of unsplit subdomains. It thus denotes the terminal domains $\{\mathcal{D}^{k}\}_{k \in \mathcal{T}}$. For visualization purposes we show the first iterations of the algorithm for a two-dimensional example in Figure 2.
1. Initialization:
   (a) $\ell = 0$, $p = 1$, $\mathcal{D}^{0,1} = \mathcal{D}_{\boldsymbol{X}}$
   (b) Sample $N_{\mathrm{ref}}$ points $\mathcal{X}$ from the prior distribution
   (c) Calculate the truncated expansion $\widehat{\mathcal{R}}^{0,1}_{\mathcal{S}}$ of $\mathcal{L}$ in the full domain $\mathcal{D}^{0,1}$, retrieve its approximation error $E^{0,1}$ and initialize the residual
   (d) Initialize the set of terminal domains, $\mathcal{T} = \{(0, 1)\}$
2. For the refinement domain $\mathcal{D}^{k^{\star}}$ with $k^{\star} = \arg\max_{k \in \mathcal{T}} E^{k}$:
   (a) Split the current subdomain in $N_{\mathrm{S}} = 2$ sub-parts and update $\mathcal{T}$
   (b) For each split $s = 1, 2$:
      i. If the new subdomain contains fewer than $N_{\mathrm{ref}}$ points and the computational budget is not exhausted:
         A. Enrich the sample $\mathcal{X}$ with new points inside the subdomain
      ii. If the new subdomain contains $N_{\mathrm{ref}}$ points:
         A. Create the truncated expansion of the residual in the current subdomain using the local sample $\mathcal{X}^{k}$
         B. Update the residual in the current subdomain
      iii. Retrieve the approximation error from Eq. (23)
   (c) If no new expansions were created, terminate the algorithm, otherwise go back to 2
3. Termination
   (a) Return the full sequence of $\mathcal{D}^{\ell, p}$ and $\widehat{\mathcal{R}}^{\ell, p}_{\mathcal{S}}$ needed to compute Eq. (15)
The updating of the multi-index set $\mathcal{T}$ in Step 2a refers to removing the current index $k^{\star}$ from the set and adding to it the two newly created indices of the split subdomains.
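To tie the steps together, the sketch below is a runnable one-dimensional toy of the adaptive loop: it works in the quantile space of a standard normal prior, uses degree-0 local expansions and a residual-variance proxy in place of the LOO-based estimator of Eq. (23), and enriches the experimental design to $N_{\mathrm{ref}}$ points per new subdomain until the budget is exhausted. All names and settings are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N_ref, N_total = 10, 150
lik = lambda u: norm.pdf(norm.ppf(u), loc=1.5, scale=0.2)     # stand-in likelihood (quantile space)

def sse_eval(terms, u):
    """Current SSE prediction at quantile-space points u (sum of covering constants)."""
    out = np.zeros_like(u)
    for (lo, hi), c in terms:
        out[(u >= lo) & (u < hi)] += c
    return out

X = rng.uniform(size=N_ref)                                   # step 1(b): initial design
R = lik(X)                                                    # residual initialized with L
terms, terminal = [], []

def expand(dom):
    """Simplified steps 2(b)ii-iii: degree-0 expansion, residual update, error estimate."""
    idx = (X >= dom[0]) & (X < dom[1])
    if not idx.any():
        return
    c = R[idx].mean()
    R[idx] -= c
    terms.append((dom, c))
    terminal.append((dom, (dom[1] - dom[0]) * R[idx].var()))  # prior mass x error proxy

expand((0.0, 1.0))                                            # step 1(c)-(d)
while len(X) < N_total:
    k = int(np.argmax([e for _, e in terminal]))              # pick the refinement domain
    (lo, hi), _ = terminal.pop(k)
    mid = 0.5 * (lo + hi)                                     # step 2(a): split by prior mass
    for sub in [(lo, mid), (mid, hi)]:
        n_in = int(((X >= sub[0]) & (X < sub[1])).sum())
        n_new = min(N_ref - n_in, N_total - len(X))
        if n_new > 0:                                         # step 2(b)i: enrich the design
            new = rng.uniform(sub[0], sub[1], size=n_new)
            X = np.concatenate([X, new])
            R = np.concatenate([R, lik(new) - sse_eval(terms, new)])
        expand(sub)
```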
[Figure 2: first iterations of the adaptive algorithm for a two-dimensional example.]
4.2.4 Convergence of the adaptive SSLE algorithm
Convergence of the original SSE algorithm is guaranteed in the mean-square sense by the spectral convergence in each subdomain (Marelli et al., 2021). This convergence property directly applies to the SSLE of any likelihood function since, owing to the existence of a finite maximum likelihood value, likelihood functions fulfil the finite variance condition (Nagel and Sudret, 2016).
This result cannot be directly extended to the present adaptive (greedy) setting, because a combination of parameters might in general lead to a whole subdomain not being explored further. This can only happen, however, if the error estimator tremendously underestimates the actual error in a terminal domain. The choice of the leave-one-out cross-validation error estimator (see Eq. (23)), with its tendency to avoid overfitting, reduces significantly the probability of this scenario. Further investigations in this direction are ongoing.
5 Case studies
To showcase the effectiveness of the proposed SSLE approach, we present three case studies with different types of likelihood complexity: (i) a one-dimensional vibration problem with a bimodal posterior, (ii) a six-dimensional heat transfer problem that exhibits high posterior concentrations (i.e. a highly informative likelihood) and (iii) a high-dimensional diffusion problem with low active dimensionality that models concentration-driven diffusion in a one-dimensional domain.
For all case studies, we adopt the adaptive sparse-PCE approach based on least angle regression (LARS) developed in Blatman and Sudret (2011), through its numerical implementation in UQLab (Marelli and Sudret, 2014, 2019). Each residual expansion $\widehat{\mathcal{R}}^{k}_{\mathcal{S}}$ is therefore a degree- and $q$-norm-adaptive polynomial chaos expansion. We further introduce a rank truncation of $r = 2$, i.e. we limit the maximum number of input interactions (Marelli and Sudret, 2019) to two variables at a time. The truncation set for each spectral expansion (Eq. (8)) thus reads:

$\mathcal{A}^{p, q, r} = \left\{ \boldsymbol{\alpha} \in \mathbb{N}^{M} : \ \|\boldsymbol{\alpha}\|_{q} \le p, \ \|\boldsymbol{\alpha}\|_{0} \le r \right\},$  (26)

where

$\|\boldsymbol{\alpha}\|_{q} = \Big( \sum_{i=1}^{M} \alpha_i^{q} \Big)^{1/q}, \qquad \|\boldsymbol{\alpha}\|_{0} = \#\{ i : \alpha_i > 0 \}.$  (27)

The $q$-norm and the maximum polynomial degree $p$ are adaptively increased within prescribed ranges, with a lower maximum degree used for case study (iii) than for case studies (i) and (ii), due to its high dimensionality.
In case studies (ii) and (iii), the performance of SLE (Nagel and Sudret, 2016), the original non-adaptive SSLE (Marelli et al., 2021) and the proposed adaptive SSLE approach presented in Section 4.2 is compared. The comparison was omitted for case study (i), because only adaptive SSLE succeeded in solving the problem. For clarity, we henceforth abbreviate the adaptive SSLE algorithm to adSSLE.
To simplify the comparison, the same partitioning strategy employed for adSSLE (Section 4.2.2) was also employed for the non-adaptive SSLE approach. Also, the same experimental designs were used for the non-adaptive SSLE and the SLE approaches. Finally, the same $N_{\mathrm{ref}}$ parameter was used to define the enrichment samples in adSSLE and the termination criterion in non-adaptive SSLE.
Because the considered case studies do not admit a closed form solution, we validate the algorithm instead through MCMC reference solutions with long chain lengths to produce samples from the posterior distributions. In this respect, identifiability of the “true” underlying data-generating model, while clearly important in inverse problems in general, is not central in the present discussion. As a proxy for identifiability, however, we consider several different case studies with more or less informative (peaked) likelihood and posterior distributions.
To assess the performance of the three algorithms considered, we define an error measure that allows us to quantitatively compare the similarity of the SSLE, adSSLE and SLE solutions with the reference MCMC solution. This comparison is inherently difficult, as a sampling-based approach (MCMC) needs to be compared to a functional approximation (SSLE, adSSLE, SLE). We proceed to compare the univariate posterior marginals, available analytically in SSLE, adSSLE and SLE (see Eq. (13) and Eq. (20)), to the reference posterior marginals estimated with kernel density estimation (KDE, Wand and Jones (1995)) from the MCMC sample. Denoting by $\hat{\pi}(x_i \mid \mathcal{Y})$ the SSLE, adSSLE or SLE approximations and by $\pi^{\mathrm{ref}}(x_i \mid \mathcal{Y})$ the reference solution, we define the following error measure
$\varepsilon = \frac{1}{M} \sum_{i=1}^{M} D_{\mathrm{JS}}\!\left( \hat{\pi}(x_i \mid \mathcal{Y}),\ \pi^{\mathrm{ref}}(x_i \mid \mathcal{Y}) \right),$  (28)

where $M$ is the dimensionality of the problem and $D_{\mathrm{JS}}$ is the Jensen–Shannon divergence (Lin, 1991), a symmetric and bounded distance measure for probability distributions based on the Kullback–Leibler divergence.
The purpose of the error measure $\varepsilon$ is to allow for a fair comparison between the different methods investigated. It is not a practical measure for engineering applications, because it relies on the availability of a reference solution and its magnitude does not have a clear quantitative interpretation. However, it is considerably more comprehensive than a pure moment-based error measure: because it is averaged over all marginals, it encapsulates the approximation accuracy of all univariate posterior marginals in a scalar value.
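A possible implementation of the error measure of Eq. (28) is sketched below: each analytical marginal is compared on a common grid to a KDE of the corresponding MCMC sample, and the Jensen–Shannon divergences are averaged over the dimensions. The grid, KDE settings and the toy check are illustrative.

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

def js_divergence(p, q, dx):
    """Jensen-Shannon divergence between two densities tabulated on a common grid."""
    p, q = p / (p.sum() * dx), q / (q.sum() * dx)          # renormalize on the grid
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask])) * dx
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def error_measure(marginal_pdfs, mcmc_samples, grid):
    """Cf. Eq. (28): average JS divergence over the M univariate marginals."""
    dx = grid[1] - grid[0]
    divs = []
    for i, pdf in enumerate(marginal_pdfs):
        ref = gaussian_kde(mcmc_samples[:, i])(grid)       # KDE reference marginal
        divs.append(js_divergence(pdf(grid), ref, dx))
    return np.mean(divs)

# toy check: identical Gaussian marginals should give a divergence near zero
grid = np.linspace(-5, 5, 1000)
samples = np.random.default_rng(0).normal(size=(20000, 2))
print(error_measure([norm.pdf, norm.pdf], samples, grid))
```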
As all algorithms (SSLE, adSSLE, SLE) depend on a randomly chosen experimental design, we produce replications for case studies (ii) and (iii) by running them multiple times. The computational cost of all examples is given in terms of the number of likelihood evaluations, as these dominate the total computational cost in any engineering setting. All computations shown were carried out on a standard laptop, with run times including both forward model evaluations and the overall algorithmic overhead.
5.1 1-dimensional vibration problem
In this first case study, the goal is the inference of a single unknown parameter with a multimodal posterior distribution. This problem is difficult to solve with standard MCMC methods due to the presence of a probability valley, i.e. a low-probability region, between essentially disjoint posterior peaks. It also serves as an illustrative example of how the proposed adaptive algorithm constructs an adSSLE. The problem is fabricated and uses synthetic data, but is presented in the context of a relevant engineering problem.
Consider the oscillator displayed in Figure 3, subject to harmonic (i.e., sinusoidal) excitation. The prior information about its stiffness $k$ is that it follows a lognormal distribution. Its true value shall be determined using measurements of the oscillation amplitude at the location of the mass. The known properties of the oscillator system are the oscillator mass $m$, the excitation frequency $\omega$ and the viscous damping coefficient $c$. The oscillation amplitude is measured in five independent oscillation events and normalized by the forcing amplitude, yielding the measured amplitude ratios $\mathcal{Y} = \{\nu^{(1)}, \ldots, \nu^{(5)}\}$.
[Figure 3: sketch of the single-degree-of-freedom oscillator.]
This problem is well known in mechanics and in the linear case (i.e., assuming small deformations and linear material behaviour) can be solved analytically with the amplitude of the frequency response function. This function returns the ratio between the steady-state amplitude of a linear oscillator and the amplitude of its excitation. It is given by

$\nu(k) = \frac{1}{\sqrt{\left(k - m\,\omega^{2}\right)^{2} + \left(c\,\omega\right)^{2}}}.$  (29)
We assume a Gaussian additive discrepancy model with known discrepancy standard deviation $\sigma$. In conjunction with the available measurements $\mathcal{Y}$, this leads to the following likelihood function:

$\mathcal{L}(k) = \prod_{i=1}^{5} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left( -\frac{\left(\nu^{(i)} - \nu(k)\right)^{2}}{2\sigma^{2}} \right).$  (30)
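The bimodality of this problem can be reproduced with a few lines: two stiffness values, one below and one above resonance, yield the same amplitude ratio and hence the same likelihood. The constants and synthetic measurements below are illustrative placeholders, not the values used in the case study.

```python
import numpy as np
from scipy.stats import norm

# illustrative constants: mass, damping, excitation frequency, discrepancy std
m, c, omega, sigma = 1.0, 0.05, 10.0, 0.01

def amplitude_ratio(k):
    """Steady-state amplitude of a linear SDOF oscillator per unit forcing amplitude (cf. Eq. (29))."""
    return 1.0 / np.sqrt((k - m * omega**2) ** 2 + (c * omega) ** 2)

def likelihood(k, measurements):
    """Gaussian likelihood of the measured amplitude ratios (cf. Eq. (30))."""
    return np.prod(norm.pdf(measurements[:, None], loc=amplitude_ratio(k), scale=sigma), axis=0)

# two stiffness values (below and above resonance at k = m*omega^2) give the same ratio
k_grid = np.linspace(50.0, 150.0, 1000)
measurements = np.full(5, amplitude_ratio(80.0))    # synthetic, noise-free ratios
L = likelihood(k_grid, measurements)
print(k_grid[np.argsort(L)[-5:]])                   # highest-likelihood points near k = 80 and k = 120
```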
We employ the adSSLE algorithm to approximate this likelihood function. A few iterations from the solution process are shown in Figure 4. The top plots show the subdomains constructed at each refinement step, highlighting the terminal domains. The middle plots display the residual between the true likelihood and the approximation at the current iteration, as well as the adaptively chosen experimental design. The bottom plots display the target likelihood function and its current approximation.
The initial global approximation of the first iteration in Figure 4a is a constant polynomial based on the initial experimental design. By the third iteration, the algorithm has identified the subdomain of interest and proceeds to refine it in subsequent steps. After a few more iterations both likelihood peaks have been identified. Finally, by the last iteration shown in Figure 4d, both likelihood peaks are approximated well by the adSSLE approach.
The last iteration also shows how the algorithm splits domains and adds new sample points: there is a clear clustering of subdomains and sample points near the two likelihood peaks.
[Figure 4: iterations of the adSSLE algorithm for the oscillator problem: subdomains (top), residual and experimental design (middle), likelihood function and its approximation (bottom).]
The results from Eq. (22) show that, without further computations, it would be possible to directly extract the posterior moments by post-processing the SSLE coefficients. In the present bimodal case, however, the posterior moments are not very meaningful. Instead, the available posterior approximation gives a full picture of the inferred parameter $k$. It is shown together with the true posterior and the original prior distribution in Figure 5.
[Figure 5: prior distribution, true posterior and adSSLE posterior approximation of the stiffness parameter.]
For this case study, non-adaptive experimental design approaches like the standard SSLE (Marelli et al., 2021) and the original SLE algorithm (Nagel and Sudret, 2016) will almost surely fail at the considered experimental design size. In numerous trial runs these approaches did not manage to accurately reconstruct the likelihood function, due to a lack of informative samples near the likelihood peaks.
5.2 Moderate-dimensional heat transfer problem
This case study is a complex engineering problem that can be modified to exhibit high posterior concentration in the prior domain. It was originally presented in Nagel and Sudret (2016) and solved there using SLE. We again solve the same problem with SSLE and compare the performance of SLE (Nagel and Sudret, 2016), the original non-adaptive SSLE (Marelli et al., 2021) and the proposed adSSLE approach presented in Section 4.2. To investigate the performance of the algorithm in the case of high posterior concentrations (e.g. due to a large data-set), two instances of the problem with different discrepancy parameters are investigated.
Consider the diffusion-driven stationary heat transfer problem sketched in Figure 6a. It models a 2D plate with a background matrix of constant thermal conductivity and six inclusions with unknown conductivities. The diffusion-driven steady-state heat distribution is described by a heat equation in Euclidean coordinates of the form

$\nabla \cdot \left( \kappa(\boldsymbol{z})\,\nabla T(\boldsymbol{z}) \right) = 0,$  (31)
where the thermal conductivity field is denoted by $\kappa(\boldsymbol{z})$ and the temperature field by $T(\boldsymbol{z})$. The boundary conditions of the plate are given by no-heat-flux Neumann boundary conditions on the left and right sides, a prescribed-flux Neumann boundary condition on the bottom and a prescribed-temperature Dirichlet boundary condition on the top.
We employ a finite element (FE) solver to solve the weak form of Eq. (31), discretizing the domain into a mesh of triangular elements. A sample solution returned by the FE solver is shown in Figure 6b.
[Figure 6: (a) setup of the heat transfer problem with the inclusions and measurement locations; (b) sample finite element solution of the temperature field.]
In this example we intend to infer the thermal conductivities of the inclusions. We assume the same problem constants as in Nagel and Sudret (2016). The forward model takes as input the conductivities of the inclusions $\boldsymbol{x} = (\kappa_1, \ldots, \kappa_6)$, solves the finite element problem and returns the steady-state temperature at the measurement points.
To solve the inverse problem, we assume a multivariate lognormal prior distribution with independent marginals on the inclusion conductivities. We further assume an additive Gaussian discrepancy model, which yields the likelihood function

$\mathcal{L}(\boldsymbol{x}) = \mathcal{N}\!\left( \boldsymbol{y} \mid \mathcal{M}(\boldsymbol{x}),\ \sigma^{2}\boldsymbol{I} \right),$  (32)

with a discrepancy standard deviation $\sigma$.
As measurements, we generate one temperature field with known inclusion conductivities and collect its values at the measurement points indicated by black dots in Figure 6a. We then perturb these temperature values with additive Gaussian noise and use them as the inversion data $\boldsymbol{y}$.
We look at two instances of this problem that differ only by the discrepancy parameter $\sigma$ from Eq. (32). We solve the problem first with a large value of $\sigma$ relative to the prior model response variability, and second with a small value. As the discrepancy standard deviation determines how peaked the likelihood function is, the first problem has a likelihood function with a much wider support and is in turn significantly easier to solve than the second one. It is noted here that, in practice, the peakedness of the likelihood function is increased either by a smaller discrepancy standard deviation or by the inclusion of additional experimental data.
To monitor the dependence of the algorithms on the number of likelihood evaluations, we solve both problems with a set of increasing maximum numbers of likelihood evaluations. The number of refinement samples $N_{\mathrm{ref}}$ is kept fixed for all runs.
As a benchmark, we use reference posterior samples generated by the affine-invariant ensemble sampler MCMC algorithm (Goodman and Weare, 2010) with a large number of steps and parallel chains. Based on numerous heuristic convergence tests and due to the large number of MCMC steps, the resulting samples can be considered to accurately represent the true posterior distributions.
The results of the analyses are summarized in Figure 7, where the error measure $\varepsilon$ is plotted against the number of likelihood evaluations for the large and small standard deviation cases. For the large discrepancy standard deviation case, both SSLE approaches clearly outperform standard SLE w.r.t. the error measure $\varepsilon$. This is most significant at mid-range experimental designs, where SLE does not reach the required high degrees and fails to accurately approximate the likelihood function. At larger experimental designs, SLE catches up to non-adaptive SSLE but is still outperformed by the proposed adSSLE approach. The real strength of the adaptive algorithm shows in the case of a small discrepancy standard deviation, where the limitations of fixed experimental designs become obvious. When the likelihood function is non-zero only in a small subdomain of the prior, the global SLE and non-adaptive SSLE approaches fail in practice because of the insufficient number of samples placed in the informative regions. The adSSLE approach, however, works very well in these types of problems. It manages to identify the regions of interest and produces a likelihood approximation that accurately reproduces the posterior marginals.
[Figure 7: error measure $\varepsilon$ vs. number of likelihood evaluations for the large (left) and small (right) discrepancy standard deviation cases.]
Tables 1 and 2 show the convergence of the adSSLE moment estimates (mean and variance) to the reference solution for a single run. In brackets next to the moment estimates, the relative error is also shown. Due to the non-strict positivity of the SSLE estimate, one variance estimate computed with Eq. (22) is negative and is therefore omitted from Table 2.
[Table 1: adSSLE and MCMC posterior mean estimates of the inclusion conductivities, with relative errors in brackets.]
[Table 2: adSSLE and MCMC posterior variance estimates of the inclusion conductivities, with relative errors in brackets.]
The full posterior marginals obtained from one run of adSSLE are also compared to those of the reference MCMC solution and displayed in Figure 8. The individual plots show the univariate posterior marginals on the main diagonal and the bivariate posterior marginals in the off-diagonal panels. It can be clearly seen that the posterior characteristics are very well captured. However, the adSSLE approach sometimes fails to accurately represent the tails of the distribution. This is especially obvious in the small discrepancy case in Figure 8, where the tail is sometimes cut off. We emphasize here that the SSLE marginals are obtained analytically as 1D and 2D surfaces for the univariate and bivariate marginals respectively. For the reference MCMC approach, on the other hand, they need to be approximated with histograms based on the available posterior sample.
[Figure 8: univariate and bivariate posterior marginals of the adSSLE approximation and the MCMC reference, for the large and small discrepancy cases.]
5.2.1 Convergence of the posterior moments
In practical inference applications, posterior moments are often one of the main quantities of interest. An estimator of these moments is readily available at every refinement step of adSSLE through Eq. (22).
Tracking the evolution of the posterior moments throughout the adSSLE iterations can be used as a heuristic estimator of the convergence of the adSSLE algorithm. However, only the stability of the solution can be assessed, without guarantees on the bias. As an example, we now consider the large discrepancy problem and plot the evolution of the posterior mean and standard deviation of every parameter as a function of the number of likelihood evaluations in Figure 9. It can be seen that after a sufficient number of likelihood evaluations, most moment estimators converge to a value close to the reference solution. This plot also reveals a small bias of some of the estimators, which was previously highlighted in Table 1.
[Figure 9: evolution of the posterior mean and standard deviation estimates as a function of the number of likelihood evaluations (large discrepancy case).]
5.2.2 Influence of $N_{\mathrm{ref}}$
The main hyperparameter of the proposed adSSLE algorithm is the number $N_{\mathrm{ref}}$, which corresponds to the number of sample points that are required at each PCE construction step (see Section 4.2). In Figure 10 we display the effect of different $N_{\mathrm{ref}}$ values on the convergence in the small and large discrepancy problems.
[Figure 10: influence of $N_{\mathrm{ref}}$ on the convergence of adSSLE for the small and large discrepancy problems.]
$N_{\mathrm{ref}}$ influences the accuracy of the two error estimators used inside the adSSLE algorithm. They are: (i) the residual expansion error estimator in Eq. (23) and (ii) the splitting criterion in Eq. (24).
Small values of $N_{\mathrm{ref}}$ allow the algorithm to quickly obtain a crude likelihood approximation with limited experimental design sizes, but this comes at the cost of lower convergence rates at larger budgets. This behaviour can be partially attributed to the deterioration of the residual expansion error estimator in Eq. (23). At small experimental design sizes, the overall number of terminal domains is relatively small and this effect is not as pronounced. At larger experimental designs and higher numbers of subdomains, however, the error estimators' high variance can lead to difficulties in identifying the true high-error subdomains.
Large values of $N_{\mathrm{ref}}$ lead to slower initial convergence rates because of the smaller overall number of subdomains. The algorithm stability, however, is increased, because both error estimators have lower variance and thereby allow the algorithm to more reliably identify the true high-error subdomains and choose the split directions that maximize Eq. (25).
5.3 High-dimensional diffusion problem
The last cast study shows that adSSLE for Bayesian model inversion remains feasible in high dimensional problems with low effective dimensionality. The considered forward model is often used as a standard benchmark in UQ computations (Shin and Xiu, 2016; Fajraoui et al., 2017). It represents the 1-D diffusion along a domain with coordinate given by the following boundary value problem:
(33) |
The concentration field $u(z)$ can be used to describe any steady-state diffusion-driven process (e.g., heat diffusion, concentration diffusion, etc.). Assume that the diffusion coefficient is a lognormal random field given by $D(z) = \exp\left(\mu_g + \sigma_g\,g(z)\right)$, where $g(z)$ is a standard normal stationary Gaussian random field with exponential autocorrelation function $R(z, z') = \exp\left(-|z - z'|/\ell\right)$. Let $g$ be approximated through a truncated Karhunen–Loève expansion

$g(z) \approx \sum_{i=1}^{M} \sqrt{\lambda_i}\,\phi_i(z)\,x_i,$  (34)

with the pairwise uncorrelated random variables $x_i$ denoting the field coefficients and the real-valued eigenpairs $(\lambda_i, \phi_i)$ obtained from the solution of the Fredholm integral equation for the autocorrelation function $R$ (Ghanem and Spanos, 1991). The truncation variable $M$ is set so as to explain a prescribed large fraction of the field variance. Some realizations of the random field and resulting concentrations are shown in Figure 11.
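A numerical construction of the truncated Karhunen–Loève expansion in Eq. (34) can be sketched as follows, using an eigendecomposition of the discretized exponential covariance (Nyström method) and assembling a lognormal field realization from a sample of the coefficients. The correlation length, truncation order and field parameters are illustrative, not the case-study values.

```python
import numpy as np

n_grid, corr_len, M = 500, 0.2, 20
z = np.linspace(0.0, 1.0, n_grid)
C = np.exp(-np.abs(z[:, None] - z[None, :]) / corr_len)     # exponential autocorrelation
w = 1.0 / n_grid                                            # quadrature weight (Nystrom method)
eigvals, eigvecs = np.linalg.eigh(C * w)
order = np.argsort(eigvals)[::-1]
lam = eigvals[order][:M]                                     # retained eigenvalues
phi = eigvecs[:, order][:, :M] / np.sqrt(w)                  # L2-normalized eigenfunctions

def field_realization(xi, mu=0.0, sig=1.0):
    """Lognormal diffusion field D(z) = exp(mu + sig * g(z)) for KL coefficients xi (cf. Eq. (34))."""
    g = phi @ (np.sqrt(lam) * xi)
    return np.exp(mu + sig * g)

xi = np.random.default_rng(0).standard_normal(M)             # prior sample of the coefficients
D = field_realization(xi)
print(f"{M} modes explain {lam.sum() / np.trace(C * w):.1%} of the field variance")
```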
In this example, the random vector of coefficients $\boldsymbol{X} = (X_1, \ldots, X_M)$ shall be inferred using a single measurement of the concentration field at a fixed measurement location. The considered model therefore takes as input a realization of that random vector and returns the value of the concentration field at that location. It is expected that, due to the single measurement location, only very little information about the parameters will be recovered in the inverse problem.
[Figure 11: realizations of the diffusion coefficient random field and the resulting concentration fields.]
We impose a standard normal prior on the field coefficients, such that $X_i \sim \mathcal{N}(0, 1)$, $i = 1, \ldots, M$, and assume the standard additive Gaussian discrepancy model with known discrepancy variance $\sigma^{2}$. This yields the likelihood function

$\mathcal{L}(\boldsymbol{x}) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left( -\frac{\left( y - \mathcal{M}(\boldsymbol{x}) \right)^{2}}{2\sigma^{2}} \right).$  (35)
We proceed to compare the performance of standard SLE, non-adaptive SSLE and the proposed adSSLE approach on this example. We solve the problem with a set of increasing maximum numbers of likelihood evaluations.
In the present high-dimensional case, it is necessary to set $N_{\mathrm{ref}}$ to a relatively large value. At smaller values, the variance of the estimator of $\Delta\mathcal{V}^{i}$ in Eq. (24) makes it difficult for the algorithm to correctly identify the splitting direction that maximizes Eq. (25).
To compare the results of the algorithms, they are benchmarked against a reference MCMC solution obtained with the affine-invariant ensemble sampler algorithm (Goodman and Weare, 2010), run with a large number of steps and parallel chains.
To allow a quantitative comparison, we again use the error measure $\varepsilon$ from Eq. (28). It is plotted against the number of likelihood evaluations in Figure 12. It is clear that both SSLE algorithms outperform SLE, while the adSSLE approach manages to improve on the performance of SLE by an order of magnitude.
The overall small magnitude of the error in Figure 12 can be attributed to the low active dimensionality of this problem. Despite its high nominal dimensionality, this problem in fact has only very few active dimensions, as the first few variables are significantly more important than the rest. In physical terms, very local fluctuations of the conductivity do not influence the output, which results from an integration of these fluctuations. Therefore, the biggest change between prior and posterior distribution happens in the first few parameters (see also Table 3), while the other parameters remain unchanged by the updating procedure. This results in a small value of the Jensen–Shannon divergence for the inactive dimensions, which lowers the average value as defined in Eq. (28).
[Figure 12: error measure $\varepsilon$ vs. number of likelihood evaluations for the diffusion problem.]
To highlight the results of one adSSLE algorithm instance, we display plots of the marginal posteriors in Figure 13. Due to the low active dimensionality of the problem, we focus on the first few parameters. The remaining posterior parameters are not significantly influenced by the considered data. The comparative plots show a good agreement between the adSSLE and the reference solution, especially w.r.t. the interaction between the first two parameters. As can be expected from the single measurement location, only the first parameter is significantly influenced by the Bayesian updating procedure.
For the same instance, we also compute the first two posterior moments for all posterior marginals and compare them to the MCMC reference solution. The resulting values are presented in Table 3. Keeping in mind that the prior distribution is a multivariate standard normal distribution ($\mu_i = 0$ and $\sigma_i = 1$ for all $i$), it is obvious from this table that the data most significantly affects the first three parameters.
The adSSLE approach manages to accurately recover the first two posterior moments at a relatively low cost in terms of likelihood evaluations, with small average absolute errors of the posterior mean and standard deviation estimates.
[Figure 13: univariate and bivariate posterior marginals of the first few parameters: adSSLE approximation vs. MCMC reference.]
6 Conclusions
Motivated by the recently proposed spectral likelihood expansions (SLE) framework for Bayesian model inversion presented in Nagel and Sudret (2016), we showed that the same analytical post-processing capabilities can be derived when the novel SSE approach from Marelli et al. (2021) is applied to likelihood functions, giving rise to the proposed SSLE approach. Because SSE is designed for models with complex local characteristics, it was expected to outperform SLE on practically relevant, highly localized likelihood functions. To further improve SSLE performance, we introduced a novel adaptive sampling procedure and modified partitioning strategy.
There are a few unsolved shortcomings of SSLE that will be addressed in future works. Namely, the discontinuities at the subdomain boundaries may be a source of error that should be addressed. Additionally, for the adSSLE algorithm, it is not possible at the moment to specify the optimal $N_{\mathrm{ref}}$ parameter a priori. In light of the considerable influence of that parameter as shown in Section 5.2.2, it might be necessary to adaptively adjust it, or to decouple this parameter from the termination criterion.
Approximating likelihood functions through local PCEs prohibits the enforcement of strict positivity throughout the function domain. For visualization purposes this is not an issue, as negative predictions can simply be set to zero in a post-processing step. When computing posterior expectations with Eq. (22), however, this can lead to erroneous results such as negative posterior variances. One way to enforce strict positivity is through an initial transformation of the likelihood function (e.g., expanding the log-likelihood instead). This is avoided in the present work because it comes at a loss of the desirable analytical post-processing properties.
Another class of problems that can cause instability when focusing directly on the likelihood function (both for SLE and SSLE) is that of problems with a very sharp likelihood function. This can happen, e.g., in the case of many available measurements or small discrepancy variances, which cause the likelihood function to approach a Dirac delta function, whose spectral representation is very dense.
In problems with few active dimensions (e.g. the high-dimensional diffusion problem of Section 5.3), SSLE performs well because the likelihood is constant in many dimensions. The local sparse PCE construction can exploit this and could therefore handle up to hundreds of input dimensions. However, if the number of active dimensions is high as well, the present SSLE algorithm will require prohibitively large experimental designs, thereby rendering it unfeasible.
The biggest advantage of SSLE, however, is that it poses the challenging Bayesian computation in a function approximation setting. This yields an analytical expression of the posterior distribution and preserves the analytical post-processing capabilities of SLE while delivering highly improved likelihood function approximations. As opposed to many existing algorithms, SSLE can even efficiently solve Bayesian problems with multiple posterior modes. As shown in the case studies, the proposed adaptive algorithm further capitalizes on the compact support nature of likelihood functions and leads to significant performance gains, especially at larger experimental designs.
Acknowledgements
The PhD thesis of the first author is supported by ETH grant #44 17-1.
References
- Beck and Au (2002) Beck, J. L. and S.-K. Au (2002). Bayesian updating of structural models and reliability using Markov chain Monte Carlo simulation. Journal of Engineering Mechanics 128(4), 380–391.
- Beck and Katafygiotis (1998) Beck, J. L. and L. S. Katafygiotis (1998). Updating models and their uncertainties. I: Bayesian statistical framework. Journal of Engineering Mechanics 124(4), 455–461.
- Birolleau et al. (2014) Birolleau, A., G. Poëtte, and D. Lucor (2014). Adaptive Bayesian inference for discontinuous inverse problems, application to hyperbolic conservation laws. Communications in Computational Physics 16(1), 1–34.
- Bishop (2006) Bishop, C. M. (2006). Pattern recognition and machine learning. Information Science and Statistics. Springer.
- Blatman and Sudret (2010) Blatman, G. and B. Sudret (2010). An adaptive algorithm to build up sparse polynomial chaos expansions for stochastic finite element analysis. Probabilistic Engineering Mechanics 25, 183–197.
- Blatman and Sudret (2011) Blatman, G. and B. Sudret (2011). Adaptive sparse polynomial chaos expansion based on Least Angle Regression. Journal of Computational Physics 230, 2345–2367.
- Brooks and Gelman (1998) Brooks, S. P. and A. Gelman (1998). General methods for monitoring convergence of iterative simulations. Journal of Computational and Graphical Statistics 7(4), 434–455.
- Chapelle et al. (2002) Chapelle, O., V. Vapnik, and Y. Bengio (2002). Model selection for small sample regression. Machine Learning 48(1), 9–23.
- Ching and Chen (2007) Ching, J. and Y. Chen (2007). Transitional Markov chain Monte Carlo method for Bayesian model updating, model class selection, and model averaging. Journal of Engineering Mechanics 133(7), 816–832.
- Conrad et al. (2018) Conrad, P. R., A. D. Davis, Y. M. Marzouk, N. S. Pillai, and A. Smith (2018). Parallel local approximation MCMC for expensive models. SIAM/ASA Journal on Uncertainty Quantification 6(1), 339–373.
- Conrad et al. (2016) Conrad, P. R., Y. M. Marzouk, N. S. Pillai, and A. Smith (2016). Accelerating asymptotically exact MCMC for computationally intensive models via local approximations. Journal of the American Statistical Association 111(516), 1591–1607.
- Cui et al. (2014) Cui, T., Y. M. Marzouk, and K. E. Willcox (2014, aug). Data-driven model reduction for the Bayesian solution of inverse problems. International Journal for Numerical Methods in Engineering 102(5), 966–990.
- Dick et al. (2017) Dick, J., R. N. Gantner, Q. T. Le Gia, and C. Schwab (2017). Multilevel higher-order quasi-Monte Carlo Bayesian estimation. Mathematical Models and Methods in Applied Sciences 27(5), 953–995.
- El Moselhy and Marzouk (2012) El Moselhy, T. A. and Y. M. Marzouk (2012). Bayesian inference with optimal maps. Journal of Computational Physics 231(23), 7815–7850.
- Fajraoui et al. (2017) Fajraoui, N., S. Marelli, and B. Sudret (2017). Sequential design of experiment for sparse polynomial chaos expansions. SIAM/ASA Journal on Uncertainty Quantification 5(1), 1061–1085.
- Gantner and Peters (2018) Gantner, R. N. and M. D. Peters (2018). Higher-order quasi-Monte Carlo for Bayesian shape inversion. SIAM/ASA Journal on Uncertainty Quantification 6(2), 707–736.
- Gautschi (2004) Gautschi, W. (2004). Orthogonal polynomials: Computation and approximation. Numerical Mathematics and Scientific Computation. Oxford University Press.
- Gelman et al. (2014) Gelman, A., J. B. Carlin, H. S. Stern, D. B. Dunson, A. Vehtari, and D. B. Rubin (2014). Bayesian Data Analysis (3 ed.). Texts in Statistical Science. CRC Press.
- Gelman and Rubin (1992) Gelman, A. and D. B. Rubin (1992). Inference from iterative simulation using multiple sequences. Statistical Science 7(4), 457–472.
- Ghanem and Spanos (1991) Ghanem, R. G. and P. Spanos (1991). Stochastic finite elements – A spectral approach. Springer Verlag, New York. (Reedited by Dover Publications, Mineola, 2003).
- Goodman and Weare (2010) Goodman, J. and J. Weare (2010). Ensemble samplers with affine invariance. Communications in Applied Mathematics and Computational Science 5(1), 65–80.
- Haario et al. (2001) Haario, H., E. Saksman, and J. Tamminen (2001). An adaptive Metropolis algorithm. Bernoulli 7(2), 223–242.
- Harney (2016) Harney, H. L. (2016). Bayesian Inference: Data Evaluation and Decisions (2 ed.). Springer International Publishing.
- Jakeman et al. (2015) Jakeman, J., M. S. Eldred, and K. Sargsyan (2015). Enhancing $\ell_1$-minimization estimates of polynomial chaos expansions using basis selection. Journal of Computational Physics 289, 18–34.
- Jaynes (2003) Jaynes, E. T. (2003). Probability theory: The logic of science. Cambridge University Press.
- Jeffreys (1946) Jeffreys, H. (1946). An invariant form for the prior probability in estimation problems. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences 186(1007), 453–461.
- Kennedy and O’Hagan (2001) Kennedy, M. C. and A. O’Hagan (2001). Bayesian calibration of computer models. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 63(3), 425–464.
- Li and Marzouk (2014) Li, J. and Y. M. Marzouk (2014). Adaptive construction of surrogates for the Bayesian solution of inverse problems. SIAM Journal on Scientific Computing 36(3), A1163–A1186.
- Lin (1991) Lin, J. (1991). Divergence measures based on the Shannon entropy. IEEE Transactions on Information Theory 37(1), 145–151.
- Lüthen et al. (2021) Lüthen, N., S. Marelli, and B. Sudret (2021). Sparse polynomial chaos expansions: Review and benchmark. SIAM/ASA Journal on Uncertainty Quantification.
- Marelli and Sudret (2014) Marelli, S. and B. Sudret (2014). UQLab: A framework for uncertainty quantification in Matlab. In Vulnerability, Uncertainty, and Risk (Proceedings of the 2nd International Conference on Vulnerability, Risk Analysis and Management (ICVRAM2014), Liverpool, United Kingdom), pp. 2554–2563.
- Marelli and Sudret (2019) Marelli, S. and B. Sudret (2019). UQLab user manual – Polynomial chaos expansions. Technical report, Chair of Risk, Safety and Uncertainty Quantification, ETH Zurich, Switzerland. Report # UQLab-V1.3-104.
- Marelli et al. (2021) Marelli, S., P.-R. Wagner, C. Lataniotis, and B. Sudret (2021). Stochastic spectral embedding. International Journal for Uncertainty Quantification 11(2), 25–47.
- Marin et al. (2012) Marin, J.-M., P. Pudlo, C. P. Robert, and R. J. Ryder (2012). Approximate Bayesian computational methods. Statistics and Computing 22(6), 1167–1180.
- Marzouk et al. (2016) Marzouk, Y., T. Moselhy, M. Parno, and A. Spantini (2016). Sampling via measure transport: An introduction. In R. Ghanem, D. Higdon, and H. Owhadi (Eds.), Handbook of Uncertainty Quantification. Springer International Publishing.
- Marzouk et al. (2007) Marzouk, Y. M., H. N. Najm, and L. A. Rahn (2007). Stochastic spectral methods for efficient Bayesian solution of inverse problems. Journal of Computational Physics 224, 560–586.
- Marzouk and Xiu (2009) Marzouk, Y. M. and D. Xiu (2009). A stochastic collocation approach to Bayesian inference in inverse problems. Communications in Computational Physics 6(4), 826–847.
- Nagel and Sudret (2016) Nagel, J. and B. Sudret (2016). Spectral likelihood expansions for Bayesian inference. Journal of Computational Physics 309, 267–294.
- Neal (2011) Neal, R. M. (2011). MCMC using Hamiltonian dynamics. In S. Brooks, A. Gelman, G. L. Jones, and X.-L. Meng (Eds.), Handbook of Markov Chain Monte Carlo, Handbooks of Modern Statistical Methods, Chapter 5, pp. 113–162. Chapman & Hall/CRC.
- Parno (2015) Parno, M. D. (2015). Transport maps for accelerated Bayesian computation. PhD thesis, Massachusetts Institute of Technology (MIT).
- Robert and Casella (2004) Robert, C. P. and G. Casella (2004). Monte Carlo statistical methods (2nd Ed.). Springer Series in Statistics. Springer Verlag.
- Rosenblatt (1952) Rosenblatt, M. (1952). Remarks on a multivariate transformation. Annals of Mathematical Statistics 23(3), 470–472.
- Shin and Xiu (2016) Shin, Y. and D. Xiu (2016). Nonadaptive quasi-optimal points selection for least squares linear regression. SIAM Journal on Scientific Computing 38(1), A385–A411.
- Sisson et al. (2018) Sisson, S. A., Y. Fan, and M. A. Beaumont (Eds.) (2018). Handbook of approximate Bayesian computation. Chapman and Hall/CRC.
- Tarantola (2005) Tarantola, A. (2005). Inverse Problem Theory and Methods for Model Parameter Estimation. Society for Industrial and Applied Mathematics (SIAM).
- Tierney and Kadane (1986) Tierney, L. and J. B. Kadane (1986). Accurate approximations for posterior moments and marginal densities. Journal of the American Statistical Association 81(393), 82–86.
- Tierney et al. (1989a) Tierney, L., R. E. Kass, and J. B. Kadane (1989a). Approximate marginal densities of nonlinear functions. Biometrika 76(3), 425–433.
- Tierney et al. (1989b) Tierney, L., R. E. Kass, and J. B. Kadane (1989b). Fully exponential Laplace approximations to expectations and variances of nonpositive functions. Journal of the American Statistical Association 84(407), 710–716.
- Torre et al. (2019) Torre, E., S. Marelli, P. Embrechts, and B. Sudret (2019). Data-driven polynomial chaos expansion for machine learning regression. Journal of Computational Physics 388, 601–623.
- Wagner et al. (2020) Wagner, P.-R., R. Fahrni, M. Klippel, A. Frangi, and B. Sudret (2020). Bayesian calibration and sensitivity analysis of heat transfer models for fire insulation panels. Engineering Structures 205(15), 110063.
- Wand and Jones (1995) Wand, M. and M. C. Jones (1995). Kernel smoothing. Chapman and Hall, Boca Raton.
- Xiu (2010) Xiu, D. (2010). Numerical methods for stochastic computations – A spectral method approach. Princeton University Press.
- Xiu and Karniadakis (2002) Xiu, D. and G. E. Karniadakis (2002). The Wiener-Askey polynomial chaos for stochastic differential equations. SIAM Journal on Scientific Computing 24(2), 619–644.
- Yan and Zhou (2019) Yan, L. and T. Zhou (2019). Adaptive multi-fidelity polynomial chaos approach to Bayesian inference in inverse problems. Journal of Computational Physics 381, 110–128.