Adaptive operator learning for infinite-dimensional Bayesian inverse problems
Abstract
The fundamental computational issues in Bayesian inverse problems (BIPs) governed by partial differential equations (PDEs) stem from the requirement of repeated forward model evaluations. A popular strategy to reduce such costs is to replace expensive model simulations with computationally efficient approximations using operator learning, motivated by recent progress in deep learning. However, using the approximated model directly may introduce a modeling error that exacerbates the ill-posedness of inverse problems. Balancing accuracy and efficiency is therefore essential for the effective implementation of such approaches. To this end, we develop an adaptive operator learning framework that reduces modeling error gradually by forcing the surrogate to be accurate in local areas. This is accomplished by adaptively fine-tuning the pre-trained approximate model during the posterior evaluation process with training points chosen by a greedy algorithm. To validate our approach, we use DeepOnet to construct the surrogate and unscented Kalman inversion (UKI) to approximate the BIP solution. Furthermore, we present a rigorous convergence guarantee in the linear case using the UKI framework. The approach is tested on a number of benchmarks, including the Darcy flow, the heat source inversion problem, and the reaction-diffusion problem. The numerical results show that our method can significantly reduce computational costs while maintaining inversion accuracy.
keywords:
Operator learning, DeepOnet, Bayesian inverse problems, unscented Kalman inversion

1 Introduction
Many real-world phenomena are governed by partial differential equations (PDEs), where the states of the system are described by the PDE solutions. The properties of these systems are characterized by model parameters, such as the permeability and the thermal conductivity, which cannot be measured directly. Instead, the parameters must be inferred from discrete and noisy observations of the states; such problems are known as inverse problems. Because inverse problems are ill-posed in general, most methods for solving them are based either on regularization theory or on Bayesian inference. By imposing a prior distribution on the parameters, the Bayesian approach provides a more flexible framework. The solution to the Bayesian inverse problem, namely the posterior distribution, is then obtained by conditioning on the observations via Bayes' formula. In some cases, the model parameters are functions, leading to infinite-dimensional Bayesian inverse problems. Such cases arise when the model parameters are spatially varying with uncertain spatial structures, which occurs in many realistic applications across engineering and science [1, 2, 3, 4, 5].
The formulation of infinite-dimensional Bayesian inverse problems presents a number of challenges, including the well-posedness guaranteed by a proper prior selection and the convergence of the solutions under the chosen discretization scheme. Beyond the formulation, dealing with the resulting discrete finite-dimensional posterior distributions can be difficult due to expensive-to-solve forward models and high-dimensional parameter spaces. Common methods to address these issues include (i) model reduction methods [6, 1, 7], which exploit the intrinsic low dimensionality of the governing physical systems, (ii) direct posterior approximation methods, such as the Laplace approximation and variational inference [8, 4], and (iii) surrogate modeling [9, 10, 11, 12, 13], which approximates the computationally expensive model with a more efficient, lower-cost alternative.
Among the methods listed above, surrogate modeling emerges as the most promising approach for efficiently accelerating the sampling of posterior distributions. Deep learning methods, specifically deep neural networks (DNNs), have recently become the most popular surrogate models in engineering and science due to their power in approximating high-dimensional problems [14, 15, 16, 17, 18]. In general, DNN-based approaches use machine learning to construct a quick-to-evaluate surrogate model that approximates the parameter-to-observation map [19, 13, 20]. Numerical experiments, such as those described in [13], demonstrate that with sufficiently large training datasets, highly accurate approximations can be trained. Traditional deep learning methods, however, frequently require large numbers of training points that are not always available. Furthermore, whenever the measurement operator changes, the surrogate must be retrained. Physics-informed neural networks (PINNs) [15] can address this issue by incorporating the physical laws into the loss function and learning the parameter-to-state map [21, 22]. As a result, they can be used as surrogates for a variety of Bayesian inverse problems involving models governed by the same PDEs but with different types of observation operators, which further reduces the cost of surrogate construction. However, PINNs have some limitations [23, 24], such as hyperparameter sensitivity and the potential for training instability due to the hybrid nature of their loss function; several solutions have been proposed to address these issues [25, 26, 27, 28, 29]. Operator neural networks, such as the FNO [30] and DeepOnet [31], are able to model complex systems as approximations of maps between infinite-dimensional spaces. They are therefore promising surrogates, as described in [32, 33]. However, using approximate models directly may introduce a discrepancy or modeling error, exacerbating an already ill-posed problem and leading to a worse solution.
In order to reduce model errors, several approaches [9, 10, 13, 20, 34, 35] that incorporate local approximation techniques have been applied to Bayesian posterior sampling problems. For example, Conrad et al. [9] described a framework for building and refining local approximations during an MCMC simulation using either local polynomial approximations or local Gaussian process regressors (GPR). To generate the sample sets used for the local approximations, the authors employ a sequential experimental design procedure that interleaves incremental refinement of the approximation with Markov chain posterior exploration. Recently, Cleary et al. [34] proposed a "Calibrate-Emulate-Sample" (CES) framework for approximate Bayesian inversion. This approach consists of three main steps. First, the ensemble Kalman method [36, 37] is used in conjunction with a full-order model to extract sample points from the exact posterior distribution. Next, these sample points are used to create a GPR emulator for the parameter-to-observation map. Finally, the approximate posterior obtained from the emulator is sampled using direct sampling methods such as MCMC. A similar idea is proposed in [20], which uses a goal-oriented DNN surrogate approach to significantly reduce the computational burden of posterior sampling. Specifically, the authors begin by using a Laplace approximation to approximate the posterior distribution. They then select training points from this approximate distribution to build a DNN surrogate. In the last stage, direct sampling methods are used to sample from the approximate posterior. However, these works primarily focus on building local approximations of the parameter-to-observation maps. One significant limitation of this approach is that the calibration phase (i.e., generating the posterior approximation) necessitates extensive computations with full-order models, which can be computationally expensive. Furthermore, if the observation data change, the entire process must be restarted from scratch, including recalculating the posterior approximation and reconstructing the local approximation. This reduces the framework's efficiency and increases its resource requirements, especially in dynamic environments where observations change frequently. Inspired by these works, we present a mutual learning framework that can reduce model error by forcing the approximate model to be locally accurate for posterior characterization during the inversion process. This is achieved by first using neural network representations of parameter-to-state maps between function spaces, and then fine-tuning this initial model with points chosen adaptively from the approximate posterior. This procedure can be repeated as many times as necessary until a stopping criterion is met. In contrast to the CES framework, we choose training points from the prior distribution to build the initial emulator offline. When measurement data become available, the surrogate model only needs to be fine-tuned during the sampling process, which involves specific sampling methods. Integrating prior knowledge into the initial model construction enhances the robustness and accuracy of the model, while the targeted fine-tuning during sampling improves computational efficiency, making the approach particularly suitable for scenarios requiring fast and reliable decision-making.
For the detailed implementation, we use DeepOnet [31] to approximate the parameter-to-state map and the unscented Kalman inversion (UKI) [38] to estimate the posterior distribution. Moreover, we show that in the linear case convergence can be obtained if the surrogate is accurate throughout the space, and the result extends to nonlinear cases with locally accurate approximate models. To demonstrate the effectiveness of our method, we test several benchmarks, such as the Darcy flow, the heat source inversion problem, and a reaction-diffusion problem. Our main contributions can be summarized as follows.
• We propose a framework for adaptively reducing the surrogate's model error. To maintain local accuracy, a greedy algorithm is used to select adaptive samples for fine-tuning the pre-trained model.
• We adopt DeepOnet to approximate the parameter-to-state map and combine it with UKI to accelerate infinite-dimensional Bayesian inverse problems. We demonstrate that this approach not only maintains inversion accuracy but also saves a significant amount of computational cost.
• We show that in the linear case, the mean vector and the covariance matrix obtained by UKI with an approximate model converge to those obtained with a full-order model. The results are also verified in nonlinear cases with locally accurate surrogates.
• We present several benchmark tests, including the Darcy flow, a heat source inversion problem, and a reaction-diffusion problem, to verify the effectiveness of our approach.
The remainder of this paper is organized as follows. Section 2 introduces infinite-dimensional Bayesian inverse problems as well as the basic concepts of DeepOnet. Our adaptive framework for model error reduction, equipped with a greedy algorithm and the unscented Kalman inversion, is presented in Section 3. To confirm the efficiency of our algorithm, several benchmarks are tested in Section 4. Conclusions are given in Section 5.
2 Background
In this section, we first give a brief review of the infinite-dimensional Bayesian inverse problems. Then we will introduce the basic concepts of DeepOnet.
2.1 Infinite-dimensional Bayesian inverse problems
Consider a steady physical system described by the following PDEs:
\mathcal{N}(m, u) = 0 \ \text{in } \Omega, \qquad \mathcal{B}(m, u) = 0 \ \text{on } \partial\Omega, \qquad (1)
where $\mathcal{N}$ denotes a general partial differential operator defined in the domain $\Omega \subset \mathbb{R}^d$, $\mathcal{B}$ is the boundary operator on the boundary $\partial\Omega$, $m \in X$ represents the unknown parameter field and $u \in V$ represents the state field of the system.
Let $y \in \mathbb{R}^{N_y}$ denote a set of discrete and noisy observations at specific locations in $\Omega$. Suppose the state $u$ and the data $y$ are connected through an observation system $\mathcal{O}$,
y = \mathcal{O}(u) + \eta, \qquad (2)
where $\eta \sim N(0, \Sigma_\eta)$ is a Gaussian with mean zero and covariance matrix $\Sigma_\eta$, which models the noise in the observations. Combining the PDE model (1) and the observation system (2) defines the parameter-to-observation map $\mathcal{G}: X \to \mathbb{R}^{N_y}$, i.e.,
\mathcal{G}(m) = \mathcal{O}\big(\mathcal{F}(m)\big).
Here $\mathcal{F}: X \to V$ is the solution operator, or the parameter-to-state map, of the PDE model (1).
The following least-squares functional plays an important role in such inverse problems:
\Phi(m; y) = \frac{1}{2}\,\big\| y - \mathcal{G}(m) \big\|^{2}_{\Sigma_\eta}, \qquad (3)
where $\| v \|_{\Sigma_\eta} = \| \Sigma_\eta^{-1/2} v \|$ denotes the weighted Euclidean norm in $\mathbb{R}^{N_y}$. In cases where the inverse problem is ill-posed, minimizing $\Phi$ over $X$ is not a well-behaved problem, and some type of regularization is necessary. Bayesian inference is an alternative approach. In the Bayesian framework, $(m, y)$ is viewed as a jointly varying random variable in $X \times \mathbb{R}^{N_y}$. Given the prior $\mu_0$ on $m$, the solution to the inverse problem is the distribution of $m$ conditioned on the data $y$, i.e., the posterior $\mu^{y}$ given by an infinite-dimensional version of Bayes' formula as
\frac{d\mu^{y}}{d\mu_0}(m) = \frac{1}{Z(y)}\,\exp\big(-\Phi(m; y)\big), \qquad (4)
where $Z(y)$ is the model evidence defined as
Z(y) = \int_{X} \exp\big(-\Phi(m; y)\big)\, \mu_0(dm).
In general, the main challenges of infinite-dimensional Bayesian inverse problems lie in the well-posedness of the problem and in the numerical methodologies. To guarantee well-posedness, the prior is frequently taken to be a Gaussian random field, which guarantees the existence of the posterior distribution [4, 39]. To obtain finite-dimensional posterior distributions, one can use Karhunen-Loève (KL) expansions or direct spatial discretization methods. The posterior distribution can then be approximated using numerical techniques such as Markov chain Monte Carlo (MCMC) [40] and variational inference (VI) [41]. It should be emphasized that each likelihood evaluation requires an evaluation of the forward model $\mathcal{G}$ (and hence of the solution operator $\mathcal{F}$). The forward model can be very complicated and expensive to compute in real-world scenarios, making the computation challenging. As a result, it is critical to replace the forward model with a low-cost surrogate model. In this paper, we apply deep operator learning to construct the surrogate in order to substantially reduce the computational time.
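To make the roles of the misfit (3) and Bayes' formula (4) concrete, the following minimal sketch (in Python/NumPy, with hypothetical function names) evaluates the data misfit and the unnormalized log-posterior for a discretized parameter under a zero-mean Gaussian prior; here `forward` stands for a discretized parameter-to-observation map $\mathcal{G}$.

```python
import numpy as np

def misfit(m, y, forward, noise_cov):
    """Least-squares data misfit Phi(m; y) = 0.5 * ||y - G(m)||^2 in the
    noise-weighted norm of Eq. (3)."""
    r = y - forward(m)                              # residual in observation space
    return 0.5 * r @ np.linalg.solve(noise_cov, r)

def log_posterior(m, y, forward, noise_cov, prior_cov):
    """Unnormalized log-posterior, Eq. (4), for a zero-mean Gaussian prior on the
    discretized parameter (e.g., truncated KL coefficients)."""
    log_prior = -0.5 * m @ np.linalg.solve(prior_cov, m)
    return -misfit(m, y, forward, noise_cov) + log_prior
```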
2.2 DeepOnet as surrogates
In this section, we employ the neural operator DeepOnet as the surrogate, which is fast to evaluate and can speed up the posterior evaluations. The basic idea is to approximate the solution operator $\mathcal{F}: X \to V$ with a neural network $\mathcal{F}_{NN}(\,\cdot\,;\theta)$, where $X, V$ are the spaces defined before and $\theta$ are the parameters of the neural network. This neural operator can be interpreted as a composition of an encoder $\mathcal{E}$, an approximator $\mathcal{A}$ and a reconstructor $\mathcal{R}$ [42], as depicted in Figs. 1 and 2, i.e.,
\mathcal{F}_{NN} = \mathcal{R} \circ \mathcal{A} \circ \mathcal{E}.
Here, the encoder $\mathcal{E}$ maps the input function $m$ into its discrete values in $\mathbb{R}^{p}$ at a fixed set of sensors $\{x_1, \dots, x_p\} \subset \Omega$, i.e.,
\mathcal{E}(m) = \big(m(x_1), \dots, m(x_p)\big).
The encoded data are then mapped by the approximator $\mathcal{A}: \mathbb{R}^{p} \to \mathbb{R}^{q}$, a deep neural network. Given the encoder and approximator, we can define the branch net as the composition $\boldsymbol{\beta} = \mathcal{A} \circ \mathcal{E}$. The reconstructor (decoder) maps the result $\boldsymbol{\beta}(m) = (\beta_1, \dots, \beta_q)$ to a function on $\Omega$ with the form
\mathcal{R}\big(\boldsymbol{\beta}(m)\big)(x) = \sum_{k=1}^{q} \beta_k\, \tau_k(x),
where $\tau_1(x), \dots, \tau_q(x)$ are the outputs of the trunk net, as depicted in Fig. 2. Note that by this formula, the trunk net approximates a basis of the solution space and the branch net approximates the coefficients of the expansion, which together approximate a spectral expansion of the solution.
Combining the branch net and the trunk net, the operator network approximation is obtained by finding the optimal parameters $\theta^{\ast}$, which minimize the following loss function:
\mathcal{L}(\theta) = \int_{X} \int_{\Omega} \big| \mathcal{F}(m)(x) - \mathcal{F}_{NN}(m; \theta)(x) \big|^{2}\, dx\, \mu_0(dm), \qquad \theta \in \Theta, \qquad (5)
where $\Theta$ is the parameter space. It should be noted that the loss function cannot be computed exactly and is usually approximated by Monte Carlo simulation, sampling both the physical domain $\Omega$ and the input sample space $X$. That is, we take i.i.d. samples $\{m^{(i)}\}_{i=1}^{N}$ from $\mu_0$, evaluated at points $\{x_j\}_{j=1}^{Q} \subset \Omega$, leading to the following empirical loss:
\widehat{\mathcal{L}}(\theta) = \frac{1}{NQ} \sum_{i=1}^{N} \sum_{j=1}^{Q} \big| \mathcal{F}(m^{(i)})(x_j) - \mathcal{F}_{NN}(m^{(i)}; \theta)(x_j) \big|^{2}. \qquad (6)
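As an illustration of the branch/trunk construction and the empirical loss (6), the following PyTorch sketch mirrors the architecture used later in Section 4 (fully connected branch and trunk nets with five hidden layers of 100 neurons and tanh activations); the latent dimension `p` and all names are illustrative and do not come from the authors' code.

```python
import torch
import torch.nn as nn

def mlp(sizes, act=nn.Tanh):
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(act())
    return nn.Sequential(*layers)

class DeepONet(nn.Module):
    """Branch net acts on sensor values of the input field, trunk net on query
    coordinates; their inner product gives the predicted state u(x)."""
    def __init__(self, n_sensors, coord_dim, width=100, depth=5, p=100):
        super().__init__()
        self.branch = mlp([n_sensors] + [width] * depth + [p])
        self.trunk = mlp([coord_dim] + [width] * depth + [p])

    def forward(self, m_sensors, x_query):
        b = self.branch(m_sensors)     # (batch, p) expansion coefficients
        t = self.trunk(x_query)        # (n_points, p) basis functions tau_k(x)
        return b @ t.T                 # (batch, n_points) predicted field

def empirical_loss(model, m_sensors, x_query, u_true):
    """Empirical loss (6): mean squared error over input samples and query points."""
    return ((model(m_sensors, x_query) - u_true) ** 2).mean()
```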


After the operator network has been trained, an approximation of the forward model can be constructed by composing it with the observation operator $\mathcal{O}$, i.e., $\mathcal{G}_{NN}(m) = \mathcal{O}\big(\mathcal{F}_{NN}(m; \theta^{\ast})\big)$. We can then obtain the surrogate posterior
\frac{d\mu^{y}_{NN}}{d\mu_0}(m) = \frac{1}{Z_{NN}(y)}\,\exp\big(-\Phi_{NN}(m; y)\big), \qquad (7)
where $\mu_0$ is again the prior of $m$ and $\Phi_{NN}$ is the approximate least-squares data misfit defined as
\Phi_{NN}(m; y) = \frac{1}{2}\,\big\| y - \mathcal{G}_{NN}(m) \big\|^{2}_{\Sigma_\eta}.
The main advantage of the surrogate method is that once an accurate approximation is obtained, it can be evaluated many times without resorting to additional simulations of the full-order forward model. However, using approximate models directly may introduce a discrepancy or modeling error, exacerbating an already ill-posed problem and leading to a worse solution [13]. Specifically, we can define an $\epsilon$-feasible set and the associated posterior measure as
X_{\epsilon} = \big\{ m \in X : \big\| \mathcal{G}(m) - \mathcal{G}_{NN}(m) \big\|_{\Sigma_\eta} \le \epsilon \big\}, \qquad \kappa_\epsilon = \mu^{y}(X_{\epsilon}).
Then, the complement of the $\epsilon$-feasible set is given by $X_{\epsilon}^{c} = X \setminus X_{\epsilon}$, which has posterior measure $\mu^{y}(X_{\epsilon}^{c}) = 1 - \kappa_\epsilon$. We can obtain an error bound between $\mu^{y}$ and $\mu^{y}_{NN}$ in the Kullback-Leibler distance:
Theorem 1 ([43]).
Suppose we have the full posterior distribution $\mu^{y}$ and its approximation $\mu^{y}_{NN}$ induced by the surrogate $\mathcal{G}_{NN}$. For a given $\epsilon > 0$, there exist constants $C_1$ and $C_2$ such that
D_{\mathrm{KL}}\big(\mu^{y} \,\|\, \mu^{y}_{NN}\big) \le C_1\,\epsilon + C_2\, \mu^{y}\big(X_{\epsilon}^{c}\big).
It is important to note that, in order for the approximate posterior $\mu^{y}_{NN}$ to converge to the exact posterior $\mu^{y}$, the posterior measure $\mu^{y}(X_{\epsilon}^{c})$ must tend to zero. One way to achieve this is to train the surrogate model sufficiently well over the entire input space so that the model error is small enough everywhere. However, a significant amount of data and training time are frequently required to train the surrogate model to this level. Indeed, the surrogate model only needs to be accurate within the region of high posterior probability, not the entire prior space [32, 13, 43]. To maintain accurate results while lowering the computational costs, an adaptive algorithm should be developed. In the following section, we describe how to design a framework for adaptively reducing the surrogate's modeling error.
3 Adaptive operator learning framework
3.1 Adaptive model error reduction
Developing stable and reliable adaptive surrogate modeling methods presents several challenges, especially when dealing with infinite-dimensional Bayesian inverse problems. One major challenge lies in the need to maintain the accuracy of the surrogate model in the high-density regions of the posterior distribution, where the true posterior is most concentrated. According to Theorem 1, the approximate posterior will be close to the true posterior when the surrogate is accurate in the high-density region of the posterior distribution. On the other hand, the accuracy of the surrogate model, especially in the context of an operator network like DeepOnet, depends heavily on the quality and distribution of the training set. Ideally, this training set should be well distributed across the posterior distribution in order to capture the key characteristics of the problem. However, the high-density region of the posterior distribution is unknown until the observations are given. One feasible approach, as discussed earlier, is the CES framework [34], which first involves a calibration step where posterior samples are obtained using the full-order model, and then uses these samples to train the surrogate. Nonetheless, this strategy can be very computationally expensive, especially when new observational data arrive, as the entire process must be repeated. A natural question is how to create an effective local approximation that maintains a good balance between accuracy and computational cost. To achieve this, the new adaptive framework should be designed to automatically select new training points during the posterior exploration process. These selected points are then used to fine-tune the emulator generated by DeepOnet. Instead of relying on the full-order model to explore the posterior space initially, the training points for the emulator are sequentially updated by samplers that use the emulator itself. With this purpose in mind, we separate the adaptive algorithm into offline and online stages. Offline, a small number of samples from the prior distribution are used to train the DeepOnet in an acceptable amount of time. The purpose of the offline computation is to obtain a rough but comprehensive initial DeepOnet model. The online stage consists of two major steps: first, decide whether to adaptively update the surrogate model within the posterior computation algorithm; second, if the surrogate model needs to be updated, choose a new training set locally. Our new approach is an online strategy that depends on a specific realization of the observed data and an associated posterior approximation, in contrast to the direct DeepOnet strategy, which keeps the surrogate model unchanged. We expect that the inversion results computed by the adaptive DeepOnet will be more accurate than those produced by the offline direct DeepOnet strategy, because using the data concentrates attention on regions of high posterior probability and induces a localization effect in the construction of the surrogate. We explore this conjecture in the numerical results below.

Our approach, as depicted in Fig. 3, summarizes these considerations. The procedure is broken down into the following steps, which are modular in nature and can be approached in various ways:
• Initialization (offline): Build a surrogate $\mathcal{F}_{NN}$ using an initial training dataset with a relatively small sample size. This model is used as the initial pre-trained model.
• Posterior computation: Use a numerical technique to approximate, or draw samples from, the approximate posterior induced by $\mathcal{F}_{NN}$.
• Refinement (online): Choose a criterion to determine whether refinement is needed. If refinement is needed, select new training points from the approximate posterior to enrich the training dataset and refine the surrogate $\mathcal{F}_{NN}$.
• Repeat the above procedure until the stopping criterion is met.
The rest of this section turns this outline into a workable algorithm by explaining when and where to refine, as well as how to choose new training data to improve the approximation. To create the initial surrogate, we start by creating a small training dataset from the prior distribution. In detail, suppose we have a collection of model evaluations $\{(m^{(i)}, \mathcal{F}(m^{(i)}))\}_{i=1}^{N_0}$ with $m^{(i)}$ drawn from the prior. We then use these points to train an operator network and obtain a surrogate $\mathcal{F}_{NN}$, which can be evaluated repeatedly at negligible cost. Subsequently, we decide where and when to refine the surrogate by conducting exploration using the current surrogate. Specifically, assume that $\nu_n$ is the approximate posterior at the $n$-th step. To assess the accuracy of the surrogate model in the approximate posterior region, we define the following "local" model error associated with $\nu_n$:
e_n = \int_{X} \big\| \mathcal{G}(m) - \mathcal{G}_{NN}(m) \big\|_{\Sigma_\eta}\, \nu_n(dm). \qquad (8)
A natural approach is to use this error as an indicator: if the error exceeds a predefined tolerance, the surrogate model should be refined. Unfortunately, the need for high-dimensional integration makes directly evaluating this error prohibitively expensive. Notice that our primary goal is to keep the surrogate accurate along the posterior computation trajectories. To achieve this, we can obtain samples $S_n$ from $\nu_n$ using a posterior computation method, such as particle methods or MCMC-based sampling methods. Importantly, this process relies only on information from the surrogate model $\mathcal{G}_{NN}$ and the data $y$, without requiring any information from the full-order model $\mathcal{G}$. To prevent the current samples from deviating too far from the true posterior trajectory, we define an anchor point $m_n^{\ast}$ from the obtained samples using the full-order model as follows:
m_n^{\ast} = \arg\min_{m \in S_n} \Phi(m; y). \qquad (9)
This allows us to define an error indicator based on the data-fitting term at the anchor point within the posterior sample set:
\rho_n = \frac{\big| \Phi(m_n^{\ast}; y) - \Phi_{NN}(m_n^{\ast}; y) \big|}{\Phi(m_n^{\ast}; y)}. \qquad (10)
This error indicator not only measures the model error but also ensures that the samples do not deviate significantly from the true posterior trajectory. When the relative error $\rho_n$ exceeds a tolerance $\tau$, the surrogate must be refined near $m_n^{\ast}$. Otherwise, the refinement process stops.
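The anchor-point selection (9) and the refinement test (10) can be summarized in a few lines; the sketch below reuses the `misfit` helper from the Section 2.1 example, and the tolerance value is only illustrative.

```python
import numpy as np

def anchor_and_indicator(samples, y, forward_true, forward_surr, noise_cov, tol=0.05):
    """Pick the anchor point (9) with the full-order model, then evaluate the
    relative data-misfit indicator (10) to decide whether to refine the surrogate."""
    phis = np.array([misfit(m, y, forward_true, noise_cov) for m in samples])
    i_star = int(np.argmin(phis))                 # anchor: best full-order data fit
    m_star, phi_true = samples[i_star], phis[i_star]
    phi_surr = misfit(m_star, y, forward_surr, noise_cov)
    rho = abs(phi_true - phi_surr) / phi_true     # relative error indicator (10)
    return m_star, rho, rho > tol                 # refine when the indicator exceeds tol
```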
The following task is to develop sampling strategies for surrogate refinement. We only need to ensure that the surrogate is accurate along the posterior evaluation trajectories, so we can generate a small set of important (adaptive) points near the anchor point and then refine the surrogate. This ensures that the refined surrogate is locally accurate near $m_n^{\ast}$ while also reducing computational costs. This topic is related to optimal experimental design, and a number of existing strategies can be used to accomplish this goal. In this work, we propose a greedy algorithm for efficiently selecting the most important samples. Specifically, we first draw a large candidate set $\mathcal{C}$ of samples from $\nu_n$. A greedy algorithm is then used to select a subset of "important" points one by one from $\mathcal{C}$. Assuming the currently selected point set is $A_j = \{m_{\mathrm{new}}^{(1)}, \dots, m_{\mathrm{new}}^{(j)}\}$, the newly selected point must be near the anchor point $m_n^{\ast}$; furthermore, the surrogate solution at this point should have the greatest distance from the surrogate solutions of the set $A_j$, i.e.,
m_{\mathrm{new}}^{(j+1)} = \arg\max_{m \in \mathcal{C}} \Big[ \mathrm{dist}\big(\mathcal{F}_{NN}(m),\, \mathcal{F}_{NN}(A_j)\big) - \gamma\, \big\| m - m_n^{\ast} \big\| \Big], \qquad (11)
where $\mathrm{dist}\big(\mathcal{F}_{NN}(m), \mathcal{F}_{NN}(A_j)\big)$ is the distance between the value $\mathcal{F}_{NN}(m)$ and the set $\mathcal{F}_{NN}(A_j) = \{\mathcal{F}_{NN}(m') : m' \in A_j\}$. Here $\gamma$ is a factor used to control the balance between these two distances and is chosen to be 1 for convenience. The process only calculates distances between points and between predicted values of the surrogate model on the preselected candidate set, resulting in negligible computation time. Then we set $A_{j+1} = A_j \cup \{m_{\mathrm{new}}^{(j+1)}\}$. We can improve the DeepOnet model by adding the new training set $\{(m, \mathcal{F}(m)) : m \in A\}$, whose labels are computed with the full-order model. Specifically, during the online training phase, we initialize the neural network parameters $\theta$ from the previously trained model. We expect a significant speedup in solving Eq. (5) with this initialization, which can be viewed as an instance of transfer learning.
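The sketch below gives one possible realization of the greedy rule (11), under the assumptions that the set distance is the minimum Euclidean distance to previously selected surrogate predictions and that the two competing terms are balanced additively with $\gamma = 1$; the candidate set is stored as a NumPy array of discretized parameter vectors and `surrogate` returns the surrogate prediction for one parameter vector. These choices are ours for illustration.

```python
import numpy as np

def greedy_select(candidates, anchor, surrogate, n_select=50, gamma=1.0):
    """Greedy selection in the spirit of Eq. (11): prefer candidates whose surrogate
    predictions differ most from those already selected, while staying near the anchor."""
    preds = np.array([surrogate(m) for m in candidates])        # surrogate outputs
    dist_anchor = np.linalg.norm(candidates - anchor, axis=1)   # closeness to anchor
    selected, selected_preds = [], []
    for _ in range(n_select):
        if selected_preds:
            P = np.array(selected_preds)
            novelty = np.array([np.linalg.norm(P - p, axis=1).min() for p in preds])
        else:
            novelty = np.zeros(len(candidates))
        score = novelty - gamma * dist_anchor
        score[selected] = -np.inf                               # never reselect a point
        k = int(np.argmax(score))
        selected.append(k)
        selected_preds.append(preds[k])
    return candidates[selected]
```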
The advantages of this method are clear. The selected adaptive points will have varying features in the surrogate solution space while remaining close to the anchor point $m_n^{\ast}$. This is an adversarial trade-off: to guarantee local accuracy, we want the new points to be close to the anchor point, while to enhance generalization capacity, we also want their surrogate predictions to carry as much new information as possible relative to the current surrogate. The only remaining question is how to sample from the approximate posterior distribution $\nu_n$. To this end, traditional MCMC-based sampling methods and particle-based approaches [44, 45, 36, 46, 35] can be applied. In this paper, we focus exclusively on the unscented Kalman inversion (UKI) method [38] in order to conceptually validate our proposed adaptive operator learning framework. The details of the UKI algorithm are presented in the following section. It is important to emphasize that UKI relies on Gaussian approximations. While UKI may not provide highly accurate posterior approximations in many cases, such as for non-Gaussian posteriors, we have still chosen it for sampling because of its computational efficiency. As a gradient-free, particle-based method, UKI requires only $2N_m + 1$ forward model evaluations per iteration, where $N_m$ is the dimension of the discretized parameter, and it typically converges within a moderate number of iterations, making it computationally less expensive than MCMC-type methods. The numerical examples will demonstrate that even with the lower computational complexity of UKI, our newly designed framework still achieves significant improvements in computational efficiency. Additionally, our algorithm is easily integrated with MCMC or other particle-based methods [47, 48, 49, 50, 36, 37, 51, 52, 53], and when used with samplers that require a larger number of samples, our framework yields even greater computational efficiency gains.
3.2 Unscented Kalman Inversion
In this section, we give a brief review of the UKI algorithm discussed in [38]. The UKI is derived within the Bayesian framework and approximates the posterior distribution with Gaussian approximations of the unknown via its ensemble properties. We consider the following stochastic dynamical system:
m_{n+1} = r + \alpha\,(m_n - r) + \omega_{n+1}, \qquad y_{n+1} = \mathcal{G}(m_{n+1}) + \nu_{n+1}, \qquad (12)
where $m_n \in \mathbb{R}^{N_m}$ is the unknown discrete parameter vector and $y_{n+1} \in \mathbb{R}^{N_y}$ is the observation vector; the artificial evolution error $\omega_{n+1}$ and the observation error $\nu_{n+1}$ are mutually independent, zero-mean Gaussian sequences with covariances $\Sigma_\omega$ and $\Sigma_\nu$, respectively. Here $\alpha \in (0, 1]$ is the regularization parameter and $r$ is an arbitrary initial vector.
Let $Y_n = \{y_1, \dots, y_n\}$ denote the observation set at time $n$. In order to approximate the conditional distribution of $m_n$ given $Y_n$, the iterative algorithm starts from the prior and updates through the prediction and analysis steps: $\mu_n \mapsto \hat{\mu}_{n+1}$, and then $\hat{\mu}_{n+1} \mapsto \mu_{n+1}$, where $\mu_n$ is the distribution of $m_n \mid Y_n$. In the prediction step, we assume that $\mu_n \approx N(m_n, C_n)$; then, under Eq. (12), $\hat{\mu}_{n+1}$ is also Gaussian with mean and covariance:
\hat{m}_{n+1} = r + \alpha\,(m_n - r), \qquad \hat{C}_{n+1} = \alpha^{2} C_n + \Sigma_\omega. \qquad (13)
In the analysis step, we assume that the joint distribution of $(m_{n+1}, y_{n+1})$ conditioned on $Y_n$ can be approximated by a Gaussian distribution
N\!\left( \begin{bmatrix} \hat{m}_{n+1} \\ \hat{y}_{n+1} \end{bmatrix}, \begin{bmatrix} \hat{C}_{n+1} & \hat{C}^{my}_{n+1} \\ (\hat{C}^{my}_{n+1})^{T} & \hat{C}^{yy}_{n+1} \end{bmatrix} \right), \qquad (14)
where
\hat{y}_{n+1} = \mathbb{E}\big[\mathcal{G}(m_{n+1}) \mid Y_n\big], \qquad \hat{C}^{my}_{n+1} = \mathrm{Cov}\big[m_{n+1}, \mathcal{G}(m_{n+1}) \mid Y_n\big], \qquad \hat{C}^{yy}_{n+1} = \mathrm{Cov}\big[\mathcal{G}(m_{n+1}) \mid Y_n\big] + \Sigma_\nu. \qquad (15)
Conditioning the Gaussian in Eq. (14) on $y_{n+1}$ gives the following expressions for the mean and covariance of the approximation to $\mu_{n+1}$:
m_{n+1} = \hat{m}_{n+1} + \hat{C}^{my}_{n+1}\,(\hat{C}^{yy}_{n+1})^{-1}\,(y_{n+1} - \hat{y}_{n+1}), \qquad C_{n+1} = \hat{C}_{n+1} - \hat{C}^{my}_{n+1}\,(\hat{C}^{yy}_{n+1})^{-1}\,(\hat{C}^{my}_{n+1})^{T}. \qquad (16)
By assuming all observations are identical to the data $y$ (i.e., $y_{n+1} \equiv y$), Eqs. (13)-(16) define a conceptual algorithm that uses Gaussian approximations to solve BIPs. To evaluate the integrals appearing in Eq. (15), UKI employs the unscented transform described below.
Theorem 2 (Modified Unscented Transform [54]).
Let $x \sim N(m, C)$ be a Gaussian random variable in $\mathbb{R}^{N}$; the $2N+1$ symmetric $\sigma$-points are chosen deterministically:
x^{0} = m, \qquad x^{j} = m + c_j\, [\sqrt{C}]_j, \qquad x^{j+N} = m - c_j\, [\sqrt{C}]_j, \quad 1 \le j \le N,
where $[\sqrt{C}]_j$ is the $j$-th column of the Cholesky factor of $C$. The quadrature rule approximates the mean and covariance of the transformed variable $\mathcal{G}(x)$ as follows:
\mathbb{E}\big[\mathcal{G}(x)\big] \approx \mathcal{G}(x^{0}), \qquad \mathrm{Cov}\big[\mathcal{G}(x)\big] \approx \sum_{j=1}^{2N} W_j\, \big(\mathcal{G}(x^{j}) - \mathcal{G}(x^{0})\big)\big(\mathcal{G}(x^{j}) - \mathcal{G}(x^{0})\big)^{T}.
Here the constants $c_j$ and the weights $W_j$ are chosen as in [54].
We obtain the UKI algorithm, stated in Algorithm 1, by applying the aforementioned quadrature rules. UKI is a derivative-free algorithm that iteratively applies a Gaussian approximation to transport a set of quadrature points toward the target distribution. As a result, it only needs $2N_m + 1$ model evaluations per iteration, making it simple to implement and inexpensive to compute. However, for highly nonlinear problems, UKI may encounter the intrinsic difficulty of using Gaussian distributions to approximate the posterior, and different samplers can then be applied.
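For concreteness, one iteration of Eqs. (13)-(16) can be sketched as follows. The sketch uses symmetric sigma points with equal weights (a cubature-style choice) rather than the paper's modified unscented transform, and all function names are illustrative; `forward` may be either the full-order model or the DeepOnet surrogate.

```python
import numpy as np

def uki_step(m, C, y, forward, Sigma_nu, Sigma_omega, r, alpha=1.0):
    """One UKI iteration: prediction (13), sigma-point propagation, analysis (16)."""
    N = m.size
    # Prediction step (13)
    m_hat = r + alpha * (m - r)
    C_hat = alpha**2 * C + Sigma_omega
    # Symmetric sigma points with equal weights 1/(2N); they reproduce m_hat and C_hat
    L = np.linalg.cholesky(C_hat)
    c = np.sqrt(N)
    pts = np.vstack([m_hat + c * L[:, j] for j in range(N)] +
                    [m_hat - c * L[:, j] for j in range(N)])
    G = np.array([forward(p) for p in pts])          # 2N forward evaluations
    w = 1.0 / (2 * N)
    y_hat = G.mean(axis=0)
    dm, dy = pts - m_hat, G - y_hat
    C_my = w * dm.T @ dy                             # cross covariance, Eq. (15)
    C_yy = w * dy.T @ dy + Sigma_nu                  # data covariance, Eq. (15)
    # Analysis step (16): Kalman update of mean and covariance
    K = np.linalg.solve(C_yy, C_my.T).T              # C_my @ inv(C_yy)
    return m_hat + K @ (y - y_hat), C_hat - K @ C_my.T
```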
3.3 Algorithm Summary
The overview of our UKI-based adaptive operator learning strategy is provided by Algorithm 2. To summarize, we begin with an offline pre-trained DeepOnet and refine the surrogate, until the stopping criterion is satisfied, using local training data from the current approximate posterior distribution obtained by UKI. In other words, the UKI provides the DeepOnet with valuable candidate training points for refinement, while the DeepOnet efficiently returns approximate data-misfit information for the UKI to explore the parameter space further. Together, they establish a mutual learning system that enables them to grow and learn from one another over time. In particular, in order to approximate the induced posterior distribution, during each refinement cycle we first run UKI for a fixed number of steps and produce a series of intermediate Gaussian approximations $\{N(m_n, C_n)\}$. The least-squares error at each intermediate mean $m_n$ is then calculated with the full-order model using Eq. (3). It should be noted that not every step in this process is valid, because the model error may eventually blow up during the iteration process. As a result, when the refinement process is terminated, we select the last valid inversion result from this set as
m^{\ast} = \arg\min_{m_n} \Phi(m_n; y).
If the surrogate requires further refinement, the pair $(m^{\ast}, C^{\ast})$ is used to select new training points and serves as the initial vector for the subsequent UKI run.
We now review the computational efficiency of our method. The pre-trained operator network can be applied as a surrogate for a class of BIPs whose models are governed by the same PDEs but have various types of observations and noise modes. Thus, for a given inversion task, the main computational cost centers on the online forward evaluations and the online fine-tuning. The online retraining only takes a few seconds each time, so it can be ignored in comparison to the forward evaluation time. In these situations, the forward evaluations account for the majority of the computational cost. For our algorithm, the maximum number of online high-fidelity model evaluations is $N_r (N_s + T_s)$, where $N_s$ is the number of adaptive samples per refinement, $T_s$ is the maximum number of UKI iterations per cycle using our approach, and $N_r$ is the number of adaptive refinements. In contrast, $(2N_m + 1)\, T_{\max}$ is the total number of evaluations for UKI using the FEM solver, where $N_m$ is the discrete dimension of the parameter field and $T_{\max}$ is the maximum number of UKI iterations. Consequently, the asymptotic speedup can be estimated as
\frac{(2N_m + 1)\, T_{\max}}{N_r\,(N_s + T_s)}.
Note that the efficiency of our method depends primarily on $N_s$, since $N_r$ is typically small. First, the number of adaptive samples will be sufficiently small (e.g., $N_s = 50$) compared to the discrete dimension (e.g., $N_m = 128$), resulting in a significant reduction in computational cost, since the latter determines the number of forward evaluations per UKI iteration with the full-order model. Second, the total number of adaptive retraining cycles is determined by the inversion task, which we divide into in-distribution data (IDD) and out-of-distribution data (OOD) cases. IDD typically refers to a ground truth located in the high-density region of the prior distribution, whereas OOD refers to a ground truth located far from the high-density area of the prior. For the IDD case, the original pre-trained surrogate can be accurate in nearly all of the high-probability area, so our framework converges quickly. In the OOD case, our framework requires a larger number of retraining cycles in order to reach the high-density area of the posterior distribution. However, for both inversion tasks $N_r$ can be small. Consequently, our method can simultaneously balance accuracy and efficiency and has the potential to be applied to dynamical inversion tasks. In other words, once the initial surrogate is trained, we can use our adaptive framework to update the estimate at a much lower computational cost.
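The overall loop of Algorithm 2 can be sketched by combining the pieces above. The sketch reuses `uki_step`, `anchor_and_indicator`, and `greedy_select` from the earlier examples; `finetune` stands for the online transfer-learning update of the DeepOnet, the candidate set is drawn from the current Gaussian approximation, and all names and default values are illustrative rather than the authors' implementation.

```python
import numpy as np

def adaptive_uki(y, forward_true, surrogate, finetune, m0, C0, noise_cov,
                 Sigma_omega, r, n_cycles=10, n_uki=10, n_adapt=50, tol=0.05):
    """Adaptive operator learning loop: alternate UKI exploration with the surrogate
    and local refinement of the surrogate near the anchor point."""
    m, C = m0.copy(), C0.copy()
    for _ in range(n_cycles):
        # Exploration: run UKI with the current surrogate and record the trajectory.
        traj = []
        for _ in range(n_uki):
            m, C = uki_step(m, C, y, surrogate, noise_cov, Sigma_omega, r)
            traj.append(m.copy())
        # Anchor point and refinement decision (uses full-order evaluations).
        m_star, rho, need_refine = anchor_and_indicator(
            traj, y, forward_true, surrogate, noise_cov, tol)
        if not need_refine:
            return m_star, C
        # Exploitation: select new training points near the anchor and fine-tune.
        candidates = np.random.multivariate_normal(m_star, C, size=1000)
        new_points = greedy_select(candidates, m_star, surrogate, n_select=n_adapt)
        labels = [forward_true(p) for p in new_points]        # high-fidelity labels
        surrogate = finetune(surrogate, new_points, labels)   # transfer learning
        m = m_star                                            # restart UKI from anchor
    return m, C
```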
3.4 Convergence analysis under the linear case
It is important to note that the UKI’s ensemble properties are used to approximate the posterior distribution using Gaussian approximations. Specifically, the sequence in Eq.(16) obtained by the full-order model will converge to the equilibrium points of the following equations under certain mild conditions in the linear case [38]:
(17a) | |||
(17b) |
We can demonstrate that, in the linear case, the mean vector and covariance matrix obtained by our approach will be close to those obtained with the true forward map $\mathcal{G}$ if the surrogate $\mathcal{G}_{NN}$ is close to $\mathcal{G}$. Consider the setting in which both the true forward map and the surrogate are linear. Using $\mathcal{G}_{NN}$ as a surrogate, the corresponding sequence of mean vectors and covariance matrices in Eq. (16) then converges to the equilibrium points of the following equations:
(18a) | |||
(18b) |
In the following, we demonstrate that if the surrogate $\mathcal{G}_{NN}$ is near the true forward model $\mathcal{G}$, then the limiting mean and covariance of the surrogate-based iteration are near the true ones as well. We shall need the following assumptions.
Assumption 3.
Suppose that for any $\epsilon > 0$, the linear neural operator can be trained sufficiently well to satisfy
\big\| \mathcal{G} - \mathcal{G}_{NN} \big\| \le \epsilon. \qquad (19)
Assumption 4.
Suppose the forward map is bounded, that is,
\| \mathcal{G} \| \le C_{\mathcal{G}}, \qquad (20)
where $C_{\mathcal{G}}$ is a constant.
Assumption 5.
Suppose the matrix 111We use the notation here to demonstrate that the matrix is symmetric and positive definite and can be bounded from below as
(21) |
where is a positive constant.
We can obtain the following lemma.
Proof.
Note that these assumptions are reasonable and can be found in many references [12, 32]. We will then supply the main theorem based on these assumptions.
Theorem 7.
Proof.
The proof can be found in Appendix A.
Remark 1.
In order to meet the requirements of Theorem 7, it is possible to make the neural operator linear by dropping the nonlinear activation functions in the branch net and keeping the activation functions in the trunk net.
4 Numerical experiments
In this section, we provide several numerical examples to demonstrate the effectiveness and accuracy of the adaptive operator learning approach for solving inverse problems. To better present the results, we compare DeepOnet-based UKI inversion results (referred to as DeepOnet-UKI) with those of conventional FEM solvers (referred to as FEM-UKI). In addition, depending on whether adaptive refinement is used, the DeepOnet-UKI method has two variants: DeepOnet-UKI-Direct and DeepOnet-UKI-Adaptive. In particular, for DeepOnet-UKI-Direct, we leave the surrogate model unchanged during the UKI iteration process.
In all of our numerical tests, the branch and trunk nets for DeepOnet are fully connected neural networks with five hidden layers and one hundred neurons in each layer, with the tanh function as the activation function. DeepOnet is trained offline with iterations and prior samples from the Gaussian random field. Unless otherwise specified, we set the maximum retraining number to and the tolerance to . For all examples investigated in this paper, the synthetic noisy data are generated as:
y = y^{\dagger} + \delta\, |y^{\dagger}| \odot \xi, \qquad (27)
where $y^{\dagger}$ are the exact data, $\delta$ dictates the relative noise level, $\xi$ is a Gaussian random vector with zero mean and unit standard deviation, and the product is taken elementwise. In UKI, the regularization parameter $\alpha$ is set separately for noise levels 0.05 and 0.1 and for noise level 0.01. The starting vector for UKI is chosen at random. The selection of the other hyperparameters follows [38]. The maximum number of UKI iterations per cycle is 20 for FEM-UKI and DeepOnet-UKI-Direct, and 10 for DeepOnet-UKI-Adaptive. The greedy algorithm is used to select the adaptive samples for each noise level from a larger candidate sample set.
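The following short sketch implements the data-generation rule (27) as reconstructed above (a relative, elementwise Gaussian perturbation); the exact scaling convention used in the experiments may differ slightly.

```python
import numpy as np

def add_noise(y_exact, delta, rng=np.random.default_rng(0)):
    """Synthetic observations, Eq. (27): y = y_exact + delta * |y_exact| * xi."""
    xi = rng.standard_normal(y_exact.shape)
    return y_exact + delta * np.abs(y_exact) * xi
```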
To measure the accuracy of the numerical approximation with respect to the exact solution, we use the following relative inversion error:
\mathrm{err} = \frac{\| m_{\mathrm{est}} - m^{\dagger} \|}{\| m^{\dagger} \|}, \qquad (28)
where $m_{\mathrm{est}}$ and $m^{\dagger}$ are the numerical and exact solutions, respectively. Additionally, we generate samples during the UKI iteration process to estimate the local model error in Eq. (8) by Monte Carlo,
e_n \approx \frac{1}{J} \sum_{j=1}^{J} \big\| \mathcal{G}(m^{(j)}) - \mathcal{G}_{NN}(m^{(j)}) \big\|_{\Sigma_\eta}, \qquad m^{(j)} \sim \nu_n, \qquad (29)
to demonstrate that our adaptive framework can actually reduce the local model error. Moreover, we also calculate the data-fitting error via Eq. (3) using the true model during the UKI iteration process.
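Both diagnostics can be computed in a few lines; the sketch below follows the reconstructed forms of Eqs. (28) and (29), and the whitening step simply realizes the noise-weighted norm.

```python
import numpy as np

def relative_error(m_est, m_true):
    """Relative inversion error, Eq. (28)."""
    return np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true)

def local_model_error(samples, forward_true, surrogate, noise_cov):
    """Monte Carlo estimate, Eq. (29), of the local model error (8)."""
    L = np.linalg.cholesky(np.linalg.inv(noise_cov))   # whitening: ||v||_Sigma = ||L^T v||
    errs = [np.linalg.norm(L.T @ (forward_true(m) - surrogate(m))) for m in samples]
    return float(np.mean(errs))
```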
4.1 Example 1: Darcy flow
In the first example, we consider the following Darcy flow problem:
(30) |
Here, the source function is defined as
(31) |
The aim is to determine the permeability field from noisy measurements of the pressure field at a finite set of locations. To ensure the existence of the posterior distribution, we select the prior distribution as a Gaussian measure. In particular, we focus on a covariance operator of the following form:
\mathcal{C}_0 = \big(\tau^{2} I - \Delta\big)^{-d}, \qquad (32)
where $\Delta$ denotes the Laplacian operator in $\Omega$ subject to homogeneous Neumann boundary conditions, $\tau$ denotes the inverse length scale of the random field and $d$ determines its regularity. For the numerical experiments presented in this section, we take the same values for these parameters as in [38]. To sample from the prior distribution, we can use the Karhunen-Loève (KL) expansion, which has the form
m(x) = \sum_{k=1}^{\infty} \sqrt{\lambda_k}\, \xi_k\, \phi_k(x), \qquad (33)
where $\lambda_k$ and $\phi_k(x)$ are the eigenvalues and eigenfunctions of $\mathcal{C}_0$, and $\xi_k \sim N(0, 1)$ are independent random variables. In practice, we truncate the sum (33) at a finite number of terms, based on the largest eigenvalues, and hence work with the corresponding vector of KL coefficients. The forward problem is solved by the FEM method on a uniform grid.
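For reference, sampling the prior through the truncated KL expansion (33) amounts to the following; `eigvals` and `eigfuncs` are assumed to be the precomputed leading eigenpairs of the prior covariance evaluated on the FEM grid.

```python
import numpy as np

def kl_sample(eigvals, eigfuncs, xi):
    """Gaussian random field via the truncated KL expansion (33).
    eigvals: (K,) leading eigenvalues; eigfuncs: (K, n_grid) eigenfunctions on the
    grid; xi: (K,) i.i.d. standard normal coefficients."""
    return (np.sqrt(eigvals)[:, None] * xi[:, None] * eigfuncs).sum(axis=0)

# Usage: xi = np.random.standard_normal(len(eigvals)); m = kl_sample(eigvals, eigfuncs, xi)
```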
We create the observation data for the inverse problem using in-distribution data (IDD) and out-of-distribution data (OOD), respectively, as shown in Fig. 4. The IDD field is calculated using Eq. (33) with KL coefficients drawn from the prior. The OOD field is generated, for convenience, by sampling the coefficients from a distribution different from the prior. To avoid the inverse crime, we attempt to invert the first 128 KL modes using these observation data.
We plot the data-fitting error, the model error, and the inversion error in Fig. 5 to demonstrate the effectiveness of our framework. The performance on IDD and OOD data differs. For IDD data, if we directly apply the initially trained surrogate to run UKI, i.e., DeepOnet-UKI-Direct, we can see that the model error remains consistently small. Even without refinement, the inversion error is similar to that obtained by FEM-UKI, as shown in the right display of Fig. 5. However, if we use an adaptive dataset to refine the initial model, we still observe a significant decrease in the local model error, resulting in a better estimate after running UKI for several steps. This suggests that refinement can improve the inversion accuracy even for IDD data. The situation with OOD data is different. Because the ground truth is far from the prior distribution, the model error first decreases and then suddenly blows up, as expected, if we directly apply the initial model, as shown in the middle display of Fig. 5. In such cases, DeepOnet-UKI-Direct produces an incorrect estimate, and refinement of the surrogate is required to improve the inversion accuracy. The refinement process is typically divided into two stages: exploration and exploitation. During the exploration stage, we run UKI with the current surrogate for a fixed number of steps and then select the anchor point with the smallest data-fitting error computed with the true model. In the exploitation stage, we generate an adaptive training dataset near this anchor point using the greedy algorithm and refine the surrogate, which leads to a much smaller model error, as demonstrated in the middle display of Fig. 5. Then, using the refined surrogate, we continue the UKI iteration, starting from the anchor point of the previous refinement. This significantly improves the inversion results, as shown in the right display of Fig. 5. The figure also shows that with just one refinement the model error still increases dramatically during the inversion process; after five refinement iterations, further refinement no longer clearly affects the accuracy of the inversion, at which point the model error is usually negligible. Note that this strategy also works for IDD data; with refinement, our method reduces the model error and achieves performance comparable to FEM-UKI. The difference is that for OOD data the convergence is slower, leading to more refinements, which agrees with our formal analysis. It is worth noting that, in the IDD situation, the adaptive method produces better results than FEM-UKI. One probable explanation is that the model error is lower than the noise level of the observation data, resulting in a minor random perturbation of the data that happens to benefit the UKI method.
To summarize, DeepOnet-UKI-Adaptive performs well for both IDD and OOD data. The black star in Fig. 5 denotes the final solution of DeepOnet-UKI-Adaptive, selected with the minimum data-fitting error. We observe that this value is typically very close to that obtained by FEM-UKI. We show the final estimated permeability fields generated by the three different approaches in Figs. 6 and 7. The estimated permeability fields obtained by FEM-UKI and DeepOnet-UKI-Adaptive are very similar to the true permeability field, whereas the result of DeepOnet-UKI-Direct differs dramatically, illustrating the usefulness of our framework.
To test the effect of the number of adaptive samples used in each refinement, we repeat the experiment ten times for each sample size. The resulting error boxes are plotted in Fig. 8. It is evident from the IDD data that the relative inversion error does not decrease significantly with increasing data. This suggests that a small set of adaptive samples, roughly 50, can meet the requirements for accuracy and efficiency. On the other hand, the relative inversion error for OOD data steadily drops as the dataset size increases. In order to examine the effects of varying noise levels, we also perform the experiment with three different noise levels (0.01, 0.05, and 0.1), repeating it with ten different UKI initial values. The numerical results are also shown in Fig. 8. We can clearly observe that the relative inversion error gradually drops as the noise level rises, suggesting that higher noise levels are less sensitive to model errors. Consequently, our framework performs better in real-world applications with higher noise levels.
To test the computational efficiency of our adaptive framework, we plot the mean total number of online forward evaluations in Fig. 9. Our method incurs a significantly lower cost than the conventional numerical method, even when adaptive refinement is applied. Specifically, our approach requires a maximum of only 50 samples for model refinement, whereas FEM-UKI demands 5,140 forward evaluations. To further evaluate the computational efficiency of our adaptive framework, we present the average computational CPU times and the number of forward evaluations per valid iteration in Fig. 10. The results show that, apart from the initial offline cost of generating data points and training, the iterative computational efficiency of our adaptive framework surpasses that of traditional FEM solvers for both the IDD and OOD cases. Since the online fine-tuning process is highly efficient, with retraining taking just a few seconds, its cost is negligible compared to the forward simulations. Moreover, even when the offline cost is included, the total number of forward evaluations required by our framework is substantially lower than that of FEM-UKI, indicating that our scheme offers superior efficiency, particularly when the underlying PDE is expensive to solve.
4.2 Example 2: The heat source inversion problem
Consider the following heat conduction problem in
(34) | ||||||
The objective is to identify the heat source from noisy measurements. To illustrate our method more clearly, we divide this inversion task into two cases. In both cases, the surrogate is a DeepOnet model with the same architecture as before. The FEM method is used to solve the forward problem on a grid, and the resulting differential equations are integrated using the implicit Euler scheme.
Case I: In this case, we consider a 2D heat source inversion problem adapted from [11]. The source is a localized term involving the Heaviside function, with zero initial condition and zero Neumann boundary condition. The inverse problem is to infer the source location from observations of the state at two time instants; the ground truth location is fixed in this example. A uniform sensor network is used to collect noisy point-wise observations of the PDE solution field. At each sensor location, two measurements are taken at the two observation times, for a total of 18 measurements.
To replace the forward model, DeepOnet is trained with 500 uniformly distributed samples of the source location. We then use UKI with our adaptive scheme to run the experiment. The numerical results are shown in Fig. 11. The DeepOnet-UKI-Direct method provides only a rough estimate, while with our adaptive refinement, DeepOnet-UKI-Adaptive nearly achieves the same accuracy as FEM-UKI. This is because the adaptive sampling can select important samples in the high-density area of the approximate posterior, and the surrogate is refined to reduce the local model error. This phenomenon can be further observed in Fig. 12, which plots the inversion trajectories and sample distributions for our adaptive method. We can clearly see that, with refinement, the surrogate gradually becomes accurate in the high-density area of the true posterior, which corrects the inversion trajectory and thus leads to more accurate results compared to DeepOnet-UKI-Direct.
Case II: In this case, the heat source is a spatially varying field, with zero Dirichlet boundary condition and a given initial condition. The inverse problem now involves using noisy measurements of the temperature field to determine the true spatial source field. We assume that the Gaussian random field defined in Eq. (33) is the prior of the source field.
To increase the dimension of the problem, we assume in this example that the ground truth has an analytical expression, i.e.,
(35) |
Using this specific solution, we generate the observations from the final temperature field at 36 equidistant points in the domain. Fig. 13 displays the corresponding observations and the true spatial field. In the inverse procedure, the KL expansion (33) is employed to approximate the true source field; specifically, we truncate the expansion at the first 128 modes to accomplish the inversion task.
To test the effectiveness of our framework, we first run the experiment by applying the original pre-trained model directly within UKI, i.e., DeepOnet-UKI-Direct. We plot the local model error and the relative inversion error in the middle and right displays of Fig. 14, respectively. As expected, the local model error increases significantly, and the pre-trained model eventually fails to predict the result. Because of the growing model error, the relative inversion error follows accordingly, increasing dramatically and yielding a totally inaccurate final estimate. Nonetheless, we may refine the model by constructing adaptive samples based on this estimate. The procedure is similar to that of Example 1. During the exploration stage, we run UKI for 10 steps and select the anchor point with the smallest data-fitting error as the initial value for the next run of UKI. To reduce the model error, we generate adaptive samples near the anchor point and refine the surrogate. As shown in the middle display of Fig. 14, the local model error decreases significantly after refinement. As a result, our method produces relative inversion errors that are significantly reduced after refinement and comparable to those of FEM-UKI. Finally, after several refinements, the entire procedure is terminated based on the stopping criterion, with the black star indicating the final point that we accept for the inversion using our DeepOnet-UKI-Adaptive algorithm. To demonstrate the effectiveness of our method, we plot the inversion results in Figs. 15 and 16. The final numerical results produced by DeepOnet-UKI-Adaptive and FEM-UKI are very similar and do not differ significantly. This suggests that our method can also handle OOD data well.
To provide additional evidence of the efficacy of our approach, we perform the experiment with UKI at varying noise levels. In addition, we repeat the experiment ten times with different initial values for each noise level. We then compare the number of forward evaluations and the relative inversion errors for each approach. We plot the difference of the relative inversion errors in the left display of Fig. 17. It is clear that, as noise levels increase, DeepOnet-UKI-Adaptive often performs better than FEM-UKI. This implies that our method can achieve higher accuracy than traditional solvers. The computational cost of the new method is also very small, for the following reasons. First, fine-tuning the original pre-trained surrogate model is much faster than solving the PDEs; in this example, we only need a maximum of 50 online forward evaluations to retrain the network, which drastically lowers the computational cost. As the middle display of Fig. 17 illustrates, DeepOnet-UKI-Adaptive requires a substantially smaller total number of forward evaluations than FEM-UKI. Second, the entire process is stopped automatically by applying the stopping criterion. For a given inversion task, we can start with the initial model trained offline and fine-tune it several times. As a result, our framework can achieve an accuracy level comparable to traditional FEM solvers at a significantly reduced computational cost. The CPU computation times are plotted in the right display of Fig. 17. It is clear that our adaptive framework greatly accelerates the inversion process and is nearly 10 times faster than FEM-UKI with nearly the same accuracy.
4.3 Example 3: The reaction diffusion problem
Here we consider the forward model as a parabolic PDE defined as
(36) | ||||||
where the diffusion coefficient and the velocity field are prescribed and known. The forward problem is to find the concentration field determined by the initial field. The inverse problem here is to find the true initial field using noisy measurements of the concentration field. The forward problem is discretized using the FEM method on a grid, and the resulting system of ordinary differential equations is integrated over time using a Crank-Nicolson scheme with a uniform time step.
For the inverse problem, we only consider OOD data, as in Example 1. In other words, we attempt to invert the first 128 KL modes, with the ground truth defined by Eq. (33). The exact solution and the corresponding synthetic data are displayed in Fig. 18.
The numerical results are shown in Fig. 19. We can clearly see that refinement significantly reduces the local model error, and thus the inversion error continues to decrease. This implies that our surrogate model can maintain its local accuracy during the inversion process by focusing on the region of highest posterior probability. Finally, after six retraining cycles of the initial model, the retraining was terminated by the stopping criterion. Furthermore, as shown in the right display of Fig. 19, DeepOnet-UKI-Adaptive can achieve nearly the same order of accuracy as FEM-UKI. This is further confirmed by Figs. 20 and 21, which plot the final estimated initial fields and the estimated states obtained by the different methods. In this case, the CPU time of the conventional FEM-UKI is more than 5149 s. In contrast, for the DeepOnet-UKI-Adaptive approach, the online CPU time is only 530 s, meaning that the adaptive approach can provide accurate results with far less computational time.
We repeat the experiment with varying noise levels in order to thoroughly compare the performance of DeepOnet-UKI-Adaptive and FEM-UKI, running it ten times for each noise level with different UKI initial values each time. The difference between the relative inversion errors and the mean total number of forward evaluations are displayed in Fig. 22. It is evident that DeepOnet-UKI-Adaptive can even achieve smaller relative inversion errors than FEM-UKI when dealing with higher noise levels. Furthermore, our method has a very low computational cost: the total number of forward evaluations for FEM-UKI is at least ten times higher than for DeepOnet-UKI-Adaptive. That is to say, once the initial model has been trained, our approach can efficiently complete the inversion task with significantly lower computational cost. This feature offers the possibility of handling real-time forecasting in some data assimilation tasks.
5 Conclusion
We have presented an adaptive operator learning framework for iteratively reducing the model error in Bayesian inverse problems. In particular, the unscented Kalman inversion (UKI) is used to approximate the solution of the inverse problem, and DeepOnet is utilized to construct the surrogate. We propose a greedy algorithm to choose the adaptive samples used to retrain the approximate model. The performance of the proposed strategy has been illustrated by three numerical examples. It should be noted that our adaptive framework may be less effective in certain situations due to the use of Kalman-based methods. On the one hand, for strongly non-Gaussian posterior distributions, such as those with multiple modes, the UKI fails to capture all of the modes because it approximates the posterior by a Gaussian distribution. On the other hand, for some low- to moderate-dimensional problems, particularly when focusing on a single specific task, the CES framework may be more suitable. Nonetheless, our strategy is intended to be adaptable to a broader range of scenarios: within this framework, we can easily exchange the posterior computation and surrogate methods to broaden the applicability of our approach. Future work will address the potential drawbacks mentioned above.
Proof of Theorem 7: We first consider the error estimate of the covariance matrix. Using Eqs. (17a) and (18a), we have
(37) |
Note that the first part is proved in Eq.(24), i.e.,
(38) |
We consider the second part. Let us assume that represents the Banach spaces of matrices in . The operator norm in is induced by the Euclidean norm. The Banach spaces of linear operators equipped with the operator norm are denoted by . If we define , then . , the derivative of , is defined by the direction as
(39) |
According to [38], is a contraction map in , such that we have
(40) |
Therefore, we can use the Mean Value Theorem in matrix functions to get that
(41) |
Combining Eqs.(38) and (41) yields
(42) |
Then we can have the error estimate of the covariance matrix
(43) |
We now take into consideration the error estimate of the mean vector. Using Eqs. (17b) and (18b), we obtain
(44) |
Since
(45) |
and
(46) |
We can obtain
(47) |
For the first part , we have
(48) |
And then the second part,
(49) |
For the last part, according to Eq.(41) we have
(50) |
Moreover, from Eq.(17a) we have
(51) |
Combining Eqs.(47)-(51), we have
(52) |
Note that by Eq.(18), we have
(53) |
Afterwards, we can get the bound of as
(54) |
Combining Assumption 5 and Eqs.(52) and (54), we can get
(55) |
where is the upper bound of the condition number of respectively and . ∎
Acknowledgment
The authors would like to thank the anonymous referees for their many insightful and constructive comments and suggestions, which substantially improved the organization and quality of this paper.
References
- [1] Tiangang Cui, Youssef Marzouk, and Karen Willcox. Scalable posterior approximations for large-scale Bayesian inverse problems via likelihood-informed parameter and state reduction. Journal of Computational Physics, 315:363–387, 2016.
- [2] Hejun Zhu, Siwei Li, Sergey Fomel, Georg Stadler, and Omar Ghattas. A Bayesian approach to estimate uncertainty for full-waveform inversion using a priori information from depth migration. Geophysics, 81(5):R307–R323, 2016.
- [3] Alen Alexanderian, Noemi Petra, Georg Stadler, and Omar Ghattas. A fast and scalable method for A-optimal design of experiments for infinite-dimensional Bayesian nonlinear inverse problems. SIAM Journal on Scientific Computing, 38(1):A243–A272, 2016.
- [4] Tan Bui-Thanh, Omar Ghattas, James Martin, and Georg Stadler. A computational framework for infinite-dimensional Bayesian inverse problems, Part I: The linearized case, with application to global seismic inversion. SIAM Journal on Scientific Computing, 35(6):A2494–A2523, 2013.
- [5] Noemi Petra, James Martin, Georg Stadler, and Omar Ghattas. A computational framework for infinite-dimensional Bayesian inverse problems, Part II: Stochastic Newton MCMC with application to ice sheet flow inverse problems. SIAM Journal on Scientific Computing, 36(4):A1525–A1555, 2014.
- [6] Tiangang Cui, Youssef M Marzouk, and Karen E Willcox. Data-driven model reduction for the Bayesian solution of inverse problems. International Journal for Numerical Methods in Engineering, 102(5):966–990, 2015.
- [7] Chad Lieberman, Karen Willcox, and Omar Ghattas. Parameter and state model reduction for large-scale statistical inverse problems. SIAM Journal on Scientific Computing, 32(5):2523–2542, 2010.
- [8] Claudia Schillings, Björn Sprungk, and Philipp Wacker. On the convergence of the Laplace approximation and noise-level-robustness of Laplace-based Monte Carlo methods for Bayesian inverse problems. Numerische Mathematik, 145:915–971, 2020.
- [9] Patrick R Conrad, Youssef M Marzouk, Natesh S Pillai, and Aaron Smith. Accelerating asymptotically exact MCMC for computationally intensive models via local approximations. Journal of the American Statistical Association, 111(516):1591–1607, 2016.
- [10] Jinglai Li and Youssef M Marzouk. Adaptive construction of surrogates for the Bayesian solution of inverse problems. SIAM Journal on Scientific Computing, 36(3):A1163–A1186, 2014.
- [11] Youssef M Marzouk, Habib N Najm, and Larry A Rahn. Stochastic spectral methods for efficient Bayesian solution of inverse problems. Journal of Computational Physics, 224(2):560–586, 2007.
- [12] Liang Yan and Yuan-Xiang Zhang. Convergence analysis of surrogate-based methods for Bayesian inverse problems. Inverse Problems, 33(12):125001, 2017.
- [13] Liang Yan and Tao Zhou. An adaptive surrogate modeling based on deep neural networks for large-scale Bayesian inverse problems. Communications in Computational Physics, 28(5):2180–2205, 2020.
- [14] J. Han, A. Jentzen, and W. E. Solving high-dimensional partial differential equations using deep learning. Proceedings of the National Academy of Sciences, 115(34):8505–8510, 2018.
- [15] Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, 2019.
- [16] C. Schwab and J. Zech. Deep learning in high dimension: Neural network expression rates for generalized polynomial chaos expansions in UQ. Analysis and Applications, 17(01):19–55, 2019.
- [17] R. K. Tripathy and I. Bilionis. Deep UQ: Learning deep neural network surrogate models for high dimensional uncertainty quantification. Journal of Computational Physics, 375:565–588, 2018.
- [18] Y. Zhu and N. Zabaras. Bayesian deep convolutional encoder–decoder networks for surrogate modeling and uncertainty quantification. Journal of Computational Physics, 366:415–447, 2018.
- [19] Teo Deveney, Eike Mueller, and Tony Shardlow. A deep surrogate approach to efficient Bayesian inversion in PDE and integral equation models. arXiv:1910.01547, 2019.
- [20] Liang Yan and Tao Zhou. An acceleration strategy for randomize-then-optimize sampling via deep neural networks. Journal of Computational Mathematics, 39(6):848–864, 2021.
- [21] Yongchao Li, Yanyan Wang, and Liang Yan. Surrogate modeling for Bayesian inverse problems based on physics-informed neural networks. Journal of Computational Physics, 475:111841, 2023.
- [22] Mohammad Amin Nabian and Hadi Meidani. Adaptive Physics-Informed Neural Networks for Markov-Chain Monte Carlo. arXiv: 2008.01604, 2020.
- [23] Sifan Wang, Xinling Yu, and Paris Perdikaris. When and why PINNs fail to train: A neural tangent kernel perspective. Journal of Computational Physics, 449:110768, 2022.
- [24] Aditi Krishnapriyan, Amir Gholami, Shandian Zhe, Robert Kirby, and Michael W Mahoney. Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems, 34:26548–26560, 2021.
- [25] Zhiwei Gao, Liang Yan, and Tao Zhou. Failure-informed adaptive sampling for PINNs. SIAM Journal on Scientific Computing, 45(4):A1971–A1994, 2023.
- [26] Zhiwei Gao, Tao Tang, Liang Yan, and Tao Zhou. Failure-informed adaptive sampling for PINNs, Part II: Combining with re-sampling and subset simulation. Communications on Applied Mathematics and Computation, 6(3):1720–1741, 2024.
- [27] Wenbin Liu, Liang Yan, Tao Zhou, and Yuancheng Zhou. Failure-informed adaptive sampling for PINNs, Part III: Applications to inverse problems. CSIAM Transactions on Applied Mathematics, 5(3):636–670, 2024.
- [28] Levi McClenny and Ulisses Braga-Neto. Self-adaptive physics-informed neural networks using a soft attention mechanism. arXiv preprint arXiv:2009.04544, 2020.
- [29] Zixue Xiang, Wei Peng, Xu Liu, and Wen Yao. Self-adaptive loss balanced physics-informed neural networks. Neurocomputing, 496:11–34, 2022.
- [30] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. arXiv preprint arXiv:2010.08895, 2020.
- [31] Lu Lu, Pengzhan Jin, Guofei Pang, Zhongqiang Zhang, and George Em Karniadakis. Learning nonlinear operators via DeepOnet based on the universal approximation theorem of operators. Nature Machine Intelligence, 3(3):218–229, 2021.
- [32] Lianghao Cao, Thomas O’Leary-Roseberry, Prashant K Jha, J Tinsley Oden, and Omar Ghattas. Residual-based error correction for neural operator accelerated infinite-dimensional Bayesian inverse problems. Journal of Computational Physics, 486:112104, 2023.
- [33] Martin Genzel, Jan Macdonald, and Maximilian März. Solving inverse problems with deep neural networks–robustness included? IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(1):1119–1134, 2022.
- [34] Emmet Cleary, Alfredo Garbuno-Inigo, Shiwei Lan, Tapio Schneider, and Andrew M Stuart. Calibrate, emulate, sample. Journal of Computational Physics, 424:109716, 2021.
- [35] Liang Yan and Tao Zhou. Stein variational gradient descent with local approximations. Computer Methods in Applied Mechanics and Engineering, 386:114087, 2021.
- [36] A. Garbuno-Inigo, F. Hoffmann, W. Li, and A. M. Stuart. Interacting Langevin diffusions: Gradient structure and ensemble Kalman sampler. SIAM Journal on Applied Dynamical Systems, 19(1):412–441, 2020.
- [37] M.A. Iglesias, K.J.H. Law, and A.M. Stuart. Ensemble Kalman methods for inverse problems. Inverse Problems, 29(4):045001, 2013.
- [38] Daniel Zhengyu Huang, Tapio Schneider, and Andrew M Stuart. Iterated Kalman methodology for inverse problems. Journal of Computational Physics, 463:111262, 2022.
- [39] Andrew M Stuart. Inverse problems: A Bayesian perspective. Acta Numerica, 19:451–559, 2010.
- [40] S. Brooks, A. Gelman, G. L. Jones, and X. L. Meng, editors. Handbook of Markov chain Monte Carlo. Chapman & Hall/CRC Handbooks of Modern Statistical Methods. CRC Press, Boca Raton, FL, 2011.
- [41] D. M. Blei, A. Kucukelbir, and J. D. McAuliffe. Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518):859–877, 2017.
- [42] Samuel Lanthaler, Siddhartha Mishra, and George E Karniadakis. Error estimates for DeepOnets: A deep learning framework in infinite dimensions. Transactions of Mathematics and Its Applications, 6(1):1–141, 2022.
- [43] Liang Yan and Tao Zhou. Adaptive multi-fidelity polynomial chaos approach to Bayesian inference in inverse problems. Journal of Computational Physics, 381:110–128, 2019.
- [44] P. Chen, K. Wu, J. Chen, T. O’Leary-Roseberry, and O. Ghattas. Projected Stein variational Newton: A fast and scalable Bayesian inference method in high dimensions. In Advances in Neural Information Processing Systems, pages 15130–15139, 2019.
- [45] G. Detommaso, T. Cui, Y. Marzouk, A. Spantini, and R. Scheichl. A Stein variational Newton method. In Advances in Neural Information Processing Systems, pages 9169–9179, 2018.
- [46] Q. Liu and D. Wang. Stein variational gradient descent: A general purpose Bayesian inference algorithm. In Advances in Neural Information Processing Systems, pages 2378–2386, 2016.
- [47] N.K. Chada and X. T. Tong. Convergence acceleration of ensemble Kalman inversion in nonlinear settings. Mathematics of Computation, 91:1247–1280, 2021.
- [48] José A Carrillo, Franca Hoffmann, Andrew M Stuart, and Urbain Vaes. Consensus-based sampling. Studies in Applied Mathematics, 148(3):1069–1140, 2022.
- [49] O.G. Ernst, B. Sprungk, and H. Starkloff. Analysis of the ensemble and polynomial chaos Kalman filters in Bayesian inverse problems. SIAM/ASA Journal on Uncertainty Quantification, 3(1):823–851, 2015.
- [50] Daniel Zhengyu Huang, Jiaoyang Huang, Sebastian Reich, and Andrew M Stuart. Efficient derivative-free Bayesian inference for large-scale inverse problems. Inverse Problems, 38(12):125006, 2022.
- [51] Yanyan Wang, Qian Li, and Liang Yan. Adaptive ensemble Kalman inversion with statistical linearization. Communications in Computational Physics, 33(5):1357–1380, 2023.
- [52] S. Weissmann, N.K. Chada, C. Schillings, and X. T. Tong. Adaptive Tikhonov strategies for stochastic ensemble Kalman inversion. Inverse Problems, 38(4):045009, 2022.
- [53] Liang Yan and Tao Zhou. An adaptive multifidelity PC-based ensemble Kalman inversion for inverse problems. International Journal for Uncertainty Quantification, 9(3):205–220, 2019.
- [54] Eric A Wan and Rudolph Van Der Merwe. The unscented Kalman filter for nonlinear estimation. In Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium (Cat. No. 00EX373), pages 153–158. IEEE, 2000.