GAR: Generalized Autoregression for Multi-Fidelity Fusion
Abstract
In many scientific research and engineering applications where repeated simulations of complex systems are conducted, a surrogate is commonly adopted to quickly estimate the whole system. To reduce the expensive cost of generating training examples, it has become a promising approach to combine the results of low-fidelity (fast but inaccurate) and high-fidelity (slow but accurate) simulations. Despite the fast development of multi-fidelity fusion techniques, most existing methods require particular data structures and do not scale well to high-dimensional output. To resolve these issues, we generalize the classic autoregression (AR), which is widely used due to its simplicity, robustness, accuracy, and tractability, and propose generalized autoregression (GAR) using tensor formulation and latent features. GAR can deal with arbitrary-dimensional outputs and arbitrary multi-fidelity data structures to satisfy the demand of multi-fidelity fusion for complex problems; it admits a fully tractable likelihood and posterior, requires no approximate inference, and scales well to high-dimensional problems. Furthermore, we prove the autokrigeability theorem based on GAR in the multi-fidelity case and develop CIGAR, a simplified GAR with exactly the same predictive mean accuracy but with the computation reduced by a factor that grows with d, the dimensionality of the output. In experiments on canonical PDEs and scientific computational examples, the proposed method consistently outperforms the SOTA methods by a large margin (up to 6x improvement in RMSE) with only a couple of high-fidelity training samples.
1 Introduction
The design, optimization, and control of many systems in science and engineering can rely heavily on the numerical analysis of differential equations, which is generally computationally intense. In this case, a data-driven surrogate model is used to approximate the system based on the input-output data of the numerical solver and to help improve convergence efficiency where repeated simulations are required, e.g., in Bayesian optimization (BO) [1] and uncertainty quantification (UQ) [2].
With the surrogate model in place, the remaining challenge is that executing the high-fidelity numerical solver to generate training data can still be very expensive. To further reduce the computational burden, it is possible to combine low-fidelity results to make high-fidelity predictions [3]. More specifically, low-fidelity solvers are normally based on simplified PDEs (e.g., reducing the levels of physical detail) or a simpler solver setup (e.g., using a coarse mesh, a large time step, a low order of the approximating basis, or a high error tolerance). They provide fast but inaccurate solutions to the original problems, whereas the high-fidelity solver is accurate yet expensive. The multi-fidelity fusion technique works similarly to transfer learning, utilizing many low-fidelity observations to improve accuracy when only a few high-fidelity samples are available. In general, it involves the construction of surrogates for the different fidelities and a cross-fidelity transition process. Due to its efficiency, multi-fidelity fusion has attracted increasing attention in BO [4, 5], UQ [6, 7], and surrogate modeling [8, 9]. We refer to [10, 11] for recent reviews.
Despite the success of many state-of-the-art (SOTA) approaches, they normally assume that (1) the output dimension is the same and aligned across all fidelities, which generally does not hold for multi-fidelity simulation where the outputs are quantities at nodes that are not aligned; (2) the inputs corresponding to the high-fidelity samples form a subset of the low-fidelity inputs; and (3) the output dimension is small, which is not practical for scientific computing where the dimension can be 1 million (e.g., for a spatial-temporal field). These assumptions seriously hinder their application to modern high-dimensional problems, which are common in scientific computing for solving PDEs and in real-world datasets, e.g., MRI imaging.
To resolve these challenges, previous work either uses interpolation to align the dimensions [12, 9] or relies on approximate inference with crude simplifications [8, 13], leading to inferior performance. We notice that the classic autoregression (AR), which is popular due to its simplicity, robustness, accuracy, and tractability, consistently shows robust and top-tier performance across different datasets in the literature, despite its inability to handle high-dimensional problems. Thus, instead of proposing another ad-hoc model with pre-processing and simplification (leading to models that are difficult to tune and generalize poorly), we generalize AR with tensor algebra and latent features and propose generalized autoregression (GAR), which can deal with arbitrary high-dimensional problems without requiring the subset multi-fidelity data structure. GAR is a fully tractable Bayesian model that scales to extremely high-dimensional outputs without requiring any approximate inference. The novelty of this work is as follows:
1. Generalization of the AR to arbitrary, non-structured high-dimensional outputs. With tensor algebra and latent features, GAR allows effective knowledge transfer in closed form and is scalable to extremely high-dimensional problems.
2. Generalization to non-subset multi-fidelity data. To the best of our knowledge, we are the first to generalize the closed-form solution for subset data to non-subset cases, for both the AR and the proposed GAR.
3. For the first time, we reveal the autokrigeability [14] of multi-fidelity fusion within an AR structure, based on which we derive conditionally independent GAR (CIGAR), an efficient implementation of GAR with exactly the same accuracy in posterior mean predictions.
2 Background
2.1 Statement of the problem
We are given multi-fidelity data consisting of input-output pairs at F fidelity levels, where the system inputs are vectors of parameters that appear in the system of equations and/or in the initial-boundary conditions of a simulation, and the corresponding outputs have a (possibly different) dimension at each fidelity. Generally speaking, higher-fidelity outputs are closer to the ground truth and are expensive to obtain, and thus we have fewer samples at higher fidelities. The dimensionality is not necessarily the same or aligned across different fidelities. In most work, e.g., [15, 16, 11], the system inputs of a higher fidelity are chosen to be a subset of those of the lower fidelity. We call this the subset structure of a multi-fidelity dataset, as opposed to arbitrary data structures, which we resolve in Section 3.3 with a closed-form solution that we also extend to the classic AR. Our goal is to estimate the high-fidelity function given the observation data across all fidelities.
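For illustration, the sketch below shows how a two-fidelity dataset with the subset structure might be organized; the array shapes, mesh sizes, and variable names are hypothetical and only meant to make the data-structure assumptions concrete.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-fidelity dataset with the subset structure:
# every high-fidelity input also appears among the low-fidelity inputs.
N_low, N_high, input_dim = 32, 8, 5
X_low = rng.uniform(size=(N_low, input_dim))     # low-fidelity inputs (PDE/boundary parameters)
X_high = X_low[:N_high]                          # subset structure: high-fidelity inputs are reused

# Outputs may have different (unaligned) dimensions at each fidelity,
# e.g., fields recorded on a coarse vs. a fine mesh.
d_low, d_high = 8 * 8, 32 * 32
Y_low = rng.normal(size=(N_low, d_low))          # placeholder low-fidelity output fields
Y_high = rng.normal(size=(N_high, d_high))       # placeholder high-fidelity output fields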
2.2 Autoregression
For the sake of clarity, we consider a two-fidelity case, with subscripts h and l indicating high and low fidelity, respectively. Nevertheless, the formulation can be generalized to problems with multiple fidelities recursively. Considering a simple scalar output for all fidelities, the AR [3] assumes
(1)   $y_h(\mathbf{x}) = \rho\, y_l(\mathbf{x}) + \delta(\mathbf{x})$
where $\rho$ is a factor transferring knowledge from the low fidelity in a linear fashion, whereas $\delta(\mathbf{x})$ tries to capture the residual information. If we assume zero-mean Gaussian process (GP) priors [17] (see Appendix A for a brief description) for $y_l(\mathbf{x})$ and $\delta(\mathbf{x})$, the high-fidelity function also follows a GP. This gives an elegant joint GP for the joint observations $[\mathbf{y}_l; \mathbf{y}_h]$,
(2)   $\begin{pmatrix} \mathbf{y}_l \\ \mathbf{y}_h \end{pmatrix} \sim \mathcal{N}\!\left( \mathbf{0},\ \begin{pmatrix} \mathbf{K}_{ll} & \rho\,\mathbf{K}_{lh} \\ \rho\,\mathbf{K}_{hl} & \rho^2\,\mathbf{K}_{hh} + \mathbf{K}_{\delta} \end{pmatrix} \right)$
where $\mathbf{y}_l$ denotes the low-fidelity observations corresponding to the inputs $\mathbf{X}_l$ and $\mathbf{y}_h$ the high-fidelity observations; $\mathbf{K}_{ll} = k_l(\mathbf{X}_l, \mathbf{X}_l)$ is the covariance matrix of the low-fidelity inputs $\mathbf{X}_l$; $\mathbf{K}_{hh} = k_l(\mathbf{X}_h, \mathbf{X}_h)$ is that of the high-fidelity inputs $\mathbf{X}_h$; $\mathbf{K}_{lh} = \mathbf{K}_{hl}^{\top} = k_l(\mathbf{X}_l, \mathbf{X}_h)$ is the cross-fidelity covariance matrix between the low- and high-fidelity inputs, and $\mathbf{K}_{\delta} = k_\delta(\mathbf{X}_h, \mathbf{X}_h)$. One immediate advantage of the AR is that the joint Gaussian form allows not only joint training on all low- and high-fidelity data but also predictions for any new input via Gaussian conditioning (the posterior is derived as in a standard GP [17]). Furthermore, Le Gratiet [15] derives Lemma 1 to reduce the complexity of the joint model given a subset data structure.
Lemma 1.
[15] If $\mathbf{X}_h \subseteq \mathbf{X}_l$, the joint likelihood and predictive posterior of the AR can be decomposed into two independent parts corresponding to the low- and high-fidelity data.
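To make Eq. (2) concrete, here is a minimal sketch that assembles the joint AR covariance for an assumed RBF kernel; the kernel choice, hyperparameter values, and function names are illustrative assumptions rather than the paper's implementation.

import numpy as np

def rbf(X1, X2, lengthscale=0.5, variance=1.0):
    # Squared-exponential kernel k(x, x') = s^2 exp(-||x - x'||^2 / (2 l^2)).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def ar_joint_covariance(X_l, X_h, rho, k_low=rbf, k_res=rbf):
    # Joint covariance of [y_l; y_h] under y_h(x) = rho * y_l(x) + delta(x), cf. Eq. (2).
    K_ll = k_low(X_l, X_l)
    K_lh = rho * k_low(X_l, X_h)
    K_hh = rho ** 2 * k_low(X_h, X_h) + k_res(X_h, X_h)
    return np.block([[K_ll, K_lh], [K_lh.T, K_hh]])

rng = np.random.default_rng(0)
X_l, X_h = rng.uniform(size=(32, 2)), rng.uniform(size=(8, 2))
K = ar_joint_covariance(X_l, X_h, rho=0.9)
print(K.shape)  # (40, 40): joint training and conditional-Gaussian prediction use this matrix

In practice the low-fidelity and residual kernels would carry their own hyperparameters, learned by maximizing the joint likelihood.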
3 Generalized Autoregression
Let us now consider the high-dimensional case. A naive approach is to simply convert the multi-dimensional output into a scalar value by attaching a dimension index to the input. However, the AR then ends up with a joint GP whose covariance matrix scales with the number of samples times the output dimension, making it infeasible for high-dimensional problems.
3.1 Tensor Factorized Generalization with Latent Features
To resolve the scalability issue, we rearrange all the outputs into a multidimensional space (i.e., a tensor space) and introduce latent coordinate features to index the outputs and capture their correlations, as in HOGP [18]. More specifically, we organize the low-fidelity outputs as an M-mode tensor whose total dimension is the product of the mode sizes. Each element is indexed by its coordinates in the M modes. If the original data indeed admit a multi-array structure, we can use their original, physically meaningful index for the coordinates. For instance, a 2D spatial dataset can use its original spatial coordinates to index a single location (pixel). To improve model flexibility, we need not limit ourselves to the original index, particularly when the original data do not admit a multi-array structure or the multi-array structure is too large. In such cases, we can use an arbitrary tensorization and a latent feature vector (whose values are inferred during model training) for each coordinate of each mode. This way, each element is indexed by the latent feature vectors of its coordinates. Following the linear transformation of Eq. (1), we first introduce the tensor-matrix product [19] at mode $k$,
(3)   $\mathcal{Y}_h(\mathbf{x}) = \mathcal{Y}_l(\mathbf{x}) \times_1 \mathbf{W}_1 \times_2 \mathbf{W}_2 \cdots \times_M \mathbf{W}_M + \mathcal{Z}(\mathbf{x})$
where $\mathcal{Y}_h(\mathbf{x})$ denotes the target (high-fidelity) function with its output organized into a multi-array, and the same concept applies to $\mathcal{Y}_l(\mathbf{x})$ and the residual $\mathcal{Z}(\mathbf{x})$; $\times_k$ denotes the tensor-matrix product at mode $k$. To give a concrete example, consider an arbitrary tensor $\mathcal{A}$ and a matrix $\mathbf{B}$ whose number of columns matches the size of mode $k$ of $\mathcal{A}$; the product $\mathcal{A} \times_k \mathbf{B}$ is calculated by multiplying every mode-$k$ fiber of $\mathcal{A}$ by $\mathbf{B}$, which yields a tensor whose mode-$k$ size is the number of rows of $\mathbf{B}$. We can further denote the group of linear transformation matrices as a Tucker tensor and represent Eq. (3) compactly using the Tucker operator [19], $[\![\mathcal{Y}_l(\mathbf{x}); \mathbf{W}_1, \dots, \mathbf{W}_M]\!]$, which has an important property:
(4)   $\operatorname{vec}\!\left([\![\mathcal{G};\, \mathbf{W}_1, \dots, \mathbf{W}_M]\!]\right) = \left(\mathbf{W}_M \otimes \cdots \otimes \mathbf{W}_1\right) \operatorname{vec}(\mathcal{G})$
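For readers unfamiliar with the tensor algebra, the following sketch implements the mode-k product and the Tucker operator and numerically checks the vectorization property of Eq. (4); it is an illustrative, standard (Kolda-style) implementation, not code from the paper.

import numpy as np

def mode_k_product(T, W, k):
    # Tensor-matrix product T x_k W: contracts mode k of T with the columns of W.
    return np.moveaxis(np.tensordot(W, T, axes=(1, k)), 0, k)

def tucker(G, Ws):
    # Tucker operator [[G; W_1, ..., W_M]]: apply the mode-k product for every mode.
    out = G
    for k, W in enumerate(Ws):
        out = mode_k_product(out, W, k)
    return out

rng = np.random.default_rng(0)
G = rng.normal(size=(3, 4, 5))                       # arbitrary 3-mode tensor
Ws = [rng.normal(size=(6, 3)), rng.normal(size=(7, 4)), rng.normal(size=(2, 5))]

lhs = tucker(G, Ws).ravel(order="F")                 # column-major vectorization
rhs = np.kron(np.kron(Ws[2], Ws[1]), Ws[0]) @ G.ravel(order="F")
assert np.allclose(lhs, rhs)                         # vec([[G; W_1..W_M]]) = (W_M ⊗ ... ⊗ W_1) vec(G)

The column-major (Fortran-order) vectorization is what makes the Kronecker factors appear in reverse mode order.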
Inspired by the AR of Eq. (1), we place tensor-variate GP (TGP) priors [20] on the low-fidelity tensor function $\mathcal{Y}_l(\mathbf{x})$ and the residual tensor function $\mathcal{Z}(\mathbf{x})$:
(5)   $\operatorname{vec}\!\left(\mathcal{Y}_l(\mathbf{x})\right) \sim \mathcal{GP}\!\left(\mathbf{0},\ k_l(\mathbf{x}, \mathbf{x}')\, \mathbf{S}^l_M \otimes \cdots \otimes \mathbf{S}^l_1\right), \qquad \operatorname{vec}\!\left(\mathcal{Z}(\mathbf{x})\right) \sim \mathcal{GP}\!\left(\mathbf{0},\ k_r(\mathbf{x}, \mathbf{x}')\, \mathbf{S}^r_M \otimes \cdots \otimes \mathbf{S}^r_1\right)$
where $\mathbf{S}^l_m$ and $\mathbf{S}^r_m$ are the output correlation matrices, computed by a kernel function (with unknown hyperparameters) on the latent coordinate features of mode $m$, and $k_l(\cdot,\cdot)$ and $k_r(\cdot,\cdot)$ are the kernel functions over the inputs. A TGP is a generalization of a multivariate GP that essentially represents a joint GP prior over all tensor elements. Similar to the joint probability of (2), we can derive the joint probability of the low- and high-fidelity observations based on the Tucker transformation of (3); we keep the proof in the Appendix for clarity.
Lemma 2.
Given the tensor GP priors for $\mathcal{Y}_l(\mathbf{x})$ and $\mathcal{Z}(\mathbf{x})$ and the Tucker transformation of (3), the joint probability of the vectorized low- and high-fidelity observations is a zero-mean Gaussian whose block covariance combines the low-fidelity, cross-fidelity, and high-fidelity covariances (see Eq. (A.14) in the Appendix).
Lemma 2 admits arbitrary outputs (living in different spaces, having different dimensions and/or modes, and being unaligned) at different fidelities. Also, it does not require a subset data structure to hold.
Corollary 3.0.1.
Lemma 2 can be applied to data with a different number of modes at each fidelity if we pad the outputs with fewer modes with extra modes having only one latent index, such that all outputs have the same number of modes.
Lemma 2 defines our GAR model, a generalized AR with a special tensor structure. The low-fidelity covariance is the input kernel combined (in a Kronecker sense) with the per-mode output correlation matrices; the cross-fidelity covariance additionally involves the weight matrices $\mathbf{W}_m$; and the high-fidelity covariance combines the transformed low-fidelity covariance with the residual covariance. The complex within-fidelity output correlations are captured using latent features with an arbitrary kernel function, whereas the cross-fidelity output correlations are captured in a composite manner. This combination overcomes the simple linear correlations assumed in previous work, which simply decomposes the output as a dimension-reduction preprocessing step [12]. When the dimensionality is aligned between the two fidelities, we can share the same latent features across the two fidelities while keeping the kernel functions different. This way, the latent features are more resistant to overfitting. For non-aligned data with explicit indexing, we can use kernel interpolation [21] for the same purpose. To further encourage sparsity in the latent features, we impose a Laplace prior on them.
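As a concrete (and simplified) illustration of the Kronecker-structured covariance blocks described above, the sketch below assembles them for a hypothetical two-mode example with RBF kernels on both the inputs and the latent coordinate features. The variable names, kernel choices, and the exact form of the cross- and high-fidelity blocks reflect our reading of Lemma 2 under the notation introduced here, not the authors' implementation.

import numpy as np

def rbf(A, B, lengthscale=0.5):
    # Squared-exponential kernel between the row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def kron_rev(mats):
    # Kronecker product in reverse mode order, S_M ⊗ ... ⊗ S_1,
    # matching column-major vectorization of the output tensors.
    out = mats[-1]
    for M in reversed(mats[:-1]):
        out = np.kron(out, M)
    return out

rng = np.random.default_rng(0)
X_l, X_h = rng.uniform(size=(16, 3)), rng.uniform(size=(4, 3))      # inputs of the two fidelities

# Latent coordinate features per output mode (low fidelity 6 x 5, high fidelity 8 x 7).
V_l = [rng.normal(size=(6, 2)), rng.normal(size=(5, 2))]
V_h = [rng.normal(size=(8, 2)), rng.normal(size=(7, 2))]
S_l = [rbf(v, v) for v in V_l]                                      # low-fidelity output correlations
S_r = [rbf(v, v) for v in V_h]                                      # residual output correlations
W = [rng.normal(size=(8, 6)), rng.normal(size=(7, 5))]              # mode-wise weight matrices

# Covariance blocks: input kernel ⊗ (per-mode output correlations), transformed by W across fidelities.
K_ll = np.kron(rbf(X_l, X_l), kron_rev(S_l))
K_lh = np.kron(rbf(X_l, X_h), kron_rev([S @ Wm.T for S, Wm in zip(S_l, W)]))
K_hh = (np.kron(rbf(X_h, X_h), kron_rev([Wm @ S @ Wm.T for S, Wm in zip(S_l, W)]))
        + np.kron(rbf(X_h, X_h, lengthscale=0.3), kron_rev(S_r)))   # residual uses its own kernel
print(K_ll.shape, K_lh.shape, K_hh.shape)                           # (480, 480) (480, 224) (224, 224)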
3.2 Efficient Model Inference for Subset Data Structure
With the model fully defined, we can now train it to obtain all unknown model parameters. For compactness, and with a slight abuse of notation, we use shorthand for the stacked inputs, outputs, weight matrices, and covariance matrices at each fidelity.
Lemma 3.
Tensor generalization of Lemma 1. If the high-fidelity inputs form a subset of the low-fidelity inputs, $\mathbf{X}_h \subseteq \mathbf{X}_l$, the joint likelihood for the low- and high-fidelity observations admits two independent, separable likelihoods: one for the low-fidelity data and one for the high-fidelity residuals,
where the residual term is constructed with a Tucker tensor formed from the weight matrices together with a selection matrix that selects the low-fidelity samples corresponding to the high-fidelity inputs (see Appendix B for the explicit expressions).
We preserve the proof in the Appendix for clarity. Note that the two terms are HOGP likelihoods for the low-fidelity data and for the residuals, respectively. Since their computations are independent, model training can be conducted efficiently in parallel.
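The decomposition in Lemma 3 can be sketched as a training objective that is simply the sum of two independent Gaussian negative log-likelihoods: one over the low-fidelity data and one over the high-fidelity residuals at the shared inputs. The covariances K_low and K_res below are assumed to be Kronecker-structured matrices as in the previous subsection, and the residual construction mirrors the scalar AR case; this is our hedged reading of the lemma, not the authors' code.

import numpy as np

def gaussian_nll(y, K, jitter=1e-8):
    # Negative log density of y under N(0, K), via a Cholesky factorization.
    L = np.linalg.cholesky(K + jitter * np.eye(K.shape[0]))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * (y @ alpha) + np.log(np.diag(L)).sum() + 0.5 * len(y) * np.log(2 * np.pi)

def decomposed_nll(Y_low, K_low, Y_high, Y_low_sub, W_kron, K_res):
    # (i) likelihood of all low-fidelity outputs, stacked sample by sample.
    nll_low = gaussian_nll(np.concatenate([y.ravel(order="F") for y in Y_low]), K_low)
    # (ii) likelihood of the high-fidelity residuals at the shared (subset) inputs,
    #      r_i = vec(Y_high_i) - (W_M ⊗ ... ⊗ W_1) vec(Y_low_i).
    res = [yh.ravel(order="F") - W_kron @ yl.ravel(order="F")
           for yh, yl in zip(Y_high, Y_low_sub)]
    nll_high = gaussian_nll(np.concatenate(res), K_res)
    return nll_low + nll_high          # independent terms; they can be minimized in parallel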
Predictive posterior. Similarly, we can derive the concrete predictive posterior for the high-fidelity outputs by integrating out the latent functions after some tedious linear algebra (see Appendix); it is also Gaussian, with mean and covariance
(6)
where the mean involves the vector of covariances between the given new input and the low-fidelity training inputs, and the remaining quantities are the analogous covariance vectors and output correlation matrices for the high-fidelity inputs (the explicit expressions are given in Appendix B.1).
3.3 Generalization for Non-subset Data: Efficient Model Inference and Prediction
In practice, it is sometimes difficult to require the multi-fidelity data to preserve a subset structure, particularly in multi-fidelity Bayesian optimization [22, 23]. This presents a challenge for most SOTA multi-fidelity models, e.g., NAR [16], ResGP [9], and stochastic collocation [24]. In contrast, an advantage of AR is that even if the multi-fidelity data do not admit a subset data structure, the model can still be trained on all available data using the joint likelihood of (2). However, this approach lacks scalability due to the inversion of the large joint covariance matrix. The situation gets worse when we deal with multi-fidelity data with more than two fidelities. To resolve this issue, we propose a fast inference method based on an imaginary subset. More specifically, treating the missing low-fidelity outputs $\hat{\mathbf{Y}}_l$ as latent variables, the joint likelihood function is
(7)   $p\!\left(\mathbf{Y}_h, \mathbf{Y}_l\right) = \int p\!\left(\mathbf{Y}_h, \mathbf{Y}_l, \hat{\mathbf{Y}}_l\right) \mathrm{d}\hat{\mathbf{Y}}_l = \int p\!\left(\mathbf{Y}_h \mid \mathbf{Y}_l, \hat{\mathbf{Y}}_l\right) p\!\left(\hat{\mathbf{Y}}_l \mid \mathbf{Y}_l\right) p\!\left(\mathbf{Y}_l\right) \mathrm{d}\hat{\mathbf{Y}}_l$
where $p(\mathbf{Y}_h \mid \mathbf{Y}_l, \hat{\mathbf{Y}}_l)$ is the likelihood of Lemma 3 given the complementary imaginary subset, and $p(\hat{\mathbf{Y}}_l \mid \mathbf{Y}_l)$ is the imaginary posterior given the low-fidelity observations $\mathbf{Y}_l$. The integral could be calculated using Gaussian quadrature or other sampling methods as in [8, 25], which is slow and inaccurate.
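The closed form in Lemma 4 below rests on the standard Gaussian marginalization identity (written here in generic notation, which is an assumption on our part rather than the paper's symbols):

\[
\int \mathcal{N}\!\left(\mathbf{y} \mid \mathbf{A}\mathbf{z},\ \boldsymbol{\Sigma}\right)\,
\mathcal{N}\!\left(\mathbf{z} \mid \boldsymbol{\mu},\ \mathbf{S}\right) \mathrm{d}\mathbf{z}
= \mathcal{N}\!\left(\mathbf{y} \mid \mathbf{A}\boldsymbol{\mu},\ \boldsymbol{\Sigma} + \mathbf{A}\mathbf{S}\mathbf{A}^{\top}\right).
\]

Taking $\mathbf{z}$ to be the imaginary low-fidelity outputs, $(\boldsymbol{\mu}, \mathbf{S})$ their GP posterior given the observed low-fidelity data, and $\mathbf{A}$ the (Kronecker-structured) cross-fidelity transformation, the extra term $\mathbf{A}\mathbf{S}\mathbf{A}^{\top}$ is exactly the accumulated variance added at the missing points, as noted after Lemma 4.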
Lemma 4.
The joint likelihood of GAR for non-subset (and also unaligned) data can be decomposed into two independent Gaussian likelihoods,
(8)
where the first term is the likelihood of the low-fidelity data; the high-fidelity observations are split, via selection matrices, into the part corresponding to the imaginary (missing) low-fidelity outputs and the complementary part corresponding to the observed low-fidelity outputs (the explicit expressions are given in Appendix B.2).
We preserve the proof in the Appendix. Notice that the added term is the lower-right block of the predictive variance for the missing low-fidelity observations; we can thus interpret the last part of the likelihood as a GP with the accumulated uncertainty (variance) added at the corresponding missing points. Lemma 4 naturally applies to the original AR when the output is a scalar, with the corresponding quantities reducing to their scalar counterparts.
Predictive posterior. Surprisingly, the predictive posterior for the non-subset case also turns out to be Gaussian,
(9)
whose covariance and mean are given as follows,
(10)
where the selection matrices pick out the observed and imaginary low-fidelity outputs and the covariance matrices are defined as in Lemma 4 (the explicit expressions are given in Appendix B.3).
3.4 Autokrigeability, Complexity, and Further Acceleration
For subset-structured data, the computation of GAR decomposes into two independent TGPs for the likelihood and the predictive posterior. Thanks to the tensor algebra (mainly the Kronecker product structure of the covariance), the kernel matrix inversion at each fidelity avoids the cubic cost in the full joint dimension (number of samples times output dimension). For the non-subset case, the computational complexity of Eq. (8) is unfortunately higher, as it grows with the number of imaginary low-fidelity points. Nevertheless, due to the tensor structure, we can still use conjugate gradients [26] to solve the linear system efficiently.
Notice that the mean prediction in Eq. (A.60) and Eq. (9) does not depend on any output covariance matrices, which recovers the autokrigeability (no knowledge transfer in noiseless cases for mean predictions) [14, 9] within the GAR framework. For applications where the predictive variance is not of interest, we can introduce conditionally independent output correlations (identity output covariance matrices) and orthogonal weight matrices to reduce the computational complexity further (see Appendix B.4 for the detailed derivation). We call this CIGAR, an abbreviation for conditionally independent GAR. In practice, CIGAR is slightly worse than GAR due to the difficulty of enforcing exact orthogonality of the weight matrices.
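The complexity savings come from never forming the Kronecker-structured covariance explicitly. A minimal illustration (ours, not the authors' code) of solving a Kronecker linear system factor by factor:

import numpy as np

def kron_solve(A, B, v):
    # Solve (A ⊗ B) x = v without forming the Kronecker product, using
    # (A ⊗ B)^(-1) vec(V) = vec(B^(-1) V A^(-T)) with column-major vectorization.
    n, m = A.shape[0], B.shape[0]
    V = v.reshape(m, n, order="F")
    X = np.linalg.solve(A, np.linalg.solve(B, V).T).T
    return X.ravel(order="F")

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
A = A @ A.T + 5 * np.eye(5)    # stands in for an input kernel matrix
B = rng.normal(size=(7, 7))
B = B @ B.T + 7 * np.eye(7)    # stands in for an output correlation matrix
v = rng.normal(size=5 * 7)
assert np.allclose(np.kron(A, B) @ kron_solve(A, B, v), v)

The cost is one solve per factor (cubic in each factor size) rather than cubic in the product of the sizes, which is the source of the reduction exploited by GAR and CIGAR.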
4 Related Work
GP models for high-dimensional outputs are important in many applications such as spatial data modeling and uncertainty quantification; the reader is referred to [27] for an excellent review. The linear model of coregionalization (LMC) [28, 29] is perhaps the most general framework for high-dimensional GPs developed in the geostatistics community. LMC assumes that the full covariance matrix is a sum of constant matrices multiplied by input-dependent kernels. To reduce model complexity, semiparametric latent factor models (SLFM) [30] simplify LMC by assuming that the constant matrices are rank-1. [31] further simplifies SLFM by using singular value decomposition (SVD) of the output collection to fix the bases of the rank-1 matrices. To overcome the linear assumptions of LMC, the (implicit) bases can be constructed in a nonlinear fashion using manifold learning, e.g., KPCA [32] and IsoMap [33], or process convolution [34, 35, 36]. Other approaches include multi-task GPs, which consider the outputs as dependent tasks [37, 38, 39] in a framework similar to LMC, and GP regression networks (GPRN) [40, 41], which use products of GPs to model nonlinear outputs, leading to intractable models. Despite their success, the complexity of the above approaches grows rapidly with the output dimension, so they cannot scale well to high-dimensional outputs for scientific data where D can be, say, 1 million. This problem can be resolved by introducing tensor algebra [42] to form HOGP [18] or by scalable model inference, e.g., in GPRN [43].
Multi-fidelity fusion has become a promising approach to further reduce the data demands of building a surrogate model [13] and of Bayesian optimization. The seminal autoregressive (AR) model of Kennedy [3] introduces a linear transformation for univariate high-fidelity outputs. This model was enhanced by Le Gratiet [15], who adopts a deterministic parametric form of the linear transformation to enable the efficient training scheme introduced previously. However, it is unclear how AR can deal with high-dimensional outputs. To overcome the linearity of AR, Perdikaris et al. [16] propose nonlinear AR (NAR). It ignores the output distributions and directly uses the low-fidelity solution as an input to the high-fidelity GP model, which is essentially a concatenating GP structure known as a deep GP [44]. To propagate uncertainty through the multi-fidelity model, Cutajar et al. [25] use expensive approximate inference, which makes the model prone to overfitting and incapable of dealing with very high-dimensional problems. For multi-fidelity Bayesian optimization (MFBO), Poloczek et al. [4] and Kandasamy et al. [45] approximate each fidelity with an independent GP; Zhang et al. [46] use a convolution kernel, similar to process convolution [34, 36], to learn the fidelity correlations; Song et al. [5] combine all fidelity data into a single GP to reduce uncertainty. However, most MFBO surrogates do not scale to high-dimensional problems because they are designed for one target (or at most a couple).
To deal with high-dimensional outputs, e.g., spatial-temporal fields, Xing et al. [9] extend AR by assuming a simple additive structure and replacing the simple GPs with scalable multi-output GPs, at the cost of losing the power to capture output correlations, leading to inferior performance and inaccurate uncertainty estimates; Xing et al. [12] propose deep coregionalization to extend NAR by learning latent processes [30, 29] extracted by embedding the high-dimensional outputs onto a residual latent space using a proposed residual PCA; Wang et al. [8] further introduce bases propagation along with latent-process propagation in a deep GP to increase model flexibility, at the cost of a significant growth in the number of model parameters and several simplifications in the approximate inference. Parussini et al. [6] generalize NAR to high-dimensional problems. However, these methods lack a systematic way of joint model training, leading to instability and poor fits for small datasets. Wu et al. [47] extend GPs using the neural process to model high-dimensional and non-subset problems effectively. In scientific computing, multi-fidelity fusion has been implemented using the stochastic collocation (SC) method [24] for high-dimensional problems, which provides closed-form solutions and efficient design of experiments for the multi-fidelity problem. Xing et al. [7] showed that SC is essentially a special case of AR and proposed active learning to select the best subset for the high-fidelity experiments.
To take advantage of deep neural networks (NNs) and be compatible with arbitrary multi-fidelity data (i.e., non-subset structures), Perrone et al. [22] propose an NN-based multi-task method that naturally extends to MFBO. Li et al. [23] further extend it to MFBO as a Bayesian neural network (BNN). Meng and Karniadakis [48] add a physics-regularization layer, which requires an explicit form of the problem PDEs, to improve prediction accuracy. To scale to high-dimensional problems with arbitrary dimensions at each fidelity, Li et al. [13] propose a Bayesian network approach to multi-fidelity fusion with active learning techniques for efficiency improvement.
Beyond multi-fidelity fusion, AR can also be used to model multivariate problems [49, 50], where GAR can likewise find applications. GAR is a general framework for GP-based multi-fidelity fusion of high-dimensional outputs. Specifically, AR is recovered for scalar outputs with a separable kernel; ResGP corresponds to fixing the weight matrices; NAR is a special case obtained by integrating out the low-fidelity output with a normal prior and using a separable kernel; DC is a special case of GAR when only one latent process is used and the low-fidelity output is integrated out as in NAR with a separable kernel; MF-BNN is a finite case of GAR when only one hidden layer is used. See Appendix C for a comparison with the SOTA methods.
5 Experimental Results
To assess GAR and CIGAR, we compare them with the SOTA multi-fidelity fusion methods for high-dimensional outputs. In particular, we compare with the following methods: (1) AR [3], (2) NAR [16], (3) ResGP [9], (4) DC [12] (code: https://github.com/wayXing/DC), and (5) MF-BNN [13] (code: https://github.com/shib0li/DNN-MFBO). All GPs use an RBF kernel for a fair comparison. Because the ARD kernel is separable, AR and NAR are accelerated using the Kronecker product structure as in GAR to keep the computation feasible. The original DC with residual PCA cannot deal with unaligned outputs, but it can when using an independent PCA, which we call DC-I. Both DC variants preserve 99% of the energy in the dimension reduction. MF-BNN is run with its default settings. GAR, CIGAR, AR, NAR, and ResGP are implemented in PyTorch (https://pytorch.org/). All experiments are run on a workstation with an AMD 5950X CPU and 32 GB RAM.
5.1 Multi-Fidelity Fusion for Canonical PDEs
We first assess GAR on canonical PDE simulation benchmarks, which produce high-dimensional spatial/spatial-temporal fields as model outputs. Specifically, we test on the Burgers', Poisson's, and heat equations as in [12, 51, 52, 53]. The high-fidelity results are obtained by solving these equations using finite difference on a fine mesh, whereas the low-fidelity results use a coarse mesh. The solutions at the grid points are recorded and vectorized as outputs. Because the meshes differ, the dimensionality varies across fidelities. To compare with the standard multi-fidelity methods that can only deal with aligned outputs, we use interpolation to upscale the low-fidelity solutions and record them at the high-fidelity grid nodes. The corresponding inputs are PDE parameters and parameterized initial or boundary conditions. Detailed experimental setups can be found in Appendix E.1.
We uniformly generate samples for testing and 32 for training, and increase the number of high-fidelity training samples up to the number of low-fidelity training samples (32). The comparisons are conducted five times with shuffled samples. The statistics (mean and std) of the RMSE are reported in Fig. 1. GAR and CIGAR outperform the competitors by a significant margin, with up to a 6x reduction in RMSE, and reach their best accuracy with at most 8 high-fidelity samples, indicating a successful transfer from low to high fidelity. CIGAR is slightly worse than GAR, possibly due to the lack of a hard constraint on the orthogonality of its weight matrices in the implementation. As we have found in the literature, AR consistently performs well. With a flexible linear transformation, GAR outperforms AR while inheriting its robustness, leading to the best performance. For the unaligned outputs, MF-BNN shows slightly worse performance than in the aligned cases, highlighting the challenge of unaligned outputs. In contrast, GAR and CIGAR show almost identical performance in both cases. Nevertheless, MF-BNN also performs well compared to the rest of the methods, which is consistent with the findings in [13]. It is interesting to see that for the non-subset data, the capable methods show better performance than in the subset cases. GAR and CIGAR still outperform the competitors by a clear margin.
5.2 Multi-Fidelity Fusion for Real-World Applications
Optimal topology structure is the optimized layout of a material, e.g., alloy or concrete, given design specifications, e.g., external force and angle. Topology optimization is a key technique in mechanical design, but it is also known for its high computational cost, which motivates multi-fidelity fusion. We consider the topology optimization of a cantilever beam with the location of the point load, the angle of the point load, and the filter radius [55] as system inputs. The low fidelity uses a coarse regular mesh for the finite element solver, whereas the high fidelity uses a finer mesh. Please see Appendix E.3 for the detailed setup.
As in the previous experiment, the RMSE statistics against an increasing number of high-fidelity training samples are shown in Fig. 2. It is clear that GAR consistently outperforms the other competitors by a large margin. CIGAR can be as good as GAR when the number of training samples is large.
The steady-state 3D solid oxide fuel cell model, which simultaneously solves complex coupled PDEs including Ohm's law, the Navier-Stokes equations, the Brinkman equation, and Maxwell-Stefan diffusion and convection, is a key model for modern fuel cell optimization. The model was solved using the finite element method in COMSOL. The inputs were taken to be the electrode porosities, cell voltage, temperature, and pressure in the channels. The low-fidelity experiment is conducted using 3,164 elements and a relative tolerance of 0.1, whereas the high fidelity uses 37,064 elements and a relative tolerance of 0.001. The outputs are the coupled fields of electrolyte current density (ECD) and ionic potential (IP) in the plane located at the center of the channel. Please see Appendix E.4 for the detailed experimental setup.
The RMSE statistics are shown in Fig. 3(a), which, again, highlight the superiority of the proposed method with only four high-fidelity data points. To further assess the model capacity for non-structured outputs, we keep only the ECD (Fig. 3(b)) or the IP (Fig. 3(c)) in the low-fidelity training data to raise the challenge of predicting the high-fidelity ECD+IP fields. We can see that removing some low-fidelity information indeed increases the difficulty, especially when removing the ECD, where MF-BNN outperforms GAR and CIGAR for small numbers of training samples. As the number of training samples increases, GAR and CIGAR become superior again.
Plasmonic nanoparticle arrays is a complex physical simulation that calculates the extinction and scattering efficiencies for plasmonic systems with varying numbers of scatterers using the coupled dipole approximation (CDA) approach. CDA is a method for mimicking the optical response of an array of similar, non-magnetic metallic nanoparticles with dimensions far smaller than the wavelength of light (here 25 nm). The extinction and scattering efficiencies are the QoIs in this work. Please see Appendix E.5 for the detailed experimental setup.
We conducted the experiments five times with shuffled samples; we fixed the number of low-fidelity training samples to 32, 64, and 128 and gradually increased the number of high-fidelity training samples from 4 up to 32, 64, and 128, respectively. We can see in Fig. 4 that GAR outperforms the others by a clear margin, especially when only 4 high-fidelity samples are available. When the training dataset is large, CIGAR can be as good as GAR.
6 Conclusion
We propose GAR, the first generalization of AR to arbitrary outputs and non-subset multi-fidelity data with a closed-form solution, and CIGAR, an efficient implementation obtained by revealing the autokrigeability in the AR. The limitations of this work are its scalability w.r.t. the number of training samples, the lack of active learning [13], and its application to broader problems such as time series and transfer learning [49, 50].
References
- [1] Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P. Adams, and Nando Freitas. Taking the Human Out of the Loop: A Review of Bayesian Optimization. Proceedings of the IEEE, 104(1):148–175. ISSN 0018-9219, 1558-2256.
- [2] Apostolos F. Psaros, Xuhui Meng, Zongren Zou, Ling Guo, and George Em Karniadakis. Uncertainty Quantification in Scientific Machine Learning: Methods, Metrics, and Comparisons.
- [3] M. Kennedy. Predicting the output from a complex computer code when fast approximations are available. Biometrika, 87(1):1–13. ISSN 0006-3444, 1464-3510.
- Poloczek et al. [2017] Matthias Poloczek, Jialei Wang, and Peter Frazier. Multi-information source optimization. Advances in neural information processing systems, 30, 2017.
- [5] Jialin Song, Yuxin Chen, and Yisong Yue. A General Framework for Multi-fidelity Bayesian Optimization with Gaussian Processes. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 3158–3167.
- Parussini et al. [2017] Lucia Parussini, Daniele Venturi, Paris Perdikaris, and George E Karniadakis. Multi-fidelity gaussian process regression for prediction of random fields. Journal of Computational Physics, 336:36–50, 2017.
- Xing et al. [a] W. Xing, M. Razi, R. M. Kirby, K. Sun, and A. A. Shah. Greedy nonlinear autoregression for multifidelity computer models at different scales. Energy and AI, 1:100012, a. ISSN 2666-5468.
- [8] Zheng Wang, Wei Xing, Robert Kirby, and Shandian Zhe. Multi-Fidelity High-Order Gaussian Processes for Physical Simulation. In International Conference on Artificial Intelligence and Statistics, pages 847–855. PMLR.
- Xing et al. [b] W. W. Xing, A. A. Shah, P. Wang, S. Zhe, Q. Fu, and R. M. Kirby. Residual gaussian process: A tractable nonparametric bayesian emulator for multi-fidelity simulations. Applied Mathematical Modelling, 97:36–56, b. ISSN 0307-904X.
- [10] M. Giselle Fernández-Godino, Chanyoung Park, Nam-Ho Kim, and Raphael T. Haftka. Review of multi-fidelity models.
- [11] B. Peherstorfer, K. Willcox, and M. Gunzburger. Survey of Multifidelity Methods in Uncertainty Propagation, Inference, and Optimization. SIAM Review, 60(3):550–591. ISSN 0036-1445.
- Xing et al. [c] Wei W. Xing, Robert M. Kirby, and Shandian Zhe. Deep coregionalization for the emulation of simulation-based spatial-temporal fields. Journal of Computational Physics, 428:109984, c. ISSN 0021-9991.
- Li et al. [2020] Shibo Li, Robert M Kirby, and Shandian Zhe. Deep multi-fidelity active learning of high-dimensional outputs. arXiv preprint arXiv:2012.00901, 2020.
- [14] Mauricio A. Alvarez, Lorenzo Rosasco, and Neil D. Lawrence. Kernels for Vector-Valued Functions: A Review.
- [15] Loic Le Gratiet. Bayesian Analysis of Hierarchical Multifidelity Codes. SIAM/ASA Journal on Uncertainty Quantification, 1(1):244–269. ISSN 2166-2525.
- [16] Paris Perdikaris, M Raissi, Andreas Damianou, N D. Lawrence, and George Karniadakis. Nonlinear Information Fusion Algorithms for Data-Efficient Multi-Fidelity Modelling, volume 473. Royal Society.
- [17] Carl Edward Rasmussen and Christopher K I Williams. Gaussian Processes for Machine Learning. Gaussian Processes for Machine Learning, page 266.
- [18] Shandian Zhe, Wei Xing, and Robert M. Kirby. Scalable High-Order Gaussian Process Regression. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 2611–2620. PMLR.
- [19] Tamara Gibson Kolda. Multilinear operators for higher-order decompositions.
- [20] Zenglin Xu, Feng Yan, Yuan, and Qi. Infinite Tucker Decomposition: Nonparametric Bayesian Models for Multiway Data Analysis.
- [21] Andrew Gordon Wilson and Hannes Nickisch. Kernel interpolation for scalable structured Gaussian processes (KISS-GP). In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ICML’15, pages 1775–1784. JMLR.org.
- Perrone et al. [2018] Valerio Perrone, Rodolphe Jenatton, Matthias W Seeger, and Cédric Archambeau. Scalable hyperparameter transfer learning. Advances in neural information processing systems, 31, 2018.
- Li et al. [a] Shibo Li, Wei Xing, Robert Kirby, and Shandian Zhe. Multi-Fidelity Bayesian Optimization via Deep Neural Networks. Advances in Neural Information Processing Systems, 33:8521–8531, a.
- [24] Akil Narayan, Claude Gittelson, and Dongbin Xiu. A Stochastic Collocation Algorithm with Multifidelity Models. SIAM Journal on Scientific Computing, 36(2):A495–A521. ISSN 1064-8275.
- [25] Kurt Cutajar, Mark Pullin, Andreas Damianou, Neil Lawrence, and Javier González. Deep Gaussian Processes for Multi-fidelity Modeling.
- Wilson and Adams [2013] Andrew Wilson and Ryan Adams. Gaussian process kernels for pattern discovery and extrapolation. In International conference on machine learning, pages 1067–1075. PMLR, 2013.
- [27] Mauricio A. Álvarez, Lorenzo Rosasco, and Neil D. Lawrence. Kernels for Vector-Valued Functions: A Review. Foundations and Trends® in Machine Learning, 4(3):195–266. ISSN 1935-8237, 1935-8245.
- Goulard and Voltz [1992] Michel Goulard and Marc Voltz. Linear coregionalization model: tools for estimation and choice of cross-variogram matrix. Mathematical Geology, 24(3):269–286, 1992.
- Goovaerts et al. [1997] Pierre Goovaerts et al. Geostatistics for natural resources evaluation. Oxford University Press on Demand, 1997.
- Teh et al. [2005] Yee Whye Teh, Matthias Seeger, and Michael I Jordan. Semiparametric latent factor models. In International Workshop on Artificial Intelligence and Statistics, pages 333–340. PMLR, 2005.
- [31] Dave Higdon, James Gattiker, Brian Williams, and Maria Rightley. Computer Model Calibration Using High-Dimensional Output. Journal of the American Statistical Association, 103(482):570–583. ISSN 0162-1459, 1537-274X.
- Xing et al. [d] W.W. Xing, V. Triantafyllidis, A.A. Shah, P.B. Nair, and N. Zabaras. Manifold learning for the emulation of spatial fields from computational models. Journal of Computational Physics, 326:666–690, d. ISSN 0021-9991.
- Xing et al. [e] Wei Xing, Akeel A. Shah, and Prasanth B. Nair. Reduced dimensional Gaussian process emulators of parametrized partial differential equations based on Isomap. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 471(2174):20140697, e. ISSN 1364-5021, 1471-2946.
- Álvarez et al. [2019] Mauricio A Álvarez, Wil Ward, and Cristian Guarnizo. Non-linear process convolutions for multi-output gaussian processes. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 1969–1977. PMLR, 2019.
- Boyle and Frean [2004] Phillip Boyle and Marcus Frean. Dependent gaussian processes. Advances in neural information processing systems, 17, 2004.
- [36] Dave Higdon. Space and Space-Time Modeling using Process Convolutions. In Clive W. Anderson, Vic Barnett, Philip C. Chatwin, and Abdel H. El-Shaarawi, editors, Quantitative Methods for Current Environmental Issues, pages 37–56. Springer London. ISBN 978-1-4471-1171-9 978-1-4471-0657-9.
- Bonilla et al. [2007] Edwin V Bonilla, Felix V Agakov, and Christopher KI Williams. Kernel multi-task learning using task-specific features. In Artificial Intelligence and Statistics, pages 43–50. PMLR, 2007.
- Rakitsch et al. [2013] Barbara Rakitsch, Christoph Lippert, Karsten Borgwardt, and Oliver Stegle. It is all in the noise: Efficient multi-task gaussian process inference with structured residuals. Advances in neural information processing systems, 26, 2013.
- [39] Ping Li and Songcan Chen. Hierarchical Gaussian Processes model for multi-task learning. Pattern Recognition, 74:134–144. ISSN 0031-3203.
- Wilson et al. [2012] Andrew Gordon Wilson, David A. Knowles, and Zoubin Ghahramani. Gaussian process regression networks. In Proceedings of the 29th International Coference on International Conference on Machine Learning, ICML’12, page 1139–1146, Madison, WI, USA, 2012. Omnipress. ISBN 9781450312851.
- Nguyen and Bonilla [2013] Trung Nguyen and Edwin Bonilla. Efficient variational inference for gaussian process regression networks. In Artificial Intelligence and Statistics, pages 472–480. PMLR, 2013.
- [42] Tamara G. Kolda and Brett W. Bader. Tensor Decompositions and Applications. SIAM Review, 51(3):455–500. ISSN 0036-1445, 1095-7200.
- Li et al. [b] Shibo Li, Wei Xing, Robert M. Kirby, and Shandian Zhe. Scalable Gaussian Process Regression Networks. volume 3, pages 2456–2462, b.
- [44] Andreas Damianou. Deep Gaussian processes and variational propagation of uncertainty.
- Kandasamy et al. [2016] Kirthevasan Kandasamy, Gautam Dasarathy, Junier B Oliva, Jeff Schneider, and Barnabás Póczos. Gaussian process bandit optimisation with multi-fidelity evaluations. Advances in neural information processing systems, 29, 2016.
- Zhang et al. [2017] Yehong Zhang, Trong Nghia Hoang, Bryan Kian Hsiang Low, and Mohan Kankanhalli. Information-based multi-fidelity bayesian optimization. In NIPS Workshop on Bayesian Optimization, 2017.
- Wu et al. [2022] Dongxia Wu, Matteo Chinazzi, Alessandro Vespignani, Yi-An Ma, and Rose Yu. Multi-fidelity hierarchical neural processes. arXiv preprint arXiv:2206.04872, 2022.
- [48] Xuhui Meng and George Em Karniadakis. A composite neural network that learns from multi-fidelity data: Application to function approximation and inverse PDE problems. Journal of Computational Physics, 401:109020. ISSN 0021-9991.
- Requeima et al. [2019] James Requeima, William Tebbutt, Wessel Bruinsma, and Richard E Turner. The gaussian process autoregressive regression model (gpar). In The 22nd International Conference on Artificial Intelligence and Statistics, pages 1860–1869. PMLR, 2019.
- Xia et al. [2020] Rui Xia, Wessel Bruinsma, William Tebbutt, and Richard E Turner. The gaussian process latent autoregressive model. In Third Symposium on Advances in Approximate Bayesian Inference, 2020.
- [51] Rui Tuo, C. F. Jeff Wu, and Dan Yu. Surrogate Modeling of Computer Experiments With Different Mesh Densities. Technometrics, 56(3):372–380. ISSN 0040-1706, 1537-2723.
- [52] Mehmet Onder Efe and Hitay Ozbay. Proper orthogonal decomposition for reduced order modeling: 2d heat flow. In Proceedings of 2003 IEEE Conference on Control Applications, 2003. CCA 2003., volume 2, pages 1273–1277. IEEE.
- [53] Maziar Raissi and George Em Karniadakis. Machine Learning of Linear Differential Equations using Gaussian Processes. Journal of Computational Physics, 348:683–693. ISSN 0021-9991.
- [54] I.M Sobol’. On the distribution of points in a cube and the approximate evaluation of integrals. USSR Computational Mathematics and Mathematical Physics, 7(4):86–112. ISSN 0041-5553.
- Bruns and Tortorelli [2001] Tyler E. Bruns and Daniel A. Tortorelli. Topology optimization of non-linear elastic structures and compliant mechanisms. Computer Methods in Applied Mechanics and Engineering, 190(26):3443 – 3459, 2001. ISSN 0045-7825.
- [56] TJ Chung. Computational fluid dynamics. Cambridge university press.
- [57] N Sugimoto. Burgers equation with a fractional derivative; hereditary effects on nonlinear acoustic waves. Journal of fluid mechanics, 225:631–653.
- [58] Kai Nagel. Particle hopping models and traffic flow theory. Physical review E, 53(5):4655.
- [59] S Kutluay, AR Bahadir, and A Özdecs. Numerical solution of one-dimensional burgers equation: explicit and exact-explicit finite difference methods. Journal of Computational and Applied Mathematics, 103(2):251–261.
- [60] A. A. Shah, W. W. Xing, and V. Triantafyllidis. Reduced-order modelling of parameter-dependent, linear and nonlinear dynamic partial differential equation models. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 473(2200):20160809. ISSN 1364-5021, 1471-2946.
- [61] Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. Physics informed deep learning (part i): Data-driven solutions of nonlinear partial differential equations. arXiv preprint arXiv:1711.10561.
- [62] Steven C Chapra, Raymond P Canale, et al. Numerical methods for engineers. Boston: McGraw-Hill Higher Education,.
- [63] S Persides. The laplace and poisson equations in schwarzschild’s space-time. Journal of Mathematical Analysis and Applications, 43(3):571–578.
- Lagaris et al. [Sept./1998] I. E. Lagaris, A. Likas, and D. I. Fotiadis. Artificial Neural Networks for Solving Ordinary and Partial Differential Equations. IEEE Transactions on Neural Networks, 9(5):987–1000, Sept./1998. ISSN 1045-9227.
- [65] Frank Spitzer. Electrostatic capacity, heat flow, and brownian motion. Probability theory and related fields, 3(2):110–121.
- [66] Krzysztof Burdzy, Zhen-Qing Chen, John Sylvester, et al. The heat equation and reflected brownian motion in time-dependent domains. The Annals of Probability, 32(1B):775–804.
- [67] Fischer Black and Myron Scholes. The pricing of options and corporate liabilities. Journal of political economy, 81(3):637–654.
- Xing et al. [f] Wei Xing, Shireen Y. Elhabian, Vahid Keshavarzzadeh, and Robert M. Kirby. Shared-Gaussian Process: Learning Interpretable Shared Hidden Structure Across Data Spaces for Design Space Analysis and Exploration. Journal of Mechanical Design, 142(8), f. ISSN 1050-0472, 1528-9001.
- Andreassen et al. [2011] Erik Andreassen, Anders Clausen, Mattias Schevenels, Boyan S. Lazarov, and Ole Sigmund. Efficient topology optimization in matlab using 88 lines of code. Structural and Multidisciplinary Optimization, 43(1):1–16, Jan 2011. ISSN 1615-1488.
- Bendsoe and Sigmund [2004] Martin Philip Bendsoe and Ole Sigmund. Topology optimization: Theory, methods and applications. Springer, 2004.
- Guérin et al. [2006] Charles-Antoine Guérin, Pierre Mallet, and Anne Sentenac. Effective-medium theory for finite-size aggregates. JOSA A, 23(2):349–358, 2006.
- [72] Mani Razi, Ren Wang, Yanyan He, Robert M. Kirby, and Luca Dal Negro. Optimization of Large-Scale Vogel Spiral Arrays of Plasmonic Nanoparticles. Plasmonics, 14(1):253–261. ISSN 1557-1955, 1557-1963.
- Christofi et al. [2016] Aristi Christofi, Felipe A Pinheiro, and Luca Dal Negro. Probing scattering resonances of vogel’s spirals with the green’s matrix spectral method. Optics letters, 41(9):1933–1936, 2016.
- Wu et al. [2021] Dongxia Wu, Liyao Gao, Xinyue Xiong, Matteo Chinazzi, Alessandro Vespignani, Yi-An Ma, and Rose Yu. Quantifying uncertainty in deep spatiotemporal forecasting. arXiv preprint arXiv:2105.11982, 2021.
Checklist
1. For all authors...
   (a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] See the contributions, abstract, and introduction.
   (b) Did you describe the limitations of your work? [Yes] See the conclusion and the complexity analysis section.
   (c) Did you discuss any potential negative societal impacts of your work? [N/A] We do not see an obvious negative societal impact, as the work is fundamental and largely theoretical.
   (d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
   (a) Did you state the full set of assumptions of all theoretical results? [Yes]
   (b) Did you include complete proofs of all theoretical results? [Yes] Please see the Appendix.
3. If you ran experiments...
   (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] Please see the supplementary materials.
   (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] Please see the Appendix.
   (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] Please see the experimental section.
   (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] Please see the experimental section.
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
   (a) If your work uses existing assets, did you cite the creators? [Yes] Please see the experimental section.
   (b) Did you mention the license of the assets? [Yes]
   (c) Did you include any new assets either in the supplemental material or as a URL? [Yes]
   (d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [Yes] See the experimental section and the Appendix.
   (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A] We do not use such data.
5. If you used crowdsourcing or conducted research with human subjects...
   (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
   (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
   (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
Appendix
Appendix A Gaussian process
A Gaussian process (GP) is a typical choice for a surrogate model because of its capacity to model complicated black-box functions and quantify uncertainty. Consider, for the time being, a simplified scenario in which we have noise-contaminated observations of a latent function $f(\mathbf{x})$. In a GP model, a prior distribution is placed over $f(\mathbf{x})$, indexed by $\mathbf{x}$:
(A.1)   $f(\mathbf{x}) \sim \mathcal{GP}\!\left(m(\mathbf{x}),\ k(\mathbf{x}, \mathbf{x}'; \boldsymbol{\theta})\right)$
with mean and covariance functions:
(A.2)   $m(\mathbf{x}) = \mathbb{E}\!\left[f(\mathbf{x})\right], \qquad k(\mathbf{x}, \mathbf{x}'; \boldsymbol{\theta}) = \mathbb{E}\!\left[\left(f(\mathbf{x}) - m(\mathbf{x})\right)\left(f(\mathbf{x}') - m(\mathbf{x}')\right)\right]$
where $\mathbb{E}[\cdot]$ denotes expectation and $\boldsymbol{\theta}$ are the hyperparameters that control the kernel function. By centering the data, the mean function may be assumed to be a constant. Alternative options are feasible, such as a linear function of $\mathbf{x}$, but they are rarely used unless prior knowledge of the shape of the function is available. The covariance function can take several forms, with the automatic relevance determination (ARD) kernel being the most popular.
(A.3)   $k(\mathbf{x}, \mathbf{x}'; \boldsymbol{\theta}) = \sigma^2 \exp\!\left(-\frac{1}{2} \sum_{i} \frac{(x_i - x_i')^2}{\ell_i^2}\right)$
From this point on, we drop the explicit dependence of $k$ on $\boldsymbol{\theta}$. In this instance, the hyperparameters $\ell_i$ are referred to as length-scales. For a fixed input $\mathbf{x}$, $f(\mathbf{x})$ is a random variable; in contrast, a collection of values at several inputs is a partial realization of the GP. Realizations of a GP are deterministic functions of $\mathbf{x}$. The primary characteristic of GPs is that the joint distribution of any finite collection of function values is multivariate Gaussian.
Assuming the model discrepancy (noise) is likewise Gaussian, we can derive the model likelihood using the prior (A.1) and the available data:
(A.4)   $p(\mathbf{y} \mid \mathbf{X}, \boldsymbol{\theta}) = \mathcal{N}\!\left(\mathbf{y} \mid \mathbf{0},\ \mathbf{K} + \sigma_n^2 \mathbf{I}\right)$
where $\mathbf{K}$ is the covariance matrix, in which $[\mathbf{K}]_{ij} = k(\mathbf{x}_i, \mathbf{x}_j)$, and $\sigma_n^2$ is the noise variance. The hyperparameters are often obtained by point estimation, maximizing the likelihood (MLE) of Eq. (A.4) w.r.t. $\boldsymbol{\theta}$. The joint distribution of $\mathbf{y}$ and the function value at a new input $\mathbf{x}^*$ is also a joint Gaussian with zero mean and covariance matrix
(A.5)   $\begin{pmatrix} \mathbf{y} \\ f(\mathbf{x}^*) \end{pmatrix} \sim \mathcal{N}\!\left( \mathbf{0},\ \begin{pmatrix} \mathbf{K} + \sigma_n^2 \mathbf{I} & \mathbf{k}^* \\ (\mathbf{k}^*)^{\top} & k(\mathbf{x}^*, \mathbf{x}^*) \end{pmatrix} \right)$
where $\mathbf{k}^* = \left(k(\mathbf{x}^*, \mathbf{x}_1), \dots, k(\mathbf{x}^*, \mathbf{x}_N)\right)^{\top}$. Conditioning on $\mathbf{y}$, the predictive distribution at $\mathbf{x}^*$ is obtained:
(A.6)   $f(\mathbf{x}^*) \mid \mathbf{y} \sim \mathcal{N}\!\left( (\mathbf{k}^*)^{\top} \left(\mathbf{K} + \sigma_n^2 \mathbf{I}\right)^{-1} \mathbf{y},\ \ k(\mathbf{x}^*, \mathbf{x}^*) - (\mathbf{k}^*)^{\top} \left(\mathbf{K} + \sigma_n^2 \mathbf{I}\right)^{-1} \mathbf{k}^* \right)$
The expected value is given by the predictive mean and the uncertainty by the predictive variance in Eq. (A.6). The step from Eq. (A.5) to Eq. (A.6) is crucial, since the predictive posterior of this work is based on a comparable block covariance matrix.
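A compact, self-contained sketch of the GP regression equations above with an ARD kernel; hyperparameter optimization (MLE) is omitted for brevity, and all names and values are illustrative assumptions.

import numpy as np

def ard_rbf(X1, X2, lengthscales, variance=1.0):
    # ARD kernel: k(x, x') = s^2 exp(-0.5 * sum_i (x_i - x'_i)^2 / l_i^2), cf. Eq. (A.3).
    diff = (X1[:, None, :] - X2[None, :, :]) / lengthscales
    return variance * np.exp(-0.5 * (diff ** 2).sum(-1))

def gp_predict(X, y, X_star, lengthscales, variance=1.0, noise=1e-2):
    # Predictive mean and covariance of Eq. (A.6) from the joint Gaussian of Eq. (A.5).
    K = ard_rbf(X, X, lengthscales, variance) + noise * np.eye(len(X))
    k_star = ard_rbf(X, X_star, lengthscales, variance)
    mean = k_star.T @ np.linalg.solve(K, y)
    cov = ard_rbf(X_star, X_star, lengthscales, variance) - k_star.T @ np.linalg.solve(K, k_star)
    return mean, cov

rng = np.random.default_rng(0)
X = rng.uniform(size=(20, 2))
y = np.sin(X[:, 0]) + np.cos(X[:, 1]) + 0.01 * rng.normal(size=20)
mean, cov = gp_predict(X, y, rng.uniform(size=(5, 2)), lengthscales=np.array([0.5, 0.5]))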
Appendix B Proof of Theorem
Lemma 1.
[15] If the high-fidelity inputs form a subset of the low-fidelity inputs, the joint likelihood of AR can be decomposed into two independent likelihoods of the low- and high-fidelity data.
This lemma has been proven by [15]. However, the notation and derivation are not easy to follow. To lay the foundations for GAR, we prove it in a clearer way with friendlier notation.
Proof.
Following Eq. (2), the inversion of the covariance matrix is
We can write down the log-likelihood for all the low- and high-fidelity observations as,
(A.7) | ||||
where the first term is the log-likelihood of the low-fidelity data with the low-fidelity kernel, and the second is the log-likelihood of the residual data with the residual kernel; the two terms are independent and thus can be trained in parallel. ∎
Based on the joint probability in Eq. (2), we can similarly derive the predictive posterior distribution of the high fidelity using the standard GP posterior derivation. Conditioning on the low- and high-fidelity observations, the predictive high-fidelity posterior for a new input is also Gaussian:
(A.8) | ||||
and
(A.9) | ||||
where the relevant quantity is the covariance vector between the new input and the training inputs. Notice that the predictive posterior is also decomposed into two independent parts, related to the low-fidelity GP and the residual GP respectively, which is convenient for parallel computing and saves computational resources.
Lemma 2.
Given tensor GP priors for and and the Tucker transformation of Eq. (3), the joint probability for is , where
Proof.
Since the is the covariance matrix of , it can be expressed in block form as:
where is the cross-covariance between and . Assuming and , together with the property of the Tucker operator in Eq. (3), the high-fidelity data and low-fidelity data have the following transformation,
(A.10) | ||||
where , and is the selection matrix such that . By definition our GP prior, the low-fidelity data has the joint probability:
Thus the covariance matrix of low-fidelity data is . After that, we can derive the other part of the . Firstly, assuming the residual information is independent from , the covariance between and is
(A.11) | ||||
Since is the transpose of , so the upper right part of is
(A.12) |
For the lower and right part of , the covariance between is
(A.13) | ||||
Assembling the several parts together, we have the joint covariance matrix :
(A.14) |
∎
Before we move on to the next proof, we introduce a matrix inversion property, which will come in handy later.
Property 1.
For any invertible block matrixes , where the sub-matrixes are also invertible, we have and .
Proof.
The inversion of a block matrix (if it is invertible) following the Sherman–Morrison formula is
(A.15)   $\begin{pmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{pmatrix}^{-1} = \begin{pmatrix} \mathbf{A}^{-1} + \mathbf{A}^{-1}\mathbf{B}\mathbf{S}^{-1}\mathbf{C}\mathbf{A}^{-1} & -\mathbf{A}^{-1}\mathbf{B}\mathbf{S}^{-1} \\ -\mathbf{S}^{-1}\mathbf{C}\mathbf{A}^{-1} & \mathbf{S}^{-1} \end{pmatrix}$
where $\mathbf{S} = \mathbf{D} - \mathbf{C}\mathbf{A}^{-1}\mathbf{B}$; we can then derive the multiplication in Property 1 by the rule of block matrix multiplication:
(A.16) | ||||
Similarly, the other part of the conclusion can also be derived
(A.17) | ||||
∎
which seems quite obvious and intuitive if we assume that the matrix is symmetric.
Lemma 3.
Generalization of Lemma 1 in GAR. If , the joint likelihood for admits two independent separable likelihoods , where
where is the original weight tensor concatenated with an selection matrix .
Proof. Let the kernel matrix be partitioned into four blocks. We again make use of the matrix inversion of Sherman–Morrison formula in Property 1, with a slight modification as follows:
where
We begin this proof with the matrix within Property 1, which gives us:
Therefore, the last part of matrix is
(A.18) | ||||
Substituting Eq. (A.18) back into the matrix inversion, we can derive the remaining block matrices as
(A.19) | ||||
(A.20) | ||||
(A.21) | ||||
and
(A.22) | ||||
Putting all these elements together, we get the inversion of joint kernel matrix
(A.23) |
where , , , , and as defined in the main paper. With the property in Eq. (A.10), and defining , we can substitute Eq. (A.23) into the joint likelihood to derive the data fitting part of the joint likelihood
(A.24) | ||||
With the block matrix’s determinant formula, we can also derive the determinant of joint kernel matrix,
(A.25) |
where we do not decompose them further, with the purpose of forming two independent GPs for the low and high fidelity. With the conclusion of Eq. (A.24), the full joint log-likelihood is
(A.26) | ||||
The meanings of the block matrices remain the same as defined in Eq. (A.23).
B.1 Posterior distribution
For the posterior distribution we compute the mean function and covariance matrix with the assumption that the high-fidelity and low-fidelity data have very strict subset requirements, . With the conclusion in Lemma 2 and the rule of block matrix multiplication, the mean function and covariance matrix have the following expression,
(A.27) | ||||
(A.28) | ||||
where the , and have the same meaning with the main paper.
B.2 Joint Likelihood for Non-Subset Multi-Fidelity Data
In the main paper and the subset section, we decompose the joint likelihood into two parts as
where the second factor is the predictive posterior derived as if the high-fidelity inputs were a subset of the low-fidelity inputs.
(A.29) | ||||
where we define . Based on the low-fidelity training data, we also have being a Gaussian.
(A.30) | ||||
where the is the posterior covariance matrix of the . We can combine Eq. (A.29) and Eq. (A.30) to derive the integral part of the joint likelihood
(A.31) | ||||
where the . Since we know that and we assume that , we can derive
(A.32) | ||||
in which the , and denotes the corresponding part of the . For convenience, we choose to compute the exponential part in Eq. (A.31) as our first step. We try to decompose it into the subset part, i.e., ( and ) and the non-subset part, i.e., ( and ). Eq. (A.32) will become handy for the later derivation. Let’s first consider the data fitting part by substituting Eq. (A.32) into Eq. (A.31),
(A.33) | ||||
which gives us the decomposition as the subset part, the non-subset part, and the interaction part between them. Now we can substitute Eq. (A.33) into the integral part in Eq. (A.31),
(A.34) | ||||
where is defined by Eq. (A.35) ,
(A.35) | ||||
With Sherman-Morrison formula, we can further simplify part 1 in Eq. (A.34) as
(A.36) | ||||
part 2 in Eq. (A.34) as
(A.37) | ||||
and part 3 in Eq. (A.34) as
(A.38) | ||||
With the simplifications of parts 1, 2, and 3 obtained in Eq. (A.36), Eq. (A.37), and Eq. (A.38), the integral part becomes more compact; substituting them back into Eq. (A.34), it is equal to
(A.39) | ||||
The determinant part of Eq. (A.31) of the matrix can also be decomposed in the following way,
(A.40)
Putting together everything we have derived up to this point, the joint likelihood for the non-subset data is:
(A.41)
where , and .
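The tractability of Eq. (A.41) rests on the standard fact that composing a Gaussian conditional with a Gaussian marginal and integrating yields another Gaussian; in generic placeholder notation,

```latex
\mathbf{y}_l\sim\mathcal{N}(\mathbf{m},\,S),\qquad
\mathbf{y}_h\mid\mathbf{y}_l\sim\mathcal{N}(W\mathbf{y}_l+\mathbf{b},\,\Sigma_r)
\;\Longrightarrow\;
\mathbf{y}_h\sim\mathcal{N}\!\left(W\mathbf{m}+\mathbf{b},\;WSW^{\top}+\Sigma_r\right),
```

which is why integrating out the unobserved low-fidelity values in the non-subset case still leaves a closed-form Gaussian likelihood.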
B.3 Posterior Distribution for Non-Subset Multi-Fidelity Data
We now explore the posterior distribution for this non-subset data structure. First, we express the posterior distribution as an integral,
(A.42)
We express the integral in terms of its different parts. Once is given, the predictive posterior can be written in the standard subset form, , where and are
(A.43)
To simplify further, we introduce the definitions
which simplify Eq. (A.43) to
At the same time, since is a sample from , it also follows the subset-form posterior distribution, i.e., , where and are
Therefore, the posterior distribution for the non-subset data structure is
(A.44)
where and are defined by the following equation,
(A.45)
We then utilize the Sherman–Morrison formula to simplify parts a, b, and c in Eq. (A.44) as follows. For part a in Eq. (A.44),
(A.46)
for part b in Eq. (A.44),
(A.47)
and for part c in Eq. (A.44),
(A.48)
The determinant (part d in Eq. (A.44)) can also be made more compact using the Sherman–Morrison formula,
(A.49)
Substituting parts a, b, c, and d back into Eq. (A.44), we obtain the compact form
(A.50)
We can see that the joint likelihood ends up with an elegant formulation in terms of the low-fidelity TGP and the residual TGP.
B.4 CIGAR
As mentioned in Section 3.4, we assume that the output covariance matrices and are identity matrices and that the weight matrices are orthogonal, i.e., , . Substituting these assumptions into Eq. (A.41), we obtain the simplified covariance matrix,
(A.51)
where is an identity matrix of size ; the same applies to ; and . The joint likelihood of the non-subset data becomes
(A.52)
where .
We can see that the complexity of the kernel matrix inversion is reduced to .
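To make this concrete, below is a minimal numpy check of the Kronecker identity that the CIGAR simplification rests on, (K ⊗ I)^{-1} = K^{-1} ⊗ I: only the n×n input kernel ever needs to be factorized instead of the nd×nd joint kernel. The RBF kernel and all sizes are illustrative choices, not the paper's exact setup.

```python
import numpy as np

def rbf(X, lengthscale=1.0):
    """Illustrative RBF kernel on a set of inputs (stand-in for the actual kernel)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
n, d = 8, 5                                  # n training inputs, d output dimensions
X = rng.normal(size=(n, 2))
K = rbf(X) + 1e-3 * np.eye(n)                # jitter for numerical stability

# Joint kernel when the output covariance is the identity (the CIGAR assumption)
K_joint = np.kron(K, np.eye(d))              # (n*d) x (n*d)

# Cheap route: invert only the n x n kernel (O(n^3)) and expand by the identity
K_joint_inv_cheap = np.kron(np.linalg.inv(K), np.eye(d))

assert np.allclose(np.linalg.inv(K_joint), K_joint_inv_cheap)
print("(K kron I)^-1 == K^-1 kron I verified on a toy example")
```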
B.5 -Fidelity Autoregression Model
As mentioned in Section 2.1, AR can be applied to more levels of fidelity, and so can GAR. In this section, we extend GAR to more levels of fidelity. Assuming , we can derive the joint covariance matrix,
where and .
Following the same steps as in the proof for GAR, we can derive the inverse of the joint covariance matrix,
Therefore, we have shown here that building an s-level TGP is equivalent to building s independent TGPs. We present the mean function and covariance matrix of the posterior distribution,
(A.53)
Appendix C Summary of the SOTA Methods
We compare and summarize the capabilities and complexities of the SOTA methods, GAR, and CIGAR in Table 1.
Model | Arbitrary outputs? | Non-subset data? | Complexity
NAR [16] | Yes | No |
ResGP [9] | No | No |
MF-BNN [13] | Yes | Yes | *
DC [12] | Yes | No |
AR [3] | No | No |
GAR | Yes | Yes |
CIGAR | Yes | Yes |
* is the total weight size of the NN for the i-th fidelity and is the number of all parameters.
Appendix D Implementation and Complexity
We now present the training and prediction algorithms for GAR and CIGAR using tensor algebra, so that the full covariance matrix is never assembled or explicitly computed, which improves computational efficiency. We use a normal TGP as an example, given the dataset , . The inference needs to estimate all the covariance matrices and . For compactness, we use and to denote and , and . We estimate the parameters by minimizing the negative log-likelihood of the model,
However, since is a matrix of size , it becomes infeasible to compute the inverse of when the output size is large. So for the TGP, we exploit the Kronecker product in to calculate the negative log-likelihood efficiently. Firstly, we eigendecompose each kernel matrix, and . Then we use to denote the joint kernel matrix, . With the Kronecker product property, we have
(A.54)
where and, since and are orthogonal eigenvector matrices, . Therefore, we have
(A.55)
Therefore, we only need to compute the diagonal elements to calculate this part of the negative log-likelihood.
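The following is a minimal sketch of this log-determinant trick for a TGP-style covariance of the form Σ = K ⊗ S₁ ⊗ S₂ + σ²I; the factors are random SPD matrices standing in for the actual kernels, and the choice of exactly two output factors is only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(m):
    """Random symmetric positive-definite matrix as a stand-in kernel factor."""
    A = rng.normal(size=(m, m))
    return A @ A.T + m * np.eye(m)

K, S1, S2 = random_spd(6), random_spd(4), random_spd(3)   # input kernel, two output factors
sigma2 = 0.1                                              # noise variance

# Naive: assemble the full 72 x 72 covariance and take its log-determinant
Sigma = np.kron(np.kron(K, S1), S2) + sigma2 * np.eye(6 * 4 * 3)
_, logdet_naive = np.linalg.slogdet(Sigma)

# Kronecker route: only the eigenvalues of each small factor are needed
lam_K, lam_1, lam_2 = (np.linalg.eigvalsh(M) for M in (K, S1, S2))
lam_joint = np.kron(np.kron(lam_K, lam_1), lam_2)         # eigenvalues of K ⊗ S1 ⊗ S2
logdet_kron = np.sum(np.log(lam_joint + sigma2))

assert np.isclose(logdet_naive, logdet_kron)
print(logdet_naive, logdet_kron)
```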
After that, we compute the part of the negative log-likelihood. First, we have , where is a tensor of all ones and is the Kruskal operator. Then we have
(A.56)
where . Since is a Kronecker-product matrix, we can apply the property of the Tucker operator [19] to compute .
(A.57)
where denotes the element-wise product and denotes the element-wise power. Therefore, the complexity of the negative log-likelihood is .
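A companion sketch for the data-fitting term y^T Σ^{-1} y under the same illustrative Σ = K ⊗ S₁ ⊗ S₂ + σ²I: rotate the tensorized output with the per-factor eigenvector matrices via mode products (the Tucker-operator trick referred to above), rescale by the shifted eigenvalues, and never assemble Σ.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_spd(m):
    A = rng.normal(size=(m, m))
    return A @ A.T + m * np.eye(m)

K, S1, S2 = random_spd(6), random_spd(4), random_spd(3)
sigma2 = 0.1
Y = rng.normal(size=(6, 4, 3))                 # tensorized training outputs
y = Y.reshape(-1)                              # vec(Y); C-order matches np.kron below

# Eigendecompose each small factor: M = U diag(lam) U^T
(lam_K, U_K), (lam_1, U_1), (lam_2, U_2) = (np.linalg.eigh(M) for M in (K, S1, S2))

# z = (U_K ⊗ U_1 ⊗ U_2)^T vec(Y), computed with mode products instead of a Kronecker matrix
Z = np.einsum('ia,jb,kc,ijk->abc', U_K, U_1, U_2, Y)
lam_joint = lam_K[:, None, None] * lam_1[None, :, None] * lam_2[None, None, :]
quad_kron = np.sum(Z**2 / (lam_joint + sigma2))

# Naive check against the assembled covariance
Sigma = np.kron(np.kron(K, S1), S2) + sigma2 * np.eye(6 * 4 * 3)
quad_naive = y @ np.linalg.solve(Sigma, y)

assert np.isclose(quad_kron, quad_naive)
print(quad_kron, quad_naive)
```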
Based on the above conclusions, we can also compute GAR more efficiently. According to Lemma 3, the joint likelihood admits two separable likelihoods and . For each of these two, we can use the same tricks to reduce the complexity to , since
(A.58)
in which , and , are the vectors and eigenvalues corresponding to the low-fidelity data and the residuals. Therefore, the joint log-likelihood is
(A.59)
Given a new input , the prediction of the output tensorized as is a conditional Gaussian distribution , where
(A.60)
We can use the Tucker operator to compute the predictive mean and more efficiently. Using the eigendecomposition of the kernel matrix, we can derive that , where . Therefore, . We can also use tensor algebra to calculate the predictive covariance matrix
where . Therefore, we can also compute the predictive covariance matrix in GAR efficiently.
(A.61)
where and are the vectors of the low-fidelity and residual data. When we calculate , we need to be careful that the output kernel matrix should be .
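Finally, a minimal sketch of the prediction step described above for a single new input, again under an illustrative Σ = K ⊗ S₁ ⊗ S₂ + σ²I with an RBF input kernel (assumed choices, not necessarily the paper's): compute α = Σ^{-1} vec(Y) through the per-factor eigendecompositions and obtain the predictive mean with mode products.

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(A, B, ls=1.0):
    """Illustrative RBF kernel between two input sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def random_spd(m):
    M = rng.normal(size=(m, m))
    return M @ M.T + m * np.eye(m)

n, d1, d2, sigma2 = 6, 4, 3, 0.1
X, Xstar = rng.normal(size=(n, 2)), rng.normal(size=(1, 2))
K = rbf(X, X) + 1e-6 * np.eye(n)
S1, S2 = random_spd(d1), random_spd(d2)
Y = rng.normal(size=(n, d1, d2))

# alpha = Sigma^{-1} vec(Y) through the per-factor eigendecompositions
(lK, UK), (l1, U1), (l2, U2) = (np.linalg.eigh(M) for M in (K, S1, S2))
Z = np.einsum('ia,jb,kc,ijk->abc', UK, U1, U2, Y)
lam = lK[:, None, None] * l1[None, :, None] * l2[None, None, :]
Alpha = np.einsum('ia,jb,kc,abc->ijk', UK, U1, U2, Z / (lam + sigma2))

# predictive mean: (k_* ⊗ S1 ⊗ S2)^T alpha, evaluated with mode products
kstar = rbf(Xstar, X)[0]
mean = np.einsum('i,bj,ck,ijk->bc', kstar, S1, S2, Alpha)

# naive check with the assembled matrices
Sigma = np.kron(np.kron(K, S1), S2) + sigma2 * np.eye(n * d1 * d2)
cross = np.kron(np.kron(kstar.reshape(1, -1), S1), S2)
mean_naive = (cross @ np.linalg.solve(Sigma, Y.reshape(-1))).reshape(d1, d2)
assert np.allclose(mean, mean_naive)
```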
Appendix E Experiment in Detail
E.1 Canonical PDEs
We consider three canonical PDEs: Poisson’s equation, the heat equation, and Burgers’ equation. These PDEs play crucial roles in scientific and technological applications \citesuppchapra2010numerical,chung2010computational,burdzy2004heat. They cover common simulation scenarios, such as high-dimensional spatial-temporal field outputs, nonlinearities, and discontinuities, and are frequently used as benchmark problems for surrogate models [12, 51, 52, 53]. and denote the spatial coordinates, and specifies the time coordinate, which differs from the notation in the main paper. The notation in this appendix serves merely to present the setup clearly; it has no bearing on or connection to the main article.
Burgers’ equation is regarded as a standard nonlinear hyperbolic PDE; it is commonly used to represent a variety of physical phenomena, including fluid dynamics [56], nonlinear acoustics [57], and traffic flows [58]. It serves as a benchmark test case for several numerical solvers and surrogate models [59, 60, 61] since it can generate discontinuities (shock waves) based on a normal conservation equation. The viscous version of this equation is given by
where denotes the velocity, represents a spatial location, indicates the time, and denotes the viscosity. We set , , and with homogeneous Dirichlet boundary conditions. We uniformly sampled viscosities as the input parameter to generate the solution fields.
In the space and time domains, the problem is solved using finite elements with hat functions and backward Euler, respectively. For the first (lowest-fidelity) solution, the spatial-temporal domain is discretized into a regular rectangular mesh. Higher-fidelity solvers double the number of nodes in each dimension of the mesh, e.g., for the second fidelity and for the third fidelity. The result fields (i.e., outputs) are computed on a regular spatial-temporal mesh.
Poisson’s equation is a typical elliptic PDE in mechanical engineering and physics for modeling potential fields, such as gravitational and electrostatic fields [62]. Written as
It is a generalization of Laplace’s equation [63]. Despite its simplicity, Poisson’s equation is commonly encountered in physics and is regularly used as a fundamental test case for surrogate models [51, 64]. In our experiment, we impose Dirichlet boundary conditions on a 2D spatial domain with . The input parameters consist of the constant values on the four borders and at the center of the rectangular domain, each varying from to . We sampled the input parameters uniformly in order to create the corresponding potential fields as outputs. The PDE is solved using the finite-difference approach with a first-order center-differencing scheme and regular rectangular meshes. For the coarsest-level solution, we utilized an mesh. The improved solver employs a finer mesh with twice as many nodes in each dimension. The resultant potential fields are recorded on a regular spatial grid of cells.
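For illustration, below is a minimal sketch of how such two-fidelity training fields can be generated: a 5-point finite-difference (Jacobi) solve of Laplace's equation on the unit square, with a coarse grid as the low fidelity and a grid with twice as many nodes per dimension as the high fidelity. The grid sizes, border values, and the omission of the center-value input are illustrative simplifications, not the exact setup used in the paper.

```python
import numpy as np

def solve_laplace(border_vals, n):
    """5-point finite-difference solve of Laplace's equation on the unit square
    with constant Dirichlet values on the four borders and n interior nodes per
    dimension (plain Jacobi iteration, sufficient for an illustrative grid)."""
    left, right, bottom, top = border_vals
    u = np.zeros((n + 2, n + 2))
    u[:, 0], u[:, -1], u[0, :], u[-1, :] = left, right, bottom, top
    for _ in range(20 * n * n):                    # enough sweeps for these sizes
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:])
    return u

borders = (0.2, 0.7, 0.1, 0.9)                     # hypothetical input parameters
u_low = solve_laplace(borders, n=7)                # coarse mesh -> low-fidelity field
u_high = solve_laplace(borders, n=16)              # doubled nodes -> high-fidelity field
print(u_low.shape, u_high.shape)                   # (9, 9) and (18, 18)
```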
The heat equation is a fundamental PDE that describes the time-dependent evolution of heat flux. Despite having been established in 1822 to describe only heat conduction, the heat equation is prevalent in many scientific domains, including probability theory [65, 66] and financial mathematics [67]. Consequently, it is commonly used as a benchmark for surrogate models. The heat equation reads:
where is the material’s conductivity, is the rate at which energy is generated per unit volume of the medium, is the density, and is the specific heat capacity. The input parameters are the flux rate of the left boundary at (ranging from 0 to 1), the flux rate of the right boundary at (ranging from -1 to 0), and the thermal conductivity (ranging from 0.01 to 0.1).
We establish a 2D spatial-temporal domain , with the Neumann boundary condition at and , and , where is the Heaviside step function.
The equation is solved using finite differences in the spatial domain and backward Euler in the time domain. The spatial-temporal domain is discretized into a regular rectangular mesh for the first (lowest-fidelity) solver, and a refined solver uses a finer mesh for the second fidelity. The result fields are computed on a regular spatial-temporal grid.
E.2 Multi-Fidelity Fusion for Canonical PDEs
We use the same experimental setup as in Section 5.1 for this experiment, with the only difference being that the training data are generated using a Sobol sequence. We generated 256 data samples for testing and 32 samples for training. We gradually increased the number of high-fidelity training samples from 4 to 32 with the number of low-fidelity training samples fixed at 32. Fig. 5 and Fig. 6 show the RMSE statistics for the outputs aligned by interpolation and for the original unaligned outputs, respectively. GAR and CIGAR outperform the competitors by large margins with scarce high-fidelity training data, as in the main paper. Similarly, the advantage of GAR and CIGAR is more obvious when dealing with non-aligned outputs, where GAR and CIGAR demonstrate a 5x reduction in RMSE with 4 and 8 high-fidelity training samples, surpassing the competitors by a wide margin.
E.3 Multi-Fidelity Fusion for Topology Optimization
We apply GAR to a topology structure optimization problem, where the output is the optimal topology structure (in terms of maximizing mechanical metrics such as stiffness) of a layout of materials, such as alloy and concrete, given design parameters such as the external force and its angle. Topology structure optimization is an important approach in mechanical design, e.g., for airfoils and slab bridges, especially with recent 3D printing processes in which material is deposited in minute quantities. However, it is well known that topology optimization is computationally intensive due to the gradient-based optimization and the simulations of the mechanical characteristics involved. A high-fidelity solution, which necessitates a large discretization mesh and imposes a significant computational overhead in space and time, makes matters worse.
Utilizing data-driven methods to aid the process by suggesting appropriate structures [68, 13] is therefore gaining popularity. Here, we investigate the topology optimization of a cantilever beam (shown in the appendix). We employ the fast implementation of [69] to carry out density-based topology optimization by minimizing compliance subject to volume constraints .
The SIMP scheme [70] is used to convert continuous density measurements into discrete, optimal topologies. We set the position of the point load , the angle of the point load , and the filter radius [55] as the system inputs. We solved this problem at low fidelity with a regular mesh and at high fidelity with a regular mesh. This experiment only includes techniques that can process arbitrary outputs.
As in the earlier experiments, we generate 128 testing samples and 64 training samples using a Sobol sequence to approximately assess the performance in an active-learning setting. The results are shown in Figure 8. We can see that all available methods show similar performance for both the raw outputs that are not aligned by interpolation and the aligned outputs. Nevertheless, GAR consistently outperforms the competitors by a clear margin. CIGAR, in contrast, performs better on the raw outputs.
E.4 Multi-Fidelity Fusion for Solid Oxide Fuel Cell
In this test problem, a steady-state 3-D solid oxide fuel cell model is considered. Fig. 9 illustrates the geometry. The model incorporates electronic and ionic charge balances (Ohm’s law), flow distribution in gas channels (Navier-Stokes equations), flow in porous electrodes (Brinkman equation), and gas-phase mass balances in both gas channels and porous electrodes (Maxwell-Stefan diffusion and convection). Butler-Volmer charge transfer kinetics is assumed for the reactions in the anode () and cathode (). The cell operates potentiostatically (constant cell voltage). COMSOL Multiphysics (https://www.comsol.com/model/current-density-distribution-in-a-solid-oxide-fuel-cell-514, Application ID: 514), which uses the finite-element approach, was used to solve the model.
The assumed inputs were the electrode porosities , the cell voltage V, the temperature K, and the channel pressure atm. A Sobol sequence was used to choose 60 inputs within the specified ranges for the low-fidelity and high-fidelity simulations. 40 high-fidelity test points were chosen at random (from the ranges above) to complete the test. The low-fidelity F1 model used 3164 mapped elements and a relative tolerance of 0.1, while the high-fidelity model employed 37064 elements and a relative tolerance of 0.001. Additionally, the COMSOL model employs a V-cycle geometric multigrid. The quantities of interest were the profiles of the electrolyte current density (A m⁻²) and the ionic potential (V) in the plane centered on the channels (Fig. 9). In both instances, points were captured, and both profiles were vectorized to provide the training and test outputs.
We add the classic experiment where the number of low-fidelity training samples is fixed to {32, 64, 128} and the number of high-fidelity training samples is gradually increased from 4 to {32, 64, 128}. The outputs are aligned using interpolation, and the experimental results are shown in Fig. 10. We can see that GAR and CIGAR always perform better than the other methods, especially when only a few high-fidelity training samples are used. This is consistent with the previous experiment. We can also see that AR performs well, indicating that this data is not highly nonlinear and complex, making it relatively easy to solve. However, both AR and MF-BNN converge to higher error bounds, whereas GAR and CIGAR converge to a lower error bound.
To investigate the prediction error in detail, we define the average RMSE field by
where is the prediction, is the ground-truth value, and the square root is an element-wise operation. Fig. 11 shows the average RMSE fields of the NAR, MF-BNN, and DC methods on the ECD field of the SOFC data with 32 low-fidelity training samples, 16 high-fidelity training samples, and 128 test samples. To highlight the advantage of GAR and CIGAR, Fig. 12 shows the average RMSE fields for the same setup with only 4 high-fidelity training samples. It can be seen clearly that GAR and CIGAR have a smaller error field even with only 4 high-fidelity training samples compared to NAR, MF-BNN, and DC with 16 high-fidelity training samples. Also note that GAR seems to have some checkerboard artifacts, which might be caused by the over-parameterization of using a full transfer matrix. We leave resolving this issue to future work. CIGAR has fewer checkerboard artifacts at the price of a slight increase in RMSE.
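Concretely, the average RMSE field is computed per spatial location across the test set; a minimal numpy version (array names are illustrative):

```python
import numpy as np

def average_rmse_field(pred, truth):
    """Element-wise RMSE over the test set: pred and truth have shape
    (n_test, H, W); the result is an (H, W) error field."""
    return np.sqrt(np.mean((pred - truth) ** 2, axis=0))
```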
In Fig. 13 and Fig. 14, similar to the previous experimental setup, we plot the average RMSE fields with 128 testing samples on the IP fields from the SOFC dataset. NAR, MF-BNN, and DC are trained with 16 high-fidelity samples, while GAR and CIGAR are trained with only 4 high-fidelity samples. We can see that our methods outperform the other methods by a clear margin. However, the checkerboard artifact is even worse for GAR in this case, whereas CIGAR successfully reduces such artifacts while also maintaining a low error.
E.5 Multi-Fidelity Fusion for Plasmonic Nanoparticle Arrays
In the final example, we calculate the extinction and scattering efficiencies and for plasmonic systems with varying numbers of scatterers using the Coupled Dipole Approximation (CDA) approach. CDA is a method for mimicking the optical response of an array of similar, non-magnetic metallic nanoparticles with dimensions far smaller than the wavelength of light (here 25 nm). and are defined as the QoIs in this work. We constructed surrogate models for the efficiencies with up to three fidelities using our proposed method. We examined particle arrays generated from Vogel spirals. Since the number of interactions between the incident waves and the particles influences the field, the number of nanoparticles in a plasmonic array has a substantial effect on the local extinction field caused by the array. The configurations of Vogel spirals with particle numbers in the set , which define fidelities F1 through F3, are depicted in Fig. 15. The parameter space consists of the incidence wavelength (nm), the divergence angle (rad), and the scaling factor . A Sobol sequence was utilized to choose the inputs. The computing time required to execute CDA increases rapidly as the number of nanoparticles increases. Consequently, the proposed sampling approach results in significant reductions in computational cost.
The response of a plasmonic array to electromagnetic radiation can be calculated from the solution of the local electric fields, , for each nano-sphere. Considering metallic particles defined by the same volumetric polarizability and situated at vector coordinates , the local fields can be obtained by solving the corresponding linear system [71],
(A.62)
in which is the incident field, is the wave number in the background medium, denotes the dielectric permittivity of vacuum ( in the CGS unit system), and is constructed from blocks of the overall Green’s matrices for the -th and -th particles. is a zero matrix when , and it is otherwise calculated as
(A.63)
where denotes the unit position vector from particle to particle , and . By solving Eqs. (A.62) and (A.63), the total local fields , and as a result the scattering and extinction cross-sections, are computed. Details of the numerical solution can be found in [72].
and are derived by normalizing the scattering and extinction cross-sections relative to the array’s entire projected area. We considered the Vogel spiral class of particle arrays, which is described by [73]
(A.64)
where and represent the radial distance and polar angle of the -th particle in a Vogel spiral array, respectively. Therefore, the Vogel spiral configuration may be uniquely defined by the incidence wavelength , the divergence angle , the scaling factor , and the number of particles .
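Assuming the standard Vogel-spiral parameterization for Eq. (A.64), i.e., r_n = a0·sqrt(n) and theta_n = n·alpha (our reading of the formula; the function name and defaults below are illustrative), the particle positions can be generated as follows.

```python
import numpy as np

def vogel_spiral(num_particles, alpha, a0=1.0):
    """Positions of a Vogel spiral: r_n = a0*sqrt(n), theta_n = n*alpha,
    with alpha the divergence angle and a0 the scaling factor (assumed form)."""
    n = np.arange(1, num_particles + 1)
    r, theta = a0 * np.sqrt(n), n * alpha
    return np.column_stack((r * np.cos(theta), r * np.sin(theta)))

# e.g. a golden-angle spiral with 500 particles
xy = vogel_spiral(500, alpha=np.deg2rad(137.508))
```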
E.6 Stability Test
As a non-parametric model, we expect GAR and CIGAR to have stable performance against overfitting compared to the NN-based method. In this section, we show the testing RMSE against the training epoch for GAR, CIGAR, and MF-BNN on the Poisson equation, SOFC, and topology optimization problems. The experiments are repeated five times to ensure fairness. The results are shown in Fig. 16. We can see clearly that GAR and CIGAR are more stable than MF-BNN in almost all cases. Most notable is the convergence rate of GAR and CIGAR, which is more than 10x faster in the topology optimization and SOFC cases. For Poisson’s equation, MF-BNN is unlikely to match the performance of GAR and CIGAR no matter how many training epochs are used. For SOFC and topology optimization, MF-BNN might be able to match GAR given a very large number of epochs, making it a poor choice that consumes expensive computational resources and is environmentally unfriendly.
E.7 Metrics for the Predictive Uncertainty
Although RMSE has been used as a standard metric for evaluating the performance of a multi-fidelity fusion algorithm [9, 12, 13, 16], a metric that accounts for the predictive uncertainty is also important [47], particularly when the downstream applications rely heavily on the quality of the predictive confidence, e.g., in MFBO [23]. To assess the proposed method more comprehensively, we evaluate the quality of the predictive posterior using the most commonly used metric, the negative log-likelihood (nll).
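For reference, a minimal sketch of the per-element Gaussian nll we report, assuming an independent Gaussian predictive distribution per output element (the exact covariance handling follows each model); as noted below, the constant term can be dropped without affecting the comparison.

```python
import numpy as np

def gaussian_nll(y, mean, var, include_const=False):
    """Average negative log-likelihood of test targets under element-wise
    Gaussian predictions with the given mean and variance."""
    nll = 0.5 * (np.log(var) + (y - mean) ** 2 / var)
    if include_const:
        nll += 0.5 * np.log(2.0 * np.pi)
    return float(np.mean(nll))
```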
We reproduce Figs. 2 and 3 using exactly the same experimental setups but with the nll metric; the results are shown in Figs. 17 and 18. Note that the nll of DC and MF-BNN is very poor, probably due to our implementations, and does not fit within the range of the figures; thus they are not shown. Also note that some figures show negative nll. This is because our computation of the nll omits the constant term; this modification does not affect the comparison. We can see that for the topology structure posterior in Fig. 17, the results are consistent with the conclusions drawn from the RMSE results. Since CIGAR ignores the inter-output correlations, it overestimates the covariance determinant, leading to a higher nll than GAR. NAR starts with poor performance with a small number of training data, consistently improves with an increasing number of training data, and ends up with performance similar to GAR and CIGAR. Similarly, the SOFC results are consistent with the conclusions drawn from the RMSE results. However, none of the demonstrated methods improves its performance significantly with more training data. This is caused by the calculation of the nll and the data itself. More specifically, in the ECD and IP fields, there are a few spatial locations where the recorded values are almost constant (caused by the Dirichlet boundary conditions). In this case, the nll is dominated by the logarithm of the variance and becomes less informative about the quality of the predictive variance. We thus see that the nll in Fig. 18 fluctuates around the same values no matter how many training points are used. Considering the scope of this work, we leave a more in-depth investigation with more advanced uncertainty metrics (e.g., [74]) to future work.