Modulating Scalable Gaussian Processes for Expressive Statistical Learning
Abstract
For a learning task, the Gaussian process (GP) is well suited to learning the statistical relationship between inputs and outputs, since it offers not only the prediction mean but also the associated variability. The vanilla GP, however, struggles to learn complicated distributions exhibiting, e.g., heteroscedastic noise, multi-modality and non-stationarity from massive data, due to its Gaussian marginal and cubic complexity. To this end, this article studies new scalable GP paradigms, including the non-stationary heteroscedastic GP, the mixture of GPs and the latent GP, which introduce additional latent variables to modulate the outputs or inputs in order to learn richer, non-Gaussian statistical representations. We further resort to different variational inference strategies to arrive at analytical or tighter evidence lower bounds (ELBOs) of the marginal likelihood for efficient and effective model training. Extensive numerical experiments against state-of-the-art GP and neural network (NN) counterparts on various tasks verify the superiority of these scalable modulated GPs, especially the scalable latent GP, in learning diverse data distributions.
keywords:
Gaussian process, Modulation, Scalability, Heteroscedastic noise, Multi-modality, Non-stationarity

1 Introduction
Given an input, rather than simply predicting a point estimate of the output, we are more interested in inferring the underlying generative process whose distribution is most likely to produce the observed data, in order to figure out not only the prediction mean but also the associated variability. Along this line, the modeling of the marginal (conditional) distribution of the output given the input becomes the central task, which arises in many machine learning scenarios, for example, regression, conditional density estimation [1], data association [2], and uncertainty quantification [3].
The well-known Gaussian process (GP) [4] is suitable for building our beliefs upon the statistical relationship between the inputs and outputs due to its Bayesian perspective, thus showcasing widespread application in various scenarios [5, 6, 7, 8, 9]. Usually, the GP adopts the Gaussian assumption and independent and identically distributed (i.i.d.) Gaussian noise to conduct efficient closed-form inference and prediction, and employs stationary kernels, which depend only on the relative distance between two inputs, to simply quantify how quickly the correlations vary along each dimension. Consequently, the vanilla GP may not be appropriate for approximating non-Gaussian, multi-modal or non-stationary probabilistic behaviors in reality, since its marginal is always Gaussian. Besides, another prominent weakness of the GP is its poor scalability on massive data due to the operations over the kernel matrix, which incur cubic complexity O(N³) in the number N of training points. Hence, the above issues raise an urgent demand for novel GP paradigms which could effectively and efficiently learn rich statistical representations from massive data.
In order to improve the scalability, various scalable GPs have been exploited in recent years from different perspectives [10]. For example, we could directly use advanced linear algebra methods, such as the hierarchical off-diagonal low-rank and matrix-vector multiplication algorithms [11, 12], and train the GP distributedly through multiple CPUs/GPUs. Besides, we could resort to the divide-and-conquer idea by splitting the data into small blocks and aggregating predictions from local GPs [13, 14, 15], which naturally supports parallel and distributed learning. A more efficient and principled alternative is sparse approximation [16, 17, 18], which introduces M (M ≪ N) global inducing variables to summarize the N latent variables statistically, thus significantly reducing the complexity to O(NM²) through variational inference (VI) [19]. Further reduction of the model complexity is possible by exploiting structured inducing points, see for example [20, 21]. This article builds our scalable GPs upon the widely used sparse approximation, which is further elaborated in Sec. 2, due to its complete statistical framework and remarkably low complexity with theoretically guaranteed properties [22].
Aiming to develop new scalable GP paradigms for learning rich probabilistic behaviors, we particularly study a Bayesian framework wherein a latent modulation variable is introduced to encode the complicated statistical structure of the outputs or inputs. Under this modulated GP paradigm, three representative models are studied, including (i) the scalable heteroscedastic GP (SHGP) which modulates the amplitude of the latent output and the noise variance simultaneously, (ii) the scalable mixture of GPs (SMGP) which modulates the assignment of global GP experts at data points for mixing distributions, and finally (iii) the scalable latent GP (SLGP) which augments the input space with latent variables in order to modulate the covariances and, moreover, the predictive distribution. The key to the above three modulated GPs is deriving scalable and effective evidence lower bounds (ELBOs) for model training.
Along this line, the main contributions of this article are four-fold:
1. We use the sparse approximation strategy to build a scalable version of SHGP and derive an analytical and scalable ELBO for efficient training through variational inference;
2. We sidestep the need to tackle the posterior of the discrete assignment distribution for SMGP, and marginalize all the latent variables out to directly approximate the marginal likelihood and derive a tighter bound for model training with higher quality;
3. We enhance the power of the latent representation of SLGP by introducing a regularized stochastic encoder. Besides, in order to achieve higher training quality, we derive a hybrid and tighter ELBO that takes advantage of both the VI and importance-weighted VI (IWVI) based bounds;
4. We compare the three scalable modulated GPs against state-of-the-art GP and neural network (NN) counterparts to comprehensively investigate their characteristics on various tasks, and release the Python implementations at https://github.com/LiuHaiTao01/ModulatedGPs.
The remainder of the article is organized as follows. Sec. 2 first briefly introduces the scalable GP using sparse approximation and variational inference. Thereafter, Sec. 3 develops three scalable GPs that modulate the probabilistic behavior from the outputs or inputs, and derives the analytical or tighter ELBOs for model training. Sec. 4 then discusses related works and their differences from our work. Finally, Sec. 5 conducts extensive experiments to showcase the superiority of modulated GPs on various tasks, followed by the overall concluding remarks summarized in Sec. 6. Note that, to improve the readability of this article, the employed acronyms and notations are summarized in Appendix A.
2 Scalable GP revisited
The GP learns the underlying, noise-free latent function f(x) by placing a GP prior over the functional space as f(x) ~ GP(0, k(x, x′)), where we take the zero mean without loss of generality, and k(·, ·) is the kernel function describing the smoothness of f. (The squared exponential (SE) kernel with automatic relevance determination is commonly used in practice.) The final observation, polluted by independent and identically distributed (i.i.d.) Gaussian noise, is expressed as
y(x) = f(x) + ε,    (1)
where the observation noise ε ~ N(0, σ²_ε). Given N training points D = {X, y}, in order to infer the model hyperparameters, we marginalize all the latent variables out and maximize the marginal likelihood
p(y) = ∫ p(y|f) p(f) df,    (2)
where the GP prior p(f) = N(f|0, K_NN) with the covariance matrix K_NN = k(X, X), and the Gaussian likelihood p(y|f) = N(y|f, σ²_ε I). (We omit the dependency of these distributions on the deterministic inputs X.) After model training, we calculate the posterior p(f|y) through Bayes' rule and use it to predict at a test point x_* as p(f_*|y) = N(f_*|μ_*, σ²_*), where the prediction mean μ_* = k_{*N}(K_NN + σ²_ε I)⁻¹ y and the prediction variance σ²_* = k_{**} − k_{*N}(K_NN + σ²_ε I)⁻¹ k_{N*}, with k_{*N} = k(x_*, X) and k_{**} = k(x_*, x_*). Furthermore, we have the final predictive distribution p(y_*|y) = N(y_*|μ_*, σ²_* + σ²_ε).
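To make the predictive equations above concrete, here is a compact NumPy sketch of exact GP regression with an SE kernel; the helper names and hyperparameter values are illustrative, not part of the released implementation.

```python
import numpy as np

def se_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared exponential kernel: k(x, x') = s^2 exp(-||x - x'||^2 / (2 l^2))
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_predict(X, y, Xs, noise_var=0.1, jitter=1e-8):
    """Exact GP prediction mean/variance at test points Xs (cubic cost in N)."""
    K = se_kernel(X, X) + (noise_var + jitter) * np.eye(len(X))
    Ks = se_kernel(Xs, X)                                  # k(x_*, X)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))   # (K + s^2 I)^{-1} y
    V = np.linalg.solve(L, Ks.T)
    mean = Ks @ alpha                                      # prediction mean
    var = se_kernel(Xs, Xs).diagonal() - (V ** 2).sum(0)   # latent variance
    return mean, var + noise_var                           # add noise for y_*

# Example on a tiny synthetic set
X = np.linspace(0, 1, 20)[:, None]
y = np.sin(6 * X[:, 0]) + 0.1 * np.random.randn(20)
mean, var = gp_predict(X, y, np.linspace(0, 1, 5)[:, None])
```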
The vanilla GP however suffers from poor scalability due to the cubic complexity O(N³) raised by the operations over the kernel matrix K_NN. To alleviate this issue in the era of big data, the well-known scalable GP (SGP) using sparse approximation [16, 17] introduces M (M ≪ N) inducing variables u at the inducing points Z as sufficient statistics of f. Consequently, we arrive at the Nyström approximation K_NN ≈ K_NM K_MM⁻¹ K_MN, where K_NM = k(X, Z) and K_MM = k(Z, Z), with the time complexity reduced to O(NM²).
Furthermore, variational inference employs a tractable variational posterior q(f, u) = p(f|u) q(u) to minimize the Kullback-Leibler (KL) divergence to the true posterior p(f, u|y),
which is equivalent to maximizing the analytical ELBO expressed as [19]
L = Σ_{i=1}^{N} E_{q(f_i)}[log p(y_i|f_i)] − KL[q(u) ‖ p(u)],    (3)
where the GP prior p(u) = N(u|0, K_MM), and the posterior q(f) = ∫ p(f|u) q(u) du = N(f|μ, Σ) with the mean and covariance expressed respectively as
μ = K_NM K_MM⁻¹ m,  Σ = K_NN − K_NM K_MM⁻¹ (K_MM − S) K_MM⁻¹ K_MN,
given the Gaussian variational posterior q(u) = N(u|m, S); and the individual marginal q(f_i) = N(f_i|μ_i, Σ_ii). Due to the factorization of the expectation term in (3) over data points, the bound has an unbiased estimation
L ≈ (N/|B|) Σ_{i∈B} E_{q(f_i)}[log p(y_i|f_i)] − KL[q(u) ‖ p(u)],    (4)
where B is a subset (minibatch) of the training data D. This bound owns a remarkably low complexity of O(M³) through stochastic optimization with single-sample approximation (i.e., |B| = 1).
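For concreteness, the following NumPy sketch evaluates the minibatch estimate in (4) for a Gaussian likelihood; it assumes an SE kernel with fixed hyperparameters and a free-form Gaussian q(u) = N(m, S), and is meant as an illustration rather than the released implementation.

```python
import numpy as np

def se_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def svgp_minibatch_elbo(Xb, yb, N, Z, m, S, noise_var=0.1, jitter=1e-6):
    """Unbiased minibatch estimate of the bound in (4) for a Gaussian likelihood.

    Xb, yb : minibatch inputs/targets;  N : full training size;
    Z : inducing inputs;  m, S : mean/covariance of q(u) = N(m, S)."""
    M = Z.shape[0]
    Kmm = se_kernel(Z, Z) + jitter * np.eye(M)
    Kbm = se_kernel(Xb, Z)
    A = np.linalg.solve(Kmm, Kbm.T).T                     # K_bM K_MM^{-1}
    mu = A @ m                                            # marginal means of q(f_i)
    var = (se_kernel(Xb, Xb).diagonal()
           - np.einsum('ij,ij->i', A, Kbm)                # diag(K_bM K_MM^{-1} K_Mb)
           + np.einsum('ij,jk,ik->i', A, S, A))           # diag(A S A^T)
    # Analytical expected log-likelihood E_{q(f_i)}[log N(y_i | f_i, noise_var)]
    ell = -0.5 * np.log(2 * np.pi * noise_var) - 0.5 * ((yb - mu) ** 2 + var) / noise_var
    # KL[q(u) || p(u)] with p(u) = N(0, K_MM)
    kl = 0.5 * (np.trace(np.linalg.solve(Kmm, S)) + m @ np.linalg.solve(Kmm, m)
                - M + np.linalg.slogdet(Kmm)[1] - np.linalg.slogdet(S)[1])
    return N / len(yb) * ell.sum() - kl
```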
It is observed that the vanilla SGP can only (i) provide a Gaussian predictive distribution with homoscedastic noise, and (ii) describe invariant spatio-temporal behaviors due to the stationary kernels, which therefore limits its application to complicated tasks dominated by rich underlying stochastic processes.
3 Modulated scalable Gaussian processes
It is known that the vanilla GP paradigm struggles to approximate complicated distributions. Hence, in order to improve the capability, we introduce an additional latent variable to modulate the behavior of the GP as
(5)
The latent variable permits disentangling the statistical structure to enhance the expressivity of the model. Hence, the challenging details in data, e.g., the heteroscedastic noise, the multi-modality, and the non-stationarity, can be explained and absorbed by the latent variable. We next proceed to present three scalable modulated GPs which perform the modulation on the outputs or inputs.

3.1 Scalable heteroscedastic GP
Instead of using the restrictive i.i.d. noise, the heteroscedastic GP (HGP) defines the following additive model
(6)
to capture the variability in both the outputs and the noise. In (6), an additional latent function is introduced to modulate the amplitude of the output in order to describe the non-stationarity [23]; besides, the model has an input-dependent noise [24], whose variance is also modulated by this latent function; and a positive parameter leaves flexibility for adjusting the noise variance. Note that the exponential form of the modulation ensures the positivity of both the noise variance and the modulation of the output.
The HGP takes the following GP priors
(7)
and the non-stationary, heteroscedastic likelihood
(8)
where the likelihood covariance is diagonal and input-dependent. Note that a non-zero prior mean is utilized to account for the variability of the noise variance. Besides, it has been pointed out that the likelihood of this model is log-concave in both latent functions, thus resulting in unimodal posteriors [25, 26].
Analytical ELBO. We next proceed to use variational inference to derive a scalable and analytical objective for training the scalable HGP (SHGP), the graphical model of which is depicted in Fig. 1(a). We first introduce inducing variables for the two latent functions, respectively. Thereafter, by minimizing the KL divergence between the joint variational posterior and the true posterior, we arrive at the following ELBO as
(9)
where the GP prior and the variational posterior for the additional latent function take the same Gaussian forms as those in Sec. 2. Due to the Gaussian posteriors over the two sets of inducing variables, the two KL terms in (9) have closed-form expressions. Furthermore, the expectation of the likelihood on the right-hand side of (9) can be analytically expressed as
(10)
where the involved diagonal matrices are built from the vectors comprising the diagonal elements of the related posterior covariances.
Prediction. Finally, when performing prediction at a test point, we have
(11)
where the predictions of the two latent functions at the test point are Gaussian, with the means and variances taking the sparse GP forms in Sec. 2. The predictive distribution can be estimated through Markov chain Monte Carlo (MCMC) sampling or, more efficiently, Gauss-Hermite quadrature. Note that the predictive distribution of SHGP is non-Gaussian.
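As an illustration of the quadrature route, the toy sketch below approximates a predictive density of the form E_g[N(y | μ_f, v_f + w0·exp(2g))] with Gauss-Hermite nodes; the likelihood form and the names are assumptions made for illustration only, not the exact SHGP expressions.

```python
import numpy as np
from scipy.stats import norm

def gh_predictive_density(y, mu_f, var_f, mu_g, var_g, w0=1.0, n_nodes=20):
    """Approximate p(y) = E_{g ~ N(mu_g, var_g)}[ N(y | mu_f, var_f + w0 * exp(2 g)) ]
    with Gauss-Hermite quadrature (illustrative likelihood, not the exact SHGP one)."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    g = mu_g + np.sqrt(2.0 * var_g) * nodes          # change of variables for N(mu_g, var_g)
    dens = norm.pdf(y, loc=mu_f, scale=np.sqrt(var_f + w0 * np.exp(2.0 * g)))
    return (weights * dens).sum() / np.sqrt(np.pi)

# Example: density of y = 0.3 under hypothetical latent means/variances
p = gh_predictive_density(0.3, mu_f=0.1, var_f=0.05, mu_g=-1.0, var_g=0.2)
```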
3.2 Scalable mixture of GPs
The capability of SHGP for describing complicated distributions is limited by the Gaussian noise. However, it is known that a mixture of infinitely many Gaussians can approximate any target distribution. To this end, the mixture of GPs (MGP) aggregates the predictions from several GP experts in order to break through the Gaussian assumption. By activating the GP experts in different locations, the MGP is enabled to tackle both non-stationary and multi-modal data distributions.
Specifically, the MGP employs several independent latent functions, together with an indicator vector at each point to represent its assignment to one of the GP experts. Consequently, we have the following likelihood
(12)
where each GP expert owns its own noise variance, and the indicator function selects the expert responsible for each data point. It is observed that (12) defines a generative process where the observed data is generated by a mixture of several global, independent stochastic processes.
We further assume that the indicators are drawn independently from a multinomial distribution with unnormalized logit parameters as
(13)
where the logit parameters are assumed to follow independent GP priors as
(14)
Now the GP priors in MGP encode the structures of both the functions and the associations.
Tighter ELBO. We thereafter build the scalable training framework for the scalable MGP (SMGP) depicted in Fig. 1(b). The joint prior of SMGP is defined as
where additional sets of inducing variables are introduced for the assignment GPs (for the convenience of presentation, we assume that the inducing sizes for all the GPs are the same); the conditionals factorize over the experts; and finally, the GP priors take the standard forms of Sec. 2. Correspondingly, the joint variational posterior writes as
where the variational posteriors over the two sets of inducing variables are Gaussian.
Variational inference then helps derive the following ELBO for SMGP as
(15)
where the posteriors over the latent functions and the assignment logits factorize and take the sparse GP forms in Sec. 2. However, since the prior over the assignments is a product of discrete multinomial distributions, it is not straightforward to handle the corresponding posterior and the related KL divergence in (15). To sidestep this issue, Kaiser et al. [2] decided to keep the latent assignment variables by approximating the joint likelihood rather than the marginal likelihood of interest.
Differently, we here marginalize all the latent variables out to directly approximate the marginal likelihood through a tighter ELBO. To this end, we first follow the sparse GP framework to derive a lower bound for the conditional likelihood as
(16)
Thereafter, according to Jensen’s inequality we have
(17)
It is observed that, in comparison to the bound (15), the tighter bound (17) avoids introducing an additional variational posterior over the assignments. Besides, note that during model training, instead of sampling from the discrete prior distribution, we employ the continuous relaxation proposed in [27, 28] in order to perform back-propagation for stochastic optimization: it adopts the reparameterization trick to sample from a Concrete random variable controlled by a temperature parameter, which approaches a discrete random variable as the temperature tends to zero; our SMGP adopts a small temperature value, as sketched below.
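The Concrete (Gumbel-Softmax) sampling step can be sketched as follows; the NumPy code only illustrates the reparameterized draw, while an actual training loop would implement it in an automatic-differentiation framework so that gradients flow through the relaxed sample.

```python
import numpy as np

def sample_concrete(logits, temperature=0.1, rng=None):
    """Draw a relaxed one-hot sample from a Concrete (Gumbel-Softmax) distribution.

    logits : unnormalized log-probabilities of the K experts at one data point.
    As temperature -> 0 the sample approaches a discrete one-hot vector, while a
    small positive temperature keeps it continuous (and hence differentiable)."""
    rng = np.random.default_rng() if rng is None else rng
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))   # Gumbel(0, 1) noise
    z = (logits + gumbel) / temperature
    z = z - z.max()                                             # numerical stability
    expz = np.exp(z)
    return expz / expz.sum()

# Example: relaxed assignment over K = 3 GP experts
soft_assignment = sample_concrete(np.array([1.2, -0.3, 0.5]), temperature=0.1)
```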
Prediction. When performing prediction at a test point, the predictions of the expert GPs and of the assignment GPs take the sparse GP forms in Sec. 2. Thereafter, we sample from the Gaussian and multinomial distributions, and pass the samples through the model to produce predictive samples.
3.3 Scalable latent GP
The foregoing SHGP and SMGP directly modulate the outputs for capturing complicated distributions. Differently, we could introduce the stochasticity in an augmented input space, and modulate the covariances in order to output expressive predictive distributions more flexibly. It has been found that the GP can warp the Gaussian stochasticity in the augmented inputs into complicated distributions [29, 30]. Along this line, the latent GP (LGP) introduces additional latent inputs to augment the input space as
(18)
It is usually assumed that the latent variables for different data points are independent of each other. We next attempt to derive a scalable ELBO for LGP using variational inference.
The IWVI-based ELBO. Similar to SMGP, the scalable LGP (SLGP) first obtains the bound for the conditional as
(19)
where the partial bound is analytical; the posterior takes the sparse GP form, with the kernel matrices calculated in the augmented input space; and finally, the remaining factor is an unnormalized distribution, which becomes exact when the sparse GP recovers the full GP. Thereafter, by inserting the inequality (19) back into the marginal likelihood and applying Jensen's inequality, we get a scalable bound as
(20)
In (20), we could take the independent unit normal prior for the latent inputs, or, more informatively, the independent but expressive prior [31]
(21)
which amortizes the parameters over a multi-layer perceptron (MLP) for efficient training; the variational posterior also takes a Gaussian form, with the mean and variance (i) treated as individual hyperparameters [1], or, more efficiently, (ii) parameterized as an MLP of both the input and the output, i.e.,
(22)
Moreover, importance-weighted variational inference (IWVI) [32, 33] further helps push the bound towards the marginal likelihood by making the quantity inside the first expectation term of (20) concentrate about its mean. This is performed by a sample average over multiple draws with respect to (w.r.t.) the variational posterior as
(23)
where maximizing the expectation of the log-ratio term w.r.t. the variational posterior represents a regularization: it pushes the posterior towards the prior. This bound becomes strictly tighter as the number of importance samples increases [32], which can be obtained by using Jensen's inequality inversely. From another point of view [34], the IWVI is equivalent to employing an implicitly defined posterior that is more informative than the original variational posterior and that converges to the true posterior in the limit of infinitely many importance samples.
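A minimal sketch of such an importance-weighted estimate is given below; the model and variational densities are abstracted as generic callables, and all names are illustrative rather than taken from the released code.

```python
import numpy as np
from scipy.special import logsumexp

def iw_bound(log_joint, log_q, sample_q, S=10):
    """Monte Carlo estimate of the importance-weighted bound
    log( (1/S) * sum_s p(y, w_s) / q(w_s) ), with w_s ~ q(w).

    log_joint(w): log p(y, w) under the model;  log_q(w), sample_q():
    density and sampler of the variational posterior."""
    samples = [sample_q() for _ in range(S)]
    log_weights = np.array([log_joint(w) - log_q(w) for w in samples])
    return logsumexp(log_weights) - np.log(S)

# With S = 1 this reduces to the usual single-sample ELBO estimate;
# larger S gives a tighter bound on the log marginal likelihood.
```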
Enabling non-stationarity. Note that, unlike the SHGP and SMGP, the SLGP described so far has no way to tackle non-stationary features. Therefore, as depicted in Fig. 1(c), we introduce an additional stochastic encoder (the encoded latent variable is actually conditioned on the input, but since the input is deterministic, we omit it) as the mapping between the augmented input space and a latent space for performing latent representation learning, which thus warps the non-stationary features to ease the subsequent regression task. The stochastic encoder could be implemented by a deep Gaussian process [35], which however significantly increases the model complexity. Alternatively, inspired by the idea of amortized variational inference, we follow [36] and resort to the great representational power of neural networks. Hence, for the stochastic encoder, we take
1. the factorized Gaussian prior
(24)
where the prior mean represents a mapping of the augmented inputs. Particularly, when the latent dimension equals that of the augmented inputs, we have an identity mapping; when it is larger, we use the zero-padding strategy; and finally, when it is smaller, we use some dimensionality reduction algorithm, for example, principal component analysis (PCA), to map the augmented inputs; and
2. the variational posterior, taken to be Gaussian and factorized over the dimensions, whose mean and variance are parameterized as an MLP of the input as
(25)
where the shared parameter in the prior and the posterior enables knowledge transfer between them.
Note that, though we are using MLPs for amortized inference, the GP framework alleviates the need for fine-tuning or extra regularization; a sketch of such an encoder is given below.
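The following PyTorch sketch illustrates one way to parameterize such an amortized encoder, sharing the prior mean mapping between prior and posterior to mimic the knowledge transfer discussed above; it is an illustrative assumption about the architecture, not the released implementation (which may use a different framework).

```python
import torch
import torch.nn as nn

class StochasticEncoder(nn.Module):
    """Illustrative amortized encoder q(h | x~) = N(mean(x~), diag(var(x~)))."""

    def __init__(self, in_dim, latent_dim, prior_mean_fn, hidden=100):
        super().__init__()
        self.prior_mean_fn = prior_mean_fn    # identity / zero-padding / PCA mapping
        self.latent_dim = latent_dim
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim),   # [mean correction, log-variance]
        )

    def forward(self, x):
        out = self.net(x)
        mean = self.prior_mean_fn(x) + out[:, :self.latent_dim]  # share prior mean
        var = torch.exp(out[:, self.latent_dim:])                 # positivity via exp
        # Reparameterized sample keeps gradients flowing through the encoder
        h = mean + var.sqrt() * torch.randn_like(mean)
        return h, mean, var

# Example with an identity prior mean (latent_dim == in_dim)
enc = StochasticEncoder(in_dim=3, latent_dim=3, prior_mean_fn=lambda x: x)
h, mean, var = enc(torch.randn(8, 3))
```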
Furthermore, following [36], a hybrid prior
(26)
is employed in order to arrive at (i) a more expressive prior and (ii) adjustable regularization on the latent representation learning. Consequently, we obtain the following ELBO
(27)
A key advantage of this bound is the capability to tune and control the regularization on the latent representation learning through the balance factor. As has been investigated in [36], when the balance factor equals one, we pose the complete regularization to constrain the latent representation learning, since maximizing the expectation of the log-ratio w.r.t. the posterior pushes the posterior towards the prior; contrarily, decreasing the factor weakens the regularization to obtain a more powerful representation, at the cost however of raising the possibility of over-fitting; particularly, a zero balance factor offers a deterministic encoder.
A tighter bound is not necessarily better. Though increasing the number of importance samples is found to tighten the bound, the bound is still found to often risk severe over-fitting in scenarios with a finite number of data points, even under the full regularization. This may be attributed to the quality of the Monte Carlo approximation, the maximization of which may not actually push the posterior towards the prior. Empirical evidence is provided in the numerical experiments in Sec. 5.
It is observed that when a single importance sample is used, the bound (27) degenerates to the relaxed VI-based ELBO as
(28)
which now owns an analytical, non-negative KL term to measure the gap between the encoder posterior and the prior, the minimization of which pushes the posterior towards the prior. This bound therefore fully utilizes the regularization, with however a less tight approximation to the log marginal likelihood, which may harm the performance.
Therefore, aiming to take advantage of both the IWVI-based ELBO (27) and the VI-based ELBO (28), we propose a hybrid ELBO as
(29)
which separates out the KL term for the encoder to achieve regularized latent representation learning. The hybrid ELBO has a higher approximation quality for the log marginal likelihood in comparison to (28); meanwhile, it achieves controllable regularization in comparison to (27).
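Schematically, such a hybrid objective can be assembled as an importance-weighted term over the latent inputs plus a separately weighted, analytical KL term for the encoder; the sketch below abstracts all densities as callables, and the balance factor name and structure are illustrative assumptions rather than the exact form of (29).

```python
import numpy as np
from scipy.special import logsumexp

def hybrid_bound(log_lik, log_prior_w, log_q_w, sample_q_w, kl_encoder, beta=0.5, S=10):
    """Sketch of a hybrid objective: an importance-weighted term over the latent
    inputs w plus a separate, analytically evaluated KL term for the stochastic
    encoder, scaled by a balance factor `beta`."""
    samples = [sample_q_w() for _ in range(S)]
    log_w = np.array([log_lik(w) + log_prior_w(w) - log_q_w(w) for w in samples])
    iw_term = logsumexp(log_w) - np.log(S)      # tightens as S grows
    return iw_term - beta * kl_encoder          # explicit, controllable regularization
```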
Prediction. Finally, it is notable that prediction at a test point depends on the posterior over the latent input there, which however is unknown. Instead, we take samples from the prior and pass them through the model to output predictive samples. This inconsistency indicates that, instead of using the simple unit normal prior, we may need a more informative prior, like (21), in order to mimic the posterior and thus alleviate the inconsistency when performing prediction. That is why recent works have exploited expressive priors like mixtures of Gaussians and normalizing flows [37].
3.4 Discussions
Table 1 summarizes the capability of various GP paradigms together with their time complexity in different scenarios. It is observed that the homoscedastic, stationary SGP [19] fails in the three challenging scenarios. Contrarily, the three scalable modulated GPs achieve improvements. Among them, the SHGP is not designed for the multi-modal scenario, and the Gaussian noise assumption prohibits the modeling of non-Gaussian residuals. Besides, sharing the same latent function in (6) for modulating both the output and the noise may weaken the learning of non-stationary features. The time complexity of SHGP is two times that of SGP due to the additional log-GP. As for SMGP, ideally, it is capable of approximating any target distribution, given that we have many GP experts and learn the assignment GPs reasonably, which however is quite challenging and raises a remarkably high time complexity. As for the final SLGP, it showcases superiority in the following numerical experiments, with an additional complexity, usually lower than that of SGP, brought by the MLPs for the prior in (21) and the posteriors in (22) and (25).
| Model | Hetero. noise | Multi-modality | Non-stationarity | Time complexity |
|---|---|---|---|---|
| SGP | ✗ | ✗ | ✗ | O(NM²) |
| SHGP | ✓ | ✗ | ✓ | ≈ 2× SGP |
| SMGP | ✓ | ✓ | ✓ | high (grows with the number of experts) |
| SLGP | ✓ | ✓ | ✓ | SGP plus MLP cost |
4 Related work
As for the modeling of heteroscedastic residuals, Goldberg et al. [24] developed a heteroscedastic GP model wherein Gibbs sampling is used to sample from the posteriors for model training. The scalability of this model has thereafter been improved through variational inference and distributed learning using sparse approximation [38, 39, 40]. Moreover, in order to tackle non-stationary features and arrive at a log-concave likelihood, the HGP model in (6) was first studied in [25, 26] by combining the non-stationary GP [23] and the above heteroscedastic noise, the inference of which resorts to expectation propagation. The full GP paradigm however makes it only available on small datasets. As an improvement, we here enhance the scalability of (6) under sparse approximation and extend it to the regime of big data; besides, we use variational inference to derive the closed-form ELBO (9) for efficient model training.
As for the mixture of GPs, which belongs to the category of mixture of experts (MoE) [41, 42], early works seek to infer a gating network, usually parameterized as a softmax function, to perform probabilistic separation of the input domain and the assignment of data points, which in turn allows the training of local GP experts [43, 44, 45]. Sparse approximations [46, 47, 48] have then been introduced in MGP to reduce the model complexity under the variational expectation maximization framework. The key assumption in the above MGPs is that only one GP expert is used to explain the data at each point. Differently, a whole Bayesian generative model for both the GPs and the assignments has recently been investigated in [2], wherein the key assumption is that the data at each position is generated by a number of global GP experts. However, as discussed in Sec. 3.2, the discrete multinomial prior makes the VI-based ELBO (15) hard to use. To sidestep this issue, Kaiser et al. [2] decided to keep the latent assignment variables by approximating the joint likelihood rather than the marginal likelihood of interest. Contrarily, our work is devoted to marginalizing all the latent variables out to directly approximate the marginal likelihood and derive a tighter bound for model training with higher quality.
As for the latent GP, it was first proposed in [49] for modeling heteroscedastic non-Gaussian residuals, and then extended for conditional density estimation [50, 1, 35] and disentangled learning [51]. A similar idea of augmenting inputs with latent variables was also independently presented in the regime of reinforcement learning [52]. It is also notable that the LGP is close to the Gaussian process latent variable model (GPLVM) [30, 29], although the latter seeks to infer the low-dimensional latent variables for unsupervised learning. Besides, the LGP has the same model structure as the conditional variational autoencoder (CVAE) [53], though they were developed from different views. Our SLGP model is most related to the work in [1], which combines amortized variational inference with LGP for conditional density estimation. The major differences are that (i) our SLGP introduces an additional, regularized stochastic encoder to enhance the power of the latent representation; and (ii) we derive a hybrid and tighter ELBO to achieve higher training quality.
5 Numerical experiments
This section comprehensively investigates the characteristics and performance of the different scalable modulated GPs for approximating diverse distributions on three toy cases, eight UCI datasets, the large New York City Taxi dataset, and finally the mnist image generation task. All the experiments are performed on a Linux workstation with eight 3.20 GHz cores, an NVIDIA GTX 1080Ti GPU, and 32GB memory.
5.1 Toy cases
This section employs three toy cases with different characteristics to showcase the performance of modulated GPs. The first case
has heteroscedastic Gaussian noise; the second case is a non-stationary step function with a constant noise; and finally the third case is the multi-modal moon data imported from the scikit-learn package [54]. We generate training points for each of the three cases and predict at 500 evenly distributed points.
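For instance, a multi-modal regression set can be built from the scikit-learn moon data along the following lines; the exact construction used in the experiments may differ, so treat the details as illustrative.

```python
import numpy as np
from sklearn.datasets import make_moons

# One way to obtain a multi-modal 1D regression set from the moon data:
# use the first coordinate as the input x and the second as the output y,
# so that several y values coexist for the same x.
XY, _ = make_moons(n_samples=1000, noise=0.05, random_state=0)
x, y = XY[:, :1], XY[:, 1]
x_test = np.linspace(x.min(), x.max(), 500)[:, None]   # 500 evenly spaced test points
```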
In the comparison study, the training data is normalized before model training. The modulated GPs use the squared exponential (SE) kernel and the same number of inducing variables. Particularly, the SMGP uses four GP experts; the SLGP uses a single latent input and adopts MLPs with three hidden layers, each of which has 100 units and takes the ReLU activation function. Note that both SMGP and SLGP use Monte Carlo samples to evaluate their ELBOs for model training. As for the optimization, we employ the Adam solver and run it over 10000 iterations. Fig. 2 illustrates the prediction samples generated by the three modulated GPs. The homoscedastic, stationary SGP [19] is also included as a baseline. We draw the following findings from the comparative results.

The modulated GPs outperform the SGP. The homoscedastic SGP is only able to estimate a simple constant noise in the data, and has no way to tackle non-stationary and multi-modal features. As a result, its generated samples cannot well recover the three data distributions. Contrarily, the modulated GPs approximate the data more accurately by extracting richer statistical behavior behind the data via the modulation. As shown in Fig. 3, (i) the modulation variables learnt by SHGP capture the varying noise in the heteroscedastic case; (ii) the four GP experts learnt by SMGP each describe part of the data for capturing the step behavior; and (iii) the mean of the latent variable learnt by SLGP encodes the multi-modality of the moon case, and the expressive prior mimics the posterior for better prediction.

The SLGP is promising in various scenarios. The SHGP uses an additional latent function to describe the varying Gaussian noise and calibrate the amplitude of the output. Hence, it perfectly approximates the first case and improves over SGP in capturing the step case. Due to the Gaussian noise assumption, the SHGP however cannot fit the multi-modal case well. The SMGP describes the distributions more flexibly by mixing several GP experts. The proper number of experts however is problem-dependent. Ideally, the SMGP improves with more experts; however, more experts meanwhile toughen the model training due to the greatly increased number of hyperparameters and the more complicated model structure. In comparison to the flexible but average performance of SMGP, the SLGP is promising. By including stochasticity in the augmented input space and using an NN-assisted stochastic encoder, the SLGP well recovers the three distributions.
The SLGP may sacrifice the homoscedastic noise. It is found that the GP framework uses the estimated variance to quantify the epistemic uncertainty in the model parameters [55]. As for the aleatoric uncertainty inherent in the observations, the SHGP employs an additional GP to fit it, while the SGP, SMGP and SLGP use a simple i.i.d. noise for approximation. It is observed from Fig. 2 that, for the first heteroscedastic case, the latent function samples of SLGP are almost the same as the final predictive samples, which indicates that the SLGP here sacrifices the i.i.d. noise. That is, the SLGP pushes the noise variance towards zero, yielding a pretty small estimate. This is because, for SLGP, the i.i.d. noise is inconsistent with the heteroscedastic property of the first case. Thus, it tends to ignore the noise term and resorts to the latent function conditioned on the augmented stochastic inputs to explain the data.
5.2 UCI regression
This section further investigates the performance of the three modulated GPs on eight UCI benchmarks [56] summarized in Table 2. In addition to the SGP [19], the comparative study adopts the GP competitors including
1. the stochastic variational HGP (SVHGP) [40], which uses the latent function to only modulate the noise variance;
2. the data association GP (DAGP) [2], which maximizes the joint likelihood rather than the marginal likelihood for model training; and
3. the GP conditional density estimation (GPCDE) [1], which derives an optimal ELBO for LGP.
Besides, the competitors also include the following neural network counterparts:
1. the heteroscedastic neural network (HNN) [57], with the last layer having two outputs, one for the prediction mean of a Gaussian and the other for the uncertainty of the prediction mean;
2. the mixture density network (MDN) [58], with the last layer outputting the means and variances of multiple Gaussians together with the related coefficients to mix them; and finally,
3. the conditional variational auto-encoder (CVAE) [53], which has a similar structure to the proposed SLGP, with the main difference being that it is a pure NN architecture without stochastic regularization.
The model configurations on the UCI datasets are elaborated in Appendix B. Note that the SLGP performs a grid search of the balance parameter over a candidate set. Table 3 reports the average comparative results over ten runs in terms of the negative log likelihood (NLL), which is estimated by a kernel density estimator built from the prediction samples. We have the following findings from the comparative results.
| dataset | training size | test size | input dimension |
|---|---|---|---|
| boston | 456 | 50 | 13 |
| energy | 692 | 76 | 8 |
| concrete | 927 | 103 | 8 |
| wine-red | 1440 | 159 | 11 |
| kin8nm | 7373 | 819 | 8 |
| power | 8612 | 956 | 4 |
| naval | 10741 | 1193 | 16 |
| protein | 41157 | 4573 | 9 |
| model | boston | energy | concrete | wine-red | kin8nm | power | naval | protein |
|---|---|---|---|---|---|---|---|---|
| SGP | 2.3325±0.0723 | 1.4431±0.0331 | 3.0514±0.0313 | 0.9527±0.0544 | -0.9947±0.0135 | 2.7274±0.0097 | -7.0355±0.6860 | 2.8784±0.0080 |
| HNN | 3.5959±0.2348 | 2.0527±0.3290 | 2.7110±0.1831 | 1.0314±0.1625 | 1.4230±0.5217 | 2.5078±0.0411 | -5.2414±0.7426 | 2.1944±0.1062 |
| SVHGP | 2.1863±0.0530 | 1.0108±0.0786 | 2.9546±0.0266 | 0.8585±0.0652 | -1.0413±0.0118 | 2.6678±0.0136 | -7.1000±0.3524 | 2.7445±0.0130 |
| SHGP | 2.1174±0.0678 | 1.0762±0.0774 | 2.9706±0.0307 | 0.8991±0.0579 | -1.0378±0.0120 | 2.7019±0.0137 | -7.4956±0.1271 | 2.7628±0.0114 |
| MDN | 3.3399±0.1768 | 1.3182±0.2403 | 2.6500±0.1326 | 0.3965±0.9257 | 1.7145±0.2523 | 2.4702±0.0252 | -3.5556±1.0886 | 1.9195±0.0219 |
| DAGP | 2.3004±0.0535 | 1.1484±0.0938 | 3.0369±0.0377 | 0.9533±0.0621 | -0.9985±0.0176 | 2.7186±0.0087 | -7.2886±0.4019 | 2.8367±0.0094 |
| SMGP | 2.2094±0.0770 | 1.1500±0.1117 | 2.9681±0.0321 | 1.1342±0.0846 | -1.0263±0.0128 | 2.7001±0.0110 | -7.3514±0.1601 | 2.7529±0.0118 |
| CVAE | 3.6231±0.2309 | 1.7857±0.1825 | 2.6141±0.1813 | -0.7183±0.3034 | 1.3869±0.2132 | 2.3990±0.0472 | -5.8454±0.3416 | 1.9666±0.0237 |
| GPCDE | 2.3292±0.0662 | 1.2102±0.0636 | 3.0572±0.0339 | 0.9516±0.0602 | -0.9927±0.0136 | 2.7293±0.0083 | -6.9172±0.2030 | 2.8181±0.0053 |
| SLGP | 2.2075±0.0804 | 0.6821±0.3064 | 2.5031±0.1559 | 0.6663±0.0904 | -1.0414±0.0135 | 2.4083±0.0368 | -6.3133±0.2206 | 1.9089±0.0311 |
| balance parameter of SLGP | 1.0 | 1.0 | 0.5 | 1.0 | 0.5 | 0.1 | 0.1 | 0.01 |
The SLGP generally outperforms the others. Similar to the results on the toy cases, the SLGP again showcases superiority on six out of the eight UCI datasets. Besides, it is observed in Table 3 that the optimal balance parameter is problem-dependent. As for the first four small datasets, the SLGP prefers a large balance parameter in order to fully utilize the regularization to guard against over-fitting. Contrarily, the SLGP decides to use a small balance parameter on the remaining four large datasets to improve the latent representation learning, which is beneficial for extracting the underlying patterns from massive data. As for the other two modulated GPs, they outperform the SGP in most of the cases. Finally, the mixture of GPs, which ideally has high flexibility and capability, does not show significant superiority over the SHGP. This may be attributed to the difficulty in training such a complex model.
The modulated GPs usually outperform their GP counterparts. As for SLGP, it performs better than GPCDE on all eight benchmarks, due to the regularized latent representation learning together with the hybrid and tighter ELBO (29). As for SMGP, it outperforms the DAGP on six out of the eight benchmarks due to the more reasonable ELBO (17), which marginalizes all the latent variables out. As for SHGP, however, it does not show superiority over SVHGP despite additionally modulating the amplitude of the output in (6). As has been explained before, this may be attributed to sharing the same latent function in SHGP for modulating both the output and the noise, which may weaken the learning of non-stationary features.
The NN counterparts suffer from over-fitting on small datasets. It is not surprising to observe that in comparison to the modulated GPs, the NN counterparts without regularization risk over-fitting on small datasets, for example, the boston and energy datasets. This issue can be alleviated by some fine-tuning tricks, e.g., model pruning, adaptive learning rate, early stop, dropout, and data augmentation. The NN models, especially the CVAE, perform well on the remaining large datasets, because the many data points themselves act as a regularization. Contrarily, though being a hybrid model of NN and GP, the well-calibrated SLGP under the GP framework yields robust and superior performance on both small and large datasets without fine-tuning tricks.
Does the SMGP improve with more experts? The mixture of more GP experts in SMGP is expected to improve the capability of fitting more complicated data distributions. The comparative results in Fig. 4 on the small concrete and large naval datasets however cannot strongly support this claim. It is observed that the SMGP using more experts improves over that with fewer experts on the concrete dataset, but a further increase brings almost no improvement. More seriously, the SMGP is insensitive to the number of experts on the naval dataset. This again confirms that training an SMGP comprising many interacting GP experts is a challenging task.

Does the SLGP improve with a higher latent dimensionality? Fig. 5 investigates the impact of the dimensionality of the latent inputs on the performance of SLGP on the small wine-red and large protein datasets. It is found that the performance of SLGP is insensitive to the latent dimensionality on the two datasets. This may be attributed to the sufficient capability of a low-dimensional latent input for describing the statistical structures of these two datasets. We again investigate the impact of the latent dimensionality on the challenging mnist image dataset in Sec. 5.4 and observe the benefits brought by a larger value.

The hybrid inference in SLGP outperforms VI and IWVI. As stated before, the IWVI helps derive a tighter bound (27) than the commonly used VI-based bound (28). The bound is indeed tighter, at the cost however of losing explicit control of the regularization on the stochastic encoder when resorting to the MC approximation. This risks severe over-fitting on the small wine-red dataset even when we are using the full regularization, as shown in Fig. 6. Differently, the hybrid bound (29) takes advantage of both the VI- and IWVI-based ELBOs. Consequently, it outperforms the VI-based inference on the small wine-red dataset, and even performs better than the IWVI-based inference on the large protein dataset.

5.3 Large-scale spatio-temporal modeling
This section applies the three scalable modulated GPs to the large-scale 2016 New York City Taxi (nytaxi) dataset, which is composed of the records of more than 1.4 million taxi trips (the data is available at https://www.kaggle.com/c/nyc-taxi-trip-duration/data). We employ the Bayesian Benchmarks package (https://github.com/hughsalimbeni/bayesian_benchmarks) to choose the trips within the Manhattan area. We attempt to predict the duration of a trip with respect to the drop-off location, the pick-up location, the day of week, and the time of day. It is notable that the day of week and the time of day are transformed into sine and cosine features with their natural periods, thus resulting in eight inputs in total. The model configurations for this experiment are elaborated in Appendix B.
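A sine/cosine encoding with the natural periods can be implemented along the following lines; this is a generic sketch of the transformation, not the exact preprocessing script of the experiments.

```python
import numpy as np

def periodic_features(value, period):
    """Encode a cyclic quantity (e.g., hour of day with period 24, or day of week
    with period 7) as a sine/cosine pair so that the ends of the cycle meet."""
    angle = 2.0 * np.pi * value / period
    return np.sin(angle), np.cos(angle)

# Example: 23:00 and 01:00 end up close in the encoded space
print(periodic_features(np.array([23.0, 1.0]), period=24.0))
```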


Fig. 7 shows the comparative results of the modulated GPs and their counterparts over ten runs in terms of NLL. It is observed that all the three modulated GPs significantly outperform the conventional SGP. Specifically, the SMGP performs slightly better than the SHGP; the SMGP and SLGP outperform their GP counterparts (DAGP and GPCDE), but the SHGP and SMGP perform worse than their NN counterparts on this large dataset; finally, the SLGP shows remarkable superiority on this large dataset, even in comparison to CVAE. Furthermore, Fig. 8 illustrates the estimated densities of the four GPs with respect to the pick-up locations. It is observed that the three modulated GPs, especially the SLGP, achieve a higher likelihood in explaining the data.
5.4 Image generation task
Finally, since the powerful SLGP and its NN counterpart CVAE [53] showcase superiority in the foregoing numerical experiments, this section compares them on the high-dimensional mnist dataset, which is composed of 70000 hand-written digit images, for density estimation as well as sample generation (the dataset is available at http://yann.lecun.com/exdb/mnist/). We choose 90% of the dataset as training data, and the remaining 10% for testing. Different from the above tasks, the image task has ten-dimensional inputs encoded from the one-hot labels, and 784 vectorized pixel outputs. Appendix B describes the model settings as well as the MLP architectures of SLGP and CVAE. In this experiment, we additionally investigate the impact of the latent dimensionality by using the values 2 and 16, respectively.
| latent dimensionality | CVAE | SLGP |
|---|---|---|
| 2 | -825.1102 | -1198.2542 |
| 16 | -935.4681 | -1225.2195 |


Table 4 reports the NLL results of CVAE and SLGP on the mnist dataset. It is observed that the Bayesian framework helps the SLGP outperform the CVAE, and both of them improve with the latent dimensionality. Besides, Fig. 9 depicts the 0-9 digit images sampled from the distributions learnt by SLGP using latent dimensionalities of 2 and 16, respectively. It is observed that the samples generated by the SLGP with a two-dimensional latent space have an obviously noisy background and misidentify some digits, for example, '1' and '7'. By increasing the latent dimensionality up to sixteen, we become capable of generating clear and diverse digit images conditioned on the labels.
6 Conclusion
In order to enhance the capability of GPs to learn rich statistical representations from massive data, we studied three scalable modulated GPs, which encode challenging details into a latent variable by modulating the outputs or inputs. Variational inference further helps derive analytical or tighter ELBOs for efficient and effective model training. We extensively evaluated these scalable modulated GPs and compared them against state-of-the-art GP and NN counterparts on various tasks. It is observed that (i) they outperform the conventional scalable GP in terms of the quality of the predictive distribution in various challenging scenarios, and (ii) particularly, the SLGP shows remarkable performance in learning various distributions, at the cost however of possibly sacrificing the homoscedastic noise in some circumstances.
Acknowledgments
This work was supported by the Fundamental Research Funds for the Central Universities (DUT19RC(3)070) at Dalian University of Technology, and it was partially supported by the Research and Innovation in Science and Technology Major Project of Liaoning Province (2019JH1-10100024), the National Key Research and Development Project (2016YFB0600104), and the MIIT Marine Welfare Project (Z135060009002).
Appendix A Acronyms and notations
A.1 Acronyms
A.2 Notation
Appendix B Experimental configurations
The UCI datasets. We elaborate the model configurations on the eight UCI datasets as below. First, we preprocess the data by standardizing each dimension to have zero mean and unit variance. In addition, we shuffle and split the data, choosing 90% for training and the remaining 10% for testing.
Second, the modulated GPs employ the SE kernel. They adopt inducing variables, the positions of which are initialized by the k-means clustering technique. The GP competitors, including SGP, SVHGP, DAGP and GPCDE, take the same configuration for the fairness of comparison. It is notable that, since the SLGP has an augmented input space, we draw samples from a normal distribution to act as inducing points for the extra dimensions. Besides, the SLGP performs a grid search of the balance parameter over a candidate set, uses a single latent input, and keeps the latent dimension fixed unless otherwise indicated. In addition, both the SMGP and SLGP use Monte Carlo samples to estimate their ELBOs (17) and (29). Note that the SHGP model has no need of sampling due to the analytical ELBO (9).
Third, the MLPs in SLGP take fully-connected (FC) neural networks with three hidden layers, each of which has 100 units and applies the ReLU activation; the shared parameter of the encoder MLPs in (25) is initialized to a fixed value. It is notable that the experiments do not use additional fine-tuning tricks, e.g., a scheduled learning rate, weight regularization, or the dropout technique, for the MLPs. The NN counterparts, including HNN, MDN and CVAE, take the same architecture, with the only difference being that the last layer is not a GP.
Fourth, the training process adopts the Adam solver [59] with a batch size of 512, a fixed learning rate, and a maximum of 20000 iterations.
Finally, as for model evaluation on the test set, we sample 200 points from the predictive distribution at each test point, and then employ the kernel density estimator from the scikit-learn package [54] to estimate the density as well as the negative log likelihood. Note that the bandwidth parameter for the kernel density estimator is estimated by Silverman’s rule.
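A minimal sketch of this evaluation for a single test point is given below, assuming one-dimensional predictive samples; the function name and details are illustrative rather than the exact released implementation.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def nll_from_samples(pred_samples, y_true):
    """Estimate the negative log likelihood of a test target from predictive samples
    via a Gaussian kernel density estimator with Silverman's rule-of-thumb bandwidth."""
    s = np.asarray(pred_samples, dtype=float).reshape(-1, 1)
    n = len(s)
    sigma = s.std(ddof=1)
    iqr = np.subtract(*np.percentile(s, [75, 25]))
    bw = 0.9 * min(sigma, iqr / 1.34) * n ** (-0.2)       # Silverman's rule
    kde = KernelDensity(kernel='gaussian', bandwidth=max(bw, 1e-6)).fit(s)
    return -kde.score_samples(np.array([[y_true]]))[0]    # negative log density at y_true

# Example: NLL of y = 0.1 under 200 hypothetical predictive samples
print(nll_from_samples(np.random.randn(200) * 0.5 + 0.2, y_true=0.1))
```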
The large-scale nytaxi dataset. The three scalable modulated GPs take almost the same configuration as that on the UCI datasets. The differences are that they herein adopt a different number of inducing variables and run the Adam optimizer with a batch size of 1024 and a maximum of 50000 iterations; besides, the SLGP employs a small balance parameter on this large dataset to enhance the extraction of patterns from the massive data; finally, the MLPs in SLGP and the NN models take the hidden architecture of "FC(256)-Relu-FC(128)-Relu-FC(64)-Relu-FC(32)".
The image dataset. For this 10-input-784-output image dataset, we randomly choose 63000 images for training and the remaining 7000 images for testing. The SLGP employs a set of inducing points and a small balance parameter, and runs the Adam optimizer over 50000 iterations with a batch size of 64. As for the MLPs in SLGP and CVAE, they take the hidden architecture of "FC(512)-Relu-FC(512)-Relu-FC(512)".
References
- [1] V. Dutordoir, H. Salimbeni, J. Hensman, M. Deisenroth, Gaussian process conditional density estimation, in: Advances in Neural Information Processing Systems, 2018, pp. 2385–2395.
- [2] M. Kaiser, C. Otte, T. A. Runkler, C. H. Ek, Data association with Gaussian processes, in: European Conference on Principles of Data Mining and Knowledge Discovery, 2019, pp. 548–564.
- [3] I. Bilionis, N. Zabaras, Multi-output local Gaussian process regression: Applications to uncertainty quantification, Journal of Computational Physics 231 (17) (2012) 5718–5746.
- [4] C. K. Williams, C. E. Rasmussen, Gaussian processes for machine learning, MIT press Cambridge, MA, 2006.
- [5] Y. Son, S. Lee, S. Park, J. Lee, Learning representative exemplars using one-class Gaussian process regression, Pattern Recognition 74 (2018) 185–197.
- [6] M. Kandemir, T. Kekeç, R. Yeniterzi, Supervising topic models with Gaussian processes, Pattern Recognition 77 (2018) 226–236.
- [7] P. Li, S. Chen, Hierarchical Gaussian processes model for multi-task learning, Pattern Recognition 74 (2018) 134–144.
- [8] P. Li, S. Chen, Gaussian process approach for metric learning, Pattern Recognition 87 (2019) 17–28.
- [9] S. Sun, W. Dong, Q. Liu, Multi-view representation learning with deep Gaussian processes, IEEE Transactions on Pattern Analysis and Machine Intelligence (2020) 1–15.
- [10] H. Liu, Y.-S. Ong, X. Shen, J. Cai, When Gaussian process meets big data: A review of scalable GPs, IEEE Transactions on Neural Networks and Learning Systems (2020) 1–19.
- [11] S. Ambikasaran, D. Foreman-Mackey, L. Greengard, D. W. Hogg, M. O'Neil, Fast direct methods for Gaussian processes, IEEE Transactions on Pattern Analysis and Machine Intelligence 38 (2) (2016) 252–265.
- [12] K. A. Wang, G. Pleiss, J. R. Gardner, S. Tyree, K. Q. Weinberger, A. G. Wilson, Exact Gaussian processes on a million data points, arXiv preprint arXiv:1903.08114.
- [13] M. P. Deisenroth, J. W. Ng, Distributed Gaussian processes, in: International Conference on Machine Learning, JMLR.org, 2015, pp. 1481–1490.
- [14] H. Liu, J. Cai, Y. Wang, Y.-S. Ong, Generalized robust Bayesian committee machine for large-scale Gaussian process regression, in: International Conference on Machine Learning, 2018, pp. 3131–3140.
- [15] G. Pillonetto, L. Schenato, D. Varagnolo, Distributed multi-agent Gaussian regression via finite-dimensional approximations, IEEE Transactions on Pattern Analysis and Machine Intelligence 41 (9) (2019) 2098–2111.
- [16] E. Snelson, Z. Ghahramani, Sparse Gaussian processes using pseudo-inputs, in: Advances in Neural Information Processing Systems, 2006, pp. 1257–1264.
- [17] M. K. Titsias, Variational learning of inducing variables in sparse Gaussian processes, in: Artificial Intelligence and Statistics, 2009, pp. 567–574.
- [18] Y. Cao, M. A. Brubaker, D. J. Fleet, A. Hertzmann, Efficient optimization for sparse Gaussian process regression, IEEE Transactions on Pattern Analysis and Machine Intelligence 37 (12) (2015) 2415–2427.
- [19] J. Hensman, N. Fusi, N. D. Lawrence, Gaussian processes for big data, in: Uncertainty in Artificial Intelligence, 2013, pp. 282–290.
- [20] A. Wilson, H. Nickisch, Kernel interpolation for scalable structured Gaussian processes (KISS-GP), in: International Conference on Machine Learning, 2015, pp. 1775–1784.
- [21] G. Pleiss, J. Gardner, K. Weinberger, A. G. Wilson, Constant-time predictive distributions for Gaussian processes, in: International Conference on Machine Learning, 2018, pp. 4111–4120.
- [22] D. Burt, C. E. Rasmussen, M. Van Der Wilk, Rates of convergence for sparse variational Gaussian process regression, in: International Conference on Machine Learning, 2019, pp. 862–871.
- [23] R. P. Adams, O. Stegle, Gaussian process product models for nonparametric nonstationarity, in: International Conference on Machine Learning, 2008, pp. 1–8.
- [24] P. W. Goldberg, C. K. Williams, C. M. Bishop, Regression with input-dependent noise: A Gaussian process treatment, in: Advances in Neural Information Processing Systems, 1998, pp. 493–499.
- [25] L. Munoz-Gonzalez, M. Lazaro-Gredilla, A. R. Figueiras-Vidal, Divisive Gaussian processes for nonstationary regression, IEEE Transactions on Neural Networks 25 (11) (2014) 1991–2003.
- [26] L. Munoz-Gonzalez, M. Lazaro-Gredilla, A. R. Figueiras-Vidal, Laplace approximation for divisive Gaussian processes for nonstationary regression, IEEE Transactions on Pattern Analysis and Machine Intelligence 38 (3) (2016) 618–624.
- [27] C. J. Maddison, A. Mnih, Y. W. Teh, The concrete distribution: A continuous relaxation of discrete random variables, in: International Conference on Learning Representations, 2017.
- [28] E. Jang, S. Gu, B. Poole, Categorical reparameterization with Gumbel-Softmax, in: International Conference on Learning Representations, 2017.
- [29] A. C. Damianou, M. K. Titsias, N. D. Lawrence, Variational inference for uncertainty on the inputs of Gaussian process models, arXiv preprint arXiv:1409.2287.
- [30] N. Lawrence, Probabilistic non-linear principal component analysis with Gaussian process latent variable models, Journal of Machine Learning Research 6 (Nov) (2005) 1783–1816.
- [31] A. Pagnoni, K. Liu, S. Li, Conditional variational autoencoder for neural machine translation, arXiv preprint arXiv:1812.04405.
- [32] Y. Burda, R. B. Grosse, R. Salakhutdinov, Importance weighted autoencoders, in: International Conference on Learning Representations, 2016.
- [33] J. Domke, D. R. Sheldon, Importance weighting and variational inference, in: Advances in Neural Information Processing Systems, 2018, pp. 4470–4479.
- [34] C. Cremer, Q. Morris, D. Duvenaud, Reinterpreting importance-weighted autoencoders, arXiv preprint arXiv:1704.02916.
- [35] H. Salimbeni, V. Dutordoir, J. Hensman, M. Deisenroth, Deep Gaussian processes with importance-weighted variational inference, in: International Conference on Machine Learning, 2019, pp. 5589–5598.
- [36] H. Liu, Y.-S. Ong, X. Jiang, X. Wang, Deep latent-variable kernel learning, arXiv preprint arXiv:2005.08467.
- [37] A. Bhattacharyya, M. Hanselmann, M. Fritz, B. Schiele, C.-N. Straehle, Conditional flow variational autoencoders for structured sequence prediction, in: Bayesian Deep Learning NeurIPS 2019 Workshop, 2019.
- [38] M. Lázaro-Gredilla, M. K. Titsias, Variational heteroscedastic Gaussian process regression, in: International Conference on Machine Learning, Omnipress, 2011, pp. 841–848.
- [39] I. A. Almosallam, M. J. Jarvis, S. J. Roberts, GPZ: Non-stationary sparse Gaussian processes for heteroscedastic uncertainty estimation in photometric redshifts, Monthly Notices of the Royal Astronomical Society 462 (1) (2016) 726–739.
- [40] H. Liu, Y.-S. Ong, J. Cai, Large-scale heteroscedastic regression via Gaussian process, IEEE Transactions on Neural Networks and Learning Systems (2018) 1–14.
- [41] S. E. Yuksel, J. N. Wilson, P. D. Gader, Twenty years of mixture of experts, IEEE Transactions on Neural Networks and Learning Systems 23 (8) (2012) 1177–1193.
- [42] S. Masoudnia, R. Ebrahimpour, Mixture of experts: A literature survey, Artificial Intelligence Review 42 (2) (2014) 275–293.
- [43] C. E. Rasmussen, Z. Ghahramani, Infinite mixtures of Gaussian process experts, in: Advances in Neural Information Processing Systems, 2002, pp. 881–888.
- [44] E. Meeds, S. Osindero, An alternative infinite mixture of Gaussian process experts, in: Advances in Neural Information Processing Systems, 2006, pp. 883–890.
- [45] Y. Yang, J. Ma, An efficient EM approach to parameter learning of the mixture of Gaussian processes, in: International Symposium on Neural Networks, 2011, pp. 165–174.
- [46] C. Yuan, C. Neubauer, Variational mixture of Gaussian process experts, in: Advances in Neural Information Processing Systems, 2009, pp. 1897–1904.
- [47] S. Sun, X. Xu, Variational inference for infinite mixtures of Gaussian processes with applications to traffic flow prediction, IEEE Transactions on Intelligent Transportation Systems 12 (2) (2011) 466–475.
- [48] T. Nguyen, E. Bonilla, Fast allocation of Gaussian process experts, in: International Conference on Machine Learning, 2014, pp. 145–153.
- [49] C. Wang, R. M. Neal, Gaussian process regression with heteroscedastic or non-Gaussian residuals, arXiv preprint arXiv:1212.6246.
- [50] E. Bodin, N. D. Campbell, C. H. Ek, Latent Gaussian process regression, arXiv preprint arXiv:1707.05534.
- [51] K. Märtens, K. Campbell, C. Yau, Decomposing feature-level variation with covariate Gaussian process latent variable models, in: International Conference on Machine Learning, 2019, pp. 4372–4381.
- [52] S. Depeweg, J. Hernández-Lobato, F. Doshi-Velez, S. Udluft, Learning and policy search in stochastic dynamical systems with Bayesian neural networks, in: International Conference on Learning Representations, 2019.
- [53] K. Sohn, X. Yan, H. Lee, Learning structured output representation using deep conditional generative models, in: Advances in Neural Information Processing Systems, 2015, pp. 3483–3491.
- [54] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, E. Duchesnay, Scikit-learn: Machine learning in Python, Journal of Machine Learning Research 12 (2011) 2825–2830.
- [55] A. Kendall, Y. Gal, What uncertainties do we need in Bayesian deep learning for computer vision, in: Advances in Neural Information Processing Systems, 2017, pp. 5580–5590.
- [56] D. Dua, C. Graff, UCI machine learning repository (2017). URL http://archive.ics.uci.edu/ml
- [57] D. Nix, A. Weigend, Estimating the mean and variance of the target probability distribution, in: IEEE International Conference on Neural Networks, Vol. 1, 1994, pp. 55–60.
- [58] C. M. Bishop, Mixture density networks, Tech. Rep. NCRG/94/004 (January 1994). URL https://www.microsoft.com/en-us/research/publication/mixture-density-networks/
- [59] D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980.