Incremental Structure Discovery of Classification via Sequential Monte Carlo
Abstract
Gaussian Processes (GPs) provide a powerful framework for making predictions and quantifying uncertainty in classification through kernels and Bayesian non-parametric learning. Building such models typically requires strong prior knowledge to preselect kernels, which can be ineffective for online classification applications that process data sequentially, because the features of the data may shift during the process. To alleviate the prior knowledge required by GPs and to learn new features from data that arrive successively, this paper presents a novel method that automatically discovers classification models on complex data with little prior knowledge. Our method adapts a recently proposed technique for GP-based time-series structure discovery, which integrates GPs and Sequential Monte Carlo (SMC). We extend the technique to handle the extra latent variables in GP classification, so that our method can effectively and adaptively learn a-priori unknown structures of classification models from continuous input. In addition, our method adapts the model structures to new batches of data. Our experiments show that our method automatically incorporates various kernel features on synthetic and real-world classification data. On the real-world data, our method outperforms various classification methods in both online and offline settings, achieving a 10% accuracy improvement on one benchmark.
1 Introduction
Classification is a fundamental problem in machine learning research [7, 18, 15, 13]. Many outstanding solutions have been proposed from different perspectives, including neural networks [14], tree models [3], kernel methods [11], etc. These distinct methodologies offer unique advantages across various problem domains, necessitating a profound understanding of both the problem domain and algorithmic intricacies to discern and implement the most suitable solution.
The design of classification methods widely faces the problems of insufficient prior knowledge and variation of data patterns. In order to automatically select a suitable method for a specific domain, meta-learning [29, 24, 12] evaluates various candidate methods and chooses the one it judges to fit the problem best. However, building such candidate methods also requires expertise, and the pattern of a classification domain may shift across time or across batches of inputs, especially in an online setting. This problem is commonly referred to as incremental learning.
In this work, we study incremental learning for Gaussian Process (GP) classification with automatic selection of GP kernels. Our method follows the methodology of Bayesian inference. We encode our assumptions as a prior distribution over GP kernels, including both the kernel structures and parameters. Then, the evidence, or likelihood, is obtained by conditioning on the observed data points. By Bayes' rule, one obtains the posterior distribution of GP kernels, in a manner that is consistent with the prior distribution and the evidence.
The reason we choose Bayesian learning to address this problem is its ability to describe a distribution over kernels, quantify uncertainty in predictions, and adapt to new data by updating the posterior distribution. Describing a distribution over kernels is fundamental for automatic kernel selection because it provides a probabilistic framework for evaluating and comparing different kernels based on their suitability for the data. With uncertainty quantification and posterior updating, our method is flexible and robust on dynamic and evolving datasets.
AutoGP [25] is a recently proposed framework for automatic selection of GP kernels for regression based on Bayesian learning. It proposes a structure learning algorithm that combines sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC) methods for efficient posterior inference. For GP regression, the posterior distribution is usually Gaussian, so one can use analytical closed-form solutions; for GP classification, however, one usually needs to handle non-Gaussian posterior distributions, due to an extra mapping from GP-produced logits to actual classification probabilities.
Contribution
We propose a novel method for automatically selecting Gaussian Process kernels for classification models and incrementally learning from evolving datasets. We extend the AutoGP framework to binary classification with automatic kernel selection, which reduces the required prior knowledge. Our method applies Sequential Monte Carlo with online kernel adaptation to address the problem of pattern shift during online learning. Our experiments show that our method can automatically discover different kernel structures in different datasets and outperform various classification methods on real-world datasets.
2 Gaussian Processes Classification Models
In this section, we formulate a family of Gaussian Process Classification (GPC) models, whose kernel structures do not have to be pre-determined, in the sense that both the kernel structures and parameters reside in the latent space of the Bayesian inference.
In this paper, we focus on binary classification, but it would be straightforward to extend our approach to support multi-class classification.
2.1 Preliminaries
Let $\mathcal{D} = (X, y)$ be a dataset for binary classification, where $X = (x_1, \dots, x_n)$ are the data points and $y = (y_1, \dots, y_n)$ are the corresponding labels, i.e., $x_i \in \mathcal{X}$ and $y_i \in \{0, 1\}$ for each $i$, where $\mathcal{X}$ stands for the feature space of data points. GPC typically samples a function $f$ from a Gaussian Process prior $\mathcal{GP}(\mu, k)$, where $\mu$ is a mean function and $k$ is a covariance function, i.e., a kernel [28]. The probabilistic model for classification is then formulated by
$f \sim \mathcal{GP}(\mu, k)$ (1)
$y_i \sim \mathrm{Bernoulli}(\sigma(f(x_i))), \quad i = 1, \dots, n$ (2)
where $\sigma$ is a sigmoid function, e.g., the logistic function or the probit function. To infer the label $y_*$ of a new data point $x_*$, i.e., to reason about the posterior distribution $p(y_* \mid x_*, \mathcal{D})$, we treat $f = (f(x_1), \dots, f(x_n))$ as a vector of latent variables and carry out the inference in two steps: (i) first derive the distribution of the latent variable $f_* = f(x_*)$ as
$p(f_* \mid x_*, \mathcal{D}) = \int p(f_* \mid x_*, X, f)\, p(f \mid \mathcal{D})\, \mathrm{d}f$ (3)
and (ii) then derive the posterior of $y_*$ by integrating out $f_*$ as
$p(y_* = 1 \mid x_*, \mathcal{D}) = \int \sigma(f_*)\, p(f_* \mid x_*, \mathcal{D})\, \mathrm{d}f_*$ (4)
Note that the key part is to account for the posterior distribution of the latent variables $f$.
Usually, the mean $\mu$ is pre-determined to be the constant-zero function, whereas the kernel $k$ is parameterized by a vector $\theta \in \mathbb{R}^{d_k}$, where $d_k$ is the number of real-valued parameters in $k$. Let us write $k_\theta$ for the actual covariance function derived from $k$ and $\theta$. While many GP-based methods pre-determine $k$ and treat $\theta$ as hyper-parameters, in this paper we aim to characterize both $k$ and $\theta$ as latent information of the classification model, as well as to develop, in the following sections, a method that is adaptive in both the structure and the parameters of the kernel.
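To make the setup concrete, the following sketch (a minimal Python illustration, not the paper's implementation) samples synthetic labels from the generative model (1)-(2), assuming a squared-exponential kernel with illustrative parameter values.

```python
# Minimal sketch of the generative GPC model (1)-(2) with a zero mean function
# and an assumed squared-exponential kernel; parameter values are illustrative.
import numpy as np
from scipy.special import expit  # logistic sigmoid

def sq_exp_kernel(X1, X2, scale=1.0, lengthscale=1.0):
    """k(x, x') = scale^2 * exp(-||x - x'||^2 / (2 * lengthscale^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return scale**2 * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 2))            # data points in a 2-D feature space
K = sq_exp_kernel(X, X) + 1e-6 * np.eye(50)     # jitter for numerical stability
f = rng.multivariate_normal(np.zeros(50), K)    # f ~ GP(0, k) evaluated at X, cf. (1)
y = rng.binomial(1, expit(f))                   # y_i ~ Bernoulli(sigma(f(x_i))), cf. (2)
```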
2.2 GPC with A Domain Specific Language for Kernels
To allow a rich prior distribution over kernel structures $k$, we define a sample space $\mathcal{K}$ of kernel expressions using a probabilistic context-free grammar (PCFG), following the practice of a line of prior work [9, 26, 25]:
$K \;\to\; B \;\mid\; (K \oplus K)$ (5)
$B \;\to\; \textsc{Linear} \;\mid\; \textsc{SquaredExponential} \;\mid\; \cdots$ (6)
$\oplus \;\to\; + \;\mid\; \times$ (7)
The non-terminal $B$ stands for an extensible collection of basic kernels. Two kernels can be combined with a binary operator $\oplus$ that computes the pointwise addition or multiplication of the two kernels. The PCFG also assigns a probability to each production rule in (5)-(7), thus formulating a prior distribution on $\mathcal{K}$. The meaning $\llbracket k \rrbracket_\theta$ of a kernel expression $k$ with parameters $\theta$ is defined inductively as follows.
(8) | |||||
(9) | |||||
(10) | |||||
(11) |
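As an illustration, the sketch below samples kernel expressions from a PCFG of the shape (5)-(7) and evaluates them pointwise following (10)-(11); the production probability `p_basic`, the two basic kernels, and the single shared parameter dictionary `theta` are simplifying assumptions rather than the paper's exact choices.

```python
# Sketch of a PCFG over kernel expressions and its pointwise semantics.
import numpy as np

BASIC = ["Linear", "SquaredExponential"]

def sample_kernel(rng, p_basic=0.6):
    """K -> B with probability p_basic, otherwise K -> (K op K) with op in {+, *}."""
    if rng.random() < p_basic:
        return str(rng.choice(BASIC))               # a leaf (basic kernel)
    op = str(rng.choice(["+", "*"]))
    return (op, sample_kernel(rng, p_basic), sample_kernel(rng, p_basic))

def eval_kernel(k, x, xp, theta):
    """Composite kernels add or multiply their children pointwise, cf. (10)-(11)."""
    if k == "Linear":
        return theta["bias"] + np.dot(x, xp)
    if k == "SquaredExponential":
        return np.exp(-0.5 * np.sum((x - xp) ** 2) / theta["lengthscale"] ** 2)
    op, left, right = k
    l, r = eval_kernel(left, x, xp, theta), eval_kernel(right, x, xp, theta)
    return l + r if op == "+" else l * r

rng = np.random.default_rng(1)
k = sample_kernel(rng)   # e.g. ('+', 'Linear', 'SquaredExponential')
value = eval_kernel(k, np.array([0.0, 1.0]), np.array([1.0, 0.5]),
                    {"bias": 1.0, "lengthscale": 1.0})
```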
With the PCFG, we extend the standard GPC model (c.f. (1)-(2)) to treat both the structure and the parameters of the kernel as latent variables, resulting in a probabilistic model $p(k, \theta, \eta, f, y \mid X)$:
$k \sim \mathrm{PCFG}(\mathcal{K})$ (12)
$\theta_j \sim \mathcal{N}(0, 1), \quad j = 1, \dots, d_k$ (13)
$\eta \sim \mathcal{N}(0, 1)$ (14)
$f \sim \mathcal{N}\big(0,\; k_\theta(X, X) + \eta I\big)$ (15)
$y_i \sim \mathrm{Bernoulli}(\sigma(f_i)), \quad i = 1, \dots, n$ (16)
The latent variable $\eta$ stands for the noise in the GP. Note that the model samples kernel parameters from the standard Normal distribution, but basic kernels may impose constraints on their parameters. We apply standard transformations to obtain constrained parameters and omit the details here; for example, $\exp(\cdot)$ for positive parameters and the logistic function for parameters constrained to a bounded interval.
We further apply a standard reparameterization trick to the model above via Cholesky decomposition. That is, instead of sampling $f$ from a multivariate Normal distribution, we sample i.i.d. auxiliary variables $u_1, \dots, u_n$ from the standard Normal distribution and compute the latent vector $f$ as follows.
$u_i \sim \mathcal{N}(0, 1), \quad i = 1, \dots, n$ (17)
$\Sigma = k_\theta(X, X) + \eta I$ (18)
$f = \mathrm{Cholesky}(\Sigma)\, u$ (19)
$y_i \sim \mathrm{Bernoulli}(\sigma(f_i)), \quad i = 1, \dots, n$ (20)
The term $\mathrm{Cholesky}(\Sigma)$ performs Cholesky decomposition of the covariance matrix $\Sigma$, i.e., it computes a lower-triangular matrix $L$ such that $L L^\top = \Sigma$. The reparameterized model specifies a joint distribution $p(k, \theta, \eta, u, y \mid X)$. To simplify the notation, we write $\theta = (\theta_1, \dots, \theta_{d_k})$ and $u = (u_1, \dots, u_n)$ to denote the kernel parameters and auxiliary variables, respectively.
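A minimal sketch of the reparameterization (17)-(19), assuming a placeholder covariance matrix `K` and an illustrative noise value:

```python
# Sketch: map i.i.d. standard Normal auxiliaries u to the GP latent vector f
# via the Cholesky factor of the noise-augmented covariance matrix, cf. (17)-(19).
import numpy as np
from scipy.special import expit

def latent_from_auxiliary(K, noise, u):
    """f = Cholesky(K + noise * I) @ u, so that f ~ N(0, K + noise * I)."""
    L = np.linalg.cholesky(K + noise * np.eye(K.shape[0]))
    return L @ u

rng = np.random.default_rng(2)
n = 50
K = np.eye(n)                         # placeholder for k_theta(X, X)
u = rng.standard_normal(n)            # u_i ~ N(0, 1), i.i.d., cf. (17)
f = latent_from_auxiliary(K, 0.1, u)  # latent vector, cf. (18)-(19)
probs = expit(f)                      # classification probabilities sigma(f_i)
```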
2.3 Problem Statement: Structure Discovery for GPC
By Bayes' rule, the posterior distribution on $(k, \theta, \eta, u)$ given a dataset $\mathcal{D} = (X, y)$ is given by
$p(k, \theta, \eta, u \mid \mathcal{D}) \;=\; \dfrac{p(k, \theta, \eta, u, y \mid X)}{p(y \mid X)} \;\propto\; p(k, \theta, \eta, u, y \mid X)$ (21)
The goal of our method is to generate and maintain a finite set of weighted particles
$\big\{\big(w^{(j)},\, (k^{(j)}, \theta^{(j)}, \eta^{(j)}, u^{(j)})\big)\big\}_{j=1}^{M},$ (22)
each of which consists of a weight $w^{(j)}$ and a tuple of latent variables $(k^{(j)}, \theta^{(j)}, \eta^{(j)}, u^{(j)})$. These particles are intended to approximate the posterior distribution given in (21), so as to compute the expectation of a test function $\phi$ with respect to the posterior distribution as
$\mathbb{E}\big[\phi(k, \theta, \eta, u) \mid \mathcal{D}\big] \;\approx\; \dfrac{\sum_{j=1}^{M} w^{(j)}\, \phi\big(k^{(j)}, \theta^{(j)}, \eta^{(j)}, u^{(j)}\big)}{\sum_{j=1}^{M} w^{(j)}}$ (23)
Recall that in (3)-(4) we reviewed how to condition a standard GPC model on a dataset to make predictions. We generalize the idea to define a test function $\phi_{x_*}$ that computes the classification probability of a new data point $x_*$, given all the latent information $(k, \theta, \eta, u)$:
$\phi_{x_*}(k, \theta, \eta, u) \;=\; p(y_* = 1 \mid x_*, \mathcal{D}, k, \theta, \eta, u) \;=\; \int \sigma(f_*)\, p(f_* \mid x_*, X, f)\, \mathrm{d}f_*$ (24)
Different from (3), the posterior distribution on $f_*$ becomes analytically tractable because the latent vector $f$ is determined by the given latent information $(k, \theta, \eta, u)$ via (19). To see that, we derive and simplify the posterior distribution on $f_*$ as
$p(f_* \mid x_*, X, f) \;=\; \mathcal{N}\big(f_*;\; k_\theta(x_*, X)\, \Sigma^{-1} f,\;\; k_\theta(x_*, x_*) + \eta - k_\theta(x_*, X)\, \Sigma^{-1} k_\theta(X, x_*)\big)$ (25)
which is a Normal distribution because of the multivariate Normal joint distribution (c.f. (15)). However, (24) might not be analytically tractable due to the choice of the sigmoid function $\sigma$. Fortunately, to predict the label for a new data point $x_*$, (24) is essentially a univariate integral, so we resort to Monte Carlo estimation. In some cases, e.g., when $\sigma$ is the probit function, the integral has a closed-form solution so that $\phi_{x_*}$ can be evaluated easily.
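The per-particle prediction can be sketched as follows; `kern` stands for the covariance function $k_\theta$ of one particle and `Kxx_noisy` for $\Sigma = k_\theta(X, X) + \eta I$, both assumed to be precomputed, and the sigmoid is taken to be logistic so that (24) is estimated by Monte Carlo.

```python
# Sketch of evaluating the test function (24) for a single particle: the latent
# vector f is fixed, so (25) is a univariate Normal and (24) is a 1-D integral.
import numpy as np
from scipy.special import expit

def predict_proba(kern, Kxx_noisy, X, f, x_star, eta, rng, num_samples=64):
    """Monte Carlo estimate of p(y_* = 1 | x_*, D, k, theta, eta, u)."""
    k_star = np.array([kern(x, x_star) for x in X])    # k_theta(X, x_*)
    k_ss = kern(x_star, x_star) + eta                  # prior variance at x_*
    solve = np.linalg.solve(Kxx_noisy, k_star)         # Sigma^{-1} k_theta(X, x_*)
    mean = solve @ f                                   # conditional mean, cf. (25)
    var = max(k_ss - solve @ k_star, 1e-12)            # conditional variance, cf. (25)
    f_star = rng.normal(mean, np.sqrt(var), size=num_samples)
    return expit(f_star).mean()                        # Monte Carlo estimate of (24)
```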
3 Sequential Monte Carlo for Adaptive and Incremental GPC Learning
In this section, we develop an adaptive and incremental learning method for the GPC models presented in Section 2. We extend a recently proposed Sequential Monte Carlo sampler for time-series learning, namely AutoGP [25], to our setting of GP-based classification. Notably, our method supports both online and offline settings, enabling it to perform classification both when data arrive sequentially and when data are available all at once.
3.1 Background
Sequential Monte Carlo
Sequential Monte Carlo (SMC) is a class of sampling-based inference algorithms designed to approximate a sequence of hard-to-sample probability distributions $\pi_0, \pi_1, \dots, \pi_T$, especially in dynamic and non-linear systems [1]. An SMC sampler produces at each step $t$ a finite set of weighted particles $\{(w_t^{(j)}, z_t^{(j)})\}_{j=1}^{M}$—in the same manner as shown by (22)—as an empirical approximation of the distribution $\pi_t$. Initially, at step $t = 0$, an SMC sampler draws i.i.d. samples from $\pi_0$ (assuming that $\pi_0$ is easy to sample from) and assigns all the weights to be one. At step $t > 0$, the particle set is updated in two steps: (i) first evolve each particle by sampling $z_t^{(j)}$ from a forward Markov kernel $F_t$ between the measurable spaces at step $t-1$ and $t$:
$z_t^{(j)} \sim F_t\big(z_{t-1}^{(j)}, \cdot\big)$ (26)
and (ii) then reweight each particle using the forward Markov kernel as well as a backward Markov kernel $B_{t-1}$ between the measurable spaces at step $t$ and $t-1$:
$w_t^{(j)} \;=\; w_{t-1}^{(j)} \cdot \dfrac{\pi_t\big(z_t^{(j)}\big)\, B_{t-1}\big(z_t^{(j)}, z_{t-1}^{(j)}\big)}{\pi_{t-1}\big(z_{t-1}^{(j)}\big)\, F_t\big(z_{t-1}^{(j)}, z_t^{(j)}\big)},$ (27)
with the understanding that it is totally fine if we can evaluate $\pi_t$ pointwise only up to a normalizing factor. The Markov kernels should be chosen based on the actual learning problem. For example, one can define $B_{t-1}$ to be the time reversal of $F_t$ with respect to $\pi_t$, i.e., $B_{t-1}(z_t, z_{t-1}) = \pi_t(z_{t-1})\, F_t(z_{t-1}, z_t) / \pi_t(z_t)$, leading to the simple reweighting scheme $w_t^{(j)} = w_{t-1}^{(j)} \cdot \pi_t(z_{t-1}^{(j)}) / \pi_{t-1}(z_{t-1}^{(j)})$. Note that an SMC sampler uses the backward kernels only for density estimation, whereas it uses the forward kernels also for proposing new values. An SMC sampler also features a resampling phase that deals with particle collapse, i.e., when the weights of some particles become negligible compared with those of other particles. Each particle would be resampled, simultaneously, to take the value $z_t^{(i)}$ with probability $w_t^{(i)} / \sum_{j=1}^{M} w_t^{(j)}$. After resampling, an SMC sampler can use a rejuvenation phase to evolve the particles within step $t$, i.e., to rejuvenate each particle with respect to the target distribution $\pi_t$. There are many ways to implement rejuvenation; in this paper, we consider Markov chain Monte Carlo (MCMC).
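For concreteness, here is a minimal sketch of the reweighting and adaptive-resampling phases, assuming the forward kernel is the identity with its time reversal as the backward kernel (so the incremental weight is $\pi_t / \pi_{t-1}$) and a hypothetical `log_target(t, z)` that evaluates $\log \pi_t(z)$ up to an additive constant:

```python
# Sketch of SMC reweighting (27) under an identity forward kernel, plus
# ESS-triggered multinomial resampling; weights are kept in log space.
import numpy as np

def reweight(log_w, particles, t, log_target):
    inc = np.array([log_target(t, z) - log_target(t - 1, z) for z in particles])
    return log_w + inc                                  # w_t = w_{t-1} * pi_t / pi_{t-1}

def ess(log_w):
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)                         # effective sample size

def maybe_resample(log_w, particles, rng, threshold=0.5):
    M = len(particles)
    if ess(log_w) >= threshold * M:
        return log_w, particles                         # no collapse: keep particles
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    idx = rng.choice(M, size=M, p=w)                    # multinomial resampling
    return np.zeros(M), [particles[i] for i in idx]     # reset to equal weights
```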
Markov Chain Monte Carlo
Markov chain Monte Carlo (MCMC) methods are a class of inference algorithms that sample from hard-to-sample probability distributions by constructing a Markov chain that has the desired target distribution as its stationary distribution. An MCMC method generates a sequence of samples by simulating the Markov chain, so that every sample depends only on the previous sample. At the heart of MCMC methods is the Metropolis-Hastings (MH) algorithm [10, 19]. In each iteration, MH proposes a candidate $z'$ from a proposal distribution $q(z' \mid z)$, where $q(z' \mid z)$ denotes the probability density that the previous state $z$ transits to the proposed state $z'$. MH then computes a ratio $\alpha$ to decide whether to accept $z'$:
$\alpha \;=\; \min\!\left(1,\; \dfrac{\pi(z')\, q(z \mid z')}{\pi(z)\, q(z' \mid z)}\right),$ (28)
where $\pi$ is the target distribution, which can be evaluated up to a normalizing constant. Two popular and powerful MCMC methods are Involutive MCMC (IMCMC) [21] and Hamiltonian Monte Carlo (HMC) [20]. IMCMC's proposal samples auxiliary variables $v$ and applies an involutive map $\Phi$ on $(z, v)$ (i.e., $\Phi(\Phi(z, v)) = (z, v)$) to propose a new state $(z', v') = \Phi(z, v)$, where $z$ is the current state. As we will discuss in the next paragraph, IMCMC is suitable for implementing transdimensional proposals; in particular, it is suitable for evolving the kernel structure $k$ in GP-based models. HMC takes inspiration from Hamiltonian dynamics: its proposal samples an auxiliary momentum variable and uses numerical integration (e.g., leap-frog) to generate diverse candidate states in a continuous sample space. In particular, it is suitable for evolving the real-valued random variables in our GPC model, including the kernel parameters $\theta$ and the auxiliary variables $u$.
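A minimal sketch of one MH step implementing (28) in log space, with hypothetical `log_pi`, `propose`, and `log_q` callables supplied by the caller:

```python
# Sketch of a single Metropolis-Hastings transition, cf. (28).
import numpy as np

def mh_step(z, log_pi, propose, log_q, rng):
    z_new = propose(z, rng)                         # draw z' ~ q(. | z)
    log_alpha = (log_pi(z_new) + log_q(z, z_new)) - (log_pi(z) + log_q(z_new, z))
    if np.log(rng.random()) < min(0.0, log_alpha):  # accept with probability min(1, alpha)
        return z_new
    return z
```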
SMC for Time Series Learning
AutoGP [25] is a recently proposed method that can effectively and adaptively find suitable structures for time-series data in both the offline and online settings. Notably, AutoGP uses Gaussian Process regression for time-series learning. In this paper, we take inspiration from AutoGP and extend its methodology to solve classification problems. AutoGP is an SMC sampler based on data tempering; that is, the target distribution at step $t$ is the posterior distribution conditioned on the first $t$ data points in the time series. In this way, AutoGP enables incremental learning, and the nature of SMC—which evolves a collection of particles with possibly different kernel structures—enables adaptive learning. AutoGP carefully designs an IMCMC proposal to rejuvenate kernel structures. The proposal makes use of Subtree-Replace and Detach-Attach operations. The Subtree-Replace operation randomly replaces a sub-structure of a given kernel with another one, while the Detach-Attach operation randomly moves a sub-structure of a given kernel to another location in the same kernel. These operations enable AutoGP to effectively and efficiently explore versatile kernel structures.
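The Subtree-Replace move can be sketched on the kernel expressions of Section 2.2 as follows; the IMCMC acceptance step and the Detach-Attach move are omitted, and `sample_kernel` refers to the PCFG sketch above.

```python
# Sketch of a Subtree-Replace proposal: pick a random node of the kernel
# expression tree and replace the subtree rooted there with a fresh sample.
import numpy as np

def nodes(k, path=()):
    """Enumerate paths to every node (leaves and operator nodes) of an expression."""
    yield path
    if isinstance(k, tuple):                        # ('+' or '*', left, right)
        yield from nodes(k[1], path + (1,))
        yield from nodes(k[2], path + (2,))

def replace_at(k, path, new_subtree):
    if not path:
        return new_subtree
    op, left, right = k
    if path[0] == 1:
        return (op, replace_at(left, path[1:], new_subtree), right)
    return (op, left, replace_at(right, path[1:], new_subtree))

def subtree_replace(k, rng):
    paths = list(nodes(k))
    path = paths[rng.integers(len(paths))]          # choose a node uniformly at random
    return replace_at(k, path, sample_kernel(rng))  # sample_kernel: PCFG sketch above
```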
3.2 Method
Algorithm 1 shows our method of applying SMC to adaptive and incremental learning on the GPC models presented in Section 2. It follows a standard reweight-resample-rejuvenate pipeline for implementing SMC samplers [4]. Our method first reweights and then resamples the particle set. The reweighting step implicitly chooses the forward Markov kernel to be the identity kernel and the backward Markov kernel to be its time reversal; as a result, it calculates the new weight for each particle based on the joint probability of the model's latent variables and the first $t$ data points. The resampling step adopts the standard machinery of adaptive resampling, i.e., it only initiates resampling when the effective sample size (ESS) drops below a threshold. Finally, the rejuvenation loop follows the design of AutoGP [25]: in each rejuvenation iteration, we first evolve the kernel structure of each particle via IMCMC (reviewed in the previous section) and then apply HMC to evolve the real-valued latent kernel parameters $\theta$ and auxiliary variables $u$.
Algorithm 1 can already be applied to the online setting, in the sense that the step $t$ stands for the order in which data points arrive. In the offline setting, although Algorithm 1 also works, we can make it more effective by batch tempering: we divide a dataset of $n$ data points into $B$ batches of (roughly) equal size. Then, in the iteration for step $t$, instead of incorporating one single data point, we use the whole $t$-th batch in the reweighting and rejuvenation processes.
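A minimal sketch of the batch-tempering split, assuming a hypothetical `log_lik(z, m)` that evaluates $\log p(y_{1:m} \mid X_{1:m}, z)$ for a particle $z$ and a data prefix of size $m$:

```python
# Sketch of batch tempering: split n data points into B batches and, at step t,
# reweight each particle by the likelihood of the t-th batch given earlier data.
import numpy as np

def batch_boundaries(n, B):
    return np.linspace(0, n, B + 1, dtype=int)      # prefix sizes 0, n/B, ..., n

def tempered_reweight(log_w, particles, t, bounds, log_lik):
    prev, cur = bounds[t - 1], bounds[t]
    inc = np.array([log_lik(z, cur) - log_lik(z, prev) for z in particles])
    return log_w + inc                              # p(y_{1:cur} | .) / p(y_{1:prev} | .)
```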
With the particle set that approximates the posterior distribution (21), we can make predictions for new data points. Following (23)-(25), for a new data point $x_*$, we approximately compute the predictive probability of $y_*$ being $1$ as
$p(y_* = 1 \mid x_*, \mathcal{D}) \;\approx\; \dfrac{\sum_{j=1}^{M} w^{(j)} \cdot \frac{1}{S} \sum_{s=1}^{S} \sigma\big(f_*^{(j,s)}\big)}{\sum_{j=1}^{M} w^{(j)}},$ (29)
where $S$ is the number of Monte Carlo samples used to approximate (24) and $f_*^{(j,1)}, \dots, f_*^{(j,S)}$ are i.i.d. samples from the posterior (25) for each particle $j$.
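The particle-averaged prediction (29) can be sketched as follows, where `per_particle_proba(z, x_star, rng)` is assumed to wrap the Monte Carlo estimate of (24) from Section 2.3 for one particle `z`:

```python
# Sketch of the weighted ensemble prediction (29) over SMC particles.
import numpy as np

def ensemble_predict(particles, log_w, x_star, per_particle_proba, rng):
    w = np.exp(log_w - np.max(log_w))
    w /= w.sum()                                    # normalized particle weights
    probs = np.array([per_particle_proba(z, x_star, rng) for z in particles])
    return float(w @ probs)                         # estimate of p(y_* = 1 | x_*, D)
```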
4 Experiment
We implemented our method based on AutoGP [25]. We set up our experiments on a device with 13th Gen Intel(R) Core(TM) i9-13900H 2.60 GHz CPU and 32 GB RAM.
The experiments are designed to study the following research questions:
- RQ1: Can our method learn kernel structures and parameters adaptively for classification?
- RQ2: Can our method learn kernel structures and parameters incrementally for classification?
For the purpose of testing our method comprehensively, our experiments consist of two parts. The first part applies our method to toy datasets to demonstrate its capability and characteristics. The second part applies our method to real-world datasets across diverse domains, assessing its performance under authentic data conditions.
4.1 Toy Datasets
[Figure 1: Comparison of our method and GPs with pre-selected Linear and SquaredExponential kernels on three toy datasets.]
The toy datasets used in our experiments are generated by Scikit-Learn [23]. These toy datasets have 2-dimensional inputs and are easy to visualize along with the particle weights; thus, they provide an intuitive way to illustrate the characteristics of our method.
For RQ1, the first experiment demonstrates the automatic adaptability of our method in selecting appropriate kernels for varying datasets. Figure 1 illustrates the comparison between our method and GPs using pre-selected kernels—Linear and SquaredExponential—across three datasets. The GPs used in this experiment are implemented using the same machinery as Algorithm 1, except that the kernel structure is fixed. This comparison demonstrates that our method can exhibit characteristics of different kernels.
To take a closer look at how our method combines properties of different kernels, Figure 2 plots the kernel behavior of different particles after running Algorithm 1 on a linearly separable dataset. It shows that our method can discover different kernel structures with similar performance; thus, the method remains robust when incorporating future unseen data points.
[Figure 2: Kernel behaviors of different particles after running Algorithm 1 on a linearly separable dataset.]
Furthermore, to evaluate our method's performance under pattern shifts and to simulate the online setting described in RQ2, we initially run our method on a portion of the dataset and subsequently learn the remaining data in another batch. Figure 3 shows that our method is able to adjust kernel structures to adapt to the pattern shift in an online setting.
In summary, experiments on the toy datasets show that our method can discover various kernel structures, as well as learn and adapt to pattern shifts in the online setting.
[Figure 3: Adjustment of kernel structures across batches in the online setting.]
4.2 Real-World Datasets
To test our method in practice, we select three real-world datasets from different domains. Some of the datasets [8, 31] were used in previous studies [17]. The details of the datasets can be found in the Appendix.
We use two GPs with pre-determined kernels, Random Forest, and three other classification methods to evaluate RQ1. The two GPs are set up in the same manner as in Section 4.1. We use a Julia implementation of Random Forest [27]. To further test the effectiveness of our method, we select three methods provided by Scikit-Learn [23]: the Passive Aggressive Classifier (PAC) [5], Stochastic Gradient Descent (SGD) [32], and Naive Bayes.
In order to test the ability of online learning for RQ2, we adopt a setting similar to Section 4.1 but make it more extreme and biased: the first batch of learning includes only one class of the data, and the remaining batches incorporate the data of the other class.
Results are shown in Table 1. The accuracy of each offline task is calculated in the standard way. For each online task, the average accuracy is calculated as $\frac{1}{B} \sum_{b=1}^{B} a_b$, where $B$ is the total number of batches and $a_b$ is the accuracy after the $b$-th batch of learning. In the offline setting, our method outperforms all other methods, which shows that it achieves the goal of reducing the required prior knowledge via automatic kernel selection. In the online setting, our method achieves the highest accuracy on two out of three datasets, demonstrating that it is able to adapt kernel structures in an incremental manner.
In summary, experiments on the real-world datasets show that our method is able to seamlessly integrate features of different kernels and accurately adjust itself to pattern shifts within the dataset.
Table 1: Classification accuracy on the real-world datasets in the offline and online settings.

| Method | Ionosphere (Offline) | Musk (Offline) | Heartdisease (Offline) | Ionosphere (Online) | Musk (Online) | Heartdisease (Online) |
|---|---|---|---|---|---|---|
| Our Method | 90.7% | 86.3% | 89.9% | 81.1% | 61.7% | 72.0% |
| Linear | 87.2% | 61.7% | 82.3% | 74.4% | 54.9% | 66.7% |
| SquaredExponential | 84.3% | 65.4% | 84.0% | 73.2% | 58.9% | 70.7% |
| Naive Bayes | 75.1% | 75.3% | 84.0% | 72.9% | 58.8% | 74.6% |
| PAC | 80.8% | 64.3% | 80.6% | 65.6% | 56.4% | 66.6% |
| SGD | 81.5% | 78.5% | 76.4% | 61.8% | 55.8% | 71.4% |
| Random Forest | 87.2% | 76.4% | 81.5% | 73.3% | 52.1% | 62.2% |
5 Related Work
Automatic Selection of GP Kernels
Automatic selection of kernels [9, 26, 25] is a technique that automatically samples GP kernels, defined within a context-free grammar (CFG), to model various types of data. Earlier works have primarily focused on employing CFGs for regression. Specifically, [26] and [25] have advanced the inference algorithms for automatic kernel selection.
Our work builds on automatic kernel selection for regression, namely AutoGP [25], and extends it to non-time-series, non-scalar inputs and to classification. Given the challenge of non-Gaussian posteriors in classification scenarios, we adapt AutoGP to approximate and predict with such posteriors effectively.
Incremental Learning with GPs
There are many previous works on using GPs to build classification models [17, 30, 2]. The work in [17] presents an ensemble learning method that learns an ensemble of GPs and handles incremental data by applying kernel dictionaries that contain pre-selected kernel structures. Compared with those methods, our method follows the methodology of Bayesian inference to approximate the posterior distribution, thus requiring less prior knowledge when designing the learning algorithm. By sampling from a CFG-defined space, our method provides more flexibility to build GP-based classification models. However, our method is typically more computationally expensive than non-Bayesian methods.
Sequential Monte Carlo Learning
SMC learning has a broad field of applications, including graphical models [22], finance [6], and robotics [16]. Despite its extensive application in time-related domains, our work seeks to expand the utility of SMC to non-time-related problems, such as classification tasks. The connection is established via a broadly applied technique called data tempering, i.e., organizing the dataset as a sequence of data points or a sequence of batches. However, data tempering is usually more computationally expensive, especially in our context of GP-based learning, where GP computation is itself expensive.
6 Conclusion
In this paper, we propose a new method for Gaussian Process classification via sequential Monte Carlo. We extend the problem formulation of structure discovery for regression to the binary-classification problem. Based on a recently proposed method of sequential Monte Carlo for time-series learning, we develop an algorithm for adaptive and incremental learning of both kernel structures and parameters. In particular, our algorithm is able to handle the non-Gaussian posterior distributions that arise in Gaussian Process classification. The experiments show that our method can discover different kernel structures for different datasets and outperform various classification methods on real-world datasets.
References
- [1] JM Bernardo, MJ Bayarri, JO Berger, AP Dawid, D Heckerman, AFM Smith, M West, P Del Moral, A Doucet and A Jasra “Sequential monte carlo for bayesian computation” In Bayesian Stat 8, 2011, pp. 1–34
- [2] Thang D Bui, Cuong Nguyen and Richard E Turner “Streaming sparse Gaussian process approximations” In Advances in Neural Information Processing Systems 30, 2017
- [3] Tianqi Chen and Carlos Guestrin “XGBoost: A Scalable Tree Boosting System”, KDD ’16 New York, NY, USA: Association for Computing Machinery, 2016, pp. 785–794 DOI: 10.1145/2939672.2939785
- [4] Nicolas Chopin “A sequential particle filter method for static models” In Biometrika 89.3 Oxford University Press, 2002, pp. 539–552
- [5] Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, Yoram Singer and Manfred K Warmuth “Online passive-aggressive algorithms.” In Journal of Machine Learning Research 7.3, 2006
- [6] Chenguang Dai, Jeremy Heng, Pierre E Jacob and Nick Whiteley “An invitation to sequential Monte Carlo samplers” In Journal of the American Statistical Association 117.539 Taylor & Francis, 2022, pp. 1587–1600
- [7] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li and Li Fei-Fei “Imagenet: A large-scale hierarchical image database” In 2009 IEEE conference on computer vision and pattern recognition, 2009, pp. 248–255 IEEE
- [8] Thomas Dietterich, Ajay Jain, Richard Lathrop and Tomas Lozano-Perez “A comparison of dynamic reposing and tangent distance for drug activity prediction” In Advances in neural information processing systems 6, 1993
- [9] David Duvenaud, James Lloyd, Roger Grosse, Joshua Tenenbaum and Ghahramani Zoubin “Structure discovery in nonparametric regression through compositional kernel search” In International Conference on Machine Learning, 2013, pp. 1166–1174 PMLR
- [10] W Keith Hastings “Monte Carlo sampling methods using Markov chains and their applications” In Biometrika 57.1 Oxford University Press, 1970, pp. 97–109
- [11] Marti A. Hearst, Susan T Dumais, Edgar Osuna, John Platt and Bernhard Scholkopf “Support vector machines” In IEEE Intelligent Systems and their applications 13.4 IEEE, 1998, pp. 18–28
- [12] Taewon Jeong and Heeyoung Kim “Ood-maml: Meta-learning for few-shot out-of-distribution detection and classification” In Advances in Neural Information Processing Systems 33, 2020, pp. 3907–3916
- [13] Bryan Klimt and Yiming Yang “The enron corpus: A new dataset for email classification research” In European conference on machine learning, 2004, pp. 217–226 Springer
- [14] Alex Krizhevsky, Ilya Sutskever and Geoffrey E Hinton “Imagenet classification with deep convolutional neural networks” In Advances in neural information processing systems 25, 2012
- [15] Yann LeCun, Léon Bottou, Yoshua Bengio and Patrick Haffner “Gradient-based learning applied to document recognition” In Proceedings of the IEEE 86.11 IEEE, 1998, pp. 2278–2324
- [16] Zhiwei Liang, Xudong Ma and Xianzhong Dai “Information-theoretic approaches based on sequential Monte Carlo to collaborative distributed sensors for mobile robot localization” In Journal of Intelligent and Robotic Systems 52 Springer, 2008, pp. 157–174
- [17] Qin Lu, Georgios V Karanikolas and Georgios B Giannakis “Incremental ensemble Gaussian processes” In IEEE Transactions on Pattern Analysis and Machine Intelligence 45.2 IEEE, 2022, pp. 1876–1893
- [18] Markelle Kelly and Kolby Nottingham “The UCI Machine Learning Repository” URL: https://archive.ics.uci.edu
- [19] Nicholas Metropolis, Arianna W Rosenbluth, Marshall N Rosenbluth, Augusta H Teller and Edward Teller “Equation of state calculations by fast computing machines” In The journal of chemical physics 21.6 American Institute of Physics, 1953, pp. 1087–1092
- [20] Radford M. Neal “MCMC Using Hamiltonian Dynamics” In Handbook of Markov Chain Monte Carlo Chapman & Hall/CRC, 2010
- [21] Kirill Neklyudov, Max Welling, Evgenii Egorov and Dmitry Vetrov “Involutive MCMC: a Unifying Framework”, ICML’20, 2020, pp. 7273–7282 URL: https://dl.acm.org/doi/10.5555/3524938.3525612
- [22] Brooks Paige and Frank Wood “Inference networks for sequential Monte Carlo in graphical models” In International Conference on Machine Learning, 2016, pp. 3040–3049 PMLR
- [23] Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss and Vincent Dubourg “Scikit-learn: Machine learning in Python” In the Journal of machine Learning research 12 JMLR. org, 2011, pp. 2825–2830
- [24] Matthias Reif, Faisal Shafait and Andreas Dengel “Meta-learning for evolutionary parameter optimization of classifiers” In Machine learning 87 Springer, 2012, pp. 357–380
- [25] Feras Saad, Brian Patton, Matthew Douglas Hoffman, Rif A Saurous and Vikash Mansinghka “Sequential Monte Carlo learning for time series structure discovery” In International Conference on Machine Learning, 2023, pp. 29473–29489 PMLR
- [26] Feras A. Saad, Marco F. Cusumano-Towner, Ulrich Schaechtle, Martin C. Rinard and Vikash K. Mansinghka “Bayesian synthesis of probabilistic programs for automatic data modeling” In Proc. ACM Program. Lang. 3.POPL New York, NY, USA: Association for Computing Machinery, 2019 DOI: 10.1145/3290350
- [27] Ben Sadeghi, Poom Chiarawongse, Kevin Squire, Daniel C. Jones, Andreas Noack, Cédric St-Jean, Rik Huijzer, Roland Schätzle, Ian Butterworth, Yu-Fong Peng and Anthony Blaom “DecisionTree.jl - A Julia implementation of the CART Decision Tree and Random Forest algorithms” Zenodo, 2022 DOI: 10.5281/zenodo.7359268
- [28] Matthias Seeger “Gaussian processes for machine learning” In International journal of neural systems 14.02 World Scientific, 2004, pp. 69–106
- [29] Xinyue Shao, Hongzhi Wang, Xiao Zhu, Feng Xiong, Tianyu Mu and Yan Zhang “EFFECT: Explainable framework for meta-learning in automatic classification algorithm selection” In Information Sciences 622, 2023, pp. 211–234 DOI: https://doi.org/10.1016/j.ins.2022.11.144
- [30] Yanning Shen, Tianyi Chen and Georgios B Giannakis “Random feature-based online multi-kernel learning in environments with unknown dynamics” In Journal of Machine Learning Research 20.22, 2019, pp. 1–36
- [31] Vincent G Sigillito, Simon P Wing, Larrie V Hutton and Kile B Baker “Classification of radar returns from the ionosphere using neural networks” In Johns Hopkins APL Technical Digest 10.3, 1989, pp. 262–266
- [32] Bianca Zadrozny and Charles Elkan “Transforming classifier scores into accurate multiclass probability estimates” In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, 2002, pp. 694–699