Learning with Asymmetric Kernels: Least Squares and Feature Interpretation
Abstract
Asymmetric kernels naturally exist in real life, e.g., for conditional probability and directed graphs. However, most existing kernel-based learning methods require kernels to be symmetric, which prevents the use of asymmetric kernels. This paper addresses asymmetric kernel-based learning in the framework of the least squares support vector machine, named AsK-LS, resulting in the first classification method that can utilize asymmetric kernels directly. We will show that AsK-LS can learn with asymmetric features, namely source and target features, while the kernel trick remains applicable, i.e., the source and target features exist but are not necessarily known. Moreover, the computational cost of AsK-LS is as cheap as that of dealing with symmetric kernels. Experimental results on the Corel database, directed graphs, and the UCI database will show that, in cases where the asymmetric information is crucial, the proposed AsK-LS can learn with asymmetric kernels and performs much better than existing kernel methods that must symmetrize asymmetric kernels before use.
Index Terms:
Asymmetric kernels, Least squares support vector machine, Kullback-Leibler kernel, Directed graphs.

1 Introduction
Kernel-based learning [1, 2, 3] is an important scheme in machine learning and has been widely used in classification [4, 5], regression [6], clustering [7], and many other tasks. Traditionally, a kernel that can be used in kernel-based learning should satisfy Mercer's condition [8]. For Mercer's condition, there are two well-known requirements on the kernel $\mathcal{K}(x, z)$: for samples $\{x_i\}_{i=1}^{n} \subset \mathbb{R}^d$, where $d$ and $n$ are the dimension and the number of data, the kernel matrix $K$ with $K_{ij} = \mathcal{K}(x_i, x_j)$ should be i) symmetric and ii) positive semi-definite (PSD). When the latter condition is relaxed, the flexibility is enhanced and the resulting methods are called indefinite learning, for which some interesting results can be found in [9, 10, 11, 12, 13].
However, discussion on relaxing the symmetry condition is rare. Many asymmetric similarities exist in real life. For example, in directed graphs, the adjacency matrix is certainly asymmetric and thus the connection similarity is asymmetric, i.e., $\mathcal{K}(x_i, x_j) \neq \mathcal{K}(x_j, x_i)$: the path from the $i$-th node to the $j$-th node is not the same as that from the $j$-th node to the $i$-th node. Conditional probability, which has been widely used to measure directional similarity [14], is also asymmetric: $p(x \mid z) \neq p(z \mid x)$ in general. Such asymmetric measurements cannot be used directly in current kernel-based learning. Let us consider the support vector machine (SVM, [4]):
$$\begin{aligned}
\min_{\alpha} \quad & \frac{1}{2} \alpha^\top H \alpha - \mathbf{1}_n^\top \alpha \\
\text{s.t.} \quad & y^\top \alpha = 0, \quad 0 \le \alpha_i \le C, \; i = 1, \ldots, n,
\end{aligned} \tag{1}$$
where $C$ is a pre-given constant, $\mathbf{1}_n$ is an $n$-dimensional vector of all ones, $H_{ij} = y_i y_j \mathcal{K}(x_i, x_j)$, and $\alpha$ is the vector of dual variables. When $\mathcal{K}$ is asymmetric, at least in (1), we can directly use it. However, by noticing that
$$\alpha^\top H \alpha = \frac{1}{2}\, \alpha^\top \left(H + H^\top\right) \alpha,$$
one may find that only the symmetric part of an asymmetric kernel is learned by directly using it in the SVM.
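As a minimal numerical check of this identity (a toy illustration with random data, not part of the original experiments), one can verify that the quadratic form in the SVM dual cannot distinguish an asymmetric matrix from its symmetric part:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
H = rng.normal(size=(n, n))          # an asymmetric "kernel-like" matrix
H_sym = 0.5 * (H + H.T)              # its symmetric part
alpha = rng.normal(size=n)           # an arbitrary dual vector

# The quadratic form only sees the symmetric part of H:
print(np.allclose(alpha @ H @ alpha, alpha @ H_sym @ alpha))  # True
```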
Another popular kernel-based learning framework is the least squares support vector machine (LS-SVM, [15, 2]). Its dual form is the following linear system,
$$\begin{bmatrix} 0 & y^\top \\ y & H + I_n/\gamma \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ \mathbf{1}_n \end{bmatrix},$$
where $I_n$ is the identity matrix and $b$, $\alpha$ are the bias and the dual variables, respectively, with $\gamma > 0$ a regularization parameter. An interesting point here is that using an asymmetric kernel in the LS-SVM does not reduce to its symmetrization, so asymmetric information can be learned. We can therefore develop asymmetric kernels in the LS-SVM framework in a straightforward way. The corresponding kernel trick, the feature interpretation, and the asymmetric information will be investigated in this paper. Notice that we do not claim that asymmetric kernels cannot be applied in the SVM, but doing so is not straightforward and requires further investigation. Similarly, for symmetric but indefinite kernels, the solving method in the LS-SVM framework remains simple [11], while the SVM requires delicate design in formulation, theory, and solving algorithms [16, 12].
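As a quick numerical illustration of this point (a toy sketch with random data, not the implementation used in the experiments), solving the LS-SVM linear system above with an asymmetric $H$ and with its symmetrized counterpart yields different solutions, confirming that the linear system retains the asymmetric information:

```python
import numpy as np

rng = np.random.default_rng(1)
n, gamma = 8, 10.0
y = rng.choice([-1.0, 1.0], size=n)
K = rng.uniform(size=(n, n))               # a toy asymmetric kernel matrix
H = np.outer(y, y) * K                     # H_ij = y_i y_j K(x_i, x_j)

def lssvm_dual(Hmat):
    """Solve [[0, y^T], [y, H + I/gamma]] [b; alpha] = [0; 1]."""
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = y, y
    A[1:, 1:] = Hmat + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    return np.linalg.solve(A, rhs)

sol_asym = lssvm_dual(H)
sol_sym = lssvm_dual(0.5 * (H + H.T))
print(np.allclose(sol_asym, sol_sym))      # False: the asymmetry matters here
```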

In this paper, a novel method called AsK-LS for learning with asymmetric kernels in the least squares framework will be established. The most important discussion is to investigate the kernel trick and the feature interpretation, i.e., how the asymmetric information can be extracted by AsK-LS. Generally, there are two features involved in the kernel trick. Borrowing the concept from directed graphs, which have two feature embeddings for source and target nodes [17, 18, 19], we call the features in AsK-LS the source feature and the target feature. The names distinguish the two features but do not mean that AsK-LS can only be used for directed graphs. For the singular value decomposition, asymmetric kernels can be introduced into the LS-SVM framework [20], where the two features are related to the columns and the rows of the matrix, respectively.
In the rest of this paper, we will first discuss asymmetric kernels and illustrate that there are two different features embedded in an asymmetric kernel; see Section 2 for details. Then in Section 3 we will formulate AsK-LS, discuss its feature interpretation, and design the solving algorithm. In principle, an asymmetric kernel contains more information than the symmetric one. Thus, the proposed AsK-LS will demonstrate advantages when the asymmetric information is crucial, as numerically evaluated in Section 4. Section 5 will end this paper with a brief conclusion.
2 Asymmetric kernels
In classical kernel-based learning, the kernel is symmetric, i.e., $\mathcal{K}(x, z) = \mathcal{K}(z, x)$, and hence the kernel matrix for training samples is also symmetric. But there could be many asymmetric kernels. For example, in image classification tasks, the Kullback-Leibler (KL, [21]) divergence can be used to measure the dissimilarity between two probability distributions. The directed graph is another example, where the similarity between two nodes is essentially asymmetric.
Intuitively, an asymmetric kernel contains more information than a symmetric one. For symmetric kernels, the kernel trick (there are additional conditions for the existence of the kernel trick; see, e.g., [8]) means that there is a feature mapping $\phi$ such that $\mathcal{K}(x, z) = \langle \phi(x), \phi(z)\rangle$ for two samples $x$ and $z$. It is then expected that there are more features for asymmetric kernels. Fig. 1 illustrates a simple case of asymmetric dissimilarity. In Fig. 1(b-d), we illustrate three methods which make the kernel matrix symmetric. For the source dissimilarity, one can extract a nonlinear feature mapping, denoted as $\phi_s$. Meanwhile, a target nonlinear feature mapping, denoted as $\phi_t$, which is generally different from $\phi_s$, can also be extracted. However, in existing kernel-based learning, only symmetric kernels are acceptable, and hence one has to use a symmetrized similarity, for example $\frac{1}{2}\left(\mathcal{K}(x, z) + \mathcal{K}(z, x)\right)$ or $\mathcal{K}(x, z)\mathcal{K}(z, x)$, which indicates that those symmetrization methods may lose the asymmetric information.
Besides KL divergence and directed graphs, there are other tasks where the asymmetric kernels may be superior. In kernel density estimation problems, asymmetric kernels performed better than symmetric ones when the underlying random variables were bounded [22, 23, 24]. In Gaussian process regression tasks, Pintea et al. argued that it was helpful to set an individual kernel parameter for each data center, which enabled each data center to learn a proper kernel parameter in its neighborhood and resulted in an asymmetric kernel matrix [25]. In federated learning tasks [26], an asymmetric neural tangent kernel was established to address the issue that the gradient of the global machine was not determined by local gradient directions directly.
Our aim in this paper is to propose a novel method that directly learns with asymmetric kernels and, correspondingly, with two feature mappings. For a long time, symmetrization has been the main way of dealing with asymmetric kernels. In an early paper [27], Tsuda made the asymmetric kernel matrix $K$ symmetric by multiplying it with its transpose, obtaining the new symmetric matrix $KK^\top$. Munoz et al. utilized a pick-out method to convert the asymmetric kernel into a symmetric one [28]. Moreno et al. studied the KL divergence kernel in the SVM on multimedia data [21]. They used the symmetrized divergence $D_{\mathrm{KL}}(p\,\|\,q) + D_{\mathrm{KL}}(q\,\|\,p)$ to satisfy Mercer's condition, but the asymmetric information disappeared. Koide and Yamashita proposed an asymmetric kernel method and applied it to Fisher's discriminant (AKFD) [29]. They claimed that an asymmetric kernel is generated by the inner product between two different feature mappings. In the AKFD, the decision function was assumed to be spanned by $\{\phi_s(x_i)\}$ and the input data were mapped by $\phi_t$. However, this assumption is very strict, and the situation where the decision function is spanned by $\{\phi_t(x_i)\}$ was not considered. Wu et al. proposed a hyper asymmetric kernel method to learn with asymmetric kernels between data from two different input spaces, such as a query space and a document space [30], while the asymmetric kernel degenerates to a symmetric one when the two spaces are identical. In summary, these works use symmetrization at the optimization level, which cancels the asymmetric information and is not what one expects from asymmetric kernel-based learning.
Interestingly, the matrix singular value decomposition (SVD) can be cast in the LS-SVM framework [20, 31]. The matrix to be decomposed can be asymmetric and even non-square, implying that the LS-SVM can tolerate asymmetric kernels. From the viewpoint of the LS-SVM setting, the matrix SVD is related to two feature maps acting on the column vectors and the row vectors of the matrix, respectively. For directed graphs, it is also possible to use the adjacency matrix, without labels, to extract embeddings as the source and target features, respectively [17, 18, 19]. These works, although in an unsupervised setting, demonstrate that asymmetric kernels can be handled directly rather than through a symmetrization process.
3 Asymmetric kernels in the LS-SVM
3.1 PSD and indefinite kernels in the LS-SVM
Given training samples $\{(x_i, y_i)\}_{i=1}^{n}$ with $x_i \in \mathbb{R}^d$ and $y_i \in \{-1, +1\}$, a discriminant function is constructed to classify the input samples. For linearly inseparable problems, a non-linear feature mapping $\phi: \mathbb{R}^d \to \mathcal{H}$ is needed, where $\mathcal{H}$ is a high-dimensional space.
LS-SVMs with PSD kernels are formulated as the following optimization problem [15],
$$\begin{aligned}
\min_{w, b, e} \quad & \frac{1}{2} w^\top w + \frac{\gamma}{2} \sum_{i=1}^{n} e_i^2 \\
\text{s.t.} \quad & y_i\left(w^\top \phi(x_i) + b\right) = 1 - e_i, \quad i = 1, \ldots, n,
\end{aligned} \tag{2}$$
where the discriminant function is formulated as $f(x) = \mathrm{sign}\left(w^\top \phi(x) + b\right)$. When the kernel is generalized to a non-PSD one, the primal problem is as follows [11],
$$\begin{aligned}
\min_{w_+, w_-, b, e} \quad & \frac{1}{2}\left(w_+^\top w_+ - w_-^\top w_-\right) + \frac{\gamma}{2} \sum_{i=1}^{n} e_i^2 \\
\text{s.t.} \quad & y_i\left(w_+^\top \phi_+(x_i) - w_-^\top \phi_-(x_i) + b\right) = 1 - e_i, \quad i = 1, \ldots, n,
\end{aligned} \tag{3}$$
where $\phi_+: \mathbb{R}^d \to \mathcal{H}_+$ and $\phi_-: \mathbb{R}^d \to \mathcal{H}_-$ are two non-linear feature mappings, and both $\mathcal{H}_+$ and $\mathcal{H}_-$ are potentially high-dimensional spaces. The discriminant function here is formulated as $f(x) = \mathrm{sign}\left(w_+^\top \phi_+(x) - w_-^\top \phi_-(x) + b\right)$.
Although the feature interpretations of (2) and (3) are not the same, their dual problems share the same formulation, given below,
$$\begin{bmatrix} 0 & y^\top \\ y & H + I_n/\gamma \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ \mathbf{1}_n \end{bmatrix}, \qquad H_{ij} = y_i y_j \mathcal{K}(x_i, x_j). \tag{4}$$
The kernel trick, which gives the feature interpretation of (4), is different for different types of kernels. If the kernel is PSD, then
$$\mathcal{K}(x, z) = \langle \phi(x), \phi(z)\rangle.$$
If the kernel is non-PSD but admits a positive decomposition, for which a conceptual condition and a practical test can be found in [13], the kernel trick becomes
$$\mathcal{K}(x, z) = \mathcal{K}_+(x, z) - \mathcal{K}_-(x, z) = \langle \phi_+(x), \phi_+(z)\rangle - \langle \phi_-(x), \phi_-(z)\rangle,$$
where $\mathcal{K}_+$ and $\mathcal{K}_-$ are two PSD kernels.
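For intuition, such a positive decomposition of a symmetric indefinite kernel matrix can be computed numerically from its eigendecomposition. The sketch below is an assumed toy construction (a difference of two Gaussian kernels with made-up bandwidths), not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 3))

# A symmetric, generally indefinite kernel: difference of two Gaussian kernels.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2.0) - 0.8 * np.exp(-sq / 10.0)

w, V = np.linalg.eigh(K)                              # symmetric eigendecomposition
K_plus = (V[:, w > 0] * w[w > 0]) @ V[:, w > 0].T     # PSD part from positive eigenvalues
K_minus = -(V[:, w < 0] * w[w < 0]) @ V[:, w < 0].T   # PSD part from negative eigenvalues
print(np.allclose(K, K_plus - K_minus))               # True: K = K_+ - K_-
print(w.min() < 0)                                    # True if K is indeed indefinite here
```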
3.2 AsK-LS
When looking at the LS-SVM framework from the viewpoint of solving (4), there is no problem if $\mathcal{K}$ is asymmetric: the linear system is still well defined and a solution can be readily obtained. The key problem is to analyze what we really learn when $\mathcal{K}$ is asymmetric.
First, we define a generalized kernel trick that represents a kernel as an inner product of two different mappings $\phi_s$ and $\phi_t$.
Definition 1.
A kernel trick for a kernel $\mathcal{K}(x, z)$ can be defined by the inner product of two different feature mappings as follows:
$$\mathcal{K}(x, z) = \langle \phi_s(x), \phi_t(z)\rangle,$$
where $\phi_s: \mathbb{R}^{d_s} \to \mathcal{F}$, $\phi_t: \mathbb{R}^{d_t} \to \mathcal{F}$, and $\mathcal{F}$ is a high-dimensional, even infinite-dimensional, space.
Different from the classical kernel trick, the above definition allows different $\phi_s$ and $\phi_t$, for which even the input dimensions $d_s$ and $d_t$ could be different. In this paper, we consider the case $d_s = d_t$; then both $\mathcal{K}(x, z)$ and $\mathcal{K}(z, x)$ are well-defined and the kernel matrix for training data is square but asymmetric. Definition 1 is compatible with the existing symmetric kernels, including PSD and indefinite ones, as detailed below.
1. The symmetric and positive semi-definite kernel can be defined as follows:
$$\mathcal{K}(x, z) = \langle \phi_s(x), \phi_t(z)\rangle = \langle \phi(x), \phi(z)\rangle,$$
in the situation when the two feature mappings $\phi_s$ and $\phi_t$ are identical, i.e., $\phi_s = \phi_t = \phi$.
2. The symmetric and indefinite kernel can be defined as follows:
$$\mathcal{K}(x, z) = \langle \phi_s(x), \phi_t(z)\rangle = \langle \phi_+(x), \phi_+(z)\rangle - \langle \phi_-(x), \phi_-(z)\rangle,$$
in the situation when the two feature mappings $\phi_s$ and $\phi_t$ are not identical, e.g., $\phi_s = [\phi_+; \phi_-]$ and $\phi_t = [\phi_+; -\phi_-]$. Here $\mathcal{K}_+$ and $\mathcal{K}_-$ are two PSD kernels, $\phi_+$ and $\phi_-$ are the high-dimensional feature mappings corresponding to $\mathcal{K}_+$ and $\mathcal{K}_-$, respectively, and $\mathcal{H}_+$ and $\mathcal{H}_-$ are the two associated high-dimensional spaces.
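As a toy illustration of Definition 1 (our own construction with assumed random feature maps, not an example from the paper), the snippet below builds a kernel from two explicitly different feature mappings and checks that the resulting kernel matrix is not symmetric:

```python
import numpy as np

rng = np.random.default_rng(3)
d, D, n = 4, 16, 5
W_s, W_t = rng.normal(size=(D, d)), rng.normal(size=(D, d))  # two different projections

phi_s = lambda x: np.cos(W_s @ x)      # source feature map
phi_t = lambda x: np.cos(W_t @ x)      # target feature map (phi_t != phi_s)

X = rng.normal(size=(n, d))
K = np.array([[phi_s(xi) @ phi_t(xj) for xj in X] for xi in X])  # K(x, z) = <phi_s(x), phi_t(z)>

print(np.allclose(K, K.T))             # False: the kernel matrix is asymmetric in general
```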
The kernel trick associated with an asymmetric kernel contains two different feature mappings. Using the concept from directed graphs, we call them the source and the target features, respectively. Then for each sample, e.g., a node in a directed graph, we can extract two features from different views and classify the sample in the framework of the least squares support vector machine; the resulting method is hence called AsK-LS. AsK-LS in the primal space takes the following form,
$$\begin{aligned}
\min_{\omega, \nu, b_1, b_2, e, h} \quad & \omega^\top \nu + \gamma \sum_{i=1}^{n} e_i h_i \\
\text{s.t.} \quad & y_i\left(\omega^\top \phi_s(x_i) + b_1\right) = 1 - e_i, \quad i = 1, \ldots, n, \\
& y_i\left(\nu^\top \phi_t(x_i) + b_2\right) = 1 - h_i, \quad i = 1, \ldots, n,
\end{aligned} \tag{5}$$
where $e_i$, $h_i$ are the errors of the two views and $\gamma > 0$ is the regularization coefficient of the misclassification errors. In AsK-LS (5), each sample plays two different roles, which can have different meanings in different tasks. In matrix decomposition, for instance, the two features could be given as column and row vectors, which leads to an asymmetric kernel for unsupervised learning [20].
Now let us investigate the dual problem of (5), for which the kernel trick for an asymmetric kernel is crucial.
Proposition 1.
Let $(b_1, b_2, \beta, \alpha)$ be the solution of the problem below, where $H_{ij} = y_i y_j \mathcal{K}(x_i, x_j)$ with an asymmetric kernel $\mathcal{K}$:
$$\begin{bmatrix} 0 & 0 & y^\top & 0 \\ 0 & 0 & 0 & y^\top \\ y & 0 & H + I_n/\gamma & 0 \\ 0 & y & 0 & H^\top + I_n/\gamma \end{bmatrix}
\begin{bmatrix} b_1 \\ b_2 \\ \beta \\ \alpha \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \\ \mathbf{1}_n \\ \mathbf{1}_n \end{bmatrix}. \tag{6}$$
Then the following statements hold:
1. $\omega^*$ and $\nu^*$ are spanned by $\{\phi_t(x_i)\}_{i=1}^{n}$ and $\{\phi_s(x_i)\}_{i=1}^{n}$, respectively, namely
$$\omega^* = \sum_{i=1}^{n} \beta_i y_i \phi_t(x_i), \qquad \nu^* = \sum_{i=1}^{n} \alpha_i y_i \phi_s(x_i),$$
where $(\omega^*, \nu^*, b_1, b_2, e^*, h^*)$ is a stationary point of the primal problem (5).
2. The primal problem (5) results in two discriminant functions $f_s$ and $f_t$, as follows:
$$\begin{aligned}
f_s(x) &= \mathrm{sign}\left(\mathcal{K}(x, X)(\beta \odot y) + b_1\right), \\
f_t(x) &= \mathrm{sign}\left(\mathcal{K}(X, x)^\top(\alpha \odot y) + b_2\right),
\end{aligned} \tag{7}$$
where $X = \{x_i\}_{i=1}^{n}$ is the training set, $\odot$ denotes the element-wise product,
$\mathcal{K}(x, X) = \left[\mathcal{K}(x, x_1), \ldots, \mathcal{K}(x, x_n)\right]$,
$\mathcal{K}(X, x) = \left[\mathcal{K}(x_1, x), \ldots, \mathcal{K}(x_n, x)\right]^\top$.
Proof.
The Lagrangian function of the primal problem (5) is formulated as follows:
$$\begin{aligned}
L(\Theta) = \; & \omega^\top \nu + \gamma \sum_{i=1}^{n} e_i h_i - \sum_{i=1}^{n} \alpha_i\left[y_i\left(\omega^\top \phi_s(x_i) + b_1\right) - 1 + e_i\right] \\
& - \sum_{i=1}^{n} \beta_i\left[y_i\left(\nu^\top \phi_t(x_i) + b_2\right) - 1 + h_i\right],
\end{aligned} \tag{8}$$
where $\Theta = \{\omega, \nu, b_1, b_2, e, h, \alpha, \beta\}$ is the parameter set and $\alpha$, $\beta$ are the Lagrange multipliers. Then the condition of stationary points requires the following equations:
$$\begin{aligned}
\frac{\partial L}{\partial \omega} = 0 &\;\Rightarrow\; \nu = \sum_{i=1}^{n} \alpha_i y_i \phi_s(x_i), &
\frac{\partial L}{\partial \nu} = 0 &\;\Rightarrow\; \omega = \sum_{i=1}^{n} \beta_i y_i \phi_t(x_i), \\
\frac{\partial L}{\partial b_1} = 0 &\;\Rightarrow\; \sum_{i=1}^{n} \alpha_i y_i = 0, &
\frac{\partial L}{\partial b_2} = 0 &\;\Rightarrow\; \sum_{i=1}^{n} \beta_i y_i = 0, \\
\frac{\partial L}{\partial e_i} = 0 &\;\Rightarrow\; \gamma h_i = \alpha_i, &
\frac{\partial L}{\partial h_i} = 0 &\;\Rightarrow\; \gamma e_i = \beta_i, \\
\frac{\partial L}{\partial \alpha_i} = 0 &\;\Rightarrow\; y_i\left(\omega^\top \phi_s(x_i) + b_1\right) = 1 - e_i, &
\frac{\partial L}{\partial \beta_i} = 0 &\;\Rightarrow\; y_i\left(\nu^\top \phi_t(x_i) + b_2\right) = 1 - h_i.
\end{aligned} \tag{9}$$
Substituting the first six conditions into the last two, the latter can be converted into the following equations,
$$\begin{aligned}
y_i\Big(\sum_{j=1}^{n} \beta_j y_j \langle \phi_t(x_j), \phi_s(x_i)\rangle + b_1\Big) + \frac{\beta_i}{\gamma} &= 1, \quad i = 1, \ldots, n, \\
y_i\Big(\sum_{j=1}^{n} \alpha_j y_j \langle \phi_s(x_j), \phi_t(x_i)\rangle + b_2\Big) + \frac{\alpha_i}{\gamma} &= 1, \quad i = 1, \ldots, n.
\end{aligned} \tag{10}$$
Together with $\sum_i \alpha_i y_i = 0$ and $\sum_i \beta_i y_i = 0$, the equations (10) can be formulated as a linear system as follows,
$$\begin{bmatrix} 0 & 0 & y^\top & 0 \\ 0 & 0 & 0 & y^\top \\ y & 0 & \Omega_s + I_n/\gamma & 0 \\ 0 & y & 0 & \Omega_t + I_n/\gamma \end{bmatrix}
\begin{bmatrix} b_1 \\ b_2 \\ \beta \\ \alpha \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \\ \mathbf{1}_n \\ \mathbf{1}_n \end{bmatrix}, \tag{11}$$
where $(\Omega_s)_{ij} = y_i y_j \langle \phi_t(x_j), \phi_s(x_i)\rangle$ and $(\Omega_t)_{ij} = y_i y_j \langle \phi_s(x_j), \phi_t(x_i)\rangle$.
According to Definition 1, an asymmetric kernel is defined as follows:
$$\mathcal{K}(x_i, x_j) = \langle \phi_s(x_i), \phi_t(x_j)\rangle,$$
so that $(\Omega_s)_{ij} = y_i y_j \mathcal{K}(x_i, x_j)$ and $(\Omega_t)_{ij} = y_i y_j \mathcal{K}(x_j, x_i)$. Then, the linear system (11) can be reformulated exactly as (6), where $H_{ij} = y_i y_j \mathcal{K}(x_i, x_j)$ with the given asymmetric kernel $\mathcal{K}$.
Suppose $(b_1, b_2, \beta, \alpha)$ is the solution of (6). According to the partial derivative equations (9), a stationary point $(\omega^*, \nu^*, b_1, b_2, e^*, h^*)$ of the primal problem (5) can be formulated as below,
$$\omega^* = \sum_{i=1}^{n} \beta_i y_i \phi_t(x_i), \quad \nu^* = \sum_{i=1}^{n} \alpha_i y_i \phi_s(x_i), \quad e_i^* = \frac{\beta_i}{\gamma}, \quad h_i^* = \frac{\alpha_i}{\gamma}.$$
Since a stationary point of the primal problem (5) is obtained, two functions, which classify samples from the source and the target points of view respectively, can be formulated as follows:
$$f_s(x) = \mathrm{sign}\left(\langle \omega^*, \phi_s(x)\rangle + b_1\right) = \mathrm{sign}\Big(\sum_{j=1}^{n} \beta_j y_j \mathcal{K}(x, x_j) + b_1\Big),$$
$$f_t(x) = \mathrm{sign}\left(\langle \nu^*, \phi_t(x)\rangle + b_2\right) = \mathrm{sign}\Big(\sum_{j=1}^{n} \alpha_j y_j \mathcal{K}(x_j, x) + b_2\Big),$$
which are exactly the two discriminant functions in (7).
∎
The primal-dual relationship between (5) and (6) for asymmetric kernels is characterized by Proposition 1, which also covers symmetric kernels. In that case, both the primal and dual formulations reduce to the classical LS-SVM, as shown below.
Proposition 2.
If the kernel $\mathcal{K}$ is symmetric, i.e., $\mathcal{K}(x_i, x_j) = \mathcal{K}(x_j, x_i)$, then the solution of (6) satisfies $b_1 = b_2$ and $\alpha = \beta$, and AsK-LS reduces to the classical LS-SVM: the two discriminant functions in (7) coincide with the LS-SVM discriminant function obtained from (4).
Proof.
According to Definition 1, symmetric kernels can also be handled in the asymmetric kernel framework. Thus, kernels in AsK-LS can also be positive semi-definite or even indefinite.
A solution to the primal problem (5) is given by the linear system (6), which, by exchanging the block rows and block columns associated with $(b_1, \beta)$ and $(b_2, \alpha)$, can be reformulated as follows,
$$\begin{bmatrix} 0 & 0 & y^\top & 0 \\ 0 & 0 & 0 & y^\top \\ y & 0 & H^\top + I_n/\gamma & 0 \\ 0 & y & 0 & H + I_n/\gamma \end{bmatrix}
\begin{bmatrix} b_2 \\ b_1 \\ \alpha \\ \beta \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \\ \mathbf{1}_n \\ \mathbf{1}_n \end{bmatrix}.$$
Feeding a symmetric kernel, $H$ is then a symmetric matrix, i.e., $H = H^\top$, so the reformulated system has the same coefficient matrix and right-hand side as (6). The following equation thus holds,
$$\begin{bmatrix} 0 & 0 & y^\top & 0 \\ 0 & 0 & 0 & y^\top \\ y & 0 & H + I_n/\gamma & 0 \\ 0 & y & 0 & H + I_n/\gamma \end{bmatrix}
\begin{bmatrix} b_1 \\ b_2 \\ \beta \\ \alpha \end{bmatrix}
= \begin{bmatrix} 0 & 0 & y^\top & 0 \\ 0 & 0 & 0 & y^\top \\ y & 0 & H + I_n/\gamma & 0 \\ 0 & y & 0 & H + I_n/\gamma \end{bmatrix}
\begin{bmatrix} b_2 \\ b_1 \\ \alpha \\ \beta \end{bmatrix}.$$
The above equation can be simplified by moving the right term to the left: when the coefficient matrix is non-singular, it follows that $b_1 = b_2$ and $\alpha = \beta$, so (6) reduces to the classical LS-SVM dual (4) and $f_s = f_t$ recovers the LS-SVM discriminant function. ∎
Symmetric and asymmetric kernels thus lead to a unified linear system. When the data size is not very large, (6) is easy to solve. For large-scale problems, one can consider the fixed-size method [32, 33] or the Nyström approximation [34]; for the latter, necessary modifications are required. In the early work by Schmidt [35], an asymmetric kernel has a pair of adjoint eigenfunctions corresponding to each eigenvalue, and thus the modified Nyström approximation for asymmetric kernels leads to a standard singular value decomposition. When the kernel is asymmetric, we simultaneously obtain two functions $f_s$ and $f_t$. To fully use the asymmetric information, we need to merge them; averaging is a simple way, and other ensemble methods such as stacked generalization [36] can also be used. Overall, the AsK-LS algorithm consists of computing the asymmetric kernel matrix, solving the linear system (6), and combining the two discriminant functions (7).
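A minimal sketch of this procedure is given below, assuming the dual system and discriminant functions take the forms (6)-(7) reconstructed above; the function names, the decoupled solves, and the simple averaging rule are illustrative choices, not the authors' reference implementation.

```python
import numpy as np

def ask_ls_fit(K, y, gamma=1.0):
    """Sketch of AsK-LS training: solve the dual linear system (6).

    K : (n, n) asymmetric kernel matrix with K[i, j] = K(x_i, x_j).
    y : (n,) labels in {-1, +1}.
    Under the reconstruction above, (6) decouples into two LS-SVM-type
    systems, one built on H and one on H^T.
    """
    n = len(y)
    H = np.outer(y, y) * K

    def solve(Hmat):
        A = np.zeros((n + 1, n + 1))
        A[0, 1:], A[1:, 0] = y, y
        A[1:, 1:] = Hmat + np.eye(n) / gamma
        sol = np.linalg.solve(A, np.concatenate(([0.0], np.ones(n))))
        return sol[0], sol[1:]              # bias, dual variables

    b1, beta = solve(H)                     # source-view classifier f_s
    b2, alpha = solve(H.T)                  # target-view classifier f_t
    return b1, beta, b2, alpha

def ask_ls_predict(K_test_rows, K_test_cols, y, model):
    """Average the two discriminant functions (7) and take the sign.

    K_test_rows[i, j] = K(x_test_i, x_train_j)  (test sample as first argument).
    K_test_cols[i, j] = K(x_train_j, x_test_i)  (test sample as second argument).
    """
    b1, beta, b2, alpha = model
    f_s = K_test_rows @ (beta * y) + b1
    f_t = K_test_cols @ (alpha * y) + b2
    return np.sign(0.5 * (f_s + f_t))       # simple averaging of the two views
```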
4 Numerical experiments
The aim of designing AsK-LS is to learn with asymmetric kernels. As discussed before, asymmetric kernels may come from asymmetric metrics and directed graphs. The following experiments are not meant to claim that asymmetric kernels are better than symmetric kernels, which is certainly not true in general since the choice of kernel is problem-dependent. Instead, we will show that when the asymmetric information is crucial, AsK-LS can largely improve upon the existing kernel learning methods. The experiments are implemented in MATLAB on a PC with an Intel i7-10700K CPU (3.8GHz) and 32GB of memory. All the reported results are averaged over repeated trials.
4.1 Kullback-Leibler kernel
Kullback-Leibler (KL) divergence measures how one probability distribution $p$ differs from another probability distribution $q$:
$$D_{\mathrm{KL}}(p \,\|\, q) = \int p(x) \log \frac{p(x)}{q(x)}\, \mathrm{d}x, \tag{13}$$
which is asymmetric. KL divergence has been successfully used in many fields, e.g., VAEs [37] and GANs [38, 39]. From the KL divergence, one can evaluate the similarity between two probability distributions by exponentiation as follows:
$$\mathcal{K}(x_i, x_j) = \exp\left(-\eta\, D_{\mathrm{KL}}(p_i \,\|\, p_j)\right), \tag{14}$$
where $\eta > 0$ is a hyperparameter controlling the scale of the kernel (14), $x_i$ denotes a sample, and $p_i$, $p_j$ are probability distributions estimated from the samples $x_i$ and $x_j$, respectively. Since the KL divergence is asymmetric, the kernel matrix is also asymmetric. Thus AsK-LS can be utilized to learn directly with the asymmetric KL kernel rather than its symmetrization [21].
We conduct image classification experiments on the Corel database [40]. The Corel database contains 80 concept groups, e.g., autumn, aviation, dog, and elephant, from which 10 classes are picked: beaches, bus, dinosaurs, elephants, flowers, foods, horses, monuments, snow mountains, and African people & villages. There are 100 images per class: 90 for training and 10 for testing. We follow the standard feature extraction method [21] to obtain, for each image, a sequence of 64-dimensional discrete cosine transform (DCT) feature vectors, where the sequence length is the number of feature vectors extracted from that image.
In the experiment, we use a Gaussian mixture model (GMM) with 256 diagonal Gaussian components to estimate the probability distribution of the DCT feature vector sequence of each image. Since the KL divergence between two GMMs cannot be calculated by the formulation (13) in closed form, we estimate it by the Monte Carlo method. The kernel value between two images is then given by the KL kernel (14) of the two estimated distributions.
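A hedged sketch of this step is given below; it assumes scikit-learn GaussianMixture models fitted per image (the hypothetical list `gmms` and array `dct_feats` are placeholders, not the paper's code) and estimates the asymmetric KL divergence in (13) by Monte Carlo sampling before exponentiating it as in (14):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def mc_kl(gmm_p, gmm_q, n_samples=10000):
    """Monte Carlo estimate of D_KL(p || q) for two fitted GaussianMixture models."""
    samples, _ = gmm_p.sample(n_samples)
    return np.mean(gmm_p.score_samples(samples) - gmm_q.score_samples(samples))

def kl_kernel_matrix(gmms, eta=0.1):
    """Asymmetric KL kernel (14): K[i, j] = exp(-eta * D_KL(p_i || p_j))."""
    n = len(gmms)
    K = np.ones((n, n))                   # diagonal stays 1 since D_KL(p || p) = 0
    for i in range(n):
        for j in range(n):
            if i != j:
                K[i, j] = np.exp(-eta * mc_kl(gmms[i], gmms[j]))
    return K

# Hypothetical usage: dct_feats[i] is the (m_i, 64) DCT feature sequence of image i.
# gmms = [GaussianMixture(n_components=256, covariance_type="diag").fit(f) for f in dct_feats]
# K = kl_kernel_matrix(gmms, eta=0.1)
```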
The corresponding asymmetric KL divergence has been widely used in learning tasks. However, because the KL divergence violates the symmetry requirement, it could not previously be used directly in kernel-based learning. With AsK-LS, we can now use it in classification tasks. As a comparison, the SVM and the LS-SVM are used with a symmetrized KL kernel [21]. The parameters, i.e., the kernel hyperparameter $\eta$ and the regularization parameter $\gamma$, are tuned by 10-fold cross-validation for all three models. AsK-LS outputs two classifiers, and here we simply average them.
Image classification here is a multi-class task, for which we utilize the one-vs-rest scheme; the average Micro-F1 and Macro-F1 scores are reported in Table I. AsK-LS achieves better performance, showing that learning with the asymmetric information can help.
TABLE I: Micro-F1 and Macro-F1 scores on the Corel database.

| Methods | SVM | LS-SVM | AsK-LS |
|---|---|---|---|
| Micro-F1 | 0.830±0 | 0.840±0 | 0.916±0.0048 |
| Macro-F1 | 0.822±0 | 0.832±0 | 0.914±0.0046 |
TABLE II: Statistics of the five directed graph datasets.

| Datasets | Cora | Citeseer | Pubmed | AM-photo | AM-computer |
|---|---|---|---|---|---|
| #Classes | 7 | 6 | 3 | 8 | 10 |
| #Nodes | 2078 | 3327 | 19717 | 7650 | 13752 |
| #Edges | 5429 | 4732 | 44338 | 143663 | 287209 |
4.2 Directed adjacency matrix
TABLE III: Micro-F1 and Macro-F1 scores for node classification on directed graphs.

| Datasets | F1 Score | Embedding SVM | Embedding LS-SVM | Graph SVM | Graph LS-SVM | Graph AsK-LS |
|---|---|---|---|---|---|---|
| Cora | Micro | 0.740±0.010 | 0.733±0.009 | 0.305±0.010 | 0.713±0.015 | 0.753±0.013 |
| Cora | Macro | 0.730±0.012 | 0.724±0.011 | 0.077±0.004 | 0.708±0.016 | 0.748±0.013 |
| Citeseer | Micro | 0.497±0.013 | 0.507±0.010 | 0.393±0.042 | 0.596±0.011 | 0.605±0.012 |
| Citeseer | Macro | 0.454±0.013 | 0.452±0.010 | 0.301±0.044 | 0.560±0.012 | 0.579±0.012 |
| Pubmed | Micro | 0.732±0.006 | 0.726±0.004 | 0.602±0.022 | 0.635±0.006 | 0.719±0.004 |
| Pubmed | Macro | 0.680±0.006 | 0.669±0.005 | 0.451±0.018 | 0.503±0.006 | 0.698±0.005 |
| AM-photo | Micro | 0.858±0.005 | 0.834±0.006 | 0.319±0.022 | 0.799±0.008 | 0.885±0.005 |
| AM-photo | Macro | 0.834±0.005 | 0.767±0.006 | 0.155±0.018 | 0.732±0.008 | 0.874±0.005 |
| AM-computer | Micro | 0.606±0.005 | 0.609±0.004 | 0.400±0.005 | 0.619±0.014 | 0.868±0.002 |
| AM-computer | Macro | 0.429±0.004 | 0.454±0.003 | 0.107±0.003 | 0.443±0.016 | 0.846±0.005 |
Node classification with an asymmetric adjacency matrix is a task that requires learning from asymmetric metrics. In this task, the nodes of a directed graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ with node set $\mathcal{V}$ and edge set $\mathcal{E}$ are classified. The adjacency matrix $A$, which encodes the connections among the nodes, is defined as follows,
$$A_{ij} = \begin{cases} 1, & (v_i, v_j) \in \mathcal{E}, \\ 0, & \text{otherwise}, \end{cases}$$
where $A_{ij} = 1$ means that there is a link pointing to node $v_j$ from node $v_i$. This model has wide applications and here we consider five directed graphs, namely Cora, Citeseer, and Pubmed [41], as well as AM-photo and AM-computer [42]. Details of the datasets can be found in Table II. For the first three widely used graphs, the nodes and edges represent documents and citations, respectively. For the latter two, the nodes represent goods and an edge means that the two goods are frequently bought together. Node classification is originally a multi-class task, for which we utilize the one-vs-rest scheme and focus on Micro-F1 and Macro-F1 scores.
A directed graph is fully characterized by its adjacency matrix. However, in existing kernel methods, one cannot directly use the adjacency matrix as a kernel. Therefore, the current mainstream is to first do feature embedding [17, 18, 19] to extract the asymmetric information and then do classification based on the extracted features. In the experiment, For the embedding, we use a SOTA embedding method NERD [19], which utilizes random walk to extract node embeddings and as source feature and target feature of each node on a directed graph. We combine them as a unified feature which then defines a symmetric kernel that can be used in classical kernel-based learning methods, e.g., the SVM and the LS-SVM, and results are reported in the third and the fourth columns in Table III.
Since the asymmetric information can be extracted by the embedding method, its performance is much better than that of existing kernel methods applied to the symmetrized adjacency matrix, whose Micro-/Macro-F1 scores are listed in the 5th and 6th columns of Table III.
With AsK-LS, we can now directly learn with the adjacency matrix without feature embedding. Before sending the adjacency matrix to AsK-LS, we pre-process it by its in-degree (the same pre-processing is also applied for the SVM and the LS-SVM). The Micro- and Macro-F1 scores of AsK-LS are reported in the last column of Table III. The performance is much better than that of the SVM and the LS-SVM with the symmetrized kernel, indicating that the asymmetric information is helpful. Compared with the SOTA embedding method, AsK-LS is better or at least comparable, showing the effectiveness of AsK-LS in extracting the asymmetric information.
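One plausible reading of this in-degree pre-processing (an assumption on our side, since the exact formula is not spelled out here) is to divide each column of the adjacency matrix by the in-degree of the corresponding node, which keeps the matrix asymmetric:

```python
import numpy as np

def indegree_normalize(A, eps=1e-12):
    """One possible in-degree pre-processing of a directed adjacency matrix.

    Assumed convention: A[i, j] = 1 if there is an edge from node i to node j.
    Each column j is divided by the in-degree of node j (column sum),
    so the result remains an asymmetric kernel matrix for AsK-LS.
    """
    in_deg = A.sum(axis=0)                      # column sums = in-degrees
    return A / np.maximum(in_deg, eps)          # columns with in-degree 0 stay all-zero

# Toy directed graph on 3 nodes: 0 -> 1, 0 -> 2, 2 -> 1
A = np.array([[0, 1, 1],
              [0, 0, 0],
              [0, 1, 0]], dtype=float)
K = indegree_normalize(A)                       # asymmetric kernel matrix
print(K)
```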
4.3 Symmetric or asymmetric kernels
We have shown that the proposed AsK-LS can learn with asymmetric kernels. Since asymmetric kernels are more general than symmetric ones, learning with asymmetric kernels promises improvements, provided there is an efficient way to obtain a suitable kernel. In this paper, kernels are pre-given, so the performance is determined by the choice of kernel. In the previous sections, the asymmetric information, i.e., measuring the difference by the KL divergence or by directed connections, is important, and thus the corresponding asymmetric kernels lead to good performance. Otherwise, a specific asymmetric kernel is not necessarily better than a symmetric one, e.g., the RBF kernel.
We conduct classification experiments on several datasets from the UCI database [43], where 60% of the data are randomly picked for training and the rest for testing. Two asymmetric kernels, called the SNE kernel and the T kernel, are considered here. They have been used for dimension reduction [14] but not for classification, since previously there was no classification method that could learn with asymmetric kernels directly. The formulations are given below,
1. The SNE kernel with the parameter $\sigma$:
$$\mathcal{K}(x_i, x_j) = \frac{\exp\left(-\|x_i - x_j\|_2^2 / (2\sigma^2)\right)}{\sum_{x_k \in X} \exp\left(-\|x_i - x_k\|_2^2 / (2\sigma^2)\right)}.$$
2. The T kernel:
$$\mathcal{K}(x_i, x_j) = \frac{\left(1 + \|x_i - x_j\|_2^2\right)^{-1}}{\sum_{x_k \in X} \left(1 + \|x_i - x_k\|_2^2\right)^{-1}},$$
where $X$ stands for the training set.
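The following sketch computes the two kernel matrices as written above, assuming the normalization runs over the whole training set (our reading of the formulas); `X_rows` may hold training or test samples:

```python
import numpy as np

def sne_kernel(X_rows, X_train, sigma=1.0):
    """SNE-style asymmetric kernel: Gaussian affinities normalized over the training set."""
    sq = ((X_rows[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    G = np.exp(-sq / (2.0 * sigma ** 2))
    return G / G.sum(axis=1, keepdims=True)     # per-row normalization -> asymmetric

def t_kernel(X_rows, X_train):
    """T-style asymmetric kernel: Student-t affinities normalized over the training set."""
    sq = ((X_rows[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    G = 1.0 / (1.0 + sq)
    return G / G.sum(axis=1, keepdims=True)

# Usage sketch: K_train = sne_kernel(X_train, X_train); K_test = sne_kernel(X_test, X_train)
```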
TABLE IV: Classification accuracy on UCI datasets (LS-SVM with the RBF kernel vs. AsK-LS with the SNE and T kernels).

| Datasets | LS-SVM (RBF) | AsK-LS (SNE) | AsK-LS (T) |
|---|---|---|---|
| heart | 0.837±0.032 | 0.824±0.025 | 0.825±0.031 |
| sonar | 0.856±0.001 | 0.854±0.028 | 0.865±0.027 |
| monks1 | 0.791±0.012 | 0.790±0.014 | 0.778±0.012 |
| monks2 | 0.841±0.007 | 0.866±0.016 | 0.866±0.014 |
| monks3 | 0.936±0.003 | 0.936±0.004 | 0.901±0.004 |
| pima | 0.738±0.026 | 0.749±0.026 | 0.752±0.021 |
| australian | 0.862±0.015 | 0.854±0.019 | 0.859±0.032 |
| spambase | 0.925±0.007 | 0.908±0.007 | 0.922±0.018 |
The classification accuracy of AsK-LS with the two asymmetric kernels is reported in Table IV, together with the accuracy of the LS-SVM with the RBF kernel (all parameters are tuned by 10-fold cross-validation). Generally speaking, the best choice of kernel is problem-dependent and one cannot assert in advance which kernel is better. But at least the proposed AsK-LS makes it possible to use asymmetric kernels; with careful design or efficient learning, asymmetric kernels could lead to good performance.
5 Conclusion
In this paper, we investigate the least squares support vector machine with asymmetric kernels in both theoretical and algorithmic aspects. The proposed AsK-LS is the first classification model that can learn directly with asymmetric kernels. The primal and dual representations of AsK-LS are given, showing the feature interpretation that two different functions can be simultaneously learned from the source and the target views. In numerical experiments, when the asymmetric information is important, AsK-LS with asymmetric kernels significantly outperforms the SVM and the LS-SVM, which can only deal with symmetric kernels.
The most significant contribution of this paper is to make asymmetric kernels useful in kernel-based learning. In methodology, the least squares framework is not the only way to accommodate asymmetric kernels; models from other kernel-based methods, e.g., the support vector machine and logistic regression, are worth investigating. In theory, the function space associated with asymmetric kernels is interesting, since it goes beyond the reproducing kernel Hilbert space for PSD kernels and the reproducing kernel Kreĭn space for indefinite kernels. In application, one can design asymmetric kernels for different tasks, especially those that involve directional relationships, including but not limited to directed graphs, distribution distances [44], causality analysis [45, 46], and optimal transport [47, 48].
Acknowledgments
This work was supported by National Natural Science Foundation of China (No. 61977046), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102) and Shanghai Science and Technology Research Program (20JC1412700 and 19JC1420101). The research leading to these results has received funding from the European Research Council under the European Union’s Horizon 2020 research and innovation program / ERC Advanced Grant E-DUALITY (787960). This paper reflects only the authors’ views and the Union is not liable for any use that may be made of the contained information. This work was supported in part by Research Council KU Leuven: Optimization frameworks for deep kernel machines C14/18/068; Flemish Government: FWO projects: GOA4917N (Deep Restricted Kernel Machines: Methods and Foundations), PhD/Postdoc grant. This research received funding from the Flemish Government (AI Research Program).
References
- [1] B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT press, 2002.
- [2] J. A. K. Suykens, T. Van Gestel, J. De Brabanter, B. De Moor, and J. Vandewalle, Least Squares Support Vector Machines. World Scientific, 2002.
- [3] V. Vapnik, The Nature of Statistical Learning Theory. Springer Science & Business Media, 2013.
- [4] C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.
- [5] K. Soman, R. Loganathan, and V. Ajay, Machine Learning with SVM and other Kernel Methods. PHI Learning Pvt. Ltd., 2009.
- [6] H. Drucker, C. J. Burges, L. Kaufman, A. Smola, V. Vapnik et al., “Support vector regression machines,” Advances in Neural Information Processing Systems, vol. 9, pp. 155–161, 1997.
- [7] M. Girolami, “Mercer kernel-based clustering in feature space,” IEEE Transactions on Neural Networks, vol. 13, no. 3, pp. 780–784, 2002.
- [8] H. Jeffreys, B. Jeffreys, and B. Swirles, Methods of Mathematical Physics. Cambridge University Press, 1999.
- [9] C. S. Ong, X. Mary, S. Canu, and A. J. Smola, “Learning with non-positive kernels,” in Proceedings of the 21st International Conference on Machine Learning, 2004, p. 81.
- [10] B. Haasdonk, “Feature space interpretation of SVMs with indefinite kernels,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 4, pp. 482–492, 2005.
- [11] X. Huang, A. Maier, J. Hornegger, and J. A. K. Suykens, “Indefinite kernels in least squares support vector machines and principal component analysis,” Applied and Computational Harmonic Analysis, vol. 43, no. 1, pp. 162–172, 2017.
- [12] D. Oglic and T. Gärtner, “Scalable learning in reproducing kernel Kreĭn spaces,” in International Conference on Machine Learning. PMLR, 2019, pp. 4912–4921.
- [13] F. Liu, X. Huang, Y. Chen, and J. A. K. Suykens, “Fast learning in reproducing kernel Kreĭn spaces via signed measures,” in International Conference on Artificial Intelligence and Statistics. PMLR, 2021, pp. 388–396.
- [14] G. Hinton and S. T. Roweis, “Stochastic neighbor embedding,” in Advances in Neural Information Processing Systems, vol. 15. Citeseer, 2002, pp. 833–840.
- [15] J. A. K. Suykens and J. Vandewalle, “Least squares support vector machine classifiers,” Neural Processing Letters, vol. 9, no. 3, pp. 293–300, 1999.
- [16] G. Loosli, S. Canu, and C. S. Ong, “Learning SVM in Kreĭn spaces,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 6, pp. 1204–1216, 2015.
- [17] M. Ou, P. Cui, J. Pei, Z. Zhang, and W. Zhu, “Asymmetric transitivity preserving graph embedding,” in Proceedings of the 22nd ACM SIGKDD International Cconference on Knowledge Discovery and Data Mining, 2016, pp. 1105–1114.
- [18] C. Zhou, Y. Liu, X. Liu, Z. Liu, and J. Gao, “Scalable graph embedding for asymmetric proximity,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 31, no. 1, 2017.
- [19] M. Khosla, J. Leonhardt, W. Nejdl, and A. Anand, “Node representation learning for directed graphs,” in Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 2019, pp. 395–411.
- [20] J. A. K. Suykens, “SVD revisited: A new variational principle, compatible feature maps and nonlinear extensions,” Applied and Computational Harmonic Analysis, vol. 40, no. 3, pp. 600–609, 2016.
- [21] P. J. Moreno, P. Ho, and N. Vasconcelos, “A Kullback-Leibler divergence based kernel for SVM classification in multimedia applications.” in Advances in Neural Information Processing Systems, 2003, pp. 1385–1392.
- [22] M. Mackenzie and A. K. Tieu, “Asymmetric kernel regression,” IEEE Transactions on Neural Networks, vol. 15, no. 2, pp. 276–282, 2004.
- [23] C. Kuruwita, K. Kulasekera, and W. Padgett, “Density estimation using asymmetric kernels and bayes bandwidths with censored data,” Journal of Statistical Planning and Inference, vol. 140, no. 7, pp. 1765–1774, 2010.
- [24] H. L. Koul and W. Song, “Large sample results for varying kernel regression estimates,” Journal of Nonparametric Statistics, vol. 25, no. 4, pp. 829–853, 2013.
- [25] S. L. Pintea, J. C. van Gemert, and A. W. Smeulders, “Asymmetric kernel in Gaussian processes for learning target variance,” Pattern Recognition Letters, vol. 108, pp. 70–77, 2018.
- [26] B. Huang, X. Li, Z. Song, and X. Yang, “FL-NTK: A neural tangent kernel-based framework for federated learning analysis,” in International Conference on Machine Learning. PMLR, 2021, pp. 4423–4434.
- [27] K. Tsuda, “Support vector classifier with asymmetric kernel functions,” in European Symposium on Artificial Neural Networks (ESANN), 1999, pp. 183–188.
- [28] A. Munoz, I. M. de Diego, and J. M. Moguerza, “Support vector machine classifiers for asymmetric proximities,” in Artificial Neural Networks and Neural Information Processing—ICANN/ICONIP 2003. Springer, 2003, pp. 217–224.
- [29] N. Koide and Y. Yamashita, “Asymmetric kernel method and its application to Fisher’s discriminant,” in the 18th International Conference on Pattern Recognition, vol. 2, 2006, pp. 820–824.
- [30] W. Wu, J. Xu, H. Li, and S. Oyama, “Asymmetric kernel learning,” Tech. Rep. MSR-TR-2010-85, June 2010. [Online]. Available: https://www.microsoft.com/en-us/research/publication/asymmetric-kernel-learning/
- [31] J. A. K. Suykens, “Deep restricted kernel machines using conjugate feature duality,” Neural Computation, vol. 29, no. 8, pp. 2123–2163, 2017.
- [32] M. Espinoza, J. A. K. Suykens, and B. De Moor, “Fixed-size least squares support vector machines: A large scale application in electrical load forecasting,” Computational Management Science, vol. 3, no. 2, pp. 113–129, 2006.
- [33] K. De Brabanter, J. De Brabanter, J. A. K. Suykens, and B. De Moor, “Optimized fixed-size kernel models for large data sets,” Computational Statistics & Data Analysis, vol. 54, no. 6, pp. 1484–1504, 2010.
- [34] C. Williams and M. Seeger, “Using the Nyström method to speed up kernel machines,” in Proceedings of the 14th Annual Conference on Neural Information Processing Systems, 2001, pp. 682–688.
- [35] G. W. Stewart, “On the early history of the singular value decomposition,” SIAM Review, vol. 35, no. 4, pp. 551–566, 1993.
- [36] K. M. Ting and I. H. Witten, “Stacking bagged and dagged models,” in Proceedings of the 14th International Conference on Machine Learning, 1997, pp. 367–375.
- [37] D. P. Kingma and M. Welling, “Auto-encoding variational bayes,” in the 2nd International Conference on Learning Representations, 2014.
- [38] T. Karras, S. Laine, and T. Aila, “A style-based generator architecture for generative adversarial networks,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 4401–4410.
- [39] J. Gui, Z. Sun, Y. Wen, D. Tao, and J. Ye, “A review on generative adversarial networks: Algorithms, theory, and applications,” IEEE Transactions on Knowledge and Data Engineering, pp. 1–1, 2021.
- [40] J. Z. Wang, J. Li, and G. Wiederhold, “Simplicity: Semantics-sensitive integrated matching for picture libraries,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 9, pp. 947–963, 2001.
- [41] P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Galligher, and T. Eliassi-Rad, “Collective classification in network data,” AI Magazine, vol. 29, no. 3, pp. 93–93, 2008.
- [42] O. Shchur, M. Mumme, A. Bojchevski, and S. Günnemann, “Pitfalls of graph neural network evaluation,” arXiv preprint arXiv:1811.05868, 2018.
- [43] C. Blake and C. Merz, “UCI repository of machine learning databases,” 1998.
- [44] S. K. Zhou and R. Chellappa, “From sample similarity to ensemble similarity: Probabilistic distance measures in reproducing kernel Hilbert space,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 6, pp. 917–929, 2006.
- [45] K. Radinsky, S. Davidovich, and S. Markovitch, “Learning causality for news events prediction,” in Proceedings of the 21st international conference on World Wide Web, 2012, pp. 909–918.
- [46] H. Xu, M. Farajtabar, and H. Zha, “Learning Granger causality for Hawkes processes,” in International Conference on Machine Learning. PMLR, 2016, pp. 1717–1726.
- [47] N. Courty, R. Flamary, D. Tuia, and A. Rakotomamonjy, “Optimal transport for domain adaptation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 9, pp. 1853–1865, 2016.
- [48] B. Su and G. Hua, “Order-preserving optimal transport for distances between sequences,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 12, pp. 2961–2974, 2018.