On The Relative Error of Random Fourier Features for Preserving Kernel Distance
Abstract
The method of random Fourier features (RFF), proposed in a seminal paper by Rahimi and Recht (NIPS’07), is a powerful technique to find approximate low-dimensional representations of points in (high-dimensional) kernel space, for shift-invariant kernels. While RFF has been analyzed under various notions of error guarantee, the ability to preserve the kernel distance with relative error is less understood. We show that for a significant range of kernels, including the well-known Laplacian kernels, RFF cannot approximate the kernel distance with small relative error using low dimensions. We complement this by showing as long as the shift-invariant kernel is analytic, RFF with dimensions achieves -relative error for pairwise kernel distance of points, and the dimension bound is improved to for the specific application of kernel -means. Finally, going beyond RFF, we make the first step towards data-oblivious dimension-reduction for general shift-invariant kernels, and we obtain a similar dimension bound for Laplacian kernels. We also validate the dimension-error tradeoff of our methods on simulated datasets, and they demonstrate superior performance compared with other popular methods including random-projection and Nyström methods.
1 Introduction
We study the ability of the random Fourier features (RFF) method (Rahimi & Recht, 2007) to preserve the kernel distance with relative error. The kernel method (Schölkopf & Smola, 2002) is a systematic way to map the input data into a (possibly infinite-dimensional) feature space to introduce richer structures, such as non-linearity. In particular, for a set of data points , a kernel function implicitly defines a feature mapping to a feature space which is a Hilbert space, such that . Kernel methods have been successfully applied in classical machine learning (Boser et al., 1992; Schölkopf et al., 1998; Girolami, 2002), and it has recently been established that, in a certain sense, the behavior of neural networks may be modeled as a kernel (Jacot et al., 2018).
Despite their power and wide applicability, scalability has been an outstanding issue in applying kernel methods. Specifically, the representation of data points in the feature space is only implicit, and solving for the explicit representation, which is crucially required by many algorithms, takes at least time in the worst case. While for many problems such as kernel SVM it is possible to apply the so-called “kernel trick” to rewrite the objective in terms of , the explicit representation is still often preferred, since it is compatible with a larger range of solvers/algorithms, which allows for better efficiency.
In a seminal work (Rahimi & Recht, 2007), Rahimi and Recht addressed this issue by introducing the method of random Fourier features (see Section 2 for a detailed description) to compute an explicit low-dimensional mapping (for ) such that , for shift-invariant kernels (i.e., there exists such that ), which include the widely-used Gaussian, Cauchy and Laplacian kernels.
Towards understanding this fundamental method of RFF, a long line of research has focused on analyzing the tradeoff between the target dimension and the accuracy of approximating under certain error measures. This includes additive error (Rahimi & Recht, 2007; Sriperumbudur & Szabó, 2015; Sutherland & Schneider, 2015), spectral error (Avron et al., 2017; Choromanski et al., 2018; Zhang et al., 2019; Erdélyi et al., 2020; Ahle et al., 2020), and the generalization error of several learning tasks such as kernel SVM and kernel ridge regression (Avron et al., 2017; Sun et al., 2018; Li et al., 2021). A more comprehensive overview of the study of RFF can be found in a recent survey (Liu et al., 2021).
We focus on analyzing RFF with respect to the kernel distance. Here, the kernel distance of two data points is defined as their (Euclidean) distance in the feature space, i.e.,
While previous results on the additive error of (Rahimi & Recht, 2007; Sriperumbudur & Szabó, 2015; Sutherland & Schneider, 2015; Avron et al., 2017) readily imply an additive error guarantee for , the relative error guarantee is less understood. As far as we know, Chen & Phillips (2017) is the only previous work that gives a relative error bound for the kernel distance, but unfortunately only the Gaussian kernel is studied in that work, and whether the kernel distance for other shift-invariant kernels is preserved by RFF is still largely open.
In spirit, this multiplicative error guarantee of RFF, if it indeed exists, makes it a kernelized version of the Johnson-Lindenstrauss Lemma (Johnson & Lindenstrauss, 1984), which is one of the central results in dimension reduction. This guarantee is also very useful for downstream applications, since one can combine it directly with classical geometric algorithms such as k-means++ (Arthur & Vassilvitskii, 2007), locality sensitive hashing (Indyk & Motwani, 1998) and fast geometric matching algorithms (Raghvendra & Agarwal, 2020) to obtain very efficient algorithms for kernelized -means clustering, nearest neighbor search, matching and many more.
1.1 Our Contributions
Our main results are characterizations of the kernel functions on which RFF preserves the kernel distance with small relative error using target dimensions. Furthermore, we also explore how to obtain data-oblivious dimension-reduction for kernels that cannot be handled by RFF.
As mentioned, it has been shown that RFF with small target dimension preserves the kernel distance with small additive error for all shift-invariant kernels (Rahimi & Recht, 2007; Sriperumbudur & Szabó, 2015; Sutherland & Schneider, 2015). In addition, it has been shown in Chen & Phillips (2017) that RFF indeed preserves the relative error of the kernel distance for Gaussian kernels (which are shift-invariant). Hence, by analogy with the additive case and as informally claimed in Chen & Phillips (2017), one might be tempted to expect that RFF preserves the relative error for general shift-invariant kernels as well.
Lower Bounds.
Surprisingly, we show that this is not the case. In particular, we show that for a wide range of kernels, including the well-known Laplacian kernels, it requires unbounded target dimension for RFF to preserve the kernel distance with constant multiplicative error. We state the result for a Laplacian kernel in the following, and the full statement of the general conditions of kernels can be found in Theorem 4.1. In fact, what we show is a quantitatively stronger result, that if the input is -bounded, then preserving any constant multiplicative error requires target dimension. Here, a point is -bounded if and , i.e., the magnitude is (upper) bounded by and the resolution is (lower) bounded by .
Theorem 1.1 (Lower bound; see Remark 4.1).
For every and some feature mapping of a Laplacian kernel , if for every that are -bounded, the RFF mapping for with target dimension satisfies with constant probability, then . This holds even when .
Upper Bounds.
Complementing the lower bound, we show that RFF can indeed preserve the kernel distance within error using target dimensions with high probability, as long as the kernel function is shift-invariant and analytic, which includes Gaussian and Cauchy kernels. Our target dimension nearly matches (up to the degree of the polynomial in the parameters) what is achievable by the Johnson-Lindenstrauss transform (Johnson & Lindenstrauss, 1984), which is known to be tight (Larsen & Nelson, 2017). This upper bound also greatly generalizes the result of Chen & Phillips (2017), which only works for Gaussian kernels (see Appendix G for a detailed comparison).
Theorem 1.2 (Upper bound).
Let be a kernel function which is shift-invariant and analytic at the origin, with feature mapping for some feature space . For every , every , , if is an RFF mapping for with target dimension , then for every ,
The technical core of our analysis is a moment bound for RFF, which is derived via analytic techniques such as Taylor expansion and Cauchy’s integral formula for multivariate functions. The moment bound is slightly weaker than the moment bound for Gaussian variables, and this is the primary reason that we obtain a bound weaker than that of the Johnson-Lindenstrauss transform. Finally, several additional steps are required to fit this moment bound into Bernstein’s inequality, which implies the bound in Theorem 1.2.
Improved Dimension Bound for Kernel -Means.
We show that if we focus on a specific application of kernel -means, then it suffices to set the target dimension , instead of , to preserve the kernel -means clustering cost for every -partition. This follows from the probabilistic guarantee of RFF in Theorem 1.2 plus a generalization of the dimension-reduction result proved in a recent paper (Makarychev et al., 2019). Here, given a data set and a kernel function , denoting the feature mapping as , the kernel -means problem asks to find a -partition of , such that is minimized.
Theorem 1.3 (Dimension reduction for clustering; see Theorem 3.1).
For kernel -means problem whose kernel function is shift-invariant and analytic at the origin, for every data set , the RFF mapping with target dimension , with probability at least , preserves the clustering cost within error for every -partition simultaneously.
Applying RFF to speed up kernel -means has also been considered in Chitta et al. (2012), but their error bound is much weaker than ours (and theirs is not a generic dimension-reduction bound). Also, similar dimension-reduction bounds (i.e., independent of ) for kernel -means were obtained using Nyström methods (Musco & Musco, 2017; Wang et al., 2019), but their bound is which is worse than our ; furthermore, our RFF-based approach is unique in that it is data-oblivious, which enables great applicability in other relevant computational settings such as streaming and distributed computing.
Going beyond RFF.
Finally, even though we have proved that RFF cannot preserve the kernel distance for every shift-invariant kernel, this does not rule out the existence of other efficient data-oblivious dimension reduction methods for those kernels, particularly for the Laplacian kernel, which is the primary example in our lower bound. For instance, in the same paper where RFF was proposed, Rahimi and Recht (Rahimi & Recht, 2007) also considered an alternative embedding called “binning features” that can work for Laplacian kernels. Unfortunately, to achieve a relative error of , it requires a dimension that depends linearly on the magnitude/aspect-ratio of the dataset, which may be exponential in the input size. Follow-up works, such as (Backurs et al., 2019), suffer from similar issues.
We make the first successful attempt in this direction, and we show that Laplacian kernels do admit an efficient data-oblivious dimension reduction. Here, we use a setting similar to our lower bound (Theorem 1.1), where we focus on the -bounded case.
Theorem 1.4 (Oblivious dimension-reduction for Laplacian kernels, see Theorem F.1).
Let be a Laplacian kernel, and denote its feature mapping as . For every , every , every , there is a mapping , such that for every that are -bounded, it holds that
The time for evaluating is .
Our target dimension only depends on , which may be interpreted as the precision of the input. Hence, as an immediate corollary, for any -point dataset with precision , we have an embedding with target dimension , where the success probability is and the overall running time of embedding the points is .
Our proof relies on the fact that every metric space can be embedded isometrically into a squared metric space. We explicitly implement an approximate version of this embedding (Kahane, 1981), and eventually reduce our problem for Laplacian kernels to Gaussian kernels. After this reduction, we use the RFF for Gaussian kernels to obtain the final mapping. However, since the embedding into squared has very high dimension, to implement this whole idea efficiently we need to utilize the special structure of the embedding, combined with an application of space-bounded pseudorandom generators (PRGs) (Nisan, 1992).
Even though our algorithm utilizes a special property of Laplacian kernels and eventually still partially uses the RFF for Gaussian kernels, it is of conceptual importance. It opens up the direction of exploring general methods for Johnson-Lindenstrauss style dimension reduction for shift-invariant kernels. Furthermore, the lower bound suggests that a Johnson-Lindenstrauss style dimension reduction for general shift-invariant kernels has to be non-differentiable, which is a fundamental difference from RFF. This requirement of non-analyticity may seem counter-intuitive, but our construction of the mapping for Laplacian kernels indeed provides valuable insight into how such a non-analytic mapping behaves.
Experiments and Comparison to Other Methods.
Apart from RFF, the Nyström and random-projection methods are alternative popular methods for kernel dimension reduction. In Section 6, we conduct experiments to compare their empirical dimension-error tradeoffs with those of our methods on a simulated dataset. Since we focus on the error, we use “ideal” implementations of both methods that achieve the best accuracy, so the comparison only favors the two baselines – for Nyström, we use SVD on the kernel matrix, since Nyström methods can be viewed as fast and approximate low-rank approximations of the kernel matrix; for random-projection, we apply the Johnson-Lindenstrauss transform to the explicit representations of points in the feature space. We run two experiments to compare each of RFF (on a Gaussian kernel) and our new algorithm in Theorem 1.4 (on a Laplacian kernel) with the two baselines respectively. Our experiments indicate that the Nyström method is indeed incapable of preserving the kernel distance in relative error, and more interestingly, our methods perform the best among the three, even better than the Johnson-Lindenstrauss transform, which is optimal in the worst case.
1.2 Related Work
Variants of the vanilla RFF, particularly those that use information in the input data set and/or sample random features non-uniformly, have also been considered, including leverage score sampling random Fourier features (LSS-RFF) (Rudi et al., 2018; Liu et al., 2020; Erdélyi et al., 2020; Li et al., 2021), weighted random features (Rahimi & Recht, 2008; Avron et al., 2016; Chang et al., 2017; Dao et al., 2017), and kernel alignment (Shahrampour et al., 2018; Zhen et al., 2020).
The RFF-based methods usually work for shift-invariant kernels only. For general kernels, techniques that are based on low-rank approximation of the kernel matrix, notably the Nyström method (Williams & Seeger, 2000; Gittens & Mahoney, 2016; Musco & Musco, 2017; Oglic & Gärtner, 2017; Wang et al., 2019) and incomplete Cholesky factorization (Fine & Scheinberg, 2001; Bach & Jordan, 2002; Chen et al., 2021; Jia et al., 2021), were developed. Moreover, specific sketching techniques are known for polynomial kernels (Avron et al., 2014; Woodruff & Zandieh, 2020; Ahle et al., 2020; Song et al., 2021), a basic type of kernel that is not shift-invariant.
2 Preliminaries
Random Fourier Features.
RFF was first introduced by Rahimi and Recht (Rahimi & Recht, 2007). It is based on the fact that, for a shift-invariant kernel such that (this can be assumed w.l.o.g. by normalization), the function such that , which is the Fourier transform of , is a probability distribution (guaranteed by Bochner’s theorem (Bochner, 1933; Rudin, 1991)). Then, the RFF mapping is defined as
where are i.i.d. samples from the distribution with density .
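As a concrete illustration, the following is a minimal sketch of this construction in Python, assuming a Gaussian kernel (whose spectral density is a standard Gaussian) and the paired cosine/sine form of the features; the function and parameter names are ours for illustration, not part of any library.

```python
import numpy as np

def rff_map(X, D, rng):
    """Minimal RFF sketch for the Gaussian kernel k(x, y) = exp(-||x - y||^2 / 2).
    Its spectral density p is the standard Gaussian, so we draw omega_1, ..., omega_D
    i.i.d. from N(0, I_d) and use paired cosine/sine features."""
    d = X.shape[1]
    Omega = rng.standard_normal((d, D))      # i.i.d. samples from p
    Z = X @ Omega                            # projections omega_i^T x, shape (n, D)
    # The 1/sqrt(D) scaling makes pi(x)^T pi(y) an unbiased estimate of k(x, y).
    return np.hstack([np.cos(Z), np.sin(Z)]) / np.sqrt(D)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
Pi = rff_map(X, D=4096, rng=rng)
approx = Pi @ Pi.T                           # approximate kernel matrix
exact = np.exp(-0.5 * np.square(np.linalg.norm(X[:, None] - X[None, :], axis=2)))
print(np.max(np.abs(approx - exact)))        # small additive error
```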
Theorem 2.1 (Rahimi & Recht 2007).
.
Fact 2.1.
Let be a random variable with distribution over . Then
(1) |
and .
3 Upper Bounds
We present two results in this section. We start with Section 3.1, showing that RFF preserves the relative error of the kernel distance using target dimensions with high probability, when the kernel function is shift-invariant and analytic at the origin. Then in Section 3.2, combining this bound with a generalized analysis from a recent paper (Makarychev et al., 2019), we show that RFF also preserves the clustering cost for kernel -clustering problems with -objective, with target dimension only , which is independent of .
3.1 Proof of Theorem 1.2: The Relative Error for Preserving Kernel Distance
Since is shift-invariant, we interpret as a function on instead of , such that . Here, for simplicity of exposition, we further assume . With a straightforward modification, our proof still works without this assumption. As in Section 2, let be the Fourier transform of , and suppose that in the RFF mapping the random variables are i.i.d. samples from the distribution with density . When we say is analytic at the origin, we mean there exists some constant s.t. is analytic in . We pick to be the maximum such constant . Also notice that in , there are constants hidden inside the , i.e., as in Lemma 3.2.
Fact 3.1.
The following holds.
-
•
, and .
-
•
.
Define . As a crucial step, we next analyze the moment of random variables . This bound will be plugged into Bernstein’s inequality to conclude the proof.
Lemma 3.1.
If for some , is analytic in , then for every being even and every s.t. , we have .
Proof.
The proof can be found in Appendix A. ∎
Lemma 3.2.
For kernel which is shift-invariant and analytic at the origin, there exist such that for all , .
Proof.
The proof can be found in Appendix B. ∎
Proof sketch of Theorem 1.2.
We present a proof sketch for Theorem 1.2; the full proof can be found in Appendix C. We focus on the case when (the other case can be found in the full proof). By Lemma 3.2, we have . Then we have:
We take for simplicity of exposition. Assume , and let be even. By Markov’s inequality and Lemma 3.1:
For simplicity, denote by and by , and define ; note that . By some further calculations and plugging in the parameters , we eventually obtain . Denote by the variance of ; then again by Lemma 3.1 we immediately have . The theorem follows by a straightforward application of Bernstein’s inequality. ∎
3.2 Dimension Reduction for Kernel Clustering
We present the formal statement for Theorem 1.3 in Theorem 3.1. In fact, we consider the more general -clustering problem with -objective defined in the following Definition 3.1, which generalizes kernel -means (by setting ).
Definition 3.1.
Given a data set and kernel function , denoting the feature mapping as , the kernel -clustering problem with -objective asks for a -partition of that minimizes the cost function: .
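To make the objective concrete, the following minimal sketch (our own illustration, assuming a Gaussian kernel, the kernel -means case , and an arbitrary fixed partition rather than an optimized one) computes the clustering cost in two ways: exactly via the kernel trick, and approximately on RFF features.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
labels = rng.integers(0, 3, size=200)            # a fixed (arbitrary) 3-partition

# Exact cost via the kernel trick, for k(x, y) = exp(-||x - y||^2 / 2):
# for each cluster C, sum_{x in C} k(x, x) - (1/|C|) sum_{x, y in C} k(x, y).
K = np.exp(-0.5 * np.square(np.linalg.norm(X[:, None] - X[None, :], axis=2)))
exact = 0.0
for c in range(3):
    idx = np.where(labels == c)[0]
    Kc = K[np.ix_(idx, idx)]
    exact += np.trace(Kc) - Kc.sum() / len(idx)

# Approximate cost on RFF features: squared distances to the cluster centroid.
D = 2048
Omega = rng.standard_normal((X.shape[1], D))
Pi = np.hstack([np.cos(X @ Omega), np.sin(X @ Omega)]) / np.sqrt(D)
approx = sum(np.sum((Pi[labels == c] - Pi[labels == c].mean(axis=0)) ** 2)
             for c in range(3))

print(exact, approx)    # the two costs agree up to a small relative error
```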
Theorem 3.1 (Generalization of Theorem 3.6 in Makarychev et al. 2019).
For the kernel -clustering problem with -objective whose kernel function is shift-invariant and analytic at the origin, for every data set , the RFF mapping with target dimension satisfies
Proof.
The proof can be found in Appendix D. ∎
4 Lower Bounds
Theorem 4.1.
Consider , and a shift-invariant kernel function , denoting its feature mapping . Then there exists that are -bounded, such that for every , the RFF mapping for with target dimension satisfies
(2) |
where , and .
Proof.
The proof can be found in Appendix E. ∎
Note that the right hand side of (2) is always less than , since the first term achieves its maximum at , and this maximum is 1. On the other hand, we need the right hand side of (2) to be in order to obtain a useful lower bound, and a typical setup to achieve this is when .
Intuition of .
Observe that measures the ratio between the variance of RFF and the (squared) expectation evaluated at . The intuition for considering this comes from the central limit theorem. Indeed, when the number of samples/target dimension is sufficiently large, the error/difference behaves like a Gaussian, so with constant probability the error is . Hence, measures the “typical” relative error when the target dimension is sufficiently large, and an upper bound on is naturally a necessary condition for bounded relative error. The following gives a simple (sufficient) condition for kernels that do not have a bounded .
Remark 4.1 (Simple sufficient conditions for lower bounds).
Assume the input dimension is , so , and assume , . Then the -bounded property simply requires . We claim that, if ’s first derivative at is non-zero, i.e., , then RFF cannot preserve relative error for such . To see this, we use Taylor’s expansion for at the origin, and simply use the approximation to degree one, i.e., (noting that so this is a good approximation), where . Then
So if , then for sufficiently small and , . This also implies the claim in Theorem 1.1 for Laplacian kernels (even though one needs to slightly modify this analysis since strictly speaking is not well defined at for Laplacian kernels). As a sanity check, for shift-invariant kernels that are analytic at the origin (which include Gaussian kernels), it is necessary that .
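The following small numerical illustration of this remark (our own sketch, assuming one-dimensional data and unit bandwidth) estimates the ratio between the variance and the squared expectation of the single-feature RFF estimator of the squared kernel distance, for a Laplacian kernel (whose spectral density is the standard Cauchy distribution) and a Gaussian kernel; the ratio blows up as the separation shrinks in the Laplacian case but stays bounded in the Gaussian case.

```python
import numpy as np

def rho(sample_freq, delta, m=2_000_000, seed=0):
    """Monte-Carlo estimate of Var[Y] / E[Y]^2, where Y = 2*(1 - cos(omega * delta))
    is the single-feature RFF estimator of the squared kernel distance
    2*(1 - k(delta)), and omega is drawn from the spectral density of k."""
    rng = np.random.default_rng(seed)
    omega = sample_freq(rng, m)
    Y = 2.0 * (1.0 - np.cos(omega * delta))
    return Y.var() / Y.mean() ** 2

for delta in (1.0, 0.1, 0.01):
    lap = rho(lambda rng, m: rng.standard_cauchy(m), delta)   # Laplacian exp(-|t|)
    gau = rho(lambda rng, m: rng.standard_normal(m), delta)   # Gaussian exp(-t^2/2)
    print(f"delta={delta}: rho_laplacian~{lap:.1f}, rho_gaussian~{gau:.2f}")
```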
5 Beyond RFF: Oblivious Embedding for Laplacian Kernel
In this section we provide a proof sketch for Theorem 1.4. A more detailed proof is deferred to Appendix F.
Embedding
To handle a Laplacian kernel function with some constant , we cannot directly use the RFF mapping , since our lower bound shows that the output dimension has to be very large when is not analytic around the origin. To overcome this issue, we use the following idea. Notice that relies on the -distance between . If one can embed (via an embedding function ) the data points from the original metric space into a new metric space, and ensure that there is a kernel function , analytic around the origin, for the new space s.t. for every pair of original data points , then one can use the function composition to obtain the desired mapping.
Indeed, we find that can be embedded into isometrically (Kahane, 1981) in the following way. Here, for simplicity of exposition, we only handle the case where the input data are from , upper bounded by a natural number . Notice that even though the input data points consist only of integers, the mapping construction needs to handle fractions, as we will later consider numbers generated from Gaussian distributions or numbers computed in the RFF mapping. So we first set up two numbers, large enough and small enough. All our following operations work on numbers that are -bounded. For each dimension we do the following transformation. Let be such that for every , the first entries of are the number , while all the remaining entries are . Then consider all dimensions. The embedding function is such that for every , is the concatenation of the vectors . After embedding, consider a new kernel function , where . One can see immediately that . Hence, we can then apply RFF, i.e., the mapping is , which has a small output dimension. Detailed proofs can be found in Section F.1.
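A minimal sketch of this embedding for a single coordinate is given below (our own illustration, assuming integer inputs in {0, ..., M} and unit kernel bandwidth); it checks the ℓ1-to-squared-ℓ2 isometry and then composes it with a Gaussian RFF so that inner products approximate the Laplacian kernel.

```python
import numpy as np

M = 64                       # integer inputs from {0, ..., M}

def unary(x):
    """Embed an integer x into R^M so that ||unary(x) - unary(y)||_2^2 = |x - y|."""
    v = np.zeros(M)
    v[:x] = 1.0
    return v

for x, y in [(3, 10), (0, 64), (17, 17)]:
    assert np.sum((unary(x) - unary(y)) ** 2) == abs(x - y)

# Composing with a Gaussian RFF on the embedded points approximates the
# Laplacian kernel: exp(-|x - y|) = exp(-||unary(x) - unary(y)||_2^2).
D = 8192
rng = np.random.default_rng(0)
Omega = rng.standard_normal((M, D)) * np.sqrt(2.0)   # bandwidth chosen so that
                                                     # E[cos(w^T u)] = exp(-||u||^2)
def feature(x):
    z = unary(x) @ Omega
    return np.hstack([np.cos(z), np.sin(z)]) / np.sqrt(D)

x, y = 5, 6
print(feature(x) @ feature(y), np.exp(-abs(x - y)))  # both close to exp(-1)
```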
However, there is another issue. In our setting, if the data is bounded, then we have to pick . The computing time has a linear factor in , which is too large.
Polynomial Time Construction
To reduce the computing time, we start from the following observation about the RFF mapping . Each output dimension is actually a function of , where is a vector of i.i.d. Gaussian random variables. For simplicity of description we only consider the case where has one dimension and . So is just a vector consisting of 's starting from the left, with all the remaining entries being 's. Notice that a summation of Gaussian random variables is still Gaussian. So given , one can generate according to the summation of Gaussians. But here comes another problem: for two data points , we need to use the same . So if we generate and separately, then they are independent.
To bypass this issue, first consider the following alternative way to generate . Let be the smallest integer s.t. . Consider a binary tree where each internal node has exactly 2 children. The depth is , so it has exactly leaf nodes in the last layer. For each node , we attach a random variable in the following way. For the root, we attach a Gaussian variable which is the summation of independent Gaussian variables with distribution . Then we proceed layer by layer from the root to the leaves. For each being children of a common parent , assume that is the summation of independent distributions. Then let be the summation of the first distributions among them and be the summation of the remaining distributions. That is, , with independent. Notice that conditioned on , takes the value with probability , and takes the value when takes the value .
The randomness for generating every random variable corresponding to a node is presented as a sequence, in the order from root to leaves, layer by layer, from left to right. We define to be the summation of the random variables corresponding to the first leaves. Notice that can be sampled efficiently in the following way. Consider the path from the root to the -th leaf. First we sample the root, which can be computed using the corresponding part of the randomness. We use a variable to record this sample outcome, calling it an accumulator for convenience. Then we visit each node along the path. When visiting , assume its parent is , which has already been sampled previously with outcome . If is a left child of , then we sample conditioned on ; assume this sampling has outcome . Then we add to the current accumulator . If is a right child of , then we keep the current accumulator unchanged. After visiting all nodes in the path, is the sample outcome for . We can show that the joint distribution has essentially the same distribution as . See Lemma F.2.
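The following is a compact sketch of this tree-based sampler (our own illustration; the per-node randomness is modeled with an RNG seeded by the node index, standing in for the fixed random tape, or later the PRG output, of the construction). Querying the prefix sum of the first leaves touches only the root-to-leaf path, and repeated queries are mutually consistent because every node re-derives the same conditional sample.

```python
import numpy as np

L = 20                      # tree depth; there are 2**L leaves
N = 2 ** L

def node_noise(node_id):
    """One standard normal per tree node, re-derivable from the node index
    (stands in for the fixed random tape / PRG output in the construction)."""
    return np.random.default_rng(node_id).standard_normal()

def prefix_sum(j):
    """Sample S_j = g_1 + ... + g_j for i.i.d. g_i ~ N(0, 1), touching only the
    O(L) nodes on the root-to-leaf path, consistently across queries."""
    node, lo, n = 1, 0, N                     # current node covers leaves (lo, lo+n]
    total = np.sqrt(N) * node_noise(node)     # root value ~ N(0, N)
    acc = 0.0
    while n > 1:
        half = n // 2
        # Left-child value conditioned on the parent's value `total`:
        # if X ~ N(0, half) and Y ~ N(0, half), then X | X+Y = total ~ N(total/2, half/2).
        left = total / 2 + np.sqrt(half / 2) * node_noise(2 * node)
        if j <= lo + half:                    # the prefix boundary is in the left half
            node, n, total = 2 * node, half, left
        else:                                 # the whole left half is in the prefix
            acc += left
            node, lo, n, total = 2 * node + 1, lo + half, half, total - left
    return acc + (total if j > lo else 0.0)

print(prefix_sum(1000) == prefix_sum(1000))   # repeated queries agree
print(prefix_sum(1000) - prefix_sum(900))     # one sample of a N(0, 100) variable
```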
The advantage of this alternative construction is that given any , to generate one only needs to visit the path from the root to the -th leaf, using the above generating procedure. To finally reduce the time complexity, the last issue is that the uniform random string for generating the random variables here is very long. If we sweep the random tape to locate the randomness used to generate a variable corresponding to a node, then we still need time linear in . Fortunately, PRGs for space-bounded computation, e.g., Nisan's PRG (Nisan, 1992), can be used here to replace the uniform randomness, because the whole procedure for deciding whether approximates within multiplicative error is in poly-logarithmic space. Also, the computation of such PRGs can be highly efficient, i.e., given any index of the output, one can compute that bit in time polynomial in the seed length, which is poly-logarithmic in . Hence the computing time of the mapping only has a factor poly-logarithmic in instead of a factor linear in .
Now we have shown our construction for the case where all input data points are from . One can generalize this to the case where all numbers are bounded, by some simple rounding and shifting of the numbers. This can be further generalized to the case where the input data has dimensions, by simply handling each dimension separately and concatenating the results. More details are deferred to Section F.4.
6 Experiments
We evaluate the empirical relative error of our methods on a simulated dataset. Specifically, we do two experiments, one to evaluate RFF on a Gaussian kernel, and the other one to evaluate the new algorithm in Theorem F.1, which we call “New-Lap”, on a Laplacian kernel. In each experiment, we compare against two other popular methods, particularly Nyström and random-projection methods.
Baselines.
Observe that there are many possible implementations of these two methods. However, since we focus on the accuracy evaluation, we choose computationally-heavy but more accurate implementations as the two baselines (hence the evaluation of the error is only in the baseline’s favor). In particular, we consider 1) SVD low-rank approximation which we call “SVD”, and 2) the vanilla Johnson-Lindenstrauss algorithm performed on top of the high-dimensional representation of points in the feature space, which we call “JL”. Note that SVD is the “ideal” goal/form of Nyström methods and that Johnson-Lindenstrauss applied on the feature space can obtain a theoretically-tight target-dimension bound (in the worst-case sense).
Experiment Setup.
Both experiments are conducted on a synthesized dataset which consists of points with dimensions, generated i.i.d. from a Gaussian distribution. For the experiment in which we evaluate RFF, we use a Gaussian kernel , and for the one in which we evaluate New-Lap, we use a Laplacian kernel . In each experiment, for each method, we run it for varying target dimension (for SVD, is the target rank), and we report its empirical relative error, which is defined as
where is the kernel distance and is the approximate distance. To stabilize the results, we conduct this entire experiment times for every and report the average and the 95% confidence interval. We plot these dimension-error tradeoff curves and depict the results in Figure 1.
[Figure 1: empirical dimension-error tradeoff curves for the two experiments (RFF on a Gaussian kernel, and New-Lap on a Laplacian kernel, each compared against the SVD and JL baselines).]
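For reference, the sketch below (our own illustration; the dataset size and kernel bandwidth are placeholders, not the exact experimental parameters) computes one natural instantiation of this error metric, the maximum relative error over all pairs, for the RFF approximation of Gaussian kernel distances.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 10
X = rng.standard_normal((n, d))

# Exact kernel distances for k(x, y) = exp(-||x - y||^2 / 2):
# dist(x, y) = sqrt(k(x, x) + k(y, y) - 2 k(x, y)) = sqrt(2 - 2 k(x, y)).
sq = np.square(np.linalg.norm(X[:, None] - X[None, :], axis=2))
exact = np.sqrt(2.0 - 2.0 * np.exp(-0.5 * sq))

def max_relative_error(D):
    Omega = rng.standard_normal((d, D))
    Pi = np.hstack([np.cos(X @ Omega), np.sin(X @ Omega)]) / np.sqrt(D)
    G = Pi @ Pi.T
    approx = np.sqrt(np.maximum(np.diag(G)[:, None] + np.diag(G)[None, :] - 2.0 * G, 0.0))
    iu = np.triu_indices(n, k=1)              # distinct pairs only
    return np.max(np.abs(approx[iu] - exact[iu]) / exact[iu])

for D in (64, 256, 1024, 4096):
    print(D, max_relative_error(D))           # the error shrinks as D grows
```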
Results.
We conclude that in both experiments, our methods can indeed preserve the kernel distance with small relative error, which verifies our theorems. In particular, the dimension-error curve is comparable to (and even slightly better than) that of the computationally heavy Johnson-Lindenstrauss algorithm (which is theoretically optimal in the worst case). On the contrary, the popular Nyström (low-rank approximation) method is largely incapable of preserving the relative error of the kernel distance. In fact, we observe that or 0 often happens for some pairs such that , which explains the high relative error. This indicates that our methods can indeed preserve the kernel distance in relative error, while existing methods struggle to achieve this.
Acknowledgments
Research is partially supported by a national key R&D program of China No. 2021YFA1000900, startup funds from Peking University, and the Advanced Institute of Information Technology, Peking University.
References
- Ahle et al. (2020) Thomas D. Ahle, Michael Kapralov, Jakob Bæk Tejs Knudsen, Rasmus Pagh, Ameya Velingker, David P. Woodruff, and Amir Zandieh. Oblivious sketching of high-degree polynomial kernels. In SODA, pp. 141–160. SIAM, 2020.
- Arthur & Vassilvitskii (2007) David Arthur and Sergei Vassilvitskii. k-means++: the advantages of careful seeding. In SODA, pp. 1027–1035. SIAM, 2007.
- Avron et al. (2014) Haim Avron, Huy L. Nguyen, and David P. Woodruff. Subspace embeddings for the polynomial kernel. In NIPS, pp. 2258–2266, 2014.
- Avron et al. (2016) Haim Avron, Vikas Sindhwani, Jiyan Yang, and Michael W. Mahoney. Quasi-monte carlo feature maps for shift-invariant kernels. J. Mach. Learn. Res., 17:120:1–120:38, 2016.
- Avron et al. (2017) Haim Avron, Michael Kapralov, Cameron Musco, Christopher Musco, Ameya Velingker, and Amir Zandieh. Random fourier features for kernel ridge regression: Approximation bounds and statistical guarantees. In ICML, volume 70 of Proceedings of Machine Learning Research, pp. 253–262. PMLR, 2017.
- Bach & Jordan (2002) Francis R. Bach and Michael I. Jordan. Kernel independent component analysis. J. Mach. Learn. Res., 3:1–48, 2002.
- Backurs et al. (2019) Arturs Backurs, Piotr Indyk, and Tal Wagner. Space and time efficient kernel density estimation in high dimensions. In NeurIPS, pp. 15773–15782, 2019.
- Berry (1941) Andrew C Berry. The accuracy of the gaussian approximation to the sum of independent variates. Transactions of the american mathematical society, 49(1):122–136, 1941.
- Bochner (1933) Salomon Bochner. Monotone funktionen, stieltjessche integrale und harmonische analyse. Mathematische Annalen, 108(1):378–410, 1933.
- Boser et al. (1992) Bernhard E. Boser, Isabelle Guyon, and Vladimir Vapnik. A training algorithm for optimal margin classifiers. In COLT, pp. 144–152. ACM, 1992.
- Box (1958) George EP Box. A note on the generation of random normal deviates. Ann. Math. Statist., 29:610–611, 1958.
- Brehm (1981) Ulrich Brehm. Extensions of distance reducing mappings to piecewise congruent mappings on lm. Journal of Geometry, 16(1):187–193, 1981.
- Chang et al. (2017) Wei-Cheng Chang, Chun-Liang Li, Yiming Yang, and Barnabás Póczos. Data-driven random fourier features using stein effect. In IJCAI, pp. 1497–1503. ijcai.org, 2017.
- Chen & Phillips (2017) Di Chen and Jeff M. Phillips. Relative error embeddings of the gaussian kernel distance. In ALT, volume 76 of Proceedings of Machine Learning Research, pp. 560–576. PMLR, 2017.
- Chen et al. (2021) Li Chen, Shuisheng Zhou, Jiajun Ma, and Mingliang Xu. Fast kernel k-means clustering using incomplete cholesky factorization. Appl. Math. Comput., 402:126037, 2021.
- Chitta et al. (2012) Radha Chitta, Rong Jin, and Anil K. Jain. Efficient kernel clustering using random fourier features. In ICDM, pp. 161–170. IEEE Computer Society, 2012.
- Choromanski et al. (2018) Krzysztof Choromanski, Mark Rowland, Tamás Sarlós, Vikas Sindhwani, Richard E. Turner, and Adrian Weller. The geometry of random features. In AISTATS, volume 84 of Proceedings of Machine Learning Research, pp. 1–9. PMLR, 2018.
- Czumaj & Sohler (2004) Artur Czumaj and Christian Sohler. Sublinear-time approximation for clustering via random sampling. In ICALP, volume 3142 of Lecture Notes in Computer Science, pp. 396–407. Springer, 2004.
- Dao et al. (2017) Tri Dao, Christopher De Sa, and Christopher Ré. Gaussian quadrature for kernel features. In NIPS, pp. 6107–6117, 2017.
- Erdélyi et al. (2020) Tamás Erdélyi, Cameron Musco, and Christopher Musco. Fourier sparse leverage scores and approximate kernel learning. In NeurIPS, 2020.
- Esseen (1942) Carl-Gustav Esseen. On the liapunov limit error in the theory of probability. Ark. Mat. Astr. Fys., 28:1–19, 1942.
- Fine & Scheinberg (2001) Shai Fine and Katya Scheinberg. Efficient SVM training using low-rank kernel representations. J. Mach. Learn. Res., 2:243–264, 2001.
- Girolami (2002) Mark A. Girolami. Mercer kernel-based clustering in feature space. IEEE Trans. Neural Networks, 13(3):780–784, 2002.
- Gittens & Mahoney (2016) Alex Gittens and Michael W. Mahoney. Revisiting the nystrom method for improved large-scale machine learning. J. Mach. Learn. Res., 17:117:1–117:65, 2016.
- Hormander (1966) Lars Hormander. An introduction to complex analysis in several variables. van Nostrand, 1966.
- Indyk & Motwani (1998) Piotr Indyk and Rajeev Motwani. Approximate nearest neighbors: Towards removing the curse of dimensionality. In STOC, pp. 604–613. ACM, 1998.
- Jacot et al. (2018) Arthur Jacot, Clément Hongler, and Franck Gabriel. Neural tangent kernel: Convergence and generalization in neural networks. In NeurIPS, pp. 8580–8589, 2018.
- Jia et al. (2021) Hongjie Jia, Liangjun Wang, Heping Song, Qirong Mao, and Shifei Ding. An efficient nyström spectral clustering algorithm using incomplete cholesky decomposition. Expert Syst. Appl., 186:115813, 2021.
- Johnson & Lindenstrauss (1984) William B Johnson and Joram Lindenstrauss. Extensions of lipschitz mappings into a hilbert space. Contemporary mathematics, 26:28, 1984.
- Kahane (1981) Jean-Pierre Kahane. Hélices et quasi-hélices. In Leopoldo Nachbin (ed.), Mathematical Analysis and Applications, Part B, pp. 418–432. Academic Pr, first edition edition, 1981. ISBN 0125128029,9780125128025.
- Larsen & Nelson (2017) Kasper Green Larsen and Jelani Nelson. Optimality of the johnson-lindenstrauss lemma. In FOCS, pp. 633–638. IEEE Computer Society, 2017.
- Li et al. (2021) Zhu Li, Jean-Francois Ton, Dino Oglic, and Dino Sejdinovic. Towards a unified analysis of random fourier features. J. Mach. Learn. Res., 22:108:1–108:51, 2021.
- Liao et al. (2020) Zhenyu Liao, Romain Couillet, and Michael W. Mahoney. A random matrix analysis of random fourier features: beyond the gaussian kernel, a precise phase transition, and the corresponding double descent. In NeurIPS, 2020.
- Liu et al. (2020) Fanghui Liu, Xiaolin Huang, Yudong Chen, Jie Yang, and Johan A. K. Suykens. Random fourier features via fast surrogate leverage weighted sampling. In AAAI, pp. 4844–4851. AAAI Press, 2020.
- Liu et al. (2021) Fanghui Liu, Xiaolin Huang, Yudong Chen, and Johan AK Suykens. Random features for kernel approximation: A survey on algorithms, theory, and beyond. IEEE Transactions on Pattern Analysis & Machine Intelligence, (01):1–1, 2021.
- Makarychev et al. (2019) Konstantin Makarychev, Yury Makarychev, and Ilya P. Razenshteyn. Performance of johnson-lindenstrauss transform for k-means and k-medians clustering. In STOC, pp. 1027–1038. ACM, 2019.
- Musco & Musco (2017) Cameron Musco and Christopher Musco. Recursive sampling for the nystrom method. In NIPS, pp. 3833–3845, 2017.
- Nisan (1992) Noam Nisan. Pseudorandom generators for space-bounded computation. Comb., 12(4):449–461, 1992.
- Oglic & Gärtner (2017) Dino Oglic and Thomas Gärtner. Nyström method with kernel k-means++ samples as landmarks. In ICML, volume 70 of Proceedings of Machine Learning Research, pp. 2652–2660. PMLR, 2017.
- Phillips & Tai (2020) Jeff M. Phillips and Wai Ming Tai. The gaussiansketch for almost relative error kernel distance. In APPROX-RANDOM, volume 176 of LIPIcs, pp. 12:1–12:20. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020.
- Raghvendra & Agarwal (2020) Sharath Raghvendra and Pankaj K. Agarwal. A near-linear time -approximation algorithm for geometric bipartite matching. J. ACM, 67(3):18:1–18:19, 2020.
- Rahimi & Recht (2007) Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In NIPS, pp. 1177–1184. Curran Associates, Inc., 2007.
- Rahimi & Recht (2008) Ali Rahimi and Benjamin Recht. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. In NIPS, pp. 1313–1320. Curran Associates, Inc., 2008.
- Ren & Du (2020) Yuanhang Ren and Ye Du. Uniform and non-uniform sampling methods for sub-linear time k-means clustering. In ICPR, pp. 7775–7781. IEEE, 2020.
- Rudi & Rosasco (2017) Alessandro Rudi and Lorenzo Rosasco. Generalization properties of learning with random features. In NIPS, pp. 3215–3225, 2017.
- Rudi et al. (2018) Alessandro Rudi, Daniele Calandriello, Luigi Carratino, and Lorenzo Rosasco. On fast leverage score sampling and optimal learning. In NeurIPS, pp. 5677–5687, 2018.
- Rudin (1991) W. Rudin. Fourier Analysis on Groups. Wiley Classics Library. Wiley, 1991. ISBN 9780471523642.
- Schölkopf & Smola (2002) Bernhard Schölkopf and Alexander Johannes Smola. Learning with Kernels: support vector machines, regularization, optimization, and beyond. Adaptive computation and machine learning series. MIT Press, 2002.
- Schölkopf et al. (1998) Bernhard Schölkopf, Alexander J. Smola, and Klaus-Robert Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput., 10(5):1299–1319, 1998.
- Shahrampour et al. (2018) Shahin Shahrampour, Ahmad Beirami, and Vahid Tarokh. On data-dependent random features for improved generalization in supervised learning. In AAAI, pp. 4026–4033. AAAI Press, 2018.
- Song et al. (2021) Zhao Song, David P. Woodruff, Zheng Yu, and Lichen Zhang. Fast sketching of polynomial kernels of polynomial degree. In ICML, volume 139 of Proceedings of Machine Learning Research, pp. 9812–9823. PMLR, 2021.
- Sriperumbudur & Szabó (2015) Bharath K. Sriperumbudur and Zoltán Szabó. Optimal rates for random fourier features. In NIPS, pp. 1144–1152, 2015.
- Sun et al. (2018) Yitong Sun, Anna C. Gilbert, and Ambuj Tewari. But how does it work in theory? linear SVM with random features. In NeurIPS, pp. 3383–3392, 2018.
- Sutherland & Schneider (2015) Danica J. Sutherland and Jeff G. Schneider. On the error of random fourier features. In UAI, pp. 862–871. AUAI Press, 2015.
- Wang et al. (2019) Shusen Wang, Alex Gittens, and Michael W. Mahoney. Scalable kernel k-means clustering with nyström approximation: Relative-error bounds. J. Mach. Learn. Res., 20:12:1–12:49, 2019.
- Williams & Seeger (2000) Christopher K. I. Williams and Matthias W. Seeger. Using the nyström method to speed up kernel machines. In NIPS, pp. 682–688. MIT Press, 2000.
- Woodruff & Zandieh (2020) David P. Woodruff and Amir Zandieh. Near input sparsity time kernel embeddings via adaptive sampling. In ICML, volume 119 of Proceedings of Machine Learning Research, pp. 10324–10333. PMLR, 2020.
- Yang et al. (2012) Tianbao Yang, Yu-Feng Li, Mehrdad Mahdavi, Rong Jin, and Zhi-Hua Zhou. Nyström method vs random fourier features: A theoretical and empirical comparison. In NIPS, pp. 485–493, 2012.
- Zhang et al. (2019) Jian Zhang, Avner May, Tri Dao, and Christopher Ré. Low-precision random fourier features for memory-constrained kernel approximation. In AISTATS, volume 89 of Proceedings of Machine Learning Research, pp. 1264–1274. PMLR, 2019.
- Zhen et al. (2020) Xiantong Zhen, Haoliang Sun, Ying-Jun Du, Jun Xu, Yilong Yin, Ling Shao, and Cees Snoek. Learning to learn kernels with variational random features. In ICML, volume 119 of Proceedings of Machine Learning Research, pp. 11409–11419. PMLR, 2020.
Appendix A Proof of Lemma 3.1
See 3.1
We first introduce the following two lemmas to show the properties of .
Lemma A.1.
For any sampled in RFF and , we have .
Proof.
By eq. 1 it is sufficient to prove that
We prove this by induction. In the case of the lemma holds obviously.
If for the lemma holds, we have
where in the third equality we use the fact that . ∎
Lemma A.2.
If there exists such that is analytic in , then , for some constant .
Proof.
We write the analytic function as a Taylor series around the origin:
(3) |
where is a monomial and its coefficient is . By definition, since . We let denote the degree of . Since by definition, is an even function, so for odd .
Recall that . In the following, we drop the subscripts in and write for simplicity. By the definition of we have
(4) |
Note that by Lemma A.1, . Plugging this into eq. 4:
where the second equality comes from the Taylor expansion of . Next we show that is of degree at least .
For note that , since is even and , we have . For , we next show that every term of degree less than has coefficient zero.
Fix and take the Taylor expansion of
Without loss of generality, we assume are all the 's that equal , so we have .
Now we consider the coefficient of the term , which is:
where is the number of ordered sequences ; here, for , and are equivalent.
Next, we show that if the degree of a monomial is , then its coefficient is zero. Since all , we may assume ; therefore .
Suppose the operator is a mapping from a function space to itself, such that is defined by . Denote by its -fold composition, and define to be the identity mapping such that . Similarly we can define addition and scalar multiplication . By definition, .
Let ; the coefficient can be rewritten as:
Let ; the above is . Now we show
Note that computes a second-order difference: if is a polynomial of degree , then is a polynomial of degree , and if is a polynomial of degree , then is .
Since is a polynomial of degree less than , we have
Combining the above two cases, we have proved eq. 4 is of degree at least , which completes our proof. ∎
Proof of Lemma 3.1.
If , since , we have ; otherwise . Define ; we have:
where the second equality comes from Lemma A.2 and the Taylor expansion with Lagrange remainder.
Lemma A.3 (Cauchy’s integral formula for multivariate functions; Hormander 1966).
For analytic in
Furthermore,
If in addition , we have the following evaluation:
Recall that , so is analytic when . Applying Cauchy’s integral formula (Lemma A.3) (here lies in the region of analyticity),
we have
∎
Appendix B Proof of Lemma 3.2
See 3.2
Proof.
It suffices to prove that , for some . Towards proving this, we show that is strongly convex at the origin. In fact, by definition, for every fixed , ; therefore , hence is strongly convex with respect to at the origin, and so is . ∎
Appendix C Proof of Theorem 1.2
See 1.2
Proof.
When , consider the function . It follows from the definition that , so strictly decreases for all . So . We denote , so by the Chernoff bound, when , we have:
When , by Lemma 3.2, we have . Then we have:
We take for simplicity of exposition. Assume , and let be even. By Markov’s inequality and Lemma 3.1:
For simplicity, denote by and by , and define ; note that . Then:
where . The first inequality is by considering the two conditions and , and then applying the triangle inequality. The first and second equalities are by the definition of . The third equality is a straightforward computation. The last equality is due to . By Lemma 3.1, for every integer ,
The first equality is straightforward. The first inequality is by Markov’s inequality. The second equality is by , which follows from Lemma 3.1, and a rearrangement of parameters, where is the parameter in Lemma 3.1. Therefore,
The first inequality is by the property of the absolute value. The second inequality is because we can divide the event into and , and when , . The third inequality is by plugging in the previous bound for . The last inequality is by a calculation of the integral.
By plugging in the parameters , , , we have . Note that the hides a constant . Denote by the variance of . So by Lemma 3.1.
Lemma C.1 (Bernstein’s Inequality).
Let be independent zero-mean random variables. Suppose that , then for all positive ,
Applying Bernstein’s Inequality to ,
Since and , we have . With probability , every . Therefore, combining these, .
∎
Appendix D Proof of Theorem 3.1
See 3.1
The proof relies on a key notion of -dimension reduction from (Makarychev et al., 2019), which we adapt to our setting/language of kernel distance as follows.
Definition D.1 (Makarychev et al. 2019, Definition 2.1).
For , a feature mapping for some Hilbert space , a random mapping is an -dimension reduction, if
-
•
for every , with probability at least , and
-
•
for every fixed , .
In Makarychev et al. (2019), most results are stated for a particular parameter setup of Definition D.1 resulting from the Johnson-Lindenstrauss transform (Johnson & Lindenstrauss, 1984), but their analysis actually works for other similar parameter setups. The following is a generalized statement of (Makarychev et al., 2019, Theorem 3.5) which also reveals how alternative parameter setups affect the distortion. We note that this is simply a more precise and detailed statement of (Makarychev et al., 2019, Theorem 3.5), and it follows from exactly the same proof as in Makarychev et al. (2019).
Lemma D.1 (Makarychev et al. 2019, Theorem 3.5).
Let and . If some -dimension reduction for feature mapping of some kernel function satisfies , then with probability at least , for every partition of ,
Proof of Theorem 3.1.
We verify that, setting , the RFF mapping with target dimension satisfies the conditions in Lemma D.1, namely, it is a -dimension reduction . In fact, Theorem 1.2 already implies that such satisfies, for every , with probability at least , where for some constant , and . For the other part,
(integration by parts)
where the third equality follows because decays exponentially fast with respect to . Observe that for , decreases when , and for , decreases when . Hence for , we have
In conclusion, by setting , for and , it satisfies . This verifies the condition of Lemma D.1.
Finally, we conclude the proof of Theorem 3.1 by plugging and the above mentioned RFF mapping with target dimension into Lemma D.1. ∎
Appendix E Proof of Theorem 4.1
See 4.1
Proof.
Our proof requires the following anti-concentration inequality.
Lemma E.1 (Berry 1941; Esseen 1942).
For i.i.d. random variables with mean 0 and variance 1, let , then for any t,
Let , and choose such that . Clearly, such a pair satisfies that is -bounded. In fact, it is without loss of generality to assume that both and are -bounded, since one may pick and still have . We next verify that such satisfy our claimed properties. Indeed,
where the second inequality is by Lemma E.1, and the second-to-last equality follows from the definition of and that of such that . ∎
Appendix F Beyond RFF: Oblivious Embedding for Laplacian Kernel with Small Computing Time
In this section we show an oblivious feature mapping for Laplacian kernel dimension reduction with small computing time. The following is the main theorem.
Theorem F.1.
Let be a Laplacian kernel with feature mapping . For every , every , , every , there is a mapping , such that for every that are -bounded,
The time for evaluating is .
For simplicity of exposition, we first handle the case when the input data are from . At the end we will describe how to handle the case when the input data are from by a simple transformation. Let be s.t. every entry of an input data point is at most . Even though the input data consist only of integers, the mapping construction needs to handle fractions, as we will later consider numbers generated from Gaussian distributions or numbers computed in the RFF mapping. So we first set up two numbers, large enough and small enough. All our following operations work on numbers that are -bounded. Denote as for convenience.
F.1 Embedding from to
Now we describe an isometric embedding from the norm to . This construction is based on Kahane (1981), in which, to the best of our knowledge, the first such finite-dimensional construction was given. Let be such that for every , if and otherwise. Let be such that for every , is the concatenation of the vectors .
Lemma F.1.
For every with , it holds that
Proof.
Notice that for every , has its first entries being while has its first entries being . Thus is exactly . If we consider all the dimensions, then by the construction of , the lemma holds. ∎
F.2 Feature Mapping for Laplacian Kernel
Notice that we can apply the mapping and then the RFF mapping from Theorem 1.2 for the kernel function , which is actually a Gaussian kernel. This gives a mapping which preserves the kernel distance for the Laplacian kernel. To be more precise, we set up the mapping to be . The only drawback is that the running time is high, as the above mapping maps dimensions to dimensions. We formalize this as the following theorem.
Theorem F.2.
Let be a Laplacian kernel with feature map . For every , every , , there exists a mapping s.t. for every ,
The time of evaluating is .
Proof.
The time complexity is a bit large, since we need to compute which has a rather large dimension . Next we show how to reduce the time complexity.
F.3 An Alternate Construction
We give the following map, which has the same output distribution as that of . In the next subsection we will use pseudorandom generators to replace the randomness in , exploiting their high computational efficiency to reduce the time complexity. Notice that in computing , for each output dimension, the crucial step is computing for some Gaussian distribution which has each dimension being an independent Gaussian distribution . The final output is a function of . So we only need to present the construction for the first part, i.e., the inner product of an -dimensional Gaussian vector and . For the other parts the computations are the same, and finally we only need to sum them up. Hence, to make the description simpler, we denote this inner product as , where now we let and have dimensions, each being an independent .
Let be the smallest integer s.t. . Consider a binary tree where each internal node has exactly 2 children. The depth is , so it has exactly leaf nodes in the last layer. For each node , we attach a random variable in the following way. For the root, we attach a Gaussian variable which is the summation of independent Gaussian variables with distribution . Then we proceed layer by layer from the root to the leaves. For each being children of a common parent , assume that is the summation of independent distributions. Then let be the summation of the first distributions among them and be the summation of the remaining distributions. That is, , with independent. Notice that conditioned on , takes the value with probability , and takes the value when takes the value .
The randomness for generating every random variable corresponding to a node is presented as a sequence, in the order from root to leaves, layer by layer, from left to right. We define to be the summation of the random variables corresponding to the first leaves. Notice that can be sampled efficiently in the following way. Consider the path from the root to the -th leaf. First we sample the root, which can be computed using the corresponding randomness. We use a variable to record this sample outcome, calling it an accumulator for convenience. Then we visit each node along the path. When visiting , assume its parent is , which has already been sampled previously with outcome . If is a left child of , then we sample conditioned on ; assume this sampling has outcome . Then we add to the current accumulator . If is a right child of , then we keep the current accumulator unchanged. After visiting all nodes in the path, is the sample outcome for .
Lemma F.2.
The joint distribution has the same distribution as .
Proof.
According to our construction, each leaf is an independent distribution . Hence if we take all the leaves and form a vector, then it has the same distribution as .
Notice that for each parent with two children , by the construction, . Here are independent, each being a summation of independent , with being the number of leaves derived from . Thus, for each layer, the 's of the nodes in that layer are independent, and summing the children of each node gives back their parent. So for the last layer all the variables are independent and follow the distribution . And for each node in the tree, is the summation of the random variables attached to the leaves of the subtree rooted at . So is the summation of the first leaf variables. ∎
We do the same operation for the other dimensions of the output of and then sum them up to get an alternative construction for .
We note that to generate an , we only need to simulate the conditional distributions. The distribution function of the random variable is easy to derive, since its density function is a product of three Gaussian density functions, i.e.
where are Gaussians. After obtaining the density function , we can use rejection sampling to sample.
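For concreteness, the conditional law being simulated here is the following standard Gaussian fact (stated as a sketch in our own notation, where the parent variable is the sum of $n$ i.i.d. $N(0,\sigma^2)$ terms and the left child collects the first $m$ of them); the product of Gaussian densities above normalizes to exactly this Gaussian.

```latex
% If X ~ N(0, m*sigma^2) and Y ~ N(0, (n-m)*sigma^2) are independent, then
% conditioned on the parent value X + Y = s, the left child X is Gaussian:
X \mid (X + Y = s) \;\sim\; \mathcal{N}\!\Big(\tfrac{m}{n}\,s,\; \tfrac{m(n-m)}{n}\,\sigma^2\Big),
% so one may sample it as (m/n) s + sigma * sqrt(m(n-m)/n) * Z with Z ~ N(0, 1).
```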
We remark that simulating a distribution using uniform random bits always incurs some simulation bias. The above lemma is proved under the assumption that the simulation has no bias. But we claim that the statistical distance between the simulated distribution and the original distribution is at most , which is small enough by our choice of . To see the claim, notice that in the rejection sampling we can first obtain an interval s.t. the density function at any point of is at least . Then we do rejection sampling only on . Notice that because consists of exponentially small functions, the interval is not too large, . Thus rejection-sampling attempts are enough to get a sample with sufficiently high probability . And this sample only has a statistical distance from the original distribution, since the density functions are Gaussian functions or products of Gaussian functions, which implies that the probability measure on the complement of is still at most .
So if we consider the simulation bias, then we can show that for every subset , the joint distribution has statistical distance to the joint distribution . Later we will only use the case that , i.e., two points. So the overall statistical distance is , which does not affect our analysis and parameters.
F.4 Reducing the Time Complexity Using PRGs
Next we use a pseudorandom generator to replace the randomness used in the above construction. A function is a pseudorandom generator for space- computations with error parameter if, for every probabilistic TM with space that uses random bits in a read-once manner,
Here is called the seed length of .
Theorem F.3 (Nisan 1992).
For every and , there exists a pseudorandom generator for space- computations with parameter , where . can be computed in polynomial time (in ) and space . Moreover, given an index , the -th bit of the output of can be computed in time .
Let be a pseudorandom generator for space , with , for some large enough constants . Again, we only need to consider the construction corresponding to the first output dimension of . We replace the randomness used in the construction by the output of . That is, when we need uniform random bits to construct a distribution in the tree, we first compute the positions of these bits and then compute the corresponding bits in the output of . Then we use them to do the construction in the same way. We denote this mapping using pseudorandomness as our final mapping .
Now we provide a test algorithm to show that the feature mapping provided by the pseudorandom distribution has roughly the same quality as the mapping provided by true randomness. We denote the test algorithm by , where and is a Laplacian kernel with feature mapping . works as follows. Its input is the randomness, either or . first computes . Notice that actually does not have to compute , since the distance can be directly computed as . Then computes and tests
Notice that when the input is , then this algorithm is instead testing
Recall that is defined in the previous section as our mapping using true randomness.
Next we consider using on true randomness.
Lemma F.3.
.
Proof.
Now we show that is actually a small-space computation.
Lemma F.4.
runs in space for some constant and the input is read-once.
Proof.
Computing is in space , since and the kernel function can be computed in that space. Now we focus on the computation of . We claim that, by the construction of in Section F.3, can be computed using space . The procedure proceeds as follows. First it finds the path to the -th leaf; this takes space . Then along this path, for each node we need to compute a distribution ; this takes space . Also notice that since the randomness is presented layer by layer, the procedure only needs to do a read-once sweep of the randomness. needs to compute for both and , but this only blows up the space by . So the overall space needed is as stated. ∎
Finally we prove our theorem by using the property of the PRG.
Proof of Theorem F.1.
We first show our result assuming for an integer . We claim that is the mapping we want. By Lemma F.3, . By Lemma F.4, runs in space and is read-once. As is a PRG for space for some large enough constant ,
where the seed length is . Notice that is equivalent to . Thus
The running time is computed as follows. We only need to consider one dimension of the input data and one output dimension of the mapping, since the others take the same time. So we actually consider the time for sampling . For , recall that we visit the path from the root to the -th leaf. We do not have to compute the whole output of , but instead only need to use some parts of the output. For sampling each variable along the path, we use bits of the output of . By Theorem F.3, computing each random bit of 's output, given the index of this bit, takes time . Locating the bits of randomness for generating takes time . Generating each of the Gaussian random variables using these random bits takes time . Summing up these variables takes less time than sampling all of them. After sampling, the cosine and sine functions of the RFF can be computed in time . There are input dimensions and output dimensions. So the total time complexity is .
For the case that , we only need to modify the embedding in the following way. We first round every entry so that its decimal part is finite. The rounded-off parts are small enough (e.g., we drop all digits after the -th position to the right of the decimal point) so that this only introduces small additive errors. Then we shift all the entries to be non-negative by adding a common shift . Then we multiply every entry of by a common factor s.t. every entry now has only an integer part. Notice that and can both be chosen according to , for example . And we can take to be . Then we apply and multiply by a factor . Denote this map as . Notice that this ensures that . Then we can apply the same construction and analysis as in the natural-number case above. This proves the theorem. ∎
Appendix G Remarks and Comparisons to Chen & Phillips (2017)
Our upper bound in Theorem 1.2 is not directly comparable to that of Chen & Phillips (2017), which gave dimension reduction results for Gaussian kernels. Chen & Phillips (2017) showed in their Theorem 7 a slightly better target dimension bound than ours, but it only works for the case of , where is the parameter in the Gaussian kernel. (This condition is not clearly mentioned in the theorem statement, but it is indeed used, and is mentioned one line above the statement in Chen & Phillips (2017).) For the other case of , their Theorem 14 gave a related bound, but their guarantee is quite different from ours. Specifically, their target dimension depends linearly on the input dimension . Hence, when is large (e.g., ), this Theorem 14 is worse than ours (for the case of ).
Finally, we remark that there might be subtle technical issues in the proof of Chen & Phillips (2017). Their Theorem 7 crucially uses a bound for moment generating functions that is established in their Lemma 5. However, we find several technical issues in the proof of Lemma 5 (found in their appendix). Specifically, the term in the last line above “But” (on page 17) should actually be . Even if one fixes this mistake (by negating the exponent), we can eventually only obtain a weaker bound of in the very last step, since the term is negated accordingly. Hence, it is not clear whether the claimed bound in Theorem 7 can still be obtained.