Local transfer learning from one data space to another
Abstract
A fundamental problem in manifold learning is to approximate a functional relationship from data chosen randomly from a probability distribution supported on a low-dimensional sub-manifold of a high-dimensional ambient Euclidean space. The manifold is essentially defined by the data set itself and is typically designed so that the data is dense on the manifold in some sense. The notion of a data space is an abstraction of a manifold encapsulating the essential properties that allow for function approximation. The problem of transfer learning (meta-learning) is to use the learning of a function on one data set to learn a similar function on a new data set. In terms of function approximation, this means lifting a function on one data space (the base data space) to another (the target data space). This viewpoint enables us to connect some inverse problems in applied mathematics (such as the inverse Radon transform) with transfer learning. In this paper, we examine the question of such lifting when the data is assumed to be known only on a part of the base data space. We are interested in determining subsets of the target data space on which the lifting can be defined, and in how the local smoothness of the function and that of its lifting are related.
1 Introduction
A fundamental problem in machine learning is the following. Data of the form is given, assumed to be sampled from an unknown probability distribution. The goal is to approximate the underlying function from the data. Typically, the points belong to an ambient Euclidean space of a very high dimension, leading to the so-called curse of dimensionality. One of the strategies to counter this “curse” is to adopt the manifold hypothesis; i.e., to assume that the points are located on an unknown low-dimensional submanifold of the ambient space. Examples of some well-known techniques in this direction, dimensionality reduction in particular, are Isomap tenenbaum2000global , maximum variance unfolding (MVU) (also called semidefinite programming (SDP)) weinberger2005nonlinear , locally linear embedding (LLE) roweis2000nonlinear , the local tangent space alignment method (LTSA) zhang2004principal , Laplacian eigenmaps (Leigs) belkin2003laplacian , Hessian locally linear embedding (HLLE) david2003hessian , diffusion maps (Dmaps) coifmanlafondiffusion , and the randomized anisotropic transform chuiwang2010 . A recent survey of these methods is given by Chui and Wang in chuidimred2015 . An excellent introduction to the subject of diffusion geometry can be found in the special issue achaspissue of Applied and Computational Harmonic Analysis, 2006. The application areas are too numerous to mention exhaustively. They include, for example, document analysis coifmanmauro2006 , face recognition niyogiface ; ageface2011 ; chuiwang2010 , hyperspectral imaging chuihyper , semi-supervised learning niyogi1 ; niyogi2 , image processing donoho2005image ; arjuna1 , cataloguing of galaxies donoho2002multiscale , and social networking bertozzicommunity .
A good deal of research in the theory of manifold learning deals with the problem of understanding the geometry of the data defined manifold. For example, it is shown in jones2010universal ; jones2008parameter that an atlas on the unknown manifold can be defined in terms of the heat kernel corresponding to the Laplace-Beltrami operator on the manifold. Other constructions of the atlas are given in chui_deep ; shaham2018provable ; schmidt2019deep with applications to the study of deep networks. Function approximation on manifolds based on scattered data (i.e., data points whose locations are not prescribed analytically) has been studied in detail in many papers, starting with mauropap , e.g., frankbern ; modlpmz ; eignet ; compbio ; heatkernframe ; mhaskar2020kernel . This theory was applied successfully in mhas_sergei_maryke_diabetes2017 to construct deep networks for predicting blood sugar levels based on continuous glucose monitoring devices.
A fundamental role in this theory is played by the heat kernel on the manifold corresponding to an appropriate elliptic partial differential operator. In coifmanmauro2006 ; heatkernframe , a multi-resolution analysis is constructed using the heat kernel. Another important tool is the theory of localized kernels based on the eigen-decomposition of the heat kernel. These were introduced in mauropap based on certain assumptions on the spectral function and the property of finite speed of wave propagation. In the context of manifolds, this latter property was proved in sikora2004riesz ; frankbern to be equivalent to the so-called Gaussian upper bounds on the heat kernels. Although such bounds have been studied in many contexts by many authors, e.g., grigoryan1995upper ; grigor1997gaussian ; davies1990heat ; kordyukov1991p , we could not locate a reference where such a bound was proved for a general smooth manifold. We have therefore supplied a proof in mhaskar2020kernel . In [tauberian, Theorem 4.3], we proved a very general recipe that yields localized kernels based on the Gaussian upper bound on the heat kernel in what we have termed a data defined space (or data space in some other papers).
The problem of transfer learning (or meta-learning) involves learning the parameters of an approximation process based on one data set, and using this information to quickly learn the corresponding parameters on another data set, e.g., valeriyasmartphone ; maskey2023transferability ; maurer2013sparse . In the context of manifold learning, a data set (point cloud) determines a manifold, so that different data sets would correspond to different manifolds. In the context of data spaces, we can therefore interpret transfer learning as “lifting” a function from one data space (the base data space) to another (the target data space). This viewpoint allows us to unify the topic of transfer learning with the study of some inverse problems in image/signal processing. For example, the problem of synthetic aperture radar (SAR) imaging can be described in terms of an inverse Radon transform nolan2002synthetic ; cheney2009fundamentals ; munson1983tomographic . The domain and range of the Radon transform are different, and hence, the problem amounts to approximating the actual image on one domain based on observations of its Radon transform, which are located on a different domain. Another application is in analyzing hyperspectral images changing with time coifmanhirn . A similar problem arises in analyzing the progress of Alzheimer’s disease from MRI images of the brain taken over time, where one is interested in the development of the cortical thickness as a function on the surface of the brain, a manifold which is changing over time kim2014multi .
Motivated by these applications and the paper coifmanhirn of Coifman and Hirn, we studied in tauberian the question of lifting a function from one data space to another, when certain landmarks from one data space were identified with those on the other data space. For example, it is known lerch2005focal that, in spite of the changing brain, one can think of each brain as being parametrized by an inner sphere, and that the cortical thickness at certain standard points based on this parametrization is important in the prognosis of the disease. In tauberian we investigated certain conditions on the two data spaces which allow the lifting of a function from one to the other, and analyzed the effect on the smoothness of the function as it is lifted.
In many applications, the data about the function is available only on a part of the base data space. The novel part of this paper is to investigate the following questions of interest: (1) to determine on which subsets of the target data space the lifting is defined, and (2) to determine how the local smoothness on the base data space translates into the local smoothness of the lifted function. In limited angle tomography, one observes the Radon transform on a limited part of a cylinder and needs to reconstruct the image as a function on a ball from this data. A rudimentary introduction to the subject is given in the book natterer2001mathematics of Natterer. We do not aim to solve the limited angle tomography problem itself, but we will study in detail an example motivated by the singular value decomposition of the Radon transform, which involves two different systems of orthogonal polynomials on the interval . The theory of transplantation theorems muckenhoupt1986transplantation deals with the following problem. We are given the coefficients in the expansion of a function on in terms of Jacobi polynomials with certain parameters (the base space expansion in our language), and we use them as the coefficients in an expansion in terms of Jacobi polynomials with respect to a different set of parameters (the target space in our language). Under what conditions on and on the parameters of the two Jacobi polynomial systems will the expansion in the target space converge, and in which spaces? While old-fashioned, the topic appears to be of recent interest diaz2021discrete ; arenas2019weighted . We will illustrate our general theory by obtaining a localized transplantation theorem for uniform approximation.
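To fix ideas, the following short numerical sketch (not part of the theory developed below; the test function and both parameter pairs are hypothetical choices made only for illustration) carries out the transplantation procedure just described: coefficients computed against orthonormalized Jacobi polynomials with one parameter pair are reused verbatim as coefficients against a second pair.

```python
import numpy as np
from scipy.special import eval_jacobi, roots_jacobi, gammaln

def jacobi_norm2(n, a, b):
    # Squared L2 norm of P_n^{(a,b)} with respect to the weight (1-x)^a (1+x)^b on [-1,1].
    return np.exp((a + b + 1) * np.log(2.0) + gammaln(n + a + 1) + gammaln(n + b + 1)
                  - np.log(2 * n + a + b + 1) - gammaln(n + 1) - gammaln(n + a + b + 1))

def onp(n, a, b, x):
    # Orthonormalized Jacobi polynomial of degree n.
    return eval_jacobi(n, a, b, x) / np.sqrt(jacobi_norm2(n, a, b))

def jacobi_coeffs(f, a, b, N):
    # Coefficients of f in the system {onp(k, a, b, .)}, computed by Gauss-Jacobi quadrature.
    x, w = roots_jacobi(2 * N, a, b)
    return np.array([np.sum(w * f(x) * onp(k, a, b, x)) for k in range(N)])

base, target = (0.0, 0.0), (0.5, -0.5)    # hypothetical base/target parameter pairs
f = lambda x: np.exp(x) * np.sin(3.0 * x)
N = 40
c = jacobi_coeffs(f, *base, N)            # base-space expansion coefficients

xs = np.linspace(-0.9, 0.9, 5)
base_sum = sum(c[k] * onp(k, *base, xs) for k in range(N))
transplanted = sum(c[k] * onp(k, *target, xs) for k in range(N))
print(np.max(np.abs(base_sum - f(xs))))   # small: the base expansion reproduces f
print(transplanted)                       # the transplanted series; its convergence is the question at issue
```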
In Section 2, we review certain important results in the context of a single data space (our abstraction of a manifold). In particular, we present a characterization of local approximation of functions on such spaces. In Section 3, we review the notion of joint spaces (introduced under a different name in tauberian ). The main new result of our paper is to study the lifting of a function from a subset (typically, a ball) on one data space to another. These results are discussed in Section 4. The proofs are given in Section 5. An essential ingredient in our constructions is the notion of localized kernels which, in turn, depend upon a Tauberian theorem. For the convenience of the reader, this theorem is presented in Appendix 5.1. Appendix 5.2 lists some important properties of Jacobi polynomials which are required in our examples.
2 Data spaces
As mentioned in the introduction, a good deal of research on manifold learning is devoted to the question of learning the geometry of the manifold. For the purpose of harmonic analysis and approximation theory on the manifold, we do not need the full strength of the differentiability structure on the manifold. Our own understanding of the correct hypotheses required to study these questions has evolved, resulting in a plethora of terminology such as data defined manifolds, admissible systems, data defined spaces, etc., culminating in our current understanding with the definition of a data space given in mhaskar2020kernel . For the sake of simplicity, we will restrict our attention in this paper to the case of compact spaces. We do not expect any serious problems in extending the theory to the general case, except for a great deal of technical details.
Thus, the setup is the following.
We consider a compact metric measure space with metric and a probability measure . We take to be a non-decreasing sequence of real numbers with and as , and to be an orthonormal set in . We assume that each is continuous. The elements of the space
(1) |
are called diffusion polynomials (of order ). We write . We introduce the following notation.
(2) |
If we define
(3) |
With this set up, the definition of a compact data space is the following.
Definition 2.1.
The tuple is called a (compact) data space if each of the following conditions is satisfied.
1. For each , , is compact.
2. (Ball measure condition) There exist and with the following property: For each , ,
(4) (In particular, .)
3. (Gaussian upper bound) There exist such that for all , ,
(5)
We refer to as the exponent for .
The primary example of a data space is, of course, a Riemannian manifold.
Let be a smooth, compact, connected Riemannian manifold (without boundary), be the geodesic distance on , be the Riemannian volume measure normalized to be a probability measure, be the sequence of eigenvalues of the (negative) Laplace-Beltrami operator on , and be the eigenfunction corresponding to the eigenvalue ; in particular, . We have proved in [mhaskar2020kernel, Appendix A] that the Gaussian upper bound is satisfied. Therefore, if the condition in Equation (4) is satisfied, then is a data space with exponent equal to the dimension of the manifold.
Remark 2.2.
In friedman2004wave , Friedman and Tillich give a construction for an orthonormal system on a graph which leads to a finite speed of wave propagation. It is shown in frankbern that this, in turn, implies the Gaussian upper bound. It is therefore an interesting question whether appropriate measures and distances can be defined on a graph so as to satisfy the assumptions of a data space; a discrete illustration is sketched below.
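The following is a minimal discrete illustration, not the Friedman–Tillich construction: every choice (the point cloud, the Gaussian affinity, the symmetric normalized graph Laplacian) is an assumption made only for the sketch. Its point is that the eigenvectors of a graph Laplacian built from a point cloud form an orthonormal system, a discrete stand-in for the system in Definition 2.1; whether natural measures and distances on such a graph satisfy the ball measure condition and the Gaussian upper bound is exactly the question raised in the remark.

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)        # a point cloud on the 2-sphere

d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
W = np.exp(-d2 / 0.1)                                     # Gaussian affinity weights
np.fill_diagonal(W, 0.0)
D = W.sum(axis=1)
L = np.eye(len(pts)) - W / np.sqrt(np.outer(D, D))        # symmetric normalized graph Laplacian

lam, phi = np.linalg.eigh(L)                              # lam: candidate eigenvalues, phi: eigenvectors
print(np.allclose(phi.T @ phi, np.eye(len(pts))))         # True: the columns form an orthonormal system
print(lam[:6])                                            # low end of the spectrum
```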
The constant convention. In the sequel, will denote generic positive constants depending only on the fixed quantities under discussion such as , , , the various smoothness parameters and the filters to be introduced. Their value may be different at different occurrences, even within a single formula. The notation means , means and means .
In this example, we let and for we simply define the distance as
(6) |
We will consider the so-called trigonometric functions nowak2011sharp
(7) |
where are orthonormalized Jacobi polynomials defined as in Appendix 5.2 and . We define
(8) |
We see that a change of variables in Equation (96) results in the following orthogonality condition
(9) |
So our orthonormal set of functions with respect to will be . It was proven in nowak2011sharp that with
(10) |
we have
(11) |
In conclusion,
(12) |
is a data space with exponent .
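As a numerical sanity check of the orthogonality condition (9), the sketch below assumes the standard substitution x = cos(t) behind this example; the parameters are illustrative, chosen with a, b ≥ -1/2 so that the transformed weight is bounded.

```python
import numpy as np
from scipy.special import eval_jacobi, gammaln
from scipy.integrate import quad

a, b = 0.5, 0.5                 # illustrative parameters with a, b >= -1/2

def p(k, x):
    # Orthonormalized Jacobi polynomial p_k^{(a,b)}.
    logh = ((a + b + 1) * np.log(2.0) + gammaln(k + a + 1) + gammaln(k + b + 1)
            - np.log(2 * k + a + b + 1) - gammaln(k + 1) - gammaln(k + a + b + 1))
    return eval_jacobi(k, a, b, x) * np.exp(-0.5 * logh)

def weight(t):
    # Image of the Jacobi weight (1-x)^a (1+x)^b dx under the substitution x = cos(t).
    return 2.0 ** (a + b + 1) * np.sin(t / 2) ** (2 * a + 1) * np.cos(t / 2) ** (2 * b + 1)

for j, k in [(2, 2), (2, 5), (4, 4)]:
    val, _ = quad(lambda t: p(j, np.cos(t)) * p(k, np.cos(t)) * weight(t), 0.0, np.pi)
    print(j, k, round(val, 8))   # approximately 1 when j == k and 0 otherwise
```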
The following example illustrates how a manifold with boundary can be transformed into a closed manifold as in Example 2. We will use the notation and facts from Appendix 5.2 without always referring to them explicitly. We adopt the notation
(13) |
Let denote the volume measure of , normalized to be a probability measure. Let be the space of the restrictions to of homogeneous harmonic polynomials of degree on variables, and be an orthonormal (with respect to ) basis for . The polynomials are eigenfunctions of the Laplace-Beltrami operator on the manifold with eigenvalues . The geodesic distance between is , so the Gaussian upper bound for manifolds takes the form
(14) |
As a result, is a data space with dimension .
Now we consider
We can identify with as follows. Any point has the form for some , . We write . With this identification, is parameterized by and we define
(15) |
where is the probability volume measure on , and is the probability volume measure on . It is also convenient to define the distance on by
(16) |
All spherical harmonics of degree are even functions on . So with the identification of measures as above, one can represent the even spherical harmonics as an orthonormal system of functions on . That is, by defining
(17) |
we have
(18) |
To show the Gaussian upper bound for on , we first observe that, in view of the addition formula (101) and Equation (98),
(19) |
In light of Equation (95) we define
(20) |
which, conveniently, does not depend upon . Using the corresponding Gaussian bound for the Jacobi heat kernel (Appendix 5.2), we see that for
(21) |
Therefore, is a data space with exponent .
In this section, we will assume to be a fixed data space and omit its mention from the notation. We will mention it explicitly in later parts of the paper in order to avoid confusion. Next, we define smoothness classes of functions on . In the absence of any differentiability structure, we do this in a manner that is customary in approximation theory. We first define the degree of approximation of a function by
(22) |
We find it convenient to denote by the space ; e.g., in the manifold case, if and . In the case of Example 2, we need to restrict ourselves to even functions.
Definition 2.3.
Let , .
(a) For , we define
(23) |
and note that
(24) |
The space comprises all for which .
(b)
We write .
If is a ball in , comprises functions which are supported on .
(c) If , the space comprises functions such that there exists with the property that for every , .
If , the space ; i.e., comprises functions which are in for each .
A central theme in approximation theory is to characterize the smoothness spaces in terms of the degree of approximation from some spaces; in our case we consider ’s.
For this purpose, we define some localized kernels and operators.
The kernels are defined by
(25) |
where is a compactly supported function.
The operators corresponding to the kernels are defined by
(26) |
where
(27) |
The following proposition recalls an important property of these kernels. Proposition 2.4 is proved in mauropap , and more recently in much greater generality in [tauberian, Theorem 4.3].
Proposition 2.4.
Let be an integer, be an even, times continuously differentiable, compactly supported function. Then for every , ,
(28) |
where the constant may depend upon and , but not on , , or .
In the remainder of this paper, we fix a filter ; i.e., an infinitely differentiable function , such that for , for . The domain of the filter can be extended to by setting . Since is fixed, its mention will be omitted from the notation unless we feel that this would cause confusion. The following theorem gives a crucial property of the operators, proved in several papers of ours in different contexts; see mhaskar2020kernel for a recent proof.
Theorem 2.5.
Let . If , then . Also, for any with ,
(29) |
If , and , then
(30) |
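The construction in Equations (25)–(27) and the role of the filter can be made concrete on the simplest example we can write down explicitly. The sketch below is an illustration only: it assumes the circle with the trigonometric system (with the k-th eigenvalue equal to k) and a particular smooth cutoff in the role of the filter; none of these choices come from the paper.

```python
import numpy as np

def bump(s):
    s = np.asarray(s, dtype=float)
    return np.where(s > 0, np.exp(-1.0 / np.maximum(s, 1e-12)), 0.0)

def H(t):
    # A C-infinity filter: equal to 1 on [0, 1/2], equal to 0 on [1, infinity), smooth in between.
    t = np.asarray(t, dtype=float)
    num, den = bump(1.0 - t), bump(1.0 - t) + bump(t - 0.5)
    return np.where(t <= 0.5, 1.0, np.where(t >= 1.0, 0.0, num / den))

def Phi(n, diff):
    # Phi_n(x, y) = sum_k H(lambda_k / n) phi_k(x) phi_k(y) for the trigonometric system,
    # orthonormal with respect to normalized arc length; on the circle it depends only on diff = x - y.
    k = np.arange(1, n + 1)
    return 1.0 + 2.0 * np.sum(H(k / n) * np.cos(np.multiply.outer(diff, k)), axis=-1)

n = 64
for d in [0.05, 0.2, 0.5, 1.0, 2.0]:
    print(d, Phi(n, np.array([d]))[0])     # rapid decay away from the diagonal, in the spirit of (28)

# sigma_n(f)(x) = integral of Phi_n(x, y) f(y) dmu(y), with mu the normalized arc length,
# discretized here by an equispaced Riemann sum.
ygrid = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
fvals = np.abs(np.sin(ygrid))
for x0 in [0.7, 1.3, 2.9]:
    print(x0, np.mean(Phi(n, x0 - ygrid) * fvals), abs(np.sin(x0)))   # sigma_n(f)(x0) vs f(x0)
```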
While Theorem 2.5 gives, in particular, a characterization of the global smoothness spaces , the characterization of local smoothness requires two more assumptions: the partition of unity and the product assumption.
Definition 2.6 (Partition of unity).
We say that a set has a partition of unity if for every , there exists a countable family of functions with the following properties:
1. Each is supported on for some .
2. For every and , .
3. For every there exists a finite subset (with cardinality bounded independently of ) such that for all
(31)
Definition 2.7 (Product assumption).
We say that a data space satisfies the product assumption if there exists and a family such that for every ,
(32) |
If instead for every and we have , then we say that satisfies the strong product assumption.
In the most important manifold case, the partition of unity assumption is always satisfied [docarmo_riemannian, Chapter 0, Theorem 5.6]. It is shown in geller2011band ; modlpmz that the strong product assumption is satisfied if the ’s are eigenfunctions of certain differential operators on a Riemannian manifold and the ’s are the corresponding eigenvalues. We do not know of any example where this property does not hold, yet we cannot prove that it holds in general. Hence, we have listed it as an assumption.
Our characterization of local smoothness (compbio ; heatkernframe ; mhaskar2020kernel ) is the following.
Theorem 2.8.
Let , , , . We assume the partition of unity and the product assumption.
Then the following are equivalent.
(a) .
(b) There exists a ball centered at such that
(33) |
A direct corollary is the following.
Corollary 2.9.
Let , , , be a compact subset of . We assume the partition of unity and the product assumption.
Then the following are equivalent.
(a) .
(b) There exists such that
(34) |
3 Joint data spaces
In order to motivate our definitions in this section, we first consider a couple of examples.
Let , be two data spaces with exponent . We denote the heat kernel in each case by
In the paper coifmanhirn , Coifman and Hirn assumed that , , and proposed the diffusion distance between points to be the square root of
Writing, in this example only,
(35) |
we get
(36) |
Furthermore, the Gaussian upper bound conditions imply that
(37) |
Writing, in this example only,
we observe that for any ,
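The diffusion distance just described can be explored numerically. The sketch below is purely illustrative and makes several assumptions not in the paper: the two data spaces are represented by the same finite point set carrying two graph heat kernels (built with two different affinity scales as a stand-in for two snapshots), and the common measure is the empirical one.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
pts = rng.uniform(size=(150, 2))
d2 = np.sum((pts[:, None] - pts[None, :]) ** 2, axis=-1)

def heat_kernel(eps, t):
    W = np.exp(-d2 / eps)
    L = np.diag(W.sum(axis=1)) - W        # (unnormalized) graph Laplacian
    return expm(-t * L)                   # K_t = exp(-t L); row i plays the role of K_t(x_i, .)

K1, K2 = heat_kernel(0.05, 0.5), heat_kernel(0.10, 0.5)
mu = np.full(len(pts), 1.0 / len(pts))    # common empirical probability measure

def diffusion_distance(i, j):
    # The L^2(mu) norm of K1_t(x_i, .) - K2_t(x_j, .), i.e. the square root referred to above.
    return np.sqrt(np.sum(mu * (K1[i] - K2[j]) ** 2))

print(diffusion_distance(0, 0), diffusion_distance(0, 75))
```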
In this example we let for and assume that . Then we select the following two data spaces as defined in Example 2
(38) |
Since both spaces already have the same distance, we will define a joint distance for the systems accordingly:
(39) |
Similar to Example 3 above, we are considering two data spaces with the same underlying space and measure. However, we now proceed in a different manner. Let us denote
(40) |
Let and . Then we define
(41) |
The orthogonality of the Jacobi polynomials tells us that at least when or . Furthermore, we have the following two sums
(42) |
We define , utilize the Gaussian upper bound property for and Equation (42) to deduce as in Example 3 that
(43) |
We note (cf. [mhaskar2020kernel, Lemma 5.2]) that
(44) |
Motivated by these examples, we now give a series of definitions, culminating in Definition 3.3. First, we define the notion of a joint distance.
Definition 3.1.
Let , be metric spaces, with each having a metric . A function will be called a joint distance if the following generalized triangle inequalities are satisfied for and :
(45) |
For convenience of notation we denote . Then for , , , , , we define
(46) |
We recall here that an infimum over an empty set is defined to be .
Definition 3.2.
Let (connection coefficients) and (joint eigenvalues) be bi-infinite matrices. For , , , the joint heat kernel is defined formally by
(47) |
Definition 3.3.
For , let be compact data spaces. With the notation above, assume each and that for any , the set is finite. A joint (compact) data space is a tuple
where each of the following conditions is satisfied for some :
1. (Joint regularity) There exist such that
(48)
2. (Variation bound) For each ,
(49)
3. (Joint Gaussian upper bound) The limit in (47) exists for all , , and
(50)
We refer to as the (joint) exponents of the joint data space.
The kernel corresponding to the one defined in Equation (25) is the following, where is a compactly supported function.
(51) |
For and , we also define
(52) |
The localization property of the kernels is given in the following proposition (cf. [tauberian, Eqn. (4.5)]).
Proposition 3.4.
Let be an integer, be an even, times continuously differentiable, compactly supported function. Then for every , , ,
(53) |
where the constant involved may depend upon , and , but not on , , .
In the sequel, we will fix to be the filter introduced in Section 2, and will omit its mention from all notation. Also, we take to be fixed, although we may put additional conditions on as needed. As before, all constants may depend upon and .
In the remainder of this paper, we will take , work only with continuous functions on or , and use to denote the supremum norm of on a set . Accordingly, we will omit the index from the notation for the smoothness classes; e.g., we will write instead of . The results in the sequel are similar in the case where , thanks to the Riesz-Thorin interpolation theorem, but notationally more cumbersome without adding any apparent new insight.
We end the section with a condition on the operator defined in Equation (52) that is useful for our purposes.
Definition 3.5 (Polynomial preservation condition).
Let be a joint data space. We say the polynomial preservation condition is satisfied if there exists some with the property that if , then for all .
Remark 3.6.
The polynomial preservation condition is satisfied if, for any , we have the following inclusion:
(54) |
We utilize the same notation as in Examples 2 and 3. We now see, in light of Definition 3.3, that is a joint data space with exponents . It is clear that both the partition of unity and the strong product assumption hold in these spaces. One may also recall that at least whenever , so there exists such that Equation (54) is satisfied. As a result, we conclude that the polynomial preservation condition holds.
4 Local approximation in joint data spaces
In this section, we assume a fixed joint data space as in Section 3. We are interested in the following questions. Suppose , and we have information about only in a neighborhood of a compact set . Under what conditions on and a subset can be lifted to a function on ? Moreover, how does the local smoothness of on depend upon the local smoothness of on ? We now give the definitions with which we address these questions.
Definition 4.1.
Given , we define the lifted function to be the limit
(55) |
if the limit exists.
Definition 4.2.
Let and be a compact subset with the property that there exists a compact subset such that
(56) |
for some . We then define the image set of by
(57) |
If no such set exists, then we define .
Remark 4.3.
We now state our main theorem. Although there is no explicit mention of in the statement of the theorem, Remark 4.5 and Example 4 clarify the benefit of such a construction.
Theorem 4.4.
Let be a joint data space with exponents .
We assume that the polynomial preservation condition holds with parameter . Suppose has a partition of unity.
(a) Let , satisfying
(59) |
Then as defined in Definition 4.1 exists on and for we have
(60) |
In particular, if satisfies the strong product assumption, has a partition of unity, and is given such that for all , then .
(b)
If additionally, with , then is continuous on and for , we have .
Remark 4.5.
Given the assumptions of Theorem 4.4, is not guaranteed to be continuous on the entirety of (or even to be defined outside of ). As a result, in the setting of Theorem 4.4(b) we cannot say that belongs to any of the smoothness classes defined in this paper. However, we can still say, for instance, that
(61) |
(this can be seen directly by taking such that when and when ). Consequently, if it happens that , then .
We now conclude the running examples from Sections 2 and 3 by demonstrating how one may utilize Theorem 4.4. We assume the notation given in each of the prior examples. First, we find the image set for given some and . We let , in correspondence with Definition 4.2, and define
(62) |
Then we can let . By Theorem 4.4(a), can be lifted to (where we note that Equation (59) is automatically satisfied due to ). Since , we have . If we suppose for some (with chosen so is sufficiently large), then Theorem 4.4(b) informs us that for . Lastly, as a result of Equation (61), we can conclude
(63) |
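A numerical illustration of the lifting in Definition 4.1 for this running example is sketched below. It is not the paper's construction verbatim: for the sketch we assume a diagonal connection (the degree-k element of the base Jacobi system is paired with the degree-k element of the target system), an explicit smooth cutoff in the role of the filter, and parameter pairs and a test function chosen only for illustration. The point is that the filtered, transplanted partial sums stabilize on a subinterval away from the singularity of f, in the spirit of the existence of the limit in (55).

```python
import numpy as np
from scipy.special import eval_jacobi, roots_jacobi, gammaln

def onp(k, a, b, x):
    # Orthonormalized Jacobi polynomial of degree k.
    logh = ((a + b + 1) * np.log(2.0) + gammaln(k + a + 1) + gammaln(k + b + 1)
            - np.log(2 * k + a + b + 1) - gammaln(k + 1) - gammaln(k + a + b + 1))
    return eval_jacobi(k, a, b, x) * np.exp(-0.5 * logh)

def H(t):
    # A smooth low-pass filter of the kind fixed in Section 2: 1 on [0, 1/2], 0 on [1, infinity).
    t = np.asarray(t, dtype=float)
    up = np.where(1.0 - t > 0, np.exp(-1.0 / np.maximum(1.0 - t, 1e-12)), 0.0)
    dn = up + np.where(t - 0.5 > 0, np.exp(-1.0 / np.maximum(t - 0.5, 1e-12)), 0.0)
    return np.where(t <= 0.5, 1.0, np.where(t >= 1.0, 0.0, up / dn))

base, target = (0.0, 0.0), (1.0, 1.0)     # hypothetical base/target parameter pairs
f = lambda x: np.abs(x - 0.6) ** 1.5      # smooth away from x = 0.6

def lifted_partial_sum(n, x):
    xq, wq = roots_jacobi(2 * n, *base)   # quadrature for the base-space coefficients
    fhat = np.array([np.sum(wq * f(xq) * onp(k, *base, xq)) for k in range(n)])
    return sum(H(k / n) * fhat[k] * onp(k, *target, x) for k in range(n))

x = np.linspace(-0.5, 0.0, 5)             # a subinterval well away from the singularity at 0.6
for n in (32, 64, 128):
    print(n, np.round(lifted_partial_sum(n, x), 6))   # the values stabilize as n grows
```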
5 Proofs
In this section, we give a proof of Theorem 4.4 after proving some preparatory results. We assume that is a joint data space with exponents .
Lemma 5.1.
Let , . We have
(64) |
In particular,
(65) |
Proof 5.2.
In this proof only, define
(66) |
Then the joint regularity condition (48) implies for each . We can also see by definition that when , then . Since , we deduce that for ,
(67) |
This completes the proof of (64) when . The joint regularity condition and Proposition 3.4 show further that
(68) |
We use in the estimates (67) and (68) and add the estimates to arrive at both (65) and the case of (64).
The next lemma gives a local bound on the kernels defined in (52).
Lemma 5.3.
Proof 5.4.
Lemma 5.5.
We assume the polynomial preservation condition with parameter . Let satisfy (59). Then
(71) |
exists on . Furthermore, when , we have
(72) |
Proof 5.6.
Now we give the proof of Theorem 4.4.
Proof 5.7.
In this proof only, denote . We can deduce from Theorem 2.5 and Lemma 5.3 that for ,
(78) |
The polynomial preservation condition (Definition 3.5) gives us that
(79) |
Then, utilizing the preceding estimates and Lemma 5.5, we see
(80) |
This proves Equation (60).
In particular, when and , the only with non-zero coefficients in Equation (52) are those where , which implies and further that . This completes the proof of part (a).
In the proof of part (b), we may assume without loss of generality that . We can see from Corollary 2.9 that for each
(81) |
which implies that whenever we have
(82) |
Further, the assumption that gives us
(83) |
Since , we have from Corollary 2.9 that
(84) |
Using Equation (60) from part (a), we see
(85) |
Thus, is a sequence of continuous functions converging uniformly to on , so itself is continuous on . Let us define for each n such that . Theorem 2.5 and the strong product assumption (Definition 2.7) allow us to write
(86) |
Using Equations (65) and (86), Theorem 2.5, and the fact that is supported on , we can deduce
(87) |
In view of Equations (85) and (87), we can conclude that
(88) |
Thus, , completing the proof of part (b).
Appendix
5.1 Tauberian theorem
For the convenience of the reader, we reproduce the Tauberian theorem from [tauberian, Theorem 4.3].
We recall that if is an extended complex valued Borel measure on , then its total variation measure is defined for a Borel set by
where the sum is over a partition of comprising Borel sets, and the supremum is over all such partitions.
A measure on is called an even measure if for all , and . If is an extended complex valued measure on , and , we define a measure on by
and observe that is an even measure such that for . In the sequel, we will assume that all measures on which do not associate a nonzero mass with the point are extended in this way, and will abuse the notation also to denote the measure . In the sequel, the phrase “measure on ” will refer to an extended complex valued Borel measure having bounded total variation on compact intervals in , and similarly for measures on .
Our main Tauberian theorem is the following.
Theorem 5.8.
Let be an extended complex valued measure on , and . We assume that there exist , such that each of the following conditions is satisfied.
1.
(89)
2. There are constants , such that
(90)
Let , be an integer, and suppose that there exists a measure such that
(91) |
and
(92) |
Then for ,
(93) |
5.2 Jacobi polynomials
For , and integer , the Jacobi polynomials are defined by Rodrigues’ formula [szego, Formulas (4.3.1), (4.3.4)]
(94) |
where denotes . The Jacobi polynomials satisfy the following well-known differential equation:
(95) |
Each is a polynomial of degree with positive leading coefficient, satisfying the orthogonality relation
(96) |
and
(97) |
It follows that . In particular, is an even polynomial, and is an odd polynomial. We note (cf. [szego, Theorem 4.1]) that
(98) |
It is known nowak2011sharp that for and ,
(99) |
We note that when , this yields
(100) |
If is an integer, and is an orthonormal basis for the space of restrictions to the sphere (with respect to the probability volume measure) of -variate homogeneous harmonic polynomials of total degree , then one has the well-known addition formula mullerbk and [batemanvol2, Chapter XI, Theorem 4] connecting ’s with Jacobi polynomials defined in (94):
(101) |
where and .
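As a numerical sanity check of the addition formula in the classical three-dimensional case (using SciPy's spherical harmonics, which are normalized with respect to surface measure rather than the probability measure used above; the two normalizations differ only by a constant factor), the sketch below verifies the familiar form of the identity on the 2-sphere.

```python
import numpy as np
from scipy.special import sph_harm, eval_legendre

rng = np.random.default_rng(2)

def random_direction():
    u = rng.normal(size=3)
    u /= np.linalg.norm(u)
    theta = np.arctan2(u[1], u[0]) % (2.0 * np.pi)   # azimuthal angle
    phi = np.arccos(u[2])                            # polar angle
    return u, theta, phi

ell = 5
x, tx, px = random_direction()
y, ty, py = random_direction()
# sum_m Y_l^m(x) * conj(Y_l^m(y)) = ((2l+1) / (4*pi)) * P_l(x . y)
lhs = sum(sph_harm(m, ell, tx, px) * np.conj(sph_harm(m, ell, ty, py)) for m in range(-ell, ell + 1))
rhs = (2 * ell + 1) / (4.0 * np.pi) * eval_legendre(ell, float(np.dot(x, y)))
print(lhs.real, rhs)   # the two sides agree to machine precision
```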
References
- [1] A. Arenas, Ó. Ciaurri, and E. Labarga. A weighted transplantation theorem for Jacobi coefficients. Journal of Approximation Theory, 248:105297, 2019.
- [2] H. Bateman, A. Erdélyi, W. Magnus, F. Oberhettinger, and F. G. Tricomi. Higher transcendental functions, volume 2. McGraw-Hill New York, 1955.
- [3] M. Belkin, I. Matveeva, and P. Niyogi. Regularization and semi-supervised learning on large graphs. In Learning theory, pages 624–638. Springer, 2004.
- [4] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural computation, 15(6):1373–1396, 2003.
- [5] M. Belkin and P. Niyogi. Semi-supervised learning on Riemannian manifolds. Machine learning, 56(1-3):209–239, 2004.
- [6] A. L. Bertozzi and A. Flenner. Diffuse interface models on graphs for classification of high dimensional data. Multiscale Modeling & Simulation, 10(3):1090–1118, 2012.
- [7] M. Cheney and B. Borden. Fundamentals of radar imaging. SIAM, 2009.
- [8] C. K. Chui and D. L. Donoho. Special issue: Diffusion maps and wavelets. Appl. and Comput. Harm. Anal., 21(1), 2006.
- [9] C. K. Chui and H. N. Mhaskar. Deep nets for local manifold learning. Frontiers in Applied Mathematics and Statistics, 4:12, 2018.
- [10] C. K. Chui and J. Wang. Dimensionality reduction of hyperspectral imagery data for feature classification. In Handbook of Geomathematics, pages 1005–1047. Springer, 2010.
- [11] C. K. Chui and J. Wang. Randomized anisotropic transform for nonlinear dimensionality reduction. GEM-International Journal on Geomathematics, 1(1):23–50, 2010.
- [12] C. K. Chui and J. Wang. Nonlinear methods for dimensionality reduction. In Handbook of Geomathematics, pages 1–46. Springer, 2015.
- [13] R. R. Coifman and M. J. Hirn. Diffusion maps for changing data. Applied and Computational Harmonic Analysis, 36(1):79–107, 2014.
- [14] R. R. Coifman and S. Lafon. Diffusion maps. Applied and computational harmonic analysis, 21(1):5–30, 2006.
- [15] R. R. Coifman and M. Maggioni. Diffusion wavelets. Applied and Computational Harmonic Analysis, 21(1):53–94, 2006.
- [16] D. L. Donoho and C. Grimes. Hessian eigenmaps: new locally linear embedding techniques for high dimensional data. Technical Report TR2003-08, Dept. of Statistics, 2003.
- [17] E. B. Davies. Heat kernels and spectral theory, volume 92. Cambridge University Press, 1990.
- [18] A. Díaz-González, F. Marcellán, H. Pijeira-Cabrera, and W. Urbina. Discrete–continuous Jacobi–Sobolev spaces and Fourier series. Bulletin of the Malaysian Mathematical Sciences Society, 44:571–598, 2021.
- [19] M. P. do Carmo Valero. Riemannian geometry. Birkhäuser, 1992.
- [20] D. L. Donoho and C. Grimes. Image manifolds which are isometric to Euclidean space. Journal of mathematical imaging and vision, 23(1):5–24, 2005.
- [21] D. L. Donoho, O. Levi, J.-L. Starck, and V. Martinez. Multiscale geometric analysis for 3d catalogs. In Astronomical Telescopes and Instrumentation, pages 101–111. International Society for Optics and Photonics, 2002.
- [22] M. Ehler, F. Filbir, and H. N. Mhaskar. Locally learning biomedical data using diffusion frames. Journal of Computational Biology, 19(11):1251–1264, 2012.
- [23] F. Filbir and H. N. Mhaskar. A quadrature formula for diffusion polynomials corresponding to a generalized heat kernel. Journal of Fourier Analysis and Applications, 16(5):629–657, 2010.
- [24] F. Filbir and H. N. Mhaskar. Marcinkiewicz–Zygmund measures on manifolds. Journal of Complexity, 27(6):568–596, 2011.
- [25] J. Friedman and J.-P. Tillich. Wave equations for graphs and the edge-based Laplacian. Pacific Journal of Mathematics, 216(2):229–266, 2004.
- [26] D. Geller and I. Z. Pesenson. Band-limited localized Parseval frames and Besov spaces on compact homogeneous manifolds. Journal of Geometric Analysis, 21(2):334–371, 2011.
- [27] A. Grigor’yan. Upper bounds of derivatives of the heat kernel on an arbitrary complete manifold. Journal of Functional Analysis, 127(2):363–389, 1995.
- [28] A. Grigor’yan. Gaussian upper bounds for the heat kernel on arbitrary manifolds. J. Diff. Geom., 45:33–52, 1997.
- [29] X. He, S. Yan, Y. Hu, P. Niyogi, and H.-J. Zhang. Face recognition using Laplacianfaces. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 27(3):328–340, 2005.
- [30] P. W. Jones, M. Maggioni, and R. Schul. Manifold parametrizations by eigenfunctions of the Laplacian and heat kernels. Proceedings of the National Academy of Sciences, 105(6):1803–1808, 2008.
- [31] P. W. Jones, M. Maggioni, and R. Schul. Universal local parametrizations via heat kernels and eigenfunctions of the Laplacian. Ann. Acad. Sci. Fenn. Math., 35:131–174, 2010.
- [32] W. H. Kim, V. Singh, M. K. Chung, C. Hinrichs, D. Pachauri, O. C. Okonkwo, S. C. Johnson, A. D. N. Initiative, et al. Multi-resolutional shape features via non-euclidean wavelets: Applications to statistical analysis of cortical thickness. NeuroImage, 93:107–123, 2014.
- [33] Y. A. Kordyukov. –theory of elliptic differential operators on manifolds of bounded geometry. Acta Applicandae Mathematica, 23(3):223–260, 1991.
- [34] J. P. Lerch, J. C. Pruessner, A. Zijdenbos, H. Hampel, S. J. Teipel, and A. C. Evans. Focal decline of cortical thickness in Alzheimer’s disease identified by computational neuroanatomy. Cerebral cortex, 15(7):995–1001, 2005.
- [35] Z. Li, U. Park, and A. K. Jain. A discriminative model for age invariant face recognition. Information Forensics and Security, IEEE Transactions on, 6(3):1028–1037, 2011.
- [36] M. Maggioni and H. N. Mhaskar. Diffusion polynomial frames on metric measure spaces. Applied and Computational Harmonic Analysis, 24(3):329–353, 2008.
- [37] S. Maskey, R. Levie, and G. Kutyniok. Transferability of graph neural networks: an extended graphon approach. Applied and Computational Harmonic Analysis, 63:48–83, 2023.
- [38] A. Maurer, M. Pontil, and B. Romera-Paredes. Sparse coding for multitask and transfer learning. In International conference on machine learning, pages 343–351. PMLR, 2013.
- [39] H. Mhaskar. A unified framework for harmonic analysis of functions on directed graphs and changing data. Applied and Computational Harmonic Analysis, 44(3):611–644, 2018.
- [40] H. N. Mhaskar. Eignets for function approximation on manifolds. Applied and Computational Harmonic Analysis, 29(1):63–87, 2010.
- [41] H. N. Mhaskar. A generalized diffusion frame for parsimonious representation of functions on data defined manifolds. Neural Networks, 24(4):345–359, 2011.
- [42] H. N. Mhaskar. Kernel-based analysis of massive data. Frontiers in Applied Mathematics and Statistics, 6:30, 2020.
- [43] H. N. Mhaskar, S. V. Pereverzyev, and M. D. van der Walt. A deep learning approach to diabetic blood glucose prediction. Frontiers in Applied Mathematics and Statistics, 3:14, 2017.
- [44] B. Muckenhoupt. Transplantation theorems and multiplier theorems for Jacobi series. American Mathematical Soc., 1986.
- [45] C. Müller. Spherical harmonics, volume 17. Springer, 2006.
- [46] D. C. Munson, J. D. O’Brien, and W. K. Jenkins. A tomographic formulation of spotlight-mode synthetic aperture radar. Proceedings of the IEEE, 71(8):917–925, 1983.
- [47] F. Natterer. The mathematics of computerized tomography. SIAM, 2001.
- [48] V. Naumova, L. Nita, J. U. Poulsen, and S. V. Pereverzyev. Meta-learning based blood glucose predictor for diabetic smartphone app, 2014.
- [49] C. J. Nolan and M. Cheney. Synthetic aperture inversion. Inverse Problems, 18(1):221, 2002.
- [50] A. Nowak and P. Sjögren. Sharp estimates of the Jacobi heat kernel. arXiv preprint arXiv:1111.3145, 2011.
- [51] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.
- [52] J. Schmidt-Hieber. Deep ReLU network approximation of functions on a manifold. arXiv preprint arXiv:1908.00695, 2019.
- [53] U. Shaham, A. Cloninger, and R. R. Coifman. Provable approximation properties for deep neural networks. Applied and Computational Harmonic Analysis, 44(3):537–557, 2018.
- [54] A. Sikora. Riesz transform, Gaussian bounds and the method of wave equation. Mathematische Zeitschrift, 247(3):643–662, 2004.
- [55] G. Szegö. Orthogonal polynomials. In Colloquium publications/American mathematical society, volume 23. Providence, 1975.
- [56] J. B. Tenenbaum, V. De Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
- [57] Y. van Gennip, B. Hunter, R. Ahn, P. Elliott, K. Luh, M. Halvorson, S. Reid, M. Valasik, J. Wo, G. E. Tita, et al. Community detection using spectral clustering on sparse geosocial data. SIAM Journal on Applied Mathematics, 73(1):67–83, 2013.
- [58] K. Q. Weinberger, B. D. Packer, and L. K. Saul. Nonlinear dimensionality reduction by semidefinite programming and kernel matrix factorization. In Proceedings of the tenth international workshop on artificial intelligence and statistics, pages 381–388. Citeseer, 2005.
- [59] Z.-Y. Zhang and H.-Y. Zha. Principal manifolds and nonlinear dimensionality reduction via tangent space alignment. Journal of Shanghai University (English Edition), 8(4):406–424, 2004.