
Geometric signals

Tatyana Barron Department of Mathematics, University of Western Ontario, London Ontario N6A 5B7, Canada [email protected]
Abstract.

In signal processing, a signal is a function. Conceptually, replacing a function by its graph, and extending this approach to a more abstract setting, we define a signal as a submanifold $M$ of a Riemannian manifold (with corners) that satisfies additional conditions. In particular, it is a relative cobordism between two manifolds with boundary. We define energy as the integral of the distance function to the first of these boundary manifolds. Composition of signals is composition of cobordisms. A "time variable" can appear explicitly if it is explicitly given (for example, if the manifold is of the form $\Sigma\times[0,1]$). Otherwise, there is no designated "time dimension", although the cobordism may implicitly indicate the presence of dynamics. We interpret a local deformation of the metric as noise. The assumptions on $M$ allow us to define a map $M\to M$ that we call a Fourier transform. We prove inequalities that illustrate the properties of energy of signals in this setting.

Keywords: energy, information, Fourier transform, Riemannian manifold

1. Introduction

This paper is about a geometric generalization of the concept of a signal. The goal is to build an abstract mathematical model of transmitting information via geometric objects which are, informally speaking, higher-dimensional analogues of sound waveforms. We are not taking the most obvious path of defining a signal as an $\mathbb{R}^n$-valued function on an open subset of a manifold. Instead, we take a more intuitive approach with cobordisms. This allows us to treat quite general geometric objects as information.

In practical applications, signal processing involves sampling, analog-to-digital conversion, quantization, and mathematical techniques that are needed because of the hardware used to receive and analyze the signal. We concentrate our attention on the geometric concept of a signal, rather than on digitizing such a signal or further processing the resulting data.

For background discussion, let's consider the word "dot". It can be transmitted as a finite sequence of three images, of the letters "D", "O", "T", respectively. Each of the three images can be converted into digital data, e.g. by drawing the letters on grid paper via shading appropriate squares and entering 1 in the corresponding matrix entry for each shaded square and 0 for every blank square (Fig. 1).

Figure 1. The "empty squares" are the zero entries of the rectangular matrix.

Alternatively, the three letters can be represented by the binary representations of the numbers 4, 15, 20 (their positions in the English alphabet). Or, instead, one can send the word "dot" as an audio file (a recording of a person saying this word), or the sound waveform of this recording, which can be subdivided into three parts, one for each of the three letters of the word (Fig. 2).
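The alphabet-position encoding just described can be sketched in a few lines; the function names below are our own, for illustration only.

```python
# Sketch of the alphabet-position encoding described above: each letter is
# replaced by its 1-based position in the English alphabet (D -> 4, O -> 15,
# T -> 20), and each position by its fixed-width binary representation.

def alphabet_position(letter: str) -> int:
    """1-based position of an ASCII letter in the English alphabet."""
    return ord(letter.upper()) - ord("A") + 1

def encode_word(word: str, width: int = 5) -> list[str]:
    """Fixed-width binary codes for the letters of `word`; 5 bits cover A..Z."""
    return [format(alphabet_position(c), f"0{width}b") for c in word]

print(encode_word("dot"))  # ['00100', '01111', '10100']
```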

Figure 2. The sound waveform of the word "dot".

The mathematical aspects of these processes depend on the choice of sampling, quantization, encoding, transmission, and other procedures related to speech processing, as well as linguistic aspects such as the language and the alphabet (if the written language is based on an alphabet). For details see [1], [5].

Instead of doing all that, we can concentrate only on the geometric aspects and consider the immersed submanifolds of $\mathbb{R}^2$ (with the standard Riemannian metric) that correspond to the data in Figures 1 and 2. This is the general point of view in this paper. In order to account, intuitively, for a "process" or "evolution" taking place, without explicitly defining a time variable, we use cobordisms.

There is a vast amount of literature on applications of Riemannian geometry and its generalizations to signal processing, information theory, and computer science. It would be an impossible task to give a survey of this subject and to assign proper credit to all contributors. Here is an attempt to give a glimpse of this area. Work by Belkin and Niyogi, and co-authors, including [7], [8], [9], presents an intriguing vision. Broadly, one could view their approach as smoothing out discrete data (or continuously approximating discrete data). Typically the data sets are large and the data is put into a smooth manifold. Since every smooth manifold has a Riemannian metric, the analytic tools of Riemannian geometry are readily available. Similar reasoning supports the appearance of submanifolds in machine learning (see e.g. Ch. 6 of [3]), with linear and nonlinear methods in dimensionality reduction. This also echoes the general philosophy of topological data analysis, most naively described as applying topological methods to discrete data mapped into a metric space. A particular kind of Riemannian manifold is used in information geometry (see e.g. the discussion and references in Nielsen's survey [17]). In Menon's work [15] there are some interesting points about submanifolds, as well as regularity and numerical methods. In particular, he states that an embedding is an information transfer.

Most generally, geometric aspects are often intrinsic to numerical analysis (including effectiveness, accuracy, and errors) in papers devoted to applications in a variety of areas, for instance image processing, neural networks, or analysis of electromagnetic data (see [4], [10], [16], [19] for some specific citations).

In mathematics, the interplay between geometry (e.g. a manifold) and analysis (e.g. functions on this manifold) is often pursued via proving theorems that determine to what extent one determines the other (e.g. reconstruction theorems in noncommutative geometry, which allow one to "rebuild" a manifold from its algebra of functions, or no-go theorems, which say what cannot happen on a manifold). An example of such work, with results that show how a manifold structure determines the behaviour of certain functions on this manifold, is [2].

In the present paper, we depart from this perspective. For example, instead of a function $f:\mathbb{R}\to\mathbb{R}$, we can consider its graph $\{(t,f(t))\,|\,t\in\mathbb{R}\}$, which is a subset of the $ty$-plane. Generalizing this, we can consider an arbitrary curve in $\mathbb{R}^2$, or an arbitrary subset $C$ of the plane, and consider this subset to be "data", or "information", or "signal". There is no longer a need for it to be the graph of a function, and there is no need to keep track of the global geometry of $\mathbb{R}^2$. Everything that we need to know about this curve should be localized on $C$. The setting in this paper involves cobordisms. In a follow-up paper [6] we use a more general definition instead. There, we essentially view arbitrary geometric objects as signal/information, which also potentially allows for the flexibility that, intuitively, should be needed to quantify local events that occur in neural networks.

Acknowledgements. The author appreciates the valuable comments and suggestions from the reviewers.

2. Cobordisms as signals

Let $k\geq 0$ be an integer. Let $X_1$ and $X_2$ be closed $k$-dimensional oriented manifolds such that $X_1\cap X_2=\emptyset$ and $(X;X_1,X_2)$ is a $(k+1)$-dimensional (oriented) cobordism, i.e. $X$ is a compact $(k+1)$-dimensional oriented manifold with boundary

$\partial X=X_1\sqcup X_2.$
Figure 3. A cobordism $(X;X_1,X_2)$.

Let $Y_1$ and $Y_2$ be closed $k$-dimensional oriented manifolds and let $(Y;Y_1,Y_2)$ be a $(k+1)$-dimensional (oriented) cobordism. In general, all boundaries will be assumed to be nonempty where appropriate, and when we talk about a cobordism, as above, it will be assumed that $Y$, $Y_1$ and $Y_2$ are nonempty and $Y_1\cap Y_2=\emptyset$.

Let $(M;X,Y,\Sigma;\partial X,\partial Y)$ be a $(k+2)$-dimensional cobordism between manifolds with nonempty boundaries (or, a relative cobordism) in the sense of [11], i.e. $(\Sigma;\partial X,\partial Y)$ is a $(k+1)$-dimensional cobordism, $\Sigma\neq\emptyset$, $X\cap Y=\emptyset$, $\Sigma\cap X=\partial X$, $\Sigma\cap Y=\partial Y$ and

$\partial M=X\cup\Sigma\cup Y.$

Assume, moreover, that

$\Sigma=A\sqcup B$

and $(A;X_1,Y_1)$, $(B;X_2,Y_2)$ are $(k+1)$-dimensional cobordisms. $M$ is a manifold with corners [13]. Figure 4 shows an example where $k=0$, the boundary of $X$ consists of two points and the boundary of $Y$ consists of two points.

Figure 4. A relative cobordism $(M;X,Y,\Sigma=A\cup B;\partial X,\partial Y)$, $k=0$.
Example 2.1.

For applications, the simplest typical example would be $k=1$, $X$ a compact smooth surface whose boundary is the disjoint union of $n_1+n_2$ circles ($n_1,n_2\in\mathbb{N}$), $X_1$ a disjoint union of $n_1$ circles, $X_2$ a disjoint union of $n_2$ circles, $Y=X$, $M=X\times[0,1]$. In Figure 3, $n_1=n_2=1$. In Figure 5, $n_1=2$ and $n_2=3$. Such an $X$ could model a neuron or a part of a neural network. To take into account the electromagnetic field, one could consider submanifolds of $X\times[0,1]\times\mathbb{R}^6$.

Figure 5. A cobordism $(X;X_1,X_2)$, $n_1=2$, $n_2=3$.

Let $(\tilde M,\tilde g)$ be a Riemannian manifold such that $M\subset\tilde M$ (a subset of $\tilde M$ which is an embedded submanifold via the inclusion map). We will write $M_g$ for $M$ equipped with the Riemannian metric $g$ induced by $\tilde g$. Unless explicitly stated otherwise, we will also assume that the Riemannian metric on every submanifold of $M$ is the one induced by $g$. In Figure 4, $M\subset\mathbb{R}^2=\tilde M$. Figure 6 shows a relative cobordism with $X$ as in Example 2.1, $k=1$, $n_1=n_2=1$, $M=X\times[0,1]$, and $M\subset\mathbb{R}^3=\tilde M$.

Figure 6. A relative cobordism $(M;X,Y,\Sigma;\partial X,\partial Y)$.

Denote by $dV_g$ the volume form of $g$ and by $\rho_g$ the Riemannian distance with respect to $g$. Define

$f_X:M\to\mathbb{R}$

by $f_X(x)=\rho_g(x,X)$. Since $X$ is compact, $f_X$ is well defined and continuous on $M$. Similarly, define

$f_A:M\to\mathbb{R}$

by $f_A(x)=\rho_g(x,A)$.
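To make these definitions concrete, here is a small numerical sketch (our own toy discretization, not part of the paper's framework): we approximate $f_A$ and the integral $\int_M f_A\,dV_g$ in the flat model $M=[0,1]^2$ with $A$ the left edge, where the exact value of the integral is $\int_0^1 x\,dx=1/2$.

```python
import numpy as np

# Toy discretization (an assumption of this sketch): M = [0,1]^2 with the
# flat metric, A = {0} x [0,1] the left edge.  The distance function
# f_A(x) = rho_g(x, A) is approximated by the minimum Euclidean distance to
# sample points of A, and the integral of f_A over M by a midpoint Riemann
# sum on an n x n grid (each cell has area 1/n^2).

n = 100
ts = (np.arange(n) + 0.5) / n                  # midpoints of n subintervals
A = np.stack([np.zeros(n), ts], axis=1)        # samples of the edge {0} x [0,1]

X, Y = np.meshgrid(ts, ts)                     # midpoint grid on M
pts = np.stack([X.ravel(), Y.ravel()], axis=1)

# f_A(p) ~ min over sample points a in A of |p - a|
f_A = np.min(np.linalg.norm(pts[:, None, :] - A[None, :, :], axis=2), axis=1)
E = f_A.mean()                                 # Riemann sum of f_A over M

print(round(float(E), 6))  # 0.5, the exact value of the integral
```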

Remark 2.2.

Regularity of the distance function to a submanifold is treated in [12]. In [12], the discussion is for $\mathbb{R}^n$ and it is noted that the proofs are similar for submanifolds of Riemannian manifolds.

Denote $W=X\sqcup Y$. Define

  • the energy of the signal $M_g$

    (1) $E(M_g)=\int_M f_A(x)\,dV_g(x)$

  • the Fourier transform $F$ of the signal $M_g$ as the map of cobordisms of manifolds with boundary

    $(M;X,Y,\Sigma;\partial X,\partial Y)\to(M;A,B,W;\partial X,\partial Y)$

    induced by the identity map $M\to M$

  • the energy of the Fourier transform of the signal, $F(M_g)$:

    $E(F(M_g))=\int_M f_X(x)\,dV_g(x).$
  • a noise $(U,h)$, where $U\subset M$ is an open set and $h$ is a Riemannian metric on $M$ such that $h=g$ on $M-U$. Then

    $E(M_h)=\int_M \rho_h(x,A)\,dV_h(x)$
    $E(F(M_h))=\int_M \rho_h(x,X)\,dV_h(x).$

  • a filter: a signal $M'=(M';X',Y',\Sigma';\partial X',\partial Y')$ such that $\partial X'=X_1'\sqcup X_2'$, $\partial Y'=Y_1'\sqcup Y_2'$, $\Sigma'=A'\sqcup B'$, $M'\subset M$, $X'\subset X$ and $Y'\subset Y$.

  • the composition of two signals $M=(M;X,Y,\Sigma=A\sqcup B;\partial X,\partial Y)$, $M'=(M';X',Y',\Sigma'=A'\sqcup B';\partial X',\partial Y')$ such that $Y=X'$ and $M\cap M'=X'$ is the signal

    $M''=(M'';X,Y',\Sigma'';\partial X,\partial Y')$

    where

    $M''=M\cup M'$
    $A''=A\cup A'$
    $B''=B\cup B'$
    $\Sigma''=\Sigma\cup\Sigma'=A''\sqcup B''$

    We assume $A''\cap B''=\emptyset$.
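For orientation, in the simplest flat model behind Figure 4 (our own toy example, not from the paper: $M=[0,1]^2$, $X$ the bottom edge, $Y$ the top edge, $A$ the left edge, $B$ the right edge) we have $f_A(x,y)=x$ and $f_X(x,y)=y$, so the energies of the signal and of its Fourier transform reduce to elementary double integrals, both equal to $1/2$. A sketch with exact rational arithmetic:

```python
from fractions import Fraction

# Toy example (ours): M = [0,1]^2 flat, A = left edge, X = bottom edge, so
# f_A(x, y) = x and f_X(x, y) = y.  The midpoint rule is exact for linear
# integrands, so an n x n midpoint sum recovers E(M) and E(F(M)) exactly.

n = 50
mid = [Fraction(2 * i + 1, 2 * n) for i in range(n)]  # midpoints of [0, 1]

E_M  = sum(xm for xm in mid for _ in mid) / Fraction(n * n)  # integral of f_A
E_FM = sum(ym for _ in mid for ym in mid) / Fraction(n * n)  # integral of f_X

print(E_M, E_FM)  # 1/2 1/2
```

Here $E(M)=E(F(M))$: in this model the Fourier transform simply swaps the roles of the boundary pieces $X$ and $A$.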

Remark 2.3.

If there is no ambiguity about the metric and the metric is presumed to be the one induced by $g$, then we will sometimes omit $g$ from the notation and write $E(M)=E(M_g)$, and similarly for volume, diameter, etc.

Remark 2.4.

In signal processing, a discrete signal is represented by a finite sequence of real numbers $(x_n)$. These come (via sampling) from the sound waveform and should be understood as the numbers that represent the air pressure at time $n$. The signal is characterized by its energy

$\sum_n |x_n|^2$

and magnitude $\max\{|x_n|\}$. The energy of a continuous one-dimensional signal $y=x(t)$ is

(2) $\int |x(t)|^2\,dt.$

The concept of energy in Riemannian geometry is different, and the value of the energy of a curve in $\mathbb{R}^2$ is not the same integral as the integral (2) typically used in signal processing. One can modify the signal $y=x(t)$ so that (2) becomes the Riemannian energy for the modified signal. It is possible to write a similar argument that explains how our definition of energy (1) relates to the other definitions, at least in a very simple low-dimensional setting. We will discuss this in detail in upcoming work [6], where we will also consider the meaning of the Fourier transform and of noise (which is sometimes defined as the Fourier transform of the autocorrelation).
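The two classical quantities recalled above can be sketched numerically; the sampled sine wave below is our own illustrative choice, with $\int_0^1\sin^2(2\pi t)\,dt=1/2$ as the continuous energy.

```python
import math

# Discrete energy sum |x_n|^2, magnitude max |x_n|, and a Riemann-sum
# approximation of the continuous energy \int |x(t)|^2 dt, for N samples of
# x(t) = sin(2 pi t) on [0, 1] (an illustrative choice, not from the paper).

N = 1000
dt = 1.0 / N
samples = [math.sin(2 * math.pi * n * dt) for n in range(N)]

energy_discrete = sum(s * s for s in samples)   # sum over n of |x_n|^2
magnitude = max(abs(s) for s in samples)        # max over n of |x_n|
energy_continuous = dt * energy_discrete        # ~ integral of sin^2 = 1/2

print(round(energy_continuous, 6))  # 0.5
```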

Remark 2.5.

To give an example of noise (noise as a local distortion of the metric, as stated in the list of definitions above), one can consider a small ball $B$ in $M$ and either a conformal deformation of the metric $g$ supported on $B$, or a local diffeomorphism $\varphi$ that is the identity map on $M-\mathrm{Int}(B)$, and the metric $h=\varphi^*g$.

Recall (see e.g. Section 3.2 of [14]) that the (boundary) injectivity radius $i_A>0$ of $A$ is defined as follows. Let $T^\perp A$ be the normal bundle of $A$ (in $M$), trivialized by the inward unit normal vector field $N$. Identify $T^\perp A$ with $A\times\mathbb{R}$ via this trivialization. For $p\in A$, denote by $\gamma_p$ the geodesic such that $\gamma_p(0)=p$ and $\gamma_p'(0)=N_p$. Let $D(p)=\inf\{t>0\,|\,\gamma_p(t)\in\partial M\}\in(0,\infty]$. The boundary exponential map

$\exp^\perp:(p,t)\mapsto\gamma_p(t)$

is defined on the subset of $A\times\mathbb{R}$ that consists of pairs $(p,t)$ with $0\leq t<D(p)$. The number $i_A$ is defined as the supremum of $s\geq 0$ such that $\exp^\perp\bigl|_{A\times[0,s)}$ is a diffeomorphism onto its image. The boundary injectivity radius $i_X>0$ of $X$ is defined similarly.
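As a concrete illustration (our own toy example, not from the paper): in the flat model $M=[0,1]^2$ with $A=\{0\}\times[0,1]$ the left edge, the normal geodesics are horizontal lines and everything can be written out explicitly.

```latex
% Flat toy model: M = [0,1]^2, A = {0} x [0,1], inward unit normal N = d/dx.
% For p = (0,y) \in A the normal geodesic is \gamma_p(t) = (t, y), which
% meets \partial M again at t = 1, so D(p) = 1 for every p.  Hence
\exp^{\perp}\colon (p,t) \mapsto (t,\,y), \qquad p = (0,y) \in A,\ 0 \le t < 1,
% and \exp^{\perp} restricted to A x [0,s) is a diffeomorphism onto
% [0,s) x [0,1] for every s \le 1, so i_A = 1.
```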

Theorem 2.6.

Let $M=(M;X,Y,\Sigma=A\sqcup B;\partial X,\partial Y)$ be a signal.

(i) Then

$\dfrac{1}{1+\dfrac{4\,\mathrm{vol}(M)(\mathrm{diam}(M)+\mathrm{diam}(A)+\mathrm{diam}(X))}{i_X^2\,\mathrm{vol}(X)}}\leq\dfrac{E(F(M))}{E(M)}\leq 1+\dfrac{4\,\mathrm{vol}(M)(\mathrm{diam}(M)+\mathrm{diam}(A)+\mathrm{diam}(X))}{i_A^2\,\mathrm{vol}(A)}.$

(ii) Let $(U,h)$ be noise, where

$U=\{x\in M\,|\,\rho_g(x,p)<\delta\}$

for some $p\in M$ and $\delta>0$. Let $0<\delta_0<\delta$ and let $0<\varepsilon<1$.

Then there is a smooth function $a_\varepsilon:M\to\mathbb{R}$ such that

(3) $a_\varepsilon(x)=\begin{cases}\varepsilon,&\text{if }x\in\{x\in M\,|\,\rho_g(x,p)\leq\delta_0\}\\ 1,&\text{if }x\in\{x\in M\,|\,\rho_g(x,p)\geq\delta\}\end{cases}$

(4) $0<a_\varepsilon(x)<1$ for all $x\in\{x\in M\,|\,\delta_0<\rho_g(x,p)<\delta\}$,

and as $\varepsilon\to 0$

$\dfrac{E(F(M_{ha_\varepsilon}))}{E(M_{ha_\varepsilon})}=\dfrac{\beta}{\gamma}\Bigl(1+C\varepsilon^{\frac{k+2}{2}}+O(\varepsilon^{k+2})\Bigr)$

where

$\beta=\int_{\{x\in M\,|\,\rho_g(p,x)>\delta_0\}}\rho_{ha_\varepsilon}(x,X)\,a_\varepsilon(x)^{\frac{k+2}{2}}\,dV_h(x)$
$\gamma=\int_{\{x\in M\,|\,\rho_g(p,x)>\delta_0\}}\rho_{ha_\varepsilon}(x,A)\,a_\varepsilon(x)^{\frac{k+2}{2}}\,dV_h(x)$
$C=\dfrac{\int_{\{x\in M\,|\,\rho_g(p,x)<\delta_0\}}\rho_{ha_\varepsilon}(x,X)\,dV_h(x)}{\beta}-\dfrac{\int_{\{x\in M\,|\,\rho_g(p,x)<\delta_0\}}\rho_{ha_\varepsilon}(x,A)\,dV_h(x)}{\gamma}.$

(iii) Let $(U,h)$ be noise as in (ii) and let $M'$ be a filter such that $X_1'=X_1$, $Y_1'=Y_1$, $A'=A$, $U\subset M-M'$. Then

$E(M'_g)\leq E(M_g)$
$E(M'_g)\leq E(M_h).$
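As a sanity check of part (i) (our own toy computation): in the flat model $M=[0,1]^2$ with $X$ the bottom edge and $A$ the left edge, all quantities in the bounds are available in closed form, and $E(F(M))/E(M)=(1/2)/(1/2)=1$.

```python
import math

# Numerical sanity check of Theorem 2.6 (i) on the flat toy model (ours):
# M = [0,1]^2, X = bottom edge, A = left edge.  Then vol(M) = 1,
# vol(A) = vol(X) = 1, diam(A) = diam(X) = 1, diam(M) = sqrt(2),
# i_A = i_X = 1, and E(F(M))/E(M) = 1.

vol_M, vol_A, vol_X = 1.0, 1.0, 1.0
diam_M, diam_A, diam_X = math.sqrt(2.0), 1.0, 1.0
i_A, i_X = 1.0, 1.0
ratio = 1.0                                    # E(F(M)) / E(M)

s = diam_M + diam_A + diam_X                   # common diameter sum
upper = 1 + 4 * vol_M * s / (i_A ** 2 * vol_A)
lower = 1 / (1 + 4 * vol_M * s / (i_X ** 2 * vol_X))

print(lower <= ratio <= upper)  # True
```

The bounds are far from tight here (roughly $0.068\leq 1\leq 14.66$); the theorem is a general comparison rather than a sharp estimate.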
Remark 2.7.

In Theorem 2.6 (ii) the function $a_\varepsilon$, in a sense, modulates the noisy signal, from $M_h$ to $M_{ha_\varepsilon}$. In (iii), $M'$ filters out the noise.

Theorem 2.8.

Let $M''$ be the composition of signals $M$ and $M'$. Then

(5) $E(M'')\leq E(M)+E(M')$
(6) $E(F(M''))\geq E(F(M)).$
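Both inequalities can be checked on a toy composition (our own example): glue $M=[0,1]\times[0,1]$ and $M'=[0,1]\times[1,2]$ along $Y=X'=[0,1]\times\{1\}$, with $A$, $A'$ the left edges and $X$ the bottom edge, so all energies are elementary integrals.

```python
from fractions import Fraction

# Toy check of Theorem 2.8 (our own example): M = [0,1] x [0,1] and
# M' = [0,1] x [1,2] composed along Y = X' = [0,1] x {1}; A'' is the full
# left edge of M'' = [0,1] x [0,2].  Distances to the left edge and to the
# bottom edge are the coordinates x and y, so the energies are exact.

def int_x(y0, y1):
    r"""\int over [0,1] x [y0,y1] of x dV = (y1 - y0)/2 (left-edge distance)."""
    return Fraction(y1 - y0, 2)

def int_y(y0, y1):
    r"""\int over [0,1] x [y0,y1] of y dV = (y1^2 - y0^2)/2 (distance to X)."""
    return Fraction(y1 ** 2 - y0 ** 2, 2)

E_M, E_Mp, E_Mpp = int_x(0, 1), int_x(1, 2), int_x(0, 2)   # 1/2, 1/2, 1
E_FM, E_FMpp = int_y(0, 1), int_y(0, 2)                    # 1/2, 2

print(E_Mpp <= E_M + E_Mp, E_FMpp >= E_FM)  # True True
```

In this example (5) holds with equality, since $A''$ restricted to each piece coincides with $A$ and $A'$ respectively.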

3. Proofs

Proof of Theorem 2.6.

By the mean value theorem, there is $a_0\in M$ such that

$E(M)=\int_M f_A(x)\,dV_g(x)=f_A(a_0)\,\mathrm{vol}_g(M)$

and there is $x_0\in M$ such that

$E(F(M))=\int_M f_X(x)\,dV_g(x)=f_X(x_0)\,\mathrm{vol}_g(M).$

Since $A$ and $X$ are compact, there are $a_1\in A$ and $x_1\in X$ such that $\rho_g(a_0,a_1)=\rho_g(a_0,A)$ and $\rho_g(x_0,x_1)=\rho_g(x_0,X)$. Let $b\in A\cap X$. Then

$\dfrac{E(F(M))}{E(M)}=\dfrac{f_X(x_0)}{f_A(a_0)}=\dfrac{\rho_g(x_0,X)}{f_A(a_0)}\leq\dfrac{\rho_g(x_0,b)+\rho_g(x_1,b)}{f_A(a_0)}\leq\dfrac{\rho_g(a_0,x_0)+\rho_g(a_0,b)+\rho_g(x_1,b)}{f_A(a_0)}\leq\dfrac{\rho_g(a_0,x_0)+\rho_g(a_0,a_1)+\rho_g(a_1,b)+\rho_g(x_1,b)}{f_A(a_0)}\leq$

(7) $1+\dfrac{\mathrm{diam}(M)+\mathrm{diam}(A)+\mathrm{diam}(X)}{\rho_g(a_0,A)}.$

There is a neighborhood $A_1$ of $A$ in $M$ and a diffeomorphism

$\alpha:A_1\to A\times[0,i_A)$

defined by the normal exponential map. Let

$A_2=\alpha^{-1}\Bigl(A\times[\tfrac{i_A}{2},i_A)\Bigr).$

Let $\mathcal{U}=\{U_1,\ldots,U_m\}$ be an open cover of $A$ by manifold charts $\{U_i,\varphi_i\}$ and let $\{\psi_1,\ldots,\psi_m\}$ be a smooth partition of unity subordinate to $\mathcal{U}$. Then

$f_A(a_0)=\dfrac{\int_M f_A(x)\,dV_g(x)}{\mathrm{vol}(M)}\geq\dfrac{1}{\mathrm{vol}(M)}\int_{A_2}f_A(x)\,dV_g(x)=\dfrac{1}{\mathrm{vol}(M)}\int_{A_2}f_A(x)\sum_{j=1}^m\psi_j(x)\,dV_g(x)=$

(8) $\dfrac{1}{\mathrm{vol}(M)}\sum_{j=1}^m\int_{U_j\times[\frac{i_A}{2},i_A]}f_A(\alpha^{-1}(a,t))\,\psi_j(\alpha^{-1}(a,t))\,dV_g(\alpha^{-1}(a,t)),$

where $a\in A$, $t\in[\tfrac{i_A}{2},i_A)$. Denote by $a_1^{(i)},\ldots,a_{k+1}^{(i)}$ the coordinates in the $i$-th chart. Using the Fubini theorem, we get that (8) equals

$\dfrac{1}{\mathrm{vol}(M)}\int_{[\frac{i_A}{2},i_A]}\sum_{i=1}^m\int_{\varphi_i(U_i)}f_A(\alpha^{-1}(\varphi_i^{-1}(a_1,\ldots,a_{k+1}),t))\,\psi_i(\alpha^{-1}(\varphi_i^{-1}(a_1,\ldots,a_{k+1}),t))\,\sqrt{\det g(a_1,\ldots,a_{k+1},t)}\,da_1\ldots da_{k+1}\,dt.$

By the mean value theorem, there is $\tfrac{i_A}{2}<t_0<i_A$ such that

$\int_{[\frac{i_A}{2},i_A]}\sum_{i=1}^m\int_{\varphi_i(U_i)}f_A(\alpha^{-1}(\varphi_i^{-1}(a_1,\ldots,a_{k+1}),t))\,\psi_i(\alpha^{-1}(\varphi_i^{-1}(a_1,\ldots,a_{k+1}),t))\,\sqrt{\det g(a_1,\ldots,a_{k+1},t)}\,da_1\ldots da_{k+1}\,dt=$

(9) $\dfrac{i_A}{2}\sum_{i=1}^m\int_{\varphi_i(U_i)}f_A(\alpha^{-1}(\varphi_i^{-1}(a_1,\ldots,a_{k+1}),t_0))\,\psi_i(\alpha^{-1}(\varphi_i^{-1}(a_1,\ldots,a_{k+1}),t_0))\,\sqrt{\det g(a_1,\ldots,a_{k+1},t_0)}\,da_1\ldots da_{k+1}.$

Then

$f_A(a_0)\geq\dfrac{1}{\mathrm{vol}(M)}\,\dfrac{i_A}{2}\sum_{i=1}^m\int_{\varphi_i(U_i)}f_A(\alpha^{-1}(\varphi_i^{-1}(a_1,\ldots,a_{k+1}),t_0))\,\psi_i(\alpha^{-1}(\varphi_i^{-1}(a_1,\ldots,a_{k+1}),t_0))\,\sqrt{\det g(a_1,\ldots,a_{k+1},t_0)}\,da_1\ldots da_{k+1}=$
$\dfrac{1}{\mathrm{vol}(M)}\,\dfrac{i_A}{2}\sum_{i=1}^m\int_{U_i}f_A(\alpha^{-1}(a,t_0))\,\psi_i(\alpha^{-1}(a,t_0))\,d\mu_A(a)=$
$\dfrac{1}{\mathrm{vol}(M)}\,\dfrac{i_A}{2}\sum_{i=1}^m\int_A f_A(\alpha^{-1}(a,t_0))\,\psi_i(\alpha^{-1}(a,t_0))\,d\mu_A(a)=$
$\dfrac{1}{\mathrm{vol}(M)}\,\dfrac{i_A}{2}\int_A f_A(\alpha^{-1}(a,t_0))\,d\mu_A(a)$

where $d\mu_A$ is the measure on $A$ induced by the Riemannian metric. Hence

$\rho_g(a_0,A)=f_A(a_0)\geq\dfrac{1}{\mathrm{vol}(M)}\,\dfrac{i_A}{2}\,t_0\,\mathrm{vol}(A)\geq\dfrac{\mathrm{vol}(A)}{\mathrm{vol}(M)}\,\dfrac{i_A^2}{4}.$

Then, with (7), we get

$\dfrac{E(F(M))}{E(M)}\leq 1+\dfrac{4\,\mathrm{vol}(M)(\mathrm{diam}(M)+\mathrm{diam}(A)+\mathrm{diam}(X))}{i_A^2\,\mathrm{vol}(A)}.$

Repeating the argument for $\dfrac{E(M)}{E(F(M))}$, we get:

$\dfrac{E(M)}{E(F(M))}\leq 1+\dfrac{4\,\mathrm{vol}(M)(\mathrm{diam}(M)+\mathrm{diam}(A)+\mathrm{diam}(X))}{i_X^2\,\mathrm{vol}(X)}.$

This completes the proof of (i).

Proof of (ii). Let $\{\varphi,\psi\}$ be a smooth partition of unity subordinate to the open cover

$\{\{x\in M\,|\,\rho_g(p,x)<\delta\},\{x\in M\,|\,\rho_g(p,x)>\delta_0\}\}.$

Then the function $a_\varepsilon$ defined by

$a_\varepsilon(x)=\varepsilon\varphi(x)+\psi(x)$

satisfies (3), (4). We have:

$\dfrac{E(F(M_{ha_\varepsilon}))}{E(M_{ha_\varepsilon})}=\dfrac{\int_M\rho_{ha_\varepsilon}(x,X)\,dV_{ha_\varepsilon}(x)}{\int_M\rho_{ha_\varepsilon}(x,A)\,dV_{ha_\varepsilon}(x)}=\dfrac{\int_M\rho_{ha_\varepsilon}(x,X)\,a_\varepsilon(x)^{\frac{k+2}{2}}\,dV_h(x)}{\int_M\rho_{ha_\varepsilon}(x,A)\,a_\varepsilon(x)^{\frac{k+2}{2}}\,dV_h(x)}=$
$\dfrac{\int_{\{x\in M\,|\,\rho_g(x,p)>\delta_0\}}\rho_{ha_\varepsilon}(x,X)\,a_\varepsilon(x)^{\frac{k+2}{2}}\,dV_h(x)+\varepsilon^{\frac{k+2}{2}}\int_{\{x\in M\,|\,\rho_g(x,p)\leq\delta_0\}}\rho_{ha_\varepsilon}(x,X)\,dV_h(x)}{\int_{\{x\in M\,|\,\rho_g(x,p)>\delta_0\}}\rho_{ha_\varepsilon}(x,A)\,a_\varepsilon(x)^{\frac{k+2}{2}}\,dV_h(x)+\varepsilon^{\frac{k+2}{2}}\int_{\{x\in M\,|\,\rho_g(x,p)\leq\delta_0\}}\rho_{ha_\varepsilon}(x,A)\,dV_h(x)}=$
$\dfrac{\beta}{\gamma}\,\dfrac{1+\varepsilon^{\frac{k+2}{2}}\frac{1}{\beta}\int_{\{x\in M\,|\,\rho_g(p,x)<\delta_0\}}\rho_{ha_\varepsilon}(x,X)\,dV_h(x)}{1+\varepsilon^{\frac{k+2}{2}}\frac{1}{\gamma}\int_{\{x\in M\,|\,\rho_g(p,x)<\delta_0\}}\rho_{ha_\varepsilon}(x,A)\,dV_h(x)}.$

The Maclaurin series for

$\dfrac{1}{1+\varepsilon^{\frac{k+2}{2}}\frac{1}{\gamma}\int_{\{x\in M\,|\,\rho_g(p,x)<\delta_0\}}\rho_{ha_\varepsilon}(x,A)\,dV_h(x)}$

yields the desired statement.

Proof of (iii).

$E(M'_g)=\int_{M'}\rho_g(x,A')\,dV_g(x).$

Since $A'=A$, $M'\subset M$, and $U\cap M'=\emptyset$, both inequalities follow. ∎

Proof of Theorem 2.8.

$E(M'')=\int_{M''}\rho_g(x,A'')\,dV_g(x)=\int_M\rho_g(x,A'')\,dV_g(x)+\int_{M'}\rho_g(x,A'')\,dV_g(x).$

Since for every $x$, $\rho_g(x,A'')\leq\rho_g(x,A)$ and $\rho_g(x,A'')\leq\rho_g(x,A')$, the inequality (5) follows.

$E(F(M''))=\int_{M''}\rho_g(x,X)\,dV_g(x)=\int_M\rho_g(x,X)\,dV_g(x)+\int_{M'}\rho_g(x,X)\,dV_g(x).$

The inequality (6) follows. ∎

4. Conclusions

In signal processing, a signal is a function (typically, a function of time); see e.g. [18], Chapter zero. In this paper, we define a signal to be a submanifold $M$ of a Riemannian manifold, with extra conditions: it is a (relative) cobordism between two manifolds with boundary, $X$ and $Y$, and moreover, $M$ is also a cobordism between two other manifolds with boundary, $A$ and $B$. We give examples. In particular, instead of a function of time, we would now consider its graph. This definition allows us to define a self-map of $M$, which we call a Fourier transform. We define energy, which is a positive real number that characterizes the strength of the signal. Our definition is different from the standard Riemannian geometry definition and from the standard signal processing definition. We compare the three in the follow-up paper [6]. We say that noise is a local deformation of the metric, which makes sense since everything is bounded. We say that a modified signal filters out the noise if it is not affected by the noise. Composition of signals is composition of cobordisms. We prove energy inequalities. These inequalities give an idea about the behaviour of energy. This paper is intended as a step towards a geometric framework for the discussion of signals. We discuss applications in [6].

References

  • [1] J. Allebach, K. Chandrasekar. Digital Signal Processing in a Nutshell. 2nd ed. E-book, volumes 1, 2.
  • [2] N. Alluhaibi, T. Barron. On vector-valued automorphic forms on bounded symmetric domains. Annals of Global Analysis and Geometry 55 (2019), issue 3, 417-441.
  • [3] E. Alpaydin. Introduction to Machine Learning. 3rd ed. Cambridge, Massachusetts, The MIT Press, 2014.
  • [4] K. Arai. Image Restoration based on Maximum Entropy Method with Parameter Estimation by Means of Annealing Method. International Journal of Advanced Computer Science and Applications 2020 11(8), paper 33.
  • [5] Tom Bäckström, Okko Räsänen, Abraham Zewoudie, Pablo Pérez Zarazaga, Liisa Koivusalo, Sneha Das, Esteban Gómez Mellado, Mariem Bouafif Mansali, Daniel Ramos, Sudarsana Kadiri and Paavo Alku, Introduction to Speech Processing, 2nd Edition, 2022. https://speechprocessingbook.aalto.fi
  • [6] T. Barron, S. Kelly, C. Poulton. Signals as submanifolds, and configurations of points. In preparation.
  • [7] M. Belkin, P. Niyogi. Laplacian Eigenmaps for Dimensionality Reduction and Data Representation. Neural Computation (2003) 15 (6), 1373-1396.
  • [8] M. Belkin, P. Niyogi. Towards a theoretical foundation for Laplacian-based manifold methods. J. Comput. System Sci. 74 (2008), no. 8, 1289-1308.
  • [9] M. Belkin, P. Niyogi, V. Sindhwani. Manifold regularization: a geometric framework for learning from labeled and unlabeled examples. J. Mach. Learn. Res. 7 (2006), 2399-2434.
  • [10] V. Christodoulou, Y. Bi, G. Wilkie. A Fuzzy Shape-Based Anomaly Detection and Its Application to Electromagnetic Data. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 11 (2018), no. 9, pp. 3366-3379.
  • [11] M. Borodzik, A. Némethi, A. Ranicki. Morse theory for manifolds with boundary. Algebr. Geom. Topol. 16 (2016), no. 2, 971-1023.
  • [12] R. Foote. Regularity of the distance function. Proc. Amer. Math. Soc. 92 (1984), no.1, 153-155.
  • [13] G. Laures. On cobordism of manifolds with corners. Trans. Amer. Math. Soc. 352 (2000), no. 12, 5667-5688.
  • [14] H. Liu. A compactness theorem for hyperkähler 4-manifolds with boundary. Duke Math. J. 173 (2024), no. 6, 1177-1225.
  • [15] G. Menon. Information theory and the embedding problem for Riemannian manifolds. In Geometric science of information, Lecture Notes in Comput. Sci., 12829, pp. 605-612; Springer, Cham, 2021.
  • [16] E. Merdivan, A. Vafeaidis, D. Kalatzis, S. Hanke, J. Kropf, K. Votis, D. Giakoumis, D. Tzovaras, L. Chen, R. Hamzaoui, M. Geist. Image-Based Text Classification using 2D Convolutional Neural Networks. 2019 IEEE SmartWorld, Ubiquitous Intelligence and Computing, Advanced and Trusted Computing, Scalable Computing and Communications, Cloud and Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), Leicester, UK, 2019, pp. 144-149.
  • [17] F. Nielsen. An elementary introduction to information geometry. Entropy 22 (2020), no.10, Paper 1100.
  • [18] R. Priemer. Introductory signal processing. World Scientific Publ., 1991.
  • [19] M. Yuan, W. Wang, X. Luo, C. Ge, L. Li, J. Kurths, W. Zhao. Synchronization of a Class of Memristive Stochastic Bidirectional Associative Memory Neural Networks with Mixed Time-Varying Delays via Sampled-Data Control. Math. Problems Eng. (2018), article ID 9126183.