Finite-Time Capacity: Making
Exceed-Shannon Possible?
Abstract
The Shannon-Hartley theorem can accurately calculate the channel capacity when the signal observation time is infinite. However, the calculation of the finite-time capacity, which remains unknown, is essential for guiding the design of practical communication systems. In this paper, we investigate the capacity between two correlated Gaussian processes within a finite-time observation window. We first derive the finite-time capacity by providing a limit expression. Then we numerically compute the maximum transmission rate within a single finite-time window. We reveal that the number of bits transmitted per second within the finite-time window can exceed the classical Shannon capacity, which we call the Exceed-Shannon phenomenon. Furthermore, we derive a finite-time capacity formula under a typical signal autocorrelation case by utilizing the Mercer expansion of trace-class operators, and reveal the connection between the finite-time capacity problem and the operator theory. Finally, we analytically prove the existence of the Exceed-Shannon phenomenon in this typical case, and demonstrate the achievability of the finite-time capacity and its compatibility with the classical Shannon capacity.
Index Terms:
Finite-time capacity, Exceed-Shannon, Mercer expansion, signal autocorrelation, operator theory.
I Introduction
The Shannon-Hartley theorem [1] has accurately revealed the fundamental theoretical limit of the information transmission rate $C$, which is also called the Shannon capacity, over a Gaussian waveform channel of limited bandwidth $W$. The expression for the Shannon capacity is $C = W\log_2\left(1 + \frac{P}{N}\right)$, where $P$ and $N$ denote the signal power and the noise power, respectively. The derivation of the Shannon-Hartley theorem heavily depends on the Nyquist sampling principle [2]. The Nyquist sampling principle, which is also named the $2WT$ theorem [3], claims that one can only obtain $2WT + o(WT)$ independent samples within an observation time window of length $T$ in a channel band-limited to $W$ [4], where $o(WT)$ denotes a higher-order infinitesimal, i.e., $\lim_{WT\to\infty} o(WT)/(WT) = 0$.
Based on the Nyquist sampling principle, the Shannon capacity is derived by first multiplying the capacity $\frac{1}{2}\log_2\left(1 + \frac{P}{N}\right)$ of a Gaussian symbol channel [5, p. 249] by the number of independent samples $2WT + o(WT)$, then dividing the result by $T$, and finally letting $T \to \infty$. In the above derivation, $2WT + o(WT)$ is approximated by $2WT$ in the final step to obtain the Shannon capacity. Note that this approximation only holds when $T \to \infty$. Therefore, the Shannon capacity only asymptotically holds as $T$ becomes sufficiently large. When $T$ is of finite value, the approximation fails to work. Thus, when the observation time is finite, i.e., the received signal can only be observed within a finite-time window $[0, T]$, the Shannon-Hartley theorem cannot be directly applied to calculate the capacity in a finite-time window. To the best of our knowledge, the evaluation of the finite-time capacity has not yet been investigated in the literature. One possible reason is that most researchers have mainly focused on how to approach the Shannon capacity with advanced coding and modulation schemes. It is worth noting that any real-world communication system transmits signals in a finite-time window, thus evaluating the finite-time capacity is of practical significance.
In this paper, to fill in this gap, the finite-time capacity instead of the traditional infinite-time counterpart is analyzed, and we reveal and prove the existence of the “Exceed-Shannon” phenomenon within a finite-time observation window. (Simulation codes will be provided to reproduce the results presented in this paper: http://oa.ee.tsinghua.edu.cn/dailinglong/publications/publications.html.) Specifically, our contributions are summarized as follows:
• We derive the capacity expressions within a finite-time observation window by using dense sampling and limiting methods. In this way, we can overcome the difficulties caused by continuity that appear when analyzing the information contained in a continuous time interval. These finite-time capacity expressions make the analysis of finite-time capacity problems possible.
• We approximate the original continuous finite-time capacity expressions by discrete matrices, and conduct numerical experiments based on the discretized formulas. In the numerical results under a special setting, we reveal the “Exceed-Shannon” phenomenon, i.e., the mutual information within a finite-time observation window exceeds the Shannon capacity. (In fact, the finite-time “Exceed-Shannon” phenomenon revealed in this paper does not contradict the classical infinite-time Shannon-Hartley theorem, since new assumptions are considered. Specifically, in the Shannon-Hartley theorem, the sampling time is assumed to be infinitely long, while in this paper, the sampling takes place in a finite-time observation window. Similarly, although compressed sensing [6] can achieve a much lower sampling rate than the Nyquist sampling rate while still performing accurate sparse signal reconstruction, it does not contradict the Nyquist sampling principle due to the new assumption of signal sparsity.)
• In order to analytically prove the revealed “Exceed-Shannon” phenomenon, we first derive an analytical finite-time capacity formula based on the Mercer expansion [7], where we can find the connection between the capacity problem and the operator theory [8]. To make the problem tractable, we construct a typical case in which the transmitted signal has certain statistical properties. Utilizing this construction, we obtain a closed-form capacity solution in this typical case, which leads to a rigorous proof of the “Exceed-Shannon” phenomenon. Inspired by the techniques in the proof, we find that the finite-time capacity is, in fact, a more general case of the Shannon limit, thus the “Exceed-Shannon” phenomenon of the finite-time capacity is compatible with the classical Shannon theory.
Organization: In the rest of this paper, the finite-time capacity is formulated and evaluated numerically in Section II, where the “Exceed-Shannon” phenomenon is first discovered. Then, in Section III, we derive a closed-form finite-time capacity formula under a typical case. Based on this formula, in Section IV, the “Exceed-Shannon” phenomenon is rigorously proved. Finally, conclusions are drawn in Section V.
Notations: $x(t)$ denotes a Gaussian process; $R_x(\tau)$ denotes its autocorrelation function; $S_x(\omega)$ denotes the power spectral density (PSD) of the corresponding process $x(t)$, where $S_x(\omega) = \int_{-\infty}^{+\infty} R_x(\tau)\,e^{-\mathrm j\omega\tau}\,\mathrm d\tau$; boldface italic symbols such as $\boldsymbol{x}$ denote the column vector generated by taking samples of $x(t)$ on instants $t_1, t_2, \ldots, t_N$; upper-case boldface letters such as $\mathbf{R}$ denote matrices; $\mathbb{E}[\cdot]$ denotes the expectation; $\mathbb{1}_A(\cdot)$ denotes the indicator function of the set $A$; $L^2[0, T]$ denotes the collection of all the square-integrable functions on window $[0, T]$; $\mathrm j$ denotes the imaginary unit.
II Numerical Analysis of the Finite-Time Capacity
In this section, we focus on the numerical evaluation of the finite-time capacity. In Subsection II-A, we model the transmission problem by Gaussian processes, and derive the capacity expressions within a finite-time observation window by using dense sampling and limiting methods; In Subsection II-B, we approximate the finite-time capacity by discretized matrix-based formulas; In Subsection II-C, we reveal the “Exceed-Shannon” phenomenon by numerically evaluating the finite-time capacity in a special setting of the signal autocorrelations.
II-A The Expressions for Finite-Time Capacity
The finite-time capacity is, heuristically, defined as the maximum number of bits that can be successfully transmitted within a finite-time window. Since the Shannon capacity is defined on pairs of random variables, it is crucial to introduce randomness into the transmission model. Inspired by [9], we model the transmitted signal by a zero-mean stationary Gaussian stochastic process, denoted as $x(t)$, and the received signal by $y(t) = x(t) + n(t)$. The process $n(t)$, which denotes the noise, is also a stationary Gaussian process independent of $x(t)$. The receiver is only allowed to observe the signal within the finite-time window $[0, T]$, where $T$ is the observation window span. Our goal is to find the maximum number of bits that can be acquired within this time window.
To analytically express the amount of the acquired information, we first introduce $N$ sampling instants inside the time window, denoted by $0 \le t_1 < t_2 < \cdots < t_N \le T$, and then let $N \to \infty$ to approximate the finite-time capacity. (In this paper, we do not explicitly distinguish between the terms “finite-time mutual information” and “finite-time capacity”, since we consider communication schemes where the source autocorrelation is fixed.) This approximation of the capacity becomes more precise as the sampling instants become denser. Then, by defining the sample vectors $\boldsymbol{x} = [x(t_1), x(t_2), \ldots, x(t_N)]^{\mathrm T}$ and $\boldsymbol{y} = [y(t_1), y(t_2), \ldots, y(t_N)]^{\mathrm T}$, the capacity on these samples can be expressed as
$I_N(T) := I(\boldsymbol{x};\,\boldsymbol{y}),$   (1)
and the finite-time capacity is defined as
$I(T) := \lim_{N\to\infty} I_N(T).$   (2)
Then, the transmission rate can be defined by dividing the amount of information acquired within $[0, T]$ by the time span $T$:
$R(T) := \frac{I(T)}{T}.$   (3)
From these definitions, we can further define the limit capacity as $C_\infty := \lim_{T\to\infty} R(T)$ by letting $T \to \infty$. The quantity $R(T)$ characterizes the maximum average number of bits per second one can acquire from a received noisy stochastic process within the observation window.
II-B Discretization
Without loss of generality, we fix the $N$ sampling instants uniformly onto fractions of $T$: $t_k = \frac{kT}{N},\ k = 1, 2, \ldots, N$. Since the random vectors $\boldsymbol{x}$ and $\boldsymbol{y}$ are samples of Gaussian processes, they are both Gaussian random vectors with mean zero and covariance matrices $\mathbf{R}_x$ and $\mathbf{R}_y$, which are symmetric positive-definite matrices. The entries of $\mathbf{R}_x$ and $\mathbf{R}_y$ are determined by the autocorrelation functions of the Gaussian processes $x(t)$ and $y(t)$, denoted by $R_x(\tau)$ and $R_y(\tau)$:
$[\mathbf{R}_x]_{kl} = R_x(t_k - t_l), \qquad [\mathbf{R}_y]_{kl} = R_y(t_k - t_l), \qquad k, l = 1, \ldots, N.$   (4)
Note that $y(t)$ is the independent sum of $x(t)$ and $n(t)$, thus the autocorrelation functions satisfy $R_y(\tau) = R_x(\tau) + R_n(\tau)$, and similarly the covariance matrices satisfy $\mathbf{R}_y = \mathbf{R}_x + \mathbf{R}_n$.
The mutual information is defined as $I(\boldsymbol{x};\,\boldsymbol{y}) = h(\boldsymbol{y}) - h(\boldsymbol{y}\,|\,\boldsymbol{x})$, where $h(\cdot)$ denotes the differential entropy. For an $N$-dimensional Gaussian vector $\boldsymbol{z}$ with mean 0 and covariance matrix $\mathbf{R}$, the differential entropy is given by
$h(\boldsymbol{z}) = \frac{1}{2}\ln\left((2\pi e)^N\det\mathbf{R}\right).$   (5)
Plugging (5) into the definition of $I(\boldsymbol{x};\,\boldsymbol{y})$, and noting that $h(\boldsymbol{y}\,|\,\boldsymbol{x}) = h(\boldsymbol{n})$, we obtain
$I_N(T) = \frac{1}{2}\ln\frac{\det\mathbf{R}_y}{\det\mathbf{R}_n} = \frac{1}{2}\ln\frac{\det(\mathbf{R}_x + \mathbf{R}_n)}{\det\mathbf{R}_n}.$   (6)
In (6), by letting $N$ grow, we can find that $I_N(T)$ increases monotonically when $N$ doubles, because of the data processing inequality (the original samples are a deterministic function of the refined sample set). Though without a rigorous proof, we can assume with confidence that $I_N(T)$ is an increasing function of $N$. However, it remains unknown whether $I_N(T)$ tends to a finite limit as $N \to \infty$. In fact, $I_N(T)$ can be arbitrarily large if the noise is band-limited while the signal has power outside the noise band, since the signal outside the noise band is strictly unpolluted by the noise, which results in an infinite SNR. Thus, the capacity will diverge to infinity. Therefore, in order to avoid capacity divergence, at least one of the following conditions should be satisfied:
• The noise process $n(t)$ is not band-limited.
• The power spectral density of $x(t)$ is strictly contained inside the band of $n(t)$.
Thus, in the following numerical analysis, we choose $n(t)$ to be band-unlimited. This leads to the choice of reasonable autocorrelation functions of $x(t)$ and $n(t)$ in the following subsection.
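As a concrete illustration of the discretized formula (6), the following minimal sketch evaluates $I_N(T)$ for increasing $N$. It is an assumption-laden example: it borrows the exponential signal autocorrelation $R_x(\tau) = P e^{-\alpha|\tau|}$ introduced later in Section IV and the filtered-AWGN per-sample noise variance derived in Appendix A, with arbitrary illustrative parameter values.

```python
# Sketch of the discretized finite-time mutual information I_N(T) in (6).
# Assumptions (not the exact setting of (7)): R_x(tau) = P*exp(-alpha*|tau|),
# AWGN observed through the rectangular filter of Appendix A, illustrative parameters.
import numpy as np

def finite_time_mi(P=1.0, alpha=1.0, n0=1.0, T=2.0, N=400):
    t = np.arange(1, N + 1) * T / N                              # uniform sampling instants t_k = kT/N
    Rx = P * np.exp(-alpha * np.abs(t[:, None] - t[None, :]))    # signal covariance matrix, cf. (4)
    sigma2 = n0 * N / (2 * T)                                    # per-sample variance of filtered AWGN
    # I_N(T) = 0.5 * ln[det(R_x + sigma2*I) / det(sigma2*I)] = 0.5 * ln det(I + R_x/sigma2), cf. (6)
    _, logdet = np.linalg.slogdet(np.eye(N) + Rx / sigma2)
    return 0.5 * logdet                                          # nats acquired within [0, T]

for N in (50, 100, 200, 400, 800):
    print(N, finite_time_mi(N=N))                                # increases with N and saturates
```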
II-C Numerical Analysis
In order to study the properties of the mutual information as a function of $T$, we perform numerical analyses under different values of $N$ and $T$. The autocorrelation functions and PSDs of the signal process $x(t)$ and the noise process $n(t)$ are set to the special case
(7)
Note that the PSD of the transmitted process $x(t)$ is strictly band-limited, while the PSD of the noise process $n(t)$ is not. In fact, the noise PSD is carefully selected to ensure that the received noise has finite power at each sampling instant $t_k$, allowing the execution of numerical computations. A finite-power noise process must have a colored PSD, in contrast to additive white Gaussian noise (AWGN), whose PSD is flat and whose power is infinite. That is the reason why we choose $R_n(\tau)$ to be of the form in (7).
In order to compare the finite-time capacity with the classical Shannon capacity, we have to calculate the Shannon capacity under the colored noise spectrum $S_n(\omega)$, which requires a generalized version of the well-known formula $C = W\log_2\left(1 + \frac{P}{N}\right)$. The Shannon capacity under a colored noise PSD [5], measured in nat/s, is expressed as
$C = \frac{1}{4\pi}\int_{-\infty}^{+\infty}\ln\left(1 + \frac{S_x(\omega)}{S_n(\omega)}\right)\mathrm d\omega.$   (8)
Then, plugging (7) into (8) yields the numerical result for $C$.
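The integral in (8) is straightforward to evaluate numerically. The sketch below is only illustrative: since the exact spectra of (7) are not restated here, it assumes the Lorentzian signal PSD $S_x(\omega) = \frac{2P\alpha}{\alpha^2+\omega^2}$ of the Section-IV typical case together with a flat noise PSD $\frac{n_0}{2}$.

```python
# Sketch: numerical evaluation of the colored-noise Shannon capacity (8), in nat/s.
# Assumed spectra (illustrative only): S_x(w) = 2*P*alpha/(alpha^2 + w^2), S_n(w) = n0/2.
import numpy as np
from scipy.integrate import quad

def shannon_capacity(P=1.0, alpha=1.0, n0=1.0):
    Sx = lambda w: 2 * P * alpha / (alpha**2 + w**2)
    integrand = lambda w: np.log(1.0 + Sx(w) / (n0 / 2))
    val, _ = quad(integrand, -np.inf, np.inf)
    return val / (4 * np.pi)

# Cross-check against the closed form derived later in (20): C = 0.5*(sqrt(alpha^2 + 4*P*alpha/n0) - alpha)
print(shannon_capacity(), 0.5 * (np.sqrt(1.0 + 4.0) - 1.0))
```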
In the numerical analysis, we calculate the finite-time transmission rate $R(T)$ and the Shannon capacity $C$ against the number of samples $N$ within the observation window $[0, T]$. The numerical results are collected in Fig. 1. It is shown that $I_N(T)$ is an increasing function of $N$, and for fixed values of $T$, the approximated finite-time capacity tends to a finite limit under the correlation assumptions given by (7). The most striking observation is that we can obtain more information within a finite-time window than the prediction given by the Shannon capacity (8). We call this phenomenon the “Exceed-Shannon” phenomenon.
[Fig. 1: Finite-time rate versus the number of samples N, compared with the Shannon capacity.]
To analytically verify the existence of the Exceed-Shannon phenomenon, we are going to introduce some mathematical tools in the following Section III, and finally give an analytical proof in Section IV.
III A Closed-Form Finite-Time Capacity Formula
In this section, we first introduce the Mercer expansion in Subsection III-A as a basic tool for our analysis. Then we derive the series representation of the finite-time capacity, and the corresponding power constraint in Subsection III-B, under the assumption of AWGN noise. The power constraint shows that the finite-time capacity is upper-bounded, thus the series expansion of the finite-time capacity converges absolutely.
III-A The Mercer Expansion
Motivated by the discovery of the Exceed-Shannon phenomenon, we go further into the underlying mechanism behind this fact. Since the calculation of (6) depends on the evaluation of determinants that are determined by the autocorrelation functions of Gaussian processes, it is possible to obtain $I_N(T)$ and $I(T)$ directly from the autocorrelation functions. In fact, if we know the Mercer expansion [7] of the autocorrelation function $R_x$ on window $[0, T]$, then we can calculate $I(T)$ more easily [10]. In the following discussion, we assume the Mercer eigen-system of the source autocorrelation function to be of the following form:
$\int_0^T R_x(t - s)\,\phi_k(s)\,\mathrm ds = \lambda_k\,\phi_k(t), \qquad t \in [0, T],\ k = 1, 2, \ldots$   (9)
Due to the positive-definite property of the integral kernel $R_x(t - s)$, the eigenvalues are strictly positive: $\lambda_k > 0$, and the eigenfunctions $\phi_k(t)$ form an orthonormal set:
$\int_0^T \phi_k(t)\,\phi_l(t)\,\mathrm dt = \delta_{kl}.$   (10)
Mercer's theorem [7] ensures the existence and uniqueness of the eigenpairs $\{(\lambda_k, \phi_k)\}_{k=1}^{\infty}$, and furthermore, the kernel itself can be expanded under the eigenfunctions. The convergence is absolute and uniform:
$R_x(t - s) = \sum_{k=1}^{\infty}\lambda_k\,\phi_k(t)\,\phi_k(s), \qquad t, s \in [0, T].$   (11)
The Mercer expansion enables us to analytically express an autocorrelation function on a finite-time interval $[0, T]$, since the autocorrelation function can be naturally treated as a positive-definite integral kernel.
III-B Finite-Time Capacity Formula
Based on the Mercer expansion, we can obtain a closed-form formula for the finite-time mutual information, as stated in the following Theorem 1.
Theorem 1 (Source expansion, AWGN noise)
Suppose the information source, modeled by the transmitted process $x(t)$, has autocorrelation function $R_x(\tau)$. An AWGN noise $n(t)$ of PSD $\frac{n_0}{2}$ is imposed onto $x(t)$, resulting in the received process $y(t) = x(t) + n(t)$. The Mercer expansion of $R_x$ on $[0, T]$ is given by (9), satisfying (10). Then the finite-time mutual information $I(T)$ within the observation window $[0, T]$ between the processes $x(t)$ and $y(t)$ can be expressed as
$I(T) = \frac{1}{2}\sum_{k=1}^{\infty}\ln\left(1 + \frac{2\lambda_k}{n_0}\right).$   (12)
Proof:
See Appendix A. ∎
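A quick way to sanity-check Theorem 1 is to approximate the Mercer eigenvalues by the Nyström method (discretize the kernel on a quadrature grid and rescale the matrix eigenvalues) and sum the series (12). The exponential kernel and the parameter values below are illustrative assumptions.

```python
# Sketch: Nystrom approximation of the Mercer eigenvalues of R_x on [0, T], then the series (12).
import numpy as np

P, alpha, n0, T, N = 1.0, 1.0, 1.0, 2.0, 2000
t = (np.arange(N) + 0.5) * T / N                          # midpoint quadrature nodes in [0, T]
K = P * np.exp(-alpha * np.abs(t[:, None] - t[None, :]))  # assumed kernel R_x(t - s) = P*exp(-alpha*|t - s|)
lam = np.linalg.eigvalsh(K) * (T / N)                     # Nystrom estimates of the Mercer eigenvalues
lam = lam[lam > 0]
I_T = 0.5 * np.sum(np.log1p(2.0 * lam / n0))              # finite-time mutual information (12), in nats
print(I_T)
print(lam.sum(), P * T)                                   # trace check: sum of eigenvalues ~ P*T (cf. Lemma 1 below)
```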
From Theorem 1, we can conclude that the finite-time capacity of the AWGN channel is uniquely determined by the Mercer spectra $\{\lambda_k\}$ of $R_x$ within $[0, T]$. However, it remains unknown whether the series representation (12) converges. In fact, the convergence is closely related to the signal power. In the Fourier transform, the power in the time domain is equal to the power in the frequency domain, which is known as Parseval's theorem [11]. Like the Fourier transform, the transform defined by the orthonormal basis $\{\phi_k\}$ also satisfies Parseval's theorem. This observation leads to a theoretic verification of power conservation in the view of the Mercer spectra, which is stated in the following Lemma 1.
Lemma 1 (Operator Trace Coincides with Power Constraint)
Let the Mercer expansion of $R_x$ on $[0, T]$ be given by (9). Then the sum of all the Mercer eigenvalues equals the signal energy within the window:
$\sum_{k=1}^{\infty}\lambda_k = \int_0^T R_x(0)\,\mathrm dt = PT,$   (13)
where $P = R_x(0)$ denotes the signal power.
Proof:
See Appendix B. ∎
Remark 1
The convergence of the finite-time capacity series (12) is ensured by Lemma 1. In fact, from the above Lemma 1, we can conclude that the sum of $\lambda_k$ is finite when $PT$ is finite. It can be immediately derived that $I(T) \le \frac{1}{n_0}\sum_k\lambda_k = \frac{PT}{n_0} < \infty$, since $\ln(1 + x) \le x$. Furthermore, note that the sum of $\lambda_k$ is finite even for non-stationary processes (i.e., the power at time $t$, $R_x(t, t)$, is not always a constant $P$), as long as $\int_0^T R_x(t, t)\,\mathrm dt < \infty$ holds for any finite $T$. Then the conclusion holds even for non-stationary processes.
Remark 2
The finite-time capacity formula (12) is closely related to the operator theory [8] in functional analysis. The sum of all the eigenvalues $\lambda_k$ is called the operator trace in linear operator theory. As is mentioned in Lemma 1, the autocorrelation function $R_x$ can be treated as a linear operator on $L^2[0, T]$. Furthermore, this operator belongs to the trace class [12] if and only if $\sum_k\lambda_k < \infty$. Note that this condition is automatically satisfied if $x(t)$ is a Gaussian process, since Gaussian random variables always have finite variances.
The Mercer spectra enable us to explicitly calculate the finite-time capacity and, furthermore, to prove the “Exceed-Shannon” phenomenon. This will be demonstrated in the next section.
IV Proof of the Existence of Exceed-Shannon Phenomenon
In this section, we first give two different proofs of the existence of the Exceed-Shannon phenomenon, both in a typical case. Then we discuss the achievability of the finite-time capacity, and the compatibility with Shannon-Hartley Theorem.
IV-A Closed-Form Capacity in A Typical Case
In order to show the existence of the Exceed-Shannon phenomenon, we only need to show that the finite-time capacity is greater than the Shannon capacity in a typical case. Let us consider a finite-time communication scheme with a finitely-powered stationary transmitted signal autocorrelation (such an autocorrelation is often observed in many scenarios, for example after passing a signal with a white spectrum through an RC lowpass filter), which is specified as
$R_x(\tau) = P e^{-\alpha|\tau|},$   (14)
where $\alpha > 0$, in an AWGN channel with noise PSD $\frac{n_0}{2}$. (In this theoretical proof of the “Exceed-Shannon” phenomenon, we assume the noise to be AWGN to simplify the analytical computations. Gaussian processes with a white spectrum are idealized: they can neither be power-limited, nor can they be directly sampled and numerically represented in computers.) The power of the signal is $P = R_x(0)$. According to Lemma 1, the trace of the corresponding Mercer operator is finite. Then the finite-time capacity given by Theorem 1 is also finite, as is shown in Remark 1. Finding the Mercer expansion is equivalent to finding the eigenpairs $\{(\lambda_k, \phi_k)\}$. The eigenpairs are determined by the following characteristic integral equation [13]:
$\int_0^T P e^{-\alpha|t - s|}\,\phi(s)\,\mathrm ds = \lambda\,\phi(t), \qquad t \in [0, T].$   (15)
Differentiating both sides of (15) twice with respect to $t$ yields the boundary conditions and the differential equation that $\phi(t)$ must satisfy:
$\phi''(t) + \left(\frac{2P\alpha}{\lambda} - \alpha^2\right)\phi(t) = 0, \qquad \phi'(0) = \alpha\,\phi(0), \qquad \phi'(T) = -\alpha\,\phi(T).$   (16)
Let $b = \sqrt{2P\alpha/\lambda - \alpha^2}$ denote the resonant frequency of the above harmonic-oscillator differential equation; then $\lambda = \frac{2P\alpha}{\alpha^2 + b^2}$, and $\phi(t)$ must satisfy the above two boundary constraints. Let $\phi(t) = c_1\cos(bt) + c_2\sin(bt)$ be the sinusoidal form of the eigenfunction. Using the boundary conditions we obtain
$\begin{cases} b\,c_2 - \alpha\,c_1 = 0, \\ \left(b\,c_2 + \alpha\,c_1\right)\cos(bT) + \left(\alpha\,c_2 - b\,c_1\right)\sin(bT) = 0. \end{cases}$   (17)
To ensure the existence of a nonzero solution to the homogeneous linear equations (17) with unknowns $(c_1, c_2)$, the determinant of the coefficient matrix must be zero. Exploiting this condition, we find the equation that $b$ must satisfy:
$\tan(bT) = \frac{2\alpha b}{b^2 - \alpha^2}.$   (18)
[Fig. 2: Graphical solution of the characteristic equation (18).]
By introducing an auxiliary variable $\theta = \arctan(b/\alpha)$, equation (18) can be simplified as $\tan(bT) = -\tan(2\theta)$, i.e., there exists a positive integer $k$ such that $bT + 2\theta = k\pi$. The integer $k$ can be chosen to be equal to the index of the eigenvalue. From the function images of $y = bT + 2\arctan(b/\alpha)$ and $y = k\pi$ (Fig. 2), we can determine the roots $b_k$, and then the eigenvalues $\lambda_k$. To sum up, the solutions to the characteristic equation (15) are collected into (19) as follows:
$b_k T + 2\arctan\!\left(\frac{b_k}{\alpha}\right) = k\pi, \qquad \lambda_k = \frac{2P\alpha}{\alpha^2 + b_k^2}, \qquad \phi_k(t) = A_k\left(\cos(b_k t) + \frac{\alpha}{b_k}\sin(b_k t)\right), \qquad k = 1, 2, \ldots,$   (19)
where $A_k$ denotes the normalization constant of $\phi_k(t)$ on $[0, T]$ to ensure orthonormality.
Equation (19) gives all the eigenpairs $\{(\lambda_k, \phi_k)\}$, from which we can calculate $I(T)$ by applying Theorem 1. As for the Shannon capacity $C$, by applying (8) and evaluating the integral with the aid of [14], we can obtain
$C = \frac{1}{4\pi}\int_{-\infty}^{+\infty}\ln\left(1 + \frac{4P\alpha/n_0}{\alpha^2 + \omega^2}\right)\mathrm d\omega = \frac{1}{2}\left(\sqrt{\alpha^2 + \frac{4P\alpha}{n_0}} - \alpha\right),$   (20)
where the evaluation of the improper integral is given in Appendix E.
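Putting the pieces together, the eigenvalues in (19) can be obtained by root-finding on the simplified form of (18), after which (12) and (20) can be compared directly. The sketch below uses illustrative parameter values and the reconstructed expressions above.

```python
# Sketch: solve b_k*T + 2*arctan(b_k/alpha) = k*pi for b_k, form lambda_k as in (19),
# then compare the finite-time mutual information (12) with the Shannon limit C*T from (20).
import numpy as np
from scipy.optimize import brentq

def mercer_eigenvalues(P, alpha, T, K):
    lam = np.empty(K)
    for k in range(1, K + 1):
        g = lambda b: b * T + 2.0 * np.arctan(b / alpha) - k * np.pi
        # each root lies in ((k-1)*pi/T, k*pi/T), since 0 < 2*arctan(b/alpha) < pi
        b_k = brentq(g, (k - 1) * np.pi / T + 1e-12, k * np.pi / T)
        lam[k - 1] = 2.0 * P * alpha / (alpha**2 + b_k**2)
    return lam

P, alpha, n0, T = 1.0, 1.0, 1.0, 2.0
lam = mercer_eigenvalues(P, alpha, T, K=5000)
I_T = 0.5 * np.sum(np.log1p(2.0 * lam / n0))                  # finite-time mutual information (12)
C = 0.5 * (np.sqrt(alpha**2 + 4.0 * P * alpha / n0) - alpha)  # Shannon capacity (20)
print(I_T, C * T)                                             # Exceed-Shannon: I(T) > C*T in this example
```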
After all the preparation work above, we can rigorously prove that $I(T) > CT$ under the typical case of (14), as long as the transmission power $P$ is smaller than a constant $P_0$. The following Theorem 2 proves this result.
Theorem 2 (Existence of Exceed-Shannon phenomenon in a typical case)
Under the typical case of (14) with AWGN noise of PSD $\frac{n_0}{2}$, there exists a constant $P_0 > 0$ such that, for any transmission power $0 < P < P_0$,
$I(T) > CT.$   (21)
Proof:
See Appendix C. ∎
[Fig. 3: Finite-time mutual information I(T) versus the observation time T, compared with the Shannon limit CT.]
To verify the above theoretical analysis, numerical experiments on $I(T)$ are conducted based on evaluations of (19) and (20). As is shown in Fig. 3, it seems that we can always harness more mutual information in a finite-time observation window, compared with the Shannon capacity. Though it may seem impossible, this fact is somewhat unsurprising, because the observations inside the finite-time window can always eliminate some extra uncertainty outside the window due to the autocorrelation of $x(t)$. Different from the finite-time capacity, the Shannon capacity describes the circumstance of $T \to \infty$, where the fringe effect near $t = 0$ and $t = T$ becomes negligible compared with the prolonged window period. Thus, the Shannon capacity does not take into consideration the small extra information on the fringe, causing an underestimation of the capacity. Fig. 3 also shows that the extra capacity between the finite-time result and the Shannon capacity tends to a constant as $T \to \infty$. As is discussed above, the difference may come from the additional elimination of uncertainty at the fringe of the window. This asymptotically constant difference results in the asymptotic linearity of the finite-time mutual information as a function of $T$.
Apart from the above discussion, there is an extra interesting observation in Fig. 3, which leads to another rigorous proof of the Exceed-Shannon phenomenon. If we investigate the slope of the curves at the origin, i.e., the “instant transmission rate” at the origin, we find that the slope of the finite-time curve strictly exceeds the Shannon capacity $C$. This observation is confirmed by the following theorem:
Theorem 3 (Instant Finite-Time Rate Exceeds Shannon)
Under the typical case of (14) with AWGN noise of PSD $\frac{n_0}{2}$, the instant transmission rate at the origin satisfies
$\lim_{T\to 0^{+}}\frac{\mathrm d I(T)}{\mathrm d T} = \frac{P}{n_0} > C.$   (22)
Proof:
See Appendix D. ∎
From the conclusion of Theorem 3 and (20), we can reduce the Exceed-Shannon inequality at $T \to 0^{+}$ to the following inequality:
$\frac{P}{n_0} > \frac{1}{2}\left(\sqrt{\alpha^2 + \frac{4P\alpha}{n_0}} - \alpha\right),$   (23)
which can be directly verified by simple term-shifting and squaring on both sides. This inequality implies that the average transmission rate in the finite-time regime is strictly larger than the Shannon capacity around the origin $T = 0$.
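Inequality (23) is also easy to confirm numerically; the values below are illustrative assumptions.

```python
# Quick check of (23): the initial slope P/n0 of I(T) exceeds the Shannon capacity C for every alpha > 0.
import numpy as np

P, n0 = 1.0, 1.0
for alpha in (0.1, 1.0, 10.0, 100.0):
    C = 0.5 * (np.sqrt(alpha**2 + 4.0 * P * alpha / n0) - alpha)   # Shannon capacity (20)
    print(alpha, P / n0, C, P / n0 > C)                            # last column is always True
```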
Remark 3
Remark 4
In fact, Theorem 2 and Theorem 3 characterize the Exceed-Shannon phenomenon of the finite-time capacity from two aspects. One aspect is the observation time $T$, and the other is the transmit power $P$. Combining the proofs of these two theorems may result in a universal proof that is independent of the choice of the parameters $T$ and $P$, which requires further study.
The conclusion of Theorem 3 is verified numerically in Fig. 4 and Fig. 5. The blue solid lines, representing the finite-time capacity $I(T)$, are above the red dashed lines representing the Shannon capacity, which demonstrates the Exceed-Shannon phenomenon. The curves all start with the same slope $P/n_0$ at $T = 0$, which coincides with the conclusion of Theorem 3. It can also be observed from the two figures that, for fixed values of $P$ and $n_0$, as $\alpha$ increases, the transmitted signal tends to be less correlated, thus being more informative. The transmission rate is then improved. This insight also comes from the change of the PSD $S_x(\omega)$. As $\alpha$ increases, the PSD becomes flatter, i.e., a wider range of bandwidth is occupied, and thus the rate increases accordingly.
[Fig. 4: Finite-time capacity (blue solid) versus the Shannon capacity (red dashed).]
[Fig. 5: Finite-time capacity (blue solid) versus the Shannon capacity (red dashed) under different values of α.]
IV-B Further Discussions on the Exceed-Shannon Phenomenon
Achievability of the finite-time capacity. It is known that for any band-limited stationary Gaussian process with PSD $S_x(\omega)$, one can generate signals whose PSD is exactly $S_x(\omega)$ by first generating a sequence at a sufficiently high sampling rate, and then passing the generated sequence through a shaping filter. Since the transmitted signal $x(t)$ and its generating sequence determine each other uniquely if $S_x(\omega)$ is strictly band-limited, the signal and the sequence can be treated as containing the same amount of information. Then we can conclude that, after observing the noisy received process $y(t)$, because of the definition of the finite-time capacity $I(T)$, the amount of uncertainty of the underlying transmitted sequence can be reduced by exactly $I(T)$ nats. That is to say, we link the finite-time capacity with a sequence-to-sequence capacity. Thus, the finite-time mutual information is achievable by standard capacity-achieving techniques such as random coding [1], as long as the sampling instants are dense enough.
Compatibility with the Shannon-Hartley theorem. Though the Exceed-Shannon effect does imply an average data transmission rate within a finite-time window higher than that predicted by Shannon, in fact, it is still impossible to construct a long-time stable data transmission scheme above the Shannon capacity by leveraging this effect. So the Exceed-Shannon phenomenon does not contradict the Shannon-Hartley theorem. Placing additional observation windows cannot increase the average information rate, because the received process observed by the subsequent additional windows has already been implicitly altered by the previous observation. The posterior process does not carry as much information as the original one, thus causing a rate reduction in the later windows. It is expected that the average transmission rate would ultimately decrease to the Shannon capacity as the total observation time tends to infinity (i.e., $T \to \infty$), and the analytical proof is still worth investigating in future works.
V Conclusions
In this paper, we provided rigorous proofs of the existence of the “Exceed-Shannon” phenomenon under typical autocorrelation settings of the transmitted signal process and the noise process. Our discovery of the “Exceed-Shannon” phenomenon revealed a possible new direction of research in information theory, as it provided a generalization of Shannon’s renowned formula to practical finite-time communications. It shows the possibility that we can communicate at a higher-than-Shannon rate in a short time. Since the finite-time capacity is a more precise estimation of the ultimate capacity limit, the optimization target may shift from the Shannon capacity to the finite-time capacity in the design of practical communication systems. Thus, it has guiding significance for the performance improvement of modern communication systems. In future works, general proofs of the Exceed-Shannon phenomenon, independent of the concrete autocorrelation settings, still require further investigation. Moreover, we need to answer the question of how to exploit this Exceed-Shannon phenomenon to improve the communication rate. In addition, although we have discovered numerically that the finite-time rate agrees with the Shannon capacity as $T \to \infty$, an analytical proof of this result is required in the future.
Appendix A
Proof Of Theorem 1
Define the $N$-by-$N$ matrix $\boldsymbol{\Phi}$ as
$[\boldsymbol{\Phi}]_{ik} = \sqrt{\frac{T}{N}}\,\phi_k(t_i), \qquad i, k = 1, \ldots, N,$
where $t_i = iT/N$. According to the definition of this matrix, the following relation holds:
$[\boldsymbol{\Phi}^{\mathrm T}\boldsymbol{\Phi}]_{kl} = \frac{T}{N}\sum_{i=1}^{N}\phi_k(t_i)\,\phi_l(t_i) \xrightarrow{N\to\infty} \int_0^T \phi_k(t)\,\phi_l(t)\,\mathrm dt = \delta_{kl}.$   (24)
This implies that the matrix $\boldsymbol{\Phi}$ satisfies the property of asymptotic orthogonality:
$\lim_{N\to\infty}\boldsymbol{\Phi}^{\mathrm T}\boldsymbol{\Phi} = \mathbf{I},$   (25)
and the matrix $\boldsymbol{\Phi}$ can asymptotically diagonalize $\mathbf{R}_x$ because of the eigenvalue property (9):
$[\boldsymbol{\Phi}^{\mathrm T}\mathbf{R}_x\boldsymbol{\Phi}]_{kl} = \frac{T}{N}\sum_{i=1}^{N}\sum_{j=1}^{N}\phi_k(t_i)\,R_x(t_i - t_j)\,\phi_l(t_j) \approx \frac{N}{T}\int_0^T\!\!\int_0^T \phi_k(t)\,R_x(t - s)\,\phi_l(s)\,\mathrm dt\,\mathrm ds = \frac{N}{T}\,\lambda_k\,\delta_{kl}.$   (26)
Next, we investigate the noise realizations on the sampling instants $t_i$. For AWGN noise, the instantaneous power is infinite, i.e., $\mathbb E[n^2(t)] = \infty$, so it is necessary to assume that the AWGN noise is sampled after passing through a rectangular-shaped impulse response filter with pulse width $T/N$ and gain $N/T$. This assumption is reasonable, since the filter tends to an ideal sampler as $N \to \infty$. Under this hypothesis, the noise variance for each sample can be calculated as
$\sigma_N^2 = \frac{n_0}{2}\int_0^{T/N}\left(\frac{N}{T}\right)^2\mathrm dt = \frac{n_0 N}{2T}.$   (27)
Note that the equality holds since the noise autocorrelation is $R_n(\tau) = \frac{n_0}{2}\,\delta(\tau)$. In this way, the mutual information within the window $[0, T]$ can be calculated as
$I(T) = \lim_{N\to\infty}\frac{1}{2}\ln\frac{\det(\mathbf{R}_x + \sigma_N^2\mathbf{I})}{\det(\sigma_N^2\mathbf{I})} = \lim_{N\to\infty}\frac{1}{2}\ln\det\!\left(\boldsymbol{\Phi}^{\mathrm T}\Big(\mathbf{I} + \frac{2T}{n_0 N}\mathbf{R}_x\Big)\boldsymbol{\Phi}\right) = \lim_{N\to\infty}\frac{1}{2}\ln\det\!\left(\mathbf{I} + \frac{2}{n_0}\,\mathrm{diag}(\lambda_1, \ldots, \lambda_N)\right) = \frac{1}{2}\sum_{k=1}^{\infty}\ln\left(1 + \frac{2\lambda_k}{n_0}\right),$   (28)
where the second equality comes from sandwiching the determinant in the bracket with the asymptotically orthogonal matrix $\boldsymbol{\Phi}^{\mathrm T}$ on the left and its transpose $\boldsymbol{\Phi}$ on the right, and the third equality comes from plugging (24) and (26) into the previous step.
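The asymptotic orthogonality (25) and the asymptotic diagonalization (26) used above can also be observed numerically; the sketch below uses the assumed exponential kernel of the typical case (14) and a modest $N$.

```python
# Sketch: Phi'*Phi approaches I and (T/N)*Phi'*R_x*Phi approaches diag(lambda_k) as N grows.
import numpy as np
from scipy.optimize import brentq

P, alpha, T, N, K = 1.0, 1.0, 2.0, 1000, 5
t = np.arange(1, N + 1) * T / N
# first K eigenpairs of the exponential kernel, cf. (19)
b = np.array([brentq(lambda x, k=k: x * T + 2 * np.arctan(x / alpha) - k * np.pi,
                     (k - 1) * np.pi / T + 1e-12, k * np.pi / T) for k in range(1, K + 1)])
lam = 2 * P * alpha / (alpha**2 + b**2)
phi = np.cos(np.outer(t, b)) + (alpha / b) * np.sin(np.outer(t, b))    # unnormalized eigenfunctions
phi /= np.sqrt((phi**2).sum(axis=0) * (T / N))                          # normalize on [0, T]
Phi = np.sqrt(T / N) * phi                                              # N-by-K slice of the matrix Phi
Rx = P * np.exp(-alpha * np.abs(t[:, None] - t[None, :]))
print(np.round(Phi.T @ Phi, 3))                                         # ~ identity, cf. (25)
print(np.round(Phi.T @ Rx @ Phi * (T / N), 3))                          # ~ diag(lambda_k), cf. (26)
print(np.round(lam, 3))
```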
Appendix B
Proof Of Lemma 1
Appendix C
Proof Of Theorem 2
Plugging (20) into the right-hand side of (21), and differentiating both sides with respect to $P$. Notice that if $P = 0$, then both sides of (21) are equal to 0. Thus, we only need to prove that the derivative of the left-hand side is strictly larger than that of the right-hand side within a small interval $P \in (0, P_0)$:
$\frac{\mathrm d I(T)}{\mathrm d P} > \frac{\mathrm d (CT)}{\mathrm d P}.$   (30)
Multiplying both sides of (30) by $n_0$ and defining $\mu_k := \lambda_k / P$ (the Mercer eigenvalues of the unit-power kernel $e^{-\alpha|t-s|}$), from Lemma 1 we obtain $\sum_k \mu_k = T$. In this way, (30) is equivalent to
$\sum_{k=1}^{\infty}\frac{\mu_k}{1 + 2P\mu_k/n_0} > \frac{\alpha T}{\sqrt{\alpha^2 + 4P\alpha/n_0}}.$   (31)
Since $f(x) = \frac{1}{1 + 2Px/n_0}$ is convex on $(0, +\infty)$, by applying Jensen's inequality (with weights $\mu_k/T$) to the left-hand side of (31), we only need to prove that
$\frac{T}{1 + \frac{2P}{n_0 T}\sum_{k}\mu_k^2} > \frac{\alpha T}{\sqrt{\alpha^2 + 4P\alpha/n_0}}.$   (32)
From the definition of $\mu_k$ we can derive that the $\mu_k$ are the eigenvalues of the unit-power kernel $e^{-\alpha|t-s|}$ on $[0, T]$. So we go on to calculate $\sum_k \mu_k^2$. That is equivalent to calculating the trace of the squared operator, which corresponds to the integral kernel:
$K_2(t, s) = \int_0^T e^{-\alpha|t - u|}\,e^{-\alpha|u - s|}\,\mathrm du.$   (33)
Evaluating the kernel on the diagonal yields
$K_2(t, t) = \int_0^T e^{-2\alpha|t - u|}\,\mathrm du = \frac{1}{2\alpha}\left(2 - e^{-2\alpha t} - e^{-2\alpha(T - t)}\right).$   (34)
Integrating this kernel on the diagonal of $[0, T] \times [0, T]$ gives $\sum_k \mu_k^2$:
$\sum_{k=1}^{\infty}\mu_k^2 = \int_0^T K_2(t, t)\,\mathrm dt = \frac{T}{\alpha}\left(1 - \frac{1 - e^{-2\alpha T}}{2\alpha T}\right).$   (35)
By substituting (35) into (32), we just need to prove that
$1 + \frac{2P}{n_0\alpha}\left(1 - \frac{1 - e^{-2\alpha T}}{2\alpha T}\right) < \sqrt{1 + \frac{4P}{n_0\alpha}}.$   (36)
Define the dimensionless number $\gamma := 1 - \frac{1 - e^{-2\alpha T}}{2\alpha T}$. Since the function $\frac{1 - e^{-2\alpha T}}{2\alpha T}$ is strictly positive and less than 1 at any $T > 0$, we have $0 < \gamma < 1$, and we can conclude that there exists a small positive $P_0$ such that (36) holds for $P \in (0, P_0)$. The number $P_0$ can be chosen as
$P_0 = \frac{n_0\,\alpha\,(1 - \gamma)}{\gamma^2},$   (37)
which implies that (30) holds for any $P \in (0, P_0)$. Thus, integrating (30) on both sides from $0$ to $P$ gives rise to the conclusion (21), which completes the proof of Theorem 2.
Appendix D
Proof Of Theorem 3
Differentiating the finite-time capacity expression (12) with respect to $T$, we obtain
$\frac{\mathrm d I(T)}{\mathrm d T} = \frac{1}{2}\sum_{k=1}^{\infty}\frac{2/n_0}{1 + 2\lambda_k/n_0}\,\frac{\mathrm d\lambda_k}{\mathrm d T},$   (38)
where $\sum_k \frac{\mathrm d\lambda_k}{\mathrm d T}$, by applying Lemma 1, can be expressed as
$\sum_{k=1}^{\infty}\frac{\mathrm d\lambda_k}{\mathrm d T} = \frac{\mathrm d}{\mathrm d T}\sum_{k=1}^{\infty}\lambda_k = \frac{\mathrm d}{\mathrm d T}(PT) = P.$   (39)
Since $\lambda_k \to 0$ as $T \to 0^{+}$, we can safely conclude that $\frac{2/n_0}{1 + 2\lambda_k/n_0} \to \frac{2}{n_0}$. From Dirichlet's test, the series in (38) converges uniformly. Thus, by interchanging the infinite sum and the limit operation, we obtain
$\lim_{T\to 0^{+}}\frac{\mathrm d I(T)}{\mathrm d T} = \frac{1}{n_0}\sum_{k=1}^{\infty}\lim_{T\to 0^{+}}\frac{\mathrm d\lambda_k}{\mathrm d T} = \frac{P}{n_0},$   (40)
which completes the proof of Theorem 3.
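The limit in (40) can be checked numerically: using the eigenvalues of the reconstructed typical case (19), the secant slope $I(T)/T$ approaches $P/n_0$ as $T \to 0^{+}$. The parameter values below are illustrative.

```python
# Sketch: the slope of I(T) at the origin approaches P/n0 (Theorem 3), checked via (19) and (12).
import numpy as np
from scipy.optimize import brentq

def I_of_T(P, alpha, n0, T, K=4000):
    lam = np.empty(K)
    for k in range(1, K + 1):
        g = lambda b: b * T + 2.0 * np.arctan(b / alpha) - k * np.pi
        b_k = brentq(g, (k - 1) * np.pi / T + 1e-12, k * np.pi / T)
        lam[k - 1] = 2.0 * P * alpha / (alpha**2 + b_k**2)
    return 0.5 * np.sum(np.log1p(2.0 * lam / n0))

P, alpha, n0 = 1.0, 1.0, 1.0
for T in (1.0, 0.1, 0.01):
    print(T, I_of_T(P, alpha, n0, T) / T, P / n0)        # secant slope I(T)/T tends to P/n0 = 1
```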
Appendix E
The Evaluation of The Improper Integral (20)
Define the improper integral with parameters $\beta$ and $\alpha$:
$J(\beta) := \int_{-\infty}^{+\infty}\ln\frac{\omega^2 + \beta^2}{\omega^2 + \alpha^2}\,\mathrm d\omega.$   (41)
By taking the partial derivative of $J(\beta)$ with respect to $\beta$, we obtain
$\frac{\partial J(\beta)}{\partial\beta} = \int_{-\infty}^{+\infty}\frac{2\beta}{\omega^2 + \beta^2}\,\mathrm d\omega.$   (42)
Note that the analytic function defined as
$f(\omega) = \frac{2\beta}{\omega^2 + \beta^2}$   (43)
has residue
$\mathrm{Res}_{\omega = \mathrm j\beta} f(\omega) = \frac{2\beta}{2\mathrm j\beta} = \frac{1}{\mathrm j}$   (44)
at the pole $\omega = \mathrm j\beta$ in the upper half-plane, thus the integral in (42) can be evaluated by the residue theorem:
$\frac{\partial J(\beta)}{\partial\beta} = 2\pi\mathrm j\cdot\frac{1}{\mathrm j} = 2\pi.$   (45)
Since $J(\alpha) = 0$, integrating (42) with respect to $\beta$ from $\alpha$ to $\beta$ yields
$J(\beta) = 2\pi(\beta - \alpha).$   (46)
Then the integral in (20) can be calculated by setting $\beta = \sqrt{\alpha^2 + \frac{4P\alpha}{n_0}}$ in (46):
$C = \frac{1}{4\pi}\,J\!\left(\sqrt{\alpha^2 + \frac{4P\alpha}{n_0}}\right) = \frac{1}{2}\left(\sqrt{\alpha^2 + \frac{4P\alpha}{n_0}} - \alpha\right),$   (47)
which completes the proof.
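Under the reconstruction above, the closed form (46) can be cross-checked by direct numerical integration; the parameter values are arbitrary.

```python
# Quick check of (46): the integral of ln((w^2 + beta^2)/(w^2 + alpha^2)) over the real line equals 2*pi*(beta - alpha).
import numpy as np
from scipy.integrate import quad

alpha, beta = 1.0, 3.0
val, _ = quad(lambda w: np.log((w**2 + beta**2) / (w**2 + alpha**2)), -np.inf, np.inf)
print(val, 2.0 * np.pi * (beta - alpha))                 # both approximately 12.566
```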
References
- [1] C. E. Shannon, “A mathematical theory of communication,” Bell Syst. Tech. J., vol. 27, no. 3, pp. 379–423, Jul. 1948.
- [2] H. Landau, “Sampling, data transmission, and the Nyquist rate,” Proc. IEEE, vol. 55, no. 10, pp. 1701–1706, Oct. 1967.
- [3] H. Nyquist, “Certain topics in telegraph transmission theory,” Trans. American Institute of Electrical Engineers, vol. 47, no. 2, pp. 617–644, Apr. 1928.
- [4] D. Slepian, “On bandwidth,” Proc. IEEE, vol. 64, no. 3, pp. 292–300, Mar. 1976.
- [5] T. M. Cover, Elements of information theory. John Wiley & Sons, 1999.
- [6] D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, Apr. 2006.
- [7] J. Mercer, “Functions of positive and negative type and their connection with the theory of integral equations,” Philos. Trans. Royal Soc. A, vol. 209, pp. 415–446, 1909.
- [8] K. Zhu, Operator theory in function spaces. American Mathematical Soc., 2007, no. 138.
- [9] A. Balakrishnan, “A note on the sampling principle for continuous signals,” IRE Trans. Inf. Theory, vol. 3, no. 2, pp. 143–146, Jun. 1957.
- [10] J. Barrett and D. Lampard, “An expansion for some second-order probability distributions and its application to noise problems,” IRE Trans. Inf. Theory, vol. 1, no. 1, pp. 10–15, Mar. 1955.
- [11] S. S. Kelkar, L. L. Grigsby, and J. Langsner, “An extension of Parseval’s theorem and its use in calculating transient energy in the frequency domain,” IEEE Trans. Ind. Electron., vol. IE-30, no. 1, pp. 42–45, Feb. 1983.
- [12] C. Brislawn, “Kernels of trace class operators,” Proc. American Math. Soc., vol. 104, no. 4, 1988.
- [13] D. Cai and P. S. Vassilevski, “Eigenvalue problems for exponential-type kernels,” Comput. Methods in Applied Math., vol. 20, no. 1, pp. 61–78, Jan. 2020.
- [14] I. S. Gradshteyn and I. M. Ryzhik, Table of integrals, series, and products. Academic press, 2014.