Rényi Cross-Entropy Measures for Common Distributions
and Processes with Memory
Abstract
Two Rényi-type generalizations of the Shannon cross-entropy, the Rényi cross-entropy and the Natural Rényi cross-entropy, were recently used as loss functions for the improved design of deep learning generative adversarial networks. In this work, we build upon our results in [1] by deriving the Rényi and Natural Rényi differential cross-entropy measures in closed form for a wide class of common continuous distributions belonging to the exponential family and tabulating the results for ease of reference. We also summarise the Rényi-type cross-entropy rates between stationary Gaussian processes and between finite-alphabet time-invariant Markov sources.
Keywords
Rényi information measures, cross-entropy, divergence measures, exponential family distributions, Gaussian processes, Markov sources.
I Introduction
The Rényi entropy [2] of order $\alpha$ of a probability mass function $p$ with finite support $\mathcal{X}$ is defined as
$$H_\alpha(p) = \frac{1}{1-\alpha} \log \sum_{x \in \mathcal{X}} p(x)^\alpha$$
for $\alpha > 0$, $\alpha \neq 1$. The Rényi entropy generalizes the Shannon entropy,
$$H(p) = -\sum_{x \in \mathcal{X}} p(x) \log p(x),$$
in the sense that $H_\alpha(p) \to H(p)$ as $\alpha \to 1$. Several other Rényi-type information measures have been put forward, each obeying the condition that their limit as $\alpha$ goes to one reduces to a Shannon-type information measure. This includes the Rényi divergence (of order $\alpha$) between two discrete distributions $p$ and $q$ with common finite support $\mathcal{X}$, given by
$$D_\alpha(p \| q) = \frac{1}{\alpha - 1} \log \sum_{x \in \mathcal{X}} p(x)^\alpha q(x)^{1-\alpha},$$
which reduces to the familiar Kullback–Leibler divergence,
$$D_{\mathrm{KL}}(p \| q) = \sum_{x \in \mathcal{X}} p(x) \log \frac{p(x)}{q(x)},$$
as $\alpha \to 1$. Note that in some cases [3], there may exist multiple Rényi-type generalisations for the same information measure (particularly for the mutual information).
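A minimal numerical sketch (in Python, with arbitrary example distributions chosen for illustration) of this limiting behaviour:

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Rényi divergence of order alpha between finite distributions p and q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.log(np.sum(p ** alpha * q ** (1 - alpha))) / (alpha - 1)

def kl_divergence(p, q):
    """Kullback-Leibler divergence between finite distributions p and q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum(p * np.log(p / q))

p = [0.2, 0.5, 0.3]
q = [0.4, 0.4, 0.2]

# D_alpha(p||q) approaches D_KL(p||q) as alpha -> 1.
for alpha in [0.5, 0.9, 0.99, 0.999, 1.5]:
    print(alpha, renyi_divergence(p, q, alpha))
print("KL:", kl_divergence(p, q))
```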
Many of these definitions admit natural counterparts in the case when the involved distributions have a probability density function (pdf). This gives rise to information measures such as the Rényi differential entropy for a pdf $f$ with support $S$,
$$h_\alpha(f) = \frac{1}{1-\alpha} \log \int_S f(x)^\alpha \, dx,$$
and the Rényi (differential) divergence between pdfs $f$ and $g$ with common support $S$,
$$D_\alpha(f \| g) = \frac{1}{\alpha - 1} \log \int_S f(x)^\alpha g(x)^{1-\alpha} \, dx.$$
The Rényi cross-entropy between distributions $p$ and $q$ is an analogous generalization of the Shannon cross-entropy
$$H(p; q) = -\sum_{x \in \mathcal{X}} p(x) \log q(x).$$
Two definitions for this measure have been recently suggested. In light of the fact that Shannon's cross-entropy satisfies $H(p; q) = H(p) + D_{\mathrm{KL}}(p \| q)$, a natural definition of the Rényi cross-entropy is:
$$\tilde{H}_\alpha(p; q) := H_\alpha(p) + D_\alpha(p \| q). \tag{1}$$
This definition was indeed proposed in [4] in the continuous case, with the differential cross-entropy measure given by
$$\tilde{h}_\alpha(f; g) := h_\alpha(f) + D_\alpha(f \| g). \tag{2}$$
In contrast, prior to [4], the authors of [5] introduced the Rényi cross-entropy in their study of shifted Rényi measures expressed as the logarithm of weighted generalized power means. Specifically, upon simplifying Definition 6 in [5], their expression for the Rényi cross-entropy between distributions $p$ and $q$ is given by
$$H_\alpha(p; q) := \frac{1}{1-\alpha} \log \sum_{x \in \mathcal{X}} p(x) \, q(x)^{\alpha - 1}. \tag{3}$$
For the continuous case, (3) can be readily converted to yield the Rényi differential cross-entropy between pdfs $f$ and $g$:
$$h_\alpha(f; g) := \frac{1}{1-\alpha} \log \int_S f(x) \, g(x)^{\alpha - 1} \, dx. \tag{4}$$
Note that both (1) and (3) reduce to the Shannon cross-entropy as $\alpha \to 1$ [6]. A similar result holds for (2) and (4), where the Shannon differential cross-entropy, $h(f; g) = -\int_S f(x) \log g(x) \, dx$, is obtained. Also, the Rényi (differential) entropy is recovered in all equations when $q = p$ (respectively, $g = f$ almost everywhere). These properties alone make these definitions viable extensions of the Shannon (differential) cross-entropy.
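As a minimal numerical illustration of these limiting properties for the discrete definitions (1) and (3), consider the following sketch (in Python, with arbitrary example distributions):

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy of order alpha of a finite distribution p."""
    return np.log(np.sum(np.asarray(p, float) ** alpha)) / (1 - alpha)

def renyi_divergence(p, q, alpha):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.log(np.sum(p ** alpha * q ** (1 - alpha))) / (alpha - 1)

def natural_renyi_cross_entropy(p, q, alpha):   # definition (1)
    return renyi_entropy(p, alpha) + renyi_divergence(p, q, alpha)

def renyi_cross_entropy(p, q, alpha):           # definition (3)
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.log(np.sum(p * q ** (alpha - 1))) / (1 - alpha)

def shannon_cross_entropy(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return -np.sum(p * np.log(q))

p, q = [0.2, 0.5, 0.3], [0.4, 0.4, 0.2]

# Both definitions approach the Shannon cross-entropy as alpha -> 1 ...
for alpha in [0.9, 0.99, 0.999]:
    print(alpha, natural_renyi_cross_entropy(p, q, alpha),
          renyi_cross_entropy(p, q, alpha))
print("Shannon:", shannon_cross_entropy(p, q))

# ... and both reduce to the Rényi entropy of order alpha when q = p.
alpha = 2.0
print(renyi_cross_entropy(p, p, alpha), renyi_entropy(p, alpha))
```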
Finding closed-form expressions for the cross-entropy measure in (2) for continuous distributions is direct, since the Rényi divergence and the Rényi differential entropy were already calculated for numerous distributions in [7] and [8], respectively. However, deriving the measure in (4) is more involved. We hereafter refer to the measures in (1) and (2) as the Natural Rényi cross-entropy and the Natural Rényi differential cross-entropy, respectively, while we simply call the measures in (3) and (4) the Rényi cross-entropy and the Rényi differential cross-entropy, respectively.
In a recent conference paper [1], we showed how to calculate the Rényi differential cross-entropy between distributions of the same type from the exponential family. Building upon the results shown there, the purpose of this paper is to derive in closed form the expression of $h_\alpha(f; g)$ for thirteen commonly used univariate distributions from the exponential family, as well as for multivariate Gaussians, and to tabulate the results for ease of reference. We also analytically derive the Natural Rényi differential cross-entropy for the same set of distributions. Finally, we present tables summarising the Rényi and Natural Rényi (differential) cross-entropy rate measures, along with their Shannon counterparts, for two important classes of sources with memory, namely stationary Gaussian sources and finite-state time-invariant Markov sources.
Motivation for determining formulae for the Rényi cross-entropy originates from the use of the Shannon differential cross-entropy as a loss function for the design of deep learning generative adversarial networks (GANs) in [9]. The parameter $\alpha$, ubiquitous to all Rényi information measures, allows one to fine-tune the loss function to improve the quality of the GAN-generated output. This can be seen in [10, 6] and [4], which used the Rényi differential cross-entropy and the Natural Rényi differential cross-entropy measures, respectively, to generalize the original GAN loss function (which is recovered as $\alpha \to 1$), resulting in both improved GAN system stability and performance for multiple image datasets. It is also shown in [10, 6] that the introduced Rényi-centric generalized loss function preserves the equilibrium point satisfied by the original GAN via the so-called Jensen–Rényi divergence [11], a natural extension of the Jensen–Shannon divergence [12] upon which the equilibrium result of [9] is established. Other GAN systems that utilize different generalized loss functions were recently developed and analyzed in [13] and [14, 15] (see also the references therein for prior work).
The rest of this paper is organised as follows. In Section II, the formulae for the Rényi differential cross-entropy and Natural Rényi differential cross-entropy for distributions from the exponential family are given. In Section III, these calculations are systematically carried out for fourteen pairs of distributions of the same type within the exponential family, and the results are presented in two tables. The Rényi and Natural Rényi differential cross-entropy rates are presented in Section IV for stationary Gaussian sources; furthermore, the Rényi and Natural Rényi cross-entropy rates are provided in Section V for finite-state time-invariant Markov sources. Finally, the paper is concluded in Section VI.
II Rényi and Natural Rényi Differential Cross-Entropies for Distributions from the Exponential Family
An exponential family is a class of probability distributions over a support $S \subseteq \mathbb{R}^d$ defined by a parameter space $\Theta$ and functions $b: S \to \mathbb{R}$, $A: \Theta \to \mathbb{R}$, $T: S \to \mathbb{R}^k$, and $\eta: \Theta \to \mathbb{R}^k$ such that the pdfs in this family have the form
$$f(x; \theta) = b(x) \exp\big( \langle \eta(\theta), T(x) \rangle - A(\theta) \big), \qquad \theta \in \Theta, \tag{5}$$
where $\langle \cdot, \cdot \rangle$ denotes the standard inner product in $\mathbb{R}^k$. Alternatively, by using the (natural) parameter $\lambda := \eta(\theta)$, the pdf can also be written as
$$f(x; \lambda) = b(x) \exp\big( \langle \lambda, T(x) \rangle - \tilde{A}(\lambda) \big), \qquad \lambda \in \Lambda, \tag{6}$$
where $\Lambda := \{ \eta(\theta) : \theta \in \Theta \}$ with $\tilde{A}(\lambda) := A(\theta)$.
where with . Examples of important pdfs we consider from the exponential family are included in Appendix A.
In [1], the Rényi differential cross-entropy between pdfs $f_1$ and $f_2$ of the same type from the exponential family, with respective natural parameters $\lambda_1$ and $\lambda_2$, was proven to be
$$h_\alpha(f_1; f_2) = \frac{1}{1-\alpha} \Big( \tilde{A}(\hat{\lambda}) - \tilde{A}(\lambda_1) - (\alpha - 1) \tilde{A}(\lambda_2) + \log \mathbb{E}_{\hat{f}} \big[ b(X)^{\alpha - 1} \big] \Big), \tag{7}$$
where
$$\hat{\lambda} := \lambda_1 + (\alpha - 1) \lambda_2.$$
Here $\hat{f}$ refers to a distribution of the same type as $f_1$ and $f_2$ within the exponential family with natural parameter $\hat{\lambda}$.
It can also be shown that the Natural Rényi differential cross-entropy between $f_1$ and $f_2$ is given by
$$\tilde{h}_\alpha(f_1; f_2) = \tilde{A}(\lambda_2) + \frac{1}{1-\alpha} \Big( \tilde{A}(\tilde{\lambda}) - \tilde{A}(\bar{\lambda}) + \log \mathbb{E}_{\tilde{f}} \big[ b(X)^{\alpha - 1} \big] \Big), \tag{8}$$
where
$$\tilde{\lambda} := \alpha \lambda_1$$
and
$$\bar{\lambda} := \alpha \lambda_1 + (1 - \alpha) \lambda_2,$$
where $\tilde{f}$ refers to a distribution of the same type as $f_1$ and $f_2$ within the exponential family with natural parameter $\tilde{\lambda}$.
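Such closed-form derivations can be sanity-checked by evaluating the integral in (4) numerically. The sketch below does this for two exponential pdfs $f_1(x) = \mu_1 e^{-\mu_1 x}$ and $f_2(x) = \mu_2 e^{-\mu_2 x}$; carrying out the integral in (4) directly gives $h_\alpha(f_1; f_2) = \frac{1}{1-\alpha} \log \frac{\mu_1 \mu_2^{\alpha-1}}{\mu_1 + (\alpha-1)\mu_2}$ whenever $\mu_1 + (\alpha - 1)\mu_2 > 0$ (our own restatement obtained from (4); the corresponding table entry may be written in an equivalent but different form).

```python
import numpy as np
from scipy.integrate import quad

def renyi_diff_cross_entropy_numeric(f, g, alpha, support=(0.0, np.inf)):
    """Rényi differential cross-entropy (4) by numerical integration."""
    integrand = lambda x: f(x) * g(x) ** (alpha - 1)
    val, _ = quad(integrand, *support)
    return np.log(val) / (1 - alpha)

def renyi_diff_cross_entropy_exponential(mu1, mu2, alpha):
    """Closed form of (4) for Exponential(mu1) vs Exponential(mu2),
    valid when mu1 + (alpha - 1) * mu2 > 0."""
    return np.log(mu1 * mu2 ** (alpha - 1) / (mu1 + (alpha - 1) * mu2)) / (1 - alpha)

mu1, mu2, alpha = 2.0, 0.5, 1.7
f1 = lambda x: mu1 * np.exp(-mu1 * x)
f2 = lambda x: mu2 * np.exp(-mu2 * x)

print(renyi_diff_cross_entropy_numeric(f1, f2, alpha))       # numerical value
print(renyi_diff_cross_entropy_exponential(mu1, mu2, alpha))  # closed form
```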
III Tables of Rényi and Natural Rényi Differential Cross-Entropies
Tables I and II list Rényi and Natural Rényi differential cross-entropy expressions, respectively, between common distributions of the same type from the exponential family (which we describe in Appendix A for convenience). The closed-form expressions were derived using (7) and (8), respectively. In the tables, the subscript $i$ is used to denote that a parameter belongs to pdf $f_i$, $i = 1, 2$.
Name | |
Beta | |
, | |
, | |
, | |
, | |
, | |
, | |
Exponential | |
, | |
Gamma | |
, | |
, | |
, | |
, | |
, | |
Half-Normal | |
, | |
, | |
, | |
, | |
Rayleigh | |
, |
Name | |
Beta | |
, | |
, | |
, | |
, | |
, | |
, | |
Exponential | |
, | |
Gamma | |
, , | |
, | |
, | |
, | |
Half-Normal | |
, | |
, | |
Maxwell Boltzmann | |
, | |
, | |
Rayleigh | |
, |
IV Rényi and Natural Rényi Differential Cross-Entropy Rates for Stationary Gaussian Processes
In [1], the Rényi differential cross-entropy rate for stationary zero-mean Gaussian processes was derived. This rate, along with the Shannon and Natural Rényi differential cross-entropy rates, is summarised in Table III. Here, $\phi_1(\lambda)$ and $\phi_2(\lambda)$ denote the power spectral densities of the first and second zero-mean Gaussian processes, respectively.
Information Measure | Rate | Constraint |
---|---|---|
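As a hedged illustration of the kind of expression summarised in Table III, the Shannon differential cross-entropy rate between two stationary zero-mean Gaussian processes can be assembled from the classical Gaussian entropy-rate and divergence-rate formulas as $\tfrac{1}{2}\log(2\pi) + \tfrac{1}{4\pi}\int_{-\pi}^{\pi}\big[\log \phi_2(\lambda) + \phi_1(\lambda)/\phi_2(\lambda)\big]\,d\lambda$; this restatement is from standard results and is assumed, not quoted, to match the table's Shannon entry. The sketch below evaluates the integral numerically for two AR(1) spectral densities and checks the memoryless (white-noise) case against the single-letter Gaussian cross-entropy $\tfrac{1}{2}\log(2\pi\sigma_2^2) + \sigma_1^2/(2\sigma_2^2)$.

```python
import numpy as np
from scipy.integrate import quad

def ar1_spectral_density(sigma2, a):
    """PSD of the AR(1) process X_t = a*X_{t-1} + Z_t with Z_t i.i.d. N(0, sigma2):
    phi(lam) = sigma2 / |1 - a*exp(-i*lam)|^2."""
    return lambda lam: sigma2 / (1.0 - 2.0 * a * np.cos(lam) + a * a)

def shannon_cross_entropy_rate(phi1, phi2):
    """Shannon differential cross-entropy rate between two stationary zero-mean
    Gaussian processes with spectral densities phi1, phi2 (standard formula,
    assumed here to match the table's Shannon entry)."""
    integrand = lambda lam: np.log(phi2(lam)) + phi1(lam) / phi2(lam)
    val, _ = quad(integrand, -np.pi, np.pi)
    return 0.5 * np.log(2.0 * np.pi) + val / (4.0 * np.pi)

# Two AR(1) sources (arbitrary example parameters).
phi1 = ar1_spectral_density(sigma2=1.0, a=0.5)
phi2 = ar1_spectral_density(sigma2=2.0, a=-0.3)
print(shannon_cross_entropy_rate(phi1, phi2))

# White-noise check: for i.i.d. N(0, s1) vs N(0, s2) the rate reduces to
# 0.5*log(2*pi*s2) + s1/(2*s2).
s1, s2 = 1.5, 0.8
white1, white2 = (lambda lam: s1), (lambda lam: s2)
print(shannon_cross_entropy_rate(white1, white2),
      0.5 * np.log(2.0 * np.pi * s2) + s1 / (2.0 * s2))
```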
V Rényi and Natural Rényi Cross-Entropy Rates for Markov Sources
In [1], the Rényi cross-entropy rate between finite-state time-invariant Markov sources was established, using as in [16] tools from the theory of non-negative matrices and Perron–Frobenius theory (e.g., cf. [17, 18]). This rate, along with the Shannon and Natural Rényi cross-entropy rates, is derived and summarised in Table IV. Here, $P$ and $Q$ are the (stochastic) transition matrices associated with the first and second Markov sources, respectively, where both sources have a common alphabet of size $M$. To allow any value of the Rényi parameter $\alpha$ in $(0, 1) \cup (1, \infty)$, we assume that the transition matrix of the second Markov chain has positive entries ($Q > 0$); however, the transition matrix of the first Markov chain is taken to be an arbitrary stochastic matrix. For simplicity, we assume that the initial distribution vectors, $p^{(0)}$ and $q^{(0)}$, of both Markov chains also have positive entries ($p^{(0)} > 0$ and $q^{(0)} > 0$); this condition can be relaxed via the approach used to prove [16, Theorem 1]. Moreover, $\pi$ denotes the stationary probability row vector associated with the first Markov chain and $\mathbf{1}$ is an $M$-dimensional column vector where each element equals one. Furthermore, $\circ$ denotes element-wise multiplication (i.e., the Hadamard product operation) and $\log$ is the element-wise natural logarithm.
Finally, the definition of $\lambda_{\max}(R)$ for a non-negative matrix $R$ is more involved. If $R$ is irreducible, $\lambda_{\max}(R)$ is its largest positive eigenvalue. Otherwise, rewriting $R$ in its canonical form as detailed in [16, Proposition 1], we have that $\lambda_{\max}(R) = \max(\lambda', \lambda'')$, where $\lambda'$ is the maximum of all largest positive eigenvalues of (irreducible) sub-matrices of $R$ corresponding to self-communicating classes, and $\lambda''$ is the maximum of all largest positive eigenvalues of sub-matrices of $R$ corresponding to classes reachable from an inessential class.
Information Measure | Rate |
---|---|
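The rate expressions in Table IV are expressed in terms of the quantity $\lambda_{\max}(\cdot)$ above applied to Hadamard-type combinations of $P$ and $Q$. The following minimal sketch (in Python, with arbitrary example matrices; the specific combination $P \circ Q^{\circ(\alpha-1)}$ below is an assumed stand-in for illustration, not the exact matrix appearing in [1]) shows the computation of such an eigenvalue:

```python
import numpy as np

def largest_positive_eigenvalue(R):
    """Largest positive (Perron-Frobenius) eigenvalue of a non-negative matrix R,
    assumed irreducible here for simplicity."""
    eigvals = np.linalg.eigvals(R)
    return float(np.max(eigvals.real))

# Example transition matrices (arbitrary, for illustration only).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
Q = np.array([[0.5, 0.5],
              [0.2, 0.8]])

alpha = 1.5
# Hadamard (element-wise) combination of the two chains; this particular
# combination is an assumption made for illustration, not a quote of the table.
R = P * Q ** (alpha - 1)
print(largest_positive_eigenvalue(R))
```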
VI Conclusion
We have derived closed-form formulae for the Rényi and Natural Rényi differential cross-entropies of commonly used distributions from the exponential family. This is of potential use in further studies in information theory and machine learning, particularly problems where deep neural networks, trained according to a Shannon cross-entropy loss function, can be improved via generalized Rényi-type loss functions by virtue of the extra degree of freedom provided by the Rényi parameter $\alpha$. In addition, we have provided formulae for the Rényi and Natural Rényi differential cross-entropy rates for stationary zero-mean Gaussian processes and expressions for the cross-entropy rates for Markov sources. Further work includes expanding the present collection by considering distributions such as the Lévy or Weibull distributions and investigating cross-entropy measures based on the $f$-divergence [19, 20, 21], starting with Arimoto's divergence [22].
Acknowledgements
This work was supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada.
Appendix A: Distributions listed in Tables I and II
Name | |
() | (Support) |
Beta | |
(, ) | |
(, ) | |
() | |
() | |
Exponential | |
() | |
Gamma | |
(, ) | |
(, ) | |
(, ) | |
Half-Normal | |
() | |
Gumbel | |
(, ) | |
Pareto | |
(, ) | |
Maxwell Boltzmann | |
() | |
Rayleigh | |
() | |
Laplace | |
(, ) |
Notes
• $B(\cdot, \cdot)$ is the Beta function.
• $\Gamma(\cdot)$ is the Gamma function.
References
- [1] F. C. Thierrin, F. Alajaji, and T. Linder, “On the Rényi cross-entropy,” in Proceedings of the 17th Canadian Workshop on Information Theory (see also arXiv e-prints, arXiv:2206.14329), 2022, pp. 1–5.
- [2] A. Rényi, “On measures of entropy and information,” in Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, vol. 1, 1961, pp. 547–561.
- [3] S. Verdú, “α-mutual information,” in Proceedings of the IEEE Information Theory and Applications Workshop, 2015, pp. 1–6.
- [4] A. Sarraf and Y. Nie, “RGAN: Rényi generative adversarial network,” SN Computer Science, vol. 2, no. 1, p. 17, 2021.
- [5] F. J. Valverde-Albacete and C. Peláez-Moreno, “The case for shifting the Rényi entropy,” Entropy, vol. 21, pp. 1–21, 2019. [Online]. Available: https://doi.org/10.3390/e21010046
- [6] H. Bhatia, W. Paul, F. Alajaji, B. Gharesifard, and P. Burlina, “Least kth-order and Rényi generative adversarial networks,” Neural Computation, vol. 33, no. 9, pp. 2473–2510, Aug 2021. [Online]. Available: http://dx.doi.org/10.1162/neco_a_01416
- [7] M. Gil, F. Alajaji, and T. Linder, “Rényi divergence measures for commonly used univariate continuous distributions,” Information Sciences, vol. 249, pp. 124–131, 2013. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0020025513004441
- [8] K.-S. Song, “Rényi information, loglikelihood and an intrinsic distribution measure,” J. Statist. Plann. Inference, vol. 93, 02, 2001.
- [9] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Proceedings of Advances in Neural Information Processing Systems, vol. 27, 2014, pp. 2672–2680.
- [10] H. Bhatia, W. Paul, F. Alajaji, B. Gharesifard, and P. Burlina, “Rényi Generative Adversarial Networks,” ArXiv:2006.02479v1, 2020.
- [11] P. A. Kluza, “On Jensen-Rényi and Jeffreys-Rényi type f-divergences induced by convex functions,” Physica A: Statistical Mechanics and its Applications, 2019.
- [12] J. Lin, “Divergence measures based on the Shannon entropy,” IEEE Transactions on Information Theory, vol. 37, no. 1, pp. 145–151, 1991.
- [13] Y. Pantazis, D. Paul, M. Fasoulakis, Y. Stylianou, and M. Katsoulakis, “Cumulant GAN,” arXiv:2006.06625, 2020.
- [14] G. R. Kurri, T. Sypherd, and L. Sankar, “Realizing GANs via a tunable loss function,” in Proceedings of the IEEE Information Theory Workshop (ITW), 2021, pp. 1–6.
- [15] G. R. Kurri, M. Welfert, T. Sypherd, and L. Sankar, “α-GAN: Convergence and estimation guarantees,” in Proceedings of the IEEE International Symposium on Information Theory (ISIT), 2022, pp. 312–317.
- [16] Z. Rached, F. Alajaji, and L. L. Campbell, “Rényi’s divergence and entropy rates for finite alphabet Markov sources,” IEEE Transactions on Information Theory, vol. 47, no. 4, pp. 1553–1561, 2001.
- [17] E. Seneta, Non-negative Matrices and Markov Chains. Springer Science & Business Media, 2006.
- [18] R. G. Gallager, Discrete Stochastic Processes. Springer, 1996.
- [19] I. Csiszár, “Eine informationstheoretische Ungleichung und ihre Anwendung auf den Beweis der Ergodizität von Markoffschen Ketten,” Publications of the Mathematical Institute of the Hungarian Academy of Sciences, Series A, vol. 8, 1963.
- [20] I. Csiszár, “Information-type measures of difference of probability distributions and indirect observations,” Studia Sci. Math. Hungarica, vol. 2, pp. 299–318, 1967.
- [21] S. M. Ali and S. D. Silvey, “A general class of coefficients of divergence of one distribution from another,” Journal of the Royal Statistical Society. Series B (Methodological), vol. 28, no. 1, pp. 131–142, 1966. [Online]. Available: http://www.jstor.org/stable/2984279
- [22] F. Liese and I. Vajda, “On divergences and informations in statistics and information theory,” IEEE Transactions on Information Theory, vol. 52, no. 10, pp. 4394–4412, 2006.