Department of Mathematics and Computer Science, Leipzig University, Leipzig, Germany;
Santa Fe Institute, Santa Fe, NM, USA; e-mail: [email protected]
The Information-Geometric Perspective of Compositional Data Analysis
Abstract
Information geometry uses the formal tools of differential geometry to describe the space of probability distributions as a Riemannian manifold with an additional dual structure. The formal equivalence of compositional data with discrete probability distributions makes it possible to apply the same description to the sample space of Compositional Data Analysis (CoDA). The latter has been formally described as a Euclidean space with an orthonormal basis featuring components that are suitable combinations of the original parts. In contrast to the Euclidean metric, the information-geometric description singles out the Fisher information metric as the only one keeping the manifold's geometric structure invariant under equivalent representations of the underlying random variables. Well-known concepts that are valid in Euclidean coordinates, e.g., the Pythagorean theorem, are generalized by information geometry to corresponding notions that hold for more general coordinates. In briefly reviewing Euclidean CoDA and, in more detail, the information-geometric approach, we show how the latter justifies the use of distance measures and divergences that so far have received little attention in CoDA as they do not fit the Euclidean geometry favoured by current thinking. We also show how Shannon entropy and relative entropy can describe amalgamations in a simple way, while Aitchison distance requires the use of geometric means to obtain more succinct relationships. We proceed to prove the information monotonicity property for Aitchison distance. We close with some thoughts about new directions in CoDA where the rich structure that is provided by information geometry could be exploited.
1 Introduction
Information geometry and Compositional Data Analysis (CoDA) are fields that have ignored each other so far. Independently, both have found powerful descriptions that led to a deeper understanding of the geometric relationships between their respective objects of interest: probability distributions and compositional data. Although both of these live on the same mathematical space (the simplex), and some of the mathematical structures are identically described, surprisingly, both fields have come to focus on quite different geometric aspects. On the one hand, the tools of differential geometry have revealed the underlying duality of the manifold of probability distributions, with the Fisher information metric playing a central role. On the other hand, the classical log-ratio approach led to the modern description of the compositional sample space as Euclidean and affine. We think it is time that CoDA starts to profit from the rich structures that information geometry has to offer. This paper intends to build some bridges from information geometry to CoDA. In the first section, we will give a brief description of the Euclidean CoDA perspective. The second, and main, part of the paper describes in some detail the approach centred around the Fisher metric, with a description of the dual coordinates, exponential families and of how dual divergence functions generalize the notion of Euclidean distance. To ease understanding, throughout this section we link those concepts to the ones familiar in CoDA. In the third part of the paper, we show how information-based measures can lead to simpler expressions when amalgamations of parts are involved, and an important monotonicity result that holds for relative entropy is derived for Aitchison distance. We conclude with a short discussion about where we could go from here.
2 The Euclidean CoDA perspective
Compositional data analysis is now unthinkable without the log-ratio approach pioneered by Aitchison [1]. It has led both to a variety of data-analytic developments (for the most recent review, see [2]) and to more formal mathematical descriptions (see [3]). Following these, compositions can be described as equivalence classes whose representatives are points in a Euclidean space. We give a brief account here for completeness of exposition. Compositional data are defined as vectors of strictly positive numbers describing the parts of a whole for which the relevant information is only relative. As such, the absolute size of a $D$-part composition is irrelevant, and all the information is conveyed by the ratios of its components. This can further be formalized by considering compositions $\mathbf{x}$ and $\mathbf{y}$ equivalent if $\mathbf{y}=\lambda\mathbf{x}$ for a positive constant $\lambda$. A composition is then an equivalence class of such proportional vectors [4]. A closed composition is the simplicial representative $\mathcal{C}(\mathbf{x})$, where the symbol $\mathcal{C}$ denotes the closure operation (i.e., the division of each component by the sum over all components). Closed compositions are elements of the simplex
$$\mathcal{S}^{D}=\Big\{\mathbf{x}=(x_{1},\ldots,x_{D})^{T}:\ x_{i}>0,\ i=1,\ldots,D,\ \sum_{i=1}^{D}x_{i}=1\Big\},\qquad(1)$$
where $T$ denotes transposition. The simplex $\mathcal{S}^{D}$ can be equipped with a Euclidean structure by the vector space operations of perturbation $\oplus$ and powering $\odot$ (playing the role of vector addition and scalar multiplication in real vector spaces), defined by
$$\mathbf{x}\oplus\mathbf{y}=\mathcal{C}\big(x_{1}y_{1},\ldots,x_{D}y_{D}\big)^{T},\qquad(2)$$
$$\alpha\odot\mathbf{x}=\mathcal{C}\big(x_{1}^{\alpha},\ldots,x_{D}^{\alpha}\big)^{T}.\qquad(3)$$
An inverse perturbation is given by $\mathbf{x}\ominus\mathbf{y}=\mathcal{C}\big(x_{1}/y_{1},\ldots,x_{D}/y_{D}\big)^{T}$.
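To make these operations concrete, here is a minimal numerical sketch (our own illustration, not taken from the cited literature; the function names are ours) of closure, perturbation, powering and inverse perturbation using numpy:

```python
# Minimal sketch of the simplicial vector-space operations (illustrative only).
import numpy as np

def closure(x):
    """Divide by the component sum to obtain the simplicial representative."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def perturb(x, y):
    """Perturbation: component-wise product followed by closure."""
    return closure(np.asarray(x, dtype=float) * np.asarray(y, dtype=float))

def power(alpha, x):
    """Powering: component-wise power followed by closure."""
    return closure(np.asarray(x, dtype=float) ** alpha)

def perturb_inv(x, y):
    """Inverse perturbation: component-wise quotient followed by closure."""
    return closure(np.asarray(x, dtype=float) / np.asarray(y, dtype=float))

x, y = closure([1.0, 2.0, 3.0]), closure([2.0, 1.0, 1.0])
assert np.allclose(perturb(perturb_inv(x, y), y), x)   # (x minus y) plus y gives x back
assert np.allclose(power(2.0, x), perturb(x, x))       # powering by 2 equals x perturbed by x
```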
As a vector space, $\mathcal{S}^{D}$ also carries the structure of an affine space, and we can study affine subspaces, which are referred to as linear manifolds in [6]. In order to do so, we require a set of compositions $\mathbf{v}_{1},\ldots,\mathbf{v}_{k}$, which we assume to be perturbation-independent, and an origin $\mathbf{x}_{0}\in\mathcal{S}^{D}$. Here, independence means the following:
Let $\mathbf{n}=\mathcal{C}(1,\ldots,1)^{T}=(1/D,\ldots,1/D)^{T}$ be the neutral element. A set of compositions $\mathbf{v}_{1},\ldots,\mathbf{v}_{k}$ is called perturbation-independent if the fact that $(\alpha_{1}\odot\mathbf{v}_{1})\oplus\cdots\oplus(\alpha_{k}\odot\mathbf{v}_{k})=\mathbf{n}$ implies $\alpha_{1}=\cdots=\alpha_{k}=0$.
With this, an affine subspace is given as the set of compositions $\mathbf{x}\in\mathcal{S}^{D}$ such that
$$\mathbf{x}=\mathbf{x}_{0}\oplus(\alpha_{1}\odot\mathbf{v}_{1})\oplus\cdots\oplus(\alpha_{k}\odot\mathbf{v}_{k})\qquad(4)$$
for real constants $\alpha_{1},\ldots,\alpha_{k}$. Due to the perturbation-independence of the vectors $\mathbf{v}_{1},\ldots,\mathbf{v}_{k}$, this is a $k$-dimensional space.
It is convenient to define the inner product for our Euclidean space via the so-called centred log-ratio (clr) transformation. Its definition [5] and inverse operation are
$$\mathrm{clr}(\mathbf{x})=\left(\log\frac{x_{1}}{g(\mathbf{x})},\ldots,\log\frac{x_{D}}{g(\mathbf{x})}\right)^{T},\qquad(5)$$
$$\mathrm{clr}^{-1}(\mathbf{z})=\mathcal{C}\big(e^{z_{1}},\ldots,e^{z_{D}}\big)^{T},\qquad(6)$$
where $g(\mathbf{x})$ denotes the geometric mean $\left(\prod_{i=1}^{D}x_{i}\right)^{1/D}$. Note that the sum over the components of clr-transformed vectors is 0. An inner product can then be defined by
$$\langle\mathbf{x},\mathbf{y}\rangle_{A}=\sum_{i=1}^{D}\mathrm{clr}_{i}(\mathbf{x})\,\mathrm{clr}_{i}(\mathbf{y}).\qquad(7)$$
The corresponding (squared) norm and distance are
$$\|\mathbf{x}\|_{A}^{2}=\sum_{i=1}^{D}\mathrm{clr}_{i}(\mathbf{x})^{2},\qquad d_{A}(\mathbf{x},\mathbf{y})=\|\mathbf{x}\ominus\mathbf{y}\|_{A}.\qquad(8)$$
This distance is known as Aitchison distance (denoted by the subscript $A$). Note that the clr-transformation is an isometry between the $(D-1)$-dimensional Euclidean spaces $\mathcal{S}^{D}$ and the hyperplane $U\subset\mathbb{R}^{D}$, where
$$U=\Big\{\mathbf{z}=(z_{1},\ldots,z_{D})^{T}\in\mathbb{R}^{D}:\ \sum_{i=1}^{D}z_{i}=0\Big\}.\qquad(9)$$
We can thus obtain orthonormal bases in $\mathcal{S}^{D}$ from orthonormal bases in $U$. Such orthonormal basis vectors of $U$ are given by the columns $\mathbf{v}_{i}$, $i=1,\ldots,D-1$, of $V$, a matrix of order $D\times(D-1)$ obeying
$$V^{T}V=I_{D-1},\qquad(10)$$
$$V^{T}\mathbf{1}_{D}=\mathbf{0}_{D-1},\qquad(11)$$
with $I_{D-1}$ the identity matrix and $\mathbf{1}_{D}$ a $D$-dimensional vector where each component is 1. The first equation ensures orthonormality of the columns, the second makes sure their components sum to zero. Now the vectors
$$\mathbf{e}_{i}=\mathcal{C}\big(\exp(\mathbf{v}_{i})\big),\qquad i=1,\ldots,D-1,\qquad(12)$$
constitute an orthonormal basis in $\mathcal{S}^{D}$. Euclidean coordinates $\mathbf{x}^{*}=(x_{1}^{*},\ldots,x_{D-1}^{*})^{T}$ of $\mathbf{x}$ with respect to the basis $\mathbf{e}_{i}$, $i=1,\ldots,D-1$, then follow from the so-called isometric log-ratio transformation [7]. Its definition and inverse operation are
$$\mathbf{x}^{*}=\mathrm{ilr}(\mathbf{x})=V^{T}\mathrm{clr}(\mathbf{x}),\qquad(13)$$
$$\mathrm{ilr}^{-1}(\mathbf{x}^{*})=\mathcal{C}\big(\exp(V\mathbf{x}^{*})\big).\qquad(14)$$
The second equation shows the composition $\mathbf{x}$ as generated from the coordinates $\mathbf{x}^{*}$. This can also be written in the usual component form using the basis vectors of Eq. (12):
$$\mathbf{x}=\bigoplus_{i=1}^{D-1}\big(x_{i}^{*}\odot\mathbf{e}_{i}\big).\qquad(15)$$
Equations (10) and (11) characterize any orthonormal basis of $\mathcal{S}^{D}$. The matrix $V$ is known under the name of contrast matrix. Many choices for this matrix are possible. Balance coordinates [8] are popular for their relative simplicity and interpretability, while pivot coordinates [9] are sometimes preferred.
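The following sketch (our own illustration; the balance-type contrast matrix shown is only one of the many possible choices mentioned above) constructs a matrix satisfying Eqs. (10) and (11), applies the clr and ilr transformations of Eqs. (5) and (13), and verifies the isometry underlying Aitchison distance:

```python
# Sketch of clr/ilr coordinates and Aitchison distance (illustrative only).
import numpy as np

def clr(x):
    lx = np.log(x)
    return lx - lx.mean()

def contrast_matrix(D):
    """A balance-type contrast matrix V of order D x (D-1):
    V.T @ V = I_{D-1} and the columns of V sum to zero."""
    V = np.zeros((D, D - 1))
    for i in range(1, D):
        V[:i, i - 1] = 1.0 / i
        V[i, i - 1] = -1.0
        V[:, i - 1] *= np.sqrt(i / (i + 1.0))
    return V

def ilr(x, V):
    return V.T @ clr(x)

def ilr_inv(xstar, V):
    y = np.exp(V @ xstar)
    return y / y.sum()

def aitchison_dist(x, y):
    return np.linalg.norm(clr(x) - clr(y))

D = 4
V = contrast_matrix(D)
assert np.allclose(V.T @ V, np.eye(D - 1)) and np.allclose(V.sum(axis=0), 0.0)

x = np.array([0.1, 0.2, 0.3, 0.4])
y = np.array([0.4, 0.3, 0.2, 0.1])
assert np.allclose(ilr_inv(ilr(x, V), V), x)
# the ilr map is an isometry: Euclidean distance of coordinates = Aitchison distance
assert np.isclose(np.linalg.norm(ilr(x, V) - ilr(y, V)), aitchison_dist(x, y))
```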
3 The information-geometric perspective
Information geometry started out as a study of the geometry of statistical estimation. The set of probability distributions that constitute a statistical model is seen as a manifold whose invariant geometric structures [10] are studied in relation to the statistical estimation using this model. A Riemannian metric and a family of affine connections are naturally introduced on such a manifold [11]. It turns out that a fundamental duality underlies these structures [12]. While the first ideas about the geometry of statistical estimation are from the first half of the 20th century and go back to a variety of authors, a first attempt at a unified exposition of the topic by Amari [13] was published around a similar time as Aitchison's book.
Here we try to highlight some of the main concepts, borrowing from chapter two in [14], but mainly following the treatment in [15]. The latter also showcases the many applications information geometry has found during the last decades. While the best known book on the topic may be [12], the most formal and complete exposition can currently be found in [14].
The ideas that require more advanced notions from differential geometry will not be touched upon in our short outline.
3.1 Dual coordinates and Fisher metric in finite information geometry
To exploit the equivalence of probability distributions with compositional data, we only consider the manifold of (strictly positive) discrete distributions, which we identify with the simplex $\mathcal{S}^{D}$. To emphasize the equivalence, we will denote the probabilities by the same symbol as our compositions, i.e., $\mathbf{x}=(x_{1},\ldots,x_{D})^{T}$ is a vector of probabilities. To complete the probabilistic picture, we need a random variable $X$ which can take the values $1,\ldots,D$ with the respective probabilities $x_{1},\ldots,x_{D}$, the coordinates of $\mathbf{x}$. The distribution of this random variable can now be written as
$$p(z)=\sum_{i=1}^{D}x_{i}\,\delta_{i}(z),\qquad z\in\{1,\ldots,D\}.\qquad(16)$$
From the information-geometric perspective, there are two natural ways to parametrize the set of all (strictly positive) distributions of $X$. The first possibility is quite obvious. The first $D-1$ probabilities in Eq. (16) are free to be specified, that is $x_{1},\ldots,x_{D-1}$ with $x_{D}=1-\sum_{i=1}^{D-1}x_{i}$; they can be considered parameters, which we denote by $\eta_{i}=x_{i}$, $i=1,\ldots,D-1$. In these coordinates, probability distributions are written as
$$p(z;\eta)=\sum_{i=1}^{D-1}\eta_{i}\,\delta_{i}(z)+\Big(1-\sum_{i=1}^{D-1}\eta_{i}\Big)\delta_{D}(z).\qquad(17)$$
Alternatively, our distribution can be parametrized using what is known as the alr-transformation [5] in CoDA:
$$\theta^{i}=\log\frac{x_{i}}{x_{D}},\qquad i=1,\ldots,D-1.\qquad(18)$$
With this, we can write our distribution in the form
$$p(z;\theta)=\exp\left(\sum_{i=1}^{D-1}\theta^{i}\delta_{i}(z)-\psi(\theta)\right),\qquad(19)$$
where $\delta_{i}(z)=1$ if $z=i$, and $\delta_{i}(z)=0$ otherwise. The function $\psi$ ensures normalization, that is
$$\psi(\theta)=\log\left(1+\sum_{i=1}^{D-1}e^{\theta^{i}}\right)=-\log x_{D}.\qquad(20)$$
The parametrization of Eq. (19) in terms of $\theta=(\theta^{1},\ldots,\theta^{D-1})$ can be used in order to define a linear structure on $\mathcal{S}^{D}$. The addition of two distributions $p(\cdot;\theta_{1})$ and $p(\cdot;\theta_{2})$ can simply be defined by taking their product and then normalizing. With this vector addition, denoted by $\oplus$, we obviously have
$$\big(p(\cdot;\theta_{1})\oplus p(\cdot;\theta_{2})\big)(z)=p(z;\theta_{1}+\theta_{2}).\qquad(21)$$
Thus, the operation $\oplus$ is consistent with the usual addition in the parameter space $\mathbb{R}^{D-1}$. The multiplication of a distribution $p(\cdot;\theta)$ with a scalar $\alpha$ can be correspondingly defined by potentiating and then normalizing. This defines a scalar multiplication $\odot$, and we have
$$\big(\alpha\odot p(\cdot;\theta)\big)(z)=p(z;\alpha\theta).\qquad(22)$$
Obviously, the scalar multiplication is consistent with the usual multiplication in the parameter space. Note that the vector space structure defined here, which is well known in information geometry, coincides with the structure defined by equations (2) and (3). Given the linear structure, we can consider affine subspaces of $\mathcal{S}^{D}$. These are well-known and fundamental families in statistics, statistical physics, and information geometry, the so-called exponential families. We basically obtain them from the representation of Eq. (19) if we replace the functions $\delta_{i}$ by (arbitrary) functions $F_{i}(z)$, $i=1,\ldots,k$, and shift the whole family by some reference measure $\nu$:
$$p(z;\theta)=\exp\left(\sum_{i=1}^{k}\theta^{i}F_{i}(z)-\psi(\theta)\right)\nu(z).\qquad(23)$$
Here, $\psi(\theta)$ again ensures normalization, but it does not reduce to the simple structure of Eq. (20) in general. In statistical physics, the function $\psi$ is known as the free energy (in other contexts it is also known as the cumulant-generating function).
Note that, given the same linear structure on $\mathcal{S}^{D}$, the exponential families coincide with the linear manifolds of Eq. (4), which were introduced into the field of Compositional Data Analysis more recently.
In what follows we restrict attention to the parametrizations of equations (17) and (19) of the full simplex as one instance of the general structure that underlies information geometry. The function $\psi(\theta)$, given by Eq. (20), is a convex function, and we can get back the coordinates $\eta_{i}$ from it via a Legendre transformation [12]:
$$\eta_{i}=\partial_{i}\psi(\theta)=x_{i},\qquad i=1,\ldots,D-1,\qquad(24)$$
where $\partial_{i}$ denotes the partial derivative $\partial/\partial\theta^{i}$. The Legendre dual of $\psi$ is another convex function $\varphi(\eta)$ defined by
$$\varphi(\eta)=\max_{\theta}\left\{\sum_{i=1}^{D-1}\theta^{i}\eta_{i}-\psi(\theta)\right\},\qquad(25)$$
which is given by the negative Shannon entropy
$$\varphi(\eta)=\sum_{i=1}^{D}x_{i}\log x_{i}.\qquad(26)$$
In equivalence to Eq. (24), from $\varphi$ we can get back the $\theta$ coordinates:
$$\theta^{i}=\frac{\partial\varphi(\eta)}{\partial\eta_{i}},\qquad i=1,\ldots,D-1.\qquad(27)$$
There is thus a fundamental duality mediated by the Legendre transformation which links the two types of parameters $\theta$ and $\eta$ as well as the convex functions $\psi$ and $\varphi$. Legendre transformations are well known to play a fundamental role in phenomenological thermodynamics. Their importance for information geometry was established by Amari and Nagaoka [12].
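As a numerical illustration of this duality (our own sketch, assuming the parametrization of Eqs. (18) and (20); none of the function names below come from an existing CoDA library), the gradient of the free energy recovers the probabilities, and the Legendre relation connects the two potentials:

```python
# Sketch of the Legendre duality between psi (free energy) and phi (negative entropy).
import numpy as np

def psi(theta):
    """Free energy / cumulant-generating function of Eq. (20)."""
    return np.log(1.0 + np.exp(theta).sum())

def theta_from(p):
    """alr (exponential) coordinates theta^i = log(x_i / x_D) of Eq. (18)."""
    return np.log(p[:-1] / p[-1])

def phi(p):
    """Negative Shannon entropy, Eq. (26)."""
    return np.sum(p * np.log(p))

p = np.array([0.1, 0.2, 0.3, 0.4])
theta, eta = theta_from(p), p[:-1]

# eta_i = d psi / d theta^i, Eq. (24), checked by central finite differences
eps = 1e-6
grad_psi = np.array([(psi(theta + eps * e) - psi(theta - eps * e)) / (2 * eps)
                     for e in np.eye(len(theta))])
assert np.allclose(grad_psi, eta, atol=1e-6)

# Legendre relation behind Eq. (25): phi(eta) = <theta, eta> - psi(theta)
assert np.isclose(phi(p), theta @ eta - psi(theta))
```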
In what follows, we use the parameters $\eta_{i}$. Like the $\theta^{i}$, they define a (local) coordinate system of the manifold $\mathcal{S}^{D}$. From each point $\mathbf{x}\in\mathcal{S}^{D}$, coordinate curves $t\mapsto\mathbf{x}(t)$ in $\mathcal{S}^{D}$ emerge when holding all of the $\eta_{j}$ with $j\neq i$ constant. Consider their velocities
$$\frac{\partial}{\partial\eta_{i}}:=\frac{\mathrm{d}\mathbf{x}(t)}{\mathrm{d}t}\bigg|_{t=0}.\qquad(28)$$
In the point $\mathbf{x}$ itself, the vectors $\partial/\partial\eta_{i}$ pointing in the direction of each coordinate curve form a basis of the so-called tangent space of this point, see Figure 1a). Similarly, we can define vectors $\partial/\partial\theta^{i}$ with respect to the coordinates $\theta^{i}$, which also span the tangent space.
For a Riemannian manifold, a metric tensor $g$ is defined. This metric is obtained via an inner product on the tangent space:
$$g_{ij}(\xi)=\left\langle\frac{\partial}{\partial\xi_{i}},\frac{\partial}{\partial\xi_{j}}\right\rangle,\qquad(29)$$
which depends on the coordinates $\xi$ chosen. The coordinate system is Euclidean if $g_{ij}=\delta_{ij}$ (this is the case for the parametrization achieved by Eq. (13)). For the Riemannian metric of exponential families, the basis vectors $\partial/\partial\theta^{i}$ can be identified with the so-called score function $\partial\log p(z;\theta)/\partial\theta^{i}$ (the score function plays an important role in maximum-likelihood estimation). The resulting Riemannian metric is known as the Fisher information matrix:
$$g_{ij}(\theta)=\mathbb{E}\left[\frac{\partial\log p(z;\theta)}{\partial\theta^{i}}\,\frac{\partial\log p(z;\theta)}{\partial\theta^{j}}\right]\qquad(30)$$
$$=\mathrm{Cov}\big(\delta_{i}(z),\delta_{j}(z)\big)\qquad(31)$$
$$=x_{i}\,\delta_{ij}-x_{i}x_{j},\qquad i,j=1,\ldots,D-1,\qquad(34)$$
with $\mathbb{E}$ denoting the expectation value, and $\delta_{ij}$ the Kronecker delta.
Note that this is the covariance matrix of the random vector $\big(\delta_{1}(z),\ldots,\delta_{D-1}(z)\big)$, as expressed by Eq. (31). Convex functions have positive definite Hessian matrices. Here their elements are given by the Fisher metric itself:
$$\frac{\partial^{2}\psi(\theta)}{\partial\theta^{i}\partial\theta^{j}}=g_{ij}(\theta),\qquad(35)$$
$$\frac{\partial^{2}\varphi(\eta)}{\partial\eta_{i}\partial\eta_{j}}=g^{ij}(\eta).\qquad(36)$$
The second matrix is the inverse of the first, which follows from their Legendre duality. This means the second matrix is an inverse covariance, an important object in the theory of graphical models [16]. Although the Fisher metric is not Euclidean, i.e., $g_{ij}\neq\delta_{ij}$, we do have a generalization of this when mixing the dual coordinates: $\big\langle\partial/\partial\theta^{i},\partial/\partial\eta_{j}\big\rangle=\delta^{i}_{j}$.
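A small numerical check (our own sketch) that the Fisher matrix of Eq. (34) and the Hessian of $\varphi$, whose explicit entries $\delta_{ij}/x_{i}+1/x_{D}$ we state here without derivation, are indeed mutually inverse:

```python
# Sketch: the Fisher matrix g_ij = x_i delta_ij - x_i x_j (Hessian of psi) and the
# Hessian of phi with entries delta_ij / x_i + 1 / x_D are mutually inverse.
import numpy as np

x = np.array([0.1, 0.2, 0.3, 0.4])
eta = x[:-1]                                   # dual (mixture) coordinates

G = np.diag(eta) - np.outer(eta, eta)          # Hessian of psi (Fisher matrix)
G_dual = np.diag(1.0 / eta) + 1.0 / x[-1]      # Hessian of phi (inverse covariance)

assert np.allclose(G @ G_dual, np.eye(len(eta)))   # Legendre duality: mutual inverses
```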
3.2 Distance measures and divergences
Our affine structure on $\mathcal{S}^{D}$ can be reformulated additively via an exponential map $\exp_{\mathbf{p}}:U\rightarrow\mathcal{S}^{D}$, $\mathbf{p}\in\mathcal{S}^{D}$:
$$\exp_{\mathbf{p}}(\mathbf{z})=\mathbf{p}\oplus\mathcal{C}\big(e^{z_{1}},\ldots,e^{z_{D}}\big)^{T},\qquad\mathbf{z}\in U,\qquad(39)$$
$$\exp_{\mathbf{p}}^{-1}(\mathbf{x})=\mathrm{clr}(\mathbf{x}\ominus\mathbf{p}),\qquad(40)$$
using the notation introduced in section 2, and shown here together with its inverse. This map is used in [17], where ordinary linear differential equations are considered whose time derivative is defined via the simplicial difference quotient $\frac{1}{\epsilon}\odot\big(\mathbf{x}(t+\epsilon)\ominus\mathbf{x}(t)\big)$ for $\epsilon\rightarrow0$. These turn out to be replicator equations with special properties that are known from population dynamics. Note that, e.g., with the center of the simplex $\mathbf{n}$ as defined before, we have $\exp_{\mathbf{n}}^{-1}=\mathrm{clr}$. With the exponential map, we can interpret $\exp_{\mathbf{x}_{2}}^{-1}(\mathbf{x}_{1})=\mathrm{clr}(\mathbf{x}_{1}\ominus\mathbf{x}_{2})$ as the difference vector between two compositions, see Figure 1b). The set of all such difference vectors for a given point can be interpreted as the gradient field of a convex function (a.k.a. a potential). In order to highlight the generality of this concept in information geometry, we consider a general convex function $f(\xi)$ with respect to some parameters $\xi$ from a convex domain. In the following, we will denote the parameters at which $f$ is to be evaluated for the compositions $\mathbf{x}_{1}$ and $\mathbf{x}_{2}$ by the subscripts 1 and 2, i.e., we write $\xi_{1}$ and $\xi_{2}$. The linearization of $f$ in $\xi_{2}$ is given by $\xi\mapsto f(\xi_{2})+\nabla f(\xi_{2})^{T}(\xi-\xi_{2})$.
The graph of this linearization is a hyperplane of dimension $D-1$ touching the graph of $f$ in the point $\xi_{2}$, see Figure 2. The difference between $f$ and its linearization in $\xi_{2}$, evaluated at $\xi_{1}$, defines a so-called Bregman divergence, a class of divergences that plays an important role in information geometry. More precisely,
$$D^{(f)}(\mathbf{x}_{1}\|\mathbf{x}_{2})=f(\xi_{1})-f(\xi_{2})-\nabla f(\xi_{2})^{T}(\xi_{1}-\xi_{2}).\qquad(41)$$
Divergences are similar to distance functions but they are not necessarily symmetric and need not fulfill the triangle inequality. As an example, let us consider the potential naturally associated with the structure of equations (39) and (40), the squared Aitchison norm
$$f(\mathbf{x})=\|\mathbf{x}\|_{A}^{2}=\sum_{i=1}^{D-1}\mathrm{ilr}_{i}(\mathbf{x})^{2},\qquad(42)$$
with $\mathrm{ilr}_{i}(\mathbf{x})$ the $i$-th ilr coordinate $x_{i}^{*}$, see Eq. (13), and $\mathbf{x}^{*}$ the vector of these coordinates. We then have
$$D^{(\|\cdot\|_{A}^{2})}(\mathbf{x}_{1}\|\mathbf{x}_{2})=\|\mathbf{x}_{1}^{*}-\mathbf{x}_{2}^{*}\|^{2}=d_{A}^{2}(\mathbf{x}_{1},\mathbf{x}_{2}),\qquad(43)$$
coinciding with the squared Aitchison distance. This is the special case of a Euclidean divergence, which is also a (squared) distance function.
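As a quick numerical confirmation (our own sketch; the generic helper bregman is ours), the Bregman divergence of the squared Aitchison norm, evaluated in the flat clr (equivalently ilr) coordinates, reproduces the squared Aitchison distance of Eq. (43):

```python
# Sketch: Bregman divergence of the potential ||.||_A^2 = squared Aitchison distance.
import numpy as np

def clr(x):
    lx = np.log(x)
    return lx - lx.mean()

def bregman(f, grad_f, u, v):
    """D^(f)(u || v) = f(u) - f(v) - <grad f(v), u - v>, cf. Eq. (41)."""
    return f(u) - f(v) - grad_f(v) @ (u - v)

f = lambda z: z @ z            # squared Aitchison norm in clr coordinates
grad_f = lambda z: 2.0 * z

x1 = np.array([0.1, 0.2, 0.3, 0.4])
x2 = np.array([0.4, 0.3, 0.2, 0.1])
d2_aitchison = np.sum((clr(x1) - clr(x2)) ** 2)
assert np.isclose(bregman(f, grad_f, clr(x1), clr(x2)), d2_aitchison)
```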
Let us now come to the divergences that arise when replacing $f$ in Eq. (41) by our dual convex functions $\psi$ and $\varphi$. They turn out to be the relative entropies (a.k.a. Kullback-Leibler divergences)
$$D^{(\psi)}(\mathbf{x}_{1}\|\mathbf{x}_{2})=\sum_{i=1}^{D}x_{2i}\log\frac{x_{2i}}{x_{1i}},\qquad(44)$$
$$D^{(\varphi)}(\mathbf{x}_{1}\|\mathbf{x}_{2})=\sum_{i=1}^{D}x_{1i}\log\frac{x_{1i}}{x_{2i}}.\qquad(45)$$
Thus the symmetry we had in the Euclidean case finds its generalization for our dual case in
$$D^{(\psi)}(\mathbf{x}_{1}\|\mathbf{x}_{2})=D^{(\varphi)}(\mathbf{x}_{2}\|\mathbf{x}_{1}).\qquad(46)$$
Moreover, one can show that we can "complete the square" via
$$D^{(\varphi)}(\mathbf{x}_{1}\|\mathbf{x}_{2})=\varphi\big(\eta(\mathbf{x}_{1})\big)+\psi\big(\theta(\mathbf{x}_{2})\big)-\sum_{i=1}^{D-1}\theta^{i}(\mathbf{x}_{2})\,\eta_{i}(\mathbf{x}_{1}).\qquad(47)$$
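The dual Bregman divergences and the "completed square" of Eq. (47) can be checked numerically as follows (our own sketch; variable names are ours):

```python
# Sketch: the Bregman divergences of psi and phi are the two relative entropies,
# and the mixed expression of Eq. (47) reproduces the Kullback-Leibler divergence.
import numpy as np

def kl(a, b):
    return np.sum(a * np.log(a / b))

psi = lambda th: np.log(1.0 + np.exp(th).sum())   # free energy, Eq. (20)
phi = lambda p: np.sum(p * np.log(p))             # negative entropy, Eq. (26)
theta = lambda p: np.log(p[:-1] / p[-1])          # exponential coordinates, Eq. (18)
eta = lambda p: p[:-1]                            # dual (mixture) coordinates

x1 = np.array([0.1, 0.2, 0.3, 0.4])
x2 = np.array([0.25, 0.25, 0.25, 0.25])

D_psi = psi(theta(x1)) - psi(theta(x2)) - eta(x2) @ (theta(x1) - theta(x2))
D_phi = phi(x1) - phi(x2) - theta(x2) @ (eta(x1) - eta(x2))

assert np.isclose(D_psi, kl(x2, x1))   # Eq. (44)
assert np.isclose(D_phi, kl(x1, x2))   # Eq. (45)
assert np.isclose(psi(theta(x2)) + phi(x1) - theta(x2) @ eta(x1), kl(x1, x2))  # Eq. (47)
```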
Of course, symmetrizations of relative entropy exist, with the Jensen-Shannon divergence perhaps the most prominent example. Also, a symmetric "compositional" version of relative entropy has been proposed in CoDA [18] because of its additional properties that are often regarded as indispensable (as an example, the translation invariance of distance measures, known under the name of perturbation invariance in CoDA, has its information-geometric analogue in the invariance of the inner product of two tangent vectors $X$, $Y$ under parallel transport $\Pi$ and its dual $\Pi^{*}$: $\langle\Pi X,\Pi^{*}Y\rangle=\langle X,Y\rangle$). While such measures have some interesting properties, they do not make use of the duality of our parametrizations and are therefore less suitable for our approach. Indeed, although a dual divergence is not a distance measure in the strict sense, it can quantify the distance between points along a curve in a similar way, and generalizations of well-known relationships from Euclidean geometry are available. Geodesic lines connecting two compositions can be constructed via convex combinations of the parameters $\theta$, and the corresponding dual geodesics from convex combinations of the dual parameters $\eta$. Such geodesics are orthogonal to each other when the inner product of their tangent vectors, with respect to the Fisher metric, vanishes in the point of intersection. In this case, a generalized Pythagorean theorem holds for the corresponding dual divergence, e.g.,
$$D^{(\varphi)}(\mathbf{x}_{1}\|\mathbf{x}_{3})=D^{(\varphi)}(\mathbf{x}_{1}\|\mathbf{x}_{2})+D^{(\varphi)}(\mathbf{x}_{2}\|\mathbf{x}_{3}),\qquad(48)$$
where the $\eta$-geodesic connecting $\mathbf{x}_{1}$ and $\mathbf{x}_{2}$ and the $\theta$-geodesic connecting $\mathbf{x}_{2}$ and $\mathbf{x}_{3}$ intersect orthogonally in $\mathbf{x}_{2}$.
3.3 Distances obtained from the Fisher metric compared with those used in CoDA
We have seen that the potential of Eq. (42) led to a divergence that is also a Euclidean distance. Let us now consider a generalization of our dual divergences that includes as a special case a Euclidean distance that is related to the Fisher metric. The so-called $\alpha$-divergence (closely related to the Rényi [19] and Tsallis [20] entropies) is defined as
$$D^{(\alpha)}(\mathbf{x}_{1}\|\mathbf{x}_{2})=\frac{4}{1-\alpha^{2}}\left(1-\sum_{i=1}^{D}x_{1i}^{\frac{1-\alpha}{2}}\,x_{2i}^{\frac{1+\alpha}{2}}\right),\qquad\alpha\neq\pm1.\qquad(49)$$
In the limit, the cases $\alpha=1$ and $\alpha=-1$ correspond to $D^{(\psi)}$ and $D^{(\varphi)}$, respectively. The case $\alpha=0$ (where the divergence is self-dual, i.e., symmetric) corresponds, up to a factor of two, to the squared Euclidean distance between the points $\sqrt{\mathbf{x}_{1}}$ and $\sqrt{\mathbf{x}_{2}}$ (square roots taken component-wise):
$$D^{(0)}(\mathbf{x}_{1}\|\mathbf{x}_{2})=2\sum_{i=1}^{D}\big(\sqrt{x_{1i}}-\sqrt{x_{2i}}\big)^{2}=2\,d_{H}^{2}(\mathbf{x}_{1},\mathbf{x}_{2}).\qquad(51)$$
Here, $d_{H}$ denotes the so-called Hellinger distance. It is closely related to the Riemannian distance (the Riemannian distance between two points on a manifold is the minimum of the lengths of all the piecewise smooth paths joining the two points) between two compositions with respect to the Fisher metric. This so-called Fisher distance can be expressed explicitly [14] as
$$d_{F}(\mathbf{x}_{1},\mathbf{x}_{2})=2\arccos\left(\sum_{i=1}^{D}\sqrt{x_{1i}x_{2i}}\right),\qquad(52)$$
and its relation to the Hellinger distance is given by
$$d_{H}(\mathbf{x}_{1},\mathbf{x}_{2})=2\sin\!\left(\frac{d_{F}(\mathbf{x}_{1},\mathbf{x}_{2})}{4}\right).\qquad(53)$$
These distances can be better understood when noting the role played by the angle $\gamma$ between the two points $\sqrt{\mathbf{x}_{1}}$ and $\sqrt{\mathbf{x}_{2}}$ on the unit sphere, i.e., when comparing Eq. (52) with the cosine of the angle between the rays going from the origin through the transformed compositions (see Fig. 3), also referred to as the Bhattacharyya coefficient [23]:
$$\cos\gamma(\mathbf{x}_{1},\mathbf{x}_{2})=\sum_{i=1}^{D}\sqrt{x_{1i}x_{2i}}\,,\qquad\text{so that}\quad d_{F}=2\gamma.\qquad(54)$$
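With the conventions of Eqs. (51)-(54), the chord-versus-arc relation between Hellinger and Fisher distance can be verified numerically (our own sketch):

```python
# Sketch: Hellinger distance, Fisher distance and Bhattacharyya coefficient.
import numpy as np

def bhattacharyya(p, q):
    """Cosine of the angle between sqrt(p) and sqrt(q) on the unit sphere, Eq. (54)."""
    return np.sum(np.sqrt(p * q))

def hellinger(p, q):
    return np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def fisher_dist(p, q):
    """Riemannian (Fisher) distance: arc length on the sphere of radius 2, Eq. (52)."""
    return 2.0 * np.arccos(np.clip(bhattacharyya(p, q), -1.0, 1.0))

x1 = np.array([0.1, 0.2, 0.3, 0.4])
x2 = np.array([0.25, 0.25, 0.25, 0.25])
# chord versus arc on the sphere, cf. Eq. (53)
assert np.isclose(hellinger(x1, x2), 2.0 * np.sin(fisher_dist(x1, x2) / 4.0))
```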
Let us now come back to Aitchison distance and discuss in which structural aspects it differs from divergences obtained from the Fisher metric. In fact, in data analysis, parametrized classes of distance measures are common [21]. In the same way as in Eq. (49), they are mediated by the Box-Cox transformation, which has the limit
$$\lim_{\alpha\rightarrow0}\frac{x^{\alpha}-1}{\alpha}=\log x.\qquad(55)$$
This has been applied in CoDA to obtain log-ratio analysis as a Correspondence Analysis of power-transformed data [22]. There, we have the following family of distance measures:
$$d_{\alpha}^{2}(\mathbf{x}_{1},\mathbf{x}_{2})=\frac{1}{\alpha^{2}}\sum_{i=1}^{D}w_{i}\left(\frac{x_{1i}^{\alpha}}{\sum_{j=1}^{D}w_{j}x_{1j}^{\alpha}}-\frac{x_{2i}^{\alpha}}{\sum_{j=1}^{D}w_{j}x_{2j}^{\alpha}}\right)^{2},\qquad(56)$$
where the $w_{i}$ are suitable weights summing to one. For the case $\alpha=1$, this is the (square of the) symmetric $\chi^{2}$-distance used in Correspondence Analysis, while the limit $\alpha\rightarrow0$ (which can be seen as the high-temperature limit in statistical physics) gives Aitchison distance, up to a constant factor, when $w_{i}=1/D$ for all $i$; see the Appendix for a proof. Although the case $\alpha=1/2$ has a direct relationship with it, Hellinger distance does not form part of this family because of the closure operation that makes us stay inside the simplex. Similarly, Aitchison distance cannot be obtained from the alpha divergences of Eq. (49). Alpha divergences are included in a more general class of divergences known under the name of $f$-divergence. They have the form
$$D_{f}(\mathbf{x}_{1}\|\mathbf{x}_{2})=\sum_{i=1}^{D}x_{1i}\,f\!\left(\frac{x_{2i}}{x_{1i}}\right),\qquad(57)$$
where $f$ is a convex function. It is a well-established result in information geometry that $f$-divergences are the only decomposable divergences (a divergence is decomposable if it can be written as a sum of terms that only depend on individual components) that behave monotonically under coarse-graining of information [15], i.e., when compositional parts are amalgamated into higher-level parts. This important invariance property is called information monotonicity. Aitchison distance is not decomposable, as each summand uses information from all compositional parts via their geometric mean. Nevertheless, in the next section we will show that it fulfills information monotonicity.
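A numerical sketch of the limiting behaviour discussed around Eq. (56) (our own illustration, following the conventions of Eq. (56) exactly as written above, with uniform weights; it is not code from the cited references):

```python
# Sketch: the power-transform family of Eq. (56) approaches (1/D) times the squared
# Aitchison distance as alpha -> 0, for uniform weights w_i = 1/D.
import numpy as np

def clr(x):
    lx = np.log(x)
    return lx - lx.mean()

def d2_alpha(x, y, alpha, w):
    fx = x ** alpha / np.sum(w * x ** alpha)
    fy = y ** alpha / np.sum(w * y ** alpha)
    return np.sum(w * (fx - fy) ** 2) / alpha ** 2

x = np.array([0.1, 0.2, 0.3, 0.4])
y = np.array([0.4, 0.3, 0.2, 0.1])
D = len(x)
w = np.full(D, 1.0 / D)

d2_A = np.sum((clr(x) - clr(y)) ** 2)           # squared Aitchison distance
assert np.isclose(d2_alpha(x, y, 1e-4, w), d2_A / D, rtol=1e-3)
```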
It is interesting to note that the two Euclidean distances (Hellinger and Aitchison) are each related to different isometries of the simplex. In the case of Hellinger distance, compositions are isometrically mapped into the positive orthant of the sphere (note, however, that this mapping does not yield the spherical representative of the composition in the sense of the definition of an equivalence class). This isometry also holds for the Fisher metric itself [14] (which makes it possible to view the Fisher metric as a Euclidean metric on the $\mathbb{R}^{D}$ in which the sphere is embedded). In the case of Aitchison distance, the isometry in question is of central interest in CoDA. It is the clr transformation, i.e., the map between $\mathcal{S}^{D}$ and the hyperplane $U$ of Eq. (9). Here, however, there is no corresponding isometry of the Fisher metric. Although a Euclidean metric may appear convenient, it does not have the same desirable properties as the Fisher metric. In fact, a central result in information geometry states that the Fisher metric is the only metric [24] that stays invariant under coarse graining of information.
4 Information monotonicity of Aitchison distance
4.1 Amalgamations lead to coarse grained information
Let us denote by $S$ a subset of $\{1,\ldots,D\}$, and let $\mathbf{x}_{S}$ denote the corresponding subcomposition, i.e., the vector where parts with indices not belonging to $S$ have been removed. Let $H(\mathbf{x})$ denote the Shannon entropy of the composition $\mathbf{x}$, i.e., the potential $-\varphi(\mathbf{x})$. Consider its decomposition
$$H(\mathbf{x})=(1-x_{A})\,H\big(\mathcal{C}(\mathbf{x}_{S})\big)+x_{A}\,H\big(\mathcal{C}(\mathbf{x}_{S^{c}})\big)+H\big((x_{A},1-x_{A})^{T}\big),\qquad(58)$$
where $S^{c}$ denotes the complement of $S$ and $x_{A}=\sum_{i\in S^{c}}x_{i}$. We see that this is a convex combination of the entropies of the two subcompositions plus a binary entropy, where all terms involve the amalgamation $x_{A}$. Probabilistically speaking, this particular amalgamation corresponds to the probability that an event occurs for any of the events that are left out to obtain the subcomposition $\mathbf{x}_{S}$. Generally, amalgamation is nothing else but a coarse graining of the events and their probabilities. The corresponding coarse graining of information is described by this alternative decomposition of Shannon entropy:
$$H(\mathbf{x})=H\big((\mathbf{x}_{S},x_{A})^{T}\big)+x_{A}\,H\big(\mathcal{C}(\mathbf{x}_{S^{c}})\big).\qquad(59)$$
As the second summand is greater than or equal to zero, this also shows that information cannot grow under coarse graining.
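Both decompositions can be verified numerically; the sketch below (our own illustration) amalgamates the parts outside an index set $S$ and checks Eqs. (58) and (59):

```python
# Sketch: grouping decompositions of Shannon entropy under amalgamation.
import numpy as np

def H(p):
    return -np.sum(p * np.log(p))

x = np.array([0.1, 0.15, 0.2, 0.25, 0.3])
S = np.array([0, 1, 2])                  # retained parts
Sc = np.array([3, 4])                    # parts to be amalgamated
xA = x[Sc].sum()                         # the amalgamation

sub_S = x[S] / x[S].sum()                # closed subcompositions
sub_Sc = x[Sc] / xA
coarse = np.append(x[S], xA)             # coarse-grained composition

# Eq. (58): convex combination of subcomposition entropies plus a binary entropy
assert np.isclose(H(x), (1 - xA) * H(sub_S) + xA * H(sub_Sc) + H(np.array([xA, 1 - xA])))
# Eq. (59): the entropy exceeds the coarse-grained entropy by a non-negative term
assert np.isclose(H(x), H(coarse) + xA * H(sub_Sc))
```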
4.2 The notion of monotonicity for divergences and distance measures
These considerations lead us to an important result that concerns the divergence associated with the potential $\varphi$, i.e., the relative entropy. This result is the information monotonicity under coarse graining, where the notion of monotonicity is somewhat related to the notion of subcompositional dominance. The latter refers to the property that a measure of distance does not increase when evaluating it on a subset of parts only. This is often seen as a desirable property of distances in CoDA (and is not fulfilled by distances like Hellinger and Bhattacharyya, see [25] for a discussion of distance measures with respect to compositions). A similar, but perhaps more natural, requirement that has not received attention yet in the CoDA community is that a distance between compositions should not increase when comparing it with the one obtained after amalgamating over a subset of parts. (Subcompositional coherence, i.e., the fundamental requirement that quantities remain identical on a renormalized subcomposition, is not an issue for amalgamation: after amalgamation there is no need for renormalization.) As we have seen in the previous subsection, we cannot gain information when amalgamating parts, so we should lose resolution when comparing the amalgamated compositions. This is also related to the notion of a sufficient statistic, see [15]. The information monotonicity property of relative entropy can be expressed as
$$D^{(\varphi)}\big((\mathbf{x}_{1S},x_{1A})^{T}\,\big\|\,(\mathbf{x}_{2S},x_{2A})^{T}\big)\;\leq\;D^{(\varphi)}(\mathbf{x}_{1}\|\mathbf{x}_{2}),\qquad(60)$$
where $\mathbf{x}_{jS}$ denotes the subcomposition of $\mathbf{x}_{j}$ with parts in $S$, and $x_{jA}=\sum_{i\in S^{c}}x_{ji}$ the corresponding amalgamation, $j=1,2$.
This result can be shown for the more general case of $f$-divergences and continuous distributions using Jensen's inequality [12].
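The inequality of Eq. (60) is easy to probe numerically; the following sketch (our own illustration) draws random compositions and checks that relative entropy never increases under amalgamation:

```python
# Sketch: information monotonicity of relative entropy under amalgamation, Eq. (60).
import numpy as np

def kl(p, q):
    return np.sum(p * np.log(p / q))

def amalgamate(x, S):
    """Keep the parts indexed by S and amalgamate the remaining parts into one."""
    S = np.asarray(S)
    rest = np.setdiff1d(np.arange(len(x)), S)
    return np.append(x[S], x[rest].sum())

rng = np.random.default_rng(0)
S = [0, 1, 2]
for _ in range(1000):
    x1, x2 = rng.dirichlet(np.ones(5)), rng.dirichlet(np.ones(5))
    assert kl(amalgamate(x1, S), amalgamate(x2, S)) <= kl(x1, x2) + 1e-12
```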
Note that in [8], when discussing amalgamations of parts, the notion of "monotonicity" is used differently. There, the authors argue against amalgamations by referring to the observation that Aitchison distances between amalgamated compositions and the amalgamated center of the simplex show a non-monotonic behaviour along an ilr-coordinate axis defined before amalgamation. We will show below that information monotonicity does hold for Aitchison distance. We see the lack of distance monotonicity as discussed in [8] rather as an argument against the use of a Euclidean coordinate system here.
4.3 Monotonicity of Aitchison distance
A symmetrized version of relative entropy has recently been used in the context of data-driven amalgamation [26], where it was shown to be better preserved between samples than Aitchison distance. While the information-theoretic meaning and mathematical properties reflected in the decompositions shown in section 4.1 make Shannon entropy an ideal measure of information, alternative indices that can sometimes be evaluated more easily on real-world data (e.g., making use of sums of squares) have also been considered. In our context, it is interesting that $\|\mathbf{x}\|_{A}^{2}$, the potential of Eq. (42), has been proposed as an alternative measure of information within an attempt to reformulate information theory from a CoDA point of view [27]. More recently, it has also been proposed as an inequality index (when divided by the number of parts $D$) [3]. Here, the following decomposition was shown:
$$\|\mathbf{x}\|_{A}^{2}=\|\mathcal{C}(\mathbf{x}_{S})\|_{A}^{2}+\|\mathcal{C}(\mathbf{x}_{S^{c}})\|_{A}^{2}+\frac{k(D-k)}{D}\log^{2}\frac{g(\mathbf{x}_{S})}{g(\mathbf{x}_{S^{c}})},\qquad(61)$$
where we denoted the set size of $S$ by $k$, and $g$ again denotes the geometric mean of the respective parts. If we now want to decompose with respect to a composition that was partly amalgamated, we find a corresponding relationship that is more complicated (it is interesting to note that the two interaction terms have the form of squares of the balance and pivot coordinates mentioned in section 2):
(62) |
Clearly, if we replace the amalgamation $x_{A}$ by the geometric mean $g(\mathbf{x}_{S^{c}})$, we get a simpler equality:
(63) |
Aggregating by geometric means or by amalgamations has been a subject of debate in the CoDA community [3, 28]. As we can see, measures like the Aitchison norm lend themselves much better to taking geometric means than to amalgamations. There is, however, no straightforward probabilistic interpretation of geometric means (the product over parts specifies the probability that all events in the subset occur, but this is then re-scaled by the exponent to the scale of a single-event probability), and the more elegant formal expressions that result often suffer from reduced interpretability.
To the best of our knowledge, the information monotonicity property in its general form has not been considered yet for Aitchison distance. We here exploit the various decompositions stated above for proving it. Results are summarized in the following propositions. All proofs can be found in the Appendix.
Proposition 1
Let $S$ and $S^{c}$ be two complementary index sets with sizes $k$ and $D-k$, respectively. Further, let $\mathbf{x}_{1}$ and $\mathbf{x}_{2}$ be the simplicial representatives of two compositions in $\mathcal{S}^{D}$. Let the amalgamation of $\mathbf{x}_{j}$ over the subset of parts $S^{c}$ be denoted by $x_{jA}=\sum_{i\in S^{c}}x_{ji}$, $j=1,2$. Then the following decomposition of Aitchison distance holds:
(64) |
Corollary 1
When aggregating a subset of parts in the form of their geometric mean, we have the following decomposition of Aitchison distance:
(65) |
From this decomposition, we get the following monotonicity result:
Corollary 2
With parts aggregated by geometric means, the following inequality holds:
$$d_{A}\Big(\big(\mathbf{x}_{1S},g(\mathbf{x}_{1S^{c}})\big)^{T},\big(\mathbf{x}_{2S},g(\mathbf{x}_{2S^{c}})\big)^{T}\Big)\;\leq\;d_{A}(\mathbf{x}_{1},\mathbf{x}_{2}).$$
As we can see, for geometric-mean summaries the sum of the interaction terms (i.e., of the terms not involving norms) remains greater than or equal to zero. This is no longer true for the amalgamation of parts, and it is less straightforward to show the corresponding inequality:
Proposition 2
Aitchison distance fulfills the information monotonicity
$$d_{A}\Big(\big(\mathbf{x}_{1S},x_{1A}\big)^{T},\big(\mathbf{x}_{2S},x_{2A}\big)^{T}\Big)\;\leq\;d_{A}(\mathbf{x}_{1},\mathbf{x}_{2}).$$
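The statement of Proposition 2 can be explored numerically before turning to the proof in the Appendix; the sketch below (our own illustration) compares Aitchison distances before and after amalgamating the parts outside $S$:

```python
# Sketch: information monotonicity of Aitchison distance under amalgamation.
import numpy as np

def clr(x):
    lx = np.log(x)
    return lx - lx.mean()

def aitchison_dist(x, y):
    return np.linalg.norm(clr(x) - clr(y))

def amalgamate(x, S):
    S = np.asarray(S)
    rest = np.setdiff1d(np.arange(len(x)), S)
    return np.append(x[S], x[rest].sum())

rng = np.random.default_rng(1)
S = [0, 1, 2]
for _ in range(1000):
    x1, x2 = rng.dirichlet(np.ones(5)), rng.dirichlet(np.ones(5))
    assert aitchison_dist(amalgamate(x1, S), amalgamate(x2, S)) <= aitchison_dist(x1, x2) + 1e-12
```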
5 Discussion and outlook
In our short outline of finite information geometry, we could but scratch the surface of the formal apparatus that is at our disposal. We are certain it can serve to advance the field of Compositional Data Analysis in various ways. Differential geometry provides a universally valid framework for the problems occurring with constrained data. Considering the simplex a differentiable manifold enables a general approach from which specific problems like the compositional differential calculus [29] follow naturally. Clearly, there is (and has to be) overlap in methodology between the information-geometric perspective and the CoDA approach. An example is the use of the exponential map anchored at the center of the simplex discussed in section 3.2, which allows us to identify the simplex with a linear space that is central to the Euclidean CoDA approach. Another example is the fundamental role played by exponential families in information geometry; these have been studied in the CoDA context in the so-called Bayes spaces [30]. But we also think that some of the current limitations of CoDA can be overcome using the additional structures that information geometry can provide. The ease with which amalgamations of parts can be handled by Kullback-Leibler divergence might partly resolve the debate surrounding this issue in the CoDA community. Further, maximum-entropy projections, where Kullback-Leibler divergences are the central tool, seem an especially promising avenue to pursue in the context of data that are only partially available or subject to constraints. Also, our description has focused on the equivalence of compositions with discrete probability distributions, but information geometry can of course be used to describe the distributions themselves. These are no longer finite but continuous and contain a constraint that introduces dependencies among their random variables, calling for the use of more general versions of the concepts presented here.
Appendix
Proof of the limit $\alpha\rightarrow0$ of Eq. (56)
The terms inside the bracket can be written as
$$\frac{x_{1i}^{\alpha}}{\sum_{j}w_{j}x_{1j}^{\alpha}}-\frac{x_{2i}^{\alpha}}{\sum_{j}w_{j}x_{2j}^{\alpha}}=\frac{x_{1i}^{\alpha}-1}{\sum_{j}w_{j}x_{1j}^{\alpha}}-\frac{x_{2i}^{\alpha}-1}{\sum_{j}w_{j}x_{2j}^{\alpha}}+\frac{1}{\sum_{j}w_{j}x_{1j}^{\alpha}}-\frac{1}{\sum_{j}w_{j}x_{2j}^{\alpha}}\qquad(66)$$
when subtracting 1 in each numerator and adding it back. Similarly,
$$\frac{1}{\sum_{j}w_{j}x_{1j}^{\alpha}}-\frac{1}{\sum_{j}w_{j}x_{2j}^{\alpha}}=\frac{\sum_{j}w_{j}\big(x_{2j}^{\alpha}-1\big)-\sum_{j}w_{j}\big(x_{1j}^{\alpha}-1\big)}{\big(\sum_{j}w_{j}x_{1j}^{\alpha}\big)\big(\sum_{j}w_{j}x_{2j}^{\alpha}\big)}.\qquad(67)$$
In this, we recognize the Box-Cox transform in the numerators (after division by $\alpha$), and can make use of the limit in Eq. (55). The sums in the denominators clearly tend to $\sum_{j}w_{j}=1$ for $\alpha\rightarrow0$. We can thus evaluate the limit as a quotient of finite limits and conclude
$$\lim_{\alpha\rightarrow0}d_{\alpha}^{2}(\mathbf{x}_{1},\mathbf{x}_{2})=\sum_{i=1}^{D}w_{i}\Big(\log x_{1i}-\sum_{j}w_{j}\log x_{1j}-\log x_{2i}+\sum_{j}w_{j}\log x_{2j}\Big)^{2},\qquad(68)$$
which equals $\frac{1}{D}\,d_{A}^{2}(\mathbf{x}_{1},\mathbf{x}_{2})$ for $w_{i}=1/D$.
Proof of Proposition 1
We start by defining . We can now use the decomposition
(69) |
which can be derived using equalities like
which can be used to expand
and doing the square, cross terms vanish after summation.
Next, we observe that, for an arbitrary which we join as an additional component with the vector , we have
(70) |
because, similarly as before, we have
(71) |
Proof of Corollary 1
Proof of Corollary 2
To prove the corollary, we have to show that the last term in the decomposition of Corollary 1 is greater than or equal to zero. We thus need to show
(73) |
Since , and the quadratic equation has solutions and , between these values we are either above or below zero. We are above because the first derivative of Eq. (73) in is , which is greater than zero.
Proof of Proposition 2
Let denote the vector with components , . To prove the proposition, we have to bound the terms after the first plus sign in Proposition 1 from below, i.e.,
(74) |
Let us start with the second summand. We rewrite it as
Since we showed Eq. (73), the summand on the left is greater than or equal to zero. Thus we have
(75) |
where the second inequality follows because the big bracket (with a prefactor smaller than one) has a structure that can be bounded like
with playing the role of . Finally, the last term in Eq. (75) can be bounded from above as follows. Since , we also have
(76) |
so the ratio of sums is smaller than the maximum over the ratios. Without restricting generality, let us assume that , i.e., . The bound on the sum ratio implied by Eq. (76) is then sufficient for proving the proposition:
(77) |
References
- (1) Aitchison, J: The Statistical Analysis of Compositional Data. Chapman and Hall (1986)
- (2) Greenacre, M: Compositional Data Analysis. Annual Review of Statistics and Its Application 8(1), 271–299 (2021)
- (3) Egozcue, JJ, Pawlowsky-Glahn, V: Compositional data: the sample space and its structure. TEST 28(3), 599–638 (2019)
- (4) Barceló-Vidal, C, Martín-Fernández, JA: The Mathematics of Compositional Analysis. Austrian Journal of Statistics 45(4), 57–71 (2016)
- (5) Aitchison, J: The Statistical Analysis of Compositional Data. J Royal Stat Soc B 44(2), 139–160 (1982)
- (6) Egozcue, JJ, Barceló-Vidal, C, Martín-Fernández, JA, Jarauta-Bragulat, E, Díaz-Barrero, JL, Mateu-Figueras, G, Pawlowsky-Glahn, V, Buccianti, A: Elements of simplicial linear algebra and geometry. In: Pawlowsky-Glahn, V, Buccianti, A (eds.) Compositional Data Analysis: Theory and Applications, pp. 141–157. Wiley (2011)
- (7) Egozcue, JJ, Pawlowsky-Glahn, V, Mateu-Figueras, G, Barceló-Vidal, C: Isometric Logratio Transformations for Compositional Data Analysis. Mathematical Geology 35(3), 279–300 (2003)
- (8) Egozcue, JJ, Pawlowsky-Glahn, V: Groups of Parts and Their Balances in Compositional Data Analysis. Mathematical Geology 37(7), 795–828 (2005)
- (9) Hron, K, Filzmoser, P, de Caritat, P, Fišerová, E, Gardlo, A: Weighted Pivot Coordinates for Compositional Data and Their Application to Geochemical Mapping. Mathematical Geosciences 49, 797–814 (2017)
- (10) Chentsov, N: Statistical Decision Rules and Optimal Inference (vol. 53). Nauka (1972) (in Russian); English translation in: Math. Monograph. (vol. 53), Am. Math. Soc. (1982)
- (11) Rao, CR: Information and the accuracy attainable in the estimation of statistical parameters. Bull. Calcutta Math. Soc. 37, 81–89 (1945)
- (12) Amari, S, Nagaoka, H: Methods of Information Geometry. Translations of Mathematical Monographs (vol. 191), American Mathematical Society (2000)
- (13) Amari, S: Differential-Geometric Methods in Statistics. Lecture Notes in Statistics (vol. 28), Springer (1985)
- (14) Ay, N, Jost, J, Le, HV, Schwachhöfer, L: Information Geometry. A Series of Modern Surveys in Mathematics (vol. 64), Springer (2017)
- (15) Amari, S: Information Geometry and Its Applications. Applied Mathematical Sciences (vol. 194), Springer (2016)
- (16) Whittaker, J: Graphical Models in Applied Multivariate Statistics. Wiley (1990)
- (17) Ay, N, Erb, I: On a notion of linear replicator equations. J. Dyn. Diff. Eqs. 17(2), 427–451 (2005)
- (18) Martín-Fernández, JA, Bren, M, Barceló-Vidal, C, Pawlowsky-Glahn, V: A Measure of Difference for Compositional Data based on Measures of Divergence. In: Lippard, SJ, Naess, A, Sinding-Larsen, R (eds.) Proceedings of the Fifth Annual Conference of the International Association for Mathematical Geology, vol. 1, pp. 211–215. Trondheim, Norway (1999)
- (19) Rényi, A: On measures of entropy and information. In: Proceedings of the 4th Berkeley Symposium on Mathematical Statistics and Probability, vol. 1, pp. 547–561. University of California Press, Berkeley (1961)
- (20) Tsallis, C: Possible generalization of Boltzmann–Gibbs statistics. J. Stat. Phys. 52, 479–487 (1988)
- (21) Greenacre, M: Power transformations in correspondence analysis. Computational Statistics & Data Analysis 53(8), 3107–3116 (2009)
- (22) Greenacre, M: Log-Ratio Analysis Is a Limiting Case of Correspondence Analysis. Math. Geosci. 42, 129 (2010)
- (23) Bhattacharyya, A: On a measure of divergence between two statistical populations defined by their probability distributions. Bulletin of the Calcutta Mathematical Society 35, 99–109 (1943)
- (24) Chentsov, N: Algebraic foundation of mathematical statistics. Math. Oper.forsch. Stat., Ser. Stat. 9, 267–276 (1978)
- (25) Palarea-Albaladejo, J, Martín-Fernández, JA, Soto, JA: Dealing with Distances and Transformations for Fuzzy C-Means Clustering of Compositional Data. Journal of Classification 29(2), 144–169 (2012)
- (26) Quinn, TP, Erb, I: Amalgams: data-driven amalgamation for the dimensionality reduction of compositional data. NAR Genomics and Bioinformatics 2(4), lqaa076 (2021)
- (27) Egozcue, JJ, Pawlowsky-Glahn, V: Evidence functions: a compositional approach to information. SORT 42(2), 101–124 (2018)
- (28) Greenacre, M: Amalgamations are valid in compositional data analysis, can be used in agglomerative clustering, and their logratios have an inverse transformation. Appl. Comput. Geosci. 5, 100017 (2020)
- (29) Barceló-Vidal, C, Martín-Fernández, JA, Mateu-Figueras, G: Compositional Differential Calculus on the Simplex. In: Pawlowsky-Glahn, V, Buccianti, A (eds.) Compositional Data Analysis: Theory and Applications, pp. 176–190. Wiley (2011)
- (30) Egozcue, JJ, Pawlowsky-Glahn, V, Tolosana-Delgado, R, Ortego, MI, van den Boogaart, KG: Bayes spaces: use of improper distributions and exponential families. Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A. Matemáticas 107(2), 475–486 (2013)