Asymmetric Dependence Measurement and Testing
Abstract
Measuring the (causal) direction and strength of dependence between two variables (events), X and Y, is fundamental for all science. Our survey of the decades-long literature on statistical dependence reveals that most measures assume symmetry, in the sense that the strength of dependence of Y on X exactly equals the strength of dependence of X on Y. However, we show that such symmetry is untrue in many real-world examples, being neither necessary nor sufficient. Vinod’s (2014) asymmetric matrix R* of generalized correlation coefficients provides intuitively appealing, readily interpretable, and superior measures of dependence. This paper proposes statistical inference for R* using Taraldsen’s (2021) exact sampling distribution of correlation coefficients and the bootstrap. When the direction is known, the proposed asymmetric (one-tail) tests have greater power.
1 Introduction
A great deal of science focuses on understanding the dependence between variables. Its quantification has a long history, starting with the Galton-Pearson correlation coefficient from the 1890s and its cousins, including Spearman’s ρ, Kendall’s τ, and Hoeffding’s D. Let δ(Y|X) measure the strength of dependence of Y on X, given measurements of the two variables. Many measures of dependence try to satisfy the symmetry postulate by Renyi (1959), which posits that the two strengths based on opposite conditioning are identical:

δ(Y|X) = δ(X|Y).   (1)
We regard the symmetry postulate as akin to an avoidable dogma. The following subsection explains why attempting to satisfy the symmetry equation (1) provides misleading measures of dependence in practice.
1.1 Four Examples of Asymmetric Dependence
A correct notion of dependence in nature or data is rarely (if ever) symmetric.
• A newborn baby boy depends on his mother for his survival, but it is ludicrous to expect that his mother must exactly equally depend on the boy for her survival, as implied by (1).
• Meteorologists know that the average daily high of December temperatures in New York City is 44 degrees Fahrenheit and that this number depends on New York’s latitude (40.7). The latitude is a geographical given and does not depend on anything like city temperatures. Symmetric dependence by (1) between temperature and latitude implies the ludicrous claim that latitude depends on temperature with equal strength.
• For a third example, imagine a business person B with several shops. Thirty percent of B’s earnings depend on the hours worked by a key employee in one shop. Now the symmetry in (1) means that the hours worked by the key employee must depend on B’s earnings with exactly the same 30% strength.
• Our fourth example treats X as the complete data, but a subset of X is unavailable. The available subset Y is a proxy that depends on X, but the complete set X does not equally depend on its subset Y.
These four examples are enough to convince the reader that the symmetry postulate is neither necessary nor sufficient for real-world dependence. However, it is interesting that the unrealistic property (1) is an old, established, sacrosanct postulate from the 1950s, Renyi (1959). Even in 2022, Geenens and de Micheaux (2022) (“GM22”) still adhere to the symmetry postulate (dogma) by proposing an ingenious new definition of dependence to fit the model in (1). Actually, a measure of dependence satisfying (1) can be ludicrous in some contexts analogous to the four examples above.
1.2 Sources of the Symmetry Dogma
What is the origin of the symmetry dogma?
(i) The definitional and numerical equality of covariances, Cov(X, Y) = Cov(Y, X), may have been the initial reason for the symmetry result.
(ii) In a bivariate linear regression of Y on X, the strength of dependence of Y on X is clearly measured by the coefficient of determination R²(Y|X). If we consider a flipped linear regression of X on Y, the strength of dependence is R²(X|Y). The assumption of linearity makes the two strengths equal to each other, R²(Y|X) = R²(X|Y). The equality of the two strengths supports the symmetry dogma. When we consider the signed square roots of the two R² values, we have a symmetric matrix of correlation coefficients, r(X, Y) = r(Y, X). These signed measures of dependence further support the dogma.
The symmetry dogma depends on the harmless-looking linearity assumption. Back in 1784, the German philosopher Kant said: “Out of the crooked timber of humanity, no straight thing was ever made.” Since social sciences and medicine deal with human subjects, evidence supporting linearity and the implied symmetry dogma is missing.
(iii) Since all distances satisfy the symmetry d(x, y) = d(y, x), this may have been another reason behind Renyi’s postulate.
(iv) The concept of statistical independence in probability theory is symmetric. It can be formulated in terms of the absence of any divergence between a joint density and the product of the two marginal densities,

f(x, y) = f(x) f(y).   (2)

Since dependence is the opposite of independence, it is tempting (but unhelpful) to impose symmetry on dependence as well.
1.3 Statistical Independence in Contingency Tables
Two-way contingency tables refer to tabulated data on a grand total (GT) of observations distributed over an r × c matrix. There are two categorical variables, represented by manifestations of row characteristics along the i = 1, …, r rows and column characteristics along the j = 1, …, c columns. The body of the (r × c) contingency table has observed values O_ij in the matrix cell located at row number i and column number j. The joint probability is simply p_ij = O_ij / GT, where GT denotes the grand total of the tabulated numbers. The row margins of the contingency table have r row totals, R_i = Σ_j O_ij. The column margin has c column totals, C_j = Σ_i O_ij. The marginal probabilities are p_i. = R_i / GT and p_.j = C_j / GT, which are also called unconditional probabilities.
A conditional probability restricts the sample space to the part of the table which satisfies the specified condition, referring to a particular row i or column j. The direct computation of a conditional probability has the respective row or column sum in its denominator instead of the grand total GT. An equivalent calculation of the conditional probability conditioning on the column characteristic defines p(i|j) = p_ij / p_.j, a ratio of the joint probability to the marginal probability of the conditioning characteristic. The analogous conditional probability conditioning on the row characteristic is a ratio of the same joint probability to the marginal probability of the conditioning row, p(j|i) = p_ij / p_i..
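To make these definitions concrete, here is a minimal base-R sketch computing the joint, marginal, and conditional probabilities for a small hypothetical 2 × 3 table; the counts are invented purely for illustration.

O = matrix(c(20, 30, 10,
             15, 45, 30), nrow = 2, byrow = TRUE)  # hypothetical counts O_ij
GT = sum(O)                            # grand total
p.joint = O / GT                       # joint probabilities p_ij
p.row   = rowSums(O) / GT              # marginal probabilities p_i.
p.col   = colSums(O) / GT              # marginal probabilities p_.j
p.col.given.row = O / rowSums(O)       # p(j|i): each row divided by its row total
p.row.given.col = t(t(O) / colSums(O)) # p(i|j): each column divided by its column total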
In probability theory based on contingency tables, the notion of statistical independence is studied by considering the following three criteria.
(a) p_ij = p_i. p_.j: the joint probability equals the product of the marginals.
(b) p(i|j) = p_i.: the conditional probability equals the unconditional or marginal probability.
(c) p(j|i) = p_.j: the other conditional probability equals the unconditional or marginal probability.
Note that criterion (a) is both necessary and sufficient for independence. It is symmetric in that the joint probability is the same even if we interchange the order of the two characteristics. However, data can satisfy (b) without satisfying (c), and vice versa. Hence tests of independence typically rely on the symmetric criterion (a). However, dependence is the opposite of independence and is generally asymmetric. We find that using (b) and (c) helps avoid the misleading symmetry postulate in the context of dependence.
It is customary to imagine a population of thousands of contingency tables; the observed table is one realization from that population. The null hypothesis (H0) is that the row and column characteristics are statistically independent. The sample table may not exactly satisfy independence in the sense of (a) to (c) above. The testing problem is whether the observed table of values could have arisen from a population where conditions (a) to (c) are satisfied. That is, whether the observed counts O_ij are numerically close enough to the expected values E_ij obtained from the cross-product of the relevant marginal totals (divided by the grand total).
Pearson’s Chi-square test statistic for H0 (independence of the row effect and column effect) in a contingency table is

χ²_df = Σ_i Σ_j (O_ij − E_ij)² / E_ij,   (3)

where df = (r − 1)(c − 1) denotes the degrees of freedom. Note that χ² of (3) cannot be computed unless we have contingency tables. Statisticians have long recognized that the magnitude of χ² cannot reliably measure the direction and strength of dependence. This paper assumes that a practitioner would want to know both the general direction and strength of dependence.
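As a minimal sketch of (3), the following base-R lines compute the expected counts and the chi-square statistic for the hypothetical table O defined in the sketch above; base R’s chisq.test() returns the same statistic.

E = outer(rowSums(O), colSums(O)) / sum(O)   # expected counts E_ij under independence
chi2 = sum((O - E)^2 / E)                    # statistic of equation (3)
df = (nrow(O) - 1) * (ncol(O) - 1)           # degrees of freedom (r-1)(c-1)
c(chi2 = chi2, df = df, p.value = pchisq(chi2, df, lower.tail = FALSE))
chisq.test(O, correct = FALSE)               # same chi-square from base R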
2 Symmetric Measures of Dependence
Granger et al. (2004) (“Gr04”) is an important paper on formal testing for statistical independence, especially for time series data. They cite a survey by Tjostheim (1996) on the topic. The novelty in Gr04 is the use of nonparametric nonlinear kernel densities to test the equality (2) in their test of independence. Unfortunately, the Gr04 authors adhere to the symmetry dogma by insisting that, similar to independence, a measure of dependence should be a symmetric distance-type ‘metric.’
2.1 Dependence Measures and Entropy
Shannon defined information content in 1948 as the amount of surprise in a piece of information. His information is inversely proportional to the probability of occurrence and applies to both discrete and continuous random variables with probabilities defined by a probability distribution f(x). In the context of entropy, let us use the fourth example of Section 1.1, where X is the complete data and Y is a subset with some missing observations. How much does X depend on the proxy Y? We develop a measure of dependence using information theory, especially entropy.
Intuitively, entropy is our ignorance, or the extent of disorder, in a system. The entropy H(X) is defined as the mathematical expectation of the Shannon information, or H(X) = −E log f(X). The conditional entropy of Y given X, averaged over X, is

H(Y|X) = −Σ_x Σ_y f(x, y) log f(y|x).   (4)
The reduction in our ignorance about X by knowing the proxy Y is the analogous quantity H(X) − H(X|Y). Mutual information is defined as I(X;Y) = H(X) − H(X|Y) = H(Y) − H(Y|X). It is symmetric since I(X;Y) = I(Y;X). The entropy-based measure of dependence is

D(X;Y) = [H(X) − H(X|Y)] / H(X),   (5)

or the proportional reduction in the entropy of X by knowing Y. Reimherr and Nicolae (2013) complain that (5) is not symmetric. By contrast, we view asymmetry as a desirable property.
Neyman and Pearson showed that a way to distinguish between two distributions f1(x) and f2(x) for a parameter is the difference between the logs of their likelihood functions. Shannon’s relative entropy, also known as the Kullback–Leibler (KL) divergence, is the expected value of that difference,

KLD(f1 ∥ f2) = Σ_x f1(x) log[ f1(x) / f2(x) ].   (6)
It is easy to verify that KLD or relative entropy is not symmetric.
The Gr04 authors state on page 650 that “Shannon’s relative entropy and almost all other entropies fail to be ‘metric’, as they violate either symmetry, or the triangularity rule, or both.” We argue that asymmetry is an asset, not a liability, in light of the four examples in Section 1.1. Hence, we regard (5) or (6) as superior to the symmetric measure of Gr04.
D(X;Y) of (5) and the KLD of (6) cannot be used directly on data vectors. They need frequency distribution counts as input, based on a grouping of the data into bins (histogram class intervals). The choice of the number of bins is arbitrary, and D(X;Y) and KLD are sensitive to that choice. Hence, we do not recommend D(X;Y) or KLD as general-purpose measures of dependence.
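The following minimal R sketch illustrates this sensitivity with made-up data and base R only; the simulated relation and the bin counts (5 versus 15) are arbitrary choices for illustration.

set.seed(1)
x = rnorm(200); y = x^2 + rnorm(200)        # made-up nonlinearly related data
kld.binned = function(x, y, nbins) {
  # discretize into nbins bins, then compute the KL divergence (6) between
  # the binned joint distribution and the product of the binned marginals
  bx = cut(x, nbins); by = cut(y, nbins)
  pxy = table(bx, by) / length(x)           # joint cell probabilities
  pxpy = outer(rowSums(pxy), colSums(pxy))  # product of marginal probabilities
  keep = pxy > 0                            # avoid log(0) cells
  sum(pxy[keep] * log(pxy[keep] / pxpy[keep]))
}
c(bins5 = kld.binned(x, y, 5), bins15 = kld.binned(x, y, 15))  # answers differ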
2.2 Dependence Measures and Fisher Information
Fisher information measures the expected amount of information given by a random variable about a parameter of interest. Under Gaussian assumptions, the Fisher information is inversely proportional to the variance. Reimherr and Nicolae (2013) use the Fisher information to define a measure of dependence. Consider the estimation of a model parameter θ using Y as a proxy for the unavailable X. That is, Y is a subset of X with missing observations, as in the fourth example of Section 1.1. If the Fisher information for θ based on the proxy Y is denoted by I_Y(θ), they define a measure of dependence as

D = I_Y(θ) / I_X(θ),   (7)

where I_X(θ) denotes the Fisher information based on the complete data X. Consider the special case where a proportion π of the data are missing in Y at completely random locations. Then, the measure of dependence (7) equals (1 − π). This measure of dependence is almost acceptable because it is asymmetric (the subset Y being a proxy for X cannot be interchanged with X), except that D of (7) cannot be negative. In Section 3, we recommend a more generally applicable and intuitive measure of dependence.
2.3 Regression Dependence from Copulas
Consider a two-dimensional joint (cumulative) distribution function H(x, y) with marginal distribution functions F(x) and G(y); the probability integral transformations u = F(x) and v = G(y) yield uniform variables. Sklar proved in 1959 that a copula function C(u, v) satisfying H(x, y) = C(F(x), G(y)) is unique if the components are continuous. The copula function is subject to certain conditions forcing it to be a bivariate uniform distribution function. It is extended to the multivariate case to describe the dependence structure of the joint density. We noted in Section 1.3 that a contingency table represents the joint dependence structure of row and column characteristics. Copulas represent a similar joint dependence when the row and column characteristics are continuous variables rather than simple categories.
Dette et al. (2013) (“DSS13”) work with the joint density of (X, Y) and the conditional density of Y given X = x. They use the uniform random variables U = F(X) and V = G(Y) to construct the copula as a joint distribution function. The copula serves as their measure of dependence based on the quality of the regression-based prediction of Y from X. The flipped prediction of X from Y, ignored by DSS13, is considered in Section 3 in the sequel.

DSS13 assume Lipschitz continuity, which implies that a copula is absolutely continuous in each argument, so that it can be recovered from any of its partial derivatives by integration. The conditional distribution is related to the corresponding copula by P(V ≤ v | U = u) = ∂C(u, v)/∂u.
A symmetric measure of dependence proposed by DSS13 is denoted here as

r_DSS = 6 ∫∫ [∂C(u, v)/∂u]² du dv − 2,   (8)

where r_DSS = 0 represents independence, and r_DSS = 1 represents almost sure functional dependence. DSS13 focus on filling the intermediate range of the closed interval [0, 1] while ignoring the negative range [−1, 0). Section 3 covers the entire interval [−1, 1], including the negative range. DSS13 rely on parametric copulas, making them subject to identification problems, as explained by Allen (2022).

The numerical computation of (8) is involved since it requires the estimation of the copula’s partial derivative. The DSS13 authors propose a kernel-based estimation method without providing any ready-to-use computational tools for (8).
Remark 3.7 in Beare (2010) states that symmetric copulas imply time reversibility, which is unrealistic for economic and financial data. Bouri et al. (2020) reject the symmetry dogma and note that their parametric copula can capture tail dependence, which is important in a study of financial markets. Allen (2022) uses a nonparametric copula construction and the asymmetric R* from Vinod (2014). Allen’s application to financial data shows that cryptocurrencies do not help portfolio diversification.
2.4 Hellinger Correlation as a Dependence Measure
Now we turn to the recent GM22 paper mentioned earlier, which proposes the Hellinger correlation η as a new symmetric measure of the strength of dependence. They need to normalize η to ensure that it lies in [0, 1]; GM22 denote the normalized version by a separate symbol. The GM22 authors explain why the dependence axioms of Renyi (1959) need updating, while claiming that their η satisfies all updated axioms. Unfortunately, GM22 retain the symmetry axiom criticized in Section 1.1 above. An advantage of η over Pearson’s r is that it incorporates some nonlinearities.

Let F_X and F_Y denote the known marginal distributions of random variables X and Y, and let F_XY denote their joint distribution. Now, the GM22 authors ask readers to imagine reconstructing the joint distribution F_XY from the two marginals. The un-intuitive (convoluted?) definition of the strength of dependence by GM22 is the size of the “missing link” in reconstructing the joint from the marginals. This definition allows GM22 to claim that symmetry is “unquestionable.”
The GM22 authors define the squared Hellinger distance between the joint distribution F_XY and the independence benchmark F_X F_Y as the missing link. They approximate a copula formulation of this distance using the Bhattacharyya (1943) affinity coefficient B. Let C denote the copula of (X, Y), and let c denote its density. The computation in the R package HellCor uses the numerical integral B = ∫∫ √c(u, v) du dv. The Hellinger correlation η is then an explicit function of B, given in their equation (9). The Hellinger correlation is symmetric, η(X, Y) = η(Y, X).
GM22 provide an R package HellCor to compute η from data as a measure of dependence, and to test the null hypothesis of independence of the two variables.
A direct and intuitive measure of dependence in a regression framework is the multiple correlation coefficient (coefficient of determination) R². It is symmetric because even if we flip Y and X, the R² from the two linear regressions is exactly the same. The reason for the equality of the two flipped values, R²(Y|X) = R²(X|Y), is the assumption of linearity of the two regressions. When we relax linearity, the two values generally differ, R²(Y|X) ≠ R²(X|Y). We argue that quantitative researchers should reject the unrealistic linearity assumption in the presence of ready-to-use kernel regression (np package) software.

Kernel-based R² values of flipped regressions are rarely equal. GM22 cite Janzing et al. (2013) only to reject such asymmetric dependence suggested by nonparametric regressions.
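A minimal sketch of this asymmetry, assuming the np package is installed and using the mtcars variables discussed later in Section 4; the R² values are computed from the kernel-regression fitted values.

library(np)                          # kernel regressions by Hayfield and Racine
y = mtcars$mpg; x = mtcars$hp
fit.yx = npreg(y ~ x)                # kernel regression of y on x
fit.xy = npreg(x ~ y)                # flipped kernel regression of x on y
R2.yx = cor(y, fitted(fit.yx))^2     # strength of dependence of y on x
R2.xy = cor(x, fitted(fit.xy))^2     # strength of dependence of x on y
c(R2.yx = R2.yx, R2.xy = R2.xy)      # generally unequal, unlike the linear R^2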
3 Recommended Measures of Dependence
We have noted earlier that covariances satisfy the symmetry Cov(X, Y) = Cov(Y, X). However, the sign of the symmetric covariance suggests the overall direction of the dependence between the two variables. For example, Cov(X, Y) < 0 means that when X goes up, Y goes down, by and large. Most of the symmetric measures of dependence discussed above fail to provide this type of useful directional information, except Pearson’s correlation coefficient r(X, Y). Hence, r has retained its popularity as a valuable measure of dependence for over a century, despite assuming unrealistic linearity.
Zheng et al. (2012) reject the dogma by introducing nonsymmetric generalized measures of correlation (GMC), showing that, in general,

GMC(Y|X) = 1 − E[{Y − E(Y|X)}²]/Var(Y)  ≠  GMC(X|Y) = 1 − E[{X − E(X|Y)}²]/Var(X).   (10)

Since GMCs fail to provide the directional information in covariances needed by practitioners, Vinod (2014) and Vinod (2017) extend Zheng et al. (2012) to develop a non-symmetric correlation matrix R*, whose off-diagonal elements are generally unequal, r*(i|j) ≠ r*(j|i), while providing an R package. The R package generalCorr replaces the linearity of Pearson’s r by kernel regressions, using the np package by Hayfield and Racine, which can handle kernel regressions among both continuous and discrete variables.
Sometimes the research interest is focused on the strength of dependence, while the direction is ignored, perhaps because it is already established. In that case, one can use the R package generalCorr and the R function depMeas(x, y). It is defined as the appropriately signed larger (in absolute value) of the two generalized correlations, or

depMeas(x, y) = sign(Cov(x, y)) max(|r*(x|y)|, |r*(y|x)|),   (11)

where sign(Cov(x, y)) is the sign of the covariance between the two variables.
In general, both the strength and the general direction of quantitative dependence matter. Hence, we recommend the two asymmetric measures r*(x|y) and r*(y|x). The generalCorr package functions for computing R* elements are rstar(x,y) and gmcmtx0(mtx). The latter converts a data matrix argument (mtx) with p columns into a p × p asymmetric matrix of generalized correlation coefficients. Regarding the direction of dependence, the convention is that the variable named in the column is the “cause” or the right-hand regressor, and the variable named along the row is the response. Thus the recommended measures from R* are easy to compute. See an application to forecasting the stock market index of fear (VIX) and causal path determination in Allen and Hooper (2018).
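A minimal usage sketch of the functions named above, assuming the generalCorr package is installed, illustrated with the mtcars variables used in Section 4; exact output formats may differ across package versions.

library(generalCorr)
x = mtcars$mpg; y = mtcars$hp
rstar(x, y)                         # the two generalized correlations for (x, y)
depMeas(x, y)                       # signed larger (in absolute value) of the two
gmcmtx0(cbind(mpg = x, hp = y))     # 2 x 2 asymmetric matrix R*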
3.1 Statistical Inference for Recommended Measures
We recommend the signed generalized correlation coefficients from the R* matrix as the best dependence measure. This is because they do not adhere to the potentially misleading symmetry dogma, while measuring arbitrary nonlinear dependence dictated by the data. An additional reason is their potential for more powerful (one-tail) inference, discussed in this section.

The sign of each element of the R* matrix is based on the sign of the covariance between the two variables. A two-tail test of significance is appropriate only when the direction (sign) of the dependence is not known a priori. Otherwise, a one-tail test is appropriate. Any one-tailed test provides greater power to detect an effect in one direction by not testing the effect in the other direction, Kendall and Stuart (1977), Sections 22.24 and 22.28.
Since the sample correlation coefficient r from a bivariate normal parent has a non-normal distribution, Fisher developed his famous z-transformation in the 1920s. He proved that the following transformed statistic is approximately normal with a stable variance,

z = (1/2) log[(1 + r)/(1 − r)],  approximately distributed as N( (1/2) log[(1 + ρ)/(1 − ρ)], 1/(n − 3) ),   (12)

provided the sample size n is not too small. Recent work has developed the exact distribution of a correlation coefficient. It is now possible to directly compute a confidence interval for any hypothesized value of the population correlation coefficient ρ.
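For later comparison with the exact methods, here is a minimal base-R sketch of the traditional Fisher z confidence interval of (12); the numerical inputs are taken from the seabird example of Section 4.1.

fisher.z.ci = function(r, n, level = 0.95) {
  z = atanh(r)                       # z = 0.5 * log((1 + r) / (1 - r))
  se = 1 / sqrt(n - 3)               # approximate standard error of z
  zcrit = qnorm(1 - (1 - level) / 2)
  tanh(c(lower = z - zcrit * se, upper = z + zcrit * se))  # back-transform to the r scale
}
fisher.z.ci(r = 0.374, n = 12)       # wide interval containing zero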
Let r be the empirical correlation coefficient of a random sample of size n from a bivariate normal parent with population correlation ρ. Theorem 1 of Taraldsen (2021) generalizes Fisher’s famous z-transformation as extended by C. R. Rao. The exact density, with ν = n − 1, is

f(r | ρ, ν) = [ν(ν−1)Γ(ν−1) / (√(2π) Γ(ν+1/2))] (1−r²)^((ν−1)/2) (1−ρ²)^((ν−2)/2) (1−ρr)^((1−2ν)/2) F(3/2, −1/2; ν+1/2; (1+ρr)/2),   (3.1)

where F(.;.;.;.) denotes the Gaussian hypergeometric function, available in the R package hypergeo by R. K. S. Hankin. The following R code readily computes (3.1) over a grid of 2001 values of r.
library(hypergeo)            # Gaussian hypergeometric function
r=seq(-1,1,by=0.001)         # grid of 2001 correlation values

Tarald=function(r,v,rho,cum){ # find quantile r of density (3.1) given cumulative prob. cum
  Trm1=(v*(v-1)*gamma(v-1))/((sqrt(2*pi)*gamma(v+0.5)))
  Trm2=(1-r^2)^((v-1)/2)
  Trm2b=((1-rho^2)^((v-2)/2))*((1-rho*r)^((1-2*v)/2))
  Trm3b=hypergeo(3/2,-1/2,(v+0.5),(1+r*rho)/2)
  y0=Re(Trm1*Trm2*Trm2b*Trm3b) # density heights on the grid
  p=y0/sum(y0)                 # rescale heights to probabilities
  cup=cumsum(p)                # numerical cumulative distribution
  loc=max(which(cup<cum))+1    # first grid point reaching cum
  return(r[loc])}              # quantile corresponding to cum
Tarald(r=seq(-1,1,by=0.001),v=11,rho=0,cum=0.05) #example
Assuming that the data come from a bivariate normal parent, the sampling distribution of any correlation coefficient is (3.1). Hence, the sampling distribution of the unequal off-diagonal elements of the R* matrix of generalized correlations also follows (3.1). When we test the null hypothesis ρ = 0, the relevant sampling distribution is obtained by plugging ρ = 0 in (3.1), depicted in Figure 1 for two selected sample sizes. Both distributions are centered at the null value ρ = 0.

A two-tail (95%, say) confidence interval is obtained by using the 2.5% and 97.5% quantiles of the density. If the observed correlation coefficient is inside the confidence interval, we say that the observed r is statistically insignificant, as it could have arisen from a population where the null value ρ = 0 holds.
[Figure 1: Exact density (3.1) of the correlation coefficient under the null ρ = 0 for two sample sizes, n = 50 and n = 15.]
Similarly, one can test a nonzero null hypothesis ρ = ρ0 using the density obtained by plugging ρ0 in (3.1), depicted in Figure 2.
[Figure 2: Exact density (3.1) of the correlation coefficient for a nonzero null value of ρ.]
Figures 1 and 2 show that the formula (3.1) and our numerical implementation are ready for practical use. These exact densities depend on the sample size and on the hypothesized value of the population correlation coefficient ρ. Given any hypothesized ρ and sample size, a computer algorithm readily computes the exact density, similar to Figures 1 and 2. Suppose we want to help typical practitioners who want the tail areas useful for testing the null hypothesis ρ = 0. Then, we need to create a table of typical quantiles evaluated at certain cumulative probabilities for a selected set of common sample sizes, with ρ fixed at zero.

Because of the complicated form of the density (3.1), it is not surprising that its (cumulative) distribution function obtained by analytical methods is not available in the literature. Hence, let us compute cumulative probabilities by numerical integration, defined as the rescaled area under the density curve over r in [−1, 1]. See Figure 1 for two choices of the sample size (n = 50, 15). The cumulative probability becomes a sum of rescaled areas of small-width rectangles whose heights are determined by tracing the density curve. The accuracy of the numerical approximation to the area is obviously better, the larger the number of rectangles.
We use a sequence of r values created by the R command r=seq(-1,1,by=0.001), yielding 2001 rectangles. Denote the height of the density at grid point r_i by h_i. The area between any two limits is a summation of the areas (height times the common width 0.001) of all rectangles between them. Now, the cumulative probabilities over the range [−1, r] are

F(r) = Σ_{i: r_i ≤ r} h_i / Σ_{i=1}^{2001} h_i,   (14)

where the common width cancels, and where the denominator converts the rectangle areas into probabilities. More generally, we can use (14) for any r in [−1, 1].
Thus we have a numerical approximation to the exact (cumulative) distribution function F(r) under the bivariate normality of the parent. The transform from r to F(r) is called the probability integral transform, and its inverse gives the relevant correlation coefficients as quantiles for a specified cumulative probability as the argument. A computer algorithm can readily find such quantiles.
The exact F(r) allows the construction of confidence intervals based on quantiles for each hypothesized ρ and sample size. For example, a 95% two-tail confidence interval uses the 2.5% quantile as the lower limit and the 97.5% quantile as the upper limit. These limits depend on the hypothesized ρ and the sample size. Since ρ = 0 is a common null hypothesis for correlation coefficients, let us provide a table of quantiles for eleven sample sizes (listed in the row names) and eight cumulative probabilities (listed in the column titles) of Table 1.
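As a check, the following lines sketch how one row of Table 1 can be reproduced with the Tarald() function defined above, assuming ν = n − 1 as in the pTarald() code of Section 4; small discrepancies due to rounding and the grid width are possible.

r = seq(-1, 1, by = 0.001)
cums = c(0.01, 0.025, 0.05, 0.1, 0.9, 0.95, 0.975, 0.99)  # column titles of Table 1
n = 30                                                    # sample size for the n=30 row
round(sapply(cums, function(cc) Tarald(r = r, v = n - 1, rho = 0, cum = cc)), 2)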
The p-values in statistical inference are defined as the probability of observing the random variable (here, the correlation coefficient) as extreme as, or more extreme than, the observed value for a given null value of ρ. Any one-tail p-values based on the density (3.1) for arbitrary nonzero “null” values of ρ can be similarly computed by numerical integration, defined as the area under the density curve. The code for the R functions Tarald(.) and pTarald(.) is included in Sections 3.1 and 4, respectively.
Table 1: Quantiles of the exact density (3.1) of the correlation coefficient under ρ = 0, for selected sample sizes n (rows) and cumulative probabilities c (columns).

 | c=0.01 | c=0.025 | c=0.05 | c=0.1 | c=0.9 | c=0.95 | c=0.975 | c=0.99
---|---|---|---|---|---|---|---|---
n=5 | -0.83 | -0.75 | -0.67 | -0.55 | 0.55 | 0.67 | 0.75 | 0.83 |
n=10 | -0.66 | -0.58 | -0.50 | -0.40 | 0.40 | 0.50 | 0.58 | 0.66 |
n=15 | -0.56 | -0.48 | -0.41 | -0.33 | 0.33 | 0.41 | 0.48 | 0.56 |
n=20 | -0.49 | -0.42 | -0.36 | -0.28 | 0.28 | 0.36 | 0.42 | 0.49 |
n=25 | -0.44 | -0.38 | -0.32 | -0.26 | 0.26 | 0.32 | 0.38 | 0.44 |
n=30 | -0.41 | -0.35 | -0.30 | -0.23 | 0.23 | 0.30 | 0.35 | 0.41 |
n=40 | -0.36 | -0.30 | -0.26 | -0.20 | 0.20 | 0.26 | 0.30 | 0.36 |
n=70 | -0.27 | -0.23 | -0.20 | -0.15 | 0.15 | 0.20 | 0.23 | 0.27 |
n=90 | -0.24 | -0.20 | -0.17 | -0.14 | 0.14 | 0.17 | 0.20 | 0.24 |
n=100 | -0.23 | -0.20 | -0.16 | -0.13 | 0.13 | 0.16 | 0.20 | 0.23 |
n=150 | -0.19 | -0.16 | -0.13 | -0.10 | 0.10 | 0.13 | 0.16 | 0.19 |
For the convenience of practitioners, we explain how to use the cumulative probabilities in Table 1 in the context of testing the null hypothesis ρ = 0. The table confirms that the distribution is symmetric around zero, as in Figure 1. Let us consider some examples. If n = 100, the critical value from Table 1 for a one-tail 95% test is 0.16 (row n=100, column c=0.95). Let the observed positive r be 0.3. Since it exceeds the critical value (0.3 > 0.16), we reject ρ = 0. If n = 25, the critical value for a 5% left tail in Table 1 is −0.32. If the observed r is less than this critical value, it falls in the left tail, and we reject ρ = 0 to conclude that the correlation is significantly negative.
Table 1 can also be used for constructing two-tail 95% confidence intervals as follows. If the sample size is 30, we look along the row n=30: column c=0.025 gives −0.35 as the lower limit, and column c=0.975 gives 0.35 as the upper limit. In other words, for n = 30, any correlation coefficient smaller than 0.35 in absolute value is statistically insignificant.
If the standard bivariate normality assumption is not believed, one can use the maximum entropy bootstrap (R package meboot) designed for dependent data. A bootstrap application creates a large number J (say, 999) of versions of the data (x, y). Each version yields the generalized correlation values. The large set of J replicates of these correlations gives a numerical approximation to the sampling distribution of these correlations. Note that such a bootstrap sampling distribution is data-driven. It does not assume the bivariate normality needed for the construction of Table 1 based on (3.1).

Sorting the J replicated values from the smallest to the largest, one gets their “order statistics,” denoted by inserting parentheses around the subscripts. A left-tail 95% confidence interval leaves a 5% probability mass in the left tail; it is approximated by the order statistics as [r_(0.05J), r_(J)]. If the hypothesized null value is inside the one-tail interval, one fails to reject (accepts) the null hypothesis.
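A minimal sketch of such a data-driven bootstrap, assuming the meboot package is installed; for brevity it replicates the Pearson correlation of the mtcars variables, whereas the applications below replicate the generalized correlations, and the number of replicates J is an illustrative choice.

library(meboot)
x = mtcars$mpg; y = mtcars$hp
J = 999
xboot = meboot(x, reps = J)$ensemble   # J maximum-entropy resampled versions of x
yboot = meboot(y, reps = J)$ensemble   # J maximum-entropy resampled versions of y
rboot = sapply(seq_len(J), function(j) cor(xboot[, j], yboot[, j]))
quantile(rboot, c(0.025, 0.05, 0.95, 0.975))  # quantiles for two- or one-tail intervals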
We conclude this section by noting that the recommended measures of dependence based on the R* matrix and their formal inference are easy to implement. The tabulation of Taraldsen’s exact sampling distribution of correlation coefficients in Table 1 is new and should be of broader applicability. It is an improvement over standard significance tests of correlation coefficients based on Fisher’s z-transform. The next section illustrates with examples the use of Table 1, the newer dependence measures, and other inference tools.
4 Dependence Measure Examples & Tests
This section considers some examples of dependence measures. Our first example deals with fuel economy in automobile design. R software comes with the ‘mtcars’ data on ten aspects of automobile design and performance for 32 automobiles. We consider two design features for illustration, miles per gallon (mpg) and horsepower (hp). Vinod (2014) reports their negative Pearson correlation coefficient in his Figure 2. The negative sign correctly shows that mpg is reduced when a car has larger horsepower. Table 2 in Vinod (2014) reports the two generalized correlation coefficients r*(mpg|hp) and r*(hp|mpg) obtained by using kernel regressions.

One can interpret these values as signed strengths of dependence of a variable on the conditioning variable. The strengths are asymmetric, r*(mpg|hp) ≠ r*(hp|mpg), and the absolute value of the dependence strength using either generalized correlation coefficient is larger than the dependence strength suggested under linearity. Thus, Pearson’s correlation coefficient can underestimate dependence by assuming linearity.

For the ‘mtcars’ data, depMeas based on (11) is −0.938. Now consider Table 1, row n=30 and column c=0.05, for a one-tail critical value of −0.30. The observed correlation is obviously in the left tail (rejection region) of the exact sampling distribution of the correlation coefficient. Thus, the negative dependence of fuel economy (mpg) on the car’s horsepower (hp) is statistically significant. We re-confirm the significance by computing the one-tail p-value (= 1e-16) using the R function pTarald(.). Our R code for p-values from Taraldsen’s exact density of correlation coefficients is given next.
pTarald=function(r,n,rho,obsr){ # one-tail p-value from density (3.1); needs library(hypergeo)
  v=n-1                         # degrees of freedom
  # winsorize v at 164 to avoid numerical overflow of the gamma functions
  if(v<=164) Trm1=(v*(v-1)*gamma(v-1))/((sqrt(2*pi)*gamma(v+0.5)))
  if(v>164) Trm1=(164*(163)*gamma(163))/((sqrt(2*pi)*gamma(163.5)))
  Trm2=(1-r^2)^((v-1)/2)
  if(rho!=0) Trm2b=((1-rho^2)^((v-2)/2))*((1-rho*r)^((1-2*v)/2))
  if(rho==0) Trm2b=1
  Trm3b=Re(hypergeo(3/2,-1/2,(v+0.5),(1+r*rho)/2))
  y0=Re(Trm1*Trm2*Trm2b*Trm3b)  # density heights on the grid r
  p=y0/sum(y0)                  # rescale to probabilities
  cup=cumsum(p)                 # numerical cumulative distribution
  loc=max(which(r<obsr))+1      # grid location of the observed correlation
  if(obsr<0) ans=cup[loc]       # left-tail p-value
  if(obsr>=0) ans=1-cup[loc]    # right-tail p-value
  return(ans)}
pTarald(r=seq(-1,1,by=0.001),n=32,rho=0,obsr=-0.938)
The first term (Trm1) in the R function computing the p-values involves a ratio of two gamma (factorial) functions appearing in (3.1). For large ν (beyond about 164), each gamma function overflows to infinity in floating-point arithmetic, and Trm1 becomes ‘NaN’, or not a number. Our code winsorizes ν at 164 to avoid this.
Since the mtcars data have n = 32 and the observed generalized correlation is −0.938, we use the command on the last line of the code to get the p-value of 1e-16, which is extremely small, suggesting statistical significance. If we use the same automobile data with GM22’s R package HellCor, we find an η giving no hint that mpg and hp are negatively related. If we compare numerical magnitudes, η exceeds Pearson’s correlation in absolute value, so η is seen to incorporate some nonlinear dependence. However, η may be an underestimate of the absolute value of depMeas = −0.938. We fear that η either fails to incorporate some nonlinear dependence or pays an unknown penalty for adhering to the symmetry dogma.
4.1 Further Real-Data Applications in GM22
GM22 illustrate the Hellinger correlation measure of dependence using two sets of data where the Pearson correlation is statistically insignificant, yet their Hellinger correlation is significant. Their first data set refers to the population of seabirds and coral reef fish residing around n = 12 islands in the British Indian Ocean Territory of Chagos Archipelago. Ecologists and other scientists cited by GM22 have determined that fish and seabirds have an ecologically symbiotic relationship. The seabirds create an extra nutrient supply to help algae. Since fish primarily feed on those algae, the two variables should have a significantly positive dependence.
GM22 begin with the low Pearson correlation r = 0.374 and a 95% confidence interval that contains zero, suggesting no significant dependence. The p-value using pTarald(..,obsr=0.374) is 0.0935, which exceeds the benchmark of 0.05, confirming statistical insignificance. The wide confidence interval, which includes zero, is partly due to the small sample size (n = 12).
Our Table 1 with the exact distribution of correlations suggests that when n = 10 (more conservative than the correct n = 12), the exact two-tail 95% confidence interval (leaving 2.5% probability mass in each tail) also has a wide range, [−0.58, 0.58], which includes zero. Assuming the direction is known, the one-tail critical value with 5% in the right tail (n = 10) is 0.50. That is, only when the observed correlation is larger than 0.50 is it significantly positive (assuming a bivariate normal parent density).
[Figure 3: The seabird and coral reef fish data for the n = 12 islands.]
GM22 find that their Hellinger correlation needs to be normalized to ensure that it lies in [0, 1], because their estimate of η can exceed unity. They claim an easier interpretation of the normalized version on the “familiar Pearson scale,” though Pearson’s scale admits negative values. GM22 employ considerable ingenuity to achieve the positive range [0, 1], described in their Section 5.3. They state on page 650 that their range normalization “comes at the price of a lower power when it comes to test for independence.”
Using the population of seabirds and coral reef fish residing around the n = 12 islands, GM22 report the estimate η(fish, seabirds) = 0.744. If one assumes a bivariate normal parent distribution and uses Taraldsen’s exact density from Table 1, η(fish, seabirds) = 0.744 suggests statistical significance. The p-value using pTarald(..,obsr=0.744) is 0.0027, which is smaller than the benchmark 0.05, confirming significance.

In light of Figure 3, it is unrealistic to assume that the data come from a bivariate normal parent distribution. Hence the evidence showing a significantly positive correlation between fish and seabirds based on Taraldsen’s exact density is suspect. Accordingly, GM22 report a bootstrap p-value just below 0.05 as their evidence. Since this p-value is too close to 0.05, we check for unintended p-hacking. When one runs their HellCor(.) function with set.seed(99) and default settings, the bootstrap p-value exceeds the 0.05 benchmark, suggesting an insignificant η. Then, GM22’s positive Hellinger correlation estimate of η = 0.744 is not statistically significant at the usual 95% level. Thus, the Hellinger correlation fails to be strongly superior to Pearson’s correlation, because r = 0.374 is also insignificantly positive.

Now, let us compare with the off-diagonal elements of the generalized correlation matrix R* recommended here. Our gmcmtx0(cbind(fish,seabirds)) suggests that the “causal” direction (seabirds to fish) is also positive, with r* = 0.6687. The p-value using pTarald(..,obsr=0.6687) is 0.0086, which is smaller than the benchmark 0.05, confirming significance. There is no suspicion of p-hacking here. A 95% bootstrap two-tail confidence interval using the meboot R package is [0.3898, 0.9373]. A one-tail interval is [0.4394, 1], which includes the observed 0.6687; the corresponding bootstrap p-value is zero. See Figure 4, where almost the entire density has positive support. Note that the interval does not include zero, suggesting significant positive dependence consistent with what the ecologists expect. The lower limit of our meboot confidence interval is not close to zero. More importantly, our generalized correlation coefficients do not impose symmetric dependence, and they reveal the sign information borrowed from the covariance, absent in the Hellinger correlation.
[Figure 4: Bootstrap sampling density of the generalized correlation for the fish-seabird data; almost the entire density has positive support.]
The second example in GM22 has the number of births and deaths per year per 1000 individuals in n = 229 countries in 2020. A data scatterplot in their Figure 7 displays a C-shaped nonlinear relation. Pearson’s correlation, r = −0.13, is negative and insignificant at the 5% level. This is based on the two-tail traditional Fisher approximation to the sampling distribution of a correlation coefficient. It is reversed by our more powerful one-tail p-value using Taraldsen’s exact sampling distribution. Our pTarald(..,n=229, obsr=-0.13) p-value is below 0.05, implying a statistically significant negative correlation. On the other hand, GM22 estimate a positive Hellinger correlation with a two-tail 95% bootstrap confidence interval [0.474, 0.746], hiding important information about the negative direction of dependence. Since zero is outside the confidence interval, GM22 claim that they have correctly overcome an apparently incorrect inference based on traditional methods. We have shown that the traditional inference was incorrect because the more accurate Taraldsen distribution was not used.

Our gmcmtx0(cbind(birth,death)) estimate of the relevant generalized correlation r* is negative. A one-tail 95% confidence interval using the maximum entropy bootstrap (R package meboot) is [-1, -0.5693]. A somewhat less powerful two-tail interval is also entirely negative. The null hypothesis states that the true unknown r* is zero. Since our random interval excludes zero, the dependence is significantly negative. The p-value is zero in Figure 5, since almost the entire density has negative support. A larger birth rate significantly leads to a lower death rate in the 229 countries in 2020.
[Figure 5: Bootstrap sampling density of the generalized correlation for the births and deaths data; almost the entire density has negative support.]
In summary, in the two examples used by GM22 to sell their Hellinger correlation, η has a discernible advantage over Pearson’s r, but not over our generalized correlation r*. The examples confirm four shortcomings of the Hellinger correlation relative to r*. (a) It imposes an unrealistic symmetry assumption. (b) It provides no information about the direction of dependence. (c) It forces the use of less powerful two-tail confidence intervals. (d) It is currently not implemented for discrete variables.
5 Final Remarks
Many scientists are interested in measuring the directions and strengths of dependence between variables. This paper surveys quantitative measures of dependence between two variables. We use four real-world examples in Section 1.1 to show that any symmetric measure of dependence alleging equal strength in both directions is unacceptable. Yet, the majority of statistical dependence measures extant in the literature adhere to the symmetry dogma. A 2022 paper (GM22) develops an intrinsically flawed symmetric measure of dependence in proposing the Hellinger correlation.

We show that the off-diagonal elements of the asymmetric matrix R* of generalized correlation coefficients provide an intuitively sensible measure of dependence after incorporating nonlinear and nonparametric relations among the variables involved. The R package generalCorr makes it easy to implement our proposal. Its six vignettes provide ample illustrations of the theory and applications of R*.

We discuss statistical inference for elements of the R* matrix, providing a new Table 1 of quantiles of Taraldsen’s (2021) exact density of a correlation coefficient for eleven typical sample sizes and eight cumulative probabilities. We illustrate with two data sets used by GM22 to support their Hellinger correlation. Directional information is uniquely provided by our asymmetric measure of dependence in the form of the generalized correlation coefficients r*. It allows the researcher to achieve somewhat better qualitative results and more powerful one-tail tests compared to symmetric measures of dependence in the literature.

We claim that one-tail p-values from Taraldsen’s density can overcome the inaccuracy of traditional Pearson correlation inference based on Fisher’s z-transform. We illustrate the claim using GM22’s second example, where the Pearson correlation is shown to be significantly negative using Taraldsen’s density. Hence, the complicated Hellinger correlation inference is not really needed to achieve correct significance. Interestingly, both hand-picked examples designed to show the superiority of GM22’s η over Pearson’s r show the merit of our proposal based on r* over η.

Almost every issue of every quantitative journal refers to correlation coefficients at least once, underlining their importance in measuring dependence. We hope that R* and our implementation of Taraldsen’s exact sampling distribution of correlation coefficients receive further attention and development.
References
- Allen (2022) Allen, D. E. (2022), “Cryptocurrencies, Diversification and the COVID-19 Pandemic,” Journal of Risk and Financial Management, 15, URL https://www.mdpi.com/1911-8074/15/3/103.
- Allen and Hooper (2018) Allen, D. E. and Hooper, V. (2018), “Generalized Correlation Measures of Causality and Forecasts of the VIX Using Non-Linear Models,” Sustainability, 10, 1–15, URL https://www.mdpi.com/2071-1050/10/8/2695.
- Beare (2010) Beare, B. K. (2010), “Copulas and Temporal Dependence,” Econometrica, 78(1), 395–410.
- Bhattacharyya (1943) Bhattacharyya, A. (1943), “On a Measure of Divergence Between Two Statistical Populations Defined by Their Probability Distributions,” Bulletin of the Calcutta Mathematical Society, 35, 99–109.
- Bouri et al. (2020) Bouri, E., Shahzad, S. J. H., Roubaud, D., Kristoufek, L., and Lucey, B. (2020), “Bitcoin, gold, and commodities as safe havens for stocks: New insight through wavelet analysis,” The Quarterly Review of Economics and Finance, 77, 156–164, URL https://www.sciencedirect.com/science/article/pii/S1062976920300326.
- Dette et al. (2013) Dette, H., Siburg, K. F., and Stoimenov, P. A. (2013), “A Copula-Based NonParametric Measure of Regression Dependence,” Scandinavian Journal of Statistics, 40, 21–41.
- Geenens and de Micheaux (2022) Geenens, G. and de Micheaux, P. L. (2022), “The Hellinger Correlation,” Journal of the American Statistical Association, 117(538), 639–653, URL https://doi.org/10.1080/01621459.2020.1791132.
- Granger et al. (2004) Granger, C. W. J., Maasoumi, E., and Racine, J. (2004), “A Dependence Metric for Possibly Nonlinear Processes,” Journal of Time Series Analysis, 25, 649–669.
- Janzing et al. (2013) Janzing, D., Balduzzi, D., Grosse-Wentrup, M., and Schölkopf, B. (2013), “Quantifying Causal Influences,” The Annals of Statistics, 41, 2324–2358.
- Kendall and Stuart (1977) Kendall, M. and Stuart, A. (1977), The Advanced Theory of Statistics, vol. 2, New York: Macmillan Publishing Co., 4th ed.
- Reimherr and Nicolae (2013) Reimherr, M. and Nicolae, D. L. (2013), “On Quantifying Dependence: A Framework for Developing Interpretable Measures,” Statistical Science, 28, 116–130, URL https://doi.org/10.1214/12-STS405.
- Renyi (1959) Renyi, A. (1959), “On Measures of Dependence,” Acta Mathematica Academiae Scientiarum Hungarica, 10, 441–451.
- Taraldsen (2021) Taraldsen, G. (2021), “Confidence in Correlation,” preprint, 1–7, URL https://doi.org/10.13140/RG.2.2.23673.49769.
- Tjostheim (1996) Tjostheim, D. (1996), “Measures and tests of independence: a survey,” Statistics, 28, 249–284, URL https://arxiv.org/pdf/1809.10455.pdf.
- Vinod (2014) Vinod, H. D. (2014), “Matrix Algebra Topics in Statistics and Economics Using R,” in “Handbook of Statistics: Computational Statistics with R,” , eds. Rao, M. B. and Rao, C. R., New York: North-Holland, Elsevier Science, vol. 34, chap. 4, pp. 143–176.
- Vinod (2017) — (2017), “Generalized correlation and kernel causality with applications in development economics,” Communications in Statistics - Simulation and Computation, 46, 4513–4534, available online: 29 Dec 2015, URL https://doi.org/10.1080/03610918.2015.1122048.
- Zheng et al. (2012) Zheng, S., Shi, N.-Z., and Zhang, Z. (2012), “Generalized Measures of Correlation for Asymmetry, Nonlinearity, and Beyond,” Journal of the American Statistical Association, 107, 1239–1252.