Gaussian Information Bottleneck and the Non-Perturbative Renormalization Group
Abstract
The renormalization group (RG) is a class of theoretical techniques used to explain the collective physics of interacting, many-body systems. It has been suggested that the RG formalism may be useful in finding and interpreting emergent low-dimensional structure in complex systems outside of the traditional physics context, such as in biology or computer science. In such contexts, one common dimensionality-reduction framework already in use is information bottleneck (IB), in which the goal is to compress an “input” signal while maximizing its mutual information with some stochastic “relevance” variable. IB has been applied in vertebrate and invertebrate sensory processing systems to characterize optimal encoding of the future motion of the external world. Other recent work has shown that the RG scheme for the dimer model could be “discovered” by a neural network attempting to solve an IB-like problem. This manuscript explores whether IB and any existing formulation of RG are formally equivalent. A class of soft-cutoff non-perturbative RG techniques is defined by families of non-deterministic coarsening maps, and hence can be formally mapped onto IB, and vice versa. For concreteness, this discussion is limited entirely to Gaussian statistics (GIB), for which IB has exact, closed-form solutions. Under this constraint, GIB has a semigroup structure, in which successive transformations remain IB-optimal. Further, the RG cutoff scheme associated with GIB can be identified. Our results suggest that IB can be used to impose a notion of “large scale” structure, such as biological function, on an RG procedure.
I Introduction
An overarching theme in the study of complex systems is effective low-dimensionality. We are content, for example, with the existence of laws of fluid dynamics whose few phenomenological parameters accurately account for the macroscopic behavior of many completely different fluids. We are also confident that the laws are insensitive to the particular microscopic configuration of a fluid at any given time. These are connected, but different notions of low-dimensionality; the first deals with simplification in model space, while the second refers to the emergence of collective modes, of which relatively few, when compared to the total number of degrees of freedom, will be important. A central result of Wilson’s renormalization group (RG) formulation is that an effective low-dimensional model of a system may be found through repeated coarsening of the microscopic or “bare” model. In other terms, by successively removing dynamical degrees of freedom from the system description, the effective model “flows” towards a description involving very few parameters. In general, there are many strategies which can be used to simplify the description of a high-dimensional system, and RG methods, though vast in breadth, form only a subset of these. An altogether different dimensionality reduction framework is the information bottleneck (IB), which attempts to compress (or more accurately coarsen) a signal while keeping as much information about some a priori defined “relevance” variable as possible [1]. Both IB and RG have been applied in theoretical neuroscience [2, 3, 4, 5], computer science [6, 7, 8, 9, 10, 11], and other frontier areas of applied statistical physics [12, 13]. Given the ubiquitous need to find simplifying structure in complex models and data, a synthesis of the ideas present in IB and RG could yield powerful new analysis methods and theoretical insight.
Probability-theoretic investigations of renormalization group methods are not a recent development [14]. One early paper by Jona-Lasinio used limit theorems from probability theory to argue for the equivalence of the older, field-theoretic RG formalism due to Gell-Mann and Low with the modern view due to Kadanoff and Wilson [15]. Recent work [16, 17, 18, 19, 20, 13] has focused on connections of RG to information theory. Since the general goal in RG is to remove information about some modes or system states through coarsening, an effective characterization of RG explains how the information loss due to coarsening generates the RG flow or relates to existing notions of emergence. Moreover, like the probabilistic viewpoint promoted by Jona-Lasinio, the information-theoretic viewpoint enjoys a healthy separation from physical context. The hope is that, by removing assumptions about the particular organization or interpretation of the degrees of freedom in the system, RG methods can be generalized and made applicable to problems outside of a traditional physics setting [5, 21]. This viewpoint also has the potential to enrich traditional RG applications, as Koch-Janusz et al. point out [12]. Their neural-network implementation of an IB-like coarsening scheme was able to “discover” the relevant, large-scale modes of the dimer model, whose thermodynamics are completely entropic, and whose collective modes do not resemble the initial degrees of freedom. More recently, Gordon et al. built upon this scheme to formally connect notions of “relevance” between IB and RG [13].
In contrast to most RG formulations, which require an explicit, a priori notion of how the modes of the system should be ordered, the information bottleneck approach defines the relevance of a feature by the information it carries about a specified relevance variable. To be concrete, let $X$ be a random variable, called the “input,” which we wish to coarsen. Then, let $Y$ be another random variable, called the “relevance variable,” which has some statistical interaction with the input $X$. IB defines a non-deterministically coarsened version of $X$, denoted $T$, which is optimal in the sense that the mutual information (MI) between $T$ and $Y$ is maximized at a given level of compression. Because $T$ is defined as a non-deterministic coarsening of $X$, an exact correspondence between RG and IB demands that the RG scheme uses what is known as a “soft” cutoff. This means, for example, that the ubiquitous perturbative momentum shell approach put forth by Wilson cannot be mapped exactly onto IB under the interpretation of $T$ as some coarse-grained variable. The trade-off between degree of coarsening, indicated by $I(X;T)$, and the amount of relevant information retained, $I(T;Y)$, is controlled by a continuous parameter, denoted $\beta$. Formally, the non-deterministic map $p(t|x)$ which yields $T$ from $X$ is found by optimizing the IB objective function:
$\mathcal{L}\big[p(t|x)\big] \;=\; I(X;T) \;-\; \beta\, I(T;Y), \qquad p_\beta(t|x) \;=\; \operatorname*{arg\,min}_{p(t|x)} \mathcal{L}. \qquad (1)$
For large values of $\beta$, the compressed representation $T$ is more detailed and retains a greater amount of predictive information about $Y$. Conversely, for smaller $\beta$, relatively few features are kept, in favor of reducing $I(X;T)$ (increasing compression/coarsening). The formalism investigated here is the one originally laid out in 2000 by Tishby et al. [1], but since then a number of thematically similar IB schemes have been proposed [22, 23, 24]. IB methods have been employed extensively in computer science, specifically towards artificial neural networks and machine learning [6, 7, 8, 9, 10, 11]. In theoretical neuroscience, Palmer et al. have demonstrated using IB that the retina optimally encodes the future state of some time-correlated stimuli, suggesting that prediction is a biological function instantiated early in the visual stream [4, 25]. IB has also been applied in studies of other complex systems, for instance to efficiently discover important reaction coordinates in large molecular dynamics simulations [26], and to rigorously demonstrate hierarchical structure in the behavior of Drosophila over long timescales [27].
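To make the objective in (1) concrete, the following minimal sketch evaluates it for a linear-Gaussian encoder $T = AX + \xi$, using the standard log-determinant expression for Gaussian mutual information. The covariance blocks and encoder here are hypothetical placeholders, not quantities from any particular system:

import numpy as np

rng = np.random.default_rng(0)

# Jointly Gaussian (X, Y): covariance blocks (hypothetical example values)
Sx  = np.array([[2.0, 0.5], [0.5, 1.0]])        # Sigma_x
Sy  = np.array([[1.0]])                          # Sigma_y
Sxy = np.array([[0.8], [0.3]])                   # Sigma_xy

# Conditional covariance Sigma_{x|y}
Sx_y = Sx - Sxy @ np.linalg.inv(Sy) @ Sxy.T

def ib_objective(A, Sxi, beta):
    """Evaluate L = I(X;T) - beta * I(T;Y) for the encoder T = A X + xi."""
    St   = A @ Sx @ A.T + Sxi          # Sigma_t
    St_x = Sxi                          # Sigma_{t|x}: only the noise remains
    St_y = A @ Sx_y @ A.T + Sxi         # Sigma_{t|y}
    ld = lambda M: np.linalg.slogdet(M)[1]
    I_xt = 0.5 * (ld(St) - ld(St_x))    # Gaussian MI via log-determinants
    I_ty = 0.5 * (ld(St) - ld(St_y))
    return I_xt - beta * I_ty

A = rng.standard_normal((2, 2))
print(ib_objective(A, np.eye(2), beta=5.0))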
From a broad perspective, there are some basic similarities between RG and IB. Both frameworks entail a coarsening procedure by which the irrelevant aspects of a system description are discarded in order to generate a lower-dimensional, “effective” picture. Further, the Lagrange multiplier $\beta$ in IB, which parameterizes the level of detail retained, can be seen as roughly analogous to the scale cutoff present in some implementations of RG. As a first guess, one might imagine that $X$ in IB roughly corresponds to the (fluctuating) bare state of a system we are interested in renormalizing, and its compressed representation $T$ is a coarsened dynamical field akin to a fluctuating “local” order parameter. However, it is not difficult to find implementations of RG which do not map to IB in this way, and vice versa. For example, in Wilsonian RG schemes with a hard momentum cutoff, the decimation step represents a deterministic map from bare to coarsened system state. Together with our provisional interpretation, this contradicts the original formulation of IB, in which the coarsening is non-deterministic. (Alternative IB frameworks have been proposed which result in deterministic mappings [23, 24], and these could conceivably be connected to hard-cutoff RG schemes, though some issues occur for continuous random variables. We restrict our present discussion to the original framework.)
Another, more serious discrepancy is due to the expected use cases of these two theoretical frameworks. Generically, the fixed point description of criticality offered by RG is legitimate due to the presence of infinitely many interacting degrees of freedom; otherwise the coarsened model cannot be mapped back into the original model space. In IB, the random variable $X$ is finite-dimensional, such as a finite lattice of continuous spins, and “dimensional reduction” does not refer to the convergence towards a low-dimensional critical manifold in model space, but instead the actual removal of dimensions from the coarsened representation $T$ of $X$. Finally, and perhaps most dauntingly, there is not an obvious equivalent of the IB relevance variable $Y$ in RG. It seems counterintuitive that one would want more control over the collective mode basis used to describe a system, when for the vast majority of RG applications, length or energy scale works perfectly well as a cutoff.
Despite these apparent mismatches, there are some significant structural similarities between IB for continuous variables and a class of RG implementations involving soft cutoffs. For concreteness, we restrict our discussion of the correspondence to Gaussian statistics. While this precludes the analysis of non-Gaussian criticality, it allows all of the results to be expressed analytically and makes the connections more transparent. This can also serve as a basis for later investigations involving non-Gaussian statistics and interacting systems. To begin, we show that the Gaussian information bottleneck (GIB) [29] exhibits a semigroup structure in which successive IB coarsenings compose into larger IB coarsenings. This structure is summarized by an explicit function $b(\beta)$ of the Lagrange multiplier which simply multiplies under the semigroup action and is therefore analogous to the length scale in canonical RG. Next we explore how the coarsening map provided by IB defines an infra-red regulator which serves as a soft cutoff in several non-perturbative renormalization group (NPRG) schemes. This relation shows that the freedom inherent in choosing a cutoff scheme maps directly to the choice of relevance-variable statistics in IB. Finally, we use a Gaussian field theory as a toy model to explore the physical significance of this fact. One result is that the RG scheme provided by IB can select a collective mode basis which is not Fourier, and hence impose a cutoff which cannot be interpreted as a wavenumber. Additionally, in whichever collective mode basis is chosen, the shape of this IB cutoff scheme is closely related to the Litim regulator, which is ubiquitous in the NPRG literature [30].
II Semigroup structure in Gaussian Information Bottleneck
Every IB problem begins with the distribution $p(x,y)$, which specifies the statistical dependencies linking the input variable $X$ to the relevance variable $Y$. Gaussian information bottleneck (GIB) refers to the subset of IB problems in which $p(x,y)$ is jointly Gaussian. Under this constraint, a family of coarsening maps $p_\beta(t|x)$ can be found exactly for all $\beta$. Chechik et al. [29] showed this by explicitly parameterizing the coarsening map, then minimizing the IB objective function with respect to these parameters. Their parameterization consists of two matrices, $A$ and $\Sigma_\xi$, which are used to define the compressed representation $T$ as a linear projection of the input plus a Gaussian “noise” variable $\xi$. Explicitly, $T = AX + \xi$, with $\xi \sim \mathcal{N}(0, \Sigma_\xi)$ independent of $X$. Under this parameterization, one exact solution is given by:
$A(\beta) \;=\; \begin{bmatrix} \Theta(\beta - \beta_1^c)\,\alpha_1(\beta)\, v_1^T \\ \Theta(\beta - \beta_2^c)\,\alpha_2(\beta)\, v_2^T \\ \vdots \end{bmatrix}, \qquad \Sigma_\xi = \mathbb{1}, \qquad (2)$

where $\Theta$ is the Heaviside step function, $\beta_i^c \equiv (1-\lambda_i)^{-1}$, and $\alpha_i(\beta) = \sqrt{\big[\beta(1-\lambda_i)-1\big]/\big[\lambda_i\, r_i\big]}$, with $r_i \equiv v_i^T \Sigma_x v_i$. The matrix $V = [v_1, v_2, \ldots]$ represents a set of eigenvectors with corresponding eigenvalues $\lambda_i$ in the following way:

$\Sigma_x^{-1}\,\Sigma_{x|y}\; v_i \;=\; \lambda_i\, v_i.$
The matrix $\Sigma_x^{-1}\Sigma_{x|y}$ used above also appears in canonical correlation analysis, and we therefore refer to it as the “canonical correlation matrix”. Note that since it is not generally symmetric, the eigenvector matrix $V$ is not generally orthogonal. An important property of the canonical correlation matrix is that its eigenvalues lie within the unit interval; that is, $0 \le \lambda_i \le 1$ for all $i$.
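As an illustration, the following sketch (with an arbitrary, randomly generated joint covariance) constructs the canonical correlation matrix numerically and checks the properties just stated: its eigenvalues lie in the unit interval, and its right eigenvectors, while not orthogonal in the usual sense, are orthogonal with respect to $\Sigma_x$:

import numpy as np

rng = np.random.default_rng(1)

n, m = 4, 3                                   # dim(X), dim(Y); hypothetical sizes
L = rng.standard_normal((n + m, n + m))
S = L @ L.T + 1e-3 * np.eye(n + m)            # positive-definite joint covariance
Sx, Sy, Sxy = S[:n, :n], S[n:, n:], S[:n, n:]

Sx_y = Sx - Sxy @ np.linalg.inv(Sy) @ Sxy.T   # Sigma_{x|y}
Lam  = np.linalg.inv(Sx) @ Sx_y               # canonical correlation matrix

lam, V = np.linalg.eig(Lam)
lam, V = lam.real, np.real_if_close(V)        # spectrum is real (similar to a symmetric matrix)
print(np.sort(lam))                           # all eigenvalues lie in [0, 1]

# Right eigenvectors solve the generalized problem Sigma_{x|y} v = lambda Sigma_x v,
# so they are Sigma_x-orthogonal: V^T Sigma_x V is diagonal (the natural basis below).
print(np.round(V.T @ Sx @ V, 6))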
The GIB solution (2) is not unique. At a cursory level, this follows from the IB objective function (1), which is a function only of mutual information terms and hence invariant to all invertible transformations on $X$, $Y$, and $T$. However, not all invertible transformations will leave the joint distributions $p(x,y)$ and $p(t|x)$ Gaussian. It is specifically invertible linear transformations $T \to MT$ (and analogous transformations for $X$ and $Y$) which preserve IB optimality and leave all joint distributions Gaussian. One consequence of this is that $T \to MT$ changes the coarsening parameters, $(A, \Sigma_\xi) \to (MA,\, M\Sigma_\xi M^T)$. If $M$ is invertible, then these new parameters also solve GIB. When testing whether a given parameter combination is GIB-optimal, it is therefore useful to consider the quantity $A^T \Sigma_\xi^{-1} A$, which is invariant to all invertible linear transformations on $T$.
In this section, we show that solutions to GIB have an exact semigroup structure, wherein two GIB solutions “chained together” compose a larger solution which is still optimal. To be more precise, let $(X, Y)$ be jointly Gaussian, then suppose the map $p_{\beta_1}(t_1|x)$ is IB optimal. Because $T_1 = A^{(1)}X + \xi_1$ is Gaussian under this parameterization, it must also be that $(T_1, Y)$ is jointly Gaussian and thus a valid starting point for a new GIB problem. Taking $T_1$ to be the new input variable, let the second optimal coarsening map be $p_{\beta_2}(t_2|t_1)$, and parameterize it the same way: $T_2 = A^{(2)}T_1 + \xi_2$. Then, we claim, the composition of these two coarsening maps, obtained by integrating the expression $p_{\beta_2}(t_2|t_1)\,p_{\beta_1}(t_1|x)$ over $t_1$, is also given by a single IB-optimal coarsening at parameter $\beta_1 \circ \beta_2$, where $\circ$ is a binary operator whose explicit form will be provided shortly. We represent this composition schematically with the Markov chain:
$Y \;\leftrightarrow\; X \;\xrightarrow{\;\beta_1\;}\; T_1 \;\xrightarrow{\;\beta_2\;}\; T_2 \qquad (3)$
To simplify the analysis, we begin by redefining the input variable by projecting it onto the eigenvectors of the canonical correlation matrix, $X \to V^T X$. (To clarify notation: by $X \to V^T X$, we mean that each instance of $X$ should be replaced by $V^T X$.) Assuming that $V$ is full-rank, $X \to V^T X$ is an invertible linear transformation. Invertibility guarantees that the objective function is unaffected, while linearity guarantees that $p(x,y)$ remains Gaussian. We call this new basis for $X$ the “natural basis” since after this transformation $\Sigma_x$, $\Sigma_{x|y}$, and the canonical correlation matrix are diagonal. Additionally, after the first compression to $T_1$, the new analogous quantities, e.g. $\Sigma_{t_1}$, $\Sigma_{t_1|y}$, and the new canonical correlation matrix, will remain diagonal. For the transformation matrices $A^{(1)}$ and $A^{(2)}$, this fact can be seen by inspecting (2), while Lemma B.1. in [29] proves that $\Sigma_{t_1}$ and $\Sigma_{t_1|y}$ are diagonal. In this new basis, they are given by:

$\big(\Sigma_{t_1}\big)_{ii} \;=\; \frac{(\beta_1 - 1)(1-\lambda_i)}{\lambda_i}, \qquad \big(\Sigma_{t_1|y}\big)_{ii} \;=\; \beta_1 (1-\lambda_i), \qquad \beta_1 > \beta_i^c,$

while unencoded modes ($\beta_1 \le \beta_i^c$) carry only unit noise.
We now show that successively applied GIB compressions as portrayed in (3) compose to GIB transformations of greater compression. A more detailed treatment is given in appendix A. Suppose that $A$ and $\Sigma_\xi$ describe a non-deterministic map $X \to T$. From Lemma A.1. in [29], this map is IB-optimal if there exists some $\beta$ such that

$A^T\, \Sigma_\xi^{-1}\, A \;=\; A(\beta)^T A(\beta),$

where $A(\beta)$ is as given in (2).
Consider two successive maps with bottleneck parameters $\beta_1$ and $\beta_2$, each with unit noise. The composition of these transformations is represented by the pair $\big(A^{(2)}A^{(1)},\; \Sigma_\xi = A^{(2)}A^{(2)T} + \mathbb{1}\big)$. Both $A^{(1)}$ and $A^{(2)}$ can be computed explicitly using (2), though $A^{(2)}$ is initially given in terms of the statistics of $(T_1, Y)$. Using $T_1 = A^{(1)}X + \xi_1$, we thus re-write $A^{(2)}$ in terms of the original relevance variable-input variable statistics $(\Sigma_x, \Sigma_{x|y})$. After this substitution, direct evaluation of the invariant $A^T\Sigma_\xi^{-1}A$ shows that the composition is IB optimal:
$\big(A^{(2)}A^{(1)}\big)^T \big(A^{(2)}A^{(2)T} + \mathbb{1}\big)^{-1} \big(A^{(2)}A^{(1)}\big) \;=\; A(\beta_1 \circ \beta_2)^T A(\beta_1 \circ \beta_2),$

where $\beta_1 \circ \beta_2$ is the bottleneck parameter of the full, 1-step compression:
$\beta_1 \circ \beta_2 \;=\; \frac{\beta_1\, \beta_2}{\beta_1 + \beta_2 - 1}. \qquad (4)$
It is important to note that this computation defines the binary operator $\circ$. If GIB did not have a semigroup structure, it would not be possible to identify $\beta_1 \circ \beta_2$ in this manner. Direct computations show that this operator satisfies closure and associativity, and thus furnishes the space in which $\beta$ values live, that is $(1, \infty]$, with a semigroup structure. As a bonus, if we consider $\beta = \infty$ to be an element, we see that it is the identity element. This aligns with the fact that in the limit $\beta \to \infty$, the IB objective (1) becomes insensitive to the encoding cost $I(X;T)$ and hence no coarsening occurs; $T$ becomes a deterministic function of every component of $X$ which contains information about $Y$. Further, $\circ$ is symmetric. One should be careful to note, however, that the maps obtained by applying $\beta_1$ then $\beta_2$, or $\beta_2$ then $\beta_1$, need only agree in the overall level of compression achieved, and may otherwise differ, since $T \to MT$ is a symmetry.
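The composition law (4) can be checked numerically. The sketch below runs a single natural-basis mode (with hypothetical values of $\lambda_i$ and $r_i$) through two successive GIB compressions and confirms that the invariant $A^T\Sigma_\xi^{-1}A$ matches that of a single compression at $\beta_1 \circ \beta_2$:

# Scalar GIB in the natural basis: one mode with eigenvalue lam, variance r.
lam, r = 0.3, 2.0                      # hypothetical values; beta_c = 1/(1-lam)

def alpha2(beta):                      # squared gain of the optimal encoder, Eq. (2)
    return max(beta * (1 - lam) - 1, 0.0) / (lam * r)

def invariant(a2, s2):                 # A^T Sigma_xi^{-1} A for scalar parameters
    return a2 / s2

b1, b2 = 4.0, 7.0

# Step 1: T1 = a1 X + xi1.  Step 2: T2 = a2 T1 + xi2 (unit noises).
# Step 2 needs the post-step-1 statistics of (T1, Y):
a1s  = alpha2(b1)
St1  = a1s * r + 1.0                   # Sigma_{t1}
lam1 = (a1s * lam * r + 1.0) / St1     # new canonical correlation eigenvalue
a2s  = max(b2 * (1 - lam1) - 1, 0.0) / (lam1 * St1)

# Composition: A = a2*a1, Sigma_xi = a2^2 + 1
comp = invariant(a2s * a1s, a2s + 1.0)

# Single step at beta_1 o beta_2 = b1*b2 / (b1 + b2 - 1):
b12 = b1 * b2 / (b1 + b2 - 1.0)
single = invariant(alpha2(b12), 1.0)

print(comp, single)                    # agree exactly (both equal 1.6 here)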
II.1 What is the significance of this structure?
A broad goal of this paper is to explore structural similarities between IB and RG. The semigroup structure present in Wilsonian RG is crucial to its explanation of scaling phenomena, so its presence in GIB is a promising sign. The traditional picture is this: consider the RG transformations $R_{b_1}$ and $R_{b_2}$, which rescale length by factors $b_1$ and $b_2$, respectively. Then a fundamental property of $R_b$ is that $R_{b_1} R_{b_2} = R_{b_1 b_2}$. This structure imposes a strong constraint on the behavior of the flow near a fixed point. If $u_i$ represents an eigenvector of the Jacobian matrix at the fixed point, then its associated eigenvalue will scale as $\Lambda_i(b) = b^{y_i}$ [32]. In short, the semigroup typically allows one to define the critical exponent $y_i = \ln \Lambda_i / \ln b$.
The operator $\circ$ we introduced does not immediately lend itself to this sort of analysis. However, we can introduce a function $b(\beta)$ which satisfies $b(\beta_1 \circ \beta_2) = b(\beta_1)\, b(\beta_2)$. By inspection, this function is given by:

$b(\beta) \;=\; \frac{\beta}{\beta - 1}.$
This quantity is interesting because it is analogous to the length-rescaling factor found in typical Wilsonian or Kadanoff RG schemes, yet in IB there is no need for a notion of space, and hence rescaling length generally means nothing. Compare this with, for example, a momentum-shell decimation scheme. There, one identifies the rescaling factor by comparing the new and old UV cutoffs, and so it acquires the meaning of a length-rescaling factor. Here, $b$ is determined entirely by the Lagrange multiplier $\beta$ and the structure of GIB, both of which are defined without deference to an a priori existing notion of spatial extent. As discussed in the introduction, connecting IB to RG is attractive, in part, precisely because IB is an information-theoretic framework and does not rely on physical interpretations. Hence this rescaling factor should be considered an information-theoretic quantity in the same way as $\beta$.
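The multiplicative property is pure algebra and is easy to confirm, assuming the composition law (4) and the form of $b(\beta)$ above:

from itertools import product

def compose(b1, b2):
    return b1 * b2 / (b1 + b2 - 1.0)   # Eq. (4)

def b_factor(beta):
    return beta / (beta - 1.0)         # rescaling factor b(beta)

for b1, b2 in product([1.5, 2.0, 10.0], repeat=2):
    assert abs(b_factor(compose(b1, b2)) - b_factor(b1) * b_factor(b2)) < 1e-12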
Can $b(\beta)$ as defined above be used in the same way as the rescaling factor is used in RG? First, limits of the IB problem involving extremal values of $\beta$ should match intuition about $b$ in an RG context. Indeed, for $\beta \to \infty$, the zero-coarsening limit, $b \to 1$. Next, by the data processing inequality, $\beta \le 1$ at the optimal IB solution is degenerate with complete coarsening, i.e. $T$ becomes independent of $X$. Correspondingly, the limit $\beta \to 1^+$ gives $b \to \infty$. Next, let us recall the scope of the GIB framework. GIB makes statements only about completely Gaussian statistics, so no anomalous scaling will appear, and thus a discussion of critical exponents is hard to motivate. Second, GIB is defined for finite-dimensional $X$ and $Y$, so we cannot simply connect it to, say, momentum-shell Wilsonian RG, which only makes statements about infinite systems. Finally, and related to the last point, we have not yet identified what the analogous “model space” is in the context of IB, or how an optimal GIB map could represent an RG transformation in that space. This will be the subject of the next section, where we show that the non-deterministic nature of IB coarsening aligns exactly with existing soft-cutoff RG methods.
Whether or not this analysis helps to formally connect IB and RG, it is interesting to ask whether other IB problems exhibit semigroup structure. One could imagine, for example, that a series of high-$\beta$ compression steps (the low-compression limit) might be easier to solve than one large compression step; a sketch of such a chaining scheme is given below. If this is the case, IB problems with semigroup structure may benefit from an iterative chaining scheme similar to the one we present here. One possible application of this structure is the construction of feed-forward neural networks with IB objectives. If the IB problem in question has semigroup structure, then the task of training the entire network can be reduced to training the layers one-by-one on smaller (higher-compression) IB problems. This has benefits in biological systems, such as biochemical and neural networks, where processing is often hierarchical, likely as a result of underlying evolutionary and developmental constraints. Biological systems are also shaped by their output behavior, which sets a natural relevance variable in the arc from sensation to action.
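As a sketch of what iterative chaining might look like, the snippet below splits one strong compression into $n$ gentler steps, each carrying an equal share of the total rescaling factor $b(\beta)$. The helper name split_equal is ours, purely for illustration:

def split_equal(beta, n):
    """Split one GIB compression at 'beta' into n successive steps that each
    achieve an equal share of the total rescaling factor b(beta)."""
    b_total = beta / (beta - 1.0)
    b_step  = b_total ** (1.0 / n)
    return b_step / (b_step - 1.0)     # invert b(beta) = beta/(beta-1)

beta, n = 1.25, 4                      # one strong compression step...
beta_step = split_equal(beta, n)       # ...as four gentler (higher-beta) steps

# Check: composing the gentle step n times recovers the original beta.
acc = beta_step
for _ in range(n - 1):
    acc = acc * beta_step / (acc + beta_step - 1.0)
print(acc, beta)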
III Structural similarities between IB and NPRG
III.1 Soft-cutoff NPRG is a theory of non-deterministic coarsening
The renormalization group is not a single coherent framework, but rather a collection of theories, computational tools, and loosely-defined motifs. As such, it is probably not possible to succinctly define RG on the whole. A common theme, at least, is that RG techniques describe how the effective model of a given system changes as degrees of freedom are added or removed. The modern view of RG theory, which is largely due to Wilson [33, 34, 35, 36, 37] and Kadanoff [38], concerns itself with the removal of degrees of freedom through a process known as decimation, in which a thermodynamic quantity (typically the partition function) is re-written by performing a configurational sum or integral over a subset of the original modes. Here, even before discussing rescaling and renormalization, we must make procedural choices. To begin, one must specify the subset of degrees of freedom which are to be coarsened away. In theories where modes are labelled by wavenumber or momentum, one typically establishes a cutoff and decimates all modes with momentum above it. As a result, those modes are completely removed from the system description, and their statistics are incorporated into the couplings which parameterize the new effective theory. Another consideration is the practicality of carrying out such a procedure. If the model under consideration can be expanded in a perturbation series about a Gaussian model, and if the non-Gaussian operators are irrelevant or marginal under the flow, then the problem is amenable to perturbative RG. However, this is often not the case, for example in systems far from their critical dimension, or in non-equilibrium phase transitions, where there may not even be critical dimensions [39, 40].
In non-perturbative RG (NPRG) approaches, the need for a perturbative treatment is removed by working from a formally exact flow equation at the outset. The first such treatment was put forth in 1973 by Wegner and Houghton, who used Wilson’s idea of an infinitesimal momentum-shell integration to derive an exact flow equation for the full coarse-grained Hamiltonian [41]. Because this equation describes the evolution of the Hamiltonian for every field configuration, this and other NPRG flow equations are called integro-differential equations, and the NPRG is sometimes referred to as the functional renormalization group (FRG). Later, Wilson and Kogut [35], as well as Polchinski [42], proposed new NPRG flow equations in which the cutoff was not described explicitly through a literal demarcation between included and excluded modes, but instead through non-deterministic coarsening, so that the effective Hamiltonian satisfies a functional generalization of a diffusion equation. (This interpretation is explicitly presented in the Wilson-Kogut paper. Given that their approach is formally equivalent to the one put forth by Polchinski, the interpretation should apply to that framework as well.) These approaches were introduced, at least in part, as a response to difficulties that arise from the sharp cutoff in the Wegner-Houghton construction. (Under the sharp cutoff construction, some issues include the generation of non-local position-space interactions [35], unphysical nonanalyticities in correlation functions, and the need to evaluate ambiguous expressions in which the sharp-cutoff limit enters a function that is not known a priori [49].) Correspondingly, the Wilson-Polchinski FRG approach can be thought to give a soft cutoff, where modes can be “partially coarsened”.
The most common NPRG approach in use today was first described in 1993 by C. Wetterich [45]. Like the Wilson-Polchinski NPRG, the Wetterich approach uses a soft cutoff, but the objects computed by this framework are fundamentally different. Instead of computing the effective Hamiltonian of modes which are below the cutoff, the Wetterich framework computes the effective free energy of modes above the cutoff. For this reason, we say that the Wilson-Polchinski framework is UV-regulated and Wetterich is IR-regulated. Yet, despite this difference in perspective, the Wetterich formalism still describes the flow of effective models from their microscopic to their macroscopic forms. In this section, we will explore how the soft-cutoff construction is related to a notion of non-deterministic coarsening, and in turn, the information bottleneck framework. An in-depth discussion of the philosophy and implementation of NPRG techniques would be distracting, so we instead refer the reader to a number of good references on the topic [46, 47, 48, 49].
So far we have not explained how one actually imposes a soft-cutoff scheme. We begin by examining the Wetterich setup, in which one writes the effective (Helmholtz) free energy at cutoff $k$:

$W_k[J] \;=\; \ln \int \mathcal{D}\phi \;\; \exp\!\Big(-S[\phi] \;-\; \Delta S_k[\phi] \;+\; J^T\phi\Big).$

The bare action, given by $S[\phi]$, is the microscopic theory which is known a priori. The source $J$ allows us to take (functional) derivatives of this object to obtain cumulants (connected Green’s functions). The remaining term, $\Delta S_k[\phi]$, is known as the deformation, and it is this term which enforces the cutoff. It is written as a bilinear in $\phi$:
$\Delta S_k[\phi] \;=\; \frac{1}{2}\int d^dx\, d^dx'\;\; \phi(x)\, R_k(x, x')\, \phi(x'). \qquad (5)$
For compactness, we will often resort to a condensed notation and express integrals instead as contractions over suppressed continuous indices. For example, the deformation may be re-written:

$\Delta S_k[\phi] \;=\; \frac{1}{2}\,\phi^T R_k\,\phi.$
The kernel (matrix) $R_k$ is known as the regulator, and it controls the “shape” of the cutoff. Almost always, it is chosen to be diagonal in the Fourier basis so that the cutoff has the interpretation of a wavenumber or momentum. The resulting Fourier-transformed regulator $R_k(q)$ has some freedom in its definition, but it must satisfy the following properties [30]:
1. $\lim_{q^2/k^2 \to 0} R_k(q) > 0$
2. $\lim_{k^2/q^2 \to 0} R_k(q) = 0$
3. $R_k(q) \to \infty$ as $k \to \Lambda$ (the microscopic scale)
These constraints guarantee that the deformation acts as an IR cutoff. The first condition increases the effective mass of low-momentum modes ($q \ll k$) and suppresses their contribution to the effective free energy. The second ensures that modes with high momentum ($q \gg k$) are left relatively unaffected, and contribute more fully to $W_k$. The third condition ensures that the so-called “effective action,” defined as

$\Gamma_k[\varphi] \;=\; -W_k[J] \;+\; J^T\varphi \;-\; \Delta S_k[\varphi],$

approaches the bare action (or Hamiltonian, as the case may be) in the limit $k \to \Lambda$. Here, the order parameter is given by $\varphi = \delta W_k / \delta J$. Because of this construction, the second regulator property also ensures that in the limit $k \to 0$, the deformation disappears, and the effective action becomes the Legendre transform of $W$. This functional is known in many-body theory as the 1PI generating functional, and in statistical mechanics as the Gibbs free energy. In the Wetterich formalism, one is generally interested in computing the flow of $\Gamma_k$ because of these useful boundary conditions.
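For readers who want a concrete regulator to test these conditions against, the following sketch uses the standard exponential regulator $R_k(q) = q^2/(e^{q^2/k^2} - 1)$ — a common choice in the NPRG literature, distinct from anything derived in this paper — and probes the three limits numerically:

import numpy as np

def R_exp(k, q):
    """Exponential regulator R_k(q) = q^2 / (exp(q^2/k^2) - 1)."""
    return q**2 / np.expm1((q / k) ** 2)

k = 1.0
print(R_exp(k, 1e-4))   # property 1: finite, positive mass as q -> 0 (approx. k^2)
print(R_exp(k, 1e3))    # property 2: vanishes for q >> k
print(R_exp(1e3, 1.0))  # property 3: grows without bound (approx. k^2) as k -> infinity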
To see how this approach is related to non-deterministic coarsening, we will connect it to a soft-cutoff UV-regulated approach, also put forth by Wetterich, which is formally equivalent to the Wilson-Polchinski framework. We begin with the following expression defining the average action, taken directly from the paper [50], with only a slight change in notation:

$e^{-\tilde S_k[\chi]} \;=\; \int \mathcal{D}\phi \;\; P_k[\chi|\phi]\;\, e^{-S[\phi]},$

where we refer to the functional $P_k[\chi|\phi]$ as the coarsening map. If we were interested in performing deterministic coarsening, i.e. one involving a hard cutoff, the coarsening map would be something like a delta-function, $P_k[\chi|\phi] = \delta\big[\chi - f_k[\phi]\big]$, for some functional $f_k$. However, in all soft-cutoff UV-regulated approaches, this distribution is Gaussian in $\chi$:
$P_k[\chi|\phi] \;=\; \mathcal{N}_k\, \exp\!\Big[-\tfrac{1}{2}\big(\chi - A_k\phi\big)^T\, \Sigma_{\xi,k}^{-1}\, \big(\chi - A_k\phi\big)\Big]. \qquad (6)$
In principle, given the coarsening parameters $A_k$ and $\Sigma_{\xi,k}$ for all $k$, the exact flow equation for $\tilde S_k$ is determined. Wetterich gives explicit choices for these parameters, while Wilson and Polchinski independently give their own (though in slightly different fashion). The term $\mathcal{N}_k$ is a normalizing constant which is essentially unimportant to the remainder of our discussion.
Now we connect the IR and UV approaches to show that they are complementary, and in some sense, equivalent. In particular, suppose we know $P_k[\chi|\phi]$ for all $k$. Then, from this single object, one can construct both the IR-regulated and UV-regulated flows. This should make intuitive sense; the IR-regulated part tracks the thermodynamics of the already-integrated modes, while the UV-regulated part tracks the model of the unintegrated modes. This can all be seen clearly by writing out the full sourced partition function and invoking the normalization of the coarsening map:
$Z[J] \;=\; \int \mathcal{D}\phi\; e^{-S[\phi] + J^T\phi} \;=\; \int \mathcal{D}\chi\, \mathcal{D}\phi\;\; P_k[\chi|\phi]\; e^{-S[\phi] + J^T\phi}$
$\phantom{Z[J]} \;=\; \int \mathcal{D}\chi\; e^{-\frac{1}{2}\chi^T \Sigma_{\xi,k}^{-1}\chi} \int \mathcal{D}\phi\; \exp\!\Big(-S[\phi] \;-\; \tfrac{1}{2}\phi^T R_k\,\phi \;+\; J_k'[\chi, J]^T\phi\Big)$
$\phantom{Z[J]} \;=\; \int \mathcal{D}\chi\; \exp\!\Big(-\tfrac{1}{2}\chi^T \Sigma_{\xi,k}^{-1}\chi \;+\; W_k\big[J_k'[\chi, J]\big]\Big). \qquad (7)$
In the final expression, the normalizing constant has been dropped. Readers familiar with the Polchinski formulation will immediately recognize the $\chi$-dependent free energy as the effective interaction potential. However, the argument of this potential is shifted by the source $J$, which therefore enters nonlinearly, unlike in Polchinski’s approach. This difference is due to the fact that we define a flow for each initial source configuration, instead of adding a linear source term to the vacuum flow.
To arrive at (7) above, we had to define the effective field-dependent source $J_k'$ and identify a suitable deformation term in the exponent. By directly substituting (6), one can see that

$J_k'[\chi, J] \;=\; J \;+\; A_k^T\, \Sigma_{\xi,k}^{-1}\, \chi$

and

$R_k \;=\; A_k^T\, \Sigma_{\xi,k}^{-1}\, A_k. \qquad (8)$
As promised, the existence of a family of distributions $P_k[\chi|\phi]$ with a known parameterization allows us to define an IR regulator scheme, and therefore compute the NPRG flow both above and below the cutoff. The deformation term ultimately came from the $\frac{1}{2}(A_k\phi)^T \Sigma_{\xi,k}^{-1} (A_k\phi)$ term present in the coarsening map, which could be interpreted as a free energy. We also identify immediately that the IR regulator corresponding to a given choice of coarsening map is given by $R_k = A_k^T \Sigma_{\xi,k}^{-1} A_k$.
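The bookkeeping in (7) and (8) can be verified directly in a finite-dimensional Gaussian model, where all functional integrals reduce to linear algebra. In the sketch below (all kernels randomly generated and hypothetical), integrating the coarse variable out of the decomposition (7) by completing the square reproduces the bare propagator exactly:

import numpy as np

rng = np.random.default_rng(2)

# Finite-dimensional Gaussian "field theory": S[phi] = (1/2) phi^T K phi,
# coarsened by chi = A phi + xi with xi ~ N(0, Sxi).
n, m = 5, 3
W = rng.standard_normal((n, n))
K = W @ W.T + n * np.eye(n)                  # positive-definite bare kernel
A = rng.standard_normal((m, n))
Sxi = np.eye(m)

Sxi_inv = np.linalg.inv(Sxi)
R = A.T @ Sxi_inv @ A                        # IR regulator, Eq. (8)
Q = np.linalg.inv(K + R)                     # propagator of the deformed theory

# Integrating the Gaussian chi out of Eq. (7) (complete the square in chi)
# leaves a quadratic form in J that must equal the bare propagator K^{-1}:
C = A.T @ Sxi_inv                            # effective source: J' = J + C chi
M = Sxi_inv - C.T @ Q @ C
lhs = Q + Q @ C @ np.linalg.inv(M) @ C.T @ Q
print(np.allclose(lhs, np.linalg.inv(K)))    # True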
We will next use this viewpoint to introduce information bottleneck into the discussion. In particular, we will associate the coarsening map $P_k[\chi|\phi]$ with the IB coarsening map $p_\beta(t|x)$ and examine some consequences. This discussion comes with some restrictions. Firstly, one should note that all soft-cutoff NPRG frameworks, regardless of the structure of the microscopic action, assume a Gaussian coarsening map. With a non-Gaussian $P_k[\chi|\phi]$, the flow may still be defined, but it will not, in general, satisfy any known exact flow equations. This is easiest to see in the IR Wetterich formalism, since a non-Gaussian $P_k[\chi|\phi]$ would yield a deformation $\Delta S_k$ which is no longer bilinear in $\phi$, and hence one could not write the flow equation in terms of the exact effective propagator, as it usually is. Indeed, the more general $\Delta S_k$ could have terms at arbitrarily high order in $\phi$, and thus require arbitrarily high-order derivatives of $W_k$ in the flow equation. So, while it is not impossible to seriously consider a non-Gaussian $P_k[\chi|\phi]$, it is certainly inadvisable without good reason.
With this in mind, we must also note that IB has an exact solution involving a Gaussian $p(t|x)$ only when the variables $X$ and $Y$ are jointly Gaussian. By analogy, this restricts us to discussing theories where the bare action $S[\phi]$, or perhaps more accurately, the bare Hamiltonian, contains only linear and bilinear terms in $\phi$. While everything presented above holds for general $S[\phi]$, everything that follows will be totally Gaussian so that IB optimality can be exactly satisfied. Finally, note that IB may not be well-defined for infinite-dimensional random variables such as fields, so our scope is further limited to finite-dimensional multivariate Gaussian distributions of classical variables.
III.2 The Gaussian IB regulator scheme
In the last section, we briefly introduced soft-cutoff NPRG approaches and argued that both UV- and IR-regulated flows can be defined given a family of Gaussian coarsening maps $P_k[\chi|\phi]$. Broadly, we aim to show in this paper that IB and RG can be connected by identifying this map with the IB-optimal coarsening map $p_\beta(t|x)$. By this we do not mean to say that the family of maps produced by IB are the “correct” starting point for NPRG. Instead, we simply note that IB-optimality is a constraint one could impose on the coarse-graining scheme. Assuming we do so, what characteristics does the IB-RG scheme carry? Using the exact solution to GIB and Eq. (8), we identify the regulator, or soft-cutoff scheme, required by IB optimality for some known initial statistics $p(x,y)$:
$\big[R_{\mathrm{IB}}(\beta)\big]_{ij} \;=\; \delta_{ij}\;\frac{(1-\lambda_i)\,\big(\beta - \beta_i^c\big)}{\lambda_i\, r_i}\;\Theta\big(\beta - \beta_i^c\big). \qquad (9)$
Here the $\beta_i^c$ are critical bottleneck values, indexed by the components $i$ of the so-called “natural” basis, which is found by diagonalizing the canonical correlation matrix as discussed in section II. The critical bottleneck values are given by $\beta_i^c = (1-\lambda_i)^{-1}$. If $V$ is the matrix of right eigenvectors of this matrix, then the natural-basis representation of the input is given by $V^T X$. Notice also that this regulator is diagonal in the natural basis. $\Theta$ denotes the Heaviside step function.
In the typical context, $R_k$ is diagonalized by a Fourier transform, and thus it represents a cutoff in wavevector or momentum. Here this notion is generalized, and instead of identifying a cutoff wavenumber $k$, we should consider the cutoff to be of information-theoretic origin, and fundamentally defined by $\beta$. Consequently, the degree to which the mode labelled by $i$ is coarsened should be found by comparing its corresponding critical value $\beta_i^c$ to the cutoff $\beta$. As such, we can essentially make the replacements $k^2 \to \beta$ and $q^2 \to \beta_i^c$, with the caveat that $\beta$ and $\beta_i^c$ should approach unity as $k$ and $q$ go to zero.
In Figure 1, we plot the $R_{\mathrm{IB}}$ obtained from the first toy model presented in section IV.2 and compare it against the well-known Litim regulator [30], denoted $R^{\mathrm{Litim}}$ and given in Eq. (11). Ignoring for now the particulars of the model, we point out that the IB and Litim regulators appear qualitatively similar, and for fixed model parameters, all limits involving $\beta$ and $\beta_i^c$ satisfy the regulator scheme requirements. Moreover, we see that the NPRG and IB notions of mode relevance are in agreement. Smaller canonical correlation eigenvalues (top plot) correspond to collective modes which get integrated out later in the flow. This is reflected in the structure of the soft cutoff, which increasingly suppresses fluctuations as $\lambda_i \to 0$.
[Figure 1: Canonical correlation eigenvalues (top) and a comparison of the IB regulator $R_{\mathrm{IB}}$ with the Litim regulator $R^{\mathrm{Litim}}$ for the first toy model of section IV.2.]
Is it okay to take (9) seriously as an IR regulator scheme? Let us attempt to compare with the conditions outlined in the last section. The typical interpretation of the first requirement on $R_k$ is that the lowest energy modes should be given extra mass by the regulator so that they are “frozen out” of the configurational integral. In fewer words, there should not be soft modes in intermediate stages of the flow. By analogy, it must be true that $[R_{\mathrm{IB}}]_{ii} > 0$ for the mode $i$ with the smallest critical value $\beta_i^c$, which we take to represent the most “relevant” mode (in the IB sense). Indeed, for all $\beta > \beta_i^c$, this is satisfied by (9). Next, $[R_{\mathrm{IB}}]_{ii}$ must vanish for the mode $i$ when the cutoff $\beta$ is taken sufficiently far below $\beta_i^c$. Because of the step function, this is satisfied. Finally, each diagonal component should diverge as $\beta \to \infty$, so that at zero compression, only the saddle point configuration of the microscopic theory contributes to the generating function, or whichever thermodynamic potential we are interested in. If the $\beta_i^c$ are all finite, then this limit holds as well. (Two edge cases may appear important: $\lambda_i = 0$ and $\lambda_i = 1$. These correspond to limits where a mode is deterministically related to, or completely independent of, $Y$, respectively. Complete deterministic dependence between $X$ and $Y$ should not be considered without modification, and in the case of independence, those modes may be removed as a formality.)
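These three properties are easy to see numerically from (9). A one-mode sketch with hypothetical eigenvalues:

# IB regulator, Eq. (9), for one natural-basis mode (lam, r hypothetical):
def R_IB(beta, lam, r):
    bc = 1.0 / (1.0 - lam)                       # critical bottleneck value
    return (1 - lam) * (beta - bc) / (lam * r) * (beta > bc)

lam_relevant, lam_irrelevant, r = 0.1, 0.9, 1.0  # small lam = informative mode

beta = 5.0
print(R_IB(beta, lam_relevant, r))    # > 0: the relevant mode is "frozen"
print(R_IB(beta, lam_irrelevant, r))  # = 0: mode already integrated out
print(R_IB(1e8, lam_relevant, r))     # diverges as beta -> infinity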
Because it satisfies all of the properties required of a typical regulator in a soft-cutoff scheme, we call (9) the “IB regulator” and denote it $R_{\mathrm{IB}}$. This identification has some interesting consequences, which will be explored in the coming section. One particularly striking feature is that the cutoff scheme is now parameterized by the family of joint distributions $p(x,y)$. In IB theory, these distributions formalize the notion of “important features” of $X$ implicitly through its correlations with $Y$. This means that the RG scheme selected by a given set of IB solutions will not favor, for instance, “long distance modes” unless $Y$ is chosen to enforce that. Instead, the analogue of long distance modes are those modes which have the most information about $Y$. In section IV.2 we will attempt to clarify this by calculating the IB regulator explicitly in a simple, familiar context.
IV Consequences and interpretations of the correspondence
IV.1 The Blahut-Arimoto update scheme may displace the flow-equation description
The apparent goal of information bottleneck is to identify the coarsening map $p_\beta(t|x)$ for some set of $\beta$ values. This seems to align poorly with the problem statement and goals of NPRG, in which the coarsening map is taken as the starting point and used to derive the flow equations. Is it really true that solving IB only gets us to the starting point of an RG scheme, after which we still need to “do the RG part”? In this section, we investigate one way to resolve this dissonance by noting that the quantities one would usually consider to be the results of the NPRG flow can be used to parameterize $p_\beta(t|x)$ itself. From this viewpoint, one may organize the computation around a set of self-consistent update equations instead of a set of flow equations.
The general IB problem can be solved, in principle, by iterating what is known as the Blahut-Arimoto (BA) procedure, which is borrowed from rate distortion theory in a more general context [1]. This procedure relies on the fact that when $p(t|x)$ is IB optimal, it satisfies the following condition:

$p(t|x) \;=\; \frac{p(t)}{Z(x, \beta)}\, \exp\!\Big[-\beta\, D_{\mathrm{KL}}\big(p(y|x)\,\big\|\,p(y|t)\big)\Big],$

where everything on the RHS is to be considered a function of $p(t|x)$ through

$p(t) = \int dx\; p(t|x)\, p(x), \qquad p(y|t) = \frac{1}{p(t)}\int dx\; p(y|x)\, p(t|x)\, p(x).$

The function $Z(x, \beta)$ normalizes $p(t|x)$ and therefore also depends on $p(t|x)$ through the above equations.
In brief, the BA procedure entails taking an estimate for $p(t|x)$, plugging it into the IB optimality criterion above, then iterating until satisfactory convergence. In this way, we say that $p(t|x)$ is self-determined. This procedure is practically very difficult—if not impossible—for distributions of multivariate continuous variables in general. However, in the case of GIB, we can parameterize the distributions and then use Gaussian integral identities to update these parameters exactly. Chechik et al. [29] carry out this procedure in terms of the matrices $A$ and $\Sigma_\xi$, used to define $p(t|x)$. We repeat this computation but instead parameterize the update equation using $\Sigma_t$, the deformed propagator $G_\beta \equiv (\Sigma_x^{-1} + R_{\mathrm{IB}})^{-1}$, and the cross-covariance $\Sigma_{xt}$. The first two of these represent objects of interest in the UV- and IR-regulated parts of the NPRG scheme, respectively. The third quantity, $\Sigma_{xt}$, carries information about how the IR degrees of freedom are coupled to the original, UV variables $X$. In a very condensed form, the BA update equations in this parameterization read:
(10)
where both $G_\beta$ and $\Sigma_t$ can be expressed in terms of $\Sigma_x$, $\Sigma_{x|y}$, and the current estimate for the parameterization of $p(t|x)$. The full expressions are complicated and given fully in appendix C. Note that $G_\beta$ represents the IR-regulated flow; it is directly analogous to the effective propagator in the Wetterich formalism. In other words, given that we are only looking at Gaussian statistics, the function $W_\beta$ (or $\Gamma_\beta$) can be simply reconstructed from $G_\beta$. Next, $\Sigma_t$ represents the UV-regulated part, since the probability distribution describing $T$ can be reconstructed from it.
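For concreteness, here is a minimal sketch of the self-consistent Gaussian updates in the original $(A, \Sigma_\xi)$ parameterization, obtained by specializing the optimality condition above to linear-Gaussian encoders along the lines of [29]; the precise coefficients should be checked against that reference, and all numerical inputs below are hypothetical. The converged invariant $A^T\Sigma_\xi^{-1}A$ is exactly the IB regulator discussed above:

import numpy as np

rng = np.random.default_rng(3)

# Joint Gaussian statistics (hypothetical example)
n = 3
W = rng.standard_normal((n, n))
Sx = W @ W.T + n * np.eye(n)
Sxy = 0.5 * rng.standard_normal((n, n))
Sy = np.eye(n) + Sxy.T @ np.linalg.inv(Sx) @ Sxy   # keeps the joint pos.-def.
Sx_y = Sx - Sxy @ np.linalg.inv(Sy) @ Sxy.T

beta = 10.0
A, Sxi = rng.standard_normal((n, n)), np.eye(n)    # initial estimate

for _ in range(200):
    St   = A @ Sx  @ A.T + Sxi                     # Sigma_t
    St_y = A @ Sx_y @ A.T + Sxi                    # Sigma_{t|y}
    # BA-style self-consistent updates for the Gaussian parameterization:
    Sxi = np.linalg.inv(beta * np.linalg.inv(St_y)
                        - (beta - 1) * np.linalg.inv(St))
    A   = beta * Sxi @ np.linalg.inv(St_y) @ A @ (np.eye(n)
                        - Sx_y @ np.linalg.inv(Sx))

R = A.T @ np.linalg.inv(Sxi) @ A    # converged invariant = IB regulator
print(np.round(R, 4))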
We reiterate that this self-consistent updating scheme comes from IB optimality, written in terms of objects we would usually calculate in NPRG. The idea of a self-consistent updating scheme which determines the IR-regulated statistics and UV-regulated dynamics simultaneously is interesting. In addition to essentially replacing the flow-equation description, it is very non-perturbative in nature. However, it seems wrong that imposing a constraint on $p(t|x)$ should make anything easier, especially given the fact that IB enforces a goal which is only sometimes aligned with the typical goals of RG analysis. A natural question, then, is whether IB has actually provided any new leverage. More precisely, if we really have given up the flow equation in favor of a self-consistency scheme, does this new scheme actually help to calculate the objects of interest as the flow equation usually would? If so, why would IB optimality be necessary?
In the case of general, i.e. non-Gaussian $p(x,y)$, the integration

$p(t) \;=\; \int dx\; p(t|x)\, p(x)$

can’t be carried out directly. This is equivalent to the statement that at (and below) intermediate values of the cutoff in NPRG, the partition function can’t be directly computed from its integral representation. The whole point of Wilsonian RG is to get around this integration step by connecting $W_k$ to $W_{k - \delta k}$ by invoking a known flow equation. So, to answer our question, the IB update scheme may actually provide the same leverage, but only if (1.) we can represent the BA procedure parametrically, and (2.) the derivation of that parametric representation does not require the explicit marginalization over $X$ to obtain $p(t)$. The updates we present above for the fully Gaussian problem satisfy the first requirement, but fail the second, since we explicitly carried out Gaussian integrals over $X$ in the derivation. It is therefore unclear at this point whether some structure in IB could allow us to estimate $p(t)$ parametrically, which seems to be a prerequisite for the utility of a more general IB-RG framework in which IB is exactly enforced. Finally, we note that these conditions are necessary, but not sufficient, since further integration steps may be required to complete the BA update, for example in computing $p(y|t)$ and in going from an updated $p(t|x)$ back to the moments of $p(t)$.
In principle, the self-consistent structure imposed by IB-optimality obviates the need for a traditional cutoff/flow-equation description. However, the opposite is also true: if the cutoff scheme and flow equation are known, then the self-consistency conditions are displaced. Because GIB is exactly solvable, we are able to examine both approaches here. In Eq. (9), we present a soft cutoff scheme which arises from the constraint of GIB-optimality, but it is given in terms of quantities which have no physical context, and so it is hard to say a priori how it relates structurally to existing cutoff schemes. In the next section, we consider a toy model which provides this physical context and therefore affords us a glimpse into how IB-optimal NPRG schemes differ structurally from those already employed.
IV.2 Collective modes are not always Fourier: a minimal example
In the Wetterich NPRG, the cutoff is enforced through a deformation added to the bare action or Hamiltonian. In section III.2, we identified this structure as the free energy of a Gaussian coarsening map from the bare degrees of freedom $\phi$ to some compressed representation $\chi$. We then defined the IB regulator through the deformation produced when the coarsening map solves the Gaussian information bottleneck problem, and showed that it satisfies the various “design” constraints traditionally placed upon a regulator. An immediate consequence of this construction is that the regulator design space is now parameterized by the joint distributions $p(x,y)$ which define the starting point of IB, and for many such distributions, the preferred basis selected by IB will look nothing like Fourier modes. Of course, for finite systems not organized in a lattice, this is unsurprising; the Fourier basis will not exist in any familiar sense. However, for practitioners of NPRG, it may cause discomfort to consider a regulator in which the numbers $k$ and $q$ do not represent radii in momentum space. In contrast, for the majority of applications, the standard cutoff scheme is provided by the Litim regulator
$R_k^{\mathrm{Litim}}(q) \;=\; \big(k^2 - q^2\big)\,\Theta\big(k^2 - q^2\big), \qquad (11)$

which should be interpreted as a soft momentum-space cutoff. The Litim regulator sees widespread use both because it is optimized to give good convergence properties in certain contexts (its favorable or “optimal” characteristics are manifested through improved convergence of so-called “threshold functions,” a frequently encountered class of momentum integrals involving the cutoff), and because its simple form often leads to analytically expressible flow equations (after appropriate truncation procedures) [53, 30].
The IB regulator given in (9) does not manifestly have any such nice qualities, and in the general case may be difficult to interpret. In this section, we calculate $R_{\mathrm{IB}}$ explicitly in a trivial statistical field theory problem to explore its structure in a familiar context and address some of its non-intuitive features. For our model, we consider a real scalar field $\phi$ in $d$ dimensions at equilibrium and finite temperature $T$. This fluctuating field will serve as the “input variable” $X$. We also add a disordered source field $J$, which will serve as the “relevance variable” $Y$:
$P[\phi\,|\,J] \;\propto\; \exp\!\left\{-\frac{1}{T}\int d^dx \left[\tfrac{1}{2}\big(\nabla\phi\big)^2 + \tfrac{1}{2} m^2 \phi^2 - J\phi\right]\right\}. \qquad (12)$
We also give Gaussian statistics to the disorder:

$P[J] \;\propto\; \exp\!\left[-\frac{1}{2}\int d^dx\, d^dx'\;\; J(x)\, \Sigma_J^{-1}(x, x')\, J(x')\right].$
In our condensed notation, the above equations are re-expressed:

$P[\phi\,|\,J] \;\propto\; e^{-\frac{1}{2T}\,\phi^T G_0^{-1}\phi \;+\; \frac{1}{T} J^T\phi}, \qquad P[J] \;\propto\; e^{-\frac{1}{2} J^T \Sigma_J^{-1} J}, \qquad (13)$

where $G_0^{-1} = -\nabla^2 + m^2$ is the inverse bare propagator.
Together, the Boltzmann weight and the distribution describing the disorder statistics constitute a joint distribution $P[\phi, J]$ which is jointly Gaussian and thus—momentarily casting aside worries about the continuously infinite-dimensional random variables—a valid starting point for GIB. From the IB standpoint, the goal would usually be to construct a coarsened field $\chi$ which discards some information about $\phi$ while encoding as much as possible about the statistics of $J$. However, the goal here is not to discuss $\chi$ but rather to better understand the NPRG cutoff scheme that IB imposes as a consequence of this starting point. Since we have assumed a canonical form for the bare Green’s function and the source term is linear, the only remaining control over the joint statistics is the two-point correlation of $J$:

$\Sigma_J(x, x') \;=\; \langle J(x)\, J(x') \rangle.$

To explore different forms of $R_{\mathrm{IB}}$ we therefore consider three different constructions of $\Sigma_J$. First, we choose $J$ to be totally uncorrelated at different points, with a constant variance at each point. Second, we choose $\Sigma_J$ diagonal in the Fourier basis, but with some dispersion that adds position-space correlations. In both of these first examples, we will arrive at regulators with momentum-space cutoffs. It is the goal of the third case to present a $\Sigma_J$ which is not diagonal in the momentum basis, thereby introducing a non-momentum cutoff structure.
IV.2.1 IB regulator when disorder correlations are diagonal in momentum space
In the first and simplest case, we take $\Sigma_J(x, x')$ to be a $\delta$-function multiplied by some constant factor $\eta^2$. Since the Fourier transform is unitary (we choose the convention in which the transform includes a factor of $(2\pi)^{-d/2}$ so that $\mathcal{F}^\dagger \mathcal{F} = 1$), the momentum-space representation of $\Sigma_J$ is unchanged from its position-space representation:

$\Sigma_J(q, q') \;=\; \eta^2\, \delta(q - q').$
The first step in GIB analysis is constructing the canonical correlation matrix $\Sigma_\phi^{-1}\Sigma_{\phi|J}$, where we have chosen $X = \phi$ and $Y = J$. After a calculation involving only Gaussian integral identities and our definition of $\Sigma_J$, we obtain:

$\Sigma_{\phi|J} = T G_0, \qquad \Sigma_\phi = T G_0 + G_0\, \Sigma_J\, G_0, \qquad \Sigma_\phi^{-1}\Sigma_{\phi|J} = \big(T G_0 + G_0 \Sigma_J G_0\big)^{-1}\, T G_0.$
Next, we find the right eigenfunctions and corresponding eigenvalues of the canonical correlation matrix. For our current construction of $\Sigma_J$,

$v_q(x) = \frac{e^{iqx}}{(2\pi)^{d/2}}, \qquad \lambda(q) = \frac{T(q^2 + m^2)}{T(q^2 + m^2) + \eta^2},$

where $\lambda(q)$ is obtained after Fourier transform of the canonical correlation matrix. To finally obtain the IB regulator in a familiar form, we would like to find a way to express it completely in terms of $k$, $q$, and the various other parameters introduced in this application. However, equation (9) gives us $R_{\mathrm{IB}}$ in terms of the bottleneck parameter $\beta$, which has not yet been defined in this application.
The crucial insight is to note that $\beta$ serves essentially the same role as $k^2$ in the typical theory. To find the explicit map between the two, we use the fact that critical bottleneck values are defined in terms of the canonical correlation eigenvalues through $\beta^c = (1 - \lambda)^{-1}$. In this model, the critical bottleneck values are

$\beta^c(q) \;=\; 1 \;+\; \frac{T(q^2 + m^2)}{\eta^2}.$
Using this map, we can replace $\beta$ with $\beta^c(k)$, where $k$ is the usual momentum cutoff. Doing so, we find that the IB regulator can be neatly expressed in terms of the Litim regulator:
$R_{\mathrm{IB}}(k, q) \;=\; \frac{q^2 + m^2}{T(q^2 + m^2) + \eta^2}\;\big(k^2 - q^2\big)\,\Theta\big(k^2 - q^2\big) \;=\; \frac{q^2 + m^2}{T(q^2 + m^2) + \eta^2}\; R_k^{\mathrm{Litim}}(q). \qquad (14)$
In particular, the limit $\eta^2 \to 0$ gives $R_{\mathrm{IB}} \to T^{-1} R_k^{\mathrm{Litim}}$. It is interesting that the Litim regulator appears in this expression, since its derivation invokes optimality principles which are not obviously connected to information bottleneck.
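The algebra leading to (14) is easy to confirm numerically. In the sketch below (with hypothetical parameter values), the IB regulator built mode-by-mode from (9) coincides with the prefactor times the Litim regulator:

import numpy as np

# Toy-model parameters (hypothetical values)
T, m2, eta2 = 1.0, 0.5, 2.0
q = np.linspace(0.0, 2.0, 201)

lam = T * (q**2 + m2) / (T * (q**2 + m2) + eta2)    # lambda(q)
bc  = 1.0 / (1.0 - lam)                             # beta_c(q) = 1 + T(q^2+m^2)/eta^2
r   = (T * (q**2 + m2) + eta2) / (q**2 + m2)**2     # Sigma_phi(q)

k    = 1.5
beta = 1.0 + T * (k**2 + m2) / eta2                 # replacement beta -> beta_c(k)

R_ib    = (1 - lam) * (beta - bc) / (lam * r) * (beta > bc)   # Eq. (9)
R_litim = (k**2 - q**2) * (k**2 > q**2)                       # Eq. (11)
prefac  = (q**2 + m2) / (T * (q**2 + m2) + eta2)

print(np.allclose(R_ib, prefac * R_litim))          # Eq. (14): True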
IV.2.2 Momentum-space IB regulator with dispersion in disorder correlations
Without changing our decision to make $\Sigma_J$ diagonal in the Fourier basis, we can also add $q$-dependence to the disorder strength, $\eta^2 \to \eta^2(q)$. In this case, the steps taken above are essentially unchanged, and we end up with a slightly different regulator:

$R_{\mathrm{IB}}(\beta, q) \;=\; \frac{\eta^2(q)\,(q^2 + m^2)}{T\big[T(q^2 + m^2) + \eta^2(q)\big]}\;\big(\beta - \beta^c(q)\big)\,\Theta\big(\beta - \beta^c(q)\big).$
With some manipulations, one could optionally re-write this in terms of $\beta^c(k)$ in order to appeal to the Litim description once again.
A new feature appears in the regulator scheme when $\eta^2$ is given $q$-dependence. For extreme choices of $\eta^2(q)$, the ordering of modes can actually be reversed. To see how this is possible, note that fundamentally it is the IB parameter $\beta$ which sets the cutoff, while the critical values $\beta^c(q)$ define the mapping to momentum space. Therefore, by picking, e.g., $\eta^2(q) \propto q^4$, one achieves a $\beta^c(q)$ which monotonically decreases with respect to $q$, meaning longer wavelength modes (lower $q$) actually get integrated out before shorter ones. However, this construction presents some pathologies and is hard to interpret in the truly continuous case, so we will not explore it further here.
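The reversal is easy to exhibit (hypothetical numbers):

import numpy as np

T, m2 = 1.0, 0.5
q = np.linspace(0.1, 3.0, 5)
eta2 = q**4                      # dispersion growing faster than q^2

bc = 1.0 + T * (q**2 + m2) / eta2
print(bc)                        # monotonically decreasing: low-q modes now
                                 # drop out of the encoding first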
IV.2.3 Explicit form of the IB regulator in a more general case
In the last section we assumed a form of $\Sigma_J$ which was diagonal in the Fourier basis. This assumption led us to a regulator scheme which could be interpreted as a soft cutoff in momentum space. In this section we explore an example in which $\Sigma_J$ is no longer diagonal in the Fourier basis:

$\Sigma_J \;=\; \mathcal{F}_\theta\;\, \eta^2(q)\;\, \mathcal{F}_\theta^\dagger,$

where $\mathcal{F}_\theta$ is the fractional Fourier transform through angle $\theta$, and $\theta$ is a constant. Under this definition, we can again compute the canonical correlation matrix and find its spectrum. This yields eigenfunctions analogous to the plane-wave solutions of the last section, but indexed by a new parameter $u$ which can neither be interpreted as position nor wavenumber:

$v_u(x) \;=\; \big[\mathcal{F}_\theta^\dagger\, \mathcal{F}\big](u, x).$
Here, the notation indicates that $v_u$ is best conceptualized as an operator kernel parameterized by $u$, where for instance the collective modes of $\phi$ would be given by $\phi_u = \int d^dx\; v_u(x)\,\phi(x)$. Stated differently, the leftmost operator is evaluated at $u$, and the rightmost is a Fourier transform over the integrand. Unfortunately, this solution is only formal, and cannot be visualized in the same manner as plane waves. In a true field theory, even with the trivial Gaussian setup, both $v_u$ and $\Sigma_J$ are poorly behaved when written as functions of $x$: written as an integral over $x$, $v_u$ diverges for special values of the parameters, and is discontinuous in both $u$ and $x$. One way to conceptualize this is by comparison with the Fourier kernel itself, which, viewed in position space, cannot be expressed through elementary, normalizable functions of $x$; only after a Fourier transform can the operator description be replaced with a simple function of the continuous variable $q$. Similarly, although we cannot express $v_u$ and $\Sigma_J$ as functions of $x$, the various operators we are interested in can be written simply in the non-orthogonal basis defined by $v_u$:
$\big(\Sigma_{\phi|J}\big)_{uu'} \;=\; \delta(u - u')\;\sigma_{\phi|J}(u), \qquad (15)$
$\big(\Sigma_{\phi}\big)_{uu'} \;=\; \delta(u - u')\;\sigma_{\phi}(u), \qquad (16)$

so that the canonical correlation spectrum in this basis is $\lambda(u) = \sigma_{\phi|J}(u)/\sigma_\phi(u)$.
It is hard to say what the label $u$ physically represents beyond being a parameter that defines and orders collective modes in the system. Despite this, the regulator maintains its simple form:

$\big[R_{\mathrm{IB}}(\beta)\big]_{uu'} \;=\; \delta(u - u')\;\frac{\big(1 - \lambda(u)\big)\,\big(\beta - \beta^c(u)\big)}{\lambda(u)\,\sigma_\phi(u)}\;\Theta\big(\beta - \beta^c(u)\big). \qquad (17)$
Here, $\kappa$, defined through $\beta = \beta^c(\kappa)$, takes the role of the cutoff, replacing $k$ as $u$ has replaced $q$. That is, the collective modes labelled by $u$ are ordered in terms of their predictiveness about the disordered source field $J$. GIB then imposes a soft-cutoff scheme at a scale $\kappa$, which is a proxy for the bottleneck parameter $\beta$, as $k$ was in the Fourier case. We stress that these labels $u$ and $\kappa$ are defined by the correlation structure of $P[\phi, J]$ and have no simple intrinsic physical meaning. Without significantly more effort, all we can say is that a mode labelled $u_1$ carries more information about the disorder than a mode labelled $u_2$ if $u_1 < u_2$.
Many of the difficulties present in this discussion, such as the poorly-behaved character of the collective modes $v_u$ and the disorder correlator $\Sigma_J$, as well as the non-intuitive nature of the mode labels $u$ and the cutoff $\kappa$, stem from a common cause: IB is only suited to the analysis of systems with finitely many degrees of freedom, and field theories have infinitely many. The calculations above were nonetheless performed in this context to demonstrate that IB defines collective modes of a system and establishes a cutoff scheme which, in general, differs from traditional notions of relevance, as represented by the Fourier basis and momentum cutoff. This idea could be crucial to understanding collective behavior in systems without clear notions of locality or organization. Such problems abound in, for example, the brain, where long-distance connections between brain areas are common and important for computation, while information is also spread across many areas and recombined for important, multi-modal tasks. The recurrent, highly interconnected, and still computationally efficient structure of the brain renders the simple notion of physical distance between cells rather limiting.
IV.3 The relevance variable can have many physical interpretations
Gaussian IB begins with a choice of joint distribution $p(x,y)$. As we have discussed, this distribution gives a constrained parameterization of a cutoff scheme which is analogous to the one employed in Wetterich NPRG. In the last section, we showed that not all choices lead to collective modes which have a canonical interpretation, such as Fourier modes. That discussion was carried out under the assumption that the relevance variable pertains to a source field with some disorder statistics. Generally speaking, this is only one way of constructing $p(x,y)$. Even within the constraint that $(X, Y)$ be jointly Gaussian, the physical interpretation of $X$ and $Y$ can vary. Here we briefly discuss some of these alternative interpretations.
First, $Y$ may represent the environment of a set of variables $X$. This scenario is analogous to the one presented by Koch-Janusz et al. [12]. Consider a collection of spins on a lattice, and choose some enclosed region. Let $X$ be the state of the spins in that region and let $Y$ denote the state of those outside. In the case that these spins have Gaussian statistics, this is a valid starting point for GIB. With this setup, we expect that the most relevant collective modes would be relatively slowly varying in position. In fact, Gordon et al. recently formalized this idea for field theories not restricted to Gaussian statistics [13]. They consider a “buffer” zone between $X$ and $Y$ whose size is taken to infinity. In this limit, the first collective variables encoded by IB at strong compression (low $\beta$ in our notation) correspond to the operators with the smallest scaling dimensions, and hence the most relevant operators in the RG sense. Their approach is therefore promising for the analysis of systems with local interactions whose order parameter is not known a priori. More fundamentally, they have shown that $X$ and $Y$ can be chosen to enforce a traditional, “physical” definition of relevance.
Second, consider a stationary stochastic process with Gaussian statistics both in time and across variable index. We could choose $X$ to represent the current state of the system while $Y$ represents the future. Here, the most relevant modes are those projections of $X$ which vary the slowest. In fact, if we suppose that time has been properly discretized, this interpretation of the GIB problem is equivalent to a certain class of slow feature analysis problems [55].
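As a minimal illustration of this past/future construction, consider a scalar AR(1) process; the canonical correlation eigenvalue, and hence the critical bottleneck value, follows directly from the autocorrelation (parameters hypothetical):

import numpy as np

# X = x_t (present), Y = x_{t+tau} (future) for x_{t+1} = a x_t + unit noise.
a, tau = 0.9, 5
Sx  = np.array([[1.0 / (1.0 - a**2)]])       # stationary variance
Sxy = (a ** tau) * Sx                        # <x_t x_{t+tau}>
Sy  = Sx

Sx_y = Sx - Sxy @ np.linalg.inv(Sy) @ Sxy.T
lam  = (np.linalg.inv(Sx) @ Sx_y)[0, 0]      # canonical correlation eigenvalue
print(lam, 1.0 / (1.0 - lam))                # lambda = 1 - a^(2 tau); beta_c

Slower processes ($a$ closer to one) give smaller $\lambda$ and hence smaller $\beta^c$: the slow projections are encoded first, consistent with the slow-feature interpretation above.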
Third, we can imagine another dynamical system in which variables $X$ are driven by a stochastic signal $Y$ such that the joint distribution is Gaussian and stationary. Now, the features of $X$ which are most relevant are no longer simply the slowest-varying components. The cutoff scheme we find will depend on the statistics which generate $Y$, the manner in which $Y$ couples to $X$, the internal dynamics of $X$, and whether we take $Y$ to be in the past, future, or present.
Together with the example from the last section, in which $Y$ fulfilled the role of a disordered source field, these examples span a number of physically interesting scenarios. Certainly, more are possible. Any valid interpretation will generally consist of a set of random variables obeying a Gaussian joint distribution, which is then partitioned into two or three disjoint sets. The first is $X$, the second is $Y$, and the third, which is optional, is a dummy set containing every variable which we don’t care to include in the model. In the case that these sets aren’t disjoint, it is possible for $X$ and $Y$ to become deterministically related, which is an invalid starting point for GIB. Finally, we note that while this framework allows for some discussion of systems involving dynamics, it is poorly suited for application to general stochastic processes, as the joint distribution must be stationary. This also means that the connections drawn here between GIB and NPRG are not meant to cover the more general, dynamical NPRG framework often seen in the nonequilibrium statistical mechanics literature [49, 56, 57]. However, given the importance of both IB and the dynamical NPRG to applications in nonequilibrium settings, we believe that a more general framework is in demand.
V Conclusion
In this manuscript, we have examined structural similarities between the Gaussian information bottleneck problem and a class of RG techniques involving soft cutoffs. Our main result is the identification of the non-deterministic coarsening map as the crucial connection between the two. In NPRG, this map defines both the UV-regulated coarse-grained Hamiltonian of the Wilson-Polchinski picture and the IR-regulated free energy used in the Wetterich approach. Therefore, one can rigorously connect IB to RG by requiring that this coarsening map solve a particular IB problem. In doing so, one parameterizes a space of soft cutoff schemes in terms of the statistics of the IB relevance variable $Y$. Additionally, one can identify the structures in an IB problem which are analogous to UV and IR cutoffs in RG.
While we believe that this connection holds for more general IB problems, we limited our discussion to Gaussian statistics for two main reasons. First, NPRG coarsening maps are in practice always Gaussian, since this choice leads to simpler flow equations with physical interpretations. Second, to be compatible with this first consideration, we studied only the GIB problem, which has exactly known solutions that are Gaussian [29].
Another result was to show that the GIB coarsening map satisfies a semigroup property. In particular, we identified an explicit function which multiplies under composition of coarsening maps, in a manner analogous to the length scale in a traditional RG setting. Although the typical role of semigroup structure in RG theory is the identification of anomalous exponents, assigning a similar task to this function is beyond the scope of this manuscript. More immediately, the presence of this structure within GIB raises the question of whether it is present in IB schemes more generally. If so, would an iterative coarse-graining scheme consisting of repeated low-compression transformations be advantageous as an analysis technique?
By explicitly comparing the set of GIB solutions provided by Chechik et al. with a generic NPRG scheme, we identified the IR cutoff scheme present in GIB (9). A similar analysis can be carried out to identify the UV cutoff, but doing so involves a discussion of reparameterization which we felt would distract from the main points. Direct computations on a toy model showed that the IB regulator shares some characteristics with the ubiquitous Litim regulator [30]. An important generalization is that IB selects the collective mode basis according to which features of the system state are most informative about the relevance variable $Y$, however it is chosen. We gave a simple example in which this collective mode basis could not be interpreted as a Fourier basis. In general this will be the case, though depending on how $Y$ is defined, one may still arrive at collective modes which are essentially Fourier in nature. One analysis we did not carry out is the connection of IB to the dynamical NPRG; for non-equilibrium problems involving IB, such as the predictive coding problem, this may be a fruitful avenue for further work.
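For reference, the comparison here is against the standard form of the Litim regulator (a known result from [30], reproduced for orientation rather than derived here):
\[ R_k(q) = \left(k^2 - q^2\right)\,\theta\!\left(k^2 - q^2\right), \]
which vanishes for modes with $q > k$ and supplies an effective mass to modes with $q < k$.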
Next, we note that IB is generally extremely difficult to solve, so restricting an NPRG scheme to a family of exact IB solutions is unrealistic without significant advances in IB theory. One avenue of attack is to find better ways of solving IB. As outlined in sec. IV.1, a more general parametric Blahut-Arimoto scheme would be very powerful in this context, since it could essentially replace the flow-equation description with a self-consistency scheme at each cutoff value. However, given that the exact Gaussian form we derive is already complicated, this seems unlikely to work. A more realistic approach to a practical IB-RG implementation is to relax the IB-optimality constraint. We suggest that even in a non-Gaussian setting, one could directly calculate the IB regulator (9) proposed here and use the NPRG flow equations in exactly the same way. While the resulting statistics would no longer be exactly IB-optimal, this procedure is no more difficult than any other NPRG implementation, and it may produce qualitatively similar results to an exact IB solution.
We reiterate that not all IB problems will benefit from the RG connections presented here, and vice versa. Ideally, the problem in question involves a system with a large but finite number of degrees of freedom $X$ statistically coupled to a similarly large number of random variables $Y$. Finiteness is required by IB; by construction, this poses no problem for the NPRG, whose flow is defined exactly even in the absence of a traditional rescaling step (a step which would be ill-defined in a finite system, since it adds new modes). Biophysical systems, for example, may be particularly well suited to IB-RG analysis, because $Y$ can be chosen to have biological relevance, and the cutoff scheme will then define and prioritize the collective modes most informative about that function. Biological systems all face size and energy constraints that make efficient compression of inputs from the external world critical for survival. Balancing that, and just as important for function, organisms also have clear preferences for what is relevant in those external signals, namely which aspects can be used to drive behavior that confers a fitness benefit. The IB framework casts behavioral relevance as the prime mover in input compression, while the RG can help show how such a computation is achieved. Uniting these theories can pull together normative notions of relevance with their mechanistic implementation.
Acknowledgements.
We thank Umang Mehta and David J. Schwab for comments. This work was supported by the National Institutes of Health BRAIN initiative program (R01B026943) and by the Aspen Center for Physics, which is supported through National Science Foundation grant PHY-1607611.
Appendix A Detailed derivation of GIB semigroup structure
A map representing the conditional distribution $P(T|X)$ solves the GIB problem if it satisfies:
for some $\beta$. To show that the composition of two GIB maps is IB-optimal, we explicitly compute the above expression for the map arrived at by sequential coarsening. The individual maps are,
This construction gives
So we have that, for the composite map,
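To make the bookkeeping explicit, recall that the solutions of Chechik et al. [29] take a linear-Gaussian form $T = A X + \xi$. Assuming that parameterization (our transcription, offered as a reference sketch rather than the manuscript's own equations), sequential coarsening composes as
\[ T_2 = A_2 T_1 + \xi_2 = A_2 A_1 X + \left(A_2 \xi_1 + \xi_2\right), \qquad \Sigma_{\xi,\mathrm{eff}} = A_2 \Sigma_{\xi_1} A_2^T + \Sigma_{\xi_2}, \]
so the composite map is again a noisy linear projection of $X$, with effective projection $A_2 A_1$.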
In order to ensure that both and are diagonal, we project into natural basis with the replacement . Note that is actually automatically diagonal because the first compressed representation is already in natural basis. After this transformation, the optimality condition (A) is simplified because the matrices have been absorbed into the definition of . The new condition is:
The latter two equations must be re-expressed in terms of the original statistics, represented by and .
Solving for , we have:
Now, directly evaluating and , we get the following for and :
Using these last two expressions, can be expressed directly in terms of and . By direct substitution, we can now check whether the composite scheme satisfies the GIB optimality condition (A):
where the binary operator is given by
By identifying this composition with a single value $\beta'$, we find that the GIB optimality condition (A) is satisfied. It is important to note that this operator maps the space of valid $\beta$ values to itself. That is,
which means that $\beta'$ really can be identified as a bottleneck parameter. Along with associativity, this means that the set of valid bottleneck parameters, equipped with this operator, forms a semigroup representing sequential GIB coarsening.
Appendix B Derivation of the GIB regulator
In section III.2 we present an IR regulator that is both analogous to the one used in the Wetterich NPRG formalism and which enforces optimality in the Gaussian IB problem. Here, we show explicitly how this regulator is derived. To begin, recall that the role of the regulator is to deform the microscopic theory through the addition of a mass-like term which “freezes out” the most relevant modes:
\[ \Delta S_k[\phi] = \frac{1}{2} \int_q \phi(-q)\, R_k(q)\, \phi(q). \]
In a context relevant to GIB, where the bare variable $X$ is finite-dimensional, we would write this as
\[ \Delta H_\beta(X) = \frac{1}{2}\, X^T R_\beta X, \]
with $R_\beta$ a positive semi-definite matrix. Following the argument in section III.1, we can identify the deformation produced by a Gaussian coarsening map:
Now, by imposing IB optimality (A) on , we find that
with
After the substitution ,
This expression differs from the one given in section III.2 because there we assumed that the statistics had already been projected into the natural basis. Here, taking means , and so
Appendix C Blahut-Arimoto update scheme for GIB in terms of NPRG objects
Eqs. (10) depict the Blahut-Arimoto updates for , , and at a schematic level. Written as expectations, these matrices are:
As described in the main text, the BA procedure can be thought of as an iterative scheme wherein an estimate for the coarsening map, or equivalently for the matrices characterizing it, is plugged into a known functional representing the consistency condition required by optimality. Schematically,
where $D_{KL}$ is the Kullback-Leibler divergence, defined for two distributions $p$ and $q$ of the same variable $z$ as
\[ D_{KL}[p \,\|\, q] = \int dz\; p(z) \ln \frac{p(z)}{q(z)}, \]
and the RHS can be seen as a functional of through the expressions:
In this appendix, we derive the equations (10) using the explicit form of the BA map presented above. The goal is to express “updates” for the matrices , , and in terms of their current estimates. In general, quantities describing the updated joint distribution will be primed. To begin, we evaluate using elementary properties of Gaussian variables. Next, we evaluate the divergence , and the partition function . Finally, we combine these elements and read off the updated parameters. Suppose , with and . Then
Therefore consider with and suppose that . Then
Now, consider jointly Gaussian variables . Then
Hence, assuming without loss of generality that , , and ,
and
so finally,
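For the reader's convenience, the elementary Gaussian identities underlying these steps are, in standard form (a reference sketch in our own notation, not the manuscript's original equations):
\[ \left\langle e^{j^T x} \right\rangle = \exp\!\left( j^T \mu + \tfrac{1}{2}\, j^T \Sigma\, j \right) \quad \text{for } x \sim N(\mu, \Sigma), \]
and, for jointly Gaussian $(x, y)$,
\[ \mu_{x|y} = \mu_x + \Sigma_{xy} \Sigma_y^{-1} (y - \mu_y), \qquad \Sigma_{x|y} = \Sigma_x - \Sigma_{xy} \Sigma_y^{-1} \Sigma_{yx}. \]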
These matrices allow us to construct and thereby calculate . For Gaussian distributions, the KL divergence has a standard form. In this context, we care only about the terms which carry and dependence.
where denotes “up to addition of a constant.” The matrix describing the coupling between and has been introduced for convenience. Note also that there is a pure- term in this quantity, denoted , which will cancel with the partition function that normalizes in the BA map. In addition to this trivial -dependence, also contributes a new term, which needs to be included.
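For completeness, the standard closed form referred to above is, for two $k$-dimensional Gaussians,
\[ D_{KL}\!\left[ N(\mu_0, \Sigma_0) \,\middle\|\, N(\mu_1, \Sigma_1) \right] = \frac{1}{2}\left[ \mathrm{tr}\!\left(\Sigma_1^{-1} \Sigma_0\right) + (\mu_1 - \mu_0)^T \Sigma_1^{-1} (\mu_1 - \mu_0) - k + \ln \frac{\det \Sigma_1}{\det \Sigma_0} \right]. \]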
Here we have introduced to further clean up notation. Now it is straightforward to obtain from the BA map by direct evaluation.
Finally, all that remains is to complete the square and put the distribution in the form:
where
After completing the square, the updated matrices , , and can be read off:
These can be written entirely in terms of the old estimates through the substitutions:
Finally, we note that iteration of these equations does not guarantee the convergence of each matrix involved, since invertible linear transformations on the random variables are a symmetry of the objective function. The GIB-optimal solutions, which are described by the fixed points of this update scheme, are connected continuously by these symmetries. If one wishes to use these updates practically and ensure that all matrices converge to fixed values, it is necessary to break this reparameterization invariance by taking extra steps after each update. In the original GIB paper [29], the reparameterization-invariant quantities are instead plotted over iteration of their BA scheme, because their convergence is guaranteed.
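For readers who wish to experiment, the following is a minimal numerical sketch of the scheme (our transcription of the iterative GIB updates of Chechik et al. [29]; the function name is ours, and conventions should be checked against that reference). Consistent with the caveat above, the printed check uses reparameterization-invariant quantities:

```python
# Sketch of the Gaussian Blahut-Arimoto iteration, as we transcribe it from
# Chechik et al. [29]; conventions should be checked against that reference.
import numpy as np

def gib_ba_step(A, Sxi, Sx, Sx_given_y, beta):
    """One BA update for the map T = A X + xi, xi ~ N(0, Sxi)."""
    St = A @ Sx @ A.T + Sxi                    # Sigma_t
    St_y = A @ Sx_given_y @ A.T + Sxi          # Sigma_{t|y}
    Sxi_new = np.linalg.inv(
        beta * np.linalg.inv(St_y) - (beta - 1.0) * np.linalg.inv(St))
    A_new = beta * Sxi_new @ np.linalg.inv(St_y) @ A \
            @ (np.eye(Sx.shape[0]) - Sx_given_y @ np.linalg.inv(Sx))
    return A_new, Sxi_new

# Toy usage: two input modes, one clearly more informative about Y.
rng = np.random.default_rng(1)
Sx = np.eye(2)
Sx_given_y = np.diag([0.2, 0.9])               # canonical eigenvalues lambda_i
A, Sxi = rng.normal(size=(2, 2)), np.eye(2)
for _ in range(200):
    A, Sxi = gib_ba_step(A, Sxi, Sx, Sx_given_y, beta=3.0)

# Invariant check: spectrum of Sigma_{t|y} Sigma_t^{-1}. Active modes should
# approach beta*lambda/(beta-1) = 0.3; frozen modes should approach 1.
St = A @ Sx @ A.T + Sxi
St_y = A @ Sx_given_y @ A.T + Sxi
print(np.sort(np.linalg.eigvals(St_y @ np.linalg.inv(St)).real))
```

Because of the reparameterization symmetry noted above, `A` and `Sxi` individually may wander, but the printed spectrum should settle.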
Appendix D Selected computations for toy model
D.1 Canonical correlation Green’s function
A central object in GIB is the canonical correlation matrix $\Sigma_{X|Y}\Sigma_X^{-1}$. From this object, one obtains the eigenvector matrix, which describes the linear transformation of $X$ into its collective modes, and the eigenvalues, which order these modes in terms of their information content about $Y$. In the toy model, we begin with physical definitions for the statistics in Eqs. (12) and (IV.2). Then, by interpreting the field as the input variable $X$ and the disorder as the relevance variable $Y$, we ask what the structure of the resulting GIB-regularized NPRG scheme looks like. As in any other GIB problem, we must first calculate the canonical correlation Green's function. Two Green's functions come directly from the definitions:
To find , we need , which we get through :
We compute this mean from the Hamiltonian for the field with frozen disorder:
Hence we can identify :
Now, we use the Schur complement formula to identify the remaining ingredient:
So finally, the canonical correlation Green’s function is
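The Schur-complement step can be checked numerically in the finite-dimensional analogue. The sketch below is generic rather than the toy model itself; it verifies the standard identity that the conditional covariance equals the inverse of the corresponding block of the joint precision matrix, which is exactly the frozen-disorder propagator logic used above:

```python
# Numerical check of the Schur-complement step, using standard identities:
# for joint precision J, the conditional covariance of x given y equals both
# Sxx - Sxy Syy^{-1} Syx and the inverse of the x-block of J.
import numpy as np

rng = np.random.default_rng(2)
B = rng.normal(size=(6, 6))
J = B @ B.T + np.eye(6)                 # joint precision matrix (PD)
Sigma = np.linalg.inv(J)                # joint covariance

x, y = np.arange(3), np.arange(3, 6)
Sxx = Sigma[np.ix_(x, x)]
Syy = Sigma[np.ix_(y, y)]
Sxy = Sigma[np.ix_(x, y)]

schur = Sxx - Sxy @ np.linalg.solve(Syy, Sxy.T)
frozen = np.linalg.inv(J[np.ix_(x, x)])     # propagator with y frozen
print(np.allclose(schur, frozen))           # True
```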
D.2 Canonical correlation eigendecomposition calculations
D.2.1 Fourier collective basis
Once the canonical correlation Green's function is known, one calculates its eigenfunctions (or eigenvectors, in the usual, finite-dimensional case) and eigenvalues. In the main text, we consider three constructions of the relevance variable which altogether yield two eigenbases: one Fourier and one non-Fourier. We first calculate the spectrum for case 2 in section IV.2, which also covers the analysis of case 1.
where represents a “diagonal” function, . The frozen disorder propagator is also diagonal in the Fourier basis:
We use the same symbol to represent both the function and the corresponding diagonal kernel, as needed. Using the expression for the canonical correlation Green's function derived in the last section, we have:
Since the Fourier transform is unitary and both kernels are diagonal, we have:
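This statement has a simple finite-dimensional check (an illustrative sketch with randomly generated circulant statistics standing in for the translation-invariant kernels):

```python
# Sketch of the statement above: for translation-invariant (circulant)
# statistics, the canonical correlation kernel is diagonalized by the DFT.
import numpy as np

rng = np.random.default_rng(3)
N = 8
c = rng.normal(size=N)
row = np.fft.ifft(np.abs(np.fft.fft(c)) ** 2 + 1.0).real   # PD circulant row
Sx = np.stack([np.roll(row, k) for k in range(N)])         # circulant Sigma_x

row2 = np.fft.ifft(np.abs(np.fft.fft(c)) ** 2 * 0.3 + 0.5).real
Sx_given_y = np.stack([np.roll(row2, k) for k in range(N)])  # also circulant

M = Sx_given_y @ np.linalg.inv(Sx)       # canonical correlation matrix
F = np.fft.fft(np.eye(N)) / np.sqrt(N)   # unitary DFT matrix
D = F @ M @ F.conj().T                   # should be diagonal
print(np.allclose(D, np.diag(np.diag(D))))   # True
```

The same check fails for the fractional-Fourier construction of the next subsection, since the correlator there is not circulant.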
D.2.2 Non-Fourier collective basis
Next, we carry out the same computation for case 3, in which the disorder correlator is not diagonal in the Fourier basis, and so neither is the canonical correlation Green's function. Written formally, the disorder correlator is
where $\mathcal{F}^{(d)}_\alpha$ is the $d$-dimensional fractional Fourier transform. The 1-dimensional version is defined as
\[ \mathcal{F}_\alpha[f](u) = \sqrt{\frac{1 - i \cot\alpha}{2\pi}} \int_{-\infty}^{\infty} \exp\!\left[ \frac{i \cot\alpha}{2}\left(u^2 + x^2\right) - \frac{i\, u x}{\sin\alpha} \right] f(x)\, dx. \tag{20} \]
This transform is unitary, satisfies $\mathcal{F}_\alpha \mathcal{F}_{\alpha'} = \mathcal{F}_{\alpha + \alpha'}$, and at $\alpha = \pi/2$ gives the usual one-dimensional Fourier transform. To construct the $d$-dimensional version, we simply take tensor products: $\mathcal{F}^{(d)}_\alpha = \mathcal{F}_\alpha \otimes \cdots \otimes \mathcal{F}_\alpha$, with $d$ copies. Hence, $\mathcal{F}^{(d)}_\alpha$ has properties analogous to those of $\mathcal{F}_\alpha$, namely
As in the Fourier case, we calculate the canonical correlation Green's function in terms of $\mathcal{F}^{(d)}_\alpha$ and a diagonal kernel, then write it in explicitly diagonalized form.
Hence we arrive at the eigendecomposition:
In the main text, we refrain from writing $\mathcal{F}^{(d)}_\alpha$ as a kernel, because it is discontinuous and divergent. This is more evident when it is expressed in integral form:
where, e.g., .
References
- Tishby et al. [2000] N. Tishby, F. C. Pereira, and W. Bialek, arXiv:physics/0004057 (2000).
- Creutzig et al. [2009] F. Creutzig, A. Globerson, and N. Tishby, Physical Review E 79, 041925 (2009).
- Chalk et al. [2018] M. Chalk, O. Marre, and G. Tkačik, Proceedings of the National Academy of Sciences 115, 186 (2018).
- Palmer et al. [2015] S. E. Palmer, O. Marre, M. J. Berry, and W. Bialek, Proceedings of the National Academy of Sciences 112, 6908 (2015).
- Meshulam et al. [2019] L. Meshulam, J. L. Gauthier, C. D. Brody, D. W. Tank, and W. Bialek, Physical Review Letters 123, 178103 (2019).
- Mehta and Schwab [2014] P. Mehta and D. J. Schwab, arXiv:1410.3831 (2014).
- Tishby and Zaslavsky [2015] N. Tishby and N. Zaslavsky, in 2015 IEEE Information Theory Workshop (ITW) (IEEE, Jerusalem, Israel, 2015) pp. 1–5.
- Shwartz-Ziv and Tishby [2017] R. Shwartz-Ziv and N. Tishby, arXiv:1703.00810 (2017).
- Alemi et al. [2019] A. A. Alemi, I. Fischer, J. V. Dillon, and K. Murphy, arXiv:1612.00410 (2019).
- Kolchinsky et al. [2019] A. Kolchinsky, B. D. Tracey, and D. H. Wolpert, Entropy 21, 1181 (2019).
- Saxe et al. [2019] A. M. Saxe, Y. Bansal, J. Dapello, M. Advani, A. Kolchinsky, B. D. Tracey, and D. D. Cox, Journal of Statistical Mechanics: Theory and Experiment 2019, 124020 (2019).
- Koch-Janusz and Ringel [2018] M. Koch-Janusz and Z. Ringel, Nature Physics 14, 578 (2018).
- Gordon et al. [2021] A. Gordon, A. Banerjee, M. Koch-Janusz, and Z. Ringel, Physical Review Letters 126, 240601 (2021).
- Jona-Lasinio [2001] G. Jona-Lasinio, Physics Reports 352, 439 (2001).
- Jona-Lasinio [1975] G. Jona-Lasinio, Il Nuovo Cimento B Series 11 26, 99 (1975).
- Apenko [2012] S. Apenko, Physica A: Statistical Mechanics and its Applications 391, 62 (2012).
- Machta et al. [2013] B. B. Machta, R. Chachra, M. K. Transtrum, and J. P. Sethna, Science 342, 604 (2013).
- Bény and Osborne [2015] C. Bény and T. J. Osborne, New Journal of Physics 17, 083005 (2015).
- Raju et al. [2018] A. Raju, B. B. Machta, and J. P. Sethna, Physical Review E 98, 052112 (2018).
- Lenggenhager et al. [2020] P. M. Lenggenhager, D. E. Gökmen, Z. Ringel, S. D. Huber, and M. Koch-Janusz, Physical Review X 10, 011037 (2020).
- Bradde and Bialek [2017] S. Bradde and W. Bialek, Journal of Statistical Physics 167, 462 (2017).
- Slonim et al. [2006] N. Slonim, N. Friedman, and N. Tishby, Neural Computation 18, 1739 (2006).
- Slonim and Tishby [1999] N. Slonim and N. Tishby, in NIPS, Vol. 4 (1999).
- Strouse and Schwab [2017] D. Strouse and D. J. Schwab, Neural Computation 29, 1611 (2017).
- Salisbury and Palmer [2016] J. M. Salisbury and S. E. Palmer, Journal of Statistical Physics 162, 1309 (2016).
- Wang et al. [2019] Y. Wang, J. M. L. Ribeiro, and P. Tiwary, Nature Communications 10, 3573 (2019).
- Berman et al. [2016] G. J. Berman, W. Bialek, and J. W. Shaevitz, Proceedings of the National Academy of Sciences 113, 11943 (2016).
- Note [1] Alternative IB frameworks have been proposed which result in deterministic mappings [23, 24], and these could conceivably be connected to hard-cutoff RG schemes, though some issues occur for continuous random variables. We restrict our present discussion to the original framework.
- Chechik et al. [2005] G. Chechik, A. Globerson, N. Tishby, Y. Weiss, and P. Dayan, Journal of Machine Learning Research 6 (2005).
- Litim [2001] D. F. Litim, Physical Review D 64, 105007 (2001).
- Note [2] To clarify notation: by , we mean that each instance of should be replaced by .
- Goldenfeld [2018] N. Goldenfeld, Lectures on Phase Transitions and the Renormalization Group, 1st ed. (CRC Press, 2018).
- Wilson [1971a] K. G. Wilson, Physical Review B 4, 3184 (1971a).
- Wilson [1971b] K. G. Wilson, Physical Review B 4, 3174 (1971b).
- Wilson [1974] K. Wilson, Physics Reports 12, 75 (1974).
- Wilson and Fisher [1972] K. G. Wilson and M. E. Fisher, Physical Review Letters 28, 240 (1972).
- Fisher [1998] M. E. Fisher, Rev. Mod. Phys. 70, 29 (1998).
- Kadanoff [1966] L. P. Kadanoff, Physics Physique Fizika 2, 263 (1966).
- Canet et al. [2004] L. Canet, H. Chaté, and B. Delamotte, Physical Review Letters 92, 255703 (2004).
- Canet et al. [2005] L. Canet, H. Chaté, B. Delamotte, I. Dornic, and M. A. Muñoz, Physical Review Letters 95, 100601 (2005).
- Wegner and Houghton [1973] F. J. Wegner and A. Houghton, Physical Review A 8, 401 (1973).
- Polchinski [1984] J. Polchinski, Nuclear Physics B 231, 269 (1984).
- Note [3] This interpretation is explicitly presented in the Wilson-Kogut paper. Given that their approach is formally equivalent with the one put forth by Polchinski, the interpretation should apply to that framework as well.
- Note [4] Under the sharp cutoff construction, some issues include the generation of non-local position-space interactions [35], unphysical nonanalyticities in correlation functions, and the need to evaluate ambiguous expressions such as where the function is not known a priori [49].
- Wetterich [1993a] C. Wetterich, Physics Letters B 301, 90 (1993a).
- Bagnuls and Bervillier [2001] C. Bagnuls and C. Bervillier, Physics Reports 348, 91 (2001).
- Berges et al. [2002] J. Berges, N. Tetradis, and C. Wetterich, Physics Reports 363, 223 (2002).
- Delamotte [2012] B. Delamotte, Lecture Notes in Physics 852, 49 (2012), arXiv:cond-mat/0702365.
- Kopietz et al. [2010] P. Kopietz, L. Bartosch, and F. Schütz, Introduction to the Functional Renormalization Group, Lecture Notes in Physics, Vol. 798 (Springer Berlin Heidelberg, Berlin, Heidelberg, 2010).
- Wetterich [1993b] C. Wetterich, Zeitschrift für Physik C Particles and Fields 57, 451 (1993b).
- Note [5] Two edge cases may appear important; and . Because , these correspond to limits where a mode is deterministically related to or completely independent of , respectively. Complete deterministic dependence between and should not be considered without modification, and in the case of independence, those modes may be removed as a formality.
- Note [6] The Litim regulator was introduced in the context of NPRG analysis of the model. Its favorable or “optimal” characteristics are manifested through improved convergence properties of so-called “threshold functions,” which constitute a frequently encountered class of momentum integrals involving the cutoff.
- Litim [2000] D. F. Litim, Physics Letters B 486, 92 (2000).
- Note [7] We choose the formalism in which includes a factor of so that . Further, the appearance of factors as opposed to the more traditional is a consequence of our decision to conjugate with respect to , instead of . This appeals more to the matrix multiplication shorthand.
- Creutzig and Sprekeler [2008] F. Creutzig and H. Sprekeler, Neural Computation 20, 1026 (2008).
- Canet et al. [2011] L. Canet, H. Chaté, and B. Delamotte, Journal of Physics A: Mathematical and Theoretical 44, 495001 (2011).
- Haga [2019] T. Haga, Renormalization Group Analysis of Nonequilibrium Phase Transitions in Driven Disordered Systems, Springer Theses (Springer Singapore, Singapore, 2019).