Structural Entropy of the Stochastic Block Models
Abstract
With the rapid expansion of graphs and networks and the growing magnitude of data from all areas of science, effective processing and compression schemes for context-dependent data are highly desirable. A particularly interesting direction is to compress the data while keeping only the “structural information” and ignoring the concrete labelings. In this direction, Choi and Szpankowski introduced structures (unlabeled graphs), which allowed them to compute the structural entropy of the Erdős–Rényi random graph model. Moreover, they also provided an asymptotically optimal compression algorithm that (asymptotically) achieves this entropy limit and runs in expected linear time.
In this paper, we consider the Stochastic Block Model with an arbitrary number of parts. Specifically, we define a partitioned structural entropy for Stochastic Block Models, which generalizes the structural entropy for unlabeled graphs and encodes the partition information as well. We then compute the partitioned structural entropy of the Stochastic Block Model and provide a compression scheme that asymptotically achieves this entropy limit.
1 Introduction
Shannon’s metric of “Entropy” of information is a foundational concept of information theory [39, 9]. Given a discrete random variable $X$ with support set $\mathcal{X}$ (that is, the set of possible outcomes), where outcome $x \in \mathcal{X}$ occurs with probability $P(x)$, the entropy of $X$ is defined as
$$H(X) = -\sum_{x \in \mathcal{X}} P(x) \log P(x),$$
where the logarithm here and throughout this paper is taken to base 2. Note that the entropy of $X$ is a function of the probability distribution of $X$.
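As a small illustration, the entropy of a finite distribution can be computed directly from its probability vector; the following is a minimal Python sketch (the function name is ours, chosen for illustration):

```python
import math

def entropy(probs):
    """Shannon entropy (base 2) of a discrete distribution given as a list
    of probabilities summing to 1; zero-probability outcomes contribute 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))  # a fair coin: 1.0 bit
print(entropy([0.9, 0.1]))  # a biased coin: about 0.47 bits
```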
The entropy was originally created by Shannon in [33] as part of his theory of communication, where a data communication system consists of a data source, a channel, and a receiver. The fundamental problem of communication is for the receiver to reliably recover what data was generated by the source, based on the bits it receives through the channel. Shannon proved that the entropy of the source plays a central role: his source coding theorem shows that the entropy is the mathematical limit on how well the data can be losslessly compressed.
The question then arises: how to compress data that has structure, e.g., data in social networks? In his less well-known 1953 paper [34], Shannon argued for an extension of information theory, where data is considered as observations of a source, to “non-conventional data” (that is, lattices). Indeed, nowadays data appears in various formats and structures (e.g., sequences, expressions, interactions) and in drastically increasing amounts. In many scenarios, data is highly context-dependent, and in particular the structural information and the context information are two conceptually different aspects. It is therefore desirable to develop novel theory and efficient algorithms for extracting useful information from non-conventional data structures. Roughly speaking, such data consists of structural information, which might be understood as the “shape” of the data, and context information, which should be recognized as data labels.
It is well-known that complex networks (e.g., social networks) admit community structures [26]. That is, users within a group interact with each other more frequently than those outside the group. The Stochastic Block Model (SBM) [13] is a celebrated random graph model that has been widely used to study the community structures in graphs and networks. It provides a good benchmark to evaluate the performance of community detection algorithms and inspires the design of many algorithms for community detection tasks. The theoretical underpinnings of the SBM have been extensively studied and sharp thresholds for exact recovery have been successively established [2, 19, 3, 11]. We refer readers to [1] for a recent survey, where other interesting and important problems in SBM are also discussed.
In addition to the SBM model discussed in [1], there are other angles from which to study compression of data with graph structures. Asadi et al. [6] investigated data compression on graphs with clusters. Zenil et al. [40] surveyed information-theoretic methods, in particular Shannon entropy and algorithmic complexity, for characterizing graphs and networks.
1.1 Compression of graphs
In recent years, graphical data and the network structures supporting them have become increasingly common and important in many branches of engineering and the sciences. To better represent and transmit graphical data, many works consider the problem of compressing a (random) graph up to isomorphism, i.e., compressing the structure of a graph. A graph contains a finite set of vertices and a set of edges, each of which connects two vertices. A graph on $n$ vertices can be represented by a binary matrix (the adjacency matrix), which in turn can be viewed as a binary sequence. Thus, encoding a labeled graph (that is, one in which all vertices need to be distinguished) is equivalent to encoding a $\binom{n}{2}$-digit binary sequence, given a probability distribution on all possible edges. However, such a string does not reflect the internal symmetries conveyed by the graph automorphisms, and sometimes we are only interested in the local or global structures in the graph, rather than the exact vertex labelings. The structural entropy is defined when the graphs are considered unlabeled, or simply called structures, where the vertices are viewed as indistinguishable. The goal of this natural definition is to capture the information of the structure, and it thus provides a fundamental measure for graph/structure compression schemes.
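For concreteness, the following minimal sketch (with illustrative names) reads the upper triangle of an adjacency matrix row by row to obtain the $\binom{n}{2}$-bit sequence encoding a labeled graph:

```python
def graph_to_bits(adj):
    """Flatten the upper triangle of a symmetric 0/1 adjacency matrix into
    the binom(n,2)-bit sequence that encodes the labeled graph."""
    n = len(adj)
    return [adj[i][j] for i in range(n) for j in range(i + 1, n)]

# A triangle on vertices {0, 1, 2} plus an isolated vertex 3.
adj = [[0, 1, 1, 0],
       [1, 0, 1, 0],
       [1, 1, 0, 0],
       [0, 0, 0, 0]]
print(graph_to_bits(adj))  # [1, 1, 0, 1, 0, 0], i.e. binom(4,2) = 6 bits
```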
The problem actually has a strong theoretical background. Back in 1984, Turán [37] raised the question of finding an efficient coding method for general unlabeled graphs on $n$ vertices, where a lower bound of $\binom{n}{2} - n\log n + O(n)$ bits is suggested. This lower bound can be seen from the number of unlabeled graphs [12]. The question was later answered by Naor [25] in 1990, who proposed such a representation that is optimal up to the first two leading terms when all unlabeled graphs are equally likely. In a more recent paper, Kieffer et al. [14] studied the structural complexity of random binary trees. There have also been heuristic methods for real-world graph compression, see [4, 7, 28, 32, 35]. Rather recently, Choi and Szpankowski [8] studied the structural entropy of the Erdős–Rényi random graph $G(n, p)$. They computed the structural entropy given that $p$ is not (very) close to 0 or 1 and also gave a compression scheme that matches their computation. Later, the structural entropy of other randomly generated graphs, e.g., preferential attachment graphs and web graphs, was also studied [18, 17, 31, 15].
However, it is well-known that the Erdős–Rényi model is too simplistic to model real networks, in particular due to its strong homogeneity and absence of community structure. In this paper, we consider the compression of graphical structures of the SBM, which in general models real networks better and circumvents these issues of the ER model. In summary, our contributions are as follows:
• We introduce the partitioned structural entropy, which generalizes the structural entropy for unlabeled graphs, and we show that it reflects the partition information of the SBM.
• We provide an explicit formula for the partitioned structural entropy of the SBM.
• We also propose a compression scheme that asymptotically achieves this entropy limit.
Semantic communication is considered a key component of future-generation networks, where a natural problem is how to efficiently extract and transmit the “semantic information”. In the case of graph data, one may view the (partitioned) structures as the information that needs to be extracted, while the concrete labeling information is considered redundant. From this point of view, our result is a step toward the study of semantic compression/communication in appropriate contexts.
1.2 Related works
Finally, we would like to point out that there are some other information metrics defined on graphs. The term “graph entropy” has been defined and used in several ways historically. For example, the graph entropy introduced by Körner in [16] denotes the number of bits one has to convey to resolve the ambiguity of a vertex in a graph. This notion has also turned out to be useful in other areas, including combinatorics. The chromatic entropy introduced in [5] is the lowest entropy of any coloring of a graph; it finds application in zero-error source coding. We remark that the structural entropy we consider is quite different from the Körner graph entropy and the chromatic entropy.
On the other hand, a concept of graph entropy (also called topological information content of a graph) was introduced by Rashevsky [29] and Trucco [36], and later by Mowshowitz [23, 20, 21, 22, 24, 10], which is defined as a function of (the structure of) a graph and an equivalence relation defined on its vertices or edges. Such a concept is a measure of the graph itself and does not involve any probability distribution.
2 Preliminaries
2.1 Structural entropy of unlabeled graphs
Now let us formally define the structural entropy given a probability distribution on unlabeled graphs.
Given an integer $n$, define $\mathcal{G}_n$ as the collection of all $n$-vertex labeled graphs.
Definition 2.1 (Entropy of Random Graph).
Given an integer $n$ and a probability distribution $P$ on $\mathcal{G}_n$, the entropy of a random graph $G$ is defined as
$$H(G) = -\sum_{G \in \mathcal{G}_n} P(G) \log P(G),$$
where $P(G)$ is the probability of the graph $G$ in $\mathcal{G}_n$.
Then the random structure model $S$, associated with the probability distribution $P$ on $\mathcal{G}_n$, is defined as the unlabeled version of $G$. For a given structure $S$, the probability of $S$ can be computed as
$$P(S) = \sum_{G \simeq S} P(G).$$
Here $G \simeq S$ means that $G$ and $S$ have the same structure, that is, $G$ is isomorphic to $S$. Clearly, if all isomorphic labeled graphs have the same probability, then for any labeled graph $G \simeq S$ one has
$$P(S) = N(S) \cdot P(G),$$
where $N(S)$ stands for the number of different labeled graphs that have the same structure as $S$.
Definition 2.2 (Structural Entropy).
The structural entropy $H_S$ of a random graph $G$ is defined as the entropy of the random structure $S$ associated with $G$, that is,
$$H_S = -\sum_{S} P(S) \log P(S),$$
where the sum is over all distinct structures.
The Erdős–Rényi random graph $G(n, p)$, also called the binomial random graph, is a fundamental random graph model: it has $n$ vertices and each pair of vertices is connected by an edge with probability $p$, independently of all other pairs. In 2012, Choi and Szpankowski [8] proved the following for Erdős–Rényi random graphs.
Theorem 2.3 (Choi and Szpankowski, [8]).
For large $n$ and all $p$ satisfying $p \gg \frac{\ln n}{n}$ and $1 - p \gg \frac{\ln n}{n}$, the following holds:
1. The structural entropy of $G(n, p)$ is
$$H_S = \binom{n}{2} h(p) - \log n! + O\!\left(\frac{\log n}{n^{\alpha}}\right)$$
for some $\alpha > 0$.
2. For a structure $S$ of $n$ vertices and $\varepsilon > 0$, with probability at least $1 - \varepsilon$ for large $n$,
$$\left| -\log P(S) - \binom{n}{2} h(p) + \log n! \right| \le \varepsilon \binom{n}{2},$$
where $h(p) = -p\log p - (1-p)\log(1-p)$ is the entropy rate of a binary memoryless source.
Furthermore, they [8] also presented a compression algorithm for unlabeled graphs that asymptotically achieves the structural entropy up to an error term.
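To get a feel for the two leading terms in Theorem 2.3, the following sketch evaluates $\binom{n}{2}h(p)$, the entropy of the labeled graph, and the $\log n!$ bits saved by keeping only the structure (the function names are illustrative):

```python
import math

def h(p):
    """Binary entropy function (bits)."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def er_structural_terms(n, p):
    """Leading terms of Theorem 2.3: the labeled-graph entropy binom(n,2)*h(p)
    and the log2(n!) bits saved by keeping only the structure."""
    labeled = math.comb(n, 2) * h(p)
    saving = math.log2(math.factorial(n))
    return labeled, saving, labeled - saving

labeled, saving, structural = er_structural_terms(n=1000, p=0.1)
print(f"labeled: {labeled:.0f} bits, label saving: {saving:.0f} bits, "
      f"structural (leading terms): {structural:.0f} bits")
```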
2.2 Stochastic Block Model – Our result
It is well-known that the ER model is too simplistic to model real networks, in particular due to its strong homogeneity and absence of community structure. The Stochastic Block Model is then introduced on the assumption that vertices in a network connect independently, but with probabilities based on their profiles, or equivalently, on their community assignments. For example, in the SBM with two communities and symmetric parameters, also known as the planted bisection model and denoted by $G(n, p, q)$, the vertex set $V$ is partitioned into two sets $V_1$ and $V_2$; any pair of vertices inside $V_1$ or inside $V_2$ is connected with probability $p$, any pair of vertices across the two clusters is connected with probability $q$, and all these connections are independent.
As an illuminating example, consider a context where there are $n/2$ users and $n/2$ devices; each pair of users and each pair of devices is connected with probability $p$, a user and a device are connected with probability $q$, and each of these connections is independent of all other connections. Suppose that we need to compress the information of the resulting graph $G$. In this context it is not appropriate to view $G$ simply as an unlabeled graph: in addition to the structural information, it is also important to keep the “community” information, so the compression also needs to encode who is a user and who is a device.
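This example can be simulated directly; the sketch below is a toy sampler of the planted bisection model (the parameter values are arbitrary choices for illustration):

```python
import random

def sample_planted_bisection(n, p, q, seed=0):
    """Sample one graph on vertices 0..n-1: the first n/2 vertices are 'users',
    the rest are 'devices'; same-side pairs appear with probability p,
    cross pairs with probability q, all independently."""
    rng = random.Random(seed)
    half = n // 2
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            same_side = (i < half) == (j < half)
            if rng.random() < (p if same_side else q):
                edges.append((i, j))
    return edges

edges = sample_planted_bisection(n=10, p=0.8, q=0.1)
print(len(edges), "edges, first few:", edges[:5])
```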
Definition 2.4 (Partition-respecting isomorphism, Partitioned Unlabeled Graphs).
Let $k \le n$ be integers. Suppose $V$ is a set of $n$ vertices and $\mathcal{P} = \{V_1, \dots, V_k\}$ is a partition of $V$ into $k$ parts. The partition-respecting isomorphism, denoted by “$\simeq_{\mathcal{P}}$”, is defined as follows. For any two labeled graphs $G$ and $G'$ on $V$, we write $G \simeq_{\mathcal{P}} G'$ if and only if they are isomorphic via an isomorphism $\phi$ such that $\phi(V_i) = V_i$ for $i = 1, \dots, k$. Then the set of partitioned unlabeled graphs (with respect to $\mathcal{P}$) is defined as the collection of $n$-vertex graphs on $V$ where we ignore the labels of the vertices inside each $V_i$, $i = 1, \dots, k$; namely, it is the set of equivalence classes under the partition-respecting isomorphism with respect to $\mathcal{P}$.
Note that every labeled graph $G$ corresponds to a unique partitioned structure $S$, and we also write $G \simeq_{\mathcal{P}} S$ to denote this relation. Furthermore, under the above definition, general unlabeled graphs correspond to the case $k = 1$.
Definition 2.5 (Partitioned Structural Entropy).
Let $V$ be a set of $n$ vertices, where $n \ge k$. Suppose $\mathcal{P}$ is a partition of $V$ into $k$ parts and $P$ is a probability distribution over all partitioned unlabeled graphs on $V$. Then the partitioned structural entropy associated to $(\mathcal{P}, P)$ is defined by
$$H_{\mathcal{P}} = -\sum_{S} P(S) \log P(S),$$
where the sum is over all partitioned unlabeled graphs $S$ on $V$.
In this paper we extend Theorem 2.3 to the structural entropy of the Stochastic Block Model with any given number of blocks, and provide a compression algorithm that asymptotically matches this structural entropy. We first give the result for the balanced bipartition case $k = 2$, $|V_1| = |V_2| = n/2$.
Theorem 2.6.
Let $V$ be a set of $n$ vertices and $\mathcal{P} = \{V_1, V_2\}$ a partition with $|V_1| = |V_2| = n/2$. Suppose $P$ is the probability distribution of graphs on $V$ where every edge inside $V_1$ or $V_2$ is present with probability $p$, every edge between $V_1$ and $V_2$ is present with probability $q$, and these edges are mutually independent. For large even $n$ and all $p, q$ satisfying $p, q \gg \frac{\ln n}{n}$ and $1 - p, 1 - q \gg \frac{\ln n}{n}$, the following holds:
(i) The partitioned structural entropy of $G(n, p, q)$ is
$$H_{\mathcal{P}} = 2\binom{n/2}{2} h(p) + \frac{n^2}{4}\, h(q) - 2\log\!\left(\frac{n}{2}\right)! + O\!\left(\frac{\log n}{n^{\alpha}}\right) \tag{1}$$
for some $\alpha > 0$.
(ii) For a balanced bipartitioned structure $S$ on $n$ vertices and $\varepsilon > 0$, with probability at least $1 - \varepsilon$ for large $n$,
$$\left| -\log P(S) - 2\binom{n/2}{2} h(p) - \frac{n^2}{4}\, h(q) + 2\log\!\left(\frac{n}{2}\right)! \right| \le \varepsilon \binom{n}{2},$$
where $h(x) = -x\log x - (1-x)\log(1-x)$ is the entropy rate of a binary memoryless source.
Note that the structural entropy here is larger than that in Theorem 2.3 (even when $p = q$), which reflects the fact that the SBM with “a planted (bi-)partition” contains prefixed structures and so has fewer symmetries than $G(n, p)$, the purely random model.¹
¹For $G(n, p)$, when it is asymmetric, compared with the completely labeled graph Theorem 2.3 saves a term of $\log n!$; this saving becomes $2\log(n/2)!$ in the planted balanced bipartition case of Theorem 2.6.
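The comparison in the footnote can be checked numerically; the following sketch contrasts the $\log n!$ saving available for a fully unlabeled structure with the smaller $2\log(n/2)!$ saving available once the bipartition must be preserved:

```python
import math

n = 1000
save_unlabeled = math.log2(math.factorial(n))               # log2 n!
save_bipartitioned = 2 * math.log2(math.factorial(n // 2))  # 2 log2 (n/2)!
print(f"log2 n!       = {save_unlabeled:.0f} bits")
print(f"2 log2 (n/2)! = {save_bipartitioned:.0f} bits")
print(f"difference    = {save_unlabeled - save_bipartitioned:.0f} bits "
      "(roughly n bits, the price of remembering the bipartition)")
```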
3 Proof of Theorem 2.6
One key ingredient in the proof of Theorem 2.3 in [8] is the following lemma on the symmetry of $G(n, p)$. A graph is called asymmetric if its automorphism group does not contain any permutation other than the identity; otherwise it is called symmetric.
Lemma 3.1 (Kim, Sudakov and Vu, 2002).
For all $p$ satisfying $p \gg \frac{\ln n}{n}$ and $1 - p \gg \frac{\ln n}{n}$, a random graph $G(n, p)$ is symmetric with probability $O(n^{-w})$ for any positive constant $w$.
Proof of Theorem 2.6.
Note that every pair of vertices in $V_1$ or in $V_2$ should be considered indistinguishable, but not the pairs of vertices with one endpoint in $V_1$ and the other in $V_2$. Recall that we write $G \simeq_{\mathcal{P}} S$ for a graph $G$ and a structure $S$ if $S$ represents the structure of $G$ (with respect to the partition $\mathcal{P} = \{V_1, V_2\}$).
Let $G$ be a random graph drawn from $P$. We first compute $H(G)$. Note that there are $\binom{n}{2}$ possible edges in $G$, and we can view $G$ as a binary sequence of length $\binom{n}{2}$, where each digit is a Bernoulli random variable. Moreover, for edges inside $V_1$ or $V_2$ the corresponding random variable has expectation $p$, and for edges between $V_1$ and $V_2$ it has expectation $q$. Thus we have
$$H(G) = 2\binom{n/2}{2} h(p) + \frac{n^2}{4}\, h(q).$$
Now write $P_S$ for the probability distribution over all partitioned unlabeled graphs on $V$ inherited from $P$, namely, for a partitioned structure $S$, $P_S(S) = \sum_{G \simeq_{\mathcal{P}} S} P(G)$. Let $H_{\mathcal{P}}$ be the partitioned structural entropy associated with $P_S$. Therefore, compared with our goal, it remains to show that
$$H_{\mathcal{P}} = H(G) - 2\log\!\left(\frac{n}{2}\right)! + O\!\left(\frac{\log n}{n^{\alpha}}\right). \tag{2}$$
Note that all labeled graphs $G$ with $G \simeq_{\mathcal{P}} S$ have the same probability $P(G)$. Thus, given a (labeled) graph $G$, we have $P_S(S) = N_{\mathcal{P}}(S)\, P(G)$, where $S$ is the structure with $G \simeq_{\mathcal{P}} S$ and $N_{\mathcal{P}}(S)$ denotes the number of labeled graphs that have the partitioned structure $S$. So the graph entropy of $G$ can be written as
$$H(G) = -\sum_{S} \sum_{G \simeq_{\mathcal{P}} S} P(G) \log P(G) = H_{\mathcal{P}} + \sum_{S} P_S(S) \log N_{\mathcal{P}}(S). \tag{3}$$
Define $G_i$ to be $G$ restricted to $V_i$ for $i \in \{1, 2\}$, i.e., $G_1 = G[V_1]$ and $G_2 = G[V_2]$; note that each $G_i$ is distributed as $G(n/2, p)$. Write $\mathrm{Aut}_{\mathcal{P}}(G)$ for the partition-respecting automorphism group of $G$, and we naturally have
$$N_{\mathcal{P}}(S) = \frac{\big((n/2)!\big)^2}{|\mathrm{Aut}_{\mathcal{P}}(G)|}.$$
Combining this with (2) and (3), it remains to show that
$$\sum_{S} P_S(S) \log |\mathrm{Aut}_{\mathcal{P}}(G)| = O\!\left(\frac{\log n}{n^{\alpha}}\right),$$
where for each $S$ the graph $G$ is any labeled graph with $G \simeq_{\mathcal{P}} S$.
In the summation above we only need to focus on those $S$ such that either $G_1$ or $G_2$ is symmetric, as otherwise $|\mathrm{Aut}_{\mathcal{P}}(G)| = 1$. By Lemma 3.1, we conclude that the probability that $G$ restricted to $V_1$ or to $V_2$ is symmetric is $O(n^{-w})$ for some $w > 0$, and for such $S$ we use the trivial bound $|\mathrm{Aut}_{\mathcal{P}}(G)| \le \big((n/2)!\big)^2$. This gives us the desired estimate in (i).
To show (ii), for a set $V$ of $n$ vertices and a balanced bipartition $\mathcal{P} = \{V_1, V_2\}$ of $V$, we define the typical set $T_{\varepsilon}$ as the set of structures $S$ on $V$ satisfying (a) $S$ is asymmetric on $V_1$ and on $V_2$, respectively; (b) for a labeled graph $G \simeq_{\mathcal{P}} S$ with $G_1 = G[V_1]$, $G_2 = G[V_2]$ and bipartite part $G_{12} = G[V_1, V_2]$,
$$2^{-\binom{n/2}{2}(h(p)+\varepsilon)} \le P(G_i) \le 2^{-\binom{n/2}{2}(h(p)-\varepsilon)} \;\; (i = 1, 2) \quad \text{and} \quad 2^{-\frac{n^2}{4}(h(q)+\varepsilon)} \le P(G_{12}) \le 2^{-\frac{n^2}{4}(h(q)-\varepsilon)}.$$
Denote by $T_a$ and $T_b$ the sets of structures satisfying properties (a) and (b), respectively; thus $T_{\varepsilon} = T_a \cap T_b$. Firstly, by the asymmetry of $G(n/2, p)$ (Lemma 3.1), we conclude that $P_S(T_a) \ge 1 - \varepsilon/2$ for large $n$. Secondly, we use a binary sequence of length $\binom{n}{2}$ to represent a (labeled) instance $G$ of $S$, where the first $\binom{n/2}{2}$ bits represent the induced subgraph on $V_1$, the next $\binom{n/2}{2}$ bits represent the induced subgraph on $V_2$, and the remaining $\frac{n^2}{4}$ bits represent the bipartite graph between $V_1$ and $V_2$. Since all edges of $G$ are generated independently, both $G_1$ and $G_2$ have in expectation $\binom{n/2}{2} p$ 1's, and the AEP property of binary sequences implies that
$$2^{-\binom{n/2}{2}(h(p)+\varepsilon)} \le P(G_i) \le 2^{-\binom{n/2}{2}(h(p)-\varepsilon)}$$
holds for each $i \in \{1, 2\}$ with probability at least $1 - \varepsilon/6$. Similarly, $G_{12}$ has in expectation $\frac{n^2}{4} q$ 1's, and the AEP property of binary sequences gives that with probability at least $1 - \varepsilon/6$,
$$2^{-\frac{n^2}{4}(h(q)+\varepsilon)} \le P(G_{12}) \le 2^{-\frac{n^2}{4}(h(q)-\varepsilon)}.$$
Since these edges are independent, we conclude that (b) holds with probability at least $1 - \varepsilon/2$. Thus, $P_S(T_{\varepsilon}) \ge 1 - \varepsilon$. Now we can compute $P_S(S)$ for $S \in T_{\varepsilon}$. By (a), $N_{\mathcal{P}}(S) = \big((n/2)!\big)^2$ for any $S \in T_{\varepsilon}$. Together with (b) and a straightforward computation, the assertion of (ii) follows. ∎
4 SBM Compression Algorithm
Given the computation of the structural entropy, a natural next step is to design efficient compression schemes that come close to or even (asymptotically) achieve this entropy limit. Choi and Szpankowski [8] presented such an algorithm (which they named Szip) for (unlabeled) Erdős–Rényi random graphs, which uses in expectation at most $\binom{n}{2} h(p) - n\log n + O(n)$ bits and asymptotically achieves the structural entropy given in Theorem 2.3. Roughly speaking, Szip greedily peels off vertices from the graph and (efficiently) stores their neighborhood information. This procedure can simply be reversed, but the labeling of the recovered graph may differ from that of the original graph, which is exactly why a saving in the codeword length is achieved. Refinements and analysis achieving the proposed performance are also provided in [8].
Here we give an algorithm that optimally compresses SBMs, uses the Szip algorithm as a building block, and matches the structural entropy computation in Theorem 2.6. The algorithm consists of two stages. It first compresses the induced subgraphs $G_1 = G[V_1]$ and $G_2 = G[V_2]$ using Szip, and then compresses the bipartite part $G[V_1, V_2]$ using an arithmetic compression algorithm with the help of the Szip decoding outputs.
To give a brief description of the compression algorithm, we again use the balanced bipartition as an example. The encoding and decoding procedure of the algorithm is illustrated in Figure 1. The algorithm encodes the observed graph $G$ into a binary string as follows. It uses Szip as a subroutine to compress $G_1$ and $G_2$ into binary sequences $C_1$ and $C_2$. Then, as part of the encoder, we run the Szip decoder on $C_1$ and $C_2$ to obtain decoded structures $\tilde{G}_1$ and $\tilde{G}_2$, respectively. We then compress $G[V_1, V_2]$, viewed as a labeled bipartite graph under the vertex labeling of $\tilde{G}_1$ and $\tilde{G}_2$, into a codeword $C_{12}$. This “Labeled Encoder” can be realized by treating the bipartite graph as a binary sequence of length $\frac{n^2}{4}$ and compressing it with a standard arithmetic encoder [27, 30, 38]. The concatenation of the Szip algorithms and the arithmetic encoder forms the cascade encoder of our algorithm and yields the codeword $(C_1, C_2, C_{12})$. Upon receiving the codeword, we decode the three parts in parallel using the Szip decoder and the arithmetic decoder. This completes our algorithm.
The main challenge in the design of our algorithm is how the decoder can maintain consistency between the bipartite graph and the decoded versions of $G_1$ and $G_2$. A key observation here is that, since Szip is a deterministic algorithm, although it may permute the vertex labeling, its output is invariant given the same input. Given this, our solution is to first run Szip (both encoding and decoding) at the encoder and obtain the structures $\tilde{G}_1$ and $\tilde{G}_2$, respectively. We then compress $G[V_1, V_2]$ (as a labeled bipartite graph) under the vertex labeling of $\tilde{G}_1$ and $\tilde{G}_2$. This guarantees that the decoded structures $\tilde{G}_1$, $\tilde{G}_2$ and the decoded bipartite graph share the same vertex labeling, namely, the partitioned structure of $G$ is recovered.
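The cascade encoder can be sketched as follows. Here `szip_encode`, `szip_decode` and `arith_encode` are injected placeholders standing in for the Szip algorithm of [8] and a standard arithmetic coder; in particular, the assumption that the Szip decoder exposes its decoded vertex ordering is ours, made only to keep the sketch short:

```python
def encode_sbm(edges, V1, V2, szip_encode, szip_decode, arith_encode):
    """Sketch of the cascade encoder for the balanced bipartition case.

    `edges` is a set of frozenset vertex pairs; V1 and V2 are the two parts.
    szip_encode / szip_decode / arith_encode are placeholders for the Szip
    algorithm of [8] and a standard arithmetic coder; szip_decode is assumed
    to return the decoded vertex ordering of its part."""
    G1 = {e for e in edges if e <= V1}          # edges inside V1
    G2 = {e for e in edges if e <= V2}          # edges inside V2

    C1, C2 = szip_encode(G1), szip_encode(G2)

    # Run the Szip *decoder* at the encoder side: Szip is deterministic, so
    # the receiver will reconstruct exactly the same vertex ordering, and the
    # bipartite part can be described consistently with respect to it.
    order1 = szip_decode(C1)                    # decoded ordering of V1
    order2 = szip_decode(C2)                    # decoded ordering of V2

    cross_bits = [1 if frozenset((u, v)) in edges else 0
                  for u in order1 for v in order2]
    C12 = arith_encode(cross_bits)

    # The final codeword is the concatenation of the three parts.
    return C1, C2, C12
```

The decoder then runs the same Szip decoder on $C_1$ and $C_2$ and the arithmetic decoder on $C_{12}$, and reassembles the bipartite part in the same vertex order.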
Before discussing the performance of the algorithm, we first describe some useful properties of the arithmetic compression algorithm in the following lemma. We omit the proof of the lemma, which follows from the analysis in [27, 30, 38] and AEP properties in [39, 9].
Lemma 4.1.
Let $L_A$ be the codeword length of the arithmetic compression algorithm when compressing a binary sequence of length $m$ with entropy rate $h$. For large $m$, the following holds:
(i) The expected codeword length asymptotically achieves the entropy of the message, i.e.,
$$\mathbb{E}[L_A] = m\,h + o(m). \tag{4}$$
(ii) For any $\varepsilon > 0$,
$$\Pr\big(L_A \ge (1+\varepsilon)\, m\, h\big) \to 0 \quad \text{as } m \to \infty. \tag{5}$$
(iii) The arithmetic algorithm runs in time $O(m)$.
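As a rough sanity check on the codeword-length behavior stated in Lemma 4.1, the following toy sketch computes the exact interval width an arithmetic encoder would assign to an i.i.d. binary sequence, using exact rational arithmetic (real implementations such as [27, 30] use fixed-precision integer renormalization instead), and compares the resulting length with $m\,h(p)$:

```python
from fractions import Fraction
import math
import random

def interval_width(bits, p):
    """Width of the interval a (toy) arithmetic encoder assigns to an i.i.d.
    Bernoulli(p) bit string; the width equals P(bits) exactly."""
    width, p = Fraction(1), Fraction(p).limit_denominator()
    for b in bits:
        width *= p if b == 1 else (1 - p)
    return width

def toy_code_length(bits, p):
    """About -log2 P(bits) + 2 bits suffice to name a dyadic point inside
    the interval, which is the essence of arithmetic coding."""
    w = interval_width(bits, p)
    return math.log2(w.denominator) - math.log2(w.numerator) + 2

random.seed(1)
m, p = 2000, 0.1
msg = [1 if random.random() < p else 0 for _ in range(m)]
h = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
print(round(toy_code_length(msg, p)), "bits vs. m*h(p) =", round(m * h), "bits")
```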
The following theorem characterizes the performance of our algorithm. It follows immediately from Theorem 2 in [8] (the performance of Szip) and Lemma 4.1; we omit the detailed proof here.
Theorem 4.2.
Let $V$ be a set of $n$ vertices and $\mathcal{P} = \{V_1, V_2\}$ with $|V_1| = |V_2| = n/2$. Given a partitioned unlabeled graph $S$ on $V$, let $L(S)$ be the codeword length given by our algorithm. For large $n$, our algorithm runs in time $O(n^2)$ and satisfies the following:
(i) The algorithm asymptotically achieves the structural entropy in (1), i.e.,
$$\mathbb{E}[L(S)] \le (1 + o(1))\, H_{\mathcal{P}}.$$
(ii) For any $\varepsilon > 0$,
$$\Pr\big(L(S) \ge (1+\varepsilon)\, H_{\mathcal{P}}\big) \to 0 \quad \text{as } n \to \infty.$$
5 General SBM with $k$ blocks
In the previous sections, we discussed the structural entropy of the SBM and the compression algorithm that asymptotically achieves this structural entropy for the balanced bipartition case ($k = 2$). The corresponding results in Theorem 2.6 and Theorem 4.2 can be easily generalized to the general $k$-partition case. We briefly describe the generalizations below.
5.1 Structural entropy
Our approach can deal with general SBMs similarly. In a general SBM with $k$ parts, a $k \times k$ symmetric matrix $Q = (q_{ij})$ is used to describe the connection probabilities between and within the communities, where a vertex in $V_i$ and a vertex in $V_j$ are connected by an edge with probability $q_{ij}$ ($i$ and $j$ are not necessarily distinct). To simplify the presentation, we only state the results below in the special form where $q_{ij} = p$ if $i = j$ and $q_{ij} = q$ if $i \neq j$, and we remark that similar results hold in the general case as well. We first give the result on the computation of the partitioned structural entropy of the SBM.
Theorem 5.1.
Fix reals $c_1, \dots, c_k$ in $(0, 1)$ whose sum is 1. Let $V$ be a set of $n$ vertices with a partition $\mathcal{P} = \{V_1, \dots, V_k\}$ into $k$ parts such that $|V_i| = c_i n$. For large $n$ and all $p, q$ satisfying $p, q \gg \frac{\ln n}{n}$ and $1 - p, 1 - q \gg \frac{\ln n}{n}$, the following holds:
(i) The $\mathcal{P}$-partitioned structural entropy for a partitioned structure on $V$ is
$$H_{\mathcal{P}} = \sum_{i=1}^{k} \binom{c_i n}{2} h(p) + \sum_{1 \le i < j \le k} c_i c_j n^2\, h(q) - \sum_{i=1}^{k} \log\,(c_i n)! + O\!\left(\frac{\log n}{n^{\alpha}}\right) \tag{6}$$
for some $\alpha > 0$, where $h(x) = -x\log x - (1-x)\log(1-x)$.
(ii) For a partitioned structure $S$ on $V$ and $\varepsilon > 0$, with probability at least $1 - \varepsilon$ for large $n$,
$$\left| -\log P(S) - \sum_{i=1}^{k} \binom{c_i n}{2} h(p) - \sum_{1 \le i < j \le k} c_i c_j n^2\, h(q) + \sum_{i=1}^{k} \log\,(c_i n)! \right| \le \varepsilon \binom{n}{2}.$$
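To make the leading terms of (6) concrete, the following sketch evaluates them numerically for given block fractions $c_1, \dots, c_k$ (the helper names are illustrative):

```python
import math

def h(x):
    """Binary entropy function (bits)."""
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def partitioned_entropy_terms(n, c, p, q):
    """Leading terms of (6): within-block pairs at rate h(p), cross-block
    pairs at rate h(q), minus the log2 (c_i n)! bits saved inside each block."""
    sizes = [round(ci * n) for ci in c]
    within = sum(math.comb(s, 2) for s in sizes) * h(p)
    cross = sum(sizes[i] * sizes[j]
                for i in range(len(sizes))
                for j in range(i + 1, len(sizes))) * h(q)
    saving = sum(math.log2(math.factorial(s)) for s in sizes)
    return within + cross - saving

print(round(partitioned_entropy_terms(n=900, c=[1/3, 1/3, 1/3], p=0.3, q=0.05)))
```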
5.2 Compression algorithm
The compression algorithm for a general graph $G$ with vertex partition $\{V_1, \dots, V_k\}$ can be viewed as a combination of the compression algorithms for the induced subgraphs $G[V_i]$ and the bipartite graphs $G[V_i, V_j]$ ($1 \le i < j \le k$). To be more precise, we describe the algorithm as follows. It first compresses every $G[V_i]$ into a codeword $C_i$ using Szip. Then it runs the Szip decoder with input $C_i$ to obtain the decoded structure $\tilde{G}_i$. With the vertex indices of $\tilde{G}_1, \dots, \tilde{G}_k$, we can compress the union of the cross edges, viewed as a labeled $k$-partite graph, into a codeword $C_0$ using an arithmetic encoder. This completes the encoding procedure and gives the codewords $C_1, \dots, C_k$ and $C_0$, which we concatenate to obtain the final codeword. The decoding simply runs the Szip decoders and the labeled (arithmetic) decoder in parallel. The correctness of the decoding output can be argued as in the bipartition case.
The performance of the algorithm can be obtained similarly to Theorem 4.2, as follows.
Theorem 5.2.
Fix reals $c_1, \dots, c_k$ in $(0, 1)$ whose sum is 1. Let $V$ be a set of $n$ vertices with a partition $\mathcal{P} = \{V_1, \dots, V_k\}$ into $k$ parts such that $|V_i| = c_i n$. Given a partitioned unlabeled graph $S$ on $V$, let $L(S)$ be the codeword length given by our algorithm. For large $n$, our algorithm runs in time $O(n^2)$ and satisfies the following:
(i) The algorithm asymptotically achieves the structural entropy in (6), i.e.,
$$\mathbb{E}[L(S)] \le (1 + o(1))\, H_{\mathcal{P}}.$$
(ii) For any $\varepsilon > 0$,
$$\Pr\big(L(S) \ge (1+\varepsilon)\, H_{\mathcal{P}}\big) \to 0 \quad \text{as } n \to \infty.$$
6 Conclusion
In this paper we defined the partitioned unlabeled graphs and partitioned structural entropy, which generalize the structural entropy for unlabeled graphs introduced by Choi and Szpankowski [8]. We then computed the partitioned structural entropy for Stochastic Block Models and gave a compression algorithm that asymptotically achieves this structural entropy limit. As mentioned earlier, we believe that in appropriate contexts the structural information of a graph or network can be interpreted as a kind of semantic information, in which case, the communication schemes may benefit from structural compressions which considerably reduce the cost.
References
- [1] E. Abbe. Community detection and stochastic block models: recent developments. The Journal of Machine Learning Research, 18(1):6446–6531, 2017.
- [2] E. Abbe, A. S. Bandeira, and G. Hall. Exact recovery in the stochastic block model. IEEE Trans. Inf. Theory, 62(1):471–487, 2015.
- [3] E. Abbe and C. Sandon. Community detection in general stochastic block models: Fundamental limits and efficient algorithms for recovery. In 2015 IEEE 56th Annual Symposium on Foundations of Computer Science, pages 670–688. IEEE, 2015.
- [4] M. Adler and M. Mitzenmacher. Towards compressing web graphs. In Proceedings DCC 2001. Data Compression Conference, pages 203–212, 2001.
- [5] N. Alon and A. Orlitsky. Source coding and graph entropies. IEEE Trans. Inf. Theory, 42(5):1329–1339, 1996.
- [6] A. R. Asadi, E. Abbe, and S. Verdú. Compressing data on graphs with clusters. In 2017 IEEE International Symposium on Information Theory (ISIT), pages 1583–1587, 2017.
- [7] F. Chierichetti, R. Kumar, S. Lattanzi, M. Mitzenmacher, A. Panconesi, and P. Raghavan. On compressing social networks. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM KDD ’09, page 219–228, New York, NY, USA, 2009. Association for Computing Machinery.
- [8] Y. Choi and W. Szpankowski. Compression of graphical structures: Fundamental limits, algorithms, and experiments. IEEE Trans. Inf. Theory, 58(2):620–638, 2012.
- [9] T. M. Cover and J. A. Thomas. Elements of Information Theory 2nd Edition. Wiley-Interscience, USA, 2006.
- [10] M. Dehmer and A. Mowshowitz. A history of graph entropy measures. Information Science, 181(1):57–78, Jan. 2011.
- [11] B. Hajek, Y. Wu, and J. Xu. Information limits for recovering a hidden community. IEEE Trans. Inf. Theory, 63(8):4729–4745, 2017.
- [12] F. Harary and E. M. Palmer. Graphical Enumeration. New York: Academic, 1973.
- [13] P. W. Holland, K. B. Laskey, and S. Leinhardt. Stochastic blockmodels: First steps. Social networks, 5(2):109–137, 1983.
- [14] J. C. Kieffer, E.-H. Yang, and W. Szpankowski. Structural complexity of random binary trees. In Proceedings of 2009 IEEE International Symposium on Information Theory (ISIT), pages 635–639, Seoul, 2009.
- [15] I. Kontoyiannis, Y. H. Lim, K. Papakonstantinopoulou, and W. Szpankowski. Symmetry and the entropy of small-world structures and graphs. In Proceedings of 2021 IEEE International Symposium on Information Theory (ISIT), pages 3026–3031, 2021.
- [16] J. Körner. Coding of an information source having ambiguous alphabet and the entropy of graphs. In 6th Prague conference on information theory, pages 411–425, 1973.
- [17] T. Łuczak, A. Magner, and W. Szpankowski. Asymmetry and structural information in preferential attachment graphs. Random Structures & Algorithms, 55:696–718, 03 2019.
- [18] T. Łuczak, A. Magner, and W. Szpankowski. Compression of preferential attachment graphs. In Proceedings of 2019 IEEE International Symposium on Information Theory (ISIT), pages 1697–1701, 2019.
- [19] E. Mossel, J. Neeman, and A. Sly. Consistency thresholds for the planted bisection model. In Proceedings of the forty-seventh annual ACM symposium on Theory of computing, pages 69–75, 2015.
- [20] A. Mowshowitz. Entropy and the complexity of graphs ii: the information content of digraphs and infinite graphs. Bulletin of Mathematical Biophysics 30, pages 225–240, 1968.
- [21] A. Mowshowitz. Entropy and the complexity of graphs iii: graphs with prescribed information content. Bulletin of Mathematical Biophysics 30, pages 387–414, 1968.
- [22] A. Mowshowitz. Entropy and the complexity of graphs iv: entropy measures and graphical structure. Bulletin of Mathematical Biophysics 30, pages 533–546, 1968.
- [23] A. Mowshowitz. Entropy and the complexity of the graphs i: an index of the relative complexity of a graph. Bulletin of Mathematical Biophysics 30, pages 175–204, 1968.
- [24] A. Mowshowitz and M. Dehmer. Entropy and the complexity of graphs revisited. Entropy, 14(3):559–570, 2012.
- [25] M. Naor. Succinct representation of general unlabeled graphs. Discr. Appl. Math., 28(3):303–307, 1990.
- [26] G. Palla, I. Derényi, I. Farkas, and T. Vicsek. Uncovering the overlapping community structure of complex networks in nature and society. Nature, 435(7043):814–818, 2005.
- [27] R. C. Pasco. Source coding algorithms for fast data compression. Ph.D. dissertation, Stanford Univ., May 1976.
- [28] L. Peshkin. Structure induction by lossless graph compression. In Proceedings of 2007 Data Compression Conference (DCC’07), pages 53–62, 2007.
- [29] N. Rashevsky. Life information theory and topology. Bulletin of Mathematical Biophysics 17, pages 229–235, 1955.
- [30] J. J. Rissanen. Generalized kraft inequality and arithmetic coding. IBM Journal of Research and Development, 20(3):198–203, May 1976.
- [31] M. Sauerhoff. On the entropy of models for the web graph, 2016.
- [32] S. A. Savari. Compression of words over a partially commutative alphabet. IEEE Trans. Inf. Theory, 50(7):1425–1441, 2004.
- [33] C. E. Shannon. A mathematical theory of communication. The Bell System Technical Journal, 27:379–423, 1948.
- [34] C. E. Shannon. The lattice theory of information. Transactions of the IRE Professional Group on Information Theory, 1(1):105–107, 1953.
- [35] J. Sun, E. Bollt, and D. Ben-Avraham. Graph compression - save information by exploiting redundancy. J. Statist. Mechan.: Theory Exper., pages 06001–06001, 2008.
- [36] E. Trucco. A note on the information content of graphs. Bulletin of Mathematical Biophysics 18, pages 129–135, 1956.
- [37] G. Turán. On the succinct representation of graphs. Discr. Appl. Math., 8(3):289–294, 1984.
- [38] F. Willems, Y. Shtarkov, and T. Tjalkens. The context-tree weighting method: basic properties. IEEE Trans. Inf. Theory, 41(3):653–664, 1995.
- [39] R. W. Yeung. Information Theory and Network Coding. Springer, 2008.
- [40] H. Zenil, N. A. Kiani, and J. Tegnér. A review of graph and network complexity from an algorithmic information perspective. Entropy, 20(8), 2018.