Poisson approximation for large permutation groups
Abstract
Let be a group of permutations of objects which permutes things independently in disjoint blocks of size and then permutes the blocks. We investigate the probabilistic and/or enumerative aspects of random elements of . This includes novel limit theorems for fixed points, cycles of various lengths, number of cycles and inversions. The limits are compound Poisson distributions with interesting dependence structure.
keywords: cycles of random permutations, compound Poisson distribution, Pólya theory
MSC: 05A05, 60C05
[label1] Departments of Mathematics and Statistics, Stanford University, 390 Jane Stanford Way, Stanford, California 94305, USA
[label2] Stanford University, 390 Jane Stanford Way, Stanford, California 94305, USA
1 Introduction
Probabilistic or enumerative theory for random permutations is a classical subject. Pick uniformly; what does it look like? A permutation splits into disjoint cycles, i.e., fixed points, transpositions, etc. Write if has cycles of length . Following Montmort (1708), [1, 2], [3], roughly, for large , the have limiting independent Poisson distributions with . Many refinements, extensions and applications can be found in the survey paper of [4], the book of [5], and the recent survey [6].
This paper develops similar theory for large subgroups of . Let , , and let
(1)
be the Wreath product. This permutes with permuting , permuting , permuting independently and then permuting those blocks. For example, : permutes 1 2 3 4 5 6 first to 2 1 3 4 6 5 and then to 6 5 2 1 3 4. In the usual two-line notation for permutations,
1 2 3 4 5 6
6 5 2 1 3 4
has cycle decomposition (1 6 4)(2 5 3).
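To make the running example concrete, here is a minimal Python sketch (our own code and indexing conventions, not taken from the paper) that shuffles within the blocks of size 2, moves the blocks, and reads off the one-line form and its cycle decomposition.

```python
def wreath_one_line(inner, src, k):
    """One-line form (1-indexed) of the arrangement obtained by first shuffling each
    block of size k internally (inner[b][x] is the original offset of the label that
    the shuffle of block b places at offset x) and then putting the content of block
    src[p] at block-place p."""
    arr = []
    for p in range(len(src)):
        b = src[p]
        for x in range(k):
            arr.append(k * b + inner[b][x] + 1)
    return arr

def cycles(one_line):
    """Cycle decomposition of a permutation given in 1-indexed one-line form."""
    seen, out = set(), []
    for start in range(1, len(one_line) + 1):
        if start in seen:
            continue
        cyc, j = [], start
        while j not in seen:
            seen.add(j)
            cyc.append(j)
            j = one_line[j - 1]
        out.append(tuple(cyc))
    return out

# The example: swap within blocks {1,2} and {5,6}, keep {3,4} fixed, then put block 3
# at place 1, block 1 at place 2, block 2 at place 3.
inner = [(1, 0), (0, 1), (1, 0)]
src = (2, 0, 1)
line = wreath_one_line(inner, src, k=2)
print(line)          # [6, 5, 2, 1, 3, 4]
print(cycles(line))  # [(1, 6, 4), (2, 5, 3)]
```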
Widely occurring examples of Wreath products include:
1. : the symmetry group of the hypercube. This can also be seen as the group of centrally symmetric permutations in .
2. : the generalized permutation group.
3. : a maximal subgroup (O’Nan–Scott theorem).
Here and . We propose picking at random and asking: what does it look like? For , suppose has -cycles, so . Write . Here are two special cases of our main theorem.
Example 1.1 ().
Theorem 1.
Pick uniformly. Write . Then converges in distribution to with joint distribution
where
and all , , are independent.
Remarks.
1. All the have marginal compound Poisson distributions.
2. The are dependent. For example, , share a common component . Similarly, and are dependent (but and are independent).
3. Many are independent; are independent, as are , for any , .
Example 1.2 ().
Here the permutations within blocks are only by cyclic shifts, independently for each block. In this case, the are independent.
Theorem 2.
For fixed and large, has converging in distribution to with
where is the greatest common divisor, is Euler’s totient function, and are independent for all .
Section 2 gives background on Wreath products (Section 2.1), motivation from a new algorithm for generating random partitions of , i.e., the commuting graph process (Section 2.2), background on compound Poisson distributions (Section 2.3), and background on cycle indices and Pólya theory (Section 2.4). This last is illustrated by a normal limit theorem for the number of cycles of in Section 2.5.
2 Background
This section gives background on Wreath products (Section 2.1), the commuting graph process (Section 2.2), compound Poisson distributions (Section 2.3), and cycle indices and Pólya theory (Section 2.4). As an application, it derives a central limit theorem for the number of cycles in a random element of , using cycle theory and Anscombe’s central limit theorem for stopped sums.
2.1 Wreath products
For , , is the Wreath product, sometimes denoted . It occurs frequently in basic group theory and has its own Wikipedia page. Any text on group theory has a section on Wreath products. They often appear as “troublemakers”, providing counter-examples. For instance, , a group of order , is the Sylow- subgroup of the symmetric group . It is the smallest example of a non-regular -group.
Wreath products also appear in applications:
- , the hyperoctahedral group, is the group of symmetries of an -dimensional hypercube. This may also be seen as the subgroup of of centrally symmetric permutations . For example in :
These occur as the arrangements of cards obtainable with the two types of perfect shuffles [7]. In [8] it is shown that these permutations biject with phylogenetic trees and their cycle theory is useful for understanding Markov chains on trees. is also the Weyl group of the symplectic group and [9] needed results about fixed points in for their work on this group.
- is the generalized permutation group. This may be pictured as permutation matrices with the usual ones replaced by th roots of unity. It has its own Wikipedia page. We encounter it in understanding which permutations commute with a given . If , then the subgroup of permutations commuting with , . Further explanation is in Section 2.2. Of course, .
- occurs in statistical work. With groups of people, each with subjects, a natural symmetry for permutation tests of group homogeneity is . See [10] for a textbook account. It also occurs in group theory: the O’Nan–Scott theorem ([11], for example) classifies the maximal subgroups of , equivalently the primitive actions of . There is a small list of possibilities and, for , is on the list. Properties of cycles are used to prove theorems about the distribution of fixed points and derangements in [12].
Iterated Wreath products occur too. The Sylow- subgroup of is (-factors). There has been substantial probabilistic and/or enumerative effort at understanding the conjugacy structure of these groups. See [13] and their references.
2.2 The commuting graph process and random partitions
Our motivation for the present study arose from analysis of an algorithm for generating uniformly distributed partitions of . More generally, let be a finite group and consider the commuting graph of . This has vertices indexed by and an edge from to if in . Let be the transition matrix of the natural random walk on . So if and commute, zero otherwise.
Proposition 1.
is a reversible Markov chain with unique stationary distribution , the conjugacy class of . Further, the chain generated by lumps to an ergodic, symmetric Markov chain on conjugacy classes with transition matrix
The proof is elementary Markov chain theory; see [17] or [18]. The point is that this gives a method to generate a uniformly chosen conjugacy class. As an example, consider ; the conjugacy classes are indexed by partitions of . The Markov chain is easy to run: from , choose uniformly and move to . When , the transition matrix of the lumped chain on partitions becomes the symmetric matrix:
This illustrates three properties, valid for general :
When , a prime, except when or , as in the example.
For each , pick a partition of from independently and take as the union of these.
For of type , shows that the distribution of the cycles of determines the transition matrix. We needed these to understand Doeblin bounds and couplings of the chain . This is ongoing work (!).
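For the symmetric group the walk is easy to simulate: from the current permutation one moves to a uniformly chosen element of its centralizer, obtained by permuting the cycles of each length among themselves and rotating each cycle independently. The sketch below (our own code; the helper names are ours) runs the walk and tallies the lumped chain on partitions, whose long-run frequencies should approach the uniform distribution on partitions, as in Proposition 1.

```python
import random
from collections import Counter

def cycles_of(perm):
    """Cycles of a permutation given as a dict {i: perm(i)} on {0,...,n-1}."""
    seen, out = set(), []
    for s in perm:
        if s in seen:
            continue
        c, j = [], s
        while j not in seen:
            seen.add(j)
            c.append(j)
            j = perm[j]
        out.append(c)
    return out

def uniform_centralizer_element(perm):
    """Uniform element of the centralizer of perm in S_n: match the cycles of each
    length by a uniform permutation and rotate each cycle by an independent uniform
    offset."""
    by_len = {}
    for c in cycles_of(perm):
        by_len.setdefault(len(c), []).append(c)
    tau = {}
    for ell, cycs in by_len.items():
        targets = random.sample(cycs, len(cycs))   # uniform matching of equal-length cycles
        for c, d in zip(cycs, targets):
            r = random.randrange(ell)              # uniform rotation
            for t in range(ell):
                tau[c[t]] = d[(t + r) % ell]
    return tau

def lumped_chain(n, steps):
    """Run the commuting-graph walk on S_n, recording the cycle type at each step."""
    perm = {i: i for i in range(n)}                # start at the identity
    visits = Counter()
    for _ in range(steps):
        perm = uniform_centralizer_element(perm)
        visits[tuple(sorted((len(c) for c in cycles_of(perm)), reverse=True))] += 1
    return visits

# The 7 partitions of 5 should each appear with long-run frequency close to 1/7.
print(lumped_chain(n=5, steps=20000).most_common())
```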
2.3 Compound Poisson distributions
As the examples in the Introduction show, the natural limit laws in this part of the world often involve compound Poisson laws: nonnegative integer combinations of independent Poisson variates. These are familiar objects in applied probability. A survey of examples and techniques for proving limit theorems by Stein’s method is in [21]. This offers a different way to prove our theorems.
Definition 2.1.
Let be i.i.d. with for . Let . Then
(2)
has a compound Poisson law with parameters and . Patently, compound Poisson laws are infinitely divisible and a classical theorem of [22] shows that these are all the infinitely divisible laws supported on .
An equivalent representation is the following: let be independent, . Then
(3)
has a compound Poisson law as in (2).
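The equivalence of (2) and (3) is easy to check by simulation. A minimal sketch with illustrative parameters of our own choosing (a rate of 1.5 and a three-point jump law; nothing here is taken from the paper):

```python
import math
import random
from collections import Counter

def sample_poisson(lam, rng):
    """Poisson sampler (Knuth's product method); adequate for the small rates used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def cp_via_jumps(lam, mu, rng):
    """Definition (2): draw N ~ Poisson(lam), then add N i.i.d. jumps with P(jump = j) = mu[j]."""
    n = sample_poisson(lam, rng)
    vals, wts = zip(*mu.items())
    return sum(rng.choices(vals, wts)[0] for _ in range(n))

def cp_via_components(lam, mu, rng):
    """Representation (3): sum over j of j * N_j with independent N_j ~ Poisson(lam * mu[j])."""
    return sum(j * sample_poisson(lam * p, rng) for j, p in mu.items())

rng = random.Random(0)
lam, mu = 1.5, {1: 0.5, 2: 0.3, 3: 0.2}   # illustrative parameters only
reps = 50000
a = Counter(cp_via_jumps(lam, mu, rng) for _ in range(reps))
b = Counter(cp_via_components(lam, mu, rng) for _ in range(reps))
print([(v, round(a[v] / reps, 3), round(b[v] / reps, 3)) for v in range(6)])  # the two columns agree
```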
In present applications, collections of dependent variates with compound Poisson margins occur. Our examples match up with the standard framework for constructing dependent, infinitely divisible vectors. See [23] or [24]. A clean definition of compound Poisson vectors appears in [25]:
Definition 2.2.
Fix summing to . For each , let be the measure defined by (2). Fix a finite or countable index set and let be the set of non-empty subsets of . Finally, let be a function with . Define, for ,
where are independent compound Poisson with parameters and . Then is a compound Poisson random vector.
Note.
The same component may appear in several different so the are (generally) dependent. The vector is infinitely divisible (and all margins are as well). We do not know if this class includes all infinitely divisible vectors supported on .
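To illustrate the flavor of Definition 2.2, here is a toy construction (entirely our own choices of index set, subsets, rates, and jump law, with all weights taken equal to one; none of it is from the paper). Coordinates built from shared compound Poisson components are dependent, while coordinates sharing no component are independent, as in the Remarks following Theorem 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def compound_poisson(lam, mu):
    """One draw from the compound Poisson law with rate lam and jump law mu (dict j -> mu_j),
    via the component representation (3)."""
    return sum(j * rng.poisson(lam * p) for j, p in mu.items())

# Toy ingredients: index set {1, 2, 3}, a few subsets A with rates lam_A, one jump law.
subsets = {frozenset({1}): 0.7, frozenset({2}): 0.4,
           frozenset({1, 2}): 0.5, frozenset({2, 3}): 0.3}
mu = {1: 0.6, 2: 0.4}

def sample_vector():
    y = {A: compound_poisson(lam_A, mu) for A, lam_A in subsets.items()}  # independent Y_A
    # Coordinate i sums the Y_A over subsets A containing i (all weights = 1 here);
    # shared components make the coordinates dependent.
    return tuple(sum(y[A] for A in subsets if i in A) for i in (1, 2, 3))

draws = np.array([sample_vector() for _ in range(20000)])
# Coordinates 1 and 2 share the component indexed by {1,2}; coordinates 1 and 3 share none.
print(np.corrcoef(draws, rowvar=False).round(2))
```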
2.4 Cycle indices and Pólya theory
Pólya theory studies enumeration under symmetry. Let be a finite group operating on a finite set . This splits into disjoint orbits
Natural questions include:
1. How many orbits are there?
2. What are the orbit sizes?
3. Are there “natural” or “nice” labels for orbits?
4. How can an orbit be chosen uniformly?
The commuting graph process of 2.2 provides an example. There and acts on itself by conjugation. The Goldberg–Jerrum [26] Burnside process is similar.
Suppose now that is a permutation group. The cycle index polynomial
(4)
is a useful tool for Pólya-type problems; [27, 28, 15, 29] are good sources for this connection. They contain the following basic facts, used below, with the cyclic group and the symmetric group on .
(5)
(6)
(7)
(8)
Pólya studied the problem of classifying molecules up to symmetry. Pólya theory has been used for graphical enumeration, as in the question: how many unlabeled trees are there? A striking example is Hanlon’s [30, 31] study of graph coloring.
We have not seen many applications in probability theory. Yet, the cycle index has a perfectly simple probabilistic interpretation: for , suppose has cycle type , . Then
Example 2.1.
Consider , so . The elements of are
Hence, from the definition,
Using , formula (7) gives
so
as above. In particular: pick uniformly, the chance of a four-cycle is .
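For any small permutation group the cycle index can be tabulated by brute force. The sketch below (our own code) does this for S_4 and for the cyclic group C_4; in particular, the chance that a uniform element is a four-cycle comes out to 1/4 in S_4 and 1/2 in C_4.

```python
from itertools import permutations
from collections import Counter
from fractions import Fraction

def cycle_type(perm):
    """Multiset of cycle lengths of a permutation in 0-indexed one-line form."""
    seen, lens = set(), []
    for s in range(len(perm)):
        if s in seen:
            continue
        j, ell = s, 0
        while j not in seen:
            seen.add(j)
            ell += 1
            j = perm[j]
        lens.append(ell)
    return tuple(sorted(lens, reverse=True))

def cycle_index_of(elements):
    """Cycle index as a dict: cycle type -> proportion of the group with that type."""
    counts = Counter(cycle_type(p) for p in elements)
    total = sum(counts.values())
    return {t: Fraction(c, total) for t, c in counts.items()}

def cyclic_group(n):
    """C_n acting on {0,...,n-1} by rotation, each element in one-line form."""
    return [tuple((i + r) % n for i in range(n)) for r in range(n)]

print(cycle_index_of(list(permutations(range(4)))))  # S_4: type (4,) has mass 1/4
print(cycle_index_of(cyclic_group(4)))               # C_4: type (4,) has mass 1/2
```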
Example 2.2.
Consider . Claim:
To see the first claim, write
The only term on the right side that carries the monomial is the term corresponding to , , with coefficient as claimed. The proof for is easier.
Example 2.3.
Similarly, consider .
Example 2.4.
Consider , with and relatively prime.
The next subsection gives a more substantive example.
2.5 Distribution of the number of cycles in
The number of cycles of a random permutation in is a classical object of study [1, 2]; see also [22]. These papers prove a central limit theorem:
Theorem 3 (Goncharov).
Let be chosen from the uniform distribution on . Let be the number of cycles. Then, with Euler’s constant,
and, normalized by its mean and variance, has a limiting standard normal distribution.
This subsection studies the distribution of the number of cycles in Wreath products. Let be a subgroup of and be a subgroup of . Then acts on by permuting independently within blocks of size by the elements in and then permuting the blocks by .
The cycle index connection
Recall the cycle index:
Setting all gives the generating function for the number of cycles, ; call this ,
From (7),
This proves:
Proposition 2.
For uniformly chosen in , the number of cycles has the same distribution as , where , , are i.i.d. with generating function and is independent of with generating function .
This represents as a randomly stopped sum, an extensively studied class of random variables; see [32, 33]. In particular,
As shown below, a classical theorem of Anscombe for stopped sums gives a central limit theorem under conditions on when is large. See [32, 33] for a full discussion of Anscombe’s theorem.
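Proposition 2 can be checked by simulation when the group inside the blocks is the full symmetric group on k letters (a sketch in our own code, under that assumption). One run compares the cycle-count distribution of a directly sampled uniform element of the Wreath product, acting on kn points, with the stopped-sum representation: the stopping variable is the number of cycles of a uniform outer permutation, generated as a sum of independent Bernoulli(1/i) indicators, and the summands are i.i.d. cycle counts of uniform inner permutations.

```python
import random
from collections import Counter

def cycle_count(one_line):
    """Number of cycles of a 0-indexed one-line permutation."""
    seen, count = set(), 0
    for s in range(len(one_line)):
        if s in seen:
            continue
        count += 1
        j = s
        while j not in seen:
            seen.add(j)
            j = one_line[j]
    return count

def uniform_wreath(k, n, rng):
    """Uniform element of S_k wr S_n acting on {0,...,kn-1}: i.i.d. uniform shuffles
    inside the n blocks of size k, then a uniform permutation of the blocks."""
    outer = list(range(n)); rng.shuffle(outer)
    img = [0] * (k * n)
    for b in range(n):
        inner = list(range(k)); rng.shuffle(inner)
        for x in range(k):
            img[k * b + x] = k * outer[b] + inner[x]
    return img

def stopped_sum(k, n, rng):
    """Stopped-sum representation: N = cycles of a uniform outer permutation
    (sum of independent Bernoulli(1/i)), plus N i.i.d. cycle counts of uniform S_k elements."""
    N = sum(rng.random() < 1.0 / i for i in range(1, n + 1))
    total = 0
    for _ in range(N):
        g = list(range(k)); rng.shuffle(g)
        total += cycle_count(g)
    return total

rng = random.Random(1)
k, n, reps = 3, 4, 40000
direct = Counter(cycle_count(uniform_wreath(k, n, rng)) for _ in range(reps))
vianum = Counter(stopped_sum(k, n, rng) for _ in range(reps))
print(sorted((c, round(direct[c] / reps, 3), round(vianum[c] / reps, 3))
             for c in set(direct) | set(vianum)))   # the two empirical distributions agree
```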
For or a full symmetric group, the means and variances are given in the preceding theorem from Goncharov. We pause to compute them for the important case where is the cyclic group .
Proposition 3.
Let be the first and second moments of the number of cycles of a randomly chosen element of the cyclic group . Then are multiplicative functions of ; recall is multiplicative if for GCD . Further, for a prime, ,
This determines for all .
Proof.
A randomly chosen in has with probability if . It follows that
Elementary number theory shows and are multiplicative and, if is multiplicative, is, too. This proves the first claim.
For , the divisors are , , and , for . Thus
as claimed. The claim for is proved similarly. ∎
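Proposition 3 is easy to check numerically: the rotation by j of n points has gcd(j, n) cycles, so the moments are averages of powers of gcd(j, n). A quick verification of multiplicativity (our own code):

```python
from math import gcd
from fractions import Fraction

def cycle_count_moment(n, r):
    """r-th moment of the number of cycles of a uniform element of the cyclic group C_n
    acting on n points; the rotation by j has gcd(j, n) cycles."""
    return Fraction(sum(gcd(j, n) ** r for j in range(1, n + 1)), n)

# Multiplicativity: for coprime a, b the moment at a*b factors as the product of the moments.
for a, b in [(4, 9), (5, 8), (7, 27)]:
    for r in (1, 2):
        assert cycle_count_moment(a * b, r) == cycle_count_moment(a, r) * cycle_count_moment(b, r)

print(cycle_count_moment(12, 1), cycle_count_moment(12, 2))   # 10/3 and 121/6 for C_12
```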
In current applications with fixed, so the asymptotics of , are not a problem.
Clearly some conditions on are required to have a central limit theorem; if and , acts on in the usual way by cycling . The number of cycles is with probability . This is a discrete distribution which depends heavily on the divisibility properties of . No simple limit theorem seems possible.
We content ourselves with the following, taking general for fixed and . This includes all of our introductory examples and the argument generalizes to groups such as .
Theorem 4.
Let be fixed and consider . For chosen uniformly in , the number of cycles of acting on satisfies
where and are the mean and variance of the number of cycles of a random element of .
Proof.
Let be i.i.d. random variables chosen from the distribution with generating function with means subtracted. Let be chosen from . From Goncharov’s theorem,
Let . The classical central limit theorem implies, as ,
Write . Then
It must only be shown that
For this, let , set and . Write
Using Kolmogorov’s inequality for the first two terms, this is bounded above by
From , the last term tends to zero and the proof is complete. ∎
Note.
The only property of the cycle distribution of that was used in the proof was that and that is concentrated about its mean. Thus the theorem holds for, e.g., and many other choices.
3 A General Theorem
This section derives a theorem for the joint distribution of the number of cycles for . The theorem is given in two forms:
1. an exact result for when is randomized;
2. a limiting result when is large.
Theorem 5.
Fix and . Let . Let be the union of , . For , define on as follows: pick with , then pick uniformly. For , under , the joint distribution of is the same as the joint distribution of
(9)
(10)
The sum in (9) is over partitions of and that multiply to , and so is finite. is the probability that a uniformly chosen element of has cycle type .
Note that, as in Example 1 of the Introduction, the same may appear in several of the , so these are dependent compound Poisson variates as in Section 2.3.
Theorem 6.
Fix and . Let . Pick uniformly and let . Then, as , the joint distribution of converges (weakly) to the law of with as in (10) with .
A quantitative form of Theorem 6, showing that the total variation distance between the laws , is at most , is given in Section 4.
The proofs of Theorems 5 and 6 use moments. It may help the reader to recall that if are independent Poisson , then
Similarly, if for every non-empty subset , are independent Poisson and , , then
Example 3.1.
The notation in Theorems 5 and 6 is daunting. Consider Theorem 6 with as in Theorem 1 of the Introduction. Then, with respective chances
Consider . The sum in (9) has only the term , so with , .
Consider . The sum has only the terms , or , . So . The three terms are independent with
This matches the claims in Theorem 1.
Proof of Theorem 5.
Proof of Theorem 6.
As , the right-hand side of (10) converges to the generating function of the claimed compound Poisson distributions. It must be shown that the coefficient of on the left-hand side converges to this same limit. This may be argued from the Tauberian arguments of [3] or [34].
However, Section 4 below gives an independent coupling argument showing that, for any fixed , the joint distribution of converges to some limit law as tends to infinity. Thus, setting equal to 1 and letting , the generating functions converge to a limit as tends to infinity. Now, Abel’s theorem shows that
By inspection, from Theorem 5, . This completes the proof. ∎
4 Coupling for Wreath Products
4.1 Introduction
This section gives a coupling proof of the convergence of the cycle lengths of a random permutation to the compound Poisson distributions of Theorem 6. The proof comes with a rate of convergence which seems difficult to derive from the generating function approach of Section 3. The argument here is also needed to complete the proof of Theorem 6.
Here is a special case of the main result. Let . Suppose has -cycles as a permutation of . Let be the compound Poisson distributions of Theorem 6.
Theorem 7.
With notation as above, for uniform in , for any , let be the distribution of and be the distribution of . Then
Theorem 7, and variations when grows and is fixed, are proved by coupling. The coupling is reminiscent of the well-known Feller coupling, which is briefly recalled.
Build a uniformly distributed permutation directly in cycle form as follows. Starting with , choose uniformly from . Then choose uniformly from . Continue in this manner until , when the cycle is completed, and then start over with all the unused elements. With , a typical construction is
with steps 0 through 6. At step three, is chosen as the image of , so the cycle closes and another begins with an arbitrary element out of . In step four this element is chosen as the image of itself, creating a fixed point and continuing with a new cycle. If is one or zero as a new cycle is started at step , clearly , .
The joint distribution of the cycle lengths can be neatly described just using the independent random variables . Following the careful description of [35], say an -spacing occurs in a binary sequence starting at position and ending at position if . If is the number of spacings in , then
where is uniformly chosen in . A coupling argument shows that the limit distribution of cycle counts can then be read from an infinite random binary string with . A surprising theorem of Ignatov (see [35]) says that the number of blocks in is (exactly) and that these are independent as varies (!). All of this, and extensions to Ewens-distributed random permutations, is wonderfully explained in [35], which also includes a scholarly review of the literature.
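A direct implementation of this construction (our own code): build the cycles sequentially, recording a one each time a cycle closes; the cycle lengths are then the spacings between successive ones, and, for example, the number of fixed points of the sampled permutation is already close to Poisson(1) for moderate n.

```python
import random
from collections import Counter

def feller(n, rng):
    """Sample the cycle lengths of a uniform permutation of {1,...,n} by the sequential
    construction; also return the closure indicators (1 when a cycle closes)."""
    unused = list(range(1, n + 1))
    lengths, xi = [], []
    while unused:
        unused.pop(0)                       # open a new cycle at the smallest unused element
        length = 1
        while True:
            # the next image is uniform over: close the cycle, or any still-unused element
            choice = rng.randrange(len(unused) + 1)
            if choice == len(unused):       # the cycle closes
                xi.append(1)
                lengths.append(length)
                break
            xi.append(0)
            unused.pop(choice)
            length += 1
    return lengths, xi

rng = random.Random(3)
n, reps = 8, 30000
fixed_points = Counter()
for _ in range(reps):
    lengths, _ = feller(n, rng)
    fixed_points[lengths.count(1)] += 1
print({j: round(fixed_points[j] / reps, 3) for j in range(5)})
# compare with Poisson(1): 0.368, 0.368, 0.184, 0.061, 0.015
```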
4.2 Cycles from binary indicators
For clarity we single out here the simplified construction to sample the cycle type of a random using binary indicators. Showing that these indicators indeed arise from sampling the permutation itself is relegated to Section 4.4. Let be distributed independently and be the cycle types of i.i.d. -permutations. Let be the event that there is an -space in starting at position .
Proposition 4.
With the construction above, for , set
(11)
Then there is equality of joint distributions,
where is chosen uniformly in and has -cycles.
Note.
The (and so the cycle indices ) have somewhat complex dependence, even in the large- limit (see Theorem 1).
4.3 Coupling to the limit
This section provides a coupling proof of Theorem 7 and some refinements. Throughout, let be independent Bernoulli with , . Proposition 4 of Section 4.2 uses these binary indicators (and some independent auxiliary variates ) to construct random variables shown to be equidistributed with the number of -cycles in a uniform element of . The construction only needs , but the formula defining works for infinite sequences as well. Let be the random variables constructed from the infinite sequence by the formulae of Proposition 4.
Theorem 8.
For any , , let be the distribution of , , and be the distribution of , . Then
Proof.
Mirroring the proof of [5, Lemma 1.4] note that is only possible due to the deterministic for , while for the th indicator term remains random. This can only lead to extra cycles with lengths in if the space preceding this 1 is of length at most . For fixed ,
Now, a union bound shows that the probability that the run of zeros preceding is of length at most is at most .
Similarly, for , can only occur if a spacing in occurs after . Now,
Thus, union bounding over gives that the probability of an extra space in the tail is at most . Combining bounds,
Corollary 1 (Proof of Theorem 7).
The preceding argument shows that the joint distribution of converges to the joint distribution of (in the sense of finite-dimensional distributions). Proposition 4 identifies the first with the joint distribution of with uniform in . Theorem 6 shows that the Abel limit of the generating function of converges to the generating function of the compound Poisson distributions of Theorem 7. Since Theorem 8 shows that the sequence of random variables converges to “something” (here ), that something must be the claimed compound Poisson limits.
Theorem 8 is uniform in but requires . For the special case , it is possible to obtain a convergence theorem for fixed as . In this case we sample the of Section 4.2 using sequences just as we did with to sample from ; that is, are Bernoulli with independently. The fact that the number of -spaces in is equal in distribution to the number of -cycles in follows from Proposition 4 with (and is also just the original Feller coupling). Then we define in analogy with : letting be the event that there is an -space in starting at
Theorem 9.
Let be uniform in of cycle type . For fixed , let be the joint distribution of . Let be the joint distribution of . Suppose finally that . Then
Theorem 9 is uniform in , and so we may allow with . Thus, for uniform in , as above, and the joint distribution of :
Corollary 2.
Proof of Theorem 9.
Let . By Ignatov’s theorem, the number of -spacings in are independent , . From the coupling, the number of spacings in of size at most is stochastically dominated by , where are independent . This in turn is stochastically dominated by . By Bennett’s inequality,
where . Let be the event that there are at most spacings in of size at most . Then Theorem 8 may be applied for each with a union bound to give
This yields
Theorem 10.
Let be uniform in of cycle type . As both the joint distribution of converges (weakly) to the law of with
where mutually independently for and is the Poisson PMF with parameter evaluated at .
As appealing and useful as these coupling bounds are, it must be remembered that they can be exponentially far off. For uniform in , the law of , ,
See [37] for detailed discussion.
4.4 A Feller coupling for Wreath products
This section provides a direct sequential description of a uniform in cycle form. Keeping track of the steps of the algorithm yields the binary indicators (and ) used previously.
Recall that , for , . Then acts on by using to permute , to permute , to permute , and finally permutes these blocks.
Example 4.1.
When , acts on by first permuting within blocks as
and then using on the blocks to get
In cycle notation with , .
The coupling depends on a sequential procedure for building up the cycle representation directly, one step at a time. This is described below, where the case will be used as a running example.
1. Begin by putting the blocks in an urn. The fixed places for these blocks will be called “block places”.
2. Pick a block at random from the urn, remove it, permute the elements by a uniformly chosen , and place it under place 1. In the example, if block 4 is chosen first, and id, the permutation starts
3. Construct a bookkeeping array of “open cycles” . In the example: .
4. Pick a random remaining block from the urn, permute it by a uniformly chosen , and place it under the previously chosen block. Then extend the open cycles in the bookkeeping array. In the example, this results in
if the block containing is chosen second and permutes this to . The bookkeeping array becomes .
5. Continue in this fashion, extending the open cycles in the bookkeeping array until the block in place 1 is pulled from the urn, forcing them to close. In the example this may look like
and .
6. If there are more blocks left in the urn, start again by setting the leftmost unused block as the new “place 1” (head of new cycles). Proceed to pick images uniformly from the urn until closure as above. In the example there is only one choice for its image (itself), so the new cycles are immediately closed
and
It’s clear that the chosen is uniform in . Inspecting the above, one may see that only two pieces of information are relevant to the final cycle lengths. Consider the time of first closure ( in the example). The value of determines the lengths of the partial cycles immediately before closing. The cycle type of determines the way in which they close (merging). However, since the are all i.i.d. uniform permutations, this composition is simply a uniform -permutation. Of course, these observations hold for all subsequent closure times as well. Thus, if one only cares about the cycle type of , one may throw away all information except the closure times and the cycle type of a single -permutation for each closure.
Given this, we may directly sample these random variables without sampling the whole permutation. The closure times are the 1’s in the sequence from Section 4.2 with . We must add a terminal one for this spacing interpretation to work, and we also choose to write the sequence so that the 1’s corresponding to closure times arrive from right to left, in the interest of making these sequences infinite later. When is the left endpoint of an -space, the corresponding from Section 4.2 represents the cycle type of , which by inspection yields cycles of length as in (11). This proves Proposition 4.
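In code, the reduction reads as follows (our own sketch; the helper names are ours): sample the block-level cycle lengths from a uniform outer permutation, draw one uniform inner permutation per closure, and let each of its m-cycles contribute a single cycle of length ℓ·m, where ℓ is the block-cycle length. For small k and n this matches a brute-force construction of a uniform element of the Wreath product acting on kn points.

```python
import random
from collections import Counter

def cycle_lengths(one_line):
    """Cycle lengths of a 0-indexed one-line permutation."""
    seen, out = set(), []
    for s in range(len(one_line)):
        if s in seen:
            continue
        j, ell = s, 0
        while j not in seen:
            seen.add(j)
            ell += 1
            j = one_line[j]
        out.append(ell)
    return out

def direct_cycle_type(k, n, rng):
    """Cycle type of a uniform element of S_k wr S_n acting on kn points, built directly."""
    outer = list(range(n)); rng.shuffle(outer)
    img = [0] * (k * n)
    for b in range(n):
        inner = list(range(k)); rng.shuffle(inner)
        for x in range(k):
            img[k * b + x] = k * outer[b] + inner[x]
    return tuple(sorted(cycle_lengths(img)))

def coupled_cycle_type(k, n, rng):
    """Same law via the reduction above: for each block-cycle of length ell draw a uniform
    eta in S_k; each m-cycle of eta contributes one cycle of length ell*m."""
    tau = list(range(n)); rng.shuffle(tau)
    out = []
    for ell in cycle_lengths(tau):
        eta = list(range(k)); rng.shuffle(eta)
        out.extend(ell * m for m in cycle_lengths(eta))
    return tuple(sorted(out))

rng = random.Random(7)
k, n, reps = 2, 3, 40000
a = Counter(direct_cycle_type(k, n, rng) for _ in range(reps))
b = Counter(coupled_cycle_type(k, n, rng) for _ in range(reps))
print(sorted((t, round(a[t] / reps, 3), round(b[t] / reps, 3)) for t in set(a) | set(b)))
```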
5 Three Theorems About Patterns
There are myriad theorems about the distribution of various properties of random permutations. Most all can be adapted (and studied) for the Wreath products considered here. For a first stab at the distribution of the longest increasing subsequence, see [38]. This brief section treats two basic statistics: descents and inversions for permutations induced by Wreath products. It also treats a different action of . For simplicity, we treat throughout the case
but most arguments extend in a transparent way to with .
5.1 Descents
The “up/down” pattern in a permutation is captured by the number of descents. For , . If is uniform in ,
(12)
and, normalized by its mean and standard deviation, has a limiting standard normal distribution when is large. Proofs and extensive further discussion are in [39]. The extension to is straightforward:
Theorem 11.
Let be uniformly chosen in . Then
and, normalized by its mean and standard deviation, has a limiting standard normal distribution when is large.
Proof.
Write for , . By inspection,
All the summands are independent. The first sum has mean , variance and, when normalized, a limiting standard normal distribution. The term has mean , variance and, when is large, normalized, a limiting standard normal distribution. This implies
has a limiting standard normal distribution. ∎
The preceding argument works for fixed or growing with .
5.2 Inversions
For , the number of inversions in , , is defined as . This is a standard measure of disarray, widely used in statistical testing as Kendall’s tau. Extensive references and further developments are in [40]. As is shown there (and classically),
and, normalized by its mean and standard deviation, has a limiting standard normal distribution. Again, the extension to is straightforward.
Theorem 12.
Let be uniformly chosen in . Then
and, normalized by its mean and standard deviation, has a limiting standard normal distribution.
Proof.
The proofs here lean on the classical central limit theorem and normal limit theorems for , in . All of these ingredients are available with Berry–Esseen-type errors. These transfer in a straightforward way.
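The exact means and variances appear in Theorems 11 and 12; the Monte Carlo sketch below (our own code) simply samples uniform elements of the Wreath product in one-line form on kn points and records their descents and inversions, whose standardized histograms can be compared with the normal approximation.

```python
import random

def uniform_wreath_one_line(k, n, rng):
    """Uniform element of S_k wr S_n as a permutation of {0,...,kn-1} in one-line form."""
    outer = list(range(n)); rng.shuffle(outer)
    img = [0] * (k * n)
    for b in range(n):
        inner = list(range(k)); rng.shuffle(inner)
        for x in range(k):
            img[k * b + x] = k * outer[b] + inner[x]
    return img

def descents(p):
    return sum(p[i] > p[i + 1] for i in range(len(p) - 1))

def inversions(p):
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

rng = random.Random(11)
k, n, reps = 3, 40, 1000
d = [descents(uniform_wreath_one_line(k, n, rng)) for _ in range(reps)]
v = [inversions(uniform_wreath_one_line(k, n, rng)) for _ in range(reps)]
mean = lambda xs: sum(xs) / len(xs)
var = lambda xs: mean([x * x for x in xs]) - mean(xs) ** 2
print(round(mean(d), 1), round(var(d), 1))   # empirical mean and variance of descents
print(round(mean(v), 1), round(var(v), 1))   # empirical mean and variance of inversions
```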
5.3 A different action
John Kingman has pointed to a different appearance of Poisson distributions for the distribution of cycles of large subgroups of . Consider acting on via
The points are permuted by and one may ask about features such as cycles.
Example 5.1.
For , one can picture a deck of cards with A–K of each of the four suits in order, row by row. Permute the rows by and the columns by .
Example 5.2.
Consider , the number of fixed points. For to be fixed, row and column must be fixed so that . For and large, and converge to with and independent distributions. The product of independent Poissons is no longer infinitely divisible, so not a compound Poisson variate.
Example 5.3.
With the preceding notation, for and large, Theorem 13 below gives the joint limiting distribution of . As an example,
with independent variates.
The general form of the limit is a quadratic polynomial in compound Poisson variates. The general result is usefully organized by another result of classical Pólya theory:
Proposition 5.
where denote the least common multiple and greatest common divisor, , . For generalizations of this result to products of subgroups see [41].
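Proposition 5 rests on the classical fact that, under the product action, an i-cycle of the first permutation and a j-cycle of the second combine into gcd(i, j) cycles of length lcm(i, j). A quick numerical verification (our own code):

```python
import random
from math import gcd

def cycle_lengths(one_line):
    """Cycle lengths of a 0-indexed one-line permutation."""
    seen, out = set(), []
    for s in range(len(one_line)):
        if s in seen:
            continue
        j, ell = s, 0
        while j not in seen:
            seen.add(j)
            ell += 1
            j = one_line[j]
        out.append(ell)
    return out

def product_action_cycle_type(sigma, tau):
    """Cycle type of (sigma, tau) acting on pairs (i, j) by (sigma(i), tau(j))."""
    step = {(i, j): (sigma[i], tau[j]) for i in range(len(sigma)) for j in range(len(tau))}
    seen, out = set(), []
    for s in step:
        if s in seen:
            continue
        p, ell = s, 0
        while p not in seen:
            seen.add(p)
            ell += 1
            p = step[p]
        out.append(ell)
    return sorted(out)

def predicted_cycle_type(sigma, tau):
    """Each i-cycle of sigma and j-cycle of tau give gcd(i,j) cycles of length lcm(i,j)."""
    out = []
    for i in cycle_lengths(sigma):
        for j in cycle_lengths(tau):
            out.extend([i * j // gcd(i, j)] * gcd(i, j))
    return sorted(out)

rng = random.Random(5)
for _ in range(200):
    sigma = list(range(6)); rng.shuffle(sigma)
    tau = list(range(8));  rng.shuffle(tau)
    assert product_action_cycle_type(sigma, tau) == predicted_cycle_type(sigma, tau)
print("gcd/lcm rule verified on random examples")
```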
Example 5.4.
The general theorem follows from the proposition using Poissonization as in (7).
Theorem 13.
Pick uniformly in . Let be the number of -cycles in the product action. Then, for and large,
with
(13)
and
Proof.
Using Proposition 5 and (7), the generating function
with
Consider the marginal distribution of . Its generating function is obtained by setting for and . Fix and and consider the exponent of in . This is provided . It is zero otherwise. Thus
For fixed the exponential is the generating function of the compound Poisson distribution
As tends to infinity, under the probability , the joint distribution of tends to the independent Poisson . Now let ; the Tauberian arguments of [3] complete the proof that the marginal distribution of converges to as in (13). The result for the joint distribution is similar and further details are suppressed. ∎
Coupling as in the proof of Theorem 7 also gives
Theorem 14.
For any and uniform in , let be the distribution of and be the distribution of . Then
Proof sketch.
6 Acknowledgments and Funding Sources
Acknowledgments: We thank David Aldous, Sourav Chatterjee, Jason Fulman, Bob Guralnick, Michael Howes, John Kingman, Gérard Letac, Martin Liebeck, Arun Ram and Chenyang Zhong for their help throughout this project.
Funding: This work was supported by the National Science Foundation [grant number 1954042].
References
- [1] V. Goncharov, Sur la distribution des cycles dans les permutations, C. R. (Doklady) Acad. Sci. URSS (N.S.) 35 (1942) 267–269.
- [2] V. Goncharov, Du domaine d’analyse combinatoire, Bull. Acad. Sci. URSS Ser. Math (Izv. Akad. Nauk SSSR) 8 (1944) 3–48.
- [3] L. A. Shepp, S. P. Lloyd, Ordered cycle lengths in a random permutation, Trans. Amer. Math. Soc. 121 (1966) 340–357.
- [4] J. Fulman, Random matrix theory over finite fields, Bull. Amer. Math. Soc. (N.S.) 39 (1) (2002) 51–85. doi:10.1090/S0273-0979-01-00920-X.
- [5] R. Arratia, A. D. Barbour, S. Tavaré, Logarithmic Combinatorial Structures: A Probabilistic Approach, EMS Monographs in Mathematics, European Mathematical Society (EMS), Zürich, 2003. doi:10.4171/000.
- [6] P. Diaconis, M. Simper, Statistical enumeration of groups by double cosets, J. Algebra 607 (2022) 214–246. doi:10.1016/j.jalgebra.2021.05.010.
- [7] P. Diaconis, R. L. Graham, W. M. Kantor, The mathematics of perfect shuffles, Adv. Appl. Math. 4 (2) (1983) 175–196.
- [8] P. W. Diaconis, S. P. Holmes, Matchings and phylogenetic trees, Proc. Natl. Acad. Sci. USA 95 (25) (1998) 14600–14602.
- [9] J. Fulman, R. Guralnick, Derangements in subspace actions of finite classical groups, Transactions of the American Mathematical Society 369 (4) (2017) 2521–2572.
- [10] R. A. Bailey, Design of Comparative Experiments, Vol. 25 of Cambridge Series in Statistical and Probabilistic Mathematics, Cambridge University Press, Cambridge, 2008. doi:10.1017/CBO9780511611483.
- [11] J. D. Dixon, B. Mortimer, Permutation Groups, Vol. 163 of Graduate Texts in Mathematics, Springer-Verlag, New York, 1996. doi:10.1007/978-1-4612-0731-3.
- [12] P. Diaconis, J. Fulman, R. Guralnick, On fixed points of permutations, J. Algebraic Combin. 28 (1) (2008) 189–218.
- [13] M. Abért, B. Virág, Dimension and randomness in groups acting on rooted trees, J. Amer. Math. Soc. 18 (1) (2005) 157–192. doi:10.1090/S0894-0347-04-00467-9.
- [14] D. Bernhardt, A. C. Niemeyer, F. Rober, L. Wollenhaupt, Conjugacy classes and centralisers in wreath products, J. Symbolic Comput. 113 (2022) 97–125. doi:10.1016/j.jsc.2022.02.005.
- [15] G. James, A. Kerber, The Representation Theory of the Symmetric Group, Vol. 16 of Encyclopedia of Mathematics and its Applications, Addison-Wesley Publishing Co., Reading, MA, 1981.
- [16] I. G. Macdonald, Symmetric Functions and Hall Polynomials, 2nd Edition, Oxford Mathematical Monographs, The Clarendon Press Oxford University Press, New York, 1995.
- [17] P. J. Cameron, B. Kuzma, Between the enhanced power graph and the commuting graph, J. Graph Theory 102 (2) (2023) 295–303.
- [18] P. Diaconis, Analysis of a Bose–Einstein Markov chain, Ann. Inst. H. Poincaré Probab. Statist. 41 (3) (2005) 409–418.
- [19] R. Arratia, S. DeSalvo, Probabilistic divide-and-conquer: A new exact simulation method, with integer partitions as an example, Combin. Probab. Comput. 25 (3) (2016) 324–351. doi:10.1017/S0963548315000358.
- [20] H. C. Andersen, P. Diaconis, Hit and run as a unifying device, J. Soc. Fr. Stat. & Rev. Stat. Appl. 148 (4) (2007) 5–28.
- [21] A. D. Barbour, O. Chryssaphinou, Compound Poisson approximation: A user’s guide, Ann. Appl. Probab. 11 (3) (2001) 964–1002. doi:10.1214/aoap/1015345355.
- [22] W. Feller, An Introduction to Probability Theory and its Applications. Vol. I, 3rd Edition, John Wiley & Sons Inc., New York, 1968.
- [23] M. Dwass, H. Teicher, On infinitely divisible random vectors, Ann. Math. Statist. 28 (1957) 461–470. doi:10.1214/aoms/1177706974.
- [24] G. Letac, Stationary sequences with simple joint Poisson distributions, J. Statist. Plann. Inference 90 (1) (2000) 1–20. doi:10.1016/S0378-3758(00)00106-3.
- [25] R. S. Ellis, Inequalities for multivariate compound Poisson distributions, Ann. Probab. 16 (2) (1988) 658–661.
- [26] L. A. Goldberg, M. Jerrum, The “Burnside process” converges slowly, Combin. Probab. Comput. 11 (1) (2002) 21–34. doi:10.1017/S096354830100493X.
- [27] G. M. Constantine, Combinatorial Theory and Statistical Design, Wiley Series in Probability and Mathematical Statistics, John Wiley & Sons, Inc., New York, 1987.
- [28] N. G. de Bruijn, Pólya’s theory of counting, in: E. F. Beckenbach (Ed.), Applied Combinatorial Mathematics, 1964, pp. 144–184.
- [29] G. Pólya, R. C. Read, Combinatorial Enumeration of Groups, Graphs, and Chemical Compounds, Springer-Verlag, New York, 1987. doi:10.1007/978-1-4612-4664-0.
- [30] P. Hanlon, A cycle index sum inversion theorem, J. Combin. Theory Ser. A 30 (3) (1981) 248–269. doi:10.1016/0097-3165(81)90021-2.
- [31] P. Hanlon, The characters of the wreath product group acting on the homology groups of the Dowling lattices, J. Algebra 91 (2) (1984) 430–463. doi:10.1016/0021-8693(84)90113-3.
- [32] A. Gut, Stopped Random Walks: Limit Theorems and Applications, 2nd Edition, Springer Series in Operations Research and Financial Engineering, Springer, New York, 2009. doi:10.1007/978-0-387-87835-5.
- [33] A. Gut, Anscombe’s theorem 60 years later, Sequential Anal. 31 (3) (2012) 368–396.
- [34] B. Fristedt, The structure of random partitions of large integers, Trans. Amer. Math. Soc. 337 (2) (1993) 703–735. doi:10.2307/2154239.
- [35] J. Najnudel, J. Pitman, Feller coupling of cycles and Poisson spacings, arXiv e-prints (2019) arXiv:1907.09587, doi:10.48550/arXiv.1907.09587.
- [36] N. Tung, Forthcoming manuscript, Stanford University.
- [37] P. Diaconis, L. Miclo, On a Markov construction of couplings, arXiv e-prints (2023) arXiv:2305.02580, doi:10.48550/arXiv.2305.02580.
- [38] S. Chatterjee, P. Diaconis, A Vershik–Kerov theorem for wreath products (2024). arXiv:2408.04364. URL https://arxiv.org/abs/2408.04364
- [39] S. Chatterjee, P. Diaconis, A central limit theorem for a new statistic on permutations, Indian J. Pure Appl. Math. 48 (4) (2017) 561–573. doi:10.1007/s13226-017-0246-3.
- [40] P. Diaconis, Applications of group representations to statistical problems, in: Proceedings of the International Congress of Mathematicians, Vol. I, II (Kyoto, 1990), Math. Soc. Japan, Tokyo, 1991, pp. 1037–1048.
- [41] W.-D. Wei, J.-Y. Xu, Cycle index of direct product of permutation groups and number of equivalence classes of subsets of Z_v, Discrete Mathematics 123 (1–3) (1993) 179–188.