1. Introduction
The work in this paper is motivated by a question in the theory of integer random matrices but is of independent interest to the study of random walks on groups.
Random walks on finite groups are well-studied in the reversible, time-homogeneous, ergodic regime, where the random walk on a group $G$ consists of a product $X_k = s_1 s_2 \cdots s_k$ for $s_1, s_2, \ldots$ i.i.d. drawn from a distribution $\mu$ supported on a generating set of $G$. Such random walks are known to converge to the uniform distribution $U$ on $G$ exponentially quickly. Namely, if we denote by $\mu_k$ the distribution of $X_k$, then
$$\|\mu_k - U\|_2 \le \beta^k \, \|\mu_0 - U\|_2,$$
where $\beta$ is the second-largest singular value of the Markov operator of the random walk and $\|\cdot\|_2$ denotes the $\ell^2$ norm of signed measures as functions $G \to \mathbb{R}$. See [Sal04] for an excellent review of these kinds of walks.
The above inequality comes from looking at norms of convolution operators on the space of signed measures on $G$. If $x$ and $y$ are random elements of $G$ distributed according to $\mu$ and $\nu$ respectively, then $xy$ is distributed according to the convolution
$$(\mu * \nu)(g) = \sum_{h \in G} \mu(h)\, \nu(h^{-1}g).$$
In particular, if the $s_i$ are distributed according to $\mu$, then $X_k = s_1 \cdots s_k$ is distributed according to the $k$-fold convolution $\mu^{*k}$.
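The convolution picture above can be made concrete. The following sketch (a toy instance, not from the paper) iterates convolution by a fixed lazy step on the cyclic group $\mathbb{Z}/5$ and watches the $\ell^2$ distance to the uniform distribution decay geometrically:

```python
import itertools

# Toy illustration: k-fold convolution of a step distribution on Z/5
# (written additively) converging to uniform at an exponential rate.
n = 5
uniform = {g: 1 / n for g in range(n)}

def convolve(mu, nu):
    # (mu * nu)(x) = sum over h + g = x (mod n) of mu(h) * nu(g)
    out = {g: 0.0 for g in range(n)}
    for h, g in itertools.product(range(n), repeat=2):
        out[(h + g) % n] += mu[h] * nu[g]
    return out

def l2_dist(mu, nu):
    return sum((mu[g] - nu[g]) ** 2 for g in range(n)) ** 0.5

# Lazy step; its support {0, 1} generates Z/5.
mu = {0: 0.5, 1: 0.5, 2: 0.0, 3: 0.0, 4: 0.0}
walk = dict(mu)
dists = [l2_dist(walk, uniform)]
for _ in range(30):
    walk = convolve(walk, mu)
    dists.append(l2_dist(walk, uniform))
print(dists[0], dists[-1])  # distance shrinks from about 0.55 to well under 0.01
```

For this step distribution, the observed decay rate matches the second-largest singular value of the associated convolution operator, which is about $0.81$ here.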
Some of the assumptions can be relaxed; for instance, Saloff-Coste and Zúñiga [SZ07] studied convergence of time-inhomogeneous Markov chains, including random walks on finite groups, in the case where each step of the random walk is irreducible (in particular, supported on a generating set of $G$). In that case, if we denote by $\beta_i$ the second-largest singular value of the $i$th step $\mu_i$, then for any initial distribution $\nu$,
$$\| \nu * \mu_1 * \cdots * \mu_k - U \|_2 \le \Big( \prod_{i=1}^{k} \beta_i \Big) \| \nu - U \|_2.$$
The condition that each step of the random walk is supported on a generating set is crucial: if the subgroup generated by the supports of the steps is a proper subgroup of $G$, the random walk will surely stay inside that subgroup. Nevertheless, one may expect that if the supports of all steps taken together generate $G$, the random walk might still equilibrate to the uniform distribution on $G$.
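A minimal instance of this expectation, on $G = \mathbb{Z}/6$ (my toy choice): two steps, each supported in a proper subgroup, whose supports jointly generate the group, and whose convolution is already exactly uniform:

```python
# Neither step below is supported on a generating set of Z/6: the supports
# lie in the proper subgroups {0, 3} and {0, 2, 4}. Together they generate
# Z/6, and one step from each already yields the uniform distribution.
n = 6
mu_a = {g: (0.5 if g in (0, 3) else 0.0) for g in range(n)}
mu_b = {g: (1 / 3 if g in (0, 2, 4) else 0.0) for g in range(n)}

def convolve(mu, nu):
    out = {g: 0.0 for g in range(n)}
    for h in range(n):
        for g in range(n):
            out[(h + g) % n] += mu[h] * nu[g]
    return out

product = convolve(mu_a, mu_b)
print(product)  # every element of Z/6 has probability 1/6
```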
A consequence of our first main result is the following theorem, which relaxes this “generating” assumption by extending part of [SZ07, Theorem 3.5] to some time-inhomogeneous random walks where the probability measures driving each step need not be irreducible:
Theorem 1.1.
Let be a finite group, and let be probability measures on . For each subgroup of , let . Let be a finite set of normal subgroups of such that . Write . Also, for each , let be the second-largest singular value of as an operator on . Let be the uniform distribution on .
If is nonempty for each , we have
|
|
|
We prove a more general version of this result in Theorem 3.1.
In particular, if a time-inhomogeneous random walk on a finite group has steps supported on enough subgroups, then it converges to the uniform distribution on the group with an exponential rate controlled by subgroups that appear infrequently or mix very slowly. Adding more probability measures to the convolution may not improve the convergence rate, but it never makes the bound worse because convolution with a probability measure is non-expansive in the norm. A nice consequence of this is that need not be an exhaustive list of every normal subgroup for which is nonempty.
The main difference between this result and [SZ07, Theorem 3.5] is that [SZ07] relaxes the time-homogeneity assumption for random walks but not the assumption that each step is supported on a generating set for the group. The new condition that the supports of the steps jointly generate is substantially weaker than the assumption that the support of each step generates .
The conditions of Theorem 1.1 can be weakened so that not all the subgroups need to be normal (see Theorem 3.1), but see Example 3.3 for why some hypothesis on the subgroups is necessary.
Our main interest in developing this theorem is an application to limiting distributions of cokernels of random matrices. Wood [Woo19, Theorem 2.9] and Nguyen and Wood [NW22, Theorem 1.1] showed that cokernels of integer-valued random matrices approach a universal limiting distribution in the following sense. Let be a sequence with each a random integer matrix with independent entries (). Wood showed that, under very weak conditions on the distributions of the entries of the , the distribution of the isomorphism class of the random group converges weakly (i.e., at any finite collection of primes) as to the distribution on isomorphism classes of profinite abelian groups defined as follows: if and is a finite abelian -group, then
|
|
|
independently for all primes , where is the -part of . If further , then is supported on isomorphism classes of finite abelian groups, and for finite abelian we have
|
|
|
where denotes the Riemann zeta function. Nguyen and Wood weakened the conditions on the entries and showed strong (pointwise) convergence to . The phenomenon that the limiting distribution of is rather insensitive to the distributions of the entries of is an example of universality. The distributions and are known as the Cohen-Lenstra distributions, and are conjectured to describe the distributions of class groups of imaginary and real quadratic number fields, respectively.
In her 2022 ICM talk, Wood [Woo23, Open Problem 3.10] asks if the universality class of can be extended to cokernels of matrices with some dependent entries. There are a few specific results in this direction. Most recently, Nguyen and Wood [NW22, Theorem 1.1] show that the distribution is universal for Laplacians of Erdős-Rényi random directed graphs. Mészáros [Més20] shows that is universal for Laplacians of random regular directed graphs. Friedman and Washington [FW89] show that the cokernels of the random matrices , where is drawn at random from the multiplicative Haar measure on , approach the -part of as . However, when there is too much dependence in the entries of the random matrices, one gets different (but often related) limiting distributions, for example in the case of symmetric matrices ([Woo14]), Laplacians of random regular undirected graphs ([Més20]), products of independent random integral matrices ([NV24]), and quadratic polynomials in Haar-random matrices ([CK24]). It is natural to ask just how much (and what kind of) dependence is allowed between the entries of sequences of random matrices before their cokernels leave the universality class of .
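The Friedman-Washington phenomenon can be probed numerically: the cokernel of a matrix is trivial at $p$ exactly when the matrix is invertible mod $p$, and for a uniform $n \times n$ matrix over $\mathbb{F}_p$ the proportion of invertible matrices is $\prod_{k=1}^{n}(1 - p^{-k})$. Here is an exhaustive check for the toy case $p = 2$, $n = 2$ (my choice of parameters, for illustration only):

```python
import itertools

# Count invertible 2x2 matrices over F_2 and compare with the product
# formula prod_{k=1..n} (1 - p^{-k}).
p, n = 2, 2

def det2(m):  # m = (a, b, c, d) row-major; determinant mod p
    return (m[0] * m[3] - m[1] * m[2]) % p

count = sum(1 for m in itertools.product(range(p), repeat=n * n) if det2(m) != 0)
total = p ** (n * n)
formula = 1.0
for k in range(1, n + 1):
    formula *= 1 - p ** (-k)
print(count / total, formula)  # both equal 0.375
```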
The main application of Theorem 3.1 in this paper is Theorem 1.2 below, which extends the result of [Woo19] to matrices with some dependence in their rows and columns. We introduce a regularity condition on matrices, -balanced, in Definitions 4.2 and 4.7. Generally, it means that the matrix can be written as a block matrix where the blocks have height at most , width at most , are all independent, and each satisfies some regularity condition depending on . The key detail is that the blocks of the matrix may have dependent entries, as long as there is no dependence between blocks. (The -balanced condition is invariant under permutation of rows and columns, so one can also think of a -balanced matrix as a block matrix which is at most blocks tall, at most blocks wide, and such that the entries of each block are independent of each other.) With this condition, we have:
Theorem 1.2.
Let be an integer. Let be sequences of real numbers such that , , and for some and .
For each integer , let be an -balanced random matrix with entries in . Then the distribution of converges weakly to as . In other words, if , then for every positive integer and every abelian group with exponent dividing we have
|
|
|
The proof of Theorem 1.2 uses the moment method developed in [Woo14] and [Woo19]. Understanding the cokernel of a random integer matrix reduces to finding the probability that each random column maps to zero under an arbitrary surjective group homomorphism for an arbitrary abelian group . To handle dependent columns, we treat several columns at a time and look at the induced surjection . We view the image of a random element of as a random walk in and apply Theorem 3.1 to approximate the distribution of this image. Since the surjection is arbitrary, we have very little control over the distribution of the steps of this walk. In particular, they are almost never supported on all of , which is why we need Theorem 3.1 to handle random walk steps supported on proper subgroups. The -balanced condition allows us to bound the singular values of the associated convolution operators and get quantitative bounds on the error in terms of , , and .
There is a considerable body of literature pertaining to random matrices with complex entries, with analogous universality results about distributions of eigenvalues. If is a sequence of random complex matrices whose entries are independent, with appropriately normalized mean and variance, the empirical distribution of the eigenvalues of converges to the circular law, which is the uniform distribution on the unit disc in [TVK10]. The universality of the circular law for spectra of a wide class of random complex block matrices was proved by Nguyen and O’Rourke in [NO15].
2. Notation and Terminology
For a finite set and , we use to denote the space of signed measures (equivalently, real-valued functions) on , equipped with the norm . When the set is implicit, we write for . Any set map defines a pushforward map by . We say a signed measure is uniform on if for . For a point with the Euclidean metric and a linear subspace , we write for the distance between and its orthogonal projection onto . Note that this is equal to . If is a finite group, any signed measure defines a linear convolution operator (or, if is a probability measure, Markov operator) on given by . When we discuss the second-largest singular value of an operator, we are counting with multiplicity; for example, if the singular values of are , then its second-largest singular value is .
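The Markov operator of a measure and its second-largest singular value can be computed concretely. A sketch for a measure on $\mathbb{Z}/4$ (a toy choice, assuming the convolution-operator convention of this section):

```python
import numpy as np

# Build the matrix of the convolution operator of a probability measure
# on Z/4 (row g, column g + h carries mu(h)) and list its singular values;
# the largest is always 1, and the second-largest controls mixing.
n = 4
mu = np.array([0.5, 0.25, 0.0, 0.25])  # a probability measure on Z/4

M = np.zeros((n, n))
for g in range(n):
    for h in range(n):
        M[g, (g + h) % n] = mu[h]

svals = np.sort(np.linalg.svd(M, compute_uv=False))[::-1]
print(svals)  # descending; first entry 1, second entry is the quantity of interest
```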
For two finite or profinite groups , we write for the set of (continuous) group homomorphisms from to and for the set of (continuous) surjective group homomorphisms from to . For a subset , we denote by the (closed) subgroup of generated by . We refer to the identity element of a group as .
A probability measure or distribution is a measure with total mass 1 (not signed). The uniform distribution on is usually denoted and is the measure on with for . We use for probability and for expectation. We denote by the support of a measure . If a random variable has law , we write .
For a positive integer , we write for the set .
3. Random Walks
This section is devoted to proving the following stronger version of Theorem 1.1 from the introduction:
Theorem 3.1.
Let be a finite group and suppose we have a sequence of surjective homomorphisms
|
|
|
For , define by (so ), and for define by .
Let be probability measures on . Let . For each , let . Let be the uniform distribution on .
For , let be the second largest singular value of the -random walk on . If each is nonempty, we have
|
|
|
In the case where and , we recover the first part of [SZ07, Theorem 3.5]. We postpone the proof that Theorem 3.1 implies Theorem 1.1 until the end of this section.
The following example gives a case that is covered by Theorem 3.1 but not by Theorem 1.1:
Example 3.2.
Consider the dihedral group with . The subgroup is normal, but the subgroup is not normal. We have generated by the image of , so the image of in the quotient is normal. Let be the quotient map. Let be a measure on with second-largest singular value . Consider the following random walk on :
-
•
For odd, .
-
•
For even, and .
Say is even. In matrix form the operator on is given by . It is symmetric, so the singular values are just the absolute values of the eigenvalues. The vector is a -eigenvector. Thus, the singular values of are 1 and , so . Theorem 3.1 says that
|
|
|
In particular, if , the random walk mixes on half as fast as the -random walk mixes on .
Although Theorem 3.1 makes weaker assumptions on the subgroups than Theorem 1.1, it is not possible to fully remove the normality assumption, as the following example shows:
Example 3.3.
Consider the alternating group . Recall that is generated by the 3-cycles . Consider the following three-step time-inhomogeneous “random walk” on : is uniformly distributed on , is uniformly distributed on , and is uniformly distributed on . The step distributions on the respective cyclic groups all have second-largest singular value zero. However, the product is not uniformly distributed on . Indeed, when acts on the tuple , can never end up in the fourth or fifth position, whereas if were uniform on , would end up in the fourth and fifth position with probability each.
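The obstruction in this example can be verified by brute force. The particular 3-cycles below are an assumption for illustration (the paper's exact choice may differ): each step is uniform on a cyclic group of order 3, so the 3-step product takes at most $3 \cdot 3 \cdot 3 = 27$ values, while $|A_5| = 60$, so the product cannot be uniform even though the three supports jointly generate $A_5$:

```python
import itertools

def cycle3(i, j, k, n=5):
    # the 3-cycle sending i -> j -> k -> i, as a permutation tuple
    perm = list(range(n))
    perm[i], perm[j], perm[k] = j, k, i
    return tuple(perm)

def compose(p, q):  # apply p, then q
    return tuple(q[p[x]] for x in range(5))

def cyclic(g):  # elements of the cyclic group generated by g
    e = tuple(range(5))
    out, cur = [e], g
    while cur != e:
        out.append(cur)
        cur = compose(cur, g)
    return out

a, b, c = cycle3(0, 1, 2), cycle3(1, 2, 3), cycle3(2, 3, 4)

# The three 3-cycles jointly generate all of A5 (closure under products)...
gens = [a, b, c]
group = {tuple(range(5))}
frontier = set(gens)
while frontier:
    group |= frontier
    frontier = {compose(g, s) for g in frontier for s in gens} - group
print(len(group))  # 60 = |A5|

# ...but the support of the 3-step product misses most of A5.
support = {compose(compose(x, y), z)
           for x, y, z in itertools.product(cyclic(a), cyclic(b), cyclic(c))}
print(len(support))  # at most 27 < 60
```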
Consider the space of -valued functions on a finite group . Since is finite, (with the Euclidean norm). Let . Let be the set of signed measures on with . Note that and are parallel affine hyperplanes in . Probability measures on are points in the simplex formed by the part of in the positive orthant. The orthogonal complement to is the line spanned by the uniform probability measure on , and intersects at and nowhere else.
Any measure on acts by on by convolution on the right. If is a probability measure, the convolution operator also sends into itself. The following lemma tells us that, in this case, contracts the distance between points of and .
Lemma 3.4.
Let $G$ be any finite group and $\mu$ a probability measure on $G$. Let $M_\mu \colon \nu \mapsto \nu * \mu$ be the convolution operator and let $\beta$ be the second-largest singular value of $M_\mu$ on $\ell^2(G)$.
-
(1)
If $\nu$ is a signed measure on $G$, then
$$\| \nu * \mu \|_2 \le \| \nu \|_2.$$
-
(2)
If $\nu_1, \nu_2$ are signed measures on $G$ with $\nu_1(G) = \nu_2(G)$, then
$$\| \nu_1 * \mu - \nu_2 * \mu \|_2 \le \beta \, \| \nu_1 - \nu_2 \|_2.$$
Proof.
Part (1) is a case of Young’s convolution inequality for unimodular groups . To prove part (2), we will show that is the operator norm of on the subspace of consisting of signed measures with total mass 0.
Let be the adjoint operator to . Observe that is also a convolution operator, given by , where for .
Thus, is the convolution operator given by . In particular, and are each given by convolution with a probability measure, so they have a shared 1-eigenvector: the uniform measure on . The largest singular value of a real matrix coincides with its operator norm.
By part (1), the operator norm of is at most 1, so the largest eigenvalue of is exactly 1. Let . Since is an eigenvector of , the operator restricts to an operator on , and since , the singular values of are the singular values of with a copy of 1 (the largest singular value of ) excluded. Thus, the operator norm and largest singular value of is the second-largest singular value of , which is . If , then , so
|
|
|
∎
The second part of Lemma 3.4 implies that if the support of generates and is a probability measure, then , so applying to a probability measure times contracts the distance to by a factor of . More generally, if the support of each generates , then applying any combination of in any order contracts the distance to by the appropriate product of factors. However, when the support of is contained in a proper subgroup of , the second-largest singular value of as an operator on is always 1. In this case, Lemma 3.4(2) applied to gives no useful information.
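To see numerically why Lemma 3.4(2) gives no information in this case, here is a toy computation on $\mathbb{Z}/4$ with a measure supported on the proper subgroup $\{0, 2\}$:

```python
import numpy as np

# For a measure supported on a proper subgroup of Z/4, the convolution
# operator has second-largest singular value exactly 1: the walk does not
# contract toward the uniform distribution on the whole group.
n = 4
mu = np.array([0.5, 0.0, 0.5, 0.0])  # supported on the subgroup {0, 2}

M = np.zeros((n, n))
for g in range(n):
    for h in range(n):
        M[g, (g + h) % n] = mu[h]

svals = np.sort(np.linalg.svd(M, compute_uv=False))[::-1]
print(svals[:2])  # both equal to 1
```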
The key idea in the proof of Theorem 3.1 is that even though we cannot say outright that applying the operator moves a probability measure closer to uniform, we will show in Lemma 3.8 that moves a probability measure closer to some (explicit) subspace of containing . This subspace depends on the subgroup generated by the support of . If we choose enough subspaces that their intersection is just , then we will be able to show that successive application of different ’s moves a probability measure closer to that intersection, that is, to the uniform probability measure. The condition that our chosen subspaces intersect in is exactly the condition in Theorem 1.1, or the condition that in Theorem 3.1.
For each subgroup , let be the space of functions on which are uniform on each left coset of (i.e., for and with , ).
Lemma 3.6.
Let be a finite group and be a subgroup. Let . Let be the signed measure on given by for . Then is the orthogonal projection of onto the subspace of . In particular, .
Proof.
We have decompositions
|
|
|
where is given by for all . The projection operator decomposes as a direct sum of projection operators, one for each coset of . In , projection onto is given by inner product with , and we have , which means the projection of onto is .
∎
Lemma 3.7.
Let be a finite group and a normal subgroup of . Let . Then preserves , i.e., .
Proof.
Suppose , so is uniform on each left coset of . Since is normal, its left and right cosets coincide, so is uniform on each right coset of . Say and are independent, so is the distribution of . We have for all and . For , we therefore have
|
|
|
for all and . Summing over shows for all and . So, is uniform on right cosets (hence also left cosets) of and .
∎
Lemma 3.8.
Let be a finite group and a normal subgroup of . Let be probability measures on and . Let . For each , let be the second-largest singular value of on . Let be the set of signed measures on that are uniform on left cosets of . Then
|
|
|
Proof.
We say is the Dirac measure on the identity of , so that for all .
We will show the following statement by induction on :
|
|
|
First, we have . This proves . .
Now suppose holds. Let be the orthogonal projection of onto , described explicitly in Lemma 3.6. Then note that .
There are two cases, depending on whether :
Suppose , so . By Lemma 3.4(1), .
By Lemma 3.7, we have , and
|
|
|
Now suppose , so .
For , define by . Note that is an automorphism of normed spaces, and for signed measures and , we have .
For each left coset of we have
|
|
|
|
|
|
|
|
Since is supported on a subset of , for any we have .
Thus,
|
|
|
By Lemma 3.6, is uniform on with total mass , so is uniform on with total mass . Thus, .
Applying Lemma 3.4(2) on , we get
|
|
|
|
|
|
|
|
Adding up over cosets of gives
|
|
|
Hence,
|
|
|
By induction, we get
|
|
|
∎
Note that if is a subgroup of and is the projection onto the left coset space, then one can identify with as follows:
Lemma 3.9.
Let be a finite group and a subgroup. Let be the space of measures uniform on left cosets of . Let send each element to the corresponding left coset of . Then the map sending to restricts to an isometry of normed spaces . Moreover, is the orthogonal projection map .
Proof.
The map is norm-preserving because if is uniform on left cosets of , then for . Indeed, we have
|
|
|
The inverse map is given by . The fact that is the orthogonal projection map follows from Lemma 3.6.
∎
Identifying with will allow us to prove Theorem 3.1 by induction. Using Lemma 3.8, we can say that a random walk approaches the subspace . Then, we can consider its projection onto as a random walk on . This allows us to ignore all random walk steps supported in . The key ingredient that allows us to combine Lemma 3.8 with the inductive hypothesis is the following lemma:
Lemma 3.10.
Let be a finite group and . Let be the uniform distribution on and let be any signed measure on . Let be the set map sending each element of to the corresponding left coset of . Let be the set of signed measures uniform on left cosets of . Then
|
|
|
Proof.
Let be the orthogonal projection of onto . By Lemma 3.6, we have . Then
|
|
|
|
|
|
|
|
|
|
|
|
∎
The final lemma before the proof of Theorem 3.1 is a pair of facts about pushforwards to quotients:
Lemma 3.11.
Let be a finite group and a normal subgroup. Let be the projection. Then:
-
(1)
If , we have .
-
(2)
Suppose and the second-largest singular value of on is . Then the second-largest singular value of on is at most .
Proof.
-
(1)
We have
|
|
|
Since left cosets of are right cosets too, , so
|
|
|
-
(2)
By restricting to the projection we may as well assume .
Recall (from the proof of Lemma 3.4(2)) that the second-largest singular value of is the operator norm of acting on the subspace of measures with total mass 0. Suppose is the second-largest singular value of . Then
|
|
|
Let be the space of signed measures uniform on cosets of . By Lemma 3.9 and part (1) of this lemma, we have
|
|
|
Then by Lemma 3.9 again, we have
|
|
|
∎
Proof of Theorem 3.1.
We will prove the following statement by induction on :
() |
|
|
|
When , the right hand side of is 0. Since both and are the unique probability measure on , the left hand side is also 0, so holds.
Now suppose holds. We will show holds.
Since is the uniform distribution on , Lemma 3.10 applied to and says
|
|
|
|
where is the subspace of consisting of measures uniform on cosets of . By the inductive hypothesis,
|
|
|
|
|
|
|
|
By Lemma 3.11(1), we have , so by Lemma 3.8 applied to , , and the measures , we get
|
|
|
Hence,
|
|
|
|
|
|
|
|
completing the induction. When , we get
|
|
|
∎
Now we show how Theorem 3.1 implies Theorem 1.1 by giving a corollary slightly stronger than Theorem 1.1.
Corollary 3.12.
Let be a finite group, and let be probability measures on . For each subgroup of , let . Let be a finite set of subgroups of such that and the image of in is a normal subgroup for all . Write . Also, for each , let be the second-largest singular value of as an operator on . Let be the uniform distribution on .
If is nonempty for each , we have
|
|
|
Proof.
We have , so . Let be the projection. For , let be the second-largest singular value of on . By Lemma 3.11(2), we have . Then applying Theorem 3.1 to the sequence
|
|
|
we get that
|
|
|
Then the corollary follows by subadditivity of the square root.
∎
We obtain Theorem 1.1 from Corollary 3.12 because the image of a normal subgroup under a surjection is normal.
4. Universality for Random Groups
The goal of this section is to prove Theorem 1.2.
To prove Theorem 1.2, we will use the moment method of Wood (see [Woo14, Woo19]) as follows. Let be a sequence of random finitely generated abelian groups and be a random finitely generated abelian group. Let be an integer and the set of isomorphism classes of abelian groups with exponent dividing . If for every we have
|
|
|
then for every we have
|
|
|
[Woo19, Theorem 3.1]. The quantity is called the -moment of .
If , then [Woo19, Lemma 3.2] gives
|
|
|
Following this strategy, we obtain Theorem 1.2 as a corollary of Theorem 4.18, which states that if are the cokernels of random matrices satisfying appropriate conditions, then .
When is the cokernel of a random matrix , the problem of counting surjections from into can be attacked with combinatorics. Say , where is a random subgroup of (e.g., the column space of a random integer matrix). Then surjections correspond one-to-one with surjections which vanish on . It follows from linearity of expectation that
|
|
|
In the case of cokernels of random matrices, is the subgroup generated by the columns of the random matrix, viewed as random elements of . But we can also view as a random element of . Given a map , we get by abuse of notation a map applying to each component. Then we have that if and only if . Thus, we want to bound the probabilities . Past work on random matrices with independent entries (e.g., [NW22]) has observed that if is a random tuple in with independent, sufficiently regular components, then for most , the element is close to uniformly distributed. Applying this to each column independently allows us to compute . In this work, we apply the same principle to consider several columns of a random matrix at a time.
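The surjection-counting identity above admits a quick sanity check. In the following toy construction (mine, not the paper's), the target is $\mathbb{Z}/2$ and the matrix is a uniform $2 \times 2$ matrix over $\mathbb{F}_2$; the expected number of nonzero functionals killing every column equals the sum over those functionals of the probability that they kill each column:

```python
import itertools

# E[#{nonzero phi : phi(col) = 0 for all columns}] computed two ways:
# exhaustively over all matrices, and via linearity of expectation.
p, n = 2, 2
phis = [phi for phi in itertools.product(range(p), repeat=n) if any(phi)]

def kills(phi, col):
    return sum(f * c for f, c in zip(phi, col)) % p == 0

total = 0
for entries in itertools.product(range(p), repeat=n * n):
    rows = [entries[0:2], entries[2:4]]
    cols = list(zip(*rows))
    total += sum(1 for phi in phis if all(kills(phi, col) for col in cols))

lhs = total / p ** (n * n)
# Each nonzero phi kills a uniform column with probability 1/p, and the
# n columns are independent.
rhs = sum((1 / p) ** n for _ in phis)
print(lhs, rhs)  # 0.75 0.75
```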
4.1. Balanced elements
The following definition captures the idea that a random element in a group is not too concentrated in a particular coset.
Definition 4.2.
Let be a group. A -valued random variable is -balanced if for any proper subgroup and element , we have .
This definition agrees with the definition in [Woo19] when is a finite cyclic group. Here is an example of an -balanced random variable that does not take values in a cyclic group.
Example 4.3.
Let be a finitely generated group with finite generating set containing the identity, and let be a random variable supported on with . Then is -balanced.
Indeed, suppose is a subgroup of and such that . Then , so . Since contains the identity element of , we must have , and since , we must have .
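Definition 4.2 can be checked by brute force on a small group. The following sketch (a toy measure on $\mathbb{Z}/6$, my choice) computes the largest probability of any coset of any proper subgroup; the measure is then balanced for any tolerance up to one minus that value:

```python
# Check the balancedness of a measure on Z/6 by enumerating all cosets
# of all proper subgroups and recording the largest coset probability.
n = 6
mu = {0: 0.3, 1: 0.2, 2: 0.1, 3: 0.2, 4: 0.1, 5: 0.1}

def proper_subgroups(n):
    # subgroups of Z/n are generated by divisors d of n; d > 1 gives the
    # proper ones (d = n gives the trivial subgroup {0})
    return [{(d * k) % n for k in range(n)} for d in range(2, n + 1) if n % d == 0]

worst = 0.0
for H in proper_subgroups(n):
    for a in range(n):
        coset = {(a + h) % n for h in H}
        worst = max(worst, sum(mu[g] for g in coset))
print(worst)  # 0.5 here, so mu is balanced with parameter up to 0.5
```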
In this paper, we consider integer matrices as elements of the abelian group . For each subset of , we have a quotient map from onto given by taking the entries of a matrix indexed by pairs in . We say that a subset of the entries of a random matrix with indices is jointly -balanced if is -balanced in .
The new definition of -balanced has some desirable properties that help construct new examples of -balanced random variables.
Lemma 4.4.
-
(1)
If is a surjective homomorphism of groups and is -balanced in , then is -balanced in .
-
(2)
Let be groups, be -balanced in , and be -balanced in . If and are independent, then is -balanced in .
Proof.
-
(1)
Let be a coset of a proper subgroup of . Let , so is a coset of a proper subgroup of . Since is -balanced,
|
|
|
as desired.
-
(2)
Let be a coset of a proper subgroup of . Note that
|
|
|
Recall that the intersection of two cosets in a group is either empty or a coset of their intersection. In particular, is either empty or a coset of a subgroup of .
There are two cases, depending on whether is always a proper subset of :
-
i.
If for all :
Condition on for some fixed . Since and are independent, and is -balanced,
|
|
|
-
ii.
If for some , then , so in particular is a subgroup of and we must have . We claim that for some proper subgroup of .
Indeed, let be the projection and let . On one hand, clearly . On the other, if , then for some . Then . Since , we have . Hence . Note that , or else is not a proper subgroup.
Then
|
|
|
Hence, in both cases we have and since this holds for every proper coset , we have that is balanced.
∎
Note that Lemma 4.4 gives us a nice way to build up -balanced matrices. If the entries of a random matrix can be partitioned into independent subsets and each of these subsets of the entries is jointly -balanced, then the whole matrix is -balanced. For example, any matrix with independent, -balanced entries (as in [Woo19]) is -balanced as a matrix.
When a random variable is -balanced, we can get an upper bound on the associated singular value.
Lemma 4.5.
Suppose is a finite group and is -balanced in with distribution . Let be the second largest singular value of the operator on . Then
|
|
|
Proof.
Note that is the square root of the second largest eigenvalue of the operator , where is the adjoint to the operator , given by . The operator is the transition operator for a random walk on , where each step is a difference of two independent copies of .
In particular, note that . For any generating set of , [Sal04, Theorem 6.2] applied to shows that the second-largest eigenvalue of is bounded above by
|
|
|
where and is the diameter of the Cayley graph of . In particular, .
The goal is to choose an appropriate to bound from below. Note that if and are -balanced and independent, then so is (via conditioning on ). In particular, is -balanced.
We proceed iteratively. Having chosen (including the empty set ), if then we are done. Otherwise, since is -balanced, . Choose
|
|
|
Since , we have , so .
Hence we have , so
|
|
|
as desired.
∎
Now we will use the -balanced condition to give a related balancedness condition for matrices that contains information about how balanced and independent the entries are.
Definition 4.6.
Let be a finite set. A partition of is a collection , such that and each is nonempty. We say and . If , write for .
Note that .
The next definition specifies the kinds of restrictions we will give for the matrices in our universality class. The idea is that we can split up the columns of the matrix and then the rows, so that the resulting sections of the matrix are -balanced.
If is an matrix, , and , then is the matrix .
Definition 4.7.
An random matrix with entries in a ring is -balanced if there is a partition of and a partition of with , , and such that each random matrix is -balanced in the additive abelian group and the random matrices are independent.
If then we recover the definition of -balanced from [Woo19] and other related work.
Now we are ready to state the main theorem of this section:
Theorem 4.8.
Let be an integer. Let be sequences of real numbers such that , , and for some and .
For each integer , let be an -balanced random matrix with entries in . Let . Then for all positive integers and abelian groups of exponent dividing we have
|
|
|
Together with Remark 4.1, this gives Theorem 1.2.
As discussed at the beginning of this section, we will prove this by computing the limiting moments of , which involves estimating for maps .
4.2. Bounds for most maps
It turns out that -balanced is a strong enough condition that we can get bounds on for the vast majority of maps .
Definition 4.10.
If is an abelian group with generating set and , we write for the subgroup of . When or we implicitly take to be the “standard basis”.
Let be a partition of and be a finite abelian group. A function is a -code of distance if for any with , we have .
To approximate for codes , we will split the matrices into independent sets of columns. Each such set of random columns gets mapped to something close to uniform in . The following lemma is analogous to [Woo19, Lemma 2.1].
Lemma 4.11.
Let be integers. Let be a finite abelian group and let be a multiple of the exponent of . Let be the number of subgroups of . Let be a real number. Let . Let be a partition of and let . Let be a -code of distance .
Let be an random matrix in such that the matrices are independent and -balanced as random elements of .
Let . Then
|
|
|
Proof.
Let be the standard generating set for . For , let .
The idea is to treat as a random walk in . We have
|
|
|
where is interpreted as an -balanced random element of , a subgroup of .
Let . Note that , so as ranges over there are at most possible values for , each an th power of a subgroup of . Let . Then , and so . Since is a -code of distance , it remains surjective if we discard all of these indices, which means the images of the s with generate . In other words, we have . The subgroups in will be the ones we use in the random walk, applying Theorem 1.1.
By the definition of , for each in we have . By Lemma 4.4, the steps are -balanced, which means that by Lemma 4.5 the second largest singular value of the th step is bounded above: (using the fact that each is supported on a subgroup of ).
Hence by Theorem 1.1 we have
|
|
|
as desired.
∎
To combine these estimates we will use a result in the flavor of [Woo19, Lemma 2.3]:
Lemma 4.12.
Let be real numbers such that . Then
|
|
|
and
|
|
|
Proof.
The first statement follows from the second statement because and . So, we will show the second statement.
First, assume for all . In that case,
|
|
|
Next, assume for all . Using the fact that , we get
|
|
|
We have at and for , so for . Hence, if , then .
Now consider the general case. By replacing each negative with zero, we can only increase the product . On the other hand, by replacing each positive with zero, we can only decrease it. Hence, for general , we get
|
|
|
∎
Applying this lemma with being the error in Lemma 4.11 multiplied by yields an estimate on the probability that the whole matrix maps to zero:
Lemma 4.13.
Let be an integer. Let be a finite abelian group and let be a multiple of the exponent of . Let be sequences of real numbers such that , , and for some .
For a natural number , let . Let be an -balanced random matrix with entries in . Let be the row partition associated to and let be a -code of distance .
Then there are constants depending only on , , , and the sequences such that for all ,
|
|
|
Proof.
Let and be the row and column partitions for as in the definition of -balanced. Let for each . Let . By independence,
|
|
|
For each , let . By Lemma 4.11, we have
|
|
|
|
|
|
|
|
Hence we have
|
|
|
Since and , there is a constant depending only on the proportionality constant in such that for large enough we have so that
Since , for large enough we have so that and, for large enough , .
Finally, since , we also have that for large enough, and . In particular, for large enough,
By Lemma 4.12, we therefore have that for such ,
Since , the expression is uniformly bounded above by some constant for all . Then the appropriate constant can be chosen so that
for all , as desired.
∎
4.3. Bounds for the rest of the maps
This gives results for the case when is a code, but we still need to account for non-codes. To do this, we will show that non-codes make up a negligible proportion of all maps and thus contribute only a small error term to the sum. However, it turns out that splitting maps into codes and non-codes is not enough to get this bound. Instead, as in [Woo19], [NW22], and similar work, we will categorize non-codes by how far they are from being codes.
If is an integer with prime factorization , we write .
Definition 4.14.
If and is a partition of the “standard basis” of , the -depth of is the maximal positive such that there is a with such that , or 1 if there is no such .
We can count the number of that have given -depth:
Lemma 4.15.
If , then the number of of -depth is at most
where is the number of subgroups of of index .
Proof.
For each of -depth , there is a as described in Definition 4.14. There must be some set with and . There are choices of , and for each choice of , there are certainly at most choices of . Since is a partition, uniquely determines , so there are at most choices of for each choice of .
Now we count how many of -depth have each choice of , so fix . There are subgroups of of index , so there are options for .
Fix a subgroup of with index . We now count the number of with . There are at most maps from to , and for each such map, there are at most homomorphisms from to which restrict appropriately. Hence, there are at most
maps with . Combined with the counts of choices of and subgroups of of index , we get the lemma.
∎
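The count in Lemma 4.15 is driven by the number of subgroups of a given index. The ambient group is not legible here, but the kind of count involved can be sanity-checked on a small concrete example: in (Z/p)^2, the subgroups of index p are classically in bijection with the p + 1 lines through the origin in the plane over F_p. The brute-force enumeration below (a toy illustration, not the paper's group) confirms this:

```python
def index_p_subgroups(p):
    """All subgroups of index p (equivalently, order p) in (Z/p)^2.

    Every nonzero element generates a cyclic subgroup of order p, and
    every order-p subgroup arises this way, so collecting the generated
    sets enumerates them all."""
    subs = set()
    for a in range(p):
        for b in range(p):
            if (a, b) != (0, 0):
                subs.add(frozenset(
                    ((k * a) % p, (k * b) % p) for k in range(p)))
    return subs

for p in (2, 3, 5, 7):
    # Classical count: the p + 1 lines through the origin in F_p^2.
    assert len(index_p_subgroups(p)) == p + 1
print("ok")
```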
For non-codes, we do not get precise estimates on , but we can get upper bounds.
Lemma 4.16.
Let be an integer. Let be a finite abelian group and let be a multiple of the exponent of . Let be the number of subgroups of . Let and be real numbers. Let . Let be a partition of and let . Let have -depth with .
Let be an random matrix in such that the matrices are independent and -balanced as random elements of .
Then
Proof.
Since has -depth , there is a with such that . Let . Since , we cannot have that is empty.
Write . So,
We bound the two probabilities on the right side separately. Note that since , we have exactly when . Since , there must be some such that reduces to a nonzero element of . Conditioning on all other for , by the -balanced assumption we have that
For the second probability, let be the partition of induced by . Notice that is a -code of distance . Indeed, suppose there is some with inducing some with . Then the image of would have index strictly greater than , contradicting maximality of .
Now we can apply Lemma 4.11 to the submatrix and the code mapping it into . If is the number of subgroups of and , then conditioning on for gives
and the lemma follows.
∎
Finally, we use Lemma 4.12 again to get a bound for the full matrix:
Lemma 4.17.
Let be an integer. Let be a finite abelian group and let be a multiple of the exponent of . Let be sequences of real numbers such that , , and for some .
For a natural number , let . Let be an -balanced random matrix with entries in . Let be the row partition associated to and let have -depth , with .
Then there is a constant depending only on , , , , and the sequences , such that for all ,
Proof.
Let be the column partition for as in the definition of -balanced. Let for each . By independence,
For each , let . By Lemma 4.16, we have
By the same argument as in the proof of Lemma 4.13, there is some constant depending only on and the sequence such that for large enough (where “large enough” depends on , , , , and the sequences and ), we have
By Lemma 4.12, we therefore have that for such ,
Since , the expression is uniformly bounded above by some constant for all . Then the appropriate constant can be chosen so that
for all . Hence we have
The lemma follows from the fact that for large enough , we have , so .
∎
4.4. Computing the moments
Finally, we can combine all these results to compute the limiting moments for cokernels of -balanced random matrices. The most delicate part of the proof is handling the non-codes; this will involve a careful choice of the sequence .
Theorem 4.18.
Let be an integer. Let be a finite abelian group and let be a multiple of the exponent of (including zero). Let be sequences of real numbers such that , , and for some and .
Then there are such that the following holds for every sufficiently large natural number . Let be an -balanced random matrix with entries in . Then
Proof.
Let . Following the discussion at the beginning of this section, we have
Let be the row and column partitions witnessing the -balancedness of .
Let . Note that then with , so satisfies the conditions for Lemmas 4.13 and 4.17.
For notational convenience, we will allow to change in each line as long as it remains a constant depending only on .
We have
(1)
(2)
(3)
(4)
Wood showed in the proof of [Woo19, Theorem 2.9] that (4) is bounded above by . By Lemma 4.13, we can bound (1):
To bound (2) and (3) we use Lemma 4.15. For each , there are at most
maps of -depth . A standard inequality says that , so for (which is the case for large enough, independent of )
Hence, the number of maps of -depth is at most
Since and as , for large enough (depending only on and ) we have , which means that for large enough ,
bounding (3) as desired.
Finally, we need to bound (2). From Lemma 4.17, we have that if has -depth ,
which means
Since , we have that , so
Hence for large enough (depending only on , , and ), we have and
For the same reason, for large enough (depending only on ) we have
which means
giving us a bound on (2).
Finally, choose and appropriately to obtain the desired result.
∎
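The limiting moment in Theorem 4.18 cannot be read off from this copy, but the moment computation that opens this section — the expected number of surjections from the cokernel onto a fixed group, expanded as a sum over surjections of the probability that the composite map kills the matrix — can be seen in miniature. The sketch below (illustrative names and a toy uniform model, a special case of balancedness, not the paper's general hypotheses) enumerates all 2×2 matrices over Z/2 with independent uniform entries; linearity of expectation predicts the average (2^n − 1)/2^n = 3/4 for n = 2:

```python
from fractions import Fraction
from itertools import product

def num_surjections_to_Z2(M, n):
    """#Sur(cok(M), Z/2) for an n x n matrix M over Z/2.

    Homomorphisms cok(M) -> Z/2 correspond to vectors v in F_2^n with
    v^T M = 0 (mod 2); such a homomorphism is surjective exactly when
    v is nonzero."""
    return sum(
        1
        for v in product((0, 1), repeat=n)
        if any(v)
        and all(sum(v[i] * M[i][j] for i in range(n)) % 2 == 0
                for j in range(n))
    )

n = 2
total = sum(
    num_surjections_to_Z2([entries[i * n:(i + 1) * n] for i in range(n)], n)
    for entries in product((0, 1), repeat=n * n)
)
avg = Fraction(total, 2 ** (n * n))
# Linearity of expectation: each of the 2^n - 1 nonzero v satisfies
# v^T M = 0 with probability 2^{-n}, giving (2^n - 1)/2^n = 3/4 here.
print(avg)
```

The exact enumeration agrees with the linearity-of-expectation prediction; universality results of the kind proved above say that suitably balanced (not just uniform) entry distributions produce the same limiting moments.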
Acknowledgements
The author was supported by the NSF Graduate Research Fellowship Program, the Caltech Summer Undergraduate Research Fellowship program, and the Samuel P. and Frances Krown SURF Fellowship. The author thanks Melanie Wood and Omer Tamuz for mentorship and Alexander Gorokhovsky, Seth Berman, Sandra O’Neill, and Hoi Nguyen for insightful conversations. The author also thanks Gilyoung Cheong, Yifeng Huang, Hoi Nguyen, Roger Van Peski, Will Sawin, and Melanie Wood for helpful comments on an earlier draft of this manuscript.