CONNECTIVITY OF ALTERNATING SIGN TRIANGLES
Abstract
Alternating sign triangles were introduced by Carroll and Speyer in relation to the cube recurrence, by analogy to alternating sign matrices for the octahedron recurrence. In this paper, we prove the connectivity of alternating sign triangles, which is analogous to the connectivity of alternating sign matrices.
Keywords: Alternating sign triangle, cube recurrence
1 Introduction
The octahedron recurrence is given by the initial conditions $f_{i,j,k} = x_{i,j,k}$ for $k \in \{-1, 0\}$ and
$$f_{i,j,k}\, f_{i,j,k-2} = f_{i-1,j,k-1}\, f_{i+1,j,k-1} + f_{i,j-1,k-1}\, f_{i,j+1,k-1}.$$
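For intuition, here is a minimal numerical sketch of this recurrence in Python; the all-ones initial data, the window size, and the function names are our own choices for illustration, not from the source.

from fractions import Fraction
from functools import lru_cache

# Initial data x[(i, j, k)] on the two layers k = -1 and k = 0; all ones, on a small window (our choice).
x = {(i, j, k): Fraction(1) for i in range(-6, 7) for j in range(-6, 7) for k in (-1, 0)}

@lru_cache(maxsize=None)
def f(i, j, k):
    # f(i,j,k) * f(i,j,k-2) = f(i-1,j,k-1)*f(i+1,j,k-1) + f(i,j-1,k-1)*f(i,j+1,k-1)
    if k in (-1, 0):
        return x[(i, j, k)]
    return (f(i - 1, j, k - 1) * f(i + 1, j, k - 1)
            + f(i, j - 1, k - 1) * f(i, j + 1, k - 1)) / f(i, j, k - 2)

print(f(0, 0, 3))  # 64 with all-ones initial data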
An alternating sign matrix is a square matrix that satisfies:
1. all entries are $0$, $1$, or $-1$,
2. every row and column has sum $1$,
3. in every row and column the non-zero entries alternate in sign.
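These conditions are easy to check mechanically. The following Python helper (ours, for illustration) verifies them for a matrix given as a list of rows.

def is_alternating_sign_matrix(m):
    # m is a square matrix given as a list of rows.
    n = len(m)
    if any(len(row) != n for row in m):
        return False
    lines = list(m) + [[m[i][j] for i in range(n)] for j in range(n)]  # rows, then columns
    for line in lines:
        if any(v not in (-1, 0, 1) for v in line):                 # condition 1
            return False
        if sum(line) != 1:                                         # condition 2
            return False
        nonzero = [v for v in line if v != 0]
        if any(a == b for a, b in zip(nonzero, nonzero[1:])):      # condition 3
            return False
    return True

print(is_alternating_sign_matrix([[0, 1, 0], [1, -1, 1], [0, 1, 0]]))  # True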
In [1], Robbins and Rumsey found that the exponents of the initial variables in any monomial of $f_{i,j,k}$ form an alternating sign matrix. In [2], the following theorem about alternating sign matrices was proven:
Theorem 1.
For every two alternating sign matrices, one can be obtained from the other by sequentially adding or subtracting the following sub-matrix:
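For concreteness, here is a small worked example under the assumption (ours) that the sub-matrix in question is the $2 \times 2$ block $\left(\begin{smallmatrix} 1 & -1 \\ -1 & 1 \end{smallmatrix}\right)$, added to or subtracted from a contiguous $2 \times 2$ region: subtracting it from the top-left corner of the $3 \times 3$ identity matrix gives
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} - \begin{pmatrix} 1 & -1 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix},$$
and both the starting and the resulting matrices are alternating sign matrices.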
In [3], James Propp introduced another recurrence called the cube recurrence, which is given by
$$f_{i,j,k}\, f_{i-1,j-1,k-1} = f_{i-1,j,k}\, f_{i,j-1,k-1} + f_{i,j-1,k}\, f_{i-1,j,k-1} + f_{i,j,k-1}\, f_{i-1,j-1,k}$$
for $i + j + k \geq 2$ and the initial conditions $f_{i,j,k} = x_{i,j,k}$ for $-1 \leq i + j + k \leq 1$.
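As with the octahedron recurrence, a small numerical sketch may help; the all-ones initial data and the window are again our own choices for illustration.

from fractions import Fraction
from functools import lru_cache

# Initial data on the layers -1 <= i + j + k <= 1; all ones, on a small window (our choice).
x = {(i, j, k): Fraction(1)
     for i in range(-4, 5) for j in range(-4, 5) for k in range(-4, 5)
     if -1 <= i + j + k <= 1}

@lru_cache(maxsize=None)
def f(i, j, k):
    # f(i,j,k)*f(i-1,j-1,k-1) = f(i-1,j,k)*f(i,j-1,k-1) + f(i,j-1,k)*f(i-1,j,k-1) + f(i,j,k-1)*f(i-1,j-1,k)
    if -1 <= i + j + k <= 1:
        return x[(i, j, k)]
    return (f(i - 1, j, k) * f(i, j - 1, k - 1)
            + f(i, j - 1, k) * f(i - 1, j, k - 1)
            + f(i, j, k - 1) * f(i - 1, j - 1, k)) / f(i - 1, j - 1, k - 1)

print(f(1, 1, 1))  # 9 with all-ones initial data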
In [4], Carroll and Speyer introduced a combinatorial object that describes the cube recurrence, called a grove, which will be defined rigorously in Section 2. Carroll and Speyer proved that $f_{i,j,k}$ is a sum of Laurent monomials in the variables $x$, and they observed that, in each monomial, the exponents of the $x$'s form an alternating sign triangle. Some properties of alternating sign triangles can be found in [5].
In this paper, we will prove the following theorem, analogous to Theorem 1, for alternating sign triangles:
Theorem 2.
For every two alternating sign triangles, one can be obtained from the other by sequentially adding or subtracting one of the following three sub-triangles:
Acknowledgments. I thank Pavlo Pylyavskyy for introducing me to this topic and for his helpful suggestions and continuous support.
2 Background
In [4], Carroll and Speyer defined groves as follows.
Define the lower cone of any point $(i, j, k) \in \mathbb{Z}^3$ to be the set $\{(i', j', k') \in \mathbb{Z}^3 : i' \leq i,\ j' \leq j,\ k' \leq k\}$.
Let be a subset such that, whenever . Let , and define the set of initial conditions
We also define a rhombus to be any set of the form
In addition, define the edges of each rhombus to be the pairs
Now suppose that is a cutoff for . Let be a graph whose vertices are the points in and edges are the edges of all rhombi occurring in . We define an -grove within radius to be a subgraph with the following properties:
• (Completeness) the vertex set of is all of ;
• (Complementarity) for every rhombus, exactly one of its two edges occurs in ;
• (Compactness) for every rhombus all of whose vertices satisfy , the short edge occurs in ;
• (Connectivity) every component of contains exactly one of the following sets of vertices, and conversely, each such set is contained in some component:
  – , and for all with and ;
  – for ;
  – , and for .
Source: [4]
Carroll and Speyer also proved a bijection between groves and simplified groves. A simplified grove within radius , where is a cutoff for and furthermore is odd, is a subgraph of satisfying:
• (Vertex set) the vertex set of is ;
• (Acyclicity) is acyclic;
• (Connectivity) every component of contains exactly one of the following sets of vertices, and conversely, each such set is contained in some component:
  – , and for with and ;
  – ;
  – , and .
Source: [4]
3 Definition
For our convenience, we will redefine a simplified grove of size n to be a graph satisfying:
• (Vertex set) the vertex set of is ;
• (Acyclicity) is acyclic;
• (Connectivity) the boundary vertices can be partitioned into the following sets so that each component of contains exactly one set, and conversely, each set is contained in some component:
  – and ,
  – (west pairs) for ,
  – (east pairs) for ,
  – (south pairs) for ,
  – (middle triplet) if is even.
It can be checked that this new definition gives the same set of simplified groves as Carroll and Speyer's. We also define an upward triangle of size to be a triangle whose vertices are . Similarly, define a downward triangle of size to be a triangle whose vertices are . For simplicity, we will refer to downward and upward triangles of size as downward and upward triangles, respectively.
Now assign to each downward triangle a number , where is the number of edges in that triangle. By the acyclicity condition, the number of edges in every triangle is less than ; hence, the number assigned to each downward triangle is either or . Define an alternating sign triangle to be a configuration of numbers generated by this process. It can be checked that this definition gives the same set of alternating sign triangles as Carroll and Speyer's proposed definition does.
Now we will introduce some definitions that will be useful for our proof. First of all, define the target grove of size n to be the following grove:
We also say that two vertices or two edges are in the same group if they are in the same component of the target grove. Additionally, we define the groups as in Figure 4, as they play an important part in our proof. Next, for every grove , define its difference grove to be the graph in which:
• Edges that appear in but not in the target grove are colored red.
• Edges that appear in the target grove but not in are colored black.
• Edges that appear in both and the target grove are colored blue.
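Concretely, if a grove and the target grove are stored as sets of edges (a representation we choose here purely for illustration), the coloring can be computed by simple set operations:

def difference_grove(grove_edges, target_edges):
    # Both arguments are sets of edges; each edge is a frozenset of its two endpoints.
    red = grove_edges - target_edges     # in the grove but not in the target grove
    black = target_edges - grove_edges   # in the target grove but not in the grove
    blue = grove_edges & target_edges    # in both
    return {"red": red, "black": black, "blue": blue}

# Tiny illustration with made-up vertex labels:
g = {frozenset({"a", "b"}), frozenset({"b", "c"})}
t = {frozenset({"a", "b"}), frozenset({"c", "d"})}
print(difference_grove(g, t))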
Lastly, we define a spin to be one of the following changes:
We also call the middle vertex above the pivot of the spin. It can be seen that spins of type do not change the corresponding alternating sign triangle, while spins of type are equivalent to adding or subtracting one of the three sub-triangles in Theorem 2 to or from the corresponding alternating sign triangle. It can also be seen that if one grove can be obtained from another grove through a sequence of spins, then the latter can also be obtained from the former through a sequence of spins. Hence, instead of proving Theorem 2, we will prove the following theorem:
Theorem 3.
For every grove, we can obtain the target grove after a sequence of spins.
4 Observations
Before proving Theorem 3, we will make some trivial observations:
Observation 1.
Between any two consecutive black edges in a group, and between any black edge and the boundary vertices, there is at least one red edge.
Observation 2.
If an interior vertex has degree 1, we can spin freely with that vertex as the pivot.
Observation 3.
We can move any black edge to its nearest red edge.
5 Proof
We will prove that if there are black edges in the difference grove, then we can reduce the number of black edges.
Case 1: If there is no group with more than one black edge, then we choose a group with one black edge. Clearly, the path connecting the two boundary vertices of this group must contain a segment of red edges. Moreover, the black edge lies between the two endpoints of the segment. Because there is only one black edge, every vertex in the region bounded by the red segment and the group's edges must be in the same component as the group's vertices. This means that we can move the black edge to one of the endpoints of the red segment and then, with that endpoint as the pivot, spin the red edge to the place of the black edge, so the number of black edges is reduced.
Case 2: If there are groups with more than one black edge, then we choose one such group and consider a pair of consecutive black edges. By Observation 1, there has to be at least one red edge between the two black edges. If there is only one red edge, by Observation 3, we can move the two black edges to the red edge. Now the shared vertex of the two black edges and the red edge has degree 1; therefore, by Observation 2, we can spin with that vertex as the pivot to move the red edge to the place of one of the black edges, and the number of black edges is reduced.
On the other hand, if there is more than one red edge, we consider two adjacent red edges and distinguish two sub-cases:
Case 2a: If the two chosen red edges are connected to the same component, then in the other component, between the two red edges, there must be some black edges, since otherwise we would have a cycle. If there is only one black edge, then, similarly to Case 1, we can reduce the number of black edges.
If there is more than one black edge, consider two consecutive ones; by Observation 1, there have to be some red edges between them. Because of our choice of red edges at the beginning of this case, these red edges cannot be connected to our original group; hence, they have to be connected to other groups. If there is only one red edge, then, as above, we can reduce the number of black edges. If there is more than one red edge and they are connected to the same group, then we are back in this case. If they are connected to different groups, then we get the following bad scenario, which will be considered last.
Case 2b: If the two chosen red edges are connected to different components, then in at least one of the two other components, there have to be black edges on both sides of the red edge, because otherwise the boundary vertices of the two components would be connected. If there is only one red edge between the two black edges, then, as above, we can reduce the number of black edges. If there is more than one red edge, we proceed as above: consider two adjacent red edges; if they are connected to the same component, we are back in Case 2a; otherwise, we are back in this case.
Now let us take a moment to review what we have so far. We know that if we reach Case 2b, we will either finish, or get back to Case 2b again, or get to Case 2a. If we reach Case 2a, we will either finish, or get back to Case 2a again, or get to the bad scenario. However, this process cannot go on forever because we only have a finite number of red edges, so the process will eventually stop. Therefore, eventually, we will either finish or reach the bad scenario. Hence, if we can handle the bad scenario, the proof is complete.
Now we will show that the bad scenario is actually not that bad. To handle it, we need to see when it arises. It arises when we have three groups adjacent to each other, and the only place where that happens is at the groups that we defined earlier. Additionally, it only arises when, in our process, we go from group to group and in Case 2a.
Now, if we look closely, we can see that we can apply the same reasoning as in Case 2b to groups and ; that is, in at least one of the two groups, there have to be black edges on both sides of the red edge. Then we can proceed as we did in Case 2b, and we either finish, or get to Case 2a, or get back to Case 2b, and the process continues as above. However, if we keep getting Case 2b, our process will only go either eastward or southward. If at any point we get Case 2a, our process will only go westward until it stops. That means that we will never reach the bad scenario again, which means that when our process stops, and we know that it will stop eventually, we will finish. This completes our proof.
6 Discussion
Looking back at our proof, we can see that we use only two types of spins: spins whose pivot has degree one, and spins among vertices in the same component as in Case 1. In the first type, the direction in which we spin does not matter, so we can always spin in, say, the clockwise direction. In the second type, notice that one of the endpoints of the red segment is to the left of the black edge, so if we use that endpoint as the pivot, we can spin in the clockwise direction only. Therefore, we have the following strengthening of Theorem 3:
Theorem 4.
For every grove, we can obtain the target grove after a sequence of spins in the clockwise direction.
Now let us define the identity triangle, analogous to the identity matrix, to be the following triangle:
Clearly, the target grove gives the identity triangle; therefore, from Theorem 4, we have the following theorem:
Theorem 5.
From every alternating sign triangle, we can obtain the identity triangle by sequentially adding one of the following three sub-triangles:
There are also bijections between alternating sign matrices and some other interesting objects, one of which is the corner-sum matrix, introduced in [1]. Corner-sum matrices are matrices in which the last row and last column consist of the numbers from 1 to n (in order), and within each row and column, each entry is either equal to, or one more than, the preceding entry. By adding (or subtracting) the sub-matrix in Theorem 1 to an alternating sign matrix, we increase (or decrease) by one the entry of the corresponding corner-sum matrix at the position of the sub-matrix's top-left entry.
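Here the (i, j) entry of the corner-sum matrix is the sum of all entries of the alternating sign matrix weakly above and to the left of position (i, j); this standard convention matches the properties stated above. A short Python sketch, with an example matrix of our own choosing:

def corner_sum_matrix(a):
    # c[i][j] = sum of a[k][l] over k <= i and l <= j (0-indexed).
    n = len(a)
    return [[sum(a[k][l] for k in range(i + 1) for l in range(j + 1))
             for j in range(n)] for i in range(n)]

asm = [[0, 1, 0], [1, -1, 1], [0, 1, 0]]
for row in corner_sum_matrix(asm):
    print(row)   # [0, 1, 1] / [1, 1, 2] / [1, 2, 3]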
Another notable object is the monotone triangle, a triangular array in which row i has i entries, the numbers in the bottom row are 1 to n (in order), the numbers in each row are strictly increasing from left to right, and the numbers along diagonals are weakly increasing from left to right (see [3]). Again, by adding (or subtracting) the sub-matrix in Theorem 1 to an alternating sign matrix, we decrease (or increase) one entry of the corresponding monotone triangle by one.
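A commonly used description of this bijection takes row i of the monotone triangle to be the list of columns whose partial column sum over the first i rows of the alternating sign matrix equals 1. A short sketch (again with our own example matrix):

def monotone_triangle(a):
    # Row i (1-indexed) lists the columns whose partial column sum over the first i rows equals 1.
    n = len(a)
    rows = []
    for i in range(1, n + 1):
        partial = [sum(a[k][j] for k in range(i)) for j in range(n)]
        rows.append([j + 1 for j, s in enumerate(partial) if s == 1])
    return rows

asm = [[0, 1, 0], [1, -1, 1], [0, 1, 0]]
print(monotone_triangle(asm))  # [[2], [1, 3], [1, 2, 3]]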
Now that we have proven Theorem 2 for alternating sign triangles, it is possible that there are objects analogous to the corner-sum matrix and the monotone triangle for alternating sign triangles. Additionally, such objects may also allow us to give a complete characterization of alternating sign triangles.
References
- [1] D. Robbins and H. Rumsey, “Determinants and Alternating-Sign Matrices,” Advances in Mathematics 62 (1986), 169–184.
- [2] R. A. Brualdi, K. P. Kiernan, S. A. Meyer, and M. W. Schroeder, “Patterns of alternating sign matrices,” Linear Algebra and its Applications 438 (10) (2013), 3967–3990.
- [3] J. Propp, “The Many Faces of Alternating-Sign Matrices,” Discrete Mathematics and Theoretical Computer Science Proceedings AA (DM-CCG) (2001), 43–58.
- [4] G. Carroll and D. Speyer, “The Cube Recurrence,” arXiv:math/0403417.
- [5] S. Nguyen, “Characterizing Alternating Sign Triangles,” arXiv:2105.03969.