The 4 × 4 orthostochastic variety
Abstract.
Orthostochastic matrices are the entrywise squares of orthogonal matrices, and naturally arise in various contexts, including notably definite symmetric determinantal representations of real polynomials. However, defining equations for the real variety were previously known only for matrices of size $n \leq 3$. We study the real variety of $4 \times 4$ orthostochastic matrices, and find a minimal defining set of equations consisting of 6 quintics and 3 octics. The techniques used here involve a wide range of both symbolic and computational methods, in computer algebra and numerical algebraic geometry.
Key words and phrases:
orthostochastic matrices, real algebraic varieties, numerical algebraic geometry
2010 Mathematics Subject Classification:
14Q15, 14P05, 15B51, 68W30
1. Introduction
A real square matrix is called orthostochastic if it is the entrywise square of an orthogonal matrix. Explicitly, $A$ is orthostochastic if there exists an orthogonal matrix $O$ with $A_{ij} = O_{ij}^2$ for all $i, j$. It follows immediately that an orthostochastic matrix is doubly stochastic (i.e. has nonnegative entries and all row and column sums equal to 1), as all rows and columns of an orthogonal matrix are unit vectors.
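To make the definition concrete, here is a small Python sketch (illustrative only; the matrix and rotation angles below are arbitrary choices, not taken from the text) verifying that the entrywise square of an orthogonal matrix is doubly stochastic:

```python
import math

def is_doubly_stochastic(B, tol=1e-9):
    """Check nonnegative entries and unit row/column sums."""
    n = len(B)
    if any(b < -tol for row in B for b in row):
        return False
    rows_ok = all(abs(sum(row) - 1) < tol for row in B)
    cols_ok = all(abs(sum(B[i][j] for i in range(n)) - 1) < tol for j in range(n))
    return rows_ok and cols_ok

# A 3x3 orthogonal matrix: a rotation about the z-axis composed with one about x.
t, u = 0.7, 1.2
Rz = [[math.cos(t), -math.sin(t), 0], [math.sin(t), math.cos(t), 0], [0, 0, 1]]
Rx = [[1, 0, 0], [0, math.cos(u), -math.sin(u)], [0, math.sin(u), math.cos(u)]]
O = [[sum(Rz[i][k] * Rx[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

# Entrywise square: an orthostochastic (hence doubly stochastic) matrix
B = [[O[i][j] ** 2 for j in range(3)] for i in range(3)]
assert is_doubly_stochastic(B)
```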
As an interesting and special class of doubly stochastic matrices, orthostochastic (and their unitary generalization, unistochastic) matrices arise naturally in a number of contexts, including spectral theory [13, 14], convex analysis [1, 11], and physics [3, 7]. More recently – and of interest in algebraic geometry – it has been shown that orthostochastic matrices are deeply connected to definite symmetric determinantal representations of real polynomials. Indeed, for hyperbolic plane curves, every monic symmetric determinantal representation arises from certain associated orthostochastic matrices, which yields an effective algorithm [5, 8] for computing such representations for cubic curves.
It is thus of interest to find intrinsic characterizations of the set of orthostochastic matrices. One approach is as follows: by definition, the set of orthostochastic matrices is the image of an algebraic variety (the real orthogonal group) under a polynomial map (coordinate-wise squaring) – thus the Zariski closure of the image is a real algebraic variety. Our goal then is to find equations for this Zariski closure, which we refer to as finding equations for the orthostochastic variety. The equations of the orthostochastic variety are known for $n \leq 3$, which made the computation of determinantal representations of cubic curves possible, but for matrices of size $n \geq 4$ no set of equations defining the orthostochastic variety was previously known. In view of this, our main result is:
Theorem 1.
The $4 \times 4$ orthostochastic variety is defined set-theoretically by 6 quintics and 3 octics, which are known explicitly and defined over $\mathbb{Q}$.
We outline the remainder of the article: in Section 2 we formalize the set-up and notation, and review some basic invariants of the varieties under consideration. In Section 3 we give a procedure for obtaining a naive set of equations which always cut out a superset of the orthostochastic variety, and see how dimension counts give a simple proof of equality in the case $n = 3$. In Section 4 we consider the case $n = 4$, and detail the techniques used to find the equations in Theorem 1. Section 5 concludes with some remarks and remaining questions.
2. Setup and basic invariants
We begin by setting up some notation which will be used in the remainder of the article. We reserve $n$ to denote the size of a square matrix, and all matrices are considered to have real entries. For $n \in \mathbb{N}$, let $O(n)$ (resp. $SO(n)$) denote the group of $n \times n$ orthogonal (resp. special orthogonal) matrices, which is a real algebraic variety.
Next, the set of doubly stochastic matrices is defined by the linear conditions $A_{ij} \geq 0$, $\sum_j A_{ij} = 1$, and $\sum_i A_{ij} = 1$ for all $i, j$. This can be interpreted as saying that an $n \times n$ doubly stochastic matrix is uniquely determined by any $(n-1) \times (n-1)$ submatrix obtained by deleting a row and column. Moreover, recall that the set of doubly stochastic matrices equals the convex hull of all $n \times n$ permutation matrices, which is the so-called Birkhoff polytope $B_n$, of dimension $(n-1)^2$.
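The fact that a doubly stochastic matrix is determined by an $(n-1) \times (n-1)$ submatrix can be checked mechanically; the following Python sketch (a hypothetical helper of our own, not from the paper) completes an upper-left block by filling in the complementary row and column sums:

```python
def complete_doubly_stochastic(M):
    """Extend an (n-1)x(n-1) block to the unique n x n matrix with all
    row and column sums equal to 1 (entries need not remain nonnegative)."""
    m = len(M)
    rows = [row + [1 - sum(row)] for row in M]          # complete each row
    last = [1 - sum(rows[i][j] for i in range(m)) for j in range(m + 1)]
    return rows + [last]                                 # complete each column

M = [[0.5, 0.25], [0.25, 0.5]]
B = complete_doubly_stochastic(M)
assert all(abs(sum(row) - 1) < 1e-12 for row in B)
assert all(abs(sum(B[i][j] for i in range(3)) - 1) < 1e-12 for j in range(3))
```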
We now introduce the varieties in question. Identifying the space of $n \times n$ real matrices with $n^2$-dimensional affine space $\mathbb{A}^{n^2}$, we have the coordinate-wise squaring map $\phi \colon \mathbb{A}^{n^2} \to \mathbb{A}^{n^2}$, $A \mapsto (A_{ij}^2)$.
Restriction to $O(n)$ gives a map $O(n) \to \mathbb{A}^{n^2}$, whose image lies in $B_n$ (note that the image of $\phi$ is contained in the non-negative orthant). Next, the coordinate projection $\pi \colon \mathbb{A}^{n^2} \to \mathbb{A}^{(n-1)^2}$, given by projecting onto the upper-left $(n-1) \times (n-1)$ submatrix, is injective on $B_n$, so we may compose with $\phi$ to obtain $\pi \circ \phi \colon O(n) \to \mathbb{A}^{(n-1)^2}$. Finally, for both theoretical and practical reasons it is convenient to work in projective space, so taking the projective closure of the image yields the map $\phi_n \colon O(n) \to \mathbb{P}^{(n-1)^2}$.
We set $Z_n := \overline{\operatorname{im} \phi_n}$, the Zariski closure of the image of $\phi_n$, which is a projective variety in $\mathbb{P}^{(n-1)^2}$. Concretely, we view $Z_n$ as the projective closure of the set of $(n-1) \times (n-1)$ matrices which are the upper-left submatrix of an $n \times n$ orthostochastic matrix. In this way the linear equations which define (the linear span of) $B_n$ are already accounted for in $Z_n$, and furthermore, the reduction of variables from $n^2$ to $(n-1)^2 + 1$ will be valuable for computation.
Introducing coordinates, we have that the map $\phi_n$ above corresponds algebraically to the ring map
$$\mathbb{R}[y_{1,1}, \ldots, y_{n-1,n-1}, s] \to \mathbb{R}[x_{1,1}, \ldots, x_{n,n}]/I(O(n)), \qquad y_{i,j} \mapsto x_{i,j}^2, \quad s \mapsto 1.$$
Here $\mathbb{R}[x_{i,j}]/I(O(n))$ is the affine coordinate ring of $O(n)$, and $\mathbb{R}[y_{i,j}, s]$ is the homogeneous coordinate ring of $\mathbb{P}^{(n-1)^2}$. Equations for $Z_n$ thus correspond to homogeneous forms in the kernel of this map, and computing these will be our primary goal, since we can then obtain equations for the orthostochastic variety by dehomogenizing with respect to $s$, i.e. setting $s = 1$ (although one caveat arises with the hyperplane at infinity $\{s = 0\}$ – this will be dealt with later).
We next give the dimension and degree of $Z_n$. Note that $\phi$ is a finite map, with general fibers of size $2^{n^2}$, corresponding to sign choices on each of the $n^2$ entries of a potential preimage. This implies that $\dim Z_n = \dim O(n)$, which we now recall: a matrix is orthogonal iff for all $i \leq j$, the dot product of the $i$-th and $j$-th rows equals $\delta_{ij}$, so $O(n)$ is cut out by $\binom{n+1}{2}$ quadrics in $n^2$ variables.
In fact, these quadrics form a regular sequence, so $\dim Z_n = \dim O(n) = n^2 - \binom{n+1}{2} = \binom{n}{2}$.
The degree of $Z_n$ is also known (albeit much more recently): the degree of $O(n)$ was computed in [4], and in [9] a general formula for the degree of a coordinate-wise power of a variety is given; combining these yields the degree of $Z_n$ (tabulated below for small $n$).
Remark 3.
1) Although $O(n)$ is a complete intersection of the quadric generators given above, the degree of $O(n)$ is not the product of the generator degrees, namely $2^{\binom{n+1}{2}}$, since the quadrics defining $O(n)$ are not homogeneous (it is true that the homogenizations of the quadrics define a homogeneous complete intersection of degree $2^{\binom{n+1}{2}}$, but that variety contains many more components besides those arising from $O(n)$).
2) Every orthostochastic matrix is in fact the image under $\phi$ of a special orthogonal matrix – e.g. one may negate the first row of a preimage without changing the coordinate-wise square. This shows that $Z_n$ is irreducible, being (the projective closure of) an image of the irreducible variety $SO(n)$.
For reference, we tabulate the values of $\dim Z_n$ and $\deg Z_n$ for small values of $n$:

| $n$ | 2 | 3 | 4 | 5 |
|---|---|---|---|---|
| $\dim Z_n$ | 1 | 3 | 6 | 10 |
| $\deg Z_n$ | 1 | 4 | 40 | 1536 |
3. The cases $n \leq 3$ and naive equations
We now review what is known about the defining equations of $Z_n$ for $n \leq 3$. For $n = 2$, every doubly stochastic matrix is orthostochastic: indeed, a $2 \times 2$ doubly stochastic matrix is of the form $\begin{pmatrix} a & 1-a \\ 1-a & a \end{pmatrix}$ for some $0 \leq a \leq 1$, and writing $a = \cos^2\theta$ gives $1 - a = \sin^2\theta$, so it is the coordinate-wise square of the rotation matrix $\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$. Thus no equations are needed to define $Z_2$.
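This $2 \times 2$ construction can be verified numerically; in the Python sketch below, the value of $a$ is an arbitrary choice of ours:

```python
import math

a = 0.3  # any value in [0, 1]
B = [[a, 1 - a], [1 - a, a]]

theta = math.acos(math.sqrt(a))           # so that a = cos^2(theta)
R = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

# R is orthogonal and its entrywise square is B
assert abs(R[0][0]*R[0][1] + R[1][0]*R[1][1]) < 1e-12   # columns orthogonal
for i in range(2):
    for j in range(2):
        assert abs(R[i][j]**2 - B[i][j]) < 1e-12
```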
For $n = 3$, the variety $Z_3$ is a $3$-dimensional variety in $\mathbb{P}^4$, hence is a hypersurface, and is defined by a single equation. We now show how to find this equation, first found in [12], following the presentation in Section 3 of [7], which will in fact give a set of “naive” equations for any $n$. Consider a doubly stochastic matrix
$$B = \begin{pmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{pmatrix}.$$
In order for $B$ to be orthostochastic, there must exist sign choices $\epsilon_1, \epsilon_2, \epsilon_3 \in \{\pm 1\}$ such that with these sign choices, the entrywise square roots of the first two columns are orthogonal:
$$\epsilon_1\sqrt{b_{11}b_{12}} + \epsilon_2\sqrt{b_{21}b_{22}} + \epsilon_3\sqrt{b_{31}b_{32}} = 0.$$
By rearranging the above equation and squaring, we can eliminate one sign choice and all but one square root:
$$b_{11}b_{12} + b_{21}b_{22} - b_{31}b_{32} = -2\epsilon_1\epsilon_2\sqrt{b_{11}b_{12}b_{21}b_{22}},$$
and another rearrangement and squaring produces a polynomial relation without any sign choices or square roots:
$$(b_{11}b_{12} + b_{21}b_{22} - b_{31}b_{32})^2 - 4\,b_{11}b_{12}b_{21}b_{22} = 0,$$
and finally, using the fact that $B$ is doubly stochastic, we may express $b_{31}$ (resp. $b_{32}$) in terms of $b_{11}, b_{21}$ (resp. $b_{12}, b_{22}$) and homogenize with respect to $s$ to obtain
$$\big(b_{11}b_{12} + b_{21}b_{22} - (s - b_{11} - b_{21})(s - b_{12} - b_{22})\big)^2 - 4\,b_{11}b_{12}b_{21}b_{22} = 0.$$
This defines a degree $4$ hypersurface in $\mathbb{P}^4$ which contains $Z_3$, and therefore equals $Z_3$ by dimension and degree considerations.
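As a numerical sanity check (not part of the argument), the polynomial relation just derived can be tested on the entrywise square of an orthogonal matrix; the rotation angles below are arbitrary choices of ours:

```python
import math

def rot(axes, t):
    """3x3 rotation by angle t in the coordinate plane spanned by two axes."""
    i, j = axes
    R = [[float(a == b) for b in range(3)] for a in range(3)]
    R[i][i] = R[j][j] = math.cos(t)
    R[i][j], R[j][i] = -math.sin(t), math.sin(t)
    return R

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

# A fairly generic orthogonal matrix, as a product of plane rotations
O = matmul(matmul(rot((0, 1), 0.4), rot((1, 2), 1.1)), rot((0, 1), 2.3))
b = [[O[i][j]**2 for j in range(3)] for i in range(3)]

# products b_{k1} b_{k2} over the first two columns satisfy the derived quartic
p = [b[k][0] * b[k][1] for k in range(3)]
assert abs((p[0] + p[1] - p[2])**2 - 4*p[0]*p[1]) < 1e-12
```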
In general, one can perform the same procedure for any pair of columns or rows of an $n \times n$ doubly stochastic matrix, to obtain a set of equations which any orthostochastic matrix must satisfy:
Definition 4.
For $1 \leq i < j \leq n$, let $C_{i,j}$ be the polynomial obtained by eliminating the signs and square roots from the relations
$$\epsilon_1\sqrt{b_{1i}b_{1j}} + \epsilon_2\sqrt{b_{2i}b_{2j}} + \cdots + \epsilon_n\sqrt{b_{ni}b_{nj}} = 0, \qquad \epsilon_1, \ldots, \epsilon_n \in \{\pm 1\}.$$
Note that $C_{i,j}$ is a polynomial of degree $2^{n-1}$ in the entries of $B$, obtained by repeatedly squaring (and rearranging) the first relation listed $n - 1$ times (as in the case of $n = 3$ above), and that $C_{i,j}$ only involves variables in the $i$-th and $j$-th columns of a generic doubly stochastic matrix $B$. Similarly, by transposing indices we define $R_{i,j}$, which only involves variables in the $i$-th and $j$-th rows, and refer to $\{C_{i,j}, R_{i,j}\}_{i < j}$ as the set of naive equations, of which there are $n(n-1)$ in total.
It follows from the discussion above that every orthostochastic matrix must satisfy the naive equations. More precisely, a doubly stochastic matrix $B$ satisfies $C_{i,j} = 0$ if and only if there exist choices of signs such that with these choices of signs, the entrywise square roots of the $i$-th and $j$-th columns of $B$ become orthogonal. A natural question is:
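The sign-existence condition just described can be tested directly by brute force; the following Python sketch (function and variable names are our own) checks, for a pair of columns, whether some sign vector makes the entrywise square roots orthogonal:

```python
import math
from itertools import product

def satisfies_C(B, i, j, tol=1e-9):
    """Does some sign vector make the entrywise square roots of
    columns i and j of the doubly stochastic matrix B orthogonal?"""
    n = len(B)
    t = [math.sqrt(B[k][i] * B[k][j]) for k in range(n)]
    return any(abs(sum(e*v for e, v in zip(eps, t))) < tol
               for eps in product([1, -1], repeat=n))

# 1/3 J_3 fails C_{0,1}: three terms of size 1/3 cannot be signed to sum to 0
J3 = [[1/3]*3 for _ in range(3)]
assert not satisfies_C(J3, 0, 1)

# while the entrywise square of an orthogonal matrix passes all pairs
c, s = math.cos(0.8), math.sin(0.8)
B = [[c*c, s*s, 0], [s*s, c*c, 0], [0, 0, 1]]
assert all(satisfies_C(B, i, j) for i in range(3) for j in range(i+1, 3))
```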
Question 5 (cf. [7], Remark 3.4).
Do the naive equations define the orthostochastic variety inside the Birkhoff polytope $B_n$? In other words, if a doubly stochastic matrix satisfies $C_{i,j}$ and $R_{i,j}$ for all $i < j$, must it be orthostochastic?
Stated another way: satisfying a single equation $C_{i,j}$ only guarantees sign choices which work for a single pair of columns. Question 5 asks whether existence of local sign choices for each pair of columns (and rows) implies existence of a single global sign choice which simultaneously makes all pairs of columns orthogonal.
As an example of what could go wrong, it is conceivable that (after fixing sign choices on the first two columns as required by $C_{1,2}$) the sign choices imposed on the third column by $C_{1,3}$ could differ from those imposed by $C_{2,3}$. This turns out not to happen in the case $n = 3$, as $Z_3$ is defined by the vanishing of (the homogenization of) $C_{1,2}$. However, as noted in [7], this good behavior is indeed special to small values of $n$, as evidenced by the following proposition (which justifies our use of the term naive):
Proposition 6.
For $n \geq 6$, there exists an $n \times n$ doubly stochastic matrix which satisfies $C_{i,j}$ and $R_{i,j}$ for all $i < j$, but is not orthostochastic. Thus Question 5 has a negative answer for $n \geq 6$.
Proof.
Let $B = J \oplus I$ be block diagonal, where $J$ is the $6 \times 6$ matrix of all $1/6$'s, and $I$ is the identity matrix of size $n - 6$. Then $B$ is doubly stochastic, and it is straightforward to check that $B$ satisfies all $C_{i,j}$ and $R_{i,j}$: since $B$ is symmetric, it suffices to check that each $R_{i,j}$ is satisfied, i.e. there exist local sign choices which make the entrywise square roots of the $i$-th and $j$-th rows orthogonal. If $i > 6$ or $j > 6$, then the $i$-th and $j$-th rows are already orthogonal, and if $i, j \leq 6$, then choosing all positive signs for the $i$-th row and half positive, half negative for the $j$-th row shows that $R_{i,j}$ is satisfied. On the other hand, $J$ is not orthostochastic, since there does not exist a $6 \times 6$ Hadamard matrix, and therefore $B$ is not orthostochastic, since a direct sum of square matrices is orthostochastic iff each summand is orthostochastic. ∎
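The local sign checks in this proof can be verified computationally. The sketch below assumes, following the Hadamard-matrix argument in the proof, that the repeated block is the $6 \times 6$ matrix of all $1/6$'s; it confirms that every pair of rows and columns of $B$ admits local sign choices (non-orthostochasticity itself rests on the nonexistence of a $6 \times 6$ Hadamard matrix, which we do not re-verify here):

```python
import math
from itertools import product

def local_signs_exist(B, i, j, rows=False, tol=1e-9):
    """Sign choices making entrywise square roots of columns (or rows) i, j orthogonal."""
    n = len(B)
    get = (lambda k, c: B[c][k]) if rows else (lambda k, c: B[k][c])
    t = [math.sqrt(get(k, i) * get(k, j)) for k in range(n)]
    return any(abs(sum(e*v for e, v in zip(eps, t))) < tol
               for eps in product([1, -1], repeat=n))

# B = (1/6) J_6 direct sum I_1: passes every local (pairwise) sign test...
n = 7
B = [[1/6]*6 + [0] for _ in range(6)] + [[0]*6 + [1]]
assert all(local_signs_exist(B, i, j, rows=r)
           for r in (False, True) for i in range(n) for j in range(i+1, n))
# ...yet B is not orthostochastic, since no 6x6 Hadamard matrix exists.
```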
Remark 7.
It was noted in [7] that if $B$ satisfies the naive equations but is not orthostochastic, then the same is true of $B \oplus I_1$, and it was also proven there, using Hurwitz–Radon theory, that counterexamples exist for all $n \geq 5$. However, the simple explicit argument with direct sums given in Proposition 6 seems to have gone unnoticed.
4. The case $n = 4$
We now arrive at the main focus of this paper, the case $n = 4$. In this section, most of the methods and arguments we use are computational in nature, so we first remark on the techniques used.
Computations were performed in Macaulay2 [10] and Bertini [2], and involve a mix of symbolic and numerical methods. All symbolic computations were done in exact arithmetic, using Gröbner bases over the rational numbers – this is possible since all the varieties in question are defined over $\mathbb{Q}$. For the purposes of this article, we regard results obtained by symbolic computation as being on par with those obtained by theoretical proof.
On the other hand, numerical computations were done in floating point, to 53 bits of precision (i.e. standard double-precision floating-point). Results obtained by numerical methods are regarded as correct “with probability 1”: the fact that these programs give reproducible results agreeing with theory in thousands of cases allows us to say with overwhelming confidence that the results obtained in this way are correct. Throughout the section we indicate when a computation was used, as well as the type (symbolic or numerical). We encourage the interested reader to confirm the results of the computations, which are included in the appendix and auxiliary files.
Returning to the variety at hand: $Z_4$ is a $6$-dimensional variety in $\mathbb{P}^9$ of degree $40$ (we remark that, by coincidence, $n = 4$ is the unique value of $n$ for which $\deg Z_n = \deg SO(n)$). As before, we have the naive equations for $n = 4$, which constitute $12$ octics whose vanishing locus (after taking projective closure) contains $Z_4$.
Definition 8.
We set $Y_C$ and $Y_R$ to be the varieties defined by the homogenizations of the naive octics corresponding to (all pairs of) columns and rows, respectively. We also define $Y := Y_C \cap Y_R$. Note that these are all subvarieties of the same ambient space $\mathbb{P}^9$ as $Z_4$.
We know that $Z_4 \subseteq Y$, and in fact the two agree generically:
Proposition 9.
If $L \subseteq \mathbb{P}^9$ is a general linear space of codimension $6$, then $Y \cap L$ consists of $40$ points.
Proof.
Since the claim is for a general linear space $L$, we may show this by direct computation. Choosing a linear slice at random (i.e. $6$ linear forms in $10$ variables with random rational coefficients), upon computing a minimal presentation of the ideal of $Y \cap L$ we arrive at an ideal generated by $12$ octics in $4$ variables, whose dimension and degree can easily be computed symbolically to be 0 and 40, respectively. ∎
Proposition 9 shows that $Y$ has the same dimension and degree as $Z_4$. In particular, $Z_4$ (being irreducible) is the unique top-dimensional component of $Y$, and so $Y \neq Z_4$ if and only if $Y$ contains additional components of lower dimension. Thus, if $I_Y$ denotes the ideal generated by the 12 naive octics, then showing that $I_Y$ has only one minimal prime (or equivalently, that the radical of $I_Y$ is prime) would imply $Y = Z_4$.
However, the basic issue is that $I_Y$ is too large to handle directly: half of the octics have 967 terms, and the other half have 6760 terms. Neither symbolic nor numerical methods (via numerical irreducible decomposition) were able to determine the number of minimal primes of $I_Y$. This motivates our search for other, “smaller” equations of $Z_4$.
How does one find equations for a parametrized variety? This procedure is known as implicitization, and amounts to computing the kernel of a ring map. Classically, this is accomplished with Gröbner bases, but in the case $n = 4$, this is infeasible (as of yet). Thus, we try a numerical approach, using interpolation (cf. Theorem 3 in [6]). The method is as follows: since $Z_4$ is a projective variety, its defining ideal $I(Z_4)$ has homogeneous generators, so we may search degree by degree. Fixing a degree $d$, we may find a basis of $I(Z_4)_d$ (the degree $d$ part of $I(Z_4)$) by sampling a large number of points on $Z_4$, evaluating all monomials of degree $d$ in $y_0, \ldots, y_9$ at these points, and then computing the numerical kernel of the resulting matrix (with rows indexed by points, and columns indexed by monomials). The kernel vectors are then coefficients of degree $d$ forms (linearly independent over $\mathbb{R}$) which vanish at all of the sampled points, and if the number of points is large, one expects such a form to vanish on all of $Z_4$. There are no linear forms in $I(Z_4)$ (as these are already accounted for by restricting to the upper-left $3 \times 3$ submatrix), and similarly we find no forms of degrees $2$, $3$, and $4$. However, in degree $5$ we find:
Proposition 10.
The space of quintics in $I(Z_4)$ is $6$-dimensional, with explicitly known basis.
Proof.
Note that sampling points on $Z_4$ is easily accomplished: one can generate a random orthogonal matrix, e.g. via the Cayley parameterization of $SO(4)$, and then apply the map $\phi_4$ to obtain a point on $Z_4$. In total, if $A$ is a random $4 \times 4$ skew-symmetric matrix, then the final output of $\phi_4\big((I - A)(I + A)^{-1}\big)$ is a (random) point on $Z_4$.
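This sampling step can be imitated in exact arithmetic: for a skew-symmetric $A$, the Cayley transform $(I - A)(I + A)^{-1}$ is orthogonal. The Python sketch below (with arbitrarily chosen rational entries of $A$) is illustrative only:

```python
from fractions import Fraction

def mat_inv(M):
    """Exact inverse via Gauss-Jordan elimination (M assumed invertible)."""
    n = len(M)
    A = [list(row) + [Fraction(int(i == j)) for j in range(n)] for i, row in enumerate(M)]
    for c in range(n):
        piv = next(i for i in range(c, n) if A[i][c] != 0)
        A[c], A[piv] = A[piv], A[c]
        A[c] = [v / A[c][c] for v in A[c]]
        for i in range(n):
            if i != c and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[c])]
    return [row[n:] for row in A]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

# Cayley parameterization: A skew-symmetric => (I - A)(I + A)^{-1} is orthogonal
a, b, c, d, e, f = (Fraction(k, 7) for k in (1, 2, 3, 4, 5, 6))
A = [[0, a, b, c], [-a, 0, d, e], [-b, -d, 0, f], [-c, -e, -f, 0]]
I = [[Fraction(int(i == j)) for j in range(4)] for i in range(4)]
ImA = [[I[i][j] - A[i][j] for j in range(4)] for i in range(4)]
IpA = [[I[i][j] + A[i][j] for j in range(4)] for i in range(4)]
Q = matmul(ImA, mat_inv(IpA))

assert matmul(Q, [list(col) for col in zip(*Q)]) == I   # Q is exactly orthogonal
# squares of the upper-left 3x3 block, plus the homogenizing coordinate 1,
# give a rational point on Z_4 (in some ordering of coordinates)
point = [Q[i][j] ** 2 for i in range(3) for j in range(3)] + [Fraction(1)]
```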
With this, determining numerically the dimension of $I(Z_4)_5$ by the method outlined above is straightforward and relatively quick, using the Macaulay2 package NumericalImplicitization [6], which returns an answer of $6$ within a minute. However, this is only a numerical computation, which provides approximate quintics with floating-point coefficients.
To obtain explicit quintics over $\mathbb{Q}$ from the approximate quintics, we use the LLL algorithm, again implemented in NumericalImplicitization (which in turn calls the native LLL implementation in Macaulay2). This calculation is significantly longer, taking around 30 hours to finish, but results in a $2002 \times 6$ integer matrix, whose columns are coefficients of degree 5 forms in $y_0, \ldots, y_9$. Once these quintics are known, they are easily checked to be in $I(Z_4)$: verifying symbolically that the integer quintics lie in the kernel of the ring map above takes about a second. ∎
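To illustrate the interpolation scheme used above on a toy example (a plane conic rather than $Z_4$, and exact rational arithmetic rather than floating point followed by LLL), one can evaluate all monomials of a fixed degree at sample points and compute the kernel of the resulting points-by-monomials matrix:

```python
from fractions import Fraction
from itertools import combinations_with_replacement

# Degree-2 monomials in (x, y, z), as pairs of variable indices
monomials = list(combinations_with_replacement(range(3), 2))

def eval_monomials(pt):
    return [pt[i] * pt[j] for (i, j) in monomials]

# Sample points on the conic {(t, t^2, 1)} and evaluate the monomials
pts = [(Fraction(t), Fraction(t) ** 2, Fraction(1)) for t in range(1, 11)]
M = [eval_monomials(p) for p in pts]

def nullspace(M):
    """Exact kernel basis via Gauss-Jordan elimination over the rationals."""
    rows, cols = [r[:] for r in M], len(M[0])
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rows[r] = [v / rows[r][c] for v in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                rows[i] = [a - rows[i][c] * b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(cols) if c not in pivots]
    basis = []
    for f in free:
        v = [Fraction(0)] * cols
        v[f] = Fraction(1)
        for i, c in enumerate(pivots):
            v[c] = -rows[i][f]
        basis.append(v)
    return basis

ker = nullspace(M)
assert len(ker) == 1  # a single conic vanishes on the samples: x^2 - yz
```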
Definition 11.
We set $J$ to be the ideal generated by the $6$ quintics found in Proposition 10, and let $X := V(J)$ be the variety they define.
Although still too large to comfortably reproduce here, the quintics in $J$ are significantly more manageable than the octics generating $I_Y$: 4 of the quintics have 284 terms, and the most complicated involves 454 terms. In fact a Gröbner basis of $J$ can be computed (taking around 8 hours), which yields $\dim X = 6$ and $\deg X = 40$ (the same as $Z_4$). Since we also know $Z_4 \subseteq X$, we may again test whether equality holds by computing the number of minimal primes of $J$, i.e. irreducible components of $X$. Although symbolic computation of the minimal primes of $J$ is still too slow to be practical, the key difference from the previous case with $I_Y$ is that $J$ is small enough for a numerical irreducible decomposition to be feasible. The result, though initially negative, is immediately promising and leads to a resolution of the problem.
Proposition 12.
The numerical irreducible decomposition of $X$ consists of:
- (1) $16$ components of dimension $4$, each of degree $1$,
- (2) $15$ components of dimension $5$, each of degree $1$,
- (3) $1$ component of dimension $6$, of degree $40$.
Proof.
Using Bertini [2], we may compute a numerical irreducible decomposition of $X$, which finishes within 25 minutes. Note that the result is consistent with the Gröbner basis of $J$ computed symbolically, which implies that $Z_4$ is the unique top-dimensional component of $X$. ∎
The data of a numerical irreducible decomposition of a variety (given by defining equations) consists of a collection of witness sets, each representing an irreducible component of the variety, along with linear equations defining a complementary-dimensional slice of each component, and the finitely many intersection points of each component with its linear slice, thus encoding the dimension and degree of each irreducible component. Since all the lower-dimensional components of $X$ obtained in the numerical irreducible decomposition were themselves linear spaces (being of degree 1), one should expect that equations for them can be found and moreover interpreted as defining certain classes of matrices. This indeed turns out to be the case:
- (1) The 4-dimensional components correspond to the $16$ ways of specifying that one entry of a doubly stochastic matrix be equal to $1$: this is a codimension $5$ constraint in $\mathbb{P}^9$, as it forces the remaining entries in that row and column to be $0$. For instance, one such component is defined by the ideal $\langle y_0, y_2, y_4, y_7, y_1 - y_9 \rangle$, which (after dehomogenizing $y_9 = 1$) requires that the $(1,2)$-entry is $1$. Up to row and column permutations, such matrices are a direct sum of a $3 \times 3$ doubly stochastic matrix with the $1 \times 1$ identity.
- (2) The 5-dimensional components are slightly more complicated: of these, $6$ are contained in the hyperplane at infinity $\{y_9 = 0\}$, and thus are irrelevant for the orthostochastic variety. The $9$ relevant components are as follows: fix a pair of indices $(a, a')$ with $1 \leq a, a' \leq 3$, and write $\{b, c\} = \{1,2,3\} \setminus \{a\}$, $\{b', c'\} = \{1,2,3\} \setminus \{a'\}$. Then (in the 0-indexed notation of the appendix, where $V$ denotes a generic doubly stochastic matrix) the ideal $\langle V_{0,0} - V_{a,a'},\, V_{a,0} - V_{0,a'},\, V_{b,0} - V_{c,a'},\, V_{0,b'} - V_{a,c'} \rangle$ defines a 5-dimensional component of $X$. Note that these components can be described by partitioning a doubly stochastic matrix into $4$ disjoint $2 \times 2$ submatrices, and requiring each $2 \times 2$ submatrix to be circulant (i.e. its two diagonal entries are equal, as are its two antidiagonal entries).
These results were obtained by following the same procedure as in the proof of Proposition 10 (sampling points, obtaining approximate linear equations, and then extracting exact linear equations via LLL), and the resulting equations are symbolically verified to define subvarieties of $X$. In the case of the 5-dimensional components, an additional, carefully chosen change of basis was necessary to obtain binomial linear generators, and with it the ensuing description as block circulant matrices. The explicit linear ideals are recorded (as o22 and o24) in the appendix below.
Each of the relevant lower-dimensional components of $X$ can thus be defined over $\mathbb{Q}$ and interpreted as a particular class of matrices. With these descriptions via integer equations at hand, it is also straightforward to confirm that each of these classes of matrices defines an actual component of $X$, by checking dimensions of tangent spaces. Finally, we may use this to obtain Theorem 1:
Proof of Theorem 1.
Let $K$ be the ideal generated by $3$ of the naive octics, corresponding to (pairs of) the first $3$ columns. We claim that $J$ and $K$ together cut out the orthostochastic variety set-theoretically, i.e. $V(J + K)_{s=1} = (Z_4)_{s=1}$ as sets (here $(-)_{s=1}$ means dehomogenize with respect to $s = y_9$, i.e. set $s = 1$). In view of the numerical irreducible decomposition of $X = V(J)$ computed in Proposition 12, it suffices to show that for each of the relevant lower-dimensional components $P$ of $X$, we have $P \cap V(K) = P \cap Z_4$.
For a $4$-dimensional component $Q$ of $X$, we have that a point of $Q \cap Z_4$ is (up to row and column permutation) a direct sum of a $3 \times 3$ and a $1 \times 1$ orthostochastic matrix. The $n = 3$ case in Section 3 implies that $Q \cap Z_4$ is defined by a single form of degree $4$, and we check symbolically that $Q \cap V(K)$ is in fact defined by a single octic, which is precisely the square of the quartic defining $Q \cap Z_4$.
For a relevant $5$-dimensional component $P$ of $X$, we again check that $P \cap V(K)$ is defined by a single octic. In this case we do not have equations for $P \cap Z_4$ a priori – however, the structure of the matrices in $P$ is special enough to make symbolic implicitization feasible. Indeed, we may compute symbolically the kernel of the ring map realizing the restriction of $\phi_4$ over $P$ (cf. i33 in the appendix),
which takes roughly 8 minutes per component – this is made possible by the fact that after minimizing the presentation, the source is really a polynomial ring in 5 variables. This gives equations for $P \cap Z_4$, which turns out to be precisely the dehomogenization of the octic defining $P \cap V(K)$.
Putting this all together, we have that if $Q_1, \ldots, Q_{16}$ are the $4$-dimensional components of $X$ and $P_1, \ldots, P_{15}$ are the $5$-dimensional components of $X$, where $P_{10}, \ldots, P_{15}$ are contained in the hyperplane $\{s = 0\}$, then
$$V(J + K)_{s=1} = \Big(Z_4 \cup \bigcup_{i=1}^{16} (Q_i \cap V(K)) \cup \bigcup_{j=1}^{15} (P_j \cap V(K))\Big)_{s=1} = (Z_4)_{s=1},$$
since $(Q_i \cap V(K))_{s=1} = (Q_i \cap Z_4)_{s=1} \subseteq (Z_4)_{s=1}$ for each $i$, similarly for $P_1, \ldots, P_9$, and $(P_j)_{s=1} = \emptyset$ for $j = 10, \ldots, 15$. ∎
5. Conclusion
A few remarks on the methodology used in Section 4 are in order. First, the only part of the proof of Theorem 1 which relies on numerical computation (i.e. for which there is no symbolic verification) is the correctness of the numerical irreducible decomposition in Proposition 12. Specifically, what is required is that there are no irreducible components of $X$ other than those listed in Proposition 12. For readers concerned about the random choices involved in path tracking and homotopy continuation, we mention that the numerical irreducible decomposition of $X$ has been calculated multiple times, each time giving the same consistent result. Moreover, the numerical irreducible decomposition of the affine variety $X_{s=1}$ (the dehomogenization of $X$) has also been computed, and is consistent with Proposition 12 (e.g. among the $5$-dimensional components, only the $9$ relevant components not at infinity are present).
One may also ask about the utility of set-theoretic equations for $Z_4$. After all, given a particular matrix, the brute-force check to determine whether it is orthostochastic is still feasible (namely, checking all possible sign patterns in a fiber of $\phi$: since sign changes of rows and columns act on fibers of $\phi$, one may assume the first row and column are all nonnegative, so there are only $2^{(n-1)^2} = 2^9 = 512$ sign patterns to check). In response to this, we note that in addition to their intrinsic interest, knowledge of set-theoretic equations is extremely useful for many other purposes. For instance, given a variety which meets the set of orthostochastic matrices, one can now describe their common intersection locus (which was previously not possible). We plan to use this in a forthcoming upgrade to [5], to compute monic symmetric determinantal representations of hyperbolic plane quartic curves.
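The brute-force membership test described here is easy to implement; the following Python sketch (our own illustration, for $n = 4$) fixes nonnegative signs on the first row and column and enumerates the remaining $2^9 = 512$ patterns:

```python
import math
from itertools import product

def is_orthostochastic_4x4(B, tol=1e-9):
    """Brute-force test: search for a sign pattern on the entrywise square root
    of B making it orthogonal. Signs on the first row and column may be fixed
    to +, leaving 2^9 = 512 patterns on the lower-right 3x3 block."""
    S = [[math.sqrt(B[i][j]) for j in range(4)] for i in range(4)]
    for eps in product([1, -1], repeat=9):
        O = [row[:] for row in S]
        for i in range(3):
            for j in range(3):
                O[i + 1][j + 1] *= eps[3 * i + j]
        if all(abs(sum(O[k][i] * O[k][j] for k in range(4)) - (i == j)) < tol
               for i in range(4) for j in range(i, 4)):
            return True
    return False

# The normalized 4x4 Hadamard matrix shows (1/4) J_4 is orthostochastic
assert is_orthostochastic_4x4([[0.25] * 4 for _ in range(4)])
# (1/3) J_3 direct sum I_1 is not (no 3x3 Hadamard matrix exists)
assert not is_orthostochastic_4x4([[1/3, 1/3, 1/3, 0],
                                   [1/3, 1/3, 1/3, 0],
                                   [1/3, 1/3, 1/3, 0],
                                   [0, 0, 0, 1]])
```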
As for the equations themselves, it is natural to ask about minimality. For degree reasons, any set of minimal generators of $I(Z_4)$ must include the $6$ quintics in Definition 11. It has been checked that the spaces of sextics and 7-forms in $I(Z_4)$ are $60$- and $330$-dimensional respectively, and moreover that the quintics in $J$ have no quadratic (or linear) syzygies. It follows that there are no forms of degree $6$ or $7$ in $I(Z_4)$ other than those already in $J$. For degree $8$, it is not difficult to see that $2$ of the naive octics (along with $J$) would not be sufficient to cut out $Z_4$, so the generating set in Theorem 1 is both inclusion-wise and degree-wise minimal.
Finally, we list some questions that remain unanswered by this work. First, it would be interesting to have a combinatorial description of the quintics in $J$, as well as a theoretical proof that they vanish on orthostochastic matrices. Next, for $n = 4$ it remains open whether the naive equations cut out the orthostochastic variety inside the Birkhoff polytope set-theoretically. Lastly, we know that for $n \geq 5$ additional equations are needed beyond the naive ones. However, finding where these come from is related to the first question listed here, and even a computational proof of sufficiency may only be possible with some advances in computing technology.
6. Acknowledgements
The first author would like to thank Anton Leykin for helpful discussions, especially related to verifying components via tangent spaces. The second author would like to thank Paul Breiding, Mateusz Michalek and Bernd Sturmfels for their encouragement towards work on this project.
References
- [1] Au-Yeung, Yik-Hoi, and Yiu-Tung Poon. $3 \times 3$ orthostochastic matrices and the convexity of generalized numerical ranges. Linear Algebra Appl. 27 (1979): 69–79.
- [2] Bates, Daniel J., Jonathan D. Hauenstein, Andrew J. Sommese, and Charles W. Wampler. Bertini: Software for Numerical Algebraic Geometry. Available at bertini.nd.edu with permanent doi: dx.doi.org/10.7274/R0H41PB5.
- [3] Bengtsson, Ingemar, Åsa Ericsson, Marek Kuś, Wojciech Tadej, and Karol Życzkowski. Birkhoff's polytope and unistochastic matrices, $N = 3$ and $N = 4$. Comm. Math. Phys. 259, no. 2 (2005): 307–324.
- [4] Brandt, Madeline, Juliette Bruce, Taylor Brysiewicz, Robert Krone, and Elina Robeva. The Degree of $SO(n)$. Combinatorial Algebraic Geometry, pp. 229–246. Springer, New York, NY, 2017.
- [5] Chen, Justin and Papri Dey. Computing symmetric determinantal representations. arXiv:1905.07035.
- [6] Chen, Justin and Joe Kileel. Numerical implicitization. J. Softw. Algebra Geom. 9, no. 1 (2019): 55–63.
- [7] Chterental, Oleg, and Dragomir Ž. Doković. On orthostochastic, unistochastic and qustochastic matrices. Linear Algebra Appl. 428, no. 4 (2008): 1178–1201.
- [8] Dey, Papri. Definite Determinantal Representations via Orthostochastic Matrices. arXiv:1708.09559.
- [9] Dey, Papri, Paul Görlach, and Nidhi Kaihnsa. Coordinate-wise powers of algebraic varieties. Beitr. Algebra Geom. (2020). https://doi.org/10.1007/s13366-019-00481-8.
- [10] Grayson, Daniel R., and Michael E. Stillman. Macaulay2, a software system for research in algebraic geometry. Available at https://faculty.math.illinois.edu/Macaulay2/.
- [11] Horn, Alfred. Doubly stochastic matrices and the diagonal of a rotation matrix. Amer. J. Math. 76, no. 3 (1954): 620–630.
- [12] Nakazato, Hiroshi. Set of $3 \times 3$ orthostochastic matrices. Nihonkai Math. J. 7, no. 2 (1996): 83–100.
- [13] Nylen, Peter, Tin-Yau Tam, and Frank Uhlig. On the eigenvalues of principal submatrices of normal, Hermitian and symmetric matrices. Linear Multilinear Algebra 36, no. 1 (1993): 69–78.
- [14] Thompson, Robert. C. Principal submatrices IV. On the independence of the eigenvalues of different principal submatrices. Linear Algebra Appl. 2, no. 3 (1969): 355–374.
Appendix A M2 session
The following is an example Macaulay2 session that demonstrates the main results in this paper. For technical reasons, it is more convenient to use $y_0, \ldots, y_9$ as the variables of $R$ rather than $y_{1,1}, \ldots, y_{3,3}, s$.
Macaulay2, version 1.15

i1 : needsPackage "DeterminantalRepresentations"
i2 : needsPackage "NumericalImplicitization"
i3 : S = RR[x_(1,1)..x_(4,4)]; R = QQ[y_0..y_9]; s = y_9;
i6 : A = genericMatrix(S,4,4); IO4 = minors(1, A*transpose A - id_(S^4));
i8 : F = matrix{flatten entries submatrix(hadamard(A,A),{0,1,2},{0,1,2})} | matrix{{1_S}}

-- Sample random points on Z_4
i9 : pts = apply(binomial(9+5,5), i -> sub(F,matrix{flatten entries randomOrthogonal(4,RR)}));

-- Compute dimension of deg 5 part of I(Z_4): takes ~30 seconds
i10 : HF5 = numericalHilbertFunction(F, IO4, pts, 5, UseSLP => true, Precondition => false)

-- Get integer quintics via LLL: takes ~30 hours
i11 : time E = extractImageEquations(HF5, AttemptZZ => true);
-- Alternatively, after downloading the auxiliary files, one can load the equations with:
-- i11 : E = first lines get "ortho4-quintics.txt";
i12 : J = ideal value toString E;

-- Create the s-homogenization of a generic 4x4 doubly stochastic matrix
i13 : V = genericMatrix(R,3,3); V = transpose(V || matrix{toList(3:s) - sum entries V})
i15 : V = V || matrix{toList(4:s) - sum entries V}

-- 3 naive octics C_{1,2}, C_{1,3}, C_{2,3}
i16 : K = ideal apply(subsets(3, 2), p -> ( (i,j) = (p#0, p#1); ((V_(i,0)*V_(j,0) + V_(i,1)*V_(j,1) - V_(i,2)*V_(j,2) - V_(i,3)*V_(j,3))^2 - 4*V_(i,0)*V_(j,0)*V_(i,1)*V_(j,1) - 4*V_(i,2)*V_(j,2)*V_(i,3)*V_(j,3))^2 - 64*V_(i,0)*V_(j,0)*V_(i,1)*V_(j,1)*V_(i,2)*V_(j,2)*V_(i,3)*V_(j,3) )); -- cf. equation (3.5) in [7], esp. the coefficient of 64

i17 : time numericalIrreducibleDecomposition(J, Software => BERTINI) -- takes ~25 minutes
-- Note: the dimensions shown here are of the affine cones,
-- which are 1 more than that of the corresponding projective variety
i18 : (X, eps) = (oo, 1e-10);

-- These define some helper functions to obtain descriptions of linear components
i19 : realPartMatrix := A -> matrix apply(entries A, r -> r/realPart)
i20 : imPartMatrix := A -> matrix apply(entries A, r -> r/imaginaryPart)
i21 : getLinEqs = (A, mons, c, n) -> ( B = random(RR)*realPartMatrix A + random(RR)*imPartMatrix A; C = matrix apply(entries B, r -> r/(e -> lift(round(10^(1+n)*round(n, e)), ZZ))); mons*submatrix(LLL(id_(ZZ^(numcols C))||C), toList(0..<numcols mons), toList(0..<c)) )

-- The following are the 4-dim (= codim 5) components of X, recorded in o22
i22 : L5 = apply(X#5, W -> ideal getLinEqs(clean(eps, matrix W#Points#0), basis(1,R), 5, 10))

o22 = {ideal(y_0, y_3, y_6, -y_2-y_5-y_8+y_9, -y_1-y_4-y_7+y_9), ideal(y_0, y_2, y_4, y_7, -y_1+y_9), ideal(y_2, y_5, y_6, y_7, -y_8+y_9), ideal(y_2, y_3, y_4, y_8, -y_5+y_9), ideal(y_1, y_2, y_3, y_6, -y_0+y_9), ideal(y_0, y_1, y_2, -y_6-y_7-y_8+y_9, -y_3-y_4-y_5+y_9), ideal(y_0+y_3+y_6-y_9,y_0+y_1+y_2-y_9,y_6+y_7+y_8-y_9,y_1+y_4+y_7-y_9,y_2+y_5+y_8-y_9), ideal(y_0, y_3, y_7, y_8, -y_6+y_9), ideal(y_6, y_7, y_8, -y_3-y_4-y_5+y_9, -y_0-y_1-y_2+y_9), ideal(y_1, y_3, y_5, y_7, -y_4+y_9), ideal(y_0, y_4, y_5, y_6, -y_3+y_9), ideal(y_3, y_4, y_5, -y_6-y_7-y_8+y_9, -y_0-y_1-y_2+y_9), ideal(y_0, y_1, y_5, y_8, -y_2+y_9), ideal(y_1, y_4, y_7, -y_0-y_3-y_6+y_9, y_2+y_5+y_8-y_9), ideal(y_2, y_5, y_8, -y_0-y_3-y_6+y_9, y_1+y_4+y_7-y_9), ideal(y_1, y_4, y_6, y_8, -y_7+y_9)}

-- Check: description as direct sums of 3x3 ++ 1x1 matrices, Q \cap Z_4 = Q \cap V(K)
i23 : all((0,0)..(3,3), p -> ( (i,j) = p; L = ideal(V_(i,j) - s, V_((i+1)%4,j), V_((i+2)%4,j), V_(i,(j+1)%4), V_(i,(j+2)%4)); any(L5, l -> l == L) and isSubset(J, L) and ( Q = minimalPresentation sub(K, R/L); T = ring Q; f = (T_4^2 - T_4*(T_0+T_1+T_2+T_3) + (T_0*T_3+T_1*T_2))^2 - 4*T_0*T_1*T_2*T_3; ideal(f^2) == Q )))

-- The following are the 5-dim (= codim 4) components of X, recorded in o24
i24 : L4 = apply(X#6, W -> ideal getLinEqs(clean(eps, matrix W#Points#0), basis(1,R), 4, 10))

o24 = {ideal(-y_1+y_6, -y_0+y_7, -y_1-y_3-y_4-y_7+y_9, y_2-y_3-y_4+y_8), ideal(-y_2+y_6, -y_0+y_8, y_1-y_3-y_5+y_7, -y_1-y_2-y_7-y_8+y_9), ideal(-y_1+y_3, -y_0+y_4, -y_2-y_5+y_6+y_7, -y_2-y_3-y_4-y_5+y_9), ideal(y_9, y_0+y_1, y_3+y_4, y_6+y_7), ideal(-y_4+y_6, -y_3+y_7, -y_0-y_1+y_5+y_8, -y_3-y_5-y_6-y_8+y_9), ideal(y_9, y_0+y_6, y_1+y_7, y_2+y_8), ideal(-y_5+y_6, -y_3+y_8, -y_0-y_2-y_5-y_8+y_9, -y_0-y_2+y_4+y_7), ideal(-y_2+y_4, -y_1+y_5, -y_0-y_3+y_7+y_8, -y_1-y_2-y_7-y_8+y_9), ideal(y_9, y_3+y_6, y_4+y_7, y_5+y_8), ideal(-y_5+y_7, -y_4+y_8, -y_1-y_2+y_3+y_6, -y_1-y_2-y_5-y_8+y_9), ideal(-y_2+y_3, -y_0+y_5, -y_1-y_4+y_6+y_8, -y_2-y_5-y_6-y_8+y_9), ideal(y_9, y_1+y_2, y_4+y_5, y_7+y_8), ideal(-y_2+y_7, -y_1+y_8, y_0-y_4-y_5+y_6, -y_1-y_2-y_4-y_5+y_9), ideal(y_9, y_0+y_2, y_3+y_5, y_6+y_8), ideal(y_9, y_0+y_3, y_1+y_4, y_2+y_5)}

-- Remove 6 components at infinity
i25 : L4 = select(L4, l -> s % l != 0); #L4

o26 = 9

-- Check: description as block 2x2 circulant matrices, and containment in X
i27 : all((1,1)..(3,3), p -> ( (a,a') = p; (b,c) = toSequence({1,2,3} - set{a}); (b',c') = toSequence({1,2,3} - set{a'}); L = ideal(V_(0,0)-V_(a,a'),V_(a,0)-V_(0,a'),V_(b,0)-V_(c,a'),V_(0,b')-V_(a,c')); any(L4, l -> l == L) and isSubset(J, L) ))

-- Work over QQ for exact implicitization
i28 : S = QQ[x_(1,1)..x_(4,4)]; A = genericMatrix(S,4,4);
i30 : IO4 = minors(1, A*transpose A - id_(S^4));
i31 : H = matrix{flatten entries submatrix(hadamard(A,A),{0,1,2},{0,1,2})};
i32 : F = H | matrix{{1_S}}

-- Check: (P \cap Z_4)_{y_9=1} = (P \cap V(K))_{y_9=1}
i33 : all(L4, I -> ( P = minimalPresentation sub(K + ideal(s - 1), R/I); time ker map(S/(IO4 + sub(I, F)), ring P, H_((gens ring P)/baseName/last)) == P )) -- takes ~9*8 minutes

-- Check: J is contained in ker F
i34 : sub(gens J, F) % IO4 == 0

-- Check: J has no quadratic syzygies
i35 : syz(gens J, DegreeLimit => 7) == 0

-- To compute the dimension of the space of 7-forms in I(Z_4):
-- repeat lines i3 - i10, replacing everywhere 5 with 7 (takes ~7 hours, ~34 GB RAM)