
Configuration polynomials
under contact equivalence

Graham Denham
Department of Mathematics, University of Western Ontario
London, Ontario, Canada N6A 5B7
[email protected]
Delphine Pol
Department of Mathematics, TU Kaiserslautern
67663 Kaiserslautern
Germany
[email protected]
Mathias Schulze
Department of Mathematics, TU Kaiserslautern
67663 Kaiserslautern
Germany
[email protected]
and Uli Walther
Department of Mathematics, Purdue University
West Lafayette, IN 47907, USA
[email protected]
Abstract.

Configuration polynomials generalize the classical Kirchhoff polynomial defined by a graph. Their study sheds light on certain polynomials appearing in Feynman integrands. Contact equivalence provides a way to study the associated configuration hypersurface. In the contact equivalence class of any configuration polynomial we identify a polynomial with minimal number of variables; it is a configuration polynomial. This minimal number is bounded by $\binom{r+1}{2}$, where $r$ is the rank of the underlying matroid. We show that the number of equivalence classes is finite exactly up to rank $3$ and list explicit normal forms for these classes.

Key words and phrases:
Configuration, matroid, contact equivalence, Feynman, Kirchhoff, Symanzik
2010 Mathematics Subject Classification:
Primary 14N20; Secondary 05C31, 14M12, 81Q30
GD supported by NSERC of Canada. DP supported by a Humboldt Research Fellowship for Postdoctoral Researchers. UW supported in part by Simons Foundation Collaboration Grant for Mathematicians #580839.

1. Introduction

The Matrix-Tree Theorem is a classical result in algebraic graph theory. It was found by the German physicist Gustav Kirchhoff in the mid-19th century in the study of electrical circuits. It states that the number of spanning trees of a connected undirected graph $G$ with edge set $E$ agrees with any principal submaximal minor of its Laplacian. Putting weights on the edges $e\in E$ of $G$ and considering them as variables $x_e$ yields the Kirchhoff polynomial

\[ \psi_G=\sum_{T\in\mathcal{T}_G}x^T, \]

where $\mathcal{T}_G$ is the set of all spanning trees of $G$, and $x^T=\prod_{e\in T}x_e$.
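As an executable aside (not part of the paper's argument), this definition can be checked by brute force for a small graph. The Python sketch below, with hypothetical helper names `spanning_trees` and `kirchhoff_support`, enumerates the spanning trees of the triangle $K_3$ and returns the monomial support of $\psi_G$.

```python
from itertools import combinations

def spanning_trees(n_vertices, edges):
    """All spanning trees of a connected graph, as sets of edge indices.

    Brute force: a subset of n_vertices - 1 edges is a spanning tree
    iff it connects all vertices (checked with union-find).
    """
    def connects(tree):
        parent = list(range(n_vertices))
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a
        for i in tree:
            u, v = edges[i]
            parent[find(u)] = find(v)
        return len({find(v) for v in range(n_vertices)}) == 1
    return [set(T) for T in combinations(range(len(edges)), n_vertices - 1)
            if connects(T)]

def kirchhoff_support(n_vertices, edges):
    """Monomial support of psi_G: one frozenset of edge indices per spanning tree."""
    return {frozenset(T) for T in spanning_trees(n_vertices, edges)}

# Triangle K_3: every pair of its three edges is a spanning tree,
# so psi_G = x_0 x_1 + x_0 x_2 + x_1 x_2.
print(kirchhoff_support(3, [(0, 1), (1, 2), (2, 0)]))
```

All coefficients are $1$ here, in line with the Matrix-Tree Theorem.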

Kirchhoff polynomials are a crucial ingredient of the theory of Feynman integrals (see, for example, [Alu14, Bit+19, BS12, BSY14] and the literature trees in these works). In short, the Kirchhoff polynomial of a graph appears in the denominator of the Feynman integral attached to the particle scattering encoded by the dual graph via Feynman’s rule. In certain cases, the integrand is just a power of the Kirchhoff polynomial, but in general there is also another component, a second Symanzik polynomial. In this way, singularities of Kirchhoff polynomials influence the behavior of the corresponding Feynman integral.

Considered as functions over $\mathbb{K}=\mathbb{C}$, Kirchhoff polynomials are never zero if all variables take values in a common open half-plane (defined by positivity of a non-trivial $\mathbb{R}$-linear form). Because Kirchhoff polynomials are homogeneous, this property is independent of the choice of half-plane. For the right half-plane (with positive real part) it is referred to as the (Hurwitz) half-plane property; the upper half-plane (with positive imaginary part) defines the class of stable polynomials. Generalizing beyond graphs, any matroid $\mathsf{M}$ with set of bases $\mathcal{B}_\mathsf{M}$ defines a matroid basis polynomial $\psi_\mathsf{M}=\sum_{B\in\mathcal{B}_\mathsf{M}}x^B$. In this way, $\psi_G=\psi_{\mathsf{M}_G}$ depends only on the graphic matroid $\mathsf{M}_G$ on $E$ with set of bases $\mathcal{T}_G$. Conditions for the half-plane property of $\psi_\mathsf{M}$ in terms of $\mathsf{M}$ were formulated by Choe, Oxley, Sokal and Wagner (see [Cho+04, 92]). They consider general polynomials $\psi_{\mathsf{M},\boldsymbol{a}}=\sum_{B\in\mathcal{B}_\mathsf{M}}a_B x^B$ with matroid support and arbitrary coefficients $\boldsymbol{a}=(a_B)_{B\in\mathcal{B}_\mathsf{M}}$. The question whether the half-plane property of $\psi_{\mathsf{M},\boldsymbol{a}}$ for some coefficients $\boldsymbol{a}$ descends to $\psi_\mathsf{M}$ is studied for example by Brändén and González D'León (see [BG10, Thm. 2.3]), while Amini and Brändén (see [AB18]) consider interactions of the half-plane property, representability and the Lax conjecture.

Recently, Brändén and Huh introduced the class of Lorentzian polynomials (see [BH20]), which are defined by induction over the degree using partial derivatives, starting from quadratic forms satisfying a signature condition. Stable polynomials are Lorentzian. These polynomials have interesting negative dependence properties and close relations with matroids. For example, if a multiaffine polynomial (that is, a polynomial supported on squarefree monomials) is Lorentzian, then it has the form $\psi_{\mathsf{M},\boldsymbol{a}}$ for some matroid $\mathsf{M}$ and positive coefficients $\boldsymbol{a}$.

By the Matrix-Tree Theorem, however, the coefficients $1$ of the Kirchhoff polynomial arise in a particular way: pick any orientation on $G$ and let $A$ be an incidence matrix with one row deleted. Then $\psi_G=\det(AXA^\intercal)$, where $X$ is the diagonal matrix of variables $x_e$ for all $e\in E$. In more intrinsic terms, this is the determinant of the generic diagonal bilinear form, restricted to the span $W_G\subseteq\mathbb{Z}^E$ of all incidence vectors. Bloch, Esnault and Kreimer took this point of view for any linear subspace $W\subseteq\mathbb{K}^E$ over a field $\mathbb{K}$ (see [BEK06, Pat10]). With respect to the basis of $\mathbb{K}^E$, this is a linear realization of a matroid $\mathsf{M}$, or a configuration. The dimension $\dim W$ equals the rank of $\mathsf{M}$, which we refer to as the rank of the configuration $W$ (see Definition 2.1). The generic diagonal bilinear form on $\mathbb{K}^E$ restricts to a configuration form $Q_W$ on $W$. Its determinant $\psi_W=\det(Q_W)$ is the configuration polynomial associated with $W$, a homogeneous polynomial of degree $\dim W$ in the variables $x_e$ for all $e\in E$ (see Definitions 2.3 and 2.5). Configuration polynomials over $\mathbb{K}=\mathbb{C}$ are stable, by a result of Borcea and Brändén (see [BB08, Prop. 2.4]). Notably, the above-mentioned second Symanzik polynomial is a configuration polynomial, but not a Kirchhoff polynomial.
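The identity $\psi_G=\det(AXA^\intercal)$ can be tested exactly at rational points; the following sketch (ours, assuming a fixed edge orientation of $K_3$) compares the determinant with the spanning-tree sum.

```python
from fractions import Fraction

def det(m):
    """Determinant by Laplace expansion; exact over Fractions."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

# Oriented incidence matrix of K_3 (edges 0->1, 1->2, 2->0) with the last row deleted.
A = [[1, 0, -1],
     [-1, 1, 0]]

def psi_at(x):
    """det(A X A^T) with X = diag(x), evaluated at the point x."""
    M = [[sum(A[i][e] * x[e] * A[j][e] for e in range(3)) for j in range(2)]
         for i in range(2)]
    return det(M)

x = [Fraction(2), Fraction(3), Fraction(5)]
# Spanning-tree sum of K_3: x_0 x_1 + x_1 x_2 + x_2 x_0 = 6 + 15 + 10 = 31.
assert psi_at(x) == x[0] * x[1] + x[1] * x[2] + x[2] * x[0] == 31
```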

The configuration point of view has recently led to new insights on the affine and projective hypersurfaces defined by Kirchhoff polynomials (see [DSW21, Den+20]). At present, the understanding of all the details of the singularity structure, as well as a satisfactory general treatment of Feynman integrals, is highly incomplete. There is some evidence that this is due to built-in complications coming from complexity issues (see [BB03]). A natural problem is then to determine to what extent the formula for a configuration hypersurface is the most efficient way to encode the geometry: given a configuration polynomial, can it be rewritten in fewer variables, and can this even be done via another configuration?

In this article, we elaborate on this idea by studying configurations through the lens of (linear) contact equivalence of their corresponding polynomials. This is the equivalence relation on polynomials induced by permitting coordinate changes on the source and target of the polynomial (see Definition 4.1). Polynomials in the same equivalence class define the same affine hypersurfaces, up to a product with an affine space. While this approach is very natural from a geometric point of view, forgetting the matroid structure under the equivalence makes it difficult to navigate, and provides certain surprises discussed below.

The main vehicle of our investigations is that any matrix representation of $Q_W$ consists of Hadamard products $v\star w$ of vectors $v,w\in W$, defined with respect to a basis of $\mathbb{K}^E$ (see Notation 2.2). After some preliminary discussion in earlier sections, we focus in §5 on the problem of finding "small" representatives within the contact equivalence class of a given configuration. This requires us to look in detail at the structure of the higher Hadamard powers $W^{\star s}$ of $W$ (see §3). While such Hadamard powers do not usually form chains with increasing $s$, they nonetheless have some monotonicity properties with regard to suitable restrictions to subsets of $E$ (see Lemma 3.3). We use this to minimize the number of variables of configuration polynomials under contact equivalence (see Proposition 5.3). As a result we obtain

Theorem 1.1.

Let $W\subseteq\mathbb{K}^E$ be a configuration over a field $\mathbb{K}$ of characteristic $\operatorname{ch}\mathbb{K}=0$, or $\operatorname{ch}\mathbb{K}>\dim W$. Let $\nu(W)$ be the minimal number of variables appearing in any polynomial contact equivalent to $\psi_W$. Then

\[ \nu(W)\leq\binom{\dim W+1}{2}. \]

This minimum is realized within the set of configuration polynomials: there is a configuration $W'\subseteq\mathbb{K}^{\nu(W)}$, constructed from $W$ by a suitable matroid restriction, with $\psi_W$ and $\psi_{W'}$ contact equivalent.

In §6, §7 and §8, we then consider the classification problem of determining all contact equivalence classes for configurations WW of a given rank, and we prove the following

Theorem 1.2.

For configurations of rank up to $3$, there are only finitely many contact equivalence classes. For each rank at least $4$, there is an infinite family of pairwise inequivalent configurations over $\mathbb{K}=\mathbb{Q}$.

More precisely, we identify for $\dim W\leq 3$ all contact equivalence classes and write down a normal form for each class (see Table 1). This list is made of all possible products of generic determinants in up to $6$ variables together with

\[ \det\begin{pmatrix}y_1&y_4&y_5\\ y_4&y_2&0\\ y_5&0&y_3\end{pmatrix}\quad\text{and}\quad\det\begin{pmatrix}y_1&y_4&y_4+y_5\\ y_4&y_2&y_5\\ y_4+y_5&y_5&y_3\end{pmatrix}. \]

For $\dim W=4$, already for $|E|=6$ variables, we exhibit an infinite family of contact equivalence classes of configurations (see Proposition 8.2).

Our computations show that the contact equivalence class of a configuration neither determines nor is determined by the underlying matroid. Thus, one is prompted to wonder what characteristics of the graph/matroid of a Kirchhoff/configuration polynomial determine its complexity. We hope that our investigations here will help to shed light on this problem.

Acknowledgments

We gratefully acknowledge support by the Bernoulli Center at EPFL during a “Bernoulli Brainstorm” in February 2019, and by the Centro de Giorgi in Pisa during a “Research in Pairs” in February 2020. We also thank June Huh for helpful comments.

2. Configuration forms and polynomials

Let $\mathbb{K}$ be a field. We denote the dual of a $\mathbb{K}$-vector space $W$ by

\[ W^\vee:=\operatorname{Hom}_\mathbb{K}(W,\mathbb{K}). \]

Let $E$ be a finite set. Whenever convenient, we order $E$ and identify

\[ E=\{e_1,\dots,e_n\}=\{1,\dots,n\}. \]

We identify $E$ with the canonical basis of the based $\mathbb{K}$-vector space

\[ \mathbb{K}^E:=\bigoplus_{e\in E}\mathbb{K}\cdot e. \]

We denote by $E^\vee=(e^\vee)_{e\in E}$ the dual basis of

\[ (\mathbb{K}^E)^\vee=\mathbb{K}^{E^\vee}. \]

We write $x_e:=e^\vee$ to emphasize that $x:=(x_e)_{e\in E}$ is a coordinate system on $\mathbb{K}^E$. For $F\subseteq E$ we denote by

\[ x^F:=\prod_{f\in F}x_f \]

the corresponding monomial. For $w\in\mathbb{K}^E$ and $e\in E$, denote by $w_e:=e^\vee(w)$ the $e$-component of $w$.

Definition 2.1.

Let $E$ be a finite set. A configuration over $\mathbb{K}$ is a $\mathbb{K}$-vector space $W\subseteq\mathbb{K}^E$. It gives rise to an associated matroid $\mathsf{M}=\mathsf{M}_W$ with rank function $S\mapsto\dim_\mathbb{K}\langle S^\vee|_W\rangle$ and set of bases $\mathcal{B}_\mathsf{M}$. We refer to its rank

\[ r_W:=\dim_\mathbb{K}W \]

as the rank of the configuration. Equivalent configurations, obtained by rescaling $E$ or by applying a field automorphism, have the same associated matroid.

Notation 2.2.

We denote the Hadamard product of $u,v\in\mathbb{K}^E$ by

\[ u\star v:=\sum_{e\in E}u_e\cdot v_e\cdot e\in\mathbb{K}^E. \]

We suppress the dependency on $E$ in this notation. We abbreviate

\[ u^{\star s}:=\underbrace{u\star\cdots\star u}_{s}. \]
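In code, the Hadamard product is simply the entrywise product with respect to the chosen basis; a minimal sketch of ours:

```python
def hadamard(u, v):
    """Entrywise (Hadamard) product of two vectors of equal length."""
    return [a * b for a, b in zip(u, v)]

def hadamard_power(u, s):
    """s-fold Hadamard power u^(star s), s >= 1."""
    w = list(u)
    for _ in range(s - 1):
        w = hadamard(w, u)
    return w

assert hadamard([1, 2, 3], [4, 5, 6]) == [4, 10, 18]
assert hadamard_power([1, 2, 3], 3) == [1, 8, 27]
```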
Definition 2.3 ([DSW21, Rem. 3.21, Def. 3.20],[Oxl11, §2.2]).

Denote by $\mu_\mathbb{K}$ the multiplication map of $\mathbb{K}$. Let $W\subseteq\mathbb{K}^E$ be a configuration of rank $r=r_W$. The associated configuration form is

\[ Q_W=\sum_{e\in E}x_e\cdot\mu_\mathbb{K}\circ\left(e^\vee\times e^\vee\right)\colon W\times W\to\langle x\rangle_\mathbb{K}. \]

A choice of (ordered) basis $w=(w^1,\dots,w^r)$ of $W\subseteq\mathbb{K}^E$ together with an ordering of $E$ is equivalent to the choice of a configuration matrix $A=(w^i_j)_{i,j}\in\mathbb{K}^{r\times n}$ with row span $\langle A\rangle$ equal to $W$. With respect to these choices, $Q_W$ is represented by the $r\times r$ matrix

\[ Q_w:=Q_A:=(\langle x,w^i\star w^j\rangle)_{i,j}=\left(\sum_{e\in E}x_e\cdot w_e^i\cdot w_e^j\right)_{i,j}. \]

Different choices of bases $w,w'$ and orderings (or, equivalently, of configuration matrices) yield conjugate matrix representatives for $Q_W$.

Judicious choices of the basis and the orderings lead to a normalized configuration matrix $A=\begin{pmatrix}I_r&A'\end{pmatrix}$, where $I_r$ is the $r\times r$ identity matrix.

Remark 2.4.

For fixed $e\in E$, $(w_e^i\cdot w_e^j)_{i\leq j}$ is the image of $(w_e^i)_i$ under the second Veronese map $\mathbb{K}^r\to\mathbb{K}^{\binom{r+1}{2}}$. Thus, $Q_w$ determines the vectors $(w_e^i)_i$ up to a common sign. In particular, $Q_W$ determines the configuration $W$ up to equivalence.

Definition 2.5 ([DSW21, Def. 3.2, Rem. 3.3, Lem. 3.23]).

Let $W\subseteq\mathbb{K}^E$ be a configuration. If $A$ is a configuration matrix for $W$ with corresponding basis $w$, then the associated configuration polynomial is defined by

\[ \psi_W:=\psi_w:=\psi_A:=\det(Q_A)\in\mathbb{K}[x]. \]

It is determined by $W$ up to a square factor in $\mathbb{K}^*$. One has the alternative description

\[ \psi_A=\sum_{B\in\mathcal{B}_\mathsf{M}}\det(\mathbb{K}^B\overset{w}{\to}W\twoheadrightarrow\mathbb{K}^B)^2\cdot x^B, \]

using the ordering corresponding to $A$ on every basis $B\subseteq E$.

The matroid (basis) polynomial

\[ \psi_\mathsf{M}=\sum_{B\in\mathcal{B}_\mathsf{M}}x^B\in\mathbb{Z}[x] \]

of $\mathsf{M}=\mathsf{M}_W$ has the same monomial support as $\psi_W$, but the two can be significantly different (see [DSW21, Ex. 5.2]).
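As an executable illustration of the alternative description (a sketch of ours; the helper names are not from the paper), $\psi_A$ can be assembled monomial by monomial from the squared maximal minors of $A$. For a configuration matrix of the triangle $K_3$ this recovers the Kirchhoff polynomial with all coefficients equal to $1$.

```python
from fractions import Fraction
from itertools import combinations

def det(m):
    """Determinant by Laplace expansion; exact over Fractions."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(n))

def configuration_polynomial(A):
    """psi_A as a dict {frozenset(B): det(A_B)^2} over the bases B of M_W."""
    r, n = len(A), len(A[0])
    psi = {}
    for B in combinations(range(n), r):
        d = det([[A[i][j] for j in B] for i in range(r)])
        if d:
            psi[frozenset(B)] = d * d
    return psi

# Configuration matrix of K_3 (incidence matrix with one row deleted).
A = [[Fraction(1), Fraction(0), Fraction(-1)],
     [Fraction(-1), Fraction(1), Fraction(0)]]
assert configuration_polynomial(A) == {frozenset({0, 1}): 1,
                                       frozenset({0, 2}): 1,
                                       frozenset({1, 2}): 1}
```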

Remark 2.6.

If $G=(V,E)$ is a graph and $W\subseteq\mathbb{K}^E$ is the row span of the incidence matrix of $G$, then $\psi_W=\psi_G$ is the Kirchhoff polynomial of $G$ (see [DSW21, Prop. 3.16]).

3. Hadamard products of configurations

Let $W\subseteq\mathbb{K}^E$ be a configuration of rank

\[ r=r_W=\dim_\mathbb{K}W\leq|E|. \]

For $s\in\mathbb{N}_{\geq 1}$, denote by

\[ W^{\star s}:=\underbrace{W\star\cdots\star W}_{s}:=\left\langle w^1\star\cdots\star w^s\;\middle|\;w^1,\dots,w^s\in W\right\rangle\subseteq\mathbb{K}^E \]

the $s$-fold Hadamard product of $W$ and by

\[ r_W^s:=\dim_\mathbb{K}W^{\star s}\leq|E| \]

its dimension. Note that $r_W=r_W^1$. By multilinearity and symmetry of the Hadamard product, we have a surjection

\[ \operatorname{Sym}^s_\mathbb{K}W\twoheadrightarrow W^{\star s},\quad w^{i_1}\cdots w^{i_s}\mapsto w^{i_1}\star\cdots\star w^{i_s}. \]

In particular, for all $s,s'\in\mathbb{N}_{\geq 1}$, there is an estimate

\[ r_W^s\leq\binom{r_W+s-1}{s} \tag{3.1} \]

and equalities

\[ (\mathbb{K}^E)^{\star s}=\mathbb{K}^E,\quad W^{\star s}\star W^{\star s'}=W^{\star(s+s')}. \]
Example 3.1.

Consider the non-isomorphic rank $2$ configurations in $\mathbb{K}^n$

\[ W=\left\langle(1,\dots,1),(1,2,3,\dots,n)\right\rangle,\quad W'=\left\langle(1,0,\dots,0),(0,1,0,\dots,0)\right\rangle. \]

Then $r_W^s=\min\{s+1,n\}$, as follows from properties of Vandermonde determinants, whereas $r_{W'}^s=2$.
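Both rank sequences can be verified by exact linear algebra; the sketch below (our own helpers, exact over $\mathbb{Q}$) computes $\dim W^{\star s}$ from the $s$-fold products of basis vectors, which span $W^{\star s}$ by multilinearity. For $n=5$, the ranks of the powers of $W$ grow by one with each $s$ until they reach $n$, while those of $W'$ stay at $2$.

```python
from fractions import Fraction
from itertools import combinations_with_replacement

def rank(rows):
    """Row rank over Q by Gaussian elimination with exact Fractions."""
    m = [[Fraction(x) for x in row] for row in rows]
    rk, col = 0, 0
    while rk < len(m) and col < len(m[0]):
        piv = next((i for i in range(rk, len(m)) if m[i][col] != 0), None)
        if piv is None:
            col += 1
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for i in range(rk + 1, len(m)):
            f = m[i][col] / m[rk][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[rk])]
        rk, col = rk + 1, col + 1
    return rk

def hadamard_power_rank(basis, s):
    """dim of W^(star s) for W spanned by `basis`: rank of all s-fold products."""
    prods = []
    for idxs in combinations_with_replacement(range(len(basis)), s):
        w = [1] * len(basis[0])
        for i in idxs:
            w = [a * b for a, b in zip(w, basis[i])]
        prods.append(w)
    return rank(prods)

n = 5
W = [[1] * n, list(range(1, n + 1))]       # (1,...,1) and (1,2,...,n)
Wp = [[1, 0, 0, 0, 0], [0, 1, 0, 0, 0]]    # coordinate configuration W'
assert [hadamard_power_rank(W, s) for s in range(1, 8)] == [2, 3, 4, 5, 5, 5, 5]
assert all(hadamard_power_rank(Wp, s) == 2 for s in range(1, 8))
```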

Remark 3.2.

Extending a configuration $W\subseteq\mathbb{K}^E$ by a direct summand $\mathbb{K}$ with basis $f$ yields a new configuration $W'=W\oplus\mathbb{K}^{\{f\}}\subseteq\mathbb{K}^{E\sqcup\{f\}}$ with configuration matrix $A'=\begin{pmatrix}A&0\\ 0&1\end{pmatrix}$, $r_{W'}^s=r_W^s+1$ and $\psi_{W'}=\psi_W\cdot x_f$.
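The effect of such an extension on the configuration polynomial can be checked mechanically via the squared-minor expansion of Definition 2.5 (a sketch of ours; the helper names are hypothetical): every basis of the extended configuration is $B\sqcup\{f\}$ for a basis $B$ of $W$, with the same coefficient.

```python
from fractions import Fraction
from itertools import combinations

def det(m):
    """Determinant by Laplace expansion; exact over Fractions."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(n))

def configuration_polynomial(A):
    """psi_A as {frozenset(B): squared maximal minor} over the bases B."""
    r, n = len(A), len(A[0])
    psi = {}
    for B in combinations(range(n), r):
        d = det([[A[i][j] for j in B] for i in range(r)])
        if d:
            psi[frozenset(B)] = d * d
    return psi

A = [[Fraction(1), Fraction(0), Fraction(-1)],
     [Fraction(-1), Fraction(1), Fraction(0)]]
# Extend by a direct summand with basis element f (new column/row, index 3):
Aext = [row + [Fraction(0)] for row in A] + [[Fraction(0)] * 3 + [Fraction(1)]]
psi, psi_ext = configuration_polynomial(A), configuration_polynomial(Aext)
# psi_{W'} = psi_W * x_f: every basis gains the new element f = 3.
assert psi_ext == {B | {3}: c for B, c in psi.items()}
```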

For $F\subseteq E$, denote by

\[ \pi_F\colon\mathbb{K}^E\to\mathbb{K}^F \]

the corresponding $\mathbb{K}$-linear projection map. Abbreviate

\[ w_F:=\pi_F(w),\quad W_F:=\pi_F(W). \]

By definition, $(w^1\star\cdots\star w^s)_F=w_F^1\star\cdots\star w_F^s$ and hence

\[ (W^{\star s})_F=(W_F)^{\star s}=:W^{\star s}_F. \]
Lemma 3.3.

For every configuration $W\subseteq\mathbb{K}^E$ there is a filtration

\[ F_1\subseteq\cdots\subseteq F_t\subseteq\cdots\subseteq E \]

on $E$ such that, for all $s'\leq s$ in $\mathbb{N}_{\geq 1}$, there is a commutative diagram

\[ \begin{array}{ccc} \mathbb{K}^E & \xrightarrow{\;\pi_{F_s}\;} & \mathbb{K}^{F_s} \\ \cup & & \cup \\ W^{\star s'} & \xrightarrow{\;\cong\;} & W^{\star s'}_{F_s} \end{array} \tag{3.2} \]

in which the right-hand containment is an equality for $s'=s$. In particular, for $s'\leq s$,

\[ r_W^{s'}\leq r_W^s. \tag{3.3} \]
Proof.

Note that (3.3) is a direct consequence of (3.2) and the filtration property. We will construct the filtration inductively, starting with $F_1$. Let $F_1$ be any subset of $E$ such that $r_{W_{F_1}}=|F_1|$ (in other words, a basis for the matroid $\mathsf{M}_W$ represented by $W$). Then (3.2) is clear.

Suppose that $F_1\subseteq\cdots\subseteq F_t$ have been constructed, satisfying (3.2) whenever $s'\leq s\leq t$. We claim first that $W^{\star(t+1)}_{F_s}=\mathbb{K}^{F_s}$ for all $1\leq s\leq t$. So take a basis element $e\in F_s$. From the inductive hypothesis $W_{F_s}^{\star s}=\mathbb{K}^{F_s}$ we obtain a $v\in W^{\star s}$ such that $v_{F_s}=e$. By definition of $W^{\star s}$, there must be a $u\in W$ such that $u_e=1$, as otherwise $W_e=0$. But then $w:=u^{\star(t+1-s)}\star v\in W^{\star(t+1)}$ satisfies $w_{F_s}=e$, so that $W^{\star(t+1)}_{F_s}=\mathbb{K}^{F_s}$ as claimed.

The just-established equality $W^{\star(t+1)}_{F_t}=\mathbb{K}^{F_t}$ says that $F_t$ is an independent set for the matroid associated to the configuration $W^{\star(t+1)}\subseteq\mathbb{K}^E$. Extend it to a basis $F_{t+1}$. Then (3.2) follows for $s'=s=t+1$ (including the equality of the right-hand inclusion). On the other hand, for $s'\leq t$, the natural composite surjection

\[ W^{\star s'}\twoheadrightarrow W^{\star s'}_{F_{t+1}}\twoheadrightarrow W^{\star s'}_{F_t} \]

is by the inductive hypothesis an isomorphism. Hence each of the two arrows in the display is an isomorphism as well, proving that (3.2) holds for $s'<s=t+1$. ∎

Definition 3.4.

Let $W\subseteq\mathbb{K}^E$ be a configuration. By Lemma 3.3 there is a minimal index $t_W$ such that $r_W^t=r_W^{t_W}$ for all $t\geq t_W$. We call $t_W$ the Hadamard exponent and $r_W^{t_W}$ the Hadamard dimension of $W$.
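These invariants can be explored computationally (a sketch of ours, exact over $\mathbb{Q}$). In the run below the ranks reach the upper bound $|E|$, so by the monotonicity (3.3) they are stationary from that point on, and the Hadamard exponent and dimension can be read off.

```python
from fractions import Fraction
from itertools import combinations_with_replacement

def rank(rows):
    """Row rank over Q by Gaussian elimination with exact Fractions."""
    m = [[Fraction(x) for x in row] for row in rows]
    rk, col = 0, 0
    while rk < len(m) and col < len(m[0]):
        piv = next((i for i in range(rk, len(m)) if m[i][col] != 0), None)
        if piv is None:
            col += 1
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for i in range(rk + 1, len(m)):
            f = m[i][col] / m[rk][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[rk])]
        rk, col = rk + 1, col + 1
    return rk

def hadamard_ranks(basis, s_max):
    """[r_W^1, ..., r_W^{s_max}] for W spanned by `basis`."""
    out = []
    for s in range(1, s_max + 1):
        prods = []
        for idxs in combinations_with_replacement(range(len(basis)), s):
            w = [1] * len(basis[0])
            for i in idxs:
                w = [a * b for a, b in zip(w, basis[i])]
            prods.append(w)
        out.append(rank(prods))
    return out

# Example 3.1 with n = 4: the ranks are 2, 3, 4 and then stationary at |E| = 4,
# so the Hadamard exponent is t_W = 3 and the Hadamard dimension is 4.
assert hadamard_ranks([[1, 1, 1, 1], [1, 2, 3, 4]], 5) == [2, 3, 4, 4, 4]
```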

4. Linear contact equivalence

Definition 4.1.

We call two polynomials $\phi\in\mathbb{K}[x_1,\dots,x_m]$ and $\psi\in\mathbb{K}[x_1,\dots,x_n]$ (linearly contact) equivalent if for some $p\geq m,n$ there exist an $\ell\in\operatorname{GL}_p(\mathbb{K})$ and a $\lambda\in\mathbb{K}^*$ such that

\[ \phi=\lambda\cdot\psi\circ\ell \tag{4.1} \]

in $\mathbb{K}[x_1,\dots,x_p]$. We write $\phi\simeq\psi$ in this case.

Remark 4.2.
(a) If $\mathbb{K}$ is a perfect field and $\psi$ is homogeneous, then one can assume $\lambda=1$ in (4.1) at the cost of scaling $\ell$ by $\lambda^{1/\deg(\psi)}$.

(b) By definition, both adding redundant variables and permuting variables yield equivalent polynomials. In particular, enumerating $E$ and considering $E\subseteq\{1,\dots,p\}$ as a subset for any $p\geq|E|$ gives sense to equivalence of configuration polynomials $\psi_W$.

Notation 4.3.

For a fixed field $\mathbb{K}$, we set

\[ \Psi:=\left\{\psi_W\;\middle|\;E\text{ a finite set},\ W\subseteq\mathbb{K}^E\right\}. \]

We aim to understand linear contact equivalence on $\Psi$.

5. Reduction of variables modulo equivalence

Lemma 5.1.

Let $W\subseteq\mathbb{K}^E$ be a configuration. Then there is a subset $F\subseteq E$ of size $|F|=r_{W_F}^2=r_W^2$ such that $\psi_W\simeq\psi_{W_F}$.

Proof.

Lemma 3.3 with $t=2$ yields a subset $F\subseteq E$ such that

\[ \pi_F|_W\colon W\overset{\cong}{\longrightarrow}W_F\quad\text{and}\quad\pi_F|_{W^{\star 2}}\colon W^{\star 2}\overset{\cong}{\longrightarrow}W^{\star 2}_F=\mathbb{K}^F. \tag{5.1} \]

Let $\iota_F$ be the section of $\pi_F$ that factors through the inverse of $\pi_F|_{W^{\star 2}}$,

\[ \iota_F\colon\mathbb{K}^F\xrightarrow{(\pi_F|_{W^{\star 2}})^{-1}}W^{\star 2}\subseteq\mathbb{K}^E. \tag{5.2} \]

Consider the $\mathbb{K}$-linear isomorphism of based vector spaces

\[ q\colon\mathbb{K}^E\to\mathbb{K}^{E^\vee},\quad w\mapsto\sum_{e\in E}w_e\cdot x_e \]

inducing the configuration $q(W)\subseteq\mathbb{K}^{E^\vee}$. Set $F^\vee:=q(F)$ and $\iota_{F^\vee}:=q\circ\iota_F\circ q^{-1}$. Then $\pi_{F^\vee}=q\circ\pi_F\circ q^{-1}$, and (5.1) and (5.2) persist if $F$ is replaced by $F^\vee$ and $W$ by $q(W)$ throughout.

Now choose a basis $w=(w^1,\dots,w^r)$ of $W$. Then $w_F=(w^1_F,\dots,w^r_F)$ is a basis of $W_F$ by (5.1), and

\begin{align*}
Q_W&=\left(q(w^i\star w^j)\right)_{i,j}\\
&=\left(q(w^i)\star q(w^j)\right)_{i,j}\\
&\overset{(5.2)}{=}\left(\iota_{F^\vee}\circ\pi_{F^\vee}(q(w^i)\star q(w^j))\right)_{i,j}\\
&=\iota_{F^\vee}\left(q(w^i)_{F^\vee}\star q(w^j)_{F^\vee}\right)_{i,j}\\
&=\iota_{F^\vee}\left(q(w^i_F)\star q(w^j_F)\right)_{i,j}\\
&=\iota_{F^\vee}\left(q(w_F^i\star w_F^j)\right)_{i,j}=\iota_{F^\vee}Q_{W_F}.
\end{align*}

Since $\iota_{F^\vee}$ is a section of $\pi_{F^\vee}$, $\psi_W\simeq\psi_{W_F}$ by taking determinants. ∎

Lemma 5.2.

Let $W\subseteq\mathbb{K}^E$ be a configuration. Suppose that $\operatorname{ch}\mathbb{K}=0$, or $\operatorname{ch}\mathbb{K}>r_W$. If $\psi_W\simeq\phi\in\mathbb{K}[y_1,\dots,y_{n-1}]$ where $n:=|E|$, then $\psi_W\simeq\psi_{W_{E\setminus\{e\}}}$ for some $e\in E$.

Proof.

Let $\ell\in\operatorname{GL}_p(\mathbb{K})$ and $\lambda\in\mathbb{K}^*$ realize the equivalence $\phi\simeq\psi_W$, that is, $\phi=\lambda\cdot\psi_W\circ\ell$ where $E\subseteq\{1,\dots,p\}$ (see Remark 4.2.(b)). Consider the $\mathbb{K}$-linearly independent $\mathbb{K}$-linear derivations of $\mathbb{K}[x_1,\dots,x_p]$

\[ \delta_i:=\ell_*\Bigl(\frac{\partial}{\partial y_{n-1+i}}\Bigr)=\frac{\partial}{\partial y_{n-1+i}}(-\circ\ell)\circ\ell^{-1},\quad i=1,\dots,p-n+1. \]

Since $\phi$ is independent of $y_n,\dots,y_p$, we have

\[ \delta_i(\psi_W)=\lambda^{-1}\cdot\frac{\partial\phi}{\partial y_{n-1+i}}\circ\ell^{-1}=0,\quad i=1,\dots,p-n+1. \tag{5.3} \]

By suitably reordering $\{1,\dots,p\}$ we may assume that the matrix $(\delta_i(x_j))_{i,j\in\{1,\dots,p-n+1\}}$ is invertible. After replacing the $\delta_i$ by suitable linear combinations, we may further assume that $\delta_i(x_j)=\delta_{i,j}$ for all $i,j\in\{1,\dots,p-n+1\}$. Then

\begin{align*}
x_i&=x'_i,\quad i=1,\dots,p-n+1,\\
x_i&=x'_i+\sum_{j=1}^{p-n+1}\delta_j(x_i)\cdot x'_j,\quad i=p-n+2,\dots,p,
\end{align*}

defines a coordinate change such that

\[ \delta_j=\sum_{i=1}^{p}\delta_j(x_i)\frac{\partial}{\partial x_i}=\sum_{i=1}^{p}\frac{\partial x_i}{\partial x'_j}\frac{\partial}{\partial x_i}=\frac{\partial}{\partial x'_j},\quad j=1,\dots,p-n+1. \tag{5.4} \]

If $\operatorname{ch}\mathbb{K}>0$, then $\operatorname{ch}\mathbb{K}>r_W=\deg(\psi_W)$ by hypothesis. By (5.3) and (5.4), $\psi_W$ is thus independent of $x'_1,\dots,x'_{p-n+1}$. Setting $x_i=x'_i=0$ for $i=1,\dots,p-n+1$ therefore leaves $\psi_W$ unchanged and makes $x_i=x'_i$ for $i=p-n+2,\dots,p$. It follows that

\[ \psi_W\simeq\psi_W|_{x'_1=\cdots=x'_{p-n+1}=0}=\psi_W|_{x_1=\cdots=x_{p-n+1}=0}=\psi_{W_{E\setminus\{1,\dots,p-n+1\}}}. \]

Then any $e\in E\cap\{1,\dots,p-n+1\}$ satisfies the claim. ∎

Proposition 5.3.

Let $W\subseteq\mathbb{K}^E$ be a configuration. Then there is a subset $F\subseteq E$ of size $|F|=r_{W_F}^2\leq r_W^2$ such that $\psi_W\simeq\psi_{W_F}$. Suppose that $\operatorname{ch}\mathbb{K}=0$, or $\operatorname{ch}\mathbb{K}>r_W$. Then any polynomial $\phi\simeq\psi_{W_F}$ depends on at least $|F|$ variables. In other words, among the polynomials equivalent to $\psi_W$ with minimal number of variables is the configuration polynomial $\psi_{W_F}$.

Proof.

By Lemma 5.1 there is a subset $G\subseteq E$ such that $|G|=r_{W_G}^2=r_W^2$ and $\psi_W\simeq\psi_{W_G}$. Note that $|G|=r_{W_G}^2$ means $W_G^{\star 2}=\mathbb{K}^G$, which for any subset $F\subseteq G$ implies that $W_F^{\star 2}=\mathbb{K}^F$ and hence $|F|=r_{W_F}^2\leq r_W^2$. Pick such an $F$ with $\psi_{W_F}\simeq\psi_{W_G}$ minimizing $|F|$. Note that $r_{W_F}\leq r_W$. By Lemma 5.2 applied to the configuration $W_F\subseteq\mathbb{K}^F$, any $\phi\simeq\psi_{W_F}$ depending on fewer than $|F|$ variables yields an $e\in F$ such that $\psi_{W_F}\simeq\psi_{W_{F\setminus\{e\}}}$, contradicting the minimality of $F$. ∎

Remark 5.4.

By Remark 2.4, $Q_W$ determines $r_W^2$. By definition, (the equivalence class of) $\psi_W$ determines $r_W^1=r_W=\deg\psi_W$. We do not know whether it also determines $r_W^2$.

6. Extremal cases of equivalence classes

Notation 6.1.

For $r,d\in\mathbb{N}$, set

\[ \Psi^d_r=\left\{\psi_W\mid E\text{ a finite set},\ W\subseteq\mathbb{K}^E,\ r_W=r,\ r_W^2=d\right\}. \]
Lemma 6.2.

Let $W\subseteq\mathbb{K}^E$ be a configuration of rank $r$ with basis $(w^1,\dots,w^r)$. Let $G$ be the graph on the vertices $v_1,\dots,v_r$ in which $\{v_i,v_j\}$ is an edge if and only if $w^i\star w^j\neq 0$. Let $G^*$ be the cone graph over $G$.

If $\left\{w^i\star w^j\mid i\leq j,\ w^i\star w^j\neq 0\right\}$ is linearly independent, then

\[ \psi_W\simeq\psi_{G^*} \]

is the Kirchhoff polynomial of $G^*$.

Proof.

See [BB03, Thm. 3.2] and its proof. ∎

Proposition 6.3.

If $d=r$, then every element of $\Psi^d_r$ is equivalent to $x_1\cdots x_r$.

If $d=\binom{r+1}{2}$, then every element of $\Psi^d_r$ is equivalent to the elementary symmetric polynomial of degree $r$ in the variables $x_1,\dots,x_d$.

Proof.

Let $W\subseteq\mathbb{K}^E$ be a configuration.

First suppose that $r_W^2=r_W$. By Lemma 5.1, we may assume that $|E|=r_W^2$. Then $W=\mathbb{K}^E$ and hence $\psi_W=x^E$ is the matroid polynomial of the free matroid on $r_W$ elements.

Now suppose that $r_W^2=\binom{r_W+1}{2}$. Then $\left\{w^i\star w^j\mid 1\leq i\leq j\leq r\right\}$ is linearly independent for any basis $(w^1,\dots,w^r)$ of $W$. By Lemma 6.2, $\psi_W$ is then equivalent to the Kirchhoff polynomial of the complete graph on $r_W+1$ vertices. ∎

7. Finite number of classes for small rank matroids

The purpose of this section is to give a complete classification of configuration polynomials for matroids of rank at most $3$ with respect to the equivalence relation of Definition 4.1. Due to Proposition 5.3, we may assume that $|E|=r_W^2$.

Definition 7.1 ([Oxl11, §2.2]).

A choice of basis $(w^1,\dots,w^r)$ of $W\subseteq\mathbb{K}^E$ and order of $E$ gives rise to a configuration matrix $A=(w^i_j)_{i,j}\in\mathbb{K}^{r\times n}$, whose row span recovers $W=\langle A\rangle$. Up to reordering $E$, it can be assumed in normalized form $A=(I_r\mid A')$, where $I_r$ is the $r\times r$ identity matrix.

Proposition 7.2.

Let $W$ be a configuration of rank $2$. If $r_W^2=2$, then $\psi_W\simeq x_1x_2$; otherwise, $r_W^2=3$ and $\psi_W\simeq x_1x_2-x_3^2$.

Proof.

Most of this follows from the proof of Proposition 6.3. Apply $x_2\mapsto x_1+x_2$ to the Kirchhoff polynomial $x_1x_2+x_2x_3+x_3x_1$ of $K_3$; the result is $x_1^2+x_1(x_2+2x_3)+x_2x_3$.

If $\operatorname{ch}\mathbb{K}=2$, then this is $x_1^2+x_2(x_1+x_3)$. If $2\in\mathbb{K}$ is a unit, complete the square and scale $x_2$ by $2$ to arrive at $x_1^2-x_2^2+x_3^2$. In both cases the result is easily seen to be equivalent to $x_1x_2-x_3^2$. ∎
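The substitution step in this proof can be sanity-checked at rational points (a small sketch of ours, exact over $\mathbb{Q}$): substituting $x_1+x_2$ for the second variable of $\psi_{K_3}$ yields the displayed quadric.

```python
from fractions import Fraction

def psi_K3(x1, x2, x3):
    """Kirchhoff polynomial of the triangle K_3."""
    return x1 * x2 + x2 * x3 + x3 * x1

# psi_K3(x1, x1 + x2, x3) = x1^2 + x1*(x2 + 2*x3) + x2*x3 as polynomials;
# we verify the identity at a few rational sample points.
for a, b, c in [(2, 3, 5), (-1, 4, 7), (0, 1, -3), (6, -2, 9)]:
    x1, x2, x3 = Fraction(a), Fraction(b), Fraction(c)
    assert psi_K3(x1, x1 + x2, x3) == x1 ** 2 + x1 * (x2 + 2 * x3) + x2 * x3
```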

Proposition 7.3.

The numbers of equivalence classes for rank $3$ configurations $W$ for different values of $r_W^2$ are

\[ |\Psi_3^3/\simeq|=1,\quad|\Psi_3^4/\simeq|=2,\quad|\Psi_3^5/\simeq|=2,\quad|\Psi_3^6/\simeq|=1. \]

Table 1 lists the equivalence classes of $\psi_W$ that arise from normalized configuration matrices $A$ when $r_W=3$ and $r_W^2=|E|$.

Table 1. Equivalence classes for rank rW=3r_{W}=3 configurations
|E|=rW2{\left|E\right|}=r^{2}_{W} AA conditions ψWdet()\psi_{W}\simeq\det(-)
33 (100010001)\left(\begin{smallmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{smallmatrix}\right) None (y1000y2000y3)\left(\begin{smallmatrix}y_{1}&0&0\\ 0&y_{2}&0\\ 0&0&y_{3}\end{smallmatrix}\right)
44 (100a1010a2001a3)\left(\begin{smallmatrix}1&0&0&a_{1}\\ 0&1&0&a_{2}\\ 0&0&1&a_{3}\end{smallmatrix}\right) ai=0a_{i}=0 for exactly one ii. (y1y40y4y2000y3)\left(\begin{smallmatrix}y_{1}&y_{4}&0\\ y_{4}&y_{2}&0\\ 0&0&y_{3}\end{smallmatrix}\right)
(100a1010a2001a3)\left(\begin{smallmatrix}1&0&0&a_{1}\\ 0&1&0&a_{2}\\ 0&0&1&a_{3}\end{smallmatrix}\right) ai0a_{i}\neq 0 for all ii. (y1y4y4y4y2y4y4y4y3)\left(\begin{smallmatrix}y_{1}&y_{4}&y_{4}\\ y_{4}&y_{2}&y_{4}\\ y_{4}&y_{4}&y_{3}\end{smallmatrix}\right)
55 (100a1,1a1,2010a2,1a2,2001a3,1a3,2)\left(\begin{smallmatrix}1&0&0&a_{1,1}&a_{1,2}\\ 0&1&0&a_{2,1}&a_{2,2}\\ 0&0&1&a_{3,1}&a_{3,2}\end{smallmatrix}\right) Exactly one pair of (ai,1aj,1ai,2aj,2)\left(\begin{smallmatrix}a_{i,1}\cdot a_{j,1}\\ a_{i,2}\cdot a_{j,2}\end{smallmatrix}\right), iji\neq j, is linearly dependent. (y1y4y5y4y20y50y3)\left(\begin{smallmatrix}y_{1}&y_{4}&y_{5}\\ y_{4}&y_{2}&0\\ y_{5}&0&y_{3}\end{smallmatrix}\right)
(100a1,1a1,2010a2,1a2,2001a3,1a3,2)\left(\begin{smallmatrix}1&0&0&a_{1,1}&a_{1,2}\\ 0&1&0&a_{2,1}&a_{2,2}\\ 0&0&1&a_{3,1}&a_{3,2}\end{smallmatrix}\right) All pairs of (ai,1aj,1ai,2aj,2)\left(\begin{smallmatrix}a_{i,1}\cdot a_{j,1}\\ a_{i,2}\cdot a_{j,2}\end{smallmatrix}\right), iji\neq j, are linearly independent. (y1y4y4+y5y4y2y5y4+y5y5y3)\left(\begin{smallmatrix}y_{1}&y_{4}&y_{4}+y_{5}\\ y_{4}&y_{2}&y_{5}\\ y_{4}+y_{5}&y_{5}&y_{3}\end{smallmatrix}\right)
66 (100a1,1a1,2a1,3010a2,1a2,2a2,3001a3,1a3,2a3,3)\left(\begin{smallmatrix}1&0&0&a_{1,1}&a_{1,2}&a_{1,3}\\ 0&1&0&a_{2,1}&a_{2,2}&a_{2,3}\\ 0&0&1&a_{3,1}&a_{3,2}&a_{3,3}\end{smallmatrix}\right) None (y1y4y6y4y2y5y6y5y3)\left(\begin{smallmatrix}y_{1}&y_{4}&y_{6}\\ y_{4}&y_{2}&y_{5}\\ y_{6}&y_{5}&y_{3}\end{smallmatrix}\right)
Proof.

Let W𝕂EW\subseteq\mathbb{K}^{E} be a configuration of rank rW=3r_{W}=3 with normalized configuration matrix AA. By (3.1) and Lemma 5.1, we may assume that

3=rWrW2=|E|(rW+12)=6.3=r_{W}\leq r_{W}^{2}={\left|E\right|}\leq{r_{W}+1\choose 2}=6.

The cases where rW2{3,6}r_{W}^{2}\in{\left\{3,6\right\}} are covered by Proposition 6.3.

Suppose now that rW2=4r_{W}^{2}=4. Up to reordering rows and columns, AA then has the form

A=(100a1010a2001a3),a1,a2,a3𝕂,a1a20,A=\begin{pmatrix}1&0&0&a_{1}\\ 0&1&0&a_{2}\\ 0&0&1&a_{3}\end{pmatrix},\quad a_{1},a_{2},a_{3}\in\mathbb{K},\quad a_{1}a_{2}\neq 0,

and hence

QA=(x1+a12x4a1a2x4a1a3x4a1a2x4x2+a22x4a2a3x4a1a3x4a2a3x4x3+a32x4).Q_{A}=\begin{pmatrix}x_{1}+a_{1}^{2}x_{4}&a_{1}a_{2}x_{4}&a_{1}a_{3}x_{4}\\ a_{1}a_{2}x_{4}&x_{2}+a_{2}^{2}x_{4}&a_{2}a_{3}x_{4}\\ a_{1}a_{3}x_{4}&a_{2}a_{3}x_{4}&x_{3}+a_{3}^{2}x_{4}\end{pmatrix}.

If a3=0a_{3}=0, then we can write, in terms of suitable coordinates y1,y2,y3,y4y_{1},y_{2},y_{3},y_{4},

(7.1) QA=(y1y40y4y2000y3),ψA=det(QA)=(y1y2y42)y3.Q_{A}=\begin{pmatrix}y_{1}&y_{4}&0\\ y_{4}&y_{2}&0\\ 0&0&y_{3}\end{pmatrix},\quad\psi_{A}=\det(Q_{A})=(y_{1}y_{2}-y_{4}^{2})y_{3}.

On the other hand, if a30a_{3}\neq 0, then we can write

Qλ,μ:=QA=(y1y4μy4y4y2λy4μy4λy4y3),λ:=a3a1,μ:=a3a2.Q_{\lambda,\mu}:=Q_{A}=\begin{pmatrix}y_{1}&y_{4}&\mu y_{4}\\ y_{4}&y_{2}&\lambda y_{4}\\ \mu y_{4}&\lambda y_{4}&y_{3}\end{pmatrix},\quad\lambda:=\frac{a_{3}}{a_{1}},\quad\mu:=\frac{a_{3}}{a_{2}}.

Applying the coordinate change (y_{1},\dots,y_{4})\mapsto(\frac{y_{1}}{\lambda^{2}},\frac{y_{2}}{\mu^{2}},y_{3},\frac{y_{4}}{\lambda\mu}) yields

Qλ,μ:=(y1λ2y4λμy4λy4λμy2μ2y4μy4λy4μy3),Q^{\prime}_{\lambda,\mu}:=\begin{pmatrix}\frac{y_{1}}{\lambda^{2}}&\frac{y_{4}}{\lambda\mu}&\frac{y_{4}}{\lambda}\\ \frac{y_{4}}{\lambda\mu}&\frac{y_{2}}{\mu^{2}}&\frac{y_{4}}{\mu}\\ \frac{y_{4}}{\lambda}&\frac{y_{4}}{\mu}&y_{3}\end{pmatrix},

and hence by extracting factors from the first and second row and column

det(Qλ,μ)λ2μ2det(Qλ,μ)=det(Q1,1).\det(Q_{\lambda,\mu})\simeq\lambda^{2}\mu^{2}\det(Q^{\prime}_{\lambda,\mu})=\det(Q_{1,1}).

In contrast to ψA\psi_{A} in (7.1), this cubic is irreducible since 𝖬W=U3,4\mathsf{M}_{W}=U_{3,4} is connected (see [DSW21, Thm. 4.16]). In particular, the cases a3=0a_{3}=0 and a30a_{3}\neq 0 belong to different equivalence classes.
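The extraction-of-factors step amounts to the exact scaling identity \lambda^{2}\mu^{2}\det Q_{\lambda,\mu}(\frac{y_{1}}{\lambda^{2}},\frac{y_{2}}{\mu^{2}},y_{3},\frac{y_{4}}{\lambda\mu})=\det Q_{1,1}(y_{1},\dots,y_{4}), which can be spot-checked exactly (helper names and sample values are ours):

```python
from fractions import Fraction as F

def det3(M):
    # determinant of a 3x3 matrix, exact over Fraction
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def Q(lam, mu, y1, y2, y3, y4):
    # the configuration form Q_{lambda,mu} from the rank-3, |E| = 4 case
    return [[y1, y4, mu * y4],
            [y4, y2, lam * y4],
            [mu * y4, lam * y4, y3]]

lam, mu = F(2), F(3)
y1, y2, y3, y4 = F(5), F(7), F(11), F(13)
lhs = lam**2 * mu**2 * det3(Q(lam, mu, y1 / lam**2, y2 / mu**2,
                              y3, y4 / (lam * mu)))
rhs = det3(Q(1, 1, y1, y2, y3, y4))
assert lhs == rhs
```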

Suppose now that rW2=5r_{W}^{2}=5. Then AA has the form

A=(100a1,1a1,2010a2,1a2,2001a3,1a3,2).A=\begin{pmatrix}1&0&0&a_{1,1}&a_{1,2}\\ 0&1&0&a_{2,1}&a_{2,2}\\ 0&0&1&a_{3,1}&a_{3,2}\end{pmatrix}.

First suppose that, after suitably reordering the rows and columns of AA, w1w2w^{1}\star w^{2} and w2w3w^{2}\star w^{3} are linearly dependent, and hence w1w2w^{1}\star w^{2} and w1w3w^{1}\star w^{3} are linearly independent. In terms of suitable coordinates y1,,y5y_{1},\dots,y_{5}, we can write

Qλ:=QA=(y1y4y5y4y2λy4y5λy4y3),λ𝕂.Q_{\lambda}:=Q_{A}=\begin{pmatrix}y_{1}&y_{4}&y_{5}\\ y_{4}&y_{2}&\lambda y_{4}\\ y_{5}&\lambda y_{4}&y_{3}\end{pmatrix},\quad\lambda\in\mathbb{K}.

By symmetric row and column operations,

det(Qλ)=det(y1y4y5λy1y4y20y5λy10y32λy5+λ2y1)det(Q0).\det(Q_{\lambda})=\det\begin{pmatrix}y_{1}&y_{4}&y_{5}-\lambda y_{1}\\ y_{4}&y_{2}&0\\ y_{5}-\lambda y_{1}&0&y_{3}-2\lambda y_{5}+\lambda^{2}y_{1}\end{pmatrix}\simeq\det(Q_{0}).
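Since symmetric row and column operations preserve the determinant, the display above says \det Q_{\lambda}(y_{1},\dots,y_{5})=\det Q_{0}(y_{1},y_{2},y_{3}-2\lambda y_{5}+\lambda^{2}y_{1},y_{4},y_{5}-\lambda y_{1}); a small exact check (sample values ours):

```python
from fractions import Fraction as F

def det3(M):
    # determinant of a 3x3 matrix, exact over Fraction
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def Q(lam, y1, y2, y3, y4, y5):
    # the configuration form Q_lambda from the first |E| = 5 case
    return [[y1, y4, y5],
            [y4, y2, lam * y4],
            [y5, lam * y4, y3]]

lam = F(3)
y1, y2, y3, y4, y5 = F(2), F(5), F(7), F(11), F(13)
lhs = det3(Q(lam, y1, y2, y3, y4, y5))
rhs = det3(Q(0, y1, y2, y3 - 2*lam*y5 + lam**2*y1, y4, y5 - lam*y1))
assert lhs == rhs
```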

One computes that the ideal of submaximal minors of Q0Q_{0} equals

(7.2) I2(Q0)=y1y2y42,y3,y5y1y3y52,y2,y4.I_{2}(Q_{0})={\left\langle y_{1}y_{2}-y_{4}^{2},y_{3},y_{5}\right\rangle}\cap{\left\langle y_{1}y_{3}-y_{5}^{2},y_{2},y_{4}\right\rangle}.

Suppose now that all pairs of w^{i}\star w^{j} with i<j are linearly independent. In terms of suitable coordinates y_{1},\dots,y_{5}, we can write

Qλ,μ=(y1y4λy4+μy5y4y2y5λy4+μy5y5y3),λ,μ𝕂.Q_{\lambda,\mu}=\begin{pmatrix}y_{1}&y_{4}&\lambda y_{4}+\mu y_{5}\\ y_{4}&y_{2}&y_{5}\\ \lambda y_{4}+\mu y_{5}&y_{5}&y_{3}\end{pmatrix},\quad\lambda,\mu\in\mathbb{K}^{*}.

Applying the coordinate change (y_{1},\dots,y_{5})\mapsto(\mu^{2}y_{1},y_{2},\lambda^{2}y_{3},\mu y_{4},\lambda y_{5}) yields

Qλ,μ=(μ2y1μy4λμ(y4+y5)μy4y2λy5λμ(y4+y5)λy5λ2y3),Q^{\prime}_{\lambda,\mu}=\begin{pmatrix}\mu^{2}y_{1}&\mu y_{4}&\lambda\mu(y_{4}+y_{5})\\ \mu y_{4}&y_{2}&\lambda y_{5}\\ \lambda\mu(y_{4}+y_{5})&\lambda y_{5}&\lambda^{2}y_{3}\end{pmatrix},

and hence by extracting factors from the first and last row and column

det(Qλ,μ)1λ2μ2det(Qλ,μ)=det(Q1,1).\det(Q_{\lambda,\mu})\simeq\frac{1}{\lambda^{2}\mu^{2}}\det(Q^{\prime}_{\lambda,\mu})=\det(Q_{1,1}).

The linear independence of all pairs of w^{i}\star w^{j} with i<j implies that \mathsf{M}_{W}=U_{3,5}, which is 3-connected (see [Oxl11, Table 8.1]). In contrast to I_{2}(Q_{0}) in (7.2), I_{2}(Q_{1,1}) must be a prime ideal (see [DSW21, Thm. 4.37]). In particular, the two cases with r_{W}^{2}=5 belong to different equivalence classes. ∎
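As in the r_{W}^{2}=4 case, the last two displays combine into the exact identity \det Q_{\lambda,\mu}(\mu^{2}y_{1},y_{2},\lambda^{2}y_{3},\mu y_{4},\lambda y_{5})=\lambda^{2}\mu^{2}\det Q_{1,1}(y_{1},\dots,y_{5}); an exact spot check (sample values ours):

```python
from fractions import Fraction as F

def det3(M):
    # determinant of a 3x3 matrix, exact over Fraction
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def Q(lam, mu, y1, y2, y3, y4, y5):
    # the configuration form Q_{lambda,mu} from the second |E| = 5 case
    return [[y1, y4, lam * y4 + mu * y5],
            [y4, y2, y5],
            [lam * y4 + mu * y5, y5, y3]]

lam, mu = F(2), F(5)
y1, y2, y3, y4, y5 = F(3), F(7), F(11), F(13), F(17)
lhs = det3(Q(lam, mu, mu**2 * y1, y2, lam**2 * y3, mu * y4, lam * y5))
rhs = lam**2 * mu**2 * det3(Q(1, 1, y1, y2, y3, y4, y5))
assert lhs == rhs
```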

8. Infinite number of classes for rank 44 matroids

For rank 44 configurations there are infinitely many equivalence classes of configuration polynomials. For simplicity we prove this over the rationals, so in this section we assume 𝕂=\mathbb{K}=\mathbb{Q}.

Consider the family of normalized configuration matrices

A:=(1000110100a1b10010a2000010b2),A:=\begin{pmatrix}1&0&0&0&1&1\\ 0&1&0&0&a_{1}&b_{1}\\ 0&0&1&0&a_{2}&0\\ 0&0&0&1&0&b_{2}\end{pmatrix},

depending on parameters a1,a2,b1,b2a_{1},a_{2},b_{1},b_{2}\in\mathbb{Q} where a1a2b1b20a_{1}a_{2}b_{1}b_{2}\neq 0. We will see that it gives rise to an infinite family of polynomials

ψm:=det(Qm),Qm:=(y1y5+y6y5my6y5+y6y2y5y6y5y5y30my6y60y4),m:=a1b1,\psi_{m}:=\det(Q_{m}),\quad Q_{m}:=\begin{pmatrix}y_{1}&y_{5}+y_{6}&y_{5}&my_{6}\\ y_{5}+y_{6}&y_{2}&y_{5}&y_{6}\\ y_{5}&y_{5}&y_{3}&0\\ my_{6}&y_{6}&0&y_{4}\end{pmatrix},\quad m:=\frac{a_{1}}{b_{1}}\in\mathbb{Q},

which are pairwise inequivalent for |m|>1{\left|m\right|}>1.

Lemma 8.1.

With the above notation, we have ψAψm\psi_{A}\simeq\psi_{m}.

Proof.

The configuration form associated to AA is given by

QA=(x1+x5+x6a1x5+b1x6a2x5b2x6a1x5+b1x6x2+a12x5+b12x6a1a2x5b1b2x6a2x5a1a2x5x3+a22x50b2x6b1b2x60x4+b22x6).Q_{A}=\begin{pmatrix}x_{1}+x_{5}+x_{6}&a_{1}x_{5}+b_{1}x_{6}&a_{2}x_{5}&b_{2}x_{6}\\ a_{1}x_{5}+b_{1}x_{6}&x_{2}+a_{1}^{2}x_{5}+b_{1}^{2}x_{6}&a_{1}a_{2}x_{5}&b_{1}b_{2}x_{6}\\ a_{2}x_{5}&a_{1}a_{2}x_{5}&x_{3}+a_{2}^{2}x_{5}&0\\ b_{2}x_{6}&b_{1}b_{2}x_{6}&0&x_{4}+b_{2}^{2}x_{6}\end{pmatrix}.

The coordinate changes

(z1,,z6)\displaystyle(z_{1},\dots,z_{6}) :=(x1+x5+x6,x2+a12x5+b12x6,x3+a22x5,x4+b22x6,a1x5,b1x6),\displaystyle:={\left(x_{1}+x_{5}+x_{6},x_{2}+a_{1}^{2}x_{5}+b_{1}^{2}x_{6},x_{3}+a_{2}^{2}x_{5},x_{4}+b_{2}^{2}x_{6},a_{1}x_{5},b_{1}x_{6}\right)},
(y1,,y6)\displaystyle(y_{1},\ldots,y_{6}) :=(z1,z2a12,z3a22,z4b22,z5a1,z6a1)\displaystyle:={\left(z_{1},\frac{z_{2}}{a_{1}^{2}},\frac{z_{3}}{a_{2}^{2}},\frac{z_{4}}{b_{2}^{2}},\frac{z_{5}}{a_{1}},\frac{z_{6}}{a_{1}}\right)}

turn QAQ_{A} into

QA\displaystyle Q_{A} =(z1z5+z6a2a1z5b2b1z6z5+z6z2a2z5b2z6a2a1z5a2z5z30b2b1z6b2z60z4)\displaystyle=\begin{pmatrix}z_{1}&z_{5}+z_{6}&\frac{a_{2}}{a_{1}}z_{5}&\frac{b_{2}}{b_{1}}z_{6}\\ z_{5}+z_{6}&z_{2}&a_{2}z_{5}&b_{2}z_{6}\\ \frac{a_{2}}{a_{1}}z_{5}&a_{2}z_{5}&z_{3}&0\\ \frac{b_{2}}{b_{1}}z_{6}&b_{2}z_{6}&0&z_{4}\end{pmatrix}
=(y1a1(y5+y6)a2y5a1b2b1y6a1(y5+y6)a12y2a1a2y5a1b2y6a2y5a1a2y5a22y30a1b2b1y6a1b2y60b22y4),\displaystyle=\begin{pmatrix}y_{1}&a_{1}(y_{5}+y_{6})&a_{2}y_{5}&\frac{a_{1}b_{2}}{b_{1}}y_{6}\\ a_{1}(y_{5}+y_{6})&a_{1}^{2}y_{2}&a_{1}a_{2}y_{5}&a_{1}b_{2}y_{6}\\ a_{2}y_{5}&a_{1}a_{2}y_{5}&a_{2}^{2}y_{3}&0\\ \frac{a_{1}b_{2}}{b_{1}}y_{6}&a_{1}b_{2}y_{6}&0&b_{2}^{2}y_{4}\end{pmatrix},

so that det(QA)=a12a22b22det(Qm)\det(Q_{A})=a_{1}^{2}a_{2}^{2}b_{2}^{2}\det(Q_{m}) by extracting factors from the last three rows and columns. ∎
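The chain of substitutions in this proof can be replayed numerically: with m=\frac{a_{1}}{b_{1}}, one checks \det Q_{A}(x)=a_{1}^{2}a_{2}^{2}b_{2}^{2}\det Q_{m}(y) exactly (a sketch; the parameter and sample values are ours):

```python
from fractions import Fraction as F

def det(M):
    # Laplace expansion along the first row; exact over Fraction.
    if len(M) == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

a1, a2, b1, b2 = F(2), F(3), F(5), F(7)
x1, x2, x3, x4, x5, x6 = F(1), F(2), F(3), F(4), F(5), F(6)

# The configuration form Q_A from the proof of Lemma 8.1.
QA = [[x1 + x5 + x6, a1*x5 + b1*x6, a2*x5, b2*x6],
      [a1*x5 + b1*x6, x2 + a1**2*x5 + b1**2*x6, a1*a2*x5, b1*b2*x6],
      [a2*x5, a1*a2*x5, x3 + a2**2*x5, 0],
      [b2*x6, b1*b2*x6, 0, x4 + b2**2*x6]]

# The two coordinate changes from the proof.
z = [x1 + x5 + x6, x2 + a1**2*x5 + b1**2*x6, x3 + a2**2*x5,
     x4 + b2**2*x6, a1*x5, b1*x6]
y1, y2, y3 = z[0], z[1]/a1**2, z[2]/a2**2
y4, y5, y6 = z[3]/b2**2, z[4]/a1, z[5]/a1

m = a1 / b1
Qm = [[y1, y5 + y6, y5, m*y6],
      [y5 + y6, y2, y5, y6],
      [y5, y5, y3, 0],
      [m*y6, y6, 0, y4]]

lhs = det(QA)
rhs = a1**2 * a2**2 * b2**2 * det(Qm)
assert lhs == rhs
```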

Proposition 8.2.

For m,mm,m^{\prime}\in\mathbb{Q}^{*}, ψmψm\psi_{m}\simeq\psi_{m^{\prime}} if and only if m=mm=m^{\prime} or mm=1mm^{\prime}=1.

Proof.

By a Singular computation, the primary decomposition of the ideal of submaximal minors of QmQ_{m} reads

I2(Qm)=Pm,1Pm,2Pm,3I_{2}(Q_{m})=P_{m,1}\cap P_{m,2}\cap P_{m,3}

where

Pm,1\displaystyle P_{m,1} =y1+my2(m+1)y5(m+1)y6,\displaystyle=\langle y_{1}+my_{2}-(m+1)y_{5}-(m+1)y_{6},
y2y4y4y5y4y6+(m1)y62,my2y3y3y5+(1m)y52y3y6\displaystyle y_{2}y_{4}-y_{4}y_{5}-y_{4}y_{6}+(m-1)y_{6}^{2},my_{2}y_{3}-y_{3}y_{5}+(1-m)y_{5}^{2}-y_{3}y_{6}\rangle
Pm,2\displaystyle P_{m,2} =y6,y4,y1y2y3y52(y1+y2+y32y5)\displaystyle={\left\langle y_{6},y_{4},y_{1}y_{2}y_{3}-y_{5}^{2}(y_{1}+y_{2}+y_{3}-2y_{5})\right\rangle}
Pm,3\displaystyle P_{m,3} =y5,y3,y1y2y4y62(y1+m2y2+y42my6)\displaystyle={\left\langle y_{5},y_{3},y_{1}y_{2}y_{4}-y_{6}^{2}(y_{1}+m^{2}y_{2}+y_{4}-2my_{6})\right\rangle}
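The components P_{m,2} and P_{m,3} can be spot-checked without Gröbner bases: at any point of V(P_{m,2}) (resp. V(P_{m,3})), every submaximal (3×3) minor of Q_{m} must vanish. A pure-Python consistency check, not a substitute for the Singular computation (sample values ours):

```python
from fractions import Fraction as F

def det3(M):
    # determinant of a 3x3 matrix, exact over Fraction
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def minors3(M):
    # all 3x3 (submaximal) minors of a 4x4 matrix
    return [det3([[M[r][c] for c in range(4) if c != j]
                  for r in range(4) if r != i])
            for i in range(4) for j in range(4)]

def Qm(m, y):
    y1, y2, y3, y4, y5, y6 = y
    return [[y1, y5 + y6, y5, m * y6],
            [y5 + y6, y2, y5, y6],
            [y5, y5, y3, 0],
            [m * y6, y6, 0, y4]]

m, y1, y2 = F(2), F(2), F(3)

# A point of V(P_{m,2}): y4 = y6 = 0 and y3 solving the cubic generator.
y5 = F(1)
y3 = y5**2 * (y1 + y2 - 2*y5) / (y1*y2 - y5**2)
p2 = (y1, y2, y3, F(0), y5, F(0))

# A point of V(P_{m,3}): y3 = y5 = 0 and y4 solving the cubic generator.
y6 = F(1)
y4 = y6**2 * (y1 + m**2*y2 - 2*m*y6) / (y1*y2 - y6**2)
p3 = (y1, y2, F(0), y4, F(0), y6)

ok2 = all(d == 0 for d in minors3(Qm(m, p2)))
ok3 = all(d == 0 for d in minors3(Qm(m, p3)))
assert ok2 and ok3
```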

Fix m,m^{\prime}\in\mathbb{Q}^{*} with \psi_{m}\simeq\psi_{m^{\prime}}. Then there is an \ell\in\operatorname{GL}_{6}(\mathbb{Q}) such that

{\left\{\ell^{*}(P_{m,i})\mid i\in{\left\{1,2,3\right\}}\right\}}={\left\{P_{m^{\prime},i}\mid i\in{\left\{1,2,3\right\}}\right\}}.

Let us assume first that

(8.1) (Pm,1)=Pm,1,(Pm,2)=Pm,2,(Pm,3)=Pm,3.\ell^{*}(P_{m,1})=P_{m^{\prime},1},\quad\ell^{*}(P_{m,2})=P_{m^{\prime},2},\quad\ell^{*}(P_{m,3})=P_{m^{\prime},3}.

Then \ell^{*} stabilizes the vector spaces y3,y5{\left\langle y_{3},y_{5}\right\rangle} and y4,y6{\left\langle y_{4},y_{6}\right\rangle} and hence

\ell^{*}(y_{3})=\ell_{3,3}y_{3}+\ell_{3,5}y_{5},\qquad\ell^{*}(y_{4})=\ell_{4,4}y_{4}+\ell_{4,6}y_{6},
\ell^{*}(y_{5})=\ell_{5,3}y_{3}+\ell_{5,5}y_{5},\qquad\ell^{*}(y_{6})=\ell_{6,4}y_{4}+\ell_{6,6}y_{6},

with non-vanishing determinants

(8.2) 1,12,21,22,10,3,35,53,55,30,4,46,64,66,40.\ell_{1,1}\ell_{2,2}-\ell_{1,2}\ell_{2,1}\neq 0,\quad\ell_{3,3}\ell_{5,5}-\ell_{3,5}\ell_{5,3}\neq 0,\quad\ell_{4,4}\ell_{6,6}-\ell_{4,6}\ell_{6,4}\neq 0.

In degree 33 the second equality in (8.1) yields

(8.3) (3,3y3+3,5y5)i=161,iyij=162,jyj(5,3y3+5,5y5)2(i=16(1,i+2,i)yi+(3,325,3)y3+(3,525,5)y5)λ(y1y2y3y52(y1+y2+y32y5))mody4,y6,λ𝕂.(\ell_{3,3}y_{3}+\ell_{3,5}y_{5})\sum_{i=1}^{6}\ell_{1,i}y_{i}\sum_{j=1}^{6}\ell_{2,j}y_{j}\\ -(\ell_{5,3}y_{3}+\ell_{5,5}y_{5})^{2}{\left(\sum_{i=1}^{6}(\ell_{1,i}+\ell_{2,i})y_{i}+(\ell_{3,3}-2\ell_{5,3})y_{3}+(\ell_{3,5}-2\ell_{5,5})y_{5}\right)}\\ \equiv\lambda(y_{1}y_{2}y_{3}-y_{5}^{2}(y_{1}+y_{2}+y_{3}-2y_{5}))\mod{\left\langle y_{4},y_{6}\right\rangle},\quad\lambda\in\mathbb{K}^{*}.

By comparing coefficients of y1y2y5y_{1}y_{2}y_{5} in (8.3), we find (1,12,2+1,22,1)3,5=0(\ell_{1,1}\ell_{2,2}+\ell_{1,2}\ell_{2,1})\ell_{3,5}=0 which forces 3,5=0\ell_{3,5}=0 by (8.2). Comparing next the coefficients of the monomials

y_{3}y_{1}^{2},\quad y_{3}y_{2}^{2},\quad y_{1}y_{5}^{2},\quad y_{2}y_{5}^{2},

in (8.3) we then obtain

(8.4) 1,12,1\displaystyle\ell_{1,1}\ell_{2,1} =0,\displaystyle=0, 1,22,2\displaystyle\ell_{1,2}\ell_{2,2} =0,\displaystyle=0,
5,52(1,1+2,1)\displaystyle-\ell_{5,5}^{2}(\ell_{1,1}+\ell_{2,1}) =λ,\displaystyle=-\lambda, 5,52(1,2+2,2)\displaystyle-\ell_{5,5}^{2}(\ell_{1,2}+\ell_{2,2}) =λ,\displaystyle=-\lambda,

which yields

(8.5) 1,1+2,1=1,2+2,2.\ell_{1,1}+\ell_{2,1}=\ell_{1,2}+\ell_{2,2}.

In degree 11 the first equality in (8.1) yields

(8.6) i=16((1,i+m2,i)yi)(m+1)(5,3y3+5,5y5)(m+1)(6,4y4+6,6y6)=μ(y1+my2(m+1)y5(m+1)y6).\sum_{i=1}^{6}{\left((\ell_{1,i}+m\ell_{2,i})y_{i}\right)}-(m+1)(\ell_{5,3}y_{3}+\ell_{5,5}y_{5})-(m+1)(\ell_{6,4}y_{4}+\ell_{6,6}y_{6})=\\ \mu{\left(y_{1}+m^{\prime}y_{2}-(m^{\prime}+1)y_{5}-(m^{\prime}+1)y_{6}\right)}.

Comparing coefficients of y1y_{1} and y2y_{2} we find

(8.7) 1,1+m2,1\displaystyle\ell_{1,1}+m\ell_{2,1} =μ,\displaystyle=\mu, 1,2+m2,2\displaystyle\ell_{1,2}+m\ell_{2,2} =mμ.\displaystyle=m^{\prime}\mu.

By equation (8.4), 1,i\ell_{1,i} or 2,i\ell_{2,i} must be zero for i=1,2i=1,2. Thus, we consider the following cases:

  • If 1,1=1,2=0\ell_{1,1}=\ell_{1,2}=0, then 2,1=μm\ell_{2,1}=\frac{\mu}{m} and 2,2=mμm\ell_{2,2}=\frac{m^{\prime}\mu}{m} by (8.7), hence μm=mμm\frac{\mu}{m}=\frac{m^{\prime}\mu}{m} by (8.5), so m=1m^{\prime}=1.

  • If 1,1=2,2=0\ell_{1,1}=\ell_{2,2}=0, then 2,1=μm\ell_{2,1}=\frac{\mu}{m} and 1,2=mμ\ell_{1,2}={m^{\prime}\mu} by (8.7), hence μm=mμ\frac{\mu}{m}={m^{\prime}\mu} by (8.5), so m=1mm^{\prime}=\frac{1}{m}.

  • If 2,1=1,2=0\ell_{2,1}=\ell_{1,2}=0, then 1,1=μ\ell_{1,1}={\mu} and 2,2=mμm\ell_{2,2}=\frac{m^{\prime}\mu}{m} by (8.7), hence μ=mμm{\mu}=\frac{m^{\prime}\mu}{m} by (8.5), so m=mm^{\prime}=m.

  • If 2,1=2,2=0\ell_{2,1}=\ell_{2,2}=0, then 1,1=μ\ell_{1,1}={\mu} and 1,2=mμ\ell_{1,2}={m^{\prime}\mu} by (8.7), hence μ=mμ{\mu}={m^{\prime}\mu} by (8.5), so m=1m^{\prime}=1.

A similar discussion applies, with the same consequences, to the case where

\ell^{*}(P_{m,1})=P_{m^{\prime},1},\quad\ell^{*}(P_{m,2})=P_{m^{\prime},3},\quad\ell^{*}(P_{m,3})=P_{m^{\prime},2}.

In conclusion, and by replacing \ell with \ell^{-1}, we find

m{1,m,1m},m{1,m,1m}.m^{\prime}\in{\left\{1,m,\frac{1}{m}\right\}},\quad m\in{\left\{1,m^{\prime},\frac{1}{m^{\prime}}\right\}}.

Unless m=mm^{\prime}=m, we have m=1m=b1a1m^{\prime}=\frac{1}{m}=\frac{b_{1}}{a_{1}}. In terms of the coordinates from the proof of Lemma 8.1, we can write

ψA\displaystyle\psi_{A} =a22b22det(z1z5+z6z5a1z6b1z5+z6z2z5z6z5a1z5z3a220z6b1z60z4b22)\displaystyle=a_{2}^{2}b_{2}^{2}\det\begin{pmatrix}z_{1}&z_{5}+z_{6}&\frac{z_{5}}{a_{1}}&\frac{z_{6}}{b_{1}}\\ z_{5}+z_{6}&z_{2}&z_{5}&z_{6}\\ \frac{z_{5}}{a_{1}}&z_{5}&\frac{z_{3}}{a_{2}^{2}}&0\\ \frac{z_{6}}{b_{1}}&z_{6}&0&\frac{z_{4}}{b_{2}^{2}}\end{pmatrix}
det(z1z5+z6z5a1z6b1z5+z6z2z5z6z5a1z5z30z6b1z60z4)\displaystyle\simeq\det\begin{pmatrix}z_{1}&z_{5}+z_{6}&\frac{z_{5}}{a_{1}}&\frac{z_{6}}{b_{1}}\\ z_{5}+z_{6}&z_{2}&z_{5}&z_{6}\\ \frac{z_{5}}{a_{1}}&z_{5}&z_{3}&0\\ \frac{z_{6}}{b_{1}}&z_{6}&0&z_{4}\end{pmatrix}

One checks that the morphism fixing z_{1},z_{2} and interchanging the pairs z_{3}\leftrightarrow z_{4}, z_{5}\leftrightarrow z_{6}, a_{1}\leftrightarrow b_{1} transforms this final matrix into a conjugate matrix, leaving the determinant unchanged. By Lemma 8.1, the determinants of these two matrices are equivalent to \psi_{m} and \psi_{1/m} respectively, where m=\frac{a_{1}}{b_{1}}. It follows that \psi_{m} and \psi_{1/m} are equivalent. ∎
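The symmetry used at the end of the proof can be made concrete: the matrix in the last display satisfies \det M(z_{1},\dots,z_{6};a_{1},b_{1})=\det M(z_{1},z_{2},z_{4},z_{3},z_{6},z_{5};b_{1},a_{1}), since the variable-and-parameter swap amounts to conjugation by the permutation exchanging rows and columns 3 and 4. An exact check (sample values ours):

```python
from fractions import Fraction as F

def det(M):
    # Laplace expansion along the first row; exact over Fraction.
    if len(M) == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

def M(z1, z2, z3, z4, z5, z6, a1, b1):
    # the matrix from the last display of the proof of Proposition 8.2
    return [[z1, z5 + z6, z5 / a1, z6 / b1],
            [z5 + z6, z2, z5, z6],
            [z5 / a1, z5, z3, 0],
            [z6 / b1, z6, 0, z4]]

z1, z2, z3, z4, z5, z6 = map(F, (1, 2, 3, 4, 5, 6))
a1, b1 = F(2), F(7)
lhs = det(M(z1, z2, z3, z4, z5, z6, a1, b1))
rhs = det(M(z1, z2, z4, z3, z6, z5, b1, a1))
assert lhs == rhs
```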

Corollary 8.3.

For every kk\in\mathbb{N}, we have |Ψ4+k6+k/|=|\Psi_{4+k}^{6+k}/_{\simeq}|=\infty over 𝕂=\mathbb{K}=\mathbb{Q}.

Proof.

Applying the construction from Remark 3.2 yields configurations WW with rW=4+kr_{W}=4+k and rW2=6+kr_{W}^{2}=6+k which give rise to the infinite family of polynomials ψm,k=ψmy7y7+k\psi_{m,k}=\psi_{m}\cdot y_{7}\cdots y_{7+k}, contact equivalent to elements of Ψ4+k6+k\Psi_{4+k}^{6+k}. For reasons of degree, ψm,kψm,k\psi_{m,k}\simeq\psi_{m^{\prime},k} is equivalent to ψmψm\psi_{m}\simeq\psi_{m^{\prime}}, so the claim follows from Proposition 8.2. ∎

References

  • [AB18] Nima Amini and Petter Brändén “Non-representable hyperbolic matroids” In Adv. Math. 334, 2018, pp. 417–449 DOI: 10.1016/j.aim.2018.03.038
  • [Alu14] Paolo Aluffi “Generalized Euler characteristics, graph hypersurfaces, and Feynman periods” In Geometric, algebraic and topological methods for quantum field theory World Sci. Publ., Hackensack, NJ, 2014, pp. 95–136 DOI: 10.1142/9789814460057_0003
  • [BB03] Prakash Belkale and Patrick Brosnan “Matroids, motives, and a conjecture of Kontsevich” In Duke Math. J. 116.1, 2003, pp. 147–188 DOI: 10.1215/S0012-7094-03-11615-4
  • [BB08] Julius Borcea and Petter Brändén “Applications of stable polynomials to mixed determinants: Johnson’s conjectures, unimodality, and symmetrized Fischer products” In Duke Math. J. 143.2, 2008, pp. 205–223 DOI: 10.1215/00127094-2008-018
  • [BEK06] Spencer Bloch, Hélène Esnault and Dirk Kreimer “On motives associated to graph polynomials” In Comm. Math. Phys. 267.1, 2006, pp. 181–225 DOI: 10.1007/s00220-006-0040-2
  • [BG10] Petter Brändén and Rafael S. González D’León “On the half-plane property and the Tutte group of a matroid” In J. Combin. Theory Ser. B 100.5, 2010, pp. 485–492 DOI: 10.1016/j.jctb.2010.04.001
  • [BH20] Petter Brändén and June Huh “Lorentzian polynomials” In Ann. of Math. (2) 192.3, 2020, pp. 821–891 DOI: 10.4007/annals.2020.192.3.4
  • [Bit+19] Thomas Bitoun, Christian Bogner, René Pascal Klausen and Erik Panzer “Feynman integral relations from parametric annihilators” In Lett. Math. Phys. 109.3, 2019, pp. 497–564 DOI: 10.1007/s11005-018-1114-8
  • [BS12] Francis Brown and Oliver Schnetz “A K3 in ϕ4\phi^{4} In Duke Math. J. 161.10, 2012, pp. 1817–1862 DOI: 10.1215/00127094-1644201
  • [BSY14] Francis Brown, Oliver Schnetz and Karen Yeats “Properties of c2c_{2} invariants of Feynman graphs” In Adv. Theor. Math. Phys. 18.2, 2014, pp. 323–362 URL: http://projecteuclid.org/euclid.atmp/1414414837
  • [Cho+04] Young-Bin Choe, James G. Oxley, Alan D. Sokal and David G. Wagner “Homogeneous multivariate polynomials with the half-plane property” Special issue on the Tutte polynomial In Adv. in Appl. Math. 32.1-2, 2004, pp. 88–187 DOI: 10.1016/S0196-8858(03)00078-2
  • [Den+20] Graham Denham, Delphine Pol, Mathias Schulze and Uli Walther “Graph hypersurfaces with torus action and a conjecture of Aluffi” To appear in Communications in Number Theory and Physics, 2020 arXiv:2005.02673
  • [DSW21] Graham Denham, Mathias Schulze and Uli Walther “Matroid connectivity and singularities of configuration hypersurfaces” In Lett. Math. Phys. 111.11, 2021 DOI: 10.1007/s11005-020-01352-3
  • [Oxl11] James Oxley “Matroid theory” 21, Oxford Graduate Texts in Mathematics Oxford University Press, Oxford, 2011, pp. xiv+684 URL: https://doi.org/10.1093/acprof:oso/9780198566946.001.0001
  • [Pat10] Eric Patterson “On the singular structure of graph hypersurfaces” In Commun. Number Theory Phys. 4.4, 2010, pp. 659–708 DOI: 10.4310/CNTP.2010.v4.n4.a3