
A composition method for neat formulas of chromatic symmetric functions

David G.L. Wang School of Mathematics and Statistics & MIIT Key Laboratory of Mathematical Theory and Computation in Information Security, Beijing Institute of Technology, Beijing 102400, P. R. China. [email protected]  and  James Z.F. Zhou School of Mathematics and Statistics, Beijing Institute of Technology, Beijing 102400, P. R. China. [email protected]
Abstract.

We develop a composition method to unearth positive $e_{I}$-expansions of chromatic symmetric functions $X_{G}$, where the subscript $I$ stands for compositions rather than integer partitions. Using this method, we derive positive and neat $e_{I}$-expansions for the chromatic symmetric functions of tadpoles, barbells and generalized bulls, and establish the $e$-positivity of hats. We also obtain a compact ribbon Schur analog for the chromatic symmetric function of cycles.

Key words and phrases:
chromatic symmetric functions, $e$-positivity, noncommutative symmetric functions, ribbon Schur functions, Schur positivity
2020 Mathematics Subject Classification:
05E05, 05A15
Wang is the corresponding author, and is supported by the NSFC (Grant No. 12171034).

1. Introduction

In his seminal paper [23], Stanley introduced the concept of the chromatic symmetric function $X_{G}$ of a graph $G$, which tracks the proper colorings of $G$. It is a generalization of Birkhoff's chromatic polynomial $\chi_{G}$ from the study of the $4$-color problem. Chromatic symmetric functions encode many graph parameters and combinatorial structures, such as the number of vertices, edges and triangles, the girth, and the lattice of contractions, see Martin et al. [17] and [23, Page 167]. For any basis $b$ of the algebra $\mathrm{Sym}$ of symmetric functions, a graph $G$ is said to be $b$-positive if every $b$-coefficient of $X_{G}$ is nonnegative. Stanley [23, Section 5] raised the question of which graphs are $e$-positive, and asserted that a complete characterization of $e$-positive graphs “appears hopeless.” He restated Stanley and Stembridge’s $(3+1)$-free conjecture [27], which has since become a leading conjecture in the study of chromatic symmetric functions.

Conjecture 1.1 (Stanley and Stembridge).

The chromatic symmetric function of the incomparability graph of every $(3+1)$-free poset is $e$-positive.

Gasharov [8] confirmed the Schur positivity of the graphs in Conjecture 1.1, which are all claw-free. Stanley [24] then proposed the following Schur positivity conjecture and attributed it to Gasharov, see also Gasharov [9].

Conjecture 1.2 (Stanley and Gasharov).

Every claw-free graph is Schur positive.

Shareshian and Wachs [21] introduced the notion of chromatic quasisymmetric functions, refined Gasharov’s Schur positivity result, and unveiled connections between Conjecture 1.1 and representation theory. By Guay-Paquet’s reduction [12], Conjecture 1.1 can be restated as the assertion that every unit interval graph, or equivalently, every claw-free interval graph, is $e$-positive. These conjectures thereby appeal to graph theorists fascinated by claw-free graphs and interval graphs, see Faudree et al. [7] for an early survey on claw-free graphs, and Corneil et al. [2] for wide applications of interval graphs. The Schur positivity of interval graphs can be shown by using a result of Haiman [13]. Haiman’s proof used Kazhdan and Lusztig’s conjectures, which were confirmed later, see [23, Page 187].

Technically speaking, to show that a graph is not $e$-positive or not Schur positive is comparatively undemanding, in the sense that demonstrating a negative $e_{\lambda}$- or $s_{\lambda}$-coefficient for a particular partition $\lambda$ is sufficient, though it may call for a scrupulous selection of $\lambda$. For instance, Wang and Wang [30] proved the non-$e$-positivity and non-Schur positivity of some spiders and brooms. Two common criteria for the non-positivity are Wolfgang III’s connected partition criterion and Stanley’s stable partition criterion, see [33] and [24] respectively.

In contrast, to confirm that a graph is $e$-positive is seldom easy. Stanley [23] studied paths and cycles by displaying the generating functions of their chromatic symmetric functions, whose Taylor expansions make the $e$-positivity plain. Gebhard and Sagan [10] lifted $X_{G}$ up to a certain $Y_{G}$ in the algebra $\mathrm{NCSym}$ of symmetric functions in noncommutative variables, so that $X_{G}$ equals the commutative image of $Y_{G}$. They developed a theory of a certain $(e)$-positivity of $Y_{G}$, which leads to the $e$-positivity of $X_{G}$. In particular, $K$-chains are $e$-positive. Tom [29] obtained an $e$-expansion of the chromatic symmetric function of a general unit interval graph in terms of “forest triples,” and used it to reconfirm the $e$-positivity of $K$-chains. Dahlberg and van Willigenburg [4] classified when $Y_{G}$ is a positive linear combination of the elementary symmetric functions in noncommuting variables. Via this $Y_{G}$-approach, Wang and Wang [31] uncovered the $e$-positivity of two classes of cycle-chords. Aliniaeifard et al. [1] reinterpreted the equivalence idea for the $(e)$-positivity in terms of the quotient algebra $\mathrm{UBCSym}$ of $\mathrm{NCSym}$ and obtained the $e$-positivity of kayak paddle graphs. An example of using chromatic quasisymmetric functions to show $e$-positivity can be found in Huh et al. [14] for melting lollipops.

We think the plainest way of confirming the $e$-positivity of a graph $G$ is to compute $X_{G}$ explicitly and make certain that the $e_{\lambda}$-coefficient for each partition $\lambda$ is nonnegative. A variant idea is to recast $X_{G}$ as a linear combination of $e$-positive chromatic symmetric functions with positive coefficients, see Dahlberg and van Willigenburg [3] for a treatment of lollipops, for example. Up to late 2023, to the best of our knowledge, only complete graphs, paths, cycles, melting lollipops, $K$-chains, and slightly melting $K$-chains have explicit formulas for their chromatic symmetric functions, see Section 2.3 and Tom [29]. In this paper, we conceive a new approach along these lines, called the composition method.

We were inspired by Shareshian and Wachs’s discovery

(1.1) $X_{P_{n}}=\sum_{I=i_{1}i_{2}\dotsm\vDash n}w_{I}e_{I}$

for paths $P_{n}$, where the sum runs over compositions $I$ of $n$, and

(1.2) $w_{I}=i_{1}\prod_{j\geq 2}(i_{j}-1).$

They [22, Table 1] obtained Eq. 1.1 by using Stanley’s generating function for Smirnov words, see also Shareshian and Wachs [20, Theorem 7.2]. An equally engaging formula for cycles was brought to light by Ellzey [6], see Proposition 2.4.
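Equation 1.1 is easy to spot-check on small paths. The following Python sketch (the function names are ours, purely for illustration) sums $w_{I}$ over all compositions with the same underlying partition, recovering small cases such as $X_{P_{3}}=e_{21}+3e_{3}$.

```python
from math import prod

def compositions(n):
    """All compositions of n, as tuples of positive parts."""
    if n == 0:
        return [()]
    return [(f,) + rest for f in range(1, n + 1) for rest in compositions(n - f)]

def w(I):
    """The Shareshian--Wachs weight of Eq. (1.2): w_I = i_1 * prod_{j>=2} (i_j - 1)."""
    return I[0] * prod(i - 1 for i in I[1:])

def path_e_expansion(n):
    """e-expansion of X_{P_n} via Eq. (1.1): maps a partition to its coefficient."""
    coeffs = {}
    for I in compositions(n):
        lam = tuple(sorted(I, reverse=True))  # the underlying partition rho(I)
        coeffs[lam] = coeffs.get(lam, 0) + w(I)
    return {lam: c for lam, c in coeffs.items() if c != 0}
```

For example, `path_e_expansion(3)` returns `{(3,): 3, (2, 1): 1}`, i.e. $X_{P_{3}}=3e_{3}+e_{21}$.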

The composition method is to expand a chromatic symmetric function $X_{G}$ in the elementary symmetric functions $e_{I}$ indexed by compositions $I$. This idea is best understood through Eq. 1.1. The $e_{I}$-coefficients, taking Eq. 1.2 for example, are functions defined on compositions. See Section 2.5 for more examples. An ordinary $e_{\lambda}$-coefficient for any partition $\lambda$ is the sum of “the $e_{I}$-coefficients” over all compositions $I$ that can be rearranged as $\lambda$; we write this property of $I$ as

(1.3) $\rho(I)=\lambda.$

Here arises a potential ambiguity about the wording “the $e_{I}$-coefficient”. Namely, when the parts of $I$ decrease weakly and so $I$ coincides with $\lambda$, it may be understood as either the coefficient of $e_{I}$ in some $e_{I}$-expansion or the coefficient of $e_{\lambda}$ in the unique $e$-expansion of $X_{G}$. This ambiguity comes from leaving the background algebra unspecified, which leads us to the algebra $\mathrm{NSym}$ of noncommutative symmetric functions, see Sections 2.2 and 2.4 for details.

In order to give step-by-step instructions for applying the composition method, we need some basic knowledge of the algebra $\mathrm{NSym}$. First, the commutative images of the basis elements $\Lambda^{I}$ and $\Psi^{I}$ of $\mathrm{NSym}$ are the elementary and power sum symmetric functions $e_{\rho(I)}$ and $p_{\rho(I)}$, respectively. Second, every symmetric function $\sum_{\lambda\vdash n}c_{\lambda}e_{\lambda}$ has an infinite number of noncommutative analogs $\sum_{I\vDash n}c_{I}^{\prime}\Lambda^{I}$ in $\mathrm{NSym}$, of which only a finite number are $\Lambda$-positive with integer coefficients. Third, a symmetric function is $e$-positive if and only if it has a $\Lambda$-positive noncommutative analog. For the purpose of showing the $e$-positivity of a chromatic symmetric function $X_{G}$, one may follow the steps below.

Step 1:

Initiate the argument by deriving a noncommutative analog $\widetilde{X}_{G}$ in its $\Lambda$-expansion. We know two ways to achieve this. One is to start from the $p$-expansion of $X_{G}$ given by the definition, which directly yields the $\Psi$-expansion of a noncommutative analog; we then transform the analog to its $\Lambda$-expansion by a change of basis, see Appendix A for this approach applied to cycles. The other way is to compute $X_{G}$ by applying Orellana and Scott’s triple-deletion property [19] and by using graphs with known $e_{I}$-expansions, see Theorem 3.3 for this way applied to tadpoles.

Step 2:

Find a positive $e_{I}$-expansion. Decompose the set of all compositions of $n=|V(G)|$ as $\mathcal{I}^{(1)}\sqcup\dotsm\sqcup\mathcal{I}^{(l)}$, such that

  1. (1):

    $\widetilde{X}_{G}=\sum_{k=1}^{l}\sum_{I\in\mathcal{I}^{(k)}}c_{I}\Lambda^{I}$,

  2. (2):

    the compositions in each $\mathcal{I}^{(k)}$ have the same underlying partition, say $\lambda^{(k)}$, and

  3. (3):

    the inner sum for each $k$ has an $e$-positive commutative image, i.e., $\sum_{I\in\mathcal{I}^{(k)}}c_{I}\geq 0$.

It follows that

(1.4) $X_{G}=\sum_{k=1}^{l}\Bigl(\sum_{I\in\mathcal{I}^{(k)}}c_{I}\Bigr)e_{\lambda^{(k)}}$

is a positive $e_{I}$-expansion.

Step 3:

Produce a neat $e_{I}$-expansion by shaping Eq. 1.4. One thing we can do is to simplify each of the coefficients $\sum_{I\in\mathcal{I}^{(k)}}c_{I}$ for the given composition functions $c_{I}$. Another is to merge further the terms for distinct indices, say $k$ and $h$, with the same underlying partition $\lambda^{(k)}=\lambda^{(h)}$. Sign-reversing involutions, injections and bijections may help embellish the expressions to make them compact and elegant.

One may catch a whiff of the combinatorial essence of the composition method from each of the steps. Besides suitably selecting a vertex triple to apply the triple-deletion property, a vast flexibility lies in both the decomposition process and the coefficient shaping. We wish the $e$-positivity of Eq. 1.4 to be as transparent as the $e$-positivity in Eq. 1.1. Step 3 is not necessary for the sole purpose of establishing positivity; however, a neat $e_{I}$-expansion is computationally convenient when proving the $e$-positivity of more complex graphs.

In this paper, we start the journey of understanding the computing power of the composition method in proving the $e$-positivity of graphs.

After making necessary preparations in Section 2, we apply the composition method to special families of graphs in Section 3. We work out neat formulas for tadpoles and barbells. The former are particular squids that were investigated by Martin et al. [17], see also Li et al. [15], while the latter contain lollipops, lariats and dumbbells as specializations. Using the composition method, we also establish the $e$-positivity of hats. The family of hats contains both tadpoles and generalized bulls. Our result for hats induces a second $e_{I}$-expansion for tadpoles. The family of generalized bulls was listed as an infinite collection of $e$-positive claw-free graphs that are not claw-contractible-free by Dahlberg et al. [5, Section 3]. We also consider the line graphs of tadpoles, since the line graph of any graph is claw-free, which is a key condition in both Conjectures 1.1 and 1.2.

An early try of the composition method towards Schur positivity is [28], in which Thibon and Wang obtained the ribbon Schur expansion of a noncommutative analog for spiders of the form $S(a,2,1)$. These spiders are not ribbon positive. The analog yields a skew Schur expansion of $X_{S(a,2,1)}$. By the Littlewood–Richardson rule, the ordinary Schur coefficients are then sizes of multisets of Yamanouchi words, and the Schur positivity follows by injections. A similar proof for the Schur positivity of spiders of the form $S(a,4,1)$ is far from uncomplicated. We thereby expect more satisfying applications of the composition method in establishing the Schur positivity of graphs. In this paper, we give a compact ribbon Schur analog for the chromatic symmetric function of cycles, see Theorem 3.2.

2. Preliminaries

This section contains the necessary notions and notation, as well as basic results on commutative symmetric functions, chromatic symmetric functions, and noncommutative symmetric functions, that will be of use.

2.1. Compositions and partitions

We use terminology from Stanley [25]. Let $n$ be a positive integer. A composition of $n$ is a sequence of positive integers with sum $n$, commonly denoted $I=i_{1}\dotsm i_{s}\vDash n$. It has size $|I|=n$, length $\ell(I)=s$, and reversal $\overline{I}=i_{s}i_{s-1}\dotsm i_{1}$. The integers $i_{k}$ are called parts of $I$. For notational convenience, we write $I=v^{s}$ if all parts have the same value $v$, and denote the $k$th last part by $i_{-k}$; thus $i_{-1}=i_{s}$. We consider the number $0$ to have a unique composition, denoted $\epsilon$. Whenever a capital letter such as $I$ or $J$ is adopted to denote a composition, we use the lowercase counterparts $i$ and $j$ respectively, with integer subscripts, to denote the parts. A factor of $I$ is a subsequence that consists of consecutive parts. A prefix (resp., suffix) of $I$ is a factor that starts from $i_{1}$ (resp., ends at $i_{s}$). Denote by $m_{k}(I)$ the number of parts $k$ in $I$, namely,

(2.1) $m_{k}(I)=|\{j\in\{1,\dots,s\}\colon i_{j}=k\}|.$

A partition of $n$ is a multiset of positive integers with sum $n$, commonly denoted as

$\lambda=\lambda_{1}\lambda_{2}\dotsm=1^{m_{1}(\lambda)}2^{m_{2}(\lambda)}\dotsm\vdash n,$

where $\lambda_{1}\geq\lambda_{2}\geq\dotsm\geq 1$. For any composition $I$, there is a unique partition $\rho(I)$ satisfying Eq. 1.3, i.e., the partition obtained by rearranging the parts of $I$. As partitions have Young diagrams as graphical representations, one uses the terminology of ribbons to illustrate compositions. In French notation, the ribbon of a composition $I$ is the collection of boxes such that

  • Row $k$ consists of $i_{k}$ consecutive boxes, and

  • the last box in Row $k$ and the first box in Row $k+1$ lie in the same column.

In the theory of integer partitions, by speaking of a Young diagram $\lambda$ one emphasizes the geometric shape of the partition $\lambda$. Analogously, in our composition calculus, we use the wording “a ribbon $I$” to call attention to the illustration of the composition $I$.

Following MacMahon [16], the conjugate $I^{\sim}$ of a composition $I$ is the ribbon consisting of the column lengths of $I$ read from right to left. This is different from the conjugate $\lambda^{\prime}$ of a partition $\lambda$, whose Young diagram is obtained by turning rows into columns. For example, $32^{\sim}=121^{2}$ and $32^{\prime}=221$. A refinement of $I$ is a composition $J=j_{1}\dotsm j_{t}$ such that

$i_{1}=j_{k_{0}+1}+\dots+j_{k_{1}},\quad\dots,\quad i_{s}=j_{k_{s-1}+1}+\dots+j_{k_{s}},$

for some integers $k_{0}<\dots<k_{s}$, where $k_{0}=0$ and $k_{s}=t$. We say that $I$ is a coarsening of $J$ if $J$ is a refinement of $I$. The reverse refinement order $\preceq$ for compositions is the partial order defined by

$I\preceq J\iff\text{$J$ is a refinement of $I$}.$
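The conjugate and the refinement relation are both mechanical enough to compute. The sketch below (our illustrative code, not from the paper) builds the ribbon of $I$ row by row in French notation and reads off the column lengths from right to left, and tests the refinement condition by accumulating parts of $J$ against parts of $I$.

```python
def conjugate(I):
    """Conjugate ribbon I~: column lengths of the ribbon of I, read right to left."""
    cols = {}
    start = 1
    for part in I:
        for c in range(start, start + part):
            cols[c] = cols.get(c, 0) + 1
        start = start + part - 1  # next row begins above the last box of this row
    return tuple(cols[c] for c in sorted(cols, reverse=True))

def is_refinement(J, I):
    """True if J refines I: consecutive parts of J sum to the parts of I, in order."""
    acc = 0
    it = iter(I)
    target = next(it, None)
    for j in J:
        acc += j
        if target is None or acc > target:
            return False
        if acc == target:
            acc, target = 0, next(it, None)
    return target is None and acc == 0
```

For instance, `conjugate((3, 2))` returns `(1, 2, 1, 1)`, matching $32^{\sim}=121^{2}$, and conjugation is an involution on ribbons.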

The first parts of blocks of $J$ with respect to $I$ are the numbers $j_{k_{0}+1},\dots,j_{k_{s-1}+1}$, with product

$f\!p(J,I)=j_{k_{0}+1}\dotsm j_{k_{s-1}+1}.$

The last parts of blocks of $J$ with respect to $I$ are the numbers $j_{k_{1}},\dots,j_{k_{s}}$, with product

$lp(J,I)=j_{k_{1}}\dotsm j_{k_{s}}.$

By definition, one may derive directly that

(2.2) $lp(\overline{J},\overline{I})=f\!p(J,I).$
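These block statistics, and the reversal identity Eq. 2.2, can be checked mechanically. A small sketch (helper names are ours) splits a refinement $J$ into its blocks relative to $I$ and multiplies the first or last parts:

```python
from math import prod

def blocks(J, I):
    """Split the refinement J into consecutive blocks summing to the parts of I."""
    out, cur, acc = [], [], 0
    it = iter(I)
    target = next(it, None)
    for j in J:
        if target is None:
            raise ValueError("J is not a refinement of I")
        cur.append(j)
        acc += j
        if acc == target:
            out.append(cur)
            cur, acc, target = [], 0, next(it, None)
        elif acc > target:
            raise ValueError("J is not a refinement of I")
    if cur:
        raise ValueError("J is not a refinement of I")
    return out

def fp(J, I):
    """Product of the first parts of the blocks of J with respect to I."""
    return prod(b[0] for b in blocks(J, I))

def lp(J, I):
    """Product of the last parts of the blocks of J with respect to I."""
    return prod(b[-1] for b in blocks(J, I))
```

For example, with $I=32$ and $J=122$ one gets blocks $[1,2]$ and $[2]$, so $f\!p(J,I)=2$ and $lp(J,I)=4$, and $lp(\overline{J},\overline{I})=f\!p(J,I)$ as in Eq. 2.2.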

For any compositions $I=i_{1}\dotsm i_{s}$ and $J=j_{1}\dotsm j_{t}$, the concatenation of $I$ and $J$ is the composition $I\!J=i_{1}\dotsm i_{s}j_{1}\dotsm j_{t}$, and the near concatenation of $I$ and $J$ is the composition

$I\triangleright J=i_{1}\dotsm i_{s-1}(i_{s}+j_{1})j_{2}\dotsm j_{t}.$

In French notation, the ribbon $I\!J$ (resp., $I\triangleright J$) is obtained by attaching the first box of $J$ immediately below (resp., to the immediate right of) the last box of $I$.

The decomposition of a ribbon $J$ relative to a composition $I$ is the unique expression

$\nabla_{I}(J)=J_{1}\bullet_{1}J_{2}\bullet_{2}\dots\bullet_{s-1}J_{s},$

where $s=\ell(I)$, each $J_{k}$ is a ribbon of size $i_{k}$, and each symbol $\bullet_{k}$ stands for either the concatenation or the near concatenation. For instance,

$\nabla_{83}(5141)=512\triangleright 21.$

We call the ribbons $J_{k}$ the blocks of $\nabla_{I}(J)$. In the language of ribbons, the block $J_{k}$ consists of the first $i_{k}$ boxes of the ribbon that is obtained from $J$ by removing the previous blocks $J_{1},\dots,J_{k-1}$.
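The decomposition $\nabla_{I}(J)$ is computed by peeling off $i_{k}$ boxes at a time; when the boundary splits a part of $J$, the connecting operation is the near concatenation. The following sketch (our illustrative code; we write `'|'` for concatenation and `'>'` for near concatenation) implements this peeling:

```python
def ribbon_decomposition(J, I):
    """Blocks of nabla_I(J), with the operation joining consecutive blocks:
    '|' for concatenation, '>' for near concatenation."""
    parts = list(J)
    out_blocks, ops = [], []
    for size in I:
        block, need, split = [], size, False
        while need > 0:
            p = parts.pop(0)
            if p <= need:
                block.append(p)
                need -= p
                split = False
            else:
                # the boundary cuts a part of J: the rest continues the same row
                block.append(need)
                parts.insert(0, p - need)
                need, split = 0, True
        out_blocks.append(tuple(block))
        if parts:
            ops.append('>' if split else '|')
    return out_blocks, ops
```

On the example above, `ribbon_decomposition((5, 1, 4, 1), (8, 3))` returns the blocks $(5,1,2)$ and $(2,1)$ joined by `'>'`, i.e. $\nabla_{83}(5141)=512\triangleright 21$.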

A hook is a ribbon of the form $1^{s}t$ for some $s\geq 0$ and $t\geq 1$. Every hook appears as the English letter L or a degenerate one, that is, a horizontal ribbon $t$ or a vertical ribbon $1^{s}$. Here we recognize the ribbon $1$ as horizontal. Denote by $\mathcal{H}_{I}$ the set of ribbons $J$ such that every block in the decomposition $\nabla_{I}(J)$ is a hook. Then

$\mathcal{H}_{n}=\{n,\ 1(n-1),\ 1^{2}(n-2),\ \dots,\ 1^{n-2}2,\ 1^{n}\}$

is the set of hooks for the single-part composition $n$. Moreover, since every factor of a hook is still a hook, we have $\mathcal{H}_{n}\subseteq\mathcal{H}_{I}$ for all $I\vDash n$. For example, $\mathcal{H}_{4}=\{4,\ 13,\ 1^{2}2,\ 1^{4}\}$, $\mathcal{H}_{31}=\mathcal{H}_{4}\cup\{31,\ 121\}$, and $\mathcal{H}_{13}=\mathcal{H}_{4}\cup\{22,\ 21^{2}\}$. Let $I=i_{1}\dotsm i_{s}$. By definition, the set $\mathcal{H}_{I}$ is in bijection with the set

$\{J_{1}\bullet_{1}J_{2}\bullet_{2}\dots\bullet_{s-1}J_{s}\colon J_{k}\in\mathcal{H}_{i_{k}}\text{ for }1\leq k\leq s,\text{ and }\bullet_{k}\in\{\triangleleft,\,\triangleright\}\text{ for }1\leq k\leq s-1\},$

where the symbol $\triangleleft$ stands for the concatenation operation. As a consequence, one may calculate $|\mathcal{H}_{I}|=2^{s-1}i_{1}\dotsm i_{s}$.
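The bijection gives a direct way to enumerate $\mathcal{H}_{I}$: choose a hook of size $i_{k}$ for each part and one of two joining operations between consecutive blocks. A sketch (our code, assuming the bijection as stated):

```python
from itertools import product

def hooks(n):
    """Hooks of size n: the ribbons 1^s (n-s) for 0 <= s <= n-1."""
    return [(1,) * s + (n - s,) for s in range(n)]

def near_concat(A, B):
    """A > B: merge the last part of A with the first part of B."""
    return A[:-1] + (A[-1] + B[0],) + B[1:]

def hook_ribbons(I):
    """The set H_I, built from hook choices and operation choices."""
    out = set()
    s = len(I)
    for choice in product(*(hooks(i) for i in I)):
        for ops in product('|>', repeat=s - 1):
            J = choice[0]
            for op, blk in zip(ops, choice[1:]):
                J = J + blk if op == '|' else near_concat(J, blk)
            out.add(J)
    return out
```

For $I=31$ this reproduces $\mathcal{H}_{31}=\mathcal{H}_{4}\cup\{31,\ 121\}$, a set of size $2^{1}\cdot 3\cdot 1=6$, consistent with the counting formula.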

2.2. Commutative symmetric functions

We give an overview of the necessary notions and notation from the theory of commutative symmetric functions. For comprehensive references, one may refer to Stanley [26] and Mendes and Remmel [18]. Let $R$ be a commutative ring with identity. A symmetric function of homogeneous degree $n$ over $R$ is a formal power series

$f(x_{1},x_{2},\dots)=\sum_{\lambda=\lambda_{1}\lambda_{2}\dotsm\vdash n}c_{\lambda}\cdot x_{1}^{\lambda_{1}}x_{2}^{\lambda_{2}}\dotsm,\quad\text{where }c_{\lambda}\in R,$

such that $f(x_{1},x_{2},\dots)=f(x_{\pi(1)},x_{\pi(2)},\dots)$ for any permutation $\pi$. Denote by $\mathbb{Q}$ the field of rational numbers. Define $\mathrm{Sym}^{0}=\mathbb{Q}$, and define $\mathrm{Sym}^{n}$ to be the vector space of homogeneous symmetric functions of degree $n$ over $\mathbb{Q}$. Common bases of $\mathrm{Sym}^{n}$ include the elementary symmetric functions $e_{\lambda}$, the complete homogeneous symmetric functions $h_{\lambda}$, the power sum symmetric functions $p_{\lambda}$, and the Schur symmetric functions $s_{\lambda}$. The first three are multiplicatively defined by

$b_{\lambda}=b_{\lambda_{1}}\dotsm b_{\lambda_{l}},\quad\text{for }b\in\{e,h,p\}\text{ and for any partition }\lambda=\lambda_{1}\dotsm\lambda_{l},$

where

$e_{k}=\sum_{1\leq i_{1}<\dots<i_{k}}x_{i_{1}}\dotsm x_{i_{k}},\quad h_{k}=\sum_{1\leq i_{1}\leq\dots\leq i_{k}}x_{i_{1}}\dotsm x_{i_{k}},\quad\text{and}\quad p_{k}=\sum_{i\geq 1}x_{i}^{k}.$

The Schur symmetric function $s_{\lambda}$ can be defined combinatorially by $s_{\lambda}=\sum_{T\in\mathrm{CS}_{\lambda}}w(T)$, where $\mathrm{CS}_{\lambda}$ is the set of column strict tableaux of shape $\lambda$, and the weight $w(T)$ is the product of $x_{i}$ over all entries $i$ in $T$. Here a tableau of shape $\lambda$ is said to be column strict if

  • the entries in each row weakly increase, and

  • the entries in each column strictly increase starting from the longest row; this is to say from bottom to top in French notation.

The Schur symmetric functions are said to be “the most important basis for $\mathrm{Sym}$ with respect to its relationship to other areas of mathematics” and “crucial in understanding the representation theory of the symmetric group,” see [18, Page 37].

For any basis $\{b_{\lambda}\}$ of $\mathrm{Sym}^{n}$ and any symmetric function $f\in\mathrm{Sym}^{n}$, the $b_{\lambda}$-coefficient of $f$ is the unique number $c_{\lambda}$ such that $f=\sum_{\lambda\vdash n}c_{\lambda}b_{\lambda}$, denoted $[b_{\lambda}]f=c_{\lambda}$. The symmetric function $f$ is said to be $b$-positive if every $b$-coefficient of $f$ is nonnegative. For instance, every elementary symmetric function is Schur positive since $e_{\lambda}=\sum_{\mu\vdash|\lambda|}K_{\mu^{\prime}\lambda}s_{\mu}$, where the $K_{\mu^{\prime}\lambda}$ are Kostka numbers, see [18, Exercise 2.12].

With the aid of the function $\rho$ defined by Eq. 1.3, one may extend the domain of these basis symmetric functions from partitions to compositions. Precisely speaking, one may define $b_{I}=b_{\rho(I)}$ for any composition $I$ and any basis $\{b_{\lambda}\}_{\lambda}$. With this convention, we are safe to write $e_{I}$ instead of the redundant expression $e_{\rho(I)}$. Since $\{e_{I}\}_{I\vDash n}$ is not a basis of $\mathrm{Sym}^{n}$, the notation $[e_{I}]f$ is undefined.

2.3. Chromatic symmetric functions

Stanley [23] introduced the chromatic symmetric function of a graph $G$ as

$X_{G}=\sum_{\kappa}\prod_{v\in V(G)}\mathbf{x}_{\kappa(v)},$

where $\mathbf{x}=(x_{1},x_{2},\ldots)$ is a countable list of indeterminates, and $\kappa$ runs over the proper colorings of $G$. The chromatic symmetric function is a particular symmetric function, and it generalizes Birkhoff’s chromatic polynomial $\chi_{G}(k)$, since $X_{G}(1^{k}00\dotsm)=\chi_{G}(k)$. For instance, the chromatic symmetric function of the complete graph $K_{n}$ is

(2.3) $X_{K_{n}}=n!e_{n}.$

We will need the $p$-expansion of $X_{G}$, see [23, Theorem 2.5].

Proposition 2.1 (Stanley).

The chromatic symmetric function of a graph $G=(V,E)$ is

$X_{G}=\sum_{E^{\prime}\subseteq E}(-1)^{|E^{\prime}|}p_{\tau(E^{\prime})},$

where $\tau(E^{\prime})$ is the partition consisting of the component orders of the spanning subgraph $(V,E^{\prime})$.
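Since $p_{r}(1^{k}00\dotsm)=k$, Proposition 2.1 specializes under $X_{G}(1^{k}00\dotsm)=\chi_{G}(k)$ to the classical subgraph expansion $\chi_{G}(k)=\sum_{E^{\prime}\subseteq E}(-1)^{|E^{\prime}|}k^{c(E^{\prime})}$, where $c(E^{\prime})$ counts components. A brute-force sketch (our illustrative code) of this specialization:

```python
from itertools import combinations

def components(vertices, edges):
    """Number of connected components of (vertices, edges), via union-find."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in vertices})

def chromatic_poly_at(vertices, edges, k):
    """chi_G(k) via the specialization of Proposition 2.1:
    sum over edge subsets E' of (-1)^|E'| * k^(#components of (V, E'))."""
    total = 0
    for r in range(len(edges) + 1):
        for sub in combinations(edges, r):
            total += (-1) ** r * k ** components(vertices, sub)
    return total
```

On the triangle $K_{3}$ this reproduces $\chi(k)=k(k-1)(k-2)$, e.g. the value $6$ at $k=3$.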

By [18, Theorem 2.22], every $e$-coefficient of a power sum symmetric function $p_{\mu}$ is an integer. It then follows from Proposition 2.1 that every $e$-coefficient of $X_{G}$ is integral. Stanley [23, Corollary 3.6] presented the following quick criterion for $e$-positivity.

Proposition 2.2 (Stanley).

Any graph whose vertices can be partitioned into two cliques is $e$-positive.

Such graphs have several characterizations, such as the complements of bipartite graphs and the incomparability graphs of $3$-free posets, see Guay-Paquet [12, Theorem 5.3]. Stanley [23, Propositions 5.3 and 5.4] confirmed the $e$-positivity of paths and cycles.

Proposition 2.3 (Stanley).

Let $E(z)=\sum_{n\geq 0}e_{n}z^{n}$ and $F(z)=E(z)-zE^{\prime}(z)$. Denote by $P_{n}$ the $n$-vertex path and by $C_{n}$ the $n$-vertex cycle. Then

$\sum_{n\geq 0}X_{P_{n}}z^{n}=\frac{E(z)}{F(z)}\quad\text{and}\quad\sum_{n\geq 2}X_{C_{n}}z^{n}=\frac{z^{2}E^{\prime\prime}(z)}{F(z)}.$

As a consequence, paths and cycles are $e$-positive.

Explicit formulas for the $e$-coefficients of $X_{P_{n}}$ and $X_{C_{n}}$ were obtained by extracting the coefficients of these generating functions, see Wolfe [32, Theorem 3.2]. Shareshian and Wachs [22] obtained the much simpler Eq. 1.1 for paths. Ellzey [6, Corollary 6.2] gave a formula for the chromatic quasisymmetric function of cycles, whose $t=1$ specialization is equally simple.

Proposition 2.4 (Ellzey).

For $n\geq 2$, $X_{C_{n}}=\sum_{I\vDash n}(i_{1}-1)w_{I}e_{I}$.
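Like Eq. 1.1, Ellzey's formula can be spot-checked on small cycles (our illustrative code; since $C_{3}=K_{3}$, Eq. 2.3 gives $X_{C_{3}}=6e_{3}$ as a ground truth):

```python
from math import prod

def compositions(n):
    """All compositions of n, as tuples of positive parts."""
    if n == 0:
        return [()]
    return [(f,) + rest for f in range(1, n + 1) for rest in compositions(n - f)]

def w(I):
    """The weight of Eq. (1.2)."""
    return I[0] * prod(i - 1 for i in I[1:])

def cycle_e_expansion(n):
    """e-expansion of X_{C_n} from Ellzey's formula: partition -> coefficient."""
    coeffs = {}
    for I in compositions(n):
        lam = tuple(sorted(I, reverse=True))
        coeffs[lam] = coeffs.get(lam, 0) + (I[0] - 1) * w(I)
    return {lam: c for lam, c in coeffs.items() if c != 0}
```

Here `cycle_e_expansion(3)` returns `{(3,): 6}`, agreeing with $X_{C_{3}}=X_{K_{3}}=3!\,e_{3}$, and `cycle_e_expansion(4)` returns the $e$-positive expansion $X_{C_{4}}=12e_{4}+2e_{22}$.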

We provide a proof of Proposition 2.4 using the composition method in Appendix A. Orellana and Scott [19, Theorem 3.1, Corollaries 3.2 and 3.3] established the triple-deletion property for chromatic symmetric functions.

Theorem 2.5 (Orellana and Scott).

Let $G$ be a graph with a stable set $T$ of order $3$. Denote by $e_{1}$, $e_{2}$ and $e_{3}$ the edges linking the vertices in $T$. For any set $S\subseteq\{1,2,3\}$, denote by $G_{S}$ the graph with vertex set $V(G)$ and edge set $E(G)\cup\{e_{j}\colon j\in S\}$. Then

$X_{G_{12}}=X_{G_{1}}+X_{G_{23}}-X_{G_{3}}\quad\text{and}\quad X_{G_{123}}=X_{G_{13}}+X_{G_{23}}-X_{G_{3}}.$
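Since both identities hold at the level of symmetric functions, they hold in particular for the specialization $\chi_{G}(k)=X_{G}(1^{k}00\dotsm)$, which can be checked by brute-force coloring counts. The sketch below (our illustrative code; the base graph, and the labeling $e_{1}=\{v_{1},v_{2}\}$, $e_{2}=\{v_{1},v_{3}\}$, $e_{3}=\{v_{2},v_{3}\}$, are assumptions of this example only) verifies the two identities on a small graph:

```python
from itertools import product

def count_colorings(vertices, edges, k):
    """Brute-force chi_G(k): the number of proper colorings with k colors."""
    total = 0
    for col in product(range(k), repeat=len(vertices)):
        c = dict(zip(vertices, col))
        if all(c[u] != c[v] for u, v in edges):
            total += 1
    return total

# Example graph: vertices 1..4, base edge {1, 4}; T = {1, 2, 3} is stable.
V = [1, 2, 3, 4]
base = [(1, 4)]
e1, e2, e3 = (1, 2), (1, 3), (2, 3)  # the three possible edges on T

def chi(extra, k):
    """chi of the graph G_S obtained by adding the edges in `extra` to G."""
    return count_colorings(V, base + list(extra), k)
```

With this setup, `chi([e1, e2], k)` equals `chi([e1], k) + chi([e2, e3], k) - chi([e3], k)` for every $k$, mirroring the first identity of Theorem 2.5, and similarly for the second.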

2.4. Noncommutative symmetric functions

For an introduction and basic knowledge on noncommutative symmetric functions, see Gelfand et al. [11]. Let $K$ be a field of characteristic zero. The algebra of noncommutative symmetric functions is the free associative algebra $\mathrm{NSym}=K\langle\Lambda_{1},\Lambda_{2},\dots\rangle$ generated by an infinite sequence $\{\Lambda_{k}\}_{k\geq 1}$ of indeterminates over $K$, where $\Lambda_{0}=1$. It is graded by the weight function $w(\Lambda_{k})=k$. The homogeneous component of weight $n$ is denoted $\mathrm{NSym}_{n}$. Let $t$ be an indeterminate that commutes with all the indeterminates $\Lambda_{k}$. The elementary symmetric functions are the $\Lambda_{n}$ themselves, whose generating function is denoted by

$\lambda(t)=\sum_{n\geq 0}\Lambda_{n}t^{n}.$

The complete homogeneous symmetric functions $S_{n}$ are defined by the generating function

$\sigma(t)=\sum_{n\geq 0}S_{n}t^{n}=\frac{1}{\lambda(-t)}.$

The power sum symmetric functions $\Psi_{n}$ of the first kind are defined by the generating function

$\psi(t)=\sum_{n\geq 1}\Psi_{n}t^{n-1}=\lambda(-t)\sigma^{\prime}(t).$

For any composition $I=i_{1}i_{2}\dotsm$, define

$\Lambda^{I}=\Lambda_{i_{1}}\Lambda_{i_{2}}\dotsm,\quad S^{I}=S_{i_{1}}S_{i_{2}}\dotsm,\quad\text{and}\quad\Psi^{I}=\Psi_{i_{1}}\Psi_{i_{2}}\dotsm.$

The algebra $\mathrm{NSym}$ is freely generated by any one of these families. Here the superscript notation is adopted to indicate that the functions are multiplicative with respect to concatenation of compositions. The sign of $I$ is defined by

(2.4) $\varepsilon^{I}=(-1)^{|I|-\ell(I)}.$

It is direct to check that

(2.5) $\varepsilon^{I}\varepsilon^{J}=\varepsilon^{I\!J}.$

Another linear basis of $\mathrm{NSym}$ is formed by the ribbon Schur functions $R_{I}$, which can be defined by

$\varepsilon^{I}R_{I}=\sum_{J\preceq I}\varepsilon^{J}S^{J},$

see [11, Formula (62)]. We list some transition rules for these bases, see [11, Propositions 4.15 and 4.23, and Note 4.21].

Proposition 2.6 (Gelfand et al.).

For any composition $I$, we have

(2.6) $\Lambda^{I}=\sum_{J\succeq\overline{I}^{\sim}}R_{J},$
(2.7) $\Psi^{I}=\sum_{J\succeq I}\varepsilon^{J}f\!p(J,I)\Lambda^{J},\quad\text{and}$
(2.8) $\Psi^{I}=\sum_{J\in\mathcal{H}_{I}}\varepsilon^{I\!J_{1}\dotsm J_{\ell(I)}}R_{J},$

where the $J_{k}$ are the blocks of the decomposition $\nabla_{I}(J)$.

Equation 2.7 holds by virtue of Eq. 2.2, though it was expressed in terms of the product $lp(J,I)$ in [11]. Recall from Eq. 1.3 that $\rho$ maps a composition to its underlying partition. We use the same notation $\rho$ for the projection map defined by $\rho(\Lambda^{I})=e_{I}$ and extended linearly. By definition, for any composition $I$,

$\rho(\Lambda^{I})=e_{I},\quad\rho(S^{I})=h_{I},\quad\rho(\Psi^{I})=p_{I},\quad\text{and}\quad\rho(R_{I})=s_{\mathrm{sh}(I)},$

where $\mathrm{sh}(I)$ is the skew partition of shape $I$. For instance,

$\rho(\Lambda^{12})=e_{21},\quad\rho(S^{12})=h_{21},\quad\rho(\Psi^{12})=p_{21},\quad\rho(R_{12})=s_{21},\quad\text{and}\quad\rho(R_{21})=s_{22/1}.$
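The commutative image of Eq. 2.7 expresses $p_{I}$ in the $e$-basis, which can be verified numerically on a finite variable list. A sketch (our illustrative code): for each refinement $J$ of $I$, multiply the sign $\varepsilon^{J}$, the product $f\!p(J,I)$ of first parts of blocks, and the evaluation of $e_{J}$.

```python
from math import prod
from itertools import combinations

def refinements(I):
    """All refinements of I: refine each part independently and concatenate."""
    def comps(n):
        if n == 0:
            return [()]
        return [(f,) + r for f in range(1, n + 1) for r in comps(n - f)]
    out = [()]
    for part in I:
        out = [J + K for J in out for K in comps(part)]
    return out

def eps(I):
    """The sign of Eq. (2.4): (-1)^(|I| - l(I))."""
    return (-1) ** (sum(I) - len(I))

def fp(J, I):
    """Product of the first parts of the blocks of the refinement J relative to I."""
    res, acc, new_block = 1, 0, True
    it = iter(I)
    target = next(it)
    for j in J:
        if new_block:
            res *= j
            new_block = False
        acc += j
        if acc == target:
            acc, target, new_block = 0, next(it, None), True
    return res

def e_elem(k, xs):
    """Elementary symmetric polynomial e_k evaluated at the list xs."""
    return sum(prod(c) for c in combinations(xs, k))

def p_via_eq27(I, xs):
    """Commutative image of Eq. (2.7): the power sum p_I in the e-basis."""
    return sum(eps(J) * fp(J, I) * prod(e_elem(j, xs) for j in J)
               for J in refinements(I))
```

For $I=2$ this recovers $p_{2}=e_{1}^{2}-2e_{2}$, and for $I=21$ it recovers $p_{21}=e_{1}^{3}-2e_{2}e_{1}$, matching direct evaluation of the power sums.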

When $\rho(F)=f$ for some $F\in\mathrm{NSym}$ and $f\in\mathrm{Sym}$, we say that $f$ is the commutative image of $F$, and that $F$ is a noncommutative analog of $f$. For instance, Eq. 1.1 and Proposition 2.4 imply that $X_{P_{n}}$ and $X_{C_{n}}$ have the noncommutative analogs

(2.9) $\widetilde{X}_{P_{n}}=\sum_{I\vDash n}w_{I}\Lambda^{I},\quad\text{and}$
(2.10) $\widetilde{X}_{C_{n}}=\sum_{I\vDash n}(i_{1}-1)w_{I}\Lambda^{I},$

respectively. If a chromatic symmetric function $X_{G}$ has a noncommutative analog $\widetilde{X}_{G}\in\mathrm{NSym}$, then for any partition $\lambda\vdash|V(G)|$,

$[e_{\lambda}]X_{G}=\sum_{\rho(I)=\lambda}[\Lambda^{I}]\widetilde{X}_{G}.$

The aforementioned ambiguity is resolved naturally in the language of the algebra $\mathrm{NSym}$. Indeed, since $\{\Lambda^{I}\}_{I\vDash n}$ is a basis of $\mathrm{NSym}_{n}$, we may speak of the well-defined $\Lambda^{I}$-coefficients instead of the undefined “$e_{I}$-coefficients”.

By definition, any chromatic symmetric function has an infinite number of noncommutative analogs, among which only a finite number with integer coefficients are $\Lambda$-positive. In particular, if a symmetric function $\sum_{\lambda\vdash n}c_{\lambda}e_{\lambda}$ is $e$-positive, then the analog $\sum_{\lambda\vdash n}c_{\lambda}\Lambda^{\lambda}$ is $\Lambda$-positive. Therefore, a symmetric function is $e$-positive if and only if it has a $\Lambda$-positive noncommutative analog, and in order to prove that a graph $G$ is $e$-positive, it suffices to find a $\Lambda$-positive analog of $X_{G}$. The algebra $\mathrm{NSym}$ plays the role of providing theoretical support for the composition method. As a consequence, we display only positive $e_{I}$-expansions in theorem statements, and write in terms of noncommutative analogs only when arguing with $\Lambda^{I}$-coefficients is convenient.

2.5. Warming up for the composition method

This section consists of a property of the function $w_{I}$ defined by Eq. 1.2, some other composition functions and their interrelations, as well as some practice in using these functions.

From the definition, it is straightforward to see that $w_{I}=w_{J}$ for any composition $J$ that is obtained by rearranging the non-first parts of $I$. Another five-finger exercise is as follows.

Lemma 2.7.

Let II and JJ be nonempty compositions such that j11j_{1}\neq 1. Then

wIwJ=j1j11wKw_{I}w_{J}=\frac{j_{1}}{j_{1}-1}\cdotp w_{K}

for any composition KK that is obtained by rearranging the parts of IJI\!J such that k1=i1k_{1}=i_{1}.

Proof.

Direct by Eq. 1.2. ∎
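As a quick numerical check of Lemma 2.7, one may verify an instance by brute force. The following Python sketch assumes that Eq. 1.2 reads wI = i1(i2−1)(i3−1)⋯, which is the form consistent with all formulas in this section; the helper name w is ours.

```python
def w(I):
    # assumed form of Eq. 1.2: w_I = i_1 * (i_2 - 1) * (i_3 - 1) * ...
    r = I[0]
    for p in I[1:]:
        r *= p - 1
    return r

I, J = (3, 2, 4), (5, 2)      # j_1 = 5 != 1
K = (3, 5, 2, 2, 4)           # rearrangement of the parts of IJ with k_1 = i_1
# Lemma 2.7: w_I * w_J = j_1 / (j_1 - 1) * w_K, cleared of denominators
assert (J[0] - 1) * w(I) * w(J) == J[0] * w(K)
```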

For any number a\absIa\leq\abs{I}, we define the surplus partial sum of II with respect to aa to be the number

(2.11) σI+(a)=min\brk[c]1\absi1ik:0k(I),\absi1ika.\sigma_{I}^{+}(a)=\min\brk[c]1{\abs{i_{1}\dotsm i_{k}}\colon 0\leq k\leq\ell(I),\ \abs{i_{1}\dotsm i_{k}}\geq a}.

Define the aa-surplus of II to be the number

(2.12) ΘI+(a)=σI+(a)a.\Theta_{I}^{+}(a)=\sigma_{I}^{+}(a)-a.

Then ΘI+(a)0\Theta_{I}^{+}(a)\geq 0. The function ΘI+()\Theta_{I}^{+}(\cdot) will appear in Theorem 3.3. Here is a basic property.

Lemma 2.8.

Let InI\vDash n and 0a,tn0\leq a,t\leq n. If ΘI+(a)t\Theta_{I}^{+}(a)\geq t, then ΘI+(a)=t+ΘI+(a+t)\Theta_{I}^{+}(a)=t+\Theta_{I}^{+}(a+t).

Proof.

This is transparent if one notices σI+(a+t)=σI+(a)\sigma_{I}^{+}(a+t)=\sigma_{I}^{+}(a). ∎
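Eqs. 2.11 and 2.12 and Lemma 2.8 are easy to check by machine; the sketch below (helper names ours) does so for one composition and all admissible a and t.

```python
def sigma_plus(I, a):
    # surplus partial sum (Eq. 2.11): least partial sum of I that is >= a
    partial = [sum(I[:k]) for k in range(len(I) + 1)]
    return min(s for s in partial if s >= a)

def theta_plus(I, a):
    # a-surplus (Eq. 2.12)
    return sigma_plus(I, a) - a

I, n = (3, 1, 4), 8
for a in range(n + 1):
    for t in range(n - a + 1):
        if theta_plus(I, a) >= t:          # hypothesis of Lemma 2.8
            assert theta_plus(I, a) == t + theta_plus(I, a + t)
```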

Lemma 2.8 will be used in the proof of Theorem 3.10. Similarly, for any number a0a\geq 0, we define the deficiency partial sum of II with respect to aa to be the number

(2.13) σI(a)=max\brk[c]1\absi1ik:0k(I),\absi1ika,\sigma_{I}^{-}(a)=\max\brk[c]1{\abs{i_{1}\dotsm i_{k}}\colon 0\leq k\leq\ell(I),\ \abs{i_{1}\dotsm i_{k}}\leq a},

and define the aa-deficiency of II to be the number

(2.14) ΘI(a)=aσI(a).\Theta_{I}^{-}(a)=a-\sigma_{I}^{-}(a).

Then ΘI(a)0\Theta_{I}^{-}(a)\geq 0. The function σI\sigma^{-}_{I} (resp., ΘI\Theta^{-}_{I}) can be expressed in terms of σI+\sigma^{+}_{I} (resp., ΘI+\Theta^{+}_{I}).

Lemma 2.9.

Let InI\vDash n and 0an0\leq a\leq n. Then

(2.15) σI(a)=nσI¯+(na),\sigma_{I}^{-}(a)=n-\sigma_{\overline{I}}^{+}(n-a),

or equivalently,

(2.16) ΘI(a)=ΘI¯+(na).\Theta_{I}^{-}(a)=\Theta_{\overline{I}}^{+}(n-a).
Proof.

We shall show Eq. 2.15 first. If a=na=n, then σI(a)=n\sigma_{I}^{-}(a)=n and σI¯+(na)=0\sigma_{\overline{I}}^{+}(n-a)=0, satisfying Eq. 2.15. Suppose that 0a<n0\leq a<n, and

(2.17) σI(a)=\absi1ik.\sigma_{I}^{-}(a)=\abs{i_{1}\dotsm i_{k}}.

Then 0k(I)10\leq k\leq\ell(I)-1. By Eq. 2.13,

\absi1ika<\absi1ik+1.\abs{i_{1}\dotsm i_{k}}\leq a<\abs{i_{1}\dotsm i_{k+1}}.

Subtracting each sum in the above inequality from nn, we obtain

\absik+1i1na>\absik+2i1,\abs{i_{k+1}\dotsm i_{-1}}\geq n-a>\abs{i_{k+2}\dotsm i_{-1}},

which reads σI¯+(na)=\absik+1i1\sigma_{\overline{I}}^{+}(n-a)=\abs{i_{k+1}\dotsm i_{-1}}. Adding this to Eq. 2.17, we obtain the sum nn, as desired. This proves Eq. 2.15. Using Eqs. 2.12 and 2.14, one may infer Eq. 2.16 from Eq. 2.15. This completes the proof. ∎
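Lemma 2.9 can likewise be verified numerically; the following sketch (helper names ours) checks Eq. 2.15, and hence Eq. 2.16, for one composition and all 0 ≤ a ≤ n.

```python
def sigma_plus(I, a):
    # Eq. 2.11: least partial sum of I that is >= a
    partial = [sum(I[:k]) for k in range(len(I) + 1)]
    return min(s for s in partial if s >= a)

def sigma_minus(I, a):
    # Eq. 2.13: greatest partial sum of I that is <= a
    partial = [sum(I[:k]) for k in range(len(I) + 1)]
    return max(s for s in partial if s <= a)

I = (2, 5, 1, 3)
n = sum(I)
for a in range(n + 1):
    # Eq. 2.15 with the reversed composition
    assert sigma_minus(I, a) == n - sigma_plus(I[::-1], n - a)
```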

Lemma 2.9 will be used in the proof of Theorem 3.9. Let us express the product XPlXCmX_{P_{l}}X_{C_{m}} in terms of the functions wIw_{I} and ΘI+()\Theta_{I}^{+}(\cdot).

Lemma 2.10.

For l1l\geq 1 and m2m\geq 2,

(2.18) XPlXCm\displaystyle X_{P_{l}}X_{C_{m}} =Il,Jmj1wIJeIJ\displaystyle=\sum_{I\vDash l,\,J\vDash m}j_{1}w_{I\!J}e_{I\!J}
(2.19) =Kl+m,ΘK+(l)=0\brk1ΘK+(l+1)+1wKeK.\displaystyle=\sum_{K\vDash l+m,\ \Theta_{K}^{+}(l)=0}\brk 1{\Theta_{K}^{+}(l+1)+1}w_{K}e_{K}.
Proof.

By Eqs. 1.1, 2.4 and 2.7,

XPlXCm=IlwIeIJm(j11)wJeJ=Il,Jmj1wIJeIJ.X_{P_{l}}X_{C_{m}}=\sum_{I\vDash l}w_{I}e_{I}\sum_{J\vDash m}(j_{1}-1)w_{J}e_{J}=\sum_{I\vDash l,\ J\vDash m}j_{1}w_{I\!J}e_{I\!J}.

This proves Eq. 2.18. The other formula holds since j1=ΘK+(l+1)+1j_{1}=\Theta_{K}^{+}(l+1)+1 when K=IJK=I\!J. ∎
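The agreement of Eqs. 2.18 and 2.19 can be confirmed coefficientwise for small l and m; the sketch below again assumes the form wI = i1(i2−1)(i3−1)⋯ of Eq. 1.2, and the helper names are ours.

```python
from collections import defaultdict

def compositions(n):
    # all compositions of n
    if n == 0:
        yield ()
        return
    for head in range(1, n + 1):
        for tail in compositions(n - head):
            yield (head,) + tail

def w(I):
    # assumed form of Eq. 1.2
    r = I[0]
    for p in I[1:]:
        r *= p - 1
    return r

def sigma_plus(I, a):
    # Eq. 2.11
    return min(s for s in (sum(I[:k]) for k in range(len(I) + 1)) if s >= a)

l, m = 3, 4
lhs = defaultdict(int)
for I in compositions(l):
    for J in compositions(m):
        lhs[I + J] += J[0] * w(I + J)                  # Eq. 2.18
rhs = {K: (sigma_plus(K, l + 1) - l) * w(K)            # Eq. 2.19
       for K in compositions(l + m) if sigma_plus(K, l) == l}
assert all(lhs[K] == c for K, c in rhs.items())
assert all(K in rhs for K in list(lhs))
```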

Note that neither of Eqs. 2.18 and 2.19 holds for l=0l=0. Now we compute a partial convolution of XPlX_{P_{l}} and XCmX_{C_{m}}.

Lemma 2.11.

For 0ln20\leq l\leq n-2,

(2.20) k=0lXPkXCnk=In\brk1σI+(l+1)1wIeI.\sum_{k=0}^{l}X_{P_{k}}X_{C_{n-k}}=\sum_{I\vDash n}\brk 1{\sigma_{I}^{+}(l+1)-1}w_{I}e_{I}.

Dually, for 2mn12\leq m\leq n-1,

(2.21) i=2mXCiXPni=InσI¯(m)wIeI.\sum_{i=2}^{m}X_{C_{i}}X_{P_{n-i}}=\sum_{I\vDash n}\sigma_{\overline{I}}^{-}(m)w_{I}e_{I}.
Proof.

By Eq. 2.18, the sum over 1kl1\leq k\leq l on the left hand side of Eq. 2.20 has the noncommutative analog

k=1lX~PkX~Cnk=k=1lIk,Jnkj1wIJΛIJ=K=IJn, 1\absIlj1wKΛK.\sum_{k=1}^{l}\widetilde{X}_{P_{k}}\widetilde{X}_{C_{n-k}}=\sum_{k=1}^{l}\sum_{I\vDash k,\ J\vDash n-k}j_{1}w_{I\!J}\Lambda^{I\!J}=\sum_{K=I\!J\vDash n,\ 1\leq\abs{I}\leq l}j_{1}w_{K}\Lambda^{K}.

Combining it with Proposition 2.4, we obtain

k=0lX~PkX~Cnk\displaystyle\sum_{k=0}^{l}\widetilde{X}_{P_{k}}\widetilde{X}_{C_{n-k}} =K=IJn, 0\absIlj1wKΛKKnwKΛK.\displaystyle=\sum_{K=I\!J\vDash n,\ 0\leq\abs{I}\leq l}j_{1}w_{K}\Lambda^{K}-\sum_{K\vDash n}w_{K}\Lambda^{K}.

The coefficient of wKΛKw_{K}\Lambda^{K} in the first sum on the right side is the partial sum k1++krk_{1}+\dots+k_{r} such that

\absk1kr1l<\absk1kr,\abs{k_{1}\dotsm k_{r-1}}\leq l<\abs{k_{1}\dotsm k_{r}},

that is, the sum σK+(l+1)\sigma_{K}^{+}(l+1). This proves Eq. 2.20. In the same fashion, one may show Eq. 2.21. ∎

We need the noncommutative setting in the proof above since the coefficient of wKΛKw_{K}\Lambda^{K} is considered. Lemma 2.11 will be used in the proof of Theorem 3.3 for tadpoles.

Corollary 2.12.

For n2n\geq 2, the average of the full convolution of chromatic symmetric functions of paths and cycles with total order nn is the chromatic symmetric function of the path of order nn, i.e.,

1n1k=0n2XPkXCnk=XPn.\frac{1}{n-1}\sum_{k=0}^{n-2}X_{P_{k}}X_{C_{n-k}}=X_{P_{n}}.
Proof.

Taking l=n2l=n-2 in Eq. 2.20, and using Eq. 1.1, one obtains the desired formula. ∎

The corollary can be shown alternatively by taking m=n1m=n-1 in Eq. 2.21 and using Proposition 2.4 and the identity σI¯(n1)=ni1\sigma_{\overline{I}}^{-}(n-1)=n-i_{1}, or, by Proposition 2.3.
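In coefficient terms, Corollary 2.12 amounts to the identity (σI+(n−1)−1)wI = (n−1)wI for every I ⊨ n, which the following sketch (helper names ours, wI assumed as in Eq. 1.2) confirms for a small n.

```python
def compositions(n):
    if n == 0:
        yield ()
        return
    for head in range(1, n + 1):
        for tail in compositions(n - head):
            yield (head,) + tail

def w(I):
    # assumed form of Eq. 1.2
    r = I[0]
    for p in I[1:]:
        r *= p - 1
    return r

def sigma_plus(I, a):
    # Eq. 2.11
    return min(s for s in (sum(I[:k]) for k in range(len(I) + 1)) if s >= a)

n = 8
for I in compositions(n):
    # Eq. 2.20 with l = n - 2, versus (n - 1) X_{P_n} via Eq. 1.1
    assert (sigma_plus(I, n - 1) - 1) * w(I) == (n - 1) * w(I)
```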

3. Neat formulas for some chromatic symmetric functions

In this section, we use the composition method to produce neat formulas for the chromatic symmetric functions of several families of graphs, including tadpoles and their line graphs, barbells, and generalized bulls. We also establish the ee-positivity of hats.

3.1. The ribbon expansion for cycles

In view of Eq. 2.6, if a noncommutative symmetric function FF is Λ\Lambda-positive, then it is RR-positive. Thibon and Wang [28] discovered that the analog X~Pn\widetilde{X}_{P_{n}} has the rather simple ribbon expansion

X~Pn=In,i1=1,i1,,i222m1(I)1RI.\widetilde{X}_{P_{n}}=\sum_{I\vDash n,\ i_{-1}=1,\ i_{1},\dots,i_{-2}\leq 2}2^{m_{1}(I)-1}R_{I}.

We present a Ψ\Psi-expansion for a noncommutative analog of cycles.

Lemma 3.1.

For n2n\geq 2, the chromatic symmetric function XCnX_{C_{n}} has a noncommutative analog

X~Cn=(1)nΨn+InεIi1ΨI,\widetilde{X}_{C_{n}}=(-1)^{n}\Psi^{n}+\sum_{I\vDash n}\varepsilon^{I}i_{1}\Psi^{I},

where εI\varepsilon^{I} is defined by Eq. 2.4.

Proof.

Let Cn=(V,E)C_{n}=(V,E) be the cycle with vertices v1,,vnv_{1},\dots,v_{n} arranged counterclockwise. Let EEE^{\prime}\subseteq E. The contribution of the edge set E=EE^{\prime}=E in Proposition 2.1 is (1)npn(-1)^{n}p_{n}. When EEE^{\prime}\neq E, the graph (V,E)(V,E^{\prime}) consists of paths. Let i1i_{1} be the order of the path containing v1v_{1}. Then 1i1n1\leq i_{1}\leq n. Let i2,i3,i_{2},i_{3},\dots be the orders of the subsequent paths, listed counterclockwise. Since the path containing v1v_{1} has i1i_{1} possibilities:

v1vi1,vnv1vi11,vn1vnv1vi12,,vni1+1vni1+2vnv1,v_{1}\dotsm v_{i_{1}},\quad v_{n}v_{1}\dotsm v_{i_{1}-1},\quad v_{n-1}v_{n}v_{1}\dotsm v_{i_{1}-2},\quad\dots,\quad v_{n-i_{1}+1}v_{n-i_{1}+2}\dotsm v_{n}v_{1},

we can deduce by Proposition 2.1 that

XCn=(1)npn+Ini1(1)(i11)+(i21)+pρ(I)=(1)npn+Ini1εIpρ(I).X_{C_{n}}=(-1)^{n}p_{n}+\sum_{I\vDash n}i_{1}\cdotp(-1)^{(i_{1}-1)+(i_{2}-1)+\dotsm}p_{\rho(I)}=(-1)^{n}p_{n}+\sum_{I\vDash n}i_{1}\varepsilon^{I}p_{\rho(I)}.

Since ρ(ΨI)=pI\rho(\Psi^{I})=p_{I}, XCnX_{C_{n}} has the desired analog. ∎

Now we can produce a ribbon Schur analog of XCnX_{C_{n}}.

Theorem 3.2.

The chromatic symmetric function of cycles has a noncommutative analog

X~Cn=In,i1=i1=1,i2,,i222m1(I)\brk3112rRIR1n,\widetilde{X}_{C_{n}}=\sum_{I\vDash n,\ i_{1}=i_{-1}=1,\ i_{2},\,\dots,\,i_{-2}\leq 2}2^{m_{1}(I)}\brk 3{1-\frac{1}{2^{r}}}R_{I}-R_{1^{n}},

where i1i_{-1} and i2i_{-2} are the last and second last parts of II respectively, m1(I)m_{1}(I) is defined by Eq. 2.1, and rr is the number of leading parts 11 of II, i.e., the largest rr such that i1==ir=1i_{1}=\dots=i_{r}=1.

Proof.

Recall that I\mathcal{H}_{I} is the set of ribbons JJ such that every block in the decomposition I(J)\nabla_{I}(J) is a hook. By Eq. 2.8, we can rewrite the formula in Lemma 3.1 as

X~Cn\displaystyle\widetilde{X}_{C_{n}} =(1)nJnεnJRJ+Ini1εIJIεIJ1J(I)RJ\displaystyle=(-1)^{n}\sum_{J\in\mathcal{H}_{n}}\varepsilon^{n\!J}R_{J}+\sum_{I\vDash n}i_{1}\varepsilon^{I}\sum_{J\in\mathcal{H}_{I}}\varepsilon^{I\!J_{1}\dotsm J_{\ell(I)}}R_{J}
(3.1) =JnJ1J2(J)\absJ1εJ1J2RJJnεJRJ,\displaystyle=\sum_{J\vDash n}\sum_{J_{1}\bullet J_{2}\bullet\dotsm\in\mathcal{H}(J)}\abs{J_{1}}\varepsilon^{J_{1}J_{2}\dotsm}R_{J}-\sum_{J\in\mathcal{H}_{n}}\varepsilon^{J}R_{J},

where (J)\mathcal{H}(J) is the set of decompositions J1J2J_{1}\bullet J_{2}\bullet\dotsm such that every block in JkJ_{k} is a hook. Here each bullet \bullet is either the concatenation or the near concatenation. It is direct to compute

[R1n]X~Cn=Ini1ε1i11i2ε1n=Ini11=n+j=1n1j2nj11=2n2.[R_{1^{n}}]\widetilde{X}_{C_{n}}=\sum_{I\vDash n}i_{1}\varepsilon^{1^{i_{1}}1^{i_{2}}\dotsm}-\varepsilon^{1^{n}}=\sum_{I\vDash n}i_{1}-1=n+\sum_{j=1}^{n-1}j\cdotp 2^{n-j-1}-1=2^{n}-2.
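This count can be confirmed numerically for small n; the helper compositions below is ours.

```python
def compositions(n):
    # all compositions of n
    if n == 0:
        yield ()
        return
    for head in range(1, n + 1):
        for tail in compositions(n - head):
            yield (head,) + tail

# the sum of first parts over all compositions of n, minus 1, equals 2^n - 2
for n in range(1, 10):
    assert sum(I[0] for I in compositions(n)) - 1 == 2**n - 2
```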

Below we consider JnJ\vDash n such that J1nJ\neq 1^{n}.

We introduce a sign-reversing involution to simplify the inner sum in Eq. 3.1. Let

d=J1J2(J).d=J_{1}\bullet J_{2}\bullet\dotsm\in\mathcal{H}(J).

For any box \square in the ribbon JJ, denote

  • by JJ_{\square} the hook JkJ_{k} in dd that contains \square, and

  • by \square^{\prime} the box lying to the immediate right of \square, if it exists.

We call \square^{\prime} the right neighbor of \square. We say that a box \square of JJ is an active box of dd if

  • its right neighbor \square^{\prime} exists,

  • JJ1J_{\square}\neq J_{1}, and

  • the union JJJ_{\square}\cup J_{\square^{\prime}} of boxes is a hook.

Let (J)\mathcal{H}^{\prime}(J) be the set of decompositions d(J)d\in\mathcal{H}(J) that contain an active box. We define a transformation φ\varphi on (J)\mathcal{H}^{\prime}(J) as follows. Let d(J)d\in\mathcal{H}^{\prime}(J). Let \square be the last active box of dd. Define φ(d)\varphi(d) to be the decomposition obtained from dd by

  • dividing JJ_{\square} into two hooks which contain \square and \square^{\prime} respectively, if J=JJ_{\square}=J_{\square^{\prime}};

  • merging JJ_{\square} and JJ_{\square^{\prime}} into a single hook, if JJJ_{\square}\neq J_{\square^{\prime}}.

From the definition, we see that φ\varphi is an involution. In view of the sign of the inner sum in Eq. 3.1, we define the sign of d=J1J2d=J_{1}\bullet J_{2}\bullet\dotsm to be sgn(d)=εJ1J2\mathrm{sgn}(d)=\varepsilon^{J_{1}J_{2}\dotsm}. Then φ\varphi is sign-reversing, i.e.,

sgn\brk1φ(d)=sgn(d).\mathrm{sgn}\brk 1{\varphi(d)}=-\mathrm{sgn}(d).

As a result, the contribution of decompositions in (J)\mathcal{H}^{\prime}(J) to the inner sum in Eq. 3.1 is zero, and the set (J)\mathcal{H}(J) in the inner sum can be replaced with the set

′′(J)=(J)\(J)\mathcal{H}^{\prime\prime}(J)=\mathcal{H}(J)\backslash\mathcal{H}^{\prime}(J)

of decompositions of JJ without active boxes.

First of all, we shall show that

[RJ]X~Cn=0if J is a hook and J1n.[R_{J}]\widetilde{X}_{C_{n}}=0\quad\text{if $J$ is a hook and $J\neq 1^{n}$.}

Let JJ be a hook and J1nJ\neq 1^{n}. Let d′′(J)d\in\mathcal{H}^{\prime\prime}(J). Then dd has no active boxes. In particular, the second last box \square of JJ is not active. It follows that J=J1J_{\square}=J_{1} and

′′(J)={J,J11},\mathcal{H}^{\prime\prime}(J)=\{J,\,J_{1}\triangleright 1\},

where J1=J\j1J_{1}=J\backslash j_{-1}. Therefore, by Eq. 3.1,

[RJ]X~Cn=nεJ+(n1)εJ11εJ=0.[R_{J}]\widetilde{X}_{C_{n}}=n\varepsilon^{J}+(n-1)\varepsilon^{J_{1}1}-\varepsilon^{J}=0.

Below we can suppose that JJ is not a hook. Then the subtrahend in Eq. 3.1 vanishes, and Eq. 3.1 implies that

(3.2) [RJ]X~Cn=J1J2′′(J)\absJ1εJ1J2.[R_{J}]\widetilde{X}_{C_{n}}=\sum_{J_{1}\bullet J_{2}\bullet\dotsm\in\mathcal{H}^{\prime\prime}(J)}\abs{J_{1}}\varepsilon^{J_{1}J_{2}\dotsm}.

Second, we claim that [RJ]X~Cn=0[R_{J}]\widetilde{X}_{C_{n}}=0 unless j1=1j_{-1}=1. In fact, if j12j_{-1}\geq 2, then the second last box of JJ is active for any decomposition d(J)d\in\mathcal{H}(J). Thus

′′(J)=and[RJ]X~Cn=0.\mathcal{H}^{\prime\prime}(J)=\emptyset\quad\text{and}\quad[R_{J}]\widetilde{X}_{C_{n}}=0.

This proves the claim. It follows that

J=1s1t11s2t21sltl1sl+1, where l1s1,,sl0sl+11, and t1,,tl2J=1^{s_{1}}t_{1}1^{s_{2}}t_{2}\dotsm 1^{s_{l}}t_{l}1^{s_{l+1}},\quad\text{ where $l\geq 1$, $s_{1},\dots,s_{l}\geq 0$, $s_{l+1}\geq 1$, and $t_{1},\dots,t_{l}\geq 2$. }

Denote the last box on the horizontal part tjt_{j} by j\square_{j}. We say that a box of JJ is a leader of a decomposition d′′(J)d\in\mathcal{H}^{\prime\prime}(J) if it is the first box of some hook of length at least 22 in dd.

Third, we claim that

[RJ]X~Cn=0unless t2==tl=2.[R_{J}]\widetilde{X}_{C_{n}}=0\quad\text{unless $t_{2}=\dots=t_{l}=2$}.

Let j2j\geq 2. If tj3t_{j}\geq 3, then the third last box in tjt_{j} is active for any d(J)d\in\mathcal{H}(J), which implies [RJ]X~Cn=0[R_{J}]\widetilde{X}_{C_{n}}=0 as before. This proves the claim. Moreover, if j\square_{j} is not a leader for some d′′(J)d\in\mathcal{H}^{\prime\prime}(J), then the second last box in tjt_{j} is active in dd, contradicting the choice of dd. Therefore, by Eq. 3.2,

(3.3) [RJ]X~Cn=d=J1J2′′(J)j is a leader of dj2\absJ1εJ1J2.[R_{J}]\widetilde{X}_{C_{n}}=\sum_{\begin{subarray}{c}d=J_{1}\bullet J_{2}\bullet\dotsm\in\mathcal{H}^{\prime\prime}(J)\\ \square_{j}\text{ is a leader of~{}$d$, }\forall j\geq 2\end{subarray}}\abs{J_{1}}\varepsilon^{J_{1}J_{2}\dotsm}.

Fourth, we shall show that

[RJ]X~Cn=0unless t1=2.[R_{J}]\widetilde{X}_{C_{n}}=0\quad\text{unless $t_{1}=2$}.

Suppose that t13t_{1}\geq 3 and d=J1J2′′(J)d=J_{1}\bullet J_{2}\bullet\dotsm\in\mathcal{H}^{\prime\prime}(J). Let BkB_{k} be the kkth last box in t1t_{1}. In particular, B1=1B_{1}=\square_{1}. We observe that B3J1B_{3}\in J_{1} since otherwise it would be active. Moreover, if J1J_{1} ends with B3B_{3}, then 1\square_{1} must be a leader of dd, since otherwise B2B_{2} would be active. To sum up, we are left with 33 cases:

  1. (1)

    J1J_{1} ends with B3B_{3}, J2={B2}J_{2}=\{B_{2}\}, and 1\square_{1} is a leader,

  2. (2)

    J1J_{1} ends with B2B_{2},

  3. (3)

    J1J_{1} ends with B1B_{1}.

Let h=s1+t1h=s_{1}+t_{1}. The classification above allows us to transform Eq. 3.3 to

(3.4) [RJ]X~Cn=(h2)1s1(t12)1J3′′(J)j is a leader, j1ε1s1(t12)+(h1)J=1s1(t11)J2′′(J)j is a leader, j2ε1s1(t11)+hε1s1t1J=(1s1t1)J2′′(J)j is a leader, j21.[R_{J}]\widetilde{X}_{C_{n}}=(h-2)\cdotp\smashoperator[]{\sum_{\begin{subarray}{c}1^{s_{1}}(t_{1}-2)\triangleright 1\triangleright J_{3}\bullet\dotsm\in\mathcal{H}^{\prime\prime}(J)\\ \text{$\square_{j}$ is a leader, }\forall j\geq 1\end{subarray}}^{}}\varepsilon^{1^{s_{1}}(t_{1}-2)}+(h-1)\cdotp\smashoperator[]{\sum_{\begin{subarray}{c}J=1^{s_{1}}(t_{1}-1)\triangleright J_{2}\bullet\dotsm\in\mathcal{H}^{\prime\prime}(J)\\ \text{$\square_{j}$ is a leader, }\forall\,j\geq 2\end{subarray}}^{}}\varepsilon^{1^{s_{1}}(t_{1}-1)}+h\cdotp\varepsilon^{1^{s_{1}}t_{1}}\smashoperator[]{\sum_{\begin{subarray}{c}J=(1^{s_{1}}t_{1})J_{2}\bullet\dotsm\in\mathcal{H}^{\prime\prime}(J)\\ \text{$\square_{j}$ is a leader, }\forall\,j\geq 2\end{subarray}}^{}}1.

For 1jl1\leq j\leq l, let VjV_{j} be the column of boxes in JJ that contains j\square_{j}. Then

\absVj={sj+1+2,if 1jl1;sj+1+1,if j=l.\abs{V_{j}}=\begin{cases*}s_{j+1}+2,&if $1\leq j\leq l-1$;\\ s_{j+1}+1,&if $j=l$.\end{cases*}

For j2j\geq 2, we observe that VjV_{j} is the union of several blocks in dd. Conversely, since j\square_{j} is a leader, \absJj2\abs{J_{\square_{j}}}\geq 2, and there are 2\absVj22^{\abs{V_{j}}-2} ways to decompose VjV_{j} to form the blocks of some d′′(J)d\in\mathcal{H}^{\prime\prime}(J). Computing various cases for V1V_{1} in the same vein, we can deduce from Eq. 3.4 that

[RJ]X~Cn\displaystyle[R_{J}]\widetilde{X}_{C_{n}} =ε1s1t1\brk1(h2)2\abss2sl+11(h1)2\abss2sl+1+h2\abss2sl+11=0.\displaystyle=\varepsilon^{1^{s_{1}}t_{1}}\brk 1{(h-2)\cdotp 2^{\abs{s_{2}\dotsm s_{l+1}}-1}-(h-1)\cdotp 2^{\abs{s_{2}\dotsm s_{l+1}}}+h\cdotp 2^{\abs{s_{2}\dotsm s_{l+1}}-1}}=0.

Note that the computation of each of the 33 terms in the parentheses remains valid even when l=1l=1.

Fifth, let us compute the RJR_{J}-coefficient for

J=1s121sl21sl+1, where l1s1,,sl0, and sl+11J=1^{s_{1}}2\dotsm 1^{s_{l}}21^{s_{l+1}},\quad\text{ where $l\geq 1$, $s_{1},\dots,s_{l}\geq 0$, and $s_{l+1}\geq 1$. }

If B1J1B_{1}\not\in J_{1}, then 1\square_{1} must be a leader, since otherwise B2B_{2} would be active. Since every vertical hook has sign 11, we can deduce from Eq. 3.3 that

[RJ]X~Cn\displaystyle[R_{J}]\widetilde{X}_{C_{n}} =1sJ2′′(J)1ss1j is a leader, j1s+1s1+1J2′′(J)j is a leader, j2(s1+1)1s12J2′′(J)j is a leader, j2(s1+2).\displaystyle=\sum_{\begin{subarray}{c}1^{s}J_{2}\bullet\dotsm\in\mathcal{H}^{\prime\prime}(J)\\ 1\leq s\leq s_{1}\\ \text{$\square_{j}$ is a leader, }\forall j\geq 1\end{subarray}}s+\sum_{\begin{subarray}{c}1^{s_{1}+1}\triangleright J_{2}\bullet\dotsm\in\mathcal{H}^{\prime\prime}(J)\\ \text{$\square_{j}$ is a leader, }\forall j\geq 2\end{subarray}}(s_{1}+1)-\sum_{\begin{subarray}{c}1^{s_{1}}2J_{2}\bullet\dotsm\in\mathcal{H}^{\prime\prime}(J)\\ \text{$\square_{j}$ is a leader, }\forall j\geq 2\end{subarray}}(s_{1}+2).

Computing the number of decompositions in ′′(J)\mathcal{H}^{\prime\prime}(J) for each of the 33 sums above, we deduce that

[RJ]X~Cn\displaystyle[R_{J}]\widetilde{X}_{C_{n}} =s=1s1s2s1s2\abss2sl+11+(s1+1)2\abss2sl+1(s1+2)2\abss2sl+11,\displaystyle=\sum_{s=1}^{s_{1}}s\cdotp 2^{s_{1}-s}\cdotp 2^{\abs{s_{2}\dotsm s_{l+1}}-1}+(s_{1}+1)\cdotp 2^{\abs{s_{2}\dotsm s_{l+1}}}-(s_{1}+2)\cdotp 2^{\abs{s_{2}\dotsm s_{l+1}}-1},

which is true even for l=1l=1. Note that \abss1sl+1=m1(J)\abs{s_{1}\dotsm s_{l+1}}=m_{1}(J), and

s=1s1s2s=2s1+22s1\sum_{s=1}^{s_{1}}\frac{s}{2^{s}}=2-\frac{s_{1}+2}{2^{s_{1}}}

holds as an identity. Therefore,

[RJ]X~Cn\displaystyle[R_{J}]\widetilde{X}_{C_{n}} =2m1(J)1\brk32s1+22s1+s1+12s11s1+22s1=2m1(J)\brk3112s1.\displaystyle=2^{m_{1}(J)-1}\brk 3{2-\frac{s_{1}+2}{2^{s_{1}}}+\frac{s_{1}+1}{2^{s_{1}-1}}-\frac{s_{1}+2}{2^{s_{1}}}}=2^{m_{1}(J)}\brk 3{1-\frac{1}{2^{s_{1}}}}.

Finally, collecting the coefficients above, we obtain

(3.5) X~Cn=(2n2)R1n+J=1s121sl21sl+1nl1,sl+11,s2,,sl02m1(J)\brk3112s1RJ,\widetilde{X}_{C_{n}}=(2^{n}-2)R_{1^{n}}+\sum_{\begin{subarray}{c}J=1^{s_{1}}2\dotsm 1^{s_{l}}21^{s_{l+1}}\vDash n\\ l\geq 1,\ s_{l+1}\geq 1,\ s_{2},\dots,s_{l}\geq 0\end{subarray}}2^{m_{1}(J)}\brk 3{1-\frac{1}{2^{s_{1}}}}R_{J},

which can be recast as the desired formula. ∎

In view of Eq. 3.5, every RIR_{I}-coefficient is nonnegative. For instance,

X~C5=30R15+4R1211+6R1121.\widetilde{X}_{C_{5}}=30R_{1^{5}}+4R_{1211}+6R_{1121}.

3.2. Tadpoles and their line graphs

For m2m\geq 2 and l0l\geq 0, the tadpole Tm,lT_{m,l} is the graph obtained by connecting a vertex on the cycle CmC_{m} and an end of the path PlP_{l}. By definition,

\absV(Tm,l)=\absE(Tm,l)=m+l.\abs{V(T_{m,l})}=\abs{E(T_{m,l})}=m+l.

See Fig. 1 for the tadpole Tm,lT_{m,l} and its line graph (Tm,l)\mathcal{L}(T_{m,l}).

Figure 1. The tadpole Tm,lT_{m,l} and its line graph (Tm,l)\mathcal{L}(T_{m,l}).

Li et al. [15, Theorem 3.1] pointed out that tadpoles possess Gebhard and Sagan’s (e)(e)-positivity, which implies the ee-positivity. They gave the chromatic symmetric function

(3.6) XTm,l=(m1)XPm+li=2m1XCiXPm+liX_{T_{m,l}}=(m-1)X_{P_{m+l}}-\sum_{i=2}^{m-1}X_{C_{i}}X_{P_{m+l-i}}

in their formula (3.11). By investigating the analog Y(Tm,l)NCSymY_{\mathcal{L}(T_{m,l})}\in\mathrm{NCSym}, Wang and Wang [31, Theorem 3.2] obtained the (e)(e)-positivity of the line graphs (Tm,l)\mathcal{L}(T_{m,l}), which implies the ee-positivity of the graphs (Tm,l)\mathcal{L}(T_{m,l}) and Tm,lT_{m,l}. They [31, Formulas (3.2) and (3.3)] also obtained the formulas

(3.7) X(Tm,l)\displaystyle X_{\mathcal{L}(T_{m,l})} =XPlXCm+2k=0l1XPkXCnk2lXPn,and\displaystyle=X_{P_{l}}X_{C_{m}}+2\sum_{k=0}^{l-1}X_{P_{k}}X_{C_{n-k}}-2lX_{P_{n}},\quad\text{and}
(3.8) XTm,l\displaystyle X_{T_{m,l}} =12\brk1X(Tm,l)+XPlXCm=k=0lXPkXCnklXPn.\displaystyle=\frac{1}{2}\brk 1{X_{\mathcal{L}(T_{m,l})}+X_{P_{l}}X_{C_{m}}}=\sum_{k=0}^{l}X_{P_{k}}X_{C_{n-k}}-lX_{P_{n}}.
Theorem 3.3 (Tadpoles).

For 0ln20\leq l\leq n-2, we have

XTnl,l=InΘI+(l+1)wIeI,X_{T_{n-l,l}}=\sum_{I\vDash n}\Theta_{I}^{+}(l+1)w_{I}e_{I},

where wIw_{I} and ΘI+\Theta_{I}^{+} are defined by Eqs. 1.2 and 2.12, respectively.

Proof.

It is direct by Eqs. 3.8, 2.20 and 1.1. ∎

One may deduce Theorem 3.3 alternatively by using Eqs. 2.21, 3.6 and 2.15. The tadpole Tm,1T_{m,1} is called an mm-pan. For example, the 44-pan has the chromatic symmetric function

XT4,1=I5ΘI+(2)wIeI=15e5+9e41+3e32+e221.X_{T_{4,1}}=\sum_{I\vDash 5}\Theta_{I}^{+}(2)w_{I}e_{I}=15e_{5}+9e_{41}+3e_{32}+e_{221}.

We remark that Theorem 3.3 reduces to Eq. 1.1 when l=n2l=n-2, and to Proposition 2.4 when l=0l=0.
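Theorem 3.3 and the 4-pan example above can be reproduced by brute force; the sketch below (helper names ours, assuming wI = i1(i2−1)(i3−1)⋯ from Eq. 1.2) collects the eλ-coefficients of X_{T_{4,1}}.

```python
from collections import Counter

def compositions(n):
    if n == 0:
        yield ()
        return
    for head in range(1, n + 1):
        for tail in compositions(n - head):
            yield (head,) + tail

def w(I):
    # assumed form of Eq. 1.2
    r = I[0]
    for p in I[1:]:
        r *= p - 1
    return r

def theta_plus(I, a):
    # Eq. 2.12
    return min(s for s in (sum(I[:k]) for k in range(len(I) + 1)) if s >= a) - a

n, l = 5, 1                        # the 4-pan T_{4,1}
coeffs = Counter()
for I in compositions(n):
    c = theta_plus(I, l + 1) * w(I)    # Theorem 3.3
    if c:
        coeffs[tuple(sorted(I, reverse=True))] += c
assert coeffs == {(5,): 15, (4, 1): 9, (3, 2): 3, (2, 2, 1): 1}
```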

A lariat is a tadpole of the form T3,n3T_{3,\,n-3}. Dahlberg and van Willigenburg [3] resolved 66 conjectures of Wolfe [32] on XT3,n3X_{T_{3,\,n-3}} by analyzing Eq. 3.6. We now present a neat formula for XT3,n3X_{T_{3,\,n-3}}, which yields effortless resolutions of these conjectures.

Corollary 3.4 (Lariats).

For n3n\geq 3, we have XT3,n3=2In,i13wIeIX_{T_{3,\,n-3}}=2\sum_{I\vDash n,\ i_{-1}\geq 3}w_{I}e_{I}.

Proof.

Direct by taking l=n3l=n-3 in Theorem 3.3. ∎
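The corollary can also be confirmed against Theorem 3.3 termwise for small n; as before, the sketch assumes wI = i1(i2−1)(i3−1)⋯ and the helper names are ours.

```python
def compositions(n):
    if n == 0:
        yield ()
        return
    for head in range(1, n + 1):
        for tail in compositions(n - head):
            yield (head,) + tail

def w(I):
    # assumed form of Eq. 1.2
    r = I[0]
    for p in I[1:]:
        r *= p - 1
    return r

def theta_plus(I, a):
    # Eq. 2.12
    return min(s for s in (sum(I[:k]) for k in range(len(I) + 1)) if s >= a) - a

n = 7
for I in compositions(n):
    lhs = theta_plus(I, n - 2) * w(I)          # Theorem 3.3 with l = n - 3
    rhs = 2 * w(I) if I[-1] >= 3 else 0        # Corollary 3.4
    assert lhs == rhs
```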

The line graphs of tadpoles also admit simple analogs.

Theorem 3.5 (The line graphs of tadpoles).

For 1ln21\leq l\leq n-2,

X(Tnl,l)=In,ΘI+(l)=0\brk1ΘI+(l+1)1wIeI+2In,ΘI+(l)2ΘI+(l+1)wIeI,X_{\mathcal{L}(T_{n-l,\,l})}=\sum_{I\vDash n,\ \Theta_{I}^{+}(l)=0}\brk 1{\Theta_{I}^{+}(l+1)-1}w_{I}e_{I}+2\sum_{I\vDash n,\ \Theta_{I}^{+}(l)\geq 2}\Theta_{I}^{+}(l+1)w_{I}e_{I},

where wIw_{I} and ΘI+\Theta_{I}^{+} are defined by Eqs. 1.2 and 2.12, respectively.

Proof.

Let n=m+ln=m+l and G=Tm,lG=T_{m,l}. Taking a noncommutative analog of every term in Eq. 3.7 and using Eqs. 2.9, 2.19 and 2.20, we obtain the analog

X~(G)\displaystyle\widetilde{X}_{\mathcal{L}(G)} =In,ΘI+(l)=0\brk1ΘI+(l+1)+1wIΛI+2In\brk1σI+(l)1wIΛI2lInwIΛI\displaystyle=\sum_{I\vDash n,\ \Theta_{I}^{+}(l)=0}\brk 1{\Theta_{I}^{+}(l+1)+1}w_{I}\Lambda^{I}+2\sum_{I\vDash n}\brk 1{\sigma_{I}^{+}(l)-1}w_{I}\Lambda^{I}-2l\sum_{I\vDash n}w_{I}\Lambda^{I}
(3.9) =In,ΘI+(l)=0\brk1ΘI+(l+1)+1wIΛI+2In\brk1ΘI+(l)1wIΛI.\displaystyle=\sum_{I\vDash n,\ \Theta_{I}^{+}(l)=0}\brk 1{\Theta_{I}^{+}(l+1)+1}w_{I}\Lambda^{I}+2\sum_{I\vDash n}\brk 1{\Theta_{I}^{+}(l)-1}w_{I}\Lambda^{I}.

Let InI\vDash n such that wI0w_{I}\neq 0. We now compute the coefficient [wIΛI]X~(G)[w_{I}\Lambda^{I}]\widetilde{X}_{\mathcal{L}(G)}.

  1. (1)

    If ΘI+(l)=0\Theta_{I}^{+}(l)=0, then ΘI+(l+1)1\Theta_{I}^{+}(l+1)\geq 1 and [wIΛI]X~(G)=ΘI+(l+1)10[w_{I}\Lambda^{I}]\widetilde{X}_{\mathcal{L}(G)}=\Theta_{I}^{+}(l+1)-1\geq 0.

  2. (2)

    If ΘI+(l)1\Theta_{I}^{+}(l)\geq 1, then the first sum in Eq. 3.9 does not contribute, and the second sum gives

    [wIΛI]X~(G)=2\brk1ΘI+(l)1=2ΘI+(l+1)[w_{I}\Lambda^{I}]\widetilde{X}_{\mathcal{L}(G)}=2\brk 1{\Theta_{I}^{+}(l)-1}=2\Theta_{I}^{+}(l+1)

    by Lemma 2.8.

This completes the proof. ∎

For example, we have

X(T4,1)=I5,ΘI+(1)=0\brk1ΘI+(2)1wIeI+2I5,ΘI+(1)2ΘI+(2)wIeI=30e5+6e41+6e32.X_{\mathcal{L}(T_{4,1})}=\sum_{\begin{subarray}{c}I\vDash 5,\ \Theta_{I}^{+}(1)=0\end{subarray}}\brk 1{\Theta_{I}^{+}(2)-1}w_{I}e_{I}+2\sum_{\begin{subarray}{c}I\vDash 5,\ \Theta_{I}^{+}(1)\geq 2\end{subarray}}\Theta_{I}^{+}(2)w_{I}e_{I}=30e_{5}+6e_{41}+6e_{32}.

The noncommutative setting in the proof above is adopted since ΛI\Lambda^{I}-coefficients are considered.
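The example above can be reproduced by evaluating Theorem 3.5 directly; helper names are ours, and wI is assumed to be i1(i2−1)(i3−1)⋯ as in Eq. 1.2.

```python
from collections import Counter

def compositions(n):
    if n == 0:
        yield ()
        return
    for head in range(1, n + 1):
        for tail in compositions(n - head):
            yield (head,) + tail

def w(I):
    # assumed form of Eq. 1.2
    r = I[0]
    for p in I[1:]:
        r *= p - 1
    return r

def theta_plus(I, a):
    # Eq. 2.12
    return min(s for s in (sum(I[:k]) for k in range(len(I) + 1)) if s >= a) - a

n, l = 5, 1                        # the line graph of the 4-pan
coeffs = Counter()
for I in compositions(n):
    t0 = theta_plus(I, l)
    if t0 == 0:
        c = (theta_plus(I, l + 1) - 1) * w(I)
    elif t0 >= 2:
        c = 2 * theta_plus(I, l + 1) * w(I)
    else:
        c = 0
    if c:
        coeffs[tuple(sorted(I, reverse=True))] += c
assert coeffs == {(5,): 30, (4, 1): 6, (3, 2): 6}
```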

3.3. Barbells

For any composition I=i1isnI=i_{1}\dotsm i_{s}\vDash n, the KK-chain K(I)K(I) is the graph (V,E)(V,E) where

V\displaystyle V =j=1sVj with Vj={vj1,vj2,,vjij},and\displaystyle=\bigcup_{j=1}^{s}V_{j}\text{ with }V_{j}=\{v_{j1},\ v_{j2},\ \dots,\ v_{ji_{j}}\},\quad\text{and}
E\displaystyle E =(V12)(V2{v1i1}2)(V3{v2i2}2)(Vs{v(s1)is1}2).\displaystyle=\binom{V_{1}}{2}\cup\binom{V_{2}\cup\{v_{1i_{1}}\}}{2}\cup\binom{V_{3}\cup\{v_{2i_{2}}\}}{2}\cup\dots\cup\binom{V_{s}\cup\{v_{(s-1)i_{s-1}}\}}{2}.

Here for any set SS,

(S2)=\brk[c]1{i,j}:i,jS and ij.\binom{S}{2}=\brk[c]1{\{i,j\}\colon i,j\in S\text{ and }i\neq j}.

See Fig. 2.

Figure 2. The KK-chain K(I)K(I) for I=i1i2I=i_{1}i_{2}\dotsm.

In other words, the KK-chain K(I)K(I) can be obtained from a sequence G1=Ki1G_{1}=K_{i_{1}}, G2=Ki2+1G_{2}=K_{i_{2}+1}, G3=Ki3+1G_{3}=K_{i_{3}+1}, \dots, Gs=Kis+1G_{s}=K_{i_{s}+1} of cliques such that GjG_{j} and Gj+1G_{j+1} share one vertex, and that the s1s-1 shared vertices are distinct. The numbers of vertices and edges of K(I)K(I) are respectively

\absV=nand\absE=(i12)+j2(ij+12).\abs{V}=n\quad\text{and}\quad\abs{E}=\binom{i_{1}}{2}+\sum_{j\geq 2}\binom{i_{j}+1}{2}.

For instance, K(1n)=PnK(1^{n})=P_{n} and K(n)=KnK(n)=K_{n}. The family of KK-chains contains many special graphs.

  1. (1)

    A lollipop is a KK-chain of the form K(a1na)K(a1^{n-a}). A lariat is a lollipop of the form K(31n3)K(31^{n-3}).

  2. (2)

    A barbell is a KK-chain of the form K(a1bc)K(a1^{b}c). A dumbbell is a barbell of the form K(a1b)K(a1b).

  3. (3)

    A generalized bull is a KK-chain of the form K(1a21na2)K(1^{a}21^{n-a-2}).

Tom [29, Theorem 2] gave a formula for the chromatic symmetric function of melting lollipops, with lollipops as a specialization.

Theorem 3.6 (Lollipops, Tom).

Let na1n\geq a\geq 1. Then XK(a1na)=(a1)!In,i1awIeIX_{K(a1^{n-a})}=(a-1)!\sum_{\begin{subarray}{c}I\vDash n,\ i_{-1}\geq a\end{subarray}}w_{I}e_{I}.

Theorem 3.6 covers Eqs. 1.1 and 2.3 as well as Corollary 3.4. It can be derived alternatively by using Eqs. 2.3 and 1.1 and

XG=(a1)!\brk3XPni=1a2ai1(ai)!XKaiXPna+i,X_{G}=(a-1)!\brk 3{X_{P_{n}}-\sum_{i=1}^{a-2}\frac{a-i-1}{(a-i)!}X_{K_{a-i}}X_{P_{n-a+i}}},

which is due to Dahlberg and van Willigenburg [3, Proposition 9].

Using Dahlberg and van Willigenburg’s method of discovering a recurrence relation for the chromatic symmetric functions of lollipops, we are able to handle barbells.

Theorem 3.7 (Barbells).

Let n=a+b+cn=a+b+c, where a1a\geq 1 and b,c0b,c\geq 0. Then

XK(a1bc)=(a1)!c!\brk4In,i1ai1c+1wIeI+In,i1ai1c<i2(i2i1)j3(ij1)eI,X_{K(a1^{b}c)}=(a-1)!\ c!\brk 4{\sum_{\begin{subarray}{c}I\vDash n,\ i_{-1}\geq a\\ i_{1}\geq c+1\end{subarray}}w_{I}e_{I}+\sum_{\begin{subarray}{c}I\vDash n,\ i_{-1}\geq a\\ i_{1}\leq c<i_{2}\end{subarray}}(i_{2}-i_{1})\prod_{j\geq 3}(i_{j}-1)e_{I}},

where wIw_{I} is defined by Eq. 1.2.

Proof.

Fix aa and n=a+b+cn=a+b+c. See Fig. 3.

Figure 3. The barbell K(a1bc)K(a1^{b}c).

For c{0,1}c\in\{0,1\}, the graph K(a1bc)K(a1^{b}c) reduces to a lollipop, and the desired formula reduces to Theorem 3.6. Below we can suppose that c2c\geq 2. We consider a graph family

{Gb,ck,k:k=0,1,,c}\{G_{b,\,c-k,\,k}\colon k=0,1,\dots,c\}

defined as follows. Define Gb,c,0=K(a1bc)G_{b,c,0}=K(a1^{b}c). For 1kc1\leq k\leq c, define Gb,ck,kG_{b,\,c-k,\,k} to be the graph obtained from K(a1bc)K(a1^{b}c) by removing the edges sbt1s_{b}t_{1}, \dots, sbtks_{b}t_{k}. In particular,

  • Gb, 1,c1=K(a1b+1(c1))G_{b,\,1,\,c-1}=K(a1^{b+1}(c-1)), and

  • Gb,0,cG_{b,0,c} is the disjoint union of the lollipop K(a1b)K(a1^{b}) and the complete graph KcK_{c}.

By applying Theorem 2.5 for the vertex triple (sb,tk+1,tk+2)(s_{b},\,t_{k+1},\,t_{k+2}) in Gb,ck,kG_{b,\,c-k,\,k}, we obtain

XGb,ck,k=2XGb,ck1,k+1XGb,ck2,k+2for 0kc2.X_{G_{b,\,c-k,\,k}}=2X_{G_{b,c-k-1,k+1}}-X_{G_{b,c-k-2,k+2}}\quad\text{for $0\leq k\leq c-2$}.

Therefore, one may deduce iteratively that

XK(a1bc)=XGb,c,0\displaystyle X_{K(a1^{b}c)}=X_{G_{b,c,0}} =2XGb,c1,1XGb,c2,2=3XGb,c2,22XGb,c3,3\displaystyle=2X_{G_{b,c-1,1}}-X_{G_{b,c-2,2}}=3X_{G_{b,c-2,2}}-2X_{G_{b,c-3,3}}
==cXGb,1,c1(c1)XGb,0,c\displaystyle=\dotsm=cX_{G_{b,1,c-1}}-(c-1)X_{G_{b,0,c}}
=cXK(a1b+1(c1))(c1)XK(a1b)XKc.\displaystyle=cX_{K(a1^{b+1}(c-1))}-(c-1)X_{K(a1^{b})}X_{K_{c}}.

Then we can deduce by bootstrapping that

XK(a1bc)\displaystyle X_{K(a1^{b}c)} =cXK(a1b+1(c1))(c1)XK(a1b)XKc\displaystyle=cX_{K(a1^{b+1}(c-1))}-(c-1)X_{K(a1^{b})}X_{K_{c}}
=c\brk1(c1)XK(a1b+2(c2))(c2)XK(a1b+1)XKc1(c1)XK(a1b)XKc\displaystyle=c\brk 1{(c-1)X_{K(a1^{b+2}(c-2))}-(c-2)X_{K(a1^{b+1})}X_{K_{c-1}}}-(c-1)X_{K(a1^{b})}X_{K_{c}}
=c(c1)\brk1(c2)XK(a1b+3(c3))(c3)XK(a1b+2)XKc2\displaystyle=c(c-1)\brk 1{(c-2)X_{K(a1^{b+3}(c-3))}-(c-3)X_{K(a1^{b+2})}X_{K_{c-2}}}
c(c2)XK(a1b+1)XKc1(c1)XK(a1b)XKc\displaystyle\qquad-c(c-2)X_{K(a1^{b+1})}X_{K_{c-1}}-(c-1)X_{K(a1^{b})}X_{K_{c}}
=\displaystyle=\dotsm
=c!XK(a1b+c)i=0c2c!(ci1)(ci)!XKciXK(a1b+i).\displaystyle=c!\,X_{K(a1^{b+c})}-\sum_{i=0}^{c-2}\frac{c!(c-i-1)}{(c-i)!}X_{K_{c-i}}X_{K(a1^{b+i})}.

By Eqs. 2.3 and 3.6, we obtain

(3.10) XK(a1bc)(a1)!c!=In,i1awIeIi=0c2(ci)Jn,j1a(ci1)wJe(ci)J.\frac{X_{K(a1^{b}c)}}{(a-1)!c!}=\sum_{I\vDash n,\ i_{-1}\geq a}w_{I}e_{I}-\sum_{i=0}^{c-2}\sum_{(c-i)J\vDash n,\ j_{-1}\geq a}(c-i-1)w_{J}e_{(c-i)J}.

We can split it as

(3.11) XK(a1bc)(a1)!c!=Y1+Y2,\frac{X_{K(a1^{b}c)}}{(a-1)!c!}=Y_{1}+Y_{2},

where Y1Y_{1} collects the terms whose eIe_{I} contains the factor e1e_{1}, and Y2Y_{2} collects the remaining terms. Let

𝒲n\displaystyle\mathcal{W}_{n} ={i1i2n:i1,i2,2},\displaystyle=\{i_{1}i_{2}\dotsm\vDash n\colon i_{1},i_{2},\dots\geq 2\},
(3.12) 𝒜n\displaystyle\mathcal{A}_{n} ={I𝒲n:i1a,i1c}and\displaystyle=\{I\in\mathcal{W}_{n}\colon i_{-1}\geq a,\ i_{1}\leq c\}\quad\text{and}
(3.13) n\displaystyle\mathcal{B}_{n} ={I𝒲n:i1a,i1c+1}.\displaystyle=\{I\in\mathcal{W}_{n}\colon i_{-1}\geq a,\ i_{1}\geq c+1\}.

Then 𝒜nn=\mathcal{A}_{n}\cap\mathcal{B}_{n}=\emptyset and

𝒜nn={I𝒲n:i1a}.\mathcal{A}_{n}\sqcup\mathcal{B}_{n}=\{I\in\mathcal{W}_{n}\colon i_{-1}\geq a\}.

From Eq. 3.10, we obtain

Y1=J𝒜n1n1w1Je1Ji=0c2(ci)J𝒜n1(ci1)w1Je1(ci)J.Y_{1}=\sum_{J\in\mathcal{A}_{n-1}\sqcup\mathcal{B}_{n-1}}w_{1J}e_{1J}-\sum_{i=0}^{c-2}\sum_{(c-i)J\in\mathcal{A}_{n-1}}(c-i-1)w_{1J}e_{1(c-i)J}.

Consider I=(ci)JI=(c-i)J in the negative part. When ii runs from 0 to c2c-2 and JJ runs over compositions such that (ci)J𝒜n1(c-i)J\in\mathcal{A}_{n-1}, the composition II runs over all compositions in 𝒜n1\mathcal{A}_{n-1}. Since

(ci1)w1J=w1Iande1(ci)J=e1I,(c-i-1)w_{1J}=w_{1I}\quad\text{and}\quad e_{1(c-i)J}=e_{1I},

we can deduce that

(3.14) Y1=J𝒜n1n1w1Je1JI𝒜n1w1Ie1I=Jn1w1Je1J.Y_{1}=\sum_{J\in\mathcal{A}_{n-1}\sqcup\mathcal{B}_{n-1}}w_{1J}e_{1J}-\sum_{I\in\mathcal{A}_{n-1}}w_{1I}e_{1I}=\sum_{J\in\mathcal{B}_{n-1}}w_{1J}e_{1J}.

On the other hand, by Eq. 3.10, we find

Y2=I𝒜nnwIeIi=0c2(ci)J𝒜n(ci1)wJe(ci)J.Y_{2}=\sum_{I\in\mathcal{A}_{n}\sqcup\mathcal{B}_{n}}w_{I}e_{I}-\sum_{i=0}^{c-2}\sum_{(c-i)J\in\mathcal{A}_{n}}(c-i-1)w_{J}e_{(c-i)J}.

Similarly, we consider I=(ci)JI=(c-i)J in the negative part. When ii runs from 0 to c2c-2 and JJ runs over compositions such that (ci)J𝒜n(c-i)J\in\mathcal{A}_{n}, II runs over all compositions in 𝒜n\mathcal{A}_{n}. Note that

(ci1)wJ=(i11)wI\i1ande(ci)J=eI,(c-i-1)w_{J}=(i_{1}-1)w_{I\backslash i_{1}}\quad\text{and}\quad e_{(c-i)J}=e_{I},

where I\i1=i2i1I\backslash i_{1}=i_{2}\dotsm i_{-1}. Therefore,

Y2\displaystyle Y_{2} =I𝒜nwIeI+InwIeII𝒜n(i11)wI\i1eI=InwIeI+I𝒜nfIeI,\displaystyle=\sum_{I\in\mathcal{A}_{n}}w_{I}e_{I}+\sum_{I\in\mathcal{B}_{n}}w_{I}e_{I}-\sum_{I\in\mathcal{A}_{n}}(i_{1}-1)w_{I\backslash i_{1}}e_{I}=\sum_{I\in\mathcal{B}_{n}}w_{I}e_{I}+\sum_{I\in\mathcal{A}_{n}}f_{I}e_{I},

where

fI=wI(i11)wI\i1=(i2i1)j3(ij1).f_{I}=w_{I}-(i_{1}-1)w_{I\backslash i_{1}}=(i_{2}-i_{1})\prod_{j\geq 3}(i_{j}-1).

Note that the involution ϕ\phi defined for the compositions I𝒜nI\in\mathcal{A}_{n} such that i2ci_{2}\leq c by exchanging the first two parts satisfies fϕ(I)+fI=0f_{\phi(I)}+f_{I}=0. Therefore,

Y2=InwIeI+I𝒜n,i2c+1fIeI.Y_{2}=\sum_{I\in\mathcal{B}_{n}}w_{I}\cdotp e_{I}+\sum_{I\in\mathcal{A}_{n},\ i_{2}\geq c+1}f_{I}\cdotp e_{I}.

In view of Eq. 3.12, the last sum can be recast by considering the possibility of i1=1i_{1}=1 as

In,i1a2i1c<c+1i2fIeI=In,i1a1i1c<i2fIeIJn1,j1aj1c+1k1(jk1)e1J,\sum_{\begin{subarray}{c}I\vDash n,\ i_{-1}\geq a\\ 2\leq i_{1}\leq c<c+1\leq i_{2}\end{subarray}}f_{I}\cdotp e_{I}=\sum_{\begin{subarray}{c}I\vDash n,\ i_{-1}\geq a\\ 1\leq i_{1}\leq c<i_{2}\end{subarray}}f_{I}\cdotp e_{I}-\sum_{\begin{subarray}{c}J\vDash n-1,\ j_{-1}\geq a\\ j_{1}\geq c+1\end{subarray}}\prod_{k\geq 1}(j_{k}-1)\cdotp e_{1J},

in which the negative part is exactly Y1Y_{1} by Eq. 3.14. Therefore,

Y2=InwIeI+In,i1a,i1c<i2fIeIY1.Y_{2}=\sum_{I\in\mathcal{B}_{n}}w_{I}e_{I}+\sum_{I\vDash n,\ i_{-1}\geq a,\ i_{1}\leq c<i_{2}}f_{I}\cdotp e_{I}-Y_{1}.

Hence by Eqs. 3.11 and 3.13, we obtain the formula as desired. ∎

For example,

XK(3122)\displaystyle X_{K(31^{2}2)} =(31)! 2!\brk4I7,i1,i13wIeI+I7,i13,i12<i2(i2i1)j3(ij1)eI\displaystyle=(3-1)!\ 2!\brk 4{\sum_{I\vDash 7,\ i_{1},i_{-1}\geq 3}w_{I}e_{I}+\sum_{I\vDash 7,\ i_{-1}\geq 3,\ i_{1}\leq 2<i_{2}}(i_{2}-i_{1})\prod_{j\geq 3}(i_{j}-1)e_{I}}
=28e7+20e61+12e52+68e43+16e321.\displaystyle=28e_{7}+20e_{61}+12e_{52}+68e_{43}+16e_{3^{2}1}.
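This expansion can be spot-checked numerically: setting x_1 = ... = x_k = 1 turns X_G into the chromatic polynomial χ_G(k) and turns e_λ into a product of binomial coefficients. Below is a minimal Python sketch of this check; it assumes (as an interpretation, not stated in this section) that K(31^2 2) is the chain of cliques of sizes 3, 2, 2, 3 glued at shared vertices, that is, two triangles joined by a two-edge path.

```python
from itertools import product
from math import comb, prod

def chromatic(n, edges, k):
    """Count proper k-colorings of a graph on vertices 0..n-1 by brute force."""
    return sum(
        all(c[u] != c[v] for u, v in edges)
        for c in product(range(k), repeat=n)
    )

# Assumed barbell K(31^2 2): triangles {0,1,2} and {4,5,6} joined by the path 2-3-4.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (4, 6), (5, 6)]

# The e-expansion computed above.
expansion = {(7,): 28, (6, 1): 20, (5, 2): 12, (4, 3): 68, (3, 3, 1): 16}

for k in range(1, 6):
    # e_lambda at x_1 = ... = x_k = 1 is a product of binomial coefficients
    spec = sum(c * prod(comb(k, p) for p in lam) for lam, c in expansion.items())
    assert chromatic(7, edges, k) == spec
print("claimed expansion of X_{K(31^2 2)} is consistent for k <= 5")
```

Both sides are polynomials in k of degree 7, so agreement at several values of k is a strong consistency check, though of course not a proof of the e-expansion itself.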

We remark that Theorem 3.7 reduces to Theorem 3.6 when c=0c=0. In view of the factor (i2i1)(i_{2}-i_{1}) in Theorem 3.7, we do not think it easy to derive Theorem 3.7 by applying Tom’s KK-chain formula to barbells. The next two formulas for the graphs K(ab)K(ab) and dumbbells K(a1b)K(a1b) are particular cases of Tom’s KK-chain formula. They are straightforward from Theorem 3.7.

Corollary 3.8 (Tom).

Let a1a\geq 1 and 0ba0\leq b\leq a. Then

XK(ab)\displaystyle X_{K(ab)} =(a1)!b!i=0b(a+b2i)e(a+bi)i,and\displaystyle=(a-1)!\,b!\ \sum_{i=0}^{b}(a+b-2i)e_{(a+b-i)i},\quad\text{and}
XK(a1b)\displaystyle X_{K(a1b)} =(a1)!b!\brk3(a1)(b+1)ea(b+1)+i=0b(a+b+12i)e(a+b+1i)i.\displaystyle=(a-1)!\,b!\ \brk 3{(a-1)(b+1)e_{a(b+1)}+\sum_{i=0}^{b}(a+b+1-2i)e_{(a+b+1-i)i}}.
Proof.

In Theorem 3.7, taking n=a+cn=a+c and b=0b=0 yields the first formula, while taking n=a+1+bn=a+1+b and b=1b=1 yields the second. ∎

We remark that the ee-positivity of the graphs K(ab)K(ab) and K(a1b)K(a1b) is clear from Proposition 2.2. On the other hand, in Corollary 3.8, taking b=1b=1 in the first formula and taking b=0b=0 in the second result in the same formula

XK(a1)=(a1)!\brk1(a+1)ea+1+(a1)ea1.X_{K(a1)}=(a-1)!\brk 1{(a+1)e_{a+1}+(a-1)e_{a1}}.
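The two formulas of Corollary 3.8 can likewise be tested against chromatic polynomials. The sketch below assumes (as an interpretation) that K(ab) and K(a1b) are the chains of cliques of sizes (a, b+1) and (a, 2, b+1) glued consecutively at shared vertices, whose chromatic polynomials are products of falling factorials, and it specializes e_λ at x_1 = ... = x_k = 1.

```python
from math import comb, factorial

def falling(x, m):
    """Falling factorial x(x-1)...(x-m+1)."""
    out = 1
    for j in range(m):
        out *= x - j
    return out

def spec_Kab(a, b, k):
    """First formula of Corollary 3.8, specialized at x_1 = ... = x_k = 1."""
    s = sum((a + b - 2 * i) * comb(k, a + b - i) * comb(k, i) for i in range(b + 1))
    return factorial(a - 1) * factorial(b) * s

def spec_Ka1b(a, b, k):
    """Second formula of Corollary 3.8, specialized at x_1 = ... = x_k = 1."""
    s = (a - 1) * (b + 1) * comb(k, a) * comb(k, b + 1)
    s += sum((a + b + 1 - 2 * i) * comb(k, a + b + 1 - i) * comb(k, i)
             for i in range(b + 1))
    return factorial(a - 1) * factorial(b) * s

for a in range(1, 5):
    for b in range(a + 1):
        for k in range(1, 8):
            # chromatic polynomials of the assumed vertex-glued clique chains
            assert spec_Kab(a, b, k) == falling(k, a) * falling(k - 1, b)
            assert spec_Ka1b(a, b, k) == falling(k, a) * (k - 1) * falling(k - 1, b)
print("Corollary 3.8 matches the clique-chain chromatic polynomials")
```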

3.4. Hats and generalized bulls

A hat is a graph obtained by adding an edge to a path. Let

n=a+m+b,where m2 and a,b0.n=a+m+b,\quad\text{where $m\geq 2$ and $a,b\geq 0$}.

The hat Ha,m,bH_{a,m,b} is the graph obtained from the path Pn=v1vnP_{n}=v_{1}\dotsm v_{n} by adding the edge va+1va+mv_{a+1}v_{a+m}, see Fig. 4. It is a unicyclic graph with the cycle length mm. By definition,

\absV(Ha,m,b)=\absE(Ha,m,b)=n.\abs{V(H_{a,m,b})}=\abs{E(H_{a,m,b})}=n.

It is clear that Ha,m,bH_{a,m,b} is isomorphic to Hb,m,aH_{b,m,a}. In particular, the hat H0,m,bH_{0,m,b} is the tadpole Tm,bT_{m,b}, the hat Ha,2,bH_{a,2,b} is a path with a repeated edge, and the hat Ha,3,bH_{a,3,b} is the generalized bull K(1a+121b)K(1^{a+1}21^{b}).

Figure 4. The hat Ha,m,bH_{a,m,b}.

Computing XHa,m,bX_{H_{a,m,b}}, we encounter the chromatic symmetric function of spiders with 33 legs. For any partition λ=λ1λ2n1\lambda=\lambda_{1}\lambda_{2}\dotsm\vdash n-1, the spider S(λ)S(\lambda) is the tree of order nn obtained by identifying one end of each of the paths Pλ1+1P_{\lambda_{1}+1}, Pλ2+1P_{\lambda_{2}+1}, \dots, see Fig. 5 for an illustration of S(abc)S(abc).

Figure 5. The spider S(abc)S(abc), which has n=a+b+c+1n=a+b+c+1 vertices.

Zheng [34, Lemma 4.4] showed that for any multiset {a,b,c}\{a,b,c\} and n=a+b+c+1n=a+b+c+1,

(3.15) XS(abc)=XPn+i=1cXPiXPnii=b+1b+cXPiXPni.X_{S(abc)}=X_{P_{n}}+\sum_{i=1}^{c}X_{P_{i}}X_{P_{n-i}}-\sum_{i=b+1}^{b+c}X_{P_{i}}X_{P_{n-i}}.

To prove the ee-positivity of hats, we introduce a special composition bisection defined as follows. For any composition KK of size at least b+1b+1, we define a bisection K=K1K2K=K_{1}K_{2} by

\absK1=σK+(b+1).\abs{K_{1}}=\sigma_{K}^{+}(b+1).

It is possible that K2K_{2} is empty. A key property of this bisection is the implication

(3.16) H=K1HH1=K1.H=K_{1}H^{\prime}\implies H_{1}=K_{1}.
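The bisection and the implication (3.16) are easy to verify mechanically. The sketch below assumes, consistently with the usage above, that σ_K^+(x) denotes the least partial sum of K that is at least x.

```python
def compositions(n):
    """All compositions of n as tuples."""
    if n == 0:
        yield ()
        return
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            yield (first,) + rest

def bisect(K, b):
    """Split K = K1 K2 with |K1| = sigma_K^+(b+1), the least prefix sum >= b+1."""
    total = 0
    for pos, part in enumerate(K, start=1):
        total += part
        if total >= b + 1:
            return K[:pos], K[pos:]
    raise ValueError("composition has size at most b")

# Property (3.16): the first block of H = K1 H' is again K1.
b = 3
for n in range(b + 1, b + 5):
    for K in compositions(n):
        K1, K2 = bisect(K, b)
        for m in range(4):
            for Hp in compositions(m):
                assert bisect(K1 + Hp, b)[0] == K1
print("property (3.16) holds in all tested cases")
```

The point is that the bisection only inspects partial sums inside the prefix K1, so prepending K1 to any composition forces the first block to be K1.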
Theorem 3.9.

Every hat is ee-positive.

Proof.

Let n=a+m+bn=a+m+b. Since XHa,2,b=XPnX_{H_{a,2,b}}=X_{P_{n}} is ee-positive, we can suppose that m3m\geq 3. Let G=Ha,m,bG=H_{a,m,b}. Applying Theorem 2.5 to the triangle e1e2e3e_{1}e_{2}e_{3} in Fig. 6, we obtain

(3.17) XHa,m,b=XHa+1,m1,b+XS(a+1,m2,b)XPa+1XTm1,b.X_{H_{a,m,b}}=X_{H_{a+1,\,m-1,\,b}}+X_{S(a+1,\,m-2,\,b)}-X_{P_{a+1}}X_{T_{m-1,\,b}}.
Figure 6. The triangle e1e2e3e_{1}e_{2}e_{3} in applying the triple-deletion property to the hat Ha,m,bH_{a,m,b}.

Adding up Eq. 3.17 as the cycle-length parameter runs from 33 to the value mm, we obtain

XG=XPn+k=1m2\brk1XS(a+k,b,mk1)XPa+kXTmk,b.X_{G}=X_{P_{n}}+\sum_{k=1}^{m-2}\brk 1{X_{S(a+k,\,b,\,m-k-1)}-X_{P_{a+k}}X_{T_{m-k,\,b}}}.

Substituting Eq. 3.15 for spiders into the formula above, we deduce that

XG\displaystyle X_{G} =XPn+k=1m2\brk4XPn+i=1mk1\brk1XPiXPniXPb+iXPnbiXPa+kXTmk,b\displaystyle=X_{P_{n}}+\sum_{k=1}^{m-2}\brk 4{X_{P_{n}}+\sum_{i=1}^{m-k-1}\brk 1{X_{P_{i}}X_{P_{n-i}}-X_{P_{b+i}}X_{P_{n-b-i}}}-X_{P_{a+k}}X_{T_{m-k,\,b}}}
=i=0m2(m1i)XPiXPnii=1m2(m1i)XPb+iXPnbii=1m2XTmi,bXPa+i.\displaystyle=\sum_{i=0}^{m-2}(m-1-i)X_{P_{i}}X_{P_{n-i}}-\sum_{i=1}^{m-2}(m-1-i)X_{P_{b+i}}X_{P_{n-b-i}}-\sum_{i=1}^{m-2}X_{T_{m-i,\,b}}X_{P_{a+i}}.

Substituting Eq. 1.1 for paths and Theorem 3.3 for tadpoles into it, we obtain

(3.18) XG\displaystyle X_{G} =K=IJn\absIm2\brk1m1\absIwIwJeKK=PQnb+1\absPb+m2\brk1b+m1\absPwPwQeKK=PQnb+2\absPb+m1ΘP+(b+1)wPwQeK.\displaystyle=\smashoperator[]{\sum_{\begin{subarray}{c}K=I\!J\vDash n\\ \abs{I}\leq m-2\end{subarray}}^{}}\brk 1{m-1-\abs{I}}w_{I}w_{J}e_{K}-\smashoperator[]{\sum_{\begin{subarray}{c}K=PQ\vDash n\\ b+1\leq\abs{P}\leq b+m-2\end{subarray}}^{}}\brk 1{b+m-1-\abs{P}}w_{P}w_{Q}e_{K}-\smashoperator[]{\sum_{\begin{subarray}{c}K=PQ\vDash n\\ b+2\leq\abs{P}\leq b+m-1\end{subarray}}^{}}\Theta_{P}^{+}(b+1)w_{P}w_{Q}e_{K}.

Note that the upper (resp., lower) bound for \absP\abs{P} in the second (resp., third) sum can be replaced with b+m1b+m-1 (resp., b+1b+1). As a consequence, one may regard the last two sums as running over the same set of pairs (P,Q)(P,Q). By Lemma 2.9, we can merge their coefficients of wPwQeKw_{P}w_{Q}e_{K} as

\brk1b+m1\absP+ΘP+(b+1)\displaystyle\brk 1{b+m-1-\abs{P}}+\Theta_{P}^{+}(b+1) =m2\absP+σP+(b+1).\displaystyle=m-2-\abs{P}+\sigma_{P}^{+}(b+1).

Therefore, we can rewrite Eq. 3.18 as

(3.19) XG=(I,J)𝒜aIwIwJeIJ(P,Q)bPwPwQePQ,X_{G}=\sum_{(I,J)\in\mathcal{A}}a_{I}w_{I}w_{J}e_{I\!J}-\sum_{(P,Q)\in\mathcal{B}}b_{P}w_{P}w_{Q}e_{PQ},

where aI=m1\absIa_{I}=m-1-\abs{I}, bP=m2\absP+σP+(b+1)b_{P}=m-2-\abs{P}+\sigma_{P}^{+}(b+1),

𝒜\displaystyle\mathcal{A} ={(I,J):IJn,\absIm2,wIwJ0},and\displaystyle=\{(I,J)\colon I\!J\vDash n,\ \abs{I}\leq m-2,\ w_{I}w_{J}\neq 0\},\quad\text{and}
\displaystyle\mathcal{B} ={(P,Q):PQn,b+1\absPb+m1,wPwQ0}.\displaystyle=\{(P,Q)\colon PQ\vDash n,\ b+1\leq\abs{P}\leq b+m-1,\ w_{P}w_{Q}\neq 0\}.

One should note the following facts:

  • When (I,J)𝒜(I,J)\in\mathcal{A}, it is possible that I=ϵI=\epsilon is the empty composition.

  • aI1a_{I}\geq 1 for any (I,J)𝒜(I,J)\in\mathcal{A}.

  • bP0b_{P}\geq 0 for any (P,Q)(P,Q)\in\mathcal{B}. Moreover, together with Eq. 3.18, one may infer that

    (3.20) bP=0{\absP=b+m1ΘP+(b+1)=0P=P1P2 with \brk1\absP1,\absP2=(b+1,m2).b_{P}=0\iff\begin{cases*}\abs{P}=b+m-1\\ \Theta_{P}^{+}(b+1)=0\end{cases*}\iff P=P_{1}P_{2}\text{ with }\brk 1{\abs{P_{1}},\,\abs{P_{2}}}=(b+1,\,m-2).

We will deal with the cases q1=1q_{1}=1 and q11q_{1}\neq 1 respectively. Let

1\displaystyle\mathcal{B}_{1} ={(P,Q):q1=1,bP>0}\displaystyle=\{(P,Q)\in\mathcal{B}\colon q_{1}=1,\ b_{P}>0\}
={(P,1Q):P1Qn,b+1\absPb+m1,wPw1Q0,bP>0},and\displaystyle=\{(P,1Q^{\prime})\colon P1Q^{\prime}\vDash n,\ b+1\leq\abs{P}\leq b+m-1,\ w_{P}w_{1Q^{\prime}}\neq 0,\ b_{P}>0\},\quad\text{and}
2\displaystyle\mathcal{B}_{2} ={(P,Q):q11}\displaystyle=\{(P,Q)\in\mathcal{B}\colon q_{1}\neq 1\}
={(P,Q):PQn,b+1\absPb+m1,wPQ0}.\displaystyle=\{(P,Q)\colon PQ\vDash n,\ b+1\leq\abs{P}\leq b+m-1,\ w_{PQ}\neq 0\}.

Let (P,1Q)1(P,1Q^{\prime})\in\mathcal{B}_{1}. We shall show that the map hh defined by

h(P, 1Q)=(1P2,P1Q)h(P,\,1Q^{\prime})=(1P_{2},\,P_{1}Q^{\prime})

is a bijection from 1\mathcal{B}_{1} to the set

𝒜1\displaystyle\mathcal{A}_{1} =\brk[c]1(1I,J)𝒜:\absJ2a\displaystyle=\brk[c]1{(1I^{\prime},\,J)\in\mathcal{A}\colon\abs{J_{2}}\geq a}
=\brk[c]1(1I,J):1IJn,\abs1Im2,w1IwJ0,\absJ2a.\displaystyle=\brk[c]1{(1I^{\prime},\,J)\colon 1I^{\prime}\!J\vDash n,\ \abs{1I^{\prime}}\leq m-2,\ w_{1I^{\prime}}w_{J}\neq 0,\ \abs{J_{2}}\geq a}.

Before that, it is straightforward to check from the definitions that

a1P2\displaystyle a_{1P_{2}} =m1\abs1P2=bP,\displaystyle=m-1-\abs{1P_{2}}=b_{P},
(3.21) w1P2wP1Q\displaystyle w_{1P_{2}}w_{P_{1}Q^{\prime}} =wPw1Q,and\displaystyle=w_{P}w_{1Q^{\prime}},\quad\text{and}
e1P2P1Q\displaystyle e_{1P_{2}P_{1}Q^{\prime}} =eP1Q.\displaystyle=e_{P1Q^{\prime}}.

Therefore, if the bijectivity is proved, then we can simplify Eq. 3.19 to

(3.22) XG=(I,J)𝒜,i11aIwIwJeIJ(P,Q)2bPwPwQePQ+(I,J)𝒜1aIwIwJeIJ,X_{G}=\sum_{\begin{subarray}{c}(I,J)\in\mathcal{A},\ i_{1}\neq 1\end{subarray}}a_{I}w_{I}w_{J}e_{I\!J}-\sum_{(P,Q)\in\mathcal{B}_{2}}b_{P}w_{P}w_{Q}e_{PQ}+\sum_{(I,J)\in\mathcal{A}_{1}^{\prime}}a_{I}w_{I}w_{J}e_{I\!J},

where 𝒜1={(I,J)𝒜:i1=1}\𝒜1={(I,J)𝒜:i1=1,\absJ2a1}\mathcal{A}_{1}^{\prime}=\{(I,J)\in\mathcal{A}\colon i_{1}=1\}\backslash\mathcal{A}_{1}=\{(I,J)\in\mathcal{A}\colon i_{1}=1,\ \abs{J_{2}}\leq a-1\}.

In order to establish the bijectivity of hh, we need to prove that

  1. (1)

    h(P,1Q)𝒜1h(P,1Q^{\prime})\in\mathcal{A}_{1},

  2. (2)

    hh is injective, and

  3. (3)

    hh is surjective: for any (1I,J)𝒜1(1I^{\prime},J)\in\mathcal{A}_{1}, there exists (P,1Q)1(P,1Q^{\prime})\in\mathcal{B}_{1} such that h(P,1Q)=(1I,J)h(P,1Q^{\prime})=(1I^{\prime},J).

We proceed one by one. Item 1: if we write h(P,1Q)=(1I,J)h(P,1Q^{\prime})=(1I^{\prime},J), then by the implication (3.16),

(3.23) (I,J1,J2)=(P2,P1,Q).(I^{\prime},\,J_{1},\,J_{2})=(P_{2},\,P_{1},\,Q^{\prime}).

Let us check (1I,J)𝒜1(1I^{\prime},J)\in\mathcal{A}_{1} by definition:

  • 1IJ=1P2P1Qn1I^{\prime}\!J=1P_{2}\cdotp P_{1}Q^{\prime}\vDash n since P1QnP\cdotp 1Q^{\prime}\vDash n;

  • \abs1Im2\abs{1I^{\prime}}\leq m-2 since 0<bP=m2\absP20<b_{P}=m-2-\abs{P_{2}};

  • w1IwJ=w1P2wP1Q=wPw1Q0w_{1I^{\prime}}w_{J}=w_{1P_{2}}w_{P_{1}Q^{\prime}}=w_{P}w_{1Q^{\prime}}\neq 0; and

  • \absJ2=\absQ=n1\absPn1(b+m1)=a\abs{J_{2}}=\abs{Q^{\prime}}=n-1-\abs{P}\geq n-1-(b+m-1)=a.

Item 2: if h(P,1Q)=h(α,1β)=(1I,J)h(P,1Q^{\prime})=h(\alpha,1\beta^{\prime})=(1I^{\prime},J), then by Eq. 3.23, P=P1P2=J1I=α1α2=αP=P_{1}P_{2}=J_{1}I^{\prime}=\alpha_{1}\alpha_{2}=\alpha and Q=βQ^{\prime}=\beta^{\prime}.

Item 3: let (1I,J)𝒜1(1I^{\prime},J)\in\mathcal{A}_{1}. Consider (P,1Q)=(J1I, 1J2)(P,1Q^{\prime})=(J_{1}I^{\prime},\,1J_{2}). By the implication (3.16), we obtain Eq. 3.23. Thus h(P,1Q)=(1P2,P1Q)=(1I,J)h(P,1Q^{\prime})=(1P_{2},\,P_{1}Q^{\prime})=(1I^{\prime},\,J). It remains to check that (P,1Q)1(P,1Q^{\prime})\in\mathcal{B}_{1}:

  • P1Q=J1I1J2nP1Q^{\prime}=J_{1}I^{\prime}1J_{2}\vDash n since 1IJn1I^{\prime}J\vDash n.

  • b+1\absJ1\absJ1I=\absP=n1\absJ2n1a=b+m1b+1\leq\abs{J_{1}}\leq\abs{J_{1}I^{\prime}}=\abs{P}=n-1-\abs{J_{2}}\leq n-1-a=b+m-1.

  • wPw1Q=wJ1Iw1J2=w1IwJ0w_{P}w_{1Q^{\prime}}=w_{J_{1}I^{\prime}}w_{1J_{2}}=w_{1I^{\prime}}w_{J}\neq 0.

  • If bP=0b_{P}=0, then bJ1I=0b_{J_{1}I^{\prime}}=0. By (3.16) and (3.20), \absI=m2\abs{I^{\prime}}=m-2, a contradiction. Thus bP>0b_{P}>0.

This proves that hh is bijective.

It remains to deal with the case q11q_{1}\neq 1. Continuing with Eq. 3.22, we decompose 2\mathcal{B}_{2} as

2=K𝒦(K),\mathcal{B}_{2}=\bigsqcup_{K\in\mathcal{K}}\mathcal{B}(K),

where

𝒦\displaystyle\mathcal{K} ={Kn:wK0,\absK1b+m1},and\displaystyle=\{K\vDash n\colon w_{K}\neq 0,\ \abs{K_{1}}\leq b+m-1\},\quad\text{and}
(K)\displaystyle\mathcal{B}(K) ={(P,Q)2:PQ=K}\displaystyle=\{(P,Q)\in\mathcal{B}_{2}\colon PQ=K\}
={(P,Q):PQ=K,b+1\absPb+m1}.\displaystyle=\{(P,Q)\colon PQ=K,\ b+1\leq\abs{P}\leq b+m-1\}.

We remark that the bound restriction in 𝒦\mathcal{K} is there to guarantee that (K)\mathcal{B}(K) is nonempty:

\absK1b+m1(K).\abs{K_{1}}\leq b+m-1\iff\mathcal{B}(K)\neq\emptyset.

In fact, the restriction implies (K1,K2)(K)(K_{1},K_{2})\in\mathcal{B}(K); conversely, if \absK1b+m\abs{K_{1}}\geq b+m, then KK has no prefix PP such that b+1\absPb+m1b+1\leq\abs{P}\leq b+m-1. This proves the equivalence.

Now, fix K𝒦K\in\mathcal{K}. Let

𝒜(K)\displaystyle\mathcal{A}(K) ={(I,J)𝒜:i11,J1I¯J2=K}\displaystyle=\{(I,J)\in\mathcal{A}\colon i_{1}\neq 1,\ J_{1}\overline{I}J_{2}=K\}
={(I,J):\absIm2,J1I¯J2=K}.\displaystyle=\{(I,J)\colon\abs{I}\leq m-2,\ J_{1}\overline{I}J_{2}=K\}.

Then the sets 𝒜(K)\mathcal{A}(K) for K𝒦K\in\mathcal{K} are disjoint. In fact, if

(I,J)𝒜(K)𝒜(H),(I,J)\in\mathcal{A}(K)\cap\mathcal{A}(H),

then KK and HH have the same prefix J1=K1J_{1}=K_{1} by the implication (3.16), the same suffix J2J_{2}, and the same middle part I¯\overline{I}; thus K=HK=H. The pairs (I,J)(I,J) for the first sum in Eq. 3.22 that we do not use to cancel the second sum form the set

𝒜2\displaystyle\mathcal{A}_{2} ={(I,J)𝒜:i11}\K𝒦𝒜(K)\displaystyle=\{(I,J)\in\mathcal{A}\colon i_{1}\neq 1\}\backslash\sqcup_{K\in\mathcal{K}}\mathcal{A}(K)
={(I,J)𝒜:i11,J1I¯J2𝒦}\displaystyle=\{(I,J)\in\mathcal{A}\colon i_{1}\neq 1,\ J_{1}\overline{I}J_{2}\not\in\mathcal{K}\}
={(I,J)𝒜:i11,\absJ1b+m}.\displaystyle=\{(I,J)\in\mathcal{A}\colon i_{1}\neq 1,\ \abs{J_{1}}\geq b+m\}.

Since eIJ=eK=ePQe_{I\!J}=e_{K}=e_{PQ} for any (I,J)𝒜(K)(I,J)\in\mathcal{A}(K) and (P,Q)(K)(P,Q)\in\mathcal{B}(K), Eq. 3.22 can be recast as

(3.24) XG=K𝒦Δ(K)eK+(I,J)𝒜1𝒜2aIwIwJeIJ,X_{G}=\sum_{K\in\mathcal{K}}\Delta(K)e_{K}+\sum_{(I,J)\in\mathcal{A}_{1}\cup\mathcal{A}_{2}}a_{I}w_{I}w_{J}e_{I\!J},

where

Δ(K)\displaystyle\Delta(K) =(I,J)𝒜(K)aIwIwJ(P,Q)(K)bPwPwQ.\displaystyle=\sum_{(I,J)\in\mathcal{A}(K)}a_{I}w_{I}w_{J}-\sum_{(P,Q)\in\mathcal{B}(K)}b_{P}w_{P}w_{Q}.

Hence it suffices to show that Δ(K)0\Delta(K)\geq 0.

Let K2=m1m2K_{2}=m_{1}m_{2}\dotsm. Then mi2m_{i}\geq 2 for all ii since wK0w_{K}\neq 0. For i0i\geq 0, we define

Pi=K1m1mi,Qi=mi+1mi+2,Ii=mim1,andJi=K1Qi.P^{i}=K_{1}\cdotp m_{1}\dotsm m_{i},\quad Q^{i}=m_{i+1}m_{i+2}\dotsm,\quad I^{i}=m_{i}\dotsm m_{1},\quad\text{and}\quad J^{i}=K_{1}\cdotp Q^{i}.

Then P1i=J1i=K1P^{i}_{1}=J^{i}_{1}=K_{1} by the implication (3.16),

(3.25) (K)\displaystyle\mathcal{B}(K) ={(P0,Q0),,(Pl,Ql)},where \absPl=σK(b+m1), and\displaystyle=\{(P^{0},Q^{0}),\,\dots,\,(P^{l},Q^{l})\},\quad\text{where $\abs{P^{l}}=\sigma_{K}^{-}(b+m-1)$, and}
𝒜(K)\displaystyle\mathcal{A}(K) ={(I0,J0),,(Ir,Jr)}, where \absIr=σK2(m2).\displaystyle=\{(I^{0},\,J^{0}),\ \dots,\ (I^{r},\,J^{r})\},\quad\text{ where $\abs{I^{r}}=\sigma_{K_{2}}^{-}(m-2)$. }

We observe that

  • l(K2)1l\leq\ell(K_{2})-1, since \absQl=n\absPln(b+m1)=a+11\abs{Q^{l}}=n-\abs{P^{l}}\geq n-(b+m-1)=a+1\geq 1; and

  • lrl\leq r, since \absIl=\absPl\absK1(b+m1)(b+1)=m2\abs{I^{l}}=\abs{P^{l}}-\abs{K_{1}}\leq(b+m-1)-(b+1)=m-2.

Therefore,

(3.26) Δ(K)=Sl+i=l+1raIiwIiwJi,\Delta(K)=S^{l}+\sum_{i=l+1}^{r}a_{I^{i}}w_{I^{i}}w_{J^{i}},

where

Sk=i=0k\brk1aIiwIiwJibPiwPiwQifor k0.S^{k}=\sum_{i=0}^{k}\brk 1{a_{I^{i}}w_{I^{i}}w_{J^{i}}-b_{P^{i}}w_{P^{i}}w_{Q^{i}}}\quad\text{for $k\geq 0$}.

Let us compare aIia_{I^{i}} with bPib_{P^{i}}, and compare wIiwJiw_{I^{i}}w_{J^{i}} with wPiwQiw_{P^{i}}w_{Q^{i}}, respectively.

  • We have bPi=aIi1b_{P^{i}}=a_{I^{i}}-1 for all 0il0\leq i\leq l, since by Lemma 2.9,

    \absPiσPi+(b+1)=\absPi\absK1=\absIi.\abs{P^{i}}-\sigma_{P^{i}}^{+}(b+1)=\abs{P^{i}}-\abs{K_{1}}=\abs{I^{i}}.
  • By Lemma 2.7,

    wPiwQi\displaystyle w_{P^{i}}w_{Q^{i}} =wKmi+1mi+11,for 0il,and\displaystyle=w_{K}\cdotp\frac{m_{i+1}}{m_{i+1}-1},\quad\text{for $0\leq i\leq l$},\quad\text{and}
    (3.27) wIiwJi\displaystyle w_{I^{i}}w_{J^{i}} ={wK,if i=0,wKmimi1,if 1i(K2).\displaystyle=\begin{cases*}w_{K},&if $i=0$,\\ \displaystyle w_{K}\cdotp\frac{m_{i}}{m_{i}-1},&if $1\leq i\leq\ell(K_{2})$.\end{cases*}

It follows that

(3.28) S0\displaystyle S^{0} =(m1)wK(m2)m1m11wK=wKm1m+1m11,and\displaystyle=(m-1)w_{K}-(m-2)\cdotp\frac{m_{1}}{m_{1}-1}\cdotp w_{K}=w_{K}\cdotp\frac{m_{1}-m+1}{m_{1}-1},\quad\text{and}
(3.29) Sl\displaystyle S^{l} =S0+wKi=1l\brk3\brk1m1\absIimimi1\brk1m2\absIimi+1mi+11.\displaystyle=S^{0}+w_{K}\sum_{i=1}^{l}\brk 3{\brk 1{m-1-\abs{I^{i}}}\cdotp\frac{m_{i}}{m_{i}-1}-\brk 1{m-2-\abs{I^{i}}}\cdotp\frac{m_{i+1}}{m_{i+1}-1}}.

The sum in Eq. 3.29 can be simplified by telescoping. Precisely speaking, since the iith negative term and the (i+1)(i+1)th positive term have sum

\brk1m2\absIimi+1mi+11+\brk1m1\absIi+1mi+1mi+11=mi+1,-\brk 1{m-2-\abs{I^{i}}}\cdotp\frac{m_{i+1}}{m_{i+1}-1}+\brk 1{m-1-\abs{I^{i+1}}}\cdotp\frac{m_{i+1}}{m_{i+1}-1}=-m_{i+1},

we can simplify the sum in Eq. 3.29 by keeping the first positive term and the last negative term as

Sl=S0+wK\brk2\brk1m1\absI1m1m11m2ml\brk1m2\absIlml+1ml+11.S^{l}=S^{0}+w_{K}\brk 2{\brk 1{m-1-\abs{I^{1}}}\cdotp\frac{m_{1}}{m_{1}-1}-m_{2}-\dots-m_{l}-\brk 1{m-2-\abs{I^{l}}}\cdotp\frac{m_{l+1}}{m_{l+1}-1}}.

Together with Eq. 3.28, we can infer that when l1l\geq 1,

SlwK\displaystyle\frac{S^{l}}{w_{K}} =m1m+1m11+(m1m1)m1m11m2ml\brk1m2\absIlml+1ml+11\displaystyle=\frac{m_{1}-m+1}{m_{1}-1}+(m-1-m_{1})\cdotp\frac{m_{1}}{m_{1}-1}-m_{2}-\dots-m_{l}-\brk 1{m-2-\abs{I^{l}}}\cdotp\frac{m_{l+1}}{m_{l+1}-1}
(3.30) =\absIl+1m+1ml+11.\displaystyle=\frac{\abs{I^{l+1}}-m+1}{m_{l+1}-1}.

In view of Eq. 3.28, we see that Eq. 3.30 holds for l=0l=0 as well. Note that

Sl0\absIl+1m1r=l.S^{l}\geq 0\iff\abs{I^{l+1}}\geq m-1\iff r=l.

Here we have two cases to deal with. If r=lr=l, then

(3.31) Δ(K)=Sl=wK\absIl+1m+1ml+110.\Delta(K)=S^{l}=w_{K}\cdotp\frac{\abs{I^{l+1}}-m+1}{m_{l+1}-1}\geq 0.

If rl+1r\geq l+1, then by Eqs. 3.30 and 3.27,

Sl+aIl+1wIl+1wJl+1\displaystyle S^{l}+a_{I^{l+1}}w_{I^{l+1}}w_{J^{l+1}} =wK\absIl+1m+1ml+11+\brk1m1\absIl+1wKml+1ml+11\displaystyle=w_{K}\cdotp\frac{\abs{I^{l+1}}-m+1}{m_{l+1}-1}+\brk 1{m-1-\abs{I^{l+1}}}\cdotp w_{K}\cdotp\frac{m_{l+1}}{m_{l+1}-1}
=wK\brk1m1\absIl+1.\displaystyle=w_{K}\cdotp\brk 1{m-1-\abs{I^{l+1}}}.

It follows that

(3.32) Δ(K)=wK\brk1m1\absIl+1+i=l+2raIiwIiwJi0.\Delta(K)=w_{K}\cdotp\brk 1{m-1-\abs{I^{l+1}}}+\sum_{i=l+2}^{r}a_{I^{i}}w_{I^{i}}w_{J^{i}}\geq 0.

This completes the proof. ∎

By carefully collecting all terms of XHa,m,bX_{H_{a,m,b}} throughout the proof of Theorem 3.9, and reinterpreting the coefficients and bound requirements combinatorially, we can assemble a positive eIe_{I}-expansion for the chromatic symmetric function of hats.

Theorem 3.10 (Hats).

Let n=a+m+bn=a+m+b, where m2m\geq 2 and a,b0a,b\geq 0. Then

(3.33) XHa,m,b\displaystyle X_{H_{a,m,b}} =Kn,NK1NKwKeKΘK+(b+m)+ΘK(b+m1)+Kn,NK1NKwKeK\displaystyle=\sum_{K\vDash n,\ N_{K}\leq-1}\frac{-N_{K}w_{K}e_{K}}{\Theta_{K}^{+}(b+m)+\Theta_{K}^{-}(b+m-1)}+\sum_{K\vDash n,\ N_{K}\geq 1}N_{K}w_{K}e_{K}
+(I,J)𝒮a,m,b\brk1m1\absIwIwJeIJ,\displaystyle\qquad+\sum_{(I,J)\in\mathcal{S}_{a,m,b}}\brk 1{m-1-\abs{I}}w_{I}w_{J}e_{I\!J},

where NK=ΘK+(b+1)ΘK+(b+m)N_{K}=\Theta_{K}^{+}(b+1)-\Theta_{K}^{+}(b+m), and if we write J1J2J_{1}J_{2} as the bisection of JJ such that \absJ1=σJ+(b+1)\abs{J_{1}}=\sigma_{J}^{+}(b+1),

𝒮a,m,b\displaystyle\mathcal{S}_{a,m,b} =\brk[c]1(I,J):K=J1I¯J2n,i11,\absJ1b+m1, 2\absIm2,\absJ2σK¯(a)1\displaystyle=\brk[c]1{(I,J)\colon K=J_{1}\overline{I}J_{2}\vDash n,\ i_{1}\neq 1,\ \abs{J_{1}}\leq b+m-1,\ 2\leq\abs{I}\leq m-2,\ \abs{J_{2}}\leq\sigma_{\overline{K}}^{-}(a)-1}
{(I,J):K=J1I¯J2n,i11,\absJ1b+m, 2\absIm2}\displaystyle\qquad\cup\{(I,J)\colon K=J_{1}\overline{I}J_{2}\vDash n,\ i_{1}\neq 1,\ \abs{J_{1}}\geq b+m,\ 2\leq\abs{I}\leq m-2\}
{(I,J):IJn,i1=1,\absIm2,\absJ2a1}.\displaystyle\qquad\cup\{(I,J)\colon I\!J\vDash n,\ i_{1}=1,\ \abs{I}\leq m-2,\ \abs{J_{2}}\leq a-1\}.
Proof.

We keep the notions and notation from the proof of Theorem 3.9. Let K𝒦K\in\mathcal{K}. Then

\absK1b+m1,i.e.,ΘK+(b+1)m2.\abs{K_{1}}\leq b+m-1,\quad\text{i.e.,}\quad\Theta_{K}^{+}(b+1)\leq m-2.

The numerator and denominator in Eq. 3.31 can be recast as

\absIl+1m+1\displaystyle\abs{I^{l+1}}-m+1 =\brk1\absK1+\absIl+1bm\brk1\absK1b1,and\displaystyle=\brk 1{\abs{K_{1}}+\abs{I^{l+1}}-b-m}-\brk 1{\abs{K_{1}}-b-1},\quad\text{and}
(3.34) =ΘK+(b+m)ΘK+(b+1),\displaystyle=\Theta_{K}^{+}(b+m)-\Theta_{K}^{+}(b+1),
ml+11\displaystyle m_{l+1}-1 =\brk1\absK1+\absIl+1bm+\brk1b+m1\absK1\absIl\displaystyle=\brk 1{\abs{K_{1}}+\abs{I^{l+1}}-b-m}+\brk 1{b+m-1-\abs{K_{1}}-\abs{I^{l}}}
(3.35) =ΘK+(b+m)+ΘK(b+m1),\displaystyle=\Theta_{K}^{+}(b+m)+\Theta_{K}^{-}(b+m-1),

respectively. By Eqs. 3.31 and 3.32,

(3.36) K𝒦ΘK+(b+m)ΘK+(b+1)Δ(K)eK\displaystyle\smashoperator[r]{\sum_{\begin{subarray}{c}K\in\mathcal{K}\\ \Theta_{K}^{+}(b+m)\geq\Theta_{K}^{+}(b+1)\end{subarray}}^{}}\Delta(K)e_{K} =Kn,wK0ΘK+(b+1)m2ΘK+(b+1)ΘK+(b+m)1ΘK+(b+m)ΘK+(b+1)ΘK+(b+m)+ΘK(b+m1)wKeK,and\displaystyle=\sum_{\begin{subarray}{c}K\vDash n,\ w_{K}\neq 0\\ \Theta_{K}^{+}(b+1)\leq m-2\\ \Theta_{K}^{+}(b+1)\leq\Theta_{K}^{+}(b+m)-1\end{subarray}}\frac{\Theta_{K}^{+}(b+m)-\Theta_{K}^{+}(b+1)}{\Theta_{K}^{+}(b+m)+\Theta_{K}^{-}(b+m-1)}w_{K}e_{K},\quad\text{and}
K𝒦ΘK+(b+m)<ΘK+(b+1)Δ(K)eK\displaystyle\smashoperator[r]{\sum_{\begin{subarray}{c}K\in\mathcal{K}\\ \Theta_{K}^{+}(b+m)<\Theta_{K}^{+}(b+1)\end{subarray}}^{}}\Delta(K)e_{K} =K𝒦\brk3\brk1ΘK+(b+1)ΘK+(b+m)wK+i=l+2raIiwIiwJieK,\displaystyle=\sum_{K\in\mathcal{K}^{\prime}}\brk 3{\brk 1{\Theta_{K}^{+}(b+1)-\Theta_{K}^{+}(b+m)}w_{K}+\sum_{i=l+2}^{r}a_{I^{i}}w_{I^{i}}w_{J^{i}}}e_{K},

where

𝒦={Kn:wK0,ΘK+(b+m)+1ΘK+(b+1)m2}.\mathcal{K}^{\prime}=\{K\vDash n\colon w_{K}\neq 0,\ \Theta_{K}^{+}(b+m)+1\leq\Theta_{K}^{+}(b+1)\leq m-2\}.

We claim that the summation range on the right side of Eq. 3.36 can be simplified to KnK\vDash n together with

(3.37) ΘK+(b+m)ΘK+(b+1)1.\Theta_{K}^{+}(b+m)-\Theta_{K}^{+}(b+1)\geq 1.

In fact, Eq. 3.37 is one of the original bound requirements. It suffices to show that ΘK+(b+1)m2\Theta_{K}^{+}(b+1)\leq m-2 also holds. Assume to the contrary that ΘK+(b+1)m1\Theta_{K}^{+}(b+1)\geq m-1. Then

ΘK+(b+1)=ΘK+(b+m)+(m1)\Theta_{K}^{+}(b+1)=\Theta_{K}^{+}(b+m)+(m-1)

by the definition Eq. 2.12 of ΘK+\Theta_{K}^{+}, contradicting Eq. 3.37. This proves the claim.

In view of Eq. 3.24, it remains to simplify

K𝒦i=l+2raIiwIiwJieK+(I,J)𝒜1𝒜2aIwIwJeIJ,\sum_{K\in\mathcal{K}^{\prime}}\sum_{i=l+2}^{r}a_{I^{i}}w_{I^{i}}w_{J^{i}}e_{K}+\sum_{(I,J)\in\mathcal{A}_{1}\cup\mathcal{A}_{2}}a_{I}w_{I}w_{J}e_{I\!J},

in which the summands have the same form aIwIwJeIJa_{I}w_{I}w_{J}e_{I\!J}. If a pair (I,J)(I,J) appears as (Ii,Ji)(I^{i},J^{i}) in the first sum, then the requirement il+2i\geq l+2 is equivalent to saying that

\absI>\absIl+1,i.e.,\absJ1I¯>σJ1I¯J2+(b+m),\abs{I}>\abs{I^{l+1}},\quad\text{i.e.,}\quad\abs{J_{1}\overline{I}}>\sigma_{J_{1}\overline{I}J_{2}}^{+}(b+m),

and the requirement iri\leq r is equivalent to \absIm2\abs{I}\leq m-2. Thus the set of pairs (I,J)(I,J) for the first sum is

K𝒦\brk[c]1(Ii,Ji):Ii¯J2i=K2 for some l+2ir\displaystyle\bigcup_{K\in\mathcal{K}^{\prime}}\brk[c]1{(I^{i},J^{i})\colon\overline{I^{i}}J^{i}_{2}=K_{2}\text{ for some $l+2\leq i\leq r$}}
=\displaystyle= {(I,J):K=J1I¯J2n,wK0,ΘK+(b+m)+1ΘK+(b+1)m2,\displaystyle\ \{(I,J)\colon K=J_{1}\overline{I}J_{2}\vDash n,\ w_{K}\neq 0,\ \Theta_{K}^{+}(b+m)+1\leq\Theta_{K}^{+}(b+1)\leq m-2,
\absJ1I¯>σK+(b+m),\absIm2}\displaystyle\qquad\abs{J_{1}\overline{I}}>\sigma_{K}^{+}(b+m),\ \abs{I}\leq m-2\}
=\displaystyle= \brk[c]1(I,J):K=J1I¯J2n,wK0,\absJ1b+m1,\absIm2,\absJ1I>σK+(b+m)\displaystyle\ \brk[c]1{(I,J)\colon K=J_{1}\overline{I}J_{2}\vDash n,\ w_{K}\neq 0,\ \abs{J_{1}}\leq b+m-1,\ \abs{I}\leq m-2,\ \abs{J_{1}I}>\sigma_{K}^{+}(b+m)}
=\displaystyle= \brk[c]1(I,J):K=J1I¯J2n,wK0,\absJ1b+m1,\absIm2,\absJ2σK¯(a)1.\displaystyle\ \brk[c]1{(I,J)\colon K=J_{1}\overline{I}J_{2}\vDash n,\ w_{K}\neq 0,\ \abs{J_{1}}\leq b+m-1,\ \abs{I}\leq m-2,\ \abs{J_{2}}\leq\sigma_{\overline{K}}^{-}(a)-1}.

On the other hand,

𝒜1\displaystyle\mathcal{A}_{1} ={(I,J)𝒜:i1=1,\absJ2a1}\displaystyle=\{(I,J)\in\mathcal{A}\colon i_{1}=1,\ \abs{J_{2}}\leq a-1\}
={(I,J):IJn,wIwJ0,i1=1,\absIm2,\absJ2a1},and\displaystyle=\{(I,J)\colon I\!J\vDash n,\ w_{I}w_{J}\neq 0,\ i_{1}=1,\ \abs{I}\leq m-2,\ \abs{J_{2}}\leq a-1\},\quad\text{and}
𝒜2\displaystyle\mathcal{A}_{2} ={(I,J)𝒜:i11,\absJ1b+m}\displaystyle=\{(I,J)\in\mathcal{A}\colon i_{1}\neq 1,\ \abs{J_{1}}\geq b+m\}
={(I,J):IJn,\absIm2,wIwJ0,i11,\absJ1b+m}\displaystyle=\{(I,J)\colon I\!J\vDash n,\ \abs{I}\leq m-2,\ w_{I}w_{J}\neq 0,\ i_{1}\neq 1,\ \abs{J_{1}}\geq b+m\}
={(I,J):K=J1I¯J2n,wK0,\absJ1b+m,\absIm2}.\displaystyle=\{(I,J)\colon K=J_{1}\overline{I}J_{2}\vDash n,\ w_{K}\neq 0,\ \abs{J_{1}}\geq b+m,\ \abs{I}\leq m-2\}.

Since the product aIwIwJeIJa_{I}w_{I}w_{J}e_{I\!J} vanishes when wIwJ=0w_{I}w_{J}=0, we can replace the conditions wK0w_{K}\neq 0 for K=J1I¯J2K=J_{1}\overline{I}J_{2} with i11i_{1}\neq 1. Furthermore,

{(I,J)𝒜2:I=ϵ}={(ϵ,K):Kn,\absK1b+m}.\displaystyle\{(I,J)\in\mathcal{A}_{2}\colon I=\epsilon\}=\{(\epsilon,K)\colon K\vDash n,\ \abs{K_{1}}\geq b+m\}.

The sum for aIwIwJeIJa_{I}w_{I}w_{J}e_{I\!J} over this subset can be merged into the second sum as

KnΘK+(b+m)+1ΘK+(b+1)m2\brk1ΘK+(b+1)ΘK+(b+m)wKeK+K=IJn,I=ϵ\absK1b+maIwIwJeIJ\displaystyle\sum_{\begin{subarray}{c}K\vDash n\\ \Theta_{K}^{+}(b+m)+1\leq\Theta_{K}^{+}(b+1)\leq m-2\end{subarray}}\brk 1{\Theta_{K}^{+}(b+1)-\Theta_{K}^{+}(b+m)}w_{K}e_{K}+\sum_{\begin{subarray}{c}K=I\!J\vDash n,\ I=\epsilon\\ \abs{K_{1}}\geq b+m\end{subarray}}a_{I}w_{I}w_{J}e_{I\!J}
=KnΘK+(b+1)ΘK+(b+m)1\brk1ΘK+(b+1)ΘK+(b+m)wKeK;\displaystyle\ =\sum_{\begin{subarray}{c}K\vDash n\\ \Theta_{K}^{+}(b+1)-\Theta_{K}^{+}(b+m)\geq 1\end{subarray}}\brk 1{\Theta_{K}^{+}(b+1)-\Theta_{K}^{+}(b+m)}w_{K}e_{K};

this is because when \absK1b+m\abs{K_{1}}\geq b+m,

ΘK+(b+1)ΘK+(b+m)=m1=aϵ1.\Theta_{K}^{+}(b+1)-\Theta_{K}^{+}(b+m)=m-1=a_{\epsilon}\geq 1.

Collecting all the contributions to XGX_{G}, we obtain Eq. 3.33 as desired. ∎

For example,

XH1,4,1\displaystyle X_{H_{1,4,1}} =K6ΘK+(5)ΘK+(2)1ΘK+(5)ΘK+(2)ΘK+(5)+ΘK(4)wKeK+K6ΘK+(2)ΘK+(5)1\brk1ΘK+(2)ΘK+(5)wKeK\displaystyle=\sum_{\begin{subarray}{c}K\vDash 6\\ \Theta_{K}^{+}(5)-\Theta_{K}^{+}(2)\geq 1\end{subarray}}\frac{\Theta_{K}^{+}(5)-\Theta_{K}^{+}(2)}{\Theta_{K}^{+}(5)+\Theta_{K}^{-}(4)}w_{K}e_{K}+\sum_{\begin{subarray}{c}K\vDash 6\\ \Theta_{K}^{+}(2)-\Theta_{K}^{+}(5)\geq 1\end{subarray}}\brk 1{\Theta_{K}^{+}(2)-\Theta_{K}^{+}(5)}w_{K}e_{K}
+J1I¯J26,i11\absJ14,\absI=2\absJ2σK¯(1)1\brk13\absIwIwJeIJ+J1I¯J26,i11\absJ15,\absI=2\brk13\absIwIwJeIJ+IJ6,i1=1\absI2,\absJ20\brk13\absIwIwJeIJ\displaystyle\qquad+\smashoperator[r]{\sum_{\begin{subarray}{c}J_{1}\overline{I}J_{2}\vDash 6,\ i_{1}\neq 1\\ \abs{J_{1}}\leq 4,\ \abs{I}=2\\ \abs{J_{2}}\leq\sigma_{\overline{K}}^{-}(1)-1\end{subarray}}^{}}\brk 1{3-\abs{I}}w_{I}w_{J}e_{I\!J}+\smashoperator[r]{\sum_{\begin{subarray}{c}J_{1}\overline{I}J_{2}\vDash 6,\ i_{1}\neq 1\\ \abs{J_{1}}\geq 5,\ \abs{I}=2\end{subarray}}^{}}\brk 1{3-\abs{I}}w_{I}w_{J}e_{I\!J}+\smashoperator[r]{\sum_{\begin{subarray}{c}I\!J\vDash 6,\ i_{1}=1\\ \abs{I}\leq 2,\ \abs{J_{2}}\leq 0\end{subarray}}^{}}\brk 1{3-\abs{I}}w_{I}w_{J}e_{I\!J}
=(w24e24/3+w23e23)+(w42e42+w132e132+3w6e6+3w15e15)\displaystyle=(w_{24}e_{24}/3+w_{2^{3}}e_{2^{3}})+(w_{42}e_{42}+w_{132}e_{132}+3w_{6}e_{6}+3w_{15}e_{15})
+0+0+2(w1w5e51+w1w14e141)\displaystyle\qquad+0+0+2(w_{1}w_{5}e_{51}+w_{1}w_{14}e_{141})
=18e6+22e51+6e42+6e412+2e321+2e23.\displaystyle=18e_{6}+22e_{51}+6e_{42}+6e_{41^{2}}+2e_{321}+2e_{2^{3}}.
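As with the barbell example, this expansion can be spot-checked by brute force: the hat H_{1,4,1} is the path v1...v6 with the extra edge v2 v5, and the specialization x_1 = ... = x_k = 1 turns X_G into the chromatic polynomial and e_λ into a product of binomial coefficients. A minimal Python sketch:

```python
from itertools import product
from math import comb, prod

def chromatic(n, edges, k):
    """Count proper k-colorings of a graph on vertices 0..n-1 by brute force."""
    return sum(
        all(c[u] != c[v] for u, v in edges)
        for c in product(range(k), repeat=n)
    )

# Hat H_{1,4,1}: path on 6 vertices plus the edge v2 v5 (0-indexed below).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (1, 4)]

# The e-expansion computed above.
expansion = {(6,): 18, (5, 1): 22, (4, 2): 6, (4, 1, 1): 6,
             (3, 2, 1): 2, (2, 2, 2): 2}

for k in range(1, 6):
    spec = sum(c * prod(comb(k, p) for p in lam) for lam, c in expansion.items())
    assert chromatic(6, edges, k) == spec
print("claimed expansion of X_{H_{1,4,1}} is consistent for k <= 5")
```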

For particular parameters, the hat Ha,m,bH_{a,m,b} specializes to graphs that we explored previously.

  1. (1)

    For a=0a=0, Theorem 3.10 reduces to Theorem 3.3 since only the second sum in Eq. 3.33 survives.

  2. (2)

    For m=2m=2, Theorem 3.10 reduces to Eq. 1.1, since only the first two sums in Eq. 3.33 survive, and they are the sums of terms wKeKw_{K}e_{K} for ΘK+(b+1)=0\Theta_{K}^{+}(b+1)=0 and for ΘK+(b+1)1\Theta_{K}^{+}(b+1)\geq 1, respectively.

  3. (3)

    For b=0b=0, Theorem 3.10 may give a noncommutative analog for the tadpole Tm,aT_{m,a} that is different from the one given by Theorem 3.3. For instance, these two analogs for XT3,2X_{T_{3,2}} are respectively

    X~H2,3,0=10Λ5+6Λ14+2Λ23+6Λ32andX~T3,2=10Λ5+6Λ14+8Λ23.\widetilde{X}_{H_{2,3,0}}=10\Lambda^{5}+6\Lambda^{14}+2\Lambda^{23}+6\Lambda^{32}\quad\text{and}\quad\widetilde{X}_{T_{3,2}}=10\Lambda^{5}+6\Lambda^{14}+8\Lambda^{23}.
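A quick sketch confirms that these two noncommutative analogs have the same commutative image (under Λ^J ↦ e_J, coefficients merge by the underlying partition), and that this image is consistent with brute-force proper-coloring counts for the tadpole T_{3,2}, a triangle with a two-edge path attached.

```python
from itertools import product
from math import comb, prod

def commutative_image(nsym):
    """Send each Lambda^J to e_{lambda(J)}, collecting coefficients by partition."""
    out = {}
    for J, c in nsym.items():
        lam = tuple(sorted(J, reverse=True))
        out[lam] = out.get(lam, 0) + c
    return out

X_hat = {(5,): 10, (1, 4): 6, (2, 3): 2, (3, 2): 6}   # analog from H_{2,3,0}
X_tad = {(5,): 10, (1, 4): 6, (2, 3): 8}              # analog from T_{3,2}

assert commutative_image(X_hat) == commutative_image(X_tad)

# Consistency with the tadpole T_{3,2}: triangle 0-1-2 with path 2-3-4 attached.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
for k in range(1, 6):
    chi = sum(all(c[u] != c[v] for u, v in edges)
              for c in product(range(k), repeat=5))
    spec = sum(c * prod(comb(k, p) for p in lam)
               for lam, c in commutative_image(X_tad).items())
    assert chi == spec
print("both analogs reduce to the same e-expansion of X_{T_{3,2}}")
```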

For m=3m=3, the hat Ha,3,bH_{a,3,b} is the generalized bull K(1a+121b)K(1^{a+1}21^{b}). We produce a neat formula for generalized bulls, which is not a direct specialization of Theorem 3.10.

Theorem 3.11 (Generalized bulls).

For a1a\geq 1 and na+2n\geq a+2,

XK(1a21na2)=In,i13ΘI+(a)1i12i11wIeI+InΘI+(a)22wIeI+Jn1ΘJ+(a)2wJeJ1.X_{K(1^{a}21^{n-a-2})}=\sum_{\begin{subarray}{c}I\vDash n,\ i_{-1}\geq 3\\ \Theta_{I}^{+}(a)\leq 1\end{subarray}}\frac{i_{-1}-2}{i_{-1}-1}\cdotp w_{I}e_{I}+\sum_{\begin{subarray}{c}I\vDash n\\ \Theta_{I}^{+}(a)\geq 2\end{subarray}}2w_{I}e_{I}+\sum_{\begin{subarray}{c}J\vDash n-1\\ \Theta_{J}^{+}(a)\geq 2\end{subarray}}w_{J}e_{J1}.
Proof.

Let G=K(1a21na2)G=K(1^{a}21^{n-a-2}). Taking (a,m,b)=(na2, 3,a1)(a,m,b)=(n-a-2,\,3,\,a-1) in Eq. 3.33, we obtain

(3.38) XG=S1+S2+S3,X_{G}=S_{1}+S_{2}+S_{3},

where

S1\displaystyle S_{1} =Kn,NK0,wK0NKDKwKeK,\displaystyle=\sum_{K\vDash n,\ N_{K}\leq 0,\ w_{K}\neq 0}\frac{-N_{K}}{D_{K}}w_{K}e_{K},
S2\displaystyle S_{2} =Kn,NK>0,wK0NKwKeK,and\displaystyle=\sum_{K\vDash n,\ N_{K}>0,\ w_{K}\neq 0}N_{K}w_{K}e_{K},\quad\text{and}
(3.39) S3\displaystyle S_{3} =Jn1,ΘJ+(a)2wJe1J,\displaystyle=\sum_{J\vDash n-1,\ \Theta_{J}^{+}(a)\geq 2}w_{J}e_{1J},

where NK=ΘK+(a)ΘK+(a+2)N_{K}=\Theta_{K}^{+}(a)-\Theta_{K}^{+}(a+2) and DK=ΘK+(a+2)+ΘK(a+1)D_{K}=\Theta_{K}^{+}(a+2)+\Theta_{K}^{-}(a+1).

We shall simplify S1S_{1} and S2S_{2} separately. For S1S_{1}, we proceed in 33 steps. First, we claim that

(3.40) {NK0wK0{ΘK+(a)1wK0.\begin{cases*}N_{K}\leq 0\\ w_{K}\neq 0\end{cases*}\iff\begin{cases*}\Theta_{K}^{+}(a)\leq 1\\ w_{K}\neq 0.\end{cases*}

In fact, for the forward direction, if ΘK+(a)2\Theta_{K}^{+}(a)\geq 2, then NK=2N_{K}=2 by Lemma 2.8, a contradiction. For the backward direction, we have two cases to deal with:

  • If ΘK+(a)=0\Theta_{K}^{+}(a)=0, then NK=ΘK+(a+2)0N_{K}=-\Theta_{K}^{+}(a+2)\leq 0 holds trivially.

  • If ΘK+(a)=1\Theta_{K}^{+}(a)=1, since wK0w_{K}\neq 0, we then find ΘK+(a+2)1\Theta_{K}^{+}(a+2)\geq 1 and NK0N_{K}\leq 0.

This proves the claim. It allows us to change the sum range for S1S_{1} to

𝒦a={Kn:wK0,ΘK+(a)1}={Kn:wK0,ΘK+(a)+ΘK(a+1)=1}.\mathcal{K}_{a}=\{K\vDash n\colon w_{K}\neq 0,\ \Theta_{K}^{+}(a)\leq 1\}=\{K\vDash n\colon w_{K}\neq 0,\ \Theta_{K}^{+}(a)+\Theta_{K}^{-}(a+1)=1\}.

Second, for K𝒦aK\in\mathcal{K}_{a}, we have NK=ΘK+(a+2)ΘK+(a)=DK1-N_{K}=\Theta_{K}^{+}(a+2)-\Theta_{K}^{+}(a)=D_{K}-1 and

S1=K𝒦aDK1DKwKeK.S_{1}=\sum_{K\in\mathcal{K}_{a}}\frac{D_{K}-1}{D_{K}}w_{K}e_{K}.

Thirdly, we claim that

(3.41) S1=K𝒦ak12k11wKeK.S_{1}=\sum_{K\in\mathcal{K}_{a}}\frac{k_{-1}-2}{k_{-1}-1}w_{K}e_{K}.

In fact, recall from Eq. 3.35 that DK=ml+11D_{K}=m_{l+1}-1 is a factor of wKw_{K}. By the definition Eq. 3.25 of ll, the part ml+1m_{l+1} is the part kjk_{j} of KK such that

\absk1kj1=σK(a+1),i.e.,\absk1kj=σK+(a+2).\abs{k_{1}\dotsm k_{j-1}}=\sigma_{K}^{-}(a+1),\quad\text{i.e.,}\quad\abs{k_{1}\dotsm k_{j}}=\sigma_{K}^{+}(a+2).

Since ΘK+(a)1\Theta_{K}^{+}(a)\leq 1, we find j2j\geq 2. For any K=k1ks𝒦aK=k_{1}\dotsm k_{s}\in\mathcal{K}_{a}, define H=φ(K)H=\varphi(K) to be the composition obtained from KK by moving the part kjk_{j} to the end, i.e., H=k1kj1kj+1kskjH=k_{1}\dotsm k_{j-1}k_{j+1}\dotsm k_{s}k_{j}. Then wH=wK0w_{H}=w_{K}\neq 0, eK=eHe_{K}=e_{H}, and

\absh1hj1=\absk1kj1=σK(a+1){a,a+1}.\abs{h_{1}\dotsm h_{j-1}}=\abs{k_{1}\dotsm k_{j-1}}=\sigma_{K}^{-}(a+1)\in\{a,\,a+1\}.

Thus ΘH+(a)1\Theta_{H}^{+}(a)\leq 1, and H𝒦aH\in\mathcal{K}_{a}. Since wH0w_{H}\neq 0, we find \absh1hj1=σH(a+1)\abs{h_{1}\dotsm h_{j-1}}=\sigma_{H}^{-}(a+1). Therefore, KK can be recovered from HH by moving the last part to the position immediately after hj1h_{j-1}. Hence φ\varphi is a bijection on 𝒦a\mathcal{K}_{a}, and

S1=K𝒦aDK1DKwKeK=H𝒦ah12h11wHeH.S_{1}=\sum_{K\in\mathcal{K}_{a}}\frac{D_{K}-1}{D_{K}}w_{K}e_{K}=\sum_{H\in\mathcal{K}_{a}}\frac{h_{-1}-2}{h_{-1}-1}w_{H}e_{H}.

This proves the claim. Without loss of generality, we may strengthen the condition H𝒦aH\in\mathcal{K}_{a} by requiring h13h_{-1}\geq 3.
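The map φ\varphi above simply moves one part of a composition to the end, and its inverse moves the last part back. The following sketch (ours, purely illustrative) implements this move-and-restore operation; in the proof the index jj is determined by Eq. 3.25, whereas here we treat jj as a parameter (0-indexed).

```python
# Illustrative sketch (not from the paper) of the map φ and its inverse.

def phi(K, j):
    """Move the part K[j] of the composition K to the end."""
    K = list(K)
    part = K.pop(j)
    return K + [part]

def phi_inverse(H, j):
    """Move the last part of H back to position j."""
    H = list(H)
    part = H.pop()
    return H[:j] + [part] + H[j:]

K = [2, 3, 1, 4]
H = phi(K, 1)                      # [2, 1, 4, 3]
assert phi_inverse(H, 1) == K      # the map is invertible
assert sorted(H) == sorted(K)      # same multiset of parts, so e_K = e_H
```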

Next, by the equivalence (3.40), the condition NK>0N_{K}>0 in S2S_{2} can be replaced with ΘK+(a)2\Theta_{K}^{+}(a)\geq 2. Under this new summation range for S2S_{2}, Lemma 2.8 gives NK=2N_{K}=2. Thus

(3.42) S2=Kn,ΘK+(a)22wKeK.S_{2}=\sum_{K\vDash n,\ \Theta_{K}^{+}(a)\geq 2}2w_{K}e_{K}.

Substituting Eqs. 3.41, 3.42 and 3.39 into Eq. 3.38, we obtain the desired formula. ∎

We remark that Theorem 3.11 reduces to Corollary 3.4 when n=a+2n=a+2.

Appendix A. A proof of Proposition 2.4 using the composition method

By Eqs. 2.7 and 2.5, we can deduce from Lemma 3.1 that

X~Cn\displaystyle\widetilde{X}_{C_{n}} =(1)nJnεJfp(J,n)ΛJ+InεIi1JIεJfp(J,I)ΛJ\displaystyle=(-1)^{n}\sum_{J\succeq n}\varepsilon^{J}f\!p(J,n)\Lambda^{J}+\sum_{I\vDash n}\varepsilon^{I}i_{1}\sum_{J\succeq I}\varepsilon^{J}f\!p(J,I)\Lambda^{J}
=Jn\brk4(1)(J)fp(J,n)+IJ(1)(I)+(J)i1fp(J,I)ΛJ.\displaystyle=\sum_{J\vDash n}\brk 4{(-1)^{\ell(J)}f\!p(J,n)+\sum_{I\preceq J}(-1)^{\ell(I)+\ell(J)}i_{1}\cdotp f\!p(J,I)}\Lambda^{J}.

Let J=j1jtnJ=j_{1}\dotsm j_{t}\vDash n. Then any composition IJI\preceq J of length ss can be written as

I=(jk1++jk21)(jk2++jk31)(jks++jt)I=(j_{k_{1}}+\dots+j_{k_{2}-1})(j_{k_{2}}+\dots+j_{k_{3}-1})\dotsm(j_{k_{s}}+\dots+j_{t})

for some indices k1<<ksk_{1}<\dots<k_{s}, where k1=1k_{1}=1 and kstk_{s}\leq t. Therefore,

[ΛJ]X~Cn\displaystyle[\Lambda^{J}]\widetilde{X}_{C_{n}} =(1)tj1+1=k1<<kst(1)t+s\absj1jk21j1jk2jks\displaystyle=(-1)^{t}j_{1}+\sum_{1=k_{1}<\dots<k_{s}\leq t}(-1)^{t+s}\abs{j_{1}\dotsm j_{k_{2}-1}}j_{1}j_{k_{2}}\dotsm j_{k_{s}}
=j1\brk3(1)t+1=k1<<kst(1)t+s(j1jk2jks+j2jk2jks++jk21jk2jks)\displaystyle=j_{1}\brk 3{(-1)^{t}+\sum_{1=k_{1}<\dots<k_{s}\leq t}(-1)^{t+s}(j_{1}j_{k_{2}}\dotsm j_{k_{s}}+j_{2}j_{k_{2}}\dotsm j_{k_{s}}+\dots+j_{k_{2}-1}j_{k_{2}}\dotsm j_{k_{s}})}
=j1\brk3(1)t+1h1<k2<<kst(1)t+sjh1jk2jks\displaystyle=j_{1}\brk 3{(-1)^{t}+\sum_{1\leq h_{1}<k_{2}<\dots<k_{s}\leq t}(-1)^{t+s}j_{h_{1}}j_{k_{2}}\dotsm j_{k_{s}}}
=j1(j11)(j21)(jt1)=(j11)wJ.\displaystyle=j_{1}(j_{1}-1)(j_{2}-1)\dotsm(j_{t}-1)=(j_{1}-1)w_{J}.

This proves Proposition 2.4.
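The key step in the display above is the inclusion–exclusion identity (1)t+1h1<k2<<kst(1)t+sjh1jk2jks=(j11)(jt1)(-1)^{t}+\sum_{1\leq h_{1}<k_{2}<\dots<k_{s}\leq t}(-1)^{t+s}j_{h_{1}}j_{k_{2}}\dotsm j_{k_{s}}=(j_{1}-1)\dotsm(j_{t}-1), i.e., the expansion of the product over subsets of the parts. The following brute-force sketch (ours, not part of the paper) verifies it on small compositions.

```python
# Brute-force check (not from the paper) of the subset-expansion identity
#   (-1)^t + Σ_{S ≠ ∅} (-1)^{t+|S|} Π_{i∈S} j_i  =  Π_i (j_i - 1).

from itertools import combinations
from math import prod

def bracket_sum(J):
    t = len(J)
    total = (-1) ** t                      # the empty-subset term
    for s in range(1, t + 1):
        for S in combinations(range(t), s):
            total += (-1) ** (t + s) * prod(J[i] for i in S)
    return total

def product_form(J):
    return prod(j - 1 for j in J)

J = [2, 3, 4]
print(bracket_sum(J), product_form(J))     # 6 6
```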

Acknowledgment

We are grateful to Jean-Yves Thibon, whose expectation of noncommutative analogs of chromatic symmetric functions for graphs beyond cycles encouraged us to embark on the search for neat formulas. We thank Foster Tom for pointing us to Ellzey’s paper and for presenting his signed combinatorial formula in an invited talk. We also thank the anonymous reviewer for sharing their understanding of the motivation and for the writing suggestions.

References

  • Aliniaeifard et al. [2024] F. Aliniaeifard, V. Wang, and S. van Willigenburg. The chromatic symmetric function of a graph centred at a vertex. Electron. J. Combin., 31(4):#P4.22, 2024.
  • Corneil et al. [2010] D. G. Corneil, S. Olariu, and L. Stewart. The LBFS structure and recognition of interval graphs. SIAM J. Discrete Math., 23(4):1905–1953, 2010.
  • Dahlberg and van Willigenburg [2018] S. Dahlberg and S. van Willigenburg. Lollipop and lariat symmetric functions. SIAM J. Discrete Math., 32(2):1029–1039, 2018.
  • Dahlberg and van Willigenburg [2020] S. Dahlberg and S. van Willigenburg. Chromatic symmetric functions in noncommuting variables revisited. Adv. in Appl. Math., 112:101942, 25, 2020.
  • Dahlberg et al. [2020] S. Dahlberg, A. Foley, and S. van Willigenburg. Resolving Stanley’s ee-positivity of claw-contractible-free graphs. J. European Math. Soc., 22(8):2673–2696, 2020.
  • Ellzey [2017] B. Ellzey. Chromatic quasisymmetric functions of directed graphs. Sém. Lothar. Combin., 78B:Art. 74, 12, 2017.
  • Faudree et al. [1997] R. Faudree, E. Flandrin, and Z. Ryjáček. Claw-free graphs — a survey. Discrete Math., 164(1):87–147, 1997. The second Kraków conference of graph theory.
  • Gasharov [1996] V. Gasharov. Incomparability graphs of (3+1)(3+1)-free posets are ss-positive. In Proceedings of the 6th Conference on Formal Power Series and Algebraic Combinatorics (New Brunswick, NJ, 1994), volume 157, pages 193–197, 1996.
  • Gasharov [1999] V. Gasharov. On Stanley’s chromatic symmetric function and clawfree graphs. Discrete Math., 205(1–3):229–234, 1999.
  • Gebhard and Sagan [2001] D. D. Gebhard and B. E. Sagan. A chromatic symmetric function in noncommuting variables. J. Alg. Combin., 13(3):227–255, 2001.
  • Gelfand et al. [1995] I. Gelfand, D. Krob, A. Lascoux, B. Leclerc, V. Retakh, and J. Thibon. Noncommutative symmetric functions. Adv. Math., 112(2):218–348, 1995.
  • Guay-Paquet [2013] M. Guay-Paquet. A modular relation for the chromatic symmetric functions of (3+1)(3+1)-free posets. arXiv:1306.2400, 2013.
  • Haiman [1993] M. Haiman. Hecke algebra characters and immanant conjectures. J. Amer. Math. Soc., 6(3):569–595, 1993.
  • Huh et al. [2020] J. Huh, S.-Y. Nam, and M. Yoo. Melting lollipop chromatic quasisymmetric functions and Schur expansion of unicellular LLT polynomials. Discrete Math., 343(3):111728, 2020.
  • Li et al. [2021] E. Y. H. Li, G. M. X. Li, D. G. L. Wang, and A. L. B. Yang. The twinning operation on graphs does not always preserve ee-positivity. Taiwanese J. Math., 25(6):1089–1111, 2021.
  • MacMahon [1915, 1916] P. A. MacMahon. Combinatory Analysis, volume I and II. Camb. Univ. Press, Cambridge, 1915, 1916.
  • Martin et al. [2008] J. L. Martin, M. Morin, and J. D. Wagner. On distinguishing trees by their chromatic symmetric functions. J. Combin. Theory Ser. A, 115(2):237–253, 2008.
  • Mendes and Remmel [2015] A. Mendes and J. Remmel. Counting with Symmetric Functions, volume 43 of Dev. Math. Springer, Cham, 2015.
  • Orellana and Scott [2014] R. Orellana and G. Scott. Graphs with equal chromatic symmetric functions. Discrete Math., 320:1–14, 2014.
  • Shareshian and Wachs [2010] J. Shareshian and M. L. Wachs. Eulerian quasisymmetric functions. Adv. Math., 225(6):2921–2966, 2010.
  • Shareshian and Wachs [2012] J. Shareshian and M. L. Wachs. Chromatic quasisymmetric functions and Hessenberg varieties. In Configuration spaces, volume 14 of CRM Series, pages 433–460. Ed. Norm., Pisa, 2012.
  • Shareshian and Wachs [2016] J. Shareshian and M. L. Wachs. Chromatic quasisymmetric functions. Adv. Math., 295:497–551, 2016.
  • Stanley [1995] R. P. Stanley. A symmetric function generalization of the chromatic polynomial of a graph. Adv. Math., 111(1):166–194, 1995.
  • Stanley [1998] R. P. Stanley. Graph colorings and related symmetric functions: ideas and applications: a description of results, interesting applications, & notable open problems. Discrete Math., 193(1-3):267–286, 1998.
  • Stanley [2011] R. P. Stanley. Enumerative Combinatorics, Vol. 1, volume 49 of Cambridge Stud. in Adv. Math. Camb. Univ. Press, Cambridge, 2nd edition, 2011. It is a thoroughly revised version of the 1st edition published in 1986.
  • Stanley [2023] R. P. Stanley. Enumerative Combinatorics, Vol. 2, volume 62 of Cambridge Stud. in Adv. Math. Camb. Univ. Press, Cambridge, 2nd edition, 2023. The main difference with the 1st ed. published in 1999 is 159 additional exercises and solutions on symmetric functions.
  • Stanley and Stembridge [1993] R. P. Stanley and J. R. Stembridge. On immanants of Jacobi-Trudi matrices and permutations with restricted position. J. Combin. Theory Ser. A, 62(2):261–279, 1993.
  • Thibon and Wang [2023] J.-Y. Thibon and D. G. L. Wang. A noncommutative approach to the Schur positivity of chromatic symmetric functions. arXiv:2305.07858, 2023.
  • Tom [2024] F. Tom. A signed ee-expansion of the chromatic symmetric function and some new ee-positive graphs. In Proceedings of the 36th Conference on Formal Power Series and Algebraic Combinatorics, Sém. Lothar. Combin., volume 91B, Article #48, 12 pp., Bochum, 2024.
  • Wang and Wang [2023a] D. G. L. Wang and M. M. Y. Wang. The ee-positivity and Schur positivity of some spiders and broom trees. Discrete Appl. Math., 325:226–240, 2023a.
  • Wang and Wang [2023b] D. G. L. Wang and M. M. Y. Wang. The ee-positivity of two classes of cycle-chord graphs. J. Alg. Combin., 57(2):495–514, 2023b.
  • Wolfe [1998] M. Wolfe. Symmetric chromatic functions. Pi Mu Epsilon J., 10(8):643–657, 1998.
  • Wolfgang III [1997] H. L. Wolfgang III. Two Interactions between Combinatorics and Representation Theory: Monomial Immanants and Hochschild Cohomology. PhD thesis, MIT, 1997.
  • Zheng [2022] K. Zheng. On the ee-positivity of trees and spiders. J. Combin. Theory Ser. A, 189:105608, 2022.