
Necessary conditions for the existence of Morita Contexts in the bicategory of Landau-Ginzburg Models

Yves Baudelaire Fomatati
Department of Mathematics and Statistics, University of Ottawa,
Ottawa, Ontario, Canada K1N 6N5.
[email protected].

Abstract

We use a matrix approach to study the concept of Morita context in the bicategory $\mathcal{LG}_{K}$ of Landau-Ginzburg models on a particular class of objects. In fact, we first use properties of matrix factorizations to state and prove two necessary conditions for the existence of a Morita context between two objects of $\mathcal{LG}_{K}$. Next, we use a celebrated result (due to Schur) on determinants of block matrices to show that these necessary conditions are not sufficient. Finally, we state a trivial sufficient condition.

Keywords. Matrix factorizations, tensor product, Morita equivalence.
Mathematics Subject Classification (2020). 16D90, 15A23, 15A69, 18N10.

In the sequel, $K$ is a commutative ring with unity and $R$ will denote the power series ring $K[[x_{1},\cdots,x_{n}]]$ or its subring of polynomials $K[x_{1},\cdots,x_{n}]$. It will always be clear which ring we are referring to.

1 Introduction

A Morita context, also called pre-equivalence data [4], is a generalization of Morita equivalence between categories of modules. Two rings $T$ and $S$ are called Morita equivalent if the categories of left $T$-modules ($T$-Mod) and of left $S$-modules ($S$-Mod) are equivalent. The prototype of Morita equivalent rings is provided by a ring $T$ and the ring of $n\times n$ matrices over $T$ (for details, see Corollary 22.6 of [2]).
It is evident that if two rings are isomorphic, they are Morita equivalent. However, the converse is not true in general: there exist rings that are not isomorphic, yet are Morita equivalent (cf. p. 470 of [22]); in fact, it suffices to take a ring $T$ and the ring of $n\times n$ matrices over $T$. There is, however, a partial converse. Indeed, if two rings are Morita equivalent, then their centers are isomorphic; in particular, if the rings are commutative, then they are isomorphic ([22], p. 494). Because of this result, Morita equivalence is interesting only for noncommutative rings. For more details on Morita equivalence, see [2]. This notion can be generalized using the notion of Morita contexts.
Morita contexts were first introduced in the bicategory of unitary rings and bimodules as $6$-tuples $(A,B,M,N,\phi,\psi)$, where $A$ and $B$ are rings, $M$ is an $A$-$B$-bimodule, $N$ is a $B$-$A$-bimodule, and $\phi:M\otimes N\rightarrow A$ and $\psi:N\otimes M\rightarrow B$ are homomorphisms satisfying $\phi\otimes M=M\otimes\psi$ and $N\otimes\phi=\psi\otimes N$. For a characterization of Morita contexts in this bicategory, see Theorem 5.4 of [25]. Bass (cf. chap. 2, section 4.4 of [3]) proved that the Morita context $(A,B,M,N,\phi,\psi)$ is a Morita equivalence if and only if $M$ is both projective and a generator in the category of $A$-modules. Recall (cf. [22]) that if $A$ is a ring, an $A$-module $P$ is projective if for every surjective $A$-linear map $f:M\rightarrow N$ and every $A$-linear map $g:P\rightarrow N$ there is an $A$-linear map $h:P\rightarrow M$ such that $g=fh$.
Morita contexts were recently studied in many bicategories [25], but they have not yet been studied in the bicategory $\mathcal{LG}_{K}$ of Landau-Ginzburg models (section 2.2 of [6]) over a commutative ring $K$, whose intricate construction ([8]) is reminiscent of, but more complex than, that of the bicategory of associative algebras and bimodules. In this paper, we study the concept of Morita context in the bicategory of Landau-Ginzburg models. A Landau-Ginzburg model is a model in solid state physics for superconductivity. $\mathcal{LG}_{K}$ possesses adjoints (also called duals, cf. [8]), and this helps in explaining a certain duality that exists in the setting of Landau-Ginzburg models in terms of some specified relations (cf. page 1 of [8]). The objects of $\mathcal{LG}_{K}$ are potentials, which are polynomials satisfying some conditions (Definition 2.4, p. 8 of [8]). But unlike [8], we do not restrict ourselves to potentials. The authors of [8] used potentials to suit their purposes: even if we take the objects of $\mathcal{LG}_{K}$ to be polynomials rather than potentials and then apply the construction of $\mathcal{LG}_{K}$ given in [8], we obtain virtually the same bicategory, except that it now has more objects. Thus, in this paper, the objects of $\mathcal{LG}_{K}$ are simply polynomials.
There are many reasons for studying the notion of Morita context. The first is that it generalizes the very important notion of Morita equivalence. Another is that it is used to prove some celebrated results. For example, the Morita context introduced in [23] has since been used to prove the Wedderburn theorem on the structure of simple rings [3]. Morita contexts were also used in [1] to obtain various results: Goldie's theorem ([16], [17]) on the ring of quotients of semi-prime rings and, as a specialization, Wedderburn's structure theorems for semi-simple Artinian rings. Other applications, though sometimes not stated in an explicit form, can be found in various places (e.g. [18], p. 75).
In this paper, we use properties of matrix factorizations to give necessary conditions for the existence of a Morita context between two objects of the bicategory $\mathcal{LG}_{K}$. Thus, our first main result is the following theorem.

Theorem A.
Let

  • $(R,f)$ and $(S,g)$ be two objects of $\mathcal{LG}_{K}$.

  • $X\in\mathcal{LG}_{K}((R,f),(S,g))=hmf(R\otimes_{K}S,g-f)$, i.e., $X:(R,f)\rightarrow(S,g)$ is a finite rank matrix factorization of $g-f$.

  • $Y\in\mathcal{LG}_{K}((S,g),(R,f))=hmf(S\otimes_{K}R,f-g)$, i.e., $Y:(S,g)\rightarrow(R,f)$ is a finite rank matrix factorization of $f-g$,
    such that $X\otimes Y$ and $Y\otimes X$ are finite rank matrix factorizations.

  • $\Delta_{f}\in\mathcal{LG}_{K}((R,f),(R,f))=hmf(R\otimes_{K}R,f\otimes id-id\otimes f)$, i.e., $\Delta_{f}:(R,f)\rightarrow(R,f)$ is a finite rank matrix factorization of $f\otimes id-id\otimes f$.

  • $\Delta_{g}\in\mathcal{LG}_{K}((S,g),(S,g))=hmf(S\otimes_{K}S,g\otimes id-id\otimes g)$, i.e., $\Delta_{g}:(S,g)\rightarrow(S,g)$ is a finite rank matrix factorization of $g\otimes id-id\otimes g$.

  • $\eta:X\otimes_{R}Y\rightarrow 1_{(R,f)}=\Delta_{f}$, and let $(P,Q)$ and $(R,T)$ be pairs of matrices representing respectively the finite rank matrix factorizations $X\otimes Y$ and $\Delta_{f}$.

  • $\rho:Y\otimes_{S}X\rightarrow 1_{(S,g)}=\Delta_{g}$, and let $(P^{\prime},Q^{\prime})$ and $(R^{\prime},T^{\prime})$ be pairs of matrices representing respectively the finite rank matrix factorizations $Y\otimes X$ and $\Delta_{g}$.

Then: a necessary condition for $\Gamma=(X,Y,\eta,\rho)$ to be a Morita context is

$$\begin{cases}\eta^{1}=0\ \text{and}\ \eta^{0}Q=0,\\ \rho^{1}=0\ \text{and}\ \rho^{0}Q^{\prime}=0,\end{cases}$$

where, for ease of notation, we write $\eta^{i}$ and $\rho^{i}$ for the matrices of $\eta^{i}$ and $\rho^{i}$, $i=0,1$.

Next, thanks to a celebrated result (due to Schur [26], [27]) on determinants of block matrices, we observe that if $X$ and $X^{\prime}$ are two matrix factorizations of two arbitrary polynomials, then the determinants of the four matrices appearing in the Yoshino tensor products $X\widehat{\otimes}X^{\prime}=(P,Q)$ and $X^{\prime}\widehat{\otimes}X=(P^{\prime},Q^{\prime})$ are all equal.
Moreover, when we translate this into $\mathcal{LG}_{K}$, where a $1$-morphism is a matrix factorization of the difference of two polynomials, we find that those four determinants are all equal to zero. So our second main result is stated as follows:

Theorem B.
Let $(R,f)$ and $(S,g)$ be two objects in $\mathcal{LG}_{K}$ and let

$$X:(R,f)\rightarrow(S,g)\quad\text{and}\quad X^{\prime}:(S,g)\rightarrow(R,f)$$

be $1$-morphisms in $\mathcal{LG}_{K}$. If $X\otimes X^{\prime}=(P,Q)$ and $X^{\prime}\otimes X=(P^{\prime},Q^{\prime})$, then

$$det(P)=det(Q)=det(P^{\prime})=det(Q^{\prime})=0.$$

Thanks to this result we conclude that the necessary conditions earlier stated are not sufficient.
This paper is organized as follows: in the next section, we review the notion of matrix factorization. In section 3, we recall properties of matrix factorizations. Section 4 recalls the definition of the bicategory of Landau-Ginzburg models. The notion of Morita contexts in $\mathcal{LG}_{K}$ is discussed in section 5. Finally, we discuss further problems in the last section.

2 Matrix Factorizations

In this section, we first recall the definition of a matrix factorization and describe the category of matrix factorizations of a power series $f$. Next, the definition of Yoshino's tensor product is recalled.

In 1980, Eisenbud came up with an approach to factoring both reducible and irreducible polynomials in $R$ using matrices. For instance, the polynomial $f=x^{2}+y^{2}$ is irreducible over the real numbers but can be factorized as follows:

$$\begin{bmatrix}x&-y\\ y&x\end{bmatrix}\begin{bmatrix}x&y\\ -y&x\end{bmatrix}=(x^{2}+y^{2})\begin{bmatrix}1&0\\ 0&1\end{bmatrix}=fI_{2}$$

We say that $\left(\begin{bmatrix}x&-y\\ y&x\end{bmatrix},\begin{bmatrix}x&y\\ -y&x\end{bmatrix}\right)$ is a $2\times 2$ matrix factorization of $f$.

Definition 2.1.

[29], [10]
An $n\times n$ matrix factorization of a power series $f\in R$ is a pair of $n\times n$ matrices $(P,Q)$ such that $PQ=fI_{n}$, where $I_{n}$ is the $n\times n$ identity matrix and the coefficients of $P$ and of $Q$ are taken from $R$.

When $n=1$, we get a $1\times 1$ matrix factorization of $f$, i.e., $f=[g][h]$, which is simply a factorization of $f$ in the classical sense. But when $f$ is irreducible, this is not interesting, which is why we will mostly consider $n>1$.
The original definition of a matrix factorization was given by Eisenbud [13] as follows: a matrix factorization of an element $x$ in a ring $A$ (with unity) is an ordered pair of maps of free $A$-modules $\phi:F\rightarrow G$ and $\psi:G\rightarrow F$ such that $\phi\psi=x\cdot 1_{G}$ and $\psi\phi=x\cdot 1_{F}$. Though this definition is valid for any arbitrary ring (with unity), in order to effectively study matrix factorizations, it is important to restrict oneself to specific rings. Working with specific rings makes it possible to easily give examples, and it also allows one to carry out computations in a well-defined framework. Yoshino [29] restricted himself to matrix factorizations of power series. In this section, we restrict ourselves to matrix factorizations of a polynomial.

Example 2.1.

Let $h=xy+xz^{2}+yz^{2}$.
We give a $2\times 2$ matrix factorization of $h$:

$$\begin{bmatrix}z^{2}&y\\ x&-x-y\end{bmatrix}\begin{bmatrix}x+y&y\\ x&-z^{2}\end{bmatrix}=(xy+xz^{2}+yz^{2})\begin{bmatrix}1&0\\ 0&1\end{bmatrix}=hI_{2}$$

Thus,

$$\left(\begin{bmatrix}z^{2}&y\\ x&-x-y\end{bmatrix},\begin{bmatrix}x+y&y\\ x&-z^{2}\end{bmatrix}\right)$$

is a $2\times 2$ matrix factorization of $h$.
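As a quick sanity check, here is a minimal computational sketch (not part of the original argument; it only assumes SymPy) verifying that the pair above is indeed a matrix factorization of $h$:

```python
# Minimal SymPy sketch verifying the 2x2 matrix factorization of
# h = xy + xz^2 + yz^2 from Example 2.1.
import sympy as sp

x, y, z = sp.symbols('x y z')
h = x*y + x*z**2 + y*z**2

P = sp.Matrix([[z**2, y], [x, -x - y]])
Q = sp.Matrix([[x + y, y], [x, -z**2]])

# P*Q should equal h times the 2x2 identity matrix.
assert sp.expand(P*Q - h*sp.eye(2)) == sp.zeros(2, 2)
print(sp.expand(P*Q))  # displays h*I_2
```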

We now propose a simple, straightforward algorithm to obtain an $n\times n$ matrix factorization from one that is $m\times m$, where $m<n$.

Simple straightforward algorithm: Let $(P,Q)$ be an $m\times m$ matrix factorization of a power series $f$. Suppose we want an $n\times n$ matrix factorization of $f$, where $n>m$.
Let $(i,j)$ stand for the entry in the $i^{th}$ row and $j^{th}$ column.

  • Turn $P$ and $Q$ into $n\times n$ matrices by filling them with zeroes everywhere except at entries $(i,j)$ where $1\leq i\leq m$ and $1\leq j\leq m$.
    Then for all entries $(k,k)$ with $k>m$, either:

    1. In $P$, replace the diagonal elements (which are zeroes) with $f$,
       and

    2. In $Q$, replace the diagonal elements (which are zeroes) with $1$;

    or:
    interchange the roles of $P$ and $Q$ in steps 1 and 2 above.

It is evident that this simple algorithm works not only for polynomials but also for any element in a unital ring.
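The following short sketch (my own illustration, assuming SymPy; the helper name `pad_factorization` is not from the original text) implements this padding procedure and checks it on the $2\times 2$ factorization of $x^{2}+y^{2}$ from the beginning of this section:

```python
# Sketch of the simple padding algorithm: extend an m x m matrix
# factorization (P, Q) of f to an n x n one by putting f on the new
# diagonal entries of P and 1 on the matching entries of Q.
import sympy as sp

def pad_factorization(P, Q, f, n):
    m = P.shape[0]
    assert n > m and sp.expand(P*Q - f*sp.eye(m)) == sp.zeros(m, m)
    P_big, Q_big = sp.zeros(n, n), sp.zeros(n, n)
    P_big[:m, :m] = P
    Q_big[:m, :m] = Q
    for k in range(m, n):
        P_big[k, k] = f   # new diagonal entries of P are f ...
        Q_big[k, k] = 1   # ... and the corresponding entries of Q are 1
    return P_big, Q_big

x, y = sp.symbols('x y')
f = x**2 + y**2
P = sp.Matrix([[x, -y], [y, x]])
Q = sp.Matrix([[x, y], [-y, x]])
P3, Q3 = pad_factorization(P, Q, f, 3)
assert sp.expand(P3*Q3 - f*sp.eye(3)) == sp.zeros(3, 3)
```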

The standard algorithm to factor polynomials using matrices is found in [10] and for an improved version see [14] and [15].

2.1 The category of matrix factorizations of fRf\in R

The category of matrix factorizations of a power series $f\in R=K[[x]]:=K[[x_{1},\cdots,x_{n}]]$, denoted by $MF(R,f)$ or $MF_{R}(f)$ (or even $MF(f)$ when there is no risk of confusion), is defined [29] as follows:
$\bullet$ The objects are the matrix factorizations of $f$.
$\bullet$ Given two matrix factorizations of $f$, $(\phi_{1},\psi_{1})$ and $(\phi_{2},\psi_{2})$, respectively of sizes $n_{1}$ and $n_{2}$, a morphism from $(\phi_{1},\psi_{1})$ to $(\phi_{2},\psi_{2})$ is a pair of matrices $(\alpha,\beta)$, each of size $n_{2}\times n_{1}$, which makes the following diagram commute:

$$\begin{array}{ccccc}K[[x]]^{n_{1}}&\xrightarrow{\;\psi_{1}\;}&K[[x]]^{n_{1}}&\xrightarrow{\;\phi_{1}\;}&K[[x]]^{n_{1}}\\ \downarrow{\scriptstyle\alpha}&&\downarrow{\scriptstyle\beta}&&\downarrow{\scriptstyle\alpha}\\ K[[x]]^{n_{2}}&\xrightarrow{\;\psi_{2}\;}&K[[x]]^{n_{2}}&\xrightarrow{\;\phi_{2}\;}&K[[x]]^{n_{2}}\end{array}\qquad(\bigstar)$$

That is,

$$\begin{cases}\alpha\phi_{1}=\phi_{2}\beta\\ \psi_{2}\alpha=\beta\psi_{1}\end{cases}$$

For a detailed definition of this category, see [14].

2.2 Yoshino’s Tensor Product of Matrix Factorizations and its variants

In this subsection, we recall the definition of the tensor product of matrix factorizations, denoted $\widehat{\otimes}$, constructed by Yoshino [29] using matrices. This will be useful when we describe the notion of Morita context in $\mathcal{LG}_{K}$ (cf. section 5). In the sequel, except where otherwise stated, $S=K[[y_{1},\cdots,y_{r}]]=K[[y]]$, where $K$ is a commutative ring with unity and $r\geq 1$.

Definition 2.2.

[29] Let $X=(\phi,\psi)$ be an $n\times n$ matrix factorization of a power series $f\in R$ and $X^{\prime}=(\phi^{\prime},\psi^{\prime})$ an $m\times m$ matrix factorization of $g\in S$. These matrices can be considered as matrices over $L=K[[x,y]]$ and the tensor product $X\widehat{\otimes}X^{\prime}$ is given by
$$\left(\begin{bmatrix}\phi\otimes 1_{m}&1_{n}\otimes\phi^{\prime}\\ -1_{n}\otimes\psi^{\prime}&\psi\otimes 1_{m}\end{bmatrix},\ \begin{bmatrix}\psi\otimes 1_{m}&-1_{n}\otimes\phi^{\prime}\\ 1_{n}\otimes\psi^{\prime}&\phi\otimes 1_{m}\end{bmatrix}\right)$$
where each component is an endomorphism on $L^{n}\otimes L^{m}$.

It is easy to see [29] that $X\widehat{\otimes}X^{\prime}$ is a matrix factorization of $f+g$ of size $2nm$.
In the following example, we consider matrix factorizations in one variable.

Example 2.2.

Let $X=(x^{2},x^{2})$ and $X^{\prime}=(y^{2},y^{4})$ be $1\times 1$ matrix factorizations of $f=x^{4}$ and $g=y^{6}$ respectively. Then
$$X\widehat{\otimes}X^{\prime}=\left(\begin{bmatrix}x^{2}\otimes 1&1\otimes y^{2}\\ -1\otimes y^{4}&x^{2}\otimes 1\end{bmatrix},\begin{bmatrix}x^{2}\otimes 1&-1\otimes y^{2}\\ 1\otimes y^{4}&x^{2}\otimes 1\end{bmatrix}\right)=\left(\begin{bmatrix}x^{2}&y^{2}\\ -y^{4}&x^{2}\end{bmatrix},\begin{bmatrix}x^{2}&-y^{2}\\ y^{4}&x^{2}\end{bmatrix}\right)$$
And

$$\begin{bmatrix}x^{2}&y^{2}\\ -y^{4}&x^{2}\end{bmatrix}\begin{bmatrix}x^{2}&-y^{2}\\ y^{4}&x^{2}\end{bmatrix}=(x^{4}+y^{6})\begin{bmatrix}1&0\\ 0&1\end{bmatrix}$$

This shows that $X\widehat{\otimes}X^{\prime}$ is a matrix factorization of $f+g=x^{4}+y^{6}$ and it is of size $2(1)(1)=2$.
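A small computational sketch (assumptions: SymPy; the helper names `kron` and `yoshino_tensor` are mine, not from [29]) that builds $X\widehat{\otimes}X^{\prime}$ exactly as in Definition 2.2 and re-checks this example:

```python
# Sketch: Yoshino's tensor product of matrix factorizations (Definition 2.2),
# checked on Example 2.2.
import sympy as sp

def kron(A, B):
    # Kronecker product of two explicit SymPy matrices.
    n, m = A.shape
    p, q = B.shape
    K = sp.zeros(n*p, m*q)
    for i in range(n):
        for j in range(m):
            K[i*p:(i+1)*p, j*q:(j+1)*q] = A[i, j]*B
    return K

def yoshino_tensor(phi, psi, phi2, psi2):
    n, m = phi.shape[0], phi2.shape[0]
    In, Im = sp.eye(n), sp.eye(m)
    A = kron(phi, Im).row_join(kron(In, phi2)).col_join(
        (-kron(In, psi2)).row_join(kron(psi, Im)))
    B = kron(psi, Im).row_join(-kron(In, phi2)).col_join(
        kron(In, psi2).row_join(kron(phi, Im)))
    return A, B

x, y = sp.symbols('x y')
phi, psi = sp.Matrix([[x**2]]), sp.Matrix([[x**2]])    # X  = (x^2, x^2), f = x^4
phi2, psi2 = sp.Matrix([[y**2]]), sp.Matrix([[y**4]])  # X' = (y^2, y^4), g = y^6
A, B = yoshino_tensor(phi, psi, phi2, psi2)
assert sp.expand(A*B - (x**4 + y**6)*sp.eye(2)) == sp.zeros(2, 2)
```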

In the next example, we consider matrix factorizations in two variables.

Example 2.3.

Let

$$X=\left(\begin{bmatrix}x&-y\\ y&x\end{bmatrix},\begin{bmatrix}x&y\\ -y&x\end{bmatrix}\right),\quad X^{\prime}=\left(\begin{bmatrix}-x&y\\ -y&-x\end{bmatrix},\begin{bmatrix}x&y\\ -y&x\end{bmatrix}\right)$$
be $2\times 2$ matrix factorizations of $f=x^{2}+y^{2}$ and $g=-x^{2}-y^{2}$ respectively. Let

$$A=\begin{bmatrix}x&-y\\ y&x\end{bmatrix}\otimes\begin{bmatrix}1&0\\ 0&1\end{bmatrix},\quad B=\begin{bmatrix}1&0\\ 0&1\end{bmatrix}\otimes\begin{bmatrix}-x&y\\ -y&-x\end{bmatrix},\quad C=\begin{bmatrix}-1&0\\ 0&-1\end{bmatrix}\otimes\begin{bmatrix}x&y\\ -y&x\end{bmatrix},$$

$$D=\begin{bmatrix}x&y\\ -y&x\end{bmatrix}\otimes\begin{bmatrix}1&0\\ 0&1\end{bmatrix},\quad A^{\prime}=\begin{bmatrix}x&y\\ -y&x\end{bmatrix}\otimes\begin{bmatrix}1&0\\ 0&1\end{bmatrix},\quad B^{\prime}=\begin{bmatrix}-1&0\\ 0&-1\end{bmatrix}\otimes\begin{bmatrix}-x&y\\ -y&-x\end{bmatrix},$$

$$C^{\prime}=\begin{bmatrix}1&0\\ 0&1\end{bmatrix}\otimes\begin{bmatrix}x&y\\ -y&x\end{bmatrix},\quad D^{\prime}=\begin{bmatrix}x&-y\\ y&x\end{bmatrix}\otimes\begin{bmatrix}1&0\\ 0&1\end{bmatrix}.$$

Then
$$X\widehat{\otimes}X^{\prime}=\left(\begin{bmatrix}A&B\\ C&D\end{bmatrix},\begin{bmatrix}A^{\prime}&B^{\prime}\\ C^{\prime}&D^{\prime}\end{bmatrix}\right)$$

$$=\left(\left(\begin{array}{cccccccc}x&0&-y&0&-x&y&0&0\\ 0&x&0&-y&-y&-x&0&0\\ y&0&x&0&0&0&-x&y\\ 0&y&0&x&0&0&-y&-x\\ -x&-y&0&0&x&0&y&0\\ y&-x&0&0&0&x&0&y\\ 0&0&-x&-y&-y&0&x&0\\ 0&0&y&-x&0&-y&0&x\end{array}\right),\left(\begin{array}{cccccccc}x&0&y&0&x&-y&0&0\\ 0&x&0&y&y&x&0&0\\ -y&0&x&0&0&0&x&-y\\ 0&-y&0&x&0&0&y&x\\ x&y&0&0&x&0&-y&0\\ -y&x&0&0&0&x&0&-y\\ 0&0&x&y&y&0&x&0\\ 0&0&-y&x&0&y&0&x\end{array}\right)\right)$$

If we call these two $8\times 8$ matrices $P$ and $Q$ respectively, then $PQ=0\cdot I_{8}=0$, where $I_{8}$ is the identity matrix of size $8$ and the last zero is the zero matrix of size $8$. Hence, $X\widehat{\otimes}X^{\prime}$ is a matrix factorization of $f+g=x^{2}+y^{2}-x^{2}-y^{2}=0$ of size $2(2)(2)=8$.
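One can re-check this example with the `yoshino_tensor` sketch given after Example 2.2 (again, these helper names are my own, not from the original text):

```python
# Sketch: rebuild Example 2.3 and verify that the product of the two
# 8x8 matrices is the zero matrix.
# (Assumes the kron/yoshino_tensor helpers from the sketch after Example 2.2.)
import sympy as sp

x, y = sp.symbols('x y')
phi = sp.Matrix([[x, -y], [y, x]])     # X,  a factorization of  x^2 + y^2
psi = sp.Matrix([[x, y], [-y, x]])
phi2 = sp.Matrix([[-x, y], [-y, -x]])  # X', a factorization of -x^2 - y^2
psi2 = sp.Matrix([[x, y], [-y, x]])

P, Q = yoshino_tensor(phi, psi, phi2, psi2)
assert P.shape == (8, 8)
assert sp.expand(P*Q) == sp.zeros(8, 8)  # factorization of f + g = 0
```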

Yoshino's tensor product has three mutually distinct functorial variants, as can be seen in [15].

3 Properties of Matrix Factorizations

We will now state and prove some properties of matrix factorizations of polynomials. Some of them will be used when studying the notion of Morita context between two objects of the bicategory $\mathcal{LG}_{K}$.
In this section, except where otherwise stated, $R=K[x_{1},\cdots,x_{n}]=K[x]$. All statements and proofs in this section are taken from [10] except for Proposition 3.2. They are reproduced here (sometimes with slight modifications) for the sake of completeness.

Lemma 3.1.

If $PQ=fI_{n}$, then $det(P)$ divides $f^{n}$. If, in addition, $f$ is irreducible in $R$, then $det(P)$ is a power of $f$.

Corollary 3.1.

If $0\neq f\in R$ and $PQ=fI_{n}$, then $P$ is invertible over the field of fractions $\mathcal{F}$ of $R$.

Since $P$ is invertible over $\mathcal{F}$, the unique $Q$ such that $PQ=fI_{n}$ is $Q=P^{-1}fI_{n}=\frac{f}{det(P)}adj(P)$, where $adj(P)$ is the adjoint of the matrix $P$.
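For instance (a check against the factorization of $x^{2}+y^{2}$ recalled at the beginning of Section 2): for $P=\begin{bmatrix}x&-y\\ y&x\end{bmatrix}$ we have $det(P)=x^{2}+y^{2}=f$, so $Q=\frac{f}{det(P)}adj(P)=adj(P)=\begin{bmatrix}x&y\\ -y&x\end{bmatrix}$, which is exactly the second matrix of that factorization.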

Now, $adj(P)$ is a matrix over $R=K[x_{1},\cdots,x_{n}]$. So, if $det(P)$ divides $f$, then $Q$ will also be a matrix over $R$. However, it is possible for $Q$ not to have entries in $R$, in which case $P$ does not appear in any matrix factorization of $f$ over $R$. We now prove that matrices appearing in a matrix factorization commute with each other.

Proposition 3.1.

If $0\neq f\in R$ and $P,Q$ are $n\times n$ matrices such that $PQ=fI_{n}$, then $QP=fI_{n}$.

Corollary 3.2.

If $0\neq f\in R$ and $PQ=fI_{n}$, then $P$ and $Q$ are invertible over the field of fractions $\mathcal{F}$ of $R$.

Remark 3.1.

It is important to note that Corollary 3.1 and Corollary 3.2 actually say that matrices appearing in a matrix factorization of a nonzero polynomial are invertible over the field of fractions $\mathcal{F}$ of $R$. This result will soon be very useful when describing the notion of Morita context in $\mathcal{LG}_{K}$.

We state and prove another property of matrix factorizations, thanks to which we can conclude that an $n\times n$ matrix factorization of a polynomial $f$ is not unique.

Proposition 3.2.

If $0\neq f\in R$ and $P,Q$ are $n\times n$ matrices such that $PQ=fI_{n}$, then $Q^{t}P^{t}=fI_{n}$, where $Q^{t}$ (respectively $P^{t}$) stands for the transpose of $Q$ (respectively $P$).

Proof.

Assume $0\neq f\in R$ and $P,Q$ are $n\times n$ matrices such that $PQ=fI_{n}$. Then:

$$PQ=fI_{n}\Rightarrow(PQ)^{t}=(fI_{n})^{t}\Rightarrow Q^{t}P^{t}=fI_{n},\quad\text{since }(f)^{t}=f\text{ viewed as a }1\times 1\text{ matrix.}$$

Before we proceed, it is worthwhile to state some well-known facts from the literature; see the notes on tensor products by Conrad (for points 1 and 2 below, see Theorem 4.9 and Example 4.11 of [9]; point 3 simply generalizes point 2).

Lemma 3.2.
  1. Let $K[x_{1},x_{2},\cdots,x_{n}]$ and $K[x^{\prime}_{1},x^{\prime}_{2},\cdots,x^{\prime}_{m}]$ be free $K$-modules with respective bases $\{e_{i}\}^{n}_{i=1}$ and $\{e^{\prime}_{j}\}^{m}_{j=1}$. Then $\{e_{i}\otimes e^{\prime}_{j}\}_{i=1,\cdots,n;\,j=1,\cdots,m}$ is a basis of $K[x_{1},x_{2},\cdots,x_{n}]\otimes K[x^{\prime}_{1},x^{\prime}_{2},\cdots,x^{\prime}_{m}]$.

  2. The $K$-modules $K[x_{1},x_{2},\cdots,x_{n}]\otimes K[x^{\prime}_{1},x^{\prime}_{2},\cdots,x^{\prime}_{m}]$ and $K[x_{1},x_{2},\cdots,x_{n},x^{\prime}_{1},x^{\prime}_{2},\cdots,x^{\prime}_{m}]$ are isomorphic as $K$-modules.

  3. Let $x$ stand for $x_{1},x_{2},\cdots,x_{n}$ and $x^{(l)}$ for $x^{(l)}_{1},x^{(l)}_{2},\cdots,x^{(l)}_{n_{l}}$, where $n,l,n_{l}\in\mathbb{N}$, $n=n_{0}$, and $x^{(l)}$ means $x$ with $l$ primes, e.g. $x^{(2)}=x^{\prime\prime}$, $x^{(0)}:=x$.
     Then, more generally, we have:

     $$K[x,x^{(1)},\cdots,x^{(l)}]\cong\bigotimes_{p=0,\cdots,l}K[x^{(p)}].$$

4 The bicategory 𝒢K\mathcal{LG}_{K} of Landau-Ginzburg models

We quickly recall the construction of the bicategory $\mathcal{LG}_{K}$ of Landau-Ginzburg models. In definition 5.5 of [14], a $B$-category is defined to be a structure satisfying all the requirements of a bicategory except possibly the unit-morphism requirement. In order to easily recall the construction of $\mathcal{LG}_{K}$, we first construct a $B$-category which we call $B$-$Fac$ and then proceed to the unit construction. The objects of $B$-$Fac$ are polynomials $f$, denoted by pairs $(R,f)$ where $f\in R=K[x]$. Here we do not impose restrictions on the objects of our bicategory as was originally done in [8], where the authors instead consider potentials (Definition 2.4, p. 8 of [8]). This generalization at the level of objects does not pose a problem in the construction of $\mathcal{LG}_{K}$. The end result is simply a bicategory with more objects than the original $\mathcal{LG}_{K}$ defined in [8]; we still call it $\mathcal{LG}_{K}$.
We first recall the notion of linear factorization, which is an ingredient in the construction of $\mathcal{LG}_{K}$.

Definition 4.1.

(p.8 of [8]) Linear factorization
A linear factorization of $f\in R=K[x]$ is a $\mathbb{Z}_{2}$-graded $R$-module $X=X^{0}\oplus X^{1}$ together with an odd (i.e., grade reversing) $R$-linear endomorphism $d:X\longrightarrow X$ such that $d^{2}=f\cdot id_{X}$.
Here $f\cdot id_{X}$ stands for the endomorphism $x\mapsto f\cdot x$, for all $x\in X$.

Since we are dealing with a $\mathbb{Z}_{2}$-grading and $d$ is odd, we can also say that $d$ is a degree one map. $d$ is called a twisted differential in [7]. $d$ is actually a pair of maps $(d^{0},d^{1})$ that we may depict as follows:

$$X^{0}\xrightarrow{\;d^{0}\;}X^{1}\xrightarrow{\;d^{1}\;}X^{0}$$

and the stated condition on them is: $d^{0}\circ d^{1}=f\cdot id_{X^{1}}$ and $d^{1}\circ d^{0}=f\cdot id_{X^{0}}$.
If $X$ is a free $R$-module, then the pair $(X,d)$ is called a matrix factorization, and we often refer to it by $X$ without explicitly mentioning the differential $d$.

Remark 4.1.

If $M_{0}$ and $M_{1}$ are respectively the matrices of the $R$-linear endomorphisms $d^{0}$ and $d^{1}$, then the pair $(M_{0},M_{1})$ would be a matrix factorization of $f$ according to Definition 2.1.

Definition 4.2.

(p.9 [8]) Morphism of linear factorizations
A morphism of linear factorizations $(X,d_{X})$ and $(Y,d_{Y})$ is an even (i.e., grade preserving) $R$-linear map $\phi:X\longrightarrow Y$ such that $d_{Y}\phi=\phi d_{X}$.

Concretely (see page 19 of [21]), $\phi$ is a pair of maps $\phi^{0}:X^{0}\rightarrow Y^{0}$ and $\phi^{1}:X^{1}\rightarrow Y^{1}$ such that the following diagram commutes:

$$\begin{array}{ccccc}X^{0}&\xrightarrow{\;d^{0}_{X}\;}&X^{1}&\xrightarrow{\;d^{1}_{X}\;}&X^{0}\\ \downarrow{\scriptstyle\phi^{0}}&&\downarrow{\scriptstyle\phi^{1}}&&\downarrow{\scriptstyle\phi^{0}}\\ Y^{0}&\xrightarrow{\;d^{0}_{Y}\;}&Y^{1}&\xrightarrow{\;d^{1}_{Y}\;}&Y^{0}\end{array}$$
Definition 4.3.

(p.9 [8]) homotopic linear factorizations
Let $(X,d_{X})$ and $(Y,d_{Y})$ be linear factorizations. Two morphisms $\varphi,\psi:X\longrightarrow Y$ are homotopic if there exists an odd $R$-linear map $\lambda:X\longrightarrow Y$ such that $d_{Y}\lambda+\lambda d_{X}=\psi-\varphi$.
More precisely, the following diagram commutes:

$$\begin{array}{ccccc}X_{1}&\xrightarrow{\;d_{X}\;}&X_{0}&\xrightarrow{\;d_{X}\;}&X_{1}\\ \downarrow{\scriptstyle\psi-\phi}&\swarrow{\scriptstyle\lambda_{0}}&\downarrow{\scriptstyle\psi-\phi}&\swarrow{\scriptstyle\lambda_{1}}&\downarrow{\scriptstyle\psi-\phi}\\ Y_{1}&\xrightarrow{\;d_{Y}\;}&Y_{0}&\xrightarrow{\;d_{Y}\;}&Y_{1}\end{array}\qquad({\dagger})$$

i.e.,

$$d_{Y}\circ\lambda_{0}+\lambda_{1}\circ d_{X}=\psi-\phi$$
Notations 4.1.

We keep the following notations used in [8]:
$\bullet$ We denote by $HF(R,f)$ the category of linear factorizations of $f\in R$ modulo homotopy.
$\bullet$ We also denote by $HMF(R,f)$ its full subcategory of matrix factorizations.
$\bullet$ Furthermore, we write $hmf(R,f)$ for the full subcategory of finite rank matrix factorizations, viz. the matrix factorizations whose underlying $R$-module is free of finite rank.

Remark 4.2.

(p.9 of [8])
$HMF(R,f)$ is idempotent complete ([5], [24]). As stated earlier, we work with polynomials rather than power series, so $hmf(R,f)$ is not necessarily idempotent complete [20]. The idempotent closure of $hmf(R,f)$ (denoted by $hmf(R,f)^{\omega}$) is the full subcategory of $HMF(R,f)$ whose objects are those matrix factorizations which are direct summands, in the homotopy category, of finite rank matrix factorizations. Moreover, $hmf(R,f)^{\omega}$ is an idempotent complete category. Taking the idempotent completion is necessary because the composition of $1$-morphisms in $\mathcal{LG}_{K}$ results in matrix factorizations which, while not of finite rank, are direct summands in the homotopy category of finite rank ones. There are two natural ways to resolve this: work throughout with power series rings and completed tensor products, or work with idempotent completions.

Let $(R=K[x],f)$ and $(S=K[y],g)$ be elements of $B$-$Fac$. The small category $B$-$Fac((R,f),(S,g))$ is defined as follows:

$$B\text{-}Fac((R,f),(S,g)):=hmf(R\otimes S,1_{R}\otimes g-f\otimes 1_{S})^{\omega}=hmf(K[x,y],g-f)^{\omega}$$

viz. a $1$-morphism between two polynomials $f$ and $g$ is a matrix factorization of $g-f$.
Then, given two composable $1$-cells $X\in B$-$Fac((R,f),(S,g))$ and $Y\in B$-$Fac((S,g),(T,h))$, we define their composition using Yoshino's tensor product as discussed in subsection 2.2: $Y\circ X:=Y\otimes_{S}X\in HMF(R\otimes_{K}T,1_{R}\otimes h-f\otimes 1_{T})$, which is a $\mathbb{Z}_{2}$-graded module, where:

$$(Y\otimes_{S}X)^{0}=(Y^{0}\otimes_{S}X^{0})\oplus(Y^{1}\otimes_{S}X^{1})\quad\text{and}\quad(Y\otimes_{S}X)^{1}=(Y^{0}\otimes_{S}X^{1})\oplus(Y^{1}\otimes_{S}X^{0}),$$

the differential ([8]) is

$$d_{Y\otimes X}=d_{Y}\otimes 1+1\otimes d_{X},$$

where the tensor product is taken over $S$ and $1\otimes d_{X}$ has the usual Koszul signs when applied to elements. That is,

$$d_{Y\otimes X}(y\otimes x)=d_{Y}(y)\otimes x+(-1)^{i}y\otimes d_{X}(x)$$

(see p. 28 of [21]), where $y\in Y^{i}$.
By Remark 2.1.8 on p. 29 of [6], $Y\otimes_{S}X$ is a free module of infinite rank over $R\otimes_{K}T$. However, the argument of Section 12 of [12] shows that it is naturally isomorphic, in the homotopy category, to a direct summand of something of finite rank. So we may define $Y\circ X:=Y\otimes_{S}X\in hmf(R\otimes_{K}T,1_{R}\otimes h-f\otimes 1_{T})^{\omega}=B\text{-}Fac((R,f),(T,h))$.

We now define the tensor product of morphisms of matrix factorizations. Let $X_{1}$, $X_{2}$ be objects of $B$-$Fac((R,f),(S,g))$ and $Y_{1}$, $Y_{2}$ be objects of $B$-$Fac((S,g),(T,h))$. Let $\alpha:X_{1}\rightarrow X_{2}$ and $\beta:Y_{1}\rightarrow Y_{2}$ be two morphisms; then we define their tensor product in the obvious way: $\beta\otimes\alpha:Y_{1}\otimes X_{1}\rightarrow Y_{2}\otimes X_{2}$ in $B$-$Fac((R,f),(T,h))$.
With the above data, the composition (bi-)functor is entirely determined in our B-category:

$$\star_{(R,f),(S,g),(T,h)}:B\text{-}Fac((R,f),(S,g))\times B\text{-}Fac((S,g),(T,h))\rightarrow B\text{-}Fac((R,f),(T,h))$$

$$(X,Y)\mapsto Y\otimes_{S}X$$

The definition of the associativity morphism is easy to state. In fact, for $X\in B$-$Fac((R,f),(S,g))$, $Y\in B$-$Fac((S,g),(T,h))$ and $Z\in B$-$Fac((T,h),(P,r))$, the associator is the $2$-isomorphism

$$a_{Z,Y,X}:Z\otimes(Y\otimes X)\rightarrow(Z\otimes Y)\otimes X$$

given by the usual formula

$$z\otimes(y\otimes x)\mapsto(z\otimes y)\otimes x$$

where $x\in X$, $y\in Y$ and $z\in Z$.

Lemma 4.1.

$B$-$Fac$ is a $B$-category.

We now discuss the construction of the units in $\mathcal{LG}_{K}$.

4.1 Unit $1$-morphisms in $\mathcal{LG}_{K}$

Here, we will construct the identity $1$-cells. We let $R=K[x_{1},x_{2},\cdots,x_{n}]$.

From lemma 3.2, we have in particular that

$$R\otimes_{K}R\cong K[x_{1},x_{2},\cdots,x_{n},x^{\prime}_{1},x^{\prime}_{2},\cdots,x^{\prime}_{n}],$$

where $x_{i}=x_{i}\otimes 1$ and $x^{\prime}_{i}=1\otimes x_{i}$.
The subscript $K$ in $\otimes_{K}$ will very often be omitted for ease of notation. We need an object $\Delta_{f}:(R,f)\rightarrow(R,f)$ in $hmf(R\otimes R,f\otimes id-id\otimes f)$ or, equivalently, in $hmf(K[x_{1},x_{2},\cdots,x_{n},x^{\prime}_{1},x^{\prime}_{2},\cdots,x^{\prime}_{n}],h(x,x^{\prime}))$, where $h(x,x^{\prime})=f(x)-f(x^{\prime})$ and where $id$ stands for $1_{R}$.
Recall (cf. section 5.5 of [28]): the exterior algebra $\bigwedge(V)$ of a vector space $V$ over a field $K$ is defined as the quotient algebra of the tensor algebra $T(V)=\bigoplus_{i=0}^{\infty}T^{i}(V)=K\oplus V\oplus(V\otimes V)\oplus(V\otimes V\otimes V)\oplus\cdots$ by the two-sided ideal $I$ generated by all elements of the form $x\otimes x$ for $x\in V$. Symbolically, $\bigwedge(V)=T(V)/I$. The exterior product $\wedge$ of two elements of $\bigwedge(V)$ is the product induced by the tensor product $\otimes$ of $T(V)$. That is, if

$$\pi:T(V)\rightarrow\bigwedge(V)=T(V)/I$$

is the canonical surjection, and if $a$ and $b$ are in $\bigwedge(V)$, then there are $\alpha$ and $\beta$ in $T(V)$ such that $a=\pi(\alpha)$, $b=\pi(\beta)$ and $a\wedge b=\pi(\alpha\otimes\beta)$. Let $\theta_{1},\theta_{2},\cdots,\theta_{n}$ be formal symbols (that is, we declare these symbols to be linearly independent by definition). We consider the $R\otimes R$-module:

$$\Delta_{f}=\bigwedge\Big(\bigoplus_{i=1}^{n}(R\otimes R)\theta_{i}\Big)$$

This is an exterior algebra generated by $n$ anti-commuting variables, the $\theta_{i}$'s, modulo the relations that the $\theta$'s anti-commute, that is, $\theta_{i}\wedge\theta_{j}=-\theta_{j}\wedge\theta_{i}$. Typically, we will omit the wedge product and write, for instance, $\theta_{i}\wedge\theta_{j}$ simply as $\theta_{i}\theta_{j}$. Here, the $\wedge$ product is taken over $K$, just like the tensor product. A typical element of $\Delta_{f}$ is $(r\otimes r^{\prime})\theta_{i_{1}}\theta_{i_{2}}\cdots\theta_{i_{k}}$ or, equivalently, $h(x_{1},x_{2},\cdots,x_{n},x^{\prime}_{1},x^{\prime}_{2},\cdots,x^{\prime}_{n})\theta_{i_{1}}\theta_{i_{2}}\cdots\theta_{i_{k}}$, where $i_{1},\cdots,i_{k}\in\{1,\cdots,n\}$ and $h(x_{1},x_{2},\cdots,x_{n},x^{\prime}_{1},x^{\prime}_{2},\cdots,x^{\prime}_{n})\in K[x_{1},x_{2},\cdots,x_{n},x^{\prime}_{1},x^{\prime}_{2},\cdots,x^{\prime}_{n}]$.
$\Delta_{f}$ as an algebra is finitely generated by the set of formal symbols $\{\theta_{1},\cdots,\theta_{n}\}$.
$\Delta_{f}$ as an $R\otimes R$-module is generated by the set containing the empty product and all products of the form $\theta_{i_{1}}\cdots\theta_{i_{k}}$, where $i_{1},\cdots,i_{k}\in\{1,\cdots,n\}$. The action of $R\otimes R$ is the obvious one.

$\Delta_{f}$ is endowed with the $\mathbb{Z}_{2}$-grading given by the $\theta$-degree (where $\deg\theta_{i}=1$ for each $i$). Thus $\deg\theta_{i}^{2}=0$ and $\deg\theta_{i}\theta_{j}=0$.
Next, we define the differential as follows:

$$d:\Delta_{f}\longrightarrow\Delta_{f}$$
$$d(-)=\sum_{i=1}^{n}\big[(x_{i}-x_{i}^{\prime})\theta_{i}^{\ast}(-)+\partial_{i}(f)\,\theta_{i}\wedge(-)\big]\qquad\cdots\ \natural$$

where $\theta_{i}^{\ast}$ is the unique derivation extending the map $\theta_{i}^{\ast}(\theta_{j})=\delta_{ij}$ and, as mentioned in [8], it acts on an element $\theta_{i_{1}}\theta_{i_{2}}\cdots\theta_{i_{k}}$ of the exterior algebra by the Leibniz rule with Koszul signs. In fact,

$$\theta_{i}^{\ast}(\theta_{j_{1}}\theta_{j_{2}}\cdots\theta_{j_{k}})=\begin{cases}0&\text{if }i\neq j\text{ for all }j\in\{j_{1},j_{2},\cdots,j_{k}\},\\ (-1)^{p+1}\theta_{j_{1}}\theta_{j_{2}}\cdots\hat{\theta_{i}}\cdots\theta_{j_{k}}&\text{otherwise,}\end{cases}$$

where $\hat{\theta_{i}}$ signifies that $\theta_{i}$ has been removed, and $p$ is the position of $\theta_{i}$ in $\theta_{j_{1}}\theta_{j_{2}}\cdots\theta_{i}\cdots\theta_{j_{k}}$.
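For example (a small check of the sign rule, my own illustration): $\theta_{1}^{\ast}(\theta_{2}\theta_{1}\theta_{3})=(-1)^{2+1}\theta_{2}\theta_{3}=-\theta_{2}\theta_{3}$, since $\theta_{1}$ occupies position $p=2$, while $\theta_{1}^{\ast}(\theta_{2}\theta_{3})=0$.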

And:

$$\partial_{i}:K[x_{1},\cdots,x_{n},x_{1}^{\prime},\cdots,x_{n}^{\prime}]\longrightarrow K[x_{1},\cdots,x_{n},x_{1}^{\prime},\cdots,x_{n}^{\prime}]$$

is defined by

$$\partial_{i}(h)=\frac{h(x_{1}^{\prime},\cdots,x_{i-1}^{\prime},x_{i},\cdots,x_{n},x_{1}^{\prime},\cdots,x_{n}^{\prime})-h(x_{1}^{\prime},\cdots,x_{i}^{\prime},x_{i+1},\cdots,x_{n},x_{1}^{\prime},\cdots,x_{n}^{\prime})}{x_{i}-x_{i}^{\prime}},$$

where, for ease of notation, we write $h$ as the argument of $\partial_{i}$ instead of the more cumbersome notation $\partial_{i}(h(x_{1},\cdots,x_{n},x_{1}^{\prime},\cdots,x_{n}^{\prime}))$. The following lemma will be useful in the definition of the left and the right units of $\mathcal{LG}_{K}$.
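Before stating it, here is a quick worked example of $\partial_{i}$ (my own illustration): for $n=2$ and $f=x_{1}x_{2}$ one gets $\partial_{1}(f)=\frac{x_{1}x_{2}-x_{1}^{\prime}x_{2}}{x_{1}-x_{1}^{\prime}}=x_{2}$ and $\partial_{2}(f)=\frac{x_{1}^{\prime}x_{2}-x_{1}^{\prime}x_{2}^{\prime}}{x_{2}-x_{2}^{\prime}}=x_{1}^{\prime}$, so that $\sum_{i}(x_{i}-x_{i}^{\prime})\partial_{i}(f)=f(x)-f(x^{\prime})$; this telescoping identity is what makes the differential $(\natural)$ square to $(f(x)-f(x^{\prime}))\cdot id$.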

Lemma 4.2.

There is a canonical map of factorizations
$\pi:\Delta_{f}\longrightarrow R$ given by $\pi[(r\otimes r^{\prime})\theta_{i_{1}}\theta_{i_{2}}\cdots\theta_{i_{k}}]=\delta_{k,0}\,rr^{\prime}$. In fact, $\pi$ is the composition of the projection $\pi^{\ast}:\Delta_{f}\longrightarrow R\otimes R$ onto $\theta$-degree $0$, followed by the multiplication map $m:R\otimes R\longrightarrow R=R_{0}\oplus R_{1}$, where we endow $R$ with the trivial grading, i.e., $R_{0}=R$ and $R_{1}=\{0\}$.

4.2 The left and the right units of 𝒢K\mathcal{LG}_{K}

In this subsection, we recall the definition of the left and right identities of the bicategory $\mathcal{LG}_{K}$. We will denote the right (respectively left) unit by $\rho$ (respectively $\lambda$).
Consider a $1$-morphism $X\in hmf(R\otimes S,1_{R}\otimes g-f\otimes 1_{S})^{\omega}=hmf(R\otimes S,id\otimes g-f\otimes id)^{\omega}$.
Thus, $X$ is a matrix factorization of $id\otimes g-f\otimes id$ and is also an $R\otimes S$-module.
Let $1_{X}:X\longrightarrow X$ be the identity map and $\pi$ be the projection defined in Lemma 4.2.

Remark 4.3.

Observe that any $S$-module $N$ can be considered as an $R$-module by letting $rn:=f(r)n$, where $f:R\longrightarrow S$ is a homomorphism of commutative rings. It is easy to see that the $R\otimes S$-module $X$ can be considered as an $R$-module via the following $K$-homomorphism of commutative unitary rings $f:R\longrightarrow R\otimes S$ defined by $f(r)=r\otimes 1_{S}$, and hence one can also see $X$ as an $R\otimes R$-module by means of the following (multiplicative map which is a) $K$-homomorphism of commutative unitary rings $m:R\otimes R\longrightarrow R$ defined by $m(r\otimes r^{\prime})=rr^{\prime}$. It is not difficult to see that the $R\otimes R$-module $\Delta_{f}$ can be considered as an $R$-module via the following homomorphism of commutative unitary rings $f:R\longrightarrow R\otimes R$ defined by $f(r)=r\otimes 1_{R}$.

Thanks to this remark, it makes sense to form the following tensor products over $R$: $X\otimes_{R}R$ and $X\otimes_{R}\Delta_{f}$, since $X$ and $\Delta_{f}$ can be viewed as $R$-modules. Consequently, we will simply write $X\otimes R$ and $X\otimes\Delta_{f}$ for ease of notation.
Similarly, since the $R\otimes S$-module $X$ and the $S\otimes S$-module $\Delta_{g}$ can be viewed as $S$-modules, we can form the module $\Delta_{g}\otimes_{S}X$, which we simply write as $\Delta_{g}\otimes X$.
Also consider the map $u:X\otimes R\longrightarrow X$ defined by $u(x\otimes r)=xr$. This definition makes sense since $X$ can be viewed as an $R$-module. $u$ is an isomorphism (see Example 1, page 363 of [11]).

Now, define $\rho_{X}:X\otimes\Delta_{f}\longrightarrow X$ by $\rho_{X}:=u\circ(1_{X}\otimes\pi)$ and $\lambda_{X}:\Delta_{g}\otimes X\longrightarrow X$ by $\lambda_{X}:=u\circ(\pi\otimes 1_{X})$.
$\rho_{X}$ and $\lambda_{X}$ are clearly morphisms in $hmf(R\otimes S,id\otimes g-f\otimes id)^{\omega}$.
$\rho$ is natural with respect to $2$-morphisms in the variable $X$, and there is no direct inverse for $\rho_{X}$ for each $X$ (see section 5.2 of [14]).

5 Morita contexts in $\mathcal{LG}_{K}$

Before discussing the notion of Morita context in $\mathcal{LG}_{K}$, we define what it is in an arbitrary bicategory.

Definition 5.1.

[25]
Let $\mathcal{B}$ be a bicategory with natural isomorphisms $a$, $r$ and $l$. Given two $0$-cells $A$ and $B$, we define a Morita context between $A$ and $B$ as a four-tuple $\Gamma=(f,g,\eta,\rho)$ consisting of two $1$-cells $f\in Hom_{\mathcal{B}}(A,B)$ and $g\in Hom_{\mathcal{B}}(B,A)$, and two $2$-cells $\eta:fg\rightarrow 1_{A}$ and $\rho:gf\rightarrow 1_{B}$, such that the following diagrams commute:

$$\begin{array}{ccccc}(gf)g&\xrightarrow{\;a\;}&g(fg)&\xrightarrow{\;1\eta\;}&g1_{A}\\ {\scriptstyle\rho 1}\downarrow&&&&\downarrow{\scriptstyle r}\\ 1_{B}g&&\xrightarrow{\qquad l\qquad}&&g\end{array}$$
$$\begin{array}{ccccc}(fg)f&\xrightarrow{\;a\;}&f(gf)&\xrightarrow{\;1\rho\;}&f1_{B}\\ {\scriptstyle\eta 1}\downarrow&&&&\downarrow{\scriptstyle r}\\ 1_{A}f&&\xrightarrow{\qquad l\qquad}&&f\end{array}$$

Equationally, we have:

  1. $r\circ 1\eta\circ a=l\circ\rho 1$

  2. $r\circ 1\rho\circ a=l\circ\eta 1$

Remark 5.1.

[25]

  • Observe that any adjunction $\langle f:A\rightarrow B,\ g:B\rightarrow A,\ \varepsilon:1_{A}\rightarrow fg,\ \eta:gf\rightarrow 1_{B}\rangle$ becomes a Morita context as soon as its unit $\varepsilon$ is invertible.

  • A Morita context is strict if both $\eta$ and $\rho$ in the foregoing definition are isomorphisms. Strict Morita contexts and adjoint equivalences are basically the same. One can switch between them by inverting the unit $\varepsilon$ of the adjunction.

Nota Bene: It is perhaps worth mentioning that what is called a Morita context in this paper is instead called a wide right Morita context from $B$ to $A$ in [19]. In [25], it is also called an abstract bridge. Both authors note that the notion of left Morita context is defined by reversing the $2$-cells. We will not deal with left Morita contexts in our work.

In the sequel, we will write $K[x]$ (respectively $K[y]$) for $K[x_{1},x_{2},\cdots,x_{n}]$ (respectively $K[y_{1},y_{2},\cdots,y_{m}]$), where the $x_{i}$ ($1\leq i\leq n$) and the $y_{j}$ ($1\leq j\leq m$) are indeterminates.

Description of Morita contexts in $\mathcal{LG}_{K}$
Let $(R,f)$ and $(S,g)$ be two objects of $\mathcal{LG}_{K}$, that is, polynomials $f\in R=K[x]$ and $g\in S=K[y]$. Throughout this section, except where otherwise stated, we keep the following remark and assumption in mind:

Remark 5.2.

Let $X\in\mathcal{LG}_{K}((R,f),(S,g))$ and $Y\in\mathcal{LG}_{K}((S,g),(R,f))$ be $1$-morphisms of $\mathcal{LG}_{K}$; we normally have:

  1. $X\in\mathcal{LG}_{K}((R,f),(S,g))=hmf(R\otimes_{K}S,g-f)^{\omega}$, i.e., $X:(R,f)\rightarrow(S,g)$ is a matrix factorization which is a direct summand of a finite rank matrix factorization of $g-f$.

  2. $Y\in\mathcal{LG}_{K}((S,g),(R,f))=hmf(S\otimes_{K}R,f-g)^{\omega}$, i.e., $Y:(S,g)\rightarrow(R,f)$ is a matrix factorization which is a direct summand of a finite rank matrix factorization of $f-g$.

  3. $\Delta_{f}\in\mathcal{LG}_{K}((R,f),(R,f))=hmf(R\otimes_{K}R,f\otimes id-id\otimes f)$, i.e., $\Delta_{f}:(R,f)\rightarrow(R,f)$ is a finite rank matrix factorization of $f\otimes id-id\otimes f$.

  4. $\Delta_{g}\in\mathcal{LG}_{K}((S,g),(S,g))=hmf(S\otimes_{K}S,g\otimes id-id\otimes g)$, i.e., $\Delta_{g}:(S,g)\rightarrow(S,g)$ is a finite rank matrix factorization of $g\otimes id-id\otimes g$.

Observe that the $1$-morphism $X$ as defined in the previous remark can also be a finite rank matrix factorization, i.e., an object of $hmf(R\otimes_{K}S,g-f)$, which is a subcategory of $hmf(R\otimes_{K}S,g-f)^{\omega}$. A similar observation holds for $Y$.
We would like to use determinants of matrices to discuss the notion of Morita context in $\mathcal{LG}_{K}$; therefore, it is important for us to deal with entities that are of finite rank. That is why we need the following assumption.
Assumption: We restrict our study of Morita contexts in $\mathcal{LG}_{K}$ to those objects $(R,f)$ and $(S,g)$ for which the following $1$-morphisms are all finite rank matrix factorizations: $X$, $Y$, $X\otimes Y$ and $Y\otimes X$.
Since we will only be dealing with finite rank matrix factorizations, we will sometimes intentionally omit the phrase "finite rank" in front of "matrix factorization".

A Morita context between objects $(R,f)$ and $(S,g)$ of $\mathcal{LG}_{K}$ is a four-tuple $\Gamma=(X,Y,\eta,\rho)$ where:

  • $X:(R,f)\rightarrow(S,g)$ is a matrix factorization of $g-f$.

  • $Y:(S,g)\rightarrow(R,f)$ is a matrix factorization of $f-g$.

  • $\eta:X\otimes_{R}Y\rightarrow 1_{(R,f)}=\Delta_{f}$, where $\Delta_{f}$ is the identity on $(R,f)$ in $\mathcal{LG}_{K}$, i.e., a matrix factorization of $f\otimes id-id\otimes f$ as seen in the previous section.

  • $\rho:Y\otimes_{S}X\rightarrow 1_{(S,g)}=\Delta_{g}$,

    such that the following diagrams (thereafter referred to as the $M.C.LG_{K}$ diagrams) commute up to homotopy:

    $$\begin{array}{ccccc}(Y\otimes_{S}X)\otimes_{R}Y&\xrightarrow{\;a\;}&Y\otimes_{S}(X\otimes_{R}Y)&\xrightarrow{\;1_{Y}\otimes\eta\;}&Y\otimes_{R}\Delta_{f}\\ {\scriptstyle\rho\otimes 1_{Y}}\downarrow&&&&\downarrow{\scriptstyle r_{Y}}\\ \Delta_{g}\otimes_{S}Y&&\xrightarrow{\qquad l_{Y}\qquad}&&Y\end{array}$$
    $$\begin{array}{ccccc}(X\otimes_{R}Y)\otimes_{S}X&\xrightarrow{\;a\;}&X\otimes_{R}(Y\otimes_{S}X)&\xrightarrow{\;1_{X}\otimes\rho\;}&X\otimes_{S}\Delta_{g}\\ {\scriptstyle\eta\otimes 1_{X}}\downarrow&&&&\downarrow{\scriptstyle r_{X}}\\ \Delta_{f}\otimes_{R}X&&\xrightarrow{\qquad l_{X}\qquad}&&X\end{array}$$

    That is, the following two conditions hold:

    1. $r_{Y}\circ 1_{Y}\otimes\eta\circ a\approx l_{Y}\circ\rho\otimes 1_{Y}$

    2. $r_{X}\circ 1_{X}\otimes\rho\circ a\approx l_{X}\circ\eta\otimes 1_{X}$

    where $\approx$ stands for the homotopy relation.

Equivalently:

  1. $\exists\lambda:Z\rightarrow Y$ s.t. $d_{Y}\lambda+\lambda d_{Z}=\psi-\phi$, where $Z=(Y\otimes_{S}X)\otimes_{R}Y$, $\psi=r_{Y}\circ 1_{Y}\otimes\eta\circ a$, and $\phi=l_{Y}\circ\rho\otimes 1_{Y}$.

  2. $\exists\xi:Z^{\prime}\rightarrow X$ s.t. $d_{X}\xi+\xi d_{Z^{\prime}}=\psi^{\prime}-\phi^{\prime}$, where $Z^{\prime}=(X\otimes_{R}Y)\otimes_{S}X$, $\psi^{\prime}=r_{X}\circ 1_{X}\otimes\rho\circ a$, and $\phi^{\prime}=l_{X}\circ\eta\otimes 1_{X}$.

We would now like to find necessary conditions on $\eta$ and $\rho$ for obtaining a Morita context in $\mathcal{LG}_{K}$. We will use a matrix approach because it is easier to work with matrices than with linear transformations in our setting. Thus, we will have recourse to one of the properties of matrix factorizations we studied earlier (cf. Corollaries 3.1 and 3.2), namely that the matrices appearing in a matrix factorization of a nonzero polynomial are invertible.
In the sequel, except where otherwise stated, the matrices appearing in a matrix factorization are of a fixed size $n\in\mathbb{N}$, so we will not bother to mention the sizes of the pairs of matrices constituting a matrix factorization.
Let $(P,Q)$ and $(R,T)$ be pairs of matrices representing respectively the matrix factorizations $X\otimes_{R}Y$ and $\Delta_{f}$. We assume $f$ is not a constant polynomial; thus the polynomial $f\otimes id-id\otimes f$, of which $\Delta_{f}$ is a matrix factorization, is not zero, and so, by Corollaries 3.1 and 3.2, $R$ and $T$ are invertible (over the field of fractions of the ring over which they are defined).
We know that
$$\eta=(\eta^{0},\eta^{1}):X\otimes Y\rightarrow 1_{(R,f)}=\Delta_{f}$$
is a morphism of matrix factorizations if the following diagram commutes:
$$\begin{array}{ccccc}(X\otimes Y)^{0}&\xrightarrow{\;d^{0}_{X\otimes Y}\;}&(X\otimes Y)^{1}&\xrightarrow{\;d^{1}_{X\otimes Y}\;}&(X\otimes Y)^{0}\\ \downarrow{\scriptstyle\eta^{0}}&&\downarrow{\scriptstyle\eta^{1}}&&\downarrow{\scriptstyle\eta^{0}}\\ (\Delta_{f})^{0}&\xrightarrow{\;d^{0}_{\Delta_{f}}\;}&(\Delta_{f})^{1}&\xrightarrow{\;d^{1}_{\Delta_{f}}\;}&(\Delta_{f})^{0}\end{array}$$

That is:

$$\begin{cases}\eta^{0}d^{1}_{X\otimes Y}=d^{1}_{\Delta_{f}}\eta^{1}\\ \eta^{1}d^{0}_{X\otimes Y}=d^{0}_{\Delta_{f}}\eta^{0}\end{cases}$$

In matrix form:

$$\sharp\ \begin{cases}\eta^{0}Q=T\eta^{1}&\cdots\ i)\\ \eta^{1}P=R\eta^{0}&\cdots\ ii)\end{cases}$$

where for ease of notation we wrote $\eta^{j}$ for the matrix of $\eta^{j}$, $j=0,1$.
Now, $X$ (respectively $Y$) being a matrix factorization of $g(y)-f(x)$ (respectively $f(x)-g(y)$), we obtain from Yoshino's tensor product of matrix factorizations (cf. definition 1.2 of [29]) that $X\otimes Y$ is a factorization of $(g(y)-f(x))+(f(x)-g(y))=0$. This means that $PQ=0$.

From $ii)$, since $R$ is invertible, we have:

$$\begin{aligned}\eta^{1}P=R\eta^{0}&\Rightarrow R^{-1}\eta^{1}P=\eta^{0}\qquad\cdots\ \bigstar\\ &\Rightarrow R^{-1}\eta^{1}PQ=\eta^{0}Q\\ &\Rightarrow\eta^{0}Q=0\qquad\cdots\ \bigstar\bigstar\end{aligned}$$

Putting this in $i)$, since $T$ is invertible, we obtain:

$$\begin{aligned}T\eta^{1}=\eta^{0}Q=0&\Rightarrow\eta^{1}=T^{-1}0=0\\ &\Rightarrow\eta^{1}=0\end{aligned}$$

We would now like to give a necessary condition on $\rho$. The process of obtaining this necessary condition is completely analogous to what we did in the case of $\eta$, but we present it here for the sake of clarity.

Let $(P^{\prime},Q^{\prime})$ and $(R^{\prime},T^{\prime})$ be pairs of matrices representing respectively the matrix factorizations $Y\otimes_{S}X$ and $\Delta_{g}$. We assume $g$ is not a constant polynomial; thus the polynomial $g\otimes id-id\otimes g$, of which $\Delta_{g}$ is a matrix factorization, is not zero, and so, by Corollaries 3.1 and 3.2, $R^{\prime}$ and $T^{\prime}$ are invertible.
We know that
ρ=(ρ0,ρ1):YX1(S,g)=Δg\rho=(\rho^{0},\rho^{1}):Y\otimes X\rightarrow 1_{(S,g)}=\Delta_{g}
is a morphism of matrix factorizations if the following diagram commutes:
$$\begin{array}{ccccc}
(Y\otimes X)^{0} & \xrightarrow{\;d^{0}_{Y\otimes X}\;} & (Y\otimes X)^{1} & \xrightarrow{\;d^{1}_{Y\otimes X}\;} & (Y\otimes X)^{0}\\[3pt]
\Big\downarrow{\scriptstyle\rho^{0}} & & \Big\downarrow{\scriptstyle\rho^{1}} & & \Big\downarrow{\scriptstyle\rho^{0}}\\[3pt]
(\Delta_{g})^{0} & \xrightarrow{\;d^{0}_{\Delta_{g}}\;} & (\Delta_{g})^{1} & \xrightarrow{\;d^{1}_{\Delta_{g}}\;} & (\Delta_{g})^{0}
\end{array}$$

That is:

{ρ0dYX1=dΔg1ρ1ρ1dYX0=dΔg0ρ0\begin{cases}\rho^{0}d^{1}_{Y\otimes X}=d^{1}_{\Delta_{g}}\rho^{1}\\ \rho^{1}d^{0}_{Y\otimes X}=d^{0}_{\Delta_{g}}\rho^{0}\end{cases}

In matrix form:

{ρ0Q=Tρ1i)ρ1P=Rρ0ii){\dagger}\begin{cases}\rho^{0}Q^{\prime}=T^{\prime}\rho^{1}...i)^{\prime}\\ \rho^{1}P^{\prime}=R^{\prime}\rho^{0}...ii)^{\prime}\end{cases}

where for ease of notation we wrote ρi\rho^{i} for the matrix of ρi\rho^{i}, i=0,1i=0,1.
Now, $X$ (respectively $Y$) being a matrix factorization of $g(y)-f(x)$ (respectively $f(x)-g(y)$), we obtain from Yoshino's tensor product of matrix factorizations (cf. definition 1.2 of [29]) that $Y\otimes X$ is a factorization of $(f(x)-g(y))+(g(y)-f(x))=0$. This means that $P^{\prime}Q^{\prime}=0$.

From ii)$^{\prime}$, since $R^{\prime}$ is invertible, we have:

ρ1P=Rρ0\displaystyle\rho^{1}P^{\prime}=R^{\prime}\rho^{0} R1ρ1P=ρ0\displaystyle\Rightarrow R^{\prime-1}\rho^{1}P^{\prime}=\rho^{0}
R1ρ1PQ=ρ0Q\displaystyle\Rightarrow R^{\prime-1}\rho^{1}P^{\prime}Q^{\prime}=\rho^{0}Q^{\prime}
ρ0Q=0\displaystyle\Rightarrow\rho^{0}Q^{\prime}=0

Putting this in i)i)^{\prime}, since TT^{\prime} is invertible, we get:

Tρ1=ρ0Q=0\displaystyle T^{\prime}\rho^{1}=\rho^{0}Q^{\prime}=0 ρ1=T10=0\displaystyle\Rightarrow\rho^{1}=T^{\prime-1}0=0
ρ1=0\displaystyle\Rightarrow\rho^{1}=0

We now gather our results in the following theorem.

Theorem 5.1.

Let

  • (R,f)(R,f) and (S,g)(S,g) be two objects of 𝒢K\mathcal{LG}_{K}.

  • X𝒢K((R,f),(S,g))=hmf(RKS,gf)X\in\mathcal{LG}_{K}((R,f),(S,g))=hmf(R\otimes_{K}S,g-f) i.e., X:(R,f)(S,g)X:(R,f)\rightarrow(S,g) is a finite rank matrix factorization of gfg-f.

  • Y𝒢K((S,g),(R,f))=hmf(SKR,fg)Y\in\mathcal{LG}_{K}((S,g),(R,f))=hmf(S\otimes_{K}R,f-g) i.e., Y:(S,g)(R,f)Y:(S,g)\rightarrow(R,f) is a finite rank matrix factorization of fgf-g.
    such that XYX\otimes Y and YXY\otimes X are finite rank matrix factorizations.

  • Δf𝒢K((R,f),(R,f))=hmf(RKR,fididf)\Delta_{f}\in\mathcal{LG}_{K}((R,f),(R,f))=hmf(R\otimes_{K}R,f\otimes id-id\otimes f) i.e., Δf:(R,f)(R,f)\Delta_{f}:(R,f)\rightarrow(R,f) is a finite rank matrix factorization of fididff\otimes id-id\otimes f.

  • $\Delta_{g}\in\mathcal{LG}_{K}((S,g),(S,g))=hmf(S\otimes_{K}S,g\otimes id-id\otimes g)$ i.e., $\Delta_{g}:(S,g)\rightarrow(S,g)$ is a finite rank matrix factorization of $g\otimes id-id\otimes g$.

  • η:XRY1(R,f)=Δf\eta:X\otimes_{R}Y\rightarrow 1_{(R,f)}=\Delta_{f} and let (P,Q)(P,Q) and (R,T)(R,T) be pairs of matrices representing respectively the finite rank matrix factorizations XYX\otimes Y and Δf\Delta_{f}.

  • ρ:YSX1(S,g)=Δg\rho:Y\otimes_{S}X\rightarrow 1_{(S,g)}=\Delta_{g} and let (P,Q)(P^{\prime},Q^{\prime}) and (R,T)(R^{\prime},T^{\prime}) be pairs of matrices representing respectively the finite rank matrix factorizations YXY\otimes X and Δg\Delta_{g}.

Then a necessary condition for $\Gamma=(X,Y,\eta,\rho)$ to be a Morita context is

$$\begin{cases}\eta^{1}=0\ \text{ and }\ \eta^{0}Q=0,\\ \rho^{1}=0\ \text{ and }\ \rho^{0}Q^{\prime}=0.\end{cases}$$

We will now prove that these necessary conditions are not sufficient, thanks to an auxiliary lemma (lemma 5.1) which is proved using a celebrated result (Fact 5.1) on determinants of block matrices due to Schur (see e.g. [26] and [27, theorem 3]).

Fact 5.1.

If A,B,CA,B,C and DD are n×nn\times n matrices, and

M=[ABCD],M=\begin{bmatrix}A&B\\ C&D\end{bmatrix},

Then the following hold:

  1. $det(M)=det(AD-CB)$ if $AC=CA$;

  2. $det(M)=det(AD-BC)$ if $CD=DC$ (a computational sanity check of this item is given right after the list);

  3. $det(M)=det(DA-BC)$ if $BD=DB$;

  4. $det(M)=det(DA-CB)$ if $AB=BA$.
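The following is a quick computational sanity check of item 2 (the case used later in the proof of lemma 5.1), on hypothetical integer blocks chosen so that $CD=DC$; it is only an illustration of the formula, not part of its proof.

    import sympy as sp

    # Hypothetical integer blocks; C and D are diagonal, hence C*D = D*C.
    A = sp.Matrix([[1, 2], [3, 4]])
    B = sp.Matrix([[0, 1], [5, 2]])
    C = sp.Matrix([[2, 0], [0, 3]])
    D = sp.Matrix([[7, 0], [0, 1]])
    assert C * D == D * C

    # Assemble M = [[A, B], [C, D]] and compare the two determinants.
    M = sp.Matrix.vstack(sp.Matrix.hstack(A, B), sp.Matrix.hstack(C, D))
    print(M.det(), (A * D - B * C).det())   # both equal -3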

The following auxiliary lemma states that the determinants of the matrices in the matrix factorizations XXX\otimes X^{\prime} and XXX^{\prime}\otimes X are all equal.

Lemma 5.1.

Let $X=(\phi,\psi)$ be an $n\times n$ matrix factorization of $f\in R$ and $X^{\prime}=(\phi^{\prime},\psi^{\prime})$ be an $m\times m$ matrix factorization of $g\in S$, where $\phi$ and $\psi$ (respectively $\phi^{\prime}$ and $\psi^{\prime}$) are matrices over $K[x]$ (respectively $K[y]$). These matrices can be considered as matrices over $L=K[x,y]$. Let
X^X=(P,Q)=([ϕ1m1nϕ1nψψ1m],[ψ1m1nϕ1nψϕ1m])X\widehat{\otimes}X^{\prime}=(P,Q)=(\begin{bmatrix}\phi\otimes 1_{m}&1_{n}\otimes\phi^{\prime}\\ -1_{n}\otimes\psi^{\prime}&\psi\otimes 1_{m}\end{bmatrix},\begin{bmatrix}\psi\otimes 1_{m}&-1_{n}\otimes\phi^{\prime}\\ 1_{n}\otimes\psi^{\prime}&\phi\otimes 1_{m}\end{bmatrix}) and

X^X=(P,Q)=([ϕ1n1mϕ1mψψ1n],[ψ1n1mϕ1mψϕ1n])X^{\prime}\widehat{\otimes}X=(P^{\prime},Q^{\prime})=(\begin{bmatrix}\phi^{\prime}\otimes 1_{n}&1_{m}\otimes\phi\\ -1_{m}\otimes\psi&\psi^{\prime}\otimes 1_{n}\end{bmatrix},\begin{bmatrix}\psi^{\prime}\otimes 1_{n}&-1_{m}\otimes\phi\\ 1_{m}\otimes\psi&\phi^{\prime}\otimes 1_{n}\end{bmatrix})

where each component is an endomorphism of $L^{n}\otimes L^{m}$.
Then

{det(P)=det(P),det(Q)=det(Q)\begin{cases}det(P)=det(P^{\prime}),\\ det(Q)=det(Q^{\prime})\end{cases}

Furthermore, all four determinants are equal.

Proof.

We will make use of fact 5.1(2) to compute the determinants of the block matrices $P$, $P^{\prime}$, $Q$ and $Q^{\prime}$. In order to apply fact 5.1(2) to $P$, we first observe that its two lower blocks commute: $(-1_{n}\otimes\psi^{\prime})(\psi\otimes 1_{m})=-\psi\otimes\psi^{\prime}=(\psi\otimes 1_{m})(-1_{n}\otimes\psi^{\prime})$. Similarly, the two lower blocks of $P^{\prime}$ commute: $(-1_{m}\otimes\psi)(\psi^{\prime}\otimes 1_{n})=-\psi^{\prime}\otimes\psi=(\psi^{\prime}\otimes 1_{n})(-1_{m}\otimes\psi)$. We can therefore compute the determinants of $P$ and $P^{\prime}$ as follows:

\begin{align*}
det(P) &= det[(\phi\otimes 1_{m})(\psi\otimes 1_{m})-(1_{n}\otimes\phi^{\prime})(-1_{n}\otimes\psi^{\prime})]\\
&= det[\phi\psi\otimes 1_{m}+1_{n}\otimes\phi^{\prime}\psi^{\prime}]\\
&= det[f1_{n}\otimes 1_{m}+1_{n}\otimes g1_{m}]\quad\text{since }\phi\psi=f1_{n}\text{ and }\phi^{\prime}\psi^{\prime}=g1_{m}\\
&= det[f(1_{n}\otimes 1_{m})+g(1_{n}\otimes 1_{m})]\quad\text{by properties of }\otimes\\
&= det[(f+g)1_{nm}]\quad\text{since }1_{n}\otimes 1_{m}=1_{nm}\\
&= (f+g)^{nm}det(1_{nm})\quad\text{by properties of determinants}\\
&= (f+g)^{nm},\\
det(P^{\prime}) &= det[(\phi^{\prime}\otimes 1_{n})(\psi^{\prime}\otimes 1_{n})-(1_{m}\otimes\phi)(-1_{m}\otimes\psi)]\\
&= det[\phi^{\prime}\psi^{\prime}\otimes 1_{n}+1_{m}\otimes\phi\psi]\\
&= det[g1_{m}\otimes 1_{n}+1_{m}\otimes f1_{n}]\quad\text{since }\phi^{\prime}\psi^{\prime}=g1_{m}\text{ and }\phi\psi=f1_{n}\\
&= det[g(1_{m}\otimes 1_{n})+f(1_{m}\otimes 1_{n})]\\
&= det[(g+f)1_{mn}]\quad\text{since }1_{m}\otimes 1_{n}=1_{mn}\\
&= (g+f)^{mn}det(1_{mn})\quad\text{by properties of determinants}\\
&= (f+g)^{mn}\quad\text{by commutativity in }K[x,y]\\
&= det(P)\quad\text{as desired.}
\end{align*}

Likewise, in order to use fact 5.1(2) to compute $det(Q)$ and $det(Q^{\prime})$, we observe that the two lower blocks of $Q$ commute: $(1_{n}\otimes\psi^{\prime})(\phi\otimes 1_{m})=\phi\otimes\psi^{\prime}=(\phi\otimes 1_{m})(1_{n}\otimes\psi^{\prime})$, and that the two lower blocks of $Q^{\prime}$ commute: $(1_{m}\otimes\psi)(\phi^{\prime}\otimes 1_{n})=\phi^{\prime}\otimes\psi=(\phi^{\prime}\otimes 1_{n})(1_{m}\otimes\psi)$. We can then compute the determinants of $Q$ and $Q^{\prime}$ as follows:

\begin{align*}
det(Q) &= det[(\psi\otimes 1_{m})(\phi\otimes 1_{m})-(-1_{n}\otimes\phi^{\prime})(1_{n}\otimes\psi^{\prime})]\\
&= det[\psi\phi\otimes 1_{m}+1_{n}\otimes\phi^{\prime}\psi^{\prime}]\\
&= det[f1_{n}\otimes 1_{m}+1_{n}\otimes g1_{m}]\quad\text{since }\psi\phi=f1_{n}\text{ and }\phi^{\prime}\psi^{\prime}=g1_{m}\\
&= det[f(1_{n}\otimes 1_{m})+g(1_{n}\otimes 1_{m})]\quad\text{by properties of }\otimes\\
&= det[(f+g)1_{nm}]\quad\text{since }1_{n}\otimes 1_{m}=1_{nm}\\
&= (f+g)^{nm}det(1_{nm})\quad\text{by properties of determinants}\\
&= (f+g)^{nm},\\
det(Q^{\prime}) &= det[(\psi^{\prime}\otimes 1_{n})(\phi^{\prime}\otimes 1_{n})-(-1_{m}\otimes\phi)(1_{m}\otimes\psi)]\\
&= det[\psi^{\prime}\phi^{\prime}\otimes 1_{n}+1_{m}\otimes\phi\psi]\\
&= det[g1_{m}\otimes 1_{n}+1_{m}\otimes f1_{n}]\quad\text{since }\psi^{\prime}\phi^{\prime}=g1_{m}\text{ and }\phi\psi=f1_{n}\\
&= det[g(1_{m}\otimes 1_{n})+f(1_{m}\otimes 1_{n})]\\
&= det[(g+f)1_{mn}]\quad\text{since }1_{m}\otimes 1_{n}=1_{mn}\\
&= (g+f)^{mn}det(1_{mn})\quad\text{by properties of determinants}\\
&= (f+g)^{mn}\quad\text{by commutativity in }K[x,y]\\
&= det(Q)\quad\text{as desired.}
\end{align*}

Clearly, all four determinants are equal. ∎
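As an illustration of lemma 5.1, the following SymPy sketch (an assumed small example of our own choosing: a $2\times 2$ factorization of $f=x^{2}+y^{2}$ and a $1\times 1$ factorization of $g=z^{2}$, so $n=2$, $m=1$) builds $P$, $Q$, $P^{\prime}$, $Q^{\prime}$ as in the lemma and confirms that all four determinants equal $(f+g)^{nm}=(x^{2}+y^{2}+z^{2})^{2}$.

    import sympy as sp

    x, y, z = sp.symbols('x y z')

    def kron(A, B):
        # Kronecker product built by hand, to keep the sketch self-contained.
        rows = [sp.Matrix.hstack(*[A[i, j] * B for j in range(A.cols)])
                for i in range(A.rows)]
        return sp.Matrix.vstack(*rows)

    def blocks(A, B, C, D):
        # The 2x2 block matrix [[A, B], [C, D]].
        return sp.Matrix.vstack(sp.Matrix.hstack(A, B), sp.Matrix.hstack(C, D))

    # X = (phi, psi): 2x2 factorization of f = x^2 + y^2 (n = 2);
    # X' = (phi_p, psi_p): 1x1 factorization of g = z^2 (m = 1).
    phi   = sp.Matrix([[x, -y], [y, x]]);  psi   = sp.Matrix([[x, y], [-y, x]])
    phi_p = sp.Matrix([[z**2]]);           psi_p = sp.Matrix([[1]])
    In, Im = sp.eye(2), sp.eye(1)

    P  = blocks(kron(phi, Im),    kron(In, phi_p), -kron(In, psi_p), kron(psi, Im))
    Q  = blocks(kron(psi, Im),   -kron(In, phi_p),  kron(In, psi_p), kron(phi, Im))
    Pp = blocks(kron(phi_p, In),  kron(Im, phi),   -kron(Im, psi),   kron(psi_p, In))
    Qp = blocks(kron(psi_p, In), -kron(Im, phi),    kron(Im, psi),   kron(phi_p, In))

    target = sp.expand((x**2 + y**2 + z**2)**2)   # (f + g)^(n*m)
    print(all(sp.expand(M.det() - target) == 0 for M in (P, Q, Pp, Qp)))   # True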

Remark 5.3.

It is good to observe that in the case of a Morita Context in 𝒢K\mathcal{LG}_{K}, the ff and the gg we have in the above auxiliary lemma are actually additive inverses of each other by definition of 1-morphisms in 𝒢K\mathcal{LG}_{K}. In fact, if XX is a morphism from the polynomial f1f_{1} to the polynomial f2f_{2}, then XX is a matrix factorization of f2f1=ff_{2}-f_{1}=f. And if YY is a morphism from the polynomial f2f_{2} to the polynomial f1f_{1}, then YY is a matrix factorization of f1f2=gf_{1}-f_{2}=g. We see that f=gf=-g.

Thus, we have the following result which is actually a consequence of lemma 5.1.

Theorem 5.2.

Let $(R,f)$ and $(S,g)$ be two objects in $\mathcal{LG}_{K}$ and let $X:(R,f)\rightarrow(S,g)$ and $X^{\prime}:(S,g)\rightarrow(R,f)$ be 1-morphisms in $\mathcal{LG}_{K}$. If $X\otimes X^{\prime}=(P,Q)$ and $X^{\prime}\otimes X=(P^{\prime},Q^{\prime})$, then $det(P)=det(Q)=det(P^{\prime})=det(Q^{\prime})=0$.

Proof.

The proof follows immediately from the remark and lemma 5.1. ∎
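For a concrete instance of theorem 5.2, one can take hypothetical $1\times 1$ factorizations of $u$ and $-u$ (playing the roles of $g-f$ and $f-g$); both matrices of their Yoshino tensor product then have determinant $0$, as the following short SymPy sketch confirms.

    import sympy as sp

    u = sp.symbols('u')                  # u plays the role of g - f, so -u is f - g
    phi,   psi   = u,  sp.Integer(1)     # 1x1 factorization of u
    phi_p, psi_p = -u, sp.Integer(1)     # 1x1 factorization of -u

    P = sp.Matrix([[phi,   phi_p], [-psi_p, psi]])
    Q = sp.Matrix([[psi,  -phi_p], [psi_p,  phi]])
    print(P.det(), Q.det())              # prints: 0 0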

It follows from this theorem that the matrix $Q$ in theorem 5.1 is not invertible, so the equality $\eta^{0}Q=0$ never determines $\eta^{0}$ uniquely. This equality admits several solutions in general, among which one must still identify those that actually yield a Morita context.

Remark 5.4.

The fact that QQ is not invertible helps to see that the necessary condition given for η\eta in theorem 5.1 is not sufficient. In fact, in the discussion that precedes the statement of that theorem, we cannot reverse the direction of the implication symbol from \bigstar\bigstar to \bigstar. Indeed, suppose η0Q=0\eta^{0}Q=0 then since PQ=0PQ=0, we have η0Q=R1η1PQ\eta^{0}Q=R^{-1}\eta^{1}PQ implying Rη0Q=η1PQR\eta^{0}Q=\eta^{1}PQ which implies (Rη0η1P)Q=0(R\eta^{0}-\eta^{1}P)Q=0. Now, QQ being noninvertible, we cannot obtain from here that (Rη0η1P)=0(R\eta^{0}-\eta^{1}P)=0 which is \bigstar. So, that necessary condition is not a sufficient one.
Similarly, since it also follows from theorem 5.2 that QQ^{\prime} in theorem 5.1 is not invertible, the necessary condition given for ρ\rho in theorem 5.1 is not sufficient.

Though the necessary conditions we found are not sufficient, there is a trivial sufficient condition. In fact, it is easy to see that two equal morphisms of linear factorizations are homotopic, since it suffices to take $\lambda=0$ in definition 4.3. We immediately have the following remark, which gives a (trivial) sufficient condition to obtain a Morita context in $\mathcal{LG}_{K}$.

Remark 5.5.

Let (R,f)(R,f) and (S,g)(S,g) be two objects of 𝒢K\mathcal{LG}_{K}. Let X:(R,f)(S,g)X:(R,f)\rightarrow(S,g) be a matrix factorization of gfg-f and Y:(S,g)(R,f)Y:(S,g)\rightarrow(R,f) a matrix factorization of fgf-g. Then Γ=(X,Y,0,0)\Gamma=(X,Y,0,0) is a Morita Context in 𝒢K\mathcal{LG}_{K}. That is, provided we have XX and YY, it suffices to take η=0\eta=0 and ρ=0\rho=0 in definition 5.1 to obtain a Morita Context in 𝒢K\mathcal{LG}_{K} .

This follows from the fact that the maps $r$ and $l$ are morphisms of matrix factorizations and hence linear; consequently, the image of zero under these morphisms is zero.
In fact, if $\eta=0$ and $\rho=0$, then:

  1. $\psi=r_{Y}\circ(1_{Y}\otimes\eta)\circ a=0$ and $\phi=l_{Y}\circ(\rho\otimes 1_{Y})=0$;

  2. $\psi^{\prime}=r_{X}\circ(1_{X}\otimes\rho)\circ a=0$ and $\phi^{\prime}=l_{X}\circ(\eta\otimes 1_{X})=0$.

Hence, the M.C. diagrams in $\mathcal{LG}_{K}$ commute up to homotopy if:
λ:ZY\exists\lambda:Z\rightarrow Y s.t. dYλ+λdZ=ψϕ=0d_{Y}\lambda+\lambda d_{Z}=\psi-\phi=0, where Z=(YSX)RYZ=(Y\otimes_{S}X)\otimes_{R}Y and ξ:ZX\exists\xi:Z^{\prime}\rightarrow X s.t. dXξ+ξdZ=ψϕ=0d_{X}\xi+\xi d_{Z^{\prime}}=\psi^{\prime}-\phi^{\prime}=0, where Z=(XRY)SXZ^{\prime}=(X\otimes_{R}Y)\otimes_{S}X

Hence, it now suffices to choose λ=0\lambda=0 and ξ=0\xi=0 to see that Γ=(X,Y,0,0)\Gamma=(X,Y,0,0) is a Morita Context in 𝒢K\mathcal{LG}_{K}.
A straightforward consequence of theorem 5.1 and remark 5.5 is the following:

Corollary 5.1.

A necessary and sufficient condition on η1\eta^{1} and ρ1\rho^{1} for Γ=(X,Y,η,ρ)\Gamma=(X,Y,\eta,\rho) to be a Morita Context is η1=0=ρ1\eta^{1}=0=\rho^{1}.

Example 5.1.

Consider h=x2+1[x]=Rh=-x^{2}+1\in\mathbb{R}[x]=R and g=y2+1[y]=Sg=y^{2}+1\in\mathbb{R}[y]=S. A Morita Context between (R,h)(R,h) and (S,g)(S,g) is a quadruple Γ=(X,X,η,ρ)\Gamma=(X,X^{\prime},\eta,\rho) where:

  • X:(R,h)(S,g)X:(R,h)\rightarrow(S,g) is a matrix factorization of gh=y2+x2[x,y]g-h=y^{2}+x^{2}\in\mathbb{R}[x,y]. We take
    X=([xyyx],[xyyx])X=(\begin{bmatrix}x&-y\\ y&x\end{bmatrix},\begin{bmatrix}x&y\\ -y&x\end{bmatrix}) since [xyyx][xyyx]=(gh)I2\begin{bmatrix}x&-y\\ y&x\end{bmatrix}\begin{bmatrix}x&y\\ -y&x\end{bmatrix}=(g-h)\cdot I_{2}

  • X:(S,g)(R,h)X^{\prime}:(S,g)\rightarrow(R,h) is a matrix factorization of hg=y2x2[x,y]h-g=-y^{2}-x^{2}\in\mathbb{R}[x,y]. We take
    X=([xyyx],[xyyx])X^{\prime}=(\begin{bmatrix}-x&y\\ -y&-x\end{bmatrix},\begin{bmatrix}x&y\\ -y&x\end{bmatrix}) since [xyyx][xyyx]=(hg)I2\begin{bmatrix}-x&y\\ -y&-x\end{bmatrix}\begin{bmatrix}x&y\\ -y&x\end{bmatrix}=(h-g)\cdot I_{2}

  • $\eta:X\otimes X^{\prime}\rightarrow\Delta_{h}$, $x\otimes x^{\prime}\mapsto 0$, viz. $\eta$ is the zero map.

  • $\rho:X^{\prime}\otimes X\rightarrow\Delta_{g}$, $x^{\prime}\otimes x\mapsto 0$, viz. $\rho$ is the zero map.

So,

Γ=[([xyyx],[xyyx]),([xyyx],[xyyx]),0,0]\Gamma=[(\begin{bmatrix}x&-y\\ y&x\end{bmatrix},\begin{bmatrix}x&y\\ -y&x\end{bmatrix}),(\begin{bmatrix}-x&y\\ -y&-x\end{bmatrix},\begin{bmatrix}x&y\\ -y&x\end{bmatrix}),0,0]

is a Morita context between (R,h)(R,h) and (S,g)(S,g).
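The two factorizations used in this example can be verified directly; the following SymPy sketch (illustrative only) checks that both products, in both orders, equal $(g-h)\cdot I_{2}$ and $(h-g)\cdot I_{2}$ respectively.

    import sympy as sp

    x, y = sp.symbols('x y')
    h, g = -x**2 + 1, y**2 + 1

    X1,  X2  = sp.Matrix([[x, -y], [y,  x]]),  sp.Matrix([[x, y], [-y, x]])   # the pair X
    Xp1, Xp2 = sp.Matrix([[-x, y], [-y, -x]]), sp.Matrix([[x, y], [-y, x]])   # the pair X'

    print(sp.simplify(X1 * X2  - (g - h) * sp.eye(2)))   # zero matrix
    print(sp.simplify(X2 * X1  - (g - h) * sp.eye(2)))   # zero matrix
    print(sp.simplify(Xp1 * Xp2 - (h - g) * sp.eye(2)))  # zero matrix
    print(sp.simplify(Xp2 * Xp1 - (h - g) * sp.eye(2)))  # zero matrix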

Remark 5.6.
  1. It is good to mention that in such a setting (remark 5.5) not all Morita Contexts between two objects are identical. In fact, they differ at the level of the matrix factorizations $X$ and $Y$.

  2. Intuitively, Morita contexts being pre-equivalences, it is not interesting to study cases where the two polynomials $f$ and $g$ are equal.

6 Further Problems

In this paper, we gave necessary conditions on $\eta$ and $\rho$ to obtain a Morita context in $\mathcal{LG}_{K}$. But the only sufficient condition we were able to give for a $4$-tuple $(X,Y,\eta,\rho)$ to be a Morita context was the trivial one, i.e., $\eta=\rho=0$. An interesting question would be to find nontrivial sufficient conditions on $\eta$ and $\rho$.

Acknowledgments

Part of this work was carried out during my Ph.D. studies in mathematics at the University of Ottawa in Canada. I am grateful to Prof. Dr. Richard Blute, my Ph.D. supervisor, for all the fruitful interactions.
I gratefully acknowledge the financial support of the Queen Elizabeth Diamond Jubilee scholarship during my Ph.D. studies.

References

  • Amitsur, [1971] Amitsur, S. A. (1971). Rings of quotients and Morita contexts. Journal of Algebra, 17(2):273–298.
  • Anderson and Fuller, [2012] Anderson, F. W. and Fuller, K. R. (2012). Rings and categories of modules, volume 13. Springer Science & Business Media.
  • Bass, [1962] Bass, H. (1962). The Morita theorems. University of Oregon.
  • Bass and Roy, [1967] Bass, H. and Roy, A. (1967). Lectures on topics in algebraic K-theory, volume 41. Tata Institute of Fundamental Research Bombay.
  • Bökstedt and Neeman, [1993] Bökstedt, M. and Neeman, A. (1993). Homotopy limits in triangulated categories. Compositio Mathematica, 86(2):209–234.
  • Camacho, [2015] Camacho, A. R. (2015). Matrix factorizations and the Landau-Ginzburg/conformal field theory correspondence. arXiv preprint arXiv:1507.06494.
  • Carqueville and Murfet, [2015] Carqueville, N. and Murfet, D. (2015). A toolkit for defect computations in Landau-Ginzburg models. In Proc. Symp. Pure Math, volume 90, page 239.
  • Carqueville and Murfet, [2016] Carqueville, N. and Murfet, D. (2016). Adjunctions and defects in Landau-Ginzburg models. Advances in Mathematics, 289:480–566.
  • Conrad, [2016] Conrad, K. (2016). Tensor products. Notes of course, available on-line.
  • Crisler and Diveris, [2016] Crisler, D. and Diveris, K. (2016). Matrix factorizations of sums of squares polynomials. Available at: http://pages.stolaf.edu/diveris/files/2017/01/MFE1.pdf.
  • Dummit and Foote, [2004] Dummit, D. S. and Foote, R. M. (2004). Abstract algebra, volume 3. Wiley Hoboken.
  • Dyckerhoff et al., [2013] Dyckerhoff, T., Murfet, D., et al. (2013). Pushing forward matrix factorizations. Duke Mathematical Journal, 162(7):1249–1311.
  • Eisenbud, [1980] Eisenbud, D. (1980). Homological algebra on a complete intersection, with an application to group representations. Transactions of the American Mathematical Society, 260(1):35–64.
  • Fomatati, [2019] Fomatati, Y. B. (2019). Multiplicative Tensor Product of Matrix Factorizations and Some Applications. PhD thesis, Université d’Ottawa/University of Ottawa.
  • Fomatati, [2021] Fomatati, Y. B. (2021). On tensor products of matrix factorizations. arXiv preprint arXiv:2105.10811.
  • Goldie, [1958] Goldie, A. W. (1958). The structure of prime rings under ascending chain conditions. Proceedings of the London Mathematical Society, 3(4):589–608.
  • Goldie, [1960] Goldie, A. W. (1960). Semi-prime rings with maximum condition. Proceedings of the London Mathematical Society, 3(1):201–220.
  • Jacobson, [1964] Jacobson, N. (1964). Structure of rings, rev. ed. In Amer. Math. Soc. Colloq. Publ, volume 37.
  • Kaoutit, [2006] Kaoutit, L. E. (2006). Wide Morita contexts in bicategories. arXiv preprint math/0608601.
  • Keller et al., [2011] Keller, B., Murfet, D., and Van den Bergh, M. (2011). On two examples by Iyama and Yoshino. Compositio Mathematica, 147(2):591–612.
  • Khovanov and Rozansky, [2008] Khovanov, M. and Rozansky, L. (2008). Matrix factorizations and link homology II. Geometry & Topology, 12(3):1387–1425.
  • Lam, [1999] Lam, T.-Y. (1999). Lectures on Modules and Rings. Graduate Texts in Mathematics, volume 189. Springer.
  • Morita, [1958] Morita, K. (1958). Duality for modules and its applications to the theory of rings with minimum condition. Science Reports of the Tokyo Kyoiku Daigaku, Section A, 6(150):83–142.
  • Neeman, [2001] Neeman, A. (2001). Triangulated categories. Princeton University Press.
  • Pécsi, [2012] Pécsi, B. (2012). On Morita contexts in bicategories. Applied Categorical Structures, 20(4):415–432.
  • Puntanen and Styan, [2005] Puntanen, S. and Styan, G. P. (2005). Historical introduction: Issai Schur and the early development of the Schur complement. In The Schur Complement and Its Applications, pages 1–16. Springer.
  • Silvester, [2000] Silvester, J. R. (2000). Determinants of block matrices. The Mathematical Gazette, 84(501):460–467.
  • Smith, [2011] Smith, R. A. (2011). Introduction to vector spaces, vector algebras, and vector geometries. arXiv preprint arXiv:1110.3350.
  • Yoshino, [1998] Yoshino, Y. (1998). Tensor products of matrix factorizations. Nagoya Mathematical Journal, 152:39–56.