Necessary conditions for the existence of Morita Contexts in the bicategory of Landau-Ginzburg Models
Abstract
We use a matrix approach to study the concept of Morita context in the bicategory of Landau-Ginzburg models on a particular class of objects.
We first use properties of matrix factorizations to state and prove two necessary conditions for the existence of a Morita context between two objects of .
Next, we use a celebrated result (due to Schur) on determinants of block matrices to show that these necessary conditions are not sufficient. Finally, we state a trivial sufficient condition.
Keywords. Matrix factorizations, tensor product, Morita equivalence.
Mathematics Subject Classification (2020). 16D90, 15A23, 15A69, 18N10.
In the sequel, is a commutative ring with unity and will denote the power series ring or its subring of polynomials . It will always be clear which ring we are referring to.
1 Introduction
A Morita context, also called pre-equivalence data [4], is a generalization of Morita equivalence between categories of modules.
Two rings and are called Morita equivalent if their categories of left modules are equivalent.
The prototypical example of a pair of Morita equivalent rings is a ring and the ring of matrices over it (for details, see corollary 22.6 of [2]).
It is evident that if two rings are isomorphic, they are Morita equivalent. However, the converse is not true in general. There exist rings that are not isomorphic, yet are Morita equivalent (cf. p. 470 of [22]). In fact, it suffices to take a ring and the ring of matrices over . However, there is a partial converse which holds. Indeed,
if two rings are Morita equivalent, then their centers are isomorphic. In particular, if the rings are commutative, then they are isomorphic ([22], p.494).
Because of this result, Morita equivalence is of interest only in the setting of noncommutative rings. For more details on Morita equivalence, see [2]. This notion can be generalized using the notion of Morita contexts.
Morita contexts were first introduced in the bicategory of unitary rings and bimodules as -tuples , where and are rings, is an A-B-bimodule, is a B-A-bimodule, and and are homomorphisms satisfying and . For a characterization of Morita contexts in this bicategory, see theorem 5.4 of [25].
Bass (cf. chap. 2, section 4.4 of [3]) proved that the Morita context is a Morita equivalence if and only if is both projective and a generator in the category of modules. Recall (cf. [22]) that if is a ring, an -module is projective if for every surjective -linear map and every -linear map there is an -linear map such that .
Morita contexts were recently studied in many bicategories [25]. But they have not yet been studied in the bicategory of Landau-Ginzburg models (section 2.2 of [6]) over a commutative ring whose intricate construction ([8]) is reminiscent of, but more complex than that of the bicategory of associative algebras and bimodules. In this paper, we study the concept of Morita context in the bicategory of Landau-Ginzburg models. A Landau-Ginzburg model is a model in solid state physics for superconductivity. possesses adjoints (also called duals, cf. [8]) and this helps in explaining a certain duality that exists in the setting of Landau-Ginzburg models in terms of some specified relations (cf. page 1 of [8]). The objects of are potentials which are polynomials satisfying some conditions (definition 2.4 p.8 of [8]).
Unlike [8], however, we do not restrict ourselves to potentials.
The authors of [8] used potentials to suit their purposes; if we instead take the objects of to be arbitrary polynomials rather than potentials and apply the construction of given in [8], we obtain virtually the same bicategory, except that it has more objects. Thus, in this paper, the objects of are simply polynomials.
There are many reasons for studying the notion of Morita context.
The first reason is that it generalizes the very important notion of Morita equivalence.
Another reason is that it is used to prove some celebrated results. For example, the Morita context, introduced in [23], has since been used to prove Wedderburn's theorem on the structure of simple rings [3]. Morita contexts were also used in [1] to obtain various results: Goldie's theorem ([16], [17]) on the ring of quotients of semi-prime rings and, as a specialization, Wedderburn's structure theorems for semi-simple Artinian rings.
Other applications, though sometimes not stated in an explicit form, can be found in various places (e.g., [18], p. 75).
In this paper, we use properties of matrix factorizations to give necessary conditions to obtain a Morita context between two objects of the bicategory . Thus, our first main result is the following theorem.
Theorem A.
Let
• and be two objects of .
• i.e., is a finite rank matrix factorization of .
• i.e., is a finite rank matrix factorization of .
such that and are finite rank matrix factorizations.
• i.e., is a finite rank matrix factorization of .
• i.e., is a finite rank matrix factorization of .
• Let and be pairs of matrices representing respectively the finite rank matrix factorizations and .
• Let and be pairs of matrices representing respectively the finite rank matrix factorizations and .
Then a necessary condition for to be a Morita Context is
where, for ease of notation, we write and respectively for the matrices of and , .
Next, thanks to a celebrated result (due to Schur [26], [27]) on determinants of block matrices, we observe that if and are respectively two matrix factorizations of two arbitrary polynomials, then the determinants of the four matrices appearing in the Yoshino tensor products and are all equal.
Moreover, when we translate this into , where a morphism is a matrix factorization of the difference of two polynomials, we find that those four determinants are all equal to zero. So our second main result is stated as follows:
Theorem B.
Let and be two objects in and let
be 1-morphisms in . If and , then
Thanks to this result we conclude that the necessary conditions earlier stated are not sufficient.
This paper is organized as follows: In the next section, we review the notion of matrix factorization. In section 3, we recall properties of matrix factorizations. Section 4 is a recall of the definition of the bicategory of Landau-Ginzburg models. The notion of Morita contexts in is discussed in section 5. Finally, we discuss further problems in the last section.
2 Matrix Factorizations
In this section, we first recall the definition of a matrix factorization and describe the category of matrix factorizations of a power series . Next, the definition of Yoshino’s tensor product is recalled.
In 1980, Eisenbud came up with an approach of factoring both reducible and irreducible polynomials in using matrices. For instance, the polynomial is irreducible over the real numbers but can be factorized as follows:
We say that is a matrix factorization of .
Definition 2.1.
When , we get a matrix factorization of , i.e., which is simply a factorization of in the classical sense. But in case is not reducible, this is not interesting; that is why we will mostly consider .
The original definition of a matrix factorization was given by Eisenbud [13] as follows: a matrix factorization of an element in a ring (with unity) is an ordered pair of maps of free modules and s.t., and .
Though this definition is valid for an arbitrary ring (with unity), in order to effectively study matrix factorizations, it is important to restrict oneself to specific rings. Working with specific rings makes it possible to easily give examples, and it also allows one to carry out computations in a well-defined framework. Yoshino [29] restricted himself to matrix factorizations of power series. In this section, we will restrict ourselves to matrix factorizations of a polynomial.
Example 2.1.
Let .
We give a matrix factorization of :
Thus;
is a matrix factorization of .
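For concreteness, such a factorization can be checked symbolically. The sketch below is our own illustration (the polynomial $x^{2}+y^{2}$ and the two matrices are assumptions made for this example, not data taken from the text above); it verifies that the pair multiplies to $f\cdot I_{2}$ in both orders.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2          # illustrative polynomial (our own choice)

# A candidate pair of 2x2 matrices over K[x, y]
A = sp.Matrix([[x,  y],
               [-y, x]])
B = sp.Matrix([[x, -y],
               [y,  x]])

# (A, B) is a matrix factorization of f: both products equal f times the identity
assert (A * B).expand() == f * sp.eye(2)
assert (B * A).expand() == f * sp.eye(2)
```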
We now propose a simple, straightforward algorithm to obtain an matrix factorization from one that is , where .
Simple straightforward algorithm:
Let be an matrix factorization of a power series . Suppose we want an matrix factorization of , where .
Let stand for the entry in the row and column.
• Turn and into matrices by filling them with zeroes everywhere except at entries where and .
Then, for all the new diagonal entries (those with ), either:
1. in , replace the diagonal elements (which are zeroes) with the power series itself, and
2. in , replace the diagonal elements (which are zeroes) with 1;
or:
interchange the roles of and in steps 1 and 2 above.
It is evident that this simple algorithm works not only for polynomials but also for any element of a unital ring; a code sketch of this procedure is given below.
The standard algorithm to factor polynomials using matrices is found in [10] and for an improved version see [14] and [15].
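A minimal sketch of this padding procedure, in Python with SymPy (the function name and the choice of which factor receives the power series on its new diagonal entries are our own), is the following.

```python
import sympy as sp

def pad_matrix_factorization(A, B, f, m):
    """Given an n x n matrix factorization (A, B) of f (i.e. A*B = B*A = f*I_n),
    return an m x m matrix factorization of f (m >= n): zero-fill both matrices,
    then put f on the new diagonal entries of one factor and 1 on those of the other."""
    n = A.shape[0]
    A_big = sp.diag(A, f * sp.eye(m - n))   # new diagonal entries of the first factor get f
    B_big = sp.diag(B, sp.eye(m - n))       # new diagonal entries of the second factor get 1
    return A_big, B_big

# Usage: extend a 2x2 factorization of x^2 + y^2 to a 3x3 one.
x, y = sp.symbols('x y')
f = x**2 + y**2
A = sp.Matrix([[x, y], [-y, x]])
B = sp.Matrix([[x, -y], [y, x]])
A3, B3 = pad_matrix_factorization(A, B, f, 3)
assert (A3 * B3).expand() == f * sp.eye(3)
assert (B3 * A3).expand() == f * sp.eye(3)
```

The padded pair is block-diagonal, so the verification reduces to the original factorization on the first block and the trivial factorization $f\cdot 1 = f$ on each new diagonal entry.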
2.1 The category of matrix factorizations of
The category of matrix factorizations of a power series , denoted by or (or even when there is no risk of confusion), is defined [29] as follows:
The objects are the matrix factorizations of .
Given two matrix factorizations of ; and respectively of sizes and , a morphism from to is a pair of matrices each of size which makes the following diagram commute:
That is,
For a detailed definition of this category, see [14].
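Concretely, in one common convention (our restatement; the paper's own labels for the two factorizations are not fixed here), if $(\phi,\psi)$ and $(\phi',\psi')$ are matrix factorizations of the same element, a morphism between them is a pair of matrices $(\alpha,\beta)$ of the appropriate sizes such that
$$\beta\,\phi=\phi'\,\alpha \qquad\text{and}\qquad \alpha\,\psi=\psi'\,\beta,$$
i.e., the two squares of the diagram above commute.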
2.2 Yoshino’s Tensor Product of Matrix Factorizations and its variants
In this subsection, we recall the definition of the tensor product of matrix factorizations, denoted , constructed by Yoshino [29] using matrices. This will be useful when we describe the notion of Morita Context in (cf. section 5). In the sequel, unless otherwise stated, , where is a commutative ring with unity and .
Definition 2.2.
[29]
Let be an matrix factorization of a power series and an matrix factorization of . These matrices can be considered as matrices over and the tensor product is given by
()
where each component is an endomorphism on .
It is easy to see [29] that
is a matrix factorization of of size .
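Explicitly, writing $X=(\phi,\psi)$ for an $n\times n$ matrix factorization of $f$ and $X'=(\phi',\psi')$ for an $m\times m$ matrix factorization of $g$ (our notation), one common sign convention for the tensor product (cf. [29], [15]) is
$$X\widehat{\otimes}X'=\left(\begin{pmatrix}\phi\otimes 1_{m} & 1_{n}\otimes\phi' \\ -\,1_{n}\otimes\psi' & \psi\otimes 1_{m}\end{pmatrix},\ \begin{pmatrix}\psi\otimes 1_{m} & -\,1_{n}\otimes\phi' \\ 1_{n}\otimes\psi' & \phi\otimes 1_{m}\end{pmatrix}\right),$$
and a direct block computation shows that the product of these two matrices, in either order, equals $(f+g)\cdot 1_{2nm}$.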
In the following example, we consider matrix factorizations in one variable.
Example 2.2.
Let and be matrix factorizations of and respectively. Then
And
This shows that is a matrix factorization of and it is of size .
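A symbolic check of such a one-variable computation, using the sign convention displayed after Definition 2.2 (the concrete $1\times 1$ factorizations $x\cdot x=x^{2}$ and $y\cdot y=y^{2}$, and the helper's name, are our own illustrative choices), can be sketched as follows.

```python
import sympy as sp

def yoshino(phi, psi, phi2, psi2):
    """Yoshino tensor product of matrix factorizations (phi, psi) of f and
    (phi2, psi2) of g, in one common sign convention: returns the pair of
    block matrices whose products (in either order) equal (f + g) times the identity."""
    In, Im = sp.eye(phi.shape[0]), sp.eye(phi2.shape[0])
    kron = sp.kronecker_product
    top = sp.Matrix.vstack(sp.Matrix.hstack(kron(phi, Im), kron(In, phi2)),
                           sp.Matrix.hstack(-kron(In, psi2), kron(psi, Im)))
    bot = sp.Matrix.vstack(sp.Matrix.hstack(kron(psi, Im), -kron(In, phi2)),
                           sp.Matrix.hstack(kron(In, psi2), kron(phi, Im)))
    return top, bot

x, y = sp.symbols('x y')
# 1x1 matrix factorizations: (x)(x) = x^2 and (y)(y) = y^2
phi, psi = sp.Matrix([[x]]), sp.Matrix([[x]])
phi2, psi2 = sp.Matrix([[y]]), sp.Matrix([[y]])
T, S = yoshino(phi, psi, phi2, psi2)
# The result is a 2x2 matrix factorization of x^2 + y^2
assert (T * S).expand() == (x**2 + y**2) * sp.eye(2)
assert (S * T).expand() == (x**2 + y**2) * sp.eye(2)
```

The helper is written for square matrix factorizations of arbitrary sizes, so it applies equally to the two-variable computation of Example 2.3 below.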
In the next example, we consider matrix factorizations in two variables.
Example 2.3.
Let
X=
(), and
X’=
()
be matrix factorizations of and respectively. Let
A=
,
B=
,
C=
,
D=
,
A’=
,
B’=
,
C’=
, and
D’=
Then
If we call these two matrices and respectively, then where is the identity matrix of size and the last zero is the zero matrix of size . Hence, is a matrix factorization of of size .
Yoshino's tensor product has three mutually distinct functorial variants, as can be seen in [15].
3 Properties of Matrix Factorizations
We will now state and prove some properties of matrix factorizations of polynomials. Some of them will be used when studying the notion of Morita Context between two objects in the bicategory .
In this section, unless otherwise stated, .
All statements and proofs in this section are taken from [10], except for proposition 3.2. They are reproduced here (sometimes with slight modifications) for completeness.
Lemma 3.1.
If , then det() divides . If in addition, is irreducible in , then det() is a power of .
Corollary 3.1.
If and , then over the field of fractions of , is invertible.
Since is invertible over , the unique such that is , where is the adjoint of the matrix .
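To spell out the relation stated here, write the factorization as $AB=f\,I_{n}$ with $f\neq 0$ (the letters $A$, $B$ are our own labels): over the field of fractions,
$$B \;=\; f\,A^{-1} \;=\; \frac{f}{\det A}\,\operatorname{adj}(A),$$
using the adjugate formula $A^{-1}=\operatorname{adj}(A)/\det A$.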
Now, is a matrix over . So, if divides , then will also be a matrix over . However, it is possible for not to have entries in , and therefore will not appear in any matrix factorization of over . We now prove that matrices appearing in a matrix factorization commute with each other.
Proposition 3.1.
If and are matrices so that , then .
Corollary 3.2.
If and , then over the field of fractions of , and are invertible.
Remark 3.1.
We state and prove another property of matrix factorizations thanks to which we can conclude that an matrix factorization of a polynomial is not unique.
Proposition 3.2.
If and are matrices such that , then , where (respectively ) stands for the transpose of (respectively ).
Proof.
Write the given matrix factorization as a pair of square matrices $(A,B)$ with $AB=BA=f\,I_{n}$ (cf. proposition 3.1). Taking transposes, $B^{T}A^{T}=(AB)^{T}=(f\,I_{n})^{T}=f\,I_{n}$ and $A^{T}B^{T}=(BA)^{T}=f\,I_{n}$, so $(A^{T},B^{T})$ is again a matrix factorization of $f$.
∎
Before we proceed, it is worthwhile to state some well-known facts from the literature; see the notes on tensor products by Conrad (for points 1 and 2 below, see Theorem 4.9 and Example 4.11 of [9]; point 3 simply generalizes point 2).
Lemma 3.2.
1. Let and be free modules with respective bases and . Then is a basis of .
2. The modules and are isomorphic as modules.
3. If we let stand for and stand for , where , , and means with primes, e.g., , then more generally, we have:
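In the notation of free modules over a commutative ring $R$ (our restatement of points 1 and 2), these facts read
$$R^{m}\otimes_{R}R^{n}\;\cong\;R^{mn},\qquad\{\,e_{i}\otimes e'_{j}\;:\;1\le i\le m,\ 1\le j\le n\,\}\ \text{an $R$-basis of }R^{m}\otimes_{R}R^{n},$$
where $\{e_{i}\}$ and $\{e'_{j}\}$ are the standard bases; iterating the isomorphism gives the general statement of point 3.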
4 The bicategory of Landau-Ginzburg models
We quickly recall the construction of the bicategory of Landau-Ginzburg models. In Definition 5.5 of [14], a B-category is defined to be a structure satisfying all the requirements of a bicategory except, possibly, the existence of unit morphisms. In order to easily recall the construction of , we first construct a B-category which we call and then proceed to the unit construction. The objects of are polynomials denoted by pairs where . Here we do not impose restrictions on the objects of our bicategory as was originally done in [8], where the authors instead consider potentials (Definition 2.4, p.8 of [8]). This generalization at the level of the objects does not pose a problem in the construction of . The end result is simply a bicategory with more objects than the original defined in [8]; we still call it .
We first recall the notion of linear factorizations which is an ingredient for the construction of .
Definition 4.1.
(p.8 of [8]) Linear factorization
A linear factorization of is a graded module
together with an odd (i.e., grade reversing) linear endomorphism
such that .
stands for the endomorphism .
Since we are dealing with a grading and is odd, we can also say is a degree one map. is called a twisted differential in [7]. is actually a pair of maps that we may depict as follows:
and the stated condition on them is:
and .
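In matrix form (a standard way of phrasing the definition, cf. [8]; the block notation here is ours), writing the graded module as $M=M^{0}\oplus M^{1}$ and the odd endomorphism as
$$d_{M}=\begin{pmatrix}0 & d^{1}\\ d^{0} & 0\end{pmatrix},\qquad d^{0}\colon M^{0}\to M^{1},\quad d^{1}\colon M^{1}\to M^{0},$$
the defining condition becomes $d^{1}\circ d^{0}=W\cdot\mathrm{id}_{M^{0}}$ and $d^{0}\circ d^{1}=W\cdot\mathrm{id}_{M^{1}}$, where $W$ denotes the element being factored.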
If is a free module, then the pair is called a matrix factorization and we often refer to it by without explicitly mentioning the differential .
Remark 4.1.
If and are respectively matrices of the linear endomorphisms and , then the pair would be a matrix factorization of according to definition 2.1.
Definition 4.2.
(p.9 [8]) Morphism of linear factorizations
A morphism of linear factorizations and is an even (i.e., a grade preserving) linear map such that .
Concretely (see page 19 of [21]), is a pair of maps and such that the following diagram commutes:
Definition 4.3.
(p.9 [8]) homotopic linear factorizations
Let and be linear factorizations. Two morphisms are homotopic if there exists an odd linear map such that .
More precisely, the following diagram commutes:
i.e.,
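Written out (in the standard convention of [8], restated in our notation), two morphisms $\phi,\psi\colon(M,d_{M})\to(N,d_{N})$ are homotopic when there is an odd linear map $\lambda\colon M\to N$ with
$$\phi-\psi=d_{N}\circ\lambda+\lambda\circ d_{M}.$$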
Notations 4.1.
We keep the following notations used in [8]:
We denote by the category of linear factorizations of modulo homotopy.
We also denote by its full subcategory of matrix factorizations.
Furthermore, we write for the full subcategory of finite rank matrix factorizations, viz. the matrix factorizations whose underlying -module is free of finite rank.
Remark 4.2.
(p.9 of [8])
is idempotent complete ([5], [24]).
As earlier stated, we work with polynomials rather than power series, so is not necessarily idempotent complete [20]. The idempotent closure of (denoted by ) is a full subcategory of whose objects are those matrix factorizations which are direct summands of finite-rank matrix factorizations in the homotopy category. Moreover, is an idempotent complete category.
Taking the idempotent completion is necessary because the composition of -morphisms in results in matrix factorizations which, while not finite-rank, are summands in the homotopy category of something finite-rank. There are two natural ways to resolve this: work throughout with power series rings and completed tensor products, or work with idempotent completions.
Let () and () be elements of . The small category is defined as follows:
viz. a 1-morphism between two polynomials and is a matrix factorization of .
Then, given two composable cells and , we define their composition using Yoshino's tensor product as discussed in subsection 2.2: , which is a graded module, where:
the differential ([8]) is
where the tensor product is taken over and has the usual Koszul signs when applied to elements. That is;
(see p.28 [21]) where .
By remark 2.1.8 on p.29 of [6], is a free module of infinite rank over .
However, the argument of Section 12 of [12] shows that it is naturally isomorphic to a direct summand in the homotopy category of something finite-rank. So, we may define .
We now define the tensor product of morphisms of matrix factorizations. Let , be
objects of and , be
objects of . Let and be two morphisms, then we define their tensor product in the obvious way in .
With the above data, the composition (bi-)functor is entirely determined in our B-category:
The definition of the associativity morphism is easy to state. In fact, for , and , the associator is the 2-isomorphism
given by the usual formula
where , and .
Lemma 4.1.
is a category.
We now discuss the construction of the units in .
4.1 Unit morphisms in
Here, we will construct the identity 1-cells. We let .
From lemma 3.2, we have in particular that
where and .
The subscript in will be very often omitted for ease of notation.
We need an object in
or equivalently,
where where stands for .
Recall (cf. section 5.5 of [28]): the exterior algebra of a vector space over a field K is defined as the quotient of the tensor algebra by the two-sided ideal generated by all elements of the form for . Symbolically, . The exterior product of two elements of is the product induced by the tensor product of . That is, if
is the canonical surjection, and if a and b are in , then there are and in such that and , and . Let be formal symbols (that is, we declare those symbols to be linearly independent by definition). We consider the module:
This is an exterior algebra generated by anti-commuting variables, the ,
modulo the relations that the 's anti-commute, that is, . Typically, we will omit the wedge product and write, for instance, simply as . Here, the wedge product is taken over K, just like the tensor product.
A typical element of is or equivalently where and
.
as an algebra is finitely generated by the set of formal symbols .
as an module is generated by the set containing the empty list and all products of the form where .
The action of is the obvious one.
is endowed with the grading given by degree (where deg for each ). Thus .
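Concretely (a standard fact about exterior algebras, stated in our notation), the exterior algebra on $n$ anticommuting generators $\theta_{1},\dots,\theta_{n}$ is free of rank $2^{n}$ as a module, with basis
$$\{\,\theta_{i_{1}}\theta_{i_{2}}\cdots\theta_{i_{k}}\;:\;1\le i_{1}<i_{2}<\cdots<i_{k}\le n,\ 0\le k\le n\,\},$$
and the $\mathbb{Z}_{2}$-grading is given by the parity of $k$.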
Next, we define the differential as follows:
Where is the unique derivation extending the map and as mentioned in [8], it acts on an element of the exterior algebra by the Leibniz rule with Koszul signs. In fact,
where
signifies
that has been removed, and is the position of in
And:
is defined by,
where for ease of notation we wrote as argument of instead of the more cumbersome notation . The following lemma will be useful in the definition of the left and the right units of .
Lemma 4.2.
There is a canonical map of factorizations
given by
.
is in fact the composition of the projection to degree , followed by the multiplication map , where we endow with the trivial grading i.e., and .
4.2 The left and the right units of
In this subsection, we recall the definition of the left and right identities of the bicategory .
We will denote the right (respectively left) unit by (respectively ).
Consider a -morphism .
Thus, is a matrix factorization of and is also an module.
Let be the identity map and be the projection defined in lemma 4.2.
Remark 4.3.
Observe that any module can be considered as an module by letting where is a homomorphism of commutative rings.
It is easy to see that the module can be considered as an module via the following homomorphism of commutative unitary rings defined
by ,
and hence one can also see as an module by means of the following (multiplicative map which is a) homomorphism of commutative unitary rings defined by .
It is not difficult to see that the module can be considered as an module via the following homomorphism of commutative unitary rings defined
by .
Thanks to this remark, it makes sense to form the following tensor product over : and
since and can be viewed as modules. Consequently, we will simply write and for ease of notation.
Similarly, since the module and the module can be viewed as modules, we can form the module that we simply write as .
Also consider the map defined by . This definition makes sense since can be viewed as an module. is an isomorphism (See example 1 page 363 of [11]).
Now, define by and by .
and are clearly morphisms in .
is natural w.r.t. morphisms in the variable and there is no direct inverse
for , for each (See section 5.2 of [14]).
5 Morita contexts in
Before discussing the notion of Morita context in , we define what it is in an arbitrary bicategory.
Definition 5.1.
[25]
Let be a bicategory with natural isomorphisms a, r and l. Given two 0-cells A and B, we define a Morita Context between A and B as a four-tuple
consisting of two 1-cells and , and two 2-cells and such that the following diagrams commute
Equationally, we have:
1.
2.
Remark 5.1.
[25]
• Observe that any adjunction becomes a Morita context as soon as its unit is invertible.
• A Morita context is strict if both and in the foregoing definition are isomorphisms. Strict Morita contexts and adjoint equivalences are basically the same. One can switch between them by inverting the unit of the adjunction.
Nota Bene: It is perhaps good to mention that what is called a Morita context in this paper is instead called a wide right Morita context from to in [19]. In [25], it is also called an abstract bridge. Both authors note that the notion of left Morita context is defined by reversing the 2-cells. We will not deal with left Morita contexts in this work.
In the sequel, we will write (respectively ) for (), where and are indeterminates for .
Description of Morita context in
Let and be two objects of , that is, polynomials such that and . Throughout this section, unless otherwise stated, we keep the following remark and assumption in mind:
Remark 5.2.
Let and be morphisms of . We normally have:
1. i.e., is a matrix factorization which is a direct summand of a finite rank matrix factorization of .
2. i.e., is a matrix factorization which is a direct summand of a finite rank matrix factorization of .
3. i.e., is a finite rank matrix factorization of .
4. i.e., is a finite rank matrix factorization of .
Observe that the morphism as defined in the previous remark can also be a finite rank matrix factorization i.e., an object of which is a subcategory of . A similar observation holds for .
We would like to use determinants of matrices to discuss the notion of Morita context in ; therefore, it is important for us to deal with entities that are of finite rank. That is why we need the following assumption.
Assumption:
We restrict our study of Morita context in to those objects and which are such that the following morphisms are all finite rank matrix factorizations: , , and .
Since we will only be dealing with finite rank matrix factorizations, we will sometimes intentionally omit the phrase "finite rank" in front of the phrase "matrix factorization".
A Morita Context between objects and of is a four-tuple where:
• is a matrix factorization of .
• is a matrix factorization of .
• where is the identity on in , i.e., a matrix factorization of as seen in the previous section.
• such that the following diagrams (hereafter referred to as the diagrams) commute up to homotopy:
That is, the following two conditions hold:
1.
2.
where stands for the homotopy relation. Equivalently:
1. s.t. , where , , and
2. s.t. , where , , and
We would now like to find necessary conditions on and to obtain a Morita context in . We will use a matrix approach because it is easier to work with matrices than with linear transformations in our setting. Thus, we will have recourse to one of the properties of matrix factorizations studied earlier (cf. corollaries 3.1 and 3.2), namely that the matrices appearing in a matrix factorization of a nonzero polynomial are invertible over the relevant field of fractions.
In the sequel, unless otherwise stated, the matrices appearing in a matrix factorization are of a fixed size , so we will not bother to mention the sizes of the pairs of matrices constituting a matrix factorization.
Let and be pairs of matrices representing respectively the matrix factorizations and . We assume is not a constant polynomial, thus is not the zero polynomial and so, by corollaries 3.1 and 3.2, we have that (respectively ) is invertible over the field of fractions of (respectively over the field of fractions of ).
We know that
is a morphism of matrix factorizations if the following diagram commutes:
.
That is:
In matrix form:
where for ease of notation we wrote for the matrix of , .
Now, (respectively ) being a matrix factorization of (respectively ), we obtain from Yoshino's tensor product of matrix factorizations (cf. definition 1.2 of [29]) that is a factorization of . This means that .
From , since is invertible, we have:
Putting this in , since is invertible, we obtain:
We would now like to give a necessary condition on . The process to obtain this necessary condition is completely analogous to what we did in the case of . But we will present it here for the sake of clarity.
Let and be pair of matrices representing respectively the matrix factorizations and . We assume is not a constant polynomial, thus is not the zero polynomial and so, by corollaries 3.1 and 3.2, we have that and are invertible.
We know that
is a morphism of matrix factorizations if the following diagram commutes:
That is:
In matrix form:
where for ease of notation we wrote for the matrix of , .
Now, (respectively ) being a matrix factorization of (respectively ), we obtain from Yoshino's tensor product of matrix factorizations (cf. definition 1.2 of [29]) that is a factorization of . This means that .
From , since is invertible, we have:
Putting this in , since is invertible, we get:
We now gather our results in the following theorem.
Theorem 5.1.
Let
• and be two objects of .
• i.e., is a finite rank matrix factorization of .
• i.e., is a finite rank matrix factorization of .
such that and are finite rank matrix factorizations.
• i.e., is a finite rank matrix factorization of .
• i.e., is a finite rank matrix factorization of .
• Let and be pairs of matrices representing respectively the finite rank matrix factorizations and .
• Let and be pairs of matrices representing respectively the finite rank matrix factorizations and .
Then a necessary condition for to be a Morita Context is
We will now prove that these necessary conditions are not sufficient, thanks to an auxiliary lemma (lemma 5.1) which is proved using a celebrated result (Fact 5.1) on determinants of block matrices due to Schur (see, e.g., [26] and theorem 3 of [27]).
Fact 5.1.
If and are matrices, and
Then the following hold:
1. if
2. if
3. if
4. if
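For instance, one frequently used case of this result (theorem 3 of [27], restated here) says that if the blocks $C$ and $D$ commute, then
$$\det\begin{pmatrix}A & B\\ C & D\end{pmatrix}=\det\left(AD-BC\right).$$
This is the kind of identity used in the proof of lemma 5.1 below.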
The following auxiliary lemma states that the determinants of the matrices in the matrix factorizations and are all equal.
Lemma 5.1.
Let be an matrix factorization of and an matrix factorization of , where and (respectively and ) are matrices over (respectively ). These matrices can be considered as matrices over , and let
and
where each component is an endomorphism on .
Then
Furthermore, all the four determinants are equal.
Proof.
We will make use of fact 5.1(2) to compute the determinants of , , and , which are block matrices.
Looking at , in order to apply fact 5.1(2), we first observe that , and looking at we equally observe that ; hence we can compute the determinants of and as follows:
Likewise, in order to use fact 5.1(2) to compute , we observe from that , and looking at we equally observe that ; hence we can compute the determinants of and as follows:
Clearly, all the four determinants are equal. ∎
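The vanishing phenomenon behind theorem 5.2 can be checked symbolically. The sketch below is our own illustration: it takes the $2\times 2$ factorization of $f=x^{2}+y^{2}$ used earlier, negates one factor to get a factorization of $-f$, forms the two block matrices of the tensor product in the sign convention displayed after Definition 2.2, and confirms that their product vanishes and that both determinants are zero (the computation for the tensor product taken in the other order is identical).

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2

# (phi, psi) is a 2x2 matrix factorization of f; (phi2, psi2) is one of -f.
phi = sp.Matrix([[x, y], [-y, x]])
psi = sp.Matrix([[x, -y], [y, x]])
phi2, psi2 = -phi, psi                      # (-phi)*psi = psi*(-phi) = -f * I_2

kron, I2 = sp.kronecker_product, sp.eye(2)
# The two block matrices of the tensor product (same sign convention as in section 2.2)
T = sp.Matrix.vstack(sp.Matrix.hstack(kron(phi, I2), kron(I2, phi2)),
                     sp.Matrix.hstack(-kron(I2, psi2), kron(psi, I2)))
S = sp.Matrix.vstack(sp.Matrix.hstack(kron(psi, I2), -kron(I2, phi2)),
                     sp.Matrix.hstack(kron(I2, psi2), kron(phi, I2)))

assert (T * S).expand() == sp.zeros(8)      # matrix factorization of f + (-f) = 0
assert sp.expand(T.det()) == 0 and sp.expand(S.det()) == 0
```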
Remark 5.3.
It is good to observe that in the case of a Morita Context in , the and the we have in the above auxiliary lemma are actually additive inverses of each other by definition of 1-morphisms in . In fact, if is a morphism from the polynomial to the polynomial , then is a matrix factorization of . And if is a morphism from the polynomial to the polynomial , then is a matrix factorization of . We see that .
Thus, we have the following result which is actually a consequence of lemma 5.1.
Theorem 5.2.
Let and be two objects in and let and be 1-morphisms in . If and , then
Proof.
The proof follows immediately from the remark and lemma 5.1. ∎
It follows from this theorem that in theorem 5.1 is not invertible and so the equality will never have a unique solution for . This equality therefore admits several solutions, among which are the one(s) that suffice to yield a Morita context.
Remark 5.4.
The fact that is not invertible helps to see that
the necessary condition given for in theorem 5.1 is not sufficient. In fact, in the discussion that precedes the statement of that theorem, we cannot reverse the direction of the implication symbol from to . Indeed, suppose then since , we have implying which implies . Now, being noninvertible, we cannot obtain from here that which is . So, that necessary condition is not a sufficient one.
Similarly, since it also follows from theorem 5.2 that in theorem 5.1 is not invertible, the necessary condition given for in theorem 5.1 is not sufficient.
Though the necessary conditions we found are not sufficient, there is a trivial sufficient condition. In fact, it is easy to see that two equal morphisms of linear factorizations are homotopic, since it suffices to take the zero homotopy in definition 4.3. We immediately have the following remark, which gives a (trivial) sufficient condition to obtain a Morita context in .
Remark 5.5.
Let and be two objects of . Let be a matrix factorization of and a matrix factorization of . Then is a Morita Context in . That is, provided we have and , it suffices to take and in definition 5.1 to obtain a Morita Context in .
This follows from the fact that the maps and are morphisms of matrix factorizations and so, they are linear. Consequently, the image of zero under these morphisms is zero.
In fact:
If and , then:
1. and
2. , and
Hence, the diagrams commute up to homotopy if:
s.t. , where , and s.t. , where .
It now suffices to choose and to see that is a Morita Context in .
A straightforward consequence of theorem 5.1 and remark 5.5 is the following:
Corollary 5.1.
A necessary and sufficient condition on and for to be a Morita Context is .
Example 5.1.
Consider and . A Morita Context between and is a quadruple where:
• is a matrix factorization of . We take
since
• is a matrix factorization of . We take
since
• , viz. is the zero map.
• , viz. is the zero map.
So,
is a Morita context between and .
Remark 5.6.
1. It is good to mention that in such a setting (remark 5.5) not all Morita Contexts between two objects are identical. In fact, they differ at the level of the matrix factorizations and .
2. Intuitively, since Morita contexts are pre-equivalences, it is not interesting to study cases where the two polynomials and are equal.
6 Further problems
In this paper, we gave necessary conditions on and to obtain a Morita context in . But the only sufficient condition we were able to give for a -tuple to be a Morita context was the trivial one, i.e., . An interesting question would be to find nontrivial sufficient conditions on and .
Acknowledgments
Part of this work was carried out during my Ph.D. studies in mathematics at the University of Ottawa in Canada.
I am grateful to Prof. Dr. Richard Blute who was my Ph.D. supervisor for all the fruitful interactions.
I gratefully acknowledge the financial support of the Queen Elizabeth Diamond Jubilee scholarship during my Ph.D. studies.
References
- Amitsur, [1971] Amitsur, S. A. (1971). Rings of quotients and Morita contexts. Journal of Algebra, 17(2):273–298.
- Anderson and Fuller, [2012] Anderson, F. W. and Fuller, K. R. (2012). Rings and categories of modules, volume 13. Springer Science & Business Media.
- Bass, [1962] Bass, H. (1962). The Morita theorems. University of Oregon.
- Bass and Roy, [1967] Bass, H. and Roy, A. (1967). Lectures on topics in algebraic K-theory, volume 41. Tata Institute of Fundamental Research Bombay.
- Bökstedt and Neeman, [1993] Bökstedt, M. and Neeman, A. (1993). Homotopy limits in triangulated categories. Compositio Mathematica, 86(2):209–234.
- Camacho, [2015] Camacho, A. R. (2015). Matrix factorizations and the Landau-Ginzburg/conformal field theory correspondence. arXiv preprint arXiv:1507.06494.
- Carqueville and Murfet, [2015] Carqueville, N. and Murfet, D. (2015). A toolkit for defect computations in Landau-Ginzburg models. In Proc. Symp. Pure Math, volume 90, page 239.
- Carqueville and Murfet, [2016] Carqueville, N. and Murfet, D. (2016). Adjunctions and defects in Landau-Ginzburg models. Advances in Mathematics, 289:480–566.
- Conrad, [2016] Conrad, K. (2016). Tensor products. Lecture notes, available online.
- Crisler and Diveris, [2016] Crisler, D. and Diveris, K. (2016). Matrix factorizations of sums of squares polynomials. Available at: http://pages.stolaf.edu/diveris/files/2017/01/MFE1.pdf.
- Dummit and Foote, [2004] Dummit, D. S. and Foote, R. M. (2004). Abstract algebra, volume 3. Wiley Hoboken.
- Dyckerhoff et al., [2013] Dyckerhoff, T., Murfet, D., et al. (2013). Pushing forward matrix factorizations. Duke Mathematical Journal, 162(7):1249–1311.
- Eisenbud, [1980] Eisenbud, D. (1980). Homological algebra on a complete intersection, with an application to group representations. Transactions of the American Mathematical Society, 260(1):35–64.
- Fomatati, [2019] Fomatati, Y. B. (2019). Multiplicative Tensor Product of Matrix Factorizations and Some Applications. PhD thesis, Université d’Ottawa/University of Ottawa.
- Fomatati, [2021] Fomatati, Y. B. (2021). On tensor products of matrix factorizations. arXiv preprint arXiv:2105.10811.
- Goldie, [1958] Goldie, A. W. (1958). The structure of prime rings under ascending chain conditions. Proceedings of the London Mathematical Society, 3(4):589–608.
- Goldie, [1960] Goldie, A. W. (1960). Semi-prime rings with maximum condition. Proceedings of the London Mathematical Society, 3(1):201–220.
- Jacobson, [1964] Jacobson, N. (1964). Structure of rings, rev. ed. In Amer. Math. Soc. Colloq. Publ, volume 37.
- Kaoutit, [2006] Kaoutit, L. E. (2006). Wide Morita contexts in bicategories. arXiv preprint math/0608601.
- Keller et al., [2011] Keller, B., Murfet, D., and Van den Bergh, M. (2011). On two examples by Iyama and Yoshino. Compositio Mathematica, 147(2):591–612.
- Khovanov and Rozansky, [2008] Khovanov, M. and Rozansky, L. (2008). Matrix factorizations and link homology II. Geometry & Topology, 12(3):1387–1425.
- Lam, [1999] Lam, T.-Y. (1999). Lectures on Modules and Rings. Graduate Texts in Mathematics, vol. 189. Springer.
- Morita, [1958] Morita, K. (1958). Duality for modules and its applications to the theory of rings with minimum condition. Science Reports of the Tokyo Kyoiku Daigaku, Section A, 6(150):83–142.
- Neeman, [2001] Neeman, A. (2001). Triangulated categories. Princeton University Press.
- Pécsi, [2012] Pécsi, B. (2012). On Morita contexts in bicategories. Applied Categorical Structures, 20(4):415–432.
- Puntanen and Styan, [2005] Puntanen, S. and Styan, G. P. (2005). Historical introduction: Issai Schur and the early development of the Schur complement. In The Schur complement and its applications, pages 1–16. Springer.
- Silvester, [2000] Silvester, J. R. (2000). Determinants of block matrices. The Mathematical Gazette, 84(501):460–467.
- Smith, [2011] Smith, R. A. (2011). Introduction to vector spaces, vector algebras, and vector geometries. arXiv preprint arXiv:1110.3350.
- Yoshino, [1998] Yoshino, Y. (1998). Tensor products of matrix factorizations. Nagoya Mathematical Journal, 152:39–56.