Juha Kontinen, University of Helsinki, Department of Mathematics and Statistics, Helsinki, Finland, https://orcid.org/0000-0003-0115-5154. Funded by grants 308712 and 338259 of the Academy of Finland.
Yasir Mahmood, Leibniz Universität Hannover, Institut für Theoretische Informatik, Hannover, Germany, [email protected], https://orcid.org/0000-0002-8061-5376. Funded by the German Research Foundation (DFG), project ME4279/1-2.
Arne Meier, Leibniz Universität Hannover, Institut für Theoretische Informatik, Hannover, Germany, [email protected], https://orcid.org/0000-0002-5651-5391. Funded by the German Research Foundation (DFG), project ME4279/1-2.
2012 ACM Subject Classification: Theory of computation → Higher order logic; Theory of computation → Problems, reductions and completeness.
A Parameterized View on the Complexity of Dependence Logic
Abstract
In this paper, we investigate the parameterized complexity of model checking for dependence logic, a well-studied logic in the area of team semantics. We start with a list of nine immediate parameterizations of this problem, namely: the number of disjunctions (i.e., splits), of (free) variables, and of universal quantifiers, the formula size, the treewidth of the Gaifman graph of the input structure, the size of the universe and of the team, and the arity of the dependence atoms. We present a comprehensive picture of the parameterized complexity of model checking and obtain a division of the problem into tractable and various intractable degrees. Furthermore, we also consider the complexity of the most important variants (data and expression complexity) of the model checking problem, obtained by fixing parts of the input.
keywords:
Team Semantics, Dependence Logic, Parameterized Complexity, Model Checking

1 Introduction
In this article, we explore the parameterized complexity of model checking for dependence logic. We give a concise classification of this problem and its standard variants (expression and data complexity) with respect to several syntactic and structural parameters. Our results lay down a solid foundation for a systematic study of the parameterized complexity of team-based logics.
The introduction of dependence logic [27] in 2007 also marks the birth of the general semantic framework of team semantics, which has enabled a systematic study of various notions of dependence and independence during the past decade. Team semantics differs from Tarski's semantics by interpreting formulas by sets of assignments instead of a single assignment as in first-order logic. Syntactically, dependence logic is an extension of first-order logic by new dependence atoms expressing that the values of certain variables functionally determine the values of certain other variables (in the team under consideration). Soon after the introduction of dependence logic, many other interesting team-based logics and atoms were introduced, such as inclusion, exclusion, and independence atoms, which are intimately connected to the corresponding inclusion, exclusion, and multivalued dependencies studied in database theory [13, 9]. Furthermore, the area has expanded, e.g., to propositional, modal, and probabilistic variants (see [15, 19, 14] and the references therein).
For the applications, it is important to understand the complexity theoretic aspects of dependence logic and its variants. In fact, during the past few years, these aspects have been addressed in several studies. For example, on the level of sentences dependence logic and independence logic are equivalent to existential second-order logic while inclusion logic corresponds to positive greatest fixed point logic and thereby captures P over finite (ordered) structures [11]. Furthermore, there are (non-parameterized) studies that restrict the syntax and try to pin the intractability of a problem to a particular (set of) connective(s). For instance, Durand and Kontinen [5] characterize the data complexity of fragments of dependence logic with bounded arity of dependence atoms/number of universal quantifiers, and Grädel [12] characterizes the combined and the expression complexity of the model checking problem of dependence logic. These studies will be of great help in developing our parameterized approach.
A formalism to enhance the understanding of the inherent intractability of computational problems is brought by the framework of parameterized complexity [4]. Initiated by its founding fathers Downey and Fellows, this area within computational complexity theory strives for more structure within the darkness of intractability. Essentially, one tries to identify so-called parameters of a considered problem and to find algorithms with runtimes of the form f(k)·p(|x|) for inputs x, corresponding parameter values k, a computable function f, and a polynomial p. These kinds of runtimes are called FPT-runtimes (from fixed-parameter tractable, FPT for short) and tame the combinatorial explosion of the solution space by confining it to a function of the parameter. As a very basic example in this vein, we can consider the propositional satisfiability problem SAT. An immediate parameter that pulls the problem into the class FPT is the number of variables, as one can solve SAT in time 2^n·p(|φ|) if n is the number of variables of a given propositional formula φ. Yet, this parameter is not very satisfactory, as it is neither fixed nor slowly growing in practical instances. However, there are several other interesting parameters under which SAT becomes fixed-parameter tractable, e.g., the so-called treewidth of the underlying graph representations of the considered formula [26]. This term was coined by Robertson and Seymour in 1984 [25] and has established a profound position (currently DBLP lists 812 papers with treewidth in their title), also in the area of parameterized complexity, over the last years [3, 4].
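To make this runtime shape concrete, the following small Python sketch (our own illustration, not taken from the literature) decides satisfiability of a CNF formula by enumerating all assignments of its variables; the exponential factor 2^n depends only on the parameter n, while every single check is polynomial in the input size.

```python
from itertools import product

def brute_force_sat(clauses):
    """Decide satisfiability of a CNF formula.

    `clauses` is a list of clauses; a clause is a list of non-zero integers,
    where i stands for the variable x_i and -i for its negation (DIMACS style).
    Runtime: 2^n * poly(input size), i.e., FPT in the number n of variables.
    """
    variables = sorted({abs(lit) for clause in clauses for lit in clause})
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        # a clause is satisfied if some literal evaluates to True
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(brute_force_sat([[1, -2], [2, 3], [-1, -3]]))  # True
```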
Coming back to FPT-runtimes, a runtime of a very different quality (yet still polynomial for fixed parameter values) than FPT is summarized by the complexity class XP: |x|^f(k) for inputs x, corresponding parameter values k, and a computable function f. Furthermore, the class XNP, defined analogously to XP but on nondeterministic machines, will be of interest in this paper. Further up in the hierarchy, classes of the form para-C for a classical complexity class C play a role in this paper. Such classes intuitively capture all problems that are in the complexity class C after fpt-time preprocessing. Fig. 1 depicts an overview of these classes and their relations (for further details see, e.g., the work of Elberfeld et al. [7]).
Recently, the propositional variant of dependence logic (PDL) has been investigated regarding its parameterized complexity [23, 20]. Moreover, propositional independence and inclusion logic have also been studied from the perspective of parameterized complexity [21]. In this paper, we further pursue the parameterized journey through the world of team logics and visit the model checking problem of first-order dependence logic. As this paper is the first one that investigates this problem from the parameterized point of view, we need to gather the existing literature and revisit many results particularly from this perspective. As a result, this paper can be seen as a systematic study, with some of the results following in a straightforward manner from the known non-parameterized results and some shedding light also on the non-parameterized view of model checking.
We give an example below to illustrate how the concept of dependence arises as a natural phenomenon in the physical world.
Example 1.1.
Flight | Destination | Gate | Date | Time |
---|---|---|---|---|
FIN-70 | HEL – FI | C1 | 04.10.2021 | 09:55 |
SAS-475 | OSL – NO | C3 | 04.10.2021 | 12:25 |
SAS-476 | HAJ – DE | C2 | 04.10.2021 | 12:25 |
FIN-80 | HEL – FI | C1 | 04.10.2021 | 19:55 |
KLM-615 | ATL – USA | A5 | 05.10.2021 | 11:55 |
QR-70 | DOH – QR | B6 | 05.10.2021 | 12:25 |
THY-159 | IST – TR | A1 | 05.10.2021 | 15:55 |
FIN-80 | HEL – FI | C1 | 05.10.2021 | 19:55 |
The database in Table 1 presents a screen at an airport showing details about departing flights. Alternatively, it can be seen as a team with the attributes in the top row as variables. Clearly, the dependence atom =(Flight; Destination) holds in this team, as does =(Flight; Gate). In contrast, =(Flight; Date) fails, as witnessed by the pair of rows (FIN-80, HEL – FI, C1, 04.10.2021, 19:55) and (FIN-80, HEL – FI, C1, 05.10.2021, 19:55).
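As a small illustration (ours, not part of the original example), the following Python snippet checks a dependence atom =(x̄; ȳ) over a team given as a list of dictionaries: the atom holds iff no two assignments agree on x̄ but disagree on ȳ.

```python
def dep_holds(team, xs, ys):
    """Return True iff the values of xs functionally determine the values of ys
    in the given team (a list of dicts mapping attributes to values)."""
    seen = {}
    for s in team:
        key = tuple(s[x] for x in xs)
        val = tuple(s[y] for y in ys)
        if seen.setdefault(key, val) != val:
            return False
    return True

team = [
    {"Flight": "FIN-80", "Destination": "HEL", "Gate": "C1", "Date": "04.10.2021", "Time": "19:55"},
    {"Flight": "FIN-80", "Destination": "HEL", "Gate": "C1", "Date": "05.10.2021", "Time": "19:55"},
]
print(dep_holds(team, ["Flight"], ["Gate"]))  # True
print(dep_holds(team, ["Flight"], ["Date"]))  # False: same flight, two dates
```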
Contribution.
Our classification is two-dimensional:
1. We consider the model checking problem of dependence logic under various parameterizations: the number of split-junctions in the formula, the length of the formula, the number of free variables, the treewidth of the structure, the size of the structure, the size of the team, the number of universal quantifiers in the formula, the arity of the dependence atoms, as well as the total number of variables.
2. We distinguish between expression complexity (the input structure and the team are fixed), data complexity (the formula is fixed), and combined complexity (nothing is fixed).
The results are summarized in Table 2. For instance, the parameters formula size and total number of variables yield fixed-parameter tractability for the expression complexity (but not for the data or the combined complexity), while the parameter universe size does so for the data complexity but not for the expression or the combined complexity.
Related work.
The parameterized complexity analyses in the propositional setting [23, 20, 21] have considered the combined complexity of model checking and satisfiability as problems of interest. On the model checking side, the picture there is somewhat different: e.g., the team size as a parameter for propositional dependence logic enables an FPT algorithm, while in our setting it has no effect on the complexity (paraNEXP-completeness). Grädel [12] studied the expression and the combined complexity for dependence logic in the classical setting, whereas the data complexity was considered by Kontinen [16].
Organization of the paper.
Section 2 introduces the required notions from parameterized complexity and team semantics, Section 3 contains our complexity results, and Section 4 concludes.
2 Preliminaries
We require standard notions from classical complexity theory [24]. We encounter the classical complexity classes P, NP, PSPACE, and NEXP together with their respective completeness notions, employing polynomial-time many-one reductions.
Parameterized Complexity Theory.
A parameterized problem (PP) is a subset of Σ*×ℕ, the cross product of an alphabet Σ and the natural numbers. For an instance (x, k), k is called the (value of the) parameter. A parameterization is a polynomial-time computable function that maps an input x to its corresponding parameter value k. A PP Q is said to be fixed-parameter tractable (or in the class FPT) if there exists a deterministic algorithm A and a computable function f such that for all instances (x, k), the algorithm A correctly decides the membership of (x, k) in Q and runs in time f(k)·p(|x|) for a polynomial p. The problem Q belongs to the class XP if A runs in time |x|^f(k) on a deterministic machine, whereas XNP is the nondeterministic counterpart of XP. Abusing notation a little, we write C-machine for the type of machine that decides languages in the class C, and we say a function is "C-computable" if it can be computed by a machine on which the resource bounds of the class C are imposed.
Also, we work with classes that can be defined via a precomputation on the parameter.
Definition 2.1.
Let C be any complexity class. Then para-C is the class of all PPs Q ⊆ Σ*×ℕ such that there exists a computable function π: ℕ → Δ* and a language L ∈ C with L ⊆ Σ*×Δ* such that for all (x, k) ∈ Σ*×ℕ we have that (x, k) ∈ Q if and only if (x, π(k)) ∈ L.
Notice that para-P = FPT. In this paper, we make use of the complexity classes paraNP, paraPSPACE, and paraNEXP.
A problem is in the complexity class W[P] if it can be decided by a nondeterministic Turing machine running in at most f(k)·p(|x|) steps, of which at most g(k)·log|x| are nondeterministic, where f, g are computable functions and p is a polynomial. Moreover, W[P] is contained in the intersection of paraNP and XP (for details see the textbook of Flum and Grohe [8]).
Let l ∈ ℕ and let Q be a PP; then the l-slice of Q, written Q_l, is defined as Q_l := { x | (x, l) ∈ Q }. Notice that Q_l is then a classical (non-parameterized) problem. Observe that, regarding the complexity classes para-C studied here, in order to show membership of a PP in para-C it suffices to show that the underlying classical problem (ignoring the parameter) belongs to C.
Definition 2.2.
Let P and Q be two PPs. One says that P is fpt-reducible to Q, written P ≤^fpt Q, if there exists an FPT-computable function f such that
• for all instances (x, k) we have that (x, k) ∈ P if and only if f(x, k) ∈ Q, and
• there exists a computable function g such that for all (x, k) with f(x, k) = (x', k') we have that k' ≤ g(k).
Finally, in order to show that a problem Q is para-C-hard (for some complexity class C), it is enough to prove that for some l ∈ ℕ the slice Q_l is C-hard in the classical setting.
Dependence Logic.
We assume basic familiarity with predicate logic [6]. We consider first-order vocabularies τ that are sets of function symbols and relation symbols, together with the equality symbol =. Let Var be a countably infinite set of first-order variables. Terms over τ are defined in the usual way, and the set of well-formed formulas of first-order logic (FO, in negation normal form) is defined by the following BNF:
φ ::= t1 = t2 | t1 ≠ t2 | R(t1, …, tn) | ¬R(t1, …, tn) | φ ∧ φ | φ ∨ φ | ∃x φ | ∀x φ,
where t1, …, tn are terms over τ, R is an n-ary relation symbol from τ, and x ∈ Var. If φ is a formula, then we use Var(φ) for its set of variables and Fr(φ) for its set of free variables. We evaluate τ-formulas in τ-structures, which are pairs of the form A = (A, τ^A), where A is the domain of the structure (when clear from the context, we write A for both the structure and its domain), and τ^A interprets the function and relation symbols in the usual way. If t1, …, tn are terms, then we write t̄ for the tuple (t1, …, tn).
Dependence logic (D, for short) extends FO by dependence atoms of the form =(t̄; ū), where t̄ and ū are tuples of terms. The semantics is defined through the concept of a team. Let A be a structure and X ⊆ Var; then an assignment s is a mapping s: X → A.
Definition 2.3.
Let X ⊆ Var. A team T in A with domain X is a set of assignments s: X → A.
For a team T with domain X and a set V ⊆ X, define its restriction to V as T↾V := { s↾V | s ∈ T }. If s is an assignment, x a variable, and a ∈ A, then s[a/x] is the assignment that maps x to a and agrees with s otherwise. Let T be a team in A with domain X and let F: T → A; we call F a supplementing function of T. It is used to extend or modify T to the supplementing team T[F/x] := { s[F(s)/x] | s ∈ T }. For the case that F is the constant function with value a, we simply write T[a/x] for T[F/x]; moreover, T[A/x] := { s[a/x] | s ∈ T, a ∈ A }. For a term t and an assignment s, we write t⟨s⟩ for the value of t under s in A, and t̄⟨s⟩ for the corresponding tuple of values. The semantics of D-formulas is defined as follows.
Definition 2.4.
Let τ be a vocabulary, A a τ-structure, and T a team over A whose domain contains the free variables of the formula in question. Then,
A ⊨_T t1 = t2  iff  for all s ∈ T: t1⟨s⟩ = t2⟨s⟩,
A ⊨_T t1 ≠ t2  iff  for all s ∈ T: t1⟨s⟩ ≠ t2⟨s⟩,
A ⊨_T R(t̄)  iff  for all s ∈ T: t̄⟨s⟩ ∈ R^A,
A ⊨_T ¬R(t̄)  iff  for all s ∈ T: t̄⟨s⟩ ∉ R^A,
A ⊨_T =(t̄; ū)  iff  for all s, s' ∈ T: t̄⟨s⟩ = t̄⟨s'⟩ implies ū⟨s⟩ = ū⟨s'⟩,
A ⊨_T φ ∧ ψ  iff  A ⊨_T φ and A ⊨_T ψ,
A ⊨_T φ ∨ ψ  iff  there are teams T1, T2 with T1 ∪ T2 = T such that A ⊨_{T1} φ and A ⊨_{T2} ψ,
A ⊨_T ∃x φ  iff  A ⊨_{T[F/x]} φ for some supplementing function F: T → A,
A ⊨_T ∀x φ  iff  A ⊨_{T[A/x]} φ.
Notice that we only consider formulas in negation normal form (NNF), as any formula of dependence logic can be transformed into a logically equivalent formula in NNF. Further note that A ⊨_∅ φ for every formula φ (this is also called the empty team property). Furthermore, D-formulas are local, that is, for a team T in A over domain X ⊇ Fr(φ) and a D-formula φ, we have that A ⊨_T φ if and only if A ⊨_{T↾Fr(φ)} φ. Finally, every D-formula φ satisfies: if A ⊨_T φ, then A ⊨_P φ for every P ⊆ T. This property is known as downwards closure.
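The following Python sketch (our own illustration; the formula encoding and function names are ours) evaluates formulas of this kind directly along Definition 2.4 over a finite structure. It enumerates all splits of the team for the disjunction and all supplementing functions for the existential quantifier, which is exactly the source of the exponential blow-ups analysed in Section 3.

```python
from itertools import combinations, product

def satisfies(A, rels, team, phi):
    """Naive evaluator following Definition 2.4 (strict team semantics).

    A:    list of universe elements,
    rels: dict mapping relation names to sets of tuples over A,
    team: list of assignments, each a dict mapping variables to elements,
    phi:  formula as nested tuples, e.g.
          ("or", ("dep", ("x",), ("y",)), ("rel", "R", ("x", "y"))).
    Equality atoms are omitted for brevity; they can be encoded as a relation.
    """
    op = phi[0]
    if op == "rel":          # relational atom: every assignment must satisfy it
        _, name, xs = phi
        return all(tuple(s[x] for x in xs) in rels[name] for s in team)
    if op == "nrel":         # negated relational atom
        _, name, xs = phi
        return all(tuple(s[x] for x in xs) not in rels[name] for s in team)
    if op == "dep":          # =(xs; ys): the xs functionally determine the ys
        _, xs, ys = phi
        seen = {}
        for s in team:
            key = tuple(s[x] for x in xs)
            val = tuple(s[y] for y in ys)
            if seen.setdefault(key, val) != val:
                return False
        return True
    if op == "and":
        return satisfies(A, rels, team, phi[1]) and satisfies(A, rels, team, phi[2])
    if op == "or":           # split junction; disjoint splits suffice by downward closure
        for r in range(len(team) + 1):
            for left in combinations(team, r):
                right = [s for s in team if s not in left]
                if satisfies(A, rels, list(left), phi[1]) and satisfies(A, rels, right, phi[2]):
                    return True
        return False
    if op == "exists":       # try every supplementing function F: team -> A
        _, x, psi = phi
        for choice in product(A, repeat=len(team)):
            supplemented = [{**s, x: a} for s, a in zip(team, choice)]
            if satisfies(A, rels, supplemented, psi):
                return True
        return False
    if op == "forall":       # duplicate every assignment with all values for x
        _, x, psi = phi
        duplicated = [{**s, x: a} for s in team for a in A]
        return satisfies(A, rels, duplicated, psi)
    raise ValueError(f"unknown connective: {op}")
```

Note the cost: every split inspects up to 2^|T| subteams and every existential quantifier up to |A|^|T| supplementing functions, matching the intractability phenomena discussed below.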
Definition 2.5 (Gaifman graph).
Given a vocabulary τ and a τ-structure A, the Gaifman graph G_A = (A, E) of A is defined by
E := { {a, b} | a ≠ b, and there is a relation symbol R ∈ τ and a tuple c̄ ∈ R^A such that a and b both occur in c̄ }.
That is, two elements a and b are adjacent whenever there is a relation R of some arity n such that a and b appear together in a tuple of R^A.
Intuitively, the Gaifman graph of a structure A is an undirected graph with the universe of A as vertices that connects two vertices whenever they share a tuple in a relation (see also Fig. 2).
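A minimal Python sketch (ours) of this construction; the toy relations are loosely modelled on Example 2.7 below and are purely illustrative.

```python
def gaifman_graph(universe, relations):
    """Build the Gaifman graph: vertices = universe, edges between distinct
    elements occurring together in some tuple of some relation.
    `relations` maps relation names to sets of tuples over `universe`."""
    edges = set()
    for tuples in relations.values():
        for tup in tuples:
            for a in tup:
                for b in tup:
                    if a != b:
                        edges.add(frozenset((a, b)))
    return set(universe), edges

# illustrative data: a binary "same company" relation and a ternary reservation
universe = {"FIN-70", "FIN-80", "C1", "09:55", "19:55"}
relations = {
    "SameCompany": {("FIN-70", "FIN-80")},
    "Reserved": {("C1", "FIN-70", "09:55"), ("C1", "FIN-80", "19:55")},
}
V, E = gaifman_graph(universe, relations)
print(len(E))  # each ternary tuple contributes a triangle of edges
```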
Definition 2.6 (Treewidth).
A tree decomposition of a given graph G = (V, E) is a tree T = (B, E_T), where the vertex set B is a collection of bags B_i ⊆ V and E_T is the edge relation, such that the following is true:
• the union of all bags equals V,
• for every edge {u, v} ∈ E there is a bag B_i with u, v ∈ B_i, and
• for every v ∈ V, the restriction of T to the bags containing v (the subtree induced by all bags containing v) is connected.
The width of a given tree decomposition is the size of the largest bag minus one: max_i |B_i| − 1. The treewidth of a given graph G is the minimum over the widths of all tree decompositions of G.
Observe that if G is a tree (with at least one edge), then the treewidth of G is one. Intuitively, treewidth is accordingly a measure of the tree-likeness of a given graph.
Example 2.7.
Consider the database from our previous example. Recall that the universe consists of the entries of the rows. Let the vocabulary include a binary relation (two flights are owned by the same company) and a ternary relation (a gate is reserved by a flight at a given time). For simplicity, we only consider the first four rows with the corresponding three columns from Table 1; see Figure 2 for an illustration. Since the largest bag in our decomposition has size 3, the width of this decomposition is 2. Furthermore, the presence of cycles (induced, e.g., by the ternary relation) shows that there is no decomposition of smaller width. As a consequence, the given structure has treewidth 2.
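In practice one usually works with upper bounds on the treewidth computed by heuristics. The following sketch (ours) applies the min-fill-in heuristic shipped with networkx, an upper-bound heuristic rather than an exact algorithm, to a small graph in the spirit of the Gaifman graph above; the vertex names are only illustrative.

```python
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_fill_in

# A ternary tuple in the structure yields a triangle in the Gaifman graph,
# so the treewidth is at least 2; the heuristic returns an upper bound.
G = nx.Graph()
G.add_edges_from([
    ("C1", "FIN-70"), ("C1", "09:55"), ("FIN-70", "09:55"),  # triangle
    ("C1", "FIN-80"), ("C1", "19:55"), ("FIN-80", "19:55"),  # triangle
    ("FIN-70", "FIN-80"),
])
width, decomposition = treewidth_min_fill_in(G)
print(width)                      # prints 2 for this small graph
print(list(decomposition.nodes))  # bags of the computed tree decomposition
```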
The decision problem of determining whether the treewidth of a given graph is at most k is NP-complete [1]. See Bodlaender's guide [2] for an overview of algorithms that compute tree decompositions. When considering the parameter treewidth, one usually assumes it to be given and does not need to compute it. We consider only the model checking problem and two of its variants in this paper. First, let us define the most general version.
Problem: | combined complexity of model checking
---|---
Input: | a structure A, a team T, and a τ-formula φ of dependence logic.
Question: | A ⊨_T φ?
We further consider the following two variants of the model checking problem.
Problem: | data complexity of model checking (the formula φ is fixed)
---|---
Input: | a structure A and a team T.
Question: | A ⊨_T φ?
Problem: | expression complexity of model checking (the structure A and the team T are fixed)
---|---
Input: | a τ-formula φ of dependence logic.
Question: | A ⊨_T φ?
List of Parameterizations.
Now let us turn to the parameters that are under investigation in this paper. We study the model checking problem of dependence logic under nine parameters that naturally occur in an instance. Let (A, T, φ) be an instance, where φ is a τ-formula, A is a τ-structure, and T is a team over A. The parameter #splits denotes the number of occurrences of the split operator (∨) in φ, and #∀ is the number of universal quantifiers in φ. Moreover, #var (resp., #fv) denotes the total number of (free) variables in φ. The parameter |φ| is the size of the input formula φ, and similarly the two other size parameters are the universe size |A| and the team size |T|. The treewidth of the structure A (see Def. 2.6) is defined as the treewidth of its Gaifman graph and is denoted by tw. Note that a formula using dependence atoms =(t̄; ū) can be translated into a formula using only dependence atoms =(t̄; u), where u is a single term (via conjunctions). That is why the arity of a dependence atom =(t̄; u) is defined as the length of t̄, and arity denotes the maximum arity of any dependence atom in φ.
Let κ be any of these parameterizations; we then speak of the respective model checking problem parameterized by κ. If more than one parameterization is considered, we take their combination as the parameter, written, e.g., (κ1, κ2). Finally, notice that the formula is fixed for the data complexity variant, so |φ| is constant there. That is, bounding this parameter does not make sense for the data complexity, as the problem remains NP-complete.
3 Complexity results
Table 2: Complexity overview (data, expression, and combined complexity under each parameterization).

Parameter | data complexity | expression complexity | combined complexity
---|---|---|---
#splits | paraNP-complete | paraPSPACE-hard | paraPSPACE-hard
#∀ | paraNP-complete | paraNP-complete | paraNP-hard, in XNP (sentences)
arity | paraNP-complete | paraPSPACE-complete | paraPSPACE-hard
#var | paraNP-complete | FPT | paraNP-complete
#fv | paraNP-complete | paraNEXP-complete | paraNEXP-complete
formula size | — | FPT | paraNP-complete
universe size | FPT | paraNEXP-complete | paraNEXP-complete
team size | paraNP-complete | paraNEXP-complete | paraNEXP-complete
treewidth | paraNP-complete | paraNEXP-complete | paraNEXP-complete
We begin by proving relationships between various parameterizations.
Lemma 3.1.
The following relations among the parameters hold.
1. |φ| bounds each of #splits, #∀, arity, #fv, and #var, for any input formula φ.
2. For the data complexity variant, |T| is polynomially bounded in |A|. Moreover, for every structure A, tw ≤ |A| − 1.
3. For the expression complexity variant, #fv is constant.
Proof 3.2.
1. Clearly, the size of the formula bounds all parts of it, including the parameters mentioned in the list.
2. Notice that for the data complexity, the formula φ and consequently the number ℓ of free variables of φ is fixed. Moreover, due to the locality of D, it holds that A ⊨_T φ iff A ⊨_{T↾Fr(φ)} φ; that is, the team can be considered only over the free variables of φ. This implies that the team size is polynomially bounded by the universe size, as |T↾Fr(φ)| ≤ |A|^ℓ. Finally, the bound tw ≤ |A| − 1 follows from Definition 2.6, since in the worst case all universe elements belong to one bag of the decomposition.
3. Notice that the team T is fixed for the expression complexity. Together with the locality of D-formulas (see Def. 2.4), this implies that the domain of T (which must contain the set of free variables of the input formula φ) is also fixed and, as a result, #fv is of constant size.
Remark 3.3.
If the number of free variables (#fv) of a formula is bounded, then the total number of variables (#var) is not necessarily bounded; on the other hand, bounding #var also bounds #fv.
3.1 Data complexity
Classically, the data complexity of model checking for a fixed D-formula is NP-complete [27].
Proposition 3.4.
For a fixed formula, the problem of deciding whether an input structure and a team satisfy the formula is NP-complete. That is, the data complexity of dependence logic is NP-complete.
In this section, we prove that none of the considered parameters lowers this complexity, except |A|. The proof relies on the fact that the complexity of model checking is NP-complete already for a very simple formula (see below).
Lemma 3.5.
Let κ ∈ {#splits, #∀, arity, #fv, #var, tw}. Then the data complexity of model checking, parameterized by κ, is paraNP-complete.
Proof 3.6.
The upper bound follows from Proposition 3.4. Kontinen [16, Theorem 4.9] proves that the data complexity for a fixed D-formula, consisting of a split-disjunction of three dependence atoms, is already NP-complete. For clarity, we briefly sketch the reduction presented by Kontinen [16]. Let ψ be an instance of 3SAT. Consider a suitable structure A over the empty vocabulary (so A is just a finite set). The team T is constructed over variables that take values from A; intuitively, each assignment records a propositional variable, a parity, a clause, and a position within that clause. As an example, a single clause of ψ gives rise to the assignments indicated in Table 3.
‘variable’ | ‘parity’ | ‘clause’ | ‘position’ |
---|---|---|---|
Notice that a truth assignment for ψ is constructed using the division of T according to the splits. That is, ψ ∈ 3SAT if and only if T = T_1 ∪ T_2 ∪ T_3 such that A ⊨_{T_i} δ_i for every i, where δ_i denotes the i-th dependence atom. Intuitively, for each clause one literal is chosen in such a way that it satisfies this clause, whereas the remaining two literals of the clause are allowed to take values that do not satisfy it. As a consequence, each clause is satisfied by the assignment obtained in this way, which proves correctness.
This implies that, for each parameter κ from the statement except tw, the slice corresponding to the (constant) value of κ in this fixed formula is NP-complete. Consequently, the paraNP-hardness for these cases follows. Finally, the case of tw also follows, because the vocabulary of the reduced structure is empty; as a consequence, Definition 2.6 trivially yields a tree decomposition of width 0, as no elements of the universe are related.
This completes the proof of the lemma.
Remark 3.7.
Recall that |φ| as a parameter does not make sense for the data complexity, as the formula is not part of the input: it is already fixed, which is stronger than bounding its size.
We now prove the only tractable case for the data complexity.
Lemma 3.8.
The data complexity of model checking, parameterized by |A|, is in FPT.
Proof 3.9.
Let φ be the fixed formula and let ℓ := |Fr(φ)|. By locality we may assume that dom(T) = Fr(φ), so |T| ≤ |A|^ℓ, which is bounded by a function of the parameter |A| alone. Likewise, every team, split, and supplementing function encountered while evaluating the fixed formula φ ranges over a set whose size is bounded by a function of |A|. A brute-force evaluation along Definition 2.4 therefore runs in time f(|A|)·p(|input|) for a computable function f and a polynomial p (the polynomial factor accounts for looking up the relations of A), which yields membership in FPT.
Lemma 3.10.
The data complexity of model checking, parameterized by the team size |T|, is paraNP-complete.
Proof 3.11.
For a fixed sentence φ (that is, a formula with no free variables), all models A, and all nonempty teams T, we have that A ⊨_T φ if and only if A ⊨_{{∅}} φ. As a result, the problem reduces in polynomial time to the model checking problem with |T| = 1. Consequently, the 1-slice of the problem is NP-complete, because model checking for a fixed D-sentence is also NP-complete [27]. This gives paraNP-hardness.
For the membership, note that given a structure A and a team T, for a fixed formula φ the question whether A ⊨_T φ holds is in NP (Proposition 3.4). Consequently, we obtain paraNP-membership.
A comparison with propositional dependence logic (PDL) at this point might be interesting. If the formula size is a parameter, then model checking for PDL can be solved in FPT time [20]. However, this is not the case for dependence logic, even if the formula is fixed in advance.
3.2 Expression and Combined Complexity
Now we turn towards the expression and the combined complexity of model checking for dependence logic. Here again, in most cases the problem is still intractable for the combined complexity. However, the expression complexity, when parameterized by the formula size (|φ|) or the total number of variables (#var), is in FPT. Similar to the previous section, we first present results that translate directly from the known reductions proving NEXP-completeness in the classical setting.
Lemma 3.12.
Let κ ∈ {#fv, |A|, |T|, tw}. Then both the expression and the combined complexity of model checking, parameterized by κ, are paraNEXP-complete.
Proof 3.13.
In the classical setting, NEXP-completeness of the expression and the combined complexity for dependence logic was shown by Grädel [12, Theorem 5.1]. This immediately gives membership in paraNEXP. Interestingly, the universe in the reduction is of constant size, the vocabulary is empty, and the formula obtained is a D-sentence. This implies that the slices corresponding to these constant values, for |A|, for tw, for |T|, and for #fv (the number of free variables), are NEXP-complete. As a consequence, paraNEXP-hardness for the mentioned cases follows, and this completes the proof.
For the number of splits as a parameterization, we only know that the problem is also highly intractable; the precise complexity remains open for now.
Lemma 3.14.
Both the expression and the combined complexity of model checking, parameterized by #splits, are paraPSPACE-hard.
Proof 3.15.
Consider the equivalence of first-order model checking to the quantified constraint satisfaction problem (QCSP) [22, p. 418], that is, the fragment of FO in which only the operations ∧, ∃, and ∀ are allowed. QCSP asks whether a conjunction of quantified constraints (τ-relations) is true in a fixed τ-structure A, and this problem is PSPACE-hard. This implies that already in the absence of the split operator (and even when there are no dependence atoms), the model checking problem is PSPACE-hard. Consequently, the 0-slice with respect to #splits is PSPACE-hard for both variants, and the mentioned results follow.
The formula size as a parameter exhibits a different behaviour depending on whether we consider the expression or the combined complexity.
Lemma 3.16.
The combined complexity of model checking, parameterized by |φ|, is paraNP-complete.
Proof 3.17.
Notice that, due to Lemma 3.1, the size of a formula φ also bounds the maximum number of free variables of any subformula of φ. This gives the membership in conjunction with [12, Theorem 5.1]: the combined complexity of dependence logic is NP-complete if the maximum number of free variables of any subformula of the input is fixed. The lower bound follows from the construction by Kontinen [16] (see also Lemma 3.5), since for a fixed formula (of fixed size) the problem is already NP-complete.
Lemma 3.18.
The expression complexity of model checking, parameterized by |φ|, is in FPT.
Proof 3.19.
Recall that for the expression complexity, the team and the structure are fixed, whereas the size of the input formula is the parameter. The result follows trivially, because any decidable PP is in FPT when the input size is bounded by a function of the parameter.
The expression complexity with respect to the number of universal quantifiers as a parameter drops to paraNP-completeness, which is still intractable but much lower than paraNEXP-completeness. However, regarding the combined complexity, we can only prove membership in XNP together with a paraNP lower bound.
Lemma 3.20.
The expression complexity of model checking, parameterized by #∀, is paraNP-complete.
Proof 3.21.
We first prove the lower bound through a reduction from the satisfiability problem for propositional dependence logic (PDL): given a PDL-formula φ, is there a team T such that T ⊨ φ? Let φ be a PDL-formula over the propositional variables p_1, …, p_n. For every i ≤ n, let x_i denote a first-order variable corresponding to the proposition p_i. Let B be a fixed structure over the empty vocabulary. Clearly, φ is satisfiable if and only if φ* is satisfiable over B, where φ* is the D-formula obtained from φ by simply replacing each proposition p_i by the variable x_i. Notice that the reduced formula does not contain any universal quantifier, that is, #∀ = 0. This gives paraNP-hardness, since satisfiability for PDL is NP-complete [18].
For the membership, notice that a D-sentence with q universal quantifiers can be reduced in polynomial time to an ESO-sentence of the form ∃f_1⋯∃f_n ∀x_1⋯∀x_q ψ [5, Cor. 3.9], where ψ is a quantifier-free FO-formula and each function symbol f_i is at most q-ary. Finally, A ⊨ φ if and only if A ⊨ ∃f_1⋯∃f_n ∀x_1⋯∀x_q ψ, where the latter question can be solved by guessing an interpretation for each function symbol f_i and then verifying the first-order part, which for the fixed structure A takes time |A|^q·poly(|ψ|). This guessing can be achieved in paraNP-time for a fixed structure (as we consider the expression complexity). Consequently, the membership in paraNP follows. Notice that the arity of the function symbols in the paraNP-membership argument above is bounded by q if φ is a D-sentence. However, if φ is a D-formula with free variables, then the arity of the function symbols as well as the number of universal quantifiers in the reduction are both bounded by k + q, where k = #fv and q = #∀. Nevertheless, recall that for the expression complexity the team is also fixed and, by Lemma 3.1, the set of free variables of φ has constant size. This implies that the reduction above provides an ESO-sentence with at most k + q universal quantifiers as well as function symbols of arity at most k + q. Guessing the interpretations of the functions therefore still takes paraNP-time (because k is constant), and consequently we obtain paraNP-membership for open formulas as well.
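To make the counting explicit (this back-of-the-envelope bound is our addition and not part of the original proof), guessing the interpretations of the n ≤ |φ| function symbols of arity at most q requires at most

```latex
\[
  n \cdot |A|^{\,q} \cdot \lceil \log |A| \rceil
  \;\le\; |\varphi| \cdot |A|^{\,q} \cdot \lceil \log |A| \rceil
\]
```

nondeterministic bits. For a fixed structure A this is at most f(q)·|φ| for a computable f, hence paraNP suffices; if A is part of the input, the bound is only of the form |A|^f(q)·poly(|φ|), which is exactly why the combined complexity in Lemma 3.23 below only lands in XNP (for sentences).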
The following corollary immediately follows from the proof above.
Corollary 3.22.
For sentences of dependence logic, the expression complexity parameterized by #∀ remains paraNP-complete.
Lemma 3.23.
The combined complexity of model checking, parameterized by #∀, is paraNP-hard. Moreover, for sentences of dependence logic, it is in XNP.
Proof 3.24.
The paraNP lower bound follows from the fact that the expression complexity is already paraNP-complete when parameterized by #∀ (Lemma 3.20).
For sentences, similarly to the proof of Lemma 3.20, a D-sentence can be translated into an equivalent ESO-sentence in polynomial time. However, if the structure is not fixed, as it is for the expression complexity, then the computation of the interpretations for the function symbols can no longer be done in paraNP-time, but requires nondeterministic time of the order |A|^q for each guessed function, where q = #∀. Consequently, we only reach membership in XNP for sentences.
For open formulas, we do not know whether the combined complexity parameterized by #∀ is also in XNP. Our proof technique does not immediately settle this case, as the team is not fixed for the combined complexity.
Similarly to the case of universal quantifiers, the arity as a parameter also reduces the complexity, though not as much as the number of universal quantifiers does. Moreover, the precise combined complexity when parameterized by the arity is also open.
Lemma 3.25.
The expression complexity of model checking, parameterized by arity, is paraPSPACE-complete.
Proof 3.26.
Notice that a D-sentence with at most k-ary dependence atoms can be reduced in polynomial time to an ESO-sentence of the form ∃f_1⋯∃f_n ψ [5, Thm. 3.3], where ψ is an FO-formula and each function symbol f_i is at most k-ary. Moreover, A ⊨ φ if and only if A ⊨ ∃f_1⋯∃f_n ψ. That is, one needs to guess the interpretation of each function symbol f_i, which can be done in paraNP-time for the fixed structure. Finally, evaluating an FO-formula over a fixed structure can be done in polynomial space. This yields membership in paraPSPACE. Moreover, if φ is an open D-formula, then the result follows by a discussion similar to that in the proof of Lemma 3.20.
For hardness, notice that the expression complexity of FO is PSPACE-complete. This implies that already in the absence of any dependence atoms the complexity remains PSPACE-hard; as a consequence, the slice corresponding to formulas without dependence atoms (with respect to the parameter arity) is PSPACE-hard.
This proves the desired result.
Combining the arity with the number of splits also does not lower the expression complexity, as discussed before in the case of #splits (Lemma 3.14).
Corollary 3.27.
The expression complexity of model checking, parameterized by the combination (arity, #splits), is paraPSPACE-complete.
Lemma 3.28.
The combined complexity of model checking, parameterized by arity, is paraPSPACE-hard.
Proof 3.29.
Consider the fragment of dependence logic with only dependence atoms of the form =(t), the so-called constancy logic. The combined complexity of constancy logic is PSPACE-complete [12, Theorem 5.3]. This implies that the slice of the arity-parameterized combined complexity corresponding to constancy atoms is PSPACE-hard, proving the result.
The combined complexity of model checking for constancy logic is in PSPACE [12, Thm. 5.3]. Aiming for a paraPSPACE upper bound by squeezing the fixed arity of dependence atoms (in some way) into constancy atoms is unlikely to succeed, as dependence logic captures existential second-order logic, whereas constancy logic, for sentences (and also open formulas), collapses to first-order logic [10].
Notice that a reduction similar to the one in the proof of Lemma 3.20 also works from plain propositional satisfiability, in which case both parameters #∀ and arity are bounded in the resulting instances. This implies that there is no hope for tractability even when both parameters are considered together. That is, the expression complexity remains paraNP-complete when parameterized by the combination of the parameters (#∀, arity).
Corollary 3.30.
The expression complexity of model checking, parameterized by (#∀, arity), is also paraNP-complete.
Finally, for the parameter total number of variables, the expression complexity drops to FPT, whereas the combined complexity drops to paraNP-completeness. The case of the expression complexity is particularly interesting: it was posed as an open question in [28] whether the expression complexity of the fixed-variable fragment of dependence logic is NP-complete, as is the case for the combined complexity considered there. We answer this negatively by establishing FPT-membership for the expression complexity parameterized by #var, which as a corollary shows that the expression complexity of the k-variable fragment is in P for every k.
Lemma 3.31.
The combined complexity of model checking, parameterized by #var, is paraNP-complete.
Proof 3.32.
Notice that if the total number of variables of φ is fixed, then the number of free variables of any subformula of φ is also fixed. This implies the membership in paraNP due to [12, Theorem 5.1]. On the other hand, by [28, Theorem 3.9.6] we know that the combined complexity of the fixed-variable fragment of dependence logic is NP-complete. This implies that, for every sufficiently large k, the k-slice of the problem is NP-hard, which gives the desired lower bound.
Lemma 3.33.
The expression complexity of model checking, parameterized by #var, is in FPT.
Proof 3.34.
Given a formula φ of dependence logic with k variables, we can construct an equivalent formula of existential second-order logic in polynomial time [28, Theorem 3.3.17]. Moreover, since the structure is fixed, this formula can be further translated into an FO-formula with k variables (by a big disjunction over the universe elements for each second-order existential quantifier). Finally, the model checking problem for FO-formulas with k variables is solvable in time polynomial in the formula size and in |A|^k [17, Prop. 6.6]. This implies the membership in FPT.
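Putting the pieces together (the following reformulation of the bound is ours and only indicates the shape of the running time): writing ψ for the first-order formula obtained from φ, and noting that for the fixed structure A the blow-up of the translation φ ↦ ψ is bounded by a computable function of the parameter k, the evaluation takes time

```latex
\[
  |\psi| \cdot |A|^{O(k)} \;\le\; f(k)\cdot |\varphi|^{O(1)}
  \qquad (\text{for the fixed structure } A),
\]
```

which is an FPT running time; fixing k as well (Corollary 3.35) turns it into a polynomial bound.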
Corollary 3.35.
The expression complexity of the k-variable fragment of dependence logic is in P for every k.
Proof 3.36.
Since both the number of variables and the universe size are fixed, the running time obtained in Lemma 3.33 is polynomial in the input size. This implies membership in P.
4 Conclusion
In this paper, we started the parameterized complexity classification of model checking for dependence logic with respect to nine different parameters (see Table 2 for an overview of the results). In Fig. 3 we depict a different kind of presentation of our results that also takes into account the grouping of the parameters into quantitative, size-related, and structural ones. The data complexity shows a dichotomy (FPT vs. paraNP-complete), where, surprisingly, there is only one case, the universe size |A|, for which one can reach FPT. This is even more surprising in the light of the fact that the expression and the combined complexity under the same parameter are still highly intractable. Furthermore, there are parameters for which the expression and the combined complexity differ (e.g., the formula size and the total number of variables). The combined complexity stays intractable under any of the investigated parameterizations. It might be interesting to study combinations of parameters and their joint effect on the complexity (yet, Corollaries 3.22, 3.27, and 3.30 already tackle some cases).
We want to close this presentation with some further questions:
• What other parameters could be meaningful (e.g., the number of conjunctions, the number of existential quantifiers, the treewidth of the formula)?
• What is the exact complexity of the cases left open in Table 2, that is, the expression and the combined complexity for #splits, the combined complexity for arity, and the combined complexity for #∀?
• A parameterized complexity analysis of other team-based logics, such as independence logic and inclusion logic, remains to be carried out.
References
- [1] Stefan Arnborg, Derek G. Corneil, and Andrzej Proskurowski. Complexity of finding embeddings in a k-tree. SIAM Journal on Algebraic Discrete Methods, 8(2):277–284, 1987. doi:10.1137/0608024.
- [2] Hans L. Bodlaender. A tourist guide through treewidth. Acta Cybern., 11(1-2):1–21, 1993.
- [3] Hans L. Bodlaender. Discovering treewidth. In SOFSEM, volume 3381 of Lecture Notes in Computer Science, pages 1–16. Springer, 2005.
- [4] Rodney G. Downey and Michael R. Fellows. Fundamentals of Parameterized Complexity. Texts in Computer Science. Springer, 2013. doi:10.1007/978-1-4471-5559-1.
- [5] Arnaud Durand and Juha Kontinen. Hierarchies in dependence logic. ACM Transactions on Computational Logic (TOCL), 13(4):31, 2012. doi:10.1145/2362355.2362359.
- [6] Heinz-Dieter Ebbinghaus and Jörg Flum. Finite model theory. Perspectives in Mathematical Logic. Springer, 1995.
- [7] Michael Elberfeld, Christoph Stockhusen, and Till Tantau. On the space and circuit complexity of parameterized problems: Classes and completeness. Algorithmica, 71(3):661–701, 2015.
- [8] Jörg Flum and Martin Grohe. Parameterized Complexity Theory. Texts in Theoretical Computer Science. An EATCS Series. Springer, 2006. doi:10.1007/3-540-29953-X.
- [9] Pietro Galliani. Inclusion and exclusion dependencies in team semantics: On some logics of imperfect information. Annals of Pure and Applied Logic, 163(1):68 – 84, 2012. doi:10.1016/j.apal.2011.08.005.
- [10] Pietro Galliani. On strongly first-order dependencies. In Dependence Logic, pages 53–71. Springer, 2016.
- [11] Pietro Galliani and Lauri Hella. Inclusion Logic and Fixed Point Logic. In Simona Ronchi Della Rocca, editor, Computer Science Logic 2013 (CSL 2013), volume 23 of Leibniz International Proceedings in Informatics (LIPIcs), pages 281–295, Dagstuhl, Germany, 2013. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik. URL: http://drops.dagstuhl.de/opus/volltexte/2013/4203, doi:10.4230/LIPIcs.CSL.2013.281.
- [12] Erich Grädel. Model-checking games for logics of imperfect information. Theor. Comput. Sci., 493:2–14, 2013. doi:10.1016/j.tcs.2012.10.033.
- [13] Erich Grädel and Jouko Väänänen. Dependence and independence. Studia Logica, 101(2):399–410, 2013. doi:10.1007/s11225-013-9479-2.
- [14] Miika Hannula, Juha Kontinen, Jan Van den Bussche, and Jonni Virtema. Descriptive complexity of real computation and probabilistic independence logic. In LICS, pages 550–563. ACM, 2020.
- [15] Miika Hannula, Juha Kontinen, Jonni Virtema, and Heribert Vollmer. Complexity of propositional logics in team semantic. ACM Trans. Comput. Log., 19(1):2:1–2:14, 2018.
- [16] Jarmo Kontinen. Coherence and computational complexity of quantifier-free dependence logic formulas. Studia Logica, 101(2):267–291, 2013. doi:10.1007/s11225-013-9481-8.
- [17] Leonid Libkin. Elements of Finite Model Theory. Texts in Theoretical Computer Science. An EATCS Series. Springer, 2004. URL: http://www.cs.toronto.edu/%7Elibkin/fmt, doi:10.1007/978-3-662-07003-1.
- [18] Peter Lohmann and Heribert Vollmer. Complexity results for modal dependence logic. Stud Logica, 101(2):343–366, 2013. doi:10.1007/s11225-013-9483-6.
- [19] Martin Lück. Canonical models and the complexity of modal team logic. Log. Methods Comput. Sci., 15(2), 2019.
- [20] Yasir Mahmood and Arne Meier. Parameterised complexity of model checking and satisfiability in propositional dependence logic. In Foundations of Information and Knowledge Systems - 11th International Symposium, FoIKS 2020, Dortmund, Germany, February 17-21, 2020, Proceedings, pages 157–174, 2020. doi:10.1007/978-3-030-39951-1_10.
- [21] Yasir Mahmood and Jonni Virtema. Parameterised complexity of propositional logic in team semantics. CoRR, abs/2105.14887, 2021.
- [22] Barnaby Martin. First-order model checking problems parameterized by the model. In CiE, volume 5028 of Lecture Notes in Computer Science, pages 417–427. Springer, 2008.
- [23] Arne Meier and Christian Reinbold. Enumeration complexity of poor man’s propositional dependence logic. In FoIKS, volume 10833 of Lecture Notes in Computer Science, pages 303–321. Springer, 2018.
- [24] Christos H. Papadimitriou. Computational complexity. Addison-Wesley, 1994.
- [25] Neil Robertson and Paul D. Seymour. Graph minors. III. planar tree-width. J. Comb. Theory, Ser. B, 36(1):49–64, 1984.
- [26] Marko Samer and Stefan Szeider. Fixed-parameter tractability. In Handbook of Satisfiability, volume 185 of Frontiers in Artificial Intelligence and Applications, pages 425–454. IOS Press, 2009.
- [27] Jouko A. Väänänen. Dependence Logic - A New Approach to Independence Friendly Logic, volume 70 of London Mathematical Society student texts. Cambridge University Press, 2007. URL: http://www.cambridge.org/de/knowledge/isbn/item1164246/?site_locale=de_DE.
- [28] Jonni Virtema. Approaches to Finite Variable Dependence: Expressiveness and Computational Complexity. PhD thesis, School of Information Sciences of the University of Tampere, 2014. Available online at https://trepo.tuni.fi/handle/10024/95328.