

lp2pb: Translating Answer Set Programs into Pseudo-Boolean Theories

Wolf De Wulf and Bart Bogaerts
Vrije Universiteit Brussel, Brussels, Belgium
[email protected]
Abstract

Answer set programming (ASP) is a well-established knowledge representation formalism. Most ASP solvers are based on (extensions of) technology from Boolean satisfiability solving. While these solvers have proven very successful in many practical applications, their strength is limited by their underlying proof system, resolution. In this paper, we present a new tool, lp2pb, that translates ASP programs into pseudo-Boolean theories, for which solvers based on the (stronger) cutting plane proof system exist. We evaluate our tool, and the potential of cutting-plane-based solving for ASP, on traditional ASP benchmarks as well as benchmarks from pseudo-Boolean solving. Our results are mixed: overall, traditional ASP solvers still outperform our translational approach, but several benchmark families are identified where the balance shifts the other way, suggesting that further investigation into a stronger proof system for ASP is valuable.

1 Introduction

Answer set programming (ASP) is a well-established knowledge representation formalism that grew from the observation that stable models [34] of a logic program can be used to encode search problems [60, 63, 50]. ASP is rapidly gaining adoption, with applications in domains such as decision support for the Space Shuttle [64], product configuration [76], phylogenetic inference [46, 12], knowledge management [38], e-Tourism [66], biology [33], robotics [6], and machine learning [42, 13].

The success of ASP can, to a large extent, be explained by two factors. The first factor is a rich, first-order language, ASP-Core-2 [14], to express knowledge in, with an easy-to-understand modeling methodology known as generate-define-and-test. The second factor is the availability of a large number of reliable tools, namely grounders (such as gringo [32] and DLV [47]) and solvers (such as clasp [29], wasp [4], and MinisatID [17]), that allow stable models of a given logic program to be computed efficiently.

Throughout its history, ASP has always benefited from progress in other domains of combinatorial search. For instance, the addition of conflict-driven clause learning (CDCL) [61] to Boolean satisfiability (SAT) solvers is often recognized as one of the most important leaps forward in SAT solving; this technique was very quickly adopted in ASP. In fact, the relation goes both ways: clasp, a native ASP solver, has long been one of the best performing SAT solvers. A recent example of such positive reinforcement between domains is found in constraint ASP systems [7], which use techniques from SAT modulo theories [8] and from constraint programming [67], in particular lazy clause generation [74].

Next to native solvers, various ASP tools are available that are based on translations to other formalisms: to SAT [40, 44], to difference logic [45], to mixed integer programming [54], and to SAT modulo acyclicity [28]. Our current work fits in this line, by translating answer set programs into (linear) pseudo-Boolean (PB) constraints [68].

Most modern ASP solvers are built on conflict-driven clause learning and thus on the resolution proof system. This also holds for ASP solvers with native support for aggregates, which typically employ lazy clause generation techniques, essentially compiling their theory lazily into clauses and henceforth relying on the underlying CDCL solver. The advantage of building on CDCL technology is that it has been researched intensively, and the simplicity of using only clauses allows for highly optimized implementations, resulting in efficient, well-engineered solvers. The disadvantage is that the resolution proof system is known to be weak: for several very simple problems, resolution proofs are exponentially large; the most notorious such problem is the pigeon hole problem. In practice this means that modern CDCL solvers cannot, for instance, solve the problem “do 15 pigeons fit in 14 holes?” One way to avoid this limitation is using symmetry exploitation methods, which are well-researched in SAT [2, 20, 19, 62] and have also been ported to ASP [22, 18], but the detection of symmetries is often brittle; e.g., adding redundant constraints often removes symmetries. Another option is using a stronger proof system, such as the cutting planes proof system [16]. This proof system works on linear constraints over the integers, or, when restricted to 0-1 variables, on so-called pseudo-Boolean constraints. Recent research in the field of pseudo-Boolean solving has resulted in a new and efficient solver, RoundingSAT [25], which builds on previous work integrating conflict-driven search with the cutting plane proof system [21, 15, 69, 58, 10]. This recent improvement in pseudo-Boolean solving triggers the question whether answer set programming could also benefit from these techniques.

The main contribution of this paper is the introduction and experimental validation of a new tool, lp2pb, that translates ground logic programs into pseudo-Boolean theories. This tool is valuable both for the ASP community and for the PB community. For ASP, it enables the use of an extra class of solvers. Furthermore, since we translate into the well-accepted OPB standard format for pseudo-Boolean problems (see http://www.cril.univ-artois.fr/PB10/format.pdf), compatibility with future pseudo-Boolean solvers is obtained for free, allowing us to quickly test the potential of novel PB solving techniques for logic programming. Additionally, the OPB format is supported by important industrial tools such as Gurobi [39]. For the PB community, this tool provides access to a new set of applications and benchmarks. Additionally, it establishes answer set programming as a modelling language for PB solvers, thereby bypassing the need to write a benchmark generator by hand for every new class of benchmarks considered.

We experimentally validate lp2pb on two classes of benchmarks. On novel ASP models of four benchmark families where the difference between cutting planes and resolution is known to be essential, our approach, unsurprisingly, outperforms traditional ASP solvers. On benchmarks from the latest ASP competition, we find that overall, traditional ASP solvers are still more efficient, but several benchmark families (mainly optimization problems) are found where the cutting plane proof system pays off.

The rest of this paper is structured as follows. In Section 2, we introduce preliminaries on ASP and pseudo-Boolean constraints. In Sections 3 and 4, we discuss our translation and its implementation respectively. Section 5 contains our experiments. In Section 6, we discuss some closely related work and we conclude in Section 7.

2 Preliminaries

A vocabulary is a set of symbols, also called atoms; vocabularies are denoted by $\sigma, \tau$. A literal is an atom $p$ or its negation $\bar{p}$. An interpretation $I$ of a vocabulary $\sigma$ is a subset of $\sigma$. We use the truth values true ($\mathbf{t}$) and false ($\mathbf{f}$) and will identify $\mathbf{t}$ with $1$ and $\mathbf{f}$ with $0$, as is common in pseudo-Boolean theories. The truth value of an atom $p \in \sigma$ in an interpretation $I$ (denoted $p^{I}$) is $1$ if $p \in I$ and $0$ otherwise. The truth values of literals, conjunctions of literals, and clauses (disjunctions of literals) are defined as usual.

Pseudo-Boolean Constraints

A (linear) pseudo-Boolean constraint over $\sigma$ is a linear constraint with variables from $\sigma$, i.e., an expression of the form

\[
\sum_{i=1}^{n} w_i x_i \sim b \tag{1}
\]

with $w_i, b \in \mathbb{Z}$, $x_i \in \sigma$, and $\sim$ one of $<$, $>$, $\leq$, $\geq$, and $=$. The value of $\sum_{i=1}^{n} w_i x_i$ in $I$ is, as usual, defined as $\sum_{i=1}^{n} w_i x_i^{I}$. A pseudo-Boolean constraint of the form (1) is satisfied in $I$ if $\sum_{i=1}^{n} w_i x_i^{I} \sim b$. A pseudo-Boolean theory is a set of pseudo-Boolean constraints. A model of a pseudo-Boolean theory $T$ is an interpretation $I$ such that all constraints of $T$ are satisfied in $I$.
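For example (an illustration of ours, not taken from the paper), over $\sigma = \{x_1, x_2, x_3\}$ the constraint $2x_1 + 3x_2 + x_3 \geq 3$ is satisfied in $I = \{x_1, x_3\}$ but not in $I' = \{x_3\}$:
\[
2 \cdot 1 + 3 \cdot 0 + 1 \cdot 1 = 3 \geq 3, \qquad\qquad 2 \cdot 0 + 3 \cdot 0 + 1 \cdot 1 = 1 < 3.
\]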

Logic Programming

A normal logic program $\mathcal{P}$ over vocabulary $\sigma$ is a set of rules $r$ of the form

\[
h \leftarrow a_1 \land \dots \land a_n \land \overline{b_1} \land \dots \land \overline{b_m}. \tag{2}
\]

where $h$, the $a_i$'s, and the $b_i$'s are atoms in $\sigma$. We call $h$ the head of $r$, denoted $\mathrm{head}(r)$, and $a_1 \land \dots \land a_n \land \overline{b_1} \land \dots \land \overline{b_m}$ the body of $r$, denoted $\mathrm{body}(r)$. If $n = m = 0$, we simply write $h$.

Remark 1

We use the notation $\overline{p}$ for the negation of $p$. In the context of logic programming, the type of negation used here is often referred to as “negation as failure” or “default negation”, referring to the fact that in “good” models of logic programs (called stable models below), an atom $p$ is false by default: it is false unless there is a rule that can derive it. In this work, there is no need to distinguish between different types of negation (indeed, all definitions, such as when an interpretation satisfies a literal, remain valid) and, for uniformity and brevity, we thus use the notation $\overline{p}$, which is standard in pseudo-Boolean solving, throughout the paper.

An interpretation $I$ is a model of a logic program $\mathcal{P}$ if, for all rules $r$ in $\mathcal{P}$, whenever $\mathrm{body}(r)$ is satisfied by $I$, so is $\mathrm{head}(r)$. The reduct of $\mathcal{P}$ with respect to $I$, denoted $\mathcal{P}^{I}$, is the program that consists of the rules $h \leftarrow a_1 \land \dots \land a_n$ for all rules of the form (2) in $\mathcal{P}$ such that $b_i \not\in I$ for all $i$. An interpretation $I$ is a stable model of $\mathcal{P}$ if it is the $\subseteq$-minimal model of $\mathcal{P}^{I}$ [34].
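As a brief worked example (ours, for illustration), consider the program $\mathcal{P} = \{p \leftarrow \overline{q};\; q \leftarrow \overline{p}\}$ and the candidate interpretations $\{p\}$ and $\{p, q\}$:
\[
\mathcal{P}^{\{p\}} = \{p \leftarrow\}, \qquad\qquad \mathcal{P}^{\{p,q\}} = \emptyset.
\]
The $\subseteq$-minimal model of $\mathcal{P}^{\{p\}}$ is $\{p\}$ itself, so $\{p\}$ (and, by symmetry, $\{q\}$) is a stable model; the $\subseteq$-minimal model of $\mathcal{P}^{\{p,q\}}$ is $\emptyset \neq \{p, q\}$, so $\{p, q\}$ is not.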

In practice, not only rules of the form (2) but also aggregates are often used. In non-ground programs (i.e., programs with first-order variables), they take various forms, but at the propositional level it is well-known [72, 59] that in order to capture the standard aggregates [14], it suffices to consider only weight constraint rules: rules of the form

\[
h \leftarrow W \tag{3}
\]

where $h \in \sigma$ and $W$ is a pseudo-Boolean constraint $l \leq \sum_{i} v_i a_i + \sum_{i} w_i \overline{b_i}$ with $h$, the $a_i$'s, and the $b_i$'s in $\sigma$ and with $l, v_i, w_i \in \mathbb{Z}$. Various semantics have been proposed for programs with weight constraint rules; for completeness we include one here. The FLP-reduct of a program $\mathcal{P}$ (with weight constraints) with respect to $I$ is the set of rules of $\mathcal{P}$ whose body is satisfied in $I$. An interpretation $I$ is an FLP-stable model of $\mathcal{P}$ if it is a minimal model of the FLP-reduct of $\mathcal{P}$ with respect to $I$. We stress that the particular choice of semantics for these programs with weight constraints is not relevant in the current work, since we focus on the class of programs on which all proposals coincide. This is discussed in detail in the next section, and we come back to this issue in our discussion of future work in Section 7.
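For instance (an illustrative rule of ours), the weight constraint rule
\[
h \leftarrow 2 \leq 1 \cdot a_1 + 1 \cdot a_2 + 2 \cdot \overline{b_1}
\]
derives $h$ as soon as both $a_1$ and $a_2$ hold, or $b_1$ is false.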

3 Translating Logic Programs into Pseudo-Boolean Theories

Scope and Limitations

As can be seen from our definition of rules, we do not consider so-called disjunctive logic programs in this paper: the head of a rule is a single atom. For programs where disjunction “behaves nicely”, for instance for the classes of head-cycle-free [9] and head-elementary-set-free [30] programs, disjunction in the head can be eliminated by means of an operation called shifting [35]. Through the use of lpshift [41], which implements shifting, our tool also works for such programs.

For weight constraint programs, or more generally, programs with different notions of aggregates, many different semantics have been proposed [27, 73, 65, 26, 36, 5]. Following the ASP-Core-2 standard [14], we restrict our attention to programs without recursion over aggregates, since for such programs all of the aforementioned semantics coincide. Formally, we say that in the context of a logic program $\mathcal{P}$, an atom $h$ depends directly on an atom $h'$ if there is a rule $r$ (of the form (2) or (3)) in $\mathcal{P}$ with head $h$ such that $h'$ occurs in $\mathrm{body}(r)$. We say that $h$ depends on $h'$ if it depends directly on $h'$ or if there exists some $h''$ such that $h$ depends on $h''$ and $h''$ on $h'$. In this paper, following the ASP-Core-2 standard, we only consider programs such that for each rule of the form (3), no atom $h'$ that occurs in $W$ depends on $h$.

Translation

In order to translate logic programs into pseudo-Boolean theories, we make use of existing frameworks and tools as much as possible, to avoid reinventing the wheel. First of all, we can assume that for each rule of the form (3) in the program, $h$ is uniquely defined by that rule, i.e., no other rule has $h$ as its head. We can always obtain this situation by introducing a new atom $h'$ and replacing the rule by the two rules $h' \leftarrow W$ and $h \leftarrow h'$. This operation is sound for most semantics of logic programs with weight constraint rules, and it is always sound if there is no recursion over aggregates. It now becomes apparent that the aggregates can be “isolated”.

Proposition 2

Let $\mathcal{P}$ be a logic program without recursion over aggregates such that for each weight constraint rule $r \in \mathcal{P}$, $\mathrm{head}(r)$ has no other defining rules. Let $\mathcal{P}'$ be the logic program obtained from $\mathcal{P}$ by replacing all weight constraint rules $h \leftarrow W$ by the two rules $h \leftarrow \overline{h'}$ and $h' \leftarrow \overline{h}$, where $h'$ is a new atom not occurring in $\mathcal{P}$. Then there is a one-to-one correspondence between the answer sets of $\mathcal{P}$ and the answer sets of $\mathcal{P}'$ in which $h \Leftrightarrow W$ is satisfied for each rule of the form (3) in $\mathcal{P}$.

This (unsurprising) proposition follows directly from well-known splitting results; for instance the seminal work of Lifschitz and Turner [51] or the results of Vennekens et al. [77] for an algebraic variant that is applicable to the semantic characterization of Pelov et al. [65] of logic programs with aggregates.

Proposition 2 shows that we can split the task of translating $\mathcal{P}$ into a pseudo-Boolean theory into two parts: first, we use any off-the-shelf method to translate $\mathcal{P}'$ into a propositional theory, and next, we add an encoding of the constraints of the form $h \Leftrightarrow W$. For instance, a constraint of the form

\[
h \;\Leftrightarrow\; b \leq \sum_{i=1}^{n} w_i l_i
\]

is equivalent to the set of pseudo-Boolean constraints
\[
b \leq \sum_{i=1}^{n} w_i l_i + M_1 \overline{h}, \qquad\qquad b > \sum_{i=1}^{n} w_i l_i - M_2 h
\]
when $M_1$ and $M_2$ are sufficiently large. The equivalence holds as soon as $M_1 \geq b - \sum_{i=1}^{n} \min(0, w_i)$ and $M_2 > b + \sum_{i=1}^{n} \max(w_i, 0)$. For instance, for such large $M_i$, in case $h$ is false, the first constraint is trivially satisfied; in case $h$ is true, it reduces to $b \leq \sum_{i=1}^{n} w_i l_i$.
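As a concrete instance of this encoding (a worked example of ours), take $h \Leftrightarrow 3 \leq 2x_1 + 2x_2 + x_3$. The bounds above allow $M_1 = 3$ and $M_2 = 9$, yielding
\[
3 \leq 2x_1 + 2x_2 + x_3 + 3\overline{h}, \qquad\qquad 3 > 2x_1 + 2x_2 + x_3 - 9h.
\]
If $h$ is true, the first constraint enforces $3 \leq 2x_1 + 2x_2 + x_3$ and the second holds trivially; if $h$ is false, the second enforces $2x_1 + 2x_2 + x_3 \leq 2$ and the first holds trivially.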

4 Implementation

Our tool accepts input in the lparse/smodels intermediate format [75]. In case disjunction is present in the head of a rule, it is first eliminated using lpshift [41]. Of course, this is not correct in general, but only for the classes of programs considered here, where disjunction is head-cycle free (or, more generally, head-elementary-set free). The input is split into two parts: one part contains all rules with aggregates (in the lparse/smodels format these are the constraint rules, weight rules, and minimize rules), while the other part contains all other rules, as well as the other information present in the lparse/smodels intermediate format (the symbol table and the compute statements). For the constraint and weight rules (the former are a special case of the latter) in the first part, the transformation from Proposition 2 is used to split each of them into a choice rule (the combination of the rules $h \leftarrow \overline{h'}$ and $h' \leftarrow \overline{h}$), which is added to the second part, and two pseudo-Boolean constraints as described below Proposition 2, which are included in the output. A minimize statement directly corresponds to a linear term and is hence translated directly into a corresponding minimization statement in the OPB format.

As mentioned in Section 3, we make use of existing tools as much as possible. Therefore, the second part (with the additional rules) is then given to the pipeline lp2normal | lp2lp2 | lp2sat; the combination of these three tools translates a non-disjunctive logic program into an equivalent propositional theory in CNF [11, 41]. Our tool subsequently transforms each clause produced by lp2sat into a simple linear constraint and combines this with the linear constraints obtained from the first part to produce a complete pseudo-Boolean theory that characterizes exactly the stable models of the original logic program. We do not describe the complete implementation, but instead discuss a couple of subtle points.
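To make the last step concrete, the following Python sketch (our illustration; names and structure are hypothetical and not lp2pb's actual code) shows how a clause can be rendered as a linear pseudo-Boolean constraint: a clause $l_1 \lor \dots \lor l_k$ corresponds to $\sum_i l_i \geq 1$, and rewriting each negated literal $\overline{x}$ as $1 - x$ leaves a constraint over plain variables with integer coefficients only.

    def clause_to_pb(clause):
        """Translate a DIMACS-style clause (non-zero ints, negative = negated)
        into a linear pseudo-Boolean constraint of the form  sum(w_i x_i) >= degree."""
        coeffs = {}   # variable number -> coefficient
        degree = 1    # right-hand side of the >= constraint
        for lit in clause:
            var = abs(lit)
            if lit > 0:
                coeffs[var] = coeffs.get(var, 0) + 1
            else:
                # A negated literal ~x is rewritten as (1 - x): the coefficient
                # becomes -1 and the constant 1 moves to the right-hand side.
                coeffs[var] = coeffs.get(var, 0) - 1
                degree -= 1
        terms = " ".join(f"{c:+d} x{v}" for v, c in sorted(coeffs.items()))
        return f"{terms} >= {degree} ;"

    # Example: the clause (x1 OR NOT x2 OR x3) becomes "+1 x1 -1 x2 +1 x3 >= 0 ;"
    print(clause_to_pb([1, -2, 3]))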

Auxiliary Variables

Since the translation introduces auxiliary variables, it happens only after the entire input has been parsed; at that point the highest variable number in use is known, and auxiliary variables are numbered consecutively from there.

Multilevel optimization

While the ASP-Core-2 standard supports multilevel optimization (expressed in the lparse/smodels intermediate format by multiple minimize rules), the OPB format has no such construct. We use a well-known technique to reduce multilevel optimization to single-level optimization, namely summing up the different optimization terms, thereby multiplying the terms at higher levels by a coefficient that is large enough to dominate the terms at the lower levels. An effect of this is that, without postprocessing of the results produced by the pseudo-Boolean solver, the actual values of the individual optimization functions cannot be read off directly from the output.
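A minimal sketch of this reduction (our illustration; lp2pb's actual implementation may differ) is shown below: the objectives are processed from the least significant level upwards, and each more significant level is scaled by one plus the maximum total weight of the levels below it, so that any improvement at a higher level dominates all changes at lower levels.

    def combine_levels(levels):
        """levels: list of objectives, most significant first; each objective is a
        list of (weight, literal) pairs with non-negative weights.  Returns a single
        combined objective, again as a list of (weight, literal) pairs."""
        combined = []
        multiplier = 1
        for objective in reversed(levels):   # least significant level first
            for weight, literal in objective:
                combined.append((weight * multiplier, literal))
            # The next (more significant) level must dominate everything the
            # levels handled so far can contribute, hence the "+ 1".
            multiplier *= sum(w for w, _ in objective) + 1
        return combined

    # Example: level 1 (most significant) minimizes x1, level 2 minimizes x2 + x3;
    # the combined objective is 3*x1 + 1*x2 + 1*x3.
    print(combine_levels([[(1, "x1")], [(1, "x2"), (1, "x3")]]))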

The closed world assumption

Answer set programming uses a form of the closed world assumption: all variables that are not mentioned in a program are false. In propositional logic, on the other hand, unmentioned variables can take an arbitrary value. When naively applying the transformation from Proposition 2, this can cause problems. For instance, if in the original program a certain variable only occurs in the body of a weight constraint rule (or in the optimization statement), then after applying the translation that variable no longer occurs in the program to be translated into SAT. Since in the original program it is implicit that this variable must be false (due to the lack of any rules that derive it), it should still be false after translating. However, the pipeline we use to translate to SAT does not enforce this constraint unless it is aware of the existence of that variable. There are two possible ways to fix this: either by including such variables in the symbol table or by manually adding a constraint that makes them false. We implemented the first option.

Unused variables

A last point of attention is that lp2sat, when translating a logic program into CNF, makes some simplifications. In particular, in case a variable does not occur in the body of any rule and that atom is not included in the symbol table (meaning that the user does not care about the value of that atom), lp2sat adds a constraint that falsifies this atom. However, since we only give lp2sat a part of the program, this optimization is no longer correct. This behaviour is again avoided by adding all atoms that occur in rules not given to the lp2sat pipeline to the symbol table.

5 Experiments

The experiments and set-ups were chosen to shed light on the following research questions:

  1. How well do modern pseudo-Boolean solvers perform on ASP models of problems where cutting planes is known to be stronger than resolution?

  2. To what extent is the cutting plane proof system promising for traditional ASP?

The benchmarks were run on the VUB Hydra cluster. Each solver call was assigned a single core on a 10-core Intel E5-2680v2 (Ivy Bridge) processor, a time limit of 20 minutes, and a memory limit of 12GB, thereby matching the limits of the latest ASP competition [31]. The following benchmarks were used:

  1. Four benchmark families inspired by the work of Elffers et al. [24], using problems described there, as well as the same types of instances as in that paper (e.g., the shapes of the graphs considered). These four families are known to be (with the right encoding) easy for the cutting plane proof system, in the sense that polynomial-size cutting plane proofs exist, but hard for CDCL solvers. All our ASP encodings are straightforward and use aggregates. The four families are:

    Pigeon Hole

    The problem here is to fit $n$ pigeons in $m$ holes with at most one pigeon residing in each hole. All our instances are unsatisfiable, with $n = m + 1$ (a sketch of the underlying constraints is shown after this list).

    Even Colouring

    This problem takes as input a connected graph in which each vertex has an even degree. The problem is to determine whether a black-white colouring of the edges exists such that each node has the same number of incident black and white edges. The problem is satisfiable if and only if the number of edges is even. Our instances are long toroidal grids in which one auxiliary vertex is inserted to break a single edge in two. All these instances are thus unsatisfiable.

    Vertex Cover

    The input to this problem is a connected graph and a number $S$. The problem is to decide if a size-$S$ vertex cover exists, i.e., a subset of the nodes of size $S$ such that each edge is incident to some vertex in the set. We again use long toroidal grids, here with an even number of rows; in that case an instance is satisfiable if and only if $S \geq m \cdot \lceil n/2 \rceil$, where $m$ is the number of rows and $n$ the number of columns. All our instances are unsatisfiable and have $S = m \cdot \lceil n/2 \rceil - 1$.

    Dominating Set

    This problem again takes a graph and a number $S$ as input. The problem is to decide if the input graph has a size-$S$ dominating set, i.e., a set of vertices such that each vertex is either in the set or adjacent to a vertex in the set. Our instances are long hexagonal grids. All our instances are unsatisfiable and have $S = \lfloor v/4 \rfloor$, where $v$ is the number of vertices in the graph.

    The instances in these four benchmark families all scale linearly; that is, starting from a small instance, we repeatedly increase the instance size by a fixed step.

  2. All decision and optimization problems from the 2017 ASP competition [31], which includes many benchmarks from earlier competitions, with the exception of:

    • Problems including non-head-cycle-free disjunction, since those are problems beyond the first level of the polynomial hierarchy and hence cannot be translated compactly into pseudo-Boolean theories.

    • The video streaming benchmark family, since it contains very high coefficients (higher than what RoundingSAT supports).

    For each benchmark family, the 20 instances selected for the competition were used.

All benchmarks and instances are available at https://github.com/wulfdewolf/lp2pb_benchmarks.
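To give an idea of what these encodings boil down to after translation (our own sketch, not the exact constraints emitted by lp2pb), the Pigeon Hole family essentially reduces to the pseudo-Boolean constraints
\[
\sum_{h=1}^{m} x_{p,h} \geq 1 \quad (1 \leq p \leq n), \qquad\qquad \sum_{p=1}^{n} x_{p,h} \leq 1 \quad (1 \leq h \leq m),
\]
where $x_{p,h}$ expresses that pigeon $p$ sits in hole $h$; with $n = m + 1$ these constraints admit short cutting plane refutations, whereas resolution refutations of the corresponding CNF are exponential.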

We compared four solver configurations:

  • $C_{\text{c}}$: gringo | clasp

  • $C_{\text{pb}}$: gringo | lp2pb | RoundingSAT

  • $C_{\text{n-pb}}$: gringo | lp2normal | lp2lp2 | lp2sat | RoundingSAT

  • $C_{\text{n-c}}$: gringo | lp2normal | lp2lp2 | clasp

Of each of the tools used, the latest available version was used, i.e., gringo 5.4.0, clasp 3.3.5, lp2normal 2.27, lp2lp2 1.23, lp2sat 1.24, lp2pb 1.0 (https://github.com/wulfdewolf/lp2pb), and RoundingSAT at commit fd464d43a (at the time of writing, this commit had not been released yet; for reproducibility, the binary can be found in our experiment GitHub repository).

A comparison between $C_{\text{c}}$ and $C_{\text{pb}}$ should give insight into how a state-of-the-art ASP solver compares to a state-of-the-art pseudo-Boolean solver after our translation. Interpreting the results of $C_{\text{n-pb}}$ requires some care. For decision problems, in $C_{\text{n-pb}}$, the input given to RoundingSAT is a CNF. It is well-known that, despite the fact that cutting planes can be exponentially more powerful than resolution, this power is not used by conflict-driven pseudo-Boolean solvers on CNF input, where they essentially produce resolution proofs (see, e.g., [78]). For decision problems, a comparison between $C_{\text{n-pb}}$ and $C_{\text{n-c}}$ should thus give an idea of the difference in engineering and optimizations between RoundingSAT and clasp. For optimization problems, this comparison does not hold, since bounds on the objective function that are added during branch-and-bound search are typically non-clausal. Finally, a comparison between $C_{\text{pb}}$ and $C_{\text{n-pb}}$ should give an indication of how valuable the pseudo-Boolean constraints (coming from aggregates) are for RoundingSAT, i.e., how much is gained by using our translation compared to a plain CNF translation.

Analysis

Cactus plots of the runtimes on the first benchmark set are presented in Figure 1. Overall, these results are consistent with our expectations. The combination of lp2pb and RoundingSAT outperforms the resolution-based solvers by far. This is most prominently visible in the Pigeon Hole problem, where no resolution-based configuration solves the problem with 16 pigeons, while RoundingSAT solves all problems up to 916 pigeons. The odd one out of the four families is the Even Colouring family, where the normalization-based configurations slightly outperform $C_{\text{pb}}$. Our assumption is that the auxiliary variables introduced by lp2normal change the language of learning and in this way enable short resolution proofs. A similar, but less prominent, effect is seen in the Vertex Cover family, where the normalization-based approaches also outperform $C_{\text{c}}$, but do not reach the performance of $C_{\text{pb}}$.

Figure 1: Cactus plots for the first set of benchmark families, showing runtime (in seconds) per instance for the configurations $C_{\text{c}}$, $C_{\text{pb}}$, $C_{\text{n-pb}}$, and $C_{\text{n-c}}$: (a) Dominating Set, (b) Even Colouring, (c) Pigeon Hole, (d) Vertex Cover.
Table 1: Decision problems of the ASP competition. For each family and configuration, 20 instances were run; the table shows the number of instances for which (un)satisfiability was proven.

Family | #sum? | #count? | $C_{\text{c}}$ | $C_{\text{pb}}$ | $C_{\text{n-pb}}$ | $C_{\text{n-c}}$
Crew Allocation | No | Yes | 18 | 17 | 15 | 16
Graph Colouring | No | No | 16 | 8 | 8 | 16
Knight Tour With Holes | No | No | 13 | 2 | 2 | 3
Labyrinth | No | No | 13 | 1 | 3 | 12
Stable Marriage | No | No | 8 | 0 | 0 | 3
Visit-all | No | Yes | 18 | 13 | 13 | 17
Combined Configuration | Yes | Yes | 14 | 2 | 2 | 1
Graceful Graphs | No | Yes | 13 | 8 | 10 | 13
Incremental Scheduling | Yes | Yes | 14 | 13 | 1 | 1
Nomystery | No | No | 7 | 9 | 7 | 8
Partner Units | No | Yes | 11 | 10 | 10 | 10
Permutation Patternmatching | No | No | 13 | 9 | 8 | 8
Qualitative Spatial Reasoning | No | No | 11 | 11 | 13 | 12
Ricochet Robots | No | Yes | 10 | 7 | 11 | 10
Sokoban | No | Yes | 11 | 9 | 8 | 10
Total | | | 190 | 119 | 111 | 140

When examining the results on decision problems, summarized in Table 1, we notice that $C_{\text{c}}$, i.e., gringo | clasp, outperforms all other configurations on most benchmark families. For problems without aggregates, this is in line with expectations, but for problems with aggregates our expectation was to see a positive effect of the cutting plane proof system. Also, the difference between $C_{\text{pb}}$ and $C_{\text{n-pb}}$ is very small, suggesting that little to no benefit of the cutting plane proof system is obtained on those benchmarks.

Table 2: Optimization problems of the ASP competition. For each family and configuration, the table shows the number of instances (out of 20) for which optimality was proven (OP), as well as the number of instances for which that configuration found the best optimization value among the four configurations (BV). For the System Synthesis family the latter numbers cannot be calculated, as the problems in this family are multilevel optimization problems.

Family | #sum? | #count? | OP $C_{\text{c}}$ | OP $C_{\text{pb}}$ | OP $C_{\text{n-pb}}$ | OP $C_{\text{n-c}}$ | BV $C_{\text{c}}$ | BV $C_{\text{pb}}$ | BV $C_{\text{n-pb}}$ | BV $C_{\text{n-c}}$
Bayesian NL | No | Yes | 15 | 5 | 4 | 14 | 18 | 10 | 9 | 18
Markov NL | No | Yes | 11 | 0 | 0 | 9 | 18 | 5 | 4 | 16
Supertree | No | Yes | 7 | 5 | 5 | 8 | 13 | 5 | 5 | 17
Connected Maximum-density Still Life | No | Yes | 7 | 8 | 8 | 2 | 19 | 8 | 9 | 7
Crossing Minimization | No | Yes | 13 | 19 | 19 | 13 | 15 | 20 | 19 | 17
Maximal Clique Problem | No | No | 0 | 0 | 0 | 0 | 0 | 16 | 13 | 0
Max SAT | No | Yes | 10 | 18 | 18 | 10 | 11 | 19 | 19 | 11
Steiner Tree | No | Yes | 3 | 1 | 1 | 2 | 20 | 2 | 2 | 3
System Synthesis | Yes | Yes | 0 | 0 | 0 | 0 | - | - | - | -
Valves Location Problem | Yes | Yes | 15 | 13 | 3 | 6 | 20 | 13 | 3 | 6
Total | | | 81 | 69 | 58 | 64 | 170 | 120 | 108 | 110

The optimization problems paint a different picture. Next to the number of instances completely solved for each family, Table 2 also shows the number of instances for which a given configuration found the best solution among the four configurations. For System Synthesis, those values are not included, as this is a multilevel optimization problem and the values given by the different solvers are incomparable. In three out of ten families, a clear improvement of cutting planes over state-of-the-art ASP solving is visible. Overall, $C_{\text{c}}$ is still the best solving configuration, but $C_{\text{pb}}$ comes second, performing slightly better than the approaches in which aggregates are normalized. These results suggest that lp2pb constitutes a valuable extra tool in the ASP toolkit.

6 Related Work

There is a rich history of research on using SAT solvers to compute stable models of logic programs. One approach works by introducing loop formulas on the fly [53, 48, 37] and in fact lies at the basis of most modern native ASP solvers [29, 4]. Another approach studies translations of ASP into SAT that are more compact than the (worst-case) exponential blow-up that loop formulas induce [9, 52, 40, 44]. These translations introduce auxiliary variables; some of them induce a one-to-one correspondence between the stable models of the original program and the models of the obtained propositional theory, while others duplicate some models. What all these methods have in common is that aggregates are encoded into clauses, either lazily (as done in native solvers, often following the lazy clause generation paradigm [74]) or eagerly, for instance by applying normalization tools [11] that eliminate the aggregates before the actual SAT translation is invoked. Our work closely relates to the translation-based approach. In fact, internally our tool makes use of the tools of Janhunen and Niemelä [44] to translate the part of the program without aggregates into SAT; the actual translation used can easily be changed in lp2pb. The main difference with the standard translational approaches is that aggregates are not normalized but preserved.

DLV2 [3] also has an option (--pre=wbo) to translate ASP programs into PB theories; however, this translation is limited to tight programs and it cannot handle multilevel optimization problems.

Another related tool is mingo [54], which integrates answer set programming and mixed integer programming [71], thus allowing more types of constraints (using non-Boolean variables) than PB theories. These non-Boolean variables are used, among other things, to encode the level mapping characterization [55] that ensures stability of the obtained models. Unlike our translation, mingo does not guarantee a one-to-one correspondence between the models of the obtained theory and the stable models of the original program, making it unsuitable for model counting. Our approach does guarantee this, mainly by building on the guarantees of the translation of aggregate-free logic programs to SAT that we use [44]. Another difference is that part of the focus of mingo is on developing an extension of the ASP language in which mixed-integer constraints can be written directly in the program. Nowadays, this approach is common in various constraint ASP formalisms [23, 49, 7, 43, 70].

A final related tool is pbmodels [56], which also uses pseudo-Boolean solvers to find stable models. The main difference is that pbmodels is designed as a wrapper around a PB solver: it iteratively calls the solver for supported models, then checks them for stability and, if the result is not stable, adds loop formulas. In contrast, lp2pb outputs a translation that can be fed to a PB solver and solved in a single solver call, which benefits the solver's internal constraint learning mechanism.

7 Conclusion and Future Work

One direction for future work is investigating an extension of our translation to support recursive aggregates. The semantics of recursive aggregates constitutes an intense topic of debate, as can be witnessed by the number of papers written about them [27, 73, 65, 26, 36, 5]. However, for monotone (and, in fact, convex [57]) aggregates, most of these semantics agree, the notable exception being [36]. Hence, an extension of our tool that works for recursive aggregates, under the condition that they be convex, would be valuable. The most lightweight way to achieve this would be to start from the translation of mingo [54], which builds on the level mapping of Liu and You [55], and modify it to use Boolean variables. An unresolved challenge in that case is how a one-to-one correspondence between the stable models of the program and the models of the resulting theory can be achieved.

Another interesting, but perhaps more ambitious, direction for future work is to develop a new native ASP solver that uses the cutting plane proof system under the hood, for instance by developing an extension of RoundingSAT with support for recursive rules.

To conclude, we presented a novel tool, called lp2pb, to translate logic programs into pseudo-Boolean theories, and we experimentally validated its performance on a large set of benchmarks. The results are mixed. On the one hand, overall, traditional ASP solvers seem to outperform pseudo-Boolean solvers on the benchmarks traditionally tackled with ASP. On the other hand, a couple of benchmark families were identified on which pseudo-Boolean reasoning can provide a real advantage, thus warranting further research into using the cutting plane proof system in ASP solving.

Acknowledgments

The resources and services used in this work were provided by the VSC (Flemish Supercomputer Center), funded by the Research Foundation - Flanders (FWO) and the Flemish Government. We are very grateful to Jakob Nordström and Jo Devriendt for interesting discussions on this topic and for providing us with the latest version of \toolnameRoundingSAT.

References

  • [2] Fadi A. Aloul, Karim A. Sakallah & Igor L. Markov (2006): Efficient symmetry breaking for Boolean satisfiability. IEEE Transactions on Computers 55(5), pp. 549–558, 10.1109/TC.2006.75.
  • [3] Mario Alviano, Giovanni Amendola, Carmine Dodaro, Nicola Leone, Marco Maratea & Francesco Ricca (2019): Evaluation of Disjunctive Programs in WASP. In: Proceedings of LPNMR, LNCS 11481, pp. 241–255, 10.1007/978-3-030-20528-7_18.
  • [4] Mario Alviano, Carmine Dodaro, Wolfgang Faber, Nicola Leone & Francesco Ricca (2013): WASP: A Native ASP Solver Based on Constraint Learning. In: Proceedings of LPNMR, pp. 54–66, 10.1007/978-3-642-40564-8_6.
  • [5] Mario Alviano & Wolfgang Faber (2018): Aggregates in Answer Set Programming. KI 32(2-3), pp. 119–124, 10.1007/s13218-018-0545-9.
  • [6] Benjamin Andres, David Rajaratnam, Orkunt Sabuncu & Torsten Schaub (2015): Integrating ASP into ROS for Reasoning in Robots. In: Proceedings of LPNMR, pp. 69–82, 10.1007/978-3-319-23264-5_7.
  • [7] Mutsunori Banbara, Benjamin Kaufmann, Max Ostrowski & Torsten Schaub (2017): Clingcon: The next generation. TPLP 17(4), pp. 408–461, 10.1017/S1471068417000138.
  • [8] Clark W. Barrett, Roberto Sebastiani, Sanjit A. Seshia & Cesare Tinelli (2009): Satisfiability Modulo Theories. In: Handbook of Satisfiability, pp. 825–885, 10.3233/978-1-58603-929-5-825.
  • [9] Rachel Ben-Eliyahu & Rina Dechter (1994): Propositional Semantics for Disjunctive Logic Programs. Ann. Math. Artif. Intell. 12(1-2), pp. 53–87, 10.1007/BF01530761.
  • [10] Daniel Le Berre & Anne Parrain (2010): The Sat4j library, release 2.2. JSAT 7(2-3), pp. 59–6.
  • [11] Jori Bomanson (2017): lp2normal - A Normalization Tool for Extended Logic Programs. In: Proceedings of LPNMR, pp. 222–228, 10.1007/978-3-319-61660-5_20.
  • [12] Daniel R. Brooks, Esra Erdem, Selim T. Erdogan, James W. Minett & Donald Ringe (2007): Inferring Phylogenetic Trees Using Answer Set Programming. J. Autom. Reasoning 39(4), pp. 471–511, 10.1007/s10817-007-9082-1.
  • [13] Maurice Bruynooghe, Hendrik Blockeel, Bart Bogaerts, Broes De Cat, Stef De Pooter, Joachim Jansen, Anthony Labarre, Jan Ramon, Marc Denecker & Sicco Verwer (2015): Predicate logic as a modeling language: modeling and solving some machine learning and data mining problems with IDP3. TPLP 15(6), pp. 783–817, 10.1017/S147106841400009X.
  • [14] Francesco Calimeri, Wolfgang Faber, Martin Gebser, Giovambattista Ianni, Roland Kaminski, Thomas Krennwallner, Nicola Leone, Marco Maratea, Francesco Ricca & Torsten Schaub (2020): ASP-Core-2 Input Language Format. TPLP 20(2), pp. 294–309, 10.1017/S1471068419000450.
  • [15] Donald Chai & Andreas Kuehlmann (2005): A fast pseudo-Boolean constraint solver. IEEE Trans. on CAD of Integrated Circuits and Systems 24(3), pp. 305–317, 10.1109/TCAD.2004.842808.
  • [16] William J. Cook, Collette R. Coullard & György Turán (1987): On the complexity of cutting-plane proofs. Discrete Applied Mathematics 18(1), pp. 25–38, 10.1016/0166-218X(87)90039-4.
  • [17] Broes De Cat, Bart Bogaerts, Jo Devriendt & Marc Denecker (2013): Model Expansion in the Presence of Function Symbols Using Constraint Programming. In: Proceedings of ICTAI, pp. 1068–1075, 10.1109/ICTAI.2013.159.
  • [18] Jo Devriendt & Bart Bogaerts (2016): BreakID: Static Symmetry Breaking for ASP (System Description). In Bart Bogaerts & Amelia Harrison, editors: Proceedings of ASPOCP, pp. 25–39.
  • [19] Jo Devriendt, Bart Bogaerts & Maurice Bruynooghe (2017): Symmetric Explanation Learning: Effective Dynamic Symmetry Handling for SAT. In: Proceedings of SAT, pp. 83–100, 10.1007/978-3-319-66263-3_6.
  • [20] Jo Devriendt, Bart Bogaerts, Maurice Bruynooghe & Marc Denecker (2016): Improved Static Symmetry Breaking for SAT. In: Proceedings of SAT, pp. 104–122, 10.1007/978-3-319-40970-2_8.
  • [21] Heidi E. Dixon & Matthew L. Ginsberg (2002): Inference Methods for a Pseudo-Boolean Satisfiability Solver. In: Proceedings of AAAI, pp. 635–640.
  • [22] C. Drescher, O. Tifrea & T. Walsh (2011): Symmetry-breaking Answer Set Solving. AI Communications 24(2), pp. 177–194, 10.3233/AIC-2011-0495.
  • [23] Christian Drescher & Toby Walsh (2011): Conflict-Driven Constraint Answer Set Solving with Lazy Nogood Generation. In: AAAI, pp. 1772–1773.
  • [24] Jan Elffers, Jesús Giráldez-Cru, Jakob Nordström & Marc Vinyals (2018): Using Combinatorial Benchmarks to Probe the Reasoning Power of Pseudo-Boolean Solvers. In: Proceedings of SAT, pp. 75–93, 10.1007/978-3-319-94144-8_5.
  • [25] Jan Elffers & Jakob Nordström (2018): Divide and Conquer: Towards Faster Pseudo-Boolean Solving. In: Proceedings of IJCAI, pp. 1291–1299, 10.24963/ijcai.2018/180.
  • [26] Wolfgang Faber, Gerald Pfeifer & Nicola Leone (2011): Semantics and complexity of recursive aggregates in answer set programming. AIJ 175(1), pp. 278–298, 10.1016/j.artint.2010.04.002.
  • [27] Paolo Ferraris (2005): Answer Sets for Propositional Theories. In: Proceedings of LPNMR, pp. 119–131, 10.1007/11546207_10.
  • [28] Martin Gebser, Tomi Janhunen & Jussi Rintanen (2014): Answer Set Programming as SAT modulo Acyclicity. In: Proceedings of ECAI, pp. 351–356, 10.3233/978-1-61499-419-0-351.
  • [29] Martin Gebser, Benjamin Kaufmann & Torsten Schaub (2012): Conflict-driven answer set solving: From theory to practice. AIJ 187, pp. 52–89, 10.1016/j.artint.2012.04.001.
  • [30] Martin Gebser, Joohyung Lee & Yuliya Lierler (2007): Head-Elementary-Set-Free Logic Programs. In: Proceedings of LPNMR, pp. 149–161, 10.1007/978-3-540-72200-7_14.
  • [31] Martin Gebser, Marco Maratea & Francesco Ricca (2020): The Seventh Answer Set Programming Competition: Design and Results. TPLP 20(2), pp. 176–204, 10.1017/S1471068419000061.
  • [32] Martin Gebser, Torsten Schaub & Sven Thiele (2007): GrinGo: A New Grounder for Answer Set Programming. In: Proceedings of LPNMR, pp. 266–271, 10.1007/978-3-540-72200-7_24.
  • [33] Martin Gebser, Torsten Schaub, Sven Thiele & Philippe Veber (2011): Detecting inconsistencies in large biological networks with answer set programming. TPLP 11(2-3), pp. 323–360, 10.1017/S1471068410000554.
  • [34] Michael Gelfond & Vladimir Lifschitz (1988): The Stable Model Semantics for Logic Programming. In: Proceedings of ICLP/SLP, pp. 1070–1080.
  • [35] Michael Gelfond, Halina Przymusinska, Vladimir Lifschitz & Miroslaw Truszczynski (1991): Disjunctive Defaults. In: Proceedings of KR, pp. 230–237.
  • [36] Michael Gelfond & Yuanlin Zhang (2014): Vicious Circle Principle and Logic Programs with Aggregates. TPLP 14(4–5), pp. 587–601, 10.1017/S1471068414000222.
  • [37] Enrico Giunchiglia, Yuliya Lierler & Marco Maratea (2006): Answer Set Programming Based on Propositional Satisfiability. J. Autom. Reasoning 36(4), pp. 345–377, 10.1007/s10817-006-9033-2.
  • [38] Giovanni Grasso, Salvatore Iiritano, Nicola Leone & Francesco Ricca (2009): Some DLV Applications for Knowledge Management. In: Proceedings of LPNMR, pp. 591–597, 10.1007/978-3-642-04238-6_63.
  • [39] LLC Gurobi Optimization (2020): Gurobi Optimizer Reference Manual.
  • [40] Tomi Janhunen (2004): Representing Normal Programs with Clauses. In: Proceedings of ECAI, pp. 358–362.
  • [41] Tomi Janhunen (2018): Cross-Translating Answer Set Programs Using the ASPTOOLS Collection. KI 32(2-3), pp. 183–184, 10.1007/s13218-018-0529-9.
  • [42] Tomi Janhunen, Martin Gebser, Jussi Rintanen, Henrik J. Nyman, Johan Pensar & Jukka Corander (2017): Learning discrete decomposable graphical models via constraint optimization. Statistics and Computing 27(1), pp. 115–130, 10.1007/s11222-015-9611-4.
  • [43] Tomi Janhunen, Roland Kaminski, Max Ostrowski, Sebastian Schellhorn, Philipp Wanko & Torsten Schaub (2017): Clingo goes linear constraints over reals and integers. TPLP 17(5-6), pp. 872–888, 10.1017/S1471068417000242.
  • [44] Tomi Janhunen & Ilkka Niemelä (2011): Compact Translations of Non-disjunctive Answer Set Programs to Propositional Clauses. In: Logic Programming, Knowledge Representation, and Nonmonotonic Reasoning - Essays Dedicated to Michael Gelfond on the Occasion of His 65th Birthday, pp. 111–130, 10.1007/978-3-642-20832-4_8.
  • [45] Tomi Janhunen, Ilkka Niemelä & Mark Sevalnev (2009): Computing Stable Models via Reductions to Difference Logic. In: LPNMR, pp. 142–154, 10.1007/978-3-642-04238-6_14.
  • [46] Laura Koponen, Emilia Oikarinen, Tomi Janhunen & Laura Säilä (2015): Optimizing phylogenetic supertrees using answer set programming. TPLP 15(4-5), pp. 604–619, 10.1017/S1471068415000265.
  • [47] Nicola Leone, Gerald Pfeifer, Wolfgang Faber, Thomas Eiter, Georg Gottlob, Simona Perri & Francesco Scarcello (2006): The DLV system for knowledge representation and reasoning. ACM Trans. Comput. Log. 7(3), pp. 499–562, 10.1145/1149114.1149117.
  • [48] Yuliya Lierler (2005): cmodels - SAT-Based Disjunctive Answer Set Solver. In: Proceedings of LPNMR, pp. 447–451, 10.1007/11546207_44.
  • [49] Yuliya Lierler (2012): On the Relation of Constraint Answer Set Programming Languages and Algorithms. In: Proceedings of AAAI.
  • [50] Vladimir Lifschitz (1999): Answer Set Planning. In: Proceedings of ICLP, pp. 23–37.
  • [51] Vladimir Lifschitz & Hudson Turner (1994): Splitting a Logic Program. In: Proceedings of ICLP, pp. 23–37.
  • [52] Fangzhen Lin & Jicheng Zhao (2003): On Tight Logic Programs and Yet Another Translation from Normal Logic Programs to Propositional Logic. In: Proceedings of IJCAI, pp. 853–858.
  • [53] Fangzhen Lin & Yuting Zhao (2004): ASSAT: Computing Answer Sets of a Logic Program by SAT Solvers. AIJ 157(1-2), pp. 115–137, 10.1016/j.artint.2004.04.004.
  • [54] Guohua Liu, Tomi Janhunen & Ilkka Niemelä (2012): Answer Set Programming via Mixed Integer Programming. In: Proceedings of KR.
  • [55] Guohua Liu & Jia-Huai You (2010): Level Mapping Induced Loop Formulas for Weight Constraint and Aggregate Logic Programs. Fundam. Inform. 101(3), pp. 237–255, 10.3233/FI-2010-286.
  • [56] Lengning Liu & Miroslaw Truszczynski (2005): Pbmodels - Software to Compute Stable Models by Pseudoboolean Solvers. In: Proceedings of LPNMR, pp. 410–415, 10.1007/11546207_37.
  • [57] Lengning Liu & Mirosław Truszczyński (2006): Properties and Applications of Programs with Monotone and Convex Constraints. J. AI Res. (JAIR) 27, pp. 299–334, 10.1613/jair.2009.
  • [58] Vasco M. Manquinho & João P. Marques Silva (2006): On Using Cutting Planes in Pseudo-Boolean Optimization. JSAT 2(1-4), pp. 209–219.
  • [59] Victor Marek, Ilkka Niemelä & Mirosław Truszczyński (2008): Logic programs with monotone abstract constraint atoms. TPLP 8(2), pp. 167–199, 10.1017/S147106840700302X.
  • [60] Victor Marek & Mirosław Truszczyński (1999): Stable Models and an Alternative Logic Programming Paradigm. In: The Logic Programming Paradigm: A 25-Year Perspective, Springer-Verlag, pp. 375–398, 10.1007/978-3-642-60085-2_17.
  • [61] João P. Marques-Silva & Karem A. Sakallah (1999): GRASP: A Search Algorithm for Propositional Satisfiability. IEEE Transactions on Computers 48(5), pp. 506–521, 10.1109/12.769433.
  • [62] Hakan Metin, Souheib Baarir & Fabrice Kordon (2019): Composing Symmetry Propagation and Effective Symmetry Breaking for SAT Solving. In: Proceedings of NFM, LNCS 11460, pp. 316–332, 10.1007/978-3-030-20652-9_21.
  • [63] Ilkka Niemelä (1999): Logic Programs with Stable Model Semantics as a Constraint Programming Paradigm. Ann. Math. Artif. Intell. 25(3-4), pp. 241–273, 10.1023/A:1018930122475.
  • [64] Monica Nogueira, Marcello Balduccini, Michael Gelfond, Richard Watson & Matthew Barry (2001): An A-Prolog Decision Support System for the Space Shuttle. In: PADL, pp. 169–183.
  • [65] Nikolay Pelov, Marc Denecker & Maurice Bruynooghe (2007): Well-founded and Stable Semantics of Logic Programs with Aggregates. TPLP 7(3), pp. 301–353, 10.1017/S1471068406002973.
  • [66] Francesco Ricca, Antonella Dimasi, Giovanni Grasso, Salvatore Maria Ielpa, Salvatore Iiritano, Marco Manna & Nicola Leone (2010): A Logic-Based System for e-Tourism. Fundam. Inform. 105(1-2), pp. 35–55, 10.3233/FI-2010-357.
  • [67] Francesca Rossi, Peter van Beek & Toby Walsh, editors (2006): Handbook of Constraint Programming. Foundations of Artificial Intelligence 2, Elsevier.
  • [68] Olivier Roussel & Vasco M. Manquinho (2009): Pseudo-Boolean and Cardinality Constraints. In Armin Biere, Marijn Heule, Hans van Maaren & Toby Walsh, editors: Handbook of Satisfiability, Frontiers in Artificial Intelligence and Applications 185, IOS Press, pp. 695–733, 10.3233/978-1-58603-929-5-695.
  • [69] Hossein M. Sheini & Karem A. Sakallah (2006): Pueblo: A Hybrid Pseudo-Boolean SAT Solver. JSAT 2(1-4), pp. 165–189.
  • [70] Da Shen & Yuliya Lierler (2018): SMT-Based Constraint Answer Set Solver EZSMT+ for Non-Tight Programs. In: Proceedings of KR, pp. 67–71.
  • [71] G. Sierksma, P. van Dam & G.A. Tijssen (1996): Linear and integer programming: theory and practice. Monographs and textbooks in pure and applied mathematics, Dekker.
  • [72] Patrik Simons, Ilkka Niemelä & Timo Soininen (2002): Extending and implementing the stable model semantics. AIJ 138(1-2), pp. 181–234, 10.1016/S0004-3702(02)00187-X.
  • [73] Tran Cao Son, Enrico Pontelli & Islam Elkabani (2006): An Unfolding-Based Semantics for Logic Programming with Aggregates. CoRR abs/cs/0605038.
  • [74] Peter J. Stuckey (2010): Lazy Clause Generation: Combining the Power of SAT and CP (and MIP?) Solving. In: Proceedings of CPAIOR, pp. 5–9, 10.1007/978-3-642-13520-0_3.
  • [75] Tommi Syrjänen (2000): Lparse 1.0 User’s Manual. http://www.tcs.hut.fi/Software/smodels/lparse.ps.gz.
  • [76] Juha Tiihonen, Timo Soininen, Ilkka Niemelä & Reijo Sulonen (2003): A practical tool for mass-customising configurable products. In: Proceedings ICED, pp. 1290–1299.
  • [77] Joost Vennekens, David Gilis & Marc Denecker (2006): Splitting an operator: Algebraic modularity results for logics with fixpoint semantics. ACM Trans. Comput. Log. 7(4), pp. 765–797, 10.1145/1182613.1189735.
  • [78] Marc Vinyals, Jan Elffers, Jesús Giráldez-Cru, Stephan Gocht & Jakob Nordström (2018): In Between Resolution and Cutting Planes: A Study of Proof Systems for Pseudo-Boolean SAT Solving. In: Proceedings of SAT, pp. 292–310, 10.1007/978-3-319-94144-8_18.