
National Institute of Informatics, Japan
Emails: [email protected], [email protected], [email protected]

Towards end-to-end ASP computation

Taisuke Sato (ORCID: 0000-0001-9062-0729), Akihiro Takemura (ORCID: 0000-0003-4130-8311), Katsumi Inoue (ORCID: 0000-0002-2717-9122)
Abstract

We propose an end-to-end approach for answer set programming (ASP) that computes, linear algebraically, stable models satisfying given constraints. The idea is to implement Lin-Zhao's theorem [14] together with constraints directly in vector spaces, as numerical minimization of a cost function constructed from a matricized normal logic program, the loop formulas in Lin-Zhao's theorem and the constraints, so that no symbolic ASP or SAT solver is involved in our approach. We also propose a precomputation that shrinks the program size, and heuristics for loop formulas, to reduce computational difficulty. We empirically test our approach with programming examples including the 3-coloring and Hamiltonian cycle problems. Since our approach is purely numerical and contains only vector/matrix operations, acceleration by parallel technologies such as many-core processors and GPUs is expected.

Keywords: Answer Set Programming, end-to-end ASP, vector space, cost minimization, loop formula, supported model, stable model

1 Introduction

Computing the stable model semantics [9] lies at the heart of answer set programming (ASP) [19, 16, 11], and a variety of approaches have been proposed so far. Early approaches such as Smodels [20] used backtracking. Then the concept of loop formulas was introduced, and approaches that use a SAT solver to compute stable models based on Lin-Zhao's theorem [14] were proposed; they include ASSAT [14] and Cmodels [10], for example. Later, more elaborate approaches such as Clasp [7, 8], based on conflict-driven nogood learning, were developed. While these symbolic approaches continue to predominate in ASP, there has been another trend towards differentiable methods. For example, Differentiable ASP/SAT [18] computes stable models by an ASP solver that utilizes derivatives of a cost function. More recently, NeurASP [29] and SLASH [26] combined deep learning and ASP. In their approaches, deep learning is not used in an end-to-end way to compute stable models, but as a component that computes and learns probabilities represented by special atoms interfacing to ASP. A step towards end-to-end computation was taken by Aspis et al. [2] and Takemura and Inoue [27]. They formulated the computation of supported models, a superclass of stable models (supported models and stable models of a propositional normal logic program coincide when the program is tight, i.e., has no infinite call chain through positive goals [5, 4]), entirely as fixpoint computation in vector spaces, and obtain supported models represented by binary vectors [1, 17]. Also, in the context of probabilistic modeling, an end-to-end sampling of supported models was proposed by Sato and Kojima in [24]. However, there still remains a gap between computing stable models and computing supported ones.

In this paper, we propose an end-to-end approach for ASP and compute stable models satisfying given constraints in vector spaces. The idea is simple: we implement Lin-Zhao's theorem [14] together with constraints directly in vector spaces as a cost minimization problem, so that no symbolic ASP or SAT solver is involved. Also, as our approach is purely numerical and contains only vector/matrix operations, acceleration by parallel technologies such as many-core processors and GPUs is expected.

Technically, Lin-Zhao's theorem [14] states that a stable model of a ground normal logic program coincides with a supported model that satisfies the "loop formulas" associated with the program; these are propositional formulas indicating how to get out of infinite loops of top-down rule invocation. We formulate finding such a model as root finding, in a vector space, of a non-negative cost function represented in terms of the matricized program and loop formulas. The problem is that, in whatever approach we may take, symbolic or non-symbolic, computing supported models is NP-hard (for example, graph coloring is solved by computing supported models) and there can be exponentially many loop formulas to be satisfied [12]. We reduce this computational difficulty in two ways. One is a precomputation that removes from the search space atoms known to be false in any stable model, yielding a smaller program. The other is to heuristically choose the loop formulas to be satisfied. In a symbolic approach, the latter would mean allowing non-stable model computation, but in our continuous approach it means gracefully degraded gradient information for the continuous search process.

Our end-to-end computing framework differs from those of [2] and [27] in that they basically compute supported models, and their computing process has no mechanism, such as loop formulas, to exclude non-stable models. Also, we impose no restriction on the syntax of programs such as the MD condition [2] or the SD condition [27]; any propositional normal logic program is acceptable. More importantly, we incorporate the use of constraints, i.e., rules like $\leftarrow a \wedge \neg b$, which make ASP programming smooth and practical.

Hence our contributions include

  • a proposal of end-to-end computing of stable models in vector spaces for unrestricted programs

  • augmentation of the above by constraints

  • introduction of precomputation and heuristics to reduce computational difficulty of stable model computation.

We add that, since our primary purpose in this paper is to establish the theoretical feasibility of end-to-end ASP computing in vector spaces, the programming examples are small and the implementation is of a preliminary nature.

In what follows, after preliminaries in Section 2, we formulate the computation of supported models in vector spaces in Section 3, and that of stable models in Section 4. We then show programming examples in Section 5, including ASP programs for the 3-coloring problem and the Hamiltonian cycle (HC) problem, and there compare the performance of precomputation and the loop formula heuristics. Section 6 contains related work and Section 7 concludes.

2 Preliminaries

In this paper, we mean by a program a propositional normal logic program $P$, which is a finite set of rules of the form $a \leftarrow G$ where $a$ is an atom (we equate propositional variables with atoms) called the head, and $G$ is a conjunction of literals (a literal is an atom, a positive literal, or its negation, a negative literal) called the body of the rule. We suppose $P$ is written in a given set of atoms ${\cal A}$ but usually assume ${\cal A} = \mbox{atom}(P)$, i.e., the set of atoms occurring in $P$. We use $G^{+}$ to denote the conjunction of positive literals in $G$. $G$ may be empty; the empty conjunction is always true. We call $a \leftarrow G$ a rule for $a$. Let $a \leftarrow G_{1}, \ldots, a \leftarrow G_{m}$ be the rules for $a$ in $P$. When $m > 0$, put $\mbox{iff}(a) = a \Leftrightarrow G_{1} \vee \cdots \vee G_{m}$. When $m = 0$, i.e., there is no rule for $a$, put $\mbox{iff}(a) = a \Leftrightarrow \bot$ where $\bot$ is a special symbol representing the empty disjunction, which is always false. We call $\mbox{iff}(a)$ the completed rule for $a$. The completion of $P$, $\mbox{comp}(P)$, is defined as $\mbox{comp}(P) = \{\mbox{iff}(a) \mid \mbox{atom } a \mbox{ occurs in } P\}$. For a finite set $S$, we denote the number of elements in $S$ by $|S|$. So $|P|$ is the number of rules in the program $P$.

A model (assignment) $I$ over a set of atoms ${\cal A}$ is a mapping which determines the truth value of each atom $a \in {\cal A}$. The truth value of a formula $F$ is then defined inductively from $I$, and if $F$ evaluates to true under $I$, we say $I$ satisfies $F$, $F$ is true in $I$, or $I$ is a model of $F$, and write $I \models F$. This notation is extended to a set $F = \{F_{1},\ldots,F_{k}\}$ by considering $F$ as the conjunction $F_{1} \wedge \cdots \wedge F_{k}$. For convenience, we always equate $I$ with $\{a \in {\cal A} \mid I \models a\}$, i.e., the set of atoms true in $I$. When $I$ satisfies all rules in the program $P$, i.e., $I \models P$, $I$ is said to be a model of $P$. If no rule body contains a negative literal, $P$ is said to be a definite program. In that case, $P$ always has the least model (in the sense of set inclusion) $\{a \in {\cal A} \mid P \vdash a\}$, i.e., the set of atoms provable from $P$. A model $I$ of $\mbox{comp}(P)$ is called a supported model of $P$ [17]. When $P$ is a definite program, its least model is also a supported model. In general, for non-definite $P$, there can be zero or multiple supported models. Stable models are a subclass of supported models. They are defined as follows. Given a program $P$ and a model $I$, remove all rules from $P$ whose body contains a negative literal false in $I$, then remove all negative literals from the remaining rules. The resulting program, $P^{I}$, is called the Gelfond-Lifschitz (GL) reduction of $P$ by $I$, or just the reduct of $P$ by $I$. It is a definite program and has a least model. If this least model is identical to $I$, then $I$ is called a stable model of $P$ [9]. Like supported models, $P$ may have zero or multiple stable models. Since deciding the existence of a stable model is NP-complete [16], and so is that of a supported model, their computation is expected to be hard.
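To make the GL reduction and the stable model test concrete, here is a minimal Python sketch (an illustrative encoding of our own, not the paper's implementation): a rule $a \leftarrow b_{1} \wedge \cdots \wedge \neg c_{1} \wedge \cdots$ is a triple `(head, positives, negatives)` and a model is a set of atoms.

```python
def gl_reduct(program, I):
    """Gelfond-Lifschitz reduct P^I: drop every rule whose body contains a
    negative literal false in I (i.e. some 'not a' with a in I), then strip
    the remaining negative literals."""
    return [(head, pos) for head, pos, neg in program
            if all(a not in I for a in neg)]

def least_model(definite):
    """Least model of a definite program by naive fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite:
            if head not in model and all(a in model for a in pos):
                model.add(head)
                changed = True
    return model

def is_stable(program, I):
    """I is a stable model of P iff I is the least model of P^I."""
    return least_model(gl_reduct(program, I)) == set(I)

# example: {p <- q, not r;  p <- not q;  q}
P = [("p", ["q"], ["r"]), ("p", [], ["q"]), ("q", [], [])]
assert is_stable(P, {"p", "q"})   # reduct {p <- q; q} has least model {p, q}
```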

Let $F = d_{1} \vee \cdots \vee d_{h}$ be a Boolean formula in $n$ variables (atoms) in disjunctive normal form (DNF), where each $d_{i}$ ($1 \leq i \leq h$) is a conjunction of literals called a disjunct of $F$. When $F$ has no disjunct, $F$ is false. $F$ is called full when every $d_{i}$ is a conjunction of $n$ distinct literals.

A walk in a directed graph is a sequence $v_{1} \rightarrow v_{2} \rightarrow \cdots \rightarrow v_{k}$ ($k \geq 1$) of vertices representing the corresponding sequence of edges $(v_{1},v_{2}),\ldots,(v_{k-1},v_{k})$. When $v_{k} = v_{1}$, it is said to be closed. A cycle is a closed walk $v_{1} \rightarrow v_{2} \rightarrow \cdots \rightarrow v_{k} \rightarrow v_{1}$ where $v_{1},\ldots,v_{k}$ are all distinct. A Hamiltonian cycle is a cycle which visits every vertex exactly once. A path is a walk with no vertex repeated. A directed subgraph is called strongly connected if there are paths from $v_{1}$ to $v_{2}$ and from $v_{2}$ to $v_{1}$ for any pair of distinct vertices $v_{1}$ and $v_{2}$. This "strongly connected" relation induces an equivalence relation over the set of vertices, and an induced equivalence class is called a strongly connected component (SCC).

The positive dependency graph $\mbox{pdg}(P)$ for a program $P$ is a directed graph whose vertices are the atoms occurring in $P$ and which has an edge $(a,b)$ from atom $a$ to atom $b$ if-and-only-if (iff) there is a rule $a \leftarrow G$ in $P$ such that $b$ is a positive literal in $G$. $P$ is said to be tight [5] (in [5], this property is called "positive-order-consistent") when $\mbox{pdg}(P)$ is acyclic, i.e., has no cycle. A loop $L = \{a_{1},\ldots,a_{k}\}$ ($k > 0$) in $P$ is a set of atoms such that for any pair of atoms $a_{1}$ and $a_{2}$ in $L$ ($a_{1} = a_{2}$ allowed), there is a path in $\mbox{pdg}(P)$ from $a_{1}$ to $a_{2}$ and also from $a_{2}$ to $a_{1}$.
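As a sketch (helper names `pdg` and `is_tight` are ours, and rules are assumed to be encoded as `(head, positives, negatives)` triples), the positive dependency graph and the tightness test can be written as:

```python
def pdg(program):
    """Positive dependency graph: edge (a, b) iff some rule a <- G has b in G+."""
    g = {}
    for head, pos, _neg in program:
        g.setdefault(head, set()).update(pos)
    return g

def is_tight(program):
    """P is tight iff pdg(P) is acyclic (DFS back-edge check)."""
    g = pdg(program)
    state = {}                          # absent = unvisited, 1 = on stack, 2 = done
    def acyclic_from(v):
        state[v] = 1
        for w in g.get(v, ()):
            s = state.get(w, 0)
            if s == 1:                  # back edge: a cycle through positive goals
                return False
            if s == 0 and not acyclic_from(w):
                return False
        state[v] = 2
        return True
    return all(state.get(v, 0) == 2 or acyclic_from(v) for v in g)

# {p <- q, not r;  p <- not q;  q} is tight; {p <- q;  q <- p} has the loop {p, q}
assert is_tight([("p", ["q"], ["r"]), ("p", [], ["q"]), ("q", [], [])])
assert not is_tight([("p", ["q"], []), ("q", ["p"], [])])
```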

We denote vectors by bold italic lower case letters such as ${\mathbf a}$, where ${\mathbf a}(i)$ represents the $i$-th element of ${\mathbf a}$. Vectors are column vectors by default. We use $({\mathbf a} \bullet {\mathbf b})$ to stand for the inner product (dot product) of vectors ${\mathbf a}$ and ${\mathbf b}$ of the same dimension. $\|{\mathbf a}\|_{1}$ and $\|{\mathbf a}\|_{2}$ respectively denote the 1-norm and 2-norm of ${\mathbf a}$, where $\|{\mathbf a}\|_{1} = \sum_{i} |{\mathbf a}(i)|$ and $\|{\mathbf a}\|_{2} = \sqrt{\sum_{i} {\mathbf a}(i)^{2}}$. We use ${\mathbf 1}$ to denote an all-ones vector of appropriate dimension. A model $I$ over a set ${\cal A} = \{a_{1},\ldots,a_{n}\}$ of $n$ ordered atoms is equated with an $n$-dimensional binary vector ${\mathbf u}_{I} \in \{1,0\}^{n}$ such that ${\mathbf u}_{I}(i) = 1$ if $a_{i}$ is true in $I$ and ${\mathbf u}_{I}(i) = 0$ otherwise ($1 \leq i \leq n$). ${\mathbf u}_{I}$ is called the vectorized $I$.

Italic capital letters such as $A$ stand for matrices. We use $A(i,j)$ to denote the $(i,j)$-th element of $A$, $A(i,:)$ the $i$-th row of $A$, and $A(:,j)$ the $j$-th column of $A$. We often consider one-dimensional matrices as (row or column) vectors. $\|A\|_{F}$ denotes the Frobenius norm of $A$. Let $A, B \in \mathbb{R}^{m \times n}$ be $m \times n$ matrices. $A \odot B$ denotes their Hadamard product, i.e., $(A \odot B)(i,j) = A(i,j)B(i,j)$ for $i,j$ ($1 \leq i \leq m$, $1 \leq j \leq n$). $[A;B]$ designates the $2m \times n$ matrix of $A$ stacked onto $B$. We implicitly assume that all dimensions of vectors and matrices in the various expressions are compatible. We introduce a piecewise linear function $\mbox{min}_{1}(x) = \mbox{min}(x,1)$ that returns the lesser of $x$ and 1 as an activation function; it is related to the popular activation function $\mbox{ReLU}(x) = \mbox{max}(x,0)$ by $1 - \mbox{min}_{1}(x) = \mbox{ReLU}(1-x)$. $\mbox{min}_{1}(A)$ denotes the result of component-wise application of $\mbox{min}_{1}(x)$ to a matrix $A$. We also introduce thresholding notation. Suppose $\theta$ is a real number and ${\mathbf a}$ an $n$-dimensional vector. Then $[{\mathbf a} \leq \theta]$ denotes the binary vector obtained by thresholding ${\mathbf a}$ at $\theta$: for $i$ ($1 \leq i \leq n$), $[{\mathbf a} \leq \theta](i) = 1$ if ${\mathbf a}(i) \leq \theta$ and $[{\mathbf a} \leq \theta](i) = 0$ otherwise. $[{\mathbf a} \geq \theta]$ is treated similarly. We extend thresholding to matrices; thus $[A \leq 1]$ means a matrix such that $[A \leq 1](i,j) = 1$ if $A(i,j) \leq 1$ and $[A \leq 1](i,j) = 0$ otherwise. For convenience, we generalize bit inversion to an $n$-dimensional vector ${\mathbf a}$ and use the expression $1 - {\mathbf a}$ to denote the $n$-dimensional vector such that $(1-{\mathbf a})(i) = 1 - {\mathbf a}(i)$ for $i$ ($1 \leq i \leq n$). $1 - A$ is treated similarly.
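The $\mbox{min}_{1}$ activation, its ReLU identity and the thresholding notation are straightforward to realize in, for example, numpy (a sketch assuming numpy is available; `min1` is our own function name):

```python
import numpy as np

def min1(x):
    """Component-wise min(x, 1)."""
    return np.minimum(x, 1)

a = np.array([0.0, 0.5, 2.0, 3.0])

# the identity 1 - min1(x) == ReLU(1 - x)
assert np.array_equal(1 - min1(a), np.maximum(1 - a, 0))

# thresholding [a >= 1] as a 0/1 vector
assert np.array_equal((a >= 1).astype(int), np.array([0, 0, 1, 1]))
```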

3 Computing supported models in vector spaces

In this section, we formulate supported models in vector spaces and show how to compute them by cost minimization.

3.1 Matricized programs

First we encode programs as matrices. For concreteness, we explain by an example (the generalization is easy). Suppose we are given the program $P_{0}$ below, containing three rules $\{r_{1},r_{2},r_{3}\}$ over a set of atoms ${\cal A}_{0} = \{p,q,r\}$.

\[
P_{0} \;=\; \left\{\;
\begin{array}{llll}
p & \leftarrow & q \wedge \neg r & \quad\mbox{: rule $r_{1}$ for $p$}\\
p & \leftarrow & \neg q & \quad\mbox{: rule $r_{2}$ for $p$}\\
q & & & \quad\mbox{: rule $r_{3}$ for $q$}
\end{array}
\right. \tag{5}
\]

Assuming the atoms are ordered as $p,q,r$ and the rules $\{r_{1},r_{2},r_{3}\}$ are ordered correspondingly as in (5), we encode $P_{0}$ as a pair of matrices $(D_{0},Q_{0})$. Here $Q_{0}$ represents the conjunctions (the bodies of $\{r_{1},r_{2},r_{3}\}$) and $D_{0}$ their disjunctions, so that they jointly represent $P_{0}$.

\[
Q_{0} \;=\;
\begin{array}{c}
\begin{matrix} p & q & r & \neg p & \neg q & \neg r \end{matrix}\\
\begin{bmatrix}
0 & 1 & 0 & 0 & 0 & 1\\
0 & 0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}
\end{array}
\quad
\begin{matrix}
\mbox{: $r_{1}$ has the body $q \wedge \neg r$}\\
\mbox{: $r_{2}$ has the body $\neg q$}\\
\mbox{: $r_{3}$ has the empty body}
\end{matrix}
\tag{8}
\]

\[
D_{0} \;=\;
\begin{array}{c}
\begin{matrix} r_{1} & r_{2} & r_{3} \end{matrix}\\
\begin{bmatrix}
1 & 1 & 0\\
0 & 0 & 1\\
0 & 0 & 0
\end{bmatrix}
\end{array}
\quad
\begin{matrix}
\mbox{: $p$ has two rules, $r_{1}$ and $r_{2}$}\\
\mbox{: $q$ has one rule, $r_{3}$}\\
\mbox{: $r$ has no rule}
\end{matrix}
\tag{11}
\]

As can be seen, $Q_{0}$ represents the conjunctions in $P_{0}$: for example, $Q_{0}(1,:)$ represents the conjunction $q \wedge \neg r$ of the first rule in $P_{0}$ by setting $Q_{0}(1,2) = Q_{0}(1,6) = 1$, and so on. $D_{0}$ represents the disjunctions of rule bodies. So $D_{0}(1,1) = D_{0}(1,2) = 1$ means the first atom $p$ in $\{p,q,r\}$ has two rules, the first rule $r_{1}$ and the second rule $r_{2}$, representing the disjunction $(q \wedge \neg r) \vee \neg q$ for $p$.

In general, a program $P$ that has $m$ rules over $n$ atoms is numerically encoded as a pair $P = (D,Q)$ of binary matrices $D \in \{0,1\}^{n \times m}$ and $Q \in \{0,1\}^{m \times 2n}$, which we call a matricized $P$. $Q$ represents the rule bodies in $P$. Suppose the atoms are ordered as ${\cal A} = \{a_{1},\ldots,a_{n}\}$ and similarly the rules are ordered as $\{r_{1}: a_{i_{1}} \leftarrow G_{1}, \ldots, r_{m}: a_{i_{m}} \leftarrow G_{m}\}$. Then the $j$-th row $Q(j,:)$ ($1 \leq j \leq m$) encodes the conjunction $G_{j}$ of the $j$-th rule $a_{i_{j}} \leftarrow G_{j}$. Write $G_{j} = a_{i_{1}} \wedge \cdots \wedge a_{i_{p}} \wedge \neg a_{i_{p+1}} \wedge \cdots \wedge \neg a_{i_{p+q}}$ ($1 \leq p,q \leq n$). Then $Q(j,:)$ is a zero vector except for $Q(j,i_{1}) = \cdots = Q(j,i_{p}) = Q(j,n+i_{p+1}) = \cdots = Q(j,n+i_{p+q}) = 1$. $D$ combines these conjunctions into a disjunction (DNF) for each atom in ${\cal A}$. If the $i$-th atom $a_{i} \in {\cal A}$ ($1 \leq i \leq n$) has rules $\{a_{i} \leftarrow G_{j_{1}}, \ldots, a_{i} \leftarrow G_{j_{s}}\}$ in $P$, we put $D(i,j_{1}) = \cdots = D(i,j_{s}) = 1$ to represent the disjunction $G_{j_{1}} \vee \cdots \vee G_{j_{s}}$, which is the right-hand side of the completed rule for $a_{i}$: $\mbox{iff}(a_{i}) = a_{i} \Leftrightarrow G_{j_{1}} \vee \cdots \vee G_{j_{s}}$. If $a_{i}$ has no rule, we put $D(i,j) = 0$ for all $j$ ($1 \leq j \leq m$). Thus the matricized $P = (D,Q)$ can represent the completed program $\mbox{comp}(P)$.
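This construction of $(D,Q)$ can be sketched as follows (an illustrative numpy version; the function name `matricize` and the `(head, positives, negatives)` rule encoding are ours). On the example program it reproduces the matrices $(D_{0},Q_{0})$ shown above.

```python
import numpy as np

def matricize(program, atoms):
    """Encode a program as (D, Q): row Q[j] marks the literals of rule j's body
    (first n columns = positive occurrences, next n = negative occurrences);
    D[i, j] = 1 iff rule j is a rule for atoms[i]."""
    n, m = len(atoms), len(program)
    idx = {a: i for i, a in enumerate(atoms)}
    Q = np.zeros((m, 2 * n), dtype=int)
    D = np.zeros((n, m), dtype=int)
    for j, (head, pos, neg) in enumerate(program):
        for a in pos:
            Q[j, idx[a]] = 1
        for a in neg:
            Q[j, n + idx[a]] = 1
        D[idx[head], j] = 1
    return D, Q

# P0: p <- q, not r.   p <- not q.   q.
P0 = [("p", ["q"], ["r"]), ("p", [], ["q"]), ("q", [], [])]
D0, Q0 = matricize(P0, ["p", "q", "r"])
assert Q0.tolist() == [[0,1,0,0,0,1], [0,0,0,0,1,0], [0,0,0,0,0,0]]
assert D0.tolist() == [[1,1,0], [0,0,1], [0,0,0]]
```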

3.2 Evaluation of formulas and the reduct of a program in vector spaces

Here we explain how propositional formulas and the reduct of a program are evaluated by a model in vector spaces. Let $I$ be a model over a set ${\cal A}$ of atoms. Recall that $I$ is equated with a subset of ${\cal A}$. We inductively define the relation "a formula $F$ is true in $I$" (equivalently, $I$ satisfies $F$), written $I \models F$, as follows. For an atom $a$, $I \models a$ iff $a \in I$. For a compound formula $F$, $I \models \neg F$ iff $I \not\models F$. When $F$ is a disjunction $F_{1} \vee \cdots \vee F_{k}$ ($k \geq 0$), $I \models F$ iff there is some $i$ ($1 \leq i \leq k$) such that $I \models F_{i}$. So the empty disjunction ($k = 0$) is always false. We consider a conjunction $F_{1} \wedge \cdots \wedge F_{k}$ as syntactic sugar for $\neg(\neg F_{1} \vee \cdots \vee \neg F_{k})$ using De Morgan's law. Consequently the empty conjunction is always true. Let $P$ be a program having $m$ ordered rules over $n$ ordered atoms as before, and let $G = a_{i_{1}} \wedge \cdots \wedge a_{i_{p}} \wedge \neg a_{i_{p+1}} \wedge \cdots \wedge \neg a_{i_{p+q}}$ be the body of a rule $a \leftarrow G$ in $P$. By definition, $I \models G$ ($G$ is true in $I$) iff $\{a_{i_{1}},\ldots,a_{i_{p}}\} \subseteq I$ and $\{a_{i_{p+1}},\ldots,a_{i_{p+q}}\} \cap I = \emptyset$. Also let $\mbox{iff}(a_{i}) = a_{i} \Leftrightarrow G_{j_{1}} \vee \cdots \vee G_{j_{s}}$ be the completed rule for an atom $a_{i}$ in $P$. We see that $I \models \mbox{iff}(a_{i})$ iff ($a_{i} \in I$ iff $I \models G_{j_{1}} \vee \cdots \vee G_{j_{s}}$).

Now we isomorphically embed the above symbolic evaluation into vector spaces. Let $I$ be a model over ordered atoms ${\cal A} = \{a_{1},\ldots,a_{n}\}$. We first vectorize $I$ as a binary column vector ${\mathbf u}_{I}$ such that ${\mathbf u}_{I}(i) = 1$ if $a_{i} \in I$ and ${\mathbf u}_{I}(i) = 0$ otherwise ($1 \leq i \leq n$), and introduce the dualized ${\mathbf u}_{I}$, written ${\mathbf u}_{I}^{\delta}$, by ${\mathbf u}_{I}^{\delta} = [{\mathbf u}_{I}; (1-{\mathbf u}_{I})]$. ${\mathbf u}_{I}^{\delta}$ is the vertical concatenation of ${\mathbf u}_{I}$ and the bit inversion of ${\mathbf u}_{I}$.

Consider a matricized program $P = (D,Q)$ ($D \in \{0,1\}^{n \times m}$, $Q \in \{0,1\}^{m \times 2n}$) and its $j$-th rule $r_{j}$ with body $G_{j}$ represented by $Q(j,:)$. Computing $Q(j,:){\mathbf u}_{I}^{\delta}$ gives the number of literals in $G_{j}$ that are true in $I$; comparing it with the number of literals $\|Q(j,:)\|_{1}$ (the 1-norm) in $G_{j}$, when $\|Q(j,:)\|_{1} = Q(j,:){\mathbf u}_{I}^{\delta}$ holds, all literals in $G_{j}$ are true in $I$ and hence the body $G_{j}$ is true in $I$. In this way we can algebraically compute the truth value of each rule body, but since we consider a conjunction as a negated disjunction, we instead compute $Q(j,:)(1-{\mathbf u}_{I}^{\delta})$, which is the number of literals in $G_{j}$ that are false in $I$. If this number is non-zero, $G_{j}$ has at least one literal false in $I$, and hence $G_{j}$ is false in $I$; the converse is also true. The existence of a false literal in $G_{j}$ is thus computed by $\mbox{min}_{1}(Q(j,:)(1-{\mathbf u}_{I}^{\delta}))$, which is 1 if there is a false literal and 0 otherwise. Consequently $1 - \mbox{min}_{1}(Q(j,:)(1-{\mathbf u}_{I}^{\delta})) = 1$ iff there is no false literal in $G_{j}$. In other words, $1 - \mbox{min}_{1}(Q(j,:)(1-{\mathbf u}_{I}^{\delta}))$ computes $I \models G_{j}$.

Now let $\{a_{i} \leftarrow G_{j_{1}}, \ldots, a_{i} \leftarrow G_{j_{s}}\}$ be an enumeration of the rules for $a_{i} \in {\cal A}$ and $G_{j_{1}} \vee \cdots \vee G_{j_{s}}$ the disjunction of their bodies. Then $d_{i} = \sum_{t=1}^{s}(1 - \mbox{min}_{1}(Q(j_{t},:)(1-{\mathbf u}_{I}^{\delta})))$ is the number of rule bodies in $\{G_{j_{1}},\ldots,G_{j_{s}}\}$ that are true in $I$. Noting that $D(i,j) = 1$ if $j \in \{j_{1},\ldots,j_{s}\}$ and $D(i,j) = 0$ otherwise by the construction of $D$ in $P = (D,Q)$, we replace the summation $\sum_{t=1}^{s}$ by matrix multiplication and obtain $d_{i} = D(i,:)(1-\mbox{min}_{1}(Q(1-{\mathbf u}_{I}^{\delta})))$. Introduce the column vector ${\mathbf d}_{I} = D(1-\mbox{min}_{1}(Q(1-{\mathbf u}_{I}^{\delta})))$. We have ${\mathbf d}_{I}(i) = d_{i}$, the number of rules for $a_{i}$ whose bodies are true in $I$ ($1 \leq i \leq n$).

Proposition 1

Let $P = (D,Q)$ be a matricized program $P$ over a set of atoms ${\cal A}$ and ${\mathbf u}_{I}$ a vectorized model $I$ over ${\cal A}$. Put ${\mathbf d}_{I} = D(1-\mbox{min}_{1}(Q(1-{\mathbf u}_{I}^{\delta})))$. It holds that

\[
I \models \mbox{comp}(P) \;\mbox{ if-and-only-if }\; \|{\mathbf u}_{I} - \mbox{min}_{1}({\mathbf d}_{I})\|_{2} = 0. \tag{12}
\]

(Proof) Put $n = |{\cal A}|$. Suppose $I \models \mbox{comp}(P)$ and write $\mbox{iff}(a_{i})$, the completed rule for an atom $a_{i} \in {\cal A}$ ($1 \leq i \leq n$), as $\mbox{iff}(a_{i}) = a_{i} \Leftrightarrow G_{j_{1}} \vee \cdots \vee G_{j_{s}}$ ($s \geq 0$). We have $I \models \mbox{iff}(a_{i})$. So if ${\mathbf u}_{I}(i) = 1$, then $I \models a_{i}$ and hence $I \models G_{j_{1}} \vee \cdots \vee G_{j_{s}}$, giving $d_{i} \geq 1$ because $d_{i}$ is the number of rule bodies in $\{G_{j_{1}},\ldots,G_{j_{s}}\}$ that are true in $I$. So $\mbox{min}_{1}(d_{i}) = 1$ holds. Otherwise, if ${\mathbf u}_{I}(i) = 0$, we have $I \not\models a_{i}$ and $I \not\models G_{j_{1}} \vee \cdots \vee G_{j_{s}}$. Consequently none of the rule bodies are true in $I$ and we have $d_{i} = \mbox{min}_{1}(d_{i}) = 0$. Putting the two together, we have ${\mathbf u}_{I}(i) = \mbox{min}_{1}(d_{i})$. Since $i$ is arbitrary, we conclude ${\mathbf u}_{I} = \mbox{min}_{1}({\mathbf d}_{I})$, or equivalently $\|{\mathbf u}_{I} - \mbox{min}_{1}({\mathbf d}_{I})\|_{2} = 0$. The converse is proved similarly.     Q.E.D.

Proposition 1 says that whether $I$ is a supported model of the program $P$ or not is determined by computing ${\mathbf u}_{I} - \mbox{min}_{1}({\mathbf d}_{I})$ in vector spaces, whose complexity is $O(mn)$ where $m$ is the number of rules in $P$ and $n$ the number of atoms occurring in $P$.

In the case of $P_{0} = (D_{0},Q_{0})$ in (5) having three rules $\{r_{1},r_{2},r_{3}\}$, take the model $I_{0} = \{p,q\}$ over the ordered atom set ${\cal A}_{0} = \{p,q,r\}$, where $p$ and $q$ are true in $I_{0}$ but $r$ is false in $I_{0}$. Then we have ${\mathbf u}_{I_{0}} = [\,1\;1\;0\,]^{T}$, ${\mathbf u}_{I_{0}}^{\delta} = [\,1\;1\;0\;0\;0\;1\,]^{T}$, $1-{\mathbf u}_{I_{0}}^{\delta} = [\,0\;0\;1\;1\;1\;0\,]^{T}$ and finally $Q_{0}(1-{\mathbf u}_{I_{0}}^{\delta}) = [\,0\;1\;0\,]^{T}$. The last equation says that the rule bodies of $r_{1}$, $r_{2}$ and $r_{3}$ have respectively zero, one and zero literals false in $I_{0}$. Hence $\mbox{min}_{1}(Q_{0}(1-{\mathbf u}_{I_{0}}^{\delta})) = [\,0\;1\;0\,]^{T}$ indicates that only the second rule body is false in $I_{0}$, and its bit inversion $1-\mbox{min}_{1}(Q_{0}(1-{\mathbf u}_{I_{0}}^{\delta})) = [\,1\;0\;1\,]^{T}$ gives the truth values of the three rule bodies in $I_{0}$. Thus, by combining these truth values via the disjunctions in $D_{0}$, we obtain ${\mathbf d}_{I_{0}} = D_{0}(1-\mbox{min}_{1}(Q_{0}(1-{\mathbf u}_{I_{0}}^{\delta}))) = [\,1\;1\;0\,]^{T}$.

${\mathbf d}_{I_{0}} = [\,1\;1\;0\,]^{T}$ gives, for each atom $a \in {\cal A}_{0}$, the number of rules for $a$ whose body is true in $I_{0}$. For example, ${\mathbf d}_{I_{0}}(1) = 1$ means that the first atom $p$ in ${\cal A}_{0}$ has one rule ($p \leftarrow q \wedge \neg r$) whose body ($q \wedge \neg r$) is true in $I_{0}$. Likewise ${\mathbf d}_{I_{0}}(2) = 1$ means that the second atom $q$ has one rule ($q \leftarrow$) whose (empty) body is true in $I_{0}$. ${\mathbf d}_{I_{0}}(3) = 0$ indicates that the third atom $r$ has no such rule. Therefore $\mbox{min}_{1}({\mathbf d}_{I_{0}}) = [\,1\;1\;0\,]^{T}$ gives the truth values of the right-hand sides of the completed rules $\{\mbox{iff}(p), \mbox{iff}(q), \mbox{iff}(r)\}$ evaluated by $I_{0}$. Since ${\mathbf u}_{I_{0}} = \mbox{min}_{1}({\mathbf d}_{I_{0}})$ holds, it follows from Proposition 1 that $I_{0}$ is a supported model of $P_{0}$.
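The whole test of Proposition 1 is a few lines of numpy. The sketch below (our own helper name `supported_check`, assuming numpy) reproduces the hand computation for $P_{0}$ and $I_{0}$:

```python
import numpy as np

def supported_check(D, Q, u):
    """Proposition 1: I |= comp(P) iff u == min1(D (1 - min1(Q (1 - u^delta))))."""
    u_dual = np.concatenate([u, 1 - u])              # dualized u: [u; 1-u]
    d = D @ (1 - np.minimum(Q @ (1 - u_dual), 1))    # true-body counts per atom
    return np.array_equal(u, np.minimum(d, 1))

# (D0, Q0) for P0 and the model I0 = {p, q} over atoms (p, q, r)
D0 = np.array([[1,1,0], [0,0,1], [0,0,0]])
Q0 = np.array([[0,1,0,0,0,1], [0,0,0,0,1,0], [0,0,0,0,0,0]])
assert supported_check(D0, Q0, np.array([1,1,0]))      # I0 is supported
assert not supported_check(D0, Q0, np.array([0,0,1]))  # {r} is not
```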

We next show how $P^{I}$, the reduct of $P$ by $I$, is dealt with in vector spaces. We assume $P$ has $m$ rules $\{r_{1},\ldots,r_{m}\}$ over a set ${\cal A} = \{a_{1},\ldots,a_{n}\}$ of $n$ ordered atoms as before. Write $Q \in \{0,1\}^{m \times 2n}$ as $Q = [Q^{(1)}\;Q^{(2)}]$, where $Q^{(1)} \in \{0,1\}^{m \times n}$ (resp. $Q^{(2)} \in \{0,1\}^{m \times n}$) is the left half (resp. the right half) of $Q$, representing the positive literals (resp. negative literals) of each rule body in $Q$. Compute $M^{(2)} = 1 - \mbox{min}_{1}(Q^{(2)}{\mathbf u}_{I})$. It is an $m \times 1$ matrix (treated as a column vector here) such that $M^{(2)}(j) = 0$ if the body of $r_{j}$ contains a negative literal false in $I$, and $M^{(2)}(j) = 1$ otherwise ($1 \leq j \leq m$). Let $r_{j}^{+}$ be the rule $r_{j}$ with the negative literals in its body deleted. We see that $P^{I} = \{r_{j}^{+} \mid M^{(2)}(j) = 1, 1 \leq j \leq m\}$, and $P^{I}$ is syntactically represented by $(D^{I}, Q^{(1)})$, where $D^{I}$ is $D$ with each column $D(:,j)$ replaced by the zero column vector if $M^{(2)}(j) = 0$ ($1 \leq j \leq m$). $D^{I}(i,:)$ denotes the rule set $\{r_{j}^{+} \mid D^{I}(i,j) = 1, 1 \leq j \leq m\}$ in $P^{I}$ for $a_{i} \in {\cal A}$. We call $P^{I} = (D^{I}, Q^{(1)})$ the matricized reduct of $P$ by $I$.

The matricized reduct $P^{I} = (D^{I}, Q^{(1)})$ is evaluated in vector spaces as follows. Compute $M^{(1)} = M^{(2)} \odot (1 - \mbox{min}_{1}(Q^{(1)}(1-{\mathbf u}_{I})))$, where $\odot$ is the component-wise product. $M^{(1)}$ gives the truth values of the rule bodies in $P^{I}$ evaluated by $I$: $M^{(1)}(j) = 1$ ($1 \leq j \leq m$) if $r_{j}^{+}$ is contained in $P^{I}$ and its body is true in $I$; otherwise $M^{(1)}(j) = 0$, and $r_{j}^{+}$ is not contained in $P^{I}$ or the body of $r_{j}^{+}$ is false in $I$. Introduce ${\mathbf d}_{I}^{+} = DM^{(1)}$. Then ${\mathbf d}_{I}^{+}(i)$ ($1 \leq i \leq n$) is the number of rules in $P^{I}$ for the $i$-th atom $a_{i}$ in ${\cal A}$ whose bodies are true in $I$.

Proposition 2

Let $P = (D,Q)$ be a matricized program $P$ over a set ${\cal A} = \{a_{1},\ldots,a_{n}\}$ of $n$ ordered atoms and $I$ a model over ${\cal A}$. Write $Q = [Q^{(1)}\;Q^{(2)}]$ as above and let ${\mathbf u}_{I}$ be the vectorized model $I$. Compute $M^{(2)} = 1 - \mbox{min}_{1}(Q^{(2)}{\mathbf u}_{I})$, $M^{(1)} = M^{(2)} \odot (1 - \mbox{min}_{1}(Q^{(1)}(1-{\mathbf u}_{I})))$ and ${\mathbf d}_{I}^{+} = DM^{(1)}$. Also compute ${\mathbf d}_{I} = D(1 - \mbox{min}_{1}(Q(1-{\mathbf u}_{I}^{\delta})))$. Then $I \models \mbox{comp}(P)$, $\|{\mathbf u}_{I} - \mbox{min}_{1}({\mathbf d}_{I})\|_{2} = 0$, $\|{\mathbf u}_{I} - \mbox{min}_{1}({\mathbf d}_{I}^{+})\|_{2} = 0$ and $I \models \mbox{comp}(P^{I})$ are all equivalent.

(Proof) We first prove ${\mathbf d}_{I} = {\mathbf d}_{I}^{+}$. Recall that a rule $r_{j}^{+}$ in $P^{I}$ is created by removing the negative literals, all true in $I$, from the body of $r_{j}$ in $P$. So for any $a_{i} \in {\cal A}$, it is immediate that $a_{i}$ has a rule $r_{j} \in P$ whose body is true in $I$ iff $a_{i}$ has the rule $r_{j}^{+} \in P^{I}$ whose body is true in $I$. Thus ${\mathbf d}_{I}(i) = {\mathbf d}_{I}^{+}(i)$ for every $i$ ($1 \leq i \leq n$), and hence ${\mathbf d}_{I} = {\mathbf d}_{I}^{+}$. Consequently, we have $\|{\mathbf u}_{I} - \mbox{min}_{1}({\mathbf d}_{I})\|_{2} = 0$ iff $\|{\mathbf u}_{I} - \mbox{min}_{1}({\mathbf d}_{I}^{+})\|_{2} = 0$. Also, $I \models \mbox{comp}(P^{I})$ iff $\|{\mathbf u}_{I} - \mbox{min}_{1}({\mathbf d}_{I}^{+})\|_{2} = 0$ is proved similarly to Proposition 1 (details omitted). The rest follows from Proposition 1.     Q.E.D.

From the viewpoint of end-to-end ASP, Proposition 2 means that we can obtain a supported model II as a binary solution 𝐮I{\mathbf{u}}_{I} of the equation 𝐮Imin1(𝐝I)=0{\mathbf{u}}_{I}-{\rm min}_{1}({\mathbf{d}}_{I})=0 derived from PP or 𝐮Imin1(𝐝I+)=0{\mathbf{u}}_{I}-{\rm min}_{1}({\mathbf{d}}_{I}^{+})=0 derived from the reduct PIP^{I}. Either equation is possible and gives the same result but their computation will be different. This is because the former equation 𝐮Imin1(𝐝I){\mathbf{u}}_{I}-{\rm min}_{1}({\mathbf{d}}_{I}) is piecewise linear w.r.t.  𝐮I{\mathbf{u}}_{I} whereas the latter 𝐮Imin1(𝐝I+){\mathbf{u}}_{I}-{\rm min}_{1}({\mathbf{d}}_{I}^{+}) is piecewise quadratic w.r.t.  𝐮I{\mathbf{u}}_{I}.

Now look at P0={r1,r2,r3}P_{0}=\{r_{1},r_{2},r_{3}\} in (5) and the model I0={p,q}I_{0}=\{p,q\} again. The reduct of P0P_{0} by I0I_{0} is P0I0={pq,q}P_{0}^{I_{0}}=\{p\leftarrow q,\;q\leftarrow\}. P0I0P_{0}^{I_{0}} has the least model {p,q}\{p,q\}, which coincides with I0I_{0}, so I0I_{0} is a stable model of P0P_{0}. To simulate the reduction process of P0P_{0} in vector spaces, let P0=(D0,Q0)P_{0}=(D_{0},Q_{0}) be the matricized P0P_{0}. We first decompose Q0Q_{0} in (8) as Q0=[Q0(1)Q0(2)]Q_{0}=[Q_{0}^{(1)}\;Q_{0}^{(2)}] where Q0(1)Q_{0}^{(1)} is the positive part and Q0(2)Q_{0}^{(2)} the negative part of Q0Q_{0}. They are

Q0(1)=[  0 1 0  0 0 0  0 0 0]andQ0(2)=[  0 0 1  0 1 0  0 0 0].\displaystyle\hskip 20.00003pt\begin{array}[]{lcl}Q_{0}^{(1)}&=&\begin{bmatrix}\;\;0&\,1&\,0\;\;\\ \;\;0&\,0&\,0\;\;\\ \;\;0&\,0&\,0\;\;\\ \end{bmatrix}\end{array}\hskip 10.00002pt\mbox{and}\hskip 10.00002pt\begin{array}[]{lcl}Q_{0}^{(2)}&=&\begin{bmatrix}\;\;0&\,0&\,1\;\;\\ \;\;0&\,1&\,0\;\;\\ \;\;0&\,0&\,0\;\;\\ \end{bmatrix}\end{array}.

Let 𝐮I0=[ 1  1  0]T{\mathbf{u}}_{I_{0}}=[\,1\;\;1\;\;0\,]^{T} be the vectorized I0I_{0}. We first compute M0(2)=1min1(Q0(2)𝐮I0)M_{0}^{(2)}=1-\mbox{min}_{1}(Q_{0}^{(2)}{\mathbf{u}}_{I_{0}}) to determine rules to be removed. Since M0(2)=[ 1  0  1]TM_{0}^{(2)}=[\,1\;\;0\;\;1\,]^{T}, the second rule r2r_{2}, indicated by M0(2)(2)=0M_{0}^{(2)}(2)=0, is removed from P0P_{0}, giving P0I0={r1+,r3+}P_{0}^{I_{0}}=\{r_{1}^{+},r_{3}^{+}\}. Using M0(2)M_{0}^{(2)} and D0D_{0} shown in (11), we then compute M0(1)=M0(2)(1min1(Q0(1)(1𝐮I0)))=[ 1  0  1]TM_{0}^{(1)}=M_{0}^{(2)}\odot(1-\mbox{min}_{1}(Q_{0}^{(1)}(1-{\mathbf{u}}_{I_{0}})))=[\,1\;\;0\;\;1\,]^{T} and 𝐝I0+=D0M0(1)=[ 1  1  0]T{\mathbf{d}}_{I_{0}}^{+}=D_{0}M_{0}^{(1)}=[\,1\;\;1\;\;0\,]^{T}. 𝐝I0+{\mathbf{d}}_{I_{0}}^{+} denotes the number of rule bodies in P0I0P_{0}^{I_{0}} true in I0I_{0} for each atom. Thus, since 𝐮I0=min1(𝐝I0+)(=[ 1  1  0]T){\mathbf{u}}_{I_{0}}=\mbox{min}_{1}({\mathbf{d}}_{I_{0}}^{+})(=[\,1\;\;1\;\;0\,]^{T}) holds, I0I_{0} is a supported model of P0P_{0} by Proposition 2.
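These reduction steps can be replayed numerically. Below is a minimal NumPy sketch; the helper `min1` and the concrete entries of `D0`, `Q1`, `Q2` are our reconstruction of (8) and (11) from the worked example:

```python
import numpy as np

min1 = lambda x: np.minimum(x, 1)

# Matricized P0: r1: p <- q, not r;  r2: q <- not q;  r3: q <-
# over ordered atoms (p, q, r); entries reconstructed from the worked example
D0 = np.array([[1, 0, 0],   # atom p has rule r1
               [0, 1, 1],   # atom q has rules r2, r3
               [0, 0, 0]])  # atom r has no rule
Q1 = np.array([[0, 1, 0],   # positive part: r1's body contains q
               [0, 0, 0],
               [0, 0, 0]])
Q2 = np.array([[0, 0, 1],   # negative part: r1's body contains not r
               [0, 1, 0],   # r2's body contains not q
               [0, 0, 0]])

u = np.array([1, 1, 0])     # vectorized model I0 = {p, q}

M2 = 1 - min1(Q2 @ u)                # rules surviving the GL reduction
M1 = M2 * (1 - min1(Q1 @ (1 - u)))   # surviving rule bodies true in I0
d_plus = D0 @ M1                     # rule counts per atom in the reduct
print(min1(d_plus))                  # equals u, so I0 is supported
```

Running this reproduces M0(2) = (1,0,1), M0(1) = (1,0,1) and d+ = (1,1,0) from the text.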

3.3 Cost minimization for supported models

Having linear algebraically reformulated several concepts in logic programming, we now tackle the problem of computing supported models in vector spaces. Although approaches for this problem already exist, we tackle it without assuming any condition on programs while allowing constraints. Aspis et al. formulated the problem as solving a non-linear equation containing a sigmoid function [2]. They encode normal logic programs differently from ours, based on Sakama's encoding [22], and impose the MD condition on programs, which is rather restrictive. No support is provided for constraints in their approach. Later Takemura and Inoue proposed another approach [27] which encodes a program in terms of a single matrix and evaluates conjunctions based on the number of true literals. They compute supported models by minimizing a non-negative function rather than solving equations as in [2]. Their programs are however restricted to those satisfying the SD condition and constraints are not considered.

Here we introduce an end-to-end way of computing supported models in vector spaces through cost minimization of a new cost function based on the evaluation of disjunction. We impose no syntactic restriction on programs and allow constraints. We believe that these two features make our end-to-end ASP approach more feasible.

We can base our supported model computation either on Proposition 1 or on Proposition 2. In the latter case, we have to compute the GL reduction, which requires more complicated computation than the former case. So for the sake of simplicity, we explain the former. Our task in vector spaces is then to find a binary vector 𝐮I{\mathbf{u}}_{I} representing a supported model II of a matricized program P=(D,Q)P=(D,Q) that satisfies 𝐮Imin1(𝐝I)2=0\|{\mathbf{u}}_{I}-{\rm min}_{1}({\mathbf{d}}_{I})\|_{2}=0 where 𝐝I=D(1min1(Q(1𝐮Iδ))){\mathbf{d}}_{I}=D(1-\mbox{min}_{1}(Q(1-{\mathbf{u}}_{I}^{\delta}))). For this task, we relax 𝐮I{1,0}n{\mathbf{u}}_{I}\in\{1,0\}^{n} to 𝐮n{\mathbf{u}}\in\mathbb{R}^{n} and introduce a non-negative cost function JSUJ_{SU}:

JSU\displaystyle J_{SU} =\displaystyle= 0.5(𝐮min1(𝐝)22+2𝐮(1𝐮)22)\displaystyle 0.5\cdot\left(\|{\mathbf{u}}-{\rm min}_{1}({\mathbf{d}})\|_{2}^{2}+\ell_{2}\cdot\|{\mathbf{u}}\odot(1-{\mathbf{u}})\|_{2}^{2}\right)
where2>0and𝐝=D(1min1(Q(1𝐮δ))).\displaystyle\;\mbox{where}\;\;\ \ell_{2}>0\;\mbox{and}\;{\mathbf{d}}=D(1-\mbox{min}_{1}(Q(1-{\mathbf{u}}^{\delta}))).
Proposition 3

Let JSUJ_{SU} be defined from a program P=(D,Q)P=(D,Q) as above.

JSU=0J_{SU}=0   if-and-only-if   𝐮{\mathbf{u}} is a binary vector representing a supported model of PP.

(Proof) Clearly if JSU=0J_{SU}=0, we have 𝐮min1(𝐝)22=0\|{\mathbf{u}}-{\rm min}_{1}({\mathbf{d}})\|_{2}^{2}=0 and 𝐮(1𝐮)22=0\|{\mathbf{u}}\odot(1-{\mathbf{u}})\|_{2}^{2}=0. The second equation means 𝐮{\mathbf{u}} is binary (x(1x)=0x{1,0}x(1-x)=0\Leftrightarrow x\in\{1,0\}), and the first equation means this binary 𝐮{\mathbf{u}} is a vector representing a supported model of PP by Proposition 1. The converse is obvious.     Q.E.D.

JSUJ_{SU} is piecewise differentiable and we can obtain a supported model of PP as a root 𝐮{\mathbf{u}} of JSUJ_{SU} by minimizing JSUJ_{SU} to zero using Newton’s method. The Jacobian JaSUJ_{a_{SU}} required for Newton’s method is derived as follows. We assume PP is written in nn ordered atoms {a1,,an}\{a_{1},\ldots,a_{n}\} and 𝐮=[u1,,un]T{\mathbf{u}}=[u_{1},\ldots,u_{n}]^{T} represents their continuous truth values where 𝐮(p)=up{\mathbf{u}}(p)=u_{p}\in\mathbb{R} is the continuous truth value for atom apa_{p} (1pn1\leq p\leq n). For the convenience of derivation, we introduce the dot product (AB)=i,jA(i,j)B(i,j)({A}\bullet{B})=\sum_{i,j}{A}(i,j){B}(i,j) of matrices AA and BB and a one-hot vector 𝐈p{\mathbf{I}}_{p} which is a zero vector except for the pp-th element and 𝐈p(p)=1{\mathbf{I}}_{p}(p)=1. We note (A(BC))=((BA)C)({A}\bullet({B}\odot{C}))=(({B}\odot{A})\bullet{C}) and (A(BC))=((BTA)C)=((ACT)B)({A}\bullet({B}{C}))=(({B}^{T}{A})\bullet{C})=(({A}{C}^{T})\bullet{B}) hold.

Let P=(D,Q)P=(D,Q) be the matricized program and write Q=[Q(1)Q(2)]Q=[Q^{(1)}\,Q^{(2)}]. Introduce NN, MM, 𝐝{\mathbf{d}}, EE, FF and compute JSUJ_{SU} by

N=Q(1𝐮δ)=Q(1)(1𝐮)+Q(2)𝐮: (continuous) counts of false literals in the rule bodiesM=1min1(N): (continuous) truth values of the rule bodies𝐝=DM: (continuous) counts of true disjuncts for each atomE=min1(𝐝)𝐮: error between the estimated truth values of atoms and 𝐮F=𝐮(1𝐮)Jsq=(EE)Jnrm=(FF)JSU=Jsq+2Jnrm.\displaystyle\begin{array}[]{lclll}N&=&Q(1-{\mathbf{u}}^{\delta})=Q^{(1)}(1-{\mathbf{u}})+Q^{(2)}{\mathbf{u}}\\ &&\mbox{: (continuous) counts of false literals in the rule bodies}\\ M&=&1-{\rm min}_{1}(N)\\ &&\mbox{: (continuous) truth values of the rule bodies}\\ {\mathbf{d}}&=&DM\\ &&\mbox{: (continuous) counts of true disjuncts for each atom}\\ E&=&{\rm min}_{1}({\mathbf{d}})-{\mathbf{u}}\\ &&\mbox{: error between the estimated truth values of atoms and ${\mathbf{u}}$}\\ F&=&{\mathbf{u}}\odot(1-{\mathbf{u}})\\ J_{sq}&=&(E\bullet E)\\ J_{nrm}&=&(F\bullet F)\\ J_{SU}&=&J_{sq}+\ell_{2}\cdot J_{nrm}.\end{array} (27)
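As a sanity check, the pipeline (27) takes only a few lines of NumPy. The function and variable names below are ours, with `Q1`/`Q2` standing for the positive/negative halves Q(1)/Q(2) of Q:

```python
import numpy as np

min1 = lambda x: np.minimum(x, 1.0)

def J_SU(u, D, Q1, Q2, l2=1.0):
    """Cost (27): zero exactly at binary vectors encoding supported models."""
    N = Q1 @ (1 - u) + Q2 @ u   # (continuous) counts of false body literals
    M = 1 - min1(N)             # (continuous) truth values of rule bodies
    d = D @ M                   # (continuous) counts of true disjuncts
    E = min1(d) - u             # supportedness error
    F = u * (1 - u)             # distance from a 0/1 vector
    return 0.5 * (E @ E + l2 * (F @ F))

# Running example P0 over ordered atoms (p, q, r)
D0 = np.array([[1., 0., 0.], [0., 1., 1.], [0., 0., 0.]])
Q1_0 = np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
Q2_0 = np.array([[0., 0., 1.], [0., 1., 0.], [0., 0., 0.]])
print(J_SU(np.array([1., 1., 0.]), D0, Q1_0, Q2_0))   # 0.0 at the model
```

For P0, JSU vanishes exactly at the vector (1,1,0) representing its supported model {p,q}, and is strictly positive elsewhere.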

We then compute the Jacobian JaSUJ_{a_{SU}} of JSUJ_{SU}. We first compute Jsqup\displaystyle{\frac{\partial J_{sq}}{\partial{u}_{p}}} where up=𝐮(p){u}_{p}={\mathbf{u}}(p) (1pn1\leq p\leq n).

Mup\displaystyle\frac{\partial M}{\partial u_{p}} =\displaystyle= [N1]((Q(2)Q(1))𝐈p)=[N1]((Q(1)Q(2))𝐈p)\displaystyle-[N\leq 1]\odot\bigl{(}(Q^{(2)}-Q^{(1)}){\mathbf{I}}_{p}\bigr{)}=[N\leq 1]\odot\bigl{(}(Q^{(1)}-Q^{(2)}){\mathbf{I}}_{p}\bigr{)}
Jsqup\displaystyle\frac{\partial J_{sq}}{\partial u_{p}} =\displaystyle= (E[DM1](D(Mup))𝐈p)\displaystyle\Bigl{(}E\,\bullet\;[DM\leq 1]\odot\Bigl{(}D\left(\frac{\partial M}{\partial u_{p}}\right)\Bigr{)}-{\mathbf{I}}_{p}\Bigr{)}
=\displaystyle= (E[DM1](D([N1]((Q(1)Q(2))𝐈p)))𝐈p)\displaystyle\Bigl{(}E\,\bullet\;[DM\leq 1]\odot(D([N\leq 1]\odot((Q^{(1)}-Q^{(2)}){\mathbf{I}}_{p})))-{\mathbf{I}}_{p}\Bigr{)}
=\displaystyle= (DT([DM1]E)[N1](((Q(1)Q(2))𝐈p))(E𝐈p)\displaystyle\Bigl{(}D^{T}([DM\leq 1]\odot E)\;\bullet\;[N\leq 1]\odot(((Q^{(1)}-Q^{(2)}){\mathbf{I}}_{p})\Bigr{)}-(E\;\bullet\;{\mathbf{I}}_{p})
=\displaystyle= ((Q(1)Q(2))T([N1](DT([DM1]E)))E𝐈p)\displaystyle\Bigl{(}(Q^{(1)}-Q^{(2)})^{T}([N\leq 1]\odot(D^{T}([DM\leq 1]\odot E)))-E\;\bullet\;{\mathbf{I}}_{p}\Bigr{)}

Since pp is arbitrary, we have Jsq𝐮=(Q(1)Q(2))T([N1](DT([DM1]E)))E{\displaystyle\frac{\partial J_{sq}}{\partial{\mathbf{u}}}=(Q^{(1)}-Q^{(2)})^{T}([N\leq 1]\odot(D^{T}([DM\leq 1]\odot E)))-E}.

Next we compute Jnrmup{\displaystyle\frac{\partial J_{nrm}}{\partial u_{p}}}:

Fup\displaystyle\frac{\partial F}{\partial u_{p}} =\displaystyle= (𝐮up)(1𝐮)+𝐮((1𝐮)up)\displaystyle\left(\frac{\partial{\mathbf{u}}}{\partial u_{p}}\right)\odot(1-{\mathbf{u}})+{\mathbf{u}}\odot\left(\frac{\partial(1-{\mathbf{u}})}{\partial u_{p}}\right)
=\displaystyle= (𝐈p(1𝐮))(𝐮𝐈p)=(12𝐮)𝐈p\displaystyle({\mathbf{I}}_{p}\odot(1-{\mathbf{u}}))-({\mathbf{u}}\odot{\mathbf{I}}_{p})=(1-2{\mathbf{u}})\odot{\mathbf{I}}_{p}
Jnrmup\displaystyle\frac{\partial J_{nrm}}{\partial u_{p}} =\displaystyle= (F(Fup))\displaystyle\bigl{(}F\;\bullet\;\left(\frac{\partial F}{\partial u_{p}}\right)\bigr{)}
=\displaystyle= (F(12𝐮)𝐈p)=((12𝐮)F𝐈p)\displaystyle(F\;\bullet\;(1-2{\mathbf{u}})\odot{\mathbf{I}}_{p})=((1-2{\mathbf{u}})\odot F\;\bullet\;{\mathbf{I}}_{p})

Again since pp is arbitrary, we have Jnrm𝐮=(12𝐮)F{\displaystyle\frac{\partial J_{nrm}}{\partial{\mathbf{u}}}=(1-2{\mathbf{u}})\odot F} and reach

JaSU\displaystyle J_{a_{SU}} =\displaystyle= (Jsq𝐮)+2(Jnrm𝐮)\displaystyle\left(\frac{\partial J_{sq}}{\partial{\mathbf{u}}}\right)+\ell_{2}\cdot\left(\frac{\partial J_{nrm}}{\partial{\mathbf{u}}}\right)
=\displaystyle= (Q(1)Q(2))T([N1](DT([𝐝1]E)))E+2((12𝐮)F)\displaystyle(Q^{(1)}-Q^{(2)})^{T}([N\leq 1]\odot(D^{T}([{\mathbf{d}}\leq 1]\odot E)))-E+\ell_{2}\cdot((1-2{\mathbf{u}})\odot F)
whereN=Q(1𝐮δ),𝐝=D(1min1(N)),E=min1(𝐝)𝐮,\displaystyle\;\mbox{where}\;\;N=Q(1-{\mathbf{u}}^{\delta}),\,{\mathbf{d}}=D(1-{\rm min}_{1}(N)),\,E={\rm min}_{1}({\mathbf{d}})-{\mathbf{u}},\,
andF=𝐮(1𝐮).\displaystyle\;\;\mbox{and}\;\;F={\mathbf{u}}\odot(1-{\mathbf{u}}).
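The closed form of JaSU translates directly into NumPy, with boolean masks playing the role of the indicator vectors; the example matrices are again our encoding of the running example P0:

```python
import numpy as np

min1 = lambda x: np.minimum(x, 1.0)

def Ja_SU(u, D, Q1, Q2, l2=1.0):
    """Jacobian of J_SU in the closed form above; (N <= 1) and (d <= 1)
    implement the indicators [N<=1] and [DM<=1]."""
    N = Q1 @ (1 - u) + Q2 @ u
    M = 1 - min1(N)
    d = D @ M
    E = min1(d) - u
    F = u * (1 - u)
    grad_sq = (Q1 - Q2).T @ ((N <= 1) * (D.T @ ((d <= 1) * E))) - E
    return grad_sq + l2 * (1 - 2 * u) * F

# Running example P0: at the supported model (1,1,0) both E and F vanish,
# so the Jacobian is exactly zero there, as expected at a root of J_SU
D0 = np.array([[1., 0., 0.], [0., 1., 1.], [0., 0., 0.]])
Q1_0 = np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
Q2_0 = np.array([[0., 0., 1.], [0., 1., 0.], [0., 0., 0.]])
print(Ja_SU(np.array([1., 1., 0.]), D0, Q1_0, Q2_0))   # zero vector
```

At a generic non-binary point the masked products select the active linear pieces of min1, which is all the Jacobian of a piecewise linear-quadratic cost needs.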

3.4 Adding constraints

A rule with no head, such as a¬b\leftarrow a\wedge\neg b, is called a constraint. We often need supported models which satisfy given constraints. Since constraints are just rules without a head, we encode them just like rule bodies in a program using a binary matrix Qc=[Qc(1)Qc(2)]Q_{c}=[Q_{c}^{(1)}Q_{c}^{(2)}], which we call the constraint matrix. We introduce NcN_{c}, a non-negative function JcJ_{c} of 𝐮{\mathbf{u}} and JcJ_{c}'s Jacobian JacJ_{a_{c}} as follows.

Nc\displaystyle N_{c} =\displaystyle= Qc(1𝐮δ)=Qc(1)(1𝐮)+Qc(2)𝐮\displaystyle Q_{c}(1-{\mathbf{u}}^{\delta})=Q_{c}^{(1)}(1-{\mathbf{u}})+Q_{c}^{(2)}{\mathbf{u}}
Jc\displaystyle J_{c} =\displaystyle= (𝟏(1min1(Nc)))where 𝟏 is an all-ones vector\displaystyle({\mathbf{1}}\bullet(1-{\rm min}_{1}(N_{c})))\hskip 5.0pt\mbox{where ${\mathbf{1}}$ is an all-ones vector} (29)
Jac\displaystyle J_{a_{c}} =\displaystyle= (Qc(1)Qc(2))T[Nc1]\displaystyle(Q_{c}^{(1)}-Q_{c}^{(2)})^{T}[N_{c}\leq 1] (30)

The meaning of NcN_{c} and JcJ_{c} is clear when 𝐮{\mathbf{u}} is binary. Note that any binary 𝐮{\mathbf{u}} is considered as a model over a set 𝒜={a1,,an}{\cal A}=\{a_{1},\ldots,a_{n}\} of nn ordered atoms in an obvious way. Suppose kk constraints are given to be satisfied. Then QcQ_{c} is a k×2nk\times 2n binary matrix and NcN_{c} is a k×1k\times 1 matrix. Nc(i)N_{c}(i) (1ik1\leq i\leq k) is the number of literals falsified by 𝐮{\mathbf{u}} in the conjunction GiG_{i} of the ii-th constraint Gi\leftarrow G_{i}. So Nc(i)=0N_{c}(i)=0, or equivalently 1min1(Nc(i))=11-{\rm min}_{1}(N_{c}(i))=1, implies GiG_{i} has no false literal, i.e. 𝐮Gi{\mathbf{u}}\models G_{i}, and vice versa. Hence Jc=i=1k(1min1(Nc(i)))=(𝟏(1min1(Nc)))J_{c}=\sum_{i=1}^{k}(1-{\rm min}_{1}(N_{c}(i)))=({\mathbf{1}}\bullet(1-{\rm min}_{1}(N_{c}))) equals the number of violated constraints. Consequently, when 𝐮{\mathbf{u}} is binary, we can say that Jc=0J_{c}=0 iff all constraints are satisfied by 𝐮{\mathbf{u}}.

When 𝐮{\mathbf{u}} is not binary but just a real vector n\in\mathbb{R}^{n}, NcN_{c} and JcJ_{c} are thought to be a continuous approximation to their binary counterparts. Since JcJ_{c} is a piecewise differentiable non-negative function of 𝐮{\mathbf{u}}, the approximation error can be minimized to zero by Newton’s method using JacJ_{a_{c}} in (30) (the derivation of JacJ_{a_{c}} is straightforward and omitted).
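A small sketch of Jc and Jac for a single illustrative constraint ← a ∧ ¬b over two ordered atoms (a, b); the matrix names are ours:

```python
import numpy as np

min1 = lambda x: np.minimum(x, 1.0)

# One constraint  <- a, not b  over ordered atoms (a, b)
Qc1 = np.array([[1, 0]])    # positive half of the constraint matrix
Qc2 = np.array([[0, 1]])    # negative half

def J_c(u):
    Nc = Qc1 @ (1 - u) + Qc2 @ u      # false literals per constraint body
    return np.sum(1 - min1(Nc))       # = number of violated constraints

def Ja_c(u):
    Nc = Qc1 @ (1 - u) + Qc2 @ u
    return (Qc1 - Qc2).T @ (Nc <= 1)  # Jacobian (30)

print(J_c(np.array([1, 0])), J_c(np.array([1, 1])))  # 1 0: only (1,0) violates
```

Only the interpretation with a true and b false makes the constraint body true, so Jc counts exactly one violation there and zero elsewhere.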

3.5 An algorithm for computing supported models with constraints

Here we present a minimization algorithm for computing supported models of the matricized program P=(D,Q)P=(D,Q) which satisfy constraints represented by a constraint matrix QcQ_{c}. We first combine JSUJ_{SU} and JcJ_{c} into JSU+cJ_{SU+c}.

JSU+c\displaystyle J_{SU+c} =\displaystyle= JSU+3Jc\displaystyle J_{SU}+\ell_{3}\cdot J_{c}
=\displaystyle= 0.5(𝐮min1(𝐝)22+2𝐮(1𝐮)22)\displaystyle 0.5\cdot\left(\|{\mathbf{u}}-{\rm min}_{1}({\mathbf{d}})\|_{2}^{2}+\ell_{2}\cdot\|{\mathbf{u}}\odot(1-{\mathbf{u}})\|_{2}^{2}\right)
+3(𝟏(1min1(Qc(1𝐮δ))))2>0,3>0\displaystyle+\ell_{3}\cdot({\mathbf{1}}\bullet(1-{\rm min}_{1}(Q_{c}(1-{\mathbf{u}}^{\delta}))))\hskip 10.00002pt\ell_{2}>0,\ell_{3}>0
where𝐝=D(1min1(Q(1𝐮δ)))\displaystyle\;\mbox{where}\;\;{\mathbf{d}}=D(1-\mbox{min}_{1}(Q(1-{\mathbf{u}}^{\delta})))
JaSU+c\displaystyle J_{a_{SU+c}} =\displaystyle= JaSU+3Jac\displaystyle J_{a_{SU}}+\ell_{3}\cdot J_{a_{c}} (32)

The next proposition is immediate from Proposition 3.

Proposition 4

JSU+c=0J_{SU+c}=0  if-and-only-if   𝐮{\mathbf{u}} represents a supported model of PP satisfying a constraint matrix QcQ_{c}.

We compute JSUJ_{SU} in JSU+cJ_{SU+c} by (27) and JcJ_{c} by (29), and their Jacobians JaSUJ_{a_{SU}} and JacJ_{a_{c}} by (3.3) and (30), respectively. We minimize the non-negative JSU+cJ_{SU+c} to zero by Newton's method using Algorithm 1, which finds a solution 𝐮{\mathbf{u}}_{*} of JSU+c=0J_{SU+c}=0 representing a supported model of PP satisfying the constraint matrix QcQ_{c}. The updating formula is derived from the first order Taylor expansion of JSU+cJ_{SU+c} by solving JSU+c+(JaSU+c(𝐮new𝐮))=0{\displaystyle J_{SU+c}+(J_{a_{SU+c}}\bullet({\mathbf{u}}_{\rm new}-{\mathbf{u}}))=0} w.r.t. 𝐮new{\mathbf{u}}_{\rm new}. We use it with a learning rate α>0\alpha>0 as follows.

𝐮new\displaystyle{\mathbf{u}}_{\rm new} =\displaystyle= 𝐮α(JSU+c(JaSU+cJaSU+c))JaSU+c\displaystyle{\mathbf{u}}-\alpha\left(\frac{J_{SU+c}}{(J_{a_{SU+c}}\;\bullet\;J_{a_{SU+c}})}\right)J_{a_{SU+c}} (33)
1 Input: matricized program P=(D,Q)P=(D,Q), constraint matrix QcQ_{c}, max_itrmax\_itr\in\mathbb{Z}, max_trymax\_try\in\mathbb{Z}
2 Output: binary vector 𝐮{\mathbf{u}}_{*} representing a supported model of PP satisfying constraints represented by QcQ_{c}
3 𝐮{\mathbf{u}}\leftarrow random initialization
4 for i1i\leftarrow 1 to max_try do
      5 for j1j\leftarrow 1 to max_itr do
            6 optimally threshold 𝐮{\mathbf{u}} to a binary vector 𝐮{\mathbf{u}}_{*} so that
            7 error𝐮min1(𝐝)22+(𝟏(1min1(Qc(1𝐮δ))))error\leftarrow\|{\mathbf{u}}_{*}-{\rm min}_{1}({\mathbf{d}}_{*})\|_{2}^{2}+({\mathbf{1}}\bullet(1-{\rm min}_{1}(Q_{c}(1-{\mathbf{u}}_{*}^{\delta}))))
            8 is minimum where 𝐝=D(1min1(Q(1𝐮δ)){\mathbf{d}}_{*}=D(1-\mbox{min}_{1}(Q(1-{\mathbf{u}}_{*}^{\delta}))
            9 if  error = 0 then
                   break
            10Update 𝐮{\mathbf{u}} by (33)
      11if  error = 0 then
             break
      12perturb 𝐮{\mathbf{u}} to escape from a local minimum
13return 𝐮{\mathbf{u}}_{*}
Algorithm 1 minimizing JSU+cJ_{SU+c} to zero

Algorithm 1 is a double loop algorithm where the inner jj-loop updates 𝐮n{\mathbf{u}}\in\mathbb{R}^{n} repeatedly to minimize JSU+cJ_{SU+c} while thresholding 𝐮{\mathbf{u}} into a binary solution candidate 𝐮{1,0}n{\mathbf{u}}_{*}\in\{1,0\}^{n} for JSU+c=0J_{SU+c}=0. The outer ii-loop is for retry when the inner fails to find a solution. The initialization at line 3 is carried out by sampling 𝐮(i)𝒩(0,1)+0.5{\mathbf{u}}(i)\sim{\cal N}(0,1)+0.5 (1in1\leq i\leq n) where 𝒩(0,1){\cal N}(0,1) is the standard normal distribution. Lines 6, 7 and 8 collectively perform thresholding of 𝐮{\mathbf{u}} into a binary 𝐮{\mathbf{u}}_{*}. As the inner loop repeats, JSU+cJ_{SU+c} becomes smaller and smaller and so do JsqJ_{sq} and JnrmJ_{nrm} in JSUJ_{SU}. JsqJ_{sq} being small means 𝐮{\mathbf{u}} is close to a supported model of PP while JnrmJ_{nrm} being small means each element of 𝐮{\mathbf{u}} is close to {1,0}\{1,0\}. So binarization 𝐮=[𝐮θ]{\mathbf{u}}_{*}=[{\mathbf{u}}\geq\theta] with an appropriate threshold θ\theta (currently, given 𝐮{\mathbf{u}}, we divide the interval [min(𝐮),max(𝐮)][{\rm min}({\mathbf{u}}),{\rm max}({\mathbf{u}})] into 20 equally distributed notches and use each notch as a threshold value θ\theta) has a good chance of yielding a binary 𝐮{\mathbf{u}}_{*} representing a supported model of PP satisfying constraints represented by QcQ_{c}. It may happen that the inner loop fails to find a solution. In that case, we retry another jj-loop with a perturbed 𝐮{\mathbf{u}} at line 12, where 𝐮{\mathbf{u}} is perturbed by 𝐮0.5(𝐮+Δ+0.5){\mathbf{u}}\leftarrow 0.5({\mathbf{u}}+\Delta+0.5) with Δ𝒩(0,1)\Delta\sim{\cal N}(0,1) before the next jj-loop.
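The whole of Algorithm 1 fits in a short NumPy routine. The sketch below follows the pseudo-code under illustrative names and default parameters; it is not the authors' implementation:

```python
import numpy as np

min1 = lambda x: np.minimum(x, 1.0)

def solve(D, Q1, Q2, Qc1, Qc2, l2=1.0, l3=1.0, alpha=1.0,
          max_try=20, max_itr=100, seed=0):
    """Sketch of Algorithm 1: Newton-style minimization of J_SU+c
    with thresholding and random restarts (illustrative defaults)."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]

    def cost_grad(u):
        N = Q1 @ (1 - u) + Q2 @ u
        M = 1 - min1(N)
        d = D @ M
        E, F = min1(d) - u, u * (1 - u)
        Nc = Qc1 @ (1 - u) + Qc2 @ u
        J = 0.5 * (E @ E + l2 * F @ F) + l3 * np.sum(1 - min1(Nc))
        g = ((Q1 - Q2).T @ ((N <= 1) * (D.T @ ((d <= 1) * E))) - E
             + l2 * (1 - 2 * u) * F + l3 * (Qc1 - Qc2).T @ (Nc <= 1))
        return J, g

    def err(ub):                  # lines 6-8: error of a binary candidate
        db = D @ (1 - min1(Q1 @ (1 - ub) + Q2 @ ub))
        return (np.sum((ub - min1(db)) ** 2)
                + np.sum(1 - min1(Qc1 @ (1 - ub) + Qc2 @ ub)))

    u = rng.normal(size=n) + 0.5                       # line 3
    for _ in range(max_try):
        for _ in range(max_itr):
            # threshold u at 20 notches and keep the best binary candidate
            ub = min(((u >= t).astype(float)
                      for t in np.linspace(u.min(), u.max(), 20)), key=err)
            if err(ub) == 0:
                return ub
            J, g = cost_grad(u)
            if g @ g < 1e-12:                          # stuck: force a restart
                break
            u = u - alpha * (J / (g @ g)) * g          # update (33)
        u = 0.5 * (u + rng.normal(size=n) + 0.5)       # line 12: perturbation
    return None
```

With the matricized P0 and an empty constraint matrix (shape (0, n)), this routine converges to the unique supported model (1,1,0) of P0.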

3.6 Connection to neural network computation

At this point, it is quite interesting to see the connection of our approach to neural network computation. In (27), we compute MM and 𝐝=DM{\mathbf{d}}=DM. We point out that the computation of this 𝐝{\mathbf{d}} is nothing but the output of a forward process of a single layer ReLU network from an input vector 𝐮{\mathbf{u}}. Consider the computation of M=(1min1(Q(1𝐮δ)))M=(1-\mbox{min}_{1}(Q(1-{\mathbf{u}}^{\delta}))). We rewrite this using 1min(x,1)=ReLU(1x)1-{\rm min}(x,1)={\rm ReLU}(1-x) to

M\displaystyle M =\displaystyle= 1min1(Q(1)(1𝐮)+Q(2)𝐮)\displaystyle 1-\mbox{min}_{1}(Q^{(1)}(1-{\mathbf{u}})+Q^{(2)}{\mathbf{u}})
=\displaystyle= ReLU(W𝐮+𝐛)\displaystyle{\rm ReLU}(W{\mathbf{u}}+{\mathbf{b}})
whereQ=[Q(1)Q(2)],W=Q(1)Q(2),𝐛=1Q(1)𝟏\displaystyle\mbox{where}\;\;Q=[Q^{(1)}\,Q^{(2)}],\,W=Q^{(1)}-Q^{(2)},\,{\mathbf{b}}=1-Q^{(1)}{\mathbf{1}}

So MM is the output of a ReLU network having a weight matrix W=Q(1)Q(2)W=Q^{(1)}-Q^{(2)} and a bias vector 𝐛=1Q(1)𝟏{\mathbf{b}}=1-Q^{(1)}{\mathbf{1}}. Then min1(𝐝)=min1(DM)=min1(DReLU(W𝐮+𝐛)){\rm min}_{1}({\mathbf{d}})={\rm min}_{1}(DM)={\rm min}_{1}(D\cdot{\rm ReLU}(W{\mathbf{u}}+{\mathbf{b}})) is the output of a ReLU network with a single hidden layer and a linear output layer represented by DD, with min1(){\rm min}_{1}(\cdot) as activation function. Also, when we compute a stable model 𝐮{\mathbf{u}}, we minimize JSU+cJ_{SU+c}, which contains the MSE error term Jsq=min1(𝐝)𝐮2J_{sq}=\|{\rm min}_{1}({\mathbf{d}})-{\mathbf{u}}\|^{2} of (27), using JaSU+cJ_{a_{SU+c}} (32). This is precisely backpropagation with 𝐮{\mathbf{u}} as training data.
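The identity 1 − min(x, 1) = ReLU(1 − x) behind this rewriting is easy to confirm numerically, here with random illustrative matrices standing in for Q(1) and Q(2):

```python
import numpy as np

relu = lambda x: np.maximum(x, 0.0)
min1 = lambda x: np.minimum(x, 1.0)

rng = np.random.default_rng(0)
m, n = 5, 4                              # illustrative sizes
Q1 = rng.integers(0, 2, size=(m, n))     # stand-in positive part Q(1)
Q2 = rng.integers(0, 2, size=(m, n))     # stand-in negative part Q(2)
u = rng.random(n)

M_logic = 1 - min1(Q1 @ (1 - u) + Q2 @ u)   # rule-body truth values
W, b = Q1 - Q2, 1 - Q1 @ np.ones(n)         # weight matrix and bias vector
M_relu = relu(W @ u + b)                    # same values as a ReLU layer
print(np.allclose(M_logic, M_relu))         # True
```

Since Q1(1 − u) + Q2 u = −Wu + Q1·1, the two expressions agree for every real u, not only for u in [0,1]^n.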

Thus we may say that our approach is an integration of ASP semantics and neural computation and provides a neuro-symbolic [23] way of ASP computation. Nonetheless, there is a big difference. In standard neural network architecture, a weight matrix WW and a bias vector 𝐛{\mathbf{b}} are independent. In our setting, they are interdependent and they faithfully reflect the logical structure of a program.

4 Computing stable models in vector spaces

4.1 Loop formulas and stable models

Let P=(D,Q)P=(D,Q) be a matricized program in a set of atoms 𝒜={a1,,an}{\cal A}=\{a_{1},\ldots,a_{n}\} having mm rules {ai1G1,,aimGm}\{a_{i_{1}}\leftarrow G_{1},\ldots,a_{i_{m}}\leftarrow G_{m}\} where D{1,0}n×mD\in\{1,0\}^{n\times m} and Q{1,0}m×2nQ\in\{1,0\}^{m\times 2n}. We assume atoms and rules are ordered as indicated.

Computing a supported model of PP is equivalent to computing any binary fixedpoint 𝐮{1,0}n{\mathbf{u}}\in\{1,0\}^{n} such that 𝐮=min1(D(1min1(Q(1𝐮δ)))){\mathbf{u}}={\rm min}_{1}(D(1-\mbox{min}_{1}(Q(1-{\mathbf{u}}^{\delta})))) in vector spaces and in this sense, it is conceptually simple (though NP-hard). In contrast, since stable models are a proper subclass of supported models, if one wishes to obtain precisely the stable models through fixedpoint computation, the exclusion of non-stable models is necessary. Lin-Zhao's theorem [14] states that II is a stable model of PP iff II is a supported model of PP and satisfies a set of formulas called loop formulas associated with PP.

Let L={h1,,hk}𝒜L=\{h_{1},\ldots,h_{k}\}\subseteq{\cal A} be a loop in PP. Recall that LL is a set of atoms which are strongly connected in the positive dependency graph of PP (in the case of a singleton loop L={h}L=\{h\}, we specifically require, following [14], that hh has a self-loop, i.e. there must be a rule of the form hhHh\leftarrow h\wedge H in PP). A support rule for hh is a rule hHh\leftarrow H such that H+L=H^{+}\cap L=\emptyset. HH is called a support body for LL. Introduce a loop formula for LL by (in the original form, the antecedent of LF(L)LF(L) is a disjunction (h1hp)(h_{1}\vee\cdots\vee h_{p}) [14]; later it was shown that Lin-Zhao's theorem also holds for the AND type LF(L)LF(L) [6], which we choose as it is easier to satisfy)

LF(L)\displaystyle LF(L) =\displaystyle= (h1hp)(H1Hq)\displaystyle(h_{1}\wedge\cdots\wedge h_{p})\rightarrow(H_{1}\vee\cdots\vee H_{q}) (34)
where {H1,,Hq} are support bodies for L.\displaystyle\mbox{where $\{H_{1},\ldots,H_{q}\}$ are support bodies for $L$}.

Then define loop formulas associated with PP as LF(P)={LF(L)L is a loop in P}LF(P)=\{LF(L)\mid\mbox{$L$ is a loop in $P$}\}. Logically LF(P)LF(P) is treated as the conjunction of its elements and we sometimes call it the loop formula associated with PP. Now we evaluate LF(P)LF(P) by a real vector 𝐮n{\mathbf{u}}\in\mathbb{R}^{n}. Introduce an external support matrix Es{1,0}n×mE_{s}\in\{1,0\}^{n\times m} by Es(i,j)=1E_{s}(i,j)=1 if there is a support rule aiGja_{i}\leftarrow G_{j} for ai𝒜a_{i}\in{\cal A}, else Es(i,j)=0E_{s}(i,j)=0 (1in,1jm1\leq i\leq n,1\leq j\leq m). Suppose there are tt loops {L1,,Lt}\{L_{1},\ldots,L_{t}\} in PP. Introduce a loop matrix Loop{1,0}t×nL_{oop}\in\{1,0\}^{t\times n} such that Loop(s,i)=1L_{oop}(s,i)=1 if the atom aia_{i} belongs to the ss-th loop LsL_{s}, else Loop(s,i)=0L_{oop}(s,i)=0 (1st,1in1\leq s\leq t,1\leq i\leq n). Evaluate JLFJ_{LF} by 𝐮{\mathbf{u}} as follows.

M\displaystyle M =\displaystyle=\; 1min1(Q(1𝐮δ)): (continuous) truth values by 𝐮 of the rule bodies in P\displaystyle 1-{\rm min}_{1}(Q(1-{\mathbf{u}}^{\delta}))\hskip 20.00003pt\mbox{: (continuous) truth values by ${\mathbf{u}}$ of the rule bodies in $P$}
Ls\displaystyle L_{s} =\displaystyle=\; Loop(s,:): represents the s-th loop in {L1,,Lt}\displaystyle L_{oop}(s,:)\hskip 70.0001pt\mbox{: represents the $s$-th loop in $\{L_{1},\ldots,L_{t}\}$}
As\displaystyle A_{s} =\displaystyle=\; Ls(1𝐮)+LsEsM: (continuous) counts of true disjuncts by 𝐮 of LF(Ls)\displaystyle L_{s}(1-{\mathbf{u}})+L_{s}E_{s}M\hskip 30.00005pt\mbox{: (continuous) counts of true disjuncts by ${\mathbf{u}}$ of $LF(L_{s})$ }
JLF\displaystyle J_{LF} =\displaystyle=\; s=1t(1min1(As))\displaystyle\sum_{s=1}^{t}\left(1-{\rm min}_{1}(A_{s})\right) (35)

JLFJ_{LF} is a non-negative piecewise linear function of 𝐮{\mathbf{u}}.
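To make (35) concrete, here is a tiny illustrative program with the single loop L1 = {p, q} and external support rule r3; the matrices are ours, not an example from the paper:

```python
import numpy as np

min1 = lambda x: np.minimum(x, 1.0)

# Illustrative program over (p, q, r): r1: p <- q,  r2: q <- p,  r3: p <- not r
Q1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])   # positive body literals
Q2 = np.array([[0, 0, 0], [0, 0, 0], [0, 0, 1]])   # negative body literals
Es = np.array([[0, 0, 1], [0, 0, 0], [0, 0, 0]])   # r3 externally supports p
Loop = np.array([[1, 1, 0]])                       # single loop L1 = {p, q}

def J_LF(u):
    M = 1 - min1(Q1 @ (1 - u) + Q2 @ u)    # truth values of rule bodies
    A = Loop @ (1 - u) + Loop @ Es @ M     # true disjuncts of each LF(L_s)
    return np.sum(1 - min1(A))

# (1,1,1): r3 is blocked by r, the loop is unfounded, LF(L1) is violated;
# (1,1,0): r3 fires, LF(L1) holds
print(J_LF(np.array([1., 1., 1.])), J_LF(np.array([1., 1., 0.])))
```

The interpretation {p,q,r} is supported but not stable, and JLF correctly penalizes it while vanishing on the stable model {p,q}.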

Proposition 5

Let JLFJ_{LF} be defined as above. When 𝐮{\mathbf{u}} is binary, it holds that

JLF=0J_{LF}=0  if-and-only-if  𝐮LF(P){\mathbf{u}}\models LF(P).

(Proof) Suppose JLF=0J_{LF}=0 and 𝐮{\mathbf{u}} is binary. A summand (1min1(As))(1-{\rm min}_{1}(A_{s})) in JLFJ_{LF} (35) corresponds to the ss-th loop Ls={h1,,hp}L_{s}=\{h_{1},\ldots,h_{p}\} and is non-negative. Consider LF(Ls)=(h1hp)(H1Hq)LF(L_{s})=(h_{1}\wedge\cdots\wedge h_{p})\rightarrow(H_{1}\vee\cdots\vee H_{q}) as a disjunction ¬h1¬hpH1Hq\neg h_{1}\vee\cdots\vee\neg h_{p}\vee H_{1}\vee\cdots\vee H_{q}. Then JLF=0J_{LF}=0 implies (1min1(As))=0(1-{\rm min}_{1}(A_{s}))=0, or equivalently As1A_{s}\geq 1. Consequently, as 𝐮{\mathbf{u}} is binary, we have Ls(1𝐮)1L_{s}(1-{\mathbf{u}})\geq 1 or LsEsM1L_{s}E_{s}M\geq 1. The former means 𝐮¬h1¬hp{\mathbf{u}}\models\neg h_{1}\vee\cdots\vee\neg h_{p}. The latter, LsEsM1L_{s}E_{s}M\geq 1, means 𝐮H1Hq{\mathbf{u}}\models H_{1}\vee\cdots\vee H_{q}. This is because (EsM)(i)(E_{s}M)(i) is the number of support rules for ai𝒜a_{i}\in{\cal A} whose bodies are true in 𝐮{\mathbf{u}} (1in1\leq i\leq n), and hence LsEsM1L_{s}E_{s}M\geq 1 means some support body HrH_{r} (1rq1\leq r\leq q) for LsL_{s} is true in 𝐮{\mathbf{u}}. So in either case 𝐮LF(Ls){\mathbf{u}}\models LF(L_{s}). Since ss is arbitrary, we have 𝐮LF(P){\mathbf{u}}\models LF(P). The converse is straightforward and omitted.     Q.E.D.

The Jacobian JaLFJ_{a_{LF}} of JLFJ_{LF} is computed as follows.

N=Q(1𝐮δ)N=Q(1-{\mathbf{u}}^{\delta})

Ns=Ls(1𝐮)N_{s}=L_{s}(1-{\mathbf{u}})

Ms=min1(Ns)M_{s}={\rm min}_{1}(N_{s})

J_{a_{LF}}=\frac{\partial J_{LF}}{\partial{\mathbf{u}}}=-\sum_{s=1}^{t}\left(\frac{\partial{\rm min}_{1}(A_{s})}{\partial{\mathbf{u}}}\right) (36)

=-\sum_{s=1}^{t}[A_{s}\leq 1]\left(\left(\frac{\partial M_{s}}{\partial{\mathbf{u}}}\right)+\left(L_{s}E_{s}\left(\frac{\partial M}{\partial{\mathbf{u}}}\right)\right)^{T}\right)

=\sum_{s=1}^{t}[A_{s}\leq 1]\left([N_{s}\leq 1]L_{s}^{T}+(Q^{(2)}-Q^{(1)})^{T}\left([N\leq 1]\odot(L_{s}E_{s})^{T}\right)\right)

Here Q=[Q(1)Q(2)]Q=[Q^{(1)}\,Q^{(2)}] and LsL_{s}, AsA_{s} and MM are computed by (35).

Now introduce a new cost function JSU+c+LFJ_{SU+c+LF} by (37) that incorporates JLFJ_{LF} and compute its Jacobian JaSU+c+LFJ_{a_{SU+c+LF}} by (32).

JSU+c+LF\displaystyle J_{SU+c+LF} =\displaystyle= JSU+c+4JLFwhere 4>0\displaystyle J_{SU+c}+\ell_{4}\cdot J_{LF}\;\;\mbox{where $\ell_{4}>0$} (37)
JaSU+c+LF\displaystyle J_{a_{SU+c+LF}} =\displaystyle= JaSU+c+4JaLF\displaystyle J_{a_{SU+c}}+\ell_{4}\cdot J_{a_{LF}} (38)

By combining Propositions 4 and 5 with Lin-Zhao's theorem [14], the following is obvious.

Proposition 6

𝐮{\mathbf{u}} is a binary vector representing a stable model of PP satisfying constraints represented by QcQ_{c}  if-and-only-if   𝐮{\mathbf{u}} is a root of JSU+c+LFJ_{SU+c+LF}.

We compute such 𝐮{\mathbf{u}} by Newton’s method using Algorithm 1 with a modified update rule (33) such that JSU+cJ_{SU+c} and JaSU+cJ_{a_{SU+c}} are replaced by JSU+c+LFJ_{SU+c+LF} and JaSU+c+LFJ_{a_{SU+c+LF}} respectively.

When a program PP is tight [5], for example when rules have no positive literal in their bodies, PP has no loop and hence LF(P)LF(P) is empty. In that case, we directly minimize JSU+cJ_{SU+c} instead of using JSU+c+LFJ_{SU+c+LF} with an empty loop formula term.

4.2 LF heuristics

Minimizing JSU+c+LFJ_{SU+c+LF} is a general way of computing stable models under constraints. It is applicable to any program and gives us a theoretical framework for computing stable models in an end-to-end way without depending on symbolic systems. However there can be exponentially many loops [12] and they make the computation of JLFJ_{LF} (35) extremely difficult or practically impossible. To mitigate this seemingly insurmountable difficulty, we propose two heuristics which use a subset of loop formulas.

  • LFmaxLF_{max}:

    The first heuristic considers only the set LFmaxLF_{max} of loop formulas associated with SCCs in the positive dependency graph pdg(P)=(V,E)\mbox{pdg}(P)=(V,E) of a program PP. In the case of a singleton SCC {a}, 'a' must have a self-loop in pdg(P)\mbox{pdg}(P). We compute SCCs in O(|E|+|V|)O(|E|+|V|) time by Tarjan's algorithm [28].

  • LFminLF_{min}:

    In this heuristic, instead of SCCs (maximal strongly connected subgraphs), we choose minimal strongly connected subgraphs, i.e. cycle graphs. Denote by LFminLF_{min} the set of loop formulas associated with cycle graphs in pdg(P)\mbox{pdg}(P). We use an enumeration algorithm described in [15], chosen for its simplicity, to enumerate cycles and construct LFminLF_{min}.

We remark that although LFmaxLF_{max} and LFminLF_{min} can exclude some non-stable models, they do not necessarily exclude all of them. However, the role of loop formulas in our framework is entirely different from the one in symbolic ASP. Namely, their role here is not to logically reject non-stable models but to guide the search process by their gradient information in the continuous search space. Hence we expect, as actually observed in the experiments in the next section, that some loop formulas have the power to guide the search process to a root of JSU+c+LFJ_{SU+c+LF}.
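For the LFmax heuristic, the SCCs of pdg(P) can be found with Tarjan's algorithm; below is a recursive sketch (adequate for small graphs, while an iterative version would avoid Python's recursion limit):

```python
from collections import defaultdict

def sccs(edges, n):
    """Tarjan's algorithm: strongly connected components of a directed
    graph on nodes 0..n-1 given as a list of (src, dst) edges."""
    graph = defaultdict(list)
    for a, b in edges:
        graph[a].append(b)
    index, low, onstack, stack, out = {}, {}, set(), [], []
    counter = [0]

    def strongconnect(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        onstack.add(v)
        for w in graph[v]:
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in onstack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop()
                onstack.discard(w)
                comp.append(w)
                if w == v:
                    break
            out.append(sorted(comp))

    for v in range(n):
        if v not in index:
            strongconnect(v)
    return out

# pdg of {p <- q, q <- p, r <- p}: edge from each head to its positive body atoms
print(sccs([(0, 1), (1, 0), (2, 0)], 3))
```

For LFmax one then keeps the components of size at least two, plus any singleton whose atom has a self-loop in pdg(P).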

4.3 Precomputation

Here we introduce precomputation. The idea is to remove from the search space atoms which are false in every stable model. This downsizes the program and realizes faster model computation.

When a program PP in a set 𝒜{\cal A} = atom(PP) is given, we transform PP to a definite program P+P^{+} by removing all negative literals from the rule bodies in PP. Since P+PIP^{+}\supseteq P^{I} holds as a set of rules for any model II, we have LM(P+)LM(PI)LM(P^{+})\supseteq LM(P^{I}) where LM(P)LM(P) denotes the least model of a definite program PP. When II is a stable model, LM(PI)=ILM(P^{I})=I holds and we have LM(P+)ILM(P^{+})\supseteq I. By taking the complements of both sides, we can say that if an atom aa is outside of LM(P+)LM(P^{+}), i.e. if aa is false in LM(P+)LM(P^{+}), so is aa in any stable model II of PP. Thus, by precomputing the least model LM(P+)LM(P^{+}), we can remove the set of atoms P=𝒜LM(P+){\cal F}_{P}={\cal A}\setminus LM(P^{+}) from our consideration as they are known to be false in any stable model. We call P{\cal F}_{P} stable false atoms. Of course, this precomputation needs the additional computation of LM(P+)LM(P^{+}), but that can be done in time linear in the size of P+P^{+}, i.e. the total number of occurrences of atoms in P+P^{+} [3] (we implemented the linear time algorithm in [3] linear algebraically using vectors and matrices and confirmed its linearity). Accordingly, precomputing the least model LM(P+)LM(P^{+}) makes sense if the benefit of removing stable false atoms from the search space outweighs the linear time computation of LM(P+)LM(P^{+}), which is likely to happen when we deal with programs with positive literals in the rule bodies.
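The least model LM(P+) can be computed by iterating the TP operator of the positive program to a fixpoint. The sketch below is naive (quadratic in the worst case, not the linear-time algorithm of [3]) and reuses our matricized running example P0:

```python
import numpy as np

min1 = lambda x: np.minimum(x, 1)

def least_model(D, Q1):
    """Least model of the positive program P+ (negative literals dropped),
    by naive T_P fixpoint iteration on 0/1 vectors."""
    u = np.zeros(D.shape[0], dtype=int)
    while True:
        # a body of P+ is true iff it contains no false positive literal
        new = min1(D @ (1 - min1(Q1 @ (1 - u))))
        if np.array_equal(new, u):
            return u
        u = new

# P0+ = {p <- q,  q <-,  q <-}: only the positive parts D0, Q0(1) matter
D0 = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 0]])
Q1_0 = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]])
print(least_model(D0, Q1_0))   # [1 1 0]: r is a stable false atom
```

Here LM(P0+) = {p, q}, so the precomputation identifies F_P = {r}.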

More concretely, given a program PP and a set of constraints CC, we can obtain downsized ones, PP^{\prime} and CC^{\prime}, as follows.

  • Step 1: Compute the least model LM(P^{+}) and the set of stable false atoms {\cal F}_{P} = atom(P)\setminus LM(P^{+}).

  • Step 2: Define

    G^{\prime} = the conjunction G with the negative literals \{\neg a\in G\mid a\in{\cal F}_{P}\} removed
    P^{\prime} = \{a\leftarrow G^{\prime}\mid a\leftarrow G\in P,\; a\not\in{\cal F}_{P},\; G^{+}\cap{\cal F}_{P}=\emptyset\}, where G^{+} denotes the positive literals in G
    C^{\prime} = \{\leftarrow G^{\prime}\mid\; \leftarrow G\in C,\; G^{+}\cap{\cal F}_{P}=\emptyset\}    (40)
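Steps 1 and 2 translate directly into plain Python. This is a sketch of the symbolic downsizing, not the matricized version; rules are (head, pos_body, neg_body) triples and constraints are (pos_body, neg_body) pairs, with the least model computed by naive fixpoint iteration for brevity.

```python
def downsize(program, constraints, atoms):
    """Steps 1-2: remove stable false atoms F_P from a program and
    its constraints while preserving stable models (Proposition 7)."""
    # Step 1: least model of P^+ (negative literals dropped), naive fixpoint
    p_plus = [(h, pos) for (h, pos, _neg) in program]
    lm, changed = set(), True
    while changed:
        changed = False
        for h, pos in p_plus:
            if h not in lm and all(b in lm for b in pos):
                lm.add(h)
                changed = True
    fp = set(atoms) - lm                  # stable false atoms F_P
    # Step 2: drop rules/constraints whose head or positive body touches F_P;
    # negative literals not-a with a in F_P are trivially true, so delete them
    new_program = [(h, pos, [n for n in neg if n not in fp])
                   for (h, pos, neg) in program
                   if h not in fp and not (set(pos) & fp)]
    new_constraints = [(pos, [n for n in neg if n not in fp])
                       for (pos, neg) in constraints
                       if not (set(pos) & fp)]
    return new_program, new_constraints, fp
```

On the program {p<-not r; r<-s; s<-r; q<-r} with constraint <-p,not r, the atoms q, r, s are stable false, the rule for p loses its (now vacuous) negative literal, and the constraint shrinks accordingly.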
Proposition 7

Let P^{\prime} and C^{\prime} be respectively the program and constraints defined in Step 2 (40). Also let I^{\prime} be a model over atom(P^{\prime}). Expand I^{\prime} to a model I over atom(P) by taking every atom in {\cal F}_{P} to be false in I. Then

I^{\prime} is a stable model of P^{\prime} satisfying the constraints C^{\prime}   if-and-only-if   I is a stable model of P satisfying the constraints C.

(Proof) We first prove that I^{\prime} is a stable model of P^{\prime} if-and-only-if I is a stable model of P. To this end, we prove LM(P^{\prime I^{\prime}})=LM(P^{I}) as sets.

Let a\leftarrow G^{\prime+} be an arbitrary rule in P^{\prime I^{\prime}}. Correspondingly there is a rule a\leftarrow G^{\prime} in P^{\prime} such that I^{\prime}\models G^{\prime-}, and hence a rule a\leftarrow G in P such that G^{\prime}=G\setminus\{\neg b\mid b\in{\cal F}_{P}\} and G^{+}\cap{\cal F}_{P}=\emptyset. I^{\prime}\models G^{\prime-} implies I\models G^{-} by the construction of I from I^{\prime}. So a\leftarrow G^{+} is contained in P^{I}, and therefore a\leftarrow G^{\prime+} is contained in P^{I}, because G^{\prime+}=G^{+} (recall that G^{\prime}=G\setminus\{\neg b\mid b\in{\cal F}_{P}\}, so G^{\prime} and G have the same set of positive literals). Since a\leftarrow G^{\prime+} was arbitrary, we conclude P^{\prime I^{\prime}}\subseteq P^{I}, and hence LM(P^{\prime I^{\prime}})\subseteq LM(P^{I}).

Now consider a\in LM(P^{I}). There is an SLD derivation of a from P^{I}. Let b\leftarrow G^{+}\in P^{I} be a rule used in the derivation, derived from a rule b\leftarrow G\in P such that I\models G^{-}. Since P^{I}\subseteq P^{+}, we have LM(P^{I})\subseteq LM(P^{+}) and hence LM(P^{I})\cap{\cal F}_{P}=\emptyset, i.e., LM(P^{I}) contains no stable false atom. So b\not\in{\cal F}_{P} and G^{+}\cap{\cal F}_{P}=\emptyset, because every atom in the SLD derivation must belong to LM(P^{I}). Accordingly b\leftarrow G^{\prime}\in P^{\prime}. On the other hand, I\models G^{-} implies I^{\prime}\models G^{\prime-}. So b\leftarrow G^{\prime+} is in P^{\prime I^{\prime}}, and therefore b\leftarrow G^{+} is in P^{\prime I^{\prime}} because G^{\prime+}=G^{+}. Thus every rule used in the derivation of a from P^{I} is also contained in P^{\prime I^{\prime}}, which means a\in LM(P^{\prime I^{\prime}}). Since a was arbitrary, it follows that LM(P^{I})\subseteq LM(P^{\prime I^{\prime}}). Combining the two inclusions, we conclude LM(P^{I})=LM(P^{\prime I^{\prime}}).

Then, if I^{\prime} is a stable model of P^{\prime}, we have I^{\prime}=LM(P^{\prime I^{\prime}})=LM(P^{I}) as sets. Since I=I^{\prime} as sets, we have I=LM(P^{I}), which means I is a stable model of P. Likewise, when I is a stable model of P, we have I=LM(P^{I})=LM(P^{\prime I^{\prime}}) and I=I^{\prime} as sets. So I^{\prime}=LM(P^{\prime I^{\prime}}) and I^{\prime} is a stable model of P^{\prime}. That I^{\prime} satisfies C^{\prime} if-and-only-if I satisfies C is proved similarly. Q.E.D.

5 Programming examples

In this section, we apply our ASP approach to examples as a proof of concept and examine the effectiveness of precomputation and the heuristics. Since large scale computing is out of the scope of this paper, the program size is mostly small (the matricized programs in this paper are all written in GNU Octave 6.4.0 and run on a PC with an Intel(R) Core(TM) CPU and 26GB of memory; the exact CPU model was lost in conversion).

5.1 The 3-coloring problem

We first deal with the 3-coloring problem. Suppose we are given a graph G_{1}. The task is to color the vertices of the graph blue, red and green so that no two adjacent vertices have the same color, as in (b) of Fig. 1.

(a) Graph G_{1}
(b) A 3-coloring
Figure 1: 3-coloring problem

There are four nodes \{a,b,c,d\} in the graph G_{1}. We assign a set of three color atoms (Boolean variables) to each node to represent its color. For example, node a is assigned the three color atoms \{a_{1}({\rm red}),a_{2}({\rm blue}),a_{3}({\rm green})\}. We need to represent two facts with these atoms.

  • Each node has a unique color chosen from \{red, blue, green\}, so the color atoms assigned to each node are in an XOR relation. We represent this fact by the tight program P_{1} below, containing three rules for each node.

    P_{1} = { a_{1}\leftarrow\neg a_{2}\wedge\neg a_{3},   a_{2}\leftarrow\neg a_{3}\wedge\neg a_{1},   a_{3}\leftarrow\neg a_{1}\wedge\neg a_{2}
              b_{1}\leftarrow\neg b_{2}\wedge\neg b_{3},   b_{2}\leftarrow\neg b_{3}\wedge\neg b_{1},   b_{3}\leftarrow\neg b_{1}\wedge\neg b_{2}
              c_{1}\leftarrow\neg c_{2}\wedge\neg c_{3},   c_{2}\leftarrow\neg c_{3}\wedge\neg c_{1},   c_{3}\leftarrow\neg c_{1}\wedge\neg c_{2}
              d_{1}\leftarrow\neg d_{2}\wedge\neg d_{3},   d_{2}\leftarrow\neg d_{3}\wedge\neg d_{1},   d_{3}\leftarrow\neg d_{1}\wedge\neg d_{2} }    (45)
  • Two nodes connected by an edge must have different colors. We represent this fact by constraints.

    C_{1} = { \leftarrow a_{1}\wedge b_{1},   \leftarrow a_{2}\wedge b_{2},   \leftarrow a_{3}\wedge b_{3}
              \leftarrow a_{1}\wedge c_{1},   \leftarrow a_{2}\wedge c_{2},   \leftarrow a_{3}\wedge c_{3}
              \leftarrow b_{1}\wedge c_{1},   \leftarrow b_{2}\wedge c_{2},   \leftarrow b_{3}\wedge c_{3}
              \leftarrow b_{1}\wedge d_{1},   \leftarrow b_{2}\wedge d_{2},   \leftarrow b_{3}\wedge d_{3}
              \leftarrow d_{1}\wedge c_{1},   \leftarrow d_{2}\wedge c_{2},   \leftarrow d_{3}\wedge c_{3} }    (51)
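Before matricizing, the encoding can be checked symbolically. The following sketch (our own brute-force check, not the paper's solver) enumerates the supported models of the XOR rules of (45) under the constraints (51), with the edge set of G_{1} read off (51); since P_{1} is tight, these are exactly its stable models.

```python
from itertools import product

def is_supported(program, interp):
    """interp is supported iff it is a model of Clark's completion:
    an atom is true exactly when some rule for it has a true body."""
    atoms = {h for (h, _, _) in program} | \
            {x for (_, p, n) in program for x in p + n}
    for a in atoms:
        holds = any(h == a
                    and all(b in interp for b in pos)
                    and all(b not in interp for b in neg)
                    for (h, pos, neg) in program)
        if (a in interp) != holds:
            return False
    return True

nodes, colors = 'abcd', (1, 2, 3)
# the XOR rules of (45): v_i <- not v_j, not v_k
program = [((v, i), [], [(v, j) for j in colors if j != i])
           for v in nodes for i in colors]
edges = [('a', 'b'), ('a', 'c'), ('b', 'c'), ('b', 'd'), ('d', 'c')]
constraints = [[(u, i), (v, i)] for (u, v) in edges for i in colors]  # (51)

atoms = [(v, i) for v in nodes for i in colors]
solutions = [interp for bits in product((0, 1), repeat=len(atoms))
             for interp in [{a for a, b in zip(atoms, bits) if b}]
             if is_supported(program, interp)
             and not any(all(a in interp for a in c) for c in constraints)]
print(len(solutions))   # → 6 proper 3-colorings of G_1
```

The count 6 matches the number of solutions cited in the experiments below.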

Assuming the ordering of atoms \{a_{1},a_{2},a_{3},\ldots,d_{1},d_{2},d_{3}\}, the normal logic program P_{1} shown in (45) is matricized to P_{1}=(D_{1},Q_{1}), where D_{1} is a (12\times 12) binary identity matrix (because there are 12 atoms and each atom has exactly one rule) and Q_{1} is the (12\times 24) binary matrix shown in (54). The constraints listed in (51) are matricized to a (15\times 12) constraint matrix Q_{C_{1}} (57). In (54) and (57), a for example stands for the triple (a_{1}\,a_{2}\,a_{3}) and \neg a for (\neg a_{1}\,\neg a_{2}\,\neg a_{3}).

In Q_{1}, the columns are grouped as (a\; b\; c\; d\; \neg a\; \neg b\; \neg c\; \neg d), three columns per group, and the rows as (a\; b\; c\; d), three rows per group; the only nonzero blocks are copies of H_{3} placed block-diagonally in the \neg a,\ldots,\neg d column groups:

    Q_{1} = [ 0 0 0 0 | H_3  .   .   .  ]
            [ 0 0 0 0 |  .  H_3  .   .  ]      where  H_3 = [ 0 1 1 ]
            [ 0 0 0 0 |  .   .  H_3  .  ]                   [ 1 0 1 ]
            [ 0 0 0 0 |  .   .   .  H_3 ]                   [ 1 1 0 ]      (54)

In Q_{C_{1}}, the columns are grouped as (a\; b\; c\; d) and there is one (3\times 12) block row per edge:

    Q_{C_1} = [ E_3 E_3  .   .  ]
              [ E_3  .  E_3  .  ]
              [  .  E_3 E_3  .  ]      where  E_3  is the (3\times 3) identity matrix      (57)
              [  .  E_3  .  E_3 ]
              [  .   .  E_3 E_3 ]
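A small numeric sanity check of the matricized encoding (54)/(57) can be done with plain lists. The paper's cost function J_{SU+c} is not reproduced here; we only check the supported-model fixpoint condition (with D_{1} the identity, the vector of true rule bodies must equal the assignment x) and that no constraint row of Q_{C_{1}} is fully satisfied.

```python
H3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
E3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
Z3 = [[0, 0, 0]] * 3

def hstack(blocks):                      # glue 3x3 blocks into 3 long rows
    return [sum((b[r] for b in blocks), []) for r in range(3)]

# Q1: 12 x 24, H3 blocks sit block-diagonally in the ¬a..¬d column groups
Q1 = []
for k in range(4):
    Q1 += hstack([Z3] * 4 + [H3 if j == k else Z3 for j in range(4)])
# QC1: 15 x 12, one E3-pair block row per edge of G1
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (3, 2)]
QC1 = [row for (u, v) in edges
       for row in hstack([E3 if j in (u, v) else Z3 for j in range(4)])]

x = [1,0,0, 0,1,0, 0,0,1, 1,0,0]         # coloring a=red, b=blue, c=green, d=red
xneg = [1 - v for v in x]
dot = lambda row, vec: sum(r * v for r, v in zip(row, vec))
# a rule body is true iff all its literals are: row · [x; 1-x] = #literals
bodies = [int(dot(row, x + xneg) == sum(row)) for row in Q1]
assert bodies == x                                 # supported-model fixpoint
assert all(dot(row, x) <= 1 for row in QC1)        # no constraint violated
```

A constraint row evaluates to 2 exactly when both endpoint atoms are true, so the check dot(row, x) <= 1 rejects monochromatic edges.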

We run Algorithm 1 on the program P_{1} with the constraints C_{1} to find a supported model (solution) of P_{1} satisfying C_{1} (since P_{1} is a tight program, every supported model of P_{1} is a stable model and vice versa).

Table 1: Time and the number of solutions

  time(s)    #solutions
  6.7(0.7)   5.2(0.9)

To measure the time to find a model, we conduct ten trials (one trial consists of max\_try \times max\_itr parameter updates) of running Algorithm 1 with max\_try=20, max\_itr=50, \ell_{2}=\ell_{3}=0.1 and take the average. The result is 0.104s(0.070) on average. Also, to check the ability to find different solutions, we perform ten trials of Algorithm 1 without the another solution constraint introduced in Section 5.2 and count the number of different solutions among the returned solutions. #solutions in Table 1 is the average of ten such measurements. Due to the naive implementation, computation is slow, but the number of different solutions, 5.2 on average, seems rather high considering that there are six solutions in total.

Next we check the scalability of our approach on a simple problem: the 3-coloring of a cycle graph like (a) in Fig. 2. In general, given a cycle graph with n nodes, we encode its 3-coloring problem as in the previous example by a matricized program P_{2}=(D_{2},Q_{2}) and a constraint matrix Q_{C_{2}}, where D_{2} (3n\times 3n) is an identity matrix and Q_{2} (3n\times 6n) and Q_{C_{2}} (3n\times 6n) represent the rules and the constraints respectively. There are 2^{n}+2(-1)^{n} solutions (n\geq 3) among the 2^{3n} possible assignments for the 3n atoms, so the problem becomes exponentially more difficult as n grows.
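The solution count 2^{n}+2(-1)^{n} is the chromatic polynomial of the n-cycle evaluated at three colors, and can be confirmed by brute force for small n (our own check, added for illustration):

```python
from itertools import product

def cycle_3colorings(n):
    """Count proper 3-colorings of the n-cycle by brute force."""
    return sum(all(c[i] != c[(i + 1) % n] for i in range(n))
               for c in product(range(3), repeat=n))

for n in range(3, 9):
    assert cycle_3colorings(n) == 2 ** n + 2 * (-1) ** n
print("formula verified for n = 3..8")
```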

(a) A cycle graph
(b) Minimization of J_{SU+c}
(c) Scalability
Figure 2: Convergence and scalability

Graph (b) in Fig. 2 shows an example convergence curve of J_{SU+c} produced by Algorithm 1 with n=10, max\_try=100, max\_itr=50. The curve tells us that in the first cycle of the j-loop (the inner for loop of Algorithm 1), no solution is found after max\_itr=50 iterations of updating the continuous assignment vector {\mathbf{u}}. Then a perturbation is applied to {\mathbf{u}}, causing a small jump of J_{SU+c} at itr=51, and the second cycle of the j-loop starts; this time a solution is found after dozens of updates by thresholding {\mathbf{u}} into a binary vector {\mathbf{u}}_{*}.

Graph (c) in Fig. 2 shows how the time to find a solution scales up to n=10000. We set max\_try=100, max\_itr=2000 and plot the average of ten measurements of the time to find a solution. The graph indicates good linearity w.r.t. n up to n=10000.

5.2 The Hamiltonian cycle problem, precomputation and another solution constraint

A Hamiltonian cycle (HC) is a cycle in a graph that visits every vertex exactly once, and the Hamiltonian cycle problem is to determine whether an HC exists in a given graph. It is an NP-complete problem and has been used as a programming example since the early days of ASP. Initially it was encoded by a non-tight program containing positive recursion [19]. Later, a way of encoding it by a program that is not tight but is tight on its completion models was proposed [13]. We here introduce yet another encoding by a tight ground program, inspired by the SAT encoding proposed in [30], where Zhou showed that the problem is solvable by translating the six conditions listed in Fig. 3 into a SAT problem (he actually listed seven conditions to be encoded; one of them turned out to be redundant, and we use the remaining six).

         conditions                                  meaning
(1) one\_of(H_{i,j_{1}},\ldots,H_{i,j_{k}}) : one of the outgoing edges \{i\rightarrow j_{1},\ldots,i\rightarrow j_{k}\} from vertex i is in an HC
(2) U_{j,q}\leftarrow H_{i,j}\wedge U_{i,q-1} : if edge i\rightarrow j is in an HC and vertex i is visited at time q-1, then vertex j is visited at time q (1\leq i,j,q\leq N)
(3) U_{1,1} : vertex 1 is visited at time 1 (starting point)
(4) one\_of(H_{i_{1},j},\ldots,H_{i_{k},j}) : one of the incoming edges \{i_{1}\rightarrow j,\ldots,i_{k}\rightarrow j\} to vertex j is in an HC
(5) \leftarrow H_{i,1}\wedge\neg U_{i,N} : if edge i\rightarrow 1 is in an HC, then vertex i is visited at time N (2\leq i\leq N)
(6) one\_of(U_{i,1},\ldots,U_{i,N}) : vertex i is visited once (1\leq i\leq N)
Figure 3: Conditions for SAT encoding

In what follows, we assume the vertices are numbered from 1 to N, the number of vertices in the graph. We use i\rightarrow j to denote an edge from vertex i to vertex j, and H_{i,j} to indicate that the edge from i to j is in an HC. U_{j,q} means vertex j is visited at time q (1\leq j,q\leq N), and one\_of(a_{1},\ldots,a_{k}) means that one of \{a_{1},\ldots,a_{k}\} is true. We translate these conditions into a program P_{3}=\{(1),(2),(3)\} and constraints C_{3}=\{(4),(5),(6)\}. To be more precise, condition (1) is translated into a tight program just like the program P_{1} (45). The conditions \{(2),(3)\} constitute a tight definite program. The constraints C_{3}=\{(4),(5),(6)\} are encoded as a set of implications of the form \leftarrow L_{1}\wedge\cdots\wedge L_{k} where L_{1},\ldots,L_{k} are literals. The set of U_{j,q} atoms contained in a stable model of P_{3} satisfying C_{3} gives an HC.
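The six conditions can be read operationally, which the following brute-force sketch illustrates (our own illustrative reading, not the matricized minimization; the succ adjacency map and vertex numbering are assumptions for this example): pick one outgoing edge per vertex as H (condition (1)), propagate visit times U from vertex 1 (conditions (2),(3)), and check the remaining conditions (4)-(6).

```python
from itertools import product

def hamiltonian_cycles(succ, N):
    """Enumerate HCs of a directed graph given as succ: vertex -> tuple of
    successors, following conditions (1)-(6) of Fig. 3."""
    cycles = []
    for choice in product(*(succ[i] for i in range(1, N + 1))):
        H = {(i + 1, j) for i, j in enumerate(choice)}   # condition (1)
        if len({j for (_, j) in H}) != N:                # condition (4)
            continue
        U, v = {1: 1}, 1                                 # condition (3)
        for t in range(2, N + 1):                        # condition (2)
            v = next(j for (i, j) in H if i == v)
            if v in U:
                break
            U[v] = t
        else:
            # conditions (5),(6): the walk closes at vertex 1 after N visits
            v = next(j for (i, j) in H if i == v)
            if v == 1 and len(U) == N:
                cycles.append([k for k, _ in sorted(U.items(), key=lambda kv: kv[1])])
    return cycles

# hypothetical 4-vertex graph with a single HC 1->2->3->4->1
print(hamiltonian_cycles({1: (2, 3), 2: (3,), 3: (4,), 4: (1,)}, 4))  # → [[1, 2, 3, 4]]
```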

We apply the above encoding to a simple Hamiltonian cycle problem for the graph G_{2} in Fig. 4 (G_{2} is taken from Section 4.2 of A User's Guide to gringo, clasp, clingo, and iclingo, http://wp.doc.ic.ac.uk/arusso/wp-content/uploads/sites/47/2015/01/clingo_guide.pd). There are six vertices and six HCs: 1\rightarrow 2\rightarrow 5\rightarrow 6\rightarrow 3\rightarrow 4\rightarrow 1, 1\rightarrow 2\rightarrow 6\rightarrow 3\rightarrow 5\rightarrow 4\rightarrow 1, 1\rightarrow 2\rightarrow 6\rightarrow 5\rightarrow 3\rightarrow 4\rightarrow 1, 1\rightarrow 3\rightarrow 5\rightarrow 6\rightarrow 2\rightarrow 4\rightarrow 1, 1\rightarrow 4\rightarrow 2\rightarrow 5\rightarrow 6\rightarrow 3\rightarrow 1, 1\rightarrow 4\rightarrow 2\rightarrow 6\rightarrow 5\rightarrow 3\rightarrow 1. To solve this HC problem, we matricize P_{3} and C_{3}. There are 36 H_{i,j} atoms (1\leq i,j\leq 6) and 36 U_{j,q} atoms (1\leq j,q\leq 6), so 72 atoms in total. P_{3}=\{(1),(2),(3)\} contains 197 rules over these 72 atoms, and we translate P_{3} into a pair of matrices (D_{3},Q_{3}), where D_{3} is a 72\times 197 binary matrix for disjunctions (for example, for each U_{j,q} (1\leq j,q\leq 6), condition (2) generates the six rules \{U_{j,q}\leftarrow H_{i,j}\wedge U_{i,q-1}\mid 1\leq i\leq 6\}) and Q_{3} is a 197\times 144 matrix for conjunctions (rule bodies). Likewise C_{3}=\{(4),(5),(6)\} is translated into a constraint matrix Q_{C_{3}}, a 67\times 144 binary matrix. Then our task is to find a root {\mathbf{u}} of J_{SU+c} (3.5) constructed from these D_{3}, Q_{3} and Q_{C_{3}} in a 72-dimensional vector space by minimizing J_{SU+c} to zero.

We apply the precomputation of the previous section to P_{3}=(D_{3},Q_{3}) and Q_{C_{3}} to reduce the program size. It takes 2.3ms and detects 32 stable false atoms. It outputs a precomputed program P^{\prime}_{3}=(D^{\prime}_{3},Q^{\prime}_{3}) and a constraint matrix Q^{\prime}_{C_{3}} of sizes D^{\prime}_{3} (40\times 90), Q^{\prime}_{3} (90\times 80) and Q^{\prime}_{C_{3}} (52\times 80) respectively, i.e. roughly 1/4 to 1/2 of the original sizes. Thus precomputation removes 45\% of the atoms from the search space and returns much smaller matrices.

(a) Graph G_{2}

               no precomp.   precomp.
  time(s)      2.08(2.01)    0.66(0.52)
  #solutions   4.9           5.7

(b) Time and the number of different solutions
Figure 4: A HC problem

We run Algorithm 1 on P_{3}=(D_{3},Q_{3}) with Q_{C_{3}} (no precomputation) and also on P^{\prime}_{3}=(D^{\prime}_{3},Q^{\prime}_{3}) with Q^{\prime}_{C_{3}} (precomputation) using max\_try=20, max\_itr=200 and \ell_{2}=\ell_{3}=0.1, and measure the time to find a solution, i.e. a stable model satisfying the constraints. The result is shown as time(s) in Table (b) of Fig. 4, where each figure is an average over ten trials. The figures in the table, 2.08s vs. 0.66s, clearly demonstrate the usefulness of precomputation.

In addition to computation time, we examine the ability of our approach to find different solutions by measuring the number of obtainable solutions. More concretely, we run Algorithm 1 seven times, and each time a stable model is obtained as a conjunction L_{1}\wedge\ldots\wedge L_{72} of literals, we add a new constraint \leftarrow L_{1}\wedge\ldots\wedge L_{72} to the previous constraints, thereby forcing the computation of a new stable model in the next trial. We call such use of a constraint the another solution constraint. Since there are at most six solutions, the number of solutions obtained in seven trials is at most six. We repeat a batch of seven trials ten times and take the average of the number of solutions obtained per batch. This average is denoted #solutions in Table (b), and it indicates that 5.7 solutions out of 6, i.e. almost all solutions, are obtained in seven trials using the another solution constraint.
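The another solution constraint is easy to sketch independently of the vector-space machinery. Below, a toy brute-force solver stands in for Algorithm 1 (our own stand-in, for illustration only); each found model is turned into a blocking constraint and the solver is re-run until no further model exists.

```python
from itertools import product

def solve(program, constraints, atoms):
    """Return some supported model of `program` violating no constraint
    (brute force over all assignments), or None."""
    for bits in product((0, 1), repeat=len(atoms)):
        m = frozenset(a for a, b in zip(atoms, bits) if b)
        supported = all(
            (a in m) == any(h == a and all(p in m for p in pos)
                            and all(q not in m for q in neg)
                            for (h, pos, neg) in program)
            for a in atoms)
        violated = any(all(p in m for p in pos) and all(q not in m for q in neg)
                       for (pos, neg) in constraints)
        if supported and not violated:
            return m
    return None

def all_solutions(program, atoms):
    """Enumerate solutions via the another solution constraint: each found
    model L1 & ... & Lk is blocked by adding <- L1 & ... & Lk."""
    constraints, found = [], []
    while True:
        m = solve(program, constraints, atoms)
        if m is None:
            return found
        found.append(m)
        constraints.append(([a for a in atoms if a in m],
                            [a for a in atoms if a not in m]))

# one XOR node (the three rules for node a in (45)): exactly three solutions
xor = [('a1', [], ['a2', 'a3']),
       ('a2', [], ['a3', 'a1']),
       ('a3', [], ['a1', 'a2'])]
sols = all_solutions(xor, ['a1', 'a2', 'a3'])
print(len(sols))   # → 3
```

Each blocking constraint excludes exactly one interpretation, so the loop terminates once every solution has been produced.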

Summing up, the figures in Table (b) exemplify the effectiveness of precomputation, which significantly reduces computation time and yields a greater variety of solutions when combined with the another solution constraint.

5.3 LF heuristics and precomputation on loopy programs

So far we have dealt with tight programs, which have no loops and hence no loop formulas. We here deal with non-tight programs containing loops and examine how the LF heuristics LF_{max} and LF_{min}, introduced in the previous section, work. We use an artificial non-tight program P_{4\_n} (with no constraints), shown below, that has exponentially many loops [12].

P_{4\_n} = { a_{0} \leftarrow a_{1}\wedge\ldots\wedge a_{n}
             a_{0} \leftarrow \neg a_{n+1}
             a_{2i-1} \leftarrow a_{0}\vee a_{2i}      for i: 1\leq i\leq n/2
             a_{2i} \leftarrow a_{0}\vee a_{2i-1}      for i: 1\leq i\leq n/2
             a_{n+1} \leftarrow a_{n+1} }

For an even n, the program P_{4\_n} has n+2 atoms \{a_{0},a_{1},\ldots,a_{n},a_{n+1}\}, 2^{n/2}+1 supported models and one stable model \{a_{0},a_{1},\ldots,a_{n}\}. There are n/2+1 minimal loops \{a_{1},a_{2}\},\ldots,\{a_{n-1},a_{n}\},\{a_{n+1}\} and a maximal loop \{a_{0},a_{1},\ldots,a_{n}\}. The sets of loop formulas for the LF heuristics are computed as follows.

LF_{max} = \{(a_{0}\wedge a_{1}\wedge\ldots\wedge a_{n})\rightarrow\neg a_{n+1},\; a_{n+1}\rightarrow\perp\}
LF_{min} = \{(a_{1}\wedge a_{2})\rightarrow a_{0},\;\ldots,\;(a_{n-1}\wedge a_{n})\rightarrow a_{0},\; a_{n+1}\rightarrow\perp\}

Note that although there are 2^{n/2}+1 supported models, there is only one stable model, so LF_{max} and LF_{min} are expected to exclude 2^{n/2} supported models.
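The counts 2^{n/2}+1 supported models and one stable model can be confirmed by exhaustive search for small n (our own check; the disjunctive bodies a_{0}\vee a_{2i} are expanded into two rules each):

```python
from itertools import product

def make_p4(n):                      # P_4_n for an even n, atoms 0 .. n+1
    rules = [(0, list(range(1, n + 1)), []), (0, [], [n + 1])]
    for i in range(1, n // 2 + 1):
        o, e = 2 * i - 1, 2 * i      # a_{2i-1} <- a0 | a_{2i}, symmetrically
        rules += [(o, [0], []), (o, [e], []), (e, [0], []), (e, [o], [])]
    rules.append((n + 1, [n + 1], []))
    return rules

def is_supported(rules, m):
    atoms = {a for (h, p, ng) in rules for a in [h] + p + ng}
    return all((a in m) == any(h == a and all(x in m for x in p)
                               and all(x not in m for x in ng)
                               for (h, p, ng) in rules) for a in atoms)

def is_stable(rules, m):
    """m is stable iff it is the least model of the reduct P^m."""
    reduct = [(h, p) for (h, p, ng) in rules if all(x not in m for x in ng)]
    lm, changed = set(), True
    while changed:
        changed = False
        for h, p in reduct:
            if h not in lm and all(x in lm for x in p):
                lm.add(h); changed = True
    return lm == m

n = 4
rules, atoms = make_p4(n), list(range(n + 2))
models = [frozenset(a for a, b in zip(atoms, bits) if b)
          for bits in product((0, 1), repeat=len(atoms))]
supported = [m for m in models if is_supported(rules, m)]
stable = [m for m in supported if is_stable(rules, m)]
assert len(supported) == 2 ** (n // 2) + 1       # 5 supported models
assert stable == [frozenset(range(n + 1))]       # the unique stable model
```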

After translating P_{4\_n} into a matricized program P_{4\_n}=(D_{4\_n},Q_{4\_n}), where Q_{4\_n} is a (2n+3)\times(2n+4) binary matrix and D_{4\_n} is an (n+2)\times(2n+3) binary matrix, we compute a stable model of P_{4\_n} for various n by Algorithm 1, which minimizes J_{SU+c+LF} (37) with coefficient \ell_{3}=0 for the constraint term (since no constraints are used), using the Jacobian Ja_{SU+c+LF} (32).

Since all supported models of P_{4\_n} except the one stable model are non-stable, even if LF_{max} and LF_{min} are used to guide the search process towards a stable model, Algorithm 1 is likely to return a non-stable model. We can avoid such a situation by using the another solution constraint.

Table 2: The effect of the another solution constraint

  another solution constraint   time(s)        #trials
  not used                      11.46(0.41)    10^{4}(0)
  used                          0.09(0.13)     3.5(1.6)

To verify this, we examine the pure effect of the another solution constraint, which guides the search process to compute a model different from the previous ones. Without using the LF_{max} or LF_{min} heuristics, we repeatedly run Algorithm 1 with/without the another solution constraint for up to 10^{4} trials with n=4, max\_try=20, max\_itr=50, \ell_{2}=\ell_{3}=0.1, measure the time to find a stable model and count the number of trials until then. We repeat this experiment ten times and take the average. The result is shown in Table 2.

The figure 10^{4}(0) in Table 2 for the case without the another solution constraint means that Algorithm 1 always exhausts the 10^{4} trials without finding a stable model (due to implicit bias in Algorithm 1). When the another solution constraint is used, however, it finds a stable model in 0.09s after 3.5 trials on average. Thus Table 2 demonstrates the necessity and effectiveness of the another solution constraint for efficiently exploring the search space.

We next compare the effectiveness of the LF heuristics with that of precomputation under the another solution constraint. For n=10,\ldots,50, we repeatedly run Algorithm 1 using J_{SU+c+LF} with max\_try=10, max\_itr=100 on the matricized P_{4\_n}=(D_{4\_n},Q_{4\_n}) (and no constraint matrix) to compute supported (stable) models. The coefficients in J_{SU+c+LF} are set to \ell_{2}=0.1, \ell_{3}=0, \ell_{4}=1. To be more precise, for each n and each of the cases LF_{max}, LF_{min}, precomputation (without \{LF_{max},LF_{min}\}) and none of \{LF_{max},LF_{min},\mbox{precomputation}\}, we run Algorithm 1 for at most 100 trials, measure the time to find a stable model and count the number of supported models computed until then. We repeat this computation ten times, take the average and obtain the graphs in Fig. 5.

(a) Time to find a stable model
(b) The number of computed models
Figure 5: The effect of LF heuristics and precomputation on program P_{4\_n}

In Fig. 5, no_LF means no use of the \{LF_{max},LF_{min}\} heuristics, and no_LF_pre means no_LF applied to the precomputed P_{4\_n} (precomputation takes 0.006s and removes only one stable false atom, so it is not very helpful in this case).

We can see from graph (a) in Fig. 5 that computation time is ordered LF_{min}>LF_{max}>\mbox{no\_LF}>\mbox{no\_LF\_pre}. This means that using the LF heuristics is not necessarily a good policy; they may cause extra computation to reach the same model. Concerning the number of non-stable models computed redundantly, graph (b) in Fig. 5 tells us that LF_{min} allows redundant non-stable models to be computed, while the rest, LF_{max}, no_LF and no_LF_pre, return a stable model without computing redundant non-stable models. This shows, first, that LF_{max} works correctly to suppress the computation of non-stable models and, second, that the LF_{min} heuristics works adversely, i.e. it guides the search process away from the stable model. This somewhat unexpected result indicates the need for an (empirical) choice of LF heuristics.

Finally, to examine the effectiveness of precomputation more closely, we apply it to a more complex program P_{5\_nk}, a modification of P_{4\_n} obtained by adding self-loops on k atoms as illustrated in (a) of Fig. 6. The added self-loops force a choice of each a_{n+j} (1\leq j\leq k) being true or false in the search process. P_{5\_nk} has (2^{n/2}-1)(2^{k}-1)+1 supported models but just one stable model \{a_{0},a_{1},\ldots,a_{n}\}.

P_{5\_nk} = { a_{0} \leftarrow a_{1}\wedge\ldots\wedge a_{n}
              a_{0} \leftarrow \neg a_{n+1}\wedge\ldots\wedge\neg a_{n+k}
              a_{2i-1} \leftarrow a_{0}\vee a_{2i}      for i: 1\leq i\leq n/2
              a_{2i} \leftarrow a_{0}\vee a_{2i-1}      for i: 1\leq i\leq n/2
              a_{n+1} \leftarrow a_{n+1}
              \ldots
              a_{n+k} \leftarrow a_{n+k} }

(a) A non-tight program P_{5\_nk}
(b) Scalability of precomputation
Figure 6: Precomputation applied to program P_{5\_nk}
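For small n and k, precomputation and the stable model of P_{5\_nk} can be checked exhaustively. In this sketch (our own check, mirroring the definitions above), the stable false atoms are exactly the k self-loop atoms, and brute force confirms the unique stable model \{a_{0},\ldots,a_{n}\}:

```python
from itertools import product

def make_p5(n, k):                   # P_5_nk: P_4_n with k self-loop atoms
    rules = [(0, list(range(1, n + 1)), []),
             (0, [], list(range(n + 1, n + k + 1)))]
    for i in range(1, n // 2 + 1):
        o, e = 2 * i - 1, 2 * i
        rules += [(o, [0], []), (o, [e], []), (e, [0], []), (e, [o], [])]
    rules += [(j, [j], []) for j in range(n + 1, n + k + 1)]
    return rules

def least_model(defrules):           # naive fixpoint, enough for a sketch
    lm, changed = set(), True
    while changed:
        changed = False
        for h, p in defrules:
            if h not in lm and all(x in lm for x in p):
                lm.add(h); changed = True
    return lm

n = k = 2
rules = make_p5(n, k)
atoms = list(range(n + k + 1))
# precomputation: the stable false atoms are exactly the k self-loop atoms
fp = set(atoms) - least_model([(h, p) for (h, p, _ng) in rules])
assert fp == set(range(n + 1, n + k + 1))
# brute force: the unique stable model is {a0, ..., an}
def is_stable(m):
    reduct = [(h, p) for (h, p, ng) in rules if all(x not in m for x in ng)]
    return least_model(reduct) == m
stable = [m for m in (frozenset(a for a, b in zip(atoms, bits) if b)
                      for bits in product((0, 1), repeat=len(atoms)))
          if is_stable(m)]
assert stable == [frozenset(range(n + 1))]
```

This matches the behaviour reported below for n=k=5000, where precomputation removes all k self-loop atoms.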

We compute a stable model by running Algorithm 1 on the precomputed P_{5\_nk} without using LF heuristics, up to n=k=5000. When precomputation is applied to P_{5\_nk} with n=k=5000, it detects 5000 stable false atoms and downsizes the matrices in P_{5\_nk}=(D_{5\_nk},Q_{5\_nk}) from D_{5\_nk} (10001\times 15002) to D^{\prime}_{5\_nk} (5001\times 10002) and from Q_{5\_nk} (15002\times 20002) to Q^{\prime}_{5\_nk} (10002\times 10002). Thus the precomputed P^{\prime}_{5\_nk}=(D^{\prime}_{5\_nk},Q^{\prime}_{5\_nk}) is downsized to about 1/3 of the original P_{5\_nk}.

We run Algorithm 1 on P^{\prime}_{5\_nk} with \ell_{2}=\ell_{3}=0.1, max\_try=10, max\_itr=100 for at most 100 trials, measure the time to find a stable model ten times for each n=1000,\ldots,5000 and take the average. At the same time, we also run Clingo (version 5.6.2) on P_{5\_nk} and measure its time similarly. Graph (b) in Fig. 6 shows the result. As far as computing a stable model of P_{5\_nk} is concerned, our approach comes close to Clingo. However, this is due to a very specific situation: precomputation removes all the false atoms \{a_{n+1},\ldots,a_{n+k}\} of the stable model of P_{5\_nk}, so Algorithm 1 run on the precomputed P^{\prime}_{5\_nk}=(D^{\prime}_{5\_nk},Q^{\prime}_{5\_nk}) detects the stable model just by thresholding {\mathbf{u}}, before any update of {\mathbf{u}}. So what graph (b) really suggests is the importance of program optimizations like precomputation, which we plan to develop further in our approach.

6 Related work

The most closely related work is [2] and [27]. As mentioned in Section 1, our approach differs from them in three points: (1) theoretically, the exclusion of non-stable models by loop formulas, (2) syntactically, no restriction on acceptable programs, and (3) practically, the incorporation of constraints. Concerning performance, both happen to use the same N-negative-loops program, which consists of N copies (alphabetic variants) of the program \{a\leftarrow\neg b,\; b\leftarrow\neg a\}. According to [2], the success rate of returning a supported model, as a function of N, drops from one initially to almost zero at N=64 in [2], while it stays at one up to N=100 in [27]. We tested the same program with max\_try=20, max\_itr=100 and observed that the success rate stays at one up to N=10000.

Although our approach is non-probabilistic, i.e. purely linear algebraic, there are probabilistic differentiable approaches to ASP. Differentiable ASP/SAT [18] iteratively samples a stable model by an ASP solver a la ASSAT [14]. The solver decides the next decision literal based on the derivatives of a cost function, namely the MSE between the target probabilities and the predicted probabilities computed from the sampled stable models via parameters associated with "parameter atoms" in a program.

NeurASP [29] uses an ASP solver to obtain the stable models of a program including "neural atoms". These atoms are associated with probabilities learned by deep learning, and the likelihood of an observation (a set of ASP constraints) is computed from them. The whole learning is carried out by backpropagating the likelihood through the neural atoms to the parameters of a neural network.

Similarly to NeurASP, SLASH [26] uses an ASP solver to compute the stable models of a program containing "neural probabilistic predicates". Their probabilities are handled by neural networks and probabilistic circuits; the latter make it possible to compute a joint distribution of the class category and the data.

Sato and Kojima developed another differentiable approach which, without using ASP solvers, samples supported models of probabilistic normal logic programs [24]. They encode programs by matrices and realize the sampling of supported models in vector spaces by repeatedly computing a fixedpoint of some differentiable equations.

Independently of the differentiable approaches mentioned so far, Tuan et al. adopted a matrix encoding for propositional normal logic programs based on [22] and proposed computing stable models in vector spaces in a generate-and-test manner using sparse representation [21].

Also, though not computing stable models, Sato et al. developed a 2-stage approach for supported models: it first computes the least 3-valued model of a matricized program in vector spaces, which may contain undefined atoms, and then assigns true or false to those atoms to obtain a supported model of the original program [25].

7 Conclusion

We proposed an end-to-end approach for computing stable models satisfying given constraints. We matricized a program and its constraints and formulated stable model computation as the minimization of a non-negative cost function in vector spaces. We obtain a stable model satisfying the constraints as a root of the cost function by Newton's method.

By incorporating all the loop formula constraints introduced in Lin-Zhao's theorem [14] into the cost function to be minimized, we can prevent the redundant computation of non-stable models, at the cost of processing exponentially many loop formulas. Hence we introduced precomputation, which downsizes a program while preserving its stable model semantics, and also two heuristics that selectively use loop formulas. We then confirmed the effectiveness of our approach, including precomputation and the loop formula heuristics, on simple examples.

Acknowledgments

This work is supported by JSPS KAKENHI Grant Number JP21H04905 and JST CREST Grant Number JPMJCR22D3.

References

  • [1] Apt, K.R., Blair, H.A., Walker, A.: Foundations of deductive databases and logic programming. chap. Towards a Theory of Declarative Knowledge, pp. 89–148 (1988)
  • [2] Aspis, Y., Broda, K., Russo, A., Lobo, J.: Stable and supported semantics in continuous vector spaces. In: Calvanese, D., Erdem, E., Thielscher, M. (eds.) Proceedings of the 17th International Conference on Principles of Knowledge Representation and Reasoning, KR 2020, Rhodes, Greece, September 12-18, 2020. pp. 59–68 (2020)
  • [3] Dowling, W.F., Gallier, J.H.: Linear-time Algorithms for Testing the Satisfiability of Propositional Horn Formulae. Journal of Logic Programming 1(3), 267–284 (1984)
  • [4] Erdem, E., Lifschitz, V.: Tight Logic Programs. Theory and Practice of Logic Programming (TPLP) 3(4–5), 499–518 (2003)
  • [5] Fages, F.: Consistency of Clark’s completion and existence of stable models. Journal of Methods of Logic in Computer Science 1, 51––60 (1994)
  • [6] Ferraris, P., Lee, J., Lifschitz, V.: A generalization of the Lin-Zhao theorem. Annals of Mathematics and Artificial Intelligence 47, 79–101 (2006)
  • [7] Gebser, M., Kaufmann, B., Neumann, A., Schaub, T.: clasp: A conflict-driven answer set solver. In: LPNMR. Lecture Notes in Computer Science, vol. 4483, pp. 260–265. Springer (2007)
  • [8] Gebser, M., Kaufmann, B., Schaub, T.: Conflict-driven answer set solving: From theory to practice. Artif. Intell. 187, 52–89 (2012)
  • [9] Gelfond, M., Lifschitz, V.: The stable model semantics for logic programming. In: Proceedings of the 5th International Conference and Symposium on Logic Programming (ICLP/SLP 1988). pp. 1070–1080 (1988)
  • [10] Lierler, Y.: Cmodels: SAT-based disjunctive answer set solver. In: Proceedings of the 8th International Conference on Logic Programming and Nonmonotonic Reasoning. p. 447–451. LPNMR’05, Springer-Verlag, Berlin, Heidelberg (2005)
  • [11] Lifschitz, V.: What is answer set programming? In: Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 3. p. 1594–1597. AAAI’08, AAAI Press (2008)
  • [12] Lifschitz, V., Razborov, A.: Why are there so many loop formulas? ACM Trans. Comput. Logic 7(2), 261–268 (apr 2006). https://doi.org/10.1145/1131313.1131316
  • [13] Lin, F., Zhao, J.: On tight logic programs and yet another translation from normal logic programs to propositional logic. In: Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI’03). p. 853–858. Morgan Kaufmann Publishers Inc. (2003)
  • [14] Lin, F., Zhao, Y.: ASSAT: computing answer sets of a logic program by SAT solvers. Artificial Intelligence 157(1), 115–137 (2004)
  • [15] Liu, H., Wang, J.: A new way to enumerate cycles in graph. In: Advanced Int’l Conference on Telecommunications and Int’l Conference on Internet and Web Applications and Services (AICT-ICIW’06). pp. 57–59 (2006)
  • [16] Marek, V.W., Truszczyński, M.: Stable Models and an Alternative Logic Programming Paradigm, pp. 375–398. Springer Berlin Heidelberg (1999). https://doi.org/10.1007/978-3-642-60085-2_17
  • [17] Marek, W., V.S.Subrahmanian: The relationship between stable, supported, default and autoepistemic semantics for general logic programs. Theoretical Computer Science 103(2), 365–386 (1992)
  • [18] Nickles, M.: Differentiable SAT/ASP. In: Proceedings of the 5th International Workshop on Probabilistic Logic Programming, PLP 2018. pp. 62–74 (2018)
  • [19] Niemelä, I.: Logic programs with stable model semantics as a constraint programming paradigm. Annals of Mathematics and Artificial Intelligence 25(3–4), 241–273 (1999)
  • [20] Niemelä, I., Simons, P.: Smodels — an implementation of the stable model and well-founded semantics for normal logic programs. In: Dix, J., Furbach, U., Nerode, A. (eds.) Logic Programming And Nonmonotonic Reasoning. pp. 420–429. Springer Berlin Heidelberg (1997)
  • [21] Quoc, T.N., Inoue, K., Sakama, C.: Enhancing linear algebraic computation of logic programs using sparse representation. New Gen. Comput. 40(1), 225–254 (2022). https://doi.org/10.1007/s00354-021-00142-2
  • [22] Sakama, C., Inoue, K., Sato, T.: Linear Algebraic Characterization of Logic Programs. In: Proceedings of the 10th International Conference on Knowledge Science, Engineering and Management (KSEM2017), LNAI 10412, Springer-Verlag. pp. 520–533 (2017)
  • [23] Sarker, M.K., Zhou, L., Eberhart, A., Hitzler, P.: Neuro-symbolic artificial intelligence: Current trends (2021)
  • [24] Sato, T., Kojima, R.: In: Artificial Intelligence IJCAI 2019 International Workshops, Revised Selected Best Papers (Amal El Fallah Seghrouchni, David Sarne (Eds.)), Lecture Notes in Computer Science, vol. 12158, pp. 239–252. Springer (2020). https://doi.org/10.1007/978-3-030-56150-5
  • [25] Sato, T., Sakama, C., Inoue, K.: From 3-valued semantics to supported model computation for logic programs in vector spaces. In: Rocha, A.P., Steels, L., van den Herik, H.J. (eds.) Proceedings of the 12th International Conference on Agents and Artificial Intelligence, ICAART 2020, Volume 2, Valletta, Malta, February 22-24, 2020. pp. 758–765. SCITEPRESS (2020). https://doi.org/10.5220/0009093407580765
  • [26] Skryagin, A., Stammer, W., Ochs, D., Dhami, D.S., Kersting, K.: Neural-Probabilistic Answer Set Programming. In: Proceedings of the 19th International Conference on Principles of Knowledge Representation and Reasoning. pp. 463–473 (2022)
  • [27] Takemura, A., Inoue, K.: Gradient-based supported model computation in vector spaces. In: Gottlob, G., Inclezan, D., Maratea, M. (eds.) Logic Programming and Nonmonotonic Reasoning - 16th International Conference, LPNMR 2022, Genova, Italy, September 5-9, 2022, Proceedings. Lecture Notes in Computer Science, vol. 13416, pp. 336–349. Springer (2022)
  • [28] Tarjan, R.E.: Depth-first search and linear graph algorithms. SIAM Journal on Computing 1(2), 146–160 (1972)
  • [29] Yang, Z., Ishay, A., Lee, J.: Neurasp: Embracing neural networks into answer set programming. In: Bessiere, C. (ed.) Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20. pp. 1755–1762. International Joint Conferences on Artificial Intelligence Organization (2020)
  • [30] Zhou, N.F.: In pursuit of an efficient SAT encoding for the Hamiltonian cycle problem. In: Proceedings of the 26th International Conference on Principles and Practice of Constraint Programming (CP 2020). pp. 585–602. Springer-Verlag (2020)