Inputs, Outputs, and Composition in the Logic of Information Flows
Abstract.
The logic of information flows (LIF) is a general framework in which tasks of a procedural nature can be modeled in a declarative, logic-based fashion. The first contribution of this paper is to propose semantic and syntactic definitions of inputs and outputs of LIF expressions. We study how the two relate and show that our syntactic definition is optimal in a sense that is made precise. The second contribution is a systematic study of the expressive power of sequential composition in LIF. Our results on composition tie in with the results on inputs and outputs, and relate LIF to first-order logic (FO) and bounded-variable LIF to bounded-variable FO.
This paper is the extended version of a paper presented at KR 2020 (Aamer et al., 2020b).
1. Introduction
The Logic of Information Flows (LIF) (Ternovska, 2017, 2019) is a knowledge representation framework designed to model and understand how information propagates in complex systems, and to find ways to navigate it efficiently. The basic idea is that modules, which can be given procedurally or declaratively, are the atoms of a logic whose syntax resembles first-order logic, but whose semantics produces new modules. In LIF, atomic modules are modeled as relations with designated input and output arguments. Computation is modeled as propagation of information from inputs to outputs, similarly to propagation of tokens in Petri nets. The specification of a complex system then amounts to connecting atomic modules together. For this purpose, LIF uses the classical logic connectives, i.e., the boolean operators, equality, and existential quantification. The goal is to start from constructs that are well understood, and to address the fundamental question of what logical means are necessary and sufficient to model computations declaratively. The eventual goal, which goes beyond the topic of this paper, is to come up with restrictions or extensions of LIF that make the computations efficient.
In its most general form, LIF is a rich family of logics with recursion and higher-order variables. Atomic modules are given by formulae in various logics, and may be viewed as solving the task of Model Expansion (Mitchell and Ternovska, 2005): the input structure is expanded to satisfy the specification of a module thus producing an output. The semantics is given in terms of pairs of structures. We can, for example, give a graph (a relational structure) on the input of a module that returns a Hamiltonian cycle on the output, and compose it sequentially with a module that checks whether the produced cycle is of even length. One can vary both the expressiveness of logics for specifying atomic modules and the operations for combining modules, to achieve desirable complexity of the computation for the tasks of interest.
Many issues surrounding LIF, however, are already interesting in a first-order setting (see, e.g., (Aamer et al., 2020a)); and in fact such a setting is more generic than the higher-order setting, which can be obtained by considering relations as atomic data values. Thus, in this paper, we give a self-contained, first-order presentation of LIF. Syntactically, atomic modules here are relation atoms with designated input and output positions. Such atoms are combined using a set of algebraic operations into LIF expressions. The semantics is defined in terms of pairs of valuations of first-order variables; the first valuation represents a situation right before applying the module, while the second represents a possible situation immediately afterwards. The results in this paper are then also applicable to the case of higher-order variables.
Our contributions can be summarized as follows.
(1) While the input and output arguments of atomic modules are specified by the vocabulary, it is not clear how to designate the input and output variables of a complex LIF expression that represents a compound module. Actually, coming up with formal definitions of what it means for a variable to be an input or output is a technically and philosophically interesting undertaking. We propose semantic definitions, based on natural intuitions, which are, of course, open to further debate. The semantic notions of input and output turn out to be undecidable. This is not surprising, since LIF expressions subsume classical first-order logic formulas, for which most inference tasks in general are undecidable.
(2) We proceed to give an approximate, syntactic definition of the input and output variables of a formula, which is effectively computable. Indeed, our syntactic definition is compositional, meaning that the set of syntactic input (or output) variables of a formula depends only on the top-level operator of the formula and the syntactic inputs and outputs of the operands. We prove our syntactic input–output notion to be sound: every semantic input or output is also a syntactic input or output, and the syntactic inputs and outputs are connected by a property that we call input–output determinacy. Moreover, we prove an optimality result: our definition provides the most precise approximation to the semantic inputs and outputs among all compositional and sound definitions.
(3) We investigate the expressive power of sequential composition in the context of LIF. The sequential composition of two modules is fundamental to building complex systems. Hence, we are motivated to understand in detail whether or not this operation is expressible in terms of the basic LIF connectives. This question turns out to be approachable through the notion of inputs and outputs. Indeed, there turns out to be a simple expression for the composition of io-disjoint modules. Here, io-disjointness means that inputs and outputs do not overlap. For example, a module that computes a function of one variable and returns the result in a different variable is io-disjoint; a module that stores the result back in the input variable, thus overwriting the original input, is not.
(4) We then use the result on io-disjoint expressions to show that composition is indeed an expressible operator in the classical setting of LIF, where there is an infinite supply of fresh variables. (In contrast, the expression for io-disjoint modules does not need extra variables.)
(5) Finally, we complement the above findings with a result on LIF in a bounded-variable setting: in this setting, composition is necessarily a primitive operator.
Many of our notions and results are stated generally in terms of transition systems (binary relations) on first-order valuations. Consequently, we believe our work is also of value to settings other than LIF inasmuch as they involve dynamic semantics. Several such settings, where input–output specifications are important, are discussed in the related work section.
The rest of this paper is organized as follows. In Section 2, we formally introduce the Logic of Information Flows from a first-order perspective. Section 3 presents our study concerning the notion of inputs and outputs of complex expressions. Section 7 then presents our study on the expressibility of sequential composition. Section 8 discusses related work. We conclude in Section 9. In Sections 4, 5, and 6, we give extensive proofs of theorems we discuss in Section 3.
2. Preliminaries
A (module) vocabulary is a triple where:
• is a nonempty set, the elements of which are called module names;
• assigns an arity to each module name in ;
• assigns an input arity to each module name in , where .
We fix a countably infinite universe of data elements. An interpretation of assigns to each module name in an -ary relation over .
Furthermore, we fix a universe of variables . This set may be finite or infinite; the size of will influence the expressive power of our logic. A valuation is a function from to . The set of all valuations is denoted by . We say that and agree on if for all and that they agree outside if they agree on . A partial valuation on is a function from to ; we will also call this a -valuation. If is a valuation, we use to denote its restriction to . Let be a valuation and let be a partial valuation on . Then the substitution of into , denoted by , is defined as . In the special case where is defined on a single variable with , we also write as .
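For concreteness, the following Python sketch represents valuations over a small finite variable universe and implements restriction, substitution, and agreement; it is only an illustration of the notation just introduced, and all identifiers (restrict, substitute, agree_on) are ours.

```python
# A minimal sketch: valuations over a finite variable universe, represented as
# dictionaries mapping variable names to data elements.

def restrict(nu, X):
    """Restriction of the valuation nu to the variables in X."""
    return {x: nu[x] for x in X}

def substitute(nu, partial):
    """Substitution of a partial valuation into nu: values in `partial`
    override those of nu; every other variable keeps its value."""
    result = dict(nu)
    result.update(partial)
    return result

def agree_on(nu1, nu2, X):
    """Do nu1 and nu2 agree on the variables in X?"""
    return all(nu1[x] == nu2[x] for x in X)

# Example over three variables with data elements 0, 1, 2.
nu = {"x": 0, "y": 1, "z": 2}
print(restrict(nu, ["x", "y"]))                              # {'x': 0, 'y': 1}
print(substitute(nu, {"y": 7}))                              # y overridden, x and z kept
print(agree_on(nu, substitute(nu, {"y": 7}), {"x", "z"}))    # True
```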
We assume familiarity with the syntax and semantics of first-order logic (FO, relational calculus) over (Enderton, 1972) and use to mean “is by definition”.
2.1. Binary Relations on Valuations
The semantics of LIF will be defined in terms of binary relations on (abbreviated BRV: Binary Relations on Valuations). Before formally introducing LIF, we define operations on BRVs corresponding to the classical logical connectives, adapted to a dynamic semantics. For boolean connectives, we simply use the standard set operations. For equality, we introduce selection operators. For existential quantification, we introduce cylindrification operators.
Let and be BRVs, let be a finite set of variables, and let and be variables.
• Set operations: , and are well known.
• Composition
• Converse
• Left and Right Cylindrifications
• Left and Right Selections
• Left-To-Right Selection
If and are tuples of variables of length , we write for
and if is a variable we write for . Intuitively, a BRV is a dynamic system that manipulates the interpretation of variables. A pair in a BRV represents that a transition from to is possible, i.e., that when given as input, the values of the variables can be updated to . The operations defined above correspond to manipulations/combinations of such dynamic systems. Union, for instance, represents a non-deterministic choice, while composition corresponds to composing two such systems. Left cylindrification corresponds, in the dynamic view, to performing search before following the underlying BRV. Indeed, when given an input , alternative values for the cylindrified variables are searched for which transitions are possible. The selection operations correspond to performing checks, on the input, the output, or a combination of both in addition to performing what the underlying BRV does.
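For readers who prefer an executable illustration, the following Python sketch implements these operations on BRVs over a small finite variable universe and data domain. This finite setting is only an approximation of the definitions above, which range over the infinite universe of valuations, and all identifiers are ours.

```python
from itertools import product

VARS = ("x", "y")   # assumed finite variable universe for the illustration
DOM = (0, 1)        # assumed finite data domain

def valuations():
    """All valuations over VARS with values in DOM, as hashable tuples of pairs."""
    return [tuple(zip(VARS, vals)) for vals in product(DOM, repeat=len(VARS))]

def val(nu):
    """Convert a valuation tuple back to a dictionary for convenient lookup."""
    return dict(nu)

def compose(A, B):
    """Composition: a transition of A followed by a transition of B."""
    return {(n1, n2) for (n1, n3) in A for (m3, n2) in B if n3 == m3}

def converse(A):
    return {(n2, n1) for (n1, n2) in A}

def cyl_left(A, V):
    """Left cylindrification on V: search alternative left-hand values on V
    for which a transition of A is possible."""
    return {(n1, n2) for (m1, n2) in A for n1 in valuations()
            if all(val(n1)[v] == val(m1)[v] for v in VARS if v not in V)}

def sel_left(A, x, y):
    """Left selection: keep transitions whose input satisfies x = y."""
    return {(n1, n2) for (n1, n2) in A if val(n1)[x] == val(n1)[y]}

def sel_lr(A, x, y):
    """Left-to-right selection: the input value of x equals the output value of y."""
    return {(n1, n2) for (n1, n2) in A if val(n1)[x] == val(n2)[y]}

# Union, intersection, and difference are simply the set operations |, & and -.
identity = {(nu, nu) for nu in valuations()}
print(sel_lr(identity, "x", "y") == sel_left(identity, "x", "y"))   # True on identity
```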
Some of the above operators are redundant, in the sense that they can be expressed in terms of others, for instance, . We also have:
Lemma 2.1.
For any BRV , and any variables and , the following hold:
The expression for can be explained as follows. First, we copy from right to left by applying followed by . Selection can now be simulated by . The original value on the left is restored by a final application of and intersecting with the original .
2.2. The Logic of Information Flows
The language of LIF expressions over a vocabulary is defined by the following grammar:
Here, is any module name in ; is a finite set of variables; is a tuple of variables; and are variables. For atomic module expressions, i.e., expressions of the form , the length of must equal . In practice, we will often write for atomic module expressions, where is a tuple of variables of length and is a tuple of variables of length .
We will define the semantics of a LIF expression , in the context of a given interpretation , as a BRV which will be denoted by . Thus, adapting Gurevich’s terminology (Gurevich, 1983, 1988), every LIF expression denotes a global BRV : a function that maps interpretations of to the BRV .
For atomic module expressions, we define
Here, denotes the concatenation of tuples. Intuitively, the semantics of an expression represents a transition from to : the inputs of the module are “read” in and the outputs are updated in . The value of every variable that is not an output is preserved; this important semantic principle is a realization of the commonsense law of inertia (McCarthy and Hayes, 1969; Lifschitz, 1987).
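The following Python sketch illustrates this semantics for a single atomic module expression with one input and one output position, over a finite domain; the concrete relation interpreting the module (increment modulo 3) and all identifiers are assumptions of the illustration.

```python
from itertools import product

VARS = ("x", "y", "z")   # assumed finite variable universe
DOM = (0, 1, 2)          # assumed finite data domain

# Interpretation of a module name M with arity 2 and input arity 1:
# the first position is the input, the second the output.
D_M = {(0, 1), (1, 2), (2, 0)}   # "increment modulo 3"

def atomic_semantics(relation, in_vars, out_vars):
    """All transitions (nu1, nu2) such that the tuple nu1(in_vars).nu2(out_vars)
    belongs to the relation and every variable outside out_vars keeps its value
    (the commonsense law of inertia)."""
    result = set()
    all_vals = [dict(zip(VARS, vals)) for vals in product(DOM, repeat=len(VARS))]
    for nu1, nu2 in product(all_vals, repeat=2):
        tup = tuple(nu1[v] for v in in_vars) + tuple(nu2[v] for v in out_vars)
        inertia = all(nu1[v] == nu2[v] for v in VARS if v not in out_vars)
        if tup in relation and inertia:
            result.add((tuple(nu1.items()), tuple(nu2.items())))
    return result

# The expression M(x; y): reads x, writes y, and leaves x and z untouched.
brv = atomic_semantics(D_M, in_vars=("x",), out_vars=("y",))
assert all(dict(n1)["z"] == dict(n2)["z"] for n1, n2 in brv)   # inertia on z
assert all(dict(n1)["x"] == dict(n2)["x"] for n1, n2 in brv)   # inertia on x
```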
We further define
The semantics of other operators is obtained directly by applying the corresponding operation on BRVs, e.g.,
We say that and are equivalent if for each interpretation , i.e., if they denote the same global BRV.
2.3. Satisfiability of LIF Expressions
In this section, we will show that the problem of deciding whether a given LIF expression is satisfiable is undecidable. Thereto we begin by noting that first-order logic (FO) is naturally embedded in LIF in the following manner. When evaluating FO formulas on interpretations, we agree that the domain of quantification is always .
Lemma 2.2.
Let be a vocabulary with for every . Then, for every FO formula over , there exists a LIF expression such that for every interpretation the following holds:
Proof.
The proof is by structural induction on .
• If is , take .
• If is for some , take .
• If is , take .
• If is , take .
• If is , take . ∎
It is well known that satisfiability of FO formulas over a fixed countably infinite domain is undecidable. This leads to the following undecidability result.
Problem: Satisfiability
Given: a LIF expression .
Decide: Is there an interpretation such that ?
Proposition 2.3.
The satisfiability problem is undecidable.
Proof.
The proof is by reduction from the satisfiability of FO formulas. Let be an FO formula and let be the LIF expression obtained from Lemma 2.2. It is clear that is satisfiable iff is. ∎
3. Inputs and Outputs
We are now ready to study inputs and outputs of LIF expressions, and, more generally, of global BRVs. We first investigate what inputs and outputs mean on the semantic level before introducing a syntactic definition for LIF expressions.
3.1. Semantic Inputs and Outputs for Global BRVs
Intuitively, an output is a variable whose value can be changed by the expression, i.e., a variable that is not subject to inertia.
Definition 3.1.
A variable is a semantic output for a global BRV if there exists an interpretation and such that . We use to denote the set of semantic output variables of . If is a LIF expression, we call a variable a semantic output of if it is a semantic output of . We also write for the semantic outputs of . A variable that is not a semantic output is also called an inertial variable.
Defining semantic inputs is a bit more subtle. Intuitively, a variable is an input for a BRV if its value on the left-hand side matters for determining the right-hand side (i.e., if the value of the input had been different, the right-hand side could have been different as well; this is in fact a very coarse counterfactual definition of actual causality (Lewis, 1973)). However, a naive formalization of this intuition would result in a situation in which all inertial variables (variables that are not outputs) are inputs, since their value on the right-hand side always equals the one on the left-hand side. A slight refinement of our intuition is that the inputs are those variables whose value matters for determining the possible values of the outputs. This is formalized in the following definitions.
Definition 3.2.
Let be a global BRV and be sets of variables. We say that determines on if for every interpretation , every and every such that on , there exists a such that on and .
Definition 3.3.
A variable is a semantic input for a global BRV if does not determine on . The set of input variables of is denoted by . A variable is a semantic input of a LIF expression if it is a semantic input of ; the semantic inputs of are denoted by .
Using Definition 3.2, we can rephrase the definition of semantic inputs as follows:
Proposition 3.4.
A variable is a semantic input for a global BRV iff there is an interpretation , a value , and such that there is no valuation that agrees with on and .
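In a finite toy setting, semantic outputs and inputs can be computed by brute force directly from Definitions 3.1–3.3; the Python sketch below does so for a single fixed BRV rather than a global one, which is only an approximation of the actual definitions, and all identifiers are ours.

```python
from itertools import product

VARS = ("x", "y")
DOM = (0, 1)

def all_valuations():
    return [dict(zip(VARS, vals)) for vals in product(DOM, repeat=len(VARS))]

def semantic_outputs(brv):
    """Variables whose value is changed by some transition (Definition 3.1)."""
    return {v for nu1, nu2 in brv for v in VARS if nu1[v] != nu2[v]}

def determines(brv, X, Y):
    """Does X determine the BRV on Y (Definition 3.2, specialised to one BRV)?
    Whenever nu1p agrees on X with the left side of a transition, some transition
    from nu1p agrees on Y with the right side of that transition."""
    for nu1, nu2 in brv:
        for nu1p in all_valuations():
            if all(nu1p[v] == nu1[v] for v in X):
                if not any(all(nu2p[v] == nu2[v] for v in Y)
                           for m1, nu2p in brv if m1 == nu1p):
                    return False
    return True

def semantic_inputs(brv):
    """x is a semantic input if the other variables do not determine the BRV
    on its semantic outputs (Definition 3.3)."""
    out = semantic_outputs(brv)
    return {x for x in VARS
            if not determines(brv, [v for v in VARS if v != x], out)}

# Example: a BRV that copies x into y and leaves x untouched.
copy_x_to_y = [(nu1, nu2) for nu1 in all_valuations() for nu2 in all_valuations()
               if nu2["y"] == nu1["x"] and nu2["x"] == nu1["x"]]
print(semantic_outputs(copy_x_to_y))   # {'y'}
print(semantic_inputs(copy_x_to_y))    # {'x'}
```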
The following proposition shows that the semantic inputs of are indeed exactly the variables that determine .
Proposition 3.5.
If a set of variables determines a global BRV on , then .
Proof.
Let be any variable in . We know that does not determine on . If , then , so would not determine on , which is impossible. Hence, must be in as desired. ∎
Under a mild assumption, the converse to Proposition 3.5 also holds. (Proposition 3.6 indeed provides a converse to Proposition 3.5: given that determines on and for some set , clearly also determines on .)
Proposition 3.6.
Assume there exists a finite set of variables that determines a global BRV on . Then, determines on .
Proof.
Let and on for some interpretation and valuations , , and . To show that determines on , we need to find a valuation such that and on . By assumption, let be a finite set of variables that determines on .
Thereto, take to be the valuation obtained from after changing the values of the variables in to be as in . Thus, on , while outside . Since determines on , we know that there is a valuation such that and on . To reach our goal, we make incremental changes to so that it becomes similar to , while showing that each of the intermediate valuations satisfies the determinacy condition.
From construction, we know that on . Using the finiteness assumption for , let be the set of variables . Define the sequence of valuations such that:
• ; and
• so is after changing the value of to be as in .
We claim that for , there exists a valuation such that and on . Since is clearly the same valuation as , we can then take to be , which is as required.
We verify our claim by induction.
Base Case: for , we can see that .
Inductive Step: for , by assumption, we know that there is a valuation such that and on . It is clear that outside which is . Since , we know that determines on . Hence, there is a valuation such that and on . Since on , we can see that on . ∎
In the next remark, we show that without our assumption, we can find an example of a global BRV that is not determined on its semantic outputs by its semantic inputs.
Remark 3.7.
Let be the global BRV that maps every to the same BRV, namely:
Since the variables that can be changed by are not restricted, we see that . We now verify that . Let be any variable. We can see that . Thereto, we check that determines on . Let be an interpretation and , , and valuations such that and outside . Since and differ on finitely many variables, we can see that and also do. Hence, .
Finally, we verify that does not determine on . To see a counterexample, let be an interpretation and be the valuation that assigns to every variable. We can see that . Let be the valuation that assigns to every variable. It is clear that on , however, clearly . By Proposition 3.6, we know that there is no finite set of variables that does determine on .∎
The reader should not be lulled into believing that determines a global BRV on the set of all variables since determines on and no other variable outside can have its value changed. In the following remark, we give a simple counterexample.
Remark 3.8.
We show that does not necessarily determine on for every global BRV . Take to be defined by the LIF expression , so, for every , we have
It is clear that and .
Let be the valuation that assigns to every variable. Clearly, for any interpretation . Now take to be the valuation that assigns to and , while it assigns to every other variable. It is clear that on .
If were to determine on , we should find a valuation that agrees with on such that . In other words, this means that , which is clearly not possible.∎
Assuming determinacy holds for some global BRV, we next show that it actually satisfies an even stricter version of determinacy.
Proposition 3.9.
Let be a global BRV. If determines on , then for every interpretation , every and every that agrees with on and outside , we have .
Proof.
Intuitively, the inputs and outputs are the only variables that matter for a given global BRV, similar to how in classical logic the free variables are the only ones that matter. All other variables can take arbitrary values, but their values are preserved by inertia, i.e., they remain unchanged by the dynamic system. We now formalize this intuition.
Definition 3.10.
Let be a global BRV and a set of variables. We say that is inertially cylindrified on if:
(1) all variables in are inertial; and
(2) for every interpretation , every , and every -valuation also .
Proposition 3.11.
Every global BRV is inertially cylindrified outside the semantic inputs and outputs of assuming that determines on .
Proof.
Let be a global BRV such that determines on . Moreover, let be the set of variables that are neither semantic inputs nor semantic outputs of . It is trivial to show that all the variables in are inertial since none of the variables in is a semantic output of . What remains to show is that for every interpretation , every , and every -valuation also .
Let for an arbitrary interpretation and let be a valuation for some -valuation . Since on , we know by determinacy that there is a valuation such that and on . We now argue that . On the variables of , we know that , whence, on . Now we consider the variables that are not in . It is clear that outside , whence, outside . ∎
Remark 3.12.
Without the assumption, we can give an example of a global BRV that is not inertially cylindrified outside its semantic inputs and outputs. Let be the global BRV that maps every to the same BRV, namely:
It is clear that and .
We proceed to verify that is not inertially cylindrified on . Let be any interpretation and be any valuation that maps every variable to a unique value from the domain. We can see that since every value in appears only once. Now fix some arbitrarily and consider the valuation that maps every variable to . We can see that since appears infinitely often in .∎
We remark that the converse of Proposition 3.11 is not true:
Remark 3.13.
Consider the same global BRV Q discussed in Remark 3.7 where we showed that does not determine on . Recall that , so the set of variables outside the semantic inputs and outputs is empty. Trivially, however, is inertially cylindrified on .
3.2. Semantic Inputs and Outputs for LIF Expressions
3.2.1. Properties of LIF Expressions
As we have discussed in the previous section, a global BRV has the properties of determinacy and inertial cylindrification under the assumption that there is a finite set of variables that determines the global BRV on its semantic outputs. From the results of Section 3.3, this assumption indeed holds for LIF expressions. Indeed, we will define there, for every expression , a finite set of “syntactic input variables” and we will show that this set determines on a set of “syntactic output variables”. Moreover, the latter set will contain .
3.2.2. Determining Semantic Inputs and Outputs
For atomic LIF expressions, the semantic inputs and outputs are easy to determine, as we will show first. Unfortunately, we show next that the problem is undecidable for general expressions.
We show that semantic inputs and outputs are exactly what one expects for atomic modules:
Proposition 3.14.
If is an atomic LIF expression , then
• ; and
• .
Example 3.15.
A variable can be both input and output of a given expression. A very simple example is an atomic module . To illustrate where this can be useful, assume and consider an interpretation such that . In that case, the expression represents a dynamic system in which the value of is incremented by ; is an output of the system since its value is changed; it is an input since its original value matters for determining its value in the output.
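In the finite setting of the earlier sketches, this example can be made fully explicit; the interpretation chosen below (increment modulo 3) is an assumption of the illustration.

```python
# A sketch of Example 3.15: a module with a single argument position that is
# both input and output, interpreted as increment modulo 3.
DOM = (0, 1, 2)

# Transitions on the single variable x: read x, overwrite x with x + 1 (mod 3).
incr_brv = {((("x", a),), (("x", (a + 1) % len(DOM)),)) for a in DOM}

# x is a semantic output: its value changes in every transition ...
assert all(dict(n1)["x"] != dict(n2)["x"] for n1, n2 in incr_brv)
# ... and a semantic input: the possible output values depend on the input value.
assert {dict(n2)["x"] for n1, n2 in incr_brv if dict(n1)["x"] == 0} == {1}
assert {dict(n2)["x"] for n1, n2 in incr_brv if dict(n1)["x"] == 1} == {2}
```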
We will now show that the problem of deciding whether a given variable is a semantic input or output of a LIF expression is undecidable. Proposition 2.3 showed that satisfiability of LIF expressions is undecidable. This leads to the following undecidability results.
Problem: Semantic Output Membership
Given: a variable and a LIF expression .
Decide: ?
Proposition 3.16.
The semantic output membership problem is undecidable.
Proof.
The proof is by reduction from the satisfiability of LIF expressions. Let be a LIF expression. Take to be . What remains to show is that if and only if is satisfiable.
Let . Then, there is certainly an interpretation and valuations and such that . Hence, there is also a valuation such that . Certainly, is satisfiable.
Let be satisfiable. Then, there is an interpretation and valuations and such that . Also, let be an -valuation that maps to with . It is clear then that . We thus see that . ∎
Problem: Semantic Input Membership
Given: a variable and a LIF expression .
Decide: ?
Proposition 3.17.
The semantic input membership problem is undecidable.
Proof.
Let be a LIF expression. Take to be , where is a variable that is not used in and different from . What remains to show is that if and only if is satisfiable.
Let . Then, certainly, there is an interpretation and valuations and such that . Certainly, is satisfiable.
Let be satisfiable. Then, there is an interpretation and valuations and such that . Without loss of generality, we can assume that since is a fresh variable. Hence, . Let be a valuation that agrees with outside such that . Since and are different variables, also , so clearly there is no valuation such that . We then see that . ∎
3.3. Syntactic Inputs and Outputs
Since both the membership problems for semantic inputs and outputs are undecidable, to determine inputs and outputs in practice, we will need decidable approximations of these concepts. Before giving our syntactic definition, we define some properties of candidate definitions.
Definition 3.18.
Let and be functions from LIF expressions to sets of variables. We say that is a sound input–output definition if the following hold:
• If , then and ,
• , and
• determines on .
The first condition states that on atomic expressions (of which we know the inputs), and are defined correctly. The next condition states approximates the semantic notion correctly. We only allow for overapproximations; that is, false positives are allowed while false negatives are not. The reason for this is that falsely marking a variable as non-output while it is actually an output would mean incorrectly assuming the variable cannot change value. The last condition establishes the relation between and , and is called input–output determinacy. It states that the inputs need to be large enough to determine the outputs, as such generalizing the defining condition of semantic inputs. Essentially, this means that whenever we overapproximate our outputs, we should also overapproximate our inputs to compensate for this; that correspondence is formalized in Lemma 3.22.
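For reference, the three conditions of Definition 3.18 can be written out in symbols as follows; this is our reconstruction from the surrounding text, and the paper's own notation for the syntactic inputs, outputs, and semantics may differ.

```latex
% Reconstruction of the three conditions of a sound input--output definition.
\begin{itemize}
  \item if $\alpha$ is an atomic expression $M(\bar{x};\bar{y})$, then
        $I(\alpha) = \bar{x}$ and $O(\alpha) = \bar{y}$ (read as sets of variables);
  \item $O_{\mathrm{sem}}(\alpha) \subseteq O(\alpha)$; and
  \item $I(\alpha)$ determines $\llbracket\alpha\rrbracket$ on $O(\alpha)$.
\end{itemize}
```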
We first remark that a proposition similar to Proposition 3.5 can be made about sound output definitions.
Proposition 3.19.
Let be a sound input-output definition, be a LIF expression, and a set of variables. Then, if determines on .
Proof.
The proof follows directly from Proposition 3.5 and knowing that . Indeed, in general, if determines on some and , then clearly also determines on . ∎
This proposition, along with the input-output determinacy condition, implies a condition similar to the second condition about :
Proposition 3.20.
Let be a sound input-output definition and be a LIF expression. Then, .
Proof.
The proof follows from Proposition 3.19 and knowing that determines on . ∎
Besides requiring our definitions to be sound, we will focus on definitions that are compositional, in the sense that the inputs and outputs of compound expressions can be defined in terms of their direct subexpressions, essentially treating subexpressions as black boxes. This means that the definition nicely follows the inductive definition of the syntax. Formally:
Definition 3.21.
Suppose and are functions from LIF expression to sets of variables. We say that is compositional if for all LIF expressions , , , and with , , , and the following hold:
• For every unary operator : , and ; and
• For every binary operator : , and .
The previous definition essentially states that in order to be compositional, the inputs and outputs of and should only depend on the inputs and outputs of and , and not on their inner structure.
The following lemma rephrases input–output determinacy in terms of the inputs and outputs: in order to determine the output-value of an inertial variable, we need to know its input-value.
Lemma 3.22.
Let be a sound input–output definition and let be a LIF expression. If is satisfiable, then
If is compositional, this holds for all .
Proof.
Let . For the sake of contradiction, assume that , so Proposition 3.11 is applicable since as we know by the soundness of . Hence, is inertially cylindrified on . We claim that this contradicts the fact that is a sound definition. In particular, we can verify that can not determine on in case is satisfiable. Let be an interpretation and and be valuations such that . We also know that since is inertially cylindrified on . Take to be the valuation where . By determinacy, we know that there is a valuation such that and on . Thus, since . On the other hand, since is inertially cylindrified on . Hence, a contradiction.
For the compositional case, we can always replace subexpressions by atomic expressions with the same inputs and outputs to ensure satisfiability. It is clear that when is an atomic module expression, it is always satisfiable. Now, consider any LIF expression which is of the form or , where is any of the unary operators and is any of the binary operators. Construct two atomic expressions and such that and for . By compositionality, we know that and for any unary operator, while and for any binary operator. Next, we give examples for an interpretation in which any and can be shown not to be empty, so and are satisfiable expressions.
In what follows, let be the valuation that assigns to every variable.
Case is
Let be the interpretation with
• ; and
• .
It is clear that , and , whence, .
All other cases
Let be the interpretation with
• ; and
• .
We can see that . Consequently, for any unary operator . We can also see that , whence, for any binary operator . ∎

We now provide a sound and compositional input–output definition. While the definition might seem complex, there is a good reason for the different cases. Indeed, as we show below in Theorem 3.28, our definition is optimal among the sound and compositional definitions. In the definition, the condition simply means that and are the same variable and denotes the symmetric difference of two sets.
Definition 3.23.
The syntactic inputs and outputs of a LIF expression , denoted and respectively, are defined recursively as given in Table 1.
While it would be tedious to discuss the motivation for all the cases of Definition 3.23 (their motivation will be clarified in Theorem 3.28), we discuss here one of the most difficult parts, namely the case where . For a given interpretation ,
First, since , it is clear that the outputs of should be a subset of those of (if admits no pairs in its semantics that change the value of a variable, then neither does ). For the special case in which and are the same variable, this selection enforces to be inertial, i.e., it should not be an output of .
Secondly, all inputs of remain inputs of . Since we select those pairs whose -value on the right equals the -value on the left, clearly must be an input of (the special case and only covers cases where and are actually equivalent). Whether is an input depends on : if , is inertial. Since we compare the input-value of with the output-value of , essentially this is the same as comparing the input-values of both variables, i.e., the value of on the input-side matters. On the other hand, if , the value of can be changed by and thus this selection does not force being an input.
Our syntactic definition is clearly compositional (since we only use the inputs and outputs of subexpressions). An important result is that our definition is also sound, i.e., our syntactic concepts are overapproximations of the semantic concepts.
Theorem 3.24 (Soundness Theorem).
is a sound input–output definition.
Proof.
The proof is given in Section 4. ∎
Of course, since the semantic notions of inputs and outputs are undecidable and our syntactic notions clearly are decidable, expressions exist in which the semantic and syntactic notions do not coincide. We give some examples.
Example 3.25.
Consider the LIF expression
In this case, . However, it can be verified that .
Example 3.26.
Consider the LIF expression
Thus, we first cylindrify on both sides and afterwards only select those pairs that have inertia; therefore, we reach an expression that is equivalent to . As such, is inertially cylindrified in , where and . However, .
These examples suggest that our definition can be improved. Indeed, one can probably keep coming up with ad-hoc but more precise approximations of inputs and outputs for various specific patterns of expressions. Such improvements would not be compositional, as they would be based on inspecting the structure of subexpressions. In the following results, we show that is actually the most precise sound and compositional input–output definition.
Theorem 3.27 (Precision Theorem).
Let be a LIF expression that is either atomic, or a unary operator applied to an atomic module expression, or a binary operator applied to two atomic module expressions involving different module names. Then
Proof.
The proof is given in Section 5. ∎
Now, the precision theorem forms the basis for our main result on syntactic inputs and outputs, which states that Definition 3.23 yields the most precise sound and compositional input–output definition.
Theorem 3.28 (Optimality Theorem).
Suppose is a sound and compositional input–output definition. Then for each LIF expression :
Proof.
The proof is given in Section 6. ∎
4. Soundness Theorem Proof
In this section, we prove Theorem 3.24. Thereto, we need to verify its three conditions for every LIF expression according to Definition 3.18:
Atomic Module Case: If , then and . This is clear from the definitions.
Output Approximation: .
Syntactic Input-Output Determinacy: determines on .
4.1. Proof of Output Approximation
In this section, we prove:
Proposition 4.1.
Let be a LIF expression. Then, .
To prove this proposition, we introduce the following notion which is related to Definition 3.10.
Definition 4.2.
A BRV has inertia outside a set of variables if for every , we have outside . A global BRV has inertia outside a set of variables if has inertia outside for every interpretation .
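In the finite toy setting used in the earlier sketches, this notion amounts to a simple check; the following Python sketch is ours and illustrates the definition for a single BRV.

```python
def has_inertia_outside(brv, V, variables):
    """Definition 4.2 for a single finite BRV: every transition leaves all
    variables outside V unchanged."""
    return all(dict(n1)[v] == dict(n2)[v]
               for n1, n2 in brv for v in variables if v not in V)

# The increment BRV from Example 3.15, now over two variables x and y:
incr_brv = {((("x", a), ("y", 0)), (("x", (a + 1) % 3), ("y", 0))) for a in (0, 1, 2)}
assert has_inertia_outside(incr_brv, {"x"}, ("x", "y"))        # y is never changed
assert not has_inertia_outside(incr_brv, set(), ("x", "y"))    # but x is
```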
Using this notion, Proposition 4.1 can be equivalently formulated as follows.
Proposition 4.3 (Inertia Property).
Let be a LIF expression. Then, has inertia outside .
In the remainder of this section we prove the inertia property by structural induction on the shape of LIF expressions. Also, we remove the superscript from and refer to it simply by .
4.1.1. Atomic Modules
Let be of the form . Recall that where is the set of variables in . The property directly follows from the definition of the semantics for atomic modules.
4.1.2. Identity
Let be of the form . Recall that . The property directly follows from the definition of .
4.1.3. Union
Let be of the form . Recall that . If , then at least one of the following holds:
(1) . Then, by induction we know that outside . Since , we know that outside .
(2) . Similar.
4.1.4. Intersection
Let be of the form . Recall that . If , then and . By induction, outside and also outside . Hence, outside .
4.1.5. Composition
Let be of the form . Recall that . If , then there exists a valuation such that and . By induction, outside and also outside . Hence, outside .
4.1.6. Difference
Let be of the form . Recall that . The proof then follows immediately by induction.
4.1.7. Converse
Let be of the form . Recall that . The proof is immediate by induction.
4.1.8. Left and Right Selections
Let be of the form or . Recall that . The proof is immediate by induction.
4.1.9. Left-to-Right Selection
Let be of the form . Recall the definition:
If , then we know that:
(1) ;
(2) .
By induction from (1), we know that outside . Hence, for case we are done. In the other case, we must additionally show that , which follows from .
4.1.10. Right and Left Cylindrifications
4.2. Proof of Syntactic Free Variable Property
Lemma 4.4 (Syntactic Free Variable Property).
Let be a LIF expression. Denote by . Then, is inertially cylindrified on .
In the proof of this lemma, we will often make use of Lemma 4.5. In what follows, for a set of variables , we define to be . In the rest of the section, we remove the superscript from and refer to it simply by .
Lemma 4.5.
Let be a BRV that has inertia on . Then, is inertially cylindrified on if and only if is inertially cylindrified on every .
Proof.
The ‘if’-direction is immediate. Let us now consider the ‘only if’. To this end, suppose that and that is a partial valuation on . Extend to a valuation by on . Since has inertia on , we know that on . Thus, and . The lemma now directly follows since is inertially cylindrified on . ∎
This Lemma is always applicable for any expression and . Indeed, for every interpretation , we know by Proposition 4.3 that has inertia outside .
We are now ready to prove Lemma 4.4.
4.2.1. Atomic Modules
Let be of the form . Recall that where and are the variables in and , respectively. The property directly follows from the definition of the semantics for atomic modules.
4.2.2. Identity
Let be of the form . Recall that . The property directly follows from the definition of .
4.2.3. Union
Let be of the form . Recall that . If , then or . Let and let be a partial valuation on . Assume without loss of generality that . Clearly, since . By induction and Lemma 4.5, we know that as desired.
4.2.4. Intersection
Let be of the form . Recall that . If , then and . Let and let be a partial valuation on . Clearly, with since . By induction and Lemma 4.5, we know that with , whence as desired.
4.2.5. Composition
Let be of the form . Recall that . If , then there exists a valuation such that and . Let and let be a partial valuation on . Clearly, with since . By induction and Lemma 4.5, we know that and . Therefore, we may conclude that .
4.2.6. Difference
Let be of the form . Recall that . If , then we know that:
(1) . By inertia, we know that outside .
(2) .
Let and let be a partial valuation on . Clearly, with since . By induction from (1) and Lemma 4.5, we know that . All that remains is to show that . Now, suppose for the sake of contradiction that . By induction and Lemma 4.5, we know that . Clearly, . Moreover, since outside by . We have thus obtained that , which contradicts .
4.2.7. Converse
Let be of the form . Recall that . The property follows directly by induction since .
4.2.8. Left Selection
Let be of the form . Recall the definition:
In case of , clearly . The property holds trivially by induction. In the other case when , if , then . Let and let be a partial valuation on . Clearly, since . By induction and Lemma 4.5, we know that . Moreover, since , we know that the selection does not look at , whence as desired.
4.2.9. Right Selection
Let be of the form . Recall the definition:
In case of , clearly . Hence, the property holds trivially by induction. In the other case, it is analogous to since here also .
4.2.10. Left-to-Right Selection
Let be of the form . Recall the definition:
In case of and , clearly . Hence, the property holds trivially by induction. In the other case, it is analogous to since here also .
4.2.11. Right and Left Cylindrifications
Let be of the form . The case for left cylindrification is analogous. Recall that . If , then there exists a valuation such that:
(1) ;
(2) outside .
Let and let be a partial valuation on . Clearly, since . By induction from (1) and Lemma 4.5 we know that . Since , we know from that outside . Hence, we can conclude that .
4.3. Proof of Syntactic Input-Output Determinacy
Syntactic input-output determinacy is certainly proved if we can prove the following lemma.
Lemma 4.6 (Syntactic Input-Output Determinacy).
Let be a LIF expression. Then, for every interpretation , every and every that agrees with on , there exists a valuation that agrees with on and .
In the proof, we will make use of a useful alternative formulation of syntactic input-output determinacy which is defined next.
Definition 4.7 (Alternative Input-Output Determinacy).
A LIF expression is said to satisfy alternative input-output determinacy if for every interpretation , every and every that agrees with on and outside , we have .
The following lemma shows that the two definitions are equivalent. In what follows, we will remove the superscript from and and refer to them as and , respectively.
Lemma 4.8.
Let be a LIF expression. satisfies alternative input-output determinacy iff it satisfies syntactic input-output determinacy.
Proof.
The proof of the “if”-direction is similar to the proof of Proposition 3.9. Now, we proceed to verify the other direction. Suppose that and on . We now construct a new valuation such that it agrees with on and it agrees with elsewhere. We thus have the following properties for :
(1) on , and
(2) on .
We know that on by assumption, whence (1) implies that on since . Combining this with (2), we know that on and outside . Thus, alternative input-output determinacy implies that . Since on , we know that there is a partial valuation on such that . By syntactic free variable, we know that . Thus, as desired. ∎
We are now ready for the proof of Lemma 4.6. In the following proof, we will use the notation to mean . Moreover, since we established by Lemma 4.8 that the two definitions for input-output determinacy are equivalent, we will verify any of them for each LIF expression.
4.3.1. Atomic Modules
Let be of the form . Recall the definitions:
• where are the variables in .
• where are the variables in .
Syntactic input-output determinacy directly follows from the definition of the semantics for atomic modules.
4.3.2. Identity
Let be of the form . Recall the definition for . We proceed to verify that satisfies alternative input-output determinacy. Indeed, this is true since .
4.3.3. Union
Let be of the form . Recall the definitions:
• .
• .
We proceed to verify that satisfies alternative input-output determinacy. If , then or . Assume without loss of generality that . Now, let be a valuation such that on . Moreover, we have the following:
Therefore, certainly on . Thus, by induction and Lemma 4.8, whence as desired.
4.3.4. Intersection
Let be of the form . Recall the definitions:
• .
• .
We proceed to verify that satisfies alternative input-output determinacy. If , then and . Now, let be a valuation such that on . Just as in the case for , we have that . Therefore, certainly on . Thus, and by induction and Lemma 4.8, whence as desired.
4.3.5. Composition
Let be of the form . Recall the definitions:
• .
• .
We proceed to verify that satisfies syntactic input-output determinacy. If , then there exists a valuation such that
(i) ;
(ii) .
Now, let be a valuation such that
(1)
Since , then by induction there exists a valuation such that:
(iii) ;
(iv) on .
By applying inertia to (i) and (iii) we get that and outside . Combining this with (1) we have that on . Together with (iv), this implies that
(2)
By induction from (ii), there exists such that:
(v) ;
(vi) on .
From (iii) and (vi) we get that . All that remains to be shown is that on . By applying inertia to (ii) and (v) we get that:
Combining this with (2), we have that on . Together with (vi), this implies that on , whence on as desired.
4.3.6. Difference
4.3.7. Converse
Let be of the form . Recall the definitions:
• .
• .
Alternative input-output determinacy holds since .
4.3.8. Left Selection
Let be of the form . Recall the definitions:
•
•
We proceed to verify that satisfies alternative input-output determinacy. Clearly, the property holds trivially by induction in case of . Indeed, in this case, . In the other case when , if , then we know that:
(1) ;
(2) ;
Let be a valuation such that on . In all cases, . Hence, . Moreover, in all cases . Thus, on . By induction from (1) and Lemma 4.8, we know that .
All that remains to be shown is that . Since , we know that on by assumption. Hence, by (2).
4.3.9. Right Selection
Let be of the form . Recall the definitions:
•
•
We proceed to verify that satisfies alternative input-output determinacy. Clearly, the property holds trivially by induction in case of . Indeed, in this case, . In the other case when , if , then we know that:
(1) ;
(2) ;
Let be a valuation such that on . In all cases, . Hence, . Moreover, in all cases . Thus, on . By induction from (1) and Lemma 4.8, we know that . Together with (2), we know that .
4.3.10. Left-to-Right Selection
Let be of the form . Recall the definitions:
•
•
We proceed to verify that satisfies alternative input-output determinacy. Clearly, in case of and . Hence, the property holds trivially by induction. In the other cases, if , then we know that:
(1) ;
(2) .
Let be a valuation such that on . In all cases, . Hence, . Moreover, in all cases . Thus, on . By induction from (1) and Lemma 4.8, we know that . All that remains to be shown is that . Since and on , we have . Together with (2), we get that as desired, whence .
4.3.11. Right Cylindrification
Let be of the form . Recall the definitions:
• .
• .
• .
We proceed to verify that satisfies alternative input-output determinacy. If , then there exists a valuation such that:
(1) ;
(2) outside .
Now, let be a valuation such that on . We now split the proof into two cases:
•
• Conversely, suppose that . We have . Thus, on since on . By induction and Lemma 4.8, . By syntactic free variable, we know that since . Clearly, , whence . Consequently, as desired.
4.3.12. Left Cylindrification
Let be of the form . Recall the definitions:
• .
• .
We proceed to verify that satisfies alternative input-output determinacy. If , then there exists a valuation such that:
(i) outside ;
(ii) .
Now, let be a valuation such that
(1)
Clearly, outside . Since , we also know that on . Combining this with (1), we get that on . Clearly, , whence on . By induction from (ii) and Lemma 4.8, we get that , whence also .
5. Precision Theorem Proof
In this section, we prove Theorem 3.27. By soundness and Proposition 3.20, it suffices to prove and for every LIF expression . For the latter inequality, it will be convenient to use the equivalent definition of semantic input variables introduced in Proposition 3.4. Moreover, in the proof of the Precision Theorem, we will often make use of the following two technical lemmas.
Lemma 5.1.
Let be a nullary relation name and let be an interpretation where is nonempty. Then consists of all identical pairs of valuations.
Proof.
The proof follows directly from the semantics of atomic modules. ∎
Lemma 5.2.
Suppose and where . Let be either or . Assume that and for . Let . If for any interpretation where is empty, then and .
Proof.
First, we verify that . Let . Since , then . By definition, we know that there is an interpretation and such that . Take to be the interpretation where for any while is empty. Clearly, , whence, and . It follows then that .
Similarly, we proceed to verify . Let . Since , then . By definition, we know that there is an interpretation , , and such that for every valuation that agrees with on .
Take to be the interpretation where for any while is empty. Clearly, , whence, . Therefore, . Hence, . Indeed, and for any valuation if agrees with on , then agrees with on . ∎
The proof of Theorem 3.27 is done by extensive case analysis. Intuitively, for each of the different operations, and every variable , we construct an interpretation such that is not inertial in and thus . Similarly, for every variable , we construct an interpretation as a witness of the fact that does not determine on and thus that . In the proof, we often remove the superscript from and and refer to them by and , respectively.
5.1. Atomic Modules
Let be of the form , where is . Recall the definition:
• , where are the variables in .
• , where are the variables in .
We first proceed to verify . Let . Consider an interpretation where
Let be the valuation that is everywhere. Also let be the valuation that is on and everywhere else. Clearly, since and agrees with outside . Hence, since .
Now we proceed to verify . Let . Consider the same interpretation and the same valuations and as discussed above. We already established that . Take . We establish that by arguing that there is no for which . Indeed, this is true since . Consequently, .
5.2. Identity
Let be of the form . We recall that and are both empty. Hence, and is trivial.
5.3. Union
Let be of the form , where is and is . We distinguish different cases based on whether or is nullary. If and are both nullary there is nothing to prove.
5.3.1. is nullary, is not
Clearly, for any interpretation where is empty. By induction and Lemma 5.2, we establish that and . Since and are both empty, we observe that:
• .
• .
Thus, and is trivial.
We proceed to verify . Let . Consider the interpretation where is not empty and
Let be the valuation that is everywhere. Clearly, since by Lemma 5.1. Take . We establish that by arguing that, for every valuation for which , and disagree on . Thereto, suppose . In particular, , whence by Lemma 5.1. Indeed, , , and but . Otherwise, . However, since , then as well. Therefore, there is no that agrees with on and at the same time. We conclude that .
5.3.2. is nullary, is not
This case is symmetric to the previous one.
5.3.3. Neither nor is nullary
Recall the definitions:
• .
• .
We first proceed to verify that . By induction, for or . Consequently, if , then there is an interpretation , such that . Indeed, , whence, for any interpretation .
We then proceed to verify that . The proof has four possibilities. Each case is discussed separately below.
When
If and , it is clear that for any interpretation where is empty. By Lemma 5.2 and by induction, we easily establish that .
When
This case is symmetric to the previous one.
When
If , then consider the interpretation such that and . Let be the valuation that is on and elsewhere. Clearly, , whence . Now take . If we can show that does not agree with on for any valuation such that , we are done. Thereto, suppose that there exists a valuation such that .
• In particular, if , then since .
• Otherwise, if , then since .
In both cases, have to be on which disagrees with on . Since and , then does not agree with on as desired. We conclude that .
When
This case is symmetric to the previous one.
5.4. Intersection
Let be of the form , where is and is . We distinguish different cases based on whether or is nullary. If and are both nullary there is nothing to prove.
5.4.1. is nullary, is not
In this case, and are both empty, so we observe that:
• .
• .
It is trivial to verify since is empty.
We proceed to verify . Let . Consider an interpretation where is not empty and . Let be the valuation that is everywhere. Clearly, since and . Take . We establish that by arguing that there is no valuation for which . Thereto, suppose . In particular, when , it is clear that . In the other case when , there is no such that belongs to both and . Indeed, the value for will never be agreed upon by and . Hence, as desired. We conclude that .
5.4.2. is nullary, is not
This case is symmetric to the previous one.
5.4.3. Neither nor is nullary
Recall the definitions:
• .
• .
We first proceed to verify . Let . Consider an interpretation such that
, where are all the combinations of . Similarly, , where are all the combinations of .
Let be the valuation that is everywhere. Also let be the valuation that is on and elsewhere. Clearly, , whence and . Hence, . Indeed, for .
We then proceed to verify . Let . Consider an interpretation where
Similarly, . Let be the valuation that is everywhere. Clearly, , whence and . Take . We establish that by arguing that there is no valuation such that . Indeed, this is clear when or . On the other hand, when , we have for which whence and . This is not possible since belongs to either or , but not both. Hence, the value for will never be agreed upon by and . We conclude that .
5.5. Difference
Let be of the form , where is and is . We distinguish different cases based on whether or is nullary. If and are both nullary there is nothing to prove.
5.5.1. is nullary, is not
In this case, and are empty. In particular, is empty, so is trivial.
We proceed to verify . Observe that
Let . Consider the interpretation where is not empty and . Let be the valuation that is on and elsewhere. Clearly, . Take . We establish that by arguing that there is no for which . Thereto, suppose . In particular, , whence by Lemma 5.1. However, , so as desired.
5.5.2. is nullary, is not
Clearly, for any interpretation where is empty. By induction and Lemma 5.2, we establish that and . Since and are both empty, we observe that:
• .
• .
Thus, and is trivial.
We proceed to verify . Let . Consider the interpretation where is not empty and
Let be the valuation that is on and elsewhere and let be the valuation that is everywhere. Clearly, . Take . We establish that by arguing that there is no for which . Thereto, suppose . In particular, , whence from the structure of . However, by Lemma 5.1, so as desired.
5.5.3. Neither nor is nullary
Recall the definitions:
• .
• .
The proof of is done together with the proof that for every . Discussions for the other cases for follow afterwards. Since , it is clear that for any interpretation where is empty. By induction and Lemma 5.2, we establish that and . Thus, and is trivial.
When
Let . Consider an interpretation where
and similarly . Let be the valuation that is on and elsewhere. Also, let be the valuation that is on and agrees with everywhere else. Clearly, . Further, . Indeed, since then should have the value of on for to be in . Take . We establish that by arguing that there is no for which . Thereto, suppose that . Hence, and . Indeed, whence . Clearly, showing that as desired. Therefore, .
When
Let . Consider an interpretation where and . Let be the valuation that is on and elsewhere. Also let be the valuation that is on and agrees with everywhere else. Clearly, . Furthermore, . In particular, when , we know that and . Since , then . In the other case, when , we know that since . Consequently, since but . We verify that . Take . We establish that by arguing that there is no for which . Thereto, suppose that . Hence, and . Indeed, whence . Clearly, showing that as desired. Therefore, .
5.6. Composition
Let be of the form , where is and is . We distinguish different cases based on whether or is nullary. If and are both nullary there is nothing to prove.
5.6.1. is nullary, is not
Clearly, for any interpretation where is not empty. In this case, and are both empty, so we observe that:
• .
• .
First, we verify . Let . We know that by induction, then . By definition, we know that there is an interpretation and such that . Take to be the interpretation where for any while is not empty. Clearly, , whence, , by Lemma 5.1, and . It follows then that .
Similarly, we proceed to verify . Let . We know that by induction, then . By definition, we know that there is an interpretation , , and such that for every valuation that agrees with on .
Take to be the interpretation where for any while is not empty. Clearly, , whence, . Therefore, . Hence, . Indeed, and for any valuation if agrees with on , then agrees with on .
5.6.2. is nullary, is not
Clearly, for any interpretation where is not empty. In this case, and are both empty, so we observe that:
• .
• .
First, we verify . Let . We know that by induction, then . By definition, we know that there is an interpretation and such that . Take to be the interpretation where for any while is not empty. Clearly, , whence, , by Lemma 5.1, and . It follows then that .
Similarly, we proceed to verify . Let . We know that by induction, then . By definition, we know that there is an interpretation , , and such that for every valuation that agrees with on .
Take to be the interpretation where for any while is not empty. Clearly, , whence, . Therefore, . Hence, . Indeed, and for any valuation if agrees with on , then agrees with on .
5.6.3. Neither nor is nullary
Recall the definitions:
• .
• .
We first proceed to verify . Let . Consider an interpretation such that
, where are all the combinations of . Similarly,
, where are all the combinations of .
Let be the valuation that is everywhere. Also, let be the valuation that is on and elsewhere. Clearly, . Let be the valuation that is on , on , and elsewhere. Clearly, , whence . Hence, . Indeed, for .
Now we proceed to verify . Let . Consider an interpretation where and similarly . Let be the valuation that is everywhere. Clearly, , whence and .
Take . We establish that by arguing that there is no valuation for which . In particular, when . Clearly, there is no such that . On the other hand, when . Clearly, , whence . However, there is no such that . Thus, there is no such that as desired. We conclude that .
5.7. Converse
Let be of the form , where . Recall the definitions:
• .
• .
We first proceed to verify . Let . Consider an interpretation where
Let be the valuation that is on and elsewhere. Also let be the valuation that is everywhere. Clearly, since . Therefore, since .
Now we proceed to verify . Let . Consider the same interpretation and the same valuations and . We established that . Take . We establish that by arguing that there is no for which . Indeed, when , then have to be on . In the other case, when , then have to be on . Thus, there is no for which as desired. Consequently, .
5.8. Left Cylindrification
Let be of the form , where . Recall the definitions:
• .
• .
We first proceed to verify . Let . Consider an interpretation where . Let be the valuation that is on and elsewhere. Also let be the valuation that is on and everywhere else. Clearly, since . Therefore, since .
Now we proceed to verify . Let . Consider the same interpretation and the same valuations and . We established that . Take . We establish that by arguing that there is no for which . Indeed, this is true since . Consequently, .
5.9. Right Cylindrification
Let be of the form , where . Recall the definitions:
• .
• .
We first proceed to verify . Let . Consider an interpretation where . Let be the valuation that is everywhere. Also let be the valuation that is on and on and everywhere else. Clearly, since either or . Therefore, since .
Now we proceed to verify . Let . Consider the same interpretation and the same valuations and . We established that . Take . We establish that by arguing that there is no for which . Indeed, this is true since . Consequently, .
5.10. Left Selection
Let be of the form , where , , and . We distinguish different cases based on whether .
When and are the same variable ()
Recall the definitions in this case:
• .
• .
We proceed to verify that and . Indeed, this is true since for any interpretation because of .
When and are different variables ()
Recall the definitions in this case:
• .
• .
We first proceed to verify . Let . Consider an interpretation where
Let be the valuation that is everywhere. Also let be the valuation that is on and everywhere else. Clearly, since and . Therefore, since .
Now we proceed to verify . Let . Consider an interpretation where . Let be the valuation that is everywhere. Clearly, since and . Take . We establish that by arguing that there is no for which . In particular, when , it is clear that there is no such that . In the other case, when is either or , there is no such that . Indeed, this is true since and . Consequently, .
5.11. Right Selection
Let be of the form , where , , and . We distinguish different cases based on whether .
When and are the same variable ()
Recall the definitions in this case:
• .
• .
We proceed to verify that and . Indeed, this is true since for any interpretation because of .
When and are different variables ()
Recall the definitions in this case:
• .
• .
We first proceed to verify . Let . Consider an interpretation where
such that if is either or and , otherwise, . Let be the valuation that is on if , on if , and everywhere. Also let be the valuation that is on and agrees with everywhere else. Clearly, since and . Therefore, since .
Now we proceed to verify . Let . Consider an interpretation where . Let be the valuation that is everywhere. Clearly, since and . Take . We establish that by arguing that there is no for which . In particular, when , it is clear that there is no such that . Now we need to verify the same when is or and . Thereto, suppose . In the case of is and , this is only possible when . Therefore, but whether or not. Hence, since and . The case when is and is symmetric. Consequently, .
5.12. Left-to-Right Selection
Let be of the form , where , , and . We distinguish different cases based on whether and .
When and
Recall the definitions in this case:
• .
• .
In what follows, since we will refer to both of them with . We first proceed to verify . Let . Consider an interpretation such that where if , otherwise . Let be the valuation that is everywhere. Also let be the valuation that is on and everywhere else. Clearly, since and . Therefore, since .
Now we proceed to verify . Let . Consider an interpretation where . Let be the valuation that is everywhere. Clearly, since . Take . We establish that by arguing that there is no for which . Thereto, suppose that . In particular, when it is clear that . On the other hand, when and , clearly . Consequently, .
When and
Recall the definitions in this case:
• .
• .
In what follows, since we will refer to both of them with . We proceed to verify and . Indeed, this is true since for any interpretation because of and .
When and
Recall the definitions in this case:
• .
• .
We first proceed to verify . Let . Consider an interpretation such that where if , otherwise . Also, if , otherwise . Let be the valuation that is on and everywhere else. Also let be the valuation that is on , on if and agrees with everywhere else. Clearly, since and . Therefore, . Indeed, in both cases whether or not, .
Now we proceed to verify . Let . Consider an interpretation where . Let be the valuation that is everywhere. Clearly, since . Take . We establish that by arguing that there is no for which . Thereto, suppose that . In particular, when it is clear that . On the other hand, when and , clearly . Consequently, .
When and
Recall the definitions in this case:
• .
• .
We first proceed to verify . Let . Consider an interpretation such that . Let be the valuation that is everywhere. Also let be the valuation that is on and everywhere else. Clearly, since and . Indeed, this is true since , then . Therefore, since .
Now we proceed to verify . Let . Consider an interpretation where . Let be the valuation that is everywhere. Clearly, since . Take . We establish that by arguing that there is no for which . Thereto, suppose that . In particular, when it is clear that . On the other hand, when or , clearly since and . Consequently, .
6. Optimality Theorem Proof
In this section, we prove Theorem 3.28. Thus, we would like to show that
for any LIF expression , assuming that is a sound and compositional input–output definition. The proof is by induction on the structure of .
Atomic Modules
For atomic module expressions , this follows directly from Theorem 3.27.
Identity
For , this is immediate since .
Binary Operators
For , where is a binary operator, we define two atomic module expressions and where , , , and with distinct module names of the right arity.
Since is sound, we know that the following holds for :
(1)
Moreover by soundness and Proposition 3.20, we know that:
(2)
From the Precision Theorem, we know that:
(3)
From the compositionality of , we know that
(4)
By combining Equations (2–4), we find that:
(5)
We now claim the following
(6)
If we prove our claim, then combining Equations (5–6) establishes our theorem for binary operators.
First, we prove our claim for the inductive cases for outputs of the different binary operators. From the inductive hypothesis and Equation (1), we know that for :
Hence, it is clear that:
• is a subset of , which settles the cases when since for any LIF expressions and .
• is a subset of , which settles the case when is since for any LIF expressions and .
• is a subset of , which settles the case when is since for any LIF expressions and .
Now, we consider the inductive cases for the inputs of the different binary operators. Similar to the outputs, we know that for :
Consequently,
• when , we consider the following cases:
– if , then it is clear that ;
–
In all cases, we verify that . This settles the case when is since for any LIF expressions and .
• when , we consider the following cases:
– if for some , then it is clear that .
–
In all cases, we verify that . This settles the cases when since for any LIF expressions and .
Unary Operators
We follow a similar approach for unary operators. For , where is a unary operator, we define one atomic module expression where , and .
Since is sound, we know that the following holds:
(7)
Moreover, we know that:
(8)
From the Precision Theorem, we know that:
(9)
From the compositionality of , we know that
(10)
By combining Equations (8–10), we find that:
(11)
We now claim the following
(12)
If we prove our claim, then combining Equations (11–12) establishes our theorem for unary operators.
Proving our claim for the inductive cases for outputs of the different unary operators follows directly from the inductive hypothesis and Equation (7), which states that:
Indeed, and , respectively, simply equal and , except for the possible addition or removal of some fixed variable that depends only on .
Now, we consider the inductive cases for inputs. Similar to the outputs, we know that:
Here, we only discuss the cases for and as all the other cases again follow directly from the above inclusion and the definition of .
We begin with the cases for . The cases are:
• when , we have
• when and , we have
•
Finally, we consider the case for when . The case when follows directly. By definition,
We can focus on . If or , we are done. Now, consider the case when , but . Similar to our reasoning for the last case in , we can show that , whence, by definition.
7. Primitivity of Composition
We now turn our attention to the study of composition in LIF. Indeed, LIF has a rich set of logical operators already, plus an explicit operator () for sequential composition. This raises the question of whether is already definable in terms of the other operators.
We begin by showing that for “well-behaved” expressions (all subexpressions have disjoint inputs and outputs), composition is redundant in LIF: every well-behaved LIF expression is equivalent to a LIF expression that does not use composition. As a corollary, we will obtain that composition is generally redundant if there is an infinite supply of variables. In contrast, in the bounded-variable case, we will show that composition is primitive in LIF. Here, we use LIFnc to denote the fragment of LIF without composition.
7.1. When Inputs and Outputs are Disjoint, Composition is Non-Primitive
Our first non-primitivity result is based on inputs and outputs. We say that a LIF expression is io-disjoint if . The following theorem implies that if , , and all their subexpressions are io-disjoint, we can rewrite into a LIFnc expression. Of course, this property also holds in case since the syntactic inputs and outputs overapproximate the semantic ones.
Theorem 7.1.
Let and be LIF expressions such that is io-disjoint. Then, is equivalent to
Intuitively, the reason why this expression works is as follows: we cylindrify on the right. In general, this might result in a loss of information, but since we are only cylindrifying outputs of , this means we only forget the information that would be overwritten by anyway. Since the inputs and outputs of are disjoint, does not need to know what did to those variables in order to determine its own outputs. We also cylindrify on the left on the outputs of , since these values will be set by . One then still needs to be careful in showing that the intersection indeed removes all artificial pairs, by exploiting the fact that expressions are inertial outside their output.
Proof.
Let be an interpretation. First, we show that . If , then there is a such that and . By definition of the outputs of , and agree outside . Hence, . Similarly, we can show that .
For the other inclusion, assume that . Using the definition of the semantics of cylindrification, we find such that and agrees with outside , and we find a such that agrees with outside and . Using the definition of the output of , we know that also agrees with outside the outputs of ; thus and agree outside the outputs of , and hence certainly on the inputs of . We can apply Proposition 3.6: thanks to soundness, is finite and determines , which contains . This guarantees that is determined by its inputs, whence there exists a such that , where on the outputs of and, since is inertial outside its outputs, outside the outputs of . But we previously established that agrees with outside the outputs of , whence . Summarizing, we have now found that and , whence as desired. ∎
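To make the pair-of-valuations semantics and the role of cylindrification in this construction tangible, here is a minimal executable sketch. Everything in it is an illustrative assumption: BRVs over a three-element domain are represented as finite sets of valuation pairs, the modules alpha and beta are toy io-disjoint modules, and the final check uses our reading of the rewriting sketched above (right-cylindrify the first expression on the outputs of the second, left-cylindrify the second on the outputs of the first, then intersect), not the exact expression of Theorem 7.1.

```python
# Minimal, illustrative sketch (not from the paper): BRVs over a small finite
# domain are sets of pairs of valuations, each valuation a tuple of
# (variable, value) pairs in a fixed order.
from itertools import product

DOMAIN = [0, 1, 2]
VARS = ["x", "y", "z"]

def valuations():
    """Enumerate all valuations of VARS over DOMAIN as hashable tuples."""
    for values in product(DOMAIN, repeat=len(VARS)):
        yield tuple(zip(VARS, values))

def compose(a, b):
    """Sequential composition: nu1 -> nu3 via a shared intermediate valuation."""
    return {(n1, n3) for (n1, n2) in a for (m2, n3) in b if n2 == m2}

def agree_outside(n, m, vs):
    """True iff valuations n and m agree on every variable not in vs."""
    return all(dn == dm for (v, dn), (_, dm) in zip(n, m) if v not in vs)

def cyl_right(a, vs):
    """Right cylindrification: forget the values of vs in the second valuation."""
    return {(n1, n2p) for (n1, n2) in a for n2p in valuations()
            if agree_outside(n2, n2p, vs)}

def cyl_left(a, vs):
    """Left cylindrification: forget the values of vs in the first valuation."""
    return {(n1p, n2) for (n1, n2) in a for n1p in valuations()
            if agree_outside(n1, n1p, vs)}

def module(read, write, f):
    """Atomic module: write := f(read); every other variable stays inertial."""
    out = set()
    for n1 in valuations():
        d = dict(n1)
        d[write] = f(d[read])
        out.add((n1, tuple((v, d[v]) for v in VARS)))
    return out

# Two io-disjoint toy modules: alpha sets y from x, beta sets z from y.
alpha = module("x", "y", lambda v: (v + 1) % 3)   # outputs of alpha: {y}
beta = module("y", "z", lambda v: (v + 1) % 3)    # outputs of beta: {z}

lhs = compose(alpha, beta)
rhs = cyl_right(alpha, {"z"}) & cyl_left(beta, {"y"})
assert lhs == rhs  # composition eliminated on this toy example
```

On larger variable sets or domains the enumeration blows up quickly, so this is only a way to experiment with the algebra, not an evaluation procedure.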
Given the undecidability results of Section 3, Theorem 7.1 is not effective. We can, however, give the following syntactic variant.
Theorem 7.2.
Let and be LIF expressions such that . Then, is equivalent to
Proof.
Since , we obtain by Lemma 3.22 that . Thus, we alternatively show that is equivalent to the expression
We can also see that is io-disjoint, since and is sound. Thus, if we show that
for any interpretation , we can apply Theorem 7.1 and we are done.
Thereto, let be an interpretation. By soundness, it is clear that
What remains to show is that the other inclusion also holds. Thereto, let . Clearly, and . From , we can see that outside . From , we can see that there is a valuation such that and outside . Define to be the valuation . By construction and io-disjointness of , we see that on and outside . By Proposition 3.9, we obtain that . Define to be the valuation . By the semantics of cylindrification, we see that . Consequently, outside . Before, we established that and agree outside the same set of variables. So we obtain that outside . Moreover, we know by construction that on . Then, is the same valuation as . So we obtain that as desired. ∎
Example 7.3 (Example 3.15 continued).
Theorem 7.1 no longer holds in general if can have overlapping inputs and outputs, as the following example illustrates.
Example 7.4.
Consider the expression
with the interpretation as in the example above. In this case, increments the value of by two. However, and are both equal to
Hence, indeed, in this case is not equivalent to
7.2. If is Infinite, Composition is Non-Primitive
We know from Theorem 7.1 that if is io-disjoint, and can be composed without using the composition operator. If is sufficiently large, we can force any expression to be io-disjoint by having write its outputs onto unused variables instead of its actual outputs. The composition can then be eliminated following Theorem 7.1, after which we move the variables back so that the “correct” outputs are used. What we need to show is that “moving the variables around”, as described above, is expressible without composition. As before, we define the operators on BRVs but their definition is lifted to LIF expressions in a straightforward way.
Definition 7.5.
Let be a BRV and let and be disjoint tuples of distinct variables of the same length. The right move is defined as follows:
This operation can be expressed without composition, as we show in the following lemma:
Lemma 7.6.
Let and be disjoint tuples of distinct variables of the same length. Then, for any BRV , we have
Proof.
We give a “proof by picture”. Consider an arbitrary :
We will verify that when we apply the LHS and the RHS on this pair of valuations, we obtain identical results.
For the LHS, we see that yields the following pair of valuations when applied on :
Now, we check the RHS. We see that the following set of pairs of valuations is the result of when applied on :
Here the asterisk denotes a “wildcard”, i.e., any valuation on is allowed.
Then, we see that yields:
Next, we see that yields:
Finally, we see that yields the following pair of valuations which is the same as the result of the LHS.
∎
Lemma 7.7.
Let and be BRVs and let and be disjoint tuples of distinct variables of the same length such that all variables in are inertially cylindrified in and . In that case:
What this lemma shows is that we can temporarily move certain variables (the ) to unused variables (the ) and then move them back. The proof of this lemma is as follows.
Proof of Lemma 7.7.
Again, we give a proof by picture. Let the pair on the left be a generic pair of valuations belonging to , and let the pair on the right be a generic one belonging to . The “” here represents inertial cylindrification.
For the LHS, we see that composition can only be applied if and . Under this assumption, we get that yields the following:
Now, we check the RHS. We see that yields the following when applied on the generic pair belonging to :
To apply the composition in the RHS, we must have and , which are the same restrictions we had in applying the composition in the LHS, so the expression yields:
Finally, applying the last move operation, yields:
which is clearly identical to what we had from the LHS. ∎
This finally brings us to the main result of the current subsection.
Theorem 7.8.
If is infinite, then every LIF expression is equivalent to a LIFnc expression.
Proof.
We prove this theorem by induction on the number of composition operators in a LIF expression . The base case (no composition operators) is trivial. For the inductive case, consider an expression containing at least one composition operator. We show how to rewrite equivalently with one composition operator fewer. Thereto, take any subexpression such that and are LIFnc expressions. We eliminate this composition as follows. Choose a tuple of variables of the same length as , such that does not occur in . In that case, is inertially cylindrified in and in , and hence, Lemma 7.7 yields that is equivalent to
We will next show that is io-disjoint. Indeed, from the equivalence in Lemma 7.6 and the soundness of our definitions, we can see that
Moreover, we generally have for any and any LIF expression in which is inertially cylindrified. As a consequence, .
Also, we can see that
Since does not occur in , we indeed obtain that is io-disjoint.
We can now apply Theorem 7.2 to eliminate the composition yielding the LIFnc expression
∎
7.3. If is Finite, Composition is Primitive
The case that remains is when is finite. We will show that in this case, composition is indeed primitive by relating bounded-variable LIF to bounded-variable first-order logic.
Assume . Since BRVs involve pairs of -valuations, we introduce a copy disjoint from . For clarity, we also write for . As usual, by we denote the fragment of first-order logic that uses only distinct variables. We observe:
Proposition 7.9.
For every LIF expression , there exists an formula with free variables in such that
where is the -valuation such that for each . Furthermore, if is a LIFnc expression, can be taken to be a formula.
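To fix intuitions, one plausible reading of this correspondence is that a pair of valuations over the variables $x_1,\dots,x_k$ is encoded as a single first-order valuation over the doubled variable set $x_1,\dots,x_k,x'_1,\dots,x'_k$; the notation here is an assumption on our part, since the original symbols are not reproduced above:
\[
(\nu_1,\nu_2) \;\longmapsto\; \nu
\quad\text{with}\quad
\nu(x_i)=\nu_1(x_i) \text{ and } \nu(x'_i)=\nu_2(x_i)
\quad\text{for } 1 \le i \le k.
\]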
Proof.
The proof is by induction on the structure of (using Lemma 2.1, we omit redundant operators).
We introduce a third copy of . For every we define as follows:
Using these functions, we can translate a valuation on to a corresponding valuation on with . Clearly, does this job.
In the proof, we actually show a stronger statement by induction, namely that for each and for every there is a formula with free variables in in such that for every :
Since the notations , , , and are taken, we use notations , and for variables.
-
•
. Take to be .
-
•
. Take to be
-
•
. Take to be .
-
•
. Take to be .
-
•
. Let . Take to be .
-
•
. By induction, exists. This formula can serve as .
-
•
. Take to be .
-
•
. Take to be .∎
Now that we have established that LIFnc can be translated into , all that is left to do is find a Boolean query that can be expressed in LIF with variables, but not in . We find such a query in the existence of a -clique. We will first show that we can construct a LIFnc expression such that, given an interpretation interpreting a binary relation , consists of all -cliques of . Next, we show how can be used (with composition) to construct an expression such that is non-empty if and only if has a -clique. Since this property cannot be expressed in , we can conclude that composition must be primitive.
To avoid confusion, we recall that a set of data elements is a -clique in a binary relation if, for any two distinct and in , we have (and also ).
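Before turning to the LIF construction, the following plain reference check spells out exactly this property, together with the brute-force existence check that plays the role of the Boolean query later on. It is ordinary Python, not a LIF expression; the relation R and the helper names are illustrative assumptions.

```python
from itertools import combinations

def is_clique(elements, R):
    """True iff every two distinct elements are connected by R in both directions."""
    return all((a, b) in R and (b, a) in R
               for a, b in combinations(set(elements), 2))

def has_k_clique(domain, R, k):
    """Brute-force existence check for a k-clique in the binary relation R."""
    return any(is_clique(c, R) for c in combinations(sorted(domain), k))

# Toy relation (assumed for the example): a bidirectional triangle on {1, 2, 3}
# plus one extra bidirectional edge between 4 and 5.
R = {(1, 2), (2, 1), (2, 3), (3, 2), (1, 3), (3, 1), (4, 5), (5, 4)}
assert has_k_clique({1, 2, 3, 4, 5}, R, 3)       # {1, 2, 3} is a 3-clique
assert not has_k_clique({1, 2, 3, 4, 5}, R, 4)   # but there is no 4-clique
```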
Proposition 7.10.
Suppose that with and let with . There exists a LIF expression such that
Proof.
Throughout this proof, we identify a pair of valuations with the tuple of data elements
Before coming to the actual expression for , we introduce some auxiliary concepts. First, we define
It is clear that
A first condition for being a -clique is that all data elements are different. It is clear that the expression
has the property that consists of all -tuples where at least one data element is repeated. Hence, consists of all -tuples of distinct data elements, where
The second condition for being a -clique is that each two distinct elements are connected by . In order to check this, we define the following expressions for each two variables and :
With these definitions, for instance consists of all -tuples such that the and the element are connected (in two directions) in , and similar properties hold for and . From this, it follows that the expression
satisfies the proposition; it intersects with all the expressions stating that each two data elements must be (bidirectionally) connected by . ∎
Notice that can be used to compute all the -cliques of the input interpretation. We now use to check for existence of -cliques.
Proposition 7.11.
Suppose that with and let with . Define
Then, for every interpretation , is non-empty if and only if has a -clique.
It is well known that existence of a -clique is not expressible in (Ebbinghaus and Flum, 1999). The above proposition thus immediately implies primitivity of composition.
Theorem 7.12.
Suppose that . Then, composition is primitive in LIF. Specifically, no LIFnc expression is equivalent to the LIF expression .
8. Related Work
LIF grew out of the Algebra of Modular Systems (Ternovska, 2015), which was developed to provide foundations for programming from available components. That paper mentions information flows, in connection with input–output behavior in classical logic, for the first time. The paper also surveys earlier work from the author’s group, as well as other closely related work.
In a companion paper (Aamer et al., 2020a), we report on an application of LIF to querying under limited access patterns, as for instance offered by web services (Mcilraith et al., 2001). That work also involves inputs and outputs, but only of a syntactic nature, and only for a restricted variant of LIF (called “forward” LIF). The property of io-disjointness also turned out to be important in that work, albeit for a quite different purpose.
Our results also relate to the evaluation problem for LIF, which takes as input a LIF expression , an interpretation , and a valuation , and where the task is to find all such that . From our results, it follows that only the value of on the input variables is important, and similarly we are only interested in the values of each on the output variables. A subtle point, however, is that may be infinite, and moreover, even if itself is not infinite, the output of the evaluation problem may still be. In many cases, it is still possible to obtain a finite representation, for instance by using quantifier elimination techniques as done in Constraint Databases (Kuper et al., 2000).
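As a purely illustrative aside, when the interpretation is finite and a BRV is materialized as a set of valuation pairs (as in the earlier sketch), the evaluation problem reduces to a simple filter; the function name and representation are assumptions for the sake of the example, not an interface defined in the paper.

```python
# Illustrative only: a BRV as a finite set of (nu1, nu2) pairs of hashable valuations.
def evaluate(brv, nu1):
    """All successor valuations nu2 such that (nu1, nu2) belongs to the BRV."""
    return {nu2 for (m1, nu2) in brv if m1 == nu1}
```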
We have defined the semantics of LIF algebraically, in the style of cylindric set algebra (Henkin et al., 1971; Imieliński and Lipski, 1984). An important difference is the dynamic nature of BRVs which are sets of pairs of valuations, as opposed to sets of valuations which are the basic objects in cylindric set algebra.
Our optimality theorem was inspired by work on controlled FO (Fan et al., 2014), which had as aim to infer boundedness properties of the outputs of first-order queries, given boundedness properties of the input relations. Since this inference task is undecidable, the authors defined syntactic inferences similar in spirit to our syntactic definition of inputs and outputs. They show (their Proposition 4.3) that their definitions are, in a sense, sharp. Note that our optimality theorem is stronger in that it shows that no other compositional and sound definition can be better than ours. Of course, the comparison between the two results is only superficial as the inference tasks at hand are very different.
The Logic of Information Flows is similar to dynamic predicate logic (DPL) (Groenendijk and Stokhof, 1991), in the sense that formulas are also evaluated with respect to pairs of valuations. There is, however, a key difference in philosophy between the two logics. LIF starts from the idea that well-known operators from first-order logic can be used to describe combinations and manipulations of dynamic systems, and as such provides a means for procedural knowledge in a declarative language. The dynamics in LIF are dynamics of the described system. Dynamic predicate logic, on the other hand, starts from the observation that, in natural language, operators such as conjunction and existential quantification are dynamic, where the dynamics are in the process of parsing a sentence, often related to coreference analysis. To the best of our knowledge, inputs and outputs of expressions have not been studied in DPL.
Since we developed a large part of our work in the general setting of BRVs, and thus of transition systems, we expect several of our results to be applicable in the context of other formalisms where specifying inputs and outputs is important, such as API-based programming (Calvanese et al., 2016) and synthesis (De Giacomo et al., 2013; Alechina et al., 2019), privacy and security, business process modeling (Calvanese et al., 2008), and model combinators in Constraint Programming (Fontaine et al., 2013).
9. Conclusion and Future Work
Declarative modeling is of central importance in the area of Knowledge Representation and Reasoning. The Logic of Information Flows provides a framework to investigate how, and to what degree, dynamic or imperative features can be modeled declaratively. In this paper we have focused on inputs, outputs, and sequential composition, as these three concepts are fundamental to modeling dynamic systems. There are many directions for further research.
Inputs and outputs are not just relevant from a theoretic perspective, but can also have ramifications on computation. Indeed, they form a first handle to parallelize computation of complex LIF expressions, or to decompose problems.
In this paper, we have worked with a basic set of operations motivated by the classical logic connectives. In order to provide a fine control of computational complexity, or to increase expressiveness, it makes sense to consider other operations.
The semantic notions developed in this paper (inputs, outputs, soundness) apply to global BRVs in general, and hence are robust under varying the set of operations. Moreover, our work delineates and demonstrates a methodology for adapting syntactic input–output definitions to other operations.
A specific operation that is natural to consider is converse. The converse of a BRV is defined to be . In the context of LIF (Ternovska, 2019), it can model constraint solving by searching for an input to a module that produces a desired outcome. If we add converse to LIF with only a single variable () and the vocabulary has only binary relations of input arity one, we obtain the classical calculus of relations (Tarski, 1941). There, converse is known to be primitive (Fletcher et al., 2015). When the number of variables is strictly more than half of the maximum arity of relations in the vocabulary, converse is redundant in LIF, as can be shown using techniques similar to those used in this paper to show redundancy of composition. Investigating the exact number of variables needed for non-primitivity is an interesting question for further research.
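For concreteness, under the standard reading of converse as swapping the two valuations in every pair (the formal definition is not reproduced above, so this reading is an assumption), the operation on a materialized BRV is a one-liner:

```python
def converse(brv):
    """Swap the 'before' and 'after' valuations of every transition.
    Assumes the standard reading of converse; not taken verbatim from the paper."""
    return {(nu2, nu1) for (nu1, nu2) in brv}
```

Searching for an input that produces a desired outcome then amounts to evaluating the converse in the forward direction.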
Another direction for further research is to examine fragments of LIF for which the semantic input or output problem may be decidable, or even for which the syntactic definitions coincide with the semantic definitions.
Finally, an operation that often occurs in dynamic systems is the fixpoint construct used by Ternovska (2019). It remains to be seen how our work, and the further research directions mentioned above, can be extended to include the fixpoint operation.
Acknowledgements.
This research received funding from the Flemish Government under the “Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen” programme, from FWO Flanders project G0D9616N, and from the Natural Sciences and Engineering Research Council of Canada (NSERC). Jan Van den Bussche is partially supported by the National Natural Science Foundation of China (61972455). Heba Aamer is supported by the Special Research Fund (BOF) (BOF19OWB16).
References
- Aamer et al. (2020a) Heba Aamer, Bart Bogaerts, Dimitri Surinx, Eugenia Ternovska, and Jan Van den Bussche. 2020a. Executable first-order queries in the logic of information flows. In Proceedings 23rd International Conference on Database Theory (Leibniz International Proceedings in Informatics, Vol. 155). Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 4:1–4:14.
- Aamer et al. (2020b) Heba Aamer, Bart Bogaerts, Dimitri Surinx, Eugenia Ternovska, and Jan Van den Bussche. 2020b. Inputs, Outputs, and Composition in the Logic of Information Flows. In Proceedings of the 17th International Conference on Principles of Knowledge Representation and Reasoning. 2–11. https://doi.org/10.24963/kr.2020/1
- Alechina et al. (2019) Natasha Alechina, Tomás Brázdil, Giuseppe De Giacomo, Paolo Felli, Brian Logan, and Moshe Y. Vardi. 2019. Unbounded Orchestrations of Transducers for Manufacturing. In AAAI. AAAI Press, 2646–2653.
- Calvanese et al. (2008) Diego Calvanese, Giuseppe De Giacomo, Maurizio Lenzerini, Massimo Mecella, and Fabio Patrizi. 2008. Automatic Service Composition and Synthesis: the Roman Model. IEEE Data Eng. Bull. 31, 3 (2008), 18–22.
- Calvanese et al. (2016) Diego Calvanese, Giuseppe De Giacomo, Maurizio Lenzerini, and Moshe Y. Vardi. 2016. Regular Open APIs. In Principles of Knowledge Representation and Reasoning: Proceedings of the Fifteenth International Conference, KR 2016, Cape Town, South Africa, April 25-29, 2016. AAAI Press, 329–338.
- De Giacomo et al. (2013) Giuseppe De Giacomo, Fabio Patrizi, and Sebastian Sardiña. 2013. Automatic behavior composition synthesis. Artif. Intell. 196 (2013), 106–142.
- Ebbinghaus and Flum (1999) Heinz-Dieter Ebbinghaus and Jörg Flum. 1999. Finite Model Theory (second ed.). Springer.
- Enderton (1972) Herbert B. Enderton. 1972. A Mathematical Introduction To Logic. Academic Press.
- Fan et al. (2014) Wenfei Fan, Floris Geerts, and Leonid Libkin. 2014. On scale independence for querying big data. In Proceedings 33rd ACM Symposium on Principles of Database Systems. ACM, 51–62.
- Fletcher et al. (2015) George H. L. Fletcher, Marc Gyssens, Dirk Leinders, Dimitri Surinx, Jan Van den Bussche, Dirk Van Gucht, Stijn Vansummeren, and Yuqing Wu. 2015. Relative expressive power of navigational querying on graphs. Information Sciences 298 (2015), 390–406.
- Fontaine et al. (2013) Daniel Fontaine, Laurent Michel, and Pascal Van Hentenryck. 2013. Model Combinators for Hybrid Optimization. In Principles and Practice of Constraint Programming - 19th International Conference, CP 2013, Uppsala, Sweden, September 16-20, 2013. Proceedings. Springer, 299–314.
- Groenendijk and Stokhof (1991) Jeroen Groenendijk and Martin Stokhof. 1991. Dynamic predicate logic. Linguistics and Philosophy 14 (1991), 39–100.
- Gurevich (1983) Yuri Gurevich. 1983. Algebras of feasible functions. In Proceedings 24th Symposium on Foundations of Computer Science. IEEE Computer Society, 210–214.
- Gurevich (1988) Yuri Gurevich. 1988. Logic and the challenge of computer science. In Current Trends in Theoretical Computer Science, E. Börger (Ed.). Computer Science Press, 1–57.
- Henkin et al. (1971) Leon Henkin, J. Donald Monk, and Alfred Tarski. 1971. Cylindric Algebras. Part I. North-Holland.
- Imieliński and Lipski (1984) Tomasz Imieliński and Witold Lipski. 1984. The relational model of data and cylindric algebras. J. Comput. System Sci. 28 (1984), 80–102.
- Kuper et al. (2000) Gabriel M. Kuper, Leonid Libkin, and Jan Paredaens (Eds.). 2000. Constraint Databases. Springer.
- Lewis (1973) David Lewis. 1973. Causation. Journal of Philosophy 70 (1973), 556–567.
- Lifschitz (1987) Vladimir Lifschitz. 1987. Formal Theories of Action (Preliminary Report). In Proceedings of the 10th International Joint Conference on Artificial Intelligence. Milan, Italy, August 23-28, 1987, John P. McDermott (Ed.). Morgan Kaufmann, 966–972. http://ijcai.org/Proceedings/87-2/Papers/081.pdf
- McCarthy and Hayes (1969) John McCarthy and Patrick J. Hayes. 1969. Some Philosophical Problems from the Standpoint of Artificial Intelligence. In Machine Intelligence 4, B. Meltzer and D. Michie (Eds.). Edinburgh University Press, 463–502.
- Mcilraith et al. (2001) Sheila Mcilraith, Tran Son, and Honglei Zeng. 2001. Semantic Web Services. IEEE Intelligent Systems 16 (2001), 46–53.
- Mitchell and Ternovska (2005) David G. Mitchell and Eugenia Ternovska. 2005. A framework for representing and solving NP search problems. In Proc. AAAI. AAAI Press / The MIT Press, 430–435.
- Tarski (1941) Alfred Tarski. 1941. On the calculus of relations. Journal of Symbolic Logic 6 (1941), 73–89.
- Ternovska (2015) Eugenia Ternovska. 2015. An Algebra of Combined Constraint Solving. In Global Conference on Artificial Intelligence, GCAI 2015, Tbilisi, Georgia, October 16-19, 2015. EasyChair, 275–295.
- Ternovska (2017) Eugenia Ternovska. 2017. Recent progress on the algebra of modular systems. In Proceedings 11th Alberto Mendelzon International Workshop on Foundations of Data Management (CEUR Workshop Proceedings, Vol. 1912), J.L. Reutter and D. Srivastava (Eds.). CEUR-WS.org.
- Ternovska (2019) Eugenia Ternovska. 2019. An algebra of modular systems: static and dynamic perspectives. In Frontiers of Combining Systems: Proceedings 12th FroCos (Lecture Notes in Artificial Intelligence, Vol. 11715), A. Herzig and A. Popescu (Eds.). Springer, 94–111.