
Knowability as continuity:
a topological account of informational dependence

Alexandru Baltag & Johan van Benthem
Abstract

We study knowable informational dependence between empirical questions, modeled as continuous functional dependence between variables in a topological setting. We also investigate epistemic independence in topological terms and show that it is compatible with functional (but non-continuous) dependence. We then proceed to study a stronger notion of knowability based on uniformly continuous dependence. On the technical logical side, we determine the complete logics of languages that combine general functional dependence, continuous dependence, and uniformly continuous dependence.

1 Introduction and preview

Dependence in the world

Dependence occurs when objects or phenomena constrain each other’s behavior. These constraints range from complete functional determination to milder forms of correlation. At the opposite extreme lies independence: complete freedom of behavior, regardless of what the other object or phenomenon does. A common mathematical model for all this uses a family of variables $X$ for the relevant items, each taking values in its own range, where correlations arise when not all a priori possible value combinations can occur. The laws of physics provide many examples: Newton’s law of gravity constrains the values that can arise for the masses of two objects, their distance, and their accelerations toward each other. Dependence is at the heart of the physical universe and our theories about it, but just as much of our social world and the many dependencies in human behavior. Logicians have long been interested in modeling this pervasive notion and studying its basic laws [Väänänen, 2007], [Baltag and van Benthem, 2021]. In this line, the present paper offers a new dependence logic in a richer topological setting that applies to empirical inquiry.

Dependence, knowledge and information flow

In a complementary sense to its physical manifestation, dependence can also be seen as an information-carrying relation. If $y$ depends functionally on $X$, then, given the information which values the variables in $X$ have right now, we automatically know the value of $y$. And weaker forms of correlation license weaker forms of knowledge transfer. This dual perspective was proposed in Situation Theory, where realism about constraints between situations in the world [Barwise and Perry, 1983] led to logical theories of information flow for human agents [Barwise and Seligman, 1995]. But dependence also underlies (Dynamic) Epistemic Logic, the basic theory of semantic information [Baltag and Renne, 2016], [van Ditmarsch et al., 2015], [van Benthem, 2011]. If some players get a card each from a set whose composition is common knowledge, then the card dealt to the first player constrains the card combinations available for the other players. Or epistemically, if we know the card of the first player, we also know more about which cards are held by which other player. In this paper, we will mainly pursue this informational-epistemic perspective on dependence.

Variables as wh-questions

As used in empirical sciences, ‘variables’ differ in an essential way from the First-Order Logic notion with the same name: while in FOL variables are simple placeholders, with no meaning of their own, in empirical science they are specific “what” questions (e.g. what is the position of a particle, or its velocity, momentum or acceleration, etc.?). The values of a variable are the possible exact answers to the question (e.g. the exact position, etc.). A variable $y$ will take different values in different possible states of the world, so in this sense variables are functions from a state space $S$ to a range of values $\mathbb{D}_y$. On the other hand, the corresponding propositional question can be understood as a partition of the state space: each cell of the partition represents (the set of worlds satisfying) a specific answer to the question. We obtain a correspondence between variables and (propositional) questions: any variable $y:S\to\mathbb{D}_y$ induces a partition on $S$, whose cells are given by the sets $y^{-1}(s):=\{w\in S : y(w)=y(s)\}$, for all $s\in S$.

Functional dependence and propositional determination

Unlike in FOL, in empirical sciences the values of different variables are not necessarily independent: answering some questions may already be enough to answer other questions. In this sense, dependence is all about logical relations between questions. The standard logical setting for this is provided by the notion of functional dependence between variables/questions, expressed as a global statement $=\!(X;y)$ in [Väänänen, 2007], and denoted in this paper by $D(X;y)$: essentially, this captures the existence of a function $F$ that maps the values of the variables in $X$ at all states into the value of $y$ at the same states. There is also a local, state-dependent version $D_X y$, introduced in [Baltag and van Benthem, 2021], which can be thought of as semantic determination: fixing the $X$-values to the current ones (realized at the current state $s$) also fixes the value of $y$. In the epistemic reading, a dependence statement $D_X y$ can be seen as a conditional knowledge claim: an observer can know the current value of $y$ if given the current values of the variables in $X$. As a special case, when $y$ is a Boolean variable associated to some proposition $\varphi$, we obtain a natural ‘determination modality’ $D_X\varphi$, which says that fixing the $X$-values to the current ones (at $s$) ‘determines’ the truth of $\varphi$ (i.e. fixes its value to ‘true’).
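
To make the contrast between global and local exact dependence concrete, here is a minimal computational sketch on a finite (discrete) model; the state space and variable names are our own illustrative assumptions, not taken from the paper.

```python
# A minimal sketch: checking global dependence D(X;y) and local dependence D_X y
# on a finite state space, where states are dicts assigning values to variables.

STATES = [
    {"x": 0, "y": 0, "z": 1},
    {"x": 0, "y": 0, "z": 2},
    {"x": 1, "y": 2, "z": 2},
    {"x": 1, "y": 2, "z": 3},
]

def global_dependence(states, X, y):
    """D(X;y): any two states agreeing on all variables in X also agree on y."""
    seen = {}
    for s in states:
        key = tuple(s[v] for v in X)
        if key in seen and seen[key] != s[y]:
            return False
        seen[key] = s[y]
    return True

def local_dependence(states, X, y, s):
    """D_X y at state s: all states with the same X-values as s have the same y-value."""
    key = tuple(s[v] for v in X)
    return all(t[y] == s[y] for t in states if tuple(t[v] for v in X) == key)

print(global_dependence(STATES, ["x"], "y"))            # True: x determines y everywhere
print(global_dependence(STATES, ["x"], "z"))            # False: x does not determine z
print(local_dependence(STATES, ["z"], "y", STATES[0]))  # True at this state: z=1 only occurs with y=0
```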

The complete logic of these notions was investigated in [Baltag and van Benthem, 2021]. Our concern in this paper is to lift an important idealization in the preceding framework, a step that is needed to make dependence logics work in a setting of empirical investigation.

From idealized sharp values to empirical measurement

Standard scenarios in epistemic logic are discrete, and it then makes sense to work with sharp values of variables, such as truth values for proposition letters. However, in the empirical sciences, and in daily life, values are usually determined by imprecise observations or measurements. Even if not in error, these deliver only intervals or other sets of values containing the real value. Of course, one can perform more measurements and combine outcomes, or turn to a more precise measuring device, but sharp individual values for variables will still not be delivered. Thus, in empirical inquiry, the view of dependence as conditional knowledge of sharp values becomes unrealistic.

Here we arrive at the main question of this paper. How should we model dependence and its logic in an empirical setting where only measurements with limited precision are available? We need a richer style of modeling for outcomes of observations or measurements, and we need a way of describing how empirical knowledge can arise through a process of approximation.

Topological models for empirical inquiry and knowledge

As it happens, a suitable richer style of modeling exists. Topological models have been around since the 1930s for modeling intuitionistic and modal logics [van Benthem and Bezhanishvili, 2007], where the surplus of topological spaces over plain sets elucidates epistemic notions such as knowledge or verifiability. In our present setting, open sets in a topological space represent possible outcomes of experiments or measurements, while propositions of interest are modeled by arbitrary subsets. In this way, topological notions acquire epistemic meaning. For instance, a property $P$ denoting an open set is ‘verifiable’ in that, for any point $s$ satisfying $P$, there exists an experiment that can tell us that the point has that property. Dually, properties denoting closed sets are falsifiable. In this setting, even technical separation properties for topological spaces make epistemic sense: e.g., the $T_0$-property says that any point can be identified uniquely by experiments, since any other point can be separated by some measurement. The area of Epistemic Topology exploits all these features, witness important studies like [Kelly, 1996], [Dabrowski et al., 1996], [Baltag et al., 2019]. Our analysis of dependence follows up on this, by looking at further structure in dependence models with topologies over the value ranges for the variables.

Empirical variables as topological maps

We are thus led to a natural modeling of empirical variables $y$ as functions from a state space $S$ onto a topological space $(\mathbb{D}_y,\tau_y)$: in this way, in addition to the exact value $y(s)$ of variable $y$ at state $s$ (an idealization which typically is not accessible by measurements), we also have observable approximations of this value, given by open neighborhoods $U\in\tau_y$ with $y(s)\in U$. Finally, topological spaces allow us to treat sets of variables $X$ as single new entities taking tuples as values, by using the product topology on the Cartesian product $\Pi_{x\in X}\mathbb{D}_x$ of the value spaces for the separate $x\in X$. This uniform treatment of single variables and finite sets of these greatly simplifies epistemic analysis.

Propositionalization: empirical questions as topologies

If we move from empirical variables to the corresponding propositional questions, this requires refining the partitional view of questions into a topology on the state space $S$; any open neighborhood of the current state $s$ in the question’s topology will represent a partial, “inexact” answer to the given question: these are the “feasible” answers, which one can determine by observations (measurements). (The underlying partition of $S$ into cells, representing the ‘exact’ answers, can still be recovered from this topological representation: the cell of state $s$ consists of all states that are “topologically indistinguishable” from $s$, i.e. that belong to exactly the same open sets as $s$.) The above-mentioned correspondence between variables and questions-as-partitions extends to their topological versions: the topology $\tau_y$ on the range of values $\mathbb{D}_y$ of variable $y$ naturally induces a topology on the state domain $S$, having open sets of the form $y^{-1}[U]$, for any open $U\in\tau_y$.

Continuity as information-carrying dependence

We have presented dependence $D_X y$ as delivering conditional knowledge of $y$ given $X$. But note that this required being given the exact value(s) of $X$. In an empirical setting such precision is seldom achievable: all we can observe are measurable approximations. Thus, a useful and truly ‘epistemic’ dependence should allow us to approximate the value of $y$ arbitrarily closely given suitable approximations for the values of $X$. In its global form, this requires that the dependence function $F$ mapping $X$-values to $y$-values be globally continuous with respect to the underlying topologies. This is the definition of our known epistemic dependence $K(X;y)$: the observer knows that she can get to know the value of $y$ as accurately as needed if given accurate enough approximations of $X$. What makes this dependence “epistemic” is the fact that it requires only observable approximations, and what makes it “known” is its globality. As in the case of exact dependence, there is also a local version $K_X y$ of epistemic dependence, which only requires that the dependence function $F$ is continuous on some open neighborhood $U$ of the exact value(s) $X(s)$ (of $X$ at the current state $s$). We read $K_X y$ as knowable epistemic dependence: the observer can come to know (after doing an accurate enough measurement of $X$) that she can get to know the value of $y$ as accurately as needed if given accurate enough approximations of $X$. (There exists an even “more local” version $k_X y$ of epistemic dependence, requiring only point-continuity at $X(s)$. But this is in a sense “too local”, too sensitive to the actual value of $X$, at the risk of being “unknowable”: $k_X y$ may hold without ever being known to the observer, no matter how accurate her observations. In epistemological terms, point-continuity is not an ‘introspective’ fact.)

Special case: conditional knowability of a proposition

As in the case of exact dependence, we can apply epistemic dependence $K_X y$ to the case when $y$ is a Boolean variable associated to a proposition $\varphi$, obtaining a natural notion of knowability of a proposition given (accurate enough approximations of) $X$: $K_X\varphi$ holds at $s$ if there is some $X$-open neighborhood of $s$ (in the topology induced by $X$ on the state space) all of whose members satisfy $\varphi$. This means that one can come to know that $\varphi$ was the case by measuring $X$ with enough accuracy. The modality $K_X\varphi$ connects back to the standard topological semantics for modal logic, more precisely to the so-called interior semantics: note that $s$ satisfies $K_X\varphi$ iff it is in the interior of the set of states satisfying $\varphi$ (where the interior is taken wrt the topology induced by $X$ on the state space). As such, this notion fits well with the recent epistemic interpretations of the interior operator in terms of “knowability”.

Further notions: full independence versus topological independence

In mathematical practice, independence of variables often makes for greater simplicity than dependence. The probability of two independent events is just their product: no analysis of correlations is required. Independence, too, has a natural interpretation in the above models. Two sets of variables $X$ and $Y$ are fully independent if, given the states that occur in our model, knowing the exact values of the variables in $X$ tells us nothing new about the values of $Y$ (i.e. it still leaves them free to take on any combination of values that the model allows them anywhere). Independence is zero-correlation of $Y$-values with $X$-values, functional dependence is total correlation. (There are also intermediate forms of correlation, as in the above epistemic models, but we will not study these as such in this paper. However, various forms of value correlation can be expressed in the logical languages for dependence models to be introduced later.) But for empirical variables, this notion of independence is epistemically irrelevant (since typically we cannot know the exact values of $X$). The richer topological setting offers a more subtle concept: two sets of variables $X$ and $Y$ are topologically independent if, no matter how accurate the measurements of $X$ we are given, we can never learn anything new about $Y$. Interestingly enough, topological independence is compatible with global functional dependence! So we can have a total correlation $D_X Y$ (which in principle would give us complete knowledge of the exact values of $Y$ if we only were given the exact values of $X$), while at the same time having topological independence (so that no approximate observation of $X$ can tell us anything about $Y$). In our paper, we characterize such situations mathematically in terms of the dependence function being “everywhere surjective” [Bernardi and Rainaldi, 2018], thus significantly expanding the current interfaces of Epistemic Topology with mathematics. More philosophically, the extended framework obtained in this way allows us to reason about in-principle learnability and unlearnability on a par.

Special topologies

While our discussion may have suggested that we abandon standard (relational) epistemic logic, the topological setting in fact contains standard epistemic models as the special case of discrete topologies on the range of values (where all subsets of the space are open). In this case, the topologies induced on the state space are partitional, and we can thus interpret the corresponding propositional questions as “agents” (modeled using the accessibility relations associated to these partitions). $D_X Y$ and $K_X Y$ collapse to the same notion in this case (capturing ‘epistemic superiority’ of group $X$ over $Y$ [Baltag and Smets, 2020]), and similarly $D_X\varphi$ and $K_X\varphi$ collapse to the standard $\mathsf{S5}$ notion of distributed knowledge. Another special class of topologies, to be used extensively in our later technical proofs, are Alexandrov spaces, where each point has a smallest open neighborhood. These spaces match the usual models for modal $\mathsf{S4}$ and support a reduction to relational models where the accessibility relation is the standard topological specialization order $s\leq t$. In what follows, such relational models will allow us to use standard modal techniques.

‘Knowing how’: from plain continuity to uniform continuity

Even a known epistemic dependence $K(X;y)$ leaves something to be desired. The inverse map associated with a continuous function $F$, from open neighborhoods of values $F(s)$ to open neighborhoods of $s$, is state dependent. So, the agent knows that she can approximate $y$ as accurately as desired given an accurate enough approximation of $X$, but she may not know in advance how accurate the measurement of $X$ needs to be (in order to obtain the desired accuracy for $y$). To have such an explicit ‘epistemic know-how’ in advance (as to which approximation works), she needs the dependence function $F$ to be uniformly continuous. Formalizing this stronger notion of conditional knowability how (to find $y$ from $X$) requires an even richer setting. We chose to use for this the well-known framework of metric spaces [Willard, 1970], mostly for reasons of accessibility to a general philosophical audience; but uniform continuity can also be defined in a more qualitative way in the much more general framework of uniform spaces [Isbell, 1964], and in the even more general setting of quasi-uniform spaces, and our notions and results do carry through to these qualitative generalizations.


The content of this paper can be viewed in two ways. Technically, it presents a natural next step for current Logics of Dependence, moving from set-based models to topological ones, and leading to several new completeness, representation, and decidability theorems proved by extending modal techniques to this setting. More philosophically, however, what we offer in analyzing and defining notions of information-based dependence is an invitation to Epistemic Topology, an area of formal philosophy where many new concepts become available once we go beyond its already established interfaces with mathematics and computer science. In line with this, we see both our definitions and our theorems as main contributions of this paper.

Plan of this paper

Section 2 introduces the basic notions of empirical wh-questions, in two equivalent versions: empirical variables as topological maps, and propositional questions as topologies. Section 3 introduces and investigates various notions of dependence and the corresponding modalities: exact functional dependence (in both its global and local versions) and the determination modality, knowledge and knowability of a proposition given a set of variables, and most importantly the key notion of continuous (epistemic) dependence (in both its global and local versions). Section 4 contains the formal syntax and semantics of our decidable logic of continuous dependence $\mathsf{LCD}$, a sound and complete proof system to reason about epistemic dependence, as well as some concrete examples. Section 5 discusses the notions of full independence and topological independence, and their interplay with functional dependence in a topological setting. Section 6 extends the setting to uniformly continuous dependence on metric models, and presents the syntax and semantics of the decidable logic of uniform dependence $\mathsf{LUD}$ (which in fact also comprises all the other forms of dependence and modalities encountered in this paper), as well as a sound and complete axiomatization of this logic. Section 7 lists some further directions of immediate epistemic relevance, including richer dynamic logics of approximation, a notion of computable dependence understood as Scott continuity, the use of ‘point-free topologies’, etc. A summary of results and general conclusions is found in Section 8. All the proofs of completeness and decidability are found in the Appendix.

2 Basic framework: empirical questions

In this section, we present a topological setting for imprecise observations. Our first task is the introduction of empirical questions, for which only imprecise partial answers may be observable. These will come in two variants: ‘wh-questions’ (whose answers are non-propositional objects or ‘values’: numbers, names or some other form of data), and propositional questions (whose answers are propositions). The first type (wh-questions) will be modeled as ‘empirical variables’, i.e. quantities that take values in some appropriate value space. Formally, empirical variables are maps $X:S\to(\mathbb{D},\tau)$ from the state space $S$ to some topological space $(\mathbb{D},\tau)$ (where the open neighborhoods of the actual value of the variable in a state represent the measurable approximations of that exact value). The second type (propositional questions) will be represented as topologies $\tau\subseteq\mathcal{P}(S)$ on a state space $S$, with the (inexact) propositional answers being given by the open neighborhoods of the actual state. This generalizes the traditional ‘partitional’ view of propositional questions (in which the possible exact answers form a partition of the state space) to the case of empirical questions, for which typically only inexact, approximative answers are epistemically available: such answers can be more or less precise, and so they are only partial answers, which can thus overlap, or even refine each other.

As we shall see, the two settings (wh-questions as empirical variables, and propositional questions as topologies) are in some sense equivalent: we can always ‘propositionalize’ any wh-question, and conversely we can associate a quantitative variable to every propositional question. Moreover, these correspondences are structure-preserving, and so the two settings obey the same logic $\mathsf{LCD}$. However, the correspondence is not one-to-one: many different wh-questions correspond to the same propositional one. This reflects the intuitive fact that the first setting (of wh-questions as variables) is richer, giving us more information (in the form of the ‘actual values’ of the variable). For this reason, we will choose this first setting as the preferred formal semantics for our logic $\mathsf{LCD}$ (as a logic of dependence between empirical variables).

2.1 Preliminaries on topology

While in this paper we assume familiarity with basic notions of General Topology, we briefly review some of them here, mainly in order to fix the notation.

Topologies, neighborhoods and local bases   Recall that a topological space $(\mathbb{D},\tau)$ consists of a set $\mathbb{D}$ (the domain, consisting of ‘points’ or ‘values’ $d\in\mathbb{D}$), together with a family $\tau\subseteq\mathcal{P}(\mathbb{D})$ of ‘open’ sets, assumed to include $\emptyset$ and $\mathbb{D}$ itself, and to be closed under finite intersections and arbitrary unions. The complements of open sets are called ‘closed’. For a point $d\in\mathbb{D}$, we put

$\tau(d) \,:=\, \{U\in\tau : d\in U\}$

for the family of open neighborhoods of $d$. A family of opens $\mathcal{B}\subseteq\tau$ is a basis (or ‘base’) for the topology $\tau$ if every open set $U\in\tau$ is a union of open sets from $\mathcal{B}$. In this case, we also say that $\tau$ is the topology generated by the basis $\mathcal{B}$.

Interior and closure   The well-known topological interior and closure operators are defined as usual, as maps $Int, Cl:\mathcal{P}(\mathbb{D})\to\mathcal{P}(\mathbb{D})$, given by putting, for all sets of values $D\subseteq\mathbb{D}$:

$Int(D)=\bigcup\{O\subseteq D : O\in\tau\}, \qquad Cl(D)=\bigcap\{F\supseteq D : (\mathbb{D}\setminus F)\in\tau\}$

In words, $Int(D)$ is the largest open subset of $D$, while $Cl(D)$ is the smallest closed superset of $D$.
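
As a small illustration (our own, not from the paper), these operators can be computed directly from the definitions on a finite topology given by listing its open sets:

```python
# Interior and closure on a finite topology, computed from the definitions above.
DOMAIN = frozenset({1, 2, 3})
# opens: must contain {} and DOMAIN and be closed under unions and intersections
OPENS = [frozenset(), frozenset({1}), frozenset({1, 2}), DOMAIN]

def interior(A):
    """Largest open subset of A: the union of all opens contained in A."""
    return frozenset().union(*[O for O in OPENS if O <= A])

def closure(A):
    """Smallest closed superset of A: complement of the interior of the complement."""
    return DOMAIN - interior(DOMAIN - A)

A = frozenset({2, 3})
print(sorted(interior(A)))  # [] : no nonempty open set fits inside {2, 3}
print(sorted(closure(A)))   # [2, 3] : {2, 3} is closed, since its complement {1} is open
```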

Specialization preorders, topological indistinguishability and separation axioms   Given a space $(\mathbb{D},\tau)$, we denote by $\leq\ \subseteq\mathbb{D}\times\mathbb{D}$ the specialization preorder for its topology, defined by putting, for all $d,c\in\mathbb{D}$:

$d\leq c \ \mbox{ iff }\ \tau(d)\subseteq\tau(c) \ \mbox{ iff }\ \forall U\in\tau\,(d\in U\Rightarrow c\in U).$

We also denote by $\simeq$ the topological indistinguishability relation, defined as:

$d\simeq c \ \mbox{ iff }\ d\leq c\leq d \ \mbox{ iff }\ \tau(d)=\tau(c) \ \mbox{ iff }\ \forall U\in\tau\,(d\in U\Leftrightarrow c\in U).$

For any point $d\in\mathbb{D}$, we will denote its equivalence class modulo the indistinguishability relation $\simeq$ by

$[d]_{\tau} \,:=\, \{d'\in\mathbb{D} : d\simeq d'\}.$

The space $(\mathbb{D},\tau)$ is $T_0$-separated if topologically indistinguishable points are the same, that is, $\tau(d)=\tau(d')$ implies $d=d'$ (for all $d,d'\in\mathbb{D}$); equivalently, iff every equivalence class $[d]_{\tau}$ is just a singleton $\{d\}$; and again equivalently: iff the specialization preorder $\leq$ is a partial order (i.e., also anti-symmetric).

The space $(\mathbb{D},\tau)$ is $T_1$-separated if the specialization preorder is the identity relation, i.e. $d\leq d'$ implies $d=d'$.
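
The following sketch (again our own illustration) computes the specialization preorder and the two separation properties on the finite space used above; that space turns out to be $T_0$ but not $T_1$:

```python
# Specialization preorder and T0/T1 checks on a finite topology.
DOMAIN = frozenset({1, 2, 3})
OPENS = [frozenset(), frozenset({1}), frozenset({1, 2}), DOMAIN]

def leq(d, c):
    """Specialization preorder: every open set containing d also contains c."""
    return all(c in O for O in OPENS if d in O)

def is_T0():
    return all(d == c for d in DOMAIN for c in DOMAIN if leq(d, c) and leq(c, d))

def is_T1():
    return all(d == c for d in DOMAIN for c in DOMAIN if leq(d, c))

print(leq(3, 1), leq(1, 3))   # True False : 1 lies in every open containing 3, but not conversely
print(is_T0(), is_T1())       # True False : the space is T0 but not T1
```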

Continuity at a point   A function $F:(\mathbb{D},\tau_{\mathbb{D}})\to(\mathbb{E},\tau_{\mathbb{E}})$ between topological spaces is continuous at a point $x\in\mathbb{D}$ if we have

$\forall U\in\tau_{\mathbb{E}}(F(x))\,\exists V\in\tau_{\mathbb{D}}(x):\ F(V)\subseteq U,$

or equivalently: for every open neighborhood $U\in\tau_{\mathbb{E}}(F(x))$ of $F(x)$, its preimage $F^{-1}(U)$ is a neighborhood of $x$, i.e. it includes some open neighborhood $V\in\tau_{\mathbb{D}}(x)$ of $x$.

Global and local continuity   $F:(\mathbb{D},\tau_{\mathbb{D}})\to(\mathbb{E},\tau_{\mathbb{E}})$ is (globally) continuous if it is continuous at all points $x\in\mathbb{D}$. $F$ is locally continuous at (around) a point $x\in\mathbb{D}$ if it is continuous at all points in some open neighborhood $O\in\tau_{\mathbb{D}}(x)$ of $x$.
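
For maps between finite spaces, global continuity reduces to “the preimage of every open set is open”, which can be tested directly; the following is a hedged sketch of our own, with arbitrarily chosen finite spaces:

```python
# Testing global continuity of a map between two finite topological spaces.
DOM = frozenset({1, 2, 3})
OPENS_DOM = [frozenset(), frozenset({1}), frozenset({1, 2}), DOM]

COD = frozenset({"a", "b"})
OPENS_COD = [frozenset(), frozenset({"a"}), COD]

F = {1: "a", 2: "a", 3: "b"}   # the map under test

def preimage(U):
    return frozenset(d for d in DOM if F[d] in U)

def is_continuous():
    return all(preimage(U) in OPENS_DOM for U in OPENS_COD)

print(is_continuous())  # True: preimages of {}, {"a"}, {"a","b"} are {}, {1,2}, DOM
```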

Subspaces and quotients   Every subset $D\subseteq\mathbb{D}$ of (the domain of) a topological space $(\mathbb{D},\tau)$ determines a subspace $(D,\tau_D)$, obtained by endowing $D$ with its subspace topology

$\tau_D \,:=\, \{U\cap D : U\in\tau\}.$

Every equivalence relation $\sim\ \subseteq\mathbb{D}\times\mathbb{D}$ on (the domain of) a topological space $(\mathbb{D},\tau)$ determines a quotient space $(\mathbb{D}/{\sim},\tau/{\sim})$, having as its set of values/points the set

$\mathbb{D}/{\sim} \,:=\, \{[d]_{\sim} : d\in\mathbb{D}\}$

of equivalence classes $[d]_{\sim}:=\{d'\in\mathbb{D} : d\sim d'\}$ modulo $\sim$, endowed with the quotient topology $\tau/{\sim}$, given by putting for any set $\mathcal{A}\subseteq\mathbb{D}/{\sim}$ of equivalence classes:

$\mathcal{A}\in\tau/{\sim} \ \mbox{ iff }\ \bigcup\mathcal{A}=\{d\in\mathbb{D} : [d]_{\sim}\in\mathcal{A}\}\in\tau.$

The quotient topology is the largest (“finest”) topology on $\mathbb{D}/{\sim}$ that makes the canonical quotient map $\bullet/{\sim}$ continuous, where this map sends every point $d\in\mathbb{D}$ to its equivalence class $[d]_{\sim}$.

Compactness and local compactness   A topological space $(\mathbb{D},\tau)$ is compact if every open cover of $\mathbb{D}$ has a finite subcover; i.e., for every collection $\mathcal{C}\subseteq\tau$ of open subsets s.t. $\bigcup\mathcal{C}=\mathbb{D}$, there exists a finite subcollection $\mathcal{F}\subseteq\mathcal{C}$ s.t. $\bigcup\mathcal{F}=\mathbb{D}$. A subset $K\subseteq\mathbb{D}$ of a topological space $(\mathbb{D},\tau)$ is compact if the subspace $(K,\tau_K)$ determined by it is compact. A space $(\mathbb{D},\tau)$ is locally compact if every point has a compact neighborhood; i.e. for every $d\in\mathbb{D}$ there exists some open set $U\in\tau$ and some compact set $K\subseteq\mathbb{D}$ s.t. $d\in U\subseteq K$.

Metric spaces, pseudo-metric spaces and ultra-(pseudo-)metric spaces   A pseudo-metric space $(\mathbb{D},d)$ consists of a set $\mathbb{D}$ of points, together with a ‘pseudo-metric’ (or ‘pseudo-distance’) $d:\mathbb{D}\times\mathbb{D}\to[0,\infty)$, satisfying three conditions: $d(x,x)=0$; $d(x,y)=d(y,x)$; $d(x,z)\leq d(x,y)+d(y,z)$. A metric space is a pseudo-metric space $(\mathbb{D},d)$ satisfying an additional condition, namely: $d(x,y)=0$ implies $x=y$. In this case $d$ is called a metric (or ‘distance’). Every pseudo-metric space is a topological space, with a basis given by the open balls

$\mathcal{B}(x,\varepsilon):=\{y\in\mathbb{D} : d(x,y)<\varepsilon\}, \ \mbox{ with } x\in\mathbb{D} \mbox{ and } \varepsilon\in(0,\infty).$

Note that the interior $Int(A)$ of any set $A\subseteq\mathbb{D}$ is the union of all the open balls included in $A$:

$Int(A)=\bigcup\{\mathcal{B}(x,\varepsilon)\subseteq A : \varepsilon>0\}=\{x\in\mathbb{D} : \mathcal{B}(x,\varepsilon)\subseteq A\mbox{ for some }\varepsilon>0\};$

while dually the closure $Cl(A)$ consists of the points all of whose open balls meet $A$:

$Cl(A)=\{x\in\mathbb{D} : \mathcal{B}(x,\varepsilon)\cap A\neq\emptyset\ \mbox{ for all }\varepsilon>0\}.$

Metric spaces are $T_1$-separated, hence also $T_0$-separated. (But pseudo-metric spaces are not necessarily so: in fact, a pseudo-metric space is $T_0$-separated iff it is a metric space.)

A familiar concrete example of metric spaces are the Euclidean spaces $\mathbb{R}^n$ of integer dimension $n>0$, consisting of $n$-tuples $(x_1,\ldots,x_n)$ of real numbers $x_1,\ldots,x_n\in\mathbb{R}$, endowed with the Euclidean distance $d(x,y):=\sqrt{\sum_i(x_i-y_i)^2}$. Euclidean spaces have the important property that they are locally compact (but not compact).

Another example are the ultra-(pseudo-)metric spaces: a (pseudo-)metric $d:\mathbb{D}\times\mathbb{D}\to[0,\infty)$ is said to be an ultra-(pseudo-)metric if it satisfies a stronger version of the above triangle inequality, namely $d(x,z)\leq\max\{d(x,y),d(y,z)\}$. (A concrete example of ultra-metric spaces are the so-called $p$-adic fields, for any prime number $p$: in a certain mathematical sense, these are ultra-metric analogues of the field of real numbers $\mathbb{R}$.)

Uniform continuity   A function $F:(\mathbb{D},d_{\mathbb{D}})\to(\mathbb{E},d_{\mathbb{E}})$ between two pseudo-metric spaces is uniformly continuous if we have

$\forall\varepsilon>0\,\exists\delta>0\,\forall x\in\mathbb{D}\,\forall y\in\mathbb{D}\,\left(\,d_{\mathbb{D}}(x,y)<\delta\Rightarrow d_{\mathbb{E}}(F(x),F(y))<\varepsilon\,\right).$

In contrast, global continuity corresponds in a metric space to swapping the quantifiers $\exists\delta$ and $\forall x$ in the above statement, yielding the weaker form

$\forall\varepsilon>0\,\forall x\in\mathbb{D}\,\exists\delta>0\,\forall y\in\mathbb{D}\,\left(\,d_{\mathbb{D}}(x,y)<\delta\Rightarrow d_{\mathbb{E}}(F(x),F(y))<\varepsilon\,\right).$

A function $F:(\mathbb{D},d_{\mathbb{D}})\to(\mathbb{E},d_{\mathbb{E}})$ is locally uniformly continuous at (around) a point $x\in\mathbb{D}$ if its restriction $F|O$ to (the subspace determined by) some open neighborhood $O=\mathcal{B}(x,\delta)$ is uniformly continuous.
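
The difference made by the order of the quantifiers can be seen numerically; the following is our own illustrative sketch (not from the paper), using the standard example $f(x)=1/x$ on $(0,1)$, which is continuous at every point but not uniformly continuous:

```python
# f(x) = 1/x on (0,1): the delta that works for a given epsilon shrinks to 0
# as x approaches 0, so no single delta works for all x at once.
def largest_delta(x, eps):
    """Largest delta such that |x - y| < delta (with y in (0,1)) implies
    |1/x - 1/y| < eps; a short computation gives delta = eps*x^2 / (1 + eps*x)."""
    return eps * x**2 / (1 + eps * x)

eps = 0.1
for x in [0.5, 0.1, 0.01, 0.001]:
    print(f"x = {x:>6}: delta = {largest_delta(x, eps):.2e}")
# The printed deltas decrease towards 0: the existential quantifier over delta
# cannot be pulled in front of 'for all x', i.e. continuity is not uniform here.
```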

These various forms of continuity may collapse to the same notion in especially favorable situations, of which we give here two examples.

Proposition 2.1.

(Heine-Cantor Theorem) If $F:(\mathbb{D},d_{\mathbb{D}})\to(\mathbb{E},d_{\mathbb{E}})$ is a map from a compact metric space to another metric space, then $F$ is (globally) continuous iff it is (globally) uniformly continuous.

If $F:(\mathbb{D},d_{\mathbb{D}})\to(\mathbb{E},d_{\mathbb{E}})$ is a map from a locally compact metric space to another metric space, then $F$ is locally continuous (on a neighborhood) around a point $x\in\mathbb{D}$ iff it is locally uniformly continuous (on a neighborhood) around $x$.

For our second example, we need a few more definitions.

Lipschitz, locally Lipschitz and pseudo-locally Lipschitz functions   A function $F:(\mathbb{D},d_{\mathbb{D}})\to(\mathbb{E},d_{\mathbb{E}})$ between two pseudo-metric spaces is Lipschitz if there exists a constant $K>0$ (called ‘Lipschitz constant’) s.t. for all $x,y\in\mathbb{D}$ we have

$d_{\mathbb{E}}(F(x),F(y))\,\leq\,K\,d_{\mathbb{D}}(x,y).$

$F$ is locally Lipschitz if for every $x\in\mathbb{D}$ there exists some open neighborhood $O=\mathcal{B}(x,\delta)\in\tau_{\mathbb{D}}(x)$ of $x$, such that the restriction $F|O$ to (the subspace determined by) $O$ is Lipschitz.

Finally, $F$ is pseudo-locally Lipschitz if for every $x\in\mathbb{D}$ there exists some open neighborhood $O\in\tau_{\mathbb{D}}(x)$ of $x$, and some open neighborhood $U\in\tau_{\mathbb{E}}(F(x))$ of $F(x)$, such that the restriction $F|(O\cap F^{-1}(U))$ to (the subspace determined by) the intersection of $O$ with the preimage $F^{-1}(U)$ is Lipschitz. (Unlike the other concepts in this preliminaries section, the notion of a pseudo-locally Lipschitz function seems to be novel.)
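
For concreteness, here are two standard separating examples (our own illustrations, not taken from the paper): the squaring map is locally Lipschitz but not Lipschitz, while the square-root map is uniformly continuous but not Lipschitz.

$|x^2-y^2|=|x+y|\,|x-y|\leq 2R\,|x-y|$ for all $x,y\in[-R,R]$, but no single constant $K$ works on all of $\mathbb{R}$;

$|\sqrt{x}-\sqrt{0}|=\sqrt{x}>K\,|x-0|$ whenever $0<x<1/K^2$, so the map $x\mapsto\sqrt{x}$ is not Lipschitz on $[0,1]$ (although it is uniformly continuous there).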

Proposition 2.2.

A Lipschitz function $F:(\mathbb{D},d_{\mathbb{D}})\to(\mathbb{E},d_{\mathbb{E}})$ between pseudo-metric spaces is uniformly continuous.

For every point $x\in\mathbb{D}$, a locally Lipschitz function $F:(\mathbb{D},d_{\mathbb{D}})\to(\mathbb{E},d_{\mathbb{E}})$ is locally uniformly continuous around $x$.

A pseudo-locally Lipschitz function $F:(\mathbb{D},d_{\mathbb{D}})\to(\mathbb{E},d_{\mathbb{E}})$ is continuous at some point $x\in\mathbb{D}$ iff it is locally uniformly continuous around $x$ (and so also iff it is locally continuous around $x$).

Proof.

The proofs of all the above results are standard, with the exception of the last item. To justify it, assume $F$ is pseudo-locally Lipschitz, and let $O\in\tau_{\mathbb{D}}(x)$ and $U\in\tau_{\mathbb{E}}(F(x))$ be s.t. $F$ is Lipschitz on $O\cap F^{-1}(U)$. Assume now that $F$ is continuous at $x$: then $F^{-1}(U)$ includes some open neighborhood $V\in\tau_{\mathbb{D}}(x)$ of $x$, and thus $O\cap V\in\tau_{\mathbb{D}}(x)$ is an open neighborhood of $x$. Since $F$ is Lipschitz on $O\cap F^{-1}(U)$, it is also Lipschitz on the smaller open neighborhood $O\cap V$, hence uniformly continuous on it (by the first item of our Proposition), and therefore $F$ is locally uniformly continuous around $x$. The converse direction is immediate, since local uniform continuity around $x$ implies continuity at $x$.∎

In conclusion, the notions of local continuity and local uniform continuity are equivalent on locally compact spaces; while all three notions of localized continuity (continuity at a point, local continuity around the point, and local uniform continuity around the point) coincide for pseudo-locally Lipschitz functions! As we’ll show, these facts have interesting epistemological and logical consequences, with a special relevance to our completeness and expressivity results.

2.2 Empirical variables as inexact wh-questions

In this section, we fix a state space $S$. A proposition is just a subset $P\subseteq S$ of the state space $S$. To stress this reading, we sometimes write $s\models P$ for $s\in P$, and read it as “$P$ is true at $s$”.

As traditionally understood, a wh-question is one that starts with “what”, “who”, “which”, “where”, “when” etc. Examples are “What is the speed of light?”, “Where on Earth am I?” (i.e. what is my position on the globe), “When did you first arrive in Amsterdam?”. The exact answer to a wh-question is not necessarily a proposition, but can be some other type of object, e.g. a number, a pair or triplet of coordinates, a date, a name etc. Here, we will think of each such object as the “value” assigned to a variable in a given state of the world. Note that, unlike the variables of predicate logic (which are just abstract, interchangeable placeholders with no intrinsic meaning, that can take arbitrary values independently of each other), ours are “variables” in the sense used in empirical sciences and databases: they denote specific quantities or qualitative data (e.g. time, position, velocity, money, persons, shapes etc.), each with its own range of possible values, that may be subject to restrictions and inter-variable correlations. The generic form of a wh-question can thus be reduced to “What is the value of variable $X$?”, where “the value” refers to the current value of $X$ in the actual world.

When calling our variables “empirical”, what we mean is that their exact value might typically be inaccessible to observation. Instead, one can usually observe only approximations of that value (e.g. a time interval, an area on the surface of the Earth, a range of speeds, a set of possible suspects, etc). So, when thinking of an empirical variable as a wh-question, its feasible answers (the ones one could obtain via some kind of empirical process, e.g. an observation or experiment) are typically “imprecise” or partial answers, in the sense that they may not completely match the exact answer.

We now proceed to formalize this key notion within a topological setting.

Definition 2.3.

(Empirical variables as topological maps) Given a state space $S$, an empirical variable is a surjective map $X:S\to(\mathbb{D}_X,\tau_X)$ from states $s\in S$ to ‘values’ $X(s)\in\mathbb{D}_X$ in a $T_0$-separated topological space $(\mathbb{D}_X,\tau_X)$.

Intuitively, the open neighborhoods $U\in\tau_X(X(s))$ represent observable approximations of the value of $X$ at state $s$, obtainable e.g. as a result of a measurement of the value of $X$. Both the surjectivity condition, saying that $\mathbb{D}_X=X(S)$, and the requirement of $T_0$-separation are innocuous: we can always enforce them by taking first a subspace (restricting to $X[S]$ both the codomain $\mathbb{D}_X$ and its topology), and then a quotient (identifying all indistinguishable points in $\mathbb{D}_X$). The point of these requirements is that they simplify several definitions and statements of results below.

Example: Euclidean variables   A well-known example is that of Euclidean variables $X:S\to\mathbb{R}^n$, for some $n\in\mathbb{N}$: here, $\mathbb{D}_X=\mathbb{R}^n$ is the $n$-dimensional Euclidean space (consisting of all real-number vectors $x=(x_1,\ldots,x_n)$), and $\tau_X$ is the standard Euclidean topology, generated by the family of all $n$-dimensional open balls of the form

$\mathcal{B}(x,r)=\{y\in\mathbb{R}^n : d(x,y)<r\}$

where $x\in\mathbb{R}^n$, $r>0$, and $d(x,y)=\sqrt{\sum_i(x_i-y_i)^2}$ is the standard Euclidean distance. Any open ball $\mathcal{B}(x,r)$, with $X(s)\in\mathcal{B}(x,r)$, can be considered as a measurement of the value of $X(s)$ with margin of error $r$.

Example: a particle in space   For a more concrete example, consider a point-like particle in three-dimensional space. The state space is $S=\mathbb{R}^3$. Consider the question “What is the $x$-coordinate of the particle?”. As a wh-question, this is an empirical variable $X:\mathbb{R}^3\to(\mathbb{R},\tau)$, with $X(a,b,c):=a$, where the natural topology $\tau$ is generated by the family of rational open intervals $(a,b)$, with $a,b\in\mathbb{Q}$ and $a<b$. (The restriction to intervals with rational endpoints corresponds to the fact that actual measurements always produce rational estimates.) Suppose the particle is actually at the point $(1,0,3)$: so this triplet $s=(1,0,3)$ is the actual state. Then the exact answer to the wh-question is the actual exact value $1$ of the variable $X$ at state $s$, while the feasible approximations at $s$ are rational intervals $(a,b)$ with $a<1<b$. The wh-questions regarding the other two coordinates $y$ and $z$ are represented similarly, by empirical variables $Y,Z:\mathbb{R}^3\to(\mathbb{R},\tau)$ with the same topology $\tau$ as before.
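
A small sketch of this example (our own illustration): the variable $X$ extracts the $x$-coordinate, and a rational interval $(a,b)$ containing $X(s)$ is a feasible, inexact answer, i.e. a measurement with limited precision.

```python
# The particle example: exact value vs. a feasible (interval) answer.
from fractions import Fraction

def X(state):
    a, b, c = state
    return a  # the x-coordinate of the particle

s = (1.0, 0.0, 3.0)                            # the actual state
approx = (Fraction(9, 10), Fraction(11, 10))   # a rational interval around X(s)

def is_feasible_answer(interval, state):
    lo, hi = interval
    return lo < X(state) < hi

print(X(s))                           # 1.0 : the exact (idealized) answer
print(is_feasible_answer(approx, s))  # True : (9/10, 11/10) is a true observable answer
```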

All topological notions introduced above relativize to the range space $(\mathbb{D}_X,\tau_X)$ of an empirical variable $X$, simply by labeling them with the subscript $X$: so e.g. $\leq_X\ \subseteq\mathbb{D}_X\times\mathbb{D}_X$ is the specialization preorder for the topology $\tau_X$. Note that we don’t need a special symbol for the corresponding indistinguishability relation on $\mathbb{D}_X$: since the space is assumed to be $T_0$, indistinguishability on $\mathbb{D}_X$ simply boils down to equality $=$.

Special case: exact questions   An exact wh-question on a state space $S$ is just a variable $X:S\to(\mathbb{D}_X,\tau_X)$ mapping states into values living in a discrete space, i.e. s.t. $\tau_X=\mathcal{P}(\mathbb{D}_X)$. Intuitively, this means that at every state $s\in S$ one can observe the exact value $\{X(s)\}$ of the variable $X$. The topology is then irrelevant: the complete answer to the question can be determined, so no partial approximations are really needed.

2.3 Abstraction: propositional questions

A special case of empirical questions are the ones whose answers are propositions: we call these “propositional questions”. Traditionally, a propositional question is taken to be a partition of the state space $S$: the partition cells are the possible (complete) answers to the question, while the unique cell that contains a given state $s$ is the true answer at state $s$. But, once again, in empirical sciences the complete answer may be impossible to determine by observations; typically, only partial answers are empirically given. Hence, the partition into complete answers must be replaced by a family of propositions, called ‘feasible’ or ‘observable’ answers.

Definition 2.4.

(Empirical questions of the propositional kind) A (propositional) empirical question is just a topology $\tau\subseteq\mathcal{P}(S)$ on the set of states $S$. If the actual state is $s$, then the open neighborhoods $O\in\tau$ with $s\in O$ represent the feasible or ‘observable’ answers. So $\tau(s)$ is the family of all partial answers to the question that are (true and) observable at state $s$.

The closure properties of $\tau(s)$ make obvious sense in this interpretation. Closure under binary intersections reflects the fact that the conjunction of two (true) feasible answers is itself (true and) feasible: indeed, $P\wedge Q$ is empirically established iff both $P$ and $Q$ are. Closure under arbitrary unions means that any arbitrary disjunction of (true) feasible answers is itself a (true and) feasible answer to the question: indeed, $\bigvee_i P_i$ is empirically established iff at least one of the $P_i$’s is. (Note that the corresponding closure condition for arbitrary conjunctions does not apply to feasible answers: while to establish an infinite disjunction it is sufficient, and necessary, to establish only one of the disjuncts, establishing an infinite conjunction $\bigwedge_i P_i$ would require first establishing all of the conjuncts. This would require waiting an infinite amount of time, hence it is unfeasible in an empirical context.) Finally, the tautological ‘answer’ (corresponding to the trivially true proposition given by the whole state space $S$) is surely feasible, albeit uninformative.

Complete answers are not necessarily feasible. The complete ‘answer’ to the question $\tau$ at state $s$ is the equivalence class $[s]_{\tau}:=\{w\in S : s\simeq w\}$ of $s$ wrt the topological indistinguishability relation $\simeq$ for $\tau$. The complete answer is the propositional analogue of the ‘exact’ value of an empirical variable. The fact that the complete answer might not be feasible is reflected in the fact that this equivalence class is not necessarily an open set. The indistinguishability relation $\simeq$ induces a partition on $S$, whose cells are the equivalence classes modulo $\simeq$, corresponding to all the possible complete answers to the question.

Example continued: the particle in space   Recall our example of a point-like particle situated at the point $(1,0,3)\in S=\mathbb{R}^3$. Instead of answering the wh-question “what is the $x$-coordinate?” by simply specifying the exact value $X=1$ of the corresponding empirical variable, we can interpret the question as a propositional one, whose complete answer is $\{(1,y,z) : y,z\in\mathbb{R}\}$, which in English corresponds to the proposition “The $x$-coordinate of the particle is $1$”. Similarly, if only an approximation $(a,b)$ of $X$ (with $a,b\in\mathbb{Q}$ and $a<1<b$) is feasible or measurable, then the same information can be packaged in a partial answer $\{(x,y,z)\in\mathbb{R}^3 : a<x<b\}$ to the corresponding propositional question: in English, this is the proposition “The $x$-coordinate is between $a$ and $b$”.

As seen in this example, every wh-question can be “propositionalized”, i.e. converted into a propositional question, by abstracting away the actual values. We proceed now to formalize this process of abstraction from values.

From variables to propositional questions: the (weak) $X$-topology on $S$.   To any empirical variable $X:S\to(\mathbb{D}_X,\tau_X)$, we can associate a propositional empirical question $\tau_X^S$, called the $X$-topology on $S$: this is the so-called ‘weak topology’ induced by $X$ on $S$, defined as the coarsest topology on $S$ that makes $X$ continuous. More explicitly, $\tau_X^S$ is given by

$\tau_X^S \,:=\, \{X^{-1}(U) : U\in\tau_X\}.$

We can interpret the $X$-topology $\tau_X^S$ as the result of ‘propositionalizing’ the wh-question $X$. An $X$-neighbourhood of a state $s\in S$ is just a neighbourhood of $s$ in the $X$-topology on $S$, i.e. a set $N$ with $s\in X^{-1}(U)\subseteq N$ for some $U\in\tau_X$. As usual, for a state $s\in S$, we denote by $\tau_X^S(s)$ the family of open $X$-neighborhoods of $s$. Similarly, the $X$-interior $Int_X(P)$ of any set $P\subseteq S$ is the interior of $P$ in the $X$-topology, and the same for the $X$-closure $Cl_X(P)$. Finally, note that the $X$-topology $\tau_X^S$ is not necessarily $T_0$-separated (unlike the topology $\tau_X$ on the value range of $X$, which was assumed to be $T_0$): indeed, the value of a variable $X$ can well be the same in two different states. (However, it would be natural to further assume that any two states that agree on the values of all the relevant variables are the same state: that would allow us to identify our ‘states’ with tuples of values, or rather assignments of values to all the relevant variables, obtaining a ‘concrete’ representation of the state space, in the style typically used in Physics and other empirical sciences. This additional assumption will be embodied in the so-called “concrete models”, defined in the next section as a special case of our more general topo-models.)
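
As an illustration of this ‘propositionalization’ step (a finite sketch of our own, not from the paper), the weak $X$-topology on $S$ is obtained simply by pulling back the opens of the value space along $X$:

```python
# Computing the weak X-topology on S by taking preimages of the opens on the value range.
STATES = ["s1", "s2", "s3", "s4"]
X = {"s1": "low", "s2": "low", "s3": "mid", "s4": "high"}

VALUES = frozenset(X.values())
# opens on the value range (an arbitrary T0 topology chosen for illustration)
VALUE_OPENS = [frozenset(), frozenset({"low"}), frozenset({"low", "mid"}), VALUES]

def weak_topology():
    """The X-topology on S: { X^{-1}(U) : U open in the value space }."""
    return [frozenset(s for s in STATES if X[s] in U) for U in VALUE_OPENS]

for O in weak_topology():
    print(sorted(O))
# [] / ['s1','s2'] / ['s1','s2','s3'] / ['s1','s2','s3','s4'] :
# each open is an observable (partial) answer to the propositionalized question.
```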

$X$-relations on states   The relations $\leq_X$ and $=$ on the range space $\mathbb{D}_X$ naturally induce corresponding relations on states in $S$, defined by putting for all $s,w\in S$:

$s\leq_X^S w \ \mbox{ iff }\ X(s)\leq_X X(w) \ \mbox{ iff }\ \forall U\in\tau_X\,(X(s)\in U\Rightarrow X(w)\in U),$
$s=_X^S w \ \mbox{ iff }\ X(s)=X(w) \ \mbox{ iff }\ \forall U\in\tau_X\,(X(s)\in U\Leftrightarrow X(w)\in U).$

The first relation $\leq_X^S$ coincides with the specialization preorder for the weak ($X$-)topology $\tau_X^S$ on states; while the second relation $=_X^S$ corresponds to indistinguishability for the $X$-topology, which (by $T_0$-separation of the value space) is the same as equality of $X$-values on the two states.

Special case: partitions as exact propositional questions   Note that the propositional counterpart of an exact wh-question $X:S\to(\mathbb{D}_X,\mathcal{P}(\mathbb{D}_X))$ on a state space $S$ is a partition of $S$, whose partition cells are sets of the form $X^{-1}(d)$, with $d\in\mathbb{D}_X$. Conversely, every partition of $S$ can be viewed as a propositionalized exact question. So, from a purely propositional perspective, exact questions are just partitions of the state space.

2.4 Equivalence between variables and (propositional) questions

As we saw, we can go from wh-questions (modeled as empirical variables) to propositional questions, by abstracting away from the actual values: replacing the variable $X:S\to(\mathbb{D}_X,\tau_X)$ with the (weak) $X$-topology $\tau_X^S\subseteq\mathcal{P}(S)$. We show now that we can also go back (from questions to empirical variables), essentially by taking the complete answers as actual values, with the topology induced by the feasible answers.

From propositional questions to variables: the quotient topology.   Given any propositional empirical question (topology) $\tau\subseteq\mathcal{P}(S)$, we can convert it into a wh-question by taking as our associated empirical variable the canonical quotient map $\bullet/{\simeq}$ from $S$ to the quotient space $S/{\simeq}$ modulo the topological indistinguishability relation for $\tau$; more precisely, we take

$\mathbb{D}_X := S/{\simeq} = \{[s]_{\tau} : s\in S\}$

to be the set of all equivalence classes $[s]_{\tau}:=\{w\in S : s\simeq w\}$ wrt $\simeq$; $X:S\to S/{\simeq}$ is the quotient map $\bullet/{\simeq}$, given by

$X(s):=[s]_{\tau};$

and we endow $\mathbb{D}_X$ with the quotient topology $\tau_X$, given by

$\tau_X:=\{U\subseteq\mathbb{D}_X : X^{-1}(U)\in\tau\}=\{X(O) : O\in\tau\}.$

It is easy to see that these two operations are inverse to each other. First, if we start with an empirical variable $X:S\to(\mathbb{D}_X,\tau_X)$, take its associated $X$-topology $\tau_X^S$ on $S$, and then take the canonical quotient map $\bullet/{=_X^S}$ (modulo the indistinguishability relation for $\tau_X^S$, which as we saw is exactly the relation $=_X^S$ of equality of $X$-values), then the result is ‘isomorphic’ to the original variable $X$: more precisely, the resulting quotient space (with its quotient topology) is homeomorphic to the original range space $(\mathbb{D}_X,\tau_X)$, via the canonical homeomorphism $\alpha$ that maps every value $X(s)$ to the equivalence class $[s]$ modulo $=_X^S$; and moreover, the quotient variable $\bullet/{=_X^S}$ coincides with the functional composition $\alpha\circ X$ of the homeomorphism $\alpha$ and the original variable $X$. Vice-versa, if we start with a propositional question/topology $\tau$ on $S$, take its canonical quotient map $\bullet/{\simeq}$ (modulo its topological indistinguishability relation $\simeq$), and then take the associated weak topology $\tau_{\bullet/\simeq}^S$ on $S$, we obtain exactly the original topology $\tau$.

This equivalence between empirical variables on $S$ (wh-questions) and topologies on $S$ (propositional questions) extends to all the notions defined in this paper, and will form the basis of two equivalent semantics for our logic (topo-dependence models and standard topo-models). However, note that this equivalence is not a bijection. While the second correspondence (going from propositional questions to variables, via our quotient construction) is injective, the first correspondence (going from variables to propositional questions, via ‘propositionalization’) is not injective: many different variables can give rise to the same weak topology on the state space. This fact captures the intuition that the setting of empirical variables is in some sense ‘richer’ than the one of propositional questions.

2.5 Joint questions

In an inquiry, we are typically interested in answering multiple questions simultaneously: e.g. we want to find the spatial $x$, $y$ and $z$ coordinates of a point, as well as its temporal coordinate $t$ (with respect to some system of reference). We can combine such a set of questions into a single joint question, e.g. “what are the space-time coordinates of the point?”. A propositional answer to such a joint question will be a conjunction of propositional answers to each of the underlying simple questions, so a joint propositional question can be seen as the coarsest topology that refines all the underlying topologies; while a value/object answer to a joint wh-question will be a tuple of values (each answering one of the underlying wh-questions), so a joint wh-question can be modeled using the product topology.

We start with the propositional case.

Sets of propositional questions as joint questions: the join topology   Given a set $\tau=\{\tau_i : i\in I\}$ of topologies on a state space $S$ (interpreted as empirical questions), we can regard it as a single question, given by the supremum or “join” $\bigvee_i\tau_i$ (in the lattice of topologies on $S$ ordered by inclusion) of all the topologies $\tau_i$. This is the topology generated by the union $\bigcup_i\tau_i$ of the underlying topologies: the coarsest topology on $S$ that includes all of them. The indistinguishability relation $\simeq_{\tau}$ for this join topology coincides with the intersection $\bigcap_i\simeq_i$ of the indistinguishability relations $\simeq_i$ of each topology $\tau_i$. The complete answer to the join propositional question $\tau$ at state $s$ is thus the intersection of the complete answers at $s$ to all the propositional questions $\tau_i$; while a (basic) partial answer to the join question is an intersection $\bigcap_i O_i$ of partial answers $O_i\in\tau_i$ to all the underlying questions $\tau_i$.

Some special cases   When $\tau=\emptyset$ is the empty family of topologies, $\bigvee\emptyset:=\{\emptyset,S\}$ is the trivial (‘indiscrete’) topology on $S$. When $\tau=\{\tau_1\}$ is a singleton, the joint question corresponding to the set $\tau$ is the same as the question/topology $\tau_1$.

We now move on to the more interesting case of joint empirical variables.

Sets of variables as joint wh-questions: the product topology   A finite set $X=\{X_i : i\in I\}$ of empirical variables $X_i:S\to(\mathbb{D}_i,\tau_i)$ can itself be regarded as a single variable (hence our use of the same notation $X$ for both variables and sets of variables, a common practice in, e.g., Statistics when dealing with random variables), essentially given by taking the product map $\Pi_{i\in I}X_i:S\to\Pi_{i\in I}(\mathbb{D}_i,\tau_i)$ into the topological product space (suitably restricted in its range to make this map surjective). More precisely, given such a finite set $X=\{X_i : i\in I\}$ of empirical variables, we can associate to it a single variable, also denoted by $X$, as follows:

  •  the set of values $\mathbb{D}_X:=\{(X_i(s))_{i\in I} : s\in S\}\subseteq\Pi_{i\in I}\mathbb{D}_i$;

  •  the topology $\tau_X$ is the restriction to $\mathbb{D}_X$ of the product topology on $\Pi_{i\in I}\mathbb{D}_i$, generated by the restrictions to $\mathbb{D}_X$ of all products $\Pi_{i\in I}U_i$ of open sets $U_i\in\tau_i$ (each open in its own topology);

  •  finally, the map $X:S\to\mathbb{D}_X$ is given by $X(s)=(X_i(s))_{i\in I}$.

Essentially, we can interpret the products $\Pi_{i\in I}U_i$ as the possible results of a simultaneous measurement of all these variables. Since there might be correlations between the variables, the actual range of results $\mathbb{D}_X$ as defined above is not necessarily the whole Cartesian product, but a subset of it. Putting it in our earlier interrogative terms, the variable $X$ represents the joint question obtained by simultaneously asking all the wh-questions in the set $X$, and $\Pi_{i\in I}U_i$ is a joint approximate answer to all these questions.
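
A finite sketch of a joint variable (our own illustration): the product map of two variables, restricted to the value combinations actually realized by some state, so that correlations show up as missing tuples.

```python
# The joint variable X = (X1, X2) on a toy state space given as value pairs.
STATES = [(0, "a"), (0, "b"), (1, "b")]

def X1(s): return s[0]
def X2(s): return s[1]

def joint(s):
    """The joint variable: map a state to the tuple of values of its components."""
    return (X1(s), X2(s))

RANGE = {joint(s) for s in STATES}
print(sorted(RANGE))
# [(0, 'a'), (0, 'b'), (1, 'b')] : only 3 of the 4 a-priori possible pairs occur,
# reflecting a correlation between X1 and X2 (the pair (1, 'a') is ruled out).
```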

Some special cases When X=X=\emptyset is empty, 𝔻={λ}\mathbb{D}_{\emptyset}=\{\lambda\} with λ=()\lambda=(\,) the empty string, τ={,𝔻}\tau_{\emptyset}=\{\emptyset,\mathbb{D}_{\emptyset}\} the trivial topology on 𝔻\mathbb{D}_{\emptyset} (which in this case coincides with the discrete topology!), and the map X:S𝔻X:S\to\mathbb{D}_{\emptyset} given by X(s)=λX(s)=\lambda for all sSs\in S. Note that, since the weak \emptyset-topology τS={,S}\tau_{\emptyset}^{S}=\{\emptyset,S\} is the trivial topology on SS, its indistinguishability relation =S=_{\emptyset}^{S} is the universal relation on SS (relating every two states). When X={x}X=\{x\} is a singleton, the single variable corresponding to the set XX is the same as xx itself.

Given the preceding, from now on we will use the notation XX for both single variables and for finite sets of variables, identifying the set XX with the associated single variable XX (and dually identifying a single variable xx with the associated set {x}\{x\}).
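
For concreteness, here is a minimal executable sketch (a Python illustration on a hypothetical three-state space, not part of the formal development) of the joint-variable construction: the joint value map sends each state to the tuple of component values, and its actual range 𝔻_X may be a proper subset of the full Cartesian product when the components are correlated.

```python
# Sketch: the joint variable associated to two correlated component variables
# on a hypothetical finite state space (illustration only).

states = ["s1", "s2", "s3"]

# Component variables, as maps from states to values.
x = {"s1": 0, "s2": 0, "s3": 1}
y = {"s1": "a", "s2": "a", "s3": "b"}

# The joint variable X = (x, y): each state is mapped to a tuple of values.
X = {s: (x[s], y[s]) for s in states}

# Its range D_X consists of the tuples actually realized by some state,
# which here is strictly smaller than the full Cartesian product of the ranges.
D_X = set(X.values())
full_product = {(vx, vy) for vx in set(x.values()) for vy in set(y.values())}

print(D_X)                  # {(0, 'a'), (1, 'b')}
print(full_product)         # all four pairs
print(D_X < full_product)   # True: correlations shrink the joint range
```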


Sanity check: equivalence between the two notions of joint questions It is easy to see that the above-mentioned equivalence between propositional empirical questions and empirical variables extends to the notion of joint question/variable. Indeed, the weak topology induced on S by the product variable X = Π_{i∈I} X_i : S → Π_{i∈I}(𝔻_i, τ_i) is exactly the join (supremum) of all the topologies τ_i^S (with i ∈ I).

3 Information-carrying dependence as topological continuity

In this section, we present a conceptual analysis of notions of epistemic dependence that make sense in a topological setting for imprecise observations. The outcome is a series of definitions, for which we prove some simple clarificatory characterizations. Deeper technical results on the resulting topological dependence logics are deferred to the next section.

We start by reviewing the older notion of exact (non-epistemic) dependence DXYD_{X}Y as it occurs in the Logic of Functional Dependence 𝖫𝖥𝖣\mathsf{LFD} in [Baltag and van Benthem, 2021], then we move to exploring various types of known/knowable dependence.

For this purpose, we fix once again a set of epistemic states or ‘worlds’ SS: intuitively, these are all the states that are consistent with some (implicit) agent’s background information.

3.1 Background: exact dependence in 𝖫𝖥𝖣\mathsf{LFD}

The form of dependence studied in Dependence Logic (DL) [Väänänen, 2007], denoted in the DL literature by =(X;Y) and here by D(X;Y), is both exact and global: it requires the existence of a global functional dependence that maps the exact values of the variable (or set of variables) X to the exact values of the (set of) variable(s) Y. The Logic of Functional Dependence (LFD) introduced in [Baltag and van Benthem, 2021] goes beyond this, by introducing a local, but still exact, version of dependence D_X Y: the current exact value of X (in the actual world) uniquely determines the current exact value of Y (in the same actual world).111111This local version of dependence introduced in LFD is more fundamental, in a sense, than the global one of DL: it turns out that local D_X Y (combined with the local determination modality D_X φ) can define global D(X;Y), but not the other way around. In addition, in LFD one considers propositional dependence operators D_X P ("exact determination"): the (current) exact value of X determines P to be true. These modalities can be seen as a special case of local dependence D_X Y, in which the determined variable Y is propositional (denoting a Boolean function Y_P : S → {0,1} from states to truth values, seen as the characteristic function of a set of states P ⊆ S) and its uniquely determined value is 1 ('true').

In this section we review the basics of 𝖫𝖥𝖣\mathsf{LFD}. Note that these definitions are set-theoretical in nature: they do not use the topological structure of our space of values, but only the (set of) values themselves. This expresses the fact that this form of dependence is exact and ‘informational’ (i.e. the exact value of XX carries full information about the exact value of YY), rather than approximate and ‘epistemic’ (having to do with an observer’s ability to infer as good estimates as needed for the value of YY from sufficiently precise approximations of XX’s value). The epistemic versions will come in subsequent sections.

We start with the special case of propositional dependence operators:

Exact determination of a proposition by a variable Given a state space SS, a (finite set of) empirical variable(s) X:S(𝔻X,τX)X:S\to(\mathbb{D}_{X},\tau_{X}) and a proposition PSP\subseteq S, we say that PP is determined by XX at state ss, and write sDXPs\models D_{X}P, if the (exact) value of XX (at ss) carries the information that PP is true (at ss):

s\models D_{X}P\,\,\,\mbox{ iff }\,\,\,X^{-1}(X(s))\subseteq P\,\,\,\mbox{ iff }\,\,\,\forall w\in S(s=_{X}^{S}w\Rightarrow w\models P),

where =X=_{X} is the equality of XX-values (defined as above: s=Xws=_{X}w iff X(s)=X(w)X(s)=X(w)). Note that DXD_{X} is just a standard relational modality, having =X=_{X} as its accessibility relation. In particular, since s=ts=_{\emptyset}t holds for all s,tSs,t\in S, DPD_{\emptyset}P is the universal modality (quantifying over all states):

sDP iff wP for all wS.s\models D_{\emptyset}P\,\,\,\mbox{ iff }\,\,\,w\models P\mbox{ for all $w\in S$}.

This motivates an abbreviation A(P)A(P), saying that PP is true in all states:

A(P):=DP.A(P)\,\,:=\,\,D_{\emptyset}P.

Abstraction: exact determination by a propositional question By abstracting away from the specific values of the variable, we can 'lift' this notion to the level of propositional questions. Given a state space S, a proposition P ⊆ S and an (empirical) propositional question (i.e. topology) τ on S, we say P is determined by (the answer of) τ at state s, and write s ⊨ D_τ P, if the (complete) answer to τ (at s) carries the information that P is true (at s):

sDτP iff [s]τP,s\models D_{\tau}P\,\,\mbox{ iff }\,\,[s]_{\tau}\subseteq P,

where recall that [s]_τ = {w ∈ S : s ≃_τ w} is the equivalence class of s wrt the indistinguishability relation ≃_τ for the topology τ. One can easily recognize D_τ as a standard relational modality having topological indistinguishability as its underlying accessibility relation:

s\models D_{\tau}P\,\,\mbox{ iff }\,\,[s]_{\tau}\subseteq P\,\,\mbox{ iff }\,\,\forall w\in S(s\simeq_{\tau}w\Rightarrow w\models P).

Note again that the topology itself plays no role, but only the associated equivalence relation \simeq, or equivalently the corresponding partition of the state space into equivalence classes: in effect, this is really about the information carried by the underlying “exact” (partitional) question.

Exact dependence between variables Given two (finite sets of) empirical variables X:S(𝔻X,τX)X:S\to(\mathbb{D}_{X},\tau_{X}) and Y:S(𝔻Y,τY)Y:S\to(\mathbb{D}_{Y},\tau_{Y}) over the same state space SS, we say that XX exactly determines YY at state ss, and write sDXYs\models D_{X}Y, if the value of XX at ss uniquely determines the value of YY:

sDXY iff X1(X(s))Y1(Y(s)) iff wS(s=XSws=YSw).s\models D_{X}Y\,\,\,\mbox{ iff }\,\,\,X^{-1}(X(s))\subseteq Y^{-1}(Y(s))\,\,\,\mbox{ iff }\,\,\,\forall w\in S(s=_{X}^{S}w\Rightarrow s=_{Y}^{S}w).
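
For readers who prefer an executable gloss, the following Python sketch checks this purely set-theoretic clause on a hypothetical four-state model, together with its global version D(X;Y) discussed further below; the topologies play no role here, only the value maps.

```python
# Sketch: exact local dependence D_X Y and its global version D(X;Y)
# on a hypothetical finite model; only the value assignments matter.

states = [1, 2, 3, 4]
X = {1: "a", 2: "a", 3: "b", 4: "b"}
Y = {1: 10, 2: 10, 3: 20, 4: 30}

def D(X, Y, s):
    """s |= D_X Y iff every state X-equivalent to s is also Y-equivalent to s."""
    return all(Y[w] == Y[s] for w in states if X[w] == X[s])

def D_global(X, Y):
    """S |= D(X;Y) iff D_X Y holds at every state."""
    return all(D(X, Y, s) for s in states)

print([s for s in states if D(X, Y, s)])  # [1, 2]: the X-class {3, 4} does not fix Y
print(D_global(X, Y))                     # False: no global functional dependence
```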

Since s=ts=_{\emptyset}t holds for all s,tSs,t\in S, the statement DYD_{\emptyset}Y holds iff YY is a constant: its value is the same in all states. We can thus introduce an abbreviation C(Y)C(Y) (saying that “YY is a constant”):

C(Y):=DYC(Y)\,\,:=\,\,D_{\emptyset}Y

Abstraction: exact dependence between propositional questions By abstracting again from the specific values, this notion is lifted to the level of propositional questions: given topologies τ\tau and τ\tau^{\prime} on the same state space SS, we say that τ\tau exactly determines τ\tau^{\prime} at state ss, and write sDττs\models D_{\tau}\tau^{\prime}, if the exact answer to τ\tau at ss uniquely determines the exact answer to τ\tau^{\prime}:

sDττ iff [s]τ[s]τ,s\models D_{\tau}\tau^{\prime}\,\,\,\mbox{ iff }\,\,\,[s]_{\tau}\subseteq[s]_{\tau^{\prime}},

where [s]τ[s]_{\tau} and [s]τ[s]_{\tau^{\prime}} are equivalence classes wrt indistinguishability relations for τ\tau and τ\tau^{\prime}.

Once again, the topologies themselves play no role, but only the corresponding partitions of the state space into equivalence classes. In effect, this notion is purely about the dependence between partitional questions: the (true) answer to the first question gives us full information about the (true) answer to the second question.

The logic of functional dependence The operators DXPD_{X}P and DXYD_{X}Y are at the heart of the simple Logic of Functional Dependence (𝖫𝖥𝖣\mathsf{LFD})121212This logic was introduced, in a purely set-theoretical setting, devoid of topological features, in [Baltag and van Benthem, 2021]., with the following syntax:

\varphi::=\ P\vec{x}\ |\ \neg\varphi\ |\ \varphi\wedge\varphi\ |\ D_{X}\varphi\ |\ D_{X}Y

where the letters XX range over finite sets of variables (coming from a fixed set VarVar), the letters PP are predicate symbols (coming from a finite set of such symbols) each having a specific arity nn, and x=(x1xn)\vec{x}=(x_{1}\ldots x_{n}) is any nn-tuple of variables in VarVar.

Given the above discussion, it should be clear that this logic has two equivalent interpretations: one in terms of empirical variables (regarded purely set-theoretically, as simple assignments of values to states, with no topological structure), and another one in terms of partitions (or equivalence relations, hence the name “relational semantics”).

In particular, variable-based dependence models come with: a state space S; a way to assign to each syntactic variable x some map 𝐱 : S → 𝔻_x that associates to each state s ∈ S some value 𝐱(s) (in some range of values 𝔻_x); and a way to interpret each n-ary predicate symbol P as an n-ary relation I(P) between values. The value assignment can be naturally extended to (finite) sets of variables X ⊆ Var, by putting 𝐗(s) := (𝐱(s))_{x∈X} for all s ∈ S, and then equality of values =_X can be defined as above, by putting: s =_X w iff 𝐗(s) = 𝐗(w). The variable-based semantics of the atoms P x⃗ is given as usual by putting

s\models Px_{1}\ldots x_{n}\,\,\mbox{ iff }\,\,(\mathbf{x}_{1}(s),\ldots,\mathbf{x}_{n}(s))\in I(P).

The equivalent relational semantics only assumes given a family of equivalence relations (or partitions) =x=_{x} on SS, one for each basic variable xx, and simply treats the atoms Px1xnPx_{1}\ldots x_{n} using a valuation (subject to additional constraints, see the Appendix for details). In both versions, the semantics for Boolean connectives uses the classical Tarskian clauses, while the semantics of exact dependence and determination operators is given by putting:

sDXφ iff wS(s=Xwwφ)s\models D_{X}\varphi\,\,\mbox{ iff }\,\,\forall w\in S(s=_{X}w\Rightarrow w\models\varphi)
s\models D_{X}Y\,\,\mbox{ iff }\,\,\forall w\in S(s=_{X}w\Rightarrow s=_{Y}w)

The operators Aφ (for the universal modality) and C(X) (for "X is a constant") can be defined as abbreviations in this logic, as mentioned above: Aφ := D_∅φ, and C(X) := D_∅X.

Global dependence As already mentioned, the standard notion of (functional) dependence =(X;Y)=(X;Y) in Dependence Logic DLDL is global: using our notation D(X;Y)D(X;Y), this is usually defined by putting

S\models D(X;Y)\,\,\,\mbox{ iff }\,\,\,\exists F:\mathbb{D}_{X}\to\mathbb{D}_{Y}\mbox{ s.t. }F\circ\mathbf{X}=\mathbf{Y}\mbox{ holds on }S,

which can equivalently be stated in terms of equality of values:

SD(X;Y) iff s,wS(s=Xws=Yw).S\models D(X;Y)\,\,\,\mbox{ iff }\,\,\,\forall s,w\in S(s=_{X}w\Rightarrow s=_{Y}w).

For propositional questions, given by topologies τ,τ\tau,\tau^{\prime} on SS, the corresponding notion is simply the global version of local dependence DττD_{\tau}\tau^{\prime}:

SD(τ;τ) iff [s]τ[s]τ holds for all sSS\models D(\tau;\tau^{\prime})\,\,\mbox{ iff }\,\,[s]_{\tau}\subseteq[s]_{\tau^{\prime}}\mbox{ holds for all $s\in S$}

(where recall that [s]_τ and [s]_{τ'} are the equivalence classes of s wrt the indistinguishability relations for the topologies τ and τ'). Once again, note that this is a purely set-theoretic notion, in which the topologies do not play any role in themselves, but only via the induced equivalence relations: global functional dependence is also in effect a relation between partitional questions. Also, it is again easy to see that these two notions of global dependence fit together, via the correspondence given by the weak topology; i.e., we have that

SD(X;Y) iff X1(X(s))Y1(Y(s)) holds for all sS.S\models D(X;Y)\,\,\mbox{ iff }\,\,X^{-1}(X(s))\subseteq Y^{-1}(Y(s))\mbox{ holds for all $s\in S$}.

In 𝖫𝖥𝖣\mathsf{LFD}, global dependence is not a primitive notion, but can be defined via the abbreviation:

D(X;Y):=ADXY.D(X;Y)\,\,:=\,\,AD_{X}Y.

Indeed, it is easy to see that the semantics of the formula ADXYAD_{X}Y matches the above semantic clause for D(X;Y)D(X;Y).

A sound and complete proof system Here is a version of the axiomatic proof system 𝐋𝐅𝐃\mathbf{LFD} for the Logic of Functional Dependence, whose completeness is the main technical result in [Baltag and van Benthem, 2021]:

(I) Axioms and rules of Propositional Logic
(II) S5S5 Axioms for Determination:
(DD-Necessitation) From φ\varphi, infer DXφD_{X}\varphi
(DD-Distribution) DX(φψ)(DXφDXψ)D_{X}(\varphi\to\psi)\to(D_{X}\varphi\to D_{X}\psi)
(Factivity: axiom TT) DXφφD_{X}\varphi\to\varphi
(Axiom 44) DXφDXDXφD_{X}\varphi\to D_{X}D_{X}\varphi
(Axiom 55) ¬DXφDX¬DXφ\neg D_{X}\varphi\to D_{X}\neg D_{X}\varphi
(III) Axioms for exact dependence:
(Inclusion) DXYD_{X}Y, provided that YXY\subseteq X
(Additivity) (DXYDXZ)DX(YZ)\left(D_{X}Y\wedge D_{X}Z\right)\to D_{X}(Y\cup Z)
(Transitivity) (DXYDYZ)DXZ\left(D_{X}Y\wedge D_{Y}Z\right)\to D_{X}Z
(Determined Dependence) DXYDXDXYD_{X}Y\to D_{X}D_{X}Y
(Transfer) DXY(DYφDXφ)D_{X}Y\to\left(D_{Y}\varphi\to D_{X}\varphi\right)
(Determined Atoms) Px1xnD{x1,,xn}Px1xnPx_{1}\ldots x_{n}\to D_{\{x_{1},\ldots,x_{n}\}}Px_{1}\ldots x_{n}
Table 1: The proof system 𝐋𝐅𝐃\mathbf{LFD}.
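
As a quick sanity check on these axioms (not a substitute for the soundness argument in [Baltag and van Benthem, 2021]), one can brute-force particular instances on small models; the sketch below, with a hypothetical value assignment, tests the Transitivity and Additivity axioms at every state.

```python
# Sketch: brute-force check of instances of the Transitivity and Additivity
# axioms for exact dependence on a hypothetical finite model.

states = list(range(4))
val = {                       # value assignment for three variables
    "x": {0: 0, 1: 0, 2: 1, 3: 1},
    "y": {0: "a", 1: "a", 2: "b", 3: "b"},
    "z": {0: 5, 1: 5, 2: 5, 3: 6},
}

def eq(X, s, w):              # s =_X w : agreement on all variables in X
    return all(val[x][s] == val[x][w] for x in X)

def D(X, Y, s):               # s |= D_X Y
    return all(eq(Y, s, w) for w in states if eq(X, s, w))

for s in states:
    # Transitivity: (D_x y  and  D_y z)  ->  D_x z
    assert not (D(["x"], ["y"], s) and D(["y"], ["z"], s)) or D(["x"], ["z"], s)
    # Additivity: (D_x y  and  D_x z)  ->  D_x {y, z}
    assert not (D(["x"], ["y"], s) and D(["x"], ["z"], s)) or D(["x"], ["y", "z"], s)

print("all tested axiom instances hold on this model")
```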

3.2 Propositional knowledge and conditional knowability of a proposition

Next, we take a look at some epistemic attitudes in an empirical setting. We start with propositional ‘knowledge’, then proceed to analyze when a proposition PP is ‘knowable’ given (a good enough approximation of the value of) some empirical variable XX.

Propositional knowledge as universal quantification over epistemic possibilities We say that a proposition PSP\subseteq S is known, and write K(P)K(P), if PP holds in all epistemically possible states, i.e., if in fact we have that P=SP=S. So, in our setting, the knowledge operator KK is simply the above-defined universal modality, quantifying over all epistemic states:

K(P)=A(P).K(P)=A(P).

Conditional knowability, given (approximate values of) a variable Given a proposition PSP\subseteq S and an empirical variable X:S(𝔻X,τX)X:S\to(\mathbb{D}_{X},\tau_{X}), we say that PP is knowable given XX at state ss (or knowable conditional on XX), and write sKXPs\models K_{X}P, if the truth of PP at ss is determined by some (good enough) approximation of the value X(s)X(s), i.e. if we have

OτX(X(s)):X1(O)P.\exists O\in\tau_{X}(X(s)):\,X^{-1}(O)\subseteq P.

Intuitively, this means that one can come to know that PP is true, after observing a sufficiently accurate estimate of X(s)X(s).

Once again, there is a natural analogue of this notion for propositional questions, obtained by abstracting away from the actual values:

Abstraction: conditional knowability as interior operator Given a proposition PSP\subseteq S and a propositional question (topology) τ𝒫(S)\tau\subseteq{\mathcal{P}}(S), we say that PP is knowable given question (topology) τ\tau at state ss, and write sKτPs\models K_{\tau}P, if one can come to know that PP is true at ss after learning some (true feasible) answer to question τ\tau; i.e. if there exists some feasible answer UτU\in\tau such that sUPs\in U\subseteq P. It is obvious that we have:

sKτP iff sIntτ(P).s\models K_{\tau}P\,\,\mbox{ iff }\,\,s\in Int_{\tau}(P).

Equivalence between the two notions One can easily see that conditional knowability given an empirical variable XX is equivalent to conditional knowability given its associated propositional question τXS\tau_{X}^{S}:

Proposition 3.1.

The following are equivalent, for any variable XX and proposition PSP\subseteq S:

  1. 1.

    sKXPs\models K_{X}P

  2. 2.

    sIntX(P)s\in Int_{X}(P).

Our notion of conditional knowability (of a proposition given a variable or question) can thus be seen as a variant of the well-known interior semantics for modal logic.
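
Concretely, the interior clause can be computed directly from a base of feasible answers: a state satisfies K_X P exactly when some basic open around it is included in P. The sketch below is an illustration on a hypothetical finite base, not the paper's official machinery.

```python
# Sketch: conditional knowability K_X P as an interior operator,
# computed from a (hypothetical) finite base of feasible answers.

states = {1, 2, 3, 4, 5}
base = [{1, 2}, {2}, {2, 3}, {4, 5}, states]   # a base for the X-topology on S

def knowable_at(s, P):
    """s |= K_X P iff some basic open containing s is included in P."""
    return any(s in U and U <= P for U in base)

P = {1, 2, 3}
print({s for s in states if knowable_at(s, P)})   # {1, 2, 3}: this P is open, so Int(P) = P

Q = {1, 2, 4}
print({s for s in states if knowable_at(s, Q)})   # {1, 2}: state 4 satisfies Q but not K_X Q
```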

Special case: Knowledge as unconditional knowability When we take the empty set of variables X=X=\emptyset, we obtain the above notion of (unconditional) knowledge K(P)K(P):

K(P):=KPK(P):=K_{\emptyset}P

3.3 Knowing the value of a variable

In a next step, we look at the various ways in which one can be said to know the value of an empirical variable.

Exact knowledge For a start, we say that a variable X is exactly known, and write K(X), if there exists a value d ∈ 𝔻_X such that the proposition X = d is known (i.e. K(X = d) holds), i.e. if X(t) = d holds for all states t ∈ S; equivalently, iff 𝔻_X is a singleton 𝔻_X = {d}. So, K(X) holds iff X is a constant map:

K(X)=C(X).K(X)=C(X).

Approximate observations The act of observing a given variable X at state s within some given approximation U ∈ τ_X(X(s)) can be modeled as a restriction of the state space: we move from the original space S to the subspace S′ := X^{-1}(U) = {s ∈ S : X(s) ∈ U}. All the empirical variables are thus also restricted to this subspace; in particular, X becomes X : S′ → (𝔻′_X, τ′_X), where 𝔻′_X := U and τ′_X := τ_X|U = {O ∩ U : O ∈ τ_X} is the subspace topology on U.

Approximate knowledge For empirical variables, we also have a more inexact form of knowledge, namely approximate knowledge. Given any open set UτXU\in\tau_{X}, we say that the value of XX is known with approximation UU if we have K(XU)K(X\in U), that is, X(t)UX(t)\in U for all states tSt\in S.

Arbitrarily accurate knowledge We say that the value of XX is known with arbitrary accuracy at state ss, and write sk(X)s\models k(X) if: for every open neighborhood UτX(X(s))U\in\tau_{X}(X(s)) of X(s)X(s), the value of XX is known with approximation UU; i.e., we have that

\forall U\in\tau_{X}(X(s))\,\forall t\in S\,(X(s)\in U\Rightarrow X(t)\in U).
Proposition 3.2.
  1. 1.

    sk(X)s\models k(X) iff the singleton {X(s)}\{X(s)\} is dense w.r.t. the topology τX\tau_{X} (i.e., Cl({X(s)})=𝔻XCl(\{X(s)\})=\mathbb{D}_{X}).

  2. 2.

    Knowledge of arbitrarily accurate knowledge of XX is the same as exact knowledge of XX: i.e. we have Kk(X)K(X)Kk(X)\Leftrightarrow K(X).

  3. 3.

    If (𝔻X,τX)(\mathbb{D}_{X},\tau_{X}) is T1T_{1}-separated, then arbitrarily accurate knowledge of XX is the same as exact knowledge of XX: i.e. we have k(X)K(X)k(X)\Leftrightarrow K(X).

3.4 Known dependence versus knowable dependence

The local version of (exact) dependence is not necessarily known, or even knowable: D_X Y does not imply KD_X Y, nor does it imply K_X D_X Y. Things change if we consider the notion of global functional dependence D(X;Y) introduced above: using the universal modality A, we can express the fact that the exact dependence holds globally via the identity

D(X;Y)=ADXY.D(X;Y)=AD_{X}Y.

Global dependence is known dependence This follows directly from the interpretation of the universal modality AA as “knowledge” KK, which together with the above identity gives us

D(X;Y)=KDXYD(X;Y)=KD_{X}Y

So D(X;Y)D(X;Y) captures a situation in which the existence of an exact dependence of YY on XX is known at ss.

But now we can also consider a knowable version of this:

Knowable dependence A knowable dependence of YY on XX holds at ss iff the existence of an (exact) dependence DXYD_{X}Y can be known if one is given some sufficiently accurate estimate of the value X(s)X(s). This is captured by the expression

KXDXY.K_{X}D_{X}Y.

This holds at a state ss whenever DXYD_{X}Y holds on some open neighborhood OτXS(s)O\in\tau_{X}^{S}(s).

However, an important observation is that (for inexact variables) both known dependence KDXYKD_{X}Y and knowable dependence KXDXYK_{X}D_{X}Y may still be epistemically useless, as far as knowability of YY from XX is concerned. They do not automatically guarantee that any estimate of the value of YY (no matter how vague) is ever known after observing the value of XX with any accuracy (no matter how precise)!

What we need instead is a notion of epistemic dependence, i.e. one that ensures knowledge transfer: we should be able to infer the value of YY with any desired accuracy from a sufficiently accurate measurement of the value of XX. This forms the topic of the next section.

3.5 Epistemic dependence as continuous correlation

We now introduce the key notion of our paper: the concept of epistemic dependence between (inexact) empirical variables. This is not the same as known or knowable dependence: what we want is rather an information-carrying dependence: given two (sets of) empirical variables XX and YY, when can one be said to know how to find the value of YY (with any desired accuracy) given (a sufficiently accurate estimate of) the value of XX? The answer will be given below by the notion of known epistemic dependence, to be written K(X;Y)K(X;Y), which requires global continuity of the dependence map.

A related, but weaker notion is that of knowable epistemic dependence KXYK_{X}Y: this is the case in which a known epistemic dependence between XX and YY can be acquired after learning some (sufficiently accurate) estimate of XX. Topologically, this is a more local property, obtained by requiring continuity only over a neighborhood of the current value of XX. We proceed now to formally define these key concepts.

Known epistemic dependence We introduce now a global concept K(X;Y)K(X;Y), obtained by adding the continuity requirement to the definition of (global) functional dependence:

SK(X;Y) iff  there is a continuous map F:𝔻X𝔻Y s.t. FX=Y.S\models K(X;Y)\,\,\mbox{ iff }\,\,\mbox{ there is a continuous map $F:\mathbb{D}_{X}\to\mathbb{D}_{Y}$ s.t. $F\circ X=Y$}.

To understand why continuity makes this global notion of dependence "epistemic" (in contrast to global exact dependence D(X;Y)), it is useful to provide a number of equivalent formulations of this notion:

Proposition 3.3.

The following are equivalent for empirical variables XX and YY:

  1. 1.

    SK(X;Y)S\models K(X;Y)

  2. 2.

    sSUτY(Y(s))VτX(X(s)):X1(V)Y1(U)\forall s\in S\forall U\in\tau_{Y}(Y(s))\,\exists V\in\tau_{X}(X(s)):\,X^{-1}(V)\subseteq Y^{-1}(U)

  3. 3.

    sSUτYS(s)VτXS(s)VU\forall s\in S\forall U\in\tau_{Y}^{S}(s)\exists V\in\tau_{X}^{S}(s)\,V\subseteq U

  4. 4.

    for all states sSs\in S and sets PSP\subseteq S, if sIntY(P)s\in Int_{Y}(P) then sIntX(P)s\in Int_{X}(P)

  5. 5.

    Y:(S,τXS)(𝔻Y,τY)Y:(S,\tau_{X}^{S})\to(\mathbb{D}_{Y},\tau_{Y}) is continuous (in the XX-topology on SS)

  6. 6.

    the identity function id:(S,τXS)(S,τYS)id:(S,\tau_{X}^{S})\to(S,\tau_{Y}^{S}) is continuous, if we endow its domain with the XX-topology and its codomain with the YY-topology.

Essentially, the second and third clauses say that, for every state s ∈ S, one can know the value of Y(s) with any desired accuracy, after observing a sufficiently accurate estimate of the value of X(s). Since this fact holds globally (for all states), it is known to the observing subject: so the meaning of K(X;Y) is that the subject knows how to determine the value of Y with any desired degree of accuracy, if she were given accurate enough approximations of the value of X.
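
The following numerical sketch (a hypothetical metric instance, with Y = F(X) for F(x) = x² on a bounded range) shows clause 2 of Proposition 3.3 in action: for each desired accuracy ε on Y one can compute a sufficient accuracy δ on X.

```python
# Numerical sketch of clause 2 of Proposition 3.3 in a hypothetical metric model:
# Y = F(X) with F(x) = x**2, and X takes values in the interval [0, 10].

def F(x):
    return x * x

LO, HI = 0.0, 10.0

def delta_for(eps):
    """A sufficient X-accuracy for a desired Y-accuracy eps.

    On [LO, HI] we have |F(x) - F(x')| = |x + x'| * |x - x'| <= 2 * HI * |x - x'|,
    so delta = eps / (2 * HI) works at every state (F is even uniformly continuous here).
    """
    return eps / (2 * HI)

eps = 1e-3
delta = delta_for(eps)
x0, x1 = 7.25, 7.25 + delta / 2        # two X-values closer than delta...
assert abs(F(x0) - F(x1)) < eps        # ...have Y-values closer than eps
print(delta)                           # 5e-05
```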

By relaxing now this knowledge condition to knowability, we get a more “local” version KXYK_{X}Y of the same concept, obtained by restricting it to an open neighborhood of the actual state:

Epistemic dependence We write s ⊨ K_X Y, and say that there exists a (knowable) epistemic dependence between X and Y at state s, if a known dependence K(X;Y) can be achieved after observing some accurate enough approximation of the value X(s) (of variable X at state s). This happens when K(X;Y) holds on some X-open subspace O ∈ τ_X^S(s) around the actual state (even if it does not hold on the whole original space S):

sKXY iff  there exists OτXS(s) s.t. OK(X;Y).s\models K_{X}Y\,\,\mbox{ iff }\,\,\mbox{ there exists $O\in\tau_{X}^{S}(s)$ s.t. $O\models K(X;Y)$}.

By unfolding this definition, we can easily see that we have: sKXYs\models K_{X}Y iff there exists OτXS(s)O\in\tau_{X}^{S}(s), and a continuous map F:X(O)𝔻YF:X(O)\to\mathbb{D}_{Y}, s.t. FX=YF\circ X=Y holds on OO.

The next result gives us other useful characterizations of KXYK_{X}Y.

Proposition 3.4.

The following are equivalent for empirical variables XX and YY:

  1. 1.

    sKXYs\models K_{X}Y

  2. 2.

    there exists an open neighborhood OO of X(s)X(s), and a continuous map F:O𝔻YF:O\to\mathbb{D}_{Y}, s.t. FX=YF\circ X=Y holds on X1(O)X^{-1}(O)

  3. 3.

\exists O\in\tau_{X}^{S}(s)\,\forall t\in O\,\forall U\in\tau_{Y}^{S}(t)\,\exists V\in\tau_{X}^{S}(t)\,V\subseteq U

  4. 4.

    there is some open XX-neighborhood OO of ss, s.t. for all states tOt\in O and sets POP\subseteq O, if tIntY(P)t\in Int_{Y}(P) then tIntX(P)t\in Int_{X}(P)

  5. 5.

    YY is continuous in the XX-topology on some XX-open neighborhood of ss

  6. 6.

    the restriction of the identity function id : (O, τ_X^O) → (S, τ_Y^S) to some open X-neighborhood O ∈ τ_X^S(s) is continuous, if we endow its domain with (the topology τ_X^O induced on O by) the X-topology and its codomain with the Y-topology.

So, unlike the local version of exact dependence D_X Y, local epistemic dependence K_X Y is inherently knowable: i.e. we always have the validity

KXYKXKXY.K_{X}Y\,\Rightarrow K_{X}K_{X}Y.

As in the case of exact dependence, it turns out that the local version of epistemic dependence is more fundamental than the global one: KXYK_{X}Y can define K(X;Y)K(X;Y) via the equivalence

K(X;Y)=KKXY=AKXY,K(X;Y)=KK_{X}Y=AK_{X}Y,

but not the other way around.

3.6 A side issue: continuity at a point

There is in fact an even “more local” form of dependence kXYk_{X}Y, that requires only continuity at a point. We call this notion “conditional knowability of YY given XX ”. As we’ll see, this form of dependence is in general epistemically opaque. As such, it will not play any role in this paper, but it may still be useful to define it, in order to better understand our key notion of epistemic dependence KXYK_{X}Y: the two are closely related, but nevertheless subtly different in important ways.

Conditional knowability of variables as point-continuous dependence We write skXYs\models k_{X}Y, and say that YY is conditionally knowable given XX at state ss if we have

UτY(Y(s))VτX(X(s)):X1(V)Y1(U),\forall U\in\tau_{Y}(Y(s))\,\exists V\in\tau_{X}(X(s)):\,X^{-1}(V)\subseteq Y^{-1}(U),

i.e. if one can approximate the value(s) of YY with any desired accuracy if given a sufficiently accurate estimate of the value(s) of XX.

We can characterize conditional knowability k_X Y in a way similar to what we did for K_X Y and K(X;Y):

Proposition 3.5.

The following are equivalent for empirical variables XX and YY:

  1. 1.

    skXYs\models k_{X}Y

  2. 2.

    UτYS(s)VτXS(s)VU\forall U\in\tau_{Y}^{S}(s)\,\exists V\in\tau_{X}^{S}(s)\,V\subseteq U

  3. 3.

    for all sets P ⊆ S, if s ∈ Int_Y(P) then s ∈ Int_X(P)

  4. 4.

    YY is continuous at point ss in the XX-topology

  5. 5.

    the identity function id:SSid:S\to S is continuous at point ss, if we endow its domain with the XX-topology and its codomain with the YY-topology

However, unlike KXYK_{X}Y and K(X;Y)K(X;Y), conditional knowability kXYk_{X}Y is itself not necessarily knowable (based on observing XX):

kXY⇏KXkXYk_{X}Y\not\Rightarrow K_{X}k_{X}Y

In other words, even if YY is actually knowable given XX at ss, this very fact may forever remain unknown: no matter how precise her measurements of X(s)X(s), the agent may never be in a position to know this fact. The epistemic opacity of this form of dependence is due to its extreme sensitivity to small deviations or errors: if the value of XX happened to be even slightly different than the exact current value X(s)X(s), then kXYk_{X}Y would no longer hold. As a result, any imprecision, however small, is enough to cast doubt upon conditional knowability.

Counterexample: Thomae's function An extreme form of unknowable conditional dependence is given by Thomae's function. Let X and Y be two single variables, with S = 𝔻_X = 𝔻_Y = ℝ (the set of real numbers) and τ_X = τ_Y the standard topology on ℝ; we set X = id to be the identity map, and let Y be given by Thomae's function:

Y(s):=1q if s=pq with p,q s.t. gcd(p,q)=1, and Y(s)=0 if s is irrational.Y(s):=\frac{1}{q}\,\mbox{ if $s=\frac{p}{q}$ with $p\in\mathbb{Z},q\in\mathbb{N}$ s.t. $gcd(p,q)=1$},\,\,\,\,\,\mbox{ and }\,\,\,\,\,Y(s)=0\,\mbox{ if $s\not\in\mathbb{Q}$ is irrational}.

This function is continuous at all irrational points, and discontinuous at all rational points.131313Note that Y(0)=Y(0/1)=1Y(0)=Y(0/1)=1, since by convention q=1q=1 is the only natural number s.t. gcd(0,q)=1gcd(0,q)=1. As such, YY is never continuous on any open interval, and as a consequence, we have in this case that

kXY¬KXkXYk_{X}Y\Rightarrow\neg K_{X}k_{X}Y

is valid on the space SS. This is the extreme opposite of inherent knowability: in this space, whenever kXYk_{X}Y is true, then it is unknowable!
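
The failure of knowability in this counterexample can also be seen numerically. The sketch below (an illustration using exact fractions, so only rational arguments occur) shows that arbitrarily close to the rational point 1/2 there are points whose Y-value is arbitrarily small, while Y(1/2) = 1/2: no finite measurement accuracy on X can pin down Y near that point.

```python
# Sketch: Thomae's function evaluated on exact rational points.
from fractions import Fraction

def thomae(s: Fraction) -> Fraction:
    """Y(p/q) = 1/q for s = p/q in lowest terms (the value 0 at irrationals
    cannot be represented with Fractions and is not needed here)."""
    return Fraction(1, s.denominator)

s = Fraction(1, 2)
print(thomae(s))                       # 1/2: the value of Y at the rational point s

# Rationals arbitrarily close to s have huge denominators, hence tiny Y-values,
# so no interval-observation of X = id bounds Y away from 0 around s.
for n in (10, 10**3, 10**6):
    w = s + Fraction(1, n)
    print(float(abs(w - s)), float(thomae(w)))
```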

In fact, conditional knowability kXYk_{X}Y is only knowable (based on observing XX) when we have a full-fledged epistemic dependence KXYK_{X}Y, as witnessed by the following equivalence:

KXYKXkXYK_{X}Y\,\Leftrightarrow\,K_{X}k_{X}Y

In other words, the ‘knowable’ version of conditional knowability kXYk_{X}Y is exactly the epistemic dependence KXYK_{X}Y. For this reason, we consider kXYk_{X}Y to be a somewhat irrelevant notion from an epistemological point of view, and we will not explore it in any depth in this paper.141414Nevertheless, we will later specify sufficient conditions in which kXYk_{X}Y is inherently knowable (and thus equivalent to KXYK_{X}Y): as we’ll see, such a ‘paradisiacal’ epistemic situation is guaranteed in a special kind of metric dependence models (“pseudo-locally Lipschitz models”).

4 The logic of continuous dependence

In this section we develop a formal language for an epistemic Logic of Continuous Dependence 𝖫𝖢𝖣\mathsf{LCD}, that can express both (exact) functional dependence and (continuous) epistemic dependence. The language we will use includes the syntax of the Logic of Functional Dependence 𝖫𝖥𝖣\mathsf{LFD} introduced in Section 3.1, but the semantics is topological, and the syntax is enriched with topological interior modalities KXφK_{X}\varphi (for ‘knowability’ of a proposition φ\varphi given an empirical variable XX) and continuous-dependence atoms KXYK_{X}Y (for ‘epistemic dependence’ between variables XX and YY). We provide a sound and complete proof system for reasoning about these forms of dependence, and we illustrate our setting with some examples.

4.1 Syntax and semantics

We assume given a (finite or infinite) set of basic variables Var, and a relational vocabulary (Pred, ar) consisting of a set Pred of predicate symbols, together with an arity map ar : Pred → N, associating to each symbol P ∈ Pred a natural number ar(P) ∈ N. We denote by X ⊆ Var finite sets of variables in Var, and denote by x⃗ = (x_1 … x_n) finite tuples of variables.

Definition 4.1 (Syntax of 𝖫𝖢𝖣\mathsf{LCD}).

The logic of continuous dependence has a language defined recursively by the following clauses:

\varphi::=\ P\vec{x}\ |\ \neg\varphi\ |\ \varphi\wedge\varphi\ |\ D_{X}\varphi\ |\ K_{X}\varphi\ |\ D_{X}Y\ |\ K_{X}Y

where nn is the arity of PP and x=(x1,,xn)\vec{x}=(x_{1},\ldots,x_{n}) is any nn-tuple of variables in VarVar.

Abbreviations   Knowledge Kφ, known dependence K(X;Y) and exact knowledge K(Y) are defined in this language as abbreviations:

K\varphi\,:=\,K_{\emptyset}\varphi,\qquad K(X;Y)\,:=\,KK_{X}Y,\qquad K(Y)\,:=\,K_{\emptyset}Y

It is easy to see that these abbreviations fit the semantic definitions of the corresponding operators given in previous sections, e.g. we have

sKKXY iff SK(X;Y).s\models KK_{X}Y\,\,\,\mbox{ iff }\,\,\,S\models K(X;Y).
Definition 4.2 (Topo-dependence models).

A typed topological model is a multi-typed structure M = (𝔻_x, τ_x, I)_{x∈Var}, indexed by variables x ∈ Var (each thought of as having its own distinct type), where: for each x ∈ Var, 𝔻_x is a set, giving the range of values of variable x; τ_x is a topology on 𝔻_x; and I is an interpretation function, mapping each predicate symbol P of arity n into an n-ary relation on the union ⋃_{x∈Var} 𝔻_x.

A topological dependence model (‘topo-dependence model’ for short) is a structure 𝐌=(M,S,𝐱)xVar{\mathbf{M}}=(M,S,\mathbf{x})_{x\in Var}, consisting of: a typed topological model M=(𝔻x,τx,I)xVarM=(\mathbb{D}_{x},\tau_{x},I)_{x\in Var}; a set SS of abstract states; and, for each variable xVarx\in Var, a corresponding empirical variable, i.e. a surjective map 𝐱:S𝔻x\mathbf{x}:S\to\mathbb{D}_{x}.

For every set XVarX\subseteq Var of variables, 𝐗:S𝔻X={(x(s))xX:sS}\mathbf{X}:S\to\mathbb{D}_{X}=\{(x(s))_{x\in X}:s\in S\} stands for the single joint empirical variable associated to the set {𝐱:xX}\{\mathbf{x}:x\in X\}, as defined in the preceding section: 𝐗(s):=(𝐱(s))xX\mathbf{X}(s):=(\mathbf{x}(s))_{x\in X} for all sSs\in S. We also use all the associated notation introduced in the previous section: in particular, for every XVarX\subseteq Var, we consider the 𝐗\mathbf{X}-topology on SS, given by τ𝐗S:={𝐗1(U):Uτ𝐗}\tau_{\mathbf{X}}^{S}:=\{\mathbf{X}^{-1}(U):U\in\tau_{\mathbf{X}}\}, and the 𝐗\mathbf{X}-interior Int𝐗:𝒫(S)𝒫(S)Int_{\mathbf{X}}:\mathcal{P}(S)\to\mathcal{P}(S) (defined as the interior operator in the 𝐗\mathbf{X}-topology).

Definition 4.3 (Semantics of 𝖫𝖢𝖣\mathsf{LCD}).

Truth of a formula φ\varphi in a topo-dependence model 𝐌=(M,S){\mathbf{M}}=(M,S) at a state sSs\in S (written 𝐌,sφ{\mathbf{M}},s\models\varphi, where we drop the index 𝐌{\mathbf{M}} when the model is understood) is defined as in 𝖫𝖥𝖣\mathsf{LFD} for the atoms, as well as for all the operators of 𝖫𝖥𝖣\mathsf{LFD}, and in addition by the following clauses:

sKXφ iff sInt𝐗φs\models K_{X}\varphi\,\,\,\mbox{ iff }\,\,\,s\in Int_{\mathbf{X}}\|\varphi\|
sKXY iff Oτ𝐗S(s)F:𝐗(O)𝔻Y s.t. F continuous and F𝐗=𝐘 holds on O.s\models K_{X}Y\,\,\,\mbox{ iff }\,\,\,\exists O\in\tau_{\mathbf{X}}^{S}(s)\,\exists F:\mathbf{X}(O)\to\mathbb{D}_{Y}\mbox{ s.t. }F\mbox{ continuous and }F\circ\mathbf{X}=\mathbf{Y}\mbox{ holds on }O.

The paradigmatic examples of topological spaces are metric spaces, which give us our main type of topo-dependence models:

Special case: metric models A metric dependence model ('metric model', for short) is a topo-dependence model 𝐌, in which each of the topologies τ_x is given by a metric d_x on 𝔻_x. The underlying typed metric model M = (𝔻_x, d_x, I)_{x∈Var} comes with designated metrics d_x instead of topologies, but each of them induces of course a topology τ_x having the family of open d_x-balls as its basis. For joint empirical variables 𝐗 given by finite sets of variables X ⊆ Var, the induced (product) topology τ_𝐗 on 𝔻_X is also metric, and is easily seen to be generated by the open balls wrt (the restriction to 𝔻_X of) the so-called Chebyshev distance (also known as the supremum metric, or L_∞ metric):151515 This metric is uniformly equivalent to the more standard Euclidean metric on the product space.

dX(u,v):=sup{d(ux,vx):xX}, for all v=(vx)xX,u=(ux)xXΠxX𝔻x.d_{X}(\vec{u},\vec{v})\,:=\,sup\{d(u_{x},v_{x}):x\in X\},\mbox{ for all }\vec{v}=(v_{x})_{x\in X},\vec{u}=(u_{x})_{x\in X}\in\Pi_{x\in X}\mathbb{D}_{x}.

Note that, for (finite) non-empty sets of variables XX\neq\emptyset, this amounts to taking the maximum of all distances

dX(u,v):=max{d(ux,vx):xX},d_{X}(\vec{u},\vec{v})\,:=\,max\{d(u_{x},v_{x}):x\in X\},

while for X=X=\emptyset, recall that 𝔻={λ}\mathbb{D}_{\emptyset}=\{\lambda\} where λ=()\lambda=() is the empty string, so that we have that

d(λ,λ):=sup=0 (so that d is still a metric!)d_{\emptyset}(\lambda,\lambda)\,:=\,sup\,\emptyset=0\,\mbox{ (so that $d_{\emptyset}$ is still a metric!)}
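
A small sketch of this supremum distance on hypothetical value tuples, including the empty-set convention, may help fix the definition:

```python
# Sketch: the Chebyshev (supremum) distance between tuples of real values,
# with the convention sup {} = 0 for the empty set of variables.

def d_joint(u, v, X):
    """d_X(u, v) = sup { |u_x - v_x| : x in X }, and 0 when X is empty."""
    return max((abs(u[x] - v[x]) for x in X), default=0.0)

u = {"x": 1.0, "y": 10.0,  "t": 0.3}
v = {"x": 1.5, "y": 10.25, "t": 0.3}

print(d_joint(u, v, ["x", "y", "t"]))  # 0.5: the largest coordinate-wise distance
print(d_joint(u, v, ["y"]))            # 0.25
print(d_joint(u, v, []))               # 0.0: the empty joint variable
```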

The corresponding (weak) 𝐗\mathbf{X}-topologies τ𝐗S\tau_{\mathbf{X}}^{S} induced on the state set SS are not necessarily metric, but they are pseudo-metric, being generated by open balls of the form

X(s,ε):={wS:d𝐗S(s,w)<ε},\mathcal{B}_{X}(s,\varepsilon)\,:=\,\{w\in S:d_{\mathbf{X}}^{S}(s,w)<\varepsilon\},

with sSs\in S, ε>0\varepsilon>0 and where the pseudo-metric d𝐗Sd_{\mathbf{X}}^{S} on SS is given by setting:

d𝐗S(s,w):=d𝐗(𝐗(s),𝐗(w)).d_{\mathbf{X}}^{S}(s,w)\,:=\,d_{\mathbf{X}}(\mathbf{X}(s),\mathbf{X}(w)).

Note that as a result, d𝐗Sd_{\mathbf{X}}^{S} coincides with the supremum pseudo-metric induced by single-variable pseudo-distances:

d𝐗S(s,w)=sup{dxS(s,w):xX}d_{\mathbf{X}}^{S}(s,w)\,=\,sup\{d_{x}^{S}(s,w):x\in X\}

With these notations, it is easy to see that in a metric model our topological semantics amounts to the following:

sDXφ iff wS(dXS(s,w)=0wφ)s\models D_{X}\varphi\,\,\mbox{ iff }\,\,\forall w\in S\,(d_{X}^{S}(s,w)=0\Rightarrow w\models\varphi)
sDXY iff wS(dXS(s,w)=0dYS(s,w)=0)s\models D_{X}Y\,\,\mbox{ iff }\,\,\forall w\in S\,(d_{X}^{S}(s,w)=0\Rightarrow d_{Y}^{S}(s,w)=0)
sKXφ iff δ>0wS(dXS(s,w)<δwφ)s\models K_{X}\varphi\,\,\mbox{ iff }\,\,\exists\delta>0\,\forall w\in S\,(d_{X}^{S}(s,w)<\delta\Rightarrow w\models\varphi)
sKXY iff δ0>0tX(s,δ0)ε>0δ>0wX(s,δ0)(dXS(w,t)<δdYS(w,t)<ε)s\models K_{X}Y\,\mbox{ iff }\,\exists\delta_{0}>0\,\forall t\in\mathcal{B}_{X}(s,\delta_{0})\,\forall\varepsilon>0\,\exists\delta>0\,\forall w\in\mathcal{B}_{X}(s,\delta_{0})\,(d_{X}^{S}(w,t)<\delta\,\Rightarrow\,d_{Y}^{S}(w,t)<\varepsilon)

In many applications, it is useful to think of the states employed so far as being ‘concrete’, i.e., represented by means of tuples of values for each of the fundamental variables; in other words, assignments of values to variables:

Special case: concrete models A concrete topo-dependence model is a structure 𝐌=(M,S){\mathbf{M}}=(M,S), consisting of: a typed topological model M=(𝔻x,τx,I)xVarM=(\mathbb{D}_{x},\tau_{x},I)_{x\in Var}; and a set SΠxVar𝔻xS\subseteq\Pi_{x\in Var}\mathbb{D}_{x} of ‘concrete’ states, i.e. type-respecting assignments of values to variables. This structure is subject to the additional requirement that 𝔻x={s(x):sS}\mathbb{D}_{x}=\{s(x):s\in S\} for every xVarx\in Var.161616Once again, this condition is innocuous (we can always restrict the codomain 𝔻x\mathbb{D}_{x} to the actual range of values taken by xx), and is just meant to enforce the surjectivity of the corresponding empirical variables. A concrete topo-dependence model is indeed a special case of a topo-dependence model: we can associate to each variable xVarx\in Var an empirical variable 𝐱:S𝔻x\mathbf{x}:S\to\mathbb{D}_{x}, given by 𝐱(s):=s(x)\mathbf{x}(s):=s(x).

Example: Euclidean models An example of topo-dependence models that are both metric and concrete is given by Euclidean models. For a finite set Var, a Euclidean model is simply given by a subset S ⊆ ℝ^Var of the Euclidean space of dimension |Var|, consisting of assignments s : Var → ℝ of real values s(x) ∈ ℝ to variables x ∈ Var. The metric on each copy of ℝ is the standard distance d(s(x), w(x)) = |s(x) − w(x)|, and so the topology τ_x is the subspace topology induced on 𝔻_x = {s(x) : s ∈ S} by the standard Euclidean topology on ℝ.

4.2 The proof system 𝐋𝐂𝐃\mathbf{LCD}

We are now prepared to present our axiomatic proof system 𝐋𝐂𝐃\mathbf{LCD} for the Logic of Continuous Dependence. The axioms and rules are given in Table 2. The system includes, as group (I) in the Table, the axioms and rules of the system 𝖫𝖥𝖣\mathsf{LFD} for the Logic of Functional Dependence, as already listed in Table 1. The additional axioms and rules are divided into three further groups: (II) Axioms for propositional knowability, (III) Axioms for knowable (epistemic) dependence, and (IV) Connecting Axioms (that connect the epistemic/inexact notions with the corresponding exact notions).

Note the analogy of the LCD axioms in groups (II) and (III) of Table 2 with the similar LFD axioms in groups (II) and (III) of Table 1 (which are of course also included in LCD, within group (I) of Table 2). This is to be expected, since epistemic dependence is in a sense the "feasible" generalization of exact dependence to inexact variables; mathematically speaking, K_X is the topological analogue of D_X. But note first that not all the LFD axioms have analogues for epistemic dependence: axiom (5) for D_X has no analogue for K_X, and the same applies to the LFD axiom of Determined Atoms; and second, that even in the case of the analogous axioms which do hold for inexact dependence, their meaning is different in the topological setting. We discuss this shift in some detail.

(I) Axioms and rules of 𝐋𝐅𝐃\mathbf{LFD}
(II) S4S4 Axioms for Knowability:
(KK-Necessitation) From φ\varphi, infer KXφK_{X}\varphi
(KK-Distribution) KX(φψ)(KXφKXψ)K_{X}(\varphi\to\psi)\to(K_{X}\varphi\to K_{X}\psi)
(Veracity) KXφφK_{X}\varphi\to\varphi
(Positive Introspection) KXφKXKXφK_{X}\varphi\to K_{X}K_{X}\varphi
(III) Axioms for Knowable Dependence:
(KK-Inclusion) KXYK_{X}Y, provided that YXY\subseteq X
(KK-Additivity) (KXYKXZ)KX(YZ)\left(K_{X}Y\wedge K_{X}Z\right)\to K_{X}(Y\cup Z)
(KK-Transitivity) (KXYKYZ)KXZ\left(K_{X}Y\wedge K_{Y}Z\right)\to K_{X}Z
(Knowability of Epistemic Dependence) KXYKXKXYK_{X}Y\to K_{X}K_{X}Y
(Knowability Transfer) KXY(KYφKXφ)K_{X}Y\to\left(K_{Y}\varphi\to K_{X}\varphi\right)
(IV) Connecting Axioms:
(Knowable Determination) KXφDXφK_{X}\varphi\to D_{X}\varphi
(Knowable Dependence) KXYDXYK_{X}Y\to D_{X}Y
(Knowledge of Necessity) AφKφA\varphi\to K\varphi
(Knowledge of Constants) C(Y)K(Y)C(Y)\to K(Y)
Table 2: The proof system 𝐋𝐂𝐃\mathbf{LCD}, with notations Aφ:=DφA\varphi:=D_{\emptyset}\varphi, C(Y):=DYC(Y):=D_{\emptyset}Y, Kφ:=KφK\varphi:=K_{\emptyset}\varphi, K(Y):=KYK(Y):=K_{\emptyset}Y.

What the LCD axioms mean Mathematically, the S4 axioms for K_X in Group II capture the main properties of the interior operator, as given by the Fréchet axioms of topology. In addition, Veracity asserts that knowable facts are true.171717This axiom is listed here only for expository reasons, since it is in fact derivable from the axiom of Knowable Dependence, together with the LFD axiom of Factivity. Similarly, one can easily see that the old LFD rule of D-Necessitation is now derivable in LCD from K-Necessitation together with Knowable Dependence.

The Axioms of Knowable Dependence in Group III are similar to the corresponding LFD axioms for exact dependence, and they are familiar from Database Theory, where they are known as the Armstrong axioms of dependence. Inclusion expresses "epistemic superiority": (the approximate answers to) a larger set of questions carry all the information (and more) that is carried by (the approximate answers to) a subset of those questions. Additivity says that, if (the inexact answers to) two sets of questions are knowable, then (the inexact answers to) all the questions in their union are knowable. Transitivity captures a version of Dretske's Xerox Principle [Dretske, 1983]: if X carries all the information about Y, and Y carries all the information about Z, then X carries all the information about Z.

Mathematically, these Axioms of Knowable Dependence capture the main properties of the category of topological spaces, with the product space as a categorical product: the Inclusion axiom holds because of the continuity of the projection maps π^X_Y : 𝔻_X → 𝔻_Y for Y ⊆ X (where recall that 𝔻_X := X(S) is a subspace of Π_{x∈X} 𝔻_x with the product topology); the Additivity axiom captures the universality property of the product, saying that a pair of continuous functions F : U_X → 𝔻_Y and G : V_X → 𝔻_Z on two opens U_X, V_X ⊆ 𝔻_X gives rise to a continuous function (F,G) : U_X ∩ V_X → 𝔻_Y × 𝔻_Z into the product space, given by (F,G)(x) := (F(x), G(x)); and Transitivity captures the closure of continuous functions under composition. Next, the axiom of Knowability of Epistemic Dependence expresses the fact that K_X Y is 'topologically local' (it holds at a point only if it holds on a whole open neighborhood of that point), and that it is stronger, in general, than simple functional dependence. Epistemically, this says that epistemic dependence K_X Y is indeed a knowable dependence: whenever it holds, one can come to know it (after observing some good enough approximation of X(s)). Going further, the Knowability Transfer axiom captures the continuity of the dependence: if F is continuous, then the inverse image F^{-1}(U) of any open subset U of its codomain is open. Epistemically: if Y is knowable given X, then any proposition that is knowable given Y is also knowable given X.

Finally, the Connection Axioms in Group IV spell out the relationship between epistemic (inexact) dependence and knowability and their exact analogues in LFD. Knowable Determination says that knowability implies determination: if a statement is knowable based on some approximation of the value of X, then the truth of that statement is determined by the (exact) value of X. Knowable Dependence is the analogous claim for empirical variables: if Y is epistemically dependent on X, then the exact value of Y could be determined if one were given the exact value of X. Mathematically, this just follows from the fact that a continuous dependence is a (functional) dependence. The last two Connection Axioms tell us that the converses of these statements hold (only) in the special case when X = ∅ is the empty set of variables: mathematically, this is because τ_∅ is by definition the discrete topology (so that Int_∅ is the global quantifier on S, while K_∅Y simply means that Y is a constant). Epistemically, Knowledge of Necessity says that, if a statement is necessary (true in all epistemically possible worlds), then it is known. Similarly, Knowledge of Constants says that, if X is a constant (having the same value in all epistemically possible worlds), then its value is known.

Theorem 4.4.

The proof system \mathbf{LCD} in Table 2 is a sound and complete axiomatization of the logic of continuous dependence on any of the following classes of models: (a) arbitrary topo-dependence models; (b) metric dependence models; (c) concrete topo-dependence models; (d) pseudo-locally Lipschitz metric dependence models. Moreover, the logic \mathsf{LCD} is decidable.

Proof.

Proofs for these assertions are found in the Appendix to this paper. ∎

Remark 4.5 (A first-order perspective).

Why do we succeed in axiomatizing the logic of topological dependence? It is well-known that many modal logics, including the basic dependence logic 𝖫𝖥𝖣\mathsf{LFD}, admit a faithful effective translation into the language of first-order logic. Thus, in principle, these logics are completely axiomatizable, and other properties follow as well. For our topological semantics, an obvious translation would use a three-sorted first-order language, with sorts for states, values of variables, and open sets. However, the standard requirement on topological spaces that opens be closed under arbitrary unions is not first-order. But then, in our motivation for topological models, we thought of open sets as outcomes of possible measurements. Usually not all open sets qualify for this. For instance, in the reals, open intervals are just a base for the topology: closed under finite intersections, but not under arbitrary unions. If we think of our second sort as an open base for the topology, requirements remain first-order expressible. Moreover, it is easy to add explicit first-order descriptions for the behavior of variables viewed as names of functions, and as a result, at least in principle, we are still in a first-order setting that predicts axiomatizability in general for the systems of reasoning investigated in this paper. Of course, to find explicit complete logics, we still have to do real work.

Note that the point-continuity atom kXYk_{X}Y introduced in Section 3.6 (capturing the epistemically opaque notion of “conditional knowability of YY given XX”) is not a part of the syntax of 𝖫𝖢𝖣\mathsf{LCD}. And indeed, we can use part (d) of Theorem 4.4 to show that:

Proposition 4.6.

Conditional knowability kXYk_{X}Y is not expressible in the language of 𝖫𝖢𝖣\mathsf{LCD}. As a consequence, the language 𝖫𝖪𝗄\mathsf{LKk} based only on the operators KXφK_{X}\varphi and conditional knowability atoms kXYk_{X}Y (i.e. the language obtained by replacing in 𝖫𝖢𝖣\mathsf{LCD} the local continuity atoms KXYK_{X}Y by point-continuity atoms kXYk_{X}Y) is more expressive than 𝖫𝖢𝖣\mathsf{LCD}.181818The fact that 𝖫𝖪𝗄\mathsf{LKk} is at least as expressive as 𝖫𝖢𝖣\mathsf{LCD} follows already from the fact that KXYK_{X}Y is definable in 𝖫𝖪𝗄\mathsf{LKk} (as KXkXYK_{X}k_{X}Y).

Proof.

If k_X Y were definable in \mathsf{LCD}, then the formula k_X Y ∧ ¬K_X Y would be expressible in \mathsf{LCD}. This formula is obviously satisfiable: just take the counterexample constructed in Section 3.6 (involving Thomae's function). By the soundness of the proof system \mathbf{LCD}, this formula must then be consistent with the axioms and rules of this proof system. But by part (d) of Theorem 4.4, the formula must be satisfiable in a pseudo-locally Lipschitz model: this is a contradiction (as pseudo-locally Lipschitz models validate the implication k_X Y ⇒ K_X Y, thus contradicting our formula).∎

This result suggests an obvious question:

Open problem   What is the complete logic of the language LKk based only on the operators K_X φ and conditional knowability atoms k_X Y? Is that logic decidable?

4.3 A concrete example

After all these abstract notions, some simple concrete illustration may be helpful.

Example 4.7 (Car speed detection).

A police officer in an aircraft determines a car's velocity v by measuring the time t it takes the car to pass between two highway markings, say 1 km apart. An abstract topo-dependence model for this situation has a set W of possible worlds or 'states', as well as two variables v, t : W → (0,∞), whose common range 𝔻_v = 𝔻_t = (0,∞) comes with the standard Euclidean topology τ_v = τ_t.191919We assume that it is known that the car is already in motion, so we exclude the case v = t = 0. We could also consider a concrete model for the same situation: the set of possible states consists then of possible assignments for the basic variables W = {(x,y) ∈ (0,∞)×(0,∞) : x⋅y = 1}, with v(x,y) = x, t(x,y) = y. A basis of observable properties consists of the open intervals (α,β) ⊆ (0,∞), with rational endpoints. The variables stand in a (continuous) dependence relation, for all states s ∈ W:

s(v)=1s(t)s(v)=\frac{1}{s(t)}

Assuming that the legal speed limit is 120120 km per hour, there is a unary predicate SS (‘speeding’) denoting S=(120,)𝔻vS=(120,\infty)\subseteq\mathbb{D}_{v}. In terms of ontic dependence, the exact value of the speed carries the information on whether the car is speeding or not:

SvDvSv,¬SvDv¬Sv.Sv\Rightarrow D_{v}Sv,\,\,\,\,\,\,\,\neg Sv\Rightarrow D_{v}\neg Sv.

Since the speed functionally depends on the time, we also have the (global) dependence:

WD(t;v).W\models D(t;v).

Putting these together (and using the Transfer Axiom), we see that the model also validates:

SvDtSv,¬SvDt¬Sv.Sv\Rightarrow D_{t}Sv,\,\,\,\,\,\,\,\neg Sv\Rightarrow D_{t}\neg Sv.

In other words: the exact value of the time (needed to pass between the two markings) carries the information on whether the car is speeding or not.

By itself, this does not make the police officer know that the car is speeding. But as the speeding property SS is an open set, the officer can come to know that the car is speeding (if this is indeed the case):

SvKvSv.Sv\Rightarrow K_{v}Sv.

So the fact that the car is speeding is knowable: in principle, the officer can learn this by performing an accurate enough measurement of the speed vv. However, the velocity vv will typically not be directly available to him for measurement. Luckily though, the functional dependence between speed and time is continuous, so we have the (global, and therefore known) epistemic dependence:

WKtvW\models K_{t}v

Once again, by putting the last two statements together, and using this time the Knowability Transfer axiom, we obtain:

Sv\Rightarrow K_{t}Sv.

In other words: by doing an accurate enough measurement of the time (needed to pass between the two markings), the policeman can come to know that the car is speeding.

In contrast, the police officer may never get to know that the car is not speeding, since the complement (0,∞) ∖ S = (0,120] is not an open set: the implication

¬SvKv¬Sv\neg Sv\Rightarrow K_{v}\neg Sv

is not valid in the above model, and neither is the time-dependent knowability version

¬SvKt¬Sv\neg Sv\Rightarrow K_{t}\neg Sv

Of course, this epistemic difficulty only arises when the car is exactly at the speed limit v = 120: in this case, the car is not speeding, but the policeman will never be sure of this, no matter the accuracy of his measurements. On the other hand, v = 120 is the only counterexample in this case: since the interior Int((0,120]) = (0,120), every state s with v(s) ≠ 120 will satisfy the implication ¬Sv ⇒ K_v ¬Sv (and thus also the implication ¬Sv ⇒ K_t ¬Sv). Hence, if the speed is not exactly 120, the policeman can eventually come to know whether the car is speeding or not (assuming there is no absolute limit to the accuracy of his measurements or perception).
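
A numerical rendering of this reasoning, with hypothetical measurement figures (time t in hours, so v = 1/t in km/h, as in the model above): from a time measurement with a known error bound, the officer derives an interval of compatible speeds, and knows Sv as soon as that whole interval lies above 120 km/h.

```python
# Sketch of Example 4.7 with hypothetical numbers: markings 1 km apart,
# t measured in hours with error bound err, so v = 1/t in km/h.

LIMIT = 120.0                 # the speed limit; 'speeding' is the open set (120, +inf)

def v_interval(t_measured, err):
    """The interval of speeds compatible with a measured time t +/- err (0 < err < t)."""
    return (1.0 / (t_measured + err), 1.0 / (t_measured - err))

def knows_speeding(t_measured, err):
    """The officer knows Sv iff every compatible speed exceeds the limit."""
    lo, _ = v_interval(t_measured, err)
    return lo > LIMIT

print(knows_speeding(0.0075, 0.0002))  # True: all compatible speeds exceed ~129.9 km/h
print(knows_speeding(0.0084, 0.0002))  # False: compatible speeds straddle the limit
```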

There are also further statistical aspects to this practical scenario, e.g. with aggregating results of repeated measurements, but these are beyond the scope of this paper.

5 Epistemic Independence

Known dependence versus epistemic dependence  Recall the exact dependence operators DXφD_{X}\varphi and DXYD_{X}Y of the logic 𝖫𝖥𝖣\mathsf{LFD}, as well as the associated ‘epistemic’ abbreviations

Kφ:=Dφ,K(Y):=DYK\varphi\,:=\,D_{\emptyset}\varphi,\,\,\,\,\,\,\,\,\,\,K(Y)\,:=\,D_{\emptyset}Y

(both equivalent to the corresponding notions defined in 𝖫𝖢𝖣\mathsf{LCD}), and the ‘global’ dependence

D(X;Y):=KDXY,D(X;Y)\,:=\,KD_{X}Y,

which captures a form of known dependence: it is known that the exact value of XX determines uniquely the exact value of YY. Let us now draw some comparisons.

It is easy to see that our notion of epistemic dependence implies the existence of a knowable exact dependence, as shown by the following validities:

KXφKXDXφDXφ,K_{X}\varphi\,\,\,\Rightarrow\,\,\,K_{X}D_{X}\varphi\,\,\,\Rightarrow\,\,\,D_{X}\varphi,
KXYKXDXYDXY,K_{X}Y\,\,\,\Rightarrow\,\,\,K_{X}D_{X}Y\,\,\,\Rightarrow\,\,\,D_{X}Y,
K(X;Y)D(X;Y).K(X;Y)\Rightarrow D(X;Y).

But the converse implications fail: epistemic dependence KXYK_{X}Y is stronger than knowable dependence KXDXYK_{X}D_{X}Y, and known global epistemic dependence K(X;Y)K(X;Y) is stronger than known global dependence D(X;Y)D(X;Y). The first notion requires as a surplus that good enough approximations of XX are enough to give any desired estimate of YY. Topologically, the distinction shows in the existence of non-continuous dependence maps.

In fact, even more extreme cases are possible: the exact dependence might be known, while at the same time it is known that there is no knowable epistemic dependence. This situation is topologically characterized by the following result.

Proposition 5.1.

The following are equivalent in topo-dependence models 𝐌{\mathbf{M}}:

  1. 1.

    D(X;Y)K¬KXYD(X;Y)\wedge K\neg K_{X}Y holds in 𝐌{\mathbf{M}} (at any/all states)

  2. 2.

    there exists some map F:𝔻X𝔻YF:\mathbb{D}_{X}\to\mathbb{D}_{Y} s.t. FX=YF\circ X=Y and FF has a dense set of discontinuities (i.e., Cl{d𝔻X:F discontinuous at d}=𝔻XCl\{d\in\mathbb{D}_{X}:F\mbox{ discontinuous at }d\}=\mathbb{D}_{X}).

There is nevertheless one type of variables for which known/knowable exact dependencies are equivalent to their epistemic counterparts. A variable X:S(𝔻X,τX)X:S\to(\mathbb{D}_{X},\tau_{X}) is called exact if its value topology is discrete (i.e. τX=𝒫(𝔻X)\tau_{X}=\mathcal{P}(\mathbb{D}_{X})).

Proposition 5.2.

For exact variables XX, the following equivalences are valid:

KXφDXφK_{X}\varphi\Leftrightarrow D_{X}\varphi
KXYDXYK_{X}Y\Leftrightarrow D_{X}Y

Complete ignorance as topological independence Going further with the above discrepancies, we now move to even more extreme situations where, while XX globally determines YY, it is known that no observable estimate of XX will give any information about YY! The dual opposite of epistemic dependence is the case when no observable approximation of the value of XX can give any information concerning the value of YY. We refer to this notion as epistemic (or ‘topological’) independence, and denote it by IgXYIg_{X}Y (from ‘ignorance’).

The notion IgXYIg_{X}Y is not a simple negation ¬KXY\neg K_{X}Y, but a much stronger concept, capturing a type of zero-knowledge situation: XX is completely uninformative as far as YY is concerned. Formally:

sIgXY iff Y(U)=𝔻Y for all UτXS(s).s\models Ig_{X}Y\,\,\,\mbox{ iff }\,\,\,Y(U)=\mathbb{D}_{Y}\mbox{ for all }U\in\tau_{X}^{S}(s).

This is to be distinguished from simple ontic independence IXYI_{X}Y between the exact values in the sense of the original dependence logic 𝖫𝖥𝖣\mathsf{LFD}, given by

sIXY iff YX1(X(s))=𝔻Y.s\models I_{X}Y\,\,\,\mbox{ iff }\,\,\,Y\circ X^{-1}(X(s))=\mathbb{D}_{Y}.

Again we draw some comparisons. For a start, both notions have a global counterpart:

I(X;Y):=KIXY,Ig(X;Y):=KIgXY.I(X;Y)\,\,:=\,\,KI_{X}Y,\,\,\,\,\,\,\,\,\,\,\,Ig(X;Y)\,\,:=\,\,KIg_{X}Y.

The relationships between these notions are given by the following implications:

IXYIgXY,I(X;Y)Ig(X;Y)I_{X}Y\,\Rightarrow\,Ig_{X}Y,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,I(X;Y)\,\Rightarrow\,Ig(X;Y)
I(X;Y)IXY,Ig(X;Y)IgXYI(X;Y)\,\Rightarrow\,I_{X}Y,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,Ig(X;Y)\,\Rightarrow\,Ig_{X}Y

In particular then, epistemic independence is weaker than ontic independence! Moreover, as in the case of probabilistic independence, both global versions are symmetric, i.e. we have:

I(X;Y)I(Y;X),Ig(X;Y)Ig(Y;X).I(X;Y)\,\Leftrightarrow\,I(Y;X),\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,Ig(X;Y)\,\Leftrightarrow\,Ig(Y;X).

The most interesting conceptual observation is that we can have full (known) exact dependence while at the same time having full (known) epistemic independence.

Proposition 5.3.

The following are equivalent for topo-dependence models 𝐌{\mathbf{M}}:

  1. 1.

    D(X;Y)Ig(X;Y)D(X;Y)\wedge Ig(X;Y) holds in 𝐌{\mathbf{M}} (at any/all states)

  2. 2.

    there exists some everywhere-surjective map F:𝔻X𝔻YF:\mathbb{D}_{X}\to\mathbb{D}_{Y} s.t. FX=YF\circ X=Y.

Here, we used the following mathematical notion from Analysis: F:(𝔻,τ)(𝔻,τ)F:(\mathbb{D},\tau)\to(\mathbb{D}^{\prime},\tau^{\prime}) is everywhere-surjective iff we have F(U)=𝔻F(U)=\mathbb{D}^{\prime} for all non-empty open sets Uτ{}U\in\tau\setminus\{\emptyset\}. Everywhere-surjectivity implies everywhere-discontinuity (hence also the above-mentioned density of the set of discontinuities), but it is a much stronger condition.

Example 5.4.

Each everywhere-surjective function F:𝔻X𝔻YF:\mathbb{D}_{X}\to\mathbb{D}_{Y} yields an example of known exact dependence with (knowledge of) epistemic independence. A simple example is the Dirichlet function: 𝔻X\mathbb{D}_{X} is the set of real numbers with the standard topology, 𝔻Y\mathbb{D}_{Y} is the set {0,1}\{0,1\} with the discrete topology (which is the same as the subspace topology induced by the standard topology), with F(d)=0F(d)=0 iff dd is rational, and F(d)=1F(d)=1 otherwise. More interesting examples are the ones in which the domain and codomain of FF are the same (i.e., 𝔻X=𝔻Y\mathbb{D}_{X}=\mathbb{D}_{Y} is the common range of both variables), preferably a nice space. See [Bernardi and Rainaldi, 2018] for many examples, where the common range is QQ, RR or other interesting spaces.
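The following toy Python sketch (our own illustration; the helper names are ours) makes Example 5.4 concrete. Since exact (ir)rationality cannot be decided on floating-point inputs, we simply feed the Dirichlet-style map points whose status we know by construction, and observe that even a tiny open estimate of X leaves both Y-values open.

```python
import math

# A toy sketch of the Dirichlet-style dependence of Example 5.4 (illustrative only):
# F(d) = 0 if d is rational, 1 if d is irrational.  Exact values of X determine Y,
# yet any open interval of X-values contains points of both kinds, so the image of
# every observable estimate of X is all of D_Y = {0, 1}.

def F(d: float, is_rational: bool) -> int:
    """The dependence map; the flag records what we know about d by construction."""
    return 0 if is_rational else 1

def image_of_estimate(a: float, b: float) -> set:
    """Image under F of the open interval (a, b), witnessed by one point of each kind."""
    rational_pt = (a + b) / 2                   # rational for the decimal endpoints used below
    irrational_pt = a + (b - a) / math.sqrt(2)  # irrational (mathematically) and inside (a, b)
    return {F(rational_pt, True), F(irrational_pt, False)}

# Even an extremely tight estimate of X is completely uninformative about Y:
print(image_of_estimate(0.0, 1e-9))    # {0, 1}
```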

Example 5.5 (Pythagorean drivers).

Unlearnability in the sense of, say, the Dirichlet function may be rare, though in our earlier car speeding example, one could think of a Platonic police officer trying to book a Pythagorean driver who dematerializes at any irrational value of tt.

Unlearnability is related to the behavior of chaotic dynamical systems, [Lorenz, 1995], where predictions of future behavior may be impossible. However, we must leave this interface of Epistemic Topology and mathematics to another occasion.

Instead, we end with a technical logical issue. It is known that the joint logic 𝖫𝖥𝖣𝖨\mathsf{LFDI} of ontic independence and ontic dependence is undecidable, [Baltag and van Benthem, 2021], but topological independence has a behavior of its own. This suggests the following question about the purely topological component of our logic:

Open problem: Is the logic of topological independence and dependence decidable?

6 Uniform dependence: a stronger notion of knowability

Up till now, we identified the approximations of the current value X(s)X(s) of variable XX at state ss with open neighborhoods UτX(X(s))U\in\tau_{X}(X(s)). In this sense, our global continuous dependence K(X;Y)K(X;Y) captures known dependence, both in the sense of knowing that/whether and in the sense of knowing how: for any desired estimate UYτYU_{Y}\in\tau_{Y}, the agent knows how to determine that Y(s)UYY(s)\in U_{Y} if given a sufficiently good estimate VXτXV_{X}\in\tau_{X} of X(s)X(s). This is full how-to-knowledge: in fact, the agent knows how to find the weakest appropriate estimate VXV_{X} that will determine whether Y(s)UYY(s)\in U_{Y} or not.

But the topological notion of estimation is local: there is no way to compare the accuracies of estimates of different values X(s)X(s) and X(t)X(t) in different states. Things change radically if we introduce a global notion of accuracy or error ε\varepsilon, as e.g. the one given by real-numbered open intervals (xε,x+ε)(x-\varepsilon,x+\varepsilon) of a given length, or more generally by open balls B(x,ε)={y:d(x,y)<ε}B(x,\varepsilon)=\{y:d(x,y)<\varepsilon\} in a metric space. In such a setting, full how-to knowledge of a dependence would require that: for any given desired accuracy εY\varepsilon_{Y} in determining the value of YY, the agent knows how to find an appropriate accuracy δX\delta_{X}, such that an estimate of XX with accuracy δX\delta_{X} (in any state ss) will yield an estimate of YY with accuracy εY\varepsilon_{Y} (in the same state ss). Essentially, this stronger form of (global) epistemic dependence requires uniform continuity of the underlying dependence map.

6.1 Uniform dependence in metric spaces

The notion of uniform dependence takes us beyond topology: its semantics requires the move to metric dependence models.

Empirical variables over metric spaces As already noticed in Section 4, one may consider as a special case empirical variables of the form X:S(𝔻X,dX)X:S\to(\mathbb{D}_{X},d_{X}), whose range of values carries the metric topology induced by a metric dXd_{X} on 𝔻X\mathbb{D}_{X}. From now on we will restrict ourselves to such “metric variables”. Closeness relations of the form d(X(s),X(t))<εd(X(s),X(t))<\varepsilon will then give us global notions of accuracy for the values of XX: margins of error for measurements (or more generally, for observations) of the value of XX.

Sets of variables as joint variables As we already saw in Section 4 (when giving the semantics of 𝖫𝖢𝖣\mathsf{LCD} on metric dependence models), a finite set X={Xi:iI}X=\{X_{i}:i\in I\} of empirical variables over metric spaces can itself be regarded as a single metric variable. More precisely, given such a finite set X={Xi:iI}X=\{X_{i}:i\in I\} of metric variables Xi:S(𝔻i,di)X_{i}:S\to(\mathbb{D}_{i},d_{i}), each coming with its own metric space, we can associate to it a single metric variable, also denoted by XX, whose set of values is as usual 𝔻X:={(xi(s))iI:sS}ΠiI𝔻i\mathbb{D}_{X}:=\{(x_{i}(s))_{i\in I}:s\in S\}\subseteq\Pi_{i\in I}\mathbb{D}_{i}, while the metric is (the restriction to 𝔻X\mathbb{D}_{X} of) the so-called Chebyshev distance (also known as the supremum metric):

dX(u,v):=sup{d(ui,vi):iI}, for all v=(vi)iI,u=(ui)iIΠiI𝔻i.d_{X}(\vec{u},\vec{v})\,:=\,sup\{d(u_{i},v_{i}):i\in I\},\mbox{ for all }\vec{v}=(v_{i})_{i\in I},\vec{u}=(u_{i})_{i\in I}\in\Pi_{i\in I}\mathbb{D}_{i}.
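As a small illustration (with data structures and names of our own choosing), the supremum metric on joint values can be computed as follows; we assume each component variable comes with its own distance function.

```python
# A small helper illustrating the Chebyshev (supremum) distance on joint values
# of a finite set of metric variables; the dictionaries and names are ours.

def chebyshev(u: dict, v: dict, dist: dict) -> float:
    """d_X(u, v) = sup { d_i(u_i, v_i) : i in I }, for finitely many components i."""
    return max(dist[i](u[i], v[i]) for i in u)

# Example with two components t and v, each carrying the usual distance on the reals:
usual = lambda a, b: abs(a - b)
print(chebyshev({'t': 0.5, 'v': 2.0},
                {'t': 0.75, 'v': 1.5},
                {'t': usual, 'v': usual}))    # 0.5
```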

Strong epistemic dependence We say that there is a strong (or ‘uniform’) epistemic dependence between empirical variables XX and YY, and write U(X;Y)U(X;Y), if for every margin of error εY>0\varepsilon_{Y}>0 for YY-measurements there exists some margin of error δX>0\delta_{X}>0 for XX-measurements, such that every estimate of XX with accuracy δX\delta_{X} entails an estimate of YY with accuracy εY\varepsilon_{Y}:

ε>0δ>0sStS(dX(X(s),X(t))<δdY(Y(s),Y(t))<ε).\forall\varepsilon>0\,\exists\delta>0\,\forall s\in S\,\forall t\in S\,\left(\,d_{X}(X(s),X(t))<\delta\Rightarrow d_{Y}(Y(s),Y(t))<\varepsilon\,\right).

In contrast, if we apply the definition of known epistemic dependence K(X;Y)K(X;Y) in the metric topology, and unfold the definition in terms of the underlying accuracies, we obtain:

ε>0sSδ>0tS(dX(X(s),X(t))<δdY(Y(s),Y(t))<ε).\forall\varepsilon>0\,\forall s\in S\,\exists\delta>0\,\forall t\in S\,\left(\,d_{X}(X(s),X(t))<\delta\Rightarrow d_{Y}(Y(s),Y(t))<\varepsilon\,\right).

So the difference between known epistemic dependence and strong epistemic dependence lies in the position of the universal state quantifier s\forall s (before or after the existential accuracy quantifier δ\exists\delta), in analogy to the difference between continuity and uniform continuity. This analogy is not accidental, witness the following result in the style of earlier characterizations:

Proposition 6.1.

Given two empirical variables X:S(𝔻X,dX)X:S\to(\mathbb{D}_{X},d_{X}) and Y:S(𝔻Y,dY)Y:S\to(\mathbb{D}_{Y},d_{Y}), the following are equivalent:

  1. 1.

    U(X;Y)U(X;Y)

  2. 2.

    there exists a uniformly continuous map F:(𝔻X,dX)(𝔻Y,dY)F:(\mathbb{D}_{X},d_{X})\to(\mathbb{D}_{Y},d_{Y}) s.t. FX=YF\circ X=Y holds on SS

  3. 3.

    Y:(S,dXS)(𝔻Y,dY)Y:(S,d_{X}^{S})\to(\mathbb{D}_{Y},d_{Y}) is uniformly continuous (wrt the pseudo-metric dXSd_{X}^{S} induced by XX on SS)

  4. 4.

    the identity function id:(S,dXS)(S,dYS)id:(S,d_{X}^{S})\to(S,d_{Y}^{S}) is uniformly continuous (wrt the pseudo-metric dXSd_{X}^{S} on its domain and the pseudo-metric dYSd_{Y}^{S} on its codomain).

Note that strong epistemic dependence U(X;Y)U(X;Y) is a global notion. We can also introduce a topologically local notion sUXYs\models U_{X}Y of locally strong epistemic dependence:

δ0>0ε>0δ>0t,wX(s,δ0)(dX(X(t),X(w))<δdY(Y(t),Y(w))<ε).\exists\delta_{0}>0\,\forall\varepsilon>0\,\exists\delta>0\,\forall t,w\in\mathcal{B}_{X}(s,\delta_{0})\,\left(\,d_{X}(X(t),X(w))<\delta\Rightarrow d_{Y}(Y(t),Y(w))<\varepsilon\,\right).

Once again, if we denote by τX\tau_{X} the topology induced by the metric dXd_{X}, this is equivalent to:

OτX(s) s.t. OU(X;Y) (i.e., it holds globally on the open subspace O).\exists O\in\tau_{X}(s)\mbox{ s.t. }O\models U(X;Y)\mbox{ (i.e., it holds \emph{globally} on the open \emph{subspace} $O$)}.
Proposition 6.2.

The following are equivalent:

  1. 1.

    sUXYs\models U_{X}Y

  2. 2.

    there exists an open neighborhood OO of X(s)X(s), and also a uniformly continuous map F:O𝔻YF:O\to\mathbb{D}_{Y} s.t. FX=YF\circ X=Y holds on X1(O)X^{-1}(O)

  3. 3.

    Y:(S,dXS)(𝔻Y,dY)Y:(S,d_{X}^{S})\to(\mathbb{D}_{Y},d_{Y}) is locally uniformly continuous around ss

  4. 4.

    the identity function id:(S,dXS)(S,dYS)id:(S,d_{X}^{S})\to(S,d_{Y}^{S}) is locally uniformly continuous around ss.

Moreover, in analogy to the case of (simple) epistemic dependence, strong epistemic dependence is inherently known, while locally strong epistemic dependence is inherently knowable:

U(X;Y)KU(X;Y),UXYKXUXY.U(X;Y)\Rightarrow KU(X;Y),\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,U_{X}Y\Rightarrow K_{X}U_{X}Y.

However, there are some significant differences with our earlier topological notions. For both simple and epistemic dependence, the global versions are equivalent to known local ones:

D(X;Y)KDXY,K(X;Y)KKXYD(X;Y)\Leftrightarrow KD_{X}Y,\,\,\,\,\,\,\,\,\,\,\,\,\,K(X;Y)\Leftrightarrow KK_{X}Y

Knowing a dependence (between XX and YY) is the same as knowing that you could know the dependence (if given enough info on XX). However, this is not the case for strong epistemic dependence in our new sense of uniform continuity:

U(X;Y)is not equivalent withKUXYU(X;Y)\,\,\,\mbox{is \emph{not} equivalent with}\,\,\,KU_{X}Y

The explanation is that knowing how to find YY from XX (with any desired accuracy) is not the same as knowing that you could know how to do it, if given enough info on XX. The second notion is in general much weaker! We can provide a counterexample to the equivalence of U(X;Y)U(X;Y) and KUXYKU_{X}Y by taking a fresh look at our car speeding scenario:

Example 6.3 (Car speeding revisited).

A priori, the police officer has a known epistemic dependence K(t;v)K(t;v), but not a strong one: U(t;v)U(t;v) is false! For any speed estimate a±εa\,\pm\,\varepsilon, the officer can find a δ\delta such that, after measuring time tt with accuracy δ\delta, (s)he will know whether the velocity is in the interval (aε,a+ε)(a-\varepsilon,a+\varepsilon). But, given a desired accuracy ε\varepsilon for the speed vv (with no further information on the interval), the officer cannot know in advance what accuracy δ\delta is needed in measuring time tt to determine velocity within ε\varepsilon. This is because, given a fixed ε>0\varepsilon>0, there is no δ>0\delta>0 s.t., for all t(0,)t\in(0,\infty):

1t1t+δ<ε\frac{1}{t}-\frac{1}{t+\delta}<\varepsilon

However, vv is strongly knowable from tt in the sense of a locally strong dependence: in fact, UtvU_{t}v holds globally, so this strong dependence is known:

KUtvKU_{t}v

Indeed, after performing just one reasonably accurate measurement (a,b)(a,b) of time tt (with 0<a<b<0<a<b<\infty), the officer will know, for any desired velocity accuracy ε\varepsilon, how to determine the needed time accuracy δ\delta; i.e. one s.t. the following will hold for all t[a,b]t\in[a,b]:

1t1t+δ<ε\frac{1}{t}-\frac{1}{t+\delta}<\varepsilon

This finishes our counterexample, but there are more interesting things going on in our car-speeding scenario: note that in this example both UtvU_{t}v and KtvK_{t}v hold globally, so the two notions are actually equivalent on this model, i.e. we have

UtvKtvU_{t}v\Leftrightarrow K_{t}v

Of course, the left-to-right implication always holds, but the converse gives us a kind of “epistemic bootstrapping”: when the value space we are working in is conducive to epistemic inquiry, epistemic dependence automatically gives us strong (uniform) knowability.
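A minimal numerical check of the counterexample in Example 6.3 (our own sketch; the sampling grids and helper names are ours): for v = 1/t, no single δ serves a given ε over all of (0,∞), while on a measured interval [a,b] the choice δ = ε·a² does, since |1/t − 1/s| = |t−s|/(ts) ≤ |t−s|/a² there.

```python
# A minimal numerical check of Example 6.3 (illustrative only): pointwise vs.
# uniform delta's for the dependence v = 1/t.

def delta_works(delta: float, eps: float, times) -> bool:
    """Check |1/t - 1/s| < eps whenever |t - s| < delta, over the sampled times."""
    return all(abs(1/t - 1/s) < eps
               for t in times for s in times if abs(t - s) < delta)

eps = 1.0

# On (0, oo) no single delta works: pairs of very small times break any candidate.
small_times = [10**(-k) for k in range(1, 8)]
print(delta_works(0.01, eps, small_times))          # False

# After one coarse measurement locating t inside [a, b], delta = eps * a**2 suffices,
# because |1/t - 1/s| = |t - s| / (t * s) <= |t - s| / a**2 on [a, b].
a, b = 0.5, 2.0
grid = [a + i * (b - a) / 500 for i in range(501)]
print(delta_works(eps * a**2, eps, grid))           # True
```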

We now proceed to generalize this last observation.

Important special case: locally compact spaces When the underlying spaces are locally compact (i.e. every value in 𝔻X\mathbb{D}_{X} has a compact neighborhood), epistemic dependence implies locally strong dependence: the formula

KXYUXYK_{X}Y\Rightarrow U_{X}Y

holds on locally compact spaces. In contrast, global (known) epistemic dependence still does not imply strong epistemic dependence: even on locally compact spaces, we still have that

K(X;Y)⇏U(X;Y).K(X;Y)\not\Rightarrow U(X;Y).

Of course, the global implication K(X;Y)U(X;Y)K(X;Y)\Rightarrow U(X;Y) does hold on compact spaces, but those are less pervasive, being specific to very circumscribed situations: in particular, our Euclidean space is not compact, though it is locally compact.

Interpretation: bootstrapping knowability Since Euclidean spaces are locally compact, the natural topology of our space is arguably epistemically fertile: fit to ‘bootstrap knowability’. Every epistemic dependence between Euclidean variables XX and YY is a locally strong dependence. Indeed, if such a dependence is knowable (i.e., if KXYK_{X}Y holds between Euclidean variables), then it is also knowable that (given enough information about XX) you could come to know how to find YY from XX with any desired accuracy. Since Euclidean variables are the ones most often encountered in empirical science, one could say that Nature makes us a free gift: it is enough to gain knowledge of an epistemic dependence to obtain potential ‘how-to knowability’, and thus, if you wish, pass from science to engineering.

Remark 6.4 (Agents and environments).

These observations have a more general epistemological import. Epistemic notions may ‘upgrade’ automatically when the learning environment is favorable. What this highlights is that epistemic success involves an interplay of two parties: the nature of the knowing agents and of the environments (adversarial or helpful) they operate in, cf. also [Barwise and Perry, 1983]. This point transcends our technical setting: in a sense, all epistemology is about the balance of the structure of agents and the world they live in.

The next class of models is an example of an even more favorable type of environment, one that facilitates a kind of ‘epistemic super-bootstrapping’:

A stronger form of epistemic bootstrapping: pseudo-locally Lipschitz models A metric dependence model is called pseudo-locally Lipschitz if every basic empirical variable 𝐲\mathbf{y} is pseudo-locally Lipschitz wrt every other basic variable 𝐱\mathbf{x}; i.e.: for any two basic variables x,yVx,y\in V, the map 𝐲:(S,dxS)(𝔻y,dy)\mathbf{y}:(S,d_{x}^{S})\to(\mathbb{D}_{y},d_{y}) is pseudo-locally Lipschitz. It is easy to see that this is equivalent to requiring that the identity map id:(S,dXS)(S,dYS)id:(S,d_{X}^{S})\to(S,d_{Y}^{S}) is pseudo-locally Lipschitz for all non-empty sets of variables X,YVX,Y\subseteq V. As a consequence, such models afford an even stronger form of epistemic bootstrapping:

Every pseudo-locally Lipschitz model validates the implication kXYUXYk_{X}Y\Rightarrow U_{X}Y, for all X,YVX,Y\subseteq V. So in such models, our three ‘localized’ forms of continuous dependence (point-continuous dependence kXYk_{X}Y, locally-continuous dependence KXYK_{X}Y and locally uniformly-continuous dependence UXYU_{X}Y) are all equivalent, for all X,YVX,Y\subseteq V.

Remark 6.5 (’Paradisiacal’ environments versus our world).

Pseudo-locally Lipschitz models describe a kind of “paradisiacal” epistemic situation, in which the weakest and most fragile form of dependence (conditional knowability kXYk_{X}Y, which typically is epistemically opaque) becomes in fact transparent, being bootstrapped all the way to the strongest form of know-how dependence (UXYU_{X}Y). Unfortunately, while our Euclidean space is locally compact, typical Euclidean variables encountered in empirical science are not usually pseudo-locally Lipschitz with respect to each other. In this sense, although our world is epistemically fertile, it is not paradisiacal!

6.2 The logic of uniform dependence

We proceed to extend the earlier language of 𝖫𝖢𝖣\mathsf{LCD} to capture the properties of uniform dependence. We start with a very simple extension: just add uniform dependence atoms U(X;Y)U(X;Y). The resulting logical system 𝖫𝖴𝖣\mathsf{LUD} is a straightforward extension of 𝖫𝖢𝖣\mathsf{LCD}.

Definition 6.6 (Syntax of 𝖫𝖴𝖣\mathsf{LUD}).

Given a set of predicates Pred={P,Q,}Pred=\{P,Q,\ldots\} and a set of variables V={x,y,}V=\{x,y,\ldots\}, the Logic of Uniform Dependence (𝖫𝖴𝖣\mathsf{LUD}) has a language given by adding uniform dependence atoms U(X;Y)U(X;Y) to the language of 𝖫𝖢𝖣\mathsf{LCD}; in other words, the set of formulas is recursively given by:

φ::=Px|¬φ|φφ|DXφ|DXY|KXφ|KXY|U(X;Y)\varphi::=\ P\vec{x}\ |\ \neg\varphi\ |\ \varphi\wedge\varphi\ |\ D_{X}\varphi\ |\ D_{X}Y\ |\ K_{X}\varphi\ |\ K_{X}Y\ |\ U(X;Y)

where X,YVarX,Y\subseteq Var are finite sets of variables, PPredP\in Pred is a predicate and xVar\vec{x}\in Var^{*} is a string of variables of the same length as the arity of PP.

Definition 6.7 (Semantics of 𝖫𝖴𝖣\mathsf{LUD}).

Given a metric dependence model 𝐌=(M,S,𝐱)xVar\mathbf{M}=(M,S,\mathbf{x})_{x\in Var} over some typed metric model M=(𝔻x,dx,I)xVarM=(\mathbb{D}_{x},d_{x},I)_{x\in Var}, we define the satisfaction relation sφs\models\varphi by using the usual 𝖫𝖢𝖣\mathsf{LCD} recursive clauses for atomic formulas and the 𝖫𝖢𝖣\mathsf{LCD} operators (using the metric topologies τX\tau_{X} on 𝔻X\mathbb{D}_{X}), and interpreting the uniform dependence atoms U(X;Y)U(X;Y) in the obvious way:

sU(X;Y) iff ε>0δ>0s,tS(dXS(s,t)<δdYS(s,t)<ε),s\models U(X;Y)\,\,\,\mbox{ iff }\,\,\,\forall\varepsilon>0\,\exists\delta>0\,\forall s,t\in S\,(d_{X}^{S}(s,t)<\delta\Rightarrow d_{Y}^{S}(s,t)<\varepsilon),

where, as in Section 4, we use

dXS(s,t):=sup{dx(𝐱(s),𝐱(t)):xX}d_{X}^{S}(s,t)\,\,:=\,\,sup\{d_{x}(\mathbf{x}(s),\mathbf{x}(t)):x\in X\}

to denote the pseudo-metric induced on the state space by the Chebyshev distance on 𝔻XΠxX𝔻x\mathbb{D}_{X}\subseteq\Pi_{x\in X}\mathbb{D}_{x}.

Theorem 6.8.

The proof system 𝐋𝐔𝐃\mathbf{LUD} in Table 3 is a sound and complete axiomatization of the Logic of Uniform Dependence on metric dependence models (as well as on concrete metric models). The same proof system is also sound and complete with respect to pseudo-locally Lipschitz models. Moreover, the logic 𝖫𝖴𝖣\mathsf{LUD} is decidable.

The completeness proof is deferred to the Appendix. Essentially, it is based on refining the model construction in the completeness proof for 𝐋𝐂𝐃\mathbf{LCD} to obtain a pseudo-metric model, whose associated metric dependence model happens to be pseudo-locally Lipschitz.

(I) All axioms and rules of the system 𝐋𝐂𝐃\mathbf{LCD}
(II) Axioms for uniform dependence:
(UU-Inclusion) U(X;Y)U(X;Y), provided that YXY\subseteq X
(UU-Additivity) (U(X;Y)U(X;Z))U(X;YZ)\left(\,U(X;Y)\wedge U(X;Z)\,\right)\,\to\,U(X;Y\cup Z)
(UU-Transitivity) (U(X;Y)U(Y;Z))U(X;Z)\left(\,U(X;Y)\wedge U(Y;Z)\,\right)\to U(X;Z)
(Uniform Dependence is Known) U(X;Y)KU(X;Y)U(X;Y)\to KU(X;Y)
(Uniformity implies Continuity) U(X;Y)K(X;Y)U(X;Y)\to K(X;Y)
(Uniformity of Knowledge) K(Y)U(X;Y)K(Y)\to U(X;Y)
Table 3: The proof system 𝐋𝐔𝐃\mathbf{LUD}.

Note again the analogy of the 𝐋𝐔𝐃\mathbf{LUD} axioms in group (II) of Table 3 with some of the 𝐋𝐂𝐃\mathbf{LCD} axioms in groups (III) and (IV) of Table 2 (which are of course also included in 𝐋𝐔𝐃\mathbf{LUD}, within group (I) of Table 3). And indeed, if we replace U(X;Y)U(X;Y) by K(X;Y)K(X;Y) in the above axioms, we obtain valid theorems of 𝐋𝐂𝐃\mathbf{LCD}. Once again, this is to be expected, since uniform dependence is a strengthening of epistemic dependence, in the same sense in which epistemic dependence was a strengthening of exact dependence. But note that more properties are lost in the transition from KK to UU, in comparison to the earlier transition from DD to KK. There is e.g. no analogue of the Transfer axiom for uniform dependence, since U(X;Y)U(X;Y) does not come with an accompanying modality (in contrast to DXYD_{X}Y and KXYK_{X}Y, which come together with DXφD_{X}\varphi and respectively KXφK_{X}\varphi). The reason is that Boolean variables have a discrete range of truth values, so there is no sense in ‘uniformizing’ the meaning of KXφK_{X}\varphi. (Things would be different if we moved to a multi-valued logic with a continuum of truth values: one could then talk about “uniform knowability” UXφU_{X}\varphi of a proposition.)

Another difference is that, unlike the case of exact dependence and epistemic dependence (where we took the local versions DXYD_{X}Y and KXYK_{X}Y as primitive notions in our syntax), in 𝖫𝖴𝖣\mathsf{LUD} we chose to take the global version U(X;Y)U(X;Y) as basic, and indeed we did not even include the local version UXYU_{X}Y in our syntax. One of the reasons is that, as we already mentioned, UXYU_{X}Y cannot define U(X;Y)U(X;Y) in any obvious way (unlike the case of exact dependence and epistemic dependence); and since we are in the end mainly interested in (global) uniformity, we must take U(X;Y)U(X;Y) as primitive.

Nevertheless, this limitation suggests extending our language of 𝖫𝖴𝖣\mathsf{LUD} with local atoms UXYU_{X}Y, obtaining an extended language 𝖫𝖴𝖣+\mathsf{LUD^{+}}. The following result shows that this move does increase expressivity:

Proposition 6.9.

Local uniform dependence UXYU_{X}Y is not expressible in the language of 𝖫𝖴𝖣\mathsf{LUD}. So the language 𝖫𝖴𝖣+\mathsf{LUD^{+}} of both local and global uniform dependence (obtained by adding local atoms UXYU_{X}Y to 𝖫𝖴𝖣\mathsf{LUD}) is more expressive than the language of 𝖫𝖴𝖣\mathsf{LUD}.

Proof.

The proof is completely similar to the corresponding inexpressivity result for kXYk_{X}Y (Proposition 4.6), involving instead the formula KXY¬UXYK_{X}Y\wedge\neg U_{X}Y. This formula is obviously satisfiable on metric dependence models (just take any example in which XX is mapped into YY by some function FF that is locally continuous around some point, but not locally uniformly continuous around it). By the soundness of the proof system 𝐋𝐔𝐃\mathbf{LUD}, this formula must then be consistent with the axioms and rules of this proof system. By the last part of Theorem 6.8 (completeness of 𝐋𝐔𝐃\mathbf{LUD} wrt pseudo-locally Lipschitz models), if UXYU_{X}Y were expressible in 𝖫𝖴𝖣\mathsf{LUD}, then the formula KXY¬UXYK_{X}Y\wedge\neg U_{X}Y would be satisfiable in a pseudo-locally Lipschitz model. But this is a contradiction, since pseudo-locally Lipschitz models validate the implication KXYUXYK_{X}Y\Rightarrow U_{X}Y.∎

Remark 6.10.

In themselves, the inexpressivity results claimed in Propositions 4.6 and 6.9 are not surprising: there was no reason to expect that the missing notions (kXYk_{X}Y and respectively UXYU_{X}Y) were expressible in languages (𝖫𝖢𝖣\mathsf{LCD} and respectively 𝖫𝖴𝖣\mathsf{LUD}) that did not explicitly include them. But such results on expressive limitations are often hard to prove, involving intricate counterexamples. So it comes as a pleasant surprise that in these two cases, the expressivity-limitation results simply follow from the special additional features of the model constructed in the completeness proof!

Furthermore, Proposition 6.9 leads us to yet another obvious question:

Open problem: What is the complete logic of the language 𝖫𝖴𝖣+\mathsf{LUD^{+}} of both local and global uniform dependence? Is that logic decidable?

7 Directions for future work

Given the analysis presented here, many natural follow-up topics arise, of which we briefly list a few.

Further system issues The systems introduced in this paper raise many further technical questions. One immediate issue concerns definability. As we have noted, 𝖫𝖢𝖣\mathsf{LCD} performs a double epistemization of existing modal dependence logics. First we added topological modalities for knowability KXφK_{X}\varphi based on measuring values of variables XX, and after that, we also introduced continuous dependence KXYK_{X}Y. Is the latter step necessary, or more precisely, is KXYK_{X}Y definable in the logic 𝖫𝖥𝖣\mathsf{LFD} extended only with the topological interior modalities? We believe that the answer is negative, but we have not pursued the definability theory of our topological languages. Other natural open problems concern axiomatization.

Another immediate open problem is providing a complete axiomatization of the topological logic 𝖫𝖪𝗄\mathsf{LKk} of conditional knowability based on the operators KXφK_{X}\varphi and the point-continuity atoms for conditional knowability kXYk_{X}Y. As we mentioned, this language is strictly more expressive than 𝖫𝖢𝖣\mathsf{LCD}. But our methods for proving completeness and decidability do not seem to work for this extended logic. The reason is that, as noted earlier, kXYk_{X}Y is an ‘unstable’ property: it is very sensitive to any small changes of values. This sensitivity gives rise to technical difficulties. Is this logic decidable, or at least axiomatizable?

Other open problems that we noted were axiomatizing the logic of topological independence, as well as finding an axiomatization of the logic 𝖫𝖴𝖣+\mathsf{LUD^{+}} of both local and global uniform dependence.

A final set of questions concerns computability. For several logics in this paper, we do not know if they are decidable, and for none of our decidable logics do we know the exact computational complexity.

Richer languages for metric spaces

While the language of 𝖫𝖢𝖣\mathsf{LCD} is a good abstract vehicle for dependence in topological spaces in general, even the extended logic 𝖫𝖴𝖣\mathsf{LUD} of uniformly continuous dependence seems poor in expressive power over the rich structure of metric spaces. One extension adds explicit accuracy variables and constants to the language, with corresponding modalities. One can then talk about ε\varepsilon-closeness, compute with margins of error, and determine the complete modal logic for explicit versions of continuity and uniform continuity. (This setting covers Margin-of-Error Principles for knowledge that have been widely discussed in the philosophical literature; cf. [Baltag and van Benthem, 2020] for a logical analysis.)

Uniform spaces

There is also a qualitative approach to the epistemic surplus structure in metric spaces by using so-called uniform spaces, [Isbell, 1964]. Here a family of reflexive symmetric ‘closeness relations’ is given on the topologized state space, closed under sequential compositions representing combined refinement. We can now define continuity and uniform continuity in terms of closeness intuitions (close arguments should yield close values), and study dynamic modalities for the effects of closeness refinement. We have a logical formalism for this setting and a plausible sound axiomatization, but proving completeness has so far eluded us.

Computable dependence as Scott-continuity

While uniform continuity strengthens knowable dependence in terms of state-independent approximations, another strengthening would make knowability a form of computability. In this interpretation, the exact value of a variable might be the limit of an infinite process of computation. The computable approximations can then be understood as approximate values, living in the same domain as the exact value (rather than as open neighborhoods of the exact value), corresponding to partial results obtainable at finite stages of the computation. A natural semantic setting for this interpretation is given by Scott domains, and Domain Theory [Abramsky and Jung, 1995] fits within our topological framework when we use the Scott topology. Again this allows us to use relational models in which the modal semantics is matched, not by the given topology, but by the computational convergence given by the Scott topology. Computability (to any desired degree of approximation) of a variable YY from a variable XX then amounts to the existence of a Scott-continuous dependence function between them. We have a first completeness result for a logic of Scott-continuous dependence which extends the logic 𝖫𝖢𝖣\mathsf{LCD} with an axiom reflecting the special structure of domains. (Domain Theory is an elegant abstract approach to computability, but it does not involve actual manipulation of code. For that, we might have to provide a logical analysis in our style for Recursive Analysis, [Goodstein, 1961], getting closer to the syntax of dependence functions as laws that we can compute with.)

We plan to explore these issues in a forthcoming paper [Baltag and van Benthem, 2024b]. The next topics we list here concern more drastic extensions of our framework.

Point-free topology

In the approach of this paper, dependence in empirical settings assumes the original set-based functional dependence and adds extra conditions of continuity. However, the very spirit of approximation might seem to make the use of underlying points debatable. One could also work in a point-free topology, [Vickers, 1996], where there are only opens with some suitable (lattice) structure, and points only arise through representation theorems. Then, the requirement of continuity has to be replaced with conditions on maps from open approximations for values to open approximations for arguments. Again, our current topo-dependence models might then arise only in the limit as the result of some representation construction. For some first thoughts on this in a category-theoretic perspective, see [Ye, 2022].

Learning dependence functions

Learning the actual state of the world, the paradigmatic issue in this paper, is a feasible task when relevant dependencies exist and are known in an informative manner. But in science, we also want to learn the regularities themselves. One way to go here is ‘lifting’ our setting of single dependence models to families of these, as in the dependence universes introduced in [Baltag and van Benthem, 2021], where we may not know which precise dependencies govern our actual world. In this lifted setting, there is now a richer repertoire of relevant update actions: one can perform measurements, but one might also learn about regularities through mathematical reasoning or in other ways.

Dynamical systems

Much of science is about dynamical systems transitioning to new states over time. While our logics can include temporal variables denoting points in time on a par with other variables, there is also a case for enriching our dependence logics with temporal operators describing the future and past of system evolution, where the universe of states has a global state transition function. What this would require is a topological extension of the dependence logic 𝖣𝖥𝖣\mathsf{DFD} for dynamical systems in [Baltag and van Benthem, 2024a], which would also link up with the more abstract temporal logic of topological dynamical systems of [Kremer and Mints, 2007].

Probabilistic dependence and statistical learning

The topological view of opens as results of measurements pursued in this paper says little about how measurements are combined in scientific practice. A major challenge emerges here: how should one interface our approach with the use of statistical techniques in Measurement Theory?

8 Conclusion

This paper has presented an extensive conceptual analysis of dependence in topological models for empirical inquiry. We gave new formal languages incorporating our main notions, and proved representation, completeness and decidability results for the resulting logics of continuous and uniformly continuous dependence.

These offerings can be viewed in several ways. First, we extended the area of Epistemic Topology with notions and systems that we believe to be of philosophical interest. We believe that our logics sit at the intersection of several epistemological perspectives. Our introduction of dependence merges two traditions of thinking about information and knowledge that have often been considered separate or even incompatible: Epistemic Logic and Situation Theory. Also, our two-step topologization of dependence logic is reminiscent of the move from classical to constructive systems like Intuitionistic Logic, where open sets replace arbitrary sets, and where, equally crucially, continuous functions replace arbitrary functions. In addition, one can see our systems as a further step toward a general logic underneath Formal Learning Theory. Finally, while these are all interfaces with formal epistemology, we believe that our questions-based perspective also links up with General Epistemology, and it might be very interesting, for instance, to see what the moves to continuous functions and even to uniformly continuous functions mean in a more informal common sense setting.

At the same time, our technical results also enrich the logical study of mathematical theories, such as topological semantics for modal logics, [van Benthem and Bezhanishvili, 2007]. The system 𝖫𝖢𝖣\mathsf{LCD} is a multi-index topological modal logic which can also talk about continuous dependencies between variables. This suggests extended dependence versions of classical results in the area like the Tarski-McKinsey Theorem for completeness over standard metric spaces, [McKinsey and Tarski, 1944]. The models constructed in our completeness proof do have logic-induced metric structure, but it remains to be seen if completeness holds wrt the standard Euclidean metric. But also new types of modal correspondence results arise. For instance, the epistemic desideratum that knowledge-that of propositions should imply uniform know-how for inquiry corresponds, in a precise modal sense, with a restriction to (locally) compact classes of metric spaces. Other connections of this technical sort have been identified at earlier points in the paper, such as interfaces with Domain Theory and with Topological Dynamic Logic.

Thus, Epistemic Topology, as pursued in the style of this paper, is both Epistemology endowed with topological tools and Topology/Analysis enriched with epistemological perspectives.


Acknowledgments We thank the constructive audiences at many venues in Amsterdam, Pittsburgh, Paris, Beijing and Stanford where these ideas or some of their precursors have been presented since 2013. In particular, we thank Adam Bjorndahl, Konstantin Genin, Valentin Goranko, Kevin Kelly, Dazhu Li, J. Väänänen, Fan Yang and Lingyuan Ye for their helpful feedback.

References

  • [Abramsky and Jung, 1995] Abramsky, S. and Jung, A. (1995). Domain Theory. Departments of Computer Science, University of Birmingham and Oxford University.
  • [Baltag et al., 2019] Baltag, A., Gierasimczuk, N., and Smets, S. (2019). Truth-tracking by belief revision. Studia Logica, 107:917 – 947.
  • [Baltag and Renne, 2016] Baltag, A. and Renne, B. (2016). Dynamic epistemic logic. Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/.
  • [Baltag and Smets, 2020] Baltag, A. and Smets, S. (2020). Learning what others know. In Kovacs, L. and Albert, E., editors, Proceedings of LPAR ’23, volume 73 of EPiC Series in Computing, pages 90–100.
  • [Baltag and van Benthem, 2020] Baltag, A. and van Benthem, J. (2020). Some thoughts on the logic of imprecise measurement. In Ying, X., editor, Xuebufendongxi: Essays in Tsinghua Philosophy, pages 329 – 364. Tsinghua University Press.
  • [Baltag and van Benthem, 2021] Baltag, A. and van Benthem, J. (2021). A simple logic of functional dependence. J. Phil. Log., 50:939–1005.
  • [Baltag and van Benthem, 2024a] Baltag, A. and van Benthem, J. (2024a). Dependence logics in temporal settings.
  • [Baltag and van Benthem, 2024b] Baltag, A. and van Benthem, J. (2024b). Topological dependence, approximation dynamics, and domain theory. Working paper, ILLC, University of Amsterdam.
  • [Barwise and Perry, 1983] Barwise, J. and Perry, J. (1983). Situations and Attitudes. The MIT Press, Cambridge MA.
  • [Barwise and Seligman, 1995] Barwise, J. and Seligman, J. (1995). Information Flow. The Logic of Distributed Systems. Cambridge University Press, Cambridge UK.
  • [Bernardi and Rainaldi, 2018] Bernardi, C. and Rainaldi, C. (2018). Everywhere surjections and related topics: Examples and counterexamples. Le Matematiche, LXXIII, pages 71–88.
  • [Blackburn et al., 2001] Blackburn, P., de Rijke, M., and Venema, Y. (2001). Modal Logic. Cambridge University Press.
  • [Dabrowski et al., 1996] Dabrowski, A., Moss, L., and Parikh, R. (1996). Topological reasoning and the logic of knowledge. Annals of Pure and Applied Logic, pages 73–110.
  • [Dretske, 1983] Dretske, F. (1983). Knowledge and the Flow of Information. The MIT Press.
  • [Goodstein, 1961] Goodstein, R. (1961). Recursive Analysis. Dover Books, Minneola NY.
  • [Isbell, 1964] Isbell, J. (1964). Uniform Spaces. American Mathematical Society, Providence.
  • [Kelly, 1996] Kelly, K. (1996). The Logic of Reliable Inquiry. Oxford University Press, Oxford.
  • [Kremer and Mints, 2007] Kremer, P. and Mints, G. (2007). Dynamic topological logic. In Handbook of Spatial Logics, pages 565–606. Springer.
  • [Lorenz, 1995] Lorenz, E. (1995). The Essence of Chaos. The University of Washington Press, Seattle.
  • [McKinsey and Tarski, 1944] McKinsey, J. C. C. and Tarski, A. (1944). The algebra of topology. Ann. of Math., 45:141–191.
  • [Väänänen, 2007] Väänänen, J. (2007). Dependence Logic: Theory and Applications. Cambridge University Press, Cambridge UK.
  • [van Benthem, 2011] van Benthem, J. (2011). Logical Dynamics of Information and Interaction. Cambridge University Press, Cambridge UK.
  • [van Benthem and Bezhanishvili, 2007] van Benthem, J. and Bezhanishvili, G. (2007). Modal logics of space. In Aiello, M., Pratt-Hartman, I., and van Benthem, J., editors, Handbook of Spatial Logics, pages 217–298. Springer Science, Heidelberg.
  • [van Ditmarsch et al., 2015] van Ditmarsch, H., Halpern, J., van der Hoek, W., and Kooi, B., editors (2015). Handbook of Epistemic Logic. College Publications, London.
  • [Vickers, 1996] Vickers, S. (1996). Topology via Logic. Cambridge University Press, Cambridge UK.
  • [Willard, 1970] Willard, S. (1970). General Topology. Addison-Wesley Publishing Co., Reading, Mass.-London-Don Mills, Ont.
  • [Ye, 2022] Ye, L. (2022). A structural study of information in a logical perspective. Bachelor’s thesis, Tsinghua University.

Appendix A Appendix: Proofs of Completeness and Decidability for the Logical Systems 𝖫𝖢𝖣\mathsf{LCD} and 𝖫𝖴𝖣\mathsf{LUD}

It is easy to verify the soundness of the axioms and rules of 𝖫𝖢𝖣\mathsf{LCD} (and respectively 𝖫𝖴𝖣\mathsf{LUD}) on topo-dependence models (and respectively metric dependence models). As for completeness, it is enough to prove (for both logics) completeness with respect to a smaller class of models, namely metric dependence models. However, the usual Modal Logic techniques [Blackburn et al., 2001] (in particular, canonical model construction and filtration) will only give us (finite) non-standard relational models (more precisely, what we will call preorder models). To unify the two types of models (metric/topological and relational), we will introduce a further generalization, that subsumes both under a more abstract notion: topo-models.

A.1 Abstract topo-models

Essentially, topo-models are abstract models that combine a standard relational semantics for the 𝖫𝖥𝖣\mathsf{LFD} modalities DXφD_{X}\varphi (in terms of equivalence relations) together with a standard topological semantics for the 𝖫𝖢𝖣\mathsf{LCD} modalities KXφK_{X}\varphi in terms of topological interior operators, while treating all atoms as just standard atomic sentences whose semantics is given by a valuation, as usual in Modal Logic. We just need to impose a number of constraints on the relations, topology and valuation, to ensure the soundness of our axioms.

Definition A.1 (Topo-models for 𝖫𝖢𝖣\mathsf{LCD} and 𝖫𝖴𝖣\mathsf{LUD}).

A topo-model for 𝖫𝖴𝖣\mathsf{LUD} is a relational-topological structure

𝐌=(W,=X,τXW,)XV,{\mathbf{M}}=(W,=_{X},\tau^{W}_{X},\|\bullet\|)_{X\subseteq V},

where: WW is a set of abstract states (‘possible worlds’); for each set XVX\subseteq V of variables, =X=_{X} is an equivalence relation on WW and τXW𝒫(W)\tau^{W}_{X}\subseteq\mathcal{P}(W) is a topology on WW; and \|\bullet\| is a valuation that associates to each atom α\alpha of the form Px1xnPx_{1}\ldots x_{n}, DXYD_{X}Y, KXYK_{X}Y or U(X;Y)U(X;Y) some subset αW\|\alpha\|\subseteq W of the set of states, in the usual way. These ingredients are required to satisfy the following conditions:

(1)

if s=Xws=_{X}w and x1,,xnXx_{1},\ldots,x_{n}\in X, then we have sPx1xns\in\|Px_{1}\ldots x_{n}\| iff wPx1xnw\in\|Px_{1}\ldots x_{n}\|;

(2)

==_{\emptyset} is the total relation W×WW\times W, and τW={,W}\tau^{W}_{\emptyset}=\{\emptyset,W\} is the trivial (indiscrete) topology on WW. (Given condition (7), the second part of this clause in fact follows from the first part.)

(3)

if s=Xws=_{X}w and sDXYs\in\|D_{X}Y\|, then s=Yws=_{Y}w and wDXYw\in\|D_{X}Y\|;

(4)

Inclusion, Additivity and Transitivity for all atoms of the form DXYD_{X}Y, KXYK_{X}Y or U(X;Y)U(X;Y):
DXY=KXY=U(X;Y)=W\|D_{X}Y\|=\|K_{X}Y\|=\|U(X;Y)\|=W, whenever XYX\supseteq Y;
DX(YZ)=DXYDXZ\|D_{X}(Y\cup Z)\|=\|D_{X}Y\|\cap\|D_{X}Z\|, and similarly for KXY\|K_{X}Y\| and U(X;Y)\|U(X;Y)\|;
DXYDYZDXZ\|D_{X}Y\|\cap\|D_{Y}Z\|\subseteq\|D_{X}Z\|, and similarly for KXY\|K_{X}Y\| and U(X;Y)\|U(X;Y)\|;

(5)

KXYτXW\|K_{X}Y\|\in\tau^{W}_{X};

(6)

if sKXYs\in\|K_{X}Y\|, then every YY-open neighborhood UτYW(s)U\in\tau^{W}_{Y}(s) includes some XX-open neighborhood OτXW(s)O\in\tau^{W}_{X}(s) (i.e., OUO\subseteq U);

(7)

if sOτXWs\in O\in\tau^{W}_{X} and s=Xts=_{X}t, then tOt\in O;

(8)

KXYDXY\|K_{X}Y\|\subseteq\|D_{X}Y\|;

(9)

DYKY\|D_{\emptyset}Y\|\subseteq\|K_{\emptyset}Y\|;

(10)

U(X;Y){,W}\|U(X;Y)\|\in\{\emptyset,W\};

(11)

U(X;Y)KXY\|U(X;Y)\|\subseteq\|K_{X}Y\| (or equivalently: if U(X;Y)=W\|U(X;Y)\|=W, then KXY=W\|K_{X}Y\|=W).

Topo-models for 𝖫𝖢𝖣\mathsf{LCD} are simply obtained by skipping from the above definition all references to the uniform dependence atoms U(X;Y)U(X;Y) and all clauses involving them.

Definition A.2 (Abstract Topo-Semantics).

As already announced, the topo-model semantics of 𝖫𝖴𝖣\mathsf{LUD} is specified (when given a topo-model 𝐌{\mathbf{M}} for 𝖫𝖴𝖣\mathsf{LUD}) by using the standard relational semantics for the operators DXφD_{X}\varphi (seen as modalities for =X=_{X}), as well as the interior semantics for the topological modalities KXφK_{X}\varphi, while the semantics of all atoms α\alpha (be they propositional Px1Px_{1}\ldots or dependence atoms DXY,KXYD_{X}Y,K_{X}Y or U(X;Y)U(X;Y)) is given by the valuation α\|\alpha\|:

sα iff sα (for all atoms αs\models\alpha\,\,\,\mbox{ iff }\,\,\,s\in\|\alpha\|\,\,\mbox{ (for all atoms $\alpha$) }
sDXφ iff wW(s=Xwwφ)s\models D_{X}\varphi\,\,\,\mbox{ iff }\,\,\,\forall w\in W\,(s=_{X}w\Rightarrow w\models\varphi)
sKXφ iff UτXW(s)wUwφs\models K_{X}\varphi\,\,\,\mbox{ iff }\,\,\,\exists U\in\tau^{W}_{X}(s)\,\forall w\in U\,w\models\varphi

Of course, the semantics of Boolean connectives ¬φ\neg\varphi and φψ\varphi\wedge\psi is the usual one. The topo-model semantics of 𝖫𝖢𝖣\mathsf{LCD} is obtained (when given a topo-model 𝐌{\mathbf{M}} for 𝖫𝖢𝖣\mathsf{LCD}) by simply skipping the clause for U(X;Y)U(X;Y) in the above definition.

Topo-dependence models are topo-models Every topo-dependence model of the form 𝐌=(M,S,𝐱)xVar{\mathbf{M}}=(M,S,\mathbf{x})_{x\in Var}, based on a typed topological model M=(𝔻x,τx,I)xVarM=(\mathbb{D}_{x},\tau_{x},I)_{x\in Var}, gives rise to an associated topo-model 𝐌=(S,=X,τXS,){\mathbf{M}}^{\dagger}=(S,=_{X},\tau_{X}^{S},\|\bullet\|) for 𝖫𝖢𝖣\mathsf{LCD}, obtained by putting:

τXS:=𝐗1(τX)\tau_{X}^{S}\,:=\,\mathbf{X}^{-1}(\tau_{X})
s=Xw iff 𝐗(s)=𝐗(w)s=_{X}w\,\,\mbox{ iff }\,\,\mathbf{X}(s)=\mathbf{X}(w)
Px1xn={sW:(𝐱𝟏(s),,𝐱𝐧(s))I(P)}\|Px_{1}\ldots x_{n}\|\,\,=\,\,\{s\in W:(\mathbf{x_{1}}(s),\ldots,\mathbf{x_{n}}(s))\in I(P)\}
DXY={sW:wS(s=Xws=Yw)}\|D_{X}Y\|\,\,=\,\,\{s\in W:\forall w\in S\,(s=_{X}w\Rightarrow s=_{Y}w)\}
KXY={sW:OτXS(s)tOUτYS(t)VτXS(t)VU},\|K_{X}Y\|\,\,=\,\,\{s\in W:\exists O\in\tau_{X}^{S}(s)\,\forall t\in O\,\forall U\in\tau_{Y}^{S}(t)\,\exists V\in\tau_{X}^{S}(t)\,V\subseteq U\},

where τX\tau_{X} is the restriction to 𝔻X\mathbb{D}_{X} of the product topology on ΠxX𝔻x\Pi_{x\in X}\mathbb{D}_{x} and 𝐗\mathbf{X} is the joint empirical variable associated to the set of variables XX.
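To make the passage from a topo-dependence model to its associated topo-model concrete, here is a toy finite sketch (entirely our own, with made-up states and a two-point Sierpiński-style value topology): it pulls the value topology back along the variable map and reads off the induced exact-value equivalence.

```python
# A toy finite sketch (ours) of the associated topo-model construction:
# pull the value topology back along the variable map X, and read off
# the exact-value equivalence s =_X w  iff  X(s) = X(w).

S = ['s1', 's2', 's3']                    # abstract states
X = {'s1': 0, 's2': 0, 's3': 1}           # a variable X : S -> {0, 1}
tau_X = [set(), {0}, {0, 1}]              # a (Sierpinski-style) topology on D_X

# Pullback topology tau_X^S = X^{-1}(tau_X) on the state space:
tau_X_S = [{s for s in S if X[s] in U} for U in tau_X]
print(tau_X_S)        # e.g. [set(), {'s1', 's2'}, {'s1', 's2', 's3'}] (set order may vary)

# Exact X-equality (topological indistinguishability in this pullback topology):
eq_X = {(s, w) for s in S for w in S if X[s] == X[w]}
print(('s1', 's2') in eq_X, ('s1', 's3') in eq_X)    # True False
```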

It should be obvious that the topo-model semantics for 𝖫𝖴𝖣\mathsf{LUD} on the associated topo-model 𝐌{\mathbf{M}}^{\dagger} is the same as the semantics on the original topo-dependence model 𝐌{\mathbf{M}}. Therefore, from now on we will identify a topo-dependence model with the associated topo-model, and thus include topo-dependence models in the class of topo-models. We will later be able to precisely characterize this class among all topo-models: topo-dependence models correspond to standard topo-models for 𝖫𝖢𝖣\mathsf{LCD}.

Definition A.3 (Topo-morphisms).

Given topo-models 𝐌{\mathbf{M}} and 𝐌{\mathbf{M}}^{\prime}, a map π:WW\pi:W\to W^{\prime} is a topo-morphism if it satisfies the following properties:

  1. 1.

    for each set XVarX\subseteq Var of variables, π\pi is an interior map (i.e., a map that is both continuous and open in the standard topological sense) between the topologies τXW\tau^{W}_{X} and τXW\tau^{W^{\prime}}_{X};

  2. 2.

    π\pi is a relational p-morphism (in the usual Modal Logic sense) wrt the relations =X=_{X} and the atoms Px1xnPx_{1}\ldots x_{n}, DXYD_{X}Y, KXYK_{X}Y and (in the case of 𝖫𝖴𝖣\mathsf{LUD} topo-models) U(X;Y)U(X;Y). (A p-morphism, also known as a bounded morphism, is a functional bisimulation between relational structures; see [Blackburn et al., 2001] for details.)

A topo-model 𝐌{\mathbf{M}}^{\prime} is said to be a topo-morphic image of another topo-model 𝐌{\mathbf{M}} if there exists a surjective topo-morphism from 𝐌{\mathbf{M}} to 𝐌{\mathbf{M}}^{\prime}.

Proposition A.4.

If π:WW\pi:W\to W^{\prime} is a topo-morphism between topo-models 𝐌{\mathbf{M}} and 𝐌{\mathbf{M}}^{\prime}, then the states sWs\in W and π(s)W\pi(s)\in W^{\prime} satisfy the same formulas φ\varphi of 𝖫𝖢𝖣\mathsf{LCD}:

s𝐌φ iff π(s)𝐌φs\models_{\mathbf{M}}\varphi\mbox{ iff }\pi(s)\models_{{\mathbf{M}}^{\prime}}\varphi

The proof is an easy induction on the complexity of φ\varphi, and in fact it simply combines the standard proofs of the similar result for relational and topological structures for modal logic. An immediate consequence is:

Corollary A.5.

If 𝐌{\mathbf{M}}^{\prime} is a topo-morphic image of 𝐌{\mathbf{M}}, then the same formulas of 𝖫𝖢𝖣\mathsf{LCD}, and respectively 𝖫𝖴𝖣\mathsf{LUD}, are satisfiable/globally true in 𝐌{\mathbf{M}}^{\prime} and in 𝐌{\mathbf{M}}.

A.2 Special case 1: preorder models

We say that a topo-model 𝐌{\mathbf{M}} is Alexandroff if all topologies τXW\tau_{X}^{W} are Alexandroff (i.e., the open sets are closed under arbitrary intersections). Alexandroff topo-models can be given a purely relational description, as a special case of modal-relational models for 𝖫𝖢𝖣\mathsf{LCD} and 𝖫𝖴𝖣\mathsf{LUD}, seen as purely modal languages:

Definition A.6 (Preorder models).

A preorder model for 𝖫𝖴𝖣\mathsf{LUD} is a relational model 𝐌=(W,=X,X,){\mathbf{M}}=(W,=_{X},\leq_{X},\|\bullet\|), where: WW is a set of abstract states (‘possible worlds’); =X=_{X} are equivalence relations on WW, and X\leq_{X} are preorders on WW (one for each set XVX\subseteq V of variables), where a preorder is a binary relation that is reflexive and transitive; and \|\bullet\| is a valuation that associates to each atom α\alpha of the form Px1xnPx_{1}\ldots x_{n}, DXYD_{X}Y, KXYK_{X}Y or U(X;Y)U(X;Y) some subset αW\|\alpha\|\subseteq W of the set of states, in the usual way. These ingredients are required to satisfy the following conditions:

(1)

if s=Xws=_{X}w and x1,,xnXx_{1},\ldots,x_{n}\in X, then we have sPx1xns\in\|Px_{1}\ldots x_{n}\| iff wPx1xnw\in\|Px_{1}\ldots x_{n}\|;

(2)

both ==_{\emptyset} and \leq_{\emptyset} are equal to the total relation W×WW\times W. (Once again, the second part of this clause in fact follows from the first part together with condition (7).)

(3)

if s=Xws=_{X}w and sDXYs\in\|D_{X}Y\|, then s=Yws=_{Y}w and wDXYw\in\|D_{X}Y\|;

(4)

atoms DXYD_{X}Y, KXYK_{X}Y and U(X;Y)U(X;Y) satisfy Inclusion, Additivity and Transitivity;

(5)

if sXws\leq_{X}w and sKXYs\in\|K_{X}Y\|, then wKXYw\in\|K_{X}Y\|;

(6)

if sXws\leq_{X}w and sKXYs\in\|K_{X}Y\|, then sYws\leq_{Y}w;

(7)

if s=Xws=_{X}w, then sXws\leq_{X}w;

(8)

KXYDXY\|K_{X}Y\|\subseteq\|D_{X}Y\|;

(9)

DYKY\|D_{\emptyset}Y\|\subseteq\|K_{\emptyset}Y\|;

(10)

U(X;Y){,W}\|U(X;Y)\|\in\{\emptyset,W\};

(11)

U(X;Y)KXY\|U(X;Y)\|\subseteq\|K_{X}Y\|

Preorder models for 𝖫𝖢𝖣\mathsf{LCD} are again simply obtained by skipping from the above definition all references to the atoms U(X;Y)U(X;Y) and all clauses involving them.

Definition A.7 (Preorder Semantics).

The preorder semantics for 𝖫𝖴𝖣\mathsf{LUD} uses the standard modal semantics for DXφD_{X}\varphi (as relational modalities for =X=_{X}) and KXφK_{X}\varphi (as relational modalities for the preorders X\leq_{X}), while the semantics of all atoms α\alpha (be they propositional Px1Px_{1}\ldots or dependence atoms DXY,KXYD_{X}Y,K_{X}Y or U(X;Y)U(X;Y)) is again given by the valuation α\|\alpha\|:

sα iff sα (for all atoms αs\models\alpha\,\,\,\mbox{ iff }\,\,\,s\in\|\alpha\|\,\,\mbox{ (for all atoms $\alpha$) }
sDXφ iff wW(s=Xwwφ)s\models D_{X}\varphi\,\,\,\mbox{ iff }\,\,\,\forall w\in W\,(s=_{X}w\Rightarrow w\models\varphi)
sKXφ iff wW(sXwwφ)s\models K_{X}\varphi\,\,\,\mbox{ iff }\,\,\,\forall w\in W\,(s\leq_{X}w\Rightarrow w\models\varphi)

and again the semantics of the Boolean connectives is given by the usual clauses. The preorder semantics of 𝖫𝖢𝖣\mathsf{LCD} is obtained by skipping the clause for U(X;Y)U(X;Y) in the above definition.

Note that preorder models are just a special class of standard relational models for the extended language with atoms of the form P𝐱P\mathbf{x} and KXYK_{X}Y (as well as U(X;Y)U(X;Y) in the case of 𝖫𝖴𝖣\mathsf{LUD}). This means that we can apply to this semantics all the known concepts and results of Modal Logic [Blackburn et al., 2001], concerning pp-morphisms, bisimulations, filtrations, etc.
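Since these are ordinary relational models, the modal clauses of Definition A.7 can be evaluated mechanically. Here is a tiny evaluator over a made-up three-state model (the model and all names are ours, purely for illustration).

```python
from itertools import product

# A tiny evaluator for the preorder semantics of Definition A.7, on a made-up
# three-state model (illustrative only; the model is not from the paper).

W = [0, 1, 2]
eq_X  = {(s, t) for s, t in product(W, W) if s == t}    # =_X : here just identity
leq_X = {(s, t) for s, t in product(W, W) if s <= t}    # <=_X : a preorder
p     = {1, 2}                                           # ||p|| : states satisfying atom p

def sat_D(phi_states, eq):
    """s |= D_X phi  iff  phi holds at every t with s =_X t."""
    return {s for s in W if all(t in phi_states for t in W if (s, t) in eq)}

def sat_K(phi_states, leq):
    """s |= K_X phi  iff  phi holds at every t with s <=_X t."""
    return {s for s in W if all(t in phi_states for t in W if (s, t) in leq)}

print(sat_D(p, eq_X))     # {1, 2}
print(sat_K(p, leq_X))    # {1, 2}: every <=_X-successor of 1 and of 2 satisfies p
```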

Proposition A.8.

Preorder models for 𝖫𝖢𝖣\mathsf{LCD} (and respectively 𝖫𝖴𝖣\mathsf{LUD}) are the same as Alexandroff topo-models for 𝖫𝖢𝖣\mathsf{LCD} (and respectively 𝖫𝖴𝖣\mathsf{LUD}). A map between two preorder models is a topo-morphism between the corresponding topo-models iff it is a relational p-morphism wrt both relations =X=_{X} and X\leq_{X} and all the atoms (Px1xnPx_{1}\ldots x_{n}, DXYD_{X}Y, KXYK_{X}Y, as well as U(X;Y)U(X;Y) in the case of 𝖫𝖴𝖣\mathsf{LUD}).

Equivalence between preorder models and Alexandroff topo-models As a consequence of Proposition A.8, we will identify Alexandroff topo-models with the corresponding preorder models (and later we will show that they correspond precisely to standard preorder models).

A.3 Special case 2: pseudo-metric models

We now introduce a special class of abstract topo-models that generalizes our intended metric dependence models for 𝖫𝖴𝖣\mathsf{LUD}.

Definition A.9 (Pseudo-metric models for 𝖫𝖢𝖣\mathsf{LCD} and 𝖫𝖴𝖣\mathsf{LUD}).

Let 𝐌=(S,=X,τXS,)XV{\mathbf{M}}=(S,=_{X},\tau^{S}_{X},\|\bullet\|)_{X\subseteq V} be a topo-model for 𝖫𝖴𝖣\mathsf{LUD}. We say that 𝐌{\mathbf{M}} is a pseudo-metric model for 𝖫𝖴𝖣\mathsf{LUD} if the topologies τXS\tau_{X}^{S} are induced by pseudo-metrics dXSd_{X}^{S} on SS (recall that a pseudo-metric on a set SS is a map d:S×Sd:S\times S\to\mathbb{R} satisfying, for all s,w,tSs,w,t\in S: d(s,s)=0d(s,s)=0, d(s,w)=d(w,s)d(s,w)=d(w,s), and d(s,w)d(s,t)+d(t,w)d(s,w)\leq d(s,t)+d(t,w)), i.e. we have that

τXS= the topology generated by the basis {X(s,ε):sS,ε>0},\tau_{X}^{S}=\mbox{ the topology generated by the basis }\{\mathcal{B}_{X}(s,\varepsilon):s\in S,\varepsilon>0\},

where ℬ_X(s,ε) := {t∈S : d_X^S(s,t) < ε}. Pseudo-metric models for 𝖫𝖢𝖣 are again obtained by skipping the clauses involving U (i.e. they are just topo-models for 𝖫𝖢𝖣 whose topologies are induced by pseudo-metrics). When we want to fix some specific pseudo-metrics, we exhibit them in an explicit manner, denoting the given pseudo-metric model by 𝐌 = (S, =_X, d_X^S, ‖•‖)_{X⊆V}.
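As a minimal illustration of the difference with ordinary metrics (a toy example, not needed in what follows): on S = ℝ², the map d(s,t) := |s_1 − t_1|, which compares only first coordinates, is a pseudo-metric but not a metric, since distinct states sharing the same first coordinate are at distance 0; the basic open balls it generates are the vertical strips {t ∈ ℝ² : |s_1 − t_1| < ε}.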

We now recover our metric dependence models as a special case of pseudo-metric topo-models.

Metric dependence models are pseudo-metric topo-models Every metric dependence model of the form 𝐌=(M,S,𝐱)xVar{\mathbf{M}}=(M,S,\mathbf{x})_{x\in Var}, based on a typed metric model M=(𝔻x,dx,I)xVarM=(\mathbb{D}_{x},d_{x},I)_{x\in Var}, gives rise to a pseudo-metric topo-model 𝐌{\mathbf{M}}^{\dagger} for 𝖫𝖴𝖣\mathsf{LUD}, obtained by putting:

dXS(s,w)=dX(𝐱(s),𝐱(w)), where dX(u,v)=sup{dx(ux,vx):xX}d_{X}^{S}(s,w)\,\,\,\,=\,\,\,d_{X}(\mathbf{x}(s),\mathbf{x}(w)),\,\,\mbox{ where $d_{X}(\vec{u},\vec{v})=sup\{d_{x}(u_{x},v_{x}):x\in X\}$}
s=Xw iff 𝐱(s)=𝐱(w)( iff dXS(s,w)=0)s=_{X}w\,\,\mbox{ iff }\,\,\mathbf{x}(s)=\mathbf{x}(w)\,\,(\mbox{ iff }\,\,d_{X}^{S}(s,w)=0)
Px1xn={sW:(𝐱𝟏(s),)I(P)}\|Px_{1}\ldots x_{n}\|=\{s\in W:(\mathbf{x_{1}}(s),\ldots)\in I(P)\}
DXY={sW:tS(dXS(s,t)=0dYS(s,t)=0)}\|D_{X}Y\|\,\,\,=\,\,\,\{s\in W:\forall t\in S\,(d_{X}^{S}(s,t)=0\Rightarrow d_{Y}^{S}(s,t)=0)\}
KXY={sW:δ0>0tX(s,δ0)ε>0δ>0X(s,δ0)X(t,δ)Y(t,ε)}\|K_{X}Y\|\,\,\,=\,\,\,\{s\in W:\exists\delta_{0}>0\,\forall t\in\mathcal{B}_{X}(s,\delta_{0})\,\forall\varepsilon>0\,\exists\delta>0\,\mathcal{B}_{X}(s,\delta_{0})\cap\mathcal{B}_{X}(t,\delta)\subseteq\mathcal{B}_{Y}(t,\varepsilon)\}
U(X;Y)=W iff ε>0δ>0t,wS(dXS(t,w)<δdYS(t,w)<ε)\|U(X;Y)\|=W\,\,\,\mbox{ iff }\,\,\,\forall\varepsilon>0\,\exists\delta>0\,\forall t,w\in S\,(d_{X}^{S}(t,w)<\delta\Rightarrow d_{Y}^{S}(t,w)<\varepsilon)
U(X;Y)=, otherwise.\|U(X;Y)\|=\emptyset,\mbox{ otherwise.}
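For a small numerical illustration of the supremum (Chebyshev) distance d_X used above: if X = {x,y} and two states s,w satisfy d_x(𝐱(s),𝐱(w)) = 2 and d_y(𝐲(s),𝐲(w)) = 0.5, then d_X^S(s,w) = sup{2, 0.5} = 2; in particular, for such a two-variable set, the ball ℬ_X(s,ε) is exactly the intersection ℬ_x(s,ε) ∩ ℬ_y(s,ε).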

It should be obvious that the topological structure of this associated pseudo-metric model matches that of the associated topo-model (as introduced in the previous section), which explains our use of the same notation 𝐌{\mathbf{M}}^{\dagger} for both constructions. It should be equally obvious that the topo-model semantics for 𝖫𝖴𝖣\mathsf{LUD} on the associated pseudo-metric model 𝐌{\mathbf{M}}^{\dagger} is the same as the semantics on the original metric dependence model 𝐌{\mathbf{M}}. So, as in the case of general topo-models, from now on we will identify a metric dependence model with the associated pseudo-metric model, and thus include metric dependence models in the class of pseudo-metric models (and later show that they precisely correspond to standard pseudo-metric models.)

A.4 Standard models and their representation

Definition A.10 (Standard topo-models).

An 𝖫𝖢𝖣\mathsf{LCD} topo-model 𝐌=(W,=X,τXW,){\mathbf{M}}=(W,=_{X},\tau^{W}_{X},\|\bullet\|) is standard if we have, for all sWs\in W and X,YVarX,Y\subseteq Var:

(St1)

τXW\tau^{W}_{X} is the topology generated by {τxW:xX}\bigcup\{\tau_{x}^{W}:x\in X\} (where we skipped set brackets for singletons as usual, writing τxW\tau_{x}^{W} for τ{x}W\tau_{\{x\}}^{W});

(St2)

=X=_{X} is the topological indistinguishability relation, given by: s=Xws=_{X}w iff τXW(s)=τXW(w)\tau_{X}^{W}(s)=\tau_{X}^{W}(w);

(St3)

sDXYs\in\|D_{X}Y\| iff wW(s=Xws=Yw)\forall w\in W\,(s=_{X}w\Rightarrow s=_{Y}w);

(St4)

sKXYs\in\|K_{X}Y\| iff O0τXW(s)tO0UτYW(t)OτXW(t)OU\exists O_{0}\in\tau^{W}_{X}(s)\,\forall t\in O_{0}\,\forall U\in\tau^{W}_{Y}(t)\,\exists O\in\tau^{W}_{X}(t)\,O\subseteq U.
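A simple special case (given here only as an illustration) may help the reader parse (St4): if the topology τ_X^W globally refines τ_Y^W (i.e. τ_Y^W ⊆ τ_X^W), then K_X Y holds at every state s. Indeed, we can take O_0 := W, and then for every t and every U ∈ τ_Y^W(t), the set O := U itself belongs to τ_X^W(t) and O ⊆ U. In particular, (St4) makes K_X X true everywhere.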

Simplified presentation of standard topo-models It should be clear that a standard topo-model is uniquely determined by its set of points WW, its basic topologies τxW\tau_{x}^{W} and the valuation of the atoms of the form Px1xnPx_{1}\ldots x_{n}, so it can be identified with the structure

(W, τ_x^W, ‖•‖)_{x∈V},

(where the valuation ranges only over atoms Px_1…x_n), while the topologies τ_X^W are defined using clause (St1) above, the relations =_X are defined by clause (St2) above (i.e., as topological indistinguishability relations for τ_X^W), and the semantics of the atoms D_X Y and K_X Y is given by clauses (St3) and (St4) above.

Proposition A.11.

For every topo-dependence model 𝐌{\mathbf{M}}, the associated topo-model 𝐌{\mathbf{M}}^{\dagger} is standard.

The proof of this is immediate. The converse also holds:

Proposition A.12.

Each standard topo-model 𝐌 = (W, =_X, τ_X^W, ‖•‖) induces a topo-dependence model 𝐌♭, based on the same set S := W of states, where each empirical variable 𝐱 has as its range the set 𝔻_x given by

𝔻x:={(x,[w]x):wW}, where [w]x:={sW:w=xs};\mathbb{D}_{x}:=\{(x,[w]_{x}):w\in W\},\,\,\mbox{ where $[w]_{x}:=\{s\in W:w=_{x}s\}$};

the map 𝐱:S=W𝔻x\mathbf{x}:S=W\to\mathbb{D}_{x} is given by

𝐱(w):=(x,[w]x);\mathbf{x}(w):=(x,[w]_{x});

the topology τx\tau_{x} on 𝔻x\mathbb{D}_{x} is given by

τ_x := {𝐱(O) : O ∈ τ_x^W}, where 𝐱(O) := {𝐱(w) : w ∈ O};

and finally, the interpretation I(P)I(P) of each predicate symbol is given by

I(P) := {(𝐱_1(w), …, 𝐱_n(w)) : w∈W, x_1,…,x_n∈V s.t. w ∈ ‖Px_1…x_n‖}.

Moreover, the abstract topo-semantics on 𝐌{\mathbf{M}} agrees with the topo-dependence semantics on the associated abstract topo-dependence model.

Equivalence between standard topo-models and topo-dependence models Based on Propositions A.11 and A.12, from now on we will identify topo-dependence models with the corresponding standard topo-models.

Special case: standard preorder models A standard preorder model is just a preorder model that is standard when considered as a topo-model (i.e. it satisfies conditions (St1)-(St4) in the definition of standard topo-models). It is useful to give a more direct characterization of the standard preorder models in purely relational terms:

Proposition A.13.

A structure 𝐌=(W,=X,X,)XV{\mathbf{M}}=(W,=_{X},\leq_{X},\|\bullet\|)_{X\subseteq V}, consisting of a set WW of states, equivalence relations =X=_{X} and preorder relations X\leq_{X} on WW, as well as a valuation for the language of 𝖫𝖢𝖣\mathsf{LCD}, is a standard preorder model iff it satisfies the following conditions, for all sWs\in W and X,YVarX,Y\subseteq Var:

(0)

if s=Xws=_{X}w and x1,,xnXx_{1},\ldots,x_{n}\in X, then we have sPx1xns\in\|Px_{1}\ldots x_{n}\| iff wPx1xnw\in\|Px_{1}\ldots x_{n}\|;

(1)

X\leq_{X} is the intersection of all relations x\leq_{x} with xXx\in X (where for singletons we again wrote x\leq_{x} for {x}\leq_{\{x\}});

(2)

s=Xws=_{X}w iff both sXws\leq_{X}w and wXsw\leq_{X}s;

(3)

sDXYs\in\|D_{X}Y\| iff wW(s=Xws=Yw)\forall w\in W\,(s=_{X}w\Rightarrow s=_{Y}w);

(4)

sKXYs\in\|K_{X}Y\| iff t,wW(sXtXwtYw)\forall t,w\in W\,(s\leq_{X}t\leq_{X}w\Rightarrow t\leq_{Y}w).
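For a concrete toy instance of these conditions (our own illustration), take V = {x,y}, W = {s,t} with no predicate symbols, ≤_x := {(s,s),(t,t),(s,t)} and ≤_y := the identity on W, with ≤_X and =_X determined by clauses (1) and (2). Then clause (3) makes D_x y true at both states (since =_x is the identity relation), while clause (4) makes K_x y fail at s (take the chain s ≤_x s ≤_x t: we do not have s ≤_y t) but hold at t. So this standard preorder model witnesses functional dependence of y on x without knowable dependence at s.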

Simplified presentation of standard pre-models It should again be clear that a standard pre-model is uniquely determined by its set of points WW, its basic preorders x\leq_{x} and the valuation of the atoms of the form Px1xnPx_{1}\ldots x_{n}, so it can be identified with the structure

(W,x,)xV(W,\leq_{x},\|\bullet\|)_{x\in V}

(where the valuation ranges only over atoms Px_1…x_n), with the only requirement on these structures being condition (0) of Proposition A.13. In this simplified presentation, all the other components are definable by standardness: the relations ≤_X are defined by clause (1) in Proposition A.13, the relations =_X are defined by clause (2) in the same Proposition, and the valuation of the atoms D_X Y and K_X Y is given by clauses (3) and (4) in this Proposition.

Equivalence between standard preorder models and Alexandroff topo-dependence models As a special case of the above-mentioned equivalence between standard topo-models and topo-dependence models, we can now identify standard preorder models with the corresponding Alexandroff topo-dependence models.

Definition A.14 (Standard pseudo-metric models).

A pseudo-metric model of the form 𝐌=(W,=X,dXW,){\mathbf{M}}=(W,=_{X},d^{W}_{X},\|\bullet\|) for 𝖫𝖴𝖣\mathsf{LUD} is standard if it is standard as a topo-model (i.e. it satisfies all conditions (St1)-(St4) in the definition of standard topo-models), and in addition we have, for all sWs\in W and X,YVarX,Y\subseteq Var:

(St5)

s ∈ ‖U(X;Y)‖ iff ∀ε>0 ∃δ>0 ∀t,w∈W (d_X^W(t,w) < δ ⇒ d_Y^W(t,w) < ε).

Note that, unlike in the case of preorder models, standard pseudo-metric models are not the same as pseudo-metric models whose underlying topo-model is standard: one needs the additional condition (St5). It is useful to have a direct characterization of standard pseudo-metric models in terms of the pseudo-metrics d_X^W:

Proposition A.15.

A structure 𝐌=(W,=X,dXW,){\mathbf{M}}=(W,=_{X},d^{W}_{X},\|\bullet\|) consisting of a set WW of states, equivalence relations =X=_{X} and pseudo-distances dXWd_{X}^{W} on WW, as well as a valuation for the language of 𝖫𝖴𝖣\mathsf{LUD}, is a standard 𝖫𝖴𝖣\mathsf{LUD} pseudo-metric model iff it satisfies the following conditions, for all sWs\in W and X,YVarX,Y\subseteq Var:

(0)

if s=Xws=_{X}w and x1,,xnXx_{1},\ldots,x_{n}\in X, then we have sPx1xns\in\|Px_{1}\ldots x_{n}\| iff wPx1xnw\in\|Px_{1}\ldots x_{n}\|;

(1)

dXWd^{W}_{X} is the Chebyshev pseudo-metric induced by the basic pseudo-metrics dxWd_{x}^{W}:

dXW(s,w):=max{dxW(s,w):xX},d_{X}^{W}(s,w)\,:=\,max\{d_{x}^{W}(s,w):x\in X\},

with the convention that max=0max\,\emptyset=0;

(2)

pseudo-metric indistinguishability coincides with X-equality: s =_X w iff d_X^W(s,w) = 0;

(3)

s ∈ ‖D_X Y‖ iff ∀w∈W (d_X^W(s,w) = 0 ⇒ d_Y^W(s,w) = 0);

(4)

s ∈ ‖K_X Y‖ iff ∃δ_0>0 ∀t∈ℬ_X(s,δ_0) ∀ε>0 ∃δ>0 ∀w∈ℬ_X(s,δ_0) (d_X^W(t,w) < δ ⇒ d_Y^W(t,w) < ε);

(5)

sU(X;Y)s\in\|U(X;Y)\| iff ε>0δ>0t,wW(dXW(t,w)<δdYW(t,w)<ε)\forall\varepsilon>0\,\exists\delta>0\,\forall t,w\in W\,(d_{X}^{W}(t,w)<\delta\Rightarrow d_{Y}^{W}(t,w)<\varepsilon).

Standard 𝖫𝖢𝖣\mathsf{LCD} pseudo-metric models can be similarly characterized by restricting the valuation to the language of 𝖫𝖢𝖣\mathsf{LCD}, and dropping the last clause.
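A familiar example (given here only for illustration) separating clauses (4) and (5): take W = ℝ and V = {x,y}, with basic pseudo-distances d_x^W(t,w) := |t − w| and d_y^W(t,w) := |t² − w²| (intuitively, y plays the role of the square of x). Clause (4), for X = {x} and Y = {y}, holds at every state s: for any δ_0 > 0 and any t,w ∈ ℬ_x(s,δ_0) we have |t² − w²| = |t − w|·|t + w| ≤ (2|s| + 2δ_0)·|t − w|, so for each ε > 0 the choice δ := ε/(2|s| + 2δ_0) witnesses the clause. Clause (5) fails: for any δ > 0, the states t := n and w := n + δ/2 satisfy d_x^W(t,w) < δ, while d_y^W(t,w) = nδ + δ²/4 exceeds any fixed ε for large enough n. So in this standard pseudo-metric model K_x y holds everywhere, while U(x;y) does not hold.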

Simplified presentation of standard pseudo-metric models Again, a standard pseudo-metric model is uniquely determined by its set of points WW, its basic pseudo-distances dxWd_{x}^{W} and the valuation of the atoms of the form Px1xnPx_{1}\ldots x_{n}, so it can be identified with the structure

(W,dxW,)xV,(W,d_{x}^{W},\|\bullet\|)_{x\in V},

(where the valuation ranges only over atoms Px_1…x_n), while the pseudo-distances d_X^W are defined by clause (1) in Proposition A.15, the relations =_X are defined by clause (2) in the same Proposition, and the valuation of the atoms D_X Y, K_X Y and U(X;Y) is given by clauses (3), (4) and (5) in this Proposition.

We now extend Propositions A.11 and A.12 to pseudo-metric models:

Proposition A.16.

For every metric dependence model 𝐌{\mathbf{M}}, the associated pseudo-metric model 𝐌{\mathbf{M}}^{\dagger} is standard.

As in the case of topo-dependence models, the proof of this is immediate, and the converse also holds:

Proposition A.17.

Every standard pseudo-metric model (given in its simplified presentation) 𝐌 = (W, d_x^W, ‖•‖) induces a metric dependence model 𝐌♭, based on the same set S := W of states, where each empirical variable 𝐱 has as its range the set 𝔻_x given by

𝔻x:={(x,[w]x):wW}, where [w]x:={sW:w=xs}={sW:dxW(w,s)=0};\mathbb{D}_{x}:=\{(x,[w]_{x}):w\in W\},\,\,\mbox{ where $[w]_{x}:=\{s\in W:w=_{x}s\}=\{s\in W:d_{x}^{W}(w,s)=0\}$};

the map 𝐱:S=W𝔻x\mathbf{x}:S=W\to\mathbb{D}_{x} is given by

𝐱(w):=(x,[w]x)\mathbf{x}(w):=(x,[w]_{x})

(this map is obviously surjective); the metrics dxd_{x} on 𝔻x\mathbb{D}_{x} are given by

dx(𝐱(s),𝐱(w)):=dxW(s,w);d_{x}(\mathbf{x}(s),\mathbf{x}(w)):=d_{x}^{W}(s,w)\,\,;

and finally, the interpretation I(P)I(P) of each predicate symbol is given by

I(P) := {(𝐱_1(w), …, 𝐱_n(w)) : w∈W, x_1,…,x_n∈V s.t. w ∈ ‖Px_1…x_n‖}.

Moreover, the pseudo-metric semantics on 𝐌 agrees with the metric dependence semantics on 𝐌♭.

Equivalence between standard pseudo-metric models and metric dependence models Based on Propositions A.16 and A.17, we will identify metric dependence models with the corresponding standard pseudo-metric models.

A.5 Completeness wrt finite preorder models

Proposition A.18.

The system LCD is complete for finite preorder models for 𝖫𝖢𝖣\mathsf{LCD}, and similarly the system LUD is complete for finite preorder models for 𝖫𝖴𝖣\mathsf{LUD}.

Proof.

We only give the proof for the harder case of 𝖫𝖴𝖣\mathsf{LUD}, and specify which steps should be skipped to obtain the proof for 𝖫𝖢𝖣\mathsf{LCD}. The proof uses a variation of the standard Modal Logic method of finite canonical models, based on combining the techniques of canonical models and filtration.

Let φ_0 be our fixed consistent formula (where ‘consistent’ refers to the appropriate proof system, LCD and respectively LUD), and let V be the finite set of variables actually occurring in φ_0. Take now Φ to be the smallest set of formulas in Fml_Δ with the following closure properties:

  • -

    φ0Φ\varphi_{0}\in\Phi;

  • -

    Φ\Phi is closed under subformulas and single negations φ\sim\varphi;

  • -

    DXPx1xnD_{X}Px_{1}\ldots x_{n}, DXDXY,KXKXY,DXKXY,U(X;Y)ΦD_{X}D_{X}Y,K_{X}K_{X}Y,D_{X}K_{X}Y,U(X;Y)\in\Phi, for all sets X,YVX,Y\subseteq V and all x1,,xnVx_{1},\ldots,x_{n}\in V (where the closure clause for U(X;Y)U(X;Y) can be skipped in the case of 𝖫𝖢𝖣\mathsf{LCD});

  • -

    if KXψΦK_{X}\psi\in\Phi, then DXKXψΦD_{X}K_{X}\psi\in\Phi.

We can easily show that Φ\Phi is finite.

We will build a finite preorder model for φ_0 consisting of (some) maximally consistent subsets of Φ (where again ‘consistent’ refers to the appropriate proof system, LCD and respectively LUD). First, we fix a maximally consistent subset s_0 of Φ s.t. φ_0 ∈ s_0. (The existence of s_0 follows by the standard enumeration argument for the finite case of the Lindenbaum Lemma.) Second, for each set X⊆V, we define two relations =_X and ≤_X between maximally consistent subsets s,w ⊆ Φ, by putting:

s=Xw iff (DYψ)Φ({DYψ,DXY}s{DYψ,DXY}w);s=_{X}w\,\,\,\mbox{ iff }\,\,\,\forall(D_{Y}\psi)\in\Phi\left(\,\{D_{Y}\psi,D_{X}Y\}\subseteq s\,\,\Leftrightarrow\,\,\{D_{Y}\psi,D_{X}Y\}\subseteq w\,\right);
s ≤_X w  iff  ∀(K_Y ψ)∈Φ ({K_Y ψ, K_X Y} ⊆ s ⇒ {K_Y ψ, K_X Y} ⊆ w).

We define now a finite preorder model

𝐌=(W,=XW,XW,), where: {\mathbf{M}}=(W,=_{X}^{W},\leq_{X}^{W},\|\bullet\|),\,\,\mbox{ where: }
  • -

    W:={wΦ:w maximally consistent subset with s0=w}W\,\,:=\,\,\{w\subseteq\Phi:w\mbox{ maximally consistent subset with }s_{0}=_{\emptyset}w\};

  • -

    the relations =XW=_{X}^{W} and XW\leq_{X}^{W} are just the restrictions of the above-defined relations =X=_{X} and X\leq_{X} to the set WW;

  • -

‖•‖ is the canonical valuation for all atoms P𝐱, D_X Y, K_X Y and U(X;Y), according to which by definition w satisfies an atom iff the atom belongs to w. (More precisely, we put: ‖P𝐱‖ = {s∈W : P𝐱∈s}; ‖D_X Y‖ = {s∈W : D_X Y∈s}; ‖K_X Y‖ = {s∈W : K_X Y∈s}; and ‖U(X;Y)‖ = {s∈W : U(X;Y)∈s}, i.e. ‖U(X;Y)‖ = W iff U(X;Y)∈s_0, and ‖U(X;Y)‖ = ∅ otherwise. We skip the clause for U(X;Y) in the case of 𝖫𝖢𝖣.)

Just by looking at the above definitions of =_X and ≤_X, it should be obvious that the relations =_X (and so also their restrictions =_X^W) are equivalence relations, and that the relations ≤_X (and so also their restrictions ≤_X^W) are preorders. This can be further strengthened to:

Lemma A.19.

(Preorder Model Lemma) 𝐌{\mathbf{M}} is a preorder model.

Proof.

We need to check that 𝐌{\mathbf{M}} satisfies conditions (1)-(11) in the definition of preorder models.

For (1), let s,wWs,w\in W s.t. s=XWws=_{X}^{W}w and x1,,xnXx_{1},\ldots,x_{n}\in X, and assume that sPx1xns\in\|Px_{1}\ldots x_{n}\|. By the definition of the canonical valuation, this means that Px1xnsPx_{1}\ldots x_{n}\in s. By the Determined Atoms axiom of 𝖫𝖥𝖣\mathsf{LFD} and the closure conditions on Φ\Phi (as well as the closure of maximally consistent subsets of Φ\Phi under modus ponens in Φ\Phi), we have D{x1,,xn}Px1xnsD_{\{x_{1},\ldots,x_{n}\}}Px_{1}\ldots x_{n}\in s. Using the fact that x1,,xnXx_{1},\ldots,x_{n}\in X and the Inclusion axiom (and again the closure conditions on Φ\Phi), we see that DX{x1,xn}sD_{X}\{x_{1},\ldots x_{n}\}\in s, and hence {D{x1,,xn}Px1xn,DX{x1,xn}}s\{D_{\{x_{1},\ldots,x_{n}\}}Px_{1}\ldots x_{n},D_{X}\{x_{1},\ldots x_{n}\}\}\subseteq s. By the definition of =X=_{X} applied to s=Xws=_{X}w, we obtain that {D{x1,,xn}Px1xn,DX{x1,xn}}w\{D_{\{x_{1},\ldots,x_{n}\}}Px_{1}\ldots x_{n},D_{X}\{x_{1},\ldots x_{n}\}\}\subseteq w, and using the Factivity of DD (axiom TT) and the closure of Φ\Phi under subformulas, we conclude that Px1xnwPx_{1}\ldots x_{n}\in w, which by the definition of canonical valuation gives us that wPx1xnw\in\|Px_{1}\ldots x_{n}\|, as desired.

For (2): By the definition of W, =_∅^W is indeed the total relation on W. (As already remarked, the second part of condition (2) on preorder models is redundant, following from the first part together with condition (7).)

For (3): Assume s=XWws=_{X}^{W}w and sDXYs\in\|D_{X}Y\|, and so DXYsD_{X}Y\in s (by the canonical valuation), hence DXYwD_{X}Y\in w (by the definition of =X=_{X}), and thus wDXYw\in\|D_{X}Y\| (again by the canonical valuation). All that is left is to check that s=YWws=_{Y}^{W}w. For this, let (DZψ)Φ(D_{Z}\psi)\in\Phi, and we need to show that {DZψ,DYZ}s{DZψ,DYZ}w\{D_{Z}\psi,D_{Y}Z\}\subseteq s\Leftrightarrow\{D_{Z}\psi,D_{Y}Z\}\subseteq w. We check the left-to-right implication (since the converse then follows by the symmetry of =XW=_{X}^{W}): assume that {DZψ,DYZ}s\{D_{Z}\psi,D_{Y}Z\}\subseteq s. Since DXY,DYZsD_{X}Y,D_{Y}Z\in s, we can use the DD-Transitivity axiom to obtain DXZsD_{X}Z\in s. So we have {DZψ,DXZ}s\{D_{Z}\psi,D_{X}Z\}\subseteq s, which together with s=XWws=_{X}^{W}w gives us that {DZψ,DXZ}w\{D_{Z}\psi,D_{X}Z\}\subseteq w, hence DZψwD_{Z}\psi\in w. On the other hand, from DYZsD_{Y}Z\in s we derive DYDYZsD_{Y}D_{Y}Z\in s by the axiom of Determined Dependence, and so we have {DYDYZ,DXY}s\{D_{Y}D_{Y}Z,D_{X}Y\}\subseteq s, which together with s=XWws=_{X}^{W}w gives us that {DYDYZ,DXY}w\{D_{Y}D_{Y}Z,D_{X}Y\}\subseteq w, hence DYDYZwD_{Y}D_{Y}Z\in w, and thus by Factivity (axiom TT) we have DYZwD_{Y}Z\in w. Putting these all together, we conclude that {DZψ,DYZ}w\{D_{Z}\psi,D_{Y}Z\}\subseteq w, as desired.

Condition (4) follows from the definition of canonical valuation, together with the axioms of Inclusion, Additivity and Transitivity for DXYD_{X}Y, KXYK_{X}Y and U(X;Y)U(X;Y).

For (5) and (6): The proof is completely similar to the above proof of the two conclusions of condition (3), using the definition of X\leq_{X} (instead of =X=_{X}) and the corresponding axioms for KXK_{X} (KK-Veracity, KK-Transitivity and Knowable Dependence) instead of the axioms for DXD_{X} (DD-Factivity, DD-Transitivity and Determined Dependence).

For (7): Assume s=XWws=_{X}^{W}w. To show that sXWws\leq_{X}^{W}w, let KYψΦK_{Y}\psi\in\Phi s.t. {KYψ,KXY}s\{K_{Y}\psi,K_{X}Y\}\subseteq s, and we need to prove that {KYψ,KXY}w\{K_{Y}\psi,K_{X}Y\}\subseteq w. To show that KYψwK_{Y}\psi\in w, note that KYψsK_{Y}\psi\in s implies DYKYψsD_{Y}K_{Y}\psi\in s (by the LCD axioms of KYK_{Y}-Positive Introspection and Knowable Determination, as well as our closure conditions on Φ\Phi), and similarly KXYsK_{X}Y\in s implies DXYsD_{X}Y\in s (by the LCD axiom of Knowable Dependence), and so we obtain {DYKYψ,DXY}s\{D_{Y}K_{Y}\psi,D_{X}Y\}\subseteq s. From this and s=Xws=_{X}w, we infer that {DYKYψ,DXY}w\{D_{Y}K_{Y}\psi,D_{X}Y\}\subseteq w, and thus in particular DYKYψwD_{Y}K_{Y}\psi\in w, which by the Factivity axiom gives us KYψwK_{Y}\psi\in w, as desired. Next, we also need to show that KXYwK_{X}Y\in w: for this, note that KXYsK_{X}Y\in s implies KXKXYsK_{X}K_{X}Y\in s (by Knowability of Epistemic Dependence), hence we also have DXKXYsD_{X}K_{X}Y\in s (by Knowable Dependence and the closure conditions on Φ\Phi), and thus {DXKXY,DXX}s\{D_{X}K_{X}Y,D_{X}X\}\subseteq s (by DD-Inclusion). From this together with s=Xws=_{X}w, we obtain that {DXKXY,DXX}w\{D_{X}K_{X}Y,D_{X}X\}\subseteq w, so in particular DXKXYwD_{X}K_{X}Y\in w, from which we infer KXYwK_{X}Y\in w (by Factivity). Putting all these together, we conclude that {KYψ,KXY}w\{K_{Y}\psi,K_{X}Y\}\subseteq w.

Conditions (8), (9) and (10) follow directly from the definition of canonical valuation, together with the Connecting Axioms of LCD. ∎

Lemma A.20.

(Diamond Lemma) For all states sWs\in W and formulas φ\sim\varphi (where we recall that \sim is single negation), we have:

  1. 1.

If D_X φ ∈ Φ, then (∼D_X φ) ∈ s iff there exists w∈W with s =_X w and (∼φ) ∈ w;

  2. 2.

If K_X φ ∈ Φ, then (∼K_X φ) ∈ s iff there exists w∈W with s ≤_X w and (∼φ) ∈ w.

Proof.

The proof of this result goes along standard lines, cf. [Blackburn et al., 2001]: the non-trivial (left-to-right) implications are shown by constructing a consistent set of formulas in Φ that includes ∼φ and that forces every Φ-maximally consistent superset to be in the appropriate relation with s. (In fact, the proof of the first item was given in detail in [Baltag and van Benthem, 2021], while the proof of the second item is completely analogous, using the appropriate K_X-axioms instead of the corresponding D_X-axioms.)

As an immediate consequence, we obtain:

Lemma A.21.

(Truth Lemma) For every sWs\in W, we have that, for all formulas φΦ\varphi\in\Phi:

s𝐌φ iff φs.s\models_{\mathbf{M}}\varphi\mbox{ iff }\varphi\in s.
Proof.

The proof proceeds by induction on φ\varphi, as usual. The atomic case is simply enforced by our choice of the canonical valuation, and the Boolean connective cases are trivial, while the modal cases follow from the above Diamond Lemma. ∎

Putting these together, we obtain the desired conclusion, thus showing the completeness of our logics for finite preorder models. ∎

As a consequence, the set of theorems of LCD is recursive, and the same holds for the set of theorems of LUD.

Proposition A.22.

The logics axiomatized by the proof systems LCD and LUD are decidable.

Of course, this does not yet prove the decidability of our intended logics 𝖫𝖢𝖣 and 𝖫𝖴𝖣 (but only of the given proof systems); this will follow once we prove the completeness of these systems with respect to our intended models (i.e., topo-dependence models for LCD, and metric dependence models for LUD).

A.6 Completeness wrt topo-dependence models: unravelling

In this section we show the decidability and completeness of the logic 𝖫𝖢𝖣\mathsf{LCD} wrt topo-dependence models. By results in previous sections, it is enough to prove completeness wrt standard topo-models. For this, we will prove the following

Lemma A.23.

(Representation theorem for finite preorder models) Every finite preorder model for 𝖫𝖢𝖣\mathsf{LCD} over a finite set of variables VV is a topo-morphic image of a standard topo-model for 𝖫𝖢𝖣\mathsf{LCD} (more precisely a standard preorder model), and thus it is modally equivalent to a topo-dependence model.

The rest of this section is dedicated to the proof of this result.

The idea of the proof Throughout the section, we fix a finite preorder model of the form 𝐌 = (S, =_X, ≤_X, ‖•‖)_{X⊆V} (for 𝖫𝖢𝖣), and a designated s_0 ∈ S in this model. (The specific choice of s_0 is irrelevant, as all the other states in S will be accessible from s_0 via the total relation =_∅.) To find a standard preorder model that is modally equivalent to 𝐌, we use the well-known modal logic technique of unravelling. The proof goes along familiar lines, by unravelling 𝐌 into a tree of all possible “histories”, redefining the relations on this tree to ensure standardness, and defining a p-morphism from the tree into our original model 𝐌, by mapping each history to its last element. The proof is very similar to the corresponding proof for 𝖫𝖥𝖣 in [Baltag and van Benthem, 2021], which is itself a modification of the standard modal completeness proof for distributed knowledge (meant to deal with the new dependence atoms D_X Y).

Admittedly, our construction is slightly more complicated than necessary for the purposes of this section, since it will include some redundant numerical components that will play no role in this section, but will be useful later in dealing with the metric case. More precisely, instead of labeling the transitions in our unravelled tree by using simple sets X of variables (‘agents’) as our labels (as in e.g. the standard proof for 𝖫𝖥𝖣), we label them by expressions of the form X^β, for any set of variables X⊆V and any real number β∈[0,1). The expression X^β is here just a more compact way of writing the ordered pair (X,β). The actual real numbers β will play no role in this section, except for the distinction between β=0 (corresponding to =_X-transitions) and β≠0 (corresponding to ≤_X-transitions). So in principle we could use only two values for β in this section, and the reader can think of all the other real values in our construction as irrelevant ‘syntactic sugar’; but we do keep all these values for future use in the next section, where they will help us structure our tree of histories as a (standard) pseudo-metric model. (Even for this purpose, not all real values in [0,1) will actually be needed. If one prefers to keep the resulting model countable, we can restrict to rational values β ∈ ℚ∩[0,1). In fact, we can restrict the numbers β used in our completeness proofs to any set of values B⊆[0,1) with the property that B includes 0, as well as some countable sequence of numbers (β_n)_{n∈ℕ} with 0<β_n<1 and lim_{n→∞} β_n = 0.)

Histories Let 𝐌=(S,=X,X,)XV{\mathbf{M}}=(S,=_{X},\leq_{X},\|\bullet\|)_{X\subseteq V} be a finite preorder model for 𝖫𝖢𝖣\mathsf{LCD}, and let s0Ss_{0}\in S be a designated state in this model. A history is any finite sequence (of any length n0n\geq 0) of the form h=(s0,X1β1,s1,,Xnβn,sn)h=(s_{0},X_{1}^{\beta_{1}},s_{1},\ldots,X_{n}^{\beta_{n}},s_{n}), where s0s_{0} is the designated world, and for each k1k\geq 1 we have: skSs_{k}\in S; XkVX_{k}\subseteq V; βk[0,1)\beta_{k}\in[0,1) is a non-negative real number less than 11; sk1Xksks_{k-1}\leq_{X_{k}}s_{k}; and finally sk1=Xksks_{k-1}=_{X_{k}}s_{k} whenever βk=0\beta_{k}=0. We denote by WW the set of possible histories. The map last:WSlast:W\to S, which sends every history h=(s0,X1β1,s1,,Xnβn,sn)h=(s_{0},X_{1}^{\beta_{1}},s_{1},\ldots,X_{n}^{\beta_{n}},s_{n}) to its last element last(h):=snlast(h):=s_{n}, will play a key role in our construction.
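For instance, if s_0 ≤_x s_1 holds in 𝐌, then (s_0, {x}^{0.5}, s_1) is a history, while (s_0, {x}^{0}, s_1) is a history only if moreover s_0 =_x s_1; and since =_∅ (and hence ≤_∅) is the total relation, (s_0, ∅^{β}, s) is a history for every s ∈ S and every β ∈ [0,1).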

One-step transition relations on histories There are three relevant one-step relations on histories: the immediate predecessor relation hhh\to h^{\prime}, as well as two labelled versions hY=hh\to_{Y}^{=}h^{\prime} and hYhh\to_{Y}^{\leq}h^{\prime} of this relation, by putting (for all histories h,hh,h^{\prime} and sets YVY\subseteq V):

hh iff h=(h,Xβ,s) for some X,β,s;h\to h^{\prime}\,\,\mbox{ iff }\,\,h^{\prime}=(h,X^{\beta},s^{\prime})\mbox{ for some }X,\beta,s^{\prime};
hY=h iff h=(h,X0,s) for some X,s with last(h)DXY;h\to_{Y}^{=}h^{\prime}\,\,\mbox{ iff }\,\,h^{\prime}=(h,X^{0},s^{\prime})\mbox{ for some }X,s^{\prime}\mbox{ with }last(h)\models D_{X}Y;
hYh iff h=(h,Xβ,s) for some X,β,s with last(h)KXY.h\to_{Y}^{\leq}h^{\prime}\,\,\mbox{ iff }\,\,h^{\prime}=(h,X^{\beta},s^{\prime})\mbox{ for some }X,\beta,s^{\prime}\mbox{ with }last(h)\models K_{X}Y.

We denote by \leftarrow, Y=\leftarrow_{Y}^{=} and Y\leftarrow_{Y}^{\leq} the converses of these one-step relations.

Order, X-equality and X-order on histories We can now define the order relation h ⪯ h′ on histories, as well as analogues =_X^W and ≤_X^W of the relations =_X and ≤_X on histories in W:

:=\preceq\,\,\,\,:=\,\,\,\,\to^{*}
=XW:=(X=X=)=_{X}^{W}\,\,\,:=\,\,\,(\to_{X}^{=}\cup\leftarrow_{X}^{=})^{*}
XW:=(X=X=X)\leq_{X}^{W}\,\,\,:=\,\,\,(\to_{X}^{=}\cup\leftarrow_{X}^{=}\cup\to_{X}^{\leq})^{*}

where we denote by RR^{*} the reflexive-transitive closure of relation RR.

Tree of histories, neighbors and (non-redundant) paths Two histories h,hh,h^{\prime} are neighbors if either one is the immediate predecessor of the other. A path from a history hh to another history hh^{\prime} is a chain of histories (h0,,hn)(h_{0},\ldots,h_{n}), having h0=hh_{0}=h and hn=hh_{n}=h^{\prime} as its endpoints, and s.t. for every kk, histories hkh_{k} and hk+1h_{k+1} are neighbors. A path is non-redundant if no history appears twice in the chain. The set of histories (W,)(W,\preceq) endowed with the partial order \preceq has an easily visualizable tree structure:

  • -

    there is a unique history with no predecessor, namely (s0)(s_{0});

  • -

    every other history h(s0)h\not=(s_{0}) has a unique immediate predecessor;

  • -

    for every history hh, there are only finitely many histories ehe\preceq h;

  • -

every pair of histories h,h′∈W has a meet inf(h,h′) w.r.t. the order ⪯; moreover, by repeating this operation, we can see that triples of histories also have meets inf(h,h′,h″) := inf(h, inf(h′,h″)). (More generally, every non-empty set of histories has a meet. However, the tree structure (W,⪯) is not a meet semi-lattice, since the empty set of histories has no meet: that would be a top element, but the tree has no top.)

  • -

    there exists a unique non-redundant path between any two histories;

  • -

    for all histories h,h,h′′Wh,h^{\prime},h^{\prime\prime}\in W, we have inf(h,h,h′′)=min{inf(h,h),inf(h,h′′)}inf(h,h^{\prime},h^{\prime\prime})=min\{inf(h,h^{\prime}),inf(h,h^{\prime\prime})\}, and thus we either have inf(h,h,h′′)=inf(h,h)inf(h,h^{\prime},h^{\prime\prime})=inf(h,h^{\prime}) or inf(h,h,h′′)=inf(h,h′′)inf(h,h^{\prime},h^{\prime\prime})=inf(h,h^{\prime\prime}).

Interval notation The interval [h,h][h,h^{\prime}] between two histories h,hh,h^{\prime} is the set of all histories on the non-redundant path from hh to hh^{\prime}, i.e.:

[h,h′] := {h″∈W : h″ lies on the non-redundant path from h to h′}.

It is easy to see that in general we have for all histories h,h,h′′Wh,h^{\prime},h^{\prime\prime}\in W:

[h,h][h,h′′][h′′,h];[h,h^{\prime}]\subseteq[h,h^{\prime\prime}]\cup[h^{\prime\prime},h^{\prime}];

and this inclusion becomes equality whenever h′′h^{\prime\prime} lies between hh and hh^{\prime} (i.e., in the interval between them):

h′′[h,h] implies [h,h]=[h,h′′][h′′,h].h^{\prime\prime}\in[h,h^{\prime}]\mbox{ implies }[h,h^{\prime}]=[h,h^{\prime\prime}]\cup[h^{\prime\prime},h^{\prime}].

Tree-distance and history length The tree-distance dist(h,h′) between histories h and h′ is defined by putting dist(h,h′) := n−1, where n is the cardinality of the set [h,h′] (i.e., the number of histories on the non-redundant path from h to h′). It is easy to see that dist is a metric on W, and that moreover dist(h,h′)=1 iff h,h′ are neighbors. The length l(h) of a history h = (s_0, X_1^{β_1}, s_1, …, X_n^{β_n}, s_n) is the tree-distance l(h) := n between h and the root (s_0).
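As a quick illustration: for two distinct one-step histories h = (s_0, X^β, s_1) and h′ = (s_0, Y^γ, s_2), we have dist((s_0), h) = 1 = l(h), while dist(h, h′) = 2, since the (unique) non-redundant path from h to h′ must pass through their meet (s_0).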

Path-characterizations of XX-equality and XX-order It is easy to see that, given histories h,hh,h^{\prime}, if (h0,,hi,hn)(h_{0},\ldots,h_{i},\ldots h_{n}) is the non-redundant path from hh to hh^{\prime} where hi=inf(h,h)h_{i}=inf(h,h^{\prime}) is the meet of the two histories, then the XX-equality and XX-order relations can be characterized as follows:

  • -

    h=XWhh=_{X}^{W}h^{\prime} holds iff h=h0X=h1X=hiX=hn1X=hn=hh=h_{0}\leftarrow_{X}^{=}h_{1}\ldots\leftarrow_{X}^{=}h_{i}\to_{X}^{=}\ldots h_{n-1}\to_{X}^{=}h_{n}=h^{\prime};

  • -

h ≤_X^W h′ holds iff h = h_0 ←_X^= h_1 … ←_X^= h_i (→_X^= ∪ →_X^≤) … h_{n−1} (→_X^= ∪ →_X^≤) h_n = h′.

To check this, it is enough to note that the definitions of =_X^W and ≤_X^W imply the existence of appropriate paths from h to h′, while the tree structure implies that any such path must include the non-redundant path (h_0,…,h_i,…,h_n); then finally one checks that this non-redundant path is “appropriate” for =_X^W, and respectively for ≤_X^W, only if it satisfies the corresponding clause in the above characterization. (Technically speaking, proving this requires induction on the tree-distance d(h,h′).)

Claim 1. The relations =YW=_{Y}^{W} and YW\leq_{Y}^{W} have the following properties:

  1. 1.

    =YW=_{Y}^{W} are equivalence relations, and YW\leq_{Y}^{W} are preorders;

  2. 2.

    if h=YWhh=_{Y}^{W}h^{\prime}, then hYWhh\leq_{Y}^{W}h^{\prime};

  3. 3.

    the intersection of YW\leq_{Y}^{W} with its converse YW\geq_{Y}^{W} is the relation =YW=_{Y}^{W};

  4. 4.

    the relations =YW=_{Y}^{W} and YW\leq_{Y}^{W} are additive:

h =_Y^W h′ iff h =_y^W h′ for all y∈Y;
h ≤_Y^W h′ iff h ≤_y^W h′ for all y∈Y;
  5. 5.

    if h=YWhh=_{Y}^{W}h^{\prime} then last(h)=Ylast(h)last(h)=_{Y}last(h^{\prime});

  6. 6.

if h ≤_Y^W h′ then last(h) ≤_Y last(h′);

  7. 7.

    if h=XWhh=_{X}^{W}h^{\prime} and last(h)DXYlast(h)\models D_{X}Y, then h=YWhh=_{Y}^{W}h^{\prime} and last(h)DXYlast(h^{\prime})\models D_{X}Y;

  8. 8.

    if hXWhh\leq_{X}^{W}h^{\prime} and last(h)KXYlast(h)\models K_{X}Y, then hYWhh\leq_{Y}^{W}h^{\prime} and last(h)KXYlast(h^{\prime})\models K_{X}Y;

  9. 9.

    if h=YWhh=_{Y}^{W}h^{\prime} and h′′[h,h]h^{\prime\prime}\in[h,h^{\prime}], then h=YWh=YWh′′h=_{Y}^{W}h^{\prime}=_{Y}^{W}h^{\prime\prime};

  10. 10.

    if hYWhh\leq_{Y}^{W}h^{\prime} and h′′[h,h]h^{\prime\prime}\in[h,h^{\prime}], then hYWh′′YWhh\leq_{Y}^{W}h^{\prime\prime}\leq_{Y}^{W}h^{\prime}.

Proof.

Items 1 and 2 follow immediately from the definitions of these relations on histories.

Item 3: The fact that hYWhYWhh\leq_{Y}^{W}h^{\prime}\leq_{Y}^{W}h implies h=YWhh=_{Y}^{W}h^{\prime} is easily proved by induction on the tree-distance d(h,h)d(h,h^{\prime}) between the two histories, using the above characterizations of =YW=_{Y}^{W} and YW\leq_{Y}^{W} in terms of the non-redundant path, as well as the fact that Y\to_{Y}^{\leq} and Y\leftarrow_{Y}^{\leq} are disjoint by definition.

For item 4, we first check the additivity of →_Y^= and →_Y^≤, i.e. that: h →_Y^= h′ iff h →_y^= h′ for all y∈Y; and h →_Y^≤ h′ iff h →_y^≤ h′ for all y∈Y. (These follow directly from the definitions of →_Y^= and →_Y^≤ and the Additivity axioms for D_X Y and K_X Y.) The desired additivity of =_Y^W and ≤_Y^W follows from these observations by induction on the tree distance d(h,h′) (using the above characterizations of =_Y^W and ≤_Y^W in terms of the non-redundant path).

Items 5 and 6 similarly follow by easy induction (on the tree distance, using the above path characterization) from the corresponding statements for the one-step transitions, i.e.: h →_Y^= h′ implies last(h) =_Y last(h′); and similarly h →_Y^≤ h′ implies last(h) ≤_Y last(h′). To prove the first of these one-step statements: from h →_Y^= h′, we infer that h′ = (h, X^0, s′) for some X, s′ with last(h) ⊨ D_X Y; but in order for h′ to be a well-formed history, we must have last(h) =_X s′, which together with last(h) ⊨ D_X Y gives us last(h) =_Y s′ = last(h′), as desired. The second one-step statement is similar: from h →_Y^≤ h′, we infer that h′ = (h, X^β, s′) for some X, β, s′ with last(h) ⊨ K_X Y; but in order for h′ to be a well-formed history, we must have last(h) ≤_X s′, which together with last(h) ⊨ K_X Y gives us last(h) ≤_Y s′ = last(h′), as desired.

Similarly, items 7 and 8 follow by easy induction (on the tree distance) from the corresponding statements for the one-step transitions. To check the first of these, assume h →_X^= h′ with last(h) ⊨ D_X Y. By definition, the first of these means that we have h′ = (h, Z^0, s′) for some Z, s′ with last(h) ⊨ D_Z X. Together with last(h) ⊨ D_X Y, this gives us last(h) ⊨ D_Z Y (by the Transitivity axiom for D), and hence we have h →_Y^= h′, as desired. The case of the relation →_X^≤ (for item 8) is completely analogous.

Finally, items 9 and 10 follow by applying the above characterizations of h =_Y^W h′ and respectively h ≤_Y^W h′ in terms of the non-redundant path from h to h′, and then noticing that h″∈[h,h′] implies that the same characterizing conditions apply to the subpaths from h to h″, and respectively from h″ to h′. ∎

The standard preorder model on histories To structure the set WW of histories as a standard preorder model (in simplified presentation)

𝐌=(W,xW,),{\mathbf{M}}^{\infty}_{\leq}=(W,\leq_{x}^{W},\|\bullet\|),

we take as our preorders xW\leq_{x}^{W} to be just the relations {x}W\leq_{\{x\}}^{W} defined above, while the valuation for atoms of the form PxP\vec{x} is inherited from our finite preorder model 𝐌{\mathbf{M}}:

Px1xn={hW:last(h)Px1xn}.\|Px_{1}\ldots x_{n}\|=\{h\in W:last(h)\models Px_{1}\ldots x_{n}\}.

This is meant to be a standard preorder model, so technically speaking the relations XW\leq_{X}^{W} and =XW=_{X}^{W} will be defined in this model by standardness (as intersections, in terms of the basic relations xW\leq_{x}^{W}, using clauses (1) and (2) of Proposition A.13), rather than the way we defined them above on histories. But Claim 1 ensures that they match the above definitions, as made explicit in the following result:

Claim 2. The structure 𝐌{\mathbf{M}}^{\infty}_{\leq} is indeed a standard preorder model, with its induced XX-equality and XX-preorder relations matching the relations XW\leq_{X}^{W} and =XW=_{X}^{W}, as defined above on histories.

Proof.

The fact that the relations ≤_X^W (as defined above on histories) coincide with the intersections ⋂_{x∈X} ≤_x^W follows from the additivity of ≤_X^W (item 4 of Claim 1), while the fact that the relations =_X^W coincide with the intersections ≤_X^W ∩ ≥_X^W follows from item 3 of Claim 1. So, to prove standardness (given the simplified presentation), it is enough to check condition (0) of Proposition A.13. For this, assume that we have h =_X^W h′ and h ∈ ‖Px_1…x_n‖, with x_1,…,x_n ∈ X. By our choice of valuation on 𝐌_≤^∞, this means that last(h) ⊨ Px_1…x_n. By item 5 in Claim 1, h =_X^W h′ implies that last(h) =_X last(h′) in 𝐌. This, together with last(h) ⊨ Px_1…x_n (and the fact that 𝐌 is a preorder model), gives us last(h′) ⊨ Px_1…x_n (by clause 1 in the definition of preorder models), from which we can conclude that h′ ∈ ‖Px_1…x_n‖ (again by our choice of valuation on 𝐌_≤^∞), as desired. ∎

We now finish the proof of our representation result (Lemma A.23 above), with the following:

Claim 3. The function last : W → S, which maps every history h∈W to its last state last(h)∈S, is a surjective p-morphism from the standard preorder model 𝐌_≤^∞ to the given finite preorder model 𝐌.

Proof.

The surjectivity of the map last is immediate: given s∈S, if we take the history h = (s_0, ∅^0, s) of length 1 (which is a correct history, since s_0 =_∅ s by the definition of preorder models), then last(h) = s. So we just need to check the clauses in the definition of p-morphisms on preorder models.

Atomic clause Px1xnPx_{1}\ldots x_{n}: holds by our choice of valuation in 𝐌{\mathbf{M}}^{\infty}_{\leq}.

Atomic clause DXYD_{X}Y: We need to show that we have hDXYh\models D_{X}Y iff last(h)DXYlast(h)\models D_{X}Y.

For the left-to-right implication, suppose that h ⊨ D_X Y, and let s := last(h); we must prove that s ⊨ D_X Y. For this, take the history h′ := (h, X^0, s). Then we have h =_X^W h′, which together with h ⊨ D_X Y gives h =_Y^W h′ (since 𝐌_≤^∞ is a preorder model). Since =_Y^W is the reflexive-transitive closure of →_Y^= ∪ ←_Y^= while h is an immediate predecessor of h′ = (h, X^0, s), h =_Y^W h′ can only hold in this case if we have h →_Y^= h′. By the definition of →_Y^= and the structure of h′ = (h, X^0, s), this means that we must have s = last(h′) ⊨ D_X Y, as desired.

For the converse implication, suppose that last(h) ⊨ D_X Y; we need to show that h ⊨ D_X Y as well. For this, let h′∈W be any history in W with h =_X^W h′, and we need to prove that h =_Y^W h′. Indeed, from h =_X^W h′ and last(h) ⊨ D_X Y, we get h =_Y^W h′ (by item 7 of Claim 1), as desired.

Atomic clause KXYK_{X}Y: We need to show that we have hKXYh\models K_{X}Y iff last(h)KXYlast(h)\models K_{X}Y.

The left-to-right implication is similar to the above proof for DXYD_{X}Y: suppose that hKXYh\models K_{X}Y, and let s:=last(h)s:=last(h); we need to prove that sKXYs\models K_{X}Y. For this, take again the history h:=(h,X0,s)h^{\prime}:=(h,X^{0},s). Then we have hXWhh\leq_{X}^{W}h^{\prime} (in fact h=XWhh=_{X}^{W}h^{\prime}), which together with hKXYh\models K_{X}Y, gives us hYWhh\leq_{Y}^{W}h^{\prime} (since 𝐌{\mathbf{M}}^{\infty}_{\leq} is a preorder model). Once again, since YW\leq_{Y}^{W} is the reflexive-transitive closure of Y\to_{Y}^{\leq} and hh is an immediate predecessor of h=(h,X0,s)h^{\prime}=(h,X^{0},s), hYWhh\leq_{Y}^{W}h^{\prime} can only hold in this case iff we have hYhh\to_{Y}^{\leq}h^{\prime}. By the definition of Y\to_{Y}^{\leq} and the structure of h=(h,X0,s)h^{\prime}=(h,X^{0},s), this means that we must have s=last(h)KXYs=last(h^{\prime})\models K_{X}Y, as desired.

For the converse implication, let last(h) ⊨ K_X Y; we need to show that h ⊨ K_X Y as well. For this, let h′,h″∈W be any histories with h ≤_X^W h′ ≤_X^W h″, and we must prove that h′ ≤_Y^W h″. From h ≤_X^W h′ and last(h) ⊨ K_X Y, we get last(h′) ⊨ K_X Y (by item 8 of Claim 1 above). Using this and h′ ≤_X^W h″, we obtain h′ ≤_Y^W h″ (by applying again item 8 of Claim 1), as desired.

The forth clauses for =XW=_{X}^{W} and XW\leq_{X}^{W} are given by parts 5 and 6 of Claim 1 above.

The back clause for =XW=_{X}^{W}: Assuming last(h)=Xslast(h)=_{X}s, we take h:=(h,X0,s)h^{\prime}:=(h,X^{0},s). This is a well-formed history in WW, satisfying h=XWhh=_{X}^{W}h^{\prime} and last(h)=slast(h^{\prime})=s, as desired.

The back clause for XW\leq_{X}^{W}: Assuming last(h)Xslast(h)\leq_{X}s, we take h:=(h,Xβ,s)h^{\prime}:=(h,X^{\beta},s) for any number β(0,1)\beta\in(0,1) chosen in our construction. Once again, this is a well-defined history in WW, obviously satisfying the conditions hXWhh\leq_{X}^{W}h^{\prime} and last(h)=slast(h^{\prime})=s, as required. ∎

Putting together Proposition A.18 and Lemma A.23 with the well-known facts about the preservation of modal formulas under pp-morphisms (or more generally, under bisimulations), as well as the modal equivalence between standard preorder models and Alexandroff topo-dependence models, we immediately obtain the completeness of LCD wrt topo-dependence models (and as a consequence the decidability of the logic 𝖫𝖢𝖣\mathsf{LCD} of topo-dependence models).

A.7 Completeness wrt metric dependence models: refinement

We now wrap up our completeness (and decidability) proofs for 𝖫𝖢𝖣\mathsf{LCD} and 𝖫𝖴𝖣\mathsf{LUD} wrt (pseudo-locally Lipschitz) metric dependence models. By previous results, it is enough to prove completeness wrt standard pseudo-metric models (whose pseudo-metrics satisfy the pseudo-locally Lipschitz condition). For this, we show the following:

Lemma A.24.

(Pseudo-metric representation theorem) Every finite preorder model for 𝖫𝖢𝖣 and respectively 𝖫𝖴𝖣 (over a finite set of variables V) is a topo-morphic image of a standard pseudo-metric model for 𝖫𝖢𝖣, and respectively 𝖫𝖴𝖣. Moreover, the metric dependence model associated to this standard pseudo-metric model is pseudo-locally Lipschitz.

Plan of the proof. Essentially, the proof proceeds by (a) defining pseudo-distances d_X^W : W×W → [0,1] between histories in the unravelled tree model 𝐌_≤^∞ = (W, =_X^W, ≤_X^W, ‖•‖) introduced in the previous section; then (b) showing that, importantly, this pseudo-metric topology is a refinement of the Alexandroff topology associated to the preorders ≤_X^W from the previous section, and (c) proving that the resulting structure 𝐌_d^∞ = (W, =_X^W, d_X^W, ‖•‖) is a standard pseudo-metric model; and finally (d) showing that the map last : W → S (sending each history to its last element) is a (surjective) topo-morphism from the 𝖫𝖴𝖣 topo-model 𝐌_d^∞ to (the 𝖫𝖴𝖣 topo-model associated to) our original preorder model for 𝖫𝖴𝖣.

The proof in detail In the rest of this section, we present the details of our proof of Lemma A.24. We give the proof of this representation result for 𝖫𝖴𝖣\mathsf{LUD} only; the proof for 𝖫𝖢𝖣\mathsf{LCD} can be obtained by simply skipping all the steps and clauses involving the uniform dependence atoms U(X;Y)U(X;Y). We start as in the previous section: let 𝐌=(S,=X,X,)XV{\mathbf{M}}=(S,=_{X},\leq_{X},\|\bullet\|)_{X\subseteq V} be a finite preorder model for 𝖫𝖴𝖣\mathsf{LUD}. Since this is also in particular a finite preorder model for 𝖫𝖢𝖣\mathsf{LCD}, we can apply the unravelling technique from the previous section to obtain the structure 𝐌=(W,=XW,XW,){\mathbf{M}}^{\infty}_{\leq}=(W,=_{X}^{W},\leq_{X}^{W},\|\bullet\|), as introduced in the previous section. All the notations and results in the previous section still apply, and we will make use of them, in particular of the fact that 𝐌{\mathbf{M}}^{\infty}_{\leq} is a standard preorder model for 𝖫𝖢𝖣\mathsf{LCD}, and that the map last:WSlast:W\to S (sending each history to its last element) is a surjective pp-morphism between preorder models for 𝖫𝖢𝖣\mathsf{LCD}, and hence a (surjective) interior map between the corresponding topo-models for 𝖫𝖢𝖣\mathsf{LCD}. For hW,sSh\in W,s\in S, we will use the notations

hXW:={hW:hXWh} and sX:={sS:sXs}h\uparrow_{X}^{W}:=\{h^{\prime}\in W:h\leq_{X}^{W}h^{\prime}\}\,\,\,\,\,\,\,\mbox{ and }\,\,\,\,\,\,\,s\uparrow_{X}:=\{s^{\prime}\in S:s\leq_{X}s^{\prime}\}

to denote the corresponding XX-upsets in WW and respectively SS. Note that the fact that the surjective map last:𝐌𝐌last:{\mathbf{M}}^{\infty}_{\leq}\to{\mathbf{M}} is a pp-morphism is equivalent to saying that, for all histories hWh\in W, we have:

last(h)X=last(hXW).last(h)\uparrow_{X}=last(h\uparrow_{X}^{W}).

As announced in the Proof Plan, we will endow the set WW of histories with a pseudo-metric structure, obtained by defining ultra-pseudo-metric distances dXW:W×W[0,1]d_{X}^{W}:W\times W\to[0,1], and at the same time we will extend the valuation to cover atoms of the form U(X;Y)U(X;Y), to obtain a pseudo-metric refinement 𝐌d=(W,=XW,dXW,){\mathbf{M}}^{\infty}_{d}=(W,=_{X}^{W},d_{X}^{W},\|\bullet\|) of the topology of our unravelled preorder model 𝐌{\mathbf{M}}^{\infty}_{\leq}. To do this, we need to introduce some preliminary notions.

The X-root of a history When studying the Alexandroff topologies induced by the preorders ≤_X on the set W of all histories, the following notion is of special interest: the X-root of a history h is the shortest history h_X s.t. h_X ≤_X^W h. (Here, “shortest” means that there is no proper sub-history e ≺ h_X with e ≤_X^W h.) It is easy to see that every history has a unique X-root, which we will henceforth denote by h_X; moreover, we obviously have h_X ⪯ h. In fact, if we denote by h(X) the ≤_X-connected component that contains h, it is easy to see that h(X) is a subtree of W (wrt ⪯), having h_X as its root. (A set A⊆W of histories is ≤_X-connected if for every two histories h,h′∈A there exists some history h″∈A with h″ ≤_X h, h′. The largest ≤_X-connected subset of W that contains a history h is called the ≤_X-connected component of h, and denoted by h(X). In fact, h(X) is the equivalence class of h with respect to the equivalence relation ≍_X given by putting h ≍_X h′ iff there exists h″ ≤_X h, h′.) Obviously, histories that belong to the same ≤_X-connected component have the same X-root, i.e. we have that:

hXWh implies hX=hX.h\leq_{X}^{W}h^{\prime}\mbox{ implies }h_{X}=h^{\prime}_{X}.

Density and XX-density of a history In addition to history length, we need another measure of the complexity of a history hh. The density δh\delta^{h} of a history hh is defined as the minimum non-zero index β\beta occurring in the history hh (where we put δh=1\delta^{h}=1 when there are no such non-zero indices in hh). More precisely, we put

δh:=inf{β>0:(e,Yβ,s)h for some YV,sS},\delta^{h}\,\,:=\,\,inf\{\beta>0:(e,Y^{\beta},s)\preceq h\mbox{ for some }Y\subseteq V,s\in S\},

with the convention that inf=1inf\,\emptyset=1. Note that, in our setting, this convention is the natural analogue of the more standard convention that inf=inf\,\emptyset=\,\infty: since dXWd_{X}^{W} takes values in [0,1][0,1], the maximum possible distance is 11, playing the role of ‘infinity’. A distance dXW(h,h)=1d_{X}^{W}(h,h^{\prime})=1 will indicate that the two histories are so “far” from each other, that one cannot reach hh^{\prime} from hh by any number of “small” changes of XX-value. Also note that the infimum in the above definition is applied to a finite set of real values, so it can be replaced by minimum whenever the set is non-empty.

The density of a history h is a measure of how “far” apart its preceding histories are. However, for defining our X-topology on the tree W, not all the past of a history h is relevant, but only the past of its X-root. The X-density of a history h is simply the density of its X-root, i.e., the quantity

δhX.\delta^{h_{X}}.
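For example, the history h = (s_0, X^{0.3}, s_1, Y^{0}, s_2, Z^{0.2}, s_3) has density δ^h = 0.2 (the least non-zero index occurring in it), while a history in which all indices are 0 has density 1 by the above convention. Since the X-root h_X is an initial segment of h, its density only takes into account the indices occurring up to h_X, so we always have δ^{h_X} ≥ δ^h (cf. items 2 and 3 of Claim 1 below).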

Closeness To define our XX-distances, we need a notion of closeness between histories that differs from the tree distance: indeed, even a one-step transition may involve a large jump as far as the intended XX-distance is concerned. Two neighboring histories h=(s0,,s)h=(s_{0},\ldots,s) and h=(h,Yβ,s)h^{\prime}=(h,Y^{\beta},s^{\prime}) are said to be XX-close if any of the following conditions holds:

either 𝐌U(Y;X){\mathbf{M}}\models U(Y;X); or s𝐌KYXs\models_{\mathbf{M}}K_{Y}X and β<δhX\beta<\delta^{h_{X}}; or s𝐌DYXs\models_{\mathbf{M}}D_{Y}X and β=0\beta=0.

For arbitrary histories, hh and hh^{\prime} are said to be XX-close if all the neighboring histories on the non-redundant path from hh to hh^{\prime} are XX-close. Two histories are said to be XX-far if they are not XX-close. Note that, since U(X;)U(X;\emptyset) is a valid formula, every two histories are \emptyset-close; in other words: if two histories are XX-far then XX\neq\emptyset. Finally, in the case that X={x}X=\{x\} is a singleton, we can drop the set brackets, saying that two histories are xx-close (xx-far) if they are {x}\{x\}-close ({x}\{x\}-far).

Claim 1. For all histories h=(s0,,s)h=(s_{0},\ldots,s) and all sets XVX\subseteq V, we have:

  1. 1.

    0<δh10<\delta^{h}\leq 1;

  2. 2.

    if hhh\preceq h^{\prime}, then δhδh\delta^{h}\geq\delta^{h^{\prime}};

  3. 3.

    δhδhX\delta^{h}\leq\delta^{h_{X}};

  4. 4.

    if hXWhh\leq_{X}^{W}h^{\prime}, then δhX=δhX\delta^{h_{X}}=\delta^{h^{\prime}_{X}};

  5. 5.

    if 𝐌K(Y;X){\mathbf{M}}\models K(Y;X) (and so also, in particular, if 𝐌U(Y;X){\mathbf{M}}\models U(Y;X)), then δhYδhX\delta^{h_{Y}}\leq\delta^{h_{X}};

  6. 6.

    δhXY=min{δhX,δhY}\delta^{h_{X\cup Y}}=min\{\delta^{h_{X}},\delta^{h_{Y}}\};

  7. 7.

    δhX=max{δe:eXWh}\delta^{h_{X}}=max\{\delta^{e}:e\leq_{X}^{W}h\};

  8. 8.

    XX-closeness is additive: two histories are XYX\cup Y-close iff they are both XX-close and YY-close;

  9. 9.

    XX-closeness is an equivalence relation.

Proof.

Parts 1 and 2 follow directly from the definition of δ^h. Part 3 follows from part 2 and the fact that h_X ⪯ h. Part 4 follows from the fact that h ≤_X^W h′ implies h_X = h′_X.

Part 5 follows from the observation that 𝐌 ⊨ K(Y;X) implies that every history e ≤_Y^W h also has the property that e ≤_X^W h; hence in particular we have that h_Y ≤_X^W h. By the definition of h_X, this implies that h_X ⪯ h_Y, and so δ^{h_Y} ≤ δ^{h_X} by part 2.

For part 6: since hX,hYhh_{X},h_{Y}\preceq h, by the tree property we must have either hXhYh_{X}\preceq h_{Y} or hYhXh_{Y}\preceq h_{X}. Our claim is symmetric in XX and YY, so without loss of generality we can assume that hXhYh_{X}\preceq h_{Y}, hence by part 2 we have δhXδhY\delta^{h_{X}}\geq\delta^{h_{Y}}, i.e. min{δhX,δhY}=δhYmin\{\delta^{h_{X}},\delta^{h_{Y}}\}=\delta^{h_{Y}}. On the other hand, since hXXWhh_{X}\leq_{X}^{W}h and hXhYhh_{X}\preceq h_{Y}\preceq h, by Claim 1(10) in Section A.6 we must also have hYXWhh_{Y}\leq_{X}^{W}h; putting this together with hYYWhh_{Y}\leq_{Y}^{W}h and using the additivity of the relation W\leq^{W}, we obtain that hYXYhh_{Y}\leq_{X\cup Y}h; moreover, by definition of hYh_{Y}, no strict predecessor of hYh_{Y} is YWh\leq_{Y}^{W}h, hence no such predecessor can be XYWh\leq_{X\cup Y}^{W}h; this, together with the fact that hYXYhh_{Y}\leq_{X\cup Y}h, shows that hYh_{Y} satisfies the defining property of hXYh_{X\cup Y}, so we have hY=hXYh_{Y}=h_{X\cup Y}. In conclusion, we obtain min{δhX,δhY}=δhY=δhXYmin\{\delta^{h_{X}},\delta^{h_{Y}}\}=\delta^{h_{Y}}=\delta^{h_{X\cup Y}}, as desired.

For part 7, we prove the two inequalities separately. The inequality \delta^{h_{X}}\leq\max\{\delta^{e}:e\leq_{X}^{W}h\} follows immediately from the fact that h_{X}\leq_{X}^{W}h. To check the converse inequality \delta^{h_{X}}\geq\max\{\delta^{e}:e\leq_{X}^{W}h\}, let e be any history s.t. e\leq_{X}^{W}h. Then by part 4 we have \delta^{e_{X}}=\delta^{h_{X}}, and so by part 3 we have \delta^{e}\leq\delta^{e_{X}}=\delta^{h_{X}}, as desired.

Part 8 follows directly from the definition of X-closeness, using the Additivity Axioms for U(X;Y), K_{X}Y and D_{X}Y, as well as part 6.

For part 9, it is obvious that X-closeness is reflexive and symmetric. For transitivity, let h,h^{\prime},h^{\prime\prime}\in W be s.t. h is X-close to h^{\prime}, and h^{\prime} is X-close to h^{\prime\prime}. By definition, this means that all pairs of neighboring histories in [h,h^{\prime}] are X-close, and that the same holds for all pairs of neighboring histories in [h^{\prime},h^{\prime\prime}]. Using the fact that [h,h^{\prime\prime}]\subseteq[h,h^{\prime}]\cup[h^{\prime},h^{\prime\prime}], it follows that all pairs of neighboring histories in [h,h^{\prime\prime}] are also X-close, and thus by definition h and h^{\prime\prime} are X-close, as desired. ∎

The pseudo-metric model on histories We proceed now to define our ultra-pseudo-metric model (in simplified presentation) {\mathbf{M}}^{\infty}_{d}=(W,d_{x}^{W},\|\bullet\|). As the set of states, we take again the set W of all histories, endowed with the same valuation on atoms of the form P\vec{x} as on {\mathbf{M}}^{\infty}_{\leq}:

\|Px_{1}\ldots x_{n}\|=\{h\in W:last(h)\models Px_{1}\ldots x_{n}\}.

Since {\mathbf{M}}^{\infty}_{d} is meant to be a standard pseudo-metric model, the only thing left is to define the basic pseudo-metrics d_{x}^{W}:W\times W\to[0,\infty). Essentially, the pseudo-distance between two ‘close’ histories will be given by the maximum of the real numbers \beta\in[0,1) encountered on the non-redundant path between them, while the pseudo-distance between ‘far’ histories will be 1 by definition. More precisely, for every x\in V we put:

d_{x}^{W}(h,h^{\prime}) := \max\{\beta\in[0,1):\exists e,(e,Y^{\beta},s)\in[h,h^{\prime}]\mbox{ with }s\in S,Y\subseteq V\}, if h and h^{\prime} are x-close;
d_{x}^{W}(h,h^{\prime}) := 1, if h and h^{\prime} are x-far,

where we assume the convention that \max\emptyset=0, thus ensuring that d_{x}^{W}(h,h)=0.

For sets of variables X\subseteq V, the pseudo-distances d_{X}^{W} are defined as in any standard pseudo-metric model, using the Chebyshev distance

d_{X}^{W}(h,h^{\prime})\,:=\,\max\{d_{x}^{W}(h,h^{\prime}):x\in X\}.

As usual, for every history h\in W, set X\subseteq V and number \varepsilon\in(0,1], we use the notation

\mathcal{B}_{X}(h,\varepsilon):=\{h^{\prime}\in W:d_{X}^{W}(h,h^{\prime})<\varepsilon\}

to denote the “open ball of radius \varepsilon centered at h”.
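Purely as an illustration of how these definitions combine, here is a small Python-style sketch of the pseudo-distances and open balls. The helpers are_close and path_betas are assumed oracles (the first decides X-closeness as in the sketch above, the second returns the labels \beta occurring on the non-redundant path between two histories); all names are hypothetical and the code only mirrors the defining equations.

def d_var(h, h2, x, are_close, path_betas):
    # Basic pseudo-distance d_x^W: the maximum label on the non-redundant path
    # if h and h2 are x-close, and 1 if they are x-far.
    if are_close(h, h2, frozenset([x])):
        return max(path_betas(h, h2), default=0.0)  # convention: max(emptyset) = 0
    return 1.0

def d_set(h, h2, X, are_close, path_betas):
    # Chebyshev lift d_X^W = max over x in X of d_x^W (and 0 if X is empty).
    return max((d_var(h, h2, x, are_close, path_betas) for x in X), default=0.0)

def open_ball(W, h, X, eps, are_close, path_betas):
    # The open ball B_X(h, eps) = { h2 in W : d_X^W(h, h2) < eps }.
    return {h2 for h2 in W if d_set(h, h2, X, are_close, path_betas) < eps}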

We now proceed to prove a number of useful results. First, using the additivity of X-closeness (part 8 of Claim 1), we can immediately extend the above defining equations for d_{x}^{W} to sets X of variables (given the definition of d_{X}^{W} in terms of Chebyshev distance):

Claim 2. For every set of variables X\subseteq V, the functions d_{X}^{W} satisfy the conditions:

d_{X}^{W}(h,h^{\prime}) = 0, if X=\emptyset;
d_{X}^{W}(h,h^{\prime}) = \max\{\beta\in[0,1):\exists e,(e,Y^{\beta},s)\in[h,h^{\prime}]\mbox{ with }s\in S,Y\subseteq V\}, if h,h^{\prime} are X-close with X\neq\emptyset;
d_{X}^{W}(h,h^{\prime}) = 1, if h and h^{\prime} are X-far (hence X\neq\emptyset).

As an immediate consequence of Claim 2, we have the following

Important Observations For all histories h,h^{\prime}\in W and all sets X,Y\subseteq V, we have:

  1. d_{X}^{W}(h,h^{\prime})<1 iff h and h^{\prime} are X-close;
  2. if d_{X}^{W}(h,h^{\prime}),d_{Y}^{W}(h,h^{\prime})<1 with X,Y\neq\emptyset, then d_{X}^{W}(h,h^{\prime})=d_{Y}^{W}(h,h^{\prime}).

Next, we look at distances between neighboring histories, spelling out a few immediate consequences of Claim 2:

Claim 3. For all neighboring histories h=(s_{0},\ldots,s) and h^{\prime}=(h,Y^{\beta},s^{\prime}), and all non-empty sets X,Z\subseteq V, we have the following:

  1. if d_{X}^{W}(h,h^{\prime})=0, then \beta=0, d_{Z}^{W}(h,h^{\prime})\in\{0,1\}, s\models D_{Y}X and s=_{Y}s^{\prime} (hence also s=_{X}s^{\prime});
  2. if X\subseteq Y, then d_{X}^{W}(h,h^{\prime})=\beta;
  3. if d_{X}^{W}(h,h^{\prime})<1, then d_{X}^{W}(h,h^{\prime})=\beta<1, s\models D_{Y}X and s\leq_{X}s^{\prime}.

Proof.

All parts follow from Claim 2 by simple case inspection. For part 1, we also use the constraints on histories and the fact that S^{0}_{X} is the same as X, as well as the fact that, by Claim 2, \beta=0 is consistent only with d_{Z}^{W}(h,h^{\prime})\in\{0,1\}. For part 2, we use the fact that X\subseteq Y implies {\mathbf{M}}\models U(Y;X), which in turn implies s\models D_{Y}X. For part 3, we use the fact that both U(Y;X) and K_{Y}X imply D_{Y}X, the constraints on histories (which imply that s\leq_{Y}s^{\prime}), and the fact that d_{X}^{W}(h,h^{\prime})<1 implies d_{X}^{W}(h,h^{\prime})=\beta<1 and that we have either s\models K_{Y}X (which together with s\leq_{Y}s^{\prime} gives us s\leq_{X}s^{\prime}) or \beta=0 (in which case the constraints on histories give us that s=_{X}s^{\prime}, hence also s\leq_{X}s^{\prime}). ∎
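As a concrete illustration of parts 2 and 3 (purely for orientation, under the constraints on histories assumed above): if h^{\prime}=(h,Y^{0.3},s^{\prime}) is a neighboring history of h and X\subseteq Y is non-empty, then X\subseteq Y gives {\mathbf{M}}\models U(Y;X), so h and h^{\prime} are X-close; the only label on the path from h to h^{\prime} is 0.3, and hence d_{X}^{W}(h,h^{\prime})=0.3<1.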

We now look at the key properties of our distances d_{X}^{W} between arbitrary histories:

Claim 4. The pseudo-distances d_{X}^{W} satisfy the following properties (for all histories h,h^{\prime},h^{\prime\prime},e,e^{\prime}\in W and all sets X,Y\subseteq V):

  1. d_{X}^{W}(h,h)=0;
  2. d_{X}^{W}(h,h^{\prime})=d_{X}^{W}(h^{\prime},h);
  3. d_{X}^{W}(h,h^{\prime})=0 holds iff h=_{X}^{W}h^{\prime};
  4. if h^{\prime\prime}\in[h,h^{\prime}], then d_{X}^{W}(h,h^{\prime})=\max\{d_{X}^{W}(h,h^{\prime\prime}),d_{X}^{W}(h^{\prime\prime},h^{\prime})\};
  5. if e,e^{\prime}\in[h,h^{\prime}], then d_{X}^{W}(e,e^{\prime})\leq d_{X}^{W}(h,h^{\prime});
  6. d_{X}^{W}(h,h^{\prime})\leq\max\{d_{X}^{W}(h,h^{\prime\prime}),d_{X}^{W}(h^{\prime\prime},h^{\prime})\} (for arbitrary h^{\prime\prime});
  7. if h\preceq h^{\prime} and d_{X}^{W}(h,h^{\prime})<1, then h\leq_{X}^{W}h^{\prime} (and hence last(h)\leq_{X}last(h^{\prime}));
  8. if d_{X}^{W}(h,h^{\prime})<1, then inf(h,h^{\prime})\leq_{X}^{W}h,h^{\prime} (and hence \delta^{h_{X}}=\delta^{h^{\prime}_{X}}=\delta^{(inf(h,h^{\prime}))_{X}});
  9. \mathcal{B}_{X}(h,\delta^{h})\subseteq h\uparrow_{X};
  10. last(h)\uparrow_{X}\subseteq last(\mathcal{B}_{X}(h,\varepsilon)) for all \varepsilon>0;
  11. if {\mathbf{M}}\models U(Y;X) and d_{Y}^{W}(h,h^{\prime})<1, then h and h^{\prime} are X-close;
  12. if last(h)\models_{\mathbf{M}}K_{Y}X and d_{Y}^{W}(h,h^{\prime})<\delta^{h}, then h and h^{\prime} are X-close.

Proof.

Part 1 follows from Claim 2 and the convention that \max\emptyset=0. Part 2 follows immediately from the symmetric shape of the definition of d_{X}^{W}(h,h^{\prime}) (the non-redundant path [h,h^{\prime}] and the notion of X-closeness do not depend on the order of h and h^{\prime}), together with the in-built symmetry of the clause d_{X}^{W}(h,h^{\prime})=1 for X-far histories.

For part 3: the special case in which h and h^{\prime} are neighboring histories, say h^{\prime}=(h,Y^{\beta},s^{\prime}), should be obvious, using Claim 3(1), via the following sequence of equivalences: d_{X}^{W}(h,h^{\prime})=0 iff (\beta=0\wedge last(h)\models D_{Y}X) iff (h^{\prime}=(h,Y^{0},s^{\prime})\wedge last(h)\models D_{Y}X) iff h\to_{X}^{=}h^{\prime} iff h=_{X}^{W}h^{\prime}. The general case follows from this special case by induction on the tree distance between h and h^{\prime} (using the general definition of d_{X}^{W} for non-neighboring histories, as well as the transitivity of =_{X}^{W}).

Part 4 follows immediately from the definition of d_{X}^{W}, together with the above-mentioned fact that

h^{\prime\prime}\in[h,h^{\prime}]\mbox{ implies }[h,h^{\prime}]=[h,h^{\prime\prime}]\cup[h^{\prime\prime},h^{\prime}].

Part 5 is another immediate consequence of the definition of d_{X}^{W}, while part 6 follows from the same definition together with another fact mentioned above, namely that the inclusion

[h,h^{\prime}]\subseteq[h,h^{\prime\prime}]\cup[h^{\prime\prime},h^{\prime}]

holds for arbitrary histories h^{\prime\prime}.

For part 7, assume h\preceq h^{\prime} and d_{X}^{W}(h,h^{\prime})<1. Then, for all pairs e,e^{\prime}\in[h,h^{\prime}] satisfying e\to e^{\prime}, we have d_{X}^{W}(e,e^{\prime})\leq d_{X}^{W}(h,h^{\prime})<1 (by part 5), hence by part 3 of Claim 3 we have last(e)\leq_{X}last(e^{\prime}). Putting all these together along the non-redundant path from h to h^{\prime} and applying the transitivity of \leq_{X}^{W}, we obtain the desired conclusion.

To prove part 8, assume d_{X}^{W}(h,h^{\prime})<1 and put h^{\prime\prime}:=inf(h,h^{\prime}). Since h^{\prime\prime}\in[h,h^{\prime}], by part 5 we have d_{X}^{W}(h^{\prime\prime},h),d_{X}^{W}(h^{\prime\prime},h^{\prime})\leq d_{X}^{W}(h,h^{\prime})<1. This, together with the fact that h^{\prime\prime}=inf(h,h^{\prime})\preceq h,h^{\prime}, gives by part 7 that h^{\prime\prime}\leq_{X}^{W}h,h^{\prime}, as desired.

For part 9, assume that h^{\prime}\in\mathcal{B}_{X}(h,\delta^{h}). From this and the fact that \delta^{h}\leq 1, we infer that d_{X}^{W}(h,h^{\prime})<1, so if we put h^{\prime\prime}:=inf(h,h^{\prime}), then by part 8 we have h^{\prime\prime}\leq_{X}^{W}h^{\prime}. It is clearly enough to prove that h^{\prime\prime}=_{X}^{W}h (hence h\leq_{X}^{W}h^{\prime\prime}\leq_{X}^{W}h^{\prime}, and thus h^{\prime}\in h\uparrow_{X}, as desired), which by part 3 is equivalent to showing that d_{X}^{W}(h^{\prime\prime},h)=0. For this, we need to prove that d_{X}^{W}(e,e^{\prime})=0 holds for all pairs e,e^{\prime}\in[h^{\prime\prime},h] with e\to e^{\prime}. To prove this, assume towards a contradiction that we have d_{X}^{W}(e,e^{\prime})\neq 0 for some such pair. Then, by the definition of \delta^{h} and the fact that e,e^{\prime}\preceq h (since h^{\prime\prime}\preceq h and e,e^{\prime}\in[h^{\prime\prime},h]), we have that \delta^{h}\leq d_{X}^{W}(e,e^{\prime}). But on the other hand, from e,e^{\prime}\in[h^{\prime\prime},h] and h^{\prime\prime}\preceq h,h^{\prime} we obtain by part 5 that d_{X}^{W}(e,e^{\prime})\leq d_{X}^{W}(h,h^{\prime\prime})\leq d_{X}^{W}(h,h^{\prime})<\delta^{h}, which contradicts \delta^{h}\leq d_{X}^{W}(e,e^{\prime}).

For part 10, assume that \varepsilon>0 and s\in last(h)\uparrow_{X}, i.e. last(h)\leq_{X}s. Choose some \beta\in(0,\varepsilon), and take the history h^{\prime}:=(h,X^{\beta},s). The fact that last(h)\leq_{X}s ensures that h^{\prime} is a well-defined history with last(h^{\prime})=s. By part 2 of Claim 3, we also have that d_{X}^{W}(h,h^{\prime})=\beta<\varepsilon, i.e. h^{\prime}\in\mathcal{B}_{X}(h,\varepsilon), and thus s=last(h^{\prime})\in last(\mathcal{B}_{X}(h,\varepsilon)), as desired.

For part 11, suppose that {\mathbf{M}}\models U(Y;X), and let h,h^{\prime}\in W be histories with d_{Y}^{W}(h,h^{\prime})<1, hence by Claim 2 h and h^{\prime} are Y-close. To show that h and h^{\prime} are also X-close, we need to check that all pairs of neighboring histories e,e^{\prime}\in[h,h^{\prime}] are X-close. For this, let e\in[h,h^{\prime}] with last(e)=s and e^{\prime}=(e,Z^{\beta},s^{\prime})\in[h,h^{\prime}] be such neighboring histories. By part 5, we have d_{Y}^{W}(e,e^{\prime})\leq d_{Y}^{W}(h,h^{\prime})<1. By Claim 2, this implies that we are in one of the following three cases: either {\mathbf{M}}\models U(Z;Y) and \beta\in(0,1), or s\models K_{Z}Y and \beta\in(0,\delta^{e_{Y}}), or else s\models D_{Z}Y and \beta=0. Using the assumption that {\mathbf{M}}\models U(Y;X) (which implies that s\models K_{Y}X and s\models D_{Y}X) and the transitivity axioms for U, K and D, as well as the fact that \delta^{e_{Y}}\leq\delta^{e_{X}} (due to Claim 1.(5) and {\mathbf{M}}\models U(Y;X)), we can easily see that each of these cases yields the analogous case for X: in other words, we have either {\mathbf{M}}\models U(Z;X) and \beta\in(0,1), or s\models K_{Z}X and \beta\in(0,\delta^{e_{X}}), or else s\models D_{Z}X and \beta=0. In each case e and e^{\prime} are X-close; since this holds for all pairs of neighboring histories in [h,h^{\prime}], h and h^{\prime} are X-close, as desired.

For part 12, let h and h^{\prime} be histories with last(h)\models_{\mathbf{M}}K_{Y}X and d_{Y}^{W}(h,h^{\prime})<\delta^{h}\leq 1; we can also assume that {\mathbf{M}}\not\models U(Y;X) (otherwise the desired conclusion follows immediately by part 11). Applying again Claim 2, h and h^{\prime} are Y-close, and hence every label \beta occurring on the non-redundant path from h to h^{\prime} satisfies \beta\leq d_{Y}^{W}(h,h^{\prime})<\delta^{h}. To show that h and h^{\prime} are also X-close, we need to check that all pairs of neighboring histories e,e^{\prime}\in[h,h^{\prime}] are X-close. For this, let e\in[h,h^{\prime}] and e^{\prime}=(e,Z^{\beta},s^{\prime})\in[h,h^{\prime}] be such neighboring histories. Since d_{Y}^{W}(h,h^{\prime})<\delta^{h}, by part 9 we have h^{\prime}\in\mathcal{B}_{Y}(h,\delta^{h})\subseteq h\uparrow_{Y}, i.e. h\leq_{Y}^{W}h^{\prime}. From this and the fact that e\in[h,h^{\prime}], we obtain that h\leq_{Y}^{W}e, which together with last(h)\models K_{Y}X gives us that last(e)\models K_{Y}X and h\leq_{X}^{W}e (by Claim 1.(15) in Section A.6), and so \delta^{h_{X}}=\delta^{e_{X}} by Claim 1.(4) above. Since h,h^{\prime} are Y-close and e,e^{\prime}\in[h,h^{\prime}], we have that e,e^{\prime} are Y-close. By the definition of Y-closeness and the assumption that {\mathbf{M}}\not\models U(Y;X), it follows that we are in one of the following two cases: either last(e)\models K_{Z}Y and \beta<\delta^{e_{Y}}, or else last(e)\models D_{Z}Y and \beta=0. In the first case, from last(e)\models K_{Z}Y and last(e)\models K_{Y}X we obtain last(e)\models K_{Z}X, which combined with \beta<\delta^{h}\leq\delta^{h_{X}}=\delta^{e_{X}} (due to parts 3 and 4 of Claim 1 and the fact that h\leq_{X}^{W}e) gives us that e,e^{\prime} are X-close, as desired. In the second case, in which we have last(e)\models D_{Z}Y and \beta=0, we combine these with the fact that last(e)\models K_{Y}X implies last(e)\models D_{Y}X, and use transitivity to conclude that we have last(e)\models D_{Z}X and \beta=0; thus e,e^{\prime} are again X-close, as desired. ∎

The facts in Claim 4 are the main ingredients ensuring that our construction will work as intended. We now proceed to show this in Claims 5-8, which wrap up the proof of our representation result.

Claim 5. Each d_{X}^{W} is an ultra-pseudo-metric, whose topology is a refinement of the Alexandroff topology given by the corresponding preorder \leq_{X}^{W}, and whose metric-topological indistinguishability relation (relating states s,w s.t. d_{X}^{W}(s,w)=0) coincides with the X-equality relation =_{X}^{W}. Thus, {\mathbf{M}}^{\infty}_{d} is a refinement of the (standard) topo-model {\mathbf{M}}^{\infty}_{\leq} to a standard pseudo-metric model. Moreover, the metric dependence model associated to {\mathbf{M}}^{\infty}_{d} is pseudo-locally Lipschitz (with Lipschitz constant 1), and so {\mathbf{M}}^{\infty}_{d} validates the ‘paradisiacal’ implication k_{X}Y\Rightarrow U_{X}Y.

Proof.

The fact that d_{X}^{W} is an ultra-pseudo-metric follows immediately by putting together items 1, 2 and 6 of Claim 4. By part 9 of Claim 4, we have that every upset h\uparrow_{X}=\{h^{\prime}\in W:h\leq_{X}^{W}h^{\prime}\} includes an X-open ball centered at h (namely the ball \mathcal{B}_{X}(h,\delta^{h})=\{h^{\prime}\in W:d_{X}^{W}(h,h^{\prime})<\delta^{h}\}), which implies that the pseudo-metric topology induced by d_{X}^{W} is a refinement of the preorder topology induced by the preorder \leq_{X}^{W}. Part 3 of Claim 4 shows that the metric indistinguishability relation coincides with X-equality =_{X}^{W}. Standardness is by the definition of d_{X}^{W} as the Chebyshev distance based on the d_{x}^{W}’s. Finally, the fact that the associated metric dependence model is pseudo-locally Lipschitz (with constant K=1) follows immediately from the (second part of the) Important Observations following Claim 2. ∎
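For reference (this only spells out a standard definition), parts 1, 2 and 6 of Claim 4, invoked in the first step of this proof, are exactly the defining conditions of an ultra-pseudo-metric:

d_{X}^{W}(h,h)=0,\qquad d_{X}^{W}(h,h^{\prime})=d_{X}^{W}(h^{\prime},h),\qquad d_{X}^{W}(h,h^{\prime})\leq\max\{d_{X}^{W}(h,h^{\prime\prime}),d_{X}^{W}(h^{\prime\prime},h^{\prime})\}.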

Claim 6. h\models_{{\mathbf{M}}^{\infty}_{d}}K_{Y}X iff last(h)\models_{{\mathbf{M}}}K_{Y}X.

Proof.

We can assume that X\neq\emptyset; otherwise both sides hold by definition.

For the left-to-right direction, suppose that h\in W is a history with h\models_{{\mathbf{M}}^{\infty}_{d}}K_{Y}X. We will only use a weaker consequence of this, namely the fact that h\models_{{\mathbf{M}}^{\infty}_{d}}k_{Y}X (pointwise continuous dependence at h); note that, by the last part of Claim 5, this property is actually equivalent to K_{Y}X on the ‘paradisiacal’ model {\mathbf{M}}^{\infty}_{d}. Choose any \varepsilon\in(0,1). Then, by the continuous dependence k_{Y}X at the point h, there exists some \delta>0 s.t.

(**)\quad\mathcal{B}_{Y}(h,\delta)\subseteq\mathcal{B}_{X}(h,\varepsilon).

Take now the history h^{\prime}:=(h,Y^{\beta},s), where s:=last(h) and \beta is any chosen number with 0<\beta<\delta. Since d_{Y}^{W}(h,h^{\prime})=\beta<\delta, we obtain by (**) that d_{X}^{W}(h,h^{\prime})<\varepsilon<1. Given the definition of d_{X}^{W} on neighboring histories h and h^{\prime} and the assumption that X\neq\emptyset, we can infer from d_{X}^{W}(h,h^{\prime})<1 and \beta\neq 0 that we have s=last(h)\models_{\mathbf{M}}K_{Y}X, as desired.

For the converse, let h\in W be a history with s:=last(h)\models_{\mathbf{M}}K_{Y}X. We need to show that h\models_{{\mathbf{M}}^{\infty}_{d}}K_{Y}X. For this, we choose \delta_{0}:=\delta^{h}, and have to prove that X is continuously dependent on Y on the open neighborhood \mathcal{B}_{Y}(h,\delta_{0}). In fact, we will prove a stronger statement, namely that X is uniformly continuously dependent on Y on this neighborhood (once again, note that by Claim 5 this apparently stronger condition is actually equivalent to K_{Y}X on {\mathbf{M}}^{\infty}_{d}). Let \varepsilon>0 be arbitrary. To show uniformly continuous dependence on our given neighborhood, we need to show that

(?)\quad\exists\delta>0\mbox{ s.t. }\forall h^{\prime},h^{\prime\prime}\in\mathcal{B}_{Y}(h,\delta_{0})\,\left(\,d_{Y}^{W}(h^{\prime},h^{\prime\prime})<\delta\Rightarrow d_{X}^{W}(h^{\prime},h^{\prime\prime})<\varepsilon\,\right).

For this, we choose \delta:=\min(\delta_{0},\varepsilon)=\min(\delta^{h},\varepsilon), so that \delta\leq\delta^{h},\varepsilon. Let h^{\prime},h^{\prime\prime}\in\mathcal{B}_{Y}(h,\delta_{0}) be s.t. d_{Y}^{W}(h^{\prime},h^{\prime\prime})<\delta; so we have d_{Y}^{W}(h,h^{\prime}),d_{Y}^{W}(h,h^{\prime\prime})<\delta^{h} (since \delta_{0}=\delta^{h} and \delta\leq\delta^{h}). From these, together with our assumption that last(h)\models_{\mathbf{M}}K_{Y}X, we infer (by part 12 of Claim 4) that h is X-close to both h^{\prime} and h^{\prime\prime}. Since X-closeness is an equivalence relation (part 9 of Claim 1), it follows that h^{\prime} and h^{\prime\prime} are X-close. By the Important Observations following Claim 2, we obtain that d_{X}^{W}(h^{\prime},h^{\prime\prime})=d_{Y}^{W}(h^{\prime},h^{\prime\prime})<\delta\leq\varepsilon, as desired. ∎


Next, we prove the analogous assertion for uniform dependence:

Claim 7. {\mathbf{M}}^{\infty}_{d}\models U(Y;X) iff {\mathbf{M}}\models U(Y;X).

Proof.

For the left-to-right direction, suppose that {\mathbf{M}}^{\infty}_{d}\models U(Y;X), and assume towards a contradiction that {\mathbf{M}}\not\models U(Y;X). Choose some number \varepsilon\in(0,1). Applying the definition of {\mathbf{M}}^{\infty}_{d}\models U(Y;X) to \varepsilon, it follows that there exists some number \beta\in(0,1) with the property that

(*)\quad\forall h,h^{\prime}\in W\,(d_{Y}^{W}(h,h^{\prime})<\beta\Rightarrow d_{X}^{W}(h,h^{\prime})<\varepsilon).

Fix now some \gamma\in(0,\beta), and consider the histories h:=(s_{0},\emptyset^{\gamma},s_{0}) and h^{\prime}:=(h,Y^{\gamma},s_{0})=(s_{0},\emptyset^{\gamma},s_{0},Y^{\gamma},s_{0}). By Claim 3.(2), we have d_{Y}^{W}(h,h^{\prime})=\gamma<\beta, so by (*) we have d_{X}^{W}(h,h^{\prime})<\varepsilon<1, hence d_{X}^{W}(h,h^{\prime})\neq 1, and thus (by Claim 2) h and h^{\prime} are X-close. From this, together with the assumption that {\mathbf{M}}\not\models U(Y;X) and the fact that the history h^{\prime}=(h,Y^{\gamma},s_{0}) has \gamma\neq 0, we obtain (using the definition of X-closeness) that s_{0}=last(h)\models K_{Y}X and \gamma<\delta^{h_{X}}. We consider now two cases: (s_{0})\leq_{X}^{W}h and (s_{0})\not\leq_{X}^{W}h. If (s_{0})\leq_{X}^{W}h, then in fact we must have (s_{0})\to_{X}^{\leq}h (since (s_{0}) and h are neighboring histories), so by the definition of \to_{X}^{\leq} we infer that s_{0}\models K_{\emptyset}X, and hence s_{0}\models U(Y;X) (by the axiom K(X)\Rightarrow U(Y;X) of the system \mathbf{LUD}), thus contradicting our assumption that {\mathbf{M}}\not\models U(Y;X). In the alternative case, when (s_{0})\not\leq_{X}^{W}h, it follows that h=(s_{0},\emptyset^{\gamma},s_{0}) is its own X-root (since it contains no proper sub-history h^{\prime\prime}\prec h with h^{\prime\prime}\leq_{X}^{W}h), i.e. h=h_{X}, and thus we infer \delta^{h_{X}}=\delta^{h}=\gamma<\delta^{h_{X}}, obtaining the contradiction \delta^{h_{X}}<\delta^{h_{X}}.

For the converse direction, suppose that {\mathbf{M}}\models U(Y;X). To prove that {\mathbf{M}}^{\infty}_{d}\models U(Y;X), let \varepsilon be any number with 0<\varepsilon<1; we need to show that there exists \delta>0 s.t.

\forall h,h^{\prime}\in W\,(d_{Y}^{W}(h,h^{\prime})<\delta\Rightarrow d_{X}^{W}(h,h^{\prime})<\varepsilon).

For this, we take \delta:=\varepsilon. To prove the desired conclusion, let h,h^{\prime}\in W be histories with d_{Y}^{W}(h,h^{\prime})<\delta=\varepsilon<1. By part 11 of Claim 4, the additivity of closeness (part 8 of Claim 1) and the fact that {\mathbf{M}}\models U(Y;X), this implies that h and h^{\prime} are X\cup Y-close, and thus (by the Important Observations following Claim 2) we have d_{X}^{W}(h,h^{\prime})=d_{Y}^{W}(h,h^{\prime})<\delta=\varepsilon, as desired. ∎

Claim 8. The map last:W\to S, mapping each history h\in W to its last element last(h)\in S, is a uniform topo-morphism from {\mathbf{M}}^{\infty}_{d} to {\mathbf{M}}.

Proof.

In the proof of Lemma A.23 (our representation result for preorder models), we showed that the map last is a surjective topo-morphism from {\mathbf{M}}^{\infty}_{\leq} to {\mathbf{M}}. In particular, using the atomic clauses for P\vec{x} and D_{X}Y in the definition of that topo-morphism, together with the fact that the models {\mathbf{M}}^{\infty}_{d} and {\mathbf{M}}^{\infty}_{\leq} agree on the semantics of these atoms (which, in the case of D_{X}Y, follows from the fact that both {\mathbf{M}}^{\infty}_{d} and {\mathbf{M}}^{\infty}_{\leq} are standard and that by Claim 5 the pseudo-metric indistinguishability coincides with X-equality =_{X}^{W}), we obtain the corresponding atomic clauses for P\vec{x} and D_{X}Y in the definition of topo-morphism from {\mathbf{M}}^{\infty}_{d} to {\mathbf{M}}.

The atomic clauses for K_{X}Y and U(X;Y) follow from Claims 6 and 7.

The fact that the map last is a p-morphism wrt =_{X} was already checked in the proof of Lemma A.23. Since Claim 5 tells us that the relation =_{X}^{W} is the same as the pseudo-metric indistinguishability relation, it follows that last is a topo-morphism with respect to this relation.

Finally, we need to prove that last is an interior map, i.e. both open and continuous, when seen as a map from {\mathbf{M}}^{\infty}_{d} to {\mathbf{M}}. For this, we will make use of the fact that (by the proof of Lemma A.23) last is an interior map (hence continuous) when seen as a map from {\mathbf{M}}^{\infty}_{\leq} to {\mathbf{M}}.

To show that last is open, let U be an open set in \tau_{X}^{W}. We need to prove that last(U) is upward-closed in S wrt \leq_{X}. For this, let last(h)\in last(U) with h\in U; we have to show that last(h)\uparrow_{X} is included in last(U). Since h\in U and U is open in the pseudo-metric topology induced by d_{X}^{W}, there must exist some \varepsilon>0 s.t. \mathcal{B}_{X}(h,\varepsilon)\subseteq U. By part 10 of Claim 4, we obtain last(h)\uparrow_{X}\subseteq last(\mathcal{B}_{X}(h,\varepsilon))\subseteq last(U), as desired.

To show that last is continuous, let V\subseteq S be upward-closed in S wrt \leq_{X}. We need to prove that last^{-1}(V) is open in the pseudo-metric topology \tau_{X}^{W}. Since the map last was shown in the previous section to be a p-morphism from {\mathbf{M}}^{\infty}_{\leq} to {\mathbf{M}} (or equivalently, an interior map between the two preorder topologies), it follows that last^{-1}(V) is upward-closed in W wrt \leq_{X}^{W} (i.e., open in the Alexandroff topology induced by \leq_{X}^{W}). But our pseudo-metric topology \tau_{X}^{W} is a refinement of the Alexandroff topology induced by \leq_{X}^{W}, and therefore last^{-1}(V) is also open in the pseudo-metric topology. ∎


This finishes the proof of our representation theorem, hence also our completeness and decidability proof for \mathbf{LUD} with respect to metric dependence models. Since the model we constructed is pseudo-locally Lipschitz, this also proves completeness for pseudo-locally Lipschitz models.