
Elementary Logic in Linear Space

Daniel Huang, University of California, Berkeley, EECS, [email protected]
Abstract.

First-order logic is typically presented as the study of deduction in a setting with elementary quantification. In this paper, we take another vantage point and conceptualize first-order logic as a linear space that encodes “plausibility”. Whereas a deductive perspective emphasizes how (i.e., process), a space perspective emphasizes where (i.e., location). We explore several consequences that a shift in perspective to “signals in space” has for first-order logic, including (1) a notion of proof based on orthogonal decomposition, (2) a method for assigning probabilities to sentences that reflects logical uncertainty, and (3) a “models as boundary” principle that relates the models of a theory to its “size”.

first-order logic, linear space, logical uncertainty, distributive normal form

1. Introduction

The inspiration for this paper has its origins in Pólya’s writings on plausible reasoning (Pólya, 1990a, b, 2004).

Finished mathematics presented in a finished form appears as purely demonstrative, consisting of proofs only. Yet mathematics in the making resembles any other human knowledge in the making. You have to guess a mathematical theorem before you can prove it; you have to guess the idea of the proof before you carry through the details. You have to combine observations and follow analogies; you have to try and try again. (Pólya, 1990a, pg. vi)

This leads us to ask: Where is the information content of logical experimentation—the guesses, observations, and analogies—located? In this paper, we give our attempt at answering this question in the restricted setting where we use a first-order language to express statements.

The standard presentation of first-order logic is as the study of deduction in a setting with elementary (i.e., over elements of sets) quantification. Thus our first task is to identify a space that holds the information content of first-order sentences. Towards this end, we build upon ideas found in (Huang, 2019) which introduces a probability distribution on first-order sentences and identifies a corresponding space. We organize our exploration as a series of questions.

  • Is there a space that encodes the information content of first-order sentences with fixed quantifier rank (Section 2)? We use the observation that every first-order sentence can be written in distributive normal form (Hintikka, 1965) and identify a vector space where sentences of fixed quantifier rank are represented as vectors.

  • Is there a space for the case of sentences with arbitrary quantifier rank (Section 3)? We will see that an infinite dimensional analogue of a vector space—a Hilbert space—is an appropriate generalization to the setting with arbitrary quantifier rank. A first-order sentence can be identified with a signal in this space.

  • How does deduction affect the “plausibility” of a logical system (Section 4)? We can conceptualize a logical system as carrying the “plausibility” of first-order sentences. Similar to how a physical system carries an invariant amount of energy in different forms, a logical system carries an invariant amount of “plausibility” in different places. Under this formulation, we will see that deduction is an entropic operation.

At this point, we will have a view of first-order logic as “signals in space” and so can turn our attention towards the geometry of sentences in space.

  • Where is a first-order theory located (Section 5)? A theory manifests itself as a subspace of a certain dimension. It turns out that a complete theory has dimension one whereas an undecidable theory’s dimension is not computable.

  • What does orthogonal decomposition reveal about proofs (Section 6)? We show how a proof can be constructed by decomposing the “plausibility” of a first-order sentence into its orthogonal components. Counter-examples (i.e., theorem proving) can be used to approximate a theory from below and examples (i.e., model checking) can be used to approximate a theory from above. An application to assigning probabilities to first-order sentences that reflect logical uncertainty and a discussion of conjecturing will also be given.

  • How much space does a first-order theory occupy (Section 7)? We explore two aspects of a “models as boundary” principle that highlight the difficulty of approximating a theory. In particular, we will encounter a familiar (edge) isoperimetric principle on the Boolean hypercube: a theory with small “volume” (i.e., variance) can have unusually large “perimeter” (i.e., many models).

We emphasize that our aim in this paper is to offer a complementary perspective on first-order logic as “signals in space”. At the end of the paper, we will discuss some directions for future work.

Preliminaries

We assume familiarity with the syntax and semantics of first-order logic, vector spaces, and basic probability theory.

As notation, we write a formula $\phi$ with free variables $y_{1},\dots,y_{k}$ as $\phi[\bar{y}_{k}]$ or simply $\phi[\bar{y}]$. When the quantifier rank of a sentence is important, we indicate it as $\phi^{(r)}$. Throughout the paper, we work with a first-order language $\mathcal{L}$ with equality, a finite number of predicates, and no constants or function symbols.

Let $\mathbf{2}\triangleq\{0,1\}\cong\{\bot,\top\}$ denote the two-element set, where $\top\triangleq(\forall x)\,x=x\lor\lnot((\forall x)\,x=x)$ and $\bot\triangleq\lnot\top$. Let $\mathbf{Set}(X)\triangleq\mathbf{2}^{X}$ denote the powerset of $X$.

2. Space: Fixed Quantifier Rank

Is there a space that encodes the information content of first-order sentences with fixed quantifier rank? We begin our exploration of first-order logic as a space starting with the simpler case of fixed quantifier rank. We use the observation that every first-order sentence can be written in distributive normal form (Hintikka, 1965) and identify a vector space where sentences of fixed quantifier rank are represented as vectors (Section 2.1). We then examine basic properties of the vector space (Section 2.2). Throughout this section, we restrict attention to sentences of quantifier rank $r$.

2.1. Vector Space

We define a vector space for rank $r$ sentences by identifying a basis of rank $r$ constituents for the vector space. Before we review the definition of constituents, we highlight two properties that make them a natural candidate for a basis.

Proposition 2.1 ((Hintikka, 1965)).
Distributive normal form:

Every formula $\phi^{(r)}[\bar{y}]$ can be written in distributive normal form, i.e., as a disjunction

$$\phi^{(r)}[\bar{y}]\equiv\bigvee_{\delta^{(r)}[\bar{y}]\in\operatorname{dnf}(\phi^{(r)}[\bar{y}])}\delta^{(r)}[\bar{y}]$$

where $\operatorname{dnf}:\mathcal{L}[\bar{y}]\rightarrow\mathbf{Set}(\Delta^{(r)}[\bar{y}])$ is a function that maps a formula (an element of $\mathcal{L}[\bar{y}]$) to a subset of constituents (an element of $\mathbf{Set}(\Delta^{(r)}[\bar{y}])$).

Mutual exclusion:

Any two distinct constituents $\delta^{(r)}_{i}[\bar{y}]$ and $\delta^{(r)}_{j}[\bar{y}]$ are mutually exclusive, i.e., $\vDash\delta^{(r)}_{i}[\bar{y}]\rightarrow\lnot\delta^{(r)}_{j}[\bar{y}]$.

The first item captures the idea that every vector (formula) can be expressed as a linear combination of basis vectors (constituents), where logical or is interpreted as vector addition. The second item suggests an inner product for the vector space: two formulas that denote logically distinct possibilities will be orthogonal. Thus constituents actually give an orthogonal basis. We now introduce constituents concretely, following (Hintikka, 1965).

The intuitive idea behind the definition of a constituent is that we would like to describe possible kinds of “logical worlds” by enumerating descriptions of how $r$ individuals in the domain are related to one another, uniformly in $r$. (Notably, this method of describing possibilities yields a finite number of finite descriptions for each quantifier rank $r$. In contrast, attempting to describe possibilities by enumerating the individuals in the domain may result in an infinite description when the domain is infinite.) In order to define constituents, we require (1) several auxiliary definitions and (2) the notion of an attributive constituent. We introduce these in turn.

Auxiliary definitions

Let $\mathcal{A}[\bar{y}]$ denote the set of all atomic formulas (i.e., a predicate applied to a tuple of variables) involving the free variables $\bar{y}$. Let $\mathcal{B}[\bar{y}]$ denote the subset of $\mathcal{A}[\bar{y}]$ that mentions the last element of $\bar{y}$ at least once. Let

$$\mathbf{S}(\{\phi_{1},\dots,\phi_{k}\})\triangleq\Big\{\bigwedge_{i\in\{1,\dots,k\}}(\pm)^{b_{i}}\phi_{i}\;\Big|\;b_{1}\in\mathbf{2},\dots,b_{k}\in\mathbf{2}\Big\}$$

be the set whose elements are conjunctions containing every $\phi_{i}$ either positively or negated, where $(\pm)^{0}\phi\triangleq\lnot\phi$ and $(\pm)^{1}\phi\triangleq\phi$.
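Concretely, the construction of $\mathbf{S}$ can be sketched in a few lines of Python (the string encoding of formulas is ours, purely for illustration):

from itertools import product

def S(phis):
    # Enumerate the 2^k signed conjunctions over phis, following the
    # definition above; formulas are encoded as strings for illustration.
    signed = []
    for bits in product((0, 1), repeat=len(phis)):
        parts = [p if b == 1 else "~(" + p + ")" for p, b in zip(phis, bits)]
        signed.append(" & ".join(parts))
    return signed

print(S(["P(y1)", "Q(y1)"]))  # 2^2 = 4 elements, one per sign assignment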

Attributive constituents

The set of attributive constituents $\Gamma^{(r)}[\bar{y}]$ with free variables $\bar{y}$ is defined by induction on quantifier rank $r$.

In the base case,

$$\Gamma^{(0)}[\bar{y}]\triangleq\mathbf{S}(\mathcal{B}[\bar{y}])\,.$$

Intuitively, a rank 0 attributive constituent with free variables $\bar{y}=y_{1},\dots,y_{k}$ describes how the individual $y_{k}$ relates to the other individuals $y_{1},\dots,y_{k-1}$ via all the atomic formulas in the language.

In the inductive case,

$$\Gamma^{(r)}[\bar{y}]\triangleq\Big\{\gamma^{(0)}[\bar{y}]\land\bigwedge_{\gamma[\bar{y};z]\in\Gamma^{(r-1)}[\bar{y};z]}(\pm)^{s(\gamma[\bar{y};z])}(\exists^{E}z)\,\gamma[\bar{y};z]\;\Big|\;\gamma^{(0)}\in\Gamma^{(0)}[\bar{y}],\;s:\Gamma^{(r-1)}[\bar{y};z]\rightarrow\mathbf{2}\Big\}\,.$$

The notation $(\exists^{E}z)$ indicates an exclusive interpretation of quantification so that $z$ is distinct from all other variables in scope. For example, $(\exists^{E}z)\,\phi[\bar{y};z]\triangleq(\exists z)\,z\neq y_{1}\land\dots\land z\neq y_{k}\land\phi[\bar{y};z]$, where $x\neq y$ is shorthand for $\lnot(x=y)$. Intuitively, a rank $r$ attributive constituent with free variables $\bar{y}=y_{1},\dots,y_{k}$ describes (1) how $y_{k}$ is related to each of $y_{1},\dots,y_{k-1}$ and (2) for every possible smaller description (i.e., a rank $r-1$ attributive constituent), whether or not an additional individual $z$ exists that is related to the other individuals $\bar{y}$ recursively via that description.
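To make the inductive structure concrete, the following Python sketch enumerates attributive constituents abstractly: a rank $r$ constituent is modeled as a pair of a rank 0 constituent and the subset of rank $r-1$ constituents asserted to have witnesses (the subset the function $s$ sends to 1). The growth of the free-variable list is abstracted away, so this is a shape-level illustration only.

from itertools import chain, combinations

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

def Gamma(r, gamma0):
    # Rank-r attributive constituents modeled as (g0, E): a rank-0
    # constituent g0 plus the set E of rank-(r-1) constituents asserted
    # to have a witness.
    if r == 0:
        return [(g0, frozenset()) for g0 in gamma0]
    prev = Gamma(r - 1, gamma0)
    return [(g0, frozenset(E)) for g0 in gamma0 for E in powerset(prev)]

print(len(Gamma(1, ["g_a", "g_b"])))  # |Gamma(0)| * 2^|Gamma(0)| = 2 * 4 = 8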

Constituents

The set of rank $r$ constituents $\Delta^{(r)}[\bar{y}]$ with free variables $\bar{y}$ is defined using attributive constituents as

$$\Delta^{(r)}[\bar{y};z]\triangleq\{A[\bar{y}]\land\gamma^{(r)}[\bar{y};z]\mid A[\bar{y}]\in\mathcal{A}[\bar{y}],\;\gamma^{(r)}[\bar{y};z]\in\Gamma^{(r)}[\bar{y};z]\}\,.$$

Thus a rank $r$ constituent with free variables $\bar{y};z$ completes the description of a rank $r$ attributive constituent by additionally describing how the terms in $\bar{y}$ are related to one another. We can therefore think of a constituent as describing a possible kind of “logical world”.

Some constituents will be unsatisfiable and hence describe possibilities that are impossible. The satisfiable constituents, on the other hand, describe logically possible worlds.

Proposition 2.2 ((Hintikka, 1965)).

A formula $\phi^{(r)}[\bar{y}]$ is logically valid iff its distributive normal form contains all satisfiable rank $r$ constituents.

Of course, whether or not a constituent is satisfiable is not decidable because validity of first-order sentences is not decidable. When there are no free variables, attributive constituents and constituents are identical. We abbreviate $\Delta^{(r)}[]$ as $\Delta^{(r)}$. Throughout the rest of the paper, we will largely focus on the case with no free variables, i.e., sentences, unless indicated otherwise. (A constituent $\delta^{(r)}[\bar{y}_{k}]$ is equivalent to $(\exists y_{1})\dots(\exists y_{k})\,\delta^{(r)}[\bar{y}_{k}]$, which has a distributive normal form of rank $r+k$.)

Constituents as a basis

As a reminder, a (real-valued) vector space $(V,+,\cdot)$ is a set $V$ together with vector addition $+:V\times V\rightarrow V$ and scalar multiplication $\cdot:\mathbb{R}\times V\rightarrow V$ satisfying the vector space axioms. We define a vector space for sentences of quantifier rank $r$ by defining the set and the two operations.

Definition 2.3.

Let $V\triangleq\{a_{1}\hat{\delta}^{(r)}_{1}+\dots+a_{n}\hat{\delta}^{(r)}_{n}\mid\{\delta^{(r)}_{1},\dots,\delta^{(r)}_{n}\}\in\mathbf{Set}(\Delta^{(r)}),\;a_{1},\dots,a_{n}\in\mathbb{R}\}$. Define $+$ as component-wise addition and $\cdot$ as component-wise scalar multiplication.

Proposition 2.4.

$\mathbb{V}^{(r)}\triangleq(V,+,\cdot)$ is a vector space with zero $\vec{0}$ given by the all-zeros vector and additive inverses given by negating the vector components.

Proof.

Routine verification of the vector space axioms. ∎

Proposition 2.5.

Constituents form a basis for $\mathbb{V}^{(r)}$.

Proof.

By construction. ∎

Because a vector space can be described by many different bases, we should think of constituents as the standard basis, similar to the unit vectors of $\mathbb{R}^{d}$.

The vector space spans all rank $r$ first-order sentences.

Proposition 2.6.

Every (rank $r$) first-order sentence $\phi^{(r)}$ can be written as

$$\bm{\phi}^{(r)}=a_{1}\hat{\delta}^{(r)}_{1}+\dots+a_{n}\hat{\delta}^{(r)}_{n}$$

where $\operatorname{dnf}(\phi^{(r)})=\{\delta^{(r)}_{1},\dots,\delta^{(r)}_{n}\}$ and $a_{1},\dots,a_{n}$ are any non-zero coefficients. Unless stated otherwise, we assume $a_{1}=\dots=a_{n}=|\operatorname{dnf}(\phi^{(r)})|$.

Proof.

Every first-order sentence of rank $r$ can be written as a disjunction of rank $r$ constituents (Proposition 2.1).

As a reminder, every sentence of rank $s\leq r$ is equivalent to some sentence of rank $r$. Thus this vector space expresses all first-order sentences up to rank $r$.

2.2. Structure

We review some of the structure of the vector space in this section.

Dimensions

The number of attributive constituents of rank $r$ can be computed recursively as

$$|\Gamma^{(r)}[\bar{y}]|\triangleq|\Gamma^{(0)}[\bar{y}]|\cdot 2^{|\Gamma^{(r-1)}[\bar{y};z]|}$$

where $|\Gamma^{(0)}[\bar{y}]|$ is combinatorial in the number of base predicates in $\bar{y}$. The number of constituents of rank $r$ is then

$$|\Delta^{(r)}[\bar{y}]|\triangleq|\Delta^{(0)}[\bar{y}]|\cdot 2^{|\Gamma^{(r-1)}[\bar{y};z]|}$$

where $|\Delta^{(0)}[\bar{y}]|$ is likewise combinatorial in the number of base predicates in $\bar{y}$. Consequently, the vector space has an astronomic, super-exponential number of dimensions as a function of quantifier rank $r$. Later, in Section 7, we will see how models of a theory can be used to approximately span the theory.
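To make the growth concrete, here is a small Python sketch of the counting recursion; the base count $n_0=|\Gamma^{(0)}|$ is a made-up input (for a real language it would be computed from the predicates):

def num_gamma(r, n0):
    # |Gamma^(r)| = |Gamma^(0)| * 2^(|Gamma^(r-1)|), following the recursion above.
    return n0 if r == 0 else n0 * 2 ** num_gamma(r - 1, n0)

for r in range(4):
    print(r, num_gamma(r, 2))  # 2, 8, 512, then already a 155-digit number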

Orthogonality

We can define an inner product between two vectors in the standard way:

$$\langle v,w\rangle\triangleq\sum_{\delta^{(r)}\in\Delta^{(r)}}v_{\delta^{(r)}}\,w_{\delta^{(r)}}\,.$$

It is a routine exercise to check that this is an inner product. As usual, two vectors are orthogonal if their inner product is 0.

Proposition 2.7.

Two vectors are orthogonal in the space if the sentences they represent are logically mutually exclusive.

Proof.

By Proposition 2.1. ∎

The converse does not hold because there can be multiple inconsistent constituents at a given quantifier rank $r$. In other words, there are multiple ways to express falsehood in this standard basis, so we may have two unsatisfiable sentences whose inner product is 0. Note that if we remove all inconsistent constituents of rank $r$ from the basis, then we end up with a basis of $r$-isomorphism types (e.g., see (Libkin, 2004)). In this case, falsehood is represented by $\vec{0}$.
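As a small illustration, the following Python sketch (over a hypothetical four-constituent standard basis) checks that sentences with disjoint distributive normal forms are orthogonal:

import numpy as np

BASIS = ["d1", "d2", "d3", "d4"]  # hypothetical rank-r constituents

def vec(dnf):
    # Sentence vector with the paper's default coefficients a_i = |dnf|.
    return np.array([float(len(dnf)) if d in dnf else 0.0 for d in BASIS])

phi = vec({"d1", "d2"})
psi = vec({"d3", "d4"})
print(float(np.dot(phi, psi)))  # 0.0: disjoint dnfs, hence orthogonal
print(float(np.dot(phi, phi)))  # 8.0: nonzero with itself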

Gram-Schmidt

Let $\mathcal{E}$ be an enumeration of $\mathcal{L}^{(r)}$ that spans it, i.e., every first-order sentence of rank $r$ can be written as a disjunction of elements from $\mathcal{E}$. Given an enumeration $\phi^{(r)}_{1},\phi^{(r)}_{2},\dots$ of $\mathcal{L}^{(r)}$, we can construct an enumeration that spans it as $\lnot\phi^{(r)}_{1},\;\phi^{(r)}_{1},\;\lnot\phi^{(r)}_{1}\land\lnot\phi^{(r)}_{2},\;\lnot\phi^{(r)}_{1}\land\phi^{(r)}_{2},\;\phi^{(r)}_{1}\land\lnot\phi^{(r)}_{2},\;\phi^{(r)}_{1}\land\phi^{(r)}_{2},\;\dots$.

Recall that the Gram-Schmidt process is a method for constructing an orthonormal basis from a set of vectors that spans the space. Applying the Gram-Schmidt process results in the following basis: $\hat{e}_{1}\triangleq\phi^{(r)}_{1}$, $\hat{e}_{2}\triangleq\lnot\phi^{(r)}_{1}\land\phi^{(r)}_{2}$, $\hat{e}_{3}\triangleq\lnot\phi^{(r)}_{1}\land\lnot\phi^{(r)}_{2}\land\phi^{(r)}_{3}$, and so on.
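For concreteness, here is a standard Gram-Schmidt sketch in Python over $\mathbb{R}^{3}$; the spanning set is a made-up stand-in for vectors obtained from such an enumeration:

import numpy as np

def gram_schmidt(vectors, eps=1e-10):
    # Orthonormalize a spanning list of vectors, dropping dependent ones.
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, b) * b for b in basis)
        norm = np.linalg.norm(w)
        if norm > eps:
            basis.append(w / norm)
    return basis

span = [np.array([1.0, 1.0, 0.0]),
        np.array([1.0, 0.0, 0.0]),
        np.array([1.0, 1.0, 1.0])]
for b in gram_schmidt(span):
    print(np.round(b, 3))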

3. Space: Arbitrary Quantifier Rank

Is there a space for the case of sentences with arbitrary quantifier rank? The case of sentences with fixed quantifier rank suffices to illustrate the concepts that we hope to explore in the rest of the paper. Nevertheless, for the sake of completeness, we generalize the space perspective for sentences with fixed quantifier rank to the case where sentences have arbitrary quantifier rank. Towards this end, we identify a Hilbert space where first-order sentences are represented as signals in this space.

The construction of the Hilbert space is inspired by the one given in (Huang, 2019) (Section 3.1 and Section 3.2). The main difference is that we take the space view as primary whereas (Huang, 2019) defines a probability distribution on first-order sentences and derives the corresponding space.

3.1. Limits of Logical Descriptions

Figure 1. An illustration of a refinement tree $T$: a root constituent $\delta^{(0)}$ refined by rank 1 constituents $\delta^{(1)}_{a}$ and $\delta^{(1)}_{b}$, which are in turn refined by rank 2 constituents $\delta^{(2)}_{c},\dots,\delta^{(2)}_{f}$. Each vertex represents a constituent and each edge indicates a refinement relation.

The intuitive difference between the fixed quantifier rank case and the arbitrary quantifier rank case is “resolution”: the former considers descriptions of logical possibilities at a fixed resolution whereas the latter considers descriptions of logical possibilities at arbitrary resolution. It is thus natural to think of an infinitely precise description of a logical possibility as the limit of a sequence of increasingly finer descriptions.

Our task in this section is to formalize this intuition of the limit of a convergent sequence of logical descriptions. The contents of this section are rather tedious and not essential to the rest of the paper. The high-level takeaway is that we can associate a tree structure (Figure 1) with the set of constituents $\Delta\triangleq\bigcup_{r\in\mathbb{N}}\Delta^{(r)}$, where each (infinite) path $(\delta^{(r)})_{r\in\mathbb{N}}$ in the tree corresponds to a sequence of constituents ordered by quantifier rank that provide increasingly finer descriptions. The limiting description is the “endpoint” of the corresponding path and provides an infinitely precise description.

Expansion

We begin by recalling a basic fact about constituents and their expansion to higher quantifier rank.

Proposition 3.1 ((Hintikka, 1965)).

Every rank $r$ constituent can be written as a disjunction of rank $r+s$ constituents, i.e., there is an expansion relation $\succ^{s}_{r}:\Delta^{(r+s)}\times\Delta^{(r)}\rightarrow\mathbf{2}$ such that any $\delta^{(r)}$ can be expressed as

$$\delta^{(r)}\equiv\bigvee_{\delta^{(r+s)}\succ^{s}_{r}\delta^{(r)}}\delta^{(r+s)}\,.$$

Observe that a constituent can be in the expansion of two distinct constituents, in which case it is necessarily unsatisfiable. In symbols, if $\delta^{(r+s)}\succ^{s}_{r}\delta^{(r)}_{1}$ and $\delta^{(r+s)}\succ^{s}_{r}\delta^{(r)}_{2}$ with $\delta^{(r)}_{1}\neq\delta^{(r)}_{2}$, then $\nvDash\delta^{(r+s)}$.

Additionally, observe that the notion of expansion is consistent with logical implication. More concretely, if $\delta^{(r+s)}\succ^{s}_{r}\delta^{(r)}$ then $\vDash\delta^{(r+s)}\rightarrow\delta^{(r)}$. When $\delta^{(r+s)}$ is satisfiable, we can interpret it as extending the description of the logical possibility denoted by $\delta^{(r)}$ to account for $s$ additional individuals. In other words, $\delta^{(r+s)}$ is a higher resolution description of $\delta^{(r)}$. When $\delta^{(r+s)}$ is not satisfiable, we can interpret it as an inconsistent description obtained by extending some description (either consistent or inconsistent).

These two observations highlight a certain asymmetry between satisfiable and unsatisfiable descriptions. Whereas a satisfiable constituent is necessarily in the expansion of another unique satisfiable constituent at a given rank due to mutual exclusivity, an unsatisfiable constituent can be in the expansion of any constituent. We thus have to make a choice about how different logical impossibilities are related to one another in order to prevent two sequences that eventually describe logical impossibilities from merging into each other.

Logical impossibilities

We use a syntactic criterion to aid with uniquely associating constituents with their expansions. (The criterion we use is one of the conditions in the definition of trivial inconsistency (Hintikka, 1965), a syntactic criterion for determining whether or not a constituent is satisfiable. It satisfies a completeness property (Hintikka, 1965): a constituent that is not satisfiable has a quantifier rank at which all of its expansions are trivially inconsistent.) The idea is to define a suitable notion for a constituent $\delta^{(r)}$ to be a prefix of another constituent $\delta^{(r+1)}$.

The notation

$$\dagger\delta^{(r)}\triangleq\{\langle(\exists^{E}z)\gamma^{(r-1)}[z]\rangle^{\gamma^{(r-1)}[z]}_{\gamma^{(r)}}\mid\gamma^{(r-1)}[z]\in\Gamma^{(r-1)}[z]\}$$

gives the set representation of a constituent, where the notation $\langle\cdot\rangle^{\gamma^{(r-1)}[z]}_{\gamma^{(r)}}$ indicates that whether or not the formula inside is negated depends on $\gamma^{(r-1)}[z]$ and $\gamma^{(r)}$. When the quantifier rank is 0, we have $\dagger\delta^{(0)}\triangleq\{\top\}$. The notation $\tilde{\delta}^{(r)}$ indicates that we remove the deepest layer of quantification from $\delta^{(r)}$, i.e., all formulas involving the rank $r$ quantifiers. (We can see the effect that removing a layer of quantification has on a constituent by rewriting it to only use rank 0 attributive constituents as

$$\delta^{(r+1)}\equiv\bigwedge_{\gamma^{(r)}[z_{1}]\in\Gamma^{(r)}[z_{1}]}\Big\langle(\exists^{E}z_{1})\,\gamma^{(0)}[z_{1}]\land\ldots\bigwedge_{\gamma^{(0)}[\bar{z}_{r+1}]\in\Gamma^{(0)}[\bar{z}_{r+1}]}\Big\langle(\exists^{E}z_{r+1})\,\gamma^{(0)}[\bar{z}_{r+1}]\Big\rangle^{\gamma^{(1)}[\bar{z}_{r}]}_{\gamma^{(r+1)}}\ldots\Big\rangle^{\gamma^{(r)}[z_{1}]}_{\gamma^{(r+1)}}\,.)$$

We say that $\delta^{(r)}$ is a prefix of $\delta^{(r+1)}$, written $\delta^{(r)}\sqsubseteq\delta^{(r+1)}$, if $\dagger\delta^{(r)}=\dagger\tilde{\delta}^{(r+1)}$.

Proposition 3.2.

  (1) Every $\delta^{(r)}$ is a prefix of some $\delta^{(r+1)}$.

  (2) If $\delta^{(r)}\sqsubseteq\delta^{(r+1)}$ and $\mathcal{M}\vDash\delta^{(r+1)}$ for some $\mathcal{M}$, then $\mathcal{M}\vDash\delta^{(r)}$.

  (3) If $\delta^{(r)}\sqsubseteq\delta^{(r+1)}$ and $\nvDash\delta^{(r)}$, then $\nvDash\delta^{(r+1)}$.

Proof.

See appendix. ∎

The first item ensures that every constituent is the prefix of some expansion constituent so that the resolution of its description can be increased. The second and third items ensure that prefixes are compatible with logical implication. More concretely, the second ensures that satisfiable expansions have satisfiable prefixes, while the third ensures that an unsatisfiable constituent is the prefix of some unsatisfiable expansion.

function edges($r$, $\xi$)
    for $\delta^{(r+1)}\in\Delta^{(r+1)}$ do
        if $\delta^{(r)}\sqsubseteq\delta^{(r+1)}$ for some $\delta^{(r)}$ then
            add the edge $(\delta^{(r)},\delta^{(r+1)})$ to $\xi$
        else
            let $A\triangleq\{\delta^{(r)}\mid\delta^{(r+1)}\succ^{1}_{r}\delta^{(r)}\}$
            if $|A|=0$ then
                choose an arbitrary $\delta^{(r)}\in\Delta^{(r)}$ and add the edge $(\delta^{(r)},\delta^{(r+1)})$ to $\xi$
            else
                choose an arbitrary $\delta^{(r)}\in A$ and add the edge $(\delta^{(r)},\delta^{(r+1)})$ to $\xi$
    return $\xi$

Algorithm 1. Algorithm for adding edges to a refinement tree.
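As a companion to Algorithm 1, the following Python sketch adds the edges for one rank, assuming oracle functions is_prefix and expands as stand-ins for the prefix and expansion relations (which are not computable in general):

def edges(rank_r, rank_r1, is_prefix, expands, xi):
    # rank_r / rank_r1: lists of constituents of ranks r and r+1.
    for d1 in rank_r1:
        parents = [d for d in rank_r if is_prefix(d, d1)]
        if parents:
            xi.add((parents[0], d1))           # case 1: a prefix parent exists
        else:
            A = [d for d in rank_r if expands(d1, d)]
            chosen = A[0] if A else rank_r[0]  # arbitrary choices, as in the paper
            xi.add((chosen, d1))
    return xi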

Refinement tree

Define a sequence of graphs as follows. Let $T_{0}\triangleq(\Delta^{(0)},\{\})$, so that the vertices are the constituents of rank 0 and there are no edges. Let $T_{r+1}\triangleq T_{r}\cup(\Delta^{(r+1)},\xi^{(r+1)})$ with edges $\xi^{(r+1)}\triangleq\operatorname{edges}(r,\xi^{(r)})$, where $\operatorname{edges}$ is defined in Algorithm 1.

Definition 3.3.

The graph $T\triangleq\bigcup_{r\in\mathbb{N}}T_{r}$ is a refinement tree.

We say that $\delta^{(r+s)}$ refines $\delta^{(r)}$ if there is a path from $\delta^{(r)}$ to $\delta^{(r+s)}$ in $T$. We check that $T$ is a tree with the desired properties.

Proposition 3.4.

For any $r$, $T_{r}$ is a tree such that $\vDash\delta^{(s+1)}\rightarrow\delta^{(s)}$ for any $(\delta^{(s)},\delta^{(s+1)})\in\xi^{(r)}$.

Proof.

See appendix. ∎

Proposition 3.5.

$T$ is a tree such that $\vDash\delta^{(r+1)}\rightarrow\delta^{(r)}$ for any edge $(\delta^{(r)},\delta^{(r+1)})$ of $T$.

Proof.

See appendix. ∎

The tree structure lends itself to constructing a topology that expresses what an initial segment of refining constituents could converge to.

Topology

Recall that we can define a topology on a tree by defining a basis of open sets identified by finite initial segments of the tree. (For background on topology, we refer the reader to (Munkres, 2000).) Let $\Psi$ be the set of paths of a refinement tree. We write $\bar{\delta}^{(r)}$ to indicate the path $(\delta^{(0)},\dots,\delta^{(r)})$ in a refinement tree. Then a basic open set has the form

$$B_{\delta^{(r)}}\triangleq\{\rho\in\Psi\mid\bar{\delta}^{(r)}\sqsubseteq\rho\}$$

where $\sqsubseteq$ here indicates the prefix relation on refinement tree paths. We can think of a basic open set $B_{\delta^{(r)}}$ as the description given by $\delta^{(r)}$ along with all possible ways in which that description can be extended.

Definition 3.6.

The constituent refinement space is the topological space $(\Psi,\mathcal{O}(\Psi))$, where the topology $\mathcal{O}(\Psi)$ is generated by the basis of clopen sets

$$\{B_{\delta^{(r)}}\mid\delta^{(r)}\in\Delta^{(r)}\}\,.$$

3.2. Hilbert Space

At the end of the previous section, we obtained a topological space on appropriate sequences of constituents. In this section, we will use the notion of convergence given by the topological space to define a Hilbert space that encodes sentences of arbitrary quantifier rank. Towards this end, we reintroduce metric notions to the topological space so that we can recover a geometry.

The inner product $\langle\cdot,\cdot\rangle$ is the fundamental metric notion in a Hilbert space. It intuitively gives a measure of similarity between two elements of the space. In our case, we have a space of paths $\Psi$ through a refinement tree. Recall that a path in a refinement tree gives a logical description in finer and finer detail. Thus, if we were to measure the similarity between two such paths, we would intuitively weigh differences in coarser descriptions (the big picture) more than differences in finer descriptions (the minutiae). This leads us to construct a Hilbert space using $L^{2}(X,\mu)$, i.e., the space of square-integrable functions on $X$ weighted by a measure $\mu$.

Definition 3.7.

Let $\mathbb{H}\triangleq L^{2}(\Psi,\beta)$ be the Hilbert space for sentences of arbitrary quantifier rank, where $\beta$ is any measure on $\Psi$'s Borel $\sigma$-algebra. Unless stated otherwise, we assume that $\beta$ is the uniform measure, i.e., $\beta(B_{\delta^{(r)}})=\frac{1}{|\Delta^{(r)}|}$. (Recall that we can uniquely extend a valuation on a clopen basis to a measure on the $\sigma$-algebra. For background on measure-theoretic probability, we refer the reader to (Kallenberg, 2006).)

Recall that $L^{2}(X,\mu)$ is a Hilbert space, so $\mathbb{H}$ is a Hilbert space.

The Hilbert space spans all first-order sentences. Let $\chi_{A}(\cdot)$ be the characteristic function of the set $A$.

Proposition 3.8.

Every first-order sentence $\phi^{(r)}$ can be written as

$$\bm{\phi}^{(r)}=\sum_{i=1}^{n}a_{i}\,\chi_{B_{\delta^{(r)}_{i}}}(\cdot)$$

where $\operatorname{dnf}(\phi^{(r)})=\{\delta^{(r)}_{1},\dots,\delta^{(r)}_{n}\}$ and $a_{1},\dots,a_{n}$ are non-zero. Unless stated otherwise, we assume $a_{1}=\dots=a_{n}=|\operatorname{dnf}(\phi^{(r)})|$.

Proof.

Every first-order sentence of rank $r$ can be written as a disjunction of rank $r$ constituents (Proposition 2.1).

Thus we can think of a first-order sentence $\phi^{(r)}$ as the signal $\bm{\phi}^{(r)}$ in $\mathbb{H}$.

Orthogonality

The inner product $\langle\cdot,\cdot\rangle$ is given by

$$\langle f,g\rangle=\int f\cdot g\,d\beta$$

for $f,g\in\mathbb{H}$ (i.e., $f$ and $g$ are square integrable).
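For simple functions supported on basic opens of a fixed rank, the integral reduces to a weighted sum. A minimal Python sketch, assuming a hypothetical four-constituent rank and the uniform measure:

DELTAS = ["d1", "d2", "d3", "d4"]
BETA = {d: 1.0 / len(DELTAS) for d in DELTAS}  # beta(B_delta) = 1/|Delta^(r)|

def inner(f, g):
    # <f, g> = integral of f*g with respect to beta; for simple functions
    # this is a weighted sum over basic opens.
    return sum(f.get(d, 0.0) * g.get(d, 0.0) * BETA[d] for d in DELTAS)

f = {"d1": 2.0, "d2": 2.0}  # 2*indicator(B_d1) + 2*indicator(B_d2)
g = {"d2": 1.0}             # indicator(B_d2)
print(inner(f, g))          # 2 * 1 * (1/4) = 0.5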

Although there are an uncountable number of infinitely precise descriptions, there is a sense in which we really only have a denumerable handle on them.

Proposition 3.9.

$\mathbb{H}$ has a countable basis.

Proof.

Let $\mathcal{B}_{r}\triangleq\{\chi_{B_{\delta^{(r)}}}\mid\delta^{(r)}\in\Delta^{(r)}\}$ and $\mathcal{B}\triangleq\bigcup_{r\in\mathbb{N}}\mathcal{B}_{r}$. Then $\mathcal{B}$ is countable and spans the space.

We can apply Gram-Schmidt to $\mathcal{B}$ to obtain an orthonormal basis.

This concludes the construction of a space for first-order logic. For simplicity, we will largely restrict attention to the case of fixed quantifier rank throughout the rest of the paper as it suffices to illustrate the main ideas of what a “signals in space” perspective might offer.

4. Plausibility

How does deduction affect the “plausibility” of a logical system? In this section, we conceptualize a first-order space as carrying the “plausibility” of first-order sentences (Section 4.1). Like its physical counterpart, which carries a conserved amount of energy in various forms, a logical system carries a conserved amount of “plausibility” in various places. We will then see that deduction is an entropic operation (Section 4.2). This provides an alternative explanation for the failure of logical omniscience, i.e., why a rational agent fails to know all the consequences of a set of axioms. We restrict attention to sentences of quantifier rank less than $R$ in this section.

4.1. Plausibility as Energy

Energy is a concept that is fundamental to the physical world. In particular, the law of conservation of energy states that energy cannot be created or destroyed—it can only be converted from one form into another by some physical process. We can thus reason about the effects of physical processes by accounting for the system’s energy.

We propose that “plausibility” is the analogue of energy for a logical system. A logical system consists of a single sentence. (A logical system consisting of $n$ sentences $\phi_{1},\dots,\phi_{n}$ is encoded as the one-sentence system $\phi_{1}\land\dots\land\phi_{n}$.) We define the plausibility of a logical system $\bm{\phi}$ to be

$$\mathcal{P}(\bm{\phi})\triangleq\sum_{\delta^{(R)}\in\Delta^{(R)}}p(\delta^{(R)})\,\langle\hat{\delta}^{(R)},\bm{\phi}\rangle$$

where $p$ is some probability mass function on $\Delta^{(R)}$.
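In coordinates, plausibility is simply a $p$-weighted sum of a vector's coefficients. A minimal Python sketch with a hypothetical four-constituent basis and uniform $p$:

DELTAS = ["d1", "d2", "d3", "d4"]
p = {d: 0.25 for d in DELTAS}  # uniform mass function, for illustration

def plausibility(phi):
    # P(phi) = sum over constituents of p(delta) * phi's coefficient on delta.
    return sum(p[d] * phi.get(d, 0.0) for d in DELTAS)

print(plausibility({"d1": 1.0, "d2": 1.0}))  # 0.5 for a sentence with dnf {d1, d2}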

We thus have the following analogy with energy:

plausibility $\sim$ energy

logical possibilities $\sim$ forms of energy.

Each logical possibility corresponds to a different form of energy (e.g., kinetic).

The law of conservation of plausibility can then be stated as follows: “Plausibility cannot be created or destroyed—it can only be shifted from one logical possibility to another by some reasoning process.” As with conservation of energy, conservation of plausibility only holds when we consider the entire system, i.e., $\bm{\top}$. We use the phrase “reasoning process” informally to suggest that the process is not necessarily deductive. (For example, Huang (2019) shows how a local Bayesian update process can be used to update the assignment of plausibility to logical possibilities.) Nevertheless, deduction is an essential reasoning process, and so we turn our attention towards this kind of reasoning next.

4.2. Deduction

From the perspective of language, deduction or modus ponens (mp) is formulated as the inference rule $\phi_{1}\rightarrow\phi_{2},\,\phi_{1}\implies\phi_{2}$, where $\implies$ transforms the left-hand side into the right-hand side. From the perspective of space, mp will be formulated as an operator.

Modus ponens as an operator

We use the observation that $\phi_{1}\rightarrow\phi_{2}$ is syntactic sugar for $\lnot\phi_{1}\lor\phi_{2}$ to express mp as an operator.

Definition 4.1.

Define the operator $\textbf{MP}_{\phi_{1}\rightarrow\phi_{2}}:\mathbb{V}^{(R)}\rightarrow\mathbb{V}^{(R)}$ as $\textbf{MP}_{\phi_{1}\rightarrow\phi_{2}}(\bm{\phi})\triangleq(\bm{\phi}\ominus\bm{\phi_{1}}^{\perp})\oplus\bm{\phi_{2}}$ where:

  ($\lor$) $\bm{\phi_{1}}\oplus\bm{\phi_{2}}\triangleq\sum_{\delta^{(R)}\in\Delta^{(R)}}\max(\langle\hat{\delta}^{(R)},\bm{\phi_{1}}\rangle,\langle\hat{\delta}^{(R)},\bm{\phi_{2}}\rangle)\,\hat{\delta}^{(R)}$

  ($\land$) $\bm{\phi_{1}}\ominus\bm{\phi_{2}}\triangleq\sum_{\delta^{(R)}\in\Delta^{(R)}}\min(\langle\hat{\delta}^{(R)},\bm{\phi_{1}}\rangle,\langle\hat{\delta}^{(R)},\bm{\phi_{2}}\rangle)\,\hat{\delta}^{(R)}$

  ($\lnot$) $\bm{\phi}^{\perp}\triangleq\sum_{\delta^{(R)}\notin\operatorname{dnf}(\phi)}\langle\hat{\delta}^{(R)},\bm{\phi}\rangle\,\hat{\delta}^{(R)}$

The additional vector $\bm{\phi}\ominus\bm{\phi_{1}}^{\perp}$ is $\vec{0}$ when $\bm{\phi}=\bm{\phi_{1}}$. Thus this definition of mp from the space perspective additionally keeps track of how well the antecedent of an implication matches the current state. It is an over-approximation of logical mp.
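The following Python sketch implements the three operations on coefficient vectors, represented as dictionaries over a hypothetical four-constituent basis. We follow the literal definitions above: $\oplus$ is pointwise max, $\ominus$ is pointwise min, and $\perp$ keeps a vector's coefficients outside a given distributive normal form (one literal reading of the definition):

DELTAS = ["d1", "d2", "d3", "d4"]

def join(u, v):    # (+): pointwise max, the vector analogue of disjunction
    return {d: max(u.get(d, 0.0), v.get(d, 0.0)) for d in DELTAS}

def meet(u, v):    # (-): pointwise min, the vector analogue of conjunction
    return {d: min(u.get(d, 0.0), v.get(d, 0.0)) for d in DELTAS}

def perp(v, dnf):  # complement: keep coefficients outside the dnf
    return {d: v.get(d, 0.0) for d in DELTAS if d not in dnf}

def mp(state, phi1, dnf1, phi2):
    # MP_{phi1 -> phi2}(state) = (state meet phi1^perp) join phi2
    return join(meet(state, perp(phi1, dnf1)), phi2)

phi1 = {"d1": 1.0, "d2": 1.0}
phi2 = {"d3": 1.0}
print(mp(phi1, phi1, {"d1", "d2"}, phi2))  # antecedent fully consumed: phi2 remains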

Analogous to how we can model the evolution of a physical system by considering the sequencing of operators that correspond to physical processes, we can model the evolution of a logical system by considering the effects of sequencing mp operators that correspond to deductive processes.

Observe that $\textbf{MP}_{\phi_{1}\rightarrow\phi_{2}}$ is a continuous operator. We can interpret this as another instantiation of the locality of computation: any sentence that can be deduced is reached by a sequence of small, local steps. As it turns out, $\textbf{MP}_{\phi_{1}\rightarrow\phi_{2}}$ is “entropic” in a sense that we describe next.

Modus ponens is entropic

We begin with the observation that mp increases plausibility of a logical system.

Proposition 4.2.

$$\mathcal{P}(\bm{\phi_{1}\land\dots\land\phi_{n}})\leq\mathcal{P}(\textbf{MP}_{\phi_{n-1}\rightarrow\phi_{n}}\circ\dots\circ\textbf{MP}_{\phi_{1}\rightarrow\phi_{2}}(\bm{\phi_{1}}))\,.$$

Proof.

We get that $\textbf{MP}_{\phi_{n-1}\rightarrow\phi_{n}}\circ\dots\circ\textbf{MP}_{\phi_{1}\rightarrow\phi_{2}}(\bm{\phi_{1}})=\bm{\phi_{n}}$ by induction on $n$. It then follows that $\mathcal{P}(\bm{\phi_{1}\land\dots\land\phi_{n}})\leq\mathcal{P}(\bm{\phi_{n}})$ because $\operatorname{dnf}(\phi_{1}\land\dots\land\phi_{n})\subseteq\operatorname{dnf}(\phi_{n})$.

Consequently, knowledge of a proof contains less plausibility than knowledge of a theorem statement.

Intuitively, the plausibility of a logical system increases because the number of possibilities we consider possible after an application of mp increases. Note that this fact does not contradict the law of conservation of plausibility. In particular, an application of mp changes the logical system under consideration which may no longer be the total system \bm{\top}.

We can rephrase the increase in plausibility as an increase in unnormalized entropy. Recall that the unnormalized entropy of a vector $(a_{1},\dots,a_{n})$ is defined as $U(a_{1},\dots,a_{n})\triangleq\sum_{i=1}^{n}a_{i}(1-\log(a_{i}))$. Let $\bm{\phi}^{\ddagger}$ be the vector where $\langle\hat{\delta}^{(R)},\bm{\phi}^{\ddagger}\rangle=1$ whenever $\delta^{(R)}\in\operatorname{dnf}(\phi)$ and $\langle\hat{\delta}^{(R)},\bm{\phi}^{\ddagger}\rangle=\alpha$ otherwise, where $\alpha>0$.

Proposition 4.3.
$$U((\bm{\phi_{1}\land\phi_{2}})^{\ddagger})\leq U((\textbf{MP}_{\phi_{1}\rightarrow\phi_{2}}(\bm{\phi_{1}}))^{\ddagger})\,.$$
Proof.

Observe that $\operatorname{dnf}(\phi_{1})\subseteq\operatorname{dnf}((\phi\land\lnot\phi_{1})\lor\phi_{2})$. The result thus follows by routine calculation.

Because deduction increases (unnormalized) entropy, a deductive process may spread out information over too many possibilities as to render it impossible to find the proof of a conjecture.
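A small numeric check of this increase, with a hypothetical four-constituent basis and a made-up pair of distributive normal forms:

import math

def U(a):
    # Unnormalized entropy U(a) = sum_i a_i * (1 - log a_i), a_i > 0.
    return sum(x * (1.0 - math.log(x)) for x in a)

def sharp(dnf, deltas, alpha=0.01):
    # The double-dagger vector: coefficient 1 on the dnf, small alpha > 0 elsewhere.
    return [1.0 if d in dnf else alpha for d in deltas]

DELTAS = ["d1", "d2", "d3", "d4"]
print(U(sharp({"d1"}, DELTAS)))        # entropy before applying mp
print(U(sharp({"d1", "d3"}, DELTAS)))  # strictly larger once mp enlarges the dnf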

5. Theories

Where is a first-order theory located? As we might expect, a first-order theory $T$ forms a subspace. We can visualize the subset of sentences $T^{(r)}$ that have quantifier rank $r$ as a hyperplane sitting in $\mathbb{V}^{(r)}$ (Figure 2).

Proposition 5.1.

Let $T$ be some theory. Then $T^{(r)}$ forms a subspace of $\mathbb{V}^{(r)}$ for every $r$.

Proof.

Observe that the subset of satisfiable constituents that satisfy the theory $T$ forms a basis for the subspace.

Figure 2. A theory $T$ (shaded in red) that is spanned by $\{\hat{\delta}^{(r)}_{i},\hat{\delta}^{(r)}_{j}\}$, drawn in the space with axes $\hat{\delta}^{(r)}_{i}$, $\hat{\delta}^{(r)}_{j}$, and $\hat{\delta}^{(r)}_{k}$. The sentence denoting the blue vector is not part of the theory when $\hat{\delta}^{(r)}_{j}$ is not satisfiable.
Definition 5.2.

The dimension of a theory is a function $D:\mathbb{N}\rightarrow\mathbb{N}$ such that $D(r)$ gives the number of satisfiable constituents at rank $r$.

The dimension of a theory can be interpreted as a measure of its complexity. We introduce a complete theory and an undecidable theory next to give examples of the lower and upper limits of a theory's complexity.

Example 5.3 (Unbounded dense linear order).

The theory of unbounded dense linear orders (UDLO) expresses the ordering of the real line using the binary relation $<$. (For reference, the axioms of UDLO are: $(\forall x)(\forall y)\,x<y\rightarrow\lnot(x=y\lor y<x)$ (antisymmetry), $(\forall x)(\forall y)(\forall z)\,x<y\land y<z\rightarrow x<z$ (transitivity), $(\forall x)(\forall y)\,x<y\lor y<x\lor x=y$ (trichotomy), $(\forall x)(\forall y)\,x<y\rightarrow(\exists z)\,x<z\land z<y$ (density), and $(\forall x)(\exists y)(\exists z)\,y<x\land x<z$ (unboundedness).)

UDLO is a complete theory so it is intuitively “simple”.

Proposition 5.4.

A complete theory has dimension $D(r)=1$, the constant one function.

Proof.

Let $\delta^{(r)}$ be some satisfiable constituent (which exists because the theory is consistent). Then $\lnot\delta^{(r)}\equiv\bigvee_{\delta\in\Delta^{(r)}\backslash\{\delta^{(r)}\}}\delta$ is not satisfiable by completeness of the theory. Thus every other constituent at rank $r$ is not satisfiable.

To give a flavor of what a constituent looks like, the only satisfiable rank 1 and rank 2 constituents for UDLO are

  (1) $((\exists^{E}x_{1})\,x_{1}\nless x_{1})\land(\forall^{E}x_{1})\,x_{1}\nless x_{1}$; and

  (2) $((\exists^{E}x_{1})\,x_{1}\nless x_{1}\land E[x_{1}]\land U[x_{1}])\land(\forall^{E}x_{1})\,x_{1}\nless x_{1}\land E[x_{1}]\land U[x_{1}]$, where $E[x_{1}]\triangleq(\exists^{E}x_{2})\,x_{2}<x_{1}\land(\exists^{E}x_{2})\,x_{1}<x_{2}\land(\exists^{E}x_{2})\,x_{2}\nless x_{2}$ and $U[x_{1}]\triangleq(\forall^{E}x_{2})\,x_{2}<x_{1}\lor x_{1}<x_{2}\lor x_{2}\nless x_{2}$,

where $x\nless y$ is syntactic sugar for $\lnot(x<y)$. In words, the only satisfiable rank 2 constituent describes that either $x_{1}<x_{2}$ or $x_{2}<x_{1}$, i.e., the only two possible linear orderings involving two elements.

Example 5.5 (Groups).

The theory of groups expresses groups from abstract algebra and involves one binary operator $\cdot$ expressing group multiplication. (For reference, the axioms of groups are: $(\exists e)(\forall x)\,(x\cdot e=x\land e\cdot x=x)\land((\exists y)\,x\cdot y=e\land y\cdot x=e)$ (identity and inverses) and $(\forall x)(\forall y)(\forall z)\,(x\cdot y)\cdot z=x\cdot(y\cdot z)$ (associativity). Recall that we do not consider function symbols in this paper; thus the axioms should replace the binary operator $\cdot$ with a ternary predicate $M$ and additional formulas asserting that $M$ is a function.)

The theory of groups is not decidable, so it is intuitively “complex”.

Proposition 5.6.

An undecidable theory has uncomputable dimension.

Proof.

A decision procedure for validity of constituents would decide validity of first-order sentences. ∎

Although the dimension is not computable, we do have a (trivial) upper-bound for the dimension of any theory given by the number of constituents.

Proposition 5.7.

The dimension $U(r)=|\Delta^{(r)}|$ is an upper bound on the dimension $D$ of any theory.

Proof.

Straightforward. ∎

Thus there is a super-exponential upper bound on the dimension of a theory as a function of quantifier rank $r$.

Compared to UDLO, which has only one satisfiable constituent at any depth, there are many satisfiable constituents at any depth for groups. Intuitively, these constituents correspond to classifying the kinds of groups that are expressible in a first-order language. The coarsest kind of description simply satisfies the axioms, a finer kind differentiates abelian groups from non-abelian groups, and so on.

6. Proofs

What does orthogonal decomposition reveal about proofs? We revisit the notion of proof from the space perspective in this section and explore what one based on orthogonal decomposition would look like (Section 6.1). We will see that each component can be lower bounded by a counter-example (i.e., theorem proving) or upper bounded by an example (i.e., model checking) (Section 6.2). This provides an explanation of the utility of examples to the process of proving. Finally, we highlight an application to assigning probabilities to first-order sentences that reflects logical uncertainty and provide a discussion of conjecturing (Section 6.3).

6.1. Component Proofs

The idea of decomposing a signal into its orthogonal components is one that has numerous applications in engineering and the sciences. It is a powerful method because it enables one to analyze each component separately and combine the results to obtain the behavior of the original signal. For example, we can understand the motion of an object in a 2D plane by separately analyzing its motion along the x-axis and y-axis and superimpose the results.

Decomposition is similar to the use of compositionality in language, where we give the meaning of a sentence by composing the meanings of its phrases. The difference between the two is that the former implicitly assumes the pieces to be composed are semantically independent whereas the latter does not. Thus we will be looking at proofs under an additional constraint in this section.

In the case of proving, the assumption of semantic independence means that each piece of the proof corresponds to a distinct logical possibility, and consequently, can be tested for satisfiability independently of one another. We formalize this idea now.

Definition 6.1.

A (rank $r$) component proof system is a tuple $(\mathcal{B},\tau,\chi)$ where

  • $\mathcal{B}$ is a choice of orthogonal basis for $\mathbb{V}^{(r)}$;

  • $\tau:\mathcal{B}\rightarrow\mathbf{2}$ is a valuation such that $\tau(\psi)=0$ means that $\psi$ is unsatisfiable and $\tau(\psi)=1$ means that $\psi$ is satisfiable; and

  • $\chi:\mathcal{L}\times\mathcal{B}\rightarrow\mathbf{2}$ is a component classifier that indicates whether a sentence contains the corresponding basis element as a component or not. In symbols, $\chi(\phi^{(r)},\psi)=1$ if $\vDash\psi\rightarrow\phi^{(r)}$ and $\chi(\phi^{(r)},\psi)=0$ otherwise. The component function is given by

    $\chi(\phi^{(r)},\delta)=\delta\in\operatorname{dnf}(\phi^{(r)})$

    when $\mathcal{B}$ is the standard basis.

Definition 6.2.

A component proof of $\phi^{(r)}$ is a sentence

$$\bigwedge_{\psi\in\mathcal{B}}\tau(\psi)\land\chi(\phi^{(r)},\psi)$$

where $(\mathcal{B},\tau,\chi)$ is a component proof system. Each $\tau(\psi)\land\chi(\phi^{(r)},\psi)$ is a component.

A priori, neither the valuation $\tau$ nor the component classifier $\chi$ is required to reflect logical validity. We say that a valuation $\tau$ is sound if $\tau(\psi)\neq 1$ whenever $\psi$ is not satisfiable, and complete if $\tau(\psi)=1$ whenever $\psi$ is satisfiable. Similarly, we say that a component classifier is sound if $\chi(\phi^{(r)},\psi)\neq 1$ whenever $\nvDash\psi\rightarrow\phi^{(r)}$, and complete if $\chi(\phi^{(r)},\psi)=1$ whenever $\vDash\psi\rightarrow\phi^{(r)}$.

Because neither the valuation nor component classifier are required to reflect logical validity, a proof constructed in a component proof system can exhibit error. In particular, a component proof can be incorrect on some components and correct on others, leading to a “partially correct” proof.

Definition 6.3.

  • The error of the valuation $\hat{\tau}$ with respect to the ground truth $\tau$ is

    $$E_{V}\triangleq\sum_{\psi\in\mathcal{B}}|\hat{\tau}(\psi)-\tau(\psi)|\,.$$

  • The error of the component classifier $\hat{\chi}$ with respect to the ground truth $\chi$ on the sentence $\phi^{(r)}$ is

    $$E_{\phi^{(r)}}\triangleq\sum_{\psi\in\mathcal{B}}|\hat{\chi}(\phi^{(r)},\psi)-\chi(\phi^{(r)},\psi)|\,.$$

The total error of a component proof for $\phi^{(r)}$ is $E_{V}+E_{\phi^{(r)}}$.

We say that a component proof is partially correct if it has non-zero total error.
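The error bookkeeping is direct to compute. A minimal Python sketch, where the basis elements, valuations, and classifier entries are toy stand-ins:

def total_error(basis, tau_hat, tau, chi_hat, chi, phi):
    # E_V + E_phi: disagreement of the estimated valuation and component
    # classifier with the ground truth, summed over the basis.
    e_v = sum(abs(tau_hat[psi] - tau[psi]) for psi in basis)
    e_phi = sum(abs(chi_hat[(phi, psi)] - chi[(phi, psi)]) for psi in basis)
    return e_v + e_phi

basis = ["psi1", "psi2"]
tau = {"psi1": 1, "psi2": 0}
tau_hat = {"psi1": 1, "psi2": 1}  # wrongly deems psi2 satisfiable
chi = {("phi", "psi1"): 1, ("phi", "psi2"): 0}
print(total_error(basis, tau_hat, tau, dict(chi), chi, "phi"))  # 1: partially correct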

The error is related to the soundness and completeness of the valuation and component classifiers in the obvious way.

Proposition 6.4.

The total error $E_{V}+E_{\phi^{(r)}}=0$ for every sentence $\phi^{(r)}$ iff both the valuation and the component classifier are sound and complete.

Proof.

Straightforward. ∎

The notion of partial correctness is an essential difference between a standard proof and a component proof. As a reminder, a standard proof is a sequence of sentences where every sentence follows from the previous ones by deduction. One invalid proof step thus invalidates the entire proof—a standard proof is brittle. In contrast, a component proof is somewhat robust because each component is semantically independent of the others. The robustness to error leads us to consider how approximation can be used to construct a component proof.

6.2. Approximation

Figure 3. An illustration of approximating a component in a component proof. The idea is to determine each basis element's ground truth value by either upper or lower bounding its value. In this example, we know the ground truth value for the basis elements $\psi_{d}$ and $\psi_{e}$ because the upper and lower bounds coincide. For the basis elements $\psi_{a}$, $\psi_{b}$, $\psi_{c}$, and $\psi_{f}$, we do not yet know their ground truth value.

The inspiration for an approximation procedure for constructing a component proof comes from sandwiching a function in analysis: use a simpler family of functions to simultaneously lower bound and upper bound the unwieldy function of interest. In our setting, we can use the family of characteristic functions $\{\chi_{\delta^{(r)}}(\cdot)\mid\delta^{(r)}\in\Delta^{(r)}\}$.

Figure 3 illustrates the idea behind the approximation procedure for the components of a component proof. Let $F_{\phi}(\psi)\triangleq\tau(\psi)\land\chi(\phi,\psi)$.

Lower bound:

We lower bound $F_{\phi}(\psi)$ by attempting to show that $\lnot\psi$ is logically valid. We can use any first-order proof system (e.g., an automated theorem prover) to show that $\lnot\psi$ is derivable. This corresponds to showing that this specific component has a counter-example.

Upper bound:

We upper bound $F_{\phi}(\psi)$ by (1) “choosing” a model $\mathcal{M}$ and then (2) attempting to show that $\mathcal{M}\vDash\psi$ (e.g., using a model checker). This corresponds to showing that this specific component has an example.

The approximation procedure reveals an asymmetry in the efficacy of lower bounding (i.e., proving) versus upper bounding (i.e., model checking). In particular, any disjunction of unsatisfiable basis elements will be unsatisfiable as well. Consequently, by refuting $\bigvee_{\psi\in B}\psi$ for some subset $B$ of unsatisfiable basis elements, we can eliminate $|B|$ basis elements in one go.

In contrast, the model checking approach can only show that one component is satisfiable at a time due to the mutual exclusivity of the basis. Moreover, it requires the user to devise a concrete example. Consequently, from the perspective of a component proof, it should be more effective to put more effort into proving as opposed to model checking.
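The sandwich procedure can be sketched in Python as follows, where refute and check are assumed sound, resource-bounded oracles standing in for a theorem prover and a model checker:

def approximate(basis, refute, models, check):
    # A refutation pins a component's value to 0 from below; an example
    # pins it to 1 from above; otherwise the bounds have not met yet.
    known = {}
    for psi in basis:
        if refute(psi):
            known[psi] = 0                        # counter-example found
        elif any(check(m, psi) for m in models):
            known[psi] = 1                        # example found
        # otherwise psi stays undetermined
    return known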

6.3. Logical Uncertainty

Intuitively, we would expect proving and assigning logical uncertainty—assessing how probable a conjecture is—to be related tasks. For instance, we might believe that a conjecture is more likely to be true than false even when we have no clear path for proving it.

In this section, we show how an approximation procedure for constructing component proofs gives rise to a probability distribution on first-order sentences such that logically equivalent sentences are not necessarily assigned the same probability. This provides another take on the failure of logical omniscience that complements the one given in Section 4 where we saw that deduction was an entropic operation. We begin by introducing a representation of a theorem proving agent’s knowledge.

Representation

Let $\operatorname{refute}:\mathcal{L}\rightarrow\mathbf{2}$ be a function that soundly approximates the deduction relation, i.e., $\operatorname{refute}(\phi)=\top$ implies that $\nvDash\phi$. We can think of $\operatorname{refute}$ as a resource-bounded prover. Similarly, let $\operatorname{check}_{\mathcal{M}}:\mathcal{L}\rightarrow\mathbf{2}$ be a function that soundly approximates a model checker, i.e., $\operatorname{check}_{\mathcal{M}}(\phi)=\top$ implies that $\mathcal{M}\vDash\phi$. As before, we can think of $\operatorname{check}_{\mathcal{M}}$ as a resource-bounded model checker.

Suppose we have sentences $\phi^{F}_{1},\dots,\phi^{F}_{N}$ that are known to be false, knowledge of models $\mathcal{M}_{1},\dots,\mathcal{M}_{M}$, and sentences $\phi_{1},\dots,\phi_{L}$ whose satisfiability is not known. We can interpret $\lnot\phi^{F}_{1},\dots,\lnot\phi^{F}_{N}$ as theorems, $\mathcal{M}_{1},\dots,\mathcal{M}_{M}$ as common examples, and $\phi_{1},\dots,\phi_{L}$ as conjectures that are specific to the agent. (We will see why we include conjectures momentarily.) Let $\phi_{0}\triangleq\bigwedge_{i=1}^{N}\phi^{F}_{i}$. Let $N_{ij}\triangleq 1$ when $\operatorname{refute}(\phi_{i}\rightarrow\phi_{j})=\top$ and

$$N_{ij}\triangleq\frac{|\{\mathcal{M}_{k}\mid 1\leq k\leq M,\;\operatorname{check}_{\mathcal{M}_{k}}(\phi_{i}\rightarrow\phi_{j})=\top\}|}{M+\alpha}$$

otherwise, where $0\leq i\leq L$, $0\leq j\leq L$, and $\alpha>0$.

Definition 6.5.

The matrix

$$K\triangleq\bordermatrix{ & \phi_{0} & \phi_{1} & \phi_{2} & \cdots & \phi_{L}\cr \phi_{0} & 1 & 1 & 1 & \cdots & 1\cr \phi_{1} & N_{10} & 1 & N_{12} & \cdots & N_{1L}\cr \phi_{2} & N_{20} & N_{21} & 1 & \cdots & N_{2L}\cr \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\cr \phi_{L} & N_{L0} & N_{L1} & N_{L2} & \cdots & 1\cr}$$

represents the knowledge of an agent $\mathcal{A}$.

Each row and column of the matrix is associated with the sentences $\phi_{0},\phi_{1},\dots,\phi_{L}$. Each entry of the matrix can be thought of as the probability that the row sentence $\phi_{r}$ implies the column sentence $\phi_{c}$, i.e., $N_{rc}\approx\Pr(\vDash\phi_{r}\rightarrow\phi_{c})$. Thus the first row of the matrix is all $1$s (because false statements imply any statement) and the first column of the matrix gives the probability that the given sentence is false (i.e., $N_{r0}\approx\Pr(\nvDash\phi_{r})$). Note that $N_{ij}\neq N_{ji}$ in general (unless we have an iff), and consequently, we should not think of the matrix $K$ as a covariance matrix. Nevertheless, there is a sense in which we can think of each entry as a measure of logical overlap, i.e., logical correlation.

Conjectures

Consider the subspace of $\mathbb{V}^{(r)}$ spanned by $B\triangleq\{\phi_{0},\phi_{1},\dots,\phi_{L}\}$ where $r$ is the maximum rank of the sentences involved. We call $B$ a conjecturing basis.

In general, $B$ will neither (1) be linearly independent nor (2) contain $\top$. In the setting of first-order logic, the first is not an issue for the purpose of determining validity. The second, however, is problematic because it means that there are certain sentences whose validity an agent will not be able to determine unless it considers alternative conjectures. We can thus view conjecturing, from the perspective of linear algebra, as finding a useful conjecturing basis.

An application of principal component analysis (pca) to $K$ gives an ordering on the agent's conjectures with respect to $K$'s eigenvalues. Because the matrix $K$ is constructed with respect to the satisfiability of an agent's models, we can interpret the spectrum as ranging from the sentences with the lowest eigenvalues, which have no known examples, to the sentences with the highest eigenvalues, which have no known counterexamples. Thus there is an exploration versus exploitation tradeoff in determining which conjectures to invest time in: an agent that wishes to explore should invest in the sentences with the smallest eigenvalues, while an agent that wishes to exploit should invest in the sentences with the largest eigenvalues.
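The following is a minimal sketch of this heuristic. Two choices here are our own assumptions, since the text leaves the construction open: because $K$ is not symmetric in general we symmetrize it before the eigendecomposition, and we score each conjecture by the eigenvalue of the component it loads on most.

```python
import numpy as np

def pca_conjecture_scores(K):
    """Score conjectures by the spectrum of K (a hedged sketch).

    Small scores suggest exploration (no known examples); large
    scores suggest exploitation (no known counterexamples).
    """
    S = (K + K.T) / 2.0                    # symmetrize (our assumption)
    eigvals, eigvecs = np.linalg.eigh(S)   # eigenvalues in ascending order
    loading = np.abs(eigvecs)              # |loading| of sentence i on component c
    scores = eigvals[np.argmax(loading, axis=1)]
    return scores
```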

Beliefs and probabilities

We return to the problem of defining probabilities on sentences, starting with a proxy for them.

Definition 6.6.

An agent with knowledge KK has belief

𝔹(ϕi)1Ni0.\mathbb{B}(\phi_{i})\triangleq 1-N_{i0}\,.

Note that an agent’s beliefs do not represent probabilities because they are not normalized.
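In code, the belief of Definition 6.6 simply reads off the first column of the matrix built earlier (a one-line sketch under the same assumed interfaces):

```python
def belief(K, i):
    """B(phi_i) = 1 - N_{i0} (Definition 6.6): one minus the estimated
    probability that phi_i implies the known-false sentence phi_0."""
    return 1.0 - K[i, 0]
```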

Although an agent’s beliefs are not probabilities, they can be converted into them.

Definition 6.7.

Define a valuation on basic opens by induction on quantifier rank $r$ as $\nu(B_{\delta^{(0)}})\triangleq 1$ and

\[
\nu(B_{\delta^{(r+1)}})\triangleq\begin{cases}
0 & \nu(B_{\delta^{(r)}})=0\\[4pt]
\dfrac{\mathbb{B}(\delta^{(r+1)})}{\sum_{\delta'\in\operatorname{child}(\delta^{(r)})}\mathbb{B}(\delta')}\,\nu(B_{\delta^{(r)}}) & \text{otherwise}
\end{cases}
\]

where $\delta^{(r)}$ is the parent of $\delta^{(r+1)}$ and $\operatorname{child}(\delta^{(r)})$ denotes the set of $\delta^{(r)}$'s children nodes.
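To make the recursion concrete, here is a sketch over an explicit refinement tree. The node interface and the belief oracle are assumptions of ours, not structures fixed by the paper; the guard for a zero belief total is likewise ours.

```python
def valuation(node, belief, value=1.0):
    """Sketch of Definition 6.7 on an explicit refinement tree.

    node.delta is a constituent and node.children its rank-(r+1)
    refinements; belief(delta) is B from Definition 6.6.
    Returns a dict mapping each constituent to nu(B_delta).
    """
    nu = {node.delta: value}
    total = sum(belief(child.delta) for child in node.children)
    for child in node.children:
        if value == 0.0 or total == 0.0:   # first case of Definition 6.7
            child_value = 0.0              # (total == 0 is a guard of ours)
        else:
            child_value = belief(child.delta) / total * value
        nu.update(valuation(child, belief, child_value))
    return nu
```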

Proposition 6.8.

$\nu$ defines a unique probability measure on $\Psi$.

Proof.

We see that $\nu$ defines a valuation on the basic clopens of $\Psi$ such that $\nu(B_{\delta^{(r)}})=\sum_{\delta^{(r+1)}\in\operatorname{child}(\delta^{(r)})}\nu(B_{\delta^{(r+1)}})$ for any $\delta^{(r)}$ by induction on $r$. Thus we can extend this to a measure in the usual way. ∎

Because an agent only has a finite amount of knowledge, it follows that an agent’s probabilities are not logically omniscient.

Proposition 6.9 (Failure of logical omniscience).

There are agents that are not logically omniscient.

Proof.

Consider an agent with matrix $K=(1)$ that knows no models. Since the agent can refute nothing and has checked no models, its belief in every constituent is $1$, so $\nu$ assigns positive probability to every constituent, including unsatisfiable ones. Hence the agent cannot be logically omniscient. ∎

7. Size

How much space does a first-order theory occupy? In this section, we explore two views of a "models as boundary" principle that highlight the difficulty of approximating a theory. In the first, we view distinct models as spanning the theory and quantify the accuracy of approximation as a function of the number of distinct models (Section 7.1). In the second, we view the valuation for some theory's basis as a Boolean hypercube (Section 7.2). A familiar (edge) isoperimetric principle then says that a theory with small "volume" (i.e., variance) can still have large "perimeter" (i.e., many models).

7.1. Approximate Spanning

The first view we have of “models as boundary” is that a theory’s models span it. Intuitively, knowledge of distinct models suffices to describe the theory because each model describes exactly one basis element. It then follows that the number of distinct models we require as a function of quantifier rank is exactly the dimension of the theory.

How many models do we need to approximately span a theory? As it turns out, it depends on how we represent what we know to be true. Let $\bm{\top_{U}}\triangleq\sum_{\delta^{(r)}\in\Delta^{(r)}}\frac{1}{|\Delta^{(r)}|}\hat{\delta}^{(r)}$ be a representation of truth for an uninformed agent. Let $\bm{\top_{O}}\triangleq\sum_{\delta^{(r)}\in\Delta^{(r)}|_{+}}\frac{1}{D(r)}\hat{\delta}^{(r)}$, where $\Delta^{(r)}|_{+}$ denotes the satisfiable constituents of rank $r$, be a representation of truth for an omniscient agent.

Proposition 7.1.
  1. (1)

    There exist kk sentences ϕ1(r),,ϕk(r)\phi^{(r)}_{1},\dots,\phi^{(r)}_{k} such that

    𝑼1kj=1kϕj(r)22k|Δ(r)|.\lVert\bm{\top_{U}}-\frac{1}{k}\sum_{j=1}^{k}\bm{\phi}^{(r)}_{j}\rVert_{2}\leq\sqrt{\frac{2}{k|\Delta^{(r)}|}}\,.
  2. (2)

    There exist kk sentences ϕ1(r),,ϕk(r)\phi^{(r)}_{1},\dots,\phi^{(r)}_{k} such that

    𝑶1kj=1kϕj(r)22kD(r).\lVert\bm{\top_{O}}-\frac{1}{k}\sum_{j=1}^{k}\bm{\phi}^{(r)}_{j}\rVert_{2}\leq\sqrt{\frac{2}{kD(r)}}\,.
Proof.
  1. (1)

Let $\{\delta^{(r)}_{1},\dots,\delta^{(r)}_{N}\}=\Delta^{(r)}$. The set $\{\sum_{i=1}^{N}a_{i}\hat{\delta}^{(r)}_{i}\mid a_{1},\dots,a_{N}\in[0,1/|\Delta^{(r)}|]\}$ forms a convex subset of $\mathbb{V}^{(r)}$ with diameter $\sqrt{2/|\Delta^{(r)}|}$ that contains $\bm{\top_{U}}$. (The diameter of a convex set $S$ is $\sup\{\lVert x-y\rVert_{2}\mid x,y\in S\}$.) The result follows by an application of the approximate Carathéodory theorem.

  2. (2)

Similar to item (1). ∎

At first blush, it appears that a theory can be “accurately” spanned by a constant number of models independent of its dimension. Nevertheless, we should keep in mind that logical validity is brittle no matter how it is represented—being off by any ϵ>0\epsilon>0 is enough to lose it.

Comparing the results for $\bm{\top_{U}}$ and $\bm{\top_{O}}$, we see that it is easier to approximate $\bm{\top_{U}}$ than it is to approximate $\bm{\top_{O}}$ (i.e., a smaller $k$ suffices for the same error) because $D(r)\leq|\Delta^{(r)}|$. Put another way, it is easier to approximate the beliefs of an uninformed agent than those of an omniscient agent.
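To get a feel for the rate in Proposition 7.1, the following sketch averages $k$ sentence vectors against $\bm{\top_{U}}$ in a toy dimension. Taking each sentence's dnf to be a random half of the constituents is an illustrative choice of ours, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 512, 8                      # n = |Delta^{(r)}| (toy value), k sentences
top_u = np.full(n, 1.0 / n)        # the uninformed representation of truth

# Each sentence phi_j is represented by the uniform vector over the
# constituents in its dnf (here, a random half of the constituents).
phis = np.zeros((k, n))
for j in range(k):
    dnf = rng.choice(n, size=n // 2, replace=False)
    phis[j, dnf] = 1.0 / (n // 2)

err = np.linalg.norm(top_u - phis.mean(axis=0))
print(err <= np.sqrt(2.0 / (k * n)))   # error within the Prop. 7.1 bound
```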

Our ability to approximate is different if we know models of a theory.

Proposition 7.2.

Let 1ϕ1(r),,kϕk(r)\mathcal{M}_{1}\vDash\phi^{(r)}_{1},\dots,\mathcal{M}_{k}\vDash\phi^{(r)}_{k} be kk distinct models of a theory that satisfy some ϕ(r)\phi^{(r)}. Then

𝑼ϕ(r)2\displaystyle\lVert\bm{\top_{U}}-\bm{\phi}^{(r)}\rVert_{2} |Δ(r)|22N|Δ(r)|+N2N|Δ(r)|2\displaystyle\leq\sqrt{\frac{|\Delta^{(r)}|^{2}-2N|\Delta^{(r)}|+N^{2}}{N|\Delta^{(r)}|^{2}}}
𝑶ϕ(r)2\displaystyle\lVert\bm{\top_{O}}-\bm{\phi}^{(r)}\rVert_{2} D(r)(D(r)2k)+kNND(r)2\displaystyle\leq\sqrt{\frac{D(r)(D(r)-2k)+kN}{ND(r)^{2}}}

where $\bm{\phi}^{(r)}\triangleq\sum_{\delta^{(r)}\in\operatorname{dnf}(\phi^{(r)})}\frac{1}{N}\hat{\delta}^{(r)}$ and $N=|\operatorname{dnf}(\phi^{(r)})|$.

Proof.

By direct calculation. ∎

Let us unpack the assertion. When an agent has uninformative beliefs, the error depends on how well $\phi^{(r)}$ approximates the $k$ known models. When an agent has omniscient beliefs, the information content of a theory is approximately encoded by its models, i.e., its examples. Intuitively, this makes sense: the more distinct kinds of models one knows, the better the approximation to the ground truth.

Example 7.3.

For a complete theory, we have $D(r)=1$. Thus the above result says that we have maximal error if we know no models and zero error if we know a model. For instance, we can fully describe a complete theory like UDLO with either of the models $(\mathbb{R},<)$ or $(\mathbb{Q},<)$.

Example 7.4.

When $D(r)$ is unknown, as in the case of an undecidable theory, the quality of approximation depends on the theory. If $D(r)\gg k$, then the error term is essentially $1/\sqrt{N}$. Another way to put this is that if one knows only a few examples of a "complex" theory, then $\phi^{(r)}$ should be chosen so that it leaves as many possibilities open as possible in order to obtain a good approximation.

7.2. Hypercube Isoperimetric Principle

The second view of “models as boundary” takes inspiration from the Boolean hypercube and Boolean Fourier analysis (e.g., see (O’Donnell, 2014)). The basic idea is that we view the valuation for some theory’s basis as a Boolean hypercube with vertices labeled according to whether they are satisfiable or not. We make this concrete now.

Hypercube

A labeled Boolean hypercube of dimension $N$ is a tuple $(V,E,\ell)$ where $V\triangleq\mathbf{2}^{N}$ is the set of vertices given by the length-$N$ bit-strings, $E\triangleq\{\{x,y\}\mid d(x,y)=1\}$ is the set of edges containing those pairs of bit-strings with Hamming distance one (i.e., $d(x,y)=|\{n\mid x_{n}\neq y_{n}\}|$), and $\ell:V\rightarrow\{-1,1\}$ is a labeling that assigns to each vertex a Boolean value. We write $x^{\lnot i}$ to indicate that we flip the $i$-th bit in the bit-string $x$. A boundary edge along the $i$-th dimension is then any $\{x,x^{\lnot i}\}$ such that $\ell(x)\neq\ell(x^{\lnot i})$. The influence at coordinate $i$ of $f:\mathbf{2}^{N}\rightarrow\{-1,1\}$ is

Infi[f]xV12N[f(x)f(x¬i)]\textbf{Inf}_{i}[f]\triangleq\sum_{x\in V}\frac{1}{2^{N}}\bm{[}f(x)\neq f(x^{\lnot i})\bm{]}

where the notation $[\cdot]$ is an Iverson bracket; the influence gives the fraction of boundary edges along the $i$-th dimension. The total influence $\textbf{Inf}[f]\triangleq\sum_{i=1}^{N}\textbf{Inf}_{i}[f]$ then gives the total fraction of boundary edges.
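These definitions transcribe directly into code. The sketch below computes coordinate influences by brute force over all $2^{N}$ vertices; the integer encoding of bit-strings is our own convention.

```python
import numpy as np

def influences(f, N):
    """Coordinate influences of f : {0,1}^N -> {-1,1}, per the
    definition above; f takes an integer encoding a bit-string."""
    inf = np.zeros(N)
    for x in range(2 ** N):
        for i in range(N):
            if f(x) != f(x ^ (1 << i)):   # boundary edge along dimension i
                inf[i] += 1.0 / 2 ** N
    return inf                             # total influence: inf.sum()

# Sanity check: a dictator function f(x) = x_0 has Inf_0 = 1, others 0.
dictator = lambda x: 1 if x & 1 else -1
print(influences(dictator, 4))             # [1., 0., 0., 0.]
```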

Our next task is to associate each bit-string with a constituent. Recall that a constituent has the form

δs(r)γ[z]Γ(r1)[z](±)s(γ[z])(Ez)γ[z]\delta^{(r)}_{s}\triangleq\bigwedge_{\gamma[z]\in\Gamma^{(r-1)}[z]}(\pm)^{s(\gamma[z])}(\exists^{E}z)\gamma[z]

where $s:\Gamma^{(r-1)}[z]\rightarrow\mathbf{2}$. Thus a constituent can be identified with the bit-string of length $|\Gamma^{(r-1)}[z]|$ corresponding to $s$, which forms a vertex of a Boolean hypercube of dimension $|\Gamma^{(r-1)}[z]|$. A valuation $\tau:\Delta^{(r)}\rightarrow\{-1,1\}$ where $\tau(\delta^{(r)})=-1$ if $\delta^{(r)}$ is satisfiable and $\tau(\delta^{(r)})=1$ otherwise is then a labeling for the constituent hypercube. (We follow the convention that $-1\mapsto 1$ and $1\mapsto 0$ when converting between $\{-1,1\}$ and $\mathbf{2}$; e.g., see (O'Donnell, 2014).)

A boundary edge along the $i$-th dimension of the constituent hypercube corresponds to an edge where toggling whether an individual satisfying the $i$-th constituent of rank $r-1$ exists flips the valuation. In other words, we have a boundary edge exactly when we cross between the constituents with models and those without models. As we will see next through a familiar edge isoperimetric principle, the number of crossings can be quite large even if the theory has small variance (i.e., small "volume").

Edge isoperimetric principle

We begin by calculating the variance of a theory’s valuation τ\tau.

Proposition 7.5.
Var[τ]=1(|Δ(r)|2D(r)|Δ(r)|)2.\textbf{Var}[\tau]=1-\left(\frac{|\Delta^{(r)}|-2D(r)}{|\Delta^{(r)}|}\right)^{2}\,.
Proof.

We have $\mathbb{E}[\tau]=\frac{|\Delta^{(r)}|-D(r)}{|\Delta^{(r)}|}-\frac{D(r)}{|\Delta^{(r)}|}=\frac{|\Delta^{(r)}|-2D(r)}{|\Delta^{(r)}|}$ (the $D(r)$ satisfiable constituents carry label $-1$) and $\mathbb{E}[\tau^{2}]=1$. The result follows from $\textbf{Var}[\tau]=\mathbb{E}[\tau^{2}]-\mathbb{E}[\tau]^{2}$. ∎

We obtain edge isoperimetric principles by recalling two classic results from Boolean Fourier analysis.

Proposition 7.6.

We have

  • $\textbf{Var}[\tau]\leq\textbf{Inf}[\tau]$ (Poincaré's inequality (O'Donnell, 2014, pg. 36)) and

  • $\Omega\!\left(\frac{\log N}{N}\right)\textbf{Var}[\tau]\leq\max_{i}\textbf{Inf}_{i}[\tau]$ (KKL theorem (O'Donnell, 2014, pg. 260)), where $N$ is the dimension of the hypercube. (As a reminder, $\Omega(\cdot)$ denotes an asymptotic lower bound.)

Example 7.7.

For a complete theory, the variance is $\frac{4}{|\Delta^{(r)}|}(1-\frac{1}{|\Delta^{(r)}|})$, which is essentially zero. Observe that every coordinate is influential: the unique satisfiable constituent contributes a boundary edge along every dimension, so $\textbf{Inf}_{i}[\tau]=2/|\Delta^{(r)}|>0$ for each $i$.
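A quick numerical check of this example, and of Poincaré's inequality, using a toy dimension; labeling a single vertex $-1$ is our stand-in for a complete theory's valuation.

```python
import numpy as np

N = 6                                    # hypercube dimension (toy value)
tau = np.ones(2 ** N)
tau[0] = -1.0                            # the single satisfiable constituent

var = tau.var()                          # Var[tau]
print(np.isclose(var, 4 / 2**N * (1 - 1 / 2**N)))  # Proposition 7.5, D(r) = 1

# Inf_i[tau]: fraction of vertices whose label flips when bit i is flipped.
inf = [(tau != tau[np.arange(2 ** N) ^ (1 << i)]).mean() for i in range(N)]
print(inf)                               # each coordinate: 2 / 2**N
print(var <= sum(inf))                   # Poincare's inequality holds
```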

8. Related Work

The primary building blocks for this paper come from Hintikka’s work on distributive normal forms (Hintikka, 1965, 1970, 1973) and Huang’s work (Huang, 2019) on using them to assign probabilities to first-order sentences. Due to the nature of our exploration, we have also touched upon a variety of topics in bits and pieces. We highlight related work in some of the areas that we touch upon so we can compare and contrast perspectives.

Logic and geometry

The connections between logic and geometry have been recognized in the literature, toposes being a prime example (e.g., see (Lane and Moerdijk, 1992)). Our approach is more specialized and concrete in that we work with first-order logic directly and look at it as a linear space.

High-dimensional phenomena such as phase transitions (e.g., see (Gent and Walsh, 1994)) and isoperimetric principles (e.g., see (O’Donnell, 2014)) have also been considered in the context of propositional logic. We explored some high-dimensional phenomena for first-order logic.

Logical uncertainty

Logical uncertainty and the problem of logical omniscience is a problem of interest in philosophy (e.g., see (Hacking, 1967; Corfield, 2003; Parikh, 2010; Seidenfeld et al., 2012)) and there have been many solutions proposed for addressing it. One approach proposes assigning probabilities to sentences such that logically equivalent statements are not necessarily assigned the same probability (e.g., see (Garrabrant et al., 2016) and (Huang, 2019)). (There are also approaches focused on assigning probabilities to sentences more generally; e.g., see (Gaifman, 1964) and (Scott and Krauss, 1966) for first-order sentences and (Hutter et al., 2013) for higher-order sentences.) Another approach syntactically models an agent's knowledge (e.g., see (Gaifman, 2004) and (Garber, 1983)). A third approach models an agent's reasoning ability as bounded (e.g., see (Icard III, 2014) and (Bjerring and Skipper, 2018)). Our approach has aspects of all three: we model the agent's knowledge directly, show how it leads to probabilities that fail logical omniscience, and explain bounded rationality by the entropic nature of deduction.

Conjecturing

A popular approach for automatic conjecture generation is based on enumerating candidate sentences and pruning the list using heuristics (e.g., the shape of the sentence) and model-checking (e.g., keep sentences that have no known counter-example) (Lenat, 1976; Fajtlowicz, 1988; Haase, 1990; Colton et al., 1999). We suggest using pca with respect to an agent’s knowledge, which is constructed with model-checking, as a heuristic. Huang (2019) proposes conjecturing as hypothesis selection with respect to a probability distribution on first-order sentences.

9. Conclusion

In summary, we explored some consequences of a shift in perspective to “signals in space” for first-order logic. We close with some thoughts on what a signal perspective of first-order logic provides to guide further research.

One view of the signal encoding of sentences seen in this paper is that it provides another reduction of first-order logic to a propositional setting. It would be interesting to see how techniques typically applied in the propositional setting apply to the representation of first-order logic introduced here. In particular, we have already seen examples of how Boolean Fourier analysis applies in this paper.

A second view of the signal encoding is that it provides another information-theoretic take on first-order logic, one that counts semantic possibilities as opposed to syntactic aspects of the sentences themselves. It would be interesting to see what other aspects of first-order logic information-theoretic ideas can reveal, beyond the observation that deduction is entropic.

Lastly, we saw how a signal perspective enables orthogonal decomposition, a technique that is essential in the sciences and statistical inference more generally, to be applied to a logical system. Thus the ideas in this paper may be useful in illuminating the application of machine learning to theorem proving (e.g., see (Kaliszyk et al., 2014; Selsam et al., 2018; Kaliszyk et al., 2017; Irving et al., 2016; Loos et al., 2017; Kaliszyk et al., 2018; Huang et al., 2019; Huang, 2019)). In particular, the "models as boundary" principle highlights the difficulty of approximating a theory, and so can be read as a "negative" result. On a more encouraging note, we have seen how model-checking and theorem proving can be combined in a principled manner to construct a theorem prover.

Acknowledgements.
We would like to thank Dawn Song for support to pursue this direction of research.

References

  • Bjerring and Skipper (2018) Jens Christian Bjerring and Mattias Skipper. 2018. A Dynamic Solution to the Problem of Logical Omniscience. Journal of Philosophical Logic (2018), 1–21.
  • Colton et al. (1999) Simon Colton, Alan Bundy, and Toby Walsh. 1999. Automatic Concept Formation in Pure Mathematics. In International Joint Conference on Artificial Intelligence. 786–793.
  • Corfield (2003) David Corfield. 2003. Towards a Philosophy of Real Mathematics. Cambridge University Press.
  • Fajtlowicz (1988) Siemion Fajtlowicz. 1988. On Conjectures of Graffiti. In Annals of Discrete Mathematics. Vol. 38. Elsevier, 113–118.
  • Gaifman (1964) Haim Gaifman. 1964. Concerning Measures in First Order Calculi. Israel Journal of Mathematics 2, 1 (1964), 1–18.
  • Gaifman (2004) Haim Gaifman. 2004. Reasoning with Limited Resources and Assigning Probabilities to Arithmetical Statements. Synthese 140, 1 (2004), 97–119.
  • Garber (1983) Daniel Garber. 1983. Old Evidence and Logical Omniscience in Bayesian Confirmation Theory. Testing Scientific Theories X (1983), 99–131.
  • Garrabrant et al. (2016) Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, Nate Soares, and Jessica Taylor. 2016. Logical Induction. arXiv preprint arXiv:1609.03543.
  • Gent and Walsh (1994) Ian P Gent and Toby Walsh. 1994. The SAT Phase Transition. In ECAI, Vol. 94. PITMAN, 105–109.
  • Haase (1990) Kenneth W. Haase. 1990. Invention and Exploration in Discovery. Ph.D. Dissertation. Massachusetts Institute of Technology.
  • Hacking (1967) Ian Hacking. 1967. Slightly more realistic personal probability. Philosophy of Science 34, 4 (1967), 311–325.
  • Hintikka (1965) Jaakko Hintikka. 1965. Distributive Normal Forms in First-Order Logic. In Studies in Logic and the Foundations of Mathematics. Vol. 40. Elsevier, 48–91.
  • Hintikka (1970) Jaakko Hintikka. 1970. Surface Information and Depth Information. In Information and Inference. Springer, 263–297.
  • Hintikka (1973) Jaakko Hintikka. 1973. Logic, Language-Games and Information: Kantian Themes in the Philosophy of Logic. Oxford, Clarendon Press.
  • Huang (2019) Daniel Huang. 2019. On Learning to Prove. arXiv preprint arXiv:1904.11099.
  • Huang et al. (2019) Daniel Huang, Prafulla Dhariwal, Dawn Song, and Ilya Sutskever. 2019. GamePad: A Learning Environment for Theorem Proving. In International Conference on Learning Representations. https://openreview.net/forum?id=r1xwKoR9Y7
  • Hutter et al. (2013) Marcus Hutter, John W. Lloyd, Kee Siong Ng, and William T.B. Uther. 2013. Probabilities on Sentences in an Expressive Logic. Journal of Applied Logic 11, 4 (2013), 386–420.
  • Icard III (2014) Thomas F. Icard III. 2014. The Algorithmic Mind: A Study of Inference in Action. Ph.D. Dissertation. Institute for Logic, Language and Computation, Universiteit van Amsterdam.
  • Irving et al. (2016) Geoffrey Irving, Christian Szegedy, Alexander A. Alemi, Niklas Een, Francois Chollet, and Josef Urban. 2016. Deepmath-deep sequence models for premise selection. In Advances in Neural Information Processing Systems. 2235–2243.
  • Kaliszyk et al. (2017) Cezary Kaliszyk, François Chollet, and Christian Szegedy. 2017. Holstep: A Machine Learning Dataset for Higher-order Logic Theorem Proving. In International Conference on Learning Representations.
  • Kaliszyk et al. (2014) Cezary Kaliszyk, Lionel Mamane, and Josef Urban. 2014. Machine learning of Coq proof guidance: First experiments. arXiv preprint arXiv:1410.5467.
  • Kaliszyk et al. (2018) Cezary Kaliszyk, Josef Urban, Henryk Michalewski, and Mirek Olšák. 2018. Reinforcement Learning of Theorem Proving. arXiv preprint arXiv:1805.07563.
  • Kallenberg (2006) Olav Kallenberg. 2006. Foundations of Modern Probability. Springer Science & Business Media.
  • Lane and Moerdijk (1992) Saunders Mac Lane and Ieke Moerdijk. 1992. Sheaves in Geometry and Logic: A First Introduction to Topos Theory. Springer.
  • Lenat (1976) Douglas B. Lenat. 1976. AM: An Artificial Intelligence Approach to Discovery in Mathematics as Heuristic Search. Technical Report. Stanford University, Department of Computer Science.
  • Libkin (2004) Leonid Libkin. 2004. Elements of Finite Model Theory. Springer.
  • Loos et al. (2017) Sarah M. Loos, Geoffrey Irving, Christian Szegedy, and Cezary Kaliszyk. 2017. Deep Network Guided Proof Search. In LPAR (EPiC Series in Computing), Vol. 46. EasyChair, 85–105.
  • Munkres (2000) James R. Munkres. 2000. Topology (2 ed.). Prentice Hall.
  • O’Donnell (2014) Ryan O’Donnell. 2014. Analysis of Boolean Functions. Cambridge University Press.
  • Parikh (2010) Rohit Parikh. 2010. Sentences, Propositions and Logical Omniscience, or What does Deduction tell us? Technical Report. City University of New York.
  • Pólya (1990a) George Pólya. 1990a. Mathematics and Plausible Reasoning: Induction and Analogy in Mathematics. Vol. 1. Princeton University Press.
  • Pólya (1990b) George Pólya. 1990b. Mathematics and Plausible Reasoning: Patterns of plausible inference. Vol. 2. Princeton University Press.
  • Pólya (2004) George Pólya. 2004. How to Solve It: A New Aspect of Mathematical Method. Princeton University Press.
  • Scott and Krauss (1966) Dana Scott and Peter Krauss. 1966. Assigning Probabilities to Logical Formulas. In Studies in Logic and the Foundations of Mathematics. Vol. 43. Elsevier, 219–264.
  • Seidenfeld et al. (2012) Teddy Seidenfeld, Mark J. Schervish, and Joseph B. Kadane. 2012. What Kind of Uncertainty is That? Using Personal Probability for Expressing One’s Thinking About Logical and Mathematical Propositions. The Journal of Philosophy 109, 8/9 (2012), 516–533.
  • Selsam et al. (2018) Daniel Selsam, Matthew Lamm, Benedikt Bünz, Percy Liang, Leonardo de Moura, and David L. Dill. 2018. Learning a SAT Solver from Single-Bit Supervision. arXiv preprint arXiv:1802.03685.

Appendix A

Proposition A.1 (Proposition 3.2).
  1. (1)

    Every δ(r)\delta^{(r)} is a prefix of some δ(r+1)\delta^{(r+1)}.

  2. (2)

    If δ(r)δ(r+1)\delta^{(r)}\sqsubseteq\delta^{(r+1)} and δ(r+1)\mathcal{M}\vDash\delta^{(r+1)} for some \mathcal{M}, then δ(r)\mathcal{M}\vDash\delta^{(r)}.

  3. (3)

    If δ(r)δ(r+1)\delta^{(r)}\sqsubseteq\delta^{(r+1)} and δ(r)\nvDash\delta^{(r)}, then δ(r+1)\nvDash\delta^{(r+1)}.

Proof.
  1. (1)

    When r=0r=0, then δ(0)\delta^{(0)} is the prefix of every δ(1)\delta^{(1)}. Suppose r1r\geq 1. Observe that |{δ~(r+1)δ(r+1)Δ(r+1)}|=3|Γ(r1)[z1]||\{\tilde{\delta}^{(r+1)}\mid\delta^{(r+1)}\in\Delta^{(r+1)}\}|=3^{|\Gamma^{(r-1)}[z_{1}]|} because the third possibility for a deletion is that it contains both γ(r1)[z1]\gamma^{(r-1)}[z_{1}] and its negation. Meanwhile |Δ(r)|=2|Γ(r1)[z1]||\Delta^{(r)}|=2^{|\Gamma^{(r-1)}[z_{1}]|}. The result follows by the pigeonhole principle.

  2. (2)

    We have γ(r)[z1]γ(r+1)γ(r)[z1]\mathcal{M}\vDash\langle\gamma^{(r)}[z_{1}]\rangle^{\gamma^{(r)}[z_{1}]}_{\gamma^{(r+1)}} for every γ(r)[z1]δ(r+1)\gamma^{(r)}[z_{1}]\in\dagger\delta^{(r+1)} because δ(r+1)\mathcal{M}\vDash\delta^{(r+1)}. Moreover, we have γ(r1)[z1]δ(r)γ(r1)[z1]\mathcal{M}\vDash\langle\gamma^{(r-1)}[z_{1}]\rangle^{\gamma^{(r-1)}[z_{1}]}_{\delta^{(r)}} for every γ(r1)[z1]δ(r)\gamma^{(r-1)}[z_{1}]\in\dagger\delta^{(r)} because δ(r)δ(r+1)\delta^{(r)}\sqsubseteq\delta^{(r+1)}. Thus δ(r)\mathcal{M}\vDash\delta^{(r)} as desired.

  3. (3)

Similar to item (2). ∎

Proposition A.2 (Proposition 3.4).

For any rr, TrT_{r} is a tree such that δ(s+1)δ(s)\vDash\delta^{(s+1)}\rightarrow\delta^{(s)} for any (δ(s),δ(s+1))ξ(r)(\delta^{(s)},\delta^{(s+1)})\in\xi^{(r)}.

Proof.

By induction on $r$. The base case is trivial. In the inductive case, we have that $T_{r}$ is a tree such that $\vDash\delta^{(s+1)}\rightarrow\delta^{(s)}$ for any $(\delta^{(s)},\delta^{(s+1)})\in\xi^{(r)}$. In order to show that $T_{r+1}$ is a tree, it suffices to show that every $\delta^{(r)}\in\Delta^{(r)}$ has an outgoing edge and every $\delta^{(r+1)}\in\Delta^{(r+1)}$ has a unique incoming edge (because of the inductive hypothesis). This follows from Proposition 3.2 item (1) and the definition of $\operatorname{edges}$.

It remains to show that $\vDash\delta^{(s+1)}\rightarrow\delta^{(s)}$ for any $(\delta^{(s)},\delta^{(s+1)})\in\xi^{(r+1)}$. By the inductive hypothesis, it suffices to show that $\vDash\delta^{(r+1)}\rightarrow\delta^{(r)}$ for any $(\delta^{(r)},\delta^{(r+1)})\in\xi^{(r+1)}$. We proceed by case analysis on the edge set following the cases in Algorithm 1.

Suppose there is an edge $(\delta^{(r)},\delta^{(r+1)})$ such that $\delta^{(r)}\sqsubseteq\delta^{(r+1)}$. If $\delta^{(r+1)}$ is not satisfiable then there is nothing to show. If $\delta^{(r+1)}$ is satisfiable then $\delta^{(r)}$ is satisfiable as well by Proposition 3.2 item (2). Thus $\vDash\delta^{(r+1)}\rightarrow\delta^{(r)}$.

Suppose there is an edge (δ(r),δ(r+1))(\delta^{(r)},\delta^{(r+1)}) such that δ(r)r1δ(r+1)\delta^{(r)}\nsucc^{1}_{r}\delta^{(r+1)}. Then δ(r+1)\delta^{(r+1)} is necessarily unsatisfiable so there is nothing to show.

Suppose there is an edge (δ(r),δ(r+1))(\delta^{(r)},\delta^{(r+1)}) such that δ(r)r1δ(r+1)\delta^{(r)}\succ^{1}_{r}\delta^{(r+1)}. Then the result follows as rs\succ^{s}_{r} respects implication. ∎

Proposition A.3 (Proposition 3.5).

TT is a tree such that δ(r+1)δ(r)\vDash\delta^{(r+1)}\rightarrow\delta^{(r)} for any (δ(r),δ(r+1))E(\delta^{(r)},\delta^{(r+1)})\in E.

Proof.

Consider the set {Trr}{T}\{T_{r}\mid r\in\mathbb{N}\}\cup\{T\} ordered by subgraph inclusion. We show that every chain 𝒞\mathcal{C} has an upper bound, namely t𝒞t\bigcup_{t\in\mathcal{C}}t. Let δ(r)\delta^{(r)} be some vertex of t𝒞t\bigcup_{t\in\mathcal{C}}t. Then it is in some TrT_{r} so it is identified by a unique path because TrT_{r} is a tree by Proposition 3.4. Moreover, let (δ(r),δ(r+1))(\delta^{(r)},\delta^{(r+1)}) be some edge of t𝒞t\bigcup_{t\in\mathcal{C}}t. Again, it is an edge of some TrT_{r} so that δ(r+1)δ(r)\vDash\delta^{(r+1)}\rightarrow\delta^{(r)} as desired. By Zorn’s lemma, there is a maximal element which in this case is exactly TT. ∎