Department of Mathematics
University of California,
Los Angeles, CA, USA
Department of Mathematics,
University of Athens, Greece
http://www.math.ucla.edu/~ynm
Elementary recursive algorithms
∗ A preliminary version of the results in this article was included in an early draft of [aric2019] (ARIC) as an additional Chapter in Part 1. It was replaced by a brief summary in Section 2H of the final, published version of ARIC, because it is not directly relevant to the main aim of that monograph, which is to develop methods for deriving and justifying robust lower complexity bounds for mathematical problems. There are many references in the sequel to ARIC and to older work by several people, but I have included enough of the basic definitions and facts so that the statements and proofs of the new results in Sections 4 and 5 stand on their own.

They run our lives, if you believe the hype in the news, but there is no precise definition of algorithms which is generally accepted by the mathematicians, logicians and computer scientists who create and study them.¹ My main aims here are (first) to discuss briefly and point to the few publications that try to deal with this foundational question and (primarily) to outline in Sections 4 and 5 simple proofs of two basic mathematical results about the elementary recursive algorithms from specified primitives expressed by recursive (McCarthy) programs.

¹ Using imprecise formulations of the Church-Turing Thesis and vague references to [church1935, church1936] and [turing1936], it is sometimes claimed naively that algorithms are Turing machines. This does not accord with the original formulations of the Church-Turing Thesis, cf. the discussion in Section 1.1 of [ynm2014] (which repeats points well-known and understood by those who have thought about this matter); and it is not a useful assumption for algebraic complexity theory, cf. page 2 of ARIC.
1 What is an algorithm?
With the (standard, self-evident) notation of Sections 1D and LABEL:eqlogic of ARIC, we will focus on algorithms which compute partial functions and (decide partial) relations
(1-1) |
from the finitely many primitives of a (partial, typically infinite) -structure
(1-2)  $\mathbf{A} = (A, \Upsilon) = (A, \{\phi^{\mathbf{A}}\}_{\phi \in \Phi})$
The most substantial part of this restriction is that it leaves out algorithms with side effects and interaction, cf. the footnote on page LABEL:elementary of ARIC and the relevant Section 3B in [ynmflr].
Equally important is the restriction to algorithms from specified primitives, especially as the usual formulations of the Church-Turing Thesis suggest that the primitives of a Turing machine are in some sense “absolutely computable” and need not be explicitly assumed or justified. We have noted in the Introduction to ARIC (and in several other places) why this is not a useful approach; but in trying to understand computability and the meaning of the Church-Turing Thesis, it is natural to ask whether there are absolutely computable primitives and what those might be. See Sections 2 and 8 of [ynm2014] for a discussion of the problem and references to relevant work, especially the eloquent analyses in [gandy1980] and [kripke2000].
There is also the restriction to first-order primitives, partial functions and relations. This is necessary for the development of a conventional theory of complexity, but recursion and computability from higher type primitives have been extensively studied: see [kleene1959], [ynmkechris-moschovakis] and [sacks1990] for the higher-type recursion which extends directly the first-order notion we have adopted in ARIC, and [longley-normann2015] for a near-complete exposition of the many and different approaches to the topic.² See also [ynmflr]—which is about recursion on structures with arbitrary monotone functionals for primitives—and the subsequent [ynmbotik] where the relevant notion of algorithm from higher-type primitives is modeled rigorously.
Once we focus on algorithms which compute partial functions and relations as in (1-1) from the primitives of a -structure, the problem of modeling them by set-theoretic objects comes down basically to choosing between iterative algorithms specified by (classical) computation models as in Section LABEL:cmodels of ARIC and elementary recursive algorithms expressed directly by recursive (McCarthy) programs; at least this is my view, which I have explained and defended as best I can in Section 3 of [ynmsicily].
The first of these choices—that algorithms are iterative processes—is the standard view, explicitly or implicitly adopted (sometimes with additional restrictions) by most mathematicians and computer scientists, including Knuth in Section 1.1 of his classic [knuth1973]. More recently (and substantially more abstractly, on arbitrary structures), this standard view has been developed, advocated and defended by Gurevich and his collaborators, cf. [gurevich1995, gurevich2000] and [gurevich2008]; see also [zt2000] and [duzi2014].
I have made the second choice—that algorithms are directly expressed by systems of mutual recursive definitions—and I have developed and defended this view in several papers, including [ynmsicily]. I will not repeat these arguments here, except for the few remarks in the remainder of this Section about the role that iterative algorithms play in the theory of recursion and (especially) Proposition 4.6, which verifies that iterative algorithms are “faithfully coded” by the recursive algorithms they define, and so their theory is not “lost” when we take recursive algorithms to be the basic objects.
By the definitions in Section LABEL:synsem of ARIC (reviewed in Section 2 below), a recursive (McCarthy) -program is a syntactic expression
(1-3)  $E \equiv E_0 \text{ where } \{p_1(\vec{x}_1) = E_1, \ldots, p_K(\vec{x}_K) = E_K\}$
where are (pure, explicit) -terms and for every all the individual variables which occur in are included in the list of distinct individual variables; and an extended program is a pair
(1-4) |
of a program and a list of distinct individual variables which includes all the individual variables which occur in .
Recursive algorithms
My understanding of the algorithm defined by in a -structure A is that it calls for solving in A the system of recursive equations in the body of (within the braces ) and then plugging the solutions in its head to compute , the value of the partial function computed by on the input ; how we find the canonical (least) solutions of this system is not specified in this view by the algorithm from the primitives of A defined by .
This “lack of specificity” of elementary recursive algorithms is surely a weakness of the view I have adopted, especially as it leads to some difficult problems.
Implementations
To compute for , we might use the method outlined in the proof of the Fixed Point Lemma LABEL:lfp or the recursive machine defined in Section LABEL:cmodels of ARIC, or any one of several well known and much studied implementations of recursion. These are iterative algorithms defined on structures which are (typically) richer than A and they must satisfy additional properties relating them to —we would not call any iterative algorithm from any primitives which happens to compute the same partial function on as an implementation of it; so to specify the implementations of a recursive program is an important (and difficult) part of this approach to the foundations of the theory of algorithms, cf. [ynmsicily] and (especially) [ynmpaschalis2008] which proposes a precise definition and establishes some basic results about it.
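To make the first of these methods concrete, here is a minimal sketch in Python of bottom-up fixed-point iteration on a toy one-equation program; the program, the finite tabulation bound and all identifiers are assumptions made only for this illustration, and the sketch is not the recursive machine of ARIC.

```python
# A toy one-equation program, with all names and the bound BOUND assumed for
# this illustration only:
#
#     p(x, y) = if x == 0 then y else p(x - 1, y + 1)      (addition on N)

BOUND = 10   # tabulate arguments (x, y) with 0 <= x, y < BOUND

def step(p):
    """One application of the monotone operator determined by the body of the
    equation: from a finite approximation p (a dict), build the next stage."""
    q = dict(p)
    for x in range(BOUND):
        for y in range(BOUND):
            if x == 0:
                q[(x, y)] = y                    # base clause of the body
            elif (x - 1, y + 1) in p:            # the recursive call converged at an earlier stage
                q[(x, y)] = p[(x - 1, y + 1)]
    return q

approx = {}                                      # stage 0: the empty partial function
while True:
    nxt = step(approx)
    if nxt == approx:                            # least fixed point reached on this grid
        break
    approx = nxt

assert approx[(3, 4)] == 7                       # the least solution computes x + y
```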
On the other hand, the standard view has some problems of its own:
Recursion vs. iteration
Iterators are defined rigorously in ARIC and Theorem LABEL:itrecthmA gives a strong, precise version of the slogan
(1-5) |
on every structure A, if is computed by an A-explicit iterator, then is also computed by an A-recursive program. The converse of (1-5) is not true, however: there are structures where some A-recursive relation is not tail recursive, i.e., it cannot be decided by an iterator which is explicit in A—it is necessary to add primitives and/or to enlarge the universe of A. Classical examples include [patterson-hewitt1970] (Theorem LABEL:pathewhm in ARIC), [lynch-blum1979] and (the most interesting) [tiuryn1989].³ See also [jones1999, jones2001] and [bhaskar2017int, bhaskar2017ext]. The last Chapter LABEL:algebra of ARIC discusses a (basic) problem from algebraic complexity theory, a very rich and active research area in which the recursive representation of algorithms is essential.
Functional vs. imperative programming
It is sometimes argued that the main difference between recursion and iteration is a matter of “programming style”, at least for the structures which are mostly studied in complexity theory in which every recursive partial function is tail recursive, i.e., computed by some A-explicit iterator. This is not quite true: consider, for example, the classical merge-sort algorithm (Section LABEL:algexamples of ARIC) which sorts strings from the primitives of the Lisp structure
over an ordered set , (LABEL:stringstr) in ARIC; it is certainly implemented by some -explicit iterator (as is every recursive algorithm of ), but which one of these is the merge-sort? It seems that we can understand this classic algorithm and reason about it better by looking at Proposition LABEL:mergelemma of ARIC rather than focusing on choosing some specific implementation of it.⁴ See also Theorem LABEL:sortinglb of ARIC for a precise formulation and proof of the basic optimality property of the merge-sort.
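For concreteness, the merge-sort can be written down as a system of recursive equations in roughly the following shape; this Python sketch uses illustrative names (sort, merge, half1, half2) and is not the official program of ARIC.

```python
# A minimal sketch of the merge-sort as a system of recursive equations; the
# names sort, merge, half1, half2 are illustrative, not the notation of ARIC.

def sort(u):
    if len(u) <= 1:
        return u
    return merge(sort(half1(u)), sort(half2(u)))

def merge(v, w):
    # merge two sorted lists into one sorted list
    if not v:
        return w
    if not w:
        return v
    if v[0] <= w[0]:
        return [v[0]] + merge(v[1:], w)
    return [w[0]] + merge(v, w[1:])

def half1(u):
    return u[:len(u) // 2]

def half2(u):
    return u[len(u) // 2:]

assert sort([3, 1, 2, 1]) == [1, 1, 2, 3]
```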
Proofs of correctness
In Proposition LABEL:mergesortprop of ARIC, we claimed the correctness of the merge-sort—that it sorts—by just saying that
The sort function satisfies the equation …
whose proof was too trivial to deserve more than the single sentence
The validity of (LABEL:mergesort) is immediate, by induction on .
To prove the correctness of an iterator that “expresses the merge-sort”, you must first design a specific one and then explain how you can extract from all the “housekeeping” details necessary for the specification of iterators a proof that what is actually being computed is the sorting function; most likely you will trust that a formal version of (LABEL:mergesort) of ARIC is implemented correctly by some compiler or interpreter of whatever higher-level language you are using, as we did for the recursive machine.
Simply put, whether correct or not, the view that algorithms are faithfully expressed by systems of recursive equations typically separates proofs of their correctness and their basic complexity properties, which involve only purely mathematical facts from the relevant subject and standard results of fixed-point theory, from proofs of implementation correctness for programming languages, which are ultimately necessary but have a very different flavor.
Defining mathematical objects in set theory
Finally, it may be useful to review here briefly what it means to define algorithms according to a general (and widely accepted, I think) naive view of what it means to define mathematical objects. This is discussed more fully in Section 3 of [ynmsicily].
One standard example is the “definition” (or “construction”) of real numbers using (canonically) convergent sequences of rational numbers: we set first
next we put for
and finally we declare that represent the same real number if .
In general, a representation in set theory of a mathematical notion is a pair of a set (or class) of set-theoretic objects which represent the objects that fall under and an equivalence condition on which models the identity relation on them; and the representation is faithful—and can be viewed as a definition of in set theory—to the extent that the relations on -objects that we want to study are those which are equivalent to the -invariant properties of the objects in .
Now, number theorists could not care less about such “definitions” of real numbers and they were happily investigating the existence and properties of rational, algebraic and transcendental numbers for more than two thousand years before they were given in the 1870s. Part of the reason for giving them was to argue for adopting set theory as a “foundation” (a “universal language”) for all of mathematics and to apply set theoretic methods to algebraic number theory (most spectacular of these was Cantor’s proof that transcendental numbers exist by a counting argument much simpler than Liouville’s original proof—the first “killer application” of set theory); but this is not the main point—some people would use category theory today and argue for its superiority over set theory as both a foundation and a tool for applications. The main point of looking for such “definitions” is to identify the fundamental, characteristic properties of a mathematical notion, which, to repeat, should be the -invariant properties of the set-theoretic objects that model the notion.
Another, fundamental (and much more sophisticated) example of this process of constructing faithful modelings of mathematical notions is the identification in [kolmogorov1933] of real-valued random variables with measurable functions on a probability space, under various equivalence relations (equal, equal almost surely, or equal in distribution).
2 Equational logic of partial terms with conditionals
With the definitions in Section 1A of ARIC, a partial function
with input set and output set is an (ordinary) function on some subset , the domain of convergence of . We write
and most significantly, for and ,
This relation between partial functions with the same input and output sets is sometimes called Kleene’s strong equality between “partial terms”, but there is nothing partial about it: for any two and any , the proposition is either true or false.
A signature or vocabulary is a set of function symbols, each with assigned and ; and as in (1-2), a (two-sorted, partial) -structure is a pair
(2-1) |
of a (typically infinite) universe and an interpretation of the function symbols which matches their formal arities and sorts, i.e., for each ,
Syntax
The (pure, explicit) -terms are defined by the structural recursion
where is a fixed list of individual variables; for each and each , is a fixed list of (partial) function variables of sort and arity ; each term is assigned a sort in the natural way; and a Parsing Lemma (like Problem x1E.1 of ARIC) justifies the standard method of defining functions on these terms by structural recursion.⁶ See Problem x2.1 for a detailed statement and proof of this. We just write for when , for and we do not allow variables over the set of truth values, cf. footnote 9 on p. 37 of ARIC.
A -term is a -term which has no function variables and a -term is a -term whose function variables are all in the list ; these terms are naturally interpreted in expansions of -structures which interpret each by some of the correct sort and arity.
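To fix ideas, the term syntax just described can be rendered as a small datatype; in the following Python sketch the names and the encoding are assumptions made only for illustration.

```python
# A throwaway rendering of the terms of this Section as a Python datatype; the
# names and the encoding are assumptions made for illustration only.
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Var:                        # an individual variable
    name: str

@dataclass(frozen=True)
class Const:                      # the truth values tt, ff
    value: bool

@dataclass(frozen=True)
class App:                        # phi(E_1, ..., E_n) or p(E_1, ..., E_n)
    head: str                     # a symbol of the vocabulary or a function variable
    args: Tuple["Term", ...] = ()

@dataclass(frozen=True)
class Cond:                       # if E_0 then E_1 else E_2
    test: "Term"
    then: "Term"
    orelse: "Term"

Term = Union[Var, Const, App, Cond]

# For example, "if eq(x, zero) then one else f(pred(x))" becomes:
E = Cond(App("eq", (Var("x"), App("zero"))),
         App("one"),
         App("f", (App("pred", (Var("x"),)),)))
```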
An extended -term is a pair
of a -term and a finite list of distinct individual variables which includes all the individual variables that occur in . The notation provides a simple way to deal with substitutions,
Semantics
The denotation (or value) in a -structure A of each extended -term for given values of its variables is defined by the usual (compositional) structural recursion on the definition of terms: skipping the part (which remains constant),
and we will also use standard model-theoretic notation,
Problems for Section 2
Problem x2.1.
A set of pairs is closed under term formation if
where we view and as strings of symbols of length ; and a string is a pure, explicit -term of sort if the pair belongs to every set which is closed under term formation.
Formulate a Parsing Lemma for pure, explicit -terms (as in Problem x1E.1 of ARIC for terms without function variables) and give a complete proof of it using this precise definition.
3 Continuous, pure recursors
For any two sets , a (continuous, pure) recursor on to is a tuple
(3-1) |
such that for suitable sets (see Section 1A of ARIC for the definitions of monotone and continuous functionals),
are continuous functionals. We allow (as on page LABEL:emptyproduct of ARIC), in which case, by our conventions, .
The number is the dimension of , is its output or head functional, the poset is its solution space, and we allow –in which case there is no solution space and is naturally identified with the partial function . With the notation of (LABEL:wheredef) of ARIC, defines (or computes) the partial function , where
(3-2) |
We can summarize this situation succinctly by writing
(3-3) |
where “where” is now an operation which assigns to every tuple of continuous functionals (with suitable input and output sets) a recursor.
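On finite approximations, this “where” operation can be sketched as a higher-order function which iterates the body functionals starting from the empty partial functions and then applies the head functional; the following Python fragment is only an illustration of (3-2) and (3-3) under simplifying assumptions (dictionaries for finite partial functions, a fixed iteration bound), not a general implementation.

```python
# A sketch of the "where" operation of (3-3) on finite approximations, with
# dictionaries standing for finite partial functions; the representation, the
# iteration bound and all names are assumptions of the illustration.

def where(head, *body, rounds=50):
    """Return the partial function computed by the recursor with the given
    head functional and body functionals."""
    def f(x):
        parts = [dict() for _ in body]               # stage 0: empty partial functions
        for _ in range(rounds):                      # iterate towards the least solutions
            parts = [dict(b(x, *parts)) for b in body]
        return head(x, *parts)
    return f

# One-equation example:  p(n) = if n == 0 then 0 else p(n - 1) + 2,  output p(x).
def body(x, p):
    q = {0: 0}
    for n in range(1, x + 1):
        if n - 1 in p:
            q[n] = p[n - 1] + 2
    return q

def head(x, p):
    return p.get(x)      # None stands for divergence if the recursion has not closed

f = where(head, body)
assert f(5) == 10
```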
The recursor defined by an extended program
Every (deterministic) extended recursive -program
(as in (1-4)) with dimension , arity and sort defines naturally on each -structure A the continuous pure recursor on of sort and arity
(3-4) |
where
(3-5) |
or, with the notation in (3-3),
(3-6) |
It is immediate from the semantics of recursive programs on page LABEL:progsemantics of ARIC that the recursor of an extended program computes its denotation,
(3-7) |
However: does not typically model faithfully the algorithm expressed by on A, partly because it does not take into account the explicit steps which may be required to compute and (more importantly) because it does not record the dependence of that algorithm on the primitives of A. In the extreme case, if is a program with empty body (an explicit -term), then
is a trivial recursor of dimension , a partial function on —and it is the same for all explicit terms which define this partial function from any primitives, which is certainly not right. In the next Section 4 we will define the intension of in A which (we will claim) models faithfully the algorithm from the primitives of A expressed by . As it turns out, however,
for some extended program which is canonically associated with , and so every recursive algorithm of a structure A is for some .
About implementations (2)
Our understanding of the algorithm expressed by a recursor in (3-3) is that it calls for solving the system of recursive equations in its body and then plugging the solutions in the head functional to compute the value . How we find the canonical solutions of this system is not specified by : we might use the method outlined in the proof of the Fixed Point Lemma LABEL:lfp, or the recursive machine defined in Section LABEL:cmodels if is the recursor of an extended program in a structure, or any one of several well-known and much studied implementations of recursion. What the implementations of pure recursors are is an important part of this approach to the foundations of the theory of algorithms, cf. [ynmsicily] and (especially) [ynmpaschalis2008] which includes a precise definition and some basic results. We will not go into this here, except for a few additional comments on page 4.
Strong recursor isomorphism
The solutions of the system of recursive equations in the body of a recursor as in (3-3) do not depend on the order in which these equations are listed and the known methods for computing them also do not depend on that order in any important way; so it is natural to identify recursors which differ only in the order in which their bodies are enumerated, so that, for example
The precise definition of this equivalence relation is a bit messy in the general case: two recursors on the same input and output sets are strongly isomorphic—or just equal—if they have the same dimension and there is a permutation with inverse such that
(3-8) |
Directly from the definitions, strongly isomorphic recursors compute the same partial function ; the idea, of course, is that strongly isomorphic recursors model the same algorithm, and so we will simply write to indicate that and are strongly isomorphic. This view is discussed in several of the articles cited above and we will not go into it here.⁸ This finest relation of recursor equivalence was introduced (for monotone recursors) in [ynmbotik], and more general, weaker notions were considered in [ynmsicily, ynmeng] and in [ynmpaschalis2008].
For the recursors defined by recursive programs, this definition takes the following form, where for any -term and sequences of distinct function and individual variables (satisfying the obvious sort and arity conditions),
(3-9) |
Lemma 3.1.
Two extended -programs
(3-10) |
of the same sort and arity define the same recursor on a -structure A exactly when they have the same number of parts and there is a permutation with inverse such that and for all and ,
(3-11) |
where .
[The proof] is an exercise in managing messy notations and we leave it for Problem x3.1.
(Extended) program congruence
Two programs and are congruent if can be constructed from by an alphabetic change (renaming) of the bound individual and function variables in its parts (as in (3-9)) and a permutation of the equations in the body of . Congruence is obviously an equivalence relation on programs, congruent programs have the same free variables and we write
By Lemma 3.1, congruent extended programs define equal recursors in every structure A, i.e., for every extended program and every program ,
(3-12) |
but the converse fails, cf. Problem x3.2.
More general recursors
The definition of recursors we gave is basically the one in [ynmbotik], except that we allowed there the parts of to be (monotone but) not continuous and to depend on the input set , i.e.,
() |
such that for suitable sets ,
Allowing the parts to be discontinuous is necessary for the theory of higher-type recursion which was the topic of [ynmflr, ynmbotik], but their dependence on the input set is not: the simpler, present notion is easier to work with and it includes the more general objects if we identify in ( ‣ 3) with
on the sets () with
Substantially more general notions of (monotone and continuous) recursors whose solution spaces are products of arbitrary complete posets were introduced in [ynmsicily, ynmeng] and (in greater detail) in [ynmpaschalis2008]; and it is routine to define nondeterministic algorithms along the same lines, using the fixpoint semantics of nondeterministic programs on page LABEL:ndfpsemantics of ARIC.
Problems for Section 3
Problem x3.1.
Prove Lemma 3.1. Hint: Check by a (routine) induction on terms, that for any two lists of distinct function variables and and individual variables , , and for every -term whose free variables are all in the list ,
assuming, of course, that the sorts and arities of the function variables are such that the claim makes sense.
Problem x3.2.
Prove that for every explicit term of sort boole,
and use this to define two extended programs such that for all A, but .
4 Canonical forms and intensions
In this section we will associate with each extended (recursive) -program
(4-1) |
a (unique up to congruence) canonical form , so that the construction
captures the algorithm expressed by in any -structure A. This theory of canonical forms yields, in particular, a robust notion of the elementary (first-order) algorithms of a structure A.
Some more syntax
For the syntactic work we will do in this Section we need to enrich and simplify the notation of ARIC summarized in Section 2.
We introduce two function symbols of arity and sort ( or boole) and the abbreviations
this is semantically inappropriate because the conditionals do not define (strict) partial functions, but it simplifies considerably the definition of pure, explicit -terms on page 2 which now takes the form
(Pure explicit terms) |
where c is any -ary function symbol in the vocabulary , or an -ary function variable or (and ).
Set representations
With each extended -program as in (4-1), we associate its set representation
(4-2) |
Notice that determines , its only member which is not an equation, and by the definition of the congruence relation on page 3,
the converse implication fails, but the algorithm which computes canonical forms naturally operates on these set representations of extended terms, so having a notation for them simplifies greatly its specification.
Immediate and irreducible terms
A term is immediate if it is an individual variable or a direct call to a recursive variable applied to individual variables,
(Immediate terms) |
and a term is irreducible if it is immediate, a Boolean constant, or of the form with immediate (there are natural sort restrictions in these definitions and assignments of sorts to immediate and irreducible terms that we will suppress in the notation, cf. Problem x4.1),
(Irreducible terms) |
For example, are immediate, and are irreducible but not immediate, and is reducible.
Computationally, we think of immediate terms as “generalized variables” which can be accessed directly, like array entries in some programming languages, and a term is irreducible if it can be computed by direct (not nested) calls to primitives or to recursive variables. Both of these properties of terms are (trivially) preserved by alphabetic change of variables (3-9),
(4-3) |
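Both tests are easy to mechanize; the following Python sketch checks immediacy and irreducibility on an ad hoc tuple encoding of terms, which (like all the names in it) is an assumption of the illustration rather than the paper’s notation.

```python
# The two tests on an ad hoc tuple encoding of terms: ('var', v) for an
# individual variable, ('tt',) and ('ff',) for the truth values, ('rec', p, args)
# for a call to a function (recursive) variable, ('prim', phi, args) for a call
# to a primitive, and ('cond', E0, E1, E2) for a conditional.  The encoding and
# the names are assumptions made for illustration only.

def is_immediate(t):
    # an individual variable, or a function variable applied to individual variables
    return t[0] == 'var' or (t[0] == 'rec' and all(a[0] == 'var' for a in t[2]))

def is_irreducible(t):
    if is_immediate(t) or t[0] in ('tt', 'ff'):
        return True
    if t[0] == 'cond':                      # conditional with immediate parts
        return all(is_immediate(a) for a in t[1:])
    if t[0] in ('prim', 'rec'):             # direct call with immediate arguments
        return all(is_immediate(a) for a in t[2])
    return False

x, y = ('var', 'x'), ('var', 'y')
assert is_immediate(('rec', 'p', (x, y)))                               # p(x, y)
assert is_irreducible(('prim', 'phi', (x, ('rec', 'p', (y,)))))         # phi(x, p(y))
assert not is_irreducible(('prim', 'phi', (('prim', 'psi', (x,)),)))    # phi(psi(x))
```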
Irreducible extended programs
An extended program
is irreducible if are all irreducible terms. These are the extended programs which cannot be reduced by the basic process of reduction, to which we turn next.
Arrow reduction
We first define the simplest reduction relation
on extended programs with the same list of free variables, where are function variables and is a number. The definition splits into two cases on the triple and it helps to call a function variable r fresh if it is none of the function variables in the body of .
(1) The body case: one of the equations in the body of is
and the term is not immediate. Put
(2) The head case: p and q are both fresh, , ,
and the term is not immediate. Put
Notice that in the head case, the specific function variable p does not occur on the right-hand-side of the definition as it does in the body case; it is used only as a “marker” that this is the head case: we put
(4-4) |
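To make the bookkeeping concrete, here is a Python sketch of a single body-case step on the tuple encoding of terms used above: it pulls one non-immediate argument out into a fresh equation in the spirit of (1). It is only an illustration, not the official definition; in particular the head case and the freshness conditions on the marker variables are not treated.

```python
# A single body-case step on the tuple encoding of terms used above: find an
# equation whose right-hand side has a non-immediate argument, pull that
# argument out into a fresh equation and replace it by a call to the fresh
# function variable on the parameters of the equation.  Illustration only.

def is_immediate(t):
    return t[0] == 'var' or (t[0] == 'rec' and all(a[0] == 'var' for a in t[2]))

def reduce_once(parts, fresh):
    """parts: list of (p, params, rhs) triples; fresh: an unused function-variable name."""
    for k, (p, params, rhs) in enumerate(parts):
        if rhs[0] == 'cond':
            args = list(rhs[1:])
        elif rhs[0] in ('prim', 'rec'):
            args = list(rhs[2])
        else:
            args = []
        for i, arg in enumerate(args):
            if not is_immediate(arg):
                call = ('rec', fresh, tuple(('var', u) for u in params))
                args[i] = call
                new_rhs = ('cond', *args) if rhs[0] == 'cond' else (rhs[0], rhs[1], tuple(args))
                new_parts = list(parts)
                new_parts[k] = (p, params, new_rhs)
                new_parts.append((fresh, params, arg))       # the fresh equation q(u) = E_i
                return new_parts
    return parts                 # no step applies: the program is irreducible

# The program of (4-12):  p(x, y) = if test(phi0(x), y) then y else p(phi0(x), sigma(x, y))
x, y = ('var', 'x'), ('var', 'y')
p_eq = ('p', ('x', 'y'),
        ('cond',
         ('prim', 'test', (('prim', 'phi0', (x,)), y)),
         y,
         ('rec', 'p', (('prim', 'phi0', (x,)), ('prim', 'sigma', (x, y))))))
step1 = reduce_once([p_eq], 'q1')    # pulls out test(phi0(x), y) as q1(x, y) = test(phi0(x), y)
```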
Lemma 4.1.
(1) If and , then .
(2) If is fresh, and , then
(4-5) |
(3) If is fresh and , then
(4-6) |
(4) An extended program is irreducible if and only if there are no such that .
(5) For every extended program , every program , every triple and every structure A,
Proof.
If, for example, the body case applies and q is any variable with the correct arity and sort, we can take
(1) – (3) are trivial, once we get through the notation and (4) follows immediately by the definition. (5) is also quite easy, either by a direct, fixpoint argument or by applying the Head and Bekič-Scott rules of Theorem LABEL:recidentities of ARIC. ∎
The key property of the reduction calculus is the following, simple
Lemma 4.2 (Amalgamation).
If ,
(4-7) |
then there is a program such that
(4-8) |
Proof.
This is obvious if , so that the two assumed reductions operate independently on different parts of and they give the same result if they are executed in either order. For the proof in the more interesting case when , we just put down as examples the relevant sequences of needed replacements in Diagram 1 for the body case when p is ternary, and , and in Diagram 2 for the head case when and . The general case is only notationally more complex. ∎
Extended program reduction
Using now arrow reduction, we define the one-step and full reduction relations on extended programs by
(4-9) | |||
(4-10) | |||
It is immediate from Lemma 4.1 that for every structure A, every extended program and every ,
(4-11) |
but this would be true even if we removed the all-important non-immediacy restrictions in the definition of reduction, by Theorem LABEL:recidentities of ARIC. The claim is that
but we will not get here into defending this naive view beyond what we said in Sections 1 and 3.
Caution. Reduction is a syntactic operation on extended recursive programs which models (very abstractly) partial compilation, bringing the mutual recursion expressed by the program to a useful form before the recursion is implemented, without committing to any particular method of implementation. No real computation is done by it, and it does not embody any “optimization” of the algorithm expressed by an extended program, cf. Problem x4.4.
Size
Let be the number of occurrences of non-immediate proper subterms of some part of a program (see Problem x4.3 for a rigorous definition and proofs of the properties of this measure). For example, if
(4-12) |
with , then , because the proper subterms we count are the boxed (twice), , , and .
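Under the same tuple encoding as in the earlier sketches, the counting can be written out as follows; this is an illustration only, and the rigorous definition is the object of Problem x4.3.

```python
# The counting, on the tuple encoding of terms used in the earlier sketches;
# an illustration only, with the rigorous definition left to Problem x4.3.

def is_immediate(t):
    return t[0] == 'var' or (t[0] == 'rec' and all(a[0] == 'var' for a in t[2]))

def arguments(t):
    if t[0] == 'cond':
        return t[1:]
    if t[0] in ('prim', 'rec'):
        return t[2]
    return ()

def size_term(t):
    # each non-immediate argument is counted once, and we also recurse into it
    return sum((0 if is_immediate(a) else 1) + size_term(a) for a in arguments(t))

def size_program(parts):
    return sum(size_term(rhs) for _, _, rhs in parts)

# On the program of (4-12) this counts phi0(x) twice, test(phi0(x), y),
# sigma(x, y) and the inner call p(phi0(x), sigma(x, y)).
```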
From the definition of reduction, easily, an extended program is irreducible exactly when and each one-step reduction lowers size by , so, trivially:
Lemma 4.3.
If is a sequence of one-step reductions starting with , then ; and is irreducible if and only if .
We illustrate the reduction process by constructing in Figure 3 a maximal reduction sequence starting with the program in (4-12).
p(x, y) = if test(ϕ_0(x),y) then y else p(ϕ_0(x),σ(x,y))
Lemma 4.4.
For any extended program and irreducible ,
The proof is by induction on the size.
Basis, . If so that is irreducible, then the hypothesis gives immediately and , so .
If , then there is exactly one part of which can activate a one-step reduction. In the head case, the definition gives
with a fresh function variable q and the same equation for with a (possibly) different fresh function variable ; but then by (1) of Lemma 4.1. The same argument works in the body case.
Induction Step, . The hypothesis and Lemma 4.3 supply reduction sequences
(4-12) |
so focusing on the first one-step reductions in the two hypotheses, there are triples and such that
(4-13) |
We consider three cases on how this may arise:
Case 1: and . The Amalgamation Lemma 4.2 applies in this case and supplies a program such that
() |
We fix a (maximal) reduction sequence
() |
and notice that from the reductions in the three starred displays,
but , so the Induction Hypothesis applies and yields the required .
Case 2, but , so the first one-step reductions supplied by the Hypothesis are
If is fresh but with the same arity and sort as q, then (2) of Lemma 4.1 gives
now (3) of Lemma 4.1 gives
so that Case 1 applies and gives
but , which completes the argument in this case.
Case 3, and . The first one-step reductions supplied by the Hypothesis now are
where q and are distinct but have the same sort and arity; so
with any fresh r; and then by the Induction Hypothesis and (3) of Lemma 4.1 as above,
which is what we needed to prove.
Theorem 4.5 (Canonical forms).
Every extended program is reducible to a unique (up to congruence) irreducible extended program , its canonical form.
In detail: with every extended program , we can associate an extended program with the following properties:
(1) is irreducible.
(2) .
(3) If is irreducible and , then .
It follows that for all ,
Proof.
If is irreducible, take , and if , fix a maximal sequence of one-step reductions
and set ; now (1) and (2) are immediate and (3) follows from Lemma 4.4. The last claim holds because , so
and hence by (3). ∎
It is clearly possible to assign to each extended program a specific canonical form, e.g., by choosing to be the “lexicographically least” irreducible such that . This, however, is neither convenient nor useful, and it is best to think of as denoting ambiguously any irreducible extended program such that ; the theorem insures that any two such canonical forms of are congruent, and the construction in the proof gives a simple way to compute one of them.
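As a sketch of this computation (with the same hedges as before), a canonical form can be produced by the following loop, which terminates because each one-step reduction lowers the size (Lemma 4.3); reduce_once stands for any one-step reducer that returns its argument unchanged exactly when the program is irreducible, for instance the illustrative one sketched after (4-4).

```python
# Computing a canonical form by a maximal reduction sequence; reduce_once is
# any one-step reducer which returns its argument unchanged exactly when the
# program is irreducible (for instance the illustrative one sketched after
# (4-4)), and fresh_names supplies unused function-variable names.

def canonical_form(parts, reduce_once, fresh_names):
    names = iter(fresh_names)
    while True:
        nxt = reduce_once(parts, next(names))
        if nxt == parts:          # irreducible: a canonical form has been reached
            return parts
        parts = nxt

# e.g.  cf = canonical_form([p_eq], reduce_once, (f"q{i}" for i in range(100)))
```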
Canonical forms for richer languages
A substantially more general version of the Canonical Form Theorem for functional structures was established in [ynmflr], and there are natural versions of it for many richer languages, including a suitable formulation of the typed -calculus with recursion (PCF). The proof we gave here for McCarthy programs is considerably easier than these, more general results.
Intensions and elementary algorithms
The (referential) intension of an extended McCarthy program in a structure A is the recursor of its canonical form,
(4-14) |
it does not depend on the choice of by (3-12), and it models the elementary algorithm expressed by in A.
An operation on the extended programs of a structure A is an intensional invariant if it depends only on the intension of in A, i.e.,
(4-15) |
We would expect that the most basic complexity measures on programs are intensional invariants, and they are, Problems LABEL:callsinvariant, LABEL:depthinvariant.
The recursor representation of an iterative algorithm
Using the precise definition of the classical notion of an iterator (sequential machine)
in Section LABEL:cmodels of ARIC, let
be the structure associated with , where
is the disjoint sum of the sets and , and let
Proposition 4.6.
An iterator is determined by its intension
(4-16) |
the recursive algorithm on expressed by the extended tail recursive program associated with .
The proof is easy, if a bit technical, and we leave it for Problem x4.13∗.
Theorem 4.2 in [ynmpaschalis2008] is a much stronger result along these lines, but it requires several definitions for its formulation and this simple Proposition expresses quite clearly the basic message: every property of an iterator can be expressed as a property of its associated recursor , and so choosing recursive rather than iterative algorithms as the basic objects does not force us to miss any important facts about the simpler objects.
About implementations (3)
An outline of a theory of implementations was sketched first in [ynmsicily] and then (in more detail) in [ynmpaschalis2008]. Briefly:
(1) A reducibility relation on recursors with the same input and output sets is defined, aiming to capture an exact notion of “simulation”.
(2) By definition, an implementation of a recursor is any iterator such that is reducible to the recursor associated with , .
By the main Theorem 5.3 in [ynmpaschalis2008],
the recursive machine associated with in A is an implementation of the recursor .
This provides some initial evidence that this approach to a theory of implementations at least covers a most basic case, but not much more has been done on the question and we will not go into it here. What is worth emphasizing (again) is that the recursive machine lives in a structure whose universe extends the universe of A and which has richer primitives. In other words, the usual assertion that recursion can be reduced to (or implemented by) iteration involves extending the universe of the structure and adding new primitives; and this is necessary by the results discussed in Section LABEL:recvstail and on page LABEL:recvstailII of ARIC, and in [stolboushkin1983] and [tiuryn1989] cited on page LABEL:recistail.
Problems for Section 4
Problem x4.1.
A term is immediate of sort if and ; or ; or where each is an individual variable. Write out a similar, full (recursive) definition of irreducible terms of sort .
Problem x4.2.
Prove Lemma 4.1.
Problem x4.3 (Size).
(1) Prove that there is exactly one function which assigns a number to every explicit term so that
(2) For a program as in (4-1), set
and check that
(1) is irreducible if and only if , and
(2) if , then .
(1) If is irreducible, then every equation of is of the form
with immediate and no non-immediate term occurs as a proper subterm of a part of and ; and if is reducible, then in some equation of as above some is not immediate and is counted, so .
(2) If is an equation of , then the contribution it makes to the size of is
If is not immediate, then the two equations in the program constructed by the one-step reduction contribute
to the .
Problem x4.4.
Notice that the irreducible extended program in Diagram 3 to which reduces calls for computing the value twice in each “loop”. Define an extended program which computes the same partial function as but calls for computing only once in each loop, and construct a complete reduction sequence for .
Problem x4.13∗.
Prove Proposition 4.6. Hint: Use the reduction calculus to compute a canonical form of the program ,
so that , where
The result is practically trivial from this, except for the fact that we only know the canonical form up to congruence, so it is not immediate how to “pick out” from the functionals the input, output, etc. of —and this is complicated by the fact that the sets and may overlap, in fact they may all be the same set. We need to use the details of the coding of many-sorted structures by single-sorted ones specified in Section 1D of ARIC.
5 Decidability of intensional program equivalence
Two extended -programs are intensionally equivalent on a -structure A if they have equal intensions,
(5-1) |
they are (globally) intensionally equivalent if they are intensionally equivalent on every infinite A,
(5-2) |
The main result of this section is
Theorem 5.1.
For every infinite -structure A, the relation of intensional equivalence on A between extended -programs is decidable.¹¹ Except for a reference to a published result in Problem x5.46∗, I will not discuss in this paper the question of complexity of intensional equivalence on recursive programs, primarily because I do not know anything non-trivial about it.
We will also prove that global intensional equivalence between -programs is essentially equivalent with congruence, and so it is decidable, Theorem 5.14.
Plan for the proof
To decide if and are intensionally equivalent on A, we compute their canonical forms, check that these have the same number of irreducible parts and then (to apply Lemma 3.1) check if there is a permutation of with inverse such that and
These identities are all irreducible by (4-3), so the problem comes down to deciding the validity of irreducible term identities in A. The proof involves associating with each infinite -structure A a finite list of “conditions” about its (finitely many) primitives—its dictionary—which codifies all the information needed to decide whether an arbitrary irreducible identity holds in A; it involves some fussy details, but fundamentally it is an elementary exercise in the equational logic of partial terms with conditionals. In classical terminology, it is (basically) a Finite Basis Theorem for the theory of irreducible identities which hold in an arbitrary, infinite (partial) -structure.¹² The decidability of intensional equivalence is considerably simpler for total structures A, cf. Lemma 5.9, and substantially more difficult for the FLR-structures of [ynmflr] which allow monotone, discontinuous functionals among their primitives; see [ynmfrege]—and a correction which fills a gap in the proof of this result in [ynmfrege] which is posted (along with the paper) on ynm’s homepage. It is an open—and, I think, interesting—question for the acyclic recursive algorithms of [ynmlcms], where intensional synonymy models synonymy (or faithful translation) for fragments of natural language.
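In outline (and with all names illustrative), the decision procedure has the following shape; canonical_form is a canonical-form computation as in Theorem 4.5 and parts_match stands for an oracle, supplied by the dictionary construction of this Section, which decides whether two irreducible parts become identical in A after the renaming of Lemma 3.1. For simplicity the sketch checks the parts pairwise under a permutation of the bodies, suppressing the simultaneous renaming of the recursive variables that Lemma 3.1 actually requires.

```python
# canonical_form is a canonical-form computation as in Theorem 4.5; parts_match
# is an oracle (to be supplied by the dictionary construction of this Section)
# deciding whether two irreducible parts become identical in A after renaming.
from itertools import permutations

def intensionally_equivalent(E, F, canonical_form, parts_match):
    cfE, cfF = canonical_form(E), canonical_form(F)     # head part first, then the body
    if len(cfE) != len(cfF):
        return False
    headE, *bodyE = cfE
    headF, *bodyF = cfF
    return any(parts_match(headE, headF) and
               all(parts_match(p, q) for p, q in zip(bodyE, perm))
               for perm in permutations(bodyF))
```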
The satisfaction relation
We will also need in this Section the classical characterization of valid identities in terms of the satisfaction relation.
An assignment in a -structure A associates with each variable an object of the proper kind, i.e.,
so that, for example, the boolean function variables are assigned nullary partial functions , essentially , tt or ff. For each -term , we define
by the usual compositional recursion and we set
If the variables of are among , then
and so by the notation on p. 2, for any two extended -terms,
Preliminary constructions and notation
We fix for this Section an infinite -structure (with finite , as always) and we assume without loss of generality that among the symbols in are such that
We will often define assignments partially, only on the variables which occur in a term and specifying only finitely many values of the function variables—those needed to compute —perhaps to come back later and extend when we view as a subterm of some .
Using the assumption that is infinite, we fix pairwise disjoint, infinite sets
whose union is co-infinite in and we fix an assignment on the individual and function variables of sort ind using injections
We do not define at this point any values for function variables of sort boole.
A term is pure, algebraic if it is of sort ind and none of , the conditional or any symbol from occurs in it, i.e.,
(Pure, algebraic terms) |
Lemma 5.2.
If are pure, algebraic terms, then
The proof of this (classical) result is by an easy induction on terms, using the Parsing Lemma in Problem x2.1.
Forms of irreducible identities
To organize the proof of Theorem 5.1, we will appeal to the (trivial) fact that every irreducible term is in exactly one of the following forms:
(1) tt, ff, or an individual variable v.
(2) Function variable application, , where the terms are immediate of sort ind and ; p can be of either sort, ind or boole.
(3) Conditional, , with immediate of sort boole and immediate of the same sort, ind or boole.
(4) Primitive application, , , with immediate , all of sort ind and of either sort.
It follows that every irreducible identity falls in one of the ten, pairwise exclusive forms [i-j] with depending on the forms of its two sides. In dealing with these cases in the following Lemmas, we will use repeatedly the (trivial) fact that
if are immediate of sort boole and , then we can extend to the function variables which occur in and so that and take any pre-assigned truth values (or diverge).
Lemma 5.3 ([2-2]).
For any two irreducible terms in form (2),
Proof.
Let and , assume and consider two cases:
Case 1, ; now and are pure algebraic and so by Lemma 5.2:
Case 2, . If , extend by setting
and check that ; so , hence , and it is enough to prove that . If not, then for some , by Lemma 5.2; and then we can choose some and extend by setting
so that , which contradicts the hypothesis. ∎
Lemma 5.4 ([3-3]).
If are all immediate, then
Proof.
Assume , note that must be of sort boole and consider cases:
Case (a): are of sort ind. If , extend so that
which gives , hence . Setting gives and setting gives .
Case (b): all are of sort boole. Assume first, towards a contradiction that , and set
If , we can further set (even if ) and get a contradiction, so . The same argument, with this time gives ; and the symmetric argument gives , so what we have now is
with the added hypothesis that ; but then we can set
which gives
which violates the hypothesis; so , and the hypothesis in Case (b) takes the form
Now assume towards a contradiction that ; so one of them is different from , suppose it is ; so we now get a contradiction by setting and .
At this point we know that and , and the last bit that is simpler. ∎
Lemma 5.5 ([2-3]).
If are all immediate, then
Proof.
Notice first that if p is of sort ind, then
for any which converges on but such that .
If p is of sort boole and , then we can get a such that but so the hypothesis fails again. It follows that and the hypothesis takes the form
If , we get a contradiction by setting , and similarly if . ∎
Lemma 5.6 ([3-4]).
If are all immediate, then
Proof.
Notice that , since while and consider three cases:
If , extend by setting so
() |
If and is of sort ind, then setting now gives ( ‣ 5) again, since .
If and is of sort boole, then we can set which again gives ( ‣ 5). ∎
Lemma 5.7.
If are all immediate, then
In particular, with and ,
and with , .
Proof.
Assume the hypothesis and also that for any .
(a) ; because if , then
() Call relevant if or for some , set for every individual variable which occurs in a relevant term, and for a function variable of sort ind and arity , set
(b) For each pure, algebraic term , is defined exactly when is relevant and then . This is because by the definition, if is defined, then for a relevant and hence by Lemma 5.2.
It follows that is not defined by the stipulation (), since by the hypothesis and each is a proper subterm of , so it cannot be equal to it; and then we reach a contradiction by setting so that σ̈(p)(σ̈(w_1),…,σ̈(w_m)) ≠ ϕ^A(σ̈(z_1),…,σ̈(z_k)). ∎
Lemma 5.8.
If there is an algorithm which decides the validity in A of identities in form [4-4],
then the relation of intensional equivalence on A between extended -programs is decidable.
Proof.
It is useful to assume here that there are nullary constants of sort boole and a unary function symbol id of sort ind in such that
(5-3) |
by adding them to if necessary.
Forms [1-1], [1-2] and [1-3] are trivial, cf. Problem x5.1.
At this point there is a fork in the road: it is very easy to finish the proof of Theorem 5.1 for total structures, while the general case of arbitrary partial structures involves some technical difficulties. We do the simple thing first.
Lemma 5.9.
Suppose A is a total, infinite, -structure, and are immediate terms.
(1) If , then for all ,
(2) There is a sequence of (not necessarily distinct) individual variables such that
and for any such sequence
(5-4) |
Proof.
(1) Suppose towards a contradiction that
but is not identical with any and define the assignment which agrees with on all individual and function variables except that
this gives and , which contradicts the hypothesis.
(2) To construct a sequence of individual variables with the required property by induction on : pick a fresh if for every and otherwise let .
Suppose are such that , assume that
and for any assignment define so that
which is possible by the assumption relating and . The hypothesis gives
which then implies
and since was arbitrary we get the required
The converse is proved similarly. ∎
The point of the Lemma is that, for given , there are infinitely many identities which may or may not satisfy the left-hand-side of (5-4), because there are infinitely many immediate terms, e.g., could be any one of
while the right-hand-side of (5-4) involves only finitely many identities (up to alphabetic change), which we can exploit as follows:
An individual bare identity in is any identity in form [4-4]
() |
where the (not necessarily distinct) individual variables are chosen from the first entries in a fixed list of all individual variables; and the individual dictionary of a total structure A is the set
(Ind-Dictionary) |
This is a finite set (because is finite) and by (2) of Lemma 5.9, for any identity in form [4-4] we can construct an individual bare identity such that
(5-5) |
and then Lemma 5.8 gives immediately
Theorem 5.10.
For every total, infinite -structure A, the relation of intensional equivalence on A between extended -programs is decidable.
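In outline, the resulting decision method for identities in form [4-4] over a total A is a lookup: rename the variables of the given identity to a bare form (by order of first occurrence, as in Lemma 5.9) and test membership in the finite, precomputed individual dictionary. The following Python sketch illustrates this under an ad hoc encoding; the encoding, the names and the sample dictionary entry are assumptions, not part of the text.

```python
# Renaming an identity in form [4-4] to its "bare" form (variables renamed by
# order of first occurrence, as in Lemma 5.9) and testing membership in the
# precomputed individual dictionary; the encoding, the names and the sample
# dictionary entry below are assumptions made for illustration only.

def bare(identity):
    (phi, left), (psi, right) = identity
    renaming = {}
    def rn(u):
        if u not in renaming:
            renaming[u] = f"v{len(renaming) + 1}"
        return renaming[u]
    return ((phi, tuple(rn(u) for u in left)),
            (psi, tuple(rn(u) for u in right)))

def valid(identity, dictionary):
    return bare(identity) in dictionary

# If the dictionary of a commutative A contains plus(v1, v2) = plus(v2, v1):
D = {(("plus", ("v1", "v2")), ("plus", ("v2", "v1")))}
assert valid((("plus", ("x", "y")), ("plus", ("y", "x"))), D)
```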
The general case
We now turn to the proof of Theorem 5.1 for an arbitrary infinite partial -structure A which is similar in structure but requires some additional arguments. It will be useful to keep in mind the following example of an irreducible identity which illustrates the changes that we will need to make to the argument:
(5-6) |
An individual variable u is placed in an irreducible identity
(5-7) |
if for some ; and an assignment is injective if it assigns distinct values in to distinct, placed individual variables . We write
(5-8) |
Lemma 5.11.
With every irreducible identity (5-7), we can associate a sequence of (not necessarily distinct) individual and nullary function variables of sort ind so that for every (infinite) -structure A,
(5-9) |
Proof.
Call new (in (5-7)) if there is no such that , and set (by induction on ):
(a1) , if is new and is an individual or nullary function variable;
(a2) some fresh nullary function variable (distinct from every and from every with ), if is new and with ;
(a3) for the least (and hence every) such that , if is not new.
Notice that directly from the definition,
(5-10) |
cf. Problem x5.4.
To prove first the direction in (5-9), assume that
let be any injective assignment, and define by setting first
Since is injective, this already insures that, however we extend it, will be an injective assignment.
Next, if u is an individual variable which occurs in some and is not placed, set to a fresh value in , so that
At this point we have defined on all the individual and nullary function variables which occur in (5-9) and it assigns distinct values to distinct individual variables. To define it on -ary function variables with which occur in (5-9), set
which is a good definition, because if it happens that for some variables such that for some , then , hence and by (5-10).
The hypothesis now gives us that , and we verify by a direct computation:
The lemma reduces the problem of the validity in A of any identity in (5-7) to the validity in A of a single identity from a finite list and that would give us the decision method we want by the same argument we used for total structures by appealing to Lemma 5.9—except for the annoying subscript , which we must deal with next.
Consider the example in (5-6), for which the construction in the Lemma verified that
an equivalence which may fail without the subscript by Problem x5.7. Now
the construction in the Lemma associates the identity with ; and it is quite easy to check that
(5-11) |
cf. Problem x5.9. This reduces the validity of in A to the injective validity in A of two identities which are effectively constructed from ; and the natural extension of this reduction holds for arbitrary irreducible identities which involve the primitives in :
Lemma 5.12.
Let be any irreducible identity with , let be an enumeration of the individual variables which are placed in , let be the set of equivalence relations on the set and for any , let
Then, for any infinite -structure A,
(5-12) |
where is the result of replacing in each by its representative in the equivalence relation .
Proof.
The direction of (5-12) is trivial: because
For the converse, assume the right-hand-side of (5-12), fix an assignment and set
is injective for the identity whose placed individual variables are exactly the representatives of the placed individual variables of , and so the hypothesis gives
To infer that , check first that for every immediate term ,
this holds by the definition of if for some and then trivially in every other case. Finally, compute:
∎
A bare -identity is an identity of the form
(5-13) |
where each is an individual or nullary function variable of sort ind chosen from the first entries in a fixed list
of all such variables; and the dictionary of A is the set
(Dictionary) |
The bare -identities are alphabetic variants of the identities constructed in Lemma 5.11 and there are only finitely many of them, cf. Problem x5.10.
We understand (5-13) to include (by convention) identities of the form
when the arity of one or both of is . We will not deal separately with these degenerate cases, since the arguments we will give can be trivially adjusted to apply to them also.
Corollary 5.13.
With every irreducible identity of the form
we can associate a finite sequence of bare identities so that for every -structure A,
(5-14) |
Proof.
Let be an enumeration of the equivalence relations on the set of the placed individual variables, let, for each , be the identity associated with when by the construction in Lemma 5.11, and let be the “alphabetically least” variant of in which only the allowed variables occur. ∎
For example, to be ridiculously formal, the bare identities associated with Example (5-6) by (5-11) are
Following the plan for the proof on page 5, we reduce the problem of intensional equivalence between two extended -programs and to deciding the validity in A of irreducible term identities, and these fall in one of the ten cases [i-j] with on the same page depending on the form of their two sides, for example
Case (3-4) |
Case (1-1) is trivial, as no identity holds between distinct terms which are immediate, or tt or ff, and Lemma 5.3 and the first three parts of Lemma LABEL:irrthm decide the validity of the identity if or —for example, the identity in case (3-4) never holds by Part (3) of Lemma LABEL:irrthm.
Cases (1-4) and (2-4), where is tt, ff or an irreducible (perhaps immediate) term of the form or . Now part (4) of Lemma LABEL:irrthm reduces this identity to
for some (that we can compute), and the same holds in the simpler case when is an individual or nullary function variable, cf. Problem x5.11. We assume that the identity map on is among the primitives—or we just add it, which will just make the decidability result stronger—and then this identity takes the form
and is covered by case (4-4) below. The same reduction to (4-4) works for tt and ff; we just assume that these boolean constants have names in .
Case (4-4) requires us to decide whether an irreducible identity of the form
holds in A and it is the only case which depends on the primitives of A. For this we will use the dictionary of A, the set
(Dictionary) |
This is a finite set by Problem x5.10, and by Corollary 5.13, we can construct from and bare identities such that
Global intensional equivalence
We might guess that irreducible programs are globally intensionally equivalent only if they are congruent, but this is spoiled by the trivial,
(5-15) |
already noticed in Problem x3.2. So we need to adjust:
An irreducible program is proper if none of its parts is an immediate term of the form p or and boolean sort.
Theorem 5.14.
(1) Every extended irreducible -program is globally intensionally equivalent to a proper one.
(2) Two extended, proper, irreducible -programs are globally intensionally equivalent if and only if they are congruent.
(3) The relation of global intensional equivalence between extended -programs is decidable.
Proof.
(1) Replace every part of the program which is immediate and of Boolean sort by .
(2) We need to show that if and are proper, extended irreducible -programs, then
and the idea is to use the hypothesis on a single, suitably “free” total structure A and then apply Lemma 5.9.
We fix an infinite (countable) set .
Step 1. For each of sort ind and arity (if there are any such), fix a set and an injection , so that
If , then is a singleton.
It is easy—if a bit tedious—to check that for all immediate terms , and however we complete the definition of A, if and are of sort ind, then
(5-16) |
cf. Problem x5.12.
Step 2. For any two of sort boole and arities and for every equivalence relation on the set , we choose a fresh tuple of elements in such that
and set . We do this by enumerating all triples where are of boolean sort and arities and is an equivalence relation on , and then successively fixing a fresh tuple for each triple with the required property—which we can do since, at any stage, we have used up only finitely many members of . This insures (5-16) for of sort boole and non-zero arities no matter how we complete the definitions of .
Step 3. If is of sort boole and arity (a boolean constant), we set .
At this point we have completed the definition of a structure A and the hypothesis gives us that
Lemma 1. For , , unless both and are distinct boolean constants in .
[Proof] is easy (if tedious) by appealing to Lemmas 5.2 and 5.5 and we leave it for Problem x5.13.
Lemma 2. For each boolean constant ,
Proof.
Fix and define the structure exactly as we defined A in Steps 1 and 2 above but replacing Step 3 by the following:
Step . Set and for every other boolean constant , set .
The hypothesis gives us a permutation such that and
By a slight modification of the detailed case analysis in the proof of Lemma 1 (in Cases [2-5], [3-5] and [4-5]),
and directly from the definition of B,
So pairs the parts of which are tt or with the parts of which are tt or , hence
but by Lemma 1, and so the last displayed identity implies the claim in the Lemma. ∎
Lemma 3. .
Proof.
This is true by Lemma 1, if is not a boolean constant in .
If , then the construction in Lemma 2 gives ; and the same construction starting with gives , so that
which implies that . ∎
To complete the proof that , we need to construct a permutation such that and
(5-17) |
We start by setting if is not a boolean constant in or , which insures (5-17) for such by Lemmas 1 and 3. Next, for each boolean constant , we appeal to Lemma 2 to extend in any way so that
and establishes that .
Part (2) follows almost immediately from (1) and we leave it for Problem x5.15. ∎
Problems for Section 5
Problem x5.1.
Give a decision procedure for in forms [1-1], [1-2], [1-3] and arbitrary A.
Problem x5.2 (Lemma LABEL:irrthm, (3)).
Prove that if and , , are immediate, and A is any infinite -structure, then
() |
Problem x5.3.
True or false: for any infinite and any immediate terms with , we can define a partial function such that
Problem x5.4.
Hint: Use induction on .
Problem x5.5.
Prove that if and are distinct immediate terms or tt or ff, then , for any infinite A.
Problem x5.6.
Suppose A is a total -structure and . Find a sequence of (not necessarily distinct) individual variables such that
Problem x5.7.
Give an example where
Hint: Set , with a partial function which converges only when .
Problem x5.8.
Problem x5.9.
Prove (5-11).
Problem x5.10.
Prove that the number of bare -identities as in (5-13) is bounded above by , where is the largest arity of any and is its cardinality.
Problem x5.11.
Prove that if is irreducible, w is an individual variable and for some structure A, then every is an individual variable and w is one of them.
Problem x5.13.
Prove Lemma 1 in the proof of Theorem 5.14.
Problem x5.14.
Prove Case (2-2) of Theorem 5.14, that for immediate terms ,
Problem x5.15.
Derive (3) of Theorem 5.14 from (1) and (2).
Propositional programs
An explicit term is propositional if it has no individual variables, no symbols from and only boolean, nullary function variables,
(Prop terms) |
and a program is propositional if all its parts are, e.g.,
These are significant for the project of using referential intensions to model meanings initiated in [ynmfrege, ynmlcms] which is far from our topic, but they also bear on the complexity problem for intensional equivalence.
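For instance, assuming the where-notation for recursive programs used throughout, a propositional program might look like the following sketch, built only from tt, ff and nullary boolean variables (an illustration, not a canonical example):

```latex
% A hedged illustration of a propositional program: no individual variables,
% no symbols of the vocabulary, only nullary boolean function variables.
\[
E \;\equiv\; \mathsf{p}_0 \ \text{where}\ \bigl\{\,
      \mathsf{p}_0 = \text{if } \mathsf{p} \text{ then } \mathsf{p}_1 \text{ else } \mathsf{ff},\quad
      \mathsf{p}_1 = \text{if } \mathsf{q} \text{ then } \mathsf{tt} \text{ else } \mathsf{p}_0
   \,\bigr\}
\]
```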
For our purposes here, a finite graph with nodes is a binary relation on the set , the edge relation on the set of nodes of the graph in more standard terminology. (To the best of my knowledge, and that of Wikipedia, it is still open as I write this whether the graph isomorphism problem is in co-NP.)
Problem x5.46∗ ([ynmfrege], for a related language).
Prove that the problem of intensional equivalence between propositional programs is at least as hard as the problem of isomorphism between graphs. Hint: You need to associate with each graph of size an irreducible propositional program which codes , so that
and the trick is to use propositional variables , one for each and , one for each pair with .
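To fix what the target problem is under the convention above, here is a small brute-force isomorphism test (Python); it is meant only to make the statement concrete and says nothing about the complexity of the reduction asked for in the problem.

```python
from itertools import permutations

# Brute-force graph isomorphism, with each graph given as a set of ordered
# pairs (its edge relation) on the node set {1, ..., n}.
def isomorphic(n, edges_G, edges_H):
    nodes = range(1, n + 1)
    for pi in permutations(nodes):
        relabel = dict(zip(nodes, pi))
        if {(relabel[a], relabel[b]) for (a, b) in edges_G} == set(edges_H):
            return True
    return False

# Example: a directed 3-cycle is isomorphic to any relabeling of itself.
G = {(1, 2), (2, 3), (3, 1)}
H = {(2, 1), (1, 3), (3, 2)}
assert isomorphic(3, G, H)
```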
References
- [bhaskar2017int] Siddharth Bhaskar, A difference in complexity between recursion and tail recursion, Theory of Computing Systems, vol. 60 (2017), pp. 299–313.
- [bhaskar2017ext] Siddharth Bhaskar, Recursion versus tail recursion over $\mathbb{F}_p$, Journal of Logical and Algebraic Methods in Programming (2018), pp. 68–90.
- [church1935] Alonzo Church, An unsolvable problem in elementary number theory, Bulletin of the American Mathematical Society, vol. 41 (1935), pp. 332–333; an abstract of [church1936].
- [church1936] Alonzo Church, An unsolvable problem in elementary number theory, American Journal of Mathematics (1936), pp. 345–363; an abstract of this paper was published in [church1935].
- [gurevich2008] Nachum Dershowitz and Yuri Gurevich, A natural axiomatization of computability and proof of Church's Thesis, The Bulletin of Symbolic Logic, vol. 14 (2008), pp. 299–350.
- [duzi2014] M. Duží, A procedural interpretation of the Church-Turing thesis, Church's Thesis: Logic, Mind and Nature (Adam Olszewski, Bartosz Brozek, and Piotr Urbanczyk, editors), Copernicus Center Press, Krakow, 2014.
- [gandy1980] Robin Gandy, Church's Thesis and principles for mechanisms, The Kleene Symposium (J. Barwise, H. J. Keisler, and K. Kunen, editors), North Holland Publishing Co., 1980, pp. 123–148.
- [gurevich1995] Yuri Gurevich, Evolving algebras 1993: Lipari guide, Specification and validation methods (E. Börger, editor), Oxford University Press, 1995, pp. 9–36.
- [gurevich2000] Yuri Gurevich, Sequential abstract state machines capture sequential algorithms, ACM Transactions on Computational Logic, vol. 1 (2000), pp. 77–111.
- [jones1999] Neil D. Jones, LOGSPACE and PTIME characterized by programming languages, Theoretical Computer Science (1999), pp. 151–174.
- [jones2001] Neil D. Jones, The expressive power of higher-order types or, life without CONS, Journal of Functional Programming, vol. 11 (2001), pp. 55–94.
- [ynmkechris-moschovakis] A. S. Kechris and Y. N. Moschovakis, Recursion in higher types, Handbook of mathematical logic (J. Barwise, editor), Studies in Logic, No. 90, North Holland, Amsterdam, 1977, pp. 681–737.
- [kleene1959] Stephen C. Kleene, Recursive functionals and quantifiers of finite types I, Transactions of the American Mathematical Society, vol. 91 (1959), pp. 1–52.
- [knuth1973] D. E. Knuth, The Art of Computer Programming, Volume 1: Fundamental Algorithms, second ed., Addison-Wesley, 1973.
- [kolmogorov1933] A. N. Kolmogorov, Foundations of the theory of probability, Chelsea Publishing Co., New York; English 1956 translation of the 1933 German monograph, edited by Nathan Morrison and with an added bibliography by A. T. Bharucha-Reid.
- [kripke2000] Saul A. Kripke, From the Church-Turing Thesis to the First-Order Algorithm Theorem, Proceedings of the 15th Annual IEEE Symposium on Logic in Computer Science (Washington, DC, USA), LICS '00, IEEE Computer Society, 2000. The reference is to an abstract; a video of a talk by Saul Kripke with the same title at the 21st International Workshop on the History and Philosophy of Science is posted at http://www.youtube.com/watch?v=D9SP5wj882w, and this is my only knowledge of this article.
- [longley-normann2015] John Longley and Dag Normann, Higher-order computability, Springer, 2015.
- [lynch-blum1979] Nancy A. Lynch and Edward K. Blum, A difference in expressive power between flowcharts and recursion schemes, Mathematical Systems Theory (1979), pp. 205–211.
- [ynmflr] Yiannis N. Moschovakis, The formal language of recursion, The Journal of Symbolic Logic, vol. 54 (1989), pp. 1216–1252; posted on ynm's homepage.
- [ynmbotik] Yiannis N. Moschovakis, A mathematical modeling of pure, recursive algorithms, Logic at Botik '89 (A. R. Meyer and M. A. Taitslin, editors), Lecture Notes in Computer Science, vol. 363, Springer-Verlag, Berlin, 1989, pp. 208–229; posted on ynm's homepage.
- [ynmfrege] Yiannis N. Moschovakis, Sense and denotation as algorithm and value, Logic colloquium '90 (J. Väänänen and J. Oikkonen, editors), Lecture Notes in Logic, vol. 2, Association for Symbolic Logic, 1994, pp. 210–249; posted on ynm's homepage, together with a corrected and expanded proof of the main Theorem 4.1 of this paper.
- [ynmsicily] Yiannis N. Moschovakis, On founding the theory of algorithms, Truth in mathematics (H. G. Dales and G. Oliveri, editors), Clarendon Press, Oxford, 1998, pp. 71–104; posted on ynm's homepage.
- [ynmeng] Yiannis N. Moschovakis, What is an algorithm?, Mathematics unlimited – 2001 and beyond (B. Engquist and W. Schmid, editors), Springer, 2001, pp. 929–936; posted on ynm's homepage.
- [ynmlcms] Yiannis N. Moschovakis, A logical calculus of meaning and synonymy, Linguistics and Philosophy, vol. 29 (2006), pp. 27–89; posted on ynm's homepage.
- [ynm2014] Yiannis N. Moschovakis, On the Church-Turing Thesis and relative recursion, Logic and Science Facing the New Technologies, Logic, Methodology and Philosophy of Science: Proceedings of the 14th International Congress (Nancy) (Peter Schroeder-Heister, Gerhard Heinzmann, Wilfrid Hodges, and Pierre Edouard Bour, editors), College Publications, 2014, pp. 179–200; posted on ynm's homepage.
- [aric2019] Yiannis N. Moschovakis, Abstract recursion and intrinsic complexity, ASL Lecture Notes in Logic, vol. 48, Cambridge University Press, 2019; posted on ynm's homepage.
- [ynmpaschalis2008] Yiannis N. Moschovakis and Vasilis Paschalis, Elementary algorithms and their implementations, New computational paradigms (S. B. Cooper, Benedikt Lowe, and Andrea Sorbi, editors), Springer, 2008, pp. 81–118; posted on ynm's homepage.
- [patterson-hewitt1970] Michael S. Paterson and Carl E. Hewitt, Comparative schematology, MIT AI Lab publication, 1970; posted at http://hdl.handle.net/1721.1/6291 with the date 1978.
- [sacks1990] Gerald E. Sacks, Higher recursion theory, Perspectives in Mathematical Logic, Springer, 1990.
- [tiuryn1989] Jerzy Tiuryn, A simplified proof of DDL < DL, Information and Computation, vol. 82 (1989), pp. 1–12.
- [zt2000] J. V. Tucker and J. I. Zucker, Computable functions and semicomputable sets on many-sorted algebras, Handbook of Logic in Computer Science (S. Abramsky, D. M. Gabbay, and T. S. E. Maibaum, editors), vol. 5, Oxford University Press, 2000, pp. 317–523.
- [turing1936] Alan M. Turing, On computable numbers with an application to the Entscheidungsproblem, Proceedings of the London Mathematical Society, vol. 42 (1936), pp. 230–265; a correction, ibid., vol. 43 (1937), pp. 544–546.