Program Synthesis for Polynomial Systems using Algebraic Geometry
Abstract
Template-based synthesis, also known as sketching, is a localized approach to program synthesis in which the programmer provides not only a specification, but also a high-level “sketch” of the program. The sketch is basically a partial program that models the general intuition of the programmer, while leaving the low-level details as unimplemented “holes”. The role of the synthesis engine is then to fill in these holes such that the completed program satisfies the desired specification. In this work, we focus on template-based synthesis of polynomial imperative programs with real variables, i.e. imperative programs in which all expressions appearing in assignments, conditions and guards are polynomials over program variables. While this problem can be solved in a sound and complete manner by a reduction to the first-order theory of the reals, the resulting formulas will contain a quantifier alternation and are extremely hard for modern SMT solvers, even when considering toy programs with a handful of lines. Moreover, the classical algorithms for quantifier elimination are notoriously unscalable and not at all applicable to this use-case.
In contrast, our main contribution is an algorithm, based on several well-known theorems in polyhedral and real algebraic geometry, namely Putinar’s Positivstellensatz, the Real Nullstellensatz, Handelman’s Theorem and Farkas’ Lemma, which sidesteps the quantifier elimination difficulty and reduces the problem directly to Quadratic Programming (QP). Alternatively, one can view our algorithm as an efficient way of eliminating quantifiers in the particular formulas that appear in the synthesis problem. The resulting QP instances can then be handled quite easily by SMT solvers. Notably, our reduction to QP is sound and semi-complete, i.e. it is complete if polynomials of a sufficiently high degree are used in the templates. Thus, we provide the first method for sketching-based synthesis of polynomial programs that does not sacrifice completeness, while being scalable enough to handle meaningful programs. Finally, we provide experimental results over a variety of examples from the literature.
Keywords:
program synthesis, sketching, syntax-guided synthesis

1 Introduction
An imperative program is a sequence of instructions in a programming language that manipulates memory to solve a problem. Writing the program is usually done by a programmer who first comes up with a high-level idea for the algorithm and then implements it by writing a program. Synthesis makes the latter part easier: given a logical specification for the program, we can synthesize a program that satisfies it. The Syntax-Guided Synthesis problem (SyGuS) modulo a theory is the general problem of synthesizing a function given a semantic constraint, expressed as a formula built from symbols in the theory and , and a syntactic constraint, given as a (possibly infinite) set of expressions from specified using a context-free grammar.
In this paper, we focus on the Syntax-Guided program synthesis problem modulo the theory of Polynomial Arithmetic (over the reals).
The treatment here generalises the methods for program synthesis used in [srivastava2013template] for linear programs to polynomial programs. Our main tools are classical theorems from Real Algebraic Geometry [bochnak2013real]: Handelman’s Theorem (Theorem 4.3), Putinar’s Positivstellensatz (Theorem 4.4) and the Real Nullstellensatz (Theorem 4.5). We provide a sound and semi-complete synthesis algorithm for polynomial programs.
Outline of the paper: In Section 2, we define the notions of programs, semantics and templates, and give a mathematically precise formulation of the problem in terms of constraint pairs. In Section 4.2, we present classical theorems from Real Algebraic Geometry required for the design of the synthesis algorithm, as well as Theorem 4.6, which is a new mathematical contribution of this paper. In Section LABEL:synthesis_algo, we present our synthesis algorithm and prove its soundness and semi-completeness. Finally, in Section LABEL:secion_proof_of_concept, we present a proof of concept by providing experimental results on small programs.
2 Template-based Synthesis of Polynomial Programs
We consider a program as a sequence of instructions, as in an imperative programming language. For instance, consider the following example program:
Example 1
The program variables in the above example program are , and we say that the program is defined over the variables . Note that we have a precondition and a postcondition in our example program, which are polynomial assertions over the program variables. A program is said to be valid if, for every valuation of the program variables that satisfies the precondition, the valuation obtained after executing the program satisfies the postcondition. The programmer may sometimes have a specification of a program via a precondition and postcondition, together with a partial implementation. We call such a partial implementation a program template. The synthesis problem is to complete the partial implementation and generate a concrete valid program.
In this paper, we restrict ourselves to the so-called polynomial programs over a set of variables that can be generated from the following grammar:
where denotes any polynomial expression over the variables with real coefficients. The semantics of each instruction is as usual, except for , which is a while loop with guard for the block of code , where is a loop invariant. In general, the loop invariant should be strong enough to prove the correctness of the program with the given precondition and postcondition.
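Since the grammar and its instances appear only abstractly above, the following hypothetical sketch (it is not the paper's Example 1, whose details are elided) illustrates what a polynomial program looks like: every assignment, guard, precondition, postcondition and loop invariant is a polynomial (in)equality over the program variables.

```python
def square_by_odd_sums(n):
    # Precondition (polynomial assertion): n >= 0
    assert n >= 0
    s, i = 0, 0
    while i < n:                 # polynomial guard: n - i > 0
        # Loop invariant (polynomial assertion): s == i*i and i <= n
        assert s == i * i and i <= n
        s = s + 2 * i + 1        # polynomial assignment
        i = i + 1
    # Postcondition (polynomial assertion): s == n*n
    assert s == n * n
    return s
```

A synthesizer given this program with the assignment `s = s + 2*i + 1` replaced by a degree-1 hole in `i` would have to discover the coefficients 2 and 1.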
Let the set of program variables be and let be any valuation of . Given a polynomial over and a valuation , we write for the evaluation of the polynomial obtained by substituting each with . On executing an instruction , the valuation of the variables becomes . We define the semantics of the program recursively in Table 2.
Expression | Final Valuation |
---|---|
and | |
if then else | |
if and , then where and else |
Example 2
Consider the following template , one realization of which is the program from Example 1. The synthesis algorithm replaces the holes with suitable polynomials, if they exist, such that the resulting program satisfies the constraints given by the precondition and postcondition.
A template program specifies the intuition behind the final program and has holes at the locations where the programmer expects the synthesizer to fill in appropriate expressions. A hole can be thought of as a hint that the programmer provides to synthesize a correct program. In this example template program, we have two polynomial function symbols . , called a program hole, represents a symbolic polynomial P over the variables with total degree and coefficients from the template variable set . Given a template program, our goal is to synthesize values for all the template variables such that the resulting program is valid.
In the following we formalize what we mean by partial implementation, or template. A template of the polynomial program over the variables and set of polynomial function symbol is defined using the following grammar:
The polynomial function symbols can be used by the programmer to specify either a concrete polynomial or a symbolic polynomial. The programmer can also evaluate a polynomial function symbol using a substitution map , i.e. a map that can be applied to polynomials and function symbols to obtain another polynomial expression.
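As a small illustration (with hypothetical names, since the paper's own symbols are elided), a symbolic polynomial and a substitution map can be modeled as follows: the hole's coefficients are template variables, and applying the map yields a concrete polynomial.

```python
# A symbolic polynomial in one variable x is a map {power: coefficient},
# where a coefficient is either a number or a template-variable name.
def apply_substitution(sym_poly, subst):
    """Apply a substitution map: replace template variables by values."""
    return {p: (subst[c] if isinstance(c, str) else c)
            for p, c in sym_poly.items()}

def evaluate(poly, x):
    """Evaluate a concrete polynomial at a real point x."""
    return sum(c * x**p for p, c in poly.items())

# Hypothetical hole P(x) = c0 + c1*x + c2*x^2 with template variables c0..c2.
hole = {0: "c0", 1: "c1", 2: "c2"}
model = {"c0": 1.0, "c1": 0.0, "c2": 2.0}   # a model of the template variables
concrete = apply_substitution(hole, model)   # the polynomial 1 + 2*x^2
```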
Though this representation is programmer-friendly, to describe our algorithm succinctly we use an equivalent representation of programs and template programs: we introduce the notions of polynomial transition systems and symbolic polynomial transition systems, which are respectively equivalent to polynomial programs and symbolic polynomial programs.
Example 3
In Figure 1, we graphically describe the symbolic polynomial transition system corresponding to the template program defined in Example 2. The locations and are the initial and final locations, respectively. The set is the cutset.

We first define a few useful terms, and then define polynomial variants of the linear transition systems and linear control-flow graphs that were defined in [colon2003linear].
Definition 1 (Symbolic Polynomial Assertions)
Given a set of program variables and of template variables, we define to be the set of finite boolean combinations of polynomial inequalities over the program variables, with coefficients being real polynomials over the template variables. More precisely, is the set of all finite boolean combinations of inequalities of the form such that .
We can now define a polynomial transition system.
Definition 2 (Polynomial transition system)
A polynomial transition system is a tuple that consists of a set of variables , a set of locations , an initial location , an initial assertion , a final location , a final assertion and a set of transitions . Each transition is a tuple , where and are the pre and post locations, and is a polynomial assertion over , where represents the current-state variables and its primed version represents the next-state variables.
A symbolic polynomial transition system over the program variables and has the same structure as a polynomial transition system, except that its assertions are allowed to be symbolic polynomial assertions from .
Definition 3 (Symbolic polynomial transition system)
A symbolic polynomial transition system is a tuple that consists of a set of variables , a set of template variables , a set of locations , an initial location , an initial symbolic polynomial assertion , a final location , a final symbolic polynomial assertion and a set of transitions . Each transition is a tuple , where and are the pre and post locations, and is a symbolic polynomial assertion over , where represents the current-state variables and its primed version represents the next-state variables. All the coefficients of symbolic polynomial assertions over belong to the polynomial ring .
A control-flow graph corresponds to the underlying structure of a given transition system.
Definition 4 (Control-flow graph)
Given a transition system , we define the control-flow graph with the locations as the vertex set and the transitions as the edge set. More precisely, we have an edge from to if and only if there is a transition for some .
We now define the notions of cutset and basic path, which will help us define the inductive polynomial assertion map.
Definition 5 (Cutset)
Given a transition system , a subset of vertices of is called a cutset if every cyclic path in passes through some vertex in . An element of is called a cutpoint.
Definition 6 (Basic path)
Given a transition system , a path between two cutpoints and is called a basic path if it does not pass through any other cutpoint in .
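Definitions 5 and 6 can be checked mechanically: a set is a cutset exactly when deleting its vertices from the control-flow graph leaves no cycle. The sketch below (with a hypothetical graph and names, for illustration only) tests this property with a simple DFS.

```python
def is_cutset(vertices, edges, cutset):
    """C is a cutset iff every cyclic path visits C, i.e. iff the
    control-flow graph becomes acyclic once C's vertices are removed."""
    remaining = set(vertices) - set(cutset)
    adj = {v: [] for v in remaining}
    for u, v in edges:
        if u in remaining and v in remaining:
            adj[u].append(v)
    # DFS-based cycle detection on the reduced graph.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in remaining}
    def has_cycle(u):
        color[u] = GRAY
        for w in adj[u]:
            if color[w] == GRAY or (color[w] == WHITE and has_cycle(w)):
                return True
        color[u] = BLACK
        return False
    return not any(color[v] == WHITE and has_cycle(v) for v in remaining)

# Hypothetical CFG: l0 -> l1, a self-loop on l1, l1 -> l2.
V = ["l0", "l1", "l2"]
E = [("l0", "l1"), ("l1", "l1"), ("l1", "l2")]
```

Here `{"l1"}` is a cutset (removing it breaks the only cycle), while the empty set is not.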
Definition 7 (Inductive polynomial assertion map)
Given a transition system with a cutset and an assertion for each cutpoint , we say that is an inductive polynomial assertion map for if it satisfies the following conditions for all cutpoints :
• Initiation: For each basic path from to , .
• Consecution: For each basic path from to , .
• Finalization: For each basic path from to , .
An inductive polynomial assertion map is said to be symbolic if it contains one or more symbolic polynomial assertions. The definitions of control-flow graph, cutset and basic path also extend naturally to symbolic polynomial transition systems. Given a model of the template variables, let and denote the concrete polynomial transition system and concrete inductive polynomial assertion map obtained by substituting the values of the template variables in the respective templates.
2.1 Problem Statement
Let be a symbolic polynomial transition system over the program variables and the set of template variables , let be a cutset of , and let be the symbolic polynomial inductive assertion map.
The synthesis problem for the tuple is to find a valuation of the template variables such that is a valid inductive polynomial assertion map for the concrete transition system .
3 Hardness
?? TODO: FILL THIS IN
4 Algorithms
4.1 Overview of our approach
The programmer first writes a template in the grammar defined at the beginning of the section. In the pre-processing step, the algorithm first verifies the syntax of the program and substitutes the program holes with symbolic polynomials of the specified degree. We call the program generated by the pre-processing step a symbolic polynomial program.
A constraint pair is a pair where is a set of polynomial inequalities and is a polynomial inequality of the form with . We can reduce the correctness of the symbolic polynomial program to the validity of a set of constraint pairs .
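Concretely, a constraint pair can be represented as a list of premise inequalities together with one conclusion inequality. A minimal sketch (polynomials kept as opaque strings, and all names hypothetical):

```python
from dataclasses import dataclass

@dataclass
class ConstraintPair:
    """A standard constraint: if every inequality in `premises` holds,
    then `conclusion` must hold. In the real algorithm the entries are
    symbolic polynomial inequalities; strings suffice for illustration."""
    premises: list
    conclusion: str

# Hypothetical initiation constraint for an invariant s == i*i:
initiation = ConstraintPair(premises=["n >= 0", "i == 0", "s == 0"],
                            conclusion="s == i*i")
```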
We describe our approach to constructing the constraint pairs via the following example.
Example 4
We are given the symbolic polynomial transition system described in Example 1, the cutset and the symbolic polynomial assertion map . Say . We create three constraint pairs corresponding to the various restrictions on required for it to be a valid inductive assertion map.
1. Initiation: There is only one basic path from the initial location to . We get the following constraint pair for the path :
2. Consecution: There is one basic path from to . The constraint pair for this basic path is:
3. Finalization: The path is the only basic path from the cutset to the final location . For this path, we get the following constraint pair:
For each basic path from to ,
We get the three constraint pairs .
Theorem 4.1
The number of constraint pairs generated by our approach is at most quadratic in the number of locations if the given cutset contains all the locations, i.e., . Moreover, if the cutset , then the number of constraint pairs is linear in the number of basic paths between each pair of locations in the cutset.
Proof
The proof follows directly from the construction.
4.2 Mathematical Toolkit
In Section LABEL:sec:algo:overview above, we provided an overview of our algorithm. However, the details of Step 3, i.e. the reduction to QP, were not presented since they depend on certain mathematical prerequisites. In this section, we provide the mathematical tools and theorems that are crucial for this step of the algorithm. We first recall some notation and classical definitions. Then, we present several theorems from polyhedral and real algebraic geometry. Finally, we obtain tailor-made versions of these theorems in a format that can be used in Step 3 of our algorithm above. We refer to [hartshorne2013algebraic, bochnak2013real] for a more detailed treatment of these theorems.
Sums of Squares
A polynomial is a sum of squares (SOS) iff there exist and polynomials such that
Strong Positivity
Given a set and a polynomial , we say that is strongly positive over X if . We write this as
Notation
Let be a set of polynomial inequalities where and . We define as the set of all real valuations over the variables that satisfy More formally,
Closure
Given a set , we define to be the closure of with respect to the Euclidean topology of . For a set of polynomial inequalities, we define as the system of polynomial inequalities obtained from by replacing every strict inequality with its non-strict counterpart.
We are now ready to present the main mathematical theorems that will be used in our work. Our presentation follows that of [goharshady2020parameterized, Section 2.6], which also contains proofs of corollaries that are not proven here.
Theorem 4.2 (Farkas’ Lemma [farkas1902theorie])
Consider a set of real-valued variables and the following system of equations over :
When is satisfiable, it entails a linear inequality
if and only if can be written as a non-negative linear combination of the inequalities in and the trivial inequality , i.e. if there exist non-negative real numbers such that
Moreover, is unsatisfiable if and only if can be derived as above.
The importance of Farkas’ Lemma for us is that if we have a standard constraint of the form (LABEL:eq:entailment-form) in which all inequalities are linear/affine, then we can use this lemma in Step 3 of our algorithm to reduce the standard constraint to QP, just as we did in Section LABEL:sec:algo:overview. Moreover, Farkas’ Lemma guarantees that this approach is not only sound but also complete. A corner case that we have to consider is when is itself unsatisfiable and thus holds vacuously. Fortunately, Farkas’ Lemma also provides a criterion for unsatisfiability. In practice, we work with the following corollary of Theorem 4.2, which can also handle strict inequalities.
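As a tiny worked instance of Farkas' Lemma (with made-up numbers, not taken from the paper), consider premises x - 1 >= 0 and y - 2 >= 0: the entailed inequality x + y - 3 >= 0 is witnessed by the non-negative multipliers (1, 1, 0).

```python
# Affine inequalities a1*x + a2*y + b >= 0 stored as coefficient tuples (a1, a2, b).
def nonneg_combination(lhs_ineqs, multipliers):
    """Sum of lambda_i times the i-th inequality; all lambda_i must be >= 0."""
    assert all(m >= 0 for m in multipliers)
    return tuple(sum(m * row[k] for m, row in zip(multipliers, lhs_ineqs))
                 for k in range(len(lhs_ineqs[0])))

A = [(1, 0, -1),    # x - 1 >= 0
     (0, 1, -2),    # y - 2 >= 0
     (0, 0, 1)]     # the trivial inequality 1 >= 0
target = (1, 1, -3)  # x + y - 3 >= 0
# The Farkas multipliers (1, 1, 0) witness the entailment:
assert nonneg_combination(A, [1, 1, 0]) == target
```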
Corollary 1
Consider a set of real-valued variables and the following system of equations over :
where for all . When is satisfiable, it entails a linear inequality
with , if and only if can be written as a non-negative linear combination of the inequalities in and the trivial inequality . Note that if is strict, then at least one of the strict inequalities should appear with a non-zero coefficient in the linear combination. Moreover, is unsatisfiable if and only if either or can be derived as above.
We now consider extensions of Farkas’ Lemma which can help us handle non-linear standard constraints in Step 3. The first extension is Handelman’s theorem, which can be applied when the inequalities on the left hand side of (LABEL:eq:entailment-form) are linear/affine, but the right hand side is a polynomial of arbitrary degree. To present the theorem, we first need the concept of monoids.
Monoid
Consider a set of real-valued variables and the following system of linear inequalities over :
where for all . Let be the left hand side of the -th inequality, i.e. . The monoid of is defined as:
In other words, the monoid contains all polynomials that can be obtained as a product of the ’s. Note that . We define as the subset of polynomials in of degree at most .
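The bounded monoid can be enumerated directly. A minimal sketch for a hypothetical one-variable system A = {x >= 0, 1 - x >= 0}, so that Sat(A) = [0, 1] (names and the example system are ours, for illustration):

```python
from itertools import combinations_with_replacement

def monoid_degree_d(generators, d):
    """Enumerate Monoid_d(A): all products of at most d of the
    left-hand sides g_i; the empty tuple () encodes the product 1."""
    out = [()]
    for k in range(1, d + 1):
        out.extend(combinations_with_replacement(range(len(generators)), k))
    return out

def eval_product(generators, combo, x):
    """Evaluate one monoid element (a tuple of generator indices) at x."""
    r = 1.0
    for i in combo:
        r *= generators[i](x)
    return r

# Hypothetical system A = {x >= 0, 1 - x >= 0}: each product, e.g.
# x*(1 - x), is non-negative on Sat(A) = [0, 1].
gens = [lambda x: x, lambda x: 1.0 - x]
```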
Theorem 4.3 (Handelman’s Theorem [handelman1988representing])
Consider a set of real-valued variables and the following system of equations over :
If is satisfiable, is compact, and entails a polynomial inequality then there exist non-negative real numbers and polynomials such that:
The intuition here is that if every inequality in holds, then all the LHS expressions in are non-negative, and hence any product of them is also non-negative. As in the case of Farkas’ Lemma, Handelman’s theorem shows that this approach is not only sound but also complete. We also need a variant that can handle strict inequalities in .
Corollary 2
Consider a set of real-valued variables and the following system of equations over :
in which for all . If is satisfiable and is bounded, then entails a strong polynomial inequality if and only if there exist constants and , and polynomials such that:
Corollary 2 above can handle a wider family of standard constraints than Corollary 1. However, it is also more expensive, since we now need to generate one new variable for every polynomial in instead of itself. Moreover, there is no bound on in the theorem itself, so introducing leads to semi-completeness instead of completeness, i.e. the approach is complete only if a large enough value of is used. As such, in cases where both sides of (LABEL:eq:entailment-form) are linear, Corollary 1 is preferable. We now consider a more expressive theorem that can handle polynomials on both sides of (LABEL:eq:entailment-form).
Theorem 4.4 (Putinar’s Positivstellensatz [putinar1993positive])
Given a finite collection of polynomials , let be the set of inequalities defined as
If entails the polynomial inequality and there exist some such that is compact, then there exist polynomials such that
and every is a sum of squares. Moreover, is unsatisfiable if and only if can be obtained as above, i.e. with .
As in the cases of Farkas and Handelman, we need a variant of Theorem 4.4 that can handle strict inequalities in .
Corollary 3
Consider a finite collection of polynomials and let
where for all . Assume that there exist some such that is compact or, equivalently, is bounded. If is satisfiable, then it entails the strong polynomial inequality if and only if there exist a constant and polynomials such that
and every is a sum of squares.
Trying to use the corollary above for handling standard constraints of form (LABEL:eq:entailment-form) in Step 3 of our algorithm leads to two problems: (i) we also need a criterion for unsatisfiability of to handle the cases where holds vacuously, and (ii) in our QP, we should somehow express the property that every is a sum of squares. We now show how each of these challenges can be handled. To handle (i), we need another classical theorem from real algebraic geometry.
Theorem 4.5 (The Real Nullstellensatz [bochnak2013real, Corollary 4.1.8])
Given polynomials , exactly one of the following two statements holds:
There exists such that , but .
There exists and polynomials such that and is a sum of squares.
We now combine the Real Nullstellensatz with Putinar’s Positivstellensatz to obtain a criterion for the unsatisfiability of .
Theorem 4.6
Consider a finite collection of polynomials and the following system of inequalities:
where for all . is unsatisfiable if and only if at least one of the following statements holds:
There exist a constant and sum of square polynomials such that
(1)
There exist and , such that for some with , we have
(2)
where is a sum of squares in . Note that are new variables.
Proof
First, we show that if either of the equalities (1) or (2) holds, then is unsatisfiable. Suppose is satisfiable and pick . Then the RHS of (1) is positive at whereas the LHS is negative, so (1) cannot hold. Now define . Using this definition, we can rewrite (2) as . Moreover, . Expanding the right-hand side using the binomial theorem, we get for some . Now, substituting into (2), we get
(3)
Let us extend , which is a valuation of , to a valuation over such that for all . Note that such an extension is always possible. We obtain a contradiction by evaluating (3) at , as the LHS is positive whereas the RHS is negative.
We now prove the other direction. Suppose is unsatisfiable. There are two possibilities: either is also unsatisfiable, or is satisfiable. If is unsatisfiable, then by Theorem 4.4, entails and we can write for sum-of-squares polynomials . Therefore,
which fits into (1). We are now left with the case where is satisfiable but is unsatisfiable. We first reorder the inequalities in so that the non-strict inequalities appear first. Let be the smallest index for which , i.e. the set of the first inequalities in , is unsatisfiable. By definition, is satisfiable and hence . We can rewrite . As is unsatisfiable, we know that entails . More precisely, this means . Taking closures on both sides, we get . This implies that entails and hence entails . As defined above, . We now show that there is no valuation over the variables such that for all but . Suppose there exists such a valuation . Let be the restriction of to . For each , we get , as . We also get . Hence, we obtain a contradiction with the previous result that entails . Applying the Real Nullstellensatz (Theorem 4.5) to the ’s and , we have
where is a non-negative integer and the ’s and are sums of squares in . Using the definition of and the binomial theorem, we get for some . Therefore, we finally obtain the following expression:
which fits into the format of (2), hence completing the proof.
Finally, we provide the needed theorems to check that a certain polynomial is a sum of squares using QP.
Theorem 4.7 ([blekherman2012semidefinite, Theorem 3.39])
Let be the vector of all monomials of degree less than or equal to over the variables . A polynomial of degree is a sum of squares if and only if there exists a positive semidefinite matrix of order such that
Theorem 4.8 (Cholesky decomposition [watkins2004fundamentals])
A symmetric square matrix is positive semidefinite if and only if it has a Cholesky decomposition of the form where is a lower-triangular matrix with non-negative diagonal entries.
Based on Theorems 4.7 and 4.8, a polynomial of degree is a sum of squares if and only if it can be written as such that the diagonal entries of are non-negative. This representation provides a simple approach to generating a sum-of-squares polynomial of degree with symbolic coefficients and encoding it in QP. We first generate a lower-triangular matrix of order by creating one fresh variable for each entry in the lower triangle and adding the extra condition that the entries on the diagonal must be non-negative. Then, we symbolically compute .
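For a concrete instance (a made-up polynomial, not from the paper), p(x, y) = x^2 + 4xy + 5y^2 is a sum of squares: its Gram matrix over the monomial vector m = (x, y) is positive semidefinite, and the Cholesky factor reads off the squares.

```python
import numpy as np

# Gram matrix Q with p = m^T Q m for m = (x, y), where
# p(x, y) = x^2 + 4*x*y + 5*y^2.
Q = np.array([[1.0, 2.0],
              [2.0, 5.0]])
L = np.linalg.cholesky(Q)           # Q = L @ L.T, L lower-triangular
# The rows of L.T give the squares: p = (x + 2y)^2 + y^2.

def p(x, y):
    return x * x + 4 * x * y + 5 * y * y

def sos_value(x, y):
    v = L.T @ np.array([x, y])      # vector whose squared norm is p(x, y)
    return float(v @ v)
```

In the algorithm, the entries of L are fresh QP variables with a non-negative diagonal, and the identity p = m^T (L L^T) m is imposed symbolically by equating coefficients.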
4.3 Details of Step 3 of the Algorithm
We now have all the necessary ingredients to provide a variant of Step 3 of the algorithm that preserves completeness. We assume a positive integer is given as part of the input. This serves as an upper-bound on the degrees of polynomials/templates that we use in Handelman’s theorem (the monoid) and the Stellensätze. Our approach is complete as long as a large enough is chosen.
Step 3: Eliminating Program Variables and Reduction to QP
Recall that at the end of Step 2, we have a finite set of standard constraints of form (LABEL:eq:entailment-form). In this step, the algorithm handles each standard constraint separately and reduces it to quadratic programming over template variables and newly-introduced variables, hence effectively eliminating the program variables and the quantification over them. Let be one of the standard constraints. The algorithm considers three cases: (i) if all the inequalities on both sides of the constraint are affine, then it applies Farkas’ Lemma; (ii) if the LHS inequalities are affine but the RHS is a higher-degree polynomial, then the algorithm applies Handelman’s theorem; and (iii) if the LHS contains higher-degree polynomials, the algorithm applies the Stellensätze and Theorem 4.6. Below, we define and
Step 3.(i). Applying Farkas’ Lemma
Assuming all the constraints in and are affine, we can apply Corollary 1. Based on this corollary, we have to consider three cases disjunctively:
is satisfiable and entails : The algorithm creates new template variables with the constraint for every . It then symbolically computes . The latter is a polynomial equality over . So, the algorithm equates the coefficients of corresponding monomials on both sides, hence reducing this case to QP. Additionally, if is a strict inequality, the algorithm adds the extra constraint .
is unsatisfiable and can be obtained: This is similar to the previous case, except that should be written as
is unsatisfiable and can be obtained: This is also similar to the previous two cases. We have and . Note that the template variables are freshly generated in each case above. Also, we have to consider cases (2) and (3) because is unsatisfiable in these cases and hence the constraint holds vacuously.
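"Equating the coefficients of corresponding monomials" turns the symbolic identity into numeric constraints on the multipliers. The toy sketch below (all numbers are ours, for illustration) matches the coefficients of x, y and 1 and recovers the Farkas multipliers; in the actual algorithm the multipliers stay symbolic and the resulting equalities, together with non-negativity, are handed to a QP/SMT solver.

```python
import numpy as np

# Coefficient rows (coeff of x, coeff of y, constant) of the premises:
A_rows = np.array([[1.0, 0.0, -1.0],    # x - 1 >= 0
                   [0.0, 1.0, -2.0],    # y - 2 >= 0
                   [0.0, 0.0,  1.0]])   # trivial 1 >= 0
target = np.array([1.0, 1.0, -3.0])     # entailed: x + y - 3 >= 0

# Equating coefficients of x, y and 1 gives the linear system
# A_rows^T @ lam = target; a QP solver would additionally require
# lam >= 0. Here a least-squares solve recovers lam exactly.
lam, *_ = np.linalg.lstsq(A_rows.T, target, rcond=None)
```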
Step 3.(ii). Applying Handelman’s Theorem
Assuming all constraints in are linear but is a higher-degree polynomial inequality, the algorithm applies Corollaries 1 and 2. Again, we have to consider the same three cases as in Step 3.(i):
is satisfiable and entails : We apply Corollary 2. The algorithm first symbolically computes . It then generates new template variables and constrains them by setting . If is a strict inequality, it further adds the constraint . It then symbolically computes the equality
As usual, this is an equality in which both sides are polynomials over . So, the algorithm equates the coefficients of corresponding monomials on the LHS and RHS, which reduces this case to QP.
Note that consists of linear inequalities, so we can use Farkas’ Lemma to check if is unsatisfiable. As such, this step is the same as case (2) of Step 3.(i).
This is the same as case (3) of Step 3.(i).
Step 3.(iii). Applying Stellensätze
If includes polynomial inequalities of degree or larger, then we have to apply Corollary 3 and Theorem 4.6. The algorithm considers three cases and combines them disjunctively:
is satisfiable and entails : In this case, we apply Corollary 3. The algorithm generates template sum-of-squares polynomials of degree and adds QP constraints that ensure each is a sum of squares (see the end of Section 4.2). It also generates a non-negative fresh variable . If is strict, the algorithm adds the constraint . Finally, the algorithm symbolically computes
and equates the corresponding coefficients in the LHS and RHS to obtain QP constraints.
is unsatisfiable due to the first condition of Theorem 4.6: This case is handled similarly to case (1) above, except that we have
is unsatisfiable due to the second condition of Theorem 4.6: The algorithm introduces new program variables . It then generates a sum-of-squares template polynomial over and arbitrary template polynomials over . All ’s are degree- templates. Finally, for every index that corresponds to a strict inequality, i.e. , the algorithm symbolically computes
in which is the largest multiple of that does not exceed . Note that this is an equality between two polynomials over . As before, the algorithm equates the coefficients of corresponding monomials on both sides of the equality and reduces it to QP.
After the algorithm runs Step 3 as above, all standard constraints generated in Step 2 will be reduced to QP and can hence be passed to an external solver in Step 4, as illustrated in Section LABEL:sec:algo:overview.
Degree bounds
We used the same bound for the degrees of the polynomials and templates in all cases above. This is not a requirement; one can fix a different degree bound for each case.
Handling Boundedness
To achieve completeness, we need the boundedness assumption, i.e. that for every variable we always have . To model this in the algorithm, we can add the boundedness inequalities to the left-hand side of every standard constraint. Additionally, we can either take a concrete value for as part of the input, or treat symbolically, i.e. as a template variable, and let the QP solver synthesize a value for it.
4.4 Soundness and Completeness
We now prove that our algorithm is sound and semi-complete for TBSP. The soundness result needs no extra assumptions and can be obtained directly. The completeness, on the other hand, relies on several assumptions: (i) boundedness, (ii) a large enough degree bound , and (iii) invariants and post-conditions that, when written in DNF, consist only of strongly positive polynomial inequalities of the form .
Theorem 4.9 (Soundness)
Given a sketch polynomial program or equivalently a sketch polynomial transition system (SPTS) of the form together with a cutset as input, every concrete polynomial transition system (PTS) synthesized by the algorithm above is inductively valid.
Proof
The standard constraints of form (LABEL:eq:entailment-form) generated in Step 2 are equivalent to the initiation, consecution and finalization constraints in the definition of inductive validity. The reduction from standard constraints to QP in Step 3 is also sound since, in every case, it either writes the RHS of the standard constraint as a combination of the LHS polynomials, hence proving that it holds, or otherwise proves that the LHS is unsatisfiable and thus the standard constraint holds vacuously.
Theorem 4.10 (Semi-completeness)
Consider a solvable sketch polynomial transition system (SPTS) of the form together with a cutset that is given as input. Moreover, assume that:
The boundedness assumption holds, i.e. there is a constant such that for every we always have
Every invariant and post-condition, when written in disjunctive normal form, contains only strongly positive polynomial inequalities of the form . Then there exists a constant degree bound for which the algorithm above is guaranteed to successfully synthesize an inductively valid polynomial transition system (PTS).
Proof
Since our instance is solvable, there is a valuation of the template variables that yields an inductively valid PTS. We prove that for a large enough , this valuation is obtained as one of the solutions of the QP instance solved in Step 4 of the algorithm. The proof is straightforward: we just have to check that all steps of our algorithm are complete. Step 2 is complete since it simply rewrites the inductive validity constraints as an equivalent set of standard constraints. For Step 3, we prove completeness of each case separately. Step 3.(i) is complete due to Farkas’ Lemma (Corollary 1). In Step 3.(ii), if is unsatisfiable, then the algorithm is complete based on Corollary 1; otherwise, it is complete based on Corollary 2. However, since we are using a degree bound for the monoid, completeness only holds if the chosen is large enough. Moreover, Corollary 2 requires strong positivity of , which corresponds to invariants and post-conditions in our use-case, and boundedness of , which is a direct consequence of our boundedness assumption. Finally, Step 3.(iii) is complete due to Corollary 3 and Theorem 4.6. These depend on , strong positivity and boundedness in exactly the same manner as Step 3.(ii).
Limitations of Completeness
The main limitation of our completeness result is that it holds only if the degree bound chosen for the template polynomials is large enough. This is why we call it a semi-completeness theorem. In theory, it is possible to come up with adversarial instances in which the required degree is exponentially high [mai2022complexity]. In practice, we rarely, if ever, need to use a higher degree than that of the polynomials that are already part of the input SPTS. The second limitation is boundedness. This limitation cannot be lifted, since both Handelman’s Theorem and Putinar’s Positivstellensatz assume compactness, which is equivalent to being closed and bounded in . Nevertheless, it does not have a significant practical effect, and the algorithm remains sound even without this assumption. It is also noteworthy that the treatment of the linear/affine case using Farkas’ Lemma requires neither boundedness nor any specific value of and is always complete. Finally, strong positivity in the invariants and post-conditions is needed because Putinar’s Positivstellensatz and Theorem 4.6 can only provide a sound and complete characterization of strongly positive polynomials over a bounded semi-algebraic set if we do not assume that itself is closed. In terms of the synthesis problem, this means that our algorithm is not guaranteed to be complete for inequalities of the form in the invariants/post-conditions for which the value of in the runs of the program can get arbitrarily close to . However, this limitation is also not significant in practice because (i) our soundness does not depend on it, and (ii) holds for any constant . So, a small change (by any value ) in the invariants/post-conditions leads to an instance over which our completeness holds.
References
5 Appendix
5.1 Examples used in Section LABEL:secion_proof_of_concept
These examples are motivated by the benchmarks provided in [rodriguez2018some].