Michal Ajdarów, Masaryk University, Czechia (https://www.muni.cz/lide/422654-michal-ajdarow)
Antonín Kučera, Masaryk University, Czechia (https://www.fi.muni.cz/usr/kucera/, [email protected], https://orcid.org/0000-0002-6602-8028)
Copyright Michal Ajdarów and Antonín Kučera. Subject classification: Theory of computation → Models of computation.
Funding: The work is supported by the Czech Science Foundation, Grant No. 21-24711S.
32nd International Conference on Concurrency Theory (CONCUR 2021), August 23–27, 2021, Virtual Conference. Editors: Serge Haddad and Daniele Varacca. Volume 203, Article No. 24.
Deciding Polynomial Termination Complexity for VASS Programs
Abstract
We show that for every fixed degree k ≥ 3, the problem whether the termination/counter complexity of a given demonic VASS is O(n^k), Ω(n^k), and Θ(n^k) is coNP-complete, NP-complete, and DP-complete, respectively. We also classify the complexity of these problems for k ≤ 2. This shows that the polynomial-time algorithm designed for strongly connected demonic VASS in previous works cannot be extended to the general case. Then, we prove that the same problems for VASS games are PSPACE-complete. Again, we classify the complexity also for k ≤ 2. Tractable subclasses of demonic VASS and VASS games are obtained by bounding certain structural parameters, which opens the way to applications in program analysis despite the presented lower complexity bounds.
Keywords: Termination complexity, vector addition systems
1 Introduction
Vector addition systems with states (VASS) are a generic formalism expressively equivalent to Petri nets. In program analysis, VASS are used to model programs with unbounded integer variables, parameterized systems, etc. Thus, various problems about such systems reduce to the corresponding questions about VASS. This approach’s main bottleneck is that interesting questions about VASS tend to have high computational complexity (see, e.g., [7, 14, 15]). Surprisingly, recent results (see below) have revealed computational tractability of problems related to the asymptotic complexity of VASS computations, allowing us to answer questions like “Does the program terminate in time polynomial in n for all inputs of size n?”, or “Is the maximal value of a given variable bounded by O(n^k) for all inputs of size n?”. These results are encouraging and may enhance the existing software tools for asymptotic program analysis such as SPEED [10], COSTA [1], RAML [11], Rank [2], Loopus [17, 18], AProVE [9], CoFloCo [8], C4B [6], and others. In this paper, we give a full classification of the computational complexity of deciding polynomial termination/counter complexity for demonic VASS and VASS games, and solve open problems formulated in previous works. Furthermore, we identify structural parameters making the asymptotic VASS analysis computationally hard. Since these parameters are often small in VASS program abstractions, this opens the way to applications in program analysis despite the established lower complexity bounds.
The termination complexity of a given VASS is a function assigning to every n the maximal length of a computation initiated in a configuration with all counters initialized to n. Similarly, the counter complexity of a given counter in the VASS is a function assigning to every n the maximal value of that counter along a computation initiated in a configuration with all counters set to n. So far, three types of VASS models have been investigated in previous works.
• Demonic VASS, where the non-determinism is resolved by an adversarial environment aiming to increase the complexity.
• VASS Games, where every control state is declared as angelic or demonic, and the non-determinism is resolved by the controller or by the environment aiming to lower and increase the complexity, respectively.
• VASS MDPs, where the states are either non-deterministic or stochastic. The non-determinism is usually resolved in the “demonic” way.
Let us note that the “angelic” and “demonic” non-determinism are standard concepts in program analysis [5] applicable to arbitrary computational devices including VASS. The use of VASS termination/counter complexity analysis is illustrated in the next example.
Example 1.1.
Consider the program skeleton of Fig. 1 (left). Since a VASS cannot directly model the assignment j:=i and cannot test a counter for zero, the skeleton is first transformed into an equivalent program of Fig. 1 (middle), where the assignment j:=i is implemented using an auxiliary variable Aux and two while loops. Clearly, the execution of the transformed program is only longer than the execution of the original skeleton (for all inputs). For the transformed program, an over-approximating demonic VASS model is obtained by replacing conditionals with non-determinism, see Fig. 1 (right). When all counters are initialized to , the VASS terminates after transitions. Hence, the same upper bound is valid also for the original program skeleton. Actually, the run-time complexity of the skeleton is where is the initial value of i, so the obtained upper bound is asymptotically optimal.
Figure 1 (left): the program skeleton.

    input i; while(i>0) i:=i-1; j:=i; while(j>0) j:=j-1;

Figure 1 (middle): the transformed program.

    input i; while(i>0) i--; j:=0; Aux:=0; while(i>0) i--; j++; Aux++; while(Aux>0) i++; Aux--; while(j>0) j--;
Existing results. In [4], it is shown that the problem whether for a given demonic VASS is solvable in polynomial time, and a complete proof method based on linear ranking functions is designed. The polynomiality of termination complexity for a given demonic VASS is also decidable in polynomial time, and if for any , then [13]. The same results hold for counter complexity. In [19], a polynomial time algorithm computing the least such that for a given demonic VASS is presented (the algorithm first checks if such a exists). It is also shown that if , then . Again, the same results hold also for counter complexity. The proof is actually given only for strongly connected demonic VASS, and it is conjectured that a generalization to unrestricted demonic VASS can be obtained by extending the presented construction (see the Introduction of [19]). In [12], it was shown that the problem whether the termination/counter complexity of a given demonic VASS belongs to a given level of the Grzegorczyk hierarchy is solvable in polynomial time, and the same problem for VASS games is shown -complete. The upper bound follows by observing that player Angel can safely commit to a counterless strategy (a strategy is counterless if the decision depends just on the control state of the configuration currently visited) when minimizing the complexity level in the Grzegorczyk hierarchy. Intuitively, this is because Grzegorczyk classes are closed under function composition (unlike the classes ). Furthermore, the problem whether for a given VASS game is shown hard, but the decidability of this problem is left open. As for VASS MDPs, the only existing result is [3], where it is shown that the linearity of termination complexity is solvable in polynomial time for VASS MDPs with a tree-like MEC decomposition.
Our contribution. For demonic VASS, we refute the conjecture of [19] and prove that for general (not necessarily strongly connected) demonic VASS, the problem whether
• is in for , and -complete for ;
• is in for , and -complete for ;
• is in for , -complete for , and -complete for ;
• is -complete;
• is in for , and -complete for ;
• is -complete for , and -complete for .
Since the demonic VASS constructed in our proofs are relatively complicated, we write them in a simple imperative language with a precisely defined VASS semantics. This allows us to present the overall proof idea clearly and justify technical correctness by following the control flow of the VASS program, examining possible side effects of the underlying “gadgets”, and verifying that the Demon does not gain anything by deviating from the ideal execution scenario.
When proving the upper bounds, we show that every path in the DAG of strongly connected components can be associated with the (unique) vector describing the maximal simultaneous increase of the counters. Here, the counters pumpable to exponential (or even larger) values require special treatment. We show that this vector is computable in polynomial time. Hence, the complexity of a given counter is iff there is a path in the DAG such that the associated maximal increase of is . Thus, we obtain the upper bound, and the other upper bounds follow similarly. The crucial parameter characterizing hard-to-analyze instances is the number of different paths from a root to a leaf in the DAG decomposition, and tractable subclasses of demonic VASS are obtained by bounding this parameter. We refer to Section 3 for more details.
Then, we turn our attention to VASS games, where the problem of polynomial termination/counter complexity analysis requires completely new ideas. In [12], it was observed that the information about the “asymptotic counter increase performed so far” must be taken into account by player Angel when minimizing the complexity level in the polynomial hierarchy, and counterless strategies are therefore insufficient. However, it is not clear what information is needed to make optimal decisions, and whether this information is finitely representable. We show that player Angel can safely commit to a so-called locking strategy. A strategy for player Angel is locking if whenever a new angelic state is visited, one of its outgoing transitions is chosen and “locked” so that when the state is revisited, the same locked transition is used. The locked transition choice may depend on the computational history and the transitions locked in previously visited angelic states. Then, we define a locking decomposition of a given VASS that plays a role similar to the DAG decomposition for demonic VASS. Using the locking decomposition, the existence of a suitable locking strategy for player Angel is decided by an alternating polynomial time algorithm (and hence in polynomial space). Thus, we obtain the following: For every VASS game and a counter , we have that are either in or in . Furthermore, the problem whether
• is -complete for and -complete for ;
• is in for , -complete for , and -complete for ;
• is -complete for and -complete for ;
• is -complete;
• is in for , and -complete for ;
• is -complete.
Similarly to demonic VASS, tractable subclasses of VASS games are obtained by bounding the number of different paths in the locking decomposition.
The VASS model constructed in Example 1.1 is purely demonic. The use of VASS games in program analysis/synthesis is illustrated in the next example.
Example 1.2.
Consider the program of Fig. 2. The condition at line 3 is resolved by the environment in a demonic way. The two branches of if-then-else execute code modifying the variables j and k. After that, the controller can choose one of the two while-loops at lines 8, 9 with the aim of keeping the value of z small. The question is how the size of z grows with the size of the input if the controller makes optimal decisions. A closer look reveals that when the variable i is assigned at line 1, then
• the values of j and k are and when the condition is evaluated to true;
• the values of j and k are and when the condition is evaluated to false.
Hence, the controller can keep z in if an optimal decision is taken. Constructing a VASS game model for the program of Fig. 2 is straightforward (the required gadgets are given in Fig. 3). Using the results of this paper, the above analysis can be performed fully automatically.
2 Preliminaries
The sets of integers and non-negative integers are denoted by ℤ and ℕ, respectively, and we use to denote . The vectors of where are denoted by , and the vector is denoted by .
Definition 2.1 (VASS).
Let . A -dimensional vector addition system with states (VASS) is a pair , where is a finite set of states and is a finite set of transitions such that for every there exist and such that .
The set is split into two disjoint subsets and of angelic and demonic states controlled by the players Angel and Demon, respectively. A configuration of is a pair , where is the vector of counter values. We often refer to counters by their symbolic names. For example, when we say that has three counters and the value of in a configuration is , we mean that and where is the index associated to . When the mapping between a counter name and its index is essential, we use to denote the counter with index .
A finite path in of length is a finite sequence such that for all . We use to denote the effect of , defined as . An infinite path in is an infinite sequence such that every finite prefix of is a finite path in .
A computation of is a sequence of configurations of length such that for every there is a transition satisfying . Note that every computation determines its associated path in the natural way.
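To make the semantics concrete, the following sketch (an illustration of ours, not the paper’s formal notation; the class and state names are hypothetical) encodes a VASS as a list of transitions and implements the computation semantics: a transition is enabled in a configuration iff applying its counter update keeps all counters non-negative.

    from typing import List, Tuple

    Vector = Tuple[int, ...]
    Transition = Tuple[str, Vector, str]   # (source state, counter update, target state)

    class VASS:
        def __init__(self, transitions: List[Transition]):
            self.transitions = transitions

        def enabled(self, state: str, counters: Vector) -> List[Transition]:
            # Transitions applicable in configuration (state, counters): the update
            # must not drive any counter below zero.
            return [(q, u, r) for (q, u, r) in self.transitions
                    if q == state and all(c + d >= 0 for c, d in zip(counters, u))]

        def step(self, state: str, counters: Vector, t: Transition) -> Tuple[str, Vector]:
            # Apply the (enabled) transition t and return the successor configuration.
            q, u, r = t
            assert q == state and t in self.enabled(state, counters)
            return r, tuple(c + d for c, d in zip(counters, u))

    # A 2-counter demonic VASS: state p moves counter 0 into counter 1, state q empties counter 1.
    A = VASS([("p", (-1, 1), "p"), ("p", (0, 0), "q"), ("q", (0, -1), "q")])
    conf, steps = ("p", (3, 0)), 0
    while A.enabled(*conf):
        conf = A.step(*conf, A.enabled(*conf)[0])   # resolve non-determinism greedily
        steps += 1
    print(conf, steps)   # ('q', (0, 0)) reached after 7 transitions

The greedy resolution above is just one possible strategy; the game-theoretic definitions below quantify over all strategies of both players.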
VASS Termination Complexity. A strategy for Angel (or Demon) in is a function assigning to every finite computation where (or ) a transition . Every pair of strategies for Angel/Demon and every initial configuration determine the unique maximal computation initiated in . The maximality means that the computation cannot be prolonged without making some counter negative. For a given counter , we use to denote the supremum of the ’s values in all configurations visited along . Furthermore, we use to denote the length of . Note that and len can be infinite for certain computations.
For every initial configuration , consider a game where the players Angel and Demon aim at minimizing and maximizing the or len objective, respectively. By applying standard game-theoretic arguments (see Appendix A), we obtain
(1)  $\sup_{\pi}\ \inf_{\sigma}\ \mathit{len}\bigl(\mathit{Comp}_{\sigma,\pi}\bigr) \;=\; \inf_{\sigma}\ \sup_{\pi}\ \mathit{len}\bigl(\mathit{Comp}_{\sigma,\pi}\bigr)$
(2)  $\sup_{\pi}\ \inf_{\sigma}\ \max_{c}\bigl(\mathit{Comp}_{\sigma,\pi}\bigr) \;=\; \inf_{\sigma}\ \sup_{\pi}\ \max_{c}\bigl(\mathit{Comp}_{\sigma,\pi}\bigr)$
where σ and π range over all strategies for Angel and Demon, respectively, and Comp denotes the maximal computation determined by the initial configuration and the chosen strategies. Hence, there exists a unique termination value of the initial configuration, denoted by , defined by (1). Similarly, for every counter there exists a unique maximal counter value, denoted by , defined by (2). Furthermore, both players have optimal positional strategies and achieving the outcome specified by the equilibrium value or better in every configuration against every strategy of the opponent (here, a positional strategy is a strategy depending only on the currently visited configuration). We refer to Appendix A for details.
The termination complexity and -counter complexity of are functions where and . When the underlying VASS is not clear, we write and instead of and .
Observe that the asymptotic analysis of termination complexity for a given VASS is trivially reducible to the asymptotic analysis of counter complexity in a VASS obtained from by adding a fresh “step counter” sc incremented by every transition of . Clearly, . Therefore, the lower complexity bounds for the considered problems of asymptotic analysis are proven for , while the upper bounds are proven for .
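For illustration, the reduction is a one-line transformation of the transition relation (a sketch in the same ad-hoc encoding as above; `add_step_counter` is our name, not the paper’s):

    from typing import List, Tuple

    Vector = Tuple[int, ...]
    Transition = Tuple[str, Vector, str]

    def add_step_counter(transitions: List[Transition]) -> List[Transition]:
        # Append a fresh counter sc (the last coordinate) incremented by every transition.
        # After k transitions, sc holds its initial value plus k, so the length of a
        # computation and the maximal value of sc coincide up to an additive term.
        return [(q, u + (1,), r) for (q, u, r) in transitions]

    demo = [("p", (-1, 1), "p"), ("p", (0, 0), "q"), ("q", (0, -1), "q")]
    print(add_step_counter(demo))
    # [('p', (-1, 1, 1), 'p'), ('p', (0, 0, 1), 'q'), ('q', (0, -1, 1), 'q')]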
3 Demonic VASS
In this section, we classify the computational complexity of polynomial asymptotic analysis for demonic VASS. The following theorem holds regardless of whether the counter update vectors are encoded in unary or binary (the lower bounds hold for unary encoding, the upper bounds hold for binary encoding).
Theorem 3.1.
Let . For every demonic VASS and counter we have that () is either in or in . Furthermore, the problem whether
• is in for , and -complete for ;
• is in for , and -complete for ;
• is in for , -complete for , and -complete for ;
• is -complete;
• is in for , and -complete for ;
• is -complete for , and -complete for .
The next theorem identifies the crucial parameter influencing the complexity of polynomial asymptotic analysis for demonic VASS. Let be the standard DAG of strongly connected components of . For every leaf (bottom SCC) of , let be the total number of all paths from a root of to .
Theorem 3.2.
Let be a class of demonic VASS such that for every and every leaf of we have that is bounded by a fixed constant depending only on .
Then, the problems whether , , for given and , are solvable in polynomial time (where the is written in binary). The same results hold also for (for a given counter of ).
The degree of the polynomial bounding the running time of the decision algorithm for the three problems of Theorem 3.2 increases with the increasing size of the constant bounding . From the point of view of program analysis, Theorem 3.2 has a clear intuitive meaning. If is an abstraction of a program , then the instructions in increasing the complexity of the asymptotic analysis of are branching instructions such as if-then-else that are not embedded within loops. If executes many such constructs in a sequence, a termination point can be reached in many ways (“zigzags” in the ’s control-flow graph). This increases , where is a leaf of containing the control state modeling the termination point of .
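The structural parameter of Theorem 3.2 is easy to compute. The sketch below (our own helper; it uses the networkx library for the SCC condensation, which the paper does not prescribe) counts, for every leaf of the SCC DAG, the number of root-to-leaf paths by dynamic programming in topological order.

    import networkx as nx

    def paths_to_leaves(control_flow: nx.DiGraph) -> dict:
        # For each leaf (bottom SCC) of the SCC DAG, count the paths from a root (source SCC).
        dag = nx.condensation(control_flow)              # DAG of strongly connected components
        npaths = {}
        for scc in nx.topological_sort(dag):
            preds = list(dag.predecessors(scc))
            npaths[scc] = 1 if not preds else sum(npaths[p] for p in preds)
        return {scc: npaths[scc] for scc in dag.nodes if dag.out_degree(scc) == 0}

    # Two "if-then-else" diamonds in a sequence yield 4 root-to-leaf paths.
    g = nx.DiGraph([("a", "b"), ("a", "c"), ("b", "d"), ("c", "d"),
                    ("d", "e"), ("d", "f"), ("e", "g"), ("f", "g")])
    print(list(paths_to_leaves(g).values()))             # [4]

Bounding this number by a constant, as in Theorem 3.2, is exactly what keeps the analysis polynomial.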
3.1 Lower bounds
Lemma 3.3.
Let . For every propositional formula in 3-CNF there exists a demonic VASS , with counter , constructible in time polynomial in such that
• if is satisfiable, then and ;
• if is not satisfiable, then and .
Proof 3.4.
Let be a propositional formula where every is a clause with three literals over propositional variables (a literal is a propositional variable or its negation). We construct a VASS with the counters
• , used to encode an assignment of truth values to . In the following, we identify literals of with their corresponding counters (i.e., if , the corresponding counter is ; and if , the corresponding counter is ).
• used to encode the validity of clauses under the chosen assignment,
• used to encode the (in)validity of under the chosen assignment,
• and used to compute ,
• and some auxiliary counters used in gadgets.
The structure of is shown in VASS Program 1. The basic instructions are implemented by the gadgets of Fig. 3 (top). Counter changes associated to a given transition are indicated by the corresponding labels, where and mean decrementing and incrementing a given counter by one (the other counters are unchanged). Hence, the empty label represents no counter change, i.e., the associated counter update vector is . The auxiliary counter is unique for every instance of these gadgets and it is not modified anywhere else.
The constructs and are implemented by connecting the underlying gadgets as shown in Fig. 3 (bottom). The foreach statements are just concise representations of the corresponding sequences of instructions connected by ‘;’.
Now suppose that the computation of VASS Program 1 is executed from line 1 where all counters are initialized to . One can easily verify that all gadgets implement the operations associated to their labels up to some “asymptotically irrelevant side effects”. More precisely,
• the gadget ensures that the Demon can increase the value of counter by (but not more) if he plays optimally, where and are the values stored in and when initiating the gadget. Recall that the counter is unique for the gadget, and its initial value is . Also note that the value of is decreased to when the Demon strives to maximally increase the value of .
• The gadget ensures that the Demon can add to the counter and then reset to the value (but not more) if he plays optimally. Again, note that is a unique counter for the gadget with initial value .
• The gadget allows the Demon to increase by the minimum of and , and then restore to (but not more).
Now, the VASS Program 1 is easy to follow. We describe its execution under the assumption that the Demon plays optimally. It is easy to verify that the Demon cannot gain anything by deviating from the scenario described below, where certain counters are pumped to their maximal values (in particular, the auxiliary counters are never re-used outside their gadgets, hence the Demon is not motivated to leave any positive values in them).
By executing line 1, the Demon pumps the counter to the value . Then, the Demon determines a truth assignment for every , where , by pumping either the counter or the counter to the value . A key observation is that when the chosen assignment makes true, then every clause contains a literal such that the value of its associated counter is . Otherwise, there is a clause such that all of the three counters corresponding to , , have the value . The Demon continues by pumping to the value at line 1. Then, for every , he selects a literal of and pumps to the minimum of and . Observe that is either or , and the same holds for after executing the instruction. Hence, is pumped either to or , depending on whether the chosen assignment sets every clause to true or not, respectively. Note that the length of the whole computation up to line 1 is , regardless whether the chosen assignment sets the formula to true or false. If was pumped to , then the last instruction at line 1 can pump the counter to in transitions. Hence, if is satisfiable, the Demon can schedule a computation of length and along the way pump to . Otherwise, the length of the longest computation is and counter is bounded by . Also observe that if the Demon starts executing in some other control state (i.e., not in the first instruction of line 1), the maximal length of a computation as well as the maximal value of is only smaller.
Recall that the class DP consists of problems that are intersections of one problem in NP and another problem in coNP. The class DP is expected to be somewhat larger than NP ∪ coNP, and it is contained in the Δ₂ᵖ level of the polynomial hierarchy. The standard DP-complete problem is Sat-Unsat, where an instance is a pair of propositional formulae and the question is whether the first formula is satisfiable and the second one is unsatisfiable [16]. Hence, the lower bounds of Theorem 3.1 follow directly from the next lemmas (a proof can be found in Appendix B).
Lemma 3.5.
Let . For every pair of propositional formulae in 3-CNF there exists a demonic VASS such that iff is satisfiable and is unsatisfiable.
Lemma 3.6.
Let . For every pair of propositional formulae in 3-CNF there exists a demonic VASS along with a counter such that iff is satisfiable and is unsatisfiable.
3.2 Upper bounds
The upper complexity bounds of Theorem 3.1 are proven for . For the sake of clarity, we first sketch the main idea and then continue with developing a formal proof.
Intuition. For a given demonic VASS , we compute its SCC decomposition and proceed by analyzing the individual SCCs in the way indicated in Fig. 4. We start in a top SCC with all counters initialized to . Here, we can directly apply the results of [19, 13] and decide in polynomial time whether for some or (in the first case, we can also determine the ). We perform this analysis for every counter and thus obtain the vector describing the maximal asymptotic growth of the counters (such as in Fig. 4). Observe that
• although the asymptotic growth of has been analyzed for each counter independently, all counters can be pumped to their associated asymptotic values simultaneously. Intuitively, this is achieved by considering the “pumping computations” for the smaller vector of initial counter values ( is the dimension of ), and then simply “concatenating” these computations in a configuration with all counters initialized to ;
• if the asymptotic growth of is , the computation simultaneously pumping the counters to their asymptotic values may actually decrease the value of (the computation can be arranged so that the resulting value of stays above for all sufficiently large ). For example, the top SCC of Fig. 4 achieves the simultaneous asymptotic growth of all counters from to , but this does not imply the counters can be simultaneously increased above the original value (nevertheless, the simultaneous increase in the first and the third counter above is certainly possible for all sufficiently large ).
A natural idea how to proceed with next SCCs is to perform a similar analysis for larger vectors of initial counter values. Since we are interested just in the asymptotic growth of the counters, we can safely set the initial value of a counter previously pumped to to precisely . However, it is not immediately clear how to treat the counters previously pumped to . We show that the length of a computation “pumping” the counters to their new asymptotic values in the considered SCC is at most exponential in . Consequently, the “pumping computation” in can be constructed so that the resulting value of the “large” counters stays above one half of their original value. This means the value of “large” counters is still in after completing the computation in . Furthermore, the large counters can be treated as if their initial value was infinite when analyzing . This “infinite” initial value is implemented simply by modifying every counter update vector in so that for every “large” counter . This adjustment in the structure of is denoted by putting “” into the corresponding component of the initial counter value vector (see Fig. 4). This procedure is continued until processing all SCCs. Note that the same SCC may be processed multiple times for different vectors of initial counter values corresponding to different paths from a top SCC. In Fig. 4, the bottom SCC is processed for the initial vectors and corresponding to the two paths from the top SCC. The number of such initial vectors can be exponential in the size of , as witnessed by the VASS constructed in the proof of Lemma 3.3.
Now we give a formal proof. Let be a demonic VASS with counters. For every counter and every , we define the function where is the maximum of all where and .
Proposition 3.7.
Let be a strongly connected demonic VASS with counters, and let such that for every , where . For every counter , we have that either for some , or . It is decidable in polynomial time which of the two possibilities holds. In the first case, the value of is computable in polynomial time.
In [19], a special variant of Proposition 3.7 covering the subcase when is proven. In the introduction part of [19], it is mentioned that a generalization of this result (equivalent to Proposition 3.7) can be obtained by modifying the techniques presented in [19]. Although no explicit proof is given, the modification appears feasible. We give a simple explicit proof of Proposition 3.7, using the algorithm of [19] for the subcase as a “black-box procedure”. We refer to Appendix B for details.
Now we extend the function so that . Here, the components of correspond to counters that have already been pumped to “very large” values and do not constrain the computations in . As we shall see, “very large” actually means “at least singly exponential in ”.
Let , and let be the VASS obtained from by modifying every counter update vector into , where if , otherwise . Hence, the counters set to in are never changed in . Furthermore, let be the vector obtained from by changing all components into . We put .
For a given , we say that is -consistent if for every we have that the projection is either if , or if . Intuitively, a -consistent function assigns to every a vector of initial counter values growing consistently with .
Given , a control state , a -consistent function , an infinite family of Demon’s strategies in , a function , and , we use to denote the computation of starting at obtained by applying until a maximal computation is produced or transitions are executed.
The next lemma says that if is strongly connected, then all counters can be pumped simultaneously to the values asymptotically equivalent to so that the counters previously pumped to exponential values stay exponential.
Lemma 3.8.
Let be a strongly connected demonic VASS with counters. Let , and let be a -consistent function. Then for every counter such that and we have that for every . Furthermore, there exist , an infinite family of Demon’s strategies, and a function such that for every , the value of in the last configuration of is
• if ;
• if or .
A proof of Lemma 3.8 uses the result of [13] saying that counters pumpable to exponential values can be simultaneously pumped by a computation of exponential length from a configuration where all counters are set to (the same holds for polynomially bounded counters, where the length of the computation can be bounded even by a polynomial). Using the construction of Proposition 3.7, these results are extended to our setting with -consistent initial counter values. Then, the initial counter values are virtually “split into boxes” of size . The computations pumping the individual counters are then run “each in its own box” for these smaller initial vectors and concatenated. As the computation of one “box” cannot affect any other “box”, no computation can undo the effects of previous computations. The details can be found in Appendix B.4.
Let be a function such that, for every ,
Note that every SCC (vertex) of can be seen as a strongly connected demonic VASS after deleting all transitions leading from/to the states outside . If the counters are simultaneously pumped to -consistent values before entering , then can further pump the counters to -consistent values (see Lemma 3.8). According to Proposition 3.7, is computable in polynomial time for every where every finite is bounded by for some .
Observe that all computations of can be divided into finitely many pairwise disjoint classes according to their corresponding paths in (i.e., the sequence of visited SCCs of ). For each such sequence , the vectors where and are computable in time polynomial in (note that ). The asymptotic growth of the counters achievable by computations following the path is then given by . Hence, iff there is a path in such that . Similarly, iff for every path in we have that . From this we immediately obtain the upper complexity bounds of Theorem 3.1.
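Schematically, the decision procedure can be organized as follows (a sketch of ours where the per-SCC analysis of Proposition 3.7 is abstracted into the callback `grow`, and counters are represented only by their polynomial degrees, with float('inf') available to stand in for the ω components):

    def max_growth(dag_succ, roots, init, grow):
        # Fold the per-SCC growth function along every root-to-leaf path of the SCC DAG
        # and return the componentwise maximum of the resulting degree vectors.
        # dag_succ: SCC -> list of successor SCCs; roots: source SCCs of the DAG;
        # init: initial degree vector (all ones when every counter starts at n);
        # grow(scc, vec): degree vector after processing one SCC (Proposition 3.7 as a black box).
        best = None

        def dfs(scc, vec):
            nonlocal best
            vec = grow(scc, vec)
            succs = dag_succ.get(scc, [])
            if not succs:
                best = vec if best is None else tuple(map(max, best, vec))
            for s in succs:
                dfs(s, vec)

        for r in roots:
            dfs(r, init)
        return best

    # Toy example: the top SCC squares counter 0, the bottom SCC adds the degrees of
    # counters 0 and 1 into counter 1 (e.g., a nested loop), so counter 1 ends with degree 3.
    def toy_grow(scc, vec):
        return (2 * vec[0], vec[1]) if scc == "top" else (vec[0], vec[0] + vec[1])

    print(max_growth({"top": ["bottom"], "bottom": []}, ["top"], (1, 1), toy_grow))   # (2, 3)

The exponential number of root-to-leaf paths is exactly what Theorem 3.2 bounds; with constantly many paths the enumeration above is polynomial.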
Furthermore, for every SCC of , we can compute the set of all such that there is a path where is a root of , , and . A full description of the algorithm is given in Appendix B.5. If is bounded by a fixed constant independent of for every leaf of , then the algorithm terminates in polynomial time, which proves Theorem 3.2.
4 VASS Games
The computational complexity of polynomial asymptotic analysis for VASS games is classified in our next theorem. The parameter characterizing hard instances is identified at the end of this section.
Theorem 4.1.
Let . For every VASS game and counter we have that () is either in or in . Furthermore, the problem whether
• is -complete for and -complete for ;
• is in for , -complete for , and -complete for ;
• is -complete for and -complete for ;
• is -complete;
• is in for , and -complete for ;
• is -complete.
Furthermore, we show that for every VASS game , either or . In the first case, the such that can be computed in polynomial space. The same results hold for .
In [12], it has been shown that the problem whether is -complete, and if , then . This yields the and bounds of Theorem 4.1 for . Furthermore, it has been shown that the problem whether is -hard, and this proof can be trivially generalized to obtain all lower bounds of Theorem 4.1. For the sake of completeness, we sketch the arguments in Appendix C.
The key insight behind the proof of Theorem 4.1 is that player Angel can safely commit to a simple locking strategy when minimizing the counter complexity. We start by introducing locking strategies.
Definition 4.2.
Let be a VASS game. We say that a strategy for player Angel is locking if for every computation where and for every such that we have that .
In other words, when an angelic control state is visited for the first time, a locking strategy selects and “locks” an outgoing transition of so that whenever is revisited, the previously locked transition is taken. Observe that the choice of a “locked” transition may depend on the whole history of a computation.
Since a “locked” control state has only one outgoing transition, it can be seen as demonic. Hence, as more and more control states are locked along a computation, the VASS game becomes “more and more demonic”. We capture these changes as a finite acyclic graph called the locking decomposition of . Then, we say that a locking strategy is simple if the choice of a locked transition after performing a given history depends only on the finite path in associated to the history. We show that Angel can achieve an asymptotically optimal termination/counter complexity by using only simple locking strategies. Since the height of is polynomial in , the existence of an appropriate simple locking strategy for Angel can be decided by an alternating polynomial-time algorithm. As alternating polynomial time coincides with polynomial space, this proves the upper bounds of Theorem 4.1. Furthermore, our construction identifies the structural parameters of making the polynomial asymptotic analysis of VASS games hard. When these parameters are bounded by fixed constants, the problems of Theorem 4.1 are solvable in polynomial time.
4.1 Locking sets and the locking decomposition of
Let be a VASS game. A Demonic decomposition of is a finite directed graph defined as follows. Let be an equivalence where iff either , or both are demonic and mutually reachable from each other via a finite path leading only through demonic control states. The vertices of are the equivalence classes , and iff and for some . For demonic VASS, becomes the standard DAG decomposition. For VASS games, is not necessarily acyclic.
A locking set of is a set of transitions such that implies , and implies . A control state is locked by if contains an outgoing transition of . We use to denote the set of all locking sets of . For every , let be the VASS game obtained from by “locking” the transitions of . That is, each control state locked by becomes demonic in , and the only outgoing transition of in is the transition .
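A direct prototype of these two notions (our own encoding: states map to "A"/"D" tags, transitions are (source, update, target) triples, and mutual reachability through demonic states is computed by a naive BFS) might look as follows.

    from collections import deque

    def demonic_reach(states, transitions, start):
        # States reachable from `start` along paths that only pass through demonic states
        # (the reached state itself may be angelic, but we never continue from it).
        seen, queue = {start}, deque([start])
        while queue:
            q = queue.popleft()
            if states[q] != "D":
                continue
            for (p, _, r) in transitions:
                if p == q and r not in seen:
                    seen.add(r)
                    queue.append(r)
        return seen

    def demonic_class(states, transitions, q):
        # Equivalence class of q in the demonic decomposition: {q} for angelic q, otherwise
        # all demonic states mutually reachable from q via demonic states only.
        if states[q] != "D":
            return frozenset({q})
        return frozenset(p for p in demonic_reach(states, transitions, q)
                         if states[p] == "D" and q in demonic_reach(states, transitions, p))

    def lock(states, transitions, F):
        # The game obtained by locking F: every state locked by F becomes demonic and
        # keeps only its locked outgoing transition.
        locked = {p for (p, _, _) in F}
        new_states = {q: ("D" if q in locked else kind) for q, kind in states.items()}
        new_transitions = [t for t in transitions if t[0] not in locked] + list(F)
        return new_states, new_transitions

    # Example: two demonic states forming a loop plus one angelic state.
    S = {"p": "D", "q": "D", "a": "A"}
    T = [("p", (0,), "q"), ("q", (0,), "p"), ("q", (0,), "a"), ("a", (0,), "p"), ("a", (1,), "q")]
    print(demonic_class(S, T, "p"))           # frozenset({'p', 'q'})
    print(demonic_class(S, T, "a"))           # frozenset({'a'})
    print(lock(S, T, {("a", (0,), "p")})[0])  # all three states become demonic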
Definition 4.3.
The locking decomposition of is a finite directed graph where the set of vertices and the set of edges of are the least sets and satisfying the following conditions:
• All elements of are pairs where and is a vertex of . When is demonic/angelic in , we say that is demonic/angelic.
• contains all pairs of the form .
• If where is demonic in and is an edge of , then and .
• If where is angelic in , then for every we have that and , where .
It is easy to see that is acyclic and the length of every path in is bounded by , where at most vertices in the path are demonic. Note that every computation of obtained by applying a locking strategy determines its associated path in in the natural way. A locking strategy is simple if the choice of a locked transition depends only on the path in associated to the performed history (i.e., if two histories determine the same path in , then the strategy makes the same choice for both histories).
4.2 Upper bounds
Let be a VASS game with counters. For every and , let where . We extend this notation to the vectors in the same way as in Section 3.2, i.e., for a given , we put . Recall that is the vector obtained from by changing all components into , and is the VASS obtained from by modifying every counter update vector into , where if , otherwise . The main technical step towards obtaining the upper bounds of Theorem 4.1 is the next proposition.
Proposition 4.4.
Let be a VASS game with counters. Furthermore, let be a vertex of , , and a counter such that . Then, one of the following two possibilities holds:
• there is such that for every -consistent there exist a simple locking Angel’s strategy in and a Demon’s strategy in such that is independent of and
  – for every Demon’s strategy in , we have that ;
  – for every Angel’s strategy in , we have that .
• for every -consistent there is a Demon’s strategy in such that for every Angel’s strategy in , we have that .
Proposition 4.4 is proven by induction on the height of the subgraph rooted by . The case when is demonic (which includes the base case when is a leaf) follows from the constructions used in the proof of Proposition 3.7. When the vertex is angelic, it has immediate successors of the form where . We show that by locking one of the transitions in , Angel can minimize the growth of in asymptotically the same way as if he used all of these transitions freely when revisiting . See Appendix C.2 for details.
Observe that every computation in where Angel uses some simple locking strategy determines the unique corresponding path in (initiated in a vertex of the form ) in the natural way. Hence, all such computations can be divided into finitely many pairwise disjoint classes according to their corresponding paths in . Let be a path in where . Consider the corresponding sequence where and is equal either to or to , depending on whether is demonic or angelic, respectively. Here, is the function defined in Section 3.2 (observe that the component of containing can be seen as a strongly connected demonic VASS after deleting all transitions from/to the states outside ). The vector describes the maximal asymptotic growth of the counters achievable by Demon when Angel uses the simple locking strategy associated to the path. Furthermore, the sequence is computable in time polynomial in and all finite components of are bounded by because the total number of all demonic in the path is bounded by (cf. Proposition 3.7).
The problem whether can be decided by an alternating polynomial-time algorithm which selects an initial vertex of the form universally, and then constructs a maximal path in from where the successors of demonic/angelic vertices are chosen universally/existentially, respectively. After obtaining a maximal path , the vector is computed in polynomial time, and the algorithm answers yes/no depending on whether or not, respectively. The problem whether is decided similarly, but here the initial vertex is chosen existentially, the successors of demonic/angelic vertices are chosen existentially/universally, and the algorithm answers yes/no depending on whether or not, respectively. This proves the upper bounds of Theorem 4.1.
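The alternating procedure can be prototyped as a plain AND/OR recursion over the locking decomposition (again a sketch of ours: `kind`, `succ`, and `grow` abstract the decomposition and the per-component analysis, and counters are tracked only by their degrees):

    def angel_keeps_below(vertex, vec, counter, k, kind, succ, grow):
        # Can Angel, via a simple locking strategy, keep `counter` in O(n^k) from `vertex`?
        # kind(v) is "A" or "D"; succ(v) lists successors in the locking decomposition;
        # grow(v, vec) folds the effect of a demonic component (the function of Section 3.2,
        # treated as a black box). Demonic successors are quantified universally, angelic
        # ones (Angel's choice of a locked transition) existentially.
        if kind(vertex) == "D":
            vec = grow(vertex, vec)
        nexts = succ(vertex)
        if not nexts:
            return vec[counter] <= k
        branches = (angel_keeps_below(v, vec, counter, k, kind, succ, grow) for v in nexts)
        return all(branches) if kind(vertex) == "D" else any(branches)

    def counter_in_big_O(initial_vertices, dim, counter, k, kind, succ, grow):
        # Membership in O(n^k) requires Angel to succeed from every initial vertex; the dual
        # question (Omega) swaps the universal and existential choices throughout.
        start = (1,) * dim          # every counter starts with value n, i.e., degree 1
        return all(angel_keeps_below(v, start, counter, k, kind, succ, grow)
                   for v in initial_vertices)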
Observe that the crucial parameter influencing the computational hardness of the asymptotic analysis for VASS games is the number of maximal paths in . If and are bounded by constants, then the above alternating polynomial time algorithms can be simulated by deterministic polynomial time algorithms. Thus, we obtain the following:
Theorem 4.5.
Let be a class of VASS games such that for every we have that and , where is a leaf of , are bounded by a fixed constant depending only on . Then, the problems whether , , for given and , are solvable in polynomial time (where the is written in binary). The same results hold also for (for a given counter of ).
5 Conclusions, future work
We presented a precise complexity classification for the problems of polynomial asymptotic complexity of demonic VASS and VASS games. We also identified the structural parameters making these problems computationally hard, and we indicated that these parameters may actually stay reasonably small when dealing with VASS abstractions of computer programs. The actual applicability and scalability of the presented results to the problems of program analysis requires a more detailed study including experimental evaluation.
From a theoretical point of view, a natural question is whether the scope of effective asymptotic analysis can be extended from purely non-deterministic VASS to VASS with probabilistic transitions (i.e., VASS MDPs and VASS stochastic games). These problems are challenging and motivated by their applicability to probabilistic program analysis.
References
- [1] E. Albert, P. Arenas, S. Genaim, M. Gómez-Zamalloa, G. Puebla, D. V. Ramírez-Deantes, G. Román-Díez, and D. Zanardini. Termination and cost analysis with COSTA and its user interfaces. ENTCS, 258(1):109–121, 2009.
- [2] C. Alias, A. Darte, P. Feautrier, and L. Gonnord. Multi-dimensional rankings, program termination, and complexity bounds of flowchart programs. In Proceedings of SAS 2010, pages 117–133, 2010.
- [3] T. Brázdil, K. Chatterjee, A. Kučera, P. Novotný, and D. Velan. Deciding fast termination for probabilistic VASS with nondeterminism. In Proceedings of ATVA 2019, volume 11781 of LNCS, pages 462–478. Springer, 2019.
- [4] T. Brázdil, K. Chatterjee, A. Kučera, P. Novotný, D. Velan, and F. Zuleger. Efficient algorithms for asymptotic bounds on termination time in VASS. In Proceedings of LICS 2018, pages 185–194. ACM Press, 2018.
- [5] M. Broy and M. Wirsing. On the algebraic specification of nondeterministic programming languages. In Proceedings of CAAP’81, volume 112 of LNCS, pages 162–179. Springer, 1981.
- [6] Q. Carbonneaux, J. Hoffmann, T. W. Reps, and Z. Shao. Automated resource analysis with coq proof objects. In Proceedings of CAV 2017, pages 64–85, 2017.
- [7] W. Czerwiński, S. Lasota, R. Lazić, J. Leroux, and F. Mazowiecki. The reachability problem for Petri nets is not elementary. In Proceedings of STOC 2019, pages 24–33. ACM Press, 2019.
- [8] A. F.-Montoya and R. Hähnle. Resource analysis of complex programs with cost equations. In Proceedings of APLAS 2014, pages 275–295, 2014.
- [9] J. Giesl, C. Aschermann, M. Brockschmidt, F. Emmes, F. Frohn, C. Fuhs, J. Hensel, C. Otto, M. Plücker, P. Schneider-Kamp, T. Ströder, S. Swiderski, and R. Thiemann. Analyzing program termination and complexity automatically with AProVE. J. Autom. Reasoning, 58(1):3–31, 2017.
- [10] S. Gulwani, K. K. Mehra, and T. M. Chilimbi. SPEED: precise and efficient static estimation of program computational complexity. In Proceedings of POPL 2009, pages 127–139, 2009.
- [11] J. Hoffmann, K. Aehlig, and M. Hofmann. Multivariate amortized resource analysis. ACM Trans. Program. Lang. Syst., 34(3):14:1–14:62, 2012.
- [12] A. Kučera, J. Leroux, and D. Velan. Efficient analysis of VASS termination complexity. In Proceedings of LICS 2020, pages 676–688. ACM Press, 2020.
- [13] J. Leroux. Polynomial vector addition systems with states. In Proceedings of ICALP 2018, volume 107 of LIPIcs, pages 134:1–134:13. Schloss Dagstuhl, 2018.
- [14] R. Lipton. The reachability problem requires exponential space. Technical report 62, Yale University, 1976.
- [15] E.W. Mayr and A.R. Meyer. The complexity of the finite containment problem for Petri nets. JACM, 28(3):561–576, 1981.
- [16] Ch. Papadimitriou. Computational Complexity. Addison-Wesley, 1994.
- [17] M. Sinn, F. Zuleger, and H. Veith. A simple and scalable static analysis for bound analysis and amortized complexity analysis. In Proceedings of CAV 2014, pages 745–761, 2014.
- [18] M. Sinn, F. Zuleger, and H. Veith. Complexity and resource bound analysis of imperative programs using difference constraints. J. Autom. Reasoning, 59(1):3–45, 2017.
- [19] F. Zuleger. The polynomial complexity of vector addition systems with states. In Proceedings of FoSSaCS 2020, volume 12077 of LNCS, pages 622–641. Springer, 2020.
Appendix A The Existence of Equilibrium Value in VASS Games
In this section, we sketch a proof for the equalities
used in Section 2. These results appear to be folklore and are plausible. Still, they do not immediately follow from the standard determinacy results for Borel objectives because the reward function is not bounded. Due to the importance of these equalities, we believe they are worth a proof sketch.
We prove the determinacy result for arbitrary finitely-branching games with countably many vertices. Consider a game where is a countably infinite set of vertices partitioned into the subsets and of demonic and angelic vertices, is a finitely-branching transition relation (i.e., every vertex has only finitely many immediate successors), and is a cost function. Furthermore, we fix an initial vertex . For every maximal path initiated in , let be the supremum of the costs of all vertices visited by . We show that
where and range over the strategies for Demon/Angel in , and is the unique maximal path initiated in determined by and .
Let be a (Bellman) operator such that, for a given , we have that , where is defined as follows:
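A standard shape for such an operator on this kind of min/max game (our reconstruction under the assumption that the missing case split takes maxima over successors of demonic vertices and minima over successors of angelic ones; the exact notation may differ) is

    \mathcal{F}(f)(v) \;=\; \max\bigl(c(v),\, A(v)\bigr),
    \qquad
    A(v) \;=\;
    \begin{cases}
      \max\{\, f(u) \mid (v,u) \in {\to} \,\} & \text{if } v \text{ is demonic},\\
      \min\{\, f(u) \mid (v,u) \in {\to} \,\} & \text{if } v \text{ is angelic}.
    \end{cases}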
Since is a continuous operator over the CPO of all functions with component-wise ordering, there is the least fixed-point of , where is the least element (i.e., a function assigning to every vertex). Consider two memoryless strategies and for Demon and Angel such that
• for every , the strategy selects a successor of with the maximal value;
• for every , the strategy selects a successor of with the minimal value.
Now, it suffices to show that
(3) |
The second inequality of (3) holds trivially. For the first inequality, observe that
Hence, it suffices to show , which is achieved by demonstrating for every (by induction on ). The last inequality in (3) is proven similarly (using ).
Appendix B Proofs for Section 3
B.1 A proof of Lemma 3.5
See Lemma 3.5.
Proof B.1.
The structure of is given by the VASS Program 2. The program starts by executing the first eight lines of the VASS Program 1 constructed for and , followed by the first eight lines of the same program constructed for and where all variables representing the counters are fresh and previously unused. According to the proof of Lemma 3.3, the Demon can perform transitions when executing the first two lines of the VASS Program 2 regardless of whether the formulae are satisfiable or not. Let and be the counters of and corresponding to the counter of the VASS Program 1. According to Lemma 3.3, we have the following:
• If is satisfiable, then the counter can be pumped to ; otherwise, it can be pumped only to .
• If is satisfiable, then the counter can be pumped to ; otherwise, it can be pumped only to .
The instructions at lines 2 and 2 ensure that iff the counter can be pumped to and the counter can be pumped only to . More precisely, the instructions at line 2 multiply the values of these two counters. Hence, if both values are , the multiplication gadget executes transitions, which is beyond . If one of the counter values is and the other is , the multiplication takes transitions. Finally, if both values are , the multiplication takes transitions. The instructions at line 2 compute the square of the value stored in , which takes either or transitions, depending on whether the value of is or , respectively. So, the only case when these instructions produce a sequence of transitions of length is when the value of is and the value of is .
B.2 A proof of Lemma 3.6
See Lemma 3.6.
Proof B.2.
The structure of is given by the VASS Program 3. The program starts by executing the first eight lines of the VASS Program 1 constructed for and , followed by the first eight lines of the same program constructed for and where all variables representing the counters are fresh and previously unused. Let and be the counters of and corresponding to the counter of the VASS Program 1. According to Lemma 3.3, we have the following:
• If is satisfiable, then the counter can be pumped to ; otherwise, it can be pumped only to .
• If is satisfiable, then the counter can be pumped to ; otherwise, it can be pumped only to .
The instructions at line 3 pump to . Thus, the instructions at line 3 ensure that if is unsatisfiable, then can be pumped at most to , and if is satisfiable, then can be pumped at most to or , depending on whether is satisfiable or unsatisfiable, respectively. Therefore, iff is satisfiable and is unsatisfiable.
B.3 A proof of Proposition 3.7
In this section we give a full proof of Proposition 3.7. We start by recalling the special variant proven in [19].
Proposition B.3 (see [19]).
Let be a strongly connected demonic VASS with counters. For every counter , we have that either for some , or . It is decidable in polynomial time which of the two possibilities holds. In the first case, the is computable in polynomial time.
Our proof of Proposition 3.7 is obtained by modifying a given demonic VASS into another demonic VASS and applying Proposition B.3 to . See Proposition 3.7.
Proof B.4.
Let be the counters of . We start by constructing a VASS that “pumps” every to from an initial configuration with all counters set to (see VASS Program 4). Since the components of can be exponential in , the trivial technique used in line 1 of VASS Program 1 is not applicable because it requires new counters and control states. Hence, needs to be constructed more carefully using repeated squaring. Let be the maximal component of , and let . Then, for every , there is a vector computable in time polynomial in such that . Let be the least index such that . At line 4, the counter is pumped to . Note that when maximizing the value of , the corresponding gadget (see Fig. 3) leaves either in or in , and the value of both counters is at most . The nested loop at lines 4–4 increase the counter by for every using the counters. Note that every counter is initialized to , so no computation is needed for . Also note that the if statements at lines 4–4 are purely symbolic—the conditions , and are constants denoting either true or false, and the instructions following then are either included into the code of or not. At lines 4–4, the values stored in are added to the counters . The gadget for the instruction “reads” the value of destructively, i.e., the counter is not restored to its original value (cf. the gadget for of Fig. 3).
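The counting behind the repeated-squaring trick can be illustrated separately from the gadget details (a toy sketch of ours, not the construction of VASS Program 4): a target value with b bits is reached by at most 2b doubling/increment phases, so even exponentially large values need only polynomially many gadget instances.

    def doubling_schedule(n: int):
        # Process the binary representation of n from the most significant bit:
        # double the accumulator for every bit and add one for every bit equal to 1.
        ops = []
        for bit in bin(n)[2:]:
            ops.append("double")
            if bit == "1":
                ops.append("inc")
        return ops

    def run(ops):
        acc = 0
        for op in ops:
            acc = 2 * acc if op == "double" else acc + 1
        return acc

    n = 10 ** 6
    sched = doubling_schedule(n)
    assert run(sched) == n and len(sched) <= 2 * n.bit_length()
    print(len(sched), "phases reach", n)   # 27 phases reach 1000000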
Note that the VASS has two distinguished control states in and out, and uses auxiliary counters different from . Hence, the total number of counters of is . The transition update vectors of are constructed so that the first components specify the updates for . An important observation is that even if we extended with a transition allowing for “restarting” the computation of , this extra transition would be of no use when maximizing the values of . The best the Demon could do is to maximize the value of all , then maximize all , and then empty by transferring its content to . If the Demon violated from this scenario, leaving some positive values in the auxiliary counters of and then “restarting” the computation of using the transition , the resulting value of would be only smaller.
Now we construct a VASS with counters by taking the union of and , where the transition update vectors of are extended so that they do not modify the extra counters of . Furthermore, for every control state of , we add to the transitions and . Here, , where is the maximal absolute value of an update vector component in . Observe that is strongly connected and its size is polynomial in the size of .
We show that for every , where , the asymptotic growth of in is the same as the asymptotic growth of in . Hence, it suffices to apply Proposition B.3 to .
Clearly, because can use the sub-VASS to pump every to and then simulate a computation of . It remains to show . To see this, it suffices to verify that the best strategy for the Demon who aims at maximizing the value of in a computation of initiated in a configuration with all counters set to is to start in the in state of the sub-VASS and pump all to their maximal values, and then continue by simulating without ever returning to the in state of the sub-VASS . Such a computation can be “simulated” by from an initial configuration where every is set to , which proves our claim.
Obviously, an extra “detour” to is of no use if the counters have previously been pumped to their maximal values. As noted above, the Demon cannot gain anything by deviating from the scenario when are pumped just once at the beginning because the total increase in the values of brought by running the sub-VASS could be only smaller. So, the only reason why Demon might still wish to re-visit by entering in from a control state of the sub-VASS is a “shorter” path to some other control state of the sub-VASS passing through the sub-VASS . However, since is strongly connected, the Demon can always move from to via the control states of and the total decrease in every counter will be only smaller than the decrease caused by the transitions and . To sum up, the optimal strategy for Demon is to use the sub-VASS only at the beginning to pump all to their maximal values, and then schedule an appropriate computation of the sub-VASS .
B.4 A Proof of Lemma 3.8
First, let us restate Lemma 3.8.
See Lemma 3.8.
Let be a strongly connected VASS. For every cycle of , let be the sum of the counter update vectors of the transitions executed along .
First we will need the following result of [13]. Let be the set of all counters such that . Then there exists an iteration scheme for , i.e., a sequence of cycles such that every counter strictly decremented by some is strictly incremented by . Furthermore, is precisely the set of all counters strictly incremented by . In [13], it is shown that an iteration scheme can be “iterated exponentially many times”, producing a computation of length such that all counters of are simultaneously pumped to the value (see Lemma 10 in [13]).
Second, every counter such that can be pumped to the value by a computation of polynomial length (using the results of [19], it is easy to show that the length of the computation can be bounded by ; for our purposes, even a singly exponential bound is sufficient, so there is no need to go into the details).
The above mentioned results assume that the vector of initial counter values is . Now let . Consider a VASS obtained by first modifying into (i.e., for every such that , every counter update vector of is modified so that , see Section 3.2), and then extending into by the construction of Proposition 3.7. That is, is obtained from by adding the gadget pumping every counter such the to from an initial configuration where all counters are set to . Now, the above results are applicable to . That is, for every counter such that , there exist
• a control state ,
• an infinite family of strategies for player Demon in ,
• a function such that ,
such that for every , the value of in the configuration reached by applying from until a maximal computation is obtained or transitions are executed, is
• if ,
• if .
By applying the observations of Proposition 3.7, we can safely assume that where in is the distinguished starting state of the pumping gadget, and the above computations never revisit the control state in after passing through the control state out of the gadget. Let be the length of the computation from configuration under strategy until the control state out is reached (and if out is never reached), and let be such that . Note that .
For every , let be the maximal value of achievable by running the pumping gadget of from the control state in where all counters are initialized to (cf. VASS Program 4). Recall that . Also let .
Let be a -consistent function. We define a function such that is the largest satisfying the following condition:
• for every , where are the control states of and is the maximal absolute value of an update vector component of ;
Observe that . For every and such that , consider a Demon’s strategy in defined as follows: Let be the control state visited by the strategy right after leaving the control state out of the pumping gadget of . The strategy takes the shortest path to and then starts to behave exactly like . Note that can faithfully simulate either until produces a maximal computation (for initial configuration ) or for at least steps, which is sufficient to pump the counter to if , or to if . The strategy is obtained by “concatenating” all (for all counters such that ), where is executed either for steps since initiating the simulation of or until produces a maximal computation. Note that each simulation of “assumes” that the counters are set to , and therefore the most any can lower any counter is by . Therefore, we can assume that “splits the counters into boxes” of size , and performs each simulation in its own box. Hence, no simulation can “undo” the effect of the previous simulations. Also let be the maximal number of steps takes until the last simulation is finished from initial configuration for any . The total length of the whole simulation is bounded by . Observe that , and for every counter such that we have that the value of after performing in the above indicated way is at least , which is , since the -th box “remains untouched”.
B.5 An Algorithm Computing
In this section, we present an algorithm computing the set for every SCC of . Recall that consists of all such that there is a path where is a root of , , and .
The sets are computed by Algorithm 5. In particular, at lines 5–5, is set to for every root of . The algorithm then follows the top-down acyclic structure of and computes for the remaining components.
Appendix C Proofs for Section 4
C.1 lower bounds of Theorem 4.1
In this section, we prove the lower bounds of Theorem 4.1. We use a modified construction of Lemma 3.3 to obtain a reduction from the QBF problem. Intuitively, the only change is that Angel determines the assignment for universally quantified propositional variables. Let us note that in the original construction of [12], Angel also selected the clause to be checked. Here, we use the construction of Lemma 3.3 based on the gadgets. Let
be a quantified Boolean formula where each clause contains precisely three literals (recall that the (in)validity of a given quantified Boolean formula is a PSPACE-complete problem).
Consider the VASS game defined by the VASS Program 6, where is a constant. Note that the only difference between and the demonic VASS (cf. the VASS Program 1) is that the valuation for the universally quantified variables is chosen by Angel. Formally, the gadget for D-choose is the same as the one for choose (see Fig. 3), and the gadget for A-choose is also the same except that the newly added in state of the gadget is angelic. Note that all other states of , including the states in gadgets pumping the counters, are demonic. Using the observations of Lemma 3.3, it is easy to see that
• if is valid, then and ;
• if is not valid, then and .
From this we immediately obtain the lower bounds of Theorem 4.1.
C.2 A Proof of Proposition 4.4
First, let us restate the proposition.
See Proposition 4.4.
Convention. To simplify notation, we assume that the counter update vector in every transition where is angelic satisfies (a transition where can be split into , where is a fresh demonic state).
Let us fix a vertex of , a vector , and a counter such that . First, consider the case when is a leaf of . Then is demonic, and can be seen as a strongly connected demonic VASS. The claim follows trivially by applying Lemma 3.8 to and .
Now suppose that is a demonic vertex of with successors . Note that can be seen as a strongly connected demonic VASS after deleting all transitions from/to the states outside . Let be a -consistent function. By applying Lemma 3.8 to , the vector , and , we obtain an infinite family of Demon’s strategies and a function such that the vector of counter values in the last configuration of satisfies the following for every :
• if and ;
• if or .
Let , where is the function defined in Section 3.2. Note that is a -consistent function. If , we define as the strategy which behaves like the strategy of for every computation of initiated in . Clearly, for every Angel’s strategy . If , for every we apply the induction hypothesis to , the vector , the counter , and the -consistent function . Thus, we obtain that for every one of the following possibilities holds:
• There exists a , a simple locking Angel’s strategy in and a Demon’s strategy in such that is independent of and
  – for every Demon’s strategy in , we have ;
  – for every Angel’s strategy in , we have .
• There is a Demon’s strategy in such that for every Angel’s strategy in we have that .
We distinguish two cases:
• There is such that the second possibility holds. Let be a Demon’s strategy such that, for an initial configuration , starts by simulating the strategy of until a configuration with the vector of counter values is reached. Then, takes the shortest path to a configuration , and then switches to simulating for the initial configuration where is the largest number such that . The properties of (see above) imply that for an arbitrary Angel’s strategy .
• For all , the first possibility holds. Let be the maximum of all for . Furthermore, let be a simple locking strategy which for a computation initiated in behaves like the simple locking strategy when the computation enters a control state of . Note that a computation initiated in cannot visit an angelic control state before leaving the component , and the decisions taken by simple locking strategies may depend on the sequence of previously visited components of . Hence, is a simple locking strategy. For every Demon’s strategy , we have that by our choice of and the properties of strategies (see above). Now consider the Demon’s strategy defined in the same way as in the previous case, where is the index such that . Then, the properties of imply that for an arbitrary Angel’s strategy .
Finally, suppose that is angelic. Then , and let be the outgoing transitions of (see the Convention above). For every , let . Let be a -consistent function. By induction hypothesis, for every , one of the following possibilities holds:
• there exists a , a simple locking Angel’s strategy in and a Demon’s strategy in such that is independent of and
  – for every Demon’s strategy in , we have ;
  – for every Angel’s strategy in , we have .
• there is a Demon’s strategy in such that for every Angel’s strategy in we have that .
Strictly speaking, the induction hypothesis applies to the initial configurations , but since is a control state of and is the only outgoing transition of in , the above claim follows immediately.
Let us first assume that there exists such that the first possibility holds. Then, we fix a such that is minimal, and we put . Consider a simple locking strategy in which starts by locking the transition and then proceeds by simulating the simple locking strategy . For every Demon’s strategy , we have that is the same computation as . Hence, by induction hypothesis.
Now consider a Demon’s strategy in defined as follows. Let be a computation in initiated in a configuration such that is demonic. Furthermore, let be the associated path in . The mode of is the such that is the last outgoing transition of occurring in . The -th projection of , denoted by , is then obtained by concatenating all subsequences of initiated by the transition and terminated either by the next occurrence of or by . Consider the computation in obtained by performing from the initial configuration , where . The transition selected by after performing the computation is the transition selected by , after performing . Intuitively, “switches” among the strategies according to the current mode, and thus simulates computations of initiated in . The computation first reaching a terminal configuration is simulated completely. Hence, there exists such that “subsumes” , where is the “projection” of into . This implies by induction hypothesis and our choice of .
Finally, assume that the second possibility holds for all . Then, we construct a Demon’s strategy in the same way as above, and conclude (by the same reasoning) that for an arbitrary Angel’s strategy .