
New Algorithms for Combinations of Objectives
using Separating Automata

Ashwani Anand (Chennai Mathematical Institute, Chennai, India) · Nathanaël Fijalkow (CNRS, LaBRI, Bordeaux, France, and The Alan Turing Institute, London, United Kingdom) · Aliénor Goubault-Larrecq (ENS Lyon, Lyon, France) · Jérôme Leroux (CNRS, LaBRI, Bordeaux, France) · Pierre Ohlmann (Université de Paris, Paris, France)
Abstract

The notion of separating automata was introduced by Bojańczyk and Czerwiński for understanding the first quasipolynomial time algorithm for parity games. In this paper we show that separating automata are a powerful tool for constructing algorithms solving games with combinations of objectives. We construct two new algorithms: the first for disjunctions of parity and mean payoff objectives, matching the best known complexity, and the second for disjunctions of mean payoff objectives, improving on the state of the art. In both cases the algorithms are obtained through the construction of small separating automata, using as black boxes the existing constructions for parity objectives and for mean payoff objectives.

1 Introduction

The notion of separating automata was introduced by Bojańczyk and Czerwiński [2] to give a streamlined presentation of the first quasipolynomial time algorithm for parity games due to Calude, Jain, Khoussainov, Li, and Stephan [3]. The first observation made by Bojańczyk and Czerwiński was that the statistics used in that algorithm can be computed by a finite deterministic safety automaton reading a path in the game. They showed that the property making this particular automaton useful for solving parity games is that, roughly speaking, it separates positional winning paths from losing paths: they called this property separating. The second observation of Bojańczyk and Czerwiński was that separating automata yield a natural reduction to safety games, in other words the construction of a separating automaton induces an algorithm whose complexity depends on the size of the automaton.

The notion of separating automata has been further studied in the context of parity games: Czerwiński, Daviaud, Fijalkow, Jurdziński, Lazić, and Parys [8] showed that two other quasipolynomial time algorithms can also be presented using the construction of a separating automaton. The main technical result of [8] is that separating automata are in some sense equivalent to the notion of universal trees at the heart of the second quasipolynomial time algorithm by Jurdziński and Lazić [13] and formalised by Fijalkow [10]. The consequence of this equivalence is a quasipolynomial lower bound on the size of separating automata for parity objectives.

Going beyond parity games, Colcombet and Fijalkow [6, 7] introduced universal graphs and showed an equivalence result between separating automata and universal graphs for any positionally determined objective. This paves the way for using separating automata for other classes of objectives. The first work in that direction is due to Fijalkow, Ohlmann, and Gawrychowski [11], who obtained matching upper and lower bounds on the size of separating automata for mean payoff objectives, matching the best known (deterministic) complexity for solving mean payoff games.

The goal of this paper is to show how to construct separating automata for combinations of objectives, thereby obtaining new algorithms for solving the corresponding games. We will consider two subclasses combining parity and mean payoff objectives, with the following goal: rather than constructing separating automata from scratch, we want to define constructions using separating automata for the atomic objectives as black boxes. In other words, we assume the existence of separating automata for parity objectives (provided in [8], see also [6]) and for mean payoff objectives (provided in [11]), and construct separating automata for combinations of these classes. An important benefit of this approach is its simplicity: as we will see, both constructions and their correctness proofs are rather short and focus on the interactions between the objectives.

Section 2 introduces separating automata and shows how they yield algorithms by reduction to safety games. The two classes of objectives we consider are disjunctions of parity and mean payoff objectives in Section 3, and disjunctions of mean payoff objectives in Section 4. We refer to Subsections 3.2 and 4.3 for related work on solving these games.

2 Preliminaries

We write $[i,j]$ for the interval $\{i, i+1, \dots, j-1, j\}$, and use parentheses to exclude extremal values, so for instance $[i,j)$ is $\{i, i+1, \dots, j-1\}$. We let $C$ denote a set of colours and write $C^*$ for finite sequences of colours (also called finite words), $C^+$ for finite non-empty sequences, and $C^\omega$ for infinite sequences (also called infinite words). The empty word is $\varepsilon$.

2.1 Graphs


We consider edge-labelled directed graphs: a graph $G$ is given by a (finite) set $V$ of vertices and a (finite) set $E \subseteq V \times C \times V$ of edges, with $C$ a set of colours, so we write $G = (V,E)$. An edge $(v,c,v')$ goes from the vertex $v$ to the vertex $v'$ and is labelled by the colour $c$. We sometimes write $V(G)$ for $V$ and $E(G)$ for $E$ to avoid any ambiguity. The size of a graph is its number of vertices. A vertex $v$ with no outgoing edge $(v,c,v') \in E$ is called a sink.

Paths

A path $\pi$ is a (finite or infinite) sequence

$$\pi = v_0 c_0 v_1 c_1 v_2 \dots$$

where for all $i$ we have $(v_i, c_i, v_{i+1}) \in E$. If it is finite, that is, $\pi = v_0 c_0 \dots c_{i-1} v_i$, we call $\mathrm{len}(\pi) = i \geq 0$ the length of $\pi$, and write $\mathrm{last}(\pi)$ for the last vertex $v_i$. The length of an infinite path is $\mathrm{len}(\pi) = \infty$. We say that $\pi$ starts from $v_0$ or is a path from $v_0$, and when $\pi$ is finite we say that $\pi$ is a path ending in $\mathrm{last}(\pi)$, or simply a path to $\mathrm{last}(\pi)$. We say that $v'$ is reachable from $v$ if there exists a path from $v$ to $v'$. We let $\pi_{\leq i}$ denote the prefix of $\pi$ of length $i$, meaning $\pi_{\leq i} = v_0 c_0 v_1 \dots c_{i-1} v_i$. A cycle is a path of length at least one from a vertex to itself.

We use $\mathrm{Path}_{\mathrm{fin}}(G)$, $\mathrm{Path}_\infty(G)$ and $\mathrm{Path}(G)$ to denote respectively the sets of finite paths of $G$, infinite paths of $G$, and their union. We sometimes drop $G$ when it is clear from context. For $v_0 \in V$ we also use $\mathrm{Path}_{\mathrm{fin}}(G, v_0)$, $\mathrm{Path}_\infty(G, v_0)$ and $\mathrm{Path}(G, v_0)$ to refer to the sets of such paths starting from $v_0$. We use $\mathrm{col}(\pi)$ to denote the (finite or infinite) sequence of colours $c_0 c_1 \dots$ induced by $\pi$.

Objectives

An objective is a set $\Omega \subseteq C^\omega$ of infinite sequences of colours. We say that a sequence of colours belonging to $\Omega$ satisfies $\Omega$, and extend this terminology to infinite paths: $\pi$ satisfies $\Omega$ if $\mathrm{col}(\pi) \in \Omega$.

Definition 1 (Graphs satisfying an objective).

Let $\Omega$ be an objective and $G$ a graph. We say that $G$ satisfies $\Omega$ if all infinite paths in $G$ satisfy $\Omega$.

Safety automata

A deterministic safety automaton over the alphabet $C$ is given by a finite set of states $Q$, an initial state $q_0 \in Q$, and a transition function $\delta : Q \times C \to Q$, so we write $\mathcal{A} = (Q, q_0, \delta)$. Note that $\delta$ is a partial function: $\delta(q,c)$ may be undefined for some $(q,c) \in Q \times C$. Such an automaton induces a graph whose set of vertices is $Q$ and whose set of edges is $E = \{(q, c, \delta(q,c)) : q \in Q, c \in C, \delta(q,c) \text{ defined}\}$; we therefore use the terminology of graphs to speak about automata, and identify an automaton with the graph it induces. Since we only consider deterministic safety automata in this paper, we omit the adjectives deterministic and safety and simply speak of an automaton. We extend $\delta$ to sequences of colours by the formulas $\delta^*(q, \varepsilon) = q$ and $\delta^*(q, wc) = \delta(\delta^*(q, w), c)$. The language recognised by an automaton $\mathcal{A}$ is $L(\mathcal{A})$ defined by

$$L(\mathcal{A}) = \{\mathrm{col}(\pi) : \pi \text{ an infinite path from } q_0\}.$$

Note that if $w$ is a finite prefix of a word in $L(\mathcal{A})$, then $\delta^*(q_0, w)$ is well defined.
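To fix ideas, here is a minimal sketch (a dict-based representation we choose for illustration, not code from the paper) of a deterministic safety automaton with a partial transition function and the extension $\delta^*$ to words:

```python
# A deterministic safety automaton with a partial transition function.
class SafetyAutomaton:
    def __init__(self, states, initial, delta):
        # delta: dict mapping (state, colour) -> state; a missing key
        # means the transition is undefined (the automaton rejects).
        self.states = states
        self.initial = initial
        self.delta = delta

    def run(self, word):
        """Return delta*(initial, word), or None if undefined."""
        q = self.initial
        for c in word:
            if (q, c) not in self.delta:
                return None
            q = self.delta[(q, c)]
        return q

# Example: over colours {0, 1}, accept the words never containing two
# consecutive 1s (state 1 remembers "the last colour was 1").
aut = SafetyAutomaton(
    states={0, 1},
    initial=0,
    delta={(0, 0): 0, (0, 1): 1, (1, 0): 0},  # (1, 1) is undefined
)
assert aut.run([0, 1, 0, 1]) == 1
assert aut.run([1, 1]) is None
```

A word is in $L(\mathcal{A})$ exactly when every finite prefix has a well-defined run, matching the remark above.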

2.2 Games

Arenas

An arena is given by a graph $G$ together with a partition $V = V_{\mathrm{Eve}} \uplus V_{\mathrm{Adam}}$ of its set of vertices describing which player controls each vertex.

Games

A game is given by an arena and an objective $\Omega$. We often let $\mathcal{G}$ denote a game; its size is the size of the underlying graph. It is played as follows. A token is initially placed on some vertex $v_0$, and the player who controls this vertex pushes the token along an edge, reaching a new vertex; the player who controls this new vertex takes over, and this interaction goes on, either forever, describing an infinite path, or until reaching a sink.

We say that a path is winning (we always take the point of view of Eve, so winning means winning for Eve, and similarly a strategy is a strategy for Eve) if it is infinite and satisfies $\Omega$, or finite and ends in a sink controlled by Adam. This encodes the usual convention: if a player cannot move they lose; in other words, sinks controlled by Adam are winning (for Eve) and sinks controlled by Eve are losing.

We extend the notations $\mathrm{Path}_{\mathrm{fin}}$, $\mathrm{Path}_\infty$ and $\mathrm{Path}$ to games by considering the underlying graph.

Strategies

A strategy in $\mathcal{G}$ is a partial map $\sigma : \mathrm{Path}_{\mathrm{fin}}(\mathcal{G}) \to E$ such that $\sigma(\pi)$ is an outgoing edge of $\mathrm{last}(\pi)$ whenever it is defined. We say that a path $\pi = v_0 c_0 v_1 \dots$ is consistent with $\sigma$ if for all $i < \mathrm{len}(\pi)$, if $v_i \in V_{\mathrm{Eve}}$ then $\sigma$ is defined over $\pi_{\leq i}$ and $\sigma(\pi_{\leq i}) = (v_i, c_i, v_{i+1})$. A path consistent with $\sigma$ is maximal if it is not a strict prefix of another path consistent with $\sigma$ (in particular, infinite consistent paths are maximal).

A strategy $\sigma$ is winning from $v_0$ if all maximal paths consistent with $\sigma$ are winning. Note that in particular, if a finite path $\pi$ is consistent with a winning strategy $\sigma$ and ends in a vertex belonging to Eve, then $\sigma$ is defined over $\pi$. We say that $v_0$ is a winning vertex of $\mathcal{G}$, or that Eve wins from $v_0$ in $\mathcal{G}$, if there exists a winning strategy from $v_0$.

Positional strategies

Positional strategies make decisions based only on the current vertex. Such a strategy is given by $\widehat{\sigma} : V_{\mathrm{Eve}} \to E$. A positional strategy induces a strategy $\sigma : \mathrm{Path}_{\mathrm{fin}} \to E$ from any vertex $v_0$ by setting $\sigma(\pi) = \widehat{\sigma}(\mathrm{last}(\pi))$ when $\mathrm{last}(\pi) \in V_{\mathrm{Eve}}$.

Definition 2.

We say that an objective $\Omega$ is positionally determined if for every game with objective $\Omega$ and vertex $v_0$, if Eve wins from $v_0$ then there exists a positional winning strategy from $v_0$.

Given a game $\mathcal{G}$, a vertex $v_0$, and a positional strategy $\sigma$, we let $\mathcal{G}[\sigma, v_0]$ denote the graph obtained by restricting $\mathcal{G}$ to the vertices reachable from $v_0$ when playing $\sigma$ and to the moves prescribed by $\sigma$. Formally, the sets of vertices and edges are

$$\begin{array}{lll} V[\sigma,v_0] &=& \{v \in V : \text{there exists a path from } v_0 \text{ to } v \text{ consistent with } \sigma\},\\ E[\sigma,v_0] &=& \{(v,c,v') \in E : v \in V_{\mathrm{Adam}} \text{ or } (v \in V_{\mathrm{Eve}} \text{ and } \sigma(v) = (v,c,v'))\}\\ &\cap& V[\sigma,v_0] \times C \times V[\sigma,v_0]. \end{array}$$
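To make the restriction concrete, here is a small sketch (with an assumed list/dict representation, not the paper's code) computing $\mathcal{G}[\sigma, v_0]$: first keep only the edges allowed by $\sigma$, then restrict to the vertices reachable from $v_0$.

```python
from collections import deque

def restrict(eve, edges, sigma, v0):
    """Compute G[sigma, v0] for a positional Eve strategy.

    edges: list of (v, c, v') triples; sigma: dict mapping each Eve
    vertex v to its chosen outgoing edge (v, c, v').
    Returns (V[sigma, v0], E[sigma, v0])."""
    # Keep Adam's edges, and for Eve only the edge prescribed by sigma.
    allowed = [e for e in edges if e[0] not in eve or sigma.get(e[0]) == e]
    # Restrict to vertices reachable from v0 along allowed edges.
    reachable, queue = {v0}, deque([v0])
    while queue:
        v = queue.popleft()
        for (u, c, w) in allowed:
            if u == v and w not in reachable:
                reachable.add(w)
                queue.append(w)
    return reachable, [e for e in allowed
                       if e[0] in reachable and e[2] in reachable]

# Eve controls 'a' and commits to the edge towards 'b'; 'c' disappears.
V, E = restrict({'a'}, [('a', 1, 'b'), ('a', 2, 'c'), ('b', 0, 'a')],
                {'a': ('a', 1, 'b')}, 'a')
assert V == {'a', 'b'}
```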

In this paper we will consider prefix independent objectives for technical convenience.

Definition 3.

We say that an objective $\Omega$ is prefix independent if for all $u \in C^*$ and $v \in C^\omega$, we have $uv \in \Omega$ if and only if $v \in \Omega$.

Lemma 1.

Let $\Omega$ be a prefix independent objective, $\mathcal{G}$ a game, $v_0$ a vertex, and $\sigma$ a positional strategy. Then $\sigma$ is winning from $v_0$ if and only if the graph $\mathcal{G}[\sigma, v_0]$ satisfies $\Omega$ and does not contain any sink controlled by Eve.

Solving games

Decision problem

The decision problem we consider in this paper, called solving a game, is the following: given a game $\mathcal{G}$ and an initial vertex $v_0$, does Eve have a winning strategy from $v_0$ in $\mathcal{G}$? Each objective yields a class of games, so we speak for instance of “solving mean payoff games”.

Computational model

The complexity of solving a game depends on a number of parameters, originating either from the underlying graph or from the objective; some of the objectives involve rational numbers. The typical parameters from the underlying graph are the number $n$ of vertices and the number $m$ of edges.

We use the classical Random Access Model (RAM) of computation with fixed word size, which is the size of a memory cell on which arithmetic operations take constant time. We specify for each class of objectives the word size.

Reduction to safety games

Safety games

The safety objective $\mathtt{Safe}$ is defined over the set of colours $C = \{\varepsilon\}$ by $\mathtt{Safe} = \{\varepsilon^\omega\}$: in words, all infinite paths are winning, so Eve loses only by reaching a sink that she controls. Since there is a unique colour, when manipulating safety games we ignore the colour component of edges.

Note that in safety games, strategies can equivalently be defined as maps to $V$ (and not $E$): since there is a unique colour, only the target of an edge matters once the source is fixed. We use this abuse of notation for the sake of simplicity and conciseness.

Theorem 1.

There exists an algorithm in the RAM model with word size $w = \log(n)$ computing the set of winning vertices of a safety game, running in time $O(m)$ and space $O(n)$.
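The algorithm behind this bound is the classical backward-propagation (attractor) computation; here is a hedged sketch in linear time (representation assumed, not code from the paper): Eve loses exactly from the vertices where Adam can force reaching a sink controlled by Eve.

```python
from collections import defaultdict, deque

def solve_safety_game(vertices_eve, vertices_adam, edges):
    """Return the set of vertices from which Eve wins the safety game.

    edges: list of (u, v) pairs (the unique colour is dropped).
    A vertex is losing for Eve iff Adam can force reaching an
    Eve-controlled sink; the propagation visits each edge O(1) times."""
    succ_count = defaultdict(int)
    pred = defaultdict(list)
    for (u, v) in edges:
        succ_count[u] += 1
        pred[v].append(u)

    # Base case: Eve-controlled sinks are losing.
    losing = {v for v in vertices_eve if succ_count[v] == 0}
    queue = deque(losing)
    while queue:
        v = queue.popleft()
        for u in pred[v]:
            if u in losing:
                continue
            if u in vertices_adam:
                losing.add(u); queue.append(u)   # Adam moves into the losing region
            else:
                succ_count[u] -= 1               # one escape edge fewer for Eve
                if succ_count[u] == 0:
                    losing.add(u); queue.append(u)
    return (vertices_eve | vertices_adam) - losing

# A self-loop lets Eve play forever; an Adam-controlled sink is winning for Eve.
assert solve_safety_game({'a'}, set(), [('a', 'a')]) == {'a'}
assert solve_safety_game(set(), {'b'}, []) == {'b'}
```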

Reduction using safety automata

Let $\mathcal{A} = (Q, q_0, \delta)$ be an automaton and $\mathcal{G}$ a game with objective $\Omega$. We define the chained game $\mathcal{G} \triangleright \mathcal{A}$ as the safety game with vertices $V' = V'_{\mathrm{Eve}} \uplus V'_{\mathrm{Adam}}$ and edges $E'$ given by

$$\begin{array}{lll} V'_{\mathrm{Eve}} &=& (V_{\mathrm{Eve}} \times Q) \cup \{\bot\},\\ V'_{\mathrm{Adam}} &=& V_{\mathrm{Adam}} \times Q,\\ E' &=& \{((v,q), \varepsilon, (v', \delta(q,c))) : q \in Q, (v,c,v') \in E, \delta(q,c) \text{ is defined}\}\\ &\cup& \{((v,q), \varepsilon, \bot) : q \in Q, (v,c,v') \in E, \delta(q,c) \text{ is not defined}\}. \end{array}$$

In words: from $(v,q) \in V \times Q$, the player who controls $v$ chooses an edge $(v,c,v') \in E$, and the game progresses to $(v', \delta(q,c))$ if $q$ has an outgoing edge with colour $c$ in $\mathcal{A}$, and to $\bot$ otherwise, which is losing for Eve.

Note that the obtained game $\mathcal{G} \triangleright \mathcal{A}$ is indeed a safety game: Eve wins if she can play forever or reaches a sink controlled by Adam.
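The product construction above can be sketched directly (a minimal illustration with an assumed dict/list representation, not the paper's code): pair every game vertex with every automaton state, and route undefined transitions to the losing sink $\bot$.

```python
BOT = "bot"  # the losing sink, standing in for ⊥

def chain(eve, adam, edges, q0, delta):
    """Build the safety game G ▷ A.

    edges: list of (v, c, v') triples of G; delta: dict (q, c) -> q'
    (partial).  Returns (eve', adam', edges'), where edges' are pairs
    since the safety game has a single colour."""
    states = {q0} | {q for (q, _) in delta} | set(delta.values())
    eve2 = {(v, q) for v in eve for q in states} | {BOT}
    adam2 = {(v, q) for v in adam for q in states}
    edges2 = []
    for (v, c, v2) in edges:
        for q in states:
            if (q, c) in delta:
                edges2.append(((v, q), (v2, delta[(q, c)])))
            else:
                edges2.append(((v, q), BOT))  # undefined transition: Eve loses
    return eve2, adam2, edges2

# One-vertex game with a self-loop of colour 1, one-state automaton
# reading colour 1 forever: the product has the expected self-loop.
eve2, adam2, edges2 = chain({'a'}, set(), [('a', 1, 'a')], 0, {(0, 1): 0})
assert (('a', 0), ('a', 0)) in edges2
```

Feeding the result to a linear-time safety game solver yields the algorithm whose complexity depends on the size of the automaton.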

Lemma 2.

Let $\Omega$ be an objective, $\mathcal{G}$ a game with objective $\Omega$, $\mathcal{A}$ an automaton, and $v_0 \in V$. If $\mathcal{A}$ satisfies $\Omega$ and Eve wins from $(v_0, q_0)$ in $\mathcal{G} \triangleright \mathcal{A}$, then Eve wins from $v_0$ in $\mathcal{G}$.

Proof.

Let $\sigma'$ be a winning strategy from $(v_0, q_0)$ in $\mathcal{G} \triangleright \mathcal{A}$. We construct a strategy $\sigma$ from $v_0$ in $\mathcal{G}$ which simulates $\sigma'$ in the following sense, which we call the simulation property:

for any path $\pi = v_0 c_0 \dots c_{i-1} v_i$ in $\mathcal{G}$ consistent with $\sigma$,

there exists a path $\pi' = (v_0, q_0) \dots (v_i, q_i)$ in $\mathcal{G} \triangleright \mathcal{A}$ consistent with $\sigma'$.

We define $\sigma$ over finite paths $v_0 c_0 \dots c_{i-1} v_i$ with $v_i \in V_{\mathrm{Eve}}$ by induction over $i$, so that the simulation property holds. For $i = 0$, $\pi' = (v_0, q_0)$ is a path in $\mathcal{G} \triangleright \mathcal{A}$ consistent with $\sigma'$. Let $\pi = v_0 c_0 \dots c_{i-1} v_i$ be a path in $\mathcal{G}$ consistent with $\sigma$ with $v_i \in V_{\mathrm{Eve}}$; we want to define $\sigma(\pi)$. Thanks to the simulation property there exists a path $\pi' = (v_0, q_0) \dots (v_i, q_i)$ in $\mathcal{G} \triangleright \mathcal{A}$ consistent with $\sigma'$. Then $(v_i, q_i) \in V'_{\mathrm{Eve}}$, and since $\sigma'$ is winning it is defined over $\pi'$; let us write $\sigma'(\pi') = (v_{i+1}, \delta(q_i, c_i))$ with $(v_i, c_i, v_{i+1}) \in E$. We set $\sigma(\pi) = v_{i+1}$.

To conclude the definition of $\sigma$ we need to show that the simulation property extends to paths of length $i+1$. Let $\pi_{i+1} = \pi\, c_i v_{i+1} = v_0 c_0 \dots v_i c_i v_{i+1}$ be consistent with $\sigma$. We apply the simulation property to $\pi$ to construct a path $\pi' = (v_0, q_0) \dots (v_i, q_i)$ in $\mathcal{G} \triangleright \mathcal{A}$ consistent with $\sigma'$. Consider $\pi'_{+1} = \pi'\, (v_{i+1}, \delta(q_i, c_i))$ with $(v_i, c_i, v_{i+1}) \in E$; we claim that $\pi'_{+1}$ is consistent with $\sigma'$. Indeed, if $v_i \in V_{\mathrm{Adam}}$ there is nothing to prove, and if $v_i \in V_{\mathrm{Eve}}$ this holds by construction: since $\sigma(\pi) = v_{i+1}$ we have $\sigma'(\pi') = (v_{i+1}, \delta(q_i, c_i))$. This concludes the inductive proof of the simulation property together with the definition of $\sigma$.

We now prove that $\sigma$ is a winning strategy from $v_0$. Let $\pi = v_0 c_0 v_1 \dots$ be a maximal path consistent with $\sigma$, and let $\pi' = (v_0, q_0)(v_1, q_1) \dots$ be the corresponding path in $\mathcal{G} \triangleright \mathcal{A}$ consistent with $\sigma'$. Let us first assume that $\pi$ is finite, and let $v_i = \mathrm{last}(\pi)$. If $v_i \in V_{\mathrm{Eve}}$, then by construction $\sigma$ is defined over $\pi$, so the path $v_0 c_0 \dots v_i c_i v_{i+1}$, where $\sigma(\pi) = v_{i+1}$, is consistent with $\sigma$, contradicting the maximality of $\pi$. If however $v_i \in V_{\mathrm{Adam}}$, then by maximality of $\pi$, $v_i$ must be a sink, hence $\pi$ is winning. Now if $\pi$ is infinite then so is $\pi'$, and then $q_0 c_0 q_1 c_1 \dots$ is a path in $\mathcal{A}$, so $\mathrm{col}(\pi) = c_0 c_1 \dots \in L(\mathcal{A}) \subseteq \Omega$, and $\pi$ is winning. We conclude that $\sigma$ is a winning strategy from $v_0$ in $\mathcal{G}$. ∎

It is not hard to see that if $\mathcal{A}$ is deterministic and recognises exactly $\Omega$, then $\mathcal{G}$ and $\mathcal{G} \triangleright \mathcal{A}$ are equivalent. However, for many objectives $\Omega$ (for instance, parity or mean payoff objectives), a simple topological argument shows that such deterministic automata do not exist. Separating automata are defined by introducing $n$ as a parameter and relaxing the condition $L(\mathcal{A}) = \Omega$ to the weaker condition $\Omega^{\mid n} \subseteq L(\mathcal{A}) \subseteq \Omega$, where $\Omega^{\mid n}$ is the set of infinite sequences of colours that label paths in graphs of size at most $n$ satisfying $\Omega$. Formally (here we use the fact that $\Omega$ is prefix independent to simplify the definition: we consider infinite paths from any vertex of the graph; to extend this definition beyond prefix independent objectives we would need to fix an initial vertex),

$$\Omega^{\mid n} = \{\mathrm{col}(\pi) \mid \pi \in \mathrm{Path}_\infty(G),\ G \text{ has size at most } n \text{ and satisfies } \Omega\}.$$
Definition 4.

An $(n, \Omega)$-separating automaton $\mathcal{A}$ is an automaton such that

$$\Omega^{\mid n} \subseteq L(\mathcal{A}) \subseteq \Omega.$$

The definition given here differs from the original one given by Bojańczyk and Czerwiński [2], who use a different relaxation $\Omega_{\mid n}$ satisfying $\Omega^{\mid n} \subseteq \Omega_{\mid n} \subseteq \Omega$. Therefore a separating automaton in the sense of [2] is also a separating automaton in our sense (but not conversely).

Theorem 2.

Let $\Omega$ be a positionally determined objective, $\mathcal{A}$ an $(n, \Omega)$-separating automaton, $\mathcal{G}$ a game of size $n$, and $v_0 \in V$. Then Eve wins from $v_0$ in $\mathcal{G}$ if and only if she wins from $(v_0, q_0)$ in $\mathcal{G} \triangleright \mathcal{A}$.

Proof.

The “if” direction follows directly from Lemma 2. Conversely, assume that Eve wins from $v_0$ in $\mathcal{G}$, and let $\sigma$ be a winning strategy from $v_0$ in $\mathcal{G}$, which we choose to be positional. As explained in the definition of safety games, without loss of generality we can see $\sigma$ as a function $\sigma : V_{\mathrm{Eve}} \to V$. We define a positional strategy $\sigma' : V'_{\mathrm{Eve}} \to V'$ by

$$\sigma'(v,q) = (\sigma(v), \delta(q,c)) \text{ where } c \text{ is such that } (v, c, \sigma(v)) \in E.$$

Let $\pi' = (v_0, q_0) \dots (v_i, q_i)$, with $e_j = (v_j, c_j, v_{j+1}) \in E$ for all $j \leq i$, be a finite path in $\mathcal{G} \triangleright \mathcal{A}$ consistent with $\sigma'$, and let $\pi = v_0 c_0 \dots c_i v_{i+1}$. Then by definition of $\sigma'$, $\pi$ is consistent with $\sigma$, which rephrases as $\pi$ being a path in $\mathcal{G}[\sigma, v_0]$. Now, $\mathcal{G}[\sigma, v_0]$ is a graph of size at most $n$ satisfying $\Omega$, so $\mathrm{col}(\pi) = c_0 \dots c_i$ is a prefix of a word in $\Omega^{\mid n} \subseteq L(\mathcal{A})$. This implies that $\delta^*(q_0, c_0 \dots c_i)$ is well defined, hence so is $\delta(q_i, c_i)$. In particular, $\sigma'$ induces a strategy in $\mathcal{G} \triangleright \mathcal{A}$ from $(v_0, q_0)$. We now prove that it is winning.

Let $\pi' = (v_0, q_0) \dots$ be a maximal path in $\mathcal{G} \triangleright \mathcal{A}$ consistent with $\sigma'$, which we assume to be finite towards a contradiction. Let $\mathrm{last}(\pi') = (v_i, q_i)$ be a sink, and define $\pi = v_0 c_0 \dots c_{i-1} v_i$. By definition of $\pi'$, the path $\pi$ is a path in $\mathcal{G}$ consistent with $\sigma$. Since $\pi$ is finite and $\sigma$ is winning, $v_i$ cannot be a losing sink, so it has an outgoing edge $e_i = (v_i, c, v') \in E$. Then $((v_i, q_i), \varepsilon, (v', \delta(q_i, c))) \in E'$ if $\delta(q_i, c)$ is defined, and $((v_i, q_i), \varepsilon, \bot) \in E'$ otherwise; in both cases $(v_i, q_i)$ is not a sink: a contradiction. Hence $\pi'$ is an infinite path in the safety game $\mathcal{G} \triangleright \mathcal{A}$; it is winning by definition. ∎

Existing constructions for parity and mean payoff games

Parity games

The parity objective is defined over the set of colours $[0,d] \subseteq \mathbb{N}$ as follows:

$$\mathtt{Parity}_d = \{w \in [0,d]^\omega : \text{the largest priority appearing infinitely many times in } w \text{ is even}\}.$$

As mentioned above, the definition of separating automata given in [2] is slightly different, but the result below indeed holds for both definitions, as explained in [8].

Theorem 3 ([2, 8]).

Let $n, d \in \mathbb{N}$. There exists an $(n, \mathtt{Parity}_d)$-separating automaton of size

$$O\left(n \cdot \binom{\lceil \log(n) \rceil + d/2 - 1}{\lceil \log(n) \rceil}\right).$$
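This bound is quasipolynomial in $n$ for unbounded $d$ and polynomial when $d = O(\log n)$. The following snippet just evaluates the binomial factor numerically (reading $\log$ as base 2, an assumption on our part, and ignoring the hidden constant):

```python
from math import comb, ceil, log2

def parity_bound(n, d):
    """Evaluate n * C(ceil(log2 n) + d/2 - 1, ceil(log2 n)) from Theorem 3."""
    k = ceil(log2(n))
    return n * comb(k + d // 2 - 1, k)

# For n = 8, d = 4: k = 3 and the binomial is C(4, 3) = 4.
assert parity_bound(8, 4) == 32
```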

Mean payoff games

The mean payoff objective is defined over the set of colours $[-N, N] \subseteq \mathbb{Z}$ as follows:

$$\mathtt{MeanPayoff}_N = \left\{w \in [-N,N]^\omega : \liminf_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} w_i \geq 0\right\}.$$
Theorem 4 ([11]).

Let $n, N \in \mathbb{N}$. There exists an $(n, \mathtt{MeanPayoff}_N)$-separating automaton of size $O(nN)$.
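One way to picture the $O(nN)$ bound is the following sketch (our reading of a natural construction, hedged: not necessarily the exact automaton of [11]): track the running sum of weights, clipped above at $0$, and reject as soon as it would drop below $-(n-1)N$. In a graph of size at most $n$ satisfying mean payoff, every infix sum is at least $-(n-1)N$, so such paths are accepted; a word with negative mean payoff has sums decreasing without bound and is eventually rejected.

```python
def mean_payoff_separating_delta(n, N):
    """Partial transition dict of the sketched automaton.

    States are the clipped running sums {-(n-1)*N, ..., 0}; a transition
    is undefined (reject) when the sum would fall below -(n-1)*N.
    This gives (n-1)*N + 1 states, i.e. O(nN) as in Theorem 4."""
    bound = (n - 1) * N
    delta = {}
    for s in range(-bound, 1):
        for w in range(-N, N + 1):
            t = min(0, s + w)          # clip above at 0
            if t >= -bound:            # otherwise: reject
                delta[(s, w)] = t
    return delta

# n = 3, N = 2: the state space is {-4, ..., 0}.
delta = mean_payoff_separating_delta(3, 2)
q = 0
for w in [-2, 1, -2, 1]:               # clipped sums: -2, -1, -3, -2
    q = delta[(q, w)]
assert q == -2
```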

3 Disjunction of parity and mean payoff

We define the objective $\mathtt{Parity}_d \vee \mathtt{MeanPayoff}_N$, referred to as ‘disjunction of parity and mean payoff’. The set of colours is $[0,d] \times [-N,N]$. For $w \in ([0,d] \times [-N,N])^\omega$ we write $w_P \in [0,d]^\omega$ for the projection on the first component and $w_{MP} \in [-N,N]^\omega$ for the projection on the second component.

$$\mathtt{Parity}_d \vee \mathtt{MeanPayoff}_N = \{w \in ([0,d] \times [-N,N])^\omega : w_P \in \mathtt{Parity}_d \text{ or } w_{MP} \in \mathtt{MeanPayoff}_N\}.$$

We refer to Subsection 3.2 for a discussion on existing results.

Theorem 5 ([4]).

Disjunctions of parity and mean payoff objectives are prefix independent and positionally determined.

3.1 Separating automata for disjunctions of parity and mean payoff objectives

Theorem 6.

Let $n, d, N \in \mathbb{N}$, let $\mathcal{A}_P$ be an $(n, \mathtt{Parity}_d)$-separating automaton, and let $\mathcal{A}_{MP}$ be an $(n, \mathtt{MeanPayoff}_N)$-separating automaton. Then there exists an $(n, \mathtt{Parity}_d \vee \mathtt{MeanPayoff}_N)$-separating automaton of size $O(d \cdot |\mathcal{A}_P| \cdot |\mathcal{A}_{MP}|)$.

Proof.

Let us write $\mathcal{A}_P = (Q_P, q_{0,P}, \delta_P)$ and $\mathcal{A}_{MP} = (Q_{MP}, q_{0,MP}, \delta_{MP})$. We define a deterministic automaton $\mathcal{A}_{P \vee MP}$: the set of states is $[0,d] \times Q_P \times Q_{MP}$, the initial state is $(d, q_{0,P}, q_{0,MP})$, and the transition function is

$$\delta((p, q_P, q_{MP}), (p', w)) = \begin{cases} (\max(p, p'), q_P, \delta_{MP}(q_{MP}, w)) & \text{if } \delta_{MP}(q_{MP}, w) \text{ is defined,}\\ (0, \delta_P(q_P, \max(p, p')), q_{0,MP}) & \text{if } \delta_{MP}(q_{MP}, w) \text{ is not defined}. \end{cases}$$

Intuitively, $\mathcal{A}_{P \vee MP}$ simulates the automaton $\mathcal{A}_{MP}$, storing the maximal priority seen since the last reset (or from the beginning). If the automaton $\mathcal{A}_{MP}$ rejects, the automaton resets, which means two things: it simulates one transition of the automaton $\mathcal{A}_P$ using the stored priority, and it resets the state of $\mathcal{A}_{MP}$ to its initial state. The automaton $\mathcal{A}_{P \vee MP}$ rejects only if $\mathcal{A}_P$ rejects during a reset.
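The product-with-reset transition function can be sketched directly from the case distinction above (a hedged illustration with dict-based partial transition functions as an assumed representation, not the paper's code):

```python
def combined_automaton(d, q0_p, delta_p, q0_mp, delta_mp):
    """Initial state and transition function of A_{P∨MP}.

    States are triples (p, q_P, q_MP); letters are pairs (p', w).
    delta_p, delta_mp: dicts encoding the partial transitions of the
    two black-box automata; None encodes "undefined" (reject)."""
    initial = (d, q0_p, q0_mp)

    def delta(state, letter):
        p, qp, qmp = state
        p2, w = letter
        if (qmp, w) in delta_mp:                      # simulate A_MP
            return (max(p, p2), qp, delta_mp[(qmp, w)])
        if (qp, max(p, p2)) in delta_p:               # reset: one A_P step
            return (0, delta_p[(qp, max(p, p2))], q0_mp)
        return None                                   # A_P rejects during a reset

    return initial, delta

# Toy black boxes: A_MP always rejects, A_P only accepts priority 2.
initial, delta = combined_automaton(2, 0, {(0, 2): 0}, 0, {})
assert initial == (2, 0, 0)
assert delta((2, 0, 0), (1, -1)) == (0, 0, 0)  # reset with priority max(2,1)=2
```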

We now prove that $\mathcal{A}_{P \vee MP}$ is an $(n, \mathtt{Parity}_d \vee \mathtt{MeanPayoff}_N)$-separating automaton.

  • $L(\mathcal{A}_{P \vee MP}) \subseteq \mathtt{Parity}_d \vee \mathtt{MeanPayoff}_N$. Let $\pi$ be an infinite path accepted by $\mathcal{A}_{P \vee MP}$; we distinguish two cases by looking at the run of $\pi$. We extend the previous notation for projections: for a path $\pi$, we write $\pi_P$ for its projection on the first component and $\pi_{MP}$ for its projection on the second component.

    • If the run is reset finitely many times, let us write $\pi = \pi' \pi''$ where $\pi'$ is finite with $\pi'_{MP}$ rejected by $\mathcal{A}_{MP}$, and $\pi''$ is infinite with $\pi''_{MP}$ accepted by $\mathcal{A}_{MP}$. Since $\mathcal{A}_{MP}$ satisfies $\mathtt{MeanPayoff}_N$, this implies that $\pi''_{MP}$ satisfies $\mathtt{MeanPayoff}_N$, so by prefix independence $\pi_{MP}$ satisfies $\mathtt{MeanPayoff}_N$, hence $\pi$ satisfies $\mathtt{Parity}_d \vee \mathtt{MeanPayoff}_N$.

    • If the run is reset infinitely many times, we may find an infinite decomposition $\pi = \pi^1 \pi^2 \dots$ where for each $i \in \mathbb{N}$, the path $\pi^i_{MP}$ is rejected by $\mathcal{A}_{MP}$ while its proper prefixes are not. Let $p_i$ be the maximum priority appearing in $\pi^i_P$; the run of $\pi$ over $\mathcal{A}_{P \vee MP}$ induces a run of $p_1 p_2 \dots$ over $\mathcal{A}_P$. Since $\mathcal{A}_P$ satisfies $\mathtt{Parity}_d$, this implies that $p_1 p_2 \dots$ satisfies $\mathtt{Parity}_d$. By definition of the $p_i$'s, this implies that $\pi_P$ satisfies $\mathtt{Parity}_d$, so a fortiori $\pi$ satisfies $\mathtt{Parity}_d \vee \mathtt{MeanPayoff}_N$.

  • $(\mathtt{Parity}_d \vee \mathtt{MeanPayoff}_N)^{\mid n} \subseteq L(\mathcal{A}_{P \vee MP})$. Let $G$ be a graph of size at most $n$ satisfying $\mathtt{Parity}_d \vee \mathtt{MeanPayoff}_N$; we need to show that all infinite paths of $G$ are accepted by $\mathcal{A}_{P \vee MP}$.

    We construct a graph G_MPG_{\_}{MP} over the set of colours [N,N][-N,N] and a graph G_PG_{\_}P over the set of colours [0,d][0,d]. Both use the same set of vertices as GG. We prove that the graph G_MPG_{\_}{MP} satisfies 𝙼𝚎𝚊𝚗𝙿𝚊𝚢𝚘𝚏𝚏N\mathtt{MeanPayoff}_{N}, which is used to prove that G_PG_{\_}P satisfies 𝙿𝚊𝚛𝚒𝚝𝚢_d\mathtt{Parity}_{\_}d.

    • The graph G_MPG_{\_}{MP}. There is an edge (v,w,v)E(G_MP)(v,w,v^{\prime})\in E(G_{\_}{MP}) if there exists p[0,d]p\in[0,d] such that e=(v,(p,w),v)E(G)e=(v,(p,w),v^{\prime})\in E(G), and either pp is odd or pp is even and ee is not contained in any negative cycle with maximum priority pp.

      We claim that G_MPG_{\_}{MP} satisfies 𝙼𝚎𝚊𝚗𝙿𝚊𝚢𝚘𝚏𝚏N\mathtt{MeanPayoff}_{N}. Assume towards contradiction that G_MPG_{\_}{MP} contains a negative cycle C_MPC_{\_}{MP}. It induces a negative cycle CC in GG, by definition of G_MPG_{\_}{MP} necessarily the maximum priority in CC is odd. Hence CC is a negative odd cycle in GG, a contradiction.

    • The graph G_PG_{\_}{P}. There is an edge (v,p,v)E(G_P)(v,p,v^{\prime})\in E(G_{\_}P) if there exists a path in GG from vv to vv^{\prime} with maximum priority pp and (whose projection on the mean payoff component is) rejected by 𝒜_MP\mathcal{A}_{\_}{MP}.

      We claim that G_PG_{\_}{P} satisfies 𝙿𝚊𝚛𝚒𝚝𝚢_d\mathtt{Parity}_{\_}d. Assume towards contradiction that G_PG_{\_}P contains an odd cycle C_PC_{\_}P. For each edge in this cycle there is a corresponding path rejected by 𝒜_MP\mathcal{A}_{\_}{MP} with the same maximum priority. Putting these paths together yields an odd cycle CC, of maximal priority pp, in GG whose projection on the mean payoff component is rejected by 𝒜_MP\mathcal{A}_{\_}{MP}. Since 𝙼𝚎𝚊𝚗𝙿𝚊𝚢𝚘𝚏𝚏NnL(𝒜_MP)\mathtt{MeanPayoff}_{N}^{\mid n}\subseteq L(\mathcal{A}_{\_}{MP}) and as we have shown, G_MPG_{\_}{MP} satisfies 𝙼𝚎𝚊𝚗𝙿𝚊𝚢𝚘𝚏𝚏N\mathtt{MeanPayoff}_{N}, the projection of CC on the mean payoff component is not in G_MPG_{\_}{MP}, so there exists an edge (v,(p,w),v)(v,(p^{\prime},w),v^{\prime}) in CC such that (v,w,v)(v,w,v^{\prime}) is not in E(G_MP)E(G_{\_}{MP}). This implies that pp^{\prime} is even, so in particular p<pp^{\prime}<p, and (v,(p,w),v)(v,(p^{\prime},w),v^{\prime}) is contained in a negative cycle CC^{\prime} in GG with maximum priority pp^{\prime}. Combining the odd cycle CC followed by sufficiently many iterations of the negative cycle CC^{\prime} yields a path in GG, with negative weight, and maximal priority pp which is odd, a contradiction.

    Let π\pi an infinite path in GG, we show that π\pi is accepted by 𝒜_PMP\mathcal{A}_{\_}{P\vee MP}. Let us consider first only the mean payoff component: we run π\pi repeatedly over 𝒜_MP\mathcal{A}_{\_}{MP}, and distinguish two cases.

    • If there are finitely many resets, let us write π=π_1π_2π_kπ\pi=\pi_{\_}1\pi_{\_}2\dots\pi_{\_}k\pi^{\prime} where π_1,,π_k\pi_{\_}1,\dots,\pi_{\_}k are paths rejected by 𝒜_MP\mathcal{A}_{\_}{MP} with proper prefixes accepted by 𝒜_MP\mathcal{A}_{\_}{MP} and π\pi^{\prime} is accepted by 𝒜_MP\mathcal{A}_{\_}{MP}. To show that π\pi is accepted by 𝒜_PMP\mathcal{A}_{\_}{P\vee MP} we need to show that the automaton 𝒜_P\mathcal{A}_{\_}P accepts the word p_1p_kp_{\_}1\dots p_{\_}k where p_ip_{\_}i is the maximum priority appearing in π_i\pi_{\_}i for i[1,k]i\in[1,k]. Indeed, p_1p_kp_{\_}1\dots p_{\_}k is a path in G_PG_{\_}P, which is a graph of nn satisfying 𝙿𝚊𝚛𝚒𝚝𝚢_d\mathtt{Parity}_{\_}d, so 𝒜_P\mathcal{A}_{\_}P accepts p_1p_kp_{\_}1\dots p_{\_}k.

    • If there are infinitely many resets, let us write π=π_1π_2\pi=\pi_{\_}1\pi_{\_}2\dots where for each ii\in\mathbb{N}, the path π_i\pi_{\_}i is rejected by 𝒜_MP\mathcal{A}_{\_}{MP} and its proper prefixes are not. To show that π\pi is accepted by 𝒜_PMP\mathcal{A}_{\_}{P\vee MP} we need to show that the automaton 𝒜_P\mathcal{A}_{\_}P accepts the word p_1p_2p_{\_}1p_{\_}2\dots, which holds for the same reason as the other case: p_1p_2p_{\_}1p_{\_}2\dots is a path in G_PG_{\_}P.

3.2 The complexity of solving disjunctions of parity and mean payoff games using separating automata

The class of games with a disjunction of a parity and a mean payoff objective was introduced in [4], with a twist: that paper studies the case of a conjunction instead of a disjunction, which is more natural in many applications. The two settings are equivalent here since both parity and mean payoff objectives are self-dual: if the objective of Eve is a disjunction of a parity and a mean payoff objective, then the objective of Adam is a conjunction of a parity and a mean payoff objective. Hence all existing results apply here with suitable changes. The reason why we consider the objective of the opponent is that it is positionally determined, which is the key assumption for using the separating automaton technology.

The state of the art for solving disjunctions of parity and mean payoff games is due to [9], which presents a pseudo-quasi-polynomial algorithm. We refer to [9] for references on previous studies for this class of games. As they explain, these games are logarithmic space equivalent to the same games replacing mean payoff by energy, and polynomial time equivalent to games with weights [15], extending games with costs [12].

Combining Theorem 6, Theorem 3, Theorem 4, and Theorem 1 yields the following result.

Theorem 7.

Let n, d, N ∈ ℕ. There exists an algorithm in the RAM model with word size w = log(n) + log(N) for solving disjunctions of parity and mean payoff games with n vertices, m edges, weights in [−N, N], and priorities in [0, d], with time complexity

O( md · n·(⌈log(n)⌉ + d/2 − 1 choose ⌈log(n)⌉) · nN ),

where the factor n·(⌈log(n)⌉ + d/2 − 1 choose ⌈log(n)⌉) accounts for the parity component and the factor nN for the mean payoff component, and space complexity O(n).

Our algorithm is similar to the one constructed in [9]: both are value iteration algorithms (called progress measure lifting algorithms in [9]), combining the two value iteration algorithms for parity and mean payoff games. However, the sets of values are not the same (our algorithm stores an additional priority) and the proofs are very different. Besides being much shorter, one advantage of our proof is that it works with abstract separating automata for both parity and mean payoff objectives and shows how to combine them, whereas in [9] the proof is done from scratch, extending both proofs for the parity and mean payoff objectives.

4 Disjunction of mean payoff

We define the objective ⋁_{i∈[1,d]} 𝙼𝚎𝚊𝚗𝙿𝚊𝚢𝚘𝚏𝚏_N^i, referred to as ‘disjunction of mean payoff’. The set of colours is [−N, N]^d ⊆ ℤ^d. For w ∈ ([−N, N]^d)^ω and i ∈ [1, d] we write w_i ∈ [−N, N]^ω for the projection on the i-th component.

⋁_{i∈[1,d]} 𝙼𝚎𝚊𝚗𝙿𝚊𝚢𝚘𝚏𝚏_N^i = { w ∈ ([−N, N]^d)^ω : ∃i ∈ [1, d], w_i ∈ 𝙼𝚎𝚊𝚗𝙿𝚊𝚢𝚘𝚏𝚏_N }.

We refer to Subsection 4.3 for a discussion on existing results.

Theorem 8 (Follows from [14] as observed in [16]).

Disjunctions of mean payoff objectives are prefix independent and positionally determined.

4.1 A general reduction of separating automata for strongly connected graphs

The first idea is very general: roughly, if we know how to construct separating automata for strongly connected graphs, then we can use them to construct separating automata for general graphs.

We say that a graph is strongly connected if for every pair of vertices there exists a path from one vertex to the other. Let us refine the notion of separating automata. We write Ω^{∣n}_sc for the set of infinite sequences of colours that label paths from strongly connected graphs of size at most n satisfying Ω. Formally,

Ω^{∣n}_sc = { col(π) ∣ π ∈ Path_∞(G), G is strongly connected, has size at most n, and satisfies Ω }.
Definition 5 (Separating automata for strongly connected graphs).

An automaton 𝒜 is (n, Ω)-separating for strongly connected graphs if Ω^{∣n}_sc ⊆ L(𝒜) ⊆ Ω.

Let us give a first construction, which we refine later. Let 𝒜_1 = (Q_1, q_{0,1}, δ_1) and 𝒜_2 = (Q_2, q_{0,2}, δ_2) be two safety automata. We define their sequential product ⟨𝒜_1, 𝒜_2⟩ as follows: the set of states is Q_1 ∪ Q_2, the initial state is q_{0,1}, and the transition function is

δ(s, c) = δ_1(s, c) if s ∈ Q_1 and δ_1(s, c) is defined; q_{0,2} if s ∈ Q_1 and δ_1(s, c) is not defined; δ_2(s, c) if s ∈ Q_2 and δ_2(s, c) is defined.

The sequential product is extended inductively: ⟨𝒜_1, …, 𝒜_p⟩ = ⟨⟨𝒜_1, …, 𝒜_{p−1}⟩, 𝒜_p⟩.
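To make the construction concrete, here is a short Python sketch of the sequential product (ours, not from the paper; the representation of automata as triples and all helper names are our own). A safety automaton is modelled as a state set, an initial state, and a partial transition function returning None when undefined; a colour with no transition in the first automaton is consumed by the reset into the second.

```python
from functools import reduce

# A safety automaton is modelled as (states, initial_state, delta), where
# delta(state, colour) returns the successor state, or None when undefined.

def seq_product(a1, a2):
    """Sequential product <A1, A2>: run A1; on an undefined A1-transition,
    consume the colour and restart in A2; an undefined A2-transition rejects.
    The two state sets are assumed disjoint."""
    states1, init1, delta1 = a1
    states2, init2, delta2 = a2

    def delta(s, c):
        if s in states1:
            t = delta1(s, c)
            return t if t is not None else init2  # reset into the next copy
        return delta2(s, c)

    return (states1 | states2, init1, delta)

def accepts(aut, word):
    """A finite word is accepted if the (unique) run never gets stuck."""
    _, s, delta = aut
    for c in word:
        s = delta(s, c)
        if s is None:
            return False
    return True

# Toy safety automaton over {'a', 'b'}: tolerates at most two 'b's in total.
# State names carry a prefix so that distinct copies have disjoint state sets.
def at_most_two_bs(prefix):
    states = {prefix + str(i) for i in range(3)}

    def delta(s, c):
        if c == 'a':
            return s
        i = int(s[len(prefix):])
        return prefix + str(i + 1) if i < 2 else None  # third 'b': undefined

    return (states, prefix + '0', delta)

# <A1, A2> tolerates 2 + 1 + 2 = 5 'b's: two in each copy, plus the 'b'
# consumed by the reset between the copies.
product = reduce(seq_product, [at_most_two_bs('p'), at_most_two_bs('q')])
```

The inductive extension ⟨𝒜_1, …, 𝒜_p⟩ is obtained by folding seq_product over the list of automata, as in the reduce call above.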

Lemma 3.

Let Ω be a prefix independent objective, n ∈ ℕ, and 𝒜 an (n, Ω)-separating automaton for strongly connected graphs. Then 𝒜^n = ⟨𝒜, …, 𝒜⟩ (n times) is an (n, Ω)-separating automaton.

Proof.

We first show that L(𝒜^n) ⊆ Ω. We note that any infinite run in 𝒜^n eventually remains in one copy of 𝒜, so a suffix of the corresponding word is accepted by 𝒜 and hence satisfies Ω. By prefix independence of Ω, the word itself satisfies Ω, thus 𝒜^n satisfies Ω.

Let G be a graph satisfying Ω; we show that Path_∞(G) ⊆ L(𝒜^n). We decompose G into strongly connected components: let G_1, …, G_p be the maximal strongly connected components in G, indexed such that if there exists an edge from G_i to G_j then i < j. Then V(G) is the disjoint union of the V(G_i)’s. Each G_i is a subgraph of G, and since G satisfies Ω then so does G_i. It follows that for each i we have Path_∞(G_i) ⊆ L(𝒜).

Let us consider a path π in G; we show that π is accepted by 𝒜^n. The proof is by induction on the number of strongly connected components that π traverses. If it traverses only one such component, say G_i, then since Path_∞(G_i) ⊆ L(𝒜) indeed 𝒜 accepts π, so a fortiori 𝒜^n accepts π. Otherwise, let G_i be the first strongly connected component traversed by π. Since Path_∞(G_i) ⊆ L(𝒜), the run on π remains in the first copy of 𝒜 at least as long as π remains in G_i. If the run remains in the first copy of 𝒜 forever, then π is accepted by 𝒜^n. If not, the run jumps to the second copy to read a suffix of π, which traverses one fewer strongly connected component, so by the induction hypothesis this suffix is accepted by 𝒜^{n−1}; hence π is accepted by 𝒜^n. ∎

In the construction above we used the fact that G decomposes into at most n strongly connected components, each of size at most n. We refine this argument: the total size of the strongly connected components is at most n. The issue we face in taking advantage of this observation is that the sequence of sizes is not known a priori; it depends on the graph. To address this we use a universal sequence. First, here a sequence means a finite sequence of non-negative integers, for instance (5, 2, 3, 3). The size of a sequence is the total sum of its elements, so (5, 2, 3, 3) has size 13. We say that v = (v_1, …, v_k) embeds into u = (u_1, …, u_{k′}) if there exists an increasing function f : [1, k] → [1, k′] such that for all i ∈ [1, k] we have v_i ≤ u_{f(i)}. For example (5, 2, 3, 3) embeds into (4, 6, 1, 2, 4, 1, 3) but not into (3, 2, 5, 3, 3). A sequence u is n-universal if all sequences of size at most n embed into u.

Let us define an n-universal sequence u_n, inductively on n ∈ ℕ. We set u_0 = () (the empty sequence), u_1 = (1), and u_n is the concatenation of u_{⌊n/2⌋} with the singleton sequence (n) followed by u_{n−1−⌊n/2⌋}. Writing + for concatenation, the definition reads u_n = u_{⌊n/2⌋} + (n) + u_{n−1−⌊n/2⌋}. Let us write the first sequences:

u_2 = (1, 2),  u_3 = (1, 3, 1),  u_4 = (1, 2, 4, 1),  u_5 = (1, 2, 5, 1, 2),  u_6 = (1, 3, 1, 6, 1, 2),  …
Lemma 4.

The sequence u_n is n-universal and has size O(n log(n)).

Proof.

We proceed by induction on n. The case n = 0 is clear, so let us assume that n > 0. Let v = (v_1, …, v_k) be a sequence of size at most n; we show that v embeds into u_n. If v has size at most ⌊n/2⌋ it embeds into u_{⌊n/2⌋} by the induction hypothesis, hence into u_n; so assume otherwise. Then there exists a unique p ∈ [1, k] such that (v_1, …, v_{p−1}) has size at most ⌊n/2⌋ and (v_1, …, v_p) has size larger than ⌊n/2⌋. This implies that (v_{p+1}, …, v_k) has size at most n − 1 − ⌊n/2⌋. By the induction hypothesis (v_1, …, v_{p−1}) embeds into u_{⌊n/2⌋} and (v_{p+1}, …, v_k) embeds into u_{n−1−⌊n/2⌋}; since moreover v_p ≤ n, the sequence v embeds into u_n.

The recurrence on the size is |u_n| = |u_{⌊n/2⌋}| + n + |u_{n−1−⌊n/2⌋}|. Solving it shows that |u_n| is bounded by O(n log(n)). ∎
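The inductive definition of u_n and the embedding relation are easy to implement; the following Python sketch (ours, not from the paper) checks the statement of Lemma 4 by brute force on small instances, enumerating all sequences of positive integers of bounded size.

```python
def u(n):
    """n-universal sequence: u_0 = (), u_n = u_(n//2) + (n) + u_(n-1-n//2)."""
    if n == 0:
        return []
    half = n // 2
    return u(half) + [n] + u(n - 1 - half)

def embeds(v, seq):
    """Does v embed into seq? Greedy leftmost matching is optimal here:
    match each element of v to the first remaining position of seq whose
    value is at least as large."""
    j = 0
    for x in v:
        while j < len(seq) and seq[j] < x:
            j += 1
        if j == len(seq):
            return False
        j += 1
    return True

def compositions(m):
    """All sequences of positive integers with total sum exactly m."""
    if m == 0:
        yield []
        return
    for first in range(1, m + 1):
        for rest in compositions(m - first):
            yield [first] + rest
```

For instance u(6) returns [1, 3, 1, 6, 1, 2], matching the table above, and one can verify that every sequence of size at most 8 embeds into u(8).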

We now use the universal sequence to improve on Lemma 3.

Lemma 5.

Let Ω be a positionally determined prefix independent objective and n ∈ ℕ. For each k ∈ [1, n], let 𝒜_k be a (k, Ω)-separating automaton for strongly connected graphs. Writing u_n = (x_1, …, x_k), the automaton 𝒜(u_n) = ⟨𝒜_{x_1}, …, 𝒜_{x_k}⟩ is an (n, Ω)-separating automaton.

Proof.

We follow the same lines as for Lemma 3; in particular the same argument implies that 𝒜(u_n) satisfies Ω.

Let G be a graph satisfying Ω; we show that Path_∞(G) ⊆ L(𝒜(u_n)). We decompose G into strongly connected components as before. Let us write v = (|V(G_1)|, …, |V(G_p)|) for the sequence of sizes of the components. The sequence v has size at most n, implying that v embeds into u_n: there exists an increasing function f : [1, p] → [1, k] such that for all i ∈ [1, p] we have |V(G_i)| ≤ x_{f(i)}. It follows that for each i ∈ [1, p] we have Path_∞(G_i) ⊆ L(𝒜_{x_{f(i)}}).

Let us consider a path π in G; we show that π is accepted by 𝒜(u_n). The proof is by induction on the number of strongly connected components that π traverses.

The base case is when π traverses only one such component, say G_i. Then the run of π on 𝒜(u_n) either remains in the first f(i) − 1 copies, or reaches the f(i)-th copy to read a suffix π′ of π. In the latter case, since Path_∞(G_i) ⊆ L(𝒜_{x_{f(i)}}) and π′ also remains in G_i, the automaton 𝒜_{x_{f(i)}} accepts π′. In both cases π is accepted by 𝒜(u_n).

Otherwise, let G_i be the first strongly connected component traversed by π. The run of π on 𝒜(u_n) either remains in the first f(i) − 1 copies, or reaches the f(i)-th copy to read a suffix π′ of π. Since Path_∞(G_i) ⊆ L(𝒜_{x_{f(i)}}), this run remains in the f(i)-th copy as long as π′ remains in G_i. This can either hold forever, in which case π is accepted by 𝒜(u_n), or eventually the run jumps to the (f(i) + 1)-th copy to read a suffix π″ of π′ (hence of π). In the latter case, since π″ traverses one fewer strongly connected component, by the induction hypothesis π″ is accepted using the copies of 𝒜(u_n) from position f(i) + 1 onwards. Thus π is accepted by 𝒜(u_n). ∎

To appreciate the improvement of Lemma 5 over Lemma 3, let us consider the case where the (k, Ω)-separating automaton 𝒜_k for strongly connected graphs has size αk, where α does not depend on k (see Theorem 4 for an example of such a case). Then Lemma 3 yields an (n, Ω)-separating automaton of size αn², while Lemma 5 brings it down to O(αn log(n)).

4.2 Separating automata for disjunctions of mean payoff objectives

We state in the following theorem an upper bound on the construction of separating automata for disjunctions of mean payoff objectives.

Theorem 9.

Let n, d, N ∈ ℕ. There exists an (n, ⋁_{i∈[1,d]} 𝙼𝚎𝚊𝚗𝙿𝚊𝚢𝚘𝚏𝚏_N^i)-separating automaton of size O(n log(n) · dN).

As suggested by the previous subsection, we start by constructing (n, ⋁_{i∈[1,d]} 𝙼𝚎𝚊𝚗𝙿𝚊𝚢𝚘𝚏𝚏_N^i)-separating automata for strongly connected graphs. The following result shows a decomposition property.

Lemma 6.

Let G be a strongly connected graph and N ∈ ℕ. If G satisfies ⋁_{i∈[1,d]} 𝙼𝚎𝚊𝚗𝙿𝚊𝚢𝚘𝚏𝚏_N^i then there exists i ∈ [1, d] such that G satisfies 𝙼𝚎𝚊𝚗𝙿𝚊𝚢𝚘𝚏𝚏_N^i.

Proof.

We prove the contrapositive: assume that for all i ∈ [1, d] the graph does not satisfy 𝙼𝚎𝚊𝚗𝙿𝚊𝚢𝚘𝚏𝚏_N^i, implying that for each i ∈ [1, d] there exists a negative cycle C_i around some vertex v_i. By iterating each cycle an increasing number of times, we construct a path in G which does not satisfy ⋁_i 𝙼𝚎𝚊𝚗𝙿𝚊𝚢𝚘𝚏𝚏_N^i.

To make this statement formal, we use the following property: for any i ∈ [1, d], any finite path π can be extended to a finite path ππ′ with weight less than −1 on the i-th component. This is achieved simply by first going to v_i, using strong connectedness, and then iterating the cycle C_i a sufficient number of times.

We then apply this process repeatedly and in a cyclic way over i ∈ [1, d] to construct an infinite path such that for each i ∈ [1, d], the weight on the i-th component is less than −1 infinitely many times. This produces a path which does not satisfy ⋁_{i∈[1,d]} 𝙼𝚎𝚊𝚗𝙿𝚊𝚢𝚘𝚏𝚏_N^i, which proves the contrapositive. ∎

Corollary 1.

Let n, N ∈ ℕ and let 𝒜 be an (n, 𝙼𝚎𝚊𝚗𝙿𝚊𝚢𝚘𝚏𝚏_N)-separating automaton for strongly connected graphs. We construct d·𝒜, the disjoint union of d copies of 𝒜, where the i-th copy reads the i-th component. Then d·𝒜 is an (n, ⋁_{i∈[1,d]} 𝙼𝚎𝚊𝚗𝙿𝚊𝚢𝚘𝚏𝚏_N^i)-separating automaton for strongly connected graphs.

We can now prove Theorem 9. Thanks to Theorem 4, there exists an (n, 𝙼𝚎𝚊𝚗𝙿𝚊𝚢𝚘𝚏𝚏_N)-separating automaton of size nN. Thanks to Corollary 1, this implies an (n, ⋁_{i∈[1,d]} 𝙼𝚎𝚊𝚗𝙿𝚊𝚢𝚘𝚏𝚏_N^i)-separating automaton for strongly connected graphs of size ndN. Now Lemma 5 yields an (n, ⋁_i 𝙼𝚎𝚊𝚗𝙿𝚊𝚢𝚘𝚏𝚏_N^i)-separating automaton of size O(n log(n) · dN).

4.3 The complexity of solving disjunctions of mean payoff games using separating automata

As in the previous case, disjunctions of mean payoff games were studied focussing on the objective of the opponent, who has a conjunction of mean payoff objectives to satisfy [16]. Let us note here that there are actually two variants of the mean payoff objective: using the infimum limit or the supremum limit. In many cases the two variants are equivalent: this is the case when considering mean payoff games or disjunctions of parity and mean payoff games. However, this is no longer true for disjunctions of mean payoff objectives, as explained in [16]. Our constructions and results apply using the infimum limit, and do not extend to the supremum limit.

The main result related to disjunctions of mean payoff games in [16] (Theorem 6) is that the problem is in NP ∩ coNP and can be solved in time O(m · n² · d · N). (Note that since we consider the dual objectives, the infimum limit becomes a supremum limit in [16].)

Combining Theorem 9, Theorem 4, and Theorem 1 yields the following result.

Corollary 2.

Let n, m, d, N ∈ ℕ. There exists an algorithm in the RAM model with word size w = log(n) + log(N) + log(d) for solving disjunctions of mean payoff games with weights in [−N, N], with time complexity O(m · n log(n) · d · N) and space complexity O(n).

Note that with this choice of word size, the two tasks for manipulating separating automata, namely computing δ(q, w) and checking whether q ≤ q′, are indeed unit operations, as they manipulate numbers of order nN.

Conclusions

The conceptual contribution of this paper is to show that the notion of separating automata is a powerful tool for constructing algorithms. We illustrated this point with two applications, constructing algorithms for two classes of objectives. A key appeal of our approach is that we assumed the existence of separating automata both for parity and for mean payoff objectives and used them as black boxes to construct separating automata for objectives combining them.

It is tempting to use our approach for constructing separating automata for disjunctions of parity objectives. However, solving games with disjunctions of two parity objectives is NP-complete [5], hence we cannot hope for subexponential separating automata.

In this paper we offered upper bounds on the sizes of separating automata for two classes of objectives. We leave open the questions of proving (matching) lower bounds, which would indicate that this approach cannot yield better algorithms in these cases.

References

  • [2] Mikołaj Bojańczyk & Wojciech Czerwiński (2018): An Automata Toolbox. Available at https://www.mimuw.edu.pl/~bojan/papers/toolbox-reduced-feb6.pdf.
  • [3] Cristian S. Calude, Sanjay Jain, Bakhadyr Khoussainov, Wei Li & Frank Stephan (2017): Deciding parity games in quasipolynomial time. In: STOC, pp. 252–263, 10.1145/3055399.3055409.
  • [4] Krishnendu Chatterjee, Thomas A. Henzinger & Marcin Jurdziński (2005): Mean-Payoff Parity Games. In: LICS, pp. 178–187, 10.1109/LICS.2005.26.
  • [5] Krishnendu Chatterjee, Thomas A. Henzinger & Nir Piterman (2007): Generalized Parity Games. In Helmut Seidl, editor: FoSSaCS, Lecture Notes in Computer Science 4423, Springer, pp. 153–167, 10.1007/978-3-540-71389-0_12.
  • [6] Thomas Colcombet & Nathanaël Fijalkow (2018): Parity games and universal graphs. CoRR abs/1810.05106. Available at https://arxiv.org/abs/1810.05106.
  • [7] Thomas Colcombet & Nathanaël Fijalkow (2019): Universal Graphs and Good for Games Automata: New Tools for Infinite Duration Games. In: FoSSaCS, 10.1007/978-3-030-17127-8_1.
  • [8] Wojciech Czerwiński, Laure Daviaud, Nathanaël Fijalkow, Marcin Jurdziński, Ranko Lazić & Paweł Parys (2019): Universal trees grow inside separating automata: Quasi-polynomial lower bounds for parity games. In: SODA, 10.1137/1.9781611975482.142.
  • [9] Laure Daviaud, Marcin Jurdziński & Ranko Lazić (2018): A pseudo-quasi-polynomial algorithm for mean-payoff parity games. In: LICS, 10.1145/3209108.3209162.
  • [10] Nathanaël Fijalkow (2018): An Optimal Value Iteration Algorithm for Parity Games. CoRR abs/1801.09618. Available at https://arxiv.org/abs/1801.09618.
  • [11] Nathanaël Fijalkow, Paweł Gawrychowski & Pierre Ohlmann (2020): Value Iteration Using Universal Graphs and the Complexity of Mean Payoff Games. In: MFCS, 10.4230/LIPIcs.MFCS.2020.34.
  • [12] Nathanaël Fijalkow & Martin Zimmermann (2014): Parity and Streett Games with Costs. Logical Methods in Computer Science 10(2), 10.2168/LMCS-10(2:14)2014.
  • [13] Marcin Jurdziński & Ranko Lazić (2017): Succinct progress measures for solving parity games. In: LICS, 10.1109/LICS.2017.8005092.
  • [14] Eryk Kopczyński (2006): Half-Positional Determinacy of Infinite Games. In: ICALP, pp. 336–347, 10.1007/11787006_29.
  • [15] Sven Schewe, Alexander Weinert & Martin Zimmermann (2019): Parity Games with Weights. Logical Methods in Computer Science 15(3), 10.23638/LMCS-15(3:20)2019.
  • [16] Yaron Velner, Krishnendu Chatterjee, Laurent Doyen, Thomas A. Henzinger, Alexander M. Rabinovich & Jean-François Raskin (2015): The complexity of multi-mean-payoff and multi-energy games. Information and Computation 241, pp. 177–196, 10.1016/j.ic.2015.03.001.