
Optimal PSPACE-hardness of
Approximating Set Cover Reconfiguration

Shuichi Hirahara National Institute of Informatics, Japan. [email protected]    Naoto Ohsaka CyberAgent, Inc., Tokyo, Japan. [email protected]; [email protected]
Abstract

In the Minmax Set Cover Reconfiguration problem, given a set system \mathcal{F} over a universe and its two covers \mathcal{C}^{\mathsf{start}} and \mathcal{C}^{\mathsf{goal}} of size k, we wish to transform \mathcal{C}^{\mathsf{start}} into \mathcal{C}^{\mathsf{goal}} by repeatedly adding or removing a single set of \mathcal{F} while covering the universe in any intermediate state. The objective is to minimize the maximum size of any intermediate cover during the transformation. We prove that Minmax Set Cover Reconfiguration and Minmax Dominating Set Reconfiguration are \mathsf{PSPACE}-hard to approximate within a factor of 2-\frac{1}{\operatorname{polyloglog}N}, where N is the size of the universe and the number of vertices in a graph, respectively, improving upon Ohsaka (SODA 2024) [Ohs24] and Karthik C. S. and Manurangsi (2023) [KM23]. This is the first result that exhibits a sharp threshold for the approximation factor of any reconfiguration problem, because both problems admit a 2-factor approximation algorithm as per Ito, Demaine, Harvey, Papadimitriou, Sideri, Uehara, and Uno (Theor. Comput. Sci., 2011) [IDHPSUU11]. Our proof is based on a reconfiguration analogue of the FGLSS reduction [FGLSS96] from Probabilistically Checkable Reconfiguration Proofs of Hirahara and Ohsaka (2024) [HO24]. We also prove that for any constant \varepsilon\in(0,1), Minmax Hypergraph Vertex Cover Reconfiguration on \operatorname{poly}(\varepsilon^{-1})-uniform hypergraphs is \mathsf{PSPACE}-hard to approximate within a factor of 2-\varepsilon.

1 Introduction

1.1 Background

In the field of reconfiguration, we study the reachability and connectivity over the space of feasible solutions under an adjacency relation. Given a source problem that asks for the existence of a feasible solution, its reconfiguration problem requires us to decide if there exists a reconfiguration sequence, namely, a step-by-step transformation between a pair of feasible solutions that preserves the feasibility of every intermediate solution. One of the reconfiguration problems we study in this paper is Set Cover Reconfiguration [IDHPSUU11], whose source problem is Set Cover. In the Set Cover Reconfiguration problem, for a set system \mathcal{F} over a universe \mathcal{U} and its two covers \mathcal{C}^{\mathsf{start}} and \mathcal{C}^{\mathsf{goal}} of size k, we seek a reconfiguration sequence from \mathcal{C}^{\mathsf{start}} to \mathcal{C}^{\mathsf{goal}} consisting only of covers of size at most k+1, each of which is obtained from the previous one by adding or removing a single set of \mathcal{F}. Countless reconfiguration problems have been defined from a variety of source problems, including Boolean satisfiability, constraint satisfaction problems, and graph problems. Studying reconfiguration problems may help elucidate the structure of the solution space of combinatorial problems [GKMP09].
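The cover constraint and the single-set step rule above can be checked mechanically. The following is a minimal Python sketch (the toy instance and all function names are ours, for illustration only):

```python
from itertools import chain

def is_cover(universe, chosen):
    """Check whether the chosen subsets cover the universe."""
    return universe <= set(chain.from_iterable(chosen))

def is_valid_sequence(universe, seq, k):
    """Check that seq is a feasible sequence for Set Cover Reconfiguration:
    every state is a cover of size at most k + 1, and adjacent covers
    differ by adding or removing exactly one set."""
    for cover in seq:
        if not is_cover(universe, cover) or len(cover) > k + 1:
            return False
    return all(len(a ^ b) == 1 for a, b in zip(seq, seq[1:]))

# Toy instance: universe {1..4}, covers of size k = 2.
U = {1, 2, 3, 4}
S1, S2, S3 = frozenset({1, 2}), frozenset({3, 4}), frozenset({2, 3, 4})
start = {S1, S2}
goal = {S1, S3}
seq = [start, start | {S3}, goal]  # add S3, then remove S2
print(is_valid_sequence(U, seq, k=2))  # True
```

Note that the intermediate cover has size 3 = k + 1, which is exactly the slack that Set Cover Reconfiguration permits.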

The computational complexity of reconfiguration problems follows a general trend: a reconfiguration problem is likely to be \PSPACE-complete if its source problem is intractable (say, \NP-complete), e.g., Set Cover [IDHPSUU11], 3SAT [GKMP09], and Independent Set [HD05, HD09], whereas a source problem in \P frequently induces a reconfiguration problem in \P, e.g., Spanning Tree [IDHPSUU11] and 2SAT [GKMP09]. Some exceptions are, however, known, e.g., 3Coloring [CvJ11] and Shortest Path [Bon13]. We refer the readers to the surveys by [Nis18, van13] and the Combinatorial Reconfiguration wiki [Hoa23] for more algorithmic and hardness results on reconfiguration problems.

To overcome the computational hardness of a reconfiguration problem, we consider its optimization version, which allows us to relax the feasibility of intermediate solutions. For example, Minmax Set Cover Reconfiguration [IDHPSUU11] is an optimization version of Set Cover Reconfiguration, where we are allowed to use any cover of size greater than k+1 but are required to minimize the maximum size of any cover in the reconfiguration sequence (see Section 4.1 for the formal definition). Solving this problem approximately, we may be able to find a “reasonable” reconfiguration sequence for Set Cover Reconfiguration that consists of covers of size at most, say, 1\% larger than k+1. Unlike Set Cover, which is \NP-hard to approximate within a factor smaller than \ln n [DS14, Fei98, LY94], Minmax Set Cover Reconfiguration admits a 2-factor approximation algorithm due to [IDHPSUU11, Theorem 6]. An immediate question is: Is this the best possible?
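To see why a factor of roughly 2 is always achievable, consider the natural strategy of first inserting every set of the goal cover and then removing the sets of the start cover. The sketch below illustrates this strategy (it is our own simplification in the spirit of the cited 2-factor guarantee, not necessarily the exact algorithm of [IDHPSUU11]):

```python
def naive_two_factor_sequence(start, goal):
    """Add every set of the goal cover, then remove every set of the
    start cover not in the goal.  Each intermediate state is a cover
    (it always contains all of start or all of goal), and its size
    never exceeds |start| + |goal|, i.e., at most 2k."""
    seq = [set(start)]
    for s in goal - start:          # additions: state still contains start
        seq.append(seq[-1] | {s})
    for s in start - goal:          # removals: state still contains goal
        seq.append(seq[-1] - {s})
    return seq

S1, S2, S3 = frozenset({1, 2}), frozenset({3, 4}), frozenset({2, 3, 4})
seq = naive_two_factor_sequence({S1, S2}, {S1, S3})
print(max(len(c) for c in seq))  # 3  (= |start ∪ goal|)
```

Since any reconfiguration sequence must pass through covers of size at least k, this simple strategy is within a factor of about 2 of optimal.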

Here, we summarize known hardness-of-approximation results on Minmax Set Cover Reconfiguration. [Ohs24] showed that Minmax Set Cover Reconfiguration is \PSPACE-hard to approximate within a factor of 1.0029 assuming the Reconfiguration Inapproximability Hypothesis [Ohs23], which was recently proved [HO24, KM23]. [KM23] proved \NP-hardness of the (2-\varepsilon)-factor approximation for any constant \varepsilon\in(0,1). Neither result is optimal: [Ohs24]’s factor 1.0029 is far smaller than 2, while [KM23]’s result is not \PSPACE-hardness. This leaves a tantalizing possibility that there may exist a polynomial-length reconfiguration sequence that achieves a 1.0030-factor approximation for Minmax Set Cover Reconfiguration, and hence the approximation problem may be in \NP. Note that the \PSPACE-hardness result of [Ohs24] disproves the existence of a polynomial-length witness (in particular, a polynomial-length reconfiguration sequence) for the 1.0029-factor approximation under the assumption that \NP\neq\PSPACE.

1.2 Our Results

We present optimal \PSPACE-hardness of approximation results for three reconfiguration problems. Our first result is that Minmax Set Cover Reconfiguration is \PSPACE-hard to approximate within a factor smaller than 2, improving upon [Ohs24, Corollary 4.2] and [KM23, Theorem 4]. This is the first result that exhibits a sharp threshold for the approximation factor of any reconfiguration problem: approximating within any factor below 2 is \PSPACE-complete, whereas approximating within a factor of 2 is in \P [IDHPSUU11].

Theorem 1.1 (informal; see Theorem 4.1).

For a set system \mathcal{F} of universe size N and its two covers \mathcal{C}^{\mathsf{start}} and \mathcal{C}^{\mathsf{goal}} of size k, it is \PSPACE-complete to distinguish between the following cases:

  • (Completeness) There exists a reconfiguration sequence from \mathcal{C}^{\mathsf{start}} to \mathcal{C}^{\mathsf{goal}} consisting only of covers of size at most k+1.

  • (Soundness) Every reconfiguration sequence contains a cover of size greater than (2-\varepsilon(N))(k+1), where \varepsilon(N)\coloneqq(\operatorname{polyloglog}N)^{-1}.

In particular, Minmax Set Cover Reconfiguration is \PSPACE-hard to approximate within a factor of 2-\frac{1}{\operatorname{polyloglog}N}.

As a corollary of Theorem 4.1 along with [Ohs24], the following \PSPACE-hardness of approximation is established for Dominating Set Reconfiguration, which also admits a 2-factor approximation [IDHPSUU11] (please refer to [Ohs24] for the problem definition).

Corollary 1.2 (from Theorem 4.1 and [Ohs24, Corollary 4.3]).

Minmax Dominating Set Reconfiguration is \PSPACE-hard to approximate within a factor of 2-\frac{1}{\operatorname{polyloglog}N}, where N is the number of vertices in a graph.

Our third result is a similar inapproximability result for Hypergraph Vertex Cover Reconfiguration, which is defined analogously to Set Cover Reconfiguration (see Section 4.2 for the formal definition). Minmax Hypergraph Vertex Cover Reconfiguration is easily shown to be 2-factor approximable [IDHPSUU11]; we prove that this is optimal.

Theorem 1.3 (informal; see Theorem 4.3).

For any constant \varepsilon\in(0,1), a \operatorname{poly}(\varepsilon^{-1})-uniform hypergraph, and its two vertex covers \mathcal{C}^{\mathsf{start}} and \mathcal{C}^{\mathsf{goal}} of size k, it is \PSPACE-complete to distinguish between the following cases:

  • (Completeness) There exists a reconfiguration sequence from \mathcal{C}^{\mathsf{start}} to \mathcal{C}^{\mathsf{goal}} consisting only of vertex covers of size at most k+1.

  • (Soundness) Every reconfiguration sequence contains a vertex cover of size greater than (2-\varepsilon)(k+1).

In particular, Minmax Hypergraph Vertex Cover Reconfiguration on \operatorname{poly}(\varepsilon^{-1})-uniform hypergraphs is \PSPACE-hard to approximate within a factor of 2-\varepsilon.

We highlight here that the size of hyperedges in a Hypergraph Vertex Cover Reconfiguration instance of Theorem 4.3 depends (polynomially) only on the value of \varepsilon^{-1}, whereas the size of subsets in a Set Cover Reconfiguration instance of Theorem 4.1 may depend on the universe size N.

1.3 Proof Overview

At a high level, our proofs of Theorems 1.1 and 1.3 are given by combining the ideas developed in [HO24, Ohs23, Ohs24, KM23]. [KM23] proved \NP-hardness of the (2-\varepsilon)-factor approximation of Minmax Set Cover Reconfiguration as follows.

  1.

    Starting from the PCP theorem for \NP [ALMSS98, AS98], they applied the FGLSS reduction [FGLSS96] to prove \NP-hardness of the \operatorname{\mathcal{O}}(\varepsilon^{-1})-factor approximation of an intermediate problem, which we call Max Partial 2CSP.

  2.

    The \operatorname{\mathcal{O}}(\varepsilon^{-1})-factor approximation of Max Partial 2CSP is reduced to the (2-\varepsilon)-factor approximation of a reconfiguration problem, which we call Label Cover Reconfiguration (Problem 2.3).

  3.

    Label Cover Reconfiguration can be reduced to Minmax Set Cover Reconfiguration via approximation-preserving reductions of [Ohs24, LY94].

Here, Max Partial 2CSP is defined as follows. The input consists of a graph G=(\mathcal{V},\mathcal{E}), a finite alphabet \Sigma, and constraints \psi_{e}\colon\Sigma^{2}\to\{0,1\} for each edge e\in\mathcal{E}. A partial assignment is a function f\colon\mathcal{V}\to\Sigma\cup\{\bot\}, where the symbol \bot indicates “unassigned.” The task is to maximize the fraction of assigned vertices in a partial assignment f that satisfies \psi_{e} for every e=(v,w)\in\mathcal{E}; i.e., \psi_{e}(f(v),f(w))=1 whenever f(v)\neq\bot and f(w)\neq\bot.
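The Max Partial 2CSP objective can be made concrete with a few lines of Python (the toy instance and all names are ours, for illustration; None plays the role of \bot):

```python
def satisfies(edges, psi, f):
    """A partial assignment f satisfies the instance if every edge whose
    two endpoints are both assigned has its constraint satisfied."""
    return all(
        f[v] is None or f[w] is None or psi[(v, w)](f[v], f[w])
        for (v, w) in edges
    )

def value(f):
    """Fraction of assigned vertices: the quantity to maximize."""
    assigned = sum(1 for a in f.values() if a is not None)
    return assigned / len(f)

# Toy instance: an equality constraint on the single edge (u, v).
edges = [("u", "v")]
psi = {("u", "v"): lambda a, b: a == b}
f = {"u": 0, "v": None}            # v is unassigned (the ⊥ symbol)
print(satisfies(edges, psi, f), value(f))  # True 0.5
```

Leaving a vertex unassigned never violates a constraint, but it lowers the objective value, which is exactly the trade-off the problem captures.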

To improve this \NP-hardness result to \PSPACE-hardness, we replace the starting point with the PCRP (Probabilistically Checkable Reconfiguration Proof) system of [HO24], which is a reconfiguration analogue of the PCP theorem. We also replace Max Partial 2CSP with its reconfiguration analogue, which we call Partial 2CSP Reconfiguration (Problem 2.2). The proof of \PSPACE-hardness is outlined as follows.

  1.

    Starting from the PCRP theorem for \PSPACE [HO24], we apply the FGLSS reduction [FGLSS96] to prove \PSPACE-hardness of Partial 2CSP Reconfiguration (Sections 3.1 and 3.2).

  2.

    We reduce Partial 2CSP Reconfiguration to Label Cover Reconfiguration (Section 3.3).

  3.

    We reduce Label Cover Reconfiguration to Minmax Set Cover Reconfiguration by the reductions of [Ohs24, LY94] (Section 4.1).

The second and third steps are similar to the previous work [KM23]. Our main technical contribution lies in the first step, which we explain below.

Roughly speaking, the PCRP theorem [HO24] shows that any \PSPACE computation on inputs of length n can be encoded into a sequence \pi^{(1)},\ldots,\pi^{(T)}\in\{0,1\}^{\operatorname{poly}(n)} of exponentially many proofs such that any adjacent pair of proofs \pi^{(t)} and \pi^{(t+1)} differs in at most one bit, and each proof \pi^{(t)} can be probabilistically checked by reading q(n) bits of the proof and using r(n) random bits, where q(n)=\operatorname{\mathcal{O}}(1) and r(n)=\operatorname{\mathcal{O}}(\log n). The FGLSS reduction [FGLSS96] transforms such a proof system into a graph G=(\mathcal{V},\mathcal{E}), an alphabet \Sigma, and constraints (\psi_{e})_{e\in\mathcal{E}} such that each vertex v\in\mathcal{V}\coloneqq\{0,1\}^{r(n)} corresponds to a coin flip sequence of a verifier, each value \alpha\in\Sigma=\{0,1\}^{q(n)} corresponds to a local view of the verifier, and the constraints \psi_{e} check the consistency of two local views of the verifier. This reduction works in the case of the PCP theorem and proves \NP-hardness of Max Partial 2CSP [KM23]. However, the reduction does not work in the case of the PCRP theorem: We need to ensure that the reconfiguration sequence of proofs \pi^{(1)},\ldots,\pi^{(T)} is transformed into a sequence of partial assignments f^{(1)},\ldots,f^{(T)}, each adjacent pair of which differs in at most one vertex. The issue is that changing one bit in the original proof \pi^{(t)} may result in changing the assignments of many vertices in a partial assignment f^{(t)}\colon\mathcal{V}\to\Sigma\cup\{\bot\}.

To address this issue, we employ the ideas developed in [Ohs23, Ohs24], called the alphabet squaring trick, and modify the FGLSS reduction as follows. Given a verifier that reads q(n) bits of a proof, we define the alphabet as \Sigma=\{0,1,01\}^{q(n)}. Intuitively, the symbol “01” means that we are taking 0 and 1 simultaneously. This enables us to construct a reconfiguration sequence of partial assignments f^{(1)},\ldots,f^{(T)} from a reconfiguration sequence of proofs \pi^{(1)},\ldots,\pi^{(T)}. Details can be found in Section 3.2.
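A toy illustration of the intuition behind the squared alphabet (our own simplification, not the formal reduction of Section 3.2): a flipped proof bit can pass through the intermediate symbol "01", which remains consistent with both the old and the new value of the bit.

```python
# Each coordinate of a local view takes a value in {"0", "1", "01"},
# where "01" intuitively means taking 0 and 1 simultaneously.
def consistent(symbol, bit):
    """A squared symbol is consistent with a proof bit if it contains it."""
    return str(bit) in symbol

# Flipping one proof bit from 0 to 1: route through the symbol "01".
path = ["0", "01", "1"]
print(all(consistent(s, 0) for s in path[:2]))  # True: "0" -> "01" ok
print(all(consistent(s, 1) for s in path[1:]))  # True: "01" -> "1" ok
```

Because the intermediate symbol is consistent with both proof bits, one bit flip in \pi^{(t)} can be simulated by changing one vertex at a time without ever violating consistency.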

1.4 Related Work

[IDHPSUU11] showed that optimization versions of SAT Reconfiguration and Clique Reconfiguration are \NP-hard to approximate, relying on \NP-hardness of approximating Max SAT [Hås01] and Max Clique [Hås99], respectively. Note that their \NP-hardness results are not optimal since SAT Reconfiguration and Clique Reconfiguration are \PSPACE-complete. Toward \PSPACE-hardness of approximation for reconfiguration problems, [Ohs23] proposed the Reconfiguration Inapproximability Hypothesis (RIH), which postulates that a reconfiguration analogue of the Constraint Satisfaction Problem is \PSPACE-hard to approximate, and demonstrated \PSPACE-hardness of approximation for many popular reconfiguration problems, including those of 3SAT, Independent Set, Vertex Cover, Clique, Dominating Set, and Set Cover. [Ohs24] adapted Dinur’s gap amplification [Din07] to demonstrate that under RIH, optimization versions of 2CSP Reconfiguration and Set Cover Reconfiguration are \PSPACE-hard to approximate within a factor of 0.9942 and 1.0029, respectively.

Very recently, [KM23, HO24] announced proofs of RIH independently, implying that the above \PSPACE-hardness results hold unconditionally. [KM23] further proved that (optimization versions of) 2CSP Reconfiguration and Set Cover Reconfiguration are \NP-hard to approximate within a factor smaller than 2, which is quantitatively tight because both problems are (nearly) 2-factor approximable. Our result partially resolves an open question of [KM23, Section 6]: “Can we prove tight \PSPACE-hardness of approximation results for GapMaxMin-2-\mathsf{CSP}_{q} and Set Cover Reconfiguration?”

Other reconfiguration problems whose approximability has been investigated include those of Set Cover [IDHPSUU11], Subset Sum [ID14], and Submodular Maximization [OM22]. We note that optimization variants of reconfiguration problems frequently refer to those of the shortest reconfiguration sequence [MNPR17, BHIKMMSW20, IKKKO22, KMM11], which are orthogonal to this study.

2 Preliminaries

2.1 Notations

For a nonnegative integer n\in\bm{\mathbb{N}}, let [n]\coloneqq\{1,2,\ldots,n\}. Unless otherwise specified, the base of logarithms is 2. A sequence \mathscr{S} of a finite number of objects S^{(1)},\ldots,S^{(T)} is denoted by (S^{(1)},\ldots,S^{(T)}), and we write S^{(t)}\in\mathscr{S} to indicate that S^{(t)} appears in \mathscr{S}. Let \Sigma be a finite set called the alphabet. For a length-n string \pi\in\Sigma^{n} and a finite sequence of indices I\in[n]^{*}, we use \pi|_{I}\coloneqq(\pi_{i})_{i\in I} to denote the restriction of \pi to I. The Hamming distance between two strings f,g\in\Sigma^{n}, denoted by \Delta(f,g), is defined as the number of positions on which f and g differ; namely,

\Delta(f,g)\coloneqq\left|\Bigl\{i\in[n]\Bigm|f_{i}\neq g_{i}\Bigr\}\right|. (2.1)
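In code, the Hamming distance of (2.1) is simply a count of differing positions:

```python
def hamming(f, g):
    """Hamming distance Δ(f, g): number of positions where f and g differ."""
    assert len(f) == len(g)
    return sum(1 for a, b in zip(f, g) if a != b)

print(hamming("10110", "10011"))  # 2
```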

2.2 Reconfiguration Problems on Constraint Graphs

Constraint Graphs.

In this section, we formulate reconfiguration problems on constraint graphs. The notion of constraint graph is defined as follows.

Definition 2.1.

A q-ary constraint graph is defined as a tuple G=(\mathcal{V},\mathcal{E},\Sigma,\Psi) such that

  • (\mathcal{V},\mathcal{E}) is a q-uniform hypergraph called the underlying graph (a hypergraph is said to be q-uniform if each of its hyperedges has size exactly q),

  • \Sigma is a finite set called the alphabet, and

  • \Psi=(\psi_{e})_{e\in\mathcal{E}} is a collection of q-ary constraints, where each \psi_{e}\colon\Sigma^{e}\to\{0,1\} is a circuit.

A binary constraint graph is simply referred to as a constraint graph.

For an assignment f\colon\mathcal{V}\to\Sigma, we say that f satisfies a hyperedge e=\{v_{1},\ldots,v_{q}\}\in\mathcal{E} (or a constraint \psi_{e}) if \psi_{e}(f(e))=1, where f(e)\coloneqq(f(v_{1}),\ldots,f(v_{q})), and f satisfies G if it satisfies all the hyperedges of G. In the qCSP Reconfiguration problem, for a q-ary constraint graph G and its two satisfying assignments f^{\mathsf{start}} and f^{\mathsf{goal}}, we are required to decide if there exists a reconfiguration sequence from f^{\mathsf{start}} to f^{\mathsf{goal}} consisting only of satisfying assignments for G, each adjacent pair of which differs in at most one vertex. qCSP Reconfiguration is \PSPACE-complete in general [GKMP09, IDHPSUU11]; thus, we formulate its two optimization versions.

Partial 2CSP Reconfiguration.

For a constraint graph G=(\mathcal{V},\mathcal{E},\Sigma,\Psi=(\psi_{e})_{e\in\mathcal{E}}), a partial assignment is defined as a function f\colon\mathcal{V}\to\Sigma\cup\{\bot\}, where the symbol \bot indicates “unassigned.” We say that a partial assignment f\colon\mathcal{V}\to\Sigma\cup\{\bot\} satisfies an edge e=(v,w)\in\mathcal{E} if \psi_{e}(f(v),f(w))=1 whenever f(v)\neq\bot and f(w)\neq\bot. The size of f, denoted by \|f\|, is defined as the number of vertices whose value is assigned; namely,

\|f\|\coloneqq\left|\Bigl\{v\in\mathcal{V}\Bigm|f(v)\neq\bot\Bigr\}\right|. (2.2)

For two satisfying partial assignments f^{\mathsf{start}} and f^{\mathsf{goal}} for G, a reconfiguration partial assignment sequence from f^{\mathsf{start}} to f^{\mathsf{goal}} is a sequence F=(f^{(1)},\ldots,f^{(T)}) of satisfying partial assignments such that f^{(1)}=f^{\mathsf{start}}, f^{(T)}=f^{\mathsf{goal}}, and \Delta(f^{(t)},f^{(t+1)})\leqslant 1 (i.e., f^{(t)} and f^{(t+1)} differ in at most one vertex) for all t. For any reconfiguration partial assignment sequence F=(f^{(1)},\ldots,f^{(T)}), we define \|F\|_{\min} as

\|F\|_{\min}\coloneqq\min_{1\leqslant t\leqslant T}\|f^{(t)}\|. (2.3)
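The quantities \|f\| and \|F\|_{\min}, together with the one-vertex step rule, can be sketched as follows (None plays the role of \bot; the toy sequence is ours):

```python
def size(f):
    """‖f‖: number of assigned (non-⊥) vertices of a partial assignment."""
    return sum(1 for a in f.values() if a is not None)

def min_size(F):
    """‖F‖_min over a reconfiguration partial assignment sequence."""
    return min(size(f) for f in F)

def valid_steps(F):
    """Adjacent partial assignments may differ in at most one vertex."""
    return all(
        sum(1 for v in f if f[v] != g[v]) <= 1
        for f, g in zip(F, F[1:])
    )

# Change v from 1 to 0 by unassigning it first: two single-vertex steps.
F = [{"u": 0, "v": 1}, {"u": 0, "v": None}, {"u": 0, "v": 0}]
print(valid_steps(F), min_size(F))  # True 1
```

Note how unassigning a vertex is the "free" move that makes the sequence legal at the cost of lowering \|F\|_{\min}, which is precisely what the objective penalizes.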

Partial 2CSP Reconfiguration is formally defined as follows:

Problem 2.2 (Partial 2CSP Reconfiguration).

For a constraint graph G=(\mathcal{V},\mathcal{E},\Sigma,\Psi=(\psi_{e})_{e\in\mathcal{E}}) and its two satisfying partial assignments f^{\mathsf{start}},f^{\mathsf{goal}}\colon\mathcal{V}\to\Sigma\cup\{\bot\}, we are required to find a reconfiguration partial assignment sequence F from f^{\mathsf{start}} to f^{\mathsf{goal}} such that \|F\|_{\min} is maximized.

Let \operatorname{\mathsf{MaxPar}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}}) denote the maximum value of \frac{\|F\|_{\min}}{|\mathcal{V}|} over all possible reconfiguration sequences F from f^{\mathsf{start}} to f^{\mathsf{goal}}; namely,

\operatorname{\mathsf{MaxPar}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}})\coloneqq\max_{F=(f^{\mathsf{start}},\ldots,f^{\mathsf{goal}})}\frac{\|F\|_{\min}}{|\mathcal{V}|}. (2.4)

Note that 0\leqslant\operatorname{\mathsf{MaxPar}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}})\leqslant 1. For all numbers 0\leqslant s\leqslant c\leqslant 1, \mathrm{Gap}_{c,s} Partial 2CSP Reconfiguration asks to determine, for a constraint graph G and its two satisfying partial assignments f^{\mathsf{start}} and f^{\mathsf{goal}}, whether \operatorname{\mathsf{MaxPar}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}})\geqslant c or \operatorname{\mathsf{MaxPar}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}})<s. Note that we can assume \|f^{\mathsf{start}}\|=\|f^{\mathsf{goal}}\|=|\mathcal{V}| when c=1.

Label Cover Reconfiguration.

For a constraint graph G=(\mathcal{V},\mathcal{E},\Sigma,\Psi), a multi-assignment is defined as a function f\colon\mathcal{V}\to 2^{\Sigma}. We say that a multi-assignment f satisfies an edge e=(v,w)\in\mathcal{E} if there exists a pair (\alpha,\beta)\in f(v)\times f(w) such that \psi_{e}(\alpha,\beta)=1. The size of f, denoted by \|f\|, is defined as the sum of |f(v)| over all v\in\mathcal{V}; namely,

\|f\|\coloneqq\sum_{v\in\mathcal{V}}|f(v)|. (2.5)

For two satisfying multi-assignments f^{\mathsf{start}} and f^{\mathsf{goal}} for G, a reconfiguration multi-assignment sequence from f^{\mathsf{start}} to f^{\mathsf{goal}} is a sequence F=(f^{(1)},\ldots,f^{(T)}) of satisfying multi-assignments such that f^{(1)}=f^{\mathsf{start}}, f^{(T)}=f^{\mathsf{goal}}, and

\sum_{v\in\mathcal{V}}\Bigl|f^{(t)}(v)\mathbin{\triangle}f^{(t+1)}(v)\Bigr|\leqslant 1\;\text{ for all }t. (2.6)

For any reconfiguration multi-assignment sequence F=(f^{(1)},\ldots,f^{(T)}), we define \|F\|_{\max} as

\|F\|_{\max}\coloneqq\max_{1\leqslant t\leqslant T}\|f^{(t)}\|. (2.7)
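A small sketch of the multi-assignment size (2.5) and the step rule (2.6) (the toy assignments are ours): one step may change a single label at a single vertex, i.e., the total symmetric difference over all vertices is at most 1.

```python
def size(f):
    """‖f‖: sum of |f(v)| over all vertices of a multi-assignment."""
    return sum(len(labels) for labels in f.values())

def valid_step(f, g):
    """Step rule (2.6): the total symmetric difference of the label sets,
    summed over all vertices, is at most 1."""
    return sum(len(f[v] ^ g[v]) for v in f) <= 1

f1 = {"u": {0}, "v": {1}}
f2 = {"u": {0, 1}, "v": {1}}   # add label 1 at vertex u
print(size(f2), valid_step(f1, f2))  # 3 True
```

Here, temporarily holding two labels at a vertex keeps edges satisfied during a swap, at the cost of increasing \|F\|_{\max}, which is what Label Cover Reconfiguration minimizes.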

Label Cover Reconfiguration, which can be thought of as a reconfiguration analogue of Min Rep [CHK11], is formally defined as follows.

Problem 2.3 (Label Cover Reconfiguration).

For a constraint graph G=(\mathcal{V},\mathcal{E},\Sigma,\Psi) and its two satisfying multi-assignments f^{\mathsf{start}},f^{\mathsf{goal}}\colon\mathcal{V}\to 2^{\Sigma}, we are required to find a reconfiguration multi-assignment sequence F from f^{\mathsf{start}} to f^{\mathsf{goal}} such that \|F\|_{\max} is minimized.

Let \operatorname{\mathsf{MinLab}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}}) denote the minimum value of \frac{\|F\|_{\max}}{|\mathcal{V}|+1} over all possible reconfiguration multi-assignment sequences F from f^{\mathsf{start}} to f^{\mathsf{goal}}; namely,

\operatorname{\mathsf{MinLab}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}})\coloneqq\min_{F=(f^{\mathsf{start}},\ldots,f^{\mathsf{goal}})}\frac{\|F\|_{\max}}{|\mathcal{V}|+1}. (2.8)

Note that \operatorname{\mathsf{MinLab}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}})\geqslant\frac{|\mathcal{V}|}{|\mathcal{V}|+1}. For all numbers 1\leqslant c\leqslant s, \mathrm{Gap}_{c,s} Label Cover Reconfiguration asks to determine whether \operatorname{\mathsf{MinLab}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}})\leqslant c or \operatorname{\mathsf{MinLab}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}})>s for a constraint graph G and its two satisfying multi-assignments f^{\mathsf{start}} and f^{\mathsf{goal}}. Note that we can assume \frac{\|f^{\mathsf{start}}\|}{|\mathcal{V}|+1}=\frac{\|f^{\mathsf{goal}}\|}{|\mathcal{V}|+1}\leqslant 1 when c=1.

2.3 Probabilistically Checkable Reconfiguration Proof Systems

First, we formally define the notion of verifier.

Definition 2.4.

A verifier with randomness complexity r\colon\bm{\mathbb{N}}\to\bm{\mathbb{N}} and query complexity q\colon\bm{\mathbb{N}}\to\bm{\mathbb{N}} is a probabilistic polynomial-time algorithm V that, given an input x\in\{0,1\}^{*}, tosses r=r(|x|) random bits R and uses R to generate a sequence of q=q(|x|) queries I=(i_{1},\ldots,i_{q}) and a circuit D\colon\{0,1\}^{q}\to\{0,1\}. We write (I,D)\sim V(x) to denote the random variable for a pair of the query sequence and circuit generated by V on input x\in\{0,1\}^{*}. Denote by V^{\pi}(x)\coloneqq D(\pi|_{I}) the output of V on input x given oracle access to a proof \pi\in\{0,1\}^{*}. We say that V(x) accepts a proof \pi if V^{\pi}(x)=1; i.e., D(\pi|_{I})=1 for (I,D)\sim V(x).

We proceed to the definition of Probabilistically Checkable Reconfiguration Proofs (PCRPs) due to [HO24], which offer a PCP-type characterization of \PSPACE. For any pair of proofs \pi^{\mathsf{start}},\pi^{\mathsf{goal}}\in\{0,1\}^{\ell}, a reconfiguration sequence from \pi^{\mathsf{start}} to \pi^{\mathsf{goal}} is a sequence (\pi^{(1)},\ldots,\pi^{(T)})\in(\{0,1\}^{\ell})^{*} such that \pi^{(1)}=\pi^{\mathsf{start}}, \pi^{(T)}=\pi^{\mathsf{goal}}, and \Delta(\pi^{(t)},\pi^{(t+1)})\leqslant 1 (i.e., \pi^{(t)} and \pi^{(t+1)} differ in at most one bit) for all t.

Theorem 2.5 (PCRP theorem of [HO24]).

For any language L in \PSPACE, there exists a verifier V with randomness complexity r(n)=\operatorname{\mathcal{O}}(\log n) and query complexity q(n)=\operatorname{\mathcal{O}}(1), coupled with polynomial-time computable functions \pi^{\mathsf{start}},\pi^{\mathsf{goal}}\colon\{0,1\}^{*}\to\{0,1\}^{*}, such that the following hold for any input x\in\{0,1\}^{*}:

  • (Completeness) If x\in L, there exists a reconfiguration sequence \Pi=(\pi^{(1)},\ldots,\pi^{(T)}) from \pi^{\mathsf{start}}(x) to \pi^{\mathsf{goal}}(x) over \{0,1\}^{\operatorname{poly}(|x|)} such that V(x) accepts every proof with probability 1; namely,

    \forall t\in[T],\quad\operatorname*{\bm{\mathbb{P}}}\Bigl[V(x)\text{ accepts }\pi^{(t)}\Bigr]=1. (2.9)
  • (Soundness) If x\notin L, every reconfiguration sequence \Pi=(\pi^{(1)},\ldots,\pi^{(T)}) from \pi^{\mathsf{start}}(x) to \pi^{\mathsf{goal}}(x) over \{0,1\}^{\operatorname{poly}(|x|)} includes a proof that is rejected by V(x) with probability more than \frac{1}{2}; namely,

    \exists t\in[T],\quad\operatorname*{\bm{\mathbb{P}}}\Bigl[V(x)\text{ accepts }\pi^{(t)}\Bigr]<\frac{1}{2}. (2.10)

We further introduce the notion of a regular verifier. We say that a verifier is regular if each position in its proof is equally likely to be queried. (Regular verifiers are sometimes called smooth verifiers, e.g., [Par21]; since the term “regularity” is compatible with that of (hyper)graphs, we use “regularity” rather than “smoothness.”)

Definition 2.6.

For a verifier V and an input x\in\{0,1\}^{*}, the degree of a position i of a proof is defined as the number of times i is queried by V(x) over the r(|x|) random bits; namely,

\left|\Bigl\{R\in\{0,1\}^{r(|x|)}\Bigm|i\in I_{R}\Bigr\}\right|=\operatorname*{\bm{\mathbb{P}}}_{(I,D)\sim V(x)}\Bigl[i\in I\Bigr]\cdot 2^{r(|x|)}, (2.11)

where r is the randomness complexity of V and I_{R} is the query sequence generated by V(x) on the randomness R. A verifier V is said to be \Delta-regular if the degree of every position is exactly equal to \Delta.
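Regularity can be checked directly from the table of query sequences; the following sketch (the toy verifier is ours) computes the degree of each position as in (2.11):

```python
from collections import Counter

def degrees(query_seqs):
    """Degree of each position: the number of random strings R whose
    query sequence I_R contains it.  query_seqs maps R to I_R."""
    return Counter(i for I in query_seqs.values() for i in set(I))

def is_regular(query_seqs, num_positions):
    """Δ-regular: every position of the proof has the same degree."""
    deg = degrees(query_seqs)
    return len({deg[i] for i in range(num_positions)}) == 1

# Toy verifier with 2 random bits querying a 4-position proof.
Q = {"00": (0, 1), "01": (2, 3), "10": (1, 2), "11": (3, 0)}
print(is_regular(Q, 4))  # True: every position has degree 2
```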

3 Subconstant Error PCRP Systems and FGLSS Reduction

In this section, we construct a bounded-degree PCRP verifier with subconstant error using Theorem 2.5 in Section 3.1, and prove \PSPACE-hardness of approximation for Partial 2CSP Reconfiguration and Label Cover Reconfiguration by the FGLSS reduction [FGLSS96] in Sections 3.2 and 3.3, respectively.

3.1 Bounded-degree PCRP Systems with Subconstant Error

Starting from Theorem 2.5, we first obtain a regular PCRP verifier for any \PSPACE language; the proof uses the degree reduction technique due to [Ohs23].

Proposition 3.1.

For any language L in \PSPACE, there exists a \Delta-regular PCRP verifier V with randomness complexity r(n)=\operatorname{\mathcal{O}}(\log n), query complexity q(n)=\operatorname{\mathcal{O}}(1), perfect completeness, and soundness 1-\varepsilon, for some constants \Delta\in\bm{\mathbb{N}} and \varepsilon\in(0,1).

Proof.
| problem/verifier | alph. size | soundness | notes | reference |
| q-query verifier | 2 | \frac{1}{2} | | [HO24, Theorem 5.1] |
| 2CSP Reconf | 3 | 1-\varepsilon (\varepsilon depends on q) | | [Ohs23, Lemmas 3.2 and 3.6] |
| 2CSP Reconf | 6 | 1-\overline{\varepsilon} (\overline{\varepsilon} depends on \varepsilon) | max. degree \Delta depends on \varepsilon | [Ohs23, Lemma 3.7] |
| 3SAT Reconf | 2 | 1-\Omega(\overline{\varepsilon}) | each variable appears \operatorname{\mathcal{O}}(\Delta) times | [Ohs23, Lemma 3.2] |
| 3-query verifier | 2 | 1-\Omega\left(\frac{\overline{\varepsilon}}{\Delta}\right) | \operatorname{\mathcal{O}}(\Delta)-regular | |
Table 1: Sequence of reductions used in Proposition 3.1.

Here, the suffix “W” will designate the restricted case of qCSP Reconfiguration whose alphabet size |\Sigma| is W. By Theorem 2.5, for some integer q\in\bm{\mathbb{N}}, there is a PCRP verifier V for L with randomness complexity \operatorname{\mathcal{O}}(\log n), query complexity q, perfect completeness, and soundness \frac{1}{2}. For an input x\in\{0,1\}^{*}, the verifier V(x) can be transformed into an instance of Gap_{1,\frac{1}{2}} qCSP2 Reconfiguration in a canonical manner, e.g., [HO24, Proposition 4.9]. By [Ohs23, Lemmas 3.2 and 3.6], we then obtain an instance of Gap_{1,1-\varepsilon} 2CSP3 Reconfiguration, where \varepsilon\in(0,1) depends only on q. Further applying the degree reduction step of [Ohs23, Lemma 3.7], we get an instance of Gap_{1,1-\overline{\varepsilon}} 2CSP6 Reconfiguration, whose underlying graph has maximum degree bounded by \Delta, where \overline{\varepsilon}\in(0,1) and \Delta\in\bm{\mathbb{N}} depend only on \varepsilon. Using [Ohs23, Lemma 3.2], an instance of Gap_{1,1-\Omega(\overline{\varepsilon})} 3SAT Reconfiguration is produced, where each Boolean variable appears in at most \operatorname{\mathcal{O}}(\Delta) clauses. By padding with trivial constraints, this instance is transformed into an instance of Gap_{1,1-\Omega(\frac{\overline{\varepsilon}}{\Delta})} 3CSP2 Reconfiguration, whose underlying graph is \operatorname{\mathcal{O}}(\Delta)-regular. That is to say, there exists a 3-query \operatorname{\mathcal{O}}(\Delta)-regular PCRP verifier \widetilde{V} for L with randomness complexity \operatorname{\mathcal{O}}(\log n), query complexity \operatorname{\mathcal{O}}(1), perfect completeness, and soundness 1-\varepsilon^{\prime}, where \varepsilon^{\prime}=\Omega\left(\frac{\overline{\varepsilon}}{\Delta}\right), as desired. See Table 1 for the sequence of reductions used to obtain \widetilde{V}. ∎

Subsequently, using a randomness-efficient sampler over expander graphs (e.g., [HLW06, Section 3]), we construct a bounded-degree PCRP verifier with subconstant error.

Proposition 3.2.

For any language LL in \PSPACE\PSPACE and any function δ:\delta\colon\bm{\mathbb{N}}\to\bm{\mathbb{R}} with δ(n)=Ω(n1)\delta(n)=\Omega(n^{-1}), there exists a bounded-degree PCRP verifier VV with randomness complexity r(n)=𝒪(logδ(n)1+logn)r(n)=\operatorname{\mathcal{O}}(\log\delta(n)^{-1}+\log n), query complexity q(n)=𝒪(logδ(n)1)q(n)=\operatorname{\mathcal{O}}(\log\delta(n)^{-1}), perfect completeness, and soundness δ(n)\delta(n). Moreover, for any input x{0,1}x\in\{0,1\}^{*}, the degree of any position is poly(δ(|x|)1)\operatorname{poly}(\delta(|x|)^{-1}).

Verifier Description.

Our PCRP verifier is described as follows. By Proposition 3.1, let VV be a Δ\Delta-regular PCRP verifier for a \PSPACE\PSPACE-complete language LL with randomness complexity r(n)=𝒪(logn)r(n)=\operatorname{\mathcal{O}}(\log n), query complexity q(n)=qq(n)=q\in\bm{\mathbb{N}}, perfect completeness, and soundness 1ε1-\varepsilon, where Δ\Delta\in\bm{\mathbb{N}} and ε(0,1)\varepsilon\in(0,1). The proof length, denoted by (n)\ell(n), is polynomially bounded since (n)q(n)2r(n)=poly(n)\ell(n)\leqslant q(n)2^{r(n)}=\operatorname{poly}(n). Hereafter, for any random bit sequence R{0,1}r(n)R\in\{0,1\}^{r(n)}, let IRI_{R} and DRD_{R} respectively denote the query sequence and circuit generated by V(x)V(x) on the randomness RR. Given a function δ:\delta\colon\bm{\mathbb{N}}\to\bm{\mathbb{R}} with δ(n)=Ω(n1)\delta(n)=\Omega(n^{-1}), we construct the following verifier V~\widetilde{V}:

Bounded-degree verifier V~\widetilde{V} with subconstant error.

Input: a Δ\Delta-regular verifier VV with soundness 1ε1-\varepsilon, a function δ:\delta\colon\bm{\mathbb{N}}\to\bm{\mathbb{R}}, and an input x{0,1}nx\in\{0,1\}^{n}.
Oracle access: a proof π{0,1}(n)\pi\in\{0,1\}^{\ell(n)}.

1: construct a (d,λ)(d,\lambda)-expander graph XX over vertex set {0,1}r(n)\{0,1\}^{r(n)} with λd<ε4\frac{\lambda}{d}<\frac{\varepsilon}{4}.
2: let ρ:-2εlnδ(n)1=𝒪(logδ(n)1)\rho\coloneq\left\lceil\frac{2}{\varepsilon}\ln\delta(n)^{-1}\right\rceil=\operatorname{\mathcal{O}}(\log\delta(n)^{-1}).
3: uniformly sample a (ρ1)(\rho-1)-length random walk 𝐑=(R1,,Rρ)\mathbf{\bm{R}}=(R_{1},\ldots,R_{\rho}) over XX using r(n)+ρlogdr(n)+\rho\cdot\log d random bits.
4: for each 1kρ1\leqslant k\leqslant\rho:
5:  execute V(x)V(x) on RkR_{k} to generate a query sequence IRk=(i1,,iq)I_{R_{k}}=(i_{1},\ldots,i_{q}) and a circuit DRk:{0,1}q{0,1}D_{R_{k}}\colon\{0,1\}^{q}\to\{0,1\}.
6:  if DRk(π|IRk)=0D_{R_{k}}(\pi|_{I_{R_{k}}})=0, declare reject.
7: declare accept.
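For concreteness, the following Python sketch mirrors the walk-and-verify control flow of V~\widetilde{V} on a toy instance. All names are hypothetical: a circulant graph stands in for a genuine (d,λ)(d,\lambda)-expander, and the base verifier is a toy equality check rather than an actual PCRP verifier.

```python
import random

def make_circulant(n, offsets):
    """Toy d-regular graph used as a stand-in for a (d, lambda)-expander."""
    return {v: [(v + o) % n for o in offsets] + [(v - o) % n for o in offsets]
            for v in range(n)}

def base_verifier(R, proof):
    """Toy base verifier V: on randomness R, query positions R and R+1
    (mod n) and accept iff the two queried bits agree."""
    return proof[R] == proof[(R + 1) % len(proof)]

def amplified_verifier(proof, graph, rho, rng):
    """Sample the vertices R_1, ..., R_rho of a (rho-1)-step random walk
    over the graph and accept iff the base verifier accepts on every R_k."""
    R = rng.choice(sorted(graph))
    for _ in range(rho):
        if not base_verifier(R, proof):
            return False  # declare reject
        R = rng.choice(graph[R])
    return True  # declare accept
```

On the all-zeros proof every local check passes, so the amplified verifier always accepts; on the alternating proof every local check fails, so any walk rejects at its first step.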

Correctness.

The perfect completeness and soundness for a fixed proof π{0,1}(n)\pi\in\{0,1\}^{\ell(n)} are shown below, whose proof relies on a hitting property of random walks over expander graphs due to [AFWZ95].

Claim 3.3.

If V(x)V(x) accepts π\pi with probability 11, then V~(x)\widetilde{V}(x) accepts π\pi with probability 11. If V(x)V(x) accepts π\pi with probability less than 1ε1-\varepsilon, then V~(x)\widetilde{V}(x) accepts π\pi with probability less than δ(n)\delta(n).

To prove Claim 3.3, we refer to the following hitting property of random walks over expander graphs.

Lemma 3.4 ([AFWZ95]).

Let XX be a (d,λ)(d,\lambda)-expander graph, S𝒱(X)S\subseteq\mathcal{V}(X) be any vertex subset, and 𝐑:-(R1,,Rρ)\mathbf{\bm{R}}\coloneq(R_{1},\ldots,R_{\rho}) be a ρ\rho-tuple of random variables denoting the vertices of a uniformly chosen (ρ1)(\rho-1)-length random walk over XX. Then, it holds that

(|S||𝒱(X)|2λd)ρ𝐑[k[ρ],RkS](|S||𝒱(X)|+2λd)ρ.\displaystyle\left(\frac{|S|}{|\mathcal{V}(X)|}-2\frac{\lambda}{d}\right)^{\rho}\leqslant\operatorname*{\bm{\mathbb{P}}}_{\mathbf{\bm{R}}}\Bigl{[}\forall k\in[\rho],\;R_{k}\in S\Bigr{]}\leqslant\left(\frac{|S|}{|\mathcal{V}(X)|}+2\frac{\lambda}{d}\right)^{\rho}. (3.1)
Proof of Claim 3.3.

Suppose first that V(x)V(x) accepts π\pi with probability 11; then, it holds that

[V~(x) accepts π]=𝐑[k[ρ],DRk(π|IRk)=1]=1.\displaystyle\operatorname*{\bm{\mathbb{P}}}\Bigl{[}\widetilde{V}(x)\text{ accepts }\pi\Bigr{]}=\operatorname*{\bm{\mathbb{P}}}_{\mathbf{\bm{R}}}\Bigl{[}\forall k\in[\rho],\;D_{R_{k}}(\pi|_{I_{R_{k}}})=1\Bigr{]}=1. (3.2)

Suppose now that V(x)V(x) accepts π\pi with probability less than 1ε1-\varepsilon. Define

S:-{R{0,1}r(n)|DR(π|IR)=1}.\displaystyle S\coloneq\Bigl{\{}R\in\{0,1\}^{r(n)}\Bigm{|}D_{R}(\pi|_{I_{R}})=1\Bigr{\}}. (3.3)

Note that |S||𝒱(X)|<1ε\frac{|S|}{|\mathcal{V}(X)|}<1-\varepsilon. Then, it holds that

[V~(x) accepts π]=𝐑[k[ρ],DRk(π|IRk)=1]=𝐑[k[ρ],RkS].\displaystyle\operatorname*{\bm{\mathbb{P}}}\Bigl{[}\widetilde{V}(x)\text{ accepts }\pi\Bigr{]}=\operatorname*{\bm{\mathbb{P}}}_{\mathbf{\bm{R}}}\Bigl{[}\forall k\in[\rho],\;D_{R_{k}}(\pi|_{I_{R_{k}}})=1\Bigr{]}=\operatorname*{\bm{\mathbb{P}}}_{\mathbf{\bm{R}}}\Bigl{[}\forall k\in[\rho],\;R_{k}\in S\Bigr{]}. (3.4)

Applying Lemma 3.4, we derive

𝐑[k[ρ],RkS](|S||𝒱(X)|+2λd)ρ<(1ε2)ρ(by λd<ε4)exp(ε2ρ)δ(n),(by ρ=2εlnδ(n)1)\displaystyle\begin{aligned} \operatorname*{\bm{\mathbb{P}}}_{\mathbf{\bm{R}}}\Bigl{[}\forall k\in[\rho],\;R_{k}\in S\Bigr{]}&\leqslant\left(\frac{|S|}{|\mathcal{V}(X)|}+2\frac{\lambda}{d}\right)^{\rho}\\ &<\left(1-\frac{\varepsilon}{2}\right)^{\rho}&\text{(by }\frac{\lambda}{d}<\frac{\varepsilon}{4}\text{)}\\ &\leqslant\exp\left(-\frac{\varepsilon}{2}\rho\right)\\ &\leqslant\delta(n),&\text{(by }\rho=\left\lceil\frac{2}{\varepsilon}\ln\delta(n)^{-1}\right\rceil\text{)}\end{aligned} (3.5)

completing the proof. ∎
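As a numerical sanity check on Eq. 3.5, the following sketch (helper names are ours, not the paper's) computes ρ\rho and confirms that (1ε2)ρδ(n)(1-\frac{\varepsilon}{2})^{\rho}\leqslant\delta(n) for several parameter choices:

```python
import math

def walk_length(eps, delta):
    """rho := ceil((2/eps) * ln(1/delta)), as set by the verifier."""
    return math.ceil((2 / eps) * math.log(1 / delta))

def soundness_bound(eps, delta):
    """The upper bound (1 - eps/2)^rho of Eq. (3.5) on the acceptance
    probability of a proof accepted with probability less than 1 - eps."""
    return (1 - eps / 2) ** walk_length(eps, delta)
```

Since (1x)<ex(1-x)<e^{-x} for x>0x>0, the bound always lands at or below δ\delta, matching the chain of inequalities in Eq. 3.5.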

We are now ready to prove Proposition 3.2.

Proof of Proposition 3.2.

We first show the perfect completeness and soundness. Suppose xLx\in L; then, there exists a reconfiguration sequence Π=(π(1),,π(T))\Pi=(\pi^{(1)},\ldots,\pi^{(T)}) from π𝗌𝗍𝖺𝗋𝗍(x)\pi^{\mathsf{start}}(x) to π𝗀𝗈𝖺𝗅(x)\pi^{\mathsf{goal}}(x) such that [V(x) accepts π(t)]=1\operatorname*{\bm{\mathbb{P}}}[V(x)\text{ accepts }\pi^{(t)}]=1 for all tt. By Claim 3.3, we have [V~(x) accepts π(t)]=1\operatorname*{\bm{\mathbb{P}}}[\widetilde{V}(x)\text{ accepts }\pi^{(t)}]=1 for all tt. Suppose xLx\notin L; then, for every reconfiguration sequence Π=(π(1),,π(T))\Pi=(\pi^{(1)},\ldots,\pi^{(T)}) from π𝗌𝗍𝖺𝗋𝗍(x)\pi^{\mathsf{start}}(x) to π𝗀𝗈𝖺𝗅(x)\pi^{\mathsf{goal}}(x), it holds that [V(x) accepts π(t)]<1ε\operatorname*{\bm{\mathbb{P}}}[V(x)\text{ accepts }\pi^{(t)}]<1-\varepsilon for some tt. By Claim 3.3, we have [V~(x) accepts π(t)]<δ(n)\operatorname*{\bm{\mathbb{P}}}[\widetilde{V}(x)\text{ accepts }\pi^{(t)}]<\delta(n) for such tt.

Since ρ=𝒪(logδ(n)1)\rho=\operatorname{\mathcal{O}}(\log\delta(n)^{-1}), the randomness complexity of V~\widetilde{V} is equal to r~(n)=r(n)+ρlogd=𝒪(logδ(n)1+logn)\widetilde{r}(n)=r(n)+\rho\cdot\log d=\operatorname{\mathcal{O}}(\log\delta(n)^{-1}+\log n), and the query complexity is q~(n)=q(n)ρ=𝒪(logδ(n)1)\widetilde{q}(n)=q(n)\cdot\rho=\operatorname{\mathcal{O}}(\log\delta(n)^{-1}). Note that dd and λ\lambda may depend only on ε\varepsilon, and a (d,λ)(d,\lambda)-expander graph XX over {0,1}r(n)\{0,1\}^{r(n)} can be constructed in polynomial time in 2r(n)=poly(n)2^{r(n)}=\operatorname{poly}(n), e.g., by using an explicit construction of near-Ramanujan graphs [MOP21, Alo21].

Observe finally that V~\widetilde{V} queries each position i[(n)]i\in[\ell(n)] of a proof with probability equal to

𝐑[1kρ(iIRk)].\displaystyle\operatorname*{\bm{\mathbb{P}}}_{\mathbf{\bm{R}}}\left[\bigvee_{1\leqslant k\leqslant\rho}\Bigl{(}i\in I_{R_{k}}\Bigr{)}\right]. (3.6)

Since VV is Δ\Delta-regular, it holds that

R{0,1}r(n)[iIR]=Δ2r(n).\displaystyle\operatorname*{\bm{\mathbb{P}}}_{R\sim\{0,1\}^{r(n)}}\Bigl{[}i\in I_{R}\Bigr{]}=\frac{\Delta}{2^{r(n)}}. (3.7)

Using the fact that each RkR_{k} is uniformly distributed over {0,1}r(n)\{0,1\}^{r(n)}, we bound Eq. 3.6 as follows:

𝐑[1kρ(iIRk)]union boundk[ρ]𝐑[iIRk]=ρΔ2r(n)=𝒪(logδ(n)12r(n)).\displaystyle\operatorname*{\bm{\mathbb{P}}}_{\mathbf{\bm{R}}}\left[\bigvee_{1\leqslant k\leqslant\rho}\Bigl{(}i\in I_{R_{k}}\Bigr{)}\right]\underbrace{\leqslant}_{\text{union bound}}\sum_{k\in[\rho]}\operatorname*{\bm{\mathbb{P}}}_{\mathbf{\bm{R}}}\Bigl{[}i\in I_{R_{k}}\Bigr{]}=\frac{\rho\cdot\Delta}{2^{r(n)}}=\operatorname{\mathcal{O}}\left(\frac{\log\delta(n)^{-1}}{2^{r(n)}}\right). (3.8)

Consequently, the degree of each position ii with respect to V~\widetilde{V} is at most

𝐑[1kρ(iIRk)]2r~(n)=𝒪(logδ(n)12r(n))2r(n)+ρlogd=𝒪(logδ(n)1)2𝒪(logδ(n)1)=poly(δ(n)1),\displaystyle\begin{aligned} \operatorname*{\bm{\mathbb{P}}}_{\mathbf{\bm{R}}}\left[\bigvee_{1\leqslant k\leqslant\rho}\Bigl{(}i\in I_{R_{k}}\Bigr{)}\right]\cdot 2^{\widetilde{r}(n)}&=\operatorname{\mathcal{O}}\left(\frac{\log\delta(n)^{-1}}{2^{r(n)}}\right)\cdot 2^{r(n)+\rho\cdot\log d}\\ &=\operatorname{\mathcal{O}}(\log\delta(n)^{-1})\cdot 2^{\operatorname{\mathcal{O}}(\log\delta(n)^{-1})}\\ &=\operatorname{poly}(\delta(n)^{-1}),\end{aligned} (3.9)

which completes the proof. ∎

3.2 FGLSS Reduction and \PSPACE\PSPACE-hardness of Approximation for Partial 2CSP Reconfiguration

We now establish the FGLSS reduction from Proposition 3.2 and show that Partial 2CSP Reconfiguration is \PSPACE\PSPACE-hard to approximate within a factor arbitrarily close to 0.

Theorem 3.5.

For any function ε:\varepsilon\colon\bm{\mathbb{N}}\to\bm{\mathbb{R}} with ε(n)=Ω(1polylogn)\varepsilon(n)=\Omega\left(\frac{1}{\operatorname{polylog}n}\right), Gap1,ε(N) Partial 2CSP Reconfiguration with alphabet size poly(ε(N)1)\operatorname{poly}(\varepsilon(N)^{-1}) is \PSPACE-complete, where NN is the number of vertices.

Reduction.

We describe a reduction from a bounded-degree PCRP verifier to Partial 2CSP Reconfiguration. Define δ(n):-ε(poly(n))2\delta(n)\coloneq\frac{\varepsilon(\operatorname{poly}(n))}{2}, whose precise expression is given later. For any \PSPACE-complete language LL, let VV be a bounded-degree PCRP verifier of Proposition 3.2 with randomness complexity r(n)=𝒪(logδ(n)1+logn)r(n)=\operatorname{\mathcal{O}}(\log\delta(n)^{-1}+\log n), query complexity q(n)=𝒪(logδ(n)1)q(n)=\operatorname{\mathcal{O}}(\log\delta(n)^{-1}), perfect completeness, and soundness δ(n)\delta(n). The proof length (n)\ell(n) is polynomially bounded. Suppose we are given an input x{0,1}nx\in\{0,1\}^{n}. Let π𝗌𝗍𝖺𝗋𝗍,π𝗀𝗈𝖺𝗅{0,1}(n)\pi^{\mathsf{start}},\pi^{\mathsf{goal}}\in\{0,1\}^{\ell(n)} be the two proofs associated with V(x)V(x). Because the degree of VV is bounded by poly(δ(n)1)\operatorname{poly}(\delta(n)^{-1}), for some constant κ\kappa\in\bm{\mathbb{N}}, we have

(I,D)V(x)[iI]δ(n)κ2r(n) for any i[(n)].\displaystyle\operatorname*{\bm{\mathbb{P}}}_{(I,D)\sim V(x)}\Bigl{[}i\in I\Bigr{]}\leqslant\frac{\delta(n)^{-\kappa}}{2^{r(n)}}\text{ for any }i\in[\ell(n)]. (3.10)

Hereafter, for any random bit sequence R{0,1}r(n)R\in\{0,1\}^{r(n)}, let IRI_{R} and DRD_{R} denote the query sequence and the circuit generated by V(x)V(x) on the randomness RR, respectively. Construct a constraint graph G=(𝒱,,Σ,Ψ)G=(\mathcal{V},\mathcal{E},\Sigma,\Psi) as follows:

𝒱\displaystyle\mathcal{V} :-{0,1}r(n),\displaystyle\coloneq\{0,1\}^{r(n)}, (3.11)
\displaystyle\mathcal{E} :-{(R1,R2)𝒱×𝒱|IR1IR2},\displaystyle\coloneq\Bigl{\{}(R_{1},R_{2})\in\mathcal{V}\times\mathcal{V}\Bigm{|}I_{R_{1}}\cap I_{R_{2}}\neq\emptyset\Bigr{\}}, (3.12)
Σ\displaystyle\Sigma :-{{0},{1},{0,1}}q(n),\displaystyle\coloneq\Bigl{\{}\{0\},\{1\},\{0,1\}\Bigr{\}}^{q(n)}, (3.13)
Ψ\displaystyle\Psi :-{ψe}e,\displaystyle\coloneq\{\psi_{e}\}_{e\in\mathcal{E}}, (3.14)

where we define ψR1,R2:Σ×Σ{0,1}\psi_{R_{1},R_{2}}\colon\Sigma\times\Sigma\to\{0,1\} for each edge (R1,R2)(R_{1},R_{2})\in\mathcal{E} so that ψR1,R2(f(R1),f(R2))=1\psi_{R_{1},R_{2}}(f(R_{1}),f(R_{2}))=1 for an assignment f:𝒱Σf\colon\mathcal{V}\to\Sigma if and only if the following three conditions are satisfied:

αiIR1f(R1)i,DR1(α)=1,\displaystyle\forall\alpha\in\prod_{i\in I_{R_{1}}}f(R_{1})_{i},\quad D_{R_{1}}(\alpha)=1, (3.15)
βiIR2f(R2)i,DR2(β)=1,\displaystyle\forall\beta\in\prod_{i\in I_{R_{2}}}f(R_{2})_{i},\quad D_{R_{2}}(\beta)=1, (3.16)
iIR1IR2,f(R1)if(R2)i or f(R1)if(R2)i.\displaystyle\forall i\in I_{R_{1}}\cap I_{R_{2}},\quad f(R_{1})_{i}\subseteq f(R_{2})_{i}\text{ or }f(R_{1})_{i}\supseteq f(R_{2})_{i}. (3.17)

Here, for the sake of simple notation, we consider f(R)f(R) as if it were indexed by IRI_{R} (rather than [q(n)][q(n)]); namely, f(R){{0},{1},{0,1}}IRf(R)\in\{\{0\},\{1\},\{0,1\}\}^{I_{R}}. Thus, f(R)f(R) for each R𝒱R\in\mathcal{V} corresponds to the local view of V(x)V(x) on RR.

For any proof π{0,1}(n)\pi\in\{0,1\}^{\ell(n)}, we associate it with an assignment fπ:𝒱Σf_{\pi}\colon\mathcal{V}\to\Sigma such that

fπ(R):-({πi})iIR for all R𝒱.\displaystyle f_{\pi}(R)\coloneq\Bigl{(}\{\pi_{i}\}\Bigr{)}_{i\in I_{R}}\text{ for all }R\in\mathcal{V}. (3.18)

Note that fπ(R){{0},{1}}IRf_{\pi}(R)\in\{\{0\},\{1\}\}^{I_{R}}. Constructing two assignments f𝗌𝗍𝖺𝗋𝗍f^{\mathsf{start}} from π𝗌𝗍𝖺𝗋𝗍\pi^{\mathsf{start}} and f𝗀𝗈𝖺𝗅f^{\mathsf{goal}} from π𝗀𝗈𝖺𝗅\pi^{\mathsf{goal}} by Eq. 3.18, we obtain an instance (G,f𝗌𝗍𝖺𝗋𝗍,f𝗀𝗈𝖺𝗅)(G,f^{\mathsf{start}},f^{\mathsf{goal}}) of Partial 2CSP Reconfiguration. Observe that f𝗌𝗍𝖺𝗋𝗍f^{\mathsf{start}} and f𝗀𝗈𝖺𝗅f^{\mathsf{goal}} satisfy GG and f𝗌𝗍𝖺𝗋𝗍=f𝗀𝗈𝖺𝗅=|𝒱|\|f^{\mathsf{start}}\|=\|f^{\mathsf{goal}}\|=|\mathcal{V}|. Note that N:-|𝒱|ncN\coloneq|\mathcal{V}|\leqslant n^{c} for some constant cc\in\bm{\mathbb{N}}. Letting δ(n):-ε(nc)2=Ω(1polylogn)\delta(n)\coloneq\frac{\varepsilon(n^{c})}{2}=\Omega\left(\frac{1}{\operatorname{polylog}n}\right) ensures that the alphabet size is |Σ|=𝒪(3q(n))=poly(ε(N)1)|\Sigma|=\operatorname{\mathcal{O}}(3^{q(n)})=\operatorname{poly}(\varepsilon(N)^{-1}). This completes the description of the reduction.
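A minimal sketch of the FGLSS-style construction above, assuming a toy query map in place of V(x)V(x) (all names hypothetical): vertices are randomness strings, and two of them are adjacent precisely when their query sets intersect, so in particular every vertex carries a self-loop.

```python
from itertools import product

def build_constraint_graph(queries):
    """queries maps each randomness R to its query sequence I_R.
    Returns vertices V and edges E = {(R1, R2) : I_R1 and I_R2 intersect},
    as in Eqs. (3.11)-(3.12)."""
    V = list(queries)
    E = [(R1, R2) for R1, R2 in product(V, V)
         if set(queries[R1]) & set(queries[R2])]
    return V, E

def assignment_from_proof(proof, queries):
    """f_pi(R) := ({pi_i})_{i in I_R}, the singleton local view of Eq. (3.18)."""
    return {R: {i: {proof[i]} for i in I_R} for R, I_R in queries.items()}
```

This is only an illustration of the combinatorial skeleton; the constraint functions ψR1,R2\psi_{R_{1},R_{2}} of Eqs. 3.15 to 3.17 are omitted.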

Correctness.

We first prove the completeness.

Lemma 3.6 (Completeness).

If xLx\in L, then 𝖬𝖺𝗑𝖯𝖺𝗋G(f𝗌𝗍𝖺𝗋𝗍f𝗀𝗈𝖺𝗅)=1\operatorname{\mathsf{MaxPar}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}})=1.

Proof of Lemma 3.6.

It is sufficient to consider the case that π𝗌𝗍𝖺𝗋𝗍\pi^{\mathsf{start}} and π𝗀𝗈𝖺𝗅\pi^{\mathsf{goal}} differ in exactly one position, say, i[(n)]i^{\star}\in[\ell(n)]; namely, πi𝗌𝗍𝖺𝗋𝗍πi𝗀𝗈𝖺𝗅\pi^{\mathsf{start}}_{i^{\star}}\neq\pi^{\mathsf{goal}}_{i^{\star}} and πi𝗌𝗍𝖺𝗋𝗍=πi𝗀𝗈𝖺𝗅\pi^{\mathsf{start}}_{i}=\pi^{\mathsf{goal}}_{i} for all iii\neq i^{\star}. Note that f𝗌𝗍𝖺𝗋𝗍f^{\mathsf{start}} and f𝗀𝗈𝖺𝗅f^{\mathsf{goal}} may differ in two or more vertices. Consider a reconfiguration partial assignment sequence FF from f𝗌𝗍𝖺𝗋𝗍f^{\mathsf{start}} to f𝗀𝗈𝖺𝗅f^{\mathsf{goal}} obtained by the following procedure:

Reconfiguration sequence FF from f𝗌𝗍𝖺𝗋𝗍f^{\mathsf{start}} to f𝗀𝗈𝖺𝗅f^{\mathsf{goal}}.

1: for each R𝒱R\in\mathcal{V} such that iIRi^{\star}\in I_{R}, change the ii^{\star}th entry of RR’s current value from f𝗌𝗍𝖺𝗋𝗍(R)i={πi𝗌𝗍𝖺𝗋𝗍}f^{\mathsf{start}}(R)_{i^{\star}}=\{\pi^{\mathsf{start}}_{i^{\star}}\} to {0,1}\{0,1\}.
2: for each R𝒱R\in\mathcal{V} such that iIRi^{\star}\in I_{R}, change the ii^{\star}th entry of RR’s current value from {0,1}\{0,1\} to f𝗀𝗈𝖺𝗅(R)i={πi𝗀𝗈𝖺𝗅}f^{\mathsf{goal}}(R)_{i^{\star}}=\{\pi^{\mathsf{goal}}_{i^{\star}}\}.

Observe that any partial assignment ff^{\circ} of FF satisfies GG for the following reasons:

  • Since f(R)i{0,1}={πi𝗌𝗍𝖺𝗋𝗍,πi𝗀𝗈𝖺𝗅}=f𝗌𝗍𝖺𝗋𝗍(R)if𝗀𝗈𝖺𝗅(R)if^{\circ}(R)_{i^{\star}}\subseteq\{0,1\}=\{\pi^{\mathsf{start}}_{i^{\star}},\pi^{\mathsf{goal}}_{i^{\star}}\}=f^{\mathsf{start}}(R)_{i^{\star}}\cup f^{\mathsf{goal}}(R)_{i^{\star}} when iIRi^{\star}\in I_{R}, ff^{\circ} satisfies Eqs. 3.15 and 3.16.

  • Letting K:-{f(R)iR𝒱 s.t. iIR}K\coloneq\{f^{\circ}(R)_{i^{\star}}\mid R\in\mathcal{V}\text{ s.t. }i^{\star}\in I_{R}\}, we find KK to be either {{0}}\{\{0\}\}, {{1}}\{\{1\}\}, {{0,1}}\{\{0,1\}\}, {{0},{0,1}}\{\{0\},\{0,1\}\}, or {{1},{0,1}}\{\{1\},\{0,1\}\} by construction; i.e., ff^{\circ} satisfies Eq. 3.17.

Since f=|𝒱|\|f^{\circ}\|=|\mathcal{V}| for every partial assignment ff^{\circ} of FF, it holds that 𝖬𝖺𝗑𝖯𝖺𝗋G(f𝗌𝗍𝖺𝗋𝗍f𝗀𝗈𝖺𝗅)Fmin|𝒱|=1\operatorname{\mathsf{MaxPar}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}})\geqslant\frac{\|F\|_{\min}}{|\mathcal{V}|}=1, completing the proof. ∎

Lemma 3.7 (Soundness).

If xLx\notin L, then

𝖬𝖺𝗑𝖯𝖺𝗋G(f𝗌𝗍𝖺𝗋𝗍f𝗀𝗈𝖺𝗅)<δ(n)+q(n)δ(n)κ2r(n).\displaystyle\operatorname{\mathsf{MaxPar}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}})<\delta(n)+\frac{q(n)\cdot\delta(n)^{-\kappa}}{2^{r(n)}}. (3.19)

The proof of Theorem 3.5 follows from Lemmas 3.6 and 3.7 because for any sufficiently large nn such that q(n)δ(n)κ2r(n)δ(n)\frac{q(n)\cdot\delta(n)^{-\kappa}}{2^{r(n)}}\leqslant\delta(n) (note that δ(n)=Ω(1polylogn)\delta(n)=\Omega\left(\frac{1}{\operatorname{polylog}n}\right)), the following hold:

  • (Perfect completeness) If xLx\in L, then 𝖬𝖺𝗑𝖯𝖺𝗋G(f𝗌𝗍𝖺𝗋𝗍f𝗀𝗈𝖺𝗅)=1\operatorname{\mathsf{MaxPar}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}})=1;

  • (Soundness) If xLx\notin L, then 𝖬𝖺𝗑𝖯𝖺𝗋G(f𝗌𝗍𝖺𝗋𝗍f𝗀𝗈𝖺𝗅)<2δ(n)=ε(N)\operatorname{\mathsf{MaxPar}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}})<2\delta(n)=\varepsilon(N).

Proof of Lemma 3.7.

We prove the contrapositive. Suppose 𝖬𝖺𝗑𝖯𝖺𝗋G(f𝗌𝗍𝖺𝗋𝗍f𝗀𝗈𝖺𝗅)Γ\operatorname{\mathsf{MaxPar}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}})\geqslant\Gamma for some Γ(0,1)\Gamma\in(0,1); then, there is a reconfiguration partial assignment sequence F=(f(1),,f(T))F=(f^{(1)},\ldots,f^{(T)}) from f𝗌𝗍𝖺𝗋𝗍f^{\mathsf{start}} to f𝗀𝗈𝖺𝗅f^{\mathsf{goal}} such that Fmin|𝒱|=𝖬𝖺𝗑𝖯𝖺𝗋G(f𝗌𝗍𝖺𝗋𝗍f𝗀𝗈𝖺𝗅)Γ\frac{\|F\|_{\min}}{|\mathcal{V}|}=\operatorname{\mathsf{MaxPar}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}})\geqslant\Gamma. Define then a (not necessarily reconfiguration) sequence Π=(π(1),,π(T))\Pi=(\pi^{(1)},\ldots,\pi^{(T)}) over {0,1}(n)\{0,1\}^{\ell(n)} such that each proof π(t)\pi^{(t)} is determined based on the plurality vote over f(t)f^{(t)}; namely,

πi(t):-argmaxb{0,1}|{R𝒱|iIR and bf(t)(R)i}|for all i[(n)],\displaystyle\pi^{(t)}_{i}\coloneq\operatorname*{argmax}_{b\in\{0,1\}}\left|\Bigl{\{}R\in\mathcal{V}\Bigm{|}i\in I_{R}\text{ and }b\in f^{(t)}(R)_{i}\Bigr{\}}\right|\;\text{for all }i\in[\ell(n)], (3.20)

where ties are broken so that 0 is chosen. In particular, π(1)=π𝗌𝗍𝖺𝗋𝗍\pi^{(1)}=\pi^{\mathsf{start}} and π(T)=π𝗀𝗈𝖺𝗅\pi^{(T)}=\pi^{\mathsf{goal}}. Observe the following:

Observation 3.8.

For any t[T]t\in[T] and R𝒱R\in\mathcal{V}, it holds that

f(t)(R)DR(π(t)|IR)=1.\displaystyle f^{(t)}(R)\neq\bot\implies D_{R}(\pi^{(t)}|_{I_{R}})=1. (3.21)

Since R𝒱[f(t)(R)]=f(t)|𝒱|Γ\operatorname*{\bm{\mathbb{P}}}_{R\sim\mathcal{V}}[f^{(t)}(R)\neq\bot]=\frac{\|f^{(t)}\|}{|\mathcal{V}|}\geqslant\Gamma, by Observation 3.8, we have that for all tt,

[V(x) accepts π(t)]=R{0,1}r(n)[DR(π(t)|IR)=1]R𝒱[f(t)(R)]Γ.\displaystyle\operatorname*{\bm{\mathbb{P}}}\Bigl{[}V(x)\text{ accepts }\pi^{(t)}\Bigr{]}=\operatorname*{\bm{\mathbb{P}}}_{R\sim\{0,1\}^{r(n)}}\Bigl{[}D_{R}(\pi^{(t)}|_{I_{R}})=1\Bigr{]}\geqslant\operatorname*{\bm{\mathbb{P}}}_{R\sim\mathcal{V}}\Bigl{[}f^{(t)}(R)\neq\bot\Bigr{]}\geqslant\Gamma. (3.22)

Unfortunately, Π\Pi may not be a reconfiguration sequence because π(t)\pi^{(t)} and π(t+1)\pi^{(t+1)} may differ in two or more positions. Since f(t)f^{(t)} and f(t+1)f^{(t+1)} differ in at most one vertex R𝒱R\in\mathcal{V}, we have πi(t)πi(t+1)\pi^{(t)}_{i}\neq\pi^{(t+1)}_{i} only if iIRi\in I_{R}, implying Δ(π(t),π(t+1))|IR|=q(n)\Delta(\pi^{(t)},\pi^{(t+1)})\leqslant|I_{R}|=q(n). Using this fact, we interpolate between π(t)\pi^{(t)} and π(t+1)\pi^{(t+1)} to find a valid reconfiguration sequence Π(t)\Pi^{(t)} such that V(x)V(x) accepts every proof of Π(t)\Pi^{(t)} with probability Γo(1)\Gamma-o(1).

Claim 3.9.

There exists a reconfiguration sequence Π(t)\Pi^{(t)} from π(t)\pi^{(t)} to π(t+1)\pi^{(t+1)} such that for every proof π\pi^{\circ} of Π(t)\Pi^{(t)},

[V(x) accepts π]Γq(n)δ(n)κ2r(n).\displaystyle\operatorname*{\bm{\mathbb{P}}}\Bigl{[}V(x)\text{ accepts }\pi^{\circ}\Bigr{]}\geqslant\Gamma-\frac{q(n)\cdot\delta(n)^{-\kappa}}{2^{r(n)}}. (3.23)

Concatenating Π(t)\Pi^{(t)}’s of Claim 3.9 for all tt, we obtain a valid reconfiguration sequence Π\Pi from π𝗌𝗍𝖺𝗋𝗍\pi^{\mathsf{start}} to π𝗀𝗈𝖺𝗅\pi^{\mathsf{goal}} such that

min1tT[V(x) accepts π(t)]Γq(n)δ(n)κ2r(n).\displaystyle\min_{1\leqslant t\leqslant T}\operatorname*{\bm{\mathbb{P}}}\Bigl{[}V(x)\text{ accepts }\pi^{(t)}\Bigr{]}\geqslant\Gamma-\frac{q(n)\cdot\delta(n)^{-\kappa}}{2^{r(n)}}. (3.24)

Substituting δ(n)+q(n)δ(n)κ2r(n)\delta(n)+\frac{q(n)\cdot\delta(n)^{-\kappa}}{2^{r(n)}} for Γ\Gamma, we have that if 𝖬𝖺𝗑𝖯𝖺𝗋G(f𝗌𝗍𝖺𝗋𝗍f𝗀𝗈𝖺𝗅)δ(n)+q(n)δ(n)κ2r(n)\operatorname{\mathsf{MaxPar}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}})\geqslant\delta(n)+\frac{q(n)\cdot\delta(n)^{-\kappa}}{2^{r(n)}}, then V(x)V(x) accepts every proof π(t)\pi^{(t)} of Π\Pi with probability at least δ(n)\delta(n); i.e., xLx\in L. This completes the proof of Lemma 3.7. ∎

What remains to be done is to prove Observation 3.8 and Claim 3.9.

Proof of Observation 3.8.

Suppose f(t)(R)f^{(t)}(R)\neq\bot for some t[T]t\in[T] and R𝒱R\in\mathcal{V}. We will show that πi(t)f(t)(R)i\pi^{(t)}_{i}\in f^{(t)}(R)_{i} for every iIRi\in I_{R}. Define

K:-{f(t)(R)i|R𝒱 s.t. iIR and f(t)(R)}.\displaystyle K\coloneq\Bigl{\{}f^{(t)}(R^{\prime})_{i}\Bigm{|}R^{\prime}\in\mathcal{V}\text{ s.t.~{}}i\in I_{R^{\prime}}\text{ and }f^{(t)}(R^{\prime})\neq\bot\Bigr{\}}. (3.25)

Then, any pair α,βK\alpha,\beta\in K must satisfy αβ\alpha\subseteq\beta or αβ\alpha\supseteq\beta because otherwise, f(t)f^{(t)} would violate Eq. 3.17 at an edge (R1,R2)(R_{1},R_{2})\in\mathcal{E} such that iIR1IR2i\in I_{R_{1}}\cap I_{R_{2}}, f(t)(R1)i=αf^{(t)}(R_{1})_{i}=\alpha, and f(t)(R2)i=βf^{(t)}(R_{2})_{i}=\beta, which is a contradiction. For each possible case of KK, the result of the plurality vote πi(t)\pi^{(t)}_{i} is shown below, implying that πi(t)f(t)(R)i\pi^{(t)}_{i}\in f^{(t)}(R)_{i}.

KK | {}\{\} | {{0}}\{\{0\}\} | {{1}}\{\{1\}\} | {{0,1}}\{\{0,1\}\} | {{0},{0,1}}\{\{0\},\{0,1\}\} | {{1},{0,1}}\{\{1\},\{0,1\}\}
πi(t)\pi^{(t)}_{i} | 0 | 0 | 11 | 0 | 0 | 11

Since f(t)(R)f^{(t)}(R) must satisfy a self-loop (R,R)(R,R)\in\mathcal{E}, by the definition of ψR,R\psi_{R,R}, we have

αiIRf(t)(R)i,DR(α)=1.\displaystyle\forall\alpha\in\prod_{i\in I_{R}}f^{(t)}(R)_{i},\quad D_{R}(\alpha)=1. (3.26)

On the other hand, it holds that

π(t)|IRiIRf(t)(R)i,\displaystyle\pi^{(t)}|_{I_{R}}\in\prod_{i\in I_{R}}f^{(t)}(R)_{i}, (3.27)

implying DR(π(t)|IR)=1D_{R}(\pi^{(t)}|_{I_{R}})=1, as desired. ∎
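The plurality vote of Eq. 3.20 and the tie-breaking toward 0 can be sketched as follows (hypothetical representation: a partial assignment maps each randomness RR either to None, standing for ⊥, or to a dict from queried positions to subsets of {0,1}\{0,1\}):

```python
def plurality_decode(f, queries, ell):
    """pi_i := argmax_b #{R : i in I_R and b in f(R)_i}, ties broken to 0,
    mirroring Eq. (3.20)."""
    pi = []
    for i in range(ell):
        votes = {0: 0, 1: 0}
        for R, I_R in queries.items():
            if f.get(R) is not None and i in I_R:
                for b in (0, 1):
                    if b in f[R][i]:
                        votes[b] += 1
        pi.append(1 if votes[1] > votes[0] else 0)
    return pi
```

For instance, a position on which the non-⊥ local views are {0}\{0\} and {0,1}\{0,1\} decodes to 0, matching the case table above.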

Proof of Claim 3.9.

Recall that π(t)\pi^{(t)} and π(t+1)\pi^{(t+1)} may differ in at most q(n)q(n) positions. Consider any trivial reconfiguration sequence Π(t)\Pi^{(t)} from π(t)\pi^{(t)} to π(t+1)\pi^{(t+1)} by simply changing at most q(n)q(n) positions on which π(t)\pi^{(t)} and π(t+1)\pi^{(t+1)} differ. By construction, any proof π\pi^{\circ} of Π(t)\Pi^{(t)} differs from π(t)\pi^{(t)} in at most q(n)q(n) positions, say, I((n)q(n))I^{\circ}\in{\ell(n)\choose\leqslant q(n)}. Then, we derive the following:

[V(x) accepts π]=(I,D)V(x)[D(π|I)=1](I,D)V(x)[D(π|I)=1 and II=]=(I,D)V(x)[D(π(t)|I)=1 and II=]=(I,D)V(x)[D(π(t)|I)=1]=[V(x) accepts π(t)]Γ(I,D)V(x)[D(π(t)|I)=1 and II]Γ(I,D)V(x)[II].\displaystyle\begin{aligned} \operatorname*{\bm{\mathbb{P}}}\Bigl{[}V(x)\text{ accepts }\pi^{\circ}\Bigr{]}&=\operatorname*{\bm{\mathbb{P}}}_{(I,D)\sim V(x)}\Bigl{[}D(\pi^{\circ}|_{I})=1\Bigr{]}\geqslant\operatorname*{\bm{\mathbb{P}}}_{(I,D)\sim V(x)}\Bigl{[}D(\pi^{\circ}|_{I})=1\text{ and }I\cap I^{\circ}=\emptyset\Bigr{]}\\ &=\operatorname*{\bm{\mathbb{P}}}_{(I,D)\sim V(x)}\Bigl{[}D(\pi^{(t)}|_{I})=1\text{ and }I\cap I^{\circ}=\emptyset\Bigr{]}\\ &=\underbrace{\operatorname*{\bm{\mathbb{P}}}_{(I,D)\sim V(x)}\Bigl{[}D(\pi^{(t)}|_{I})=1\Bigr{]}}_{=\operatorname*{\bm{\mathbb{P}}}\left[V(x)\text{ accepts }\pi^{(t)}\right]\geqslant\Gamma}-\operatorname*{\bm{\mathbb{P}}}_{(I,D)\sim V(x)}\Bigl{[}D(\pi^{(t)}|_{I})=1\text{ and }I\cap I^{\circ}\neq\emptyset\Bigr{]}\\ &\geqslant\Gamma-\operatorname*{\bm{\mathbb{P}}}_{(I,D)\sim V(x)}\Bigl{[}I\cap I^{\circ}\neq\emptyset\Bigr{]}.\end{aligned}

Recall that (I,D)V(x)[iI]δ(n)κ2r(n)\operatorname*{\bm{\mathbb{P}}}_{(I,D)\sim V(x)}[i\in I]\leqslant\frac{\delta(n)^{-\kappa}}{2^{r(n)}} for any i[(n)]i\in[\ell(n)] by assumption. Since |I|q(n)|I^{\circ}|\leqslant q(n), taking a union bound, we have

(I,D)V(x)[II]iI(I,D)V(x)[iI]q(n)δ(n)κ2r(n),\displaystyle\operatorname*{\bm{\mathbb{P}}}_{(I,D)\sim V(x)}\Bigl{[}I\cap I^{\circ}\neq\emptyset\Bigr{]}\leqslant\sum_{i\in I^{\circ}}\operatorname*{\bm{\mathbb{P}}}_{(I,D)\sim V(x)}\Bigl{[}i\in I\Bigr{]}\leqslant\frac{q(n)\cdot\delta(n)^{-\kappa}}{2^{r(n)}}, (3.28)

implying that

[V(x) accepts π]Γq(n)δ(n)κ2r(n).\displaystyle\operatorname*{\bm{\mathbb{P}}}\Bigl{[}V(x)\text{ accepts }\pi^{\circ}\Bigr{]}\geqslant\Gamma-\frac{q(n)\cdot\delta(n)^{-\kappa}}{2^{r(n)}}. (3.29)

This completes the proof. ∎

3.3 Reducing Partial 2CSP Reconfiguration to Label Cover Reconfiguration

Subsequently, we show \PSPACE\PSPACE-hardness of approximation for Label Cover Reconfiguration by reducing from Partial 2CSP Reconfiguration, whose proof is similar to that of [KM23]. Note that Label Cover Reconfiguration admits a 22-factor approximation algorithm, similarly to Minmax Set Cover Reconfiguration (see Section 4.1).

Theorem 3.10.

For any function ε:\varepsilon\colon\bm{\mathbb{N}}\to\bm{\mathbb{R}} with ε(n)=Ω(1polylogn)\varepsilon(n)=\Omega\left(\frac{1}{\operatorname{polylog}n}\right), Gap1,2-ε(N) Label Cover Reconfiguration with alphabet size poly(ε(N)1)\operatorname{poly}(\varepsilon(N)^{-1}) is \PSPACE\PSPACE-complete, where NN is the number of vertices. In particular,

  • for any constant ε(0,1)\varepsilon\in(0,1), Gap1,2-ε Label Cover Reconfiguration with constant alphabet size is \PSPACE\PSPACE-complete, and

  • Gap1,21polyloglogN{}_{1,2-\frac{1}{\operatorname{polyloglog}N}} Label Cover Reconfiguration with alphabet size polyloglogN\operatorname{polyloglog}N is \PSPACE\PSPACE-complete.

Proof.

We present a gap-preserving reduction from Gap1,ε(N)2{}_{1,\frac{\varepsilon(N)}{2}} Partial 2CSP Reconfiguration to Gap1,2-ε(N) Label Cover Reconfiguration. Let (G=(𝒱,,Σ,Ψ),f𝗌𝗍𝖺𝗋𝗍,f𝗀𝗈𝖺𝗅)(G=(\mathcal{V},\mathcal{E},\Sigma,\Psi),f^{\mathsf{start}},f^{\mathsf{goal}}) be an instance of Partial 2CSP Reconfiguration with NN variables and alphabet size poly(ε(N)1)\operatorname{poly}(\varepsilon(N)^{-1}), where f𝗌𝗍𝖺𝗋𝗍=f𝗀𝗈𝖺𝗅=|𝒱|\|f^{\mathsf{start}}\|=\|f^{\mathsf{goal}}\|=|\mathcal{V}|. Without loss of generality, we can safely assume that NN is sufficiently large so that N4ε(N)N\geqslant\frac{4}{\varepsilon(N)} because ε(N)=Ω(1polylogN)\varepsilon(N)=\Omega\left(\frac{1}{\operatorname{polylog}N}\right). Construct then multi-assignments f𝗌𝗍𝖺𝗋𝗍,f𝗀𝗈𝖺𝗅:𝒱2Σf^{\prime\mathsf{start}},f^{\prime\mathsf{goal}}\colon\mathcal{V}\to 2^{\Sigma} such that f𝗌𝗍𝖺𝗋𝗍(v):-{f𝗌𝗍𝖺𝗋𝗍(v)}f^{\prime\mathsf{start}}(v)\coloneq\{f^{\mathsf{start}}(v)\} and f𝗀𝗈𝖺𝗅(v):-{f𝗀𝗈𝖺𝗅(v)}f^{\prime\mathsf{goal}}(v)\coloneq\{f^{\mathsf{goal}}(v)\} for all v𝒱v\in\mathcal{V}. Observe that f𝗌𝗍𝖺𝗋𝗍f^{\prime\mathsf{start}} and f𝗀𝗈𝖺𝗅f^{\prime\mathsf{goal}} satisfy GG; thus, (G,f𝗌𝗍𝖺𝗋𝗍,f𝗀𝗈𝖺𝗅)(G,f^{\prime\mathsf{start}},f^{\prime\mathsf{goal}}) is an instance of Label Cover Reconfiguration, completing the reduction.

We first show the perfect completeness; namely,

𝖬𝖺𝗑𝖯𝖺𝗋G(f𝗌𝗍𝖺𝗋𝗍f𝗀𝗈𝖺𝗅)1𝖬𝗂𝗇𝖫𝖺𝖻G(f𝗌𝗍𝖺𝗋𝗍f𝗀𝗈𝖺𝗅)1.\displaystyle\operatorname{\mathsf{MaxPar}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}})\geqslant 1\implies\operatorname{\mathsf{MinLab}}_{G}(f^{\prime\mathsf{start}}\leftrightsquigarrow f^{\prime\mathsf{goal}})\leqslant 1. (3.30)

Suppose there is a reconfiguration partial assignment sequence F=(f(1),,f(T))F=(f^{(1)},\ldots,f^{(T)}) from f𝗌𝗍𝖺𝗋𝗍f^{\mathsf{start}} to f𝗀𝗈𝖺𝗅f^{\mathsf{goal}} such that Fmin=|𝒱|\|F\|_{\min}=|\mathcal{V}|. Construct then a sequence F=(f(1),f(1.5),,f(T0.5),f(T))F^{\prime}=(f^{\prime(1)},f^{\prime(1.5)},\ldots,f^{\prime(T-0.5)},f^{\prime(T)}) of multi-assignments such that f(t)(v)={f(t)(v)}f^{\prime(t)}(v)=\{f^{(t)}(v)\} for all t[T]t\in[T] and v𝒱v\in\mathcal{V}, and f(t+0.5)f^{\prime(t+0.5)} for each t[T1]t\in[T-1] is defined as follows: Given that f(t)f^{(t)} and f(t+1)f^{(t+1)} differ only in vv^{\star}, we let

f(t+0.5)(v):-{{f(t)(v),f(t+1)(v)}if v=v,{f(t)(v)}otherwise, for all v𝒱.\displaystyle f^{\prime(t+0.5)}(v)\coloneq\begin{cases}\left\{f^{(t)}(v^{\star}),f^{(t+1)}(v^{\star})\right\}&\text{if }v=v^{\star},\\ \left\{f^{(t)}(v)\right\}&\text{otherwise},\end{cases}\text{ for all }v\in\mathcal{V}. (3.31)

In particular, it holds that f(1)=f𝗌𝗍𝖺𝗋𝗍f^{\prime(1)}=f^{\prime\mathsf{start}} and f(T)=f𝗀𝗈𝖺𝗅f^{\prime(T)}=f^{\prime\mathsf{goal}}. Observe that f(t+0.5)f^{\prime(t+0.5)} is a satisfying multi-assignment with f(t+0.5)=N+1\|f^{\prime(t+0.5)}\|=N+1 for all tt, and that v𝒱|f(t)(v)f(t+0.5)(v)|=1\sum_{v\in\mathcal{V}}|f^{\prime(t)}(v)\triangle f^{\prime(t+0.5)}(v)|=1; i.e., FF^{\prime} is a reconfiguration multi-assignment sequence from f𝗌𝗍𝖺𝗋𝗍f^{\prime\mathsf{start}} to f𝗀𝗈𝖺𝗅f^{\prime\mathsf{goal}} such that Fmax=N+1\|F^{\prime}\|_{\max}=N+1; hence, 𝖬𝗂𝗇𝖫𝖺𝖻G(f𝗌𝗍𝖺𝗋𝗍f𝗀𝗈𝖺𝗅)FmaxN+1=1\operatorname{\mathsf{MinLab}}_{G}(f^{\prime\mathsf{start}}\leftrightsquigarrow f^{\prime\mathsf{goal}})\leqslant\frac{\|F^{\prime}\|_{\max}}{N+1}=1.

We then prove the soundness; i.e.,

𝖬𝖺𝗑𝖯𝖺𝗋G(f𝗌𝗍𝖺𝗋𝗍f𝗀𝗈𝖺𝗅)<ε(N)2𝖬𝗂𝗇𝖫𝖺𝖻G(f𝗌𝗍𝖺𝗋𝗍f𝗀𝗈𝖺𝗅)>2ε(N).\displaystyle\operatorname{\mathsf{MaxPar}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}})<\frac{\varepsilon(N)}{2}\implies\operatorname{\mathsf{MinLab}}_{G}(f^{\prime\mathsf{start}}\leftrightsquigarrow f^{\prime\mathsf{goal}})>2-\varepsilon(N). (3.32)

Suppose we are given a reconfiguration multi-assignment sequence F=(f(1),,f(T))F^{\prime}=(f^{\prime(1)},\ldots,f^{\prime(T)}) from f𝗌𝗍𝖺𝗋𝗍f^{\prime\mathsf{start}} to f𝗀𝗈𝖺𝗅f^{\prime\mathsf{goal}} such that FmaxN+1=𝖬𝗂𝗇𝖫𝖺𝖻G(f𝗌𝗍𝖺𝗋𝗍f𝗀𝗈𝖺𝗅)\frac{\|F^{\prime}\|_{\max}}{N+1}=\operatorname{\mathsf{MinLab}}_{G}(f^{\prime\mathsf{start}}\leftrightsquigarrow f^{\prime\mathsf{goal}}). Define

𝒱1(t):-{v𝒱||f(t)(v)|=1}.\displaystyle\mathcal{V}_{1}^{(t)}\coloneq\Bigl{\{}v\in\mathcal{V}\Bigm{|}|f^{\prime(t)}(v)|=1\Bigr{\}}. (3.33)

Construct then a sequence F=(f(1),,f(T))F=(f^{(1)},\ldots,f^{(T)}) of partial assignments such that each f(t):𝒱Σ{}f^{(t)}\colon\mathcal{V}\to\Sigma\cup\{\bot\} is defined as follows:

f(t)(v):-{unique αf(t)(v)if v𝒱1(t),otherwise, for all v𝒱.\displaystyle f^{(t)}(v)\coloneq\begin{cases}\text{unique }\alpha\in f^{\prime(t)}(v)&\text{if }v\in\mathcal{V}_{1}^{(t)},\\ \bot&\text{otherwise},\end{cases}\text{ for all }v\in\mathcal{V}. (3.34)

In particular, it holds that f(1)=f𝗌𝗍𝖺𝗋𝗍f^{(1)}=f^{\mathsf{start}} and f(T)=f𝗀𝗈𝖺𝗅f^{(T)}=f^{\mathsf{goal}}. Observe that f(t)f^{(t)} is a satisfying partial assignment, and that f(t)f^{(t)} and f(t+1)f^{(t+1)} differ in at most one vertex; i.e., FF is a reconfiguration partial assignment sequence from f𝗌𝗍𝖺𝗋𝗍f^{\mathsf{start}} to f𝗀𝗈𝖺𝗅f^{\mathsf{goal}}. By assumption, there exists a partial assignment f(t)f^{(t)} in FF such that |𝒱1(t)|=f(t)<ε(N)2N|\mathcal{V}_{1}^{(t)}|=\|f^{(t)}\|<\frac{\varepsilon(N)}{2}\cdot N. Simple calculation derives that

f(t)1|𝒱1(t)|+2|𝒱𝒱1(t)|=2N|𝒱1(t)|>(2ε(N)2)NN4ε(N)(2ε(N))(N+1),\displaystyle\begin{aligned} \|f^{\prime(t)}\|&\geqslant 1\cdot|\mathcal{V}_{1}^{(t)}|+2\cdot|\mathcal{V}\setminus\mathcal{V}_{1}^{(t)}|\\ &=2N-|\mathcal{V}_{1}^{(t)}|\\ &>\left(2-\frac{\varepsilon(N)}{2}\right)\cdot N\\ &\underbrace{\geqslant}_{N\geqslant\frac{4}{\varepsilon(N)}}(2-\varepsilon(N))\cdot(N+1),\end{aligned} (3.35)

implying that 𝖬𝗂𝗇𝖫𝖺𝖻G(f𝗌𝗍𝖺𝗋𝗍f𝗀𝗈𝖺𝗅)=FmaxN+1f(t)N+1>2ε(N)\operatorname{\mathsf{MinLab}}_{G}(f^{\prime\mathsf{start}}\leftrightsquigarrow f^{\prime\mathsf{goal}})=\frac{\|F^{\prime}\|_{\max}}{N+1}\geqslant\frac{\|f^{\prime(t)}\|}{N+1}>2-\varepsilon(N), which completes the proof. ∎
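The half-step interpolation of Eq. 3.31 can be sketched as follows (hypothetical representation: assignments are dicts from vertices to labels, multi-assignments to label sets):

```python
def half_step(f_t, f_t1):
    """Multi-assignment f^{(t+0.5)} of Eq. (3.31): at the single vertex where
    f^{(t)} and f^{(t+1)} differ, hold both labels; elsewhere keep the
    singleton {f^{(t)}(v)}."""
    diff = [v for v in f_t if f_t[v] != f_t1[v]]
    assert len(diff) <= 1, "consecutive steps differ in at most one vertex"
    mid = {v: {f_t[v]} for v in f_t}
    for v in diff:
        mid[v] = {f_t[v], f_t1[v]}
    return mid
```

The total label count of the half-step is N+1N+1, which is exactly why the completeness argument normalizes by N+1N+1.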

4 Applications

In this section, we apply Theorem 3.10 to show optimal \PSPACE\PSPACE-hardness of approximation results for Minmax Set Cover Reconfiguration (Theorem 4.1) and Minmax Hypergraph Vertex Cover Reconfiguration (Theorem 4.3).

4.1 \PSPACE\PSPACE-hardness of Approximation for Set Cover Reconfiguration

We first prove that Minmax Set Cover Reconfiguration is \PSPACE\PSPACE-hard to approximate within a factor smaller than 22. Let 𝒰\mathcal{U} be a finite set called the universe and ={S1,,Sm}\mathcal{F}=\{S_{1},\ldots,S_{m}\} be a family of mm subsets of 𝒰\mathcal{U}. A cover for a set system (𝒰,)(\mathcal{U},\mathcal{F}) is a subfamily of \mathcal{F} whose union is equal to 𝒰\mathcal{U}. For any pair of covers 𝒞𝗌𝗍𝖺𝗋𝗍\mathcal{C}^{\mathsf{start}} and 𝒞𝗀𝗈𝖺𝗅\mathcal{C}^{\mathsf{goal}} for (𝒰,)(\mathcal{U},\mathcal{F}), a reconfiguration sequence from 𝒞𝗌𝗍𝖺𝗋𝗍\mathcal{C}^{\mathsf{start}} to 𝒞𝗀𝗈𝖺𝗅\mathcal{C}^{\mathsf{goal}} is a sequence 𝒞=(𝒞(1),,𝒞(T))\mathscr{C}=(\mathcal{C}^{(1)},\ldots,\mathcal{C}^{(T)}) of covers such that 𝒞(1)=𝒞𝗌𝗍𝖺𝗋𝗍\mathcal{C}^{(1)}=\mathcal{C}^{\mathsf{start}}, 𝒞(T)=𝒞𝗀𝗈𝖺𝗅\mathcal{C}^{(T)}=\mathcal{C}^{\mathsf{goal}}, and |𝒞(t)𝒞(t+1)|1|\mathcal{C}^{(t)}\triangle\mathcal{C}^{(t+1)}|\leqslant 1 (i.e., 𝒞(t+1)\mathcal{C}^{(t+1)} is obtained from 𝒞(t)\mathcal{C}^{(t)} by adding or removing a single set of \mathcal{F}) for all tt. In Set Cover Reconfiguration [IDHPSUU11], for a set system (𝒰,)(\mathcal{U},\mathcal{F}) and its two covers 𝒞𝗌𝗍𝖺𝗋𝗍\mathcal{C}^{\mathsf{start}} and 𝒞𝗀𝗈𝖺𝗅\mathcal{C}^{\mathsf{goal}} of size kk, we are asked to decide if there is a reconfiguration sequence from 𝒞𝗌𝗍𝖺𝗋𝗍\mathcal{C}^{\mathsf{start}} to 𝒞𝗀𝗈𝖺𝗅\mathcal{C}^{\mathsf{goal}} consisting only of covers of size at most k+1k+1. Next, we formulate its optimization version. Denote by 𝗈𝗉𝗍()\operatorname{\mathsf{opt}}(\mathcal{F}) the size of the minimum cover of (𝒰,)(\mathcal{U},\mathcal{F}). 
For any reconfiguration sequence 𝒞=(𝒞(1),,𝒞(T))\mathscr{C}=(\mathcal{C}^{(1)},\ldots,\mathcal{C}^{(T)}), its cost is defined as the maximum value of |𝒞(t)|𝗈𝗉𝗍()+1\frac{|\mathcal{C}^{(t)}|}{\operatorname{\mathsf{opt}}(\mathcal{F})+1} over all 𝒞(t)\mathcal{C}^{(t)}’s in 𝒞\mathscr{C}; namely,444 Here, division by 𝗈𝗉𝗍()+1\operatorname{\mathsf{opt}}(\mathcal{F})+1 reflects the fact that we must first add at least one set whenever |𝒞𝗌𝗍𝖺𝗋𝗍|=|𝒞𝗀𝗈𝖺𝗅|=𝗈𝗉𝗍()|\mathcal{C}^{\mathsf{start}}|=|\mathcal{C}^{\mathsf{goal}}|=\operatorname{\mathsf{opt}}(\mathcal{F}) and 𝒞𝗌𝗍𝖺𝗋𝗍𝒞𝗀𝗈𝖺𝗅\mathcal{C}^{\mathsf{start}}\neq\mathcal{C}^{\mathsf{goal}}.

𝖼𝗈𝗌𝗍(𝒞):-max𝒞(t)𝒞|𝒞(t)|𝗈𝗉𝗍()+1.\displaystyle\operatorname{\mathsf{cost}}_{\mathcal{F}}(\mathscr{C})\coloneq\max_{\mathcal{C}^{(t)}\in\mathscr{C}}\frac{|\mathcal{C}^{(t)}|}{\operatorname{\mathsf{opt}}(\mathcal{F})+1}. (4.1)

In Minmax Set Cover Reconfiguration, we wish to minimize 𝖼𝗈𝗌𝗍(𝒞)\operatorname{\mathsf{cost}}_{\mathcal{F}}(\mathscr{C}) subject to 𝒞=(𝒞𝗌𝗍𝖺𝗋𝗍,,𝒞𝗀𝗈𝖺𝗅)\mathscr{C}=(\mathcal{C}^{\mathsf{start}},\ldots,\mathcal{C}^{\mathsf{goal}}). For a pair of covers 𝒞𝗌𝗍𝖺𝗋𝗍\mathcal{C}^{\mathsf{start}} and 𝒞𝗀𝗈𝖺𝗅\mathcal{C}^{\mathsf{goal}} for (𝒰,)(\mathcal{U},\mathcal{F}), let 𝖼𝗈𝗌𝗍(𝒞𝗌𝗍𝖺𝗋𝗍𝒞𝗀𝗈𝖺𝗅)\operatorname{\mathsf{cost}}_{\mathcal{F}}(\mathcal{C}^{\mathsf{start}}\leftrightsquigarrow\mathcal{C}^{\mathsf{goal}}) denote the minimum value of 𝖼𝗈𝗌𝗍(𝒞)\operatorname{\mathsf{cost}}_{\mathcal{F}}(\mathscr{C}) over all possible reconfiguration sequences 𝒞\mathscr{C} from 𝒞𝗌𝗍𝖺𝗋𝗍\mathcal{C}^{\mathsf{start}} to 𝒞𝗀𝗈𝖺𝗅\mathcal{C}^{\mathsf{goal}}; namely,

𝖼𝗈𝗌𝗍(𝒞𝗌𝗍𝖺𝗋𝗍𝒞𝗀𝗈𝖺𝗅):-min𝒞=(𝒞𝗌𝗍𝖺𝗋𝗍,,𝒞𝗀𝗈𝖺𝗅)𝖼𝗈𝗌𝗍(𝒞).\displaystyle\operatorname{\mathsf{cost}}_{\mathcal{F}}(\mathcal{C}^{\mathsf{start}}\leftrightsquigarrow\mathcal{C}^{\mathsf{goal}})\coloneq\min_{\mathscr{C}=(\mathcal{C}^{\mathsf{start}},\ldots,\mathcal{C}^{\mathsf{goal}})}\operatorname{\mathsf{cost}}_{\mathcal{F}}(\mathscr{C}). (4.2)

For every 1cs1\leqslant c\leqslant s, Gapc,s Set Cover Reconfiguration asks us to distinguish whether 𝖼𝗈𝗌𝗍(𝒞𝗌𝗍𝖺𝗋𝗍𝒞𝗀𝗈𝖺𝗅)c\operatorname{\mathsf{cost}}_{\mathcal{F}}(\mathcal{C}^{\mathsf{start}}\leftrightsquigarrow\mathcal{C}^{\mathsf{goal}})\leqslant c or 𝖼𝗈𝗌𝗍(𝒞𝗌𝗍𝖺𝗋𝗍𝒞𝗀𝗈𝖺𝗅)>s\operatorname{\mathsf{cost}}_{\mathcal{F}}(\mathcal{C}^{\mathsf{start}}\leftrightsquigarrow\mathcal{C}^{\mathsf{goal}})>s.
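As a concrete sanity check, the objectives of Eqs. 4.1 and 4.2 can be evaluated directly on small instances. The sketch below is ours, not from the paper: it represents each set by its index in \mathcal{F} and each cover by a set of indices, validates a candidate reconfiguration sequence, and computes its cost.

```python
from itertools import chain

def is_cover(universe, family, cover):
    """A cover is a subfamily whose union equals the universe."""
    return set(chain.from_iterable(family[i] for i in cover)) == universe

def cost(universe, family, sequence, opt):
    """cost_F(C) of Eq. (4.1): max_t |C^(t)| / (opt(F) + 1),
    after checking that `sequence` is a valid reconfiguration sequence."""
    assert all(is_cover(universe, family, C) for C in sequence)
    # consecutive covers must differ by adding or removing a single set
    assert all(len(C ^ D) <= 1 for C, D in zip(sequence, sequence[1:]))
    return max(len(C) for C in sequence) / (opt + 1)

# Example: universe {1,2,3}, family S1={1,2}, S2={2,3}, S3={1,3}, opt(F) = 2.
family = [{1, 2}, {2, 3}, {1, 3}]
seq = [{0, 1}, {0, 1, 2}, {1, 2}]  # from {S1,S2} to {S2,S3} via one addition
print(cost({1, 2, 3}, family, seq, 2))  # 3 / (2 + 1) = 1.0
```

Minimizing this quantity over all valid sequences yields Eq. 4.2.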

For the sake of completeness, we present here the 22-factor approximation algorithm for Minmax Set Cover Reconfiguration due to [IDHPSUU11]:555 Similarly, a 22-factor approximation algorithm can be obtained for Minmax Dominating Set Reconfiguration and Minmax Hypergraph Vertex Cover Reconfiguration.

22-factor approximation for Minmax Set Cover Reconfiguration:

0: Start from 𝒞𝗌𝗍𝖺𝗋𝗍\mathcal{C}^{\mathsf{start}}.
1: Insert each set of 𝒞𝗀𝗈𝖺𝗅𝒞𝗌𝗍𝖺𝗋𝗍\mathcal{C}^{\mathsf{goal}}\setminus\mathcal{C}^{\mathsf{start}} into the current cover in any order.
2: Discard each set of 𝒞𝗌𝗍𝖺𝗋𝗍𝒞𝗀𝗈𝖺𝗅\mathcal{C}^{\mathsf{start}}\setminus\mathcal{C}^{\mathsf{goal}} from the current cover in any order.
3: End with 𝒞𝗀𝗈𝖺𝗅\mathcal{C}^{\mathsf{goal}}.
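Under an index-based encoding of covers (ours, not from [IDHPSUU11]), this two-phase procedure can be sketched as follows; every intermediate family contains 𝒞𝗌𝗍𝖺𝗋𝗍\mathcal{C}^{\mathsf{start}} or 𝒞𝗀𝗈𝖺𝗅\mathcal{C}^{\mathsf{goal}}, hence remains a cover, and its size never exceeds |𝒞𝗌𝗍𝖺𝗋𝗍𝒞𝗀𝗈𝖺𝗅|2k|\mathcal{C}^{\mathsf{start}}\cup\mathcal{C}^{\mathsf{goal}}|\leqslant 2k, which yields the factor of 22 against 𝗈𝗉𝗍()+1k+1\operatorname{\mathsf{opt}}(\mathcal{F})+1\geqslant k+1.

```python
def two_approx_sequence(C_start, C_goal):
    """2-factor approximation: insert every set of C_goal \\ C_start,
    then discard every set of C_start \\ C_goal, one set at a time."""
    seq, cur = [frozenset(C_start)], set(C_start)
    for s in sorted(C_goal - C_start):  # phase 1: insertions
        cur.add(s)
        seq.append(frozenset(cur))
    for s in sorted(C_start - C_goal):  # phase 2: deletions
        cur.discard(s)
        seq.append(frozenset(cur))
    return seq
```

For instance, `two_approx_sequence({0, 1}, {1, 2})` passes through the intermediate cover `{0, 1, 2}` of size 33.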

Our main result is stated below; its proof uses a gap-preserving reduction from Label Cover Reconfiguration to Minmax Set Cover Reconfiguration [Ohs24, LY94].

Theorem 4.1.

Gap1,21polyloglogN{}_{1,2-\frac{1}{\operatorname{polyloglog}N}} Set Cover Reconfiguration is \PSPACE\PSPACE-complete, where NN is the universe size. In particular, Minmax Set Cover Reconfiguration is \PSPACE\PSPACE-hard to approximate within a factor of 21polyloglogN2-\frac{1}{\operatorname{polyloglog}N}.

Theorem 4.1 along with [Ohs24] implies that Minmax Dominating Set Reconfiguration is \PSPACE\PSPACE-hard to approximate within a factor of 21polyloglogN2-\frac{1}{\operatorname{polyloglog}N}, where NN is the number of vertices (see Corollary 1.2).

Proof of Theorem 4.1.

The reduction from Label Cover Reconfiguration to Minmax Set Cover Reconfiguration is almost the same as that due to [Ohs24, LY94]. Let (G=(𝒱,,Σ,Ψ),f𝗌𝗍𝖺𝗋𝗍,f𝗀𝗈𝖺𝗅)(G=(\mathcal{V},\mathcal{E},\Sigma,\Psi),f^{\mathsf{start}},f^{\mathsf{goal}}) be an instance of Label Cover Reconfiguration with NN vertices and alphabet size |Σ|=polyloglogN|\Sigma|=\operatorname{polyloglog}N, where f𝗌𝗍𝖺𝗋𝗍=f𝗀𝗈𝖺𝗅=|𝒱|\|f^{\mathsf{start}}\|=\|f^{\mathsf{goal}}\|=|\mathcal{V}|. Define B:-{0,1}ΣB\coloneq\{0,1\}^{\Sigma}. For each αΣ\alpha\in\Sigma and SΣS\subseteq\Sigma, we construct Qα¯B\overline{Q_{\alpha}}\subset B and QSBQ_{S}\subset B according to [Ohs24] in 2𝒪(|Σ|)2^{\operatorname{\mathcal{O}}(|\Sigma|)} time. Let \prec be an arbitrary order over 𝒱\mathcal{V}. Create an instance of Minmax Set Cover Reconfiguration as follows. For each vertex v𝒱v\in\mathcal{V} and each value αΣ\alpha\in\Sigma, we define Sv,α×BS_{v,\alpha}\subset\mathcal{E}\times B as

Sv,α:-(e=(v,w):vw{e}×Qα¯)(e=(v,w):vw{e}×Qπe(α)),\displaystyle S_{v,\alpha}\coloneq\left(\bigcup_{e=(v,w)\in\mathcal{E}:v\prec w}\{e\}\times\overline{Q_{\alpha}}\right)\cup\left(\bigcup_{e=(v,w)\in\mathcal{E}:v\succ w}\{e\}\times Q_{\pi_{e}(\alpha)}\right), (4.3)

where πe(α):-{βΣψe(α,β)=1}\pi_{e}(\alpha)\coloneq\{\beta\in\Sigma\mid\psi_{e}(\alpha,\beta)=1\}. Then, a set system (𝒰,)(\mathcal{U},\mathcal{F}) is defined as

𝒰:-×B and :-{Sv,α|v𝒱,αΣ}.\displaystyle\mathcal{U}\coloneq\mathcal{E}\times B\text{ and }\mathcal{F}\coloneq\Bigl{\{}S_{v,\alpha}\Bigm{|}v\in\mathcal{V},\alpha\in\Sigma\Bigr{\}}. (4.4)

For a satisfying multi-assignment f:𝒱2Σf\colon\mathcal{V}\to 2^{\Sigma} for GG with f=|𝒱|\|f\|=|\mathcal{V}|,666 In other words, each f(v)f(v) is a singleton. we associate it with a subfamily 𝒞f\mathcal{C}_{f}\subset\mathcal{F} such that

𝒞f:-{Sv,α|v𝒱,αf(v)},\displaystyle\mathcal{C}_{f}\coloneq\Bigl{\{}S_{v,\alpha}\Bigm{|}v\in\mathcal{V},\alpha\in f(v)\Bigr{\}}, (4.5)

which is a minimum cover for (𝒰,)(\mathcal{U},\mathcal{F}) [Ohs24]; i.e., |𝒞f|=|𝒱|=𝗈𝗉𝗍()|\mathcal{C}_{f}|=|\mathcal{V}|=\operatorname{\mathsf{opt}}(\mathcal{F}). Constructing minimum covers 𝒞𝗌𝗍𝖺𝗋𝗍\mathcal{C}^{\mathsf{start}} from f𝗌𝗍𝖺𝗋𝗍f^{\mathsf{start}} and 𝒞𝗀𝗈𝖺𝗅\mathcal{C}^{\mathsf{goal}} from f𝗀𝗈𝖺𝗅f^{\mathsf{goal}} by Eq. 4.5, we obtain an instance ((𝒰,),𝒞𝗌𝗍𝖺𝗋𝗍,𝒞𝗀𝗈𝖺𝗅)((\mathcal{U},\mathcal{F}),\mathcal{C}^{\mathsf{start}},\mathcal{C}^{\mathsf{goal}}) of Minmax Set Cover Reconfiguration. This completes the description of the reduction.

Here, we will show that

𝖬𝗂𝗇𝖫𝖺𝖻G(f𝗌𝗍𝖺𝗋𝗍f𝗀𝗈𝖺𝗅)=𝖼𝗈𝗌𝗍(𝒞𝗌𝗍𝖺𝗋𝗍𝒞𝗀𝗈𝖺𝗅),\displaystyle\operatorname{\mathsf{MinLab}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}})=\operatorname{\mathsf{cost}}_{\mathcal{F}}(\mathcal{C}^{\mathsf{start}}\leftrightsquigarrow\mathcal{C}^{\mathsf{goal}}), (4.6)

which implies the completeness and soundness; for this, we use the following lemma [Ohs24].

Lemma 4.2 ([Ohs24, Observation 4.4; Claim 4.7]).

Let f:𝒱2Σf\colon\mathcal{V}\to 2^{\Sigma} be a multi-assignment and 𝒞\mathcal{C}\subseteq\mathcal{F} be a subfamily such that for any v𝒱v\in\mathcal{V} and αΣ\alpha\in\Sigma, αf(v)\alpha\in f(v) if and only if Sv,α𝒞S_{v,\alpha}\in\mathcal{C}. Then, ff satisfies an edge e=(v,w)e=(v,w)\in\mathcal{E} if and only if 𝒞\mathcal{C} covers {e}×B\{e\}\times B. In particular, ff satisfies GG if and only if 𝒞\mathcal{C} covers ×B\mathcal{E}\times B. Moreover, it holds that f=|𝒞|\|f\|=|\mathcal{C}|.

We first show that 𝖬𝗂𝗇𝖫𝖺𝖻G(f𝗌𝗍𝖺𝗋𝗍f𝗀𝗈𝖺𝗅)𝖼𝗈𝗌𝗍(𝒞𝗌𝗍𝖺𝗋𝗍𝒞𝗀𝗈𝖺𝗅)\operatorname{\mathsf{MinLab}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}})\geqslant\operatorname{\mathsf{cost}}_{\mathcal{F}}(\mathcal{C}^{\mathsf{start}}\leftrightsquigarrow\mathcal{C}^{\mathsf{goal}}). For any reconfiguration multi-assignment sequence F=(f(1),,f(T))F=(f^{(1)},\ldots,f^{(T)}) from f𝗌𝗍𝖺𝗋𝗍f^{\mathsf{start}} to f𝗀𝗈𝖺𝗅f^{\mathsf{goal}} such that Fmax=𝖬𝗂𝗇𝖫𝖺𝖻G(f𝗌𝗍𝖺𝗋𝗍f𝗀𝗈𝖺𝗅)\|F\|_{\max}=\operatorname{\mathsf{MinLab}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}}), we can construct a reconfiguration sequence 𝒞=(𝒞f(1),,𝒞f(T))\mathscr{C}=(\mathcal{C}_{f^{(1)}},\ldots,\mathcal{C}_{f^{(T)}}) from 𝒞𝗌𝗍𝖺𝗋𝗍\mathcal{C}^{\mathsf{start}} to 𝒞𝗀𝗈𝖺𝗅\mathcal{C}^{\mathsf{goal}} by Eq. 4.5. By Lemma 4.2, each 𝒞f(t)\mathcal{C}_{f^{(t)}} covers 𝒰\mathcal{U}; thus, 𝒞\mathscr{C} is a valid reconfiguration sequence from 𝒞𝗌𝗍𝖺𝗋𝗍\mathcal{C}^{\mathsf{start}} to 𝒞𝗀𝗈𝖺𝗅\mathcal{C}^{\mathsf{goal}}. Moreover, 𝖼𝗈𝗌𝗍(𝒞𝗌𝗍𝖺𝗋𝗍𝒞𝗀𝗈𝖺𝗅)𝖼𝗈𝗌𝗍(𝒞)=Fmax=𝖬𝗂𝗇𝖫𝖺𝖻G(f𝗌𝗍𝖺𝗋𝗍f𝗀𝗈𝖺𝗅)\operatorname{\mathsf{cost}}_{\mathcal{F}}(\mathcal{C}^{\mathsf{start}}\leftrightsquigarrow\mathcal{C}^{\mathsf{goal}})\leqslant\operatorname{\mathsf{cost}}_{\mathcal{F}}(\mathscr{C})=\|F\|_{\max}=\operatorname{\mathsf{MinLab}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}}), as desired. We then show that 𝖬𝗂𝗇𝖫𝖺𝖻G(f𝗌𝗍𝖺𝗋𝗍f𝗀𝗈𝖺𝗅)𝖼𝗈𝗌𝗍(𝒞𝗌𝗍𝖺𝗋𝗍𝒞𝗀𝗈𝖺𝗅)\operatorname{\mathsf{MinLab}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}})\leqslant\operatorname{\mathsf{cost}}_{\mathcal{F}}(\mathcal{C}^{\mathsf{start}}\leftrightsquigarrow\mathcal{C}^{\mathsf{goal}}). 
For any reconfiguration sequence 𝒞=(𝒞(1),,𝒞(T))\mathscr{C}=(\mathcal{C}^{(1)},\ldots,\mathcal{C}^{(T)}) from 𝒞𝗌𝗍𝖺𝗋𝗍\mathcal{C}^{\mathsf{start}} to 𝒞𝗀𝗈𝖺𝗅\mathcal{C}^{\mathsf{goal}} such that 𝖼𝗈𝗌𝗍(𝒞)=𝖼𝗈𝗌𝗍(𝒞𝗌𝗍𝖺𝗋𝗍𝒞𝗀𝗈𝖺𝗅)\operatorname{\mathsf{cost}}_{\mathcal{F}}(\mathscr{C})=\operatorname{\mathsf{cost}}_{\mathcal{F}}(\mathcal{C}^{\mathsf{start}}\leftrightsquigarrow\mathcal{C}^{\mathsf{goal}}), we can construct a sequence F=(f(1),,f(T))F=(f^{(1)},\ldots,f^{(T)}) of multi-assignments, where each f(t):𝒱2Σf^{(t)}\colon\mathcal{V}\to 2^{\Sigma} is defined as follows:

f(t)(v):-{αΣ|Sv,α𝒞(t)} for all v𝒱.\displaystyle f^{(t)}(v)\coloneq\Bigl{\{}\alpha\in\Sigma\Bigm{|}S_{v,\alpha}\in\mathcal{C}^{(t)}\Bigr{\}}\text{ for all }v\in\mathcal{V}. (4.7)

By Lemma 4.2, each f(t)f^{(t)} satisfies GG; thus, FF is a valid reconfiguration multi-assignment sequence from f𝗌𝗍𝖺𝗋𝗍f^{\mathsf{start}} to f𝗀𝗈𝖺𝗅f^{\mathsf{goal}}. Moreover, 𝖬𝗂𝗇𝖫𝖺𝖻G(f𝗌𝗍𝖺𝗋𝗍f𝗀𝗈𝖺𝗅)Fmax=𝖼𝗈𝗌𝗍(𝒞)=𝖼𝗈𝗌𝗍(𝒞𝗌𝗍𝖺𝗋𝗍𝒞𝗀𝗈𝖺𝗅)\operatorname{\mathsf{MinLab}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}})\leqslant\|F\|_{\max}=\operatorname{\mathsf{cost}}_{\mathcal{F}}(\mathscr{C})=\operatorname{\mathsf{cost}}_{\mathcal{F}}(\mathcal{C}^{\mathsf{start}}\leftrightsquigarrow\mathcal{C}^{\mathsf{goal}}), which completes the proof of Eq. 4.6.
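The back direction of this correspondence, Eq. 4.7, is mechanical: a cover, viewed as a set of index pairs (v, α) naming the sets S_{v,α}, is flattened into a multi-assignment. A minimal sketch (encoding ours):

```python
def cover_to_assignment(cover, vertices):
    """Eq. (4.7): f(v) = { alpha : S_{v,alpha} in C }, where `cover` is a
    set of pairs (v, alpha).  Note ||f|| = sum_v |f(v)| equals |cover|."""
    f = {v: set() for v in vertices}
    for v, alpha in cover:
        f[v].add(alpha)
    return f
```

The equality of norms in the last comment is what makes the cost of 𝒞\mathscr{C} and the max norm of FF coincide.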

Since |Σ|=polyloglogN|\Sigma|=\operatorname{polyloglog}N, the reduction takes polynomial time in NN, and it holds that |𝒰|=|×B|=𝒪(N22polyloglogN)=𝒪(N3)|\mathcal{U}|=|\mathcal{E}\times B|=\operatorname{\mathcal{O}}(N^{2}\cdot 2^{\operatorname{polyloglog}N})=\operatorname{\mathcal{O}}(N^{3}); i.e., N=Ω(|𝒰|13)N=\Omega(|\mathcal{U}|^{\frac{1}{3}}). By Theorem 3.10, Gap1,21polyloglogN{}_{1,2-\frac{1}{\operatorname{polyloglog}N}} Label Cover Reconfiguration with alphabet size polyloglogN\operatorname{polyloglog}N is \PSPACE\PSPACE-complete; thus, Gap1,21polyloglog|𝒰|{}_{1,2-\frac{1}{\operatorname{polyloglog}|\mathcal{U}|}} Set Cover Reconfiguration is \PSPACE\PSPACE-complete as well, accomplishing the proof. ∎

4.2 \PSPACE\PSPACE-hardness of Approximation for Hypergraph Vertex Cover Reconfiguration

We conclude this section with a similar inapproximability result for Minmax Hypergraph Vertex Cover Reconfiguration on 𝒪(1)\operatorname{\mathcal{O}}(1)-uniform hypergraphs. A vertex cover of a hypergraph H=(𝒱,)H=(\mathcal{V},\mathcal{E}) is a vertex set 𝒞𝒱\mathcal{C}\subseteq\mathcal{V} that intersects every hyperedge in \mathcal{E}; i.e., e𝒞e\cap\mathcal{C}\neq\emptyset for every ee\in\mathcal{E}. For any pair of vertex covers 𝒞𝗌𝗍𝖺𝗋𝗍\mathcal{C}^{\mathsf{start}} and 𝒞𝗀𝗈𝖺𝗅\mathcal{C}^{\mathsf{goal}} of HH, a reconfiguration sequence from 𝒞𝗌𝗍𝖺𝗋𝗍\mathcal{C}^{\mathsf{start}} to 𝒞𝗀𝗈𝖺𝗅\mathcal{C}^{\mathsf{goal}} is a sequence 𝒞=(𝒞(1),,𝒞(T))\mathscr{C}=(\mathcal{C}^{(1)},\ldots,\mathcal{C}^{(T)}) of vertex covers such that 𝒞(1)=𝒞𝗌𝗍𝖺𝗋𝗍\mathcal{C}^{(1)}=\mathcal{C}^{\mathsf{start}}, 𝒞(T)=𝒞𝗀𝗈𝖺𝗅\mathcal{C}^{(T)}=\mathcal{C}^{\mathsf{goal}}, and |𝒞(t)𝒞(t+1)|1|\mathcal{C}^{(t)}\triangle\mathcal{C}^{(t+1)}|\leqslant 1 for all tt. Denote by β(H)\beta(H) the size of the minimum vertex cover of HH. For a reconfiguration sequence 𝒞=(𝒞(1),,𝒞(T))\mathscr{C}=(\mathcal{C}^{(1)},\ldots,\mathcal{C}^{(T)}), its cost is defined as the maximum value of |𝒞(t)|β(H)+1\frac{|\mathcal{C}^{(t)}|}{\beta(H)+1} over all 𝒞(t)\mathcal{C}^{(t)}’s in 𝒞\mathscr{C}; namely,

𝖼𝗈𝗌𝗍H(𝒞):-max𝒞(t)𝒞|𝒞(t)|β(H)+1.\displaystyle\operatorname{\mathsf{cost}}_{H}(\mathscr{C})\coloneq\max_{\mathcal{C}^{(t)}\in\mathscr{C}}\frac{|\mathcal{C}^{(t)}|}{\beta(H)+1}. (4.8)

In the Minmax Hypergraph Vertex Cover Reconfiguration problem, for a hypergraph HH and its two vertex covers 𝒞𝗌𝗍𝖺𝗋𝗍\mathcal{C}^{\mathsf{start}} and 𝒞𝗀𝗈𝖺𝗅\mathcal{C}^{\mathsf{goal}}, we wish to minimize 𝖼𝗈𝗌𝗍H(𝒞)\operatorname{\mathsf{cost}}_{H}(\mathscr{C}) subject to 𝒞=(𝒞𝗌𝗍𝖺𝗋𝗍,,𝒞𝗀𝗈𝖺𝗅)\mathscr{C}=(\mathcal{C}^{\mathsf{start}},\ldots,\mathcal{C}^{\mathsf{goal}}). Let 𝖼𝗈𝗌𝗍H(𝒞𝗌𝗍𝖺𝗋𝗍𝒞𝗀𝗈𝖺𝗅)\operatorname{\mathsf{cost}}_{H}(\mathcal{C}^{\mathsf{start}}\leftrightsquigarrow\mathcal{C}^{\mathsf{goal}}) denote the minimum value of 𝖼𝗈𝗌𝗍H(𝒞)\operatorname{\mathsf{cost}}_{H}(\mathscr{C}) over all possible reconfiguration sequences 𝒞\mathscr{C} from 𝒞𝗌𝗍𝖺𝗋𝗍\mathcal{C}^{\mathsf{start}} to 𝒞𝗀𝗈𝖺𝗅\mathcal{C}^{\mathsf{goal}}; namely,

𝖼𝗈𝗌𝗍H(𝒞𝗌𝗍𝖺𝗋𝗍𝒞𝗀𝗈𝖺𝗅):-min𝒞=(𝒞𝗌𝗍𝖺𝗋𝗍,,𝒞𝗀𝗈𝖺𝗅)𝖼𝗈𝗌𝗍H(𝒞).\displaystyle\operatorname{\mathsf{cost}}_{H}(\mathcal{C}^{\mathsf{start}}\leftrightsquigarrow\mathcal{C}^{\mathsf{goal}})\coloneq\min_{\mathscr{C}=(\mathcal{C}^{\mathsf{start}},\ldots,\mathcal{C}^{\mathsf{goal}})}\operatorname{\mathsf{cost}}_{H}(\mathscr{C}). (4.9)

For every 1cs1\leqslant c\leqslant s, Gapc,s Hypergraph Vertex Cover Reconfiguration asks us to distinguish whether 𝖼𝗈𝗌𝗍H(𝒞𝗌𝗍𝖺𝗋𝗍𝒞𝗀𝗈𝖺𝗅)c\operatorname{\mathsf{cost}}_{H}(\mathcal{C}^{\mathsf{start}}\leftrightsquigarrow\mathcal{C}^{\mathsf{goal}})\leqslant c or 𝖼𝗈𝗌𝗍H(𝒞𝗌𝗍𝖺𝗋𝗍𝒞𝗀𝗈𝖺𝗅)>s\operatorname{\mathsf{cost}}_{H}(\mathcal{C}^{\mathsf{start}}\leftrightsquigarrow\mathcal{C}^{\mathsf{goal}})>s. Our inapproximability result is stated below; its proof reuses the reduction of Theorem 4.1.

Theorem 4.3.

For any constant ε(0,1)\varepsilon\in(0,1), Gap1,2-ε Hypergraph Vertex Cover Reconfiguration on poly(ε1)\operatorname{poly}(\varepsilon^{-1})-uniform hypergraphs is \PSPACE\PSPACE-complete. In particular, Minmax Hypergraph Vertex Cover Reconfiguration on poly(ε1)\operatorname{poly}(\varepsilon^{-1})-uniform hypergraphs is \PSPACE\PSPACE-hard to approximate within a factor of 2ε2-\varepsilon.

Proof.

We build an “inverted index” of Gap1,2-ε Set Cover Reconfiguration of Theorem 4.1. Let (G=(𝒱,,Σ,Ψ),f𝗌𝗍𝖺𝗋𝗍,f𝗀𝗈𝖺𝗅)(G=(\mathcal{V},\mathcal{E},\Sigma,\Psi),f^{\mathsf{start}},f^{\mathsf{goal}}) be an instance of Label Cover Reconfiguration with NN vertices and alphabet size |Σ|=poly(ε1)|\Sigma|=\operatorname{poly}(\varepsilon^{-1}), where f𝗌𝗍𝖺𝗋𝗍=f𝗀𝗈𝖺𝗅=|𝒱|\|f^{\mathsf{start}}\|=\|f^{\mathsf{goal}}\|=|\mathcal{V}|. Define B:-{0,1}ΣB\coloneq\{0,1\}^{\Sigma} and Sv,αS_{v,\alpha}’s by Eq. 4.3. Then construct a hypergraph H=(𝒲,)H=(\mathcal{W},\mathcal{F}) as follows:

𝒲\displaystyle\mathcal{W} :-𝒱×Σ,\displaystyle\coloneq\mathcal{V}\times\Sigma, (4.10)
\displaystyle\mathcal{F} :-{Te,𝐪𝒲|(e,𝐪)×B},where\displaystyle\coloneq\Bigl{\{}T_{e,\mathbf{\bm{q}}}\subset\mathcal{W}\Bigm{|}(e,\mathbf{\bm{q}})\in\mathcal{E}\times B\Bigr{\}},\quad\text{where} (4.11)
Te,𝐪\displaystyle T_{e,\mathbf{\bm{q}}} :-{(v,α)𝒲|(e,𝐪)Sv,α}for all (e,𝐪)×B.\displaystyle\coloneq\Bigl{\{}(v,\alpha)\in\mathcal{W}\Bigm{|}(e,\mathbf{\bm{q}})\in S_{v,\alpha}\Bigr{\}}\quad\text{for all }(e,\mathbf{\bm{q}})\in\mathcal{E}\times B. (4.12)

Note that the size of hyperedge Te,𝐪T_{e,\mathbf{\bm{q}}} is

|Te,𝐪|=|{(v,α)𝒲|(e,𝐪)Sv,α}||{(v,α)𝒱×Σ|ve}|2|Σ|=poly(ε1).\displaystyle\begin{aligned} |T_{e,\mathbf{\bm{q}}}|&=\left|\Bigl{\{}(v,\alpha)\in\mathcal{W}\Bigm{|}(e,\mathbf{\bm{q}})\in S_{v,\alpha}\Bigr{\}}\right|\\ &\leqslant\left|\Bigl{\{}(v,\alpha)\in\mathcal{V}\times\Sigma\Bigm{|}v\in e\Bigr{\}}\right|\\ &\leqslant 2|\Sigma|=\operatorname{poly}(\varepsilon^{-1}).\end{aligned} (4.13)

To make HH 2|Σ|2|\Sigma|-uniform, we simply augment each hyperedge Te,𝐪T_{e,\mathbf{\bm{q}}} with 2|Σ||Te,𝐪|2|\Sigma|-|T_{e,\mathbf{\bm{q}}}| fresh vertices. For a satisfying multi-assignment f:𝒱2Σf\colon\mathcal{V}\to 2^{\Sigma} for GG with f=|𝒱|\|f\|=|\mathcal{V}|, we associate with it a vertex set 𝒞f𝒲\mathcal{C}_{f}\subset\mathcal{W} such that

𝒞f:-{(v,α)𝒱×Σ|v𝒱,αf(v)},\displaystyle\mathcal{C}_{f}\coloneq\Bigl{\{}(v,\alpha)\in\mathcal{V}\times\Sigma\Bigm{|}v\in\mathcal{V},\alpha\in f(v)\Bigr{\}}, (4.14)

which is a minimum vertex cover of HH (i.e., |𝒞f|=|𝒱|=β(H)|\mathcal{C}_{f}|=|\mathcal{V}|=\beta(H)), shown similarly to the proof of Theorem 4.1. Constructing minimum vertex covers 𝒞𝗌𝗍𝖺𝗋𝗍\mathcal{C}^{\mathsf{start}} from f𝗌𝗍𝖺𝗋𝗍f^{\mathsf{start}} and 𝒞𝗀𝗈𝖺𝗅\mathcal{C}^{\mathsf{goal}} from f𝗀𝗈𝖺𝗅f^{\mathsf{goal}} by Eq. 4.14, we obtain an instance (H,𝒞𝗌𝗍𝖺𝗋𝗍,𝒞𝗀𝗈𝖺𝗅)(H,\mathcal{C}^{\mathsf{start}},\mathcal{C}^{\mathsf{goal}}) of Minmax Hypergraph Vertex Cover Reconfiguration on a poly(ε1)\operatorname{poly}(\varepsilon^{-1})-uniform hypergraph. This completes the description of the reduction.
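The “inverted index” of Eq. 4.12 is literally an index inversion: each hyperedge T_{e,q} collects the pairs (v, α) whose set S_{v,α} contains the element (e, q), so a vertex set hits T_{e,q} exactly when the corresponding subfamily covers (e, q). A minimal sketch (data layout ours):

```python
def invert_sets(S):
    """Given the sets S_{v,alpha} of the Theorem 4.1 reduction as a dict
    mapping (v, alpha) -> set of elements (e, q), build the hyperedges
    T_{e,q} = { (v, alpha) : (e, q) in S_{v,alpha} } of Eq. (4.12)."""
    T = {}
    for key, elements in S.items():
        for eq in elements:
            T.setdefault(eq, set()).add(key)
    return T
```

This inversion is what transports covers of (𝒰\mathcal{U}, \mathcal{F}) to vertex covers of HH and back.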

Similarly to the proof of Theorem 4.1, we can show that

𝖬𝗂𝗇𝖫𝖺𝖻G(f𝗌𝗍𝖺𝗋𝗍f𝗀𝗈𝖺𝗅)=𝖼𝗈𝗌𝗍H(𝒞𝗌𝗍𝖺𝗋𝗍𝒞𝗀𝗈𝖺𝗅),\displaystyle\operatorname{\mathsf{MinLab}}_{G}(f^{\mathsf{start}}\leftrightsquigarrow f^{\mathsf{goal}})=\operatorname{\mathsf{cost}}_{H}(\mathcal{C}^{\mathsf{start}}\leftrightsquigarrow\mathcal{C}^{\mathsf{goal}}), (4.15)

implying the completeness and soundness. By Theorem 3.10, Gap1,2-ε Label Cover Reconfiguration with alphabet size poly(ε1)\operatorname{poly}(\varepsilon^{-1}) is \PSPACE\PSPACE-complete; thus, Gap1,2-ε Hypergraph Vertex Cover Reconfiguration on poly(ε1)\operatorname{poly}(\varepsilon^{-1})-uniform hypergraphs is \PSPACE\PSPACE-complete as well, which completes the proof. ∎

References

  • [AFWZ95] Noga Alon, Uriel Feige, Avi Wigderson and David Zuckerman “Derandomized graph products” In Comput. Complex. 5, 1995, pp. 60–75
  • [ALMSS98] Sanjeev Arora et al. “Proof Verification and the Hardness of Approximation Problems” In J. ACM 45.3, 1998, pp. 501–555
  • [Alo21] Noga Alon “Explicit Expanders of Every Degree and Size” In Comb. 41.4, 2021, pp. 447–463
  • [AS98] Sanjeev Arora and Shmuel Safra “Probabilistic Checking of Proofs: A New Characterization of NP” In J. ACM 45.1, 1998, pp. 70–122
  • [BHIKMMSW20] Marthe Bonamy et al. “Shortest Reconfiguration of Colorings Under Kempe Changes” In STACS, 2020, pp. 35:1–35:14
  • [Bon13] Paul Bonsma “The Complexity of Rerouting Shortest Paths” In Theor. Comput. Sci. 510, 2013, pp. 1–12
  • [CHK11] Moses Charikar, MohammadTaghi Hajiaghayi and Howard J. Karloff “Improved Approximation Algorithms for Label Cover Problems” In Algorithmica 61.1, 2011, pp. 190–206
  • [CvJ11] Luis Cereceda, Jan van den Heuvel and Matthew Johnson “Finding paths between 3-colorings” In J. Graph Theory 67.1, 2011, pp. 69–82
  • [Din07] Irit Dinur “The PCP Theorem by Gap Amplification” In J. ACM 54.3, 2007, pp. 12
  • [DS14] Irit Dinur and David Steurer “Analytical approach to parallel repetition” In STOC, 2014, pp. 624–633
  • [Fei98] Uriel Feige “A Threshold of lnn\ln n for Approximating Set Cover” In J. ACM 45.4, 1998, pp. 634–652
  • [FGLSS96] Uriel Feige et al. “Interactive Proofs and the Hardness of Approximating Cliques” In J. ACM 43.2, 1996, pp. 268–292
  • [GKMP09] Parikshit Gopalan, Phokion G. Kolaitis, Elitza Maneva and Christos H. Papadimitriou “The Connectivity of Boolean Satisfiability: Computational and Structural Dichotomies” In SIAM J. Comput. 38.6, 2009, pp. 2330–2355
  • [Hås01] Johan Håstad “Some optimal inapproximability results” In J. ACM 48.4, 2001, pp. 798–859
  • [Hås99] Johan Håstad “Clique is hard to approximate within n1εn^{1-\varepsilon}” In Acta Math. 182, 1999, pp. 105–142
  • [HD05] Robert A. Hearn and Erik D. Demaine “PSPACE-Completeness of Sliding-Block Puzzles and Other Problems through the Nondeterministic Constraint Logic Model of Computation” In Theor. Comput. Sci. 343.1-2, 2005, pp. 72–96
  • [HD09] Robert A. Hearn and Erik D. Demaine “Games, Puzzles, and Computation” A K Peters, Ltd., 2009
  • [HLW06] Shlomo Hoory, Nathan Linial and Avi Wigderson “Expander graphs and their applications” In Bull. Am. Math. Soc. 43.4, 2006, pp. 439–561
  • [HO24] Shuichi Hirahara and Naoto Ohsaka “Probabilistically Checkable Reconfiguration Proofs and Inapproximability of Reconfiguration Problems” In CoRR abs/2401.00474, 2024
  • [Hoa23] Duc A. Hoang “Combinatorial Reconfiguration”, https://reconf.wikidot.com/, 2023
  • [ID14] Takehiro Ito and Erik D. Demaine “Approximability of the subset sum reconfiguration problem” In J. Comb. Optim. 28.3, 2014, pp. 639–654
  • [IDHPSUU11] Takehiro Ito et al. “On the Complexity of Reconfiguration Problems” In Theor. Comput. Sci. 412.12-14, 2011, pp. 1054–1065
  • [IKKKO22] Takehiro Ito et al. “Shortest Reconfiguration of Perfect Matchings via Alternating Cycles” In SIAM J. Discret. Math. 36.2, 2022, pp. 1102–1123
  • [KM23] Karthik C. S. and Pasin Manurangsi “On Inapproximability of Reconfiguration Problems: PSPACE-Hardness and some Tight NP-Hardness Results” In CoRR abs/2312.17140, 2023
  • [KMM11] Marcin Kamiński, Paul Medvedev and Martin Milanič “Shortest Paths Between Shortest Paths” In Theor. Comput. Sci. 412.39, 2011, pp. 5205–5210
  • [LY94] Carsten Lund and Mihalis Yannakakis “On the Hardness of Approximating Minimization Problems” In J. ACM 41.5, 1994, pp. 960–981
  • [MNPR17] Amer E. Mouawad, Naomi Nishimura, Vinayak Pathak and Venkatesh Raman “Shortest Reconfiguration Paths in the Solution Space of Boolean Formulas” In SIAM J. Discret. Math. 31.3, 2017, pp. 2185–2200
  • [MOP21] Sidhanth Mohanty, Ryan O’Donnell and Pedro Paredes “Explicit Near-Ramanujan Graphs of Every Degree” In SIAM J. Comput. 51.3, 2021, pp. STOC20-1-STOC20–23
  • [Nis18] Naomi Nishimura “Introduction to Reconfiguration” In Algorithms 11.4, 2018, pp. 52
  • [Ohs23] Naoto Ohsaka “Gap Preserving Reductions Between Reconfiguration Problems” In STACS, 2023, pp. 49:1–49:18
  • [Ohs24] Naoto Ohsaka “Gap Amplification for Reconfiguration Problems” In SODA, 2024, pp. 1345–1366
  • [OM22] Naoto Ohsaka and Tatsuya Matsuoka “Reconfiguration Problems on Submodular Functions” In WSDM, 2022, pp. 764–774
  • [Par21] Orr Paradise “Smooth and Strong PCPs” In Comput. Complex. 30.1, 2021, pp. 1
  • [van13] Jan van den Heuvel “The Complexity of Change” In Surveys in Combinatorics 2013 409 Cambridge University Press, 2013, pp. 127–160