
Moderate Exponential-time Quantum Dynamic Programming Across the Subsets for Scheduling Problems (this paper is an extension of the conference paper of Grange et al., 2023a).

Camille Grange University of Montpellier, LIRMM, CNRS, 161 rue Ada, Montpellier, France SNCF, Technology, Innovation and Group Projects Department, 1 avenue François Mitterand, Saint-Denis, France Michael Poss University of Montpellier, LIRMM, CNRS, 161 rue Ada, Montpellier, France Eric Bourreau University of Montpellier, LIRMM, CNRS, 161 rue Ada, Montpellier, France Vincent T’kindt University of Tours, LIFAT, 64 avenue Jean Portalis, Tours, France Olivier Ploton University of Tours, LIFAT, 64 avenue Jean Portalis, Tours, France
Abstract

Grover Search is currently one of the main quantum algorithms leading to hybrid quantum-classical methods that reduce the worst-case time complexity for some combinatorial optimization problems. Specifically, the combination of Quantum Minimum Finding (obtained from Grover Search) with dynamic programming has proved particularly efficient in improving the complexity of NP-hard problems currently solved by classical dynamic programming. For these problems, the classical dynamic programming complexity in \mathcal{O}^{*}(c^{n}), where \mathcal{O}^{*} denotes that polynomial factors are ignored, can be reduced by a hybrid algorithm to \mathcal{O}^{*}(c_{\text{quant}}^{n}), with c_{\text{quant}}<c. In this paper, we provide a bounded-error hybrid algorithm that achieves such an improvement for a broad class of NP-hard single-machine scheduling problems for which we give a generic description. Moreover, we extend this algorithm to tackle the 3-machine flowshop problem. Our algorithm reduces the exponential-part complexity compared to the best-known classical algorithm, sometimes at the cost of an additional pseudo-polynomial factor.

keywords: Discrete Optimization, Quantum computing, Scheduling, Dynamic Programming

1 Introduction

The fields of quantum computing and combinatorial optimization are becoming more closely linked every day, thanks to the work of the quantum computing community, as well as the more recent interest of the operations research community in the new quantum paradigm. More precisely, there are two types of quantum algorithms for solving optimization problems. The first type encompasses heuristics, often designed today as hybrid quantum-classical algorithms, such as the class of Variational Quantum Algorithms described by Cerezo et al., (2021) or by Grange et al., 2023b and, within it, the famous Quantum Approximate Optimization Algorithm (QAOA) of Farhi et al., (2014). Essentially, these algorithms require the optimization problem to be formulated as an unconstrained binary problem with a polynomial objective function and can be implemented on current noisy quantum computers because the quantum part can be made rather small. Note that, usually, the problem is formulated as a QUBO (Quadratic Unconstrained Binary Optimization) to limit the number of entanglement gates. Among others, the problems of MAX-CUT (Farhi et al., 2014), the Travelling Salesman Problem (Ruan et al., 2020), MAX-3-SAT (Nannicini, 2019), Graph Coloring (Tabi et al., 2020) and Job Shop Scheduling (Kurowski et al., 2023) have been reformulated as QUBO and solved with hybrid heuristics on small instances. However, due to the small size of instances processed today and the nature of heuristics, whose performances are evaluated empirically, no quantum advantage with these heuristics is emerging yet. This is where the second type of quantum algorithms differs: they are exact algorithms (i.e. algorithms that output the optimal solution with a high probability of success) that provide theoretical speed-ups for several types of problems and algorithms, as displayed by Nannicini, (2022) and Sutter et al., (2020). Notice that with the current quantum hardware, it is impossible to implement them today because of the huge amount of quantum resources they require.

Grover, (1996) provides one key exact quantum algorithm, which achieves a quadratic speed-up when searching for a specific element in an unsorted table, where the complexity is measured as the number of queries to the table, performed through an oracle. Grover Search is the routine behind many exact quantum algorithms. For instance, Dürr and Høyer, (1996) use Grover Search as a subroutine for a hybrid quantum-classical algorithm that finds the minimum of an unsorted table, resulting in the algorithm called Quantum Minimum Finding. Later, Ambainis et al., (2019) combine Quantum Minimum Finding with dynamic programming to address NP-hard vertex ordering problems, such as the Traveling Salesman Problem (TSP) or the Minimum Set Cover problem. The problems of interest must satisfy a specific property which implies that they can be solved by classical dynamic programming in \mathcal{O}^{*}(c^{n}), where c is usually not smaller than 2. Henceforth, we use \mathcal{O}^{*}, the usual asymptotic notation that ignores the polynomial factors in the complexity. The hybrid algorithm of Ambainis et al., (2019) reduces the complexity to \mathcal{O}^{*}(c_{\text{quant}}^{n}) for c_{\text{quant}}<c. As an example, the Held and Karp, (1970) dynamic programming solves the TSP in \mathcal{O}^{*}(2^{n}), whereas the hybrid algorithm of Ambainis et al., (2019) solves it in \mathcal{O}^{*}(1.728^{n}) by combining the dynamic programming recurrence of Held and Karp with Quantum Minimum Finding. Following the work of Ambainis et al., (2019), other NP-hard problems have been tackled with the idea of combining Grover Search (or Quantum Minimum Finding) and classical dynamic programming. This has led to quantum speed-ups for the Steiner Tree problem (Miyamoto et al., 2020) and the graph coloring problem (Shimizu and Mori, 2022).

The purpose of this work is to provide a hybrid quantum-classical algorithm, adapting the seminal idea of Ambainis et al., (2019), that reduces the time complexity of solving NP-hard scheduling problems. For that, we propose an extended version of the well-known Dynamic Programming Across the Subsets (DPAS) recurrences used to solve combinatorial optimization problems like scheduling problems (see e.g. T’kindt et al., (2022)). Notice that DPAS is a common technique for designing exact algorithms for NP-hard problems, as described by Woeginger, (2003).

Scheduling problems and DPAS.

A scheduling problem consists in finding the optimal assignment of a set of jobs to machines over time. Each job j is defined by at least a processing time p_{j} and possibly additional data like a due date d_{j}, a deadline \tilde{d}_{j}, or even a weight w_{j} reflecting its priority. One or more machines can process the set of jobs; however, at any time point, a machine can only process one job at a time. The computation of a schedule is done to minimize a given objective function.

In Sections 2 and 3, we consider single-machine scheduling problems. Let [n]:=\{1,\ldots,n\} be the set of jobs to schedule on the machine. While a solution to a single-machine scheduling problem is described by a starting time for each job on the machine, it is standard to describe instead such a solution by a permutation \pi\in S_{[n]} of the n jobs. Indeed, the starting times can be directly deduced from the order of jobs in the permutation and the potential constraints thanks to the following assumptions. First, we assume that only one job can be processed at any time on the machine. Second, we deal only with non-preemptive scheduling, meaning that a job must be run to completion once it has started. Henceforth, we use the permutation representation for the solutions. In Section 5, we consider the 3-machine flowshop of n jobs. The definition of this problem, introduced in the above-mentioned section, also makes a solution entirely described by a permutation of [n] even though there are 3 machines.

Throughout this paper, we use the usual notation \alpha|\beta|\gamma, introduced by Graham et al., (1979), to describe the scheduling problem consisting of \alpha machines, with the constraints \beta and the criterion \gamma to be minimized. For instance, 1|\tilde{d}_{j}|\sum_{j}w_{j}C_{j} is the problem of minimizing the total weighted completion time with deadline constraints on a single machine. The reader interested in scheduling can refer to any textbook in scheduling, e.g. to Pinedo, (2012).

The single-machine scheduling problems addressed in this work are those that satisfy the Dynamic Programming Across the Subsets (DPAS) property. It means that these problems can be solved by dynamic programming where the optimal solution for a set of jobs J\subseteq[n] is computed as the best concatenation, over all j\in J, of the optimal solution for J\setminus\{j\} and the cost of setting j as the last processed job. Specifically, if we note \textup{OPT}[J] the optimal value for processing the set J of jobs, the recursion is

\textup{OPT}[J]=\min_{j\in J}\,\textup{OPT}[J\setminus\{j\}]+\phi_{j}\left(\sum_{k\in J}p_{k}\right)\,, (1)

where \phi_{j} is a function depending on job j. This generic recursion captures many single-machine scheduling problems, as recalled in the survey of T’kindt et al., (2022), leading to a worst-case time complexity of \mathcal{O}^{*}(2^{n}) to solve all these problems. This naturally raises the question of the existence of moderate exponential-time algorithms with a complexity \mathcal{O}^{*}(c^{n}) where c<2. The question has been answered positively for specific problems, such as minimizing the total weighted completion time with precedence constraints in \mathcal{O}^{*}((2-\epsilon)^{n}) for small \epsilon>0 by Cygan et al., (2014). But, as far as we know, no generic method provides such an improvement for a broad class of scheduling problems. In this paper, we present a hybrid algorithm that solves the problems satisfying (1) in \mathcal{O}^{*}(1.728^{n}), sometimes with an additional pseudo-polynomial factor in the complexity that comes from the generalization of the recurrence.
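To fix ideas, the following minimal classical sketch evaluates recursion (1) for the unconstrained problem 1||\sum_{j}w_{j}C_{j}, for which \phi_{j}(p(J))=w_{j}\cdot p(J); the instance data and function names are illustrative only.

from itertools import combinations

def dpas(jobs, p, w):
    # Classical DPAS of recursion (1) for 1 || sum w_j C_j, where phi_j(P) = w_j * P.
    opt = {frozenset(): 0}
    for k in range(1, len(jobs) + 1):          # subsets by increasing cardinality: O*(2^n) states
        for subset in combinations(jobs, k):
            J = frozenset(subset)
            load = sum(p[j] for j in J)        # p(J): completion time of the job scheduled last
            opt[J] = min(opt[J - {j}] + w[j] * load for j in J)
    return opt[frozenset(jobs)]

p = {1: 3, 2: 1, 3: 2}                         # processing times (toy instance)
w = {1: 2, 2: 1, 3: 3}                         # weights
print(dpas([1, 2, 3], p, w))                   # prints 21, the optimal total weighted completion time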

Our contributions.

We extend existing recurrences for scheduling problems, leading to a quantum speed-up for solving a general class of scheduling problems. The dynamic programming recurrences are adapted to solve scheduling problems with a proposed hybrid algorithm, Q-DDPAS, which is an extension of the algorithm of Ambainis et al., (2019). Specifically, Q-DDPAS relies on dynamic programming over values indexed by a set, as by Ambainis et al., (2019), but also indexed by a new integer parameter. This new index is of a different nature from the set index: it is not exhaustively enumerated in the recurrence, but it makes it possible to express temporal constraints. Q-DDPAS also deals with non-linear objective functions, for which adding the values of subsets is no longer sufficient and new ways to combine values are required, e.g. composition, coming from the specificity of scheduling objective functions. Specifically, we cover three types of problems that satisfy three different kinds of dynamic programming properties. For each of them, the best-known classical time complexity is in \mathcal{O}^{*}(2^{n}) and is reduced to \mathcal{O}^{*}(\textup{pseudop}\cdot 1.728^{n}) by the hybrid algorithm of this paper, where pseudop is a pseudo-polynomial factor. Not only does it apply to problems for which the dynamic programming property is based on the addition of optimal values of the problem on sub-instances (as done by Grange et al., 2023a), but it also covers problems for which the dynamic programming naturally applies to the composition of optimal values of the problem on sub-instances. Furthermore, we address the 3-machine flowshop problem, which differs from the previous problems by the nature of the recurrence property and widens the range of problems solved by the hybrid algorithm. Last, we also propose an approximation scheme for the 3-machine flowshop problem based on the hybrid algorithm. We summarize in Table 1 the complexities of several NP-hard scheduling problems through which we illustrate the recurrences in this paper.

Problem Our hybrid algorithm Best classical algorithm
1|\tilde{d}_{j}|\sum w_{j}C_{j}   \mathcal{O}^{*}\left(\sum p_{j}\cdot 1.728^{n}\right)   \mathcal{O}^{*}(2^{n}) (T’kindt et al., 2022)
1||\sum w_{j}T_{j}   \mathcal{O}^{*}\left(\sum p_{j}\cdot 1.728^{n}\right)   \mathcal{O}^{*}(2^{n}) (T’kindt et al., 2022)
1|prec|\sum w_{j}C_{j}   \mathcal{O}^{*}\left(1.728^{n}\right)   \mathcal{O}^{*}((2-\epsilon)^{n}), for small \epsilon (Cygan et al., 2014)
1|r_{j}|\sum w_{j}U_{j}   \mathcal{O}^{*}\left((\sum w_{j})^{3}\cdot\sum p_{j}\cdot 1.728^{n}\right)   \mathcal{O}^{*}(\sum w_{j}\cdot\sum p_{j}\cdot 2^{n}) (Ploton and T’kindt, 2022)
1|r_{j}|\sum w_{j}C_{j}   \mathcal{O}^{*}\left((\sum w_{j})^{3}\cdot(\sum p_{j})^{4}\cdot 1.728^{n}\right)   \mathcal{O}^{*}(\sum w_{j}\cdot(\sum p_{j})^{2}\cdot 2^{n}) (Ploton and T’kindt, 2022)
F3||C_{\text{max}}   \mathcal{O}^{*}\left((\sum p_{ij})^{4}\cdot 1.728^{n}\right)   \mathcal{O}^{*}(3^{n}) (Shang et al., 2018; Ploton and T’kindt, 2023)
Table 1: Comparison of worst-case time complexities between our hybrid algorithm and the best-known classical algorithm.

Structure of the paper.

First, we present in Section 2 problems for which the dynamic programming property is based on the addition of optimal values of the problem on sub-instances (called Additive DPAS). We begin with the example of problem 1|\tilde{d}_{j}|\sum_{j}w_{j}C_{j} and then provide a generic description of the problems at stake. We describe the related hybrid algorithm (called Q-DDPAS) as is usually done in the algorithmic quantum literature, namely with a high-level description where quantum boxes interact with the classical part. We provide a rigorous and detailed description of the circuit-based implementation for interested readers in our companion paper (Grange et al., 2024). Similarly, we tackle in Section 3 problems for which the dynamic programming property is based on the composition of optimal values of the problem on sub-instances (called Composed DPAS), beginning with the example of problem 1|r_{j}|\sum w_{j}U_{j}. We provide in Section 4 some applications of Q-DDPAS to the scheduling literature. Lastly, in Section 5, we consider the 3-machine flowshop problem, for which the dynamic programming recurrence applies to its decision version. It results in a slightly different hybrid algorithm. Additionally, we provide an approximation scheme for this problem, based on the hybrid algorithm, that removes the pseudo-polynomial factor from the time complexity. We recall in Appendix A useful bounds to derive the complexities of the proposed algorithms.

2 Additive DPAS

In this section, we present problems for which the dynamic programming recursion is based on the addition of optimal values of problems for sub-instances. Next, we detail the hybrid algorithm Q-DDPAS to solve these problems.

2.1 A scheduling example

The NP-hard single-machine scheduling problem at stake is the minimization of the total weighted completion time with deadline constraints, often referred to as 1|\tilde{d}_{j}|\sum_{j}w_{j}C_{j} in the scheduling literature. The input is given, for each job j\in[n], by a weight w_{j}, a processing time p_{j} and a deadline \tilde{d}_{j} before which the job must be completed. We define the completion time C_{j}(\pi) of job j as the end time of the job on the machine for the permutation \pi. So, if j starts at time t for the permutation \pi, then C_{j}(\pi)=t+p_{j}. The problem aims at finding the feasible permutation for which the total weighted sum of completion times is minimal. A permutation \pi is feasible if C_{j}(\pi)\leq\tilde{d}_{j} for every job j. Thus, the problem can be formulated as follows:

\min_{\pi\in\Pi}\,\sum_{j=1}^{n}w_{j}C_{j}(\pi)\,,

where the set of feasible permutations is \Pi=\{\pi\in S_{[n]}\,|\,C_{j}(\pi)\leq\tilde{d}_{j}\,,\forall j\in[n]\}\,.

This problem satisfies two recurrences. For deriving them, we need to introduce the set T:=\left\llbracket 0,\sum_{j=1}^{n}p_{j}\right\rrbracket, where we use the notation \llbracket a,b\rrbracket=\{a,a+1,\ldots,b\} for integers a and b. For J\subseteq[n] and t\in T, we define \textup{OPT}[J,t] as the optimal value of the problem in which only jobs in J are scheduled, starting from time t. Thus, solving our nominal problem 1|\tilde{d}_{j}|\sum_{j}w_{j}C_{j} amounts to computing \textup{OPT}[[n],0].

The first recurrence comes from the standard Dynamic Programming Across the Subsets (DPAS) described in (1). However, compared to the usual DPAS, we introduce an extra parameter t, necessary for the solution with our hybrid algorithm as explained later. The idea of this recurrence is to get the optimal value of our problem for jobs in J starting at time t by finding, over all jobs j\in J, the permutation that ends with j and has the best cost value. It is possible to do so because, no matter what the optimal permutation of the first (|J|-1) jobs is, the cost of setting job j at the end of the permutation is always known. Indeed, the time taken to process all jobs in J\setminus\{j\} is always \sum_{k\in J\setminus\{j\}}p_{k}. Thus, the completion time of j is defined by c_{j}=t+\sum_{k\in J}p_{k}. It results that the cost of setting j at the end of the permutation is w_{j}(t+\sum_{k\in J}p_{k}). It also implies that the deadline constraint for job j is satisfied if t+\sum_{k\in J}p_{k}\leq\tilde{d}_{j}. Specifically, for all J\subseteq[n] and for all t\in T, we have

\textup{OPT}[J,t]=\min\limits_{j\in J}\,\textup{OPT}[J\setminus\{j\},t]+\left\{\begin{aligned}&w_{j}\left(t+\sum_{k\in J}p_{k}\right)~{}~{}&\text{if }t+\sum_{k\in J}p_{k}\leq\tilde{d}_{j}\\&+\infty~{}~{}&\text{otherwise}\end{aligned}\right.\,, (2)

initialized by \textup{OPT}[\emptyset,t]=0 for all t\in T.
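As an illustration, the sketch below evaluates recurrence (2) classically on a toy instance of 1|\tilde{d}_{j}|\sum_{j}w_{j}C_{j}, keeping the extra index t even though the purely classical solution only needs t=0; all data and names are illustrative.

from itertools import combinations

def additive_dpas(jobs, p, w, d_tilde):
    # Evaluate recurrence (2) for every subset J and every starting time t in T.
    T = range(sum(p[j] for j in jobs) + 1)
    opt = {(frozenset(), t): 0 for t in T}
    for k in range(1, len(jobs) + 1):
        for subset in combinations(jobs, k):
            J = frozenset(subset)
            load = sum(p[j] for j in J)
            for t in T:
                best = float("inf")
                for j in J:                          # j is the job scheduled last
                    completion = t + load
                    cost = w[j] * completion if completion <= d_tilde[j] else float("inf")
                    best = min(best, opt[(J - {j}, t)] + cost)
                opt[(J, t)] = best
    return opt

p = {1: 2, 2: 3, 3: 1}
w = {1: 1, 2: 2, 3: 3}
d_tilde = {1: 6, 2: 6, 3: 4}
table = additive_dpas([1, 2, 3], p, w, d_tilde)
print(table[(frozenset({1, 2, 3}), 0)])              # OPT[[n], 0] = 17 for this instance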

The second recurrence generalizes the previous one. For this recurrence, the principle of computing \textup{OPT}[J,t] is similar to (2), but instead of setting one job at the end of the permutation, we choose |J|/2 jobs and set them to be the last half of the permutation. Specifically, for all J\subseteq[n] of even cardinality and t\in T, we have

\textup{OPT}[J,t]=\min\limits_{X\subseteq J\atop|X|=|J|/2}\Bigl\{\textup{OPT}[X,t]+\textup{OPT}[J\setminus X,t+\sum_{i\in X}p_{i}]\Bigr\}\,, (3)

initialized by, \forall j\in[n] and t\in T, \textup{OPT}[\{j\},t]=\left\{\begin{aligned}&w_{j}(p_{j}+t)~{}~{}&\text{if }\tilde{d}_{j}\geq p_{j}+t\\&+\infty~{}~{}&\text{otherwise}\end{aligned}\right.\,.

For a given X\subseteq J of size |J|/2, recurrence (3) computes the best permutation of jobs in X starting at time t, and the best permutation of jobs in J\setminus X starting at time t+\sum_{k\in X}p_{k}, since, as before, no matter what the optimal permutation for jobs in X is, the time taken to process them all is exactly \sum_{k\in X}p_{k}.

The two above recurrences have been illustrated with problem 1|\tilde{d}_{j}|\sum_{j}w_{j}C_{j}. In the next subsection, we propose a general formulation of these recurrences that will be used to make our algorithm as general as possible, so that it solves a broad class of scheduling problems.

2.2 General formulation of recurrences

Let us consider the following general scheduling problem:

\mathcal{P}:\quad\min_{\pi\in\Pi}f(\pi)\,,

where \Pi\subseteq S_{[n]} is the set of feasible permutations of [n]:=\{1,\ldots,n\} according to given constraints and f is the objective function. We introduce a related problem P, useful for deriving the dynamic programming recursion, for which we specify the instance: for J\subseteq[n] and t\in\mathbb{Z},

P(J,t):\quad\min_{\pi\in\Pi(J,t)}f(\pi,J,t) (4)

as the nominal scheduling problem \mathcal{P} that schedules only jobs in J and starts the schedule at time t. Let us note \textup{OPT}[J,t] the optimal value of P(J,t). It results that solving \mathcal{P} amounts to solving P([n],0), and it can be performed by Q-DDPAS if the related problem P satisfies the two recurrences (Add-DPAS) and (Add-D-DPAS) below. Henceforth, we denote by 2^{[n]} the set of all subsets of [n], and by \llbracket a,b\rrbracket the set \{a,a+1,\ldots,b\}. Let us introduce the first recurrence.

Property 2.1 (Additive DPAS).

There exists a function g:2^{[n]}\times[n]\times T\rightarrow\mathbb{R}, computable in polynomial time, such that, for all J\subseteq[n] and for all t_{0}\in T,

\textup{OPT}[J,t_{0}]=\min\limits_{j\in J}\Bigl\{\textup{OPT}[J\setminus\{j\},t_{0}]+g(J,j,t_{0})\Bigr\} (Add-DPAS)

initialized by \textup{OPT}[\emptyset,t_{0}]=0.

Lemma 2.2.

Dynamic programming (Add-DPAS) solves \mathcal{P} in \mathcal{O}^{*}(2^{n}).

Proof.

We solve Equation (Add-DPAS) for all J such that |J|=k, and for t_{0}=0, starting from k=1 up to k=n. For a given J, the values \{\textup{OPT}[J\setminus\{j\},0]:j\in J\} are known, so \textup{OPT}[J,0] is computed in time poly(n)\cdot k according to Equation (Add-DPAS), where poly(n) is a polynomial function of n (the computation of g is polynomial). Eventually, the total complexity of computing \textup{OPT}[[n],0] is \sum_{k=1}^{n}poly(n)\,k\binom{n}{k}=poly(n)\cdot n\cdot 2^{n-1}=\mathcal{O}^{*}(2^{n}). ∎
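The combinatorial identity used in the last equality can be recovered in one line:

\sum_{k=1}^{n}k\binom{n}{k}=\sum_{k=1}^{n}n\binom{n-1}{k-1}=n\sum_{k=0}^{n-1}\binom{n-1}{k}=n\cdot 2^{n-1}\,.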

Throughout, we commit a slight abuse of language by letting (Add-DPAS) refer both to the property satisfied by a given optimization problem and to the resulting dynamic programming algorithm. Notice the presence of the additional parameter t_{0} in the above definition, which is typically absent in the scheduling literature. In particular, t_{0} is a constant throughout the whole recursion (Add-DPAS) and does not impact the resulting computational complexity. The use of that extra parameter in T shall be necessary later when applying our hybrid algorithm.

Property 2.1 expresses that finding the optimal value of P for jobs in J starting at time t is done by finding, over all jobs j\in J, the permutation that ends with j and has the best cost value. Function g represents the cost of j being the last job of the permutation. Notice that isolating the last job of the permutation is a usual technique in scheduling, as displayed in (1). In the second recurrence below, we provide a similar scheme where, instead of one job, we isolate half of the jobs in J, turning the computation of g into the solution of another problem on a sub-instance with |J|/2 jobs.

Property 2.3 (Additive Dichotomic DPAS).

There exist two functions \tau_{\text{shift}}:2^{[n]}\times 2^{[n]}\times T\rightarrow T and h:2^{[n]}\times 2^{[n]}\times T\rightarrow\mathbb{R}, computable in polynomial time, such that, for all J\subseteq[n] of even cardinality, and for all t\in T,

\textup{OPT}[J,t]=\min\limits_{X\subseteq J\atop|X|=|J|/2}\Bigl\{\textup{OPT}[X,t]+h(J,X,t)+\textup{OPT}[J\setminus X,\tau_{\text{shift}}(J,X,t)]\Bigr\} (Add-D-DPAS)

initialized by the values \textup{OPT}[\{j\},t] for each j\in[n] and t\in T.

For a given X\subseteq J, the above recursion computes the best permutation of jobs in X starting at time t, and the best permutation of jobs in J\setminus X starting at time \tau_{\text{shift}}(J,X,t), adding the function h that represents the cost of the concatenation of these two permutations.

Remark 2.4.

We observe that problem (4) satisfies recurrence (Add-DPAS) if and only if it satisfies (Add-D-DPAS). This can be seen by developing both recurrences recursively, which essentially leads to optimization problems over \pi\in S_{[n]} whose objective functions involve g in the first case and h and \tau_{\text{shift}} in the second case. Here, one readily verifies that g can then be defined from h and \tau_{\text{shift}}, and reciprocally.

Despite the previous remark, the two recurrences differ in the size of the subsets considered along the recursions, leading to different formulations and therefore requiring more or fewer sub-problems to be solved optimally in the dynamic programming process. This is formalized in the following proposition. Note that we use the notation f_{1}(n)=\omega(f_{2}(n)) if f_{1} asymptotically dominates f_{2}.

Lemma 2.5.

Dynamic programming (Add-D-DPAS) solves \mathcal{P} in \omega(|T|\cdot 2^{n}).

The proof is given in the Supplementary Materials. Notice that if n is not a power of 2, we can still add fake jobs without changing the following conclusion: solving \mathcal{P} with (Add-DPAS) is faster than with (Add-D-DPAS). However, in the next subsection, we describe a hybrid algorithm, Q-DDPAS, that improves the complexity of solving \mathcal{P} by combining recurrences (Add-DPAS) and (Add-D-DPAS) with a quantum subroutine.

2.3 Hybrid algorithm for Additive DPAS

In this subsection, we describe our hybrid algorithm Q-DDPAS, adapted from the work of Ambainis et al., (2019). Notice that it assumes access to a quantum random access memory (QRAM) (Giovannetti et al., 2008), namely, a classical data structure that stores classical information but can answer queries in quantum superposition. We underline that this latter assumption is strong because QRAM is not yet available on current universal quantum hardware. First, let us introduce the Quantum Minimum Finding algorithm of Dürr and Høyer, (1996), which constitutes a fundamental subroutine in our algorithm. This algorithm essentially applies Grover Search (Grover, 1996) several times and provides a quadratic speedup for the search of a minimum element in an unsorted table.

Definition 2.6 (Quantum Minimum Finding (Dürr and Høyer, 1996)).

Let f:[n]\rightarrow\mathbb{Z} be a function. Quantum Minimum Finding computes the minimum value of f and the corresponding minimizer \operatorname*{arg\,min}_{i\in[n]}\{f(i)\}. The complexity of Quantum Minimum Finding is \mathcal{O}\left(\sqrt{n}\cdot C_{f}(n)\right), where \mathcal{O}(C_{f}(n)) is the complexity of computing a value of f.

Remark 2.7 (Success probability and bounded-error algorithm (Bernstein and Vazirani, 1993)).

Dürr and Høyer, (1996) prove that Quantum Minimum Finding computes the minimum value with a probability of success strictly larger than \frac{1}{2}, independent of n. Thus, for \epsilon>0, finding the minimum value with probability (1-\epsilon) is achieved by repeating Quantum Minimum Finding \mathcal{O}(\log\frac{1}{\epsilon}) times. Henceforth, we refer to this statement when we write that Quantum Minimum Finding finds the minimum value with high probability. Equivalently, we say that this is a bounded-error algorithm. More generally, in the rest of the paper, we call a bounded-error algorithm an algorithm that provides the optimal solution with a probability as close to 1 as we want by repeating it a number of times independent of the instance size.
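The repetition count can be made explicit: if one run of Quantum Minimum Finding fails with probability at most 1/2 and we keep the best value found over k independent runs, the failure probability is at most (1/2)^{k}, so it suffices to take

\left(\tfrac{1}{2}\right)^{k}\leq\epsilon\iff k\geq\log_{2}\tfrac{1}{\epsilon}\,,\qquad\text{i.e. }k=\left\lceil\log_{2}\tfrac{1}{\epsilon}\right\rceil=\mathcal{O}\left(\log\tfrac{1}{\epsilon}\right)\,.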

Next, we describe the algorithm of Ambainis et al., (2019), adapted to our Additive DPAS recurrences, which involve the extra parameter in T. We call it Q-DDPAS; it consists essentially of calling Quantum Minimum Finding recursively twice and computing the remaining base values classically. Without loss of generality, we assume that 4 divides n. This can be achieved by adding at most three fake jobs and, therefore, does not change the algorithm complexity. Q-DDPAS consists of two steps. First, we compute classically by (Add-DPAS) the optimal values of P on sub-instances of n/4 jobs and for all starting times t\in T. Second, we call Quantum Minimum Finding recursively twice with (Add-D-DPAS) to find optimal values of P on sub-instances of n/2 jobs starting at any time t\in T, and eventually of n jobs starting at t=0 (corresponding to the optimal value of the nominal problem \mathcal{P}). Specifically, we describe Q-DDPAS in Algorithm 1.

Input: Problem P satisfying (Add-DPAS) and (Add-D-DPAS)
Output: \textup{OPT}[[n],0] with high probability
begin classical part
        for X\subseteq[n] such that |X|=n/4, and t\in T do
              1. Compute \textup{OPT}[X,t] with (Add-DPAS) and store the results in the QRAM;

begin quantum part
        2. Apply Quantum Minimum Finding with (Add-D-DPAS) to find \textup{OPT}[[n],0];
        3. To get the values required by the Quantum Minimum Finding above (the values \textup{OPT}[J,t] for J\subseteq[n] of size n/2 and t\in T), apply Quantum Minimum Finding with (Add-D-DPAS);
        4. To get the values required by the Quantum Minimum Finding above (the values \textup{OPT}[X,t^{\prime}] for X\subseteq[n] of size n/4 and t^{\prime}\in T), retrieve them from the QRAM
Algorithm 1 Q-DDPAS for Additive DPAS
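To make the control flow of Algorithm 1 concrete (without, of course, its quantum speedup), the following classical mock-up instantiates it on a toy instance of 1|\tilde{d}_{j}|\sum_{j}w_{j}C_{j} with n=4: a dictionary plays the role of the QRAM for Step 1, and the two recursive Quantum Minimum Finding calls of Steps 2-3 are replaced by plain minimizations over (Add-D-DPAS). All names and data are illustrative; this is only a sketch of the structure under these simplifying assumptions.

from itertools import combinations

jobs = [1, 2, 3, 4]                               # n = 4, so n/4 = 1 and n/2 = 2
p = {1: 2, 2: 1, 3: 3, 4: 2}
w = {1: 2, 2: 1, 3: 1, 4: 3}
d_tilde = {1: 8, 2: 8, 3: 8, 4: 4}
T = range(sum(p.values()) + 1)

def opt_single(j, t):
    # Initialization of (Add-D-DPAS): OPT[{j}, t].
    c = t + p[j]
    return w[j] * c if c <= d_tilde[j] else float("inf")

# Step 1 (classical part): OPT[X, t] for all X of size n/4 = 1; the dictionary stands in for the QRAM.
qram = {(frozenset({j}), t): opt_single(j, t) for j in jobs for t in T}

def opt_half(J, t):
    # Step 3: OPT[J, t] for |J| = n/2 by (Add-D-DPAS); min replaces Quantum Minimum Finding.
    return min(qram[(frozenset(X), t)]
               + qram[(J - frozenset(X), t + sum(p[i] for i in X))]
               for X in combinations(J, len(J) // 2))

def opt_full(J, t):
    # Step 2: OPT[[n], 0] by (Add-D-DPAS) over balanced bi-partitions of J.
    return min(opt_half(frozenset(X), t)
               + opt_half(J - frozenset(X), t + sum(p[i] for i in X))
               for X in combinations(J, len(J) // 2))

print(opt_full(frozenset(jobs), 0))               # OPT[[n], 0] = 27 for this instance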
Theorem 2.8.

The bounded-error algorithm Q-DDPAS (Algorithm 1) solves \mathcal{P} in \mathcal{O}^{*}(|T|\cdot 1.754^{n}).

The proof of the correctness of the algorithm, including the description of the gate implementation, is detailed in the companion paper Grange et al., (2024), with all the low-level details needed to implement the algorithm.

Proof.

Hereafter, we provide a high-level proof. The upper bounds used to simplify the complexities are detailed in Appendix A.

  • Classical part: computing all \textup{OPT}[X,t] for all X of size n/4 and for all t\in T (Step 1 in Algorithm 1) is done by (Add-DPAS) in time \mathcal{O}^{*}\left(|T|\cdot\sum_{k=1}^{n/4}k\binom{n}{k}\right)=\mathcal{O}^{*}(|T|\cdot 2^{0.811n}).

  • Quantum part: according to the complexity of Quantum Minimum Finding (Definition 2.6), computing \textup{OPT}[[n],0] with Quantum Minimum Finding (Step 2 in Algorithm 1) is done in \mathcal{O}\left(\sqrt{\binom{n}{n/2}}\cdot C_{1}(n)\right), where C_{1}(n) is the complexity of computing \textup{OPT}[J,t] for a J of size n/2 and t\in T. The essence of the quantum advantage here is that we do not need to enumerate all sets J and all times t, but apply Quantum Minimum Finding to all of them at once. Notice that \binom{n}{n/2} is the number of balanced bi-partitions of [n], namely the number of elements we search over to find the minimum of Equation (Add-D-DPAS) when computing \textup{OPT}[[n],0]. Thus, C_{1}(n) is exactly the complexity of the Quantum Minimum Finding applied at Step 3 in Algorithm 1, namely C_{1}(n)=\mathcal{O}\left(\sqrt{\binom{n/2}{n/4}}\cdot C_{2}(n)\right), where C_{2}(n) is the complexity of computing \textup{OPT}[X,t^{\prime}] for X of size n/4 and t^{\prime}\in T. Those values are already computed and stored in the QRAM (Step 1 in Algorithm 1), so C_{2}(n)=\mathcal{O}^{*}(1). Thus, the quantum part complexity is \mathcal{O}^{*}\left(\sqrt{\binom{n}{n/2}\binom{n/2}{n/4}}\right)=\mathcal{O}^{*}(2^{0.75n}).

Eventually, the Q-DDPAS complexity is the maximum of the classical and the quantum part complexities. Specifically, the total complexity is \mathcal{O}^{*}(|T|\cdot 2^{0.811n})=\mathcal{O}^{*}(|T|\cdot 1.754^{n}). ∎
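Both exponents follow from standard binomial bounds of the kind recalled in Appendix A; writing H for the binary entropy function, we have

\sum_{k=1}^{n/4}k\binom{n}{k}=\mathcal{O}^{*}\left(\binom{n}{n/4}\right)=\mathcal{O}^{*}\left(2^{H(1/4)\,n}\right)=\mathcal{O}^{*}\left(2^{0.811n}\right)\qquad\text{and}\qquad\sqrt{\binom{n}{n/2}\binom{n/2}{n/4}}\leq\sqrt{2^{n}\cdot 2^{n/2}}=2^{0.75n}\,.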

We observe that the complexity of Q-DDPAS can be further reduced by performing a third call to Equation (Add-D-DPAS) as suggested by Ambainis et al., (2019).

Observation 2.9.

A slight modification of Q-DDPAS reduces the complexity to \mathcal{O}^{*}(|T|\cdot 1.728^{n}).

Proof.

The slight modification of Q-DDPAS amounts to adding a level of recurrence in the quantum part: the complexity of the classical part decreases while the complexity of the quantum part increases, until both are equal, which minimizes the total complexity. The third call searches for the best concatenation among all the bi-partitions of size (0.945\cdot\frac{n}{4},0.055\cdot\frac{n}{4}) (which are integers asymptotically), i.e. solving

\textup{OPT}[J,t]=\min\limits_{X\subseteq J\atop|X|=0.945|J|}\Bigl\{\textup{OPT}[X,t]+h(J,X,t)+\textup{OPT}[J\setminus X,\tau_{\text{shift}}(J,X,t)]\Bigr\}\,.

The classical part computes all \textup{OPT}[X,t] for X of size 0.945\cdot\frac{n}{4} and 0.055\cdot\frac{n}{4}, in \mathcal{O}^{*}(1.728^{n}). The quantum part applies three levels of recurrence of Quantum Minimum Finding, computing the minimum over functions with domains of size \binom{n}{n/2}, \binom{n/2}{n/4} and \binom{n/4}{0.945\cdot n/4}, respectively. Its complexity is then \mathcal{O}^{*}\left(\sqrt{\binom{n}{n/2}\binom{n/2}{n/4}\binom{n/4}{0.945\cdot n/4}}\right)=\mathcal{O}^{*}(1.728^{n}) (see Appendix A). ∎
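A quick numerical sanity check of these two exponents, using only the entropy estimate \binom{m}{\alpha m}=\mathcal{O}^{*}(2^{H(\alpha)m}), can be scripted as follows; the script checks the arithmetic only and is not part of the proof.

from math import log2

def H(x):
    # Binary entropy in bits.
    return -x * log2(x) - (1 - x) * log2(1 - x)

# Classical part: dominated by the subsets of size 0.945*n/4, i.e. binom(n, 0.23625n) ~ 2^(H(0.23625)n).
classical_exponent = H(0.945 / 4)

# Quantum part: sqrt( binom(n, n/2) * binom(n/2, n/4) * binom(n/4, 0.945*n/4) ).
quantum_exponent = 0.5 * (1 + 0.5 + 0.25 * H(0.945))

print(2 ** classical_exponent, 2 ** quantum_exponent)   # both approximately 1.728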

Notice that the classical part of Q-DDPAS can be replaced by any classical algorithm \mathcal{A} that computes all \textup{OPT}[X,t], for X\subseteq[n] of size n/4 and t\in T, in \mathcal{O}^{*}(|T|\cdot 1.728^{n}). Moreover, if \mathcal{A} happens to reduce the classical part complexity to \mathcal{O}^{*}(|T|\cdot c^{n}) for c<1.728, the complexity of Q-DDPAS can also be reduced, in the same spirit as the slight modification of Observation 2.9.

The application of Q-DDPAS for Additive DPAS to the specific problem 1|\tilde{d}_{j}|\sum_{j}w_{j}C_{j} introduced in Subsection 2.1 is given in Subsection 4.1, together with other scheduling examples. Before introducing other types of problems tackled by Q-DDPAS in the next section, we provide some insights to underline why the use of the quantum subroutine Quantum Minimum Finding in Q-DDPAS must be carefully combined with classical computation to achieve a quantum speedup.

Remark 2.10.

Solving \mathcal{P} with (Add-DPAS) and replacing each classical computation of the minimum by the quantum subroutine Quantum Minimum Finding would not improve the best classical complexity. Indeed, the complexity would be \sum_{k=1}^{n}poly(n)\sqrt{k}\binom{n}{k}=\mathcal{O}^{*}(2^{n})\,.

Remark 2.11.

Solving \mathcal{P} exclusively by recursive calls to Quantum Minimum Finding (thus avoiding the classical computations for sets of size n/4) would not improve the classical complexity. Using recurrence (Add-D-DPAS), which is the quantum part of Algorithm 1 with roughly \log_{2}(n) recursive calls, would give a complexity in \mathcal{O}\left(\sqrt{\binom{n}{n/2}\binom{n/2}{n/4}\ldots\binom{2}{1}}\right), which is worse than \mathcal{O}^{*}(2^{n}). Using recurrence (Add-DPAS) would be even worse because it would require n recursive calls, leading to the complexity \mathcal{O}(\sqrt{n(n-1)\ldots 1}).

3 Composed DPAS

In this section, we study scheduling problems whose constraints enable only the composition of problems on sub-instances. We describe the adaptation of Q-DDPAS for these problems.

3.1 A scheduling example

We begin with the specific problem of minimizing the total weighted number of late jobs with release date constraints, often referred to as 1|r_{j}|\sum w_{j}U_{j} in the literature. The input is given, for each job j\in[n], by a weight w_{j}, a processing time p_{j}, a release date r_{j} that is the time from which the job can be scheduled (and not before), and a due date d_{j} that indicates the time after which the job is late. Thus, a job j is late in permutation \pi if its completion time is larger than d_{j}, namely if C_{j}(\pi)>d_{j}. We denote by U_{j}(\pi)=\mathbb{1}_{C_{j}(\pi)>d_{j}} its lateness indicator. This problem aims at finding the feasible permutation, namely one where each job starts after its release date, for which the total weighted number of late jobs is minimal. Thus, the problem can be formulated as follows:

\min_{\pi\in\Pi}\,\sum_{j=1}^{n}w_{j}U_{j}(\pi)\,,

where the set of feasible solutions is \Pi=\{\pi\in S_{[n]}\,|\,C_{j}(\pi)\geq r_{j}+p_{j}\,,\forall j\in[n]\}\,.

This problem does not satisfy the recurrences (Add-DPAS) and (Add-D-DPAS) because the release date constraints do not allow the addition of values on sub-instances. Let us take the example of (Add-D-DPAS). The starting time of the second half of jobs J\setminus X in (Add-D-DPAS) can be known only if we know the optimal permutation of the first half of the jobs, which is in opposition with the dynamic programming principle. Indeed, the release dates may create idle slots in the schedule of the first half of the jobs, so that the time needed to process all these jobs is not always equal to \sum_{k\in X}p_{k} and can be larger.

This observation leads to different recurrences, in which the time needed to process the jobs is itself known by dynamic programming. For that, we define an auxiliary problem on which the recurrences apply, and we introduce a new set of parameters E:=\left\llbracket 0,\sum_{j=1}^{n}w_{j}\right\rrbracket. For J\subseteq[n], t\in T:=\left\llbracket 0,\sum_{j=1}^{n}p_{j}\right\rrbracket\cup\{+\infty\} and \epsilon\in E, we note \textup{OPT}[J,t,\epsilon] the minimum makespan, i.e. the completion time of the last job, for jobs in J beginning at time t such that the weighted number of late jobs is exactly \epsilon. Notice that, by convention, \textup{OPT}[J,t,\epsilon]=+\infty if there is no feasible solution, i.e. if \left\{\pi\in S_{J}:C_{j}(\pi)\geq\max(t,r_{j})+p_{j}\,,\forall j\in J\mbox{ and }\sum_{j\in J}w_{j}U_{j}(\pi)=\epsilon\right\}=\emptyset. Thus, our initial problem 1|r_{j}|\sum w_{j}U_{j} is

\min\limits_{\epsilon\in E}\{\epsilon:\textup{OPT}[[n],0,\epsilon]<+\infty\}\,.

The following recurrence, satisfied by the auxiliary problem, is inspired by the work of Lawler, (1990) for the problem of minimizing the total weighted number of late jobs on a single machine under preemption and release date constraints (1|r_{j},pmtn|\sum w_{j}U_{j}). For J\subseteq[n], t\in T, \epsilon\in E,

\textup{OPT}[J,t,\epsilon]=\min\limits_{j\in J}\Bigl\{\underbrace{\textup{OPT}\Big[\{j\},\textup{OPT}[J\setminus\{j\},t,\epsilon],0\Big]}_{\text{job $j$ is not late}},\underbrace{\textup{OPT}\Big[\{j\},\textup{OPT}[J\setminus\{j\},t,\epsilon-w_{j}],w_{j}\Big]}_{\text{job $j$ is late}}\Bigr\}\,.

In this recurrence, for each j\in J, we impose j as the last job of the permutation and distinguish two cases, depending on whether it is late or not. Notice that the starting time of j is known and equal to \textup{OPT}[J\setminus\{j\},t,\cdot], which plays the role of the time parameter. The recurrence is initialized by, for j\in[n], t\in T and \epsilon\in E,

\textup{OPT}[\{j\},t,\epsilon]=\begin{cases}C_{j}:=\max(t,r_{j})+p_{j},&\text{ if }C_{j}\leq d_{j}\text{ and }\epsilon=0\\ +\infty,&\text{ if }C_{j}>d_{j}\text{ and }\epsilon=0\,,\text{ or if }C_{j}\leq d_{j}\text{ and }\epsilon=w_{j}\\ C_{j},&\text{ if }C_{j}>d_{j}\text{ and }\epsilon=w_{j}\\ +\infty,&\text{ if }\epsilon\in\llbracket 1,w_{j}-1\rrbracket\cup\llbracket w_{j}+1,\sum_{k=1}^{n}w_{k}\rrbracket\end{cases}

This recurrence generalizes into the following dichotomic version in which, instead of setting the last job of the permutation, we set the last half of the jobs. For all J\subseteq[n] of even cardinality, t\in T and \epsilon\in E,

\textup{OPT}[J,t,\epsilon]=\min\limits_{\epsilon^{\prime}\in E\atop X\subseteq J:|X|=|J|/2}\Bigl\{\textup{OPT}\Big[X,\textup{OPT}[J\setminus X,t,\epsilon-\epsilon^{\prime}],\epsilon^{\prime}\Big]\Bigr\}\,,

initialized by the same values of \textup{OPT}[\{j\},t,\epsilon] for j\in[n], t\in T and \epsilon\in E. Next, we provide generic recurrences to cover problems for which the composition of sub-instances is possible.
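For illustration, the sketch below evaluates the (non-dichotomic) composed recurrence above on a toy instance of 1|r_{j}|\sum w_{j}U_{j} and then extracts the optimal value as the smallest \epsilon with a finite entry; instance data and names are illustrative only.

from functools import cache

jobs = frozenset({1, 2, 3})
p = {1: 2, 2: 2, 3: 1}
r = {1: 0, 2: 1, 3: 0}
d = {1: 2, 2: 4, 3: 2}
w = {1: 2, 2: 1, 3: 1}
INF = float("inf")

def single(j, t, eps):
    # Initialization OPT[{j}, t, eps]: makespan of job j started from time t.
    c = max(t, r[j]) + p[j]
    late = c > d[j]
    if (eps == 0 and not late) or (eps == w[j] and late):
        return c
    return INF

@cache
def opt(J, t, eps):
    # OPT[J, t, eps]: minimum makespan of jobs in J started from t,
    # with weighted number of late jobs exactly eps.
    if eps < 0:
        return INF
    if len(J) == 1:
        (j,) = J
        return single(j, t, eps)
    best = INF
    for j in J:
        rest = J - {j}
        best = min(best,
                   single(j, opt(rest, t, eps), 0),           # j scheduled last and on time
                   single(j, opt(rest, t, eps - w[j]), w[j]))  # j scheduled last and late
    return best

E = range(sum(w.values()) + 1)
print(min(eps for eps in E if opt(jobs, 0, eps) < INF))        # optimal sum of w_j U_j (here 1)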

3.2 General formulation of recurrence

Let us consider a scheduling problem with n jobs

\mathcal{P}:\quad\min_{\pi\in\Pi}f(\pi)\,,

where \Pi\subseteq S_{[n]} is the set of feasible permutations of [n]:=\{1,\ldots,n\} according to given constraints and f is the objective function. Following the example detailed in the previous subsection, we consider an auxiliary problem \mathcal{P}^{\prime}, useful for deriving the dynamic programming recursion, for which we specify the instance: for J\subseteq[n] the jobs to be scheduled, t\in\mathbb{Z} the starting time of the schedule and \epsilon\in\mathbb{Z}, we define

P^{\prime}(J,t,\epsilon):\quad\min_{\pi\in\Pi^{\prime}(J,t,\epsilon)}f^{\prime}(\pi,J,t,\epsilon)\,, (5)

where the objective function f^{\prime} and the feasible set \Pi^{\prime} are different from those of \mathcal{P}. We assume that solving \mathcal{P} amounts to finding the smallest \epsilon\in\mathbb{Z} such that the auxiliary problem \mathcal{P}^{\prime} is bounded. Specifically,

\mathcal{P}:~~\min_{\epsilon\in\mathbb{Z}}\Bigl\{\epsilon:\textup{OPT}[[n],0,\epsilon]<+\infty\Bigr\}\,. (6)

To solve the nominal problem \mathcal{P} by classical dynamic programming, problem \mathcal{P}^{\prime} must satisfy recurrence (Comp-DPAS) or recurrence (Comp-D-DPAS) below (as in Remark 2.4, we can state that a problem satisfies one if and only if it satisfies the other). As we explain later, solving \mathcal{P} with our hybrid algorithm requires problem P^{\prime} to satisfy both recurrences.

Property 3.1 (Composed DPAS).

For all J\subseteq[n], t\in T and \epsilon\in E,

\textup{OPT}[J,t,\epsilon]=\min\limits_{\epsilon^{\prime}\in E\atop j\in J}\Bigl\{\textup{OPT}\Big[\{j\},\textup{OPT}[J\setminus\{j\},t,\epsilon-\epsilon^{\prime}],\epsilon^{\prime}\Big]\Bigr\}\,, (Comp-DPAS)

initialized by the values of \textup{OPT}[\{j\},t,\epsilon] for all j\in[n], \epsilon\in E and t\in T. Notice that, for J\subseteq[n] and t\in T, we adopt the convention \textup{OPT}[J,t,\epsilon]=+\infty for \epsilon\notin E.

Recurrence (Comp-DPAS) differs from recurrence (Add-DPAS) in two aspects. First, the optimal values of the problem on sub-instances are composed, and not added, because of the nature of the constraints. Second, the search for the minimum value is done not only over all jobs in J, but also over all values in E. More precisely, for a given \epsilon_{0}\in E, the optimal value of P^{\prime}(J,t,\epsilon_{0}) is the minimum value over all possible compositions of optimal values of the problem on sub-instances with parameters \epsilon_{1} and \epsilon_{2} such that \epsilon_{1}+\epsilon_{2}=\epsilon_{0}. We have the following result.

Lemma 3.2.

(Comp-DPAS) solves \mathcal{P} in \mathcal{O}^{*}(|E|^{3}\cdot|T|\cdot 2^{n}).

Proof.

Let \epsilon_{0}\in E. Similarly to the proof of Lemma 2.2, we show that (Comp-DPAS) solves P^{\prime}([n],0,\epsilon_{0}) in \mathcal{O}^{*}(|E|^{2}\cdot|T|\cdot 2^{n}). Indeed, to compute \textup{OPT}[[n],0,\epsilon_{0}], we need to solve Equation (Comp-DPAS) for all J such that |J|=k, starting from k=1 up to k=n, and for all t\in T and \epsilon\in E. For a given J, t\in T and \epsilon\in E, the values \{\textup{OPT}[J\setminus\{j\},t^{\prime},\epsilon^{\prime}]:j\in J,t^{\prime}\in T,\epsilon^{\prime}\in E\} and \{\textup{OPT}[\{j\},t^{\prime},\epsilon^{\prime}]:j\in J,t^{\prime}\in T,\epsilon^{\prime}\in E\} are known, so \textup{OPT}[J,t,\epsilon] is computed in time |E|\cdot k according to Equation (Comp-DPAS). Eventually, the total complexity of computing \textup{OPT}[[n],0,\epsilon_{0}] is \sum_{k=1}^{n}|T|\cdot|E|^{2}\cdot k\binom{n}{k}=\mathcal{O}^{*}(|T|\cdot|E|^{2}\cdot 2^{n}). Moreover, solving \mathcal{P} amounts to solving P^{\prime}([n],0,\epsilon) for all \epsilon\in E, according to (6). The complexity results directly from the above complexity of computing \textup{OPT}[[n],0,\epsilon_{0}], for \epsilon_{0}\in E. ∎

The auxiliary problem P^{\prime} must satisfy the following recurrence (Comp-D-DPAS) in addition to recurrence (Comp-DPAS).

Property 3.3 (Composed Dichotomic DPAS).

For all J\subseteq[n] of even cardinality, t\in T and \epsilon\in E,

\textup{OPT}[J,t,\epsilon]=\min\limits_{\epsilon^{\prime}\in E\atop X\subseteq J:|X|=|J|/2}\Bigl\{\textup{OPT}\Big[X,\textup{OPT}[J\setminus X,t,\epsilon-\epsilon^{\prime}],\epsilon^{\prime}\Big]\Bigr\}\,, (Comp-D-DPAS)

initialized by the values of \textup{OPT}[\{j\},t,\epsilon] for all j\in[n], t\in T and \epsilon\in E.

Lemma 3.4.

(Comp-D-DPAS) solves \mathcal{P} in \omega(|E|^{3}\cdot|T|\cdot 2^{n}).

Proof.

This proof is essentially the same as that of Lemma 2.5, with the same modifications as in the proof of Lemma 3.2. ∎

As for the Additive DPAS, we notice that, with a classical dynamic programming algorithm, the time complexity to solve \mathcal{P} with recurrence (Comp-DPAS) is better than with recurrence (Comp-D-DPAS). Next, we show that the hybrid algorithm applied to problems satisfying the Additive DPAS recurrences can be easily adapted to tackle problems satisfying the Composed DPAS recurrences.

3.3 Hybrid algorithm for Composed DPAS

The hybrid algorithm for Composed DPAS derives naturally from Algorithm 1. It amounts to replacing the recurrence (Add-DPAS), respectively (Add-D-DPAS), by (Comp-DPAS), respectively (Comp-D-DPAS), resulting in Algorithm 2. Eventually, we use Algorithm 2 as a subroutine to solve Equation (6), i.e. to solve the nominal problem \mathcal{P}.

Input: \epsilon_{0}\in E, auxiliary problem P^{\prime} satisfying (Comp-DPAS) and (Comp-D-DPAS)
Output: \textup{OPT}[[n],0,\epsilon_{0}] with high probability
begin classical part
        for X\subseteq[n] such that |X|=n/4, and t\in T do
               Compute \textup{OPT}[X,t,\epsilon_{0}] with (Comp-DPAS) and store the results in the QRAM;

begin quantum part
        Apply Quantum Minimum Finding with (Comp-D-DPAS) to find \textup{OPT}[[n],0,\epsilon_{0}];
        To get the values required by the Quantum Minimum Finding above (the values \textup{OPT}[J,t,\epsilon] for J\subseteq[n] of size n/2, t\in T and \epsilon\in E), apply Quantum Minimum Finding with (Comp-D-DPAS);
        To get the values required by the Quantum Minimum Finding above (the values \textup{OPT}[X,t^{\prime},\epsilon^{\prime}] for X\subseteq[n] of size n/4, t^{\prime}\in T and \epsilon^{\prime}\in E), retrieve them from the QRAM
Algorithm 2 Q-DDPAS for Composed DPAS
Lemma 3.5.

Let \epsilon_{0}\in E. The bounded-error algorithm Q-DDPAS (Algorithm 2) solves P^{\prime}([n],0,\epsilon_{0}) in \mathcal{O}^{*}(|E|^{2}\cdot|T|\cdot 1.754^{n}).

Notice that the implementation of this algorithm is slightly different from that of Algorithm 1, mainly due to the composition operation. The details are given in the companion paper (Grange et al., 2024).

Input: Auxiliary problem P^{\prime} satisfying (Comp-DPAS) and (Comp-D-DPAS)
Output: \min\limits_{\epsilon\in E}\Bigl\{\epsilon:\textup{OPT}[[n],0,\epsilon]<+\infty\Bigr\} with high probability
\epsilon^{*}\leftarrow+\infty;
for \epsilon\in E do
        Solve P^{\prime}([n],0,\epsilon) with Algorithm 2;
        if \textup{OPT}[[n],0,\epsilon]<+\infty and \epsilon<\epsilon^{*} then
               \epsilon^{*}\leftarrow\epsilon;
Return \epsilon^{*}
Algorithm 3 Meta-algorithm with subroutine Q-DDPAS for Composed DPAS
Theorem 3.6.

The bounded-error Algorithm 3, with Q-DDPAS as a subroutine, solves \mathcal{P} in \mathcal{O}^{*}(|E|^{3}\cdot|T|\cdot 1.754^{n}).

As for the case of Q-DDPAS for Additive DPAS, we can reduce the exponential part of Q-DDPAS complexity for Composed DPAS, by the modification indicated in Observation 2.9, thus leading to the following observation.

Observation 3.7.

A slight modification of the Q-DDPAS algorithm can reduce the complexity of Algorithm 3 to \mathcal{O}^{*}(|E|^{3}\cdot|T|\cdot 1.728^{n}).

We illustrate in Subsection 4.2 the application of Q-DDPAS for Composed DPAS to the problem 1|r_{j}|\sum w_{j}U_{j} described in Subsection 3.1, together with another similar scheduling problem.

4 Application to the scheduling literature

In Subsection 2.2 and Subsection 3.2, we provided general formulations of problems satisfying the Additive and Composed DPAS recurrences. Next, we illustrate these recurrences with several NP-hard single-machine scheduling problems, enabling their resolution with our hybrid algorithm Q-DDPAS. The list of problems is non-exhaustive but highlights the specific structures of scheduling problems that enable such recurrences. Eventually, for each problem, we compare in Table 1 the worst-case time complexity of Q-DDPAS with the complexity of the best-known moderate exponential-time exact algorithm. Q-DDPAS improves the exponential-part complexity, sometimes at the cost of an additional pseudo-polynomial factor.

4.1 Scheduling with deadlines and precedence constraints

Single-machine scheduling problems with no constraints, deadline constraints or precedence constraints satisfy the addition of optimal values of the problem on sub-instances. We provide next several examples of problems that satisfy Additive DPAS and thus can be solved by Q-DDPAS (Algorithm 1).

In Subsection 2.1, we presented the problem of minimizing the total weighted completion time with deadline constraints (1|\tilde{d}_{j}|\sum_{j}w_{j}C_{j}). The formulation needed the set T to be equal to \llbracket 0,\sum_{j=1}^{n}p_{j}\rrbracket, hence its resolution with Q-DDPAS is in \mathcal{O}^{*}(\sum p_{j}\cdot 1.728^{n}) according to Observation 2.9. Next, we give two more examples, beginning with the strongly NP-hard problem of minimizing the total weighted tardiness. Henceforth, we note p(J)=\sum_{j\in J}p_{j} the sum of the processing times of the jobs in J\subseteq[n].

Example 1 (Minimizing the total weighted tardiness, 1||\sum_{j}w_{j}T_{j}).

For each job j\in[n], we are given a weight w_{j}, a processing time p_{j}, and a due date d_{j} that indicates the time after which the job is late. Thus, a job j is late in permutation \pi if its completion time is larger than d_{j}, and we define T_{j}(\pi)=\max(0,C_{j}(\pi)-d_{j}) as its tardiness. Our problem aims at finding the permutation that minimizes the total weighted tardiness, referred to as 1||\sum_{j}w_{j}T_{j} in the scheduling literature. Let T=\llbracket 0,\sum_{j=1}^{n}p_{j}\rrbracket be the set of all possible starting times. We define the related problem P of Equation (4) as follows: for J\subseteq[n] and t\in T, \Pi(J,t)=S_{J}, and for \pi\in\Pi(J,t):

f(\pi,J,t)=\sum_{j\in J}w_{j}\max(0,C_{j}(\pi)-d_{j}+t)\,,

where \max(0,C_{j}-d_{j}+t) represents the tardiness of job j for the effective due date (d_{j}-t). Problem 1||\sum_{j}w_{j}T_{j} satisfies both Additive DPAS recurrences. Indeed, Equation (Add-DPAS) is valid with: \forall J\subseteq[n],\forall j\in J,\forall t\in T,

g(J,j,t)=w_{j}\max(0,p(J)-d_{j}+t)\,,

where the computation of g is polynomial (linear). Moreover, Equation (Add-D-DPAS) is valid for the following functions: \forall X\subseteq J\subseteq[n]\mbox{ s.t. }|X|=|J|/2,\forall t\in T,

\tau_{\text{shift}}(J,X,t)=t+p(X)\qquad\mbox{and}\qquad h(J,X,t)=0 (7)

initialized by, for j\in[n] and t\in T, \textup{OPT}[\{j\},t]=w_{j}\max(0,p_{j}-d_{j}+t)\,.
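In code, the ingredients of Example 1 are one-liners; the sketch below is purely illustrative, with job data assumed to be held in dictionaries p, w and d.

def g(J, j, t, p, w, d):
    # Cost of scheduling j last among J when the schedule starts at t (Add-DPAS).
    return w[j] * max(0, sum(p[i] for i in J) - d[j] + t)

def tau_shift(J, X, t, p):
    # Starting time of the second half J \ X in (Add-D-DPAS): t + p(X).
    return t + sum(p[i] for i in X)

def h(J, X, t):
    # The concatenation cost is zero for this problem.
    return 0

def opt_single(j, t, p, w, d):
    # Initialization OPT[{j}, t].
    return w[j] * max(0, p[j] - d[j] + t)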

We now consider the scheduling problem of minimizing the total weighted completion time under precedence constraints, which is also NP-hard. In contrast to the two previous examples, the set T is reduced to \{0\}, and the function h captures the potential infeasibility of the concatenation of problem P on two sub-instances.

Example 2 (Minimizing the total weighted completion time with precedence constraints, 1|prec|\sum_{j}w_{j}C_{j}).

We are given, for each job j\in[n], a processing time p_{j}, a weight w_{j}, and a set of precedence constraints K=\{(i,j):i\prec j\}. A pair of jobs (i,j) in K implies that i must precede j in the permutation, namely that i must be processed before j. Our problem, denoted by 1|prec|\sum_{j}w_{j}C_{j}, aims at finding the feasible permutation, i.e. one that respects the precedence constraints, that minimizes the total weighted completion time. Let T=\{0\}. Here, an instance of the problem P of Equation (4) under consideration is only indexed by the chosen subset of [n]. Thus, we consider the problem P as follows: for J\subseteq[n], \Pi(J,0)=\{\pi\in S_{J}\,|\,\pi\text{ respects }K\}\,, and for \pi\in\Pi(J,0): f(\pi,J,0)=\sum_{j\in J}w_{j}C_{j}(\pi)\,. Our problem 1|prec|\sum_{j}w_{j}C_{j} satisfies both Additive DPAS recurrences. Indeed, Equation (Add-DPAS) is valid for:

\forall J\subseteq[n],\forall j\in J,~~g(J,j,0)=\left\{\begin{aligned}&+\infty&\text{if }\exists(j,k)\in K\,|\,k\in J\\&w_{j}p(J)&\text{otherwise}\end{aligned}\right.\,,

where the computation of g is polynomial (quadratic). This problem P also satisfies (Add-D-DPAS). Indeed, Equation (Add-D-DPAS) is valid for the following functions: \forall X\subseteq J\subseteq[n] such that |X|=|J|/2,

\tau_{\text{shift}}(J,X,0)=0\qquad\mbox{and}\qquad h(J,X,0)=\left\{\begin{aligned}&+\infty&\text{if }\exists(j,k)\in K\,|\,j\in J\setminus X\text{ and }k\in X\\&p(X)\cdot\sum_{j\in J\setminus X}w_{j}&\text{otherwise}\end{aligned}\right.

where the computation of h is polynomial (quadratic). The initialization is, for j\in[n], \textup{OPT}[\{j\},0]=w_{j}p_{j}\,.
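Similarly, a small illustrative sketch of g and h for Example 2, where K is assumed to be given as a set of pairs (i, j) meaning that i must precede j:

def g(J, j, p, w, K):
    # Cost of putting j last among J; infeasible if a successor of j also belongs to J.
    if any(a == j and b in J for (a, b) in K):
        return float("inf")
    return w[j] * sum(p[i] for i in J)

def h(J, X, p, w, K):
    # Cost of scheduling X before J \ X; infeasible if a precedence goes from J \ X into X.
    rest = J - X
    if any(a in rest and b in X for (a, b) in K):
        return float("inf")
    return sum(p[i] for i in X) * sum(w[j] for j in rest)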

The three NP-hard scheduling problem examples described above can be solved with Q-DDPAS for Additive DPAS (Algorithm 1). We illustrate in Table 2 the worst-case time complexities of solving them with Q-DDPAS and compare them with the complexities of the best-known exact classical algorithms. Q-DDPAS improves the complexity of the exponent, but sometimes at the cost of a pseudo-polynomial factor.

Problem Q-DDPAS for Additive DPAS Best classical algorithm
1|d~j|wjCj1|\tilde{d}_{j}|\sum w_{j}C_{j} 𝒪(pj1.728n)\mathcal{O}^{*}\left(\sum p_{j}\cdot 1.728^{n}\right) 𝒪(2n)\mathcal{O}^{*}(2^{n}) (T’kindt et al., , 2022)
1||wjTj1||\sum w_{j}T_{j} 𝒪(pj1.728n)\mathcal{O}^{*}\left(\sum p_{j}\cdot 1.728^{n}\right) 𝒪(2n)\mathcal{O}^{*}(2^{n}) (T’kindt et al., , 2022)
1|prec|wjCj1|prec|\sum w_{j}C_{j} 𝒪(1.728n)\mathcal{O}^{*}\left(1.728^{n}\right) 𝒪((2ϵ)n)\mathcal{O}^{*}((2-\epsilon)^{n}), for small ϵ\epsilon (Cygan et al., , 2014)
Table 2: Comparison of complexities between Q-DDPAS and the best-known classical algorithm for some scheduling problems satisfying (Add-DPAS) and (Add-D-DPAS).

4.2 Scheduling with release date constraints

Single-machine scheduling problems with release date constraints do not decompose into an addition of optimal values of the problem on sub-instances, but they do allow a composition of such values. We illustrate this notion with two examples of problems that satisfy Composed DPAS and can thus be solved by Q-DDPAS (Algorithm 2).

In Subsection 3.1, we presented as an example the problem of minimizing the weighted number of late jobs with release date constraints (1|rj|wjUj1|r_{j}|\sum w_{j}U_{j}). We have shown that the two sets defining the auxiliary problem are E=0,j=1nwjE=\llbracket 0,\sum_{j=1}^{n}w_{j}\rrbracket and T=0,j=1npj{+}T=\llbracket 0,\sum_{j=1}^{n}p_{j}\rrbracket\cup\{+\infty\}. Thus, Q-DDPAS solves this problem in 𝒪((wj)3pj1.728n)\mathcal{O}^{*}\left((\sum w_{j})^{3}\cdot\sum p_{j}\cdot 1.728^{n}\right) according to Observation 3.7. Next, we present another example, the strongly NP-hard problem of minimizing the total weighted completion time with release date constraints.

Example 3 (Minimizing the total weighted completion time with release date constraints, 1|rj|wjCj1|r_{j}|\sum w_{j}C_{j}).

Each job j[n]j\in[n] has a weight wjw_{j}, a processing time pjp_{j}, and a release date rjr_{j}. This problem aims at finding the feasible permutation, namely where each job starts after its release date, for which the total weighted completion time is minimal. Let T=0,j=1npj{+}T=\llbracket 0,\sum_{j=1}^{n}p_{j}\rrbracket\cup\{+\infty\} and E=0,j=1nwjj=1npjE=\llbracket 0,\sum_{j=1}^{n}w_{j}\cdot\sum_{j=1}^{n}p_{j}\rrbracket. For a given ϵE\epsilon\in E, we consider the problem PP^{\prime} of Equation (5) as follows: J[n],tT,\forall J\subseteq[n],t\in T,

P(J,t,ϵ):minπΠ(J,t,ϵ)Cmax(π),P^{\prime}(J,t,\epsilon):~{}~{}\min_{\pi\in\Pi^{\prime}(J,t,\epsilon)}C_{\textup{max}}(\pi)\,,

where CmaxC_{\text{max}} is the makespan, and

Π(J,t,ϵ)={πSJ:Cj(π)max(t,rj)+pj and jJwjCj(π)=ϵ},\Pi^{\prime}(J,t,\epsilon)=\{\pi\in S_{J}:C_{j}(\pi)\geq\max(t,r_{j})+p_{j}\mbox{ and }\sum_{j\in J}w_{j}C_{j}(\pi)=\epsilon\}\,,

where CjC_{j} is the completion time of job jj. Problem PP^{\prime} satisfies the two Composed DPAS recurrences (Comp-DPAS) and (Comp-D-DPAS). The initialization of the recurrences is, for j[n]j\in[n], tTt\in T and ϵE\epsilon\in E,

OPT[{j},t,ϵ]={Cj:=max(t,rj)+pj, if ϵ=wjCj+,otherwise\textup{OPT}[\{j\},t,\epsilon]=\begin{cases}C_{j}:=\max(t,r_{j})+p_{j},~{}~{}\text{ if }\epsilon=w_{j}C_{j}\\ +\infty,~{}~{}\text{otherwise}\end{cases}
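For concreteness, the initialization above can be written as the following small function (a sketch; the names are ours and the full composed recurrence is omitted).

```python
INF = float("inf")

def opt_single_job(j, t, eps, p, w, r):
    """OPT[{j}, t, eps]: makespan of scheduling job j alone, starting no
    earlier than max(t, r_j), provided its weighted completion time equals eps;
    +inf otherwise."""
    C_j = max(t, r[j]) + p[j]
    return C_j if eps == w[j] * C_j else INF

# Illustrative values (ours): job 0 with p=3, w=2, r=4, offset t=1, so C_0 = 7.
p, w, r = [3], [2], [4]
print(opt_single_job(0, 1, 14, p, w, r))  # 7: feasible, since eps = w_0 * C_0 = 14
print(opt_single_job(0, 1, 13, p, w, r))  # inf: no schedule of job 0 has weighted completion time 13
```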

We synthesize in Table 3 the worst-case time complexities achieved by Q-DDPAS on the examples of scheduling problems satisfying the Composed DPAS recurrences, and compare them with the best-known classical complexities for exact algorithms. The latter come from the Inclusion-Exclusion algorithm designed by Ploton and T’kindt, (2022), which provides a generic method to solve such problems. We observe that Q-DDPAS improves the exponential part of the complexity, at the cost of a higher degree for the pseudo-polynomial factor.

Problem Q-DDPAS for Composed DPAS Best classical algorithm
1|rj|wjUj1|r_{j}|\sum w_{j}U_{j} 𝒪((wj)3pj1.728n)\mathcal{O}^{*}\left((\sum w_{j})^{3}\cdot\sum p_{j}\cdot 1.728^{n}\right) 𝒪(wjpj2n),\mathcal{O}^{*}(\sum w_{j}\cdot\sum p_{j}\cdot 2^{n})\,, (Ploton and T’kindt, , 2022)
1|rj|wjCj1|r_{j}|\sum w_{j}C_{j} 𝒪((wj)3(pj)41.728n)\mathcal{O}^{*}\left((\sum w_{j})^{3}\cdot(\sum p_{j})^{4}\cdot 1.728^{n}\right) 𝒪(wj(pj)22n),\mathcal{O}^{*}(\sum w_{j}\cdot(\sum p_{j})^{2}\cdot 2^{n})\,, (Ploton and T’kindt, , 2022)
Table 3: Comparison of complexities between Q-DDPAS and the best-known classical algorithm for some scheduling problems satisfying (Comp-DPAS) and (Comp-D-DPAS).

5 Decision-based DPAS

We saw in the previous section that the recurrences used to solve 𝒫\mathcal{P} apply to a minimization problem, possibly involving an auxiliary problem. Sometimes, the recurrence does not apply directly to a minimization problem but to a decision problem. This is the case for the 3-machine flowshop problem. In this section, we adapt the hybrid algorithm Q-DDPAS to solve this problem. Notice that the approach easily generalizes to the mm-machine flowshop problem, for m4m\geq 4.

5.1 3-machine flowshop and dynamic programming

We consider the permutation flowshop problem on 3 machines for nn jobs, where the objective is to minimize the makespan. This strongly NP-hard problem is often referred to as F3||CmaxF3||C_{\text{max}} in the literature, as mentioned by Shang et al., (2018). Each job j[n]j\in[n] consists of 3 operations OijO_{ij} for i[3]i\in[3], each operation being processed on the ii-th machine. We denote by pijp_{ij} the processing time of operation OijO_{ij}. Each machine performs at most one operation at a time. For each job jj, operations must be processed in the specific order O1jO_{1j}, O2jO_{2j}, O3jO_{3j}: the first operation is processed on the first machine, then the second operation is processed on the second machine (as soon as the first operation is finished and the second machine is available), and finally the third operation is processed on the third machine (as soon as the second operation is finished and the third machine is available). Thus, only the processing order of the jobs has to be decided, implying that a solution is entirely described by the permutation of jobs on the first machine. Hence, the problem can be formulated as

minπS[n]Cmax(π),\min\limits_{\pi\in S_{[n]}}C_{\text{max}}(\pi)\,, (8)

where CmaxC_{\text{max}} is the maximum completion time, namely the completion time of the last job processed on the last machine (third machine). Because the two techniques presented so far do not apply to (8), we present an alternative approach involving the decision counterpart of the above optimization problem.
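For reference, the makespan of a given permutation can be computed by the usual flowshop forward recursion. Below is a minimal sketch (names and the instance are ours; machines are indexed from 0 in the code) in which C[i] holds the completion time of the last operation scheduled so far on machine i+1.

```python
from itertools import permutations

def flowshop_cmax(pi, p):
    """Makespan of permutation pi for a 3-machine permutation flowshop;
    p[i][j] is the processing time of the operation of job j on machine i+1."""
    C = [0, 0, 0]  # completion time of the last operation scheduled on each machine
    for j in pi:
        C[0] = C[0] + p[0][j]             # machine 1: jobs simply follow each other
        C[1] = max(C[1], C[0]) + p[1][j]  # machine 2 waits for machine 1 and for itself
        C[2] = max(C[2], C[1]) + p[2][j]  # machine 3 waits for machine 2 and for itself
    return C[2]

# Small illustrative instance (values are ours).
p = [[3, 2, 1],   # machine 1
     [1, 4, 2],   # machine 2
     [2, 1, 3]]   # machine 3
print(flowshop_cmax([0, 1, 2], p))                                       # makespan of one permutation
print(min(flowshop_cmax(pi, p) for pi in permutations(range(3))))        # brute-force optimum
```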

We introduce below a decision problem for deriving the recurrences. For that, we define the bounded set

T=0,j[n],i[3]pij.T=\left\llbracket 0,\sum_{j\in[n],i\in[3]}p_{ij}\right\rrbracket\subseteq\mathbb{Z}\,.
Definition 5.1 (Decision problem).

For J[n]J\subseteq[n], β=(β2,β3)T2\vec{\beta}=(\beta_{2},\beta_{3})\in T^{2} and ϵ=(ϵ2,ϵ3)T2\vec{\epsilon}=(\epsilon_{2},\epsilon_{3})\in T^{2}, we define the decision problem D(J,β,ϵ)D(J,\vec{\beta},\vec{\epsilon}) on a sub-instance associated with jobs in JJ as the following question: “Does there exist a permutation πSJ\pi\in S_{J} such that, for i{2,3}i\in\{2,3\}, bi(π)βi,b_{i}(\pi)\geq\beta_{i}\,, and ei(π)ϵi?"e_{i}(\pi)\leq\epsilon_{i}\,?", where bi(π)b_{i}(\pi), respectively ei(π)e_{i}(\pi), denotes the time at which the first operation begins, respectively the last operation ends, on the ii-th machine.

In other words, problem D(J,β,ϵ)D(J,\vec{\beta},\vec{\epsilon}) asks whether or not there exists a feasible permutation of the jobs in JJ that fits between the two temporal fronts β\vec{\beta} and ϵ\vec{\epsilon}. Notice that it is not necessary to impose any beginning and ending time for the first machine (i=1i=1). Indeed, the problem is time-invariant, so we can always consider that the schedule starts at time 0 on the first machine, whose total completion time is then known and equal to the sum of the processing times of the scheduled jobs. Notice that the number of parameters is four for the 3-machine flowshop, and generalizes to 2(m1)2(m-1) parameters for the mm-machine flowshop.

With these notations, 𝒫\mathcal{P} can be cast as follows:

𝒫:mincT{c:D[[n],(0,0),(c,c)]=True}.\mathcal{P}:\hskip 17.07164pt\min_{c\in T}\Bigl{\{}c:D[[n],(0,0),(c,c)]=\text{True}\Bigr{\}}\,. (9)

The decision problem DD satisfies both recurrences (Dec-DPAS) and (Dec-D-DPAS) below.

Property 5.2 (Decision DPAS).

For all J[n]J\subseteq[n] with |J|2|J|\geq 2, βT2\vec{\beta}\in T^{2} and ϵT2\vec{\epsilon}\in T^{2},

D[J,β,ϵ]=jJ,t[β,ϵ](D[{j},β,t]D[J{j},tp1j,ϵp1j]),D[J,\vec{\beta},\vec{\epsilon}]=\bigvee\limits_{j\in J,\atop\vec{t}\in[\vec{\beta},\vec{\epsilon}]}\left(D[\{j\},\vec{\beta},\vec{t}]\land D[J\setminus\{j\},\vec{t}\ominus p_{1j},\vec{\epsilon}\ominus p_{1j}]\right)\,, (Dec-DPAS)

where t[β,ϵ]\vec{t}\in[\vec{\beta},\vec{\epsilon}] means that the ii-th coordinate of t\vec{t} is between the ii-th coordinates of β\vec{\beta} and ϵ\vec{\epsilon}, and where operation vc\vec{v}\ominus c, for a vector v\vec{v} and a constant cc, subtracts cc from each coordinate of v\vec{v}.

This latter recurrence enables 𝒫\mathcal{P} to be solved by a classical dynamic programming algorithm.

Lemma 5.3.

(Dec-DPAS) solves 𝒫\mathcal{P} in 𝒪(|T|42n)\mathcal{O}^{*}(|T|^{4}\cdot 2^{n}).

Proof.

First, we can show that, for given β0,ϵ0T2\vec{\beta}_{0},\vec{\epsilon}_{0}\in T^{2}, (Dec-DPAS) solves D([n],β0,ϵ0)D([n],\vec{\beta}_{0},\vec{\epsilon}_{0}) in 𝒪(|T|42n)\mathcal{O}^{*}(|T|^{4}\cdot 2^{n}), following essentially the same lines as the proof of Lemma 2.2. Second, to solve 𝒫\mathcal{P}, we make a dichotomic search over TT to find the minimum cTc\in T such that D([n],(0,0),(c,c))D([n],(0,0),(c,c)) is True, according to Equation (9). This search is valid because D([n],(0,0),(c,c))D([n],(0,0),(c,c)) is monotone in cc: a schedule that ends by deadline cc also ends by any larger deadline. Thus, (Dec-DPAS) is called log2(|T|)\log_{2}(|T|) times. Because log2(|T|)\log_{2}(|T|) is polynomial in the size of the instance, the total complexity is 𝒪(log2(|T|)|T|42n)=𝒪(|T|42n).\mathcal{O}^{*}(\log_{2}(|T|)\cdot|T|^{4}\cdot 2^{n})=\mathcal{O}^{*}(|T|^{4}\cdot 2^{n})\,. ∎
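The dichotomic search used in this proof can be sketched as follows, assuming a hypothetical routine decide(c) that returns the truth value of D[[n],(0,0),(c,c)]; the search is valid because this value is monotone in c.

```python
def min_makespan(T_max, decide):
    """Smallest c in [0, T_max] such that decide(c) is True, found with about
    log2(T_max) calls to decide; decide is assumed monotone in c, and decide(T_max)
    is assumed True (T_max is the total processing time)."""
    lo, hi = 0, T_max
    while lo < hi:
        mid = (lo + hi) // 2
        if decide(mid):
            hi = mid          # a schedule finishing by mid exists: search below
        else:
            lo = mid + 1      # no schedule finishes by mid: search above
    return lo

# Toy decision oracle (ours): True as soon as c >= 17.
print(min_makespan(100, lambda c: c >= 17))   # 17
```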

Property 5.4 (Decision Dichotomic DPAS).

For all J[n]J\subseteq[n] of even cardinality, βT2\vec{\beta}\in T^{2} and ϵT2\vec{\epsilon}\in T^{2},

D[J,β,ϵ]=XJ:|X|=|J|/2,t[β,ϵ](D[X,β,t]D[JX,tjXp1j,ϵjXp1j]).D[J,\vec{\beta},\vec{\epsilon}]=\bigvee\limits_{X\subseteq J:|X|=|J|/2,\atop\vec{t}\in[\vec{\beta},\vec{\epsilon}]}\left(D[X,\vec{\beta},\vec{t}]\land D[J\setminus X,\vec{t}\ominus\sum_{j\in X}p_{1j},\vec{\epsilon}\ominus\sum_{j\in X}p_{1j}]\right)\,. (Dec-D-DPAS)
Lemma 5.5.

(Dec-D-DPAS) solves 𝒫\mathcal{P} in ω(|T|42n)\omega(|T|^{4}\cdot 2^{n}).

Proof.

The proof is similar to that of Lemma 2.5, combined with the argument that the dichotomic search over TT only adds a factor that is polynomial in the size of the instance, as in the proof of Lemma 5.3. ∎

Once again, we observe that recurrence (Dec-DPAS) outperforms recurrence (Dec-D-DPAS) for solving our problem 𝒫\mathcal{P} by classical dynamic programming. In the next subsection, we describe how we adapt Q-DDPAS to take advantage of these two recurrences to solve the 3-machine flowshop problem.

5.2 Hybrid algorithm for Decision-based DPAS

We call Q-Dec-DDPAS the adapted decision version of Q-DDPAS. The main difference is that instead of searching for a minimum value in a set in recurrence (Add-D-DPAS) or (Comp-D-DPAS), we search for a True value in a set in recurrence (Dec-D-DPAS). Thus, it essentially amounts to replacing Quantum Minimum Finding with the algorithm of Boyer et al., (1998) specified below, which extends Grover Search (Grover, , 1996).

Definition 5.6 (Grover Search Extension (Boyer et al., , 1998)).

Let f:[n]{0,1}f:[n]\rightarrow\{0,1\} be a function. Grover Search Extension computes, with high probability, the logical OR of all the values of ff and, when this OR equals 1, a corresponding element x[n]x\in[n] such that f(x)=1f(x)=1. The complexity of Grover Search Extension is 𝒪(nCf(n))\mathcal{O}\left(\sqrt{n}\cdot C_{f}(n)\right), where 𝒪(Cf(n))\mathcal{O}(C_{f}(n)) is the complexity of computing a value of ff.

Notice that we cannot use the original Grover Search because we do not know the number of elements xx such that f(x)=1f(x)=1. The generalization of Boyer et al., (1998) enables us to deal with an unknown number of solutions while keeping the same complexity as Grover Search. Moreover, if there are tt\in\mathbb{N}^{*} solutions, the complexity is 𝒪(n/tCf(n))\mathcal{O}\left(\sqrt{n/t}\cdot C_{f}(n)\right), but, having no bound on tt when we call Grover Search Extension, we omit this refinement in the complexity.
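To fix ideas, the following classical stand-in reproduces only the input/output behaviour of Grover Search Extension (the OR of the values of ff, together with a witness when one exists); it scans all nn points, whereas the quantum routine of Boyer et al., (1998) needs only O(√n) evaluations of ff. The function name is ours.

```python
def or_search(f, n):
    """Classical stand-in for Grover Search Extension: returns (OR of f over [n],
    some x with f(x) = 1, or None if there is none). The quantum routine produces
    the same output with O(sqrt(n)) evaluations of f, even when the number of
    solutions is unknown."""
    for x in range(n):
        if f(x) == 1:
            return True, x
    return False, None

# Toy function (ours) with two solutions among 12 points.
print(or_search(lambda x: 1 if x in (5, 9) else 0, 12))   # (True, 5)
```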

Input: β0,ϵ0T2\vec{\beta}_{0},\vec{\epsilon}_{0}\in T^{2}, decision problem DD satisfying (Dec-DPAS) and (Dec-D-DPAS)
Output: D[[n],β0,ϵ0]D[[n],\vec{\beta}_{0},\vec{\epsilon}_{0}] with high probability
begin classical part
        for X[n]:|X|=n/4X\subseteq[n]:|X|=n/4 and β,ϵT2\vec{\beta},\vec{\epsilon}\in T^{2} do
               Compute D[X,β,ϵ]D[X,\vec{\beta},\vec{\epsilon}] with (Dec-DPAS) and store the results in the QRAM;
              
       
begin quantum part
        Apply Grover Search Extension with (Dec-D-DPAS) to find D[[n],β0,ϵ0]D[[n],\vec{\beta}_{0},\vec{\epsilon}_{0}];
        To get the values required by the Grover Search Extension above (the values D[J,β,ϵ]D[J,\vec{\beta},\vec{\epsilon}] for J[n]J\subseteq[n] of size n/2n/2 and β,ϵT2\vec{\beta},\vec{\epsilon}\in T^{2}), apply Grover Search Extension with (Dec-D-DPAS);
        To get the values required by this second Grover Search Extension (the values D[X,β,ϵ]D[X,\vec{\beta}^{\prime},\vec{\epsilon}^{\prime}] for X[n]X\subseteq[n] of size n/4n/4 and β,ϵT2\vec{\beta}^{\prime},\vec{\epsilon}^{\prime}\in T^{2}), read them from the QRAM;
       
Algorithm 4 Q-Dec-DDPAS for 3-machine flowshop
Lemma 5.7.

Let β0,ϵ0T2\vec{\beta}_{0},\vec{\epsilon}_{0}\in T^{2}. The bounded-error algorithm Q-Dec-DDPAS (Algorithm 4) solves D([n],β0,ϵ0)D([n],\vec{\beta}_{0},\vec{\epsilon}_{0}) in 𝒪((pij)41.754n)\mathcal{O}^{*}((\sum p_{ij})^{4}\cdot 1.754^{n}).

The proof is essentially the same as that of Theorem 2.8 and is detailed in the Supplementary Materials. All the details of correctness and low-level implementation are given in our companion paper (Grange et al., , 2024).

Input: 3-machine flowshop
Output: Minimum makespan with high probability
clo ← 0; chi ← the largest element of TT;
while clo < chi do
        c ← ⌊(clo + chi)/2⌋;
        Solve D([n],(0,0),(c,c))D([n],(0,0),(c,c)) with Algorithm 4;
        if D[[n],(0,0),(c,c)]D[[n],(0,0),(c,c)] then chi ← c;
        else clo ← c+1;
       
Return clo
Algorithm 5 Meta-algorithm with subroutine Q-Dec-DDPAS for the 3-machine flowshop
Theorem 5.8.

The bounded-error Algorithm 5 solves the 3-machine flowshop in 𝒪((pij)41.754n)\mathcal{O}^{*}((\sum p_{ij})^{4}\cdot 1.754^{n}) with high probability.

Once again, as mentioned in Observation 2.9, the complexity can be reduced thanks to a slight modification of the Q-Dec-DDPAS subroutine, leading to the following observation.

Observation 5.9.

A slight modification of Algorithm 5 reduces the complexity of solving the 3-machine flowshop to 𝒪((pij)41.728n)\mathcal{O}^{*}((\sum p_{ij})^{4}\cdot 1.728^{n}) with high probability.

This new method improves on the best-known classical algorithms, which run in 𝒪(3n)\mathcal{O}^{*}(3^{n}), or in 𝒪(M2n)\mathcal{O}^{*}(M\cdot 2^{n}) if there exists a constant MM such that pijMp_{ij}\leq M for all i[3],j[n]i\in[3],j\in[n], as presented by Shang et al., (2018) and Ploton and T’kindt, (2023). The hybrid quantum-classical bounded-error Algorithm 5 reduces the exponential part of the time complexity at the cost of a pseudo-polynomial factor. In most cases, this factor is negligible because the processing times of 3-machine flowshop instances are small compared to the value of the exponential part. Nevertheless, we present in the next subsection a way to get rid of this factor with an approximation scheme.

It is worth noting that the previous algorithm easily generalizes to the mm-machine flowshop problem. Indeed, the only difference is the description of the temporal front, which requires 2(m1)2(m-1) parameters.

Observation 5.10.

The bounded-error Algorithm 5 generalizes to solve the mm-machine flowshop in 𝒪((pij)2(m1)1.728n)\mathcal{O}^{*}((\sum p_{ij})^{2(m-1)}\cdot 1.728^{n}) with high probability.

Notice that Ploton and T’kindt, (2023) present a classical resolution for the mm-machine flowshop by Inclusion-Exclusion in 𝒪((pij)m2n)\mathcal{O}^{*}((\sum p_{ij})^{m}\cdot 2^{n}).

5.3 Approximation scheme for the 3-machine flowshop

We present an approximation scheme for the 3-machine flowshop problem that trades the pseudo-polynomial factor in the complexity of Q-Dec-DDPAS, as well as the optimality of the algorithm, for a polynomial factor in 1ϵ\frac{1}{\epsilon} and an approximation factor of (1+ϵ)(1+\epsilon). In other words, we provide Algorithm 6 that finds a solution in time 𝒪(1ϵ41.728n)\mathcal{O}^{*}\left(\frac{1}{\epsilon^{4}}\cdot 1.728^{n}\right) whose makespan is not greater than (1+ϵ)(1+\epsilon) times the optimal makespan: it is thus an ϵ\epsilon-approximation scheme. Our algorithm belongs to the class of moderate exponential-time approximation algorithms. Notice that the 3-machine flowshop problem does not admit an FPTAS (fully polynomial-time approximation scheme) because it is strongly NP-hard, meaning that no ϵ\epsilon-approximation algorithm solves the 3-machine flowshop in time 𝒪(poly(n,1ϵ))\mathcal{O}\left(poly(n,\frac{1}{\epsilon})\right) unless P = NP (Vazirani, , 2001). In comparison, Hall, (1998) provides for the mm-machine flowshop problem an FPT-AS (fixed-parameter tractable approximation scheme), namely an ϵ\epsilon-approximation algorithm that runs in time 𝒪(f(ϵ,κ)poly(n))\mathcal{O}(f(\epsilon,\kappa)\cdot poly(n)) for κ\kappa a fixed parameter of the instance and ff a computable function. Hall, (1998) chooses κ\kappa to be the number of machines of the flowshop, leading to an FPT-AS that runs in time 𝒪(n3.5(mϵ)m4ϵ2)\mathcal{O}\left(n^{3.5}\cdot(\frac{m}{\epsilon})^{\frac{m^{4}}{\epsilon^{2}}}\right). In our setting, m=3m=3.

Input: ϵ>0\epsilon>0, 3-machine flowshop on nn jobs with processing times {pij:i[3],j[n]}\{p_{ij}:i\in[3],j\in[n]\}
Output: solution at most 1+ϵ1+\epsilon times the optimal solution
P=maxi[3],j[n]{pij}P=\max\limits_{i\in[3],j\in[n]}\{p_{ij}\};
K=ϵPn+2K=\frac{\epsilon P}{n+2};
for i[3],j[n]i\in[3],j\in[n] do
       pij=pijKp^{\prime}_{ij}=\lceil\frac{p_{ij}}{K}\rceil;
      
Solve the 3-machine flowshop on nn jobs with new processing times {pij:i[3],j[n]}\{p^{\prime}_{ij}:i\in[3],j\in[n]\} with Algorithm 5 that outputs permutation π\pi^{\prime};
Return π\pi^{\prime}
Algorithm 6 Hybrid approximation scheme for the 3-machine flowshop
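The rounding step of Algorithm 6 is straightforward to write down. Below is a sketch in which solve_exact stands for Algorithm 5 and is replaced, for illustration only, by a brute-force enumeration of permutations; all names and the instance are ours.

```python
from itertools import permutations
from math import ceil

def cmax(pi, p):
    """Makespan of permutation pi on a 3-machine flowshop (p[i][j] as before)."""
    C = [0, 0, 0]
    for j in pi:
        C[0] += p[0][j]
        C[1] = max(C[1], C[0]) + p[1][j]
        C[2] = max(C[2], C[1]) + p[2][j]
    return C[2]

def solve_exact(p, n):
    """Placeholder for Algorithm 5: brute-force over all permutations."""
    return min(permutations(range(n)), key=lambda pi: cmax(pi, p))

def approx_flowshop(p, n, eps):
    """Rounding step of Algorithm 6: round the processing times up to
    multiples of K = eps * P / (n + 2), then solve the rounded instance."""
    P = max(max(row) for row in p)
    K = eps * P / (n + 2)
    p_rounded = [[ceil(pij / K) for pij in row] for row in p]
    return solve_exact(p_rounded, n)

# Small illustrative instance (values are ours).
p = [[3, 6, 2], [5, 1, 4], [2, 2, 6]]
pi = approx_flowshop(p, 3, eps=0.5)
print(pi, cmax(pi, p))   # makespan of the returned permutation on the original instance
```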
Lemma 5.11.

Let π\pi^{*} be an optimal solution of the 3-machine flowshop problem, for the processing times {pij:i[3],j[n]}\{p_{ij}:i\in[3],j\in[n]\}. Let π\pi^{\prime} be the output of Algorithm 6. We have

Cmax(π)(1+ϵ)Cmax(π).C_{\textup{max}}(\pi^{\prime})\leq(1+\epsilon)\cdot C_{\textup{max}}(\pi^{*})\,.

Next, we introduce two observations necessary to prove Lemma 5.11. The proofs are omitted because of their simplicity.

Observation 5.12.

Let π\pi be a permutation and let α+\alpha\in\mathbb{R}^{*}_{+}. We denote by Cmax(π)C_{\textup{max}}(\pi) the makespan of π\pi for the 3-machine flowshop with processing times {pij:i[3],j[n]}\{p_{ij}:i\in[3],j\in[n]\}, and by Cmax(π)C_{\textup{max}}^{\prime}(\pi) the makespan of π\pi for the 3-machine flowshop with processing times {pij:i[3],j[n]}\{p^{\prime}_{ij}:i\in[3],j\in[n]\} such that pij:=αpijp^{\prime}_{ij}:=\alpha p_{ij} for all i,ji,j. Then, Cmax(π)=αCmax(π).C_{\textup{max}}^{\prime}(\pi)=\alpha C_{\textup{max}}(\pi)\,. Notice that if pijαpijp^{\prime}_{ij}\leq\alpha p_{ij}, we still have Cmax(π)αCmax(π)C_{\textup{max}}^{\prime}(\pi)\leq\alpha C_{\textup{max}}(\pi), even if the critical path of π\pi achieving CmaxC_{\textup{max}}^{\prime} may differ from the one achieving CmaxC_{\textup{max}}.

Observation 5.13.

Let π\pi be a permutation and let β\beta\in\mathbb{R} be such that βmini[3],j[n]{pij}\beta\geq-\min\limits_{i\in[3],j\in[n]}\{p_{ij}\}. We denote by Cmax(π)C_{\textup{max}}(\pi) the makespan of π\pi for the 3-machine flowshop with processing times {pij:i[3],j[n]}\{p_{ij}:i\in[3],j\in[n]\}, and by Cmax′′(π)C_{\textup{max}}^{\prime\prime}(\pi) the makespan of π\pi for the 3-machine flowshop with processing times {pij′′:i[3],j[n]}\{p^{\prime\prime}_{ij}:i\in[3],j\in[n]\} such that pij′′:=pij+βp^{\prime\prime}_{ij}:=p_{ij}+\beta for all i[3],j[n]i\in[3],j\in[n]. Then, Cmax′′(π)Cmax(π)+β(n+2)C_{\textup{max}}^{\prime\prime}(\pi)\leq C_{\textup{max}}(\pi)+\beta(n+2)\,, since a critical path contains at most n+m1=n+2n+m-1=n+2 operations. Notice that if pij′′pij+βp^{\prime\prime}_{ij}\leq p_{ij}+\beta, we still have Cmax′′(π)Cmax(π)+β(n+2)C_{\textup{max}}^{\prime\prime}(\pi)\leq C_{\textup{max}}(\pi)+\beta(n+2), even if the critical path of π\pi achieving Cmax′′C_{\textup{max}}^{\prime\prime} may differ from the one achieving CmaxC_{\textup{max}}.
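Before the proof, a quick numerical sanity check of the two observations on a random instance (a sketch; the makespan routine and the instance are ours).

```python
import random

def cmax(pi, p):
    """Makespan of permutation pi on a 3-machine flowshop (p[i][j] as before)."""
    C = [0, 0, 0]
    for j in pi:
        C[0] += p[0][j]
        C[1] = max(C[1], C[0]) + p[1][j]
        C[2] = max(C[2], C[1]) + p[2][j]
    return C[2]

random.seed(0)
n, alpha, beta = 6, 0.5, 2
p = [[random.randint(1, 9) for _ in range(n)] for _ in range(3)]
pi = list(range(n))
random.shuffle(pi)

scaled = [[alpha * pij for pij in row] for row in p]   # Observation 5.12: scale by alpha
shifted = [[pij + beta for pij in row] for row in p]   # Observation 5.13: shift by beta
assert abs(cmax(pi, scaled) - alpha * cmax(pi, p)) < 1e-9
assert cmax(pi, shifted) <= cmax(pi, p) + beta * (n + 2)
print("Observations 5.12 and 5.13 hold on this instance.")
```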

Proof of Lemma 5.11.

Let ϵ>0\epsilon>0. The new processing times pij:=pijKp^{\prime}_{ij}:=\lceil\frac{p_{ij}}{K}\rceil satisfy pijKpij<pijK+1\frac{p_{ij}}{K}\leq p^{\prime}_{ij}<\frac{p_{ij}}{K}+1. We denote by CmaxC_{\textup{max}}^{\prime} the makespan of the new problem, i.e. the 3-machine flowshop problem with processing times {pij:i[3],j[n]}\{p^{\prime}_{ij}:i\in[3],j\in[n]\}.

On the one hand, we have pij<pijK+1p^{\prime}_{ij}<\frac{p_{ij}}{K}+1, for all i[3],j[n]i\in[3],j\in[n]. Thus, applying Observations 5.12 and 5.13 (with α=1/K\alpha=1/K and β=1\beta=1) to the optimal permutation π\pi^{*}, we get Cmax(π)Cmax(π)K+n+2,C_{\textup{max}}^{\prime}(\pi^{*})\leq\frac{C_{\textup{max}}(\pi^{*})}{K}+n+2\,, that is, since K>0K>0,

KCmax(π)Cmax(π)+K(n+2).KC_{\textup{max}}^{\prime}(\pi^{*})\leq C_{\textup{max}}(\pi^{*})+K(n+2)\,. (10)

On the other hand, we have pijKpij\frac{p_{ij}}{K}\leq p^{\prime}_{ij}. Thus, applying Observation 5.12 (with α=1/K\alpha=1/K) to the output permutation π\pi^{\prime} of Algorithm 6, we get Cmax(π)KCmax(π),\frac{C_{\textup{max}}(\pi^{\prime})}{K}\leq C_{\textup{max}}^{\prime}(\pi^{\prime})\,, that is, since K>0K>0,

Cmax(π)\displaystyle C_{\textup{max}}(\pi^{\prime}) KCmax(π)KCmax(π)\displaystyle\leq KC_{\textup{max}}^{\prime}(\pi^{\prime})\leq KC_{\textup{max}}^{\prime}(\pi^{*}) (11)
Cmax(π)+K(n+2)=Cmax(π)+ϵP\displaystyle\leq C_{\textup{max}}(\pi^{*})+K(n+2)=C_{\textup{max}}(\pi^{*})+\epsilon P (12)
Cmax(π)+ϵCmax(π)=(1+ϵ)Cmax(π),\displaystyle\leq C_{\textup{max}}(\pi^{*})+\epsilon C_{\textup{max}}(\pi^{*})=(1+\epsilon)C_{\textup{max}}(\pi^{*})\,, (13)

where (11) comes from the fact that π\pi^{\prime} is an optimal solution for the makespan CmaxC_{\textup{max}}^{\prime} (with high probability, since it is returned by the bounded-error Algorithm 5), (12) results from Equation (10), and (13) holds because the makespan is always at least P=maxi[3],j[n]{pij}P=\max\limits_{i\in[3],j\in[n]}\{p_{ij}\}. ∎

Theorem 5.14.

Algorithm 6 is an approximation scheme for the 3-machine flowshop problem and outputs a solution whose makespan is at most (1+ϵ)(1+\epsilon) times the optimal value in time 𝒪(1ϵ41.728n).\mathcal{O}^{*}\left(\frac{1}{\epsilon^{4}}\cdot 1.728^{n}\right)\,.

Proof.

First, according to Lemma 5.11, Algorithm 6 outputs a solution whose makespan is at most (1+ϵ)(1+\epsilon) times the optimal value. Second, Algorithm 5 solves the new problem in time 𝒪((pij)41.728n)=𝒪(1ϵ41.728n)\mathcal{O}^{*}((\sum p_{ij}^{\prime})^{4}\cdot 1.728^{n})=\mathcal{O}^{*}(\frac{1}{\epsilon^{4}}\cdot 1.728^{n}). Indeed,

pij(pijK+1)=1Kpij+3n1K3nP+3n=3n(n+2)ϵ+3n.\displaystyle\sum p_{ij}^{\prime}\leq\sum\left(\frac{p_{ij}}{K}+1\right)=\frac{1}{K}\sum p_{ij}+3n\leq\frac{1}{K}\cdot 3nP+3n=\frac{3n(n+2)}{\epsilon}+3n\,.

Thus, pijpoly(n,1ϵ)\sum p_{ij}^{\prime}\leq poly(n,\frac{1}{\epsilon}). ∎

6 Conclusion

In this work, we propose generalized dynamic programming recurrences for NP-hard scheduling problems that allow a broad class of such problems to be solved with our hybrid algorithm Q-DDPAS, an adapted version of the quantum-classical algorithm of Ambainis et al., (2019). Q-DDPAS provides a quantum speed-up for their exact resolution. Specifically, our hybrid algorithm reduces the best-known classical time complexity, often equal to 𝒪(2n)\mathcal{O}^{*}(2^{n}) for single-machine problems and 𝒪(3n)\mathcal{O}^{*}(3^{n}) for the 3-machine flowshop, to 𝒪(1.728n)\mathcal{O}^{*}(1.728^{n}), sometimes at the cost of an additional pseudo-polynomial factor, as summarized in Table 1. Future work will be dedicated to widening the range of problems for which we can achieve a quantum speed-up. This requires finding NP-hard problems for which a solution is described by a permutation and that satisfy a dynamic programming property admitting a dichotomic version, relying either on an optimization or a decision problem. A first lead is to consider the 33-machine jobshop problem. Indeed, a promising description of a solution of this problem is the Bierwirth vector (Bierwirth, , 1995), which encodes a solution as a permutation with repetition of size 3n3n, where nn is the number of jobs.

Acknowledgements.

This work has been partially financed by the ANRT through the PhD number 2021/0281 with CIFRE funds.

References

  • Ambainis et al., (2019) Ambainis, A., Balodis, K., Iraids, J., Kokainis, M., Prūsis, K., and Vihrovs, J. (2019). Quantum speedups for exponential-time dynamic programming algorithms. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1783–1793. SIAM.
  • Bernstein and Vazirani, (1993) Bernstein, E. and Vazirani, U. (1993). Quantum complexity theory. In Proceedings of the twenty-fifth annual ACM symposium on Theory of computing, pages 11–20.
  • Bierwirth, (1995) Bierwirth, C. (1995). A generalized permutation approach to job shop scheduling with genetic algorithms. Operations-Research-Spektrum, 17(2):87–92.
  • Boyer et al., (1998) Boyer, M., Brassard, G., Høyer, P., and Tapp, A. (1998). Tight bounds on quantum searching. Fortschritte der Physik: Progress of Physics, 46(4-5):493–505.
  • Cerezo et al., (2021) Cerezo, M., Arrasmith, A., Babbush, R., Benjamin, S. C., Endo, S., Fujii, K., McClean, J. R., Mitarai, K., Yuan, X., Cincio, L., et al. (2021). Variational quantum algorithms. Nature Reviews Physics, 3(9):625–644.
  • Cygan et al., (2014) Cygan, M., Pilipczuk, M., Pilipczuk, M., and Wojtaszczyk, J. O. (2014). Scheduling partially ordered jobs faster than 2n2^{n}. Algorithmica, 68:692–714.
  • Dürr and Høyer, (1996) Dürr, C. and Høyer, P. (1996). A quantum algorithm for finding the minimum. arXiv preprint quant-ph/9607014.
  • Farhi et al., (2014) Farhi, E., Goldstone, J., and Gutmann, S. (2014). A quantum approximate optimization algorithm. arXiv preprint arXiv:1411.4028.
  • Giovannetti et al., (2008) Giovannetti, V., Lloyd, S., and Maccone, L. (2008). Quantum random access memory. Physical review letters, 100(16):160501.
  • Graham et al., (1979) Graham, R. L., Lawler, E. L., Lenstra, J. K., and Kan, A. R. (1979). Optimization and approximation in deterministic sequencing and scheduling: a survey. In Annals of discrete mathematics, volume 5, pages 287–326. Elsevier.
  • Grange et al., (2023a) Grange, C., Bourreau, E., Poss, M., and T’kindt, V. (2023a). Quantum speed-ups for single-machine scheduling problems. In Proceedings of the Companion Conference on Genetic and Evolutionary Computation, pages 2224–2231.
  • Grange et al., (2023b) Grange, C., Poss, M., and Bourreau, E. (2023b). An introduction to variational quantum algorithms for combinatorial optimization problems. 4OR, pages 1–41.
  • Grange et al., (2024) Grange, C., Poss, M., Bourreau, E., t’Kindt, V., and Ploton, O. (2024). Companion Paper: Moderate Exponential-time Quantum Dynamic Programming Across the Subsets for Scheduling Problems. HAL preprint, https://hal.science/hal-04296238.
  • Grover, (1996) Grover, L. K. (1996). A fast quantum mechanical algorithm for database search. In Proceedings of the twenty-eighth annual ACM symposium on Theory of computing, pages 212–219.
  • Hall, (1998) Hall, L. A. (1998). Approximability of flow shop scheduling. Mathematical Programming, 82(1-2):175–190.
  • Held and Karp, (1970) Held, M. and Karp, R. M. (1970). The traveling-salesman problem and minimum spanning trees. Operations Research, 18(6):1138–1162.
  • Kurowski et al., (2023) Kurowski, K., Pecyna, T., Slysz, M., Rozycki, R., Waligora, G., and Weglarz, J. (2023). Application of quantum approximate optimization algorithm to job shop scheduling problem. European Journal of Operational Research, 310(2):518–528.
  • Lawler, (1990) Lawler, E. L. (1990). A dynamic programming algorithm for preemptive scheduling of a single machine to minimize the number of late jobs. Annals of Operations Research, 26:125–133.
  • Miyamoto et al., (2020) Miyamoto, M., Iwamura, M., Kise, K., and Gall, F. L. (2020). Quantum speedup for the minimum steiner tree problem. In COCOON 2020, Atlanta, GA, USA, August 29–31, 2020, Proceedings, pages 234–245. Springer.
  • Nannicini, (2019) Nannicini, G. (2019). Performance of hybrid quantum-classical variational heuristics for combinatorial optimization. Physical Review E, 99(1):013304.
  • Nannicini, (2022) Nannicini, G. (2022). Fast quantum subroutines for the simplex method. Operations Research.
  • Pinedo, (2012) Pinedo, M. L. (2012). Scheduling, volume 29. Springer.
  • Ploton and T’kindt, (2022) Ploton, O. and T’kindt, V. (2022). Exponential-time algorithms for parallel machine scheduling problems. Journal of Combinatorial Optimization, 44(5):3405–3418.
  • Ploton and T’kindt, (2023) Ploton, O. and T’kindt, V. (2023). Moderate worst-case complexity bounds for the permutation flowshop scheduling problem using inclusion–exclusion. Journal of Scheduling, 26(2):137–145.
  • Ruan et al., (2020) Ruan, Y., Marsh, S., Xue, X., Liu, Z., Wang, J., et al. (2020). The quantum approximate algorithm for solving traveling salesman problem. CMC, 63(3):1237–1247.
  • Shang et al., (2018) Shang, L., Lenté, C., Liedloff, M., and T’Kindt, V. (2018). Exact exponential algorithms for 3-machine flowshop scheduling problems. Journal of Scheduling, 21:227–233.
  • Shimizu and Mori, (2022) Shimizu, K. and Mori, R. (2022). Exponential-time quantum algorithms for graph coloring problems. Algorithmica, pages 1–19.
  • Sutter et al., (2020) Sutter, D., Nannicini, G., Sutter, T., and Woerner, S. (2020). Quantum speedups for convex dynamic programming. arXiv preprint arXiv:2011.11654.
  • Tabi et al., (2020) Tabi, Z., El-Safty, K. H., Kallus, Z., Hága, P., Kozsik, T., Glos, A., and Zimborás, Z. (2020). Quantum optimization for the graph coloring problem with space-efficient embedding. In 2020 IEEE (QCE), pages 56–62. IEEE.
  • T’kindt et al., (2022) T’kindt, V., Della Croce, F., and Liedloff, M. (2022). Moderate exponential-time algorithms for scheduling problems. 4OR, pages 1–34.
  • Vazirani, (2001) Vazirani, V. V. (2001). Approximation algorithms, volume 1. Springer.
  • Woeginger, (2003) Woeginger, G. J. (2003). Exact algorithms for np-hard problems: A survey. In Combinatorial Optimization—Eureka, You Shrink! Papers, pages 185–207. Springer.

Appendix A Useful upper bounds

We define the binary entropy of ϵ]0,1[\epsilon\in]0,1[ by H(ϵ)=(ϵlog2(ϵ)+(1ϵ)log2(1ϵ)).H(\epsilon)=-(\epsilon\log_{2}(\epsilon)+(1-\epsilon)\log_{2}(1-\epsilon))\,. We recall some useful upper bounds on binomial coefficients from Ambainis et al., (2019):

(nk)2H(kn)n,k1,nandi=1k(ni)2H(kn)n,k1,n2.\tiny\binom{n}{k}\leq 2^{H\left(\frac{k}{n}\right)\cdot n},\;\,\forall k\in\llbracket 1,n\rrbracket\qquad\mbox{and}\qquad\sum_{i=1}^{k}\binom{n}{i}\leq 2^{H\left(\frac{k}{n}\right)\cdot n},\,\forall k\in\left\llbracket 1,\frac{n}{2}\right\rrbracket.

Thus, it leads to the following upper bounds to compute the complexities of interest:

k=1n/4(nk)20.811n,k=10.945n/4(nk)20.789n,(nn/2)(n/2n/4)20.75n,(nn/2)(n/2n/4)(n/40.945n/4)20.789n\tiny\sum_{k=1}^{n/4}\binom{n}{k}\leq 2^{0.811n},\sum_{k=1}^{0.945\cdot n/4}\binom{n}{k}\leq 2^{0.789n},\sqrt{\binom{n}{n/2}\binom{n/2}{n/4}}\leq 2^{0.75n},\sqrt{\binom{n}{n/2}\binom{n/2}{n/4}\binom{n/4}{0.945\cdot n/4}}\leq 2^{0.789n}
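These exponents can be reproduced numerically; the short computation below also shows where the bases 1.754 ≈ 2^0.811 and 1.728 ≈ 2^0.789 used in the complexities come from (a sketch; the script is ours).

```python
from math import log2

def H(x):
    """Binary entropy H(x) = -(x*log2(x) + (1-x)*log2(1-x))."""
    return -(x * log2(x) + (1 - x) * log2(1 - x))

print(round(H(1 / 4), 3))                        # 0.811  -> first bound
print(round(H(0.945 / 4), 3))                    # 0.789  -> second bound
print(round((1 + 1 / 2) / 2, 3))                 # 0.75   -> third bound
print(round((1 + 1 / 2 + H(0.945) / 4) / 2, 4))  # 0.7884, rounded up to 0.789 -> fourth bound
print(round(2 ** 0.811, 3), round(2 ** 0.789, 3))  # about 1.754 and 1.728
```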