
Quantum Speedups for Polynomial-Time Dynamic Programming Algorithms

Susanna Caroppo, Roma Tre University, Rome, Italy; Giordano Da Lozzo, Roma Tre University, Rome, Italy; Giuseppe Di Battista, Roma Tre University, Rome, Italy; Michael T. Goodrich, University of California, Irvine, CA, USA; Martin Nöllenburg, TU Wien, Austria
Abstract

We introduce a quantum dynamic programming framework that allows us to directly extend to the quantum realm a large body of classical dynamic programming algorithms. The corresponding quantum dynamic programming algorithms retain the same space complexity as their classical counterparts, while achieving a computational speedup. For a combinatorial (search or optimization) problem 𝒫 and an instance I of 𝒫, such a speedup can be expressed in terms of the average degree δ of the dependency digraph G_𝒫(I) of I, determined by a recursive formulation of 𝒫. The nodes of this graph are the subproblems of 𝒫 induced by I and its arcs are directed from each subproblem to those on whose solutions it relies. In particular, our framework allows us to solve the considered problems in Õ(|V(G_𝒫(I))|·√δ) time. As an example, we obtain a quantum version of the Bellman-Ford algorithm for computing shortest paths from a single source vertex to all the other vertices in a weighted n-vertex digraph with m edges that runs in Õ(n√(nm)) time, which improves the best known classical upper bound when m = Ω(n^{1.4}).

Keywords and phrases:
Dynamic Programming, Quantum Algorithms, Quantum Random Access Memory
Copyright and License:
© Susanna Caroppo, Giordano Da Lozzo, Giuseppe Di Battista, Michael T. Goodrich, and Martin Nöllenburg; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Dynamic programming; Theory of computation → Quantum complexity theory
Related Version:
Full Version: https://arxiv.org/abs/2507.00823 [16]
Funding:
This research was supported, in part, by MUR of Italy (PRIN Project no. 2022ME9Z78 – NextGRAAL and PRIN Project no. 2022TS4Y3N – EXPAND), and the U.S. NSF under grant 2212129.
Editors:
Pat Morin and Eunjin Oh

1 Introduction

Quantum computing represents a paradigm shift in computation, leveraging the unique principles of quantum mechanics – superposition, entanglement, and interference – to solve problems that are intractable for classical computers. These principles allow quantum algorithms to achieve significant speedups for tasks such as factoring large numbers [54], searching unsorted databases [39], simulating complex physical systems [34], computational geometry [2, 5, 7, 29], and graph drawing [14, 15, 27, 28]. Classical computing has introduced fundamental algorithmic design paradigms that enable the efficient solution of combinatorial problems. Among these, for problems that exhibit a recursive structure, dynamic programming and divide and conquer stand out for their efficiency and wide applicability. In this research, we introduce a framework that extends classical dynamic programming algorithms [9, 10, 11, 20, 21, 24, 33, 40, 41, 42, 43, 47, 49] to faster quantum counterparts. This framework applies to combinatorial search and optimization problems tackled by computing dynamic programming tables.

Dynamic Programming.

Let 𝒫 be a combinatorial problem and let I be an instance of 𝒫 of size n. The dynamic programming paradigm for algorithm design can often be applied to compute a solution sol(I) of I for 𝒫, if 𝒫 satisfies the optimal substructure property, i.e., an optimal solution for I can be decomposed into (interchangeable) optimal solutions for subinstances of I, and the recursive computations of solutions for larger instances require solutions for overlapping subproblems – unlike most divide-and-conquer algorithms, in which the subproblems do not overlap. The underlying idea of (bottom-up) dynamic programming is to create a table D, which stores the (values of) optimal solutions for all relevant subinstances of I, starting from the smallest ones and then computing optimal solutions for larger and larger subinstances by suitably combining the existing solutions of smaller subinstances. Often, the construction of sol(I), given D, is a simple task (the solution sol(I) of the actual instance I is frequently found in the "last" cell of D; such an entry is later denoted as D[i_1][i_2]⋯[i_k]). Therefore, the time and space complexity for solving 𝒫 are asymptotically bounded by those of constructing and storing D, respectively. In particular, the construction time of D can be easily upper bounded by multiplying its size by the time required to recursively compute each table entry. For instance, a table D of size n² and a linear-time computation for each entry yield a running time of O(n³).

Quantum Dynamic Programming.

Quantum dynamic programming algorithms have recently attracted considerable interest in the exponential-time regime (which we review later). Surprisingly, polynomial-time quantum algorithms have instead not yet received as much attention, despite the potential for significant speedups in practical applications. Khadiev and Safina [45] proposed polynomial-time quantum dynamic programming algorithms for solving problems on directed acyclic graphs (DAGs). Furrow [30] gives a polynomial-time quantum dynamic programming algorithm for the Coin Change problem and for the Maximum Subarray Sum problem (it is worth remarking that, as discussed in [30], the latter algorithm, albeit based on computing an auxiliary table, does not proceed via dynamic programming, but rather follows a greedy approach).

The pioneering work by Ambainis et al. [4] introduced two quantum dynamic programming frameworks that provide a novel approach for speeding up some exponential-time classical dynamic programming algorithms, developed to address NP-complete problems. Both frameworks use classical computation to construct partial dynamic programming tables, stored in QRAM, to be accessed via quantum search subroutines. The first, and simpler, framework addresses a subclass of set problems where the solution for a set S of size n can be determined by considering all partitions of S into two sets of sizes k and n−k (for any positive integer k), and by selecting the optimal option. In this setting, a quantum advantage is obtained by suitably selecting the subset of the entries to be classically precomputed, so as to balance the time needed to compute the solution to the problem (obtained by identifying optimal combinations of subsets) by performing (bounded-depth) recursive quantum search primitives. For instance, this framework allows one to obtain fast exponential-time quantum algorithms for Traveling Salesman [4], for One-Sided Crossing Minimization [13, 14] (a quantum algorithm for Two-Sided Crossing Minimization, solely based on Grover's search, has been presented in [17, 15]), for Graph Coloring [53], and for Dynamic Programming Across the Subsets [56]. The second, more complex, framework is based on the ability to efficiently solve the Path in the Hypercube (PHC) problem. Given a subgraph G of the Boolean hypercube, where edges are directed from vertices of lower Hamming weight to vertices of higher Hamming weight, the problem asks to determine whether there exists a directed path in G from the vertex 0ⁿ to the vertex 1ⁿ. In this setting, a quantum advantage is reached by combining access to precomputed partial dynamic programming tables with recursive applications of the quantum algorithm solving PHC.
For instance, this framework allows one to obtain fast exponential-time quantum algorithms for Domatic Number [6] and for Treewidth Computation [48]. Furthermore, Glos et al. [36] generalized this framework to a quantum algorithm for finding a path in n-dimensional lattice graphs.

Our contributions.

While exponential-time quantum algorithms aim to tackle problems beyond the reach of classical computation, polynomial-time quantum algorithms can provide substantial efficiency gains for problems already considered tractable. In this work, we focus on polynomial-time quantum algorithms and introduce a framework that systematically extends classical dynamic programming algorithms into accelerated quantum counterparts, by harnessing quantum parallelism and amplitude amplification. Specifically, we develop quantum subroutines that integrate quantum search primitives, such as min, max, find, and findAll, within a dynamic programming context. These subroutines construct a superposition enabling access to the specific subset of previously computed entries, stored in a Quantum Random Access Memory (QRAM), that is needed for computing the currently considered entry. This, in turn, enables us to harness the full potential of quantum search primitives by restricting the search space to the relevant candidates.

To demonstrate the broad usability of our framework, we apply it to several well-known problems for which the best worst-case classical algorithms rely on dynamic programming; see Table 1. For space reasons, in this paper, we only discuss in detail the application of the framework to the Single-Source Shortest Path and Membership in Context-Free Language problems. The applications to the remaining problems in Table 1 are presented in the full version of the paper [16].

Table 1: Comparison of classical and quantum time complexities, classified based on the operator op ∈ {min, max, find, findAll} in their recursive formulation.

Problem | op | Classical Complexity | Quantum Complexity
Minimum-Weight Triangulation of Convex Polygon | min | O(n³) [49] | Õ(n²√n)
All-Pairs Shortest Paths | min | O(n³ log log n / log² n) [40] | Õ(n²√(n log n))
Single-Source Shortest Paths (Section 5.1) | min | Õ(m·n^{4/5}) [42] | Õ(n√(nm))
Multi-Criteria Boundary Labeling (po-leaders) | min | O(n³) [10, 11] | Õ(n²√n)
Multi-Criteria Boundary Labeling (do-leaders) | min | O(n⁵) [10, 11] | Õ(n⁴√n)
Segmented Least Squares | min | O(n²) [9, 47] | Õ(n√n)
RNA Secondary Structure | max | O(n³) [47] | Õ(n²√n)
Rod Cutting | max | O(n²) [20] | Õ(n√n)
Largest Divisible Subset in Array | max | O(n²) [25] | Õ(n√n)
Unbounded Knapsack | max | O(Wn) [21] | Õ(W√n)
Viterbi Path Problem | max | O(T·|S|²) [43] | Õ(T·|S|·√|S|)
Text Segmentation | find | O(n²) [24] | Õ(n√n)
Membership in Context-Free Language (CYK) (Section 5.2) | findAll | O(n^{2.37}) [58] | Õ(n²√n)

2 Preliminaries

In this section, we introduce preliminary notation and definitions.

Notation.

Given two k-tuples of integers, ⟨a_1, a_2, …, a_k⟩ and ⟨b_1, b_2, …, b_k⟩, we say that ⟨a_1, a_2, …, a_k⟩ lexicographically precedes ⟨b_1, b_2, …, b_k⟩, denoted by ⟨a_1, a_2, …, a_k⟩ ≺ ⟨b_1, b_2, …, b_k⟩, if there exists an index j ∈ {1, 2, …, k} such that a_i = b_i for all i < j, and a_j < b_j. The relation ≺ defines a total order among the k-tuples of integers. Given a directed graph G = (V, E), we denote by deg(v) the degree of a vertex v of G, that is, the number of arcs in E having v as their tail or head. Moreover, we denote by deg_out(v) the outdegree of a vertex v of G, that is, the number of arcs in E having v as their tail. The average degree of G is δ = (1/n)·Σ_{v∈V(G)} deg(v) = 2m/n. In order to simplify the notation, we use [h] to denote the set {0, …, h−1}, where h is a positive integer. Also, given positive integers a and b, we denote a·b as ab and log₂ a as log a. If f(n) = O(log^c n) for some constant c, we write f(n) = polylog(n). In case f(n) = O(n^d · polylog(n)) for some constant d, we use the notation f(n) = Õ(n^d) (see, e.g., [59]).
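The two notions used most heavily later – the lexicographic order ≺ on index tuples and the average degree δ = 2m/n – can be checked with a short sketch (the function names below are ours, for illustration only):

```python
def lex_precedes(a, b):
    """True iff tuple a strictly lexicographically precedes tuple b:
    there is an index j with a[i] == b[i] for all i < j and a[j] < b[j]."""
    for x, y in zip(a, b):
        if x != y:
            return x < y
    return False  # equal tuples do not strictly precede each other

def average_degree(n, arcs):
    """delta = (1/n) * sum of deg(v) = 2m/n for a digraph with m arcs."""
    return 2 * len(arcs) / n
```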

Combinatorial problems.

A combinatorial search problem 𝒫 is a triple ⟨Λ, S, R⟩, where:

  • Λ is the set of instances of 𝒫;

  • S is the set of solutions of 𝒫; and

  • R ⊆ Λ × S is a binary relation that associates each instance I ∈ Λ with a set SOL(I) = {s ∈ S : (I, s) ∈ R}. The elements in SOL(I) are the feasible solutions of 𝒫 for I.

A combinatorial optimization problem 𝒫 is a quintuple ⟨Λ, S, R, f_opt, g⟩, where:

  • ⟨Λ, S, R⟩ is a combinatorial search problem;

  • f_opt : S → Y is the optimization function, where Y is a totally ordered set (usually Y ∈ {ℕ, ℝ});

  • g : 2^S → S is the comparator function, with g ∈ {min, max}.

An optimal solution of 𝒫 for I is any feasible solution s* ∈ SOL(I) such that s* = argmin_{s∈SOL(I)} f_opt(s), if g = min, and s* = argmax_{s∈SOL(I)} f_opt(s), if g = max. For ease of notation, in the following, we denote by sol(I) both a feasible solution of a combinatorial search problem and an optimal solution of a combinatorial optimization problem. Also, we refer to combinatorial search/optimization problems simply as combinatorial problems.
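The ⟨Λ, S, R, f_opt, g⟩ formalism can be made concrete on a toy problem (everything below is our own illustrative example, not from the paper): an instance I is a list of integers, SOL(I) contains the nonempty subsets of I with even sum, f_opt is the subset size, and g = min.

```python
from itertools import combinations

def SOL(I):
    """Feasible solutions: nonempty even-sum subsets of instance I."""
    return [list(c) for r in range(1, len(I) + 1)
            for c in combinations(I, r) if sum(c) % 2 == 0]

def sol(I, f_opt=len, g=min):
    """An optimal solution: arg-g of f_opt over the feasible set SOL(I)."""
    return g(SOL(I), key=f_opt)
```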

Dynamic programming.

Let 𝒫 be a combinatorial problem and let I be an instance of 𝒫. A (bottom-up) dynamic programming algorithm for 𝒫 can be implemented by executing the following steps: a Table Setup step, which initializes the entries of the dynamic programming table D; a Table Update step, which computes the remaining entries of D via a recursive formula; and a final step that retrieves sol(I) from the completed table.

Whereas the efficiency of this algorithm design paradigm lies in the fact that solutions of overlapping subproblems are stored in the table and hence need to be computed only once, its correctness lies in the fact that "difficult" entries can be computed by exploiting the values of previously computed ("easy" and "difficult") entries (by the optimal substructure property). Clearly, the Table Update is the most interesting and challenging step in the overall approach. Fortunately, many optimization problems 𝒫, including those considered in this paper, naturally exhibit a simple recursive formulation for the entries of their dynamic programming table D. Consider an entry D[i_1][i_2]⋯[i_k] of D. Let S_{i_1,i_2,…,i_k} be the dependency set of D[i_1][i_2]⋯[i_k], composed of the indices of the entries of D on which the computation of the entry D[i_1][i_2]⋯[i_k] depends. Observe that S_{i_1,i_2,…,i_k} is some subset of ℕ^k such that for each entry ⟨j_1, j_2, …, j_k⟩ in S_{i_1,i_2,…,i_k} we have that ⟨j_1, j_2, …, j_k⟩ ≺ ⟨i_1, i_2, …, i_k⟩. Then, the recursive formula for D[i_1][i_2]⋯[i_k] is of the form:

D[i_1][i_2]⋯[i_k] = op_{𝒳 ∈ 𝒞_{i_1,i_2,…,i_k}} f_𝒫(i_1, i_2, …, i_k, 𝒳),   (1)

where: (1) op ∈ {min, max, find, findAll}; (2) 𝒞_{i_1,i_2,…,i_k} is a set, called generating set, composed of h-element subsets of S_{i_1,i_2,…,i_k}, where each subset provides the input to construct a particular candidate value for D[i_1][i_2]⋯[i_k] (here h is an integer constant, called dependency index, determined by 𝒫, which specifies the number of subinstances into which each instance is divided in the recursive definition); (3) f_𝒫 is a function specific to problem 𝒫 that computes a candidate value for D[i_1][i_2]⋯[i_k], assuming that all entries of D with indices in S_{i_1,i_2,…,i_k} have already been computed.

 Remark 1.

The optimal substructure property of a problem 𝒫 is formally captured by the dependency set S_{i_1,i_2,…,i_k}, whose entries ⟨j_1, j_2, …, j_k⟩ must satisfy ⟨j_1, j_2, …, j_k⟩ ≺ ⟨i_1, i_2, …, i_k⟩.

In most problems, the dependency index is a small integer, usually equal to 1 or 2. A textbook example of a problem fitting Equation 1 with h = 1 is the Coin Change problem [47]. Given a set of positive integer coin denominations c_1 < c_2 < ⋯ < c_r and a target sum T, the goal is to achieve T using the fewest possible number of coins, assuming an unlimited supply of each denomination, or to determine that it is not possible to obtain T. Let D be a dynamic programming table of size T + 1, whose entries D[i] represent the minimum number of coins needed to make up the sum i, with D[i] = ∞ if it is not possible. The base case of the dynamic programming approach is D[0] = 0. For the recursive case, to compute D[i] for i > 0, we have to consider all possible choices for the first coin. Once the first coin is selected, the remaining amount must be reached optimally. This can be expressed by the recursive formula

D[i] = { 0, if i = 0;   min_{j : c_j ≤ i} (1 + D[i − c_j]), if i > 0 }   (2)

Clearly, the recursive case of Equation 2 matches the pattern of Equation 1. In fact, we can set S_i = {j : c_j ≤ i}, 𝒞_i = {{j} : c_j ≤ i}, h = 1, and f_𝒫(i, {j}) = 1 + D[i − c_j].
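The recurrence of Equation 2 translates directly into a bottom-up table computation; a minimal sketch in Python (our own illustrative code, not part of the paper):

```python
INF = float("inf")

def coin_change(coins, T):
    """Bottom-up DP for Coin Change: D[i] = min over {j : c_j <= i} of
    1 + D[i - c_j], with D[0] = 0 and D[i] = INF when i is unreachable."""
    D = [0] + [INF] * T
    for i in range(1, T + 1):
        # S_i = {j : c_j <= i}; each singleton {j} in C_i (h = 1) yields
        # one candidate value f_P(i, {j}) = 1 + D[i - c_j].
        candidates = [1 + D[i - c] for c in coins if c <= i]
        if candidates:
            D[i] = min(candidates)
    return D[T]
```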

A notable example of a problem fitting Equation 1 with h = 2 is the Matrix Chain Multiplication problem. Given a sequence of n matrices, A_1, A_2, …, A_n, and their dimensions p_0, p_1, p_2, …, p_n, where for i = 1, 2, …, n, matrix A_i has dimension p_{i−1} × p_i, the problem asks to determine the order of matrix multiplications that minimizes the total number of scalar multiplications needed to obtain A_1 × A_2 × ⋯ × A_n. Note that, in the Matrix Chain Multiplication problem, we are not actually multiplying matrices; instead, the goal is only to determine an order for multiplying matrices that has the lowest cost. Recall that matrix multiplication is associative; therefore, we aim at grouping the above multiplications to minimize the total number of scalar multiplications.

Let D be a dynamic programming table of size n × n, whose entries D[i][j], with 1 ≤ i ≤ j ≤ n, store the minimum number of scalar multiplications needed to compute the product A_i × A_{i+1} × ⋯ × A_j. The base case of the dynamic programming approach is D[i][i] = 0. For the recursive case, to compute D[i][j] for j > i, we have to consider all possible choices to split the product at a matrix A_k, with i ≤ k < j. This can be expressed by the recursive formula:

D[i][j] = { 0, if i = j;   min_{i ≤ k < j} (D[i][k] + D[k+1][j] + p_{i−1}·p_k·p_j), if i < j }   (3)

Clearly, the recursive case of Equation 3 matches the pattern of Equation 1. In fact, we can set S_{i,j} = {(i,k), (k+1,j) : i ≤ k < j}, 𝒞_{i,j} = {{(i,k), (k+1,j)} : i ≤ k < j}, h = 2, and f_𝒫(i, j, {(i,k), (k+1,j)}) = D[i][k] + D[k+1][j] + p_{i−1}·p_k·p_j.
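Likewise, Equation 3 can be sketched as a bottom-up computation over subchain lengths (again our own illustrative code):

```python
def matrix_chain(p):
    """Equation 3: D[i][j] = min over i <= k < j of
    D[i][k] + D[k+1][j] + p[i-1] * p[k] * p[j]."""
    n = len(p) - 1                 # number of matrices; A_i is p[i-1] x p[i]
    D = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # subchain length j - i + 1
        for i in range(1, n - length + 2):
            j = i + length - 1
            # Each pair {(i, k), (k+1, j)} in C_{i,j} (h = 2) gives one candidate.
            D[i][j] = min(D[i][k] + D[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return D[1][n]
```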

3 Quantum Tools

In this section, we provide the reader with the quantum primitives needed in this research. For a comprehensive introduction to the quantum computing field see, e.g., Nielsen and Chuang [50], and Aaronson [1].

Qubits are the fundamental units of quantum computing. They differ from classical bits in that they can exist in a superposition of the two classical states 0 and 1. This unique characteristic enables quantum computers to perform multiple computations in parallel, allowing in some cases a substantial (and sometimes exponential) speedup over classical systems. Mathematically, a qubit is represented in a Hilbert space as a two-dimensional vector |ψ⟩ = (α, β)^T ∈ ℂ² (in Dirac's notation, a ket such as |v⟩, where v is an arbitrary label, represents a vector corresponding to a quantum state), that is, |ψ⟩ = α|0⟩ + β|1⟩, where |0⟩ = (1, 0)^T and |1⟩ = (0, 1)^T are the two orthonormal basis states and the complex coefficients α and β are the amplitudes of such states. The likelihood of measuring the qubit in state |0⟩ or |1⟩ is given by |α|² and |β|², respectively. Therefore, α and β must satisfy the normalization condition |α|² + |β|² = 1, ensuring that the total probability remains 1. A quantum state over n qubits is a unit vector in the Hilbert space ℂ^{2^n}. The computational basis {|j⟩}_{j∈[2^n]} consists of quantum states, where |j⟩ is the unit vector with a 1 in the j-th index and 0 elsewhere. A computational basis state |j⟩ can be interpreted as a classical bit string j, allowing quantum systems to simulate classical algorithms. Any quantum state can be expressed as a weighted sum (or superposition) of these basis states:

|Ψ⟩ = Σ_{j=0}^{2^n−1} α_j |j⟩,

where the amplitudes satisfy the normalization condition Σ_{j=0}^{2^n−1} |α_j|² = 1. If at least two coefficients α_j in the above expression are nonzero, the state is said to be in superposition. When measuring |Ψ⟩, the state collapses to |j⟩ with probability |α_j|².

In this paper, we focus on quantum computations performed in the circuit model of computation. In this model, quantum algorithms are specified by quantum circuits, obtained by composing quantum gates, that perform specific quantum computations. The process begins with the initialization of qubits, followed by the application of gate operations that modify their states. Finally, the circuit's output is obtained by measuring the qubits. This operation collapses their quantum states into classical binary outcomes. Quantum gates are the fundamental building blocks of quantum circuits, analogous to classical logic gates in traditional computing. They are used to manipulate quantum states while preserving key quantum properties like superposition and entanglement. Specifically, a quantum gate performs a linear transformation on its input quantum state, meaning that a superposition of states is mapped to the corresponding superposition of their images. In particular, any such transformation U must be unitary, satisfying the condition 𝕀 = U†U = UU†, where 𝕀 denotes the identity matrix and U† denotes the conjugate transpose of U.

As a consequence, quantum computation is inherently reversible as long as no measurement is performed. That is, given an output quantum state |ϕ⟩, obtained by applying U to an initial quantum state |ψ⟩, the original state can be fully recovered by applying U^{−1} = U† to |ϕ⟩. An important quantum gate (often used in the initialization of qubits) is the Hadamard gate H = (1/√2)·[[1, 1], [1, −1]], which can be used as a preliminary step to transform the |0⟩ state into the uniform superposition (1/√2)(|0⟩ + |1⟩) of the two basis states.
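The unitarity of H, its action on |0⟩, and its reversibility can be verified numerically with plain Python (a didactic sketch of ours, not part of the paper):

```python
import math

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]          # Hadamard gate; real, so H-dagger = H

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[r][k] * B[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

def apply(U, v):
    """Matrix-vector product U|v>."""
    return [sum(U[r][c] * v[c] for c in range(2)) for r in range(2)]

HH = matmul(H, H)              # unitarity: H * H = identity
plus = apply(H, [1.0, 0.0])    # H|0> = (|0> + |1>)/sqrt(2)
back = apply(H, plus)          # reversibility: applying H again recovers |0>
```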

Quantum Random Access Memory (QRAM) is the "quantum analog" of conventional RAM, designed to store and access data in a quantum format. Like RAM, QRAM consists of three main components: an input (or address) register a, an output (or data) register d, and memory arrays. However, unlike traditional RAM, QRAM's input and output registers are composed of qubits rather than classical bits, while the memory arrays can be either classical or quantum, depending on the application [35]. A key feature of QRAM lies in its method of memory access. Instead of retrieving data from a single memory location at a time, QRAM leverages quantum superposition to access multiple memory locations simultaneously. Specifically, while a classical RAM uses n bits to randomly access one of N = 2^n distinct memory cells, a QRAM uses n qubits to address a quantum superposition of all N memory cells simultaneously. When a quantum computer requires access to a superposition of memory cells, the address register a (a set comprising multiple qubits is a register; a quantum register a with n qubits |q_i⟩, with i ∈ [n], is denoted as a tensor product |ψ⟩_a = |q_0⟩ ⊗ |q_1⟩ ⊗ ⋯ ⊗ |q_{n−1}⟩) must hold a superposition of addresses, represented as Σ_j α_j |j⟩_a. In response, the QRAM returns a corresponding superposition of data in the data register d, ensuring that the retrieved data remains correlated with the address register:

Σ_j α_j |j⟩_a |0^k⟩_d  →_QRAM  Σ_j α_j |j⟩_a |δ_j⟩_d,

where δ_j represents the content of the j-th memory cell, k bits suffice to classically encode the content of any memory cell, and |0^k⟩ denotes the quantum basis state composed of k qubits set to |0⟩. QRAM plays a crucial role in several quantum algorithms, such as quantum searching [39], minimum/maximum finding [23], counting [12], period finding and discrete logarithm [54], to cite a few. In particular, the use of QRAM enables us to exploit quantum search primitives that involve condition checking on data stored in random access memory. Specifically, the QRAM may be used by an oracle to check conditions based on the data stored in memory, marking the superposition states that correspond to feasible or optimal solutions [35, 51, 60, 62].
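The correlated-read behavior above can be emulated classically by tracking the amplitudes of (address, data) pairs (an illustrative sketch; `qram_read` is our own name, not an API from the paper):

```python
import math

def qram_read(address_state, memory):
    """Emulate the QRAM map: sum_j a_j |j>_a |0^k>_d  ->
    sum_j a_j |j>_a |m_j>_d, keeping data correlated with addresses."""
    return {(j, memory[j]): amp for j, amp in address_state.items()}

memory = [7, 1, 4, 9]                            # classical memory cells
addr = {j: 1 / math.sqrt(4) for j in range(4)}   # uniform address register
state = qram_read(addr, memory)                  # superposed correlated read
```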

We now describe the quantum subroutines used in our algorithms. Observe that every function f implemented as a classical circuit can be approximated with arbitrary accuracy, using a discrete set of quantum gates, by a quantum circuit with only a polylogarithmic overhead [22, 46, 50]. Therefore, in the following, we assume that every classical function f, provided as a classical circuit C_f, can be passed to and accessed by our quantum procedures as a quantum version of C_f. In the remainder, for any computable function f, we use the notation U_f to denote the corresponding quantum circuit.

Let W be a ground set and let ω be an element of W. Let bin(ω) be the integer corresponding to a binary encoding of ω. For ease of notation, in the remainder, we will denote the state |bin(ω)⟩ simply as |ω⟩. In particular, given an element 𝒳 ∈ 𝒞_{i_1,i_2,…,i_k}, we will use the state |𝒳⟩.

Consider a computational problem 𝒫 that can be solved via dynamic programming by building a k-dimensional table D of size d_1 × ⋯ × d_k. Recall that by 𝒞_{i_1,i_2,…,i_k} we denote the generating set of an entry D[i_1][i_2]⋯[i_k] of D. We let λ_{i_1,i_2,…,i_k} = |𝒞_{i_1,i_2,…,i_k}| and we let σ = Σ_{i=1}^{k} log d_i.

Algorithm 1 Procedure StatePrep prepares a uniform superposition over the set 𝒞_{i_1,i_2,…,i_k}.

Subroutine StatePrep.

The first subroutine prepares a uniform superposition |Ψ_{i_1,i_2,…,i_k}⟩ of the integers in [λ_{i_1,i_2,…,i_k}]; see the pseudocode of Algorithm 1. It assumes the existence of a classical procedure f_C that determines λ_{i_1,i_2,…,i_k} given the values i_1, i_2, …, i_k. The subroutine starts with the register |i_1 i_2 … i_k⟩ and with a register storing an ℓ-bit zero string |0^ℓ⟩ (where the value of ℓ will be discussed later). It exploits the quantum gate U_{f_C} to determine λ_{i_1,i_2,…,i_k} and prepares the state |i_1 i_2 … i_k⟩|λ_{i_1,i_2,…,i_k}⟩. Then, the subroutine takes in input an additional register storing a log(λ_{i_1,i_2,…,i_k})-bit zero string and applies Hadamard gates to each of the qubits of such a register to prepare the uniform superposition state:

|Ψ_{i_1,i_2,…,i_k}⟩ = (1/√λ_{i_1,i_2,…,i_k}) |i_1 i_2 … i_k⟩ |λ_{i_1,i_2,…,i_k}⟩ Σ_{u=0}^{λ_{i_1,i_2,…,i_k}−1} |u⟩

We use the signature StatePrep(i_1, i_2, …, i_k) to denote calls to the subroutine StatePrep.

Algorithm 2 Procedure TableIndexPrep for retrieving the u-th element of 𝒞_{i_1,i_2,…,i_k}.

Subroutine TableIndexPrep.

The second quantum subroutine prepares the state |E_{i_1 i_2 … i_k, u}⟩ that correlates the integers in [λ_{i_1,i_2,…,i_k}] with the elements of 𝒞_{i_1 i_2 … i_k}; see the pseudocode of Algorithm 2. Our quantum subroutine assumes the existence of a classical injective function γ : ℕ^k × ℕ → 𝒞_{i_1 i_2 … i_k} that maps the pair ⟨i_1 i_2 … i_k, u⟩ to an element of 𝒞_{i_1 i_2 … i_k}. Observe that the elements of 𝒞_{i_1,i_2,…,i_k} admit a binary representation bin(𝒳) of length h·σ. The subroutine starts with the register |i_1 i_2 … i_k⟩, the register |u⟩, and with a register storing an (h·σ)-bit zero string |0^{hσ}⟩ and uses U_γ to prepare the state:

|E_{i_1 i_2 … i_k, u}⟩ = |i_1 i_2 … i_k⟩ |u⟩ |γ(i_1, i_2, …, i_k, u)⟩.

We use the signature TableIndexPrep(i_1, i_2, …, i_k, u) to denote calls to the subroutine TableIndexPrep.

Subroutine DP.

The third quantum subroutine uses either (i) the quantum minimum/maximum finding algorithms (QMin and QMax) due to Dürr and Høyer [23], (ii) the quantum search algorithm (QFind) due to Grover, which finds an item satisfying a condition in an unsorted list [1], or (iii) the quantum search algorithm (QFindAll) due to Ambainis, which finds all items satisfying a condition in an unsorted list [3]. In particular, QMin and QMax allow one to determine the element in a ground set of size N that minimizes/maximizes a given function in Õ(√N) time, while QFind (resp. QFindAll) allows one to determine an element (resp. all the elements) in a ground set of size N that satisfies a certain condition in Õ(√N) time (resp. Õ(√(N·M)) time, where M is the number of elements satisfying the condition).
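Classically, the input/output semantics of these primitives (though, of course, not their running times) can be pinned down by reference implementations; the quantum versions achieve the stated Õ(√N) and Õ(√(NM)) bounds. The names below are ours, chosen to mirror the paper's QMin/QFind/QFindAll:

```python
def q_min(items, f):
    """Semantics of QMin (Durr-Hoyer): an argmin of f over the ground set;
    quantumly, ~O(sqrt(N)) evaluations of f suffice."""
    return min(items, key=f)

def q_find(items, pred):
    """Semantics of QFind (Grover): some element satisfying pred, or None;
    quantumly ~O(sqrt(N))."""
    return next((x for x in items if pred(x)), None)

def q_find_all(items, pred):
    """Semantics of QFindAll (Ambainis): all M satisfying elements;
    quantumly ~O(sqrt(N * M))."""
    return [x for x in items if pred(x)]
```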

Algorithm 3 Procedure DP for computing the value op_{𝒳 ∈ 𝒞_{i_1,i_2,…,i_k}} f_𝒫(i_1, i_2, …, i_k, 𝒳). We use the signatures QMin(i_1, …, i_k, f_𝒫, ⪯), QMax(i_1, …, i_k, f_𝒫, ⪯), QFind(i_1, …, i_k, f_𝒫), and QFindAll(i_1, …, i_k, f_𝒫) to denote calls to the algorithms QMin, QMax, QFind, and QFindAll, respectively.

The subroutine (see the pseudocode of Algorithm 3) takes as input at least: (1) the integers i_1, i_2, …, i_k; (2) the function f_𝒫; and (3) an operator op ∈ {min, max, find, findAll}. Furthermore, if op ∈ {min, max}, it additionally takes as input a comparator ⪯ to minimize (or maximize) over, which defines a total ordering of the values in the codomain of f_𝒫 (i.e., the values stored in D). First, the subroutine invokes StatePrep(i_1, i_2, …, i_k) to prepare the superposition state |Ψ_{i_1,i_2,…,i_k}⟩ and then applies TableIndexPrep, in parallel to each basis state of |Ψ_{i_1,i_2,…,i_k}⟩, to prepare the superposition state:

|Φ_{i_1,i_2,…,i_k}⟩ = (1/√λ_{i_1,i_2,…,i_k}) |i_1, i_2, …, i_k⟩ |λ_{i_1,i_2,…,i_k}⟩ Σ_{u=0}^{λ_{i_1,i_2,…,i_k}−1} |u⟩ |γ(i_1, i_2, …, i_k, u)⟩.
 Remark 2.

The time T_{i_1,i_2,…,i_k} required to prepare the state |Φ_{i_1,i_2,…,i_k}⟩ is bounded by the time needed to execute the gates U_{f_C} and U_γ.

Observe that each of the states |i_1, i_2, …, i_k⟩|λ_{i_1,i_2,…,i_k}⟩|u⟩|γ(i_1, i_2, …, i_k, u)⟩ composing |Φ_{i_1,i_2,…,i_k}⟩ provides the input for computing, using f_𝒫 and the quantum search subroutines described above, a candidate value for D[i_1][i_2]⋯[i_k], namely the registers |i_1, i_2, …, i_k⟩ and |γ(i_1, i_2, …, i_k, u)⟩. Therefore, the subroutine proceeds by applying the algorithm QMin, QMax, QFind, or QFindAll, depending on whether the chosen operator op is equal to min (or max), find, or findAll, respectively, only to the candidate values for D[i_1][i_2]⋯[i_k] determined by the entries in 𝒞_{i_1,i_2,…,i_k}.

We use the signatures DP(i_1, …, i_k, f_𝒫, op) and DP(i_1, …, i_k, f_𝒫, op, ⪯) to denote calls to the subroutine DP, when op ∈ {find, findAll} or op ∈ {min, max}, respectively.
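The control flow of subroutine DP can be mirrored classically: f_C supplies λ, γ enumerates the generating set, f_𝒫 produces candidate values, and op reduces them. A sketch under these assumptions (the Coin Change wiring below, including the partially filled table D, is our own illustration):

```python
def dp_subroutine(idx, f_C, gamma, f_P, op, key=None):
    """Classical emulation of subroutine DP for one table entry."""
    lam = f_C(idx)                                      # lambda = |C_idx|
    candidates = [f_P(idx, gamma(idx, u)) for u in range(lam)]
    if op in ("min", "max"):
        return (min if op == "min" else max)(candidates, key=key)
    if op == "find":
        return next((c for c in candidates if c is not None), None)
    return [c for c in candidates if c is not None]     # findAll

# Coin Change wiring: entries D[0..6] are assumed already computed.
coins = [1, 5, 6]
D = {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 1, 6: 1}
f_C = lambda i: sum(1 for c in coins if c <= i)         # lambda_i = |C_i|
gamma = lambda i, u: [j for j, c in enumerate(coins) if c <= i][u]
f_P = lambda i, j: 1 + D[i - coins[j]]
entry_7 = dp_subroutine(7, f_C, gamma, f_P, "min")      # 7 = 1 + 6: two coins
```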

Altogether, we obtain the following main algorithmic theorem.

Theorem 3.

Let 𝒫 be a combinatorial problem, let I be an instance of 𝒫, and let n be the size of I. Also, let T_𝒫 be the time needed to compute quantumly the function f_𝒫 and let T_{i_1,i_2,…,i_k} be the time needed to prepare |Φ_{i_1,i_2,…,i_k}⟩. Suppose that each of the values of the entries of D can be represented using w bits, with w ∈ O(polylog(n)). Then, the following holds for subroutine DP:

  • If op = find, DP determines a value of f_𝒫 for an entry D[i_1][i_2]⋯[i_k] in Õ(T_{i_1,i_2,…,i_k} + T_𝒫·√λ_{i_1,i_2,…,i_k}) time.

  • If op = findAll, DP determines all the M possible distinct values of f_𝒫 for an entry D[i_1][i_2]⋯[i_k] in Õ(T_{i_1,i_2,…,i_k} + T_𝒫·√(λ_{i_1,i_2,…,i_k}·M)) time.

  • If op ∈ {min, max}, consider some total ordering of the data values in D, defined by ⪯, such that a comparison according to such an ordering can be performed in O(w) time. Then, DP determines the minimum (or maximum) value of f_𝒫 for an entry D[i_1][i_2]⋯[i_k], under the specified ordering, in Õ(T_{i_1,i_2,…,i_k} + T_𝒫·√λ_{i_1,i_2,…,i_k}) time.

4 Quantum Dynamic Programming

In this section, we describe a framework that exploits the quantum subroutines defined in Section 3 to obtain quantum speedups for many computational problems solved classically using dynamic programming algorithms.

The recursive formula that specifies how to compute the entries of the dynamic programming table D used for solving a computational problem 𝒫 on an instance I determines a dependency digraph G_𝒫(I). The nodes of G_𝒫(I) are in one-to-one correspondence with the entries of D, i.e., the subproblems of 𝒫 defined by I. For each node n_{i_1,i_2,…,i_k} of G_𝒫(I) associated with the entry D[i_1][i_2]⋯[i_k] of D, the graph G_𝒫(I) contains an arc directed from n_{i_1,i_2,…,i_k} to each of the nodes n_{j_1,j_2,…,j_k} associated with the entries D[j_1][j_2]⋯[j_k] that occur in the recursive formula for D[i_1][i_2]⋯[i_k]. These are the entries indexed by the tuples ⟨j_1, j_2, …, j_k⟩ ∈ S_{i_1,i_2,…,i_k}. Clearly, by the optimal substructure property, G_𝒫(I) is a directed acyclic graph.

Let 𝒫 be a combinatorial problem and suppose that it admits a dynamic programming algorithm that relates the dependency set S_{i_1,i_2,…,i_k} and the generating set 𝒞_{i_1,i_2,…,i_k} of an entry D[i_1][i_2]⋯[i_k] as follows: for each element ⟨j_1, j_2, …, j_k⟩ ∈ S_{i_1,i_2,…,i_k}, there exists a unique set 𝒳 ∈ 𝒞_{i_1,i_2,…,i_k} such that ⟨j_1, j_2, …, j_k⟩ ∈ 𝒳. We say that a dynamic programming algorithm exhibiting the above characteristic is simple and that 𝒫 is a simple problem. Observe that, for a simple dynamic programming algorithm, it holds that λ_{i_1,i_2,…,i_k} = |𝒞_{i_1,i_2,…,i_k}| = |S_{i_1,i_2,…,i_k}|/h. In particular, this implies that, for each node n_{i_1,i_2,…,i_k} of G_𝒫(I), it holds that deg_out(n_{i_1,i_2,…,i_k}) = |S_{i_1,i_2,…,i_k}| = h·λ_{i_1,i_2,…,i_k}. Several combinatorial problems, including those listed in Table 1, as we prove later, are simple problems.
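For a simple problem, the quantities above can be computed explicitly. A small sketch on the dependency digraph of a Coin Change instance (illustrative code with names of our own choosing):

```python
def avg_degree(num_nodes, dep_set):
    """Average degree of the dependency digraph: one arc per dependency,
    so delta = 2m / |V|."""
    m = sum(len(dep_set(v)) for v in range(num_nodes))
    return 2 * m / num_nodes

# Coin Change with coins {1, 2} and target T = 4: S_i = {i - c : c <= i}.
coins, T = [1, 2], 4
S = lambda i: {i - c for c in coins if c <= i}
delta = avg_degree(T + 1, S)
# Since h = 1 here, lambda_i = |S_i| / 1 = |S_i| for every entry.
lam = [len(S(i)) for i in range(T + 1)]
```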

The next lemma shows how classical simple dynamic programming algorithms can be quantumly enhanced to reduce their time complexity, while maintaining the same storage requirements as in the classical setting. Such an improvement applies to problems whose recursive formulation satisfies Equation 1. Table 1 provides an overview of the speedups obtainable via Theorem 4 for many well-known problems.

Theorem 4.

Let 𝒫 be a simple combinatorial problem, let I be an instance of 𝒫, and let n be the size of I. Suppose that each of the values of the entries of D can be represented using w bits, with w ∈ O(polylog(n)). Consider a simple classical dynamic programming algorithm 𝒜 that solves 𝒫 for I by computing each entry D[i_1][i_2]⋯[i_k] of a table D of size d_1 × d_2 × ⋯ × d_k, where the values d_i depend on I, using a recurrence formula of the same form as Equation 1:

D[i_1][i_2]⋯[i_k] = op_{𝒳 ∈ 𝒞_{i_1,i_2,…,i_k}} f_𝒫(i_1, i_2, …, i_k, 𝒳),

where op{min,max,find,findAll}. Also, let T𝒫, TfC and Tγ be the time needed to compute quantumly the functions f𝒫, fC, and γ, respectively. Finally, let T=TfC+Tγ, let M be the maximum number of solutions of any subproblem of 𝒫 for I, and let δ be the average degree of G𝒫(I).

Then, there exists a quantum dynamic programming algorithm Q𝒜 that solves 𝒫 for I, using QRAM, with the following time and space bounds:

  • If op ∈ {find,min,max}, then Q𝒜 solves 𝒫 for I using Õ(|V(G𝒫(I))|(T + T𝒫·√δ)) time and Õ(|V(G𝒫(I))|) space.

  • If op = findAll, then Q𝒜 solves 𝒫 for I using Õ(|V(G𝒫(I))|(T + T𝒫·√(δM))) time and Õ(M·|V(G𝒫(I))|) space.

Proof.

In the following, we describe the algorithm Q𝒜 and prove its space and time complexity. We describe Q𝒜 in terms of the three steps that implement the dynamic programming approach provided by 𝒜.

Table Setup.

As in the classical scenario, the algorithm starts by initializing a dynamic programming table QD of size d1×d2×⋯×dk. Differently from the classical dynamic programming table D used by 𝒜, however, the table QD is stored in QRAM. Suppose that each of the values of the entries of D (and thus of QD) can be represented using w bits. Let finit be the classical function that, provided with the tuple ⟨i1,i2,…,ik⟩ that addresses the entry D[i1][i2]⋯[ik], computes the initialization value of D[i1][i2]⋯[ik]. To this aim, for each ⟨i1,i2,…,ik⟩ ∈ [d1]×[d2]×⋯×[dk], we provide the gate Uinit with the address register |i1 i2 … ik⟩_a and with a data register storing a w-bit zero string |0^w⟩. The application of Uinit to these registers results in the state |i1 i2 … ik⟩_a |finit(i1,i2,…,ik)⟩_d, which forms the input for a QRAM write operation that initializes QD[i1][i2]⋯[ik].

Table Update.

We process the tuples ⟨i1,i2,…,ik⟩ ∈ [d1]×[d2]×⋯×[dk] in lexicographic order. For each tuple ⟨i1,i2,…,ik⟩, the algorithm invokes either the quantum subroutine DP(i1,i2,…,ik,f𝒫,op) or DP(i1,i2,…,ik,f𝒫,op,·), based on whether op ∈ {find,findAll} or op ∈ {min,max}, respectively. This allows us to compute op_{𝒳∈𝒞_{i1,i2,…,ik}} f𝒫(i1,i2,…,ik,𝒳), which is stored in an output register |d_{i1,i2,…,ik}⟩_o correlated with the address register |i1 i2 … ik⟩_a. Finally, the state |i1 i2 … ik⟩_a |d_{i1,i2,…,ik}⟩_o forms the input for a QRAM write operation that updates QD[i1][i2]⋯[ik].

Solution Retrieval.

Let ⟨i1,i2,…,ik⟩ index the entry of D whose value determines the solution to 𝒫. Then, a solution for 𝒫 can be obtained by reading the entry QD[i1][i2]⋯[ik].
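Ignoring the quantum machinery, the three steps above can be sketched classically as follows (a minimal reference implementation with our own names run_dp, init, gen_sets, and f; the quantum algorithm replaces the inner loop over the generating sets with the DP subroutine of Theorem 3, which evaluates op over the λ candidates in roughly square-root time):

```python
from itertools import product
import math

def run_dp(dims, init, gen_sets, f, op):
    """Classical reference for the three steps: Table Setup,
    Table Update, and Solution Retrieval (reading an entry)."""
    QD = {}
    order = list(product(*(range(d) for d in dims)))  # lexicographic
    for t in order:                                   # Table Setup
        QD[t] = init(t)
    for t in order:                                   # Table Update
        cands = [f(t, X, QD) for X in gen_sets(t)]
        if cands:
            QD[t] = op(cands)
    return QD                                         # Solution Retrieval

# Toy instance: fewest coins from {1, 3, 4} summing to each amount.
coins = (1, 3, 4)
QD = run_dp(
    dims=(8,),
    init=lambda t: 0 if t == (0,) else math.inf,
    gen_sets=lambda t: [{(t[0] - c,)} for c in coins if t[0] - c >= 0],
    f=lambda t, X, QD: 1 + QD[next(iter(X))],  # h = 1: singleton sets
    op=min,
)
```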

Next, we analyze the space complexity of Q𝒜. As in its classical counterpart, the space complexity of Q𝒜 is asymptotically bounded by the storage requirement of the dynamic programming table, which can be upper bounded by multiplying the number of entries of QD, the number w of bits needed to represent each of the values of the entries of QD (i.e., the solutions to the subproblems of 𝒫), and the maximum number M of solutions of any subproblem of 𝒫 for I. Also, recall that the entries of QD are in one-to-one correspondence with the nodes of G𝒫(I). Therefore, the space complexity of Q𝒜 is in O(w·|V(G𝒫(I))|), if op ∈ {find,min,max}, and in O(w·M·|V(G𝒫(I))|), if op = findAll.

Finally, we analyze the time complexity of Q𝒜. Obviously, the time needed to compute the function finit is in O(T𝒫). Therefore, the time complexity Time(I) of Q𝒜 on input I is asymptotically bounded by the time required to execute the Table Update step. We now give the corresponding bound. For simplicity, we assume first that op ∈ {find,min,max}. Let n = |V(G𝒫(I))| and recall that δ denotes the average degree of G𝒫(I). In order to upper bound Time(I), we proceed as follows. For i = 1,…,loglog n, let Vi be the subset of the nodes of V(G𝒫(I)) whose out-degree is between 2^(2^(i−1))·δ and 2^(2^i)·δ, and let V0 be the subset of the nodes of V(G𝒫(I)) whose out-degree is less than 2δ. Recall that, by Theorem 3, the quantum subroutine DP determines a value of f𝒫 for an entry D[i1][i2]⋯[ik] in Õ(T_{i1,i2,…,ik} + T𝒫·√(λ_{i1,i2,…,ik})) time, that T_{i1,i2,…,ik} ≤ T, and that, for each node n_{i1,i2,…,ik} of G𝒫(I), it holds that λ_{i1,i2,…,ik} ≤ degout(n_{i1,i2,…,ik}), since 𝒫 is simple. Therefore, we have:

Time(I) = Σ_{n_{i1,i2,…,ik}∈V(G𝒫(I))} Õ(T_{i1,i2,…,ik} + T𝒫·√(λ_{i1,i2,…,ik}))
 ≤ Σ_{n_{i1,i2,…,ik}∈V(G𝒫(I))} Õ(T + T𝒫·√(degout(n_{i1,i2,…,ik})))
 ≤ Õ(|V0|·(T + T𝒫·√(2δ))) + Σ_{i=1}^{loglog n} Õ(|Vi|·(T + T𝒫·√(2^(2^i)·δ)))
 = Õ(|V0|·T) + Õ(|V0|·T𝒫·√δ) + Σ_{i=1}^{loglog n} Õ(|Vi|·T) + (T𝒫·√δ)·Σ_{i=1}^{loglog n} Õ(|Vi|·2^(2^(i−1)))   (4)

Next, we provide an upper bound for |Vi|. Clearly, there exist at most 2n nodes n_{i1,i2,…,ik} of G𝒫(I) such that degout(n_{i1,i2,…,ik}) < 2δ (that is, |V0| ≤ 2n). Also, there exist at most n nodes n_{i1,i2,…,ik} of G𝒫(I) such that 2δ ≤ degout(n_{i1,i2,…,ik}) < 4δ (that is, |V1| ≤ n). Similarly, there exist at most n/2 nodes n_{i1,i2,…,ik} of G𝒫(I) such that 4δ ≤ degout(n_{i1,i2,…,ik}) < 16δ (that is, |V2| ≤ n/2). More in general, for i = 1,…,loglog n, there exist at most n/2^(2^(i−1)−1) = 2n/2^(2^(i−1)) nodes of G𝒫(I) such that 2^(2^(i−1))·δ ≤ degout(n_{i1,i2,…,ik}) < 2^(2^i)·δ (that is, |Vi| ≤ n/2^(2^(i−1)−1) = 2n/2^(2^(i−1))). In Equation 4, we can then upper bound |V0| with 2n, and |Vi| with 2n/2^(2^(i−1)), for i ≥ 1. Therefore, we have:

Time(I) = Õ(|V0|·T) + Õ(|V0|·T𝒫·√δ) + Σ_{i=1}^{loglog n} Õ(|Vi|·T) + (T𝒫·√δ)·Σ_{i=1}^{loglog n} Õ(|Vi|·2^(2^(i−1)))
 ≤ Õ(n·T) + Õ(n·T𝒫·√δ) + Õ(n·T) + (T𝒫·√δ)·Σ_{i=1}^{loglog n} Õ(|Vi|·2^(2^(i−1)))
 ≤ Õ(n·T) + Õ(n·T𝒫·√δ) + (T𝒫·√δ)·Σ_{i=1}^{loglog n} Õ((2n/2^(2^(i−1)))·2^(2^(i−1)))
 = Õ(n·T) + Õ(n·T𝒫·√δ) + (T𝒫·√δ)·Õ(n)·Σ_{i=1}^{loglog n} 2
 = Õ(n·(T + (T𝒫·√δ)·(1 + 2·loglog n))) = Õ(n·(T + T𝒫·√δ))   (5)

Altogether, Equation 5 shows the desired bound for the running time of Q𝒜 when op ∈ {find,min,max}. For the case when op = findAll, the same analysis applies with √δ replaced by √(δM), according to Theorem 3. This concludes the proof.
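The degree-bucketing bounds on |Vi| used in the proof follow from a Markov-type averaging argument; the following quick numeric sanity check (ours, on an arbitrary synthetic out-degree sequence) illustrates them:

```python
import math
import random

# With delta the average degree 2m/n (in- plus out-degrees), at most
# n / 2**(2**(i-1) - 1) nodes can have out-degree >= 2**(2**(i-1)) * delta,
# since the out-degrees sum to m.
random.seed(7)
n = 1024
out_deg = [random.choice([0, 1, 2, 3, 200]) for _ in range(n)]
m = sum(out_deg)
delta = 2 * m / n

ok = True
for i in range(1, int(math.log2(math.log2(n))) + 1):
    threshold = 2 ** (2 ** (i - 1)) * delta
    V_i = sum(1 for d in out_deg if d >= threshold)  # size of bucket V_i and above
    ok = ok and V_i <= n / 2 ** (2 ** (i - 1) - 1)
```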

5 Applications of the Quantum Dynamic Programming Framework

In this section, we consider the Single-Source Shortest Paths (SSSP) problem and the Membership in Context-Free Language (MCFL) problem. Since both the celebrated Bellman-Ford algorithm for SSSP and the Cocke-Younger-Kasami algorithm for MCFL are based on dynamic programming recurrences that respect Equation 1, they provide primary examples of computational problems amenable to a quantum speedup via Theorem 4.

In the following, for the SSSP and the MCFL problems, and in the full version [16] for the remaining problems listed in Table 1, we demonstrate the applicability of Theorem 4 as follows. First, we present the corresponding classical dynamic programming algorithm. In particular, we describe (i) the subproblems whose solution is stored in the dynamic programming table, (ii) the optimal substructure property of the problem, and (iii) the overlap among subproblems. Then, if needed, we transform the recurrence relation of the dynamic programming algorithm in such a way that it matches the pattern of Equation 1, and we define the dependency set S_{i1,i2,…,ik}, the generating set C_{i1,i2,…,ik}, the dependency index h, and the function f𝒫. Also, we bound the average degree of the dependency digraph G𝒫(I) in terms of the size of the instances. Further, we show that the dynamic programming algorithm is simple. Finally, we bound the time complexity of the functions f𝒫, fC, and γ. Altogether, this allows us to establish our quantum speedups via Theorem 4.

5.1 Quantum Bellman-Ford for Single-Source Shortest Paths

(a) An instance I=(G,w,s=vs) of SSSP.
(b) The dependency graph G𝒫(I) of I and the dynamic programming table D for instance I. Each node of G𝒫(I) is placed inside the corresponding entry of D.
Figure 1: Illustrations for problem SSSP.

Let G=(V,E) be a weighted n-vertex m-edge digraph, where each arc e = uv ∈ E, directed from u to v, has an associated weight w(e). Let p = (u0,u1,…,uk) be a directed path from u0 to uk. The weight of p is Σ_{(ui ui+1)∈E(p)} w(ui ui+1), and the length of p is |E(p)| = k. A shortest path from u to v is a minimum-weight directed path from u to v. The shortest path distance from u to v is the weight of a shortest path from u to v.

Next, we define the Single-Source Shortest Paths problem (see Figure 1(a) for the illustration of an instance of this problem).

Subproblems.

For all i ∈ [n] and for all vj ∈ V, a subproblem is defined as finding the smallest weight of a directed path, of length at most i, from the source vertex s to the vertex vj. Since the maximum length of a shortest path is at most n−1, we have n² subproblems.

Optimal substructure.

Let D be a dynamic programming table of size n×n, whose entries D[i][j] store the smallest weight of a directed path, of length at most i, from s to vj, for all i[n] and for all vjV. If the smallest-weight path ps,vj, of length at most i, from s to vj uses uvj as its last arc, then the subpath of ps,vj from s to u must be a smallest-weight path, of length at most i1, from s to u. Consider a smallest-weight path P, of length at most i, from s to vj, whose weight is stored in D[i][j]:

  • If P is of length at most i1, then D[i][j]=D[i1][j];

  • If P is of length (at most) i and its last edge is vkvj, then D[i][j]=D[i1][k]+w(vkvj).

Overlapping subproblems.

The value D[i1][k] must be accessed to compute all entries D[i][y], where vkvyE.

The above yields the following dynamic programming algorithm for SSSP, due to Bellman and Ford [8, 26].

DP 5.1.

(SSSP) The entries of D can be computed as follows:

Base case:

if i=0:

D[i][j] = { ∞, for all vj ≠ s;  0, if vj = s }
Recursive case:

if i>0:

D[i][j] = min( D[i−1][j], min_{(vk vj)∈E} ( D[i−1][k] + w(vk vj) ) )   (6)

Note that Equation 6 can be rewritten as:

D[i][j] = min_{(vk vj)∈E} ( min( D[i−1][j], D[i−1][k] + w(vk vj) ) ),

which matches the pattern of Equation 1. In particular, we have that: S_{i,j} = {(i−1,k) : (vk vj)∈E}, C_{i,j} = {{(i−1,k)} : (vk vj)∈E}, h=1, and f𝒫(i,j,{(i−1,k)}) = min(D[i−1][j], D[i−1][k]+w(vk vj)). Note that SSSP is a simple problem, since each entry of C_{i,j} stems from a distinct entry of S_{i,j}. Also observe that the values in D are bounded by W = (n−1)·max_{(vi vj)∈E} w(vi vj). As in [31, 32, 37], we assume that the weights defined by w are in O(n^c), where c is a constant. This implies that each of the entries of D can be represented using log W ∈ O(log n) bits. The times T𝒫, T_{fC}, and Tγ needed for computing quantumly f𝒫, fC, and γ are then O(log W) = O(log n), O(log n), and O(log n), respectively. Thus, we have that the time T = T_{fC} + Tγ (see the statement of Theorem 3) is in O(log n). Finally, we bound the average degree of G𝒫(I). Recall that an instance I of SSSP is a tuple I=(G,w,s). Note that |V(G𝒫(I))| is the number of subproblems, that is, n². The n nodes of V(G𝒫(I)) corresponding to the subproblems D[0][⋅] have outdegree 0. Instead, by the definition of C_{i,j}, each node n_{i,j} of G𝒫(I) corresponding to a subproblem D[i][j] has outdegree equal to degout(vj); refer to Figure 1(b). Therefore, for any i ∈ {1,…,n−1}, we have that Σ_{vj∈V} degout(n_{i,j}) = m. It follows that the average degree δ of G𝒫(I) is equal to 2m(n−1)/n² = (2mn−2m)/n² < 2m/n. Therefore, by Theorem 4, there exists a quantum dynamic programming algorithm that solves SSSP for an n-vertex m-edge digraph, using QRAM, in Õ(n√(nm)) time using O(n²) space.
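For concreteness, a minimal classical implementation of DP 5.1 (ours; the quantum algorithm of Theorem 4 fills the same table, but evaluates each min via the quantum subroutine) is:

```python
import math

def bellman_ford_table(n, arcs, s):
    """Classical DP 5.1: D[i][j] is the smallest weight of a directed
    path of length at most i from s to vertex j (vertices are 0..n-1).
    arcs: dict mapping (u, v) to the weight of arc u -> v."""
    INF = math.inf
    D = [[INF] * n for _ in range(n)]
    D[0][s] = 0                                   # base case
    for i in range(1, n):                         # recursive case (Eq. 6)
        for j in range(n):
            best = D[i - 1][j]
            for (u, v), w in arcs.items():
                if v == j:
                    best = min(best, D[i - 1][u] + w)
            D[i][j] = best
    return D

# Small instance: 0 -> 1 (weight 4), 0 -> 2 (weight 1), 2 -> 1 (weight 2).
arcs = {(0, 1): 4, (0, 2): 1, (2, 1): 2}
D = bellman_ford_table(3, arcs, s=0)
dist = D[2]   # distances via paths of length <= n-1
```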

Finally, we compare the current-best classical time bound for the SSSP problem with our quantum bound. We remark that the running time of the Bellman-Ford algorithm is O(nm), and thus O(n³) in the worst case. The current-fastest classical algorithm for SSSP, due to Huang, Jin, and Quanrud [42], runs in Õ(m·n^(4/5)) time. Thus, our quantum dynamic programming version of the Bellman-Ford algorithm improves upon [42] when m ∈ Ω(n^(7/5)) = Ω(n^1.4).

We remark that, as in the classical setting, by explicitly computing the dynamic programming table, our algorithm allows us to detect a negative cycle in O(m) time.
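The detection amounts to one extra relaxation pass over the last row of the table; a small sketch (ours, with the last table rows of two-vertex instances written out by hand) is:

```python
def has_negative_cycle(last_row, arcs):
    """One more relaxation pass over D[n-1][.]: a further improvement
    of any entry witnesses a negative cycle reachable from the source.
    Runs in O(m) time for m = len(arcs)."""
    return any(last_row[u] + w < last_row[v] for (u, v), w in arcs.items())

# Cycle 0 -> 1 -> 0 of total weight -2 vs. a single non-negative arc.
neg = has_negative_cycle([0, -1], {(0, 1): -1, (1, 0): -1})
pos = has_negative_cycle([0, 1], {(0, 1): 1})
```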

5.2 Quantum Cocke-Younger-Kasami for Membership in Context-Free Language

A formal grammar G is defined [38] as a 4-tuple (V,Σ,R,S), where: (i) V is the finite set of non-terminal symbols; (ii) Σ is the finite set of terminal symbols; (iii) R is the set of production rules; and (iv) S ∈ V is the start symbol.

Let G be a Context-Free Grammar (CFG) in Chomsky Normal Form (CNF).

A CFG is in CNF if all its production rules are of one of the following forms: (1) A→BC, where A, B, and C are non-terminal symbols (variables), i.e., a non-terminal symbol A can be replaced by two non-terminal symbols; (2) A→a, where A is a non-terminal symbol and a is a terminal symbol, i.e., a non-terminal symbol A can be replaced by a single terminal symbol a.
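A small checker for the two admissible rule shapes (our own encoding of productions as (lhs, rhs) pairs; purely illustrative) is:

```python
def is_cnf(rules, nonterminals):
    """Check the two CNF rule shapes described above:
    A -> BC (rhs = two non-terminals) or A -> a (rhs = one terminal)."""
    for lhs, rhs in rules:
        if lhs not in nonterminals:
            return False
        if len(rhs) == 2 and all(x in nonterminals for x in rhs):
            continue  # form (1): A -> BC
        if len(rhs) == 1 and rhs[0] not in nonterminals:
            continue  # form (2): A -> a
        return False
    return True

nts = {"S", "A", "B"}
ok = is_cnf([("S", ("A", "B")), ("A", ("a",)), ("B", ("b",))], nts)
bad = is_cnf([("S", ("a", "B"))], nts)  # mixed right-hand side: not CNF
```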
Let L(G) be the language composed of the strings that can be generated by the grammar G.

Next, we define the Membership in Context-Free Language problem: given a CFG G in CNF and a string w = w1 w2 ⋯ wn of length n, decide whether w ∈ L(G).

Subproblems.

A subproblem is defined as determining whether a consecutive substring of w can be derived from a non-terminal symbol A. Since a string of n symbols has n(n+1)/2 consecutive substrings, we have n(n+1)/2 subproblems.

Optimal substructure.

Let D be a dynamic programming table of size n×n (where the entries above the main diagonal are not used), whose entries D[i][j] store the set of non-terminal symbols that can generate the substring wj wj+1 ⋯ wj+i−1 of w, i.e., the substring of w of length i that starts at the j-th symbol of w. The set of non-terminal symbols in D[i][j] is defined as follows:

  • D[i][j] = { A : (A→BC)∈G ∧ ∃k∈[1,i) : (B∈D[k][j]) ∧ (C∈D[i−k][j+k]) }

Overlapping subproblems.

The value D[k][j] must be accessed to compute all the entries of the form D[u][j], with u>k; whereas the value D[i−k][j+k] must be accessed to compute all the entries of the form D[u][j+k], with u>i−k.

The above yields the following dynamic programming algorithm for MCFL, due to Cocke, Younger, and Kasami [61, 44, 18].

DP 5.2.

(MCFL) The entries of D can be computed as follows:

Base case:

if i=1:

D[i][j] = { A | (A→wj)∈G }
Recursive case:

if i>1:

D[i][j] = findAll_{1≤k<i} { A : (A→BC)∈G ∧ (B∈D[k][j]) ∧ (C∈D[i−k][j+k]) }   (7)

Note that Equation 7 matches the pattern of Equation 1 of Theorem 4. Observe that w can be generated from G if and only if S ∈ D[n][1]. In particular, we have that: S_{i,j} = {(k,j),(i−k,j+k) : 1≤k<i}, C_{i,j} = {{(k,j),(i−k,j+k)} : 1≤k<i}, h=2, and f𝒫(i,j,{(k,j),(i−k,j+k)}) = { A : (A→BC)∈G ∧ (B∈D[k][j]) ∧ (C∈D[i−k][j+k]) }. Note that MCFL is a simple problem, since each entry of C_{i,j} stems from a distinct entry of S_{i,j}. The time needed for computing quantumly f𝒫, fC, and γ is O(1), O(log n), and O(log n), respectively. Therefore, we have that T ∈ O(log n).

Figure 2: Graph G𝒫(I) for the dynamic programming algorithm that solves MCFL.

Finally, we bound the average degree of G𝒫(I). Recall that an instance I of MCFL is a pair (G,w). Note that |V(G𝒫(I))| is the number of subproblems defined by I, that is, n(n+1)/2. Note that the n nodes of V(G𝒫(I)) corresponding to the subproblems D[1][j] have outdegree 0. Instead, by the definition of C_{i,j}, each node n_{i,j} of G𝒫(I) corresponding to a subproblem D[i][j] has outdegree equal to 2(i−1); refer to Figure 2. Therefore, for any i ∈ {1,…,n}, we have that Σ_{j=1}^{n+1−i} degout(n_{i,j}) = 2(n+1−i)(i−1). The total number of edges of G𝒫(I) is equal to

Σ_{i=1}^{n} 2(n+1−i)(i−1) ≤ 2n·Σ_{i=1}^{n} i = 2n·(n(n+1)/2) = n²(n+1)

It follows that the average degree δ of G𝒫(I) is upper bounded by (n²(n+1))/(n(n+1)/2) = 2n. Therefore, by Theorem 4, there exists a quantum dynamic programming algorithm that solves MCFL for a constant-size context-free grammar G and a string w of length n, using QRAM, in Õ(n²√n) time using O(n²) space.
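For concreteness, a minimal classical implementation of DP 5.2 (ours; Theorem 4 evaluates the findAll over the split positions k quantumly) is:

```python
def cyk_table(w, unit_rules, binary_rules):
    """Classical DP 5.2 (CYK): D[i, j] is the set of non-terminals
    deriving the substring of w of length i starting at symbol j
    (1-based, matching the text).
    unit_rules:   set of (A, a)    for productions A -> a
    binary_rules: set of (A, B, C) for productions A -> BC"""
    n = len(w)
    D = {}
    for j in range(1, n + 1):                  # base case, i = 1
        D[1, j] = {A for (A, a) in unit_rules if a == w[j - 1]}
    for i in range(2, n + 1):                  # recursive case (Eq. 7)
        for j in range(1, n - i + 2):
            D[i, j] = {A for (A, B, C) in binary_rules
                         for k in range(1, i)
                         if B in D[k, j] and C in D[i - k, j + k]}
    return D

# CNF grammar for { a^m b^m : m >= 1 }:
#   S -> AT | AB,  T -> SB,  A -> a,  B -> b
unit = {("A", "a"), ("B", "b")}
binary = {("S", "A", "T"), ("S", "A", "B"), ("T", "S", "B")}
w = "aabb"
D = cyk_table(w, unit, binary)
accepted = "S" in D[len(w), 1]   # w in L(G) iff S is in D[n][1]
```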

Finally, we compare the current-best classical time bound for the MCFL problem with our quantum bound. We remark that the running time of the CYK algorithm is O(n³). Valiant demonstrated in 1975 that context-free language recognition can be performed at least as efficiently as Boolean matrix multiplication [57]. Since Boolean matrix multiplication can be done in O(n^2.81) time using Strassen's algorithm [55], this implies an indirect O(n^2.81)-time algorithm for testing membership in a context-free language. However, such an algorithm does not allow us to directly retrieve the sequence of productions used to derive w. The current-fastest classical algorithm for matrix multiplication has a time complexity of O(n^2.371552) [58], building upon the Coppersmith-Winograd algorithm [19]. Observe that the constant factors hidden by the big-O notation are alone so substantial that these fast matrix multiplication algorithms are impractical for matrix sizes manageable by current computers. Finally, we remark that our approach is methodologically simpler than relying upon matrix multiplication, as it directly accelerates an "easy" algorithm for context-free parsing. In fact, using matrix multiplication for parsing requires a more complex, three-step process: (1) transforming the parsing instance into a format suitable for matrix multiplication, (2) applying the sophisticated (and practically cumbersome) matrix multiplication algorithm described in [58], and (3) converting the output of the recognition process (provided by the matrix multiplication) into a parsing solution (note that Ruzzo showed that parsing is slightly harder than recognition by a logarithmic factor [52]).

6 Conclusions and Open Problems

In this paper, we presented a framework that systematically transforms classical dynamic programming algorithms that exhibit specific characteristics in the computation of their dynamic programming table into accelerated quantum counterparts. These characteristics pertain to the structure of the dependency set, specifying the global set of entries involved in computing a table entry, and to the structure of the generating set, specifying the groups of entries simultaneously involved in providing a candidate value for a table entry.

By leveraging quantum search primitives, such as find, findAll, min, and max, and the QRAM, we transform dynamic programming recurrence relations into efficient quantum subroutines. Our approach lowers the computational cost of constructing each entry in the dynamic programming table, often achieving a significant speedup over the best-known classical algorithms. We demonstrate the versatility of this framework by applying it to several well-known problems, including Single-Source Shortest Paths.

This work establishes a foundation for developing more efficient quantum algorithms for a class of dynamic programming problems, paving the way for several research directions. A natural extension is to investigate the applicability of our framework to dynamic programming algorithms that do not have the above characteristics. It is also worth investigating whether similar quantum speedups can be achieved for algorithms that iteratively refine estimates of the optimal solution, such as the Floyd-Warshall algorithm. Finally, a key challenge is optimizing the space complexity of quantum dynamic programming algorithms. Due to the limitations of quantum memory, it is crucial to develop strategies that avoid storing the entire dynamic programming table. This may involve computing entries on-the-fly, leveraging quantum speedups for efficient on-demand computation, and minimizing the number of entries stored in superposition.

References

  • [1] Scott Aaronson. Introduction to quantum information science lecture notes, April 2019. URL: https://www.scottaaronson.com/qclec.pdf.
  • [2] Scott Aaronson, Nai-Hui Chia, Han-Hsuan Lin, Chunhao Wang, and Ruizhe Zhang. On the quantum complexity of closest pair and related problems. In Shubhangi Saraf, editor, 35th Computational Complexity Conference, CCC 2020, volume 169 of LIPIcs, pages 16:1–16:43. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2020. doi:10.4230/LIPICS.CCC.2020.16.
  • [3] Andris Ambainis. Quantum search algorithms. SIGACT News, 35(2):22–35, June 2004. doi:10.1145/992287.992296.
  • [4] Andris Ambainis, Kaspars Balodis, Janis Iraids, Martins Kokainis, Krisjanis Prusis, and Jevgenijs Vihrovs. Quantum speedups for exponential-time dynamic programming algorithms. In Timothy M. Chan, editor, SODA 2019, pages 1783–1793. SIAM, 2019. doi:10.1137/1.9781611975482.107.
  • [5] Andris Ambainis and Nikita Larka. Quantum algorithms for computational geometry problems. In Steven T. Flammia, editor, 15th Conference on the Theory of Quantum Computation, Communication and Cryptography, TQC 2020, volume 158 of LIPIcs, pages 9:1–9:10. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2020. doi:10.4230/LIPICS.TQC.2020.9.
  • [6] Andris Ambainis and Ilja Repko. Quantum algorithm for the domatic number problem. Balt. J. Mod. Comput., 12(3), 2024. doi:10.22364/BJMC.2024.12.3.03.
  • [7] Vladimirs Andrejevs, Aleksandrs Belovs, and Jevgenijs Vihrovs. Quantum algorithms for hopcroft’s problem. In Rastislav Královic and Antonín Kucera, editors, 49th International Symposium on Mathematical Foundations of Computer Science, MFCS 2024, volume 306 of LIPIcs, pages 9:1–9:16. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2024. doi:10.4230/LIPICS.MFCS.2024.9.
  • [8] Richard Bellman. On a routing problem. Quarterly of Applied Mathematics, 16(1):87–90, 1958. URL: http://www.jstor.org/stable/43634538.
  • [9] Richard Bellman. On the approximation of curves by line segments using dynamic programming. Commun. ACM, 4(6):284, 1961. doi:10.1145/366573.366611.
  • [10] Marc Benkert, Herman J. Haverkort, Moritz Kroll, and Martin Nöllenburg. Algorithms for multi-criteria one-sided boundary labeling. In Seok-Hee Hong, Takao Nishizeki, and Wu Quan, editors, Graph Drawing, 15th International Symposium, GD 2007, volume 4875 of Lecture Notes in Computer Science, pages 243–254. Springer, 2007. doi:10.1007/978-3-540-77537-9_25.
  • [11] Marc Benkert, Herman J. Haverkort, Moritz Kroll, and Martin Nöllenburg. Algorithms for multi-criteria boundary labeling. J. Graph Algorithms Appl., 13(3):289–317, 2009. doi:10.7155/JGAA.00189.
  • [12] Gilles Brassard, Peter Høyer, and Alain Tapp. Quantum counting. In Kim Guldstrand Larsen, Sven Skyum, and Glynn Winskel, editors, ICALP’98, volume 1443 of LNCS, pages 820–831. Springer, 1998. doi:10.1007/BFB0055105.
  • [13] Susanna Caroppo, Giordano Da Lozzo, and Giuseppe Di Battista. Quantum algorithms for one-sided crossing minimization. In Stefan Felsner and Karsten Klein, editors, 32nd International Symposium on Graph Drawing and Network Visualization, GD 2024, volume 320 of LIPIcs, pages 20:1–20:9. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2024. doi:10.4230/LIPICS.GD.2024.20.
  • [14] Susanna Caroppo, Giordano Da Lozzo, and Giuseppe Di Battista. Quantum algorithms for one-sided crossing minimization. Theoretical Computer Science, 1052:115424, 2025. doi:10.1016/j.tcs.2025.115424.
  • [15] Susanna Caroppo, Giordano Da Lozzo, and Giuseppe Di Battista. Quantum graph drawing. J. Graph Algorithms Appl., 29(2):3–47, 2025. doi:10.7155/JGAA.V29I2.3039.
  • [16] Susanna Caroppo, Giordano Da Lozzo, Giuseppe Di Battista, Michael T. Goodrich, and Martin Nöllenburg. Quantum speedups for polynomial-time dynamic programming algorithms, 2025. arXiv:2507.00823.
  • [17] Susanna Caroppo, Giordano Da Lozzo, and Giuseppe Di Battista. Quantum graph drawing. In Ryuhei Uehara, Katsuhisa Yamanaka, and Hsu-Chun Yen, editors, Proc. 18th International Conference and Workshops on Algorithms and Computation, WALCOM 2024, volume 14549 of Lecture Notes in Computer Science, pages 32–46. Springer, 2024. doi:10.1007/978-981-97-0566-5_4.
  • [18] John Cocke. Programming languages and their compilers: Preliminary notes. New York University, 1969.
  • [19] Don Coppersmith and Shmuel Winograd. Matrix multiplication via arithmetic progressions. J. Symb. Comput., 9(3):251–280, 1990. doi:10.1016/S0747-7171(08)80013-2.
  • [20] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, 3rd Edition. MIT Press, 2009. URL: http://mitpress.mit.edu/books/introduction-algorithms.
  • [21] Sanjoy Dasgupta, Christos H. Papadimitriou, and Umesh V. Vazirani. Algorithms. McGraw-Hill, 2008.
  • [22] Christopher M. Dawson and Michael A. Nielsen. The solovay-kitaev algorithm. Quantum Inf. Comput., 6(1):81–95, 2006. doi:10.26421/QIC6.1-6.
  • [23] Christoph Dürr and Peter Høyer. A quantum algorithm for finding the minimum. CoRR, quant-ph/9607014, 1996. arXiv:quant-ph/9607014.
  • [24] Jeff Erickson. Algorithms. Independently published, 2019. URL: http://jeffe.cs.illinois.edu/teaching/algorithms/.
  • [25] Geeks for Geeks. Largest divisible subset in array. https://www.geeksforgeeks.org/largest-divisible-subset-array/, 2024. Accessed: (21/02/2025).
  • [26] L. R. Ford. Network Flow Theory, pages 253–313. Springer New York, New York, NY, 1997. doi:10.1007/0-387-22633-8_9.
  • [27] Shion Fukuzawa, Michael T. Goodrich, and Sandy Irani. Quantum Tutte embeddings. CoRR, abs/2307.08851, 2023. doi:10.48550/arXiv.2307.08851.
  • [28] Shion Fukuzawa, Michael T. Goodrich, and Sandy Irani. Quantum Tutte embeddings. In Michael A. Bekos and Markus Chimani, editors, 31st International Symposium on Graph Drawing and Network Visualization, GD, volume 14466 of LNCS, pages 241–243. Springer, 2023.
  • [29] Shion Fukuzawa, Michael T. Goodrich, and Sandy Irani. Quantum combine and conquer and its applications to sublinear quantum convex hull and maxima set construction. In Oswin Aichholzer and Haitao Wang, editors, 41st International Symposium on Computational Geometry, SoCG 2025, volume 332 of LIPIcs, pages 51:1–51:15. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2025. doi:10.4230/LIPICS.SOCG.2025.51.
  • [30] Bartholomew Furrow. A panoply of quantum algorithms. Quantum Inf. Comput., 8(8):834–859, 2008. doi:10.26421/QIC8.8-9-11.
  • [31] Harold N. Gabow. Scaling algorithms for network problems. J. Comput. Syst. Sci., 31(2):148–168, 1985. doi:10.1016/0022-0000(85)90039-X.
  • [32] Harold N. Gabow and Robert Endre Tarjan. Faster scaling algorithms for network problems. SIAM J. Comput., 18(5):1013–1036, 1989. doi:10.1137/0218069.
  • [33] Andreas Gemsa, Benjamin Niedermann, and Martin Nöllenburg. Trajectory-based dynamic map labeling. In Leizhen Cai, Siu-Wing Cheng, and Tak Wah Lam, editors, ISAAC 2013, volume 8283 of LNCS, pages 413–423. Springer, 2013. doi:10.1007/978-3-642-45030-3_39.
  • [34] I. M. Georgescu, S. Ashhab, and Franco Nori. Quantum simulation. Rev. Mod. Phys., 86:153–185, March 2014. doi:10.1103/RevModPhys.86.153.
  • [35] Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone. Quantum random access memory. Phys. Rev. Lett., 100:160501, April 2008. doi:10.1103/PhysRevLett.100.160501.
  • [36] Adam Glos, Martins Kokainis, Ryuhei Mori, and Jevgenijs Vihrovs. Quantum speedups for dynamic programming on n-dimensional lattice graphs. In Filippo Bonchi and Simon J. Puglisi, editors, MFCS 2021, volume 202 of LIPIcs, pages 50:1–50:23. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2021. doi:10.4230/LIPICS.MFCS.2021.50.
  • [37] Andrew V. Goldberg. Scaling algorithms for the shortest paths problem. SIAM J. Comput., 24(3):494–504, 1995. doi:10.1137/S0097539792231179.
  • [38] Raymond Greenlaw and James H. Hoover. Fundamentals of the Theory of Computation: Principles and Practice. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1998.
  • [39] Lov K. Grover. A fast quantum mechanical algorithm for database search. In Gary L. Miller, editor, STOC 1996, pages 212–219. ACM, 1996. doi:10.1145/237814.237866.
  • [40] Yijie Han and Tadao Takaoka. An O(n3loglogn/log2n) time algorithm for all pairs shortest paths. J. Discrete Algorithms, 38-41:9–19, 2016. doi:10.1016/J.JDA.2016.09.001.
  • [41] John E. Hopcroft, Rajeev Motwani, and Jeffrey D. Ullman. Introduction to automata theory, languages, and computation, 3rd Edition. Pearson international edition. Addison-Wesley, 2007.
  • [42] Yufan Huang, Peter Jin, and Kent Quanrud. Faster single-source shortest paths with negative real weights via proper hop distance. In Yossi Azar and Debmalya Panigrahi, editors, SODA 2025, pages 5239–5244. SIAM, 2025. doi:10.1137/1.9781611978322.178.
  • [43] Daniel Jurafsky and James H. Martin. Speech and language processing - an introduction to natural language processing, computational linguistics, and speech recognition. Prentice Hall series in artificial intelligence. Prentice Hall, 2000.
  • [44] Tadao Kasami. An efficient recognition and syntax-analysis algorithm for context-free languages. Coordinated Science Laboratory Report no. R-257, 1966.
  • [45] Kamil Khadiev and Liliya Safina. Quantum algorithm for dynamic programming approach for dags. applications for zhegalkin polynomial evaluation and some problems on dags. In Ian McQuillan and Shinnosuke Seki, editors, UCNC 2019, volume 11493 of LNCS, pages 150–163. Springer, 2019. doi:10.1007/978-3-030-19311-9_13.
  • [46] Alexei Y. Kitaev, A. H. Shen, and Mikhail N. Vyalyi. Classical and Quantum Computation, volume 47 of Graduate studies in mathematics. American Mathematical Society, 2002. URL: https://bookstore.ams.org/gsm-47/.
  • [47] Jon M. Kleinberg and Éva Tardos. Algorithm design. Addison-Wesley, 2006.
  • [48] Vladislavs Klevickis, Krisjanis Prusis, and Jevgenijs Vihrovs. Quantum speedups for treewidth. In François Le Gall and Tomoyuki Morimae, editors, TQC 2022, volume 232 of LIPIcs, pages 11:1–11:18. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2022. doi:10.4230/LIPICS.TQC.2022.11.
  • [49] G.T. Klincsek. Minimal triangulations of polygonal domains. In Peter L. Hammer, editor, Combinatorics 79, volume 9 of Annals of Discrete Mathematics, pages 121–123. Elsevier, 1980. doi:10.1016/S0167-5060(08)70044-X.
  • [50] Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information (10th Anniversary edition). Cambridge University Press, 2016. URL: https://www.cambridge.org/de/academic/subjects/physics/quantum-physics-quantum-information-and-quantum-computation/quantum-computation-and-quantum-information-10th-anniversary-edition.
  • [51] Koustubh Phalak, Avimita Chatterjee, and Swaroop Ghosh. Quantum random access memory for dummies. Sensors, 23(17):7462, 2023. doi:10.3390/S23177462.
  • [52] Walter L. Ruzzo. On the complexity of general context-free language parsing and recognition (extended abstract). In Hermann A. Maurer, editor, Automata, Languages and Programming, 6th Colloquium, volume 71 of LNCS, pages 489–497. Springer, 1979. doi:10.1007/3-540-09510-1_39.
  • [53] Kazuya Shimizu and Ryuhei Mori. Exponential-time quantum algorithms for graph coloring problems. Algorithmica, 84(12):3603–3621, 2022. doi:10.1007/S00453-022-00976-2.
  • [54] Peter W. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM Rev., 41(2):303–332, 1999. doi:10.1137/S0036144598347011.
  • [55] Volker Strassen. Gaussian elimination is not optimal. Matematika, 14(3):127–128, 1970.
  • [56] Vincent T’kindt, Federico Della Croce, and Mathieu Liedloff. Moderate exponential-time algorithms for scheduling problems. Ann. Oper. Res., 343(2):753–783, 2024. doi:10.1007/S10479-024-06289-7.
  • [57] Leslie G. Valiant. General context-free recognition in less than cubic time. J. Comput. Syst. Sci., 10(2):308–315, 1975. doi:10.1016/S0022-0000(75)80046-8.
  • [58] Virginia Vassilevska Williams, Yinzhan Xu, Zixuan Xu, and Renfei Zhou. New bounds for matrix multiplication: from alpha to omega. In David P. Woodruff, editor, SODA 2024, pages 3792–3835. SIAM, 2024. doi:10.1137/1.9781611977912.134.
  • [59] Gerhard J. Woeginger. Open problems around exact algorithms. Discret. Appl. Math., 156(3):397–405, 2008. doi:10.1016/J.DAM.2007.03.023.
  • [60] Shifan Xu, Connor T. Hann, Ben Foxman, Steven M. Girvin, and Yongshan Ding. Systems architecture for quantum random access memory. In MICRO 2023, pages 526–538. ACM, 2023. doi:10.1145/3613424.3614270.
  • [61] Daniel H. Younger. Recognition and parsing of context-free languages in time n³. Inf. Control., 10(2):189–208, 1967. doi:10.1016/S0019-9958(67)80007-X.
  • [62] Mohammed Zidan, Abdel-Haleem Abdel-Aty, Ashraf Khalil, Mahmoud A. Abdel-Aty, and Hichem Eleuch. A novel efficient quantum random access memory. IEEE Access, 9:151775–151780, 2021. doi:10.1109/ACCESS.2021.3119588.