On Problems Equivalent to (min,+)-Convolution

In recent years, significant progress has been made in explaining the apparent hardness of improving upon the naive solutions for many fundamental polynomially solvable problems. This progress has come in the form of conditional lower bounds—reductions from a problem assumed to be hard. The hard problems include 3SUM, All-Pairs Shortest Path, SAT, Orthogonal Vectors, and others. In the (min,+)-convolution problem, the goal is to compute a sequence (c[i])_{i=0}^{n−1}, where c[k] = min_{i=0,...,k} {a[i] + b[k−i]}, given sequences (a[i])_{i=0}^{n−1} and (b[i])_{i=0}^{n−1}. This can easily be done in O(n^2) time, but no O(n^{2−ε}) algorithm is known for any ε > 0. In this article, we undertake a systematic study of the (min,+)-convolution problem as a hardness assumption. First, we establish the equivalence of this problem to a group of other problems, including variants of the classic knapsack problem and problems related to subadditive sequences. The (min,+)-convolution problem has been used as a building block in algorithms for many problems, notably problems in stringology. It has also appeared as an ad hoc hardness assumption. Second, we investigate some of these connections and provide new reductions and other results. We also explain why replacing this assumption with the Strong Exponential Time Hypothesis might not be possible for some problems.


INTRODUCTION

Hardness in P
For many problems, there exist ingenious algorithms that significantly improve upon the naive approach in terms of time complexity. On the other hand, for some fundamental problems, the naive algorithms are still the best known or have been improved upon only slightly. To some extent, this has been explained by the P ≠ NP conjecture. However, for many problems, even the naive approaches lead to polynomial algorithms, and the P ≠ NP conjecture does not seem to be particularly useful for proving polynomial lower bounds.
In recent years, significant progress has been made in establishing such bounds, conditioned on conjectures other than P ≠ NP. Each conjecture claims time complexity lower bounds for a different problem. The main conjectures are as follows. First, the conjecture that there is no O(n^{2−ε}) algorithm for the 3SUM problem implies hardness for problems in computational geometry [27] and dynamic algorithms [41]. Second, the conjecture that there is no O(n^{3−ε}) algorithm for All-Pairs Shortest Path (APSP) implies the hardness of determining the graph radius and graph median and the hardness of some dynamic problems (see [47] for a survey of related results). Finally, the Strong Exponential Time Hypothesis (SETH) introduced in [31,32] has been used extensively to prove the hardness of parametrized problems [21,37] and has recently led to polynomial lower bounds via the intermediate Orthogonal Vectors problem (see [45]). These include bounds for the Edit Distance [4], Longest Common Subsequence [1,10], and others [47].
It is worth noting that in many cases, the results mentioned indicate not only the hardness of the problem in question but also that it is computationally equivalent to the underlying hard problem. This leads to clusters of equivalent problems being formed, each cluster corresponding to a single hardness assumption (see [47, Figure 1]).
As Christos H. Papadimitriou stated, "There is nothing wrong with trying to prove that P=NP by developing a polynomial-time algorithm for an NP-complete problem. The point is that without an NP-completeness proof we would be trying the same thing without knowing it!" [40]. In the same spirit, these new conditional hardness results have cleared the polynomial landscape by showing that there really are not that many hard problems (for the recent background, see [48]).

Hardness of MinConv
In this article, we propose yet another hardness assumption: the MinConv problem. This problem has previously been used as a hardness assumption for at least two specific problems [5,36], but to the best of our knowledge, no attempts have been made to systematically study the neighborhood of this problem in the polynomial complexity landscape.
To be more precise, in all problem definitions, we assume that the input sequences consist of integers in the range [−W, W]. Following the design of the APSP conjecture [49], we allow polylog(W) factors in the definition of a subquadratic running time.
Let us first look at the place occupied by MinConv in the landscape of established hardness conjectures. Figure 1 shows known reductions between these conjectures and includes MinConv. Bremner et al. [8] showed the reduction from MinConv to APSP. It is also known [5] that MinConv can be reduced to 3SUM by combining the reductions of [41] and [50, Proposition 3.4, Theorem 3.3] (we provide the details in Appendix A). Note that a reduction from 3SUM or APSP to MinConv would imply a reduction between 3SUM and APSP, which is a major open problem in this area of study [47]. No relation between MinConv and SETH or OV is known.
In this article, we study three broad categories of problems. The first category consists of the classic 0/1 Knapsack and its variants, which we show to be essentially equivalent to MinConv. This is perhaps somewhat surprising given the recent progress of Bringmann [9] for SubsetSum, which is a special case of 0/1 Knapsack. However, note that Bringmann's algorithm [9] (as well as other efficient solutions for SubsetSum) is built upon the idea of composing solutions using the (∨, ∧)-convolution, which can be implemented efficiently using a Fast Fourier Transform (FFT). The corresponding composition operation for 0/1 Knapsack is MinConv (see Section 6 for details).
The second category consists of problems directly related to MinConv. This includes decision versions of MinConv and problems related to the notion of subadditivity. Any subadditive sequence a with a[0] = 0 is an idempotent of MinConv; thus, it is perhaps unsurprising that these problems are equivalent to MinConv.
Finally, we investigate problems that have previously been shown to be related to MinConv and then contribute some new reductions or simplify existing ones. Moreover, some of the results of this article have been published independently by Künnemann et al. [35] at the same conference.

3SUM

3sum
Input: Sets of integers A, B, C, each of size n.
Task: Decide whether there exist a ∈ A, b ∈ B, c ∈ C such that a + b = c.
The 3sum problem is the first problem that was considered as a hardness assumption in P. It admits a simple O(n^2 log n) algorithm, but the existence of an O(n^{2−ε}) algorithm remains a major open problem. The first lower bounds based on the hardness of 3sum appeared in 1995 [27]; some other examples can be found in [6,41,50]. The current best algorithm for 3sum runs in slightly subquadratic expected time O((n^2/log^2 n)(log log n)^2) [6]. An O(n^{1.5} polylog(n)) algorithm is possible on a nondeterministic Turing machine [14] (see Section 4 for the definition of nondeterministic algorithms).
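For reference, the quadratic baseline is straightforward; the sketch below (our illustration, not the subquadratic algorithm of [6]) hashes one set and enumerates pairs from the other two:

```python
def three_sum(A, B, C):
    """Naive 3sum: O(n^2) expected time using a hash set
    (illustration only; not the algorithm of [6])."""
    c_set = set(C)
    return any(a + b in c_set for a in A for b in B)
```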
The 3sum problem is known to be subquadratically equivalent to its convolution version in a randomized setting [41].
3sumConv
Input: Sequences a, b, c of integers, each of length n.
Task: Decide whether there exist i, j such that a[i] + b[j] = c[i + j].

Both problems are sometimes considered with real weights, but in this work, we restrict them to the integer setting.

MinConv
We have already defined the MinConv problem in Section 1.2. Note that it is equivalent (just by negating the elements) to the analogous MaxConv problem.

MaxConv
Input: Sequences (a[i])_{i=0}^{n−1} and (b[i])_{i=0}^{n−1}.
Task: Output the sequence (c[i])_{i=0}^{n−1}, where c[k] = max_{i=0,...,k} {a[i] + b[k−i]}.

We describe our contribution in terms of MinConv, as this version has already been heavily studied. However, in the theorems and proofs, we use MaxConv, as it is easier to work with. We also work with a decision version of the problem. Herein, we will use a ⊕max b to denote the MaxConv of two sequences a and b.
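For concreteness, the naive quadratic computation, which the MinConv conjecture asserts is essentially optimal, reads as follows (a minimal sketch of the definition above; MinConv is obtained by negating the inputs and the output):

```python
def max_conv(a, b):
    """Naive O(n^2) (max,+)-convolution: c[k] = max_{i+j=k} (a[i] + b[j])."""
    n = len(a)
    c = [float("-inf")] * n
    for i in range(n):
        for j in range(n - i):
            c[i + j] = max(c[i + j], a[i] + b[j])
    return c
```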

MaxConv UpperBound

Input: Sequences a, b, c of integers, each of length n.
Task: Decide whether c[k] ≥ max_{i+j=k} (a[i] + b[j]) holds for all k.

If we replace the latter condition with c[k] ≤ max_{i+j=k} (a[i] + b[j]), we obtain a similar problem, MaxConv LowerBound. Yet another statement of a decision version asks whether a given sequence is a self upper bound with respect to MaxConv, i.e., whether it is superadditive. From the perspective of MinConv, we may ask an analogous question about being subadditive (again, equivalent by negating elements). As far as we know, the computational complexity of these problems has not yet been studied.
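These decision versions can be checked directly in quadratic time; the sketch below (our illustration) also makes explicit that SuperAdditivity Testing is the special case a = b = c of MaxConv UpperBound:

```python
def is_upper_bound(a, b, c):
    """MaxConv UpperBound: does c[k] >= a[i] + b[j] hold whenever i + j = k?"""
    n = len(a)
    return all(c[i + j] >= a[i] + b[j]
               for i in range(n) for j in range(n - i))

def is_superadditive(a):
    """SuperAdditivity Testing: a is a self upper bound w.r.t. MaxConv."""
    return is_upper_bound(a, a, a)
```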

SuperAdditivity Testing

Input: A sequence a of n integers.
Task: Decide whether a[i + j] ≥ a[i] + a[j] holds for all i, j with i + j < n.

Known Results

In the standard (+, ·) ring, convolution can be computed in O(n log n) time by the FFT. A natural way to approach MinConv would be to design an analog of the FFT in the (min,+)-semiring, also called the tropical semiring. However, due to the lack of an inverse for the min operation, it is unclear whether such a transform exists for general sequences. When restricted to convex sequences, one can use a tropical analog of the FFT, namely, the Legendre-Fenchel transform [24], which can be performed in linear time [38]. Sparse variants of convolutions and their connection to 3sum have also been considered [29].
There has been a long line of research dedicated to improving upon the O(n^2) algorithm for MinConv. Bremner et al. [8] presented an O(n^2/log n) algorithm for MinConv, as well as a reduction from MinConv to APSP [8, Theorem 13]. Williams [46] developed an O(n^3/2^{Ω((log n)^{1/2})}) algorithm for APSP, which can also be used to obtain an O(n^2/2^{Ω((log n)^{1/2})}) algorithm for MinConv [17].
Truly subquadratic algorithms for MinConv exist for monotone increasing sequences with integer values bounded by O(n). Chan and Lewenstein [17] presented an O(n^{1.859}) randomized algorithm and an O(n^{1.864}) deterministic algorithm for this case. They exploited ideas from additive combinatorics. Bussieck et al. [13] showed that for a random input, MinConv can be computed in O(n log n) expected and O(n^2) worst-case time.
If we are satisfied with computing c with a relative error of (1 + ε), then the general MinConv admits a nearly linear algorithm [5,51]. It could be called an FPTAS (fully polynomial-time approximation scheme), although this name is usually reserved for single-output problems whose decision versions are NP-hard.
Using the techniques of Carmosino et al. [14] and the reduction from MaxConv UpperBound to 3sum (see Appendix A), one can construct an O(n^{1.5} polylog(n)) algorithm that works on nondeterministic Turing machines for MaxConv UpperBound (see Lemma 8.1). This running time matches the O(n^{1.5}) algorithm for MinConv in the nonuniform decision tree model [8]. This result is based on the techniques of Fredman [25,26]. It remains unclear how to transfer these results to the word-RAM model [8].

0/1 Knapsack

Input: A set of items I with given integer weights and values ((w_i, v_i))_{i∈I} and a capacity t.
Task: Find the maximum total value of a subset I′ ⊆ I such that Σ_{i∈I′} w_i ≤ t.
If we are allowed to take multiple copies of a single item, then we obtain the Unbounded Knapsack problem. The decision versions of both problems are known to be NP-hard [28], but there are classical algorithms based on dynamic programming with a pseudopolynomial running time O(nt) [7].
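The two dynamic programs differ only in the direction of the inner loop; a minimal sketch (the standard textbook formulation, not taken verbatim from [7]):

```python
def knapsack_01(items, t):
    """Classic O(n*t) DP; dp[j] = best value achievable with weight <= j."""
    dp = [0] * (t + 1)
    for w, v in items:
        for j in range(t, w - 1, -1):   # descending: each item used at most once
            dp[j] = max(dp[j], dp[j - w] + v)
    return dp[t]

def knapsack_unbounded(items, t):
    dp = [0] * (t + 1)
    for w, v in items:
        for j in range(w, t + 1):       # ascending: an item may be reused
            dp[j] = max(dp[j], dp[j - w] + v)
    return dp[t]
```

Note that the final dp array already contains the answers for every capacity 0 < t′ ≤ t, which is exactly the + variants discussed next.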
In fact, they are used to solve more general problems, i.e., 0/1 Knapsack+ and Unbounded Knapsack+, where we are asked to output answers for each capacity 0 < t′ ≤ t. There is also a long line of research on FPTASes for Knapsack, with the current best running times being O(n + 1/ε^{2.4}) for 0/1 Knapsack [16] and O(n + 1/ε^2) for Unbounded Knapsack [33].

Other Problems Related to MinConv
Tree Sparsity
Input: A rooted tree T with a weight function w : V(T) → N_{≥0} and a parameter k.
Task: Find the maximum total weight of a rooted subtree of size k.
The Tree Sparsity problem admits an O(nk) algorithm, which was at first invented for the restricted case of balanced trees [15] and then later generalized [5]. There is also a nearly linear FPTAS based on the FPTAS for MinConv [5]. It is known that an O(n^{2−ε}) algorithm for Tree Sparsity entails a subquadratic algorithm for MinConv [5].

MCSP (Maximum Consecutive Subsums Problem)
Input: A sequence a of n integers.
Task: Output, for each k, the maximum sum of k consecutive elements of a.

There is a trivial O(n^2) algorithm for MCSP and a nearly linear FPTAS based on the FPTAS for MinConv [19]. To the best of our knowledge, this is the first problem to have been explicitly proven to be subquadratically equivalent to MinConv [36]. Our reduction to SuperAdditivity Testing allows us to significantly simplify the proof (see Section 7.1).
l_p-Necklace Alignment
Input: Two sorted sequences (x[i])_{i=0}^{n−1}, (y[i])_{i=0}^{n−1} ∈ [0, 1)^n describing locations of beads on a circle.
Task: Output the cost of the best alignment in the p-norm, i.e., min_{c,s} ( Σ_{i=0}^{n−1} d(x[i], y[(i + s) mod n] + c)^p )^{1/p}, where c is a circular offset, s ∈ {0, . . . , n − 1} is a shift, and d is a distance function on a circle.
In the l_p-Necklace Alignment problem, we are given two sorted sequences of real numbers (x[i])_{i=0}^{n−1} and (y[i])_{i=0}^{n−1} that represent two necklaces. We assume that each number in a sequence represents a point on a circle (we refer to this circle as the necklace and to the points on it as the beads). The distance between beads x[i] and y[j] is defined in [8] as d(x[i], y[j]) = min{|x[i] − y[j]|, 1 − |x[i] − y[j]|}, i.e., the minimum of the clockwise and counterclockwise distances along the circular necklace. The l_p-Necklace Alignment is an optimization problem where we can manipulate two parameters. The first parameter is the offset c, which is the clockwise rotation of the necklace (x[i])_{i=0}^{n−1} relative to the necklace (y[i])_{i=0}^{n−1}. The second parameter is the shift s, which defines the perfect matching between beads from the first and second necklaces, i.e., bead x[i] matches bead y[(i + s) mod n] (see [8]).
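In code, the circular distance is a one-line restatement of the definition above (our illustration):

```python
def necklace_dist(x, y):
    """d(x, y) for beads x, y in [0, 1): the smaller of the clockwise
    and counterclockwise arcs between them on a circle of circumference 1."""
    g = abs(x - y)
    return min(g, 1.0 - g)
```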
For p = ∞, we are interested in bounding the maximum distance between any two matched beads. The problem initially emerged for p = 1 during research on the geometry of musical rhythm [43]. The family of Necklace Alignment problems was systematically studied by Bremner et al. [8] for various values of p. For p = 2, they presented an O(n log n) algorithm based on the FFT. For p = ∞, the problem was reduced to MinConv, which led to a slightly subquadratic algorithm. This makes l∞-Necklace Alignment a natural problem to study in the context of MinConv-based hardness. Interestingly, we are not able to show such hardness, which presents an intriguing open problem. Instead, we reduce a related problem to l∞-Necklace Alignment.
Although it is more natural to state the problem with inputs from [0, 1), we find it more convenient to work with integer sequences that describe a necklace after scaling.
Fast o(n^2) algorithms for MinConv have also found applications in text algorithms. Moosa and Rahman [39] reduced Indexed Permutation Matching to MinConv and obtained an o(n^2) algorithm. Burcsi et al. [11] used MinConv to obtain faster algorithms for Jumbled Pattern Matching and described how finding dominating pairs can be used to solve MinConv. Later, Burcsi et al. [12] showed that fast MinConv can also be used to obtain faster algorithms for a decision version of Approximate Jumbled Pattern Matching over binary alphabets.

Figure 2 illustrates the technical contributions of this article. The long ring of reductions on the left side of Figure 2 is summarized by the following theorem.

Theorem 3.1. The following statements are equivalent (we allow randomized algorithms):
(1) there exists an O(n^{2−ε}) algorithm for MaxConv for some ε > 0;
(2) there exists an O(n^{2−ε}) algorithm for MaxConv UpperBound for some ε > 0;
(3) there exists an O(n^{2−ε}) algorithm for SuperAdditivity Testing for some ε > 0;
(4) there exists an O((n + t)^{2−ε}) algorithm for Unbounded Knapsack for some ε > 0;
(5) there exists an O((n + t)^{2−ε}) algorithm for 0/1 Knapsack for some ε > 0.

SUMMARY OF NEW RESULTS
Theorem 3.1 is split into five implications, presented separately as Theorems 5.1, 5.3, 5.4, 5.5, and 6.5. While Theorem 3.1 has a relatively short and simple statement, it is not the strongest possible version of the equivalence. In particular, one can show analogous implications for subpolynomial improvements, such as the O(n^2/2^{Ω((log n)^{1/2})}) algorithm for MinConv presented by Williams [46]. The theorems listed above contain stronger versions of the implications. The proof of Theorem 5.5 has been independently given in [5]. We present it here because it is the first step in the ring of reductions and introduces the essential technique of Vassilevska and Williams [44]. Section 7 is devoted to the remaining arrows in Figure 2. In Section 7.1, we show that by using Theorem 3.1, we can obtain an alternative proof of the equivalence of MCSP and MaxConv (and thus also MinConv), which is much simpler than that presented in [36]. In Section 7.2, we show that Tree Sparsity reduces to MaxConv, complementing the opposite reduction shown in [5]. We also provide some observations on the possible equivalence between l∞-Necklace Alignment and MaxConv in Section 7.3.
The relation between MaxConv and 3sum implies that we should not expect the new conjecture to follow from the SETH. In Section 8, we exploit the revealed connections between problems to show that it might also not be possible to replace the hardness assumption for Unbounded Knapsack with the SETH. More precisely, we prove that, under the assumption of the NSETH, there can be no deterministic reduction from SAT to Unbounded Knapsack that would rule out a running time of O(n^{1−ε}t).

PRELIMINARIES
We present a series of results of the following form: if a problem A admits an algorithm with running time T(n), then a problem B admits an algorithm with running time T′(n), where the function T′ depends on T and n is the length of the input. Our main interest is to show that T(n) = O(n^{2−ε}) for some ε > 0 entails T′(n) = O(n^{2−ε′}) for some ε′ > 0. Some problems, in particular Knapsack, have no single natural parameter, and we allow the function T to take multiple arguments.
In this article, we follow the convention of [14] and say that the decision problem L admits a nondeterministic algorithm in time T (n) if L ∈ NTIME(T (n)) ∩ co-NTIME(T (n)).
We assume that for all studied problems, the input consists of a list of integers within [−W, W]. Since Conjecture 1.1 is oblivious to polylog(W) factors, we omit W as a running time parameter and allow the function T to hide polylog(W) factors for the sake of readability. We also use the Õ notation to explicitly hide polylogarithmic factors with respect to the argument. Herein, we will use a ⊕max b to denote the MaxConv of sequences a and b (see Section 6.2).
As the size of the input may increase during our reductions, we restrict ourselves to a class of functions satisfying T(cn) = O(T(n)) for any constant c. This is justified, as we focus on functions of the form T(n) = n^α. In some reductions, the integers in the new instance may increase to O(nW). In these cases, we multiply the running time by polylog(n) to take into account the overhead of performing arithmetic operations. All logarithms are base 2.

MAIN REDUCTIONS
We first show that a fast algorithm for 0/1 Knapsack yields a fast algorithm for Unbounded Knapsack.

Proof. Consider an instance of Unbounded Knapsack with capacity t and the set of items given as weight-value pairs ((w_i, v_i))_{i∈I}. Construct an equivalent 0/1 Knapsack instance with the same t and the set of items ((2^j w_i, 2^j v_i))_{i∈I, 0≤j≤log t}. Let X = (x_i)_{i∈I} be the list of multiplicities of items chosen in a solution to the Unbounded Knapsack problem. Of course, x_i ≤ t for every i, so we can take (x_i^j)_{0≤j≤log t}, with x_i^j ∈ {0, 1}, to be the binary representation of x_i. Then, the vector (x_i^j)_{i∈I, 0≤j≤log t} induces a solution to 0/1 Knapsack with the same total weight and value. The described mapping can be inverted. This implies the equivalence between the instances and proves the claim.
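A sketch of the item transformation from the proof (the function name is ours):

```python
def unbounded_to_01_items(items, t):
    """Binary-multiplicity trick: taking up to t copies of an item (w, v) is
    simulated by the 0/1 items (2^j * w, 2^j * v) for 0 <= j <= log t."""
    new_items = []
    for w, v in items:
        j = 0
        while (2 ** j) * w <= t:        # copies heavier than t can never be used
            new_items.append(((2 ** j) * w, (2 ** j) * v))
            j += 1
    return new_items
```

Solving 0/1 Knapsack on new_items with the same capacity t then answers the Unbounded Knapsack instance.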
We now consider the SuperAdditivity Testing problem. We start by showing that we can consider only the case of nonnegative monotonic sequences. This is a useful, technical assumption that simplifies the proofs.
In the case where a[0] ≤ 0, the 0-th element does not influence the result of the algorithm. Thus, we can set a′[0] = 0 to ensure the nonnegativity of a′. Next, to guarantee monotonicity, we choose a′[i] = a[i] + Ci for a sufficiently large constant C (for example, C exceeding twice the maximum absolute value in a). Note that the sequence a′ is strictly increasing and nonnegative. Moreover, for i, j > 0, a′[i + j] ≥ a′[i] + a′[j] holds if and only if a[i + j] ≥ a[i] + a[j], as the added linear terms cancel out. When i or j equals 0, we have equality because a′[0] = 0.

We now reduce SuperAdditivity Testing to Unbounded Knapsack. Given a nonnegative monotonic sequence a of length n, we create an item (k, a[k]) for every 0 < k < n and an item (2n − 1 − k, D − a[k]) for every 0 ≤ k < n, where D is a constant exceeding the total value of the light items, and we set the capacity to t = 2n − 1. We claim that the answer to the constructed instance equals D if and only if a is superadditive.
If a is not superadditive, then there are i, j such that a[i + j] < a[i] + a[j]. Taking the items (i, a[i]), (j, a[j]), and (2n − 1 − (i + j), D − a[i + j]) gives a solution of total weight exactly t with a value exceeding D.
Now, assume that a is superadditive. Observe that any feasible knapsack solution may contain at most one item with a weight exceeding n − 1. On the other hand, the optimal solution has to include one such item because the total value of the lighter items is less than D. Therefore, the optimal solution contains an item (2n − 1 − k, D − a[k]) for some k < n, and the total weight of the rest of the solution is at most k. As a is superadditive, we can replace any pair of light items (i, a[i]), (j, a[j]) with the single item (i + j, a[i + j]) without decreasing the value of the solution. By repeating this argument, we end up with a single item lighter than n. The sequence a is monotonic; thus, it is always profitable to replace this item with a heavier one, as long as the load does not exceed t. We conclude that every optimal solution must be of the form {(k, a[k]), (2n − 1 − k, D − a[k])} and has value exactly D, which completes the proof.

Next, we show that MaxConv UpperBound reduces to SuperAdditivity Testing.

Proof. We start by reducing the instance of MaxConv UpperBound to the case of nonnegative monotonic sequences (analogously to Lemma 5.2); observe that the condition a[i] + b[j] ≤ c[i + j] is preserved by such a transformation. Herein, we can assume the given sequences to be nonnegative and monotonic. Define K to be the maximum value occurring in the given sequences a, b, c. Construct a sequence e of length 4n as follows (see Figure 3).

The proof of the reduction from MaxConv to MaxConv UpperBound was recently independently given in [5]. The technique was introduced by Vassilevska and Williams [44] to show a subcubic reduction from (min,+)-matrix multiplication to detecting a negative-weight triangle in a graph.
Proof. Let us assume that we have access to an oracle solving MaxConv UpperBound, i.e., checking whether a ⊕max b ≤ c. First, we argue that by invoking this oracle log n times, we can find an index k for which there exists a pair i, j violating the constraint, i.e., satisfying a[i] + b[j] > c[i + j]. Observe that the condition a ⊕max b ≤ c restricted to the first k entries holds only for those k that are less than the smallest value of i + j with a broken constraint. We can use binary search to find the smallest k for which the inequality does not hold. This introduces an overhead of factor log n.
Next, we want to show that by using an oracle that finds one violated index, we can in fact find all violated indices in [0, 2(n − 1)]. After finding a pair i, j that violates the constraint, we substitute c[i + j] := K, where K is a constant exceeding all feasible sums, and continue analyzing the same pair of intervals. Once anomalies are no longer detected, we move on to the next pair. It is important to note that when an index k violating the constraint is set to c[k] := K, this value K is also preserved for further calls to the oracle; in this way, we ensure that each violated index k is reported only once.
For the sake of readability, we present pseudocode (see Algorithm 1). The subroutine MaxConvDetectSingle returns the value of i + j for a broken constraint or −1 if none exists. The notation s_x stands for the subsequence of s in the interval I_x. We assume that the sequences are split into m intervals of length O(√n) each. The number of considered pairs of intervals equals m^2 = O(n). Furthermore, for each pair, every call to MaxConvDetectSingle except the last one is followed by setting a value of some element of c to K. This can happen only once for each element; hence, the total number of repetitions is at most n. Therefore, the running time of the procedure MaxConvDetectViolations is O((m^2 + n) · T(O(√n)) · log n) = O(n · T(O(√n)) · log n), which is subquadratic whenever T(n) = O(n^{2−ε}).

We now show that Unbounded Knapsack can be solved in time O(t^2 + n).

Proof. Our algorithm starts by discarding all items with weight larger than t. Since we are considering the unbounded case, for a given weight, we can ignore all items except the one with the highest value, as we can always take more copies of the most valuable item among those of equal weight. We are left with at most t items. Thus, using the standard O(nt) dynamic programming leads to a running time of O(t^2 + n).
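The algorithm from the proof, as a direct sketch (the helper name is ours):

```python
def unbounded_knapsack_quadratic(items, t):
    """O(t^2 + n) Unbounded Knapsack: keep only the most valuable item per
    weight, then run the standard DP over at most t distinct weights."""
    best = [0] * (t + 1)                 # best[w] = largest value of a weight-w item
    for w, v in items:
        if w <= t:
            best[w] = max(best[w], v)
    dp = [0] * (t + 1)
    for j in range(1, t + 1):
        for w in range(1, j + 1):
            if best[w]:
                dp[j] = max(dp[j], dp[j - w] + best[w])
    return dp[t]
```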
We show that from the perspective of the parameter t, this is the best we can hope for, unless n appears in the complexity with an exponent higher than 2 or there is a breakthrough for the MaxConv problem. In this section, we complement these results and show that a truly subquadratic algorithm for MaxConv implies an O(t^{2−ε} + n) algorithm for 0/1 Knapsack. We follow Bringmann's [9] near-linear pseudopolynomial time algorithm for SubsetSum and adapt it to the 0/1 Knapsack problem. To do this, we need to introduce some concepts related to the SubsetSum problem from previous works. The key observation is that we can substitute the FFT in [9] with MaxConv and consequently obtain an O(T(t) + n) algorithm for 0/1 Knapsack (where T(n) is the time needed to solve MaxConv).

Set of All Subset Sums
Let us recall that in the SubsetSum problem, we are given a set S of n integers together with a target integer t. The goal is to determine whether there exists a subset of S that sums up to t.
Horowitz and Sahni [30] introduced the notion of the set of all subset sums, which was later used by Eppstein [23] to solve the Dynamic Subset Sum problem. More recently, Koiliaris and Xu [34] used it to develop an Õ(σ) algorithm for SubsetSum (σ denotes the sum of all elements). Later, Bringmann [9] improved this to an Õ(n + t) algorithm (t denotes the target number in the SubsetSum problem).
The set of all subset sums is defined as Σ(S) = { Σ_{x∈X} x : X ⊆ S }. Koiliaris and Xu [34] noticed that if we want to compute Σ(S) for a given S, we can partition S into two sets S_1 and S_2, recursively compute Σ(S_1) and Σ(S_2), and then join them using the FFT. Koiliaris and Xu [34] analyzed their algorithm using Lemma 6.2, which was later also used by Bringmann [9].

Lemma 6.2 ([34]). Let f(n, m) satisfy the recurrence f(n, m) = max_{m_1+m_2=m} ( f(n/2, m_1) + f(n/2, m_2) + O(g(m)) ) for a nondecreasing superadditive function g. Then, we have that f(n, m) = O(g(m) log n).
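A compact sketch of this divide-and-conquer scheme (bitsets as Python integers; the naive shift-or join below plays the role of the FFT-based (∨,∧)-convolution):

```python
def all_subset_sums(S, t):
    """Sigma(S) restricted to [0, t], as a bitset: bit k is set iff some
    subset of S sums to k. Assumes S is a nonempty list of nonnegative ints."""
    mask = (1 << (t + 1)) - 1
    if len(S) == 1:
        return (1 | (1 << S[0])) & mask
    mid = len(S) // 2
    left = all_subset_sums(S[:mid], t)
    right = all_subset_sums(S[mid:], t)
    out = 0
    for i in range(t + 1):               # (v,^)-convolution, done naively here
        if (left >> i) & 1:
            out |= right << i
    return out & mask
```

A subset summing to t exists iff bit t of the result is set.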

Sum of All Sets for 0/1 Knapsack
We now adapt the notion of the sum of all sets to the 0/1 Knapsack setting. Here, we use a data structure that, for a given capacity, stores the value of the best solution we can pack. This data structure can be implemented as an array of size t that keeps the largest value in each cell (for comparison, Σ(S) was implemented as a binary vector of size t). To emphasize that we are working with 0/1 Knapsack, we use Π(S) to denote the array of the values for the set of items S. To compute Π(S), we can split S into two equal-cardinality, disjoint subsets S = S_1 ∪ S_2, recursively compute Π(S_1) and Π(S_2), and finally join them in O(T(σ)) time (σ is the sum of the weights of all items). By Lemma 6.2, we obtain an O(T(σ) log σ log n) time algorithm (recall that the naive algorithm for MaxConv works in O(n^2) time).
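The same recursion with a (max,+) join, as a sketch (naive join; in the reduction it is replaced by the assumed fast MaxConv, and the names are ours):

```python
def knapsack_profile(items, t):
    """Pi(S): entry j is the best total value of a subset of items with
    total weight <= j. Assumes nonempty items with nonnegative values."""
    if len(items) == 1:
        w, v = items[0]
        return [v if j >= w else 0 for j in range(t + 1)]
    mid = len(items) // 2
    p = knapsack_profile(items[:mid], t)
    q = knapsack_profile(items[mid:], t)
    out = [0] * (t + 1)
    for i in range(t + 1):               # (max,+)-join of the two profiles
        for j in range(t + 1 - i):
            out[i + j] = max(out[i + j], p[i] + q[j])
    return out
```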

Retracing Bringmann's Steps
In this section, we obtain an O(T(t) + n) algorithm for 0/1 Knapsack, which improves upon the O(T(σ)) algorithm from the previous section. In his algorithm [9] for SubsetSum, Bringmann uses two key techniques. First, layer splitting is based on the very useful observation that an instance (Z, t) can be partitioned into O(log n) layers L_i ⊆ (t/2^i, t/2^{i−1}] (for 0 < i < ⌈log n⌉) and L_{⌈log n⌉} ⊆ [0, t/2^{⌈log n⌉−1}]. With this partition, we may infer that for i > 0, at most 2^i elements from the set L_i can be used in any solution (otherwise, their cumulative sum would be larger than t). The second technique is an application of color coding [3] that results in a fast, randomized algorithm that can compute all solutions with a sum of at most t using no more than k elements. By combining those two techniques, Bringmann [9] developed an Õ(t + n) time algorithm for SubsetSum. We now retrace both ideas and use them in the 0/1 Knapsack context.

Color Coding.
We modify Bringmann's [9] color coding technique by using MaxConv instead of the FFT to obtain an algorithm for 0/1 Knapsack. We first discuss Algorithm 2, which, with high probability, computes all solutions in [0, t] that use at most k elements. We start by randomly partitioning the set of items into k^2 disjoint sets Z = Z_1 ∪ . . . ∪ Z_{k^2}. Algorithm 2 succeeds in finding a given solution if its elements are placed in different sets of the partition Z.
Lemma 6.3. Given a set of items Z, a capacity t, and parameters k and δ, one can compute in O(T(t) k^2 log(1/δ)) time an array that, for each capacity t′ ≤ t, with probability at least 1 − δ contains the value of the best solution of weight at most t′ that uses at most k items.

Proof. We split Z into k^2 parts: Z_1 ∪ . . . ∪ Z_{k^2}. Here, Z_i is represented as an array of size t, where Z_i[j] is the value of a single element (if one exists) with weight j in Z_i (in case of a conflict, we select a random one).
We claim that Z_1 ⊕max . . . ⊕max Z_{k^2} contains solutions at least as good as those that use at most k items (with high probability). We use the same argument as in [9]. Assume that the best solution uses the set Y ⊆ Z of items and |Y| ≤ k. The probability that all items of Y are in different sets of the partition is the same as the probability that the second element of Y is in a different set than the first one, the third element is in a different set than the first and second items, and so forth. That is, the probability of success is at least ∏_{i=1}^{k−1} (1 − i/k^2) ≥ 1 − Σ_{i=1}^{k−1} i/k^2 > 1/2. By repeating this process O(log(1/δ)) times, we obtain the correct solution with a probability of at least 1 − δ. Also, combining the k^2 arrays requires k^2 MaxConv computations. Hence, we obtain an O(T(t) k^2 log(1/δ)) time algorithm.
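A sketch of the modified color-coding step (the helper `_join` is the naive MaxConv standing in for the assumed fast one; all names are ours):

```python
import math
import random

def _join(p, q):
    """Naive (max,+)-convolution of two profiles, truncated to len(p) - 1."""
    t = len(p) - 1
    out = [0] * (t + 1)
    for i in range(t + 1):
        for j in range(t + 1 - i):
            out[i + j] = max(out[i + j], p[i] + q[j])
    return out

def color_coding(items, t, k, delta):
    """With probability >= 1 - delta, entry j of the result is at least the
    best value among solutions of weight <= j using at most k items; every
    entry corresponds to some feasible solution."""
    best = [0] * (t + 1)
    for _ in range(max(1, math.ceil(math.log2(1 / delta)))):
        parts = [[0] * (t + 1) for _ in range(k * k)]
        for w, v in items:
            if w <= t:
                g = random.randrange(k * k)        # random k^2-coloring
                parts[g][w] = max(parts[g][w], v)  # the paper keeps a random item on conflict
        acc = [0] * (t + 1)
        for part in parts:                         # k^2 MaxConv joins
            acc = _join(acc, part)
        best = [max(x, y) for x, y in zip(best, acc)]
    return best
```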

Layer Splitting.
We can split our items into ⌈log n⌉ layers. Layer L_i is the set of items with weights in (t/2^i, t/2^{i−1}] for 0 < i < ⌈log n⌉; the last layer, L_{⌈log n⌉}, has items with weights in [0, t/2^{⌈log n⌉−1}]. With this, we can be sure that at most 2^i items from layer L_i can be chosen for a solution. If we can quickly compute Π(L_i) for all i, then it suffices to compute their MaxConv O(log n) times. We now show how to compute Π(L_i) in Õ(T(t) + n) time using color coding.
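The layer partition in code (a direct sketch of the splitting rule above; the function name is ours):

```python
import math

def split_into_layers(items, t):
    """Layers L_1, ..., L_m with m = ceil(log n): L_i holds the weights in
    (t/2^i, t/2^{i-1}]; the last layer holds the remaining light items.
    Assumes all weights are at most t."""
    m = max(1, math.ceil(math.log2(max(2, len(items)))))
    layers = [[] for _ in range(m)]
    for w, v in items:
        i = 1
        while i < m and w <= t / (2 ** i):
            i += 1
        layers[i - 1].append((w, v))
    return layers
```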
We now bound the time needed to compute Π(L) for a single layer L of l items.

Proof. We use the same arguments as in [9, Lemma 3.2]. First, we split the set L into m disjoint subsets L = A_1 ∪ . . . ∪ A_m (where m = l/log(l/δ)). Then, for every A_i, we compute Π(A_i) using Lemma 6.3 with k = O(log(l/δ)) and error probability δ/l. For every A_i, O(T(t log(l/δ)/l) log^3(l/δ)) time is required. Hence, for all A_i, we need O(T(t) log^3(l/δ)) time, as MinConv needs at least linear time, i.e., T(n) = Ω(n).
Ultimately, we need to combine the arrays Π(A_i) in a "binary tree way." In the first round, we compute Π(A_1) ⊕max Π(A_2), Π(A_3) ⊕max Π(A_4), and so on. Then, in the second round, we join the products of the first round in a similar way. We continue until we have joined all subsets. This process yields significant savings over just computing Π(A_1) ⊕max . . . ⊕max Π(A_m) sequentially because in round h, we need to compute MaxConv on arrays of size O(2^h t log(l/δ)/l), and there are at most log m rounds. The complexity of joining them is thus Σ_{h=1}^{log m} (m/2^h) · T(O(2^h t log(l/δ)/l)). Overall, we determine that the time complexity of the algorithm is O(T(t log t) log^3(l/δ)) (some logarithmic factors could be omitted if we assume that there exists ε > 0 such that T(n) = Ω(n^{1+ε})).
The correctness of the algorithm is based on [9, Claim 3.3]. We take a subset of items Y ⊆ L and let Y_j = Y ∩ A_j. Claim 3.3 in [9] says that P[|Y_j| ≥ 6 log(l/δ)] ≤ δ/l. Thus, we can run the ColorCoding procedure with k = 6 log(l/δ) and still guarantee a sufficiently high probability of success.

We are now ready to combine the layers and obtain the algorithm for 0/1 Knapsack.

Proof. To obtain an algorithm for 0/1 Knapsack, as mentioned before, we need to split Z into disjoint layers L_i = Z ∩ (t/2^i, t/2^{i−1}] and L_{⌈log n⌉} = Z ∩ [0, t/2^{⌈log n⌉−1}]. Then, we compute Π(L_i) for all i and join them using MaxConv. We present the pseudocode in Algorithm 4, which follows Bringmann's algorithm for SubsetSum [9]. Algorithm 4 returns an array Π(Z), where each entry z ∈ Π(Z) is optimal with probability 1 − δ. Now, if we want to obtain the optimal solution for all knapsack capacities in [1, t], we need to increase the success probability to 1 − δ/t so that we can use the union bound. Consequently, in this case, a single entry is faulty with a probability of at most δ/t, and we can upper bound the probability that at least one entry is incorrect by (δ/t) · t = δ. This introduces an additional polylog(t) factor in the running time.
Finally, for completeness, we note that 0/1 Knapsack+ is more general than 0/1 Knapsack: 0/1 Knapsack+ returns a solution for all capacities t′ ≤ t, whereas in the 0/1 Knapsack problem, we are interested only in the capacity equal to exactly t.

OTHER PROBLEMS RELATED TO MINCONV

MCSP
The MCSP is, to the best of our knowledge, the first problem explicitly proven to be nontrivially subquadratically equivalent to MinConv [36]. In this section, we show the reduction from MCSP to MaxConv for completeness. Moreover, we present the reduction in the opposite direction, which, in our opinion, is simpler than the original one.
Let a be an input sequence for MCSP of length n, and let P[i] = a[1] + · · · + a[i] denote its prefix sums (with P[0] = 0). The maximum consecutive sum of length k equals max_i (P[i + k] − P[i]). Setting q[j] = −P[n − j] for 0 ≤ j ≤ n, we have (P ⊕max q)[n + k] = max_i (P[i + k] − P[i]). Thus, we can determine the maximum consecutive sum for each length k after performing a single MaxConv.
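A sketch of this reduction, with a naive convolution in place of the fast MaxConv:

```python
def mcsp(a):
    """Maximum consecutive sum for every length k = 1..n, via one MaxConv of
    the prefix sums P and q[j] = -P[n - j]."""
    n = len(a)
    P = [0]
    for x in a:
        P.append(P[-1] + x)              # P[i] = a[0] + ... + a[i-1]
    q = [-P[n - j] for j in range(n + 1)]
    c = [float("-inf")] * (2 * n + 1)
    for i in range(n + 1):               # naive (max,+)-convolution of P and q
        for j in range(n + 1):
            c[i + j] = max(c[i + j], P[i] + q[j])
    return [c[n + k] for k in range(1, n + 1)]   # c[n+k] = max_i P[i+k] - P[i]
```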
In the opposite direction, consider an instance a of SuperAdditivity Testing and the difference sequence d[i] = a[i − 1] − a[i]. The sum of k consecutive elements of d starting after position i equals a[i] − a[i + k]; hence, a is superadditive iff, for every k, the maximum consecutive sum of k elements of d is at most −a[k]. Computing the sequence of maximum consecutive sums (M[i])_{i=0}^{n−1} is therefore sufficient to verify whether the above condition holds.

Tree Sparsity
We now show that a T(n)-time algorithm for MaxConv yields an O(T(n) log^2 n)-time algorithm for Tree Sparsity.

Proof. We take advantage of the heavy-light decomposition introduced by Sleator and Tarjan [42]. This technique has been utilized by Backurs et al. [5] to transform a nearly linear FPTAS for MaxConv into a nearly linear FPTAS for Tree Sparsity.

We decompose the tree into a set of paths (which we call spines), each starting from a node called its head. First, we construct a spine with head s_1 at the root of the tree. We define s_{i+1} to be the child of s_i with the larger subtree (in case of a draw, we choose any child), and the last node of the spine is a leaf. The remaining children of each node s_i become heads of analogous spines such that the whole tree is covered. Note that every path from a leaf to the root intersects at most log n spines because each spine transition doubles the subtree size (see Figure 4).
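A sketch of the decomposition just described (the tree is given as a child map; all names are ours, and the recursion is only suitable for shallow trees):

```python
def spine_decomposition(children, root):
    """Returns the spines of the heavy-path decomposition: each spine follows
    the child with the largest subtree; other children start new spines."""
    size = {}
    def compute_size(v):
        size[v] = 1 + sum(compute_size(c) for c in children.get(v, []))
        return size[v]
    compute_size(root)
    spines, heads = [], [root]
    while heads:
        v, spine = heads.pop(), []
        while v is not None:
            spine.append(v)
            kids = children.get(v, [])
            if not kids:
                v = None
            else:
                heavy = max(kids, key=lambda c: size[c])   # stay on the spine
                heads.extend(c for c in kids if c is not heavy)
                v = heavy
        spines.append(spine)
    return spines
```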
Similarly to [5], for a node v with a subtree of size m, we want to compute the sparsity vector X_v, where X_v[i] represents the weight of the heaviest subtree rooted at v with size i. We compute sparsity vectors for all heads of spines in the tree recursively. Let (s_i)_{i=1}^{ℓ} be a spine with head v, and for all i, let U_i denote the sparsity vector of the child of s_i that is a head (i.e., the child with the smaller subtree). If s_i has fewer than two children, then U_i is a zero vector.
For an interval [a, b] ⊆ [1, ℓ], let U_{a,b} = U_a ⊕max U_{a+1} ⊕max · · · ⊕max U_b, and let Y_{a,b} be a vector such that Y_{a,b}[k] is the weight of the heaviest subtree of size k rooted at s_a and not containing s_{b+1} (if it exists). Let c = ⌊(a + b)/2⌋. The ⊕max operator is associative; hence, U_{a,b} = U_{a,c} ⊕max U_{c+1,b}. To compute the vector Y_{a,b}, we consider two cases, depending on whether the optimal subtree contains s_{c+1}; this yields Y_{a,b}[k] = max { Y_{a,c}[k], Σ_{i=a}^{c} w(s_i) + (U_{a,c} ⊕max Y_{c+1,b})[k − (c − a + 1)] }.
Recall that w : V(T) → N_{≥0} is the weight function from the definition of the problem (see Section 2.4). Using the presented formulas, we reduce the problem of computing X_v = Y_{1,ℓ} to subproblems for the intervals [1, ⌊ℓ/2⌋] and [⌊ℓ/2⌋ + 1, ℓ], and we merge the results with two (max,+)-convolutions. Proceeding further, we obtain log ℓ levels of recursion, where the sum of convolution sizes on each level is O(m), which results in a total running time of O(T(m) log m) (recall that T is superadditive).
The heavy-light decomposition guarantees that there are at most O(log n) different spines on a path from a leaf to the root. Moreover, we compute sparsity vectors for all heads of spines, with at most log n levels of recursion. In each recursion, we execute the MaxConv procedure. Hence, we obtain a running time of O(T(n) log^2 n).

Fig. 4. Schema of the spine decomposition [5]. Blue edges represent edges on the spine. For each spine, we build an efficient data structure that uses MaxConv (curly brackets). There are at most O(log n) different spines on any path from a leaf to the root.

l∞-Necklace Alignment
In this section, we study the l∞-Necklace Alignment problem, which has been shown to be reducible to MinConv [8]. Even though we were not able to prove it equivalent to MinConv, we have observed that l∞-Necklace Alignment is tightly connected to the (min,+)-convolution, which leads to a reduction from a related problem, MaxConv LowerBound. This opens an avenue for expanding the class of problems equivalent to MinConv; however, it turns out that we first need to better understand the nondeterministic complexity of MinConv. We elaborate on these issues in this and the following section. We now sketch the reduction from MaxConv LowerBound to l∞-Necklace Alignment.

Proof. Let a, b, c be the input sequences for MaxConv LowerBound. A combination is the sum of any choice of m elements from these sequences, each taken with a sign. More formally, a combination is an expression of the form Σ_i α_i a[i] + Σ_j β_j b[j] + Σ_k γ_k c[k] with integer coefficients whose absolute values sum to m.
The order of this combination is the corresponding signed sum of indices, Σ_i α_i i + Σ_j β_j j + Σ_k γ_k k. We can assume the following properties of the input sequences w.l.o.g.
(1) We may assume that the sequences are nonnegative and that a[i] ≤ c[i] for all i. To guarantee this, we add C_1 to a, C_1 + C_2 to b, and 2C_1 + C_2 to c for appropriate positive constants C_1, C_2.
(2) We can assume that the combinations of order ≤ n that contain the last element of sequence b with a positive coefficient are positive. We can achieve this property by artificially appending an element b[n] that is larger than the sum of the absolute values of all elements. Note that since it is the last element, it does not influence the result of the MaxConv LowerBound instance.
(3) We can assume that combinations of a larger order dominate combinations of a smaller order. To achieve this, we shift the elements by multiples of a constant D equal to the maximum absolute value of an element times a parameter L that will be set to 10. Note that the previous inequalities compare combinations of the same order, and so they remain unaffected.
These transformations might increase the values of the elements to O(nWL^2). We define necklaces x, y of length 2B with N = 2n beads each, with the bead positions encoding the sequences a, b, and c.
By property (3), combinations of a larger order dominate those of a smaller order. In this setting, [8, Fact 5] says that for a fixed shift k, the optimal offset yields a solution of value M_k/2. We want to determine M_k for k ∈ [0, n). There are five types of connections between beads (see Figure 5).
All formulas form combinations of length bounded by 5; thus, we can apply properties (2) and (3). Observe that the order of each combination equals k, except for i = 2n − k − 1, where the order is k + 1. Using property (3), we reason that B − c[n − k − 1] is indeed the maximal forward distance. We now show that the minimum lies within group I. First, note that these are the only combinations with no occurrences of b[n]. We claim that every distance in group I is upper-bounded by all distances in the other groups. This is clear for group IV because the orders differ. For the other groups, we can use property (2), as the combinations in question have the same order and only the one not in group I contains b[n].
For k < n, the condition M_k ≥ B − B_1 corresponds to the lower-bound constraint being satisfied at the respective index. If a k with a violated constraint exists, i.e., the answer to MaxConv LowerBound for sequences a, b, c is NO, then min_k M_k < B − B_1, and the return value is less than ½(B − B_1). Finally, we need to prove that M_k ≥ B − B_1 for all k if such a k does not exist. We have already verified this to be true for k < n. Each matching for k ≥ n can be represented as swapping the sequences a and c inside the necklace x, combined with an index shift of k − n. The two halves of the necklace x are analogous; thus, all prior observations of the matching structure remain valid.
If the answer to MaxConv LowerBound for sequences a, b, c is YES, then for every k there is a pair i, j with i + j = k and a[i] + b[j] ≥ c[k], and by the same argument as before, the cost of the solution is at least ½(B − B_1).

Observe that both l∞-Necklace Alignment and MaxConv LowerBound admit simple linear nondeterministic algorithms. For MaxConv LowerBound, it is sufficient either to assign to each k a single satisfied condition a[i] + b[j] ≥ c[k] with i + j = k or to nondeterministically guess a value of k for which no such inequality holds. For l∞-Necklace Alignment, we define a decision version of the problem by asking whether there is an alignment of value bounded by K (the problem is self-reducible via binary search). For positive instances, the algorithm simply nondeterministically guesses the k inducing an optimal solution. For negative instances, M_k > 2K must hold for all k; therefore, it suffices to nondeterministically guess, for each k, a pair of matched beads whose distances certify this. In Section 8, we will show that MaxConv UpperBound admits an O(n^{1.5} polylog(n)) nondeterministic algorithm (see Lemma 8.1), so, in fact, there is no obstacle to the existence of a subquadratic reduction from MaxConv LowerBound to MaxConv UpperBound. However, the nondeterministic algorithm for 3sum exploits techniques significantly different from ours, including modular arithmetic. A potential reduction would probably need to rely on some different structural properties of MaxConv.

NONDETERMINISTIC ALGORITHMS
Recently, Abboud et al. [2] proved that the running time for the Subset Sum problem cannot be improved to O(t^{1−ε} 2^{o(n)}), assuming the SETH. It is tempting to look for an analogous lower bound for Knapsack that would make the O(nt)-time algorithm tight. In this section, we take advantage of the nondeterministic lens introduced by Carmosino et al. [14] to argue that the existence of this lower bound for Unbounded Knapsack is unlikely.
We recall that by the time complexity of a nondeterministic algorithm, we refer to a bound on the running times of both the nondeterministic and co-nondeterministic routines determining whether an instance belongs to the language. Assuming the Nondeterministic Strong Exponential Time Hypothesis (NSETH), we cannot break the O(2^{(1−ε)n}) barrier for SAT even with nondeterministic algorithms.
The informal reason to rely on the NSETH is that if we decide to base lower bounds on the SETH, then we should believe that SAT is indeed a very hard problem that does not admit any hidden structure that has eluded researchers so far. On the other hand, the NSETH can be used to rule out deterministic reductions from SAT to problems with nontrivial nondeterministic algorithms. This allows us to argue that in some situations, basing a hardness theory on the SETH can be a bad idea. Moreover, disproving the NSETH would imply nontrivial lower bounds on circuit sizes for E^NP [14].
We present a nondeterministic algorithm for the decision version of Unbounded Knapsack with running time O(t√n log^3(W)), where W is the target value. This means that a running time of O(n^{1−ε}t) for Unbounded Knapsack cannot be ruled out by a deterministic reduction from SAT, under the assumption of the NSETH (for ε < 1/2). We begin with an observation that a nontrivial nondeterministic algorithm for 3sum entails a similar result for MaxConv UpperBound.

Lemma 8.1. MaxConv UpperBound admits a nondeterministic O(n^{1.5} polylog(n))-time algorithm.
In the next step, we require a more careful complexity analysis of the nondeterministic algorithm for 3sum developed by Carmosino et al. [14, Lemma 5.8]. Essentially, we claim that the running time can be bounded by O(√(n_1 n_2 n_3) log^2(W)), where n_1, n_2, n_3 are the sizes of the input sets. This is just a reformulation of the original proof, in which an O(n^{1.5}) nondeterministic time algorithm is given; we present it in Appendix B for completeness.
In the decision version of Unbounded Knapsack, we are additionally given a threshold W, and we need to determine whether there is a multiset of items with a total weight of at most t and a total value of at least W. We now show that this problem admits a nondeterministic O(t√n log^3(W))-time algorithm.

Proof. We can assume that n ≤ t. If we are given a YES-instance, then we can just nondeterministically guess the solution and verify it in O(t) time.
To show that an instance admits no solution, we nondeterministically guess a proof involving an array (a[k])_{k=0}^{t} such that a[k] is an upper bound on the total value of items with weights summing to at most k. To verify the proof, we need to check that a[0] = 0, a[t] < W, a is nondecreasing, and, for each k and each item (w_i, v_i) with w_i ≤ k, a[k] ≥ a[k − w_i] + v_i. Let (b[k])_{k=0}^{t} be a sequence defined as follows: if there is an item with w_i = k, then we set b[k] = v_i (if there are multiple items with the same weight, we choose the most valuable one), and b[k] = 0 otherwise. The latter condition is then equivalent to determining whether a ⊕max b ≤ a, which is an instance of MaxConv UpperBound with elements bounded by W.
Note that the sequence b contains only n nonzero elements.
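Deterministically, the guessed certificate can be verified as follows (a naive check, our illustration; the point of the proof is that the last step is an instance of MaxConv UpperBound and can thus be verified faster nondeterministically):

```python
def verify_no_solution_proof(items, t, W, a):
    """Check the certificate (a[k])_{k=0..t}: a[0] = 0, a nondecreasing,
    a[t] < W, and a (+max) b <= a for the sparse item sequence b."""
    if a[0] != 0 or a[t] >= W:
        return False
    if any(a[k] > a[k + 1] for k in range(t)):
        return False
    b = [0] * (t + 1)
    for w, v in items:
        if w <= t:
            b[w] = max(b[w], v)          # most valuable item per weight
    return all(a[k - w] + b[w] <= a[k]
               for w in range(1, t + 1) for k in range(w, t + 1))
```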

CONCLUSIONS AND FUTURE WORK
In this article, we undertake a systematic study of MinConv as a hardness assumption and prove the subquadratic equivalence of MinConv with SuperAdditivity Testing, Unbounded Knapsack, 0/1 Knapsack, and Tree Sparsity. The MinConv conjecture is stronger than the well-known APSP and 3SUM conjectures. Proving that MinConv is equivalent to either APSP or 3SUM would solve a long-standing open problem. An intriguing question is to determine whether the MinConv conjecture is also stronger than OV.
By exploiting the fast O(n^2/2^{Ω((log n)^{1/2})}) algorithm for MaxConv, we automatically obtain o(n^2)-time algorithms for all problems in the class. This gives us the first (to the best of our knowledge) subquadratic algorithm for SuperAdditivity Testing and improves the exact algorithms for Tree Sparsity by a polylogarithmic factor (although this does not lie within the scope of this article).
One consequence of our results is a new lower bound on 0/1 Knapsack. It is known that an O(t^{1−ε} n^{O(1)}) algorithm for 0/1 Knapsack contradicts the SetCover conjecture [20]. Here, we show that an O((n + t)^{2−ε}) algorithm contradicts the MinConv conjecture. This does not rule out an O(t + n^{O(1)}) algorithm, which leads to another interesting open problem.
Recently, Abboud et al. [2] replaced the SetCover conjecture with the SETH for SubsetSum. We have shown that one cannot exploit the SETH to prove that the O(nt )-time algorithm for Unbounded Knapsack is tight. The analogous question regarding 0/1 Knapsack remains open.
Finally, it is open whether MaxConv LowerBound is equivalent to MinConv, which would imply an equivalence between l ∞ -Necklace Alignment and MinConv.

APPENDICES

A REDUCTION TO 3SUM
In this section, we show a connection between MaxConv and the 3sum conjecture. This reduction is widely known in the community but, to the best of our knowledge, has never been explicitly written. We include it in this appendix for completeness.
In this article, we showed an equivalence between MaxConv and MaxConv UpperBound (see Theorem 3.1). Also, it is known that the 3sumConv problem is subquadratically equivalent to 3sum [41]. Hence, the following theorem suffices.
The proof heavily utilizes [50, Proposition 3.4, Theorem 3.3], which we present here for completeness. pre_i(x) denotes the binary prefix of x of length i, where the most significant bit is considered first. In the original statement ([50, Proposition 3.4]), the prefixes are alternately treated as integers or strings. We modify the notation slightly to work only with integers.
Lemma A.2 ([50, Proposition 3.4]). For three integers x, y, z, we have that x + y > z iff one of the following holds:
(1) there exists a k such that pre_k(x) + pre_k(y) = pre_k(z) + 1;
(2) there exists a k such that pre_{k+1}(x) = 2·pre_k(x) + 1, pre_{k+1}(y) = 2·pre_k(y) + 1, pre_{k+1}(z) = 2·pre_k(z), and pre_k(z) = pre_k(x) + pre_k(y).

For each prefix length, Lemma A.2 yields a constant number of equality conditions, each of which can be encoded as an instance of 3sumConv on the corresponding prefix sequences. Then, a[i] + b[j] > c[i + j] holds for some i, j iff one of the constructed instances of 3sumConv returns true. As the number of instances is O(logW), the claim follows. The 3sumConv problem is subquadratically equivalent to 3sum [41], which establishes a relationship between these two classes of subquadratic equivalence.
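The conditions of Lemma A.2 can be checked directly on fixed-width binary representations; the brute-force sketch below (ours) is useful for sanity-testing them:

```python
def pre(x, i, bits):
    """pre_i(x): the i most significant bits of x, viewed as an integer
    (x interpreted as a binary string of fixed length `bits`)."""
    return x >> (bits - i)

def greater_by_prefixes(x, y, z, bits):
    """Returns True iff one of the two conditions of Lemma A.2 holds;
    by the lemma, this is equivalent to x + y > z."""
    for k in range(bits + 1):
        if pre(x, k, bits) + pre(y, k, bits) == pre(z, k, bits) + 1:
            return True
    for k in range(bits):
        if (pre(x, k + 1, bits) == 2 * pre(x, k, bits) + 1 and
            pre(y, k + 1, bits) == 2 * pre(y, k, bits) + 1 and
            pre(z, k + 1, bits) == 2 * pre(z, k, bits) and
            pre(z, k, bits) == pre(x, k, bits) + pre(y, k, bits)):
            return True
    return False
```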

B NONDETERMINISTIC ALGORITHM FOR 3SUM
Carmosino et al. [14, Lemma 5.8] presented an O(n^{1.5}) nondeterministic algorithm for 3sum, i.e., the running time depends only on the size of the input. However, in our application, we need a running time that is a function of the sizes of the sets A, B, and C. In this section, we analyze the running time of the algorithm of Carmosino et al. with regard to these parameters.

Lemma B.1. There is a nondeterministic algorithm for 3sum with running time O(√(n_1 n_2 n_3) log^2(W)), where n_1 = |A|, n_2 = |B|, n_3 = |C|, and W is the maximum absolute value of the integers in A ∪ B ∪ C (we assume that n_1 + n_2 + n_3 ≤ W).
Proof. If there is a triple (a ∈ A, b ∈ B, c ∈ C) such that a + b = c, then we can nondeterministically guess it and verify it in O(1) time. To prove that there is no such triple, we nondeterministically guess the following:
(1) a prime number p ≤ prime_{√(n_1 n_2 n_3)}, where prime_i denotes the i-th prime number;
(2) an integer t(p) ≤ √(n_1 n_2 n_3) log(3W), which is the number of solutions for the sets (A mod p, B mod p, C mod p);
(3) a set S = {(a_1, b_1, c_1), . . . , (a_{t(p)}, b_{t(p)}, c_{t(p)})}, where |S| = t(p) and each triple (a_i ∈ A, b_i ∈ B, c_i ∈ C) satisfies a_i + b_i ≡ c_i (mod p).
To see that for each NO-instance there exists such a proof, consider the number of false positives, i.e., tuples (a ∈ A, b ∈ B, c ∈ C, p), where p is a prime dividing a + b − c. For each triple (a ∈ A, b ∈ B, c ∈ C), the value |a + b − c| has at most log(3W) distinct prime divisors. Therefore, the number of false positives is bounded by n_1 n_2 n_3 log(3W). Since there are √(n_1 n_2 n_3) candidates for p, we can choose one such that t(p) ≤ √(n_1 n_2 n_3) log(3W). To verify the proof, we need to verify whether S contains no true solution and to compute the number of solutions for (A mod p, B mod p, C mod p). If it equals |S|, then we are sure that all solutions for the instance modulo p are indeed false positives for the original instance.
Since the numbers are bounded by p, we can count the solutions using the FFT in time O(p log p) = O(√(n_1 n_2 n_3) log^2(W)).
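The counting step, with a naive cyclic convolution standing in for the FFT (illustration only; the function name is ours):

```python
def count_solutions_mod_p(A, B, C, p):
    """Number of triples (a, b, c) in A x B x C with a + b ≡ c (mod p),
    via the cyclic convolution of the residue histograms of A and B."""
    fa, fb = [0] * p, [0] * p
    for a in A:
        fa[a % p] += 1
    for b in B:
        fb[b % p] += 1
    conv = [0] * p                       # conv[r] = #{(a, b) : a + b ≡ r (mod p)}
    for i in range(p):
        if fa[i]:
            for j in range(p):
                conv[(i + j) % p] += fa[i] * fb[j]
    return sum(conv[c % p] for c in C)
```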