Convolution and Knapsack in Higher Dimensions
Abstract
In the Knapsack problem, one is given the task of packing a knapsack of a given size with items in order to gain a packing with a high profit value. As one of the most classical problems in computer science, research on this problem has come a long way. One important connection to the (max, +)-convolution problem has been established, where knapsack solutions can be combined by building the convolution of two sequences. This observation has been used in recent years to give conditional lower bounds but also parameterized algorithms.
In this paper we carry these results into higher dimensions. We consider Knapsack where items are characterized by multiple properties – given through a vector – and a knapsack that has a capacity vector. The packing must not exceed any of the given capacity constraints. In order to show a similar sub-quadratic lower bound, we consider a multidimensional version of (max, +)-convolution. We then consider variants of this problem introduced by Cygan et al. and prove that they are all equivalent in terms of algorithms that allow for a running time sub-quadratic in the number of entries of the array.
We further develop a parameterized algorithm to solve higher dimensional Knapsack. The techniques we apply are inspired by an algorithm introduced by Axiotis and Tzamos. We will show that even for higher dimensional Knapsack, we can reduce the problem to convolution on one-dimensional, concave sequences, leading to an algorithm whose running time is parameterized by the number of different weight vectors, the capacity vector, and the dimension of the problem. Then, we use these techniques to improve the approach of Eisenbrand and Weismantel and obtain a faster algorithm for Integer Linear Programming with upper bounds.
Finally, we give a divide-and-conquer algorithm for ILP.
Keywords and phrases: Knapsack, Convolution, Integer Linear Programming
Funding: Klaus Jansen: Supported by the German Research Foundation (DFG) project JA 612/25-1.
2012 ACM Subject Classification: Theory of computation → Complexity classes; Theory of computation → Parameterized complexity and exact algorithms
Acknowledgements: We want to thank the anonymous referees for their time and helpful feedback.
Editors: Pat Morin and Eunjin Oh
Series and Publisher: Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
The Knapsack problem is one of the core problems in computer science. The task of finding a collection of items that fits into a knapsack but also maximizes some profit is NP-hard and as such the aim is to find approximation algorithms, parameterized algorithms, and determine lower bounds for the running time of exact algorithms.
Cygan et al. [9] and Künnemann et al. [16], among others, have used the following relationship between Knapsack and the (max, +)-convolution problem. Assume we are given two disjoint item sets I_1 and I_2 and a knapsack of size t. If we additionally know, for each of the two sets, the optimal profits for all knapsack sizes 0, ..., t, we can then calculate the maximum profits for all sizes for I_1 ∪ I_2 by using convolution. This connection was used in order to show that the existence of a sub-quadratic (in terms of capacity) algorithm for Knapsack is equivalent to the existence of a sub-quadratic algorithm for (max, +)-convolution.
In this work, we consider these problems in higher dimensions. The Knapsack problem is generalized simply by replacing the single weight value and capacity by vectors. We then look for a collection of items whose summed-up weight vectors do not exceed the capacity vector in any position. The natural question arises whether a similar quadratic time lower bound exists for this problem. We answer this question positively by giving a generalization of convolution in higher dimensions as well. Using this generalization and techniques introduced by Bringmann [4], such as color-coding, we are able to achieve similar subquadratic lower bounds as in the one-dimensional case.
1.1 Problem Definitions and Notations
We define , , and for . In the following, we write for two vectors u and v that u ≤ v (resp. u < v) if for all i we have u_i ≤ v_i (resp. u_i < v_i). Further we denote with for any vector . With 1 we denote the vector that has a 1 in every position.
Definition 1.
Let be a -dimensional vector and be a array. We call the vector the size of and denote the number of entries in with .
We call a vector with position of array . For ease of notation, we will denote for any array position of the respective array entry with .
We note that in this definition, array positions lie in between and . This makes it easier to work with positions for convolution and also for formulating time complexity bounds, as will be the main parameter we consider.
Definition 2 (Maximum Convolution).
Given two d-dimensional arrays A and B with equal size t, the (max, +)-convolution of A and B, denoted as A ⊕ B, is defined as an array C of size t with C[p] = max { A[q] + B[r] : q + r = p } for any position p.
Note that in the following we will refer to this problem as “Convolution”. We specifically limit ourselves to the special truncated case where both input arrays and the output array have the same size. In a more general setting, we could allow arrays of different sizes as input and compute an output array of combined size.
To measure the running time of our algorithms, we mainly consider the size, or rather the number of entries, of the resulting array, because we need to calculate a value for every position. Therefore, in terms of theoretical performance, working with different sizes or calculating an array of combined size will not make a difference. By only considering arrays of the same size, we avoid many unnecessary cases and the dummy value −∞.
As in the one-dimensional case, there is a quadratic time algorithm for d-MaxConv: it simply iterates through all pairs of positions.
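As a point of reference, the quadratic baseline can be sketched as follows. This is an illustrative implementation (the name `max_conv` and the dictionary representation are ours, not from the paper), representing a d-dimensional array as a mapping from 0-indexed position tuples to values:

```python
from itertools import product

NEG_INF = float("-inf")

def max_conv(A, B, size):
    # A and B map 0-indexed position tuples to values; size is the
    # common size vector (t_1, ..., t_d).  A pair (p, q) contributes
    # to position p + q only when that sum is still a valid position
    # (the truncated variant discussed in the text).
    positions = list(product(*(range(t) for t in size)))
    C = {p: NEG_INF for p in positions}
    for p in positions:
        for q in positions:
            s = tuple(pi + qi for pi, qi in zip(p, q))
            if s in C:
                C[s] = max(C[s], A[p] + B[q])
    return C
```

The double loop over all position pairs makes the quadratic dependence on the number of array entries explicit.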
Further we define an upper bound test for the convolution problem. In this problem, we are given a third input array and need to decide whether its entries are upper bounds for the entries of the convolution.
We further generalize the notion of superadditivity to multidimensional arrays.
The next problem class we consider is the -Knapsack problem. In the one-dimensional case, one is given a set of items with weights and profits and one knapsack with a certain weight capacity. The goal is now to pack the knapsack such that the profit is maximized and the total weight of packed items does not exceed the weight capacity.
The natural higher dimensional generalization arises when we have more constraints to fulfill. When going on a journey by plane, one for example has several further requirements such as a maximum amount of allowed liquid or number of suitcases. By imposing more similar requirements, we can simply identify each item by a vector and also define the knapsack by a capacity vector. This leads to the following generalization of Knapsack into higher dimensions. We further differentiate between two problems 0/1 -Knapsack and Unbounded -Knapsack, depending on whether we allow items to be only used one time or an arbitrary number of times.
The running time for these problems is mainly dependent on the dimension , the number of items of an instance and the number of feasible capacities and we will further study the connection of these in regards to the convolution problems.
1.2 Related Work
Cygan et al. [9] as well as Künnemann et al. [16] initiated the research and were the first to introduce this class of problems and convolution hardness. In particular, Cygan et al. showed for that all these problems are equivalent in terms of whether they allow for a subquadratic algorithm. This allows to formulate a conditional lower bound for all these problems under the hypothesis that no subquadratic algorithm for -MaxConv exists.
Conjecture 3 (MaxConv-hypothesis).
There exists no O(n^(2−δ))-time algorithm for 1-MaxConv for any δ > 0, where n is the length of the input sequences.
(max, +)-Convolution.
The best known algorithm to solve convolution on sequences without any further assumption takes time n²/2^(Ω(√log n)). This result was achieved by Williams [21], who gave a faster algorithm for APSP, in conjunction with a reduction by Bremner et al. [3]. However, the existence of a truly subquadratic algorithm remains open.
Research therefore has focused on special cases of convolution where one or both input sequences are required to have certain structural properties such as monotonicity, concavity or linearity. Chan and Lewenstein [6] gave a subquadratic algorithm for instances where both sequences are monotone increasing and the values are bounded by O(n). Chi et al. [8] improved this further with a randomized algorithm.
Knapsack.
Convolution has proven to be a useful tool to solve other problems as well, in particular the Knapsack problem. In fact, one of the reductions of Cygan et al. [9] was an adapted version of Bringmann’s algorithm for Subset Sum [4]. Bringmann’s algorithm works by constructing sub-instances, solving these, and then combining the solutions via Fast Fourier Transform (FFT). The algorithm by Cygan et al. to solve Knapsack works the same way, but uses convolution instead of FFT.
Axiotis and Tzamos follow a similar approach but choose their sub-instances more carefully, by grouping items with the same weight. That way, the partial solutions form concave profit sequences, which can be combined in linear time [1]. This yields in total an O(D · t) algorithm, where D is the number of different item weights and t the knapsack capacity.
Polak et al. [19] gave an algorithm parameterized by the maximum weight w_max and the maximum profit p_max. They achieved this by combining the techniques from Axiotis and Tzamos with proximity results from Eisenbrand and Weismantel [11]. Further, Chen et al. [7] improved this running time. Recently, Ce Jin [14] gave a further improved algorithm. Independently of these results for 0/1 Knapsack, Bringmann gave an algorithm for Bounded Knapsack.
Doron-Arad et al. [10] showed that there are constants , such that for every integer there is no algorithm that solves 0/1 -Knapsack in time .
Integer Linear Program.
d-dimensional Knapsack problems are a special case of integer linear programs.
Lower bounds may be present but can easily be removed by changing the right side to b − Aℓ and the upper bound to u − ℓ, where ℓ is the vector of lower bounds. If we limit A to be a matrix with only non-negative entries and set the upper bound to 1, then solving the above defined ILP is equivalent to 0/1 d-Knapsack. If we omit the upper bound constraint, the resulting problem is Unbounded d-Knapsack.
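The substitution that removes lower bounds can be sketched in a few lines; `shift_lower_bounds` is an illustrative helper name of ours, operating on plain lists:

```python
def shift_lower_bounds(A, b, l, u):
    # Substitute x = x' + l so that 0 <= x' <= u - l.  A is given as a
    # list of rows, b is the right-hand side, l and u are the lower and
    # upper bound vectors.  Returns the new right-hand side b - A*l and
    # the new upper bounds u - l of the shifted ILP.
    b_new = [bi - sum(aij * lj for aij, lj in zip(row, l))
             for row, bi in zip(A, b)]
    u_new = [ui - li for ui, li in zip(u, l)]
    return b_new, u_new
```

Any feasible x of the original ILP corresponds to the feasible x − ℓ of the shifted one, so optimal solutions translate back by adding ℓ.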
An important part of the research on ILPs has concerned proximity. The proximity of an ILP is the distance between an optimal solution and an optimal solution of its relaxation. The relaxation of an ILP is obtained by allowing the solution vector to be fractional.
Eisenbrand and Weismantel [11] have proven that for an optimal solution x* of the LP-relaxation there is an optimal integral solution z* with ‖z* − x*‖₁ ≤ m(2mΔ + 1)^m, with Δ being the largest absolute entry in the constraint matrix and m the number of constraints. They used this result to give algorithms that solve ILPs without upper bounds as well as ILPs as defined above, and they additionally extended their result to Bounded Knapsack.
There have been further results on proximity, such as Lee et al. [17], who gave a proximity bound in terms of the largest absolute value of a minor of the constraint matrix. Celaya et al. [5] also gave proximity bounds for certain modular matrices.
Rohwedder and Węgrzycki [20] conjecture a lower bound on the running time for ILP. They also show that there are several problems that are equivalent with respect to this conjecture.
1.3 Our Results
We begin by expanding the results of Cygan et al. [9] into higher dimensions. The natural question is whether similar relations shown in their work also exist in higher dimensions and in fact they do. In the first part of this paper we show that the same equivalence – regarding existing subquadratic time algorithms – holds among the higher dimensional problems.
Theorem 4.
For any fixed d and any δ > 0, the following statements are equivalent:
1. There exists an O(N^(2−δ))-time algorithm for d-MaxConv.
2. There exists an O(N^(2−δ))-time algorithm for d-MaxConv UpperBound.
3. There exists an O(N^(2−δ))-time algorithm for d-SuperAdditivity Testing.
4. There exists an O((n + N)^(2−δ))-time algorithm for Unbounded d-Knapsack.
5. There exists an O((n + N)^(2−δ))-time algorithm for 0/1 d-Knapsack.
Here N denotes the number of array entries (for the convolution problems) resp. the number of feasible capacities (for the Knapsack problems), and n denotes the number of items.
Some of these reductions incur a multiplicative factor of 2^d. This is a natural consequence of the exponentially larger number of entries that our arrays hold and that we need to process. As an example, where it was sufficient in the one-dimensional case to split a problem into two sub-problems, we may now need to consider 2^d sub-problems. For this reason we require d to be fixed, so we can omit these factors. We will prove this statement through a ring of reductions. We note that one of the reductions uses an algorithm from Bringmann [4, 9], or rather a generalization of it, that is randomized. Part of this algorithm, involving so-called color-coding, can be derandomized, but a full derandomization is still an open problem.
We note that under the MaxConv-hypothesis, there does not exist a subquadratic-time algorithm for d-MaxConv. If there exists any d such that d-MaxConv admits a sub-quadratic algorithm, then it would also solve the problem in sub-quadratic time for any smaller dimension, and especially for dimension one, hence contradicting the MaxConv-hypothesis, because we can extend any lower-dimensional array to a d-dimensional one by adding dimensions of size one.
We also discuss how to use lower order improvements for 1-dimensional convolution for d-dimensional convolution via a standard argument, using linearization of the array and adding appropriate padding. As this is a standard trick, we omit the proof in the main body but state it in Appendix A for completeness.
Lemma 5.
Given an algorithm for 1-dimensional convolution with running time T(n) and two d-dimensional arrays with N entries each, we can calculate their convolution in time O(T(2^d · N)).
Together with the discussion above, Lemma 5 shows that 1-dimensional and d-dimensional convolution are equivalent for fixed d, up to a factor of 2^d.
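The linearization-with-padding trick can be sketched as follows, under the assumption that each dimension t_i is padded to radix r_i = 2·t_i − 1, so that adding two valid positions never produces a carry between dimensions; the helper names (`pad_strides`, `max_conv_via_1d`) are ours:

```python
from itertools import product

NEG_INF = float("-inf")

def max_conv_1d(a, b):
    # Truncated 1-dimensional (max,+)-convolution, quadratic baseline;
    # any faster 1-dimensional routine could be substituted here.
    n = len(a)
    c = [NEG_INF] * n
    for i, ai in enumerate(a):
        if ai == NEG_INF:
            continue
        for j in range(n - i):
            if b[j] != NEG_INF:
                c[i + j] = max(c[i + j], ai + b[j])
    return c

def pad_strides(size):
    # Mixed-radix strides for radices r_i = 2*t_i - 1: the sum of two
    # valid coordinates stays below r_i, so no carries occur.
    radii = [2 * t - 1 for t in size]
    strides = [1] * len(size)
    for i in range(len(size) - 1, 0, -1):
        strides[i - 1] = strides[i] * radii[i]
    return strides, strides[0] * radii[0]

def max_conv_via_1d(A, B, size):
    # Scatter both d-dimensional arrays into padded 1-dimensional ones,
    # convolve, and read the results back at the valid positions.
    strides, n = pad_strides(size)
    idx = lambda p: sum(pi * s for pi, s in zip(p, strides))
    a = [NEG_INF] * n
    b = [NEG_INF] * n
    for p, v in A.items():
        a[idx(p)] = v
    for p, v in B.items():
        b[idx(p)] = v
    c = max_conv_1d(a, b)
    return {p: c[idx(p)] for p in product(*(range(t) for t in size))}
```

Positions whose sum leaves the valid range land on padded indices that are never read back, which is exactly the role of the padding.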
In the second part of our paper, we complement our conditional lower bound with a parameterized algorithm. To achieve this, we generalize an algorithm by Axiotis and Tzamos [1]. Our algorithm will also have a running time dependent on the number of different item weight vectors, which we denote with D, and on the largest item weight. This algorithm will also group items by weight vector and solve the respective sub-instances. The resulting partial solutions are then combined via d-MaxConv. However, solving general d-MaxConv needs quadratic time in the number of entries. We can overcome this barrier by reducing our problem to convolution on one-dimensional, concave sequences.
Theorem 6.
There is an algorithm for Bounded -EqualityKnapsack with running time .
In the general case, the additional term in the running time only becomes relevant when we have a slim knapsack, that is, one that is very large in one component but comparatively small in the others. We note that our algorithm achieves the lower bound proposed in Theorem 4.
We use the techniques for Theorem 6 to improve the approach of Eisenbrand and Weismantel [11] to obtain the following running time for ILP.
Theorem 7.
There is an algorithm for ILP with running time .
Our approach reduces the dependency on the number of dimensions and removes a logarithmic factor. Further, our algorithm is mostly independent of the number of variables but relies on the number of different columns of the given ILP.
In addition to their conjecture, Rohwedder and Węgrzycki [20] showed that the quadratic dependence on the number of constraints in the exponent can be avoided if an additional term depending on the number of variables is allowed in the running time. We improve their algorithm by removing the dependence on the largest matrix entry from the base of this term.
Theorem 8.
There is an algorithm that can solve ILP in time .
The algorithm is a divide and conquer algorithm which repeatedly halves the upper bounds.
1.4 Organization of the Paper
In Section 2 we present an overview of the reductions to prove Theorem 4. However, the concrete proofs are deferred to the full version [12], as they are similar to the ones by Cygan et al. [9]. Next, we give the parameterized algorithm as well as the generalization to ILPs in Section 3. The proof of the divide-and-conquer algorithm for Theorem 8 is given in Section 4.
2 Reductions
For the Convolution problems, we will formulate the running time in terms of the dimension and the size of the result array – meaning the main parameter resembles the number of entries in the array. For the Knapsack problems, we will add the number of items as a parameter, again using the number of possible knapsack capacities as the size parameter.
Similar to Cygan et al. [9], we mainly look at functions satisfying for some constants . Therefore, we remark that for all constant we can write since is fixed and we then have .
For all the mentioned problems we assume that all inputs, that is, also any value in any vector, consist of integers in the range of for some . This is generally omitted as a running time parameter and such factors are omitted or hidden in functions. We remark that – unavoidably – during the reductions we may have to deal with larger values than . We generally will multiply a factor of in cases where the values we have to handle increase up to . Note that if is a constant, we again omit it.
We follow the same order as Cygan et al. [9], as the reductions then become increasingly more complex. For the first reduction from Unbounded d-Knapsack to 0/1 d-Knapsack, we use the same reduction as in the one-dimensional case. The idea is to simply encode the taking of multiple copies of an item via binary encoding.
Theorem 9.
[Unbounded d-Knapsack → 0/1 d-Knapsack] An algorithm for 0/1 d-Knapsack implies an algorithm for Unbounded d-Knapsack, up to a factor logarithmic in the capacity of the knapsack.
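The binary-encoding idea can be sketched as follows; `binary_split` is an illustrative helper name of ours. An item usable up to m times is replaced by O(log m) 0/1 copies whose multiplicities are powers of two plus one remainder, so that every count from 0 to m is a subset sum of the multiplicities:

```python
def binary_split(weight, profit, m):
    # Replace one item (weight vector, profit) usable up to m times by
    # 0/1 items carrying multiplicities 1, 2, 4, ... plus a remainder.
    # Every count in 0..m is then representable by picking a subset.
    items, k = [], 1
    while m > 0:
        take = min(k, m)  # power of two, capped by the remaining budget
        items.append(([take * w for w in weight], take * profit))
        m -= take
        k *= 2
    return items
```

For m = 5 this produces multiplicities 1, 2, 2, whose subset sums cover exactly the counts 0 through 5.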
For the next reduction, we reduce d-SuperAdditivity Testing to a special case where each array entry is non-negative and the values fulfill a monotonicity property.
Theorem 10.
[d-SuperAdditivity Testing → Unbounded d-Knapsack] An algorithm for Unbounded d-Knapsack implies the existence of an algorithm that solves d-SuperAdditivity Testing in comparable time for an array of corresponding size.
The next reduction differs from Cygan et al. [9]. We also combine our input arrays together, but in the context of d-dimensional arrays we need to handle a larger number of different combinations than in the one-dimensional case. We therefore add a block of negative values as an initial dummy block. This way, any combination that is not relevant to the actual upper bound test will pass the test trivially.
Theorem 11.
[d-MaxConv UpperBound → d-SuperAdditivity Testing] An algorithm for d-SuperAdditivity Testing implies an algorithm for d-MaxConv UpperBound.
For the next reduction from d-MaxConv to d-MaxConv UpperBound, we will prove that we can use an algorithm for d-MaxConv UpperBound to also identify a position that violates the upper bound property.
Theorem 12.
[d-MaxConv → d-MaxConv UpperBound] An algorithm for d-MaxConv UpperBound implies an algorithm for d-MaxConv.
The last reduction is based on Cygan et al. [9] and Bringmann [4]. To make their approach work even in higher dimensional cases, we require a new more refined distribution of items into layers. With this new partition of items many used techniques such as color-coding naturally translate into higher dimensions.
Theorem 13 (0/1 d-Knapsack → d-MaxConv).
If d-MaxConv can be solved in time then 0/1 d-Knapsack can be solved with a probability of at least in time for any .
3 Parameterized Algorithm for Multi-Dimensional Knapsack
In this part, we will consider solving Knapsack such that the capacity is completely utilized. This enables a concise presentation and is not a restriction as discussed after the concrete problem definition.
We will not only solve this problem for the given capacity but for all smaller capacities. In detail, we will construct a solution array with size such that for any we have that is the maximum profit of an item set whose total weight is exactly . Note that we can construct a solution for Bounded -Knapsack after solving Bounded -EqualityKnapsack by remembering the highest profit achieved in any position.
The algorithm we will show is based on the algorithm of Axiotis and Tzamos [1]. They solved one-dimensional Knapsack by splitting all items into subsets of equal weight. They then proceed to solve the resulting sub-instances, which can be done easily by gathering the highest profit items that fit into the knapsack. Finally, they combine the resulting partial solutions via convolution. The profits of one such partial solution form a concave sequence, which allows each convolution to be calculated in linear time.
Definition 14 (Concave Sequences).
Let a = (a_0, ..., a_(n−1)) be a sequence of numbers of length n. We call the sequence concave if for all 0 < i < n − 1 we have a_i − a_(i−1) ≥ a_(i+1) − a_i.
Lemma 15 ([1, Lemma 9]).
Given an arbitrary sequence a of length n and a concave sequence b of length m, we can compute the convolution of a and b in time O(n + m).
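To illustrate why concavity helps, the following sketch handles the simpler case where both sequences are concave: the increments of the convolution are then exactly the merge, in non-increasing order, of the increments of the two inputs (a classic greedy fact). The general arbitrary-times-concave case of the lemma requires SMAWK-style techniques, which we do not reproduce here; `max_conv_concave` is an illustrative helper name of ours:

```python
def max_conv_concave(a, b):
    # (max,+)-convolution of two concave sequences in linear time.
    # The difference sequences da and db are non-increasing, so a
    # two-pointer merge produces the increments of the result in
    # non-increasing order, which makes the output concave as well.
    da = [a[i + 1] - a[i] for i in range(len(a) - 1)]
    db = [b[i + 1] - b[i] for i in range(len(b) - 1)]
    c = [a[0] + b[0]]
    i = j = 0
    while i < len(da) or j < len(db):
        if j == len(db) or (i < len(da) and da[i] >= db[j]):
            c.append(c[-1] + da[i])
            i += 1
        else:
            c.append(c[-1] + db[j])
            j += 1
    return c
```

For the truncated variant used in the text one would simply keep only the first entries of the result.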
We achieve a similar result and calculate higher dimensional convolutions in linear time. In fact, we will reduce our problem to calculating convolutions of one-dimensional concave sequences. This way we can calculate the convolution of our sub-solutions in time linear in the number of entries of our array.
Theorem 6. [Restated, see original statement.]
We remark that in most cases this algorithm will have a running time linear in the number of entries. However, in the special case of a slim array, with one large size component and all other components being very small, our algorithm might have a slightly worse running time. Note that this algorithm also works for 0/1 d-Knapsack by setting every upper bound to 1.
Proof of Theorem 6.
Since we have different weights, let be the different weights. We partition the items into sets of the same weight, for all . We start with an empty solution array of size such that and for . We will calculate solution arrays for where is the solution array for instance restricted to the items . Next we describe how we can calculate from .
Consider an optimal solution for some position in , i.e., containing only items from . The solution contains a certain number of items of weight . Let this number be . We have . Thus, the other items in the solution are from and have combined profit otherwise our initial solution is not optimal or is not correct. Also, the items with weight in the optimal solution have the highest profit possible, otherwise the solution is not optimal. Therefore, we have
(1)
where is the largest profit for taking items of weight . Hence, only position pairs such that their difference is an integer multiple of () are relevant for the calculation of or . This condition forms an equivalence relation and thus we can calculate the values of per equivalence class. Every equivalence class has the structure with . Define the sequences with and with . Thus, we can calculate by for every . This is sufficient by Equation 1. Note that, we only have to calculate the convolution once as we can use the result for every element of the equivalence class. We always have and is a concave sequence.
The items can be partitioned in the sets in time . In every set the most profitable items can be calculated in total time by using [2]. Calculating all values for with and takes time . Every convolution can be calculated in linear time by Lemma 15. Since the equivalence classes partition the set of positions, we can calculate from in time where the factor is due to multidimensional indices. We may assume which shows the claimed running time.
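The grouping of positions into equivalence classes along a weight vector can be sketched as follows; `equivalence_classes` is an illustrative helper of ours, and we assume the weight vector has at least one positive component:

```python
from itertools import product

def equivalence_classes(size, w):
    # Group positions p of a d-dimensional array by the line
    # {r, r + w, r + 2w, ...}: two positions are equivalent when their
    # difference is an integer multiple of the weight vector w.  The
    # representative r of a class is the position from which w can no
    # longer be subtracted without leaving the array.
    classes = {}
    for p in product(*(range(t) for t in size)):
        k = min(pi // wi for pi, wi in zip(p, w) if wi > 0)
        r = tuple(pi - k * wi for pi, wi in zip(p, w))
        classes.setdefault(r, []).append(p)
    return classes
```

Each class is then treated as a one-dimensional sequence, so the concave convolution routine can be applied to it independently.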
3.1 Generalisation to ILPs
As we mentioned before, Knapsack is a special case of ILP. We now want to discuss how to extend our results to ILPs. In the case that all matrix entries are non-negative ILP is equivalent to Bounded -EqualityKnapsack. We discuss later how to handle negative entries in the constraint matrix. Negative item profits are sensible in Bounded -EqualityKnapsack as those may be required to reach the capacity. The algorithm presented in Theorem 6 also works with negative profits. The running time of Theorem 6 depends on the capacity of the knapsack. To bound this value for ILPs we use the proximity bound by Eisenbrand and Weismantel [11] and improve their algorithmic approach. We start by giving a short summary of their approach.
For an ILP Instance with constraint matrix , right side , and upper bounds , let be the biggest absolute value of an entry in the constraint matrix. Eisenbrand and Weismantel [11] calculate an optimal solution for the LP-relaxation, which is possible in polynomial time. They proved that there exists an integral optimal solution , such that the distance in -norm is small, more precisely . They then proceed to take as an integral solution () and construct the following ILP to find an optimal solution.
Note that we may have for some . Combining an optimal solution to this ILP with yields an optimal solution to the original ILP. For any with we have due to the bound on the -norm. Eisenbrand and Weismantel construct an acyclic directed graph and find the longest path in that graph which is equivalent to an optimal solution to the ILP above. They bound the size of the graph using the bound on the -norm above. We avoid such a graph construction for our algorithm which allows us to improve the running time. For two with and we may assume that implies . Otherwise, we can modify to obtain a solution that adheres to this property with at least the same value. This structure allows us to reformulate the ILP by grouping variables together with the same column in as follows
where is a concave function for all . We calculate an optimal solution to this ILP using the convolution approach from Theorem 6.
Theorem 7. [Restated, see original statement.]
Proof.
For let be the best possible values using the variables for . The size of in every dimension is . To calculate from for some position such that we need to calculate
As in the proof of Theorem 6 we can consider the equivalence classes. Let be the equivalence class of , it has the structure . Define the sequence with for and for . Further we define the sequence with . We calculate the convolution of the sequences in time using Lemma 15. Then we get
for every element of the equivalence class. Next, we discuss how to obtain the improved running time instead of the one expected from the proof of Theorem 6. To do this we add buffer zones at the start and the end of every dimension. Then the size of every dimension is still bounded as before. Next we save the d-dimensional array as a 1-dimensional array by concatenating the dimensions. We mark the entries in the buffer zones. For a fixed column we can calculate a step size in the 1-dimensional array that corresponds to a step with the column in the d-dimensional array. Using the marked buffer zones, we can check if we stepped outside the valid area, which allows us to find the equivalence class of a position in time linear in the size of the class. Therefore, we can find all equivalence classes in linear time overall, and we can calculate the convolution in the same time.
First we need to solve the linear relaxation of the ILP. Denote the necessary time by . Next, we partition the columns by weight in time . We can find the most profitable picked and unpicked items for every column type in total time using [2]. With this we can calculate all relevant values for every in time . As explained above, every convolution can be calculated in time , which yields the claimed running time. An optimal solution vector can be calculated by backtracking in time .
4 Divide and Conquer Algorithm
In this section we show Theorem 8. We start by showing the central structural property.
Lemma 16.
Let , , and . If and , there exists such that
Proof.
Let with . We define the new solution component wise as follows.
for . We start by showing . Let . If is odd, we have
and
Otherwise, if is even, we have
and
Next, let . We have
for every and thus . This lemma implies that every solution to with can be decomposed into such that , is a solution to and . Note that does not depend on the concrete solution but is the same for all.
We can iterate Lemma 16 to obtain a series of solutions and upper bounds until we reach a point where all upper bounds are zero. Define and after applying Lemma 16 times, i.e., and . Let be the smallest value such that .
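The recursion depth of this iteration can be sketched numerically; assuming each application maps every upper bound u_i to ⌊u_i/2⌋ (one natural choice for a halving step), the number of levels is logarithmic in the largest upper bound. The helper name `halving_levels` is ours:

```python
def halving_levels(u):
    # Count how often the bounds u -> floor(u/2) can be halved before
    # every bound reaches zero, i.e. the depth of the divide-and-conquer
    # recursion; this equals ceil(log2(max(u) + 1)) for max(u) > 0.
    levels = 0
    while any(ui > 0 for ui in u):
        u = [ui // 2 for ui in u]
        levels += 1
    return levels
```

This logarithmic depth is what keeps the graph construction in the proof of Theorem 8 small.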
Corollary 17.
We have for .
Proof.
By induction. The base case is Lemma 16. Let . Then, we have
With this we can give the algorithm and prove Theorem 8.
Theorem 8. [Restated, see original statement.]
There is an algorithm that can solve ILP in time .
Proof.
We find an optimal solution to the bounded ILP by finding the longest path in a graph that is constructed based on Lemma 16. We define the graph as follows.
The intuition here is that we have a layer for every application of Lemma 16. We can jump from one layer to the next by doubling the corresponding right side (the first component) and length of the path to the current vertex. Then, to reconstruct the solution prior to an application of Lemma 16 we may need to increase variables by up to two. For this we have the third component. While going to a vertex with in the last component we may use the th column up to the number of times it was removed to get the new upper bound. This is also an upper bound for the number of times it was removed from a solution as stated in Lemma 16. More precisely, we associate the ILP
| (for ) | ||||
| (for ) | ||||
with a vertex and refer to it by . A path from to describes a solution to where the value is equal to the length of the path.
Next we give the precise definitions for the edges in the graph. For a vertex with and we define the following edges
with and for vertices with and we define the edge
Because of the changes in the second and third component the graph is acyclic, and thus the longest path can be calculated in linear time by utilizing a topological order of the graph. Note that the second type of edge does not have a classic length, but this doubling of the current length works with the aforementioned algorithm. The out degree of every vertex is bounded by and thus the size of the graph is linear in the number of vertices, which is . Now we find the longest path from to . The only solution to is . The structure of the graph is depicted in Figure 1.
Let be a solution to the ILP. Further, let be the solutions obtained by iterating Lemma 16. Define
for and where is the th unit vector. We have by Lemmas 16 and 17. Thus,
is a path in the graph with length . Therefore, ILP has a solution if and only if there is a path in the graph from to . Also the value of the optimal solution to the ILP is equal to the length of the longest path in the graph.
5 Conclusion
First, we have proven that the relationship between Knapsack and Convolution in the one-dimensional case also prevails in higher dimensional variants. A natural follow-up question is how many more problems can be shown to be equivalent to -dimensional convolution.
We developed a parameterized algorithm for multidimensional knapsack that generalizes techniques for one-dimensional knapsack. Further, we used these techniques to obtain a faster algorithm for Integer Linear Programming with upper bounds.
Finally, we gave an improved algorithm for Integer Linear Programming with upper bounds that avoids the quadratic dependency on the dimension by increasing the dependency on the number of variables.
References
- [1] Kyriakos Axiotis and Christos Tzamos. Capacitated Dynamic Programming: Faster Knapsack and Graph Algorithms. In 46th International Colloquium on Automata, Languages, and Programming (ICALP), volume 132, pages 19:1–19:13, Dagstuhl, Germany, 2019. doi:10.4230/LIPICS.ICALP.2019.19.
- [2] Manuel Blum, Robert W. Floyd, Vaughan R. Pratt, Ronald L. Rivest, and Robert Endre Tarjan. Time bounds for selection. J. Comput. Syst. Sci., 7(4):448–461, 1973. doi:10.1016/S0022-0000(73)80033-9.
- [3] David Bremner, Timothy M. Chan, Erik D. Demaine, Jeff Erickson, Ferran Hurtado, John Iacono, Stefan Langerman, and Perouz Taslakian. Necklaces, convolutions, and X + Y. In Algorithms – ESA 2006, pages 160–171, Berlin, Heidelberg, 2006. Springer Berlin Heidelberg. doi:10.1007/11841036_17.
- [4] Karl Bringmann. A Near-Linear Pseudopolynomial Time Algorithm for Subset Sum, pages 1073–1084. Society for Industrial and Applied Mathematics, 2017. doi:10.1137/1.9781611974782.69.
- [5] Marcel Celaya, Stefan Kuhlmann, Joseph Paat, and Robert Weismantel. Improving the cook et al. proximity bound given integral valued constraints. In Integer Programming and Combinatorial Optimization - 23rd International Conference, IPCO 2022, Eindhoven, The Netherlands, June 27-29, 2022, Proceedings, volume 13265 of Lecture Notes in Computer Science, pages 84–97. Springer, 2022. doi:10.1007/978-3-031-06901-7_7.
- [6] Timothy M. Chan and Moshe Lewenstein. Clustered integer 3sum via additive combinatorics. In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, STOC ’15, pages 31–40, New York, NY, USA, 2015. Association for Computing Machinery. doi:10.1145/2746539.2746568.
- [7] Lin Chen, Jiayi Lian, Yuchen Mao, and Guochuan Zhang. Faster algorithms for bounded knapsack and bounded subset sum via fine-grained proximity results. In Proceedings of the 2024 ACM-SIAM Symposium on Discrete Algorithms, SODA 2024, Alexandria, VA, USA, January 7-10, 2024, pages 4828–4848. SIAM, 2024. doi:10.1137/1.9781611977912.171.
- [8] Shucheng Chi, Ran Duan, Tianle Xie, and Tianyi Zhang. Faster min-plus product for monotone instances. In Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2022, pages 1529–1542, New York, NY, USA, 2022. Association for Computing Machinery. doi:10.1145/3519935.3520057.
- [9] Marek Cygan, Marcin Mucha, Karol Węgrzycki, and Michał Włodarczyk. On problems equivalent to (min,+)-convolution. ACM Trans. Algorithms, 15(1), January 2019. doi:10.1145/3293465.
- [10] Ilan Doron-Arad, Ariel Kulik, and Pasin Manurangsi. Fine grained lower bounds for multidimensional knapsack, 2024. doi:10.48550/arXiv.2407.10146.
- [11] Friedrich Eisenbrand and Robert Weismantel. Proximity results and faster algorithms for integer programming using the steinitz lemma. ACM Trans. Algorithms, 16(1), November 2019. doi:10.1145/3340322.
- [12] Kilian Grage, Klaus Jansen, and Björn Schumacher. Convolution and knapsack in higher dimensions, 2024. arXiv:2403.16117.
- [13] Dmitry V. Gribanov, Ivan A. Shumilov, and Dmitriy S. Malyshev. Structured (min,+)-convolution and its applications for the shortest/closest vector and nonlinear knapsack problems. Optim. Lett., 18(1):73–103, 2024. doi:10.1007/S11590-023-02017-5.
- [14] Ce Jin. 0-1 knapsack in nearly quadratic time. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing, STOC 2024, Vancouver, BC, Canada, June 24-28, 2024, pages 271–282. ACM, 2024. doi:10.1145/3618260.3649618.
- [15] L. Kronecker. Grundzüge einer arithmetischen Theorie der algebraischen Grössen. Journal für die reine und angewandte Mathematik, 1882(92):1–122, 1882. doi:10.1515/crll.1882.92.1.
- [16] Marvin Künnemann, Ramamohan Paturi, and Stefan Schneider. On the fine-grained complexity of one-dimensional dynamic programming. In 44th International Colloquium on Automata, Languages, and Programming, ICALP 2017, July 10-14, 2017, Warsaw, Poland, volume 80 of LIPIcs, pages 21:1–21:15. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2017. doi:10.4230/LIPICS.ICALP.2017.21.
- [17] Jon Lee, Joseph Paat, Ingo Stallknecht, and Luze Xu. Improving proximity bounds using sparsity. In Combinatorial Optimization - 6th International Symposium, ISCO 2020, Montreal, QC, Canada, May 4-6, 2020, Revised Selected Papers, volume 12176 of Lecture Notes in Computer Science, pages 115–127. Springer, 2020. doi:10.1007/978-3-030-53262-8_10.
- [18] Victor Y. Pan. Simple multivariate polynomial multiplication. J. Symb. Comput., 18(3):183–186, 1994. doi:10.1006/JSCO.1994.1042.
- [19] Adam Polak, Lars Rohwedder, and Karol Węgrzycki. Knapsack and Subset Sum with Small Items. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021), volume 198 of Leibniz International Proceedings in Informatics (LIPIcs), pages 106:1–106:19, Dagstuhl, Germany, 2021. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.ICALP.2021.106.
- [20] Lars Rohwedder and Karol Węgrzycki. Fine-Grained Equivalence for Problems Related to Integer Linear Programming. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025), volume 325 of Leibniz International Proceedings in Informatics (LIPIcs), pages 83:1–83:18, Dagstuhl, Germany, 2025. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.ITCS.2025.83.
- [21] Ryan Williams. Faster all-pairs shortest paths via circuit complexity. In Proceedings of the Forty-Sixth Annual ACM Symposium on Theory of Computing, STOC ’14, pages 664–673, New York, NY, USA, 2014. Association for Computing Machinery. doi:10.1145/2591796.2591811.
Appendix A Calculating -dimensional Convolution via One-dimensional Convolution
We can interpret -convolution as multiplication of polynomials over the -semiring, also called the arctic semiring. -dimensional convolution then corresponds to multiplication of polynomials in the variables . More precisely, let be two -dimensional arrays of size . Then the polynomial corresponding to is given by
The Kronecker map [15], as described in [18], maps to for every , where for every . Generalizing this map to polynomials accordingly allows us to obtain a one-dimensional array by adding dummy values. An example of this construction is depicted in Figure 2. Let and be the results of this transformation.
Now we can compute with a one-dimensional -convolution algorithm and transform the result back according to the transformations above to obtain an -dimensional array. The result is correct since the polynomial multiplication is correct due to the padding added in the definition of [18].
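A small sketch of this reduction, under our own assumptions: the naive quadratic one-dimensional (max,+)-convolution, the function names, and the choice of padding each dimension of size t_k to base 2*t_k - 1 (so that index sums never carry between mixed-radix digits) are illustrative and not taken from the paper; dummy positions receive the neutral value -inf so they never win a max.

```python
import math
from itertools import product

def maxplus_conv_1d(a, b):
    """Naive 1-D (max,+)-convolution: c[k] = max over i+j=k of a[i] + b[j]."""
    c = [-math.inf] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] = max(c[i + j], x + y)
    return c

def maxplus_conv_nd(A, B, shape):
    """d-dimensional (max,+)-convolution via Kronecker-style flattening.

    A, B: dicts mapping index tuples (entry k below shape[k]) to numbers.
    Returns a dict over index tuples with entry k below 2*shape[k] - 1.
    """
    d = len(shape)
    bases = [2 * t - 1 for t in shape]   # room for all result indices
    strides = [1] * d
    for k in range(1, d):
        strides[k] = strides[k - 1] * bases[k - 1]
    total = strides[-1] * bases[-1]

    def flatten(arr):
        # embed the d-dimensional array, padding with -inf dummy values
        flat = [-math.inf] * total
        for idx, val in arr.items():
            flat[sum(i * s for i, s in zip(idx, strides))] = val
        return flat

    c = maxplus_conv_1d(flatten(A), flatten(B))
    # decode: index sums stay below each base, so positions add without carry
    result = {}
    for idx in product(*(range(b) for b in bases)):
        pos = sum(i * s for i, s in zip(idx, strides))
        if c[pos] > -math.inf:
            result[idx] = c[pos]
    return result
```

Because each coordinate of a result index is at most 2*t_k - 2, the mixed-radix representation with bases 2*t_k - 1 is unique and the one-dimensional positions of valid index pairs never collide, which is exactly the role of the padding described above.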
To show Lemma 5 it remains to show that , since is the length of the resulting arrays for the one-dimensional convolution. We have
(The first step uses that is the maximum index; the remaining steps follow by induction.)
In conclusion, we have shown the following lemma. See 5
