Fine-Grained Equivalence for Problems Related to Integer Linear Programming
Abstract
Integer Linear Programming with n binary variables and m many 0/1-constraints can be solved in time 2^{Õ(m²)} · poly(n) and it is open whether the dependence on m is optimal. Several seemingly unrelated problems, which include variants of Closest String, Discrepancy Minimization, Set Cover, and Set Packing, can be modelled as Integer Linear Programming with 0/1 constraints to obtain algorithms with the same running time for a natural parameter m in each of the problems. Our main result establishes through fine-grained reductions that these problems are equivalent, meaning that a 2^{O(m^{2−ε})} · poly(n) algorithm with ε > 0 for one of them implies such an algorithm for all of them.
In the setting above, one can alternatively obtain an n^{O(m)} time algorithm for Integer Linear Programming using a straightforward dynamic programming approach, which can be more efficient if n is relatively small (e.g., subexponential in m). We show that this can be improved to d^{O(m)} · poly(n), where d is the number of distinct (i.e., non-symmetric) variables. This dominates both of the aforementioned running times.
Keywords and phrases: Integer Programming, Fine-Grained Complexity, Fixed-Parameter Tractable Algorithms
Funding: Lars Rohwedder: Supported by Dutch Research Council (NWO) project “The Twilight Zone of Efficiency: Optimality of Quasi-Polynomial Time Algorithms” [grant number OCEN.W.21.268].
2012 ACM Subject Classification: Theory of computation → Design and analysis of algorithms
Acknowledgements: We thank Friedrich Eisenbrand for his valuable insights and constructive discussions that enhanced this work.
Editors: Raghu Meka

1 Introduction
The study of parameterized complexity for Integer Linear Programming has a long history: classical works by Lenstra [15] and Kannan [12], and very recently Reis and Rothvoss [19], provide FPT algorithms in the number of variables of an ILP of the form Ax ≤ b. In an orthogonal line of research, Papadimitriou [17] gave an algorithm that is FPT in the number of constraints and the size of the coefficients of A and b for an ILP in the standard form Ax = b, x ≥ 0. Interest in the second line of work has been renewed by the improved algorithms due to Eisenbrand and Weismantel [9] and Jansen and Rohwedder [11], which give essentially optimal running times due to a conditional lower bound based on the Exponential Time Hypothesis (ETH) [14]. Here, Δ denotes the maximum absolute value of an entry in A. The work by Eisenbrand and Weismantel also considers a version where variables are subject to box constraints, which will be the primary focus of this work; see the definition below.
Integer Linear Programming Input: Constraint matrix A ∈ Z^{m×n}, right-hand side b ∈ Z^m, variable bounds ℓ, u ∈ Z^n. Task: Find x ∈ Z^n with Ax = b and ℓ ≤ x ≤ u.
We refer to the variant where ℓ = (0, …, 0) and u = (1, …, 1) as binary Integer Linear Programming.
Binary Integer Linear Programming Input: Constraint matrix A ∈ Z^{m×n}, right-hand side b ∈ Z^m. Task: Find x ∈ {0, 1}^n with Ax = b.
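As a concrete (if naive) baseline for the problem just defined, feasibility can be decided by exhaustive search over all 2^n binary assignments. The sketch below is our own illustration of the problem statement, not one of the algorithms discussed in this paper:

```python
from itertools import product

def binary_ilp_feasible(A, b):
    """Brute-force search for x in {0,1}^n with A x = b.

    Illustrative only: this runs in 2^n time, while the algorithms
    discussed in the text aim for a dependence of the form f(m) * poly(n).
    A is a list of m rows, b a list of m right-hand sides."""
    n = len(A[0])
    for x in product((0, 1), repeat=n):
        if all(sum(a_ij * x_j for a_ij, x_j in zip(row, x)) == b_i
               for row, b_i in zip(A, b)):
            return list(x)
    return None
```

Note that the interesting parameter regime here is a small number m of constraints, with the number n of variables large.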
The running time obtained in [9] for either variant is (mΔ)^{O(m²)} · poly(n). Also for matrices with only 0/1 coefficients nothing better than 2^{Õ(m²)} · poly(n) is known. It is an intriguing question, in the spirit of fine-grained complexity, whether the slightly unusual exponent of m² is necessary. Since the dominant complexity-theoretic assumption of P ≠ NP is not powerful enough to show precise lower bounds on running times, the field of fine-grained complexity is concerned with finding lower bounds via stronger conjectures, see e.g., [2, 20, 6]. A number of such conjectures exist by now, often with interesting connections between them. Even if one doubts these conjectures, the reductions still provide insights into how problems relate to each other.
Based on existing conjectures, the best lower bound known on the exponent of Integer Linear Programming comes from the easier unbounded setting [14] and is of course not tight in our setting. In this paper, we take another path: we assume that Integer Linear Programming cannot be solved faster than the state-of-the-art and present several other natural parameterized problems that are all equivalent with respect to improvements on their running time.
Hypothesis 1 (ILP Hypothesis).
For every fixed ε > 0, there is no 2^{O(m^{2−ε})} · poly(n) time algorithm for Integer Linear Programming with Δ = 1.
In all of the problems below, the symbol m is chosen for the parameter of interest.
Closest String with Binary Alphabet Input: Alphabet Σ = {0, 1}, strings s_1, …, s_m ∈ Σ^L. Task: Find a string y ∈ Σ^L minimizing max_i δ(y, s_i), where δ(x, y) is the Hamming distance between x and y, i.e., the number of positions the two strings differ in.
We refer to the generalization with an arbitrary alphabet Σ simply as Closest String.
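For intuition on the objective, the binary variant can be solved exhaustively over all candidate center strings. The function names are our own; this 2^L-time sketch only illustrates the definition:

```python
from itertools import product

def hamming(x, y):
    # Number of positions in which the two strings differ.
    return sum(a != b for a, b in zip(x, y))

def closest_binary_string(strings):
    """Exhaustive search for a string minimizing the maximum Hamming
    distance to the inputs. Runs in 2^L time for strings of length L;
    it illustrates the objective, not the ILP-based algorithms."""
    L = len(strings[0])
    return min(("".join(y) for y in product("01", repeat=L)),
               key=lambda y: max(hamming(y, s) for s in strings))
```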
Discrepancy Minimization Input: Universe U, set system S_1, …, S_m ⊆ U. Task: Find a coloring x : U → {−1, 1} minimizing max_i |∑_{e ∈ S_i} x(e)|.
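The objective of Discrepancy Minimization can be stated in a few lines of code; the brute-force minimizer below (our own illustration, exponential in |U|) is only meant to make the definition concrete:

```python
from itertools import product

def discrepancy(sets, coloring):
    # max over sets S of |sum of colors of elements in S|
    return max(abs(sum(coloring[e] for e in S)) for S in sets)

def min_discrepancy(sets, universe):
    """Brute force over all +/-1 colorings; 2^|U| time, for illustration."""
    universe = list(universe)
    best = None
    for signs in product((-1, 1), repeat=len(universe)):
        col = dict(zip(universe, signs))
        d = discrepancy(sets, col)
        if best is None or d < best:
            best = d
    return best
```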
Set Multi-Cover Input: Universe U with |U| = m, set system S_1, …, S_n ⊆ U, demand b ∈ N. Task: Find I ⊆ {1, …, n} of minimal cardinality such that for each u ∈ U there are at least b sets S_i with i ∈ I and u ∈ S_i.
Set Multi-Packing Input: Universe U with |U| = m, set system S_1, …, S_n ⊆ U, bound b ∈ N. Task: Find I ⊆ {1, …, n} of maximal cardinality such that for each u ∈ U there are at most b sets S_i with i ∈ I and u ∈ S_i.
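Feasibility for both of these problems is a simple counting condition; the two checkers below (our own naming) spell it out:

```python
def is_multicover(universe, sets, b, chosen):
    """Set Multi-Cover feasibility: every element of the universe
    must lie in at least b of the chosen sets."""
    return all(sum(u in sets[i] for i in chosen) >= b for u in universe)

def is_multipacking(universe, sets, b, chosen):
    """Set Multi-Packing feasibility: every element of the universe
    must lie in at most b of the chosen sets."""
    return all(sum(u in sets[i] for i in chosen) <= b for u in universe)
```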
Many of these problems are well-known applications of ILP techniques. For example, Knop et al. [13] provided algorithms for Closest String and Set Multi-Cover. Discrepancy Minimization is usually considered within the context of approximation algorithms (see, e.g., [3]).
As mentioned above, our main result is the following equivalence.
Theorem 2.
The following statements are equivalent:
(1) There exists an algorithm for Integer Linear Programming with Δ = 1 with running time 2^{O(m^{2−ε})} · poly(n) for some fixed ε > 0.
(2) There exists an algorithm for Binary Integer Linear Programming with Δ = 1 and with running time 2^{O(m^{2−ε})} · poly(n) for some fixed ε > 0.
(3) There exists an algorithm for Closest String with Binary Alphabet with running time 2^{O(m^{2−ε})} · poly(n) for some fixed ε > 0.
(4) There exists an algorithm for Discrepancy Minimization with running time 2^{O(m^{2−ε})} · poly(n) for some fixed ε > 0.
(5) There exists an algorithm for Set Multi-Cover with running time 2^{O(m^{2−ε})} · poly(n) for some fixed ε > 0.
(6) There exists an algorithm for Set Multi-Packing with running time 2^{O(m^{2−ε})} · poly(n) for some fixed ε > 0.
Here, n denotes the size of the respective instance.
Note that Item (1) is the negation of Hypothesis 1. All problems in Theorem 2 are easily transformed into the first problem, i.e., Integer Linear Programming with Δ = 1, while maintaining the same value of m up to constant factors. Hence, the more interesting aspect of the theorem is that all these problems are as expressive as the first one.
Hypothesis 1 considers Integer Linear Programming with relatively small entries, i.e., Δ = 1. One can also ask whether there is any parameter regime for Δ in which the state-of-the-art can be improved. In this spirit, a stronger variant of the conjecture is the following.
Hypothesis 3 (Strong ILP Hypothesis).
For every fixed ε > 0 and every fixed Δ, there is no 2^{O(m^{2−ε})} · poly(n) time algorithm for Integer Linear Programming with entries bounded by Δ.
Note that Hypothesis 1 is the special case of Hypothesis 3 with Δ = 1. Another interesting regime is the complexity of Integer Linear Programming with large Δ, because of a connection to block-structured integer programming, which we elaborate on later; there, the state-of-the-art algorithm again has a running time whose exponent is quadratic in the parameter. Integer Linear Programming with large entries can be reduced to an equivalent instance with a {−1, 0, 1} matrix, as seen in the following theorem, but the reduction is not strong enough to show equivalence between the two hypotheses.
Theorem 4.
There is a polynomial time algorithm that transforms an instance of Integer Linear Programming with arbitrary Δ into an equivalent one with Δ′ = 1 for m′ = O(m log Δ) and n′ = n + O(m log Δ).
This implies that if there is an algorithm with running time 2^{O(m^{2−ε})} · poly(n) for Integer Linear Programming with Δ = 1, then there is a 2^{O((m log Δ)^{2−ε})} · poly(n) time algorithm for Integer Linear Programming with arbitrary Δ.
One might hope to improve the theorem to m′ = O(m + log Δ), since then a fast algorithm for {−1, 0, 1} matrices would imply a correspondingly fast algorithm for arbitrary Δ. However, such a reduction would imply the strong result that, under ETH, Hypothesis 3 is equivalent to Hypothesis 1. This is because under ETH the Subset Sum problem, i.e., the case when m = 1, cannot be solved in time 2^{o(n)} [1], and the hypothetical reduction would be able to encode an instance of Subset Sum into an ILP with Δ′ = 1 and few constraints. We are not aware of any meaningful reduction in the other direction, i.e., from large m and small Δ to smaller m and larger Δ. It is possible to aggregate the m constraints into a single one with entries bounded by Δ^{O(m)}, but this reduction seems useless since the resulting parameter range requires exponential time due to the ETH lower bound mentioned above.
Assuming Hypothesis 3, we derive a tight lower bound for a form of block-structured integer linear programs that has been studied extensively in recent literature. For simplicity, we consider here the basic setting with square s × s submatrices.
n-Fold Integer Linear Programming Input: Square matrices A_1, …, A_t, B_1, …, B_t ∈ Z^{s×s}, right-hand sides b_0, b_1, …, b_t ∈ Z^s. Task: Find x = (x_1, …, x_t) ∈ Z^{st} with x ≥ 0, ∑_{i=1}^t A_i x_i = b_0, and B_i x_i = b_i for all i.
Theorem 5.
For every ε > 0, there is no algorithm with running time 2^{O(s^{2−ε})} · poly(st) for n-Fold Integer Linear Programming when the maximum absolute entry is bounded by a constant, unless Hypothesis 3 is false.
This matches the best algorithms known for the problem, see [5] and references therein. The reduction follows the same idea as used in [10], where the authors show a non-tight quadratic lower bound for the exponent based on ETH. Our lower bound is stronger simply because the conjecture we base it on is stronger.
1.1 Tightness of more general problems
There are several other problems, generalizing the ones mentioned above, for which the known algorithms would be tight assuming that one cannot improve the running time for the class of problems in Theorem 2. However, we do not know whether these generalizations are all equivalent.
The algorithm by Eisenbrand and Weismantel [9] also works for the optimization version of ILP, i.e.,
max{c^T x : Ax = b, ℓ ≤ x ≤ u, x ∈ Z^n}.
This leads to the same running time except for a slightly higher constant in the exponent. Notably, the coefficients of c do not increase the number of arithmetic operations.
Given a matrix A ∈ Z^{m×n} and a convex function f : R^m → R, Dadush, Léonard, Rohwedder, and Verschae [7] have shown that one can find a minimizer of f(Ax) over x ∈ {0, 1}^n in comparable time (assuming that f can be evaluated in polynomial time). This problem is a generalization of Binary Integer Linear Programming, since one can take f(y) = 0 if y = b and f(y) = ∞ otherwise.
Given a linear matroid M over a ground set E with |E| = n, a matrix A ∈ Z^{m×n} and a right-hand side b ∈ Z^m, Eisenbrand, Rohwedder, and Węgrzycki [8] gave an algorithm that finds a basis B of M such that A χ_B = b, where χ_B ∈ {0, 1}^n is the incidence vector of B. This generalizes Binary Integer Linear Programming, since one can take E as the set of binary variables together with n additional dummy variables and then take M as the uniform matroid of rank n.
1.2 Algorithm for few distinct variables
We show that the running time of 2^{Õ(m²)} · poly(n) for Integer Linear Programming with Δ = 1, i.e., the subject of Hypothesis 1, can be improved if the number of distinct variables is low. For the sake of generality, we state the algorithmic result for the optimization variant and any range of Δ.
Theorem 6.
Consider the integer programming problem
max{c^T x : Ax = b, ℓ ≤ x ≤ u, x ∈ Z^n}.    (1)
Let Δ be an upper bound on the absolute value of the entries of A. The problem (1) can be solved in time d^{O(m)} · poly(n) · log‖u − ℓ‖_∞, where d is the number of distinct variables.
Using standard reductions, see e.g. Section 2.5.2, one may reduce to the case ‖u − ℓ‖_∞ ≤ (mΔ)^{O(m)}, making the logarithmic factor insignificant.
We will now interpret this running time and point out interesting special cases.
Binary ILP with Δ = 1
Here, the running time above implies a d^{O(m)} · poly(n) time algorithm, where d is the number of distinct variables, i.e., variables that differ either in their entry of c or in their column of A: one can merge identical variables in polynomial time. Furthermore, without loss of generality, the rows of A are linearly independent, thus m ≤ n. Hence, the overall running time is d^{O(m)} · poly(n) ≤ 2^{O(m²)} · poly(n).
Here, the last inequality follows from a case distinction whether n is greater than 2^m or not. Note that in the setting without an objective function we have d ≤ 2^m. Thus, this running time is at least as good as the known 2^{Õ(m²)} · poly(n) time algorithms. It also dominates the running time of n^{O(m)} that one would get by simple dynamic programming over the right-hand sides that are feasible (of which there can be at most (n + 1)^m).
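The simple dynamic program over feasible right-hand sides mentioned above can be sketched as follows. This is our own minimal rendering, under the simplifying assumption that A is nonnegative, so partial sums exceeding b can be pruned; the size of the table of reachable partial right-hand sides governs the running time:

```python
def ilp_dp(A, b):
    """DP over reachable right-hand sides for binary ILP A x = b.

    Sketch assuming nonnegative entries of A (so pruning against b is
    valid). Processes variables one by one and records, for each
    reachable partial right-hand side, one set of chosen columns."""
    m = len(A)
    reachable = {tuple([0] * m): []}      # partial rhs -> chosen columns
    for j in range(len(A[0])):
        col = tuple(A[i][j] for i in range(m))
        new = dict(reachable)
        for r, chosen in reachable.items():
            r2 = tuple(ri + ci for ri, ci in zip(r, col))
            if r2 not in new and all(v <= bi for v, bi in zip(r2, b)):
                new[r2] = chosen + [j]
        reachable = new
    return reachable.get(tuple(b))        # None if infeasible
```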
Binary ILP with Δ = 1 and a constant number of 1s in each column
Here, the number of distinct columns is polynomial in m and the previous case implies a running time of 2^{O(m log m)} · poly(n), meaning that Hypothesis 1 does not extend to this special case.
2 Reductions
The basic idea of all reductions for Theorem 2 is that we transform one problem with parameter m into another problem with parameter m′ = O(m). Additionally, the size of the instance may only increase from n to O(n^c) for some constant c. The concrete reductions we prove can be seen in Figure 1.
2.1 Closest String
Let r be the bound on the maximum Hamming distance in the decision version of Closest String with Binary Alphabet. For a string s we denote by s(i) its i-th character. For i ≤ j we write s(i..j) for the substring from the i-th to the j-th character.
Proof.
The following is an ILP model for Closest String with Binary Alphabet: find y ∈ {0, 1}^L such that for each i = 1, …, m
∑_{j : s_i(j) = 1} (1 − y(j)) + ∑_{j : s_i(j) = 0} y(j) ≤ r.
The left-hand side equals δ(y, s_i), and the constraint matrix has entries in {−1, 0, 1}. One may add slack variables to turn the inequalities into equalities.
Proof.
We want to transform the following ILP
Ax = b, x ∈ {0, 1}^n,    (2)
where A ∈ {−1, 0, 1}^{m×n}, into an equivalent instance of Closest String. We first rewrite (2) into a more convenient form.
Claim 9.
This follows from simple transformations. We defer the proof until later and first show how the reduction follows from it. By comparing to the ILP formulation of Closest String we observe that (3) corresponds to a “non-uniform” variant of Closest String. It can be reformulated as: given strings s′_1, …, s′_m and bounds r_1, …, r_m, find a string y such that for each i we have δ(y, s′_i) ≤ r_i. This follows from the ILP model given in the proof of Lemma 7. Furthermore, we have the guarantee that any solution has a prescribed number of ones. To transform this into a regular instance, we add two more strings, and to each string we append additional characters. The strings of this instance are defined as follows.
Here, we assume without loss of generality that r_i ≤ L for each i. We claim that there is a solution to this instance with maximum Hamming distance at most r if and only if there is a solution to the non-uniform instance.
Claim 10.
If there is a string y with distance at most r to every string of the constructed instance, then there is also a string y′ with δ(y′, s′_i) ≤ r_i for each i.
Claim 11.
If there is a string y′ with δ(y′, s′_i) ≤ r_i for each i, then there is also a string y with distance at most r to every string of the constructed instance.
From these claims, the lemma follows immediately.
Proof of Claim 9.
We add dummy variables and force the solution to take exactly a prescribed number of ones.
Next, we change the equality constraints into inequalities and bring the matrix into the required form with the help of the all-ones matrix J: since the number of ones in any solution is fixed, adding a multiple of J to the constraint matrix shifts each left-hand side by the same fixed amount, which can be absorbed into the right-hand side.
Proof of Claim 10.
Suppose there is a string with distance at most to each of the strings . Because , string must match and on characters where and match. More precisely, for . Formally, this follows because of
Let us now analyze the distance between and for some . Since the last characters of are zero, we have . Thus,
Thus, string is a solution for the non-uniform instance.
Proof of Claim 11.
Let with for all . We extend it to a string by setting , , and . Let us now verify that has a distance at most to each string. For note that has exactly ones by guarantee of the non-uniform instance. Thus, the distance to and is exactly . Consider some . Then
2.2 Discrepancy Minimization
Proof.
Let λ be a bound on the objective in the decision version of Discrepancy Minimization. Let M be the incidence matrix of the given set system. Then the colorings of discrepancy at most λ are exactly the solutions of
−λ·1 ≤ Mx ≤ λ·1, x ∈ {−1, 1}^U.
This can be equivalently formulated as
M·1 − λ·1 ≤ 2My ≤ M·1 + λ·1, y ∈ {0, 1}^U,
where y = (x + 1)/2. One may translate the inequalities into equalities by introducing slack variables. Therefore, an algorithm for Integer Linear Programming can be used to solve Discrepancy Minimization.
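The substitution y = (x + 1)/2 turns each discrepancy constraint into a pair of bounds on a 0/1 sum. The helper below is our own sketch of this encoding (per set S: |∑_{e∈S} x_e| ≤ λ becomes ⌈(|S| − λ)/2⌉ ≤ ∑_{e∈S} y_e ≤ ⌊(|S| + λ)/2⌋), not verbatim the system used in the text:

```python
from math import ceil, floor

def discrepancy_ilp_rows(sets, lam):
    """0/1 reformulation of the discrepancy-lam test.

    With x = 2y - 1, sum_{e in S} x_e = 2 * sum_{e in S} y_e - |S|,
    so |sum x_e| <= lam iff the 0/1 sum lies in the returned interval.
    Returns one (set, lower, upper) triple per set."""
    return [(S, ceil((len(S) - lam) / 2), floor((len(S) + lam) / 2))
            for S in sets]
```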
Proof.
Consider an ILP of the form
Ax = b, x ∈ {0, 1}^n    (4)
for A ∈ {0, 1}^{m×n} and b ∈ Z^m. We will construct an instance of Discrepancy Minimization which has discrepancy zero if and only if the ILP has a feasible solution. Towards this, we first reformulate the ILP above as
Ay = b′, y ∈ {−1, 1}^n,    (5)
where b′ = 2b − A·1. Note that x is feasible for (4) if and only if y = 2x − 1 is feasible for (5). Also, if b′ = 0, then (5) is already equivalent to an instance of Discrepancy Minimization that tests for discrepancy zero. To handle the general case, we transform it into an equivalent system with right-hand side zero. We first construct a gadget of elements that are forced to have the same color.
Claim 14.
For any k ∈ N we can construct a pair of 0/1 matrices C and D, each with k columns, such that there are exactly two solutions to
Cz = Dz, z ∈ {−1, 1}^k,
namely z = (1, …, 1) and z = (−1, …, −1).
We will defer the proof to the end of the section. Using this gadget we now replace each coefficient of the right-hand side in the previous system by variables from the gadget. Note that (5) is infeasible if |b′_i| > n for some i. Thus assume without loss of generality that ‖b′‖_∞ ≤ n. Let P and N be defined as follows. The i-th row of P has b′_i many ones at arbitrary positions if b′_i > 0 and is all zero otherwise; contrary, the i-th row of N has −b′_i many ones at arbitrary positions if b′_i < 0 and is all zero otherwise.
Now consider the system
Ay − (P − N)z = 0, Cz = Dz, y ∈ {−1, 1}^n, z ∈ {−1, 1}^k.    (6)
We claim that (6) has a solution if and only if there is a solution to (5). Let (y, z) be a solution to the former. Notice that the negation of a solution is also feasible. Due to Claim 14 we may assume without loss of generality that z = (1, …, 1), negating the solution if necessary. It follows that
Thus, y is feasible for (5), which concludes the first direction. For the other direction, assume that there is a solution y to (5). We set z = (1, …, 1), which by Claim 14 satisfies the gadget constraints. As before, the remaining constraints are satisfied as well. Thus, (y, z) is a solution to (6). This establishes the equivalence of the initial ILP instance to (6), which corresponds to an instance of Discrepancy Minimization where we test for discrepancy zero.
Proof of Claim 14.
The existence of such matrices can be proven by induction: for k = 1, we may simply take empty matrices. Now suppose that we already have a pair of matrices as above for k columns. Then we set
It can easily be checked that choosing either z = (1, …, 1) or z = (−1, …, −1) satisfies
(7)
Now take any z that satisfies (7). By the induction hypothesis applied to the first k coordinates, these are either all 1 or all −1. Assume for now the first case holds. Then because of the second-to-last rows of the construction we have
Hence, . Similarly, the last row of implies that . Analogously, if then and .
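The behaviour demanded of such an equal-color gadget can be verified by brute force on small instances. The chain construction below is a simple variant of our own (pairs of two-element sets through auxiliary elements), not the recursive construction from the proof, but it enforces the same property: discrepancy zero forces all k designated elements to share one color.

```python
from itertools import product

def equal_color_gadget(k):
    """Chain gadget: auxiliary elements u_1..u_{k-1}; for each i the
    sets {e_{i-1}, u_i} and {e_i, u_i}. Zero discrepancy on both sets
    forces color(e_{i-1}) = color(e_i) = -color(u_i)."""
    sets = []
    for i in range(1, k):
        u = k + i - 1                 # index of auxiliary element u_i
        sets.append([i - 1, u])
        sets.append([i, u])
    return sets, 2 * k - 1            # set system, total element count

def zero_discrepancy_colorings(sets, n):
    # All +/-1 colorings under which every set sums to zero.
    return [x for x in product((-1, 1), repeat=n)
            if all(sum(x[e] for e in S) == 0 for S in sets)]
```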
2.3 Set Multi-Cover
Proof.
An instance of the decision version of Set Multi-Cover with a bound k on the cardinality can be formulated as
∑_{i : u ∈ S_i} x_i ≥ b for all u ∈ U, ∑_{i} x_i ≤ k, x ∈ {0, 1}^n,
i.e., via the incidence matrix of the given set system. This can easily be translated to the form of (1) by introducing slack variables. Notice that this ILP has |U| + 1 = m + 1 constraints. Thus, a faster algorithm for ILP would imply a faster algorithm for Set Multi-Cover.
In the remainder, we show that the converse is also true.
Proof.
First, we reduce to a “non-uniform” version of Set Multi-Cover in which each element u has its own demand b_u. Let A ∈ {0, 1}^{m×n} and b ∈ Z^m and consider the solutions to Ax = b, x ∈ {0, 1}^n.
First, we add additional binary variables and the requirement that exactly a prescribed number σ of variables are equal to one.
This system is equivalent to the previous one, by setting the appropriate number of the new variables to one. Next, we transform the equality constraints into inequalities: Ax ≥ b and (J − A)x ≥ σ − b.
Here, J denotes the all-ones matrix. Note that the second constraint is equivalent to Ax ≤ b, since we fixed the number of ones in the solution. This ILP is feasible if and only if the optimum of the following covering ILP is exactly σ.
This ILP corresponds to an instance of non-uniform Set Multi-Cover with 2m elements.
To reduce a non-uniform instance with demands b_u to a uniform instance of Set Multi-Cover we proceed as follows: let b = max_u b_u; add one new element and b many new sets to the instance. The coverage requirement of every element in the new instance is b. The new element is contained in each of the new sets and in none of the old ones; thus, each new set has to be taken. Furthermore, we add each old element u to b − b_u many arbitrary new sets.
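A sketch of one way to implement this uniformization step, under the assumption that the uniform demand is the maximum individual demand; the sentinel element "*" and the function name are our own:

```python
def uniformize_multicover(sets, demands):
    """Non-uniform -> uniform Set Multi-Cover.

    With b = max demand: a fresh element '*' lies in b new sets, so all
    of them are forced into any solution; old element e is put into
    b - demands[e] of the new sets, so under uniform demand b it still
    needs exactly demands[e] old sets."""
    b = max(demands.values())
    new_sets = [{"*"} for _ in range(b)]
    for e, d in demands.items():
        for s in new_sets[: b - d]:
            s.add(e)
    return sets + new_sets, b          # new set system, uniform demand
```

The optimum of the uniform instance is then the non-uniform optimum plus b (the forced new sets).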
2.4 Set Multi-Packing
Proof.
Notice the following duality between Set Multi-Cover and Set Multi-Packing. Let U, S_1, …, S_n, and b be an instance of Set Multi-Cover. Now consider the instance of Set Multi-Packing with the same universe U and set system, and with bounds b′_u = |{i : u ∈ S_i}| − b for u ∈ U. This correspondence is symmetric, i.e., it can be applied in both directions.
For one pair of such instances, a solution I for Set Multi-Cover is feasible if and only if its complement {1, …, n} ∖ I is feasible for Set Multi-Packing. Thus, if the optimum of Set Multi-Cover is k, then the optimum of Set Multi-Packing is n − k.
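This complementation argument can be checked numerically; the capacity formula deg(e) − b below is our reading of the correspondence (deg(e) is the number of sets containing e):

```python
def packing_bounds(sets, b):
    """Per-element capacities deg(e) - b for the dual packing instance."""
    elements = set().union(*sets)
    return {e: sum(e in S for S in sets) - b for e in elements}

def check_duality(sets, b, cover):
    """A sub-collection covers every element >= b times exactly when
    its complement stays within the capacities above; this function
    returns True whenever the two feasibility checks agree."""
    caps = packing_bounds(sets, b)
    rest = [i for i in range(len(sets)) if i not in cover]
    covered = all(sum(e in sets[i] for i in cover) >= b for e in caps)
    packed = all(sum(e in sets[i] for i in rest) <= caps[e] for e in caps)
    return covered == packed
```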
2.5 Integer Linear Programming
In this section, we prove Theorem 4, i.e., we show how to reduce an ILP with large coefficients to a (larger) ILP with coefficients in {−1, 0, 1}.
Furthermore, we show how to reduce an ILP with arbitrary upper and lower bounds into one with at most n · (mΔ)^{O(m)} binary variables. Note that this implies that Statements 1 and 2 are equivalent and concludes the proof of Theorem 2.
2.5.1 From large coefficients to zero-one
We will transform an ILP of the form
Ax = b, ℓ ≤ x ≤ u, x ∈ Z^n,
where A has entries of absolute value at most Δ, into an equivalent one with entries in {−1, 0, 1}. Let K = ⌊log₂ Δ⌋. For the i-th row of A we introduce rows in the new matrix; intuitively, there are rows that each stand for a value of 2^k in row i, for k = 0, …, K, and symmetric rows that each stand for a value of −2^k in row i. The right-hand side of the new ILP is b_i for the row standing for 2^0 in row i and zero for all other rows affiliated with i.
For each column of A we derive a column of the new matrix as follows: for row i we consider the binary encoding of |A_{ij}| and have the new column use this binary encoding in the rows standing for positive powers if A_{ij} > 0 and in the rows standing for negative powers if A_{ij} < 0. All other entries of this column are zero.
We now add auxiliary variables to shift the values from one power to another. For each row and each we add:
-
a variable with a one in row and and
-
a variable with a one in row and .
Furthermore, for each row and each we add:
-
a variable with a one in row and ,
-
a variable with a one in row and , and
-
a variable with a one in row and .
Note that each auxiliary variable does not alter the total value for row i, i.e., the sum over all k of 2^k times the rows standing for 2^k minus 2^k times the rows standing for −2^k.
Thus, any solution to the new ILP must correspond to a solution of the original ILP. The converse is also true: the auxiliary variables can be adjusted to preserve the original value of the ILP. This is because any value in one of the rows affiliated with row i can always be shifted to the row standing for 2^0 using the auxiliary variables, while still satisfying the constraints. Therefore, we can enforce the value of the unchanged variables.
2.5.2 From bounded to binary variables
Consider an ILP of the form
Ax = b, ℓ ≤ x ≤ u, x ∈ Z^n,
where ℓ, u ∈ Z^n. We will first transform this into an equivalent ILP of the form
Ax = b′, 0 ≤ x ≤ u′, x ∈ Z^n,    (8)
where u′_j ≤ (mΔ)^{O(m)} for all j. Assume without loss of generality that each column of A is different. Otherwise, we can merge identical columns together by adding the respective upper and lower bounds. This implies that n ≤ (2Δ + 1)^m. Next, we compute a vertex solution z* of the continuous relaxation {x ∈ R^n : Ax = b, ℓ ≤ x ≤ u}. The proximity bound by Eisenbrand and Weismantel [9] shows that there is an integer solution x* with ‖x* − z*‖_1 ≤ m(2mΔ + 1)^m if any integer solution exists. Thus, tightening the bounds of each variable to the interval of this radius around z*_j, we can replace ℓ and u without affecting feasibility. By shifting the solution, we can reduce the lower bound to zero and we arrive at the ILP (8), which is equivalent to the original one.
Note that u′_j ≤ 2m(2mΔ + 1)^m + 1 for all j. Thus, replacing each variable by this many binary variables, we arrive at an ILP with at most n · (mΔ)^{O(m)} binary variables.
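A small sketch (our own illustration) of this final splitting step: a bounded variable 0 ≤ x_j ≤ u_j with column a is replaced by u_j identical binary copies of the same column, which leaves the matrix entries, and hence Δ, unchanged.

```python
def split_into_binary(columns, u):
    """Replace each bounded variable j (0 <= x_j <= u[j]) by u[j]
    identical binary variables with the same column. A value
    v in {0, ..., u[j]} is realized by setting v of the copies to one."""
    new_columns, origin = [], []
    for j, col in enumerate(columns):
        for _ in range(u[j]):
            new_columns.append(col)
            origin.append(j)       # which original variable this copy belongs to
    return new_columns, origin

def recover(origin, y, n):
    # Map a binary solution y of the split ILP back to the bounded variables.
    x = [0] * n
    for j, bit in zip(origin, y):
        x[j] += bit
    return x
```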
2.6 -Fold Integer Linear Programming
In this section, we prove Theorem 5. Consider the ILP
Ax = b, x ∈ {0, 1}^n    (9)
with A ∈ Z^{m×n}. We will show that this can be formulated as an equivalent n-Fold Integer Linear Program with parameter s = O(m) and constant-size entries. The reduction follows a similar idea as one in [10], which derives a lower bound based on Subset Sum. Note that if (9) had arbitrary variables (not necessarily binary), then we could use the reduction of the previous section to transform it into a binary one. For k ∈ N, define
This matrix has the property that the system it defines has exactly two solutions, namely the all-zero vector and one fixed non-zero vector. Our n-Fold Integer Linear Program has one block for each column of A. The diagonal matrix of each block is defined as above with right-hand side zero. The top matrix of block i is derived from A as follows: consider a coefficient A_{ji}. We rewrite it as a sum of digits of bounded absolute value times powers of a fixed base; such a decomposition exists since the base is at least two. Then we set the j-th row of the block accordingly. Let x_i be the variables corresponding to block i. By choice of the diagonal matrix there are exactly two settings of x_i that satisfy its constraints, and under these the contribution of block i to the top constraints is either zero or the i-th column of A. Hence, the n-Fold Integer Linear Program is equivalent to (9).
3 Algorithm for few distinct variables
In this section, we prove Theorem 6.
We assume without loss of generality that in (1) we have ℓ = 0. Note that otherwise, one may obtain an equivalent ILP with x′ = x − ℓ, b′ = b − Aℓ, and u′ = u − ℓ. We first decompose each u_j into a sum of powers of two with the properties described in the following lemma. Such a construction is well-known in the literature, see e.g. [18, 4, 16].
Lemma 18 (Cover).
For every integer u ≥ 1, there exist k = O(log u) integers d_1, …, d_k, each a power of two, such that the subset sums of d_1, …, d_k are exactly the values {0, 1, …, u}.
We call such a set a cover of u. This set can be found in time polynomial in log u.
Proof.
For , let denote the th bit of the binary representation of . Let be the bottom bits, and let represent the top bits of . Now, consider the sets
We will set . Showing that these numbers are a cover of is equivalent to
First, observe that . Moreover, there is no gap of length at least in the set , that is, for every nonzero , there exists a smaller such that . This can be constructed by flipping an arbitrary bit of to . Therefore, consists of consecutive numbers. The smallest number in is clearly , and the largest is . We set . Using the lemma above, for each , we decompose with its cover. We can now naturally rewrite (1) as
Each produces a partial solution for some unknown right-hand side . We write .
Our approach is now to compute, for and for every potential value of , a solution . To do this, we first reduce the search space of relevant vectors .
Lemma 19.
Consider and with . Suppose that satisfies and with for each . Then for , we have
and furthermore
Proof.
The first property follows from the observation that is a vector with each component being a multiple of . Hence, .
For the second property, observe that for each . Therefore, . We say that a vector is relevant for if and . Clearly, the following holds:
Claim 20.
For every , the number of relevant vectors for is .
We will now iteratively compute solutions for all vectors relevant for from the solutions for all relevant vectors for with . Note that this will immediately establish Theorem 6.
Lemma 21.
Let be a set of optimal solutions to (1) for all that are relevant for . Then, in time , we can compute a set of optimal solutions for all relevant for .
Proof.
Let be the set of all vectors with and .
We define an edge-weighted directed acyclic graph with vertices , where are copies of , and is a distinguished source vertex.
If , there is an edge from to every vertex such that the vector corresponds to a relevant vector for for which (1) has a solution . This edge indicates feasibility, and the weight of the edge from to is the value for the optimal solution . In the base case where , there is exactly one edge of weight to the vertex in that corresponds to the all-zero vector.
For each vertex in for , there is an edge to the corresponding vertex in with weight zero. Further, if , for every vertex corresponding to in , there is an edge to the vertex in that corresponds to (if it exists). The weight of this edge is .
Similarly, if , then for every vertex in that corresponds to a relevant , there is an edge to a vertex in that corresponds to (if it exists), with a cost of .
Finally, we compute the longest path from to each vertex in , and we store these as the values of the solutions to all the relevant right-hand sides for .
For the running time, observe that by Claim 20 the number of relevant vectors per layer is bounded as stated there. Hence, the graph has correspondingly many vertices and edges, and the longest path problem can be solved in time linear in the size of the graph.
For correctness, consider a path in the graph from vertex to vertex (corresponding to vectors and respectively). The edges of this path define a vector such that . Moreover, by construction, it holds that . Finally, the weight of this path corresponds to the value .
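The longest-path computation on the layered DAG amounts to a single scan in topological order. A generic sketch (our own, with the vertex numbering assumed topological, as holds for the layered construction):

```python
def longest_path_dag(n, edges, source):
    """Longest-path distances from `source` in a DAG whose vertices
    0..n-1 are topologically ordered; edges are (u, v, weight) with
    u < v. Runs in time linear in the number of vertices and edges."""
    NEG = float("-inf")
    dist = [NEG] * n
    dist[source] = 0
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
    for u in range(n):                 # topological order = index order
        if dist[u] == NEG:
            continue                   # unreachable vertex
        for v, w in adj[u]:
            if dist[u] + w > dist[v]:
                dist[v] = dist[u] + w
    return dist
```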
References
- [1] Amir Abboud, Karl Bringmann, Danny Hermelin, and Dvir Shabtay. SETH-based lower bounds for subset sum and bicriteria path. ACM Transactions on Algorithms (TALG), 18(1):1–22, 2022. doi:10.1145/3450524.
- [2] Karl Bringmann. Fine-grained complexity theory. In Proceedings of STACS, page 1, 2019.
- [3] Moses Charikar, Alantha Newman, and Aleksandar Nikolov. Tight Hardness Results for Minimizing Discrepancy. In Dana Randall, editor, Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011, pages 1607–1614. SIAM, 2011. doi:10.1137/1.9781611973082.124.
- [4] Lin Chen, Jiayi Lian, Yuchen Mao, and Guochuan Zhang. Faster algorithms for bounded knapsack and bounded subset sum via fine-grained proximity results. In Proceedings of SODA, pages 4828–4848, 2024. doi:10.1137/1.9781611977912.171.
- [5] Jana Cslovjecsek, Friedrich Eisenbrand, Christoph Hunkenschröder, Lars Rohwedder, and Robert Weismantel. Block-structured integer and linear programming in strongly polynomial and near linear time. In Proceedings of SODA, pages 1666–1681, 2021. doi:10.1137/1.9781611976465.101.
- [6] Marek Cygan, Holger Dell, Daniel Lokshtanov, Dániel Marx, Jesper Nederlof, Yoshio Okamoto, Ramamohan Paturi, Saket Saurabh, and Magnus Wahlström. On problems as hard as CNF-SAT. ACM Transactions on Algorithms (TALG), 12(3):1–24, 2016. doi:10.1145/2925416.
- [7] Daniel Dadush, Arthur Léonard, Lars Rohwedder, and José Verschae. Optimizing low dimensional functions over the integers. In Proceedings of IPCO, pages 115–126, 2023. doi:10.1007/978-3-031-32726-1_9.
- [8] Friedrich Eisenbrand, Lars Rohwedder, and Karol Węgrzycki. Sensitivity, Proximity and FPT Algorithms for Exact Matroid Problems. In 2024 IEEE 65th Annual Symposium on Foundations of Computer Science (FOCS), pages 1610–1620, 2024. doi:10.1109/FOCS61266.2024.00100.
- [9] Friedrich Eisenbrand and Robert Weismantel. Proximity results and faster algorithms for integer programming using the Steinitz lemma. ACM Transactions on Algorithms (TALG), 16(1):1–14, 2019. doi:10.1145/3340322.
- [10] Christoph Hunkenschröder, Kim-Manuel Klein, Martin Koutecký, Alexandra Lassota, and Asaf Levin. Tight lower bounds for block-structured integer programs. In Proceedings of IPCO, pages 224–237, 2024. doi:10.1007/978-3-031-59835-7_17.
- [11] Klaus Jansen and Lars Rohwedder. On integer programming, discrepancy, and convolution. Mathematics of Operations Research, 48(3):1481–1495, 2023. doi:10.1287/MOOR.2022.1308.
- [12] Ravi Kannan. Minkowski’s convex body theorem and integer programming. Mathematics of operations research, 12(3):415–440, 1987. doi:10.1287/MOOR.12.3.415.
- [13] Dušan Knop, Martin Koutecký, and Matthias Mnich. Combinatorial n-fold integer programming and applications. Mathematical Programming, 184(1):1–34, 2020. doi:10.1007/S10107-019-01402-2.
- [14] Dušan Knop, Michał Pilipczuk, and Marcin Wrochna. Tight complexity lower bounds for integer linear programming with few constraints. ACM Transactions on Computation Theory (TOCT), 12(3):1–19, 2020. doi:10.1145/3397484.
- [15] Hendrik W. Lenstra Jr. Integer programming with a fixed number of variables. Mathematics of operations research, 8(4):538–548, 1983. doi:10.1287/MOOR.8.4.538.
- [16] Silvano Martello and Paolo Toth. Knapsack problems: algorithms and computer implementations. John Wiley & Sons, Inc., 1990.
- [17] Christos H. Papadimitriou. On the complexity of integer programming. Journal of the ACM (JACM), 28(4):765–768, 1981. doi:10.1145/322276.322287.
- [18] Adam Polak, Lars Rohwedder, and Karol Węgrzycki. Knapsack and subset sum with small items. In Proceedings of ICALP, pages 1–19, 2021. doi:10.4230/LIPICS.ICALP.2021.106.
- [19] Victor Reis and Thomas Rothvoss. The subspace flatness conjecture and faster integer programming. In Proceedings of FOCS, pages 974–988, 2023. doi:10.1109/FOCS57990.2023.00060.
- [20] Virginia Vassilevska Williams. Hardness of easy problems: Basing hardness on popular conjectures such as the strong exponential time hypothesis (invited talk). In Proceedings of IPEC, 2015. doi:10.4230/LIPIcs.IPEC.2015.17.