The Support of Bin Packing Is Exponential
Abstract
Consider the classical Bin Packing problem with different item sizes and amounts of items. The support of a Bin Packing solution is the number of differently filled bins. In this work, we show an exponential lower bound on the support of this problem. Our lower bound matches the upper bound given by Eisenbrand and Shmonin [Oper. Res. Lett. '06] up to a constant factor.
This result has direct implications for the time complexity of several Bin Packing algorithms, such as those of Goemans and Rothvoss [SODA '14], Jansen and Klein [SODA '17], and Jansen and Solis-Oba [IPCO '10].
To achieve our main result, we develop a technique to aggregate equality-constrained ILPs with many constraints into an equivalent ILP with a single constraint. Our technique contrasts with existing aggregation techniques, as we manage to integrate upper bounds on variables into the resulting constraint. We believe this technique can be useful for solving general ILPs or the multi-dimensional knapsack problem.
Keywords and phrases: Bin Packing, Integer Programming, Support
2012 ACM Subject Classification: Theory of computation → Integer programming; Theory of computation → Packing and covering problems
Funding: Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project number 528381760
Editors: Anne Benoit, Haim Kaplan, Sebastian Wild, and Grzegorz Herman
Series and Publisher: Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
In their seminal work [8], Eisenbrand and Shmonin inspect integer conic combinations of a finite set of integer vectors. They provide upper bounds on the size of the smallest subset such that is an integer conic combination of elements in . This can be interpreted in the context of integer programming. Integer Programs (IPs) are composed of a matrix , an (optional) cost vector , a target vector and a solution vector . The goal is to compute the solution vector satisfying while minimizing the value . In this context, the set of integer vectors corresponds to the columns of , and the size of the smallest subset is the number of non-zero entries (the support) in the solution vector. More precisely, they show that for any polytope and any integral vector of multiplicities , there exists a such that and . They achieve this through a sophisticated exchange argument, transforming a solution with a large support into one with a bounded support. They show a second bound of , where is the largest number in the matrix , in a similar fashion. These celebrated results have found use in a variety of applications, including classical problems of computer science like scheduling on identical and uniform machines (using the second bound) [15, 19] and the bin packing problem (using the first bound) [13, 18, 20].
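For reference, a generic integer program of the kind discussed here can be written as follows (a standard formulation; the symbols $A$, $b$, $c$, $x$ are our own illustrative names):
\[
\min\ c^\top x \quad \text{s.t.}\quad A x = b, \qquad x \in \mathbb{Z}_{\ge 0}^{n},
\]
and the support of a solution $x$ is the set $\{\, j : x_j \neq 0 \,\}$ of its non-zero coordinates.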
Improvements on the first bound would have direct implications for the time complexity of several algorithms. Consider the high-multiplicity Bin Packing (BP) problem with bin size : We are given item sizes and multiplicities . The task is to find the minimum number such that all items can be packed into bins of size . See Figure 1 for an illustration. For this problem, Jansen and Solis-Oba [20] give an additive approximation algorithm with guarantee OPT+1. The time complexity of their algorithm is , where represents a polynomial in the input. In their paper, they ask whether the support bound by Eisenbrand and Shmonin can be improved to be polynomial in . Consequently, this would reduce the dependency on in the algorithm to be singly exponential, i.e., result in a running time of .
Following this result, Goemans and Rothvoss [13] provide an algorithm running in time . Here, denotes the bitsize of encoding the size vector , the multiplicity vector and the bin capacity in binary. The time complexity of this algorithm would also be improved by a smaller support bound. For example, a polynomial support in directly improves their running time to . They achieve this result by solving a more general problem, the Cone and Polytope Intersection (CAPI) problem. Here, we are given two polytopes , where is bounded. The task is to decide whether there is a point in that can be expressed as a non-negative integer combination of integer points in . Goemans and Rothvoss show that this problem captures the essence of high-multiplicity BP. They reduce BP to CAPI by setting the integer points in to every possible configuration, i.e., every combination of items that fits into a single bin. They set to be the point that corresponds to all items being taken. The final entry in and is used to count the number of used bins.
Jansen and Klein extend the work of Goemans and Rothvoss in [18]. They give an algorithm for bin packing whose running time depends on the number of vertices of the underlying knapsack polytope of a single bin. Their algorithm has a running time of . Thus, it improves upon Goemans and Rothvoss' result when this number of vertices is small. However, it does not improve upon the result by Goemans and Rothvoss in general, due to the bound on the number of vertices given by Hayes and Larman [14].
All three of these results could be improved with a polynomial (in ) support bound for BP. However, our result shows that there is no polynomial support bound in for bin packing, and thus answers the question posed by Jansen and Solis-Oba negatively. Goemans and Rothvoss made first strides in this direction in [13], where they show that the bound given by Eisenbrand and Shmonin is tight for polytopes, i.e., there is an exponential support when reaching a target with only column vectors or linear combinations of them. They construct a set of column vectors with dimension and . They define the polytope , where is the convex hull of . They then define and show that one can only reach by taking every element of exactly once. This shows a lower bound on the support of . However, their bound does not admit a knapsack constraint and can therefore not be applied to the bin packing problem directly. This is because the columns in the bin packing ILP originate from a knapsack constraint that determines the feasible packings of a single bin.
With so many different problems gaining important information from support bounds, it should come as no surprise that there are other bounds on the support of integer programs with different parameterizations. Berndt, Brinkop, Jansen, Mnich and Stamm [3] show that the support is also bounded by . They complement this result by showing a lower bound on the size of the support. Aliev, De Loera, Oertel and O'Neill [2] give two support bounds for linear diophantine equations of , where is the column rank of the matrix and is the determinant of the span of the -dimensional sublattice of formed by all integer points in the linear subspace spanned by the columns of , i.e., . Their second bound is . Aliev, De Loera, Eisenbrand, Oertel and Weismantel [1] show the bound of for general ILPs.
The main tool we use in the following is aggregation. Aggregation is a powerful tool that transforms an ILP with multiple constraints into one with a single constraint. Research into this technique began in the 1970s, resulting in several different aggregation methods, see, e.g., [4, 5, 9, 11, 12, 21, 22, 23, 24]. Elmaghraby and Wig [9] iteratively combine two equations into a single one; applying this technique repeatedly yields a single constraint. Another approach is given by Padberg [22]: instead of combining two equations iteratively, he aggregates all equations at the same time.
Our approach is based on the techniques by Padberg [22]. Our aggregation technique also aggregates all constraints in a single step. However, we manage to include the upper bounds of the variables inside the resulting knapsack constraint, whereas Padberg's method leaves them outside the constraint. We believe that this type of aggregation can find application in different contexts, such as multi-dimensional knapsack problems or general ILPs with multiple constraints. Here, the aggregation may yield faster algorithms, especially when one manages to reduce the size of the coefficients. This may be done by using techniques by Frank and Tardos [10], Eisenbrand, Hunkenschröder, Klein, Koutecký, Levin and Onn [7], or Jansen, Kahler and Zwanger [16].
On the negative side, it is known that aggregation is not feasible for a set of inequalities, as was proven by Chvátal and Hammer [6]. They provide a simple two-line ILP that cannot be aggregated.
1.1 Our Contribution
In this work, we continue research into the support of Bin Packing solutions. We show that a set of points of Cone and Polytope Intersection can be embedded inside a knapsack polytope of dimension . Using this polytope, we show that the support of the high-multiplicity BP problem is exponential, requiring at least many different configurations in an optimum solution. This yields our main result, Theorem 1:
Theorem 1.
There exists a high-multiplicity Bin Packing instance of dimension such that the solution vector (a vector of configurations) contains at least non-zero entries, i.e., .
With this result, we answer an open question posed by Jansen and Solis-Oba in [20]. They provide an algorithm for the cutting stock problem. The running time of this algorithm could be improved to be singly exponential if the support of bin packing were not exponential. However, our result answers this question negatively. In a similar vein, our result shows that the algorithm by Goemans and Rothvoss [13] cannot be directly improved. The same holds for the algorithm by Jansen and Klein [17].
A major tool to achieve this result is the aggregation of multiple constraints without upper bounds on the variables into a single constraint. To the best of our knowledge, this type of constraint-aggregation has not been done before. Contrasting prior results, our method of aggregation manages to include upper bounds inside the constraints instead of copying them from the original problem. This technique is given in Section 3.
Overview
Here, we give a brief overview of the structure of the document and the structure of our main proof. We begin, in Section 3, by developing a technique with which we can aggregate an ILP containing only equalities into one containing only a single equality. Next, in Section 4.1, we adapt an idea originally given by Goemans and Rothvoss in [13] to construct a polytope of dimension and generate a target vector . We provide an alternative proof to [13] that shows we require points in to reach . Next, in Section 4.2, we construct an equality ILP with constraints. Each constraint later defines an item size of the BP problem. We show that there are feasible solutions to this ILP, with each solution having its first entries equal to a vector in , i.e., there is one solution to the ILP for each vector in . These solutions are the configurations of the BP problem. Then, we aggregate this ILP using the technique in Section 3. This yields an equality ILP with one row. We transform this into a knapsack constraint with right hand side . Denote the set of all configurations with total size at most as . This constraint is then used to define the capacity of a bin in the BP problem. The total number of items of each type is given as the sum of all configurations in . To place all these items in the optimal number of bins, we know, by Section 4.1, that we need to take every configuration in exactly once. Taking any configuration in would require at least one additional bin. This proves Theorem 1.
2 Preliminaries
For a positive integer , we define and . A vector that contains a certain value in all entries is stylized, e.g., the zero vector is written as . We omit the dimension if it is apparent from the context. The support of a -dimensional vector is the set of its indices with a non-zero entry. It is denoted by . We define the size of the support as .
Definition 2 (Bin Packing).
Given are a set of different item types with sizes and amounts , and a number of bins. The goal is to decide whether all items can be assigned to one of the bins while every bin is filled to at most .
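In symbols, and under assumed names ($s$ for the size vector, $a$ for the multiplicities, $c$ for the bin capacity, $k$ for the number of bins, and $y_{i\ell}$ counting how many items of type $i$ are placed in bin $\ell$), the decision version reads:
\[
\exists\, y \in \mathbb{Z}_{\ge 0}^{d \times k}:\quad \sum_{\ell=1}^{k} y_{i\ell} = a_i \ \ \forall i \in [d], \qquad \sum_{i=1}^{d} s_i\, y_{i\ell} \le c \ \ \forall \ell \in [k].
\]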
A powerful tool to model countless optimization problems is that of integer programming.
Definition 3 (Integer Programming).
An integer linear program (ILP) is composed of a matrix , an (optional) cost vector , a target vector and a solution vector . The goal is to compute a solution vector satisfying while minimizing the value .
Such integer programs can be used to express high-multiplicity Bin Packing. These are often referred to as configuration ILPs. A configuration is a feasible assignment of items into a bin, i.e., one with total size at most . Denote the set of all feasible configurations as and set . Set and let each column of represent a configuration, where row corresponds to item type . Finally, set the right hand side , i.e., the vector representing the total number of items of each size in the instance.
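A sketch of the resulting configuration ILP, with assumed names ($A$ for the configuration matrix, $a$ for the multiplicity vector, $s$ for the sizes, $c$ for the bin capacity, and $x$ for the multiplicities of the configurations):
\[
\min\ \mathbf{1}^\top x \quad \text{s.t.}\quad A x = a, \qquad x \in \mathbb{Z}_{\ge 0}^{|\mathcal{C}|},
\]
where $\mathcal{C} = \{\, g \in \mathbb{Z}_{\ge 0}^{d} : s^\top g \le c \,\}$ is the set of configurations, the columns of $A$ are exactly the elements of $\mathcal{C}$, and $x_g$ counts how often configuration $g$ is used; the number of bins equals $\mathbf{1}^\top x$.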
A useful tool that we use is the classical knapsack problem.
Definition 4 (Unbounded Knapsack).
Given is a set of items, each defined by a profit and a weight , along with a knapsack of capacity . Find a solution vector such that and is maximal. We call the equality the Knapsack constraint of the corresponding problem.
Following this definition, we call the set of all solutions fulfilling and the Knapsack Polytope.
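For illustration, with assumed names ($p$ for the profits, $w$ for the weights, and $C$ for the capacity), the unbounded knapsack problem and the associated knapsack polytope can be written as:
\[
\max\ p^\top x \quad \text{s.t.}\quad w^\top x \le C, \qquad x \in \mathbb{Z}_{\ge 0}^{n},
\qquad\text{and}\qquad
P = \{\, x \in \mathbb{R}^{n}_{\ge 0} : w^\top x \le C \,\}.
\]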
3 Aggregation of an ILP
In this section, we show how to transform an ILP with multiple equality constraints into a single knapsack constraint. The novel aspect of our technique is that we also include the external upper bounds of the variables in the aggregation.
Consider an ILP of the form with and , and let be the largest absolute value in .
For each variable , we define the following constraint: , where is a slack variable. This simulates the upper bound . Define as the overall upper bound on the sum of all variables. For this, add the constraint:
(1)
with as a slack variable. Note that bounding the sum of variables by the sum of their upper bounds does not change the solution.
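A sketch of the augmented system, under assumed names ($x_1,\dots,x_n$ for the variables, $u_j$ for their upper bounds, and $s_j$, $s_0$ for the slack variables):
\[
x_j + s_j = u_j \quad (j = 1,\dots,n), \qquad \sum_{j=1}^{n} x_j + s_0 = N \quad \text{with } N = \sum_{j=1}^{n} u_j, \qquad x, s \ \text{non-negative integers}.
\]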
This results in a new ILP with an enlarged constraint matrix that additionally contains the upper-bound constraints and the slack variables.
Next, we use the bound to define a large integer , which we then use to aggregate the constraints.
(2)
By giving the upper-bound constraint (1) the largest weight , we ensure that it may never be broken and thus keeps the sum of the variables in its range. Multiplying both sides of the equations with generates the following ILP, where all variables have to be non-negative integers.
(3)
These constraints can also be formulated as , where .
An intuitive view of the aggregation technique is that the resulting single constraint's solution is a number in base , where is a large integer. There, the smallest digit represents the solution of the first constraint, the second smallest digit represents the solution of the second constraint, and so on.
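The following Python sketch illustrates this base view on a tiny, hand-picked equality system with non-negative coefficients and box-bounded variables; the instance, the variable names, and the choice of the base are our own illustrative assumptions, not the construction used later in the paper.

```python
# Minimal sketch of base-M aggregation (illustrative instance, not the paper's construction).
# We aggregate the rows of A x = b (x integer, 0 <= x_j <= upper_j, all coefficients
# non-negative) into one equation by weighting row i with M**i for a large base M.
import itertools

A = [[1, 2, 1, 0],          # illustrative equality system
     [0, 1, 0, 1]]
b = [3, 2]
upper = [3, 3, 3, 3]        # box bounds on the variables

row_max = [sum(a * u for a, u in zip(row, upper)) for row in A]
M = 1 + max(row_max + b)    # strictly larger than any attainable row value and any b_i

# Aggregated constraint: sum_j (sum_i M**i * A[i][j]) x_j = sum_i M**i * b_i.
agg = [sum(M ** i * A[i][j] for i in range(len(A))) for j in range(len(A[0]))]
rhs = sum(M ** i * b[i] for i in range(len(b)))

def feasible(check):
    """Enumerate the box and collect all integer points satisfying `check`."""
    return {x for x in itertools.product(*(range(u + 1) for u in upper)) if check(x)}

original = feasible(lambda x: all(sum(a * v for a, v in zip(row, x)) == bi
                                  for row, bi in zip(A, b)))
aggregated = feasible(lambda x: sum(c * v for c, v in zip(agg, x)) == rhs)

# Over the bounded box, the single aggregated equation has exactly the same solutions.
assert original == aggregated
```

Here the upper bounds are enforced by the enumeration box; in the aggregation of this section, they are instead folded into the system via the slack constraints introduced above.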
Next, we show that this ILP can be aggregated into a single equality knapsack constraint by proving that their solutions are identical. The proof is presented in the full version.
Lemma 5 (✂).
The vector is a feasible integer solution to (3) if and only if is an integer solution to the following equation:
(4)
4 Bounding the Support for Bin Packing
Assume we are given a knapsack polytope of dimension . Also assume that there exists a Bin Packing instance with just one unique integer solution to the configIP, and that the -dimensional integer points in are equal to the first dimensions of the configurations. If we now require different points in to represent a vector as a conic integer combination, then has at least non-zero entries, which means that we need at least different configurations in the Bin Packing solution. If now is exponential in , we have shown that there exists an exponential lower bound on the support of the Bin Packing problem, i.e., Theorem 1.
4.1 The support of closed polytopes
In this section, we give a knapsack polytope of dimension and show that there exists a vector such that we need at least points in to represent as a conic integer combination. We adapt a construction pioneered by Goemans and Rothvoss in [13] and set
(5)
with , and define . E.g., for , we have .
Assume that the elements in are sorted according to their length, i.e., and for all . Then the following lemma proves the goal of this section.
Lemma 6.
Set . The conic combination is unique (except for arbitrarily adding ). This implies that we need all points in exactly once to represent as a conic integer combination.
Proof.
Each integer conic combination that represents and includes can be transformed into an integer conic combination with fewer points by removing it. Therefore, we can assume that it is not part of the conic combination.
Now considering , the only integer points in are the elements of , i.e., . This holds due to the first coordinates. For all with , it holds that , and for each pair of vectors with , there is at least one coordinate in which they differ, i.e., with and , or and . Thus, there cannot be any integer points other than in .
Now consider an arbitrary integer conic combination . Since and we do not include , we cannot take a single vector more than times. Furthermore, we cannot take more than vectors in total, as each vector except has a non-zero entry in at least one of the dimensions, and each entry can only sum up to . This leaves us with the following:
i.e., at most vectors can be summed up to reach , and each vector is taken times.
We now show that for all , is the only solution to . First, observe that holds. Therefore, we have . Next, observe that holds for all because . Because of this, even after adding the maximum possible number of copies of the -th item, we have that . We can only compensate this difference by adding smaller items. Here, we can apply the argument iteratively, as we can only add copies of column to sum up to a value . Thus, holds. Therefore, we cannot sum up to without the last item, and follows directly. Taking and together, we have . Because of , we have , and we only need to consider . Then, this argument can be applied iteratively, yielding for all .
By Lemma 6, we have shown that we require different points in to represent a vector as a conic integer combination. In the following, we build the bridge to Bin Packing and prove that there exists a Bin Packing instance where the first dimensions of the solution vectors of its configIP equal the integer points in . In total, our Bin Packing instance has dimensions, which, together with Lemma 6, implies the exponential lower bound for Bin Packing and therefore Theorem 1.
4.2 Construction of the ILP
In this section, we now aim to construct an ILP with equality constraints, where the first coordinates in its solution vectors are unique and form the following set:
(6)
Note that this is exactly the set of points in from the prior section, without and where , and that the solution vectors form the configurations in the configIP.
Also note that the last coordinate is of the form for some and that the first coordinates are the binary encoding of . We first define a variable , which we use to construct a set of constraints that is only feasible if holds for some . Additionally, a certain set of variables in the constraints matches the points in Equation 6. In the second part of this section, we use the aggregation presented in Section 3 to transform these constraints into a single knapsack constraint. We provide a proof that the knapsack constraint is indeed equivalent to the proposed set of constraints (i.e., both allow the same set of solutions) and requires many variables. With this equivalence, we then have that the first dimensions of the configurations in the configIP form the set in Equation 6. In combination with Lemma 6, this then implies that there exists an exponential lower bound on the support of the Bin Packing problem.
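To illustrate the uniqueness phenomenon behind Lemma 6 on a toy scale, the following brute-force Python sketch assumes points of the form just described, i.e., the binary encoding of an exponent paired with the corresponding power of the base; the concrete parameters ($d = 2$, $q = 4$) and all names are illustrative only.

```python
# Brute-force check on a toy instance: the target, defined as the sum of all points,
# admits exactly one non-negative integer conic combination, namely every point once.
from itertools import product

d, q = 2, 4                                              # illustrative parameters

def point(x):
    bits = [(x >> j) & 1 for j in range(d - 1, -1, -1)]  # binary encoding of x (MSB first)
    return tuple(bits) + (q ** x,)                       # last coordinate is q**x

points = [point(x) for x in range(1, 2 ** d)]            # x = 1, ..., 2^d - 1
target = tuple(sum(col) for col in zip(*points))

# Per-point multiplicity bounds derived from the target coordinates.
bounds = [min(target[i] // p[i] for i in range(len(target)) if p[i] > 0) for p in points]

reps = [lam for lam in product(*(range(b + 1) for b in bounds))
        if tuple(sum(l * p[i] for l, p in zip(lam, points)) for i in range(len(target))) == target]

assert reps == [(1, 1, 1)]                               # unique: every point exactly once
```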
Next, assume that we are given a dimension and a base . We introduce a set of constraints with variables which has exactly integer solutions. For each , the variable of the solution vector is forced to take the value . Additionally, the variables match the binary encoding of .
For a brief overview, our constraints behave as follows. First, the value is some integer in the interval . Let us assume that for some . Later, we show that we only allow those integers that equal , where is a positive integer. Next, for each , we introduce the variable that helps us to correctly determine the bit of the binary encoding. We also introduce the variable that helps us to determine the remainder, which will then be used recursively. For a more detailed version of the procedure of converting a decimal number to binary, we refer to Algorithm 1. Note that the number we aim to convert is in the exponent of .
Input: Dimension ; Base ; Exponent
Output: Binary encoding of
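A Python sketch of the conversion performed by Algorithm 1, extracting bits from the most significant one downwards; in the constraints the number to be converted appears in the exponent of the base, whereas the sketch works on the exponent directly (function and variable names are our own):

```python
def binary_encoding(n: int, x: int) -> list[int]:
    """Return the n-bit binary encoding (most significant bit first) of 0 <= x < 2**n."""
    bits, remainder = [], x
    for i in range(n - 1, -1, -1):        # from the most significant bit downwards
        bit = 1 if remainder >= 2 ** i else 0
        bits.append(bit)
        remainder -= bit * 2 ** i         # remainder handed to the next iteration
    assert remainder == 0                 # mirrors the constraint forcing a zero final remainder
    return bits

# Example: binary_encoding(3, 5) == [1, 0, 1]
```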
The following set of constraints simulates this algorithm. Let be some integer larger than 3; later, represents the total number of constraints. Each set of constraints is defined for all and simulates the algorithm above. Additionally, bounds , i.e., ensures that its binary encoding has at most bits. guarantees that, in the end, there is no remainder left; this constraint is also needed to guarantee that the exponent is integer (Lemma 12). Also, we do not want to allow , respectively ; that is done by . Thus, in total, we have constraints.
Constraints 1.
For each , we have the following set of constraints:
Also, we add the additional constraints and .
All of the constraints are necessary to ensure that takes some value of the form , while is the binary encoding of . Moreover, the constraints forbid any other assignment of and . All other variables are forced to take a specific value based on . In the full version, we give an example showing that we cannot omit the variables , as that would allow other solutions with .
For easier understanding, we now present an example.
Example 7.
Let and set . Then, for , we get the following values for the variables.
| 1 | 2 | 3 | 4 | 5 | 6 | 7 | |
| 0 | 0 | 0 | 1 | ||||
| 0 | 1 | 0 | 0 | 1 | |||
| 1 | 0 | 1 | 0 | 1 | 0 | 1 | |
| 0 | 0 | 0 | 1 | 1 | 1 | 1 | |
| 0 | 1 | 1 | 0 | 0 | 1 | 1 | |
| 1 | 0 | 1 | 0 | 1 | 0 | 1 | |
| 1 | |||||||
| 1 | 1 | 1 | |||||
| 0 | 0 | 0 | 1 | ||||
| 0 | 1 | 0 | 0 | 1 | |||
| 1 | 0 | 1 | 0 | 1 | 0 | 1 |
Now, we state properties of the variables based on the value of . After that, we prove that the vector is indeed the binary encoding of .
and are lower bounds of the variables and . We omit these constraints later on by restricting all variables of the constructed ILP to be non-negative.
to ensure that the currently most significant bit is assigned correctly. As the values of are assigned from the most significant bit to the least significant bit, this ensures the correctness of the binary encoding. More specifically, we have the following property.
Observation 8.
Constraints to imply
Proof.
We show this by analyzing the cases separately:
Case 1: Assume states . With the assumption and the fact that is integer, we get . Now, implies . Since is also restricted to integer values, we get with . Now, we only need to show that the other constraint is also fulfilled for . For , we get , since , which completes this part of the proof.
Case 2: Assume states . With the assumption, we get . Since is restricted to integer values, we get . Now, we only need to show that the other constraints are also fulfilled for . This clearly holds for . For , we get , since and . This finalizes the proof. In particular, we have that the value of is unique for given . This is due to the fact that the feasible interval (given by and ) is of size . The second inequality implies that there exists at most one integer value in the interval. From the first inequality, in combination with the fact that is integer, we also have that there must exist an integer value in the interval.
Now, we are able to show that matches the binary encoding of . ensures that if , while provides that if . Taken together with and the fact that is integer, we observe the following.
Observation 9.
Constraints to imply
Proof.
We show this by again separately analyzing the cases:
Case 1: Assume . With Observation 8, we know . Now, implies . This also fulfills and .
Case 2: Assume . Observation 8 and now imply . With , we get . This also fulfills and , and finalizes the proof.
Note that this also upper bounds the value of . Since , implies .
And finally, the constraints to represent the remainder calculation and ensure that the constraints can be used recursively. More concretely, we have the following behavior.
Observation 10 (✂).
Constraints to imply
Also, for each , the remainder is bounded by
Proof Outline.
We prove this statement via case distinction and induction over . Due to space constraints, the proof can be found in the full version.
Now, we prove that if the Constraints 1 are feasible, then is of the form and is the binary encoding of . We also show that the constraints do not permit any other integer solutions.
Lemma 11.
Let and . If there exists an such that , with , then is the binary encoding of .
Proof.
Assume there exists an such that and . We prove that is the binary encoding of by induction over .
Base Case: By the assumption , we have that is correctly set.
Inductive Step: Let and assume that is the correct remainder in the -th iteration. Following the procedure described at the beginning of this section, we know that is required to be 1 if and 0 otherwise. This property holds due to Observations 8 and 9. By Observation 10, we know that the next remainder is correctly set to if and to otherwise.
Now, we know that is the binary encoding of if there exists with and . To conclude that the constraints fulfill the desired property, we need to show that they are infeasible for any other value of .
Lemma 12.
Let and . If for some holds, then the Constraints 1 are infeasible.
Proof.
Assume for some . Note that we can formulate any integer number in the form . The proof is split into different cases; for each case, we derive a contradiction from the constraints.
Case 1: First, we show that the set of constraints is infeasible when is negative. Let and . Set . By the assumption, we have . This contradicts the constraint that is required to be integer.
Case 2: We now show that the set of constraints is infeasible when is not integer. Let and . Again, set . We now show by induction that the exponent of is not integer for all . This then leads to a contradiction, which appears either at the additional constraint or already at constraint for some .
Base Case: By the definition of in this case, is not integer, and with the assumption we get , and therefore is not integer. Note that the assumption that the value is assigned to the variable implies that must be integer.
Inductive Step: Let and assume that is not integer and is integer. Now, two cases can occur: either or . In the first case, Observation 10 gives , which directly implies . Thus, is not integer and is integer. In the second case, with Observation 10 we get . Since is not integer and is integer, we get that is not integer. If is integer, we repeat the inductive step. However, if is not integer, there is no feasible integer assignment for , as is an equality constraint.
If is integer for all , we get that is not integer. This implies . This is a contradiction to .
Case 3: The only case left is , i.e., . Again, by the assumption, we get , which directly contradicts , which states .
Thus, we have shown that the Constraints 1 are feasible if and only if variable takes a value of the form for given and positive integer (Lemma 12). Additionally, matches the binary encoding of , i.e., it holds that (Lemma 11).
The attentive reader might have noticed that in Example 7 the values of and match for all values of and . In the full version, we show that we cannot simply replace with in .
4.3 Aggregation of the Constraints
Before we can aggregate the constraints into a single knapsack constraint, we need to transform the inequalities into equations such that we achieve an ILP of the form considered in Section 3. We can do that by introducing slack variables. Also, we omit the lower bounds and by restricting all variables to only take non-negative values. This results in the following new set of constraints:
Constraints 2.
Now, we add the upper bounds of the variables. With Observation 10, we have . In combination with Observation 8, we get . is bounded by 1 due to , and it holds that by and . The slack variables are bounded by the coefficients, the variables, and the right-hand side of their constraint. For instance, since , we have . With and the bound on , this implies .
Now, as done in Section 3, we define to be the sum of the upper bounds of all variables (including slacks) and introduce the constraint which restricts the sum of all variables to at most . Let be the sum of all variables; thus, our additional constraint is . Setting and multiplying the ILP with the vector results in an ILP of the form (3) which is equivalent to Constraints 2.
Lemma 5 now implies that vector is a feasible solution to Constraints 2 if and only if is a solution to
(7)
This equation can be transformed into an unbounded knapsack constraint by grouping coefficients such that each variable only occurs once. These coefficients then define the item size vector . For example, the size of the item is . To complete the transformation, set the lower bound of each variable to 0 and exchange the equality for , i.e., the sum of item sizes times multiplicities is at most the right hand side. Denote the resulting polytope as . All feasible configurations are then integer points in , i.e., .
4.4 Main proof
With this, we are ready to prove our main theorem.
Theorem 1. [Restated, see original statement.]
There exists a high-multiplicity Bin Packing instance of dimension such that the solution vector (a vector of configurations) contains at least non-zero entries, i.e., .
Proof.
Let . The Bin Packing instance has dimension , i.e., the number of different item sizes is . Denote the capacity of a single bin as and set the sizes according to (7), denoting them as the size vector . Set to be the right hand side of the knapsack constraint (7). Define the knapsack polytope . Let be the set of all solutions in , i.e., those with total size less than or equal to . Define to be the set of all solutions with size exactly . See Figure 3 for an illustration. Thus, is the set of all feasible configurations. By the definition of the knapsack constraint and Lemma 12, we know that , as (7) has exactly one solution for every possible combination of with . Define the total number of items as the sum of all configurations, i.e., . Thus, the total size of all items to be placed is .
Clearly, the optimal solution to this Bin Packing instance uses exactly bins. One possible solution is to take each configuration in exactly once. In the following, we show that this is the only solution using elements in and using exactly bins. Now, define the projection . This projection reduces every configuration to just the binary encoding and the value . It holds that . Recall that we set . We cannot consider configurations in , as they have size . As the total size of the right hand side is exactly , taking one of these configurations while still using bins would require another bin to be filled, which is infeasible due to (7). Thus, there are no other feasible configurations, as only those of the form (after applying ), exactly like in Lemma 6, i.e., those in , have size exactly . Thus, when applying the projection, we have met the conditions of Lemma 6 and know that the only optimal solution takes each of the vectors exactly once.
Therefore, after removing the projection, the only optimal solution to the Bin Packing instance uses every configuration in exactly once, and thus we have . As we have , we have .
References
- [1] Iskander Aliev, Jesús A. De Loera, Friedrich Eisenbrand, Timm Oertel, and Robert Weismantel. The support of integer optimal solutions. SIAM J. Optim., 28(3):2152–2157, 2018. doi:10.1137/17M1162792.
- [2] Iskander Aliev, Jesús A. De Loera, Timm Oertel, and Christopher O’Neill. Sparse solutions of linear diophantine equations. SIAM J. Appl. Algebra Geom., 1(1):239–253, 2017. doi:10.1137/16M1083876.
- [3] Sebastian Berndt, Hauke Brinkop, Klaus Jansen, Matthias Mnich, and Tobias Stamm. New support size bounds for integer programming, applied to makespan minimization on uniformly related machines. In ISAAC, volume 283 of LIPIcs, pages 13:1–13:18. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2023. doi:10.4230/LIPICS.ISAAC.2023.13.
- [4] Gordon H. Bradley. Transformation of integer programs to knapsack problems. Discrete Mathematics, 1(1):29–45, 1971. doi:10.1016/0012-365X(71)90005-7.
- [5] Gordon H. Bradley, Peter L. Hammer, and Laurence A. Wolsey. Coefficient reduction for inequalities in 0-1 variables. Math. Program., 7(1):263–282, 1974. doi:10.1007/BF01585527.
- [6] Václav Chvátal and Peter Hammer. Aggregation of inequalities in integer programming. Ann. Discrete Math, 1:145–162, 1977.
- [7] Friedrich Eisenbrand, Christoph Hunkenschröder, Kim-Manuel Klein, Martin Koutecký, Asaf Levin, and Shmuel Onn. Reducibility bounds of objective functions over the integers. Oper. Res. Lett., 51(6):595–598, 2023. doi:10.1016/J.ORL.2023.10.001.
- [8] Friedrich Eisenbrand and Gennady Shmonin. Carathéodory bounds for integer cones. Oper. Res. Lett., 34(5):564–568, 2006. doi:10.1016/J.ORL.2005.09.008.
- [9] S. E. Elmaghraby and M. K. Wig. On the treatment of stock cutting problems as diophantine programs. Operations Research Report, 61, 1970.
- [10] András Frank and Éva Tardos. An application of simultaneous diophantine approximation in combinatorial optimization. Comb., 7(1):49–65, 1987. doi:10.1007/BF02579200.
- [11] Fred W. Glover and Djangir A. Babayev. New results for aggregating integer-valued equations. Ann. Oper. Res., 58(3):227–242, 1995. doi:10.1007/BF02032133.
- [12] Fred W. Glover and Robert E. D. Woolsey. Aggregating diophantine equations. Z. Oper. Research, 16(1):1–10, 1972. doi:10.1007/BF01917186.
- [13] Michel X. Goemans and Thomas Rothvoss. Polynomiality for bin packing with a constant number of item types. J. ACM, 67(6):38:1–38:21, 2020. doi:10.1145/3421750.
- [14] A. C. Hayes and David G. Larman. The vertices of the knapsack polytope. Discret. Appl. Math., 6(2):135–138, 1983. doi:10.1016/0166-218X(83)90067-7.
- [15] Klaus Jansen. An EPTAS for scheduling jobs on uniform processors: Using an MILP relaxation with a constant number of integral variables. SIAM J. Discret. Math., 24(2):457–485, 2010. doi:10.1137/090749451.
- [16] Klaus Jansen, Kai Kahler, and Esther Zwanger. Exact and approximate high-multiplicity scheduling on identical machines. CoRR, abs/2404.17274, 2024. doi:10.48550/arXiv.2404.17274.
- [17] Klaus Jansen and Kim-Manuel Klein. A robust AFPTAS for online bin packing with polynomial migration. SIAM J. Discret. Math., 33(4):2062–2091, 2019. doi:10.1137/17M1122529.
- [18] Klaus Jansen and Kim-Manuel Klein. About the structure of the integer cone and its application to bin packing. Math. Oper. Res., 45(4):1498–1511, 2020. doi:10.1287/MOOR.2019.1040.
- [19] Klaus Jansen, Kim-Manuel Klein, and José Verschae. Closing the gap for makespan scheduling via sparsification techniques. Math. Oper. Res., 45(4):1371–1392, 2020. doi:10.1287/MOOR.2019.1036.
- [20] Klaus Jansen and Roberto Solis-Oba. A polynomial time OPT + 1 algorithm for the cutting stock problem with a constant number of object lengths. Math. Oper. Res., 36(4):743–753, 2011. doi:10.1287/MOOR.1110.0515.
- [21] Kenneth E. Kendall and Stanley Zionts. Solving integer programming problems by aggregating constraints. Operations Research, 25(2):346–351, 1977.
- [22] Manfred W. Padberg. Equivalent knapsack-type formulations of bounded integer linear programs: An alternative approach. Naval Research Logistics Quarterly, 19(4):699–708, 1972. doi:10.1002/nav.3800190410.
- [23] Pierre-Louis Poirion. Optimal constraints aggregation method for ILP. Discret. Appl. Math., 262:148–157, 2019. doi:10.1016/J.DAM.2019.02.014.
- [24] I.G. Rosenberg. Aggregation of equations in integer programming. Discrete Mathematics, 10(2):325–341, 1974. doi:10.1016/0012-365X(74)90126-5.