
Weakly Approximating Knapsack in Subquadratic Time

Lin Chen, Zhejiang University, Hangzhou, China; Jiayi Lian, Zhejiang University, Hangzhou, China; Yuchen Mao, Zhejiang University, Hangzhou, China; Guochuan Zhang, Zhejiang University, Hangzhou, China
Abstract

We consider the classic Knapsack problem. Let t and OPT be the capacity and the optimal value, respectively. If one seeks a solution with total profit at least OPT/(1+ε) and total weight at most t, then Knapsack can be solved in Õ(n + (1/ε)²) time [Chen, Lian, Mao, and Zhang ’24][Mao ’24]. This running time is the best possible (up to a logarithmic factor), assuming that (min,+)-convolution cannot be solved in truly subquadratic time [Künnemann, Paturi, and Schneider ’17][Cygan, Mucha, Węgrzycki, and Włodarczyk ’19]. The same upper and lower bounds hold if one seeks a solution with total profit at least OPT and total weight at most (1+ε)t. Therefore, it is natural to ask the following question.

If one seeks a solution with total profit at least OPT/(1+ε) and total weight at most (1+ε)t, can Knapsack be solved in Õ(n + (1/ε)^{2−δ}) time for some constant δ > 0?

We answer this open question affirmatively by proposing an Õ(n + (1/ε)^{7/4})-time algorithm.

Keywords and phrases:
Knapsack, FPTAS
Category:
Track A: Algorithms, Complexity and Games
Funding:
Yuchen Mao: National Natural Science Foundation of China [Project No. 62402436].
Guochuan Zhang: National Natural Science Foundation of China [Project No. 12271477].
Copyright and License:
© Lin Chen, Jiayi Lian, Yuchen Mao, and Guochuan Zhang; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Approximation algorithms analysis
Related Version:
Full Version: https://arxiv.org/abs/2504.15001
Editors:
Keren Censor-Hillel, Fabrizio Grandoni, Joël Ouaknine, and Gabriele Puppis

1 Introduction

In the Knapsack problem, one is given a knapsack of capacity t and a set of n items. Each item i has a weight wi and a profit pi. The goal is to select a subset of items such that their total weight does not exceed t, and their total profit is maximized.

Knapsack is one of Karp’s 21 NP-complete problems [26], which motivates the study of approximation algorithms. In particular, it is well known that Knapsack admits fully polynomial-time approximation schemes (FPTASes), which can return a feasible solution with a total profit of at least OPT/(1+ε) in poly(n, 1/ε) time. (We use OPT to denote the maximum total profit one can achieve using capacity t.) There has been a long line of research on designing faster FPTASes for Knapsack [21, 33, 29, 37, 10, 24, 18, 15, 34]. Very recently, Chen, Lian, Mao and Zhang [15] and Mao [34] independently obtained Õ(n + (1/ε)²)-time FPTASes. (In the paper, we use the Õ(·) notation to hide polylogarithmic factors in n and 1/ε.) On the negative side, there is no O((n + 1/ε)^{2−δ})-time FPTAS for any constant δ > 0 assuming that (min,+)-convolution cannot be solved in subquadratic time [17, 32]. Hence, further improvement on FPTASes for Knapsack seems hopeless unless the (min,+)-convolution conjecture is overturned. Due to the symmetry of weight and profit, the same upper and lower bounds also hold if we seek a solution of total profit at least OPT, but relax the capacity to (1+ε)t. In this paper, we consider weak approximation schemes, which relax both the total profit and the capacity. More precisely, a weak approximation scheme seeks a subset of items with total weight at most (1+ε)t and total profit at least OPT/(1+ε).

Weak approximation algorithms are commonly studied in the context of resource augmentation (see e.g., [20, 22]). One often resorts to resource augmentation when there is a barrier to further improvement on standard (i.e., non-weak) approximation algorithms. The two most relevant examples are Subset Sum and Unbounded Knapsack. Both problems admit an Õ(n + (1/ε)²)-time FPTAS [27, 9, 23], while an O((n + 1/ε)^{2−δ})-time FPTAS has been ruled out assuming the (min,+)-convolution conjecture [32, 17, 35, 9]. Their weak approximation, however, can be done in truly subquadratic time. For Subset Sum, Mucha, Węgrzycki and Włodarczyk [35] showed an Õ(n + (1/ε)^{5/3})-time weak approximation scheme, and very recently, Chen et al. [13] further improved the running time to Õ(n + 1/ε). For Unbounded Knapsack, Bringmann and Cassis [6] established an Õ(n + (1/ε)^{3/2})-time weak approximation scheme. Given that the standard approximation of Knapsack has been in a similar situation, it is natural to ask whether the weak approximation of Knapsack can be done in truly subquadratic time.

It is worth mentioning that weak approximation schemes for Subset Sum and Unbounded Knapsack are not straightforward extensions of the existing standard (non-weak) approximation schemes. The first truly subquadratic-time weak approximation scheme for Subset Sum [35] as well as its recent improvement [13] both rely on a novel application of additive combinatorics, and the first truly subquadratic-time weak approximation scheme for Unbounded Knapsack [6] relies on a breakthrough on bounded monotone (min,+)-convolution [11, 16]. Unfortunately, techniques developed in these prior papers are not sufficient to obtain a truly subquadratic-time weak approximation scheme for Knapsack.

Our contribution

We propose the first truly subquadratic-time weak approximation scheme for Knapsack.

Theorem 1.

Knapsack admits an Õ(n + (1/ε)^{7/4})-time weak approximation scheme, which is randomized and succeeds with high probability. (Throughout the paper, “with high probability” stands for “with probability at least 1 − (n + 1/ε)^{−Ω(1)}”.)

Our algorithm can be easily generalized to Bounded Knapsack, in which every item i can be selected up to ui times. When ui = O(1) for all i, Bounded Knapsack is equivalent to Knapsack (up to a constant factor), and therefore can be handled by our algorithm. For the general case, there is a standard approach of reducing Bounded Knapsack to instances where ui = O(1) (see e.g., [35, Lemma 4.1][31, Lemma 2.2]), as sketched below. Note that such a reduction blows up item weights and profits by a factor of O(ui), but this does not make much difference in (weak) approximation schemes due to scaling.
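For concreteness, the following is a minimal sketch of the binary-splitting step behind this reduction (the function name and interface are ours, not from [35] or [31]):

```python
def split_bounded_item(w, p, u):
    """Split an item (w, p) with multiplicity u into O(log u) single-copy
    bundles whose subsets realize every count in {0, ..., u}; bundle
    weights and profits grow by a factor of at most u."""
    bundles, size = [], 1
    while u > 0:
        c = min(size, u)             # bundle c copies into one item
        bundles.append((c * w, c * p))
        u -= c
        size *= 2
    return bundles
```

For example, u = 6 produces bundles of 1, 2, and 3 copies, and every multiplicity from 0 to 6 is a subset of these bundles.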

1.1 Technical Overview

We give a brief overview of the techniques that are used by our algorithm.

Bounded Monotone (min,+)-Convolution

(min,+)-convolution (more precisely, its counterpart, (max,+)-convolution) has been used in almost all the recent progress on Knapsack [10, 24, 14, 25, 5, 15, 34]. It allows one to cut the input instance into sub-instances, deal with each sub-instance independently, and then merge the results using (min,+)-convolution. In general, it takes O(n²) time to compute the (min,+)-convolution of two length-n sequences. (A slightly faster O(n²/2^{Ω(√log n)})-time algorithm is known by Bremner et al.’s reduction to (min,+)-matrix multiplication [3] and Williams’ algorithm for the latter problem [38], which was derandomized by Chan and Williams [12].) It is conjectured that there is no algorithm with running time O(n^{2−δ}) for any constant δ > 0. Nevertheless, some special cases of (min,+)-convolution can be computed in subquadratic time (see [1, 2, 36, 11, 16] for example), and among them is bounded monotone (min,+)-convolution, where the two sequences are monotone and their entries are integers bounded by O(n). In a celebrated work, Chan and Lewenstein [11] showed that bounded monotone (min,+)-convolution can be computed in O(n^{1.859}) time. This was later improved to Õ(n^{1.5}) by Chi, Duan, Xie, and Zhang [16], and further generalized to rectangular monotone (min,+)-convolution by Bringmann, Dürr and Polak [8].

Bounded monotone (min,+)-convolution is closely related to Unbounded Knapsack. Indeed, Bringmann and Cassis presented an Õ(n + (1/ε)^{3/2})-time weak approximation scheme [6] that builds upon bounded monotone (min,+)-convolution, and they also showed an equivalence between the two problems, in the sense that a truly subquadratic-time weak approximation scheme for Unbounded Knapsack implies a truly subquadratic algorithm for bounded monotone (min,+)-convolution, and vice versa.

For our problem, we follow the general framework mentioned above: we cut the input instance into sub-instances (called the reduced problem), deal with each sub-instance independently, and then merge the results using bounded monotone (min,+)-convolution. Note that following this framework, for each sub-instance, we need to compute the near-optimal objective value for all (approximate) knapsack capacities.

One might anticipate that a subquadratic algorithm for bounded monotone (min,+)-convolution straightforwardly leads to a subquadratic weak approximation scheme for Knapsack; however, this is not the case. Bounded monotone (min,+)-convolution is only used to merge results; the main challenge lies in dealing with each group, where we need the following two techniques.

Knapsack with Items of Similar Efficiencies

When all items have the same efficiency, Knapsack becomes Subset Sum, and it can be weakly approximated in Õ(n + 1/ε) time [13]. We extend the result to the case where all the items have their efficiencies (that is, profit-to-weight ratios) in the range [ρ, ρ+Δ] for constant ρ, and show that in such a case, Knapsack can be weakly approximated in Õ(n + 1/ε + (1/ε)²·Δ) time. This result may be of independent interest. The following is a rough idea. By scaling, we can assume that there are only O(Δ/ε) distinct item efficiencies. Then we divide the items into O(Δ/ε) groups so that the items in the same group have the same efficiency, and therefore, each group can be treated as a Subset Sum problem. For each group, we solve it by invoking Chen et al.’s Õ(n + 1/ε)-time weak approximation scheme for Subset Sum [13]. After solving all groups, we merge their results using 2D-FFT and divide-and-conquer, which takes Õ((1/ε)²·Δ) time.

We note that the general idea of using Subset Sum to solve Knapsack when items have similar efficiencies was used by Jin [24], and the idea of merging Subset Sum results using 2D-FFT and divide-and-conquer was used by Deng, Jin and Mao [18].

Proximity Bounds

Proximity is another tool that is crucial to the recent progress on Knapsack [36, 14, 25, 5, 15]. Proximity bounds capture the intuition that the optimal solution should contain a lot of high-efficiency items and only a few low-efficiency items. (The efficiency of an item is defined to be its profit-to-weight ratio.) This approach was pioneered by Eisenbrand and Weismantel [19]. Their proximity bound works for the more general problem of integer programming and is based on Steinitz’s lemma. The proof can be greatly simplified if one only considers Knapsack [36]. This bound was subsequently improved using tools from additive combinatorics [14, 5, 25]. These bounds depend on the maximum item weight wmax or the maximum item profit pmax, and lead to Õ(n + wmax²)-time and Õ(n + pmax²)-time algorithms for Knapsack [25, 5]. (We remark that all the proximity bounds of this type assume that the item weights and profits are integers.) These results, however, seem not helpful in breaking the quadratic barrier: even if we can scale wmax and pmax to be bounded by O(1/ε), it still takes Õ(n + (1/ε)²) time to solve the problem.

We shall use another type of proximity bound, which depends on the item efficiencies. Such a proximity bound was used to design FPTASes for Knapsack [24, 18] before. This proximity bound states that the optimal solution selects almost all the high-efficiency items, and exchanges only a few of them for low-efficiency items. Moreover, the number of exchanged items is inversely proportional to the efficiency gap between the high- and low-efficiency items. Fewer exchanged items allow a larger additive error per exchanged item, and therefore, a faster approximation.

Our Main Technical Contribution

While each of the above-mentioned techniques appeared in previous papers before, it seems that none of them alone can break the quadratic barrier for weakly approximating Knapsack. Our main contribution is to modify and then combine these techniques to obtain a truly subquadratic-time algorithm.

Note that if the item efficiencies differ significantly, then the efficiency-based proximity bound becomes sharp; if the item efficiencies stay similar, then the Subset Sum based approach becomes efficient. To combine both ideas, the main difficulty is that we need to approximate the optimal objective value for all (approximate) capacities, and the efficiency-based proximity bound may be sharp for certain capacities and weak for others. Towards this, the main idea is to use different rounding precisions for different items. We roughly explain it below. For simplicity, when we say we approximately solve a group of items, we mean to approximately compute the optimal objective value for all knapsack capacities when taking items from this group. We partition all items into multiple groups of fixed size. If the items in a group have similar efficiencies, then we approximately solve this group with high precision using the Subset Sum based approach; otherwise, we approximately solve this group with low precision. Low precision means a larger error per item, but if proximity guarantees that only a few items from this group contribute to the optimal solution (or only a few items do not contribute), then the total error can still be bounded. So the challenge is to prove that a sharp proximity bound holds for such a group regardless of the knapsack capacity.

We remark that a simpler implementation of the above idea leads to an algorithm with running time Õ(n + (1/ε)^{11/6}). By grouping the items in a more sophisticated way, the running time can be improved to Õ(n + (1/ε)^{7/4}).

1.2 Further Related Work

FPTASes with Running Time in Multiplicative Form

So far, we have discussed FPTASes for Knapsack whose running times are of the form n + f(1/ε). There is also a line of research focusing on FPTASes with a running time of the form n^{O(1)}·f(1/ε) (see, e.g., [33, 28, 10]). The best known running time of this form is Õ(n^{3/2}/ε) due to Chan [10]. It remains open whether there exists an FPTAS with running time O(n/ε). It is not even clear whether Knapsack admits a weak approximation scheme in O(n/ε) time.

Pseudo-polynomial Time Algorithm

Recent years have witnessed important progress in the study of pseudo-polynomial time algorithms for Knapsack and Unbounded Knapsack [6, 7, 14, 5, 25]. In particular, the best-known algorithm for Knapsack that is parametrized by the largest weight wmax (or the largest profit pmax) runs in Õ(n + wmax²) time (or Õ(n + pmax²) time) [5, 25], and is the best possible assuming the (min,+)-convolution conjecture [17, 32]. It is, however, a major open problem whether the quadratic barrier can be broken by combining both parameters, that is, whether an Õ(n + (wmax + pmax)^{2−δ})-time algorithm exists for Knapsack. Notably, for Unbounded Knapsack, one can find an algorithm that runs in Õ(n + (wmax + pmax)^{3/2}) time [6].

1.3 Paper Outline

In Section 2, we introduce all the necessary notation and terminology. In Section 3, we reduce the problem so that it suffices to consider instances with nice properties. In Section 4, we present all the ingredients that are needed by our algorithm. In Section 5, we present an Õ(n + (1/ε)^{11/6})-time algorithm to illustrate our main idea. In Section 6, we show how to improve the running time to Õ(n + (1/ε)^{7/4}). All the missing proofs can be found in the full version of the paper.

2 Preliminaries

We assume that ε ≤ 1, since otherwise we can approximate using a linear-time 2-approximation algorithm for Knapsack [30, Section 2.5]. We denote the set of real numbers between a and b as [a, b]. All logarithms are base 2.

To ease the presentation of our algorithm, we interpret Knapsack from the perspective of tuples. An item can be regarded as a tuple (w, p), where w and p represent the weight and profit, respectively. Throughout the paper, we do not distinguish between an item and a tuple, and always refer to the first entry as weight and the second entry as profit. The efficiency of a tuple (w, p) is defined to be its profit-to-weight ratio p/w. The efficiency of (0, 0) is defined to be 0.

Let I be a multiset of tuples. We use w(I) to denote the total weight of tuples in I. That is, w(I) = Σ_{(w,p)∈I} w. Similarly, we define p(I) = Σ_{(w,p)∈I} p. Define the set of all (2-dimensional) subset sums of I as follows:

𝒮(I) = {(w(I′), p(I′)) : I′ ⊆ I}.

Every tuple (w, p) ∈ 𝒮(I) corresponds to a Knapsack solution with total weight w and total profit p. We say a tuple (w, p) dominates another tuple (w′, p′) if w ≤ w′, p ≥ p′, and at least one of the two inequalities holds strictly. Let 𝒮+(I) be the set obtained from 𝒮(I) by removing all dominated tuples. Every tuple in 𝒮+(I) corresponds to a Pareto optimal solution for Knapsack, and vice versa. Let t be the given capacity. Knapsack can be formulated as follows:

max{p : (w, p) ∈ 𝒮+(I) and w ≤ t}.
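For intuition, here is a brute-force (exponential-time) sketch of these definitions on a toy instance; it is illustrative only, and not the algorithm of this paper:

```python
from itertools import combinations

def subset_sums(items):
    # S(I): all tuples (w(I'), p(I')) over subsets I' of I
    return {(sum(w for w, _ in sub), sum(p for _, p in sub))
            for r in range(len(items) + 1)
            for sub in combinations(items, r)}

def dominates(a, b):
    # (w, p) dominates (w', p'): w <= w', p >= p', one inequality strict
    return a[0] <= b[0] and a[1] >= b[1] and a != b

def pareto_front(tuples):
    # S+(I): remove every dominated tuple
    return {t for t in tuples if not any(dominates(s, t) for s in tuples)}

items = [(3, 4), (2, 3), (4, 5)]           # tiny toy instance
front = pareto_front(subset_sums(items))
print(max(p for w, p in front if w <= 5))  # optimal profit for t = 5 -> 7
```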

We consider the more general problem of (weakly) approximating 𝒮+(I).

Definition 2.

Let S and S′ be two sets of tuples.

  1. (i)

    S′ approximates S with factor 1+ε if for any (w, p) ∈ S, there exists (w′, p′) ∈ S′ such that w′ ≤ (1+ε)w and p′ ≥ p/(1+ε);

  2. (ii)

    S′ approximates S with additive error (δw, δp) if for any (w, p) ∈ S, there exists (w′, p′) ∈ S′ such that w′ ≤ w + δw and p′ ≥ p − δp;

  3. (iii)

    S′ dominates S if for any (w, p) ∈ S, there exists (w′, p′) ∈ S′ such that w′ ≤ w and p′ ≥ p.

When S′ approximates a set S that contains only one tuple (w, p), we also say that S′ approximates the tuple (w, p).

Let (I,t) be a Knapsack instance. Let OPT be the optimal value. Our goal is to compute a set S such that

  • S approximates 𝒮+(I) ∩ ([0, t] × [0, OPT]) with additive error (εt, εOPT), and

  • S is dominated by 𝒮(I).

The former ensures that S provides a good (weak) approximation of the optimal solution, and the latter ensures that every tuple in S is not better than a true solution. Our definition is essentially the same as that by Bringmann and Cassis [6], albeit their definition is for sequences.

Let I = I1 ∪ I2 be a partition of I into two parts. It is easy to observe the following.

  • 𝒮(I) = 𝒮(I1) + 𝒮(I2), where the sumset X + Y is defined by the following formula.

    X + Y = {(w1 + w2, p1 + p2) : (w1, p1) ∈ X and (w2, p2) ∈ Y};
  • 𝒮+(I) = 𝒮+(I1) ⊕ 𝒮+(I2), where ⊕ represents the (max,+)-convolution defined by the following formula (both operations are rendered in code right after this list).

    X ⊕ Y = {(w, p) ∈ X + Y : (w, p) is not dominated in X + Y}.
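A direct (quadratic-time) rendering of the two operations, reusing pareto_front from the earlier sketch:

```python
def sumset(X, Y):
    # the sumset X + Y
    return {(w1 + w2, p1 + p2) for (w1, p1) in X for (w2, p2) in Y}

def max_plus_conv(X, Y):
    # X (+) Y: the non-dominated tuples of X + Y
    return pareto_front(sumset(X, Y))
```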

Using Chi et al.’s Õ(n^{3/2})-time algorithm for bounded monotone (min,+)-convolution [16] and Bringmann and Cassis’s algorithm for approximating the (max,+)-convolution [6, Lemma 28 in the full version], we can approximate the (max,+)-convolution of two sets in Õ((1/ε)^{3/2}) time. Using the divide-and-conquer approach from [10], we can generalize it to multiple sets. The proof of the following lemma can be found in the appendix of the full version.

Lemma 3.

Let S1, S2, …, Sm ⊆ ({0} ∪ [A, B]) × ({0} ∪ [C, D]) be sets of tuples with total size ℓ. In Õ(ℓ + (1/ε)^{3/2}·m) time and with high probability, we can compute a set S̃ of size Õ(1/ε) that approximates S1 ⊕ S2 ⊕ ⋯ ⊕ Sm with factor 1 + O(ε). Moreover, S̃ is dominated by S1 + S2 + ⋯ + Sm. (Here, the Õ(·) notation hides polylogarithmic factors not only in n and 1/ε, but also in other parameters such as B/A and D/C. These parameters will eventually be set to be polynomial in n and 1/ε.)

With Lemma 3, we can approximate 𝒮+(I) ∩ ([0, t] × [0, OPT]) as follows. Partition I into several groups, approximate each group independently, and merge their results using Lemma 3. As long as there are not too many groups, the merging time will be subquadratic. The following lemmas explain how the multiplicative factor and additive error accumulate during the computation. Their proofs can be found in the full version.

Lemma 4.

Let S, S1, and S2 be sets of tuples.

  1. (i)

    If S1 approximates S with factor 1+ε1 and S2 approximates S1 with factor 1+ε2, then S2 approximates S with factor (1+ε1)(1+ε2).

  2. (ii)

    If S1 approximates S with additive error (δw1, δp1) and S2 approximates S1 with additive error (δw2, δp2), then S2 approximates S with additive error (δw1 + δw2, δp1 + δp2).

Lemma 5.

Let S1, S2, S1′, S2′ be sets of tuples.

  1. (i)

    If S1′ and S2′ approximate S1 and S2, respectively, with factor 1+ε, then S1′ ⊕ S2′ and S1′ + S2′ approximate S1 ⊕ S2 and S1 + S2, respectively, with factor 1+ε.

  2. (ii)

    If S1′ and S2′ approximate S1 and S2 with additive error (δw1, δp1) and (δw2, δp2), respectively, then S1′ ⊕ S2′ and S1′ + S2′ approximate S1 ⊕ S2 and S1 + S2, respectively, with additive error (δw1 + δw2, δp1 + δp2).

In the rest of the paper, we may switch between the multiplicative factor and the additive error, although our goal is to bound the latter. It is helpful to keep the following observation in mind.

Observation 6.

If S′ approximates S with factor 1+ε, then S′ approximates each tuple (w, p) ∈ S with additive error (εw, εp).

3 Reducing the Problem

With Lemma 3, we can reduce the problem of weakly approximating Knapsack to the following problem.

The reduced problem RP(α): given a set I of n items from [1,2] × [1,2] and a real number α ∈ [1/2, 1/(4ε)], compute a set S of size Õ(1/ε) that approximates 𝒮+(I) ∩ ([0, 1/(αε)] × [0, 1/(αε)]) with additive error (O(1/α), O(1/α)). Moreover, S should be dominated by 𝒮(I).

Lemma 7.

An Õ(n + T(1/ε))-time algorithm for the reduced problem RP(α) implies an Õ(n + (1/ε)^{3/2} + T(1/ε))-time weak approximation scheme for Knapsack.

The approach to reducing the problem is similar to that in [15]. The details of the proof can be found in the full version.

Throughout the rest of the paper, we write 𝒮+(I) ∩ ([0, 1/(αε)] × [0, 1/(αε)]) as 𝒮+(I; 1/(αε)) for short, and when the weight and profit both have an additive error of δ, we simply say that the additive error is δ.

We also remark that when presenting algorithms for the reduced problem RP(α), we will omit the requirement that the resulting set S should be dominated by 𝒮(I) since it is guaranteed by the algorithms in a quite straightforward way. More specifically, in those algorithms (including Lemma 3), rounding is the only place that may incur approximation errors, and whenever we round a tuple (w,p), we always round w up and round p down. As a consequence, a set after rounding is always dominated by the original set.

4 Algorithmic Ingredients

We present a few results that will be used by our algorithm as subroutines.

4.1 An Algorithm for Small Solution Size

Consider the reduced problem RP(α). Since items are from [1,2] × [1,2], it is easy to see that |I′| ≤ 1/(αε) for any I′ ⊆ I with (w(I′), p(I′)) ∈ 𝒮+(I; 1/(αε)). When α is large, |I′| is small, and such a case can be tackled using Bringmann’s two-layer color-coding technique [4]. Roughly speaking, we can divide I into k = Õ(1/(αε)) subsets I1, …, Ik so that with high probability, |Ij ∩ I′| ≤ 1 for every j. Then it suffices to (approximately) compute I1 ⊕ ⋯ ⊕ Ik, and the resulting set will provide a good approximation for the tuple (w(I′), p(I′)). Since this technique has become quite standard nowadays, we defer the proof to the full version.
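As a simplified illustration of the coloring step, the sketch below uses a single layer of uniform random coloring and checks the isolation event; the actual algorithm uses Bringmann’s two-layer scheme [4] to achieve k = Õ(1/(αε)) with high probability:

```python
import random

def random_partition(items, k):
    # color each item uniformly at random; bucket j collects color-j items
    buckets = [[] for _ in range(k)]
    for it in items:
        buckets[random.randrange(k)].append(it)
    return buckets

def is_isolating(buckets, solution):
    # the event we need: |I_j intersect I'| <= 1 for every bucket I_j
    sol = set(solution)              # assumes items are distinct tuples
    return all(sum(1 for it in b if it in sol) <= 1 for b in buckets)
```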

Lemma 8.

Let I be a set of items from [1,2] × [1,2]. In Õ(n + (1/ε)^{5/2}·(1/α)) time and with high probability, we can compute a set S of size Õ(1/ε) that approximates 𝒮+(I; 1/(αε)) with factor 1 + O(ε).

4.2 An Efficiency-Based Proximity Bound

A proximity bound basically states that the optimal solution of Knapsack must contain lots of high-efficiency items and very few low-efficiency items. Let I = {(w1, p1), …, (wn, pn)} be a set of items. Assume that the items are labeled in non-increasing order of efficiency. That is, p1/w1 ≥ ⋯ ≥ pn/wn. (The labeling may not be unique. We fix an arbitrary one in that case.) Given a capacity w, let b be the minimum integer such that w1 + ⋯ + wb > w. We say that the item (wb, pb) is the breaking item with respect to w. (When no such b exists, we simply let b = n + 1 and define the breaking item to be (+∞, 0).)
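A direct computation of the breaking item, following the definition (a small sketch; the tuple convention is the one fixed above):

```python
def breaking_item(items, w):
    """Return the breaking item with respect to capacity w: the first item
    (in non-increasing order of efficiency) whose prefix weight exceeds w."""
    items = sorted(items, key=lambda it: it[1] / it[0], reverse=True)
    total = 0
    for item in items:
        total += item[0]
        if total > w:
            return item
    return (float("inf"), 0)     # b = n + 1: no breaking item exists
```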

The following proximity bound is essentially the same as the one used in [24, 18]. For completeness, we provide a proof in the full version.

Lemma 9.

Let I be a set of items from [1,2] × [1,2]. Let (w, p) ∈ 𝒮+(I). Let I′ ⊆ I be a subset of items with w(I′) = w and p(I′) = p. Let ρb be the efficiency of the breaking item with respect to w. For any I″ ⊆ I,

  1. (i)

    if every item in I″ has efficiency at most ρb − Δ for some Δ > 0, then

    w(I″ ∩ I′) ≤ 2/Δ;
  2. (ii)

    if every item in I″ has efficiency at least ρb + Δ for some Δ > 0, then

    w(I″ ∖ I′) ≤ 2/Δ.

4.3 An Algorithm for Items with Similar Efficiencies

When all the items have the same efficiency, Knapsack becomes Subset Sum, and there is an Õ(n + 1/ε)-time weak approximation scheme for Subset Sum [13]. The following lemma extends this result to the case where the item efficiencies differ by only a small amount.

Lemma 10.

Let I be a set of n items from [1,2] × [0, +∞) whose efficiencies are in the range [ρ, ρ+Δ]. In Õ(n + 1/ε + (1/ε)²·Δ/ρ) time and with high probability, we can compute a set of size Õ(1/ε) that approximates 𝒮+(I) with factor 1 + O(ε).

We first consider the (max,+)-convolution of two sets of tuples.

Lemma 11.

Let S1, S2 be two sets of tuples from ({0} ∪ [A, B]) × [0, +∞) with total size ℓ. Suppose that the efficiencies of the tuples in S1 ∪ S2 are in the range {0} ∪ [ρ, ρ+Δ]. In Õ(ℓ + 1/ε + (1/ε)²·Δ/ρ) time, we can compute a set S of size Õ(1/ε) that approximates S1 ⊕ S2 with factor 1 + O(ε). Moreover, S is dominated by S1 + S2. (Here, the Õ(·) notation hides polylogarithmic factors not only in n and 1/ε, but also in B/A. These parameters will eventually be set to be polynomial in n and 1/ε.)

Proof.

It suffices to consider the case where ρ = 1. If ρ ≠ 1, we can replace every tuple (w, p) with (w, p/ρ), and replace Δ with Δ/ρ.

Let a ∈ [A, B]. We first describe how to approximate (S1 ⊕ S2) ∩ ([0, a] × [0, +∞)) with additive error O(εa).

Let S1′ and S2′ be the sets obtained from S1 and S2, respectively, by replacing every tuple (w, p) with (w, p − w). The range of the efficiencies of S1′ and S2′ becomes [0, Δ]. We assume that every tuple (w, p) ∈ S1′ ∪ S2′ has w ≤ a, since the tuples with w > a can be safely ignored. For every tuple (w, p), we round w up and round p down to the nearest multiple of εa. Denote the resulting sets by S1″ and S2″. By scaling, we can assume that every tuple in S1″ ∪ S2″ is a tuple of integers from [0, 1/ε] × [0, Δ/ε]. Then we can compute S = S1″ + S2″ using 2D-FFT in Õ(1/ε + (1/ε)²·Δ) time. It is easy to see that S is of size Õ((1/ε)²·Δ), and is dominated by S1′ + S2′ (since we round w up and round p down). Moreover, for each (w, p) ∈ S1′ + S2′, there is a tuple (w′, p′) ∈ S such that w ≤ w′ ≤ w + 2εa and p − 2εa ≤ p′ ≤ p, which implies that p + w − 4εa ≤ p′ + w′ − 2εa ≤ p + w. Now consider the set T obtained from S by replacing each tuple (w, p) with (w, p + w − 2εa). One can verify that T is dominated by S1 + S2 and approximates S1 + S2 with additive error O(εa). Since S1 ⊕ S2 ⊆ S1 + S2, we have that T also approximates S1 ⊕ S2 with additive error O(εa). Then we remove all the dominated tuples from T. This can be done in O(|T| log |T|) = Õ((1/ε)²·Δ) time. After this, the size of T becomes O(1/ε) because for each integer w ∈ [0, 1/ε], there can be at most one tuple with weight w in T.

To finish the proof, we repeat the above procedure for every a ∈ [A, B] that is a power of 2, and take the union of the O(log(B/A)) resulting sets. Note that for a tuple (w, p) ∈ S1 + S2 with w ∈ [a/2, a], we have p ≥ ρw = w ≥ a/2. Therefore, the additive error O(εa) implies an approximation factor of 1 + O(ε). The running time increases by a factor of log(B/A).
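The computational core of this proof is the sumset of the two rounded tuple sets on an integer grid, followed by removing dominated tuples. A minimal numpy sketch, assuming the tuples have already been shifted, rounded, and scaled to integers as described (the function names are ours):

```python
import numpy as np

def sumset_via_fft2(A, B):
    """A, B: boolean grids with A[w, p] = True iff tuple (w, p) is present
    (w in [0, 1/eps], p in [0, Delta/eps] after scaling). Returns the grid
    of the sumset A + B computed by one 2D convolution."""
    shape = (A.shape[0] + B.shape[0] - 1, A.shape[1] + B.shape[1] - 1)
    fa = np.fft.rfft2(A.astype(np.float64), shape)
    fb = np.fft.rfft2(B.astype(np.float64), shape)
    counts = np.fft.irfft2(fa * fb, shape)
    return counts > 0.5   # (w, p) reachable iff its pair count is >= 1

def prune_to_pareto(grid):
    # keep, for increasing weight, only strictly improving profits
    front, best = [], -1
    for w in range(grid.shape[0]):
        ps = np.flatnonzero(grid[w])
        if ps.size and ps[-1] > best:
            best = int(ps[-1])
            front.append((w, best))
    return front
```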

Now we are ready to prove Lemma 10.

Proof of Lemma 10.

If Δ/ρ < ε, we can round down item profits so that the efficiencies of all items become ρ. This increases the approximation factor by at most 1 + ε. Now the problem degenerates to Subset Sum, and therefore, we can compute a set S that approximates 𝒮+(I) in Õ(n + 1/ε) time by the weak approximation scheme for Subset Sum [13]. (The weak approximation scheme in [13] actually returns a set S̃ that approximates 𝒮+(I) with additive error εt. It can be easily modified to return a set that approximates 𝒮+(I) with factor 1 + ε by repeating it for t = 1, 2, 4, 8, …, and taking the union of the resulting sets.)

Now we assume Δ/ρ ≥ ε. By rounding down item profits, we can assume that the efficiencies of the items are powers of 1 + ε. This increases the approximation factor by at most 1 + ε. So there are at most m = O(log_{1+ε}(1 + Δ/ρ)) = O(Δ/(ερ)) distinct efficiencies. We divide the items into m groups I1, …, Im by their efficiencies. For each group Ij, the items have the same efficiency, so again we can compute a set Sj that approximates 𝒮+(Ij) in Õ(|Ij| + 1/ε) time by the weak approximation scheme for Subset Sum [13]. The total time cost for computing all the sets Sj is Õ(n + m/ε + (1/ε)²·Δ/ρ) = Õ(n + (1/ε)²·Δ/ρ).

Then we shall merge S1, …, Sm using a divide-and-conquer approach. The tuples in Sj all have the same efficiency (except that the tuple (0, 0) has efficiency 0), which is the efficiency of the items in Ij. Let ρj be this efficiency. Without loss of generality, assume that ρ1 < ⋯ < ρm. We recursively (and approximately) compute S1 ⊕ ⋯ ⊕ S_{m/2} and S_{m/2+1} ⊕ ⋯ ⊕ Sm, and then merge the resulting two sets via Lemma 11. The recursion tree has log m levels. One can verify that the total merging time in each level is Õ(m/ε + (1/ε)²·(ρm − ρ1)/ρ1), which is bounded by Õ((1/ε)²·Δ/ρ) since ρ ≤ ρj ≤ ρ + Δ. The approximation factor is (1+ε)^{log m} = 1 + O(ε log m). We can adjust ε by a factor of log m, which increases the running time by a polylogarithmic factor.
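A sketch of the grouping step used in this proof (rounding profits down so that every efficiency becomes a power of 1 + ε; each resulting group is a Subset Sum instance; the helper name is ours):

```python
import math

def efficiency_groups(items, eps):
    """Round each profit down so the efficiency becomes a power of 1+eps
    (assumes p > 0); items with equal rounded efficiency form one group,
    i.e., one Subset Sum instance."""
    groups = {}
    for (w, p) in items:
        k = math.floor(math.log(p / w) / math.log(1 + eps))
        groups.setdefault(k, []).append((w, (1 + eps) ** k * w))
    return groups
```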

Lemma 10 immediately implies the following.

Corollary 12.

Let I be a set of n items from [1,2] × [0, +∞) whose efficiencies are in the range [ρ, ρ+Δ]. In Õ(n + 1/ε + (1/ε)²·Δ/ρ) time and with high probability, we can compute a set of size Õ(1/ε) that approximates each tuple (w, p) ∈ 𝒮+(I) with additive error (εw, εp).

In the above corollary, the additive error is proportional to the value of the entry of the tuple: a large entry means a large additive error. We can also achieve the contrary and approximate in such a way that a large entry means a small additive error. Basically, this can be done by (approximately) solving a problem symmetric to Knapsack: instead of determining which items should be selected, we determine which items should not be selected.

Corollary 13.

Let I be a set of n items from [1,2] × [0, +∞) whose efficiencies are in the range [ρ, ρ+Δ]. In Õ(n + 1/ε + (1/ε)²·Δ/ρ) time and with high probability, we can compute a set of size Õ(1/ε) that approximates 𝒮+(I) with additive error (ε(w(I) − w), ε(p(I) − p)).

Proof.

Define

𝒮−(I) = {(w, p) ∈ 𝒮(I) : (w, p) does not dominate any other tuple in 𝒮(I)}.

From the perspective of Knapsack, every tuple in 𝒮+(I) corresponds to an optimal solution of Knapsack, while every tuple (w, p) in 𝒮−(I) corresponds to an optimal solution of the symmetric problem that seeks the minimum total profit that can be achieved using capacity at least w. It is easy to observe that

𝒮+(I) = {(w(I) − w, p(I) − p) : (w, p) ∈ 𝒮−(I)}.

Therefore, to approximate 𝒮+(I), it suffices to approximate 𝒮−(I).

Geometrically, 𝒮+(I) represents the upper envelope of 𝒮(I), while 𝒮−(I) represents the lower envelope. But we can make 𝒮−(I) the upper envelope by exchanging the x-coordinate and the y-coordinate. Then we can approximate 𝒮−(I) using Corollary 12.

More precisely, let I′ = {(p, w) : (w, p) ∈ I}. The efficiencies of the items in I′ are in the range [ρ′, ρ′+Δ′], where ρ′ = 1/(ρ+Δ) and ρ′+Δ′ = 1/ρ. One can verify that Δ′/ρ′ = Δ/ρ. Then we can compute a set S′ of size Õ(1/ε) that approximates each tuple (p, w) ∈ 𝒮+(I′) with additive error (εp, εw) via Corollary 12 in Õ(n + 1/ε + (1/ε)²·Δ/ρ) time. Let

S = {(w(I) − w, p(I) − p) : (p, w) ∈ S′}.

One can verify that S approximates each tuple (w, p) ∈ 𝒮+(I) with additive error (ε(w(I) − w), ε(p(I) − p)).
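In code, the final transformation back from the flipped instance is just the complement map (a sketch with our own naming):

```python
def complement_back(items, S_flipped):
    # map each flipped tuple (p, w) in S' to (w(I) - w, p(I) - p)
    W = sum(w for w, _ in items)
    P = sum(p for _, p in items)
    return {(W - w, P - p) for (p, w) in S_flipped}
```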

5 An Algorithm with Exponent 11/6

To illustrate our main idea, we first present a simpler algorithm that runs in Õ(n + (1/ε)^{11/6}) time. When α ≥ (1/ε)^{2/3}, the algorithm in Lemma 8 already runs in Õ(n + (1/ε)^{11/6}) time. It remains to consider the case where 1/2 ≤ α ≤ (1/ε)^{2/3}. In the rest of this section, we assume that I = {(w1, p1), …, (wn, pn)} and that p1/w1 ≥ ⋯ ≥ pn/wn.

Let τ be a parameter that will be specified later. We first partition I into groups as follows. For the items (w1, p1), …, (wn, pn) in I, we bundle every τ items as a group until we obtain 1/(αετ) + 1 groups. For the remaining items, say {(wi, pi), …, (wn, pn)}, we further divide them into two groups {(wi, pi), …, (wi′, pi′)} and {(wi′+1, pi′+1), …, (wn, pn)}, where i′ is the maximum index such that pi′/wi′ ≥ pi/wi − 1/τ. At the end, we obtain m = 1/(αετ) + 3 groups I1, I2, …, Im. For simplicity, we assume that none of these groups is empty. This assumption is without loss of generality, since empty groups only make things easier.

For each group Ij, we define Δj to be the maximum difference between the efficiencies of the items in Ij. That is,

Δj = max{p/w − p′/w′ : (w, p), (w′, p′) ∈ Ij}.

By the way we construct the groups, we have that Σj Δj ≤ 3/2.

We say a group Ij is good if Δj ≤ 1/τ, and bad otherwise.
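A sketch of this partition and classification (items are assumed pre-sorted by non-increasing efficiency; the rounding of the group count and the helper names are our choices):

```python
import math

def partition_groups(items, alpha, eps, tau):
    """items: sorted by non-increasing efficiency p/w. Returns the good
    and bad groups of the partition described above."""
    eff = lambda it: it[1] / it[0]
    full = math.ceil(1 / (alpha * eps * tau)) + 1   # size-tau groups
    groups = [items[j * tau:(j + 1) * tau] for j in range(full)]
    rest = items[full * tau:]
    if rest:
        # i' is the last index with efficiency >= eff(rest[0]) - 1/tau
        cut = len(rest)
        for idx, it in enumerate(rest):
            if eff(it) < eff(rest[0]) - 1 / tau:
                cut = idx
                break
        groups.extend([rest[:cut], rest[cut:]])
    good = [g for g in groups if g and eff(g[0]) - eff(g[-1]) <= 1 / tau]
    bad = [g for g in groups if g and eff(g[0]) - eff(g[-1]) > 1 / tau]
    return good, bad
```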

Note that the efficiency of any item in I is within the range [1/2, 2]. For each good group Ij, its item efficiencies are in the range [ρ, ρ + 1/τ] for some ρ ∈ [1/2, 2], so we can compute a set Sj of size Õ(1/ε) that approximates each tuple (w, p) ∈ 𝒮+(Ij) with additive error (εw, εp) via Corollary 12 in Õ(|Ij| + 1/ε + (1/ε)²·(1/τ)) time. Since the items are from [1,2] × [1,2], for any tuple in 𝒮+(Ij), its weight and profit are of the same magnitude, and so are the corresponding additive errors. We can say that Sj approximates each tuple (w, p) ∈ 𝒮+(Ij) with additive error O(εw).

For bad groups, we shall approximate them less accurately: we shall use 1/(ατ) instead of ε as the accuracy parameter. (Later we will show that we can still obtain a good bound on the total additive error using the proximity bound in Lemma 9.) We compute a set Sj′ of size Õ(ατ) that approximates each tuple (w, p) ∈ 𝒮+(Ij) with additive error (w/(ατ), p/(ατ)) via Corollary 12 in Õ(|Ij| + ατ + α²τ²·Δj) time. We also compute a set Sj″ of size Õ(ατ) that approximates each (w, p) ∈ 𝒮+(Ij) with additive error ((w(Ij) − w)/(ατ), (p(Ij) − p)/(ατ)) via Corollary 13 in Õ(|Ij| + ατ + α²τ²·Δj) time. Again, since the weight and the profit are of the same magnitude, we can say that Sj′ approximates each tuple (w, p) ∈ 𝒮+(Ij) with additive error O(w/(ατ)), and that Sj″ approximates each tuple (w, p) ∈ 𝒮+(Ij) with additive error O((w(Ij) − w)/(ατ)). Now consider the set Sj = Sj′ ∪ Sj″. One can verify that Sj approximates each tuple (w, p) ∈ 𝒮+(Ij) with additive error O((1/(ατ))·min{w, w(Ij) − w}).

The last step is to compute a set S of size Õ(1/ε) that approximates S1 ⊕ ⋯ ⊕ Sm with factor 1 + O(ε) via Lemma 3. Note that each set Sj is of size Õ(1/ε) or Õ(ατ). This step takes Õ(m/ε + mατ + (1/ε)^{3/2}·m) time.

Bounding Total Time Cost

Partitioning the items into I1, …, Im can be done by sorting and scanning, and therefore, it takes only O(n log n + m) time. Each good group costs Õ(|Ij| + 1/ε + (1/ε)²·(1/τ)) time. Since there are at most m good groups, the good groups cost total time

Õ(n + m/ε + (1/ε)²·m/τ).

Each bad group Ij costs Õ(|Ij| + ατ + α²τ²·Δj) time, so the total time cost for bad groups is

Õ(n + mατ + α²τ²·Σj Δj) ≤ Õ(n + mατ + α²τ²).

The inequality is due to the fact that Σj Δj ≤ 3/2. Merging all the sets Sj via Lemma 3 costs Õ(m/ε + mατ + (1/ε)^{3/2}·m) time. Taking the sum of all these time costs, we have that the running time of the algorithm is

Õ(n + m/ε + mατ + (1/ε)²·m/τ + α²τ² + (1/ε)^{3/2}·m).

Recall that m = 1/(αετ) + 3 and that 1/2 ≤ α ≤ (1/ε)^{2/3}. Set τ = (1/ε)^{11/12}·(1/α). One can verify that the running time is Õ(n + (1/ε)^{11/6}); the calculation is spelled out below.
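Indeed, ignoring the additive constant in m and using α ≤ (1/ε)^{2/3}, the choice of τ gives m = (1/ε)^{1/12}, and every term is bounded by (1/ε)^{11/6}:

```latex
\begin{aligned}
&\tau = (1/\varepsilon)^{11/12}\cdot\tfrac{1}{\alpha}
 \quad\Longrightarrow\quad
 m \approx \tfrac{1}{\alpha\varepsilon\tau} = (1/\varepsilon)^{1/12},\\
&\tfrac{m}{\varepsilon} = (1/\varepsilon)^{13/12},\qquad
 m\alpha\tau = \tfrac{1}{\varepsilon},\qquad
 \alpha^{2}\tau^{2} = (1/\varepsilon)^{11/6},\\
&(1/\varepsilon)^{2}\cdot\tfrac{m}{\tau}
   = \alpha\,(1/\varepsilon)^{7/6}
   \le (1/\varepsilon)^{2/3+7/6} = (1/\varepsilon)^{11/6},\qquad
 (1/\varepsilon)^{3/2}\, m = (1/\varepsilon)^{19/12}.
\end{aligned}
```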

Bounding Additive Error

To prove the correctness of the algorithm, it suffices to show that the set S returned by the algorithm approximates every tuple in 𝒮+(I; 1/(αε)) with additive error O(1/α).

Let (w, p) be an arbitrary tuple in 𝒮+(I; 1/(αε)). Let (wb, pb) be the breaking item with respect to w. Let ρb = pb/wb be the efficiency of the breaking item. We first show that the proximity bound can be applied to all but four bad groups.

Lemma 14.

At most four bad groups may contain an item whose efficiency is within the range [ρb − 1/τ, ρb + 1/τ]. Moreover, each of these groups is of size τ.

Proof.

We first show that there are at most four bad groups each containing an item whose efficiency is within the range [ρb − 1/τ, ρb + 1/τ]. Suppose, for the sake of contradiction, that there are five bad groups each containing an item whose efficiency is in the range [ρb − 1/τ, ρb + 1/τ]. Let i1, i2, i3, i4, i5 be these five items. Without loss of generality, assume that pi1/wi1 ≥ ⋯ ≥ pi5/wi5. Since they belong to five bad groups, and within each bad group, the maximum and the minimum item efficiencies differ by at least 1/τ, we have that pi1/wi1 − pi5/wi5 ≥ 3/τ. But this is impossible since the efficiencies of i1 and i5 are in the range [ρb − 1/τ, ρb + 1/τ], whose length is 2/τ.

Note that by our construction, Im−1 is a good group, so it cannot be one of these four bad groups. Since the first m−2 groups are of size τ, to finish the proof, it suffices to show that Im cannot contain an item whose efficiency is within the range [ρb − 1/τ, ρb + 1/τ]. Since (w, p) ∈ 𝒮+(I; 1/(αε)), we have that w ≤ 1/(αε). Since every item is from [1,2] × [1,2], we have b ≤ 1/(αε) + 1. Recall that m = 1/(αετ) + 3 and that the first m−2 groups each has τ items. So it must be that (wb, pb) ∈ Ij for some j ≤ m−2. Let i′ be the first item (the item with the highest efficiency) in Im−1. We have pi′/wi′ ≤ pb/wb. Let i″ be the first item in Im. By the construction of Im−1, we have that pi″/wi″ < pi′/wi′ − 1/τ. Therefore, pi″/wi″ < pb/wb − 1/τ. This implies that every item in Im has its efficiency strictly less than pb/wb − 1/τ.

Now we are ready to bound the total additive error.

Lemma 15.

S1 ⊕ S2 ⊕ ⋯ ⊕ Sm approximates (w, p) with additive error O(1/α).

Proof.

Since I = I1 ∪ ⋯ ∪ Im and (w, p) ∈ 𝒮+(I), we have that there is a tuple (wj, pj) ∈ 𝒮+(Ij) for each j such that

(w, p) = (w1, p1) + ⋯ + (wm, pm).

Let Jgood be the set of indices of good groups. Let Jbad be the set of indices of bad groups. Recall that for each j ∈ Jgood, Sj approximates (wj, pj) with additive error O(εwj), and that for each j ∈ Jbad, the set Sj approximates (wj, pj) with additive error O((1/(ατ))·min{wj, w(Ij) − wj}). Therefore, S1 ⊕ S2 ⊕ ⋯ ⊕ Sm approximates (w, p) with additive error

O(ε·Σ_{j∈Jgood} wj + (1/(ατ))·Σ_{j∈Jbad} min{wj, w(Ij) − wj}).

It is easy to see that

Σ_{j∈Jgood} wj ≤ w ≤ 1/(αε). (1)

It remains to bound Σ_{j∈Jbad} min{wj, w(Ij) − wj}. We further divide Jbad into Jbad^H ∪ Jbad^M ∪ Jbad^L so that

  • for j ∈ Jbad^H, all items in Ij have efficiency at least ρb + 1/τ;

  • for j ∈ Jbad^M, some item in Ij has efficiency in the range [ρb − 1/τ, ρb + 1/τ];

  • for j ∈ Jbad^L, all items in Ij have efficiency at most ρb − 1/τ.

The proximity bound (Lemma 9) implies that

Σ_{j∈Jbad^H} (w(Ij) − wj) ≤ 2τ  and  Σ_{j∈Jbad^L} wj ≤ 2τ. (2)

Lemma 14 implies that |Jbad^M| ≤ 4 and |Ij| ≤ τ for each j ∈ Jbad^M. Recall that the item weights are at most 2. We have

Σ_{j∈Jbad^M} wj ≤ 4·τ·2 = 8τ. (3)

In view of (2) and (3), we have

Σ_{j∈Jbad} min{wj, w(Ij) − wj} ≤ Σ_{j∈Jbad^H} (w(Ij) − wj) + Σ_{j∈Jbad^L} wj + Σ_{j∈Jbad^M} wj ≤ 12τ. (4)

In view of (1) and (4), we have that S1 ⊕ S2 ⊕ ⋯ ⊕ Sm approximates (w, p) with additive error

O(ε·Σ_{j∈Jgood} wj + (1/(ατ))·Σ_{j∈Jbad} min{wj, w(Ij) − wj}) ≤ O(ε·(1/(αε)) + (1/(ατ))·12τ) = O(1/α).

The above lemma immediately implies the following, since S approximates S1 ⊕ S2 ⊕ ⋯ ⊕ Sm with factor 1 + O(ε).

Corollary 16.

S approximates (w, p) with additive error O(1/α).

We summarize this section by the following lemma.

Lemma 17.

There is an Õ(n + (1/ε)^{11/6})-time algorithm for the reduced problem, which is randomized and succeeds with high probability.

6 An Improved Algorithm with Exponent 7/4

Due to the space limit, we present only the most crucial part of the algorithm and the analysis. The missing part can be found in the full version of the paper.

In our Õ(n + (1/ε)^{11/6})-time algorithm, only the bad groups benefit from the proximity bound, which allows us to approximate them with the larger factor 1 + 1/(ατ), while for the good groups, we simply approximate with factor 1 + ε. To further improve the running time, we shall partition the items in a way such that all the groups can benefit from the proximity bound.

We assume that 1/2 ≤ α ≤ (1/ε)^{3/4}, since the algorithm in Lemma 8 already runs in Õ(n + (1/ε)^{7/4}) time for all α ≥ (1/ε)^{3/4}. We assume that I = {(w1, p1), …, (wn, pn)} and that p1/w1 ≥ ⋯ ≥ pn/wn.

We first partition I into the head group Ihead and the tail group Itail, where Ihead contains the first 1/(αε) + 1 items and Itail contains the rest of the items. Below we only give an algorithm for Ihead and analyze its additive error. The remaining analysis and the algorithm for Itail can be found in the full version.

6.1 Approximating Head Group

Let n′ = 1/(αε) + 1. We shall show that in Õ((1/ε)^{7/4}) time, we can compute a set S of size Õ(1/ε) that approximates 𝒮+(Ihead) with additive error Õ(1/α).

Let τ be a parameter that will be specified later. Roughly speaking, we will further partition Ihead into good groups and bad groups. The bad groups are the same as before: a bad group is a group of τ items whose efficiencies differ by at least 1/τ. For good groups, we will make them larger than before: when we obtain a good group I of size τ, we will keep adding items to I until the difference between the maximum and the minimum item efficiencies would exceed 1/|I| with the next item. A more precise description of the partition process is given below.

We partition Ihead into I1, …, Im as follows. Initially, j = 1 and k = 1, and the remaining items are {(wk, pk), …, (wn′, pn′)}. Let k′ be the minimum integer such that

(k′ − k)·(pk/wk − pk′/wk′) > 1.

If such k′ does not exist, we set Ij = {(wk, pk), …, (wn′, pn′)}, set Δj = Δ′j = pk/wk − pn′/wn′, set m := j, and finish. Assume that k′ exists.

  • If k′ − k ≥ τ, we set Ij = {(wk, pk), …, (wk′−1, pk′−1)}, set Δj = pk/wk − pk′−1/wk′−1, set Δ′j = pk/wk − pk′/wk′, and proceed with j := j + 1 and k := k′. In this case, we say Ij is a good group.

  • Otherwise, we let Ij = {(wk, pk), …, (wk+τ−1, pk+τ−1)}, set Δj = pk/wk − pk+τ−1/wk+τ−1, set Δ′j = pk/wk − pk+τ/wk+τ, and proceed with j := j + 1 and k := k + τ. In this case, we say that Ij is a bad group.

(We remark that Δj and Δ′j are not required by the algorithm. They are maintained only for the purpose of analysis. Δj is the actual difference between the maximum and minimum item efficiencies in Ij. For technical reasons, we also need Δ′j. Basically, compared to Δj, the gap Δ′j also includes the efficiency gap between the minimum efficiency in Ij and the maximum efficiency in Ij+1. We will use Δj to bound the time cost of Ij, and use Δ′j to create an efficiency gap for other groups.)
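A sketch of this partition process (with Δj and Δ′j recorded for the analysis; the handling of the boundary case where k + τ reaches n′ is our simplification):

```python
def partition_head(items, tau):
    """items: I_head sorted by non-increasing efficiency. Returns the
    groups together with the pairs (Delta_j, Delta'_j)."""
    eff = lambda it: it[1] / it[0]
    groups, deltas, k, n = [], [], 0, len(items)
    while k < n:
        # smallest k' > k with (k' - k)(eff_k - eff_{k'}) > 1, if any
        k2 = next((j for j in range(k + 1, n)
                   if (j - k) * (eff(items[k]) - eff(items[j])) > 1), None)
        if k2 is None:                      # last group: no such k'
            groups.append(items[k:])
            deltas.append((eff(items[k]) - eff(items[-1]),) * 2)
            break
        if k2 - k >= tau:                   # good group: items k .. k2-1
            groups.append(items[k:k2])
            deltas.append((eff(items[k]) - eff(items[k2 - 1]),
                           eff(items[k]) - eff(items[k2])))
            k = k2
        else:                               # bad group: the next tau items
            end = min(k + tau, n)
            nxt = min(end, n - 1)           # guard when k + tau reaches n
            deltas.append((eff(items[k]) - eff(items[end - 1]),
                           eff(items[k]) - eff(items[nxt])))
            groups.append(items[k:end])
            k = end
    return groups, deltas
```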

We have the following observations.

Observation 18.

The groups I1, …, Im satisfy the following properties.

  1. (i)

    m ≤ 1/(αετ) + 1.

  2. (ii)

    Σ_{j=1}^m |Ij| = |Ihead| = 1/(αε) + 1 and Σ_{j=1}^m Δj ≤ Σ_{j=1}^m Δ′j ≤ 3/2.

  3. (iii)

    For every group Ij, the efficiencies of the items in Ij differ by at most Δj.

  4. (iv)

    For every good group Ij, we have |Ij|·Δj ≤ 1 and |Ij|·Δ′j ≥ 1/2.

  5. (v)

    For every bad group Ij, we have |Ij|·Δ′j ≥ |Ij|·Δj > 1.

  6. (vi)

    For the last group Im, we have |Im|·Δm ≤ 1.

For each group, we shall approximate it in exactly the same way as we did for the bad groups in the Õ(n + (1/ε)^{11/6})-time algorithm, except that we shall use 1/(α|Ij|) as the accuracy parameter. More specifically, we use Corollary 12 and Corollary 13 to compute a set Sj of size Õ(α|Ij|) that approximates each tuple (w, p) ∈ 𝒮+(Ij) with additive error (1/(α|Ij|))·min{w, w(Ij) − w}. The time cost for Ij is Õ(|Ij| + α|Ij| + α²|Ij|²·Δj).

Then we compute a set Shead of size Õ(1/ε) that approximates S1 ⊕ ⋯ ⊕ Sm with factor 1 + O(ε) via Lemma 3.

Bounding Additive Error

We shall show that Shead approximates every tuple (w, p) ∈ 𝒮+(Ihead) with additive error Õ(1/α).

Let (w, p) be an arbitrary tuple in 𝒮+(Ihead). Let (wb, pb) be the breaking item with respect to w. Let ρb = pb/wb be the efficiency of the breaking item.

Let Ij* be the group containing the breaking item (wb, pb). To apply the proximity bound, we will show that the groups can be divided into O(log(1/ε)) collections so that for each collection {Ij : jk < j < jk+1}, the efficiency gap between ρb and the groups in this collection is at least the maximum Δ′j of these groups. To do this, we need the following auxiliary lemma. Its proof is deferred to the full version.

Lemma 19.

Let Δ1, …, Δn be a sequence of positive real numbers. Let Δmin and Δmax be the minimum and maximum numbers in the sequence. There exist h = O(log(Δmax/Δmin)) indices 1 = j1 < j2 < ⋯ < jh = n such that for any k ∈ {1, …, h−1}, we have that

max{Δj : jk < j < jk+1} ≤ min{Δj : jk+1 ≤ j ≤ n}, (5)

where the maximum of an empty set is defined to be 0.

Lemma 20.

S1 ⊕ ⋯ ⊕ Sm approximates (w, p) with additive error Õ(1/α).

Proof.

Since Ihead = I1 ∪ ⋯ ∪ Im and (w, p) ∈ 𝒮+(Ihead), we have that there is a tuple (wj, pj) ∈ 𝒮+(Ij) for each j such that (w, p) = (w1, p1) + ⋯ + (wm, pm). Recall that Sj approximates (wj, pj) with additive error δj = (1/(α|Ij|))·min{wj, w(Ij) − wj}. It suffices to bound Σj δj. Let Ij* be the group containing the breaking item (wb, pb). Since the weights of the items are in [1,2], we have min{wj, w(Ij) − wj} ≤ w(Ij)/2 ≤ |Ij|, and hence δj ≤ 1/α for any j. Therefore, δj* + δm ≤ 2/α. It remains to consider Σ_{j=1}^{j*−1} δj and Σ_{j=j*+1}^{m−1} δj. We shall bound them using Lemma 19 and the proximity bound (Lemma 9).

Consider Δ′1, Δ′2, …, Δ′_{j*−1}. We shall apply Lemma 19 to them. Let Δ′max = max_{j ≤ j*−1} Δ′j and let Δ′min = min_{j ≤ j*−1} Δ′j. Note that Δ′max ≤ 3/2 since the items are from [1,2] × [1,2], and that Δ′min ≥ 1/(2·Σj |Ij|) ≥ αε/2 ≥ ε/4 due to Observation 18(ii), (iv), and (v). Therefore, log(Δ′max/Δ′min) = O(log(1/ε)). By Lemma 19, there exist h = O(log(1/ε)) indices 1 = j1 < j2 < ⋯ < jh = j*−1 such that for any k ∈ {1, …, h−1}, we have that

max{Δ′j : jk < j < jk+1} ≤ min{Δ′j : jk+1 ≤ j ≤ j*−1}. (6)

Fix some k. Consider the groups Ij with jk < j < jk+1. Let Δ = max{Δ′j : jk < j < jk+1}. Inequality (6) implies that the items in these groups Ij have efficiencies at least ρb + Δ. By Lemma 9, we have

Σ_{jk<j<jk+1} (w(Ij) − wj) ≤ 2/Δ. (7)

Also note that for each Ij with jk < j < jk+1, we have 1/|Ij| ≤ 2Δ′j ≤ 2Δ (due to Observation 18(iv) and (v)). Therefore,

Σ_{jk<j<jk+1} δj ≤ Σ_{jk<j<jk+1} (w(Ij) − wj)/(α|Ij|) ≤ Σ_{jk<j<jk+1} 2Δ·(w(Ij) − wj)/α ≤ 4/α.

The last inequality is due to (7). Recall that δj ≤ 1/α for any j. We have the following.

Σ_{j=1}^{j*−1} δj = Σ_{k=1}^{h} δ_{jk} + Σ_{k=1}^{h−1} Σ_{jk<j<jk+1} δj = O(h/α) = Õ(1/α).

In a symmetric way, we can show that Σ_{j=j*+1}^{m−1} δj ≤ Õ(1/α).

The above lemma immediately implies the following, since Shead approximates S1 ⊕ ⋯ ⊕ Sm with factor 1 + O(ε).

Corollary 21.

Shead approximates (w, p) with additive error Õ(1/α).

The remaining analysis and the algorithm for Itail can be found in the full version. We summarize this section with the following lemma.

Lemma 22.

There is an Õ(n + (1/ε)^{7/4})-time algorithm for the reduced problem, which is randomized and succeeds with high probability.

References

  • [1] Alok Aggarwal, Maria M. Klawe, Shlomo Moran, Peter Shor, and Robert Wilber. Geometric applications of a matrix-searching algorithm. Algorithmica, 2(1):195–208, November 1987. doi:10.1007/BF01840359.
  • [2] Kyriakos Axiotis and Christos Tzamos. Capacitated Dynamic Programming: Faster Knapsack and Graph Algorithms. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019), pages 19:1–19:13. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2019. doi:10.4230/LIPIcs.ICALP.2019.19.
  • [3] David Bremner, Timothy M. Chan, Erik D. Demaine, Jeff Erickson, Ferran Hurtado, John Iacono, Stefan Langerman, Mihai Pǎtraşcu, and Perouz Taslakian. Necklaces, convolutions, and X + Y. Algorithmica, 69(2):294–314, 2014. doi:10.1007/S00453-012-9734-3.
  • [4] Karl Bringmann. A Near-Linear Pseudopolynomial Time Algorithm for Subset Sum. In Proceedings of the 2017 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2017), pages 1073–1084. Society for Industrial and Applied Mathematics, 2017. doi:10.1137/1.9781611974782.69.
  • [5] Karl Bringmann. Knapsack with small items in near-quadratic time. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing (STOC 2024), pages 259–270. Association for Computing Machinery, 2024. doi:10.1145/3618260.3649719.
  • [6] Karl Bringmann and Alejandro Cassis. Faster Knapsack Algorithms via Bounded Monotone Min-Plus-Convolution. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022), pages 31:1–31:21. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2022. doi:10.4230/LIPIcs.ICALP.2022.31.
  • [7] Karl Bringmann and Alejandro Cassis. Faster 0-1-Knapsack via Near-Convex Min-Plus-Convolution. In 31st Annual European Symposium on Algorithms (ESA 2023), pages 24:1–24:16. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2023. doi:10.4230/LIPIcs.ESA.2023.24.
  • [8] Karl Bringmann, Anita Dürr, and Adam Polak. Even faster knapsack via rectangular monotone min-plus convolution and balancing. In Timothy M. Chan, Johannes Fischer, John Iacono, and Grzegorz Herman, editors, 32nd Annual European Symposium on Algorithms, ESA 2024, September 2-4, 2024, Royal Holloway, London, United Kingdom, volume 308 of LIPIcs, pages 33:1–33:15. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2024. doi:10.4230/LIPICS.ESA.2024.33.
  • [9] Karl Bringmann and Vasileios Nakos. A Fine-Grained Perspective on Approximating Subset Sum and Partition. In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA 2021), pages 1797–1815. Society for Industrial and Applied Mathematics, 2021. doi:10.1137/1.9781611976465.108.
  • [10] Timothy M. Chan. Approximation Schemes for 0-1 Knapsack. In 1st Symposium on Simplicity in Algorithms (SOSA 2018), pages 5:1–5:12. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2018. doi:10.4230/OASIcs.SOSA.2018.5.
  • [11] Timothy M. Chan and Moshe Lewenstein. Clustered Integer 3SUM via Additive Combinatorics. In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing (STOC 2015), pages 31–40. Association for Computing Machinery, 2015. doi:10.1145/2746539.2746568.
  • [12] Timothy M. Chan and Ryan Williams. Deterministic apsp, orthogonal vectors, and more: Quickly derandomizing razborov-smolensky. In Proceedings of the 2016 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2016), pages 1246–1255, 2016. doi:10.1137/1.9781611974331.ch87.
  • [13] Lin Chen, Jiayi Lian, Yuchen Mao, and Guochuan Zhang. Approximating partition in near-linear time. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing (STOC 2024), pages 307–318. Association for Computing Machinery, 2024. doi:10.1145/3618260.3649727.
  • [14] Lin Chen, Jiayi Lian, Yuchen Mao, and Guochuan Zhang. Faster algorithms for bounded knapsack and bounded subset sum via fine-grained proximity results. In Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2024), pages 4828–4848. Society for Industrial and Applied Mathematics, 2024. doi:10.1137/1.9781611977912.171.
  • [15] Lin Chen, Jiayi Lian, Yuchen Mao, and Guochuan Zhang. A nearly quadratic-time fptas for knapsack. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing (STOC 2024), pages 283–294. Association for Computing Machinery, 2024. doi:10.1145/3618260.3649730.
  • [16] Shucheng Chi, Ran Duan, Tianle Xie, and Tianyi Zhang. Faster min-plus product for monotone instances. In Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing (STOC 2022), pages 1529–1542. Association for Computing Machinery, 2022. doi:10.1145/3519935.3520057.
  • [17] Marek Cygan, Marcin Mucha, Karol Węgrzycki, and Michał Włodarczyk. On problems equivalent to (min,+)-convolution. ACM Transactions on Algorithms, 15(1):1–25, January 2019. doi:10.1145/3293465.
  • [18] Mingyang Deng, Ce Jin, and Xiao Mao. Approximating Knapsack and Partition via Dense Subset Sums. In Proceedings of the 2023 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2023), pages 2961–2979. Society for Industrial and Applied Mathematics, 2023. doi:10.1137/1.9781611977554.ch113.
  • [19] Friedrich Eisenbrand and Robert Weismantel. Proximity Results and Faster Algorithms for Integer Programming Using the Steinitz Lemma. ACM Transactions on Algorithms, 16(1):5:1–5:14, November 2019. doi:10.1145/3340322.
  • [20] Aleksei V. Fishkin, Olga Gerber, Klaus Jansen, and Roberto Solis-Oba. On packing squares with resource augmentation: maximizing the profit. In Proceedings of the 2005 Australasian Symposium on Theory of Computing, CATS ’05, pages 61–67, AUS, 2005. Australian Computer Society, Inc. URL: https://dl.acm.org/doi/10.5555/1082260.1082268.
  • [21] Oscar H. Ibarra and Chul E. Kim. Fast Approximation Algorithms for the Knapsack and Sum of Subset Problems. Journal of the ACM, 22(4):463–468, October 1975. doi:10.1145/321906.321909.
  • [22] Kazuo Iwama and Guochuan Zhang. Online knapsack with resource augmentation. Information Processing Letters, 110(22):1016–1020, 2010. doi:10.1016/j.ipl.2010.08.013.
  • [23] Klaus Jansen and Stefan E. J. Kraft. A faster FPTAS for the Unbounded Knapsack Problem. European Journal of Combinatorics, 68:148–174, February 2018. doi:10.1016/j.ejc.2017.07.016.
  • [24] Ce Jin. An improved FPTAS for 0-1 knapsack. In Proceedings of 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019), pages 76:1–76:14. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2019. doi:10.4230/LIPIcs.ICALP.2019.76.
  • [25] Ce Jin. 0-1 Knapsack in Nearly Quadratic Time. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing (STOC 2024), pages 271–282. Association for Computing Machinery, 2024. doi:10.1145/3618260.3649618.
  • [26] Richard M. Karp. Reducibility among combinatorial problems. In Complexity of Computer Computations, pages 85–103. Springer US, Boston, MA, 1972. doi:10.1007/978-1-4684-2001-2_9.
  • [27] Hans Kellerer, Renata Mansini, Ulrich Pferschy, and Maria Grazia Speranza. An efficient fully polynomial approximation scheme for the Subset-Sum Problem. Journal of Computer and System Sciences, 66(2):349–370, March 2003. doi:10.1016/S0022-0000(03)00006-0.
  • [28] Hans Kellerer and Ulrich Pferschy. A New Fully Polynomial Time Approximation Scheme for the Knapsack Problem. Journal of Combinatorial Optimization, 3(1):59–71, July 1999. doi:10.1023/A:1009813105532.
  • [29] Hans Kellerer and Ulrich Pferschy. Improved Dynamic Programming in Connection with an FPTAS for the Knapsack Problem. Journal of Combinatorial Optimization, 8(1):5–11, March 2004. doi:10.1023/B:JOCO.0000021934.29833.6b.
  • [30] Hans Kellerer, Ulrich Pferschy, and David Pisinger. Knapsack Problems. Springer, Berlin Heidelberg, 2004. doi:10.1007/978-3-540-24777-7.
  • [31] Konstantinos Koiliaris and Chao Xu. Faster Pseudopolynomial Time Algorithms for Subset Sum. ACM Transactions on Algorithms, 15(3):1–20, July 2019. doi:10.1145/3329863.
  • [32] Marvin Künnemann, Ramamohan Paturi, and Stefan Schneider. On the Fine-Grained Complexity of One-Dimensional Dynamic Programming. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017), pages 21:1–21:15. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2017. doi:10.4230/LIPIcs.ICALP.2017.21.
  • [33] Eugene L. Lawler. Fast Approximation Algorithms for Knapsack Problems. Mathematics of Operations Research, 4(4):339–356, November 1979. doi:10.1287/MOOR.4.4.339.
  • [34] Xiao Mao. (1−ϵ)-approximation of knapsack in nearly quadratic time. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing (STOC 2024), pages 295–306. Association for Computing Machinery, 2024. doi:10.1145/3618260.3649677.
  • [35] Marcin Mucha, Karol Węgrzycki, and Michał Włodarczyk. A Subquadratic Approximation Scheme for Partition. In Proceedings of the 2019 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2019), pages 70–88. Society for Industrial and Applied Mathematics, 2019. doi:10.1137/1.9781611975482.5.
  • [36] Adam Polak, Lars Rohwedder, and Karol Węgrzycki. Knapsack and Subset Sum with Small Items. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021), pages 106:1–106:19. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2021. doi:10.4230/LIPIcs.ICALP.2021.106.
  • [37] Donguk Rhee. Faster fully polynomial approximation schemes for knapsack problems. PhD thesis, Massachusetts Institute of Technology, 2015. URL: https://dspace.mit.edu/handle/1721.1/98564.
  • [38] Ryan Williams. Faster all-pairs shortest paths via circuit complexity. In Proceedings of the Forty-Sixth Annual ACM Symposium on Theory of Computing (STOC 2014), pages 664–673. Association for Computing Machinery, 2014. doi:10.1145/2591796.2591811.