Fully Dynamic Bin Packing Revisited

We consider the fully dynamic bin packing problem, where items arrive and depart in an online fashion and repacking of previously packed items is allowed. The goal is to minimize both the number of bins used and the amount of repacking. A recently introduced way of measuring the repacking costs at each timestep is the migration factor, defined as the total size of repacked items divided by the size of an arriving or departing item. Concerning the trade-off between the number of bins and the migration factor, if we wish to achieve an asymptotic competitive ratio of $1 + \epsilon$ for the number of bins, a relatively simple argument proves a lower bound of $\Omega(\frac{1}{\epsilon})$ for the migration factor. We establish a nearly matching upper bound of $O(\frac{1}{\epsilon^4} \log \frac{1}{\epsilon})$ using a new dynamic rounding technique and new ideas to handle small items in a dynamic setting such that no amortization is needed. The running time of our algorithm is polynomial in the number of items $n$ and in $\frac{1}{\epsilon}$. The previous best trade-off was an asymptotic competitive ratio of $\frac{5}{4}$ for the bins (rather than $1+\epsilon$) and needed an amortized number of $O(\log n)$ repackings (while in our scheme the number of repackings is independent of $n$ and non-amortized).


Introduction
Consider the fully dynamic bin packing problem, where items have to be inserted or removed in an online fashion. At each timestep t, a set of items I(t) with sizes in (0, 1] is given and has to be packed into unit-sized bins such that the number of used bins is minimized. A future instance I(t + 1) may either contain an additional item i, i. e., I(t) ∪ {i}, or an existing item i may have to be removed, i. e., I(t) \ {i}. It is not known in advance whether the next operation is an arrival or a departure, nor which items arrive or depart. This kind of bin packing problem often arises in the context of server consolidation, where services have to be distributed onto servers [AT07,ZCB10]. Each service uses a certain shared resource that is provided by the servers, and the goal is to minimize the number of used servers while making sure that no server is overloaded. The idea of a dynamic bin packing setting was developed by Coffman, Garey and Johnson in 1983 [CGJ83]. They developed and analyzed an offline algorithm for the dynamic bin packing problem where arrival and departure of items are known in advance. Ivković and Lloyd introduced the fully dynamic bin packing problem in 1998 [IL98] as a generalization of the well-known online bin packing problem. In contrast to the fully dynamic setting, items may not depart in the classical online bin packing problem. Online bin packing was introduced by Ullman [Ull71] and has seen enormous research since then (see the survey of Seiden [Sei02] for an overview). The quality of an online algorithm is typically measured by the value of its solution divided by the value of an optimal offline solution, asymptotically; this measure was named the (asymptotic) competitive ratio by Sleator and Tarjan [ST85].
In the case of online bin packing without departures, the best known algorithm has a competitive ratio of 1.58889 [Sei02]. On the other hand, it was shown that no algorithm can achieve a ratio better than 1.54037 [BBG10]. Unfortunately, in the case of fully dynamic bin packing, where departure of items is allowed, no bounded competitive ratio is possible at all. Ivković and Lloyd therefore allow several already packed items to be repacked whenever an item is inserted or deleted. They developed an algorithm that achieves an asymptotic competitive ratio of 5/4 using amortized O(log n) shifting moves, where n is the number of packed items. A shifting move repacks one large item or a bundle of small items of bounded total size. Note that a bundle of small items may contain up to Ω(n) (very small) items [BBGR08]. Balogh et al. [BBGR08] proved a lower bound of 1.3871 on the competitive ratio of any algorithm that moves at most O(1) items per step, even if no deletion is used. This expands upon a lower bound of 4/3 presented by Ivković and Lloyd [IL96] for the fully dynamic bin packing model. So there are fixed lower bounds on the competitive ratio when the amount of repacking is measured by the number of repacked items.
A modern way to measure repacking is the notion of the migration factor, developed by Sanders et al. [SSS09]. It is defined as the total size of all moved items divided by the size of the arriving or departing item. Following the notation of Sanders et al., we will use the term (online) approximation ratio instead of competitive ratio. In the migration factor model, the arrival or departure of a small item allows only a small total size of items to be repacked.
The asymptotic approximation ratio of an offline algorithm A is formally defined by lim sup_{n→∞} sup_I { A(I)/opt(I) | opt(I) = n }. This leads to the notion of an asymptotic polynomial time approximation scheme (APTAS). Given an instance of size n and a fixed parameter ǫ ∈ (0, 1], an APTAS has a running time of poly(n) and asymptotic approximation ratio 1 + ǫ. A typical running time for this class of algorithms is O(n^{f(1/ǫ)}) for an arbitrary function f. An APTAS is called an asymptotic fully polynomial time approximation scheme (AFPTAS) if its running time is polynomial in n and in 1/ǫ. The first APTAS for offline bin packing was developed by Fernandez de la Vega and Lueker [FdlVL81], and Karmarkar and Karp improved this result by giving an AFPTAS [KK82] (see the survey on bin packing [CGJ97]). An online algorithm with asymptotic approximation ratio 1 + ǫ is called robust if its migration factor is of size f(1/ǫ), where f is an arbitrary function that depends only on 1/ǫ.

Our results:
Since the work of Ivković and Lloyd from 1998 [IL98], no progress was made on the fully dynamic bin packing problem concerning the (asymptotic) competitive ratio of 5/4. It was also unclear whether the number of shifting moves (respectively the migration factor) must depend on the number of packed items n. In this paper we give positive answers to both questions. We develop an algorithm that provides at each timestep t an approximation guarantee of (1 + ǫ) opt(I(t)) + O(1/ǫ · log(1/ǫ)). The algorithm uses a migration factor of O(1/ǫ^4 · log(1/ǫ)) by repacking at most O(1/ǫ^3 · log(1/ǫ)) bins. Hence, the generated solution can be arbitrarily close to the optimal solution, and for every fixed ǫ the provided migration factor is constant (it does not depend on the number of packed items). The running time is polynomial in n and 1/ǫ. In the case that no deletions are used, the algorithm has a migration factor of O(1/ǫ^3 · log(1/ǫ)), which beats the best known migration factor of O(1/ǫ^4) by Jansen and Klein [JK13]. Since the number of repacked bins is bounded, so is the number of shifting moves. Furthermore, we prove that there is no asymptotic approximation scheme for the online bin packing problem with a migration factor of o(1/ǫ), even in the case that no items depart (and even if P = NP).
We use the following techniques to achieve our results:
• In order to obtain a lower bound on the migration factor in Section 2, we construct a series of instances. To maintain an asymptotic approximation ratio of 1 + ǫ, these instances need a migration factor of at least Ω(1/ǫ).
• In Section 3, we show how to handle large items in a fully dynamic setting. The fully dynamic setting involves more difficulties in the rounding procedure than the setting where large items may not depart (treated in [JK13]). A simple adaptation of the dynamic techniques developed in [JK13] does not work (see the introduction of Section 3). We modify the offline rounding technique by Karmarkar and Karp [KK82] such that a feasible rounding structure can be maintained when items are inserted or removed. This way, we can make use of the LP techniques developed in Jansen and Klein [JK13].
• In Section 4, we explain how to deal with small items in a dynamic setting. In contrast to the setting where departure of items is not allowed, the fully dynamic setting provides major challenges in the treatment of small items. We therefore develop an approach where small items of similar size are packed near each other. We describe how this structure can be maintained as new items arrive or depart. With this new approach, no amortization is needed, i. e., repacking need not be reserved for a later timestep.
• In order to unify the different approaches for small and large items in Section 4.2, we give an advanced structure for the packing. We develop novel techniques and ideas to manage this mixed setting of small and large items. The advanced structure makes use of a potential function, which bounds the number of bins that need to be reserved for incoming items.

Related results:
Since the introduction of the migration factor, several problems have been considered in this model and different robust algorithms have been developed for them. In the case of online bin packing (without deletion), Epstein and Levin [EL09] developed the first APTAS for the problem using a migration factor of 2^{O((1/ǫ^2) log(1/ǫ))}. They also proved that there is no online algorithm for this problem that has a constant migration factor and maintains an optimal solution. The APTAS by Epstein and Levin was later improved by Jansen and Klein [JK13], who developed an AFPTAS for the problem with migration factor O(1/ǫ^4). In their paper, they developed new linear program (LP)/integer linear program (ILP) techniques, which we make use of to obtain polynomial migration. It was shown by Epstein and Levin [EL13] that their APTAS for bin packing can be generalized to packing d-dimensional cubes into unit cubes. Sanders et al. [SSS09] developed a robust polynomial time approximation scheme (PTAS) for the scheduling problem on identical machines with a migration factor of 2^{O((1/ǫ) log^2(1/ǫ))}. Skutella and Verschae [SV10] studied the problem of maximizing the minimum load given n jobs and m identical machines. They also considered a dynamic setting, where jobs may depart. They showed that there is no robust PTAS for this machine covering problem with constant migration; the main obstacle is the presence of very small jobs. By using an amortized migration factor, they developed a PTAS for the problem with amortized migration of 2^{O((1/ǫ) log^2(1/ǫ))}. Before the introduction of the migration factor, other algorithms were developed that consider online bin packing (without deletion) with repacking in order to improve on the competitive ratio. For example, Gambosi et al. [GPT00] found an algorithm achieving an (asymptotic) competitive ratio of 1.33 using at most 7 shifting moves.
By allowing amortized O(log n) shifting moves, Ivković and Lloyd [IL97] achieved an asymptotic competitive ratio of 1 + ǫ. Note that this does not contradict the lower bound of Balogh et al., as a single shifting move may relocate Ω(n) very small items [BBGR08].

Lower Bound
We start by showing that there is no robust (asymptotic) approximation scheme for bin packing with migration factor o(1/ǫ), even if P = NP. This improves the lower bound given by Epstein and Levin [EL09], which states that no algorithm for bin packing that maintains an optimal solution can have a constant migration factor. Previously it was not clear whether there exists a robust approximation algorithm for bin packing with sublinear or even constant migration factor.
Theorem 1. For a fixed migration factor c, there is no robust approximation algorithm for bin packing with asymptotic approximation ratio better than 1 + 1/(6c + 5).
Proof. Let A be an approximation algorithm with migration factor c. We construct an instance such that the asymptotic approximation ratio of A is at least 1 + 1/(6c + 5). The instance contains only two types of items: an A-item has size a = 3/(2(3c + 2)) and a B-item has size b = 1/2 − a/3. For M ∈ N, let (b, Insert), . . . , (b, Insert), (a, Insert), . . . , (a, Insert) be the instance consisting of 2M insertions of B-items, followed by 2M(c + 1) insertions of A-items. Denote by r(t) the approximation ratio of the algorithm at time t ∈ N. The approximation ratio of the algorithm is thus r = max_t {r(t)}.
The insertion of the B-items produces a packing with β1 bins containing a single B-item and β2 bins containing two B-items. These are the only possible packings and hence β1 + 2β2 = 2M. The optimal solution is reached for β1 = 0, β2 = M. We thus have an approximation ratio of r1 := (β1 + β2)/M = (2M − β2)/M, which is strictly monotonically decreasing in β2. The A-items, which are inserted afterwards, may either be put into bins which contain only A-items or into bins which contain exactly one B-item; the choice of a and b implies 2b + a > 1, which shows that no A-item can be put into a bin containing two B-items. Denote by α the number of bins containing only A-items. The existing B-items may not be moved, as the choice of a and b implies b > c · a. At most (1/2 + a/3)/a = c + 1 items of type A may be put into each bin containing exactly one B-item. Note that a bin which contains one B-item and c + 1 items of type A is filled completely. The optimal packing thus consists of 2M such bins, and the approximation ratio of the final solution is r2 := r(2M(c + 2)) = (β1 + β2 + α)/(2M). There are at most β1 · (c + 1) items of type A which can be put into bins containing exactly one B-item. The remaining (2M − β1)(c + 1) items of type A therefore need to be put into bins containing only A-items, so α ≥ (2M − β1)(c + 1)a = (2M − 2M + 2β2)(c + 1)a = 2β2(c + 1)a. As noted above, (1/2 + a/3)/a = c + 1 and thus (c + 1)a = 1/2 + a/3. Hence the approximation ratio is at least r2 ≥ (β1 + β2 + 2β2(1/2 + a/3))/(2M) = (2M + (2a/3)β2)/(2M) = 1 + aβ2/(3M), which is strictly monotonically increasing in β2. As r ≥ max{r1, r2}, a lower bound on the approximation ratio is obtained by setting r1 = r2, i. e., (2M − β)/M = 1 + aβ/(3M) for a certain β = β2. Solving this equation leads to β = M/(a/3 + 1). The lower bound is thus r = 1 + aβ/(3M) = 1 + (a/3)/(a/3 + 1) = 1 + a/(a + 3) = 1 + 1/(6c + 5) by the choice of a. Note that this lower bound is independent of M. Hence, r is also a lower bound on the asymptotic approximation ratio of any algorithm, as the instance size grows with M.
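The arithmetic of the proof can be checked mechanically. The following sketch (the helper name is ours, not from the paper) recomputes the balance-point ratio with exact rationals and confirms the simplification to 1 + 1/(6c + 5):

```python
from fractions import Fraction

def lower_bound_ratio(c: int) -> Fraction:
    # A-item size from the proof: a = 3 / (2(3c + 2))
    a = Fraction(3, 2 * (3 * c + 2))
    # sanity check from the proof: (1/2 + a/3) / a = c + 1
    assert (Fraction(1, 2) + a / 3) / a == c + 1
    # at the balance point r1 = r2, the ratio is r = 1 + a/(a + 3)...
    r = 1 + a / (a + 3)
    # ...which simplifies to 1 + 1/(6c + 5)
    assert r == 1 + Fraction(1, 6 * c + 5)
    return r
```

For instance, `lower_bound_ratio(1)` returns 12/11, i.e., a migration factor of 1 forces an asymptotic ratio of at least 1 + 1/11.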

Dynamic Rounding
The goal of this section is to give a robust AFPTAS for the case that only large items arrive and depart. In the first subsection we present a general rounding structure. In the second subsection we give operations by which the rounding can be modified such that the general structure is preserved. In Section 3.3 we give the final algorithm, which is performed when large items arrive or depart. Finally, the correctness is proved using the LP/ILP techniques developed in [JK13].
In [JK13], the last two authors developed a dynamic rounding technique based on an offline rounding technique of Fernandez de la Vega and Lueker [FdlVL81]. However, a simple adaptation of these techniques does not work in the dynamic case where items may also depart. In the offline rounding of Fernandez de la Vega and Lueker, items are sorted and then collected in groups of the same cardinality. As a new item arrives in an online fashion, this structure can be maintained by inserting the new item into its corresponding group. By shifting the largest item of each group to the left, the cardinality of each group (except for the first one) can be maintained. However, shifting items to the right whenever an item departs leads to difficulties in the LP/ILP techniques: as the rounding for a group may increase, patterns of the existing LP/ILP solution might become infeasible. We overcome these difficulties by developing a new dynamic rounding structure and operations based on the offline rounding technique of Karmarkar and Karp [KK82]. We found the dynamic rounding technique based on Karmarkar and Karp easier to analyze, since the structure can essentially be maintained by shifting items.
A bin packing instance consists of a set of items I = {i1, i2, . . . , in} with size function s : I → [0, 1] ∩ Q. A feasible solution is a partition B1, . . . , Bk of I such that Σ_{i∈Bj} s(i) ≤ 1 for j = 1, . . . , k. We call the partition B1, . . . , Bk a packing, and a single set Bj is called a bin. The goal is to find a solution with a minimal number of bins. If the item i is packed into the bin Bj, we write B(i) = j. The smallest k ∈ N such that a packing with k bins exists is denoted by opt(I, s), or by opt(I) if the size function is clear. A trivial lower bound is given by the value size(I, s) = Σ_{i∈I} s(i).
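As an illustration of these definitions, a few lines of Python (with helper names of our own choosing) check feasibility of a packing and compute the trivial lower bound size(I, s):

```python
from fractions import Fraction as F

def is_feasible(packing, s):
    # a packing B_1, ..., B_k is feasible if sum_{i in B_j} s(i) <= 1 for each bin
    return all(sum(s[i] for i in bin_) <= 1 for bin_ in packing)

def size_lb(items, s):
    # size(I, s) = sum_{i in I} s(i), a trivial lower bound on opt(I, s)
    return sum(s[i] for i in items)

s = {"i1": F(1, 2), "i2": F(1, 2), "i3": F(3, 4)}
assert is_feasible([["i1", "i2"], ["i3"]], s)    # two bins, both fit
assert not is_feasible([["i1", "i2", "i3"]], s)  # total size 7/4 > 1
assert size_lb(["i1", "i2", "i3"], s) == F(7, 4)
```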

Rounding
To obtain an LP formulation of fixed (independent of |I|) dimension, we use a rounding technique based on the offline AFPTAS by Karmarkar and Karp [KK82]. In order to use the technique for our dynamic setting, we give a more general rounding. This generalized rounding has a certain structure that is maintained throughout the algorithm and guarantees an approximate solution for the original instance. First, we divide the set of items into small ones and large ones. An item i is called small if s(i) < ǫ /14, otherwise it is called large. Instance I is partitioned accordingly into a set of large items I L and a set of small items I S . We treat small items and large items differently. Small items can be packed using an algorithm presented in Section 4.1 while large items will be assigned using an ILP. In this section we discuss how to handle large items.
First, we characterize the set of large items more precisely by their sizes. We say that two large items i, i′ are in the same size category if there is an ℓ ∈ N such that s(i) ∈ (2^{−(ℓ+1)}, 2^{−ℓ}] and s(i′) ∈ (2^{−(ℓ+1)}, 2^{−ℓ}]. Denote the set of all size categories by W. As every large item has size at least ǫ/14, the number of size categories is bounded by log(1/ǫ) + 5. Next, items of the same size category are characterized by their block, which is either A or B, and their position r ∈ N in this block. Therefore, we partition the set of large items into a set of groups G ⊆ W × {A, B} × N. A group g ∈ G is a triple (ℓ, X, r) with size category ℓ ∈ W, block X ∈ {A, B} and position r ∈ N. The rounding function is defined as a function R : I_L → G that maps each large item i ∈ I_L to a group g ∈ G. By g[R] we denote the set of items mapped to the group g, i. e., g[R] = {i ∈ I_L | R(i) = g}.
Let q(ℓ, X) be the maximal r ∈ N such that |(ℓ, X, r)[R]| > 0. If (ℓ, X1, r1) and (ℓ, X2, r2) are two different groups, we say that (ℓ, X1, r1) is left of (ℓ, X2, r2) if X1 = A and X2 = B, or if X1 = X2 and r1 < r2. We say that (ℓ, X1, r1) is right of (ℓ, X2, r2) if it is not left of it. Given an instance (I, s) and a rounding function R, we define the rounded size function s_R by rounding the size of every large item i ∈ g[R] up to the size of the largest item in its group, hence s_R(i) = max {s(i′) | R(i′) = R(i)}. We denote by opt(I, s_R) the value of an optimal solution of the rounded instance (I, s_R).
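To make the rounding concrete, here is a small sketch (function names are ours) that computes the size category of an item and the rounded size function s_R from a given rounding function R:

```python
from collections import defaultdict

def size_category(size):
    # the l in W with size in (2^-(l+1), 2^-l]
    l = 0
    while size <= 2 ** -(l + 1):
        l += 1
    return l

def rounded_sizes(R, s):
    # s_R rounds each item up to the largest size in its group g[R]
    largest = defaultdict(int)
    for i, g in R.items():
        largest[g] = max(largest[g], s[i])
    return {i: largest[R[i]] for i in R}

# two items in group (1, 'A', 0), one in group (1, 'A', 1)
R = {"x": (1, "A", 0), "y": (1, "A", 0), "z": (1, "A", 1)}
s = {"x": 0.30, "y": 0.28, "z": 0.27}
assert size_category(0.30) == 1 and size_category(0.60) == 0
assert rounded_sizes(R, s) == {"x": 0.30, "y": 0.30, "z": 0.27}
```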
The following lemma shows that the rounding function does in fact yield a (1 + ǫ)-approximation. We now prove that the error generated by this rounding is bounded by ǫ. As each solution to J ∪ K yields a solution to J and a solution to K, we get opt(J ∪ K, s_R) ≤ opt(J, s_R) + opt(K, s_R).
We can therefore pack at least 2^ℓ items from (ℓ, A, 0)[R] ∪ (ℓ, A, 1)[R] into a single bin and, using property (c), bound opt(K, s_R). Using property (b), for each item in (ℓ, X, r + 1)[R] we find a unique larger item in (ℓ, X, r)[R]. Therefore, for every item in the rounded instance (J, s_R) there is an item of larger size in the instance (I, s), and hence opt(J, s_R) ≤ opt(I, s).
Combining these bounds on opt(J, s_R) and opt(K, s_R) bounds the optimal value of the rounded solution. We therefore have a rounding function which generates only O(1/ǫ · log(1/ǫ)) different item sizes, while the generated error is bounded by ǫ.

Rounding Operations
Let us consider the case where large items arrive and depart in an online fashion. Formally, this is described by a sequence of pairs (i1, A1), . . . , (in, An), where At ∈ {Insert, Delete}. At each time t ∈ {1, . . . , n} we need to pack the item it into the corresponding packing of i1, . . . , i_{t−1} if At = Insert, or remove the item it from that packing if At = Delete. We denote the instance i1, . . . , it at time t by I(t) and the corresponding packing by Bt. We also round our items and denote the rounding function at time t by Rt. The large items of I(t) are denoted by I_L(t). At time t we are allowed to repack items with a total size of β · s(it), but we intend to keep the migration factor β as small as possible. The term repack(t) = Σ_{i : B_{t−1}(i) ≠ B_t(i)} s(i) denotes the total size of the items which are moved at time t; the migration factor β of an algorithm is then defined as max_t {repack(t)/s(it)}. As the value of size(I_L(t)) also changes over time, we define the value κ(t) = size(I_L(t)) · ǫ / (2(⌊log(1/ǫ)⌋ + 5)). As shown in Lemma 1, we will make use of the value k(t) := ⌊κ(t)⌋. We present operations that modify the current rounding Rt and packing Bt with its corresponding LP/ILP solutions to give a solution for the new instance I(t + 1). At every time t the rounding Rt maintains properties (a) to (d). Therefore the rounding provides an asymptotic approximation ratio of 1 + ǫ (Lemma 2) while maintaining only O(1/ǫ · log(1/ǫ)) many groups (Lemma 1). We now present a way to adapt this rounding to a dynamic setting, where items arrive or depart online.
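The migration-factor bookkeeping can be illustrated as follows (a sketch with names of our own; packings are represented as maps from items to bin indices):

```python
def repack(B_prev, B_cur, s):
    # repack(t): total size of items whose bin changed between B_{t-1} and B_t
    return sum(s[i] for i in B_cur if i in B_prev and B_prev[i] != B_cur[i])

def migration_factor(steps):
    # steps: per time t a pair (s(i_t), repack(t)); beta = max_t repack(t)/s(i_t)
    return max(moved / sz for sz, moved in steps)

B0 = {"a": 1, "b": 1, "c": 2}
B1 = {"a": 1, "b": 2, "c": 2, "d": 2}  # item d arrived, item b was repacked
s = {"a": 0.5, "b": 0.4, "c": 0.3, "d": 0.2}
assert repack(B0, B1, s) == 0.4
assert migration_factor([(0.2, 0.4)]) == 2.0   # repacked size 0.4 per arrival of size 0.2
```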
Our rounding R t is manipulated by different operations, called the insert, delete, shiftA and shiftB operation. Some ideas behind the operations are inspired by Epstein and Levin [EL09]. The insert operation is performed whenever a large item arrives and the delete operation is performed whenever a large item departs. The shiftA/shiftB operations are used to modify the number of groups that are contained in the A and B block. As we often need to filter the largest items of a group g belonging to a rounding R, we denote this item by λ(g, R).
Whenever a shift operation on (ℓ, X1, r1) and (ℓ, X2, r2) is performed, the LP solution x and the corresponding ILP solution y are updated to x′ and y′. Let Ci be a configuration containing λ((ℓ, X2, r2), R) with yi ≥ 1; set x′i = xi − 1 and y′i = yi − 1. In order to add the new item in (ℓ, X1, r1), set x′h = xh + 1 and y′h = yh + 1 for the index h with Ch = {1 : s(λ((ℓ, X1, r1), R))}. The remaining configurations do not change.
We then insert it into (ℓ, X, r), obtaining the rounding R*, and get the rounding R′ by shifting the largest element of (ℓ, X, r) to (ℓ, X, r − 1), the largest item of (ℓ, X, r − 1) to (ℓ, X, r − 2), and so on until (ℓ, A, 0) is reached. The rounding function R′ is thus obtained by applying the shift operation on R*, i. e., the new rounding is R′ = shift((ℓ, A, 0), (ℓ, X, r), R*).
In order to pack the new item, note that the largest item in (ℓ, A, 0)[R′] is already packed into a bin of its own due to the shift operation; hence, no further change in the packing or the LP/ILP solution is needed. The insert operation thus yields a new packing B′ (or B′′) which uses two more bins than the packing B.
• Delete: To delete an item it from the group (ℓ, X, r) with R(it) = (ℓ, X, r), we remove it from this group and move the largest item from (ℓ, X, r + 1) into (ℓ, X, r), the largest item from (ℓ, X, r + 2) into (ℓ, X, r + 1), and so on until (ℓ, B, q(ℓ, B)) is reached. Formally, the rounding R′ is described by the expression shift((ℓ, X, r), (ℓ, B, q(ℓ, B)), R*), where R* is obtained from R by removing it from (ℓ, X, r). As a single shift operation is used, the delete operation yields a new packing B′ which uses one more bin than the packing B.
For the LP/ILP solution, let Ci be a configuration containing λ((ℓ, B, q(ℓ, B)), R) with yi ≥ 1; the solutions are updated as described for the shift operation. To control the number of groups in A and B, we introduce operations shiftA and shiftB that increase or decrease the number of groups in A respectively B. A shiftA operation increases the number of groups in A by 1 and decreases the number of groups in B by 1; shiftB does the inverse of shiftA.
• shiftA: In order to move a group from B to A, we perform the operation shift((ℓ, B, 0), (ℓ, B, q(ℓ, B)), R) exactly 2^ℓ times to receive the rounding R*. Instead of opening a new bin for each of those 2^ℓ items in every shift operation, we rather open one bin containing all of them. Since every item in the corresponding size category has size ≤ 2^{−ℓ}, the items fit into a single bin. The group (ℓ, B, 0) now has the same cardinality as the groups in (ℓ, A, ·), and we transfer (ℓ, B, 0) to block A to obtain the final rounding R′. The resulting packing B′ hence uses one more bin than the packing B.
• shiftB: In order to move a group from A to B, we perform the operation shift((ℓ, A, 0), (ℓ, A, q(ℓ, A)), R) exactly 2^ℓ times to receive the rounding R*. As before in shiftA, we open a single bin containing all of the 2^ℓ items. The group (ℓ, A, q(ℓ, A)) now has the same cardinality as the groups in (ℓ, B, ·), and, similar to shiftA, we transfer (ℓ, A, q(ℓ, A)) to block B to obtain the final rounding R′.

Lemma 3. The operations insert, delete, shiftA and shiftB maintain properties (a) to (d).

Proof. Property (a) is always fulfilled, as no item is moved between different size categories and the insert operation inserts an item into its appropriate size category. As the order of items never changes and the insert operation inserts an item into the appropriate place, property (b) also holds.
For properties (c) and (d), we first note that the operation shift(g, g′, R) increases the number of items in g by 1 and decreases the number of items in g′ by 1. The insert operation consists of adding a new item to a group g followed by a shift((ℓ, A, 0), g, R) operation. Hence the number of items in every group except for (ℓ, A, 0) (which is increased by 1) remains the same. The delete operation consists of removing an item from a group g followed by a shift(g, (ℓ, B, q(ℓ, B)), R) operation. Therefore the number of items in all groups except for (ℓ, B, q(ℓ, B)) (which is decreased by 1) remains the same. As the numbers of items in (ℓ, A, 0) and (ℓ, B, q(ℓ, B)) are treated separately and may be smaller than 2^ℓ · k respectively 2^ℓ · (k − 1), properties (c) and (d) are always fulfilled for the insert and the delete operation. Concerning the shiftA operation, we increase the number of items in a group (ℓ, B, 0) by 2^ℓ. It then contains 2^ℓ(k − 1) + 2^ℓ = 2^ℓ · k items, which equals the number of items in groups of block A. As this group is now moved to block A, properties (c) and (d) are fulfilled. Symmetrically, the shiftB operation decreases the number of items in a group (ℓ, A, q(ℓ, A)) by 2^ℓ. The number of items in the group is then 2^ℓ · k − 2^ℓ = 2^ℓ · (k − 1), which equals the number of items in the groups of block B. As this group is now moved to block B, properties (c) and (d) are fulfilled.
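The cardinality argument above can be illustrated with a toy implementation of the shift operation (a sketch under our own representation: groups as lists of item sizes, with an explicit left-to-right order):

```python
def shift(groups, order, g1, g2):
    # sketch of shift(g1, g2, R): along the path from g1 (left) to g2 (right),
    # the largest item of each group moves into the group directly left of it;
    # |g1| grows by one, |g2| shrinks by one, groups in between keep their size
    path = order[order.index(g1):order.index(g2) + 1]
    for left, right in zip(path, path[1:]):
        largest = max(groups[right])
        groups[right].remove(largest)
        groups[left].append(largest)

groups = {"A0": [0.9], "A1": [0.6, 0.5], "B0": [0.4, 0.3]}
shift(groups, ["A0", "A1", "B0"], "A0", "B0")
assert sorted(groups["A0"]) == [0.6, 0.9]  # gained the largest item of A1
assert sorted(groups["A1"]) == [0.4, 0.5]  # lost 0.6, gained the largest of B0
assert groups["B0"] == [0.3]               # lost one item
```

Note that each group only ever receives the largest item of the group to its right, so the sorted order of groups (property (b)) is preserved.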
According to Lemma 1, the rounded instance (I, s_R) has O(1/ǫ · log(1/ǫ)) different item sizes (given a suitable k). Using the LP formulation of Eisemann [Eis57], the resulting LP, called LP(I, s_R), has m = O(1/ǫ · log(1/ǫ)) constraints. We say a packing B corresponds to a rounding R and an integral solution y of the ILP if all items in (I, s_R) are packed by B according to y.

Lemma 4. Each operation yields an LP/ILP solution and a corresponding packing for the new rounding R′.

Proof. We have to analyze how the LP for instance (I, s_{R′}) changes in comparison to the LP for instance (I, s_R). Shift operation: A single shift(g1, g2, R) operation moves, for each group g between g1 and g2, one item into g and one item out of g. As no item is moved out of g1 and no item is moved into g2, the number of items in g1 is increased by 1 and the number of items in g2 is decreased by 1. The right hand side of LP(I, s_R) is defined by the cardinalities |g[R]| of the rounding groups g in R. As only the cardinalities of g1 and g2 change by ±1, the right hand side changes by ±1 in the corresponding components. The moved item from g2 is removed from its configuration, and a new configuration containing the new item of g1 is added. The LP and ILP solutions x and y are modified such that λ(g2, R) is removed from its configuration and a new configuration is added such that the increased right hand side of g1 is covered. Since the largest item λ(g, R) of every group g between g1 and g2 is shifted to the group left of it, the new largest size in g is the size of the second largest item ι(g, R) of g[R]. Therefore each item in (I, s_{R′}) is rounded to a smaller or equal value, as s(ι(g, R)) ≤ s(λ(g, R)). All configurations of (I, s_R) can thus be transformed into feasible configurations of (I, s_{R′}). Insert operation: The insert operation consists of inserting the new item into its corresponding group g followed by a shift operation. Inserting the new item into g increases the right hand side of the LP by 1.
To cover the increased right hand side, we add a new configuration {1 : s_{R′}(i)} containing only the new item. To reflect this change in the packing, the new item is put into an additional bin. The remaining changes are due to the shift operation treated above. Delete operation: The delete operation consists of removing an item i from its corresponding group g followed by a shift operation. Removing the item from g decreases the right hand side of the LP by 1. The current LP and ILP solutions x and y do not need to be changed to cover the new right hand side. The remaining changes are due to the shift operation treated above. shiftA/shiftB operation: As the shiftA and shiftB operations consist only of repeated use of the shift operation, the correspondence between the packing and the LP/ILP solution follows by induction.
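The configuration update for a single shift can be sketched as follows, representing an LP/ILP solution as a multiset of configurations (sorted tuples of item sizes); the names are illustrative, not from the paper:

```python
from collections import Counter

def update_after_shift(x, old_size, new_size):
    # sketch of the LP/ILP update for one shift: a configuration packing the
    # item that left g2 loses one unit of multiplicity, the reduced
    # configuration (that item removed) is re-added so the other items stay
    # packed, and a fresh singleton configuration covers the item of g1
    conf = next(c for c, m in x.items() if m > 0 and old_size in c)
    x[conf] -= 1
    rest = list(conf)
    rest.remove(old_size)
    if rest:
        x[tuple(sorted(rest))] += 1
    x[(new_size,)] += 1

x = Counter({(0.4, 0.5): 2})
update_after_shift(x, 0.5, 0.9)
assert x[(0.4, 0.5)] == 1 and x[(0.4,)] == 1 and x[(0.9,)] == 1
```

This mirrors the text: only two configuration multiplicities change per shift, which is what keeps the repacking per operation small.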

Algorithm for Dynamic Bin Packing
We use the operations from the previous section to obtain a dynamic algorithm for bin packing with respect to large items. The operations insert and delete are designed to process the input depending on whether an item is to be inserted or removed. Keep in mind that the parameter k = ⌊κ⌋ with κ = size(I_L) · ǫ / (2(⌊log(1/ǫ)⌋ + 5)) changes over time, as size(I_L) may increase or decrease. In order to fulfill properties (c) and (d), we need to adapt the number of items per group whenever k changes. The shiftA and shiftB operations are thus designed to manage the dynamic number of items in the groups as k changes. Note that a group in the A-block with parameter k has by definition the same number of items as a group in the B-block with parameter k − 1, assuming they are in the same size category. If k increases, the former A-block is treated as the new B-block in order to fulfill properties (c) and (d), while a new empty A-block is introduced. To be able to rename the blocks, the B-block needs to be empty. Accordingly, the A-block needs to be empty if k decreases, in order to treat the old B-block as the new A-block. Hence we need to make sure that there are no groups in the B-block if k increases and, vice versa, that there are no groups in the A-block if k decreases.
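A small sketch of the bookkeeping for k and the fractional part ∆ (we assume the logarithm is taken base 2, matching the size categories 2^{−ℓ}; the function names are ours):

```python
import math

def kappa(size_L, eps):
    # kappa(t) = size(I_L(t)) * eps / (2 * (floor(log(1/eps)) + 5)),
    # with log assumed base 2, matching the size categories 2^-l
    return size_L * eps / (2 * (math.floor(math.log2(1 / eps)) + 5))

def k_and_delta(size_L, eps):
    # k(t) = floor(kappa(t)); Delta(t) is the fractional part of kappa(t)
    kap = kappa(size_L, eps)
    return math.floor(kap), kap - math.floor(kap)

assert kappa(30, 0.5) == 1.25           # 30 * 0.5 / (2 * (1 + 5))
assert k_and_delta(30, 0.5) == (1, 0.25)
```

The algorithm watches k_and_delta as items arrive and depart: k changing triggers the renaming of the blocks, while ∆ steers how many shiftA/shiftB operations to perform.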
We denote the number of all groups in the A-blocks at time t by A(t) and the number of groups in B-blocks at time t by B(t). To make sure that the B-block (respectively the A-block) is empty when k increases (decreases), the ratio A(t)/(A(t) + B(t)) needs to correlate with the fractional digits of κ(t) at time t, denoted by ∆(t). Hence we partition the interval [0, 1) into intervals such that the ratio A(t)/(A(t) + B(t)) is 0 if the A-block is empty and 1 if the B-block is empty. This way, we can make sure that as soon as k(t) increases, the number of groups in the B-block is close to 0, and as soon as k(t) decreases, the number of groups in the A-block is close to 0. Therefore, the A- and B-blocks can be renamed whenever k(t) changes. The algorithm uses shiftA and shiftB operations to adjust the number of A- and B-groups. Recall that a shiftA operation reduces the number of groups in the B-block by 1 and increases the number of groups in the A-block by 1 (shiftB works vice versa). Let d be the number of shiftA/shiftB operations that need to be performed to adjust the ratio A(t)/(A(t) + B(t)) to the interval containing ∆(t). In the following algorithm we make use of an algorithm called improve, which was developed in [JK13] to reduce the number of used bins. Using improve(x) on a packing B with approximation guarantee max_i B(i) ≤ (1 + ǭ) opt + C for some ǭ = O(ǫ) and some additive term C yields a new packing B′ with approximation guarantee max_i B(i) ≤ (1 + ǭ) opt + C − x. We use these operations in combination with the improve algorithm to obtain a fixed approximation guarantee. In the algorithm, the departure of an item i is handled by calling improve(4), delete(i) and ReduceComponents; to shift to the correct interval, let J_i be the interval containing ∆(t) and J_j the interval containing A(t)/(A(t) + B(t)), and perform d shiftA or shiftB operations accordingly.

Algorithm 1 (AFPTAS for large items).
If an item i arrives: improve(2); insert(i).
If an item i departs: improve(4); delete(i); ReduceComponents.
// Shifting to the correct interval:
Let J_i be the interval containing ∆(t) and let J_j be the interval containing the ratio A(t)/(A(t)+B(t)); while the two intervals differ, perform improve(1) followed by shiftA, or improve(3) followed by shiftB, to move the ratio towards J_i.
Note that, as exactly d groups are shifted from A to B (or from B to A), the ratio A(t)/(A(t)+B(t)) lies in the correct interval at the end of the algorithm. Note that d can be bounded by 11.
Lemma 5. At most 11 groups are shifted from A to B (or B to A) in Algorithm 1.
Proof. Since |size(I(t − 1)) − size(I(t))| is at most 1, the value of κ changes by at most ǫ/(2(⌊log(1/ǫ)⌋+5)) per timestep, which bounds the change in the fractional part ∆(t). By Lemma 1, the number of intervals (= the number of groups) is bounded by (8/ǫ + 2)(log(1/ǫ) + 5). Using the resulting bound on the change of the ratio A(t−1)/(A(t−1)+B(t−1)) and the fact that the total number of groups A(t−1)+B(t−1) increases or decreases by at most 1, we can bound the parameter d in both cases by 11. Hence, the number of shiftA/shiftB operations is bounded by 11. Furthermore, the algorithm is designed such that whenever k increases the B-block is empty and the A-block is renamed to be the new B-block, and whenever k decreases the A-block is empty and the B-block is renamed to be the new A-block. Therefore the number of items in the groups is dynamically adapted to match the parameter k.

Large items
In this section we prove that Algorithm 1 is a dynamic robust AFPTAS for the bin packing problem if all items have size at least ǫ/14. The treatment of small items is described in Section 4 and the general case in Section 4.2.
We will prove that the migration between packings B_t and B_{t+1} is bounded by O(1/ǫ³ · log(1/ǫ)) and that we can guarantee an asymptotic approximation ratio such that max_i B_t(i) ≤ (1 + 2∆) opt(I(t), s) + poly(1/∆) for a parameter ∆ = O(ǫ) and for every t ∈ N. The algorithm improve was developed in [JK13] to improve the objective value of an LP with integral solution y and corresponding fractional solution x. For a vector z ∈ ℝ^n, let V(z) be the set of all integral vectors as defined in [JK13]. Let x be an approximate solution of the LP min{‖x‖₁ | Ax ≥ b, x ≥ 0} with m inequalities, with ‖x‖₁ ≤ (1 + δ) lin and ‖x‖₁ ≥ 2α(1/δ + 1), where lin denotes the fractional optimum of the LP and α ∈ N is part of the input of the algorithm (see Jansen and Klein [JK13]). Let y be an approximate integer solution of the LP with ‖y‖₁ ≤ lin + 2C for some value C ≥ δ lin and with ‖y‖₁ ≥ (m + 2)(1/δ + 2). Suppose that both x and y have at most C non-zero components. For every component i we suppose that y_i ≥ x_i. Furthermore, we are given indices a_1, . . . , a_K such that the non-zero components y_{a_j} are sorted in non-decreasing order, i.e., y_{a_1} ≤ . . . ≤ y_{a_K}.
4. Choose the largest ℓ such that the sum of the smallest components y_{a_1}, . . . , y_{a_ℓ} stays within the reduction bound (the exact threshold is given in [JK13]). 5. Reduce the number of non-zero components to at most m + 1.
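Step 4 is a simple prefix scan over the sorted components. The following sketch treats the concrete threshold (here the parameter `budget`) as an input, since its exact value is specified in [JK13] and is an assumption here:

```python
def largest_prefix(sorted_components, budget):
    """Return the largest l such that the l smallest components
    y_{a_1}, ..., y_{a_l} sum to at most `budget`.

    `sorted_components` must be in non-decreasing order, matching the
    ordering y_{a_1} <= ... <= y_{a_K} assumed by improve."""
    total, l = 0.0, 0
    for c in sorted_components:
        if total + c > budget:
            break
        total += c
        l += 1
    return l
```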
In the following we prove that the algorithm improve, applied to the bin packing ILP, actually generates a new improved packing B′ from the packing B, with corresponding LP and ILP solutions x′ and y′. We therefore use Theorem 2 and Corollary 2, which were proven in [JK13].
Theorem 2. Let x be a solution of the LP with ‖x‖₁ ≤ (1 + δ) lin and ‖x‖₁ ≥ 2α(1/δ + 1). Let y be an integral solution of the LP with ‖y‖₁ ≤ lin + 2C for some value C ≥ δ lin and with ‖y‖₁ ≥ (m + 2)(1/δ + 2). Suppose x and y have the same number of non-zero components and x_i ≤ y_i for each component. The algorithm improve(α) then returns a fractional solution x′ with ‖x′‖₁ ≤ (1 + δ) lin − α and an integral solution y′ for which one of the two properties ‖y′‖₁ ≤ ‖y‖₁ − α or ‖y′‖₁ ≤ ‖x‖₁ + C − α holds. Both x′ and y′ have at most C non-zero components, and the distance between y′ and y is bounded by ‖y′ − y‖₁ = O((m + α)/δ).
Let ∆ = ǫ + δ + ǫδ and C = ∆ opt(I, s) + m.

Theorem 3. Let B be a packing with LP solutions x and y fulfilling max_i B(i) = ‖y‖₁ ≤ (1 + 2∆) opt(I, s) + m, ‖x‖₁ ≤ (1 + ∆) opt(I, s), x_i ≤ y_i for every component, at most C non-zero components each, and the minimum sizes ‖x‖₁ ≥ 2α(1/δ + 1) and ‖y‖₁ ≥ (m + 2)(1/δ + 2). Then improve(α) yields a packing B′ with corresponding solutions x′ and y′ such that ‖x′‖₁ ≤ (1 + ∆) opt(I, s) − α, max_i B′(i) = ‖y′‖₁ ≤ (1 + 2∆) opt(I, s) + m − α, x′ and y′ have at most C non-zero components, and ‖y′ − y‖₁ = O(m/δ).

Proof. To use Theorem 2 and Corollary 2, we have to prove that certain conditions follow from the requisites of Theorem 3. We have max_i B(i) = ‖y‖₁ ≤ (1 + 2∆) opt(I, s) + m by assumption. Since opt(I, s) ≤ opt(I, s^R), we obtain for the integral solution y that ‖y‖₁ ≤ 2∆ opt(I, s) + m + opt(I, s^R) ≤ 2∆ opt(I, s) + m + lin(I, s^R) + m. Hence, by definition of C, we get ‖y‖₁ ≤ lin(I, s^R) + 2C. This is one requirement for using Theorem 2 or Corollary 2. Let δ′ be defined by ‖x‖₁ = (1 + δ′) lin(I, s^R); we distinguish the cases δ′ ≤ δ and δ′ > δ and treat them separately. Case 1: δ′ ≤ δ. For the parameter C we give a lower bound by the inequality C > ∆ opt(I, s) = (δ + ǫ + δǫ) opt(I, s). Lemma 2 shows that opt(I, s^R) ≤ (1 + ǫ) opt(I, s), and therefore C > δ(1 + ǫ) opt(I, s) ≥ δ opt(I, s^R) ≥ δ lin(I, s^R). We can therefore use Theorem 2. By Theorem 2, algorithm improve returns a fractional solution x′ with ‖x′‖₁ ≤ (1 + δ) lin(I, s^R) − α ≤ (1 + ∆) opt(I, s) − α and an integral solution y′ satisfying one of the two bounds of Theorem 2. Furthermore, we know by Theorem 2 that x′ and y′ have at most C non-zero components.
Case 2: δ′ > δ. First we prove that C is bounded from below. Since ‖x‖₁ = (1 + δ′) lin(I, s^R) ≤ (1 + ∆) opt(I, s) and opt(I, s) ≤ opt(I, s^R) ≤ lin(I, s^R) + m, it follows that C = ∆ opt(I, s) + m ≥ δ′ lin(I, s^R), which is a requirement for using Corollary 2. By applying algorithm improve to the solutions x with ‖x‖₁ = (1 + δ′) lin(I, s^R) and y with ‖y‖₁ ≤ lin(I, s^R) + 2C, we obtain by Corollary 2 a fractional solution x′ with ‖x′‖₁ ≤ ‖x‖₁ − α ≤ (1 + ∆) opt(I, s) − α and an integral solution y′ with either ‖y′‖₁ ≤ ‖y‖₁ − α or ‖y′‖₁ ≤ ‖x‖₁ + C − α. In the first case we can guarantee for the new packing B′ that max_i B′(i) = ‖y′‖₁ ≤ ‖y‖₁ − α ≤ (1 + 2∆) opt(I, s) + m − α. In the case that ‖y′‖₁ ≤ ‖x‖₁ + C − α, we can guarantee that max_i B′(i) = ‖y′‖₁ ≤ ‖x‖₁ + C − α ≤ (1 + ∆) opt(I, s) + C − α ≤ (1 + 2∆) opt(I, s) + m − α. Furthermore, we know by Corollary 2 that x′ and y′ have at most C non-zero components.
Theorem 2 as well as Corollary 2 state that the distance ‖y′ − y‖₁ is bounded by O(m/δ). Since y corresponds directly to the packing B and the new integral solution y′ corresponds to the new packing B′, we know that only O(m/δ) bins of B need to be changed to obtain the packing B′.
In order to prove correctness of Algorithm 1, we will make use of the auxiliary Algorithm 3 (ReduceComponents). Due to a delete operation, the value of the optimal solution opt(I, s) might decrease. Since the number of non-zero components has to be bounded by C = ∆ opt(I, s) + m, the number of non-zero components might have to be adjusted downwards. The following algorithm describes how a fractional solution x′ and an integral solution y′ with a reduced number of non-zero components can be computed such that ‖y − y′‖₁ is bounded. The idea behind the algorithm is also used in the improve algorithm: the smallest m + 2 components are reduced to m + 1 components using a standard technique presented, for example, in [BM98]. Arbitrarily many components of x′ can thus be reduced to m + 1 components without worsening the approximation guarantee.

Algorithm 3 (ReduceComponents).
1. Choose the smallest non-zero components y_{a_1}, . . . , y_{a_{m+2}}.
2. If Σ_{1≤i≤m+2} y_{a_i} ≥ (1/∆ + 2)(m + 2), then return x′ = x and y′ = y.
3. Reduce the components x_{a_1}, . . . , x_{a_{m+2}} to m + 1 components x̂_{b_1}, . . . , x̂_{b_{m+1}} without changing the sum of the components.
4. Define ŷ by ŷ_{b_j} = ⌈x̂_{b_j}⌉ for the new components and ŷ_{a_i} = 0 for the removed ones, so that ŷ_i ≥ x′_i for every component.
5. Choose a vector d with ‖d‖₁ ≤ m + 1 maximal such that y′ = ŷ − d still satisfies y′_i ≥ x′_i for every component.
The following theorem shows that the algorithm above yields a new fractional solution x′ and a new integral solution y′ with a reduced number of non-zero components.

Theorem 4. Given solutions x and y with at most C + 1 non-zero components as above, Algorithm 3 returns a fractional solution x′ with ‖x′‖₁ = ‖x‖₁ and an integral solution y′ with ‖y′‖₁ ≤ (1 + 2∆) opt(I, s) + m, such that x′ and y′ have at most C non-zero components and ‖y′ − y‖₁ < 2(1/∆ + 3)(m + 2).

Proof. Case 1: Σ_{1≤i≤m+2} y_{a_i} ≥ (1/∆ + 2)(m + 2). In this case the algorithm returns x′ = x and y′ = y; we show that x and y already have at most C non-zero components. Since Σ_{1≤i≤m+2} y_{a_i} ≥ (1/∆ + 2)(m + 2), the components y_{a_1}, . . . , y_{a_{m+2}} have an average size of at least 1/∆ + 2, and since y_{a_1}, . . . , y_{a_{m+2}} are the smallest components, all components of y have average size at least 1/∆ + 2. The size ‖y‖₁ is bounded by (1 + 2∆) opt(I, s) + m. Hence the number of non-zero components can be bounded by ((1 + 2∆) opt(I, s) + m)/(1/∆ + 2) ≤ ∆ opt(I, s) + m = C. Case 2: Σ_{1≤i≤m+2} y_{a_i} < (1/∆ + 2)(m + 2). We have to prove different properties for the new fractional solution x′ and the new integral solution y′.

Number of non-zero components:
The only change in the number of non-zero components happens in step 3 of the algorithm, where the number of non-zero components is reduced by 1. As x and y have at most C + 1 non-zero components, x′ and y′ have at most C non-zero components. In step 4 of the algorithm, ŷ is defined such that ŷ_i ≥ x′_i; in step 5, d is chosen maximally such that y′_i ≥ x′_i is maintained.

Distance between y and y′: The only steps where components of y change are steps 4 and 5. The distance between y and ŷ is bounded by the sum of the components that are set to 0, i.e., Σ_{j=1}^{m+2} y_{a_j}, plus the total increase of the increased components, Σ_{j=1}^{m+1} ⌈x̂_{b_j}⌉ ≤ Σ_{j=1}^{m+1} x̂_{b_j} + m + 1 = Σ_{j=1}^{m+2} x_{a_j} + m + 1. As Σ_{j=1}^{m+2} x_{a_j} ≤ Σ_{j=1}^{m+2} y_{a_j} < (1/∆ + 2)(m + 2), we obtain that the distance between y and ŷ is bounded by 2·(1/∆ + 2)(m + 2) + m + 1. Using that ‖d‖₁ ≤ m + 1, the distance between y and y′ is bounded by ‖y′ − y‖₁ < 2·(1/∆ + 3)(m + 2).

Approximation guarantee: The fractional solution x is modified in step 3 such that the sum of the components does not change. Hence ‖x′‖₁ = ‖x‖₁ ≤ (1 + ∆) opt(I, s). Case 2a: ‖d‖₁ < m + 1. Since d is chosen maximally, we have y′_i − x′_i < 1 for every non-zero component. Since there are at most C = ∆ opt(I, s) + m non-zero components, we obtain ‖y′‖₁ ≤ ‖x′‖₁ + C ≤ (1 + 2∆) opt(I, s) + m. Case 2b: ‖d‖₁ = m + 1. By definition of ŷ we have ‖ŷ‖₁ ≤ ‖y‖₁ + m + 1. We obtain for y′ that ‖y′‖₁ = ‖ŷ‖₁ − ‖d‖₁ ≤ ‖y‖₁ + m + 1 − (m + 1) = ‖y‖₁ ≤ (1 + 2∆) opt(I, s) + m.
Theorem 5. Algorithm 1 is an AFPTAS with migration factor at most O(1/ǫ³ · log(1/ǫ)) for the fully dynamic bin packing problem with respect to large items.
Proof. Set δ = ǫ. Then ∆ = 2ǫ + ǫ² = O(ǫ). We assume in the following that ∆ ≤ 1 (which holds for ǫ ≤ √2 − 1). We prove by induction that the following four properties hold for every packing B_t and the corresponding LP solutions, where x is a fractional solution of the LP defined by the instance (I(t), s^{R_t}) and y is an integral solution of this LP. Properties (2) to (4) are necessary to apply Theorem 3, and property (1) provides the desired approximation ratio for the bin packing problem.

(1) max_i B_t(i) = ‖y‖₁ ≤ (1 + 2∆) opt(I(t), s) + m
(2) ‖x‖₁ ≤ (1 + ∆) opt(I(t), s)
(3) x_i ≤ y_i for every configuration i
(4) x and y have the same number of non-zero components, and that number is bounded by ∆ opt(I(t), s) + m
To apply Theorem 3 we furthermore need a guaranteed minimum size for ‖x‖₁ and ‖y‖₁. According to Theorem 3, the integral solution y needs ‖y‖₁ ≥ (m + 2)(1/δ + 2) and x needs ‖x‖₁ ≥ 8(1/δ + 1), as we set α ≤ 4. By the condition of the while-loop, the call of improve is made if and only if SIZE(I(t), s) ≥ 8(1/δ + 1) and SIZE(I(t), s) ≥ (m + 2)(1/δ + 2). Since ‖y‖₁ ≥ ‖x‖₁ ≥ SIZE(I(t), s), the requirements for the minimum size are fulfilled. As long as the instance is smaller than 8(1/δ + 1) or (m + 2)(1/δ + 2), an offline algorithm for bin packing is used; note that there is an offline algorithm which fulfills properties (1) to (4), as shown by Jansen and Klein [JK13]. Now let B_t be a packing with SIZE(I(t), s) ≥ 8(1/δ + 1) and SIZE(I(t), s) ≥ (m + 2)(1/δ + 2) for instance I(t) with solutions x and y of the LP defined by (I(t), s^{R_t}). Suppose by induction that properties (1) to (4) hold for the instance I(t). We have to prove that these properties also hold for the instance I(t + 1) and the corresponding solutions x′′ and y′′. The packing B_{t+1} is created by a call of improve for x and y followed by an operation (insert, delete, shiftA or shiftB). We will prove that properties (1) to (4) hold after a call of improve followed by an operation. improve: Let x′ be the resulting fractional solution of Theorem 3, let y′ be the resulting integral solution of Theorem 3, and let B′_t be the corresponding packing. Properties (1) to (4) are fulfilled for x, y and B_t by the induction hypothesis. Hence all conditions are fulfilled to use Theorem 3. By Theorem 3, properties (1) to (4) are still fulfilled for x′, y′ and B′_t, and moreover we get ‖x′‖₁ ≤ (1 + ∆) opt(I(t), s) − α and ‖y′‖₁ = max_i B′_t(i) ≤ (1 + 2∆) opt(I(t), s) + m − α for the chosen parameter α. Let x′′ and y′′ be the fractional and integral solutions after an operation is applied to x′ and y′. We have to prove that properties (1) to (4) are also fulfilled for x′′ and y′′.
operations: First we examine how the operations modify ‖x′‖₁ and ‖y′‖₁ = max_i B′_t(i). By construction of the insert operation, ‖x′‖₁ and ‖y′‖₁ are increased by at most 2. By construction of the delete operation, ‖x′‖₁ and ‖y′‖₁ are increased by at most 1, and the same holds for the shiftA and shiftB operations. An improve(2) call followed by an insert operation therefore yields ‖y′′‖₁ ≤ ‖y′‖₁ + 2 ≤ (1 + 2∆) opt(I(t), s) + m − 2 + 2 ≤ (1 + 2∆) opt(I(t + 1), s) + m, since opt(I(t), s) ≤ opt(I(t + 1), s). An improve(4) call followed by a delete operation yields ‖y′′‖₁ ≤ ‖y′‖₁ + 1 ≤ (1 + 2∆) opt(I(t), s) + m − 3 ≤ (1 + 2∆) opt(I(t + 1), s) + (1 + 2∆) + m − 3 ≤ (1 + 2∆) opt(I(t + 1), s) + m, since opt(I(t), s) ≤ opt(I(t + 1), s) + 1 (an item is removed) and ∆ ≤ 1. In the same way we obtain that ‖y′′‖₁ ≤ ‖y′‖₁ + 1 ≤ (1 + 2∆) opt(I(t + 1), s) + m for an improve(1)/improve(3) call followed by a shiftA/shiftB operation. This concludes the proof that property (1) is fulfilled for I(t + 1). The proof that property (2) holds is analogous, since ‖x′‖₁ increases in the same way as ‖y′‖₁ and ‖x′‖₁ ≤ (1 + ∆) opt(I(t), s) − α. For property (3), note that in the operations a configuration x_i of the fractional solution is increased by 1 if and only if the configuration y_i is increased by 1. Therefore the property x′′_i ≤ y′′_i for all configurations carries over from x′ and y′. By Theorem 3, the number of non-zero components of x′ and y′ is bounded by ∆ opt(I(t), s) + m ≤ ∆ opt(I(t + 1), s) + m in case of an insert operation. If an item is removed, the number of non-zero components of x′ and y′ is bounded by ∆ opt(I(t), s) + m ≤ ∆ opt(I(t + 1), s) + m + 1 = C + 1. By Theorem 4, the algorithm ReduceComponents then guarantees that there are at most C = ∆ opt(I(t + 1), s) + m non-zero components. By construction of the shift operations, x′′ and y′′ might have two additional non-zero components.
But since these are reduced by Algorithm 1 (note that we increased the number of components being reduced in step 6 by 2; see [JK13] for details), the LP solutions x′′ and y′′ have at most ∆ opt(I(t + 1), s) + m non-zero components, which proves property (4). Algorithm 1 therefore has an asymptotic approximation ratio of 1 + O(ǫ), which can be scaled to 1 + ǫ.
The main complexity of Algorithm 1 lies in the use of the algorithm improve. As described by Jansen and Klein [JK13], the running time of improve is bounded by O(M(1/ǫ log(1/ǫ)) · 1/ǫ³ log(1/ǫ)), where M(n) is the time needed to solve a system of n linear equations. By using heap structures to store the items, each operation can be performed in time O(1/ǫ log(1/ǫ) · log(ǫ²·n(t))) at time t, where n(t) denotes the number of items in the instance at time t. As the number of non-zero components is bounded by O(ǫ·n(t)), the total running time of the algorithm is bounded by O(M(1/ǫ log(1/ǫ)) · 1/ǫ³ log(1/ǫ) + 1/ǫ log(1/ǫ) log(ǫ²·n(t)) + ǫ·n(t)). The best known running time for the dynamic bin packing problem without removals was O(M(1/ǫ²) · 1/ǫ⁴ + ǫ·n(t) + 1/ǫ² log(ǫ²·n(t))), due to Jansen and Klein [JK13]. As our running time is polynomial in n(t) and in 1/ǫ, we can conclude that Algorithm 1 is an AFPTAS.
If no deletions are present, we can use a simple FirstFit algorithm (as described by Jansen and Klein [JK13]) to pack the small items into the bins. This does not change the migration factor or the running time of the algorithm, and we obtain a robust AFPTAS with O(1/ǫ³ · log(1/ǫ)) migration for the case that no items are removed. This improves the best known migration factor of O(1/ǫ⁴) [JK13].

Handling Small Items
In this section we present methods for dealing with arbitrary small items in a dynamic online setting. First, we present a robust AFPTAS with migration factor O(1/ǫ) for the case that only small items arrive and depart. In Section 4.3 we generalize these techniques to a setting where small items arrive into a packing in which large items are already packed and cannot be rearranged. Finally, we state the AFPTAS for the general fully dynamic bin packing problem. In a robust setting without departing items, small items can easily be treated by packing them greedily via the classical FirstFit algorithm of Johnson et al. [JDU+74] (see Epstein and Levin [EL09] or Jansen and Klein [JK13]). However, in a setting where items may also depart, small items need to be treated much more carefully. We show that the FirstFit algorithm does not work in this dynamic setting.
Lemma 7. Using the FirstFit algorithm to pack small items may lead to an arbitrarily bad approximation.
Proof. Suppose that there is an algorithm A with migration factor c which uses FirstFit on items with size < ǫ/14. We will now construct an instance where A yields an arbitrarily bad approximation ratio. Let b = ǫ/14 − δ and a = ǫ/(14c) − (δ + cδ)/c for a small δ such that (1−b)/a is integral. Note that ac < b by definition. Furthermore, let M ∈ N be an arbitrary integer and consider the instance in which, for each of M bins, an item of size b arrives followed by (1−b)/a items of size a. After the insertion of all items, there are M bins, each containing an item of size b and (1−b)/a items of size a (see Figure 7a). Now all items of size a depart. As ac < b, the deletion of the items of size a cannot move the items of size b. The remaining M bins thus only contain a single item of size b (see Figure 7b), while ⌈M·b⌉ bins would be sufficient to pack all of the remaining items. The approximation ratio is thus at least M/⌈M·b⌉ ≈ 1/b = Θ(1/ǫ) and thus grows as ǫ shrinks. In order to avoid this problem, we design an algorithm which groups items of similar size together. Such a mechanism would put the second item of size b into the first bin by shifting out an appropriate number of items of size a, and so on. Our algorithm achieves this grouping of small items by enumerating the bins and maintaining the property that larger small items always appear to the left of smaller small items.
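The construction can be checked numerically. The following sketch uses concrete values for a, b and M (chosen for illustration, not exactly the values in the proof) and simulates FirstFit insertions followed by the departure of all items of size a:

```python
import math

def first_fit(loads, size, cap=1.0):
    """Pack one item FirstFit-style; loads[i] is the current load of bin i."""
    for i, load in enumerate(loads):
        if load + size <= cap:
            loads[i] += size
            return i
    loads.append(size)
    return len(loads) - 1

eps, delta = 0.14, 1e-4
b = eps / 14 - delta       # the items that end up stranded, one per bin
a = b / 10                 # filler items; a*c < b for any migration factor c < 10,
                           # so deleting fillers cannot move an item of size b
M = 20
loads, filler_in = [], []  # filler_in[i]: total filler size packed into bin i
for _ in range(M):
    i = first_fit(loads, b)            # every round opens a fresh bin
    filler_in.append(0.0)
    while 1.0 - loads[i] >= a:         # fill that bin completely with fillers
        first_fit(loads, a)
        filler_in[i] += a

# all filler items depart; FirstFit never repacks the remaining items
loads = [loads[i] - filler_in[i] for i in range(M)]
used = sum(1 for load in loads if load > 1e-12)   # still M bins in use
opt = math.ceil(M / math.floor(1.0 / b))          # the b-items packed tightly
```

Here `used` stays at M while `opt` collapses to ⌈M·b⌉ or fewer bins, matching the Θ(1/ǫ) lower bound on the ratio.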

Only Small Items
We consider a setting where only small items exist, i.e., items with a size less than ǫ/14. First, we divide the set of small items into different size intervals S_j, where S_j = (ǫ/2^{j+1}, ǫ/2^j] for j ≥ 1. Let b_1, . . . , b_m be the used bins of our packing. We say a size category S_j is bigger than a size category S_k if j < k, i.e., the item sizes contained in S_j are larger (note that a size category S_j with large index j is called small). We say a bin b_i is filled completely if it has less than ǫ/2^j remaining space, where S_j is the biggest size category appearing in b_i. Furthermore, we label bins b_i as normal or as buffer bins and partition all bins b_1, . . . , b_m into queues Q_1, . . . , Q_d with d ≤ m. A queue is a subsequence of bins b_i, b_{i+1}, . . . , b_{i+c}, where bins b_i, . . . , b_{i+c−1} are normal bins and bin b_{i+c} is a buffer bin. We denote the i-th queue by Q_i and the number of bins in Q_i by |Q_i|. The buffer bin of queue Q_i is denoted by bb_i.
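A minimal helper for these size intervals; the loop form avoids floating-point issues with computing the category index via logarithms (the function name is ours):

```python
def size_category(s: float, eps: float) -> int:
    """Return the index j >= 1 with s in (eps/2**(j+1), eps/2**j].

    Categories with small index hold the bigger items; increasing j
    halves the interval bounds, matching S_j = (eps/2^(j+1), eps/2^j]."""
    assert 0.0 < s <= eps / 2.0, "items in some S_j have size at most eps/2"
    j = 1
    while s <= eps / 2 ** (j + 1):
        j += 1
    return j
```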
We will maintain a special form for the packing of small items such that the following properties are always fulfilled. For the sake of simplicity, we assume that 1/ǫ is integral.
(1) For every item i ∈ b_d with size s(i) ∈ S_j, there is no item i′ ∈ b_{d′} with size s(i′) ∈ S_{j′} such that d′ > d and j′ < j. This means: items are ordered from left to right by their size intervals.
(2) Every normal bin is filled completely.
(3) The length of each queue is at least 1/ǫ and at most 2/ǫ, except for the last queue Q_d.

Note that property (1) implies that all items in the same size interval S_j are packed into consecutive bins b_x, b_{x+1}, . . . , b_{x+c} for suitable x and c. Items in the next smaller size category S_{j+1} are then packed into bins b_{x+c}, b_{x+c+1}, . . . and so on. We denote by b_{S(ℓ)} the last bin in which an item of size interval S_ℓ appears. We denote by S_{>ℓ} the set of smaller size categories S_{ℓ′} with ℓ′ > ℓ; note that items in size categories S_{>ℓ} are smaller than items in size category S_ℓ.

Lemma 8. A packing that fulfills properties (1) to (3) uses at most (1 + O(ǫ)) opt(I, s) + O(1) bins.

Proof. Let C be the number of used bins in our packing. By property (2) we know that all normal bins have less than ǫ/14 free space. Property (3) implies that there are at most ǫ·C + 1 buffer bins, which are possibly empty. The number of normal bins is thus at least (1 − ǫ)·C − 1. Therefore we can bound the total size of all items by SIZE(I, s) ≥ (1 − ǫ/14)·((1 − ǫ)·C − 1). As opt(I, s) ≥ SIZE(I, s) ≥ (1 − ǫ/14)·((1 − ǫ)·C − 1), the claim follows.

We will now describe the operations that are applied whenever a small item has to be inserted into or removed from the packing. The operations are designed such that properties (1) to (3) are never violated, and hence a good approximation ratio can be guaranteed by Lemma 8 at every step of the algorithm. The operations are applied recursively such that some items from each size interval are shifted from left to right (insert) or from right to left (delete). The recursion halts when the first buffer bin is reached; therefore, the free space in the buffer bins changes over time.
Since the recursion always halts at the buffer bin, the algorithm is applied on a single queue Q k .
The following Insert/Delete operation is defined for a whole set J = {i_1, . . . , i_n} of items. If a single item i of size interval S_ℓ has to be inserted or deleted, the algorithm is called with J = {i} and the bin b_{S(ℓ)}.
- Remove the set of items J = {i_1, . . . , i_n} with sizes s(i_j) ∈ S_{≤ℓ} from bin b_x. (By Lemma 9 the total size of J is bounded by O(1/ǫ) times the size of the item which triggered the first Delete operation.)
- Insert as many small items of size interval S_{ℓ′} into b_x, where S_{ℓ′} is the smallest size interval appearing in b_x, such that b_x is filled completely. If there are not enough items from size category S_{ℓ′}, choose items from size categories S_{≥ℓ′+1} in bin b_{x+1}.
Using the above operations maintains the property that normal bins are filled completely. However, the size of the items in the buffer bins changes. In the following we describe how to handle buffer bins that are emptied or filled completely.

Algorithm 5 (Handle filled or emptied buffer bins).
• Case 1: The buffer bin of Q i is filled completely by an insert operation.
-Label the filled bin as a normal bin and add a new empty buffer bin to the end of Q i .
- If now |Q_i| > 2/ǫ, split Q_i into two queues Q′_i and Q′′_i with |Q′_i| = |Q′′_i|. The buffer bin of Q′′_i is the newly added buffer bin; add an empty bin labeled as the buffer bin to the end of Q′_i. • Case 2: The buffer bin of Q_i is being emptied due to a delete operation.
-Remove the now empty bin.
-If |Q i | ≥ |Q i+1 | and |Q i | > 1 /ǫ, choose the last bin of Q i and label it as new buffer bin of Q i . -If |Q i+1 | > |Q i | and |Q i+1 | > 1 /ǫ, choose the first bin of Q i+1 and move the bin to Q i and label it as buffer bin. -If |Q i+1 | = |Q i | = 1 /ǫ, merge the two queues Q i and Q i+1 . As Q i+1 already contains a buffer bin, there is no need to label another bin as buffer bin for the merged queue.
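Case 2 can be sketched as follows. Queues are modelled as lists of bin identifiers whose last entry is the buffer bin; the length comparisons are made after the emptied bin is removed, which is an assumption about the intended reading of the three cases:

```python
def handle_emptied_buffer(queues, i, min_len):
    """Sketch of Case 2 of Algorithm 5: the buffer bin of queues[i] was
    emptied; remove it and restore the length invariant (>= min_len, i.e.
    1/eps bins per queue) by relabeling, borrowing, or merging."""
    qi, qnext = queues[i], queues[i + 1]
    qi.pop()                                   # drop the emptied buffer bin
    if len(qi) >= len(qnext) and len(qi) >= min_len:
        pass                                   # last bin of Q_i becomes the buffer
    elif len(qnext) > len(qi) and len(qnext) > min_len:
        qi.append(qnext.pop(0))                # first bin of Q_{i+1} moves over
    else:                                      # both queues at minimum length
        qi.extend(qnext)                       # merge; Q_{i+1}'s buffer serves both
        del queues[i + 1]
```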
Creating and deleting buffer bins this way guarantees that property (3) is never violated, since queues never exceed the length 2/ǫ and never fall below 1/ǫ. Figure 9 shows example calls of Insert and Delete. In Figure 9a, Insert({i}, b_x, Q_k) puts item i with s(i) ∈ S_1 into the corresponding bin b_x into the size interval S_1. As b_x now contains too many items, some items from the smallest size interval S_2 (marked by the dashed lines) are put into the last bin b_{x+2} containing items from S_2. Those items in turn push items from the smallest size interval S_3 into the last bin containing items of that size, and so on. This process terminates when either no items need to be shifted to the next bin or the buffer bin bb_k is reached.
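The cascading Insert can be simulated on plain lists of item sizes. This sketch ignores the size-category bookkeeping and simply evicts the smallest items of an overfull bin into the next bin until the queue's buffer bin absorbs the overflow; the function name and representation are ours:

```python
def cascade_insert(bins, buffer_idx, x, size, cap=1.0):
    """Insert `size` into bins[x]; if a normal bin overflows, evict its
    smallest items into the next bin, cascading towards the buffer bin."""
    carry = [size]
    for i in range(x, buffer_idx + 1):
        bins[i].extend(carry)
        bins[i].sort(reverse=True)           # bigger items kept to the front
        carry = []
        if i < buffer_idx:                   # normal bins must not overflow
            while sum(bins[i]) > cap:
                carry.append(bins[i].pop())  # evict the smallest item
        if not carry:
            break
    assert sum(bins[buffer_idx]) <= cap, "buffer bin full: run Algorithm 5"
```

Keeping each bin sorted in decreasing size preserves the global left-to-right ordering by size interval, mirroring property (1).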
It remains to prove that the migration of the operations is bounded and that the properties are invariant under those operations.

Lemma 9.
(i) Let I be an instance that fulfills properties (1) to (3). Applying operations insert/delete on I yields an instance I ′ that also fulfills properties (1) to (3).

(ii) The migration factor of a single insert/delete operation is bounded by O(1/ǫ).
Proof. Proof for (i): Suppose the insert/delete operation is applied to a packing which fulfills properties (1) to (3). By construction of the insert operation, items from a size category S_ℓ in bin b_x are shifted to a bin b_y. The bin b_y is either b_{S(ℓ)} or a buffer bin left of b_{S(ℓ)}. By definition, b_y contains items of size category S_ℓ. Therefore property (1) is not violated. Symmetrically, by construction of the delete operation, items from a size category S_ℓ in bin b_{S(ℓ)} are shifted to a bin b_x. By definition, b_x contains items of size category S_ℓ, and property (1) is therefore not violated. For property (2): Let b_x be a normal bin into which items i_1, . . . , i_n of size categories S_{≤ℓ} are inserted. We have to prove that the free space in b_x remains smaller than ǫ/2^j, where S_j is the smallest size category appearing in bin b_x. By construction of the insert operation, just as many items of size categories S_{>ℓ} are shifted out of bin b_x that i_1, . . . , i_n fit into b_x. Hence the remaining free space is less than ǫ/2^j and bin b_x is filled completely. The same argumentation holds for the delete operation. Property (3) is always fulfilled by definition of Algorithm 5.
Proof for (ii): According to the insert operation, in every recursion step the algorithm tries to insert a set of items into some bin b_x with x ≥ x′, starting with an Insert({i}, b_{x′}, Q_k) operation. Let insert(S_{≤ℓ+y}, b_x) be the total size of all items in size categories S_j with j ≤ ℓ + y that the algorithm tries to insert into b_x as a result of the Insert({i}, b_{x′}, Q_k) call, and let pack(b_x) be the size of the items that are actually packed into bin b_x. We have to distinguish two cases. In the case insert(S_{≤ℓ+y}, b_x) = pack(b_x), there are enough items of smaller size categories S_{>ℓ+y} that can be shifted out such that the items fit into bin b_x. In the case insert(S_{≤ℓ+y}, b_x) > pack(b_x), there are not enough items of smaller size categories that can be shifted out, and the remaining size insert(S_{≤ℓ+y}, b_x) − pack(b_x) has to be shifted to the following bin b_{x+1}. Under the assumption that insert(S_{≤ℓ+y}, b_x) ≤ 1 for all x and ℓ (which is shown below), all these items fit into b_{x+1}. Note that no items from bins left of b_x can be shifted into b_{x+1}, since b_x = b_{S(ℓ+y)} is the last bin where items of size categories S_{≤ℓ+y} appear. Hence all items shifted out of bins left of b_x are of size categories S_{≤ℓ+y} (property (1)) and are inserted into bins left of b_{x+1}. We prove by induction that insert(S_{≤ℓ+y}, b_x) ≤ s(i) + 3·Σ_{j=1}^{y} ǫ/2^{ℓ+j} for every bin b_x. The claim obviously holds for the initial call, as insert(S_{≤ℓ}, b_{x′}) = s(i). All cases to consider are shown in Figure 10.

Case 1: insert(S_{≤ℓ+y}, b_x) > pack(b_x). The size of all items that have to be inserted into b_{x+1} can be bounded by the size of the items that did not fit into bin b_x plus the size of the items that were shifted out of bin b_x. Since every item shifted out of b_x has size less than ǫ/2^{ℓ+y}, the total size shifted out is at most pack(b_x) + ǫ/2^{ℓ+y}. Let ȳ > y be the largest index such that S_{ℓ+ȳ} appears in bin b_x. We can thus bound insert(S_{≤ℓ+ȳ}, b_{x+1}) by (insert(S_{≤ℓ+y}, b_x) − pack(b_x)) + (pack(b_x) + ǫ/2^{ℓ+y}) = insert(S_{≤ℓ+y}, b_x) + ǫ/2^{ℓ+y} ≤ s(i) + 3·Σ_{j=1}^{y} ǫ/2^{ℓ+j} + ǫ/2^{ℓ+y} < s(i) + 3·Σ_{j=1}^{ȳ} ǫ/2^{ℓ+j}.

Case 2: Suppose the algorithm tries to insert a set of items I of size categories S_{≤ℓ+ȳ} into the bin b_{x+1} = b_{S(ℓ+ȳ)}. The items I can only be shifted out of previous bins in which items of size categories S_{≤ℓ+ȳ} appear. Only two possibilities remain: either all items of I are shifted out of a single bin b_x̄ (x̄ ≤ x), or out of two consecutive bins b_x̄, b_{x̄+1} with insert(S_{≤ℓ+y}, b_x̄) > pack(b_x̄). Note that b_{x+1} can only receive items from more than one bin if there are two bins b_x̄, b_{x̄+1} with insert(S_{≤ℓ+y}, b_x̄) > pack(b_x̄) such that b_{x+1} = b_{S(ℓ+ȳ)} and all items shifted out of b_x̄ and b_{x̄+1} into b_{x+1} are of size category S_{ℓ+ȳ}; bins left of b_x̄ or right of b_{x̄+1} cannot shift items into b_{x+1}.

Case 2a: All items of I are shifted out of a single bin b_x̄ with x̄ ≤ x (note that x̄ < x is possible, since pack(b_x) = insert(S_{≤ℓ+y}, b_x) can be zero). The total size of the items shifted out of b_x̄ can be bounded by insert(S_{≤ℓ+y}, b_x̄) + ǫ/2^{ℓ+y}. By the induction hypothesis, insert(S_{≤ℓ+y}, b_x̄) is bounded by s(i) + 3·Σ_{j=1}^{y} ǫ/2^{ℓ+j}. Since all items inserted into b_{x+1} come from b_x̄, the value insert(S_{≤ℓ+ȳ}, b_{x+1}) (with ȳ > y) can be bounded by insert(S_{≤ℓ+y}, b_x̄) + ǫ/2^{ℓ+y} ≤ s(i) + 3·Σ_{j=1}^{y} ǫ/2^{ℓ+j} + ǫ/2^{ℓ+y} < s(i) + 3·Σ_{j=1}^{ȳ} ǫ/2^{ℓ+j}, where S_{ℓ+ȳ} is the smallest size category inserted into b_{x+1}. Note that the items of I belong to only one size category S_{ℓ+ȳ} if x̄ < x, since all items in size intervals S_{<ℓ+ȳ} are inserted into bin b_{x̄+1}.

Case 2b: The items of I are shifted out of bins b_x̄ and b_{x̄+1} (x̄ + 1 ≤ x) with insert(S_{≤ℓ+y}, b_x̄) > pack(b_x̄). In this case, all items of I belong to the size category S_{ℓ+ȳ}, since b_x̄ is left of b_x. Hence all items inserted into b_{x̄+1} come from I, i.e., insert(S_{≤ℓ+y}, b_x̄) = pack(b_x̄) + pack(b_{x̄+1}), as all items of I belong to the same size category S_{ℓ+ȳ}. We can bound insert(S_{ℓ+ȳ}, b_{x+1}) by the size of the items shifted out of b_x̄ plus the size of the items shifted out of b_{x̄+1}, i.e., by (pack(b_x̄) + ǫ/2^{ℓ+ȳ}) + (pack(b_{x̄+1}) + ǫ/2^{ℓ+ȳ}) = insert(S_{≤ℓ+y}, b_x̄) + 2·ǫ/2^{ℓ+ȳ} ≤ s(i) + 3·Σ_{j=1}^{y} ǫ/2^{ℓ+j} + 2·ǫ/2^{ℓ+ȳ} < s(i) + 3·Σ_{j=1}^{ȳ} ǫ/2^{ℓ+j}.

This yields that insert(S_{≤ℓ+y}, b_x) is bounded by s(i) + 3·Σ_{j=1}^{ȳ} ǫ/2^{ℓ+j} for all bins b_x in Q_k. Now, the size packed at each bin b_x of Q_k is bounded by insert(S_{≤ℓ+y}, b_x) ≤ s(i) + 3·Σ_{j≥1} ǫ/2^{ℓ+j} = s(i) + 3·ǫ/2^ℓ < 7·s(i), using s(i) > ǫ/2^{ℓ+1}. Since there are at most 2/ǫ bins per queue, we can bound the total migration of Insert({i}, b_{S(ℓ)}, Q_k) by 7·s(i)·2/ǫ, i.e., the migration factor is in O(1/ǫ). Note also that s(i) ≤ ǫ/14 for every item i implies that insert(S_{≤ℓ}, b_x) is bounded by ǫ/2 for all x and ℓ.
Suppose that items i_1, . . . , i_n of size interval S_{ℓ+y} have to be removed from bin b_x. In order to fill the emerging free space, items from the same size category are moved out of b_{S(ℓ+y)} into the free space. As the bin b_x may already have additional free space, we need to move at most a size of size(i_1, . . . , i_n) + ǫ/2^{ℓ+y}. A symmetric proof to the one above yields a migration factor of O(1/ǫ).

Handling small items in the general setting
In the scenario with mixed item types (small and large items), we need to be more careful in the creation and deletion of buffer bins. To maintain the approximation guarantee, we have to make sure that, as long as there are bins containing only small items, the remaining free space of all bins can be bounded. Packing small items into empty bins while leaving bins with large items untouched does not lead to a good approximation guarantee, as the free space of the bins containing only large items is never used. In this section we consider the case where a sequence of small items is inserted or deleted. We assume that the packing of the large items does not change; therefore, the number of bins containing large items equals a fixed constant Λ(B).
We denote the last bin of queue Q_i by bb_i, which is a buffer bin. The buffer bin bb_ℓ is special and will be treated differently by the insert and delete operations. Note that the bins containing large items, b_1, . . . , b_{Λ(B)}, are enumerated first. This guarantees that the free space in the bins containing large items is used before new empty bins are opened to pack the small items. However, enumerating the bins containing large items first leads to a problem when, according to Algorithm 5, a buffer bin is filled and a new bin has to be inserted right of the filled bin. Instead of inserting a new empty bin, we insert a heap bin at this position. Since the heap bin contains only large items, we do not violate the order of the small items (see Figure 11). As the inserted heap bin has remaining free space for small items (it is not filled completely), it can be used as a buffer bin. In order to get an idea of how many heap bins we have to reserve for Algorithm 5, where new bins are inserted or deleted, we define a potential function. Whenever a buffer bin is filled or emptied completely, Algorithm 5 is executed and inserts or deletes buffer bins. The potential function Φ(B) thus bounds the number of buffer bins in Q_1, . . . , Q_{ℓ(B)} that are about to be filled or emptied. The potential Φ(B) is defined in terms of the fill ratios r_i = s(bb_i)/c(bb_i) of the buffer bins and the term ǫΛ, where c(bb_i) is the capacity of bb_i that is left free for small items and s(bb_i) is the total size of all small items in bb_i. Note that the potential only depends on the queues Q_1, . . . , Q_{ℓ(B)} and the bins which contain both small and large items. The term r_i is intended to measure how close buffer bin bb_i is to being full: according to Case 1 of the previous section, a new buffer bin is opened when bb_i is filled, i.e., when r_i ≈ 1. Hence the sum Σ_{i=1}^{ℓ−1} r_i bounds the number of buffer bins getting filled. The term ǫΛ in the potential measures the number of bins that need to be inserted because the length of a queue exceeds 2/ǫ, as we then need to split the queue Q_i into two queues of length 1/ǫ according to Case 1.
Each of those queues needs a buffer bin; hence we need to insert a new buffer bin taken from the heap bins. The potential Φ(B) therefore bounds the number of bins which will be inserted as new buffer bins according to Case 1.
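The potential and its role for the heap bins can be illustrated by a small executable sketch. The data model (queues as lists of (capacity, small_size) pairs whose last entry is the buffer bin) is purely illustrative, and the formula Φ(B) = r 1 + · · · + r ℓ−1 + ⌈ǫΛ⌉ − ℓ is our reading of the surrounding discussion, not a verbatim definition from the paper:

```python
from math import ceil, floor

def phi(queues, eps):
    """Potential of the queues Q_1..Q_l containing large items.
    queues: list of queues; each queue is a list of (capacity, small_size)
    pairs and its last entry is the buffer bin bb_i.
    Assumes Phi(B) = sum_{i<l} r_i + ceil(eps * Lambda) - l, with fill
    ratio r_i = s(bb_i) / c(bb_i), as suggested by the text."""
    lam = sum(len(q) for q in queues)   # Lambda: number of bins with large items
    ell = len(queues)                   # l: number of queues
    # fill ratios of the buffer bins of Q_1 .. Q_{l-1}
    r = [small / cap for (cap, small) in (q[-1] for q in queues[:-1])]
    return sum(r) + ceil(eps * lam) - ell

def heap_bins_required(queues, eps):
    """Property (4) from the text: keep floor(Phi(B)) heap bins in reserve."""
    return floor(phi(queues, eps))
```

Note that with queue lengths as in property (3), ǫΛ ≥ ℓ − 1 + ǫ, so the sketch's `phi` is always nonnegative.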
Just like in the previous section, we propose the following properties to bound the approximation ratio and the migration factor. The first three properties remain the same as in Section 4.1, and the last property gives the desired connection between the potential function and the heap bins.
(1) For every item i ∈ b d with size s(i) ∈ S j for some j, d ∈ N, there is no item i ′ ∈ b d ′ with size s(i ′ ) ∈ S j ′ such that d ′ > d and j ′ > j. This means: items are ordered from left to right by their size intervals.
(2) Every normal bin of b 1 , . . . , b m is filled completely.
(3) The length of each queue is at least 1 /ǫ and at most 2 /ǫ, except for Q ℓ and Q d , whose lengths are only required to satisfy 1 ≤ |Q ℓ |, |Q d | ≤ 1 /ǫ. Furthermore, |Q ℓ+1 | = 1 and 1 ≤ |Q ℓ+2 | ≤ 2 /ǫ.
(4) The number of heap bins h(B) satisfies h(B) = ⌊Φ(B)⌋.
Since bins containing large items are enumerated first, property (1) implies in this setting that bins with large items are filled before bins that contain no large items. Note also that property (3) implies that Φ(B) ≥ 0 for arbitrary packings B, since ǫΛ ≥ ℓ − 1 + ǫ and thus ⌈ǫΛ⌉ ≥ ℓ. The following lemma proves that a packing which fulfills properties (1) to (4) provides a solution that is close to the optimum. According to property (4) we have to guarantee that whenever the rounded potential ⌊Φ(B)⌋ changes, the number of heap bins is adjusted accordingly. The potential ⌊Φ(B)⌋ might increase by 1 due to an insert operation; then the number of heap bins has to be incremented. If the potential ⌊Φ(B)⌋ decreases due to a delete operation, the number of heap bins has to be decremented. In order to maintain property (4) we have to make sure that the number of heap bins can be adjusted whenever ⌊Φ(B)⌋ changes. Therefore we define the fractional part {Φ(B)} = Φ(B) − ⌊Φ(B)⌋ of Φ(B) and put it in relation to the fill ratio r ℓ of bb ℓ (the last buffer bin containing large items) through the following Heap Equation: | (1 − r ℓ ) − {Φ(B)} | ≤ s / c(bb ℓ ), where s is the biggest size of a small item appearing in bb ℓ . The Heap Equation ensures that the potential Φ(B) is correlated to 1 − r ℓ ; the two values may only differ by the small term s / c(bb ℓ ).
Note that the Heap Equation can always be fulfilled by shifting items from bb ℓ to queue Q ℓ+1 or vice versa. Assuming the Heap Equation holds and the potential ⌊Φ(B)⌋ increases by 1, we can guarantee that the buffer bin bb ℓ is nearly empty. Hence the remaining items can be shifted to Q ℓ+1 and bb ℓ can be moved to the heap bins; the bin left of bb ℓ becomes the new buffer bin of Q ℓ . Vice versa, if ⌊Φ(B)⌋ decreases, we know by the Heap Equation that bb ℓ is nearly full; hence we can label bb ℓ as a normal bin and open a new buffer bin from the heap at the end of queue Q ℓ . Our goal is to ensure that the Heap Equation is fulfilled at every step of the algorithm, along with properties (1) to (4). Therefore we enhance the delete and insert operations from the previous section. Whenever a small item i is inserted or removed, we perform the operations described in Algorithm 4 (which can be applied to bins of different capacities) in the previous section. This maintains properties (1) to (3). If items are inserted into or deleted from queue Q ℓ (the last queue containing large and small items), the recursion does not halt at bb ℓ ; instead it goes further and halts at bb ℓ+1 . So, when items are inserted into bin bb ℓ according to Algorithm 4, the bin bb ℓ is treated as a normal bin. Items are shifted from bb ℓ to queue Q ℓ+1 until the Heap Equation is fulfilled. This way we can make sure that the Heap Equation remains fulfilled whenever an item is inserted into or removed from Q ℓ .
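The invariant can be checked mechanically. The following sketch tests the Heap Equation for a given packing state; the tolerance s / c(bb ℓ ) is taken from the discussion above, and all parameter names are illustrative:

```python
from math import floor

def heap_equation_holds(phi_value, r_ell, s_max, cap_ell):
    """Heap Equation as described in the text: the fractional part {Phi(B)}
    and 1 - r_ell may differ by at most s / c(bb_ell), where s_max is the
    biggest small item in bb_ell and cap_ell the capacity of bb_ell."""
    frac = phi_value - floor(phi_value)   # fractional part {Phi(B)}
    return abs((1.0 - r_ell) - frac) <= s_max / cap_ell
```

If the check fails because 1 − r ℓ is too large, items are shifted from Q ℓ+1 into bb ℓ ; if it fails in the other direction, items are shifted from bb ℓ to Q ℓ+1 .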
Algorithm 7 (Change in the potential).
Like in the last section, we also have to describe how to handle buffer bins that are emptied or filled completely. We apply the same algorithm when a buffer bin is emptied or filled, but now have to distinguish between buffer bins of Q 1 , . . . , Q ℓ and buffer bins of Q ℓ+1 , . . . , Q d . Since the buffer bins in Q ℓ+1 , . . . , Q d all have capacity 1, we use the same technique as in the last section. If a buffer bin in Q 1 , . . . , Q ℓ is emptied or filled, we also use a similar technique, but instead of inserting a new empty bin as a new buffer bin, we take an existing bin out of the heap. And if a buffer bin from Q 1 , . . . , Q ℓ is emptied (it still contains large items), it is put into the heap. This way we make sure that there are always sufficiently many completely filled bins containing large items.
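The case distinction above can be summarized in a short behavioral sketch. The helpers and the bin representation are hypothetical (the paper's Algorithms 5 and 7 carry the actual bookkeeping); the sketch only captures where replacement buffer bins come from and where retired ones go:

```python
def replacement_buffer_bin(i, ell, heap, new_empty_bin):
    """Pick the replacement when the buffer bin of Q_i was filled completely:
    for i <= ell the new buffer bin is taken from the heap (a bin of large
    items with free capacity for small items); for i > ell a fresh empty
    bin is opened, as in the small-items-only section."""
    if i <= ell:
        return heap.pop()
    return new_empty_bin()

def retire_buffer_bin(i, ell, bb, heap, discard):
    """Dispose of a buffer bin of Q_i that was emptied of small items:
    for i <= ell it still contains large items and is put into the heap;
    for i > ell it is simply removed from the packing."""
    if i <= ell:
        heap.append(bb)
    else:
        discard(bb)
```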
Lemma 11. Let B be a packing which fulfills properties (1) to (4) and the Heap Equation.
Applying Algorithm 7 or Algorithm 5 on B during an insert/delete operation yields a packing B ′ which also fulfills properties (1) to (4). The migration needed to fulfill the Heap Equation is bounded by O( 1 /ǫ).

Proof. Analysis of Algorithm 7.
Properties (1) and (2) are never violated by the algorithm, because items are only moved by shift operations. Property (3) is never violated because, by construction, no queue (except for Q ℓ ) exceeds length 2 /ǫ or falls below 1 /ǫ. Algorithm 7 is called during an insert or delete operation; it is executed when items are shifted into or out of a buffer bin bb j such that ⌊Φ(B)⌋ changes.
In the following we prove property (4) for the packing B ′ , assuming by induction that ⌊Φ(B)⌋ = h(B) holds. Furthermore, we give a bound on the migration needed to fulfill the Heap Equation.
• Case 1: The potential ⌊Φ(B)⌋ increases during an insert operation, i. e., ⌊Φ(B ′ )⌋ = ⌊Φ(B)⌋ + 1. Let item i * be the first item that is shifted into a bin bb j such that ⌊Φ(B) + r * ⌋ = ⌊Φ(B ′ )⌋, where r * is the fill ratio added to bb j by item i * . In this situation, the fractional part changes from {Φ(B)} ≈ 1 to {Φ(B ′ )} ≈ 0.
- In the case that |Q ℓ | > 1, the buffer bin bb ℓ is emptied and moved to the heap bins. The bin left of bb ℓ becomes the new buffer bin bb ′ ℓ of Q ℓ . Hence the number of heap bins increases and we have h(B ′ ) = h(B) + 1 = ⌊Φ(B)⌋ + 1 = ⌊Φ(B ′ )⌋, which implies property (4).
To give a bound on the total size of items that need to be shifted out of (or into) bin bb ′ ℓ to fulfill the Heap Equation, we bound the term |(1 − r ′ ℓ ) − {Φ(B ′ )}|, where r ′ ℓ is the fill ratio of bb ′ ℓ and s(i * ) is the size of the shifted item. If the term |(1 − r ′ ℓ ) − {Φ(B ′ )}| can be bounded by some C, the fill ratio of bb ′ ℓ has to be adjusted to fulfill the Heap Equation according to the insert and delete operation; this can be done by shifting items of total size at most C out of (or into) bb ′ ℓ . The bin bb ′ ℓ is completely filled by property (2) and therefore has a fill ratio of r ′ ℓ ≥ (c(bb ′ ℓ ) − s)/c(bb ′ ℓ ) ≥ 1 − 2 s /ǫ, where s ≤ ǫ2^k is the largest size of a small item appearing in bb ′ ℓ and S k is the largest size category appearing in bb ′ ℓ . Let k ′ be the largest size category appearing in bin bb j . As the bin bb ′ ℓ is right of bb j , we know k ≤ k ′ (property (1)) and hence s ≤ 2s(i * ). Since {Φ(B ′ )} ≈ 0, we get |(1 − r ′ ℓ ) − {Φ(B ′ )}| ≤ 2 s /ǫ ≤ 4 s(i * ) /ǫ. Hence the Heap Equation can be fulfilled by shifting items of total size O( s(i * ) /ǫ) at the end of the insert operation.
- If |Q ℓ | = 1, a set of items in the buffer bin bb ℓ−1 is shifted to Q ℓ+1 to fulfill the Heap Equation; since items are being removed from bb ℓ−1 , the potential decreases. In the case that |Q ℓ | = 1 /ǫ, a new queue Q ℓ+1 is created which consists of a single buffer bin (inserted from the heap) that does not contain small items, i. e., h(B ′′ ) = h(B ′ ) − 1 = h(B) − 2, where B ′′ is the packing after the insertion of item i * . Let Φ(B ′′ ) be the potential after the queue Q ℓ+1 is created; then ⌊Φ(B ′′ )⌋ = h(B ′′ ), as the buffer bin bb ℓ is now counted in the potential but does not contain any small items, and thus r ′′ ℓ = 0.
Analysis of Algorithm 5.
Algorithm 5 is executed when an item i * is moved into a buffer bin bb j such that bb j is completely filled, or when the buffer bin bb j is emptied by moving the last item i * out of the bin. As in the analysis of Algorithm 7, properties (1) and (2) are never violated by the algorithm, because items are only moved by shift operations. Property (3) is never violated because, by construction, no queue (except for Q ℓ ) exceeds length 2 /ǫ or falls below 1 /ǫ.
It remains to prove property (4) and a bound on the migration needed to fulfill the Heap Equation.
• Case 1: An item i * is moved into the buffer bin bb j such that bb j is filled completely for some j < ℓ. According to Algorithm 5, a bin is taken out of the heap and labeled as the new buffer bin bb ′ j , with fill ratio r ′ j = 0, of queue Q j ; i. e., the number of heap bins decreases by 1. Let Φ(B) be the potential before Algorithm 5 is executed and let Φ(B ′ ) be the potential after Algorithm 5 is executed. Since r ′ j = 0, the new potential is Φ(B ′ ) = Φ(B) − r j ≈ Φ(B) − 1 (assuming ℓ(B) = ℓ(B ′ ), as the splitting of a queue is handled later on). According to the Heap Equation, items have to be shifted out of bb ℓ such that the fill ratio r ℓ changes from r ℓ ≤ 1 − r j to r ℓ ≈ 1. Therefore we know that, as items are shifted out of bb ℓ to fulfill the Heap Equation, the buffer bin bb ℓ is emptied and moved to the heap (see Algorithm 7). We obtain for the number of heap bins that h(B ′ ) = h(B) + 1 − 1 = h(B) and hence h(B ′ ) = ⌊Φ(B ′ )⌋ (property (4)).
• Case 2: Algorithm 5 is executed if bin bb j is emptied due to the removal of an item i * as a result of a Delete(i, b x , Q j ) call. According to Algorithm 5, the emptied bin is moved to the heap, i. e., the number of heap bins increases by 1. Depending on the lengths of Q j and Q j+1 , the bin right of bb j or the bin left of bb j is chosen as the new buffer bin bb ′ j . The potential changes to Φ(B ′ ) = Φ(B) + r ′ j , where r ′ j is the fill ratio of bb ′ j , as in Case 1.
- If ⌊Φ(B ′ )⌋ = ⌊Φ(B)⌋ + 1, property (4) is fulfilled, since the number of heap bins increases: h(B ′ ) = h(B) + 1.
As bin bb ′ j is completely filled, its fill ratio is bounded by r ′ j ≥ 1 − 2 s /ǫ, where s is the largest size appearing in bb ′ j . Since the bin b x has to be left of bb j , we know that s ≤ 2s(i). We obtain for the fractional part of the potential that {Φ(B)} − {Φ(B ′ )} = 1 − r ′ j ≤ 2 s /ǫ ≤ 4 s(i) /ǫ. Hence the Heap Equation can be fulfilled by shifting items of total size O( s(i) /ǫ) at the end of the remove operation.
- In the case that ⌊Φ(B ′ )⌋ = ⌊Φ(B)⌋ = ⌊Φ(B) + r ′ j ⌋, we know that the fractional part changes, similarly to Case 1, by {Φ(B ′ )} = {Φ(B)} + r ′ j . Since the bin bb ′ j is filled completely, we know that r ′ j ≥ (c(bb ′ j ) − s)/c(bb ′ j ) ≈ 1 and hence {Φ(B ′ )} ≥ r ′ j ≈ 1 and {Φ(B)} ≤ 1 − r ′ j ≈ 0. According to the Heap Equation, items have to be shifted into bb ℓ such that the fill ratio r ℓ changes from r ℓ ≈ 0 to r ℓ ≈ 1. Therefore we know that, as items are shifted into bb ℓ to fulfill the Heap Equation, bb ℓ is filled completely and a bin from the heap is labeled as the new buffer bin of Q ℓ (see Algorithm 7). We obtain for the number of heap bins that h(B ′ ) = h(B) − 1 + 1 = h(B) and hence h(B ′ ) = ⌊Φ(B ′ )⌋ (property (4)). The Heap Equation can then be fulfilled, similarly to Case 1, by shifting items of total size O( s(i) /ǫ).
Using the above lemma, we can finally prove the following central theorem, which states that the migration of an insert/delete operation is bounded and that properties (1) to (4) are maintained.

Theorem 6.
(i) Let B be a packing which fulfills properties (1) to (4) and the Heap Equation.
Applying an insert(i, b x , Q j ) or delete(i, b x , Q j ) operation on a packing B yields a packing B ′ which also fulfills properties (1) to (4) and the Heap Equation.
(ii) The migration factor of an insert/delete operation is bounded by O( 1 /ǫ).
Proof. Suppose a small item i with size s(i) is inserted into or deleted from queue Q j . The insert and delete operations basically consist of an application of Algorithm 4 and iterated use of steps (1) to (3), where Algorithms 5 and 7 are used and items in bb ℓ are moved to Q ℓ+1 and vice versa. Let B be the packing before the insert/delete operation and let B ′ be the packing after the operation. Proof of (i): Suppose by induction that properties (1) to (4) and the Heap Equation are fulfilled for the packing B. We prove that property (4) and the Heap Equation remain fulfilled after applying an insert or delete operation on B, resulting in the new packing B ′ . Properties (1) to (3) hold by Lemma 9 and Lemma 11. Since the potential and the number of heap bins only change as a result of Algorithm 5 or Algorithm 7, property (4) also remains fulfilled. By definition of step 4 of the insert operation, items are shifted from bb ℓ to Q ℓ+1 until the Heap Equation is fulfilled. By definition of step 4 of the delete operation, the size of small items in bb ℓ is adjusted such that the Heap Equation is fulfilled. Hence the Heap Equation is always fulfilled after application of Insert(i, b x , Q j ) or Delete(i, b x , Q j ).
Proof of (ii): According to Lemma 9, the migration factor of the usual insert operation is bounded by O( 1 /ǫ). By Lemma 11, the migration in Algorithm 5 and Algorithm 7 is also bounded by O( 1 /ǫ). It remains to bound the migration of step 4 of the insert/delete operation. To this end, we analyze the total size of items that have to be shifted out of or into bb ℓ in order to fulfill the Heap Equation.
Since the total size of all items i 1 , . . . , i k that are inserted into bb j is bounded by 7s(i) (see Lemma 9) and the capacity of bb j is at least ǫ /14, the potential Φ(B) changes by at most O( s(i) /ǫ). By Lemma 11, the size of items that need to be shifted out of or into bb ℓ as a result of Algorithm 5 or 7 is also bounded by O( s(i) /ǫ). Therefore the size of all items that need to be shifted out of or into bb ℓ in step (4) of the insert/delete operation is bounded by O( s(i) /ǫ).
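The first bound can be made explicit. Using the constants cited from Lemma 9 (total inserted size at most 7s(i), capacity c(bb j ) ≥ ǫ/14), a short derivation (the intermediate step is our reading) gives:

```latex
\[
  |\Phi(B') - \Phi(B)|
  \;\le\; \frac{\sum_{k} s(i_k)}{c(bb_j)}
  \;\le\; \frac{7\,s(i)}{\epsilon/14}
  \;=\; \frac{98\,s(i)}{\epsilon}
  \;=\; O\!\left(\frac{s(i)}{\epsilon}\right).
\]
```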
Shifting a size of O( s(i) /ǫ) to Q ℓ+1 or vice versa leads to a migration factor of O( 1 /ǫ 2 ) (Lemma 9). Fortunately, we can modify the structure of the queues Q ℓ+1 and Q ℓ+2 such that we obtain a smaller migration factor. Assuming that Q ℓ+1 consists of a single buffer bin, i. e., |Q ℓ+1 | = 1, items can be shifted directly from bb ℓ to bb ℓ+1 , and we therefore obtain a migration factor of O( 1 /ǫ). A structure with |Q ℓ+1 | = 1 and 1 ≤ |Q ℓ+2 | ≤ 2 /ǫ (see property (3)) can be maintained by changing Algorithm 5 in the following way: • If bb ℓ+1 is filled completely, move the filled bin to Q ℓ+2 .
• If bb ℓ+1 is being emptied, remove the bin and label the first bin of Q ℓ+2 as bb ℓ+1 .
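These two rules keep |Q ℓ+1 | = 1 at all times. A list-based sketch (the data model is illustrative; we assume the filled bin joins Q ℓ+2 on the left to preserve the size-interval order):

```python
def bb_ell1_filled(q_ell1, q_ell2, new_empty_bin):
    """bb_{l+1} is filled completely: move the filled bin to the front of
    Q_{l+2} and open a fresh bin, so that |Q_{l+1}| stays 1."""
    q_ell2.insert(0, q_ell1.pop())
    q_ell1.append(new_empty_bin())

def bb_ell1_emptied(q_ell1, q_ell2):
    """bb_{l+1} is emptied: remove it and promote the first bin of Q_{l+2}
    to be the new bb_{l+1}."""
    q_ell1.pop()
    q_ell1.append(q_ell2.pop(0))
```

Since bb ℓ and bb ℓ+1 are adjacent under this scheme, a shifted size of O( s(i) /ǫ) moves between exactly two bins, which is where the improved O( 1 /ǫ) migration factor comes from.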

Handling the General Setting
In the previous section we described how to handle small items in a mixed setting. It remains to describe how large items are handled in this mixed setting. Algorithm 1 describes how to handle large items only. However, in a mixed setting, where there are also small items, we have to make sure that properties (1) to (4) and the Heap Equation remain fulfilled as a large item is inserted or deleted. Algorithm 1 changes the configuration of at most O( 1 /ǫ 2 · log 1 /ǫ) bins (Theorem 5). Thereby the size of large items in a bin b (which equals 1 − c(b)) changes, as Algorithm 1 may increase or decrease the capacity of a bin. Changing the capacity of a bin may violate properties (2) to (4) and the Heap Equation. We describe an algorithm that changes the packing of the small items such that all properties and the Heap Equation are fulfilled again after Algorithm 1 has been applied.
The following algorithm describes how the length of a queue Q j is adjusted if the length |Q j | falls below 1 /ǫ: Algorithm 8 (Adjust the queue length).
• Remove all small items I S from bb j and add bb j to the heap.
• Merge Q j with Q j+1 . The merged queue is called Q j .
• If |Q j | > 2 /ǫ, split queue Q j by adding a heap bin in the middle.
• Insert items I S using Algorithm 6.
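The steps of Algorithm 8 can be sketched as a minimal executable example. The data model (queues as lists of bins, bins as lists of small items, the last bin of a queue being its buffer bin) is an illustrative assumption, and reinsertion via Algorithm 6 is left to the caller:

```python
def adjust_queue_length(queues, j, eps, heap):
    """Merge a too-short queue Q_j with Q_{j+1}; returns the small items
    removed from bb_j, which must be reinserted (Algorithm 6)."""
    bb = queues[j].pop()           # buffer bin bb_j is the last bin of Q_j
    removed = list(bb)             # remove all small items I_S from bb_j
    bb.clear()
    heap.append(bb)                # bb_j (large items only now) joins the heap
    queues[j] = queues[j] + queues.pop(j + 1)   # merge Q_j with Q_{j+1}
    if len(queues[j]) > 2 / eps:   # split by adding a heap bin in the middle
        mid = len(queues[j]) // 2
        left = queues[j][:mid] + [heap.pop()]   # heap bin = buffer bin of left part
        right = queues[j][mid:]
        queues[j:j + 1] = [left, right]
    return removed
```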
The following algorithm describes how the number of heap bins can be adjusted.