On ($1$, $\epsilon$)-Restricted Max-Min Fair Allocation Problem

We study the max-min fair allocation problem in which a set of $m$ indivisible items are to be distributed among $n$ agents such that the minimum utility among all agents is maximized. In the restricted setting, the utility of each item $j$ on agent $i$ is either $0$ or some non-negative weight $w_j$. For this setting, Asadpour et al. showed that a certain configuration-LP can be used to estimate the optimal value within a factor of $4+\delta$, for any $\delta>0$, which was recently extended by Annamalai et al. to give a polynomial-time $13$-approximation algorithm for the problem. For hardness results, Bezáková and Dani showed that it is \NP-hard to approximate the problem within any ratio smaller than $2$. In this paper we consider the $(1,\epsilon)$-restricted max-min fair allocation problem in which each item $j$ is either heavy $(w_j = 1)$ or light $(w_j = \epsilon)$, for some parameter $\epsilon \in (0,1)$. We show that the $(1,\epsilon)$-restricted case is also \NP-hard to approximate within any ratio smaller than $2$. Hence, this simple special case is still algorithmically interesting. Using the configuration-LP, we are able to estimate the optimal value of the problem within a factor of $3+\delta$, for any $\delta>0$. Extending this idea, we also obtain a quasi-polynomial time $(3+4\epsilon)$-approximation algorithm and a polynomial time $9$-approximation algorithm. Moreover, we show that as $\epsilon$ tends to $0$, the approximation ratio of our polynomial-time algorithm approaches $3+2\sqrt{2}\approx 5.83$.


Introduction
We consider in this paper the Max-Min Fair Allocation problem. A problem instance is defined by $(A, B, w)$, where $A$ is a set of $n$ agents, $B$ is a set of $m$ items, and the utility of item $j \in B$ perceived by agent $i \in A$ is $w_{ij}$. An allocation of items to agents is a function $\sigma : B \to A$ such that $\sigma(j) = i$ iff item $j$ is assigned to agent $i$. The max-min fair allocation problem aims at finding an allocation that maximizes the minimum total weight received by an agent, $\min_{i\in A} \sum_{j\in\sigma^{-1}(i)} w_{ij}$. The problem is also known as the Santa Claus Problem [4]. In the restricted version of the problem, it is assumed that each item $j$ has a fixed weight $w_j$ such that for each $i \in A$ and $j \in B$, $w_{ij} \in \{0, w_j\}$, i.e., if an agent has non-zero utility for an item $j$, the utility is $w_j$. We focus in this paper on the restricted version of the problem (the restricted allocation problem) and refer to the problem with general weights as the unrestricted allocation problem. For the restricted allocation problem, let $B_i = \{j \in B : w_{ij} > 0\}$ be the set of items agent $i$ is interested in. For a collection of items $S \subseteq B$, let $w(S) = \sum_{j\in S} w_j$.
The problem can be naturally formulated as an integer program, with a variable $x_{ij}$ for each $i \in A$ and $j \in B$ indicating whether item $j$ is assigned to agent $i$. Its linear programming relaxation, the Assignment-LP (ALP), is shown below:
$$\max \; T \quad \text{s.t.} \quad \sum_{j \in B_i} w_j x_{ij} \ge T \;\; \forall i \in A, \qquad \sum_{i \in A} x_{ij} \le 1 \;\; \forall j \in B, \qquad x_{ij} \ge 0.$$
Let OPT be the maximum value of the restricted allocation problem, i.e., the largest value such that in some optimal allocation every agent is assigned a set of items with total weight at least OPT. Bezáková and Dani [5] showed that any feasible solution $(x, T)$ for the ALP can be rounded into an allocation in which every agent $i$ receives total weight at least $T - \max_{j\in B_i} w_j$, which implies $\mathrm{OPT} \ge T^* - \max_{j\in B} w_j$, where $T^*$ is the optimal value of the ALP. However, the above result does not yield any guarantee on the integrality gap. Indeed, it is easy to see that the integrality gap of the ALP is unbounded, since it is possible to have a feasible solution with $T > 0$ while $\mathrm{OPT} = 0$ (e.g., when $|B| < |A|$). It was also shown in [5] that it is NP-hard to approximate the problem within any ratio smaller than 2, by a reduction from 3-dimensional matching.
To overcome the limitation of the ALP, a stronger linear program called the Configuration-LP (CLP) was proposed by Bansal and Sviridenko [4], who used it to obtain an $O(\frac{\log n}{\log\log n})$-approximation algorithm for the restricted allocation problem. For any $T > 0$, we call an allocation a $T$-allocation if it assigns to every agent a set of items with total weight at least $T$. For each agent $i$, let $C(i, T) = \{S \subseteq B_i : w(S) \ge T\}$ be the set of configurations of $i$ with sufficient utility. The CLP is a feasibility LP associated with $T$ indicating whether it is possible to (fractionally) assign to each agent one unit of configurations with sufficient utility. The LP (CLP($T$)) and its dual are shown as follows.

CLP($T$): $\quad \sum_{S \in C(i,T)} x_{i,S} \ge 1 \;\; \forall i \in A; \qquad \sum_{i\in A} \sum_{S \in C(i,T): j \in S} x_{i,S} \le 1 \;\; \forall j \in B; \qquad x \ge 0.$

Dual: $\quad \max \sum_{i\in A} y_i - \sum_{j\in B} z_j \quad$ s.t. $\quad y_i \le \sum_{j\in S} z_j, \;\; \forall i \in A, S \in C(i, T); \qquad y, z \ge 0.$

Although CLP($T$) has an exponential number of variables, it is observed in [4] that the separation problem for the dual LP is the minimum knapsack problem: given a candidate dual solution $(y, z)$, a violated constraint can be identified by finding an agent $i$ and a configuration $S \in C(i, T)$ such that $y_i > \sum_{j\in S} z_j$. Hence, we can solve CLP($T$) to any desired precision. Note that any feasible solution $x$ of CLP($T$) induces a feasible solution $\bar{x}$ for the ALP by setting $\bar{x}_{ij} = \sum_{S \in C(i,T): j \in S} x_{i,S} \le 1$ for all $i \in A$ and $j \in B$.

Definition 1.2 (Integrality Gap) Let $T^*$ be the maximum value such that CLP($T^*$) is feasible. The ratio $\frac{T^*}{\mathrm{OPT}}$ is known as the integrality gap.
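To make the separation step concrete, the following sketch (function names and the integral scaling are our own assumptions, not from the paper) solves the minimum-knapsack separation problem by dynamic programming: for a candidate dual solution $(y, z)$ and agent $i$, it finds the cheapest configuration $S \subseteq B_i$ (cost $\sum_{j\in S} z_j$) with $w(S) \ge T$, assuming utilities have been scaled to integers.

```python
def min_knapsack(weights, costs, T):
    # Minimum total cost of a subset of items whose total weight is >= T.
    # weights: positive integers (utilities scaled to integers)
    # costs:   non-negative dual values z_j
    INF = float('inf')
    # dp[t] = min cost of a subset whose total weight, capped at T, equals t
    dp = [0.0] + [INF] * T
    for w, c in zip(weights, costs):
        # iterate t downwards so each item is used at most once (0/1 choice)
        for t in range(T, -1, -1):
            if dp[t] < INF:
                nt = min(T, t + w)
                if dp[t] + c < dp[nt]:
                    dp[nt] = dp[t] + c
    return dp[T]  # INF if no subset reaches total weight T
```

If the returned minimum cost for agent $i$ is smaller than $y_i$, the minimizing configuration gives a violated dual constraint, which is exactly what the ellipsoid method needs to solve CLP($T$) to any desired precision.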
Note that any upper bound $c$ on the integrality gap implies that we can estimate the optimal value of the problem within a factor of $c + \delta$, for any $\delta > 0$. It is shown in [4] that the integrality gap of the CLP for the unrestricted allocation problem is bounded by $O(\sqrt{n})$. By repeatedly using the Lovász Local Lemma, Feige [8] proved that the integrality gap of the CLP for the restricted allocation problem is bounded by a constant. The result was later made constructive by Haeupler [11], who obtained the first constant-approximation algorithm for the restricted allocation problem, although the constant is unspecified. The integrality gap of the CLP was later shown in [2] to be no larger than 4, via a local search technique developed from Haxell's method [12] for finding perfect matchings in bipartite hypergraphs. However, the algorithm is not guaranteed to terminate in polynomial time. It was later shown by Polacek and Svensson [15] that a simple modification of the local search algorithm improves the running time from $2^{O(n)}$ to $n^{O(\log n)}$, which implies a quasi-polynomial-time $(4 + \delta)$-approximation algorithm, for any $\delta > 0$. Very recently, Annamalai et al. [1] further extended the local search technique developed in [2,15] and obtained a polynomial-time 13-approximation algorithm for the restricted allocation problem.

The $(1,\epsilon)$-Restricted Allocation Problem
We consider in this paper the $(1,\epsilon)$-restricted allocation problem, in which each item $j \in B$ is either heavy ($w_j = 1$) or light ($w_j = \epsilon$), for some $\epsilon \in (0,1)$. Even as the simplest case of the allocation problem, it is not well understood: the current best approximation results for this special case are those known for the general restricted allocation problem. Indeed, we believe that a better understanding of the $(1,\epsilon)$-restricted setting will shed light on improving the restricted (and even the unrestricted) allocation problem.
The $(1,\epsilon)$-restricted setting has been studied under different names. Golovin [10] studied the "Big Goods/Small Goods" max-min allocation problem, which is exactly the problem we consider in this paper: a small item has weight either 0 or 1 for each agent, and a big item has weight either 0 or $x > 1$ for each agent. He gave an $O(\sqrt{n})$-approximation algorithm for this problem and proved that it is NP-hard to approximate it within any ratio smaller than 2, by giving a hard instance with $x = 2$. We show in this paper that the inapproximability result holds for any fixed $x \ge 2$, by generalizing the hardness instance of [5]. Later, Khot and Ponnuswami [13] generalized the "Big Goods/Small Goods" setting and considered the $(0, 1, U)$-max-min allocation problem with sub-additive utility functions, in which the weight of an item to an agent is either $0$, $1$ or $U$ for some $U > 1$; they obtained an $n^{\alpha}$-approximation algorithm with $m^{O(1)} n^{O(\alpha)}$ running time, for any $\alpha \le \frac{n}{2}$. Note that in their setting an item can have weight 1 for one agent and $U$ for another. In the seminal paper, Bansal and Sviridenko [4] obtained an $O(\frac{\log n}{\log\log n})$-approximation algorithm for the restricted allocation problem by first reducing the problem to the $(1,\epsilon)$-restricted case for an arbitrarily small $\epsilon > 0$, losing only a constant factor in the approximation ratio, and then giving an $O(\frac{\log n}{\log\log n})$-approximation algorithm for the $(1,\epsilon)$-restricted case.
The max-min fair allocation problem is closely related to the problem of scheduling jobs on unrelated machines to minimize makespan, which we call the min-max allocation problem. The problem has the same input as the max-min fair allocation problem but aims at finding an allocation that minimizes $\max_{i\in A} \sum_{j\in\sigma^{-1}(i)} w_{ij}$. Lenstra et al. [14] gave a 2-approximation algorithm for the min-max allocation problem by rounding the ALP for the problem. Applying the techniques developed for the max-min fair allocation problem, Svensson [16] gave a $\frac{5}{3}+\epsilon$ upper bound on the CLP's integrality gap for the $(1,\epsilon)$-restricted min-max allocation problem and then extended it to a 1.9412 upper bound for the general case. However, that algorithm is not known to converge in polynomial time. Recently, Chakrabarty et al. [7] obtained the first $(2 - \delta)$-approximation algorithm for the $(1,\epsilon)$-restricted min-max allocation problem, for some constant $\delta > 0$. They considered the case when $\epsilon$ is close to 0, since it is easy to obtain a $(2 - \epsilon)$-approximation algorithm for the $(1,\epsilon)$-restricted min-max allocation problem.
Since the $(1,\epsilon)$-restriction is considered interesting by the community for the min-max setting, in this paper we consider the same restriction for the max-min setting.

Summary of Our Results
We first show that we can slightly improve the hardness result of Golovin [10] for the $(1,\epsilon)$-restricted allocation problem. Note that in the unweighted case ($\epsilon = 1$), the problem can be solved in polynomial time by combining a max-flow computation between $A$ and $B$ with a binary search on the optimal value. The above algorithm for the unweighted case actually provides a trivial $\frac{1}{\epsilon}$-approximation algorithm for the $(1,\epsilon)$-restricted allocation problem. Hence, we have a polynomial-time algorithm with ratio $\frac{1}{\epsilon} < 2$ for the problem when $\epsilon > 0.5$.

Theorem 1.1 (Inapproximability) For any $\epsilon \le 0.5$, it is NP-hard to approximate the $(1,\epsilon)$-restricted allocation problem within any ratio smaller than 2.
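The unweighted algorithm just sketched can look like the following (a minimal illustration with our own function names; `interest[i]` lists the items agent $i$ is interested in): whether every agent can receive $k$ interesting items is a max-flow feasibility question, and the optimum is found by binary search on $k$.

```python
from collections import deque

def max_flow(cap, s, t):
    # Edmonds-Karp max flow on an adjacency-matrix capacity graph.
    n, flow = len(cap), 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:  # BFS for a shortest augmenting path
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow
        v, aug = t, float('inf')      # bottleneck capacity on the path
        while v != s:
            aug = min(aug, cap[parent[v]][v])
            v = parent[v]
        v = t                         # push the flow, update residuals
        while v != s:
            cap[parent[v]][v] -= aug
            cap[v][parent[v]] += aug
            v = parent[v]
        flow += aug

def feasible(n_agents, n_items, interest, k):
    # True iff every agent can receive k distinct interesting items:
    # source -> agent (cap k), agent -> item (cap 1), item -> sink (cap 1).
    N = n_agents + n_items + 2
    s, t = N - 2, N - 1
    cap = [[0] * N for _ in range(N)]
    for i in range(n_agents):
        cap[s][i] = k
    for i, items in enumerate(interest):
        for j in items:
            cap[i][n_agents + j] = 1
    for j in range(n_items):
        cap[n_agents + j][t] = 1
    return max_flow(cap, s, t) == n_agents * k

def opt_unweighted(n_agents, n_items, interest):
    # Binary search for the largest feasible k.
    lo, hi = 0, n_items // max(n_agents, 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if feasible(n_agents, n_items, interest, mid):
            lo = mid
        else:
            hi = mid - 1
    return lo
```

Running this algorithm with all item weights treated as 1 and multiplying the answer by $\epsilon$ is exactly the trivial $\frac{1}{\epsilon}$-approximation mentioned above.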
Our reduction shows that it is NP-hard to estimate the optimal value of the problem within any ratio smaller than 2. Hence, the above hardness result implies that the integrality gap of CLP($T$) is at least 2 unless P = NP. Actually, we are able to remove the complexity assumption by giving an instance with integrality gap 2 in Section 5.
For the restricted allocation problem, the best hardness result on the approximation ratio is 2, while the best upper bound on the integrality gap of CLP($T$) is 4. It is not known which bound, if either, is tight. As a step towards closing this gap, we analyze the integrality gap of CLP($T$) for the $(1,\epsilon)$-restricted case and show that the upper bound of 4 is not tight in this case (Section 2). Our upper bound on the integrality gap implies that in polynomial time we can estimate OPT for the $(1,\epsilon)$-restricted allocation problem within a factor of $3 + \delta$, for any $\delta > 0$. We also observe that by picking the "closest addable edge", the running time of the local search algorithm can be improved to quasi-polynomial (Section 3). The idea was first used by Polacek and Svensson [15] to obtain their $(4 + \delta)$-approximation algorithm for the restricted allocation problem. However, instead of constructing feasible dual solutions for CLP($T$), our analysis is based only on the assumption $T \le \mathrm{OPT}$ and is a direct extension of our proof of the integrality gap of CLP($T$). We further extend the quasi-polynomial approximation algorithm, combining it with the lazy update idea of [1], to obtain a polynomial-time approximation algorithm (Section 4).
Interestingly, while our quasi-polynomial- and polynomial-time algorithms extend the integrality gap analysis by incorporating ideas for improving the running time of local search, unlike existing techniques, our algorithms and analysis do not directly use the feasibility of CLP($T$). To derive contradictions, existing analyses [15,1] construct feasible dual solutions for CLP($T$) with positive objective values (which implies the infeasibility of CLP($T$)). In contrast, our analysis shows that as long as $T \le \mathrm{OPT}$, our algorithms terminate with the claimed approximation ratios. This simplifies the analysis and is an advantage in cases where CLP($T$) cannot be applied, e.g., when the utility function is sub-additive [13].

Other Related Work
Unrestricted Allocation Problem. Based on Bansal and Sviridenko's proof [4] of the $O(\sqrt{n})$ integrality gap for the unrestricted allocation problem, Asadpour and Saberi [3] achieved an $\tilde{O}(\sqrt{n})$-approximation algorithm. The current best approximation result for the problem is an $\tilde{O}(n^{\delta})$-approximation algorithm that runs in time $n^{O(\frac{1}{\delta})}$, for any $\delta = \Omega(\frac{\log\log n}{\log n})$, obtained by Chakrabarty et al. [6].
Other Utility Functions. The max-min fair allocation problem with different utility functions has also been considered. Golovin [10] gave an $(m - n + 1)$-approximation algorithm for the problem when the utility functions of agents are submodular. We note that this result can also be extended to sub-additive utility functions. Khot and Ponnuswami [13] also considered the problem with sub-additive utility functions and obtained a $(2n - 1)$-approximation algorithm. Later, Goemans and Harvey [9] obtained an $\tilde{O}(n^{\frac{1}{2}+\delta})$-approximation for the submodular max-min allocation problem in $n^{O(\frac{1}{\delta})}$ time, using the $\tilde{O}(n^{\delta})$-approximation algorithm of Chakrabarty et al. [6] as a black box.

Integrality Gap for Configuration LP
We show in this section that for the $(1,\epsilon)$-restricted allocation problem, the integrality gap of the CLP is at most 3. Given any solution $x$ for CLP($T$) and the induced ALP solution $\bar{x}$, for all $\bar{x}_{ij} = 0$ we can remove $j$ from $B_i$ (pretending that $i$ is not interested in $j$). This operation preserves the feasibility of $x$ while possibly decreasing OPT, which can only enlarge the integrality gap. From now on we assume that a positive fraction of every item in $B_i$ is assigned to agent $i$.
Assumption on $T$. To achieve a $\frac{T}{3}$-allocation, we can assume that $T < \frac{3}{2}$; otherwise, we can get a $(T - 1) \ge \frac{T}{3}$ allocation by rounding the ALP solution $\bar{x}$ [5]. We can further assume $T \ge 1$, since otherwise we can set all weights $w_j \ge T$ to $T$ (which does not change CLP($T$)) and scale all weights so that the maximum weight is 1. From now on, we assume that $T \in [1, \frac{3}{2})$ and CLP($T$) is feasible. Let $k = \lceil \frac{T}{\epsilon} \rceil$. Note that every bundle consisting solely of light items must contain at least $k$ items to have sufficient utility. For all $i \in A$, let $B^1_i = \{j \in B_i : w_j = 1\}$ be the set of heavy items and $B^\epsilon_i = \{j \in B_i : w_j = \epsilon\}$ be the set of light items. Our algorithm fixes an integer $r = \lceil \frac{k}{3} \rceil$ and tries to assign items such that each agent $i$ receives either a heavy item $j \in B^1_i$ or $r$ light items in $B^\epsilon_i$. If we are able to find such an allocation, then the integrality gap is at most $\frac{T}{r\epsilon} \le 3$.
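For concreteness, the choice of $k$ and $r$ gives the claimed ratio (a short check, using $k\epsilon \ge T$):

```latex
\[
  r \;=\; \Bigl\lceil \frac{k}{3} \Bigr\rceil \;\ge\; \frac{k}{3} \;\ge\; \frac{T}{3\epsilon}
  \qquad\Longrightarrow\qquad
  r\epsilon \;\ge\; \frac{T}{3},
\]
```

while a heavy item alone has weight $1 > \frac{T}{3}$ because $T < \frac{3}{2}$; hence every agent receives utility at least $\frac{T}{3}$, and the gap is at most 3.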

Getting a "Minimal" Solution
Let $x^*$ be a solution for CLP($T$). We create another solution $x$ (which might not be feasible) as follows.
2. otherwise, $S$ contains only light items, and we set $x_{i,S} = x^*_{i,S}$.

Note that for each $i \in A$, $x$ assigns to $i$ one unit of bundles, each of which is either a bundle of heavy items in $B^1_i$ or a set of at least $k$ light items in $B^\epsilon_i$. Define a bipartite hypergraph $H(A \cup B, E)$ in which $E$ contains a heavy edge $\{i, j\}$ for each $j \in B^1_i$ and a light edge $\{i\} \cup S$ for each $S \subseteq B^\epsilon_i$ with $|S| = r$. A matching $M \subseteq E$ is a collection of disjoint edges. Note that any perfect matching of $H$ that matches all nodes in $A$ provides an allocation that assigns each $i \in A$ either a heavy item or $r$ light items.

Finding a Perfect Matching
Recall that the existence of a perfect matching in $H(A \cup B, E)$ that matches every agent in $A$ implies that the integrality gap of CLP($T$) is at most 3. We grow an alternating tree rooted at an unmatched agent $i_0$. For the recursive step, suppose we already have addable edges $X_t$ (where $t = |X_t|$) and blocking edges $Y_t$, which together form an alternating tree rooted at $i_0$. We consider adding the $(t+1)$-st edge to $X_t$ as follows. An edge $e \notin M$ whose agent is in the tree and whose items are disjoint from the items already in the tree is called addable; the edges of $M$ that share items with $e$ are its blocking edges, denoted blocking($e$). If such an edge $e_{t+1}$ exists and blocking($e_{t+1}$) $\ne \emptyset$, let $X_{t+1} = X_t \cup \{e_{t+1}\}$ and $Y_{t+1} = Y_t \cup \mathrm{blocking}(e_{t+1})$. If blocking($e_{t+1}$) $= \emptyset$, then we contract $X_t$ by swapping out blocking edges (the details of contraction are discussed later). The contraction operation guarantees that every addable edge in the tree has at least one blocking edge.

Proof: Let $P = A(X_t \cup Y_t)$ be the agents in the tree. Note that $|P| = |Y_t| + 1$, since each agent $i \ne i_0$ in $P$ has a unique blocking edge that introduces $i$.
Let $X^1_t$ ($Y^1_t$) be the heavy edges and $X^\epsilon_t$ ($Y^\epsilon_t$) be the light edges of $X_t$ ($Y_t$). We have $|X^1_t| = |Y^1_t|$, since heavy edges can only be blocked by heavy edges. We have $|X^\epsilon_t| \le |Y^\epsilon_t|$, since each addable edge has at least one blocking edge. Let $x^1_P = \sum_{i\in P}\sum_{S\subseteq B^1_i} x_{i,S}$ be the total units of heavy bundles assigned to $P$ by $x$, which is a lower bound on the total number of heavy items $B^1_P = \cup_{i\in P} B^1_i$ that agents in $P$ are interested in. Let $x^\epsilon_P = \sum_{i\in P}\sum_{S\subseteq B^\epsilon_i} x_{i,S}$ be the total units of light bundles assigned to $P$ by $x$. Since $|Y^1_t|$ heavy items are already introduced in the tree, if $x^1_P > |Y^1_t|$, then there must exist an addable heavy edge for some $i \in P$. If $x^1_P \le |Y^1_t|$, then we have $x^\epsilon_P \ge |P| - x^1_P \ge |Y^\epsilon_t| + 1 \ge |X^\epsilon_t| + 1$. Since every light addable edge has at most $r - 1$ unblocked items, the total number of light items in the tree is at most $r|Y^\epsilon_t| + (r-1)|X^\epsilon_t|$. For each $i \in P$ and $S \subseteq B^\epsilon_i$, if $x_{i,S} > 0$, then by construction we have $|S| \ge k \ge 3r - 2$. If $i$ has no more addable light edges (i.e., at most $r - 1$ light items of $B^\epsilon_i$ are not introduced in $H$), then every such configuration $S$ has at least $k - r + 1 \ge 2r - 1$ of its items in the tree. If there are no more addable light edges for all $i \in P$, then the tree contains at least $(2r-1)\, x^\epsilon_P \ge (2r-1)(|Y^\epsilon_t| + 1) > r|Y^\epsilon_t| + (r-1)|X^\epsilon_t|$ light items, which contradicts the upper bound above.
Contraction. If blocking($e_{t+1}$) $= \emptyset$, then we remove the blocking edge $f$ that introduces $A(e_{t+1})$ from the matching and include $e_{t+1}$ in the matching. Both $e_{t+1}$ and $f$ are removed from the tree. We also remove all edges added after $f$, since they could have been introduced by $A(f)$. We call this operation a contraction on $e_{t+1}$. Note that this operation reduces the size of blocking($e'$) by one, for the edge $e'$ that was blocked by $f$. If blocking($e'$) $= \emptyset$ after that, then we further contract $e'$ recursively. After all contractions, let the remaining addable edges in the tree be $e_1, e_2, \ldots, e_{t'}$ (ordered by the time they were added to the tree); we set $t = t'$, and let $X_t$ and $Y_t$ be the remaining addable and blocking edges, respectively.
Signature. At any moment before including an addable edge (suppose there are $t$ addable edges in the tree), let $s_i = |\mathrm{blocking}(e_i)|$ for all $i \in [t]$. Let $s = (s_1, s_2, \ldots, s_t, \infty)$ be the signature of the tree. Then, we have the following.
1. The lexicographical value of $s$ decreases after each iteration. If there is no contraction in the iteration, then the $(t+1)$-st coordinate of the signature decreases from $\infty$ to $s_{t+1}$, while $s_i$ remains the same for all $i \le t$. Otherwise, let $e_i$ be the edge whose number of blocking edges is reduced by one but remains positive in the contraction phase. Then $s_i$ decreases by one while $s_j$ remains the same for all $j < i$.

2. There are at most $2^n$ different signatures, since $\sum_{i\in[t]} s_i \le n$ and $t \le n$.
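As a sanity check on the counting in item 2 (our own illustration, not part of the original argument), the number of nonempty sequences of positive integers with sum at most $n$ is exactly $2^n - 1$, in line with the $2^n$ bound:

```python
def count_signatures(n):
    # Count nonempty sequences of positive integers with sum <= n,
    # recursing on the first element of the sequence.
    if n <= 0:
        return 0
    total = 0
    for first in range(1, n + 1):
        # the sequence (first,) itself, plus all its extensions
        total += 1 + count_signatures(n - first)
    return total

# Closed form: sum over m <= n of 2^(m-1) compositions of m, i.e. 2^n - 1.
```

Including the empty sequence gives at most $2^n$ signatures, so the local search of Theorem 2.1 performs at most $2^n$ iterations per root.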
Since an addable edge can be found in polynomial time and the contraction operation stops in polynomial time, a perfect matching can be found in $n \cdot 2^n \cdot \mathrm{poly}(n)$ time.

Quasi-Polynomial-Time Approximation Algorithm
We show in this section that a simple modification of the algorithm for finding a perfect matching in Section 2 dramatically improves the running time from $2^{O(n)}$ to $n^{O(\log n)}$. Assume that $T \le \mathrm{OPT}$. Note that in this case we can still assume $T \in [1, \frac{3}{2})$. Note that combined with the polynomial-time $\frac{1}{\epsilon}$-approximation algorithm, the approximation ratio we obtain in quasi-polynomial time is $\min\{\frac{1}{\epsilon}, 3 + 4\epsilon\} \le 4$ for all $\epsilon \in (0, 1)$. Moreover, when $\epsilon \to 0$ (in which case the problem is still $(2-\delta)$-inapproximable), our approximation ratio approaches the integrality gap upper bound of 3.

Proof of Theorem 1.3: Let $T$ be a guess of OPT and $k = \lceil \frac{T}{\epsilon} \rceil$. The statement trivially holds for $\epsilon \ge \frac{1}{4}$ (where $\frac{1}{\epsilon} \le 3 + 4\epsilon$), so we assume that $\epsilon < \frac{1}{4}$ (which means $k \ge 5$). We show that if $T \le \mathrm{OPT}$, then we can find in quasi-polynomial time a $\frac{T}{3+4\epsilon}$-allocation; if no such allocation is found within the time limit, then $T$ should be decreased as in binary search. Let $r = \lceil \frac{k}{3+4\epsilon} \rceil$. To prove the theorem, it suffices to show that a feasible allocation that assigns to each agent $i$ either a heavy item in $B^1_i$ or $r$ light items in $B^\epsilon_i$ can be found in $n^{O(\log n)}$ time, for any $\epsilon < \frac{1}{4}$. We define a heavy edge $\{i, j\}$ for each $j \in B^1_i$ and a light edge $\{i\} \cup S$ for each $S \subseteq B^\epsilon_i$ with $|S| = r$.
As in the proof of Theorem 2.1, we wish to find a perfect matching that matches all agents in $A$. Suppose that in some partial matching there is an unmatched agent $i_0$; we construct an alternating tree rooted at $i_0$. For each addable edge $e$, we denote by $d(e)$ the number of light edges (including $e$) on the path from $i_0$ to $e$ in the alternating tree. Note that a path is a sequence of edges alternating between addable edges and blocking edges. The algorithm we use in this section is the same as before, except that whenever there are addable edges, we always pick one $e$ such that the distance $d(e)$ is minimized. We show that in this case there is always an addable edge within distance $O(\frac{1}{\epsilon}\log n)$.
Let $X_i$ and $Y_i$ be the sets of addable edges and blocking edges at distance $i$ from $i_0$, respectively. Note that $Y_i = \emptyset$ for all odd $i$, since a light blocking edge must be introduced by a light addable edge. Moreover, since on the path from $i_0$ to any addable edge $e \in X_i$, the light edge (if any) closest to $e$ must be a blocking edge (of even distance), we know that $X_{\mathrm{odd}}$ contains only light edges and $X_{\mathrm{even}}$ contains only heavy edges. Let $Y^1_i$ and $Y^\epsilon_i$ be the sets of heavy edges and light edges in $Y_i$, respectively. Let $L = \lceil \log_{1+\frac{\epsilon}{10}} n \rceil$. It suffices to prove Claim 3.1 below, since it implies a contradiction showing that there is always an addable edge within distance $2L + 1$. Note that the last inequality also comes from Claim 3.1, since otherwise $|Y_2| = 0$ and $|Y_4| = 0 \le \frac{\epsilon}{10}|Y_2|$ would be a contradiction.
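The role of $L$ can be seen from the geometric growth that Claim 3.1 provides: if every constructed layer satisfies $|Y_{2l+2}| > \frac{\epsilon}{10}|Y_{\le 2l}|$, then

```latex
\[
  |Y_{\le 2l+2}| \;=\; |Y_{\le 2l}| + |Y_{2l+2}|
  \;>\; \Bigl(1 + \frac{\epsilon}{10}\Bigr)\,|Y_{\le 2l}|,
\]
```

so after $L = \lceil \log_{1+\epsilon/10} n \rceil$ layers the tree would contain more than $n$ blocking edges, which is impossible since each blocking edge introduces a distinct agent; hence an addable edge is always found within distance $2L + 1 = O(\frac{1}{\epsilon}\log n)$.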
Since there are no more addable edges within distance $2l + 1$, we know that every agent $i \in P$ admits no addable edge. Hence for each $i \in P$, all heavy items in $B^1_i$ are already included in $B^1(X^1_{\le 2l})$, and at most $r - 1$ light items in $B^\epsilon_i$ are not included in the tree. Since $T \le \mathrm{OPT}$, we know that at least $|P| - |B^1(X^1_{\le 2l})| \ge |Y^\epsilon_{\le 2l}| + 1$ agents in $P$ are assigned only light items in the optimal solution. Hence, out of the at least $k$ light items assigned to each of those agents, at least $k - r + 1$ items must be included in $B^\epsilon(X^\epsilon_{\le 2l+1} \cup Y^\epsilon_{\le 2l+2})$. Assume $|Y_{2l+2}| \le \frac{\epsilon}{10}|Y_{\le 2l}|$; then $|Y_{\le 2l+2}| \le (1 + \frac{\epsilon}{10})|Y_{\le 2l}|$. Since every addable edge contains at most $r - 1$ unblocked items (items not used by $M$), we obtain an upper bound on the number of light items in the tree that contradicts the lower bound above, since $\frac{4\epsilon k}{3+4\epsilon} > 1$. Hence we have $k \ge 3r - 1$, and again a contradiction follows.

At any moment before adding an addable edge, suppose we have constructed $X_{\le 2l}$ and $Y_{\le 2l}$. By the above argument we have $2l \le 2L$. Let $s = (a_0, b_0, a_1, b_1, \ldots, a_{2l}, b_{2l}, \infty)$ be the signature of the alternating tree. We show that $s$ is lexicographically decreasing across all iterations.
No contraction. Suppose we added an addable edge $e$ with blocking($e$) $\ne \emptyset$; then $e$ is included in $X_{\le 2l}$ or in a newly constructed $X_{2l+1}$. In both cases the lexicographic value of $s$ decreases, since the last modified coordinate decreases.
Contraction. Suppose the newly added edge has no blocking edge. Then, in the contraction, let $f \in Y_{2i}$, which must be light, be the last blocking edge that is removed. Since $b_{2i-1}$ decreases while $a_j$ (for all $j \le 2i - 1$) and $b_j$ (for all $j \le 2i - 2$) do not change, the lexicographic value of $s$ decreases. Since an addable edge can be found in polynomial time and the contraction operation stops in polynomial time, the running time of the algorithm is $n \cdot \mathrm{poly}(n) \cdot n^{O(\frac{1}{\epsilon}\log n)} = n^{O(\frac{1}{\epsilon}\log n)}$.

Polynomial-Time Approximation Algorithm
We give a polynomial-time approximation algorithm in this section. Based on the previous analysis, to improve the running time from $n^{O(\log n)}$ to $n^{O(1)}$, we need to bound the total number of iterations (signatures) by $\mathrm{poly}(n)$. On a high level, our algorithm is similar to that of Annamalai et al. [1]: we apply the ideas of lazy update and greedy player so that after each iteration, either a new layer is constructed or the size of the highest changed layer is reduced by a constant factor. However, instead of constructing feasible dual solutions, we extend the charging argument used in the previous sections, which counts the number of light items in the tree, to prove the exponential growth property of the alternating tree. Moreover, by avoiding the use of CLP($T$) (and its dual), we are able to provide a simpler analysis while achieving a better approximation ratio.
In the binary search, let $T$ be a guess of OPT. As explained earlier, we can assume $T \in [1, \frac{3}{2})$. Let $k = \lceil \frac{T}{\epsilon} \rceil$. Our algorithm aims at assigning to each agent either a heavy item or $r$ light items, for some fixed $r \le \frac{k}{2}$, when $T \le \mathrm{OPT}$. Such an allocation gives a $\frac{k}{r}$-approximation. Let $p \in (r, k)$ be an integer parameter. Let $0 < \mu \ll 1$ be a very small constant, e.g., $\mu = 10^{-10}$. As before, for each $i \in A$, we call $\{i, j\}$ a heavy edge for $j \in B^1_i$, and $\{i\} \cup S$ a light edge if $S \subseteq B^\epsilon_i$. However, in this section we use two types of light edges: either $|S| = p$ (addable edges) or $|S| = r$ (blocking edges). Let $M$ be a maximum matching between $A$ and $B^1$. We can regard $M$ as a partial allocation that assigns the maximum number of heavy items. Let $i_0$ be an unmatched node in $M$. We can further assume that every heavy item is of interest to at least 2 agents, since otherwise we can assign it to the only interested agent and remove both from the problem instance. We use "$+$" and "$-$" to denote the inclusion and exclusion of singletons in a set, respectively.

Flow Network
Let $G(A \cup B^1, E_M)$ be a directed graph uniquely defined by $M$ as follows. For all $i \in A$ and $j \in B^1_i$, if $\{i, j\} \in M$ then $(j, i) \in E_M$; otherwise $(i, j) \in E_M$. We can interpret the digraph as the residual graph of the "interest" network (a digraph with directed edges from each $i$ to every $j \in B^1_i$) with current flow $M$. The digraph $G$ has the following properties:

• every $i \in A$ has in-degree at most 1; every $j \in B^1$ has out-degree at most 1 and in-degree at least 1;
• every heavy item reachable from an $i \in A$ with in-degree 0 must have out-degree 1 (otherwise we could augment the size of $M$ by one).

Given two sets of light edges $Y$ and $X$ (where $A(Y)$ and $A(X)$ need not be disjoint), let $f(Y, X)$ denote the maximum number of node-disjoint paths in $G(A \cup B^1, E_M)$ from $A(Y)$ to $A(X)$, and let $F(Y, X)$ be such a set of paths. We will later see that each such path alternates between heavy edges and their blocking edges. Unlike in the quasi-polynomial-time algorithm, in our polynomial-time algorithm the heavy edges do not appear in the alternating tree. Instead, they are used in the flow network $G(A \cup B^1, E_M)$ to connect existing addable light edges and blocking light edges.
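The residual-digraph construction can be sketched as follows (a minimal illustration with our own data layout: `interest[i]` is $B^1_i$ and `matching` maps each matched heavy item to its agent):

```python
def residual_digraph(interest, matching):
    # Build E_M: edge (item j -> agent i) if {i, j} is in the matching M,
    # and (agent i -> item j) otherwise, for every j in B^1_i.
    edges = set()
    for i, items in enumerate(interest):
        for j in items:
            if matching.get(j) == i:
                edges.add((('item', j), ('agent', i)))
            else:
                edges.add((('agent', i), ('item', j)))
    return edges
```

Each agent has in-degree at most 1 because $M$ matches it with at most one heavy item, matching the degree properties listed above.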

Building Phase
Each layer $L_i$ consists of $(X_i, Y_i)$, where $X_i$ is a set of addable edges and $Y_i$ is a set of blocking edges that block edges in $X_i$.
We call an addable edge $e = \{i\} \cup P$ unblocked if it contains at least $r$ unblocked light items: $|P \setminus (\bigcup_{e' \in \mathrm{blocking}(e)} B^\epsilon(e'))| \ge r$. Initialize the set of unblocked addable edges to be $I = \emptyset$. Throughout the whole algorithm, we maintain a set $I$ of unblocked addable edges and layers $L_1, L_2, \ldots, L_l$. We build a new layer as follows.
Note that such an edge is connected to a blocking edge in $Y_{\le l}$ by a path in $G(A \cup B^1, E_M)$ that is disjoint from the other paths connecting existing blocking edges and addable edges.
Given an addable edge: if it is unblocked, then include it in $I$; otherwise include it in $X_{l+1}$. When there are no more addable edges, let $Y_{l+1} = \mathrm{blocking}(X_{l+1}) = \bigcup_{e\in X_{l+1}} \mathrm{blocking}(e)$, set $l = l + 1$, and try to collapse $L_l$. Note that it is possible for a blocking edge to block multiple addable edges.

Collapse Phase
Let $W = F(Y_{\le l}, I)$ be constructed as follows. Initialize $W = \emptyset = F(Y_{\le 0}, I)$. Recursively, for $i = 1, 2, \ldots, l$, let $W = F(Y_{\le i}, I)$ be augmented from $W = F(Y_{\le i-1}, I)$. In the final $W$, let $W_i \subseteq W$ be the paths from $A(Y_i)$ to $A(I)$, and let $I_i \subseteq I$ be the unblocked edges reached by $W_i$. By the above construction, if $f \in Y_{\le i}$ has no out-flow in $F(Y_{\le i}, I)$, then it will not have out-flow in $F(Y_{\le j}, I)$ for any $j > i$. Note that every path in $W_i$ starts at an agent $u \in A(Y_i)$ that is assigned a light edge by $M$ and ends at an agent $v \in A(I_i)$ with an unblocked addable edge, which provides the possibility of swapping out a blocking edge in the tree for an unblocked addable edge (by reassigning all heavy items along the path). Intuitively, $|I_i| \ge \mu|Y_i|$ implies that we can swap out a $\mu$ fraction of the blocking edges in $Y_i$ (which is called a collapse). Let $L_t$ be the earliest collapsible layer; we collapse it as follows.

reverse all heavy edges in the paths of $W_t$ (reassigning the corresponding heavy items along each path)
Note that after the above operations, only $Y_t$ and $M$ are changed: the size $|Y_t|$ decreases by at least a $\mu$ fraction, and the number of heavy edges in $M$ is unchanged.
Step-(3). Set $l = t$ and repeat the collapse if possible. Remove all unblocked edges in $X_t$ (since $|Y_t|$ decreases). For each removed unblocked edge $e$, include it in $I$ if $f(Y_{\le t-1}, X_{\le t} \cup I + e) > f(Y_{\le t-1}, X_{\le t} \cup I)$.

Proof: We prove by induction on $t \ge 1$. Consider the base case $t = 1$. The statement trivially holds when $L_t$ is just constructed and when $|X_t \cup I|$ increases. Suppose in some iteration $|X_t \cup I|$ decreases; then it must be because $Y_t$ is collapsed, in which case $f(Y_{\le t-1}, X_{\le t} \cup I)$ does not change, by the update rule of step-(3). Now assume the statement is true for $t$ and consider $t + 1$.

Invariants and Properties
Since $|X_i|$ does not increase afterwards for all $i \le t + 1$, applying the same argument to $L_{t+1}$ as above yields the fact.

Proof: First note that we have $(p - r + 1)|X_{\le t}| \le r|Y_{\le t}|$, since each edge in $X_{\le t}$ has at least $p - r + 1$ blocked light items. Suppose $|Y_t| < \frac{\mu}{2}|Y_{\le t-1}|$. Then we have $f(Y_{\le t-1}, X_{\le t} \cup I) < (\frac{r}{p-r+1} + 2\mu)|Y_{\le t-1}|$, since otherwise (using that no layer is collapsible, and assuming $\frac{1}{\mu} \ge \frac{r}{p-r+1} + \mu$) we reach a contradiction. Let $\gamma = \frac{r}{p-r+1} + 2\mu$. Consider the moment when no more addable edges can be included into $X_{l+1}$ (before adding $Y_{l+1}$). Assume $|Y_{l+1}| < \frac{\mu}{2}|Y_{\le l}|$; then we have $|X_{\le l+1}| \le f(Y_{\le l}, X_{\le l+1} \cup I) < \gamma|Y_{\le l}|$. Consider the residual graph $G'$ of $G(A \cup B^1, E_M)$ with flow $F(Y_{\le l}, X_{\le l+1} \cup I)$ (obtained by reversing the direction of each path). Note that since $f(Y_{\le l}, X_{\le l+1} \cup I) < \gamma|Y_{\le l}|$, more than $(1 - \gamma)|Y_{\le l}|$ of the agents in $A(Y_{\le l})$ can reach at least one agent $i \in A$. In $G'$, let $T'$ be the set of agents reachable from $A(Y_{\le l})$. For all $i \in T'$, we have $|B^\epsilon_i \setminus B^\epsilon(Y_{\le l} \cup X_{\le l+1} \cup I)| \le p - 1$ (otherwise there would be an addable edge), and every $j \in B^1_i$ must be reachable from $A(Y_{\le l})$ and assigned. Hence the total number of heavy items that agents in $T'$ are interested in is less than $|T'| - (1 - \gamma)|Y_{\le l}|$, which means that more than $(1 - \gamma)|Y_{\le l}|$ agents in $T'$ are assigned only light items in the optimal solution. Since each such agent is assigned at least $k$ light items, of which at most $p - 1$ are not included in the tree, we obtain a lower bound on the number of light items in the tree that contradicts the upper bound above. Fixing $\mu = 10^{-10}$, a simple calculation shows that the resulting inequality fails for all $k \ge 9$, $r = \lceil \frac{k}{9} \rceil$ and $p = 3r - 1$. Moreover, as $\epsilon \to 0$ (which means $k \to \infty$), we can set $r = \lceil \frac{k-10}{3+2\sqrt{2}} \rceil$ and $p = (2 + \sqrt{2})r - 1$ such that the inequality fails. Hence we have a contradiction, and we conclude that we always have $|Y_{l+1}| \ge \frac{\mu}{2}|Y_{\le l}|$.
Now we are ready to prove Theorem 1.4.

Proof of Theorem 1.4: For each guess $T$ and $k = \lfloor T/\epsilon \rfloor$, the algorithm tries to compute an $r$-allocation, for an integer $r$ as large as possible, by enumerating all possible values of $p$ between $r$ and $k$. For fixed $r$ and $p$, we try to augment a partial matching $M$ that matches each agent with either one heavy item or $r$ light items. It thus suffices to show that the algorithm augments the size of $M$ by one in polynomial time; since each iteration can be done in polynomial time, it suffices to bound the number of iterations by $\mathrm{poly}(n)$. The approximation ratio is then the maximum of $k/r$ over all $T \le \mathrm{OPT}$.

By Lemma 4.1 and the definition of collapsible, after each iteration either (if there is no collapse) a new layer with $|Y_{l+1}| \ge \mu^2 |Y_{\le l}|$ is constructed, or some $|Y_t|$ is reduced to at most $(1-\mu)|Y_t|$ while $Y_i$ is unchanged for all $i < t$. Let $s_i = \lceil \log_{1/(1-\mu)} (|Y_i|/\mu^{2i}) \rceil$ and let $s = (s_1, s_2, \ldots, s_l, \infty)$ be the signature. Then: (1) the signature is lexicographically decreasing across iterations: if there is no collapse, a new layer is constructed and hence $s$ decreases; otherwise, let $L_t$ be the last layer that is collapsed and $|Y_t|$ the size of $Y_t$ before the collapse; at the end of the iteration $s_i$ is unchanged for all $i < t$ while $s_t$ decreases by at least one, so $s$ again decreases. (2) Its coordinates are nondecreasing: for all $i \in [l-1]$ we have $s_{i+1} \ge s_i$, since $|Y_{i+1}| \ge \mu^2 |Y_{\le i}| \ge \mu^2 |Y_i|$. Since $l = O(\log n)$ and $s_i = O(\log n)$ for all $i \in [l]$, the number of nondecreasing signatures, and hence the total number of iterations, is at most $2^{O(\log n)} = \mathrm{poly}(n)$.
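To see why nondecreasing coordinates make the number of signatures polynomial, one can count them directly: nondecreasing sequences of length $l$ with entries in $\{0, \ldots, s\}$ number $\binom{l+s}{l} \le 2^{l+s}$, which is $\mathrm{poly}(n)$ when $l, s = O(\log n)$. A small numeric sketch (our own illustration; the concrete choice $l = s = \log_2 n$ is an assumption for concreteness):

```python
import math

def num_signatures(l, s):
    # Number of nondecreasing sequences of length l with entries in {0, ..., s}:
    # a standard stars-and-bars count, C(l + s, l).
    return math.comb(l + s, l)

# With l = s = log2(n), the count is at most 4^l = n^2, i.e. poly(n).
for n in (2**6, 2**10, 2**14):
    L = int(math.log2(n))
    assert num_signatures(L, L) <= 4**L == n**2
```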
Approximation Ratio. When $k \le 9$, a trivial $9$-approximation is achieved by a $1$-allocation (a maximum matching). By the proof of Lemma 4.1, the approximation ratio $k/r$ is always at most $9$ and tends to $3 + 2\sqrt{2} \approx 5.83$ as $\epsilon \to 0$.
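The two parameter regimes can be checked numerically; the following sketch (our own, with the ceiling rounding of $r$ an assumption) verifies that $k/r \le 9$ for $r = \lceil k/9 \rceil$ and that $k/r$ squeezes down to $3 + 2\sqrt{2}$ with $r = \lceil (k-10)/(3+2\sqrt{2}) \rceil$:

```python
import math

LIMIT = 3 + 2 * math.sqrt(2)  # ~ 5.8284

# Regime 1: r = ceil(k/9) gives ratio k/r <= 9 for every k >= 9.
assert all(k / math.ceil(k / 9) <= 9 for k in range(9, 5000))

# Regime 2: as k grows, r = ceil((k - 10) / LIMIT) drives k/r toward LIMIT.
for k in (10**3, 10**5, 10**7):
    r = math.ceil((k - 10) / LIMIT)
    ratio = k / r
    # Sandwich: LIMIT < k/r <= LIMIT * k/(k - 10), so k/r -> LIMIT.
    assert LIMIT < ratio <= LIMIT * k / (k - 10)
```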

Hardness of the $(1,\epsilon)$-Restricted Allocation Problem
We show that for any $\epsilon \le 0.5$, the $(1,\epsilon)$-restricted allocation problem cannot be approximated within any ratio smaller than $2$.
Definition 5.1 (3-dimensional matching) Given a 3-uniform tripartite hypergraph $H(X \cup Y \cup Z, E)$, where $|X| = |Y| = |Z|$ and $E \subseteq X \times Y \times Z$, the 3-dimensional matching problem aims at finding a perfect matching $M \subseteq E$ that matches all nodes.
Proof of Theorem 1.1: Deciding the existence of a perfect matching in the 3-dimensional matching problem is known to be NP-hard. Given an instance $H(X \cup Y \cup Z, E)$ of the 3-dimensional matching problem and any fixed $\epsilon \le 0.5$, we show that there exists an instance $(A, B, w)$ of the $(1,\epsilon)$-restricted allocation problem for which $\mathrm{OPT} = 2\epsilon$ if $H$ has a perfect matching, and $\mathrm{OPT} \le \epsilon$ otherwise. Hence no polynomial-time algorithm can approximate the $(1,\epsilon)$-restricted allocation problem within any ratio smaller than $2$, unless P = NP. Let $d(z)$ be the number of hyperedges adjacent to a node $z \in Z$. Define $\hat{Z} = \{z^{(1)}, z^{(2)}, \ldots, z^{(d(z)-1)} : z \in Z\}$, the set containing $d(z) - 1$ copies of each $z \in Z$. Let $A = E$, $B = X \cup Y \cup \hat{Z}$, $w_j = \epsilon$ for all $j \in X \cup Y$, and $w_j = 1$ for all $j \in \hat{Z}$. For each $e = (x, y, z) \in A$, let $B_e = \{x, y, z^{(1)}, z^{(2)}, \ldots, z^{(d(z)-1)}\}$.
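The construction can be sketched in code as follows (a minimal illustration on a toy hypergraph; all function and variable names are ours):

```python
def reduce_3dm(X, Y, Z, E, eps):
    """Build the (1, eps)-restricted instance (A, B, w) from H(X + Y + Z, E)."""
    # d(z): number of hyperedges adjacent to z.
    d = {z: sum(1 for (_, _, ze) in E if ze == z) for z in Z}
    # Z_hat: d(z) - 1 heavy copies of each z in Z.
    Z_hat = [(z, c) for z in Z for c in range(d[z] - 1)]
    A = list(E)                    # one agent per hyperedge
    B = list(X) + list(Y) + Z_hat  # items
    w = {j: eps for j in X + Y}    # light items
    w.update({j: 1 for j in Z_hat})  # heavy items
    # B_e: the items that agent e = (x, y, z) is interested in.
    interest = {e: [e[0], e[1]] + [(e[2], c) for c in range(d[e[2]] - 1)]
                for e in E}
    return A, B, w, interest

# Toy hypergraph with |X| = |Y| = |Z| = 2 and three hyperedges.
X, Y, Z = ['x1', 'x2'], ['y1', 'y2'], ['z1', 'z2']
E = [('x1', 'y1', 'z1'), ('x2', 'y2', 'z2'), ('x1', 'y2', 'z1')]
A, B, w, interest = reduce_3dm(X, Y, Z, E, eps=0.25)

# Counting from the proof: |E| agents, |E| - |Z| heavy items, 2|Z| light items.
assert len(A) == len(E)
assert sum(1 for j in B if w[j] == 1) == len(E) - len(Z)
assert sum(1 for j in B if w[j] != 1) == 2 * len(Z)
```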
Since there are $|E|$ agents and $\sum_{z \in Z} (d(z) - 1) = |E| - |Z|$ heavy items, at least $|Z|$ agents receive only light items. Since there are only $2|Z|$ light items, some agent receives at most two light items, and hence $\mathrm{OPT} \le 2\epsilon$.
YES case. If $H$ has a perfect matching $M$, then for each $e = (x, y, z) \in M$ we assign the items $x, y \in B_e$ to agent $e \in A$. To each of the remaining $|E| - |Z|$ agents we assign one heavy item in $\hat{Z}$ (this is possible since exactly $d(z) - 1$ of the edges adjacent to each $z \in Z$ are unmatched). Hence $\mathrm{OPT} = 2\epsilon$.
NO case. If $H$ has no perfect matching, we show that $\mathrm{OPT} < 2\epsilon$, which means $\mathrm{OPT} \le \epsilon$. Assume to the contrary that $\mathrm{OPT} = 2\epsilon$. Then every agent $e = (x, y, z) \in A$ must receive either a single heavy item or two light items, which must be $x$ and $y$. Since every $z \in Z$ has only $d(z) - 1$ heavy copies, at least one edge adjacent to each $z$ receives no heavy item; as there are only $2|Z|$ light items, exactly $|Z|$ edges receive light items and their $Z$-nodes are pairwise distinct. Since each of these edges receives its own $x$ and $y$, the $|Z|$ edges are pairwise disjoint and thus form a perfect matching, a contradiction.
While the above analysis implies that the integrality gap of the CLP is at least $2$ unless P = NP, our following example shows that the integrality gap is, unconditionally, at least $2$.
Lower Bound for the Integrality Gap. For the $(1,\epsilon)$-restricted allocation problem instance in Figure 1, with $4$ agents (circles) and $6$ items (squares), we have $T^* = 2\epsilon$ while $\mathrm{OPT} = \epsilon$ (since at least one light item becomes useless after all heavy items are assigned), which implies that the integrality gap is at least $2$.