Fair Matchings and Related Problems

Let G = (A ∪ B, E) be a bipartite graph, where every vertex ranks its neighbors in an order of preference (with ties allowed), and let r be the worst rank used. A matching M is fair in G if it has maximum cardinality; subject to this, M matches the minimum number of vertices to rank r neighbors; subject to that, M matches the minimum number of vertices to rank (r − 1) neighbors, and so on. We show an efficient combinatorial algorithm based on LP duality to compute a fair matching in G. We also show a scaling based algorithm for the fair b-matching problem. Our two algorithms can be extended to solve other profile-based matching problems. In designing our combinatorial algorithm, we show how to solve a generalized version of the minimum weighted vertex cover problem in bipartite graphs, using a single-source shortest paths computation; this can be of independent interest.


Introduction
Let G = (A ∪ B, E) be a bipartite graph on n vertices and m edges, where each u ∈ A ∪ B has a list ranking its neighbors in an order of preference (ties are allowed). Such an instance is usually referred to as a stable marriage instance with incomplete lists and ties. A matching is a collection of edges, no two of which share an endpoint. The focus in stable marriage problems is to find matchings that are stable [6]. However, there are many applications where stability is not a proper objective: for instance, in matching students with counselors or applicants with training posts, we cannot compromise on the size of the matching, and a fair matching is a natural candidate for an optimal matching in such problems.

Definition 1.
A matching M is fair in G = (A ∪ B, E) if M has maximum cardinality, subject to this, M matches the minimum number of vertices to rank r neighbors, and subject to that, M matches the minimum number of vertices to rank (r − 1) neighbors, and so on, where r is the worst rank used in the preference lists of vertices.
The fair matching problem can be solved in polynomial time as follows: for an edge e with incident ranks i and j, let w(e) = n^{i−1} + n^{j−1}. It is easy to see that a maximum cardinality matching of minimum weight (under weight function w) is a fair matching in G. Such a matching can be computed via a maximum weight matching algorithm by resetting e's weight to 4n^r − n^{i−1} − n^{j−1}, where r is the largest rank used in any preference list.
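As a sanity check of this reduction, the following sketch (a brute-force enumeration on a toy instance with made-up vertex names and ranks, not the algorithm of this paper) compares the lexicographically fair matching with the minimum w-weight maximum-cardinality matching:

```python
from itertools import combinations

# Hypothetical instance: rank[(u, v)] = rank that u assigns to neighbor v.
A, B = ["a1", "a2"], ["b1", "b2"]
E = [("a1", "b1"), ("a1", "b2"), ("a2", "b1"), ("a2", "b2")]
rank = {("a1", "b1"): 1, ("b1", "a1"): 2,
        ("a1", "b2"): 2, ("b2", "a1"): 1,
        ("a2", "b1"): 1, ("b1", "a2"): 1,
        ("a2", "b2"): 2, ("b2", "a2"): 2}
n = len(A) + len(B)
r = max(rank.values())                       # worst rank used

def matchings(edges):
    """Yield every matching (as a tuple of edges) of the given edge set."""
    for k in range(len(edges) + 1):
        for S in combinations(edges, k):
            used = [x for e in S for x in e]
            if len(used) == len(set(used)):  # no shared endpoint
                yield S

def signature(M):
    # (|M|, -#vertices at rank r, -#vertices at rank r-1, ...):
    # the lexicographic maximum of this tuple is a fair matching.
    cnt = [0] * (r + 1)
    for a, b in M:
        cnt[rank[(a, b)]] += 1               # rank at which a is matched
        cnt[rank[(b, a)]] += 1               # rank at which b is matched
    return (len(M),) + tuple(-cnt[k] for k in range(r, 0, -1))

def weight(M):                               # w(e) = n^(i-1) + n^(j-1)
    return sum(n ** (rank[(a, b)] - 1) + n ** (rank[(b, a)] - 1)
               for a, b in M)

fair = max(matchings(E), key=signature)
max_size = max(len(M) for M in matchings(E))
by_weight = min((M for M in matchings(E) if len(M) == max_size), key=weight)
assert signature(by_weight) == signature(fair)
```

On this instance both criteria select the matching {(a1, b2), (a2, b1)}, which matches three vertices at rank 1 and only one at rank 2.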
However this approach can be expensive even if we use the fastest maximum-weight bipartite matching algorithms [1,3,4,5]. The running time will be O(rmn) or Õ(r²m√n). Note that these complexities follow from the customary assumption that an arithmetic operation takes O(r) time on weights of the order n^r. We present two different techniques to efficiently compute fair matchings and a generalization called fair b-matchings.
A combinatorial technique. Our first technique is an iterative combinatorial algorithm for the fair matching problem. The running time of this algorithm is Õ(r* m√n) or Õ(r* n^ω) with high probability, where r* is the largest rank used in a fair matching and ω ≈ 2.37 is the exponent of matrix multiplication. This algorithm is based on linear programming duality, and in each iteration i we solve the following "dual problem", dual to a variant of the maximum weight matching problem.
Generalized minimum weighted vertex cover problem. Let G_i = (A ∪ B, E_i) be a bipartite graph with edge weights given by w_i, and let K_{i−1} ⊆ A ∪ B. The problem is to find a cover {y_v^i}_{v∈A∪B} that minimizes Σ_v y_v^i subject to y_u^i + y_v^i ≥ w_i(e) for every edge e = (u, v) ∈ E_i, where y_v^i ≥ 0 for all v ∉ K_{i−1}. When K_{i−1} = ∅, the above problem reduces to the standard weighted vertex cover problem. We show that the generalized minimum weighted vertex cover problem, where y_v^i for v ∈ K_{i−1} can be negative, can be solved via a single-source shortest paths subroutine in directed graphs, by a non-trivial extension of a technique of Iri [13].
A scaling technique. Our second technique uses scaling in order to solve the fair matching problem, by the aforementioned reduction to computing a maximum weight matching using exponentially large edge weights. It starts by solving the problem when each edge weight is 0 and then iteratively solves the problem for better and better approximations of the edge weights. This technique is applicable to the more general problem of computing fair b-matchings, where each vertex has a capacity associated with it. We solve the fair b-matching problem, in time Õ(rmn) and space O(m), by solving the capacitated transshipment problem, while carefully maintaining "reduced costs" whose values are within polynomial bounds. Brute-force application of the fastest known minimum-cost flow algorithms would suffer from the additional cost of arithmetic and an O(rm) space requirement. For instance, using [9] would result in Õ(r²mn) running time and O(rm) space.

Background
Fair matchings are a special case of profile-based matching problems. So far, fair matchings have received little attention in the literature. Apart from the two preprints [11,17] on which this work is based, the only work dealing with fair matchings is the Ph.D. thesis of Sng [23], who gives an algorithm to find a fair b-matching in O(rQ·min{m log n, n²}) time, where Q = Σ_{v∈V} q(v) is the sum of the capacities q(v) over all vertices v ∈ V.
A matching M is rank-maximal if M matches the maximum number of vertices to rank 1 neighbors; subject to this constraint, M matches the maximum number of vertices to rank 2 neighbors; subject to the above two constraints, M matches the maximum number of vertices to rank 3 neighbors, and so on.
However the rank-maximal matching problem has been studied so far in a more restricted model called the one-sided preference lists model. In this model, only vertices of A have preferences over neighbors, while vertices in B have no preferences. Note that a problem in the one-sided preference lists model can also be modeled as a problem with two-sided preference lists by making every b ∈ B assign rank r to every edge incident on it, where r is the worst rank in the preference lists of vertices in A.
The current fastest algorithm to compute a rank-maximal matching in the one-sided preference lists model takes time O(min{r* m√n, mn, r* n^ω}) [15], where r* is the largest rank used in a rank-maximal matching. In the one-sided preference lists setting, each edge has a unique rank associated with it; thus the edge set E is partitioned into E_1 ∪ E_2 ∪ ⋯ ∪ E_r. This partition enables the problem of computing a rank-maximal matching to be reduced to computing r* maximum cardinality matchings in certain subgraphs of G.
We show here that our fair matching algorithm can be easily modified to compute a rank-maximal matching in the two-sided preference lists model. Thus this problem can be solved in time Õ(r* m√n), or Õ(r* n^ω) with high probability, which almost matches its running time for the one-sided case. Another problem that our algorithm can solve is the "maximum cardinality" rank-maximal matching problem. A matching M is a maximum cardinality rank-maximal matching if M has maximum cardinality and, within the set of maximum cardinality matchings, M is rank-maximal.
Organization of the paper. Section 2.1 contains our algorithm for the generalized bipartite vertex cover problem, Section 2.2 has our algorithm for fair matchings. Section 3 has our scaling algorithm. The omitted details can be found in the full version of this paper.

Our Combinatorial Technique for fair matchings
Recall that our input here is G = (A ∪ B, E) and r is the worst, i.e., largest, rank used in any preference list. The notion of signature will be useful to us in designing our algorithm. We first define edge weight functions w_i, for 1 ≤ i ≤ r − 1. The value w_i(e), where e = (a, b), is defined as follows: w_i(e) is 2 if both a and b rank each other as rank ≤ r − i neighbors, it is 1 if exactly one of {a, b} ranks the other as a rank ≤ r − i neighbor, and otherwise it is 0. For a matching M, let signature(M) = (|M|, w_1(M), …, w_{r−1}(M)), where w_i(M) = Σ_{e∈M} w_i(e). Thus signature(M) is an r-tuple, where the first coordinate is the size of M, the second coordinate is the number of vertices that get matched to neighbors ranked r − 1 or better, and so on. Let OPT denote a fair matching. Then signature(OPT) ⪰ signature(M) for any matching M in G, where ⪰ is the lexicographic order on signatures.
In order to capture the first coordinate of signature(M) also via an edge weight function, let us introduce the function w_0 defined as: w_0(e) = 1 for all e ∈ E. Thus |M| = w_0(M) = Σ_{e∈M} w_0(e). For any matching M and 0 ≤ j ≤ r − 1, let signature_j(M) denote the (j + 1)-tuple obtained by truncating signature(M) to its first j + 1 coordinates.
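The weight functions can be sketched as follows, assuming (consistently with the signature description above) that w_i(e) counts the endpoints of e that rank the other endpoint at rank ≤ r − i, and that w_0 ≡ 1; the instance data is illustrative only:

```python
# Hypothetical toy instance: rank[(u, v)] = rank that u assigns to v.
rank = {("a1", "b1"): 1, ("b1", "a1"): 2,
        ("a1", "b2"): 2, ("b2", "a1"): 1,
        ("a2", "b1"): 1, ("b1", "a2"): 1}
r = max(rank.values())

def w(i, e):
    """w_0(e) = 1; for i >= 1, w_i(e) = number of endpoints of e
    that rank the other endpoint as a rank <= r - i neighbor."""
    if i == 0:
        return 1
    a, b = e
    return (rank[(a, b)] <= r - i) + (rank[(b, a)] <= r - i)

def signature(M, j=None):
    """signature_j(M): first j+1 coordinates; full signature if j is None."""
    j = r - 1 if j is None else j
    return tuple(sum(w(i, e) for e in M) for i in range(j + 1))

M = [("a1", "b2"), ("a2", "b1")]
print(signature(M))     # first coordinate is |M|, here (2, 3)
```

Here w_1(M) = 3 because three of the four matched vertices are matched at rank ≤ r − 1 = 1.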
Our algorithm runs for r* iterations, where r* ≤ r is the largest index i such that w_{i−1}(OPT) > 0. For any j ≥ 0, in the (j + 1)-st iteration, our algorithm solves the minimum weighted vertex cover problem in a subgraph G_j. This involves computing a maximum w_j-weight matching M_j in the graph G_j under the constraint that all vertices of a critical subset K_{j−1} ⊆ A ∪ B have to be matched. In the first iteration, which corresponds to j = 0, we have G_0 = G and K_{−1} = ∅.
The problem of computing M j will be referred to as the primal program of the (j + 1)-st iteration and the minimum weighted vertex cover problem becomes its dual. We will show M j to be (j + 1)-optimal. The problem of computing M j can be expressed as a linear program (rather than an integer program) as the constraint matrix is totally unimodular and hence the corresponding polytope is integral. This linear program and its dual are given below.
Lemma 5. M_j and y^j are optimal solutions to the primal and dual programs respectively, iff the following hold: (1) every vertex u with y_u^j > 0 is matched in M_j; (2) every edge (u, v) ∈ M_j is tight, i.e., y_u^j + y_v^j = w_j(u, v). Lemma 5 follows from the complementary slackness conditions in the linear programming duality theorem. It suggests the following strategy once the primal and dual optimal solutions M_j and y^j are found in the (j + 1)-st iteration.
To prune "irrelevant" edges: if e = (u, v) and y_u^j + y_v^j > w_j(e), then no optimal solution of the (j + 1)-st iteration primal program can contain e. So we prune such edges from G_j and let G_{j+1} denote the resulting graph. The graph G_{j+1} will be used in the next iteration. To grow the critical set K_{j−1}: if y_u^j > 0 and u ∉ K_{j−1}, then u has to be matched in every optimal solution of the primal program of the (j + 1)-st iteration. Hence u should be added to the critical set. Adding such vertices u to K_{j−1} yields the critical set K_j for the next iteration.
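The two update rules can be sketched as follows; the optimal duals y are assumed given (computing them is the subject of the next subsection), and the instance data is illustrative:

```python
def next_iteration_state(edges, w_j, y, K_prev):
    """Given optimal duals y of the current iteration, keep only the
    tight edges (y_u + y_v == w_j(e)) and add every vertex with a
    positive dual value to the critical set."""
    tight = [(u, v) for (u, v) in edges if y[u] + y[v] == w_j((u, v))]
    K_next = K_prev | {u for u, yu in y.items() if yu > 0}
    return tight, K_next

# Toy data (hypothetical): w_j values and optimal duals for one iteration.
edges = [("a1", "b1"), ("a1", "b2"), ("a2", "b1")]
wj = {("a1", "b1"): 2, ("a1", "b2"): 1, ("a2", "b1"): 2}
y = {"a1": 2, "b1": 0, "a2": 2, "b2": 0}
G_next, K_next = next_iteration_state(edges, wj.get, y, set())
# ("a1", "b2") is pruned: y["a1"] + y["b2"] = 2 > w_j(e) = 1
```

On this data, G_next keeps the two tight edges and K_next = {"a1", "a2"}, the vertices with positive duals.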
Below we first show how to solve the dual problem and then give the main algorithm.

Solving the dual problem
For any 0 ≤ j ≤ r − 1, let G_j = (A ∪ B, E_j) be the subgraph that we work with in the (j + 1)-st iteration, and let K_{j−1} ⊆ A ∪ B be the critical set of vertices in this iteration. Recall that for each e ∈ E_j, we have w_j(e) ∈ {0, 1, 2}. We now show how to solve the dual problem efficiently for a more general edge weight function, namely w_j(e) ∈ {0, 1, …, c} for each e ∈ E_j.
Let M_j be the optimal solution of the primal program (we discuss how to compute it at the end of this section). We know that M_j matches all vertices in K_{j−1}. We now describe our algorithm to solve the dual program using M_j. Our idea is built upon that of Iri [13], who solved the special case of K_{j−1} = ∅. For such a vertex to be reachable from s, it has to be the case that b is reachable from s, contradicting that b ∈ B \ R.
For part (3), observe that if b ∈ B \ R is unmatched in M_j, then b ∉ K_{j−1} and such a vertex can be reached via z, contradicting the assumption that b ∈ B \ R. If a ∈ A \ R is unmatched in M_j, then such a vertex can be reached directly from s, so a ∈ R, a contradiction. This proves part (3).
Note that there may exist some edges of G_j between A \ R and B ∩ R. We delete all such edges and let H_j denote the resulting graph. By Lemma 6.3, no edge of M_j has been deleted; thus M_j belongs to H_j and M_j is still an optimal matching in the graph H_j. Moreover, H_j is split into two parts: one part is (A ∪ B) ∩ R, which is isolated from the second part (A ∪ B) \ R. See Figure 2. Next, add a directed edge from the source vertex s to each vertex in B \ R. Each of these edges e has weight d(e) = 0. By Lemma 6.3, all vertices can now be reached from s. Also note that there can be no negative-weight cycle: otherwise, we could augment M_j along this cycle to get a matching of larger weight while still keeping the same set of vertices matched, contradicting the optimality of M_j.
Apply a single-source shortest paths algorithm [7,21,22,24] from the source vertex s in this graph H_j, where edge weights are given by d(·). Such algorithms take O(m√n) time or Õ(n^ω) time when the largest edge weight is O(1).
We define an initial vertex cover as follows: if a ∈ A, let ỹ_a := d_a; if b ∈ B, let ỹ_b := −d_b, where d_v denotes the shortest-path distance from s to v. (We will adjust this cover further later.) Lemma 7. The constructed initial vertex cover {ỹ_v}_{v∈A∪B} for the graph H_j satisfies the following properties: (1) ỹ_v ≥ 0 for every vertex v ∈ R \ K_{j−1}; (2) ỹ_v = 0 for every vertex v unmatched in M_j. Proof. For part (1), suppose that a ∈ (A ∩ R) \ K_{j−1} and ỹ_a < 0. By Lemma 6.2 and the fact that all edges from A \ R to B ∩ R are absent, the shortest path from s to a cannot go through (A ∪ B) \ R. So there exists an alternating path P (of even length) starting from some unmatched vertex a′ ∈ (A ∩ R) \ K_{j−1} and ending at a. The distance from a′ to a along path P must be negative, since d_a = ỹ_a < 0. Note that it is possible that the first edge e = (a′, b) ∈ P is a virtual edge, i.e., a′ = z and the first edge e connects z to some vertex b ∈ (B ∩ R) \ K_{j−1}. In this case, d_e = 0 and b is not an element of the critical set K_{j−1}. Therefore, irrespective of whether the first edge is virtual or not, we can replace the matching M_j by M_j ⊕ P (ignoring the first edge in P if it is virtual), thereby creating a feasible matching with larger weight than M_j, a contradiction.
So we are left to worry about the case when vertex b ∈ (B ∩ R) \ K_{j−1}. Recall that ỹ_b = −d_b. We claim that d_b ≤ 0. Suppose not. Then the shortest distance from s to b is strictly larger than 0. But this cannot be, since there is a path composed of edges (s, z) and (z, b), and such a path has total distance of exactly 0. This completes the proof of part (1).
To show part (2), note that by Lemma 6.3, an unmatched vertex must be in R. First, assume that this unmatched vertex is a ∈ (A ∩ R) \ K_{j−1}. By our construction, there is only one path from s to a, namely the directed edge from s to a, and its distance is 0. So ỹ_a = d_a = 0. Next assume that this unmatched vertex is b ∈ (B ∩ R) \ K_{j−1}. Suppose that ỹ_b > 0. Then d_b = −ỹ_b < 0. By Lemma 6.2 and the fact that all edges from A \ R to B ∩ R have been deleted, the shortest path from s to b cannot go through (A ∪ B) \ R. So the shortest path from s to b must consist of the edge from s to some unmatched vertex a′ ∈ (A ∩ R) \ K_{j−1}, followed by an augmenting path P (of odd length) ending at b. As in the proof of (1), we can replace M_j by M_j ⊕ P (irrespective of whether the first edge in P is virtual or not) so as to get a matching of larger weight while preserving the feasibility of the matching, a contradiction. This proves part (2).

At this point, we possibly still do not have a valid cover for the dual program, due to the following two reasons.
First, some vertex a ∈ A \ K_{j−1} may have ỹ_a < 0. (However it cannot happen that some vertex b ∈ B \ K_{j−1} has ỹ_b < 0, since Lemma 6.1 states that such a vertex is in R and Lemma 7.1 states that ỹ_b must be non-negative.) Second, the edges deleted from G_j (to form H_j) are not necessarily covered by the initial vertex cover {ỹ_v}_{v∈A∪B}.
We can remedy these two defects as follows. Define δ = max{δ_1, δ_2, 0}; δ can be computed in O(n + m) time. If δ = 0, the initial cover is already a valid solution to the dual program. In the following, we assume that δ > 0 (if the initial cover is already a valid solution for the dual program, then the proof that it is also optimal is the same as in Theorem 8). We build the final vertex cover from {ỹ_v} using δ.

Given M_j, it follows that the dual problem can be solved in time O(m√n) or Õ(n^ω). The problem of computing M_j can be solved by the following folklore technique: form a new graph G̃_j by taking two copies of G_j and making the two copies of a vertex u ∉ K_{j−1} adjacent using an edge of weight 0. A maximum weight perfect matching in G̃_j yields a maximum weight matching in G_j that matches all vertices in K_{j−1}, i.e., an optimal solution to the primal program of the (j + 1)-st iteration. Since c = O(1), a maximum weight perfect matching in G̃_j can be found in O(m√n log n) time by the fastest bipartite matching algorithms [1,3,5], or in Õ(n^ω) time with high probability by Sankowski's algorithm [22].

Our main algorithm
We now present our algorithm to compute a fair matching. Recall that r is the worst rank in the problem instance and r * is the worst rank in a fair matching. We first present an algorithm that runs for r iterations and we show later in this section how to terminate our algorithm in r * iterations.

1. Initialization. Let G_0 = G and K_{−1} = ∅.
2. For j = 0, 1, …, r − 1:
a. Find the optimal solution {y_u^j}_{u∈A∪B} to the dual program of the (j + 1)-st iteration.
b. Delete from G_j every edge e = (a, b) such that y_a^j + y_b^j > w_j(e). Call this subgraph G_{j+1}.
c. Add all vertices with positive dual values to the critical set, i.e., K_j = K_{j−1} ∪ {u : y_u^j > 0}.
3. Return the optimal solution to the primal program of the last iteration.
The solution returned by our algorithm is a maximum (w_{r−1})-weight matching in the graph G_{r−1} that matches all vertices in K_{r−2}. By Lemma 5, this is, in fact, a matching in the subgraph G_r that matches all vertices in K_{r−1}. Lemma 10 proves the correctness of our algorithm. Lemma 9 guarantees that our algorithm is never "stuck" in any iteration due to the infeasibility of the primal or dual problem. Lemma 9. The primal and dual programs of the (j + 1)-st iteration are feasible, for 0 ≤ j ≤ r − 1.

Lemma 10.
For every 0 ≤ j ≤ r − 1, the following hold: (1) every matching in G_j that matches all v ∈ K_{j−1} is j-optimal; (2) conversely, a j-optimal matching in G is a matching in G_j that matches all v ∈ K_{j−1}.

Proof.
We proceed by induction. The base case is j = 0. As K_{−1} = ∅, G_0 = G, and all matchings are, by definition, 0-optimal, the lemma holds trivially.
For the induction step j ≥ 1, suppose that the lemma holds up to j − 1, and let M be a matching in G_j that matches all vertices of K_{j−1}. As K_{j−1} ⊇ K_{j−2} and G_j is a subgraph of G_{j−1}, M is a matching in G_{j−1} that matches all vertices of K_{j−2}. Thus by the induction hypothesis, M is (j − 1)-optimal. For each edge e = (a, b) ∈ M to be present in G_j, e must be a tight edge in the j-th iteration, i.e., y_a^{j−1} + y_b^{j−1} = w_{j−1}(e). Furthermore, as K_{j−1} ⊇ {u : y_u^{j−1} > 0}, we have w_{j−1}(M) = Σ_{(a,b)∈M} (y_a^{j−1} + y_b^{j−1}) ≥ Σ_{v∈A∪B} y_v^{j−1}, where the final inequality holds because all vertices v with positive y_v^{j−1} are matched in M. By linear programming duality, M must be optimal in the primal program of the j-th iteration. So the j-th primal program has an optimal solution of value w_{j−1}(M).
Recall that, by definition, OPT is also (j − 1)-optimal. By (2) of the induction hypothesis, OPT is a matching in G_{j−1} and OPT matches all vertices in K_{j−2}. So OPT is a feasible solution of the primal program in the j-th iteration. Thus w_{j−1}(OPT) ≤ w_{j−1}(M). However, it cannot happen that w_{j−1}(OPT) < w_{j−1}(M): otherwise signature(M) ≻ signature(OPT), since both OPT and M have the same first j − 1 coordinates in their signatures. So we conclude that w_{j−1}(OPT) = w_{j−1}(M), and this implies that M is j-optimal as well. This proves (1).
In order to show (2), let M be a j-optimal matching in G. Since M is j-optimal, it is also (j − 1)-optimal, and by (2) of the induction hypothesis, it is a matching in G_{j−1} that matches all vertices in K_{j−2}. So M is a feasible solution to the primal program of the j-th iteration. As signature(M) has w_{j−1}(OPT) in its j-th coordinate, M must be an optimal solution to this primal program; otherwise there is a j-optimal matching with a value larger than w_{j−1}(OPT) in the j-th coordinate of its signature, contradicting the optimality of OPT. By Lemma 5.2, all edges of M are present in G_j, and by Lemma 5.1, all vertices u ∉ K_{j−2} with y_u^{j−1} > 0, in other words, all vertices in K_{j−1} \ K_{j−2}, have to be matched by the optimal solution M. This completes the proof of (2).
Since our algorithm returns a matching in G r that matches all vertices in K r−1 , we know from Lemma 10.1 that this matching is r-optimal, thus the matching returned is fair. As mentioned earlier, our algorithm can be modified so that it terminates in r * iterations. For that, we need to know the value of r * .
We continue to use the weight function w_0 : E → {1}; however, instead of w_1, …, w_{r−1}, we use the weight functions w̄_1, …, w̄_{r*−1}, where for 1 ≤ i ≤ r* − 1, w̄_i is defined as follows: for any edge e = (a, b), w̄_i(e) is 2 if both a and b rank each other as rank ≤ r* − i + 1 neighbors, it is 1 if exactly one of {a, b} ranks the other as a rank ≤ r* − i + 1 neighbor, and otherwise it is 0. The value r* can be easily computed right at the start of our algorithm, as follows.
Let M* be a maximum cardinality matching in G. The value r* is the smallest index j such that the subgraph Ḡ_j admits a matching of size |M*|, where Ḡ_j is obtained by deleting all edges e = (a, b) from G where either a or b (or both) ranks the other as a rank > j neighbor. We compute r* by first computing M* and then computing a maximum cardinality matching in Ḡ_1, Ḡ_2, …, and so on, till we see a subgraph Ḡ_j that admits a matching of size |M*|. This index j equals r*, and it can be found in O(r* m√n) time [12] or in O(r* n^ω) time [10,19].
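This computation of r* can be sketched as follows, using a textbook augmenting-path matching routine in place of the faster algorithms of [12]; the instance and all names are illustrative:

```python
def max_matching_size(adj):
    """Kuhn's augmenting-path maximum bipartite matching; adj maps
    each A-vertex to its list of B-neighbors."""
    match = {}                               # B-vertex -> A-vertex
    def augment(a, seen):
        for b in adj.get(a, []):
            if b in seen:
                continue
            seen.add(b)
            if b not in match or augment(match[b], seen):
                match[b] = a
                return True
        return False
    return sum(augment(a, set()) for a in adj)

def compute_r_star(A, rank, r):
    """Smallest j such that the truncated graph G-bar_j (only edges with
    both incident ranks <= j) has a maximum matching as large as G's."""
    def adj_upto(j):
        adj = {a: [] for a in A}
        for (u, v), k in rank.items():
            if u in adj and k <= j and rank[(v, u)] <= j:
                adj[u].append(v)
        return adj
    target = max_matching_size(adj_upto(r))  # |M*| in the full graph
    for j in range(1, r + 1):
        if max_matching_size(adj_upto(j)) == target:
            return j

# Toy instance: G-bar_1 has only the edge (a2, b1), so r* = 2 here.
A = ["a1", "a2"]
rank = {("a1", "b1"): 1, ("b1", "a1"): 2,
        ("a1", "b2"): 2, ("b2", "a1"): 1,
        ("a2", "b1"): 1, ("b1", "a2"): 1}
print(compute_r_star(A, rank, r=2))          # -> 2
```

Each truncation Ḡ_j is rebuilt from scratch here for clarity; an O(r* m√n) implementation would instead use Hopcroft–Karp and add edges incrementally.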
We now bound the running time of our algorithm. We showed how to solve the dual program in O(m√n) time once we have the solution to the primal program, and we have seen that the primal program can be solved in O(m√n log n) time. Alternatively, both the primal and dual problems can be solved in Õ(n^ω) time with high probability. Theorem 11 follows.

Theorem 11. A fair matching in G = (A ∪ B, E) can be computed in Õ(r* m√n) time, or in Õ(r* n^ω) time with high probability.

In the full version, we show how our algorithm can be adapted to find a rank-maximal and a maximum cardinality rank-maximal matching.

Theorem 12. A rank-maximal/maximum cardinality rank-maximal matching in G = (A ∪ B, E) with two-sided preference lists can be computed in Õ(r* m√n) time, or in Õ(r* n^ω) time with high probability, where r* is the largest rank used in such a matching.

The fair b-matching problem: our scaling technique

The fair matching problem can be generalized by introducing capacities on the vertices. We are given G = (A ∪ B, E) as before, along with a capacity function q : A ∪ B → ℤ≥1; a b-matching is a set of edges M such that each vertex v is incident to at most q(v) edges of M. Our goal here is to find a fair b-matching, i.e., a b-matching M which has the largest possible size; subject to this constraint, M matches the minimum number of vertices to their rank r neighbors, and so on. The fair b-matching problem can be reduced to the minimum-cost flow problem as follows. Add two additional vertices s and t. For each vertex a ∈ A, add an edge (s, a) with capacity q(a) and cost zero; for each vertex b ∈ B, add an edge (b, t) with capacity q(b) and cost zero. Every edge (a, b) where a ∈ A, b ∈ B has capacity one and is directed from A to B. If the incident ranks on edge e are i and j, then e is assigned a cost of −(4n^r − n^{i−1} − n^{j−1}). The resulting instance has a trivial upper bound of n²/4 on the maximum s-t flow. We also add an edge from t to s with zero cost and capacity larger than this n²/4 upper bound. It is easy to verify that a minimum-cost circulation yields a fair b-matching.
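The construction above can be sketched as follows (an illustrative arc list only; no flow is computed, and the names SRC/SNK are placeholders for s and t):

```python
def build_flow_network(A, B, q, rank, r):
    """Return the arcs (tail, head, capacity, cost) of the reduction."""
    n = len(A) + len(B)
    arcs = [("SRC", a, q[a], 0) for a in A]          # s -> a, capacity q(a)
    arcs += [(b, "SNK", q[b], 0) for b in B]         # b -> t, capacity q(b)
    for (u, v), i in rank.items():
        if u in A and v in B:                        # one arc per edge, A -> B
            j = rank[(v, u)]
            arcs.append((u, v, 1, -(4 * n**r - n**(i - 1) - n**(j - 1))))
    arcs.append(("SNK", "SRC", n * n // 4 + 1, 0))   # t -> s, cap > n^2/4
    return arcs

# Minimal example: one edge with both incident ranks equal to 1.
arcs = build_flow_network(["a1"], ["b1"], {"a1": 1, "b1": 1},
                          {("a1", "b1"): 1, ("b1", "a1"): 1}, r=1)
```

With n = 2 and r = 1, the single A-to-B arc gets cost −(4·2 − 1 − 1) = −6, illustrating how these costs grow exponentially in r.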
We note, however, that the above reduction involves costs that are exponential in the size of the original problem. We now present a general technique to handle these huge costs; we focus on solving the capacitated transshipment version of the minimum-cost flow problem [8]. Let G = (V, E) be a directed network with a cost c : E → ℤ and a capacity u : E → ℤ≥0 associated with each edge. With each v ∈ V a real number b(v) is associated, where Σ_{v∈V} b(v) = 0. If b(v) > 0, then v is a supply node, and if b(v) < 0, then v is a demand node. We assume G to be symmetric, i.e., e ∈ E implies that the reverse arc e^R ∈ E; the reversed edges are added in the initialization step. The cost and capacity functions satisfy c(e) = −c(e^R) for each e ∈ E, u(e) ≥ 0 for the original edges, and u(e^R) = 0 for the additional edges. From now on, E denotes the set of original and artificial edges.
A pseudoflow is a function x : E → ℤ satisfying the capacity and antisymmetry constraints: for each e ∈ E, x(e) ≤ u(e) and x(e) = −x(e^R). This implies x(e) ≥ 0 for the original edges. For a pseudoflow x and a node v, the imbalance is imb_x(v) = b(v) + Σ_{(u,v)∈E} x(u, v); a flow is a pseudoflow with imb_x(v) = 0 for all v ∈ V. The minimum-cost flow problem asks for a flow of minimum cost.
For a given flow x, the residual capacity of e ∈ E is u_x(e) = u(e) − x(e). The residual graph G(x) = (V, E(x)) is the graph induced by the edges with positive residual capacity. A potential function is a function π : V → ℤ. For a potential function π, the reduced cost of an edge e = (u, v) is c_π(e) = c(e) + π(u) − π(v). A flow x is optimal if and only if there exists a potential function π such that c_π(e) ≥ 0 for all residual graph edges e ∈ E(x). For a constant ε ≥ 0, a flow is ε-optimal if c_π(e) ≥ −ε for all e ∈ E(x), for some potential function π. Consider an ε-optimal flow x and any original edge e. If c_π(e) < −ε, the residual capacity of e must be zero and hence e is saturated, i.e., x(e) = u(e). If c_π(e) > ε, we have c_π(e^R) = −c_π(e) < −ε and hence the residual capacity of e^R must be zero. Thus e^R is saturated, i.e., x(e^R) = u(e^R) = 0, so e is unused.
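The reduced-cost and ε-optimality definitions can be sketched as follows (the edge, cost, and potential values are made up for illustration):

```python
def reduced_cost(c, pi, e):
    """c_pi(u, v) = c(u, v) + pi(u) - pi(v)."""
    u, v = e
    return c[e] + pi[u] - pi[v]

def is_eps_optimal(c, pi, residual_edges, eps):
    """A flow is eps-optimal iff every residual edge e has c_pi(e) >= -eps."""
    return all(reduced_cost(c, pi, e) >= -eps for e in residual_edges)

c = {("u", "v"): -3, ("v", "u"): 3}   # antisymmetric costs: c(e) = -c(e^R)
pi = {"u": 0, "v": -2}
# c_pi(u, v) = -3 + 0 - (-2) = -1, so this edge passes the 1-optimality
# test but fails the 0-optimality (exact optimality) test.
assert reduced_cost(c, pi, ("u", "v")) == -1
assert is_eps_optimal(c, pi, [("u", "v")], eps=1)
assert not is_eps_optimal(c, pi, [("u", "v")], eps=0)
```

In the scaling algorithm below, this check is what the refinement step of [9] restores after each doubling of the cost bits.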
We are now ready to describe our scaling algorithm, which is presented in a concise form in Figure 3. The details can be found in the full version. We conclude this section with Theorem 13, which follows from the edge cost values used in our reduction.

Reduction.
a. Add two additional vertices s and t. For each vertex a ∈ A, add an edge (s, a) with capacity q(a) and cost zero; for each vertex b ∈ B, add an edge (b, t) with capacity q(b) and cost zero. Add an edge from t to s with zero cost and capacity larger than n²/4. b. Direct each edge (a, b), where a ∈ A and b ∈ B, from A to B; set its capacity to one and its cost to −(4n^r − n^{i−1} − n^{j−1}), where i and j are the ranks incident on (a, b). c. Set the demand/supply values of all vertices to zero. Add, if required, additional edges to ensure that G is symmetric.

Initialization Phase.
a. Multiply all edge costs by 2^{1+⌈log n⌉} to make them all divisible by the same power of two. b. Let K = ⌈log C⌉, where C is the magnitude of the largest edge cost, and let E_i, 1 ≤ i ≤ K, denote the set of all edges having a 1 in the i-th bit of their cost. c. Initialize x_0 to any feasible flow and set the reduced cost c_0(e) = 0 for every e ∈ E.

Scaling Phase. For i = 1, …, K: a. Let c̃_i be the cost function obtained from c_{i−1} by doubling it and adding in the i-th cost bit of the edges in E_i. The flow x_{i−1} is 3-optimal with respect to the cost function c̃_i and the zero potential function, i.e., the potential of all the vertices is 0. b. Use the results of [9], with input (i) the flow x_{i−1}, (ii) c̃_i as the edge cost function, and (iii) the zero potential function, to compute a 1-optimal flow and a potential function π̃ which proves the 1-optimality. Let x_i be this flow. Potentials are only decreased, starting from zero, during the computation, and π̃(v) ≥ −d·n for some constant d and all v. The constant d depends on the way the techniques of [9] are applied to refine a 3-optimal flow to a 1-optimal flow. c. Compute the new reduced costs c_i(u, v) = c̃_i(u, v) + π̃(u) − π̃(v). d. If any edge e ∈ E has |c_i(e)| > d·n + 1, where d is the constant from step b, fix it to empty or saturated by removing it (and its reversal) from the graph and modifying the imbalances of both its endpoints accordingly.

Return the b-matching induced by the flow x_K and the flow on the edges which were fixed to either empty or saturated.