
Scarf’s Algorithm on Arborescence Hypergraphs

Karthekeyan Chandrasekaran, Grainger College of Engineering, University of Illinois, Urbana-Champaign, IL, USA; Yuri Faenza, Columbia University, New York, NY, USA; Chengyue He, Columbia University, New York, NY, USA; Jay Sethuraman, Columbia University, New York, NY, USA
Abstract

Scarf’s algorithm – a pivoting procedure that finds a dominating extreme point in a down-monotone polytope – can be used to show the existence of a fractional stable matching in hypergraphs. The problem of finding a fractional stable matching in hypergraphs, however, is PPAD-complete. In this work, we study the behavior of Scarf’s algorithm on arborescence hypergraphs, the family of hypergraphs in which hyperedges correspond to the paths of an arborescence. For arborescence hypergraphs, we prove that Scarf’s algorithm can be implemented to find an integral stable matching in polynomial time. En route to our result, we uncover novel structural properties of bases and pivots for the more general family of network hypergraphs. Our work provides the first proof of polynomial-time convergence of Scarf’s algorithm on hypergraphic stable matching problems, giving hope to the possibility of polynomial-time convergence of Scarf’s algorithm for other families of polytopes.

Keywords and phrases:
Scarf’s algorithm, Arborescence Hypergraphs, Stable Matchings
Category:
Track A: Algorithms, Complexity and Games
Funding:
Karthekeyan Chandrasekaran: Karthekeyan Chandrasekaran is supported in part by NSF grant CCF-2402667.
Yuri Faenza: Yuri Faenza is supported by NSF Grant 2046146 and by a Meta Research Award.
Chengyue He: Chengyue He is supported by NSF Grant 2046146 and by a Meta Research Award.
Copyright and License:
© Karthekeyan Chandrasekaran, Yuri Faenza, Chengyue He, and Jay Sethuraman; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Design and analysis of algorithms
Related Version:
Scarf’s Algorithm on Arborescence Hypergraphs: https://arxiv.org/abs/2412.03397 [11]
Acknowledgements:
The first three authors gratefully acknowledge the organizers of the ICERM Spring 2023 semester program on Discrete Optimization: Mathematics, Algorithms, and Computation, where this project was initiated.
Editors:
Keren Censor-Hillel, Fabrizio Grandoni, Joël Ouaknine, and Gabriele Puppis

1 Introduction

Scarf [32] proved the existence of a core allocation in a large class of cooperative games with non-transferable utility. Key ingredients in this proof include a lemma – Scarf’s lemma – that asserts the existence of a dominating extreme point in certain polytopes, and a pivoting procedure – Scarf’s algorithm – to find one. Scarf’s results have profoundly influenced subsequent research in combinatorics, theoretical computer science, economics, and game theory: Scarf’s lemma has been used to show the existence of fair allocations such as cores and fractional cores [8, 32], strong fractional kernels [2], fractional stable solutions in hypergraphs [1] and fractional stable paths [20]. Although Scarf’s results are employed in many applications, it is not known how to efficiently construct these desirable allocations/solutions. It is widely believed that Scarf’s algorithm is unlikely to be efficient for arbitrary applications [22]. In this work, we investigate the possibility of efficient convergence of Scarf’s algorithm for finding a stable matching in hypergraphs.

The stable matching problem in bipartite graphs (also known as the stable marriage problem) is a classic and well-studied problem. It is well-known that a stable matching in a bipartite graph with preferences always exists [19] and can be found in polynomial time via multiple algorithms, including Gale and Shapley’s deferred acceptance algorithm [19], linear programming [4, 30, 31, 35] and other combinatorial algorithms [5, 15]. A stable matching in a bipartite graph can also be found using fixed-point approaches [17], which in general do not translate to polynomial-time algorithms. Scarf’s algorithm belongs to the class of fixed point approaches, and it has been recently shown to converge in polynomial time for the stable marriage problem [16]. To the best of our knowledge, this is the first proof of polynomial-time convergence of Scarf’s algorithm on any application. This result motivated us to address the possibility of efficient convergence of Scarf’s algorithm for hypergraph stable matching problems.

In contrast to graphs, the problem of finding a stable matching in hypergraphs is significantly more challenging. Even in the special case of tripartite 3-regular hypergraphs, it is NP-complete to find a stable matching [24] (this is also known as the stable family problem proposed by Knuth [23]). Despite the hardness results of finding (integral) stable matchings in hypergraphs, Aharoni and Fleiner [1] used Scarf’s lemma to show that a fractional stable matching exists in every hypergraph. However, this general implication from Scarf’s result comes at a computational price: the problem of computing a fractional stable matching on hypergraphs is PPAD-complete [22]. The latter problem remains PPAD-complete even in extremely restrictive cases, for example, even if the hypergraph has node degree/hyperedge size at most 3 [21, 13].

While these results suggest that Scarf’s algorithm is unlikely to converge in polynomial time for arbitrary hypergraphs, investigating its behavior on special classes of hypergraphs remains an interesting question, also because of real-world applications. For instance, a sequence of works [26, 27, 25] employed Scarf’s algorithm as a subroutine to find a fractional stable matching in hypergraphs, and then iteratively rounded it to an integral solution that corresponds to a meaningful allocation. Such allocations provide solutions when complementarities and externalities appear in stable matching problems [29]. One of the most important such problems is the hospital/resident problem with couples (HRC) [10, 26]. HRC can equivalently be formulated as the problem of finding a stable matching in certain hypergraphs. It is shown in [9] that Scarf’s algorithm empirically outperforms all other known heuristics when the proportion of couples is high. However, finding a fractional solution of HRC is also known to be PPAD-complete [13]. Such computational gaps between theory and practice regarding Scarf’s algorithm are not well understood.

In this paper, we focus on a class of hypergraphs called arborescence hypergraphs (in which the hyperedges are paths in an arborescence), and design a polynomial-time algorithm to find an (integral) stable matching in this class. The class of arborescence hypergraphs lies between hypergraphs represented by interval matrices [18] and network matrices [33]. In particular, we design a pivoting rule for Scarf’s algorithm and show that it terminates in a polynomial number of iterations on the fractional matching polytope of arborescence hypergraphs. The node-hyperedge incidence matrix of an arborescence hypergraph is totally unimodular and consequently, every extreme point of the associated fractional matching polytope is integral. Thus, every dominating extreme point is also integral. This observation and Aharoni and Fleiner’s results [1] imply that the extreme point found by Scarf’s algorithm for arborescence hypergraphs corresponds to an integral stable matching. Our pivoting rule can be repurposed to apply to the more general family of network hypergraphs, whose incidence matrices are network matrices (the node-hyperedge incidence matrix of such a hypergraph is also totally unimodular [34] and consequently there is always an integral stable matching). This pivoting rule can also be repurposed to implement the simplex algorithm on a linear program whose constraint matrix is a network matrix, which includes the incidence matrix of digraphs [34, Chapter 13.3]; in that setting, it reduces to the rule for choosing the leaving arc in the network simplex algorithm proposed by Cunningham [14]. If we do not impose any restrictions on the entering variables, the network simplex algorithm under this rule can still perform an exponentially long sequence of degenerate pivots [3]. In contrast, we show that Scarf’s algorithm under our pivoting rule always performs non-degenerate pivots. Currently, we are not able to obtain a proof of polynomial convergence of Scarf’s algorithm implemented with our pivoting rule for the more general family of network hypergraphs. Nevertheless, understanding Scarf’s algorithm on arborescence hypergraphs is a natural first step in this direction.

1.1 Our Contributions

To formally present our result, we define the stable matching problem on hypergraphs as follows.

Definition 1 (Hypergraphic preference system).

A hypergraph H=(V,E) is defined by a vertex set V and a hyperedge set E, where every hyperedge e ∈ E is a subset of V. A hypergraphic preference system is given by a pair (H,≻), where H=(V,E) is a hypergraph and ≻ := {≻_i : i ∈ V} is the preference profile, ≻_i being a strict order over δ(i) = {e ∈ E : i ∈ e} for each i ∈ V. For e, e′ ∈ δ(i), we write e ⪰_i e′ if either e ≻_i e′ or e = e′. In addition, we assume that (i) for every i ∈ V, the singleton hyperedge e_i = {i} ∈ δ(i) and for every e ∈ δ(i), e ⪰_i e_i, and (ii) {1} is the only hyperedge incident to node 1, that is, δ(1) = {{1}}. (The first assumption corresponds, in the bipartite setting, to the usual hypothesis that an agent prefers to be matched rather than being unmatched. The second assumption is without loss of generality, and we include it for the sake of simplifying the presentation.)

Definition 2 (Stable matching).

A stable matching for a hypergraphic preference system is a vector x ∈ {0,1}^E such that for every e ∈ E, there exists a vertex i ∈ e with

∑_{e′ ∈ δ(i) : e′ ⪰_i e} x_{e′} = 1. (1)

Equation (1) with e = e_i imposes that x is the characteristic vector of a matching. Moreover, for every hyperedge e, there exist a vertex i ∈ e and a hyperedge e′ in the matching (i.e., x_{e′} = 1) such that e′ ⪰_i e. Correspondingly, if a fractional vector x ∈ [0,1]^E satisfies (1), then we call such a fractional vector x a fractional stable matching.

We remark here that the standard combinatorial definition of stable matching is a hypergraph matching that does not contain blocking hyperedges (see, e.g., [1]), which reduces to the classical stable matching when the hypergraph is a bipartite graph. Definition 2 is equivalent to the combinatorial definition, and helps us define the fractional generalization more naturally.
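To make the definition concrete, the following is a minimal sketch (added here for illustration, not taken from the paper) that checks condition (1) for a given 0/1 vector; the dictionary-based encoding of hyperedges and preference lists is an assumption made only for this example.

```python
def is_stable_matching(E, pref, x):
    """Check Definition 2.  E maps hyperedge names to frozensets of nodes,
    pref[i] lists the hyperedges of delta(i) from most to least preferred
    (with the singleton {i} in last position), and x maps hyperedge names
    to values in {0, 1}."""
    for e, nodes in E.items():
        # e needs some node i in e for which the hyperedges weakly preferred
        # to e by i carry total value exactly 1 (condition (1)).
        if not any(
            sum(x[f] for f in pref[i][: pref[i].index(e) + 1]) == 1
            for i in nodes
        ):
            return False
    return True

# Example: one hyperedge {1, 2} plus the two singletons; picking {1, 2} is stable.
E = {"e12": frozenset({1, 2}), "s1": frozenset({1}), "s2": frozenset({2})}
pref = {1: ["e12", "s1"], 2: ["e12", "s2"]}
print(is_stable_matching(E, pref, {"e12": 1, "s1": 0, "s2": 0}))  # True
```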

Definition 3 (Arborescence hypergraph).

An arborescence is a directed graph that contains a vertex r (called the root) such that every vertex v ≠ r has a unique directed path from r. A hypergraph H=(V,E) is an arborescence hypergraph if there exists an arborescence 𝒯=(U,𝒜_0) such that V=𝒜_0 and each hyperedge e ∈ E is a subset of arcs in 𝒜_0 that forms a directed path. Such a 𝒯 is called a principal arborescence of H.

Example 4.

We present an example of an arborescence and the corresponding arborescence hypergraph in Figure 1.

Figure 1: On the left we present an arborescence 𝒯 with the root v7 using gray arcs. On the right we have a hypergraph H with principal arborescence 𝒯, V=[6] and 8 hyperedges (6 of them are singletons). Each circle in the hypergraph is a singleton hyperedge. Thus, each node in H corresponds to a gray arc in 𝒯, and each hyperedge in H corresponds to a black arc in 𝒯.

As our main result, we show that Scarf’s algorithm can be implemented to run in polynomial time for every arborescence hypergraphic preference system.

Theorem 5.

Let (H=(V,E),≻) be a hypergraphic preference system where H is an arborescence hypergraph. There exists a pivoting rule such that Scarf’s algorithm terminates in at most |V| iterations and outputs a stable matching on (H=(V,E),≻) in time O(|V||E|).

A few remarks on this result are in order. To the best of our knowledge, this is the first result showing polynomiality of Scarf’s algorithm beyond the case of stable marriage [16]. We note that the pivoting rule used to show polynomiality of Scarf’s algorithm in stable marriage [16] was heavily inspired by Gale and Shapley’s classical deferred acceptance mechanism, which is a purely combinatorial algorithm. In contrast, the pivoting rule developed in this work does not seem to have a combinatorial counterpart and is substantively different from the one from [16]. In Section 6, we provide evidence suggesting that classical approaches to stable marriage problems, such as linear programming with an exact description of the convex hull of all stable matchings [35], cannot even extend to a subclass of arborescence hypergraphic preference systems.

Secondly, while most of the known results on the polynomiality of stable matchings in hypergraphic preference systems restrict either the degree of the nodes [21] or the size of the hyperedges [13], we do not assume any condition on them or on the preference lists of the agents. Thirdly, a recent work [7] shows that there is a polynomial-time algorithm for finding a stable matching on subtree hypergraphs. They reduce the problem of finding a stable matching on such instances to finding a kernel in the clique-acyclic superorientation of a chordal graph, which can be solved in polynomial time [28]. This reduction indeed implies that there exists a polynomial-time algorithm that finds a stable matching in an arborescence hypergraphic preference system. We remark that the algorithm from [28] is substantially different from Scarf’s algorithm and does not generalize to network hypergraphs, while, in our investigation, we uncover novel properties of bases and pivots in network hypergraphs, which may be of independent interest and could lead to polynomial-time convergence proofs for this more general class of hypergraphs.

1.2 Organization of the paper

In Section 2, we give more details on Scarf’s results and the origin of arborescence hypergraphs, and we formally define our notation. Scarf’s algorithm can be seen as alternating between two operations, the cardinal pivot and the ordinal pivot. In Section 3, we discuss cardinal pivots, while in Section 4, we introduce ordinal pivots. In Section 5, we present the key technical part of the proof of polynomiality of Scarf’s algorithm on arborescence hypergraphs. In Section 6, we present a counterexample showing the non-integrality of the fractional stable matching polytope on interval hypergraphs. Lastly, we conclude with open questions in Section 7. Some proofs, generalizations, and extended discussions are omitted and can be found in the full version [11].

2 Preliminaries

While the original idea of Scarf’s algorithm [32] applies to more general settings, we review here only its implementation for hypergraphic preference systems, as proposed by Aharoni and Fleiner [1]. In the following, we let (H=(V,E),≻) be a hypergraphic preference system with V=[n] and E={e_1,…,e_m}.

2.1 Scarf’s lemma and hypergraphic stable matchings

Definition 6 (Fractional matching polytope, cardinal basis).

Let A=(a_{i,j}) ∈ {0,1}^{n×m} be the incidence matrix of H such that a_{i,j}=1 iff e_j ∈ δ(i). The matrix A is in standard form if A has the form A=(I_n | A′), where I_n is the n×n identity matrix (this means e_i={i} for i ∈ [n], i.e., the first n columns of A correspond to the singletons). The polytope 𝒫 = {x ∈ ℝ^m_{≥0} : Ax = 𝟙} is called the fractional matching polytope. A set B ⊆ [m] of n linearly independent columns of A is called a cardinal basis; it is feasible if the associated basic solution (the unique solution of A_B x_B = 𝟙, extended with zeros) is nonnegative. Every feasible cardinal basis corresponds to an extreme point x ∈ 𝒫.
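For illustration (not part of the paper), the following sketch checks whether a column set is a feasible cardinal basis: the columns must be linearly independent and the associated basic solution of Ax=𝟙 must be nonnegative. The function name and the numpy-based representation are choices made here.

```python
import numpy as np

def is_feasible_cardinal_basis(A: np.ndarray, B: list[int]) -> bool:
    """B is a feasible cardinal basis of {x >= 0 : A x = 1} iff the columns A[:, B]
    are linearly independent and the basic solution A_B^{-1} 1 is nonnegative."""
    n = A.shape[0]
    if len(B) != n:
        return False
    A_B = A[:, B]
    if np.linalg.matrix_rank(A_B) < n:
        return False                              # not a cardinal basis
    x_B = np.linalg.solve(A_B, np.ones(n))        # basic solution on the basis columns
    return bool(np.all(x_B >= -1e-9))             # feasibility, up to numerical tolerance

# With A in standard form, the all-singleton basis is feasible.
A = np.array([[1, 0, 0, 1, 1],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 1, 1]])
print(is_feasible_cardinal_basis(A, [0, 1, 2]))   # True: x_B = (1, 1, 1)
```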

We note that, if x is a fractional stable matching, then x ∈ 𝒫 (i.e., x is a fractional matching). Conversely, given a fractional matching x ∈ 𝒫, one needs more conditions that involve the preference orders to claim that x is a fractional stable matching, which leads to the next definition.

Definition 7 (Ordinal matrix, ordinal basis, utility vector).

Let C=(c_{i,j}) ∈ ℝ^{n×m} be a matrix. The matrix C is an ordinal matrix if it satisfies c_{i,i} < c_{i,k} < c_{i,j} for every distinct i, j ∈ [n] and every k ∈ [m]∖[n]. Let O be a set of columns of C. For every row i ∈ [n], the utility of row i (w.r.t. O) is

u_i^O := min_{j ∈ O} c_{i,j}. (2)

The set O is called an ordinal basis of C if |O|=n and for every column j ∈ [m], there is at least one row i ∈ [n] such that u_i^O ≥ c_{i,j}. The associated vector u^O ∈ ℝ^n is called the utility vector of the ordinal basis O.

Example 8.

The matrix C below is an ordinal matrix. One can verify that O_1={2,3,5} is an ordinal basis with u^{O_1}=(3,0,0)^T, and O_2={4,5,6} is an ordinal basis with u^{O_2}=(1,1,1)^T. We observe that O_3={1,2,6} with u^{O_3}=(0,0,2)^T is not an ordinal basis, since c_{i,4} > u_i^{O_3} for every i ∈ [3].
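The following sketch (an illustration added here, with 0-based column indices) computes the utility vector of equation (2) and tests the ordinal-basis condition of Definition 7 for a matrix C given as a list of rows.

```python
def utility_vector(C, O):
    """Utility vector of the column set O: the row-wise minimum over O (equation (2))."""
    return [min(row[j] for j in O) for row in C]

def is_ordinal_basis(C, O):
    """O is an ordinal basis if |O| = n and every column j of C has at least one
    row i with u_i^O >= c_{i,j} (Definition 7)."""
    n, m = len(C), len(C[0])
    if len(O) != n:
        return False
    u = utility_vector(C, O)
    return all(any(u[i] >= C[i][j] for i in range(n)) for j in range(m))
```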

Definition 9 (Dominating basis, dominating extreme point).

A feasible cardinal basis B that is also an ordinal basis is called a dominating basis for (A,C), and the extreme point of 𝒫 corresponding to B is called a dominating extreme point for (A,C).

Scarf’s lemma shows the existence of a dominating extreme point under mild conditions on (A,C). In the following, we state a version of it that pertains to our setting, which connects dominating extreme points and fractional stable matchings.

Theorem 10.

Let (H=(V,E),≻) be a hypergraphic preference system. Let 𝒫 = {x ∈ ℝ^m_{≥0} : Ax = 𝟙} be the fractional matching polytope of H, where A is in standard form. Then, there exists an ordinal matrix C ∈ ℝ^{n×m} such that

  1. (Scarf’s lemma [32]) A dominating extreme point for (A,C) exists.

  2. ([1]) Every dominating extreme point for (A,C) is a fractional stable matching for (H=(V,E),≻).

2.2 Scarf’s algorithm

Scarf [32] provided a general pivoting algorithm to compute a dominating extreme point. The algorithm involves two operations, namely cardinal pivot and ordinal pivot.

2.2.1 Cardinal Pivot

Let B={j_1,…,j_n} be a feasible cardinal basis and let j_t ∈ [m]∖B. A cardinal pivot from B with entering column j_t returns a feasible cardinal basis B′ := B ∪ {j_t} ∖ {j_ℓ}, where the column j_ℓ = B∖B′ is known as the leaving column. The cardinal pivot is exactly the operation used in the simplex algorithm when a column j_t enters the basis B and one wants to choose a leaving column j_ℓ to reach an adjacent basis. When 𝒫 is degenerate, in each iteration there may exist multiple candidates for the leaving column j_ℓ, so a pivoting rule specifying which column leaves needs to be fixed. For a fixed pivoting rule, the choice of the leaving column j_ℓ is unique. Unlike in the simplex algorithm, the other half of the pivoting procedure, i.e., the rule generating the entering column, is obtained via an operation on the ordinal matrix.

2.2.2 Ordinal Pivot

Let O be an ordinal basis and j_ℓ ∈ O. An ordinal pivot from O with leaving column j_ℓ returns an ordinal basis O′ := O ∖ {j_ℓ} ∪ {j*}, where j* ∈ [m]∖O is chosen by a series of numerical comparisons. We give the formal description in Algorithm 1 and provide an example illustrating this operation in Example 11. For the correctness of Algorithm 1, we refer to [32].

Algorithm 1 Ordinal Pivot.
Example 11.

In this example, we show how an ordinal pivot proceeds. Consider the ordinal matrix C in Example 8 and its submatrix C_O corresponding to the ordinal basis O={2,3,5} with utility vector u^O=(3,0,0)^T. If we want to remove column 2 from O and perform an ordinal pivot, we let j_ℓ=2. The row i is defined as the row index of the common entry of the utility vector u^O and the column vector C_2=(5,0,4)^T; thus we have i=2, since u_2^O = c_{2,2} = 0. Among the remaining columns O∖{j_ℓ}={3,5}, the minimum entry in row i=2 is c_{2,5}=2, thus we define the reference column j_r=5. Similarly, by finding the common entry of u^O and C_5=(3,2,1)^T, we find the row index i_r=1, as u_1^O = c_{1,5} = 3. Now, the goal is to find the columns k in [6]∖O such that c_{i,k} > u_i^{{3,5}} for all i ≠ 1, i.e., to find the columns that dominate the vector (∗,2,0)^T, where ∗ means we do not have any restriction. One can observe that such columns are C_1=(0,5,5)^T and C_6=(1,3,2)^T. Thus, in Algorithm 1, K={1,6}. We need to choose the column k from K that maximizes c_{i_r,k}, i.e., the entry in row i_r=1. Therefore, we have j*=6, as c_{1,6} > c_{1,1}. This ordinal pivot returns the new ordinal basis O′={3,5,6} with the new utility vector u^{O′}=(1,2,0)^T.
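The steps just described can be collected into the following sketch of the ordinal pivot. It mirrors Algorithm 1 as illustrated above, uses 0-based column indices, and assumes that the row minima over O are attained at pairwise distinct columns; it is an illustration, not the paper’s pseudocode.

```python
def ordinal_pivot(C, O, j_leave):
    """One ordinal pivot: remove column j_leave from the ordinal basis O of the
    ordinal matrix C and return (entering column, new ordinal basis)."""
    n, m = len(C), len(C[0])
    u = [min(C[i][j] for j in O) for i in range(n)]                # utility vector of O
    i_star = next(i for i in range(n) if u[i] == C[i][j_leave])    # row whose minimum was j_leave
    O_rest = [j for j in O if j != j_leave]
    j_ref = min(O_rest, key=lambda j: C[i_star][j])                # reference column
    i_ref = next(i for i in range(n) if u[i] == C[i][j_ref])       # row whose minimum was j_ref
    u_rest = [min(C[i][j] for j in O_rest) for i in range(n)]
    # candidate columns outside O that dominate u_rest in every row except i_ref
    K = [k for k in range(m) if k not in O
         and all(C[i][k] > u_rest[i] for i in range(n) if i != i_ref)]
    j_star = max(K, key=lambda k: C[i_ref][k])                     # maximize the entry in row i_ref
    return j_star, O_rest + [j_star]
```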

In contrast to the cardinal pivot, the ordinal pivot reverses the order of entering and leaving. Moreover, the ordinal pivot operation is unique.

Lemma 12 ([32]).

Let O be an ordinal basis and let j_ℓ ∈ O be an arbitrary column. Then there exists a unique column j* ∉ O such that O ∖ {j_ℓ} ∪ {j*} is an ordinal basis, and such a j* is determined by Algorithm 1.

In the following, we will see that an iteration of Scarf’s algorithm is the combination of a cardinal pivot and an ordinal pivot (in that order). Recall that the simplex algorithm also chooses an entering column/variable and a leaving column/variable in order, but there a pivoting rule is usually specified by a pair (enter, leave) [6], and these two rules should cooperate in some way to avoid cycling. In comparison, by Lemma 12, Scarf’s algorithm defines a unique rule enter once the ordinal matrix C is fixed (one can of course design a specific ordinal matrix C̄ and a cardinal pivoting rule leave so that (C̄, leave) cooperate, and check whether Scarf’s algorithm then converges in polynomial time; however, one also needs to care about the dominating basis obtained by Scarf’s algorithm with C̄: if C̄ does not respect the preference orders given by the application at hand, such a dominating basis has no meaning), and we can only control leave in cardinal pivots. This observation distinguishes the pivoting rule of Scarf’s algorithm from that of the simplex algorithm.

2.2.3 Initialization, Iteration, and Termination

Scarf’s algorithm is initialized with the pair (B_0,O_0), where B_0={1,2,…,n} and O_0={2,3,…,n,n+1}. The algorithm iteratively maintains a pair (B,O), where B is a feasible cardinal basis with 1 ∈ B, O is an ordinal basis with 1 ∉ O, and |B∩O|=n−1. It alternates a cardinal pivot and an ordinal pivot. In particular, it goes through a sequence

(B_0,O_0) → (B_1,O_0) → (B_1,O_1) → ⋯, (3)

where for every i ≥ 0, (B_i,O_i) → (B_{i+1},O_i) is a cardinal pivot from the feasible cardinal basis B_i with entering column j_t = O_i∖B_i using the cardinal pivoting rule, and (B_{i+1},O_i) → (B_{i+1},O_{i+1}) is an ordinal pivot from the ordinal basis O_i with leaving column j_ℓ = O_i∖B_{i+1} (that is, the column that has just left the cardinal basis).

Definition 13 (Scarf pair, iteration).

A pair (B_i,O_i) in the sequence (3) with B_i ≠ O_i is called a Scarf pair. We denote a consecutive pair of a cardinal pivot and an ordinal pivot (B_i,O_i) → (B_{i+1},O_i) → (B_{i+1},O_{i+1}) as an iteration.

The algorithm terminates if either (i) column 1 = B_i∖O_i happens to leave B_i, in which case B_{i+1}=O_i; or (ii) column 1 = B_{i+1}∖O_i happens to enter O_{i+1}, in which case O_{i+1}=B_{i+1}. Otherwise, the algorithm starts the next iteration with the new Scarf pair (B_{i+1},O_{i+1}). Scarf [32] showed that when the polytope 𝒫 is non-degenerate, the sequence (3) does not cycle. Thus, the sequence stops at a basis B=O that is both cardinal and ordinal, hence dominating. A dominating extreme point x can be deduced from B.
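At a high level, the whole procedure can be sketched as the following loop; the signatures of the two pivot routines, as well as the 0-based indexing under which column 0 plays the role of column 1, are choices made for this sketch and not fixed by the paper.

```python
def scarf(A, C, cardinal_pivot, ordinal_pivot):
    """Skeleton of Scarf's algorithm.  cardinal_pivot(A, B, j_enter) is expected to
    return the leaving column of the feasible cardinal basis B, and
    ordinal_pivot(C, O, j_leave) the entering column of the ordinal basis O."""
    n = len(A)                       # number of rows of A (and of C)
    B = list(range(n))               # B_0 = {1, ..., n} in the paper's 1-based notation
    O = list(range(1, n + 1))        # O_0 = {2, ..., n+1}
    while True:
        j_enter = next(j for j in O if j not in B)     # the unique column of O \ B
        j_leave = cardinal_pivot(A, B, j_enter)        # cardinal pivot: j_enter in, j_leave out
        B = [j for j in B if j != j_leave] + [j_enter]
        if j_leave == 0:                               # column "1" left B, so B = O: done
            return B
        j_star = ordinal_pivot(C, O, j_leave)          # ordinal pivot: j_leave out, j_star in
        O = [j for j in O if j != j_leave] + [j_star]
        if j_star == 0:                                # column "1" entered O, so O = B: done
            return O
```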

2.3 Arborescence hypergraphs

To describe our implementation of Scarf’s algorithm, it will be useful to describe arborescence hypergraphs through digraphs. Let 𝒟=(U,𝒜) be a directed graph. Let F ⊆ 𝒜, and let P=(a_1,a_2,…,a_p) be an ordering of F.

  • We say F (or P) is a 𝒟-path (or path) if P forms an undirected path in 𝒟.

  • An arc a_i ∈ F is forward in P if, following the order P, we visit the tail of a_i before visiting its head; otherwise a_i is backward in P.

  • We say F (or P) is a 𝒟-directed path if every arc in P is a forward arc in P.

Recall that an arborescence 𝒯 is a directed graph with a root r such that for every vertex v ≠ r there is a unique 𝒯-directed path from r to v.

Suppose that an arborescence 𝒯=(U,𝒜_0) is a subgraph of 𝒟=(U,𝒜). Then, for every a=(v,v′) ∈ 𝒜, there exists a unique 𝒯-directed path P(v,v′) from v to v′. An arc-path incidence matrix of (𝒟,𝒯) is defined as M=(m_{i,j}) ∈ {0,1}^{𝒜_0×𝒜} such that m_{i,j}=1 iff the j-th arc in 𝒜 is (v,v′) and the unique 𝒯-directed path P(v,v′) from v to v′ passes through the i-th arc in 𝒜_0.
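As an illustration of this definition (not taken from the paper), the sketch below builds M from a parent representation of 𝒯, assuming, as above, that every arc of 𝒟 goes from a 𝒯-ancestor to one of its descendants, so that the 𝒯-directed path exists.

```python
def arc_path_incidence_matrix(parent, tree_arc_row, D_arcs):
    """Arc-path incidence matrix of (D, T) as a list of rows.
    parent[v] is the T-parent of vertex v (the root maps to None);
    tree_arc_row[(parent[v], v)] is the row index of the tree arc entering v;
    D_arcs is the list of arcs (u, w) of D, each going from a T-ancestor u to a
    descendant w."""
    M = [[0] * len(D_arcs) for _ in range(len(tree_arc_row))]
    for j, (u, w) in enumerate(D_arcs):
        v = w
        while v != u:                          # walk from w up towards u along T
            M[tree_arc_row[(parent[v], v)]][j] = 1
            v = parent[v]
    return M
```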

Definition 14 (Arborescence Hypergraph).

A hypergraph H=(V,E) is an arborescence hypergraph if there exists a directed graph 𝒟=(U,𝒜) with an arborescence subgraph 𝒯=(U,𝒜0) such that the node-hyperedge incidence matrix of H is the arc-path incidence matrix of (𝒟,𝒯). If H is an arborescence hypergraph, we say that 𝒟=(U,𝒜) is the principal network of H and 𝒯=(U,𝒜0) is the principal arborescence of H.

A subclass of the arborescence hypergraphs is the interval hypergraphs [18].

Definition 15 (Interval Hypergraph).

Let H=(V,E) be an arborescence hypergraph with principal arborescence 𝒯=(U,𝒜0). If 𝒯 is a chain, then H is called an interval hypergraph.

Let H=(V,E) be an arborescence hypergraph with principal network 𝒟=(U,𝒜), where U={v_1,v_2,…,v_{n+1}} and 𝒜={a_1,…,a_m}. Its principal arborescence 𝒯=(U,𝒜_0) has the form 𝒜_0={f_1,…,f_n}. Notice that 𝒜_0 ⊆ 𝒜 and hence we assume a_i=f_i for i ∈ [n]. The node-hyperedge incidence matrix A=(a_{i,j}) ∈ {0,1}^{n×m} of H is equivalently defined as:

  • ai,j=1 iff the hyperedge ej contains node i.

  • ai,j=1 iff the 𝒯-path formed by aj passes fi.

We also assume that 𝒯 is depth-first, defined as follows. This order is a common topological order on trees. Details can be found in [12].

Definition 16.

Let 𝒯=(U,𝒜_0) be an arborescence with root r. We define a partial order ⪯_𝒯 over U such that v′ ⪯_𝒯 v if the 𝒯-directed path from r to v passes through v′. We say that 𝒯 is depth-first if

  1. U={v_1,…,v_{n+1}}, and for i,j ∈ [n+1], v_i ⪯_𝒯 v_j implies i ≥ j. In particular, v_{n+1} is the unique root of 𝒯.

  2. 𝒜_0={f_1,…,f_n}, and for every i ∈ [n], the head of f_i is v_i.
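One way to obtain such a labeling (a sketch under the reading above, not the paper’s procedure) is a depth-first post-order traversal from the root: each vertex is labeled only after all of its descendants, so ancestors receive larger labels and the root receives label n+1.

```python
def depth_first_labels(children, root):
    """Label the vertices of an arborescence 1, ..., n+1 so that every ancestor gets
    a larger label than each of its descendants.  children[v] lists the out-neighbours
    of v; returns a dict vertex -> label (the root receives the largest label)."""
    labels, next_label = {}, 1
    stack = [(root, iter(children.get(root, [])))]
    while stack:
        v, it = stack[-1]
        child = next(it, None)
        if child is None:                 # the whole subtree of v is labeled: label v
            labels[v] = next_label
            next_label += 1
            stack.pop()
        else:
            stack.append((child, iter(children.get(child, []))))
    return labels
```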

3 Cardinal pivot on arborescence hypergraphs

In this section, we describe the cardinal pivots of Scarf’s algorithm as operations on the principal network 𝒟=(U,𝒜). In Section 3.1, we first show a structural result that maps cardinal bases to directed trees on 𝒟, and we then translate the change-of-basis operation of a cardinal pivot into a cycle-elimination operation. In Section 3.2, we define a pivoting rule called the first-forward-arc-leaving rule, which is interpreted as a choice of the arc that can be removed to form a new tree (basis).

3.1 Cardinal pivot in a digraph

We use the notation in Definition 6, where H is specified as an arborescence hypergraph with principal network 𝒟=(U,𝒜) and principal arborescence 𝒯=(U,𝒜_0). We first observe that there is a bijection between cardinal bases (not necessarily feasible) of 𝒫 and directed trees on 𝒟. We note that the results presented in this section can be generalized to the case when A is a network matrix.

Theorem 17 ([33]).

Consider a subset B ⊆ [m] of n columns. Denote the corresponding arc set by 𝒜_B. Then the following statements are equivalent:

  1. B is a cardinal basis.

  2. 𝒯_B=(U,𝒜_B) is a directed tree.

Next, we give a combinatorial interpretation of the cardinal pivot. Let B be a feasible cardinal basis and j_t ∈ [m]∖B. By Theorem 17, they correspond to a directed tree 𝒯_B=(U,𝒜_B) and an arc a_{j_t} ∈ 𝒜. Let a_{j_t}=(v,v′); then we can find the unique 𝒯_B-path from v to v′, denoted by P_B(v,v′). The path P_B(v,v′) together with the additional arc a_{j_t} forms a cycle, from which a leaving column j_ℓ is chosen, since 𝒯_{B′}=(U,𝒜_{B′}) has to be a directed tree, where B′=B∪{j_t}∖{j_ℓ}. Recall that we also need to take the feasibility of B′ into consideration, since eliminating an arbitrary arc of the above cycle only guarantees that B′ is a cardinal basis. By analyzing how each variable changes during a cardinal pivot, we give a graphic characterization of the cardinal pivot in Lemma 19. For this, we need the notion of augmenting and descending paths.

Definition 18 (Augmenting and Descending Paths).

Let 𝒟=(U,𝒜), let x be a {0,1}-vector indexed by 𝒜, and let P=(a_{j_1},…,a_{j_p}) be a 𝒟-path.

  1. The path P is x-augmenting if every forward arc a_fwd in P has x_{a_fwd}=1 and every backward arc a_bwd in P has x_{a_bwd}=0.

  2. The path P is x-descending if every forward arc a_fwd in P has x_{a_fwd}=0 and every backward arc a_bwd in P has x_{a_bwd}=1.

Lemma 19.

Consider a cardinal pivot from B to B′ with j_t as the entering column, where a_{j_t}=(v,v′) ∈ 𝒜. Let x, x′ be the extreme points corresponding to B, B′, respectively. Let P_B(v,v′) be the unique 𝒯_B-path from v to v′ (unique by Theorem 17). The following statements are equivalent:

  1. The cardinal pivot is non-degenerate.

  2. x_{j_t}=0 and x′_{j_t}=1.

  3. P_B(v,v′) is x-augmenting.

  4. P_B(v,v′) is x′-descending.

Moreover, if the cardinal pivot is non-degenerate, then B′=B∪{j_t}∖{j_ℓ} is a feasible cardinal basis, where j_ℓ corresponds to an (arbitrary) forward arc in P_B(v,v′). If the cardinal pivot is degenerate, then B′=B∪{j_t}∖{j_ℓ} is a feasible cardinal basis, where j_ℓ corresponds to an (arbitrary) forward arc in P_B(v,v′) such that x_{j_ℓ}=0.

3.2 Cardinal pivoting rule

While Lemma 19 gives a characterization of the candidate leaving columns j_ℓ, the choice of j_ℓ that satisfies the lemma is often not unique. We will use the following first-forward-arc-leaving (FFL) rule as the pivoting rule. Roughly speaking, among all the candidate arcs identified by Lemma 19, we eliminate the first arc in P_B(v,v′).

Definition 20 (First forward arc leaving rule).

Let B be a feasible cardinal basis. Let a_{j_t}=(v,v′) be the arc corresponding to the entering column and P_B(v,v′)=(ā_1,…,ā_p) be the unique 𝒯_B-path from v to v′. If there exists a forward arc ā_k in P_B(v,v′) with x_{ā_k}=0, choose the one with the smallest subscript k as the arc corresponding to the leaving column. If all forward arcs ā_fwd in P_B(v,v′) have x_{ā_fwd}=1, then let k be the smallest index such that ā_k is forward on P_B(v,v′), and let ā_k be the arc corresponding to the leaving column.

The FFL rule is well-defined, meaning that, for every entering column j_t ∉ B, the leaving column defined above exists and is unique. We summarize the cardinal pivot operation with the FFL rule in Algorithm 2.

Algorithm 2 Cardinal Pivot with FFL Rule.
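A possible implementation of this step is sketched below (under a representation chosen here, not the paper’s pseudocode): it recovers the unique 𝒯_B-path from v to v′, classifies its arcs as forward or backward, and applies the FFL rule to pick the leaving column.

```python
from collections import deque

def ffl_leaving_column(tree_arcs, x, entering_arc):
    """Cardinal pivot with the first-forward-arc-leaving rule (Definition 20).
    tree_arcs maps each basis column to its arc (tail, head) in T_B, x maps basis
    columns to their value in the current extreme point, and entering_arc is the
    arc (v, v') of the entering column.  Returns the leaving column."""
    adj = {}                              # undirected adjacency, remembering column and direction
    for col, (t, h) in tree_arcs.items():
        adj.setdefault(t, []).append((h, col, True))    # traversed tail -> head: forward
        adj.setdefault(h, []).append((t, col, False))   # traversed head -> tail: backward
    v, v_prime = entering_arc
    parent = {v: None}                    # breadth-first search for the unique v-v' tree path
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w, col, fwd in adj.get(u, []):
            if w not in parent:
                parent[w] = (u, col, fwd)
                queue.append(w)
    path, u = [], v_prime                 # recover the path, ordered from v to v'
    while parent[u] is not None:
        prev, col, fwd = parent[u]
        path.append((col, fwd))
        u = prev
    path.reverse()
    # FFL: the first forward arc with x = 0 if one exists, otherwise the first forward arc
    forward_cols = [col for col, fwd in path if fwd]
    zero_forward = [col for col, fwd in path if fwd and x[col] == 0]
    return zero_forward[0] if zero_forward else forward_cols[0]
```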

4 The design of ordinal matrix and ordinal pivot

Unlike cardinal pivots, an ordinal pivot is uniquely defined by the ordinal matrix. However, we do have some degree of flexibility in designing an ordinal matrix C that satisfies Theorem 10 (the ordinal matrix that makes Theorem 10 valid may not be uniquely defined; fixing one of them is in a sense the same as specifying an ordinal pivoting rule). In Section 4.1, we construct a block-partitioned ordinal matrix C which we use to prove the polynomiality of Scarf’s algorithm. Then, we discuss some properties of the ordinal pivots under C in Section 4.2. Throughout this section, let (H=(V,E),≻) be a hypergraphic preference system with V=[n] and e_i={i} for i ∈ [n] being the singleton hyperedges (this section places no restriction on H; in particular, H can be any hypergraph).

4.1 Block structure of the ordinal matrix

In our implementation of Scarf’s algorithm, we design a block-partitioned ordinal matrix, defined as follows.

Definition 21.

We say that an ordinal matrix C is block-partitioned if:

  1. For each i ∈ V, we define the i-th block S_i := {e ∈ E∖{e_i} : i = max_{j ∈ e} j}.

  2. Let C be a V×E matrix whose columns are ordered as follows: the columns corresponding to the singleton hyperedges are the first n columns of C. Next, we group the other hyperedges into the n blocks S_1,…,S_n, with the i-th block S_i ordered in decreasing order of preference from left to right according to ≻_i. The i-th block appears to the left of the (i+1)-th block for every i ∈ [n−1].

  3. Denote E={e_1,…,e_n,e_{n+1},…,e_m}, where e_j corresponds to the j-th column of C defined in part 2.

  4. Assign the entries of C as follows. For each i ∈ [n] and j ∈ [m] such that e_j ∈ δ(i), we set c_{i,j}=|δ(i)|−ℓ if e_j is the ℓ-th best hyperedge with respect to ≻_i. The remaining entries in row i are assigned integers that are no less than |δ(i)|, in decreasing order from left to right.

Example 22 (Block-partitioned matrix).

Let H=(V,E) be a hypergraph with V={1,2,3,4} in Figure 2 and the following preference list:

1: {1,3} ≻_1 {1,3,4} ≻_1 {1},
2: {2,3} ≻_2 {2},
3: {2,3} ≻_3 {1,3} ≻_3 {3,4} ≻_3 {1,3,4} ≻_3 {3},
4: {1,3,4} ≻_4 {3,4} ≻_4 {4}.
Figure 2: The hypergraph in Example 22. The circles around the nodes are singleton hyperedges. Line segments represent hyperedges with cardinality 2. Other hyperedges (only {1,3,4} in this example) are indicated by splinegons.

The blocks are S_1=∅, S_2=∅, S_3={{2,3},{1,3}}, and S_4={{1,3,4},{3,4}}. The block-partitioned incidence matrix and ordinal matrix are as follows:

We remark that given a hypergraphic preference system, a block-partitioned matrix is unique and satisfies Theorem 10.
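The construction of Definition 21 can be sketched as follows. The input encoding (frozensets for hyperedges, ranked preference lists with the singleton last) and the specific choice of the filler values outside δ(i) are assumptions made for this illustration; any strictly decreasing integers that are at least |δ(i)| would do.

```python
def block_partitioned_ordinal_matrix(n, hyperedges, pref):
    """Return (column order, C) for the block-partitioned ordinal matrix.
    hyperedges: the non-singleton hyperedges as frozensets of nodes in {1, ..., n};
    pref[i]: the hyperedges of delta(i) from most to least preferred, with the
    singleton frozenset({i}) in last position."""
    singletons = [frozenset({i}) for i in range(1, n + 1)]
    blocks = []
    for i in range(1, n + 1):
        S_i = [e for e in hyperedges if max(e) == i]      # block S_i: largest node is i
        S_i.sort(key=pref[i].index)                       # most preferred (w.r.t. i) first
        blocks.extend(S_i)
    columns = singletons + blocks
    m = len(columns)
    C = [[0] * m for _ in range(n)]
    for i in range(1, n + 1):
        deg = len(pref[i])                                # |delta(i)|
        filler = m                                        # fillers stay >= |delta(i)| and decrease
        for j, e in enumerate(columns):
            if e in pref[i]:
                rank = pref[i].index(e) + 1               # e is the rank-th best hyperedge for i
                C[i - 1][j] = deg - rank                  # best gets |delta(i)|-1, singleton gets 0
            else:
                C[i - 1][j] = filler
                filler -= 1
    return columns, C
```

On the instance of Example 22, this ordering produces the blocks S_3 = [{2,3}, {1,3}] and S_4 = [{1,3,4}, {3,4}], matching the blocks listed above.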

4.2 Controlling node and separator

4.2.1 Controlling node

When implementing Scarf’s algorithm, the first row/column of A,C can be seen as a controlling row/column in Scarf’s algorithm (see [32]). Recall that we assume δ(1)={e1}, which means node 1 is incident only to the singleton hyperedge by itself. This assumption is without loss of generality: Indeed, we can reduce an arbitrary hypergraphic preference system to the one that satisfies this assumption, and construct a bijection between the stable matchings in the two instances. We say node 1 is the controlling node.

4.2.2 Separator

At each iteration of Scarf’s algorithm, we start with a Scarf pair (B,O), where B is a feasible cardinal basis and O is an ordinal basis. Though not explicitly stated, B indicates a fractional matching in the hypergraph. The intuition behind the block structure introduced in the previous section is that we want to separate the nodes that have already been “matched” by B from the singletons. A separator is the node with the largest index that belongs to the former set, and every node behind the separator does not participate in the matching. We define the separator formally below.

Definition 23.

Let (B,O) be a Scarf pair. Let j* := max{j : j ∈ O} be the column with the largest index in O. If j* belongs to the i-th block S_i, then we say that i is the separator of (B,O).

We show that the separator satisfies the following property, which captures the idea that every node larger than the separator is matched by the singleton:

Lemma 24.

Let (B,O) be a Scarf pair and suppose i is the separator of (B,O). Then, for every i′ with i ≤ i′ ≤ n, we have i′ ∈ B ∩ O.
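Operationally, the separator is found by locating the block that contains the largest column of O; a minimal sketch, assuming the column order and block sizes of Definition 21 are available, is given below.

```python
def separator(O, n, block_sizes):
    """Separator of a Scarf pair (B, O) (Definition 23).  Columns are numbered
    1..m in the order of Definition 21 (the n singletons first, then the blocks
    S_1, ..., S_n); block_sizes is a dict with block_sizes[i] = |S_i|.
    Returns the block index containing the largest column of O."""
    j_max = max(O)
    if j_max <= n:
        raise ValueError("the largest column of O is a singleton column")
    right_end = n                         # columns 1..n are the singleton columns
    for i in range(1, n + 1):
        right_end += block_sizes[i]
        if j_max <= right_end:
            return i
    raise ValueError("column index out of range")
```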

We focus on a type of iteration of Scarf’s algorithm that increases the separator, which is triggered by a condition described as follows:

Lemma 25.

Let (B,O) → (B′,O) → (B′,O′) be an iteration in Scarf’s algorithm. Let j_ℓ ∈ O be the leaving column in the ordinal pivot O → O′. Suppose that j_r and j* are the reference column and the entering column of the ordinal pivot O → O′, respectively. Let i be the separator of (B,O). Denote by u^O, u^{O′} the utility vectors of O, O′, respectively. If u_i^O = c_{i,j_ℓ}, then we have

  1. j_r = max{j : j ∈ O}.

  2. u_1^O = c_{1,j_r} and u_1^{O′} = c_{1,j*}.

  3. The separator of (B′,O′) is i′ for some i′ > i.

5 Polynomiality of Scarf’s algorithm

In this section, we show convergence and bound the run-time of our implementation of Scarf’s algorithm on arborescence hypergraphic preference systems. We are given a hypergraphic preference system (H=(V,E),≻) as the input, where H is an arborescence hypergraph and δ(1)={{1}}. Next, we find the principal arborescence 𝒯=(U,𝒜_0) and principal network 𝒟=(U,𝒜) associated with H. Then, we order U={v_1,…,v_{n+1}} and 𝒜_0={f_1,…,f_n} according to a depth-first order on 𝒯 (consequently, the root of the arborescence 𝒯 is v_{n+1}). Next, we construct the block-partitioned ordinal matrix C with an order of the hyperedges E={e_1,…,e_m}, and construct the node-hyperedge incidence matrix A with the rows and columns ordered as above.

We start with an initial Scarf pair (B0,O0) where B0={1,,n} and O0={2,,n+1}. Every cardinal pivot follows the FFL rule (c.f. Definition 20). Every iteration of the algorithm is unambiguously defined and we obtain a unique sequence of Scarf pairs (3). We will show that the number of iterations, i.e. the length of the sequence (3), is bounded by n=|V|. Since each iteration can be implemented in O(|E|) time, the total run-time is O(|V||E|).

We define a notion of a well-structured basis and will inductively show that all cardinal bases visited by the algorithm satisfy it. This structure of the cardinal basis is helpful in bounding the number of iterations of the algorithm.

Definition 26.

Let B be a feasible cardinal basis and 𝒯_B=(U,𝒜_B) be the directed tree corresponding to B. Let x be the extreme point associated with B. Let i ∈ [n]. We say that B is an i-nice basis if the following properties hold:

  1. Let v ∈ U∖{v_{n+1}}, and let P_B(v_{n+1},v) be the 𝒯_B-path from v_{n+1} to v. Then, P_B(v_{n+1},v) is x-augmenting.

  2. {f_i, f_{i+1},…,f_n} ⊆ 𝒜_B.

  3. Let f ∈ {f_i, f_{i+1},…,f_n}. Let 𝒯∖f = (U, 𝒜_0∖{f}) be the subgraph of 𝒯 with exactly two connected components (in the undirected sense). Denote by U = R ∪ W the partition of the vertices into the two components such that v_{n+1} ∈ R and v_{n+1} ∉ W. If an arc a ∈ 𝒜_B has one end vertex in R and the other in W, then a = f. In other words, the removal of each arc f ∈ {f_i, f_{i+1},…,f_n} from 𝒯_B partitions U into the same sets as the removal of the same arc from 𝒯.

Figure 3: An example of a feasible cardinal basis B that is a 12-nice basis. The gray arcs form an arborescence 𝒯=(U,𝒜_0), while the black arcs (both solid and dotted) are the arcs in 𝒯_B=(U,𝒜_B) (see Theorem 17). The solid (resp., dotted) circular arcs are associated with variables in B with x-value 1 (resp., 0). The blue dashed circular arc a_{j_t}=(v_{13},v_{11}) is not in 𝒯_B. Consider a cardinal pivot from B with j_t entering: the 𝒯_B-path from v_{13} to v_{11} is P_B(v_{13},v_{11})=((v_{13},v_{12}),(v_{12},v_{10}),(v_{10},v_8),(v_9,v_8),(v_{11},v_9)) (see Lemma 19), where the first three arcs are forward and the last two arcs are backward, so that P_B(v_{13},v_{11}) is x-augmenting. According to the FFL rule (Definition 20), we should remove the arc f_{12}, as it is the first forward arc in P_B(v_{13},v_{11}).

See Figure 3 for an illustration of an i-nice basis. The following lemma is the key observation that allows us to bound the number of iterations in the algorithm. The proof of the third property in the lemma relies on Lemma 25.

Lemma 27.

Let (B,O) be a Scarf pair and consider the iteration (B,O) → (B′,O) → (B′,O′). Suppose j_t = O∖B is the entering column of the cardinal pivot, j_ℓ = B∖B′ is the leaving column of the cardinal pivot, and j* = O′∖O is the entering column of the ordinal pivot. Let i ∈ [n] be the separator of (B,O). Suppose B is an i-nice basis and j_ℓ ≠ 1. Then, we have the following:

  1. Let a_{j_t}=(v_j,v_k). Then, f_i=(v_j,v_i) (recall that f_i is the unique arc entering v_i in 𝒯). In other words, a_{j_t} and f_i share the same tail. In addition, f_i is the first arc in P_0(v_j,v_k), where P_0(v_j,v_k) is the 𝒯-path from v_j to v_k.

  2. a_{j_ℓ} = f_i.

  3. The separator of (B′,O′) is i′ for some i′ > i.

  4. B′ is an i′-nice basis.

We rewrite Scarf’s algorithm as a combination of Algorithm 2 (cardinal pivot) and Algorithm 3 (ordinal pivot). To prove Theorem 5, we verify that the initial cardinal basis B_0 is an i_0-nice basis, where i_0 is the separator of (B_0,O_0). By inductively applying Lemma 27, we conclude that every cardinal basis visited by the algorithm is a separator-nice basis, and that every ordinal pivot increases the index of the separator. Since the index of the separator is bounded by n, the number of iterations is at most n. At a high level, the intuition is the following: the tree associated with an arborescence hypergraph has a unique source; the potential function we designed takes advantage of this fact, and the vertices of the hypergraph join the matching one by one according to the depth-first order, which only exists when the tree is an arborescence.

Algorithm 3 Ordinal Pivot with Separator Change.

6 Non-integrality of the fractional stable matching polytope

A classical approach to the stable matching problem in graphs is to describe the stable matching polytope [35], that is, to find an exact description of the convex hull of the characteristic vectors of all stable matchings. We construct a hypergraphic preference system I=(H=(V,E),≻), where H is an arborescence hypergraph, on which this classical approach does not succeed. This result provides evidence that the problem we are interested in is challenging compared to classical stable matching problems.

We first define

P(I) = conv({x ∈ {0,1}^E : x is a stable matching}), (4)

and

Q(I) = {x ∈ ℝ^E_{≥0} : ∑_{e ∈ δ(i)} x_e ≤ 1 for every i ∈ V, and ∑_{e′ ∈ e^≥} x_{e′} ≥ 1 for every e ∈ E}, (5)

where δ(i)={e ∈ E : i ∈ e} and e^≥ = {e′ ∈ E : ∃ i ∈ e with e′ ≻_i e} ∪ {e}. We call P(I) the stable matching polytope of I and Q(I) the fractional stable matching polytope of I. When H is a bipartite graph, it is shown that P(I)=Q(I) (see, e.g., [35]). We show the following.

Theorem 28.

There is a hypergraphic preference system I=(H=(V,E),≻), where H is an arborescence hypergraph, such that the fractional stable matching polytope Q(I) is not integral; thus P(I) ≠ Q(I).

Example 29.
Figure 4: The underlying hypergraph. A node belongs to an edge iff the latter covers the former in this figure.

Consider the hypergraph in Figure 4: let I=(H=(V,E),≻) where V={1,2,3,4,5,6,7,8,9} and E={e_1,e_2,e_3,f_1,f_2,f_3,g_1,g_2,g_3}. H is an interval hypergraph. The preference lists are as follows:

1: e_2 ≻_1 g_1 ≻_1 f_1,
2: f_1 ≻_2 e_2 ≻_2 g_1,
3: f_2 ≻_3 f_1 ≻_3 e_2,
4: e_1 ≻_4 f_2 ≻_4 f_1,
5: f_1 ≻_5 e_1 ≻_5 f_2,
6: e_1 ≻_6 f_2 ≻_6 e_3 ≻_6 g_2,
7: f_3 ≻_7 e_1 ≻_7 e_3 ≻_7 g_2,
8: e_3 ≻_8 g_3 ≻_8 f_3,
9: f_3 ≻_9 e_3 ≻_9 g_3.

Let x be such that x(e_1)=x(e_2)=x(e_3)=0 and x(f_1)=x(f_2)=x(f_3)=x(g_1)=x(g_2)=x(g_3)=1/2. Then, x is an extreme point of Q(I), which is non-integral.
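The membership of this x in Q(I) can be checked mechanically. The sketch below does so using the description (5); the hyperedge memberships are inferred from the preference lists above (Figure 4 is not reproduced here, so they are an assumption), and the check covers feasibility only, not the extreme-point claim itself.

```python
from fractions import Fraction

# Hyperedge memberships inferred from the preference lists (an assumption, since
# Figure 4 is not reproduced here).
edges = {
    "e1": {4, 5, 6, 7}, "e2": {1, 2, 3}, "e3": {6, 7, 8, 9},
    "f1": {1, 2, 3, 4, 5}, "f2": {3, 4, 5, 6}, "f3": {7, 8, 9},
    "g1": {1, 2}, "g2": {6, 7}, "g3": {8, 9},
}
# pref[i]: the non-singleton hyperedges of delta(i), most preferred first.
pref = {
    1: ["e2", "g1", "f1"], 2: ["f1", "e2", "g1"], 3: ["f2", "f1", "e2"],
    4: ["e1", "f2", "f1"], 5: ["f1", "e1", "f2"], 6: ["e1", "f2", "e3", "g2"],
    7: ["f3", "e1", "e3", "g2"], 8: ["e3", "g3", "f3"], 9: ["f3", "e3", "g3"],
}
half = Fraction(1, 2)
x = {"e1": 0, "e2": 0, "e3": 0,
     "f1": half, "f2": half, "f3": half, "g1": half, "g2": half, "g3": half}

# Degree constraints of (5): the total value at every node is at most 1
# (the singleton hyperedges carry value 0 and are omitted).
assert all(sum(x[e] for e in pref[i]) <= 1 for i in pref)

# Stability constraints of (5) for the nine non-singleton hyperedges: the edges
# strictly preferred to e by some node of e, together with e, carry value >= 1.
# (For the singleton hyperedges, the constraint reduces to the node sums above
# being exactly 1, which holds here.)
for e, nodes in edges.items():
    better = {f for i in nodes for f in pref[i][: pref[i].index(e)]}
    assert x[e] + sum(x[f] for f in better) >= 1, e

print("x lies in Q(I) and is non-integral")
```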

7 Conclusion

We showed that Scarf’s algorithm converges in polynomial time and returns an integral stable matching on arborescence hypergraphic preference systems. Our result is the first proof of polynomial-time convergence of Scarf’s algorithm on hypergraphic stable matching problems. We note that some of our results hold for hypergraphs that are more general than arborescence hypergraphs. It would be interesting to generalize our approach to show polynomial convergence of Scarf’s algorithm for more general classes of hypergraphs, such as network hypergraphs. It would also be insightful to interpret our implementation of Scarf’s algorithm for arborescence hypergraphs as a purely combinatorial algorithm, if any such interpretation exists.

References

  • [1] Ron Aharoni and Tamás Fleiner. On a lemma of Scarf. Journal of Combinatorial Theory, Series B, 87(1):72–80, 2003. doi:10.1016/S0095-8956(02)00028-X.
  • [2] Ron Aharoni and Ron Holzman. Fractional kernels in digraphs. J. Combinatorial Theory, 73(1):1–6, 1998. doi:10.1006/JCTB.1997.1731.
  • [3] Ravindra K Ahuja, James B Orlin, Prabha Sharma, and P. T. Sokkalingam. A network simplex algorithm with O(n) consecutive degenerate pivots. Operations Research Letters, 30(3):141–148, 2002. doi:10.1016/S0167-6377(02)00114-1.
  • [4] Mourad Baïou and Michel Balinski. The stable admissions polytope. Mathematical programming, 87:427–439, 2000. doi:10.1007/S101070050004.
  • [5] Mourad Baïou and Michel Balinski. The stable allocation (or ordinal transportation) problem. Mathematics of Operations Research, 27(3):485–503, 2002. doi:10.1287/MOOR.27.3.485.310.
  • [6] Dimitris Bertsimas and John N Tsitsiklis. Introduction to linear optimization, volume 6. Athena Scientific Belmont, MA, 1997.
  • [7] Péter Biró, Gergely Csáji, and Ildikó Schlotter. Stable hypergraph matching in unimodular hypergraphs. arXiv preprint arXiv:2502.08827, 2025. doi:10.48550/arXiv.2502.08827.
  • [8] Péter Biró and Tamás Fleiner. Fractional solutions for capacitated NTU-games, with applications to stable matchings. Discrete Optimization, 22:241–254, 2016. doi:10.1016/J.DISOPT.2015.02.002.
  • [9] Péter Biró, Tamás Fleiner, and Robert W Irving. Matching couples with Scarf’s algorithm. Annals of Mathematics and Artificial Intelligence, 77:303–316, 2016. doi:10.1007/S10472-015-9491-5.
  • [10] Péter Biró, David F Manlove, and Iain McBride. The hospitals/residents problem with couples: Complexity and integer programming models. In Experimental Algorithms: 13th International Symposium, SEA 2014, Copenhagen, Denmark, June 29–July 1, 2014. Proceedings 13, pages 10–21. Springer, 2014. doi:10.1007/978-3-319-07959-2_2.
  • [11] Karthekeyan Chandrasekaran, Yuri Faenza, Chengyue He, and Jay Sethuraman. Scarf’s algorithm on arborescence hypergraphs. arXiv preprint arXiv:2412.03397, 2024. doi:10.48550/arXiv.2412.03397.
  • [12] Thomas H Cormen, Charles E Leiserson, Ronald L Rivest, and Clifford Stein. Introduction to algorithms. MIT press, 2022.
  • [13] Gergely Csáji. On the complexity of stable hypergraph matching, stable multicommodity flow and related problems. Theoretical Computer Science, 931:1–16, 2022. doi:10.1016/J.TCS.2022.07.025.
  • [14] William H Cunningham. A network simplex method. Mathematical Programming, 11:105–116, 1976. doi:10.1007/BF01580379.
  • [15] Piotr Dworczak. Deferred acceptance with compensation chains. In Proceedings of the 2016 ACM Conference on Economics and Computation, pages 65–66, 2016. doi:10.1145/2940716.2940727.
  • [16] Yuri Faenza, Chengyue He, and Jay Sethuraman. Scarf’s algorithm and stable marriages. Mathematics of Operations Research, 2025. Accepted for publication. doi:10.1287/moor.2023.0055.
  • [17] Tamás Fleiner. A fixed-point approach to stable matchings and some applications. Mathematics of Operations research, 28(1):103–126, 2003. doi:10.1287/MOOR.28.1.103.14256.
  • [18] Delbert Fulkerson and Oliver Gross. Incidence matrices and interval graphs. Pacific journal of mathematics, 15(3):835–855, 1965.
  • [19] David Gale and Lloyd S Shapley. College admissions and the stability of marriage. The American Mathematical Monthly, 69(1):9–15, 1962.
  • [20] Penny E Haxell and Gordon T Wilfong. A fractional model of the border gateway protocol (BGP). In Proceedings of the nineteenth annual ACM-SIAM symposium on Discrete algorithms, pages 193–199. Citeseer, 2008. URL: http://dl.acm.org/citation.cfm?id=1347082.1347104.
  • [21] Takashi Ishizuka and Naoyuki Kamiyama. On the complexity of stable fractional hypergraph matching. In 29th International Symposium on Algorithms and Computation (ISAAC 2018). Schloss-Dagstuhl-Leibniz Zentrum für Informatik, 2018.
  • [22] Shiva Kintali, Laura J Poplawski, Rajmohan Rajaraman, Ravi Sundaram, and Shang-Hua Teng. Reducibility among fractional stability problems. SIAM Journal on Computing, 42(6):2063–2113, 2013. doi:10.1137/120874655.
  • [23] Donald Ervin Knuth. Mariages stables. Technical report, 1976.
  • [24] Cheng Ng and Daniel S Hirschberg. Three-dimensional stable matching problems. SIAM Journal on Discrete Mathematics, 4(2):245–252, 1991. doi:10.1137/0404023.
  • [25] Hai Nguyen, Thành Nguyen, and Alexander Teytelboym. Stability in matching markets with complex constraints. Management Science, 67(12):7438–7454, 2021. doi:10.1287/MNSC.2020.3869.
  • [26] Thanh Nguyen and Rakesh Vohra. Near-feasible stable matchings with couples. American Economic Review, 108(11):3154–3169, 2018.
  • [27] Thành Nguyen and Rakesh Vohra. Stable matching with proportionality constraints. Operations Research, 67(6):1503–1519, 2019. doi:10.1287/OPRE.2019.1909.
  • [28] Adèle Pass-Lanneau, Ayumi Igarashi, and Frédéric Meunier. Perfect graphs with polynomially computable kernels. Discrete Applied Mathematics, 272:69–74, 2020. doi:10.1016/J.DAM.2018.09.027.
  • [29] Alvin E Roth. Online and matching-based market design. Cambridge University Press, 2023.
  • [30] Alvin E Roth, Uriel G Rothblum, and John H Vande Vate. Stable matchings, optimal assignments, and linear programming. Mathematics of operations research, 18(4):803–828, 1993. doi:10.1287/MOOR.18.4.803.
  • [31] Uriel G Rothblum. Characterization of stable matchings as extreme points of a polytope. Mathematical Programming, 54:57–67, 1992. doi:10.1007/BF01586041.
  • [32] Herbert E Scarf. The core of an n person game. Econometrica: Journal of the Econometric Society, pages 50–69, 1967.
  • [33] Alexander Schrijver. Theory of linear and integer programming. John Wiley & Sons, 1998.
  • [34] Alexander Schrijver et al. Combinatorial optimization: polyhedra and efficiency, volume 24. Springer, 2003.
  • [35] Chung-Piaw Teo and Jay Sethuraman. The geometry of fractional stable matchings and its applications. Mathematics of Operations Research, 23(4):874–891, 1998. doi:10.1287/MOOR.23.4.874.