OWA for Bipartite Assignments
Abstract
In resource allocation problems, a central planner often strives to have a fair assignment. A challenge they might face, however, is that there are several objectives that could be argued to be fair, such as the max-min and maximum social welfare. In this work, we study bipartite assignment problems involving the optimization of a class of functions that is sensitive to the relative utilities derived by individuals in an allocation and captures these traditional objectives. We introduce and study a subclass of evaluation functions that targets the average welfare attained within some interval of the economic ladder (e.g., the bottom 10%, middle 50%, or top 80%). We provide an efficient algorithm that can be used to optimize the welfare for an arbitrary interval and also show how the approach can be used to approximate more general evaluation functions. We also study a subclass of evaluation functions consisting of the “fair” ordered weighted averages (OWA) introduced by Lesca et al. (Algorithmica 2019), which are most sensitive to the utilities received by the worst-off individuals. We provide a simple proof that optimizing this objective is in the class XP when parameterized by the number of distinct weights.
Keywords and phrases:
fairness, matchings, approximation algorithms
Funding:
Jabari Hastings: Supported by the Simons Foundation Collaboration on the Theory of Algorithmic Fairness and the Simons Foundation investigators award 689988.
2012 ACM Subject Classification:
Theory of computation → Algorithmic game theory and mechanism design
Acknowledgements:
We thank Shahar Dobzinski and Gergely Csáji for helpful discussions.
Editor:
Mark Bun
1 Introduction
Assignment problems are abundant in real life, arising in contexts such as assigning students to schools (e.g., [2]), matching doctors with fellowship positions (e.g., [18]), and allocating housing to individuals in need (e.g., [4]). In many of these scenarios, agents have numerical valuations (cardinal utilities) for each available object. Typically, a central planner is responsible for determining how to allocate these resources. A common approach involves defining a real-valued objective function over possible assignments and optimizing it.
One widely used objective is to maximize economic efficiency or social welfare – i.e., to maximize the sum of the values that agents derive from their assigned object. However, a potential worry with this approach is that it inherently favors agents with higher valuations, as improving overall welfare is more easily achieved by prioritizing those with the highest values. This can lead to unfair outcomes where the most advantaged agents benefit disproportionately. As an alternative, the Rawlsian perspective advocates for a more equitable allocation by maximizing the value of the least well-off agent in the assignment (max-min objective). This ensures that the agent with the lowest value in the chosen assignment is as well-off as possible.
A central planner that has to choose between max-min and maximizing social welfare faces a strict tradeoff between efficiency and fairness. To allow a smoother tradeoff and capture more nuanced preferences, we can consider a more general objective that encompasses the two objectives. Such objectives can be stated in terms of a generalized Gini social-evaluation (GSE) function [20], which aims to capture the inequality of wealth levels in a population. This function, denoted by $F_w$, is defined as $F_w(v) = \sum_{i=1}^{n} w_i v^{\uparrow}_i$, where $w$ is a non-increasing weight vector and $v^{\uparrow}$ represents the value vector sorted in ascending order. By setting all weights to $1$, we recover the standard objective of maximizing social welfare. Conversely, if $w_1 = 1$ and the rest of the weights are $0$, we obtain the max-min objective. More generally, this framework allows us to express additional fairness-aware objectives, such as maximizing the total utility of the lowest 10 percent, optimizing social welfare while ignoring the top 10 percent, or lexicographic max-min, which we can capture by assigning exponentially decreasing weights.
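The GSE evaluation itself is simple to compute. The following sketch illustrates the two extreme cases above; `gse_value` is our own name for illustration, not notation from the literature:

```python
def gse_value(values, weights):
    """Generalized Gini social-evaluation: apply the weights to the
    value vector sorted in ascending order."""
    ranked = sorted(values)
    return sum(w * v for w, v in zip(weights, ranked))

values = [5.0, 1.0, 3.0]

# Uniform weights recover total social welfare: 1 + 3 + 5 = 9.
welfare = gse_value(values, [1, 1, 1])

# All weight on the first ranked position recovers max-min: 1.
worst_off = gse_value(values, [1, 0, 0])
```

Because the weights are applied after sorting, two agents exchanging their utilities never changes the evaluation; only the distribution of values matters.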
Optimizing GSE functions
Given their significance, GSE functions have been heavily studied within the combinatorial optimization literature. Starting with a seminal work by Yager [21], a long line of work examines GSE functions within the framework of ordered weighted averages (OWA) (cf. [22] for a survey). In particular, Lesca et al. [14] define “fair” OWA as the setting with non-increasing weights. A parallel line of work on norm optimization in combinatorial problems (e.g., [5, 1, 11]) has studied ordered norms, which are essentially the same as generalized Gini social-evaluation functions, but applied to settings involving cost minimization (e.g., job scheduling, facility location). Note that the difference between cost minimization and utility maximization is not merely terminological; maximization and minimization can lead to significant complexity differences. For example, finding the shortest path in a graph can be easily solved in polynomial time, whereas computing the longest path is NP-hard.
Despite the volume of work on GSE functions, their optimization in the context of the bipartite assignment problem has posed significant computational challenges. (The nature of the problem makes it also related to majorization theory [15].) Lesca et al. [14] showed that the decision variant of the problem, referred to as Fair OWA Multiagent Assignment (FOWAMA), is NP-hard. They also show that if the vector $w$ is assumed to have at most $d$ distinct values, the parametrized problem is in the class XP (i.e., problems that can be solved in time $n^{f(d)}$); the algorithm they present runs in polynomial time only for constant $d$.
Alternatives to GSE functions
Since the optimization problem for GSE functions is solved using a rather complex algorithm, whose running time depends on the number of distinct weights, this raises an important question: if a central planner using a GSE function were instead to optimize purely for social welfare, how much worse would the outcome be? It is easy to see that such a naive approach would fail, as the social welfare function cannot even approximate the max-min objective function. However, what if, instead of optimizing the social welfare of the entire population, we focus only on a specific interval? For instance, we could optimize over the middle 60 percent of the population – those ranked between the bottom 20 percent and the top 20 percent. Such an optimization is also interesting in its own right. Consider, for example, a politician whose primary voter base consists of the middle class. When making assignment decisions, such a politician might prefer to optimize the utility of the middle class rather than that of the entire population. Observe that an arbitrary interval defines a function with binary weights that are not necessarily non-increasing. Therefore, existing paradigms like fair OWA and ordered norm optimization do not provide solutions for this new problem. Nonetheless, we show that optimization over an interval can be done in polynomial time.
Continuing with the perspective of the central planner as a politician trying to satisfy different segments of the population motivates considering general weight functions. These can result from an aggregation rule that combines the preferences of different population segments into a single weight vector for the optimization problem. For example, if each segment prioritizes a specific interval, the central planner might assign equal weight to all positions that are considered important by at least one segment or weight each position according to the number of segments that care about it. An alternative interpretation of the aggregation process arises from considering the preferences of stakeholders – agents who participate in the assignment process. In the spirit of the veil of ignorance [17], each stakeholder’s preference for an assignment mechanism is determined by their estimate of where they might fall in the ranked distribution. Under this view, each agent effectively has a private OWA function, which the central planner aggregates into a single function. As part of this work, we provide approaches to approximate the aggregated functions and discuss their limitations.
1.1 Our Contributions
Our contributions can be distilled into the following vignettes:
-
Optimizing over a single interval. We present a simple, efficient algorithm that optimizes the welfare for any specified interval within the population. In fact, we show that optimizing any interval is almost as easy as computing a maximum-weight perfect matching. This means that, with little computational overhead, we provide a decision-maker with additional flexibility regarding their choice of segment to optimize.
-
Optimizing over multiple intervals. As a next natural step, we ask: what if we restrict our attention to binary weight functions? If the number of contiguous intervals of "1"s that we care about is $k$, then a trivial argument implies that, since there are $k$ intervals, there must exist an interval such that optimizing over it guarantees a $1/k$-approximation. This follows simply by selecting the interval that contributes the maximal utility to the optimal solution. We show that when the intervals are of equal size, we can do significantly better: there always exists a single interval that we can optimize over to achieve an $O(\log k)$-approximation of the optimal solution. We note that our result is in fact more nuanced and can also handle, to some degree, intervals of different sizes with some loss in the approximation ratio.
-
Optimizing GSE functions. We provide a much simpler proof of the result of Lesca et al. [14] that optimizing GSE functions in the bipartite matching context is in the class XP. Our result is obtained by adapting a framework introduced by Chakrabarty and Swamy [5] for minimizing monotone symmetric vector norms. At a high level, their approach involves designing proxy functions to replace the vector norm with a nearly equivalent objective that is less sensitive to the relative order of entries in the input vector. The replacement objective is then (approximately) solved using linear programming techniques. For our problem, we have to construct new proxy functions and show that they provide a good approximation of our desired objectives.
-
Optimizing over General Weights. Observe that since the restricted case of non-increasing weights [14] is NP-hard, the more general problem with arbitrary weights is also NP-hard. (We refer here to the decision variant of the problem, which asks whether, given a value $t$, there exists an assignment whose resulting utility under the weight vector exceeds $t$. It is easy to see that verifying the validity of a solution to this problem can be done in polynomial time for any weight vector.) Given this complexity, we again consider the question of optimizing over a single interval and show that such an optimization cannot guarantee an approximation ratio better than logarithmic. The lower bound holds even for restricted binary weights.
1.2 Additional Related Work
The theme of optimizing relaxations of max-min and maximum social welfare objectives has a long history in the Operations Research literature on facility location (cf. [3] for a comprehensive survey) and the work of Chakrabarty and Swamy [5] has sparked interest in applying this theme to other combinatorial problems (e.g., job scheduling, load balancing). In addition to the works mentioned in the introduction, there are several other related works worth discussing. Notably, the work of Deng and Zhang [6] analyzes cost minimization in facility location for a class of objectives that includes an interval. Regarding the subject of fairness, an exciting research direction aims to simultaneously (approximately) satisfy a class of objectives (e.g., [13, 7]). A more recent variant, proposed by Gupta et al. [10], studies the design of a small portfolio of solutions that simultaneously satisfies a class of objectives; this setting differs from ours in that we search for a single solution rather than a portfolio.
2 Preliminaries
We consider an assignment problem consisting of $n$ agents and $n$ indivisible items. Each agent $i$ has a non-negative valuation for each item $j$, denoted by $v_{ij}$. Throughout this work, we will use a square matrix $V \in \mathbb{R}^{n \times n}_{\ge 0}$ to concisely convey agents’ valuations (so that $V_{ij} = v_{ij}$). We assume that each agent wants a single item (i.e., agents have unit demand valuations). We denote an allocation by a permutation $\sigma$ over $[n]$, so that agent $i$ receives item $\sigma(i)$. Note that a valid assignment corresponds to a perfect bipartite matching. We let $\mathcal{M}$ denote the set of perfect matchings for our problem instance. Every allocation $\sigma$ induces a vector of values denoted by $v^{\sigma} = (v_{1\sigma(1)}, \dots, v_{n\sigma(n)})$. Our focus is on the ranked value vector $v^{\sigma\uparrow}$, the result of sorting the entries of the vector $v^{\sigma}$ in increasing order. In particular, $v^{\sigma\uparrow}_1$ is the smallest value attained by an agent and $v^{\sigma\uparrow}_n$ is the largest.
Ordered Weighted Averages
Given a vector $w \in \mathbb{R}^{n}_{\ge 0}$, we define a function $F_w$ that maps an arbitrary assignment $\sigma$ to a weighted linear combination of the values derived by the individuals as follows:

$F_w(\sigma) = \sum_{i=1}^{n} w_i \, v^{\sigma\uparrow}_i.$   (1)

Note that the function $F_w$ applies weights that depend on the positions of entries in the ranked value vector, which is sorted in increasing order. This function is known as an ordered weighted average (OWA). In an ordered optimization problem, our goal is to find a one-to-one assignment that maximizes the OWA operator $F_w$:

$\sigma^{*} \in \arg\max_{\sigma \in \mathcal{M}} F_w(\sigma).$   (2)

As we will see, the ordered optimization problem captures several well-studied objectives. For instance, when the vector $w$ is uniform, the goal is to maximize total welfare. As another example, note that the max-min welfare problem arises when the vector $w$ places weight only on the first position of the ranked value vector.
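Since the number of perfect matchings grows factorially, exhaustive search is viable only for tiny instances, but it makes the definitions concrete. A minimal sketch (function names are ours, not from the paper):

```python
from itertools import permutations

def owa(values, weights):
    """Ordered weighted average: weights applied to ascending-sorted values."""
    return sum(w * v for w, v in zip(weights, sorted(values)))

def brute_force_owa_assignment(V, weights):
    """Try every perfect matching of agents to items; tiny n only."""
    n = len(V)
    best_val, best_sigma = float("-inf"), None
    for sigma in permutations(range(n)):
        val = owa([V[i][sigma[i]] for i in range(n)], weights)
        if val > best_val:
            best_val, best_sigma = val, sigma
    return best_val, best_sigma

V = [[4, 1], [3, 2]]
welfare_opt, _ = brute_force_owa_assignment(V, [1, 1])  # max total welfare
maxmin_opt, _ = brute_force_owa_assignment(V, [1, 0])   # max-min welfare
```

On this instance the identity assignment yields values $(4, 2)$, which is optimal for both objectives (welfare $6$, minimum $2$), while the swapped assignment yields $(1, 3)$.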
3 Non-increasing Weights
In this section, we examine the class of ordered objectives whose weight vector $w$ consists of non-increasing entries (i.e., $w_1 \ge w_2 \ge \cdots \ge w_n$). These objectives, studied by Lesca et al. [14], are often called “fair” Ordered Weighted Averages (OWA). One particularly compelling reason behind the fairness label is the following interpretation: since the largest weights are applied to the smallest utilities, the objective conveys a gradation of concerns for several of the worst-off individuals under a particular assignment. For example, if the planner is concerned about the average welfare attained within quartiles, then she could use a weight vector in which the same (highest) weight is placed on the bottom 25%, a smaller weight is placed on the next 25%, and so on.
Fair OWA Multiagent Assignment
In pursuit of algorithms for the fair OWA objective, Lesca et al. [14] introduced and studied the Fair OWA Multiagent Assignment decision problem defined as follows.
Problem Fair OWA Multiagent Assignment (FOWAMA)
Input: A tuple $(V, w, t)$, where $V$ is an integer-valued square matrix, $w$ is a vector of non-negative and non-increasing weights (i.e., $w_1 \ge w_2 \ge \cdots \ge w_n$), and $t$ is a positive rational number.
Output: Does there exist an assignment $\sigma$ such that $F_w(\sigma) \ge t$?
Although many special instances of the FOWAMA problem are known to admit polynomial-time solutions (e.g., the max-min [12], max-welfare [8], and leximin [19]; see also [16], [9] for additional special cases), Lesca et al. [14] showed that the decision variant of the general problem is NP-complete. To better understand which subclasses of the FOWAMA problem admit efficient algorithms, they introduced a parametrized version called the $d$-FOWAMA problem, where the vector $w$ is assumed to have at most $d$ distinct entries. They also showed that the $d$-FOWAMA problem lies in the complexity class XP (i.e., the set of problems that can be solved in time $n^{f(d)}$, where $f$ is a computable function) via a somewhat involved recursive argument. In this section, we give a simpler proof of that result. We remark that proving membership within the complexity class FPT (i.e., the set of problems that can be solved in time $f(d) \cdot \mathrm{poly}(n)$, where $f$ is a computable function) remains an open question. (An earlier version of this paper had incorrectly claimed to resolve this problem.)
Theorem 1.
There exists an algorithm that decides the $d$-FOWAMA problem in time $O(n^{2d} \cdot T(n))$, where $T(n)$ is the running time of an algorithm that solves the maximum-weight bipartite matching problem.
At a high level, we prove Theorem 1 by adapting a framework initially outlined by Chakrabarty and Swamy [5] to tackle cost-minimization problems involving vector norms (e.g., minimizing the distance from a facility, or minimizing the makespan for parallel computation). Although the objective function for the FOWAMA problem does not involve a vector norm and our task is a maximization problem, we are still able to apply many of the key ideas, but with different techniques. First, we use a cleverly designed proxy function, which allows us to focus on optimizing a sum of proxy values. We then solve a linear programming relaxation of the proxy optimization problem. We direct the reader to Section 3.1 for the details.
3.1 Proof of Theorem 1
Consider the case in which the weight vector $w$ has at most $d$ distinct entries. For convenience, we also let $w_{n+1} = 0$. An important aspect of the vector $w$ that we examine are the breakpoints, namely the positions at which the entries decrease in magnitude. Formally, we let $B = \{i \in [n] : w_i > w_{i+1}\}$ be the set of breakpoints. We also introduce some additional notation to ease our exposition. For any $i \in [n]$, we let $b(i) = \min\{b \in B : b \ge i\}$ be the first breakpoint that appears at or after position $i$.
As we have alluded to in our earlier discussion on the min-norm framework, our proof will rely on several key insights. We explain each of them step by step. We begin with the observation that our ordered objective admits a convenient decomposition.
Bottom-$k$ Decomposition
For ease of exposition, we abuse notation to let $\operatorname{bot}_k(v)$ denote the value of the bottom-$k$, that is, the sum of the $k$ smallest entries of a vector $v$. We first note that the objective $F_w$ can be viewed as a linear combination of bottom-$k$ objectives.
Claim 2.
For every vector $v$, we have $F_w(v) = \sum_{b \in B} (w_b - w_{b+1}) \operatorname{bot}_b(v)$.
Proof.
We use a convenient telescoping sum, along with the definition of $B$ and the convention $w_{n+1} = 0$, to derive that

$F_w(v) = \sum_{i=1}^{n} w_i v^{\uparrow}_i = \sum_{i=1}^{n} \Big( \sum_{b \in B : b \ge i} (w_b - w_{b+1}) \Big) v^{\uparrow}_i = \sum_{b \in B} (w_b - w_{b+1}) \sum_{i=1}^{b} v^{\uparrow}_i = \sum_{b \in B} (w_b - w_{b+1}) \operatorname{bot}_b(v).$
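The decomposition in Claim 2 is easy to check numerically. In the sketch below (with `bottom` and `decompose` as our own names, and the convention that the weight after the last position is zero), breakpoints are the positions where the weight strictly drops:

```python
def owa(values, weights):
    return sum(w * v for w, v in zip(weights, sorted(values)))

def bottom(values, k):
    """Sum of the k smallest entries."""
    return sum(sorted(values)[:k])

def decompose(values, weights):
    """Sum over breakpoints b of (w_b - w_{b+1}) * bottom-b,
    with the convention w_{n+1} = 0."""
    n = len(weights)
    w = list(weights) + [0]
    breakpoints = [i for i in range(n) if w[i] > w[i + 1]]  # 0-indexed
    return sum((w[b] - w[b + 1]) * bottom(values, b + 1) for b in breakpoints)

values = [7, 2, 5, 2]
weights = [3, 3, 1, 0]          # non-increasing, three distinct values
lhs = owa(values, weights)      # 3*2 + 3*2 + 1*5 + 0*7 = 17
rhs = decompose(values, weights)
```

Here the breakpoints are positions 2 and 3 (1-indexed), and both sides evaluate to $2 \cdot \operatorname{bot}_2 + 1 \cdot \operatorname{bot}_3 = 8 + 9 = 17$.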
The fact that our objective decomposes into bottom-$b$ objectives is good news, as each bottom-$b$ objective is an instance of the interval objective for which we give algorithms in Section 4. The drawback, however, is that we do not have a means of simultaneously optimizing those objectives. Our workaround is to consider replacing these objectives with proxies, which will in turn allow us to do the optimizations all at once.
Proxy Values
As a warmup, we first design a proxy function for maximizing some bottom-$k$ interval. Inspired by our approach for an arbitrary interval, we consider a function that will potentially underestimate values that are too large to fall within the target interval for an optimal solution. For a parameter $t \ge 0$, we define the function $h_t(x) = \min\{x, t\}$. Observe that the function $h_t$ gives the following easy lower bound for $\operatorname{bot}_k$.
Claim 3.
For every value $t \ge 0$, integer $k \in [n]$, and vector $v$, we have $\operatorname{bot}_k(v) \ge \sum_{i=1}^{n} h_t(v_i) - (n-k)\, t$.
Proof.
For any value $t \ge 0$ and any vector $v$, we have $h_t(v_i) \le v_i$ and $h_t(v_i) \le t$, and hence

$\sum_{i=1}^{n} h_t(v_i) \le \sum_{i=1}^{k} v^{\uparrow}_i + (n-k)\, t = \operatorname{bot}_k(v) + (n-k)\, t.$
Notice that for the lower bound in Claim 3, a single transformation $h_t$ is being applied to all the entries in the vector $v$, not just the $k$ smallest entries. Going forward, we want to exploit the order-oblivious nature of the bound. In the following claim, we also show that given a good guess of the value $v^{\uparrow}_k$, the lower bound will in fact be a good estimate for the quantity $\operatorname{bot}_k(v)$.
Claim 4.
Fix a vector $v$ and a real number $t$. Suppose that $t = v^{\uparrow}_k$ for some integer $k \in [n]$. Then $\operatorname{bot}_k(v) = \sum_{i=1}^{n} h_t(v_i) - (n-k)\, t$.
Proof.
Let $S$ be a set of indices of the $k$ smallest entries of $v$ (ties broken arbitrarily). If $i$ is such that $i \in S$, then $v_i \le v^{\uparrow}_k = t$ and $h_t(v_i) = v_i$. Furthermore, if $i$ is such that $i \notin S$, then $v_i \ge t$ and $h_t(v_i) = t$. Together, these facts imply that

$\sum_{i=1}^{n} h_t(v_i) = \sum_{i \in S} v_i + (n-k)\, t = \operatorname{bot}_k(v) + (n-k)\, t,$

and rearranging yields the claim.
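Both claims can be sanity-checked numerically. In the sketch below, `proxy` implements the order-oblivious bound $\sum_i \min\{v_i, t\} - (n-k)t$; the names are ours:

```python
def bottom(values, k):
    return sum(sorted(values)[:k])

def proxy(values, k, t):
    """Order-oblivious bound from Claim 3: sum_i min(v_i, t) - (n - k) * t."""
    n = len(values)
    return sum(min(v, t) for v in values) - (n - k) * t

values = [9, 4, 6, 1]
k = 2

# Claim 3: the proxy never exceeds bottom-k, for any threshold t.
assert all(proxy(values, k, t) <= bottom(values, k) for t in range(12))

# Claim 4: choosing t as the k-th smallest value makes the bound tight.
t_star = sorted(values)[k - 1]
tight = proxy(values, k, t_star)
```

With $t^{*} = 4$ the proxy evaluates to $(4 + 4 + 4 + 1) - 2 \cdot 4 = 5$, exactly the bottom-$2$ value $1 + 4$.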
With the insights gleaned from our examination of bottom-$k$ optimization, we now return to the goal of designing a proxy for the weighted objective $F_w$. The crucial step we take is replacing each objective in our decomposition with the lower bounds we established. More precisely, for a vector of advice $t = (t_b)_{b \in B}$, we define the proxy value of a vector $v$ as follows:

$g_t(v) = \sum_{b \in B} (w_b - w_{b+1}) \Big( \sum_{i=1}^{n} h_{t_b}(v_i) - (n - b)\, t_b \Big).$
As with our proxy for the bottom-$k$, we now have an expression that does not depend on the relative order of entries in the vector $v$. On the other hand, it depends on the advice $t_b$ for each of the breakpoints in the set $B$.
The following two claims, which immediately follow from Claim 3 and Claim 4, show that with the appropriate advice, our proxy is a good estimator of the objective .
Claim 5.
For every advice vector $t$ and every vector $v$, we have $F_w(v) \ge g_t(v)$.
Claim 6.
Fix a vector $v$ and an advice vector $t$. Suppose that $t_b = v^{\uparrow}_b$ for every $b \in B$. Then $F_w(v) = g_t(v)$.
The only step that remains is to treat the proxy as the original objective and attempt to maximize it.
Optimizing the Proxy
Our new goal is to find a perfect matching that maximizes our proxy value, given some advice $t$. Since the terms $(n-b)\, t_b$ are constants that do not depend on the matching, we may drop them and formulate this as the following integer program:

$\max \sum_{i,j} \Big( \sum_{b \in B} (w_b - w_{b+1})\, h_{t_b}(v_{ij}) \Big) x_{ij} \quad \text{s.t.} \quad \sum_{j} x_{ij} = 1 \ \forall i, \qquad \sum_{i} x_{ij} = 1 \ \forall j, \qquad x_{ij} \in \{0, 1\}.$
Observe that this is precisely the maximum-weight perfect matching problem, in which the weight on each edge in the graph is modified by a function of the advice $t$ and the weight vector $w$. Hence, we can use any of the appropriate maximum-weight matching algorithms to perform our optimization (e.g., the Hungarian algorithm [12]).
Finally, we have all the ingredients to solve our optimization problem.
Proof of Theorem 1.
We propose the following procedure. We first guess the relevant entries of the ranked value vector of an optimal solution. Given this guess, we replace all values with appropriate weights given by the proxy function. Finally, we compute the maximum-weight matching in the modified problem instance. We ultimately return the matching with the largest objective value. The main subroutine is described in Algorithm 1. We discuss the correctness and running time separately.
Input: A tuple $(V, w)$ of the problem instance; guess $t = (t_b)_{b \in B}$
1. For every pair $(i, j)$, set the modified weight $\tilde{v}_{ij} = \sum_{b \in B} (w_b - w_{b+1})\, h_{t_b}(v_{ij})$.
2. Compute and return a maximum-weight perfect matching $\hat{\sigma}$ with respect to the modified weights.
We first show that for a suitable choice of the advice $t$, Algorithm 1 returns an optimal matching. Let $\sigma^{*}$ be an optimal solution to the ordered optimization problem with non-increasing weights and let $t_b = v^{\sigma^{*}\uparrow}_b$ for every $b \in B$. Suppose that $\hat{\sigma}$ is returned by our algorithm above. Applying Claim 5 and Claim 6, and using the optimality of $\hat{\sigma}$ for the proxy objective, we have

$F_w(v^{\hat{\sigma}}) \ge g_t(v^{\hat{\sigma}}) \ge g_t(v^{\sigma^{*}}) = F_w(v^{\sigma^{*}}).$

Hence, we have $F_w(v^{\hat{\sigma}}) \ge F_w(v^{\sigma^{*}})$, as desired. Since our procedure encounters an optimal matching on some iteration and returns the matching with the largest objective over all the iterations, it is clear that it returns an optimal solution.
Next, we compute the runtime of our procedure. We first compute the runtime of a single iteration, i.e., a single guess $t$. Replacing a single edge weight can be done in time $O(d)$, for a total of $O(d n^2)$ over all edges. Computing the maximum-weight perfect matching can be done in time $T(n)$, where $T(n)$ is the running time of a maximum-weight perfect matching algorithm. Computing the objective value for the resulting matching can be done in time $O(n \log n)$: we sort the matched values and multiply each by the appropriate entry in $w$. Overall, the cost for a single iteration is $O(d n^2 + T(n))$. It remains to determine the amount of advice needed to guarantee an optimal solution.
Claim 7.
Our procedure considers at most $\binom{n^2 + d}{d} = O(n^{2d})$ possible values of the advice $t$.
Proof.
We consider a simple counting argument. Sort the $n^2$ entries of the matrix $V$ into a non-decreasing sequence $u_1 \le u_2 \le \cdots \le u_{n^2}$, and consider the ways of inserting $d$ delimiters, each placed between consecutive sequence terms or after the final term $u_{n^2}$. It is clear that there is a surjection from the ways of inserting the delimiters onto the relevant advice vectors. Indeed, we can associate each delimiter with the largest sequence term that precedes it. The number of ways of inserting the delimiters is

$\binom{n^2 + d}{d} = O(n^{2d}),$

which implies our desired result.

Granting the claim above, and noting that the matching computation dominates the cost of each iteration, we conclude that the overall runtime of the algorithm is $O(n^{2d} \cdot T(n))$, as desired.
4 Uniform Weights on Intervals
The fair OWA objectives discussed in the previous section would certainly appeal to a planner who is concerned about the worst-off individuals; however, they might not be compelling if the planner is more concerned about other slices of the population. This is especially the case if the planner wants to improve the outcomes experienced by “typical” individuals, those who do not lie within the extreme ends of the economic ladder. For example, a planner might want a strong guarantee for the median utility received in the assignment.
Example 8 (Median Utility).
Suppose that the planner is interested in obtaining a strong guarantee for the median position on the economic ladder. Consider a setting with $n$ agents and $n$ items, where agent valuations are given by the rows of a matrix $V$ in which $\varepsilon$ denotes some small positive constant. In such instances, the sorted value vectors of the max-min and max-welfare solutions can each attain a median value strictly smaller than that of the solution that maximizes the median welfare.
More generally, a planner could be interested in maximizing the average utility derived by the individuals belonging to certain specified brackets within the economic ladder. For instance, she might believe that tending to the welfare of the extremely disadvantaged is not worth compromising on the outcomes for most individuals and thus opt for an assignment that prioritizes, say, the top 80% of individuals. Alternatively, she could believe that an ideal assignment is one that results in a strong middle class and thus aim to maximize the average utility derived by the middle segment of individuals.
To capture the kinds of priorities outlined above, we will consider ordered objectives whose weight vectors are such that $w_i = 1$ for all indices $i$ belonging to some specified subset $S$ of the economic ladder and $w_i = 0$ otherwise.
Optimizing a Single Interval
Perhaps the most natural setting of the subset $S$ is a contiguous slice of the economic ladder. Note that the associated objective amounts to maximizing the average value derived within some interval $[\ell, u]$. Thus, we refer to this ordered optimization task as interval optimization.
Problem Interval Optimization
Input: A tuple $(V, \ell, u)$, where $V$ is a square matrix encoding agent valuations for items (i.e., $V_{ij} = v_{ij}$), and $\ell, u$ are natural numbers such that $1 \le \ell \le u \le n$.
Output: A solution to the objective $\max_{\sigma \in \mathcal{M}} \sum_{i=\ell}^{u} v^{\sigma\uparrow}_i$
Recall that efficient procedures are known for several special cases: when $\ell = u = 1$, there is the bottleneck algorithm [8]; and when $\ell = 1$ and $u = n$, there is the Hungarian algorithm [12]. In this section, we complement those results by giving a simple, efficient combinatorial algorithm for maximizing the welfare for any arbitrary interval. In fact, we show that optimizing the more general interval objective is nearly as efficient as optimizing the traditional max-min and max-welfare objectives.
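Before presenting the algorithm, the interval objective itself is easy to pin down with a brute-force reference solver (viable only for tiny $n$; names are ours), which recovers max-min, max-welfare, and median welfare as the intervals $[1, 1]$, $[1, n]$, and the middle position:

```python
from itertools import permutations

def interval_value(values, lo, hi):
    """Sum of the ranked values in positions lo..hi (1-indexed, inclusive)."""
    return sum(sorted(values)[lo - 1:hi])

def brute_force_interval_opt(V, lo, hi):
    """Reference solver: try every perfect matching (tiny n only)."""
    n = len(V)
    return max(interval_value([V[i][s[i]] for i in range(n)], lo, hi)
               for s in permutations(range(n)))

V = [[4, 1, 2], [3, 2, 5], [6, 2, 3]]
max_min = brute_force_interval_opt(V, 1, 1)      # bottleneck objective
max_welfare = brute_force_interval_opt(V, 1, 3)  # total welfare
median_opt = brute_force_interval_opt(V, 2, 2)   # median welfare
```

On this instance the three objectives are maximized by different assignments, illustrating why a planner's choice of interval genuinely matters.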
Theorem 9.
There is an algorithm that, on input $(V, \ell, u)$, performs interval optimization in time $O(n^2 \cdot T(n))$, where $T(n)$ is the running time of an algorithm that solves the maximum-weight perfect matching problem.
Proof of Theorem 9.
We first present a subroutine (Algorithm 2) that maximizes the welfare for the specified interval, given a guess about the maximum value attained within the interval for the optimal solution. We also provide some intuition behind the approach. Although it might be tempting to immediately use the maximum weight matching algorithm, an issue we encounter is potentially over-prioritizing the contribution of values that are too large. With the guess in mind, we can truncate the large values to minimize their contribution to the total weight. We ensure that the truncation is just enough, so that the large value items remain at least as valuable as the best value for the interval. Unfortunately, our approach could still fail even after truncation: the contribution of small values could also be too large. We overcome this issue by assigning highly valuable dummy items to agents who would have otherwise gotten low value items. We can easily reassign those individuals without compromising on our goal.
Input: A tuple $(V, \ell, u)$ of the problem instance; guess $t$
Output: A one-to-one assignment
1. Construct an augmented valuation matrix $\tilde{V}$: truncate every value larger than $t$ down to $t$, and add dummy individuals together with highly valuable dummy items.
2. Compute a maximum-weight perfect matching for the adjacency matrix $\tilde{V}$.
3. Return the induced assignment of real individuals to items.
The algorithm that establishes the theorem is obtained by enumerating over all the possible guessed values. In other words, we iterate over all possible values $t$ appearing in the matrix $V$, use the subroutine to obtain a candidate matching for each, and return the candidate with the largest value on the interval $[\ell, u]$.
Claim 10.
Let $\sigma^{*}$ be a one-to-one assignment that maximizes the welfare for the interval with endpoints $\ell$ and $u$. If the guess satisfies $t = v^{\sigma^{*}\uparrow}_u$, then the assignment returned by Algorithm 2 also maximizes the welfare on the same interval.
Proof of Claim 10.
For ease of exposition, let the function denote the values derived by individuals in the instance associated with the matrix , so that for every pair . Note that this function will underestimate the value of real items, when compared to the original value function , (i.e., for every pair ).
Let be the matching obtained by modifying as follows. First, we reassign the individuals with smallest value (according to the matrix ) to dummy items. Then we assign the unassigned items to the dummy individuals. We now examine how the items in the reassignment are valued according to the function . Note that the dummy items and items paired to dummy individuals will have value . Also, note that a real item’s altered value will only be truncated to if its original value (according to the matrix ) exceeds ; only values will be truncated. Hence, we deduce that
Now consider the map $\hat{\sigma}$, a maximum-weight perfect matching for the adjacency matrix $\tilde{V}$ (defined in Algorithm 2). Let $R$ be the set of real individuals that are not assigned to a dummy item by the map $\hat{\sigma}$. Also, let $S$ be a set of the individuals in $R$ with the smallest values (according to the matrix $\tilde{V}$). Since $\hat{\sigma}$ is optimal and every entry in $\tilde{V}$ is at most $t$, we have
This, along with the following chain of inequalities, would imply our desired result.
The first inequality follows from the optimality of and the third follows from the fact that the function was constructed to underestimate the function . Thus, it remains to justify the second inequality. Suppose there are real individuals in who were assigned a dummy item under . When we modify to get , an individual in will now be among the individuals with smallest value (according to the matrix ). Therefore, the sum of values for the individuals in (a set of size ) is at most the sum .
Claim 10 implies that the assignment with the highest value for the interval over all guessed values of $t$ maximizes the social welfare of the interval $[\ell, u]$. Note that we can obtain the right value of $t$ by iterating over all the possible values in the matrix $V$. Having established correctness, we turn our attention to the running time. Note that steps 1 and 3 of Algorithm 2 can be done in time $O(n^2)$ and step 2 can be done in time $T(n)$. Note also that computing our objective value for the returned matching can be done in time $O(n \log n)$. Thus, the cost for a single iteration is $O(n^2 + T(n))$. Since we iterate over the $n^2$ possible values of $t$, the overall runtime is $O(n^2 \cdot T(n))$.
Remark 11.
There might not always be a unique solution to a given interval optimization task. For instance, it is possible to have two solutions that optimize the same interval, but the total social welfare is higher in one of them. Note that our algorithm outlined above makes no distinction between these optimal solutions and returns one of them arbitrarily.
Optimizing a Union of Intervals
Next, we consider the more general setting in which the subset $S$ is the union of $k$ disjoint intervals of the economic ladder. We call the ordered optimization task $k$-intervals optimization.
Problem $k$-Intervals Optimization
Input: A tuple $(V, I_1, \dots, I_k)$, where $V$ is an integer-valued square matrix and $I_1, \dots, I_k$ are disjoint intervals on the economic ladder such that $I_j = [\ell_j, u_j]$ for every $j \in [k]$.
Output: A solution to the objective $\max_{\sigma \in \mathcal{M}} \sum_{j=1}^{k} \sum_{i \in I_j} v^{\sigma\uparrow}_i$
For the above optimization problem, there is a simple algorithm with modest approximation guarantees when the set comprises intervals of the same size: we can run the interval optimization algorithm on the instance $(V, I_j)$ for each $j \in [k]$ and return the best assignment. Since one of the intervals must account for at least a $1/k$ fraction of the optimal value, a naive averaging argument would have suggested only a $1/k$-approximation; however, we can show that the approach actually results in a logarithmic multiplicative approximation. Our proof can be found in Appendix A.
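The trivial $1/k$ guarantee is easy to verify on a toy instance: optimize each interval separately, keep the assignment that scores best under the union objective, and compare against the brute-force optimum. A sketch (names are ours; brute force is viable only for tiny $n$):

```python
from itertools import permutations

def interval_value(values, lo, hi):
    """Sum of ranked values in positions lo..hi (1-indexed, inclusive)."""
    return sum(sorted(values)[lo - 1:hi])

def union_value(values, intervals):
    return sum(interval_value(values, lo, hi) for lo, hi in intervals)

def assigned_values(V, sigma):
    return [V[i][sigma[i]] for i in range(len(V))]

V = [[4, 1, 2, 7], [3, 2, 5, 1], [6, 2, 3, 2], [1, 8, 4, 3]]
intervals = [(1, 1), (3, 3)]   # k = 2 equal-size intervals

# Exact optimum of the union objective (brute force).
opt_union = max(union_value(assigned_values(V, s), intervals)
                for s in permutations(range(len(V))))

# Heuristic: optimize each interval separately, keep the assignment that
# scores best under the union objective.
heuristic = 0
for lo, hi in intervals:
    s = max(permutations(range(len(V))),
            key=lambda p: interval_value(assigned_values(V, p), lo, hi))
    heuristic = max(heuristic, union_value(assigned_values(V, s), intervals))

# Trivial guarantee: the best single interval recovers at least opt / k.
assert heuristic * len(intervals) >= opt_union
```

The final assertion is the naive averaging bound; the content of Theorem 12 is that for equal-size intervals the true loss is only logarithmic in $k$, not linear.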
Theorem 12.
Let $(V, I_1, \dots, I_k)$ be an instance of the $k$-intervals optimization problem such that $|I_j| = s$ for some number $s$ and every $j \in [k]$. There exists an interval $I_{j^{*}}$ such that the following holds: any solution to the interval optimization problem on instance $(V, I_{j^{*}})$ is an $O(\log k)$-approximation for the $k$-intervals optimization problem on the original instance. In particular, if $k$ is constant, the solution is a constant-factor approximation.
The above theorem highlights a potential upside of using interval optimization as a tool for (approximately) solving more ordered objectives. In the next section, we explore how well this approach fares on more general ordered objectives.
5 General Weights
Although the ordered objectives discussed so far are arguably quite natural from a fairness perspective, we recognize that a central planner could want to work with a more complicated objective. This is especially the case if she attempts to simultaneously reconcile the concerns of multiple stakeholders. Thus, we consider ordered optimization with general weights.
Problem General Ordered Optimization
Input: A tuple , where is an integer-valued square matrix, is an arbitrary vector of non-negative weights
Output: A solution to the objective
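In code, the general ordered objective is simply a weighted sum over the sorted utility profile. The sketch below assumes the ascending-sort convention used for fair OWA (the first weight pairs with the smallest utility); the function name is ours.

```python
def ordered_weighted_average(utilities, weights):
    # Pair the j-th weight with the j-th smallest utility and sum.
    # Ascending-sort convention: early weights apply to the worst-off agents.
    assert len(utilities) == len(weights)
    ranked = sorted(utilities)
    return sum(w * u for w, u in zip(weights, ranked))
```

A weight vector of (1, 0, ..., 0) recovers max-min welfare, all-ones recovers total social welfare, and a 0/1 indicator vector recovers (unnormalized) interval optimization.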
Since the FOWAMA problem discussed in Section 3 is already known to be NP-complete [14], an efficient algorithm is unlikely to exist when the weights are allowed to be arbitrary. Nevertheless, it is still worthwhile to consider simple heuristic algorithms that could lead to reasonable approximations.
A possible approach, inspired by works on simultaneous optimization [7, 13], is to consider how well a solution for one objective approximates another. In the previous section, we showed that arbitrary interval optimization is efficient. One might reasonably hope that a suitable interval could approximate a more complicated ordered objective. For instance, it could be the case that a max-welfare or max-min approach (or some other interval optimization) captures enough of the objective’s complexity. We show, however, that a logarithmic multiplicative error is about the best one can hope to achieve using a single interval on general instances. The proof can be found in Appendix B.
Theorem 13.
There exists an instance of the general ordered optimization problem such that the following holds. For any interval , there exists an assignment that optimizes interval such that
Recall that there could potentially be many assignments that optimize a given interval, and we make no assumption about the one returned by our algorithm for interval optimization (see Remark 11). In the worst case, our algorithm returns an assignment that yields the above lower bound.
6 Conclusion
In this work, we provide efficient exact and approximation algorithms for several ordered weighted average (OWA) operators that could appeal to a central planner with fairness considerations. Our work also motivates the study of OWA operators with arbitrary weights, which may arise as the result of aggregating the private OWA functions held by stakeholders. An interesting direction is to design and study aggregation functions for these privately held OWA functions. Consider the specific case in which each agent has some interval that she cares about. Is there an aggregation rule that motivates each agent to truthfully report the interval she cares about? Is there any such rule that is not some variation of dictatorship? Another fascinating direction is determining how well OWA operators can capture or approximate other notions of fairness (e.g., Nash social welfare).
References
- [1] Fateme Abbasi, Sandip Banerjee, Jarosław Byrka, Parinya Chalermsook, Ameet Gadekar, Kamyar Khodamoradi, Dániel Marx, Roohani Sharma, and Joachim Spoerhase. Parameterized Approximation Schemes for Clustering with General Norm Objectives. In 2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS), pages 1377–1399, 2023. doi:10.1109/FOCS57990.2023.00085.
- [2] Atila Abdulkadiroğlu, Parag A. Pathak, and Alvin E. Roth. The New York City high school match. American Economic Review, 95(2):364–367, 2005.
- [3] Hyung-Chan An and Ola Svensson. Recent developments in approximation algorithms for facility location and clustering problems. In Combinatorial Optimization and Graph Algorithms: Communications of NII Shonan Meetings, pages 1–19. Springer, 2017.
- [4] Nick Arnosti and Peng Shi. Design of lotteries and wait-lists for affordable housing allocation. Management Science, 66(6):2291–2307, 2020. doi:10.1287/MNSC.2019.3311.
- [5] Deeparnab Chakrabarty and Chaitanya Swamy. Approximation algorithms for minimum norm and ordered optimization problems. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, pages 126–137, Phoenix AZ USA, June 2019. ACM. doi:10.1145/3313276.3316322.
- [6] Shichuan Deng and Qianfan Zhang. Ordered k-Median with Outliers. In Amit Chakrabarti and Chaitanya Swamy, editors, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022), volume 245 of Leibniz International Proceedings in Informatics (LIPIcs), pages 34:1–34:22, Dagstuhl, Germany, 2022. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPICS.APPROX/RANDOM.2022.34.
- [7] Ashish Goel and Adam Meyerson. Simultaneous Optimization via Approximate Majorization for Concave Profits or Convex Costs. Algorithmica, 44(4):301–323, April 2006. doi:10.1007/S00453-005-1177-7.
- [8] O. Gross. The bottleneck assignment problem. RAND Corporation, 1959.
- [9] S.K. Gupta and A.P. Punnen. k-sum optimization problems. Operations Research Letters, 9(2):121–126, 1990.
- [10] Swati Gupta, Jai Moondra, and Mohit Singh. Balancing Notions of Equity: Trade-offs Between Fair Portfolio Sizes and Achievable Guarantees, pages 1136–1165. Society for Industrial and Applied Mathematics, 2025. doi:10.1137/1.9781611978322.33.
- [11] Thomas Kesselheim, Marco Molinaro, and Sahil Singla. Supermodular approximation of norms and applications. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing, pages 1841–1852, 2024. doi:10.1145/3618260.3649734.
- [12] Harold W. Kuhn. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2(1-2):83–97, 1955.
- [13] A. Kumar and J. Kleinberg. Fairness measures for resource allocation. In Proceedings 41st Annual Symposium on Foundations of Computer Science, pages 75–85, 2000.
- [14] Julien Lesca, Michel Minoux, and Patrice Perny. The fair owa one-to-one assignment problem: Np-hardness and polynomial time special cases. Algorithmica, 81(1):98–123, January 2019. doi:10.1007/S00453-018-0434-5.
- [15] Albert W. Marshall, Ingram Olkin, and Barry C. Arnold. Inequalities: Theory of Majorization and Its Applications. Springer Series in Statistics. Springer New York, New York, NY, 2011.
- [16] M. Minoux. Solving combinatorial problems with combined min-max-min-sum objective and applications. Math. Program., 45(1–3):361–372, August 1989. doi:10.1007/BF01589111.
- [17] John Rawls. A Theory of Justice. Cambridge, MA, 1971.
- [18] Alvin E. Roth. The origins, history, and design of the resident match. JAMA, 289(7):909–912, 2003.
- [19] P.T. Sokkalingam and Y.P. Aneja. Lexicographic bottleneck combinatorial problems. Operations Research Letters, 23(1):27–33, 1998. doi:10.1016/S0167-6377(98)00028-5.
- [20] John A Weymark. Generalized gini inequality indices. Mathematical Social Sciences, 1(4):409–430, 1981. doi:10.1016/0165-4896(81)90018-4.
- [21] Ronald R Yager. On ordered weighted averaging aggregation operators in multicriteria decisionmaking. IEEE Transactions on systems, Man, and Cybernetics, 18(1):183–190, 1988. doi:10.1109/21.87068.
- [22] Ronald R. Yager, Janusz Kacprzyk, Gleb Beliakov, and Janusz Kacprzyk, editors. Recent Developments in the Ordered Weighted Averaging Operators: Theory and Practice, volume 265 of Studies in Fuzziness and Soft Computing. Springer Berlin Heidelberg, Berlin, Heidelberg, 2011.
Appendix A Missing Proofs from Section 4
See Theorem 12.
Proof of Theorem 12.
For ease of exposition we will abuse notation, letting for a given subset denote the OWA operator where if and otherwise. We also remind the reader that, for the -intervals optimization problem, the intervals are ordered so that is a higher region on the economic ladder (and hence corresponds to individuals with higher utility) than interval , is higher than , and so on.
Let be an assignment that optimizes the union of intervals . For each index , let be the assignment that optimizes the interval . Suppose that is a function such that for every , we have . Then, observe that for every , we have
The third inequality follows from recalling the fact that every value in an interval that precedes is at least as large as the maximum value within the interval and comparing the average value attained within each interval. Now, rearranging our final expression and summing over , we see that
for some absolute constant . We claim that would lead to a contradiction. On one hand, we would have . Yet, the optimality of each assignment implies that
Hence, we must have . Moreover, there must exist an interval such that , as desired.
Appendix B Missing Proofs from Section 5
See Theorem 13.
Proof of Theorem 13.
We will construct an instance involving the union of disjoint intervals, where is later defined in terms of the number of agents . At a high level, we design the valuation matrix in such a way that each specified interval corresponds to a fixed set of agents . Each agent in has a unique item she prefers most; her value for this item significantly exceeds that of any other item. In order to optimize the union , an assignment must ensure that for each , every agent in gets her unique preferred item. An assignment that optimizes some other region of the ladder may fail to provide maximum utility for some set , resulting in a significant decrease in the contribution to the target objective. In the paragraphs that follow, we describe the key aspects of our construction and analyze how well a single interval can approximate the union objective.
Construction
We construct an instance with agents. These agents are partitioned into sets such that and for every . For ease of exposition, we also let . Let be an injective function such that for each , every agent is mapped to some agent . We design a valuation matrix as follows:
By design of the function and matrix , the following claims are immediate.
Claim 14.
For every , the item most preferred by an agent is also the item most preferred by the agent .
Claim 15.
The following hold for any matching :
-
For any , and any pair , we have
-
For any , and any pair , we have .
As a result of Claim 15, we can treat each set of agents or as occupying a fixed interval in the economic ladder, regardless of the chosen assignment. Moreover, the ladder can be viewed as an interleaving of the sets (see Figure 1).
Analysis
Let be the intervals respectively corresponding to the agents in the sets , and let be the union of those intervals. We first analyze the assignment that optimizes the union . For ease of exposition we will abuse notation, letting, for each subset , denote the OWA operator where if and otherwise. Recall that for each , the unique item preferred most by agent has value . An assignment that gives each agent in her most preferred item maximizes the value derived for the union . Computing the objective value , we therefore get
We now analyze how well a single interval is able to approximate this optimal value. For each , we will consider an assignment that optimizes the interval , but whose contribution to the objective is worst-case (in the sense that as many agents in receive their unique favorite item as possible). Since is worst case, we claim there is at most one set whose agents are assigned their most preferred items. Indeed, this follows from Claim 14 and considering several cases:
-
If corresponds to a region of the ladder that contains all the agents in and all the agents in for some , then modifying the assignment to give each agent in their most preferred item (instead of the agents in ) does not decrease the value of .
-
If corresponds to a region of the ladder that contains some of the agents in and none of the agents in for some , then modifying the assignment to give each agent in their unique favorite item does not decrease the value of .
-
If corresponds to a region of the ladder that contains at most agents from and some of the agents in , then could potentially have given some of the agents in their most preferred items.
Since there is at most one set whose agents are assigned their most preferred items, the objective value is
Comparing to the assignment , we see that
We use the following claim to complete our proof.
Claim 16.
Proof.
Recall that the number of agents is
By taking the last term of the summation and assuming that , it follows that
Revision Notice
This is a revised version of the eponymous paper that appeared in the proceedings of FORC 2025 (LIPIcs, volume 329, https://www.dagstuhl.de/dagpub/978-3-95977-367-6, published in June, 2025), in which it was erroneously claimed that the fair OWA problem belongs to the class FPT. This claim is withdrawn and replaced by a simple proof of the fact that fair OWA belongs to the class XP in this version (Theorem 1). All references to this error in the introduction and abstract have also been updated accordingly.
Dagstuhl Publishing – September 02, 2025.