Search Results

Documents authored by Stein, Cliff


Found 2 Possible Name Variants:

Stein, Cliff

Document
Submodular Secretary Problem with Shortlists

Authors: Shipra Agrawal, Mohammad Shadravan, and Cliff Stein

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
In the submodular k-secretary problem, the goal is to select k items in a randomly ordered input so as to maximize the expected value of a given monotone submodular function on the set of selected items. In this paper, we introduce a relaxation of this problem, which we refer to as the submodular k-secretary problem with shortlists. In the proposed problem setting, the algorithm is allowed to choose more than k items as part of a shortlist. Then, after seeing the entire input, the algorithm can choose a subset of size k from the bigger set of items in the shortlist. We are interested in understanding to what extent this relaxation can improve the achievable competitive ratio for the submodular k-secretary problem. In particular, using an O(k)-sized shortlist, can an online algorithm achieve a competitive ratio close to the best achievable offline approximation factor for this problem? We answer this question affirmatively by giving a polynomial time algorithm that achieves a 1-1/e-epsilon-O(k^{-1}) competitive ratio for any constant epsilon>0, using a shortlist of size eta_epsilon(k)=O(k). This is especially surprising considering that the best known competitive ratio (in polynomial time) for the submodular k-secretary problem is (1/e-O(k^{-1/2}))(1-1/e) [Thomas Kesselheim and Andreas Tönnis, 2017]. The proposed algorithm also has significant implications for another important problem: submodular function maximization under the random-order streaming model with a k-cardinality constraint. We show that our algorithm can be implemented in the streaming setting using a memory buffer of size eta_epsilon(k)=O(k) to achieve a 1-1/e-epsilon-O(k^{-1}) approximation. This result substantially improves upon [Norouzi-Fard et al., 2018], which achieved the previously best known approximation factor of 1/2 + 8 x 10^{-14} using O(k log k) memory, and closely matches the known upper bound for this problem [McGregor and Vu, 2017].
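
To make the offline benchmark concrete, here is a minimal sketch (not the paper's online algorithm) of the classic greedy for monotone submodular maximization under a cardinality constraint, which attains the 1-1/e offline factor the abstract compares against; the coverage function and item universe in the example are illustrative assumptions.

def greedy_submodular(items, f, k):
    """Classic offline greedy: repeatedly add the item with the largest marginal gain.

    f maps a frozenset of items to a real value and is assumed monotone submodular.
    """
    chosen = set()
    for _ in range(k):
        best_item, best_gain = None, 0.0
        for x in items:
            if x in chosen:
                continue
            gain = f(frozenset(chosen | {x})) - f(frozenset(chosen))
            if gain > best_gain:
                best_item, best_gain = x, gain
        if best_item is None:          # no remaining item has positive marginal gain
            break
        chosen.add(best_item)
    return chosen

# Toy example: a coverage function (monotone submodular) over small sets.
universe = {"a": {1, 2}, "b": {2, 3}, "c": {4}, "d": {1, 4}}
cover = lambda S: len(set().union(*(universe[x] for x in S))) if S else 0
print(greedy_submodular(list(universe), cover, k=2))   # {'a', 'b'} under this tie-breaking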

Cite as

Shipra Agrawal, Mohammad Shadravan, and Cliff Stein. Submodular Secretary Problem with Shortlists. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 1:1-1:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{agrawal_et_al:LIPIcs.ITCS.2019.1,
  author =	{Agrawal, Shipra and Shadravan, Mohammad and Stein, Cliff},
  title =	{{Submodular Secretary Problem with Shortlists}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{1:1--1:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.1},
  URN =		{urn:nbn:de:0030-drops-100949},
  doi =		{10.4230/LIPIcs.ITCS.2019.1},
  annote =	{Keywords: Submodular Optimization, Secretary Problem, Streaming Algorithms}
}
Document
Dynamic Matching: Reducing Integral Algorithms to Approximately-Maximal Fractional Algorithms

Authors: Moab Arar, Shiri Chechik, Sarel Cohen, Cliff Stein, and David Wajc

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
We present a simple randomized reduction from fully-dynamic integral matching algorithms to fully-dynamic "approximately-maximal" fractional matching algorithms. Applying this reduction to the recent fractional matching algorithm of Bhattacharya, Henzinger, and Nanongkai (SODA 2017), we obtain a novel result for the integral problem. Specifically, our main result is a randomized fully-dynamic (2+epsilon)-approximate integral matching algorithm with small polylog worst-case update time. For the (2+epsilon)-approximation regime only a fractional fully-dynamic (2+epsilon)-matching algorithm with worst-case polylog update time was previously known, due to Bhattacharya et al. (SODA 2017). Our algorithm is the first algorithm that maintains approximate matchings with worst-case update time better than polynomial, for any constant approximation ratio. As a consequence, we also obtain the first constant-approximate worst-case polylogarithmic update time maximum weight matching algorithm.
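
As a toy illustration only (this is not the paper's reduction and carries no competitive guarantee), the sketch below turns a fractional matching into an integral one by sampling each edge with probability equal to its fractional value and then keeping a greedy maximal matching among the sampled edges; the input dictionary of edge values is an assumption of the example.

import random

def round_fractional_matching(frac, seed=0):
    """frac: dict mapping an edge (u, v) to its fractional value in [0, 1]."""
    rng = random.Random(seed)
    sampled = [e for e, val in frac.items() if rng.random() < val]
    matched, matching = set(), []
    for u, v in sampled:               # greedy maximal matching on the sampled edges
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# A fractional matching of value 2 on a 4-cycle.
print(round_fractional_matching({(1, 2): 0.5, (2, 3): 0.5, (3, 4): 0.5, (1, 4): 0.5}))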

Cite as

Moab Arar, Shiri Chechik, Sarel Cohen, Cliff Stein, and David Wajc. Dynamic Matching: Reducing Integral Algorithms to Approximately-Maximal Fractional Algorithms. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 7:1-7:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{arar_et_al:LIPIcs.ICALP.2018.7,
  author =	{Arar, Moab and Chechik, Shiri and Cohen, Sarel and Stein, Cliff and Wajc, David},
  title =	{{Dynamic Matching: Reducing Integral Algorithms to Approximately-Maximal Fractional Algorithms}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{7:1--7:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.7},
  URN =		{urn:nbn:de:0030-drops-90112},
  doi =		{10.4230/LIPIcs.ICALP.2018.7},
  annote =	{Keywords: Dynamic, Worst-case, Maximum Matching, Maximum Weight Matching}
}
Document
A 2-Competitive Algorithm For Online Convex Optimization With Switching Costs

Authors: Nikhil Bansal, Anupam Gupta, Ravishankar Krishnaswamy, Kirk Pruhs, Kevin Schewior, and Cliff Stein

Published in: LIPIcs, Volume 40, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)


Abstract
We consider a natural online optimization problem set on the real line. The state of the online algorithm at each integer time is a location on the real line. At each integer time, a convex function arrives online. In response, the online algorithm picks a new location. The cost paid by the online algorithm for this response is the distance moved plus the value of the function at the final destination. The objective is then to minimize the aggregate cost over all time. The motivating application is rightsizing power-proportional data centers. We give a 2-competitive algorithm for this problem. We also give a 3-competitive memoryless algorithm, and show that this is the best competitive ratio achievable by a deterministic memoryless algorithm. Finally we show that this online problem is strictly harder than the standard ski rental problem.
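
The cost model in the abstract is easy to state in code. The sketch below only evaluates the cost of a trajectory and pairs it with a naive "move to the minimizer" rule for illustration; it is not the paper's 2-competitive or memoryless algorithm, and the example functions are assumptions.

def total_cost(start, arrivals, choose):
    """arrivals: list of (f, minimizer) pairs arriving online; choose picks the next point."""
    x, cost = start, 0.0
    for f, minimizer in arrivals:
        new_x = choose(x, f, minimizer)
        cost += abs(new_x - x) + f(new_x)      # movement cost + value at the destination
        x = new_x
    return cost

move_to_minimizer = lambda x, f, minimizer: minimizer     # naive rule, not competitive

# Two convex functions arrive: (x-3)^2 then (x+1)^2, starting from location 0.
arrivals = [(lambda x: (x - 3) ** 2, 3.0), (lambda x: (x + 1) ** 2, -1.0)]
print(total_cost(0.0, arrivals, move_to_minimizer))        # 3 + 0 + 4 + 0 = 7.0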

Cite as

Nikhil Bansal, Anupam Gupta, Ravishankar Krishnaswamy, Kirk Pruhs, Kevin Schewior, and Cliff Stein. A 2-Competitive Algorithm For Online Convex Optimization With Switching Costs. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 40, pp. 96-109, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{bansal_et_al:LIPIcs.APPROX-RANDOM.2015.96,
  author =	{Bansal, Nikhil and Gupta, Anupam and Krishnaswamy, Ravishankar and Pruhs, Kirk and Schewior, Kevin and Stein, Cliff},
  title =	{{A 2-Competitive Algorithm For Online Convex Optimization With Switching Costs}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)},
  pages =	{96--109},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-89-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{40},
  editor =	{Garg, Naveen and Jansen, Klaus and Rao, Anup and Rolim, Jos\'{e} D. P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2015.96},
  URN =		{urn:nbn:de:0030-drops-52970},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2015.96},
  annote =	{Keywords: Stochastic, Scheduling}
}

Stein, Clifford

Document
Drawing Competitive Districts in Redistricting

Authors: Gabriel Chuang, Oussama Hanguir, and Clifford Stein

Published in: LIPIcs, Volume 295, 5th Symposium on Foundations of Responsible Computing (FORC 2024)


Abstract
In the process of redistricting, one important metric is the number of competitive districts, that is, districts where both parties have a reasonable chance of winning a majority of votes. Competitive districts are important for achieving proportionality, responsiveness, and other desirable qualities; some states even directly list competitiveness in their legally-codified districting requirements. In this work, we discuss the problem of drawing plans with at least a fixed number of competitive districts. In addition to the standard, "vote-band" measure of competitiveness (i.e., how close was the last election?), we propose a measure that explicitly considers "swing voters" - the segment of the population that may choose to vote either way, or not vote at all, in a given election. We present two main, contrasting results. First, from a computational complexity perspective, we show that the task of drawing plans with competitive districts is NP-hard, even on very natural instances where the districting task itself is easy (e.g., small rectangular grids of population-balanced cells). Second, however, we show that a simple hill-climbing procedure can in practice find districtings on real states in which all the districts are competitive. We present the results of the latter on the precinct-level graphs of the U.S. states of North Carolina and Arizona, and discuss trade-offs between competitiveness and other desirable qualities.
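
The hill-climbing idea can be sketched in a few lines. The code below is a generic illustration under simplifying assumptions (no contiguity or population-balance constraints, a vote-band test with an assumed 5% band), not the procedure or data used in the paper.

import random

def competitive_count(assignment, votes, k, band=0.05):
    """Count districts whose two-party margin is within `band` of the district's total vote."""
    totals = [[0, 0] for _ in range(k)]
    for cell, d in assignment.items():
        totals[d][0] += votes[cell][0]
        totals[d][1] += votes[cell][1]
    return sum(1 for a, b in totals if a + b > 0 and abs(a - b) <= band * (a + b))

def hill_climb(votes, k, steps=10000, seed=0):
    """Local search: reassign one cell at a time, keeping moves that do not reduce the count."""
    rng = random.Random(seed)
    cells = list(votes)
    assignment = {c: rng.randrange(k) for c in cells}
    best = competitive_count(assignment, votes, k)
    for _ in range(steps):
        cell, new_d = rng.choice(cells), rng.randrange(k)
        old_d = assignment[cell]
        assignment[cell] = new_d
        score = competitive_count(assignment, votes, k)
        if score >= best:
            best = score
        else:
            assignment[cell] = old_d          # revert a worsening move
    return assignment, best

# votes: cell -> (party A votes, party B votes); here 8 cells split into 2 districts.
votes = {i: (100 + 10 * (i % 3), 100 + 10 * ((i + 1) % 3)) for i in range(8)}
print(hill_climb(votes, k=2)[1])              # prints the number of competitive districts found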

Cite as

Gabriel Chuang, Oussama Hanguir, and Clifford Stein. Drawing Competitive Districts in Redistricting. In 5th Symposium on Foundations of Responsible Computing (FORC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 295, pp. 7:1-7:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chuang_et_al:LIPIcs.FORC.2024.7,
  author =	{Chuang, Gabriel and Hanguir, Oussama and Stein, Clifford},
  title =	{{Drawing Competitive Districts in Redistricting}},
  booktitle =	{5th Symposium on Foundations of Responsible Computing (FORC 2024)},
  pages =	{7:1--7:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-319-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{295},
  editor =	{Rothblum, Guy N.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2024.7},
  URN =		{urn:nbn:de:0030-drops-200902},
  doi =		{10.4230/LIPIcs.FORC.2024.7},
  annote =	{Keywords: Redistricting, Computational Complexity, Algorithms}
}
Document
APPROX
Matching Drivers to Riders: A Two-Stage Robust Approach

Authors: Omar El Housni, Vineet Goyal, Oussama Hanguir, and Clifford Stein

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
Matching demand (riders) to supply (drivers) efficiently is a fundamental problem for ride-hailing platforms, which need to match riders (almost) as soon as a request arrives, with only partial knowledge about future ride requests. A myopic approach that computes an optimal matching for current requests while ignoring future uncertainty can be highly sub-optimal. In this paper, we consider a two-stage robust optimization framework for this matching problem where future demand uncertainty is modeled using a set of demand scenarios (specified explicitly or implicitly). The goal is to match the current requests to drivers (in the first stage) so that the sum of the first-stage matching cost and the worst-case second-stage matching cost over all scenarios is minimized. We show that this two-stage robust matching problem is NP-hard under both explicit and implicit models of uncertainty. We present constant-factor approximation algorithms for both models of uncertainty under different settings and show that they improve significantly over standard greedy approaches.
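
For intuition, the two-stage robust objective can be evaluated by brute force on tiny instances with explicitly listed scenarios; the sketch below does only that (it is not the paper's approximation algorithm), and the cost table, first-stage matching, and scenarios it expects are assumptions of the example.

from itertools import permutations

def second_stage_cost(riders, free_drivers, cost):
    """Cheapest matching of a scenario's riders to the remaining drivers (brute force)."""
    if len(riders) > len(free_drivers):
        return float("inf")
    return min(
        (sum(cost[r][d] for r, d in zip(riders, perm))
         for perm in permutations(free_drivers, len(riders))),
        default=0.0,
    )

def two_stage_cost(stage1, scenarios, drivers, cost):
    """First-stage matching cost plus the worst-case second-stage cost over all scenarios."""
    first = sum(cost[r][d] for r, d in stage1.items())
    free = [d for d in drivers if d not in stage1.values()]
    return first + max(second_stage_cost(sc, free, cost) for sc in scenarios)

cost = {"r1": {"d1": 1, "d2": 3}, "r2": {"d1": 2, "d2": 1}}
print(two_stage_cost({"r1": "d1"}, scenarios=[["r2"], []], drivers=["d1", "d2"], cost=cost))  # 1 + 1 = 2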

Cite as

Omar El Housni, Vineet Goyal, Oussama Hanguir, and Clifford Stein. Matching Drivers to Riders: A Two-Stage Robust Approach. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 12:1-12:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{housni_et_al:LIPIcs.APPROX/RANDOM.2021.12,
  author =	{Housni, Omar El and Goyal, Vineet and Hanguir, Oussama and Stein, Clifford},
  title =	{{Matching Drivers to Riders: A Two-Stage Robust Approach}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{12:1--12:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.12},
  URN =		{urn:nbn:de:0030-drops-147054},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.12},
  annote =	{Keywords: matching, robust optimization, approximation algorithms}
}
Document
Incremental Edge Orientation in Forests

Authors: Michael A. Bender, Tsvi Kopelowitz, William Kuszmaul, Ely Porat, and Clifford Stein

Published in: LIPIcs, Volume 204, 29th Annual European Symposium on Algorithms (ESA 2021)


Abstract
For any forest G = (V, E) it is possible to orient the edges E so that no vertex in V has out-degree greater than 1. This paper considers the incremental edge-orientation problem, in which the edges E arrive over time and the algorithm must maintain a low-out-degree edge orientation at all times. We give an algorithm that maintains a maximum out-degree of 3 while flipping at most O(log log n) edge orientations per edge insertion, with high probability in n. The algorithm requires worst-case time O(log n log log n) per insertion, and takes amortized time O(1). The previous state of the art required up to O(log n / log log n) edge flips per insertion. We then apply our edge-orientation results to the problem of dynamic Cuckoo hashing. The problem of designing simple families ℋ of hash functions that are compatible with Cuckoo hashing has received extensive attention. These families ℋ are known to satisfy static guarantees, but do not typically come with dynamic guarantees for the running time of inserts and deletes. We show how to transform static guarantees (for 1-associativity) into near-state-of-the-art dynamic guarantees (for O(1)-associativity) in a black-box fashion. Rather than relying on the family ℋ to supply randomness, as in past work, we instead rely on randomness within our table-maintenance algorithm.
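
For contrast with the result above, here is a naive baseline (a sketch, not the paper's algorithm): it keeps every out-degree at most 1 in a growing forest by re-rooting, which is correct but may flip an entire root path per insertion, exactly the cost that the O(log log n)-flip guarantee avoids.

class NaiveForestOrientation:
    """Maintain out-degree <= 1 under edge insertions into a forest (naive baseline)."""

    def __init__(self):
        self.out = {}                      # vertex -> the single vertex it points to, or None

    def _make_root(self, u):
        prev, cur = None, u
        while cur is not None:             # reverse every edge on the path from u to its root
            nxt = self.out.get(cur)
            self.out[cur] = prev
            prev, cur = cur, nxt

    def insert(self, u, v):
        self.out.setdefault(u, None)
        self.out.setdefault(v, None)
        self._make_root(u)                 # after this, u has out-degree 0
        self.out[u] = v                    # orient the new edge u -> v

forest = NaiveForestOrientation()
for u, v in [(1, 2), (1, 3), (3, 4)]:      # inserting (1, 3) flips the edge 1 -> 2
    forest.insert(u, v)
print(forest.out)                          # {1: 3, 2: 1, 3: 4, 4: None}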

Cite as

Michael A. Bender, Tsvi Kopelowitz, William Kuszmaul, Ely Porat, and Clifford Stein. Incremental Edge Orientation in Forests. In 29th Annual European Symposium on Algorithms (ESA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 204, pp. 12:1-12:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bender_et_al:LIPIcs.ESA.2021.12,
  author =	{Bender, Michael A. and Kopelowitz, Tsvi and Kuszmaul, William and Porat, Ely and Stein, Clifford},
  title =	{{Incremental Edge Orientation in Forests}},
  booktitle =	{29th Annual European Symposium on Algorithms (ESA 2021)},
  pages =	{12:1--12:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-204-4},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{204},
  editor =	{Mutzel, Petra and Pagh, Rasmus and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2021.12},
  URN =		{urn:nbn:de:0030-drops-145933},
  doi =		{10.4230/LIPIcs.ESA.2021.12},
  annote =	{Keywords: edge orientation, graph algorithms, Cuckoo hashing, hash functions}
}
Document
Track A: Algorithms, Complexity and Games
Log Diameter Rounds Algorithms for 2-Vertex and 2-Edge Connectivity

Authors: Alexandr Andoni, Clifford Stein, and Peilin Zhong

Published in: LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)


Abstract
Many modern parallel systems, such as MapReduce, Hadoop and Spark, can be modeled well by the MPC model. The MPC model captures coarse-grained computation on large data well: data is distributed to processors, each of which has a sublinear (in the input data) amount of memory, and we alternate between rounds of computation and rounds of communication, where each machine can communicate an amount of data as large as the size of its memory. This model is stronger than the classical PRAM model, and it is an intriguing question to design algorithms whose running time is smaller than in the PRAM model. In this paper, we study two fundamental problems, 2-edge connectivity and 2-vertex connectivity (biconnectivity). PRAM algorithms that run in O(log n) time have been known for many years. We give algorithms using roughly log diameter rounds in the MPC model. Our main results are, for an n-vertex, m-edge graph of diameter D and bi-diameter D', 1) an O(log D log log_{m/n} n) parallel time 2-edge connectivity algorithm, 2) an O(log D log^2 log_{m/n} n + log D' log log_{m/n} n) parallel time biconnectivity algorithm, where the bi-diameter D' is the largest cycle length over all the vertex pairs in the same biconnected component. Our results are fully scalable, meaning that the memory per processor can be O(n^{delta}) for arbitrary constant delta>0, and the total memory used is linear in the problem size. Our 2-edge connectivity algorithm achieves the same parallel time as the connectivity algorithm of [Andoni et al., 2018]. We also show an Omega(log D') conditional lower bound for the biconnectivity problem.
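
As a point of reference for the problem itself (not for the MPC model), the classical sequential approach to 2-edge connectivity is bridge finding via DFS low-link values; the sketch below implements that textbook baseline, whereas the paper's contribution is achieving roughly log-diameter rounds in MPC.

import sys
from collections import defaultdict

def find_bridges(n, edges):
    """Return the bridge edges of an undirected multigraph on vertices 0..n-1."""
    sys.setrecursionlimit(max(10000, 2 * n + 10))
    graph = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        graph[u].append((v, i))
        graph[v].append((u, i))
    disc, low = [-1] * n, [0] * n
    bridges, timer = [], [0]

    def dfs(u, parent_edge):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v, ei in graph[u]:
            if ei == parent_edge:
                continue
            if disc[v] == -1:
                dfs(v, ei)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:       # no back edge from v's subtree above u
                    bridges.append(edges[ei])
            else:
                low[u] = min(low[u], disc[v])

    for s in range(n):
        if disc[s] == -1:
            dfs(s, -1)
    return bridges

# A connected graph is 2-edge connected iff it has no bridges.
print(find_bridges(4, [(0, 1), (1, 2), (2, 0), (2, 3)]))   # [(2, 3)]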

Cite as

Alexandr Andoni, Clifford Stein, and Peilin Zhong. Log Diameter Rounds Algorithms for 2-Vertex and 2-Edge Connectivity. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 14:1-14:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{andoni_et_al:LIPIcs.ICALP.2019.14,
  author =	{Andoni, Alexandr and Stein, Clifford and Zhong, Peilin},
  title =	{{Log Diameter Rounds Algorithms for 2-Vertex and 2-Edge Connectivity}},
  booktitle =	{46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
  pages =	{14:1--14:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-109-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{132},
  editor =	{Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.14},
  URN =		{urn:nbn:de:0030-drops-105906},
  doi =		{10.4230/LIPIcs.ICALP.2019.14},
  annote =	{Keywords: parallel algorithms, biconnectivity, 2-edge connectivity, the MPC model}
}
Document
Invited Talk
Approximate Matchings in Massive Graphs via Local Structure (Invited Talk)

Authors: Clifford Stein

Published in: LIPIcs, Volume 123, 29th International Symposium on Algorithms and Computation (ISAAC 2018)


Abstract
Finding a maximum matching is a fundamental algorithmic problem and is fairly well understood in traditional sequential computing models. Some modern applications require that we handle massive graphs, and hence we need to consider algorithms in models that do not allow the entire input graph to be held in the memory of one computer, or models in which the graph is evolving over time. We introduce a new concept called an "Edge Degree Constrained Subgraph (EDCS)", which is a subgraph that is guaranteed to contain a large matching, and which can be identified via local conditions. We then show how to use an EDCS to find 1.5-approximate matchings in several different models, including MapReduce, streaming and distributed computing. We can also use an EDCS to maintain a 1.5-optimal matching in a dynamic graph. This work is joint with Sepehr Assadi, Aaron Bernstein, Mohammad Hossein Bateni and Vahab Mirrokni.
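
The "local conditions" can be made concrete with a small checker for the two EDCS degree conditions as they are commonly stated in this line of work (parameters beta >= beta_minus); the sketch only verifies the conditions for a given subgraph and does not construct an EDCS or compute a matching.

from collections import Counter

def is_edcs(G_edges, H_edges, beta, beta_minus):
    """Check the two local EDCS conditions for a subgraph H of G."""
    H = {frozenset(e) for e in H_edges}
    deg = Counter(v for e in H for v in e)                      # degrees within H
    inside_ok = all(sum(deg[v] for v in e) <= beta for e in H)  # bounded degree sum on H-edges
    outside = [frozenset(e) for e in G_edges if frozenset(e) not in H]
    outside_ok = all(sum(deg[v] for v in e) >= beta_minus for e in outside)
    return inside_ok and outside_ok

# Toy example: G is the path a-b-c-d and H keeps the two end edges.
G = [("a", "b"), ("b", "c"), ("c", "d")]
H = [("a", "b"), ("c", "d")]
print(is_edcs(G, H, beta=2, beta_minus=2))    # True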

Cite as

Clifford Stein. Approximate Matchings in Massive Graphs via Local Structure (Invited Talk). In 29th International Symposium on Algorithms and Computation (ISAAC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 123, p. 2:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{stein:LIPIcs.ISAAC.2018.2,
  author =	{Stein, Clifford},
  title =	{{Approximate Matchings in Massive Graphs via Local Structure}},
  booktitle =	{29th International Symposium on Algorithms and Computation (ISAAC 2018)},
  pages =	{2:1--2:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-094-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{123},
  editor =	{Hsu, Wen-Lian and Lee, Der-Tsai and Liao, Chung-Shou},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2018.2},
  URN =		{urn:nbn:de:0030-drops-99505},
  doi =		{10.4230/LIPIcs.ISAAC.2018.2},
  annote =	{Keywords: matching, dynamic algorithms, parallel algorithms, approximation algorithms}
}
Document
Scheduling (Dagstuhl Seminar 18101)

Authors: Magnús M. Halldórson, Nicole Megow, and Clifford Stein

Published in: Dagstuhl Reports, Volume 8, Issue 3 (2018)


Abstract
This report documents the program and outcomes of the Dagstuhl Seminar 18101 "Scheduling" in March 2018. The seminar brought together algorithmically oriented researchers from two communities with interests in resource management: (i) the scheduling community and (ii) the networking and distributed computing community. The primary objective of the seminar was to expose each community to the important problems and techniques from the other community, and to facilitate dialog and collaboration between researchers.

Cite as

Magnús M. Halldórson, Nicole Megow, and Clifford Stein. Scheduling (Dagstuhl Seminar 18101). In Dagstuhl Reports, Volume 8, Issue 3, pp. 1-20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@Article{halldorson_et_al:DagRep.8.3.1,
  author =	{Halld\'{o}rson, Magn\'{u}s M. and Megow, Nicole and Stein, Clifford},
  title =	{{Scheduling (Dagstuhl Seminar 18101)}},
  pages =	{1--20},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2018},
  volume =	{8},
  number =	{3},
  editor =	{Halld\'{o}rson, Magn\'{u}s M. and Megow, Nicole and Stein, Clifford},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.8.3.1},
  URN =		{urn:nbn:de:0030-drops-92942},
  doi =		{10.4230/DagRep.8.3.1},
  annote =	{Keywords: scheduling, optimization, approximation algorithms}
}
Document
Simultaneously Load Balancing for Every p-norm, With Reassignments

Authors: Aaron Bernstein, Tsvi Kopelowitz, Seth Pettie, Ely Porat, and Clifford Stein

Published in: LIPIcs, Volume 67, 8th Innovations in Theoretical Computer Science Conference (ITCS 2017)


Abstract
This paper investigates the task of load balancing where the objective is to minimize the p-norm of loads, for p ≥ 1, in both static and incremental settings. We consider two closely related load balancing problems. In the bipartite matching problem we are given a bipartite graph G = (C ∪ S, E) and the goal is to assign each client c ∈ C to a server s ∈ S so that the p-norm of assignment loads on S is minimized. In the graph orientation problem the goal is to orient (direct) the edges of a given undirected graph while minimizing the p-norm of the out-degrees. The graph orientation problem is a special case of the bipartite matching problem, but less complex, which leads to simpler algorithms. For the graph orientation problem we show that the celebrated Chiba-Nishizeki peeling algorithm provides a simple linear time load balancing scheme whose output is an orientation that is 2-competitive, in a p-norm sense, for all p ≥ 1. For the bipartite matching problem we first provide an offline algorithm that computes an optimal assignment. We then extend this solution to the online bipartite matching problem with reassignments, where vertices from C arrive in an online fashion together with their corresponding edges, and we are allowed to reassign an amortized O(1) vertices from C each time a new vertex arrives. In this online scenario we show how to maintain a single assignment that is 8-competitive, in a p-norm sense, for all p ≥ 1.
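
In the spirit of the peeling idea mentioned above (an illustration of peeling-based orientation, not the paper's 2-competitiveness analysis), the sketch below repeatedly removes a minimum-degree vertex and orients its remaining edges away from it, which bounds every out-degree by the degeneracy of the graph.

import heapq
from collections import defaultdict

def peeling_orientation(edges):
    """Orient edges by repeatedly peeling a minimum-degree vertex and pointing its edges outward."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    heap = [(len(nbrs), v) for v, nbrs in adj.items()]
    heapq.heapify(heap)
    removed, orientation = set(), []
    while heap:
        d, u = heapq.heappop(heap)
        if u in removed or d != len(adj[u]):
            continue                          # stale heap entry
        removed.add(u)
        for v in list(adj[u]):
            orientation.append((u, v))        # orient u -> v
            adj[u].discard(v)
            adj[v].discard(u)
            heapq.heappush(heap, (len(adj[v]), v))
    return orientation

# A triangle with a pendant edge: maximum out-degree 2 (the degeneracy).
print(peeling_orientation([(1, 2), (2, 3), (1, 3), (3, 4)]))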

Cite as

Aaron Bernstein, Tsvi Kopelowitz, Seth Pettie, Ely Porat, and Clifford Stein. Simultaneously Load Balancing for Every p-norm, With Reassignments. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 67, pp. 51:1-51:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{bernstein_et_al:LIPIcs.ITCS.2017.51,
  author =	{Bernstein, Aaron and Kopelowitz, Tsvi and Pettie, Seth and Porat, Ely and Stein, Clifford},
  title =	{{Simultaneously Load Balancing for Every p-norm, With Reassignments}},
  booktitle =	{8th Innovations in Theoretical Computer Science Conference (ITCS 2017)},
  pages =	{51:1--51:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-029-3},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{67},
  editor =	{Papadimitriou, Christos H.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2017.51},
  URN =		{urn:nbn:de:0030-drops-82009},
  doi =		{10.4230/LIPIcs.ITCS.2017.51},
  annote =	{Keywords: Online Matching, Graph Orientation, Minimizing the p-norm}
}
Document
Minimizing Maximum Flow Time on Related Machines via Dynamic Posted Pricing

Authors: Sungjin Im, Benjamin Moseley, Kirk Pruhs, and Clifford Stein

Published in: LIPIcs, Volume 87, 25th Annual European Symposium on Algorithms (ESA 2017)


Abstract
We consider a setting where selfish agents want to schedule jobs on related machines. The agent submitting a job picks a server that minimizes a linear combination of the server price and the resulting response time for that job on the selected server. The manager's task is to maintain server prices to (approximately) optimize the maximum response time, which is a measure of social good. We show that the existence of a pricing scheme with certain competitiveness is equivalent to the existence of a monotone immediate-dispatch algorithm. Our main result is a monotone immediate-dispatch algorithm that is O(1)-competitive with respect to the maximum response time.
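
The agents' selection rule described above is simple to state in code; the sketch below shows only that choice rule with an assumed linear trade-off, not the paper's monotone immediate-dispatch algorithm or its pricing policy.

def agent_choice(prices, response_times, tradeoff=1.0):
    """Pick the server minimizing price + tradeoff * response time (both dicts keyed by server)."""
    return min(prices, key=lambda s: prices[s] + tradeoff * response_times[s])

# The cheap-but-slow server wins here: 0.5 + 2.0 < 3.0 + 1.0.
print(agent_choice({"fast": 3.0, "slow": 0.5}, {"fast": 1.0, "slow": 2.0}))   # "slow"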

Cite as

Sungjin Im, Benjamin Moseley, Kirk Pruhs, and Clifford Stein. Minimizing Maximum Flow Time on Related Machines via Dynamic Posted Pricing. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 51:1-51:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{im_et_al:LIPIcs.ESA.2017.51,
  author =	{Im, Sungjin and Moseley, Benjamin and Pruhs, Kirk and Stein, Clifford},
  title =	{{Minimizing Maximum Flow Time on Related Machines via  Dynamic Posted Pricing}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{51:1--51:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.51},
  URN =		{urn:nbn:de:0030-drops-78287},
  doi =		{10.4230/LIPIcs.ESA.2017.51},
  annote =	{Keywords: Posted pricing scheme, online scheduling, related machines, maximum flow time, competitiveness analysis}
}
Document
Extending Search Phases in the Micali-Vazirani Algorithm

Authors: Michael Huang and Clifford Stein

Published in: LIPIcs, Volume 75, 16th International Symposium on Experimental Algorithms (SEA 2017)


Abstract
The Micali-Vazirani algorithm is an augmenting path algorithm that offers the best theoretical runtime of O(n^{0.5} m) for solving the maximum cardinality matching problem for non-bipartite graphs. This paper builds upon the algorithm by focusing on the bottleneck caused by its search phase structure and proposes a new implementation that improves efficiency by extending the search phases in order to find more augmenting paths. Experiments on different types of randomly generated and real world graphs demonstrate this new implementation's effectiveness and limitations.
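
For readers unfamiliar with augmenting-path algorithms, the core primitive is shown below in isolation (this is not the Micali-Vazirani phase structure): flipping matched and unmatched edges along an augmenting path increases the matching size by one.

def augment(matching, path):
    """Flip edge roles along an alternating path with unmatched endpoints.

    matching: set of frozenset({u, v}) edges; path: list of vertices along the path.
    """
    for a, b in zip(path, path[1:]):
        edge = frozenset((a, b))
        if edge in matching:
            matching.remove(edge)
        else:
            matching.add(edge)
    return matching

# Path a-b-c-d where only b-c is matched: augmenting yields the matching {a-b, c-d}.
print(augment({frozenset(("b", "c"))}, ["a", "b", "c", "d"]))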

Cite as

Michael Huang and Clifford Stein. Extending Search Phases in the Micali-Vazirani Algorithm. In 16th International Symposium on Experimental Algorithms (SEA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 75, pp. 10:1-10:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{huang_et_al:LIPIcs.SEA.2017.10,
  author =	{Huang, Michael and Stein, Clifford},
  title =	{{Extending Search Phases in the Micali-Vazirani Algorithm}},
  booktitle =	{16th International Symposium on Experimental Algorithms (SEA 2017)},
  pages =	{10:1--10:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-036-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{75},
  editor =	{Iliopoulos, Costas S. and Pissis, Solon P. and Puglisi, Simon J. and Raman, Rajeev},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2017.10},
  URN =		{urn:nbn:de:0030-drops-76141},
  doi =		{10.4230/LIPIcs.SEA.2017.10},
  annote =	{Keywords: matching, graph algorithm, experimental evaluation, non-bipartite}
}
Document
A Fast Distributed Stateless Algorithm for alpha-Fair Packing Problems

Authors: Jelena Marasevic, Clifford Stein, and Gil Zussman

Published in: LIPIcs, Volume 55, 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)


Abstract
We study weighted alpha-fair packing problems, that is, the problems of maximizing the objective functions (i) sum_j w_j*x_j^{1-alpha}/(1-alpha) when alpha > 0, alpha != 1 and (ii) sum_j w_j*ln(x_j) when alpha = 1, over linear constraints A*x <= b, x >= 0, where w_j are positive weights and A and b are non-negative. We consider the distributed computation model that was used for packing linear programs and network utility maximization problems. Under this model, we provide a distributed algorithm for general alpha that converges to an epsilon-approximate solution in time (number of distributed iterations) that has an inverse polynomial dependence on the approximation parameter epsilon and poly-logarithmic dependence on the problem size. This is the first distributed algorithm for weighted alpha-fair packing with poly-logarithmic convergence in the input size. The algorithm uses simple local update rules and is stateless (namely, it allows asynchronous updates, is self-stabilizing, and allows incremental and local adjustments). We also obtain a number of structural results that characterize alpha-fair allocations as the value of alpha is varied. These results deepen our understanding of fairness guarantees in alpha-fair packing allocations, and also provide insight into the behavior of alpha-fair allocations in the asymptotic cases alpha -> 0, alpha -> 1, and alpha -> infinity.
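
The objective itself is easy to evaluate; the sketch below computes the weighted alpha-fair objective exactly as defined above (feasibility of A*x <= b, x >= 0 is the caller's responsibility), with the alpha = 1 case handled separately.

import math

def alpha_fair_objective(x, w, alpha):
    """Weighted alpha-fair objective for allocation x with positive weights w."""
    if alpha == 1.0:
        return sum(wj * math.log(xj) for wj, xj in zip(w, x))
    return sum(wj * xj ** (1.0 - alpha) / (1.0 - alpha) for wj, xj in zip(w, x))

# alpha = 0 recovers weighted total throughput; large alpha approaches max-min fairness.
print(alpha_fair_objective([1.0, 2.0], [1.0, 1.0], alpha=0.0))   # 3.0
print(alpha_fair_objective([1.0, 2.0], [1.0, 1.0], alpha=1.0))   # ln 2 ≈ 0.693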

Cite as

Jelena Marasevic, Clifford Stein, and Gil Zussman. A Fast Distributed Stateless Algorithm for alpha-Fair Packing Problems. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 54:1-54:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{marasevic_et_al:LIPIcs.ICALP.2016.54,
  author =	{Marasevic, Jelena and Stein, Clifford and Zussman, Gil},
  title =	{{A Fast Distributed Stateless Algorithm for alpha-Fair Packing Problems}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{54:1--54:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.54},
  URN =		{urn:nbn:de:0030-drops-63344},
  doi =		{10.4230/LIPIcs.ICALP.2016.54},
  annote =	{Keywords: Fairness, distributed and stateless algorithms, resource allocation}
}
Document
Scheduling (Dagstuhl Seminar 16081)

Authors: Nikhil Bansal, Nicole Megow, and Clifford Stein

Published in: Dagstuhl Reports, Volume 6, Issue 2 (2016)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 16081 "Scheduling". The seminar was centered around recent new developments, discussion of open problems and exploring future research directions within the broader scheduling community.

Cite as

Nikhil Bansal, Nicole Megow, and Clifford Stein. Scheduling (Dagstuhl Seminar 16081). In Dagstuhl Reports, Volume 6, Issue 2, pp. 97-118, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@Article{bansal_et_al:DagRep.6.2.97,
  author =	{Bansal, Nikhil and Megow, Nicole and Stein, Clifford},
  title =	{{Scheduling (Dagstuhl Seminar 16081)}},
  pages =	{97--118},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2016},
  volume =	{6},
  number =	{2},
  editor =	{Bansal, Nikhil and Megow, Nicole and Stein, Clifford},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.6.2.97},
  URN =		{urn:nbn:de:0030-drops-58902},
  doi =		{10.4230/DagRep.6.2.97},
  annote =	{Keywords: approximation algorithms, scheduling, optimization}
}